Updates from: 09/07/2023 02:24:25
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Add Api Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/add-api-connector.md
See an example of a [validation-error response](#example-of-a-validation-error-r
## Before sending the token (preview)

> [!IMPORTANT]
-> API connectors used in this step are in preview. For more information about previews, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+> API connectors used in this step are in preview. For more information about previews, see [Product Terms for Online Services](https://www.microsoft.com/licensing/terms/product/ForOnlineServices/all).
An API connector at this step is invoked when a token is about to be issued during sign-ins and sign-ups. An API connector for this step can be used to enrich the token with claim values from external sources.
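As a rough illustration of that enrichment contract (a sketch, not the article's own sample): the endpoint replies with a continuation action plus the claim values to merge, where `loyaltyNumber` is a hypothetical claim name.

```powershell
# Sketch of the JSON continuation response an API endpoint could return at the
# "before sending the token" step. "loyaltyNumber" is a hypothetical claim; the
# version/action envelope follows the API connector response format.
$response = [ordered]@{
    version       = "1.0.0"
    action        = "Continue"
    loyaltyNumber = "1234"   # claim value merged into the issued token
}
$response | ConvertTo-Json
```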
active-directory-b2c Cookie Definitions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/cookie-definitions.md
The following table lists the cookies used in Azure AD B2C.
| `x-ms-cpim-ctx` | b2clogin.com, login.microsoftonline.com, branded domain | End of [browser session](session-behavior.md) | Context |
| `x-ms-cpim-rp` | b2clogin.com, login.microsoftonline.com, branded domain | End of [browser session](session-behavior.md) | Used for storing membership data for the resource provider tenant. |
| `x-ms-cpim-rc` | b2clogin.com, login.microsoftonline.com, branded domain | End of [browser session](session-behavior.md) | Used for storing the relay cookie. |
+| `x-ms-cpim-geo` | b2clogin.com, login.microsoftonline.com, branded domain | 1 hour | Used as a hint to determine the resource tenant's home geographic location. |
## Cross-Site request forgery token
active-directory-b2c Custom Policy Developer Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/custom-policy-developer-notes.md
Previously updated : 06/06/2023 Last updated : 09/06/2023
The following table summarizes the Security Assertion Markup Language (SAML) app
|Feature |User flow |Custom policy |Notes |
|---|:---:|:---:|---|
-|[API connectors](api-connectors-overview.md) | Preview | GA | |
-|[Secure with basic authentication](secure-rest-api.md#http-basic-authentication) | Preview | GA | |
-|[Secure with client certificate authentication](secure-rest-api.md#https-client-certificate-authentication) | Preview | GA | |
+|[After federating with an identity provider during sign-up](api-connectors-overview.md?pivots=b2c-user-flow#after-federating-with-an-identity-provider-during-sign-up) | GA | GA | |
+|[Before creating the user](api-connectors-overview.md?pivots=b2c-user-flow#before-creating-the-user) | GA | GA | |
+|[Before including application claims in token](api-connectors-overview.md?pivots=b2c-user-flow#before-sending-the-token-preview)| Preview | GA | |
+|[Secure with basic authentication](secure-rest-api.md#http-basic-authentication) | GA | GA | |
+|[Secure with client certificate authentication](secure-rest-api.md#https-client-certificate-authentication) | GA | GA | |
|[Secure with OAuth2 bearer authentication](secure-rest-api.md#oauth2-bearer-authentication) | NA | GA | |
|[Secure API key authentication](secure-rest-api.md#api-key-authentication) | NA | GA | |
active-directory-b2c Force Password Reset https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/force-password-reset.md
Last updated 06/26/2023 -+ zone_pivot_groups: b2c-policy-type
active-directory-b2c Manage Custom Policies Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/manage-custom-policies-powershell.md
-+ Last updated 02/14/2020
active-directory-b2c Openid Connect Technical Profile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/openid-connect-technical-profile.md
Previously updated : 03/04/2021 Last updated : 08/22/2023
The technical profile also returns claims that aren't returned by the identity p
| MarkAsFailureOnStatusCode5xx | No | Indicates whether a request to an external service should be marked as a failure if the Http status code is in the 5xx range. The default is `false`. |
| DiscoverMetadataByTokenIssuer | No | Indicates whether the OIDC metadata should be discovered by using the issuer in the JWT token. If you need to build the metadata endpoint URL based on the issuer, set this to `true`. |
| IncludeClaimResolvingInClaimsHandling | No | For input and output claims, specifies whether [claims resolution](claim-resolver-overview.md) is included in the technical profile. Possible values: `true`, or `false` (default). If you want to use a claims resolver in the technical profile, set this to `true`. |
-|token_endpoint_auth_method| No | Specifies how Azure AD B2C sends the authentication header to the token endpoint. Possible values: `client_secret_post` (default), and `client_secret_basic` (public preview), `private_key_jwt` (public preview). For more information, see [OpenID Connect client authentication section](https://openid.net/specs/openid-connect-core-1_0.html#ClientAuthentication). |
+|token_endpoint_auth_method| No | Specifies how Azure AD B2C sends the authentication header to the token endpoint. Possible values: `client_secret_post` (default), `client_secret_basic` (public preview), and `private_key_jwt`. For more information, see the [OpenID Connect client authentication section](https://openid.net/specs/openid-connect-core-1_0.html#ClientAuthentication). |
|token_signing_algorithm| No | Specifies the signing algorithm to use when `token_endpoint_auth_method` is set to `private_key_jwt`. Possible values: `RS256` (default) or `RS512`. |
| SingleLogoutEnabled | No | Indicates whether during sign-in the technical profile attempts to sign out from federated identity providers. For more information, see [Azure AD B2C session sign-out](./session-behavior.md#sign-out). Possible values: `true` (default), or `false`. |
|ReadBodyClaimsOnIdpRedirect| No | Set to `true` to read claims from the response body on identity provider redirect. This metadata is used with [Apple ID](identity-provider-apple-id.md), where claims return in the response payload. |
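For orientation, keys from this table land in the technical profile's `<Metadata>` element; a minimal sketch (element structure per the custom policy schema, the chosen values are illustrative only):

```powershell
# A sketch, not a complete technical profile: how metadata items from the table
# above are expressed as <Item Key="..."> entries. Loaded with [xml] to verify
# the fragment parses.
$metadata = [xml]@"
<Metadata>
  <Item Key="token_endpoint_auth_method">private_key_jwt</Item>
  <Item Key="token_signing_algorithm">RS256</Item>
  <Item Key="SingleLogoutEnabled">true</Item>
</Metadata>
"@
$metadata.SelectNodes('//Item') |
    ForEach-Object { "{0} = {1}" -f $_.GetAttribute('Key'), $_.InnerText }
```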
Examples:
- [Add Microsoft Account (MSA) as an identity provider using custom policies](identity-provider-microsoft-account.md)
- [Sign in by using Azure AD accounts](identity-provider-azure-ad-single-tenant.md)
- [Allow users to sign in to a multi-tenant Azure AD identity provider using custom policies](identity-provider-azure-ad-multi-tenant.md)
active-directory-b2c Page Layout https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/page-layout.md
Previously updated : 07/18/2022 Last updated : 08/23/2023
Azure AD B2C page layout uses the following versions of the [jQuery library](htt
## Self-asserted page (selfasserted)
+**2.1.26**
+
+- Replaced the `Keypress` event with `Key Down` and avoided showing an `Asterisk` for non-required fields in classic mode.
+
+**2.1.25**
+
+- Fixed a content security policy (CSP) violation and removed the additional request header X-Aspnetmvc-Version.
+
+- Introduced Captcha mechanism for Self-asserted and Unified SSP Flows (_Beta-version-Internal use only_).
+
+**2.1.24**
+
+- Fixed accessibility bugs.
+
+- Fixed an MFA-related issue and IE11 compatibility issues.
+
+**2.1.23**
+
+- Fixed accessibility bugs.
+
+- Reduced `min-width` value for UI viewport for default template.
+
+**2.1.22**
+
+- Fixed accessibility bugs.
+
+- Added logic to adopt the QR code image generated from the backend library.
+
+**2.1.21**
+
+- Added additional sanitization of script tags to avoid XSS attacks.
**2.1.20**

- Fixed an XSS issue on input from textbox.
+- Fixed Enter event trigger on MFA.
+- CSS changes to render page text and controls vertically for small screens.
**2.1.19**

-- Fixed accessibility bugs
-- Handle Undefined Error message for existing user sign up
-- Move Password Mismatch Error to Inline instead of Page Level
+- Fixed accessibility bugs.
+- Handled Undefined Error message for existing user sign up.
+- Moved Password mismatch error to Inline instead of page level.
- Accessibility changes related to High Contrast button display and anchor focus improvements

**2.1.18**
Azure AD B2C page layout uses the following versions of the [jQuery library](htt
- Enforce Validation Error Update on control change and enable continue on email verified
- Added additional field to error code to validation failure response

**2.1.16**

- Fixed "Claims for verification control have not been verified" bug while verifying code.
- Hide error message on validation succeeds and send code to verify
Azure AD B2C page layout uses the following versions of the [jQuery library](htt
**2.1.10**

- Corrected the tab index.
-- Fixing WCAG 2.1 accessibility and screen reader issues
+- Fixed WCAG 2.1 accessibility and screen reader issues
**2.1.9**
Azure AD B2C page layout uses the following versions of the [jQuery library](htt
> [!TIP]
> If you localize your page to support multiple locales or languages in a user flow, the [localization IDs](localization-string-ids.md) article provides the list of localization IDs that you can use for the page version you select.
+**2.1.14**
+
+- Replaced the `Keypress` event with `Key Down`.
+
+**2.1.13**
+
+- Fixed a content security policy (CSP) violation and removed the additional request header X-Aspnetmvc-Version.
+
+- Introduced Captcha mechanism for Self-asserted and Unified SSP Flows (_Beta-version-Internal use only_).
+
+**2.1.12**
+
+- Removed `ReplaceAll` function for IE11 compatibility.
+
+**2.1.11**
+
+- Fixed accessibility bugs.
+
+**2.1.10**
+
+- Added additional sanitization of script tags to avoid XSS attacks.
**2.1.9**

- Fixed accessibility bugs.
- Accessibility changes related to High Contrast button display and anchor focus improvements

**2.1.8**

- Added descriptive error message and fixed the forgotPassword link.
Azure AD B2C page layout uses the following versions of the [jQuery library](htt
## MFA page (multifactor)
+**1.2.12**
+
+- Replaced the `KeyPress` event with `KeyDown`.
+
+**1.2.11**
+
+- Removed `ReplaceAll` function for IE11 compatibility.
+
+**1.2.10**
+
+- Fixed accessibility bugs.
+
+**1.2.9**
+
+- Fixed `Enter` event trigger on MFA.
+
+- CSS changes to render page text and controls vertically for small screens.
+
+- Fixed Multifactor tab navigation bug.
+
+**1.2.8**
+
+- Passed the response status for MFA verification errors to the backend for further triage.
+
+**1.2.7**
+
+- Fixed an accessibility issue on the label for the retry code.
+
+- Fixed issue caused by incompatibility of default parameter on IE 11.
+
+- Set up `H1` heading and enabled it by default.
+
+- Updated HandlebarJS version to 4.7.7.
+
+**1.2.6**
+
+- Corrected the `autocomplete` value on the verification code field from `false` to `off`.
+
+- Fixed a few XSS encoding issues.
**1.2.5**

- Fixed a language encoding issue that was causing the request to fail.
Azure AD B2C page layout uses the following versions of the [jQuery library](htt
## Exception Page (globalexception)
+**1.2.5**
+
+- Removed `ReplaceAll` function for IE11 compatibility.
+
+**1.2.4**
+
+- Fixed accessibility bugs.
+
+**1.2.3**
+
+- Updated HandlebarJS version to 4.7.7.
+
+**1.2.2**
+
+- Set up `H1` heading and enabled it by default.
**1.2.1**

- Updated jQuery version to 3.5.1.
- Updated HandlebarJS version to 4.7.6.
Azure AD B2C page layout uses the following versions of the [jQuery library](htt
## Other pages (ProviderSelection, ClaimsConsent, UnifiedSSD)
+**1.2.4**
+
+- Removed `ReplaceAll` function for IE11 compatibility.
+
+**1.2.3**
+
+- Fixed accessibility bugs.
+
+**1.2.2**
+
+- Updated HandlebarJS version to 4.7.7.
**1.2.1**

- Updated jQuery version to 3.5.1.
- Updated HandlebarJS version to 4.7.6.
Azure AD B2C page layout uses the following versions of the [jQuery library](htt
## Next steps

For details on how to customize the user interface of your applications in custom policies, see [Customize the user interface of your application using a custom policy](customize-ui-with-html.md).
active-directory-b2c Secure Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/secure-rest-api.md
The following XML snippet is an example of a RESTful technical profile configure
## OAuth2 bearer authentication

Bearer token authentication is defined in [OAuth2.0 Authorization Framework: Bearer Token Usage (RFC 6750)](https://www.rfc-editor.org/rfc/rfc6750.txt). In bearer token authentication, Azure AD B2C sends an HTTP request with a token in the authorization header.
A bearer token is an opaque string. It can be a JWT access token or any string t
- **Bearer token**. To be able to send the bearer token in the Restful technical profile, your policy needs to first acquire the bearer token and then use it in the RESTful technical profile.
- **Static bearer token**. Use this approach when your REST API issues a long-term access token. To use a static bearer token, create a policy key and make a reference from the RESTful technical profile to your policy key.

## Using OAuth2 Bearer

The following steps demonstrate how to use client credentials to obtain a bearer token and pass it into the Authorization header of the REST API calls.
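Before wiring the flow into a policy, it can help to sanity-check the token request itself; a PowerShell sketch of the client credentials exchange, with placeholder tenant, client, and API values:

```powershell
# Sketch: acquire a token with the client credentials grant and pass it as a
# bearer header, the same way the RESTful technical profile does. All IDs,
# secrets, and URLs below are placeholders.
$tokenResponse = Invoke-RestMethod -Method Post `
    -Uri "https://login.microsoftonline.com/<tenant-id>/oauth2/v2.0/token" `
    -Body @{
        client_id     = "<client-id>"
        client_secret = "<client-secret>"
        scope         = "api://<your-api>/.default"
        grant_type    = "client_credentials"
    }

Invoke-RestMethod -Uri "https://your-api.example.com/api/values" `
    -Headers @{ Authorization = "Bearer $($tokenResponse.access_token)" }
```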
Add the validation technical profile reference to the sign up technical profile,
For example:

```XML
<ValidationTechnicalProfiles>
  ....
  <ValidationTechnicalProfile ReferenceId="REST-AcquireAccessToken" />
  ....
</ValidationTechnicalProfiles>
```
::: zone-end
To configure a REST API technical profile with API key authentication, create th
1. For **Key usage**, select **Encryption**.
1. Select **Create**.

### Configure your REST API technical profile to use API key authentication

After creating the necessary key, configure your REST API technical profile metadata to reference the credentials.
The following XML snippet is an example of a RESTful technical profile configure
::: zone pivot="b2c-custom-policy"

- Learn more about the [Restful technical profile](restful-technical-profile.md) element in the custom policy reference.

::: zone-end
active-directory-b2c Tenant Management Directory Quota https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/tenant-management-directory-quota.md
The response from the API call looks similar to the following json:
{ "directorySizeQuota": { "used": 211802,
- "total": 300000
+ "total": 50000000
  }
}
]
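A quick way to pull the same numbers, assuming the Graph `organization` resource in your tenant exposes `directorySizeQuota` as in the response above (sketch using the Microsoft Graph PowerShell SDK):

```powershell
# Sketch: read the directory quota with Microsoft Graph PowerShell. Assumes
# Connect-MgGraph has been run with Organization.Read.All consented.
$org = Invoke-MgGraphRequest -Method GET `
    -Uri 'https://graph.microsoft.com/v1.0/organization?$select=directorySizeQuota'
$quota = $org.value[0].directorySizeQuota
'Used {0} of {1} ({2:P1})' -f $quota.used, $quota.total, ($quota.used / $quota.total)
```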
If your tenant usage is higher than 80%, you can remove inactive users or reques
## Request increase directory quota size
You can request to increase the quota size by [contacting support](find-help-open-support-ticket.md).
active-directory-b2c Whats New Docs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/whats-new-docs.md
Title: "What's new in Azure Active Directory business-to-customer (B2C)" description: "New and updated documentation for the Azure Active Directory business-to-customer (B2C)." Previously updated : 08/01/2023 Last updated : 09/01/2023
Welcome to what's new in Azure Active Directory B2C documentation. This article lists new docs that have been added and those that have had significant updates in the last three months. To learn what's new with the B2C service, see [What's new in Azure Active Directory](../active-directory/fundamentals/whats-new.md) and [Azure AD B2C developer release notes](custom-policy-developer-notes.md)
+## August 2023
+
+### Updated articles
+
+- [Page layout versions](page-layout.md) - Editorial updates
+- [Secure your API used an API connector in Azure AD B2C](secure-rest-api.md) - OAuth bearer authentication updated to GA
## June 2023

### New articles
Welcome to what's new in Azure Active Directory B2C documentation. This article
- [Build a global identity solution with funnel-based approach](azure-ad-b2c-global-identity-funnel-based-design.md) - [Use the Azure portal to create and delete consumer users in Azure AD B2C](manage-users-portal.md)
-## April 2023
-
-### Updated articles
-
-- [Configure Transmit Security with Azure Active Directory B2C for passwordless authentication](partner-bindid.md) - Update partner-bindid.md
-- [Tutorial: Enable secure hybrid access for applications with Azure Active Directory B2C and F5 BIG-IP](partner-f5.md) - Update partner-f5.md
active-directory-domain-services Alert Service Principal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/alert-service-principal.md
ms.assetid: f168870c-b43a-4dd6-a13f-5cfadc5edf2c
+ Last updated 01/29/2023 - # Known issues: Service principal alerts in Azure Active Directory Domain Services
active-directory-domain-services Create Forest Trust Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/create-forest-trust-powershell.md
Last updated 04/03/2023 --+ #Customer intent: As an identity administrator, I want to create an Azure AD Domain Services forest and one-way outbound trust from an Azure Active Directory Domain Services forest to an on-premises Active Directory Domain Services forest using Azure PowerShell to provide authentication and resource access between forests.- # Create an Azure Active Directory Domain Services forest trust to an on-premises domain using Azure PowerShell
For more conceptual information about forest types in Azure AD DS, see [How do f
[Install-Script]: /powershell/module/powershellget/install-script

<!-- EXTERNAL LINKS -->
[powershell-gallery]: https://www.powershellgallery.com/
active-directory-domain-services Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/policy-reference.md
Title: Built-in policy definitions for Azure Active Directory Domain Services description: Lists Azure Policy built-in policy definitions for Azure Active Directory Domain Services. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2023 Last updated : 08/30/2023
active-directory-domain-services Powershell Create Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/powershell-create-instance.md
Last updated 01/29/2023 --+ # Enable Azure Active Directory Domain Services using PowerShell
active-directory-domain-services Powershell Scoped Synchronization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/powershell-scoped-synchronization.md
Previously updated : 01/29/2023 Last updated : 09/06/2023 -+ # Configure scoped synchronization from Azure AD to Azure Active Directory Domain Services using Azure AD PowerShell
foreach ($groupName in $groupsToAdd)
Write-Output "****************************************************************************`n" Write-Output "`n****************************************************************************"
-$currentAssignments = Get-AzureADServiceAppRoleAssignment -ObjectId $sp.ObjectId
+$currentAssignments = Get-AzureADServiceAppRoleAssignment -ObjectId $sp.ObjectId -All $true
Write-Output "Total current group-assignments: $($currentAssignments.Count), SP-ObjectId: $($sp.ObjectId)" $currAssignedObjectIds = New-Object 'System.Collections.Generic.HashSet[string]'
active-directory-domain-services Secure Your Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/secure-your-domain.md
Last updated 01/29/2023 -+ # Harden an Azure Active Directory Domain Services managed domain
active-directory-domain-services Synchronization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/synchronization.md
ms.assetid: 57cbf436-fc1d-4bab-b991-7d25b6e987ef
+ Last updated 04/03/2023 - # How objects and credentials are synchronized in an Azure Active Directory Domain Services managed domain
active-directory-domain-services Template Create Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/template-create-instance.md
-+ Last updated 06/01/2023
active-directory-domain-services Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/troubleshoot.md
ms.assetid: 4bc8c604-f57c-4f28-9dac-8b9164a0cf0b
+ Last updated 01/29/2023 - # Common errors and troubleshooting steps for Azure Active Directory Domain Services
active-directory-domain-services Tutorial Create Instance Advanced https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/tutorial-create-instance-advanced.md
+ Last updated 04/03/2023 - #Customer intent: As an identity administrator, I want to create an Azure Active Directory Domain Services managed domain and define advanced configuration options so that I can synchronize identity information with my Azure Active Directory tenant and provide Domain Services connectivity to virtual machines and applications in Azure.
To see this managed domain in action, create and join a virtual machine to the d
[availability-zones]: ../reliability/availability-zones-overview.md
[concepts-sku]: administration-concepts.md#azure-ad-ds-skus
<!-- EXTERNAL LINKS -->
active-directory-domain-services Tutorial Create Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/tutorial-create-instance.md
+ Last updated 08/01/2023 - #Customer intent: As an identity administrator, I want to create an Azure Active Directory Domain Services managed domain so that I can synchronize identity information with my Azure Active Directory tenant and provide Domain Services connectivity to virtual machines and applications in Azure.
Before you domain-join VMs and deploy applications that use the managed domain,
[concepts-sku]: administration-concepts.md#azure-ad-ds-skus

<!-- EXTERNAL LINKS -->
[naming-prefix]: /windows-server/identity/ad-ds/plan/selecting-the-forest-root-domain#selecting-a-prefix
active-directory Customize Application Attributes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/customize-application-attributes.md
Applications and systems that support customization of the attribute list includ
> Editing the list of supported attributes is only recommended for administrators who have customized the schema of their applications and systems, and have first-hand knowledge of how their custom attributes have been defined or if a source attribute isn't automatically displayed in the Azure portal UI. This sometimes requires familiarity with the APIs and developer tools provided by an application or system. The ability to edit the list of supported attributes is locked down by default, but customers can enable the capability by navigating to the following URL: https://portal.azure.com/?Microsoft_AAD_Connect_Provisioning_forceSchemaEditorEnabled=true . You can then navigate to your application to view the [attribute list](#editing-the-list-of-supported-attributes). > [!NOTE]
-> When a directory extension attribute in Azure AD doesn't show up automatically in your attribute mapping drop-down, you can manually add it to the "Azure AD attribute list". When manually adding Azure AD directory extension attributes to your provisioning app, note that directory extension attribute names are case-sensitive. For example: If you have a directory extension attribute named `extension_53c9e2c0exxxxxxxxxxxxxxxx_acmeCostCenter`, make sure you enter it in the same format as defined in the directory.
+> When a directory extension attribute in Azure AD doesn't show up automatically in your attribute mapping drop-down, you can manually add it to the "Azure AD attribute list". When manually adding Azure AD directory extension attributes to your provisioning app, note that directory extension attribute names are case-sensitive. For example: If you have a directory extension attribute named `extension_53c9e2c0exxxxxxxxxxxxxxxx_acmeCostCenter`, make sure you enter it in the same format as defined in the directory. Provisioning multi-valued directory extension attributes is not supported.
When you're editing the list of supported attributes, the following properties are provided:
active-directory Inbound Provisioning Api Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/inbound-provisioning-api-concepts.md
This document provides a conceptual overview of the Azure AD API-driven inbound user provisioning. > [!IMPORTANT]
-> API-driven inbound provisioning is currently in public preview and is governed by [Preview Terms of Use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+> API-driven inbound provisioning is currently in public preview. For more information about previews, see [Universal License Terms For Online Services](https://www.microsoft.com/licensing/terms/product/ForOnlineServices/all).
## Introduction
active-directory Inbound Provisioning Api Configure App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/inbound-provisioning-api-configure-app.md
This tutorial describes how to configure [API-driven inbound user provisioning](inbound-provisioning-api-concepts.md). > [!IMPORTANT]
-> API-driven inbound provisioning is currently in public preview and is governed by [Preview Terms of Use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+> API-driven inbound provisioning is currently in public preview. For more information about previews, see [Universal License Terms For Online Services](https://www.microsoft.com/licensing/terms/product/ForOnlineServices/all).
This feature is available only when you configure the following Enterprise Gallery apps:

* API-driven inbound user provisioning to Azure AD
If you're configuring inbound user provisioning to on-premises Active Directory,
## Create your API-driven provisioning app
-1. Log in to the [Microsoft Entra portal](<https://entra.microsoft.com>).
+1. Log in to the [Microsoft Entra admin center](<https://entra.microsoft.com>).
2. Browse to **Azure Active Directory -> Applications -> Enterprise applications**.
3. Click on **New application** to create a new provisioning application.

   [![Screenshot of Entra Admin Center.](media/inbound-provisioning-api-configure-app/provisioning-entra-admin-center.png)](media/inbound-provisioning-api-configure-app/provisioning-entra-admin-center.png#lightbox)
active-directory Inbound Provisioning Api Curl Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/inbound-provisioning-api-curl-tutorial.md
## Verify processing of the bulk request payload
-1. Log in to [Microsoft Entra portal](https://entra.microsoft.com) with *global administrator* or *application administrator* login credentials.
+1. Log in to [Microsoft Entra admin center](https://entra.microsoft.com) with *global administrator* or *application administrator* login credentials.
1. Browse to **Azure Active Directory -> Applications -> Enterprise applications**.
1. Under all applications, use the search filter text box to find and open your API-driven provisioning application.
1. Open the Provisioning blade. The landing page displays the status of the last run.
active-directory Inbound Provisioning Api Custom Attributes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/inbound-provisioning-api-custom-attributes.md
You have configured an API-driven provisioning app. Your provisioning app is succ
In this step, we'll add the two attributes "HireDate" and "JobCode" that are not part of the standard SCIM schema to the provisioning app and use them in the provisioning data flow.
-1. Log in to Microsoft Entra portal with application administrator role.
+1. Log in to Microsoft Entra admin center with application administrator role.
1. Go to **Enterprise applications** and open your API-driven provisioning app.
1. Open the **Provisioning** blade.
1. Click on the **Edit Provisioning** button.
In this step, we'll add the two attributes "HireDate" and "JobCode" that are not
1. **Save** your changes.

> [!NOTE]
-> If you'd like to add only a few additional attributes to the provisioning app, use Microsoft Entra Portal to extend the schema. If you'd like to add more custom attributes (let's say 20+ attributes), then we recommend using the [`UpdateSchema` mode of the CSV2SCIM PowerShell script](inbound-provisioning-api-powershell.md#extending-provisioning-job-schema) which automates the above manual process.
+> If you'd like to add only a few additional attributes to the provisioning app, use Microsoft Entra admin center to extend the schema. If you'd like to add more custom attributes (let's say 20+ attributes), then we recommend using the [`UpdateSchema` mode of the CSV2SCIM PowerShell script](inbound-provisioning-api-powershell.md#extending-provisioning-job-schema) which automates the above manual process.
## Step 2 - Map the custom attributes
active-directory Inbound Provisioning Api Grant Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/inbound-provisioning-api-grant-access.md
Depending on how your API client authenticates with Azure AD, you can select bet
## Configure a service principal

This configuration registers an app in Azure AD that represents the external API client and grants it permission to invoke the inbound provisioning API. The service principal client id and client secret can be used in the OAuth client credentials grant flow.
-1. Log in to Microsoft Entra portal (https://entra.microsoft.com) with global administrator or application administrator login credentials.
+1. Log in to Microsoft Entra admin center (https://entra.microsoft.com) with global administrator or application administrator login credentials.
1. Browse to **Azure Active Directory** -> **Applications** -> **App registrations**.
1. Click on the option **New registration**.
1. Provide an app name, select the default options, and click on **Register**.
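Once registered and granted the provisioning permission, the client can exchange its secret for a token and invoke the inbound provisioning endpoint. A sketch with placeholder IDs (the `bulkUpload` URL follows the API-driven provisioning pattern, and the payload file name is hypothetical):

```powershell
# Sketch: OAuth client credentials grant with the service principal registered
# above, then a call to the bulk upload endpoint. IDs, secret, and the payload
# file are placeholders.
$token = (Invoke-RestMethod -Method Post `
    -Uri "https://login.microsoftonline.com/<tenant-id>/oauth2/v2.0/token" `
    -Body @{
        client_id     = "<client-id>"
        client_secret = "<client-secret>"
        scope         = "https://graph.microsoft.com/.default"
        grant_type    = "client_credentials"
    }).access_token

Invoke-RestMethod -Method Post `
    -Uri "https://graph.microsoft.com/v1.0/servicePrincipals/<sp-id>/synchronization/jobs/<job-id>/bulkUpload" `
    -Headers @{ Authorization = "Bearer $token" } `
    -ContentType "application/scim+json" `
    -InFile ".\bulk-request.json"
```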
This section describes how you can assign the necessary permissions to a managed
## Next steps

- [Quick start using cURL](inbound-provisioning-api-curl-tutorial.md)
- [Quick start using Postman](inbound-provisioning-api-postman.md)
-- [Quick start using Postman](inbound-provisioning-api-graph-explorer.md)
+- [Quick start using Graph Explorer](inbound-provisioning-api-graph-explorer.md)
- [Frequently asked questions about API-driven inbound provisioning](inbound-provisioning-api-faqs.md)
active-directory Inbound Provisioning Api Graph Explorer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/inbound-provisioning-api-graph-explorer.md
This tutorial describes how you can quickly test [API-driven inbound provisionin
## Verify processing of bulk request payload
-You can verify the processing either from the Microsoft Entra portal or using Graph Explorer.
+You can verify the processing either from the Microsoft Entra admin center or using Graph Explorer.
-### Verify processing from Microsoft Entra portal
-1. Log in to [Microsoft Entra portal](https://entra.microsoft.com) with *global administrator* or *application administrator* login credentials.
+### Verify processing from Microsoft Entra admin center
+1. Log in to [Microsoft Entra admin center](https://entra.microsoft.com) with *global administrator* or *application administrator* login credentials.
1. Browse to **Azure Active Directory -> Applications -> Enterprise applications**.
1. Under all applications, use the search filter text box to find and open your API-driven provisioning application.
1. Open the Provisioning blade. The landing page displays the status of the last run.
active-directory Inbound Provisioning Api Postman https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/inbound-provisioning-api-postman.md
In this step, you'll configure the Postman app and invoke the API using the conf
If the API invocation is successful, you see the message `202 Accepted.` Under Headers, the **Location** attribute points to the provisioning logs API endpoint.

## Verify processing of bulk request payload
-You can verify the processing either from the Microsoft Entra portal or using Postman.
+You can verify the processing either from the Microsoft Entra admin center or using Postman.
-### Verify processing from Microsoft Entra portal
-1. Log in to [Microsoft Entra portal](https://entra.microsoft.com) with *global administrator* or *application administrator* login credentials.
+### Verify processing from Microsoft Entra admin center
+1. Log in to [Microsoft Entra admin center](https://entra.microsoft.com) with *global administrator* or *application administrator* login credentials.
1. Browse to **Azure Active Directory -> Applications -> Enterprise applications**.
1. Under all applications, use the search filter text box to find and open your API-driven provisioning application.
1. Open the Provisioning blade. The landing page displays the status of the last run.
active-directory Inbound Provisioning Api Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/inbound-provisioning-api-powershell.md
To illustrate the procedure, let's use the CSV file `Samples/csv-with-2-records.
This section explains how to send the generated bulk request payload to your inbound provisioning API endpoint.
-1. Log in to your Entra portal as *Application Administrator* or *Global Administrator*.
+1. Log in to your Microsoft Entra admin center as *Application Administrator* or *Global Administrator*.
1. Copy the `ServicePrincipalId` associated with your provisioning app from **Provisioning App** > **Properties** > **Object ID**.

   :::image type="content" border="true" source="./media/inbound-provisioning-api-powershell/object-id.png" alt-text="Screenshot of the Object ID." lightbox="./media/inbound-provisioning-api-powershell/object-id.png":::
This section explains how to send the generated bulk request payload to your inb
$ThumbPrint = $ClientCertificate.ThumbPrint
```
The generated certificate is stored in **Current User\Personal\Certificates**. You can view it using the **Control Panel** -> **Manage user certificates** option.
-1. To associate this certificate with a valid service principal, log in to your Entra portal as *Application Administrator*.
+1. To associate this certificate with a valid service principal, log in to your Microsoft Entra admin center as *Application Administrator*.
1. Open [the service principal you configured](inbound-provisioning-api-grant-access.md#configure-a-service-principal) under **App Registrations**.
1. Copy the **Object ID** from the **Overview** blade. Use the value to replace the string `<AppObjectId>`. Copy the **Application (client) Id**. We will use it later and it is referenced as `<AppClientId>`.
1. Run the following command to upload your certificate to the registered service principal.
PS > CSV2SCIM.ps1 -Path <path-to-csv-file>
> [!NOTE]
> The `AttributeMapping` and `ValidateAttributeMapping` command-line parameters refer to the mapping of CSV column attributes to the standard SCIM schema elements.
-It doesn't refer to the attribute mappings that you perform in the Entra portal provisioning app between source SCIM schema elements and target Azure AD/on-premises AD attributes.
+It doesn't refer to the attribute mappings that you perform in the Microsoft Entra admin center provisioning app between source SCIM schema elements and target Azure AD/on-premises AD attributes.
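To make the distinction concrete, a hedged example of the script-side mapping, using hypothetical CSV column names (`WorkerID`, `LastName`, and so on) against the sample file named earlier:

```powershell
# Sketch of an attribute-mapping table for CSV2SCIM.ps1: keys are SCIM schema
# elements, values are hypothetical column names from your CSV file.
$AttributeMapping = @{
    externalId = 'WorkerID'
    userName   = 'UserID'
    name       = @{
        familyName = 'LastName'
        givenName  = 'FirstName'
    }
}

.\CSV2SCIM.ps1 -Path '.\Samples\csv-with-2-records.csv' `
    -AttributeMapping $AttributeMapping -ValidateAttributeMapping
```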
| Parameter | Description | Processing remarks |
|-----------|-------------|--------------------|
active-directory On Premises Sap Connector Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/on-premises-sap-connector-configure.md
Title: Azure AD Provisioning to SAP ERP Central Component (SAP ECC) 7.0
-description: This document describes how to configure Azure AD to provision users into SAP ECC 7.
+ Title: Azure AD Provisioning into SAP ERP Central Component (SAP ECC, formerly SAP R/3) with NetWeaver AS ABAP 7.0 or later.
+description: This document describes how to configure Azure AD to provision users into SAP ERP Central Component (SAP ECC, formerly SAP R/3) with NetWeaver AS ABAP 7.0 or later.
Previously updated : 06/30/2023 Last updated : 08/25/2023
-# Configuring Azure AD to provision users into SAP ECC 7.0
-The following documentation provides configuration and tutorial information demonstrating how to provision users from Azure AD into SAP ERP Central Component (SAP ECC) 7.0. If you are using other versions such as SAP R/3, you can still use the guides provided in the [download center](https://www.microsoft.com/download/details.aspx?id=51495) as a reference to build your own template and configure provisioning.
+# Configuring Azure AD to provision users into SAP ECC with NetWeaver AS ABAP 7.0 or later
+The following documentation provides configuration and tutorial information demonstrating how to provision users from Azure AD into SAP ERP Central Component (SAP ECC, formerly SAP R/3) with NetWeaver 7.0 or later. If you are using other versions such as SAP R/3, you can still use the guides provided in the [download center](https://www.microsoft.com/download/details.aspx?id=51495) as a reference to build your own template and configure provisioning.
[!INCLUDE [app-provisioning-sap.md](../../../includes/app-provisioning-sap.md)]
active-directory User Provisioning Sync Attributes For Mapping https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/user-provisioning-sync-attributes-for-mapping.md
+ Last updated 10/20/2022
Next, if one or more of the users that will need access to the application do no
The following sections outline how to create extension attributes for a tenant with cloud only users, and for a tenant with Active Directory users.

## Create an extension attribute in a tenant with cloud only users
-You can use Microsoft Graph and PowerShell to extend the user schema for users in Azure AD. This is necessary if you do not have any users who need that attribute and originate in on-premises Active Directory. (If you do have Active Directory, then continue reading below in the section on how to [use the Azure AD Connect directory extension feature to synchronize the attribute to Azure AD](#create-an-extension-attribute-using-azure-ad-connect).)
+You can use Microsoft Graph and PowerShell to extend the user schema for users in Azure AD. This is necessary if you have any users who need that attribute and do not originate in on-premises Active Directory. (If you do have Active Directory, then continue reading below in the section on how to [use the Azure AD Connect directory extension feature to synchronize the attribute to Azure AD](#create-an-extension-attribute-using-azure-ad-connect).)
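A sketch of the extension-creation call with the Microsoft Graph PowerShell SDK, assuming an app registration you own and a placeholder attribute name; the resulting attribute takes the `extension_<appClientId>_<name>` form shown below:

```powershell
# Sketch: create a directory extension attribute on an owned application.
# The application object ID and attribute name are placeholders.
Import-Module Microsoft.Graph.Applications
Connect-MgGraph -Scopes "Application.ReadWrite.All"

New-MgApplicationExtensionProperty -ApplicationId "<app-object-id>" `
    -Name "extensionName" -DataType "String" -TargetObjects @("User")
```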
Once schema extensions are created, these extension attributes are automatically discovered when you next visit the provisioning page in the Azure portal, in most cases.
Content-type: application/json
"extension_inputAppId_extensionName": "extensionValue" } ```
-Finally, verify the attribute for the user. To learn more, see [Get a user](/graph/api/user-get).
+Finally, verify the attribute for the user. To learn more, see [Get a user](/graph/api/user-get). Note that Graph v1.0 doesn't return a user's directory extension attributes by default; the attributes must be specified in the request as properties to return.
```json
GET https://graph.microsoft.com/v1.0/users/{id}?$select=displayName,extension_inputAppId_extensionName
```
active-directory User Provisioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/user-provisioning.md
Previously updated : 03/14/2023 Last updated : 08/14/2023
In Azure Active Directory (Azure AD), the term *app provisioning* refers to auto
Azure AD application provisioning refers to automatically creating user identities and roles in the applications that users need access to. In addition to creating user identities, automatic provisioning includes the maintenance and removal of user identities as status or roles change. Common scenarios include provisioning an Azure AD user into SaaS applications like [Dropbox](../../active-directory/saas-apps/dropboxforbusiness-provisioning-tutorial.md), [Salesforce](../../active-directory/saas-apps/salesforce-provisioning-tutorial.md), [ServiceNow](../../active-directory/saas-apps/servicenow-provisioning-tutorial.md), and many more.
-Azure AD also supports provisioning users into applications hosted on-premises or in a virtual machine, without having to open up any firewalls. Your application must support [SCIM](https://aka.ms/scimoverview). Or, you must build a SCIM gateway to connect to your legacy application. If so, you can use the Azure AD Provisioning agent to [directly connect](./on-premises-scim-provisioning.md) with your application and automate provisioning and deprovisioning. If you have legacy applications that don't support SCIM and rely on an [LDAP](./on-premises-ldap-connector-configure.md) user store or a [SQL](./tutorial-ecma-sql-connector.md) database, Azure AD can support these applications as well.
-
-App provisioning lets you:
+Azure AD also supports provisioning users into applications hosted on-premises or in a virtual machine, without having to open up any firewalls. The following table maps protocols to the supported connectors.
+
+|Protocol |Connector|
+|--|--|
+| SCIM | [SCIM - SaaS](use-scim-to-provision-users-and-groups.md) <br />[SCIM - On-prem / Private network](./on-premises-scim-provisioning.md) |
+| LDAP | [LDAP](./on-premises-ldap-connector-configure.md)|
+| SQL | [SQL](./tutorial-ecma-sql-connector.md) |
+| REST | [Web Services](./on-premises-web-services-connector.md)|
+| SOAP | [Web Services](./on-premises-web-services-connector.md)|
+| Flat-file| [PowerShell](./on-premises-powershell-connector.md) |
+| Custom | [Custom ECMA connectors](./on-premises-custom-connector.md) <br /> [Connectors and gateways built by partners](./partner-driven-integrations.md)|
- **Automate provisioning**: Automatically create new accounts in the right systems for new people when they join your team or organization.
- **Automate deprovisioning**: Automatically deactivate accounts in the right systems when people leave the team or organization.
active-directory Application Proxy Configure Cookie Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-configure-cookie-settings.md
+ Last updated 11/17/2022
active-directory Application Proxy Configure Custom Home Page https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-configure-custom-home-page.md
+ Last updated 11/17/2022
active-directory Application Proxy Ping Access Publishing Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-ping-access-publishing-guide.md
Azure Active Directory (Azure AD) Application Proxy has partnered with PingAcces
With PingAccess for Azure AD, you can give users access and single sign-on (SSO) to applications that use headers for authentication. Application Proxy treats these applications like any other, using Azure AD to authenticate access and then passing traffic through the connector service. PingAccess sits in front of the applications and translates the access token from Azure AD into a header. The application then receives the authentication in the format it can read.
-Your users wonΓÇÖt notice anything different when they sign in to use your corporate applications. They can still work from anywhere on any device. The Application Proxy connectors direct remote traffic to all apps without regard to their authentication type, so theyΓÇÖll still balance loads automatically.
+Your users won't notice anything different when they sign in to use your corporate applications. They can still work from anywhere on any device. The Application Proxy connectors direct remote traffic to all apps without regard to their authentication type, so they'll still balance loads automatically.
## How do I get access?
For more information, see [Azure Active Directory editions](../fundamentals/what
## Publish your application in Azure
-This article is for people to publish an application with this scenario for the first time. Besides detailing the publishing steps, it guides you in getting started with both Application Proxy and PingAccess. If youΓÇÖve already configured both services but want a refresher on the publishing steps, skip to the [Add your application to Azure AD with Application Proxy](#add-your-application-to-azure-ad-with-application-proxy) section.
+This article is for people to publish an application with this scenario for the first time. Besides detailing the publishing steps, it guides you in getting started with both Application Proxy and PingAccess. If you've already configured both services but want a refresher on the publishing steps, skip to the [Add your application to Azure AD with Application Proxy](#add-your-application-to-azure-ad-with-application-proxy) section.
> [!NOTE] > Since this scenario is a partnership between Azure AD and PingAccess, some of the instructions exist on the Ping Identity site.
To publish your own on-premises application:
> [!NOTE] > For a more detailed walkthrough of this step, see [Add an on-premises app to Azure AD](../app-proxy/application-proxy-add-on-premises-application.md#add-an-on-premises-app-to-azure-ad).
-   1. **Internal URL**: Normally you provide the URL that takes you to the appΓÇÖs sign-in page when youΓÇÖre on the corporate network. For this scenario, the connector needs to treat the PingAccess proxy as the front page of the application. Use this format: `https://<host name of your PingAccess server>:<port>`. The port is 3000 by default, but you can configure it in PingAccess.
+ 1. **Internal URL**: Normally you provide the URL that takes you to the app's sign-in page when you're on the corporate network. For this scenario, the connector needs to treat the PingAccess proxy as the front page of the application. Use this format: `https://<host name of your PingAccess server>:<port>`. The port is 3000 by default, but you can configure it in PingAccess.
> [!WARNING] > For this type of single sign-on, the internal URL must use `https` and can't use `http`. Also, there is a constraint when configuring an application that no two apps should have the same internal URL as this allows App Proxy to maintain distinction between applications.
To publish your own on-premises application:
1. **Translate URL in Headers**: Choose **No**.

   > [!NOTE]
-   > If this is your first application, use port 3000 to start and come back to update this setting if you change your PingAccess configuration. For subsequent applications, the port will need to match the Listener youΓÇÖve configured in PingAccess. Learn more about [listeners in PingAccess](https://docs.pingidentity.com/access/sources/dita/topic?category=pingaccess&Releasestatus_ce=Current&resourceid=pa_assigning_key_pairs_to_https_listeners).
+ > If this is your first application, use port 3000 to start and come back to update this setting if you change your PingAccess configuration. For subsequent applications, the port will need to match the Listener you've configured in PingAccess. Learn more about [listeners in PingAccess](https://docs.pingidentity.com/access/sources/dita/topic?category=pingaccess&Releasestatus_ce=Current&resourceid=pa_assigning_key_pairs_to_https_listeners).
1. Select **Add**. The overview page for the new application appears.
In addition to the external URL, an authorize endpoint of Azure Active Directory
Finally, set up your on-premises application so that users have read access and other applications have read/write access:
-1. From the **App registrations** sidebar for your application, select **API permissions** > **Add a permission** > **Microsoft APIs** > **Microsoft Graph**. The **Request API permissions** page for **Microsoft Graph** appears, which contains the APIs for Windows Azure Active Directory.
+1. From the **App registrations** sidebar for your application, select **API permissions** > **Add a permission** > **Microsoft APIs** > **Microsoft Graph**. The **Request API permissions** page for **Microsoft Graph** appears, which contains the permissions for Microsoft Graph.
![Shows the Request API permissions page](./media/application-proxy-configure-single-sign-on-with-ping-access/required-permissions.png)
active-directory Powershell Assign Group To App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/scripts/powershell-assign-group-to-app.md
-+ Last updated 08/29/2022
active-directory Powershell Assign User To App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/scripts/powershell-assign-user-to-app.md
-+ Last updated 08/29/2022
active-directory Powershell Display Users Group Of App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/scripts/powershell-display-users-group-of-app.md
-+ Last updated 08/29/2022
active-directory Powershell Get All App Proxy Apps Basic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/scripts/powershell-get-all-app-proxy-apps-basic.md
-+ Last updated 08/29/2022
active-directory Powershell Get All App Proxy Apps By Connector Group https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/scripts/powershell-get-all-app-proxy-apps-by-connector-group.md
-+ Last updated 08/29/2022
active-directory Powershell Get All App Proxy Apps Extended https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/scripts/powershell-get-all-app-proxy-apps-extended.md
-+ Last updated 08/29/2022
active-directory Powershell Get All App Proxy Apps With Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/scripts/powershell-get-all-app-proxy-apps-with-policy.md
-+ Last updated 08/29/2022
active-directory Powershell Get All Connectors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/scripts/powershell-get-all-connectors.md
-+ Last updated 08/29/2022
active-directory Powershell Get All Custom Domain No Cert https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/scripts/powershell-get-all-custom-domain-no-cert.md
-+ Last updated 08/29/2022
active-directory Powershell Get All Custom Domains And Certs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/scripts/powershell-get-all-custom-domains-and-certs.md
-+ Last updated 08/29/2022
active-directory Powershell Get All Default Domain Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/scripts/powershell-get-all-default-domain-apps.md
-+ Last updated 08/29/2022
active-directory Powershell Get All Wildcard Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/scripts/powershell-get-all-wildcard-apps.md
-+ Last updated 08/29/2022
active-directory Powershell Get Custom Domain Identical Cert https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/scripts/powershell-get-custom-domain-identical-cert.md
-+ Last updated 08/29/2022
active-directory Powershell Get Custom Domain Replace Cert https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/scripts/powershell-get-custom-domain-replace-cert.md
-+ Last updated 08/29/2022
active-directory Powershell Move All Apps To Connector Group https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/scripts/powershell-move-all-apps-to-connector-group.md
-+ Last updated 08/29/2022
active-directory Architecture Icons https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/architecture-icons.md
+ Title: Microsoft Entra architecture icons
+description: Learn about the official collection of Microsoft Entra icons that you can use in architectural diagrams, training materials, or documentation.
+ Last updated : 08/15/2023
+# Customer intent: As a new or existing customer, I want to learn how I can use the official Microsoft Entra icons in architectural diagrams, training materials, or documentation.
+# Microsoft Entra architecture icons
+
+Helping our customers design and architect new solutions is core to the Microsoft Entra mission. Architecture diagrams can help communicate design decisions and the relationships between components of a given workload. This article provides information about the official collection of Microsoft Entra icons that you can use in architectural diagrams, training materials, or documentation.
+
+## General guidelines
+
+### Do's
+
+- Use the icon to illustrate how products can work together.
+- In diagrams, we recommend including the product name somewhere close to the icon.
+
+### Don'ts
+
+- Don't crop, flip, or rotate icons.
+- Don't distort or change the icon shape in any way.
+- Don't use Microsoft product icons to represent your product or service.
+- Don't use Microsoft product icons in marketing communications.
+
+## Icon updates
+
+| Month | Change description |
+|-|--|
+| August 2023 | Added a downloadable package that contains the Microsoft Entra architecture icons, branding playbook (which contains guidelines about the Microsoft Security visual identity), and terms of use. |
+
+## Icon terms
+
+Microsoft permits the use of these icons in architectural diagrams, training materials, or documentation. You may copy, distribute, and display the icons only for the permitted use unless granted explicit permission by Microsoft. Microsoft reserves all other rights.
+
+ > [!div class="button"]
+ > [I agree to the above terms. Download icons.](https://download.microsoft.com/download/a/4/2/a4289cad-4eaf-4580-87fd-ce999a601516/Microsoft-Entra-architecture-icons.zip?wt.mc_id=microsoftentraicons_downloadmicrosoftentraicons_content_cnl_csasci)
+
+## More icon sets from Microsoft
+
+- [Azure architecture icons](/azure/architecture/icons)
+- [Microsoft 365 architecture icons and templates](/microsoft-365/solutions/architecture-icons-templates)
+- [Dynamics 365 icons](/dynamics365/get-started/icons)
+- [Microsoft Power Platform icons](/power-platform/guidance/icons)
active-directory Govern Service Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/govern-service-accounts.md
Last updated 02/09/2023 -+
active-directory Multi Tenant Common Considerations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/multi-tenant-common-considerations.md
Previously updated : 04/19/2023 Last updated : 08/21/2023 -+ # Common considerations for multi-tenant user management
Additionally, while you can use the following Conditional Access conditions, be
- **Sign-in risk and user risk.** User behavior in their home tenant determines, in part, the sign-in risk and user risk. The home tenant stores the data and risk score. If resource tenant policies block an external user, a resource tenant admin might not be able to enable access. [Identity Protection and B2B users](../identity-protection/concept-identity-protection-b2b.md) explains how Identity Protection detects compromised credentials for Azure AD users.
- **Locations.** The named location definitions in the resource tenant determine the scope of the policy. The scope of the policy doesn't evaluate trusted locations managed in the home tenant. If your organization wants to share trusted locations across tenants, define the locations in each tenant where you define the resources and Conditional Access policies.
-## Other access control considerations
+## Securing your multi-tenant environment
+Review the [security checklist](/azure/security/fundamentals/steps-secure-identity) and [best practices](/azure/security/fundamentals/operational-best-practices) for guidance on securing your tenant. Ensure these best practices are followed and review them with any tenants that you collaborate closely with.
+### Conditional access
The following are considerations for configuring access control.
- Define [access control policies](../external-identities/authentication-conditional-access.md) to control access to resources.
- Design Conditional Access policies with external users in mind.
- Create policies specifically for external users.
-- If your organization is using the [**all users** dynamic group](../external-identities/use-dynamic-groups.md) condition in your existing Conditional Access policy, this policy affects external users because they are in scope of **all users**.
- Create dedicated Conditional Access policies for external accounts.
-### Require user assignment
+### Monitoring your multi-tenant environment
+- Monitor for changes to cross-tenant access policies using the [audit logs UI](../reports-monitoring/concept-audit-logs.md), [API](/graph/api/resources/azure-ad-auditlog-overview), or [Azure Monitor integration](../reports-monitoring/tutorial-configure-log-analytics-workspace.md) (for proactive alerts). The audit events use the categories "CrossTenantAccessSettings" and "CrossTenantIdentitySyncSettings." By monitoring for audit events under these categories, you can identify cross-tenant access policy changes in your tenant and take action. When creating alerts in Azure Monitor, you can use a query such as the following to identify those changes.
+
+```kusto
+AuditLogs
+| where Category contains "CrossTenant"
+```
+
+- Monitor application access in your tenant using the [cross-tenant access activity](../reports-monitoring/workbook-cross-tenant-access-activity.md) dashboard. This allows you to see who is accessing resources in your tenant and where those users are coming from.
++
+### Dynamic groups
+
+If your organization is using the [**all users** dynamic group](../external-identities/use-dynamic-groups.md) condition in your existing Conditional Access policy, this policy affects external users because they are in scope of **all users**.
+
+### Require user assignment for applications
If an application has the **User assignment required?** property set to **No**, external users can access the application. Application admins must understand access control impacts, especially if the application contains sensitive information. [Restrict your Azure AD app to a set of users in an Azure AD tenant](../develop/howto-restrict-your-app-to-a-set-of-users.md) explains how registered applications in an Azure Active Directory (Azure AD) tenant are, by default, available to all users of the tenant who successfully authenticate.
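A minimal sketch of turning that property on with the Microsoft Graph PowerShell SDK (the service principal object ID is a placeholder):

```powershell
# Sketch, assuming the Microsoft.Graph.Applications module.
Connect-MgGraph -Scopes "Application.ReadWrite.All"

# Placeholder: object ID of the enterprise application's service principal.
$spId = "00000000-0000-0000-0000-000000000000"

# Require assignment so only assigned users and groups, not all authenticated
# users (including external ones), can sign in to the application.
Update-MgServicePrincipal -ServicePrincipalId $spId -AppRoleAssignmentRequired:$true
```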
+### Privileged Identity Management
+Minimize persistent administrator access by enabling [privileged identity management](/azure/security/fundamentals/steps-secure-identity#implement-privilege-access-management).
+
+### Restricted Management Units
+When you're using security groups to control who is in scope for cross-tenant synchronization, you will want to limit who can make changes to the security group. Minimize the number of owners of the security groups assigned to the cross-tenant synchronization job and include the groups in a [restricted management unit](../roles/admin-units-restricted-management.md). This will limit the number of people that can add or remove group members and provision accounts across tenants.
+
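+As a sketch (assuming the Microsoft Graph PowerShell SDK, with placeholder IDs), creating the restricted management unit and adding the sync group might look like this:
+
+```powershell
+# Sketch, assuming the Microsoft.Graph.Identity.DirectoryManagement module.
+Connect-MgGraph -Scopes "AdministrativeUnit.ReadWrite.All"
+
+# Create a management unit whose members can be changed only by roles
+# explicitly scoped to it.
+$au = New-MgDirectoryAdministrativeUnit -DisplayName "Cross-tenant sync groups" `
+    -IsMemberManagementRestricted
+
+# Placeholder: object ID of the security group assigned to the sync job.
+$groupId = "00000000-0000-0000-0000-000000000000"
+
+# Add the group to the restricted management unit by reference.
+New-MgDirectoryAdministrativeUnitMemberByRef -AdministrativeUnitId $au.Id `
+    -BodyParameter @{ "@odata.id" = "https://graph.microsoft.com/v1.0/groups/$groupId" }
+```
+
+Note that the member-management restriction can only be set when the management unit is created.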
+## Other access control considerations
+ ### Terms and conditions [Azure AD terms of use](../conditional-access/terms-of-use.md) provides a simple method that organizations can use to present information to end users. You can use terms of use to require external users to approve terms of use before accessing your resources.
active-directory Multi Tenant User Management Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/multi-tenant-user-management-scenarios.md
Last updated 04/19/2023 -+
active-directory Recoverability Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/recoverability-overview.md
Create a process of predefined communications to make others aware of the issue
Document the state of your tenant and its objects regularly. Then if a hard delete or misconfiguration occurs, you have a roadmap to recovery. The following tools can help you document your current state: - [Microsoft Graph APIs](/graph/overview) can be used to export the current state of many Azure AD configurations.-- [Azure AD Exporter](https://github.com/microsoft/azureadexporter) is a tool you can use to export your configuration settings.
+- [Entra Exporter](https://github.com/microsoft/entraexporter) is a tool you can use to export your configuration settings.
- [Microsoft 365 Desired State Configuration](https://github.com/microsoft/Microsoft365DSC/wiki/What-is-Microsoft365DSC) is a module of the PowerShell Desired State Configuration framework. You can use it to export configurations for reference and application of the prior state of many settings. - [Conditional Access APIs](https://github.com/Azure-Samples/azure-ad-conditional-access-apis) can be used to manage your Conditional Access policies as code.
Microsoft Graph APIs are highly customizable based on your organizational needs.
*Securely store these configuration exports with access provided to a limited number of admins.
-The [Azure AD Exporter](https://github.com/microsoft/azureadexporter) can provide most of the documentation you need:
+The [Entra Exporter](https://github.com/microsoft/entraexporter) can provide most of the documentation you need:
- Verify that you've implemented the desired configuration. - Use the exporter to capture current configurations.
The [Azure AD Exporter](https://github.com/microsoft/azureadexporter) can provid
- Store the output in a secure location with limited access. > [!NOTE]
-> Settings in the legacy multifactor authentication portal for Application Proxy and federation settings might not be exported with the Azure AD Exporter, or with the Microsoft Graph API.
+> Settings in the legacy multifactor authentication portal for Application Proxy and federation settings might not be exported with the Entra Exporter, or with the Microsoft Graph API.
The [Microsoft 365 Desired State Configuration](https://github.com/microsoft/Microsoft365DSC/wiki/What-is-Microsoft365DSC) module uses Microsoft Graph and PowerShell to retrieve the state of many of the configurations in Azure AD. This information can be used as reference information or, by using PowerShell Desired State Configuration scripting, to reapply a known good state. Use [Conditional Access Graph APIs](https://github.com/Azure-Samples/azure-ad-conditional-access-apis) to manage policies like code. Automate approvals to promote policies from preproduction environments, backup and restore, monitor change, and plan ahead for emergencies.
active-directory Resilience Client App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/resilience-client-app.md
Learn more:
* [Token cache serialization](https://github.com/AzureAD/microsoft-identity-web/wiki/token-cache-serialization) * [Token cache serialization in MSAL.NET](../develop/msal-net-token-cache-serialization.md)
-* [Custom token cache serialization in MSAL for Java](../develop/msal-java-token-cache-serialization.md)
-* [Custom token cache serialization in MSAL for Python](../develop/msal-python-token-cache-serialization.md).
+* [Custom token cache serialization in MSAL for Java](/entra/msal/java/advanced/msal-java-token-cache-serialization)
+* [Custom token cache serialization in MSAL for Python](/entra/msal/python/advanced/msal-python-token-cache-serialization).
![Diagram of a device and an application using MSAL to call Microsoft Identity](media/resilience-client-app/resilience-with-microsoft-authentication-library.png)
active-directory Resilient External Processes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/resilient-external-processes.md
Identity experience framework (IEF) policies allow you to call an external syste
- If the data that is necessary for authentication is relatively static and small, and has no other business reason to be externalized from the directory, then consider having it in the directory. -- Remove API calls from the pre-authenticated path whenever possible. If you can't, then you must place strict protections for Denial of Service (DoS) and Distributed Denial of Service (DDoS) attacks in front of your APIs. Attackers can load the sign-in page and try to flood your API with DoS attacks and cripple your application. For example, using CAPTCHA in your sign in, sign up flow can help.
+- Remove API calls from the pre-authenticated path whenever possible. If you can't, then you must place strict protections for Denial of Service (DoS) and Distributed Denial of Service (DDoS) attacks in front of your APIs. Attackers can load the sign-in page and try to flood your API with DoS attacks and disable your application. For example, using CAPTCHA in your sign-in and sign-up flows can help.
- Use [API connectors of built-in sign-up user flow](../../active-directory-b2c/api-connectors-overview.md) wherever possible to integrate with web APIs, either after federating with an identity provider during sign-up or before creating the user. Since the user flows are already extensively tested, it's likely that you don't have to perform user flow-level functional, performance, or scale testing. You still need to test your applications for functionality, performance, and scale.
active-directory Service Accounts Managed Identities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/service-accounts-managed-identities.md
Last updated 02/07/2023 -+
active-directory Service Accounts Principal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/service-accounts-principal.md
Last updated 02/08/2023 -+
active-directory Certificate Based Authentication Federation Android https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/certificate-based-authentication-federation-android.md
description: Learn about the supported scenarios and the requirements for config
+ Last updated 09/30/2022
active-directory Certificate Based Authentication Federation Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/certificate-based-authentication-federation-get-started.md
description: Learn how to configure certificate-based authentication with federa
+ Last updated 05/04/2022
- # Get started with certificate-based authentication in Azure Active Directory with federation
active-directory Certificate Based Authentication Federation Ios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/certificate-based-authentication-federation-ios.md
description: Learn about the supported scenarios and the requirements for config
+ Last updated 09/30/2022
active-directory Concept Authentication Authenticator App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-authentication-authenticator-app.md
Previously updated : 06/06/2023 Last updated : 07/21/2023
To get started with passwordless sign-in, see [Enable passwordless sign-in with
The Authenticator app can help prevent unauthorized access to accounts and stop fraudulent transactions by pushing a notification to your smartphone or tablet. Users view the notification, and if it's legitimate, select **Verify**. Otherwise, they can select **Deny**.
-![Screenshot of example web browser prompt for Authenticator app notification to complete sign-in process.](media/tutorial-enable-azure-mfa/tutorial-enable-azure-mfa-browser-prompt.png)
+> [!NOTE]
+> Starting in August 2023, sign-ins from unfamiliar locations no longer generate notifications. Similar to how unfamiliar locations work in [Smart lockout](howto-password-smart-lockout.md), a location becomes "familiar" during the first 14 days of use, or the first 10 sign-ins. If the location is unfamiliar, or if the relevant Google or Apple service responsible for push notifications isn't available, users won't see their notification as usual. In that case, they should open Microsoft Authenticator, or Authenticator Lite in a relevant companion app like Outlook, refresh by either pulling down or hitting **Refresh**, and approve the request.
-In some rare instances where the relevant Google or Apple service responsible for push notifications is down, users may not receive their push notifications. In these cases users should manually navigate to the Microsoft Authenticator app (or relevant companion app like Outlook), refresh by either pulling down or hitting the refresh button, and approve the request.
+![Screenshot of example web browser prompt for Authenticator app notification to complete sign-in process.](media/tutorial-enable-azure-mfa/tutorial-enable-azure-mfa-browser-prompt.png)
-> [!NOTE]
-> If your organization has staff working in or traveling to China, the *Notification through mobile app* method on Android devices doesn't work in that country/region as Google play services(including push notifications) are blocked in the region. However iOS notification do work. For Android devices ,alternate authentication methods should be made available for those users.
+In China, the *Notification through mobile app* method on Android devices doesn't work because Google Play services (including push notifications) are blocked in the region. However, iOS notifications do work. Alternate authentication methods should be made available for Android users.
## Verification code from mobile app
active-directory Concept Authentication Default Enablement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-authentication-default-enablement.md
Previously updated : 06/22/2023 Last updated : 08/16/2023
The following table lists each setting that can be set to Microsoft managed and
| [Application name in Microsoft Authenticator notifications](how-to-mfa-additional-context.md) | Disabled | | [System-preferred MFA](concept-system-preferred-multifactor-authentication.md) | Enabled | | [Authenticator Lite](how-to-mfa-authenticator-lite.md) | Enabled |
+| [Report suspicious activity](howto-mfa-mfasettings.md#report-suspicious-activity) | Disabled |
As threat vectors change, Azure AD may announce default protection for a **Microsoft managed** setting in [release notes](../fundamentals/whats-new.md) and on commonly read forums like [Tech Community](https://techcommunity.microsoft.com/). For example, see our blog post [It's Time to Hang Up on Phone Transports for Authentication](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/it-s-time-to-hang-up-on-phone-transports-for-authentication/ba-p/1751752) for more information about the need to move away from using SMS and voice calls, which led to default enablement for the registration campaign to help users to set up Authenticator for modern authentication.
active-directory Concept Authentication Oath Tokens https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-authentication-oath-tokens.md
OATH TOTP hardware tokens typically come with a secret key, or seed, pre-program
Programmable OATH TOTP hardware tokens that can be reseeded can also be set up with Azure AD in the software token setup flow.
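For background on how these codes work: a TOTP code is HMAC-SHA1 over a time counter, keyed by that seed (RFC 6238). A minimal PowerShell sketch, for illustration only and not a supported tool:

```powershell
# Minimal RFC 6238 TOTP sketch: derive the current 6-digit code from a base32 seed.
function Get-TotpCode {
    param([string]$Base32Seed, [int]$PeriodSeconds = 30)

    # Decode the base32 seed (RFC 4648 alphabet) into raw key bytes.
    $alphabet = "ABCDEFGHIJKLMNOPQRSTUVWXYZ234567"
    $bits = -join ($Base32Seed.ToUpper().TrimEnd('=').ToCharArray() |
        ForEach-Object { [Convert]::ToString($alphabet.IndexOf($_), 2).PadLeft(5, '0') })
    $key = [byte[]](0..([math]::Floor($bits.Length / 8) - 1) |
        ForEach-Object { [Convert]::ToByte($bits.Substring($_ * 8, 8), 2) })

    # Counter = number of elapsed periods since the Unix epoch, big-endian.
    $counter = [BitConverter]::GetBytes([long][math]::Floor([DateTimeOffset]::UtcNow.ToUnixTimeSeconds() / $PeriodSeconds))
    if ([BitConverter]::IsLittleEndian) { [Array]::Reverse($counter) }

    # HMAC-SHA1, then dynamic truncation per RFC 4226.
    $hash = [System.Security.Cryptography.HMACSHA1]::new($key).ComputeHash($counter)
    $offset = $hash[-1] -band 0x0F
    $code = (($hash[$offset] -band 0x7F) -shl 24) -bor ($hash[$offset + 1] -shl 16) -bor ($hash[$offset + 2] -shl 8) -bor $hash[$offset + 3]
    '{0:D6}' -f ($code % 1000000)
}
```

Both the token and Azure AD run this same computation, which is why the seed upload and an accurate device clock matter.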
-OATH hardware tokens are supported as part of a public preview. For more information about previews, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+OATH hardware tokens are supported as part of a public preview. For more information about previews, see [Supplemental Terms of Use for Microsoft Azure Previews](https://aka.ms/EntraPreviewsTermsOfUse).
:::image type="content" border="true" source="./media/concept-authentication-methods/oath-tokens.png" alt-text="Screenshot of OATH token management." lightbox="./media/concept-authentication-methods/oath-tokens.png":::
active-directory Concept Authentication Passwordless https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-authentication-passwordless.md
The following providers offer FIDO2 security keys of different form factors that
| [Feitian](https://shop.ftsafe.us/pages/microsoft) | ![y] | ![y]| ![y]| ![y]| ![y] | | [Fortinet](https://www.fortinet.com/) | ![n] | ![y]| ![n]| ![n]| ![n] | | [Giesecke + Devrient (G+D)](https://www.gi-de.com/en/identities/enterprise-security/hardware-based-authentication) | ![y] | ![y]| ![y]| ![y]| ![n] |
+| [Google](https://store.google.com/us/product/titan_security_key) | ![n] | ![y]| ![y]| ![n]| ![n] |
| [GoTrustID Inc.](https://www.gotrustid.com/idem-key) | ![n] | ![y]| ![y]| ![y]| ![n] | | [HID](https://www.hidglobal.com/products/crescendo-key) | ![n] | ![y]| ![y]| ![n]| ![n] | | [HIDEEZ](https://hideez.com/products/hideez-key-4) | ![n] | ![y]| ![y]| ![y]| ![n] |
The following providers offer FIDO2 security keys of different form factors that
| [Nymi](https://www.nymi.com/nymi-band) | ![y] | ![n]| ![y]| ![n]| ![n] | | [Octatco](https://octatco.com/) | ![y] | ![y]| ![n]| ![n]| ![n] | | [OneSpan Inc.](https://www.onespan.com/products/fido) | ![n] | ![y]| ![n]| ![y]| ![n] |
+| [PONE Biometrics](https://ponebiometrics.com/) | ![y] | ![n]| ![n]| ![y]| ![n] |
| [Precision Biometric](https://www.innait.com/product/fido/) | ![n] | ![y]| ![n]| ![n]| ![n] | | [RSA](https://www.rsa.com/products/securid/) | ![n] | ![y]| ![n]| ![n]| ![n] | | [Sentry](https://sentryenterprises.com/) | ![n] | ![n]| ![y]| ![n]| ![n] |
active-directory Concept Authentication Strengths https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-authentication-strengths.md
Previously updated : 06/02/2023 Last updated : 08/23/2023 -+
An authentication strength Conditional Access policy works together with [MFA tr
## Limitations -- **Conditional Access policies are only evaluated after the initial authentication** - As a result, authentication strength doesn't restrict a user's initial authentication. Suppose you are using the built-in phishing-resistant MFA strength. A user can still type in their password, but they will be required to use a phishing-resistant method such as FIDO2 security key before they can continue.
+- **Conditional Access policies are only evaluated after the initial authentication** - As a result, authentication strength doesn't restrict a user's initial authentication. Suppose you are using the built-in phishing-resistant MFA strength. A user can still type in their password, but they will be required to use a phishing-resistant method such as FIDO2 security key before they can continue.
- **Require multifactor authentication and Require authentication strength can't be used together in the same Conditional Access policy** - These two Conditional Access grant controls can't be used together because the built-in authentication strength **Multifactor authentication** is equivalent to the **Require multifactor authentication** grant control.
An authentication strength Conditional Access policy works together with [MFA tr
- **Windows Hello for Business** – If the user signed in with Windows Hello for Business as their primary authentication method, it can be used to satisfy an authentication strength requirement that includes Windows Hello for Business. But if the user signed in with another method like password as their primary authentication method, and the authentication strength requires Windows Hello for Business, they get prompted to sign in with Windows Hello for Business. +
+## Known issues
+
+The following known issues are currently being addressed:
+
+- **Sign-in frequency** - If both sign-in frequency and authentication strength requirements apply to a sign-in, and the user has previously signed in using a method that meets the authentication strength requirements, the sign-in frequency requirement doesn't apply. [Sign-in frequency](concepts-azure-multi-factor-authentication-prompts-session-lifetime.md) allows you to set the time interval for re-authentication of users based on their credentials, but it isn't fully integrated with authentication strength yet. It works independently and doesn't currently impact the actual sign-in procedure. Therefore, you may notice that some sign-ins using expired credentials don't prompt re-authentication and the sign-in process proceeds successfully.
+
+- **FIDO2 security key Advanced options** - Advanced options aren't supported for external users with a home tenant that is located in a different Microsoft cloud than the resource tenant.
+ ## FAQ ### Should I use authentication strength or the Authentication methods policy?
Authentication strength is based on the Authentication methods policy. The Authe
For example, the administrator of Contoso wants to allow their users to use Microsoft Authenticator with either push notifications or passwordless authentication mode. The administrator goes to the Microsoft Authenticator settings in the Authentication method policy, scopes the policy to the relevant users, and sets the **Authentication mode** to **Any**.
-Then for ContosoΓÇÖs most sensitive resource, the administrator wants to restrict the access to only passwordless authentication methods. The administrator creates a new Conditional Access policy, using the built-in **Passwordless MFA strength**.
+Then for Contoso's most sensitive resource, the administrator wants to restrict the access to only passwordless authentication methods. The administrator creates a new Conditional Access policy, using the built-in **Passwordless MFA strength**.
As a result, users in Contoso can access most of the resources in the tenant using password + push notification from the Microsoft Authenticator OR only using Microsoft Authenticator (phone sign-in). However, when the users in the tenant access the sensitive application, they must use Microsoft Authenticator (phone sign-in).
active-directory Concept Certificate Based Authentication Certificateuserids https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-certificate-based-authentication-certificateuserids.md
-+ # Certificate user IDs
active-directory Concept Fido2 Hardware Vendor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-fido2-hardware-vendor.md
The following table lists partners who are Microsoft-compatible FIDO2 security k
| [Feitian](https://shop.ftsafe.us/pages/microsoft) | ![y] | ![y]| ![y]| ![y]| ![y] | | [Fortinet](https://www.fortinet.com/) | ![n] | ![y]| ![n]| ![n]| ![n] | | [Giesecke + Devrient (G+D)](https://www.gi-de.com/en/identities/enterprise-security/hardware-based-authentication) | ![y] | ![y]| ![y]| ![y]| ![n] |
+| [Google](https://store.google.com/us/product/titan_security_key) | ![n] | ![y]| ![y]| ![n]| ![n] |
| [GoTrustID Inc.](https://www.gotrustid.com/idem-key) | ![n] | ![y]| ![y]| ![y]| ![n] | | [HID](https://www.hidglobal.com/products/crescendo-key) | ![n] | ![y]| ![y]| ![n]| ![n] | | [HIDEEZ](https://hideez.com/products/hideez-key-4) | ![n] | ![y]| ![y]| ![y]| ![n] |
The following table lists partners who are Microsoft-compatible FIDO2 security k
| [Nymi](https://www.nymi.com/nymi-band) | ![y] | ![n]| ![y]| ![n]| ![n] | | [Octatco](https://octatco.com/) | ![y] | ![y]| ![n]| ![n]| ![n] | | [OneSpan Inc.](https://www.onespan.com/products/fido) | ![n] | ![y]| ![n]| ![y]| ![n] |
+| [PONE Biometrics](https://ponebiometrics.com/) | ![y] | ![n]| ![n]| ![y]| ![n] |
| [Precision Biometric](https://www.innait.com/product/fido/) | ![n] | ![y]| ![n]| ![n]| ![n] | | [RSA](https://www.rsa.com/products/securid/) | ![n] | ![y]| ![n]| ![n]| ![n] | | [Sentry](https://sentryenterprises.com/) | ![n] | ![n]| ![y]| ![n]| ![n] |
active-directory Concept Mfa Regional Opt In https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-mfa-regional-opt-in.md
For Voice verification, the following region codes require an opt-in.
| 236 | Central African Republic | | 237 | Cameroon | | 238 | Cabo Verde |
-| 239 | Sao Tome and Principe |
+| 239 | São Tomé and Príncipe |
| 240 | Equatorial Guinea | | 241 | Gabon | | 242 | Congo |
active-directory Concept Password Ban Bad Combined Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-password-ban-bad-combined-policy.md
description: Learn about the combined password policy and check for weak passwor
+ Last updated 04/02/2023
active-directory Concept Resilient Controls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-resilient-controls.md
tags: azuread+
active-directory Concept Sspr Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-sspr-policy.md
-+ # Password policies and account restrictions in Azure Active Directory
active-directory Concepts Azure Multi Factor Authentication Prompts Session Lifetime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concepts-azure-multi-factor-authentication-prompts-session-lifetime.md
description: Learn about the recommended configuration for reauthentication prom
+ Previously updated : 03/28/2023 Last updated : 08/31/2023
Azure Active Directory (Azure AD) has multiple settings that determine how often
The Azure AD default configuration for user sign-in frequency is a rolling window of 90 days. Asking users for credentials often seems like a sensible thing to do, but it can backfire. If users are trained to enter their credentials without thinking, they can unintentionally supply them to a malicious credential prompt.
-It might sound alarming to not ask for a user to sign back in, though any violation of IT policies revokes the session. Some examples include a password change, an incompliant device, or an account disable operation. You can also explicitly [revoke users' sessions using PowerShell](/powershell/module/azuread/revoke-azureaduserallrefreshtoken).
+It might sound alarming to not ask for a user to sign back in, though any violation of IT policies revokes the session. Some examples include a password change, a noncompliant device, or an account disable operation. You can also explicitly [revoke users' sessions by using Microsoft Graph PowerShell](/powershell/module/microsoft.graph.users.actions/revoke-mgusersigninsession).
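For example, a short sketch with the Microsoft Graph PowerShell SDK (the UPN is a placeholder):

```powershell
# Sketch, assuming the Microsoft.Graph.Users.Actions module.
Connect-MgGraph -Scopes "User.RevokeSessions.All"

# Invalidates the user's refresh tokens and session cookies, forcing a fresh sign-in.
Revoke-MgUserSignInSession -UserId "adele.vance@contoso.com"
```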
This article details recommended configurations and how different settings work and interact with each other.
To optimize the frequency of authentication prompts for your users, you can conf
### Evaluate session lifetime policies
-Without any session lifetime settings, there are no persistent cookies in the browser session. Every time a user closes and open the browser, they get a prompt for reauthentication. In Office clients, the default time period is a rolling window of 90 days. With this default Office configuration, if the user has reset their password or there has been inactivity of over 90 days, the user is required to reauthenticate with all required factors (first and second factor).
+Without any session lifetime settings, there are no persistent cookies in the browser session. Every time a user closes and opens the browser, they get a prompt for reauthentication. In Office clients, the default time period is a rolling window of 90 days. With this default Office configuration, if the user has reset their password or there has been inactivity of over 90 days, the user is required to reauthenticate with all required factors (first and second factor).
A user might see multiple MFA prompts on a device that doesn't have an identity in Azure AD. Multiple prompts result when each application has its own OAuth Refresh Token that isn't shared with other client apps. In this scenario, MFA prompts multiple times as each application requests an OAuth Refresh Token to be validated with MFA.
This setting allows configuration of lifetime for token issued by Azure Active D
Now that you understand how different settings works and the recommended configuration, it's time to check your tenants. You can start by looking at the sign-in logs to understand which session lifetime policies were applied during sign-in.
-Under each sign-in log, go to the **Authentication Details** tab and explore **Session Lifetime Policies Applied**. For more information, see [Authentication details](../reports-monitoring/concept-sign-ins.md#authentication-details).
+Under each sign-in log, go to the **Authentication Details** tab and explore **Session Lifetime Policies Applied**. For more information, see the [Learn about the sign-in log activity details](../reports-monitoring/concept-sign-in-log-activity-details.md) article.
![Screenshot of authentication details.](./media/concepts-azure-multi-factor-authentication-prompts-session-lifetime/details.png)
active-directory Fido2 Compatibility https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/fido2-compatibility.md
The following tables show which transports are supported for each platform. Supp
|||--|--| | Edge | &#10060; | &#10060; | &#10060; | | Chrome | &#x2705; | &#10060; | &#10060; |
-| Firefox | &#10060; | &#10060; | &#10060; |
+| Firefox | &#x2705; | &#10060; | &#10060; |
### iOS
active-directory How To Authentication Find Coverage Gaps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-authentication-find-coverage-gaps.md
There are different ways to check if your admins are covered by an MFA policy.
![Screenshot of the sign-in log.](./media/how-to-authentication-find-coverage-gaps/auth-requirement.png)
- Click **Authentication details** for [details about the MFA requirements](../reports-monitoring/concept-sign-ins.md#authentication-details).
+ When viewing the details of a specific sign-in, select the **Authentication details** tab for details about the MFA requirements. For more information, see [Sign-in log activity details](../reports-monitoring/concept-sign-in-log-activity-details.md).
![Screenshot of the authentication activity details.](./media/how-to-authentication-find-coverage-gaps/details.png)
active-directory How To Certificate Based Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-certificate-based-authentication.md
-+ # How to configure Azure AD certificate-based authentication
active-directory How To Mfa Authenticator Lite https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-mfa-authenticator-lite.md
Microsoft Authenticator Lite is another surface for Azure Active Directory (Azur
Users receive a notification in Outlook mobile to approve or deny sign-in, or they can copy a TOTP to use during sign-in.
>[!NOTE]
->This is an important security enhancement for users authenticating via telecom transports. On June 26, the Microsoft managed value of this feature changed from ΓÇÿdisabledΓÇÖ to ΓÇÿenabledΓÇÖ. If you no longer wish for this feature to be enabled, move the state from 'default' toΓÇÿdisabledΓÇÖ or set users to include and exclude groups.
+>These are important security enhancements for users authenticating via telecom transports:
+>- On June 26, the Microsoft managed value of this feature changed from 'disabled' to 'enabled' in the Authentication methods policy. If you no longer wish for this feature to be enabled, move the state from 'default' to 'disabled' or scope it to only a group of users.
+>- Starting September 18, Authenticator Lite will be enabled as part of the *Notification through mobile app* verification option in the per-user MFA policy. If you don't want this feature enabled, you can disable it in the Authentication methods policy following the steps below.
## Prerequisites -- Your organization needs to enable Microsoft Authenticator (second factor) push notifications for some users or groups by using the modern Authentication methods policy. You can edit the Authentication methods policy by using the Azure portal or Microsoft Graph API. Organizations with an active MFA server or that have not started migration from per-user MFA are not eligible for this feature.
+- Your organization needs to enable Microsoft Authenticator (second factor) push notifications for all users or select groups. We recommend enabling Microsoft Authenticator by using the modern [Authentication methods policy](concept-authentication-methods-manage.md#authentication-methods-policy). You can edit the Authentication methods policy by using the Azure portal or Microsoft Graph API. Organizations with an active MFA server are not eligible for this feature.
>[!TIP] >We recommend that you also enable [system-preferred multifactor authentication (MFA)](concept-system-preferred-multifactor-authentication.md) when you enable Authenticator Lite. With system-preferred MFA enabled, users try to sign-in with Authenticator Lite before they try less secure telephony methods like SMS or voice call.
Users receive a notification in Outlook mobile to approve or deny sign-in, or th
## Enable Authenticator Lite
-By default, Authenticator Lite is [Microsoft managed](concept-authentication-default-enablement.md#microsoft-managed-settings). On June 26, the Microsoft managed value of this feature changed from ΓÇÿdisabledΓÇÖ to ΓÇÿenabledΓÇÖ
+By default, Authenticator Lite is [Microsoft managed](concept-authentication-default-enablement.md#microsoft-managed-settings) in the Authentication methods policy. On June 26, the Microsoft managed value of this feature changed from 'disabled' to 'enabled'. Authenticator Lite is also included as part of the *Notification through mobile app* verification option in the per-user MFA policy.
### Disabling Authenticator Lite in Azure portal UX
To disable Authenticator Lite in the Azure portal, complete the following steps:
1. In the Azure portal, click Azure Active Directory > Security > Authentication methods > Microsoft Authenticator. In the Entra admin center, on the sidebar select Azure Active Directory > Protect & Secure > Authentication methods > Microsoft Authenticator.
- 2. On the Enable and Target tab, click Yes and All users to enable the Authenticator policy for everyone or add selected users and groups. Set the Authentication mode for these users/groups to Any or Push.
+ 2. On the Enable and Target tab, click Enable and All users to enable the Authenticator policy for everyone or add select groups. Set the Authentication mode for these users/groups to Any or Push.
- Only users who are enabled for Microsoft Authenticator here can be enabled to use Authenticator Lite for sign-in, or excluded from it. Users who aren't enabled for Microsoft Authenticator can't see the feature. Users who have Microsoft Authenticator downloaded on the same device Outlook is downloaded on will not be prompted to register for Authenticator Lite in Outlook. Android users utilizing a personal and work profile on their device may be prompted to register if Authenticator is present on a different profile from the Outlook application.
+Users who aren't enabled for Microsoft Authenticator can't see the feature. Users who have Microsoft Authenticator installed on the same device as Outlook won't be prompted to register for Authenticator Lite in Outlook. Android users who use a personal and a work profile on their device may be prompted to register if Authenticator is present on a different profile from the Outlook application.
-<img width="1112" alt="Entra portal Authenticator settings" src="https://user-images.githubusercontent.com/108090297/228603771-52c5933c-f95e-4f19-82db-eda2ba640b94.png">
+<img width="1112" alt="Microsoft Entra admin center Authenticator settings" src="https://user-images.githubusercontent.com/108090297/228603771-52c5933c-f95e-4f19-82db-eda2ba640b94.png">
3. On the Configure tab, for **Microsoft Authenticator on companion applications**, change Status to Disabled, and click Save. <img width="664" alt="Authenticator Lite configuration settings" src="https://user-images.githubusercontent.com/108090297/228603364-53f2581f-a4e0-42ee-8016-79b23e5eff6c.png">
+>[!NOTE]
+> If your organization still manages authentication methods in the per-user MFA policy, you'll need to disable *Notification through mobile app* as a verification option there in addition to the steps above. We recommend doing this only after you've enabled Microsoft Authenticator in the Authentication methods policy. You can continue to manage the remainder of your authentication methods in the per-user MFA policy while Microsoft Authenticator is managed in the modern Authentication methods policy. However, we recommend [migrating](how-to-authentication-methods-manage.md) management of all authentication methods to the modern Authentication methods policy. The ability to manage authentication methods in the per-user MFA policy will be retired September 30, 2024.
+### Enable Authenticator Lite via Graph APIs
+
+| Property | Type | Description |
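+A sketch of such a call with `Invoke-MgGraphRequest` follows. The beta `companionAppAllowedState` feature setting and its shape are assumptions here, so verify them against the property table before relying on this.
+
+```powershell
+# Sketch: disable Authenticator Lite through the Authentication methods policy.
+# Assumes the beta companionAppAllowedState feature setting; verify before use.
+Connect-MgGraph -Scopes "Policy.ReadWrite.AuthenticationMethod"
+
+$body = @{
+    "@odata.type"   = "#microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration"
+    featureSettings = @{
+        companionAppAllowedState = @{ state = "disabled" }   # or "enabled" / "default"
+    }
+} | ConvertTo-Json -Depth 5
+
+Invoke-MgGraphRequest -Method PATCH -Body $body -ContentType "application/json" `
+    -Uri "https://graph.microsoft.com/beta/policies/authenticationMethodsPolicy/authenticationMethodConfigurations/MicrosoftAuthenticator"
+```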
active-directory How To Mfa Server Migration Utility https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-mfa-server-migration-utility.md
Previously updated : 06/29/2023 Last updated : 08/22/2023
Take a look at our video for an overview of the MFA Server Migration Utility and
## Limitations and requirements -- The MFA Server Migration Utility requires a new build of the MFA Server solution to be installed on your Primary MFA Server. The build makes updates to the MFA Server data file, and includes the new MFA Server Migration Utility. You donΓÇÖt have to update the WebSDK or User portal. Installing the update _doesn't_ start the migration automatically.
+- The MFA Server Migration Utility requires a new build of the MFA Server solution to be installed on your Primary MFA Server. The build makes updates to the MFA Server data file, and includes the new MFA Server Migration Utility. You don't have to update the WebSDK or User portal. Installing the update _doesn't_ start the migration automatically.
- The MFA Server Migration Utility copies the data from the database file onto the user objects in Azure AD. During migration, users can be targeted for Azure AD MFA for testing purposes using [Staged Rollout](../hybrid/connect/how-to-connect-staged-rollout.md). Staged migration lets you test without making any changes to your domain federation settings. Once migrations are complete, you must finalize your migration by making changes to your domain federation settings. - AD FS running Windows Server 2016 or higher is required to provide MFA authentication on any AD FS relying parties, not including Azure AD and Office 365. - Review your AD FS access control policies and make sure none requires MFA to be performed on-premises as part of the authentication process.
A few important points:
During the previous phases, you can remove users from the Staged Rollout folders to take them out of scope of Azure AD MFA and route them back to your on-premises Azure MFA server for all MFA requests originating from Azure AD.
-**Phase 3** requires moving all clients that authenticate to the on-premises MFA Server (VPNs, password managers, and so on) to Azure AD federation via SAML/OAUTH. If modern authentication standards arenΓÇÖt supported, you're required to stand up NPS server(s) with the Azure AD MFA extension installed. Once dependencies are migrated, users should no longer use the User portal on the MFA Server, but rather should manage their authentication methods in Azure AD ([aka.ms/mfasetup](https://aka.ms/mfasetup)). Once users begin managing their authentication data in Azure AD, those methods won't be synced back to MFA Server. If you roll back to the on-premises MFA Server after users have made changes to their Authentication Methods in Azure AD, those changes will be lost. After user migrations are complete, change the [federatedIdpMfaBehavior](/graph/api/resources/internaldomainfederation?view=graph-rest-1.0#federatedidpmfabehavior-values&preserve-view=true) domain federation setting. The change tells Azure AD to no longer perform MFA on-premises and to perform _all_ MFA requests with Azure AD MFA, regardless of group membership.
+**Phase 3** requires moving all clients that authenticate to the on-premises MFA Server (VPNs, password managers, and so on) to Azure AD federation via SAML/OAUTH. If modern authentication standards aren't supported, you're required to stand up NPS server(s) with the Azure AD MFA extension installed. Once dependencies are migrated, users should no longer use the User portal on the MFA Server, but rather should manage their authentication methods in Azure AD ([aka.ms/mfasetup](https://aka.ms/mfasetup)). Once users begin managing their authentication data in Azure AD, those methods won't be synced back to MFA Server. If you roll back to the on-premises MFA Server after users have made changes to their Authentication Methods in Azure AD, those changes will be lost. After user migrations are complete, change the [federatedIdpMfaBehavior](/graph/api/resources/internaldomainfederation?view=graph-rest-1.0#federatedidpmfabehavior-values&preserve-view=true) domain federation setting. The change tells Azure AD to no longer perform MFA on-premises and to perform _all_ MFA requests with Azure AD MFA, regardless of group membership.
The following sections explain the migration steps in more detail.
Open MFA Server, click **Company Settings**:
|OATH Token tab|Not applicable; Azure AD MFA uses a default message for OATH tokens| |Reports|[Azure AD Authentication Methods Activity reports](howto-authentication-methods-activity.md)|
-<sup>*</sup>When a PIN is used to provide proof-of-presence functionality, the functional equivalent is provided above. PINs that arenΓÇÖt cryptographically tied to a device don't sufficiently protect against scenarios where a device has been compromised. To protect against these scenarios, including [SIM swap attacks](https://wikipedia.org/wiki/SIM_swap_scam), move users to more secure methods according to Microsoft authentication methods [best practices](concept-authentication-methods.md).
+<sup>*</sup>When a PIN is used to provide proof-of-presence functionality, the functional equivalent is provided above. PINs that aren't cryptographically tied to a device don't sufficiently protect against scenarios where a device has been compromised. To protect against these scenarios, including [SIM swap attacks](https://wikipedia.org/wiki/SIM_swap_scam), move users to more secure methods according to Microsoft authentication methods [best practices](concept-authentication-methods.md).
<sup>**</sup>The default SMS MFA experience in Azure AD MFA sends users a code, which they're required to enter in the login window as part of authentication. The requirement to roundtrip the SMS code provides proof-of-presence functionality.
Open MFA Server, click **User Portal**:
|Use OATH token for fallback|See [OATH token documentation](howto-mfa-mfasettings.md#oath-tokens)| |Session Timeout|| |**Security Questions tab** |Security questions in MFA Server were used to gain access to the User portal. Azure AD MFA only supports security questions for self-service password reset. See [security questions documentation](concept-authentication-security-questions.md).|
-|**Passed Sessions tab**|All authentication method registration flows are managed by Azure AD and donΓÇÖt require configuration|
+|**Passed Sessions tab**|All authentication method registration flows are managed by Azure AD and don't require configuration|
|**Trusted IPs**|[Azure AD trusted IPs](howto-mfa-mfasettings.md#trusted-ips)| Any MFA methods available in MFA Server must be enabled in Azure AD MFA by using [MFA Service settings](howto-mfa-mfasettings.md#mfa-service-settings).
Users can't try their newly migrated MFA methods unless they're enabled.
#### Authentication services
Azure MFA Server can provide MFA functionality for third-party solutions that use RADIUS or LDAP by acting as an authentication proxy. To discover RADIUS or LDAP dependencies, click **RADIUS Authentication** and **LDAP Authentication** options in MFA Server. For each of these dependencies, determine if these third parties support modern authentication. If so, consider federation directly with Azure AD.
-For RADIUS deployments that canΓÇÖt be upgraded, youΓÇÖll need to deploy an NPS Server and install the [Azure AD MFA NPS extension](howto-mfa-nps-extension.md).
+For RADIUS deployments that can't be upgraded, you'll need to deploy an NPS Server and install the [Azure AD MFA NPS extension](howto-mfa-nps-extension.md).
-For LDAP deployments that canΓÇÖt be upgraded or moved to RADIUS, [determine if Azure Active Directory Domain Services can be used](../architecture/auth-ldap.md). In most cases, LDAP was deployed to support in-line password changes for end users. Once migrated, end users can manage their passwords by using [self-service password reset in Azure AD](tutorial-enable-sspr.md).
+For LDAP deployments that can't be upgraded or moved to RADIUS, [determine if Azure Active Directory Domain Services can be used](../architecture/auth-ldap.md). In most cases, LDAP was deployed to support in-line password changes for end users. Once migrated, end users can manage their passwords by using [self-service password reset in Azure AD](tutorial-enable-sspr.md).
-If you enabled the [MFA Server Authentication provider in AD FS 2.0](./howto-mfaserver-adfs-windows-server.md#secure-windows-server-ad-fs-with-azure-multi-factor-authentication-server) on any relying party trusts except for the Office 365 relying party trust, youΓÇÖll need to upgrade to [AD FS 3.0](/windows-server/identity/ad-fs/deployment/upgrading-to-ad-fs-in-windows-server) or federate those relying parties directly to Azure AD if they support modern authentication methods. Determine the best plan of action for each of the dependencies.
+If you enabled the [MFA Server Authentication provider in AD FS 2.0](./howto-mfaserver-adfs-windows-server.md#secure-windows-server-ad-fs-with-azure-multi-factor-authentication-server) on any relying party trusts except for the Office 365 relying party trust, you'll need to upgrade to [AD FS 3.0](/windows-server/identity/ad-fs/deployment/upgrading-to-ad-fs-in-windows-server) or federate those relying parties directly to Azure AD if they support modern authentication methods. Determine the best plan of action for each of the dependencies.
### Backup Azure AD MFA Server datafile
Make a backup of the MFA Server data file located at %programfiles%\Multi-Factor Authentication Server\Data\PhoneFactor.pfdata (default location) on your primary MFA Server. Make sure you have a copy of the installer for your currently installed version in case you need to roll back. If you no longer have a copy, contact Customer Support Services.
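A sketch of that copy step (the destination path is a placeholder):

```powershell
# Sketch: copy the MFA Server data file to a safe location before migrating.
$dataFile  = "$env:ProgramFiles\Multi-Factor Authentication Server\Data\PhoneFactor.pfdata"
$backupDir = "D:\Backups\MFAServer"   # placeholder destination

New-Item -ItemType Directory -Path $backupDir -Force | Out-Null
Copy-Item -Path $dataFile -Destination (Join-Path $backupDir "PhoneFactor.pfdata")
```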
The **Settings** option allows you to change the settings for the migration proc
- User Match – Allows you to specify a different on-premises Active Directory attribute for matching Azure AD UPN instead of the default match to userPrincipalName:
  - The migration utility tries direct matching to UPN before using the on-premises Active Directory attribute.
  - If no match is found, it calls a Windows API to find the Azure AD UPN and get the SID, which it uses to search the MFA Server user list.
- - If the Windows API doesnΓÇÖt find the user or the SID isnΓÇÖt found in the MFA Server, then it will use the configured Active Directory attribute to find the user in the on-premises Active Directory, and then use the SID to search the MFA Server user list.
+ - If the Windows API doesn't find the user or the SID isn't found in the MFA Server, then it will use the configured Active Directory attribute to find the user in the on-premises Active Directory, and then use the SID to search the MFA Server user list.
- Automatic synchronization – Starts a background service that continually monitors any authentication method changes to users in the on-premises MFA Server and writes them to Azure AD at the specified time interval.
- Synchronization server – Allows the MFA Server Migration Sync service to run on a secondary MFA Server rather than only on the primary. To configure the Migration Sync service to run on a secondary server, the `Configure-MultiFactorAuthMigrationUtility.ps1` script must be run on that server to register a certificate with the MFA Server Migration Utility app registration. The certificate is used to authenticate to Microsoft Graph.
The manual process steps are:
1. To begin the migration process for a user or selection of multiple users, press and hold the Ctrl key while selecting each of the user(s) you wish to migrate. 1. After you select the desired users, click **Migrate Users** > **Selected users** > **OK**. 1. To migrate all users in the group, click **Migrate Users** > **All users in AAD group** > **OK**.
-1. You can migrate users even if they are unchanged. By default, the utility is set to **Only migrate users that have changed**. Click **Migrate all users** to re-migrate previously migrated users that are unchanged. Migrating unchanged users can be useful during testing if an administrator needs to reset a userΓÇÖs Azure MFA settings and wants to re-migrate them.
+1. You can migrate users even if they are unchanged. By default, the utility is set to **Only migrate users that have changed**. Click **Migrate all users** to re-migrate previously migrated users that are unchanged. Migrating unchanged users can be useful during testing if an administrator needs to reset a user's Azure MFA settings and wants to re-migrate them.
:::image type="content" border="true" source="./media/how-to-mfa-server-migration-utility/migrate-users.png" alt-text="Screenshot of Migrate users dialog.":::
The following table lists the sync logic for the various methods.
|**Mobile App**|Maximum of five devices will be migrated or only four if the user also has a hardware OATH token.<br>If there are multiple devices with the same name, only migrate the most recent one.<br>Devices will be ordered from newest to oldest.<br>If devices already exist in Azure AD, match on OATH Token Secret Key and update.<br>- If there's no match on OATH Token Secret Key, match on Device Token<br>-- If found, create a Software OATH Token for the MFA Server device to allow OATH Token method to work. Notifications will still work using the existing Azure AD MFA device.<br>-- If not found, create a new device.<br>If adding a new device will exceed the five-device limit, the device will be skipped. | |**OATH Token**|If devices already exist in Azure AD, match on OATH Token Secret Key and update.<br>- If not found, add a new Hardware OATH Token device.<br>If adding a new device will exceed the five-device limit, the OATH token will be skipped.|
-MFA Methods will be updated based on what was migrated and the default method will be set. MFA Server will track the last migration timestamp and only migrate the user again if the userΓÇÖs MFA settings change or an admin modifies what to migrate in the **Settings** dialog.
+MFA Methods will be updated based on what was migrated and the default method will be set. MFA Server will track the last migration timestamp and only migrate the user again if the user's MFA settings change or an admin modifies what to migrate in the **Settings** dialog.
During testing, we recommend doing a manual migration first, and test to ensure a given number of users behave as expected. Once testing is successful, turn on automatic synchronization for the Azure AD group you wish to migrate. As you add users to this group, their information will be automatically synchronized to Azure AD. MFA Server Migration Utility targets one Azure AD group, however that group can encompass both users and nested groups of users.
Once complete, a confirmation will inform you of the tasks completed:
As mentioned in the confirmation message, it can take several minutes for the migrated data to appear on user objects within Azure AD. Users can view their migrated methods by navigating to [aka.ms/mfasetup](https://aka.ms/mfasetup).
+#### View migration details
+
+You can use Audit logs or Log Analytics to view details of MFA Server to Azure MFA user migrations.
+
+##### Use Audit logs
+To access the Audit logs in the Azure portal to view details of MFA Server to Azure MFA user migrations, follow these steps:
+
+1. Click **Azure Active Directory** > **Audit logs**. To filter the logs, click **Add filters**.
+
+ :::image type="content" border="true" source="./media/how-to-mfa-server-migration-utility/add-filter.png" alt-text="Screenshot of how to add filters.":::
+
+1. Select **Initiated by (actor)** and click **Apply**.
+
+ :::image type="content" border="true" source="./media/how-to-mfa-server-migration-utility/actor.png" alt-text="Screenshot of Initiated by Actor option.":::
+
+1. Type _Azure MFA Management_ and click **Apply**.
+
+ :::image type="content" border="true" source="./media/how-to-mfa-server-migration-utility/apply-actor.png" alt-text="Screenshot of MFA management option.":::
+
+1. This filter displays only MFA Server Migration Utility logs. To view details for a user migration, click a row, and then choose the **Modified Properties** tab. This tab shows changes to registered MFA methods and phone numbers.
+
+ :::image type="content" border="true" source="./media/how-to-mfa-server-migration-utility/changes.png" alt-text="Screenshot of user migration details.":::
+
+ The following table lists the authentication method for each code.
+
+ | Code | Method |
+ |:--|:|
+ | 0 | Voice mobile |
+ | 2 | Voice office |
+ | 3 | Voice alternate mobile |
+ | 5 | SMS |
+ | 6 | Microsoft Authenticator push notification |
+ | 7 | Hardware or software token OTP |
+
+1. If any user devices were migrated, there is a separate log entry.
+
+ :::image type="content" border="true" source="./media/how-to-mfa-server-migration-utility/migrated-device.png" alt-text="Screenshot of a migrated device.":::
++
+##### Use Log Analytics
+
+The details of MFA Server to Azure MFA user migrations can also be queried using Log Analytics.
+
+```kusto
+AuditLogs
+| where ActivityDateTime > ago(7d)
+| extend InitiatedBy = tostring(InitiatedBy["app"]["displayName"])
+| where InitiatedBy == "Azure MFA Management"
+| extend UserObjectId = tostring(TargetResources[0]["id"])
+| extend Upn = tostring(TargetResources[0]["userPrincipalName"])
+| extend ModifiedProperties = TargetResources[0]["modifiedProperties"]
+| project ActivityDateTime, InitiatedBy, UserObjectId, Upn, ModifiedProperties
+| order by ActivityDateTime asc
+```
+
+This screenshot shows changes for user migration:
++
+This screenshot shows changes for device migration:
++
+Log Analytics can also be used to summarize user migration activity.
+
+```kusto
+AuditLogs
+| where ActivityDateTime > ago(7d)
+| extend InitiatedBy = tostring(InitiatedBy["app"]["displayName"])
+| where InitiatedBy == "Azure MFA Management"
+| extend UserObjectId = tostring(TargetResources[0]["id"])
+| summarize UsersMigrated = dcount(UserObjectId) by InitiatedBy, bin(ActivityDateTime, 1d)
+```
++
### Validate and test
Once you've successfully migrated user data, you can validate the end-user experience using Staged Rollout before making the global tenant change. The following process will allow you to target specific Azure AD group(s) for Staged Rollout for MFA. Staged Rollout tells Azure AD to perform MFA by using Azure AD MFA for users in the targeted groups, rather than sending them on-premises to perform MFA. You can validate and test. We recommend using the Azure portal, but if you prefer, you can also use Microsoft Graph.
Once you've successfully migrated user data, you can validate the end-user exper
1. Are users able to authenticate successfully using Hardware OATH tokens?
### Educate users
-Ensure users know what to expect when they're moved to Azure MFA, including new authentication flows. You may also wish to instruct users to use the Azure AD Combined Registration portal ([aka.ms/mfasetup](https://aka.ms/mfasetup)) to manage their authentication methods rather than the User portal once migrations are complete. Any changes made to authentication methods in Azure AD won't propagate back to your on-premises environment. In a situation where you had to roll back to MFA Server, any changes users have made in Azure AD won’t be available in the MFA Server User portal.
+Ensure users know what to expect when they're moved to Azure MFA, including new authentication flows. You may also wish to instruct users to use the Azure AD Combined Registration portal ([aka.ms/mfasetup](https://aka.ms/mfasetup)) to manage their authentication methods rather than the User portal once migrations are complete. Any changes made to authentication methods in Azure AD won't propagate back to your on-premises environment. In a situation where you had to roll back to MFA Server, any changes users have made in Azure AD won't be available in the MFA Server User portal.
-If you use third-party solutions that depend on Azure MFA Server for authentication (see [Authentication services](#authentication-services)), you’ll want users to continue to make changes to their MFA methods in the User portal. These changes will be synced to Azure AD automatically. Once you've migrated these third party solutions, you can move users to the Azure AD combined registration page.
+If you use third-party solutions that depend on Azure MFA Server for authentication (see [Authentication services](#authentication-services)), you'll want users to continue to make changes to their MFA methods in the User portal. These changes will be synced to Azure AD automatically. Once you've migrated these third party solutions, you can move users to the Azure AD combined registration page.
### Complete user migration Repeat migration steps found in [Migrate user data](#migrate-user-data) and [Validate and test](#validate-and-test) sections until all user data is migrated.
Repeat migration steps found in [Migrate user data](#migrate-user-data) and [Val
Using the data points you collected in [Authentication services](#authentication-services), begin carrying out the various migrations necessary. Once this is completed, consider having users manage their authentication methods in the combined registration portal, rather than in the User portal on MFA server. ### Update domain federation settings
-Once you've completed user migrations, and moved all of your [Authentication services](#authentication-services) off of MFA Server, it’s time to update your domain federation settings. After the update, Azure AD no longer sends MFA request to your on-premises federation server.
+Once you've completed user migrations, and moved all of your [Authentication services](#authentication-services) off of MFA Server, it's time to update your domain federation settings. After the update, Azure AD no longer sends MFA requests to your on-premises federation server.
To configure Azure AD to ignore MFA requests to your on-premises federation server, install the [Microsoft Graph PowerShell SDK](/powershell/microsoftgraph/installation?view=graph-powershell-&preserve-view=true) and set [federatedIdpMfaBehavior](/graph/api/resources/internaldomainfederation?view=graph-rest-1.0#federatedidpmfabehavior-values&preserve-view=true) to `rejectMfaByFederatedIdp`, as shown in the following example.
Content-Type: application/json
} ```
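
For reference only, the same setting can be applied with the Microsoft Graph PowerShell cmdlets for domain federation; a minimal sketch, assuming the Microsoft.Graph.Identity.DirectoryManagement module, with the domain name as a placeholder:

```powershell
# Sketch: set federatedIdpMfaBehavior on a federated domain via Graph PowerShell.
Connect-MgGraph -Scopes "Domain.ReadWrite.All"

$domainId = "contoso.com"  # placeholder: your federated domain name

# Assumption: a federated domain carries a single internalDomainFederation configuration.
$fed = Get-MgDomainFederationConfiguration -DomainId $domainId

Update-MgDomainFederationConfiguration -DomainId $domainId `
    -InternalDomainFederationId $fed.Id `
    -FederatedIdpMfaBehavior "rejectMfaByFederatedIdp"
```
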
-Users will no longer be redirected to your on-premises federation server for MFA, whether they’re targeted by the Staged Rollout tool or not. Note this can take up to 24 hours to take effect.
+Users will no longer be redirected to your on-premises federation server for MFA, whether they're targeted by the Staged Rollout tool or not. Note this can take up to 24 hours to take effect.
>[!NOTE] >The update of the domain federation setting can take up to 24 hours to take effect.
active-directory How To Migrate Mfa Server To Azure Mfa https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-migrate-mfa-server-to-azure-mfa.md
description: Step-by-step guidance to migrate from MFA Server on-premises to Azu
+ Last updated 01/29/2023
active-directory How To Migrate Mfa Server To Mfa With Federation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-migrate-mfa-server-to-mfa-with-federation.md
Title: Migrate to Azure AD MFA with federations
description: Step-by-step guidance to move from MFA Server on-premises to Azure AD MFA with federation + Last updated 05/23/2023
active-directory Howto Authentication Passwordless Phone https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-authentication-passwordless-phone.md
description: Enable passwordless sign-in to Azure AD using Microsoft Authenticat
+ Last updated 05/16/2023
active-directory Howto Authentication Use Email Signin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-authentication-use-email-signin.md
description: Learn how to enable users to sign in to Azure Active Directory with
+ Last updated 06/01/2023
- # Sign-in to Azure AD with email as an alternate login ID (Preview) > [!NOTE]
-> Sign-in to Azure AD with email as an alternate login ID is a public preview feature of Azure Active Directory. For more information about previews, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+> Sign-in to Azure AD with email as an alternate login ID is a public preview feature of Azure Active Directory. For more information about previews, see [Supplemental Terms of Use for Microsoft Azure Previews](https://aka.ms/EntraPreviewsTermsOfUse).
Many organizations want to let users sign in to Azure Active Directory (Azure AD) using the same credentials as their on-premises directory environment. With this approach, known as hybrid authentication, users only need to remember one set of credentials.
active-directory Howto Mfa Getstarted https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfa-getstarted.md
Title: Deployment considerations for Azure AD Multi-Factor Authentication
description: Learn about deployment considerations and strategy for successful implementation of Azure AD Multi-Factor Authentication + Last updated 03/06/2023
active-directory Howto Mfa Mfasettings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfa-mfasettings.md
Previously updated : 07/17/2023 Last updated : 08/16/2023 -+
To unblock a user, complete the following steps:
Users who report an MFA prompt as suspicious are set to **High User Risk**. Administrators can use risk-based policies to limit access for these users, or enable self-service password reset (SSPR) for users to remediate problems on their own. If you previously used the **Fraud Alert** automatic blocking feature and don't have an Azure AD P2 license for risk-based policies, you can use risk detection events to identify and disable impacted users and automatically prevent their sign-in. For more information about using risk-based policies, see [Risk-based access policies](../identity-protection/concept-identity-protection-policies.md).
-To enable **Report suspicious activity** from the Authentication Methods Settings:
+To enable **Report suspicious activity** from the Authentication methods **Settings**:
1. In the Azure portal, click **Azure Active Directory** > **Security** > **Authentication Methods** > **Settings**.
-1. Set **Report suspicious activity** to **Enabled**.
+1. Set **Report suspicious activity** to **Enabled**. The feature remains disabled if you choose **Microsoft managed**. For more information about Microsoft managed values, see [Protecting authentication methods in Azure Active Directory](concept-authentication-default-enablement.md).
1. Select **All users** or a specific group.
+1. Select a **Reporting code**.
+1. Click **Save**.
+
+>[!NOTE]
+>If you enable **Report suspicious activity** and specify a custom voice reporting value while the tenant still has **Fraud Alert** enabled in parallel with a custom voice reporting number configured, the **Report suspicious activity** value will be used instead of **Fraud Alert**.
### View suspicious activity events
OATH TOTP hardware tokens typically come with a secret key, or seed, pre-program
Programmable OATH TOTP hardware tokens that can be reseeded can also be set up with Azure AD in the software token setup flow.
-OATH hardware tokens are supported as part of a public preview. For more information about previews, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms).
+OATH hardware tokens are supported as part of a public preview. For more information about previews, see [Supplemental Terms of Use for Microsoft Azure Previews](https://aka.ms/EntraPreviewsTermsOfUse).
![Screenshot that shows the OATH tokens section.](media/concept-authentication-methods/mfa-server-oath-tokens-azure-ad.png)
The following table lists more numbers for different countries.
| Sri Lanka | +94 117750440 | | Sweden | +46 701924176 | | Taiwan | +886 277515260 |
-| Turkey | +90 8505404893 |
+| Türkiye | +90 8505404893 |
| Ukraine | +380 443332393 | | United Arab Emirates | +971 44015046 | | Vietnam | +84 2039990161 |
active-directory Howto Mfa Nps Extension Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfa-nps-extension-errors.md
If you encounter errors with the NPS extension for Azure AD Multi-Factor Authent
| **REQUEST_FORMAT_ERROR** <br> Radius Request missing mandatory Radius userName\Identifier attribute. Verify that NPS is receiving RADIUS requests | This error usually reflects an installation issue. The NPS extension must be installed in NPS servers that can receive RADIUS requests. NPS servers that are installed as dependencies for services like RDG and RRAS don't receive radius requests. NPS Extension does not work when installed over such installations and errors out since it cannot read the details from the authentication request. |
| **REQUEST_MISSING_CODE** | Make sure that the password encryption protocol between the NPS and NAS servers supports the secondary authentication method that you're using. **PAP** supports all the authentication methods of Azure AD MFA in the cloud: phone call, one-way text message, mobile app notification, and mobile app verification code. **CHAPV2** and **EAP** support phone call and mobile app notification. |
| **USERNAME_CANONICALIZATION_ERROR** | Verify that the user is present in your on-premises Active Directory instance, and that the NPS Service has permissions to access the directory. If you are using cross-forest trusts, [contact support](#contact-microsoft-support) for further help. |
+| **Challenge requested in Authentication Ext for User** | Organizations using a RADIUS protocol other than PAP will observe user VPN authorization failing with these events appearing in the AuthZOptCh event log of the NPS Extension server. You can configure the NPS Server to support PAP. If PAP is not an option, you can set OVERRIDE_NUMBER_MATCHING_WITH_OTP = FALSE to fall back to Approve/Deny push notifications. For further help, please check [Number matching using NPS Extension](how-to-mfa-number-match.md#nps-extension). |
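
As a reference, `OVERRIDE_NUMBER_MATCHING_WITH_OTP` is a registry value read by the NPS Extension; a PowerShell sketch, assuming the documented `HKLM\SOFTWARE\Microsoft\AzureMfa` settings location, might look like this.

```powershell
# Sketch: disable the number-matching override on the NPS Extension server.
# Assumption: the extension reads its settings from HKLM:\SOFTWARE\Microsoft\AzureMfa.
Set-ItemProperty -Path "HKLM:\SOFTWARE\Microsoft\AzureMfa" `
    -Name "OVERRIDE_NUMBER_MATCHING_WITH_OTP" -Value "FALSE"

# Restart the Network Policy Server service (service name IAS) to pick up the change.
Restart-Service -Name IAS
```
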
### Alternate login ID errors
active-directory Howto Mfa Nps Extension Rdg https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfa-nps-extension-rdg.md
description: Integrate your Remote Desktop Gateway infrastructure with Azure AD
+ Last updated 01/29/2023
active-directory Howto Mfa Nps Extension Vpn https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfa-nps-extension-vpn.md
description: Integrate your VPN infrastructure with Azure AD MFA by using the Ne
+ Last updated 01/29/2023
active-directory Howto Mfa Nps Extension https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfa-nps-extension.md
-+ # Integrate your existing Network Policy Server (NPS) infrastructure with Azure AD Multi-Factor Authentication
active-directory Howto Mfa Reporting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfa-reporting.md
-+ # Use the sign-ins report to review Azure AD Multi-Factor Authentication events
active-directory Howto Mfa Userdevicesettings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfa-userdevicesettings.md
Previously updated : 07/05/2023 Last updated : 08/29/2023
Install the Microsoft.Graph.Identity.Signins PowerShell module using the followi
```powershell Install-module Microsoft.Graph.Identity.Signins
-Connect-MgGraph -Scopes UserAuthenticationMethod.ReadWrite.All
+Connect-MgGraph -Scopes "User.Read.all","UserAuthenticationMethod.Read.All","UserAuthenticationMethod.ReadWrite.All"
Select-MgProfile -Name beta ```
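
After connecting with these scopes, you can, for example, enumerate the methods registered for a user; a small sketch with a placeholder user ID:

```powershell
# Sketch: list the authentication methods registered for a single user.
$userId = "user@contoso.com"  # placeholder UPN or object ID

Get-MgUserAuthenticationMethod -UserId $userId |
    Select-Object Id, @{ Name = "MethodType"; Expression = { $_.AdditionalProperties['@odata.type'] } }
```
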
active-directory Howto Mfa Userstates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfa-userstates.md
-+ # Enable per-user Azure AD Multi-Factor Authentication to secure sign-in events
active-directory Howto Password Smart Lockout https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-password-smart-lockout.md
Based on your organizational requirements, you can customize the Azure AD smart
To check or modify the smart lockout values for your organization, complete the following steps:
-1. Sign in to the [Entra portal](https://entra.microsoft.com/#home).
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com/#home).
1. Search for and select *Azure Active Directory*, then select **Security** > **Authentication methods** > **Password protection**. 1. Set the **Lockout threshold**, based on how many failed sign-ins are allowed on an account before its first lockout.
active-directory Howto Registration Mfa Sspr Combined Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-registration-mfa-sspr-combined-troubleshoot.md
description: Troubleshoot Azure AD Multi-Factor Authentication and self-service
+ Last updated 01/29/2023
active-directory Howto Sspr Authenticationdata https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-sspr-authenticationdata.md
-+ # Pre-populate user authentication contact information for Azure Active Directory self-service password reset (SSPR)
active-directory V1 Permissions Consent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/azuread-dev/v1-permissions-consent.md
Last updated 09/24/2018 -+
active-directory Faqs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/faqs.md
Previously updated : 06/16/2023 Last updated : 08/23/2023
This article answers frequently asked questions (FAQs) about Microsoft Entra Per
Microsoft Entra Permissions Management (Permissions Management) is a cloud infrastructure entitlement management (CIEM) solution that provides comprehensive visibility into permissions assigned to all identities. For example, over-privileged workload and user identities, actions, and resources across multicloud infrastructures in Microsoft Azure, Amazon Web Services (AWS), and Google Cloud Platform (GCP). Permissions Management detects, automatically right-sizes, and continuously monitors unused and excessive permissions. It deepens the Zero Trust security strategy by augmenting the least privilege access principle. - ## What are the prerequisites to use Permissions Management? Permissions Management supports data collection from AWS, GCP, and/or Microsoft Azure. For data collection and analysis, customers are required to have an Azure Active Directory (Azure AD) account to use Permissions Management.
Permissions Management currently supports the three major public clouds: Amazon
Permissions Management currently doesn't support hybrid environments.
-## What types of identities are supported by Permissions Management?
+## What types of identities does Permissions Management support?
Permissions Management supports user identities (for example, employees, customers, external partners) and workload identities (for example, virtual machines, containers, web apps, serverless functions).
The Permissions Creep Index (PCI) is a quantitative measure of risk associated w
## How can customers use Permissions Management to delete unused or excessive permissions?
-Permissions Management allows users to right-size excessive permissions and automate least privilege policy enforcement with just a few clicks. The solution continuously analyzes historical permission usage data for each identity and gives customers the ability to right-size permissions of that identity to only the permissions that are being used for day-to-day operations. All unused and other risky permissions can be automatically removed.
+Permissions Management allows users to right-size excessive permissions and automate least privilege policy enforcement with just a few clicks. The solution continuously analyzes historical permission usage data for each identity and gives customers the ability to right-size that identity's permissions to only those used for day-to-day operations. All unused and other risky permissions can be automatically removed.
## How can customers grant permissions on-demand with Permissions Management?
No, Permissions Management doesn't have access to sensitive personal data.
You can read our [blog](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/bg-p/Identity) and visit our [web page](https://www.microsoft.com/en-us/security/business/identity-access/microsoft-entra-permissions-management). You can also get in touch with your Microsoft point of contact to schedule a demo.
-## What is the data destruction/decommission process?
+## What is the data destruction/decommission process?
+
+If a customer initiates a free Permissions Management 45-day trial and does not convert to a paid license within 45 days of the trial expiration, all collected data is deleted within 30 days of the trial expiration date.
+
+If a customer decides to discontinue licensing the service, all previously collected data is deleted within 30 days of license termination.
+
+Customers can also remove, export, or modify specific data if a Global Administrator using the Permissions Management service files an official Data Subject Request. To file a request:
-If a customer initiates a free Permissions Management 45-day trial, but does not follow up and convert to a paid license within 45 days of the free trial expiration, we will delete all collected data on or just before 45 days.
+If you're an enterprise customer, you can contact your Microsoft representative, account team, or tenant admin to file a high-priority IcM support ticket requesting a Data Subject Request. Do not include details or any personally identifiable information in the IcM request. We'll reach out to you for these details only after an IcM is filed.
-If a customer decides to discontinue licensing the service, we will also delete all previously collected data within 45 days of license termination.
+If you're a self-service customer (you set up a trial or paid license in the Microsoft 365 admin center), you can contact the Permissions Management privacy team by selecting your profile drop-down menu, then **Account Settings** in Permissions Management. Follow the instructions to make a Data Subject Request.
-We also have the ability to remove, export or modify specific data should the Global Administrator using the Entra Permissions Management service file an official Data Subject Request. This can be initiated by opening a ticket in the Azure portal [New support request - Microsoft Entra admin center](https://entra.microsoft.com/#blade/Microsoft_Azure_Support/NewSupportRequestV3Blade/callerName/ActiveDirectory/issueType/technical), or alternately contacting your local Microsoft representative.
+Learn more about [Azure Data Subject Requests](https://go.microsoft.com/fwlink/?linkid=2245178).
## Do I require a license to use Entra Permissions Management?
active-directory Onboard Aws https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/onboard-aws.md
Previously updated : 06/16/2023 Last updated : 08/24/2023
This option detects all AWS accounts that are accessible through OIDC role acces
On the **Data Collectors** dashboard, the **Recently Uploaded On** column displays **Collecting**. The **Recently Transformed On** column displays **Processing.**
- You have now completed onboarding AWS, and Permissions Management has started collecting and processing your data.
+ The status column in your Permissions Management UI shows you which step of data collection you're at:
+
+ - **Pending**: Permissions Management has not started detecting or onboarding yet.
+ - **Discovering**: Permissions Management is detecting the authorization systems.
+ - **In progress**: Permissions Management has finished detecting the authorization systems and is onboarding.
+ - **Onboarded**: Data collection is complete, and all detected authorization systems are onboarded to Permissions Management.
### 7. View the data
active-directory Onboard Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/onboard-azure.md
Previously updated : 06/16/2023 Last updated : 08/24/2023
To view status of onboarding after saving the configuration:
### 2. Review and save. -- In **Permissions Management Onboarding – Summary** page, review the information you've added, and then select **Verify Now & Save**.
+1. In the **Permissions Management Onboarding – Summary** page, review the information you've added, and then select **Verify Now & Save**.
The following message appears: **Successfully Created Configuration.** On the **Data Collectors** tab, the **Recently Uploaded On** column displays **Collecting**. The **Recently Transformed On** column displays **Processing.**
- You have now completed onboarding Azure, and Permissions Management has started collecting and processing your data.
+ The status column in your Permissions Management UI shows you which step of data collection you're at:
+
+ - **Pending**: Permissions Management has not started detecting or onboarding yet.
+ - **Discovering**: Permissions Management is detecting the authorization systems.
+ - **In progress**: Permissions Management has finished detecting the authorization systems and is onboarding.
+ - **Onboarded**: Data collection is complete, and all detected authorization systems are onboarded to Permissions Management.
### 3. View the data. -- To view the data, select the **Authorization Systems** tab.
+1. To view the data, select the **Authorization Systems** tab.
The **Status** column in the table displays **Collecting Data.**
active-directory Onboard Enable Controller After Onboarding https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/onboard-enable-controller-after-onboarding.md
Previously updated : 06/16/2023 Last updated : 08/24/2023 # Enable or disable the controller after onboarding is complete
-With the controller, you determine what level of access to provide Permissions Management.
+With the controller, you can decide what level of access to grant in Permissions Management.
-* Enable to grant read and write access to your environment(s). You can manage permissions and remediate through Permissions Management.
+* Enable to grant read and write access to your environments. You can right-size permissions and remediate through Permissions Management.
-* Disable to grant read-only access to your environment(s).
+* Disable to grant read-only access to your environments.
This article describes how to enable the controller in Amazon Web Services (AWS), Microsoft Azure and Google Cloud Platform (GCP) after onboarding is complete.
This article also describes how to disable the controller in Microsoft Azure and
## Enable the controller in AWS > [!NOTE]
-> You can enable the controller in AWS if you disabled it during onboarding. Once you enable the controller, you can’t disable it at this time.
+> You can enable the controller in AWS if you disabled it during onboarding. Once you enable the controller in AWS, you can’t disable it.
1. Sign in to the AWS console of the member account in a separate browser window. 1. Go to the Permissions Management home page, select **Settings** (the gear icon), and then select the **Data Collectors** subtab.
active-directory Onboard Gcp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/onboard-gcp.md
Previously updated : 08/09/2023 Last updated : 08/24/2023
The required commands to run in Google Cloud Shell are listed in the Manage Auth
### 3. Review and save. -- In the **Permissions Management Onboarding – Summary** page, review the information you've added, and then select **Verify Now & Save**.
+1. In the **Permissions Management Onboarding – Summary** page, review the information you've added, and then select **Verify Now & Save**.
The following message appears: **Successfully Created Configuration**. On the **Data Collectors** tab, the **Recently Uploaded On** column displays **Collecting**. The **Recently Transformed On** column displays **Processing.**-
- You've completed onboarding GCP, and Permissions Management has started collecting and processing your data.
+
+ The status column in your Permissions Management UI shows you which step of data collection you're at:
+
+ - **Pending**: Permissions Management has not started detecting or onboarding yet.
+ - **Discovering**: Permissions Management is detecting the authorization systems.
+ - **In progress**: Permissions Management has finished detecting the authorization systems and is onboarding.
+ - **Onboarded**: Data collection is complete, and all detected authorization systems are onboarded to Permissions Management.
### 4. View the data. -- To view the data, select the **Authorization Systems** tab.
+1. To view the data, select the **Authorization Systems** tab.
The **Status** column in the table displays **Collecting Data.**
active-directory Permissions Management Quickstart Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/permissions-management-quickstart-guide.md
+
+ Title: Microsoft Entra Permissions Management Quickstart Guide
+description: Quickstart guide - How to quickly onboard your Microsoft Entra Permissions Management product
+# CustomerIntent: As a security administrator, I want to successfully onboard Permissions Management so that I can enable identity security in my cloud environment as efficiently as possible.
+++++++ Last updated : 08/24/2023+++
+# Quickstart guide to Microsoft Entra Permissions Management
+
+Welcome to the Quickstart Guide for Microsoft Entra Permissions Management.
+
+Permissions Management is a Cloud Infrastructure Entitlement Management (CIEM) solution that provides comprehensive visibility into permissions assigned to all identities. These identities include over-privileged workload and user identities, actions, and resources across multicloud infrastructures in Microsoft Azure, Amazon Web Services (AWS), and Google Cloud Platform (GCP). Permissions Management helps your organization effectively secure and manage cloud permissions by detecting, automatically right-sizing, and continuously monitoring unused and excessive permissions.
+
+With this quickstart guide, you’ll set up your multicloud environment(s), configure data collection, and enable permissions access to ensure your cloud identities are managed and secure.
+
+## Prerequisites
+
+Before you begin, you need access to these tools for the onboarding process:
+
+- Access to a local BASH shell with the Azure CLI, or Azure Cloud Shell using the BASH environment (Azure CLI is included).
+- Access to AWS, Azure, and GCP consoles.
+- For AWS and GCP onboarding, a user with *Global Administrator* or *Permissions Management Administrator* role assignments is required to create a new app registration in the Entra ID tenant.
++
+## Step 1: Set up Permissions Management
+
+To enable Permissions Management, you must have a Microsoft Entra ID tenant (for example, one you manage in the Entra admin center).
+- If you have an Azure account, you automatically have an Entra admin center tenant.
+- If you don’t already have one, create a free account at [entra.microsoft.com](https://entra.microsoft.com).
+
+If the above points are met, continue with:
+
+[Enable Microsoft Entra Permissions Management in your organization](onboard-enable-tenant.md)
+
+Ensure you're a *Global Administrator* or *Permissions Management Administrator*. Learn more about [Permissions Management roles and permissions](product-roles-permissions.md).
+
+
+## Step 2: Onboard your multicloud environment
+
+So far, you’ve:
+
+1. Been assigned the *Permissions Management Administrator* role in your Entra admin center tenant.
+2. Purchased licenses or activated your 45-day free trial for Permissions Management.
+3. Successfully launched Permissions Management.
+
+Now, you're going to learn about the role and settings of the Controller and Data collection modes in Permissions Management.
+
+### Set the controller
+The controller lets you choose the level of access you grant to users in Permissions Management.
+
+- Enabling the controller during onboarding grants Permissions Management admin access, or read and write access, so users can right-size permissions and remediate directly through Permissions Management (instead of going to the AWS, Azure, or GCP consoles).
+
+- Disabling the controller during onboarding, or never enabling it, grants a Permissions Management user read-only access to your environment(s).
+
+> [!NOTE]
+> If you don't enable the controller during onboarding, you have the option to enable it after onboarding is complete. To set the controller in Permissions Management after onboarding, see [Enable or disable the controller after onboarding](onboard-enable-controller-after-onboarding.md).
+> For AWS environments, once you have enabled the controller, you *cannot* disable it.
+
+To set the controller settings during onboarding:
+1. Select **Enable** to give read and write access to Permissions Management.
+2. Select **Disable** to give read-only access to Permissions Management.
+
+### Configure data collection
+
+There are three data collection modes to choose from in Permissions Management.
+
+- **Automatic (recommended)**
+Permissions Management automatically discovers, onboards, and monitors all current and future subscriptions.
+
+- **Manual**
+Manually enter individual subscriptions for Permissions Management to discover, onboard, and monitor. You can enter up to 100 subscriptions per data collection.
+
+- **Select**
+Permissions Management automatically discovers all current subscriptions. Once discovered, you select which subscriptions to onboard and monitor.
+
+> [!NOTE]
+> To use **Automatic** or **Select** modes, the controller must be enabled while configuring data collection.
+
+To configure data collection:
+1. In Permissions Management, navigate to the data collectors page.
+2. Select a cloud environment: AWS, Azure, or GCP.
+3. Click **Create configuration**.
+
+### Onboard Amazon Web Services (AWS)
+Since Permissions Management is hosted on Microsoft Entra, there are more steps to take to onboard your AWS environment.
+
+To connect AWS to Permissions Management, you must create an Entra ID application in the Entra admin center tenant where Permissions Management is enabled. This Entra ID application is used to set up an OIDC connection to your AWS environment.
+
+*OpenID Connect (OIDC) is an interoperable authentication protocol based on the OAuth 2.0 family of specifications.*
+
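For illustration only, the app registration itself is an ordinary Graph call; a generic sketch with a placeholder display name, assuming the Microsoft.Graph.Applications module (the Permissions Management onboarding flow provisions the app with the exact settings it needs, so treat this purely as an example of the call involved):

```powershell
# Generic sketch: create an app registration that an OIDC connection could be built on.
Connect-MgGraph -Scopes "Application.ReadWrite.All"

$app = New-MgApplication -DisplayName "permissions-management-aws-oidc"  # placeholder name
Write-Output "AppId: $($app.AppId)  ObjectId: $($app.Id)"
```
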
+### Prerequisites
+
+A user must have *Global Administrator* or *Permissions Management Administrator* role assignments to create a new app registration in Entra ID.
+
+Account IDs and roles for:
+- AWS OIDC account: An AWS member account designated by you to create and host the OIDC connection through an OIDC IdP
+- AWS Logging account (optional but recommended)
+- AWS Management account (optional but recommended)
+- AWS member accounts monitored and managed by Permissions Management (for manual mode)
+
+To use **Automatic** or **Select** data collection modes, you must connect your AWS Management account.
+
+During this step, you can enable the controller by entering the name of the S3 bucket with AWS CloudTrail activity logs (found on AWS Trails).
+
+To onboard your AWS environment and configure data collection, see [Onboard an Amazon Web Services (AWS) account](onboard-aws.md).
+
+### Onboard Microsoft Azure
+When you enabled Permissions Management in the Entra ID tenant, an enterprise application for CIEM was created. To onboard your Azure environment, you grant permissions to this application for Permissions management.
+
+1. In the Entra ID tenant where Permissions management is enabled, locate the **Cloud Infrastructure Entitlement Management (CIEM)** enterprise application.
+
+2. Assign the *Reader* role to the CIEM application to allow Permissions management to read the Entra subscriptions in your environment.
+
+### Prerequisites
+- A user with ```Microsoft.Authorization/roleAssignments/write``` permissions at the subscription or management group scope.
+
+- To use **Automatic** or **Select** data collection modes, you must assign the *Reader* role at the Management group scope.
+
+- To enable the controller, you must assign the *User Access Administrator* role to the CIEM application.
+
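To script the role assignments above, a minimal Az PowerShell sketch (Az.Resources module); the CIEM service principal object ID and management group name are placeholders:

```powershell
# Sketch: grant the CIEM enterprise application Reader at management group scope.
Connect-AzAccount

$ciemObjectId = "00000000-0000-0000-0000-000000000000"  # placeholder: CIEM service principal object ID
$managementGroup = "my-management-group"                # placeholder management group name

New-AzRoleAssignment -ObjectId $ciemObjectId `
    -RoleDefinitionName "Reader" `
    -Scope "/providers/Microsoft.Management/managementGroups/$managementGroup"

# Enabling the controller additionally requires User Access Administrator
# (same call with a different -RoleDefinitionName).
```
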
+To onboard your Azure environment and configure data collection, see [Onboard a Microsoft Azure subscription](onboard-azure.md).
++
+### Onboard Google Cloud Platform (GCP)
+Because Permissions Management is hosted on Microsoft Azure, there are additional steps to take to onboard your GCP environment.
+
+To connect GCP to Permissions Management, you must create an Entra admin center application in the Entra ID tenant where Permissions Management is enabled. This Entra admin center application is used to set up an OIDC connection to your GCP environment.
+
+*OpenID Connect (OIDC) is an interoperable authentication protocol based on the OAuth 2.0 family of specifications.*
+
+
+### Prerequisites
+A user who can create a new app registration in Entra ID (needed to facilitate the OIDC connection) is required for AWS and GCP onboarding.
+
+ID details for:
+- GCP OIDC project: a GCP project designated by you to create and host the OIDC connection through an OIDC IdP.
+ - Project number and project ID
+- GCP OIDC Workload identity
+ - Pool ID, pool provider ID
+- GCP OIDC service account
+ - G-suite IdP Secret name and G-suite IdP user email (optional)
+ - IDs for the GCP projects you wish to onboard (optional, for manual mode)
+
+Assign the *Viewer* and *Security Reviewer* roles to the GCP service account at the organization, folder, or project levels to grant Permissions management read access to your GCP environment.
+
+During this step, you have the option to **Enable** controller mode by assigning the *Role Administrator* and *Security Administrator* roles to the GCP service account at the organization, folder, or project levels.
+
+> [!NOTE]
+> The Permissions Management default scope is at the project level.
+
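For reference, bindings like these are commonly granted with the gcloud CLI; a sketch with placeholder project and service account values (the controller roles above would be bound the same way):

```powershell
# Sketch: grant the read-only roles to the GCP service account at project scope.
$project = "my-gcp-project"  # placeholder project ID
$member = "serviceAccount:pm-collector@my-gcp-project.iam.gserviceaccount.com"  # placeholder

gcloud projects add-iam-policy-binding $project --member $member --role "roles/viewer"
gcloud projects add-iam-policy-binding $project --member $member --role "roles/iam.securityReviewer"
```
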
+To onboard your GCP environment and configure data collection, see [Onboard a GCP project](onboard-gcp.md).
+
+## Summary
+
+Congratulations! You have finished configuring data collection for your environment(s), and the data collection process has begun.
+
+The status column in your Permissions Management UI shows you which step of data collection you're at.
+
+
+- **Pending**: Permissions Management has not started detecting or onboarding yet.
+- **Discovering**: Permissions Management is detecting the authorization systems.
+- **In progress**: Permissions Management has finished detecting the authorization systems and is onboarding.
+- **Onboarded**: Data collection is complete, and all detected authorization systems are onboarded to Permissions Management.
+
+> [!NOTE]
+> Data collection might take time depending on the number of authorization systems you've onboarded. While the data collection process continues, you can begin setting up [users and groups in Permissions Management](how-to-add-remove-user-to-group.md).
+
+## Next steps
+
+- [Enable or disable the controller after onboarding](onboard-enable-controller-after-onboarding.md)
+- [Add an account/subscription/project after onboarding is complete](onboard-add-account-after-onboarding.md)
+- [Create folders to organize your authorization systems](how-to-create-folders.md)
+
+References:
+- [Permissions Management Glossary](multi-cloud-glossary.md)
+- [Permissions Management FAQs](faqs.md)
active-directory Product Roles Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/product-roles-permissions.md
+
+ Title: Microsoft Entra Permissions Management roles and permissions
+description: Review roles and the level of permissions assigned in Microsoft Entra Permissions Management.
+# customerintent: As a cloud administrator, I want to understand Permissions Management role assignments, so that I can effectively assign the correct permissions to users.
+++++++ Last updated : 08/24/2023++++
+# Microsoft Entra Permissions Management roles and permissions levels
+
+In Microsoft Azure and Microsoft Entra Permissions Management, role assignments grant users permissions to monitor and take action in multicloud environments.
+
+- **Global Administrator**: Manages all aspects of Entra Admin Center and Microsoft services that use Entra Admin Center identities.
+- **Billing Administrator**: Performs common billing-related tasks like updating payment information.
+- **Permissions Management Administrator**: Manages all aspects of Entra Permissions Management.
+
+See [Microsoft Entra ID built-in roles](product-privileged-role-insights.md) to learn more.
+
+## Enabling Permissions Management
+- To activate a trial or purchase a license, you must have *Global Administrator* or *Billing Administrator* permissions.
+
+## Onboarding your Amazon Web Services (AWS), Microsoft Entra, or Google Cloud Platform (GCP) environments
+
+- To configure data collection, you must have *Permissions Management Administrator* or *Global Administrator* permissions.
+- A user with *Global Administrator* or *Permissions Management Administrator* role assignments is required for AWS and GCP onboarding.
+
+## Notes on permissions and roles in Permissions Management
+
+- Users can have the following permissions:
+ - Admin for all authorization system types
+ - Admin for selected authorization system types
+ - Fine-grained permissions for all or selected authorization system types
+- If a user isn't an admin, they're assigned Microsoft Entra ID security group-based, fine-grained permissions for all or selected authorization system types:
+ - Viewers: View the specified AWS accounts, Azure subscriptions, and GCP projects
+ - Controller: Modify Cloud Infrastructure Entitlement Management (CIEM) properties and use the Remediation dashboard.
+ - Approvers: Able to approve permission requests
+ - Requestors: Request permissions in the specified AWS accounts, Entra subscriptions, and GCP projects.
+
+## Permissions Management actions and required roles
+
+Remediation
+- To view the **Remediation** tab, you must have *Viewer*, *Controller*, or *Approver* permissions.
+- To make changes in the **Remediation** tab, you must have *Controller* or *Approver* permissions.
+
+Autopilot
+- To view and make changes in the **Autopilot** tab, you must be a *Permissions Management Administrator*.
+
+Alert
+- Any user (admin or nonadmin) can create an alert.
+- Only the user who creates the alert can edit, rename, deactivate, or delete the alert.
+
+Manage users or groups
+- Only the owner of a group can add or remove a user from the group.
+- Managing users and groups is only done in the Entra Admin Center.
++
+## Next steps
+
+For information about managing roles, policies and permissions requests in your organization, see [View roles/policies and requests for permission in the Remediation dashboard](ui-remediation.md).
active-directory Block Legacy Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/block-legacy-authentication.md
The following messaging protocols support legacy authentication:
- Universal Outlook - Used by the Mail and Calendar app for Windows 10. - Other clients - Other protocols identified as utilizing legacy authentication.
-For more information about these authentication protocols and services, see [Sign-in activity reports in the Azure portal](../reports-monitoring/concept-sign-ins.md#filter-sign-in-activities).
+For more information about these authentication protocols and services, see [Sign-in log activity details](../reports-monitoring/concept-sign-in-log-activity-details.md).
### Identify legacy authentication use
Before you can block legacy authentication in your directory, you need to first
#### Sign-in log indicators
-1. Navigate to the **Azure portal** > **Azure Active Directory** > **Sign-in logs**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
+1. Browse to **Identity** > **Monitoring & health** > **Sign-in logs**.
1. Add the **Client App** column if it isn't shown by clicking on **Columns** > **Client App**. 1. Select **Add filters** > **Client App** > choose all of the legacy authentication protocols and select **Apply**. 1. If you've activated the [new sign-in activity reports preview](../reports-monitoring/concept-all-sign-ins.md), repeat the above steps also on the **User sign-ins (non-interactive)** tab.
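
If you'd rather identify legacy authentication use from a script, a small Microsoft Graph PowerShell sketch (Microsoft.Graph.Reports module); the protocol value shown is one example of a legacy client app:

```powershell
# Sketch: find recent sign-ins that used a legacy protocol.
Connect-MgGraph -Scopes "AuditLog.Read.All"

Get-MgAuditLogSignIn -Filter "clientAppUsed eq 'Exchange ActiveSync'" -Top 50 |
    Select-Object CreatedDateTime, UserPrincipalName, ClientAppUsed, AppDisplayName
```
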
active-directory Concept Condition Filters For Devices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-condition-filters-for-devices.md
There are multiple scenarios that organizations can now enable using filter for
## Create a Conditional Access policy
-Filter for devices is an option when creating a Conditional Access policy in the Azure portal or using the Microsoft Graph API.
+Filter for devices is an optional control when creating a Conditional Access policy.
The following steps will help create two Conditional Access policies to support the first scenario under [Common scenarios](#common-scenarios). Policy 1: All users with the directory role of Global Administrator, accessing the Microsoft Azure Management cloud app, and for Access controls, Grant access, but require multifactor authentication and require device to be marked as compliant.
-1. Sign in to the **[Microsoft Entra admin center](https://entra.microsoft.com)** as a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
-1. Browse to **Microsoft Entra ID (Azure AD)** > **Protection** > **Conditional Access**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
+1. Browse to **Protection** > **Conditional Access**.
1. Select **Create new policy**. 1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies. 1. Under **Assignments**, select **Users or workload identities**.
active-directory Concept Conditional Access Cloud Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-conditional-access-cloud-apps.md
description: What are cloud apps, actions, and authentication context in an Azur
+ Previously updated : 06/27/2023 Last updated : 08/31/2023
# Conditional Access: Target resources
-Target resources (formerly Cloud apps, actions, and authentication context) are key signals in a Conditional Access policy. Conditional Access policies allow administrators to assign controls to specific applications, actions, or authentication context.
+Target resources (formerly Cloud apps, actions, and authentication context) are key signals in a Conditional Access policy. Conditional Access policies allow administrators to assign controls to specific applications, services, actions, or authentication context.
-- Administrators can choose from the list of applications that include built-in Microsoft applications and any [Azure AD integrated applications](../manage-apps/what-is-application-management.md) including gallery, non-gallery, and applications published through [Application Proxy](../app-proxy/what-is-application-proxy.md).
+- Administrators can choose from the list of applications or services that include built-in Microsoft applications and any [Azure AD integrated applications](../manage-apps/what-is-application-management.md) including gallery, non-gallery, and applications published through [Application Proxy](../app-proxy/what-is-application-proxy.md).
- Administrators may choose to define policy not based on a cloud application but on a [user action](#user-actions) like **Register security information** or **Register or join devices**, allowing Conditional Access to enforce controls around those actions. - Administrators can target [traffic forwarding profiles](#traffic-forwarding-profiles) from Global Secure Access for enhanced functionality. - Administrators can use [authentication context](#authentication-context) to provide an extra layer of security in applications.
-![Define a Conditional Access policy and specify cloud apps](./media/concept-conditional-access-cloud-apps/conditional-access-cloud-apps-or-actions.png)
## Microsoft cloud applications
Targeting this group of applications helps to avoid issues that may arise becaus
Administrators can exclude the entire Office 365 suite or specific Office 365 cloud apps from the Conditional Access policy.
-The following key applications are affected by the Office 365 cloud app:
-
-- Exchange Online
-- Microsoft 365 Search Service
-- Microsoft Forms
-- Microsoft Planner (ProjectWorkManagement)
-- Microsoft Stream
-- Microsoft Teams
-- Microsoft To-Do
-- Microsoft Flow
-- Microsoft Office 365 Portal
-- Microsoft Office client application
-- Microsoft To-Do WebApp
-- Microsoft Whiteboard Services
-- Office Delve
-- Office Online
-- OneDrive
-- Power Apps
-- Power Automate
-- Security & compliance portal
-- SharePoint Online
-- Skype for Business Online
-- Skype and Teams Tenant Admin API
-- Sway
-- Yammer
-
A complete list of all services included can be found in the article [Apps included in Conditional Access Office 365 app suite](reference-office-365-application-contents.md).

### Microsoft Azure Management
Because the policy is applied to the Azure management portal and API, services,
- Azure Data Factory portal - Azure Event Hubs - Azure Service Bus -- [Azure SQL Database](/azure/azure-sql/database/conditional-access-configure)
+- Azure SQL Database
- SQL Managed Instance - Azure Synapse - Visual Studio subscriptions administrator portal -- [Microsoft IoT Central](https://apps.azureiotcentral.com/)
+- Microsoft IoT Central
> [!NOTE] > The Microsoft Azure Management application applies to [Azure PowerShell](/powershell/azure/what-is-azure-powershell), which calls the [Azure Resource Manager API](../../azure-resource-manager/management/overview.md). It does not apply to [Azure AD PowerShell](/powershell/azure/active-directory/overview), which calls the [Microsoft Graph API](/graph/overview).
For more information on how to set up a sample policy for Microsoft Azure Manage
When a Conditional Access policy targets the Microsoft Admin Portals cloud app, the policy is enforced for tokens issued to application IDs of the following Microsoft administrative portals:

-- Microsoft 365 Admin Center
-- Exchange admin center
- Azure portal
+- Exchange admin center
+- Microsoft 365 admin center
+- Microsoft 365 Defender portal
- Microsoft Entra admin center
-- Security and Microsoft Purview compliance portal
+- Microsoft Intune admin center
+- Microsoft Purview compliance portal
-Other Microsoft admin portals will be added over time.
+We're continually adding more administrative portals to the list.
> [!IMPORTANT]
-> Microsoft Admin Poratls (preview) is not currently supported in Government clouds.
+> Microsoft Admin Portals (preview) is not currently supported in Government clouds.
> [!NOTE] > The Microsoft Admin Portals app applies to interactive sign-ins to the listed admin portals only. Sign-ins to the underlying resources or services like Microsoft Graph or Azure Resource Manager APIs are not covered by this application. Those resources are protected by the [Microsoft Azure Management](#microsoft-azure-management) app. This enables customers to move along the MFA adoption journey for admins without impacting automation that relies on APIs and PowerShell. When you are ready, Microsoft recommends using a [policy requiring administrators perform MFA always](howto-conditional-access-policy-admin-mfa.md) for comprehensive protection.
User actions are tasks that can be performed by a user. Currently, Conditional A
## Traffic forwarding profiles
-Traffic forwarding profiles in Global Secure Access enable administrators to define and control how traffic is routed through Microsoft Entra Internet Access and Microsoft Entra Private Access. Traffic forwarding profiles can be assigned to devices and remote networks. For an example of how to configure these traffic profiles in Conditional Access policy, see the article [How to require a compliant network check](../../global-secure-access/how-to-compliant-network.md).
+Traffic forwarding profiles in Global Secure Access enable administrators to define and control how traffic is routed through Microsoft Entra Internet Access and Microsoft Entra Private Access. Traffic forwarding profiles can be assigned to devices and remote networks. For an example of how to apply a Conditional Access policy to these traffic profiles, see the article [How to apply Conditional Access policies to the Microsoft 365 traffic profile](../../global-secure-access/how-to-target-resource-microsoft-365-profile.md).
For more information about these profiles, see the article [Global Secure Access traffic forwarding profiles](../../global-secure-access/concept-traffic-forwarding.md).
For example, an organization may keep files in SharePoint sites like the lunch m
### Configure authentication contexts
-Authentication contexts are managed in the Azure portal under **Azure Active Directory** > **Security** > **Conditional Access** > **Authentication context**.
+Authentication contexts are managed under **Azure Active Directory** > **Security** > **Conditional Access** > **Authentication context**.
-![Manage authentication context in the Azure portal](./media/concept-conditional-access-cloud-apps/conditional-access-authentication-context-get-started.png)
-Create new authentication context definitions by selecting **New authentication context** in the Azure portal. Organizations are limited to a total of 25 authentication context definitions. Configure the following attributes:
+Create new authentication context definitions by selecting **New authentication context**. Organizations are limited to a total of 25 authentication context definitions. Configure the following attributes:
- **Display name** is the name that is used to identify the authentication context in Azure AD and across applications that consume authentication contexts. We recommend names that can be used across resources, like "trusted devices", to reduce the number of authentication contexts needed. Having a reduced set limits the number of redirects and provides a better end-to-end user experience. - **Description** provides more information about the policies; it's used by Azure AD administrators and those applying authentication contexts to resources.
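
Authentication contexts can also be created programmatically; a minimal Microsoft Graph PowerShell sketch (Microsoft.Graph.Identity.SignIns module), where the ID and strings are placeholders:

```powershell
# Sketch: publish an authentication context class reference.
# Valid IDs are c1 through c25; all values below are placeholders.
Connect-MgGraph -Scopes "Policy.ReadWrite.ConditionalAccess"

New-MgIdentityConditionalAccessAuthenticationContextClassReference `
    -Id "c1" `
    -DisplayName "Trusted devices" `
    -Description "Require a trusted device for sensitive SharePoint sites" `
    -IsAvailable
```
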
Create new authentication context definitions by selecting **New authentication
Administrators can select published authentication contexts in their Conditional Access policies under **Assignments** > **Cloud apps or actions** and selecting **Authentication context** from the **Select what this policy applies to** menu. #### Delete an authentication context
active-directory Concept Conditional Access Conditions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-conditional-access-conditions.md
This setting has an effect on access attempts made from the following mobile app
| Outlook mobile app | Exchange Online | Android, iOS | | Power BI app | Power BI service | Windows 10, Windows 8.1, Windows 7, Android, and iOS | | Skype for Business | Exchange Online| Android, iOS |
-| Visual Studio Team Services app | Visual Studio Team Services | Windows 10, Windows 8.1, Windows 7, iOS, and Android |
+| Azure DevOps Services (formerly Visual Studio Team Services, or VSTS) app | Azure DevOps Services (formerly Visual Studio Team Services, or VSTS) | Windows 10, Windows 8.1, Windows 7, iOS, and Android |
### Exchange ActiveSync clients
active-directory Concept Conditional Access Policy Common https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-conditional-access-policy-common.md
Policies in this category provide new ways to protect against compromise.
-Find these templates in the **[Microsoft Entra admin center](https://entra.microsoft.com)** > **Microsoft Entra ID (Azure AD)** > **Protection** > **Conditional Access** > **Create new policy from templates**. Select **Show more** to see all policy templates in each category.
+Find these templates in the [Microsoft Entra admin center](https://entra.microsoft.com) > **Protection** > **Conditional Access** > **Create new policy from templates**. Select **Show more** to see all policy templates in each category.
:::image type="content" source="media/concept-conditional-access-policy-common/create-policy-from-template-identity.png" alt-text="Screenshot that shows how to create a Conditional Access policy from a preconfigured template in the Microsoft Entra admin center." lightbox="media/concept-conditional-access-policy-common/create-policy-from-template-identity.png"::: > [!IMPORTANT]
-> Conditional Access template policies will exclude only the user creating the policy from the template. If your organization needs to [exclude other accounts](../roles/security-emergency-access.md), you will be able to modify the policy once they are created. Simply navigate to **Microsoft Entra admin center** > **Microsoft Entra ID (Azure AD)** > **Protection** > **Conditional Access** > **Policies**, select the policy to open the editor and modify the excluded users and groups to select accounts you want to exclude.
+> Conditional Access template policies will exclude only the user creating the policy from the template. If your organization needs to [exclude other accounts](../roles/security-emergency-access.md), you will be able to modify the policy once they are created. You can find these policies in the [Microsoft Entra admin center](https://entra.microsoft.com) > **Protection** > **Conditional Access** > **Policies**. Select a policy to open the editor and modify the excluded users and groups to select accounts you want to exclude.
By default, each policy is created in [report-only mode](concept-conditional-access-report-only.md). We recommend organizations test and monitor usage to ensure the intended result before turning on each policy.
active-directory Concept Conditional Access Session https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-conditional-access-session.md
For more information, see the article [Configure authentication session manageme
- **Disable** only works when **All cloud apps** are selected, no conditions are selected, and **Disable** is selected under **Session** > **Customize continuous access evaluation** in a Conditional Access policy. You can choose to disable all users or specific users and groups. ## Disable resilience defaults
active-directory Concept Conditional Access Users Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-conditional-access-users-groups.md
By default the policy provides an option to exclude the current user from the po
![Warning, don't lock yourself out!](./media/concept-conditional-access-users-groups/conditional-access-users-and-groups-lockout-warning.png)
-If you do find yourself locked out, see [What to do if you're locked out of the Azure portal?](troubleshoot-conditional-access.md#what-to-do-if-youre-locked-out-of-the-azure-portal)
+If you do find yourself locked out, see [What to do if you're locked out?](troubleshoot-conditional-access.md#what-to-do-if-youre-locked-out)
### External partner access
active-directory Concept Continuous Access Evaluation Strict Enforcement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-continuous-access-evaluation-strict-enforcement.md
Repeat steps 2 and 3 with expanding groups of users until Strictly Enforce Locat
Administrators can investigate the Sign-in logs to find cases with **IP address (seen by resource)**.
-1. Sign in to the **Azure portal** as at least a Global Reader.
-1. Browse to **Azure Active Directory** > **Sign-ins**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Global Reader](../roles/permissions-reference.md#global-reader).
+1. Browse to **Identity** > **Monitoring & health** > **Sign-in logs**.
1. Find events to review by adding filters and columns to filter out unnecessary information. 1. Add the **IP address (seen by resource)** column and filter out any blank items to narrow the scope. The **IP address (seen by resource)** is blank when the IP address seen by Azure AD matches the IP address seen by the resource.
active-directory Concept Continuous Access Evaluation Workload https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-continuous-access-evaluation-workload.md
Last updated 07/22/2022
-+
-# Continuous access evaluation for workload identities (preview)
+# Continuous access evaluation for workload identities
Continuous access evaluation (CAE) for [workload identities](../workload-identities/workload-identities-overview.md) provides security benefits to your organization. It enables real-time enforcement of Conditional Access location and risk policies along with instant enforcement of token revocation events for workload identities.
Continuous access evaluation doesn't currently support managed identities.
## Scope of preview
-The continuous access evaluation for workload identities public preview scope includes support for Microsoft Graph as a resource provider.
+Continuous access evaluation for workload identities is supported only for access requests sent to Microsoft Graph as a resource provider. More resource providers will be added over time.
-The preview targets service principals for line of business (LOB) applications.
+Service principals for line of business (LOB) applications are supported.
We support the following revocation events:
When a client's access to a resource is blocked due to CAE being triggered, th
The following steps detail how an admin can verify sign in activity in the sign-in logs:
-1. Sign into the Azure portal as a Conditional Access Administrator, Security Administrator, or Global Administrator.
-1. Browse to **Azure Active Directory** > **Sign-in logs** > **Service Principal Sign-ins**. You can use filters to ease the debugging process.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
+1. Browse to **Identity** > **Monitoring & health** > **Sign-in logs** > **Service Principal Sign-ins**. You can use filters to ease the debugging process.
1. Select an entry to see activity details. The **Continuous access evaluation** field indicates whether a CAE token was issued in a particular sign-in attempt. ## Next steps
The following steps detail how an admin can verify sign in activity in the sign-
- [Register an application with Azure AD and create a service principal](../develop/howto-create-service-principal-portal.md#register-an-application-with-azure-ad-and-create-a-service-principal) - [How to use Continuous Access Evaluation enabled APIs in your applications](../develop/app-resilience-continuous-access-evaluation.md) - [Sample application using continuous access evaluation](https://github.com/Azure-Samples/ms-identity-dotnetcore-daemon-graph-cae)
+- [Securing workload identities with Azure AD Identity Protection](../identity-protection/concept-workload-identity-risk.md)
- [What is continuous access evaluation?](../conditional-access/concept-continuous-access-evaluation.md)
active-directory Concept Continuous Access Evaluation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-continuous-access-evaluation.md
The CAE setting has moved under the Conditional Access blade. New CAE cu
#### Migration
-Customers who have configured CAE settings under Security before have to migrate settings to a new Conditional Access policy. Use the steps that follow to migrate your CAE settings to a Conditional Access policy.
--
-1. Sign in to the **Azure portal** as a Conditional Access Administrator, Security Administrator, or Global Administrator.
-1. Browse to **Azure Active Directory** > **Security** > **Continuous access evaluation**.
-1. You have the option to **Migrate** your policy. This action is the only one that you have access to at this point.
-1. Browse to **Conditional Access** and you find a new policy named **Conditional Access policy created from CAE settings** with your settings configured. Administrators can choose to customize this policy or create their own to replace it.
+Customers who previously configured CAE settings under Security must migrate those settings to a new Conditional Access policy.
The following table describes the migration experience of each customer group based on previously configured CAE settings.
Changes made to Conditional Access policies and group membership made by adminis
When Conditional Access policy or group membership changes need to be applied to certain users immediately, you have two options. - Run the [Revoke-MgUserSignInSession PowerShell command](/powershell/module/microsoft.graph.users.actions/revoke-mgusersigninsession) to revoke all refresh tokens of a specified user.-- Select "Revoke Session" on the user profile page in the Azure portal to revoke the user's session to ensure that the updated policies are applied immediately.
+- Select "Revoke Session" on the user profile page to revoke the user's session to ensure that the updated policies are applied immediately.
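As a sketch of the first option, the linked cmdlet can be called from the Microsoft Graph PowerShell SDK. The UPN and the `User.RevokeSessions.All` scope shown here are assumptions to verify against your tenant:

```powershell
# Revoke all refresh tokens for one user so updated Conditional Access
# policies and group membership changes take effect at the next sign-in.
Connect-MgGraph -Scopes "User.RevokeSessions.All"
Revoke-MgUserSignInSession -UserId "user@contoso.com"   # UPN or object ID
```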
### IP address variation and networks with IP address shared or unknown egress IPs
active-directory Concept Filter For Applications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-filter-for-applications.md
Application filters are a new feature for Conditional Access that allows organiz
In this document, you create a custom attribute set, assign a custom security attribute to your application, and create a Conditional Access policy to secure the application. > [!IMPORTANT]
-> Filter for applications is currently in public preview. For more information about previews, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+> Filter for applications is currently in public preview. For more information about previews, see [Universal License Terms For Online Services](https://www.microsoft.com/licensing/terms/product/ForOnlineServices/all).
## Assign roles
Custom security attributes are security sensitive and can only be managed by del
1. Assign the appropriate role to the users who will manage or report on these attributes at the directory scope.
- For detailed steps, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
+ For detailed steps, see [Assign Azure roles](../../role-based-access-control/role-assignments-portal.md).
## Create custom security attributes
Follow the instructions in the article, [Add or deactivate custom security attri
:::image type="content" source="media/concept-filter-for-applications/edit-filter-for-applications.png" alt-text="A screenshot showing a Conditional Access policy with the edit filter window showing an attribute of require MFA." lightbox="media/concept-filter-for-applications/edit-filter-for-applications.png":::
-1. Sign in to the **Azure portal** as a Conditional Access Administrator, Security Administrator, or Global Administrator.
-1. Browse to **Azure Active Directory** > **Security** > **Conditional Access**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
+1. Browse to **Protection** > **Conditional Access**.
1. Select **New policy**. 1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies. 1. Under **Assignments**, select **Users or workload identities**.
Set up a sample application that demonstrates how a job or a Windows service ca
When you don't have a service principal listed in your tenant, it can't be targeted. The Office 365 suite is an example of one such service principal.
-1. Sign in to the **Azure portal** as a Conditional Access Administrator, Security Administrator, or Global Administrator.
-1. Browse to **Azure Active Directory** > **Enterprise applications**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
+1. Browse to **Identity** > **Applications** > **Enterprise applications**.
1. Select the service principal you want to apply a custom security attribute to. 1. Under **Manage** > **Custom security attributes (preview)**, select **Add assignment**. 1. Under **Attribute set**, select **ConditionalAccessTest**.
active-directory Concept Token Protection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-token-protection.md
Token protection (sometimes referred to as token binding in the industry) attemp
Token protection creates a cryptographically secure tie between the token and the device (client secret) it's issued to. Without the client secret, the bound token is useless. When a user registers a Windows 10 or newer device in Azure AD, their primary identity is [bound to the device](../devices/concept-primary-refresh-token.md#how-is-the-prt-protected). This means a policy can ensure that only bound sign-in session (or refresh) tokens, otherwise known as Primary Refresh Tokens (PRTs), are used by applications when requesting access to a resource. > [!IMPORTANT]
-> Token protection is currently in public preview. For more information about previews, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-
+> Token protection is currently in public preview. For more information about previews, see [Universal License Terms For Online Services](https://www.microsoft.com/licensing/terms/product/ForOnlineServices/all).
With this preview, we're giving you the ability to create a Conditional Access policy to require token protection for sign-in tokens (refresh tokens) for specific services. We support token protection for sign-in tokens in Conditional Access for desktop applications accessing Exchange Online and SharePoint Online on Windows devices. > [!IMPORTANT]
Users who perform specialized roles like those described in [Privileged access s
The steps that follow help create a Conditional Access policy to require token protection for Exchange Online and SharePoint Online on Windows devices.
-1. Sign in to the **Azure portal** as a Conditional Access Administrator, Security Administrator, or Global Administrator.
-1. Browse to **Azure Active Directory** > **Security** > **Conditional Access**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
+1. Browse to **Protection** > **Conditional Access**.
1. Select **New policy**. 1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies. 1. Under **Assignments**, select **Users or workload identities**.
Monitoring Conditional Access enforcement of token protection before and after e
Use the Azure AD sign-in log to verify the outcome of a token protection enforcement policy in report-only mode or in enabled mode.
-1. Sign in to the **Azure portal** as a Conditional Access Administrator, Security Administrator, or Global Administrator.
-1. Browse to **Azure Active Directory** > **Sign-in logs**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
+1. Browse to **Identity** > **Monitoring & health** > **Sign-in logs**.
1. Select a specific request to determine if the policy is applied or not. 1. Go to the **Conditional Access** or **Report-Only** pane depending on its state and select the name of your policy requiring token protection. 1. Under **Session Controls** check to see if the policy requirements were satisfied or not.
active-directory How To App Protection Policy Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/how-to-app-protection-policy-windows.md
Previously updated : 07/14/2023 Last updated : 09/05/2023
App protection policies apply mobile application management (MAM) to specific ap
## Prerequisites
-Customers interested in the public preview will need to opt-in using the [MAM for Windows Public Preview Sign Up Form](https://aka.ms/MAMforWindowsPublic).
+Customers interested in the public preview need to opt in using the [MAM for Windows Public Preview Sign Up Form](https://aka.ms/MAMforWindowsPublic).
## User exclusions [!INCLUDE [active-directory-policy-exclusions](../../../includes/active-directory-policy-exclude-user.md)]
The following policy is put into [Report-only mode](howto-conditional-access-in
### Require app protection policy for Windows devices
-The following steps help create a Conditional Access policy requiring an app protection policy when using a Windows device. The app protection policy must also be configured and assigned to your users in Microsoft Intune. For more information about how to create the app protection policy, see the article [Preview: App protection policy settings for Windows](/mem/intune/apps/app-protection-policy-settings-windows).
+The following steps help create a Conditional Access policy requiring an app protection policy when using a Windows device. The app protection policy must also be configured and assigned to your users in Microsoft Intune. For more information about how to create the app protection policy, see the article [Preview: App protection policy settings for Windows](/mem/intune/apps/app-protection-policy-settings-windows). The following policy includes multiple controls allowing devices to either use app protection policies for mobile application management (MAM) or be managed and compliant with mobile device management (MDM) policies.
-1. Sign in to the **[Microsoft Entra admin center](https://entra.microsoft.com)** as a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
-1. Browse to **Microsoft Entra ID (Azure AD)** > **Protection** > **Conditional Access**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
+1. Browse to **Protection** > **Conditional Access**.
1. Select **Create new policy**. 1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies. 1. Under **Assignments**, select **Users or workload identities**.
The following steps help create a Conditional Access policy requiring an app pro
1. **Client apps**, set **Configure** to **Yes**. 1. Select **Browser** only. 1. Under **Access controls** > **Grant**, select **Grant access**.
- 1. Select **Require app protection policy**
+ 1. Select **Require app protection policy** and **Require device to be marked as compliant**.
1. **For multiple controls** select **Require one of the selected controls** 1. Confirm your settings and set **Enable policy** to **Report-only**. 1. Select **Create** to enable your policy.
-After confirming your settings using [report-only mode](howto-conditional-access-insights-reporting.md), an administrator can move the **Enable policy** toggle from **Report-only** to **On**.
+After administrators confirm the settings using [report-only mode](howto-conditional-access-insights-reporting.md), they can move the **Enable policy** toggle from **Report-only** to **On**.
+
+> [!TIP]
+> Organizations should also deploy a policy that [blocks access from unsupported or unknown device platforms](howto-policy-unknown-unsupported-device.md) along with this policy.
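Scripted, the grant configuration in the preceding steps maps to an `OR` of two built-in controls. A minimal sketch assuming the Microsoft Graph PowerShell SDK; the display name is illustrative:

```powershell
# Sketch of the policy above: Windows + browser only, granting access with
# EITHER an app protection policy (MAM) OR a compliant device (MDM).
$params = @{
    displayName = "Example - MAM or compliant device on Windows"   # illustrative
    state       = "enabledForReportingButNotEnforced"
    conditions  = @{
        users          = @{ includeUsers = @("All") }
        applications   = @{ includeApplications = @("All") }
        platforms      = @{ includePlatforms = @("windows") }
        clientAppTypes = @("browser")
    }
    grantControls = @{
        operator        = "OR"   # "Require one of the selected controls"
        builtInControls = @("compliantApplication", "compliantDevice")
    }
}
New-MgIdentityConditionalAccessPolicy -BodyParameter $params
```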
## Sign in to Windows devices
-When users attempt to sign in to a site that is protected by an app protection policy for the first time, they're prompted: To access your service, app or website, you may need to sign in to Microsoft Edge using `username@domain.com` or register your device with `organization` if you are already signed in.
+When users attempt to sign in to a site that is protected by an app protection policy for the first time, they're prompted: To access your service, app or website, you may need to sign in to Microsoft Edge using `username@domain.com` or register your device with `organization` if you're already signed in.
Clicking on **Switch Edge profile** opens a window listing their Work or school account along with an option to **Sign in to sync data**.
Clicking on **Switch Edge profile** opens a window listing their Work or school
This process opens a window offering to allow Windows to remember your account and automatically sign you in to your apps and websites. > [!CAUTION]
-> You must *CLEAR THE CHECKBOX* **Allow my organization to manage my device**. Leaving this checked enrolls your device in mobile device maangment (MDM) not mobile application management (MAM).
+> You must *CLEAR THE CHECKBOX* **Allow my organization to manage my device**. Leaving this checked enrolls your device in mobile device management (MDM), not mobile application management (MAM).
+>
+> Don't select **No, sign in to this app only**.
![Screenshot showing the stay signed in to all your apps window. Uncheck the allow my organization to manage my device checkbox.](./media/how-to-app-protection-policy-windows/stay-signed-in-to-all-your-apps.png)
-After selecting **OK** you may see a progress window while policy is applied. After a few moments you should see a window saying "you're all set", app protection policies are applied.
+After selecting **OK**, you may see a progress window while policy is applied. After a few moments, you should see a window saying "you're all set," confirming that app protection policies are applied.
## Troubleshooting
To resolve these possible scenarios:
### Existing account
-If there's a pre-existing, unregistered account, like `user@contoso.com` in Microsoft Edge, or if a user signs in without registering using the Heads Up Page, then the account isn't properly enrolled in MAM. This configuration blocks the user from being properly enrolled in MAM. This is a known issue that is currently being worked on.
+If there's a pre-existing, unregistered account, like `user@contoso.com` in Microsoft Edge, or if a user signs in without registering using the Heads Up Page, then the account isn't properly enrolled in MAM and the user is blocked from proper MAM enrollment. This is a known issue.
## Next steps
active-directory How To Policy Mfa Admin Portals https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/how-to-policy-mfa-admin-portals.md
Microsoft recommends securing access to any Microsoft admin portals like Microso
## Create a Conditional Access policy
-1. Sign in to the **[Microsoft Entra admin center](https://entra.microsoft.com)** as a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
-1. Browse to **Microsoft Entra ID (Azure AD)** > **Protection** > **Conditional Access**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
+1. Browse to **Protection** > **Conditional Access**.
1. Select **Create new policy**. 1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies. 1. Under **Assignments**, select **Users or workload identities**.
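The remaining steps continue in the article itself. As a sketch, the finished policy targets the service-defined Microsoft Admin Portals app through the Microsoft Graph PowerShell SDK; the `MicrosoftAdminPortals` identifier and display name are assumptions to confirm in your tenant:

```powershell
# Sketch: require MFA when any user signs in to Microsoft admin portals.
$params = @{
    displayName = "Example - require MFA for Microsoft admin portals"   # illustrative
    state       = "enabledForReportingButNotEnforced"
    conditions  = @{
        users          = @{ includeUsers = @("All") }
        # Assumed service-defined identifier for the Microsoft Admin Portals
        # cloud app - confirm it in your tenant before relying on it.
        applications   = @{ includeApplications = @("MicrosoftAdminPortals") }
        clientAppTypes = @("all")
    }
    grantControls = @{ operator = "OR"; builtInControls = @("mfa") }
}
New-MgIdentityConditionalAccessPolicy -BodyParameter $params
```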
active-directory How To Policy Phish Resistant Admin Mfa https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/how-to-policy-phish-resistant-admin-mfa.md
Organizations can choose to include or exclude roles as they see fit.
## Create a Conditional Access policy
-1. Sign in to the **[Microsoft Entra admin center](https://entra.microsoft.com)** as a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
-1. Browse to **Microsoft Entra ID (Azure AD)** > **Protection** > **Conditional Access**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
+1. Browse to **Protection** > **Conditional Access**.
1. Select **Create new policy**. 1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies. 1. Under **Assignments**, select **Users or workload identities**.
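A scripted sketch of the finished policy pairs a role-scoped assignment with an authentication strength grant. Both IDs below are assumptions: the role template ID is the well-known Global Administrator value, and the strength ID is the commonly cited built-in "Phishing-resistant MFA" value; list your tenant's authentication strengths to confirm before use:

```powershell
# Sketch: require the built-in "Phishing-resistant MFA" authentication
# strength for users holding the Global Administrator role.
$params = @{
    displayName = "Example - phishing-resistant MFA for admins"   # illustrative
    state       = "enabledForReportingButNotEnforced"
    conditions  = @{
        users = @{
            # Global Administrator role template ID; include the other admin
            # roles your organization considers privileged.
            includeRoles = @("62e90394-69f5-4237-9190-012177145e10")
        }
        applications   = @{ includeApplications = @("All") }
        clientAppTypes = @("all")
    }
    grantControls = @{
        operator = "OR"
        # Assumed well-known ID of the built-in "Phishing-resistant MFA"
        # strength - verify against your tenant's authentication strengths.
        authenticationStrength = @{ id = "00000000-0000-0000-0000-000000000004" }
    }
}
New-MgIdentityConditionalAccessPolicy -BodyParameter $params
```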
active-directory Howto Conditional Access Apis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-conditional-access-apis.md
description: Using the Azure AD Conditional Access APIs and PowerShell to manage
+ Last updated 09/10/2020
active-directory Howto Conditional Access Insights Reporting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-conditional-access-insights-reporting.md
If you haven't integrated Azure AD logs with Azure Monitor logs, you need to tak
To access the insights and reporting workbook:
-1. Sign in to the **Azure portal**.
-1. Browse to **Azure Active Directory** > **Security** > **Conditional Access** > **Insights and reporting**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
+1. Browse to **Protection** > **Conditional Access** > **Insights and reporting**.
### Get started: Select parameters
You can also investigate the sign-ins of a specific user by searching for sign-i
To configure a Conditional Access policy in report-only mode:
-1. Sign into the **Azure portal** as a Conditional Access Administrator, security administrator, or Global Administrator.
-1. Browse to **Azure Active Directory** > **Security** > **Conditional Access**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
+1. Browse to **Protection** > **Conditional Access**.
1. Select an existing policy or create a new policy. 1. Under **Enable policy** set the toggle to **Report-only** mode. 1. Select **Save**
To configure a Conditional Access policy in report-only mode:
### Why are queries failing due to a permissions error?
-In order to access the workbook, you need the proper Azure AD permissions and Log Analytics workspace permissions. To test whether you have the proper workspace permissions by running a sample log analytics query:
+In order to access the workbook, you need the proper permissions in Azure AD and Log Analytics. To test whether you have the proper workspace permissions, run a sample Log Analytics query:
-1. Sign in to the **Azure portal**.
-1. Browse to **Azure Active Directory** > **Log Analytics**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
+1. Browse to **Identity** > **Monitoring & health** > **Log Analytics**.
1. Type `SigninLogs` into the query box and select **Run**. 1. If the query doesn't return any results, your workspace may not have been configured correctly.
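If you'd rather script the check, here's a sketch using Az PowerShell (an assumption; any Log Analytics query client works). The workspace ID is a placeholder:

```powershell
# Sketch: run the same SigninLogs test from PowerShell, assuming the
# Az.OperationalInsights module and read access to the workspace.
Connect-AzAccount
$workspaceId = "<log-analytics-workspace-id>"    # placeholder
$result = Invoke-AzOperationalInsightsQuery -WorkspaceId $workspaceId `
    -Query "SigninLogs | take 10"
$result.Results    # no rows suggests the workspace isn't configured correctly
```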
active-directory Howto Conditional Access Policy Admin Mfa https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-conditional-access-policy-admin-mfa.md
Organizations can choose to include or exclude roles as they see fit.
The following steps will help create a Conditional Access policy to require those assigned administrative roles to perform multifactor authentication. Some organizations may be ready to move to stronger authentication methods for their administrators. These organizations may choose to implement a policy like the one described in the article [Require phishing-resistant multifactor authentication for administrators](how-to-policy-phish-resistant-admin-mfa.md).
-1. Sign in to the **[Microsoft Entra admin center](https://entra.microsoft.com)** as a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
-1. Browse to **Microsoft Entra ID (Azure AD)** > **Protection** > **Conditional Access**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
+1. Browse to **Protection** > **Conditional Access**.
1. Select **Create new policy**. 1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies. 1. Under **Assignments**, select **Users or workload identities**.
active-directory Howto Conditional Access Policy All Users Mfa https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-conditional-access-policy-all-users-mfa.md
Organizations that use [Subscription Activation](/windows/deployment/windows-10-
The following steps help create a Conditional Access policy to require all users do multifactor authentication.
-1. Sign in to the **[Microsoft Entra admin center](https://entra.microsoft.com)** as a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
-1. Browse to **Microsoft Entra ID (Azure AD)** > **Protection** > **Conditional Access**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
+1. Browse to **Protection** > **Conditional Access**.
1. Select **Create new policy**. 1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies. 1. Under **Assignments**, select **Users or workload identities**.
active-directory Howto Conditional Access Policy Authentication Strength External https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-conditional-access-policy-authentication-strength-external.md
The authentication methods that external users can use to satisfy MFA requiremen
Determine if one of the built-in authentication strengths will work for your scenario or if you'll need to create a custom authentication strength.
-1. Sign in to the **Azure portal** as a global administrator, security administrator, or Conditional Access administrator.
-1. Browse to **Azure Active Directory** > **Security** > **Authentication methods** > **Authentication strengths**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
+1. Browse to **Protection** > **Authentication methods** > **Authentication strengths**.
1. Review the built-in authentication strengths to see if one of them meets your requirements. 1. If you want to enforce a different set of authentication methods, [create a custom authentication strength](https://aka.ms/b2b-auth-strengths).
Determine if one of the built-in authentication strengths will work for your sce
Use the following steps to create a Conditional Access policy that applies an authentication strength to external users.
-1. Sign in to the **[Microsoft Entra admin center](https://entra.microsoft.com)** as a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
-1. Browse to **Microsoft Entra ID (Azure AD)** > **Protection** > **Conditional Access**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
+1. Browse to **Protection** > **Conditional Access**.
1. Select **Create new policy**. 1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies. 1. Under **Assignments**, select **Users or workload identities**.
active-directory Howto Conditional Access Policy Azure Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-conditional-access-policy-azure-management.md
The following steps will help create a Conditional Access policy to require user
> [!CAUTION] > Make sure you understand how Conditional Access works before setting up a policy to manage access to Microsoft Azure Management. Make sure you don't create conditions that could block your own access to the portal.
-1. Sign in to the **[Microsoft Entra admin center](https://entra.microsoft.com)** as a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
-1. Browse to **Microsoft Entra ID (Azure AD)** > **Protection** > **Conditional Access**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
+1. Browse to **Protection** > **Conditional Access**.
1. Select **Create new policy**. 1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies. 1. Under **Assignments**, select **Users or workload identities**.
active-directory Howto Conditional Access Policy Block Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-conditional-access-policy-block-access.md
For organizations with a conservative cloud migration approach, the block all policy is an option that can be used. > [!CAUTION]
-> Misconfiguration of a block policy can lead to organizations being locked out of the Azure portal.
+> Misconfiguration of a block policy can lead to organizations being locked out.
Policies like these can have unintended side effects. Proper testing and validation are vital before enabling. Administrators should utilize tools such as [Conditional Access report-only mode](concept-conditional-access-report-only.md) and [the What If tool in Conditional Access](what-if-tool.md) when making changes.
The following steps will help create Conditional Access policies to block access
The first policy blocks access to all apps except for Microsoft 365 applications if not on a trusted location.
-1. Sign in to the **[Microsoft Entra admin center](https://entra.microsoft.com)** as a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
-1. Browse to **Microsoft Entra ID (Azure AD)** > **Protection** > **Conditional Access**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
+1. Browse to **Protection** > **Conditional Access**.
1. Select **Create new policy**. 1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies. 1. Under **Assignments**, select **Users or workload identities**.
active-directory Howto Conditional Access Policy Block Legacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-conditional-access-policy-block-legacy.md
Organizations can choose to deploy this policy using the steps outlined below or
The following steps will help create a Conditional Access policy to block legacy authentication requests. This policy is put into [Report-only mode](howto-conditional-access-insights-reporting.md) to start so administrators can determine the impact it will have on existing users. When administrators are comfortable that the policy applies as they intend, they can switch to **On** or stage the deployment by adding specific groups and excluding others.
-1. Sign in to the **[Microsoft Entra admin center](https://entra.microsoft.com)** as a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
-1. Browse to **Microsoft Entra ID (Azure AD)** > **Protection** > **Conditional Access**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
+1. Browse to **Protection** > **Conditional Access**.
1. Select **Create new policy**. 1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies. 1. Under **Assignments**, select **Users or workload identities**.
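As a scripted sketch of the same policy, the legacy client surface maps to the `clientAppTypes` condition; the display name is illustrative:

```powershell
# Sketch: block legacy authentication clients (Exchange ActiveSync and
# "other clients" such as POP, IMAP, and SMTP), starting in report-only mode.
$params = @{
    displayName = "Example - block legacy authentication"   # illustrative
    state       = "enabledForReportingButNotEnforced"
    conditions  = @{
        users          = @{ includeUsers = @("All") }
        applications   = @{ includeApplications = @("All") }
        clientAppTypes = @("exchangeActiveSync", "other")
    }
    grantControls = @{ operator = "OR"; builtInControls = @("block") }
}
New-MgIdentityConditionalAccessPolicy -BodyParameter $params
```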
active-directory Howto Conditional Access Policy Compliant Device Admin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-conditional-access-policy-compliant-device-admin.md
Organizations can choose to include or exclude roles as they see fit.
The following steps will help create a Conditional Access policy to require multifactor authentication, devices accessing resources be marked as compliant with your organization's Intune compliance policies, or be hybrid Azure AD joined.
-1. Sign in to the **[Microsoft Entra admin center](https://entra.microsoft.com)** as a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
-1. Browse to **Microsoft Entra ID (Azure AD)** > **Protection** > **Conditional Access**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
+1. Browse to **Protection** > **Conditional Access**.
1. Select **Create new policy**. 1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies. 1. Under **Assignments**, select **Users or workload identities**.
active-directory Howto Conditional Access Policy Compliant Device https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-conditional-access-policy-compliant-device.md
Requiring a hybrid Azure AD joined device is dependent on your devices already b
The following steps will help create a Conditional Access policy to require multifactor authentication, devices accessing resources be marked as compliant with your organization's Intune compliance policies, or be hybrid Azure AD joined.
-1. Sign in to the **[Microsoft Entra admin center](https://entra.microsoft.com)** as a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
-1. Browse to **Microsoft Entra ID (Azure AD)** > **Protection** > **Conditional Access**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
+1. Browse to **Protection** > **Conditional Access**.
1. Select **Create new policy**. 1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies. 1. Under **Assignments**, select **Users or workload identities**.
active-directory Howto Conditional Access Policy Location https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-conditional-access-policy-location.md
With the location condition in Conditional Access, you can control access to you
## Define locations
-1. Sign in to the **[Microsoft Entra admin center](https://entra.microsoft.com)** as a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
-1. Browse to **Microsoft Entra ID (Azure AD)** > **Protection** > **Conditional Access** > **Named locations**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
+1. Browse to **Protection** > **Conditional Access** > **Named locations**.
1. Choose the type of location to create. 1. **Countries location** or **IP ranges location**. 1. Give your location a name.
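For automation, named locations can also be created through the Microsoft Graph PowerShell SDK. A sketch with placeholder values (the CIDR is a documentation range; substitute your organization's egress ranges):

```powershell
# Sketch: define a trusted IPv4 named location.
Connect-MgGraph -Scopes "Policy.ReadWrite.ConditionalAccess"

$params = @{
    "@odata.type" = "#microsoft.graph.ipNamedLocation"
    displayName   = "Example - headquarters network"   # illustrative
    isTrusted     = $true
    ipRanges      = @(
        @{
            "@odata.type" = "#microsoft.graph.iPv4CidrRange"
            cidrAddress   = "203.0.113.0/24"   # placeholder documentation range
        }
    )
}
New-MgIdentityConditionalAccessNamedLocation -BodyParameter $params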
More information about the location condition in Conditional Access can be found
## Create a Conditional Access policy
-1. Sign in to the **[Microsoft Entra admin center](https://entra.microsoft.com)** as a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
-1. Browse to **Microsoft Entra ID (Azure AD)** > **Protection** > **Conditional Access**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
+1. Browse to **Protection** > **Conditional Access**.
1. Select **Create new policy**. 1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies. 1. Under **Assignments**, select **Users or workload identities**.
active-directory Howto Conditional Access Policy Registration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-conditional-access-policy-registration.md
Organizations can choose to deploy this policy using the steps outlined below or
The following policy applies to the selected users when they attempt to register using the combined registration experience. The policy requires users to be in a trusted network location, perform multifactor authentication, or use Temporary Access Pass credentials.
-1. Sign in to the **[Microsoft Entra admin center](https://entra.microsoft.com)** as a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
-1. Browse to **Microsoft Entra ID (Azure AD)** > **Protection** > **Conditional Access**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
+1. Browse to **Protection** > **Conditional Access**.
1. Select **Create new policy**. 1. In **Name**, enter a name for this policy. For example, **Combined Security Info Registration with TAP**. 1. Under **Assignments**, select **Users or workload identities**.
Organizations may choose to require other grant controls with or in place of **R
For [guest users](../external-identities/what-is-b2b.md) who need to register for multifactor authentication in your directory you may choose to block registration from outside of [trusted network locations](concept-conditional-access-conditions.md#locations) using the following guide.
-1. Sign in to the **[Microsoft Entra admin center](https://entra.microsoft.com)** as a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
-1. Browse to **Microsoft Entra ID (Azure AD)** > **Protection** > **Conditional Access**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
+1. Browse to **Protection** > **Conditional Access**.
1. Select **Create new policy**. 1. In **Name**, enter a name for this policy. For example, **Combined Security Info Registration on Trusted Networks**. 1. Under **Assignments**, select **Users or workload identities**.
active-directory Howto Conditional Access Policy Risk User https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-conditional-access-policy-risk-user.md
Organizations can choose to deploy this policy using the steps outlined below or
## Enable with Conditional Access policy
-1. Sign in to the **[Microsoft Entra admin center](https://entra.microsoft.com)** as a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
-1. Browse to **Microsoft Entra ID (Azure AD)** > **Protection** > **Conditional Access**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
+1. Browse to **Protection** > **Conditional Access**.
1. Select **Create new policy**. 1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies. 1. Under **Assignments**, select **Users or workload identities**.
active-directory Howto Conditional Access Policy Risk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-conditional-access-policy-risk.md
Organizations can choose to deploy this policy using the steps outlined below or
## Enable with Conditional Access policy
-1. Sign in to the **[Microsoft Entra admin center](https://entra.microsoft.com)** as a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
-1. Browse to **Microsoft Entra ID (Azure AD)** > **Protection** > **Conditional Access**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
+1. Browse to **Protection** > **Conditional Access**.
1. Select **Create new policy**. 1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies. 1. Under **Assignments**, select **Users or workload identities**.
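A sketch of the sign-in risk variant via the Microsoft Graph PowerShell SDK; the grant control and display name are illustrative choices, not the only supported configuration:

```powershell
# Sketch: require MFA when Identity Protection rates the sign-in risk
# as medium or high.
$params = @{
    displayName = "Example - require MFA for elevated sign-in risk"   # illustrative
    state       = "enabledForReportingButNotEnforced"
    conditions  = @{
        users            = @{ includeUsers = @("All") }
        applications     = @{ includeApplications = @("All") }
        clientAppTypes   = @("all")
        signInRiskLevels = @("high", "medium")
    }
    grantControls = @{ operator = "OR"; builtInControls = @("mfa") }
}
New-MgIdentityConditionalAccessPolicy -BodyParameter $params
```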
active-directory Howto Conditional Access Session Lifetime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-conditional-access-session-lifetime.md
description: Customize Azure AD authentication session configuration including u
+ Last updated 07/18/2023
To make sure that your policy works as expected, the recommended best practice i
### Policy 1: Sign-in frequency control
-1. Sign in to the **[Microsoft Entra admin center](https://entra.microsoft.com)** as a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
-1. Browse to **Microsoft Entra ID (Azure AD)** > **Protection** > **Conditional Access**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
+1. Browse to **Protection** > **Conditional Access**.
1. Select **Create new policy**. 1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies. 1. Choose all required conditions for the customer's environment, including the target cloud apps.
To make sure that your policy works as expected, the recommended best practice i
### Policy 2: Persistent browser session
-1. Sign in to the **[Microsoft Entra admin center](https://entra.microsoft.com)** as a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
-1. Browse to **Microsoft Entra ID (Azure AD)** > **Protection** > **Conditional Access**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
+1. Browse to **Protection** > **Conditional Access**.
1. Select **Create new policy**. 1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies. 1. Choose all required conditions.
To make sure that your policy works as expected, the recommended best practice i
1. Select **Persistent browser session**. > [!NOTE]
- > Persistent Browser Session configuration in Azure AD Conditional Access overrides the "Stay signed in?" setting in the company branding pane in the Azure portal for the same user if you have configured both policies.
+ > Persistent Browser Session configuration in Azure AD Conditional Access overrides the "Stay signed in?" setting in the company branding pane for the same user if you have configured both policies.
1. Select a value from dropdown. 1. Save your policy. ### Policy 3: Sign-in frequency control every time risky user
-1. Sign in to the **[Microsoft Entra admin center](https://entra.microsoft.com)** as a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
-1. Browse to **Microsoft Entra ID (Azure AD)** > **Protection** > **Conditional Access**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
+1. Browse to **Protection** > **Conditional Access**.
1. Select **Create new policy**. 1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies. 1. Under **Assignments**, select **Users or workload identities**.
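Across these policies, the enforcement lives in the policy's session controls. A minimal sketch of the hashtable covering Policies 1 and 2, with illustrative values; pass it as `sessionControls` in the body given to `New-MgIdentityConditionalAccessPolicy`:

```powershell
# Sketch of the session controls above: a 1-hour sign-in frequency
# (Policy 1) and a never-persistent browser session (Policy 2).
$sessionControls = @{
    signInFrequency   = @{ isEnabled = $true; value = 1; type = "hours" }
    persistentBrowser = @{ isEnabled = $true; mode = "never" }
}
```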
active-directory Howto Continuous Access Evaluation Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-continuous-access-evaluation-troubleshoot.md
Administrators can monitor and troubleshoot sign in events where [continuous acc
Administrators can monitor user sign-ins where continuous access evaluation (CAE) is applied. This information is found in the Azure AD sign-in logs:
-1. Sign in to the **Azure portal** as a Conditional Access Administrator, Security Administrator, or Global Administrator.
-1. Browse to **Azure Active Directory** > **Sign-in logs**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
+1. Browse to **Identity** > **Monitoring & health** > **Sign-in logs**.
1. Apply the **Is CAE Token** filter. [ ![Screenshot showing how to add a filter to the Sign-ins log to see where CAE is being applied or not.](./media/howto-continuous-access-evaluation-troubleshoot/sign-ins-log-apply-filter.png) ](./media/howto-continuous-access-evaluation-troubleshoot/sign-ins-log-apply-filter.png#lightbox)
The continuous access evaluation insights workbook allows administrators to view
Log Analytics integration must be completed before workbooks are displayed. For more information about how to stream Azure AD sign-in logs to a Log Analytics workspace, see the article [Integrate Azure AD logs with Azure Monitor logs](../reports-monitoring/howto-integrate-activity-logs-with-log-analytics.md).
-1. Sign in to the **Azure portal** as a Conditional Access Administrator, Security Administrator, or Global Administrator.
-1. Browse to **Azure Active Directory** > **Workbooks**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
+1. Browse to **Identity** > **Monitoring & health** > **Workbooks**.
1. Under **Public Templates**, search for **Continuous access evaluation insights**. The **Continuous access evaluation insights** workbook contains the following table:
Admins can view records filtered by time range and application. Admins can compa
To unblock users, administrators can add specific IP addresses to a trusted named location.
-1. Sign in to the **Azure portal** as a Conditional Access Administrator, Security Administrator, or Global Administrator.
-1. Browse to **Azure Active Directory** > **Security** > **Conditional Access** > **Named locations**. Here you can create or update trusted IP locations.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
+1. Browse to **Protection** > **Conditional Access** > **Named locations**. Here you can create or update trusted IP locations.
> [!NOTE] > Before adding an IP address as a trusted named location, confirm that the IP address does in fact belong to the intended organization.
active-directory Howto Policy App Enforced Restriction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-policy-app-enforced-restriction.md
Block or limit access to SharePoint, OneDrive, and Exchange content from unmanag
## Create a Conditional Access policy
-1. Sign in to the **[Microsoft Entra admin center](https://entra.microsoft.com)** as a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
-1. Browse to **Microsoft Entra ID (Azure AD)** > **Protection** > **Conditional Access**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
+1. Browse to **Protection** > **Conditional Access**.
1. Select **Create new policy**. 1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies. 1. Under **Assignments**, select **Users or workload identities**.
active-directory Howto Policy Approved App Or App Protection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-policy-approved-app-or-app-protection.md
The following steps will help create a Conditional Access policy requiring an ap
Organizations can choose to deploy this policy using the steps outlined below or using the [Conditional Access templates](concept-conditional-access-policy-common.md#conditional-access-templates).
-1. Sign in to the **[Microsoft Entra admin center](https://entra.microsoft.com)** as a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
-1. Browse to **Microsoft Entra ID (Azure AD)** > **Protection** > **Conditional Access**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
+1. Browse to **Protection** > **Conditional Access**.
1. Select **Create new policy**. 1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies. 1. Under **Assignments**, select **Users or workload identities**.
After administrators confirm the settings using [report-only mode](howto-conditi
This policy will block all Exchange ActiveSync clients using basic authentication from connecting to Exchange Online.
-1. Sign in to the **[Microsoft Entra admin center](https://entra.microsoft.com)** as a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
-1. Browse to **Microsoft Entra ID (Azure AD)** > **Protection** > **Conditional Access**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
+1. Browse to **Protection** > **Conditional Access**.
1. Select **Create new policy**. 1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies. 1. Under **Assignments**, select **Users or workload identities**.
active-directory Howto Policy Guest Mfa https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-policy-guest-mfa.md
Require guest users perform multifactor authentication when accessing your organ
## Create a Conditional Access policy
-1. Sign in to the **[Microsoft Entra admin center](https://entra.microsoft.com)** as a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
-1. Browse to **Microsoft Entra ID (Azure AD)** > **Protection** > **Conditional Access**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
+1. Browse to **Protection** > **Conditional Access**.
1. Select **Create new policy**. 1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies. 1. Under **Assignments**, select **Users or workload identities**.
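A scripted sketch of the guest MFA policy; `GuestsOrExternalUsers` is the assumed service-defined selector for guest and external users, so confirm it in your tenant:

```powershell
# Sketch: require MFA for guest and external users.
$params = @{
    displayName = "Example - require MFA for guests"   # illustrative
    state       = "enabledForReportingButNotEnforced"
    conditions  = @{
        users          = @{ includeUsers = @("GuestsOrExternalUsers") }
        applications   = @{ includeApplications = @("All") }
        clientAppTypes = @("all")
    }
    grantControls = @{ operator = "OR"; builtInControls = @("mfa") }
}
New-MgIdentityConditionalAccessPolicy -BodyParameter $params
```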
active-directory Howto Policy Persistent Browser Session https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-policy-persistent-browser-session.md
Protect user access on unmanaged devices by preventing browser sessions from rem
## Create a Conditional Access policy
-1. Sign in to the **[Microsoft Entra admin center](https://entra.microsoft.com)** as a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
-1. Browse to **Microsoft Entra ID (Azure AD)** > **Protection** > **Conditional Access**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
+1. Browse to **Protection** > **Conditional Access**.
1. Select **Create new policy**. 1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies. 1. Under **Assignments**, select **Users or workload identities**.
active-directory Howto Policy Unknown Unsupported Device https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-policy-unknown-unsupported-device.md
Previously updated : 07/18/2023 Last updated : 09/05/2023
# Common Conditional Access policy: Block access for unknown or unsupported device platform
-Users will be blocked from accessing company resources when the device type is unknown or unsupported.
+Users are blocked from accessing company resources when the device type is unknown or unsupported.
+
+The [device platform condition](concept-conditional-access-conditions.md#device-platforms) is based on user agent strings, which can be modified. Conditional Access policies that use this condition should be paired with another policy, like one that requires device compliance or app protection policies.
## User exclusions [!INCLUDE [active-directory-policy-exclusions](../../../includes/active-directory-policy-exclude-user.md)]
Users will be blocked from accessing company resources when the device type is u
## Create a Conditional Access policy
-1. Sign in to the **[Microsoft Entra admin center](https://entra.microsoft.com)** as a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
-1. Browse to **Microsoft Entra ID (Azure AD)** > **Protection** > **Conditional Access**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
+1. Browse to **Protection** > **Conditional Access**.
1. Select **Create new policy**.
1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies.
1. Under **Assignments**, select **Users or workload identities**.
Users will be blocked from accessing company resources when the device type is u
1. Set **Configure** to **Yes**.
1. Under **Include**, select **Any device**.
1. Under **Exclude**, select **Android**, **iOS**, **Windows**, and **macOS**.
+ > [!NOTE]
+ > For the exclusion, select any platforms that your organization knowingly uses, and leave the others unselected.
1. Select **Done**.
1. Under **Access controls** > **Grant**, select **Block access**, then select **Select**.
1. Confirm your settings and set **Enable policy** to **Report-only**.
1. Select **Create** to create and enable your policy.

After administrators confirm the settings using [report-only mode](howto-conditional-access-insights-reporting.md), they can move the **Enable policy** toggle from **Report-only** to **On**.

## Next steps

[Conditional Access templates](concept-conditional-access-policy-common.md)
active-directory Location Condition https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/location-condition.md
The location found using the public IP address a client provides to Azure Active
## Named locations
-Locations exist in the Azure portal under **Azure Active Directory** > **Security** > **Conditional Access** > **Named locations**. These named network locations may include locations like an organization's headquarters network ranges, VPN network ranges, or ranges that you wish to block. Named locations are defined by IPv4 and IPv6 address ranges or by countries/regions.
+Locations exist under **Azure Active Directory** > **Security** > **Conditional Access** > **Named locations**. These named network locations may include locations like an organization's headquarters network ranges, VPN network ranges, or ranges that you wish to block. Named locations are defined by IPv4 and IPv6 address ranges or by countries/regions.
> [!VIDEO https://www.youtube.com/embed/P80SffTIThY]
To define a named location by IPv4/IPv6 address ranges, you need to provide:
- One or more IP ranges.
- Optionally **Mark as trusted location**.
-![New IP locations in the Azure portal](./media/location-condition/new-trusted-location.png)
+![New IP locations](./media/location-condition/new-trusted-location.png)
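Named locations can also be managed programmatically. As a rough sketch (not the article's own sample), the same IP-range definition can be created through the Microsoft Graph `namedLocations` API; the access token, display name, and CIDR range below are placeholders, and the token is assumed to carry the `Policy.ReadWrite.ConditionalAccess` permission:

```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
token = "<access-token>"  # placeholder: acquire via MSAL with Policy.ReadWrite.ConditionalAccess

# Define a trusted named location from an IPv4 CIDR range.
body = {
    "@odata.type": "#microsoft.graph.ipNamedLocation",
    "displayName": "Headquarters network",  # hypothetical name
    "isTrusted": True,
    "ipRanges": [
        {
            "@odata.type": "#microsoft.graph.iPv4CidrRange",
            "cidrAddress": "203.0.113.0/24",  # example documentation range
        }
    ],
}

resp = requests.post(
    f"{GRAPH}/identity/conditionalAccess/namedLocations",
    headers={"Authorization": f"Bearer {token}"},
    json=body,
)
resp.raise_for_status()
print(resp.json()["id"])  # the named location ID that policies reference
```

The returned `id` is what a Conditional Access policy's location condition points at.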
Named locations defined by IPv4/IPv6 address ranges are subject to the following limitations:
To define a named location by country/region, you need to provide:
- Add one or more countries/regions.
- Optionally choose to **Include unknown countries/regions**.
-![Country as a location in the Azure portal](./media/location-condition/new-named-location-country-region.png)
+![Country as a location](./media/location-condition/new-named-location-country-region.png)
If you select **Determine location by IP address**, the system collects the IP address of the device the user is signing into. When a user signs in, Azure AD resolves the user's IPv4 or [IPv6](/troubleshoot/azure/active-directory/azure-ad-ipv6-support) address (starting April 3, 2023) to a country or region, and the mapping updates periodically. Organizations can use named locations defined by countries/regions to block traffic from countries/regions where they don't do business.
Some IP addresses don't map to a specific country or region. To capture these IP
## Define locations

1. Sign in to the **Azure portal** as a Conditional Access Administrator or Security Administrator.
-1. Browse to **Azure Active Directory** > **Security** > **Conditional Access** > **Named locations**.
+1. Browse to **Protection** > **Conditional Access** > **Named locations**.
1. Choose **New location**.
1. Give your location a name.
1. Choose **IP ranges** if you know the specific externally accessible IPv4 address ranges that make up that location or **Countries/Regions**.
active-directory Migrate Approved Client App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/migrate-approved-client-app.md
The following steps make an existing Conditional Access policy require an approv
Organizations can choose to update their policies using the following steps.
-1. Sign in to the **Azure portal** as a Conditional Access Administrator, Security Administrator, or Global Administrator.
-1. Browse to **Azure Active Directory** > **Security** > **Conditional Access**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
+1. Browse to **Protection** > **Conditional Access**.
1. Select a policy that uses the approved client app grant.
1. Under **Access controls** > **Grant**, select **Grant access**.
1. Select **Require approved client app** and **Require app protection policy**.
The following steps help create a Conditional Access policy requiring an approve
Organizations can choose to deploy this policy using the following steps.
-1. Sign in to the **[Microsoft Entra admin center](https://entra.microsoft.com)** as a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
-1. Browse to **Microsoft Entra ID (Azure AD)** > **Protection** > **Conditional Access**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
+1. Browse to **Protection** > **Conditional Access**.
1. Select **Create new policy**.
1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies.
1. Under **Assignments**, select **Users or workload identities**.
active-directory Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/overview.md
Title: What is Conditional Access in Azure Active Directory?
-description: Learn how Conditional Access is at the heart of the new identity-driven control plane.
+description: Conditional Access is the Zero Trust policy engine at the heart of the new identity-driven control plane.
Previously updated : 06/20/2023 Last updated : 08/24/2023
# What is Conditional Access?
-Microsoft is providing Conditional Access templates to organizations in report-only mode starting in January of 2023. We may add more policies as new threats emerge.
- The modern security perimeter extends beyond an organization's network perimeter to include user and device identity. Organizations now use identity-driven signals as part of their access control decisions.
-> [!VIDEO https://www.microsoft.com/en-us/videoplayer/embed/RE4MwZs]
+> [!VIDEO https://www.microsoft.com/videoplayer/embed/RE4MwZs]
Azure AD Conditional Access brings signals together to make decisions and enforce organizational policies. Conditional Access is Microsoft's [Zero Trust policy engine](/security/zero-trust/deploy/identity) taking signals from various sources into account when enforcing policy decisions.

:::image type="content" source="media/overview/conditional-access-signal-decision-enforcement.png" alt-text="Diagram showing concept of Conditional Access signals plus decision to enforce organizational policy.":::
-Conditional Access policies at their simplest are if-then statements, if a user wants to access a resource, then they must complete an action. Example: A payroll manager wants to access the payroll application and is required to do multifactor authentication to access it.
+Conditional Access policies at their simplest are if-then statements; **if** a user wants to access a resource, **then** they must complete an action. For example: If a user wants to access an application or service like Microsoft 365, then they must perform multifactor authentication to gain access.
Administrators are faced with two primary goals:
These signals include:
- Users with devices of specific platforms or marked with a specific state can be used when enforcing Conditional Access policies.
- Use filters for devices to target policies to specific devices like privileged access workstations.
- Application
- - Users attempting to access specific applications can trigger different Conditional Access policies.
+ - Users attempting to access specific applications can trigger different Conditional Access policies.
- Real-time and calculated risk detection
- - Signals integration with [Azure AD Identity Protection](../identity-protection/overview-identity-protection.md) allows Conditional Access policies to identify and remediate risky users and sign-in behavior.
+ - Signals integration with [Microsoft Entra ID Protection](../identity-protection/overview-identity-protection.md) allows Conditional Access policies to identify and remediate risky users and sign-in behavior.
- [Microsoft Defender for Cloud Apps](/defender-cloud-apps/what-is-defender-for-cloud-apps) - Enables user application access and sessions to be monitored and controlled in real time. This integration increases visibility and control over access to and activities done within your cloud environment.
Many organizations have [common access concerns that Conditional Access policies
- Requiring multifactor authentication for users with administrative roles
- Requiring multifactor authentication for Azure management tasks
- Blocking sign-ins for users attempting to use legacy authentication protocols
-- Requiring trusted locations for Azure AD Multifactor Authentication registration
+- Requiring trusted locations for security information registration
- Blocking or granting access from specific locations
- Blocking risky sign-in behaviors
- Requiring organization-managed devices for specific applications
Administrators can create policies from scratch or start from a template policy
## Administrator experience
-Administrators with the [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator) role can manage policies in Azure AD.
+Administrators with the [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator) role can manage policies.
-Conditional Access is found in the Azure portal under **Azure Active Directory** > **Security** > **Conditional Access**.
+Conditional Access is found in the [Microsoft Entra admin center](https://entra.microsoft.com) under **Protection** > **Conditional Access**.
- The **Overview** page provides a summary of policy state, users, devices, and applications as well as general and security alerts with suggestions.
- The **Coverage** page provides a synopsis of applications with and without Conditional Access policy coverage over the last seven days.
Conditional Access is found in the Azure portal under **Azure Active Directory**
Customers with [Microsoft 365 Business Premium licenses](/office365/servicedescriptions/office-365-service-descriptions-technet-library) also have access to Conditional Access features.
-Risk-based policies require access to [Identity Protection](../identity-protection/overview-identity-protection.md), which is an Azure AD P2 feature.
+Risk-based policies require access to [Identity Protection](../identity-protection/overview-identity-protection.md), which requires P2 licenses.
Other products and features that may interact with Conditional Access policies require appropriate licensing for those products and features.
active-directory Plan Conditional Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/plan-conditional-access.md
Taking into account our learnings in the use of Conditional Access and supportin
**Ensure that every app has at least one Conditional Access policy applied**. From a security perspective it's better to create a policy that encompasses **All cloud apps**, and then exclude applications that you don't want the policy to apply to. This practice ensures you don't need to update Conditional Access policies every time you onboard a new application.

> [!TIP]
-> Be very careful in using block and all apps in a single policy. This could lock admins out of the Azure portal, and exclusions cannot be configured for important endpoints such as Microsoft Graph.
+> Be very careful in using block and all apps in a single policy. This could lock admins out, and exclusions cannot be configured for important endpoints such as Microsoft Graph.
### Minimize the number of Conditional Access policies
active-directory Policy Migration Mfa https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/policy-migration-mfa.md
# Migrate a classic policy in the Azure portal
-This article shows how to migrate a classic policy that requires **multifactor authentication** for a cloud app. Although it isn't a prerequisite, we recommend that you read [Migrate classic policies in the Azure portal](policy-migration.md) before you start migrating your classic policies.
+This article shows how to migrate a classic policy that requires **multifactor authentication** for a cloud app. Although it isn't a prerequisite, we recommend that you read [Migrate classic policies](policy-migration.md) before you start migrating your classic policies.
![Classic policy details requiring MFA for Salesforce app](./media/policy-migration/33.png)
The migration process consists of the following steps:
## Open a classic policy
-1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
-1. Navigate to **Azure Active Directory** > **Security** > **Conditional Access**.
+1. Browse to **Protection** > **Conditional Access**.
1. Select **Classic policies**.
The migration process consists of the following steps:
1. In the list of classic policies, select the policy you wish to migrate. Document the configuration settings so that you can re-create it with a new Conditional Access policy.
-For examples of common policies and their configuration in the Azure portal, see the article [Common Conditional Access policies](concept-conditional-access-policy-common.md).
+For examples of common policies and their configuration, see the article [Common Conditional Access policies](concept-conditional-access-policy-common.md).
## Disable the classic policy
active-directory Require Tou https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/require-tou.md
In this quickstart, you'll configure a Conditional Access policy in Azure Active
To complete the scenario in this quickstart, you need:

- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-- Azure AD Premium P1 or P2 - Azure AD Conditional Access is an Azure AD Premium capability. You can sign up for a trial in the Azure portal.
+- Azure AD Premium P1 or P2 - Azure AD Conditional Access is an Azure AD Premium capability.
- A test account to sign in with - If you don't know how to create a test account, see [Add cloud-based users](../fundamentals/add-users.md#add-a-new-user).

## Sign-in without terms of use

The goal of this step is to get an impression of the sign-in experience without a Conditional Access policy.
-1. Sign in to the [Azure portal](https://portal.azure.com) as your test user.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as your test user.
1. Sign out.

## Create your terms of use
This section provides you with the steps to create a sample ToU. When you create
1. In Microsoft Word, create a new document.
1. Type **My terms of use**, and then save the document on your computer as **mytou.pdf**.
-1. Sign in to the [Azure portal](https://portal.azure.com) as a Conditional Access Administrator, Security Administrator, or a Global Administrator.
-1. Browse to **Azure Active Directory** > **Security** > **Conditional Access** > **Terms of use**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
+1. Browse to **Protection** > **Conditional Access** > **Terms of use**.
- :::image type="content" source="media/require-tou/terms-of-use-azure-ad-conditional-access.png" alt-text="Screenshot of terms of use shown in the Azure portal highlighting the new terms button." lightbox="media/require-tou/terms-of-use-azure-ad-conditional-access.png":::
+ :::image type="content" source="media/require-tou/terms-of-use-azure-ad-conditional-access.png" alt-text="Screenshot of terms of use highlighting the new terms button." lightbox="media/require-tou/terms-of-use-azure-ad-conditional-access.png":::
1. In the menu on the top, select **New terms**.
- :::image type="content" source="media/require-tou/new-terms-of-use-creation.png" alt-text="Screenshot that shows creating a new terms of use policy in the Azure portal." lightbox="media/require-tou/new-terms-of-use-creation.png":::
+ :::image type="content" source="media/require-tou/new-terms-of-use-creation.png" alt-text="Screenshot that shows creating a new terms of use policy." lightbox="media/require-tou/new-terms-of-use-creation.png":::
1. In the **Name** textbox, type **My TOU**.
1. Upload your terms of use PDF file.
active-directory Resilience Defaults https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/resilience-defaults.md
You can configure Conditional Access resilience defaults from the Azure portal,
### Azure portal
-1. Navigate to the **Azure portal** > **Security** > **Conditional Access**
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
+1. Browse to **Protection** > **Conditional Access**.
1. Create a new policy or select an existing policy.
1. Open the Session control settings.
1. Select **Disable resilience defaults** to disable the setting for this policy. Sign-ins in scope of the policy will be blocked during an Azure AD outage.
active-directory Terms Of Use https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/terms-of-use.md
Azure AD terms of use policies use the PDF format to present content. The PDF fi
Once you've completed your terms of use policy document, use the following procedure to add it.
-1. Sign in to the **Azure portal** as a Conditional Access Administrator or Security Administrator.
-1. Browse to **Azure Active Directory** > **Security** > **Conditional Access** > **Terms of use**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
+1. Browse to **Protection** > **Conditional Access** > **Terms of use**.
1. Select **New terms**.

   ![New terms of use pane to specify your terms of use settings](./media/terms-of-use/new-tou.png)
-1. In the **Name** box, enter a name for the terms of use policy used in the Azure portal.
+1. In the **Name** box, enter a name for the terms of use policy.
1. For **Terms of use document**, browse to your finalized terms of use policy PDF and select it.
1. Select the language for your terms of use policy document. The language option allows you to upload multiple terms of use policies, each with a different language. The version of the terms of use policy that an end user sees is based on their browser preferences.
1. In the **Display name** box, enter a title that users see when they sign in.
Once you've completed your terms of use policy document, use the following proce
The Terms of use blade shows a count of the users who have accepted and declined. These counts and who accepted/declined are stored for the life of the terms of use policy.
-1. Sign in to Azure and navigate to **Terms of use** at [https://aka.ms/catou](https://aka.ms/catou).
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
+1. Browse to **Protection** > **Conditional Access** > **Terms of use**.
![Terms of use blade listing the number of users who have accepted and declined](./media/terms-of-use/view-tou.png)
If you want to view more activity, Azure AD terms of use policies include audit
To get started with Azure AD audit logs, use the following procedure:
-1. Sign in to the **Azure portal** as a Conditional Access Administrator, Security Administrator, or Global Administrator.
-1. Browse to **Azure Active Directory** > **Security** > **Conditional Access** > **Terms of use**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
+1. Browse to **Protection** > **Conditional Access** > **Terms of use**.
1. Select a terms of use policy.
1. Select **View audit logs**.
1. On the Azure AD audit logs screen, you can filter the information using the provided lists to target specific audit log information.
Users can review and see the terms of use policies that they've accepted by usin
You can edit some details of terms of use policies, but you can't modify an existing document. The following procedure describes how to edit the details.
-1. Sign in to the **Azure portal** as a Conditional Access Administrator, Security Administrator, or Global Administrator.
-1. Browse to **Azure Active Directory** > **Security** > **Conditional Access** > **Terms of use**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
+1. Browse to **Protection** > **Conditional Access** > **Terms of use**.
1. Select the terms of use policy you want to edit.
1. Select **Edit terms**.
1. In the Edit terms of use pane, you can change the following options:
You can edit some details of terms of use policies, but you can't modify an exis
## Update the version or PDF of an existing terms of use
-1. Sign in to the **Azure portal** as a Conditional Access Administrator, Security Administrator, or Global Administrator.
-1. Browse to **Azure Active Directory** > **Security** > **Conditional Access** > **Terms of use**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
+1. Browse to **Protection** > **Conditional Access** > **Terms of use**.
1. Select the terms of use policy you want to edit.
1. Select **Edit terms**.
1. For the language for which you would like to upload a new version, select **Update** under the action column.
You can edit some details of terms of use policies, but you can't modify an exis
## View previous versions of a ToU
-1. Sign in to the **Azure portal** as a Conditional Access Administrator, Security Administrator, or Global Administrator.
-1. Browse to **Azure Active Directory** > **Security** > **Conditional Access** > **Terms of use**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
+1. Browse to **Protection** > **Conditional Access** > **Terms of use**.
1. Select the terms of use policy for which you want to view a version history.
1. Select **Languages and version history**.
1. Select **See previous versions**.
You can edit some details of terms of use policies, but you can't modify an exis
## See who has accepted each version
-1. Sign in to the **Azure portal** as a Conditional Access Administrator, Security Administrator, or Global Administrator.
-1. Browse to **Azure Active Directory** > **Security** > **Conditional Access** > **Terms of use**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
+1. Browse to **Protection** > **Conditional Access** > **Terms of use**.
1. To see who has currently accepted the ToU, select the number under the **Accepted** column for the ToU you want.
1. By default, the next page shows the current state of each user's acceptance of the ToU.
1. If you would like to see the previous consent events, select **All** from the **Current State** drop-down. Now you can see each user's events in detail for each version and what happened.
You can edit some details of terms of use policies, but you can't modify an exis
The following procedure describes how to add a ToU language.
-1. Sign in to the **Azure portal** as a Conditional Access Administrator, Security Administrator, or Global Administrator.
-1. Browse to **Azure Active Directory** > **Security** > **Conditional Access** > **Terms of use**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
+1. Browse to **Protection** > **Conditional Access** > **Terms of use**.
1. Select the terms of use policy you want to edit.
1. Select **Edit Terms**.
1. Select **Add language** at the bottom of the page.
If a user is using browser that isn't supported, they're asked to use a differen
You can delete old terms of use policies using the following procedure.
-1. Sign in to the **Azure portal** as a Conditional Access Administrator, Security Administrator, or Global Administrator.
-1. Browse to **Azure Active Directory** > **Security** > **Conditional Access** > **Terms of use**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
+1. Browse to **Protection** > **Conditional Access** > **Terms of use**.
1. Select the terms of use policy you want to remove.
1. Select **Delete terms**.
1. In the message that appears asking if you want to continue, select **Yes**.
active-directory Troubleshoot Conditional Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/troubleshoot-conditional-access.md
Organizations should avoid the following configurations:
**For all users, all cloud apps:**

- **Block access** - This configuration blocks your entire organization.
-- **Require device to be marked as compliant** - For users that haven't enrolled their devices yet, this policy blocks all access including access to the Intune portal. If you're an administrator without an enrolled device, this policy blocks you from getting back into the Azure portal to change the policy.
+- **Require device to be marked as compliant** - For users that haven't enrolled their devices yet, this policy blocks all access including access to the Intune portal. If you're an administrator without an enrolled device, this policy blocks you from getting back in to change the policy.
- **Require Hybrid Azure AD domain joined device** - This policy also has the potential to block access for all users in your organization if they don't have a hybrid Azure AD joined device.
- **Require app protection policy** - This policy also has the potential to block access for all users in your organization if you don't have an Intune policy. If you're an administrator without a client application that has an Intune app protection policy, this policy blocks you from getting back into portals such as Intune and Azure.
More information can be found about the problem by clicking **More Details** in
To find out which Conditional Access policy or policies applied and why, do the following.
-1. Sign in to the **Azure portal** as a Global Administrator, Security Administrator, or Global Reader.
-1. Browse to **Azure Active Directory** > **Sign-ins**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
+1. Browse to **Identity** > **Monitoring & health** > **Sign-in logs**.
1. Find the event for the sign-in to review. Add or remove filters and columns to filter out unnecessary information.
1. Add filters to narrow the scope:
   1. **Correlation ID** when you have a specific event to investigate.
To determine the service dependency, check the sign-ins log for the application
:::image type="content" source="media/troubleshoot-conditional-access/service-dependency-example-sign-in.png" alt-text="Screenshot that shows an example sign-in log showing an Application calling a Resource. This scenario is also known as a service dependency." lightbox="media/troubleshoot-conditional-access/service-dependency-example-sign-in.png":::
-## What to do if you're locked out of the Azure portal?
+## What to do if you're locked out?
-If you're locked out of the Azure portal due to an incorrect setting in a Conditional Access policy:
+If you're locked out due to an incorrect setting in a Conditional Access policy:
-- Check is there are other administrators in your organization that aren't blocked yet. An administrator with access to the Azure portal can disable the policy that is impacting your sign-in.
+- Check if there are other administrators in your organization that aren't blocked yet. An administrator with access can disable the policy that is impacting your sign-in.
- If none of the administrators in your organization can update the policy, submit a support request. Microsoft support can review and, upon confirmation, update the Conditional Access policies that are preventing access.

## Next steps

- [Use the What If tool to troubleshoot Conditional Access policies](what-if-tool.md)
-- [Sign-in activity reports in the Azure portal](../reports-monitoring/concept-sign-ins.md)
+- [Sign-in activity reports](../reports-monitoring/concept-sign-ins.md)
- [Troubleshooting Conditional Access using the What If tool](troubleshoot-conditional-access-what-if.md)
active-directory Troubleshoot Policy Changes Audit Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/troubleshoot-policy-changes-audit-log.md
Find these options in the **Azure portal** > **Azure Active Directory**, **Diagn
## Use the audit log
-1. Sign in to the **Azure portal** as a Conditional Access Administrator, Security Administrator, or Global Administrator.
-1. Browse to **Azure Active Directory** > **Audit logs**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
+1. Browse to **Identity** > **Monitoring & health** > **Audit logs**.
1. Select the **Date** range you want to query.
1. From the **Service** filter, select **Conditional Access** and select the **Apply** button.
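The same filter can be reproduced outside the portal with the Microsoft Graph audit log API. A hedged sketch, assuming a token with `AuditLog.Read.All` and that the `loggedByService` property is filterable in your tenant (the token is a placeholder):

```python
import requests

token = "<access-token>"  # placeholder: needs AuditLog.Read.All
url = "https://graph.microsoft.com/v1.0/auditLogs/directoryAudits"
params = {"$filter": "loggedByService eq 'Conditional Access'"}

resp = requests.get(url, headers={"Authorization": f"Bearer {token}"}, params=params)
resp.raise_for_status()

for entry in resp.json().get("value", []):
    # Each entry records who changed which policy and when.
    actor = entry.get("initiatedBy", {}).get("user") or entry.get("initiatedBy", {}).get("app")
    print(entry["activityDateTime"], entry["activityDisplayName"], actor)
```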
active-directory What If Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/what-if-tool.md
When the evaluation has finished, the tool generates a report of the affected po
## Running the tool
-You can find the **What If** tool in the Azure portal under **Azure Active Directory** > **Security** > **Conditional Access** > **What If**.
+You can find the **What If** tool under **Azure Active Directory** > **Security** > **Conditional Access** > **What If**.
Before you can run the What If tool, you must provide the conditions you want to evaluate.
Before you can run the What If tool, you must provide the conditions you want to
The only condition you must configure is selecting a user or workload identity. All other conditions are optional. For a definition of these conditions, see the article [Building a Conditional Access policy](concept-conditional-access-policies.md).

## Evaluation
active-directory Workload Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/workload-identity.md
# Conditional Access for workload identities
-Conditional Access policies have historically applied only to users when they access apps and services like SharePoint online or the Azure portal. We're now extending support for Conditional Access policies to be applied to service principals owned by the organization. We call this capability Conditional Access for workload identities.
+Conditional Access policies have historically applied only to users when they access apps and services like SharePoint Online. We're now extending support for Conditional Access policies to be applied to service principals owned by the organization. We call this capability Conditional Access for workload identities.
A [workload identity](../workload-identities/workload-identities-overview.md) is an identity that allows an application or service principal access to resources, sometimes in the context of a user. These workload identities differ from traditional user accounts as they:
Conditional Access for workload identities enables blocking service principals f
Create a location-based Conditional Access policy that applies to service principals; a Microsoft Graph sketch of the same policy follows the steps below.
-1. Sign in to the **[Microsoft Entra admin center](https://entra.microsoft.com)** as a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
-1. Browse to **Microsoft Entra ID (Azure AD)** > **Protection** > **Conditional Access**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
+1. Browse to **Protection** > **Conditional Access**.
1. Select **Create new policy**.
1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies.
1. Under **Assignments**, select **Users or workload identities**.
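As referenced above, here's a sketch of the same location-based policy expressed as one Microsoft Graph call. The service principal ID, named location ID, and token are placeholders; field names follow the Graph `conditionalAccessPolicy` resource, and your tenant may require additional fields:

```python
import requests

token = "<access-token>"  # placeholder: needs Policy.ReadWrite.ConditionalAccess
policy = {
    "displayName": "Block service principal outside trusted ranges",  # hypothetical name
    "state": "enabledForReportingButNotEnforced",  # start in report-only mode
    "conditions": {
        "clientApplications": {
            "includeServicePrincipals": ["<service-principal-object-id>"],
            "excludeServicePrincipals": [],
        },
        "applications": {"includeApplications": ["All"]},
        "locations": {
            "includeLocations": ["All"],
            "excludeLocations": ["<trusted-named-location-id>"],
        },
    },
    "grantControls": {"operator": "OR", "builtInControls": ["block"]},
}

resp = requests.post(
    "https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies",
    headers={"Authorization": f"Bearer {token}"},
    json=policy,
)
resp.raise_for_status()
print(resp.json()["id"])
```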
Create a risk-based Conditional Access policy that applies to service principals
:::image type="content" source="media/workload-identity/conditional-access-workload-identity-risk-policy.png" alt-text="Creating a Conditional Access policy with a workload identity and risk as a condition." lightbox="media/workload-identity/conditional-access-workload-identity-risk-policy.png":::
-1. Sign in to the **[Microsoft Entra admin center](https://entra.microsoft.com)** as a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
-1. Browse to **Microsoft Entra ID (Azure AD)** > **Protection** > **Conditional Access**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
+1. Browse to **Protection** > **Conditional Access**.
1. Select **Create new policy**.
1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies.
1. Under **Assignments**, select **Users or workload identities**.
Create a risk-based Conditional Access policy that applies to service principals
1. Set the **Configure** toggle to **Yes**.
1. Select the levels of risk where you want this policy to trigger.
1. Select **Done**.
-1. Under **Grant**, **Block access** is the only available option. Access is blocked when a token request is made from outside the allowed range.
+1. Under **Grant**, **Block access** is the only available option. Access is blocked when the specified risk levels are seen.
1. Your policy can be saved in **Report-only** mode, allowing administrators to estimate the effects, or enforced by turning the policy **On**.
1. Select **Create** to complete your policy.
If you wish to roll back this feature, you can delete or disable any created pol
The sign-in logs are used to review how policy is enforced for service principals or the expected effects of policy when using report-only mode.
-1. Browse to **Azure Active Directory** > **Sign-in logs** > **Service principal sign-ins**.
+1. Browse to **Identity** > **Monitoring & health** > **Sign-in logs** > **Service principal sign-ins**.
1. Select a log entry and choose the **Conditional Access** tab to view evaluation information.

Failure reason when a service principal is blocked by Conditional Access: "Access has been blocked due to Conditional Access policies."
To view results of a risk-based policy, refer to the **Report-only** tab of even
You can get the objectID of the service principal from Azure AD Enterprise Applications. The Object ID in Azure AD App registrations can't be used. This identifier is the Object ID of the app registration, not of the service principal.
-1. Browse to the **Azure portal** > **Azure Active Directory** > **Enterprise Applications**, find the application you registered.
+1. Browse to **Identity** > **Applications** > **Enterprise Applications**, find the application you registered.
1. From the **Overview** tab, copy the **Object ID** of the application. This identifier is unique to the service principal, used by Conditional Access policy to find the calling app.

### Microsoft Graph
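As a sketch of the lookup this section describes, the service principal's Object ID can also be resolved from the app (client) ID with a Graph query; the token and app ID are placeholders, and the token is assumed to have `Application.Read.All`:

```python
import requests

token = "<access-token>"  # placeholder: needs Application.Read.All
app_id = "<application-client-id>"  # the app (client) ID, not the registration's Object ID

resp = requests.get(
    "https://graph.microsoft.com/v1.0/servicePrincipals",
    headers={"Authorization": f"Bearer {token}"},
    params={"$filter": f"appId eq '{app_id}'"},
)
resp.raise_for_status()

sp = resp.json()["value"][0]
print(sp["id"])  # the service principal Object ID that Conditional Access matches on
```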
active-directory Api Find An Api How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/api-find-an-api-how-to.md
- Title: Find an API for a custom-developed app
-description: How to configure the permissions you need to access a particular API in your custom developed Azure AD application
-------- Previously updated : 09/27/2021----
-# How to find a specific API needed for a custom-developed application
-
-Access to APIs require configuration of access scopes and roles. If you want to expose your resource application web APIs to client applications, configure access scopes and roles for the API. If you want a client application to access a web API, configure permissions to access the API in the app registration.
-
-## Configuring a resource application to expose web APIs
-
-When you expose your web API, the API be displayed in the **Select an API** list when adding permissions to an app registration. To add access scopes, follow the steps outlined in [Configure an application to expose web APIs](quickstart-configure-app-expose-web-apis.md).
-
-## Configuring a client application to access web APIs
-
-When you add permissions to your app registration, you can **add API access** to exposed web APIs. To access web APIs, follow the steps outlined in [Configure a client application to access web APIs](quickstart-configure-app-access-web-apis.md).
-
-## Next steps
--- [Understanding the Azure Active Directory application manifest](./reference-app-manifest.md)
active-directory App Objects And Service Principals https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/app-objects-and-service-principals.md
Last updated 05/22/2023 -+
This article describes application registration, application objects, and servic
## Application registration
-To delegate identity and access management functions to Azure AD, an application must be registered with an Azure AD tenant. When you register your application with Azure AD, you're creating an identity configuration for your application that allows it to integrate with Azure AD. When you register an app in the Azure portal, you choose whether it's a [single tenant](single-and-multi-tenant-apps.md#who-can-sign-in-to-your-app), or [multi-tenant](single-and-multi-tenant-apps.md#who-can-sign-in-to-your-app), and can optionally set a [redirect URI](reply-url.md). For step-by-step instructions on registering an app, see the [app registration quickstart](quickstart-register-app.md).
+To delegate identity and access management functions to Azure AD, an application must be registered with an Azure AD tenant. When you register your application with Azure AD, you're creating an identity configuration for your application that allows it to integrate with Azure AD. When you register an app, you choose whether it's a [single tenant](single-and-multi-tenant-apps.md#who-can-sign-in-to-your-app), or [multi-tenant](single-and-multi-tenant-apps.md#who-can-sign-in-to-your-app), and can optionally set a [redirect URI](reply-url.md). For step-by-step instructions on registering an app, see the [app registration quickstart](quickstart-register-app.md).
-When you've completed the app registration, you have a globally unique instance of the app (the application object) that lives within your home tenant or directory. You also have a globally unique ID for your app (the app/client ID). In the portal, you can then add secrets or certificates and scopes to make your app work, customize the branding of your app in the sign-in dialog, and more.
+When you've completed the app registration, you have a globally unique instance of the app (the application object) that lives within your home tenant or directory. You also have a globally unique ID for your app (the app/client ID). You can add secrets or certificates and scopes to make your app work, customize the branding of your app in the sign-in dialog, and more.
-If you register an application in the portal, an application object and a service principal object are automatically created in your home tenant. If you register/create an application using the Microsoft Graph APIs, creating the service principal object is a separate step.
+If you register an application, an application object and a service principal object are automatically created in your home tenant. If you register/create an application using the Microsoft Graph APIs, creating the service principal object is a separate step.
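The two-step behavior is easy to see with direct Microsoft Graph calls. A minimal sketch, assuming a token with `Application.ReadWrite.All`; the display name is a hypothetical placeholder:

```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
headers = {"Authorization": "Bearer <access-token>"}  # placeholder token

# Step 1: create the application object (the globally unique app definition).
app = requests.post(
    f"{GRAPH}/applications",
    headers=headers,
    json={"displayName": "example-app"},  # hypothetical name
)
app.raise_for_status()
app_id = app.json()["appId"]  # the app/client ID

# Step 2: create the service principal in the tenant - a separate call,
# unlike portal registration where this happens automatically.
sp = requests.post(
    f"{GRAPH}/servicePrincipals",
    headers=headers,
    json={"appId": app_id},
)
sp.raise_for_status()
print(sp.json()["id"])
```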
## Application object
The application object describes three aspects of an application:
- The resources that the application might need to access
- The actions that the application can take
-You can use the **App registrations** page in the [Azure portal] to list and manage the application objects in your home tenant.
+You can use the **App registrations** page in the [Microsoft Entra admin center](https://entra.microsoft.com) to list and manage the application objects in your home tenant.
![App registrations blade](./media/app-objects-and-service-principals/app-registrations-blade.png)
There are three types of service principal:
- **Application** - This type of service principal is the local representation, or application instance, of a global application object in a single tenant or directory. In this case, a service principal is a concrete instance created from the application object and inherits certain properties from that application object. A service principal is created in each tenant where the application is used and references the globally unique app object. The service principal object defines what the app can actually do in the specific tenant, who can access the app, and what resources the app can access.
- When an application is given permission to access resources in a tenant (upon registration or consent), a service principal object is created. When you register an application using the Azure portal, a service principal is created automatically. You can also create service principal objects in a tenant using Azure PowerShell, Azure CLI, Microsoft Graph, and other tools.
+ When an application is given permission to access resources in a tenant (upon registration or consent), a service principal object is created. When you register an application, a service principal is created automatically. You can also create service principal objects in a tenant using Azure PowerShell, Azure CLI, Microsoft Graph, and other tools.
- **Managed identity** - This type of service principal is used to represent a [managed identity](../managed-identities-azure-resources/overview.md). Managed identities eliminate the need for developers to manage credentials. Managed identities provide an identity for applications to use when connecting to resources that support Azure AD authentication. When a managed identity is enabled, a service principal representing that managed identity is created in your tenant. Service principals representing managed identities can be granted access and permissions, but can't be updated or modified directly.
There are three types of service principal:
The Microsoft Graph [ServicePrincipal entity][ms-graph-sp-entity] defines the schema for a service principal object's properties.
-You can use the **Enterprise applications** page in the Azure portal to list and manage the service principals in a tenant. You can see the service principal's permissions, user consented permissions, which users have done that consent, sign in information, and more.
+You can use the **Enterprise applications** page in the Microsoft Entra admin center to list and manage the service principals in a tenant. You can see the service principal's permissions, the permissions users have consented to, which users have given that consent, sign-in information, and more.
![Enterprise apps blade](./media/app-objects-and-service-principals/enterprise-apps-blade.png)
You can find the service principals associated with an application object.
# [Browser](#tab/browser)
-In the [Azure portal](https://portal.azure.com), navigate to the application registration overview. Select **Managed application in local directory**.
+In the Microsoft Entra admin center, navigate to the application registration overview. Select **Managed application in local directory**.
:::image type="content" alt-text="Screen shot that shows the Managed application in local directory option in the overview." source="./media/app-objects-and-service-principals/find-service-principal.png" border="false":::
In this example scenario:
Learn how to create a service principal:

-- [Using the Azure portal](howto-create-service-principal-portal.md)
+- [Using the Microsoft Entra admin center](howto-create-service-principal-portal.md)
- [Using Azure PowerShell](howto-authenticate-service-principal-powershell.md)
- [Using Azure CLI](/cli/azure/create-an-azure-service-principal-azure-cli)
- [Using Microsoft Graph](/graph/api/serviceprincipal-post-serviceprincipals) and then use [Microsoft Graph Explorer](https://developer.microsoft.com/graph/graph-explorer) to query both the application and service principal objects.
Learn how to create a service principal:
[ms-graph-app-entity]: /graph/api/resources/application
[ms-graph-sp-entity]: /graph/api/resources/serviceprincipal
-[Azure portal]: https://portal.azure.com
active-directory Apple Sso Plugin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/apple-sso-plugin.md
Previously updated : 04/18/2023 Last updated : 09/05/2023
The Microsoft Enterprise SSO plug-in relies on Apple's [enterprise SSO](https://
For the SSO plug-in to function properly, Apple devices should be allowed to reach both identity provider URLs and Apple's own URLs without additional interception. This means that those URLs need to be excluded from network proxies, interception, and other enterprise systems. Here is the minimum set of URLs that need to be allowed for the SSO plug-in to function:

- `*.cdn-apple.com`
- `*.networking.apple`
- `login.microsoftonline.com`
Here is the minimum set of URLs that need to be allowed for the SSO plug-in to f
- `login.microsoftonline.us`
- `login-us.microsoftonline.com`
-Additional Apple's URLs that may need to be allowed are documented here: https://support.apple.com/en-us/HT210060
+> [!WARNING]
+> If your organization uses proxy servers that intercept SSL traffic for scenarios like data loss prevention or tenant restrictions, ensure that traffic to these URLs is excluded from TLS break-and-inspect. Failure to exclude these URLs may interfere with client certificate authentication and cause issues with device registration and device-based Conditional Access.
+
+If your organization blocks these URLs, users may see errors like `1012 NSURLErrorDomain error` or `1000 com.apple.AuthenticationServices.AuthorizationError`.
+
+Other Apple URLs that may need to be allowed are documented in their support article, [Use Apple products on enterprise networks](https://support.apple.com/HT210060).
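To spot-check reachability from a client network segment, a hypothetical diagnostic like the following attempts a TLS handshake to each host. Wildcard entries are represented by sample hostnames, which may differ in your environment; this probes basic connectivity and certificate issuers, and isn't a full validation of break-and-inspect behavior:

```python
import socket
import ssl

# Example hosts; wildcard entries like *.cdn-apple.com are represented
# by a sample hostname and may differ in your environment.
hosts = [
    "login.microsoftonline.com",
    "login.microsoftonline.us",
    "login-us.microsoftonline.com",
    "gdmf.apple.com",  # sample host under Apple's enterprise network requirements
]

context = ssl.create_default_context()
for host in hosts:
    try:
        with socket.create_connection((host, 443), timeout=5) as sock:
            with context.wrap_socket(sock, server_hostname=host) as tls:
                issuer = dict(x[0] for x in tls.getpeercert()["issuer"])
                # An unexpected issuer (e.g. a corporate CA) hints at TLS interception;
                # an untrusted interception cert raises and lands in the except branch.
                print(f"{host}: OK, issuer={issuer.get('organizationName')}")
    except OSError as exc:
        print(f"{host}: FAILED ({exc})")
```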
#### Use Intune for simplified configuration
active-directory Application Consent Experience https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/application-consent-experience.md
The following diagram and table provide information about the building blocks of
| 5 | Publisher name and verification | The blue "verified" badge means that the app publisher has verified their identity using a Microsoft Partner Network account and has completed the verification process. If the app is publisher verified, the publisher name is displayed. If the app isn't publisher verified, "Unverified" is displayed instead of a publisher name. For more information, read about [Publisher Verification](publisher-verification-overview.md). Selecting the publisher name displays more app info as available, such as the publisher name, publisher domain, date created, certification details, and reply URLs. |
| 6 | Microsoft 365 Certification | The Microsoft 365 Certification logo means that an app has been vetted against controls derived from leading industry standard frameworks, and that strong security and compliance practices are in place to protect customer data. For more information, read about [Microsoft 365 Certification](/microsoft-365-app-certification/docs/enterprise-app-certification-guide). |
| 7 | Publisher information | Displays whether the application is published by Microsoft. |
-| 8 | Permissions | This list contains the permissions being requested by the client application. Users should always evaluate the types of permissions being requested to understand what data the client application will be authorized to access on their behalf if they accept. As an application developer it's best to request access, to the permissions with the least privilege. |
+| 8 | Permissions | This list contains the permissions being requested by the client application. Users should always evaluate the types of permissions being requested to understand what data the client application will be authorized to access on their behalf if they accept. As an application developer, it's best to request access to the permissions with the least privilege. |
| 9 | Permission description | This value is provided by the service exposing the permissions. To see the permission descriptions, you must toggle the chevron next to the permission. |
| 10 | https://myapps.microsoft.com | This is the link where users can review and remove any non-Microsoft applications that currently have access to their data. |
| 11 | Report it here | This link is used to report a suspicious app if you don't trust the app, if you believe the app is impersonating another app, if you believe the app will misuse your data, or for some other reason. |
active-directory Authentication Flows App Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/authentication-flows-app-scenarios.md
Title: Microsoft identity platform authentication flows & app scenarios
+ Title: Microsoft identity platform app types and authentication flows
description: Learn about application scenarios for the Microsoft identity platform, including authenticating identities, acquiring tokens, and calling protected APIs. Previously updated : 05/05/2022 Last updated : 08/11/2023
-#Customer intent: As an app developer, I want to learn about authentication flows and application scenarios so I can create applications protected by the Microsoft identity platform.
+# Customer intent: As an app developer, I want to learn about authentication flows and application scenarios so I can create applications protected by the Microsoft identity platform.
-# Authentication flows and application scenarios
+# Microsoft identity platform app types and authentication flows
The Microsoft identity platform supports authentication for different kinds of modern application architectures. All of the architectures are based on the industry-standard protocols [OAuth 2.0 and OpenID Connect](./v2-protocols.md). By using the [authentication libraries for the Microsoft identity platform](reference-v2-libraries.md), applications authenticate identities and acquire tokens to access protected APIs.
This article describes authentication flows and the application scenarios that t
## Application categories
-Tokens can be acquired from several types of applications, including:
+[Security tokens](./security-tokens.md) can be acquired from several types of applications, including:
- Web apps
- Mobile apps
The following sections describe the categories of applications.
Authentication scenarios involve two activities:

-- **Acquiring security tokens for a protected web API**: We recommend that you use the [Microsoft Authentication Library (MSAL)](reference-v2-libraries.md), developed and supported by Microsoft.
+- **Acquiring security tokens for a protected web API**: We recommend that you use the [Microsoft Authentication Library (MSAL)](msal-overview.md), developed and supported by Microsoft.
- **Protecting a web API or a web app**: One challenge of protecting these resources is validating the security token. On some platforms, Microsoft offers [middleware libraries](reference-v2-libraries.md).

### With users or without users
The available authentication flows differ depending on the sign-in audience. Som
For more information, see [Supported account types](v2-supported-account-types.md#account-type-support-in-authentication-flows).
-## Application scenarios
+## Application types
The Microsoft identity platform supports authentication for these app architectures:
For a desktop app to call a web API that signs in users, use the interactive tok
There's another possibility for Windows-hosted applications on computers joined either to a Windows domain or by Azure Active Directory (Azure AD). These applications can silently acquire a token by using [integrated Windows authentication](https://aka.ms/msal-net-iwa).
-Applications running on a device without a browser can still call an API on behalf of a user. To authenticate, the user must sign in on another device that has a web browser. This scenario requires that you use the [device code flow](https://aka.ms/msal-net-device-code-flow).
+Applications running on a device without a browser can still call an API on behalf of a user. To authenticate, the user must sign in on another device that has a web browser. This scenario requires that you use the [device code flow](v2-oauth2-device-code.md).
![Device code flow](media/scenarios/device-code-flow-app.svg)
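For illustration, a minimal device code flow sketch using MSAL Python; the client ID is a placeholder and the `User.Read` scope assumes Microsoft Graph:

```python
import msal

app = msal.PublicClientApplication(
    "<client-id>",  # placeholder app registration
    authority="https://login.microsoftonline.com/organizations",
)

# Start the flow: MSAL returns a user code and a verification URL.
flow = app.initiate_device_flow(scopes=["User.Read"])
if "user_code" not in flow:
    raise ValueError(f"Device flow failed to start: {flow}")
print(flow["message"])  # instructions the user follows on another device

# Blocks while the user completes sign-in on the other device.
result = app.acquire_token_by_device_flow(flow)
if "access_token" in result:
    print("Token acquired, expires in", result["expires_in"], "seconds")
else:
    print(result.get("error_description"))
```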
Similar to a desktop app, a mobile app calls the interactive token-acquisition m
MSAL iOS and MSAL Android use the system web browser by default. However, you can direct them to use the embedded web view instead. There are specificities that depend on the mobile platform: Universal Windows Platform (UWP), iOS, or Android.
-Some scenarios, like those that involve Conditional Access related to a device ID or a device enrollment, require a broker to be installed on the device. Examples of brokers are Microsoft Company Portal on Android and Microsoft Authenticator on Android and iOS. MSAL can now interact with brokers. For more information about brokers, see [Leveraging brokers on Android and iOS](https://github.com/AzureAD/azure-activedirectory-library-for-dotnet/wiki/leveraging-brokers-on-Android-and-iOS).
+Some scenarios, like those that involve Conditional Access related to a device ID or a device enrollment, require a broker to be installed on the device. Examples of brokers are Microsoft Company Portal on Android and Microsoft Authenticator on Android and iOS. MSAL can now interact with brokers. For more information about brokers, see [Leveraging brokers on Android and iOS](msal-net-use-brokers-with-xamarin-apps.md).
For more information, see [Mobile app that calls web APIs](scenario-mobile-overview.md).
active-directory Authentication National Cloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/authentication-national-cloud.md
Including the global Azure cloud, Azure Active Directory (Azure AD) is deplo
- Microsoft Azure operated by 21Vianet
- Azure Germany ([Closed on October 29, 2021](https://www.microsoft.com/cloud-platform/germany-cloud-regions)). Learn more about [Azure Germany migration](#azure-germany-microsoft-cloud-deutschland).
-The individual national clouds and the global Azure cloud are cloud _instances_. Each cloud instance is separate from the others and has its own environment and _endpoints_. Cloud-specific endpoints include OAuth 2.0 access token and OpenID Connect ID token request endpoints, and URLs for app management and deployment, like the Azure portal.
+The individual national clouds and the global Azure cloud are cloud _instances_. Each cloud instance is separate from the others and has its own environment and _endpoints_. Cloud-specific endpoints include OAuth 2.0 access token and OpenID Connect ID token request endpoints, and URLs for app management and deployment.
As you develop your apps, use the endpoints for the cloud instance where you'll deploy the application.
The following table lists the base URLs for the Azure AD endpoints used to regis
## Application endpoints
-You can find the authentication endpoints for your application in the Azure portal.
+You can find the authentication endpoints for your application.
-1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>.
-1. Select **Azure Active Directory**.
-1. Under **Manage**, select **App registrations**, and then select **Endpoints** in the top menu.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator).
+1. Browse to **Identity** > **Applications** > **App registrations**.
+1. Select **Endpoints** in the top menu.
- The **Endpoints** page is displayed showing the authentication endpoints for the application registered in your Azure AD tenant.
+ The **Endpoints** page is displayed showing the authentication endpoints for the application.
Use the endpoint that matches the authentication protocol you're using in conjunction with the **Application (client) ID** to craft the authentication request specific to your application.
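For instance, a minimal OpenID Connect authorization request built from the **OAuth 2.0 authorization endpoint (v2)** value and the **Application (client) ID** might look like the following sketch (placeholder values assumed):

```http
GET https://login.microsoftonline.com/{tenant}/oauth2/v2.0/authorize?
client_id={application-client-id}
&response_type=code
&redirect_uri=https%3A%2F%2Flocalhost%2Fcallback
&scope=openid%20profile
&state=12345
```

In a national cloud, the authority host differs; for example, Azure Government uses `login.microsoftonline.us` in place of `login.microsoftonline.com`.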
active-directory Authentication Protocols https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/authentication-protocols.md
- Title: Microsoft identity platform authentication protocols
-description: An overview of the authentication protocols supported by the Microsoft identity platform
-------- Previously updated : 09/27/2021------
-# Microsoft identity platform authentication protocols
-
-The Microsoft identity platform supports several of the most widely used authentication and authorization protocols. The topics in this section describe the supported protocols and their implementation in the Microsoft identity platform. The topics include a review of supported claim types, an introduction to the use of federation metadata, detailed OAuth 2.0 and SAML 2.0 protocol reference documentation, and a troubleshooting section.
-
-## Authentication protocols articles and reference
-
-* [Important Information About Signing Key Rollover in Microsoft identity platform](./signing-key-rollover.md) – Learn about the Microsoft identity platform's signing key rollover cadence, changes you can make to update the key automatically, and a discussion of how to update the most common application scenarios.
-* [Supported Token and Claim Types](id-tokens.md) - Learn about the claims in the tokens that the Microsoft identity platform issues.
-* [OAuth 2.0 in Microsoft identity platform](v2-oauth2-auth-code-flow.md) - Learn about the implementation of OAuth 2.0 in Microsoft identity platform.
-* [OpenID Connect 1.0](v2-protocols-oidc.md) - Learn how to use OAuth 2.0, an authorization protocol, for authentication.
-* [Service to Service Calls with Client Credentials](v2-oauth2-client-creds-grant-flow.md) - Learn how to use OAuth 2.0 client credentials grant flow for service to service calls.
-* [Service to Service Calls with On-Behalf-Of Flow](v2-oauth2-on-behalf-of-flow.md) - Learn how to use OAuth 2.0 On-Behalf-Of flow for service to service calls.
-* [SAML Protocol Reference](./saml-protocol-reference.md) - Learn about the Single Sign-On and Single Sign-out SAML profiles of Microsoft identity platform.
-
-## See also
-
-* [Microsoft identity platform overview](v2-overview.md)
-* [Active Directory Code Samples](sample-v2-code.md)
active-directory Configure App Multi Instancing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/configure-app-multi-instancing.md
The IDP initiated SSO feature exposes the following settings for each applicatio
### Configure IDP initiated SSO
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator).
+1. Browse to **Identity** > **Applications** > **Enterprise applications**.
1. Open any SSO enabled enterprise app and navigate to the SAML single sign-on blade.
1. Select **Edit** on the **User Attributes & Claims** panel.
1. Select **Edit** to open the advanced options blade.
active-directory Consent Framework Links https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/consent-framework-links.md
- Title: How application consent works
-description: Learn more about how the Azure AD consent framework works to see how you can use it when developing applications on Azure AD
--------- Previously updated : 09/27/2021----
-# How application consent works
-
-This article is to help you learn more about how the Azure AD consent framework works so you can develop applications more effectively.
-
-## Recommended documents
-- Get a general understanding of [how consent allows a resource owner to govern an application's access to resources](./developer-glossary.md#consent).
-- Get a step-by-step overview of [how the Azure AD consent framework implements consent](./quickstart-register-app.md).
-- For more depth, learn [how a multi-tenant application can use the consent framework](./howto-convert-app-to-be-multi-tenant.md) to implement "user" and "admin" consent, supporting more advanced multi-tier application patterns.
-- For more depth, learn [how consent is supported at the OAuth 2.0 protocol layer during the authorization code grant flow.](v2-oauth2-auth-code-flow.md#request-an-authorization-code)
-
-## Next steps
-[AzureAD Microsoft Q&A](/answers/topics/azure-active-directory.html)
active-directory Custom Extension Configure Saml App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/custom-extension-configure-saml-app.md
Title: Source claims from an external store (SAML app)
description: Use a custom claims provider to augment tokens with claims from an external identity system. Configure a SAML app to receive tokens with external claims. -+
The following steps are for registering a demo [XRayClaims](https://adfshelp.mic
Add a new, non-gallery SAML application in your tenant:
-1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator).
-1. Go to **Azure Active Directory** and then **Enterprise applications**. Select **New application** and then **Create your own application**.
+1. Browse to **Identity** > **Applications** > **Enterprise applications**.
+
+1. Select **New application** and then **Create your own application**.
1. Add a name for the app. For example, **AzureADClaimsXRay**. Select the **Integrate any other application you don't find in the gallery (Non-gallery)** option and select **Create**.
active-directory Custom Extension Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/custom-extension-get-started.md
Title: Get started with custom claims providers (preview)
description: Learn how to develop and register an Azure Active Directory custom authentication extensions REST API. The custom authentication extension allows you to source claims from a data store that is external to Azure Active Directory. -+ Previously updated : 05/23/2023 Last updated : 08/16/2023
# Configure a custom claim provider token issuance event (preview)
-This article describes how to configure and setup a custom claims provider with the [token issuance start event](custom-claims-provider-overview.md#token-issuance-start-event-listener) type. This event is triggered right before the token is issued, and allows you to call a REST API to add claims to the token.
+This article describes how to configure and set up a custom claims provider with the [token issuance start event](custom-claims-provider-overview.md#token-issuance-start-event-listener) type. This event is triggered right before the token is issued, and allows you to call a REST API to add claims to the token.
This how-to guide demonstrates the token issuance start event with a REST API running in Azure Functions and a sample OpenID Connect application. Before you start, take a look at the following video, which demonstrates how to configure an Azure AD custom claims provider with a Function App:
The following screenshot demonstrates how to configure the Azure HTTP trigger fu
In this step, you configure a custom authentication extension, which will be used by Azure AD to call your Azure function. The custom authentication extension contains information about your REST API endpoint, the claims that it parses from your REST API, and how to authenticate to your REST API. Follow these steps to register a custom authentication extension:
-# [Azure portal](#tab/azure-portal)
+# [Microsoft Entra admin center](#tab/entra-admin-center)
-1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Under **Azure services**, select **Azure Active Directory**.
-1. Ensure your user account has the Global Administrator or Application Administrator and Authentication Extensibility Administrator role. Otherwise, learn how to [assign a role](../roles/manage-roles-portal.md).
-1. From the menu, select **Enterprise applications**.
-1. Under **Manage**, select the **Custom authentication extensions**.
-1. Select **Create a custom authentication extension**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Application Administrator](../roles/permissions-reference.md#application-administrator) and [Authentication Administrator](../roles/permissions-reference.md#authentication-administrator).
+1. Browse to **Identity** > **Applications** > **Enterprise applications**.
+1. Select **Custom authentication extensions**, and then select **Create a custom authentication extension**.
1. In **Basics**, select the **tokenIssuanceStart** event and select **Next**.
1. In **Endpoint Configuration**, fill in the following properties:
In this step, you configure a custom authentication extension, which will be use
# [Microsoft Graph](#tab/microsoft-graph)
-Create an Application Registration to authenticate your custom authentication extension to your Azure Function.
+Register an application to authenticate your custom authentication extension to your Azure Function.
-1. Sign in to the [Microsoft Graph Explorer](https://aka.ms/ge) using an account whose home tenant is the tenant you wish to manage your custom authentication extension in.
-1. Set the HTTP method to **POST**.
-1. Paste the URL: `https://graph.microsoft.com/v1.0/applications`
-1. Select **Request Body** and paste the following JSON:
+1. Sign in to [Graph Explorer](https://aka.ms/ge) using an account whose home tenant is the tenant you wish to manage your custom authentication extension in. The account must have the privileges to create and manage an application registration in the tenant.
+2. Run the following request.
- ```json
+ # [HTTP](#tab/http)
+ ```http
+ POST https://graph.microsoft.com/v1.0/applications
+ Content-type: application/json
+
{
- "displayName": "authenticationeventsAPI"
+ "displayName": "authenticationeventsAPI"
    }
    ```
-1. Select **Run Query** to submit the request.
-
-1. Copy the **Application ID** value (*appId*) from the response. You need this value later, which is referred to as the `{authenticationeventsAPI_AppId}`. Also get the object ID of the app (*ID*), which is referred to as `{authenticationeventsAPI_ObjectId}` from the response.
+ # [C#](#tab/csharp)
+ [!INCLUDE [sample-code](~/microsoft-graph/includes/snippets/csharp/v1/tutorial-application-basics-create-app-csharp-snippets.md)]
+
+ # [Go](#tab/go)
+ [!INCLUDE [sample-code](~/microsoft-graph/includes/snippets/go/v1/tutorial-application-basics-create-app-go-snippets.md)]
+
+ # [Java](#tab/java)
+ [!INCLUDE [sample-code](~/microsoft-graph/includes/snippets/jav)]
+
+ # [JavaScript](#tab/javascript)
+ [!INCLUDE [sample-code](~/microsoft-graph/includes/snippets/javascript/v1/tutorial-application-basics-create-app-javascript-snippets.md)]
+
+ # [PHP](#tab/php)
+ Snippet not available.
+
+ # [PowerShell](#tab/powershell)
+ [!INCLUDE [sample-code](~/microsoft-graph/includes/snippets/powershell/v1/tutorial-application-basics-create-app-powershell-snippets.md)]
+
+ # [Python](#tab/python)
+ [!INCLUDE [sample-code](~/microsoft-graph/includes/snippets/python/v1/tutorial-application-basics-create-app-python-snippets.md)]
+
+
-Create a service principal in the tenant for the authenticationeventsAPI app registration:
+3. From the response, record the value of **id** and **appId** of the newly created app registration. These values will be referenced in this article as `{authenticationeventsAPI_ObjectId}` and `{authenticationeventsAPI_AppId}` respectively.
-1. Set the HTTP method to **POST**.
-1. Paste the URL: `https://graph.microsoft.com/v1.0/servicePrincipals`
-1. Select **Request Body** and paste the following JSON:
+Create a service principal in the tenant for the authenticationeventsAPI app registration.
- ```json
- {
- "appId": "{authenticationeventsAPI_AppId}"
- }
- ```
+Still in Graph Explorer, run the following request. Replace `{authenticationeventsAPI_AppId}` with the value of **appId** that you recorded from the previous step.
-1. Select **Run Query** to submit the request.
+```http
+POST https://graph.microsoft.com/v1.0/servicePrincipals
+Content-type: application/json
+
+{
+ "appId": "{authenticationeventsAPI_AppId}"
+}
+```
### Set the App ID URI, access token version, and required resource access

Update the newly created application to set the application ID URI value, the access token version, and the required resource access.
-1. Set the HTTP method to **PATCH**.
-1. Paste the URL: `https://graph.microsoft.com/v1.0/applications/{authenticationeventsAPI_ObjectId}`
-1. Select **Request Body** and paste the following JSON:
+In Graph Explorer, run the following request.
+ - Set the application ID URI value in the *identifierUris* property. Replace `{Function_Url_Hostname}` with the hostname of the `{Function_Url}` you recorded earlier.
+ - Set the `{authenticationeventsAPI_AppId}` value with the **appId** that you recorded earlier.
+ - An example value is `api://authenticationeventsAPI.azurewebsites.net/f4a70782-3191-45b4-b7e5-dd415885dd80`. Take note of this value as you'll use it later in this article in place of `{functionApp_IdentifierUri}`.
- Set the application ID URI value in the *identifierUris* property. Replace `{Function_Url_Hostname}` with the hostname of the `{Function_Url}` you recorded earlier.
-
- Set the `{authenticationeventsAPI_AppId}` value with the App ID generated from the app registration created in the previous step.
-
- An example value would be `api://authenticationeventsAPI.azurewebsites.net/f4a70782-3191-45b4-b7e5-dd415885dd80`. Take note of this value as it is used in following steps and is referenced as `{functionApp_IdentifierUri}`.
-
- ```json
+```http
+PATCH https://graph.microsoft.com/v1.0/applications/{authenticationeventsAPI_ObjectId}
+Content-type: application/json
+
+{
+"identifierUris": [
+ "api://{Function_Url_Hostname}/{authenticationeventsAPI_AppId}"
+],
+"api": {
+ "requestedAccessTokenVersion": 2,
+ "acceptMappedClaims": null,
+ "knownClientApplications": [],
+ "oauth2PermissionScopes": [],
+ "preAuthorizedApplications": []
+},
+"requiredResourceAccess": [
{
- "identifierUris": [
- "api://{Function_Url_Hostname}/{authenticationeventsAPI_AppId}"
- ],
- "api": {
- "requestedAccessTokenVersion": 2,
- "acceptMappedClaims": null,
- "knownClientApplications": [],
- "oauth2PermissionScopes": [],
- "preAuthorizedApplications": []
- },
- "requiredResourceAccess": [
+ "resourceAppId": "00000003-0000-0000-c000-000000000000",
+ "resourceAccess": [
{
- "resourceAppId": "00000003-0000-0000-c000-000000000000",
- "resourceAccess": [
- {
- "id": "214e810f-fda8-4fd7-a475-29461495eb00",
- "type": "Role"
- }
- ]
+ "id": "214e810f-fda8-4fd7-a475-29461495eb00",
+ "type": "Role"
            }
        ]
    }
- ```
-
-1. Select **Run Query** to submit the request.
+]
+}
+```
### Register a custom authentication extension
-Next, you register the custom authentication extension. You register the custom authentication extension by associating it with the App Registration for the Azure Function, and your Azure Function endpoint `{Function_Url}`.
+Next, you register the custom authentication extension. You register the custom authentication extension by associating it with the app registration for the Azure Function, and your Azure Function endpoint `{Function_Url}`.
-1. Set the HTTP method to **POST**.
-1. Paste the URL: `https://graph.microsoft.com/beta/identity/customAuthenticationExtensions`
-1. Select **Request Body** and paste the following JSON:
+1. In Graph Explorer, run the following request. Replace `{Function_Url}` with the hostname of your Azure Function app. Replace `{functionApp_IdentifierUri}` with the identifierUri used in the previous step.
+ - You'll need the *CustomAuthenticationExtension.ReadWrite.All* delegated permission.
- Replace `{Function_Url}` with the hostname of your Azure Function app. Replace `{functionApp_IdentifierUri}` with the identifierUri used in the previous step.
+ # [HTTP](#tab/http)
+ ```http
+ POST https://graph.microsoft.com/beta/identity/customAuthenticationExtensions
+ Content-type: application/json
- ```json
{ "@odata.type": "#microsoft.graph.onTokenIssuanceStartCustomExtension", "displayName": "onTokenIssuanceStartCustomExtension",
Next, you register the custom authentication extension. You register the custom
        ]
    }
    ```
+ # [C#](#tab/csharp)
+ [!INCLUDE [sample-code](~/microsoft-graph/api-reference/bet)]
+
+ # [Go](#tab/go)
+ [!INCLUDE [sample-code](~/microsoft-graph/api-reference/bet)]
+
+ # [Java](#tab/java)
+ [!INCLUDE [sample-code](~/microsoft-graph/api-reference/bet)]
+
+ # [JavaScript](#tab/javascript)
+ [!INCLUDE [sample-code](~/microsoft-graph/api-reference/bet)]
+
+ # [PHP](#tab/php)
+ [!INCLUDE [sample-code](~/microsoft-graph/api-reference/bet)]
+
+ # [PowerShell](#tab/powershell)
+ [!INCLUDE [sample-code](~/microsoft-graph/api-reference/bet)]
+
+ # [Python](#tab/python)
+ [!INCLUDE [sample-code](~/microsoft-graph/api-reference/bet)]
-1. Select **Run Query** to submit the request.
+
-Record the ID value of the created custom claims provider object. The ID is needed in a later step and is referred to as the `{customExtensionObjectId}`.
+1. Record the **id** value of the created custom claims provider object. You'll use the value later in this tutorial in place of `{customExtensionObjectId}`.
### 2.2 Grant admin consent
-After your custom authentication extension is created, you'll be taken to the **Overview** tab of the new custom authentication extension.
+After your custom authentication extension is created, open the **Overview** tab of the new custom authentication extension.
From the **Overview** page, select the **Grant permission** button to give admin consent to the registered app, which allows the custom authentication extension to authenticate to your API. The custom authentication extension uses `client_credentials` to authenticate to the Azure Function App using the `Receive custom authentication extension HTTP requests` permission.
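For illustration only, a `client_credentials` token request of the kind used here follows the standard OAuth 2.0 shape. A minimal sketch, assuming placeholder `{tenant}`, `{client_id}`, and `{client_secret}` values and the `{functionApp_IdentifierUri}` recorded earlier (Azure AD performs this exchange on the extension's behalf; you don't run it yourself):

```http
POST https://login.microsoftonline.com/{tenant}/oauth2/v2.0/token
Content-Type: application/x-www-form-urlencoded

client_id={client_id}
&client_secret={client_secret}
&scope={functionApp_IdentifierUri}/.default
&grant_type=client_credentials
```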
Follow these steps to register the **jwt.ms** web application:
### 3.1 Register a test web application
-1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to **Azure Active Directory**.
-1. Select **App registrations**, and then select **New registration**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Application Administrator](../roles/permissions-reference.md#application-administrator).
+1. Browse to **Identity** > **Applications** > **App registrations**.
+1. Select **New registration**.
1. Enter a **Name** for the application. For example, **My Test application**.
1. Under **Supported account types**, select **Accounts in this organizational directory only**.
1. In the **Select a platform** dropdown in **Redirect URI**, select **Web** and then enter `https://jwt.ms` in the URL text box.
The following screenshot shows how to register the *My Test application*.
### 3.2 Get the application ID
-In your app registration, under **Overview**, copy the **Application (client) ID**. The app ID is referred to as the `{App_to_enrich_ID}` in later steps.
In your app registration, under **Overview**, copy the **Application (client) ID**. The app ID is referred to as the `{App_to_enrich_ID}` in later steps. In Microsoft Graph, it's referenced by the **appId** property.
:::image type="content" border="false"source="media/custom-extension-get-started/get-the-test-application-id.png" alt-text="Screenshot that shows how to copy the application ID.":::
For tokens to be issued with claims incoming from the custom authentication exte
Follow these steps to connect the *My Test application* with your custom authentication extension:
-# [Azure portal](#tab/azure-portal)
+# [Microsoft Entra admin center](#tab/entra-admin-center)
First assign the custom authentication extension as a custom claims provider source:
-1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to **Azure Active Directory**.
-1. Select **App registrations**, and find the *My Test application* registration you created.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Application Administrator](../roles/permissions-reference.md#application-administrator).
+1. Browse to **Identity** > **Applications** > **App registrations**, and find the *My Test application* registration you created.
1. In the **Overview** page, under **Managed application in local directory**, select **My Test application**.
1. Under **Manage**, select **Single sign-on**.
1. Under **Attributes & Claims**, select **Edit**.
Next, assign the attributes from the custom claims provider, which should be iss
# [Microsoft Graph](#tab/microsoft-graph)
-First create an event listener to trigger a custom authentication extension using the token issuance start event:
-
-1. Sign in to the [Microsoft Graph Explorer](https://aka.ms/ge) using an account whose home tenant is the tenant you wish to manage your custom authentication extension in.
-1. Set the HTTP method to **POST**.
-1. Paste the URL: `https://graph.microsoft.com/beta/identity/authenticationEventListeners`
-1. Select **Request Body** and paste the following JSON:
+First create an event listener to trigger a custom authentication extension for the *My Test application* using the token issuance start event.
- Replace `{App_to_enrich_ID}` with the app ID of *My Test application* recorded earlier. Replace `{customExtensionObjectId}` with the custom authentication extension ID recorded earlier.
+1. Sign in to [Graph Explorer](https://aka.ms/ge) using an account whose home tenant is the tenant you wish to manage your custom authentication extension in.
+1. Run the following request. Replace `{App_to_enrich_ID}` with the app ID of *My Test application* recorded earlier. Replace `{customExtensionObjectId}` with the custom authentication extension ID recorded earlier.
+ - You'll need the *EventListener.ReadWrite.All* delegated permission.
- ```json
+ # [HTTP](#tab/http)
+ ```http
+ POST https://graph.microsoft.com/beta/identity/authenticationEventListeners
+ Content-type: application/json
+
{ "@odata.type": "#microsoft.graph.onTokenIssuanceStartListener", "conditions": {
First create an event listener to trigger a custom authentication extension usin
    }
    ```
-1. Select **Run Query** to submit the request.
+ # [C#](#tab/csharp)
+ [!INCLUDE [sample-code](~/microsoft-graph/api-reference/bet)]
+
+ # [Go](#tab/go)
+ [!INCLUDE [sample-code](~/microsoft-graph/api-reference/bet)]
+
+ # [Java](#tab/java)
+ [!INCLUDE [sample-code](~/microsoft-graph/api-reference/bet)]
+
+ # [JavaScript](#tab/javascript)
+ [!INCLUDE [sample-code](~/microsoft-graph/api-reference/bet)]
+
+ # [PHP](#tab/php)
+ [!INCLUDE [sample-code](~/microsoft-graph/api-reference/bet)]
+
+ # [PowerShell](#tab/powershell)
+ [!INCLUDE [sample-code](~/microsoft-graph/api-reference/bet)]
+
+ # [Python](#tab/python)
+ [!INCLUDE [sample-code](~/microsoft-graph/api-reference/bet)]
+
+
++
+Next, create the claims mapping policy, which describes which claims can be issued to an application from a custom claims provider.
-Next, create the claims mapping policy, which describes which claims can be issued to an application from a custom claims provider:
+1. Still in Graph Explorer, run the following request. You'll need the *Policy.ReadWrite.ApplicationConfiguration* delegated permission.
-1. Set the HTTP method to **POST**.
-1. Paste the URL: `https://graph.microsoft.com/v1.0/policies/claimsmappingpolicies`
-1. Select **Request Body** and paste the following JSON:
- ```json
+ # [HTTP](#tab/http)
+ ```http
+ POST https://graph.microsoft.com/v1.0/policies/claimsMappingPolicies
+ Content-type: application/json
+ { "definition": [ "{\"ClaimsMappingPolicy\":{\"Version\":1,\"IncludeBasicClaimSet\":\"true\",\"ClaimsSchema\":[{\"Source\":\"CustomClaimsProvider\",\"ID\":\"DateOfBirth\",\"JwtClaimType\":\"dob\"},{\"Source\":\"CustomClaimsProvider\",\"ID\":\"CustomRoles\",\"JwtClaimType\":\"my_roles\"},{\"Source\":\"CustomClaimsProvider\",\"ID\":\"CorrelationId\",\"JwtClaimType\":\"correlationId\"},{\"Source\":\"CustomClaimsProvider\",\"ID\":\"ApiVersion\",\"JwtClaimType\":\"apiVersion \"},{\"Value\":\"tokenaug_V2\",\"JwtClaimType\":\"policy_version\"}]}}"
Next, create the claims mapping policy, which describes which claims can be issu
"isOrganizationDefault": false } ```
+ # [C#](#tab/csharp)
+ [!INCLUDE [sample-code](~/microsoft-graph/api-reference/v1.0/includes/snippets/csharp/create-claimsmappingpolicy-from-claimsmappingpolicies-csharp-snippets.md)]
+
+ # [Go](#tab/go)
+ [!INCLUDE [sample-code](~/microsoft-graph/api-reference/v1.0/includes/snippets/go/create-claimsmappingpolicy-from-claimsmappingpolicies-go-snippets.md)]
+
+ # [Java](#tab/java)
+ [!INCLUDE [sample-code](~/microsoft-graph/api-reference/v1.0/includes/snippets/jav)]
+
+ # [JavaScript](#tab/javascript)
+ [!INCLUDE [sample-code](~/microsoft-graph/api-reference/v1.0/includes/snippets/javascript/create-claimsmappingpolicy-from-claimsmappingpolicies-javascript-snippets.md)]
+
+ # [PHP](#tab/php)
+ [!INCLUDE [sample-code](~/microsoft-graph/api-reference/v1.0/includes/snippets/php/create-claimsmappingpolicy-from-claimsmappingpolicies-php-snippets.md)]
+
+ # [PowerShell](#tab/powershell)
+ [!INCLUDE [sample-code](~/microsoft-graph/api-reference/v1.0/includes/snippets/powershell/create-claimsmappingpolicy-from-claimsmappingpolicies-powershell-snippets.md)]
+
+ # [Python](#tab/python)
+ [!INCLUDE [sample-code](~/microsoft-graph/api-reference/v1.0/includes/snippets/python/create-claimsmappingpolicy-from-claimsmappingpolicies-python-snippets.md)]
+
+
-1. Record the `ID` generated in the response, later it's referred to as `{claims_mapping_policy_ID}`.
-1. Select **Run Query** to submit the request.
+2. Record the `ID` generated in the response; later, it's referred to as `{claims_mapping_policy_ID}`.
+
+Get the service principal object ID:
+
+1. Run the following request in Graph Explorer. Replace `{App_to_enrich_ID}` with the **appId** of *My Test Application*.
+
+ ```http
+ GET https://graph.microsoft.com/v1.0/servicePrincipals(appId='{App_to_enrich_ID}')
+ ```
-Get the `servicePrincipal` objectId:
+Record the value of **id**.
-1. Set the HTTP method to **GET**.
-1. Paste the URL: `https://graph.microsoft.com/v1.0/servicePrincipals(appId='{App_to_enrich_ID}')/claimsMappingPolicies/$ref`. Replace `{App_to_enrich_ID}` with *My Test Application* App ID.
-1. Record the `id` value, later it's referred to as `{test_App_Service_Principal_ObjectId}`.
+Assign the claims mapping policy to the service principal of *My Test Application*.
-Assign the claims mapping policy to the `servicePrincipal` of *My Test Application*:
+1. Run the following request in Graph Explorer. You'll need the *Policy.ReadWrite.ApplicationConfiguration* and *Application.ReadWrite.All* delegated permissions.
-1. Set the HTTP method to **POST**.
-1. Paste the URL: `https://graph.microsoft.com/v1.0/servicePrincipals/{test_App_Service_Principal_ObjectId}/claimsMappingPolicies/$ref`
-1. Select **Request Body** and paste the following JSON:
+ # [HTTP](#tab/http)
+ ```http
+ POST https://graph.microsoft.com/v1.0/servicePrincipals/{test_App_Service_Principal_ObjectId}/claimsMappingPolicies/$ref
+ Content-type: application/json
- ```json
{ "@odata.id": "https://graph.microsoft.com/v1.0/policies/claimsMappingPolicies/{claims_mapping_policy_ID}" } ```
-1. Select **Run Query** to submit the request.
+ # [C#](#tab/csharp)
+ [!INCLUDE [sample-code](~/microsoft-graph/api-reference/v1.0/includes/snippets/csharp/create-claimsmappingpolicy-from-serviceprincipal-csharp-snippets.md)]
+
+ # [Go](#tab/go)
+ [!INCLUDE [sample-code](~/microsoft-graph/api-reference/v1.0/includes/snippets/go/create-claimsmappingpolicy-from-serviceprincipal-go-snippets.md)]
+
+ # [Java](#tab/java)
+ [!INCLUDE [sample-code](~/microsoft-graph/api-reference/v1.0/includes/snippets/jav)]
+
+ # [JavaScript](#tab/javascript)
+ [!INCLUDE [sample-code](~/microsoft-graph/api-reference/v1.0/includes/snippets/javascript/create-claimsmappingpolicy-from-serviceprincipal-javascript-snippets.md)]
+
+ # [PHP](#tab/php)
+ [!INCLUDE [sample-code](~/microsoft-graph/api-reference/v1.0/includes/snippets/php/create-claimsmappingpolicy-from-serviceprincipal-php-snippets.md)]
+
+ # [PowerShell](#tab/powershell)
+ [!INCLUDE [sample-code](~/microsoft-graph/api-reference/v1.0/includes/snippets/powershell/create-claimsmappingpolicy-from-serviceprincipal-powershell-snippets.md)]
+
+ # [Python](#tab/python)
+ [!INCLUDE [sample-code](~/microsoft-graph/api-reference/v1.0/includes/snippets/python/create-claimsmappingpolicy-from-serviceprincipal-python-snippets.md)]
+
+
If you configured the [Microsoft identity provider](#step-5-protect-your-azure-f
1. Under the **App registration**, enter the application ID (client ID) of the *Azure Functions authentication events API* app registration [you created previously](#step-2-register-a-custom-authentication-extension).
-1. Go to your Azure AD tenant in which your custom authentication extension is registered, and select **Azure Active Directory** > **App registrations**.
+1. In the Microsoft Entra admin center:
1. Select the *Azure Functions authentication events API* app registration [you created previously](#step-2-register-a-custom-authentication-extension).
1. Select **Certificates & secrets** > **Client secrets** > **New client secret**.
1. Add a description for your client secret.
active-directory Custom Extension Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/custom-extension-troubleshoot.md
Title: Troubleshoot a custom claims provider
description: Troubleshoot and monitor your custom claims provider API. Learn how to use logging and Azure AD sign-in logs to find errors and issues in your custom claims provider API. -+
Azure AD sign-in logs also integrate with [Azure Monitor](../../azure-monitor/in
To access the Azure AD sign-in logs:
-1. Sign in to the [Azure portal](https://portal.azure.com).
-1. In the **Enterprise apps** experience for your given application, select on the **Sign-in** logs tab.
-1. Select the latest sign-in log.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator).
+1. Browse to **Identity** > **Applications** > **Enterprise applications**.
+1. Select **Sign-in logs**, and then select the latest sign-in log.
1. For more details, select the **Authentication Events** tab. Information related to the custom authentication extension REST API call is displayed, including any [error codes](#error-codes-reference). :::image type="content" source="media/custom-extension-troubleshoot/authentication-events.png" alt-text="Screenshot that shows the authentication events information." :::
Use the following table to diagnose an error code.
Your REST API is protected by an Azure AD access token. You can test your API by obtaining an access token with the [application registration](custom-extension-get-started.md#22-grant-admin-consent) associated with the custom authentication extension. After you acquire an access token, pass it in the HTTP `Authorization` header. To obtain an access token, follow these steps:
-1. Sign in to the [Azure portal](https://portal.azure.com) with your Azure administrator account.
-1. Select **Azure Active Directory** > **App registrations**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator).
+1. Browse to **Identity** > **Applications** > **App registrations**.
1. Select the *Azure Functions authentication events API* app registration [you created previously](custom-extension-get-started.md#step-2-register-a-custom-authentication-extension).
1. Copy the [application ID](custom-extension-get-started.md#22-grant-admin-consent).
1. If you haven't created an app secret, follow these steps:
active-directory Delegated And App Perms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/delegated-and-app-perms.md
- Title: Differences between delegated and app permissions
-description: Learn about delegated and application permissions, how they are used by clients and exposed by resources for applications you are developing with Azure AD
--------- Previously updated : 11/10/2022----
-# How to recognize differences between delegated and application permissions
-
-## Recommended documents
-- Learn more about how client applications use [delegated and application permission requests](developer-glossary.md#permissions) to access resources.
-- Learn about [delegated and application permissions](permissions-consent-overview.md).
-- See step-by-step instructions on how to [configure a client application's permission requests](quickstart-configure-app-access-web-apis.md)
-- For more depth, learn how resource applications expose [scopes](developer-glossary.md#scopes) and [application roles](developer-glossary.md#roles) to client applications, which manifest as delegated and application permissions respectively in the Azure portal.
-
-## Next steps
-[AzureAD Microsoft Q&A](/answers/topics/azure-active-directory.html)
active-directory Deploy Web App Authentication Pipeline https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/deploy-web-app-authentication-pipeline.md
Add a [service connection](/azure/devops/pipelines/library/service-endpoints) so
An application is also created in your Azure AD tenant that provides an identity for the pipeline. You need the display name of the app registration in later steps. To find the display name:
-1. Sign into the [Entra admin portal](https://entra.microsoft.com/).
-1. Select **App registrations** in the left navigation pane, and then the **All applications**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Application Developer](../roles/permissions-reference.md#application-developer).
+1. Browse to **Identity** > **Applications** > **App registrations** > **All applications**.
1. Find the display name of the app registration, which is of the form `{organization}-{project}-{guid}`.
Grant the service connection permission to access the pipeline:
The `DeployAzureResources` stage that you create in the next section uses several values to create and deploy resources to Azure:
-- The Azure AD tenant ID (find in the [Entra admin portal](https://entra.microsoft.com/)).
+- The Azure AD tenant ID (find in the [Microsoft Entra admin center](https://entra.microsoft.com/)).
- The region, or location, where the resources are deployed.
- A resource group name.
- The App Service service plan name.
Next, add a stage to the pipeline that deploys Azure resources. The pipeline us
The inline script runs in the context of the pipeline. Assign the [Application Administrator](/azure/active-directory/roles/permissions-reference#application-administrator) role to the app so the script can create app registrations:
-1. Sign into the [Entra admin portal](https://entra.microsoft.com/).
-1. In the left navigation pane, select **Roles & admins**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com).
+1. Browse to **Identity** > **Roles & admins** > **Roles & admins**.
1. Select **Application Administrator** from the list of built-in roles and then **Add assignment**.
1. Search for the pipeline app registration by display name.
1. Select the app registration from the list and select **Add**.
A `DeployWebApp` stage is defined with several tasks:
- [DownloadBuildArtifacts@1](/azure/devops/pipelines/tasks/reference/download-build-artifacts-v1) downloads the build artifacts that were published to the pipeline in a previous stage.
- [AzureRmWebAppDeployment@4](/azure/devops/pipelines/tasks/reference/azure-rm-web-app-deployment-v4) deploys the web app to App Service.
-View the deployed website on App Service. Navigate to your App Service in Azure portal and select the instance's **Default domain**: `https://pipelinetestwebapp.azurewebsites.net`.
+View the deployed website on App Service. Navigate to your App Service and select the instance's **Default domain**: `https://pipelinetestwebapp.azurewebsites.net`.
:::image type="content" alt-text="Screen shot that shows the default domain URL." source="./media/deploy-web-app-authentication-pipeline/default-domain.png" border="true":::
Save your changes and run the pipeline.
## Verify limited access to the web app
-To verify that access to your app is limited to users in your organization, navigate to your App Service in the [Azure portal](https://portal.azure.com) and select the instance's **Default domain**: `https://pipelinetestwebapp.azurewebsites.net`.
+To verify that access to your app is limited to users in your organization, navigate to your App Service and select the instance's **Default domain**: `https://pipelinetestwebapp.azurewebsites.net`.
You should be directed to a secured sign-in page, verifying that unauthenticated users aren't allowed access to the site. Sign in as a user in your organization to gain access to the site.
Clean up your Azure resources and Azure DevOps environment so you're not charged
### Delete the resource group
-In the Azure portal, select **Resource groups** from the menu and select the resource group that contains your deployed web app.
+Select **Resource groups** from the menu and select the resource group that contains your deployed web app.
Select **Delete resource group** to delete the resource group and all the resources.
Choose this option if you don't need your DevOps project for future reference. T
### Delete app registrations in Azure AD
-In the [Entra admin center](https://entra.microsoft.com/), select **Applications** > **App registrations** > **All applications**.
+In the [Microsoft Entra admin center](https://entra.microsoft.com/), select **Identity** > **Applications** > **App registrations** > **All applications**.
Select the application for the pipeline, the display name has the form `{organization}-{project}-{guid}`, and delete it.
active-directory Developer Glossary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/developer-glossary.md
Title: Glossary of terms in the Microsoft identity platform
-description: Definitions of terms commonly found in Microsoft identity platform documentation, Azure portal, and authentication SDKs like the Microsoft Authentication Library (MSAL).
+description: Definitions of terms commonly found in Microsoft identity platform documentation, Microsoft Entra admin center, and authentication SDKs like the Microsoft Authentication Library (MSAL).
# Glossary: Microsoft identity platform
-You see these terms when you use our documentation, the Azure portal, our authentication libraries, and the Microsoft Graph API. Some terms are Microsoft-specific while others are related to protocols like OAuth or other technologies you use with the Microsoft identity platform.
+You see these terms when you use our documentation, the Microsoft Entra admin center, our authentication libraries, and the Microsoft Graph API. Some terms are Microsoft-specific while others are related to protocols like OAuth or other technologies you use with the Microsoft identity platform.
## Access token
The application ID, or _[client ID](https://datatracker.ietf.org/doc/html/rfc674
## Application manifest
-A feature provided by the [Azure portal], which produces a JSON representation of the application's identity configuration, used as a mechanism for updating its associated [Application][Graph-App-Resource] and [ServicePrincipal][Graph-Sp-Resource] entities. See [Understanding the Azure Active Directory application manifest][AAD-App-Manifest] for more details.
+An application manifest is a feature that produces a JSON representation of the application's identity configuration, used as a mechanism for updating its associated [Application][Graph-App-Resource] and [ServicePrincipal][Graph-Sp-Resource] entities. See [Understanding the Azure Active Directory application manifest][AAD-App-Manifest] for more details.
## Application object
-When you register/update an application in the [Azure portal], the portal creates/updates both an application object and a corresponding [service principal object](#service-principal-object) for that tenant. The application object _defines_ the application's identity configuration globally (across all tenants where it has access), providing a template from which its corresponding service principal object(s) are _derived_ for use locally at run-time (in a specific tenant).
+When you register/update an application, both an application object and a corresponding [service principal object](#service-principal-object) are created/updated for that tenant. The application object _defines_ the application's identity configuration globally (across all tenants where it has access), providing a template from which its corresponding service principal object(s) are _derived_ for use locally at run-time (in a specific tenant).
For more information, see [Application and Service Principal Objects][AAD-App-SP-Objects].
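To see the two objects side by side, you can query Microsoft Graph for the application object and its derived service principal by their shared **appId** (placeholder value assumed):

```http
GET https://graph.microsoft.com/v1.0/applications?$filter=appId eq '{appId}'
GET https://graph.microsoft.com/v1.0/servicePrincipals?$filter=appId eq '{appId}'
```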
A [client application](#client-application) gains access to a [resource server](
They also surface during the [consent](#consent) process, giving the administrator or resource owner the opportunity to grant/deny the client access to resources in their tenant.
-Permission requests are configured on the **API permissions** page for an application in the [Azure portal], by selecting the desired "Delegated Permissions" and "Application Permissions" (the latter requires membership in the Global Administrator role). Because a [public client](#client-application) can't securely maintain credentials, it can only request delegated permissions, while a [confidential client](#client-application) has the ability to request both delegated and application permissions. The client's [application object](#application-object) stores the declared permissions in its [requiredResourceAccess property][Graph-App-Resource].
+Permission requests are configured on the **API permissions** page for an application, by selecting the desired "Delegated Permissions" and "Application Permissions" (the latter requires membership in the Global Administrator role). Because a [public client](#client-application) can't securely maintain credentials, it can only request delegated permissions, while a [confidential client](#client-application) has the ability to request both delegated and application permissions. The client's [application object](#application-object) stores the declared permissions in its [requiredResourceAccess property][Graph-App-Resource].
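As an illustrative sketch, the declared permissions are stored as entries like the following, where `"type": "Scope"` marks a delegated permission and `"type": "Role"` marks an application permission (the permission IDs shown are placeholders; the resource app ID shown is Microsoft Graph):

```json
"requiredResourceAccess": [
    {
        "resourceAppId": "00000003-0000-0000-c000-000000000000",
        "resourceAccess": [
            { "id": "{delegated-permission-id}", "type": "Scope" },
            { "id": "{application-permission-id}", "type": "Role" }
        ]
    }
]
```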
## Refresh token
Like [scopes](#scopes), app roles provide a way for a [resource server](#resourc
App roles can support two assignment types: "user" assignment implements role-based access control for users/groups that require access to the resource, while "application" assignment implements the same for [client applications](#client-application) that require access. An app role can be defined as user-assignable, app-assignable, or both.
-Roles are resource-defined strings (for example "Expense approver", "Read-only", "Directory.ReadWrite.All"), managed in the [Azure portal] via the resource's [application manifest](#application-manifest), and stored in the resource's [appRoles property][Graph-Sp-Resource]. The Azure portal is also used to assign users to "user" assignable roles, and configure client [application permissions](#permissions) to request "application" assignable roles.
+Roles are resource-defined strings (for example "Expense approver", "Read-only", "Directory.ReadWrite.All"), managed via the resource's [application manifest](#application-manifest), and stored in the resource's [appRoles property][Graph-Sp-Resource]. Users can be assigned to "user" assignable roles and client [application permissions](#permissions) can be configured to request "application" assignable roles.
-For a detailed discussion of the application roles exposed by the Microsoft Graph API, see [Graph API Permission Scopes][Graph-Perm-Scopes]. For a step-by-step implementation example, see [Add or remove Azure role assignments using the Azure portal][AAD-RBAC].
+For a detailed discussion of the application roles exposed by the Microsoft Graph API, see [Graph API Permission Scopes][Graph-Perm-Scopes]. For a step-by-step implementation example, see [Add or remove Azure role assignments][AAD-RBAC].
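A hypothetical `appRoles` entry in the application manifest might look like this sketch (the GUID is a placeholder):

```json
"appRoles": [
    {
        "allowedMemberTypes": [ "User", "Application" ],
        "description": "Approvers can mark expenses as approved.",
        "displayName": "Expense approver",
        "id": "{role-guid}",
        "isEnabled": true,
        "value": "ExpenseApprover"
    }
]
```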
## Scopes

Like [roles](#roles), scopes provide a way for a [resource server](#resource-server) to govern access to its protected resources. Scopes are used to implement [scope-based][OAuth2-Access-Token-Scopes] access control, for a [client application](#client-application) that has been given delegated access to the resource by its owner.
-Scopes are resource-defined strings (for example "Mail.Read", "Directory.ReadWrite.All"), managed in the [Azure portal] via the resource's [application manifest](#application-manifest), and stored in the resource's [oauth2Permissions property][Graph-Sp-Resource]. The Azure portal is also used to configure client application [delegated permissions](#permissions) to access a scope.
+Scopes are resource-defined strings (for example "Mail.Read", "Directory.ReadWrite.All"), managed via the resource's [application manifest](#application-manifest), and stored in the resource's [oauth2Permissions property][Graph-Sp-Resource]. Client application [delegated permissions](#permissions) can be configured to access a scope.
A best practice naming convention is to use a "resource.operation.constraint" format. For a detailed discussion of the scopes exposed by Microsoft Graph API, see [Graph API Permission Scopes][Graph-Perm-Scopes]. For scopes exposed by Microsoft 365 services, see [Microsoft 365 API permissions reference][O365-Perm-Ref].
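For example, a hypothetical scope following that convention could be declared in the manifest roughly like this sketch (the GUID, descriptions, and display names are placeholders):

```json
"oauth2Permissions": [
    {
        "adminConsentDescription": "Allows the app to read files on behalf of the signed-in user.",
        "adminConsentDisplayName": "Read user files",
        "id": "{scope-guid}",
        "isEnabled": true,
        "type": "User",
        "userConsentDescription": "Allows this app to read your files.",
        "userConsentDisplayName": "Read your files",
        "value": "Files.Read"
    }
]
```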
A signed document containing claims, such as an OAuth 2.0 token or SAML 2.0 asse
## Service principal object
-When you register/update an application in the [Azure portal], the portal creates/updates both an [application object](#application-object) and a corresponding service principal object for that tenant. The application object _defines_ the application's identity configuration globally (across all tenants where the associated application has been granted access), and is the template from which its corresponding service principal object(s) are _derived_ for use locally at run-time (in a specific tenant).
+When you register/update an application, both an [application object](#application-object) and a corresponding service principal object are created/updated for that tenant. The application object _defines_ the application's identity configuration globally (across all tenants where the associated application has been granted access), and is the template from which its corresponding service principal object(s) are _derived_ for use locally at run-time (in a specific tenant).
For more information, see [Application and Service Principal Objects][AAD-App-SP-Objects].
An instance of an Azure AD directory is referred to as an Azure AD tenant. It pr
- authentication of user accounts and registered applications
- REST endpoints required to support various protocols including OAuth 2.0 and SAML, including the [authorization endpoint](#authorization-endpoint), [token endpoint](#token-endpoint) and the "common" endpoint used by [multi-tenant applications](#multi-tenant-application).
-Azure AD tenants are created/associated with Azure and Microsoft 365 subscriptions during sign-up, providing Identity & Access Management features for the subscription. Azure subscription administrators can also create additional Azure AD tenants via the Azure portal. See [How to get an Azure Active Directory tenant][AAD-How-To-Tenant] for details on the various ways you can get access to a tenant. See [Associate or add an Azure subscription to your Azure Active Directory tenant][AAD-How-Subscriptions-Assoc] for details on the relationship between subscriptions and an Azure AD tenant, and for instructions on how to associate or add a subscription to an Azure AD tenant.
+Azure AD tenants are created/associated with Azure and Microsoft 365 subscriptions during sign-up, providing Identity & Access Management features for the subscription. Azure subscription administrators can also create additional Azure AD tenants. See [How to get an Azure Active Directory tenant][AAD-How-To-Tenant] for details on the various ways you can get access to a tenant. See [Associate or add an Azure subscription to your Azure Active Directory tenant][AAD-How-Subscriptions-Assoc] for details on the relationship between subscriptions and an Azure AD tenant, and for instructions on how to associate or add a subscription to an Azure AD tenant.
## Token endpoint
Many of the terms in this glossary are related to the OAuth 2.0 and OpenID Conne
[AAD-Multi-Tenant-Overview]:howto-convert-app-to-be-multi-tenant.md
[AAD-Security-Token-Claims]: ./authentication-vs-authorization.md#claims-in-azure-ad-security-tokens
[AAD-Tokens-Claims]:access-tokens.md
-[Azure portal]: https://portal.azure.com
[AAD-RBAC]: ../../role-based-access-control/role-assignments-portal.md
[JWT]: https://tools.ietf.org/html/rfc7519
[Microsoft-Graph]: https://developer.microsoft.com/graph
active-directory Developer Support Help Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/developer-support-help-options.md
If you need an answer to a question or help in solving a problem not covered in
<img alt='Azure support' src='./media/common/logo_azure.svg'> </div>
-Explore the range of [Azure support options and choose the plan](https://azure.microsoft.com/support/plans) that best fits you. There are two options to create and manage support requests in the Azure portal:
+Explore the range of [Azure support options and choose the plan](https://azure.microsoft.com/support/plans) that best fits you. There are two options to create and manage support requests in the Microsoft Entra admin center:
-- If you already have an Azure Support Plan, [open a support request here](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest).
+- If you already have an Azure Support Plan, [open a support request here](https://entra.microsoft.com/#view/Microsoft_Azure_Support/NewSupportRequestV3Blade/callerName/ActiveDirectory/issueType/technical).
-- If you're using Azure AD for customers (preview), the support request feature is currently unavailable in customer tenants. However, you can use the **Give Feedback** link on the **New support request** page to provide feedback. Or, you can switch to your Azure AD workforce tenant and [open a support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest).
+- If you're using Azure AD for customers (preview), the support request feature is currently unavailable in customer tenants. However, you can use the **Give Feedback** link on the **New support request** page to provide feedback. Or, you can switch to your Azure AD workforce tenant and [open a support request](https://entra.microsoft.com/#view/Microsoft_Azure_Support/NewSupportRequestV3Blade/callerName/ActiveDirectory/issueType/technical).
- If you're not an Azure customer, you can open a support request with [Microsoft Support for business](https://support.serviceshub.microsoft.com/supportforbusiness).
active-directory Enterprise App Role Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/enterprise-app-role-management.md
You can customize the role claim in the access token that is received after an a
Use the following steps to locate the enterprise application:
-1. Sign in to the [Azure portal](https://portal.azure.com).
-1. In the left pane, select **Azure Active Directory**.
-1. Select **Enterprise applications**, and then select **All applications**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator).
+1. Browse to **Identity** > **Applications** > **Enterprise applications** > **All applications**.
1. Enter the name of the existing application in the search box, and then select the application from the search results.
1. After the application is selected, copy the object ID from the overview pane.
- :::image type="content" source="media/enterprise-app-role-management/record-objectid.png" alt-text="Screenshot that shows how to locate and record the object identifier for the application.":::
-
## Add roles

Use the Microsoft Graph Explorer to add roles to an enterprise application.
Use the Microsoft Graph Explorer to add roles to an enterprise application.
Update the attributes to define the role claim that is included in the token.
-1. Locate the application in the Azure portal, and then select **Single sign-on** in the left menu.
+1. Locate the application in the Microsoft Entra admin center, and then select **Single sign-on** in the left menu.
1. In the **Attributes & Claims** section, select **Edit**. 1. Select **Add new claim**. 1. In the **Name** box, type the attribute name. This example uses **Role Name** as the claim name.
Update the attributes to define the role claim that is included in the token.
1. From the **Source attribute** list, select **user.assignedroles**. 1. Select **Save**. The new **Role Name** attribute should now appear in the **Attributes & Claims** section. The claim should now be included in the access token when signing into the application.
- :::image type="content" source="media/enterprise-app-role-management/attributes-summary.png" alt-text="Screenshot that shows a display of the list of attributes and claims defined for the application.":::
- ## Assign roles After the service principal is patched with more roles, you can assign users to the respective roles.
-1. In the Azure portal, locate the application to which the role was added.
+1. Locate the application to which the role was added in the Microsoft Entra admin center.
1. Select **Users and groups** in the left menu and then select the user that you want to assign the new role. 1. Select **Edit assignment** at the top of the pane to change the role. 1. Select **None Selected**, select the role from the list, and then select **Select**. 1. Select **Assign** to assign the role to the user.
- :::image type="content" source="media/enterprise-app-role-management/assign-role.png" alt-text="Screenshot that shows how to assign a role to a user of an application.":::
- ## Update roles To update an existing role, perform the following steps:
active-directory How Applications Are Added https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/how-applications-are-added.md
Last updated 10/26/2022 -+
active-directory Howto Add App Roles In Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/howto-add-app-roles-in-apps.md
To create an app role by using the Azure portal's user interface:
When the app role is set to enabled, any users, applications, or groups that are assigned the role have it included in their tokens. These can be access tokens, when your app is the API being called by an app, or ID tokens, when your app is signing in a user. If the role is set to disabled, it becomes inactive and can no longer be assigned. Any previous assignees still have the app role included in their tokens, but it has no effect because the role is no longer actively assignable.
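In the app registration manifest, a role created this way appears in the `appRoles` collection. A minimal sketch (the GUID is a placeholder; `value` is what appears in the `roles` claim):

```json
"appRoles": [
  {
    "allowedMemberTypes": [ "User" ],
    "description": "Writers can create tasks.",
    "displayName": "Writer",
    "id": "22222222-2222-2222-2222-222222222222",
    "isEnabled": true,
    "value": "Writer"
  }
]
```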
+## Assign application owner
+
+If you have not already done so, you'll need to assign yourself as the application owner.
+
+1. In your app registration, under **Manage**, select **Owners**, and **Add owners**.
+1. In the new window, find and select the owner(s) that you want to assign to the application. Selected owners appear in the right panel. When you're done, confirm with **Select**. The app owner(s) now appear in the owners list.
+
+>[!NOTE]
+>
+> Ensure that both the API application and the application you want to add permissions to have an owner; otherwise, the API won't be listed when requesting API permissions.
+ ## Assign users and groups to roles Once you've added app roles in your application, you can assign users and groups to the roles. Assignment of users and groups to roles can be done through the portal's UI, or programmatically using [Microsoft Graph](/graph/api/user-post-approleassignments). When the users assigned to the various app roles sign in to the application, their tokens will have their assigned roles in the `roles` claim.
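A minimal sketch of that Graph call (IDs are placeholders): `principalId` is the user's object ID, `resourceId` is the object ID of the application's service principal, and `appRoleId` is the `id` of the app role being assigned:

```http
POST https://graph.microsoft.com/v1.0/users/{user-object-id}/appRoleAssignments
Content-Type: application/json

{
  "principalId": "{user-object-id}",
  "resourceId": "{service-principal-object-id}",
  "appRoleId": "{app-role-id}"
}
```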
active-directory Howto Add Terms Of Service Privacy Statement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/howto-add-terms-of-service-privacy-statement.md
Examples: `https://myapp.com/terms-of-service` and `https://myapp.com/privacy-st
When the terms of service and privacy statement are ready, you can add links to these documents in your app using one of these methods:
-* [Through the Azure portal](#azure-portal)
+* [Through the Microsoft Entra admin center](#entra-admin-center)
* [Using the app object JSON](#app-object-json) * [Using the Microsoft Graph API](#msgraph-rest-api)
-### <a name="azure-portal"></a>Using the Azure portal
+### <a name="entra-admin-center"></a>Using the Microsoft Entra admin center
[!INCLUDE [portal updates](~/articles/active-directory/includes/portal-update.md)]
-Follow these steps in the Azure portal.
+Follow these steps to add links:
-1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a> and select the correct Azure AD tenant(not B2C).
-2. Navigate to the **App registrations** section and select your app.
-3. Under **Manage**, select **Branding & properties**.
-4. Fill out the **Terms of service URL** and **Privacy statement URL** fields.
-5. Select **Save**.
-
- ![App properties contains terms of service and privacy statement URLs](./media/howto-add-terms-of-service-privacy-statement/azure-portal-terms-service-privacy-statement-urls.png)
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Application Developer](../roles/permissions-reference.md#application-developer).
+1. Browse to **Identity** > **User experiences** > **Company branding**.
+1. Select **Getting started**, and then select **Edit** for the **Default sign-in experience**.
+1. Select **Footer** and fill out the URL for **Terms of Use** and **Privacy & Cookies**.
+1. Select **Review + save**.
### <a name="app-object-json"></a>Using the app object JSON
-If you prefer to modify the app object JSON directly, you can use the manifest editor in the Azure portal or Application Registration Portal to include links to your app's terms of service and privacy statement.
+If you prefer to modify the app object JSON directly, you can use the manifest editor to include links to your app's terms of service and privacy statement.
1. Navigate to the **App Registrations** section and select your app. 2. Open the **Manifest** pane.
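In the Microsoft Graph application manifest format, these links live under the `info` property; older manifest formats name the properties differently, so treat this as a sketch:

```json
"info": {
    "termsOfServiceUrl": "https://myapp.com/terms-of-service",
    "privacyStatementUrl": "https://myapp.com/privacy-statement"
}
```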
active-directory Howto Call A Web Api With Curl https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/howto-call-a-web-api-with-curl.md
zone_pivot_groups: web-api-howto-prereq
::: zone pivot="no-api"
-This article shows you how to call a protected ASP.NET Core web API using Client URL (cURL). cURL is a command line tool that developers use to transfer data to and from a server. In this article, you'll register a web app and a web API in a tenant on the Azure portal. The web app is used to get an access token generated by the Microsoft identity platform. Next, you'll use the token to make an authorized call to the web API using cURL.
+This article shows you how to call a protected ASP.NET Core web API using Client URL (cURL). cURL is a command line tool that developers use to transfer data to and from a server. In this article, you'll register a web app and a web API in a tenant. The web app is used to get an access token generated by the Microsoft identity platform. Next, you'll use the token to make an authorized call to the web API using cURL.
::: zone-end
The Microsoft identity platform requires your application to be registered befor
Follow these steps to create the web API registration:
-1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Application Developer](../roles/permissions-reference.md#application-developer).
1. If access to multiple tenants is available, use the **Directories + subscriptions** filter :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to the tenant in which you want to register the application.
-1. Search for and select **Azure Active Directory**.
-1. Under **Manage**, select **App registrations > New registration**.
+1. Browse to **Identity** > **Applications** > **App registrations**.
+1. Select **New registration**.
1. Enter a **Name** for the application, such as *NewWebAPI1*. 1. For **Supported account types**, select **Accounts in this organizational directory only**. For information on different account types, select the **Help me choose** option. 1. Select **Register**.
Follow these steps to create the web app registration:
::: zone pivot="no-api"
-1. Select **Home** to return to the home page. Search for and select **Azure Active Directory**.
-1. Under **Manage**, select **App registrations** > **New registration**.
+1. Select **Home** to return to the home page. Browse to **Identity** > **Applications** > **App registrations**.
+1. Select **New registration**.
1. Enter a **Name** for the application, such as `web-app-calls-web-api`. 1. For **Supported account types**, select **Accounts in this organizational directory only**. For information on different account types, select the **Help me choose** option. 1. Under **Redirect URI (optional)**, select **Web**, and then enter `http://localhost` in the URL text box.
Follow these steps to create the web app registration:
::: zone pivot="api"
-1. Sign in to the [Azure portal](https://portal.azure.com).
-1. If access to multiple tenants is available, use the Directories + subscriptions filter :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to the tenant in which you want to register the application.
-1. Search for and select **Azure Active Directory**.
-1. Under **Manage**, select **App registrations** > **New registration**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Application Developer](../roles/permissions-reference.md#application-developer).
+1. If access to multiple tenants is available, use the **Directories + subscriptions** filter :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to the tenant in which you want to register the application.
+1. Browse to **Identity** > **Applications** > **App registrations**.
+1. Select **New registration**.
1. Enter a Name for the application, such as `web-app-calls-web-api`. 1. For **Supported account types**, select **Accounts in this organizational directory only**. For information on different account types, select the **Help me choose** option. 1. Under **Redirect URI (optional)**, select **Web**, and then enter `http://localhost` in the URL text box.
Follow these steps to create the web app registration:
::: zone-end
-When registration is complete, the Azure portal displays the app registration's **Overview** pane. Record the **Directory (tenant) ID** and the **Application (client) ID** to be used in later steps.
+When registration is complete, the app registration is displayed on the **Overview** pane. Record the **Directory (tenant) ID** and the **Application (client) ID** to be used in later steps.
#### Add a client secret
A client secret is a string value your app can use to identify itself, and is so
Follow these steps to configure a client secret:
-1. From the **Overview** pane in the Azure portal, under **Manage**, select **Certificates & secrets** > **Client secrets** > **New client secret**.
+1. From the **Overview** pane, under **Manage**, select **Certificates & secrets** > **Client secrets** > **New client secret**.
1. Add a description for your client secret, for example *My client secret*. 1. Select an expiration for the secret or specify a custom lifetime.
By specifying a web API's scopes in the web app registration, the web app can ob
Follow these steps to configure the web app permissions to the web API:
-1. From the **Overview** pane of your web application in the Azure portal (*web-app-that-calls-web-api*), under **Manage**, select **API permissions** > **Add a permission** > **My APIs**.
+1. From the **Overview** pane of your web application (*web-app-that-calls-web-api*), under **Manage**, select **API permissions** > **Add a permission** > **My APIs**.
1. Select **NewWebAPI1** or the API that you wish to add permissions to. 1. Under **Select permissions**, check the box next to **Forecast.Read**. You may need to expand the **Permission** list. This selects the permissions the client app should have on behalf of the signed-in user. 1. Select **Add permissions** to complete the process. After adding these permissions to your API, you should see the selected permissions under **Configured permissions**.
-You may also notice the **User.Read** permission for the Microsoft Graph API. This permission is added automatically when you register an app in the Azure portal.
+You may also notice the **User.Read** permission for the Microsoft Graph API. This permission is added automatically when you register an app.
::: zone pivot="no-api"
You may also notice the **User.Read** permission for the Microsoft Graph API. Th
1. Navigate to `ms-identity-docs-code-dotnet/web-api` folder and open `./appsettings.json` file, replace the `{APPLICATION_CLIENT_ID}` and `{DIRECTORY_TENANT_ID}` with:
- - `{APPLICATION_CLIENT_ID}` is the web API **Application (client) ID** on the app's **Overview** pane **App registrations** in the Azure portal.
- - `{DIRECTORY_TENANT_ID}` is the web API **Directory (tenant) ID** on the app's **Overview** pane **App registrations** in the Azure portal.
+ - `{APPLICATION_CLIENT_ID}` is the web API **Application (client) ID**, found on the app's **Overview** pane in **App registrations**.
+ - `{DIRECTORY_TENANT_ID}` is the web API **Directory (tenant) ID**, found on the app's **Overview** pane in **App registrations**.
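The relevant section of `appsettings.json` likely has a shape along these lines (key names assume the Microsoft.Identity.Web convention used by the sample; treat this as a sketch):

```json
{
  "AzureAd": {
    "Instance": "https://login.microsoftonline.com/",
    "TenantId": "{DIRECTORY_TENANT_ID}",
    "ClientId": "{APPLICATION_CLIENT_ID}"
  }
}
```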
1. Execute the following command to start the app:
The authorization code flow begins with the client directing the user to the `/a
``` 1. Copy the URL, replace the following parameters and paste it into your browser:
- - `{tenant_id}` is the web app **Directory (tenant) ID**. This should be the same value across both of the applications's **Overview** pane **App registrations** in the Azure portal.
- - `{web-app-calls-web-api_application_client_id}` is the **Application (client) ID** on the web app's (*web-app-calls-web-api*) **Overview** pane in the Azure portal.
- - `{web_API_application_client_id}` is the **Application (client) ID** on the web API's (*NewWebAPI1*) **Overview** pane in the Azure portal.
+ - `{tenant_id}` is the web app **Directory (tenant) ID**.
+ - `{web-app-calls-web-api_application_client_id}` is the **Application (client) ID** on the web app's (*web-app-calls-web-api*) **Overview** pane.
+ - `{web_API_application_client_id}` is the **Application (client) ID** on the web API's (*NewWebAPI1*) **Overview** pane.
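For orientation, the assembled authorize request typically looks like the following sketch (parameter order, and extras such as `state`, may differ from the article's snippet):

```
https://login.microsoftonline.com/{tenant_id}/oauth2/v2.0/authorize?client_id={web-app-calls-web-api_application_client_id}&response_type=code&redirect_uri=http%3A%2F%2Flocalhost&response_mode=query&scope=api%3A%2F%2F{web_API_application_client_id}%2FForecast.Read
```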
1. Sign in as a user in the Azure AD tenant in which the apps are registered. Consent to any requests for access, if necessary. 1. Your browser will be redirected to `http://localhost/`. Refer to your browser's navigation bar and copy the `{authorization_code}` to use in the following steps. The URL takes the form of the following snippet:
cURL can now be used to request an access token from the Microsoft identity plat
-d 'grant_type=authorization_code' \ -d 'client_secret={client_secret}' ```
- - `{tenant_id}` is the web app **Directory (tenant) ID**. This should be the same value across both of the applications's **Overview** pane **App registrations** in the Azure portal.
- - `client_id={web-app-calls-web-api_application_client_id}`, and `session_state={web-app-calls-web-api_application_client_id}` is the **Application (client) ID** on the web application's (*web-app-calls-web-api*) **Overview** pane in the Azure portal.
- - `api://{web_API_application_client_id}/Forecast.Read` is the **Application (client) ID** on the web API's (*NewWebAPI1*) **Overview** pane in the Azure portal.
+ - `{tenant_id}` is the web app **Directory (tenant) ID**.
+ - `client_id={web-app-calls-web-api_application_client_id}`, and `session_state={web-app-calls-web-api_application_client_id}` is the **Application (client) ID** on the web application's (*web-app-calls-web-api*) **Overview** pane.
+ - `api://{web_API_application_client_id}/Forecast.Read` is the **Application (client) ID** on the web API's (*NewWebAPI1*) **Overview** pane.
- `code={authorization_code}` is the authorization code that was received in [Request an authorization code](#request-an-authorization-code). This enables the cURL tool to request an access token. - `client_secret={client_secret}` is the client secret **Value** recorded in [Add a client secret](#add-a-client-secret).
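With the returned `access_token`, calling the protected API is a standard bearer-token request. The endpoint and port below are hypothetical and depend on how the sample web API is hosted:

```bash
# -k skips TLS validation for a local development certificate; don't use it in production.
curl -X GET https://localhost:7291/weatherforecast \
  -H "Authorization: Bearer {access_token}" -k
```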
active-directory Howto Call A Web Api With Postman https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/howto-call-a-web-api-with-postman.md
zone_pivot_groups: web-api-howto-prereq
::: zone pivot="no-api"
-This article shows you how to call a protected ASP.NET Core web API using [Postman](https://www.postman.com/). Postman is an application that lets you send HTTP requests to a web API to test its authorization and access control (authentication) policies. In this article, you'll register a web app and a web API in a tenant on the Azure portal. The web app is used to get an access token generated by the Microsoft identity platform. Next, you'll use the token to make an authorized call to the web API using Postman.
+This article shows you how to call a protected ASP.NET Core web API using [Postman](https://www.postman.com/). Postman is an application that lets you send HTTP requests to a web API to test its authorization and access control (authentication) policies. In this article, you'll register a web app and a web API in a tenant. The web app is used to get an access token generated by the Microsoft identity platform. Next, you'll use the token to make an authorized call to the web API using Postman.
::: zone-end
The Microsoft identity platform requires your application to be registered befor
Follow these steps to create the web API registration:
-1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Application Developer](../roles/permissions-reference.md#application-developer).
1. If access to multiple tenants is available, use the **Directories + subscriptions** filter :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to the tenant in which you want to register the application.
-1. Search for and select **Azure Active Directory**.
-1. Under **Manage**, select **App registrations > New registration**.
+1. Browse to **Identity** > **Applications** > **App registrations**.
+1. Select **New registration**.
1. Enter a **Name** for the application, such as _NewWebAPI1_. 1. For **Supported account types**, select **Accounts in this organizational directory only**. For information on different account types, select the **Help me choose** option. 1. Select **Register**.
Follow these steps to create the web app registration:
::: zone pivot="no-api"
-1. Select **Home** to return to the home page. Search for and select **Azure Active Directory**.
-1. Under **Manage**, select **App registrations** > **New registration**.
+1. Select **Home** to return to the home page. Browse to **Identity** > **Applications** > **App registrations**.
+1. Select **New registration**.
1. Enter a **Name** for the application, such as `web-app-calls-web-api`. 1. For **Supported account types**, select **Accounts in this organizational directory only**. For information on different account types, select the **Help me choose** option. 1. Under **Redirect URI (optional)**, select **Web**, and then enter `http://localhost` in the URL text box.
Follow these steps to create the web app registration:
::: zone pivot="api"
-1. Sign in to the [Azure portal](https://portal.azure.com).
-1. If access to multiple tenants is available, use the Directories + subscriptions filter :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to the tenant in which you want to register the application.
-1. Search for and select **Azure Active Directory**.
-1. Under **Manage**, select **App registrations** > **New registration**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Application Developer](../roles/permissions-reference.md#application-developer).
+1. If access to multiple tenants is available, use the **Directories + subscriptions** filter :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to the tenant in which you want to register the application.
+1. Browse to **Identity** > **Applications** > **App registrations**.
+1. Select **New registration**.
1. Enter a Name for the application, such as `web-app-calls-web-api`. 1. For **Supported account types**, select **Accounts in this organizational directory only**. For information on different account types, select the **Help me choose** option. 1. Under **Redirect URI (optional)**, select **Web**, and then enter `http://localhost` in the URL text box.
Follow these steps to create the web app registration:
::: zone-end
-When registration is complete, the Azure portal displays the app registration's **Overview** pane. Record the **Directory (tenant) ID** and the **Application (client) ID** to be used in later steps.
+The application's **Overview** pane is displayed when registration is complete. Record the **Directory (tenant) ID** and the **Application (client) ID** to be used in later steps.
#### Add a client secret
A client secret is a string value your app can use to identify itself, and is so
Follow these steps to configure a client secret:
-1. From the **Overview** pane in the Azure portal, under **Manage**, select **Certificates & secrets** > **Client secrets** > **New client secret**.
+1. From the **Overview** pane, under **Manage**, select **Certificates & secrets** > **Client secrets** > **New client secret**.
1. Add a description for your client secret, for example _My client secret_. 1. Select an expiration for the secret or specify a custom lifetime.
By specifying a web API's scopes, the web app can obtain an access token contain
Follow these steps to configure client's permissions to the web API:
-1. From the **Overview** pane of your application in the Azure portal, under **Manage**, select **API permissions** > **Add a permission** > **My APIs**.
+1. From the **Overview** pane of your application, under **Manage**, select **API permissions** > **Add a permission** > **My APIs**.
1. Select **NewWebAPI1** or the API that you wish to add permissions to. 1. Under **Select permissions**, check the box next to **Forecast.Read**. You may need to expand the **Permission** list. This selects the permissions the client app should have on behalf of the signed-in user. 1. Select **Add permissions** to complete the process. After adding these permissions to your API, you should see the selected permissions under **Configured permissions**.
-You may also notice the **User.Read** permission for the Microsoft Graph API. This permission is added automatically when you register an app in the Azure portal.
+You may also notice the **User.Read** permission for the Microsoft Graph API. This permission is added automatically when you register an app.
::: zone pivot="no-api"
You may also notice the **User.Read** permission for the Microsoft Graph API. Th
1. Navigate to `ms-identity-docs-code-dotnet/web-api` folder and open `appsettings.json`, replace the `{APPLICATION_CLIENT_ID}` and `{DIRECTORY_TENANT_ID}` with:
- - `{APPLICATION_CLIENT_ID}` is the web API **Application (client) ID** on the app's **Overview** pane **App registrations** in the Azure portal.
- - `{DIRECTORY_TENANT_ID}` is the web API **Directory (tenant) ID** on the app's **Overview** pane **App registrations** in the Azure portal.
+ - `{APPLICATION_CLIENT_ID}` is the web API **Application (client) ID** on the app's **Overview** pane.
+ - `{DIRECTORY_TENANT_ID}` is the web API **Directory (tenant) ID** on the app's **Overview** pane.
1. Execute the following command to start the app:
active-directory Howto Configure App Instance Property Locks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/howto-configure-app-instance-property-locks.md
# Customer intent: As an application developer, I want to learn how to protect properties of my application instance from being modified.
-# How to configure app instance property lock for your applications (Preview)
+# How to configure app instance property lock for your applications
Application instance lock is a feature in Azure Active Directory (Azure AD) that allows sensitive properties of a multi-tenant application object to be locked, preventing modification after the application is provisioned in another tenant. This feature lets application developers lock certain properties if the application doesn't support scenarios that require configuring them.
The following property usage scenarios are considered sensitive:
- Credentials (`keyCredentials`, `passwordCredentials`) where usage type is `Verify`. In this scenario, your application supports an OIDC client credentials flow. - `TokenEncryptionKeyId` which specifies the keyId of a public key from the keyCredentials collection. When configured, Azure AD encrypts all the tokens it emits by using the key to which this property points. The application code that receives the encrypted token must use the matching private key to decrypt the token before it can be used for the signed-in user.
+> [!NOTE]
+> App instance lock is enabled by default for all new applications created using the Microsoft Entra admin center.
+ ## Configure an app instance lock [!INCLUDE [portal updates](~/articles/active-directory/includes/portal-update.md)]
-To configure an app instance lock using the Azure portal:
+To configure an app instance lock:
-1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>.
-1. If you have access to multiple tenants, use the **Directories + subscriptions** filter :::image type="icon" source="./media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to the tenant that contains the app registration you want to configure.
-1. Search for and select **Azure Active Directory**.
-1. Under **Manage**, select **App registrations**, and then select the application you want to configure.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator).
+1. If access to multiple tenants is available, use the **Directories + subscriptions** filter :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to the tenant in which you want to register the application.
+1. Browse to **Identity** > **Applications** > **App registrations**.
+1. Select the application you want to configure.
1. Select **Authentication**, and then select **Configure** under the *App instance property lock* section.
- :::image type="content" source="media/howto-configure-app-instance-property-locks/app-instance-lock-configure-overview.png" alt-text="Screenshot of an app registration's app instance lock in the Azure portal.":::
+ :::image type="content" source="media/howto-configure-app-instance-property-locks/app-instance-lock-configure-overview.png" alt-text="Screenshot of an app registration's app instance lock.":::
2. In the **App instance property lock** pane, enter the settings for the lock. The table following the image describes each setting and its parameters.
- :::image type="content" source="media/howto-configure-app-instance-property-locks/app-instance-lock-configure-properties.png" alt-text="Screenshot of an app registration's app instance property lock context pane in the Azure portal.":::
+ :::image type="content" source="media/howto-configure-app-instance-property-locks/app-instance-lock-configure-properties.png" alt-text="Screenshot of an app registration's app instance property lock context pane.":::
| Field | Description | | - | -- |
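The same settings also surface programmatically on the Microsoft Graph `application` resource as the `servicePrincipalLockConfiguration` property. The property names below are assumed from that resource type, so verify them before use:

```json
"servicePrincipalLockConfiguration": {
    "isEnabled": true,
    "allProperties": true,
    "credentialsWithUsageSign": true,
    "credentialsWithUsageVerify": true,
    "tokenEncryptionKeyId": true
}
```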
active-directory Howto Create Self Signed Certificate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/howto-create-self-signed-certificate.md
To customize the start and expiry date and other properties of the certificate,
Use the certificate you create using this method to authenticate from an application running from your machine. For example, authenticate from Windows PowerShell.
-In an elevated PowerShell prompt, run the following command and leave the PowerShell console session open. Replace `{certificateName}` with the name that you wish to give to your certificate.
+In a PowerShell prompt, run the following command and leave the PowerShell console session open. Replace `{certificateName}` with the name that you wish to give to your certificate.
```powershell $certname = "{certificateName}" ## Replace {certificateName}
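## The next command is a hedged continuation (not verbatim from this article): it creates
## the self-signed certificate in the current user's store, with settings typical for
## certificate-based app authentication.
$cert = New-SelfSignedCertificate -Subject "CN=$certname" -CertStoreLocation "Cert:\CurrentUser\My" `
    -KeyExportPolicy Exportable -KeySpec Signature -KeyLength 2048 -KeyAlgorithm RSA -HashAlgorithm SHA256
```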
active-directory Howto Modify Supported Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/howto-modify-supported-accounts.md
When you registered your application with the Microsoft identity platform, you specified who--which account types--can access it. For example, you might've specified accounts only in your organization, which is a *single-tenant* app. Or, you might've specified accounts in any organization (including yours), which is a *multi-tenant* app.
-In the following sections, you learn how to modify your app's registration in the Azure portal to change who, or what types of accounts, can access the application.
+In the following sections, you learn how to modify your app's registration to change who, or what types of accounts, can access the application.
## Prerequisites
In the following sections, you learn how to modify your app's registration in th
To specify a different setting for the account types supported by an existing app registration:
-1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>.
-1. If you have access to multiple tenants, use the **Directories + subscriptions** filter :::image type="icon" source="./media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to the tenant in which the app is registered.
-1. Search for and select **Azure Active Directory**.
-1. Under **Manage**, select **App registrations**, select your application, and then select **Manifest** to use the manifest editor.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Application Developer](../roles/permissions-reference.md#application-developer).
+1. If access to multiple tenants is available, use the **Directories + subscriptions** filter :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to the tenant where the application is registered.
+1. Browse to **Identity** > **Applications** > **App registrations**.
+1. Select your application, and then select **Manifest** to use the manifest editor.
1. Download the manifest JSON file locally. 1. Now, specify who can use the application, sometimes referred to as the *sign-in audience*. Find the *signInAudience* property in the manifest JSON file and set it to one of the following property values:
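The accepted values, per the Microsoft Graph application resource, are `AzureADMyOrg`, `AzureADMultipleOrgs`, `AzureADandPersonalMicrosoftAccount`, and `PersonalMicrosoftAccount`. For example, to make the app multi-tenant for work or school accounts:

```json
"signInAudience": "AzureADMultipleOrgs"
```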
active-directory Identity Platform Integration Checklist https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/identity-platform-integration-checklist.md
This article highlights best practices, recommendations, and common oversights when integrating with the Microsoft identity platform. This checklist will guide you to a high-quality and secure integration. Review this list on a regular basis to make sure you maintain the quality and security of your app's integration with the identity platform. The checklist isn't intended to review your entire application. The contents of the checklist are subject to change as we make improvements to the platform.
-If youΓÇÖre just getting started, check out the [Microsoft identity platform documentation](index.yml) to learn about authentication basics, application scenarios in the Microsoft identity platform, and more.
+If you're just getting started, check out the [Microsoft identity platform documentation](index.yml) to learn about authentication basics, application scenarios in the Microsoft identity platform, and more.
Use the following checklist to ensure that your application is effectively integrated with the [Microsoft identity platform](./index.yml). > [!TIP]
-> The *Integration assistant* in the Azure portal can help you apply many of these best practices and recommendations. Select any of your [app registrations](https://portal.azure.com/#blade/Microsoft_AAD_RegisteredApps/ApplicationsListBlade) in the Azure portal, and then select the **Integration assistant** menu item to get started with the assistant.
+> The *Integration assistant* can help you apply many of these best practices and recommendations. Select any of your app registrations, and then select the **Integration assistant** menu item to get started with the assistant.
## Basics
Use the following checklist to ensure that your application is effectively integ
![checkbox](./media/integration-checklist/checkbox-two.svg) Adhere to the [Branding guidelines for applications](/azure/active-directory/develop/howto-add-branding-in-apps).
-![checkbox](./medi). Make sure your name and logo are representative of your company/product so that users can make informed decisions. Ensure that you're not violating any trademarks.
+![checkbox](./medi). Make sure your name and logo are representative of your company/product so that users can make informed decisions. Ensure that you're not violating any trademarks.
## Privacy
Use the following checklist to ensure that your application is effectively integ
![checkbox](./medi#suitable-scenarios-for-the-oauth2-implicit-grant).
-![checkbox](./medi).
+![checkbox](./medi).
![checkbox](./medi) to store and regularly rotate your credentials.
Use the following checklist to ensure that your application is effectively integ
![checkbox](./medi). If you must hand-code for the authentication protocols, you should follow the [Microsoft SDL](https://www.microsoft.com/sdl/default.aspx) or similar development methodology. Pay close attention to the security considerations in the standards specifications for each protocol.
-![checkbox](./medi) apps.
+![checkbox](./medi) apps.
-![checkbox](./media/integration-checklist/checkbox-two.svg) For mobile apps, configure each platform using the application registration experience. In order for your application to take advantage of the Microsoft Authenticator or Microsoft Company Portal for single sign-in, your app needs a ΓÇ£broker redirect URIΓÇ¥ configured. This allows Microsoft to return control to your application after authentication. When configuring each platform, the app registration experience will guide you through the process. Use the quickstart to download a working example. On iOS, use brokers and system webview whenever possible.
+![checkbox](./media/integration-checklist/checkbox-two.svg) For mobile apps, configure each platform using the application registration experience. In order for your application to take advantage of the Microsoft Authenticator or Microsoft Company Portal for single sign-in, your app needs a "broker redirect URI" configured. This allows Microsoft to return control to your application after authentication. When configuring each platform, the app registration experience will guide you through the process. Use the quickstart to download a working example. On iOS, use brokers and system webview whenever possible.
![checkbox](./medi).
Use the following checklist to ensure that your application is effectively integ
![checkbox](./media/integration-checklist/checkbox-two.svg) Minimize the number of times a user needs to enter login credentials while using your app by attempting silent authentication (silent token acquisition) before interactive flows.
-![checkbox](./media/integration-checklist/checkbox-two.svg) Don't use ΓÇ£prompt=consentΓÇ¥ for every sign-in. Only use prompt=consent if youΓÇÖve determined that you need to ask for consent for additional permissions (for example, if youΓÇÖve changed your appΓÇÖs required permissions).
+![checkbox](./media/integration-checklist/checkbox-two.svg) Don't use "prompt=consent" for every sign-in. Only use prompt=consent if you've determined that you need to ask for consent for additional permissions (for example, if you've changed your app's required permissions).
![checkbox](./media/integration-checklist/checkbox-two.svg) Where applicable, enrich your application with user data. Using the [Microsoft Graph API](https://developer.microsoft.com/graph) is an easy way to do this. The [Graph Explorer](https://developer.microsoft.com/graph/graph-explorer) tool that can help you get started. ![checkbox](./medi#consent) at run time to help users understand why your app is requesting permissions that may concern or confuse users when requested on first start.
-![checkbox](./media/integration-checklist/checkbox-two.svg) Implement a [clean single sign-out experience](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/tree/master/1-WebApp-OIDC/1-6-SignOut). ItΓÇÖs a privacy and a security requirement, and makes for a good user experience.
+![checkbox](./media/integration-checklist/checkbox-two.svg) Implement a [clean single sign-out experience](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/tree/master/1-WebApp-OIDC/1-6-SignOut). It's a privacy and a security requirement, and makes for a good user experience.
## Testing
-![checkbox](./media/integration-checklist/checkbox-two.svg) Test for [Conditional Access policies](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/tree/master/1-WebApp-OIDC/1-6-SignOut) that may affect your usersΓÇÖ ability to use your application.
+![checkbox](./media/integration-checklist/checkbox-two.svg) Test for [Conditional Access policies](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/tree/master/1-WebApp-OIDC/1-6-SignOut) that may affect your users' ability to use your application.
![checkbox](./media/integration-checklist/checkbox-two.svg) Test your application with all possible accounts that you plan to support (for example, work or school accounts, personal Microsoft accounts, child accounts, and sovereign accounts).
active-directory Identity Videos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/identity-videos.md
___
<!-- IMAGES -->
-[auth-fund-01-img]: ./media/identity-videos/aad-auth-fund-01.jpg
-[auth-fund-02-img]: ./media/identity-videos/aad-auth-fund-02.jpg
-[auth-fund-03-img]: ./media/identity-videos/aad-auth-fund-03.jpg
-[auth-fund-04-img]: ./media/identity-videos/aad-auth-fund-04.jpg
-[auth-fund-05-img]: ./media/identity-videos/aad-auth-fund-05.jpg
-[auth-fund-06-img]: ./media/identity-videos/aad-auth-fund-06.jpg
+[auth-fund-01-img]: ./media/identity-videos/auth-fund-01.jpg
+[auth-fund-02-img]: ./media/identity-videos/auth-fund-02.jpg
+[auth-fund-03-img]: ./media/identity-videos/auth-fund-03.jpg
+[auth-fund-04-img]: ./media/identity-videos/auth-fund-04.jpg
+[auth-fund-05-img]: ./media/identity-videos/auth-fund-05.jpg
+[auth-fund-06-img]: ./media/identity-videos/auth-fund-06.jpg
<!-- VIDEOS --> [auth-fund-01-vid]: https://www.youtube.com/watch?v=fbSVgC8nGz4&list=PLLasX02E8BPD5vC2XHS_oHaMVmaeHHPLy&index=1
active-directory Jwt Claims Customization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/jwt-claims-customization.md
Last updated 05/01/2023 -+
These JSON Web tokens (JWT) used by OIDC and OAuth applications contain pieces o
[!INCLUDE [portal updates](~/articles/active-directory/includes/portal-update.md)]
-To view or edit the claims issued in the JWT to the application, open the application in Azure portal. Then select **Single sign-on** blade in the left-hand menu and open the **Attributes & Claims** section.
+To view or edit the claims issued in the JWT to the application:
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator).
+1. Browse to **Identity** > **Applications** > **Enterprise applications** > **All applications**.
+1. Select the application, select **Single sign-on** in the left-hand menu, and then select **Edit** in the **Attributes & Claims** section.
An application may need claims customization for various reasons. For example, when an application requires a different set of claim URIs or claim values. Using the **Attributes & Claims** section, you can add or remove a claim for your application. You can also create a custom claim that is specific for an application based on the use case. The following steps describe how to assign a constant value:
-1. Sign in to the [Azure portal](https://portal.azure.com).
-1. In the **Attributes & Claims** section, Select **Edit** to edit the claims.
-1. Select the required claim that you want to modify.
+1. Select the claim that you want to modify.
1. Enter the constant value without quotes in the **Source attribute** as per your organization, and then select **Save**. - The Attributes overview displays the constant value. - ## Special claims transformations You can use the following special claims transformations functions.
To apply a transformation to a user attribute:
1. **Treat source as multivalued** indicates whether the transform is applied to all values or just the first. By default, the transformation is applied only to the first element of a multi-valued claim; selecting this checkbox ensures it's applied to all of them. The checkbox is only enabled for multi-valued attributes, for example `user.proxyaddresses`. 1. To apply multiple transformations, select **Add transformation**. You can apply a maximum of two transformations to a claim. For example, you could first extract the email prefix of `user.mail`, and then make the string uppercase.
- :::image type="content" source="./media/jwt-claims-customization/sso-saml-multiple-claims-transformation.png" alt-text="Screenshot of claims transformation.":::
- You can use the following functions to transform claims. | Function | Description |
You can use the following functions to transform claims.
| **ToLowercase()** | Converts the characters of the selected attribute into lowercase characters. | | **ToUppercase()** | Converts the characters of the selected attribute into uppercase characters. | | **Contains()** | Outputs an attribute or constant if the input matches the specified value. Otherwise, you can specify another output if there's no match. <br/>For example, if you want to emit a claim where the value is the user's email address if it contains the domain `@contoso.com`, otherwise you want to output the user principal name. To perform this function, you configure the following values:<br/>*Parameter 1(input)*: user.email<br/>*Value*: "@contoso.com"<br/>Parameter 2 (output): user.email<br/>Parameter 3 (output if there's no match): user.userprincipalname |
-| **EndWith()** | Outputs an attribute or constant if the input ends with the specified value. Otherwise, you can specify another output if there's no match.<br/>For example, if you want to emit a claim where the value is the user's employee ID if the employee ID ends with "000", otherwise you want to output an extension attribute. To perform this function, you configure the following values:<br/>*Parameter 1(input)*: user.employeeid<br/>*Value*: "000"<br/>Parameter 2 (output): user.employeeid<br/>Parameter 3 (output if there's no match): user.extensionattribute1 |
-| **StartWith()** | Outputs an attribute or constant if the input starts with the specified value. Otherwise, you can specify another output if there's no match.<br/>For example, if you want to emit a claim where the value is the user's employee ID if the country/region starts with "US", otherwise you want to output an extension attribute. To perform this function, you configure the following values:<br/>*Parameter 1(input)*: user.country<br/>*Value*: "US"<br/>Parameter 2 (output): user.employeeid<br/>Parameter 3 (output if there's no match): user.extensionattribute1 |
+| **EndWith()** | Outputs an attribute or constant if the input ends with the specified value. Otherwise, you can specify another output if there's no match.<br/>For example, if you want to emit a claim where the value is the user's employee ID if the employee ID ends with `000`, otherwise you want to output an extension attribute. To perform this function, you configure the following values:<br/>*Parameter 1(input)*: user.employeeid<br/>*Value*: "000"<br/>Parameter 2 (output): user.employeeid<br/>Parameter 3 (output if there's no match): user.extensionattribute1 |
+| **StartWith()** | Outputs an attribute or constant if the input starts with the specified value. Otherwise, you can specify another output if there's no match.<br/>For example, if you want to emit a claim where the value is the user's employee ID if the country/region starts with `US`, otherwise you want to output an extension attribute. To perform this function, you configure the following values:<br/>*Parameter 1(input)*: user.country<br/>*Value*: "US"<br/>Parameter 2 (output): user.employeeid<br/>Parameter 3 (output if there's no match): user.extensionattribute1 |
| **Extract() - After matching** | Returns the substring after it matches the specified value.<br/>For example, if the input's value is `Finance_BSimon`, the matching value is `Finance_`, then the claim's output is `BSimon`. | | **Extract() - Before matching** | Returns the substring until it matches the specified value.<br/>For example, if the input's value is `BSimon_US`, the matching value is `_US`, then the claim's output is `BSimon`. | | **Extract() - Between matching** | Returns the substring until it matches the specified value.<br/>For example, if the input's value is `Finance_BSimon_US`, the first matching value is `Finance_`, the second matching value is `_US`, then the claim's output is `BSimon`. |
For example, Britta Simon is a guest user in the Contoso tenant. Britta belongs
First, the Microsoft identity platform verifies whether Britta's user type is **All guests**. Because the type is **All guests**, the Microsoft identity platform assigns the source for the claim to `user.extensionattribute1`. Second, the Microsoft identity platform verifies whether Britta's user type is **AAD guests**. Because the type is **AAD guests**, the Microsoft identity platform assigns the source for the claim to `user.mail`. Finally, the claim is emitted with a value of `user.mail` for Britta. - As another example, consider when Britta Simon tries to sign in using the following configuration. Azure AD first evaluates all conditions with source `Attribute`. The source for the claim is `user.mail` when Britta's user type is **AAD guests**. Next, Azure AD evaluates the transformations. Because Britta is a guest, `user.extensionattribute1` is the new source for the claim. Because Britta is in **AAD guests**, `user.othermail` is the new source for this claim. Finally, the claim is emitted with a value of `user.othermail` for Britta. - As a final example, consider what happens if Britta has no `user.othermail` configured or it's empty. The claim falls back to `user.extensionattribute1`, ignoring the condition entry, in both cases.
-Applications that receive tokens rely on claim values that are authoritatively issued by Azure AD and can't be tampered with. When you modify the token contents through claims customization, these assumptions may no longer be correct. Applications must explicitly acknowledge that tokens have been modified by the creator of the customization to protect themselves from customizations created by malicious actors. This can be done in one the following ways:
+Applications that receive tokens rely on claim values that can't be tampered with. When you modify the token contents through claims customization, these assumptions may no longer be correct. Applications must explicitly acknowledge that tokens have been modified to protect themselves from customizations created by malicious actors. Protect from inappropriate customizations in one of the following ways:
- [Configure a custom signing key](#configure-a-custom-signing-key) - [Update the application manifest to accept mapped claims](#update-the-application-manifest).
Applications that receive tokens rely on claim values that are authoritatively i
Without this, Azure AD returns an [AADSTS50146 error code](./reference-error-codes.md#aadsts-error-codes). ## Configure a custom signing key
-For multi-tenant apps, a custom signing key should be used. Don't set `acceptMappedClaims` in the app manifest. when setting up an app in the Azure portal, you get an app registration object and a service principal in your tenant. That app is using the Azure global sign-in key, which can't be used for customizing claims in tokens. To get custom claims in tokens, create a custom sign-in key from a certificate and add it to service principal. For testing purposes, you can use a self-signed certificate. After configuring the custom signing key, your application code needs to validate the token signing key.
+For multi-tenant apps, a custom signing key should be used. Don't set `acceptMappedClaims` in the app manifest. When you set up an app in the Azure portal, you get an app registration object and a service principal in your tenant. That app uses the Azure global signing key, which can't be used for customizing claims in tokens. To get custom claims in tokens, create a custom signing key from a certificate and add it to the service principal. For testing purposes, you can use a self-signed certificate. After you configure the custom signing key, your application code needs to validate the token signing key.
Add the following information to the service principal:
Add the following information to the service principal:
Extract the private and public key base-64 encoded from the PFX file export of your certificate. Make sure that the `keyId` for the `keyCredential` used for "Sign" matches the `keyId` of the `passwordCredential`. You can generate the `customKeyIdentifier` by getting the hash of the cert's thumbprint.

## Request
-The following example shows the format of the HTTP PATCH request to add a custom signing key to a service principal. The "key" value in the `keyCredentials` property is shortened for readability. The value is base-64 encoded. For the private key, the property usage is "Sign". For the public key, the property usage is "Verify".
+The following example shows the format of the HTTP PATCH request to add a custom signing key to a service principal. The "key" value in the `keyCredentials` property is shortened for readability. The value is base-64 encoded. For the private key, the property usage is `Sign`. For the public key, the property usage is `Verify`.
```
PATCH https://graph.microsoft.com/v1.0/servicePrincipals/f47a6776-bca7-4f2e-bc6c-eec59d058e3e
Authorization: Bearer {token}
```
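The JSON body of the request is elided in this digest. As a rough sketch (GUIDs, dates, and key material are placeholders, with keys shortened), it pairs a private-key credential with usage `Sign` and a public-key credential with usage `Verify`, plus a `passwordCredential` whose `keyId` matches the `Sign` credential:

```json
{
    "keyCredentials": [
        {
            "customKeyIdentifier": "{thumbprint-hash}",
            "keyId": "{sign-key-guid}",
            "type": "X509CertAndPassword",
            "usage": "Sign",
            "key": "MIIKIA..."
        },
        {
            "customKeyIdentifier": "{thumbprint-hash}",
            "keyId": "{verify-key-guid}",
            "type": "AsymmetricX509Cert",
            "usage": "Verify",
            "key": "MIIDJz..."
        }
    ],
    "passwordCredentials": [
        {
            "customKeyIdentifier": "{thumbprint-hash}",
            "keyId": "{sign-key-guid}",
            "secretText": "{pfx-password}"
        }
    ]
}
```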
## Configure a custom signing key using PowerShell
-Use PowerShell to [instantiate an MSAL Public Client Application](msal-net-initializing-client-applications.md#initializing-a-public-client-application-from-code) and use the [Authorization Code Grant](v2-oauth2-auth-code-flow.md) flow to obtain a delegated permission access token for Microsoft Graph. Use the access token to call Microsoft Graph and configure a custom signing key for the service principal. After configuring the custom signing key, your application code needs to [validate the token signing key](#validate-token-signing-key).
+Use PowerShell to [instantiate an MSAL Public Client Application](msal-net-initializing-client-applications.md#initializing-a-public-client-application-from-code) and use the [Authorization Code Grant](v2-oauth2-auth-code-flow.md) flow to obtain a delegated permission access token for Microsoft Graph. Use the access token to call Microsoft Graph and configure a custom signing key for the service principal. After you configure the custom signing key, your application code needs to [validate the token signing key](#validate-token-signing-key).
-To run this script you need:
+To run this script, you need:
- The object ID of your application's service principal, found in the Overview blade of your application's entry in Enterprise Applications in the Azure portal. - An app registration to sign in a user and get an access token to call Microsoft Graph. Get the application (client) ID of this app in the Overview blade of the application's entry in App registrations in the Azure portal. The app registration should have the following configuration:
https://login.microsoftonline.com/{tenant}/v2.0/.well-known/openid-configuration
``` ## Update the application manifest
-For single tenant apps, you can set the `acceptMappedClaims` property to `true` in the [application manifest](reference-app-manifest.md). As documented on the [apiApplication resource type](/graph/api/resources/apiapplication?view=graph-rest-1.0&preserve-view=true#properties), this allows an application to use claims mapping without specifying a custom signing key.
+For single-tenant apps, you can set the `acceptMappedClaims` property to `true` in the [application manifest](reference-app-manifest.md), as documented on the [apiApplication resource type](/graph/api/resources/apiapplication?view=graph-rest-1.0&preserve-view=true#properties). Setting the property allows an application to use claims mapping without specifying a custom signing key.
>[!WARNING] >Do not set the acceptMappedClaims property to true for multi-tenant apps, which can allow malicious actors to create claims-mapping policies for your app.
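For a single-tenant app, the change is a one-line edit in the manifest; a minimal sketch:

```json
"acceptMappedClaims": true
```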
active-directory Mark App As Publisher Verified https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/mark-app-as-publisher-verified.md
Title: Mark an app as publisher verified
-description: Describes how to mark an app as publisher verified. When an application is marked as publisher verified, it means that the publisher (application developer) has verified the authenticity of their organization using a Microsoft Partner Network (MPN) account that has completed the verification process and has associated this MPN account with that application registration.
+description: Describes how to mark an app as publisher verified. When an application is marked as publisher verified, it means that the publisher (application developer) has verified the authenticity of their organization using a Cloud Partner Program (CPP) account that has completed the verification process and has associated this CPP account with that application registration.
Previously updated : 03/16/2023 Last updated : 08/17/2023
# Mark your app as publisher verified
-When an app registration has a verified publisher, it means that the publisher of the app has [verified](/partner-center/verification-responses) their identity using their Microsoft Partner Network (MPN) account and has associated this MPN account with their app registration. This article describes how to complete the [publisher verification](publisher-verification-overview.md) process.
+When an app registration has a verified publisher, it means that the publisher of the app has [verified](/partner-center/verification-responses) their identity using their Cloud Partner Program (CPP) account and has associated this CPP account with their app registration. This article describes how to complete the [publisher verification](publisher-verification-overview.md) process.
## Quickstart
-If you are already enrolled in the Microsoft Partner Network (MPN) and have met the [pre-requisites](publisher-verification-overview.md#requirements), you can get started right away:
+If you are already enrolled in the [Cloud Partner Program (CPP)](/partner-center/intro-to-cloud-partner-program-membership) and have met the [pre-requisites](publisher-verification-overview.md#requirements), you can get started right away:
1. Sign in to the [App Registration portal](https://aka.ms/PublisherVerificationPreview) using [multi-factor authentication](../fundamentals/concept-fundamentals-mfa-get-started.md)
1. Choose an app and click **Branding & properties**.
-1. Click **Add MPN ID to verify publisher** and review the listed requirements.
+1. Click **Add Partner One ID to verify publisher** and review the listed requirements.
-1. Enter your MPN ID and click **Verify and save**.
+1. Enter your Partner One ID and click **Verify and save**.
For more details on specific benefits, requirements, and frequently asked questions, see the [overview](publisher-verification-overview.md).

## Mark your app as publisher verified

Make sure you meet the [pre-requisites](publisher-verification-overview.md#requirements), then follow these steps to mark your app(s) as Publisher Verified.
-1. Sign in using [multi-factor authentication](../fundamentals/concept-fundamentals-mfa-get-started.md) to an organizational (Azure AD) account authorized to make changes to the app you want to mark as Publisher Verified and on the MPN Account in Partner Center.
+1. Sign in using [multi-factor authentication](../fundamentals/concept-fundamentals-mfa-get-started.md) to an organizational (Azure AD) account authorized to make changes to the app you want to mark as Publisher Verified and on the CPP Account in Partner Center.
- The Azure AD user must have one of the following [roles](../roles/permissions-reference.md): Application Admin, Cloud Application Admin, or Global Administrator.
- - The user in Partner Center must have the following [roles](/partner-center/permissions-overview): MPN Admin, Accounts Admin, or a Global Administrator (a shared role mastered in Azure AD).
+ - The user in Partner Center must have the following [roles](/partner-center/permissions-overview): CPP Admin, Accounts Admin, or a Global Administrator (a shared role mastered in Azure AD).
1. Navigate to the **App registrations** blade:
Make sure you meet the [pre-requisites](publisher-verification-overview.md#requi
1. Ensure the app's [publisher domain](howto-configure-publisher-domain.md) is set.
-1. Ensure that either the publisher domain or a DNS-verified [custom domain](../fundamentals/add-custom-domain.md) on the tenant matches the domain of the email address used during the verification process for your MPN account.
+1. Ensure that either the publisher domain or a DNS-verified [custom domain](../fundamentals/add-custom-domain.md) on the tenant matches the domain of the email address used during the verification process for your CPP account.
-1. Click **Add MPN ID to verify publisher** near the bottom of the page.
+1. Click **Add Partner One ID to verify publisher** near the bottom of the page.
-1. Enter the **MPN ID** for:
+1. Enter the **Partner One ID** for:
- - A valid Microsoft Partner Network account that has completed the verification process.
+ - A valid Cloud Partner Program account that has completed the verification process.
- The Partner global account (PGA) for your organization.
active-directory Migrate Adal Msal Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/migrate-adal-msal-java.md
- Title: ADAL to MSAL migration guide (MSAL4j)
-description: Learn how to migrate your Azure Active Directory Authentication Library (ADAL) Java app to the Microsoft Authentication Library (MSAL).
-Previously updated : 11/04/2019
-#Customer intent: As a Java application developer, I want to learn how to migrate my v1 ADAL app to v2 MSAL.
--
-# ADAL to MSAL migration guide for Java
-
-This article highlights changes you need to make to migrate an app that uses the Azure Active Directory Authentication Library (ADAL) to use the Microsoft Authentication Library (MSAL).
-
-Both the Microsoft Authentication Library for Java (MSAL4J) and Azure AD Authentication Library for Java (ADAL4J) are used to authenticate Azure AD entities and request tokens from Azure AD. Until now, most developers have worked with Azure AD for developers platform (v1.0) to authenticate Azure AD identities (work and school accounts) by requesting tokens using Azure AD Authentication Library (ADAL).
-
-MSAL offers the following benefits:
-- Because it uses the newer Microsoft identity platform, you can authenticate a broader set of Microsoft identities such as Azure AD identities, Microsoft accounts, and social and local accounts through Azure AD Business to Consumer (B2C).
-- Your users will get the best single sign-on experience.
-- Your application can enable incremental consent, and supporting Conditional Access is easier.
-MSAL for Java is the auth library we recommend you use with the Microsoft identity platform. No new features will be implemented on ADAL4J. All efforts going forward are focused on improving MSAL.
-
-You can learn more about MSAL and get started with an [overview of the Microsoft Authentication Library](msal-overview.md).
-
-## Scopes not resources
-
-ADAL4J acquires tokens for resources, whereas MSAL for Java acquires tokens for scopes. Many MSAL for Java classes require a scopes parameter. This parameter is a list of strings that declare the desired permissions and resources that are requested. See [Microsoft Graph's scopes](/graph/permissions-reference) for example scopes.
-
-You can add the `/.default` scope suffix to the resource to help migrate your apps from the ADAL to MSAL. For example, for the resource value of `https://graph.microsoft.com`, the equivalent scope value is `https://graph.microsoft.com/.default`. If the resource isn't in the URL form, but a resource ID of the form `XXXXXXXX-XXXX-XXXX-XXXXXXXXXXXX`, you can still use the scope value as `XXXXXXXX-XXXX-XXXX-XXXXXXXXXXXX/.default`.
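As a sketch, the conversion from an ADAL4J resource to an MSAL4J scope set might look like this (the Graph resource URL is illustrative, and the placeholder GUID mirrors the one above):

```java
import java.util.Collections;
import java.util.Set;

// ADAL4J requested a token for a *resource*:
String v1Resource = "https://graph.microsoft.com";

// MSAL4J requests a token for *scopes*; append /.default to migrate:
Set<String> scopes = Collections.singleton(v1Resource + "/.default");

// A bare resource ID keeps the same pattern:
Set<String> guidScopes = Collections.singleton("XXXXXXXX-XXXX-XXXX-XXXXXXXXXXXX/.default");
```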
-
-For more details about the different types of scopes, refer to
-[Permissions and consent in the Microsoft identity platform](./permissions-consent-overview.md) and the [Scopes for a Web API accepting v1.0 tokens](./msal-v1-app-scopes.md) articles.
-
-## Core classes
-
-In ADAL4J, the `AuthenticationContext` class represents your connection to the Security Token Service (STS), or authorization server, through an Authority. However, MSAL for Java is designed around client applications. It provides two separate classes: `PublicClientApplication` and `ConfidentialClientApplication` to represent client applications. The latter, `ConfidentialClientApplication`, represents an application that is designed to securely maintain a secret, such as a client secret, for a daemon app.
-
-The following table shows how ADAL4J functions map to the new MSAL for Java functions:
-
-| ADAL4J method| MSAL4J method|
-||-|
-|acquireToken(String resource, ClientCredential credential, AuthenticationCallback callback) | acquireToken(ClientCredentialParameters)|
-|acquireToken(String resource, ClientAssertion assertion, AuthenticationCallback callback)|acquireToken(ClientCredentialParameters)|
-|acquireToken(String resource, AsymmetricKeyCredential credential, AuthenticationCallback callback)|acquireToken(ClientCredentialParameters)|
-|acquireToken(String resource, String clientId, String username, String password, AuthenticationCallback callback)| acquireToken(UsernamePasswordParameters)|
-|acquireToken(String resource, String clientId, String username, String password=null, AuthenticationCallback callback)|acquireToken(IntegratedWindowsAuthenticationParameters)|
-|acquireToken(String resource, UserAssertion userAssertion, ClientCredential credential, AuthenticationCallback callback)| acquireToken(OnBehalfOfParameters)|
-|acquireTokenByAuthorizationCode() | acquireToken(AuthorizationCodeParameters) |
-| acquireDeviceCode() and acquireTokenByDeviceCode()| acquireToken(DeviceCodeParameters)|
-|acquireTokenByRefreshToken()| acquireTokenSilently(SilentParameters)|
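To make one row of this table concrete, here's a hedged sketch of the client credentials mapping, `acquireToken(ClientCredentialParameters)` (the client ID, secret, and tenant authority are placeholders; throws clauses are omitted as in the other snippets in this article):

```java
import com.microsoft.aad.msal4j.ClientCredentialFactory;
import com.microsoft.aad.msal4j.ClientCredentialParameters;
import com.microsoft.aad.msal4j.ConfidentialClientApplication;
import com.microsoft.aad.msal4j.IAuthenticationResult;

import java.util.Collections;

ConfidentialClientApplication app = ConfidentialClientApplication.builder(
        "client-id", ClientCredentialFactory.createFromSecret("client-secret"))
        .authority("https://login.microsoftonline.com/tenant-id/")
        .build();

// ADAL4J: acquireToken(resource, clientCredential, callback)
// MSAL4J: one parameters object, with scopes instead of a resource
ClientCredentialParameters parameters = ClientCredentialParameters.builder(
        Collections.singleton("https://graph.microsoft.com/.default"))
        .build();

IAuthenticationResult result = app.acquireToken(parameters).join();
```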
-
-## IAccount instead of IUser
-
-ADAL4J manipulated users. Although a user represents a single human or software agent, it can have one or more accounts in the Microsoft identity system. For example, a user may have several Azure AD, Azure AD B2C, or Microsoft personal accounts.
-
-MSAL for Java defines the concept of Account via the `IAccount` interface. This is a breaking change from ADAL4J, but it's a good one because it captures the fact that the same user can have several accounts, and perhaps even in different Azure AD directories. MSAL for Java provides better information in guest scenarios because home account information is provided.
-
-## Cache persistence
-
-ADAL4J didn't have support for a token cache.
-MSAL for Java adds a [token cache](msal-acquire-cache-tokens.md) to simplify managing token lifetimes by automatically refreshing expired tokens when possible and preventing unnecessary prompts for the user to provide credentials when possible.
-
-## Common Authority
-
-In v1.0, if you use the `https://login.microsoftonline.com/common` authority, users can sign in with any Azure Active Directory (Azure AD) account (for any organization).
-
-If you use the `https://login.microsoftonline.com/common` authority in v2.0, users can sign in with any Azure AD organization, or even a Microsoft personal account (MSA). In MSAL for Java, if you want to restrict login to any Azure AD account, use the `https://login.microsoftonline.com/organizations` authority (which is the same behavior as with ADAL4J). To specify an authority, set the `authority` parameter in the [PublicClientApplication.Builder](https://javadoc.io/doc/com.microsoft.azure/msal4j/1.0.0/com/microsoft/aad/msal4j/PublicClientApplication.Builder.html) method when you create your `PublicClientApplication` class.
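A minimal sketch, assuming a placeholder client ID (throws clauses omitted as in the snippets below):

```java
import com.microsoft.aad.msal4j.PublicClientApplication;

// The v2.0 "common" authority would also allow personal Microsoft accounts;
// "organizations" restricts sign-in to Azure AD accounts, matching ADAL4J's behavior
PublicClientApplication app = PublicClientApplication.builder("client-id")
        .authority("https://login.microsoftonline.com/organizations/")
        .build();
```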
-
-## v1.0 and v2.0 tokens
-
-The v1.0 endpoint (used by ADAL) only emits v1.0 tokens.
-
-The v2.0 endpoint (used by MSAL) can emit v1.0 and v2.0 tokens. A property of the application manifest of the web API enables developers to choose which version of token is accepted. See `accessTokenAcceptedVersion` in the [application manifest](./reference-app-manifest.md) reference documentation.
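For example, a web API that wants to receive v2.0 access tokens sets the property in its app manifest like this (a sketch; only the relevant property is shown):

```json
{
  "accessTokenAcceptedVersion": 2
}
```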
-
-For more information about v1.0 and v2.0 tokens, see [Azure Active Directory access tokens](./access-tokens.md).
-
-## ADAL to MSAL migration
-
-In ADAL4J, the refresh tokens were exposed--which allowed developers to cache them. They would then use `AcquireTokenByRefreshToken()` to enable solutions such as implementing long-running services that refresh dashboards on behalf of the user when the user is no longer connected.
-
-MSAL for Java doesn't expose refresh tokens for security reasons. Instead, MSAL handles refreshing tokens for you.
-
-MSAL for Java has an API that allows you to migrate refresh tokens you acquired with ADAL4j into the ClientApplication: [acquireToken(RefreshTokenParameters)](https://javadoc.io/static/com.microsoft.azure/msal4j/1.0.0/com/microsoft/aad/msal4j/PublicClientApplication.html#acquireToken-com.microsoft.aad.msal4j.RefreshTokenParameters-). With this method, you can provide the previously used refresh token along with any scopes (resources) you desire. The refresh token will be exchanged for a new one and cached for use by your application.
-
-The following code snippet shows some migration code in a public client application:
-
-```java
-String rt = GetCachedRefreshTokenForSignedInUser(); // Get refresh token from where you have them stored
-Set<String> scopes = Collections.singleton("SCOPE_FOR_REFRESH_TOKEN");
-
-RefreshTokenParameters parameters = RefreshTokenParameters.builder(scopes, rt).build();
-
-PublicClientApplication app = PublicClientApplication.builder(CLIENT_ID) // ClientId for your application
- .authority(AUTHORITY) //plug in your authority
- .build();
-
-IAuthenticationResult result = app.acquireToken(parameters);
-```
-
-The `IAuthenticationResult` returns an access token and ID token, while your new refresh token is stored in the cache.
-The application will also now contain an IAccount:
-
-```java
-Set<IAccount> accounts = app.getAccounts().join();
-```
-
-To use the tokens that are now in the cache, call:
-
-```java
-SilentParameters parameters = SilentParameters.builder(scopes, accounts.iterator().next()).build();
-IAuthenticationResult result = app.acquireToken(parameters);
-```
active-directory Migrate Python Adal Msal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/migrate-python-adal-msal.md
- Title: Python ADAL to MSAL migration guide
-description: Learn how to migrate your Azure Active Directory Authentication Library (ADAL) Python app to the Microsoft Authentication Library (MSAL) for Python.
-Previously updated : 03/30/2023
-#Customer intent: As a Python application developer, I want to learn how to migrate my v1 ADAL app to v2 MSAL.
--
-# ADAL to MSAL migration guide for Python
-
-This article highlights changes you need to make to migrate an app that uses the Azure Active Directory Authentication Library (ADAL) to use the Microsoft Authentication Library (MSAL).
-
-You can learn more about MSAL and get started with an [overview of the Microsoft Authentication Library](msal-overview.md).
-
-## Difference highlights
-
-ADAL works with the Azure Active Directory (Azure AD) v1.0 endpoint. The Microsoft Authentication Library (MSAL) works with the Microsoft identity platform--formerly known as the Azure Active Directory v2.0 endpoint. The Microsoft identity platform differs from Azure AD v1.0 in that it:
-
-Supports:
-- Work and school accounts (Azure AD provisioned accounts)
-- Personal accounts (such as Outlook.com or Hotmail.com)
-- Your customers who bring their own email or social identity (such as LinkedIn, Facebook, Google) via the Azure AD B2C offering
-
-- Is standards compatible with:
- - OAuth v2.0
- - OpenID Connect (OIDC)
-
-For more information about MSAL, see [MSAL overview](./msal-overview.md).
-
-### Scopes not resources
-
-ADAL Python acquires tokens for resources, but MSAL Python acquires tokens for scopes. The API surface in MSAL Python doesn't have a resource parameter anymore. You need to provide scopes as a list of strings that declare the desired permissions and resources that are requested. For example scopes, see [Microsoft Graph's scopes](/graph/permissions-reference).
-
-You can add the `/.default` scope suffix to the resource to help migrate your apps from the v1.0 endpoint (ADAL) to the Microsoft identity platform (MSAL). For example, for the resource value of `https://graph.microsoft.com`, the equivalent scope value is `https://graph.microsoft.com/.default`. If the resource isn't in the URL form, but a resource ID of the form `XXXXXXXX-XXXX-XXXX-XXXXXXXXXXXX`, you can still use the scope value as `XXXXXXXX-XXXX-XXXX-XXXXXXXXXXXX/.default`.
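As a sketch, the conversion is mechanical (the Graph URL is illustrative, and the placeholder GUID mirrors the one above):

```python
# ADAL (v1.0) requested a token for a *resource*
v1_resource = "https://graph.microsoft.com"

# MSAL (v2.0) requests a token for *scopes*; append /.default to migrate
scopes = [v1_resource + "/.default"]

# A bare resource ID keeps the same pattern
guid_scopes = ["XXXXXXXX-XXXX-XXXX-XXXXXXXXXXXX/.default"]
```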
-
-For more details about the different types of scopes, refer to [Permissions and consent in the Microsoft identity platform](./permissions-consent-overview.md) and the [Scopes for a Web API accepting v1.0 tokens](./msal-v1-app-scopes.md) articles.
-
-### Error handling
-
-ADAL for Python uses the exception `AdalError` to indicate that there's been a problem. MSAL for Python typically uses error codes instead. For more information, see [MSAL for Python error handling](msal-error-handling-python.md).
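A minimal sketch of the difference (assuming an already-built `app`, plus `username`, `password`, and `scopes` defined elsewhere):

```python
# ADAL raised AdalError; MSAL returns errors inside the result dictionary
result = app.acquire_token_by_username_password(username, password, scopes=scopes)
if "error" in result:
    # Machine-readable code such as "invalid_grant"; human-readable details
    # are in "error_description"
    print(result["error"], result.get("error_description"))
```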
-
-### API changes
-
-The following table lists an API in ADAL for Python, and the one to use in its place in MSAL for Python:
-
-| ADAL for Python API | MSAL for Python API |
-| -- | - |
-| [AuthenticationContext](https://adal-python.readthedocs.io/en/latest/#adal.AuthenticationContext) | [PublicClientApplication](https://msal-python.readthedocs.io/en/latest/#msal.PublicClientApplication.__init__) or [ConfidentialClientApplication](https://msal-python.readthedocs.io/en/latest/#msal.ConfidentialClientApplication.__init__) |
-| N/A | [PublicClientApplication.acquire_token_interactive()](https://msal-python.readthedocs.io/en/latest/#msal.PublicClientApplication.acquire_token_interactive) |
-| N/A | [ConfidentialClientApplication.initiate_auth_code_flow()](https://msal-python.readthedocs.io/en/latest/#msal.ConfidentialClientApplication.initiate_auth_code_flow) |
-| [acquire_token_with_authorization_code()](https://adal-python.readthedocs.io/en/latest/#adal.AuthenticationContext.acquire_token_with_authorization_code) | [ConfidentialClientApplication.acquire_token_by_auth_code_flow()](https://msal-python.readthedocs.io/en/latest/#msal.ConfidentialClientApplication.acquire_token_by_auth_code_flow) |
-| [acquire_token()](https://adal-python.readthedocs.io/en/latest/#adal.AuthenticationContext.acquire_token) | [PublicClientApplication.acquire_token_silent()](https://msal-python.readthedocs.io/en/latest/#msal.PublicClientApplication.acquire_token_silent) or [ConfidentialClientApplication.acquire_token_silent()](https://msal-python.readthedocs.io/en/latest/#msal.ConfidentialClientApplication.acquire_token_silent) |
-| [acquire_token_with_refresh_token()](https://adal-python.readthedocs.io/en/latest/#adal.AuthenticationContext.acquire_token_with_refresh_token) | These two helpers are intended to be used during [migration](#migrate-existing-refresh-tokens-for-msal-python) only: [PublicClientApplication.acquire_token_by_refresh_token()](https://msal-python.readthedocs.io/en/latest/#msal.PublicClientApplication.acquire_token_by_refresh_token) or [ConfidentialClientApplication.acquire_token_by_refresh_token()](https://msal-python.readthedocs.io/en/latest/#msal.ConfidentialClientApplication.acquire_token_by_refresh_token) |
-| [acquire_user_code()](https://adal-python.readthedocs.io/en/latest/#adal.AuthenticationContext.acquire_user_code) | [initiate_device_flow()](https://msal-python.readthedocs.io/en/latest/#msal.PublicClientApplication.initiate_device_flow) |
-| [acquire_token_with_device_code()](https://adal-python.readthedocs.io/en/latest/#adal.AuthenticationContext.acquire_token_with_device_code) and [cancel_request_to_get_token_with_device_code()](https://adal-python.readthedocs.io/en/latest/#adal.AuthenticationContext.cancel_request_to_get_token_with_device_code) | [acquire_token_by_device_flow()](https://msal-python.readthedocs.io/en/latest/#msal.PublicClientApplication.acquire_token_by_device_flow) |
-| [acquire_token_with_username_password()](https://adal-python.readthedocs.io/en/latest/#adal.AuthenticationContext.acquire_token_with_username_password) | [acquire_token_by_username_password()](https://msal-python.readthedocs.io/en/latest/#msal.PublicClientApplication.acquire_token_by_username_password) |
-| [acquire_token_with_client_credentials()](https://adal-python.readthedocs.io/en/latest/#adal.AuthenticationContext.acquire_token_with_client_credentials) and [acquire_token_with_client_certificate()](https://adal-python.readthedocs.io/en/latest/#adal.AuthenticationContext.acquire_token_with_client_certificate) | [acquire_token_for_client()](https://msal-python.readthedocs.io/en/latest/#msal.ConfidentialClientApplication.acquire_token_for_client) |
-| N/A | [acquire_token_on_behalf_of()](https://msal-python.readthedocs.io/en/latest/#msal.ConfidentialClientApplication.acquire_token_on_behalf_of) |
-| [TokenCache()](https://adal-python.readthedocs.io/en/latest/#adal.TokenCache) | [SerializableTokenCache()](https://msal-python.readthedocs.io/en/latest/#msal.SerializableTokenCache) |
-| N/A | Cache with persistence, available from [MSAL Extensions](https://github.com/marstr/original-microsoft-authentication-extensions-for-python) |
-
-## Migrate existing refresh tokens for MSAL Python
-
-MSAL abstracts the concept of refresh tokens. MSAL Python provides an in-memory token cache by default so that you don't need to store, lookup, or update refresh tokens. Users will also see fewer sign-in prompts because refresh tokens can usually be updated without user intervention. For more information about the token cache, see [Custom token cache serialization in MSAL for Python](msal-python-token-cache-serialization.md).
-
-The following code will help you migrate your refresh tokens managed by another OAuth2 library (including but not limited to ADAL Python) to be managed by MSAL for Python. One reason for migrating those refresh tokens is to prevent existing users from needing to sign in again when you migrate your app to MSAL for Python.
-
-The method for migrating a refresh token is to use MSAL for Python to acquire a new access token using the previous refresh token. When the new refresh token is returned, MSAL for Python will store it in the cache.
-Since MSAL Python 1.3.0, we provide an API inside MSAL for this purpose.
-Please refer to the following code snippet, quoted from
-[a complete sample of migrating refresh tokens with MSAL Python](https://github.com/AzureAD/microsoft-authentication-library-for-python/blob/1.3.0/sample/migrate_rt.py#L28-L67).
-
-```python
-import json
-import msal
-def get_preexisting_rt_and_their_scopes_from_elsewhere():
- # Maybe you have an ADAL-powered app like this
- # https://github.com/AzureAD/azure-activedirectory-library-for-python/blob/1.2.3/sample/device_code_sample.py#L72
- # which uses a resource rather than a scope,
- # you need to convert your v1 resource into v2 scopes
- # See https://learn.microsoft.com/azure/active-directory/develop/migrate-python-adal-msal#scopes-not-resources
- # You may be able to append "/.default" to your v1 resource to form a scope
- # See https://learn.microsoft.com/azure/active-directory/develop/v2-permissions-and-consent#the-default-scope
-
- # Or maybe you have an app already talking to the Microsoft identity platform,
- # powered by some 3rd-party auth library, and persist its tokens somehow.
-
- # Either way, you need to extract RTs from there, and return them like this.
- return [
- ("old_rt_1", ["scope1", "scope2"]),
- ("old_rt_2", ["scope3", "scope4"]),
- ]
--
-# We will migrate all the old RTs into a new app powered by MSAL
-app = msal.PublicClientApplication(
- "client_id", authority="...",
- # token_cache=... # Default cache is in memory only.
- # You can learn how to use SerializableTokenCache from
- # https://msal-python.readthedocs.io/en/latest/#msal.SerializableTokenCache
- )
-
-# We choose a migration strategy of migrating all RTs in one loop
-for old_rt, scopes in get_preexisting_rt_and_their_scopes_from_elsewhere():
- result = app.acquire_token_by_refresh_token(old_rt, scopes)
- if "error" in result:
- print("Discarding unsuccessful RT. Error: ", json.dumps(result, indent=2))
-
-print("Migration completed")
-```
active-directory Msal Acquire Cache Tokens https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-acquire-cache-tokens.md
When your client requests an access token, Azure AD also returns an authenticati
Several of the platforms supported by MSAL have additional token cache-related information in the documentation for that platform's library. For example:
- [Get a token from the token cache using MSAL.NET](msal-net-acquire-token-silently.md)
- [Single sign-on with MSAL.js](msal-js-sso.md)
-- [Custom token cache serialization in MSAL for Python](msal-python-token-cache-serialization.md)
-- [Custom token cache serialization in MSAL for Java](msal-java-token-cache-serialization.md)
+- [Custom token cache serialization in MSAL for Python](/entra/msal/python/advanced/msal-python-token-cache-serialization)
+- [Custom token cache serialization in MSAL for Java](/entra/msal/java/advanced/msal-java-token-cache-serialization)
active-directory Msal Android Single Sign On https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-android-single-sign-on.md
In this how-to, you'll learn how to configure the SDKs used by your application
This how-to assumes you know how to:

-- Provision your app using the Azure portal. For more information, see the instructions for creating an app in [the Android tutorial](./tutorial-v2-android.md#create-a-project)
+- Provision your app. For more information, see the instructions for creating an app in [the Android tutorial](./tutorial-v2-android.md#create-a-project)
- Integrate your application with the [MSAL for Android](https://github.com/AzureAD/microsoft-authentication-library-for-android)

## Methods for SSO
You must register a redirect URI that is compatible with the broker. The redirec
The format of the redirect URI is: `msauth://<yourpackagename>/<base64urlencodedsignature>`
-You can use [keytool](https://manpages.debian.org/buster/openjdk-11-jre-headless/keytool.1.en.html) to generate a Base64-encoded signature hash using your app's signing keys, and then use the Azure portal to generate your redirect URI using that hash.
+You can use [keytool](https://manpages.debian.org/buster/openjdk-11-jre-headless/keytool.1.en.html) to generate a Base64-encoded signature hash using your app's signing keys, and then generate your redirect URI using that hash.
Linux and macOS:
keytool -exportcert -alias androiddebugkey -keystore %HOMEPATH%\.android\debug.k
Once you've generated a signature hash with _keytool_, use the Azure portal to generate the redirect URI:
-1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>.
-1. If you have access to multiple tenants, use the **Directories + subscriptions** filter :::image type="icon" source="/azure/active-directory/develop/media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to the tenant in which you registered your application.
-1. Search for and select **Azure Active Directory**.
-1. Under **Manage**, select **App registrations**.
-1. Under **Manage**, select **App registrations**, then select your application.
-1. Under **Manage**, select **Authentication** > **Add a platform** > **Android**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator).
+1. If access to multiple tenants is available, use the **Directories + subscriptions** filter :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to the tenant in which you want to register the application.
+1. Browse to **Identity** > **Applications** > **App registrations**.
+1. Select your application, and then select **Authentication** > **Add a platform** > **Android**.
1. In the **Configure your Android app** pane that opens, enter the **Signature hash** that you generated earlier and a **Package name**.
1. Select the **Configure** button.
-The Azure portal generates the redirect URI for you and displays it in the **Android configuration** pane's **Redirect URI** field.
+The redirect URI is generated for you and is displayed in the **Android configuration** pane's **Redirect URI** field.
For more information about signing your app, see [Sign your app](https://developer.android.com/studio/publish/app-signing) in the Android Studio User Guide.
If the application uses a `WebView` strategy without integrating Microsoft Authe
If the application uses MSAL with a broker like Microsoft Authenticator or Intune Company Portal, then users can have SSO experience across applications if they have an active sign-in with one of the apps.
+> [!NOTE]
+> MSAL with broker utilizes WebViews instead of Custom Tabs. As a result, the Single Sign-On (SSO) state is not extended to other apps that use Custom Tabs.
+
### WebView

To use the in-app WebView, put the following line in the app configuration JSON that is passed to MSAL:
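The line in question is, as far as we can tell, the `authorization_user_agent` setting; verify the key name and value against the MSAL Android configuration reference for your library version:

```json
{
  "authorization_user_agent": "WEBVIEW"
}
```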
active-directory Msal Client Application Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-client-application-configuration.md
Previously updated : 07/15/2022 Last updated : 08/11/2023
The authority you specify in your code needs to be consistent with the **Support
The authority can be:
- An Azure AD cloud authority.
-- An Azure AD B2C authority. See [B2C specifics](https://github.com/AzureAD/microsoft-authentication-library-for-dotnet/wiki/AAD-B2C-specifics).
-- An Active Directory Federation Services (AD FS) authority. See [AD FS support](https://aka.ms/msal-net-adfs-support).
+- An Azure AD B2C authority. See [B2C specifics](msal-net-b2c-considerations.md).
+- An Active Directory Federation Services (AD FS) authority. See [AD FS support](msal-net-adfs-support.md).
Azure AD cloud authorities have two parts:
You can override the redirect URI by using the `RedirectUri` property (for examp
- `RedirectUriOnAndroid` = "msauth-5a434691-ccb2-4fd1-b97b-b64bcfbc03fc://com.microsoft.identity.client.sample"; - `RedirectUriOnIos` = $"msauth.{Bundle.ID}://auth";
-For more iOS details, see [Migrate iOS applications that use Microsoft Authenticator from ADAL.NET to MSAL.NET](msal-net-migration-ios-broker.md) and [Leveraging the broker on iOS](https://github.com/AzureAD/microsoft-authentication-library-for-dotnet/wiki/Leveraging-the-broker-on-iOS).
+For more iOS details, see [Migrate iOS applications that use Microsoft Authenticator from ADAL.NET to MSAL.NET](msal-net-migration-ios-broker.md) and [Leveraging the broker on iOS](msal-net-use-brokers-with-xamarin-apps.md).
For more Android details, see [Brokered auth in Android](msal-android-single-sign-on.md). ### Redirect URI for confidential client apps
To help in debugging and authentication failure troubleshooting scenarios, the M
:::column-end:::
:::column:::
 - [Logging in MSAL for iOS/macOS](msal-logging-ios.md)
- - [Logging in MSAL for Java](msal-logging-java.md)
- - [Logging in MSAL for Python](msal-logging-python.md)
+ - [Logging in MSAL for Java](/entra/msal/java/advanced/msal-logging-java)
+ - [Logging in MSAL for Python](/entra/msal/python/advanced/msal-logging-python)
:::column-end:::
:::row-end:::
active-directory Msal Error Handling Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-error-handling-java.md
- Title: Handle errors and exceptions in MSAL4J
-description: Learn how to handle errors and exceptions, Conditional Access claims challenges, and retries in MSAL4J applications.
-Previously updated : 11/27/2020
-# Handle errors and exceptions in MSAL for Java
--
-## Error handling in MSAL for Java
-
-In MSAL for Java, there are three types of exceptions: `MsalClientException`, `MsalServiceException`, and `MsalInteractionRequiredException`, all of which inherit from `MsalException`.
-
-- `MsalClientException` is thrown when an error occurs that is local to the library or device.
-- `MsalServiceException` is thrown when the secure token service (STS) returns an error response or another networking error occurs.
-- `MsalInteractionRequiredException` is thrown when UI interaction is required for authentication to succeed.
-### MsalServiceException
-
-`MsalServiceException` exposes HTTP headers returned in the requests to the STS. Access them via `MsalServiceException.headers()`
-
-### MsalInteractionRequiredException
-
-One of the common status codes returned from MSAL for Java when calling `AcquireTokenSilently()` is `InvalidGrantError`. This means that additional user interaction is required before an authentication token can be issued. Your application should call the authentication library again, but in interactive mode by sending `AuthorizationCodeParameters` or `DeviceCodeParameters` for public client applications.
-
-Most of the time when `AcquireTokenSilently` fails, it's because the token cache doesn't have a token matching your request. Access tokens expire in one hour, and `AcquireTokenSilently` will try to get a new one based on a refresh token. In OAuth2 terms, this is the Refresh Token flow. This flow can also fail for various reasons such as when a tenant admin configures more stringent login policies.
-
-Some conditions that result in this error are easy for users to resolve. For example, they may need to accept Terms of Use or the request can't be fulfilled with the current configuration because the machine needs to connect to a specific corporate network.
-
-MSAL exposes a `reason` field, which you can use to provide a better user experience. For example, the `reason` field may lead you to tell the user that their password expired or that they'll need to provide consent to use some resources. The supported values are part of the `InteractionRequiredExceptionReason` enum:
-
-| Reason | Meaning | Recommended Handling |
-||--|--|
-| `BasicAction` | Condition can be resolved by user interaction during the interactive authentication flow. | Call `acquireToken` with interactive parameters. |
-| `AdditionalAction` | Condition can be resolved by additional remedial interaction with the system outside of the interactive authentication flow. | Call `acquireToken` with interactive parameters to show a message that explains the remedial action to take. The calling app may choose to hide flows that require additional action if the user is unlikely to complete the remedial action. |
-| `MessageOnly` | Condition can't be resolved at this time. Launch interactive authentication flow to show a message explaining the condition. | Call `acquireToken` with interactive parameters to show a message that explains the condition. `acquireToken` will return the `UserCanceled` error after the user reads the message and closes the window. The app may choose to hide flows that result in message if the user is unlikely to benefit from the message. |
-| `ConsentRequired`| User consent is missing, or has been revoked. |Call `acquireToken` with interactive parameters so that the user can give consent. |
-| `UserPasswordExpired` | User's password has expired. | Call `acquireToken` with interactive parameter so the user can reset their password. |
-| `None` | Further details are provided. The condition may be resolved by user interaction during the interactive authentication flow. | Call `acquireToken` with interactive parameters. |
-
-### Code Example
-
-```java
- IAuthenticationResult result;
- try {
- PublicClientApplication application = PublicClientApplication
- .builder("clientId")
- .b2cAuthority("authority")
- .build();
-
- SilentParameters parameters = SilentParameters
- .builder(Collections.singleton("scope"))
- .build();
-
- result = application.acquireTokenSilently(parameters).join();
- }
- catch (Exception ex){
- if(ex instanceof MsalInteractionRequiredException){
- // AcquireToken by either AuthorizationCodeParameters or DeviceCodeParameters
- } else{
- // Log and handle exception accordingly
- }
- }
-```
---
-## Next steps
-
-Consider enabling [Logging in MSAL for Java](msal-logging-java.md) to help you diagnose and debug issues.
active-directory Msal Error Handling Js https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-error-handling-js.md
The following error types are available:
- `AuthError`: Base error class for the MSAL.js library, also used for unexpected errors.

-- `ClientAuthError`: Error class, which denotes an issue with Client authentication. Most errors that come from the library will be ClientAuthErrors. These errors result from things like calling a login method when login is already in progress, the user cancels the login, and so on.
+- `ClientAuthError`: Error class that denotes an issue with client authentication. Most errors that come from the library are ClientAuthErrors. These errors result from things like calling a login method when login is already in progress, the user cancels the login, and so on.
- `ClientConfigurationError`: Error class, extends `ClientAuthError`, thrown before requests are made when the given user config parameters are malformed or missing.

-- `ServerError`: Error class, represents the error strings sent by the authentication server. These may be errors such as invalid request formats or parameters, or any other errors that prevent the server from authenticating or authorizing the user.
+- `ServerError`: Error class, represents the error strings sent by the authentication server. These errors may be invalid request formats or parameters, or any other errors that prevent the server from authenticating or authorizing the user.
- `InteractionRequiredAuthError`: Error class, extends `ServerError` to represent server errors, which require an interactive call. This error is thrown by `acquireTokenSilent` if the user is required to interact with the server to provide credentials or consent for authentication/authorization. Error codes include `"interaction_required"`, `"login_required"`, and `"consent_required"`.
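A common pattern that follows from this hierarchy is to fall back to an interactive call only for `InteractionRequiredAuthError` (a sketch; `myMSALObj` and `request` are assumed to be defined as in the surrounding samples, and `callApi` is a hypothetical helper):

```javascript
myMSALObj.acquireTokenSilent(request).then(response => {
    // Use response.accessToken to call the protected API
    callApi(response.accessToken);
}).catch(error => {
    if (error instanceof msal.InteractionRequiredAuthError) {
        // The user must interact (consent, MFA, expired session, and so on)
        return myMSALObj.acquireTokenPopup(request);
    }
    // ServerError, ClientAuthError, and so on: log and handle accordingly
    console.error(error);
});
```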
myMSALObj.handleRedirectPromise()
myMSALObj.acquireTokenRedirect(request); ```
-The methods for pop-up experience (`loginPopup`, `acquireTokenPopup`) return promises, so you can use the promise pattern (.then and .catch) to handle them as shown:
+The methods for pop-up experience (`loginPopup`, `acquireTokenPopup`) return promises, so you can use the promise pattern (`.then` and `.catch`) to handle them as shown:
```javascript myMSALObj.acquireTokenPopup(request).then(
When calling an API requiring Conditional Access, you can receive a claims chall
See [How to use Continuous Access Evaluation enabled APIs in your applications](./app-resilience-continuous-access-evaluation.md) for more detail.
+### Using other frameworks
+
+Toolkits like Tauri used to register single-page applications (SPAs) with the identity platform aren't recognized for production apps. SPAs support only URLs that start with `https` for production apps and `http://localhost` for local development. Prefixes like `tauri://localhost` can't be used for browser apps. This format can be supported only for mobile or web apps, because they have a confidential component, unlike browser apps.
+
[!INCLUDE [Active directory error handling retries](./includes/error-handling-and-tips/error-handling-retries.md)]

## Next steps
active-directory Msal Error Handling Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-error-handling-python.md
- Title: Handle errors and exceptions in MSAL for Python
-description: Learn how to handle errors and exceptions, Conditional Access claims challenges, and retries in MSAL for Python applications.
-Previously updated : 03/16/2023
-# Handle errors and exceptions in MSAL for Python
--
-## Error handling in MSAL for Python
-
-In MSAL for Python, most errors are conveyed as a return value from the API call. The error is represented as a dictionary containing the JSON response from the Microsoft identity platform.
-
-* A successful response contains the `"access_token"` key. The format of the response is defined by the OAuth2 protocol. For more information, see [5.1 Successful Response](https://tools.ietf.org/html/rfc6749#section-5.1)
-* An error response contains `"error"` and usually `"error_description"`. The format of the response is defined by the OAuth2 protocol. For more information, see [5.2 Error Response](https://tools.ietf.org/html/rfc6749#section-5.2)
-
-When an error is returned, the `"error"` key contains a machine-readable code. If the `"error"` is, for example, an `"interaction_required"`, you may prompt the user to provide additional information to complete the authentication process. If the `"error"` is `"invalid_grant"`, you may prompt the user to reenter their credentials. The following snippet is an example of error handling in MSAL for Python.
-
-```python
-
-from msal import ConfidentialClientApplication
-
-authority_url = "https://login.microsoftonline.com/your_tenant_id"
-client_id = "your_client_id"
-client_secret = "your_client_secret"
-scopes = ["https://graph.microsoft.com/.default"]
-
-app = ConfidentialClientApplication(client_id, authority=authority_url, client_credential=client_secret)
-
-result = app.acquire_token_silent(scopes=scopes, account=None)
-
-if not result:
- result = app.acquire_token_for_client(scopes=scopes)
-
-if "access_token" in result:
- print("Access token: %s" % result["access_token"])
-else:
- print("Error: %s" % result.get("error"))
-
-```
-
-When an error is returned, the `"error_description"` key also contains a human-readable message, and there is typically also an `"error_code"` key which contains a machine-readable Microsoft identity platform error code. For more information about the various Microsoft identity platform error codes, see [Authentication and authorization error codes](./reference-error-codes.md).
-
-In MSAL for Python, exceptions are rare because most errors are handled by returning an error value. The `ValueError` exception is only thrown when there's an issue with how you're attempting to use the library, such as when API parameter(s) are malformed.
---
-## Retrying after errors and exceptions
-
-MSAL makes HTTP calls to the Azure AD service, and occasionally failures can occur.
-For example, the network can go down or the server can be overloaded.
-
-MSAL Python 1.11+ automatically performs one retry attempt for you.
-You may customize this behavior by following
-[this instruction](https://msal-python.readthedocs.io/en/latest/#msal.ConfidentialClientApplication.params.http_client).
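For example, here's a hedged sketch that supplies a `requests` session with a custom urllib3 retry policy as the `http_client` (the client ID and retry values are illustrative):

```python
import msal
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

session = requests.Session()
# Illustrative policy: up to 3 retries with backoff on transient failures
session.mount("https://", HTTPAdapter(max_retries=Retry(total=3, backoff_factor=1)))

app = msal.PublicClientApplication(
    "client_id",  # placeholder
    authority="https://login.microsoftonline.com/common",
    http_client=session,  # MSAL sends all HTTP traffic through this session
)
```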
-
-### HTTP 429
-
-When the Service Token Server (STS) is overloaded with too many requests,
-it returns HTTP error 429 with a hint about how long until you can try again in the `Retry-After` response field.
-
-Your app was expected to throttle the subsequent requests, and only retry after the specified period.
-That was not an easy task.
-
-MSAL Python 1.16+ makes this easy: your app can blindly retry at any time
-(say, whenever the end user clicks the sign-in button again),
-and MSAL Python automatically throttles those retry attempts by returning the same error response from an HTTP cache,
-sending out a real HTTP call only when a call is attempted after the specified period.
-
-By default, this throttle mechanism works by saving throttle information into a built-in in-memory HTTP cache.
-You may provide your own `dict`-like object as the HTTP cache and control how its content is persisted.
-See [MSAL Python's API document](https://msal-python.readthedocs.io/en/latest/#msal.PublicClientApplication.params.http_cache)
-for more details.
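A hedged sketch of persisting that cache with `shelve` (the file name is illustrative; `http_cache` is the parameter described in the linked API document):

```python
import atexit
import shelve

import msal

# A dict-like object persisted to disk, so throttle data survives restarts
http_cache = shelve.open("msal_http_cache", writeback=True)
atexit.register(http_cache.close)

app = msal.PublicClientApplication(
    "client_id",  # placeholder
    authority="https://login.microsoftonline.com/common",
    http_cache=http_cache,
)
```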
-
-## Next steps
-
-Consider enabling [Logging in MSAL for Python](msal-logging-python.md) to help you diagnose and debug issues.
active-directory Msal Ios Shared Devices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-ios-shared-devices.md
These Microsoft applications support Azure AD's shared device mode:
- [Microsoft Teams](/microsoftteams/platform/) (in Public Preview)

> [!IMPORTANT]
-> Public preview is provided without a service-level agreement and isn't recommended for production workloads. Some features might be unsupported or have constrained capabilities. For more information, see [Supplemental terms of use for Microsoft Azure previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+> Public preview is provided without a service-level agreement and isn't recommended for production workloads. Some features might be unsupported or have constrained capabilities. For more information, see [Universal License Terms for Online Services](https://www.microsoft.com/licensing/terms/product/ForOnlineServices/all).
## Next steps
active-directory Msal Java Adfs Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-java-adfs-support.md
- Title: AD FS support (MSAL for Java)
-description: Learn about Active Directory Federation Services (AD FS) support in the Microsoft Authentication Library for Java (MSAL4j).
-Previously updated : 11/21/2019
-#Customer intent: As an application developer, I want to learn about AD FS support in MSAL for Java so I can decide if this platform meets my application development needs and requirements.
--
-# Active Directory Federation Services support in MSAL for Java
-
-Active Directory Federation Services (AD FS) in Windows Server enables you to add OpenID Connect and OAuth 2.0 based authentication and authorization to your Microsoft Authentication Library for Java (MSAL for Java) app. Once integrated, your app can authenticate users in AD FS, federated through Azure AD. For more information about scenarios, see [AD FS Scenarios for Developers](/windows-server/identity/ad-fs/ad-fs-development).
-
-An app that uses MSAL for Java will talk to Azure Active Directory (Azure AD), which then federates to AD FS.
-
-MSAL for Java connects to Azure AD, which signs in users that are managed in Azure AD (managed users) or users managed by another identity provider such as AD FS (federated users). MSAL for Java doesn't know that a user is federated. It simply talks to Azure AD.
-
-The [authority](msal-client-application-configuration.md#authority) you use in this case is the usual authority (authority host name + tenant, common, or organizations).
-
-## Acquire a token interactively for a federated user
-
-When you call `ConfidentialClientApplication.AcquireToken()` or `PublicClientApplication.AcquireToken()` with `AuthorizationCodeParameters` or `DeviceCodeParameters`, the user experience is typically:
-
-1. The user enters their account ID.
-2. Azure AD briefly displays "Taking you to your organization's page", and the user is redirected to the sign-in page of the identity provider. The sign-in page is usually customized with the logo of the organization.
-
-The supported AD FS versions in this federated scenario are:
-- Active Directory Federation Services FS v2
-- Active Directory Federation Services v3 (Windows Server 2012 R2)
-- Active Directory Federation Services v4 (AD FS 2016)
-## Acquire a token via username and password
-
-When you acquire a token using `ConfidentialClientApplication.AcquireToken()` or `PublicClientApplication.AcquireToken()` with `IntegratedWindowsAuthenticationParameters` or `UsernamePasswordParameters`, MSAL for Java gets the identity provider to contact based on the username. MSAL for Java gets a [SAML 1.1 token](reference-saml-tokens.md) from the identity provider, which it then provides to Azure AD, which returns the JSON Web Token (JWT).
-
-## Next steps
-
-For the federated case, see [Configure Azure Active Directory sign-in behavior for an application by using a Home Realm Discovery policy](../manage-apps/configure-authentication-for-federated-users-portal.md)
active-directory Msal Java Get Remove Accounts Token Cache https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-java-get-remove-accounts-token-cache.md
- Title: Get & remove accounts from the token cache (MSAL4j)
-description: Learn how to view and remove accounts from the token cache using the Microsoft Authentication Library for Java.
-Previously updated : 11/07/2019
-#Customer intent: As an application developer using the Microsoft Authentication Library for Java (MSAL4J), I want to learn how to get and remove accounts stored in the token cache.
--
-# Get and remove accounts from the token cache using MSAL for Java
-
-MSAL for Java provides an in-memory token cache by default. The in-memory token cache lasts the duration of the application instance.
-
-## See which accounts are in the cache
-
-You can check what accounts are in the cache by calling `PublicClientApplication.getAccounts()` as shown in the following example:
-
-```java
-PublicClientApplication pca = new PublicClientApplication.Builder(
- labResponse.getAppId()).
- authority(TestConstants.ORGANIZATIONS_AUTHORITY).
- build();
-
-Set<IAccount> accounts = pca.getAccounts().join();
-```
-
-## Remove accounts from the cache
-
-To remove an account from the cache, find the account that needs to be removed and then call `PublicClientApplication.removeAccount()` as shown in the following example:
-
-```java
-Set<IAccount> accounts = pca.getAccounts().join();
-
-IAccount accountToBeRemoved = accounts.stream().filter(
- x -> x.username().equalsIgnoreCase(
- UPN_OF_USER_TO_BE_REMOVED)).findFirst().orElse(null);
-
-pca.removeAccount(accountToBeRemoved).join();
-```
-
-## Learn more
-
-If you are using MSAL for Java, learn about [Custom token cache serialization in MSAL for Java](msal-java-token-cache-serialization.md).
active-directory Msal Java Token Cache Serialization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-java-token-cache-serialization.md
- Title: Custom token cache serialization (MSAL4j)
-description: Learn how to serialize the token cache for MSAL for Java
-Previously updated : 11/07/2019
-#Customer intent: As an application developer using the Microsoft Authentication Library for Java (MSAL4J), I want to learn how to persist the token cache so that it is available to a new instance of my application.
--
-# Custom token cache serialization in MSAL for Java
-
-To persist the token cache between instances of your application, you will need to customize the serialization. The Java classes and interfaces involved in token cache serialization are the following:
-- [ITokenCache](https://static.javadoc.io/com.microsoft.azure/msal4j/0.5.0-preview/com/microsoft/aad/msal4j/ITokenCache.html): Interface representing the security token cache.
-- [ITokenCacheAccessAspect](https://static.javadoc.io/com.microsoft.azure/msal4j/0.5.0-preview/com/microsoft/aad/msal4j/ITokenCacheAccessAspect.html): Interface representing the operation of executing code before and after cache access. You would @Override *beforeCacheAccess* and *afterCacheAccess* with the logic responsible for serializing and deserializing the cache.
-- [ITokenCacheContext](https://static.javadoc.io/com.microsoft.azure/msal4j/0.5.0-preview/com/microsoft/aad/msal4j/ITokenCacheAccessContext.html): Interface representing the context in which the token cache is accessed.
-Below is a naive implementation of custom token cache serialization/deserialization. Do not copy and paste this into a production environment.
-
-```Java
-static class TokenPersistence implements ITokenCacheAccessAspect {
-    String data;
-
-    TokenPersistence(String data) {
-        this.data = data;
-    }
-
-    // Called before MSAL accesses the cache: load the persisted state
-    @Override
-    public void beforeCacheAccess(ITokenCacheAccessContext iTokenCacheAccessContext) {
-        iTokenCacheAccessContext.tokenCache().deserialize(data);
-    }
-
-    // Called after MSAL accesses the cache: capture the (possibly updated) state
-    @Override
-    public void afterCacheAccess(ITokenCacheAccessContext iTokenCacheAccessContext) {
-        data = iTokenCacheAccessContext.tokenCache().serialize();
-    }
-}
-```
-
-```Java
-// Loads cache from file
-String dataToInitCache = readResource(this.getClass(), "/cache_data/serialized_cache.json");
-
-ITokenCacheAccessAspect persistenceAspect = new TokenPersistence(dataToInitCache);
-
-// By setting *TokenPersistence* on the PublicClientApplication, MSAL will call *beforeCacheAccess()* before accessing the cache and *afterCacheAccess()* after accessing the cache.
-PublicClientApplication app =
-PublicClientApplication.builder("my_client_id").setTokenCacheAccessAspect(persistenceAspect).build();
-```
-
-## Learn more
-
-Learn about [Get and remove accounts from the token cache using MSAL for Java](msal-java-get-remove-accounts-token-cache.md).
active-directory Msal Logging Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-logging-java.md
- Title: Logging errors and exceptions in MSAL for Java
-description: Learn how to log errors and exceptions in MSAL for Java
-Previously updated : 11/25/2022
-# Logging in MSAL for Java
--
-## MSAL for Java logging
-
-MSAL for Java allows you to use the logging library that you're already using with your app, as long as it's compatible with SLF4J. MSAL for Java uses the [Simple Logging Facade for Java](http://www.slf4j.org/) (SLF4J) as a simple facade or abstraction for various logging frameworks, such as [java.util.logging](https://docs.oracle.com/javase/7/docs/api/java/util/logging/package-summary.html), [Logback](http://logback.qos.ch/) and [Log4j](https://logging.apache.org/log4j/2.x/). SLF4J allows the user to plug in the desired logging framework at deployment time; in this article's sample it binds to Logback, and MSAL logs are written to the console.
-
-This article shows how to enable MSAL4J logging using the Logback framework in a Spring Boot web application. You can refer to the [code sample](https://github.com/Azure-Samples/ms-identity-java-webapp/tree/master/msal-java-webapp-sample) for reference.
-
-1. To implement logging, include the `logback` package in the *pom.xml* file.
-
- ```xml
- <dependency>
- <groupId>ch.qos.logback</groupId>
- <artifactId>logback-classic</artifactId>
- <version>1.2.3</version>
- </dependency>
- ```
-
-2. Navigate to the *resources* folder, add a file called *logback.xml*, and insert the following code. This appends logs to the console. You can change the appender `class` to write logs to a file, database, or any appender of your choosing.
-
- ```xml
- <?xml version="1.0" encoding="UTF-8"?>
- <configuration>
- <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
- <encoder>
- <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
- </encoder>
- </appender>
- <root level="debug">
- <appender-ref ref="STDOUT" />
- </root>
- </configuration>
- ```
-3. Next, you should set the *logging.config* property to the location of the *logback.xml* file before the main method. Navigate to *MsalWebSampleApplication.java* and add the following code to the `MsalWebSampleApplication` public class.
-
- ```java
- @SpringBootApplication
- public class MsalWebSampleApplication {
-
-    // Backslashes in a Java string literal must be escaped
-    static { System.setProperty("logging.config", "C:\\Users\\<your path>\\src\\main\\resources\\logback.xml"); }
-
-    public static void main(String[] args) {
-        SpringApplication.run(MsalWebSampleApplication.class, args);
-    }
- }
- ```
-
-In your tenant, you'll need separate app registrations for the web app and the web API. For app registration and exposing the web API scope, follow the steps in the scenario [A web app that authenticates users and calls web APIs](./scenario-web-app-call-api-overview.md).
-
-For instructions on how to bind to other logging frameworks, see the [SLF4J manual](http://www.slf4j.org/manual.html).
-
-### Personal and organization information
-
-By default, MSAL logging doesn't capture or log any personal or organizational data. In the following example, logging personal or organizational data is off by default:
-
-```java
- PublicClientApplication app2 = PublicClientApplication.builder(PUBLIC_CLIENT_ID)
- .authority(AUTHORITY)
- .build();
-```
-
-Turn on personal and organizational data logging by setting `logPii()` on the client application builder. If you turn on personal or organizational data logging, your app must take responsibility for safely handling highly sensitive data and complying with any regulatory requirements.
-
-In the following example, logging personal or organizational data is enabled:
-
-```java
-PublicClientApplication app2 = PublicClientApplication.builder(PUBLIC_CLIENT_ID)
- .authority(AUTHORITY)
- .logPii(true)
- .build();
-```
-
-## Next steps
-
-For more code samples, refer to [Microsoft identity platform code samples](sample-v2-code.md).
active-directory Msal Logging Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-logging-python.md
- Title: Logging errors and exceptions in MSAL for Python
-description: Learn how to log errors and exceptions in MSAL for Python
-Previously updated : 01/25/2021
-# Logging in MSAL for Python
--
-## MSAL for Python logging
-
-Logging in MSAL for Python leverages the [logging module in the Python standard library](https://docs.python.org/3/library/logging.html). You can configure MSAL logging as follows (and see it in action in the [username_password_sample](https://github.com/AzureAD/microsoft-authentication-library-for-python/blob/1.0.0/sample/username_password_sample.py#L31L32)):
-
-### Enable debug logging for all modules
-
-By default, the logging in any Python script is turned off. If you want to enable verbose logging for **all** Python modules in your script, use `logging.basicConfig` with a level of `logging.DEBUG`:
-
-```python
-import logging
-
-logging.basicConfig(level=logging.DEBUG)
-```
-
-This will print all log messages given to the logging module to the standard output.
-
-### Configure MSAL logging level
-
-You can configure the logging level of the MSAL for Python log provider by using the `logging.getLogger()` method with the logger name `"msal"`:
-
-```python
-import logging
-
-logging.getLogger("msal").setLevel(logging.WARN)
-```
-
-### Configure MSAL logging with Azure App Insights
-
-Python logs are given to a log handler, which by default is the `StreamHandler`. To send MSAL logs to Application Insights with an instrumentation key, use the `AzureLogHandler` provided by the `opencensus-ext-azure` library.
-
-To install `opencensus-ext-azure`, add the package from PyPI to your dependencies or install it with pip:
-
-```console
-pip install opencensus-ext-azure
-```
-
-Then change the default handler of the `"msal"` log provider to an instance of `AzureLogHandler` with an instrumentation key set in the `APP_INSIGHTS_KEY` environment variable:
-
-```python
-import logging
-import os
-
-from opencensus.ext.azure.log_exporter import AzureLogHandler
-
-APP_INSIGHTS_KEY = os.getenv('APP_INSIGHTS_KEY')
-
-logging.getLogger("msal").addHandler(AzureLogHandler(connection_string='InstrumentationKey={0}'.format(APP_INSIGHTS_KEY)))
-```
-
-### Personal and organizational data in Python
-
-MSAL for Python does not log personal data or organizational data. There is no property to turn personal or organizational data logging on or off.
-
-You can use standard Python logging to log whatever you want, but you are responsible for safely handling sensitive data and following regulatory requirements.
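
For example, a minimal sketch of app-level logging that stays separate from the `msal` logger; the logger name and the message fields are illustrative, not part of MSAL:

```python
import logging

# An application-specific logger, distinct from the "msal" logger.
app_logger = logging.getLogger("my_app")  # illustrative name
app_logger.setLevel(logging.INFO)

# Prefer opaque identifiers such as correlation IDs over personal data.
app_logger.info("Token request finished; correlation_id=%s", "<correlation-id>")
```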
-
-For more information about logging in Python, please refer to Python's [Logging: how-to](https://docs.python.org/3/howto/logging.html#logging-basic-tutorial).
-
-## Next steps
-
-For more code samples, refer to [Microsoft identity platform code samples](sample-v2-code.md).
active-directory Msal Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-migration.md
MSAL supports a wide range of application types and scenarios. Refer to [Microso
ADAL to MSAL migration guides for the different platforms are available at the following links:

- [Migrate to MSAL iOS and macOS](migrate-objc-adal-msal.md)
-- [Migrate to MSAL Java](migrate-adal-msal-java.md)
+- [Migrate to MSAL Java](/entra/msal/java/advanced/migrate-adal-msal-java)
- [Migrate to MSAL.js](msal-compare-msal-js-and-adal-js.md)
- [Migrate to MSAL .NET](msal-net-migration.md)
- [Migrate to MSAL Node](msal-node-migration.md)
-- [Migrate to MSAL Python](migrate-python-adal-msal.md)
+- [Migrate to MSAL Python](/entra/msal/python/advanced/migrate-python-adal-msal)
## Migration help
active-directory Msal Net Use Brokers With Xamarin Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-net-use-brokers-with-xamarin-apps.md
public static string redirectUriOnIos = "msauth.com.yourcompany.XForms://auth";
Notice that the redirect URI matches the `CFBundleURLSchemes` name that you included in the *Info.plist* file.
-Add the redirect URI to the app's registration in the [Azure portal](https://portal.azure.com). To generate a properly formatted redirect URI, use **App registrations** in the Azure portal to generate the brokered redirect URI from the bundle ID.
+Add the redirect URI to the app's registration. To generate a properly formatted redirect URI, use **App registrations** to generate the brokered redirect URI from the bundle ID.
**To generate the redirect URI:**
-1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>.
-1. Select **Azure Active Directory** > **App registrations** > your registered app
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator).
+1. Browse to **Identity** > **Applications** > **App registrations**.
+1. Search for and select the application.
1. Select **Authentication** > **Add a platform** > **iOS / macOS**.
1. Enter your bundle ID, and then select **Configure**. Copy the generated redirect URI that appears in the **Redirect URI** text box for inclusion in your code:
- :::image type="content" source="media/msal-net-use-brokers-with-xamarin-apps/portal-01-ios-platform-settings.png" alt-text="iOS platform settings with generated redirect URI in Azure portal":::
+ :::image type="content" source="media/msal-net-use-brokers-with-xamarin-apps/portal-01-ios-platform-settings.png" alt-text="iOS platform settings with generated redirect URI":::
1. Select **Done** to complete generation of the redirect URI.

## Brokered authentication for Android
result = await app.AcquireTokenInteractive(scopes)
### Step 4: Add a redirect URI to your app registration
-MSAL uses URLs to invoke the broker and then return to your app. To complete that round trip, register a **Redirect URI** for your app by using the [Azure portal](https://portal.azure.com).
+MSAL uses URLs to invoke the broker and then return to your app. To complete that round trip, register a **Redirect URI** for your app.
The format of the redirect URI for your application depends on the certificate used to sign the APK. For example:
The format of the redirect URI for your application depends on the certificate u
msauth://com.microsoft.xforms.testApp/hgbUYHVBYUTvuvT&Y6tr554365466=
```
-The last part of the URI, `hgbUYHVBYUTvuvT&Y6tr554365466=`, is the Base64-encoded version of the signature that the APK is signed with. While developing your app in Visual Studio, if you're debugging your code without signing the APK with a specific certificate, Visual Studio signs the APK for you for debugging purposes. When Visual Studio signs the APK for you in this way, it gives it a unique signature for the machine it's built on. Thus, each time you build your app on a different machine, you'll need to update the redirect URI in the application's code and the application's registration in the Azure portal in order to authenticate with MSAL.
+The last part of the URI, `hgbUYHVBYUTvuvT&Y6tr554365466=`, is the Base64-encoded version of the signature that the APK is signed with. While developing your app in Visual Studio, if you're debugging your code without signing the APK with a specific certificate, Visual Studio signs the APK for you for debugging purposes. When Visual Studio signs the APK for you in this way, it gives it a unique signature for the machine it's built on. Thus, each time you build your app on a different machine, you'll need to update the redirect URI in the application's code and the application's registration in order to authenticate with MSAL.
-While debugging, you may encounter an MSAL exception (or log message) stating the redirect URI provided is incorrect. **The exception or log message also indicates the redirect URI you should be using** with the current machine you're debugging on. You can use the provided redirect URI to continue developing your app as long as you update redirect URI in code and add the provided redirect URI to the app's registration in the Azure portal.
+While debugging, you may encounter an MSAL exception (or log message) stating the redirect URI provided is incorrect. **The exception or log message also indicates the redirect URI you should be using** with the current machine you're debugging on. You can use the provided redirect URI to continue developing your app as long as you update the redirect URI in code and add the provided redirect URI to the app's registration.
-Once you're ready to finalize your code, update the redirect URI in the code and the application's registration in the Azure portal to use the signature of the certificate you sign the APK with.
+Once you're ready to finalize your code, update the redirect URI in the code and the application's registration to use the signature of the certificate you sign the APK with.
In practice, this means you should consider adding a redirect URI for each member of your development team, *plus* a redirect URI for the production signed version of the APK.
As an alternative, you can configure MSAL to fall back to the embedded browser,
Here are a few tips on avoiding issues when you implement brokered authentication on Android:

-- **Redirect URI** - Add a redirect URI to your application registration in the [Azure portal](https://portal.azure.com). A missing or incorrect redirect URI is a common issue encountered by developers.
+- **Redirect URI** - Add a redirect URI to your application registration. A missing or incorrect redirect URI is a common issue encountered by developers.
- **Broker version** - Install the minimum required version of the broker apps. Either of these two apps can be used for brokered authentication on Android:
  - [Intune Company Portal](https://play.google.com/store/apps/details?id=com.microsoft.windowsintune.companyportal) (version 5.0.4689.0 or greater)
  - [Microsoft Authenticator](https://play.google.com/store/apps/details?id=com.azure.authenticator) (version 6.2001.0140 or greater)
active-directory Msal Net User Gets Consent For Multiple Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-net-user-gets-consent-for-multiple-resources.md
# User gets consent for several resources using MSAL.NET
-The Microsoft identity platform does not allow you to get a token for several resources at once. When using the Microsoft Authentication Library for .NET (MSAL.NET), the scopes parameter in the acquire token method should only contain scopes for a single resource. However, you can pre-consent to several resources upfront by specifying additional scopes using the `.WithExtraScopeToConsent` builder method.
+The Microsoft identity platform does not allow you to get a token for several resources at once. When using the Microsoft Authentication Library for .NET (MSAL.NET), the *scopes* parameter in the acquire token method should only contain scopes for a single resource. However, you can pre-consent to several resources upfront by specifying additional scopes using the `.WithExtraScopesToConsent` builder method.
> [!NOTE]
> Getting consent for several resources works for the Microsoft identity platform, but not for Azure AD B2C. Azure AD B2C supports only admin consent, not user consent.
For example, if you have two resources that have 2 scopes each:
- https:\//mytenant.onmicrosoft.com/customerapi (with two scopes `customer.read` and `customer.write`)
- https:\//mytenant.onmicrosoft.com/vendorapi (with two scopes `vendor.read` and `vendor.write`)
-You should use the `.WithExtraScopeToConsent` modifier which has the *extraScopesToConsent* parameter as shown in the following example:
+You should use the `.WithExtraScopesToConsent` method which has the *extraScopesToConsent* parameter as shown in the following example:
```csharp
string[] scopesForCustomerApi = new string[]
string[] scopesForVendorApi = new string[]
var accounts = await app.GetAccountsAsync();
var result = await app.AcquireTokenInteractive(scopesForCustomerApi)
                      .WithAccount(accounts.FirstOrDefault())
- .WithExtraScopeToConsent(scopesForVendorApi)
+ .WithExtraScopesToConsent(scopesForVendorApi)
                      .ExecuteAsync();
```
-This will get you an access token for the first web API. Then, to access the second web API you can silently acquire the token from the token cache:
+`AcquireTokenInteractive` will return an access token for the first web API. Along with that access token, a refresh token will also be retrieved from Azure AD and cached. Then, to access the second web API, you can silently acquire the token using `AcquireTokenSilent`. MSAL uses the cached refresh token to retrieve the access token for the second web API from Azure AD.
```csharp
-AcquireTokenSilent(scopesForVendorApi, accounts.FirstOrDefault()).ExecuteAsync();
+var result = await app.AcquireTokenSilent(scopesForVendorApi, accounts.FirstOrDefault()).ExecuteAsync();
```
active-directory Msal Python Token Cache Serialization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-python-token-cache-serialization.md
- Title: Custom token cache serialization (MSAL Python)
-description: Learn how to serialize token cache using MSAL for Python
-Previously updated : 06/26/2023
-#Customer intent: As an application developer using the Microsoft Authentication Library (MSAL) for Python, I want to learn how to persist the token cache so that it is available to a new instance of my application.
--
-# Custom token cache serialization in MSAL for Python
-
-In the Microsoft Authentication Library (MSAL) for Python, an in-memory token cache that persists for the duration of the app session is provided by default when you create an instance of [ClientApplication](/python/api/msal/msal.application.confidentialclientapplication).
-
-Serialization of the token cache, so that different sessions of your app can access it, isn't provided "out of the box." MSAL for Python can be used in app types that don't have access to the file system, such as web apps. To have a persistent token cache in an app that uses MSAL for Python, you must provide custom token cache serialization.
-
-The strategies for serializing the token cache differ depending on whether you're writing a public client application (Desktop), or a confidential client application (web app, web API, or daemon app).
-
-## Token cache for a public client application
-
-Public client applications run on a user's device and manage tokens for a single user. In this case, you could serialize the entire cache into a file. Remember to provide file locking if your app, or another app, can access the cache concurrently. For a simple example of how to serialize a token cache to a file without locking, see the example in the [SerializableTokenCache](/python/api/msal/msal.token_cache.serializabletokencache) class reference documentation.
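
As an illustration, a minimal sketch of file-based persistence built on the `SerializableTokenCache` pattern; the cache file path and client ID are placeholders, and file locking is deliberately omitted:

```python
import atexit
import os
import msal

CACHE_FILE = "token_cache.bin"  # placeholder path; add file locking for concurrent access

cache = msal.SerializableTokenCache()
if os.path.exists(CACHE_FILE):
    with open(CACHE_FILE) as f:
        cache.deserialize(f.read())

# Write the cache back to disk at interpreter exit, but only if it changed.
atexit.register(
    lambda: open(CACHE_FILE, "w").write(cache.serialize())
    if cache.has_state_changed else None
)

app = msal.PublicClientApplication(
    "Enter_the_Application_Id_Here",  # placeholder client ID
    token_cache=cache,
)
```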
-
-## Token cache for a Web app (confidential client application)
-
-For web apps or web APIs, you might use the session, a Redis cache, or a database to store the token cache. There should be one token cache per user (per account), so ensure that you serialize the token cache per account.
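
As one possible shape for this, a sketch of per-user cache handling in a Flask web app, modeled on the sample linked in the next steps; Flask and the `token_cache` session key are assumptions, not requirements:

```python
import msal
from flask import session  # assumes a Flask app with server-side sessions

def load_cache():
    # Rebuild this user's token cache from their session, if one was saved.
    cache = msal.SerializableTokenCache()
    if session.get("token_cache"):
        cache.deserialize(session["token_cache"])
    return cache

def save_cache(cache):
    # Persist the cache back to the session only when MSAL changed it.
    if cache.has_state_changed:
        session["token_cache"] = cache.serialize()
```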
-
-## Next steps
-
-See [ms-identity-python-webapp](https://github.com/Azure-Samples/ms-identity-python-webapp/blob/0.3.0/app.py#L66-L74) for an example of how to use the token cache for a Windows or Linux Web app or web API. The example is for a web app that calls the Microsoft Graph API.
active-directory Multi Service Web App Authentication App Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/multi-service-web-app-authentication-app-service.md
You now have an app that's secured by the App Service authentication and authori
## Verify limited access to the web app
-When you enabled the App Service authentication/authorization module, an app registration was created in your Azure AD tenant. The app registration has the same display name as your web app. To check the settings, select **Azure Active Directory** from the portal menu, and select **App registrations**. Select the app registration that was created. In the overview, verify that **Supported account types** is set to **My organization only**.
+When you enabled the App Service authentication/authorization module, an app registration was created in your Azure AD tenant. The app registration has the same display name as your web app. To check the settings, sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Application Developer](../roles/permissions-reference.md#application-developer) and browse to **Identity** > **Applications** > **App registrations**. Select the app registration that was created. In the overview, verify that **Supported account types** is set to **My organization only**.
:::image type="content" alt-text="Screenshot that shows verifying access." source="./media/multi-service-web-app-authentication-app-service/verify-access.png":::
active-directory Multi Service Web App Clean Up Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/multi-service-web-app-clean-up-resources.md
This command might take several minutes to run.
## Delete the app registration
-From the portal menu, select **Azure Active Directory** > **App registrations**. Then select the application you created.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Application Developer](../roles/permissions-reference.md#application-developer).
+1. Browse to **Identity** > **Applications** > **App registrations**.
+1. Select the application you created.
+1. In the app registration overview, select **Delete**.
-In the app registration overview, select **Delete**.
active-directory Optional Claims https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/optional-claims.md
You can configure optional claims for your application through the Azure portal or application manifest.
-1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Search for and select **Azure Active Directory**.
-1. Under **Manage**, select **App registrations**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator).
+1. Browse to **Identity** > **Applications** > **App registrations**.
1. Choose the application for which you want to configure optional claims based on your scenario and desired outcome.
1. Under **Manage**, select **Token configuration**.
   - The **Token configuration** blade isn't available for apps registered in an Azure AD B2C tenant; for those apps, configure optional claims by modifying the application manifest. For more information, see [Add claims and customize user input using custom policies in Azure Active Directory B2C](../../active-directory-b2c/configure-user-input.md)
This section covers the configuration options under optional claims for changing
Complete the following steps to configure groups optional claims using the Azure portal:
-1. Sign in to the [Azure portal](https://portal.azure.com).
-1. After you've authenticated, choose your tenant by selecting it from the top-right corner of the page.
-1. Search for and select **Azure Active Directory**.
-1. Under **Manage**, select **App registrations**.
-1. Select the application you want to configure optional claims for in the list.
+1. Select the application for which you want to configure optional claims.
1. Under **Manage**, select **Token configuration**.
1. Select **Add groups claim**.
1. Select the group types to return (**Security groups**, **Directory roles**, **All groups**, and/or **Groups assigned to the application**):
Complete the following steps to configure groups optional claims using the Azure
Complete the following steps to configure groups optional claims through the application manifest:
-1. Sign in to the [Azure portal](https://portal.azure.com).
-1. After you've authenticated, choose your Azure AD tenant by selecting it from the top-right corner of the page.
-1. Search for and select **Azure Active Directory**.
-1. Select the application you want to configure optional claims for in the list.
+1. Select the application for which you want to configure optional claims.
1. Under **Manage**, select **Manifest**.
1. Add the following entry using the manifest editor:
Complete the following steps to configure groups optional claims through the app
Multiple token types can be listed:
- - idToken for the OIDC ID token
- - accessToken for the OAuth access token
- - Saml2Token for SAML tokens.
+ - `idToken` for the OIDC ID token
+ - `accessToken` for the OAuth access token
+ - `Saml2Token` for SAML tokens.
- The Saml2Token type applies to both SAML1.1 and SAML2.0 format tokens.
+ The `Saml2Token` type applies to both SAML1.1 and SAML2.0 format tokens.
For each relevant token type, modify the groups claim to use the `optionalClaims` section in the manifest. The `optionalClaims` schema is as follows:
In the following example, the Azure portal and manifest are used to add optional
Configure claims in the Azure portal:
-1. Sign in to the [Azure portal](https://portal.azure.com).
-1. After you've authenticated, choose your tenant by selecting it from the top-right corner of the page.
-1. Search for and select **Azure Active Directory**.
-1. Under **Manage**, select **App registrations**.
-1. Find the application you want to configure optional claims for in the list and select it.
+1. Select the application for which you want to configure optional claims.
1. Under **Manage**, select **Token configuration**.
1. Select **Add optional claim**, select the **ID** token type, select **upn** from the list of claims, and then select **Add**.
1. Select **Add optional claim**, select the **Access** token type, select **auth_time** from the list of claims, and then select **Add**.
Configure claims in the Azure portal:
Configure claims in the manifest:
-1. Sign in to the [Azure portal](https://portal.azure.com).
-1. After you've authenticated, choose your tenant by selecting it from the top-right corner of the page.
-1. Search for and select **Azure Active Directory**.
-1. Find the application you want to configure optional claims for in the list and select it.
+1. Select the application for which you want to configure optional claims.
1. Under **Manage**, select **Manifest** to open the inline manifest editor.
1. You can directly edit the manifest using this editor. The manifest follows the schema for the [Application entity](./reference-app-manifest.md) and is automatically formatted once saved. New elements are added to the `optionalClaims` property.
active-directory Permissions Consent Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/permissions-consent-overview.md
Depending on the permissions they require, some applications might require an ad
Preauthorization allows a resource application owner to grant permissions without requiring users to see a consent prompt for the same set of permissions that have been preauthorized. This way, an application that has been preauthorized won't ask users to consent to permissions. Resource owners can preauthorize client apps in the Azure portal or by using PowerShell and APIs, like Microsoft Graph.
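
As a hedged sketch of the API route, the following preauthorizes a client app by updating the resource app's `api.preAuthorizedApplications` collection through Microsoft Graph; all IDs and the access token are placeholders:

```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
APP_OBJECT_ID = "<resource-app-object-id>"        # placeholder
CLIENT_APP_ID = "<client-app-application-id>"     # placeholder
SCOPE_ID = "<delegated-permission-scope-id>"      # placeholder
access_token = "<token-with-Application.ReadWrite.All>"  # placeholder

# PATCH replaces the whole preAuthorizedApplications collection,
# so include any existing entries you want to keep.
body = {
    "api": {
        "preAuthorizedApplications": [
            {"appId": CLIENT_APP_ID, "delegatedPermissionIds": [SCOPE_ID]}
        ]
    }
}

resp = requests.patch(
    f"{GRAPH}/applications/{APP_OBJECT_ID}",
    headers={"Authorization": f"Bearer {access_token}"},
    json=body,
)
resp.raise_for_status()
```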
-## Next steps
+## See also
- [Delegated access scenario](delegated-access-primer.md)
- [User and admin consent overview](../manage-apps/user-admin-consent-overview.md)
- [OpenID connect scopes](scopes-oidc.md)
+- [Making your application multi-tenant](./howto-convert-app-to-be-multi-tenant.md)
+- [AzureAD Microsoft Q&A](/answers/topics/azure-active-directory.html)
active-directory Perms For Given Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/perms-for-given-api.md
- Title: Select permissions for a given API
-description: Learn about how permissions requests work for client and resource applications for applications you are developing
-Previously updated : 11/10/2022
-# How to select permissions for a given API
-
-## Recommended documents
-- Learn more about how client applications use [delegated and application permission requests](./developer-glossary.md#permissions) to access resources.
-- Learn about [scopes and permissions in the Microsoft identity platform](scopes-oidc.md)
-- See step-by-step instructions on how to [configure a client application's permission requests](./quickstart-configure-app-access-web-apis.md)
-- For more depth, learn how resource applications expose [scopes](./developer-glossary.md#scopes) and [application roles](./developer-glossary.md#roles) to client applications, which manifest as delegated and application permissions respectively in the Azure portal.
-## Next steps
-
-[AzureAD Microsoft Q&A](/answers/topics/azure-active-directory.html)
active-directory Publisher Verification Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/publisher-verification-overview.md
Previously updated : 08/11/2023 Last updated : 08/17/2023
Publisher verification gives app users and organization admins information about the authenticity of the developer's organization that publishes an app integrating with the Microsoft identity platform.
-When an app has a verified publisher, this means that the organization that publishes the app has been verified as authentic by Microsoft. Verifying an app includes using a Microsoft Cloud Partner Program (MCPP), formerly known as Microsoft Partner Network (MPN), account that's been [verified](/partner-center/verification-responses) and associating the verified PartnerID with an app registration.
+When an app has a verified publisher, this means that the organization that publishes the app has been verified as authentic by Microsoft. Verifying an app includes using a Microsoft Cloud Partner Program (CPP), formerly known as Microsoft Partner Network (MPN), account that's been [verified](/partner-center/verification-responses) and associating the verified PartnerID with an app registration.
When the publisher of an app has been verified, a blue *verified* badge appears in the Azure Active Directory (Azure AD) consent prompt for the app and on other webpages:
Publisher verification for an app has the following benefits:
App developers must meet a few requirements to complete the publisher verification process. Many Microsoft partners will have already satisfied these requirements.

-- The developer must have an MPN ID for a valid [Microsoft Cloud Partner Program](https://partner.microsoft.com/membership) account that has completed the [verification](/partner-center/verification-responses) process. The MPN account must be the [partner global account (PGA)](/partner-center/account-structure#the-top-level-is-the-partner-global-account-pga) for the developer's organization.
+- The developer must have a Partner One ID for a valid [Microsoft Cloud Partner Program](https://partner.microsoft.com/membership) account that has completed the [verification](/partner-center/verification-responses) process. The CPP account must be the [partner global account (PGA)](/partner-center/account-structure#the-top-level-is-the-partner-global-account-pga) for the developer's organization.
> [!NOTE]
- > The MPN account you use for publisher verification can't be your partner location MPN ID. Currently, location MPN IDs aren't supported for the publisher verification process.
+ > The CPP account you use for publisher verification can't be your partner location Partner One ID. Currently, location Partner One IDs aren't supported for the publisher verification process.
- The app that's to be publisher verified must be registered by using an Azure AD work or school account. Apps that are registered by using a Microsoft account can't be publisher verified.

-- The Azure AD tenant where the app is registered must be associated with the PGA. If the tenant where the app is registered isn't the primary tenant associated with the PGA, complete the steps to [set up the MPN PGA as a multitenant account and associate the Azure AD tenant](/partner-center/multi-tenant-account#add-an-azure-ad-tenant-to-your-account).
+- The Azure AD tenant where the app is registered must be associated with the PGA. If the tenant where the app is registered isn't the primary tenant associated with the PGA, complete the steps to [set up the CPP PGA as a multitenant account and associate the Azure AD tenant](/partner-center/multi-tenant-account#add-an-azure-ad-tenant-to-your-account).
- The app must be registered in an Azure AD tenant and have a [publisher domain](howto-configure-publisher-domain.md) set. The feature is not supported in an Azure AD B2C tenant.

-- The domain of the email address that's used during MPN account verification must either match the publisher domain that's set for the app or be a DNS-verified [custom domain](../fundamentals/add-custom-domain.md) that's added to the Azure AD tenant. (**Note**: the app's publisher domain can't be *.onmicrosoft.com to be publisher verified.)
+- The domain of the email address that's used during CPP account verification must either match the publisher domain that's set for the app or be a DNS-verified [custom domain](../fundamentals/add-custom-domain.md) that's added to the Azure AD tenant. (**Note**: the app's publisher domain can't be *.onmicrosoft.com to be publisher verified.)
-- The user who initiates verification must be authorized to make changes both to the app registration in Azure AD and to the MPN account in Partner Center. The user who initiates the verification must have one of the required roles in both Azure AD and Partner Center.
+- The user who initiates verification must be authorized to make changes both to the app registration in Azure AD and to the CPP account in Partner Center. The user who initiates the verification must have one of the required roles in both Azure AD and Partner Center.
- In Azure AD, this user must be a member of one of the following [roles](../roles/permissions-reference.md): Application Admin, Cloud Application Admin, or Global Administrator.
- - In Partner Center, this user must have one of the following [roles](/partner-center/permissions-overview): MPN Partner Admin, Account Admin, or Global Administrator (a shared role that's mastered in Azure AD).
+ - In Partner Center, this user must have one of the following [roles](/partner-center/permissions-overview): CPP Partner Admin, Account Admin, or Global Administrator (a shared role that's mastered in Azure AD).
- The user who initiates verification must sign in by using [Azure AD multifactor authentication](../authentication/howto-mfa-getstarted.md).
active-directory Quickstart Configure App Access Web Apis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-configure-app-access-web-apis.md
By specifying a web API's scopes in your client app's registration, the client a
[!INCLUDE [portal updates](~/articles/active-directory/includes/portal-update.md)]
+Access to APIs requires configuration of access scopes and roles. If you want to expose your resource application's web APIs to client applications, configure access scopes and roles for the API. If you want a client application to access a web API, configure permissions to access the API in the app registration.
+ In the first scenario, you grant a client app access to your own web API, both of which you should have registered as part of the prerequisites. If you don't yet have both a client app and a web API registered, complete the steps in the two [Prerequisites](#prerequisites) articles. This diagram shows how the two app registrations relate to one another. In this section, you add permissions to the client app's registration.
active-directory Quickstart Configure App Expose Web Apis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-configure-app-expose-web-apis.md
# Quickstart: Configure an application to expose a web API
-In this quickstart, you'll register a web API with the Microsoft identity platform and expose it to client apps by adding a scope. By registering your web API and exposing it through scopes, you can provide permissions-based access to its resources to authorized users and client apps that access your API.
+In this quickstart, you'll register a web API with the Microsoft identity platform and expose it to client apps by adding a scope. By registering your web API, exposing it through scopes, and assigning an owner and app role, you can provide permissions-based access to its resources to authorized users and client apps that access your API.
## Prerequisites
In this quickstart, you'll register a web API with the Microsoft identity platfo
## Register the web API
+Access to APIs requires configuration of access scopes and roles. If you want to expose your resource application's web APIs to client applications, configure access scopes and roles for the API. If you want a client application to access a web API, configure permissions to access the API in the app registration.
+ To provide scoped access to the resources in your web API, you first need to register the API with the Microsoft identity platform. Perform the steps in the **Register an application** section of [Quickstart: Register an app with the Microsoft identity platform](quickstart-register-app.md). Skip the **Redirect URI (optional)** section. You don't need to configure a redirect URI for a web API since no user is logged in interactively.
-With the web API registered, you can add scopes to the API's code so it can provide granular permission to consumers.
+## Assign application owner
+
+1. In your app registration, under **Manage**, select **Owners**, and **Add owners**.
+1. In the new window, find and select the owner(s) that you want to assign to the application. Selected owners appear in the right panel. Once done, confirm with **Select**. The app owner(s) will now appear in the list of owners.
+
+>[!NOTE]
+>
+> Ensure that both the API application and the application you want to add permissions to have an owner; otherwise the API won't be listed when requesting API permissions.
+
+## Assign app role
+
+1. In your app registration, under **Manage**, select **App roles**, and **Create app role**.
+1. Next, specify the app role's attributes in the **Create app role** pane. For this walk-through, you can use the example values or specify your own.
+
+ | Field | Description | Example |
+ |---|---|---|
+ | **Display name** | The name of your app role | *Employee Records* |
+ | **Allowed member types** | Specifies whether the app role can be assigned to users/groups and/or applications | *Applications* |
+ | **Value** | The value displayed in the "roles" claim of a token | `Employee.Records` |
+ | **Description** | A more detailed description of the app role | *Applications have access to employee records* |
+
+1. Select the checkbox to enable the app role.
+
+With the web API registered and an owner and app role assigned, you can add scopes to the API's code so it can provide granular permission to consumers.
## Add a scope
The code in a client application requests permission to perform operations defin
First, follow these steps to create an example scope named `Employees.Read.All`:
-1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>.
-1. If you have access to multiple tenants, use the **Directories + subscriptions** filter :::image type="icon" source="./media/quickstart-configure-app-expose-web-apis/portal-01-directory-subscription-filter.png" border="false"::: in the top menu to select the tenant containing your client app's registration.
-1. Select **Azure Active Directory** > **App registrations**, and then select your API's app registration.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator).
+1. If you have access to multiple tenants, use the **Directories + subscriptions** filter :::image type="icon" source="./media/quickstart-configure-app-access-web-apis/portal-01-directory-subscription-filter.png" border="false"::: in the top menu to select the tenant containing your client app's registration.
+1. Browse to **Identity** > **Applications** > **App registrations**, and then select your API's app registration.
1. Select **Expose an API**.
1. Select **Add** next to **Application ID URI** if you haven't yet configured one.
First, follow these steps to create an example scope named `Employees.Read.All`:
   :::image type="content" source="media/quickstart-configure-app-expose-web-apis/portal-02-expose-api.png" alt-text="An app registration's Expose an API pane in the Azure portal":::

1. Next, specify the scope's attributes in the **Add a scope** pane. For this walk-through, you can use the example values or specify your own.

   | Field | Description | Example |
active-directory Quickstart Console App Netcore Acquire Token https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-console-app-netcore-acquire-token.md
Last updated 03/13/2023 -+ #Customer intent: As an application developer, I want to learn how my .NET Core app can get an access token and call an API that's protected by the Microsoft identity platform by using the client credentials flow.
active-directory Quickstart Console App Nodejs Acquire Token https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-console-app-nodejs-acquire-token.md
Last updated 09/09/2022 + #Customer intent: As an application developer, I want to learn how my Node.js app can get an access token and call an API that is protected by a Microsoft identity platform endpoint using client credentials flow.- # Quickstart: Acquire a token and call Microsoft Graph from a Node.js console app
active-directory Quickstart Create New Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-create-new-tenant.md
Many developers already have tenants through services or subscriptions that are
To check the tenant:
-1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>. Use the account you'll use to manage your application.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Tenant Creator](../roles/permissions-reference.md#tenant-creator).
1. Check the upper-right corner. If you have a tenant, you'll automatically be signed in. You see the tenant name directly under your account name.
   * Hover over your account name to see your name, email address, directory or tenant ID (a GUID), and domain.
   * If your account is associated with multiple tenants, you can select your account name to open a menu where you can switch between tenants. Each tenant has its own tenant ID.
To check the tenant:
> [!TIP]
> To find the tenant ID, you can:
> * Hover over your account name to get the directory or tenant ID.
-> * Search and select **Azure Active Directory** > **Overview** > **Tenant ID** in the Azure portal.
+> * Select **Identity** > **Overview** and look for **Tenant ID**.
If you don't have a tenant associated with your account, you'll see a GUID under your account name. You won't be able to do actions like registering apps until you create an Azure AD tenant.

### Create a new Azure AD tenant
-If you don't already have an Azure AD tenant or if you want to create a new one for development, see [Create a new tenant in Azure AD](../fundamentals/create-new-tenant.md) or use the [directory creation experience](https://portal.azure.com/#create/Microsoft.AzureActiveDirectory) in the Azure portal. If you want to create a tenant for app testing, see [build a test environment](test-setup-environment.md).
+If you don't already have an Azure AD tenant or if you want to create a new one for development, see [Create a new tenant in Azure AD](../fundamentals/create-new-tenant.md). If you want to create a tenant for app testing, see [build a test environment](test-setup-environment.md).
You'll provide the following information to create your new tenant:
active-directory Quickstart Daemon App Java Acquire Token https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-daemon-app-java-acquire-token.md
Last updated 01/10/2022 -+ #Customer intent: As an application developer, I want to learn how my Java app can get an access token and call an API that's protected by Microsoft identity platform endpoint using client credentials flow.
To run this sample, you need:
- [Java Development Kit (JDK)](https://openjdk.java.net/) 8 or greater
- [Maven](https://maven.apache.org/)

## Register and download your quickstart app
-You have two options to start your quickstart application: Express (Option 1 below), and Manual (Option 2)
-
-### Option 1: Register and auto configure your app and then download your code sample
-
-1. Go to the [Azure portal - App registrations](https://portal.azure.com/?Microsoft_AAD_RegisteredApps=true#blade/Microsoft_AAD_RegisteredApps/applicationsListBlade/quickStartType/JavaDaemonQuickstartPage/sourceType/docs) quickstart experience.
-1. Enter a name for your application and select **Register**.
-1. Follow the instructions to download and automatically configure your new application with just one click.
-
-### Option 2: Register and manually configure your application and code sample
-
-#### Step 1: Register your application
- [!INCLUDE [portal updates](~/articles/active-directory/includes/portal-update.md)]
+### Step 1: Register the application
+ To register your application and add the app's registration information to your solution manually, follow these steps:
-1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Application Developer](../roles/permissions-reference.md#application-developer).
1. If you have access to multiple tenants, use the **Directories + subscriptions** filter :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to the tenant in which you want to register the application.
-1. Search for and select **Azure Active Directory**.
-1. Under **Manage**, select **App registrations** > **New registration**.
+1. Browse to **Identity** > **Applications** > **App registrations**.
+1. Select **New registration**.
1. Enter a **Name** for your application, for example `Daemon-console`. Users of your app might see this name, and you can change it later.
1. Select **Register**.
1. Under **Manage**, select **Certificates & secrets**.
To register your application and add the app's registration information to your
1. Select **Application permissions**.
1. Under the **User** node, select **User.Read.All**, then select **Add permissions**.
-#### Step 2: Download the Java project
+### Step 2: Download the Java project
[Download the Java daemon project](https://github.com/Azure-Samples/ms-identity-java-daemon/archive/master.zip)
-#### Step 3: Configure the Java project
+### Step 3: Configure the Java project
1. Extract the zip file to a local folder close to the root of the disk, for example, *C:\Azure-Samples*.
1. Navigate to the subfolder **msal-client-credential-secret**.
To register your application and add the app's registration information to your
- `Enter_the_Client_Secret_Here` - replace this value with the client secret created in step 1.

>[!TIP]
->To find the values of **Application (client) ID**, **Directory (tenant) ID**, go to the app's **Overview** page in the Azure portal. To generate a new key, go to **Certificates & secrets** page.
+>To find the values of **Application (client) ID**, **Directory (tenant) ID**, go to the app's **Overview** page. To generate a new key, go to **Certificates & secrets** page.
-#### Step 4: Admin consent
+### Step 4: Admin consent
If you try to run the application at this point, you'll receive an *HTTP 403 - Forbidden* error: `Insufficient privileges to complete the operation`. This error happens because any *app-only permission* requires admin consent: a global administrator of your directory must give consent to your application. Select one of the options below depending on your role:
-##### Global tenant administrator
+#### Global tenant administrator
-If you are a global tenant administrator, go to **API Permissions** page in **App registrations** in the Azure portal and select **Grant admin consent for {Tenant Name}** (Where {Tenant Name} is the name of your directory).
+If you are a global tenant administrator, go to the **API Permissions** page in **App registrations** and select **Grant admin consent for {Tenant Name}** (where {Tenant Name} is the name of your directory).
-##### Standard user
+#### Standard user
If you're a standard user of your tenant, then you need to ask a global administrator to grant admin consent for your application. To do this, give the following URL to your administrator:
https://login.microsoftonline.com/Enter_the_Tenant_Id_Here/adminconsent?client_i
* `Enter_the_Application_Id_Here` - is the **Application (client) ID** for the application you registered.
-#### Step 5: Run the application
+### Step 5: Run the application
You can test the sample directly by running the main method of ClientCredentialGrant.java from your IDE.
ConfidentialClientApplication cca =
| Where: | Description |
|---|---|
-| `CLIENT_SECRET` | Is the client secret created for the application in Azure portal. |
-| `CLIENT_ID` | Is the **Application (client) ID** for the application registered in the Azure portal. You can find this value in the app's **Overview** page in the Azure portal. |
+| `CLIENT_SECRET` | Is the client secret created for the application. |
+| `CLIENT_ID` | Is the **Application (client) ID** for the registered application. You can find this value in the app's **Overview** page. |
| `AUTHORITY` | The STS endpoint for the user to authenticate. Usually `https://login.microsoftonline.com/{tenant}` for the public cloud, where {tenant} is the name of your tenant or your tenant ID. |

### Requesting tokens
IAuthenticationResult result;
| Where: | Description |
|---|---|
-| `SCOPE` | Contains the scopes requested. For confidential clients, this should use the format similar to `{Application ID URI}/.default` to indicate that the scopes being requested are the ones statically defined in the app object set in the Azure portal (for Microsoft Graph, `{Application ID URI}` points to `https://graph.microsoft.com`). For custom web APIs, `{Application ID URI}` is defined under the **Expose an API** section in **App registrations** in the Azure portal.|
+| `SCOPE` | Contains the scopes requested. For confidential clients, this should use the format similar to `{Application ID URI}/.default` to indicate that the scopes being requested are the ones statically defined in the app object (for Microsoft Graph, `{Application ID URI}` points to `https://graph.microsoft.com`). For custom web APIs, `{Application ID URI}` is defined under the **Expose an API** section in **App registrations**.|
[!INCLUDE [Help and support](includes/error-handling-and-tips/help-support-include.md)]
active-directory Quickstart Desktop App Nodejs Electron Sign In https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-desktop-app-nodejs-electron-sign-in.md
Last updated 01/14/2022 -+ #Customer intent: As an application developer, I want to learn how my Node.js Electron desktop application can get an access token and call an API that's protected by a Microsoft identity platform endpoint.
active-directory Quickstart Mobile App Ios Sign In https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-mobile-app-ios-sign-in.md
The quickstart applies to both iOS and macOS apps. Some steps are needed only fo
![Diagram showing how the sample app generated by this quickstart works.](media/quickstart-v2-ios/ios-intro.svg)
-## Register and download your quickstart app
-You have two options to start your quickstart application:
-* [Express] [Option 1: Register and auto configure your app and then download your code sample](#option-1-register-and-auto-configure-your-app-and-then-download-the-code-sample)
-* [Manual] [Option 2: Register and manually configure your application and code sample](#option-2-register-and-manually-configure-your-application-and-code-sample)
-
-### Option 1: Register and auto configure your app and then download the code sample
-#### Step 1: Register your application
-To register your app,
-1. Go to the [Azure portal - App registrations](https://portal.azure.com/#blade/Microsoft_AAD_RegisteredApps/applicationsListBlade/quickStartType/IosQuickstartPage/sourceType/docs) quickstart experience.
-1. Enter a name for your application and select **Register**.
-1. Follow the instructions to download and automatically configure your new application with just one click.
-
-### Option 2: Register and manually configure your application and code sample
-
-#### Step 1: Register your application
+## Register your quickstart app
[!INCLUDE [portal updates](~/articles/active-directory/includes/portal-update.md)]

To register your application and add the app's registration information to your solution manually, follow these steps:
-1. Sign in to the [Azure portal](https://portal.azure.com).
-1. If you have access to multiple tenants, use the **Directories + subscriptions** filter :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to the tenant in which you want to register the application.
-1. Search for and select **Azure Active Directory**.
-1. Under **Manage**, select **App registrations** > **New registration**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Application Developer](../roles/permissions-reference.md#application-developer).
+1. If access to multiple tenants is available, use the **Directories + subscriptions** filter :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to the tenant in which you want to register the application.
+1. Browse to **Identity** > **Applications** > **App registrations**.
+1. Select **New registration**.
1. Enter a **Name** for your application. Users of your app might see this name, and you can change it later.
1. Select **Register**.
1. Under **Manage**, select **Authentication** > **Add Platform** > **iOS**.
To register your application and add the app's registration information to your
#### Step 4: Configure your project

If you selected Option 1 above, you can skip these steps.

1. Open the project in Xcode.
-1. Edit **ViewController.swift** and replace the line starting with 'let kClientID' with the following code snippet. Remember to update the value for `kClientID` with the clientID that you saved when you registered your app in the portal earlier in this quickstart:
+1. Edit **ViewController.swift** and replace the line starting with 'let kClientID' with the following code snippet. Remember to update the value for `kClientID` with the clientID that you saved when you registered your app earlier in this quickstart:
```swift
let kClientID = "Enter_the_Application_Id_Here"
let kAuthority = "https://login.microsoftonline.de/common"
```
-3. Open the project settings. In the **Identity** section, enter the **Bundle Identifier** that you entered into the portal.
+3. Open the project settings. In the **Identity** section, enter the **Bundle Identifier**.
4. Right-click **Info.plist** and select **Open As** > **Source Code**.
5. Under the dict root node, replace `Enter_the_bundle_Id_Here` with the ***Bundle Id*** that you used in the portal. Notice the `msauth.` prefix in the string.
active-directory Quickstart Single Page App Angular Sign In https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-single-page-app-angular-sign-in.md
This quickstart uses MSAL Angular v2 with the authorization code flow.
* [Node.js](https://nodejs.org/en/download/)
* [Visual Studio Code](https://code.visualstudio.com/download) or another code editor
-## Register and download your quickstart application
+## Register your quickstart application
[!INCLUDE [portal updates](~/articles/active-directory/includes/portal-update.md)]
-To start your quickstart application, use either of the following options.
-
-### Option 1 (Express): Register and auto configure your app and then download your code sample
-
-1. Go to the [Azure portal - App registrations](https://portal.azure.com/#blade/Microsoft_AAD_RegisteredApps/ApplicationsListBlade/quickStartType/AngularSpaQuickstartPage/sourceType/docs) quickstart experience.
-1. Enter a name for your application.
-1. Under **Supported account types**, select **Accounts in any organizational directory and personal Microsoft accounts**.
-1. Select **Register**.
-1. Go to the quickstart pane and follow the instructions to download and automatically configure your new application.
-
-### Option 2 (Manual): Register and manually configure your application and code sample
-
-#### Step 1: Register your application
-
-1. Sign in to the [Azure portal](https://portal.azure.com/).
-1. If you have access to multiple tenants, use the **Directories + subscriptions** filter :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to the tenant in which you want to register the application.
-1. Search for and select **Azure Active Directory**.
-1. Under **Manage**, select **App registrations** > **New registration**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Application Developer](../roles/permissions-reference.md#application-developer).
+1. If access to multiple tenants is available, use the **Directories + subscriptions** filter :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to the tenant in which you want to register the application.
+1. Browse to **Identity** > **Applications** > **App registrations**.
+1. Select **New registration**.
1. Enter a **Name** for your application. Users of your app might see this name, and you can change it later.
1. Under **Supported account types**, select **Accounts in any organizational directory and personal Microsoft accounts**.
1. Select **Register**. On the app **Overview** page, note the **Application (client) ID** value for later use.
Modify the values in the `auth` section as described here:
- `Enter_the_Application_Id_Here` is the **Application (client) ID** for the application you registered.
- To find the value of **Application (client) ID**, go to the app registration's **Overview** page in the Azure portal.
+ To find the value of **Application (client) ID**, go to the app registration's **Overview** page.
- `Enter_the_Cloud_Instance_Id_Here` is the instance of the Azure cloud. For the main or global Azure cloud, enter `https://login.microsoftonline.com`. For **national** clouds (for example, China), see [National clouds](authentication-national-cloud.md).
- `Enter_the_Tenant_info_here` is set to one of the following:
  - If your application supports *accounts in this organizational directory*, replace this value with the **Tenant ID** or **Tenant name**. For example, `contoso.microsoft.com`.
- To find the value of the **Directory (tenant) ID**, go to the app registration's **Overview** page in the Azure portal.
+ To find the value of the **Directory (tenant) ID**, go to the app registration's **Overview** page.
  - If your application supports *accounts in any organizational directory*, replace this value with `organizations`.
  - If your application supports *accounts in any organizational directory and personal Microsoft accounts*, replace this value with `common`. **For this quickstart**, use `common`.
  - To restrict support to *personal Microsoft accounts only*, replace this value with `consumers`.
- To find the value of **Supported account types**, go to the app registration's **Overview** page in the Azure portal.
+ To find the value of **Supported account types**, go to the app registration's **Overview** page.
- `Enter_the_Redirect_Uri_Here` is `http://localhost:4200/`. The `authority` value in your *app.module.ts* should be similar to the following if you're using the main (global) Azure cloud:
active-directory Quickstart Single Page App Javascript Sign In https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-single-page-app-javascript-sign-in.md
See [How the sample works](#how-the-sample-works) for an illustration.
[!INCLUDE [portal updates](~/articles/active-directory/includes/portal-update.md)]
-To start your quickstart application, use either of the following options.
+### Step 1: Register your application
-### Option 1 (Express): Register and auto configure your app and then download your code sample
-
-1. Go to the [Azure portal - App registrations](https://portal.azure.com/#blade/Microsoft_AAD_RegisteredApps/ApplicationsListBlade/quickStartType/AngularSpaQuickstartPage/sourceType/docs) quickstart experience.
-1. Enter a name for your application.
-1. Under **Supported account types**, select **Accounts in any organizational directory and personal Microsoft accounts**.
-1. Select **Register**.
-1. Go to the quickstart pane and follow the instructions to download and automatically configure your new application.
-
-### Option 2 (Manual): Register and manually configure your application and code sample
-
-#### Step 1: Register your application
-
-1. Sign in to the [Azure portal](https://portal.azure.com/).
-1. If you have access to multiple tenants, use the **Directories + subscriptions** filter :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to the tenant in which you want to register the application.
-1. Search for and select **Azure Active Directory**.
-1. Under **Manage**, select **App registrations** > **New registration**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Application Developer](../roles/permissions-reference.md#application-developer).
+1. If access to multiple tenants is available, use the **Directories + subscriptions** filter :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to the tenant in which you want to register the application.
+1. Browse to **Identity** > **Applications** > **App registrations**.
+1. Select **New registration**.
1. Enter a **Name** for your application. Users of your app might see this name, and you can change it later.
1. Under **Supported account types**, select **Accounts in any organizational directory and personal Microsoft accounts**.
1. Select **Register**. On the app **Overview** page, note the **Application (client) ID** value for later use.
To start your quickstart application, use either of the following options.
1. Set the **Redirect URI** value to `http://localhost:3000/`.
1. Select **Configure**.
-#### Step 2: Download the project
+### Step 2: Download the project
To run the project with a web server by using Node.js, [download the core project files](https://github.com/Azure-Samples/ms-identity-javascript-v2/archive/master.zip). -
-#### Step 3: Configure your JavaScript app
+### Step 3: Configure your JavaScript app
In the *app* folder, open the *authConfig.js* file, and then update the `clientID`, `authority`, and `redirectUri` values in the `msalConfig` object.
Modify the values in the `msalConfig` section:
- `Enter_the_Application_Id_Here` is the **Application (client) ID** for the application you registered.
- To find the value of **Application (client) ID**, go to the app registration's **Overview** page in the Azure portal.
+ To find the value of **Application (client) ID**, go to the app registration's **Overview** page.
- `Enter_the_Cloud_Instance_Id_Here` is the Azure cloud instance. For the main or global Azure cloud, enter `https://login.microsoftonline.com`. For **national** clouds (for example, China), see [National clouds](authentication-national-cloud.md).
- `Enter_the_Tenant_info_here` is one of the following:
  - If your application supports *accounts in this organizational directory*, replace this value with the **Tenant ID** or **Tenant name**. For example, `contoso.microsoft.com`.
- To find the value of the **Directory (tenant) ID**, go to the app registration's **Overview** page in the Azure portal.
+ To find the value of the **Directory (tenant) ID**, go to the app registration's **Overview** page.
  - If your application supports *accounts in any organizational directory*, replace this value with `organizations`.
  - If your application supports *accounts in any organizational directory and personal Microsoft accounts*, replace this value with `common`. **For this quickstart**, use `common`.
  - To restrict support to *personal Microsoft accounts only*, replace this value with `consumers`.
- To find the value of **Supported account types**, go to the app registration's **Overview** page in the Azure portal.
+ To find the value of **Supported account types**, go to the app registration's **Overview** page.
- `Enter_the_Redirect_Uri_Here` is `http://localhost:3000/`.

The `authority` value in your *authConfig.js* should be similar to the following if you're using the main (global) Azure cloud:
graphMeEndpoint: "https://graph.microsoft.com/v1.0/me",
graphMailEndpoint: "https://graph.microsoft.com/v1.0/me/messages" ```
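Taken together, a sketch of the finished *authConfig.js* values might look like the following; the client ID is a placeholder, and `common` assumes the multi-tenant case this quickstart uses:

```javascript
// Sketch of the msalConfig object in authConfig.js (placeholder values).
const msalConfig = {
  auth: {
    clientId: '11111111-2222-3333-4444-555555555555', // Application (client) ID
    authority: 'https://login.microsoftonline.com/common', // cloud instance + tenant info
    redirectUri: 'http://localhost:3000/',
  },
};
```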
-#### Step 4: Run the project
+### Step 4: Run the project
Run the project with a web server by using Node.js.
active-directory Quickstart Single Page App React Sign In https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-single-page-app-react-sign-in.md
See [How the sample works](#how-the-sample-works) for an illustration.
[!INCLUDE [portal updates](~/articles/active-directory/includes/portal-update.md)]
-To start your quickstart application, use either of the following options.
+### Step 1: Register your application
-### Option 1 (Express): Register and auto configure your app and then download your code sample
-
-1. Go to the [Azure portal - App registrations](https://portal.azure.com/#blade/Microsoft_AAD_RegisteredApps/ApplicationsListBlade/quickStartType/AngularSpaQuickstartPage/sourceType/docs) quickstart experience.
-1. Enter a name for your application.
-1. Under **Supported account types**, select **Accounts in any organizational directory and personal Microsoft accounts**.
-1. Select **Register**.
-1. Go to the quickstart pane and follow the instructions to download and automatically configure your new application.
-
-### Option 2 (Manual): Register and manually configure your application and code sample
-
-#### Step 1: Register your application
--
-1. Sign in to the [Azure portal](https://portal.azure.com/).
-1. If you have access to multiple tenants, use the **Directories + subscriptions** filter :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to the tenant in which you want to register the application.
-1. Search for and select **Azure Active Directory**.
-Under **Manage**, select **App registrations** > **New registration**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Application Developer](../roles/permissions-reference.md#application-developer).
+1. If access to multiple tenants is available, use the **Directories + subscriptions** filter :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to the tenant in which you want to register the application.
+1. Browse to **Identity** > **Applications** > **App registrations**.
+1. Select **New registration**.
1. When the **Register an application** page appears, enter a name for your application.
1. Under **Supported account types**, select **Accounts in any organizational directory and personal Microsoft accounts**.
1. Select **Register**. On the app **Overview** page, note the **Application (client) ID** value for later use.
Under **Manage**, select **App registrations** > **New registration**.
1. Under **Platform Configurations** expand **Single-page application**.
1. Confirm that under **Grant types** ![Already configured](media/quickstart-v2-javascript/green-check.png) your Redirect URI is eligible for the Authorization Code Flow with PKCE.
-#### Step 2: Download the project
-
+### Step 2: Download the project
To run the project with a web server by using Node.js, [download the core project files](https://github.com/Azure-Samples/ms-identity-javascript-react-spa/archive/main.zip).
-#### Step 3: Configure your JavaScript app
+### Step 3: Configure your JavaScript app
In the *src* folder, open the *authConfig.js* file and update the `clientID`, `authority`, and `redirectUri` values in the `msalConfig` object.
Modify the values in the `msalConfig` section as described here:
- `Enter_the_Application_Id_Here` is the **Application (client) ID** for the application you registered.
- To find the value of **Application (client) ID**, go to the app registration's **Overview** page in the Azure portal.
+ To find the value of **Application (client) ID**, go to the app registration's **Overview** page.
- `Enter_the_Cloud_Instance_Id_Here` is the instance of the Azure cloud. For the main or global Azure cloud, enter `https://login.microsoftonline.com`. For **national** clouds (for example, China), see [National clouds](authentication-national-cloud.md).
- `Enter_the_Tenant_info_here` is set to one of the following:
  - If your application supports *accounts in this organizational directory*, replace this value with the **Tenant ID** or **Tenant name**. For example, `contoso.microsoft.com`.
- To find the value of the **Directory (tenant) ID**, go to the app registration's **Overview** page in the Azure portal.
+ To find the value of the **Directory (tenant) ID**, go to the app registration's **Overview** page.
  - If your application supports *accounts in any organizational directory*, replace this value with `organizations`.
  - If your application supports *accounts in any organizational directory and personal Microsoft accounts*, replace this value with `common`. **For this quickstart**, use `common`.
  - To restrict support to *personal Microsoft accounts only*, replace this value with `consumers`.
- To find the value of **Supported account types**, go to the app registration's **Overview** page in the Azure portal.
+ To find the value of **Supported account types**, go to the app registration's **Overview** page.
- `Enter_the_Redirect_Uri_Here` is `http://localhost:3000/`.

The `authority` value in your *authConfig.js* should be similar to the following if you're using the main (global) Azure cloud:
Scroll down in the same file and update the `graphMeEndpoint`.
}; ```
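For reference, a sketch of what the finished values in *authConfig.js* can look like is shown below (the client ID is a placeholder):

```javascript
// Sketch of authConfig.js for the React quickstart (illustrative values).
export const msalConfig = {
  auth: {
    clientId: '11111111-2222-3333-4444-555555555555',
    authority: 'https://login.microsoftonline.com/common',
    redirectUri: 'http://localhost:3000/',
  },
};

// The Graph endpoint updated further down in the same file.
export const graphConfig = {
  graphMeEndpoint: 'https://graph.microsoft.com/v1.0/me',
};
```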
-#### Step 4: Run the project
+### Step 4: Run the project
Run the project with a web server by using Node.js:
active-directory Quickstart Web App Java Sign In https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-web-app-java-sign-in.md
Last updated 01/18/2023 -+ # Quickstart: Sign in users and call the Microsoft Graph API from a Java web app
active-directory Reference Claims Mapping Policy Type https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/reference-claims-mapping-policy-type.md
Restricted Claim type (URI):
- `http://schemas.microsoft.com/ws/2008/06/identity/claims/groups`
- `http://schemas.microsoft.com/ws/2008/06/identity/claims/groupsid`
- `http://schemas.microsoft.com/ws/2008/06/identity/claims/ispersistent`
-- `http://schemas.microsoft.com/ws/2008/06/identity/claims/primarygroupsid`
-- `http://schemas.microsoft.com/ws/2008/06/identity/claims/primarysid`
- `http://schemas.microsoft.com/ws/2008/06/identity/claims/role`
- `http://schemas.microsoft.com/ws/2008/06/identity/claims/role`
- `http://schemas.microsoft.com/ws/2008/06/identity/claims/samlissuername`
- `http://schemas.microsoft.com/ws/2008/06/identity/claims/wids`
-- `http://schemas.microsoft.com/ws/2008/06/identity/claims/windowsaccountname`
- `http://schemas.microsoft.com/ws/2008/06/identity/claims/windowsdeviceclaim`
- `http://schemas.microsoft.com/ws/2008/06/identity/claims/windowsdevicegroup`
- `http://schemas.microsoft.com/ws/2008/06/identity/claims/windowsfqbnversion`
Restricted Claim type (URI):
- `http://schemas.xmlsoap.org/ws/2005/05/identity/claims/authentication`
- `http://schemas.xmlsoap.org/ws/2005/05/identity/claims/authorizationdecision`
- `http://schemas.xmlsoap.org/ws/2005/05/identity/claims/denyonlysid`
-- `http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress`
-- `http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name`
-- `http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier`
- `http://schemas.xmlsoap.org/ws/2005/05/identity/claims/privatepersonalidentifier`
-- `http://schemas.xmlsoap.org/ws/2005/05/identity/claims/sid`
- `http://schemas.xmlsoap.org/ws/2005/05/identity/claims/spn`
- `http://schemas.xmlsoap.org/ws/2005/05/identity/claims/upn`
-- `http://schemas.xmlsoap.org/ws/2005/05/identity/claims/x500distinguishedname`
- `http://schemas.xmlsoap.org/ws/2009/09/identity/claims/actor`
active-directory Reference Error Codes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/reference-error-codes.md
The `error` field has several possible values - review the protocol documentatio
| AADSTS50015 | ViralUserLegalAgeConsentRequiredState - The user requires legal age group consent. | | AADSTS50017 | CertificateValidationFailed - Certification validation failed, reasons for the following reasons:<ul><li>Cannot find issuing certificate in trusted certificates list</li><li>Unable to find expected CrlSegment</li><li>Cannot find issuing certificate in trusted certificates list</li><li>Delta CRL distribution point is configured without a corresponding CRL distribution point</li><li>Unable to retrieve valid CRL segments because of a timeout issue</li><li>Unable to download CRL</li></ul>Contact the tenant admin. | | AADSTS50020 | UserUnauthorized - Users are unauthorized to call this endpoint. User account '{email}' from identity provider '{idp}' does not exist in tenant '{tenant}' and cannot access the application '{appid}'({appName}) in that tenant. This account needs to be added as an external user in the tenant first. Sign out and sign in again with a different Azure Active Directory user account. If this user should be a member of the tenant, they should be invited via the [B2B system](/azure/active-directory/b2b/add-users-administrator). For additional information, visit [AADSTS50020](/troubleshoot/azure/active-directory/error-code-aadsts50020-user-account-identity-provider-does-not-exist). |
+| AADSTS500208 | The domain is not a valid login domain for the account type - This error occurs when the user's account doesn't match the expected account type for the given tenant. For instance, if the tenant is configured to allow only work or school accounts and the user tries to sign in with a personal Microsoft account, they receive this error. |
| AADSTS500212 | NotAllowedByOutboundPolicyTenant - The user's administrator has set an outbound access policy that doesn't allow access to the resource tenant. | | AADSTS500213 | NotAllowedByInboundPolicyTenant - The resource tenant's cross-tenant access policy doesn't allow this user to access this tenant. | | AADSTS50027 | InvalidJwtToken - Invalid JWT token because of the following reasons:<ul><li>doesn't contain nonce claim, sub claim</li><li>subject identifier mismatch</li><li>duplicate claim in idToken claims</li><li>unexpected issuer</li><li>unexpected audience</li><li>not within its valid time range </li><li>token format isn't proper</li><li>External ID token from issuer failed signature verification.</li></ul> |
active-directory Reference V2 Libraries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/reference-v2-libraries.md
For more information about the Microsoft Authentication Library, see the [Overvi
<!--Reference-style links -->
[AAD-App-Model-V2-Overview]: v2-overview.md
[Microsoft-SDL]: https://www.microsoft.com/securityengineering/sdl/
-[preview-tos]: https://azure.microsoft.com/support/legal/preview-supplemental-terms/
+[preview-tos]: https://www.microsoft.com/licensing/terms/product/ForOnlineServices/all
active-directory Registration Config How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/registration-config-how-to.md
- Title: Get the endpoints for an Azure AD app registration
-description: How to find the authentication endpoints for a custom application you're developing or registering with Azure AD.
--------- Previously updated : 11/09/2022----
-# How to discover endpoints
-
-You can find the authentication endpoints for your application in the [Azure portal](https://portal.azure.com).
-
-1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>.
-1. Select **Azure Active Directory**.
-1. Under **Manage**, select **App registrations**, and then select **Endpoints** in the top menu.
-
- The **Endpoints** page is displayed, showing the authentication endpoints for your tenant.
-
- Use the endpoint that matches the authentication protocol you're using in conjunction with the **Application (client) ID** to craft the authentication request specific to your application.
-
-**National clouds** (for example Azure AD China, Germany, and US Government) have their own app registration portal and Azure AD authentication endpoints. Learn more in the [National clouds overview](authentication-national-cloud.md).
-
-## Next steps
-
-For more information about endpoints in the different Azure environments, see the [National clouds overview](authentication-national-cloud.md).
active-directory Registration Config Specific Application Property How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/registration-config-specific-application-property-how-to.md
- Title: Azure portal registration fields for custom-developed apps
-description: Guidance for registering a custom developed application with Azure AD
--------- Previously updated : 09/27/2021----
-# Azure portal registration fields for custom-developed apps
-
-This article gives you a brief description of all the available fields in the application registration form in the [Azure portal](https://portal.azure.com).
-
-## Register a new application
-- To register a new application, sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>.
-- From the left navigation pane, click **Azure Active Directory**.
-- Choose **App registrations** and click **Add**.
-- This opens up the application registration form.
-
-## Fields in the application registration form
-
-| Field | Description |
-|||
-| Name | The name of the application. It should have a minimum of four characters. |
-| Supported account types| Select which accounts you would like your application to support: accounts in this organizational directory only, accounts in any organizational directory, or accounts in any organizational directory and personal Microsoft accounts. |
-| Redirect URI (optional) | Select the type of app you're building, **Web** or **Public client (mobile & desktop)**, and then enter the redirect URI (or reply URL) for your application. For web applications, provide the base URL of your app. For example, http://localhost:31544 might be the URL for a web app running on your local machine. Users would use this URL to sign in to a web client application. For public client applications, provide the URI used by Azure AD to return token responses. Enter a value specific to your application, such as myapp://auth. To see specific examples for web applications or native applications, check out our [quickstarts](./index.yml).|
-
-Once you have filled the above fields, the application is registered in the Azure portal, and you are redirected to the application overview page. The settings pages in the left pane under **Manage** have more fields for you to customize your application. The tables below describe all the fields. You would only see a subset of these fields, depending on whether you created a web application or a public client application.
-
-### Overview
-
-| Field | Description |
-|--|--|
-| Application ID | When you register an application, Azure AD assigns your application an Application ID. The application ID can be used to uniquely identify your application in authentication requests to Azure AD, as well as to access resources like the Graph API. |
-| App ID URI | This should be a unique URI, usually of the form **https://&lt;tenant\_name&gt;/&lt;application\_name&gt;.** This is used during the authorization grant flow, as a unique identifier to specify the resource that the token should be issued for. It also becomes the 'aud' claim in the issued access token. |
-
-### Branding
-
-| Field | Description |
-|--|--|
-| Upload new logo | You can use this to upload a logo for your application. The logo must be in .bmp, .jpg or .png format, and the file size should be less than 100 KB. The dimensions for the image should be 215x215 pixels, with central image dimensions of 94x94 pixels.|
-| Home page URL | This is the sign-on URL specified during application registration.|
-
-### Authentication
-
-| Field | Description |
-|--|--|
-| Front-channel logout URL | This is the single sign-out logout URL. Azure AD sends a logout request to this URL when the user clears their session with Azure AD using any other registered application.|
-| Supported account types | This switch specifies whether the application can be used by multiple tenants. Typically, this means that external organizations can use your application by registering it in their tenant and granting access to their organization's data.|
-| Redirect URLs | The redirect, or reply, URLs are the endpoints where Azure AD returns any tokens that your application requests. For native applications, this is where the user is sent after successful authorization. Azure AD checks that the redirect URI your application supplies in the OAuth 2.0 request matches one of the registered values in the portal.|
-
-### Certificates and secrets
-
-| Field | Description |
-|--|--|
-| Client secrets | You can create client secrets, or keys, to programmatically access web APIs secured by Azure AD without any user interaction. From the **New client secret** page, enter a key description and the expiration date and save to generate the key. Make sure to save it somewhere secure, as you won't be able to access it later. |
-
-## Next steps
-
-[Managing Applications with Azure Active Directory](../manage-apps/what-is-application-management.md)
active-directory Registration Config Sso How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/registration-config-sso-how-to.md
- Title: Configure application single sign-on
-description: How to configure single sign-on for a custom application you are developing and registering with Azure AD.
--------- Previously updated : 07/15/2019----
-# How to configure single sign-on for an application
-
-Enabling federated single sign-on (SSO) in your app is automatically enabled when federating through Azure AD for OpenID Connect, SAML 2.0, or WS-Fed. If your end users are having to sign in despite already having an existing session with Azure AD, it's likely your app may be misconfigured.
-
-* If you're using Microsoft Authentication Library (MSAL), make sure you have **PromptBehavior** set to **Auto** rather than **Always**.
-
-* If you're building a mobile app, you may need additional configurations to enable brokered or non-brokered SSO.
-
-For Android, see [Enabling Cross App SSO in Android](msal-android-single-sign-on.md).
-
-For iOS, see [Enabling Cross App SSO in iOS](single-sign-on-macos-ios.md).
-
-## Next steps
-
-[Azure AD SSO](../manage-apps/what-is-single-sign-on.md)<br>
-
-[Enabling Cross App SSO in Android](msal-android-single-sign-on.md)<br>
-
-[Enabling Cross App SSO in iOS](single-sign-on-macos-ios.md)<br>
-
-[Integrating Apps to AzureAD](./quickstart-register-app.md)<br>
-
-[Permissions and consent in the Microsoft identity platform](./permissions-consent-overview.md)<br>
-
-[AzureAD Microsoft Q&A](/answers/topics/azure-active-directory.html)
active-directory Reply Url https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/reply-url.md
This table shows the maximum number of redirect URIs you can add to an app regis
| Microsoft work or school accounts in any organization's Azure Active Directory (Azure AD) tenant | 256 | `signInAudience` field in the application manifest is set to either *AzureADMyOrg* or *AzureADMultipleOrgs* | | Personal Microsoft accounts and work and school accounts | 100 | `signInAudience` field in the application manifest is set to *AzureADandPersonalMicrosoftAccount* |
-The maximum number of redirect URIS can't be raised for [security reasons](#restrictions-on-wildcards-in-redirect-uris). If your scenario requires more redirect URIs than the maximum limit allowed, consider the following [state parameter approach](#use-a-state-parameter) as the solution.
+The maximum number of redirect URIs can't be raised for [security reasons](#restrictions-on-wildcards-in-redirect-uris). If your scenario requires more redirect URIs than the maximum limit allowed, consider the following [state parameter approach](#use-a-state-parameter) as the solution.
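As an illustration of that state parameter approach (the paths and client ID below are illustrative, not from this article), register a single redirect URI and round-trip the desired destination in the OAuth `state` value:

```javascript
// Sketch: carry the post-sign-in destination in `state` instead of
// registering one redirect URI per destination page.
const authorizeUrl = new URL('https://login.microsoftonline.com/common/oauth2/v2.0/authorize');
authorizeUrl.searchParams.set('client_id', '11111111-2222-3333-4444-555555555555'); // placeholder
authorizeUrl.searchParams.set('response_type', 'code');
authorizeUrl.searchParams.set('redirect_uri', 'https://contoso.com/auth/callback'); // the one registered URI
authorizeUrl.searchParams.set('scope', 'openid profile');
// Encode the page the user should land on after sign-in.
authorizeUrl.searchParams.set('state', encodeURIComponent('/orders/recent'));

// On the callback page, read `state` back, validate it against an allowlist of
// known paths (to avoid open redirects), and then navigate the user there.
```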
## Maximum URI length
active-directory Saml Claims Customization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/saml-claims-customization.md
By default, the Microsoft identity platform issues a SAML token to an applicatio
## View or edit claims
-To view or edit the claims issued in the SAML token to the application, open the application in Azure portal. Then open the **Attributes & Claims** section.
-
+To view or edit the claims issued in the SAML token to the application:
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator).
+1. Browse to **Identity** > **Applications** > **Enterprise applications** > **All applications**.
+1. Select the application, select **Single sign-on** in the left-hand menu, and then select **Edit** in the **Attributes & Claims** section.
You might need to edit the claims issued in the SAML token for the following reasons:
To edit the name identifier value claim:
1. Open the **Name identifier value** page.
1. Select the attribute or transformation that you want to apply to the attribute. Optionally, you can specify the format that you want the `nameID` claim to have.
- :::image type="content" source="./media/saml-claims-customization/saml-sso-manage-user-claims.png" alt-text="Screenshot of editing the nameID (name identifier) value in the Azure portal.":::
-
### NameID format

If the SAML request contains the element `NameIDPolicy` with a specific format, then the Microsoft identity platform honors the format in the request.
For more information about identifier values, see the table that lists the valid
Any constant (static) value can be assigned to any claim. Use the following steps to assign a constant value:
-1. Sign in to the [Azure portal](https://portal.azure.com).
-1. In the **User Attributes & Claims** section, select **Edit** to edit the claims.
-1. Select the required claim that you want to modify.
-1. Enter the constant value without quotes in the **Source attribute** as per your organization and select **Save**.
-
- :::image type="content" source="./media/saml-claims-customization/organization-attribute.png" alt-text="Screenshot of the organization Attributes & Claims section in the Azure portal.":::
-
-1. The constant value is displayed as shown in the following image.
-
- :::image type="content" source="./media/saml-claims-customization/edit-attributes-claims.png" alt-text="Screenshot of editing in the Attributes & Claims section in the Azure portal.":::
+1. On the **Attributes & Claims** blade, select the required claim that you want to modify.
+1. Enter the constant value without quotes in the **Source attribute** as per your organization and select **Save**. The constant value is displayed.
### Directory Schema extensions (Preview)

You can also configure directory schema extension attributes as non-conditional/conditional attributes. Use the following steps to configure the single or multi-valued directory schema extension attribute as a claim:
-1. Sign in to the [Azure portal](https://portal.azure.com).
-
-1. In the **User Attributes & Claims** section, select **Edit** to edit the claims.
-1. Select **Add new claim** or edit an existing claim.
-
- :::image type="content" source="./media/saml-claims-customization/mv-extension-1.jpg" alt-text="Screenshot of the MultiValue extension configuration section in the Azure portal.":::
-
+1. On the **Attributes & Claims** blade, select **Add new claim** or edit an existing claim.
1. Select the source application from the application picker where the extension property is defined.
- :::image type="content" source="./media/saml-claims-customization/mv-extension-2.jpg" alt-text="Screenshot of the source application selection in MultiValue extension configuration section in the Azure portal.":::
1. Select **Add** to add the selection to the claims.
1. Click **Save** to commit the changes.
You can use the following special claims transformations functions.
To add application-specific claims:
-1. In **User Attributes & Claims**, select **Add new claim** to open the **Manage user claims** page.
+1. On the **Attributes & Claims** blade, select **Add new claim** to open the **Manage user claims** page.
1. Enter the **name** of the claim. The value doesn't strictly need to follow a URI pattern, per the SAML spec. If you need a URI pattern, you can put that in the **Namespace** field.
1. Select the **Source** where the claim is going to retrieve its value. You can select a user attribute from the source attribute dropdown or apply a transformation to the user attribute before emitting it as a claim.
To apply a transformation to a user attribute:
1. In **Manage claim**, select *Transformation* as the claim source to open the **Manage transformation** page.
1. Select the function from the transformation dropdown. Depending on the function selected, provide parameters and a constant value to evaluate in the transformation.
1. Select the source of the attribute by clicking on the appropriate radio button. Directory schema extension source is in preview currently.
-
- :::image type="content" source="./media/saml-claims-customization/mv-extension-4.png" alt-text="Screenshot of claims transformation.":::
1. Select the attribute name from the dropdown.
1. **Treat source as multivalued** is a checkbox indicating whether the transform should be applied to all values or just the first. By default, transformations are only applied to the first element in a multi-value claim; checking this box ensures they're applied to all. This checkbox is only enabled for multi-valued attributes, for example `user.proxyaddresses`.
1. To apply multiple transformations, select **Add transformation**. You can apply a maximum of two transformations to a claim. For example, you could first extract the email prefix of the `user.mail`. Then, make the string upper case.
When the following conditions occur after **Add** or **Run test** is selected, a
## Add the UPN claim to SAML tokens
-The `http://schemas.xmlsoap.org/ws/2005/05/identity/claims/upn` claim is part of the [SAML restricted claim set](reference-claims-mapping-policy-type.md), so you can't add it in the **Attributes & Claims** section. As a workaround, you can add it as an [optional claim](./optional-claims.md) through **App registrations** in the Azure portal.
+The `http://schemas.xmlsoap.org/ws/2005/05/identity/claims/upn` claim is part of the [SAML restricted claim set](reference-claims-mapping-policy-type.md#saml-restricted-claim-set). If you have a custom signing key configured, you can add it in the **Attributes & Claims** section.
+If no custom signing key is configured, see the [SAML restricted claim set](reference-claims-mapping-policy-type.md#saml-restricted-claim-set). In that case, you can add it as an [optional claim](./optional-claims.md) through **App registrations** in the Azure portal.
+
Open the application in **App registrations**, select **Token configuration**, and then select **Add optional claim**. Select the **SAML** token type, choose **upn** from the list, and then click **Add** to add the claim to the token.
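Selecting **upn** this way updates the application manifest. The resulting manifest entry looks similar to the following sketch (surrounding manifest properties omitted):

```json
"optionalClaims": {
    "saml2Token": [
        {
            "name": "upn",
            "essential": false
        }
    ]
}
```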
+Customization done in the **Attributes & Claims** section can overwrite the optional claims in the **App Registration**.
+
## Emit claims based on conditions

You can specify the source of a claim based on user type and the group to which the user belongs.
For example, Britta Simon is a guest user in the Contoso tenant. Britta belongs
First, the Microsoft identity platform verifies whether Britta's user type is **All guests**. Because the type is **All guests**, the Microsoft identity platform assigns the source for the claim to `user.extensionattribute1`. Second, the Microsoft identity platform verifies whether Britta's user type is **AAD guests**. Because the type is **All guests**, the Microsoft identity platform assigns the source for the claim to `user.mail`. Finally, the claim is emitted with a value of `user.mail` for Britta.

As another example, consider when Britta Simon tries to sign in and the following configuration is used. All conditions are first evaluated with the source of `Attribute`. Because Britta's user type is **AAD guests**, `user.mail` is assigned as the source for the claim. Next, the transformations are evaluated. Because Britta is a guest, `user.extensionattribute1` is now the new source for the claim. Because Britta is in **AAD guests**, `user.othermail` is now the source for this claim. Finally, the claim is emitted with a value of `user.othermail` for Britta.

As a final example, consider what happens if Britta has no `user.othermail` configured or it's empty. In both cases the condition entry is ignored, and the claim falls back to `user.extensionattribute1` instead.

## Advanced SAML claims options
active-directory Scenario Mobile App Registration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-mobile-app-registration.md
For more information, see [Scenarios and supported authentication flows](authent
### Interactive authentication
-When you build a mobile app that uses interactive authentication, the most critical registration step is the redirect URI. You can set interactive authentication through the [platform configuration on the **Authentication** blade](https://aka.ms/MobileAppReg).
+When you build a mobile app that uses interactive authentication, the most critical registration step is the redirect URI. This experience enables your app to get single sign-on (SSO) through Microsoft Authenticator (and Intune Company Portal on Android). It also supports device management policies.
-This experience will enable your app to get single sign-on (SSO) through Microsoft Authenticator (and Intune Company Portal on Android). It will also support device management policies.
-
-The app registration portal provides a preview experience to help you compute the brokered reply URI for iOS and Android applications:
-
-1. In the app registration portal, select **Authentication** > **Try out the new experience**.
-
- ![The Authentication blade, where you choose a new experience](https://user-images.githubusercontent.com/13203188/60799285-2d031b00-a173-11e9-9d28-ac07a7ae894a.png)
-
-2. Select **Add a platform**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Application Developer](../roles/permissions-reference.md#application-developer).
+1. Browse to **Identity** > **Applications** > **App registrations**.
+1. Select **New registration**.
+1. Enter a **Name** for the application.
+1. For **Supported account types**, select **Accounts in this organizational directory only**.
+1. Select **Register**.
+1. Select **Authentication** and then select **Add a platform**.
![Add a platform](https://user-images.githubusercontent.com/13203188/60799366-4c01ad00-a173-11e9-934f-f02e26c9429e.png)
-3. When the list of platforms is supported, select **iOS**.
+1. In the list of supported platforms, select **iOS / macOS**.
![Choose a mobile application](https://user-images.githubusercontent.com/13203188/60799411-60de4080-a173-11e9-9dcc-d39a45826d42.png)
-4. Enter your bundle ID, and then select **Register**.
+1. Enter your bundle ID, and then select **Configure**.
![Enter your bundle ID](https://user-images.githubusercontent.com/13203188/60799477-7eaba580-a173-11e9-9f8b-431f5b09344e.png)
If your app uses only username-password authentication, you don't need to regist
However, identify your application as a public client application. To do so:
-1. Still in the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>, select your app in **App registrations**, and then select **Authentication**.
+1. Still in the Microsoft Entra admin center, select your app in **App registrations**, and then select **Authentication**.
1. In **Advanced settings** > **Allow public client flows** > **Enable the following mobile and desktop flows:**, select **Yes**.

   :::image type="content" source="media/scenarios/default-client-type.png" alt-text="Enable public client setting on Authentication pane in Azure portal":::
active-directory Scenario Web App Call Api Acquire Token https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-web-app-call-api-acquire-token.md
Previously updated : 05/06/2022 Last updated : 08/11/2023
These advanced steps are covered in chapter 3 of the [3-WebApp-multi-APIs](https
The code for ASP.NET is similar to the code shown for ASP.NET Core: -- A controller action, protected by an [Authorize] attribute, extracts the tenant ID and user ID of the `ClaimsPrincipal` member of the controller. (ASP.NET uses `HttpContext.User`.)
-*Microsoft.Identity.Web* adds extension methods to the Controller that provide convenience services to call Microsoft Graph or a downstream web API, or to get an authorization header, or even a token. The methods used to call an API directly are explained in detail in [A web app that calls web APIs: Call an API](scenario-web-app-call-api-call-api.md). With these helper methods, you don't need to manually acquire a token.
+- A controller action, protected by an `[Authorize]` attribute, extracts the tenant ID and user ID of the `ClaimsPrincipal` member of the controller (ASP.NET uses `HttpContext.User`). This ensures that only authenticated users can use the app.
+**Microsoft.Identity.Web** adds extension methods to the Controller that provide convenience services to call Microsoft Graph or a downstream web API, or to get an authorization header, or even a token. The methods used to call an API directly are explained in detail in [A web app that calls web APIs: Call an API](scenario-web-app-call-api-call-api.md). With these helper methods, you don't need to manually acquire a token.
-If, however, you do want to manually acquire a token or build an authorization header, the following code shows how to use *Microsoft.Identity.Web* to do so in a controller. It calls an API (Microsoft Graph) using the REST API instead of the Microsoft Graph SDK.
+If, however, you do want to manually acquire a token or build an authorization header, the following code shows how to use Microsoft.Identity.Web to do so in a controller. It calls an API (Microsoft Graph) using the REST API instead of the Microsoft Graph SDK.
To get an authorization header, you get an `IAuthorizationHeaderProvider` service from the controller using an extension method `GetAuthorizationHeaderProvider`. To get an authorization header to call an API on behalf of the user, use `CreateAuthorizationHeaderForUserAsync`. To get an authorization header to call a downstream API on behalf of the application itself, in a daemon scenario, use `CreateAuthorizationHeaderForAppAsync`.
-The controller methods are protected by an `[Authorize]` attribute that ensures only authenticated users can use the web app.
-- The following snippet shows the action of the `HomeController`, which gets an authorization header to call Microsoft Graph as a REST API: - ```csharp [Authorize] public class HomeController : Controller
public class HomeController : Controller
# [Java](#tab/java)
-In the Java sample, the code that calls an API is in the getUsersFromGraph method in [AuthPageController.java#L62](https://github.com/Azure-Samples/ms-identity-java-webapp/blob/d55ee4ac0ce2c43378f2c99fd6e6856d41bdf144/src/main/java/com/microsoft/azure/msalwebsample/AuthPageController.java#L62).
+In the Java sample, the code that calls an API is in the `getUsersFromGraph` method in [AuthPageController.java#L62](https://github.com/Azure-Samples/ms-identity-java-webapp/blob/d55ee4ac0ce2c43378f2c99fd6e6856d41bdf144/src/main/java/com/microsoft/azure/msalwebsample/AuthPageController.java#L62).
The method attempts to call `getAuthResultBySilentFlow`. If the user needs to consent to more scopes, the code processes the `MsalInteractionRequiredException` object to challenge the user.
public ModelAndView getUserFromGraph(HttpServletRequest httpRequest, HttpServlet
# [Node.js](#tab/nodejs)
-In the Node.js sample, the code that acquires a token is in the *acquireToken* method of the **AuthProvider** class.
+In the Node.js sample, the code that acquires a token is in the `acquireToken` method of the `AuthProvider` class.
:::code language="js" source="~/ms-identity-node/App/auth/AuthProvider.js" range="79-121":::
This access token is then used to handle requests to the `/profile` endpoint:
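As a rough sketch of how such a handler can be wired up with msal-node (the route, session shape, and fallback path below are assumptions, not the sample's exact code):

```javascript
// Illustrative Express-style handler: acquire a token silently, then call Microsoft Graph.
const msal = require('@azure/msal-node');

const cca = new msal.ConfidentialClientApplication({
  auth: {
    clientId: '11111111-2222-3333-4444-555555555555', // placeholder
    authority: 'https://login.microsoftonline.com/common',
    clientSecret: process.env.CLIENT_SECRET,
  },
});

async function profileHandler(req, res) {
  try {
    const result = await cca.acquireTokenSilent({
      account: req.session.account, // account cached at sign-in (assumed session shape)
      scopes: ['User.Read'],
    });
    const graphResponse = await fetch('https://graph.microsoft.com/v1.0/me', {
      headers: { Authorization: `Bearer ${result.accessToken}` },
    });
    res.json(await graphResponse.json());
  } catch (err) {
    res.redirect('/auth/signin'); // fall back to interactive sign-in (assumed route)
  }
}
```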
# [Python](#tab/python)
-In the Python sample, the code that calls the API is in `app.py`.
+In the Python sample, the code that calls the API is in *app.py*.
The code attempts to get a token from the token cache. If it can't get a token, it redirects the user to the sign-in route. Otherwise, it can proceed to call the API.
Move on to the next article in this scenario,
Move on to the next article in this scenario, [Call a web API](scenario-web-app-call-api-call-api.md?tabs=python). -+
active-directory Setup Multi Tenant App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/setup-multi-tenant-app.md
- Title: Configure a new multi-tenant application
-description: Learn how to configure an application as multi-tenant, and how multi-tenant applications work
--------- Previously updated : 11/10/2022----
-# How to configure a new multi-tenant application
-
-Here is a list of recommended topics to learn more about multi-tenant applications:
-- Get a general understanding of [what it means to be a multi-tenant application](./developer-glossary.md#multi-tenant-application)
-- Learn about [tenancy in Azure Active Directory](single-and-multi-tenant-apps.md)
-- Get a general understanding of [how to configure an application to be multi-tenant](./howto-convert-app-to-be-multi-tenant.md)
-- Get a step-by-step overview of [how the Azure AD consent framework is used to implement consent](./quickstart-register-app.md), which is required for multi-tenant applications
-- For more depth, learn [how a multi-tenant application is configured and coded end-to-end](./howto-convert-app-to-be-multi-tenant.md), including how to register, use the "common" endpoint, implement "user" and "admin" consent, how to implement more advanced multi-tier scenarios
-
-## Next steps
-[AzureAD Microsoft Q&A](/answers/topics/azure-active-directory.html)
active-directory Single Sign On Saml Protocol https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/single-sign-on-saml-protocol.md
Title: Azure single sign-on SAML protocol
+ Title: Single sign-on SAML protocol
description: This article describes the single sign-on (SSO) SAML protocol in Azure Active Directory documentationcenter: .net
Previously updated : 08/31/2022 Last updated : 08/11/2023
To request a user authentication, cloud services send an `AuthnRequest` element
| Parameter | Type | Description | | | | |
-| ID | Required | Azure AD uses this attribute to populate the `InResponseTo` attribute of the returned response. ID must not begin with a number, so a common strategy is to prepend a string like "ID" to the string representation of a GUID. For example, `id6c1c178c166d486687be4aaf5e482730` is a valid ID. |
-| Version | Required | This parameter should be set to **2.0**. |
-| IssueInstant | Required | This is a DateTime string with a UTC value and [round-trip format ("o")](/dotnet/standard/base-types/standard-date-and-time-format-strings). Azure AD expects a DateTime value of this type, but doesn't evaluate or use the value. |
-| AssertionConsumerServiceURL | Optional | If provided, this parameter must match the `RedirectUri` of the cloud service in Azure AD. |
-| ForceAuthn | Optional | This is a boolean value. If true, it means that the user will be forced to re-authenticate, even if they have a valid session with Azure AD. |
-| IsPassive | Optional | This is a boolean value that specifies whether Azure AD should authenticate the user silently, without user interaction, using the session cookie if one exists. If this is true, Azure AD will attempt to authenticate the user using the session cookie. |
-
-All other `AuthnRequest` attributes, such as Consent, Destination, AssertionConsumerServiceIndex, AttributeConsumerServiceIndex, and ProviderName are **ignored**.
+| `ID` | Required | Azure AD uses this attribute to populate the `InResponseTo` attribute of the returned response. ID must not begin with a number, so a common strategy is to prepend a string like "ID" to the string representation of a GUID. For example, `id6c1c178c166d486687be4aaf5e482730` is a valid ID. |
+| `Version` | Required | This parameter should be set to `2.0`. |
+| `IssueInstant` | Required | This is a DateTime string with a UTC value and [round-trip format ("o")](/dotnet/standard/base-types/standard-date-and-time-format-strings). Azure AD expects a DateTime value of this type, but doesn't evaluate or use the value. |
+| `AssertionConsumerServiceURL` | Optional | If provided, this parameter must match the `RedirectUri` of the cloud service in Azure AD. |
+| `ForceAuthn` | Optional | This is a boolean value. If true, it means that the user will be forced to re-authenticate, even if they have a valid session with Azure AD. |
+| `IsPassive` | Optional | This is a boolean value that specifies whether Azure AD should authenticate the user silently, without user interaction, using the session cookie if one exists. If this is true, Azure AD will attempt to authenticate the user using the session cookie. |
+
+All other `AuthnRequest` attributes, such as `Consent`, `Destination`, `AssertionConsumerServiceIndex`, `AttributeConsumerServiceIndex`, and `ProviderName` are **ignored**.
Azure AD also ignores the `Conditions` element in `AuthnRequest`.
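For reference, a minimal `AuthnRequest` using these attributes might look like the following sketch (the issuer and `AssertionConsumerServiceURL` values are illustrative):

```xml
<samlp:AuthnRequest
    xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol"
    ID="id6c1c178c166d486687be4aaf5e482730"
    Version="2.0"
    IssueInstant="2023-09-06T14:00:00.0000000Z"
    AssertionConsumerServiceURL="https://contoso.com/saml/acs"
    ForceAuthn="false"
    IsPassive="false">
    <Issuer xmlns="urn:oasis:names:tc:SAML:2.0:assertion">https://contoso.com/app</Issuer>
</samlp:AuthnRequest>
```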
active-directory Supported Accounts Validation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/supported-accounts-validation.md
See the following table for the validation differences of various properties for
| Certificates (`keyCredentials`) | Symmetric signing key | Symmetric signing key | Encryption and asymmetric signing key | | Client secrets (`passwordCredentials`) | No limit\* | No limit\* | If liveSDK is enabled: Maximum of two client secrets | | Redirect URIs (`replyURLs`) | See [Redirect URI/reply URL restrictions and limitations](reply-url.md) for more info. | | |
-| API permissions (`requiredResourceAccess`) | No more than 50 APIs (resource apps) from the same tenant as the application, no more than 10 APIs from other tenants, and no more than 400 permissions total across all APIs. | No more than 50 APIs (resource apps) from the same tenant as the application, no more than 10 APIs from other tenants, and no more than 400 permissions total across all APIs. | Maximum of 50 resources per application and 30 permissions per resource (for example, Microsoft Graph). Total limit of 200 per application (resources x permissions). |
+| API permissions (`requiredResourceAccess`) | No more than 50 total APIs (resource apps), with no more than 10 APIs from other tenants. No more than 400 permissions total across all APIs. | No more than 50 total APIs (resource apps), with no more than 10 APIs from other tenants. No more than 400 permissions total across all APIs. | No more than 50 total APIs (resource apps), with no more than 10 APIs from other tenants. No more than 200 permissions total across all APIs. Maximum of 30 permissions per resource (for example, Microsoft Graph). |
| Scopes defined by this API (`oauth2Permissions`) | Maximum scope name length of 120 characters <br><br> No limit\* on the number of scopes defined | Maximum scope name length of 120 characters <br><br> No limit\* on the number of scopes defined | Maximum scope name length of 40 characters <br><br> Maximum of 100 scopes defined | | Authorized client applications (`preAuthorizedApplications`) | No limit\* | No limit\* | Total maximum of 500 <br><br> Maximum of 100 client apps defined <br><br> Maximum of 30 scopes defined per client | | appRoles | Supported <br> No limit\* | Supported <br> No limit\* | Not supported |
active-directory Test Setup Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/test-setup-environment.md
You can [manually create a tenant](quickstart-create-new-tenant.md), which will
For convenience, you may want to invite yourself and other members of your development team to be guest users in the tenant. This will create separate guest objects in the test tenant, but means you only have to manage one set of credentials for your corporate account and your test account.
-1. Sign in to the [Azure portal](https://portal.azure.com), then select **Azure Active Directory**.
-2. Go to **Users**.
-3. Click on **New guest user** and invite your work account email address.
-4. Repeat for other members of the development and/or testing team for your application.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Application Developer](../roles/permissions-reference.md#application-developer).
+1. Browse to **Identity** > **Users** > **All users**.
+1. Select **New user** > **Invite external user** and invite your work account email address.
+1. Repeat for other members of the development and/or testing team for your application.
You can also create test users in your test tenant. If you used one of the Microsoft 365 sample packs, you may already have some test users in your tenant. If not, you should be able to create some yourself as the tenant administrator.
-1. Sign in to the [Azure portal](https://portal.azure.com), then select on **Azure Active Directory**.
-2. Go to **Users**.
-3. Click **New user** and create some new test users in your directory.
+1. Browse to **Identity** > **Users** > **All users**.
+1. Select **New user** > **Create new user** and create some new test users in your directory.
### Get an Azure AD subscription (optional)
Replicating Conditional Access policies ensures you don't encounter unexpected b
Viewing your production tenant Conditional Access policies may need to be performed by a company administrator.
-1. Sign in to the [Azure portal](https://portal.azure.com) using your production tenant account.
1. Go to **Azure Active Directory** > **Enterprise applications** > **Conditional Access**.
1. View the list of policies in your tenant. Click the first one.
1. Navigate to **Cloud apps or actions**.
Viewing your production tenant Conditional Access policies may need to be perfor
In a new tab or browser session, sign in to the [Azure portal](https://portal.azure.com) to access your test tenant.
-1. Go to **Azure Active Directory** > **Enterprise applications** > **Conditional Access**.
-1. Click on **New policy**
+1. Browse to **Identity** > **Applications** > **Enterprise applications** > **Conditional Access**.
+1. Select **Create new policy**.
1. Copy the settings from the production tenant policy, identified through the previous steps.

#### Permission grant policies

Replicating permission grant policies ensures you don't encounter unexpected prompts for admin consent when moving to production.
-1. Sign in to the [Azure portal](https://portal.azure.com) using your production tenant account.
-1. Click on **Azure Active Directory**.
-1. Go to **Enterprise applications**.
-1. From your production tenant, go to **Azure Active Directory** > **Enterprise applications** > **Consent and permissions** > **User consent** settings. Copy the settings there to your test tenant.
+Browse to **Identity** > **Applications** > **Enterprise applications** > **Consent and permissions** > **User consent** settings. Copy the settings there to your test tenant.
#### Token lifetime policies
You'll need to create an app registration to use in your test environment. This
You'll need to create some test users with associated test data to use while testing your scenarios. This step might need to be performed by an admin.
-1. Sign in to the [Azure portal](https://portal.azure.com), then select **Azure Active Directory**.
-2. Go to **Users**.
-3. Select **New user** and create some new test users in your directory.
+1. Browse to **Identity** > **Users** > **All users**.
+1. Select **New user** > **Create new user** and create some new test users in your directory.
### Add the test users to a group (optional)

For convenience, you can assign all these users to a group, which makes other assignment operations easier.
-1. Sign in to the [Azure portal](https://portal.azure.com), then select **Azure Active Directory**.
-2. Go to **Groups**.
-3. Click **New group**.
-4. Select either **Security** or **Microsoft 365** for group type.
-5. Name your group.
-6. Add the test users created in the previous step.
+1. Browse to **Identity** > **Groups** > **All groups**.
+1. Select **New group**.
+1. Select either **Security** or **Microsoft 365** for group type.
+1. Name your group.
+1. Add the test users created in the previous step.
### Restrict your test application to specific users
active-directory Troubleshoot Publisher Verification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/troubleshoot-publisher-verification.md
Previously updated : 08/11/2023 Last updated : 08/17/2023
If you're unable to complete the process or are experiencing unexpected behavior with [publisher verification](publisher-verification-overview.md), start by doing the following:

1. Review the [requirements](publisher-verification-overview.md#requirements) and ensure they've all been met.
-
-2. Review the instructions to [mark an app as publisher verified](mark-app-as-publisher-verified.md) and ensure all steps have been performed successfully.
-
-3. Review the list of [common issues](#common-issues).
-
-4. Reproduce the request using [Graph Explorer](#making-microsoft-graph-api-calls) to gather more info and rule out any issues in the UI.
+1. Review the instructions to [mark an app as publisher verified](mark-app-as-publisher-verified.md) and ensure all steps have been performed successfully.
+1. Review the list of [common issues](#common-issues).
+1. Reproduce the request using [Graph Explorer](#making-microsoft-graph-api-calls) to gather more info and rule out any issues in the UI.
## Common Issues

Below are some common issues that may occur during the process.

-- **I don't know my Microsoft Partner Network ID (MPN ID) or I don't know who the primary contact for the account is.**
- 1. Navigate to the [MPN enrollment page](https://partner.microsoft.com/dashboard/account/v3/enrollment/joinnow/basicpartnernetwork/new).
- 2. Sign in with a user account in the org's primary Azure AD tenant.
- 3. If an MPN account already exists, this is recognized and you are added to the account.
- 4. Navigate to the [partner profile page](https://partner.microsoft.com/pcv/accountsettings/connectedpartnerprofile) where the MPN ID and primary account contact will be listed.
+- **I don't know my Cloud Partner Program ID (Partner One ID) or I don't know who the primary contact for the account is.**
+ 1. Navigate to the [Cloud Partner Program enrollment page](https://partner.microsoft.com/dashboard/account/v3/enrollment/joinnow/basicpartnernetwork/new).
+ 1. Sign in with a user account in the org's primary Azure AD tenant.
+ 1. If a Cloud Partner Program account already exists, it's recognized and you're added to the account.
+ 1. Navigate to the [partner profile page](https://partner.microsoft.com/pcv/accountsettings/connectedpartnerprofile) where the Partner One ID and primary account contact will be listed.
- **I don't know who my Azure AD Global Administrator (also known as company admin or tenant admin) is, how do I find them? What about the Application Administrator or Cloud Application Administrator?**
- 1. Sign in to the [Azure portal](https://portal.azure.com) using a user account in your organization's primary tenant.
- 1. Browse to **Azure Active Directory** > [Roles and administrators](https://portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/RolesAndAdministrators).
- 3. Select the desired admin role.
- 4. The list of users assigned that role will be displayed.
+ 1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator).
+ 1. Browse to **Identity** > **Roles & admins** > **Roles & admins**.
+ 1. Select the desired admin role.
+ 1. The list of users assigned that role will be displayed.
-- **I don't know who the admin(s) for my MPN account are**
- Go to the [MPN User Management page](https://partner.microsoft.com/pcv/users) and filter the user list to see what users are in various admin roles.
+- **I don't know who the admin(s) for my CPP account are**
+ Go to the [CPP User Management page](https://partner.microsoft.com/pcv/users) and filter the user list to see what users are in various admin roles.
-- **I am getting an error saying that my MPN ID is invalid or that I do not have access to it.**
+- **I am getting an error saying that my Partner One ID is invalid or that I do not have access to it.**
Follow the [remediation guidance](#mpnaccountnotfoundornoaccess). - **When I sign in to the Azure portal, I do not see any apps registered. Why?**
Response
204 No Content ``` > [!NOTE]
-> *verifiedPublisherID* is your MPN ID.
+> *verifiedPublisherID* is your Partner One ID.
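For reference, the `204 No Content` response above comes from a request shaped roughly like the following; this is a minimal sketch with placeholder IDs rather than a full walkthrough:

```
POST https://graph.microsoft.com/v1.0/applications/{application-object-id}/setVerifiedPublisher
Content-Type: application/json

{
    "verifiedPublisherId": "{partner-one-id}"
}
```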
### Unset Verified Publisher
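Unsetting works the same way; a sketch of the call, again with a placeholder object ID (a successful call also returns `204 No Content`):

```
POST https://graph.microsoft.com/v1.0/applications/{application-object-id}/unsetVerifiedPublisher
```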
The following is a list of the potential error codes you may receive, either whe
### MPNAccountNotFoundOrNoAccess
-The MPN ID you provided (`MPNID`) doesn't exist, or you don't have access to it. Provide a valid MPN ID and try again.
+The Partner One ID you provided (`MPNID`) doesn't exist, or you don't have access to it. Provide a valid Partner One ID and try again.
-Most commonly caused by the signed-in user not being a member of the proper role for the MPN account in Partner Center- see [requirements](publisher-verification-overview.md#requirements) for a list of eligible roles and see [common issues](#common-issues) for more information. Can also be caused by the tenant the app is registered in not being added to the MPN account, or an invalid MPN ID.
+Most commonly caused by the signed-in user not being a member of the proper role for the CPP account in Partner Center; see [requirements](publisher-verification-overview.md#requirements) for a list of eligible roles and see [common issues](#common-issues) for more information. This error can also be caused by the tenant the app is registered in not being added to the CPP account, or by an invalid Partner One ID.
**Remediation Steps** 1. Go to your [partner profile](https://partner.microsoft.com/pcv/accountsettings/connectedpartnerprofile) and verify that:
- - The MPN ID is correct.
- There are no errors or "pending actions" shown, and the verification status under Legal business profile and Partner info both say "authorized" or "success".
-2. Go to the [MPN tenant management page](https://partner.microsoft.com/dashboard/account/v3/tenantmanagement) and confirm that the tenant the app is registered in and that you're signing with a user account from is on the list of associated tenants. To add another tenant, follow the [multi-tenant-account instructions](/partner-center/multi-tenant-account). All Global Admins of any tenant you add will be granted Global Administrator privileges on your Partner Center account.
-3. Go to the [MPN User Management page](https://partner.microsoft.com/pcv/users) and confirm the user you're signing in as is either a Global Administrator, MPN Admin, or Accounts Admin. To add a user to a role in Partner Center, follow the instructions for [creating user accounts and setting permissions](/partner-center/create-user-accounts-and-set-permissions).
+ - The Partner One ID is correct.
+ - There are no errors or "pending actions" shown, and the verification status under Legal business profile and Partner info both say "authorized" or "success".
+1. Go to the [CPP tenant management page](https://partner.microsoft.com/dashboard/account/v3/tenantmanagement) and confirm that the tenant the app is registered in, and that you're signing in with a user account from, is on the list of associated tenants. To add another tenant, follow the [multi-tenant-account instructions](/partner-center/multi-tenant-account). All Global Admins of any tenant you add will be granted Global Administrator privileges on your Partner Center account.
+1. Go to the [CPP User Management page](https://partner.microsoft.com/pcv/users) and confirm the user you're signing in as is either a Global Administrator, MPN Admin, or Accounts Admin. To add a user to a role in Partner Center, follow the instructions for [creating user accounts and setting permissions](/partner-center/create-user-accounts-and-set-permissions).
### MPNGlobalAccountNotFound
-The MPN ID you provided (`MPNID`) isn't valid. Provide a valid MPN ID and try again.
+The Partner One ID you provided (`MPNID`) isn't valid. Provide a valid Partner One ID and try again.
-Most commonly caused when an MPN ID is provided which corresponds to a Partner Location Account (PLA). Only Partner Global Accounts are supported. See [Partner Center account structure](/partner-center/account-structure) for more details.
+Most commonly caused when a Partner One ID is provided that corresponds to a Partner Location Account (PLA). Only Partner Global Accounts are supported. See [Partner Center account structure](/partner-center/account-structure) for more details.
**Remediation Steps**
-1. Navigate to your [partner profile](https://partner.microsoft.com/pcv/accountsettings/connectedpartnerprofile) > Identifiers blade > Microsoft Cloud Partners Program Tab
-2. Use the Partner ID with type PartnerGlobal
+1. Navigate to your [partner profile](https://partner.microsoft.com/pcv/accountsettings/connectedpartnerprofile) > **Identifiers blade** > **Microsoft Cloud Partners Program Tab**.
+1. Use the Partner ID with type PartnerGlobal.
### MPNAccountInvalid
-The MPN ID you provided (`MPNID`) isn't valid. Provide a valid MPN ID and try again.
+The Partner One ID you provided (`MPNID`) isn't valid. Provide a valid Partner One ID and try again.
-Most commonly caused by the wrong MPN ID being provided.
+Most commonly caused by the wrong Partner One ID being provided.
**Remediation Steps**
-1. Navigate to your [partner profile](https://partner.microsoft.com/pcv/accountsettings/connectedpartnerprofile) > Identifiers blade > Microsoft Cloud Partners Program Tab
-2. Use the Partner ID with type PartnerGlobal
+1. Navigate to your [partner profile](https://partner.microsoft.com/pcv/accountsettings/connectedpartnerprofile) > **Identifiers blade** > **Microsoft Cloud Partners Program Tab**.
+1. Use the Partner ID with type PartnerGlobal.
### MPNAccountNotVetted
-The MPN ID (`MPNID`) you provided hasn't completed the vetting process. Complete this process in Partner Center and try again.
+The Partner One ID (`MPNID`) you provided hasn't completed the vetting process. Complete this process in Partner Center and try again.
-Most commonly caused by when the MPN account hasn't completed the [verification](/partner-center/verification-responses) process.
+Most commonly caused when the CPP account hasn't completed the [verification](/partner-center/verification-responses) process.
**Remediation Steps** 1. Navigate to your [partner profile](https://partner.microsoft.com/pcv/accountsettings/connectedpartnerprofile) and verify that there are no errors or **pending actions** shown, and that the verification status under Legal business profile and Partner info both say **authorized** or **success**.
-2. If not, view pending action items in Partner Center and troubleshoot with [here](/partner-center/verification-responses)
+1. If not, view the pending action items in Partner Center and troubleshoot using the guidance [here](/partner-center/verification-responses).
### NoPublisherIdOnAssociatedMPNAccount
-The MPN ID you provided (`MPNID`) isn't valid. Provide a valid MPN ID and try again.
+The Partner One ID you provided (`MPNID`) isn't valid. Provide a valid Partner One ID and try again.
-Most commonly caused by the wrong MPN ID being provided.
+Most commonly caused by the wrong Partner One ID being provided.
**Remediation Steps**
-1. Navigate to your [partner profile](https://partner.microsoft.com/pcv/accountsettings/connectedpartnerprofile) > Identifiers blade > Microsoft Cloud Partners Program Tab
-2. Use the Partner ID with type PartnerGlobal
+1. Navigate to your [partner profile](https://partner.microsoft.com/pcv/accountsettings/connectedpartnerprofile) > **Identifiers blade** > **Microsoft Cloud Partners Program Tab**.
+1. Use the Partner ID with type PartnerGlobal.
### MPNIdDoesNotMatchAssociatedMPNAccount
-The MPN ID you provided (`MPNID`) isn't valid. Provide a valid MPN ID and try again.
+The Partner One ID you provided (`MPNID`) isn't valid. Provide a valid Partner One ID and try again.
-Most commonly caused by the wrong MPN ID being provided.
+Most commonly caused by the wrong Partner One ID being provided.
**Remediation Steps**
-1. Navigate to your [partner profile](https://partner.microsoft.com/pcv/accountsettings/connectedpartnerprofile) > Identifiers blade > Microsoft Cloud Partners Program Tab
-2. Use the Partner ID with type PartnerGlobal
+1. Navigate to your [partner profile](https://partner.microsoft.com/pcv/accountsettings/connectedpartnerprofile) > **Identifiers blade** > **Microsoft Cloud Partners Program Tab**.
+1. Use the Partner ID with type PartnerGlobal.
### ApplicationNotFound
-The target application (`AppId`) can't be found. Provide a valid application ID and try again.
+The target application (`AppId`) can't be found. Provide a valid application ID and try again.
Most commonly caused when verification is being performed via Graph API, and the ID of the application provided is incorrect. **Remediation Steps**
-1. The Object ID of the application must be provided, not the AppId/ClientId. See **id** on the list of application properties [here](/graph/api/resources/application)
-2. Log in to [Azure Active Directory](https://aad.portal.azure.com/) with a user account in your organization's primary tenant > Azure Active Directory > App Registrations blade
-3. Find your app's registration to view the Object ID
+1. The Object ID of the application must be provided, not the AppId/ClientId. See **id** on the list of application properties [here](/graph/api/resources/application). You can also resolve the Object ID with a Graph query, as sketched after this list.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator).
+1. Browse to **Identity** > **Applications** > **App registrations**.
+1. Find your app's registration to view the Object ID.
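If you prefer to resolve the Object ID without the portal, a Graph Explorer query along these lines filters applications by their AppId/ClientId; the `{client-id}` value is a placeholder:

```
GET https://graph.microsoft.com/v1.0/applications?$filter=appId eq '{client-id}'&$select=id,displayName
```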
### ApplicationObjectisInvalid
The target application's object ID is invalid. Please provide a valid ID and try
Most commonly caused when the verification is being performed via Graph API, and the ID of the application provided does not exist. **Remediation Steps**
-1. The Object ID of the application must be provided, not the AppId/ClientId. See **id** on the list of application properties [here](/graph/api/resources/application)
-2. Log in to [Azure Active Directory](https://aad.portal.azure.com/) with a user account in your organization's primary tenant > Azure Active Directory > App Registrations blade
-3. Find your app's registration to view the Object ID
+1. The Object ID of the application must be provided, not the AppId/ClientId. See **id** on the list of application properties [here](/graph/api/resources/application).
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator).
+1. Browse to **Identity** > **Applications** > **App registrations**.
+1. Find your app's registration to view the Object ID.
### B2CTenantNotAllowed
The target application (`AppId`) must have a Publisher Domain set. Set a Publish
Occurs when a [Publisher Domain](howto-configure-publisher-domain.md) isn't configured on the app. **Remediation Steps**
-1. Follow the directions [here](./howto-configure-publisher-domain.md#set-a-publisher-domain-in-the-azure-portal) to set a Publisher Domain
+Follow the directions [here](./howto-configure-publisher-domain.md#set-a-publisher-domain-in-the-azure-portal) to set a Publisher Domain.
### PublisherDomainMismatch
See [requirements](publisher-verification-overview.md) for a list of allowed dom
**Remediation Steps** 1. Navigate to your [partner profile](https://partner.microsoft.com/pcv/accountsettings/connectedpartnerprofile), and view the email listed as Primary Contact
-2. The domain used to perform email verification in Partner Center is the portion after the "@" in the Primary Contact's email
-3. Log in to [Azure Active Directory](https://aad.portal.azure.com/) > Azure Active Directory > App Registrations blade > (`Your App`) > Branding and Properties
-4. Select **Update Publisher Domain** and follow the instructions to **Verify a New Domain**.
-5. Add the domain used to perform email verification in Partner Center as a New Domain
+1. The domain used to perform email verification in Partner Center is the portion after the "@" in the Primary Contact's email.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator).
+1. Browse to **Identity** > **Applications** > **App registrations**, select your application, and then select **Branding & properties**.
+1. Select **Update Publisher Domain** and follow the instructions to **Verify a New Domain**.
+1. Add the domain used to perform email verification in Partner Center as a New Domain.
### NotAuthorizedToVerifyPublisher You aren't authorized to set the verified publisher property on application (`AppId`).
-Most commonly caused by the signed-in user not being a member of the proper role for the MPN account in Azure AD- see [requirements](publisher-verification-overview.md#requirements) for a list of eligible roles and see [common issues](#common-issues) for more information.
+Most commonly caused by the signed-in user not being a member of the proper role for the CPP account in Azure AD; see [requirements](publisher-verification-overview.md#requirements) for a list of eligible roles and see [common issues](#common-issues) for more information.
**Remediation Steps**
-1. Sign in to the [Azure AD Portal](https://aad.portal.azure.com) using a user account in your organization's primary tenant.
-2. Navigate to [Role Management](https://aad.portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/RolesAndAdministrators).
-3. Select the desired admin role and click "Add Assignment" if you have sufficient permissions.
-4. If you do not have sufficient permissions, contact an admin role for assistance
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator).
+1. Browse to **Identity** > **Roles & admins** > **Roles & admins**.
+1. Select the desired admin role and select **Add Assignment** if you have sufficient permissions.
+1. If you don't have sufficient permissions, contact an administrator for assistance.
### MPNIdWasNotProvided
-The MPN ID wasn't provided in the request body or the request content type wasn't "application/json".
+The Partner One ID wasn't provided in the request body or the request content type wasn't "application/json".
-Most commonly caused when the verification is being performed via Graph API, and the MPN ID wasn't provided in the request.
+Most commonly caused when the verification is being performed via Graph API, and the Partner One ID wasn't provided in the request.
**Remediation Steps**
-1. Navigate to your [partner profile](https://partner.microsoft.com/pcv/accountsettings/connectedpartnerprofile) > Identifiers blade > Microsoft Cloud Partners Program Tab
-2. Use the Partner ID with type PartnerGlobal in the request
+1. Navigate to your [partner profile](https://partner.microsoft.com/pcv/accountsettings/connectedpartnerprofile) > **Identifiers blade** > **Microsoft Cloud Partners Program Tab**.
+1. Use the Partner ID with type PartnerGlobal in the request.
### MSANotSupported
The error message displayed will be: "Due to a configuration change made by your
**Remediation Steps** 1. Ensure [multi-factor authentication](../fundamentals/concept-fundamentals-mfa-get-started.md) is enabled and **required** for the user you're signing in with and for this scenario
-2. Retry Publisher Verification
+1. Retry Publisher Verification.
### UserUnableToAddPublisher
If you've reviewed all of the previous information and are still receiving an er
- ObjectId of target application - AppId of target application - TenantId where app is registered-- MPN ID
+- Partner One ID
- REST request being made - Error code and message being returned
active-directory Tutorial Blazor Webassembly https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/tutorial-blazor-webassembly.md
We also have a [tutorial for Blazor Server](tutorial-blazor-server.md).
- [.NET Core 7.0 SDK](https://dotnet.microsoft.com/download/dotnet-core/7.0) - An Azure AD tenant where you can register an app. If you don't have access to an Azure AD tenant, you can get one by registering with the [Microsoft 365 Developer Program](https://developer.microsoft.com/microsoft-365/dev-program) or by creating an [Azure free account](https://azure.microsoft.com/free).
-## Register the app in the Azure portal
+## Register the app
Every app that uses Azure AD for authentication must be registered with Azure AD. Follow the instructions in [Register an application](quickstart-register-app.md) with these specifications:
To create the application, run the following command. Replace the placeholders i
dotnet new blazorwasm --auth SingleOrg --calls-graph -o {APP NAME} --client-id "{CLIENT ID}" --tenant-id "{TENANT ID}" -f net7.0 ```
-| Placeholder | Azure portal name | Example |
-| - | -- | -- |
-| `{APP NAME}` | &mdash; | `BlazorWASMSample` |
+| Placeholder | Name | Example |
+| -- | - |-- |
+| `{APP NAME}` | &mdash; | `BlazorWASMSample` |
| `{CLIENT ID}` | Application (client) ID | `41451fa7-0000-0000-0000-69eff5a761fd` |
-| `{TENANT ID}` | Directory (tenant) ID | `e86c78e2-0000-0000-0000-918e0565a45e` |
+| `{TENANT ID}` | Directory (tenant) ID | `e86c78e2-0000-0000-0000-918e0565a45e` |
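For instance, substituting the sample values from the table gives a command like this (sample IDs only; use your own registration's values):

```
dotnet new blazorwasm --auth SingleOrg --calls-graph -o BlazorWASMSample --client-id "41451fa7-0000-0000-0000-69eff5a761fd" --tenant-id "e86c78e2-0000-0000-0000-918e0565a45e" -f net7.0
```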
## Test the app
Now you'll update your app's registration and code to pull a user's emails and d
First, add the `Mail.Read` API permission to the app's registration so that Azure AD is aware that the app will request to access its users' email.
-1. In the Azure portal, select your app in **App registrations**.
+1. In the Microsoft Entra admin center, select your app in **App registrations**.
1. Under **Manage**, select **API permissions**. 1. Select **Add a permission** > **Microsoft Graph**. 1. Select **Delegated Permissions**, then search for and select the **Mail.Read** permission.
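Once the permission has been granted and consented to, the app can call the endpoint that _Mail.Read_ unlocks. A rough sketch of such a request (the query options are illustrative, not required):

```
GET https://graph.microsoft.com/v1.0/me/messages?$select=subject,from&$top=10
Authorization: Bearer {access-token}
```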
active-directory Tutorial V2 Android https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/tutorial-v2-android.md
In this tutorial:
> [!div class="checklist"] > > - Create an Android app project in _Android Studio_
-> - Register the app in the Azure portal
+> - Register the app in the Microsoft Entra admin center
> - Add code to support user sign-in and sign-out > - Add code to call the Microsoft Graph API > - Test the app
Follow these steps to create a new project if you don't already have an Android
1. Open Android Studio, and select **Start a new Android Studio project**. 2. Select **Basic Activity** and select **Next**. 3. Enter a name for the application, such as _MSALAndroidapp_.
-4. Record the package name to be used in the Azure portal in later steps.
+4. Record the package name to be used in later steps.
5. Change the language from **Kotlin** to **Java**. 6. Set the **Minimum SDK API level** to **API 19** or higher, and select **Finish**.
Follow these steps to create a new project if you don't already have an Android
[!INCLUDE [portal updates](~/articles/active-directory/includes/portal-update.md)]
-1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>.
-1. If you have access to multiple tenants, use the **Directories + subscriptions** filter :::image type="icon" source="./media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to the tenant in which you want to register the application.
-1. Search for and select **Azure Active Directory**.
-1. Under **Manage**, select **App registrations** > **New registration**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Application Developer](../roles/permissions-reference.md#application-developer).
+1. If access to multiple tenants is available, use the **Directories + subscriptions** filter :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to the tenant in which you want to register the application.
+1. Browse to **Identity** > **Applications** > **App registrations**.
+1. Select **New registration**.
1. Enter a **Name** for your application. Users of your app might see this name, and you can change it later. 1. For **Supported account types**, select **Accounts in any organizational directory (Any Azure AD directory - Multitenant) and personal Microsoft accounts (e.g. Skype, Xbox)**. For information on different account types, select the **Help me choose** option. 1. Select **Register**.
Follow these steps to create a new project if you don't already have an Android
</activity> ```
- - Use your Azure portal **Package name** to replace `android:host=.` value. It should look like `com.azuresamples.msalandroidapp`.
- - Use your Azure portal **Signature Hash** to replace `android:path=` value. Ensure that there's a leading `/` at the beginning of your Signature Hash. It should look like `/1wIqXSqBj7w+h11ZifsnqwgyKrY=`.
+ - Use the **Package name** to replace the `android:host=` value. It should look like `com.azuresamples.msalandroidapp`.
+ - Use the **Signature Hash** to replace the `android:path=` value. Ensure that there's a leading `/` at the beginning of your Signature Hash. It should look like `/1wIqXSqBj7w+h11ZifsnqwgyKrY=`.
You can find these values in the Authentication blade of your app registration as well.
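Assembled from the example values above, the filled-in manifest entry might look like the following sketch (your package name and signature hash will differ):

```xml
<activity android:name="com.microsoft.identity.client.BrowserTabActivity">
    <intent-filter>
        <action android:name="android.intent.action.VIEW" />
        <category android:name="android.intent.category.DEFAULT" />
        <category android:name="android.intent.category.BROWSABLE" />
        <!-- android:host is the package name; android:path is the signature hash with a leading "/" -->
        <data
            android:scheme="msauth"
            android:host="com.azuresamples.msalandroidapp"
            android:path="/1wIqXSqBj7w+h11ZifsnqwgyKrY=" />
    </intent-filter>
</activity>
```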
active-directory Tutorial V2 Angular Auth Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/tutorial-v2-angular-auth-code.md
In this tutorial:
> [!div class="checklist"] >
-> - Register the application in the Azure portal
+> - Register the application in the Microsoft Entra admin center
> - Create an Angular project with `npm` > - Add code to support user sign-in and sign-out > - Add code to call Microsoft Graph API
To continue with the tutorial and build the application yourself, move on to the
To complete registration, provide the application a name, specify the supported account types, and add a redirect URI. Once registered, the application **Overview** pane displays the identifiers needed in the application source code.
-1. Sign in to the [Azure portal](https://portal.azure.com).
-1. If access to multiple tenants is available, use the **Directories + subscriptions** filter :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to the tenant in which to register the application.
-1. Search for and select **Azure Active Directory**.
-1. Under **Manage**, select **App registrations > New registration**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Application Developer](../roles/permissions-reference.md#application-developer).
+1. If access to multiple tenants is available, use the **Directories + subscriptions** filter :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to the tenant in which you want to register the application.
+1. Browse to **Identity** > **Applications** > **App registrations**.
+1. Select **New registration**.
1. Enter a **Name** for the application, such as _Angular-SPA-auth-code_. 1. For **Supported account types**, select **Accounts in this organizational directory only**. For information on different account types, select the **Help me choose** option. 1. Under **Redirect URI (optional)**, use the drop-down menu to select **Single-page-application (SPA)** and enter `http://localhost:4200` into the text box.
To complete registration, provide the application a name, specify the supported
export class AppModule {} ```
-1. Replace the following values with the values obtained from the Azure portal. For more information about available configurable options, see [Initialize client applications](msal-js-initializing-client-applications.md).
+1. Replace the following values with the values obtained from the Microsoft Entra admin center. For more information about available configurable options, see [Initialize client applications](msal-js-initializing-client-applications.md).
- `clientId` - The identifier of the application, also referred to as the client. Replace `Enter_the_Application_Id_Here` with the **Application (client) ID** value that was recorded earlier from the overview page of the registered application. - `authority` - This is composed of two parts:
MSAL Angular provides an `Interceptor` class that automatically acquires tokens
## Add scopes and delegated permissions
-The Microsoft Graph API requires the _User.Read_ scope to read a user's profile. The _User.Read_ scope is added automatically to every app registration you create in the Azure portal. Other APIs for Microsoft Graph, and custom APIs for your back-end server, might require other scopes. For example, the Microsoft Graph API requires the _Mail.Read_ scope in order to list the user's email.
+The Microsoft Graph API requires the _User.Read_ scope to read a user's profile. The _User.Read_ scope is added automatically to every app registration. Other APIs for Microsoft Graph, and custom APIs for your back-end server, might require other scopes. For example, the Microsoft Graph API requires the _Mail.Read_ scope in order to list the user's email.
As you add scopes, your users might be prompted to provide extra consent for the added scopes.
active-directory Tutorial V2 Aspnet Daemon Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/tutorial-v2-aspnet-daemon-web-app.md
In this tutorial:
> * Get an access token to call the Microsoft Graph API > * Call the Microsoft Graph API.
-If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
## Prerequisites - [Visual Studio 2017 or 2019](https://visualstudio.microsoft.com/downloads/). - An Azure AD tenant. For more information, see [How to get an Azure AD tenant](quickstart-create-new-tenant.md).-- One or more user accounts in your Azure AD tenant. This sample won't work with a Microsoft account. If you signed in to the [Azure portal](https://portal.azure.com) with a Microsoft account and have never created a user account in your directory, do that now.
+- One or more user accounts in your tenant. This sample won't work with a Microsoft account. If you signed in with a Microsoft account and have never created a user account in your directory, do that now.
## Scenario
Or [download the sample in a zip file](https://github.com/Azure-Samples/ms-ident
This sample has one project. To register the application with your Azure AD tenant, you can either: -- Follow the steps in [Register the sample with your Azure Active Directory tenant](#register-the-client-app-dotnet-web-daemon-v2) and [Configure the sample to use your Azure AD tenant](#choose-the-azure-ad-tenant).
+- Follow the steps in [Choose the tenant](#choose-the-tenant) and [Configure the sample to use your tenant](#configure-the-sample-to-use-your-tenant).
- Use PowerShell scripts that: - *Automatically* create the Azure AD applications and related objects (passwords, permissions, dependencies) for you. - Modify the Visual Studio projects' configuration files.
If you want to use the automation:
If you don't want to use the automation, use the steps in the following sections.
-### Choose the Azure AD tenant
+### Choose the tenant
[!INCLUDE [portal updates](~/articles/active-directory/includes/portal-update.md)]
-1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>.
-1. If you have access to multiple tenants, use the **Directories + subscriptions** filter :::image type="icon" source="./media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to the tenant in which you want to register the application.
--
-### Register the client app (dotnet-web-daemon-v2)
-
-1. Search for and select **Azure Active Directory**.
-1. Under **Manage**, select **App registrations** > **New registration**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Application Developer](../roles/permissions-reference.md#application-developer).
+1. If access to multiple tenants is available, use the **Directories + subscriptions** filter :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to the tenant in which you want to register the application.
+1. Browse to **Identity** > **Applications** > **App registrations**.
+1. Select **New registration**.
1. Enter a **Name** for your application, for example `dotnet-web-daemon-v2`. Users of your app might see this name, and you can change it later. 1. In the **Supported account types** section, select **Accounts in any organizational directory**. 1. In the **Redirect URI (optional)** section, select **Web** in the combo box and enter `https://localhost:44316/` and `https://localhost:44316/Account/GrantPermissions` as Redirect URIs.
If you don't want to use the automation, use the steps in the following sections
1. In the **Application permissions** section, ensure that the right permissions are selected: **User.Read.All**. 1. Select **Add permissions**.
-## Configure the sample to use your Azure AD tenant
+## Configure the sample to use your tenant
In the following steps, **ClientID** is the same as "application ID" or **AppId**.
Open the solution in Visual Studio to configure the projects.
If you used the setup scripts, the following changes will have been applied for you. 1. Open the **UserSync\Web.Config** file.
-1. Find the app key **ida:ClientId**. Replace the existing value with the application ID of the **dotnet-web-daemon-v2** application copied from the Azure portal.
-1. Find the app key **ida:ClientSecret**. Replace the existing value with the key that you saved during the creation of the **dotnet-web-daemon-v2** app in the Azure portal.
+1. Find the app key **ida:ClientId**. Replace the existing value with the application ID of the **dotnet-web-daemon-v2** application that was previously recorded.
+1. Find the app key **ida:ClientSecret**. Replace the existing value with the key that you saved during the creation of the **dotnet-web-daemon-v2** app.
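Behind the scenes, these two values feed the client credentials grant that the daemon uses to obtain app-only tokens. A rough sketch of that token request, with placeholders standing in for the Web.Config values:

```
POST https://login.microsoftonline.com/{tenant-id}/oauth2/v2.0/token
Content-Type: application/x-www-form-urlencoded

client_id={ida:ClientId}
&client_secret={ida:ClientSecret}
&scope=https%3A%2F%2Fgraph.microsoft.com%2F.default
&grant_type=client_credentials
```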
## Run the sample
Visual Studio will publish the project and automatically open a browser to the p
### Update the Azure AD tenant application registration for dotnet-web-daemon-v2
-1. Go back to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>.
-1. In the left pane, select the **Azure Active Directory** service, and then select **App registrations**.
-1. Select the **dotnet-web-daemon-v2** application.
+1. Go back to the Microsoft Entra admin center, and then select the **dotnet-web-daemon-v2** application in **App registrations**.
1. On the **Authentication** page for your application, update the **Front-channel logout URL** fields with the address of your service. For example, use `https://dotnet-web-daemon-v2-contoso.azurewebsites.net/Account/EndSession`. 1. From the **Branding** menu, update the **Home page URL** to the address of your service. For example, use `https://dotnet-web-daemon-v2-contoso.azurewebsites.net`. 1. Save the configuration.
active-directory Tutorial V2 Ios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/tutorial-v2-ios.md
In this tutorial:
> [!div class="checklist"] > > - Create an iOS or macOS app project in _Xcode_
-> - Register the app in the Azure portal
+> - Register the app in the Microsoft Entra admin center
> - Add code to support user sign-in and sign-out > - Add code to call the Microsoft Graph API > - Test the app
If you'd like to download a completed version of the app you build in this tutor
5. Set the **Language** to **Swift** and select **Next**. 6. Select a folder to create your app and select **Create**.
-## Register your application
+## Register the application
[!INCLUDE [portal updates](~/articles/active-directory/includes/portal-update.md)]
-1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>.
-1. If you have access to multiple tenants, use the **Directories + subscriptions** filter :::image type="icon" source="./media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to the tenant in which you want to register the application.
-1. Search for and select **Azure Active Directory**.
-1. Under **Manage**, select **App registrations** > **New registration**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Application Developer](../roles/permissions-reference.md#application-developer).
+1. If access to multiple tenants is available, use the **Directories + subscriptions** filter :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to the tenant in which you want to register the application.
+1. Browse to **Identity** > **Applications** > **App registrations**.
+1. Select **New registration**.
1. Enter a **Name** for your application. Users of your app might see this name, and you can change it later. 1. Select **Accounts in any organizational directory (Any Azure AD directory - Multitenant) and personal Microsoft accounts (e.g. Skype, Xbox)** under **Supported account types**. 1. Select **Register**.
carthage update --platform macOS
You can also use Git Submodule, or check out the latest release to use as a framework in your application.
-## Add your app registration
+## Add the app registration
Next, we add your app registration to your code.
import MSAL
Next, add the following code to _ViewController.swift_ before to `viewDidLoad()`: ```swift
-// Update the below to your client ID you received in the portal. The below is for running the demo only
+// Update the value below with your client ID. The value below is for running the demo only
let kClientID = "Your_Application_Id_Here" let kGraphEndpoint = "https://graph.microsoft.com/" // the Microsoft Graph endpoint let kAuthority = "https://login.microsoftonline.com/common" // this authority allows a personal Microsoft account and a work or school account in any organization's Azure AD tenant to sign in
var webViewParameters : MSALWebviewParameters?
var currentAccount: MSALAccount? ```
-The only value you modify is the value assigned to `kClientID` to be your [Application ID](./developer-glossary.md#application-client-id). This value is part of the MSAL Configuration data that you saved during the step at the beginning of this tutorial to register the application in the Azure portal.
+The only value you modify is the value assigned to `kClientID` to be your [Application ID](./developer-glossary.md#application-client-id). This value is part of the MSAL Configuration data that you saved during the step at the beginning of this tutorial to register the application.
## Configure Xcode project settings
Add a new keychain group to your project **Signing & Capabilities**. The keychai
In this step, you'll register `CFBundleURLSchemes` so that the user can be redirected back to the app after sign in. By the way, `LSApplicationQueriesSchemes` also allows your app to make use of Microsoft Authenticator.
-In Xcode, open _Info.plist_ as a source code file, and add the following inside of the `<dict>` section. Replace `[BUNDLE_ID]` with the value you used in the Azure portal. If you downloaded the code, the bundle identifier is `com.microsoft.identitysample.MSALiOS`. If you're creating your own project, select your project in Xcode and open the **General** tab. The bundle identifier appears in the **Identity** section.
+In Xcode, open _Info.plist_ as a source code file, and add the following inside of the `<dict>` section. Replace `[BUNDLE_ID]` with the value you previously used. If you downloaded the code, the bundle identifier is `com.microsoft.identitysample.MSALiOS`. If you're creating your own project, select your project in Xcode and open the **General** tab. The bundle identifier appears in the **Identity** section.
```xml <key>CFBundleURLTypes</key>
This app is built for a single account scenario. MSAL also supports multi-accoun
Build and deploy the app to a test device or simulator. You should be able to sign in and get tokens for Azure AD or personal Microsoft accounts.
-The first time a user signs into your app, they'll be prompted by Microsoft identity to consent to the permissions requested. While most users are capable of consenting, some Azure AD tenants have disabled user consent, which requires admins to consent on behalf of all users. To support this scenario, register your app's scopes in the Azure portal.
+The first time a user signs into your app, they'll be prompted by Microsoft identity to consent to the permissions requested. While most users are capable of consenting, some Azure AD tenants have disabled user consent, which requires admins to consent on behalf of all users. To support this scenario, register your app's scopes.
After you sign in, the app will display the data returned from the Microsoft Graph `/me` endpoint.
active-directory Tutorial V2 Shared Device Mode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/tutorial-v2-shared-device-mode.md
In this tutorial:
> - Enable and detect shared-device mode > - Detect single or multiple account mode > - Detect a user switch, and enable global sign-in and sign-out
-> - Set up tenant and register the application in the Azure portal
+> - Set up tenant and register the application
> - Set up an Android device in shared-device mode > - Run the sample app
private void registerAccountChangeBroadcastReceiver(){
## Administrator guide
-The following steps describe setting up your application in the Azure portal and putting your device into shared-device mode.
+The following steps describe setting up your application and putting your device into shared-device mode.
-### Register your application in Azure Active Directory
+### Register the application
-First, register your application within your organizational tenant. Then provide these values below in auth_config.json in order for your application to run correctly.
+First, register the application within your organizational tenant. Then provide the values below in auth_config.json so that your application runs correctly.
-For information on how to do this, refer to [Register your application](./tutorial-v2-android.md#register-your-application-with-azure-ad).
+For information on how to do this, refer to [Register your application](./tutorial-v2-android.md).
> [!NOTE] > When you register your app, use the quickstart guide on the left-hand side and then select **Android**. This will lead you to a page where you'll be asked to provide the **Package Name** and **Signature Hash** for your app. These are very important to ensure your app configuration will work. You'll then receive a configuration object for your app that you can copy and paste into your auth_config.json file.
-You should select **Make this change for me** and then provide the values the quickstart asks for in the Azure portal. When that's done, we'll generate all the configuration files you need.
+You should select **Make this change for me** and then provide the values the quickstart asks for. When that's done, we'll generate all the configuration files you need.
## Set up a tenant
-For testing purposes, set up the following in your tenant: at least two employees, one Cloud Device Administrator, and one Global Administrator. In the Azure portal, set the Cloud Device Administrator by modifying Organizational Roles. In the Azure portal, access your Organizational Roles by selecting **Azure Active Directory** > **Roles and Administrators** > **Cloud Device Administrator**. Add the users that can put a device into shared mode.
+For testing purposes, set up the following in your tenant: at least two employees, one Cloud Device Administrator, and one Global Administrator. Set the Cloud Device Administrator by modifying Organizational Roles. Access your Organizational Roles by selecting **Identity** > **Roles & admins** > **Roles & admins** > **All roles**, and then select **Cloud Device Administrator**. Add the users that can put a device into shared mode.
## Set up an Android device in shared mode
The device is now in shared mode.
Any sign-ins and sign-outs on the device will be global, meaning they apply to all apps that are integrated with MSAL and Microsoft Authenticator on the device. You can now deploy applications to the device that use shared-device mode features.
-## View the shared device in the Azure portal
+## View the shared device
-Once you've put a device in shared-mode, it becomes known to your organization and is tracked in your organizational tenant. You can view your shared devices by looking at the **Join Type** in the Azure Active Directory blade of your Azure portal.
+Once you've put a device in shared mode, it becomes known to your organization and is tracked in your organizational tenant. You can identify your shared devices by looking at each device's **Join Type** in your tenant.
## Running the sample app
active-directory Tutorial V2 Windows Desktop https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/tutorial-v2-windows-desktop.md
In this tutorial:
> > - Create a _Windows Presentation Foundation (WPF)_ project in Visual Studio > - Install the Microsoft Authentication Library (MSAL) for .NET
-> - Register the application in the Azure portal
+> - Register the application
> - Add code to support user sign-in and sign-out > - Add code to call Microsoft Graph API > - Test the app
The sample application that you create with this guide enables a Windows Desktop
After the user is authenticated, the sample application receives a token you can use to query Microsoft Graph API or a web API that's secured by the Microsoft identity platform.
-APIs such as Microsoft Graph require a token to allow access to specific resources. For example, a token is required to read a user's profile, access a user's calendar, or send email. Your application can request an access token by using MSAL to access these resources by specifying API scopes. This access token is then added to the HTTP Authorization header for every call that's made against the protected resource.
+APIs such as Microsoft Graph require a token to allow access to specific resources. For example, a token is required to read a user's profile, access a user's calendar, or send email. Your application can request an access token by using MSAL to access these resources by specifying API scopes. This access token is then added to the HTTP Authorization header for every call that's made against the protected resource.
MSAL manages caching and refreshing access tokens for you, so that your application doesn't need to.
MSAL manages caching and refreshing access tokens for you, so that your applicat
This guide uses the following NuGet packages:
-| Library | Description |
-| - | - |
+| Library | Description |
+| - | -- |
| [Microsoft.Identity.Client](https://www.nuget.org/packages/Microsoft.Identity.Client) | Microsoft Authentication Library (MSAL.NET) | ## Set up your project
Create the application using the following steps:
[!INCLUDE [portal updates](~/articles/active-directory/includes/portal-update.md)]
-You can register your application in either of two ways.
-
-### Option 1: Express mode
-
-Use the following steps to register your application:
-
-1. Sign in to the <a href="https://portal.azure.com/#blade/Microsoft_AAD_RegisteredApps/applicationsListBlade/quickStartType/WinDesktopQuickstartPage/sourceType/docs" target="_blank">Azure portal - App registrations</a> quickstart experience.
-1. Enter a name for your application and select **Register**.
-1. Follow the instructions to download and automatically configure your new application.
-
-### Option 2: Advanced mode
-- To register and configure your application, follow these steps:
-1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>.
-1. If you have access to multiple tenants, use the **Directories + subscriptions** filter :::image type="icon" source="./media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to the tenant in which you want to register the application.
-1. Search for and select **Azure Active Directory**.
-1. Under **Manage**, select **App registrations** > **New registration**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Application Developer](../roles/permissions-reference.md#application-developer).
+1. If access to multiple tenants is available, use the **Directories + subscriptions** filter :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to the tenant in which you want to register the application.
+1. Browse to **Identity** > **Applications** > **App registrations**.
+1. Select **New registration**.
1. Enter a **Name** for your application, for example `Win-App-calling-MsGraph`. Users of your app might see this name, and you can change it later. 1. In the **Supported account types** section, select **Accounts in any organizational directory (Any Azure AD directory - Multitenant) and personal Microsoft accounts (e.g. Skype, Xbox)**. 1. Select **Register**.
In this section, you use MSAL to get a token for the Microsoft Graph API.
#### Get a user token interactively
-Calling the `AcquireTokenInteractive` method results in a window that prompts users to sign in. Applications usually require users to sign in interactively the first time they need to access a protected resource. They might also need to sign in when a silent operation to acquire a token fails (for example, when a user's password is expired).
+Calling the `AcquireTokenInteractive` method results in a window that prompts users to sign in. Applications usually require users to sign in interactively the first time they need to access a protected resource. They might also need to sign in when a silent operation to acquire a token fails (for example, when a user's password is expired).
#### Get a user token silently
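The two acquisition modes are typically combined: attempt silent acquisition first and fall back to the interactive prompt only when MSAL signals that interaction is required. A minimal sketch (MSAL.NET; `app` and `scopes` are assumed to exist from the earlier setup, with `System.Linq` in scope):

```csharp
// Try the token cache and refresh token first; prompt only when MSAL says it must.
var accounts = await app.GetAccountsAsync();
AuthenticationResult result;
try
{
    result = await app.AcquireTokenSilent(scopes, accounts.FirstOrDefault())
                      .ExecuteAsync();
}
catch (MsalUiRequiredException)
{
    // Silent acquisition failed (for example, an expired password); show the sign-in window.
    result = await app.AcquireTokenInteractive(scopes)
                      .ExecuteAsync();
}
```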
active-directory Tutorial V2 Windows Uwp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/tutorial-v2-windows-uwp.md
private async Task DisplayMessageAsync(string message)
Now, register your application:
-1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Application Developer](../roles/permissions-reference.md#application-developer).
1. If you have access to multiple tenants, use the **Directories + subscriptions** filter :::image type="icon" source="./media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to the tenant in which you want to register the application.
-1. Search for and select **Azure Active Directory**.
-1. Under **Manage**, select **App registrations** > **New registration**.
+1. Browse to **Identity** > **Applications** > **App registrations**.
+1. Select **New registration**.
1. Enter a **Name** for your application, for example `UWP-App-calling-MSGraph`. Users of your app might see this name, and you can change it later. 1. Under **Supported account types**, select **Accounts in any organizational directory (Any Azure AD directory - Multitenant) and personal Microsoft accounts (e.g. Skype, Xbox)**. 1. Select **Register**.
Now, register your application:
Configure authentication for your application:
-1. Back in the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>, under **Manage**, select **Authentication** > **Add a platform**, and then select **Mobile and desktop applications**.
+1. In the Microsoft Entra admin center, select **Authentication** > **Add a platform**, and then select **Mobile and desktop applications**.
1. In the **Redirect URIs** section, enter `https://login.microsoftonline.com/common/oauth2/nativeclient`. 1. Select **Configure**. Configure API permissions for your application:
-1. Under **Manage**, select **API permissions** > **Add a permission**.
+1. Select **API permissions** > **Add a permission**.
1. Select **Microsoft Graph**. 1. Select **Delegated permissions**, search for *User.Read*, and verify that **User.Read** is selected. 1. If you made any changes, select **Add permissions** to save them.
In the current sample, the `WithRedirectUri("https://login.microsoftonline.com/c
You can then remove the line of code because it's required only once, to fetch the value.
-3. In the app registration portal, add the returned value in **RedirectUri** in the **Authentication** pane.
+3. In the Microsoft Entra admin center, add the returned value in **RedirectUri** in the **Authentication** pane.
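Tying it together, a sketch of how that redirect URI typically shows up in the MSAL.NET client setup (the client ID is a placeholder):

```csharp
// The redirect URI must match the one added in the Authentication pane.
IPublicClientApplication app = PublicClientApplicationBuilder
    .Create("{client-id}")
    .WithRedirectUri("https://login.microsoftonline.com/common/oauth2/nativeclient")
    .Build();
```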
## Test your code
active-directory V2 App Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/v2-app-types.md
# Application types for the Microsoft identity platform
-The Microsoft identity platform supports authentication for various modern app architectures, all of them based on industry-standard protocols [OAuth 2.0 or OpenID Connect](./v2-protocols.md). This article describes the types of apps that you can build by using Microsoft identity platform, regardless of your preferred language or platform. The information is designed to help you understand high-level scenarios before you start working with the code in the [application scenarios](authentication-flows-app-scenarios.md#application-scenarios).
+The Microsoft identity platform supports authentication for various modern app architectures, all of them based on industry-standard protocols [OAuth 2.0 or OpenID Connect](./v2-protocols.md). This article describes the types of apps that you can build by using the Microsoft identity platform, regardless of your preferred language or platform. The information is designed to help you understand high-level scenarios before you start working with the code in the [application scenarios](authentication-flows-app-scenarios.md#application-types).
## The basics
active-directory V2 Oauth Ropc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/v2-oauth-ropc.md
Title: Sign in with resource owner password credentials grant
+ Title: Microsoft identity platform and OAuth 2.0 Resource Owner Password Credentials
description: Support browser-less authentication flows using the resource owner password credential (ROPC) grant.
Previously updated : 08/26/2022 Last updated : 08/11/2023
The Microsoft identity platform supports the [OAuth 2.0 Resource Owner Password
> [!WARNING] > Microsoft recommends you do _not_ use the ROPC flow. In most scenarios, more secure alternatives are available and recommended. This flow requires a very high degree of trust in the application, and carries risks that are not present in other flows. You should only use this flow when other more secure flows aren't viable. - > [!IMPORTANT] > > * The Microsoft identity platform only supports the ROPC grant within Azure AD tenants, not personal accounts. This means that you must use a tenant-specific endpoint (`https://login.microsoftonline.com/{TenantId_or_Name}`) or the `organizations` endpoint.
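For orientation only, the ROPC token request has roughly the following shape; all values are placeholders, and the warning above still applies:

```
POST https://login.microsoftonline.com/{tenant}/oauth2/v2.0/token
Content-Type: application/x-www-form-urlencoded

client_id={client-id}
&scope=user.read%20openid%20profile%20offline_access
&username={username@contoso.com}
&password={password}
&grant_type=password
```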
active-directory V2 Oauth2 Implicit Grant Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/v2-oauth2-implicit-grant-flow.md
Title: OAuth 2.0 implicit grant flow - The Microsoft identity platform
+ Title: Microsoft identity platform and OAuth 2.0 implicit grant flow
description: Secure single-page apps using Microsoft identity platform implicit flow.
Previously updated : 08/18/2022 Last updated : 08/11/2023
-# Microsoft identity platform and implicit grant flow
+# Microsoft identity platform and OAuth 2.0 implicit grant flow
The Microsoft identity platform supports the OAuth 2.0 implicit grant flow as described in the [OAuth 2.0 Specification](https://tools.ietf.org/html/rfc6749#section-4.2). The defining characteristic of the implicit grant is that tokens (ID tokens or access tokens) are returned directly from the /authorize endpoint instead of the /token endpoint. This is often used as part of the [authorization code flow](v2-oauth2-auth-code-flow.md), in what is called the "hybrid flow" - retrieving the ID token on the /authorize request along with an authorization code.
The following diagram shows what the entire implicit sign-in flow looks like and
To initially sign the user into your app, you can send an [OpenID Connect](v2-protocols-oidc.md) authentication request and get an `id_token` from the Microsoft identity platform. > [!IMPORTANT]
-> To successfully request an ID token and/or an access token, the app registration in the [Azure portal - App registrations](https://go.microsoft.com/fwlink/?linkid=2083908) page must have the corresponding implicit grant flow enabled, by selecting **ID tokens** and **access tokens** in the **Implicit grant and hybrid flows** section. If it's not enabled, an `unsupported_response` error will be returned: `The provided value for the input parameter 'response_type' is not allowed for this client. Expected value is 'code'`
+> To successfully request an ID token and/or an access token, the app registration in the [Azure portal - App registrations](https://go.microsoft.com/fwlink/?linkid=2083908) page must have the corresponding implicit grant flow enabled, by selecting **ID tokens** and **access tokens** in the **Implicit grant and hybrid flows** section. If it's not enabled, an `unsupported_response` error will be returned:
+>
+> `The provided value for the input parameter 'response_type' is not allowed for this client. Expected value is 'code'`
``` // Line breaks for legibility only
client_id=6731de76-14a6-49ae-97bc-6eba6914391e
| | | | | `tenant` | required |The `{tenant}` value in the path of the request can be used to control who can sign into the application. The allowed values are `common`, `organizations`, `consumers`, and tenant identifiers. For more detail, see [protocol basics](./v2-protocols.md#endpoints). Critically, for guest scenarios where you sign a user from one tenant into another tenant, you *must* provide the tenant identifier to correctly sign them into the resource tenant.| | `client_id` | required | The Application (client) ID that the [Azure portal - App registrations](https://go.microsoft.com/fwlink/?linkid=2083908) page assigned to your app. |
-| `response_type` | required |Must include `id_token` for OpenID Connect sign-in. It may also include the response_type `token`. Using `token` here will allow your app to receive an access token immediately from the authorize endpoint without having to make a second request to the authorize endpoint. If you use the `token` response_type, the `scope` parameter must contain a scope indicating which resource to issue the token for (for example, user.read on Microsoft Graph). It can also contain `code` in place of `token` to provide an authorization code, for use in the [authorization code flow](v2-oauth2-auth-code-flow.md). This id_token+code response is sometimes called the hybrid flow. |
-| `redirect_uri` | recommended |The redirect_uri of your app, where authentication responses can be sent and received by your app. It must exactly match one of the redirect_uris you registered in the portal, except it must be URL-encoded. |
-| `scope` | required |A space-separated list of [scopes](./permissions-consent-overview.md). For OpenID Connect (id_tokens), it must include the scope `openid`, which translates to the "Sign you in" permission in the consent UI. Optionally you may also want to include the `email` and `profile` scopes for gaining access to additional user data. You may also include other scopes in this request for requesting consent to various resources, if an access token is requested. |
+| `response_type` | required | Must include `id_token` for OpenID Connect sign-in. It may also include the `token` response type. Using `token` here will allow your app to receive an access token immediately from the authorize endpoint without having to make a second request to the authorize endpoint. If you use the `token` response type, the `scope` parameter must contain a scope indicating which resource to issue the token for (for example, `user.read` on Microsoft Graph). It can also contain `code` in place of `token` to provide an authorization code, for use in the [authorization code flow](v2-oauth2-auth-code-flow.md). This `id_token`+`code` response is sometimes called the hybrid flow. |
+| `redirect_uri` | recommended |The redirect URI of your app, where authentication responses can be sent and received by your app. It must exactly match one of the redirect URIs you registered in the portal, except it must be URL-encoded. |
+| `scope` | required |A space-separated list of [scopes](./permissions-consent-overview.md). For OpenID Connect (`id_tokens`), it must include the scope `openid`, which translates to the "Sign you in" permission in the consent UI. Optionally you may also want to include the `email` and `profile` scopes for gaining access to additional user data. You may also include other scopes in this request for requesting consent to various resources, if an access token is requested. |
| `response_mode` | optional | Specifies the method that should be used to send the resulting token back to your app. Defaults to query for just an access token, but fragment if the request includes an id_token. |
| `state` | recommended | A value included in the request that will also be returned in the token response. It can be a string of any content that you wish. A randomly generated unique value is typically used for [preventing cross-site request forgery attacks](https://tools.ietf.org/html/rfc6749#section-10.12). The state is also used to encode information about the user's state in the app before the authentication request occurred, such as the page or view they were on. |
-| `nonce` | required |A value included in the request, generated by the app, that will be included in the resulting id_token as a claim. The app can then verify this value to mitigate token replay attacks. The value is typically a randomized, unique string that can be used to identify the origin of the request. Only required when an id_token is requested. |
-| `prompt` | optional |Indicates the type of user interaction that is required. The only valid values at this time are 'login', 'none', 'select_account', and 'consent'. `prompt=login` will force the user to enter their credentials on that request, negating single-sign on. `prompt=none` is the opposite - it will ensure that the user isn't presented with any interactive prompt whatsoever. If the request can't be completed silently via single-sign on, the Microsoft identity platform will return an error. `prompt=select_account` sends the user to an account picker where all of the accounts remembered in the session will appear. `prompt=consent` will trigger the OAuth consent dialog after the user signs in, asking the user to grant permissions to the app. |
+| `nonce` | required |A value included in the request, generated by the app, that will be included in the resulting ID token as a claim. The app can then verify this value to mitigate token replay attacks. The value is typically a randomized, unique string that can be used to identify the origin of the request. Only required when an id_token is requested. |
+| `prompt` | optional | Indicates the type of user interaction that is required. The only valid values at this time are `login`, `none`, `select_account`, and `consent`. `prompt=login` will force the user to enter their credentials on that request, negating single sign-on. `prompt=none` is the opposite - it will ensure that the user isn't presented with any interactive prompt whatsoever. If the request can't be completed silently via SSO, the Microsoft identity platform will return an error. `prompt=select_account` sends the user to an account picker where all of the accounts remembered in the session will appear. `prompt=consent` will trigger the OAuth consent dialog after the user signs in, asking the user to grant permissions to the app. |
| `login_hint` | optional | You can use this parameter to pre-fill the username and email address field of the sign-in page for the user, if you know the username ahead of time. Often, apps use this parameter during reauthentication, after already extracting the `login_hint` [optional claim](./optional-claims.md) from an earlier sign-in. |
| `domain_hint` | optional | If included, it will skip the email-based discovery process that the user goes through on the sign-in page, leading to a slightly more streamlined user experience. This parameter is commonly used for Line of Business apps that operate in a single tenant, where they'll provide a domain name within a given tenant, forwarding the user to the federation provider for that tenant. This hint prevents guests from signing into this application, and limits the use of cloud credentials like FIDO. |
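To make these parameters concrete, here's a minimal browser-side sketch (TypeScript) that assembles an implicit-grant authorize request from the values above. The `client_id` and `redirect_uri` are the sample values used on this page; the scopes and the `expectedState` storage key are illustrative assumptions, not prescribed names.

```typescript
// Minimal sketch: build the /authorize URL for the implicit grant.
// client_id and redirect_uri are the sample values from this page;
// replace them with your own app registration's values.
const authorizeUrl = new URL("https://login.microsoftonline.com/common/oauth2/v2.0/authorize");

const state = crypto.randomUUID();              // CSRF protection; verified on return
sessionStorage.setItem("expectedState", state); // illustrative storage key

const params: Record<string, string> = {
  client_id: "6731de76-14a6-49ae-97bc-6eba6914391e",
  response_type: "id_token token",              // an ID token plus an access token
  redirect_uri: "http://localhost/myapp/",      // must match a registered redirect URI
  scope: "openid https://graph.microsoft.com/user.read",
  response_mode: "fragment",                    // tokens come back in the URL fragment
  state,
  nonce: crypto.randomUUID(),                   // echoed back as the id_token's nonce claim
};

for (const [key, value] of Object.entries(params)) {
  authorizeUrl.searchParams.set(key, value);
}

// Redirect the browser to start sign-in.
window.location.assign(authorizeUrl.toString());
```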
code=0.AgAAktYV-sfpYESnQynylW_UKZmH-C9y_G1A
| Parameter | Description |
| --- | --- |
| `code` | Included if `response_type` includes `code`. It's an authorization code suitable for use in the [authorization code flow](v2-oauth2-auth-code-flow.md). |
| `access_token` | Included if `response_type` includes `token`. The access token that the app requested. The access token shouldn't be decoded or otherwise inspected, it should be treated as an opaque string. |
-| `token_type` |Included if `response_type` includes `token`. Will always be `Bearer`. |
+| `token_type` |Included if `response_type` includes `token`. This will always be `Bearer`. |
| `expires_in` | Included if `response_type` includes `token`. Indicates the number of seconds the token is valid, for caching purposes. |
| `scope` | Included if `response_type` includes `token`. Indicates the scope(s) for which the access_token will be valid. May not include all the requested scopes if they weren't applicable to the user. For example, Azure AD-only scopes requested when logging in using a personal account. |
-| `id_token` | A signed JSON Web Token (JWT). The app can decode the segments of this token to request information about the user who signed in. The app can cache the values and display them, but it shouldn't rely on them for any authorization or security boundaries. For more information about id_tokens, see the [`id_token reference`](id-tokens.md). <br> **Note:** Only provided if `openid` scope was requested and `response_type` included `id_tokens`. |
+| `id_token` | A signed JSON Web Token (JWT). The app can decode the segments of this token to request information about the user who signed in. The app can cache the values and display them, but it shouldn't rely on them for any authorization or security boundaries. For more information about ID tokens, see the [`id_token` reference](id-tokens.md). <br> **Note:** Only provided if `openid` scope was requested and `response_type` included `id_token`. |
| `state` | If a state parameter is included in the request, the same value should appear in the response. The app should verify that the state values in the request and response are identical. |

[!INCLUDE [remind-not-to-validate-access-tokens](includes/remind-not-to-validate-access-tokens.md)]
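Because `response_mode=fragment` returns these values after a `#` on the redirect URI, the app has to read them client-side. A minimal sketch, reusing the illustrative `expectedState` key from the earlier example:

```typescript
// Minimal sketch: parse the fragment the authorize endpoint appends
// to the redirect_uri when response_mode=fragment is used.
const fragment = new URLSearchParams(window.location.hash.substring(1));

if (fragment.get("error")) {
  // e.g. interaction_required from a prompt=none request
  console.error(fragment.get("error"), fragment.get("error_description"));
} else {
  const accessToken = fragment.get("access_token"); // opaque string; don't decode
  const idToken = fragment.get("id_token");

  // The app must verify that state matches the value it sent.
  if (fragment.get("state") !== sessionStorage.getItem("expectedState")) {
    throw new Error("State mismatch; discard this response.");
  }
}
```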
For details on the query parameters in the URL, see [send the sign in request](#
> [!TIP]
> Try copying and pasting the following request into a browser tab. (Don't forget to replace the `login_hint` value with the correct value for your user.)
>
->`https://login.microsoftonline.com/common/oauth2/v2.0/authorize?client_id=6731de76-14a6-49ae-97bc-6eba6914391e&response_type=token&redirect_uri=http%3A%2F%2Flocalhost%2Fmyapp%2F&scope=https%3A%2F%2Fgraph.microsoft.com%2Fuser.read&response_mode=fragment&state=12345&nonce=678910&prompt=none&login_hint={your-username}`
+> ```
+> https://login.microsoftonline.com/common/oauth2/v2.0/authorize?client_id=6731de76-14a6-49ae-97bc-6eba6914391e&response_type=token&redirect_uri=http%3A%2F%2Flocalhost%2Fmyapp%2F&scope=https%3A%2F%2Fgraph.microsoft.com%2Fuser.read&response_mode=fragment&state=12345&nonce=678910&prompt=none&login_hint={your-username}
+> ```
>
> Note that this will work even in browsers without third-party cookie support, since you're entering the URL directly into the browser's address bar as opposed to opening it within an iframe.
access_token=eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiIsIng1dCI6Ik5HVEZ2ZEstZnl0aEV1Q..
| Parameter | Description |
| --- | --- |
| `access_token` | Included if `response_type` includes `token`. The access token that the app requested, in this case for the Microsoft Graph. The access token shouldn't be decoded or otherwise inspected, it should be treated as an opaque string. |
-| `token_type` | Will always be `Bearer`. |
+| `token_type` | This will always be `Bearer`. |
| `expires_in` | Indicates the number of seconds the token is valid, for caching purposes. |
-| `scope` | Indicates the scope(s) for which the access_token will be valid. May not include all of the scopes requested, if they weren't applicable to the user (in the case of Azure AD-only scopes being requested when a personal account is used to log in). |
+| `scope` | Indicates the scope(s) for which the access token will be valid. May not include all of the scopes requested, if they weren't applicable to the user (in the case of Azure AD-only scopes being requested when a personal account is used to log in). |
| `id_token` | A signed JSON Web Token (JWT). Included if `response_type` includes `id_token`. The app can decode the segments of this token to request information about the user who signed in. The app can cache the values and display them, but it shouldn't rely on them for any authorization or security boundaries. For more information about ID tokens, see the [`id_token` reference](id-tokens.md). <br> **Note:** Only provided if `openid` scope was requested. |
| `state` | If a state parameter is included in the request, the same value should appear in the response. The app should verify that the state values in the request and response are identical. |
If you receive this error in the iframe request, the user must interactively sig
## Refreshing tokens
-The implicit grant does not provide refresh tokens. Both `id_token`s and `access_token`s will expire after a short period of time, so your app must be prepared to refresh these tokens periodically. To refresh either type of token, you can perform the same hidden iframe request from above using the `prompt=none` parameter to control the identity platform's behavior. If you want to receive a new `id_token`, be sure to use `id_token` in the `response_type` and `scope=openid`, as well as a `nonce` parameter.
+The implicit grant does not provide refresh tokens. Both ID tokens and access tokens will expire after a short period of time, so your app must be prepared to refresh these tokens periodically. To refresh either type of token, you can perform the same hidden iframe request from above using the `prompt=none` parameter to control the identity platform's behavior. If you want to receive a new ID token, be sure to use `id_token` in the `response_type` and `scope=openid`, as well as a `nonce` parameter.
In browsers that do not support third-party cookies, this will result in an error indicating that no user is signed in.
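A minimal sketch of that hidden-iframe renewal, assuming the authorize URL is built as in the earlier example with `prompt=none` appended and that the page at your redirect URI posts its fragment back to the parent; in practice, libraries such as MSAL.js implement this renewal logic for you.

```typescript
// Minimal sketch: renew a token silently in a hidden iframe with prompt=none.
// renewUrl is an authorize URL (as built earlier) plus prompt=none and a fresh nonce.
function renewTokenSilently(renewUrl: string): void {
  const iframe = document.createElement("iframe");
  iframe.style.display = "none";
  iframe.src = renewUrl;

  // Assumes the page loaded at redirect_uri reads its fragment and does
  // window.parent.postMessage(location.hash, window.location.origin).
  window.addEventListener("message", function handler(event: MessageEvent) {
    if (event.origin !== window.location.origin) return; // trust only our own frames
    window.removeEventListener("message", handler);
    iframe.remove();
    console.log("renewed token response fragment:", event.data);
  });

  document.body.appendChild(iframe);
}
```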
active-directory V2 Oauth2 On Behalf Of Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/v2-oauth2-on-behalf-of-flow.md
This access token is a v1.0-formatted token for Microsoft Graph. This is because
An error response is returned by the token endpoint when trying to acquire an access token for the downstream API, if the downstream API has a Conditional Access policy (such as [multifactor authentication](../authentication/concept-mfa-howitworks.md)) set on it. The middle-tier service should surface this error to the client application so that the client application can provide the user interaction to satisfy the Conditional Access policy.
+To [surface this error back](https://datatracker.ietf.org/doc/html/rfc6750#section-3.1) to the client, the middle-tier service will reply with HTTP 401 Unauthorized and with a WWW-Authenticate HTTP header containing the error and the claims challenge. The client must parse this header and acquire a new token from the token issuer, by presenting the claims challenge if one exists. Clients shouldn't retry accessing the middle-tier service using a cached access token.
+ ```json
+ { "error":"interaction_required",
+ ```
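As a rough illustration of that contract, here's a hedged middle-tier sketch (TypeScript with Express, which this article doesn't prescribe); `acquireDownstreamToken` and `callDownstreamApi` are hypothetical helpers standing in for your on-behalf-of token acquisition and downstream call.

```typescript
import express from "express";

// Hypothetical helpers standing in for MSAL's on-behalf-of token
// acquisition and the downstream API call.
declare function acquireDownstreamToken(assertion: string): Promise<string>;
declare function callDownstreamApi(token: string): Promise<unknown>;

const app = express();

app.get("/api/resource", async (req, res) => {
  const assertion = (req.headers.authorization ?? "").replace("Bearer ", "");
  try {
    const token = await acquireDownstreamToken(assertion);
    res.json(await callDownstreamApi(token));
  } catch (err: any) {
    if (err.error === "interaction_required") {
      // Surface the error and claims challenge per RFC 6750 so the client
      // can re-authenticate interactively; don't swallow it as a 500.
      res
        .status(401)
        .set("WWW-Authenticate", `Bearer error="${err.error}", claims="${err.claims}"`)
        .end();
      return;
    }
    res.status(500).end();
  }
});
```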
active-directory Web Api Tutorial 01 Register App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/web-api-tutorial-01-register-app.md
In this tutorial:
To complete registration, provide a name for the application and specify the supported account types. Once registered, the application **Overview** page will display the identifiers needed in the application source code.
-1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Application Developer](../roles/permissions-reference.md#application-developer).
1. If access to multiple tenants is available, use the **Directories + subscriptions** filter :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to the tenant in which you want to register the application.
-1. Search for and select **Azure Active Directory**.
-1. Under **Manage**, select **App registrations > New registration**.
+1. Browse to **Identity** > **Applications** > **App registrations**.
+1. Select **New registration**.
1. Enter a **Name** for the application, such as *NewWebAPI1*.
1. For **Supported account types**, select **Accounts in this organizational directory only**. For information on different account types, select the **Help me choose** option.
1. Select **Register**.
active-directory Web App Tutorial 01 Register Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/web-app-tutorial-01-register-application.md
In this tutorial:
To complete registration, provide a name for the application and specify the supported account types. Once registered, the application **Overview** page will display the identifiers needed in the application source code.
-1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Application Developer](../roles/permissions-reference.md#application-developer).
1. If access to multiple tenants is available, use the **Directories + subscriptions** filter :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to the tenant in which you want to register the application.
-1. Search for and select **Azure Active Directory**.
-1. Under **Manage**, select **App registrations > New registration**.
+1. Browse to **Identity** > **Applications** > **App registrations**.
+1. Select **New registration**.
1. Enter a **Name** for the application, such as *NewWebApp1*.
1. For **Supported account types**, select **Accounts in this organizational directory only**. For information on different account types, select the **Help me choose** option.
   - The **Redirect URI (optional)** will be configured at a later stage.
active-directory Whats New Docs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/whats-new-docs.md
Previously updated : 08/01/2023 Last updated : 09/04/2023
Welcome to what's new in the Microsoft identity platform documentation. This article lists new docs that have been added and those that have had significant updates in the last three months.
+## August 2023
+
+### Updated articles
+
+- [Call an ASP.NET Core web API with cURL](howto-call-a-web-api-with-curl.md) - Updated sign-in steps for admin center
+- [Troubleshoot publisher verification](troubleshoot-publisher-verification.md) - Removed references to aad.portal.azure.com and terminology updates for partner program updates
+- [Configure a custom claim provider token issuance event (preview)](custom-extension-get-started.md) - Updated MS Graph sections - custom claim provider token issuance event tutorial and custom authentication extensions references
+- [Customize claims issued in the JSON web token (JWT) for enterprise applications](jwt-claims-customization.md) - Updated sign-in steps for admin center
+- [Access tokens in the Microsoft identity platform](access-tokens.md) - Updated details about issuer validation
+
## July 2023

### New articles
Welcome to what's new in the Microsoft identity platform documentation. This art
- [Tokens and claims overview](security-tokens.md) - Editorial review of security tokens
- [Tutorial: Sign in users and call Microsoft Graph from an iOS or macOS app](tutorial-v2-ios.md) - Editorial review
- [What's new for authentication?](reference-breaking-changes.md) - Identity breaking change: omission of unverified emails by default
-
-## May 2023
-
-### New articles
-
-- [Access token claims reference](access-token-claims-reference.md)
-- [Directory extension attributes in claims](schema-extensions.md)
-- [Provide optional claims to your app](optional-claims.md)
-
-### Updated articles
-
-- [Application and service principal objects in Azure Active Directory](app-objects-and-service-principals.md)
-- [What's new for authentication?](reference-breaking-changes.md)
-- [A web app that calls web APIs: Acquire a token for the app](scenario-web-app-call-api-acquire-token.md)
-- [A web app that calls web APIs: Code configuration](scenario-web-app-call-api-app-configuration.md)
-- [A web app that calls web APIs: Call a web API](scenario-web-app-call-api-call-api.md)
-- [A web API that calls web APIs: Acquire a token for the app](scenario-web-api-call-api-acquire-token.md)
-- [A web API that calls web APIs: Code configuration](scenario-web-api-call-api-app-configuration.md)
-- [A web API that calls web APIs: Call an API](scenario-web-api-call-api-call-api.md)
-- [Confidential client assertions](msal-net-client-assertions.md)
-- [Customize claims issued in the JSON web token (JWT) for enterprise applications (Preview)](jwt-claims-customization.md)
-- [Customize claims issued in the SAML token for enterprise applications](saml-claims-customization.md)
-- [Desktop app that calls web APIs: Acquire a token by using WAM](scenario-desktop-acquire-token-wam.md)
-- [Desktop app that calls web APIs: Acquire a token interactively](scenario-desktop-acquire-token-interactive.md)
-- [Handle errors and exceptions in MSAL for Python](msal-error-handling-python.md)
-- [Protected web API: Code configuration](scenario-protected-web-api-app-configuration.md)
-- [Shared device mode for iOS devices](msal-ios-shared-devices.md)
-- [Tutorial: Sign in users and call the Microsoft Graph API from an Android application](tutorial-v2-android.md)
-- [Tutorial: Sign in users and call the Microsoft Graph API from an Angular single-page application (SPA) using auth code flow](tutorial-v2-angular-auth-code.md)
-- [Web app that signs in users: Code configuration](scenario-web-app-sign-user-app-configuration.md)
active-directory Assign Local Admin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/assign-local-admin.md
Previously updated : 10/27/2022 Last updated : 08/16/2023
When you connect a Windows device with Azure AD using an Azure AD join, Azure AD
- The Azure AD joined device local administrator role
- The user performing the Azure AD join
-By adding Azure AD roles to the local administrators group, you can update the users that can manage a device anytime in Azure AD without modifying anything on the device. Azure AD also adds the Azure AD joined device local administrator role to the local administrators group to support the principle of least privilege (PoLP). In addition to the global administrators, you can also enable users that have been *only* assigned the device administrator role to manage a device.
+By adding Azure AD roles to the local administrators group, you can update the users that can manage a device anytime in Azure AD without modifying anything on the device. Azure AD also adds the Azure AD joined device local administrator role to the local administrators group to support the principle of least privilege (PoLP). In addition to users with the Global Administrator role, you can also enable users that have been *only* assigned the Azure AD Joined Device Local Administrator role to manage a device.
-## Manage the global administrators role
+## Manage the Global Administrator role
-To view and update the membership of the Global Administrator role, see:
+To view and update the membership of the [Global Administrator](/azure/active-directory/roles/permissions-reference#global-administrator) role, see:
- [View all members of an administrator role in Azure Active Directory](../roles/manage-roles-portal.md) - [Assign a user to administrator roles in Azure Active Directory](../fundamentals/how-subscriptions-associated-directory.md)
-## Manage the device administrator role
+## Manage the Azure AD Joined Device Local Administrator role
+You can manage the [Azure AD Joined Device Local Administrator](/azure/active-directory/roles/permissions-reference#azure-ad-joined-device-local-administrator) role from **Device settings**.
-In the Azure portal, you can manage the device administrator role from **Device settings**.
-
-1. Sign in to the [Azure portal](https://portal.azure.com) as a Global Administrator.
-1. Browse to **Azure Active Directory** > **Devices** > **Device settings**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Device Administrator](../roles/permissions-reference.md#cloud-device-administrator).
+1. Browse to **Identity** > **Devices** > **All devices** > **Device settings**.
1. Select **Manage Additional local administrators on all Azure AD joined devices**.
1. Select **Add assignments** then choose the other administrators you want to add and select **Add**.
-To modify the device administrator role, configure **Additional local administrators on all Azure AD joined devices**.
+To modify the Azure AD Joined Device Local Administrator role, configure **Additional local administrators on all Azure AD joined devices**.
> [!NOTE]
> This option requires Azure AD Premium licenses.
-Device administrators are assigned to all Azure AD Joined devices. You can't scope device administrators to a specific set of devices. Updating the device administrator role doesn't necessarily have an immediate impact on the affected users. On devices where a user is already signed into, the privilege elevation takes place when *both* the below actions happen:
+Azure AD Joined Device Local Administrators are assigned to all Azure AD joined devices. You can't scope this role to a specific set of devices. Updating the Azure AD Joined Device Local Administrator role doesn't necessarily have an immediate impact on the affected users. On devices where a user is already signed in, the privilege elevation takes place when *both* of the below actions happen:
- Up to 4 hours have passed for Azure AD to issue a new Primary Refresh Token with the appropriate privileges.
- User signs out and signs back in, not lock/unlock, to refresh their profile.
-Users won't be listed in the local administrator group, the permissions are received through the Primary Refresh Token.
+Users aren't directly listed in the local administrator group, the permissions are received through the Primary Refresh Token.
> [!NOTE]
> The above actions are not applicable to users who have not signed in to the relevant device previously. In this case, the administrator privileges are applied immediately after their first sign-in to the device.

## Manage administrator privileges using Azure AD groups (preview)
-Starting with Windows 10 version 20H2, you can use Azure AD groups to manage administrator privileges on Azure AD joined devices with the [Local Users and Groups](/windows/client-management/mdm/policy-csp-localusersandgroups) MDM policy. This policy allows you to assign individual users or Azure AD groups to the local administrators group on an Azure AD joined device, providing you the granularity to configure distinct administrators for different groups of devices.
+Starting with Windows 10 version 20H2, you can use Azure AD groups to manage administrator privileges on Azure AD joined devices with the [Local Users and Groups](/windows/client-management/mdm/policy-csp-localusersandgroups) MDM policy. This policy allows you to assign individual users or Azure AD groups to the local administrators group on an Azure AD joined device, providing you with the granularity to configure distinct administrators for different groups of devices.
Organizations can use Intune to manage these policies using [Custom OMA-URI Settings](/mem/intune/configuration/custom-settings-windows-10) or [Account protection policy](/mem/intune/protect/endpoint-security-account-protection-policy). A few considerations for using this policy:

-- Adding Azure AD groups through the policy requires the group's SID that can be obtained by executing the [Microsoft Graph API for Groups](/graph/api/resources/group). The SID is defined by the property `securityIdentifier` in the API response.
+- Adding Azure AD groups through the policy requires the group's SID that can be obtained by executing the [Microsoft Graph API for Groups](/graph/api/resources/group). The SID equates to the property `securityIdentifier` in the API response (see the sketch after this list).
- Administrator privileges using this policy are evaluated only for the following well-known groups on a Windows 10 or newer device - Administrators, Users, Guests, Power Users, Remote Desktop Users and Remote Management Users.
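As mentioned in the first consideration above, the SID comes from the Graph group resource. A minimal sketch of that lookup (TypeScript); the `groupId` value and the token acquisition are assumed to come from your own app:

```typescript
// Minimal sketch: fetch a group's SID (securityIdentifier) from Microsoft
// Graph for use in the LocalUsersAndGroups policy payload. groupId and
// accessToken are placeholders supplied by your app.
async function getGroupSid(groupId: string, accessToken: string): Promise<string> {
  const response = await fetch(
    `https://graph.microsoft.com/v1.0/groups/${groupId}?$select=displayName,securityIdentifier`,
    { headers: { Authorization: `Bearer ${accessToken}` } }
  );
  if (!response.ok) {
    throw new Error(`Graph request failed: ${response.status}`);
  }
  const group: { securityIdentifier: string } = await response.json();
  return group.securityIdentifier; // e.g. "S-1-12-1-..."
}
```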
By default, Azure AD adds the user performing the Azure AD join to the administr
- [Windows Autopilot](/windows/deployment/windows-autopilot/windows-10-autopilot) - Windows Autopilot provides you with an option to prevent the primary user performing the join from becoming a local administrator by [creating an Autopilot profile](/intune/enrollment-autopilot#create-an-autopilot-deployment-profile).
-- [Bulk enrollment](/intune/windows-bulk-enroll) - An Azure AD join that is performed in the context of a bulk enrollment happens in the context of an auto-created user. Users signing in after a device has been joined aren't added to the administrators group.
+- [Bulk enrollment](/intune/windows-bulk-enroll) - An Azure AD join that is performed in the context of a bulk enrollment happens in the context of an autocreated user. Users signing in after a device has been joined aren't added to the administrators group.
## Manually elevate a user on a device
Additionally, you can also add users using the command prompt:
## Considerations

-- You can only assign role based groups to the device administrator role.
-- Device administrators are assigned to all Azure AD Joined devices. They can't be scoped to a specific set of devices.
+- You can only assign role-based groups to the Azure AD Joined Device Local Administrator role.
+- The Azure AD Joined Device Local Administrator role is assigned to all Azure AD Joined devices. This role can't be scoped to a specific set of devices.
- Local administrator rights on Windows devices aren't applicable to [Azure AD B2B guest users](../external-identities/what-is-b2b.md).
-- When you remove users from the device administrator role, changes aren't instant. Users still have local administrator privilege on a device as long as they're signed in to it. The privilege is revoked during their next sign-in when a new primary refresh token is issued. This revocation, similar to the privilege elevation, could take upto 4 hours.
+- When you remove users from the Azure AD Joined Device Local Administrator role, changes aren't instant. Users still have local administrator privilege on a device as long as they're signed in to it. The privilege is revoked during their next sign-in when a new primary refresh token is issued. This revocation, similar to the privilege elevation, could take up to 4 hours.
## Next steps

-- To get an overview of how to manage device in the Azure portal, see [managing devices using the Azure portal](manage-device-identities.md).
+- To get an overview of how to manage devices, see [managing devices using the Azure portal](manage-device-identities.md).
- To learn more about device-based Conditional Access, see [Conditional Access: Require compliant or hybrid Azure AD joined device](../conditional-access/howto-conditional-access-policy-compliant-device.md).
active-directory Concept Primary Refresh Token https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/concept-primary-refresh-token.md
The following diagrams illustrate the underlying details in issuing, renewing, a
> [!NOTE]
> On Azure AD joined devices, Azure AD PRT issuance (steps A-F) happens synchronously before the user can log on to Windows. On hybrid Azure AD joined devices, on-premises Active Directory is the primary authority, so the user is able to log on to hybrid Azure AD joined Windows as soon as they can acquire a TGT, while the PRT issuance happens asynchronously. This scenario does not apply to Azure AD registered devices, as logon does not use Azure AD credentials.
+> [!NOTE]
+> In a hybrid Azure AD joined Windows environment, the issuance of the PRT occurs asynchronously. The issuance of the PRT may fail due to issues with the federation provider. This failure can result in sign-in issues when users try to access cloud resources. It is important to troubleshoot this scenario with the federation provider.
+
| Step | Description |
| :---: | --- |
| A | User enters their password in the sign-in UI. LogonUI passes the credentials in an auth buffer to LSA, which in turn passes it internally to CloudAP. CloudAP forwards this request to the CloudAP plugin. |
active-directory Device Join Out Of Box https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/device-join-out-of-box.md
Your device may restart several times as part of the setup process. Your device
:::image type="content" source="media/device-join-out-of-box/windows-11-first-run-experience-device-sign-in-info.png" alt-text="Screenshot of Windows 11 out-of-box experience showing the sign-in experience.":::
1. Continue to follow the prompts to set up your device.
1. Azure AD checks if an enrollment in mobile device management is required and starts the process.
 - 1. Windows registers the device in the organization's directory in Azure AD and enrolls it in mobile device management, if applicable.
 + 1. Windows registers the device in the organization's directory and enrolls it in mobile device management, if applicable.
1. If you sign in with a managed user account, Windows takes you to the desktop through the automatic sign-in process. Federated users are directed to the Windows sign-in screen to enter their credentials.

   :::image type="content" source="media/device-join-out-of-box/windows-11-first-run-experience-complete-automatic-sign-in-desktop.png" alt-text="Screenshot of Windows 11 at the desktop after first run experience Azure AD joined.":::
To verify whether a device is joined to your Azure AD, review the **Access work
## Next steps

-- For more information about managing devices in the Azure portal, see [managing devices using the Azure portal](manage-device-identities.md).
+- For more information about managing devices, see [managing devices using the Azure portal](manage-device-identities.md).
- [What is Microsoft Intune?](/mem/intune/fundamentals/what-is-intune)
- [Overview of Windows Autopilot](/mem/autopilot/windows-autopilot)
- [Passwordless authentication options for Azure Active Directory](../authentication/concept-authentication-passwordless.md)
active-directory Enterprise State Roaming Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/enterprise-state-roaming-enable.md
Enterprise State Roaming provides users with a unified experience across their W
## To enable Enterprise State Roaming
-1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Browse to **Azure Active Directory** > **Devices** > **Enterprise State Roaming**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as a [Global Administrator](../roles/permissions-reference.md#global-administrator).
+1. Browse to **Identity** > **Devices** > **Overview** > **Enterprise State Roaming**.
1. Select **Users may sync settings and app data across devices**. For more information, see [how to configure device settings](./manage-device-identities.md).

For a Windows 10 or newer device to use the Enterprise State Roaming service, the device must authenticate using an Azure AD identity. For devices that are joined to Azure AD, the user's primary sign-in identity is their Azure AD identity, so no other configuration is required. For devices that use on-premises Active Directory, the IT admin must [Configure hybrid Azure Active Directory joined devices](./hybrid-join-plan.md).
The country/region value is set as part of the Azure AD directory creation proce
Follow these steps to view a per-user device sync status report.
-1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Browse to **Azure Active Directory** > **Users** > **All users**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as a [Global Administrator](../roles/permissions-reference.md#global-administrator).
+1. Browse to **Identity** > **Users** > **All users**.
1. Select the user, and then select **Devices**.
1. Select **View devices syncing settings and app data** to show sync status.
1. Devices syncing for the user are shown and can be downloaded.
active-directory Enterprise State Roaming Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/enterprise-state-roaming-troubleshooting.md
Enterprise State Roaming requires the device to be registered with Azure AD. Alt
**Potential issue**: **WamDefaultSet** and **AzureAdJoined** both have "NO" in the field value, the device was domain-joined and registered with Azure AD, and the device doesn't sync. If both fields show "NO", the device may still be waiting for policy to be applied, or the authentication for the device failed when connecting to Azure AD. The user may have to wait a few hours for the policy to be applied. Other troubleshooting steps may include retrying autoregistration by signing out and back in, or launching the task in Task Scheduler. In some cases, running "*dsregcmd.exe /leave*" in an elevated command prompt window, rebooting, and trying registration again may help with this issue.
-**Potential issue**: The field for **SettingsUrl** is empty and the device doesn't sync. The user may have last logged in to the device before Enterprise State Roaming was enabled in the Azure portal. Restart the device and have the user log in. Optionally, in the portal, try having the IT Admin navigate to **Azure Active Directory** > **Devices** > **Enterprise State Roaming** to disable and re-enable **Users may sync settings and app data across devices**. Once re-enabled, restart the device and have the user log in. If this doesn't resolve the issue, **SettingsUrl** may be empty if there's a bad device certificate. In this case, running "*dsregcmd.exe /leave*" in an elevated command prompt window, rebooting, and trying registration again may help with this issue.
+**Potential issue**: The field for **SettingsUrl** is empty and the device doesn't sync. The user may have last logged in to the device before Enterprise State Roaming was enabled. Restart the device and have the user log in. Optionally, in the portal, try having the IT Admin navigate to **Azure Active Directory** > **Devices** > **Enterprise State Roaming** to disable and re-enable **Users may sync settings and app data across devices**. Once re-enabled, restart the device and have the user log in. If this doesn't resolve the issue, **SettingsUrl** may be empty if there's a bad device certificate. In this case, running "*dsregcmd.exe /leave*" in an elevated command prompt window, rebooting, and trying registration again may help with this issue.
## Enterprise State Roaming and multifactor authentication
active-directory How To Hybrid Join Verify https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/how-to-hybrid-join-verify.md
description: Verify configurations for hybrid Azure AD joined devices
+ Last updated 02/27/2023
For downlevel devices, see the article [Troubleshooting hybrid Azure Active Dire
## Using the Azure portal
-1. Go to the devices page using a [direct link](https://portal.azure.com/#blade/Microsoft_AAD_IAM/DevicesMenuBlade/Devices).
-2. Information on how to locate a device can be found in [How to manage device identities using the Azure portal](./manage-device-identities.md).
-3. If the **Registered** column says **Pending**, then hybrid Azure AD join hasn't completed. In federated environments, this state happens only if it failed to register and Azure AD Connect is configured to sync the devices. Wait for Azure AD Connect to complete a sync cycle.
-4. If the **Registered** column contains a **date/time**, then hybrid Azure AD join has completed.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Device Administrator](../roles/permissions-reference.md#cloud-device-administrator).
+1. Browse to **Identity** > **Devices** > **All devices**.
+1. If the **Registered** column says **Pending**, then hybrid Azure AD join hasn't completed. In federated environments, this state happens only if it failed to register and Azure AD Connect is configured to sync the devices. Wait for Azure AD Connect to complete a sync cycle.
+1. If the **Registered** column contains a **date/time**, then hybrid Azure AD join has completed.
## Using PowerShell
active-directory Howto Manage Local Admin Passwords https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/howto-manage-local-admin-passwords.md
> [!IMPORTANT]
> Azure AD support for Windows Local Administrator Password Solution is currently in preview.
-> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+> For more information about previews, see [Universal License Terms For Online Services](https://www.microsoft.com/licensing/terms/product/ForOnlineServices/all).
Every Windows device comes with a built-in local administrator account that you must secure and protect to mitigate any Pass-the-Hash (PtH) and lateral traversal attacks. Many customers have been using our standalone, on-premises [Local Administrator Password Solution (LAPS)](https://www.microsoft.com/download/details.aspx?id=46899) product for local administrator password management of their domain joined Windows machines. With Azure AD support for Windows LAPS, we're providing a consistent experience for both Azure AD joined and hybrid Azure AD joined devices.
Other than the built-in Azure AD roles of Cloud Device Administrator, Intune Adm
To enable Windows LAPS with Azure AD, you must take actions in Azure AD and the devices you wish to manage. We recommend organizations [manage Windows LAPS using Microsoft Intune](/mem/intune/protect/windows-laps-policy). However, if your devices are Azure AD joined but you're not using Microsoft Intune or Microsoft Intune isn't supported (like for Windows Server 2019/2022), you can still deploy Windows LAPS for Azure AD manually. For more information, see the article [Configure Windows LAPS policy settings](/windows-server/identity/laps/laps-management-policy-settings).
-1. Sign in to the **Azure portal** as a [Cloud Device Administrator](../roles/permissions-reference.md#cloud-device-administrator).
-1. Browse to **Azure Active Directory** > **Devices** > **Device settings**
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Device Administrator](../roles/permissions-reference.md#cloud-device-administrator).
+1. Browse to **Identity** > **Devices** > **Overview** > **Device settings**
1. Select **Yes** for the Enable Local Administrator Password Solution (LAPS) setting and select **Save**. You may also use the Microsoft Graph API [Update deviceRegistrationPolicy](/graph/api/deviceregistrationpolicy-update?view=graph-rest-beta&preserve-view=true) (see the sketch after these steps).
1. Configure a client-side policy and set the **BackUpDirectory** to be Azure AD.
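A hedged sketch of that Graph call (TypeScript); the `localAdminPassword.isEnabled` payload shape is an assumption based on the beta `deviceRegistrationPolicy` resource linked above, and beta contracts can change, so confirm it against the reference.

```typescript
// Hedged sketch: enable LAPS through the beta Update deviceRegistrationPolicy
// API linked above. The payload shape is an assumption based on the beta
// resource; a real request may need the full policy object from a prior GET.
async function enableLapsPolicy(accessToken: string): Promise<void> {
  const response = await fetch(
    "https://graph.microsoft.com/beta/policies/deviceRegistrationPolicy",
    {
      method: "PUT", // the beta update operation uses PUT rather than PATCH
      headers: {
        Authorization: `Bearer ${accessToken}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ localAdminPassword: { isEnabled: true } }),
    }
  );
  if (!response.ok) {
    throw new Error(`Policy update failed: ${response.status}`);
  }
}
```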
active-directory Howto Vm Sign In Azure Ad Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/howto-vm-sign-in-azure-ad-linux.md
There are two ways to enable Azure AD login for your Linux VM:
### Azure portal

You can enable Azure AD login for any of the [supported Linux distributions](#supported-linux-distributions-and-azure-regions) by using the Azure portal. For example, to create an Ubuntu Server 18.04 Long Term Support (LTS) VM in Azure with Azure AD login:
To configure role assignments for your Azure AD-enabled Linux VMs:
| Role | **Virtual Machine Administrator Login** or **Virtual Machine User Login** |
| Assign access to | User, group, service principal, or managed identity |
- ![Screenshot that shows the page for adding a role assignment in the Azure portal.](../../../includes/role-based-access-control/media/add-role-assignment-page.png)
+ ![Screenshot that shows the page for adding a role assignment.](../../../includes/role-based-access-control/media/add-role-assignment-page.png)
After a few moments, the security principal is assigned the role at the selected scope.
The application that appears in the Conditional Access policy is called *Azure L
If the Azure Linux VM Sign-In application is missing from Conditional Access, make sure the application isn't in the tenant:
-1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Browse to **Azure Active Directory** > **Enterprise applications**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator).
+1. Browse to **Identity** > **Applications** > **Enterprise applications**.
1. Remove the filters to see all applications, and search for **Virtual Machine**. If you don't see Microsoft Azure Linux Virtual Machine Sign-In as a result, the service principal is missing from the tenant.

Another way to verify it is via Graph PowerShell:
active-directory Howto Vm Sign In Azure Ad Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/howto-vm-sign-in-azure-ad-windows.md
# Log in to a Windows virtual machine in Azure by using Azure AD including passwordless
There are two ways to enable Azure AD login for your Windows VM:
- Azure Cloud Shell, when you're creating a Windows VM or using an existing Windows VM.

> [!NOTE]
-> If a device object with the same displayMame as the hostname of a VM where an extension is installed exists, the VM fails to join Azure AD with a hostname duplication error. Avoid duplication by [modifying the hostname](../../virtual-network/virtual-networks-viewing-and-modifying-hostnames.md#modify-a-hostname).
+> If a device object with the same displayName as the hostname of a VM where an extension is installed exists, the VM fails to join Azure AD with a hostname duplication error. Avoid duplication by [modifying the hostname](../../virtual-network/virtual-networks-viewing-and-modifying-hostnames.md#modify-a-hostname).
### Azure portal

You can enable Azure AD login for VM images in Windows Server 2019 Datacenter or Windows 10 1809 and later. To create a Windows Server 2019 Datacenter VM in Azure with Azure AD login:
To configure role assignments for your Azure AD-enabled Windows Server 2019 Data
| Role | **Virtual Machine Administrator Login** or **Virtual Machine User Login** |
| Assign access to | User, group, service principal, or managed identity |
- ![Screenshot that shows the page for adding a role assignment in the Azure portal.](../../../includes/role-based-access-control/media/add-role-assignment-page.png)
+ ![Screenshot that shows the page for adding a role assignment.](../../../includes/role-based-access-control/media/add-role-assignment-page.png)
### Azure Cloud Shell
Exit code -2145648607 translates to `DSREG_AUTOJOIN_DISC_FAILED`. The extension
- `curl https://pas.windows.net/ -D -`

> [!NOTE]
- > Replace `<TenantID>` with the Azure AD tenant ID that's associated with the Azure subscription. If you need to find the tenant ID, you can hover over your account name or select **Azure Active Directory** > **Properties** > **Directory ID** in the Azure portal.
+ > Replace `<TenantID>` with the Azure AD tenant ID that's associated with the Azure subscription. If you need to find the tenant ID, you can hover over your account name or select **Azure Active Directory** > **Properties** > **Directory ID**.
>
> Attempts to connect to `enterpriseregistration.windows.net` might return 404 Not Found, which is expected behavior. Attempts to connect to `pas.windows.net` might prompt for PIN credentials or might return 404 Not Found. (You don't need to enter the PIN.) Either one is sufficient to verify that the URL is reachable.
Share your feedback about this feature or report problems with using it on the [
If the Azure Windows VM Sign-In application is missing from Conditional Access, make sure that the application is in the tenant:
-1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Browse to **Azure Active Directory** > **Enterprise applications**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator).
+1. Browse to **Identity** > **Applications** > **Enterprise applications**.
1. Remove the filters to see all applications, and search for **VM**. If you don't see **Azure Windows VM Sign-In** as a result, the service principal is missing from the tenant.

Another way to verify it is via Graph PowerShell:
active-directory Hybrid Join Manual https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/hybrid-join-manual.md
description: Learn how to manually configure hybrid Azure Active Directory join
+ Last updated 07/05/2022
The following script helps you with the creation of the issuance transform rules
#### Remarks

* This script appends the rules to the existing rules. Don't run the script twice, because the set of rules would be added twice. Make sure that no corresponding rules exist for these claims (under the corresponding conditions) before running the script again.
-* If you have multiple verified domain names (as shown in the Azure portal or via the **Get-MsolDomain** cmdlet), set the value of **$multipleVerifiedDomainNames** in the script to **$true**. Also make sure that you remove any existing **issuerid** claim that might have been created by Azure AD Connect or via other means. Here's an example for this rule:
+* If you have multiple verified domain names, set the value of **$multipleVerifiedDomainNames** in the script to **$true**. Also make sure that you remove any existing **issuerid** claim that might have been created by Azure AD Connect or via other means. Here's an example for this rule:
```
c:[Type == "http://schemas.xmlsoap.org/claims/UPN"]
```
active-directory Hybrid Join Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/hybrid-join-plan.md
When you're using AD FS, you need to enable the following WS-Trust endpoints:
> [!WARNING]
> Both **adfs/services/trust/2005/windowstransport** and **adfs/services/trust/13/windowstransport** should be enabled as intranet facing endpoints only and must NOT be exposed as extranet facing endpoints through the Web Application Proxy. To learn more on how to disable WS-Trust Windows endpoints, see [Disable WS-Trust Windows endpoints on the proxy](/windows-server/identity/ad-fs/deployment/best-practices-securing-ad-fs#disable-ws-trust-windows-endpoints-on-the-proxy-ie-from-extranet).

You can see what endpoints are enabled through the AD FS management console under **Service** > **Endpoints**.
-Beginning with version 1.1.819.0, Azure AD Connect provides you with a wizard to configure hybrid Azure AD join. The wizard enables you to significantly simplify the configuration process. If installing the required version of Azure AD Connect isn't an option for you, see [how to manually configure device registration](hybrid-join-manual.md).
+Beginning with version 1.1.819.0, Azure AD Connect provides you with a wizard to configure hybrid Azure AD join. The wizard enables you to significantly simplify the configuration process. If installing the required version of Azure AD Connect isn't an option for you, see [how to manually configure device registration](hybrid-join-manual.md). If contoso.com is registered as a confirmed custom domain, users can get a PRT even if their synchronized on-premises AD DS UPN suffix is in a subdomain like test.contoso.com.
## Review on-premises AD users UPN support for hybrid Azure AD join
active-directory Manage Device Identities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/manage-device-identities.md
Azure Active Directory (Azure AD) provides a central place to manage device identities and monitor related event information.
-[![Screenshot that shows the devices overview in the Azure portal.](./media/manage-device-identities/devices-azure-portal.png)](./media/manage-device-identities/devices-azure-portal.png#lightbox)
+[![Screenshot that shows the devices overview.](./media/manage-device-identities/devices-azure-portal.png)](./media/manage-device-identities/devices-azure-portal.png#lightbox)
You can access the devices overview by completing these steps:
-1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Global Reader](../roles/permissions-reference.md#global-reader).
1. Go to **Azure Active Directory** > **Devices**.

In the devices overview, you can view the number of total devices, stale devices, noncompliant devices, and unmanaged devices. You'll also find links to Intune, Conditional Access, BitLocker keys, and basic monitoring.
From there, you can go to **All devices** to:
- Review device-related audit logs.
- Download devices.
-[![Screenshot that shows the All devices view in the Azure portal.](./media/manage-device-identities/all-devices-azure-portal.png)](./media/manage-device-identities/all-devices-azure-portal.png#lightbox)
+[![Screenshot that shows the All devices view.](./media/manage-device-identities/all-devices-azure-portal.png)](./media/manage-device-identities/all-devices-azure-portal.png#lightbox)
> [!TIP]
> - Hybrid Azure AD joined Windows 10 or newer devices don't have an owner. If you're looking for a device by owner and don't find it, search by the device ID.
To view or copy BitLocker keys, you need to be the owner of the device or have o
## View and filter your devices (preview)

In this preview, you have the ability to infinitely scroll, reorder columns, and select all devices. You can filter the device list by these device attributes:

- Enabled state
In this preview, you have the ability to infinitely scroll, reorder columns, and
To enable the preview in the **All devices** view:
-1. Sign in to the [Azure portal](https://portal.azure.com).
-2. Go to **Azure Active Directory** > **Devices** > **All devices**.
-3. Select the **Preview features** button.
-4. Turn on the toggle that says **Enhanced devices list experience**. Select **Apply**.
-5. Refresh your browser.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Global Reader](../roles/permissions-reference.md#global-reader).
+1. Browse to **Identity** > **Devices** > **All devices**.
+1. Select the **Preview features** button.
+1. Turn on the toggle that says **Enhanced devices list experience**. Select **Apply**.
+1. Refresh your browser.
You can now experience the enhanced **All devices** view.
The exported list includes these device identity attributes:
If you want to manage device identities by using the Azure portal, the devices need to be either [registered or joined](overview.md) to Azure AD. As an administrator, you can control the process of registering and joining devices by configuring the following device settings.
-You must be assigned one of the following roles to view device settings in the Azure portal:
+You must be assigned one of the following roles to view device settings:
- Global Administrator
- Global Reader
You must be assigned one of the following roles to view device settings in the A
- Windows 365 Administrator
- Directory Reviewer
-You must be assigned one of the following roles to manage device settings in the Azure portal:
+You must be assigned one of the following roles to manage device settings:
- Global Administrator
- Cloud Device Administrator
active-directory Manage Stale Devices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/manage-stale-devices.md
description: Learn how to remove stale devices from your database of registered
+ Last updated 09/27/2022
-#Customer intent: As an IT admin, I want to understand how I can get rid of stale devices, so that I can I can cleanup my device registration data.
-
+#Customer intent: As an IT admin, I want to understand how I can get rid of stale devices, so that I can clean up my device registration data.
# How To: Manage stale devices in Azure AD
If the delta between the existing value of the activity timestamp and the curren
You have two options to retrieve the value of the activity timestamp:

-- The **Activity** column on the [devices page](https://portal.azure.com/#blade/Microsoft_AAD_IAM/DevicesMenuBlade/Devices) in the Azure portal
+- The **Activity** column on the [devices page](https://portal.azure.com/#blade/Microsoft_AAD_IAM/DevicesMenuBlade/Devices).
- :::image type="content" source="./media/manage-stale-devices/01.png" alt-text="Screenshot of a page in the Azure portal listing the name, owner, and other information on devices. One column lists the activity time stamp." border="false":::
+ :::image type="content" source="./media/manage-stale-devices/01.png" alt-text="Screenshot listing the name, owner, and other information of devices. One column lists the activity time stamp." border="false":::
-- The [Get-AzureADDevice](/powershell/module/azuread/Get-AzureADDevice) cmdlet
+- The [Get-AzureADDevice](/powershell/module/azuread/Get-AzureADDevice) cmdlet.
:::image type="content" source="./media/manage-stale-devices/02.png" alt-text="Screenshot showing command-line output. One line is highlighted and lists a time stamp for the ApproximateLastLogonTimeStamp value." border="false":::
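If you're scripting cleanup rather than using the portal column or the cmdlet, the same timestamp is exposed by Microsoft Graph as `approximateLastSignInDateTime` on the device resource. A hedged sketch (TypeScript), noting that filtering on this property is an advanced query and so, per Graph's advanced-query rules, needs the `ConsistencyLevel: eventual` header and `$count`:

```typescript
// Hedged sketch: list devices whose activity timestamp is older than a
// cutoff, using Microsoft Graph's approximateLastSignInDateTime property.
// Filtering on it is an advanced query, hence ConsistencyLevel and $count.
async function listStaleDevices(accessToken: string, cutoffIso: string) {
  const url =
    "https://graph.microsoft.com/v1.0/devices" +
    `?$filter=approximateLastSignInDateTime le ${cutoffIso}` +
    "&$select=id,displayName,approximateLastSignInDateTime&$count=true";

  const response = await fetch(url, {
    headers: {
      Authorization: `Bearer ${accessToken}`,
      ConsistencyLevel: "eventual",
    },
  });
  if (!response.ok) {
    throw new Error(`Graph request failed: ${response.status}`);
  }
  return (await response.json()).value;
}

// Example: devices inactive since the start of 2023 (adjust to your policy).
// listStaleDevices(token, "2023-01-01T00:00:00Z").then(console.log);
```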
Any authentication where a device is being used to authenticate to Azure AD are
Devices managed with Intune can be retired or wiped, for more information see the article [Remove devices by using wipe, retire, or manually unenrolling the device](/mem/intune/remote-actions/devices-wipe).
-To get an overview of how to manage device in the Azure portal, see [managing devices using the Azure portal](manage-device-identities.md)
+To get an overview of how to manage devices, see [managing devices using the Azure portal](manage-device-identities.md).
active-directory Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/overview.md
Getting devices into Azure AD can be done in a self-service manner or a control
- Learn more about [Azure AD registered devices](concept-device-registration.md)
- Learn more about [Azure AD joined devices](concept-directory-join.md)
- Learn more about [hybrid Azure AD joined devices](concept-hybrid-join.md)
-- To get an overview of how to manage device identities in the Azure portal, see [Managing device identities using the Azure portal](manage-device-identities.md).
+- To get an overview of how to manage device identities, see [Managing device identities using the Azure portal](manage-device-identities.md).
- To learn more about device-based Conditional Access, see [Configure Azure Active Directory device-based Conditional Access policies](../conditional-access/concept-conditional-access-grant.md).
active-directory Plan Device Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/plan-device-deployment.md
Administrators can also [deploy virtual desktop infrastructure (VDI) platforms](
## Next steps
+* [Analyze your on-premises GPOs using Group Policy analytics in Microsoft Intune](/mem/intune/configuration/group-policy-analytics)
* [Plan your Azure AD join implementation](device-join-plan.md)
* [Plan your hybrid Azure AD join implementation](hybrid-join-plan.md)
* [Manage device identities](manage-device-identities.md)
active-directory Troubleshoot Device Windows Joined https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/troubleshoot-device-windows-joined.md
If you have a Windows 11 or Windows 10 device that isn't working with Azure Active Directory (Azure AD) correctly, start your troubleshooting here.
-1. Sign in to the **Azure portal**.
-1. Browse to **Azure Active Directory** > **Devices** > **Diagnose and solve problems**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Global Reader](../roles/permissions-reference.md#global-reader).
+1. Browse to **Identity** > **Devices** > **All devices** > **Diagnose and solve problems**.
1. Select **Troubleshoot** under the **Windows 10+ related issue** troubleshooter.

   :::image type="content" source="media/troubleshoot-device-windows-joined/devices-troubleshoot-windows.png" alt-text="A screenshot showing the Windows troubleshooter located in the diagnose and solve pane of the Azure portal." lightbox="media/troubleshoot-device-windows-joined/devices-troubleshoot-windows.png":::

1. Select **instructions** and follow the steps to download, run, and collect the required logs for the troubleshooter to analyze.
active-directory Troubleshoot Hybrid Join Windows Current https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/troubleshoot-hybrid-join-windows-current.md
Use Event Viewer to look for the log entries that are logged by the Azure AD Clo
| Error code | Reason | Resolution |
| --- | --- | --- |
-| **AADSTS50155: Device authentication failed** | <li>Azure AD is unable to authenticate the device to issue a PRT.<li>Confirm that the device hasn't been deleted or disabled in the Azure portal. For more information about this issue, see [Azure Active Directory device management FAQ](faq.yml#why-do-my-users-see-an-error-message-saying--your-organization-has-deleted-the-device--or--your-organization-has-disabled-the-device--on-their-windows-10-11-devices). | Follow the instructions for this issue in [Azure Active Directory device management FAQ](faq.yml#i-disabled-or-deleted-my-device-in-the-azure-portal-or-by-using-windows-powershell--but-the-local-state-on-the-device-says-it-s-still-registered--what-should-i-do) to re-register the device based on the device join type. |
+| **AADSTS50155: Device authentication failed** | <li>Azure AD is unable to authenticate the device to issue a PRT.<li>Confirm that the device hasn't been deleted or disabled. For more information about this issue, see [Azure Active Directory device management FAQ](faq.yml#why-do-my-users-see-an-error-message-saying--your-organization-has-deleted-the-device--or--your-organization-has-disabled-the-device--on-their-windows-10-11-devices). | Follow the instructions for this issue in [Azure Active Directory device management FAQ](faq.yml#i-disabled-or-deleted-my-device--but-the-local-state-on-the-device-says-it-s-still-registered--what-should-i-do) to re-register the device based on the device join type. |
| **AADSTS50034: The user account `Account` does not exist in the `tenant id` directory** | Azure AD is unable to find the user account in the tenant. | <li>Ensure that the user is typing the correct UPN.<li>Ensure that the on-premises user account is being synced with Azure AD.<li>Event 1144 (Azure AD analytics logs) will contain the UPN provided. |
| **AADSTS50126: Error validating credentials due to invalid username or password.** | <li>The username and password entered by the user in the Windows LoginUI are incorrect.<li>If the tenant has password hash sync enabled, the device is hybrid-joined, and the user just changed the password, it's likely that the new password hasn't synced with Azure AD. | To acquire a fresh PRT with the new credentials, wait for the Azure AD password sync to finish. |
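When working through **AADSTS50155**, it can help to confirm the device's local registration state before re-registering; a minimal sketch using the built-in `dsregcmd` tool on the affected Windows device:

```powershell
# Run on the affected device; check fields such as AzureAdJoined and DomainJoined
dsregcmd /status
```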
active-directory Troubleshoot Hybrid Join Windows Legacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/troubleshoot-hybrid-join-windows-legacy.md
This article provides you with troubleshooting guidance on how to resolve potent
- Hybrid Azure AD join for downlevel Windows devices works slightly differently than it does in Windows 10 or newer. Many customers don't realize that they need AD FS (for federated domains) or Seamless SSO configured (for managed domains).
- Seamless SSO doesn't work in private browsing mode on Firefox and Microsoft Edge browsers. It also doesn't work on Internet Explorer if the browser is running in Enhanced Protected mode or if Enhanced Security Configuration is enabled.
-- For customers with federated domains, if the Service Connection Point (SCP) was configured such that it points to the managed domain name (for example, contoso.onmicrosoft.com, instead of contoso.com), then Hybrid Azure AD Join for downlevel Windows devices won't work.
+- For customers with federated domains, if the Service Connection Point (SCP) was configured such that it points to the managed domain name (for example, contoso.onmicrosoft.com, instead of contoso.com), then Hybrid Azure AD Join for downlevel Windows devices doesn't work.
- The same physical device appears multiple times in Azure AD when multiple domain users sign in to the downlevel hybrid Azure AD joined devices. For example, if *jdoe* and *jharnett* sign in to a device, a separate registration (DeviceID) is created for each of them in the **USER** info tab.
- You can also get multiple entries for a device on the user info tab because of a reinstallation of the operating system or a manual re-registration.
- The initial registration / join of devices is configured to perform an attempt at either sign-in or lock / unlock. There could be a 5-minute delay triggered by a task scheduler task.
This command displays a dialog box that provides you with details about the join
## Step 2: Evaluate the hybrid Azure AD join status
-If the device wasn't hybrid Azure AD joined, you can attempt to do hybrid Azure AD join by clicking on the "Join" button. If the attempt to do hybrid Azure AD join fails, the details about the failure will be shown.
+If the device wasn't hybrid Azure AD joined, you can attempt to do hybrid Azure AD join by clicking on the "Join" button. If the attempt to do hybrid Azure AD join fails, the details about the failure are shown.
**The most common issues are:**
If the device wasn't hybrid Azure AD joined, you can attempt to do hybrid Azure
- It could be that AD FS and Azure AD URLs are missing in IE's intranet zone on the client.
- Network connectivity issues may be preventing **autoworkplace.exe** from reaching AD FS or the Azure AD URLs.
- **Autoworkplace.exe** requires direct line of sight from the client to the organization's on-premises AD domain controller, which means that hybrid Azure AD join succeeds only when the client is connected to the organization's intranet.
- - If your organization uses Azure AD Seamless Single Sign-On, `https://autologon.microsoftazuread-sso.com` or `https://aadg.windows.net.nsatc.net` aren't present on the device's IE intranet settings.
+ - If your organization uses Azure AD Seamless Single Sign-On, `https://autologon.microsoftazuread-sso.com` isn't present on the device's IE intranet settings.
+ - The internet setting `Do not save encrypted pages to disk` is checked.
- You aren't signed on as a domain user

  :::image type="content" source="./media/troubleshoot-hybrid-join-windows-legacy/03.png" alt-text="Screenshot of the Workplace Join for Windows dialog box. Text reports that an error occurred during account verification." border="false":::
active-directory Troubleshoot Primary Refresh Token https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/troubleshoot-primary-refresh-token.md
You can find a full list and description of server error codes in [Azure AD auth
- Azure AD can't authenticate the device to issue a PRT.

-- The device might have been deleted or disabled in the Azure portal. (For more information, see [Why do my users see an error message saying "Your organization has deleted the device" or "Your organization has disabled the device" on their Windows 10/11 devices?](./faq.yml#why-do-my-users-see-an-error-message-saying--your-organization-has-deleted-the-device--or--your-organization-has-disabled-the-device--on-their-windows-10-11-devices))
+- The device might have been deleted or disabled. (For more information, see [Why do my users see an error message saying "Your organization has deleted the device" or "Your organization has disabled the device" on their Windows 10/11 devices?](./faq.yml#why-do-my-users-see-an-error-message-saying--your-organization-has-deleted-the-device--or--your-organization-has-disabled-the-device--on-their-windows-10-11-devices))
##### Solution
-Re-register the device based on the device join type. For instructions, see [I disabled or deleted my device in the Azure portal or by using Windows PowerShell. But the local state on the device says it's still registered. What should I do?](./faq.yml#i-disabled-or-deleted-my-device-in-the-azure-portal-or-by-using-windows-powershell--but-the-local-state-on-the-device-says-it-s-still-registered--what-should-i-do).
+Re-register the device based on the device join type. For instructions, see [I disabled or deleted my device. But the local state on the device says it's still registered. What should I do?](./faq.yml#i-disabled-or-deleted-my-device--but-the-local-state-on-the-device-says-it-s-still-registered--what-should-i-do).
</details>

<details>
active-directory Directory Delete Howto https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/directory-delete-howto.md
Last updated 10/03/2022 -+
active-directory Directory Self Service Signup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/directory-self-service-signup.md
Last updated 03/02/2022 -+
active-directory Domains Admin Takeover https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/domains-admin-takeover.md
Previously updated : 06/23/2022 Last updated : 08/31/2023 -+
The key and templates aren't moved over when the unmanaged organization is in a
Although RMS for individuals is designed to support Azure AD authentication to open protected content, it doesn't prevent users from also protecting content. If users did protect content with the RMS for individuals subscription, and the key and templates weren't moved over, that content isn't accessible after the domain takeover.

### Azure AD PowerShell cmdlets for the ForceTakeover option
+
You can see these cmdlets used in the [PowerShell example](#powershell-example).

cmdlet | Usage
active-directory Domains Verify Custom Subdomain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/domains-verify-custom-subdomain.md
Last updated 06/23/2022 --+
active-directory Groups Assign Sensitivity Labels https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-assign-sensitivity-labels.md
Last updated 06/28/2023 -+
active-directory Groups Change Type https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-change-type.md
Last updated 06/23/2022 -+
active-directory Groups Lifecycle https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-lifecycle.md
Last updated 06/24/2022 -+
active-directory Groups Naming Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-naming-policy.md
Last updated 06/24/2022 -+
active-directory Groups Restore Deleted https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-restore-deleted.md
Last updated 06/24/2022 -+

# Restore a deleted Microsoft 365 group in Azure Active Directory
active-directory Groups Self Service Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-self-service-management.md
Last updated 06/12/2023 -+
active-directory Groups Settings Cmdlets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-settings-cmdlets.md
Last updated 06/24/2022 -+

# Azure Active Directory cmdlets for configuring group settings
active-directory Groups Settings V2 Cmdlets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-settings-v2-cmdlets.md
Last updated 06/24/2022 -+

# Azure Active Directory version 2 cmdlets for group management
Microsoft 365 groups are created and managed in the cloud. The writeback capabil
For more details, see the documentation for the [Azure AD Connect sync service](../hybrid/connect/how-to-connect-syncservice-features.md).
-Microsoft 365 group writeback is a public preview feature of Azure Active Directory (Azure AD) and is available with any paid Azure AD license plan. For some legal information about previews, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+Microsoft 365 group writeback is a public preview feature of Azure Active Directory (Azure AD) and is available with any paid Azure AD license plan. For more information about previews, see [Universal License Terms For Online Services](https://www.microsoft.com/licensing/terms/product/ForOnlineServices/all).
## Next steps
active-directory Licensing Group Advanced https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/licensing-group-advanced.md
Last updated 01/09/2023 -+
active-directory Licensing Groups Assign https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/licensing-groups-assign.md
Title: Assign licenses to a group
-description: How to assign licenses to users by means of Azure Active Directory group licensing
+description: How to assign licenses to users with Azure Active Directory group licensing
keywords: Azure AD licensing documentationcenter: ''
Previously updated : 06/24/2022 Last updated : 08/31/2023
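As a minimal sketch of the task this article covers (the group ID and SKU part number are placeholders; assumes the Microsoft.Graph PowerShell module):

```powershell
# Minimal sketch: assign a subscription SKU to a group (IDs/SKU are placeholders)
Connect-MgGraph -Scopes "Group.ReadWrite.All", "Organization.Read.All"

$sku = Get-MgSubscribedSku | Where-Object { $_.SkuPartNumber -eq "ENTERPRISEPACK" }
Set-MgGroupLicense -GroupId "<group-object-id>" `
    -AddLicenses @(@{ SkuId = $sku.SkuId }) `
    -RemoveLicenses @()
```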
active-directory Licensing Groups Resolve Problems https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/licensing-groups-resolve-problems.md
Previously updated : 06/24/2022 Last updated : 08/31/2023
To see which users and groups are consuming licenses, select a product. Under **
**Problem:** One of the products that's specified in the group contains a service plan that conflicts with another service plan that's already assigned to the user via a different product. Some service plans are configured in a way that they can't be assigned to the same user as another, related service plan. > [!TIP]
-> Exchange Online Plan1 and Plan2 were previously non-duplicable service plans. However, now they are service plans that can be duplicated.
-> If you are experiencing conflicts with these service plans, please try reprocessing them.
+> Previously, Exchange Online Plan1 and Plan2 were unique and couldn't be duplicated. Now, both service plans have been updated to allow duplication.
+> If you are experiencing conflicts with these service plans, try reprocessing them.
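If you want to script that reprocessing step, a minimal sketch with Microsoft Graph PowerShell (the user ID is a placeholder; assumes the Microsoft.Graph module and the v1.0 `reprocessLicenseAssignment` action):

```powershell
# Minimal sketch: trigger a license reprocess for one user (ID is a placeholder)
Connect-MgGraph -Scopes "User.ReadWrite.All"

$userId = "<user-object-id>"
Invoke-MgGraphRequest -Method POST `
    -Uri "https://graph.microsoft.com/v1.0/users/$userId/reprocessLicenseAssignment"
```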
The decision about how to resolve conflicting product licenses always belongs to the administrator. Azure AD doesn't automatically resolve license conflicts.
Updating license assignment on a user causes the proxy address calculation to be
## LicenseAssignmentAttributeConcurrencyException in audit logs

**Problem:** User has LicenseAssignmentAttributeConcurrencyException for license assignment in audit logs.
-When group-based licensing tries to process concurrent license assignment of same license to a user, this exception is recorded on the user. This usually happens when a user is a member of more than one group with same assigned license. Azure AD will retry processing the user license and will resolve the issue. There is no action required from the customer to fix this issue.
+When group-based licensing tries to process concurrent assignments of the same license to a user, this exception is recorded on the user. This usually happens when a user is a member of more than one group with the same assigned license. Azure AD retries processing the user license until the issue is resolved. No customer action is required to fix this issue.
## More than one product license assigned to a group
active-directory Licensing Powershell Graph Examples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/licensing-powershell-graph-examples.md
The purpose of this script is to remove unnecessary direct licenses from users w
```powershell
-Import-Module Microsoft.Graph
+# Import the required Microsoft Graph submodules (Users, Authentication, Users.Actions, Groups)
+Import-Module Microsoft.Graph.Users -Force
+Import-Module Microsoft.Graph.Authentication -Force
+Import-Module Microsoft.Graph.Users.Actions -Force
+Import-Module Microsoft.Graph.Groups -Force
-# Connect to the Microsoft Graph
-Connect-MgGraph
+Clear-Host
-# Get the group to be processed
-$groupId = "48ca647b-7e4d-41e5-aa66-40cab1e19101"
-
-# Get the license to be removed - Office 365 E3
-$skuId = "contoso:ENTERPRISEPACK"
-
-# Minimum set of service plans we know are inherited by this group
-$expectedDisabledPlans = @("Exchange Online", "SharePoint Online", "Lync Online")
-
-# Get the users in the group
-$users = Get-MgUser -GroupObjectId $groupId
-
-# For each user, get the license for the specified SKU
-foreach ($user in $users) {
- $license = GetUserLicense $user $skuId
-
- # If the user has the license assigned directly, continue to the next user
- if (UserHasLicenseAssignedDirectly $user $skuId) {
- continue
- }
-
- # If the user is inheriting the license from the specified group, continue to the next user
- if (UserHasLicenseAssignedFromThisGroup $user $skuId $groupId) {
- continue
- }
+if ($null -eq (Get-MgContext)) {
+    Connect-MgGraph -Scopes "Directory.Read.All", "User.ReadWrite.All", "Group.Read.All", "Organization.Read.All" -NoWelcome
+}
- # Get the list of disabled service plans for the SKU
- $disabledPlans = GetDisabledPlansForSKU $skuId $expectedDisabledPlans
+# Get all groups with licenses assigned
+$groupsWithLicenses = Get-MgGroup -All -Property AssignedLicenses, DisplayName, Id |
+    Where-Object { $_.AssignedLicenses } |
+    Select-Object DisplayName, Id -ExpandProperty AssignedLicenses |
+    Select-Object DisplayName, Id, SkuId
- # Get the list of unexpected enabled plans for the user
- $extraPlans = GetUnexpectedEnabledPlansForUser $user $skuId $expectedDisabledPlans
+$output = @()
- # If there are any unexpected enabled plans, print them to the console
- if ($extraPlans.Count -gt 0) {
- Write-Warning "The user $user has the following unexpected enabled plans for the $skuId SKU: $extraPlans"
+# Check if there is any group that has licenses assigned or not
+if ($null -ne $groupsWithLicenses) {
+ # Loop through each group
+ foreach ($group in $groupsWithLicenses) {
+ # Get the group's licenses
+ $groupLicenses = $group.SkuId
+
+ # Get the group's members
+ $groupMembers = Get-MgGroupMember -GroupId $group.Id -All
+
+ # Check if the group member list is empty or not
+ if ($groupMembers) {
+ # Loop through each member
+ foreach ($member in $groupMembers) {
+ # Check if the member is a user
+ if ($member.AdditionalProperties.'@odata.type' -eq '#microsoft.graph.user') {
+ # Get the user's direct licenses
+ Write-Host "Fetching license details for $($member.AdditionalProperties.displayName)" -ForegroundColor Yellow
+
+ # Get User With Directly Assigned Licenses Only
+ $user = Get-MgUser -UserId $member.Id -Property AssignedLicenses, LicenseAssignmentStates, DisplayName | Select-Object DisplayName, AssignedLicenses -ExpandProperty LicenseAssignmentStates | Select-Object DisplayName, AssignedByGroup, State, Error, SkuId | Where-Object { $_.AssignedByGroup -eq $null }
+
+ $licensesToRemove = @()
+ if($user)
+ {
+ if ($user.count -ge 2) {
+ foreach ($u in $user) {
+ $userLicenses = $u.SkuId
+ $licensesToRemove += $userLicenses | Where-Object { $_ -in $groupLicenses }
+ }
+ }
+ else {
+ $userLicenses = $user.SkuId
+ $licensesToRemove = $userLicenses | Where-Object { $_ -in $groupLicenses }
+ }
+ } else {
+ Write-Host "No conflicting licenses found for the user $($member.AdditionalProperties.displayName)" -ForegroundColor Green
+ }
+
+ # Remove the licenses from the user
+ if ($licensesToRemove) {
+ Write-Host "Removing the license $($licensesToRemove) from user $($member.AdditionalProperties.displayName) as inherited from group $($group.DisplayName)" -ForegroundColor Green
+ $result = Set-MgUserLicense -UserId $member.Id -AddLicenses @() -RemoveLicenses $licensesToRemove
+ $obj = [PSCustomObject]@{
+ User = $result.DisplayName
+ Id = $result.Id
+ LicensesRemoved = $licensesToRemove
+ LicenseInheritedFromGroup = $group.DisplayName
+ GroupId = $group.Id
+ }
+
+ $output += $obj
+
+ }
+ else {
+ Write-Host "No action required for $($member.AdditionalProperties.displayName)" -ForegroundColor Green
+ }
+
+ }
+ }
+ }
+ else {
+ Write-Host "The licensed group $($group.DisplayName) has no members, exiting now!!" -ForegroundColor Yellow
+ }
+
}
+
+ $output | Format-Table -AutoSize
+}
+else {
+ Write-Host "No groups found with licenses assigned." -ForegroundColor Cyan
} ```
active-directory Licensing Ps Examples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/licensing-ps-examples.md
+ Last updated 12/02/2020
active-directory Linkedin Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/linkedin-integration.md
Last updated 06/24/2022 -+
active-directory Users Bulk Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/users-bulk-restore.md
-+
active-directory Users Custom Security Attributes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/users-custom-security-attributes.md
-+
> [!IMPORTANT]
> Custom security attributes are currently in PREVIEW.
-> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+> For more information about previews, see [Universal License Terms For Online Services](https://www.microsoft.com/licensing/terms/product/ForOnlineServices/all).
[Custom security attributes](../fundamentals/custom-security-attributes-overview.md) in Azure Active Directory (Azure AD), part of Microsoft Entra, are business-specific attributes (key-value pairs) that you can define and assign to Azure AD objects. For example, you can assign a custom security attribute to filter your employees or to help determine who gets access to resources. This article describes how to assign, update, list, or remove custom security attributes for Azure AD.
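As a minimal sketch of the assignment step (the attribute set `Engineering` and the attribute `ProjectDate` are hypothetical; assumes the Microsoft.Graph PowerShell module):

```powershell
# Minimal sketch: assign a custom security attribute value to a user.
# The attribute set (Engineering) and attribute (ProjectDate) are hypothetical.
Connect-MgGraph -Scopes "CustomSecAttributeAssignment.ReadWrite.All"

Update-MgUser -UserId "<user-object-id>" -CustomSecurityAttributes @{
    Engineering = @{
        "@odata.type" = "#microsoft.graph.customSecurityAttributeValue"
        ProjectDate   = "2023-10-01"
    }
}
```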
active-directory Users Restrict Guest Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/users-restrict-guest-permissions.md
-+
active-directory Users Revoke Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/users-revoke-access.md
Previously updated : 06/24/2022- Last updated : 08/31/2023+
As an administrator in Azure Active Directory, open PowerShell, run ``Connect-Az
>[!NOTE]
> For information on specific roles that can perform these steps, review [Azure AD built-in roles](../roles/permissions-reference.md)
+
## When access is revoked

Once admins have taken the above steps, the user can't gain new tokens for any application tied to Azure Active Directory. The elapsed time between revocation and the user losing their access depends on how the application is granting access:
active-directory Add Users Administrator https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/add-users-administrator.md
After you add a guest user to the directory, you can either send the guest user
> [!IMPORTANT]
> You should follow the steps in [How-to: Add your organization's privacy info in Azure Active Directory](../fundamentals/properties-area.md) to add the URL of your organization's privacy statement. As part of the first-time invitation redemption process, an invited user must consent to your privacy terms to continue.
-The updated experience for creating new users covered in this article is available as an Azure AD preview feature. This feature is enabled by default, but you can opt out by going to **Azure AD** > **Preview features** and disabling the **Create user experience** feature. For more information about previews, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+The updated experience for creating new users covered in this article is available as an Azure AD preview feature. This feature is enabled by default, but you can opt out by going to **Azure AD** > **Preview features** and disabling the **Create user experience** feature. For more information about previews, see [Universal License Terms for Online Services](https://www.microsoft.com/licensing/terms/product/ForOnlineServices/all).
Instructions for the legacy create user process can be found in the [Add or delete users](../fundamentals/add-users.md) article.
active-directory Authentication Conditional Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/authentication-conditional-access.md
description: Learn how to enforce multi-factor authentication policies for Azure
+ Last updated 04/17/2023
active-directory B2b Quickstart Add Guest Users Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/b2b-quickstart-add-guest-users-portal.md
In this quickstart, you'll learn how to add a new guest user to your Azure AD di
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
-The updated experience for creating new users covered in this article is available as an Azure AD preview feature. This feature is enabled by default, but you can opt out by going to **Azure AD** > **Preview features** and disabling the **Create user experience** feature. For more information about previews, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+The updated experience for creating new users covered in this article is available as an Azure AD preview feature. This feature is enabled by default, but you can opt out by going to **Azure AD** > **Preview features** and disabling the **Create user experience** feature. For more information about previews, see [Universal License Terms for Online Services](https://www.microsoft.com/licensing/terms/product/ForOnlineServices/all).
Instructions for the legacy create user process can be found in the [Add or delete users](../fundamentals/add-users.md) article.
active-directory Bulk Invite Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/bulk-invite-powershell.md
Last updated 07/31/2023
--
-# Customer intent: As a tenant administrator, I want to send B2B invitations to multiple external users at the same time so that I can avoid having to send individual invitations to each user.
+
+# Customer intent: As a tenant administrator, I want to send B2B invitations to multiple external users at the same time so that I can avoid having to send individual invitations to each user.
# Tutorial: Use PowerShell to bulk invite Azure AD B2B collaboration users
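A minimal sketch of the pattern this tutorial covers (the CSV path and column names are hypothetical; assumes the Microsoft.Graph PowerShell module):

```powershell
# Minimal sketch: send one B2B invitation per CSV row (path/columns are hypothetical)
Connect-MgGraph -Scopes "User.Invite.All"

Import-Csv -Path "C:\BulkInvite\invitations.csv" | ForEach-Object {
    New-MgInvitation -InvitedUserEmailAddress $_.Email `
        -InvitedUserDisplayName $_.Name `
        -InviteRedirectUrl "https://myapplications.microsoft.com" `
        -SendInvitationMessage:$true
}
```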
active-directory Claims Mapping https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/claims-mapping.md
Previously updated : 11/24/2022 Last updated : 08/30/2023
There are two possible reasons why you might need to edit the claims that are is
For information about how to add and edit claims, see [Customizing claims issued in the SAML token for enterprise applications in Azure Active Directory](../develop/saml-claims-customization.md).
-For B2B collaboration users, mapping NameID and UPN cross-tenant are prevented for security reasons.
+## UPN claims behavior for B2B users
+
+If you need to issue the UPN value as an application token claim, the actual claim mapping may behave differently for B2B users. If the B2B user authenticates with an external Azure AD identity and you issue user.userprincipalname as the source attribute, Azure AD instead issues the mail attribute.
+
+For example, let's say you invite an external user whose email is `james@contoso.com` and whose identity exists in an external Azure AD tenant. James' UPN in the inviting tenant is created from the invited email and the inviting tenant's original default domain. So, let's say James' UPN becomes `James_contoso.com#EXT#@fabrikam.onmicrosoft.com`. For the SAML application that issues user.userprincipalname as the NameID, the value passed for James is `james@contoso.com`.
+
+All [other external identity types](redemption-experience.md#invitation-redemption-flow), such as SAML/WS-Fed, Google, and email OTP, issue the UPN value rather than the email value when you issue user.userprincipalname as a claim. If you want the actual UPN to be issued in the token claim for all B2B users, you can set user.localuserprincipalname as the source attribute instead.
+
+>[!NOTE]
+>The behavior mentioned in this section is the same for both cloud-only B2B users and synced users who were [invited/converted to B2B collaboration](invite-internal-users.md).
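To compare the two stored values for a specific guest, a minimal sketch (the user ID is a placeholder; assumes the Microsoft.Graph PowerShell module):

```powershell
# Minimal sketch: compare the UPN and mail stored for a guest (ID is a placeholder)
Connect-MgGraph -Scopes "User.Read.All"

Get-MgUser -UserId "<guest-user-object-id>" -Property UserPrincipalName, Mail |
    Select-Object UserPrincipalName, Mail
```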
## Next steps
active-directory Code Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/code-samples.md
Last updated 04/06/2023
-+ # Customer intent: As a tenant administrator, I want to bulk-invite external users to an organization from email addresses that I've stored in a .csv file.
active-directory Cross Tenant Access Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/cross-tenant-access-overview.md
Azure AD organizations can use External Identities cross-tenant access settings to manage how they collaborate with other Azure AD organizations and other Microsoft Azure clouds through B2B collaboration and [B2B direct connect](cross-tenant-access-settings-b2b-direct-connect.md). [Cross-tenant access settings](cross-tenant-access-settings-b2b-collaboration.md) give you granular control over how external Azure AD organizations collaborate with you (inbound access) and how your users collaborate with external Azure AD organizations (outbound access). These settings also let you trust multi-factor authentication (MFA) and device claims ([compliant claims and hybrid Azure AD joined claims](../conditional-access/howto-conditional-access-policy-compliant-device.md)) from other Azure AD organizations.

This article describes cross-tenant access settings, which are used to manage B2B collaboration and B2B direct connect with external Azure AD organizations, including across Microsoft clouds. More settings are available for B2B collaboration with non-Azure AD identities (for example, social identities or non-IT managed external accounts). These [external collaboration settings](external-collaboration-settings-configure.md) include options for restricting guest user access, specifying who can invite guests, and allowing or blocking domains.
+
+> [!IMPORTANT]
+> Microsoft is beginning to move customers using cross-tenant access settings to a new storage model on August 30, 2023. You may notice an entry in your audit logs informing you that your cross-tenant access settings were updated as our automated task migrates your settings. For a brief window while the migration is in progress, you will be unable to make changes to your settings. If you are unable to make a change, wait a few moments and try again. Once the migration completes, [you will no longer be capped at 25 KB of storage space](/azure/active-directory/external-identities/faq#how-many-organizations-can-i-add-in-cross-tenant-access-settings-) and there will no longer be a limit on the number of partners you can add.
## Manage external access with inbound and outbound settings
active-directory Cross Tenant Access Settings B2b Collaboration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/cross-tenant-access-settings-b2b-collaboration.md
Use External Identities cross-tenant access settings to manage how you collaborate with other Azure AD organizations through B2B collaboration. These settings determine both the level of *inbound* access users in external Azure AD organizations have to your resources, and the level of *outbound* access your users have to external organizations. They also let you trust multi-factor authentication (MFA) and device claims ([compliant claims and hybrid Azure AD joined claims](../conditional-access/howto-conditional-access-policy-compliant-device.md)) from other Azure AD organizations. For details and planning considerations, see [Cross-tenant access in Azure AD External Identities](cross-tenant-access-overview.md).
+> [!IMPORTANT]
+> Microsoft is beginning to move customers using cross-tenant access settings to a new storage model on August 30, 2023. You may notice an entry in your audit logs informing you that your cross-tenant access settings were updated as our automated task migrates your settings. For a brief window while the migration is in progress, you will be unable to make changes to your settings. If you are unable to make a change, wait a few moments and try again. Once the migration completes, [you will no longer be capped at 25 KB of storage space](/azure/active-directory/external-identities/faq#how-many-organizations-can-i-add-in-cross-tenant-access-settings-) and there will no longer be a limit on the number of partners you can add.
+
## Before you begin

> [!CAUTION]
With inbound settings, you select which external users and groups will be able t
- In the menu next to the search box, choose either **user** or **group**.
- Select **Add**.
- ![Screenshot showing adding users and groups.](media/cross-tenant-access-settings-b2b-collaboration/generic-inbound-external-users-groups-add.png)
+ > [!NOTE]
+ > You cannot target users or groups in inbound default settings.
+
+ ![Screenshot showing adding users and groups.](media/cross-tenant-access-settings-b2b-collaboration/generic-inbound-external-users-groups-add-new.png)
1. When you're done adding users and groups, select **Submit**.
active-directory Cross Tenant Access Settings B2b Direct Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/cross-tenant-access-settings-b2b-direct-connect.md
Use cross-tenant access settings to manage how you collaborate with other Azure
Learn more about using cross-tenant access settings to [manage B2B direct connect](b2b-direct-connect-overview.md#managing-cross-tenant-access-for-b2b-direct-connect).
+> [!IMPORTANT]
+> Microsoft is beginning to move customers using cross-tenant access settings to a new storage model on August 30, 2023. You may notice an entry in your audit logs informing you that your cross-tenant access settings were updated as our automated task migrates your settings. For a brief window while the migration is in progress, you will be unable to make changes to your settings. If you are unable to make a change, wait a few moments and try again. Once the migration completes, [you will no longer be capped at 25 KB of storage space](/azure/active-directory/external-identities/faq#how-many-organizations-can-i-add-in-cross-tenant-access-settings-) and there will no longer be a limit on the number of partners you can add.
+
## Before you begin

- Review the [Important considerations](cross-tenant-access-overview.md#important-considerations) section in the [cross-tenant access overview](cross-tenant-access-overview.md) before configuring your cross-tenant access settings.
With inbound settings, you select which external users and groups will be able t
- In the menu next to the search box, choose either **user** or **group**.
- Select **Add**.
+ > [!NOTE]
+ > You cannot target users or groups in inbound default settings.
   ![Screenshot showing adding external users for inbound b2b direct connect](media/cross-tenant-access-settings-b2b-direct-connect/b2b-direct-connect-inbound-external-users-groups-add.png)

1. When you're done adding users and groups, select **Submit**.
active-directory Faq Customers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/faq-customers.md
Opt for the next generation Microsoft Entra External ID platform if:
- You're starting fresh building identities into apps or you're in the early stages of product discovery.
- The benefits of rapid innovation, new features and capabilities are a priority.
+### Why is Azure AD B2C not part of Entra ID/External ID?
+
+Microsoft Entra External ID and Azure AD B2C are two separate platforms, powered by ESTS and IEF respectively. Entra External ID is our new converged platform, which is future-proof and developer-friendly to meet all your identity needs: B2E, B2B, and B2C. At the same time, we will continue to support Azure AD B2C as a separate product offering with no change in SLA, and we'll continue investments in the product to ensure security, availability, and reliability.
+ ## Next steps
-[Learn more about Microsoft Entra External ID for customers](index.yml)
+[Learn more about Microsoft Entra External ID for customers](index.yml)
active-directory How To Add Attributes To Token https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/how-to-add-attributes-to-token.md
You can specify which built-in or custom attributes you want to include as claim
## Add built-in or custom attributes to the token
-1. In the [Microsoft Entra admin center](https://entra.microsoft.com/), select **Azure Active Directory**.
-1. Select **Applications** > **App registrations**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com).
+1. Browse to **Identity** > **Applications** > **App registrations**.
1. Select your application in the list to open the application's **Overview** page.

   :::image type="content" source="media/how-to-add-attributes-to-token/select-app.png" alt-text="Screenshot of the overview page of the app registration.":::
You can specify which built-in or custom attributes you want to include as claim
### Update the application manifest to accept mapped claims
-1. In the [Microsoft Entra admin center](https://entra.microsoft.com/), select **Azure Active Directory**.
-1. Select **Applications** > **App registrations**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com).
+1. Browse to **Identity** > **Applications** > **App registrations**.
1. Select your application in the list to open the application's **Overview** page.
1. In the left menu, under **Manage**, select **Manifest** to open the application manifest.
1. Find the **acceptMappedClaims** key and set its value to **true**.
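If you'd rather script this change, a minimal sketch with Microsoft Graph PowerShell (the application object ID is a placeholder; assumes the Microsoft.Graph.Applications module):

```powershell
# Minimal sketch: set acceptMappedClaims on the app (object ID is a placeholder)
Connect-MgGraph -Scopes "Application.ReadWrite.All"

Update-MgApplication -ApplicationId "<application-object-id>" `
    -Api @{ AcceptMappedClaims = $true }
```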
active-directory How To Create Customer Tenant Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/how-to-create-customer-tenant-portal.md
In this article, you learn how to:
## Create a new customer tenant
-1. Sign in to your organization's [Microsoft Entra admin center](https://entra.microsoft.com/).
-1. From the left menu, select **Azure Active Directory** > **Overview**.
-1. On the overview page, select **Manage tenants**
+1. Sign in to your organization's [Microsoft Entra admin center](https://entra.microsoft.com/) as at least a [Contributor](/azure/role-based-access-control/built-in-roles#contributor).
+1. Browse to **Identity** > **Overview** > **Manage tenants**.
1. Select **Create**.

   :::image type="content" source="media/how-to-create-customer-tenant-portal/create-tenant.png" alt-text="Screenshot of the create tenant option.":::
active-directory How To Customize Branding Customers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/how-to-customize-branding-customers.md
The following image displays the neutral default branding of the customer tenant
Before you customize any settings, the neutral default branding will appear in your sign-in and sign-up pages. You can customize this default experience with a custom background image or color, favicon, layout, header, and footer. You can also upload a [custom CSS](/azure/active-directory/fundamentals/reference-company-branding-css-template).
-1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com/).
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com).
1. If you have access to multiple tenants, use the **Directories + subscriptions** filter :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to the customer tenant you created earlier.
-1. In the search bar, type and select **Company branding**.
-1. Under **Default sign-in** select **Edit**.
+1. Browse to **Company Branding** > **Default sign-in** > **Edit**.
:::image type="content" source="media/how-to-customize-branding-customers/company-branding-default-edit-button.png" alt-text="Screenshot of the company branding edit button.":::
Your customer tenant name replaces the Microsoft banner logo in the neutral defa
When no longer needed, you can remove the sign-in customization from your customer tenant via the Azure portal.
-1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com/).
-1.If you have access to multiple tenants, use the **Directories + subscriptions** filter :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to the customer tenant you created earlier.
-1. In the search bar, type and select **Company branding**.
-1. Under **Default sign-in experience**, select **Edit**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com).
+1. If you have access to multiple tenants, use the **Directories + subscriptions** filter :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to the customer tenant you created earlier.
+1. Browse to **Company branding** > **Default sign-in experience** > **Edit**.
1. Remove the elements you no longer need.
1. Once finished, select **Review + save**.
1. Wait a few minutes for the changes to take effect.
active-directory How To Customize Languages Customers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/how-to-customize-languages-customers.md
You can create a personalized sign-in experience for users who sign in using a s
## Add browser language under Company branding
-1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com/).
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com).
1. If you have access to multiple tenants, use the **Directories + subscriptions** filter :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to the customer tenant you created earlier.
-1. In the search bar, type and select **Company branding**.
-1. Under **Browser language customizations**, select **Add browser language**.
+1. Browse to **Company branding** > **Browser language customizations** > **Add browser language**.
:::image type="content" source="media/how-to-customize-languages-customers/company-branding-add-browser-language.png" alt-text="Screenshot of the browser language customizations tab." lightbox="media/how-to-customize-languages-customers/company-branding-add-browser-language.png":::
The following languages are supported in the customer tenant:
  - Spanish (Spain)
  - Swedish (Sweden)
  - Thai (Thailand)
- - Turkish (Turkey)
+ - Turkish (Türkiye)
  - Ukrainian (Ukraine)

6. Customize the elements on the **Basics**, **Layout**, **Header**, **Footer**, **Sign-in form**, and **Text** tabs. For detailed instructions, see [Customize the branding and end-user experience](how-to-customize-branding-customers.md).
The following languages are supported in the customer tenant:
Language customization in the customer tenant allows your user flow to accommodate different languages to suit your customer's needs. You can use languages to modify the strings displayed to your customers as part of the attribute collection process during sign-up.
-1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com/).
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com).
2. If you have access to multiple tenants, use the **Directories + subscriptions** filter :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to the customer tenant you created earlier.
-3. In the left menu, select **Azure Active Directory** > **External Identities**.
-4. Select **User flows**.
+3. Browse to **Identity** > **External Identities** > **User flows**.
5. Select the user flow that you want to enable for translations.
6. Select **Languages**.
7. On the **Languages** page for the user flow, select the language that you want to customize.
active-directory How To Define Custom Attributes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/how-to-define-custom-attributes.md
Previously updated : 07/12/2023 Last updated : 08/31/2023
User attributes are values collected from the user during self-service sign-up.
- Street Address
- Surname
-If you want to collect information beyond the built-in attributes, you can create *custom user attributes* and add them to your sign-up user flow. Custom attributes are also known as directory extension attributes because they extend the user profile information stored in your customer directory. All extension attributes for your customer tenant are stored in an app named *b2c-extensions-app*. After a user enters a value for the custom attribute during sign-up, it's added to the user object and can be called via the Microsoft Graph API.
+If you want to collect information beyond the built-in attributes, you can create *custom user attributes* and add them to your sign-up user flow. Custom attributes are also known as directory extension attributes because they extend the user profile information stored in your customer directory. All extension attributes for your customer tenant are stored in an app named *b2c-extensions-app*. After a user enters a value for the custom attribute during sign-up, it's added to the user object and can be called via the Microsoft Graph API using the naming convention `extension_<b2c-extensions-app-id>_attributename`.
If your application relies on certain built-in or custom user attributes, you can [include these attributes in the token](how-to-add-attributes-to-token.md) that is sent to your application.
+
## Create custom attributes
-1. In the [Microsoft Entra admin center](https://entra.microsoft.com/), select **Azure Active Directory**.
-1. Select **External Identities** > **Overview**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com).
+1. Browse to **Identity** > **External Identities** > **Overview**.
1. Select **Custom user attributes**. The available user attributes are listed.
1. To add an attribute, select **Add**. In the **Add an attribute** pane, enter the following values:
If your application relies on certain built-in or custom user attributes, you ca
:::image type="content" source="media/how-to-define-custom-attributes/add-attribute.png" alt-text="Screenshot of the pane for adding an attribute." lightbox="media/how-to-define-custom-attributes/add-attribute.png":::
-1. Select **Create**. The custom attribute is now available in the list of user attributes and can be added to your user flows.
+1. Select **Create**. The custom attribute is now available in the list of user attributes and can be [added to your user flows](#include-custom-attributes-in-a-sign-up-flow).
+
+### About referencing custom attributes
+
+The custom attributes you create are added to the *b2c-extensions-app* registered in your customer tenant. If you want to call a custom attribute from an application or manage it via Microsoft Graph, use the naming convention `extension_<b2c-extensions-app-id>_<custom-attribute-name>` where:
+
+- `<b2c-extensions-app-id>` is the *b2c-extensions-app* application ID with no hyphens.
+- `<custom-attribute-name>` is the name you assigned to the custom attribute.
+
+To find the application ID for the *b2c-extensions-app* registered in your customer tenant:
+
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com).
+1. Browse to **Identity** > **App registrations** > **All applications**.
+1. Select the application **b2c-extensions-app. Do not modify. Used by AADB2C for storing user data.**
+2. On the **Overview** page, use the **Application (client) ID** value, for example: `12345678-abcd-1234-1234-ab123456789`, but remove the hyphens.
+
+**Example**: If you created a custom attribute named **loyaltyNumber**, refer to it as follows:
+
+`extension_12345678abcd12341234ab123456789_loyaltyNumber`
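For example, a minimal sketch that reads that attribute back for a user (the user ID is a placeholder; assumes the Microsoft.Graph PowerShell module):

```powershell
# Minimal sketch: read the extension attribute back for a user (IDs are placeholders)
Connect-MgGraph -Scopes "User.Read.All"

$attr = "extension_12345678abcd12341234ab123456789_loyaltyNumber"
Invoke-MgGraphRequest -Method GET `
    -Uri "https://graph.microsoft.com/v1.0/users/<user-object-id>?`$select=id,displayName,$attr"
```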
-## Include the attributes in a sign-up flow
+## Include custom attributes in a sign-up flow
-Follow these steps to add sign-up attributes to a user flow you've already created. (For a new user flow, see [Create a sign-up and sign-in user flow for customers](how-to-user-flow-sign-up-sign-in-customers.md).)
+Follow these steps to add custom attributes to a user flow you've already created. (For a new user flow, see [Create a sign-up and sign-in user flow for customers](how-to-user-flow-sign-up-sign-in-customers.md).)
-1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com/).
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com).
1. If you have access to multiple tenants, use the **Directories + subscriptions** filter :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to your customer tenant.
-1. In the left pane, select **Azure Active Directory** > **External Identities** > **User flows**.
+1. Browse to **Identity** > **External Identities** > **User flows**.
1. Select the user flow from the list.
Follow these steps to add sign-up attributes to a user flow you've already creat
You can choose the order in which the attributes are displayed on the sign-up page.
-1. In the [Microsoft Entra admin center](https://entra.microsoft.com/), select **Azure Active Directory**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com).
-1. In the left pane, select **Azure Active Directory** > **External Identities** > **User flows**.
+1. Browse to **Identity** > **External Identities** > **User flows**.
1. From the list, select your user flow.
active-directory How To Enable Password Reset Customers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/how-to-enable-password-reset-customers.md
The following screenshots show the self-service password reset flow. From the app
## Enable self-service password reset for customers
-1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com/).
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com).
1. If you have access to multiple tenants, use the **Directories + subscriptions** filter :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to the customer tenant you created earlier.
-1. In the navigation pane, select **Azure Active Directory**.
-1. Select **External Identities** > **User flows**.
+1. Browse to **Identity** > **External Identities** > **User flows**.
1. From the list of **User flows**, select the user flow for which you want to enable SSPR.
1. Make sure that the sign-up user flow registers **Email with password** as an authentication method under **Identity providers**.
The following screenshots show the self-service password rest flow. From the app
To enable self-service password reset, you need to enable the email one-time passcode (Email OTP) authentication method for all users in your tenant. To ensure that the Email OTP feature is enabled follow the steps below:
- 1. Select **Protect & secure** from the sidebar under **Azure Active Directory** and then **Authentication methods** > **Policies**.
+ 1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com).
+
+ 1. Browse to **Identity** > **Protection** > **Authentication methods**.
- 1. Under **Method** select **Email OTP (preview)**.
+ 1. Under **Policies** > **Method**, select **Email OTP (preview)**.
:::image type="content" source="media/how-to-enable-password-reset-customers/authentication-methods.png" alt-text="Screenshot that shows authentication methods.":::
active-directory How To Facebook Federation Customers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/how-to-facebook-federation-customers.md
Last updated 06/20/2023 --+ #Customer intent: As a dev, devops, or it admin, I want to
If you don't already have a Facebook account, sign up at [https://www.facebook.c
   - `https://<tenant-name>.ciamlogin.com/<tenant-ID>/federation/oauth2`
   - `https://<tenant-name>.ciamlogin.com/<tenant-name>.onmicrosoft.com/federation/oauth2`

   > [!NOTE]
- > To find your customer tenant ID, go to the [Microsoft Entra admin center](https://entra.microsoft.com). Under **Azure Active Directory**, select **Overview**. Then select the **Overview** tab and copy the **Tenant ID**.
+ > To find your customer tenant ID, sign in to the [Microsoft Entra admin center](https://entra.microsoft.com). Browse to **Identity** > **Overview**. Then select the **Overview** tab and copy the **Tenant ID**.
1. Select **Save changes** at the bottom of the page.
1. At this point, only Facebook application owners can sign in. Because you registered the app, you can sign in with your Facebook account. To make your Facebook application available to your users, from the menu, select **Go live**. Follow all of the steps listed to complete all requirements. You'll likely need to complete the business verification to verify your identity as a business entity or organization. For more information, see [Meta App Development](https://developers.facebook.com/docs/development/release).
If you don't already have a Facebook account, sign up at [https://www.facebook.c
After you create the Facebook application, in this step you set the Facebook client ID and client secret in Azure AD. You can use the Azure portal or PowerShell to do so. To configure Facebook federation in the Microsoft Entra admin center, follow these steps:
-1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com/) as the global administrator of your customer tenant.
-1. Go to **Azure Active Directory** > **External Identities** > **All identity providers**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com).
+1. Browse to **Identity** > **External Identities** > **All identity providers**.
2. Select **+ Facebook**. <!-- ![Screenshot that shows how to add Facebook identity provider in Azure AD.](./media/sign-in-with-facebook/configure-facebook-idp.png)-->
To configure Facebook federation by using PowerShell, follow these steps:
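The PowerShell steps ultimately create a `socialIdentityProvider` object through Microsoft Graph. As an illustration only (not the article's own script), the equivalent call with the Microsoft Graph JavaScript SDK might look like the following; the token acquisition, the permission, and the placeholder values are assumptions:

```javascript
const { Client } = require('@microsoft/microsoft-graph-client');

// accessToken: acquired elsewhere with the IdentityProvider.ReadWrite.All permission (assumption).
async function addFacebookIdp(accessToken) {
    const client = Client.init({
        authProvider: (done) => done(null, accessToken),
    });

    // Create the Facebook identity provider in the customer tenant.
    await client.api('/identity/identityProviders').post({
        '@odata.type': '#microsoft.graph.socialIdentityProvider',
        displayName: 'Facebook',
        identityProviderType: 'Facebook',
        clientId: '<facebook-app-id>',         // App ID from the Facebook app you created
        clientSecret: '<facebook-app-secret>', // Keep out of source control; shown inline for brevity
    });
}
```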
At this point, the Facebook identity provider has been set up in your customer tenant, but it's not yet available in any of the sign-in pages. To add the Facebook identity provider to a user flow:
-1. In your customer tenant, go to **Azure Active Directory** > **External Identities** > **User flows**.
+1. Browse to **Identity** > **External Identities** > **User flows**.
1. Select the user flow where you want to add the Facebook identity provider. 1. Under **Settings**, select **Identity providers**. 1. Under **Other Identity Providers**, select **Facebook**.
At this point, the Facebook identity provider has been set up in your customer t
## Next steps - [Add Google as an identity provider](how-to-google-federation-customers.md)-- [Customize the branding for customer sign-in experiences](how-to-customize-branding-customers.md)
+- [Customize the branding for customer sign-in experiences](how-to-customize-branding-customers.md)
active-directory How To Google Federation Customers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/how-to-google-federation-customers.md
Last updated 05/24/2023 --+ #Customer intent: As a dev, devops, or it admin, I want to
To enable sign-in for customers with a Google account, you need to create an app
- `https://<tenant-ID>.ciamlogin.com/<tenant-ID>/federation/oauth2` - `https://<tenant-ID>.ciamlogin.com/<tenant-name>.onmicrosoft.com/federation/oauth2` > [!NOTE]
- > To find your customer tenant ID, go to the [Microsoft Entra admin center](https://entra.microsoft.com). Under **Azure Active Directory**, select **Overview**. Then select the **Overview** tab and copy the **Tenant ID**.
+ > To find your customer tenant ID, sign in to the [Microsoft Entra admin center](https://entra.microsoft.com). Browse to **Identity** > **Overview** and copy the **Tenant ID**.
2. Select **Create**. 3. Copy the values of **Client ID** and **Client secret**. You need both values to configure Google as an identity provider in your tenant. **Client secret** is an important security credential.
To enable sign-in for customers with a Google account, you need to create an app
After you create the Google application, in this step you set the Google client ID and client secret in Azure AD. You can use the Microsoft Entra admin center or PowerShell to do so. To configure Google federation in the Microsoft Entra admin center, follow these steps:
-1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com/) as the global administrator of your customer tenant.
-1. Go to **Azure Active Directory** > **External Identities** > **All identity providers**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com). 
+1. Browse to **Identity** > **External Identities** > **All identity providers**.
2. Select **+ Google**. <!-- ![Screenshot that shows how to add Google identity provider in Azure AD.](./media/sign-in-with-google/configure-google-idp.png)-->
To configure Google federation by using PowerShell, follow these steps:
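Google federation uses the same Graph object as Facebook, only with Google-specific values. As a variation on the earlier Facebook sketch (same assumptions about permissions and token acquisition):

```javascript
// client: a Microsoft Graph client initialized as in the Facebook sketch above.
async function addGoogleIdp(client) {
    await client.api('/identity/identityProviders').post({
        '@odata.type': '#microsoft.graph.socialIdentityProvider',
        displayName: 'Google',
        identityProviderType: 'Google',
        clientId: '<google-client-id>',
        clientSecret: '<google-client-secret>',
    });
}
```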
At this point, the Google identity provider has been set up in your Azure AD, but it's not yet available in any of the sign-in pages. To add the Google identity provider to a user flow:
-1. In your customer tenant, go to **Azure Active Directory** > **External Identities** > **User flows**.
+1. In your customer tenant, browse to **Identity** > **External Identities** > **User flows**.
1. Select the user flow where you want to add the Google identity provider. 1. Under **Settings**, select **Identity providers**. 1. Under **Other Identity Providers**, select **Google**.
At this point, the Google identity provider has been set up in your Azure AD, bu
## Next steps - [Add Facebook as an identity provider](how-to-facebook-federation-customers.md)-- [Customize the branding for customer sign-in experiences](how-to-customize-branding-customers.md)
+- [Customize the branding for customer sign-in experiences](how-to-customize-branding-customers.md)
active-directory How To Identity Protection Customers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/how-to-identity-protection-customers.md
An administrator can choose to dismiss a user's risk in the Microsoft Entra admi
1. Make sure you're using the directory that contains your Azure AD customer tenant: Select the Directories + subscriptions icon :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the toolbar and find your customer tenant in the list. If it's not the current directory, select **Switch**.
-1. Browse to **Azure Active Directory** > **Protect & secure** > **Security Center**.
+1. Browse to **Identity** > **Protection** > **Security Center**.
1. Select **Identity Protection**.
Administrators can then choose to return to the user's risk or sign-ins report t
### Navigating the risk detections report
-1. In the [Microsoft Entra admin center](https://entra.microsoft.com), browse to **Azure Active Directory** > **Protect & secure** > **Security Center**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com).
+
+1. Browse to **Identity** > **Protection** > **Security Center**.
1. Select **Identity Protection**.
active-directory How To Manage Admin Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/how-to-manage-admin-accounts.md
In Azure Active Directory (Azure AD) for customers, a customer tenant represents
To create a new admin account, follow these steps:
-1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com/) with Global Administrator or Privileged Role Administrator permissions.
-1. Make sure you're using your customer tenant. Select the **Directories + subscriptions** icon :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your customer tenant in the **Directory name** list, and then select **Switch**.
-1. Under **Azure Active Directory**, select **Users** > **All users**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) with Global Administrator or Privileged Role Administrator permissions.
+1. If you have access to multiple tenants, use the **Directories + subscriptions** filter :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to your customer tenant.
+1. Browse to **Identity** > **Users** > **All users**.
1. Select **New user** > **Create new user**. 1. Enter information for this admin:
The admin is created and added to your customer tenant. It's preferable to have
You can also invite a new guest user to manage your tenant. To invite an admin, follow these steps:
-1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com/) with Global Administrator or Privileged Role Administrator permissions.
-1. Make sure you're using your customer tenant. Select the **Directories + subscriptions** icon :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your customer tenant in the **Directory name** list, and then select **Switch**.
-1. Under **Azure Active Directory**, select **Users** > **All users**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) with Global Administrator or Privileged Role Administrator permissions.
+1. If you have access to multiple tenants, use the **Directories + subscriptions** filter :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to your customer tenant.
+1. Browse to **Identity** > **Users** > **All users**.
1. Select **New user** > **Invite external user**. 1. On the **New user** page, enter information for the admin:
An invitation email is sent to the user. The user needs to accept the invitation
You can assign a role when you create a user or invite a guest user. You can add a role, change the role, or remove a role for a user:
-1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com/) with Global Administrator or Privileged Role Administrator permissions.
-1. Make sure you're using your customer tenant. Select the **Directories + subscriptions** icon :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your customer tenant in the **Directory name** list, and then select **Switch**.
-1. Under **Azure Active Directory**, select **Users** > **All users**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) with Global Administrator or Privileged Role Administrator permissions.
+1. If you have access to multiple tenants, use the **Directories + subscriptions** filter :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to your customer tenant.
+1. Browse to **Identity** > **Users** > **All users**.
1. Select the user you want to change the roles for. Then select **Assigned roles**. 1. Select **Add assignments**, select the role to assign (for example, *Application administrator*), and then choose **Add**.
You can assign a role when you create a user or invite a guest user. You can add
If you need to remove a role assignment from a user, follow these steps:
-1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com/) with Global Administrator or Privileged Role Administrator permissions.
-1. Make sure you're using your customer tenant. Select the **Directories + subscriptions** icon :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your customer tenant in the **Directory name** list, and then select **Switch**.
-1. Under **Azure Active Directory**, select **Users** > **All users**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) with Global Administrator or Privileged Role Administrator permissions.
+1. If you have access to multiple tenants, use the **Directories + subscriptions** filter :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to your customer tenant.
+1. Browse to **Identity** > **Users** > **All users**.
1. Select the user you want to change the roles for. Then select **Assigned roles**. 1. Select the role you want to remove, for example *Application administrator*, and then select **Remove assignment**.
If you need to remove a role assignment from a user, follow these steps:
As part of an auditing process, you typically review which users are assigned to specific roles in your customer directory. Use the following steps to audit which users are currently assigned privileged roles.
-1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com/) with Global Administrator or Privileged Role Administrator permissions.
-1. Make sure you're using your customer tenant. Select the **Directories + subscriptions** icon :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your customer tenant in the **Directory name** list, and then select **Switch**.
-1. Under **Azure Active Directory**, select **Roles & admins** > **Roles & admins**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) with Global Administrator or Privileged Role Administrator permissions.
+1. If you have access to multiple tenants, use the **Directories + subscriptions** filter :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to your customer tenant.
+1. Browse to **Identity** > **Roles & admins** > **Roles & admins**.
2. Select a role, such as **Global administrator**. The **Assignments** page lists the users with that role. ## Delete an administrator account To delete an existing user, you must have a *Global administrator* role assignment. Global admins can delete any user, including other admins. *User administrators* can delete any non-admin user.
-1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com/) with Global Administrator or Privileged Role Administrator permissions.
-1. Make sure you're using your customer tenant. Select the **Directories + subscriptions** icon :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your customer tenant in the **Directory name** list, and then select **Switch**.
-1. Under **Azure Active Directory**, select **Users** > **All users**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) with Global Administrator or Privileged Role Administrator permissions.
+1. If you have access to multiple tenants, use the **Directories + subscriptions** filter :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to your customer tenant.
+1. Browse to **Identity** > **Users** > **All users**.
1. Select the user you want to delete. 1. Select **Delete**, and then **Yes** to confirm the deletion.
active-directory How To Manage Customer Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/how-to-manage-customer-accounts.md
To add or delete users, your account must be assigned the *User administrator* o
## Create a customer account
-1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com/) with Global Administrator or Privileged Role Administrator permissions.
-1. Make sure you're using your customer tenant. Select the **Directories + subscriptions** icon :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your customer tenant in the **Directory name** list, and then select **Switch**.
-1. Under **Azure Active Directory**, select **Users** > **All users**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) with Global Administrator or Privileged Role Administrator permissions.
+1. If you have access to multiple tenants, use the **Directories + subscriptions** filter :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to your customer tenant.
+1. Browse to **Identity** > **Users** > **All users**.
1. Select **New user** > **Create new user**. 1. Select **Create a customer**. 1. Under **Identity**, select a **Sign in method** and enter the **Value**:
As an administrator, you can reset a user's password, if the user forgets their
To reset a customer's password:
-1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com/) with Global Administrator or Privileged Role Administrator permissions.
-1. Make sure you're using your customer tenant. Select the **Directories + subscriptions** icon :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your customer tenant in the **Directory name** list, and then select **Switch**.
-1. Under **Azure Active Directory**, select **Users** > **All users**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) with Global Administrator or Privileged Role Administrator permissions.
+1. If you have access to multiple tenants, use the **Directories + subscriptions** filter :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to your customer tenant.
+1. Browse to **Identity** > **Users** > **All users**.
1. Search for and select the user that needs the reset, and then select **Reset Password**. 1. In the **Reset password** page, select **Reset password**. 1. Copy the password and give it to the user. The user will be required to change the password during the next sign-in process. ## Delete a customer account
-1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com/) with Global Administrator or Privileged Role Administrator permissions.
-1. Make sure you're using your customer tenant. Select the **Directories + subscriptions** icon :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your customer tenant in the **Directory name** list, and then select **Switch**.
-1. Under **Azure Active Directory**, select **Users** > **All users**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) with Global Administrator or Privileged Role Administrator permissions.
+1. If you have access to multiple tenants, use the **Directories + subscriptions** filter :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to your customer tenant.
+1. Browse to **Identity** > **Users** > **All users**.
1. Search for and select the user to delete. 1. Select **Delete**, and then **Yes** to confirm the deletion.
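Customer accounts can also be created programmatically. As an illustration (not part of the original steps), a local account that signs in with an email address and password might be created with the Microsoft Graph JavaScript SDK like this; the domain, email, and token acquisition are assumptions:

```javascript
const { Client } = require('@microsoft/microsoft-graph-client');

// accessToken: acquired elsewhere with the User.ReadWrite.All permission (assumption).
async function createCustomerAccount(accessToken) {
    const client = Client.init({
        authProvider: (done) => done(null, accessToken),
    });

    await client.api('/users').post({
        displayName: 'Casey Jensen',
        identities: [
            {
                signInType: 'emailAddress',         // local account that signs in with an email address
                issuer: 'contoso.onmicrosoft.com',  // your customer tenant's domain (assumption)
                issuerAssignedId: 'casey.jensen@example.com',
            },
        ],
        passwordProfile: {
            password: '<initial-password>',          // the user must change it at next sign-in
            forceChangePasswordNextSignIn: true,
        },
        passwordPolicies: 'DisablePasswordExpiration',
    });
}
```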
active-directory How To Multifactor Authentication Customers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/how-to-multifactor-authentication-customers.md
Create a Conditional Access policy in your customer tenant that prompts users fo
1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as a Conditional Access Administrator, Security Administrator, or Global Administrator.
-1. Make sure you're using the directory that contains your Azure AD customer tenant: Select the **Directories + subscriptions** icon :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the toolbar and find your customer tenant in the list. If it's not the current directory, select **Switch**.
+1. If you have access to multiple tenants, use the **Directories + subscriptions** filter :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to your customer tenant.
-1. Browse to **Azure Active Directory** > **Protect & secure** > **Security Center**.
+1. Browse to **Identity** > **Protection** > **Security Center**.
1. Select **Conditional Access** > **Policies**, and then select **New policy**.
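Conditional Access policies can also be managed through Microsoft Graph. A minimal sketch of an MFA policy created in report-only mode (illustration only; the permission and the token acquisition are assumptions):

```javascript
const { Client } = require('@microsoft/microsoft-graph-client');

// accessToken: acquired elsewhere with the Policy.ReadWrite.ConditionalAccess permission (assumption).
async function createMfaPolicy(accessToken) {
    const client = Client.init({
        authProvider: (done) => done(null, accessToken),
    });

    await client.api('/identity/conditionalAccess/policies').post({
        displayName: 'Require MFA for all users',
        state: 'enabledForReportingButNotEnforced', // report-only first; switch to 'enabled' after testing
        conditions: {
            users: { includeUsers: ['All'] },
            applications: { includeApplications: ['All'] },
        },
        grantControls: {
            operator: 'OR',
            builtInControls: ['mfa'],
        },
    });
}
```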
Create a Conditional Access policy in your customer tenant that prompts users fo
Enable the email one-time passcode authentication method in your customer tenant for all users.
-1. Sign in to your customer tenant in the [Microsoft Entra admin center](https://entra.microsoft.com).
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com).
-1. Browse to **Azure Active Directory** > **Protect & secure** > **Authentication Methods**.
+1. Browse to **Identity** > **Protection** > **Authentication methods**.
1. In the **Method** list, select **Email OTP**.
active-directory How To Register Ciam App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/how-to-register-ciam-app.md
Azure AD for customers supports authentication for Single-page apps (SPAs).
The following steps show you how to register your SPA in the Microsoft Entra admin center:
-1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com/).
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com).
-1. If you have access to multiple tenants, make sure you use the directory that contains your Azure AD for customers tenant:
-
- 1. Select the **Directories + subscriptions** icon :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the portal toolbar.
-
- 1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD for customers directory in the **Directory name** list, and then select **Switch**.
-
-1. On the sidebar menu, select **Azure Active Directory**.
+1. If you have access to multiple tenants, use the **Directories + subscriptions** filter :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to your customer tenant.
-1. Select **Applications**, then select **App Registrations**.
+1. Browse to **Identity** > **Applications** > **App registrations**.
1. Select **+ New registration**.
Azure AD for customers supports authentication for web apps.
The following steps show you how to register your web app in the Microsoft Entra admin center:
-1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com/).
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com).
-1. If you have access to multiple tenants, make sure you use the directory that contains your Azure AD for customers tenant:
-
- 1. Select the **Directories + subscriptions** icon :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the portal toolbar.
-
- 1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD for customers directory in the **Directory name** list, and then select **Switch**.
+1. If you have access to multiple tenants, use the **Directories + subscriptions** filter :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to your customer tenant.
-1. On the sidebar menu, select **Azure Active Directory**.
-
-1. Select **Applications**, then select **App Registrations**.
+1. Browse to **Identity** > **Applications** > **App registrations**.
1. Select **+ New registration**.
If your web app needs to call an API, you must grant your web app API permission
The following steps show you how to register your app in the Microsoft Entra admin center:
-1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com/).
-
-1. If you have access to multiple tenants, make sure you use the directory that contains your Azure AD for customers tenant:
-
- 1. Select the **Directories + subscriptions** icon :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the portal toolbar.
-
- 1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD for customers directory in the **Directory name** list, and then select **Switch**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com).
-1. On the sidebar menu, select **Azure Active Directory**.
+1. If you have access to multiple tenants, use the **Directories + subscriptions** filter :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to your customer tenant.
-1. Select **Applications**, then select **App Registrations**.
+1. Browse to **Identity** > **Applications** > **App registrations**.
1. Select **+ New registration**.
active-directory How To Single Page App Vanillajs Configure Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/how-to-single-page-app-vanillajs-configure-authentication.md
- Title: Tutorial - Handle authentication flows in a vanilla JavaScript single-page app
-description: Learn how to configure authentication for a vanilla JavaScript single-page app (SPA) with your Azure Active Directory (AD) for customers tenant.
--------- Previously updated : 06/09/2023
-#Customer intent: As a developer, I want to learn how to configure vanilla JavaScript single-page app (SPA) to sign in and sign out users with my Azure Active Directory (AD) for customers tenant.
--
-# Tutorial: Handle authentication flows in a vanilla JavaScript single-page app
-
-In the [previous article](./how-to-single-page-app-vanillajs-prepare-app.md), you created a vanilla JavaScript (JS) single-page application (SPA) and a server to host it. This tutorial demonstrates how to configure the application to authenticate and authorize users to access protected resources.
-
-In this tutorial:
-
-> [!div class="checklist"]
-> * Configure the settings for the application
-> * Add code to *authRedirect.js* to handle the authentication flow
-> * Add code to *authPopup.js* to handle the authentication flow
-
-## Prerequisites
-
-* Completion of the prerequisites and steps in [Prepare a single-page application for authentication](how-to-single-page-app-vanillajs-prepare-app.md).
-
-## Edit the authentication configuration file
-
-The application uses the [authorization code flow with PKCE](../../develop/v2-oauth2-auth-code-flow.md) to authenticate users (MSAL.js 2.x no longer uses the implicit grant). It's a browser-based flow that doesn't require a back-end server. The flow redirects the user to the sign-in page, where the user signs in and consents to the permissions that are being requested by the application. The purpose of *authConfig.js* is to configure the authentication flow.
-
-1. Open *public/authConfig.js* and add the following code snippet:
-
- ```javascript
- /**
- * Configuration object to be passed to MSAL instance on creation.
- * For a full list of MSAL.js configuration parameters, visit:
- * https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-browser/docs/configuration.md
- */
- const msalConfig = {
- auth: {
- clientId: 'Enter_the_Application_Id_Here', // This is the ONLY mandatory field that you need to supply.
- authority: 'https://Enter_the_Tenant_Subdomain_Here.ciamlogin.com/', // Replace "Enter_the_Tenant_Subdomain_Here" with your tenant subdomain
- redirectUri: '/', // You must register this URI on Azure Portal/App Registration. Defaults to window.location.href e.g. http://localhost:3000/
- navigateToLoginRequestUrl: true, // If "true", will navigate back to the original request location before processing the auth code response.
- },
- cache: {
- cacheLocation: 'sessionStorage', // Configures cache location. "sessionStorage" is more secure, but "localStorage" gives you SSO.
- storeAuthStateInCookie: false, // set this to true if you have to support IE
- },
- system: {
- loggerOptions: {
- loggerCallback: (level, message, containsPii) => {
- if (containsPii) {
- return;
- }
- switch (level) {
- case msal.LogLevel.Error:
- console.error(message);
- return;
- case msal.LogLevel.Info:
- console.info(message);
- return;
- case msal.LogLevel.Verbose:
- console.debug(message);
- return;
- case msal.LogLevel.Warning:
- console.warn(message);
- return;
- }
- },
- },
- },
-    };
-
-    /**
-     * Request object passed to the MSAL login APIs and referenced by the
-     * exports below. MSAL.js adds the OIDC scopes (openid, profile) by default.
-     */
-    const loginRequest = {
-        scopes: [],
-    };
-
- /**
- * An optional silentRequest object can be used to achieve silent SSO
- * between applications by providing a "login_hint" property.
- */
-
- // const silentRequest = {
- // scopes: ["openid", "profile"],
- // loginHint: "example@domain.net"
- // };
-
- // exporting config object for jest
- if (typeof exports !== 'undefined') {
- module.exports = {
- msalConfig: msalConfig,
- loginRequest: loginRequest,
- };
- }
- ```
-
-1. Replace the following values with the values from the Azure portal:
- - Find the `Enter_the_Application_Id_Here` value and replace it with the **Application ID (clientId)** of the app you registered in the Microsoft Entra admin center.
- - In **Authority**, find `Enter_the_Tenant_Subdomain_Here` and replace it with the subdomain of your tenant. For example, if your tenant primary domain is `contoso.onmicrosoft.com`, use `contoso`. If you don't have your tenant name, [learn how to read your tenant details](how-to-create-customer-tenant-portal.md#get-the-customer-tenant-details).
-2. Save the file.
-
-## Adding code to the redirection file
-
-A redirection file is required to handle the response from the sign-in page. It extracts the authentication result returned after the redirect and uses it to call the protected API. It also handles errors that occur during the authentication process.
-
-1. Open *public/authRedirect.js* and add the following code snippet:
-
- ```javascript
- // Create the main myMSALObj instance
- // configuration parameters are located at authConfig.js
- const myMSALObj = new msal.PublicClientApplication(msalConfig);
-
- let username = "";
-
- /**
- * A promise handler needs to be registered for handling the
- * response returned from redirect flow. For more information, visit:
- * https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-browser/docs/initialization.md#redirect-apis
- */
- myMSALObj.handleRedirectPromise()
- .then(handleResponse)
- .catch((error) => {
- console.error(error);
- });
-
- function selectAccount() {
-
- /**
- * See here for more info on account retrieval:
- * https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-common/docs/Accounts.md
- */
-
- const currentAccounts = myMSALObj.getAllAccounts();
-
- if (!currentAccounts) {
- return;
- } else if (currentAccounts.length > 1) {
- // Add your account choosing logic here
- console.warn("Multiple accounts detected.");
- } else if (currentAccounts.length === 1) {
- welcomeUser(currentAccounts[0].username);
- updateTable(currentAccounts[0]);
- }
- }
-
- function handleResponse(response) {
-
- /**
- * To see the full list of response object properties, visit:
- * https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-browser/docs/request-response-object.md#response
- */
-
- if (response !== null) {
- welcomeUser(response.account.username);
- updateTable(response.account);
- } else {
- selectAccount();
- }
- }
-
- function signIn() {
-
- /**
- * You can pass a custom request object below. This will override the initial configuration. For more information, visit:
- * https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-browser/docs/request-response-object.md#request
- */
-
- myMSALObj.loginRedirect(loginRequest);
- }
-
- function signOut() {
-
- /**
- * You can pass a custom request object below. This will override the initial configuration. For more information, visit:
- * https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-browser/docs/request-response-object.md#request
- */
-
- // Choose which account to logout from by passing a username.
- const logoutRequest = {
- account: myMSALObj.getAccountByUsername(username),
- postLogoutRedirectUri: '/signout', // remove this line if you would like navigate to index page after logout.
-
- };
-
- myMSALObj.logoutRedirect(logoutRequest);
- }
- ```
-
-1. Save the file.
-
-## Adding code to the *authPopup.js* file
-
-The application uses *authPopup.js* to handle the authentication flow when the user signs in using the pop-up window. The pop-up window is used when the user is already signed in and the application needs to get an access token for a different resource.
-
-1. Open *public/authPopup.js* and add the following code snippet:
-
- ```javascript
- // Create the main myMSALObj instance
- // configuration parameters are located at authConfig.js
- const myMSALObj = new msal.PublicClientApplication(msalConfig);
-
- let username = "";
-
- function selectAccount () {
-
- /**
- * See here for more info on account retrieval:
- * https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-common/docs/Accounts.md
- */
-
- const currentAccounts = myMSALObj.getAllAccounts();
-
- if (!currentAccounts || currentAccounts.length < 1) {
- return;
- } else if (currentAccounts.length > 1) {
- // Add your account choosing logic here
- console.warn("Multiple accounts detected.");
- } else if (currentAccounts.length === 1) {
- username = currentAccounts[0].username
- welcomeUser(currentAccounts[0].username);
- updateTable(currentAccounts[0]);
- }
- }
-
- function handleResponse(response) {
-
- /**
- * To see the full list of response object properties, visit:
- * https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-browser/docs/request-response-object.md#response
- */
-
- if (response !== null) {
- username = response.account.username
- welcomeUser(username);
- updateTable(response.account);
- } else {
- selectAccount();
- }
- }
-
- function signIn() {
-
- /**
- * You can pass a custom request object below. This will override the initial configuration. For more information, visit:
- * https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-browser/docs/request-response-object.md#request
- */
-
- myMSALObj.loginPopup(loginRequest)
- .then(handleResponse)
- .catch(error => {
- console.error(error);
- });
- }
-
- function signOut() {
-
- /**
- * You can pass a custom request object below. This will override the initial configuration. For more information, visit:
- * https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-browser/docs/request-response-object.md#request
- */
-
- // Choose which account to logout from by passing a username.
- const logoutRequest = {
- account: myMSALObj.getAccountByUsername(username),
- mainWindowRedirectUri: '/signout'
- };
-
- myMSALObj.logoutPopup(logoutRequest);
- }
-
- selectAccount();
- ```
-
-1. Save the file.
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Sign in and sign out of the vanilla JS SPA](./how-to-single-page-app-vanillajs-sign-in-sign-out.md)
active-directory How To Single Page App Vanillajs Prepare App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/how-to-single-page-app-vanillajs-prepare-app.md
- Title: Tutorial - Prepare a vanilla JavaScript single-page app (SPA) for authentication in a customer tenant
-description: Learn how to prepare a vanilla JavaScript single-page app (SPA) for authentication and authorization with your Azure Active Directory (AD) for customers tenant.
--------- Previously updated : 06/09/2023
-#Customer intent: As a developer, I want to learn how to configure vanilla JavaScript single-page app (SPA) to sign in and sign out users with my Azure AD for customers tenant.
--
-# Tutorial: Prepare a vanilla JavaScript single-page app for authentication in a customer tenant
-
-In the [previous article](./how-to-single-page-app-vanillajs-prepare-tenant.md), you registered an application and configured user flows in your Azure Active Directory (AD) for customers tenant. This article shows you how to create a vanilla JavaScript (JS) single-page app (SPA) and configure it to sign in and sign out users with your customer tenant.
-
-In this tutorial:
-
-> [!div class="checklist"]
-> * Create a vanilla JavaScript project in Visual Studio Code
-> * Install required packages
-> * Add code to *server.js* to create a server
-
-## Prerequisites
-
-* Completion of the prerequisites and steps in [Prepare your customer tenant to authenticate a vanilla JavaScript single-page app](how-to-single-page-app-vanillajs-prepare-tenant.md).
-* Although any integrated development environment (IDE) that supports vanilla JS applications can be used, **Visual Studio Code** is recommended for this guide. It can be downloaded from the [Downloads](https://visualstudio.microsoft.com/downloads) page.
-* [Node.js](https://nodejs.org/en/download/).
-
-## Create a new vanilla JS project and install dependencies
-
-1. Open Visual Studio Code, select **File** > **Open Folder...**. Navigate to and select the location in which to create your project.
-1. Open a new terminal by selecting **Terminal** > **New Terminal**.
-1. Run the following command to create a new vanilla JS project:
-
- ```powershell
- npm init -y
- ```
-1. Create additional folders and files to achieve the following project structure:
-
- ```
- └── public
-     └── authConfig.js
-     └── authPopup.js
-     └── authRedirect.js
-     └── index.html
-     └── signout.html
-     └── styles.css
-     └── ui.js
- └── server.js
- ```
-
-## Install app dependencies
-
-1. In the **Terminal**, run the following command to install the required dependencies for the project:
-
- ```powershell
- npm install express morgan @azure/msal-browser
- ```
-
-## Edit the *server.js* file
-
-**Express** is a web application framework for **Node.js**, used here to create the server that hosts the application. **Morgan** is middleware that logs HTTP requests to the console. The server file loads these dependencies and defines the routes for the application. Authentication and authorization are handled by the [Microsoft Authentication Library for JavaScript (MSAL.js)](/javascript/api/overview/).
-
-1. Add the following code snippet to the *server.js* file:
-
- ```javascript
- const express = require('express');
- const morgan = require('morgan');
- const path = require('path');
-
- const DEFAULT_PORT = process.env.PORT || 3000;
-
- // initialize express.
- const app = express();
-
- // Configure morgan module to log all requests.
- app.use(morgan('dev'));
-
- // serve public assets.
- app.use(express.static('public'));
-
- // serve msal-browser module
- app.use(express.static(path.join(__dirname, "node_modules/@azure/msal-browser/lib")));
-
- // set up a route for signout.html
- app.get('/signout', (req, res) => {
- res.sendFile(path.join(__dirname + '/public/signout.html'));
- });
-
- // set up a route for redirect.html
- app.get('/redirect', (req, res) => {
- res.sendFile(path.join(__dirname + '/public/redirect.html'));
- });
-
-    // set up a route for index.html
- app.get('/', (req, res) => {
-        res.sendFile(path.join(__dirname + '/public/index.html'));
- });
-
- app.listen(DEFAULT_PORT, () => {
- console.log(`Sample app listening on port ${DEFAULT_PORT}!`);
- });
-
- ```
-
-In this code, the **app** variable is initialized with the **express** module and **express** is used to serve the public assets. **Msal-browser** is served as a static asset and is used to initiate the authentication flow.
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Configure SPA for authentication](how-to-single-page-app-vanillajs-configure-authentication.md)
active-directory How To Single Page App Vanillajs Prepare Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/how-to-single-page-app-vanillajs-prepare-tenant.md
- Title: Tutorial - Prepare your customer tenant to authenticate users in a Vanilla JavaScript single-page application
-description: Learn how to configure your Azure Active Directory (AD) for customers tenant for authentication with a Vanilla JavaScript single-page app (SPA).
--------- Previously updated : 06/09/2023
-#Customer intent: As a developer, I want to learn how to configure a vanilla JavaScript single-page app (SPA) to sign in and sign out users with my Azure Active Directory (AD) for customers tenant.
--
-# Tutorial: Prepare your customer tenant to authenticate a vanilla JavaScript single-page app
-
-This tutorial series demonstrates how to build a vanilla JavaScript single-page application (SPA) and prepare it for authentication using the Microsoft Entra admin center. You'll use the [Microsoft Authentication Library for JavaScript](/javascript/api/overview/msal-overview) (MSAL.js) to authenticate your app with your Azure Active Directory (Azure AD) for customers tenant. Finally, you'll run the application and test the sign-in and sign-out experiences.
-
-In this tutorial:
-
-> [!div class="checklist"]
-> * Register a SPA in the Microsoft Entra admin center, and record its identifiers
-> * Define the platform and URLs
-> * Grant permissions to the SPA to access the Microsoft Graph API
-> * Create a sign in and sign out user flow in the Microsoft Entra admin center
-> * Associate your SPA with the user flow
-
-## Prerequisites
--- An Azure subscription. If you don't have one, [create a free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.-- This Azure account must have permissions to manage applications. Any of the following Azure AD roles include the required permissions:-
- * Application administrator
- * Application developer
- * Cloud application administrator
--- An Azure AD for customers tenant. If you haven't already, [create one now](https://aka.ms/ciam-free-trial?wt.mc_id=ciamcustomertenantfreetrial_linkclick_content_cnl). You can use an existing customer tenant if you have one.-
-## Register the SPA and record identifiers
--
-## Add a platform redirect URL
--
-## Grant API permissions
--
-## Create a user flow
--
-## Associate the SPA with the user flow
--
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Prepare your Vanilla JS SPA](how-to-single-page-app-vanillajs-prepare-app.md)
active-directory How To Single Page App Vanillajs Sign In Sign Out https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/how-to-single-page-app-vanillajs-sign-in-sign-out.md
- Title: Tutorial - Add sign-in and sign-out to a Vanilla JavaScript single-page app (SPA) for a customer tenant
-description: Learn how to configure a Vanilla JavaScript single-page app (SPA) to sign in and sign out users with your Azure Active Directory (AD) for customers tenant.
-------- Previously updated : 05/25/2023
-#Customer intent: As a developer, I want to learn how to configure Vanilla JavaScript single-page app (SPA) to sign in and sign out users with my Azure Active Directory (AD) for customers tenant.
--
-# Tutorial: Add sign-in and sign-out to a vanilla JavaScript single-page app for a customer tenant
-
-In the [previous article](how-to-single-page-app-vanillajs-configure-authentication.md), you edited the popup and redirection files that handle the sign-in page response. This tutorial demonstrates how to build a responsive user interface (UI) that contains a **Sign-In** and **Sign-Out** button and run the project to test the sign-in and sign-out functionality.
-
-In this tutorial:
-
-> [!div class="checklist"]
-> * Add code to the *index.html* file to create the user interface
-> * Add code to the *signout.html* file to create the sign-out page
-> * Sign in and sign out of the application
-
-## Prerequisites
-
-* Completion of the prerequisites and steps in [Create components for authentication and authorization](how-to-single-page-app-vanillajs-configure-authentication.md).
-
-## Add code to the *index.html* file
-
-The main page of the SPA, *index.html*, is the first page that is loaded when the application is started. It's also the page that is loaded when the user selects the **Sign-Out** button.
-
-1. Open *public/index.html* and add the following code snippet:
-
- ```html
- <!DOCTYPE html>
- <html lang="en">
-
- <head>
- <meta charset="UTF-8">
- <meta name="viewport" content="width=device-width, initial-scale=1.0, shrink-to-fit=no">
- <title>Microsoft identity platform</title>
- <link rel="SHORTCUT ICON" href="./favicon.svg" type="image/x-icon">
- <link rel="stylesheet" href="./styles.css">
-
- <!-- adding Bootstrap 5 for UI components -->
- <link href="https://cdn.jsdelivr.net/npm/bootstrap@5.2.2/dist/css/bootstrap.min.css" rel="stylesheet"
- integrity="sha384-Zenh87qX5JnK2Jl0vWa8Ck2rdkQ2Bzep5IDxbcnCeuOxjzrPF/et3URy9Bv1WTRi" crossorigin="anonymous">
-
- <!-- msal.min.js can be used in the place of msal-browser.js -->
- <script src="/msal-browser.min.js"></script>
- </head>
-
- <body>
- <nav class="navbar navbar-expand-sm navbar-dark bg-primary navbarStyle">
- <a class="navbar-brand" href="/">Microsoft identity platform</a>
- <div class="navbar-collapse justify-content-end">
- <button type="button" id="signIn" class="btn btn-secondary" onclick="signIn()">Sign-in</button>
- <button type="button" id="signOut" class="btn btn-success d-none" onclick="signOut()">Sign-out</button>
- </div>
- </nav>
- <br>
- <h5 id="title-div" class="card-header text-center">Vanilla JavaScript single-page application secured with MSAL.js
- </h5>
- <h5 id="welcome-div" class="card-header text-center d-none"></h5>
- <br>
- <div class="table-responsive-ms" id="table">
- <table id="table-div" class="table table-striped d-none">
- <thead id="table-head-div">
- <tr>
- <th>Claim Type</th>
- <th>Value</th>
- <th>Description</th>
- </tr>
- </thead>
- <tbody id="table-body-div">
- </tbody>
- </table>
- </div>
- <!-- importing bootstrap.js and supporting js libraries -->
- <script src="https://code.jquery.com/jquery-3.3.1.slim.min.js"
- integrity="sha384-q8i/X+965DzO0rT7abK41JStQIAqVgRVzpbzo5smXKp4YfRvH+8abtTE1Pi6jizo" crossorigin="anonymous">
- </script>
- <script src="https://cdn.jsdelivr.net/npm/@popperjs/core@2.11.6/dist/umd/popper.min.js"
- integrity="sha384-oBqDVmMz9ATKxIep9tiCxS/Z9fNfEXiDAYTujMAeBAsjFuCZSmKbSSUnQlmh/jp3"
- crossorigin="anonymous"></script>
- <script src="https://cdn.jsdelivr.net/npm/bootstrap@5.2.2/dist/js/bootstrap.bundle.min.js"
- integrity="sha384-OERcA2EqjJCMA+/3y+gxIOqMEjwtxJY7qPCqsdltbNJuaOe923+mo//f6V8Qbsw3"
- crossorigin="anonymous"></script>
-
- <!-- importing app scripts (load order is important) -->
- <script type="text/javascript" src="./authConfig.js"></script>
- <script type="text/javascript" src="./ui.js"></script>
- <script type="text/javascript" src="./claimUtils.js"></script>
- <!-- <script type="text/javascript" src="./authRedirect.js"></script> -->
- <!-- uncomment the above line and comment the line below if you would like to use the redirect flow -->
- <script type="text/javascript" src="./authPopup.js"></script>
- </body>
-
- </html>
- ```
-
-1. Save the file.
-
-## Add code to the *signout.html* file
-
-1. Open *public/signout.html* and add the following code snippet:
-
- ```html
- <!DOCTYPE html>
- <html lang="en">
- <head>
- <meta charset="UTF-8">
- <meta name="viewport" content="width=device-width, initial-scale=1.0">
- <title>Azure AD | Vanilla JavaScript SPA</title>
- <link rel="SHORTCUT ICON" href="./favicon.svg" type="image/x-icon">
-
- <!-- adding Bootstrap 4 for UI components -->
- <link rel="stylesheet" href="https://stackpath.bootstrapcdn.com/bootstrap/4.4.1/css/bootstrap.min.css" integrity="sha384-Vkoo8x4CGsO3+Hhxv8T/Q5PaXtkKtu6ug5TOeNV6gBiFeWPGFN9MuhOf23Q9Ifjh" crossorigin="anonymous">
- </head>
- <body>
- <div class="jumbotron" style="margin: 10%">
- <h1>Goodbye!</h1>
- <p>You have signed out and your cache has been cleared.</p>
- <a class="btn btn-primary" href="/" role="button">Take me back</a>
- </div>
- </body>
- </html>
- ```
-
-1. Save the file.
-
-## Add code to the *ui.js* file
-
-When authorization has been configured, the user interface can be created to allow users to sign in and sign out when the project is run. To build the user interface (UI) for the application, [Bootstrap](https://getbootstrap.com/) is used to create a responsive UI that contains a **Sign-In** and **Sign-Out** button.
-
-1. Open *public/ui.js* and add the following code snippet:
-
- ```javascript
- // Select DOM elements to work with
- const signInButton = document.getElementById('signIn');
- const signOutButton = document.getElementById('signOut');
- const titleDiv = document.getElementById('title-div');
- const welcomeDiv = document.getElementById('welcome-div');
- const tableDiv = document.getElementById('table-div');
- const tableBody = document.getElementById('table-body-div');
-
- function welcomeUser(username) {
- signInButton.classList.add('d-none');
- signOutButton.classList.remove('d-none');
- titleDiv.classList.add('d-none');
- welcomeDiv.classList.remove('d-none');
- welcomeDiv.innerHTML = `Welcome ${username}!`;
- };
-
- function updateTable(account) {
- tableDiv.classList.remove('d-none');
-
- const tokenClaims = createClaimsTable(account.idTokenClaims);
-
- Object.keys(tokenClaims).forEach((key) => {
- let row = tableBody.insertRow(0);
- let cell1 = row.insertCell(0);
- let cell2 = row.insertCell(1);
- let cell3 = row.insertCell(2);
- cell1.innerHTML = tokenClaims[key][0];
- cell2.innerHTML = tokenClaims[key][1];
- cell3.innerHTML = tokenClaims[key][2];
- });
- };
- ```
-
-1. Save the file.
-
-## Add code to the *styles.css* file
-
-1. Open *public/styles.css* and add the following code snippet:
-
- ```css
- .navbarStyle {
- padding: .5rem 1rem !important;
- }
-
- .table-responsive-ms {
- max-height: 39rem !important;
- padding-left: 10%;
- padding-right: 10%;
- }
- ```
-
-1. Save the file.
-
-## Run your project and sign in
-
-Now that all the required code snippets have been added, the application can be called and tested in a web browser.
-
-1. Open a new terminal and run the following command to start your express web server.
- ```powershell
- npm start
- ```
-1. Open a new private browser window and enter the application URI, `http://localhost:3000/`.
-1. Select **No account? Create one**, which starts the sign-up flow.
-1. In the **Create account** window, enter the email address registered to your Azure Active Directory (AD) for customers tenant, which starts the sign-up flow as a user for your application.
-1. After entering the one-time passcode sent from the customer tenant, enter a new password and the remaining account details to complete the sign-up flow.
-
- 1. If a window appears prompting you to **Stay signed in**, choose either **Yes** or **No**.
-
-1. The SPA will now display a button saying **Request Profile Information**. Select it to display profile data.
-
- :::image type="content" source="media/how-to-spa-vanillajs-sign-in-sign-in-out/display-vanillajs-welcome.png" alt-text="Screenshot of sign in into a vanilla JS SPA." lightbox="media/how-to-spa-vanillajs-sign-in-sign-in-out/display-vanillajs-welcome.png":::
-
-## Sign out of the application
-
-1. To sign out of the application, select **Sign out** in the navigation bar.
-1. A window appears asking which account to sign out of.
-1. Upon successful sign out, a final window appears advising you to close all browser windows.
-
-## Next steps
--- [Enable self-service password reset](./how-to-enable-password-reset-customers.md)
active-directory How To User Flow Add Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/how-to-user-flow-add-application.md
Because you might want the same sign-in experience for all of your customer-faci
If you already registered your application in your customer tenant, you can add it to the new user flow. This step activates the sign-up and sign-in experience for users who visit your application. An application can have only one user flow, but a user flow can be used by multiple applications.
-1. In the [Microsoft Entra admin center](https://entra.microsoft.com/), select **Azure Active Directory** > **External Identities** > **User flows**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com).
+
+1. Browse to **Identity** > **External Identities** > **User flows**.
1. From the list, select your user flow.
active-directory How To User Flow Sign Up Sign In Customers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/how-to-user-flow-sign-up-sign-in-customers.md
Follow these steps to create a user flow a customer can use to sign in or sign u
### To add a new user flow
-1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com/).
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com).
-1. If you have access to multiple tenants, use the **Directories + subscriptions** filter :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to your customer tenant.
+1. If you have access to multiple tenants, use the **Directories + subscriptions** filter :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to your customer tenant.
-1. In the left pane, select **Azure Active Directory** > **External Identities** > **User flows**.
+1. Browse to **Identity** > **External Identities** > **User flows**.
1. Select **New user flow**.
Follow these steps to create a user flow a customer can use to sign in or sign u
You can choose the order in which the attributes are displayed on the sign-up page.
-1. In the [Microsoft Entra admin center](https://entra.microsoft.com/), select **Azure Active Directory**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com).
-1. In the left pane, select **Azure Active Directory** > **External Identities** > **User flows**.
+1. Browse to **Identity** > **External Identities** > **User flows**.
1. From the list, select your user flow.
active-directory How To Web App Node Use Certificate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/how-to-web-app-node-use-certificate.md
Azure Active Directory (Azure AD) for customers supports two types of authentication for [confidential client applications](../../../active-directory/develop/msal-client-applications.md); password-based authentication (such as client secret) and certificate-based authentication. For a higher level of security, we recommend using a certificate (instead of a client secret) as a credential in your confidential client applications.
-In production, you should purchase a certificate signed by a well-known certificate authority, and use [Azure Key Vault](https://azure.microsoft.com/products/key-vault/) to manage certificate access and lifetime for you. However, for testing purposes, you can create a self-signed certificate and configure your apps to authenticate with it.
+In production, you should purchase a certificate signed by a well-known certificate authority, and use [Azure Key Vault](https://azure.microsoft.com/products/key-vault/) to manage certificate access and lifetime for you. However, for testing purposes, you can create a self-signed certificate and configure your apps to authenticate with it.
-In this article, you learn to generate a self-signed certificate by using [Azure Key Vault](https://azure.microsoft.com/products/key-vault/) on the Azure portal, OpenSSL or Windows PowerShell.
+In this article, you learn to generate a self-signed certificate by using [Azure Key Vault](https://azure.microsoft.com/products/key-vault/) on the Azure portal, OpenSSL or Windows PowerShell. If you have a client secret already, you'll learn how to safely delete it.
When needed, you can also create a self-signed certificate programmatically by using [.NET](/azure/key-vault/certificates/quick-create-net), [Node.js](/azure/key-vault/certificates/quick-create-node), [Go](/azure/key-vault/certificates/quick-create-go), [Python](/azure/key-vault/certificates/quick-create-python) or [Java](/azure/key-vault/certificates/quick-create-java) client libraries.
After the command finishes execution, you should have a *.crt* and a *.key* file
[!INCLUDE [active-directory-customers-app-integration-add-user-flow](./includes/register-app/add-client-app-certificate.md)] + ## Configure your Node.js app to use certificate Once you associate your app registration with the certificate, you need to update your app code to start using the certificate:
-1. Locate the file that contains your MSAL configuration object, such as `msalConfig` in *authConfig.js*, then update it to look similar to the following code:
+1. Locate the file that contains your MSAL configuration object, such as `msalConfig` in *authConfig.js*, then update it to look similar to the following code. If you have a client secret present, make sure you remove it:
```javascript require('dotenv').config();
Once you associate your app registration with the certificate, you need to updat
auth: { clientId: process.env.CLIENT_ID || 'Enter_the_Application_Id_Here', // 'Application (client) ID' of app registration in Azure portal - this value is a GUID authority: process.env.AUTHORITY || `https://${TENANT_SUBDOMAIN}.ciamlogin.com/`,
- //clientSecret: process.env.CLIENT_SECRET || 'Enter_the_Client_Secret_Here', // Client secret generated from the app registration in Azure portal
clientCertificate: { thumbprint: "YOUR_CERT_THUMBPRINT", // replace with thumbprint obtained during step 2 above privateKey: privateKey
Once you associate your app registration with the certificate, you need to updat
You can use your existing certificate directly from Azure Key Vault:
-1. Locate the file that contains your MSAL configuration object, such as `msalConfig` in *authConfig.js*, then comment the `clientSecret` property:
+1. Locate the file that contains your MSAL configuration object, such as `msalConfig` in *authConfig.js*, then remove the `clientSecret` property:
```javascript const msalConfig = { auth: { clientId: process.env.CLIENT_ID || 'Enter_the_Application_Id_Here', // 'Application (client) ID' of app registration in Azure portal - this value is a GUID authority: process.env.AUTHORITY || `https://${TENANT_SUBDOMAIN}.ciamlogin.com/`,
- //clientSecret: process.env.CLIENT_SECRET || 'Enter_the_Client_Secret_Here', // Client secret generated from the app registration in Azure portal
}, //... };
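To see the pieces end to end, here's a minimal MSAL Node sketch that loads a certificate credential from local PEM files. The file path, the *example.key* name, and the environment variable names (`CLIENT_ID`, `TENANT_SUBDOMAIN`, `CERT_THUMBPRINT`) are illustrative assumptions, not values from the article:

```javascript
const fs = require('fs');
const crypto = require('crypto');
const msal = require('@azure/msal-node');

// Read the private key created alongside the self-signed certificate.
// The path below is a placeholder for this sketch.
const privateKeySource = fs.readFileSync('./certs/example.key');

// Normalize the key into the PKCS#8 PEM string that msal-node's
// clientCertificate.privateKey expects (pass a `passphrase` option
// to createPrivateKey if your key is encrypted).
const privateKeyObject = crypto.createPrivateKey({
    key: privateKeySource,
    format: 'pem',
});
const privateKey = privateKeyObject.export({ type: 'pkcs8', format: 'pem' });

const cca = new msal.ConfidentialClientApplication({
    auth: {
        clientId: process.env.CLIENT_ID,
        authority: `https://${process.env.TENANT_SUBDOMAIN}.ciamlogin.com/`,
        clientCertificate: {
            thumbprint: process.env.CERT_THUMBPRINT, // from the app registration's certificate list
            privateKey: privateKey,
        },
    },
});
```

Because the certificate replaces the client secret as the credential, the rest of the token acquisition code stays the same.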
active-directory Microsoft Graph Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/microsoft-graph-operations.md
During registration, you'll specify a **Redirect URI** which redirects the user
The following steps show you how to register your app in the Microsoft Entra admin center:
-1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com/).
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com).
-1. If you have access to multiple tenants, make sure you use the directory that contains your Azure AD for customers tenant:
+1. If you have access to multiple tenants, use the **Directories + subscriptions** filter :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to your customer tenant.
- 1. Select the **Directories + subscriptions** icon :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the portal toolbar.
-
- 1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD for customers directory in the **Directory name** list, and then select **Switch**.
-
-1. On the sidebar menu, select **Azure Active Directory**.
-
-1. Select **Applications**, then select **App Registrations**.
+1. Browse to **Identity** > **Applications** > **App registrations**.
1. Select **+ New registration**.
active-directory Quickstart Get Started Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/quickstart-get-started-guide.md
+
+ Title: Quickstart - Get started guide
+description: Use our quickstart guide to customize your tenant in just a few steps.
+++++++ Last updated : 08/25/2023+++
+#Customer intent: As a dev, devops, or IT admin, I want to personalize the customer tenant.
+
+# Quickstart: Get started with our guide to run a sample app and sign in your users (preview)
+
+In this quickstart, we'll guide you through customizing the look and feel of your apps in the customer tenant, setting up a user and configuring a sample app in only a few minutes. With these built-in customer configuration features, Azure AD for customers can serve as the identity provider and access management service for your customers.
+
+## Prerequisites
+
+- Azure AD for customers tenant. If you don't already have one, <a href="https://aka.ms/ciam-free-trial?wt.mc_id=ciamcustomertenantfreetrial_linkclick_content_cnl" target="_blank">sign up for a free trial</a> or [create a tenant with customer configurations in the Microsoft Entra admin center](quickstart-tenant-setup.md).
+
+## Customize your sign-in experience
+
+You can customize your customer's sign-in and sign-up experience in the Azure AD for customers tenant. Follow the guide that will help you set up the tenant in three easy steps. First, you must specify how you would like your customers to sign in. At this step, you can choose between two options: **Email and password** or **Email and one-time passcode**. You can configure social accounts later, which allows your customers to sign in using their [Google](how-to-google-federation-customers.md) or [Facebook](how-to-facebook-federation-customers.md) account. You can also [define custom attributes](how-to-define-custom-attributes.md) to collect from the user during sign-up.
+
+If you prefer, you can add your company logo, change the background color, or adjust the sign-in layout. These optional changes apply to the look and feel of all your apps in this tenant with customer configurations. After you have created the tenant, additional branding options are available: you can [customize the default branding](how-to-customize-branding-customers.md) and [add languages](how-to-customize-languages-customers.md). Once you're finished with the customization, select **Continue**.
++
+## Try out the sign-up experience and create your first user
+
+1. The guide will configure your tenant with the options you have selected. Once the configuration is complete, the button will change its text from **Setting up...** to **Run it now**.
+1. Select the **Run it now** button. A new browser tab will open with the sign-in page for your tenant that can be used to create and sign in users.
+1. Select **No account? Create one** to create a new user in the tenant.
+1. Add your new user's email address and select **Next**. Don't use the same email you used to create your trial.
+1. Complete the sign-up steps on the screen. Typically, once the user has signed in, they're redirected back to your app. However, since you haven't set up an app at this step, you'll be redirected to JWT.ms instead, where you can view the contents of the token issued during the sign-in process.
+1. Go back to the guide tab. At this stage, you can either exit the guide and go to the admin center to explore the full range of configuration options for your tenant, or select **Continue** to set up a sample app. We recommend setting up the sample app so that you can use it to test any further configuration changes you make.
+
+ :::image type="content" source="media/quickstart-trial-setup/successful-trial-setup.png" alt-text="Screenshot that shows the successful creation of the sign-up experience.":::
+
+## Set up a sample app
+
+The get started guide automatically configures sample apps for the following app types and languages:
+
+- Single Page Application (SPA): JavaScript, React, Angular
+- Web app: Node.js (Express), ASP.NET Core
+
+Follow the steps below to download and run the sample app.
+
+1. Proceed to set up the sample app by selecting the app type.
+1. Select your language and **Download sample app** on your machine.
+1. Follow the instructions to install and run the app. Sign in to the sample app.
+
+ :::image type="content" source="media/quickstart-trial-setup/sample-app-setup.png" alt-text="Screenshot of the sample app setup.":::
+
+1. You've completed the process of creating a trial tenant, configuring the sign-in experience, creating your first user, and setting up a sample app. Select **Continue** to go to the summary page, where you can either go to the admin center or you can restart the guide to choose different options.
+
+## Explore Azure AD for customers
+
+Follow the articles below to learn more about the configuration the guide created for you or to configure your own apps. You can always come back to the [admin center](https://entra.microsoft.com/) to customize your tenant and explore the full range of configuration options for your tenant.
+
+> [!NOTE]
+> The next time you return to your tenant, you might be prompted to set up additional authentication factors for added security of your tenant admin account.
+
+## Next steps
+ - [Register an app in CIAM](how-to-register-ciam-app.md)
+ - [Customize user experience for your customers](how-to-customize-branding-customers.md)
+ - [Create a sign-up and sign-in user flow](how-to-user-flow-sign-up-sign-in-customers.md)
+ - See the [Azure AD for customers Developer Center](https://aka.ms/ciam/dev) for the latest developer content and resources
+
active-directory Quickstart Tenant Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/quickstart-tenant-setup.md
In this quickstart, you'll learn how to create a tenant with customer configurat
## Create a new tenant with customer configurations
-1. Sign in to your organization's [Microsoft Entra admin center](https://entra.microsoft.com/).
-1. From the left menu, select **Azure Active Directory** > **Overview**.
-1. Select **Manage tenants** at the top of the page.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com).
+1. Browse to **Identity** > **Overview** > **Manage tenants**.
1. Select **Create**. :::image type="content" source="media/how-to-create-customer-tenant-portal/create-tenant.png" alt-text="Screenshot of the create tenant option.":::
In this quickstart, you'll learn how to create a tenant with customer configurat
:::image type="content" source="media/how-to-create-customer-tenant-portal/tenant-successfully-created.png" alt-text="Screenshot that shows the link to the new tenant.":::
+## Customize your tenant with a guide
+
+Our guide will walk you through the process of setting up a user and configuring a sample app in just a few minutes, so you can quickly test out different sign-in and sign-up options and see what works best for you. This guide is available in any customer tenant.
+
+> [!NOTE]
+> The guide won't run automatically in customer tenants that you created with the steps above. If you want to run the guide, follow the steps below.
+
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com).
+1. If you have access to multiple tenants, use the **Directories + subscriptions** filter :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to your customer tenant.
+1. Browse to **Home** > **Go to Microsoft Entra ID**.
+1. On the **Get started** tab, select **Start the guide**.
+
+ :::image type="content" source="media/how-to-create-customer-tenant-portal/guide-link.png" alt-text="Screenshot that shows how to start the guide.":::
+
+This link will take you to the [guide](quickstart-get-started-guide.md), where you can customize your tenant in three easy steps.
+ ## Clean up resources If you're not going to continue to use this tenant, you can delete it using the following steps:
-1. Ensure that you're signed in to the directory that you want to delete through the **Directories + subscriptions** filter :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the Azure portal. Switch to the target directory if needed.
-1. From the left menu, select **Azure Active Directory** > **Overview**.
-1. Select **Manage tenants** at the top of the page.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com).
+1. If you have access to multiple tenants, use the **Directories + subscriptions** filter :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to your customer tenant.
+1. Browse to **Identity** > **Overview** > **Manage tenants**.
1. Select the tenant you want to delete, and then select **Delete**. :::image type="content" source="media/how-to-create-customer-tenant-portal/delete-tenant.png" alt-text="Screenshot that shows how to delete the tenant.":::
The tenant and its associated information are deleted.
## Next steps-- [Customize the sign-in experience](how-to-customize-branding-customers.md) -- [Register an app](how-to-register-ciam-app.md)-- [Create user flows](how-to-user-flow-sign-up-sign-in-customers.md)+
+To learn more about the set-up guide and how to customize your tenant, see the [Get started guide](quickstart-get-started-guide.md) article.
active-directory Quickstart Trial Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/quickstart-trial-setup.md
During the free trial period, you'll have access to all product features with fe
:::image type="content" source="media/quickstart-trial-setup/setting-up-free-trial.png" alt-text="Screenshot of the loading page while setting up the customer tenant free trial.":::
-## Customize your sign-in experience
+## Get started guide
-You can customize your customer's sign-in and sign-up experience in the Azure AD for customers tenant. Follow the guide that will help you set up the tenant in three easy steps. First you must specify how would you like your customer to sign in. At this step you can choose between two options: **Email and password** or **Email and one-time passcode**. You can configure social accounts later, which would allow your customers to sign in using their [Google](how-to-google-federation-customers.md) or [Facebook](how-to-facebook-federation-customers.md) account. You can also [define custom attributes](how-to-define-custom-attributes.md) to collect from the user during sign-up.
-
-If you prefer, you can add your company logo, change the background color or adjust the sign-in layout. These optional changes will apply to the look and feel of all your apps in this tenant with customer configurations. After you have the created tenant, additional branding options are available. You can [customize the default branding](how-to-customize-branding-customers.md) and [add languages](how-to-customize-languages-customers.md). Once you're finished with the customization, select **Continue**.
--
-## Try out the sign-up experience and create your first user
-
-1. The guide will configure your tenant with the options you have selected. Once the configuration is complete, the button will change its text from **Setting up...** to **Run it now**.
-1. Select the **Run it now** button. A new browser tab will open with the sign-in page for your tenant that can be used to create and sign in users.
-1. Select **No account? Create one** to create a new user in the tenant.
-1. Add your new user's email address and select **Next**. Don't use the same email you used to create your trial.
-1. Complete the sign-up steps on the screen. Typically, once the user has signed in, they're redirected back to your app. However, since you havenΓÇÖt set up an app at this step, you'll be redirected to JWT.ms instead, where you can view the contents of the token issued during the sign-in process.
-1. Go back to the guide tab. At this stage, you can either exit the guide and go to the admin center to explore the full range of configuration options for your tenant. Or you can **Continue** and set up a sample app. We recommend setting up the sample app, so that you can use it to test any further configuration changes you make
-
- :::image type="content" source="media/quickstart-trial-setup/successful-trial-setup.png" alt-text="Screenshot that shows the successful creation of the sign-up experience.":::
-
-## Set up a sample app
-
-The get started guide will automatically configure sample apps for the below app types and languages:
--- Single Page Application (SPA): JavaScript, React, Angular-- Web app: Node.js (Express), ASP.NET Core-
-Follow the steps below, to download and run the sample app.
-
-1. Proceed to set up the sample app by selecting the app type.
-1. Select your language and **Download sample app** on your machine.
-1. Follow the instructions to install and run the app. Sign into the sample app.
-
- :::image type="content" source="media/quickstart-trial-setup/sample-app-setup.png" alt-text="Screenshot of the sample app setup.":::
-
-1. You've completed the process of creating a trial tenant, configuring the sign-in experience, creating your first user, and setting up a sample app. Select **Continue** to go to the summary page, where you can either go to the admin center or you can restart the guide to choose different options.
-
-## Explore Azure AD for customers
-
-Follow the articles below to learn more about the configuration the guide created for you or to configure your own apps. You can always come back to the [admin center](https://entra.microsoft.com/) to customize your tenant and explore the full range of configuration options for your tenant.
-
-> [!NOTE]
-> The next time you return to your tenant, you might be prompted to set up additional authentication factors for added security of your tenant admin account.
-
-## Next steps
+Once your customer tenant free trial is ready, the next step is to personalize your customer's sign-in and sign-up experience, set up a user in your tenant, and configure a sample app. The get started guide will walk you through all of these steps in just a few minutes. For more information about the next steps, see the [get started guide](quickstart-get-started-guide.md) article.
active-directory Sample Cli App Node Sign In Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/sample-cli-app-node-sign-in-users.md
Last updated 08/04/2023--+ #Customer intent: As a dev, devops, I want to learn how to authenticate users in an Azure Active Directory (Azure AD) for customers tenant using a sample Node.js CLI application
Learn how to:
- [Sign in users in your own Node.js CLI application](tutorial-cli-app-node-sign-in-prepare-tenant.md). By completing these steps, you build a Node.js CLI application similar to the sample you've run. - [Enable password reset](how-to-enable-password-reset-customers.md). - [Customize the default branding](how-to-customize-branding-customers.md).-- [Configure sign-in with Google](how-to-google-federation-customers.md).
+- [Configure sign-in with Google](how-to-google-federation-customers.md).
active-directory Sample Single Page App Vanillajs Sign In https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/sample-single-page-app-vanillajs-sign-in.md
Title: Sign in users in a sample vanilla JavaScript single-page application
-description: Learn how to configure a sample JavaSCript single-page application (SPA) to sign in and sign out users.
+description: Learn how to configure a sample JavaScript single-page application (SPA) to sign in and sign out users.
Previously updated : 06/23/2023 Last updated : 08/17/2023 #Customer intent: As a dev, devops, I want to learn about how to configure a sample vanilla JS SPA to sign in and sign out users with my Azure Active Directory (Azure AD) for customers tenant
If you choose to download the `.zip` file, extract the sample app file to a fold
``` 1. Open a web browser and navigate to `http://localhost:3000/`.
-1. Select **No account? Create one**, which starts the sign-up flow.
-1. In the **Create account** window, enter the email address registered to your customer tenant, which starts the sign-up flow as a user for your application.
-1. After entering a one-time passcode from the customer tenant, enter a new password and more account details, this sign-up flow is completed.
-1. If a window appears prompting you to **Stay signed in**, choose either **Yes** or **No**.
+1. Sign in with an account registered to the customer tenant.
+1. Once signed in, the display name is shown next to the **Sign out** button, as shown in the following screenshot.
1. The SPA will now display a button saying **Request Profile Information**. Select it to display profile data. :::image type="content" source="media/how-to-spa-vanillajs-sign-in-sign-in-out/display-vanillajs-welcome.png" alt-text="Screenshot of sign in into a vanilla JS SPA." lightbox="media/how-to-spa-vanillajs-sign-in-sign-in-out/display-vanillajs-welcome.png":::
active-directory Samples Ciam All https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/samples-ciam-all.md
Previously updated : 07/17/2023 Last updated : 08/17/2023
These samples and how-to guides demonstrate how to integrate a single-page appli
> [!div class="mx-tdCol2BreakAll"] > | Language/<br/>Platform | Code sample guide | Build and integrate guide | > | - | -- | - |
-> | JavaScript, Vanilla | &#8226; [Sign in users](./sample-single-page-app-vanillajs-sign-in.md) | &#8226; [Sign in users](how-to-single-page-app-vanillajs-prepare-tenant.md) |
+> | JavaScript, Vanilla | &#8226; [Sign in users](./sample-single-page-app-vanillajs-sign-in.md) | &#8226; [Sign in users](tutorial-single-page-app-vanillajs-prepare-tenant.md) |
> | JavaScript, Angular | &#8226; [Sign in users](./sample-single-page-app-angular-sign-in.md) | | > | JavaScript, React | &#8226; [Sign in users](./sample-single-page-app-react-sign-in.md) | &#8226; [Sign in users](./tutorial-single-page-app-react-sign-in-prepare-tenant.md) |
These samples and how-to guides demonstrate how to write a daemon application th
> [!div class="mx-tdCol2BreakAll"] > | App type | Code sample guide | Build and integrate guide | > | - | -- | - |
-> | Single-page application | &#8226; [Sign in users](./sample-single-page-app-vanillajs-sign-in.md) | &#8226; [Sign in users](how-to-single-page-app-vanillajs-prepare-tenant.md) |
+> | Single-page application | &#8226; [Sign in users](./sample-single-page-app-vanillajs-sign-in.md) | &#8226; [Sign in users](tutorial-single-page-app-vanillajs-prepare-tenant.md) |
### JavaScript, Angular
active-directory Tutorial Desktop Maui Role Based Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/tutorial-desktop-maui-role-based-access-control.md
+ Last updated 07/17/2023
active-directory Tutorial Mobile Maui Role Based Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/tutorial-mobile-maui-role-based-access-control.md
+ Last updated 07/17/2023
active-directory Tutorial Single Page App Vanillajs Configure Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/tutorial-single-page-app-vanillajs-configure-authentication.md
+
+ Title: Tutorial - Handle authentication flows in a Vanilla JavaScript single-page app
+description: Learn how to configure authentication for a Vanilla JavaScript single-page app (SPA) with your Azure Active Directory (AD) for customers tenant.
+++++++++ Last updated : 08/17/2023
+#Customer intent: As a developer, I want to learn how to configure Vanilla JavaScript single-page app (SPA) to sign in and sign out users with my Azure Active Directory (AD) for customers tenant.
++
+# Tutorial: Handle authentication flows in a Vanilla JavaScript single-page app
+
+In the [previous article](./tutorial-single-page-app-vanillajs-prepare-app.md), you created a Vanilla JavaScript (JS) single-page application (SPA) and a server to host it. This tutorial demonstrates how to configure the application to authenticate and authorize users to access protected resources.
+
+In this tutorial:
+
+> [!div class="checklist"]
+> * Configure the settings for the application
+> * Add code to *authRedirect.js* to handle the authentication flow
+> * Add code to *authPopup.js* to handle the authentication flow
+
+## Prerequisites
+
+* Completion of the prerequisites and steps in [Prepare a single-page application for authentication](tutorial-single-page-app-vanillajs-prepare-app.md).
+
+## Edit the authentication configuration file
+
+The application uses the [authorization code flow with PKCE](../../develop/v2-oauth2-auth-code-flow.md) to authenticate users (MSAL.js 2.x no longer uses the implicit grant). It's a browser-based flow that doesn't require a back-end server. The flow redirects the user to the sign-in page, where the user signs in and consents to the permissions that are being requested by the application. The purpose of *authConfig.js* is to configure the authentication flow.
+
+1. Open *public/authConfig.js* and add the following code snippet:
+
+ ```javascript
+ /**
+ * Configuration object to be passed to MSAL instance on creation.
+ * For a full list of MSAL.js configuration parameters, visit:
+ * https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-browser/docs/configuration.md
+ */
+ const msalConfig = {
+ auth: {
+ clientId: 'Enter_the_Application_Id_Here', // This is the ONLY mandatory field that you need to supply.
+ authority: 'https://Enter_the_Tenant_Subdomain_Here.ciamlogin.com/', // Replace "Enter_the_Tenant_Subdomain_Here" with your tenant subdomain
+ redirectUri: '/', // You must register this URI on Azure Portal/App Registration. Defaults to window.location.href e.g. http://localhost:3000/
+ navigateToLoginRequestUrl: true, // If "true", will navigate back to the original request location before processing the auth code response.
+ },
+ cache: {
+ cacheLocation: 'sessionStorage', // Configures cache location. "sessionStorage" is more secure, but "localStorage" gives you SSO.
+ storeAuthStateInCookie: false, // set this to true if you have to support IE
+ },
+ system: {
+ loggerOptions: {
+ loggerCallback: (level, message, containsPii) => {
+ if (containsPii) {
+ return;
+ }
+ switch (level) {
+ case msal.LogLevel.Error:
+ console.error(message);
+ return;
+ case msal.LogLevel.Info:
+ console.info(message);
+ return;
+ case msal.LogLevel.Verbose:
+ console.debug(message);
+ return;
+ case msal.LogLevel.Warning:
+ console.warn(message);
+ return;
+ }
+ },
+ },
+ },
+ };
+
+ /**
+ * loginRequest is referenced by the sign-in code and exported below, so it
+ * needs a definition here. An empty scopes array is a safe default; MSAL.js
+ * adds the OIDC scopes (openid, profile, email) to every login request.
+ */
+ const loginRequest = {
+ scopes: [],
+ };
+
+ /**
+ * An optional silentRequest object can be used to achieve silent SSO
+ * between applications by providing a "login_hint" property.
+ */
+
+ // const silentRequest = {
+ // scopes: ["openid", "profile"],
+ // loginHint: "example@domain.net"
+ // };
+
+ // exporting config object for jest
+ if (typeof exports !== 'undefined') {
+ module.exports = {
+ msalConfig: msalConfig,
+ loginRequest: loginRequest,
+ };
+ }
+ ```
+
+1. Replace the following values with the values from the Azure portal:
+ - Find the `Enter_the_Application_Id_Here` value and replace it with the **Application ID (clientId)** of the app you registered in the Microsoft Entra admin center.
+ - In **Authority**, find `Enter_the_Tenant_Subdomain_Here` and replace it with the subdomain of your tenant. For example, if your tenant primary domain is `contoso.onmicrosoft.com`, use `contoso`. If you don't have your tenant name, [learn how to read your tenant details](how-to-create-customer-tenant-portal.md#get-the-customer-tenant-details).
+1. Save the file.
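+
+For reference, here's how those two placeholders look once replaced. The GUID and subdomain below are illustrative values only, not ones you should copy:
+
+ ```javascript
+ // Illustrative values; substitute your own app registration's details.
+ const msalConfig = {
+     auth: {
+         clientId: '11111111-2222-3333-4444-555555555555', // Application (client) ID
+         authority: 'https://contoso.ciamlogin.com/', // tenant subdomain is "contoso"
+         redirectUri: '/',
+         navigateToLoginRequestUrl: true,
+     },
+     // ...cache and system settings stay exactly as shown above
+ };
+ ```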
+
+## Adding code to the redirection file
+
+A redirection file is required to handle the response from the sign-in page. It extracts the access token from the URL fragment and uses it to call the protected API, and it also handles errors that occur during the authentication process.
+
+1. Open *public/authRedirect.js* and add the following code snippet:
+
+ ```javascript
+ // Create the main myMSALObj instance
+ // configuration parameters are located at authConfig.js
+ const myMSALObj = new msal.PublicClientApplication(msalConfig);
+
+ let username = "";
+
+ /**
+ * A promise handler needs to be registered for handling the
+ * response returned from redirect flow. For more information, visit:
+ * https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-browser/docs/initialization.md#redirect-apis
+ */
+ myMSALObj.handleRedirectPromise()
+ .then(handleResponse)
+ .catch((error) => {
+ console.error(error);
+ });
+
+ function selectAccount() {
+
+ /**
+ * See here for more info on account retrieval:
+ * https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-common/docs/Accounts.md
+ */
+
+ const currentAccounts = myMSALObj.getAllAccounts();
+
+ if (!currentAccounts || currentAccounts.length < 1) {
+ return;
+ } else if (currentAccounts.length > 1) {
+ // Add your account choosing logic here
+ console.warn("Multiple accounts detected.");
+ } else if (currentAccounts.length === 1) {
+ // Track the username so signOut() can locate the account later.
+ username = currentAccounts[0].username;
+ welcomeUser(currentAccounts[0].username);
+ updateTable(currentAccounts[0]);
+ }
+ }
+
+ function handleResponse(response) {
+
+ /**
+ * To see the full list of response object properties, visit:
+ * https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-browser/docs/request-response-object.md#response
+ */
+
+ if (response !== null) {
+ // Track the username so signOut() can locate the account later.
+ username = response.account.username;
+ welcomeUser(response.account.username);
+ updateTable(response.account);
+ } else {
+ selectAccount();
+ }
+ }
+
+ function signIn() {
+
+ /**
+ * You can pass a custom request object below. This will override the initial configuration. For more information, visit:
+ * https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-browser/docs/request-response-object.md#request
+ */
+
+ myMSALObj.loginRedirect(loginRequest);
+ }
+
+ function signOut() {
+
+ /**
+ * You can pass a custom request object below. This will override the initial configuration. For more information, visit:
+ * https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-browser/docs/request-response-object.md#request
+ */
+
+ // Choose which account to logout from by passing a username.
+ const logoutRequest = {
+ account: myMSALObj.getAccountByUsername(username),
+ postLogoutRedirectUri: '/signout', // remove this line if you would like to navigate to the index page after logout.
+
+ };
+
+ myMSALObj.logoutRedirect(logoutRequest);
+ }
+ ```
+
+1. Save the file.
+
+## Adding code to the *authPopup.js* file
+
+The application uses *authPopup.js* to handle the authentication flow when the user signs in using the pop-up window. The pop-up window is used when the user is already signed in and the application needs to get an access token for a different resource.
+
+1. Open *public/authPopup.js* and add the following code snippet:
+
+ ```javascript
+ // Create the main myMSALObj instance
+ // configuration parameters are located at authConfig.js
+ const myMSALObj = new msal.PublicClientApplication(msalConfig);
+
+ let username = "";
+
+ function selectAccount () {
+
+ /**
+ * See here for more info on account retrieval:
+ * https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-common/docs/Accounts.md
+ */
+
+ const currentAccounts = myMSALObj.getAllAccounts();
+
+ if (!currentAccounts || currentAccounts.length < 1) {
+ return;
+ } else if (currentAccounts.length > 1) {
+ // Add your account choosing logic here
+ console.warn("Multiple accounts detected.");
+ } else if (currentAccounts.length === 1) {
+ username = currentAccounts[0].username
+ welcomeUser(currentAccounts[0].username);
+ updateTable(currentAccounts[0]);
+ }
+ }
+
+ function handleResponse(response) {
+
+ /**
+ * To see the full list of response object properties, visit:
+ * https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-browser/docs/request-response-object.md#response
+ */
+
+ if (response !== null) {
+ username = response.account.username
+ welcomeUser(username);
+ updateTable(response.account);
+ } else {
+ selectAccount();
+ }
+ }
+
+ function signIn() {
+
+ /**
+ * You can pass a custom request object below. This will override the initial configuration. For more information, visit:
+ * https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-browser/docs/request-response-object.md#request
+ */
+
+ myMSALObj.loginPopup(loginRequest)
+ .then(handleResponse)
+ .catch(error => {
+ console.error(error);
+ });
+ }
+
+ function signOut() {
+
+ /**
+ * You can pass a custom request object below. This will override the initial configuration. For more information, visit:
+ * https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-browser/docs/request-response-object.md#request
+ */
+
+ // Choose which account to logout from by passing a username.
+ const logoutRequest = {
+ account: myMSALObj.getAccountByUsername(username),
+ mainWindowRedirectUri: '/signout'
+ };
+
+ myMSALObj.logoutPopup(logoutRequest);
+ }
+
+ selectAccount();
+ ```
+
+1. Save the file.
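+
+Although the sample only signs users in, the same `myMSALObj` instance can acquire access tokens later. A common MSAL.js pattern, sketched here as an illustration with `tokenRequest` as a hypothetical request object holding the scopes you need, is to try a silent request first and fall back to a popup only when interaction is required:
+
+ ```javascript
+ // Illustrative sketch; not part of the sample files.
+ async function getToken(tokenRequest) {
+     const account = myMSALObj.getAccountByUsername(username);
+     try {
+         // Served from the cache, or renewed silently in a hidden iframe.
+         return await myMSALObj.acquireTokenSilent({ ...tokenRequest, account });
+     } catch (error) {
+         // Fall back to interaction only when MSAL says it's required.
+         if (error instanceof msal.InteractionRequiredAuthError) {
+             return myMSALObj.acquireTokenPopup(tokenRequest);
+         }
+         throw error;
+     }
+ }
+ ```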
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Sign in and sign out of the Vanilla JS SPA](./tutorial-single-page-app-vanillajs-sign-in-sign-out.md)
active-directory Tutorial Single Page App Vanillajs Prepare App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/tutorial-single-page-app-vanillajs-prepare-app.md
+
+ Title: Tutorial - Prepare a Vanilla JavaScript single-page app (SPA) for authentication in a customer tenant
+description: Learn how to prepare a Vanilla JavaScript single-page app (SPA) for authentication and authorization with your Azure Active Directory (AD) for customers tenant.
+++++++++ Last updated : 08/17/2023
+#Customer intent: As a developer, I want to learn how to configure Vanilla JavaScript single-page app (SPA) to sign in and sign out users with my Azure AD for customers tenant.
++
+# Tutorial: Prepare a Vanilla JavaScript single-page app for authentication in a customer tenant
+
+In the [previous article](tutorial-single-page-app-vanillajs-prepare-tenant.md), you registered an application and configured user flows in your Azure Active Directory (AD) for customers tenant. This article shows you how to create a Vanilla JavaScript (JS) single-page app (SPA) and configure it to sign in and sign out users with your customer tenant.
+
+In this tutorial:
+
+> [!div class="checklist"]
+> * Create a Vanilla JavaScript project in Visual Studio Code
+> * Install required packages
+> * Add code to *server.js* to create a server
+
+## Prerequisites
+
+* Completion of the prerequisites and steps in [Prepare your customer tenant to authenticate a Vanilla JavaScript single-page app](tutorial-single-page-app-vanillajs-prepare-tenant.md).
+* Although any integrated development environment (IDE) that supports Vanilla JS applications can be used, **Visual Studio Code** is recommended for this guide. It can be downloaded from the [Downloads](https://visualstudio.microsoft.com/downloads) page.
+* [Node.js](https://nodejs.org/en/download/).
+
+## Create a new Vanilla JS project and install dependencies
+
+1. Open Visual Studio Code, select **File** > **Open Folder...**. Navigate to and select the location in which to create your project.
+1. Open a new terminal by selecting **Terminal** > **New Terminal**.
+1. Run the following command to create a new Vanilla JS project:
+
+ ```powershell
+ npm init -y
+ ```
+1. Create additional folders and files to achieve the following project structure:
+
+ ```
+ └── public
+     └── authConfig.js
+     └── authPopup.js
+     └── authRedirect.js
+     └── claimUtils.js
+     └── index.html
+     └── signout.html
+     └── styles.css
+     └── ui.js
+ └── server.js
+ ```
+
+## Install app dependencies
+
+1. In the **Terminal**, run the following command to install the required dependencies for the project:
+
+ ```powershell
+ npm install express morgan @azure/msal-browser
+ ```
+
+## Edit the *server.js* file
+
+**Express** is a web application framework for **Node.js**. It's used to create a server that hosts the application. **Morgan** is the middleware that logs HTTP requests to the console. The server file is used to host these dependencies and contains the routes for the application. Authentication and authorization are handled by the [Microsoft Authentication Library for JavaScript (MSAL.js)](/javascript/api/overview/).
+
+1. Add the following code snippet to the *server.js* file:
+
+ ```javascript
+ const express = require('express');
+ const morgan = require('morgan');
+ const path = require('path');
+
+ const DEFAULT_PORT = process.env.PORT || 3000;
+
+ // initialize express.
+ const app = express();
+
+ // Configure morgan module to log all requests.
+ app.use(morgan('dev'));
+
+ // serve public assets.
+ app.use(express.static('public'));
+
+ // serve msal-browser module
+ app.use(express.static(path.join(__dirname, "node_modules/@azure/msal-browser/lib")));
+
+ // set up a route for signout.html
+ app.get('/signout', (req, res) => {
+ res.sendFile(path.join(__dirname + '/public/signout.html'));
+ });
+
+ // set up a route for redirect.html
+ app.get('/redirect', (req, res) => {
+ res.sendFile(path.join(__dirname + '/public/redirect.html'));
+ });
+
+ // set up a route for index.html
+ app.get('/', (req, res) => {
+ res.sendFile(path.join(__dirname + '/public/index.html'));
+ });
+
+ app.listen(DEFAULT_PORT, () => {
+ console.log(`Sample app listening on port ${DEFAULT_PORT}!`);
+ });
+
+ ```
+
+In this code, the **app** variable is initialized with the **express** module and **express** is used to serve the public assets. **Msal-browser** is served as a static asset and is used to initiate the authentication flow.
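+
+As a quick sanity check, you can confirm the routes respond before wiring up authentication. The snippet below is an illustrative extra, not part of the tutorial files. With the server running (`node server.js`), save it as *smoke-test.js* and run `node smoke-test.js`:
+
+ ```javascript
+ // Requests the SPA shell, the sign-out page, and the MSAL bundle,
+ // then prints the HTTP status code for each route.
+ const http = require('http');
+
+ for (const path of ['/', '/signout', '/msal-browser.min.js']) {
+     http.get({ host: 'localhost', port: 3000, path }, (res) => {
+         console.log(`${path} -> HTTP ${res.statusCode}`);
+     });
+ }
+ ```
+
+Each route should return `HTTP 200` once the files in the *public* folder exist.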
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Configure SPA for authentication](tutorial-single-page-app-vanillajs-configure-authentication.md)
active-directory Tutorial Single Page App Vanillajs Prepare Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/tutorial-single-page-app-vanillajs-prepare-tenant.md
+
+ Title: Tutorial - Prepare your customer tenant to authenticate users in a Vanilla JavaScript single-page application
+description: Learn how to configure your Azure Active Directory (AD) for customers tenant for authentication with a Vanilla JavaScript single-page app (SPA).
+++++++++ Last updated : 08/17/2023
+#Customer intent: As a developer, I want to learn how to configure a Vanilla JavaScript single-page app (SPA) to sign in and sign out users with my Azure Active Directory (AD) for customers tenant.
++
+# Tutorial: Prepare your customer tenant to authenticate a Vanilla JavaScript single-page app
+
+This tutorial series demonstrates how to build a Vanilla JavaScript single-page application (SPA) and prepare it for authentication using the Microsoft Entra admin center. You'll use the [Microsoft Authentication Library for JavaScript](/javascript/api/overview/msal-overview) library to authenticate your app with your Azure Active Directory (Azure AD) for customers tenant. Finally, you'll run the application and test the sign-in and sign-out experiences.
+
+In this tutorial:
+
+> [!div class="checklist"]
+> * Register a SPA in the Microsoft Entra admin center, and record its identifiers
+> * Define the platform and URLs
+> * Grant permissions to the SPA to access the Microsoft Graph API
+> * Create a sign-in and sign-out user flow in the Microsoft Entra admin center
+> * Associate your SPA with the user flow
+
+## Prerequisites
+
+- An Azure subscription. If you don't have one, [create a free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+- This Azure account must have permissions to manage applications. Any of the following Azure AD roles include the required permissions:
+
+ * Application administrator
+ * Application developer
+ * Cloud application administrator
+
+- An Azure AD for customers tenant. If you haven't already, [create one now](https://aka.ms/ciam-free-trial?wt.mc_id=ciamcustomertenantfreetrial_linkclick_content_cnl). You can use an existing customer tenant if you have one.
+
+## Register the SPA and record identifiers
++
+## Add a platform redirect URL
++
+## Grant API permissions
++
+## Create a user flow
++
+## Associate the SPA with the user flow
++
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Prepare your Vanilla JS SPA](tutorial-single-page-app-vanillajs-prepare-app.md)
active-directory Tutorial Single Page App Vanillajs Sign In Sign Out https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/tutorial-single-page-app-vanillajs-sign-in-sign-out.md
+
+ Title: Tutorial - Add sign-in and sign-out to a Vanilla JavaScript single-page app (SPA) for a customer tenant
+description: Learn how to configure a Vanilla JavaScript single-page app (SPA) to sign in and sign out users with your Azure Active Directory (AD) for customers tenant.
++++++++ Last updated : 08/02/2023
+#Customer intent: As a developer, I want to learn how to configure Vanilla JavaScript single-page app (SPA) to sign in and sign out users with my Azure Active Directory (AD) for customers tenant.
++
+# Tutorial: Add sign-in and sign-out to a Vanilla JavaScript single-page app for a customer tenant
+
+In the [previous article](tutorial-single-page-app-vanillajs-configure-authentication.md), you edited the popup and redirection files that handle the sign-in page response. This tutorial demonstrates how to build a responsive user interface (UI) that contains a **Sign-In** and **Sign-Out** button and run the project to test the sign-in and sign-out functionality.
+
+In this tutorial:
+
+> [!div class="checklist"]
+> * Add code to the *index.html* file to create the user interface
+> * Add code to the *signout.html* file to create the sign-out page
+> * Sign in and sign out of the application
+
+## Prerequisites
+
+* Completion of the prerequisites and steps in [Create components for authentication and authorization](tutorial-single-page-app-vanillajs-configure-authentication.md).
+
+## Add code to the *index.html* file
+
+The main page of the SPA, *index.html*, is the first page that is loaded when the application is started. It's also the page that is loaded when the user selects the **Sign-Out** button.
+
+1. Open *public/index.html* and add the following code snippet:
+
+ ```html
+ <!DOCTYPE html>
+ <html lang="en">
+
+ <head>
+ <meta charset="UTF-8">
+ <meta name="viewport" content="width=device-width, initial-scale=1.0, shrink-to-fit=no">
+ <title>Microsoft identity platform</title>
+ <link rel="SHORTCUT ICON" href="./favicon.svg" type="image/x-icon">
+ <link rel="stylesheet" href="./styles.css">
+
+ <!-- adding Bootstrap 5 for UI components -->
+ <link href="https://cdn.jsdelivr.net/npm/bootstrap@5.2.2/dist/css/bootstrap.min.css" rel="stylesheet"
+ integrity="sha384-Zenh87qX5JnK2Jl0vWa8Ck2rdkQ2Bzep5IDxbcnCeuOxjzrPF/et3URy9Bv1WTRi" crossorigin="anonymous">
+
+ <!-- msal.min.js can be used in the place of msal-browser.js -->
+ <script src="/msal-browser.min.js"></script>
+ </head>
+
+ <body>
+ <nav class="navbar navbar-expand-sm navbar-dark bg-primary navbarStyle">
+ <a class="navbar-brand" href="/">Microsoft identity platform</a>
+ <div class="navbar-collapse justify-content-end">
+ <button type="button" id="signIn" class="btn btn-secondary" onclick="signIn()">Sign-in</button>
+ <button type="button" id="signOut" class="btn btn-success d-none" onclick="signOut()">Sign-out</button>
+ </div>
+ </nav>
+ <br>
+ <h5 id="title-div" class="card-header text-center">Vanilla JavaScript single-page application secured with MSAL.js
+ </h5>
+ <h5 id="welcome-div" class="card-header text-center d-none"></h5>
+ <br>
+ <div class="table-responsive-ms" id="table">
+ <table id="table-div" class="table table-striped d-none">
+ <thead id="table-head-div">
+ <tr>
+ <th>Claim Type</th>
+ <th>Value</th>
+ <th>Description</th>
+ </tr>
+ </thead>
+ <tbody id="table-body-div">
+ </tbody>
+ </table>
+ </div>
+ <!-- importing bootstrap.js and supporting js libraries -->
+ <script src="https://code.jquery.com/jquery-3.3.1.slim.min.js"
+ integrity="sha384-q8i/X+965DzO0rT7abK41JStQIAqVgRVzpbzo5smXKp4YfRvH+8abtTE1Pi6jizo" crossorigin="anonymous">
+ </script>
+ <script src="https://cdn.jsdelivr.net/npm/@popperjs/core@2.11.6/dist/umd/popper.min.js"
+ integrity="sha384-oBqDVmMz9ATKxIep9tiCxS/Z9fNfEXiDAYTujMAeBAsjFuCZSmKbSSUnQlmh/jp3"
+ crossorigin="anonymous"></script>
+ <script src="https://cdn.jsdelivr.net/npm/bootstrap@5.2.2/dist/js/bootstrap.bundle.min.js"
+ integrity="sha384-OERcA2EqjJCMA+/3y+gxIOqMEjwtxJY7qPCqsdltbNJuaOe923+mo//f6V8Qbsw3"
+ crossorigin="anonymous"></script>
+
+ <!-- importing app scripts (load order is important) -->
+ <script type="text/javascript" src="./authConfig.js"></script>
+ <script type="text/javascript" src="./ui.js"></script>
+ <script type="text/javascript" src="./claimUtils.js"></script>
+ <!-- <script type="text/javascript" src="./authRedirect.js"></script> -->
+ <!-- uncomment the above line and comment the line below if you would like to use the redirect flow -->
+ <script type="text/javascript" src="./authPopup.js"></script>
+ </body>
+
+ </html>
+ ```
+
+1. Save the file.
+
+## Add code to the *claimUtils.js* file
+
+1. Open *public/claimUtils.js* and add the following code snippet:
+
+ ```javascript
+ /**
+ * Populate claims table with appropriate description
+ * @param {Object} claims ID token claims
+ * @returns claimsObject
+ */
+ const createClaimsTable = (claims) => {
+ let claimsObj = {};
+ let index = 0;
+
+ Object.keys(claims).forEach((key) => {
+ if (typeof claims[key] !== 'string' && typeof claims[key] !== 'number') return;
+ switch (key) {
+ case 'aud':
+ populateClaim(
+ key,
+ claims[key],
+ "Identifies the intended recipient of the token. In ID tokens, the audience is your app's Application ID, assigned to your app in the Azure portal.",
+ index,
+ claimsObj
+ );
+ index++;
+ break;
+ case 'iss':
+ populateClaim(
+ key,
+ claims[key],
+ 'Identifies the issuer, or authorization server that constructs and returns the token. It also identifies the Azure AD tenant for which the user was authenticated. If the token was issued by the v2.0 endpoint, the URI will end in /v2.0. The GUID that indicates that the user is a consumer user from a Microsoft account is 9188040d-6c67-4c5b-b112-36a304b66dad.',
+ index,
+ claimsObj
+ );
+ index++;
+ break;
+ case 'iat':
+ populateClaim(
+ key,
+ changeDateFormat(claims[key]),
+ 'Issued At indicates when the authentication for this token occurred.',
+ index,
+ claimsObj
+ );
+ index++;
+ break;
+ case 'nbf':
+ populateClaim(
+ key,
+ changeDateFormat(claims[key]),
+ 'The nbf (not before) claim identifies the time (as UNIX timestamp) before which the JWT must not be accepted for processing.',
+ index,
+ claimsObj
+ );
+ index++;
+ break;
+ case 'exp':
+ populateClaim(
+ key,
+ changeDateFormat(claims[key]),
+ "The exp (expiration time) claim identifies the expiration time (as UNIX timestamp) on or after which the JWT must not be accepted for processing. It's important to note that in certain circumstances, a resource may reject the token before this time. For example, if a change in authentication is required or a token revocation has been detected.",
+ index,
+ claimsObj
+ );
+ index++;
+ break;
+ case 'name':
+ populateClaim(
+ key,
+ claims[key],
+ "The principal about which the token asserts information, such as the user of an application. This value is immutable and can't be reassigned or reused. It can be used to perform authorization checks safely, such as when the token is used to access a resource. By default, the subject claim is populated with the object ID of the user in the directory",
+ index,
+ claimsObj
+ );
+ index++;
+ break;
+ case 'preferred_username':
+ populateClaim(
+ key,
+ claims[key],
+ 'The primary username that represents the user. It could be an email address, phone number, or a generic username without a specified format. Its value is mutable and might change over time. Since it is mutable, this value must not be used to make authorization decisions. It can be used for username hints, however, and in human-readable UI as a username. The profile scope is required in order to receive this claim.',
+ index,
+ claimsObj
+ );
+ index++;
+ break;
+ case 'nonce':
+ populateClaim(
+ key,
+ claims[key],
+ 'The nonce matches the parameter included in the original /authorize request to the IDP. If it does not match, your application should reject the token.',
+ index,
+ claimsObj
+ );
+ index++;
+ break;
+ case 'oid':
+ populateClaim(
+ key,
+ claims[key],
+ 'The oid (user’s object id) is the only claim that should be used to uniquely identify a user in an Azure AD tenant. The token might have one or more of the following claims that might seem like unique identifiers, but are not and should not be used as such.',
+ index,
+ claimsObj
+ );
+ index++;
+ break;
+ case 'tid':
+ populateClaim(
+ key,
+ claims[key],
+ 'The tenant ID. You will use this claim to ensure that only users from the current Azure AD tenant can access this app.',
+ index,
+ claimsObj
+ );
+ index++;
+ break;
+ case 'upn':
+ populateClaim(
+ key,
+ claims[key],
+ '(user principal name) – might be unique amongst the active set of users in a tenant but tends to get reassigned to new employees as employees leave the organization and others take their place, or might change to reflect a personal change like marriage.',
+ index,
+ claimsObj
+ );
+ index++;
+ break;
+ case 'email':
+ populateClaim(
+ key,
+ claims[key],
+ 'Email might be unique amongst the active set of users in a tenant but tends to get reassigned to new employees as employees leave the organization and others take their place.',
+ index,
+ claimsObj
+ );
+ index++;
+ break;
+ case 'acct':
+ populateClaim(
+ key,
+ claims[key],
+ 'Available as an optional claim, it lets you know what the type of user (homed, guest) is. For example, for an individual’s access to their data you might not care for this claim, but you would use this along with tenant id (tid) to control access to, say, a company-wide dashboard to just employees (homed users) and not contractors (guest users).',
+ index,
+ claimsObj
+ );
+ index++;
+ break;
+ case 'sid':
+ populateClaim(key, claims[key], 'Session ID, used for per-session user sign-out.', index, claimsObj);
+ index++;
+ break;
+ case 'sub':
+ populateClaim(
+ key,
+ claims[key],
+ 'The sub claim is a pairwise identifier - it is unique to a particular application ID. If a single user signs into two different apps using two different client IDs, those apps will receive two different values for the subject claim.',
+ index,
+ claimsObj
+ );
+ index++;
+ break;
+ case 'ver':
+ populateClaim(
+ key,
+ claims[key],
+ 'Version of the token issued by the Microsoft identity platform',
+ index,
+ claimsObj
+ );
+ index++;
+ break;
+ case 'auth_time':
+ populateClaim(
+ key,
+ claims[key],
+ 'The time at which a user last entered credentials, represented in epoch time. There is no discrimination between that authentication being a fresh sign-in, a single sign-on (SSO) session, or another sign-in type.',
+ index,
+ claimsObj
+ );
+ index++;
+ break;
+ case 'at_hash':
+ populateClaim(
+ key,
+ claims[key],
+ 'An access token hash included in an ID token only when the token is issued together with an OAuth 2.0 access token. An access token hash can be used to validate the authenticity of an access token',
+ index,
+ claimsObj
+ );
+ index++;
+ break;
+ case 'uti':
+ case 'rh':
+ index++;
+ break;
+ default:
+ populateClaim(key, claims[key], '', index, claimsObj);
+ index++;
+ }
+ });
+
+ return claimsObj;
+ };
+
+ /**
+ * Populates claim, description, and value into an claimsObject
+ * @param {string} claim
+ * @param {string} value
+ * @param {string} description
+ * @param {number} index
+ * @param {Object} claimsObject
+ */
+ const populateClaim = (claim, value, description, index, claimsObject) => {
+ let claimsArray = [];
+ claimsArray[0] = claim;
+ claimsArray[1] = value;
+ claimsArray[2] = description;
+ claimsObject[index] = claimsArray;
+ };
+
+ /**
+ * Transforms Unix timestamp to date and returns a string value of that date
+ * @param {string} date Unix timestamp
+ * @returns
+ */
+ const changeDateFormat = (date) => {
+ let dateObj = new Date(date * 1000);
+ return `${date} - [${dateObj.toString()}]`;
+ };
+ ```
+
+1. Save the file.
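+
+To make the shape of the output concrete, here's an illustrative call. The claims object passed in is a made-up sample, not real token output:
+
+ ```javascript
+ // createClaimsTable returns an object keyed by row index, where each
+ // entry is a [claimType, value, description] triple for the UI table.
+ const claimsObj = createClaimsTable({
+     aud: '11111111-2222-3333-4444-555555555555',
+     iat: 1692259200,
+     preferred_username: 'pat@contoso.com',
+ });
+
+ console.log(claimsObj[0][0]); // 'aud'
+ console.log(claimsObj[1][1]); // '1692259200 - [<local date string>]'
+ ```
+
+The *ui.js* file, added later in this article, iterates over this object to populate the claims table.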
+
+## Add code to the *signout.html* file
+
+1. Open *public/signout.html* and add the following code snippet:
+
+ ```html
+ <!DOCTYPE html>
+ <html lang="en">
+ <head>
+ <meta charset="UTF-8">
+ <meta name="viewport" content="width=device-width, initial-scale=1.0">
+ <title>Azure AD | Vanilla JavaScript SPA</title>
+ <link rel="SHORTCUT ICON" href="./favicon.svg" type="image/x-icon">
+
+ <!-- adding Bootstrap 4 for UI components -->
+ <link rel="stylesheet" href="https://stackpath.bootstrapcdn.com/boot8strap/4.4.1/css/bootstrap.min.css" integrity="sha384-Vkoo8x4CGsO3+Hhxv8T/Q5PaXtkKtu6ug5TOeNV6gBiFeWPGFN9MuhOf23Q9Ifjh" crossorigin="anonymous">
+ </head>
+ <body>
+ <div class="jumbotron" style="margin: 10%">
+ <h1>Goodbye!</h1>
+ <p>You have signed out and your cache has been cleared.</p>
+ <a class="btn btn-primary" href="/" role="button">Take me back</a>
+ </div>
+ </body>
+ </html>
+ ```
+
+1. Save the file.
+
+## Add code to the *ui.js* file
+
+When authorization has been configured, the user interface can be created to allow users to sign in and sign out when the project is run. To build the user interface (UI) for the application, [Bootstrap](https://getbootstrap.com/) is used to create a responsive UI that contains a **Sign-In** and **Sign-Out** button.
+
+1. Open *public/ui.js* and add the following code snippet:
+
+ ```javascript
+ // Select DOM elements to work with
+ const signInButton = document.getElementById('signIn');
+ const signOutButton = document.getElementById('signOut');
+ const titleDiv = document.getElementById('title-div');
+ const welcomeDiv = document.getElementById('welcome-div');
+ const tableDiv = document.getElementById('table-div');
+ const tableBody = document.getElementById('table-body-div');
+
+ function welcomeUser(username) {
+ signInButton.classList.add('d-none');
+ signOutButton.classList.remove('d-none');
+ titleDiv.classList.add('d-none');
+ welcomeDiv.classList.remove('d-none');
+ welcomeDiv.innerHTML = `Welcome ${username}!`;
+ };
+
+ function updateTable(account) {
+ tableDiv.classList.remove('d-none');
+
+ const tokenClaims = createClaimsTable(account.idTokenClaims);
+
+ Object.keys(tokenClaims).forEach((key) => {
+ let row = tableBody.insertRow(0);
+ let cell1 = row.insertCell(0);
+ let cell2 = row.insertCell(1);
+ let cell3 = row.insertCell(2);
+ cell1.innerHTML = tokenClaims[key][0];
+ cell2.innerHTML = tokenClaims[key][1];
+ cell3.innerHTML = tokenClaims[key][2];
+ });
+ };
+ ```
+
+1. Save the file.
+
+## Add code to the *styles.css* file
+
+1. Open *public/styles.css* and add the following code snippet:
+
+ ```css
+ .navbarStyle {
+ padding: .5rem 1rem !important;
+ }
+
+ .table-responsive-ms {
+ max-height: 39rem !important;
+ padding-left: 10%;
+ padding-right: 10%;
+ }
+ ```
+
+1. Save the file.
+
+## Run your project and sign in
+
+Now that all the required code snippets have been added, the application can be called and tested in a web browser.
+
+1. Open a new terminal and run the following command to start your express web server.
+ ```powershell
+ npm start
+ ```
+1. Open a new private browser window and navigate to the application URI, `http://localhost:3000/`.
+1. Select **No account? Create one**, which starts the sign-up flow.
+1. In the **Create account** window, enter the email address registered to your Azure Active Directory (AD) for customers tenant, which starts the sign-up flow as a user for your application.
+1. After entering a one-time passcode from the customer tenant, enter a new password and more account details to complete the sign-up flow.
+
+ 1. If a window appears prompting you to **Stay signed in**, choose either **Yes** or **No**.
+
+1. The SPA will now display a button saying **Request Profile Information**. Select it to display profile data.
+
+ :::image type="content" source="media/how-to-spa-vanillajs-sign-in-sign-in-out/display-vanillajs-welcome.png" alt-text="Screenshot of sign in into a Vanilla JS SPA." lightbox="media/how-to-spa-vanillajs-sign-in-sign-in-out/display-vanillajs-welcome.png":::
+
+## Sign out of the application
+
+1. To sign out of the application, select **Sign out** in the navigation bar.
+1. A window appears asking which account to sign out of.
+1. Upon successful sign out, a final window appears advising you to close all browser windows.
+
+## Next steps
+
+- [Enable self-service password reset](./how-to-enable-password-reset-customers.md)
active-directory Whats New Docs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/whats-new-docs.md
Title: "What's new in Azure Active Directory for customers" description: "New and updated documentation for the Azure Active Directory for customers documentation." Previously updated : 08/01/2023 Last updated : 09/01/2023
Welcome to what's new in Azure Active Directory for customers documentation. This article lists new docs that have been added and those that have had significant updates in the last three months.
+## August 2023
+
+### New articles
+
+- [Quickstart: Get started with guide walkthrough](quickstart-get-started-guide.md)
+- [Tutorial: Add sign-in and sign-out to a Vanilla JavaScript single-page app for a customer tenant](tutorial-single-page-app-vanillajs-sign-in-sign-out.md)
+- [Sign in users in a sample Node.js CLI application](sample-cli-app-node-sign-in-users.md)
+- [Tutorial: Prepare a Node.js CLI application for authentication](tutorial-cli-app-node-sign-in-prepare-app.md)
+- [Prepare your customer tenant to sign in users in a Node.js CLI application](tutorial-cli-app-node-sign-in-prepare-tenant.md)
+- [Authenticate users in a Node.js CLI application - Build app](tutorial-cli-app-node-sign-in-sign-out.md)
+- [Tutorial: Use role-based access control in your .NET MAUI app](tutorial-desktop-maui-role-based-access-control.md)
+- [Tutorial: Use role-based access control in your .NET MAUI app](tutorial-mobile-maui-role-based-access-control.md)
+
+### Updated articles
+
+- [Collect user attributes during sign-up](how-to-define-custom-attributes.md) - Custom attribute update
+- [Quickstart: Create a tenant (preview)](quickstart-tenant-setup.md) - Get started guide update
+- [Add and manage admin accounts](how-to-manage-admin-accounts.md) - Editorial review
+- [Tutorial: Prepare a Vanilla JavaScript single-page app for authentication in a customer tenant](tutorial-single-page-app-vanillajs-prepare-app.md) - Editorial review
+- [Azure AD for customers documentation](index.yml) - Editorial review
+- [Tutorial: Sign in users in .NET MAUI app](tutorial-desktop-app-maui-sign-in-sign-out.md) - Add app roles to .NET MAUI app and receive them in the ID token
+- [Tutorial: Sign in users in .NET MAUI shell app](tutorial-mobile-app-maui-sign-in-sign-out.md) - Add app roles to .NET MAUI app and receive them in the ID token
+
## July 2023

### New articles
Welcome to what's new in Azure Active Directory for customers documentation. Thi
- [Add user attributes to token claims](how-to-add-attributes-to-token.md) - Added attributes to token claims: fixed steps for updating the app manifest
- [Tutorial: Prepare a React single-page app (SPA) for authentication in a customer tenant](./tutorial-single-page-app-react-sign-in-prepare-app.md) - JavaScript tutorial edits, code sample updates and fixed SPA aligning content styling
- [Tutorial: Add sign-in and sign-out to a React single-page app (SPA) for a customer tenant](./tutorial-single-page-app-react-sign-in-sign-out.md) - JavaScript tutorial edits and fixed SPA aligning content styling
-- [Tutorial: Handle authentication flows in a vanilla JavaScript single-page app](how-to-single-page-app-vanillajs-configure-authentication.md) - Fixed SPA aligning content styling
-- [Tutorial: Prepare a vanilla JavaScript single-page app for authentication in a customer tenant](how-to-single-page-app-vanillajs-prepare-app.md) - Fixed SPA aligning content styling
-- [Tutorial: Prepare your customer tenant to authenticate a vanilla JavaScript single-page app](how-to-single-page-app-vanillajs-prepare-tenant.md) - Fixed SPA aligning content styling
-- [Tutorial: Add sign-in and sign-out to a vanilla JavaScript single-page app for a customer tenant](how-to-single-page-app-vanillajs-sign-in-sign-out.md) - Fixed SPA aligning content styling
+- [Tutorial: Handle authentication flows in a Vanilla JavaScript single-page app](tutorial-single-page-app-vanillajs-configure-authentication.md) - Fixed SPA aligning content styling
+- [Tutorial: Prepare a Vanilla JavaScript single-page app for authentication in a customer tenant](tutorial-single-page-app-vanillajs-prepare-app.md) - Fixed SPA aligning content styling
+- [Tutorial: Prepare your customer tenant to authenticate a Vanilla JavaScript single-page app](tutorial-single-page-app-vanillajs-prepare-tenant.md) - Fixed SPA aligning content styling
+- [Tutorial: Add sign-in and sign-out to a Vanilla JavaScript single-page app for a customer tenant](tutorial-single-page-app-vanillajs-sign-in-sign-out.md) - Fixed SPA aligning content styling
- [Tutorial: Prepare your customer tenant to authenticate users in a React single-page app (SPA)](tutorial-single-page-app-react-sign-in-prepare-tenant.md) - Fixed SPA aligning content styling
- [Tutorial: Prepare an ASP.NET web app for authentication in a customer tenant](tutorial-web-app-dotnet-sign-in-prepare-app.md) - ASP.NET web app fixes
- [Tutorial: Prepare your customer tenant to authenticate users in an ASP.NET web app](tutorial-web-app-dotnet-sign-in-prepare-tenant.md) - ASP.NET web app fixes
active-directory Customize Invitation Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customize-invitation-api.md
description: Azure Active Directory B2B collaboration supports your cross-compan
+ Last updated 12/02/2022
-# Customer intent: As a tenant administrator, I want to customize the invitation process with the API.
+# Customer intent: As a tenant administrator, I want to customize the invitation process with the API.
# Azure Active Directory B2B collaboration API and customization
Check out the invitation API reference in [https://developer.microsoft.com/graph
- [What is Azure AD B2B collaboration?](what-is-b2b.md)
- [Add and invite guest users](add-users-administrator.md)
- [The elements of the B2B collaboration invitation email](invitation-email-elements.md)
-
- [What is Azure AD B2B collaboration?](what-is-b2b.md) - [Add and invite guest users](add-users-administrator.md) - [The elements of the B2B collaboration invitation email](invitation-email-elements.md)-
active-directory Direct Federation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/direct-federation.md
Last updated 03/15/2023
-+
active-directory External Collaboration Settings Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/external-collaboration-settings-configure.md
description: Learn how to enable Active Directory B2B external collaboration and
+ Last updated 10/24/2022
active-directory Facebook Federation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/facebook-federation.md
Last updated 01/20/2023
-+ -
-# Customer intent: As a tenant administrator, I want to set up Facebook as an identity provider for guest user login.
+# Customer intent: As a tenant administrator, I want to set up Facebook as an identity provider for guest user login.
# Add Facebook as an identity provider for External Identities
active-directory Google Federation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/google-federation.md
Last updated 01/20/2023
-+
active-directory Invite Internal Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/invite-internal-users.md
description: If you have internal user accounts for partners, distributors, supp
+ Last updated 07/27/2023
- # Customer intent: As a tenant administrator, I want to know how to invite internal users to B2B collaboration.
active-directory Tenant Restrictions V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/tenant-restrictions-v2.md
# Set up tenant restrictions V2 (Preview)

> [!NOTE]
-> The **Tenant restrictions** settings, which are included with cross-tenant access settings, are preview features of Azure Active Directory. For more information about previews, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+> The **Tenant restrictions** settings, which are included with cross-tenant access settings, are preview features of Azure Active Directory. For more information about previews, see [Universal License Terms for Online Services](https://www.microsoft.com/licensing/terms/product/ForOnlineServices/all).
For increased security, you can limit what your users can access when they use an external account to sign in from your networks or devices. With the **Tenant restrictions** settings included with [cross-tenant access settings](cross-tenant-access-overview.md), you can control the external apps that your Windows device users can access when they're using external accounts.
For example, let's say a user in your organization has created a separate accoun
:::image type="content" source="media/tenant-restrictions-v2/authentication-flow.png" alt-text="Diagram illustrating tenant restrictions v2.":::
-| | |
+
+| Steps | Description |
|||
|**1** | Contoso configures **Tenant restrictions** in their cross-tenant access settings to block all external accounts and external apps. Contoso enforces the policy on each Windows device by updating the local computer configuration with Contoso's tenant ID and the tenant restrictions policy ID. |
|**2** | A user with a Contoso-managed Windows device tries to sign in to an external app using an account from an unknown tenant. The Windows device adds an HTTP header to the authentication request. The header contains Contoso's tenant ID and the tenant restrictions policy ID. |
|**3** | *Authentication plane protection:* Azure AD uses the header in the authentication request to look up the tenant restrictions policy in the Azure AD cloud. Because Contoso's policy blocks external accounts from accessing external tenants, the request is blocked at the authentication level. |
|**4** | *Data plane protection:* The user tries to access the external application by copying an authentication response token they obtained outside of Contoso's network and pasting it into the Windows device. However, Azure AD compares the claim in the token to the HTTP header added by the Windows device. Because they don't match, Azure AD blocks the session so the user can't access the application. |
-|||
+ This article describes how to configure tenant restrictions V2 using the Azure portal. You can also use the [Microsoft Graph cross-tenant access API](/graph/api/resources/crosstenantaccesspolicy-overview?view=graph-rest-beta&preserve-view=true) to create these same tenant restrictions policies.
Settings for tenant restrictions V2 are located in the Azure portal under **Cros
1. Under **Applies to**, select one of the following:

   - **All external applications**: Applies the action you chose under **Access status** to all external applications. If you block access to all external applications, you also need to block access for all of your users and groups (on the **Users and groups** tab).
- - **Select external applications**: Lets you choose the external applications you want the action under **Access status** to apply to. To select applications, choose **Add Microsoft applications** or **Add other applications**. Then search by the application name or the application ID (either the *client app ID* or the *resource app ID*) and select the app. ([See a list of IDs for commonly used Microsoft applications.](https://learn.microsoft.com/troubleshoot/azure/active-directory/verify-first-party-apps-sign-in)) If you want to add more apps, use the **Add** button. When you're done, select **Submit**.
+ - **Select external applications**: Lets you choose the external applications you want the action under **Access status** to apply to. To select applications, choose **Add Microsoft applications** or **Add other applications**. Then search by the application name or the application ID (either the *client app ID* or the *resource app ID*) and select the app. ([See a list of IDs for commonly used Microsoft applications.](/troubleshoot/azure/active-directory/verify-first-party-apps-sign-in)) If you want to add more apps, use the **Add** button. When you're done, select **Submit**.
:::image type="content" source="media/tenant-restrictions-v2/tenant-restrictions-default-applications-applies-to.png" alt-text="Screenshot showing selecting the external applications tab.":::
Suppose you use tenant restrictions to block access by default, but you want to
1. If you chose **Select external applications**, do the following for each application you want to add:

   - Select **Add Microsoft applications** or **Add other applications**. For our Microsoft Learn example, we choose **Add other applications**.
- - In the search box, type the application name or the application ID (either the *client app ID* or the *resource app ID*). ([See a list of IDs for commonly used Microsoft applications.](https://learn.microsoft.com/troubleshoot/azure/active-directory/verify-first-party-apps-sign-in)) For our Microsoft Learn example, we enter the application ID `18fbca16-2224-45f6-85b0-f7bf2b39b3f3`.
+ - In the search box, type the application name or the application ID (either the *client app ID* or the *resource app ID*). ([See a list of IDs for commonly used Microsoft applications.](/troubleshoot/azure/active-directory/verify-first-party-apps-sign-in)) For our Microsoft Learn example, we enter the application ID `18fbca16-2224-45f6-85b0-f7bf2b39b3f3`.
   - Select the application in the search results, and then select **Add**.
   - Repeat for each application you want to add.
   - When you're done selecting applications, select **Submit**.
Suppose you use tenant restrictions to block access by default, but you want to
## Step 3: Enable tenant restrictions on Windows managed devices
-After you create a tenant restrictions V2 policy, you can enforce the policy on each Windows 10, Windows 11, and Windows Server 2022 device by adding your tenant ID and the policy ID to the device's **Tenant Restrictions** configuration. When tenant restrictions are enabled on a Windows device, corporate proxies aren't required for policy enforcement. Devices don't need to be Azure AD managed to enforce tenant restrictions V2; domain-joined devices that are managed with Group Policy are also supported.
+After you create a tenant restrictions V2 policy, you can enforce the policy on each Windows 10 and Windows 11 device by adding your tenant ID and the policy ID to the device's **Tenant Restrictions** configuration. When tenant restrictions are enabled on a Windows device, corporate proxies aren't required for policy enforcement. Devices don't need to be Azure AD managed to enforce tenant restrictions V2; domain-joined devices that are managed with Group Policy are also supported.
### Administrative Templates (.admx) for Windows 10 November 2021 Update (21H2) and Group policy settings
To test the tenant restrictions V2 policy on a device, follow these steps.
> [!NOTE]
>
-> - The device must be running Windows 10, Windows 11, or Windows Server 2022 with the latest updates.
+> - The device must be running Windows 10 or Windows 11 with the latest updates.
1. On the Windows computer, press the Windows key, type **gpedit**, and then select **Edit group policy (Control panel)**.
To test the tenant restrictions V2 policy on a device, follow these steps.
## Step 4: Set up tenant restrictions V2 on your corporate proxy
-Tenant restrictions V2 policies can't be directly enforced on non-Windows 10, Windows 11, or Windows Server 2022 devices, such as Mac computers, mobile devices, unsupported Windows applications, and Chrome browsers. To ensure sign-ins are restricted on all devices and apps in your corporate network, configure your corporate proxy to enforce tenant restrictions V2. Although configuring tenant restrictions on your corporate proxy don't provide data plane protection, it does provide authentication plane protection.
+Tenant restrictions V2 policies can't be directly enforced on devices that don't run Windows 10 or Windows 11, such as Mac computers, mobile devices, unsupported Windows applications, and Chrome browsers. To ensure sign-ins are restricted on all devices and apps in your corporate network, configure your corporate proxy to enforce tenant restrictions V2. Although configuring tenant restrictions on your corporate proxy doesn't provide data plane protection, it does provide authentication plane protection.
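+
+As a rough sketch only (the header name `sec-Restrict-Tenant-Access-Policy` and the `<tenantId>:<policyId>` value format are assumptions here, not details stated in this article), a proxy hook that injects the tenant restrictions header might look like this:
+
+```javascript
+// Hypothetical proxy middleware hook (sketch): add the tenant restrictions
+// header to requests bound for Microsoft sign-in endpoints.
+const TENANT_ID = '00000000-0000-0000-0000-000000000000'; // your tenant ID
+const POLICY_ID = '11111111-1111-1111-1111-111111111111'; // your TRv2 policy ID
+
+function injectTenantRestrictionsHeader(proxyReq) {
+  proxyReq.setHeader(
+    'sec-Restrict-Tenant-Access-Policy',
+    `${TENANT_ID}:${POLICY_ID}`
+  );
+}
+```
+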
> [!IMPORTANT]
> If you've previously set up tenant restrictions, you'll need to stop sending `restrict-msa` to login.live.com. Otherwise, the new settings will conflict with your existing instructions to the MSA login service.
active-directory Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/troubleshoot.md
Last updated 05/23/2023
tags: active-directory -+
active-directory User Properties https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/user-properties.md
Last updated 05/18/2023
-+ -
-# Customer intent: As a tenant administrator, I want to learn about B2B collaboration guest user properties and states before and after invitation redemption.
+# Customer intent: As a tenant administrator, I want to learn about B2B collaboration guest user properties and states before and after invitation redemption.
# Properties of an Azure Active Directory B2B collaboration user
active-directory Whats New Docs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/whats-new-docs.md
Title: "What's new in Azure Active Directory External Identities" description: "New and updated documentation for the Azure Active Directory External Identities." Previously updated : 08/01/2023 Last updated : 09/01/2023
Welcome to what's new in Azure Active Directory External Identities documentation. This article lists new docs that have been added and those that have had significant updates in the last three months. To learn what's new with the External Identities service, see [What's new in Azure Active Directory](../fundamentals/whats-new.md).
+## August 2023
+
+### Updated articles
+
+- [B2B collaboration user claims mapping in Azure Active Directory](claims-mapping.md) - UPN claims behavior update
+- [Self-service sign-up](self-service-sign-up-overview.md) - Customer content reference update
+- [Cross-tenant access overview](cross-tenant-access-overview.md) - New storage model update
+- [Cross-tenant access settings](cross-tenant-access-settings-b2b-collaboration.md) - New storage model update
+- [Configure B2B direct connect](cross-tenant-access-settings-b2b-direct-connect.md) - New storage model update
## July 2023

### New article
Welcome to what's new in Azure Active Directory External Identities documentatio
- [Set up tenant restrictions V2 (Preview)](tenant-restrictions-v2.md) - Microsoft Teams updates
- [Invite guest users to an app](add-users-information-worker.md) - Link and structure updates
-## May 2023
-
-### New article
-
-- [Set up tenant restrictions V2 (Preview)](tenant-restrictions-v2.md)
-
-### Updated articles
-
-- [Overview: Cross-tenant access with Azure AD External Identities](cross-tenant-access-overview.md) - Graph API links updates
-- [Reset redemption status for a guest user](reset-redemption-status.md) - Screenshot updates
-
active-directory Custom Security Attributes Add https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/custom-security-attributes-add.md
+ Last updated 06/29/2023
> [!IMPORTANT]
> Custom security attributes are currently in PREVIEW.
-> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+> For more information about previews, see [Universal License Terms For Online Services](https://www.microsoft.com/licensing/terms/product/ForOnlineServices/all).
[Custom security attributes](custom-security-attributes-overview.md) in Azure Active Directory (Azure AD) are business-specific attributes (key-value pairs) that you can define and assign to Azure AD objects. This article describes how to add, edit, or deactivate custom security attribute definitions.
active-directory Custom Security Attributes Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/custom-security-attributes-manage.md
+ Last updated 06/29/2023
> [!IMPORTANT]
> Custom security attributes are currently in PREVIEW.
-> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+> For more information about previews, see [Universal License Terms For Online Services](https://www.microsoft.com/licensing/terms/product/ForOnlineServices/all).
For people in your organization to effectively work with [custom security attributes](custom-security-attributes-overview.md), you must grant the appropriate access. Depending on the information you plan to include in custom security attributes, you might want to restrict custom security attributes or you might want to make them broadly accessible in your organization. This article describes how to manage access to custom security attributes.
active-directory Custom Security Attributes Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/custom-security-attributes-overview.md
> [!IMPORTANT]
> Custom security attributes are currently in PREVIEW.
-> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+> For more information about previews, see [Universal License Terms For Online Services](https://www.microsoft.com/licensing/terms/product/ForOnlineServices/all).
Custom security attributes in Azure Active Directory (Azure AD) are business-specific attributes (key-value pairs) that you can define and assign to Azure AD objects. These attributes can be used to store information, categorize objects, or enforce fine-grained access control over specific Azure resources. Custom security attributes can be used with [Azure attribute-based access control (Azure ABAC)](../../role-based-access-control/conditions-overview.md).
active-directory Custom Security Attributes Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/custom-security-attributes-troubleshoot.md
> [!IMPORTANT]
> Custom security attributes are currently in PREVIEW.
-> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+> For more information about previews, see [Universal License Terms For Online Services](https://www.microsoft.com/licensing/terms/product/ForOnlineServices/all).
## Symptom - Custom security attributes page is disabled
active-directory Data Storage Eu https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/data-storage-eu.md
Previously updated : 12/13/2022 Last updated : 08/17/2023
The following sections provide information about customer data that doesn't meet
## Services permanently excluded from the EU Data Residency and EU Data Boundary
-* **Reason for customer data egress** - Some forms of communication rely on a network that is operated by global providers, such as phone calls and SMS. Device vendor-specific services such Apple Push Notifications, may be outside of Europe.
+* **Reason for customer data egress** - Some forms of communication, such as phone calls or text messaging platforms like SMS, RCS, or WhatsApp, rely on a network that is operated by global providers. Device vendor-specific services, such as push notifications from Apple or Google, may be outside of Europe.
* **Types of customer data being egressed** - User account data (phone number).
* **Customer data location at rest** - In EU Data Boundary.
* **Customer data processing** - Some processing may occur globally.
active-directory How To Create Delete Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/how-to-create-delete-users.md
This article explains how to create a new user, invite an external guest, and delete a user in your Azure Active Directory (Azure AD) tenant.
-The updated experience for creating new users covered in this article is available as an Azure AD preview feature. This feature is enabled by default, but you can opt out by going to **Azure AD** > **Preview features** and disabling the **Create user experience** feature. For more information about previews, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+The updated experience for creating new users covered in this article is available as an Azure AD preview feature. This feature is enabled by default, but you can opt out by going to **Azure AD** > **Preview features** and disabling the **Create user experience** feature. For more information about previews, see [Universal License Terms For Online Services](https://www.microsoft.com/licensing/terms/product/ForOnlineServices/all).
Instructions for the legacy create user process can be found in the [Add or delete users](./add-users.md) article.
active-directory Identity Secure Score https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/identity-secure-score.md
Previously updated : 06/09/2022 Last updated : 08/23/2023
-# What is the identity secure score in Azure Active Directory?
+# What is identity secure score?
-How secure is your Azure AD tenant? If you don't know how to answer this question, this article explains how the identity secure score helps you to monitor and improve your identity security posture.
-
-## What is an identity secure score?
-
-The identity secure score is percentage that functions as an indicator for how aligned you are with Microsoft's best practice recommendations for security. Each improvement action in identity secure score is tailored to your specific configuration.
+The identity secure score is shown as a percentage that functions as an indicator for how aligned you are with Microsoft's recommendations for security. Each improvement action in identity secure score is tailored to your configuration.
![Secure score](./media/identity-secure-score/identity-secure-score-overview.png)
-The score helps you to:
+This score helps to:
- Objectively measure your identity security posture
- Plan identity security improvements
By following the improvement actions, you can:
## How do I get my secure score?
-The identity secure score is available in all editions of Azure AD. Organizations can access their identity secure score from the **Azure portal** > **Azure Active Directory** > **Security** > **Identity Secure Score**.
+Identity secure score is available to free and paid customers. Organizations can access their identity secure score in the [Microsoft Entra admin center](https://entra.microsoft.com/) under **Protection** > **Identity Secure Score**.
## How does it work?
-Every 48 hours, Azure looks at your security configuration and compares your settings with the recommended best practices. Based on the outcome of this evaluation, a new score is calculated for your directory. It's possible that your security configuration isn't fully aligned with the best practice guidance and the improvement actions are only partially met. In these scenarios, you will only be awarded a portion of the max score available for the control.
+Every 48 hours, Azure looks at your security configuration and compares your settings with the recommended best practices. Based on the outcome of this evaluation, a new score is calculated for your directory. It's possible that your security configuration isn't fully aligned with the best practice guidance and the improvement actions are only partially met. In these scenarios, you're awarded a portion of the max score available for the control.
-Each recommendation is measured based on your Azure AD configuration. If you are using third-party products to enable a best practice recommendation, you can indicate this configuration in the settings of an improvement action. You also have the option to set recommendations to be ignored if they don't apply to your environment. An ignored recommendation does not contribute to the calculation of your score.
+Each recommendation is measured based on your Azure AD configuration. If you're using third-party products to enable a best practice recommendation, you can indicate this configuration in the settings of an improvement action. You may set recommendations to be ignored if they don't apply to your environment. An ignored recommendation doesn't contribute to the calculation of your score.
![Ignore or mark action as covered by third party](./media/identity-secure-score/identity-secure-score-ignore-or-third-party-reccomendations.png)

- **To address** - You recognize that the improvement action is necessary and plan to address it at some point in the future. This state also applies to actions that are detected as partially, but not fully completed.
- **Planned** - There are concrete plans in place to complete the improvement action.
-- **Risk accepted** - Security should always be balanced with usability, and not every recommendation will work for your environment. When that is the case, you can choose to accept the risk, or the remaining risk, and not enact the improvement action. You won't be given any points, but the action will no longer be visible in the list of improvement actions. You can view this action in history or undo it at any time.
-- **Resolved through third party** and **Resolved through alternate mitigation** - The improvement action has already been addressed by a third-party application or software, or an internal tool. You'll gain the points that the action is worth, so your score better reflects your overall security posture. If a third party or internal tool no longer covers the control, you can choose another status. Keep in mind, Microsoft will have no visibility into the completeness of implementation if the improvement action is marked as either of these statuses.
+- **Risk accepted** - Security should always be balanced with usability, and not every recommendation works for everyone. When that is the case, you can choose to accept the risk, or the remaining risk, and not enact the improvement action. You aren't awarded any points, and the action isn't visible in the list of improvement actions. You can view this action in history or undo it at any time.
+- **Resolved through third party** and **Resolved through alternate mitigation** - The improvement action has already been addressed by a third-party application or software, or an internal tool. You're awarded the points the action is worth, so your score better reflects your overall security posture. If a third party or internal tool no longer covers the control, you can choose another status. Keep in mind, Microsoft has no visibility into the completeness of implementation if the improvement action is marked as either of these statuses.
## How does it help me?
To access identity secure score, you must be assigned one of the following roles
With read and write access, you can make changes and directly interact with identity secure score.
-* Global administrator
-* Security administrator
-* Exchange administrator
-* SharePoint administrator
+* Global Administrator
+* Security Administrator
+* Exchange Administrator
+* SharePoint Administrator
#### Read-only roles

With read-only access, you aren't able to edit status for an improvement action.
-* Helpdesk administrator
-* User administrator
-* Service support administrator
-* Security reader
-* Security operator
-* Global reader
+* Helpdesk Administrator
+* User Administrator
+* Service support Administrator
+* Security Reader
+* Security Operator
+* Global Reader
### How are controls scored?
-Controls can be scored in two ways. Some are scored in a binary fashion - you get 100% of the score if you have the feature or setting configured based on our recommendation. Other scores are calculated as a percentage of the total configuration. For example, if the improvement recommendation states you'll get a maximum of 10.71% if you protect all your users with MFA and you only have 5 of 100 total users protected, you would be given a partial score around 0.53% (5 protected / 100 total * 10.71% maximum = 0.53% partial score).
+Controls can be scored in two ways. Some are scored in a binary fashion - you get 100% of the score if you have the feature or setting configured based on our recommendation. Other scores are calculated as a percentage of the total configuration. For example, if the improvement recommendation offers a maximum increase of 10.71% when you protect all your users with MFA, and you have 5 of 100 total users protected, you're given a partial score of around 0.53% (5 protected / 100 total * 10.71% maximum = 0.53% partial score).
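+
+For illustration only, the partial-credit arithmetic from that example can be sketched as follows (the variable names are ours, not the product's):
+
+```javascript
+// Partial score = (protected users / total users) * control's maximum score.
+const maxControlScore = 10.71; // maximum % the MFA control can contribute
+const protectedUsers = 5;
+const totalUsers = 100;
+
+const partialScore = (protectedUsers / totalUsers) * maxControlScore;
+console.log(partialScore.toFixed(2)); // ~0.54; the article rounds to about 0.53%
+```
+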
### What does [Not Scored] mean?
-Actions labeled as [Not Scored] are ones you can perform in your organization but won't be scored because they aren't hooked up in the tool (yet!). So, you can still improve your security, but you won't get credit for those actions right now.
-
-In addition, the recommended actions:
-* Protect all users with a user risk policy
-* Protect all users with a sign-in risk policy
-
-Also won't give you credits when configured using Conditional Access Policies, yet, for the same reason as above. For now, these actions give credits only when configured through Identity Protection policies.
+Actions labeled as [Not Scored] are ones you can perform in your organization but aren't scored. So, you can still improve your security, but you aren't given credit for those actions right now.
### How often is my score updated?
The score is calculated once per day (around 1:00 AM PST). If you make a change
### My score changed. How do I figure out why?
-Head over to the [Microsoft 365 Defender portal](https://security.microsoft.com/), where you'll find your complete Microsoft secure score. You can easily see all the changes to your secure score by reviewing the in-depth changes on the history tab.
+Head over to the [Microsoft 365 Defender portal](https://security.microsoft.com/), where you find your complete Microsoft secure score. You can easily see all the changes to your secure score by reviewing the in-depth changes on the history tab.
### Does the secure score measure my risk of getting breached?
-In short, no. The secure score does not express an absolute measure of how likely you are to get breached. It expresses the extent to which you have adopted features that can offset the risk of being breached. No service can guarantee that you will not be breached, and the secure score should not be interpreted as a guarantee in any way.
+No, secure score doesn't express an absolute measure of how likely you are to get breached. It expresses the extent to which you have adopted features that can offset risk. No service can guarantee protection, and the secure score shouldn't be interpreted as a guarantee in any way.
### How should I interpret my score?
active-directory New Name https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/new-name.md
+ Previously updated : 07/11/2023 Last updated : 08/29/2023 - # Customer intent: As a new or existing customer, I want to learn more about the new name for Azure Active Directory (Azure AD) and understand the impact the name change may have on other products, new or existing license(s), what I need to do, and where I can learn more about Microsoft Entra products. # New name for Azure Active Directory
-To unify the [Microsoft Entra](/entra) product family, reflect the progression to modern multicloud identity security, and simplify secure access experiences for all, we're renaming Azure Active Directory (Azure AD) to Microsoft Entra ID.
+To communicate the multicloud, multiplatform functionality of the products, alleviate confusion with Windows Server Active Directory, and unify the [Microsoft Entra](/entra) product family, we're renaming Azure Active Directory (Azure AD) to Microsoft Entra ID.
-## No action is required from you
+## No interruptions to usage or service
If you're using Azure AD today or are currently deploying Azure AD in your organizations, you can continue to use the service without interruption. All existing deployments, configurations, and integrations will continue to function as they do today without any action from you. You can continue to use familiar Azure AD capabilities that you can access through the Azure portal, Microsoft 365 admin center, and the [Microsoft Entra admin center](https://entra.microsoft.com).
-## Only the name is changing
- All features and capabilities are still available in the product. Licensing, terms, service-level agreements, product certifications, support and pricing remain the same.
+To make the transition seamless, all existing login URLs, APIs, PowerShell cmdlets, and Microsoft Authentication Libraries (MSAL) stay the same, as do developer experiences and tooling.
Service plan display names will change on October 1, 2023. Microsoft Entra ID Free, Microsoft Entra ID P1, and Microsoft Entra ID P2 will be the new names of standalone offers, and all capabilities included in the current Azure AD plans remain the same. Microsoft Entra ID – currently known as Azure AD – will continue to be included in Microsoft 365 licensing plans, including Microsoft 365 E3 and Microsoft 365 E5. Details on pricing and what's included are available on the [pricing and free trials page](https://aka.ms/PricingEntra).

:::image type="content" source="./media/new-name/azure-ad-new-name.png" alt-text="Diagram showing the new name for Azure AD and Azure AD External Identities." border="false" lightbox="./media/new-name/azure-ad-new-name-high-res.png":::

During 2023, you may see both the current Azure AD name and the new Microsoft Entra ID name in support area paths. For self-service support, look for the topic path of "Microsoft Entra" or "Azure Active Directory/Microsoft Entra ID."
-## Identity developer and devops experiences aren't impacted by the rename
+## Guide to Azure AD name changes and exceptions
-To make the transition seamless, all existing login URLs, APIs, PowerShell cmdlets, and Microsoft Authentication Libraries (MSAL) stay the same, as do developer experiences and tooling.
+We encourage content creators, organizations with internal documentation for IT or identity security admins, developers of Azure AD-enabled apps, independent software vendors, and partners of Microsoft to update their experiences and use the new name by the end of 2023. We recommend changing the name in customer-facing experiences, prioritizing highly visible surfaces.
-Microsoft identity platform encompasses all our identity and access developer assets. It will continue to provide the resources to help you build applications that your users and customers can sign in to using their Microsoft identities or social accounts.
+### Product name
-Naming is also not changing for:
+Microsoft Entra ID is the new name for Azure AD. Please replace the product names Azure Active Directory, Azure AD, and AAD with Microsoft Entra ID.
+
+- Microsoft Entra is the name for the product family of identity and network access solutions.
+- Microsoft Entra ID is one of the products within that family.
+- Acronym usage is not encouraged, but if you must replace AAD with an acronym due to space limitations, please use ME-ID.
-- [Microsoft Authentication Library (MSAL)](../develop/msal-overview.md) - Use to acquire security tokens from the Microsoft identity platform to authenticate users and access secured web APIs to provide secure access to Microsoft Graph, other Microsoft APIs, third-party web APIs, or your own web API.
-- [Microsoft Graph](/graph) - Get programmatic access to organizations, user, and application data stored in Microsoft Entra ID.
-- [Microsoft Graph PowerShell](/powershell/microsoftgraph/overview) - Acts as an API wrapper for the Microsoft Graph APIs and helps administer every Microsoft Entra ID feature that has an API in Microsoft Graph.
-- [Windows Server Active Directory](/troubleshoot/windows-server/identity/active-directory-overview), commonly known as "Active Directory," and all related Windows Server identity services associated with Active Directory.
-- [Active Directory Federation Services (AD FS)](/windows-server/identity/active-directory-federation-services) nor [Active Directory Domain Services (AD DS)](/windows-server/identity/ad-ds/active-directory-domain-services) nor the product name "Active Directory" or any corresponding features.
-- [Azure Active Directory B2C](../../active-directory-b2c/index.yml) will continue to be available as an Azure service.
-- [Any deprecated or retired functionality, feature, or service](what-is-deprecated.md) of Azure AD.
+### Logo/icon
+
+Please change the Azure AD product icon in your experiences. The Azure AD icons are now at end-of-life.
+
+| **Azure AD product icons** | **Microsoft Entra ID product icon** |
+|:--:|:--:|
+| ![Azure AD product icon](./media/new-name/azure-ad-icon-1.png) ![Alternative Azure AD product icon](./media/new-name/azure-ad-icon-2.png) | ![Microsoft Entra ID product icon](./media/new-name/microsoft-entra-id-icon.png) |
+
+You can download the new Microsoft Entra ID icon here: [Microsoft Entra architecture icons](../architecture/architecture-icons.md)
+
+### Feature names
+
+Capabilities or services formerly known as "Azure Active Directory &lt;feature name&gt;" or "Azure AD &lt;feature name&gt;" will be branded as Microsoft Entra product family features. This is done across our portfolio to avoid naming length and complexity, and because many features work across all the products. For example:
+
+- "Azure AD Conditional Access" is now "Microsoft Entra Conditional Access"
+- "Azure AD single sign-on" is now "Microsoft Entra single sign-on"
+
+See the [Glossary of updated terminology](#glossary-of-updated-terminology) later in this article for more examples.
+
+### Exceptions and clarifications to the Azure AD name change
+
+Names aren't changing for Active Directory, developer tools, Azure AD B2C, or deprecated or retired functionality, features, or services.
+
+Don't rename the following features, functionality, or services.
+
+#### Azure AD renaming exceptions and clarifications
+
+| **Correct terminology** | **Details** |
+|-|-|
+| Active Directory <br/><br/>&#8226; Windows Server Active Directory <br/>&#8226; Active Directory Federation Services (AD FS) <br/>&#8226; Active Directory Domain Services (AD DS) <br/>&#8226; Active Directory <br/>&#8226; Any Active Directory feature(s) | Windows Server Active Directory, commonly known as Active Directory, and related features and services associated with Active Directory aren't branded with Microsoft Entra. |
+| Authentication library <br/><br/>&#8226; Azure AD Authentication Library (ADAL) <br/>&#8226; Microsoft Authentication Library (MSAL) | Azure Active Directory Authentication Library (ADAL) is deprecated. While existing apps that use ADAL will continue to work, Microsoft will no longer release security fixes on ADAL. Migrate applications to the Microsoft Authentication Library (MSAL) to avoid putting your app's security at risk. <br/><br/>[Microsoft Authentication Library (MSAL)](../develop/msal-overview.md) - Provides security tokens from the Microsoft identity platform to authenticate users and access secured web APIs to provide secure access to Microsoft Graph, other Microsoft APIs, third-party web APIs, or your own web API. |
+| B2C <br/><br/>&#8226; Azure Active Directory B2C <br/>&#8226; Azure AD B2C | [Azure Active Directory B2C](/azure/active-directory-b2c) isn't being renamed. Microsoft Entra External ID for customers is Microsoft's new customer identity and access management (CIAM) solution. |
+| Graph <br/><br/>&#8226; Azure Active Directory Graph <br/>&#8226; Azure AD Graph <br/>&#8226; Microsoft Graph | Azure Active Directory (Azure AD) Graph is deprecated. Going forward, we will make no further investment in Azure AD Graph, and Azure AD Graph APIs have no SLA or maintenance commitment beyond security-related fixes. Investments in new features and functionalities will only be made in Microsoft Graph.<br/><br/>[Microsoft Graph](/graph) - Grants programmatic access to organization, user, and application data stored in Microsoft Entra ID. |
+| PowerShell <br/><br/>&#8226; Azure Active Directory PowerShell <br/>&#8226; Azure AD PowerShell <br/>&#8226; Microsoft Graph PowerShell | Azure AD PowerShell for Graph is planned for deprecation on March 30, 2024. For more info on the deprecation plans, see the deprecation update. We encourage you to migrate to Microsoft Graph PowerShell, which is the recommended module for interacting with Azure AD. <br/><br/>[Microsoft Graph PowerShell](/powershell/microsoftgraph/overview) - Acts as an API wrapper for the Microsoft Graph APIs and helps administer every Microsoft Entra ID feature that has an API in Microsoft Graph. |
+| Accounts <br/><br/>&#8226; Microsoft account <br/>&#8226; Work or school account | For end user sign-ins and account experiences, follow guidance for work and school accounts in [Sign in with Microsoft branding guidelines](../develop/howto-add-branding-in-apps.md). |
+| Microsoft identity platform | The Microsoft identity platform encompasses all our identity and access developer assets. It will continue to provide the resources to help you build applications that your users and customers can sign in to using their Microsoft identities or social accounts. |
+| <br/>&#8226; Azure AD Sync <br/>&#8226; DirSync | DirSync and Azure AD Sync aren't supported and no longer work. If you're still using DirSync or Azure AD Sync, you must upgrade to Microsoft Entra Connect to resume your sync process. For more info, see [Microsoft Entra Connect](/azure/active-directory/hybrid/connect/how-to-dirsync-upgrade-get-started). |
+
+## Glossary of updated terminology
+
+Features of the identity and network access products are attributed to Microsoft Entra, the product family, not the individual product name.
+
+You're not required to use the Microsoft Entra attribution with features. Use it only when needed to clarify whether you're talking about a concept versus the feature in a specific product, or when comparing a Microsoft Entra feature with a competing feature.
+
+Only official product names are capitalized, plus Conditional Access and My * apps.
+
+| **Category** | **Old terminology** | **Correct name as of July 2023** |
+|-||-|
+| **Microsoft Entra product family** | Microsoft Azure Active Directory<br/> Azure Active Directory<br/> Azure Active Directory (Azure AD)<br/> Azure AD<br/> AAD | Microsoft Entra ID<br/> (Second use: Microsoft Entra ID is preferred, ID is acceptable in product/UI experiences, ME-ID if abbreviation is necessary) |
+| | Azure Active Directory External Identities<br/> Azure AD External Identities | Microsoft Entra External ID<br/> (Second use: External ID) |
+| | Azure Active Directory Identity Governance<br/> Azure AD Identity Governance<br/> Microsoft Entra Identity Governance | Microsoft Entra ID Governance<br/> (Second use: ID Governance) |
+| | *New* | Microsoft Entra Internet Access<br/> (Second use: Internet Access) |
+| | Cloud Knox | Microsoft Entra Permissions Management<br/> (Second use: Permissions Management) |
+| | *New* | Microsoft Entra Private Access<br/> (Second use: Private Access) |
+| | Azure Active Directory Verifiable Credentials<br/> Azure AD Verifiable Credentials | Microsoft Entra Verified ID<br/> (Second use: Verified ID) |
+| | Azure Active Directory Workload Identities<br/> Azure AD Workload Identities | Microsoft Entra Workload ID<br/> (Second use: Workload ID) |
+| | Azure Active Directory Domain Services<br/> Azure AD Domain Services | Microsoft Entra Domain Services<br/> (Second use: Domain Services) |
+| **Microsoft Entra ID SKUs** | Azure Active Directory Premium P1 | Microsoft Entra ID P1 |
+| | Azure Active Directory Premium P1 for faculty | Microsoft Entra ID P1 for faculty |
+| | Azure Active Directory Premium P1 for students | Microsoft Entra ID P1 for students |
+| | Azure Active Directory Premium P1 for government | Microsoft Entra ID P1 for government |
+| | Azure Active Directory Premium P2 | Microsoft Entra ID P2 |
+| | Azure Active Directory Premium P2 for faculty | Microsoft Entra ID P2 for faculty |
+| | Azure Active Directory Premium P2 for students | Microsoft Entra ID P2 for students |
+| | Azure Active Directory Premium P2 for government | Microsoft Entra ID P2 for government |
+| | Azure Active Directory Premium F2 | Microsoft Entra ID F2 |
+| **Microsoft Entra ID service plans** | Azure Active Directory Free | Microsoft Entra ID Free |
+| | Azure Active Directory Premium P1 | Microsoft Entra ID P1 |
+| | Azure Active Directory Premium P2 | Microsoft Entra ID P2 |
+| | Azure Active Directory for education | Microsoft Entra ID for education |
| **Features and functionality** | Azure AD access token authentication<br/> Azure Active Directory access token authentication | Microsoft Entra access token authentication |
+| | Azure AD account<br/> Azure Active Directory account | Microsoft Entra account<br/><br/> This terminology is only used with IT admins and developers. End users authenticate with a work or school account. |
+| | Azure AD activity logs | Microsoft Entra activity logs |
+| | Azure AD admin<br/> Azure Active Directory admin | Microsoft Entra admin |
+| | Azure AD admin center<br/> Azure Active Directory admin center | Replace with Microsoft Entra admin center and update link to entra.microsoft.com |
+| | Azure AD application proxy<br/> Azure Active Directory application proxy | Microsoft Entra application proxy |
+| | Azure AD audit log | Microsoft Entra audit log |
+| | Azure AD authentication<br/> authenticate with an Azure AD identity<br/> authenticate with Azure AD<br/> authentication to Azure AD | Microsoft Entra authentication<br/> authenticate with a Microsoft Entra identity<br/> authenticate with Microsoft Entra<br/> authentication to Microsoft Entra<br/><br/> This terminology is only used with administrators. End users authenticate with a work or school account. |
+| | Azure AD B2B<br/> Azure Active Directory B2B | Microsoft Entra B2B |
+| | Azure AD built-in roles<br/> Azure Active Directory built-in roles | Microsoft Entra built-in roles |
+| | Azure AD Conditional Access<br/> Azure Active Directory Conditional Access | Microsoft Entra Conditional Access<br/> (Second use: Conditional Access) |
+| | Azure AD cloud-only identities<br/> Azure Active Directory cloud-only identities | Microsoft Entra cloud-only identities |
+| | Azure AD Connect<br/> Azure Active Directory Connect | Microsoft Entra Connect |
+| | Azure AD Connect Sync<br/> Azure Active Directory Connect Sync | Microsoft Entra Connect Sync |
+| | Azure AD domain<br/> Azure Active Directory domain | Microsoft Entra domain |
+| | Azure AD Domain Services<br/> Azure Active Directory Domain Services | Microsoft Entra Domain Services |
+| | Azure AD enterprise application<br/> Azure Active Directory enterprise application | Microsoft Entra enterprise application |
+| | Azure AD federation services<br/> Azure Active Directory federation services | Active Directory Federation Services |
+| | Azure AD groups<br/> Azure Active Directory groups | Microsoft Entra groups |
+| | Azure AD hybrid identities<br/> Azure Active Directory hybrid identities | Microsoft Entra hybrid identities |
+| | Azure AD identities<br/> Azure Active Directory identities | Microsoft Entra identities |
+| | Azure AD identity protection<br/> Azure Active Directory identity protection | Microsoft Entra ID Protection |
+| | Azure AD integrated authentication<br/> Azure Active Directory integrated authentication | Microsoft Entra integrated authentication |
+| | Azure AD join<br/> Azure AD joined<br/> Azure Active Directory join<br/> Azure Active Directory joined | Microsoft Entra join<br/> Microsoft Entra joined |
+| | Azure AD login<br/> Azure Active Directory login | Microsoft Entra login |
+| | Azure AD managed identities<br/> Azure Active Directory managed identities | Microsoft Entra managed identities |
+| | Azure AD multifactor authentication (MFA)<br/> Azure Active Directory multifactor authentication (MFA) | Microsoft Entra multifactor authentication (MFA)<br/> (Second use: MFA) |
+| | Azure AD OAuth and OpenID Connect<br/> Azure Active Directory OAuth and OpenID Connect | Microsoft Entra ID OAuth and OpenID Connect |
+| | Azure AD object<br/> Azure Active Directory object | Microsoft Entra object |
+| | Azure Active Directory-only authentication<br/> Azure AD-only authentication | Microsoft Entra-only authentication |
+| | Azure AD pass-through authentication (PTA)<br/> Azure Active Directory pass-through authentication (PTA) | Microsoft Entra pass-through authentication |
+| | Azure AD password authentication<br/> Azure Active Directory password authentication | Microsoft Entra password authentication |
+| | Azure AD password hash synchronization (PHS)<br/> Azure Active Directory password hash synchronization (PHS) | Microsoft Entra password hash synchronization |
+| | Azure AD password protection<br/> Azure Active Directory password protection | Microsoft Entra password protection |
+| | Azure AD principal ID<br/> Azure Active Directory principal ID | Microsoft Entra principal ID |
+| | Azure AD Privileged Identity Management (PIM)<br/> Azure Active Directory Privileged Identity Management (PIM) | Microsoft Entra Privileged Identity Management (PIM) |
+| | Azure AD registered<br/> Azure Active Directory registered | Microsoft Entra registered |
+| | Azure AD reporting and monitoring<br/> Azure Active Directory reporting and monitoring | Microsoft Entra reporting and monitoring |
+| | Azure AD role<br/> Azure Active Directory role | Microsoft Entra role |
+| | Azure AD schema<br/> Azure Active Directory schema | Microsoft Entra schema |
+| | Azure AD Seamless single sign-on (SSO)<br/> Azure Active Directory Seamless single sign-on (SSO) | Microsoft Entra seamless single sign-on (SSO)<br/> (Second use: SSO) |
+| | Azure AD self-service password reset (SSPR)<br/> Azure Active Directory self-service password reset (SSPR) | Microsoft Entra self-service password reset (SSPR) |
+| | Azure AD service principal<br/> Azure Active Directory service principal | Microsoft Entra service principal |
+| | Azure AD tenant<br/> Azure Active Directory tenant | Microsoft Entra tenant |
+| | Create a user in Azure AD<br/> Create a user in Azure Active Directory | Create a user in Microsoft Entra |
+| | Federated with Azure AD<br/> Federated with Azure Active Directory | Federated with Microsoft Entra |
+| | Hybrid Azure AD Join<br/> Hybrid Azure AD Joined | Microsoft Entra hybrid join<br/> Microsoft Entra hybrid joined |
+| | Managed identities in Azure AD for Azure SQL | Managed identities in Microsoft Entra for Azure SQL |
+| **Acronym usage** | AAD | ME-ID<br/><br/> Note that this isn't an official abbreviation for the product but may be used in code or when absolute shortest form is required. |
## Frequently asked questions

### When is the name change happening?
-The name change will start appearing across Microsoft experiences after a 30-day notification period, which started July 11, 2023. Display names for SKUs and service plans will change on October 1, 2023. We expect most naming text string changes in Microsoft experiences to be completed by the end of 2023.
+The name change will appear across Microsoft experiences starting August 15, 2023. Display names for SKUs and service plans will change on October 1, 2023. We expect most naming text string changes in Microsoft experiences and partner experiences to be completed by the end of 2023.
### Why is the name being changed?
### Is Azure AD going away?

No, only the name Azure AD is going away. Capabilities remain the same.
### What will happen to the Azure AD capabilities and features like App Gallery or Conditional Access?
+All features and capabilities remain unchanged aside from the name. Customers can continue to use all features without any interruption.
+
The naming of features changes to Microsoft Entra. For example:

- Azure AD tenant -> Microsoft Entra tenant
- Azure AD account -> Microsoft Entra account
-- Azure AD joined -> Microsoft Entra joined
-- Azure AD Conditional Access -> Microsoft Entra Conditional Access
-All features and capabilities remain unchanged aside from the name. Customers can continue to use all features without any interruption.
+See the [Glossary of updated terminology](#glossary-of-updated-terminology) for more examples.
### Are licenses changing? Are there any changes to pricing?
There are no changes to the identity features and functionality available in Mic
In addition to the capabilities they already have, Microsoft 365 E5 customers will also get access to new identity protection capabilities like token protection, Conditional Access based on GPS-based location and step-up authentication for the most sensitive actions. Microsoft 365 E5 includes Microsoft Entra P2, currently known as Azure AD Premium P2.
-### How and when are customers being notified?
-
-The name changes are publicly announced as of July 11, 2023.
+### What's changing for identity developer and DevOps experience?
-Banners, alerts, and message center posts will notify users of the name change. These will be displayed on the tenant overview page, portals including Azure, Microsoft 365, and Microsoft Entra admin center, and Microsoft Learn.
-
-### What if I use the Azure AD name in my content or app?
+Identity developer and DevOps experiences aren't being renamed. To make the transition seamless, all existing login URLs, APIs, PowerShell cmdlets, and Microsoft Authentication Libraries (MSAL) stay the same, as do developer experiences and tooling.
-We'd like your help spreading the word about the name change and implementing it in your own experiences. If you're a content creator, author of internal documentation for IT or identity security admins, developer of Azure AD-enabled apps, independent software vendor, or Microsoft partner, we hope you use the naming guidance outlined in the following section ([Azure AD name changes and exceptions](#azure-ad-name-changes-and-exceptions)) to make the name change in your content and product experiences by the end of 2023.
-
-## Azure AD name changes and exceptions
-
-We encourage content creators, organizations with internal documentation for IT or identity security admins, developers of Azure AD-enabled apps, independent software vendors, or partners of Microsoft to stay current with the new naming guidance by updating copy by the end of 2023. We recommend changing the name in customer-facing experiences, prioritizing highly visible surfaces.
-
-### Product name
+Many technical components either have low visibility to customers (for example, sign-in URLs) or usually aren't branded, like APIs.
-Replace the product name "Azure Active Directory" or "Azure AD" or "AAD" with Microsoft Entra ID.
-
-*Microsoft Entra* is the correct name for the family of identity and network access solutions, one of which is *Microsoft Entra ID.*
-
-### Logo/icon
+Microsoft identity platform encompasses all our identity and access developer assets. It will continue to provide the resources to help you build applications that your users and customers can sign in to using their Microsoft identities or social accounts.
-Azure AD is becoming Microsoft Entra ID, and the product icon is also being updated. Work with your Microsoft partner organization to obtain the new product icon.
+Naming is also not changing for:
-### Feature names
+- [Microsoft Authentication Library (MSAL)](/azure/active-directory/develop/msal-overview) – Acquire security tokens from the Microsoft identity platform to authenticate users and access secured web APIs to provide secure access to Microsoft Graph, other Microsoft APIs, third-party web APIs, or your own web API (see the sketch after this list).
+- [Microsoft Graph](/graph) – Get programmatic access to organizational, user, and application data stored in Microsoft Entra ID.
+- [Microsoft Graph PowerShell](/powershell/microsoftgraph/overview) – Acts as an API wrapper for the Microsoft Graph APIs; helps administer every Microsoft Entra ID feature that has an API in Microsoft Graph.
+- [Windows Server Active Directory](/troubleshoot/windows-server/identity/active-directory-overview), commonly known as "Active Directory", and all related Windows Server identity services associated with Active Directory.
+- [Active Directory Federation Services (AD FS)](/windows-server/identity/active-directory-federation-services), [Active Directory Domain Services (AD DS)](/windows-server/identity/ad-ds/active-directory-domain-services), the product name "Active Directory", and any corresponding features.
+- [Azure Active Directory B2C](/azure/active-directory-b2c), which will continue to be available as an Azure service.
+- Any deprecated or retired functionality, feature, or service of Azure Active Directory.
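As a concrete illustration that the developer tooling is unchanged by the rename, here is a minimal MSAL for Python sketch of acquiring an app-only Microsoft Graph token; the tenant ID, client ID, and secret are placeholders, not values from this article.

```python
# Minimal MSAL for Python sketch; the tenant ID, client ID, and secret below
# are placeholder values for illustration only.
import msal

app = msal.ConfidentialClientApplication(
    client_id="<app-client-id>",
    client_credential="<client-secret>",
    authority="https://login.microsoftonline.com/<tenant-id>",  # login URL unchanged by the rename
)

# Client credentials flow: request an app-only token for Microsoft Graph.
result = app.acquire_token_for_client(scopes=["https://graph.microsoft.com/.default"])

if "access_token" in result:
    print("Token acquired; expires in", result.get("expires_in"), "seconds")
else:
    print("Error:", result.get("error"), result.get("error_description"))
```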
-Capabilities or services formerly known as "Azure Active Directory &lt;feature name&gt;" or "Azure AD &lt;feature name&gt;" will be branded as Microsoft Entra product family features. For example:
+### How and when are customers being notified?
-- "Azure AD Conditional Access" is becoming "Microsoft Entra Conditional Access"-- "Azure AD single sign-on" is becoming "Microsoft Entra single sign-on"-- "Azure AD tenant" is becoming "Microsoft Entra tenant"
+The name changes were publicly announced on July 11, 2023.
-### Exceptions to Azure AD name change
+Banners, alerts, and message center posts notified users of the name change. The change was also displayed on the tenant overview page in the Azure, Microsoft 365, and Microsoft Entra admin center portals, and on Microsoft Learn.
-Products or features that are being deprecated aren't being renamed. These products or features include:
+### What if I use the Azure AD name in my content or app?
-- Azure AD Authentication Library (ADAL), replaced by [Microsoft Authentication Library (MSAL)](../develop/msal-overview.md)
-- Azure AD Graph, replaced by [Microsoft Graph](/graph)
-- Azure Active Directory PowerShell for Graph (Azure AD PowerShell), replaced by [Microsoft Graph PowerShell](/powershell/microsoftgraph)
+We'd like your help spreading the word about the name change and implementing it in your own experiences. If you're a content creator, author of internal documentation for IT or identity security admins, developer of Azure AD-enabled apps, independent software vendor, or Microsoft partner, we hope you use the naming guidance outlined in the [Glossary of updated terminology](#glossary-of-updated-terminology) to make the name change in your content and product experiences by the end of 2023.
-Names that don't have "Azure AD" also aren't changing. These products or features include Active Directory Federation Services (AD FS), Microsoft identity platform, and Windows Server Active Directory Domain Services (AD DS).
+## Revision history
-End users shouldn't be exposed to the Azure AD or Microsoft Entra ID name. For sign-ins and account user experiences, follow guidance for work and school accounts in [Sign in with Microsoft branding guidelines](../develop/howto-add-branding-in-apps.md).
+| Date | Change description |
+||--|
+| August 29, 2023 | <br/>&#8226; In the [glossary](#glossary-of-updated-terminology), corrected the entry for "Azure AD activity logs" to separate "Azure AD audit log", which is a distinct type of activity log. <br/>&#8226; Added Azure AD Sync and DirSync to the [Azure AD renaming exceptions and clarifications](#azure-ad-renaming-exceptions-and-clarifications) section. |
+| August 18, 2023 | <br/>&#8226; Updated the article to include a new section [Glossary of updated terminology](#glossary-of-updated-terminology), which includes the old and new terminology.<br/>&#8226; Updated info and added link to usage of the Microsoft Entra ID icon, and updates to verbiage in some sections. |
+| July 11, 2023 | Published the original guidance as part of the [Microsoft Entra moment and related announcement](https://www.microsoft.com/security/blog/2023/07/11/microsoft-entra-expands-into-security-service-edge-and-azure-ad-becomes-microsoft-entra-id/?culture=en-us&country=us). |
## Next steps
active-directory Scenario Azure First Sap Identity Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/scenario-azure-first-sap-identity-integration.md
This document provides advice on the **technical design and configuration** of S
| [IDS](https://help.sap.com/viewer/65de2977205c403bbc107264b8eccf4b/Cloud/en-US/d6a8db70bdde459f92f2837349f95090.html) | SAP ID Service. An instance of IAS used by SAP to authenticate customers and partners to SAP-operated PaaS and SaaS services. | | [IPS](https://help.sap.com/viewer/f48e822d6d484fa5ade7dda78b64d9f5/Cloud/en-US/2d2685d469a54a56b886105a06ccdae6.html) | SAP Cloud Identity Services - Identity Provisioning Service. IPS helps to synchronize identities between different stores / target systems. | | [XSUAA](https://blogs.sap.com/2019/01/07/uaa-xsuaa-platform-uaa-cfuaa-what-is-it-all-about/) | Extended Services for Cloud Foundry User Account and Authentication. XSUAA is a multi-tenant OAuth authorization server within the SAP BTP. |
-| [CF](https://www.cloudfoundry.org/) | Cloud Foundry. Cloud Foundry is the environment on which SAP built their multi-cloud offering for BTP (AWS, Azure, GCP, Alibaba). |
+| [CF](https://www.cloudfoundry.org/) | Cloud Foundry. Cloud Foundry is the environment on which SAP built their multicloud offering for BTP (AWS, Azure, GCP, Alibaba). |
| [Fiori](https://www.sap.com/products/fiori.html) | The web-based user experience of SAP (as opposed to the desktop-based experience). |

## Overview
Regardless of where the authorization information comes from, it can then be emi
## Next Steps

- Learn more about the initial setup in [this tutorial](../saas-apps/sap-hana-cloud-platform-identity-authentication-tutorial.md)
-- Discover additional [SAP integration scenarios with Azure AD](../../sap/workloads/integration-get-started.md#azure-ad) and beyond
+- Discover additional [SAP integration scenarios with Azure AD](../../sap/workloads/integration-get-started.md#microsoft-entra-id-formerly-azure-ad) and beyond
active-directory Security Defaults https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/security-defaults.md
description: Get protected from common identity threats using Azure AD security
+ Previously updated : 07/31/2023 Last updated : 08/29/2023
To configure security defaults in your directory, you must be assigned at least
To enable security defaults (a programmatic alternative is sketched after these steps):

1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com/).
-1. Browse to **Microsoft Entra ID (Azure AD)** > **Properties**.
+1. Browse to **Identity** > **Overview** > **Properties**.
1. Select **Manage security defaults**.
1. Set **Security defaults** to **Enabled**.
1. Select **Save**.
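For automation, the same switch is exposed through the Microsoft Graph `identitySecurityDefaultsEnforcementPolicy` resource. A hedged sketch follows, assuming a token that carries the Policy.ReadWrite.ConditionalAccess permission.

```python
# Sketch: toggle security defaults via Microsoft Graph rather than the portal.
# Assumes "token" holds a valid access token with Policy.ReadWrite.ConditionalAccess.
import requests

token = "<access-token>"
url = "https://graph.microsoft.com/v1.0/policies/identitySecurityDefaultsEnforcementPolicy"

resp = requests.patch(
    url,
    headers={"Authorization": f"Bearer {token}", "Content-Type": "application/json"},
    json={"isEnabled": True},  # use False to mirror the disable steps later in this article
)
resp.raise_for_status()
```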
After security defaults are enabled in your tenant, all authentication requests
Organizations use various Azure services managed through the Azure Resource Manager API, including:

- Azure portal
-- Microsoft Entra Admin Center
+- Microsoft Entra admin center
- Azure PowerShell
- Azure CLI
It's important to verify the identity of users who want to access Azure Resource
After you enable security defaults in your tenant, any user accessing the following services must complete multifactor authentication:

- Azure portal
+- Microsoft Entra admin center
- Azure PowerShell
- Azure CLI
Organizations that choose to implement Conditional Access policies that replace
To disable security defaults in your directory:

1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com/).
-1. Browse to **Microsoft Entra ID (Azure AD)** > **Properties**.
+1. Browse to **Identity** > **Overview** > **Properties**.
1. Select **Manage security defaults**.
1. Set **Security defaults** to **Disabled (not recommended)**.
1. Select **Save**.
active-directory Users Default Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/users-default-permissions.md
Users and contacts | <ul><li>Enumerate the list of all users and contacts<li>Rea
Groups | <ul><li>Create security groups<li>Create Microsoft 365 groups<li>Enumerate the list of all groups<li>Read all properties of groups<li>Read non-hidden group memberships<li>Read hidden Microsoft 365 group memberships for joined groups<li>Manage properties, ownership, and membership of groups that the user owns<li>Add guests to owned groups<li>Manage dynamic membership settings<li>Delete owned groups<li>Restore owned Microsoft 365 groups</li></ul> | <ul><li>Read properties of non-hidden groups, including membership and ownership (even non-joined groups)<li>Read hidden Microsoft 365 group memberships for joined groups<li>Search for groups by display name or object ID (if allowed)</li></ul> | <ul><li>Read object ID for joined groups<li>Read membership and ownership of joined groups in some Microsoft 365 apps (if allowed)</li></ul>
Applications | <ul><li>Register (create) new applications<li>Enumerate the list of all applications<li>Read properties of registered and enterprise applications<li>Manage application properties, assignments, and credentials for owned applications<li>Create or delete application passwords for users<li>Delete owned applications<li>Restore owned applications<li>List permissions granted to applications</ul> | <ul><li>Read properties of registered and enterprise applications<li>List permissions granted to applications</ul> | <ul><li>Read properties of registered and enterprise applications</li><li>List permissions granted to applications</li></ul>
Devices | <ul><li>Enumerate the list of all devices<li>Read all properties of devices<li>Manage all properties of owned devices</li></ul> | No permissions | No permissions
-Organization | <ul><li>Read all company information<li>Read all domains<li>Read configuration of certificate-based authentication<li>Read all partner contracts</li></ul> | <ul><li>Read company display name<li>Read all domains<li>Read configuration of certificate-based authentication</li></ul> | <ul><li>Read company display name<li>Read all domains</li></ul>
+Organization | <ul><li>Read all company information<li>Read all domains<li>Read configuration of certificate-based authentication<li>Read all partner contracts</li><li>Read multi-tenant organization basic details and active tenants</li></ul> | <ul><li>Read company display name<li>Read all domains<li>Read configuration of certificate-based authentication</li></ul> | <ul><li>Read company display name<li>Read all domains</li></ul>
Roles and scopes | <ul><li>Read all administrative roles and memberships<li>Read all properties and membership of administrative units</li></ul> | No permissions | No permissions
Subscriptions | <ul><li>Read all licensing subscriptions<li>Enable service plan memberships</li></ul> | No permissions | No permissions
Policies | <ul><li>Read all properties of policies<li>Manage all properties of owned policies</li></ul> | No permissions | No permissions
active-directory What Is Deprecated https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/what-is-deprecated.md
Last updated 01/27/2023 --+

# What's deprecated in Azure Active Directory?
active-directory Whats New Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/whats-new-archive.md
Last updated 7/18/2023 -+
The What's new in Azure Active Directory? release notes provide information abou
+## February 2023
+
+### General Availability - Expanding Privileged Identity Management Role Activation across the Azure portal
+
+**Type:** New feature
+**Service category:** Privileged Identity Management
+**Product capability:** Privileged Identity Management
+
+Privileged Identity Management (PIM) role activation has been expanded to the Billing and AD extensions in the Azure portal. Shortcuts have been added to Subscriptions (billing) and Access Control (AD) to allow users to activate PIM roles directly from these settings. From the Subscriptions settings, select **View eligible subscriptions** in the horizontal command menu to check your eligible, active, and expired assignments. From there, you can activate an eligible assignment in the same pane. In Access control (IAM) for a resource, you can now select **View my access** to see your currently active and eligible role assignments and activate directly. By integrating PIM capabilities into different Azure portal blades, this new feature allows users to gain temporary access to view or edit subscriptions and resources more easily.
++
+For more information, see: [Activate my Azure resource roles in Privileged Identity Management](../privileged-identity-management/pim-resource-roles-activate-your-roles.md).
+++
+### General Availability - Follow Azure AD best practices with recommendations
+
+**Type:** New feature
+**Service category:** Reporting
+**Product capability:** Monitoring & Reporting
+
+Azure AD recommendations help you improve your tenant posture by surfacing opportunities to implement best practices. On a daily basis, Azure AD analyzes the configuration of your tenant. During this analysis, Azure AD compares the data of a recommendation with the actual configuration of your tenant. If a recommendation is flagged as applicable to your tenant, the recommendation appears in the Recommendations section of the Azure AD Overview.
+
+This release includes our first 3 recommendations:
+
+- Convert from per-user MFA to Conditional Access MFA
+- Migrate applications from AD FS to Azure AD
+- Minimize MFA prompts from known devices
++
+For more information, see:
+
+- [What are Azure Active Directory recommendations?](../reports-monitoring/overview-recommendations.md)
+- [Use the Azure AD recommendations API to implement Azure AD best practices for your tenant](/graph/api/resources/recommendations-api-overview) (a sketch of calling this API follows)
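As a hedged sketch of the recommendations API linked above: the endpoint below is the beta `directory/recommendations` collection, and the permission name shown is an assumption to verify against the Graph permissions reference.

```python
# Sketch: list the tenant's recommendations from the Microsoft Graph beta API.
# Assumes a token with a directory-recommendations read permission
# (for example, DirectoryRecommendations.Read.All - an assumption to verify).
import requests

token = "<access-token>"
resp = requests.get(
    "https://graph.microsoft.com/beta/directory/recommendations",
    headers={"Authorization": f"Bearer {token}"},
)
resp.raise_for_status()

for rec in resp.json().get("value", []):
    # Each recommendation exposes a display name and a status such as "active".
    print(rec.get("displayName"), "-", rec.get("status"))
```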
+++
+### Public Preview - Azure AD PIM + Conditional Access integration
+
+**Type:** New feature
+**Service category:** Privileged Identity Management
+**Product capability:** Privileged Identity Management
+
+Now you can require users who are eligible for a role to satisfy Conditional Access policy requirements for activation: use a specific authentication method enforced through Authentication Strengths, activate from an Intune-compliant device, comply with Terms of Use, use third-party MFA, and satisfy location requirements.
+
+For more information, see: [Configure Azure AD role settings in Privileged Identity Management](../privileged-identity-management/pim-how-to-change-default-settings.md).
++++
+### General Availability - More information on why a sign-in was flagged as "unfamiliar"
+
+**Type:** Changed feature
+**Service category:** Identity Protection
+**Product capability:** Identity Security & Protection
+
+The unfamiliar sign-in properties risk detection now provides risk reasons indicating which properties are unfamiliar, so customers can better investigate that risk.
+
+Identity Protection now surfaces the unfamiliar properties in the Azure portal on UX and in API as *Additional Info* with a user-friendly description explaining that *the following properties are unfamiliar for this sign-in of the given user*.
+
+There's no additional work to enable this feature; the unfamiliar properties are shown by default. For more information, see: [Sign-in risk](../identity-protection/concept-identity-protection-risks.md).
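The new detail is also visible programmatically through the risk detections API; a minimal sketch follows, assuming a token with the IdentityRiskEvent.Read.All permission.

```python
# Sketch: fetch "unfamiliar sign-in properties" detections and print the
# additionalInfo field that now carries the unfamiliar-property reasons.
import requests

token = "<access-token>"  # assumed: IdentityRiskEvent.Read.All permission
url = (
    "https://graph.microsoft.com/v1.0/identityProtection/riskDetections"
    "?$filter=riskEventType eq 'unfamiliarFeatures'"
)

resp = requests.get(url, headers={"Authorization": f"Bearer {token}"})
resp.raise_for_status()

for detection in resp.json().get("value", []):
    # additionalInfo is a JSON-formatted string describing the unfamiliar properties.
    print(detection.get("detectedDateTime"), detection.get("additionalInfo"))
```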
++++
+### General Availability - New Federated Apps available in Azure AD Application gallery - February 2023
+++
+**Type:** New feature
+**Service category:** Enterprise Apps
+**Product capability:** 3rd Party Integration
+
+In February 2023 we've added the following 10 new applications in our App gallery with Federation support:
+
+[PROCAS](https://accounting.procas.com/), [Tanium Cloud SSO](../saas-apps/tanium-sso-tutorial.md), [LeanDNA](../saas-apps/leandna-tutorial.md), [CalendarAnything LWC](https://silverlinecrm.com/calendaranything/), [courses.work](../saas-apps/courseswork-tutorial.md), [Udemy Business SAML](../saas-apps/udemy-business-saml-tutorial.md), [Canva](../saas-apps/canva-tutorial.md), [Kno2fy](../saas-apps/kno2fy-tutorial.md), [IT-Conductor](../saas-apps/it-conductor-tutorial.md), [ナレッジワーク(Knowledge Work)](../saas-apps/knowledge-work-tutorial.md), [Valotalive Digital Signage Microsoft 365 integration](https://store.valotalive.com/#main), [Priority Matrix HIPAA](https://hipaa.prioritymatrix.com/), [Priority Matrix Government](https://hipaa.prioritymatrix.com/), [Beable](../saas-apps/beable-tutorial.md), [Grain](https://grain.com/app?dialog=integrations&integration=microsoft+teams), [DojoNavi](../saas-apps/dojonavi-tutorial.md), [Global Validity Access Manager](https://myaccessmanager.com/), [FieldEquip](https://app.fieldequip.com/), [Peoplevine](https://control.peoplevine.com/), [Respondent](../saas-apps/respondent-tutorial.md), [WebTMA](../saas-apps/webtma-tutorial.md), [ClearIP](https://clearip.com/login), [Pennylane](../saas-apps/pennylane-tutorial.md), [VsimpleSSO](https://app.vsimple.com/login), [Compliance Genie](../saas-apps/compliance-genie-tutorial.md), [Dataminr Corporate](https://dmcorp.okta.com/), [Talon](../saas-apps/talon-tutorial.md).
++
+You can also find the documentation of all the applications at https://aka.ms/AppsTutorial.
+
+For listing your application in the Azure AD app gallery, read the details at https://aka.ms/AzureADAppRequest.
+++
+### Public Preview - New provisioning connectors in the Azure AD Application Gallery - February 2023
+
+**Type:** New feature
+**Service category:** App Provisioning
+**Product capability:** 3rd Party Integration
+
+
+We've added the following new applications in our App gallery with Provisioning support. You can now automate creating, updating, and deleting of user accounts for these newly integrated apps:
+
+- [Atmos](../saas-apps/atmos-provisioning-tutorial.md)
++
+For more information about how to better secure your organization by using automated user account provisioning, see: [Automate user provisioning to SaaS applications with Azure AD](../app-provisioning/user-provisioning.md).
+++++

## January 2023

### Public Preview - Cross-tenant synchronization
For more information on how to enable this feature, see: [Cloud Sync directory e
**Service category:** Audit
**Product capability:** Monitoring & Reporting
-This feature analyzes uploaded client-side logs, also known as diagnostic logs, from a Windows 10+ device that is having an issue(s) and suggests remediation steps to resolve the issue(s). Admins can work with end user to collect client-side logs, and then upload them to this troubleshooter in the Entra Portal. For more information, see: [Troubleshooting Windows devices in Azure AD](../devices/troubleshoot-device-windows-joined.md).
+This feature analyzes uploaded client-side logs, also known as diagnostic logs, from a Windows 10+ device that is having an issue(s) and suggests remediation steps to resolve the issue(s). Admins can work with end user to collect client-side logs, and then upload them to this troubleshooter in the Microsoft Entra admin center. For more information, see: [Troubleshooting Windows devices in Azure AD](../devices/troubleshoot-device-windows-joined.md).
The ability for users to create tenants from the Manage Tenant overview has been
**Service category:** My Apps **Product capability:** End User Experiences
-We have consolidated relevant app launcher settings in a new App launchers section in the Azure and Entra portals. The entry point can be found under Enterprise applications, where Collections used to be. You can find the Collections option by selecting App launchers. In addition, we've added a new App launchers Settings option. This option has some settings you may already be familiar with like the Microsoft 365 settings. The new Settings options also have controls for previews. As an admin, you can choose to try out new app launcher features while they are in preview. Enabling a preview feature means that the feature turns on for your organization. This enabled feature reflects in the My Apps portal, and other app launchers for all of your users. To learn more about the preview settings, see: [End-user experiences for applications](../manage-apps/end-user-experiences.md).
+We have consolidated relevant app launcher settings in a new App launchers section in the Azure and Microsoft Entra admin centers. The entry point can be found under Enterprise applications, where Collections used to be. You can find the Collections option by selecting App launchers. In addition, we've added a new App launchers Settings option. This option has some settings you may already be familiar with like the Microsoft 365 settings. The new Settings options also have controls for previews. As an admin, you can choose to try out new app launcher features while they are in preview. Enabling a preview feature means that the feature turns on for your organization. This enabled feature reflects in the My Apps portal, and other app launchers for all of your users. To learn more about the preview settings, see: [End-user experiences for applications](../manage-apps/end-user-experiences.md).
Customers can now meet their complex audit and recertification requirements thro
Currently, users can use self-service to leave an organization without the visibility of their IT administrators. Some organizations may want more control over this self-service process.
-With this feature, IT administrators can now allow or restrict external identities to leave an organization by Microsoft provided self-service controls via Azure Active Directory in the Microsoft Entra portal. In order to restrict users to leave an organization, customers need to include "Global privacy contact" and "Privacy statement URL" under tenant properties.
+With this feature, IT administrators can now allow or restrict external identities to leave an organization through Microsoft-provided self-service controls via Azure Active Directory in the Microsoft Entra admin center. In order to restrict users from leaving an organization, customers need to include "Global privacy contact" and "Privacy statement URL" under tenant properties.
A new policy API is available for administrators to control the tenant-wide policy: [externalIdentitiesPolicy resource type](/graph/api/resources/externalidentitiespolicy?view=graph-rest-beta&preserve-view=true)
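A minimal sketch of reading that policy follows, assuming a token with the Policy.Read.All permission; the property name printed is taken from the beta resource type and should be verified there.

```python
# Sketch: read the tenant-wide external identities policy (Graph beta).
import requests

token = "<access-token>"  # assumed: Policy.Read.All permission
resp = requests.get(
    "https://graph.microsoft.com/beta/policies/externalIdentitiesPolicy",
    headers={"Authorization": f"Bearer {token}"},
)
resp.raise_for_status()

policy = resp.json()
# allowExternalIdentitiesToLeave is the property name on the beta resource type.
print("Self-service leave allowed:", policy.get("allowExternalIdentitiesToLeave"))
```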
Identity Protection risk detections (alerts) are now also available in Microsoft
In August 2022, we've added the following 40 new applications in our App gallery with Federation support
-[Albourne Castle](https://village.albourne.com/castle), [Adra by Trintech](../saas-apps/adra-by-trintech-tutorial.md), [workhub](../saas-apps/workhub-tutorial.md), [4DX](../saas-apps/4dx-tutorial.md), [Ecospend IAM V1](https://iamapi.sb.ecospend.com/account/login), [TigerGraph](../saas-apps/tigergraph-tutorial.md), [Sketch](../saas-apps/sketch-tutorial.md), [Lattice](../saas-apps/lattice-tutorial.md), [snapADDY Single Sign On](https://app.snapaddy.com/login), [RELAYTO Content Experience Platform](https://relayto.com/signin), [oVice](https://tour.ovice.in/login), [Arena](../saas-apps/arena-tutorial.md), [QReserve](../saas-apps/qreserve-tutorial.md), [Curator](../saas-apps/curator-tutorial.md), [NetMotion Mobility](../saas-apps/netmotion-mobility-tutorial.md), [HackNotice](../saas-apps/hacknotice-tutorial.md), [ERA_EHS_CORE](../saas-apps/era-ehs-core-tutorial.md), [AnyClip Teams Connector](https://videomanager.anyclip.com/login), [Wiz SSO](../saas-apps/wiz-sso-tutorial.md), [Tango Reserve by AgilQuest (EU Instance)](../saas-apps/tango-reserve-tutorial.md), [valid8Me](../saas-apps/valid8me-tutorial.md), [Ahrtemis](../saas-apps/ahrtemis-tutorial.md), [KPMG Leasing Tool](../saas-apps/kpmg-tool-tutorial.md) [Mist Cloud Admin SSO](../saas-apps/mist-cloud-admin-tutorial.md), [Work-Happy](https://live.work-happy.com/?azure=true), [Ediwin SaaS EDI](../saas-apps/ediwin-saas-edi-tutorial.md), [LUSID](../saas-apps/lusid-tutorial.md), [Next Gen Math](https://nextgenmath.com/), [Total ID](https://www.tokyo-shoseki.co.jp/ict/), [Cheetah For Benelux](../saas-apps/cheetah-for-benelux-tutorial.md), [Live Center Australia](https://au.livecenter.com/), [Shop Floor Insight](https://www.dmsiworks.com/apps/shop-floor-insight), [Warehouse Insight](https://www.dmsiworks.com/apps/warehouse-insight), [myAOS](../saas-apps/myaos-tutorial.md), [Hero](https://admin.linc-ed.com/), [FigBytes](../saas-apps/figbytes-tutorial.md), [VerosoftDesign](https://verosoft-design.vercel.app/), [ViewpointOne - UK](https://identity-uk.team.viewpoint.com/), [EyeRate Reviews](https://azure-login.eyeratereviews.com/), [Lytx DriveCam](../saas-apps/lytx-drivecam-tutorial.md)
+[Albourne Castle](https://village.albourne.com/castle), [Adra by Trintech](../saas-apps/adra-by-trintech-tutorial.md), [workhub](../saas-apps/workhub-tutorial.md), [4DX](../saas-apps/4dx-tutorial.md), [Ecospend IAM V1](https://iamapi.sb.ecospend.com/account/login), [TigerGraph](../saas-apps/tigergraph-tutorial.md), [Sketch](../saas-apps/sketch-tutorial.md), [Lattice](../saas-apps/lattice-tutorial.md), [snapADDY Single Sign On](https://app.snapaddy.com/login), [RELAYTO Content Experience Platform](https://relayto.com/signin), [oVice](https://tour.ovice.in/login), [Arena](../saas-apps/arena-tutorial.md), [QReserve](../saas-apps/qreserve-tutorial.md), [Curator](../saas-apps/curator-tutorial.md), [NetMotion Mobility](../saas-apps/netmotion-mobility-tutorial.md), [HackNotice](../saas-apps/hacknotice-tutorial.md), [ERA_EHS_CORE](../saas-apps/era-ehs-core-tutorial.md), [AnyClip Teams Connector](https://videomanager.anyclip.com/login), [Wiz SSO](../saas-apps/wiz-sso-tutorial.md), [Tango Reserve by AgilQuest (EU Instance)](../saas-apps/tango-reserve-tutorial.md), [valid8Me](../saas-apps/valid8me-tutorial.md), [Ahrtemis](../saas-apps/ahrtemis-tutorial.md), [KPMG Leasing Tool](../saas-apps/kpmg-tool-tutorial.md) [Mist Cloud Admin SSO](../saas-apps/mist-cloud-admin-tutorial.md), [Ediwin SaaS EDI](../saas-apps/ediwin-saas-edi-tutorial.md), [LUSID](../saas-apps/lusid-tutorial.md), [Next Gen Math](https://nextgenmath.com/), [Total ID](https://www.tokyo-shoseki.co.jp/ict/), [Cheetah For Benelux](../saas-apps/cheetah-for-benelux-tutorial.md), [Live Center Australia](https://au.livecenter.com/), [Shop Floor Insight](https://www.dmsiworks.com/apps/shop-floor-insight), [Warehouse Insight](https://www.dmsiworks.com/apps/warehouse-insight), [myAOS](../saas-apps/myaos-tutorial.md), [Hero](https://admin.linc-ed.com/), [FigBytes](../saas-apps/figbytes-tutorial.md), [VerosoftDesign](https://verosoft-design.vercel.app/), [ViewpointOne - UK](https://identity-uk.team.viewpoint.com/), [EyeRate Reviews](https://azure-login.eyeratereviews.com/), [Lytx DriveCam](../saas-apps/lytx-drivecam-tutorial.md)
You can also find the documentation of all the applications at https://aka.ms/AppsTutorial.
For listing your application in the Azure AD app gallery, please read the detail
-## February 2022
-
--
-
-
-### General Availability - France digital accessibility requirement
-
-**Type:** Plan for change
-**Service category:** Other
-**Product capability:** End User Experiences
-
-
-This change provides users who are signing into Azure Active Directory on iOS, Android, and Web UI flavors information about the accessibility of Microsoft's online services via a link on the sign-in page. This ensures that the France digital accessibility compliance requirements are met. The change will only be available for French language experiences.[Learn more](https://www.microsoft.com/fr-fr/accessibility/accessibilite/accessibility-statement)
-
--
-
-
-### General Availability - Downloadable access review history report
-
-**Type:** New feature
-**Service category:** Access Reviews
-**Product capability:** Identity Governance
-
-
-With Azure Active Directory (Azure AD) Access Reviews, you can create a downloadable review history to help your organization gain more insight. The report pulls the decisions that were taken by reviewers when a report is created. These reports can be constructed to include specific access reviews, for a specific time frame, and can be filtered to include different review types and review results.[Learn more](../governance/access-reviews-downloadable-review-history.md)
-
----
-
-
-### Public Preview of Identity Protection for Workload Identities
-
-**Type:** New feature
-**Service category:** Identity Protection
-**Product capability:** Identity Security & Protection
-
-
-Azure AD Identity Protection is extending its core capabilities of detecting, investigating, and remediating identity-based risk to workload identities. This allows organizations to better protect their applications, service principals, and managed identities. We're also extending Conditional Access so you can block at-risk workload identities. [Learn more](../identity-protection/concept-workload-identity-risk.md)
-
--
-
-
-### Public Preview - Cross-tenant access settings for B2B collaboration
-
-**Type:** New feature
-**Service category:** B2B
-**Product capability:** Collaboration
-
-
-
-Cross-tenant access settings enable you to control how users in your organization collaborate with members of external Azure AD organizations. Now you have granular inbound and outbound access control settings that work on a per org, user, group, and application basis. These settings also make it possible for you to trust security claims from external Azure AD organizations like multi-factor authentication (MFA), device compliance, and hybrid Azure AD joined devices. [Learn more](../external-identities/cross-tenant-access-overview.md)
-
--
-
-
-### Public preview - Create Azure AD access reviews with multiple stages of reviewers
-
-**Type:** New feature
-**Service category:** Access Reviews
-**Product capability:** Identity Governance
-
-
-Use multi-stage reviews to create Azure AD access reviews in sequential stages, each with its own set of reviewers and configurations. Supports multiple stages of reviewers to satisfy scenarios such as: independent groups of reviewers reaching quorum, escalations to other reviewers, and reducing burden by allowing for later stage reviewers to see a filtered-down list. For public preview, multi-stage reviews are only supported on reviews of groups and applications. [Learn more](../governance/create-access-review.md)
-
--
-
-
-### New Federated Apps available in Azure AD Application gallery - February 2022
-
-**Type:** New feature
-**Service category:** Enterprise Apps
-**Product capability:** Third Party Integration
-
-
-In February 2022 we added the following 20 new applications in our App gallery with Federation support:
-
-[Embark](../saas-apps/embark-tutorial.md), [FENCE-Mobile RemoteManager SSO](../saas-apps/fence-mobile-remotemanager-sso-tutorial.md), [カオナビ](../saas-apps/kao-navi-tutorial.md), [Adobe Identity Management (OIDC)](../saas-apps/adobe-identity-management-tutorial.md), [AppRemo](../saas-apps/appremo-tutorial.md), [Live Center](https://livecenter.norkon.net/Login), [Offishall](https://app.offishall.io/), [MoveWORK Flow](https://www.movework-flow.fm/login), [Cirros SL](https://www.cirros.net/), [ePMX Procurement Software](https://azure.epmxweb.com/admin/index.php?), [Vanta O365](https://app.vanta.com/connections), [Hubble](../saas-apps/hubble-tutorial.md), [Medigold Gateway](https://gateway.medigoldcore.com), [クラウドログ](../saas-apps/crowd-log-tutorial.md),[Amazing People Schools](../saas-apps/amazing-people-schools-tutorial.md), [XplicitTrust Network Access](https://console.xplicittrust.com/#/dashboard), [Spike Email - Mail & Team Chat](https://spikenow.com/web/), [AltheaSuite](https://planmanager.altheasuite.com/), [Balsamiq Wireframes](../saas-apps/balsamiq-wireframes-tutorial.md).
-
-You can also find the documentation of all the applications from here: [https://aka.ms/AppsTutorial](../saas-apps/tutorial-list.md),
-
-For listing your application in the Azure AD app gallery, please read the details here: [https://aka.ms/AzureADAppRequest](../manage-apps/v2-howto-app-gallery-listing.md)
-
-
--
-
-
-### Two new MDA detections in Identity Protection
-
-**Type:** New feature
-**Service category:** Identity Protection
-**Product capability:** Identity Security & Protection
-
-
-Identity Protection has added two new detections from Microsoft Defender for Cloud Apps, (formerly MCAS). The Mass Access to Sensitive Files detection detects anomalous user activity, and the Unusual Addition of Credentials to an OAuth app detects suspicious service principal activity.[Learn more](../identity-protection/concept-identity-protection-risks.md)
-
--
-
-
-### Public preview - New provisioning connectors in the Azure AD Application Gallery - February 2022
-
-**Type:** New feature
-**Service category:** App Provisioning
-**Product capability:** 3rd Party Integration
-
-
-You can now automate creating, updating, and deleting user accounts for these newly integrated apps:
-
-- [BullseyeTDP](../saas-apps/bullseyetdp-provisioning-tutorial.md)
-- [GitHub Enterprise Managed User (OIDC)](../saas-apps/github-enterprise-managed-user-oidc-provisioning-tutorial.md)
-- [Gong](../saas-apps/gong-provisioning-tutorial.md)
-- [LanSchool Air](../saas-apps/lanschool-air-provisioning-tutorial.md)
-- [ProdPad](../saas-apps/prodpad-provisioning-tutorial.md)
-
-For more information about how to better secure your organization by using automated user account provisioning, see [Automate user provisioning to SaaS applications with Azure AD](../app-provisioning/user-provisioning.md).
-
--
-
-
-### General Availability - Privileged Identity Management (PIM) role activation for SharePoint Online enhancements
-
-**Type:** Changed feature
-**Service category:** Privileged Identity Management
-**Product capability:** Privileged Identity Management
-
-
-We've improved the Privileged Identity management (PIM) time to role activation for SharePoint Online. Now, when activating a role in PIM for SharePoint Online, you should be able to use your permissions right away in SharePoint Online. This change rolls out in stages, so you might not yet see these improvements in your organization. [Learn more](../privileged-identity-management/pim-how-to-activate-role.md)
-
--
active-directory Whats New Sovereign Clouds https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/whats-new-sovereign-clouds.md
In the **All Devices** settings under the Registered column, you can now select
**Service category:** My Apps
**Product capability:** End User Experiences
-We have consolidated relevant app launcher settings in a new App launchers section in the Azure and Entra portals. The entry point can be found under Enterprise applications, where Collections used to be. You can find the Collections option by selecting App launchers. In addition, we've added a new App launchers Settings option. This option has some settings you may already be familiar with like the Microsoft 365 settings. The new Settings options also have controls for previews. As an admin, you can choose to try out new app launcher features while they are in preview. Enabling a preview feature means that the feature turns on for your organization. This enabled feature reflects in the My Apps portal, and other app launchers for all of your users. To learn more about the preview settings, see: [End-user experiences for applications](../manage-apps/end-user-experiences.md).
+We have consolidated relevant app launcher settings in a new App launchers section in the Azure and Microsoft Entra admin centers. The entry point can be found under Enterprise applications, where Collections used to be. You can find the Collections option by selecting App launchers. In addition, we've added a new App launchers Settings option. This option has some settings you may already be familiar with like the Microsoft 365 settings. The new Settings options also have controls for previews. As an admin, you can choose to try out new app launcher features while they are in preview. Enabling a preview feature means that the feature turns on for your organization. This enabled feature reflects in the My Apps portal, and other app launchers for all of your users. To learn more about the preview settings, see: [End-user experiences for applications](../manage-apps/end-user-experiences.md).
active-directory Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/whats-new.md
Last updated 05/31/2023 -+
Azure AD receives improvements on an ongoing basis. To stay up to date with the
This page updates monthly, so revisit it regularly. If you're looking for items older than six months, you can find them in [Archive for What's new in Azure Active Directory](whats-new-archive.md).
+## August 2023
+
+### General Availability - Tenant Restrictions V2
+
+**Type:** New feature
+**Service category:** Authentications (Logins)
+**Product capability:** Identity Security & Protection
+
+**Tenant Restrictions V2 (TRv2)** is now generally available for authentication plane via proxy.
+
+TRv2 allows organizations to enable safe and productive cross-company collaboration while containing data exfiltration risk. With TRv2, you can control what external tenants your users can access from your devices or network using externally issued identities and provide granular access control on a per org, user, group, and application basis.  
+
+TRv2 uses the cross-tenant access policy, and offers both authentication and data plane protection. It enforces policies during user authentication, and on data plane access with Exchange Online, SharePoint Online, Teams, and MSGraph. While the data plane support with Windows GPO and Global Secure Access is still in public preview, authentication plane support with proxy is now generally available.
+
+For more information on tenant restrictions V2, visit https://aka.ms/tenant-restrictions-enforcement; for Global Secure Access client-side tagging for TRv2, see [Universal tenant restrictions](/azure/global-secure-access/how-to-universal-tenant-restrictions).
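Since TRv2 builds on the cross-tenant access policy, a hedged starting point for inspecting the configuration is reading that policy via Microsoft Graph; whether the TRv2-specific settings surface on this default object or on a related beta resource is an assumption to confirm against the current Graph docs.

```python
# Sketch: read the default cross-tenant access policy that TRv2 builds on.
# Assumes Policy.Read.All; TRv2-specific settings may live on a related
# beta resource, so treat this as a starting point for inspection.
import requests

token = "<access-token>"
resp = requests.get(
    "https://graph.microsoft.com/v1.0/policies/crossTenantAccessPolicy/default",
    headers={"Authorization": f"Bearer {token}"},
)
resp.raise_for_status()
print(resp.json())
```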
+++
+### Public Preview - Cross-tenant access settings supports custom RBAC roles and protected actions
+
+**Type:** New feature
+**Service category:** B2B
+**Product capability:** B2B/B2C
+
+Cross-tenant access settings can be managed with custom roles defined by your organization. This enables you to define your own finely-scoped roles to manage cross-tenant access settings instead of using one of the built-in roles for management. [Learn more about creating your own custom roles](../external-identities/cross-tenant-access-overview.md#custom-roles-for-managing-cross-tenant-access-settings).
+
+You can also now protect privileged actions inside of cross-tenant access settings using Conditional Access. For example, you can require MFA before allowing changes to default settings for B2B collaboration. Learn more about [Protected actions](../roles/protected-actions-overview.md).
+++
+### General Availability - Additional settings in Entitlement Management auto-assignment policy
+
+**Type:** Changed feature
**Service category:** Entitlement Management
+**Product capability:** Entitlement Management
+
+In the Entra ID Governance entitlement management auto-assignment policy, there are three new settings. These let a customer choose not to have the policy create assignments, not to remove assignments, and to delay assignment removal.
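These three behaviors plausibly map to properties on the assignment policy's automatic request settings in the entitlement management beta API; the sketch below uses endpoint and property names that are assumptions to verify before use.

```python
# Sketch: express the three auto-assignment behaviors on an access package
# assignment policy. Endpoint and property names are assumptions based on the
# entitlement management beta API; verify against current Graph docs.
import requests

token = "<access-token>"              # assumed: EntitlementManagement.ReadWrite.All
policy_id = "<assignment-policy-id>"  # hypothetical policy identifier

payload = {
    "automaticRequestSettings": {
        "requestAccessForAllowedTargets": False,              # don't create assignments
        "removeAccessWhenTargetLeavesAllowedTargets": False,  # don't remove assignments
        "gracePeriodBeforeAccessRemoval": "P7D",              # delay removal by seven days
    }
}
resp = requests.patch(
    f"https://graph.microsoft.com/beta/identityGovernance/entitlementManagement/assignmentPolicies/{policy_id}",
    headers={"Authorization": f"Bearer {token}", "Content-Type": "application/json"},
    json=payload,
)
resp.raise_for_status()
```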
+++
+### Public Preview - Setting for guest losing access
+
+**Type:** Changed feature
+**Service category:** Entitlement Management
+**Product capability:** Entitlement Management
+
+An administrator can configure that when a guest brought in through entitlement management has lost their last access package assignment, they're deleted after a specified number of days. For more information, see: [Govern access for external users in entitlement management](../governance/entitlement-management-external-users.md).
+++
+### Public Preview - Real-Time Strict Location Enforcement
+
+**Type:** New feature
+**Service category:** Continuous Access Evaluation
+**Product capability:** Access Control
+
+Strictly enforce Conditional Access policies in real-time using Continuous Access Evaluation. Enable services like Microsoft Graph, Exchange Online, and SharePoint Online to block access requests from disallowed locations as part of a layered defense against token replay and other unauthorized access. For more information, see blog: [Public Preview: Strictly Enforce Location Policies with Continuous Access Evaluation](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/public-preview-strictly-enforce-location-policies-with/ba-p/3773133) and documentation:
+[Strictly enforce location policies using continuous access evaluation (preview)](../conditional-access/concept-continuous-access-evaluation-strict-enforcement.md).
+++
+### Public Preview - New provisioning connectors in the Azure AD Application Gallery - August 2023
+
+**Type:** New feature
+**Service category:** App Provisioning
+**Product capability:** 3rd Party Integration
+
+
+We've added the following new applications in our App gallery with Provisioning support. You can now automate creating, updating, and deleting of user accounts for these newly integrated apps:
+
+- [Airbase](../saas-apps/airbase-provisioning-tutorial.md)
+- [Airtable](../saas-apps/airtable-provisioning-tutorial.md)
+- [Cleanmail Swiss](../saas-apps/cleanmail-swiss-provisioning-tutorial.md)
+- [Informacast](../saas-apps/informacast-provisioning-tutorial.md)
+- [Kintone](../saas-apps/kintone-provisioning-tutorial.md)
+- [O'reilly learning platform](../saas-apps/oreilly-learning-platform-provisioning-tutorial.md)
+- [Tailscale](../saas-apps/tailscale-provisioning-tutorial.md)
+- [Tanium SSO](../saas-apps/tanium-sso-provisioning-tutorial.md)
+- [Vbrick Rev Cloud](../saas-apps/vbrick-rev-cloud-provisioning-tutorial.md)
+- [Xledger](../saas-apps/xledger-provisioning-tutorial.md)
++
+For more information about how to better secure your organization by using automated user account provisioning, see: [Automate user provisioning to SaaS applications with Azure AD](../app-provisioning/user-provisioning.md).
++++
+### General Availability - Continuous Access Evaluation for Workload Identities available in Public and Gov clouds
+
+**Type:** New feature
+**Service category:** Continuous Access Evaluation
+**Product capability:** Identity Security & Protection
+
+Real-time enforcement of risk events, revocation events, and Conditional Access location policies is now generally available for workload identities.
+Service principals on line of business (LOB) applications are now protected on access requests to Microsoft Graph. For more information, see: [Continuous access evaluation for workload identities (preview)](../conditional-access/concept-continuous-access-evaluation-workload.md).
+++

## July 2023

### General Availability: Azure Active Directory (Azure AD) is being renamed.
SAML/Ws-Fed based identity providers for authentication in Azure AD B2B are gene
**Service category:** Provisioning
**Product capability:** Identity Lifecycle Management
-Cross-tenant synchronization allows you to set up a scalable and automated solution for users to access applications across tenants in your organization. It builds upon the Azure Active Directory B2B functionality and automates creating, updating, and deleting B2B users within tenants in your organization. For more information, see: [What is cross-tenant synchronization?](../multi-tenant-organizations/cross-tenant-synchronization-overview.md).
+Cross-tenant synchronization allows you to set up a scalable and automated solution for users to access applications across tenants in your organization. It builds upon the Azure Active Directory B2B functionality and automates creating, updating, and deleting B2B users within tenants in your organization. For more information, see: [What is cross-tenant synchronization?](../multi-tenant-organizations/cross-tenant-synchronization-overview.md).
In May 2023 we added the following 51 new applications in our App gallery with F
-[INEXTRACK](https://inexto.com/inexto-suite/inextrack), [Valotalive Digital Signage Microsoft 365 integration](https://valota.live/apps/microsoft-excel/), [Tailscale](http://tailscale.com/), [MANTL](https://console.mantl.com/), [ServusConnect](../saas-apps/servusconnect-tutorial.md), [Jigx MS Graph Demonstrator](https://www.jigx.com/), [Delivery Solutions](../saas-apps/delivery-solutions-tutorial.md), [Radiant IOT Portal](../saas-apps/radiant-iot-portal-tutorial.md), [Cosgrid Networks](../saas-apps/cosgrid-networks-tutorial.md), [voya SSO](https://app.voya.ai/), [Redocly](../saas-apps/redocly-tutorial.md), [Glaass Pro](https://glaass.net/pro/), [TalentLyftOIDC](https://www.talentlyft.com/en), [Cisco Expressway](../saas-apps/cisco-expressway-tutorial.md), [IBM TRIRIGA on Cloud](../saas-apps/ibm-tririga-on-cloud-tutorial.md), [Avionte Bold SAML Federated SSO](../saas-apps/avionte-bold-saml-federated-sso-tutorial.md), [InspectNTrack](http://www.inspecttrack.com/), [CAREERSHIP](../saas-apps/careership-tutorial.md), [Cisco Unity Connection](../saas-apps/cisco-unity-connection-tutorial.md), [HSC-Buddy](https://hsc-buddy.com/), [teamecho](https://app.teamecho.at/), [Uni-tel ), [Recnice](https://recnice.com/)
+[INEXTRACK](https://inexto.com/inexto-suite/inextrack), [Valotalive Digital Signage Microsoft 365 integration](https://valota.live/apps/microsoft-excel/), [Tailscale](http://tailscale.com/), [MANTL](https://console.mantl.com/), [ServusConnect](../saas-apps/servusconnect-tutorial.md), [Jigx MS Graph Demonstrator](https://www.jigx.com/), [Delivery Solutions](../saas-apps/delivery-solutions-tutorial.md), [Radiant IOT Portal](../saas-apps/radiant-iot-portal-tutorial.md), [Cosgrid Networks](../saas-apps/cosgrid-networks-tutorial.md), [voya SSO](https://app.voya.ai/), [Redocly](../saas-apps/redocly-tutorial.md), [Glaass Pro](https://glaass.net/pro/), [TalentLyftOIDC](https://www.talentlyft.com/en), [Cisco Expressway](../saas-apps/cisco-expressway-tutorial.md), [IBM TRIRIGA on Cloud](../saas-apps/ibm-tririga-on-cloud-tutorial.md), [Avionte Bold SAML Federated SSO](../saas-apps/avionte-bold-saml-federated-sso-tutorial.md), [InspectNTrack](http://www.inspecttrack.com/), [CAREERSHIP](../saas-apps/careership-tutorial.md), [Cisco Unity Connection](../saas-apps/cisco-unity-connection-tutorial.md), [HSC-Buddy](https://hsc-buddy.com/), [teamecho](https://app.teamecho.at/), [AskFora](https://askfora.com/), [Enterprise Bot](https://www.enterprisebot.ai/),[CMD+CTRL Base Camp](../saas-apps/cmd-ctrl-base-camp-tutorial.md), [Debitia Collections](https://www.debitia.com/), [EnergyManager](https://energymanager.no/), [Visual Workforce](https://prod.visualworkforce.com/), [Uplifter](https://uplifter.ai/), [AI2](https://tmti.net/services/), [TES Cloud](https://www.tes.c), [Recnice](https://recnice.com/)
You can also find the documentation of all the applications at https://aka.ms/AppsTutorial.
Starting July 2023, we're modernizing the following Terms of Use end user experi
No functionalities are removed. The new PDF viewer adds functionality and the limited visual changes in the end-user experiences will be communicated in a future update. If your organization has allow-listed only certain domains, you must ensure your allowlist includes the domains 'myaccount.microsoft.com' and '*.myaccount.microsoft.com' for Terms of Use to continue working as expected. --
-## February 2023
-
-### General Availability - Expanding Privileged Identity Management Role Activation across the Azure portal
-
-**Type:** New feature
-**Service category:** Privileged Identity Management
-**Product capability:** Privileged Identity Management
-
-Privileged Identity Management (PIM) role activation has been expanded to the Billing and AD extensions in the Azure portal. Shortcuts have been added to Subscriptions (billing) and Access Control (AD) to allow users to activate PIM roles directly from these settings. From the Subscriptions settings, select **View eligible subscriptions** in the horizontal command menu to check your eligible, active, and expired assignments. From there, you can activate an eligible assignment in the same pane. In Access control (IAM) for a resource, you can now select **View my access** to see your currently active and eligible role assignments and activate directly. By integrating PIM capabilities into different Azure portal blades, this new feature allows users to gain temporary access to view or edit subscriptions and resources more easily.
--
-For more information Microsoft cloud settings, see: [Activate my Azure resource roles in Privileged Identity Management](../privileged-identity-management/pim-resource-roles-activate-your-roles.md).
---
-### General Availability - Follow Azure AD best practices with recommendations
-
-**Type:** New feature
-**Service category:** Reporting
-**Product capability:** Monitoring & Reporting
-
-Azure AD recommendations help you improve your tenant posture by surfacing opportunities to implement best practices. On a daily basis, Azure AD analyzes the configuration of your tenant. During this analysis, Azure AD compares the data of a recommendation with the actual configuration of your tenant. If a recommendation is flagged as applicable to your tenant, the recommendation appears in the Recommendations section of the Azure AD Overview.
-
-This release includes our first 3 recommendations:
-
-- Convert from per-user MFA to Conditional Access MFA
-- Migration applications from AD FS to Azure AD
-- Minimize MFA prompts from known devices
--
-For more information, see:
-
-- [What are Azure Active Directory recommendations?](../reports-monitoring/overview-recommendations.md)
-- [Use the Azure AD recommendations API to implement Azure AD best practices for your tenant](/graph/api/resources/recommendations-api-overview)
---
-### Public Preview - Azure AD PIM + Conditional Access integration
-
-**Type:** New feature
-**Service category:** Privileged Identity Management
-**Product capability:** Privileged Identity Management
-
-Now you can require users who are eligible for a role to satisfy Conditional Access policy requirements for activation: use specific authentication method enforced through Authentication Strengths, activate from Intune compliant device, comply with Terms of Use, and use 3rd party MFA and satisfy location requirements.
-
-For more information, see: [Configure Azure AD role settings in Privileged Identity Management](../privileged-identity-management/pim-how-to-change-default-settings.md).
----
-### General Availability - More information on why a sign-in was flagged as "unfamiliar"
-
-**Type:** Changed feature
-**Service category:** Identity Protection
-**Product capability:** Identity Security & Protection
-
-Unfamiliar sign-in properties risk detection now provides risk reasons as to which properties are unfamiliar for customers to better investigate that risk.
-
-Identity Protection now surfaces the unfamiliar properties in the Azure portal on UX and in API as *Additional Info* with a user-friendly description explaining that *the following properties are unfamiliar for this sign-in of the given user*.
-
-There's no additional work to enable this feature, the unfamiliar properties are shown by default. For more information, see: [Sign-in risk](../identity-protection/concept-identity-protection-risks.md).
----
-### General Availability - New Federated Apps available in Azure AD Application gallery - February 2023
---
-**Type:** New feature
-**Service category:** Enterprise Apps
-**Product capability:** 3rd Party Integration
-
-In February 2023 we've added the following 10 new applications in our App gallery with Federation support:
-
-[PROCAS](https://accounting.procas.com/), [Tanium Cloud SSO](../saas-apps/tanium-sso-tutorial.md), [LeanDNA](../saas-apps/leandna-tutorial.md), [CalendarAnything LWC](https://silverlinecrm.com/calendaranything/), [courses.work](../saas-apps/courseswork-tutorial.md), [Udemy Business SAML](../saas-apps/udemy-business-saml-tutorial.md), [Canva](../saas-apps/canva-tutorial.md), [Kno2fy](../saas-apps/kno2fy-tutorial.md), [IT-Conductor](../saas-apps/it-conductor-tutorial.md), [ナレッジワーク(Knowledge Work)](../saas-apps/knowledge-work-tutorial.md), [Valotalive Digital Signage Microsoft 365 integration](https://store.valotalive.com/#main), [Priority Matrix HIPAA](https://hipaa.prioritymatrix.com/), [Priority Matrix Government](https://hipaa.prioritymatrix.com/), [Beable](../saas-apps/beable-tutorial.md), [Grain](https://grain.com/app?dialog=integrations&integration=microsoft+teams), [DojoNavi](../saas-apps/dojonavi-tutorial.md), [Global Validity Access Manager](https://myaccessmanager.com/), [FieldEquip](https://app.fieldequip.com/), [Peoplevine](https://control.peoplevine.com/), [Respondent](../saas-apps/respondent-tutorial.md), [WebTMA](../saas-apps/webtma-tutorial.md), [ClearIP](https://clearip.com/login), [Pennylane](../saas-apps/pennylane-tutorial.md), [VsimpleSSO](https://app.vsimple.com/login), [Compliance Genie](../saas-apps/compliance-genie-tutorial.md), [Dataminr Corporate](https://dmcorp.okta.com/), [Talon](../saas-apps/talon-tutorial.md).
--
-You can also find the documentation for all the applications here: https://aka.ms/AppsTutorial.
-
-To list your application in the Azure AD app gallery, read the details here: https://aka.ms/AzureADAppRequest
---
-### Public Preview - New provisioning connectors in the Azure AD Application Gallery - February 2023
-
-**Type:** New feature
-**Service category:** App Provisioning
-**Product capability:** 3rd Party Integration
-
-
-We've added the following new applications in our App gallery with Provisioning support. You can now automate creating, updating, and deleting user accounts for these newly integrated apps:
-- [Atmos](../saas-apps/atmos-provisioning-tutorial.md)
-
-For more information about how to better secure your organization by using automated user account provisioning, see: [Automate user provisioning to SaaS applications with Azure AD](../app-provisioning/user-provisioning.md).
--
active-directory Access Reviews Application Preparation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/access-reviews-application-preparation.md
In order to permit a wide variety of applications and IT requirements to be addr
|:--|:--|:--|
|A| The application supports federated SSO, Azure AD is the only identity provider, and the application doesn't rely upon group or role claims. | In this pattern, you'll configure that the application requires individual application role assignments, and that users are assigned to the application. Then to perform the review, you'll create a single access review for the application, of the users assigned to this application role. When the review completes, if a user was denied, then they will be removed from the application role. Azure AD will then no longer issue that user with federation tokens and the user will be unable to sign into that application.|
|B|If the application uses group claims in addition to application role assignments.| An application may use AD or Azure AD group membership, distinct from application roles to express finer-grained access. Here, you can choose based on your business requirements either to have the users who have application role assignments reviewed, or to review the users who have group memberships. If the groups do not provide comprehensive access coverage, in particular if users may have access to the application even if they aren't a member of those groups, then we recommend reviewing the application role assignments, as in pattern A above.|
-|C| If the application doesn't rely solely on Azure AD for federated SSO, but does support provisioning via SCIM, or via updates to a SQL table of users or a non-AD LDAP directory. | In this pattern, you'll configure Azure AD to provision the users with application role assignments to the application's database or directory, update the application role assignments in Azure AD with a list of the users who currently have access, and then create a single access review of the application role assignments. For more information, see [Governing an application's existing users](identity-governance-applications-existing-users.md) to update the application role assignments in Azure AD.|
+|C| If the application doesn't rely solely on Azure AD for federated SSO, but does support provisioning via SCIM, updates to a SQL table of users, a non-AD LDAP directory, or a SOAP or REST provisioning protocol. | In this pattern, you'll configure Azure AD to provision the users with application role assignments to the application's database or directory, update the application role assignments in Azure AD with a list of the users who currently have access, and then create a single access review of the application role assignments. For more information, see [Governing an application's existing users](identity-governance-applications-existing-users.md) to update the application role assignments in Azure AD.|
### Other options
Now that you have identified the integration pattern for the application, check
1. Change to the **Roles and administrators** tab. This tab displays the administrative roles that give rights to control the representation of the application in Azure AD, not the access rights in the application. For each administrative role that has permissions to change the application integration or assignments, and has an assignment to it, ensure that only authorized users are in that role.
-1. Change to the **Provisioning** tab. If automatic provisioning isn't configured, then Azure AD won't have a way to notify the application when a user's access is removed if denied during the review. Provisioning might not be necessary for some integration patterns, if the application is federated and solely relies upon Azure AD as its identity provider, or the application uses AD DS groups. However, if your application integration is pattern C, and the application doesn't support federated SSO with Azure AD as its only identity provider, then you'll need to configure provisioning from Azure AD to the application. Provisioning will be necessary so that Azure AD can automatically remove the reviewed users from the application when a review completes, and this removal step can be done through a change sent from Azure AD to the application through SCIM, LDAP or SQL.
+1. Change to the **Provisioning** tab. If automatic provisioning isn't configured, then Azure AD won't have a way to notify the application when a user's access is removed if denied during the review. Provisioning might not be necessary for some integration patterns, if the application is federated and solely relies upon Azure AD as its identity provider, or the application uses AD DS groups. However, if your application integration is pattern C, and the application doesn't support federated SSO with Azure AD as its only identity provider, then you'll need to configure provisioning from Azure AD to the application. Provisioning will be necessary so that Azure AD can automatically remove the reviewed users from the application when a review completes, and this removal step can be done through a change sent from Azure AD to the application through SCIM, LDAP, SQL, SOAP or REST.
 * If this is a gallery application that supports provisioning, [configure the application for provisioning](../app-provisioning/configure-automatic-user-provisioning-portal.md).
 * If the application is a cloud application and supports SCIM, configure [user provisioning with SCIM](../app-provisioning/use-scim-to-provision-users-and-groups.md).
 * If the application is an on-premises application and supports SCIM, configure an application with the [provisioning agent for on-premises SCIM-based apps](../app-provisioning/on-premises-scim-provisioning.md).
 * If the application relies upon a SQL database, configure an application with the [provisioning agent for on-premises SQL-based applications](../app-provisioning/on-premises-sql-connector-configure.md).
 * If the application relies upon another LDAP directory, configure an application with the [provisioning agent for on-premises LDAP-based applications](../app-provisioning/on-premises-ldap-connector-configure.md).
+ * If the application has local user accounts, managed through a SOAP or REST API, configure an application with the [provisioning agent with the web services connector](../app-provisioning/on-premises-web-services-connector.md).
+ * If the application has local user accounts, managed through a MIM connector, configure an application with the [provisioning agent with a custom connector](../app-provisioning/on-premises-custom-connector.md).
+ * If the application is SAP ECC with NetWeaver AS ABAP 7.0 or later, configure an application with the [provisioning agent with a SAP ECC configured web services connector](../app-provisioning/on-premises-sap-connector-configure.md).
1. If provisioning is configured, then click on **Edit Attribute Mappings**, expand the Mapping section and click on **Provision Azure Active Directory Users**. Check that in the list of attribute mappings, there is a mapping for `isSoftDeleted` to the attribute in the application's data store that you would like to set to false when a user loses access. If this mapping isn't present, then Azure AD will not notify the application when a user has gone out of scope, as described in [how provisioning works](../app-provisioning/how-provisioning-works.md). A scripted spot-check of this mapping is sketched after these steps.
1. If the application supports federated SSO, then change to the **Conditional Access** tab. Inspect the enabled policies for this application. If there are policies that are enabled, block access, have users assigned to the policies, but no other conditions, then those users may be already blocked from being able to get federated SSO to the application.
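The spot-check mentioned above can be done outside the portal with the Microsoft Graph synchronization API. This is a rough sketch: the service principal ID is a placeholder, and the exact shape of your tenant's synchronization schema may differ:

```powershell
# Sketch: search a provisioning job's schema for a mapping that references isSoftDeleted.
Connect-MgGraph -Scopes "Application.Read.All", "Synchronization.Read.All"
$spId = "<service-principal-object-id>"   # placeholder: the provisioning app's service principal

$jobs = Invoke-MgGraphRequest -Method GET `
    -Uri "https://graph.microsoft.com/v1.0/servicePrincipals/$spId/synchronization/jobs"
$jobId = $jobs.value[0].id

$schema = Invoke-MgGraphRequest -Method GET `
    -Uri "https://graph.microsoft.com/v1.0/servicePrincipals/$spId/synchronization/jobs/$jobId/schema"

# Walk the rules and report any attribute mapping sourced from isSoftDeleted.
foreach ($rule in $schema.synchronizationRules) {
    foreach ($mapping in $rule.objectMappings) {
        $mapping.attributeMappings |
            Where-Object { $_.source.expression -match 'isSoftDeleted' } |
            ForEach-Object { "{0} -> {1}" -f $_.source.expression, $_.targetAttributeName }
    }
}
```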
active-directory Access Reviews Downloadable Review History https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/access-reviews-downloadable-review-history.md
Review history and request review history are available for any user if they're
**Prerequisite role:** All users authorized to view access reviews
-1. In the Azure portal, select **Azure Active Directory** and then select **Identity Governance**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Identity Governance Administrator](../roles/permissions-reference.md#identity-governance-administrator).
-1. In the left menu, under **Access Reviews** select **Review history**.
+1. Browse to **Identity governance** > **Access Reviews** > **Review History**.
1. Select **New report**.
The reports provide details on a per-user basis showing the following informatio
| Element name | Description |
| --- | --- |
| AccessReviewId | Review object ID |
-| AccessReviewSeriesId | Object ID of the review series, if the review is an instance of a recurring review. If the review is one time, the value is am empty GUID. |
+| AccessReviewSeriesId | Object ID of the review series, if the review is an instance of a recurring review. If the review is one time, the value is an empty GUID. |
| ReviewType | Review types include group, application, Azure AD role, Azure role, and access package |
| ResourceDisplayName | Display Name of the resource being reviewed |
| ResourceId | ID of the resource being reviewed |
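The same report can be requested programmatically. The following is a sketch, assuming the v1.0 `historyDefinitions` endpoint; the display name, date range, and query filter are illustrative:

```powershell
# Sketch: request a review history report covering decisions from Q1 2023.
Connect-MgGraph -Scopes "AccessReview.ReadWrite.All"
$body = @{
    displayName = "Q1 2023 review history"
    reviewHistoryPeriodStartDateTime = "2023-01-01T00:00:00Z"
    reviewHistoryPeriodEndDateTime   = "2023-04-01T00:00:00Z"
    decisions = @("approve", "deny", "dontKnow")
    scopes = @(
        @{
            "@odata.type" = "#microsoft.graph.accessReviewQueryScope"
            queryType     = "MicrosoftGraph"
            # Illustrative filter: reviews whose scope mentions access packages.
            query         = "/identityGovernance/accessReviews/definitions?`$filter=contains(scope/query, 'accessPackages')"
        }
    )
}
Invoke-MgGraphRequest -Method POST `
    -Uri "https://graph.microsoft.com/v1.0/identityGovernance/accessReviews/historyDefinitions" `
    -Body ($body | ConvertTo-Json -Depth 10)
```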
active-directory Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/apps.md
Previously updated : 06/30/2023 Last updated : 08/24/2023
Microsoft Entra identity governance can be integrated with many other applicatio
| [Atmos](../../active-directory/saas-apps/atmos-provisioning-tutorial.md) | ● | |
| [AuditBoard](../../active-directory/saas-apps/auditboard-provisioning-tutorial.md) | ● | |
| [Autodesk SSO](../../active-directory/saas-apps/autodesk-sso-provisioning-tutorial.md) | ● | ● |
-| [Azure Databricks SCIM Connector](https://learn.microsoft.com/azure/databricks/administration-guide/users-groups/scim/aad) | ● | |
+| [Azure Databricks SCIM Connector](/azure/databricks/administration-guide/users-groups/scim/aad) | ● | |
| [AWS IAM Identity Center](../../active-directory/saas-apps/aws-single-sign-on-provisioning-tutorial.md) | ● | ● |
| [BambooHR](../../active-directory/saas-apps/bamboo-hr-tutorial.md) | | ● |
| [BenQ IAM](../../active-directory/saas-apps/benq-iam-provisioning-tutorial.md) | ● | ● |
Microsoft Entra identity governance can be integrated with many other applicatio
| SAML-based apps | | ● |
| [SAP Analytics Cloud](../../active-directory/saas-apps/sap-analytics-cloud-provisioning-tutorial.md) | ● | ● |
| [SAP Cloud Platform](../../active-directory/saas-apps/sap-cloud-platform-identity-authentication-provisioning-tutorial.md) | ● | ● |
-| [SAP ECC 7.0](../../active-directory/app-provisioning/on-premises-sap-connector-configure.md) | ● | |
-| SAP R/3 | ● | |
+| [SAP R/3 and ERP](../../active-directory/app-provisioning/on-premises-sap-connector-configure.md) | ● | |
| [SAP HANA](../../active-directory/saas-apps/saphana-tutorial.md) | ● | ● |
| [SAP SuccessFactors to Active Directory](../../active-directory/saas-apps/sap-successfactors-inbound-provisioning-tutorial.md) | ● | ● |
| [SAP SuccessFactors to Azure Active Directory](../../active-directory/saas-apps/sap-successfactors-inbound-provisioning-cloud-only-tutorial.md) | ● | ● |
active-directory Check Status Workflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/check-status-workflow.md
When a workflow is created, it's important to check its status, and run history
You're able to retrieve run information of a workflow using Lifecycle Workflows. To check the runs of a workflow in the Microsoft Entra admin center, follow these steps:
-1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Lifecycle Workflows Administrator](../roles/permissions-reference.md#lifecycle-workflows-administrator).
-1. Select **Azure Active Directory** and then select **Identity Governance**.
-
-1. On the left menu, select **Lifecycle Workflows**.
-
-1. On the Lifecycle Workflows overview page, select **Workflows**.
+1. Browse to **Identity governance** > **Lifecycle workflows** > **Workflows**.
1. Select the workflow whose run history you want to view.
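The run history shown in the admin center is also exposed through Microsoft Graph. A minimal sketch, assuming the v1.0 lifecycle workflows `runs` relationship and a placeholder workflow ID:

```powershell
# Sketch: list the runs of a lifecycle workflow and their processing status.
Connect-MgGraph -Scopes "LifecycleWorkflows.Read.All"
$workflowId = "<workflow-object-id>"   # placeholder
$runs = Invoke-MgGraphRequest -Method GET `
    -Uri "https://graph.microsoft.com/v1.0/identityGovernance/lifecycleWorkflows/workflows/$workflowId/runs"
$runs.value | ForEach-Object { "{0}  {1}" -f $_.startedDateTime, $_.processingStatus }
```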
active-directory Check Workflow Execution Scope https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/check-workflow-execution-scope.md
Workflow scheduling will automatically process the workflow for users meeting th
To check the users who fall under the execution scope of a workflow, you'd follow these steps:
-1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Lifecycle Workflows Administrator](../roles/permissions-reference.md#lifecycle-workflows-administrator).
-1. Type in **Identity Governance** on the search bar near the top of the page and select it.
-
-1. In the left menu, select **Lifecycle workflows**.
+1. Browse to **Identity governance** > **Lifecycle workflows** > **Workflows**.
1. From the list of workflows, select the workflow you want to check the execution scope of.
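If you prefer to script this check, the following is a sketch that assumes the beta `executionScope` relationship on the workflow object and uses a placeholder workflow ID; verify the endpoint against the current Graph reference before relying on it:

```powershell
# Sketch: list users currently in a workflow's execution scope (beta endpoint).
Connect-MgGraph -Scopes "LifecycleWorkflows.Read.All"
$workflowId = "<workflow-object-id>"   # placeholder
$scope = Invoke-MgGraphRequest -Method GET `
    -Uri "https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/workflows/$workflowId/executionScope"
$scope.value | ForEach-Object { $_.displayName }
```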
active-directory Complete Access Review https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/complete-access-review.md
For more information, see [License requirements](access-reviews-overview.md#lice
## View the status of an access review
-
+
[!INCLUDE [portal updates](~/articles/active-directory/includes/portal-update.md)]
-
You can track the progress of access reviews as they're completed.
-1. Sign in to the [Azure portal](https://portal.azure.com) and open the [Identity Governance page](https://portal.azure.com/#blade/Microsoft_AAD_ERM/DashboardBlade/).
-
-1. In the left menu, select **Access reviews**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Identity Governance Administrator](../roles/permissions-reference.md#identity-governance-administrator).
+
+1. Browse to **Identity governance** > **Access Reviews**.
1. In the list, select an access review.
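To track the same status from a script, a minimal sketch follows, assuming the Microsoft Graph PowerShell v2 module's cmdlet naming:

```powershell
# Sketch: list access review definitions with their status (NotStarted, InProgress, Completed, ...).
Connect-MgGraph -Scopes "AccessReview.Read.All"
Get-MgIdentityGovernanceAccessReviewDefinition -All |
    Select-Object DisplayName, Status, Id
```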
Manually or automatically applying results doesn't have an effect on a group tha
On review creation, the creator can choose between two options for denied guest users in an access review.
 - Denied guest users can have their access to the resource removed. This is the default.
+ - The denied guest user can be blocked from signing in for 30 days, then deleted from the tenant. During the 30-day period, an administrator can restore the guest user's access to the tenant. After the 30-day period is completed, if the guest user hasn't been granted access to the resource again, they'll be removed from the tenant permanently. In addition, using the Microsoft Entra admin center, a Global Administrator can explicitly [permanently delete a recently deleted user](../fundamentals/users-restore.md) before that time period is reached. Once a user has been permanently deleted, the data about that guest user will be removed from active access reviews. Audit information about deleted users remains in the audit log.
### Actions taken on denied B2B direct connect users
active-directory Create Access Review Pim For Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/create-access-review-pim-for-groups.md
For more information, see [License requirements](access-reviews-overview.md#lice
## Create a PIM for Groups access review
-### Scope
- [!INCLUDE [portal updates](~/articles/active-directory/includes/portal-update.md)]
-1. Sign in to the [Azure portal](https://portal.azure.com) and open the [Identity Governance](https://portal.azure.com/#blade/Microsoft_AAD_ERM/DashboardBlade/) page.
+### Scope
+
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Identity Governance Administrator](../roles/permissions-reference.md#identity-governance-administrator).
-2. On the left menu, select **Access reviews**.
+1. Browse to **Identity governance** > **Access Reviews**.
-3. Select **New access review** to create a new access review.
+1. Select **New access review** to create a new access review.
![Screenshot that shows the Access reviews pane in Identity Governance.](./media/create-access-review/access-reviews.png)
-4. In the **Select what to review** box, select **Teams + Groups**.
+1. In the **Select what to review** box, select **Teams + Groups**.
![Screenshot that shows creating an access review.](./media/create-access-review/select-what-review.png)
-5. Select **Teams + Groups** and then select **Select Teams + groups** under **Review Scope**. A list of groups to choose from appears on the right.
+1. Select **Teams + Groups** and then select **Select Teams + groups** under **Review Scope**. A list of groups to choose from appears on the right.
![Screenshot that shows selecting Teams + Groups.](./media/create-access-review/create-pim-review.png)
active-directory Create Access Review https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/create-access-review.md
Access to groups and applications for employees and guests changes over time. To reduce the risk associated with stale access assignments, administrators can use Azure Active Directory (Azure AD) to create access reviews for group members or application access.
-Microsoft 365 and Security group owners can also use Azure AD to create access reviews for group members as long as the Global or User administrator enables the setting via the **Access Reviews Settings** pane. For more information about these scenarios, see [Manage access reviews](manage-access-review.md).
+Microsoft 365 and Security group owners can also use Azure AD to create access reviews for group members as long as the Global or Identity Governance Administrator enables the setting via the **Access Reviews Settings** pane. For more information about these scenarios, see [Manage access reviews](manage-access-review.md).
Watch a short video that talks about enabling access reviews.
This article describes how to create one or more access reviews for group member
- Microsoft Azure AD Premium P2 or Microsoft Entra ID Governance licenses.
- Creating a review on inactive users and with [user-to-group affiliation](review-recommendations-access-reviews.md#user-to-group-affiliation) recommendations requires a Microsoft Entra ID Governance license.
-- Global administrator, User administrator, or Identity Governance administrator to create reviews on groups or applications.
+- Global administrator or Identity Governance administrator to create reviews on groups or applications.
- Global administrators and Privileged Role administrators can create reviews on role-assignable groups. For more information, see [Use Azure AD groups to manage role assignments](../roles/groups-concept.md).
- Microsoft 365 and Security group owner.
If you're reviewing access to an application, then before creating the review, s
### Scope
-
-1. Sign in to the [Azure portal](https://portal.azure.com) and open the [Identity Governance](https://portal.azure.com/#blade/Microsoft_AAD_ERM/DashboardBlade/) page.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Identity Governance Administrator](../roles/permissions-reference.md#identity-governance-administrator).
-2. On the left menu, select **Access reviews**.
+1. Browse to **Identity governance** > **Access Reviews**.
3. Select **New access review** to create a new access review.
B2B direct connect users and teams are included in access reviews of the Teams-e
Use the following instructions to create an access review on a team with shared channels:
-1. Sign in to the [Azure portal](https://portal.azure.com) as a Global Administrator, User Admin or Identity Governance Admin.
-
-1. Open the [Identity Governance](https://portal.azure.com/#blade/Microsoft_AAD_ERM/DashboardBlade/) page.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Identity Governance Administrator](../roles/permissions-reference.md#identity-governance-administrator).
-1. On the left menu, select **Access reviews**.
+1. Browse to **Identity governance** > **Access Reviews**.
1. Select **+ New access review**.
Use the following instructions to create an access review on a team with shared
## Allow group owners to create and manage access reviews of their groups
-The prerequisite role is a Global or User administrator.
+
+The prerequisite role is a Global or Identity Governance Administrator.
-1. Sign in to the [Azure portal](https://portal.azure.com) and open the [Identity Governance page](https://portal.azure.com/#blade/Microsoft_AAD_ERM/DashboardBlade/).
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Identity Governance Administrator](../roles/permissions-reference.md#identity-governance-administrator).
-1. On the menu on the left, under **Access reviews**, select **Settings**.
+1. Browse to **Identity governance** > **Access Reviews** > **Settings**.
1. On the **Delegate who can create and manage access reviews** page, set **Group owners can create and manage access reviews for groups they own** to **Yes**.
active-directory Create Lifecycle Workflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/create-lifecycle-workflow.md
Lifecycle workflows allow for tasks associated with the lifecycle process to be
- **Tasks**: Actions taken when a workflow is triggered.
- **Execution conditions**: The who and when of a workflow. These conditions define which users (scope) this workflow should run against, and when (trigger) the workflow should run.
-You can create and customize workflows for common scenarios by using templates, or you can build a workflow from scratch without using a template. Currently, if you use the Azure portal, any workflow that you create must be based on a template. If you want to create a workflow without using a template, use Microsoft Graph.
+You can create and customize workflows for common scenarios by using templates, or you can build a workflow from scratch without using a template. Currently, if you use the Microsoft Entra admin center, any workflow that you create must be based on a template. If you want to create a workflow without using a template, use Microsoft Graph.
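If you want to build a workflow without a template, the following is a minimal sketch of creating a joiner workflow directly through Microsoft Graph. It assumes the v1.0 lifecycle workflows endpoint; the rule, offset, and task definition ID are illustrative placeholders (enumerate real task definition IDs from the `taskDefinitions` endpoint first):

```powershell
Connect-MgGraph -Scopes "LifecycleWorkflows.ReadWrite.All"

# Workflow body: scope (who), trigger (when), and tasks (what).
$body = @{
    displayName = "Onboard new sales hires"
    category    = "joiner"
    isEnabled   = $true
    executionConditions = @{
        "@odata.type" = "#microsoft.graph.identityGovernance.triggerAndScopeBasedConditions"
        scope = @{
            "@odata.type" = "#microsoft.graph.identityGovernance.ruleBasedSubjectSet"
            rule = "(department eq 'Sales')"
        }
        trigger = @{
            "@odata.type" = "#microsoft.graph.identityGovernance.timeBasedAttributeTrigger"
            timeBasedAttribute = "employeeHireDate"
            offsetInDays = -7   # run 7 days before the hire date
        }
    }
    tasks = @(
        @{
            isEnabled        = $true
            displayName      = "Send welcome email"
            taskDefinitionId = "<taskDefinitionId>"   # placeholder: look up real IDs first
            arguments        = @()
        }
    )
}

Invoke-MgGraphRequest -Method POST `
    -Uri "https://graph.microsoft.com/v1.0/identityGovernance/lifecycleWorkflows/workflows" `
    -Body ($body | ConvertTo-Json -Depth 10)
```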
## Prerequisites

[!INCLUDE [Microsoft Entra ID Governance license](../../../includes/active-directory-entra-governance-license.md)]
-## Create a lifecycle workflow by using a template in the Azure portal
+## Create a lifecycle workflow by using a template in the Microsoft Entra admin center
[!INCLUDE [portal updates](~/articles/active-directory/includes/portal-update.md)]
-If you're using the Azure portal to create a workflow, you can customize existing templates to meet your organization's needs. These templates include one for pre-hire common scenarios.
+If you're using the Microsoft Entra admin center to create a workflow, you can customize existing templates to meet your organization's needs. These templates include one for pre-hire common scenarios.
To create a workflow based on a template:
-1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Lifecycle Workflows Administrator](../roles/permissions-reference.md#lifecycle-workflows-administrator).
-1. Select **Azure Active Directory** > **Identity Governance**.
-
-1. On the left menu, select **Lifecycle Workflows**.
-
-1. Select **Workflows**.
+1. Browse to **Identity governance** > **Lifecycle workflows** > **Create a workflow**.
1. On the **Choose a workflow** page, select the workflow template that you want to use.
active-directory Customize Workflow Email https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/customize-workflow-email.md
For more information on these customizable parameters, see [Common email task pa
When you're customizing an email sent via lifecycle workflows, you can choose to customize either a new task or an existing task. You do these customizations the same way whether the task is new or existing, but the following steps walk you through updating an existing task. To customize emails sent from tasks within workflows by using the Microsoft Entra admin center:
-1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Lifecycle Workflows Administrator](../roles/permissions-reference.md#lifecycle-workflows-administrator).
-1. On the search bar near the top of the page, enter **Identity Governance** and select the result.
+1. Browse to **Identity governance** > **Lifecycle workflows** > **Workflows**.
-1. On the left menu, select **Lifecycle workflows**.
-
-1. On the left menu, select **Workflows**.
-
-1. Select **Tasks**.
+1. Select the workflow that contains the email task you want to customize.
1. On the pane that lists tasks, select the task for which you want to customize the email.
active-directory Customize Workflow Schedule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/customize-workflow-schedule.md
When you create workflows by using lifecycle workflows, you can fully customize
Workflows that you create within lifecycle workflows follow the same schedule that you define on the **Workflow settings** pane. To adjust the schedule, follow these steps:
-1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Lifecycle Workflows Administrator](../roles/permissions-reference.md#lifecycle-workflows-administrator).
-1. On the search bar near the top of the page, enter **Identity Governance** and select the result.
-
-1. On the left menu, select **Lifecycle workflows**.
+1. Browse to **Identity governance** > **Lifecycle workflows**.
1. On the **Lifecycle workflows** overview page, select **Workflow settings**.
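The same setting can be read and changed through Microsoft Graph. A sketch, assuming the lifecycle workflows `settings` resource and its `workflowScheduleIntervalInHours` property:

```powershell
# Sketch: read and update the tenant-wide workflow schedule interval.
Connect-MgGraph -Scopes "LifecycleWorkflows.ReadWrite.All"

# Read the current interval (hours between scheduled workflow runs).
Invoke-MgGraphRequest -Method GET `
    -Uri "https://graph.microsoft.com/v1.0/identityGovernance/lifecycleWorkflows/settings"

# Change the interval to every 3 hours.
Invoke-MgGraphRequest -Method PATCH `
    -Uri "https://graph.microsoft.com/v1.0/identityGovernance/lifecycleWorkflows/settings" `
    -Body (@{ workflowScheduleIntervalInHours = 3 } | ConvertTo-Json)
```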
active-directory Delete Lifecycle Workflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/delete-lifecycle-workflow.md
When a workflow is deleted, it enters a soft-delete state. During this period, y
[!INCLUDE [portal updates](~/articles/active-directory/includes/portal-update.md)]
-1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Lifecycle Workflows Administrator](../roles/permissions-reference.md#lifecycle-workflows-administrator).
-1. On the search bar near the top of the page, enter **Identity Governance**. Then select **Identity Governance** in the results.
-
-1. On the left menu, select **Lifecycle Workflows**.
-
-1. Select **Workflows**.
+1. Browse to **Identity governance** > **Lifecycle workflows** > **Workflows**.
1. On the **Workflows** page, select the workflow that you want to delete. Then select **Delete**.
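The soft-delete behavior can also be exercised through Microsoft Graph. A sketch, assuming the v1.0 `deletedItems` container and a placeholder workflow ID:

```powershell
Connect-MgGraph -Scopes "LifecycleWorkflows.ReadWrite.All"
$workflowId = "<workflow-object-id>"   # placeholder

# Soft-delete the workflow.
Invoke-MgGraphRequest -Method DELETE `
    -Uri "https://graph.microsoft.com/v1.0/identityGovernance/lifecycleWorkflows/workflows/$workflowId"

# Soft-deleted workflows stay listable (and restorable) for a limited period.
$deleted = Invoke-MgGraphRequest -Method GET `
    -Uri "https://graph.microsoft.com/v1.0/identityGovernance/lifecycleWorkflows/deletedItems/workflows"
$deleted.value | ForEach-Object { "{0}  {1}" -f $_.displayName, $_.id }
```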
active-directory Entitlement Management Access Package Approval Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-package-approval-policy.md
# Change approval and requestor information settings for an access package in entitlement management
-Each access package must have one or more access package assignment policies, before a user can be assigned access. When an access package is created in the Entra portal, the Entra portal automatically creates the first access package assignment policy for that access package. The policy determines who can request access, and who if anyone must approve access.
+Each access package must have one or more access package assignment policies, before a user can be assigned access. When an access package is created in the Microsoft Entra admin center, the Microsoft Entra admin center automatically creates the first access package assignment policy for that access package. The policy determines who can request access, and who if anyone must approve access.
As an access package manager, you can change the approval and requestor information settings for an access package at any time by editing an existing policy or adding a new additional policy for requesting access.
For a demonstration of how to add a multi-stage approval to a request policy, wa
## Change approval settings of an existing access package assignment policy
+
Follow these steps to specify the approval settings for requests for the access package through a policy:
-**Prerequisite role:** Global administrator, Identity Governance administrator, User administrator, Catalog owner, or Access package manager
+**Prerequisite role:** Global administrator, Identity Governance Administrator, Catalog owner, or Access package manager
+
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Identity Governance Administrator](../roles/permissions-reference.md#identity-governance-administrator).
-1. In the Azure portal, select **Azure Active Directory** and then select **Identity Governance**.
+1. Browse to **Identity governance** > **Entitlement management** > **Access packages**.
-1. In the left menu, select **Access packages** and then open the access package.
+1. On the **Access packages** page, open an access package.
1. Either select a policy to edit, or add a new policy to the access package.
1. Select **Policies** and then **Add policy** if you want to create a new policy.
For example, if you listed Alice and Bob as the first stage approver(s), list Ca
## Collect additional requestor information for approval
-In order to make sure users are getting access to the right access packages, you can require requestors to answer custom text field or Multiple Choice questions at the time of request. There's a limit of 20 questions per policy and a limit of 25 answers for Multiple Choice questions. The questions will then be shown to approvers to help them make a decision.
+In order to make sure users are getting access to the right access packages, you can require requestors to answer custom text field or Multiple Choice questions at the time of request. The questions will then be shown to approvers to help them make a decision.
1. Go to the **Requestor information** tab and select the **Questions** sub tab.
active-directory Entitlement Management Access Package Assignments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-package-assignments.md
To use entitlement management and assign users to access packages, you must have
## View who has an assignment
-**Prerequisite role:** Global administrator, Identity Governance administrator, User administrator, Catalog owner, Access package manager or Access package assignment manager
-1. In the Azure portal, select **Azure Active Directory** and then select **Identity Governance**.
+**Prerequisite role:** Global Administrator, Identity Governance Administrator, Catalog owner, Access package manager or Access package assignment manager
-1. In the left menu, select **Access packages** and then open the access package.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Identity Governance Administrator](../roles/permissions-reference.md#identity-governance-administrator).
+
+1. Browse to **Identity governance** > **Entitlement management** > **Access packages**.
+
+1. On the **Access packages** page, open an access package.
1. Select **Assignments** to see a list of active assignments.
You can also retrieve assignments in an access package using Microsoft Graph. A
### View assignments with PowerShell
-You can perform this query in PowerShell with the `Get-MgEntitlementManagementAccessPackageAssignment` cmdlet from the [Microsoft Graph PowerShell cmdlets for Identity Governance](https://www.powershellgallery.com/packages/Microsoft.Graph.Identity.Governance/) module version 1.16.0 or a later 1.x.x module version, or Microsoft Graph PowerShell cmdlets beta module version 2.1.x or later beta module version. This script illustrates using the Graph `beta` profile and Microsoft Graph PowerShell cmdlets module version 1.x.x. This cmdlet takes as a parameter the access package ID, which is included in the response from the `Get-MgEntitlementManagementAccessPackage` cmdlet.
+You can perform this query in PowerShell with the `Get-MgEntitlementManagementAssignment` cmdlet from the [Microsoft Graph PowerShell cmdlets for Identity Governance](https://www.powershellgallery.com/packages/Microsoft.Graph.Identity.Governance/) module version 2.1.x or later module version. This script illustrates using the Microsoft Graph PowerShell cmdlets module version 2.4.0. This cmdlet takes as a parameter the access package ID, which is included in the response from the `Get-MgEntitlementManagementAccessPackage` cmdlet.
```powershell
Connect-MgGraph -Scopes "EntitlementManagement.Read.All"
-Select-MgProfile -Name "beta"
-$accesspackage = Get-MgEntitlementManagementAccessPackage -DisplayNameEq "Marketing Campaign"
-$assignments = Get-MgEntitlementManagementAccessPackageAssignment -AccessPackageId $accesspackage.Id -ExpandProperty target -All -ErrorAction Stop
-$assignments | ft Id,AssignmentState,TargetId,{$_.Target.DisplayName}
+$accesspackage = Get-MgEntitlementManagementAccessPackage -Filter "displayName eq 'Marketing Campaign'"
+$assignments = Get-MgEntitlementManagementAssignment -AccessPackageId $accesspackage.Id -ExpandProperty target -All -ErrorAction Stop
+$assignments | ft Id,state,{$_.Target.id},{$_.Target.displayName}
```

## Directly assign a user

In some cases, you might want to directly assign specific users to an access package so that users don't have to go through the process of requesting the access package. To directly assign users, the access package must have a policy that allows administrator direct assignments.
-**Prerequisite role:** Global administrator, User administrator, Catalog owner, Access package manager or Access package assignment manager
+**Prerequisite role:** Global Administrator, Identity Governance Administrator, Catalog owner, Access package manager or Access package assignment manager
-1. In the Azure portal, select **Azure Active Directory** and then select **Identity Governance**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Identity Governance Administrator](../roles/permissions-reference.md#identity-governance-administrator).
-1. In the left menu, select **Access packages** and then open the access package.
+1. Browse to **Identity governance** > **Entitlement management** > **Access packages**.
+
+1. On the **Access packages** page, open an access package.
1. In the left menu, select **Assignments**.
In some cases, you might want to directly assign specific users to an access pac
Entitlement management also allows you to directly assign external users to an access package to make collaborating with partners easier. To do this, the access package must have a policy that allows users not yet in your directory to request access.
-**Prerequisite role:** Global administrator, User administrator, Catalog owner, Access package manager or Access package assignment manager
+**Prerequisite role:** Global Administrator, Identity Governance Administrator, Catalog owner, Access package manager or Access package assignment manager
+
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Identity Governance Administrator](../roles/permissions-reference.md#identity-governance-administrator).
-1. In the Azure portal, select **Azure Active Directory** and then select **Identity Governance**.
+1. Browse to **Identity governance** > **Entitlement management** > **Access packages**.
-1. In the left menu, select **Access packages** and then open the access package in which you want to add a user.
+1. On the **Access packages** page, open an access package.
1. In the left menu, select **Assignments**.
You can also directly assign a user to an access package using Microsoft Graph.
### Assign a user to an access package with PowerShell
-You can assign a user to an access package in PowerShell with the `New-MgEntitlementManagementAccessPackageAssignmentRequest` cmdlet from the [Microsoft Graph PowerShell cmdlets for Identity Governance](https://www.powershellgallery.com/packages/Microsoft.Graph.Identity.Governance/) module version 1.16.0 or a later 1.x.x module version, or Microsoft Graph PowerShell cmdlets beta module version 2.1.x or later beta module version. This script illustrates using the Graph `beta` profile and Microsoft Graph PowerShell cmdlets module version 1.x.x. This cmdlet takes as parameters
-* the access package ID, which is included in the response from the `Get-MgEntitlementManagementAccessPackage` cmdlet,
-* the access package assignment policy ID, which is included in the response from the `Get-MgEntitlementManagementAccessPackageAssignmentPolicy`cmdlet,
-* the object ID of the target user, if the user is already present in your directory.
+You can assign a user to an access package in PowerShell with the `New-MgEntitlementManagementAssignmentRequest` cmdlet from the [Microsoft Graph PowerShell cmdlets for Identity Governance](https://www.powershellgallery.com/packages/Microsoft.Graph.Identity.Governance/) module version 2.1.x or later module version. This script illustrates using the Microsoft Graph PowerShell cmdlets module version 2.4.0.
```powershell
Connect-MgGraph -Scopes "EntitlementManagement.ReadWrite.All"
-Select-MgProfile -Name "beta"
-$accesspackage = Get-MgEntitlementManagementAccessPackage -DisplayNameEq "Marketing Campaign" -ExpandProperty "accessPackageAssignmentPolicies"
-$policy = $accesspackage.AccessPackageAssignmentPolicies[0]
-$req = New-MgEntitlementManagementAccessPackageAssignmentRequest -AccessPackageId $accesspackage.Id -AssignmentPolicyId $policy.Id -TargetId "a43ee6df-3cc5-491a-ad9d-ea964ef8e464"
+$accesspackage = Get-MgEntitlementManagementAccessPackage -Filter "displayname eq 'Marketing Campaign'" -ExpandProperty assignmentpolicies
+$policy = $accesspackage.AssignmentPolicies[0]
+$userid = "cdbdf152-82ce-479c-b5b8-df90f561d5c7"
+$params = @{
+ requestType = "adminAdd"
+ assignment = @{
+ targetId = $userid
+ assignmentPolicyId = $policy.Id
+ accessPackageId = $accesspackage.Id
+ }
+}
+New-MgEntitlementManagementAssignmentRequest -BodyParameter $params
```
-You can also assign multiple users that are in your directory to an access package using PowerShell with the `New-MgEntitlementManagementAccessPackageAssignment` cmdlet from the [Microsoft Graph PowerShell cmdlets for Identity Governance](https://www.powershellgallery.com/packages/Microsoft.Graph.Identity.Governance/) module version 1.6.1 or later. This cmdlet takes as parameters
+You can also assign multiple users that are in your directory to an access package using PowerShell with the `New-MgBetaEntitlementManagementAccessPackageAssignment` cmdlet from the [Microsoft Graph PowerShell cmdlets for Identity Governance](https://www.powershellgallery.com/packages/Microsoft.Graph.Identity.Governance/) module version 2.4.0 or later. This cmdlet takes as parameters
* the access package ID, which is included in the response from the `Get-MgEntitlementManagementAccessPackage` cmdlet,
* the access package assignment policy ID, which is included in the response from the `Get-MgEntitlementManagementAccessPackageAssignmentPolicy` cmdlet,
* the object IDs of the target users, either as an array of strings, or as a list of user members returned from the `Get-MgGroupMember` cmdlet.
For example, if you want to ensure all the users who are currently members of a
```powershell
Connect-MgGraph -Scopes "EntitlementManagement.ReadWrite.All,Directory.Read.All"
-Select-MgProfile -Name "beta"
-$members = Get-MgGroupMember -GroupId "a34abd69-6bf8-4abd-ab6b-78218b77dc15"
-$accesspackage = Get-MgEntitlementManagementAccessPackage -DisplayNameEq "Marketing Campaign" -ExpandProperty "accessPackageAssignmentPolicies"
-$policy = $accesspackage.AccessPackageAssignmentPolicies[0]
-$req = New-MgEntitlementManagementAccessPackageAssignment -AccessPackageId $accesspackage.Id -AssignmentPolicyId $policy.Id -RequiredGroupMember $members
+$members = Get-MgGroupMember -GroupId "a34abd69-6bf8-4abd-ab6b-78218b77dc15" -All
+
+$accesspackage = Get-MgEntitlementManagementAccessPackage -Filter "displayname eq 'Marketing Campaign'" -ExpandProperty "assignmentPolicies"
+$policy = $accesspackage.AssignmentPolicies[0]
+$req = New-MgBetaEntitlementManagementAccessPackageAssignment -AccessPackageId $accesspackage.Id -AssignmentPolicyId $policy.Id -RequiredGroupMember $members
```
-If you wish to add an assignment for a user who is not yet in your directory, you can use the `New-MgEntitlementManagementAccessPackageAssignmentRequest` cmdlet from the [Microsoft Graph PowerShell cmdlets for Identity Governance](https://www.powershellgallery.com/packages/Microsoft.Graph.Identity.Governance/) module version 1.16.0 or a later 1.x.x module version, or Microsoft Graph PowerShell cmdlets beta module version 2.1.x or later beta module version. This script illustrates using the Graph `beta` profile and Microsoft Graph PowerShell cmdlets module version 1.x.x. This cmdlet takes as parameters
+If you wish to add an assignment for a user who is not yet in your directory, you can use the `New-MgBetaEntitlementManagementAccessPackageAssignmentRequest` cmdlet from the [Microsoft Graph PowerShell cmdlets for Identity Governance](https://www.powershellgallery.com/packages/Microsoft.Graph.Identity.Governance/) beta module version 2.1.x or later beta module version. This script illustrates using the Microsoft Graph PowerShell beta cmdlets, module version 2.4.0. This cmdlet takes as parameters
* the access package ID, which is included in the response from the `Get-MgEntitlementManagementAccessPackage` cmdlet,
* the access package assignment policy ID, which is included in the response from the `Get-MgEntitlementManagementAccessPackageAssignmentPolicy` cmdlet,
* the email address of the target user.

```powershell
Connect-MgGraph -Scopes "EntitlementManagement.ReadWrite.All"
-Select-MgProfile -Name "beta"
-$accesspackage = Get-MgEntitlementManagementAccessPackage -DisplayNameEq "Marketing Campaign" -ExpandProperty "accessPackageAssignmentPolicies"
-$policy = $accesspackage.AccessPackageAssignmentPolicies[0]
-$req = New-MgEntitlementManagementAccessPackageAssignmentRequest -AccessPackageId $accesspackage.Id -AssignmentPolicyId $policy.Id -TargetEmail "sample@example.com"
+$accesspackage = Get-MgEntitlementManagementAccessPackage -Filter "displayname eq 'Marketing Campaign'" -ExpandProperty "assignmentPolicies"
+$policy = $accesspackage.AssignmentPolicies[0]
+$req = New-MgBetaEntitlementManagementAccessPackageAssignmentRequest -AccessPackageId $accesspackage.Id -AssignmentPolicyId $policy.Id -TargetEmail "sample@example.com"
```

## Remove an assignment

You can remove an assignment that a user or an administrator had previously requested.
-**Prerequisite role:** Global administrator, User administrator, Catalog owner, Access package manager or Access package assignment manager
+**Prerequisite role:** Global Administrator, Identity Governance Administrator, Catalog owner, Access package manager or Access package assignment manager
+
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Identity Governance Administrator](../roles/permissions-reference.md#identity-governance-administrator).
-1. In the Azure portal, select **Azure Active Directory** and then select **Identity Governance**.
+1. Browse to **Identity governance** > **Entitlement management** > **Access packages**.
-1. In the left menu, select **Access packages** and then open the access package.
+1. On the **Access packages** page, open an access package.
1. In the left menu, select **Assignments**.
You can also remove an assignment of a user to an access package using Microsoft
### Remove an assignment with PowerShell
-You can remove a user's assignment in PowerShell with the `New-MgEntitlementManagementAccessPackageAssignmentRequest` cmdlet from the [Microsoft Graph PowerShell cmdlets for Identity Governance](https://www.powershellgallery.com/packages/Microsoft.Graph.Identity.Governance/) module version 1.16.0 or a later 1.x.x module version, or Microsoft Graph PowerShell cmdlets beta module version 2.1.x or later beta module version. This script illustrates using the Graph `beta` profile and Microsoft Graph PowerShell cmdlets module version 1.x.x.
+You can remove a user's assignment in PowerShell with the `New-MgEntitlementManagementAssignmentRequest` cmdlet from the [Microsoft Graph PowerShell cmdlets for Identity Governance](https://www.powershellgallery.com/packages/Microsoft.Graph.Identity.Governance/) module version 2.1.x or later module version. This script illustrates using the Microsoft Graph PowerShell cmdlets module version 2.4.0.
```powershell
Connect-MgGraph -Scopes "EntitlementManagement.ReadWrite.All"
-Select-MgProfile -Name "beta"
-$assignments = Get-MgEntitlementManagementAccessPackageAssignment -Filter "accessPackageId eq '9f573551-f8e2-48f4-bf48-06efbb37c7b8' and assignmentState eq 'Delivered'" -All -ErrorAction Stop
-$toRemove = $assignments | Where-Object {$_.targetId -eq '76fd6e6a-c390-42f0-879e-93ca093321e7'}
-$req = New-MgEntitlementManagementAccessPackageAssignmentRequest -AccessPackageAssignmentId $toRemove.Id -RequestType "AdminRemove"
+$accessPackageId = "9f573551-f8e2-48f4-bf48-06efbb37c7b8"
+$userId = "040a792f-4c5f-4395-902f-f0d9d192ab2c"
+$filter = "accessPackage/Id eq '" + $accessPackageId + "' and state eq 'Delivered' and target/objectId eq '" + $userId + "'"
+$assignment = Get-MgEntitlementManagementAssignment -Filter $filter -ExpandProperty target -all -ErrorAction stop
+if ($assignment -ne $null) {
+ $params = @{
+ requestType = "adminRemove"
+ assignment = @{ id = $assignment.id }
+ }
+ New-MgEntitlementManagementAssignmentRequest -BodyParameter $params
+}
```

## Next steps
active-directory Entitlement Management Access Package Auto Assignment Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-package-auto-assignment-policy.md
You'll need to have attributes populated on the users who will be in scope for b
## Create an automatic assignment policy
-To create a policy for an access package, you need to start from the access package's policy tab. Follow these steps to create a new policy for an access package.
+
+To create a policy for an access package, you need to start from the access package's policy tab. Follow these steps to create a new automatic assignment policy for an access package.
**Prerequisite role:** Global administrator or Identity Governance administrator
-1. In the Azure portal, click **Azure Active Directory** and then click **Identity Governance**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Identity Governance Administrator](../roles/permissions-reference.md#identity-governance-administrator).
+
+1. Browse to **Identity governance** > **Entitlement management** > **Access packages**.
-1. In the left menu, click **Access packages** and then open the access package.
+1. On the **Access packages** page, open an access package.
-1. Click **Policies** and then **Add auto-assignment policy** to create a new policy.
+1. Select **Policies** and then **Add auto-assignment policy** to create a new policy.
-1. In the first tab, you'll specify the rule. Click **Edit**.
+1. In the first tab, you'll specify the rule. Select **Edit**.
1. Provide a dynamic membership rule, using the [membership rule builder](../enterprise-users/groups-dynamic-membership.md) or by clicking **Edit** on the rule syntax text box. A sample rule appears after this procedure.

   > [!NOTE]
- > The rule builder might not be able to display some rules constructed in the text box, and validating a rule currently requires the you to be in the Global administrator role. For more information, see [rule builder in the Azure portal](../enterprise-users/groups-create-rule.md#rule-builder-in-the-azure-portal).
+ > The rule builder might not be able to display some rules constructed in the text box, and validating a rule currently requires you to be in the Global administrator role. For more information, see [rule builder in the Entra admin center](../enterprise-users/groups-create-rule.md#rule-builder-in-the-azure-portal).
![Screenshot of an access package automatic assignment policy rule configuration.](./media/entitlement-management-access-package-auto-assignment-policy/auto-assignment-rule-configuration.png)
-1. Click **Save** to close the dynamic membership rule editor, then click **Next** to open the **Custom Extensions** tab.
+1. Select **Save** to close the dynamic membership rule editor.
+1. By default, the checkboxes to automatically create and remove assignments should remain checked.
+1. If you wish users to retain access for a limited time after they go out of scope, you can specify a duration in hours or days. For example, when an employee leaves the sales department, you may wish to let them retain access for 7 days so they can use sales apps and transfer ownership of their resources in those apps to another employee.
+1. Select **Next** to open the **Custom Extensions** tab.
1. If you have [custom extensions](entitlement-management-logic-apps-integration.md) in your catalog you wish to have run when the policy assigns or removes access, you can add them to this policy. Then select **Next** to open the **Review** tab.
To create a policy for an access package, you need to start from the access pack
![Screenshot of an access package automatic assignment policy review tab.](./media/entitlement-management-access-package-auto-assignment-policy/auto-assignment-review.png)
-1. Click **Create** to save the policy.
+1. Select **Create** to save the policy.
> [!NOTE]
> At this time, Entitlement management will automatically create a dynamic security group corresponding to each policy, in order to evaluate the users in scope. This group should not be modified except by Entitlement Management itself. This group may also be modified or deleted automatically by Entitlement Management, so don't use this group for other applications or scenarios.
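For reference, the sample rule promised in the rule step above looks like the following. This uses standard dynamic membership rule syntax; the attribute values are illustrative:

```
(user.department -eq "Sales") -and (user.employeeType -eq "Employee")
```

Users matching the rule receive the access package assignment automatically, and by default lose it when they no longer match.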
active-directory Entitlement Management Access Package Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-package-create.md
Title: Create an access package in entitlement management
-description: Learn how to create an access package of resources that you want to share in Azure Active Directory entitlement management.
+description: Learn how to create an access package of resources that you want to share in Microsoft Entra entitlement management.
documentationCenter: ''
Then once the access package is created, you can [change the hidden setting](ent
[!INCLUDE [portal updates](~/articles/active-directory/includes/portal-update.md)]
-To complete the following steps, you need a role of global administrator, Identity Governance administrator, user administrator, catalog owner, or access package manager.
+To complete the following steps, you need a role of Global Administrator, Identity Governance Administrator, catalog owner, or access package manager.
-1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Identity Governance Administrator](../roles/permissions-reference.md#identity-governance-administrator).
-1. Select **Azure Active Directory**, and then select **Identity Governance**.
-
-1. On the left menu, select **Access packages**.
+1. Browse to **Identity governance** > **Entitlement management** > **Access packages**.
1. Select **New access package**.
- ![Screenshot that shows the button for creating a new access package in the Azure portal.](./media/entitlement-management-shared/access-packages-list.png)
+ ![Screenshot that shows the button for creating a new access package in the Microsoft Entra admin center.](./media/entitlement-management-shared/access-packages-list.png)
## Configure basics
On the **Basics** tab, you give the access package a name and specify which cata
1. In the **Catalog** dropdown list, select the catalog where you want to put the access package. For example, you might have a catalog owner who manages all the marketing resources that can be requested. In this case, you could select the marketing catalog.
- You see only catalogs that you have permission to create access packages in. To create an access package in an existing catalog, you must be a global administrator, Identity Governance administrator, or user administrator. Or you must be a catalog owner or access package manager in that catalog.
+ You see only catalogs that you have permission to create access packages in. To create an access package in an existing catalog, you must be a Global Administrator or Identity Governance Administrator. Or you must be a catalog owner or access package manager in that catalog.
![Screenshot that shows basic information for a new access package.](./media/entitlement-management-access-package-create/basics.png)
- If you're a global administrator, an Identity Governance administrator, a user administrator, or catalog creator, and you want to create your access package in a new catalog that's not listed, select **Create new catalog**. Enter the catalog name and description, and then select **Create**.
+ If you're a global Administrator, an Identity Governance Administrator, or catalog creator, and you want to create your access package in a new catalog that's not listed, select **Create new catalog**. Enter the catalog name and description, and then select **Create**.
The access package that you're creating, and any resources included in it, are added to the new catalog. Later, you can add more catalog owners or add attributes to the resources that you put in the catalog. To learn more about how to edit the attributes list for a specific catalog resource and the prerequisite roles, read [Add resource attributes in the catalog](entitlement-management-catalog-create.md#add-resource-attributes-in-the-catalog).
If you're not sure which resource roles to include, you can skip adding them whi
![Screenshot that shows the panel for selecting applications for resource roles in a new access package.](./media/entitlement-management-access-package-create/resource-roles.png)
- If you're creating the access package in the general catalog or a new catalog, you can choose any resource from the directory that you own. You must be at least a global administrator, a user administrator, or catalog creator.
+ If you're creating the access package in the general catalog or a new catalog, you can choose any resource from the directory that you own. You must be at least a Global Administrator, an Identity Governance Administrator, or catalog creator.
If you're creating the access package in an existing catalog, you can select any resource that's already in the catalog without owning it.
- If you're a global administrator, a user administrator, or catalog owner, you have the additional option of selecting resources that you own but that aren't yet in the catalog. If you select resources not currently in the selected catalog, these resources are also added to the catalog for other catalog administrators to build access packages with. To see all the resources that can be added to the catalog, select the **See all** checkbox at the top of the panel. If you want to select only resources that are currently in the selected catalog, leave the **See all** checkbox cleared (the default state).
+ If you're a Global Administrator, an Identity Governance Administrator, or catalog owner, you have the additional option of selecting resources that you own but that aren't yet in the catalog. If you select resources not currently in the selected catalog, these resources are also added to the catalog for other catalog administrators to build access packages with. To see all the resources that can be added to the catalog, select the **See all** checkbox at the top of the panel. If you want to select only resources that are currently in the selected catalog, leave the **See all** checkbox cleared (the default state).
1. In the **Role** list, select the role that you want users to be assigned for the resource. For more information on selecting the appropriate roles for a resource, read [Add resource roles](entitlement-management-access-package-resources.md#add-resource-roles).
You can create an access package by using Microsoft Graph. A user in an appropri
### Create an access package by using Microsoft PowerShell
-You can also create an access package in PowerShell by using the cmdlets from the [Microsoft Graph PowerShell cmdlets for Identity Governance](https://www.powershellgallery.com/packages/Microsoft.Graph.Identity.Governance/) module version 1.16.0 or a later 1.x.x module version, or Microsoft Graph PowerShell cmdlets beta module version 2.1.x or later beta module version. This script illustrates using the Graph `beta` profile and Microsoft Graph PowerShell cmdlets module version 1.x.x.
+You can also create an access package in PowerShell by using the cmdlets from the [Microsoft Graph PowerShell cmdlets for Identity Governance](https://www.powershellgallery.com/packages/Microsoft.Graph.Identity.Governance/) beta module version 2.1.x or later. This script illustrates using the beta cmdlets in Microsoft Graph PowerShell module version 2.4.0.
-First, retrieve the ID of the catalog (and of the resources and their roles in that catalog) that you want to include in the access package. Use a script similar to the following example:
+First, retrieve the ID of the catalog (and of the resource and its roles in that catalog) that you want to include in the access package. Use a script similar to the following example:
```powershell
Connect-MgGraph -Scopes "EntitlementManagement.ReadWrite.All"
-Select-MgProfile -Name "beta"
-$catalog = Get-MgEntitlementManagementAccessPackageCatalog -Filter "displayName eq 'Marketing'"
-$rsc = Get-MgEntitlementManagementAccessPackageCatalogAccessPackageResource -AccessPackageCatalogId $catalog.Id -Filter "resourceType eq 'Application'" -ExpandProperty "accessPackageResourceScopes"
-$filt = "(originSystem eq 'AadApplication' and accessPackageResource/id eq '" + $rsc[0].Id + "')"
-$rr = Get-MgEntitlementManagementAccessPackageCatalogAccessPackageResourceRole -AccessPackageCatalogId $catalog.Id -Filter $filt -ExpandProperty "accessPackageResource"
+$catalog = Get-MgBetaEntitlementManagementAccessPackageCatalog -Filter "displayName eq 'Marketing'"
+
+$rsc = Get-MgBetaEntitlementManagementAccessPackageCatalogAccessPackageResource -AccessPackageCatalogId $catalog.Id -Filter "resourceType eq 'Application'" -ExpandProperty "accessPackageResourceScopes"
+$filt = "(originSystem eq 'AadApplication' and accessPackageResource/id eq '" + $rsc.Id + "')"
+$rr = Get-MgBetaEntitlementManagementAccessPackageCatalogAccessPackageResourceRole -AccessPackageCatalogId $catalog.Id -Filter $filt -ExpandProperty "accessPackageResource"
```

Then, create the access package:
$params = @{
  Description = "outside sales representatives"
}
-$ap = New-MgEntitlementManagementAccessPackage -BodyParameter $params
+$ap = New-MgBetaEntitlementManagementAccessPackage -BodyParameter $params
```
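Only fragments of the request body survive in this excerpt. As a hedged sketch, a complete body for the beta cmdlet, reusing the `$catalog` retrieved above (the display name and description are illustrative placeholders), might look like this:

```powershell
# Sketch only: full body for creating the access package in the catalog
# retrieved earlier. DisplayName and Description are placeholder values.
$params = @{
    CatalogId   = $catalog.Id
    DisplayName = "sales"
    Description = "outside sales representatives"
}
$ap = New-MgBetaEntitlementManagementAccessPackage -BodyParameter $params
$ap.Id   # keep the ID for the resource role and policy steps that follow
```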
-After you create the access package, assign the resource roles to it. For example, if you want to include the second resource role of the first resource returned earlier as a resource role of the new access package, you can use a script similar to this one:
+After you create the access package, assign the resource roles to it. For example, to include the resource role at index 2 (`$rr[2]`) of the resource returned earlier as a resource role of the new access package, you can use a script similar to this one:
```powershell
$rparams = @{
  DisplayName = $rr[2].DisplayName
  OriginSystem = $rr[2].OriginSystem
  AccessPackageResource = @{
- Id = $rsc[0].Id
- ResourceType = $rsc[0].ResourceType
- OriginId = $rsc[0].OriginId
- OriginSystem = $rsc[0].OriginSystem
+ Id = $rsc.Id
+ ResourceType = $rsc.ResourceType
+ OriginId = $rsc.OriginId
+ OriginSystem = $rsc.OriginSystem
  }
 }
 AccessPackageResourceScope = @{
- OriginId = $rsc[0].OriginId
- OriginSystem = $rsc[0].OriginSystem
+ OriginId = $rsc.OriginId
+ OriginSystem = $rsc.OriginSystem
 }
}
-New-MgEntitlementManagementAccessPackageResourceRoleScope -AccessPackageId $ap.Id -BodyParameter $rparams
+New-MgBetaEntitlementManagementAccessPackageResourceRoleScope -AccessPackageId $ap.Id -BodyParameter $rparams
```

Finally, create the policies. In this policy, only the administrator can assign access, and there are no access reviews. For more examples, see [Create an assignment policy through PowerShell](entitlement-management-access-package-request-policy.md#create-an-access-package-assignment-policy-through-powershell) and [Create an accessPackageAssignmentPolicy](/graph/api/entitlementmanagement-post-assignmentpolicies?tabs=http&view=graph-rest-beta&preserve-view=true).
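The policy call itself isn't shown in this excerpt. A minimal sketch with the beta cmdlet, mirroring the settings described above (administrator-only assignment, no approval, no access reviews) and the beta assignment policy property names used elsewhere on this page, could look like the following:

```powershell
# Sketch only: a direct-assignment policy for the access package created above.
$pparams = @{
    AccessPackageId = $ap.Id
    DisplayName = "direct"
    Description = "direct assignments by administrator"
    AccessReviewSettings = $null             # no recurring access reviews
    RequestorSettings = @{
        ScopeType = "NoSubjects"             # no users can self-request
        AcceptRequests = $true
        AllowedRequestors = @()
    }
    RequestApprovalSettings = @{
        IsApprovalRequired = $false
        IsApprovalRequiredForExtension = $false
        IsRequestorJustificationRequired = $false
        ApprovalMode = "NoApproval"
        ApprovalStages = @()
    }
}
New-MgBetaEntitlementManagementAccessPackageAssignmentPolicy -BodyParameter $pparams
```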
active-directory Entitlement Management Access Package Edit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-package-edit.md
This article describes how to hide or delete an access package.
Follow these steps to change the **Hidden** setting for an access package.
-**Prerequisite role:** Global administrator, Identity Governance administrator, User administrator, Catalog owner, or Access package manager
+**Prerequisite role:** Global Administrator, Identity Governance Administrator, Catalog owner, or Access package manager
-1. In the Azure portal, select **Azure Active Directory** and then select **Identity Governance**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Identity Governance Administrator](../roles/permissions-reference.md#identity-governance-administrator).
-1. In the left menu, select **Access packages** and then open the access package.
+1. Browse to **Identity governance** > **Entitlement management** > **Access packages**.
+
+1. On the **Access packages** page, open an access package.
1. On the Overview page, select **Edit**.
Follow these steps to change the **Hidden** setting for an access package.
An access package can only be deleted if it has no active user assignments. Follow these steps to delete an access package.
-**Prerequisite role:** Global administrator, User administrator, Catalog owner, or Access package manager
+**Prerequisite role:** Global Administrator, Identity Governance Administrator, Catalog owner, or Access package manager
+
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Identity Governance Administrator](../roles/permissions-reference.md#identity-governance-administrator).
-1. In the Azure portal, select **Azure Active Directory** and then select **Identity Governance**.
+1. Browse to **Identity governance** > **Entitlement management** > **Access packages**.
-1. In the left menu, select **Access packages** and then open the access package.
+1. On the **Access packages** page, open the access package.
1. In the left menu, select **Assignments** and remove access for all users.
active-directory Entitlement Management Access Package First https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-package-first.md
Title: Tutorial - Manage access to resources in entitlement management
-description: Step-by-step tutorial for how to create your first access package using the Azure portal in entitlement management.
+description: Step-by-step tutorial for how to create your first access package using the Microsoft Entra admin center in entitlement management.
documentationCenter: ''
In this tutorial, you learn how to:
> * Allow a user in your directory to request access
> * Demonstrate how an internal user can request the access package
-For a step-by-step demonstration of the process of deploying Azure Active Directory entitlement management, including creating your first access package, view the following video:
+For a step-by-step demonstration of the process of deploying Microsoft Entra entitlement management, including creating your first access package, view the following video:
>[!VIDEO https://www.youtube.com/embed/zaaKvaaYwI4]
-This rest of this article uses the Azure portal to configure and demonstrate entitlement management.
+The rest of this article uses the Microsoft Entra admin center to configure and demonstrate entitlement management.
## Prerequisites
For more information, see [License requirements](entitlement-management-overview
A resource directory has one or more resources to share. In this step, you create a group named **Marketing resources** in the Woodgrove Bank directory that is the target resource for entitlement management. You also set up an internal requestor.
-**Prerequisite role:** Global administrator or User administrator
+**Prerequisite role:** Global Administrator or Identity Governance Administrator
![Diagram that shows the users and groups for this tutorial.](./media/entitlement-management-access-package-first/elm-users-groups.png)
-1. Sign in to the [Azure portal](https://portal.azure.com) as a Global administrator or User administrator.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Identity Governance Administrator](../roles/permissions-reference.md#identity-governance-administrator).
-1. In the left navigation, select **Azure Active Directory**.
+1. Browse to **Identity governance** > **Entitlement management** > **Access packages**.
1. [Create two users](../fundamentals/add-users.md). Use the following names or different names.

   | Name | Directory role |
   | --- | --- |
- | **Admin1** | Global administrator, or User administrator. This user can be the user you're currently signed in. |
+ | **Admin1** | Global Administrator or Identity Governance Administrator. This user can be the user you're currently signed in as. |
   | **Requestor1** | User |

4. [Create an Azure AD security group](../fundamentals/how-to-manage-groups.md) named **Marketing resources** with a membership type of **Assigned**. This group is the target resource for entitlement management. The group should be empty of members to start.
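If you prefer to script this setup instead of using the portal, a rough sketch with Microsoft Graph PowerShell follows; the UPN domain and password are placeholders, and the tutorial itself only requires the portal steps.

```powershell
# Sketch only: create the tutorial's requestor and the target security group.
Connect-MgGraph -Scopes "User.ReadWrite.All", "Group.ReadWrite.All"

$passwordProfile = @{
    Password = "<initial password>"      # placeholder
    ForceChangePasswordNextSignIn = $true
}
New-MgUser -DisplayName "Requestor1" `
    -UserPrincipalName "requestor1@contoso.onmicrosoft.com" `
    -MailNickname "requestor1" -AccountEnabled -PasswordProfile $passwordProfile

# An assigned (non-dynamic) security group, empty of members to start
New-MgGroup -DisplayName "Marketing resources" -MailEnabled:$false `
    -MailNickname "marketingresources" -SecurityEnabled
```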
A resource directory has one or more resources to share. In this step, you creat
An *access package* is a bundle of resources that a team or project needs and is governed with policies. Access packages are defined in containers called *catalogs*. In this step, you create a **Marketing Campaign** access package in the **General** catalog.
-**Prerequisite role:** Global administrator, Identity Governance administrator, User administrator, Catalog owner, or Access package manager
+**Prerequisite role:** Global Administrator, Identity Governance Administrator, Catalog owner, or Access package manager
![Diagram that describes the relationship between the access package elements.](./media/entitlement-management-access-package-first/elm-access-package.png)
-1. In the Azure portal, in the left navigation, select **Azure Active Directory**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Identity Governance Administrator](../roles/permissions-reference.md#identity-governance-administrator).
-1. In the left menu, select **Identity Governance**
+1. Browse to **Identity governance** > **Entitlement management** > **Access packages**.
-1. In the left menu, select **Access packages**. If you see **Access denied**, ensure that a Microsoft Azure AD Premium P2 or Microsoft Entra ID Governance license is present in your directory.
+1. On the **Access packages** page, open an access package.
+
+1. When opening the access package, if you see **Access denied**, ensure that a Microsoft Azure AD Premium P2 or Microsoft Entra ID Governance license is present in your directory.
1. Select **New access package**.
An *access package* is a bundle of resources that a team or project needs and is
:::image type="content" source="./media/entitlement-management-access-package-first/resource-roles.png" alt-text="Screenshot that shows how to select the member role." lightbox="./media/entitlement-management-access-package-first/resource-roles.png":::

   > [!IMPORTANT]
- >The [role-assignable groups](../roles/groups-concept.md) added to an access package will be indicated using the Sub Type **Assignable to roles**. For more information, check out the [Create a role-assignable group](../roles/groups-create-eligible.md) article. Keep in mind that once a role-assignable group is present in an access package catalog, administrative users who are able to manage in entitlement management, including global administrators, user administrators and catalog owners of the catalog, will be able to control the access packages in the catalog, allowing them to choose who can be added to those groups. If you don't see a role-assignable group that you want to add or you are unable to add it, make sure you have the required Azure AD role and entitlement management role to perform this operation. You might need to ask someone with the required roles add the resource to your catalog. For more information, see [Required roles to add resources to a catalog](entitlement-management-delegate.md#required-roles-to-add-resources-to-a-catalog).
+ >The [role-assignable groups](../roles/groups-concept.md) added to an access package will be indicated using the Sub Type **Assignable to roles**. For more information, check out the [Create a role-assignable group](../roles/groups-create-eligible.md) article. Keep in mind that once a role-assignable group is present in an access package catalog, administrative users who can manage in entitlement management, including Global Administrators, Identity Governance Administrators, and catalog owners of the catalog, will be able to control the access packages in the catalog, allowing them to choose who can be added to those groups. If you don't see a role-assignable group that you want to add, or you're unable to add it, make sure you have the required Azure AD role and entitlement management role to perform this operation. You might need to ask someone with the required roles to add the resource to your catalog. For more information, see [Required roles to add resources to a catalog](entitlement-management-delegate.md#required-roles-to-add-resources-to-a-catalog).
> [!NOTE]
> When using [dynamic groups](../enterprise-users/groups-create-rule.md), you will not see any other roles available besides owner. This is by design.
In this step, you perform the steps as the **internal requestor** and request ac
**Prerequisite role:** Internal requestor
-1. Sign out of the Azure portal.
+1. Sign out of the Microsoft Entra admin center.
1. In a new browser window, navigate to the My Access portal link you copied in the previous step.
In this step, you perform the steps as the **internal requestor** and request ac
In this step, you confirm that the **internal requestor** was assigned the access package and that they're now a member of the **Marketing resources** group.
-**Prerequisite role:** Global administrator, User administrator, Catalog owner, or Access package manager
+**Prerequisite role:** Global Administrator, Identity Governance Administrator, Catalog owner, or Access package manager
1. Sign out of the My Access portal.
-1. Sign in to the [Azure portal](https://portal.azure.com) as **Admin1**.
-
-1. Select **Azure Active Directory** and then select **Identity Governance**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as **Admin1**.
-1. In the left menu, select **Access packages**.
+1. Browse to **Identity governance** > **Entitlement management** > **Access packages**.
1. Find and select **Marketing Campaign** access package.
In this step, you confirm that the **internal requestor** was assigned the acces
:::image type="content" source="./media/entitlement-management-access-package-first/request-details.png" alt-text="Screenshot of the access package request details." lightbox="./media/entitlement-management-access-package-first/request-details.png":::
-1. In the left navigation, select **Azure Active Directory**.
+1. In the left navigation, select **Identity**.
1. Select **Groups** and open the **Marketing resources** group.
In this step, you confirm that the **internal requestor** was assigned the acces
In this step, you remove the changes you made and delete the **Marketing Campaign** access package.
-**Prerequisite role:** Global administrator or User administrator
+**Prerequisite role:** Global Administrator or Identity Governance Administrator
-1. In the Azure portal, select **Azure Active Directory** and then select **Identity Governance**.
+1. In the Microsoft Entra admin center, browse to **Identity Governance**.
1. Open the **Marketing Campaign** access package.
In this step, you remove the changes you made and delete the **Marketing Campaig
1. For **Marketing Campaign**, select the ellipsis (**...**) and then select **Delete**. In the message that appears, select **Yes**.
-1. In Azure Active Directory, delete any users you created such as **Requestor1** and **Admin1**.
+1. In **Identity**, delete any users you created such as **Requestor1** and **Admin1**.
1. Delete the **Marketing resources** group.
active-directory Entitlement Management Access Package Incompatible https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-package-incompatible.md
To use entitlement management and assign users to access packages, you must have
[!INCLUDE [portal updates](~/articles/active-directory/includes/portal-update.md)]
-**Prerequisite role**: Global administrator, Identity Governance administrator, User administrator, Catalog owner or Access package manager
+**Prerequisite role**: Global Administrator, Identity Governance Administrator, Catalog owner or Access package manager
Follow these steps to change the list of incompatible groups or other access packages for an existing access package:
-1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Identity Governance Administrator](../roles/permissions-reference.md#identity-governance-administrator).
-1. Select **Azure Active Directory**, and then select **Identity Governance**.
+1. Browse to **Identity governance** > **Entitlement management** > **Access packages**.
-1. In the left menu, select **Access packages** and then open the access package which users will request.
+1. On the **Access packages** page, open the access package that users will request.
1. In the left menu, select **Separation of duties**.
New-MgEntitlementManagementAccessPackageIncompatibleAccessPackageByRef -AccessPa
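The call above is truncated in this excerpt. As a hedged sketch, adding an incompatible access package by reference with the standard `$ref` body (both IDs are illustrative placeholders) might look like:

```powershell
# Sketch only: mark another access package as incompatible with this one.
$apid = "cdd5f06b-752a-4c9f-97a6-82f4eda6c76d"              # this access package
$incompatibleApId = "729400bd-0000-0000-0000-000000000000"  # placeholder ID

$refParams = @{
    "@odata.id" = "https://graph.microsoft.com/v1.0/identityGovernance/entitlementManagement/accessPackages/$incompatibleApId"
}
New-MgEntitlementManagementAccessPackageIncompatibleAccessPackageByRef `
    -AccessPackageId $apid -BodyParameter $refParams
```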
## View other access packages that are configured as incompatible with this one
-**Prerequisite role**: Global administrator, Identity Governance administrator, User administrator, Catalog owner or Access package manager
+**Prerequisite role**: Global Administrator, Identity Governance Administrator, Catalog owner or Access package manager
Follow these steps to view the list of other access packages that have indicated that they're incompatible with an existing access package:
-1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Identity Governance Administrator](../roles/permissions-reference.md#identity-governance-administrator).
-1. Select **Azure Active Directory**, and then select **Identity Governance**.
+1. Browse to **Identity governance** > **Entitlement management** > **Access packages**.
-1. In the left menu, select **Access packages** and then open the access package.
+1. On the **Access packages** page, open the access package.
1. In the left menu, select **Separation of duties**.
Follow these steps to view the list of other access packages that have indicated
If you've configured incompatible access settings on an access package that already has users assigned to it, then you can download a list of those users who have that additional access. Those users who also have an assignment to the incompatible access package won't be able to re-request access.
-**Prerequisite role**: Global administrator, Identity Governance administrator, User administrator, Catalog owner or Access package manager
+**Prerequisite role**: Global Administrator, Identity Governance Administrator, Catalog owner or Access package manager
Follow these steps to view the list of users who have assignments to two access packages.
-1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Identity Governance Administrator](../roles/permissions-reference.md#identity-governance-administrator).
-1. Select **Azure Active Directory**, and then select **Identity Governance**.
+1. Browse to **Identity governance** > **Entitlement management** > **Access packages**.
-1. In the left menu, select **Access packages** and then open the access package where you've configured another access package as incompatible.
+1. On the **Access packages** page, open the access package where you've configured another access package as incompatible.
1. In the left menu, select **Separation of duties**.
Follow these steps to view the list of users who have assignments to two access
If you're configuring incompatible access settings on an access package that already has users assigned to it, then any of those users who also have an assignment to the incompatible access package or groups won't be able to re-request access.
-**Prerequisite role**: Global administrator, Identity Governance administrator, User administrator, Catalog owner or Access package manager
+**Prerequisite role**: Global Administrator, Identity Governance Administrator, Catalog owner or Access package manager
Follow these steps to view the list of users who have assignments to two access packages.
-1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Identity Governance Administrator](../roles/permissions-reference.md#identity-governance-administrator).
-1. Select **Azure Active Directory**, and then select **Identity Governance**.
+1. Browse to **Identity governance** > **Entitlement management** > **Access packages**.
-1. In the left menu, select **Access packages** and then open the access package where you'll be configuring incompatible assignments.
+1. Open the access package where you'll be configuring incompatible assignments.
1. In the left menu, select **Assignments**.
-1. In the **Status** field, ensure that **Delivered** status is selected.
+1. In the **Status** field, ensure that **Delivered** status is selected.
-1. Select the **Download** button and save the resulting CSV file as the first file with a list of assignments.
+1. Select the **Download** button and save the resulting CSV file as the first file with a list of assignments.
-1. In the navigation bar, select **Identity Governance**.
+1. In the navigation bar, select **Identity Governance**.
1. In the left menu, select **Access packages** and then open the access package that you plan to indicate as incompatible.

1. In the left menu, select **Assignments**.
-1. In the **Status** field, ensure that the **Delivered** status is selected.
+1. In the **Status** field, ensure that the **Delivered** status is selected.
-1. Select the **Download** button and save the resulting CSV file as the second file with a list of assignments.
+1. Select the **Download** button and save the resulting CSV file as the second file with a list of assignments.
-1. Use a spreadsheet program such as Excel to open the two files.
+1. Use a spreadsheet program such as Excel to open the two files.
-1. Users who are listed in both files will have already-existing incompatible assignments.
+1. Users who are listed in both files will have already-existing incompatible assignments.
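If you'd rather intersect the two exports in PowerShell than in a spreadsheet, a minimal sketch follows; it assumes both CSVs share a "User principal name" column, which depends on the portal's actual export format.

```powershell
# Sketch only: users present in both assignment exports already hold
# incompatible access. The column name is an assumption about the CSV header.
$first  = Import-Csv -Path ".\package-a-assignments.csv"
$second = Import-Csv -Path ".\package-b-assignments.csv"

Compare-Object -ReferenceObject $first -DifferenceObject $second `
    -Property "User principal name" -IncludeEqual -ExcludeDifferent |
    Select-Object -ExpandProperty "User principal name"
```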
### Identifying users who already have incompatible access programmatically
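The programmatic steps don't survive in this excerpt. As a hedged sketch with the v1.0 cmdlets, you could list delivered assignments for both access packages and intersect the targets; the IDs are placeholders and the filter syntax is an assumption based on the v1.0 assignments API.

```powershell
# Sketch only: find users with delivered assignments to both access packages.
$apA = "cdd5f06b-752a-4c9f-97a6-82f4eda6c76d"   # placeholder IDs
$apB = "729400bd-0000-0000-0000-000000000000"

$aAssign = Get-MgEntitlementManagementAssignment `
    -Filter "accessPackage/id eq '$apA' and state eq 'delivered'" `
    -ExpandProperty target -All
$bAssign = Get-MgEntitlementManagementAssignment `
    -Filter "accessPackage/id eq '$apB' and state eq 'delivered'" `
    -ExpandProperty target -All

# Intersect on the target object ID
$bIds = $bAssign.Target.ObjectId
$aAssign | Where-Object { $bIds -contains $_.Target.ObjectId } |
    ForEach-Object { $_.Target.DisplayName }
```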
active-directory Entitlement Management Access Package Lifecycle Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-package-lifecycle-policy.md
To ensure users have the right access to an access package, custom questions can
## Open lifecycle settings
+
To change the lifecycle settings for an access package, you need to open the corresponding policy. Follow these steps to open the lifecycle settings for an access package.
-**Prerequisite role:** Global administrator, Identity Governance administrator, User administrator, Catalog owner, or Access package manager
+**Prerequisite role:** Global Administrator, Identity Governance Administrator, Catalog owner, or Access package manager
+
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Identity Governance Administrator](../roles/permissions-reference.md#identity-governance-administrator).
-1. In the Azure portal, click **Azure Active Directory** and then click **Identity Governance**.
+1. Browse to **Identity governance** > **Entitlement management** > **Access packages**.
-1. In the left menu, click **Access packages** and then open the access package.
+1. On the **Access packages** page, open the access package that you want to edit.
-1. Click **Policies** and then click the policy that has the lifecycle settings you want to edit.
+1. Select **Policies** and then select the policy that has the lifecycle settings you want to edit.
The Policy details pane opens at the bottom of the page.

![Access package - Policy details pane](./media/entitlement-management-shared/policy-details.png)
-1. Click **Edit** to edit the policy.
+1. Select **Edit** to edit the policy.
![Access package - Edit policy](./media/entitlement-management-shared/policy-edit.png)
-1. Click the **Lifecycle** tab to open the lifecycle settings.
+1. Select the **Lifecycle** tab to open the lifecycle settings.
[!INCLUDE [Entitlement management lifecycle policy](../../../includes/active-directory-entitlement-management-lifecycle-policy.md)]
active-directory Entitlement Management Access Package Manage Lifecycle https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-package-manage-lifecycle.md
Entitlement management allows you to gain visibility into the state of a guest u
- **Blank** - The lifecycle for the guest user isn't determined. This happens when the guest user had an access package assigned before managing user lifecycle was possible.

> [!NOTE]
-> When a guest user is set as **Governed**, based on ELM tenant settings their account will be deleted or disabled in specified days after their last access package assignment expires. Learn more about ELM settings here: [Manage external access with Azure Active Directory entitlement management](../architecture/6-secure-access-entitlement-managment.md).
+> When a guest user is set as **Governed**, based on ELM tenant settings their account will be deleted or disabled in specified days after their last access package assignment expires. Learn more about ELM settings here: [Manage external access with Microsoft Entra entitlement management](../architecture/6-secure-access-entitlement-managment.md).
You can directly convert ungoverned users to be governed by using the **Mark Guests as Governed (preview)** functionality in the top menu bar.

## Manage guest user lifecycle in the Azure portal
+
To manage user lifecycle, you'd follow these steps:
-**Prerequisite role:** Global administrator, User administrator, Catalog owner, Access package manager or Access package assignment manager
+**Prerequisite role:** Global Administrator, Identity Governance Administrator, Catalog owner, Access package manager or Access package assignment manager
+
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Identity Governance Administrator](../roles/permissions-reference.md#identity-governance-administrator).
-1. In the Azure portal, select **Azure Active Directory** and then select **Identity Governance**.
+1. Browse to **Identity governance** > **Entitlement management** > **Access packages**.
-1. In the left menu, select **Access packages** and then open the access package.
+1. On the **Access packages** page, open the access package whose guest user lifecycle you want to manage.
1. In the left menu, select **Assignments**.
active-directory Entitlement Management Access Package Request Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-package-request-policy.md
For information about the priority logic that is used when multiple policies app
## Open an existing access package and add a new policy with different request settings
+
If you have a set of users that should have different request and approval settings, you'll likely need to create a new policy. Follow these steps to start adding a new policy to an existing access package:
-**Prerequisite role:** Global administrator, Identity Governance administrator, User administrator, Catalog owner, or Access package manager
+**Prerequisite role:** Global Administrator, Identity Governance Administrator, Catalog owner, or Access package manager
+
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Identity Governance Administrator](../roles/permissions-reference.md#identity-governance-administrator).
-1. In the Azure portal, click **Azure Active Directory** and then click **Identity Governance**.
+1. Browse to **Identity governance** > **Entitlement management** > **Access packages**.
-1. In the left menu, click **Access packages** and then open the access package.
+1. On the **Access packages** page, open the access package you want to edit.
-1. Click **Policies** and then **Add policy**.
+1. Select **Policies** and then **Add policy**.
1. You will start on the **Basics** tab. Type a name and a description for the policy.

   ![Create policy with name and description](./media/entitlement-management-access-package-request-policy/policy-name-description.png)
-1. Click **Next** to open the **Requests** tab.
+1. Select **Next** to open the **Requests** tab.
1. Change the **Users who can request access** setting. Use the steps in the following sections to change the setting to one of the following options:

   - [For users in your directory](#for-users-in-your-directory)
Follow these steps if you want to allow users not in your directory to request t
![Access package - Requests - For users not in your directory](./media/entitlement-management-access-package-request-policy/for-users-not-in-your-directory.png)
-1. Select one of the following options:
+1. Select whether the users who can request access must be affiliated with an existing connected organization or can be anyone on the internet. A connected organization is one that you have a pre-existing relationship with; it might have an external Azure AD directory or another identity provider. Select one of the following options:
   | | Description |
   | --- | --- |
   | **Specific connected organizations** | Choose this option if you want to select from a list of organizations that your administrator previously added. All users from the selected organizations can request this access package. |
- | **All configured connected organizations** | Choose this option if all users from all your configured connected organizations can request this access package. Only users from configured connected organizations can request access packages that are shown to users from all configured organizations. |
+ | **All configured connected organizations** | Choose this option if all users from all your configured connected organizations can request this access package. Only users from configured connected organizations can request access packages, so if a user isn't from an Azure AD tenant, domain, or identity provider associated with an existing connected organization, they won't be able to request access. |
| **All users (All connected organizations + any new external users)** | Choose this option if any user on the internet should be able to request this access package. If they don't belong to a connected organization in your directory, a connected organization will automatically be created for them when they request the package. The automatically created connected organization will be in a **proposed** state. For more information about the proposed state, see [State property of connected organizations](entitlement-management-organization.md#state-property-of-connected-organizations). |
- A connected organization is an external Azure AD directory or domain that you have a relationship with.
1. If you selected **Specific connected organizations**, click **Add directories** to select from a list of connected organizations that your administrator previously added.
Follow these steps if you want to allow users not in your directory to request t
> [!NOTE]
> All users from the selected connected organizations can request this access package. For a connected organization that has an Azure AD directory, users from all verified domains associated with the Azure AD directory can request, unless those domains are blocked by the Azure B2B allow or deny list. For more information, see [Allow or block invitations to B2B users from specific organizations](../external-identities/allow-deny-list.md).
-1. If you want to require approval, use the steps in [Change approval settings for an access package in entitlement management](entitlement-management-access-package-approval-policy.md) to configure approval settings.
+1. Next, use the steps in [Change approval settings for an access package in entitlement management](entitlement-management-access-package-approval-policy.md) to configure approval settings to specify who should approve requests from users not in your organization.
1. Go to the [Enable requests](#enable-requests) section.
Follow these steps if you want to allow users not in your directory to request t
Follow these steps if you want to bypass access requests and allow administrators to directly assign specific users to this access package. Users won't have to request the access package. You can still set lifecycle settings, but there are no request settings.
-1. In the **Users who can request access** section, click **None (administrator direct assignments only**.
+1. In the **Users who can request access** section, click **None (administrator direct assignments only)**.
![Access package - Requests - None administrator direct assignments only](./media/entitlement-management-access-package-request-policy/none-admin-direct-assignments-only.png)
Follow these steps if you want to bypass access requests and allow administrator
To change the request and approval settings for an access package, you need to open the corresponding policy with those settings. Follow these steps to open and edit the request settings for an access package assignment policy:
-**Prerequisite role:** Global administrator, User administrator, Catalog owner, or Access package manager
+**Prerequisite role:** Global Administrator, Identity Governance Administrator, Catalog owner, or Access package manager
+
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Identity Governance Administrator](../roles/permissions-reference.md#identity-governance-administrator).
-1. In the Azure portal, click **Azure Active Directory** and then click **Identity Governance**.
+1. Browse to **Identity governance** > **Entitlement management** > **Access packages**.
-1. In the left menu, click **Access packages** and then open the access package.
+1. On the **Access packages** page, open the access package whose policy request settings you want to edit.
-1. Click **Policies** and then click the policy you want to edit.
+1. Select **Policies** and then select the policy you want to edit.
The Policy details pane opens at the bottom of the page.

![Access package - Policy details pane](./media/entitlement-management-shared/policy-details.png)
-1. Click **Edit** to edit the policy.
+1. Select **Edit** to edit the policy.
![Access package - Edit policy](./media/entitlement-management-shared/policy-edit.png)
-1. Click the **Requests** tab to open the request settings.
+1. Select the **Requests** tab to open the request settings.
1. Use the steps in the previous sections to change the request settings as needed.
To change the request and approval settings for an access package, you need to o
![Access package - Policy- Enable policy setting](./media/entitlement-management-access-package-approval-policy/enable-requests.png)
-1. Click **Next**.
+1. Select **Next**.
1. If you want to require requestors to provide additional information when requesting access to an access package, use the steps in [Change approval and requestor information settings for an access package in entitlement management](entitlement-management-access-package-approval-policy.md#collect-additional-requestor-information-for-approval) to configure requestor information.

1. Configure lifecycle settings.
-1. If you are editing a policy click **Update**. If you are adding a new policy, click **Create**.
+1. If you are editing a policy, select **Update**. If you are adding a new policy, select **Create**.
## Create an access package assignment policy programmatically
You can create a policy using Microsoft Graph. A user in an appropriate role wit
### Create an access package assignment policy through PowerShell
-You can also create an access package in PowerShell with the cmdlets from the [Microsoft Graph PowerShell cmdlets for Identity Governance](https://www.powershellgallery.com/packages/Microsoft.Graph.Identity.Governance/) module version 1.16.0 or a later 1.x.x module version, or Microsoft Graph PowerShell cmdlets beta module version 2.1.x or later beta module version. This script illustrates using the Graph `beta` profile and Microsoft Graph PowerShell cmdlets module version 1.x.x.
+You can also create an access package assignment policy in PowerShell with the cmdlets from the [Microsoft Graph PowerShell cmdlets for Identity Governance](https://www.powershellgallery.com/packages/Microsoft.Graph.Identity.Governance/) module version 2.1.x or later.
-This script below illustrates using the `beta` profile, to create a policy for direct assignment to an access package. In this policy, only the administrator can assign access, and there are no access reviews. See [Create an automatic assignment policy](entitlement-management-access-package-auto-assignment-policy.md#create-an-access-package-assignment-policy-through-powershell) for an example of how to create an automatic assignment policy, and [create an accessPackageAssignmentPolicy](/graph/api/entitlementmanagement-post-assignmentpolicies?tabs=http&view=graph-rest-beta&preserve-view=true) for more examples.
+This script below illustrates creating a policy for direct assignment to an access package. In this policy, only the administrator can assign access, and there are no approvals or access reviews. See [Create an automatic assignment policy](entitlement-management-access-package-auto-assignment-policy.md#create-an-access-package-assignment-policy-through-powershell) for an example of how to create an automatic assignment policy, and [create an accessPackageAssignmentPolicy](/graph/api/entitlementmanagement-post-assignmentpolicies?tabs=http&view=graph-rest-v1.0&preserve-view=true) for more examples.
```powershell
Connect-MgGraph -Scopes "EntitlementManagement.ReadWrite.All"
-Select-MgProfile -Name "beta"
$apid = "cdd5f06b-752a-4c9f-97a6-82f4eda6c76d"
-$pparams = @{
- AccessPackageId = $apid
- DisplayName = "direct"
- Description = "direct assignments by administrator"
- AccessReviewSettings = $null
- RequestorSettings = @{
- ScopeType = "NoSubjects"
- AcceptRequests = $true
- AllowedRequestors = @(
- )
- }
- RequestApprovalSettings = @{
- IsApprovalRequired = $false
- IsApprovalRequiredForExtension = $false
- IsRequestorJustificationRequired = $false
- ApprovalMode = "NoApproval"
- ApprovalStages = @(
- )
- }
+$params = @{
+ displayName = "New Policy"
+ description = "policy for assignment"
+ allowedTargetScope = "notSpecified"
+ specificAllowedTargets = @(
+ )
+ expiration = @{
+ endDateTime = $null
+ duration = $null
+ type = "noExpiration"
+ }
+ requestorSettings = @{
+ enableTargetsToSelfAddAccess = $false
+ enableTargetsToSelfUpdateAccess = $false
+ enableTargetsToSelfRemoveAccess = $false
+ allowCustomAssignmentSchedule = $true
+ enableOnBehalfRequestorsToAddAccess = $false
+ enableOnBehalfRequestorsToUpdateAccess = $false
+ enableOnBehalfRequestorsToRemoveAccess = $false
+ onBehalfRequestors = @(
+ )
+ }
+ requestApprovalSettings = @{
+ isApprovalRequiredForAdd = $false
+ isApprovalRequiredForUpdate = $false
+ stages = @(
+ )
+ }
+ accessPackage = @{
+ id = $apid
+ }
}
-New-MgEntitlementManagementAccessPackageAssignmentPolicy -BodyParameter $pparams
+
+New-MgEntitlementManagementAssignmentPolicy -BodyParameter $params
```

## Prevent requests from users with incompatible access
active-directory Entitlement Management Access Package Requests https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-package-requests.md
In entitlement management, you can see who has requested access packages, the po
## View requests
-**Prerequisite role:** Global administrator, Identity Governance administrator, User administrator, Catalog owner, Access package manager or Access package assignment manager
-1. In the Azure portal, click **Azure Active Directory** and then click **Identity Governance**.
+**Prerequisite role:** Global Administrator, Identity Governance Administrator, Catalog owner, Access package manager or Access package assignment manager
-1. In the left menu, click **Access packages** and then open the access package.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Identity Governance Administrator](../roles/permissions-reference.md#identity-governance-administrator).
-1. Click **Requests**.
+1. Browse to **Identity governance** > **Entitlement management** > **Access packages**.
-1. Click a specific request to see additional details.
+1. On the **Access packages** page, open the access package whose requests you want to view.
+
+1. Select **Requests**.
+
+1. Select a specific request to see additional details.
![List of requests for an access package](./media/entitlement-management-access-package-requests/requests-list.png)
You can also retrieve requests for an access package using Microsoft Graph. A u
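As a hedged sketch of that (assuming the v1.0 assignment requests API and a delegated `EntitlementManagement.Read.All` permission; the access package ID is a placeholder):

```powershell
# Sketch only: list requests for one access package, expanding the requestor.
Connect-MgGraph -Scopes "EntitlementManagement.Read.All"

$apid = "cdd5f06b-752a-4c9f-97a6-82f4eda6c76d"   # placeholder ID
Get-MgEntitlementManagementAssignmentRequest `
    -Filter "accessPackage/id eq '$apid'" -ExpandProperty requestor -All |
    Select-Object Id, RequestType, State, CreatedDateTime
```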
You can also remove a completed request that is no longer needed. To remove a request:
-1. In the Azure portal, click **Azure Active Directory** and then click **Identity Governance**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Identity Governance Administrator](../roles/permissions-reference.md#identity-governance-administrator).
+
+1. Browse to **Identity governance** > **Entitlement management** > **Access packages**.
-1. In the left menu, click **Access packages** and then open the access package.
+1. On the **Access packages** page, open the access package that you want to remove requests from.
-1. Click **Requests**.
+1. Select **Requests**.
1. Find the request you want to remove from the access package.
active-directory Entitlement Management Access Package Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-package-resources.md
This video provides an overview of how to change an access package.
## Check catalog for resources
+
If you need to add resources to an access package, you should check whether the resources you need are available in the access package's catalog. If you're an access package manager, you can't add resources to a catalog, even if you own them. You're restricted to using the resources available in the catalog.
-**Prerequisite role:** Global administrator, Identity Governance administrator, User administrator, Catalog owner, or Access package manager
+**Prerequisite role:** Global Administrator, Identity Governance Administrator, Catalog owner, or Access package manager
+
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Identity Governance Administrator](../roles/permissions-reference.md#identity-governance-administrator).
+
+1. Browse to **Identity governance** > **Entitlement management** > **Access packages**.
-1. In the Azure portal, select **Azure Active Directory** and then select **Identity Governance**.
+1. On the **Access packages** page, open the access package whose catalog you want to check for resources.
1. In the left menu, select **Catalog** and then open the catalog.
A resource role is a collection of permissions associated with a resource. Reso
If you want some users to receive different roles than others, then you need to create multiple access packages in the catalog, with separate access packages for each of the resource roles. You can also mark the access packages as [incompatible](entitlement-management-access-package-incompatible.md) with each other so users can't request access to access packages that would give them excessive access.
-**Prerequisite role:** Global administrator, Identity Governance administrator, User administrator, Catalog owner, or Access package manager
+**Prerequisite role:** Global Administrator, Identity Governance Administrator, Catalog owner, or Access package manager
-1. In the Azure portal, select **Azure Active Directory** and then select **Identity Governance**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Identity Governance Administrator](../roles/permissions-reference.md#identity-governance-administrator).
-1. In the left menu, select **Access packages** and then open the access package.
+1. Browse to **Identity governance** > **Entitlement management** > **Access packages**.
+
+1. On the **Access packages** page, open the access package you want to add resource roles to.
1. In the left menu, select **Resource roles**.
You can add a resource role to an access package using Microsoft Graph. A user i
### Add resource roles to an access package with Microsoft PowerShell
-You can also create an access package in PowerShell with the cmdlets from the [Microsoft Graph PowerShell cmdlets for Identity Governance](https://www.powershellgallery.com/packages/Microsoft.Graph.Identity.Governance/) module version 1.16.0 or a later 1.x.x module version, or Microsoft Graph PowerShell cmdlets beta module version 2.1.x or later beta module version. This script illustrates using the Graph `beta` profile and Microsoft Graph PowerShell cmdlets module version 1.x.x.
+You can also create an access package in PowerShell with the cmdlets from the [Microsoft Graph PowerShell cmdlets for Identity Governance](https://www.powershellgallery.com/packages/Microsoft.Graph.Identity.Governance/) beta module version 2.1.x or later. This script illustrates using the beta cmdlets in Microsoft Graph PowerShell module version 2.4.0.
-First, you would retrieve the ID of the catalog, and of the resources and their roles in that catalog that you wish to include in the access package, using a script similar to the following.
+First, you would retrieve the ID of the catalog, and of the resource and its roles in that catalog that you wish to include in the access package, using a script similar to the following. This assumes there is a single application resource in the catalog.
```powershell
Connect-MgGraph -Scopes "EntitlementManagement.ReadWrite.All"
-Select-MgProfile -Name "beta"
-$catalog = Get-MgEntitlementManagementAccessPackageCatalog -Filter "displayName eq 'Marketing'"
-$rsc = Get-MgEntitlementManagementAccessPackageCatalogAccessPackageResource -AccessPackageCatalogId $catalog.Id -Filter "resourceType eq 'Application'" -ExpandProperty "accessPackageResourceScopes"
-$filt = "(originSystem eq 'AadApplication' and accessPackageResource/id eq '" + $rsc[0].Id + "')"
-$rr = Get-MgEntitlementManagementAccessPackageCatalogAccessPackageResourceRole -AccessPackageCatalogId $catalog.Id -Filter $filt -ExpandProperty "accessPackageResource"
+$catalog = Get-MgBetaEntitlementManagementAccessPackageCatalog -Filter "displayName eq 'Marketing'"
+
+$rsc = Get-MgBetaEntitlementManagementAccessPackageCatalogAccessPackageResource -AccessPackageCatalogId $catalog.Id -Filter "resourceType eq 'Application'" -ExpandProperty "accessPackageResourceScopes"
+$filt = "(originSystem eq 'AadApplication' and accessPackageResource/id eq '" + $rsc.Id + "')"
+$rr = Get-MgBetaEntitlementManagementAccessPackageCatalogAccessPackageResourceRole -AccessPackageCatalogId $catalog.Id -Filter $filt -ExpandProperty "accessPackageResource"
```
-Then, assign the resource roles to the access package. For example, if you wished to include the second resource role of the first resource returned earlier as a resource role of an access package, you would use a script similar to the following.
+Then, assign the resource role from that resource to the access package. For example, if you wished to include the resource role at index 2 (`$rr[2]`) of the resource returned earlier as a resource role of an access package, you would use a script similar to the following.
```powershell
$apid = "cdd5f06b-752a-4c9f-97a6-82f4eda6c76d"
$rparams = @{
  DisplayName = $rr[2].DisplayName
  OriginSystem = $rr[2].OriginSystem
  AccessPackageResource = @{
- Id = $rsc[0].Id
- ResourceType = $rsc[0].ResourceType
- OriginId = $rsc[0].OriginId
- OriginSystem = $rsc[0].OriginSystem
+ Id = $rsc.Id
+ ResourceType = $rsc.ResourceType
+ OriginId = $rsc.OriginId
+ OriginSystem = $rsc.OriginSystem
  }
 }
 AccessPackageResourceScope = @{
- OriginId = $rsc[0].OriginId
- OriginSystem = $rsc[0].OriginSystem
+ OriginId = $rsc.OriginId
+ OriginSystem = $rsc.OriginSystem
 }
}
-New-MgEntitlementManagementAccessPackageResourceRoleScope -AccessPackageId $apid -BodyParameter $rparams
+New-MgBetaEntitlementManagementAccessPackageResourceRoleScope -AccessPackageId $apid -BodyParameter $rparams
```

## Remove resource roles
-**Prerequisite role:** Global administrator, User administrator, Catalog owner, or Access package manager
+**Prerequisite role:** Global Administrator, Identity Governance Administrator, Catalog owner, or Access package manager
+
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Identity Governance Administrator](../roles/permissions-reference.md#identity-governance-administrator).
-1. In the Azure portal, select **Azure Active Directory** and then select **Identity Governance**.
+1. Browse to **Identity governance** > **Entitlement management** > **Access packages**.
-1. In the left menu, select **Access packages** and then open the access package.
+1. On the **Access packages** page, open the access package from which you want to remove resource roles.
1. In the left menu, select **Resource roles**.
active-directory Entitlement Management Access Package Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-package-settings.md
In order for the external user from another directory to use the My Access porta
## Share link to request an access package
-**Prerequisite role:** Global administrator, Identity Governance administrator, User administrator, Catalog owner, or Access package manager
-1. In the Azure portal, select **Azure Active Directory** and then select **Identity Governance**.
+**Prerequisite role:** Global Administrator, Identity Governance Administrator, Catalog owner, or Access package manager
-1. In the left menu, select **Access packages** and then open the access package.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Identity Governance Administrator](../roles/permissions-reference.md#identity-governance-administrator).
+
+1. Browse to **Identity governance** > **Entitlement management** > **Access packages**.
+
+1. On the **Access packages** page, open the access package that you want to share a request link for.
1. On the Overview page, check the **Hidden** setting. If the **Hidden** setting is **Yes**, then even users who do not have the My Access portal link can browse and request the access package. If you do not wish to have them browse for the access package, then change the setting to **No**.
active-directory Entitlement Management Access Reviews Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-reviews-create.md
To reduce the risk of stale access, you should enable periodic reviews of users
To enable reviews of access packages, you must meet the prerequisites for creating an access package:

- Microsoft Azure AD Premium P2 or Microsoft Entra ID Governance
-- Global administrator, Identity Governance administrator, User administrator, Catalog owner, or Access package manager
+- Global Administrator, Identity Governance Administrator, Catalog owner, or Access package manager
For more information, see [License requirements](entitlement-management-overview.md#license-requirements).

## Create an access review of an access package
+
You can enable access reviews when [creating a new access package](entitlement-management-access-package-create.md) or [editing an existing access package assignment policy](entitlement-management-access-package-lifecycle-policy.md). If you have multiple policies for different communities of users to request access, you can have independent access review schedules for each policy. Follow these steps to enable access reviews of an access package's assignments:
-1. In the Azure portal, select **Azure Active Directory** and then select **Identity Governance**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Identity Governance Administrator](../roles/permissions-reference.md#identity-governance-administrator).
++
+1. Browse to **Identity governance** > **Access reviews** > **Access packages**.
-1. To create a new access policy, in the left menu, select **Access packages**, then select **New access** package.
+1. To create a new access policy, select **New access package**.
1. To edit an existing access policy, in the left menu, select **Access packages** and open the access package you want to edit. Then, in the left menu, select **Policies** and select the policy that has the lifecycle settings you want to edit.
active-directory Entitlement Management Access Reviews Review Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-reviews-review-access.md
Entitlement management simplifies how enterprises manage access to groups, appli
To review users' active access package assignments, the creator of a review must satisfy these prerequisites:
- Microsoft Azure AD Premium P2 or Microsoft Entra ID Governance
-- Global administrator, Identity Governance administrator, or User administrator
+- Global administrator or Identity Governance administrator
For more information, see [License requirements](entitlement-management-overview.md#license-requirements).
active-directory Entitlement Management Catalog Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-catalog-create.md
This article shows you how to create and manage a catalog of resources and acces
A catalog is a container of resources and access packages. You create a catalog when you want to group related resources and access packages. An administrator can create a catalog. In addition, a user who has been delegated the [catalog creator](entitlement-management-delegate.md) role can create a catalog for resources that they own. A nonadministrator who creates the catalog becomes the first catalog owner. A catalog owner can add more users, groups of users, or application service principals as catalog owners.
-**Prerequisite roles:** Global administrator, Identity Governance administrator, User administrator, or Catalog creator
+**Prerequisite roles:** Global Administrator, Identity Governance Administrator, or Catalog creator
> [!NOTE]
-> Users who were assigned the User administrator role will no longer be able to create catalogs or manage access packages in a catalog they don't own. If users in your organization were assigned the User administrator role to configure catalogs, access packages, or policies in entitlement management, you should instead assign these users the Identity Governance administrator role.
+> Users who were assigned the User Administrator role will no longer be able to create catalogs or manage access packages in a catalog they don't own. If users in your organization were assigned the User Administrator role to configure catalogs, access packages, or policies in entitlement management, you should instead assign these users the Identity Governance Administrator role.
To create a catalog:
-1. In the Azure portal, select **Azure Active Directory** > **Identity Governance**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Identity Governance Administrator](../roles/permissions-reference.md#identity-governance-administrator).
-1. On the left menu, select **Catalogs**.
+1. Browse to **Identity governance** > **Entitlement management** > **Catalogs**.
- ![Screenshot that shows entitlement management catalogs in the Azure portal.](./media/entitlement-management-catalog-create/catalogs.png)
+ ![Screenshot that shows entitlement management catalogs in the Entra admin center.](./media/entitlement-management-catalog-create/catalogs.png)
1. Select **New catalog**.
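If you'd rather script catalog creation than click through the portal, a minimal sketch with Microsoft Graph PowerShell might look like the following. The cmdlet name follows the same `MgEntitlementManagement` naming pattern used later in this article (module version 2.x assumed), and the catalog name is hypothetical.

```powershell
# Minimal sketch (module 2.x assumed): create a catalog
Connect-MgGraph -Scopes "EntitlementManagement.ReadWrite.All"

New-MgEntitlementManagementCatalog -DisplayName "Marketing" `
    -Description "Resources and access packages for the Marketing team"
```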
To include resources in an access package, the resources must exist in a catalog
To add resources to a catalog:
-1. In the Azure portal, select **Azure Active Directory** > **Identity Governance**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Identity Governance Administrator](../roles/permissions-reference.md#identity-governance-administrator).
-1. On the left menu, select **Catalogs** and then open the catalog you want to add resources to.
+1. Browse to **Identity governance** > **Entitlement management** > **Catalogs**.
+
+1. On the **Catalogs** page, open the catalog you want to add resources to.
1. On the left menu, select **Resources**.
You can also add a resource to a catalog by using Microsoft Graph. A user in an
### Add a resource to a catalog with PowerShell
-You can also add a resource to a catalog in PowerShell with the `New-MgEntitlementManagementAccessPackageResourceRequest` cmdlet from the [Microsoft Graph PowerShell cmdlets for Identity Governance](https://www.powershellgallery.com/packages/Microsoft.Graph.Identity.Governance/) module version 1.6.0 or a later 1.x.x module version, or Microsoft Graph PowerShell cmdlets beta module version 2.1.x or later beta module version. The following example shows how to add a group to a catalog as a resource using Microsoft Graph beta and Microsoft Graph PowerShell cmdlets module version 1.x.x.
+You can also add a resource to a catalog in PowerShell with the `New-MgEntitlementManagementResourceRequest` cmdlet from the [Microsoft Graph PowerShell cmdlets for Identity Governance](https://www.powershellgallery.com/packages/Microsoft.Graph.Identity.Governance/) module, version 2.1.x or later. The following example shows how to add a group to a catalog as a resource by using module version 2.4.0.
```powershell
Connect-MgGraph -Scopes "EntitlementManagement.ReadWrite.All,Group.ReadWrite.All"
-Select-MgProfile -Name "beta"
+ $g = Get-MgGroup -Filter "displayName eq 'Marketing'"
-Import-Module Microsoft.Graph.Identity.Governance
-$catalog = Get-MgEntitlementManagementAccessPackageCatalog -Filter "displayName eq 'Marketing'"
-$nr = New-Object Microsoft.Graph.PowerShell.Models.MicrosoftGraphAccessPackageResource
-$nr.OriginId = $g.Id
-$nr.OriginSystem = "AadGroup"
-$rr = New-MgEntitlementManagementAccessPackageResourceRequest -CatalogId $catalog.Id -AccessPackageResource $nr
-$ar = Get-MgEntitlementManagementAccessPackageCatalog -AccessPackageCatalogId $catalog.Id -ExpandProperty accessPackageResources
-$ar.AccessPackageResources
+
+$catalog = Get-MgEntitlementManagementCatalog -Filter "displayName eq 'Marketing'"
+$params = @{
+ requestType = "adminAdd"
+ resource = @{
+ originId = $g.Id
+ originSystem = "AadGroup"
+ }
+ catalog = @{ id = $catalog.id }
+}
+
+New-MgEntitlementManagementResourceRequest -BodyParameter $params
+Start-Sleep -Seconds 5
+$ar = Get-MgEntitlementManagementCatalog -AccessPackageCatalogId $catalog.Id -ExpandProperty resources
+$ar.resources
```

## Remove resources from a catalog
You can remove resources from a catalog. A resource can be removed from a catalo
To remove resources from a catalog:
-1. In the Azure portal, select **Azure Active Directory** > **Identity Governance**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Identity Governance Administrator](../roles/permissions-reference.md#identity-governance-administrator).
+
+1. Browse to **Identity governance** > **Entitlement management** > **Catalogs**.
-1. On the left menu, select **Catalogs** and then open the catalog you want to remove resources from.
+1. On the **Catalogs** page open the catalog you want to remove resources from.
1. On the left menu, select **Resources**.
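Removal can also be scripted through a resource request, mirroring the `adminAdd` example earlier in this article. This is a minimal sketch under stated assumptions: the `Get-MgEntitlementManagementCatalogResource` cmdlet name and the `adminRemove` request type are assumed from the shape of the Graph entitlement management API, not taken from this article.

```powershell
# Minimal sketch (cmdlet names and adminRemove body shape assumed): remove a group resource from a catalog
Connect-MgGraph -Scopes "EntitlementManagement.ReadWrite.All"

$catalog  = Get-MgEntitlementManagementCatalog -Filter "displayName eq 'Marketing'"
# Find the previously added group resource in the catalog
$resource = Get-MgEntitlementManagementCatalogResource -AccessPackageCatalogId $catalog.Id |
    Where-Object { $_.OriginSystem -eq "AadGroup" -and $_.DisplayName -eq "Marketing" }

$params = @{
    requestType = "adminRemove"
    resource    = @{ id = $resource.Id }
    catalog     = @{ id = $catalog.Id }
}
New-MgEntitlementManagementResourceRequest -BodyParameter $params
```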
To remove resources from a catalog:
## Add more catalog owners
+
The user who created a catalog becomes the first catalog owner. To delegate management of a catalog, add users to the catalog owner role. Adding more catalog owners helps to share the catalog management responsibilities.
-**Prerequisite roles:** Global administrator, Identity Governance administrator, User administrator, or Catalog owner
+**Prerequisite roles:** Global Administrator, Identity Governance Administrator, or Catalog owner
To assign a user to the catalog owner role:
-1. In the Azure portal, select **Azure Active Directory** > **Identity Governance**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Identity Governance Administrator](../roles/permissions-reference.md#identity-governance-administrator).
+
+1. Browse to **Identity governance** > **Entitlement management** > **Catalogs**.
-1. On the left menu, select **Catalogs** and then open the catalog you want to add administrators to.
+1. On the **Catalogs** page, open the catalog you want to add administrators to.
1. On the left menu, select **Roles and administrators**.
To assign a user to the catalog owner role:
You can edit the name and description for a catalog. Users see this information in an access package's details.
-**Prerequisite roles:** Global administrator, Identity Governance administrator, User administrator, or Catalog owner
+**Prerequisite roles:** Global Administrator, Identity Governance Administrator, or Catalog owner
To edit a catalog:
-1. In the Azure portal, select **Azure Active Directory** > **Identity Governance**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Identity Governance Administrator](../roles/permissions-reference.md#identity-governance-administrator).
-1. On the left menu, select **Catalogs** and then open the catalog you want to edit.
+1. Browse to **Identity governance** > **Entitlement management** > **Catalogs**.
+
+1. On the **Catalogs** page, open the catalog you want to edit.
1. On the catalog's **Overview** page, select **Edit**.
To edit a catalog:
You can delete a catalog, but only if it doesn't have any access packages.
-**Prerequisite roles:** Global administrator, Identity Governance administrator, User administrator, or Catalog owner
+**Prerequisite roles:** Global Administrator, Identity Governance Administrator, or Catalog owner
To delete a catalog:
-1. In the Azure portal, select **Azure Active Directory** > **Identity Governance**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Identity Governance Administrator](../roles/permissions-reference.md#identity-governance-administrator).
+
+1. Browse to **Identity governance** > **Entitlement management** > **Catalogs**.
-1. On the left menu, select **Catalogs** and then open the catalog you want to delete.
+1. On the **Catalogs** page, open the catalog you want to delete.
1. On the catalog's **Overview** page, select **Delete**.
active-directory Entitlement Management Custom Teams Extension https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-custom-teams-extension.md
In this tutorial, you learn how to:
## Create a Logic App and custom extension in a catalog
+
Prerequisite roles: Global administrator, Identity Governance administrator, or Catalog owner and Resource Group Owner.

To create a Logic App and custom extension in a catalog, follow these steps:
-1. Navigate To Entra portal [Identity Governance - Microsoft Entra admin center](https://entra.microsoft.com/#view/Microsoft_AAD_ERM/DashboardBlade/~/elmEntitlement)
+1. Navigate to the Microsoft Entra admin center: [Identity Governance - Microsoft Entra admin center](https://entra.microsoft.com/#view/Microsoft_AAD_ERM/DashboardBlade/~/elmEntitlement)
1. In the left menu, select **Catalogs**.
active-directory Entitlement Management Delegate Catalog https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-delegate-catalog.md
If you have existing catalogs to delegate, then continue at the [create and mana
## As an IT administrator, delegate to a catalog creator
+
Follow these steps to assign a user to the catalog creator role.
-**Prerequisite role:** Global administrator, Identity Governance administrator or User administrator
+**Prerequisite role:** Global Administrator or Identity Governance Administrator
-1. In the Azure portal, select **Azure Active Directory** and then select **Identity Governance**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Identity Governance Administrator](../roles/permissions-reference.md#identity-governance-administrator).
-1. In the left menu, in the **Entitlement management** section, select **Settings**.
+1. Browse to **Identity governance** > **Entitlement management** > **Settings**.
1. Select **Edit**.
Follow these steps to assign a user to the catalog creator role.
1. Select **Save**.
-## Allow delegated roles to access the Azure portal
+## Allow delegated roles to access the Microsoft Entra admin center
-To allow delegated roles, such as catalog creators and access package managers, to access the Azure portal to manage access packages, you should check the administration portal setting.
+To allow delegated roles, such as catalog creators and access package managers, to access the Microsoft Entra admin center to manage access packages, you should check the administration portal setting.
-**Prerequisite role:** Global administrator or User administrator
+**Prerequisite role:** Global Administrator or Identity Governance Administrator
-1. In the Azure portal, select **Azure Active Directory** and then select **Users**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Identity Governance Administrator](../roles/permissions-reference.md#identity-governance-administrator).
-1. In the left menu, select **User settings**.
+1. Browse to **Identity** > **Users** > **User settings**.
1. Make sure **Restrict access to Azure AD administration portal** is set to **No**.
active-directory Entitlement Management Delegate Managers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-delegate-managers.md
In addition to the catalog owner and access package manager roles, you can also
## As a catalog owner, delegate to an access package manager
+
Follow these steps to assign a user to the access package manager role:
-**Prerequisite role:** Global administrator, Identity Governance administrator, User administrator, or Catalog owner
+**Prerequisite role:** Global Administrator, Identity Governance Administrator, or Catalog owner
+
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Identity Governance Administrator](../roles/permissions-reference.md#identity-governance-administrator).
-1. In the Azure portal, select **Azure Active Directory** and then select **Identity Governance**.
+1. Browse to **Identity governance** > **Entitlement management** > **Catalogs**.
-1. In the left menu, select **Catalogs** and then open the catalog you want to add administrators to.
+1. On the **Catalogs** page, open the catalog you want to add administrators to.
1. In the left menu, select **Roles and administrators**.
Follow these steps to assign a user to the access package manager role:
Follow these steps to remove a user from the access package manager role:
-**Prerequisite role:** Global administrator, User administrator, or Catalog owner
+**Prerequisite role:** Global Administrator, Identity Governance Administrator, or Catalog owner
+
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Identity Governance Administrator](../roles/permissions-reference.md#identity-governance-administrator).
-1. In the Azure portal, select **Azure Active Directory** and then select **Identity Governance**.
+1. Browse to **Identity governance** > **Entitlement management** > **Catalogs**.
-1. In the left menu, select **Catalogs** and then open the catalog you want to add administrators to.
+1. On the **Catalogs** page, open the catalog you want to remove administrators from.
1. In the left menu, select **Roles and administrators**.
active-directory Entitlement Management Delegate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-delegate.md
For a user who isn't a global administrator, to add groups, applications, or Sha
| | :: | :: | :: | :: | :: |
| [Global administrator](../roles/permissions-reference.md) | n/a | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| [Identity Governance administrator](../roles/permissions-reference.md) | n/a | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [User administrator](../roles/permissions-reference.md) | n/a | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
| [Intune administrator](../roles/permissions-reference.md) | Catalog owner | :heavy_check_mark: | :heavy_check_mark: | | |
| [Exchange administrator](../roles/permissions-reference.md) | Catalog owner | | :heavy_check_mark: | | |
| [Teams service administrator](../roles/permissions-reference.md) | Catalog owner | | :heavy_check_mark: | | |
For managing external collaboration, where the individual external users for a c
* To allow users in external directories from connected organizations to be able to request access packages in a catalog, the catalog setting of **Enabled for external users** needs to be set to **Yes**. Changing this setting can be done by an administrator or a catalog owner of the catalog.
* The access package must also have a policy set [for users not in your directory](entitlement-management-access-package-request-policy.md#for-users-not-in-your-directory). This policy can be created by an administrator, catalog owner, or access package manager of the catalog.
-* An access package with that policy will allow users in scope to be able to request access, including users not already in your directory. If their request is approved, or does not require approval, then the user will be automatically be added to your directory.
+* An access package with that policy will allow users in scope to request access, including users not already in your directory. If their request is approved, or does not require approval, then the user will be automatically added to your directory.
* If the policy setting was for **All users**, and the user was not part of an existing connected organization, then a new proposed connected organization is automatically created. You can [view the list of connected organizations](entitlement-management-organization.md#view-the-list-of-connected-organizations) and remove organizations that are no longer needed.

You can also configure what happens when an external user brought in by entitlement management loses their last assignment to any access packages. You can block them from signing in to this directory, or have their guest account removed, in the settings to [manage the lifecycle of external users](entitlement-management-external-users.md#manage-the-lifecycle-of-external-users).
You can prevent users who are not in administrative roles from inviting individu
To prevent delegated employees from configuring entitlement management to let external users request access for external collaboration, be sure to communicate this constraint to all global administrators, identity governance administrators, catalog creators, and catalog owners. Because they are able to change catalogs, they should take care not to inadvertently permit new collaboration in new or updated catalogs. They should ensure that catalogs are set with **Enabled for external users** to **No**, and do not have any access packages with policies for allowing a user not in the directory to request.
-You can view the list of catalogs currently enabled for external users in the Azure portal.
+You can view the list of catalogs currently enabled for external users in the Microsoft Entra admin center.
-1. In the Azure portal, select **Azure Active Directory** > **Identity Governance**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Identity Governance Administrator](../roles/permissions-reference.md#identity-governance-administrator).
-1. On the left menu, select **Catalogs**.
+1. Browse to **Identity governance** > **Entitlement management** > **Catalogs**.
1. Change the filter setting for **Enabled for external users** to **Yes**.
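The same filtered view can be approximated from a script. A minimal sketch, assuming the catalog's `IsExternallyVisible` property is what backs the **Enabled for external users** setting (module 2.x assumed):

```powershell
# Minimal sketch (property mapping assumed): list catalogs enabled for external users
Connect-MgGraph -Scopes "EntitlementManagement.Read.All"

Get-MgEntitlementManagementCatalog -All |
    Where-Object { $_.IsExternallyVisible } |
    Select-Object DisplayName, State
```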
active-directory Entitlement Management External Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-external-users.md
-
+
Title: Govern access for external users in entitlement management
description: Learn about the settings you can specify to govern access for external users in entitlement management.
The following diagram and steps provide an overview of how external users are gr
1. If the policy settings include an expiration date, then later when the access package assignment for the external user expires, the external user's access rights from that access package are removed.
-1. Depending on the lifecycle of external users settings, when the external user no longer has any access package assignments, the external user is blocked from signing in and the guest user account is removed from your directory.
+1. Depending on the settings for the lifecycle of external users, when the external user no longer has any access package assignments, the external user will be blocked from signing in, and the external user account will be removed from your directory.
## Settings for external users
To ensure people outside of your organization can request access packages and ge
![Edit catalog settings](./media/entitlement-management-shared/catalog-edit.png)
- If you're an administrator or catalog owner, you can view the list of catalogs currently enabled for external users in the Azure portal list of catalogs, by changing the filter setting for **Enabled for external users** to **Yes**. If any of those catalogs shown in that filtered view have a non-zero number of access packages, those access packages may have a policy [for users not in your directory](entitlement-management-access-package-request-policy.md#for-users-not-in-your-directory) that allow external users to request.
+ If you're an administrator or catalog owner, you can view the list of catalogs currently enabled for external users in the Microsoft Entra admin center list of catalogs, by changing the filter setting for **Enabled for external users** to **Yes**. If any of those catalogs shown in that filtered view have a non-zero number of access packages, those access packages may have a policy [for users not in your directory](entitlement-management-access-package-request-policy.md#for-users-not-in-your-directory) that allows external users to request access.
### Configure your Azure AD B2B external collaboration settings
To ensure people outside of your organization can request access packages and ge
:::image type="content" source="media/entitlement-management-external-users/exclude-app-guests-selection.png" alt-text="Screenshot of the exclude guests app selection.":::

> [!NOTE]
-> The Entitlement Management app includes the entitlement management side of MyAccess, the Entitlement Management side of Azure portal and the Entitlement Management part of MS graph. The latter two require additional permissions for access, hence won't be accessed by guests unless explicit permission is provided.
+> The Entitlement Management app includes the entitlement management side of MyAccess, the Entitlement Management side of the Microsoft Entra admin center, and the Entitlement Management part of Microsoft Graph. The latter two require additional permissions for access, hence won't be accessed by guests unless explicit permission is provided.
### Review your SharePoint Online external sharing settings
To ensure people outside of your organization can request access packages and ge
### Review your Microsoft 365 group sharing settings

-- If you want to include Microsoft 365 groups in your access packages for external users, make sure the **Let users add new guests to the organization** is set to **On** to allow guest access. For more information, see [Manage guest access to Microsoft 365 Groups](/microsoft-365/admin/create-groups/manage-guest-access-in-groups?view=microsoft-365-worldwide#manage-groups-guest-access).
+- If you want to include Microsoft 365 groups in your access packages for external users, make sure the **Let users add new guests to the organization** setting is set to **On** to allow guest access. For more information, see [Manage guest access to Microsoft 365 Groups](/microsoft-365/admin/create-groups/manage-guest-access-in-groups#manage-groups-guest-access).
- If you want external users to be able to access the SharePoint Online site and resources associated with a Microsoft 365 group, make sure you turn on SharePoint Online external sharing. For more information, see [Turn external sharing on or off](/sharepoint/turn-external-sharing-on-or-off#change-the-organization-level-external-sharing-setting).
To ensure people outside of your organization can request access packages and ge
## Manage the lifecycle of external users
-You can select what happens when an external user, who was invited to your directory through making an access package request, no longer has any access package assignments. This can happen if the user relinquishes all their access package assignments, or their last access package assignment expires. By default, when an external user no longer has any access package assignments, they're blocked from signing in to your directory. After 30 days, their guest user account is removed from your directory.
+
+You can select what happens when an external user, who was invited to your directory through making an access package request, no longer has any access package assignments. This can happen if the user relinquishes all their access package assignments, or their last access package assignment expires. By default, when an external user no longer has any access package assignments, they're blocked from signing in to your directory. After 30 days, their guest user account is removed from your directory. You can also configure the settings so that an external user is neither blocked from signing in nor deleted, or so that an external user isn't blocked from signing in but is still deleted (preview).
-**Prerequisite role:** Global administrator, Identity Governance administrator or User administrator
+**Prerequisite role:** Global Administrator or Identity Governance Administrator
-1. In the Azure portal, select **Azure Active Directory** and then select **Identity Governance**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Identity Governance Administrator](../roles/permissions-reference.md#identity-governance-administrator).
-1. In the left menu, in the **Entitlement management** section, select **Settings**.
+1. Browse to **Identity governance** > **Entitlement management** > **Settings**.
1. Select **Edit**.
You can select what happens when an external user, who was invited to your direc
1. Once an external user loses their last assignment to any access packages, if you want to block them from signing in to this directory, set the **Block external user from signing in to this directory** to **Yes**.

   > [!NOTE]
- > If a user is blocked from signing in to this directory, then the user will be unable to re-request the access package or request additional access in this directory. Do not configure blocking them from signing in if they will subsequently need to request access to other access packages.
+ > Entitlement management only blocks sign-in for external guest user accounts that were invited through entitlement management or that were added to entitlement management for lifecycle management. Also, note that a user will be blocked from signing in even if that user was added to resources in this directory that were not access package assignments. If a user is blocked from signing in to this directory, then the user will be unable to re-request the access package or request additional access in this directory. Do not configure blocking them from signing in if they will subsequently need to request access to this or other access packages.
1. Once an external user loses their last assignment to any access packages, if you want to remove their guest user account in this directory, set **Remove external user** to **Yes**.

   > [!NOTE]
- > Entitlement management only removes accounts that were invited through entitlement management. Also, note that a user will be blocked from signing in and removed from this directory even if that user was added to resources in this directory that were not access package assignments. If the guest was present in this directory prior to receiving access package assignments, they will remain. However, if the guest was invited through an access package assignment, and after being invited was also assigned to a OneDrive for Business or SharePoint Online site, they will still be removed.
+ > Entitlement management only removes external guest user accounts that were invited through entitlement management or that were added to entitlement management for lifecycle management. Also, note that a user will be removed from this directory even if that user was added to resources in this directory that were not access package assignments. If the guest was present in this directory prior to receiving access package assignments, they will remain. However, if the guest was invited through an access package assignment, and after being invited was also assigned to a OneDrive for Business or SharePoint Online site, they will still be removed.
-1. If you want to remove the guest user account in this directory, you can set the number of days before it's removed. If you want to remove the guest user account as soon as they lose their last assignment to any access packages, set **Number of days before removing external user from this directory** to **0**.
+1. If you want to remove the guest user account in this directory, you can set the number of days before it's removed. While an external user is notified when their access package expires, there is no notification when their account is removed. If you want to remove the guest user account as soon as they lose their last assignment to any access packages, set **Number of days before removing external user from this directory** to **0**.
1. Select **Save**.
active-directory Entitlement Management Group Licenses https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-group-licenses.md
# Tutorial: Manage the lifecycle of your group-based licenses in Azure AD
+
With Azure Active Directory (Azure AD), you can use groups to manage the [licenses for your applications](../enterprise-users/licensing-groups-assign.md). You can make the management of these groups even easier by using entitlement management:
For more information, see [License requirements](entitlement-management-overview
**Prerequisite role:** Global Administrator, Identity Governance Administrator, User Administrator, Catalog Owner, or Access Package Manager
-1. In the Azure portal, on the left pane, select **Azure Active Directory**.
-
-2. Under **Manage**, select **Identity Governance**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Identity Governance Administrator](../roles/permissions-reference.md#identity-governance-administrator).
-3. Under **Entitlement Management**, select **Access packages**.
+1. Browse to **Identity governance** > **Entitlement management** > **Access packages**.
-4. Select **New access package**.
+1. On the **Access packages** page, select **New access package**.
-5. On the **Basics** tab, in the **Name** box, enter **Office Licenses**. In the **Description** box, enter **Access to licenses for Office applications**.
+1. On the **Basics** tab, in the **Name** box, enter **Office Licenses**. In the **Description** box, enter **Access to licenses for Office applications**.
-6. You can leave **General** in the **Catalog** list.
+1. You can leave **General** in the **Catalog** list.
## Step 2: Configure the resources for your access package

1. Select **Next: Resource roles** to go to the **Resource roles** tab.
-2. On this tab, you select the resources and the resource role to include in the access package. In this scenario, select **Groups and Teams** and search for your group that has assigned [Office licenses](../enterprise-users/licensing-groups-assign.md).
+1. On this tab, you select the resources and the resource role to include in the access package. In this scenario, select **Groups and Teams** and search for your group that has assigned [Office licenses](../enterprise-users/licensing-groups-assign.md).
-3. In the **Role** list, select **Member**.
+1. In the **Role** list, select **Member**.
## Step 3: Configure requests for your access package
For more information, see [License requirements](entitlement-management-overview
On this tab, you create a request policy. A *policy* defines the rules for access to an access package. You create a policy that allows employees in the resource directory to request the access package.
-3. In the **Users who can request access** section, select **For users in your directory** and then select **All members (excluding guests)**. These settings make it so that only members of your directory can request Office licenses.
+1. In the **Users who can request access** section, select **For users in your directory** and then select **All members (excluding guests)**. These settings make it so that only members of your directory can request Office licenses.
-4. Ensure that **Require approval** is set to **Yes**.
+1. Ensure that **Require approval** is set to **Yes**.
-5. Leave **Require requestor justification** set to **Yes**.
+1. Leave **Require requestor justification** set to **Yes**.
-6. Leave **How many stages** set to **1**.
+1. Leave **How many stages** set to **1**.
-7. Under **Approver**, select **Manager as approver**. This option allows the requestor's manager to approve the request. You can select a different person to be the fallback approver if the system can't find the manager.
+1. Under **Approver**, select **Manager as approver**. This option allows the requestor's manager to approve the request. You can select a different person to be the fallback approver if the system can't find the manager.
-8. Leave **Decision must be made in how many days?** set to **14**.
+1. Leave **Decision must be made in how many days?** set to **14**.
-9. Leave **Require approver justification** set to **Yes**.
+1. Leave **Require approver justification** set to **Yes**.
-10. Under **Enable new requests and assignments**, select **Yes** to enable employees to request the access package as soon as it's created.
+1. Under **Enable new requests and assignments**, select **Yes** to enable employees to request the access package as soon as it's created.
## Step 4: Configure requestor information for your access package

1. Select **Next** to go to the **Requestor information** tab.
-2. On this tab, you can ask questions to collect more information from the requestor. The questions are shown on the request form and can be either required or optional. In this scenario, you haven't been asked to include requestor information for the access package, so you can leave these boxes empty.
+1. On this tab, you can ask questions to collect more information from the requestor. The questions are shown on the request form and can be either required or optional. In this scenario, you haven't been asked to include requestor information for the access package, so you can leave these boxes empty.
## Step 5: Configure the lifecycle for your access package

1. Select **Next: Lifecycle** to go to the **Lifecycle** tab.
-2. In the **Expiration** section, for **Access package assignments expire**, select **Number of days**.
+1. In the **Expiration** section, for **Access package assignments expire**, select **Number of days**.
-3. In **Assignments expire after**, enter **365**. This box specifies when members who have access to the access package needs to renew their access.
+1. In **Assignments expire after**, enter **365**. This box specifies when members who have access to the access package need to renew their access.
-4. You can also configure access reviews, which allow periodic checks of whether the employee still needs access to the access package. A review can be a self-review performed by the employee. Or you can set the employee's manager or another person as the reviewer. For more information, see [Access reviews](entitlement-management-access-reviews-create.md).
+1. You can also configure access reviews, which allow periodic checks of whether the employee still needs access to the access package. A review can be a self-review performed by the employee. Or you can set the employee's manager or another person as the reviewer. For more information, see [Access reviews](entitlement-management-access-reviews-create.md).
In this scenario, you want all employees to review whether they still need a license for Office each year.

1. Under **Require access reviews**, select **Yes**.
- 2. You can leave **Starting on** set to the current date. This date is when the access review starts. After you create an access review, you can't update its start date.
- 3. Under **Review frequency**, select **Annually**, because the review occurs once per year. The **Review frequency** box is where you determine how often the access review runs.
- 4. Specify a **Duration (in days)**. The duration box is where you indicate how many days each occurrence of the access review series runs.
- 5. Under **Reviewers**, select **Manager**.
+ 1. You can leave **Starting on** set to the current date. This date is when the access review starts. After you create an access review, you can't update its start date.
+ 1. Under **Review frequency**, select **Annually**, because the review occurs once per year. The **Review frequency** box is where you determine how often the access review runs.
+ 1. Specify a **Duration (in days)**. The duration box is where you indicate how many days each occurrence of the access review series runs.
+ 1. Under **Reviewers**, select **Manager**.
## Step 6: Review and create your access package
For more information, see [License requirements](entitlement-management-overview
On this tab, you can review the configuration for your access package before you create it. If there are any problems, you can use the tabs to go to a specific point in the process to make edits.
-3. When you're happy with your configuration, select **Create**. After a moment, you should see a notification stating that the access package is created.
+1. When you're happy with your configuration, select **Create**. After a moment, you should see a notification stating that the access package is created.
-4. After the access package is created, you'll see the **Overview** page for the package. You'll find the **My Access portal link** here. Copy the link and share it with your team so your team members can request the access package to be assigned licenses for Office.
+1. After the access package is created, you'll see the **Overview** page for the package. You'll find the **My Access portal link** here. Copy the link and share it with your team so your team members can request the access package to be assigned licenses for Office.
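If you want to script the basics from steps 1 and 2 instead of using the portal, a minimal sketch with Microsoft Graph PowerShell (module 2.x assumed) could look like this; the request and lifecycle settings from steps 3 through 5 would still need to be configured in an assignment policy afterwards:

```powershell
# Minimal sketch (module 2.x assumed): create the Office Licenses access package in the General catalog
Connect-MgGraph -Scopes "EntitlementManagement.ReadWrite.All"

$catalog = Get-MgEntitlementManagementCatalog -Filter "displayName eq 'General'"

$params = @{
    displayName = "Office Licenses"
    description = "Access to licenses for Office applications"
    catalog     = @{ id = $catalog.Id }
}
New-MgEntitlementManagementAccessPackage -BodyParameter $params
```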
## Step 7: Clean up resources
In this step, you can delete the Office Licenses access package.
**Prerequisite role:** Global Administrator, Identity Governance Administrator, or Access Package Manager
-1. In the Azure portal, on the left pane, select **Azure Active Directory**.
-
-2. Under **Manage**, select **Identity Governance**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Identity Governance Administrator](../roles/permissions-reference.md#identity-governance-administrator).
-3. Under **Entitlement Management**, select **Access packages**.
+1. Browse to **Identity governance** > **Entitlement management** > **Access packages**.
-4. Open the **Office Licenses** access package.
+1. Open the **Office Licenses** access package.
-5. Select **Resource Roles**.
+1. Select **Resource Roles**.
-6. Select the group you added to the access package. On the details pane, select **Remove resource role**. In the message box that appears, select **Yes**.
+1. Select the group you added to the access package. On the details pane, select **Remove resource role**. In the message box that appears, select **Yes**.
-7. Open the list of access packages.
+1. Open the list of access packages.
-8. For **Office Licenses**, select the ellipsis button (...) and then select **Delete**. In the message box that appears, select **Yes**.
+1. For **Office Licenses**, select the ellipsis button (...) and then select **Delete**. In the message box that appears, select **Yes**.
## Next steps
active-directory Entitlement Management Group Writeback https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-group-writeback.md
na Previously updated : 02/23/2023 Last updated : 08/24/2023
active-directory Entitlement Management Logic Apps Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-logic-apps-integration.md
These triggers to Logic Apps are controlled in a tab within access package polic
**Prerequisite roles:** Global administrator, Identity Governance administrator, Catalog owner or Resource Group Owner
-1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Identity Governance Administrator](../roles/permissions-reference.md#identity-governance-administrator).
-1. In the Azure portal, select **Azure Active Directory** and then select **Identity Governance**.
-
-1. In the left menu, select **Catalogs**.
+1. Browse to **Identity governance** > **Entitlement management** > **Catalogs**.
1. Select the catalog for which you want to add a custom extension and then in the left menu, select **Custom Extensions**.
These triggers to Logic Apps are controlled in a tab within access package polic
**Prerequisite roles:** Global administrator, Identity Governance administrator, Catalog owner, or Access package manager
-1. Sign in to the [Azure portal](https://portal.azure.com).
-
-1. In the Azure portal, select **Azure Active Directory** and then select **Identity Governance**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Identity Governance Administrator](../roles/permissions-reference.md#identity-governance-administrator).
-1. In the left menu, select **Access packages**.
+1. Browse to **Identity governance** > **Entitlement management** > **Access packages**.
1. Select the access package you want to add a custom extension (logic app) to from the list of access packages that have already been created.
active-directory Entitlement Management Logs And Reporting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-logs-and-reporting.md
Archiving Azure AD audit logs requires you to have Azure Monitor in an Azure sub
**Prerequisite role**: Global Administrator
-1. Sign in to the [Azure portal](https://portal.azure.com) as a user who is a Global Administrator. Make sure you have access to the resource group containing the Azure Monitor workspace.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as a Global Administrator. Make sure you have access to the resource group containing the Azure Monitor workspace.
-1. Select **Azure Active Directory** then select **Diagnostic settings** under Monitoring in the left navigation menu. Check if there's already a setting to send the audit logs to that workspace.
+1. Browse to **Identity** > **Monitoring & health** > **Diagnostic settings**.
+
+1. Check if there's already a setting to send the audit logs to that workspace.
1. If there isn't already a setting, select **Add diagnostic setting**. Use the instructions in [Integrate Azure AD logs with Azure Monitor logs](../reports-monitoring/howto-integrate-activity-logs-with-log-analytics.md) to send the Azure AD audit log to the Azure Monitor workspace.
Archiving Azure AD audit logs requires you to have Azure Monitor in an Azure sub
1. Later, to see the range of dates held in your workspace, you can use the *Archived Log Date Range* workbook:
- 1. Select **Azure Active Directory** then select **Workbooks**.
+ 1. Browse to **Identity** > **Monitoring & health** > **Workbooks**.
   1. Expand the section **Azure Active Directory Troubleshooting**, and select **Archived Log Date Range**.
To view events for an access package, you must have access to the underlying Azu
Use the following procedure to view events:
-1. In the Azure portal, select **Azure Active Directory** then select **Workbooks**. If you only have one subscription, move on to step 3.
+1. In the Microsoft Entra admin center, select **Identity**, then select **Workbooks**. If you only have one subscription, move on to step 3.
1. If you have multiple subscriptions, select the subscription that contains the workspace.
Use the following procedure to view events:
![View app role assignments](./media/entitlement-management-access-package-incompatible/workbook-ara.png)
-## Create custom Azure Monitor queries using the Azure portal
+## Create custom Azure Monitor queries using the Microsoft Entra admin center
You can create your own queries on Azure AD audit events, including entitlement management events.
-1. In Azure Active Directory of the Azure portal, select **Logs** under the Monitoring section in the left navigation menu to create a new query page.
+1. In the Microsoft Entra admin center, under **Identity**, select **Logs** in the **Monitoring** section of the left navigation menu to create a new query page.
1. Your workspace should be shown in the upper left of the query page. If you have multiple Azure Monitor workspaces, and the workspace you're using to store Azure AD audit events isn't shown, select **Select Scope**. Then, select the correct subscription and workspace.
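You can also run a query from PowerShell instead of the query page. The following is a minimal sketch, assuming the `Az.OperationalInsights` module, a hypothetical workspace ID, and that entitlement management events land in the `AuditLogs` table under the `EntitlementManagement` category:

```powershell
# Minimal sketch (table and category names assumed): query entitlement management audit events
# Requires the Az.Accounts and Az.OperationalInsights modules
Connect-AzAccount

$workspaceId = "00000000-0000-0000-0000-000000000000"   # hypothetical Log Analytics workspace (customer) ID
$query = 'AuditLogs | where Category == "EntitlementManagement" | take 20'

(Invoke-AzOperationalInsightsQuery -WorkspaceId $workspaceId -Query $query).Results
```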
active-directory Entitlement Management Onboard External User https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-onboard-external-user.md
For more information, see [License requirements](entitlement-management-overview
## Step 1: Configure basics
+
**Prerequisite role:** Global administrator, Identity Governance administrator, User administrator, Catalog owner, or Access package manager
-1. In the Azure portal, in the left navigation, select **Azure Active Directory**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Identity Governance Administrator](../roles/permissions-reference.md#identity-governance-administrator).
-2. In the left menu, select **Identity Governance**.
+1. Browse to **Identity governance** > **Entitlement management** > **Access packages**.
-3. In the left menu, select **Access packages**. If you see Access denied, ensure that a Microsoft Azure AD Premium P2 or Microsoft Entra ID Governance license is present in your directory.
+3. On the **Access packages** page, if you see **Access denied**, ensure that a Microsoft Azure AD Premium P2 or Microsoft Entra ID Governance license is present in your directory.
4. Select **New access package**.
In this step, you can delete the **External user package** access package.
**Prerequisite role:** Global administrator, Identity Governance administrator or Access package manager
-1. In the **Azure portal**, in the left navigation, select **Azure Active Directory**.
-
-2. In the left menu, select **Identity Governance**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Identity Governance Administrator](../roles/permissions-reference.md#identity-governance-administrator).
-3. In the left menu, select **Access Packages**.
+1. Browse to **Identity governance** > **Entitlement management** > **Access packages**.
4. Open the **External user package** access package.
active-directory Entitlement Management Organization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-organization.md
# Manage connected organizations in entitlement management
-With entitlement management, you can collaborate with people outside your organization. If you frequently collaborate with users in an external Azure AD directory or domain, you can add them as a connected organization. This article describes how to add a connected organization so that you can allow users outside your organization to request resources in your directory.
+With entitlement management, you can collaborate with people outside your organization. If you frequently collaborate with many users from specific external organizations, you can add those organizations' identity sources as connected organizations. Having a connected organization simplifies how more people from those organizations can request access. This article describes how to add a connected organization so that you can allow users outside your organization to request resources in your directory.
## What is a connected organization?

A connected organization is another organization that you have a relationship with. In order for the users in that organization to be able to access your resources, such as your SharePoint Online sites or apps, you'll need a representation of that organization's users in your directory. Because in most cases the users in that organization aren't already in your Azure AD directory, you can use entitlement management to bring them into your Azure AD directory as needed.
+If you want to provide a path for anyone to request access, and you are not sure which organizations those new users might be from, then you can configure an [access package assignment policy for users not in your directory](entitlement-management-access-package-request-policy.md#for-users-not-in-your-directory). In that policy, select the option of **All users (All connected organizations + any new external users)**. If the requestor is approved, and they don't belong to a connected organization in your directory, a connected organization will automatically be created for them.
+
+If you want to only allow individuals from designated organizations to request access, then first create those connected organizations. Second, configure an [access package assignment policy for users not in your directory](entitlement-management-access-package-request-policy.md#for-users-not-in-your-directory), select the option of **Specific connected organizations**, and select the organizations you created.
++
There are four ways that entitlement management lets you specify the users that form a connected organization. It could be
* users in another Azure AD directory (from any Microsoft cloud),
* users in another non-Azure AD directory that has been configured for direct federation,
* users in another non-Azure AD directory, whose email addresses all have the same domain name in common, or
-* users with a Microsoft Account, such as from the domain *live.com*, if you have a business need for collaboration with users which have no common organization.
+* users with a Microsoft Account, such as from the domain *live.com*, if you have a business need for collaboration with users that have no common organization.
For example, suppose you work at Woodgrove Bank and you want to collaborate with two external organizations. You want to give users from both external organizations access to the same resources, but these two organizations have different configurations:
-- Graphic Design Institute uses Azure AD, and their users have a user principal name that ends with *graphicdesigninstitute.com*.
-- Contoso does not yet use Azure AD. Contoso users have a user principal name that ends with *contoso.com*.
+- Contoso does not yet use Azure AD. Contoso users have an email address that ends with *contoso.com*.
+- Graphic Design Institute uses Azure AD, and at least some of their users have a user principal name that ends with *graphicdesigninstitute.com*.
+
+In this case, you can configure two connected organizations, then one access package with one policy.
-In this case, you can configure one access package, with one policy, and two connected organizations. You create one connected organization for Graphic Design Institute and one for Contoso. If you then specify the two connected organizations in a policy for **users not yet in your directory**, users from each organization, with a user principal name that matches one of the connected organizations, can request the access package. Users with a user principal name that has a domain of contoso.com would match the Contoso-connected organization and would also be allowed to request the package. Users with a user principal name that has a domain of *graphicdesigninstitute.com* and are using an organizational account would match the Graphic Design Institute-connected organization and be allowed to submit requests. And, because Graphic Design Institute uses Azure AD, any users with a principal name that matches another [verified domain](../fundamentals/add-custom-domain.md#verify-your-custom-domain-name) that's added to the Graphic Design Institute tenant, such as *graphicdesigninstitute.example*, would also be able to request access packages by using the same policy. If you have [email one-time passcode (OTP) authentication](../external-identities/one-time-passcode.md) turned on, that includes users from those domains that aren't yet part of Azure AD directories who'll authenticate using email OTP when accessing your resources.
+1. Ensure that you have [email one-time passcode (OTP) authentication](../external-identities/one-time-passcode.md) turned on, so that users from those domains who aren't yet part of an Azure AD directory can authenticate using an email one-time passcode when requesting access or later accessing your resources. In addition, you may need to [configure your Azure AD B2B external collaboration settings](entitlement-management-external-users.md?#configure-your-azure-ad-b2b-external-collaboration-settings) to allow external users access.
+1. Create a connected organization for Contoso. When you specify the domain *contoso.com*, entitlement management will recognize that there is no existing Azure AD tenant associated with that domain, and that users from that connected organization will be recognized if they authenticate with an email one-time-passcode with a *contoso.com* email address domain.
+1. Create another connected organization for Graphic Design Institute. When you specify the domain *graphicdesigninstitute.com*, entitlement management will recognize that there is a tenant associated with that domain.
+1. In a catalog that allows external users to request, create an access package.
+1. In that access package, create an access package assignment policy for **users not yet in your directory**. In that policy, select the option **Specific connected organizations** and specify the two connected organizations. This will allow users from each organization, with an identity source that matches one of the connected organizations, to request the access package.
+1. When external users with a user principal name that has a domain of *contoso.com* request the access package, they will authenticate using email. This email domain will match the Contoso-connected organization and the user will be allowed to request the package. After they request, [how access works for external users](entitlement-management-external-users.md?#how-access-works-for-external-users) describes how the B2B user is then invited and access is assigned for the external user.
+1. In addition, external users that are using an organizational account from the Graphic Design Institute tenant would match the Graphic Design Institute-connected organization and be allowed to request the access package. And, because Graphic Design Institute uses Azure AD, any users with a principal name that matches another [verified domain](../fundamentals/add-custom-domain.md#verify-your-custom-domain-name) that's added to the Graphic Design Institute tenant, such as *graphicdesigninstitute.example*, would also be able to request access packages by using the same policy.
-![Connected organization example](./media/entitlement-management-organization/connected-organization-example.png)
+[ ![Diagram of connected organizations in example and their relationships with an assignment policy and with a tenant.](./media/entitlement-management-organization/connected-organization-example.png) ](./media/entitlement-management-organization/connected-organization-example-expanded.png#lightbox)
How users from the Azure AD directory or domain authenticate depends on the authentication type. The authentication types for connected organizations are:
For a demonstration of how to add a connected organization, watch the following
## View the list of connected organizations
-**Prerequisite role**: *Global administrator*, *Identity Governance administrator*, or *User administrator*
-1. In the Azure portal, select **Azure Active Directory**, and then select **Identity Governance**.
+**Prerequisite role**: *Global Administrator* or *Identity Governance Administrator*
-1. In the left pane, select **Connected organizations**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Identity Governance Administrator](../roles/permissions-reference.md#identity-governance-administrator).
+
+1. Browse to **Identity governance** > **Entitlement management** > **Connected organizations**.
1. In the search box, you can search for a connected organization by the name of the connected organization. However, you cannot search for a domain name.
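If you need to check domains as well as names, one option is to list connected organizations with Microsoft Graph. A minimal sketch, assuming the `Microsoft.Graph.Identity.Governance` module is installed:

```powershell
# Minimal sketch: list connected organizations with their state.
Connect-MgGraph -Scopes "EntitlementManagement.Read.All"

Get-MgEntitlementManagementConnectedOrganization -All |
    Select-Object Id, DisplayName, State
```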
For a demonstration of how to add a connected organization, watch the following
To add an external Azure AD directory or domain as a connected organization, follow the instructions in this section.
-**Prerequisite role**: *Global administrator*, *Identity Governance administrator*, or *User administrator*
+**Prerequisite role**: *Global Administrator* or *Identity Governance Administrator*
+
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Identity Governance Administrator](../roles/permissions-reference.md#identity-governance-administrator).
-1. In the Azure portal, select **Azure Active Directory**, and then select **Identity Governance**.
+1. Browse to **Identity governance** > **Entitlement management** > **Connected organizations**.
-1. In the left pane, select **Connected organizations**, and then select **Add connected organization**.
+1. On the **Connected organizations** page, select **Add connected organization**.
![The "Add connected organization" button](./media/entitlement-management-organization/connected-organization.png)
To add an external Azure AD directory or domain as a connected organization, fol
If the connected organization changes to a different domain, the organization's name changes, or you want to change the sponsors, you can update the connected organization by following the instructions in this section.
-**Prerequisite role**: *Global administrator* or *User administrator*
+**Prerequisite role**: *Global Administrator* or *Identity Governance Administrator*
-1. In the Azure portal, select **Azure Active Directory**, and then select **Identity Governance**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Identity Governance Administrator](../roles/permissions-reference.md#identity-governance-administrator).
-1. In the left pane, select **Connected organizations**, and then select the connected organization to open it.
+1. Browse to **Identity governance** > **Entitlement management** > **Connected organizations**.
+
+1. On the **Connected organizations** page, select the connected organization you want to update.
1. In the connected organization's overview pane, select **Edit** to change the organization name, description, or state.
If the connected organization changes to a different domain, the organization's
If you no longer have a relationship with an external Azure AD directory or domain, or do not wish to have a proposed connected organization any longer, you can delete the connected organization.
-**Prerequisite role**: *Global administrator* or *User administrator*
+**Prerequisite role**: *Global Administrator* or *Identity Governance Administrator*
+
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Identity Governance Administrator](../roles/permissions-reference.md#identity-governance-administrator).
-1. In the Azure portal, select **Azure Active Directory**, and then select **Identity Governance**.
+1. Browse to **Identity governance** > **Entitlement management** > **Connected organizations**.
-1. In the left pane, select **Connected organizations**, and then select the connected organization to open it.
+1. On the **Connected organizations** page, open the connected organization that you want to delete.
1. In the connected organization's overview pane, select **Delete** to delete it.
foreach ($c in $co) {
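A fuller sketch of a loop like the fragment above, assuming the `Microsoft.Graph.Identity.Governance` module is installed and `$co` holds the connected organizations:

```powershell
# Minimal sketch: enumerate connected organizations and print name and state.
Connect-MgGraph -Scopes "EntitlementManagement.Read.All"
$co = Get-MgEntitlementManagementConnectedOrganization -All

foreach ($c in $co) {
    Write-Output "$($c.DisplayName): $($c.State)"
}
```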
There are two different states for connected organizations in entitlement management, configured and proposed:
-- A configured connected organization is a fully functional connected organization that allows users within that organization access to access packages. When an admin creates a new connected organization in the Azure portal, it will be in the **configured** state by default since the administrator created and wants to use this connected organization. Additionally, when a connected org is created programmatically via the API, the default state should be **configured** unless set to another state explicitly.
+- A **configured** connected organization is a fully functional connected organization that allows users within that organization access to access packages. When an admin creates a new connected organization in the Azure portal, it will be in the **configured** state by default since the administrator created and wants to use this connected organization. Additionally, when a connected org is created programmatically via the API, the default state should be **configured** unless set to another state explicitly.
Configured connected organizations will show up in the pickers for connected organizations and will be in scope for any policies that target "all configured connected organizations".
-- A proposed connected organization is a connected organization that has been automatically created, but hasn't had an administrator create or approve the organization. When a user signs up for an access package outside of a configured connected organization, any automatically created connected organizations will be in the **proposed** state since no administrator in the tenant set-up that partnership.
+- A **proposed** connected organization is a connected organization that has been automatically created, but hasn't had an administrator create or approve the organization. When a user signs up for an access package outside of a configured connected organization, any automatically created connected organizations will be in the **proposed** state since no administrator in the tenant set up that partnership.
Proposed connected organizations are not in scope for the "all configured connected organizations" setting on any policies, but can be used only in policies that target specific organizations.
active-directory Entitlement Management Reports https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-reports.md
Watch the following video to learn how to view what resources users have access
## View users assigned to an access package
+
This report enables you to list all of the users who are assigned to an access package.
-**Prerequisite role:** Global administrator, Identity Governance administrator or User administrator
+**Prerequisite role:** Global Administrator or Identity Governance Administrator
+
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Identity Governance Administrator](../roles/permissions-reference.md#identity-governance-administrator).
-1. Select **Azure Active Directory** and then select **Identity Governance**.
+1. Browse to **Identity governance** > **Entitlement management** > **Access packages**.
-1. In the left menu, select **Access packages** and then open the access package of interest.
+1. On the **Access packages** page, select the access package of interest.
1. In the left menu, select **Assignments**, then select **Download**.
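The same assignment list can be retrieved with Microsoft Graph. A minimal sketch, where the access package ID is a placeholder and the `$filter` and `$expand` options follow the assignments API reference:

```powershell
# Minimal sketch: list assignments for one access package and print each
# assigned user's display name. The package ID is a placeholder.
Connect-MgGraph -Scopes "EntitlementManagement.Read.All"
$packageId = "<access-package-id>"

$result = Invoke-MgGraphRequest -Method GET `
    -Uri "https://graph.microsoft.com/v1.0/identityGovernance/entitlementManagement/assignments?`$filter=accessPackage/id eq '$packageId'&`$expand=target"
$result.value | ForEach-Object { $_.target.displayName }
```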
This report enables you to list all of the users who are assigned to an access p
This report enables you to list all of the access packages a user can request and the access packages that are currently assigned to the user.
-**Prerequisite role:** Global administrator, Identity Governance administrator or User administrator
+**Prerequisite role:** Global Administrator or Identity Governance Administrator
-1. Select **Azure Active Directory** and then select **Identity Governance**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Identity Governance Administrator](../roles/permissions-reference.md#identity-governance-administrator).
-1. In the left menu, select **Reports**.
+1. Browse to **Identity governance** > **Entitlement management** > **Reports**.
1. Select **Access packages for a user**.
This report enables you to list all of the access packages a user can request an
This report enables you to list the resources currently assigned to a user in entitlement management. This report is for resources managed with entitlement management. The user might have access to other resources in your directory outside of entitlement management.
-**Prerequisite role:** Global administrator, Identity Governance administrator or User administrator
+**Prerequisite role:** Global Administrator or Identity Governance Administrator
-1. Select **Azure Active Directory** and then select **Identity Governance**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Identity Governance Administrator](../roles/permissions-reference.md#identity-governance-administrator).
-1. In the left menu, select **Reports**.
+1. Browse to **Identity governance** > **Entitlement management** > **Reports**.
1. Select **Resource assignments for a user**.
This report enables you to list the resources currently assigned to a user in en
To get additional details on how a user requested and received access to an access package, you can use the Azure AD audit log. In particular, you can use the log records in the `EntitlementManagement` and `UserManagement` categories to get additional details on the processing steps for each request.
-1. Select **Azure Active Directory** and then select **Audit logs**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Identity Governance Administrator](../roles/permissions-reference.md#identity-governance-administrator).
+
+1. Browse to **Identity governance** > **Entitlement management** > **Audit logs**.
1. At the top, change the **Category** to either `EntitlementManagement` or `UserManagement`, depending on the audit record you're looking for.
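The same records can also be queried with Microsoft Graph. A minimal sketch, assuming the `AuditLog.Read.All` permission:

```powershell
# Minimal sketch: pull audit records in the EntitlementManagement category;
# switch the filter value to 'UserManagement' as needed.
Connect-MgGraph -Scopes "AuditLog.Read.All"

$audit = Invoke-MgGraphRequest -Method GET `
    -Uri "https://graph.microsoft.com/v1.0/auditLogs/directoryAudits?`$filter=category eq 'EntitlementManagement'"
$audit.value | ForEach-Object { $_.activityDisplayName }
```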
When the user's access package assignment expires, is canceled by the user, or r
## Download list of connected organizations
-**Prerequisite role**: *Global administrator*, *Identity Governance administrator*, or *User administrator*
+**Prerequisite role**: *Global Administrator* or *Identity Governance Administrator*
+
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Identity Governance Administrator](../roles/permissions-reference.md#identity-governance-administrator).
-1. In the Azure portal, select **Azure Active Directory**, and then select **Identity Governance**.
+1. Browse to **Identity governance** > **Entitlement management** > **Connected organizations**.
-1. In the left pane, select **Connected organizations**, and then select **Download**.
+1. On the **Connected organizations** page, select **Download**.
## View events for an access package
To view events for an access package, you must have access to the underlying Azu
- Reports reader
- Application administrator
-1. In the Azure portal, select **Azure Active Directory** then select **Workbooks**. If you only have one subscription, move on to step 3.
+1. In the Microsoft Entra admin center, select **Identity**, and then select **Workbooks** under **Monitoring & health**. If you only have one subscription, move on to step 3.
1. If you have multiple subscriptions, select the subscription that contains the workspace.
active-directory Entitlement Management Reprocess Access Package Assignments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-reprocess-access-package-assignments.md
To use entitlement management and assign users to access packages, you must have
[!INCLUDE [portal updates](~/articles/active-directory/includes/portal-update.md)]
-**Prerequisite role**: Global administrator, Identity Governance administrator, User administrator, Catalog owner, Access package manager or Access package assignment manager
+**Prerequisite role**: Global Administrator, Identity Governance Administrator, Catalog owner, Access package manager, or Access package assignment manager
If you have users who are in the "Delivered" state but don't have access to resources that are a part of the access package, you'll likely need to reprocess the assignments to reassign those users to the access package's resources. Follow these steps to reprocess assignments for an existing access package:
-1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Identity Governance Administrator](../roles/permissions-reference.md#identity-governance-administrator).
-1. Select **Azure Active Directory**, and then select **Identity Governance**.
+1. Browse to **Identity governance** > **Entitlement management** > **Access packages**.
-1. In the left menu, select **Access packages** and then open the access package with the user assignment you want to reprocess.
+1. On the **Access packages** page, open the access package with the user assignment you want to reprocess.
1. Underneath **Manage** on the left side, select **Assignments**.
active-directory Entitlement Management Reprocess Access Package Requests https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-reprocess-access-package-requests.md
na Previously updated : 05/31/2023 Last updated : 08/24/2023
To use entitlement management and assign users to access packages, you must have
[!INCLUDE [portal updates](~/articles/active-directory/includes/portal-update.md)]
-**Prerequisite role**: Global administrator, Identity Governance administrator, User administrator, Catalog owner, Access package manager or Access package assignment manager
+**Prerequisite role**: Global Administrator, Identity Governance Administrator, Catalog owner, Access package manager, or Access package assignment manager
If you have a set of users whose requests are in the "Partially Delivered" or "Failed" state, you might need to reprocess some of those requests. Follow these steps to reprocess requests for an existing access package:
-1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Identity Governance Administrator](../roles/permissions-reference.md#identity-governance-administrator).
-1. Click **Azure Active Directory**, and then click **Identity Governance**.
+1. Browse to **Identity governance** > **Entitlement management** > **Access packages**.
-1. In the left menu, click **Access packages** and then open the access package.
+1. On the **Access packages** page, open the access package.
-1. Underneath **Manage** on the left side, click **Requests**.
+1. Underneath **Manage** on the left side, select **Requests**.
1. Select all users whose requests you wish to reprocess.
-1. Click **Reprocess**.
+1. Select **Reprocess**.
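Requests can also be reprocessed programmatically. A minimal sketch using the Microsoft Graph `reprocess` action on an assignment request; the request ID is a placeholder:

```powershell
# Minimal sketch: reprocess a single assignment request by ID.
Connect-MgGraph -Scopes "EntitlementManagement.ReadWrite.All"
$requestId = "<assignment-request-id>"   # placeholder

Invoke-MgGraphRequest -Method POST `
    -Uri "https://graph.microsoft.com/v1.0/identityGovernance/entitlementManagement/assignmentRequests/$requestId/reprocess"
```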
## Next steps
active-directory Entitlement Management Request Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-request-access.md
Title: Request an access package - entitlement management
-description: Learn how to use the My Access portal to request access to an access package in Azure Active Directory entitlement management.
+description: Learn how to use the My Access portal to request access to an access package in Microsoft Entra entitlement management.
documentationCenter: ''
active-directory Entitlement Management Request Approve https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-request-approve.md
Title: Approve or deny access requests - entitlement management
-description: Learn how to use the My Access portal to approve or deny requests to an access package in Azure Active Directory entitlement management.
+description: Learn how to use the My Access portal to approve or deny requests to an access package in Microsoft Entra entitlement management.
documentationCenter: ''
active-directory Entitlement Management Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-scenarios.md
Title: Common scenarios in entitlement management
-description: Learn the high-level steps you should follow for common scenarios in Azure Active Directory entitlement management.
+description: Learn the high-level steps you should follow for common scenarios in Microsoft Entra entitlement management.
documentationCenter: ''
active-directory Entitlement Management Ticketed Provisioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-ticketed-provisioning.md
To add a Logic App workflow to an existing catalog, you use an ARM template for
Provide the Azure subscription and resource group details, along with the Logic App name and the Catalog ID to associate the Logic App with, and then select purchase. For more information on how to create a new catalog, please follow the steps in this document: [Create and manage a catalog of resources in entitlement management](entitlement-management-catalog-create.md).
-1. Navigate To Entra portal [Identity Governance - Microsoft Entra admin center](https://entra.microsoft.com/#view/Microsoft_AAD_ERM/DashboardBlade/~/elmEntitlement)
+1. Navigate to the Microsoft Entra admin center's [Identity Governance](https://entra.microsoft.com/#view/Microsoft_AAD_ERM/DashboardBlade/~/elmEntitlement) page.
1. In the left menu, select **Catalogs**.
After setting up custom extensibility in the catalog, administrators can create
With Azure, you're able to use [Azure Key Vault](/azure/key-vault/secrets/about-secrets) to store application secrets such as passwords. To register an application with secrets within the Azure portal, follow these steps:
-1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Identity Governance Administrator](../roles/permissions-reference.md#identity-governance-administrator).
-1. Search for and select Azure Active Directory.
+1. Browse to **Identity** > **Applications** > **App registrations**.
1. Under Manage, select App registrations > New registration.
After registering your application, you must add a client secret by following th
To authorize the created application to call the [MS Graph resume API](/graph/api/accesspackageassignmentrequest-resume) you'd do the following steps:
-1. Navigate to the Entra portal [Identity Governance - Microsoft Entra admin center](https://entra.microsoft.com/#view/Microsoft_AAD_ERM/DashboardBlade/~/elmEntitlement)
+1. Navigate to the Microsoft Entra admin center's [Identity Governance](https://entra.microsoft.com/#view/Microsoft_AAD_ERM/DashboardBlade/~/elmEntitlement) page.
1. In the left menu, select **Catalogs**.
At this point it's time to configure ServiceNow for resuming the entitlement man
1. Sign in to ServiceNow and navigate to the Application Registry.
1. Select "*New*" and then select "**Connect to a third party OAuth Provider**".
1. Provide a name for the application, and select Client Credentials in the Default Grant type.
- 1. Enter the Client Name, ID, Client Secret, Authorization URL, Token URL that were generated when you registered the Azure Active Directory application in the Azure portal.
+ 1. Enter the Client Name, ID, Client Secret, Authorization URL, and Token URL that were generated when you registered the Azure Active Directory application in the Microsoft Entra admin center.
1. Submit the application.

:::image type="content" source="media/entitlement-management-servicenow-integration/entitlement-management-servicenow-application-registry.png" alt-text="Screenshot of the application registry within ServiceNow." lightbox="media/entitlement-management-servicenow-integration/entitlement-management-servicenow-application-registry.png":::

1. Create a System Web Service REST API message by following these steps:
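For orientation, here's a minimal sketch of the resume call that the ServiceNow REST message ultimately issues against Microsoft Graph. The request ID is a placeholder, and the callback payload fields are assumptions to verify against the resume API reference linked earlier:

```powershell
# Minimal sketch: resume a waiting assignment request once the ticket is
# fulfilled. The payload fields below are assumptions; verify them against
# the accessPackageAssignmentRequest resume reference.
Connect-MgGraph -Scopes "EntitlementManagement.ReadWrite.All"
$requestId = "<assignment-request-id>"   # placeholder

$payload = @{
    source = "ServiceNow"   # assumed label for the callback source
    type   = "microsoft.graph.accessPackageCustomExtensionStage.assignmentRequestCreated"   # assumed stage type
    data   = @{
        "@odata.type" = "microsoft.graph.accessPackageAssignmentRequestCallbackData"
        stage         = "assignmentRequestCreated"   # assumed
    }
}

Invoke-MgGraphRequest -Method POST `
    -Uri "https://graph.microsoft.com/v1.0/identityGovernance/entitlementManagement/assignmentRequests/$requestId/resume" `
    -Body ($payload | ConvertTo-Json -Depth 5)
```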
active-directory Entitlement Management Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-troubleshoot.md
This article describes some items you should check to help you troubleshoot enti
* Roles for applications are defined by the application itself and are managed in Azure AD. If an application doesn't have any resource roles, entitlement management assigns users to a **Default Access** role.
- The Azure portal may also show service principals for services that can't be selected as applications. In particular, **Exchange Online** and **SharePoint Online** are services, not applications that have resource roles in the directory, so they can't be included in an access package. Instead, use group-based licensing to establish an appropriate license for a user who needs access to those services.
+ The Microsoft Entra admin center may also show service principals for services that can't be selected as applications. In particular, **Exchange Online** and **SharePoint Online** are services, not applications that have resource roles in the directory, so they can't be included in an access package. Instead, use group-based licensing to establish an appropriate license for a user who needs access to those services.
* Applications that only support Personal Microsoft Account users for authentication, and don't support organizational accounts in your directory, don't have application roles and can't be added to access package catalogs.
This article describes some items you should check to help you troubleshoot enti
* When a user who isn't yet in your directory signs in to the My Access portal to request an access package, be sure they authenticate using their organizational account. The organizational account can be either an account in the resource directory, or in a directory that is included in one of the policies of the access package. If the user's account isn't an organizational account, or the directory where they authenticate isn't included in the policy, then the user won't see the access package. For more information, see [Request access to an access package](entitlement-management-request-access.md).
-* If a user is blocked from signing in to the resource directory, they won't be able to request access in the My Access portal. Before the user can request access, you must remove the sign-in block from the user's profile. To remove the sign-in block, in the Azure portal, select **Azure Active Directory**, select **Users**, select the user, and then select **Profile**. Edit the **Settings** section and change **Block sign in** to **No**. For more information, see [Add or update a user's profile information using Azure Active Directory](../fundamentals/how-to-manage-user-profile-info.md). You can also check if the user was blocked due to an [Identity Protection policy](../identity-protection/howto-identity-protection-remediate-unblock.md).
+* If a user is blocked from signing in to the resource directory, they won't be able to request access in the My Access portal. Before the user can request access, you must remove the sign-in block from the user's profile. To remove the sign-in block, in the Microsoft Entra admin center, select **Identity**, select **Users**, select the user, and then select **Profile**. Edit the **Settings** section and change **Block sign in** to **No**. For more information, see [Add or update a user's profile information using Azure Active Directory](../fundamentals/how-to-manage-user-profile-info.md). You can also check if the user was blocked due to an [Identity Protection policy](../identity-protection/howto-identity-protection-remediate-unblock.md).
* In the My Access portal, if a user is both a requestor and an approver, they won't see their request for an access package on the **Approvals** page. This behavior is intentional - a user can't approve their own request. Ensure that the access package they're requesting has additional approvers configured on the policy. For more information, see [Change request and approval settings for an access package](entitlement-management-access-package-request-policy.md).

### View a request's delivery errors
-**Prerequisite role:** Global administrator, Identity Governance administrator, User administrator, Catalog owner, Access package manager or Access package assignment manager
+**Prerequisite role:** Global Administrator, Identity Governance Administrator, Catalog owner, Access package manager, or Access package assignment manager
-1. In the Azure portal, select **Azure Active Directory** and then select **Identity Governance**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Identity Governance Administrator](../roles/permissions-reference.md#identity-governance-administrator).
-1. In the left menu, select **Access packages** and then open the access package.
+1. Browse to **Identity governance** > **Entitlement management** > **Access packages**.
1. Select **Requests**.
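To find the affected requests programmatically, one option is to query assignment requests with Microsoft Graph and filter on state. A minimal sketch that filters client-side, since server-side filter support for `state` is an assumption here:

```powershell
# Minimal sketch: list assignment requests whose delivery failed or was only
# partially delivered, filtering client-side.
Connect-MgGraph -Scopes "EntitlementManagement.Read.All"

$requests = Invoke-MgGraphRequest -Method GET `
    -Uri "https://graph.microsoft.com/v1.0/identityGovernance/entitlementManagement/assignmentRequests"
$requests.value | Where-Object { $_.state -in @('deliveryFailed', 'partiallyDelivered') }
```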
You can only reprocess a request that has a status of **Delivery failed** or **P
- If the error wasn't fixed during the trials window, the request status may be **Delivery failed** or **partially delivered**. You can then use the **reprocess** button. You'll have seven days to reprocess the request.
-**Prerequisite role:** Global administrator, Identity Governance administrator, User administrator, Catalog owner, Access package manager or Access package assignment manager
+**Prerequisite role:** Global Administrator, Identity Governance Administrator, Catalog owner, Access package manager, or Access package assignment manager
-1. In the Azure portal, select **Azure Active Directory** and then select **Identity Governance**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Identity Governance Administrator](../roles/permissions-reference.md#identity-governance-administrator).
-1. In the left menu, select **Access packages** and then open the access package.
+1. Browse to **Identity governance** > **Entitlement management** > **Access packages** to open an access package.
1. Select **Requests**.
You can only reprocess a request that has a status of **Delivery failed** or **P
You can only cancel a pending request that hasn't yet been delivered or whose delivery has failed. The **cancel** button would be grayed out otherwise.
-**Prerequisite role:** Global administrator, Identity Governance administrator, User administrator, Catalog owner, Access package manager or Access package assignment manager
+**Prerequisite role:** Global Administrator, Identity Governance Administrator, Catalog owner, Access package manager, or Access package assignment manager
-1. In the Azure portal, select **Azure Active Directory** and then select **Identity Governance**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Identity Governance Administrator](../roles/permissions-reference.md#identity-governance-administrator).
-1. In the left menu, select **Access packages** and then open the access package.
+1. Browse to **Identity governance** > **Entitlement management** > **Access packages** to open an access package.
1. Select **Requests**.
active-directory Entitlement Management Verified Id Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-verified-id-settings.md
Before you begin, you must set up your tenant to use the [Microsoft Entra Verifi
## Create an access package with verified ID requirements

To add a verified ID requirement to an access package, you must start from the access package's requests tab. Follow these steps to add a verified ID requirement to a new access package.
To add a verified ID requirement to an access package, you must start from the a
> [!NOTE]
> Identity Governance administrator, User administrator, Catalog owner, or Access package manager will be able to add verified ID requirements to access packages soon.
-1. In the Azure portal, select **Azure Active Directory** and then select **Identity Governance**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Identity Governance Administrator](../roles/permissions-reference.md#identity-governance-administrator).
-1. In the left menu, select **Access packages** and then select **+ New access package**.
+1. Browse to **Identity governance** > **Entitlement management** > **Access packages**.
+
+1. On the **Access packages** page, select **+ New access package**.
1. On the **Requests** tab, scroll to the **Required Verified Ids** section.
active-directory Identity Governance Applications Integrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/identity-governance-applications-integrate.md
Once you've [established the policies](identity-governance-applications-define.md) for who should have access to an application, then you can [connect your application to Azure AD](../manage-apps/what-is-application-management.md) and then [deploy the policies](identity-governance-applications-deploy.md) for governing access to them.
-Microsoft Entra identity governance can be integrated with many applications, using [standards](../architecture/auth-sync-overview.md) such as OpenID Connect, SAML, SCIM, SQL and LDAP. Through these standards, you can use Azure AD with many popular SaaS applications and on-premises applications, including applications that your organization has developed. This deployment plan covers how to connect your application to Azure AD and enable identity governance features to be used for that application.
+Microsoft Entra identity governance can be integrated with many applications, including well-known applications such as SAP and those using [standards](../architecture/auth-sync-overview.md) such as OpenID Connect, SAML, SCIM, SQL, LDAP, SOAP, and REST. Through these standards, you can use Azure AD with many popular SaaS applications and on-premises applications, including applications that your organization has developed. This deployment plan covers how to connect your application to Azure AD and enable identity governance features to be used for that application.
In order for Microsoft Entra identity governance to be used for an application, the application must first be integrated with Azure AD. An application being integrated with Azure AD means one of two requirements must be met:
Next, if the application implements a provisioning protocol, then you should con
| SCIM | configure an application with the [provisioning agent for on-premises SCIM-based apps](../app-provisioning/on-premises-scim-provisioning.md)|
| local user accounts, stored in a SQL database | configure an application with the [provisioning agent for on-premises SQL-based applications](../app-provisioning/on-premises-sql-connector-configure.md)|
| local user accounts, stored in an LDAP directory | configure an application with the [provisioning agent for on-premises LDAP-based applications](../app-provisioning/on-premises-ldap-connector-configure.md) |
+ | local user accounts, managed through a SOAP or REST API | configure an application with the [provisioning agent with the web services connector](../app-provisioning/on-premises-web-services-connector.md)|
+ | local user accounts, managed through a MIM connector | configure an application with the [provisioning agent with a custom connector](../app-provisioning/on-premises-custom-connector.md)|
+ | SAP ECC with NetWeaver AS ABAP 7.0 or later | configure an application with the [provisioning agent with a SAP ECC configured web services connector](../app-provisioning/on-premises-sap-connector-configure.md)|
1. If your application uses Microsoft Graph to query groups from Azure AD, then [consent](../develop/application-consent-experience.md) to the application having the appropriate permissions to read from your tenant.
active-directory Identity Governance Applications Not Provisioned Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/identity-governance-applications-not-provisioned-users.md
There are three common scenarios in which it's necessary to populate Azure Activ
- Application that doesn't use Azure AD as its only identity provider
- Application does not use Azure AD as its identity provider nor does it support provisioning
-For more information on those first two scenarios, where the application supports provisioning, or uses an LDAP directory, SQL database or relies upon Azure AD as its identity provider, see the article [govern an application's existing users](identity-governance-applications-existing-users.md). That article covers how to use identity governance features for existing users of those categories of applications.
+For more information on those first two scenarios, where the application supports provisioning, uses an LDAP directory or SQL database, has a SOAP or REST API, or relies upon Azure AD as its identity provider, see the article [govern an application's existing users](identity-governance-applications-existing-users.md). That article covers how to use identity governance features for existing users of those categories of applications.
This article covers the third scenario. For some legacy applications it might not be feasible to remove other identity providers or local credential authentication from the application, or enable support for provisioning protocols for those applications. For those applications, if you want to use Azure AD to review who has access to that application, or remove someone's access from that application, you'll need to create assignments in Azure AD that represent application users. This article covers that scenario of an application that does not use Azure AD as its identity provider and does not support provisioning.
Follow the instructions in the [guide for creating an access review of groups or
1. The columns `PrincipalDisplayName` and `PrincipalId` contain the display names and Azure AD user IDs of each user who retains an application role assignment.
+## Configure entitlement management integration with ServiceNow for ticketing (optional)
+
+If you have ServiceNow, you can optionally configure automated ServiceNow ticket creation by using the [entitlement management integration](entitlement-management-ticketed-provisioning.md) via Logic Apps. In that scenario, entitlement management can automatically create ServiceNow tickets for manual provisioning of users who have received access package assignments.
## Next steps

- [Prepare for an access review of users' access to an application](access-reviews-application-preparation.md)
+ - [Automated ServiceNow ticket creation with entitlement management integration](entitlement-management-ticketed-provisioning.md)
active-directory Identity Governance Applications Prepare https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/identity-governance-applications-prepare.md
Microsoft Entra identity governance can be integrated with many applications, us
Before you begin the process of governing application access from Azure AD, you should check your Azure AD environment is appropriately configured.
-* **Ensure your Azure AD and Microsoft Online Services environment is ready for the [compliance requirements](../standards/standards-overview.md) for the applications to be integrated and properly licensed**. Compliance is a shared responsibility among Microsoft, cloud service providers (CSPs), and organizations. To use Azure AD to govern access to applications, you must have one of the following licenses in your tenant:
+* **Ensure your Azure AD and Microsoft Online Services environment is ready for the [compliance requirements](../standards/standards-overview.md) for the applications to be integrated and properly licensed**. Compliance is a shared responsibility among Microsoft, cloud service providers (CSPs), and organizations. To use Azure AD to govern access to applications, you must have one of the following [license combinations](licensing-fundamentals.md) in your tenant:
- * Microsoft Azure AD Premium P2 or Microsoft Entra ID Governance
- * Enterprise Mobility + Security (EMS) E5 license
+ * **Microsoft Entra ID Governance** and its prerequisite, Microsoft Azure AD Premium P1
+ * **Microsoft Entra ID Governance Step Up for Microsoft Entra ID P2** and its prerequisite, either Microsoft Azure AD Premium P2 or Enterprise Mobility + Security (EMS) E5
- Your tenant needs to have at least as many licenses as the number of member (non-guest) users who have or can request access to the applications, approve, or review access to the applications. With an appropriate license for those users, you can then govern access to up to 1500 applications per user.
+ Your tenant needs to have at least as many licenses as the number of member (non-guest) users who are governed, including those who have or can request access to the applications, or who approve or review access to the applications. With an appropriate license for those users, you can then govern access to up to 1500 applications per user.
* **If you will be governing guest's access to the application, link your Azure AD tenant to a subscription for MAU billing**. This step is necessary prior to having a guest request or review their access. For more information, see [billing model for Azure AD External Identities](../external-identities/external-identities-pricing.md).
active-directory Identity Governance Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/identity-governance-overview.md
Organizations need a process to manage access beyond what was initially provisio
Typically, IT delegates access approval decisions to business decision makers. Furthermore, IT can involve the users themselves. For example, users that access confidential customer data in a company's marketing application in Europe need to know the company's policies. Guest users may be unaware of the handling requirements for data in an organization to which they've been invited.
-Organizations can automate the access lifecycle process through technologies such as [dynamic groups](../enterprise-users/groups-dynamic-membership.md), coupled with user provisioning to [SaaS apps](../saas-apps/tutorial-list.md) or [apps integrated with SCIM](../app-provisioning/use-scim-to-provision-users-and-groups.md). Microsoft Entra can also provision access to apps that use [AD groups](../enterprise-users/groups-write-back-portal.md), [other on-premises directories](../app-provisioning/on-premises-ldap-connector-configure.md) or [databases](../app-provisioning/on-premises-sql-connector-configure.md). Organizations can also control which [guest users have access to on-premises applications](../external-identities/hybrid-cloud-to-on-premises.md). These access rights can then be regularly reviewed using recurring [Microsoft Entra access reviews](access-reviews-overview.md). [Microsoft Entra entitlement management](entitlement-management-overview.md) also enables you to define how users request access across packages of group and team memberships, application roles, and SharePoint Online roles. For more information, see the [simplifying identity governance tasks with automation](#simplifying-identity-governance-tasks-with-automation) section below to select the appropriate Microsoft Entra features for your access lifecycle automation scenarios.
+Organizations can automate the access lifecycle process through technologies such as [dynamic groups](../enterprise-users/groups-dynamic-membership.md), coupled with user provisioning to [SaaS apps](../saas-apps/tutorial-list.md) or [apps integrated with SCIM](../app-provisioning/use-scim-to-provision-users-and-groups.md). Microsoft Entra can also provision access to apps that use [AD groups](../enterprise-users/groups-write-back-portal.md), [other on-premises directories](../app-provisioning/on-premises-ldap-connector-configure.md) or [databases](../app-provisioning/on-premises-sql-connector-configure.md), or that have a [SOAP or REST API](../app-provisioning/on-premises-web-services-connector.md) including [SAP](sap.md). Organizations can also control which [guest users have access to on-premises applications](../external-identities/hybrid-cloud-to-on-premises.md). These access rights can then be regularly reviewed using recurring [Microsoft Entra access reviews](access-reviews-overview.md). [Microsoft Entra entitlement management](entitlement-management-overview.md) also enables you to define how users request access across packages of group and team memberships, application roles, and SharePoint Online roles. For more information, see the [simplifying identity governance tasks with automation](#simplifying-identity-governance-tasks-with-automation) section below to select the appropriate Microsoft Entra features for your access lifecycle automation scenarios.
Lifecycle access can be automated using workflows. [Workflows can be created](create-lifecycle-workflow.md) to automatically add user to groups, where access to applications and resources are granted. Users can also be moved when their condition within the organization changes to different groups, and can even be removed entirely from all groups.
active-directory Manage Workflow Properties https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/manage-workflow-properties.md
You can update the following basic information without creating a new workflow.
If you change any other parameters, a new version is required to be created as outlined in the [Managing workflow versions](manage-workflow-tasks.md) article.
-If done via the Azure portal, the new version is created automatically. If done using Microsoft Graph, you must manually create a new version of the workflow. For more information, see [Edit the properties of a workflow using Microsoft Graph](#edit-the-properties-of-a-workflow-using-microsoft-graph).
+If done via the Microsoft Entra admin center, the new version is created automatically. If done using Microsoft Graph, you must manually create a new version of the workflow. For more information, see [Edit the properties of a workflow using Microsoft Graph](#edit-the-properties-of-a-workflow-using-microsoft-graph).
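A minimal sketch of that manual Microsoft Graph step, assuming the `LifecycleWorkflows.ReadWrite.All` permission. The workflow ID is a placeholder, the `workflow` wrapper follows the `createNewVersion` reference, and a complete definition (tasks and execution conditions) is required in practice:

```powershell
# Minimal sketch: create a new version of an existing workflow. The body is
# abbreviated; a complete workflow definition is required in practice.
Connect-MgGraph -Scopes "LifecycleWorkflows.ReadWrite.All"
$workflowId = "<workflow-id>"   # placeholder

$body = @{
    workflow = @{
        displayName = "Real-time employee termination (v2)"   # illustrative
        # ...tasks and executionConditions for the new version go here...
    }
}

Invoke-MgGraphRequest -Method POST `
    -Uri "https://graph.microsoft.com/v1.0/identityGovernance/lifecycleWorkflows/workflows/$workflowId/createNewVersion" `
    -Body ($body | ConvertTo-Json -Depth 10)
```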
-## Edit the properties of a workflow using the Azure portal
+## Edit the properties of a workflow using the Microsoft Entra admin center
+To edit the properties of a workflow using the Microsoft Entra admin center, you do the following steps:
-To edit the properties of a workflow using the Azure portal, you do the following steps:
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Lifecycle Workflows Administrator](../roles/permissions-reference.md#lifecycle-workflows-administrator).
-1. Sign in to the [Azure portal](https://portal.azure.com).
-
-1. Type in **Identity Governance** on the search bar near the top of the page and select it.
-
-1. On the left menu, select **Lifecycle workflows**.
-
-1. On the left menu, select **Workflows**.
+1. Browse to **Identity governance** > **Lifecycle workflows** > **Workflows**.
1. Here you see a list of all of your current workflows. Select the workflow that you want to edit.
active-directory Manage Workflow Tasks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/manage-workflow-tasks.md
Workflows created with Lifecycle Workflows are able to grow and change with the needs of your organization. Workflows exist as versions from creation. When you make changes other than to basic information, you create a new version of the workflow. For more information, see [Manage a workflow's properties](manage-workflow-properties.md).
-Changing a workflow's tasks or execution conditions requires the creation of a new version of that workflow. Tasks within workflows can be added, reordered, and removed at will. Updating a workflow's tasks or execution conditions within the Azure portal will trigger the creation of a new version of the workflow automatically. Making these updates in Microsoft Graph will require the new workflow version to be created manually.
+Changing a workflow's tasks or execution conditions requires the creation of a new version of that workflow. Tasks within workflows can be added, reordered, and removed at will. Updating a workflow's tasks or execution conditions within the Microsoft Entra admin center will trigger the creation of a new version of the workflow automatically. Making these updates in Microsoft Graph will require the new workflow version to be created manually.
-## Edit the tasks of a workflow using the Azure portal
+## Edit the tasks of a workflow using the Microsoft Entra admin center
+Tasks within workflows can be added, edited, reordered, and removed at will. To edit the tasks of a workflow using the Microsoft Entra admin center, you complete the following steps:
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Lifecycle Workflows Administrator](../roles/permissions-reference.md#lifecycle-workflows-administrator).
-Tasks within workflows can be added, edited, reordered, and removed at will. To edit the tasks of a workflow using the Azure portal, you complete the following steps:
-
-1. Sign in to the [Azure portal](https://portal.azure.com).
-
-1. Type in **Identity Governance** on the search bar near the top of the page and select it.
-
-1. In the left menu, select **Lifecycle workflows**.
-
-1. In the left menu, select **workflows**.
+1. Browse to **Identity governance** > **Lifecycle workflows** > **Workflows**.
-1. On the left side of the screen, select **Tasks**.
+1. Select the workflow whose tasks you want to edit, and then on the left side of the screen, select **Tasks**.
1. You can add a task to the workflow by selecting the **Add task** button.
Tasks within workflows can be added, edited, reordered, and removed at will. To
1. After making changes, select **save** to capture changes to the tasks.
-## Edit the execution conditions of a workflow using the Azure portal
+## Edit the execution conditions of a workflow using the Microsoft Entra admin center
-To edit the execution conditions of a workflow using the Azure portal, you do the following steps:
+To edit the execution conditions of a workflow using the Microsoft Entra admin center, you do the following steps:
1. On the left menu of Lifecycle Workflows, select **Workflows**.
To edit the execution conditions of a workflow using the Azure portal, you do th
1. After making changes, select **save** to capture changes to the execution conditions.
-## See versions of a workflow using the Azure portal
+## See versions of a workflow using the Microsoft Entra admin center
1. On the left menu of Lifecycle Workflows, select **Workflows**.
active-directory On Demand Workflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/on-demand-workflow.md
Scheduled workflows by default run every 3 hours, but can also run on-demand so that they can be applied to specific users whenever you see fit. A workflow can be run on demand for any user, and doesn't take into account whether or not a user meets the workflow's execution conditions. Running a workflow on-demand allows you to test workflows before their scheduled run. This testing, on a set of up to 10 users at a time, allows you to see how a workflow will run before it processes a larger set of users. Testing your workflows before their scheduled runs helps you proactively solve potential lifecycle issues more quickly.
-## Run a workflow on-demand in the Azure portal
+## Run a workflow on-demand in the Microsoft Entra admin center
-
-Use the following steps to run a workflow on-demand.
+Use the following steps to run a workflow on-demand:
>[!NOTE]
>To be run on demand, the workflow must be enabled.
-1. Sign in to the [Azure portal](https://portal.azure.com).
-
-1. Type in **Identity Governance** on the search bar near the top of the page and select it.
-
-1. On the left menu, select **Lifecycle workflows**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Lifecycle Workflows Administrator](../roles/permissions-reference.md#lifecycle-workflows-administrator).
-1. select **Workflows**
+1. Browse to **Identity governance** > **Lifecycle workflows** > **Workflows**.
1. On the workflow screen, select the specific workflow you want to run.
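On-demand runs can also be started with Microsoft Graph. A minimal sketch, where the workflow and user IDs are placeholders and the `subjects` body follows the workflow `activate` action:

```powershell
# Minimal sketch: run an enabled workflow on demand for specific users
# (up to 10 per run). IDs are placeholders.
Connect-MgGraph -Scopes "LifecycleWorkflows.ReadWrite.All"
$workflowId = "<workflow-id>"

$body = @{
    subjects = @(
        @{ id = "<user-object-id>" }
    )
}

Invoke-MgGraphRequest -Method POST `
    -Uri "https://graph.microsoft.com/v1.0/identityGovernance/lifecycleWorkflows/workflows/$workflowId/activate" `
    -Body ($body | ConvertTo-Json -Depth 5)
```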
active-directory Perform Access Review https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/perform-access-review.md
If you're the second-stage or third-stage reviewer, you'll also see the decision
Approve or deny access as outlined in [Review access for one or more users](#review-access-for-one-or-more-users).

> [!NOTE]
-> The next stage of the review won't become active until the duration specified during the access review setup has passed. If the administrator believes a stage is done but the review duration for this stage has not expired yet, they can use the **Stop current stage** button in the overview of the access review in the Azure portal. This action will close the active stage and start the next stage.
+> The next stage of the review won't become active until the duration specified during the access review setup has passed. If the administrator believes a stage is done but the review duration for this stage has not expired yet, they can use the **Stop current stage** button in the overview of the access review in the Microsoft Entra admin center. This action will close the active stage and start the next stage.
### Review access for B2B direct connect users in Teams shared channels and Microsoft 365 groups (preview)
active-directory Sap https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/sap.md
na Previously updated : 06/28/2023 Last updated : 08/24/2023
After your users are in Azure AD, you can provision accounts into the various Sa
### Provision identities into on-premises SAP systems that SAP IPS doesn't support
-Customers who have yet to transition from applications such as SAP ERP Central Component (SAP ECC) to SAP S/4HANA can still rely on the Azure AD provisioning service to provision user accounts. Within SAP ECC, you expose the necessary Business Application Programming Interfaces (BAPIs) for creating, updating, and deleting users. Within Azure AD, you have two options:
+Customers who have yet to transition from applications such as SAP R/3 and SAP ERP Central Component (SAP ECC) to SAP S/4HANA can still rely on the Azure AD provisioning service to provision user accounts. Within SAP R/3 and SAP ECC, you expose the necessary Business Application Programming Interfaces (BAPIs) for creating, updating, and deleting users. Within Azure AD, you have two options:
-* Use the lightweight Azure AD provisioning agent and [web services connector](/azure/active-directory/app-provisioning/on-premises-web-services-connector) to [provision users into apps such as SAP ECC](/azure/active-directory/app-provisioning/on-premises-sap-connector-configure?branch=pr-en-us-243167).
+* Use the lightweight Azure AD provisioning agent and [web services connector](/azure/active-directory/app-provisioning/on-premises-web-services-connector) to [provision users into apps such as SAP ECC](/azure/active-directory/app-provisioning/on-premises-sap-connector-configure).
* In scenarios where you need to do more complex group and role management, use [Microsoft Identity Manager](/microsoft-identity-manager/reference/microsoft-identity-manager-2016-ma-ws) to manage access to your legacy SAP applications.

## Trigger custom workflows
active-directory Trigger Custom Task https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/trigger-custom-task.md
Lifecycle Workflows can be used to trigger custom tasks via an extension to Azur
For more information about Lifecycle Workflows extensibility, see: [Workflow Extensibility](lifecycle-workflow-extensibility.md).
-## Create a custom task extension using the Azure portal
+## Create a custom task extension using the Microsoft Entra admin center
-To use a custom task extension in your workflow, first a custom task extension must be created to be linked with an Azure Logic App. You're able to create a Logic App at the same time you're creating a custom task extension. To do this, you complete these steps:
-1. Sign in to the [Azure portal](https://portal.azure.com).
+To use a custom task extension in your workflow, you must first create a custom task extension that's linked to an Azure Logic App. You can create a Logic App at the same time you create the custom task extension. To do so, complete these steps:
-1. Select **Azure Active Directory** and then select **Identity Governance**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Lifecycle Workflows Administrator](../roles/permissions-reference.md#lifecycle-workflows-administrator).
-1. In the left menu, select **Lifecycle Workflows**.
+1. Browse to **Identity governance** > **Lifecycle workflows** > **Workflows**.
1. On the Lifecycle workflows screen, select **Custom task extension**.
active-directory Tutorial Offboard Custom Workflow Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/tutorial-offboard-custom-workflow-portal.md
Title: Execute employee termination tasks by using lifecycle workflows
-description: Learn how to remove users from an organization in real time on their last day of work by using lifecycle workflows in the Azure portal.
+description: Learn how to remove users from an organization in real time on their last day of work by using lifecycle workflows in the Microsoft Entra admin center.
# Execute employee termination tasks by using lifecycle workflows
-This tutorial provides a step-by-step guide on how to execute a real-time employee termination by using lifecycle workflows in the Azure portal.
+This tutorial provides a step-by-step guide on how to execute a real-time employee termination by using lifecycle workflows in the Microsoft Entra admin center.
This *leaver* scenario runs a workflow on demand and accomplishes the following tasks:
The leaver scenario includes the following steps:
## Create a workflow by using the leaver template
+Use the following steps to create a leaver on-demand workflow that will execute a real-time employee termination by using lifecycle workflows in the Microsoft Entra admin center:
-Use the following steps to create a leaver on-demand workflow that will execute a real-time employee termination by using lifecycle workflows in the Azure portal:
-
-1. Sign in to the [Azure portal](https://portal.azure.com).
-2. On the right, select **Azure Active Directory**.
-3. Select **Identity Governance**.
-4. Select **Lifecycle workflows**.
-5. On the **Overview** tab, select **New workflow**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Lifecycle Workflows Administrator](../roles/permissions-reference.md#lifecycle-workflows-administrator).
+2. Select **Identity Governance**.
+3. Select **Lifecycle workflows**.
+4. On the **Overview** tab, select **New workflow**.
:::image type="content" source="media/tutorial-lifecycle-workflows/new-workflow.png" alt-text="Screenshot of the Overview tab and the button for creating a new workflow." lightbox="media/tutorial-lifecycle-workflows/new-workflow.png":::
-6. From the collection of templates, choose **Select** under **Real-time employee termination**.
+5. From the collection of templates, choose **Select** under **Real-time employee termination**.
:::image type="content" source="media/tutorial-lifecycle-workflows/select-template.png" alt-text="Screenshot of selecting a workflow template for real-time employee termination." lightbox="media/tutorial-lifecycle-workflows/select-template.png":::
-7. Configure basic information about the workflow, and then select **Next: Review tasks**.
+6. Configure basic information about the workflow, and then select **Next: Review tasks**.
:::image type="content" source="media/tutorial-lifecycle-workflows/real-time-leaver.png" alt-text="Screenshot of the tab for basic workflow information." lightbox="media/tutorial-lifecycle-workflows/real-time-leaver.png":::
-8. Inspect the tasks if you want, but no additional configuration is needed. Select **Next: Select users** when you're finished.
+7. Inspect the tasks if you want, but no additional configuration is needed. Select **Next: Select users** when you're finished.
:::image type="content" source="media/tutorial-lifecycle-workflows/real-time-tasks.png" alt-text="Screenshot of the tab for reviewing template tasks." lightbox="media/tutorial-lifecycle-workflows/real-time-tasks.png":::
-9. Choose the **Select users to run now** option. It allows you to select users for which the workflow will be executed immediately after creation. Regardless of the selection, you can run the workflow on demand later at any time, as needed.
+8. Choose the **Select users to run now** option. It allows you to select users for whom the workflow will be executed immediately after creation. Regardless of the selection, you can run the workflow on demand later at any time, as needed.
:::image type="content" source="media/tutorial-lifecycle-workflows/real-time-users.png" alt-text="Screenshot of the option for selecting users to run now." lightbox="media/tutorial-lifecycle-workflows/real-time-users.png":::
-10. Select **Add users** to designate the users for this workflow.
+9. Select **Add users** to designate the users for this workflow.
:::image type="content" source="media/tutorial-lifecycle-workflows/real-time-add-users.png" alt-text="Screenshot of the button for adding users." lightbox="media/tutorial-lifecycle-workflows/real-time-add-users.png":::
-11. A panel with the list of available users appears on the right side of the window. Choose **Select** when you're done with your selection.
+10. A panel with the list of available users appears on the right side of the window. Choose **Select** when you're done with your selection.
:::image type="content" source="media/tutorial-lifecycle-workflows/real-time-user-list.png" alt-text="Screenshot of a list of available users." lightbox="media/tutorial-lifecycle-workflows/real-time-user-list.png":::
-12. Select **Next: Review and create** when you're satisfied with your selection of users.
+11. Select **Next: Review and create** when you're satisfied with your selection of users.
:::image type="content" source="media/tutorial-lifecycle-workflows/real-time-review-users.png" alt-text="Screenshot of added users." lightbox="media/tutorial-lifecycle-workflows/real-time-review-users.png":::
-13. Verify that the information is correct, and then select **Create**.
+12. Verify that the information is correct, and then select **Create**.
:::image type="content" source="media/tutorial-lifecycle-workflows/real-time-create.png" alt-text="Screenshot of the tab for reviewing workflow choices, along with the button for creating the workflow." lightbox="media/tutorial-lifecycle-workflows/real-time-create.png":::
To run the workflow immediately, you can use the on-demand feature.
> [!NOTE]
> You currently can't run a workflow on demand if it's set to **Disabled**. You need to set the workflow to **Enabled** to use the on-demand feature.
-To run a workflow on demand for users by using the Azure portal:
+To run a workflow on demand for users by using the Microsoft Entra admin center:
1. On the workflow screen, select the specific workflow that you want to run.
2. Select **Run on demand**.
active-directory Tutorial Onboard Custom Workflow Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/tutorial-onboard-custom-workflow-portal.md
Title: 'Automate employee onboarding tasks before their first day of work with Azure portal'
-description: Tutorial for onboarding users to an organization using Lifecycle workflows with Azure portal.
+ Title: 'Automate employee onboarding tasks before their first day of work with the Microsoft Entra admin center'
+description: Tutorial for onboarding users to an organization using Lifecycle workflows with the Microsoft Entra admin center.
-# Automate employee onboarding tasks before their first day of work with Azure portal
+# Automate employee onboarding tasks before their first day of work with the Microsoft Entra admin center
-This tutorial provides a step-by-step guide on how to automate prehire tasks with Lifecycle workflows using the Azure portal.
+This tutorial provides a step-by-step guide on how to automate prehire tasks with Lifecycle workflows using the Microsoft Entra admin center.
This prehire scenario generates a temporary access pass for our new employee and sends it via email to the user's new manager.
Detailed breakdown of the relevant attributes:
The pre-hire scenario can be broken down into the following: 
 - **Prerequisite:** Create two user accounts, one to represent an employee and one to represent a manager
- - **Prerequisite:** Editing the attributes required for this scenario in the portal
+ - **Prerequisite:** Editing the attributes required for this scenario in the admin center
 - **Prerequisite:** Edit the attributes for this scenario using Microsoft Graph Explorer
 - **Prerequisite:** Enabling and using Temporary Access Pass (TAP)
 - Creating the lifecycle management workflow
The pre-hire scenario can be broken down into the following:
## Create a workflow using prehire template
+Use the following steps to create a pre-hire workflow that generates a TAP and sends it via email to the user's manager using the Microsoft Entra admin center.
-Use the following steps to create a pre-hire workflow that generates a TAP and send it via email to the user's manager using the Azure portal.
-
-1. Sign in to the [Azure portal](https://portal.azure.com).
-2. On the right, select **Azure Active Directory**.
-3. Select **Identity Governance**.
-4. Select **Lifecycle workflows**.
-5. On the **Overview** page, select **New workflow**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Lifecycle Workflows Administrator](../roles/permissions-reference.md#lifecycle-workflows-administrator).
+2. Select **Identity Governance**.
+3. Select **Lifecycle workflows**.
+4. On the **Overview** page, select **New workflow**.
:::image type="content" source="media/tutorial-lifecycle-workflows/new-workflow.png" alt-text="Screenshot of selecting a new workflow." lightbox="media/tutorial-lifecycle-workflows/new-workflow.png":::
-6. From the templates, select **select** under **Onboard pre-hire employee**.
+5. From the templates, select **Select** under **Onboard pre-hire employee**.
:::image type="content" source="media/tutorial-lifecycle-workflows/select-template.png" alt-text="Screenshot of selecting workflow template." lightbox="media/tutorial-lifecycle-workflows/select-template.png":::
-7. Next, you configure the basic information about the workflow. This information includes when the workflow triggers, known as **Days from event**. So in this case, the workflow triggers two days before the employee's hire date. On the onboard pre-hire employee screen, add the following settings and then select **Next: Configure Scope**.
+6. Next, you configure the basic information about the workflow. This information includes when the workflow triggers, known as **Days from event**. So in this case, the workflow triggers two days before the employee's hire date. On the onboard pre-hire employee screen, add the following settings and then select **Next: Configure Scope**.
:::image type="content" source="media/tutorial-lifecycle-workflows/configure-scope.png" alt-text="Screenshot of selecting a configuration scope." lightbox="media/tutorial-lifecycle-workflows/configure-scope.png":::
-8. Next, you configure the scope. The scope determines which users this workflow runs against. In this case, it is on all users in the Sales department. On the configure scope screen, under **Rule** add the following settings and then select **Next: Review tasks**. For a full list of supported user properties, see [Supported user properties and query parameters](/graph/api/resources/identitygovernance-rulebasedsubjectset?view=graph-rest-beta&preserve-view=true#supported-user-properties-and-query-parameters).
+7. Next, you configure the scope. The scope determines which users this workflow runs against. In this case, it applies to all users in the Sales department. On the configure scope screen, under **Rule**, add the following settings and then select **Next: Review tasks**. For a full list of supported user properties, see [Supported user properties and query parameters](/graph/api/resources/identitygovernance-rulebasedsubjectset?view=graph-rest-beta&preserve-view=true#supported-user-properties-and-query-parameters).
:::image type="content" source="media/tutorial-lifecycle-workflows/review-tasks.png" alt-text="Screenshot of selecting review tasks." lightbox="media/tutorial-lifecycle-workflows/review-tasks.png":::
-9. On the following page, you may inspect the task if desired but no additional configuration is needed. Select **Next: Review + Create** when you're finished.
+8. On the following page, you may inspect the task if desired but no additional configuration is needed. Select **Next: Review + Create** when you're finished.
:::image type="content" source="media/tutorial-lifecycle-workflows/onboard-review-create.png" alt-text="Screenshot of reviewing an on-board workflow." lightbox="media/tutorial-lifecycle-workflows/onboard-review-create.png":::
-10. On the review blade, verify the information is correct and select **Create**.
+9. On the review blade, verify the information is correct and select **Create**.
   :::image type="content" source="media/tutorial-lifecycle-workflows/onboard-create.png" alt-text="Screenshot of creating an onboard workflow." lightbox="media/tutorial-lifecycle-workflows/onboard-create.png":::
## Run the workflow
Now that the workflow is created, it will automatically run the workflow every 3
>[!NOTE]
>Be aware that you currently cannot run a workflow on-demand if it is set to disabled. You need to set the workflow to enabled to use the on-demand feature.
-To run a workflow on-demand, for users using the Azure portal, do the following steps:
+To run a workflow on-demand, for users using the Microsoft Entra admin center, do the following steps:
1. On the workflow screen, select the specific workflow you want to run.
2. Select **Run on demand**.
active-directory Tutorial Prepare User Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/tutorial-prepare-user-accounts.md
Last updated 08/02/2023 -+ # Preparing user accounts for Lifecycle workflows tutorials
active-directory Tutorial Scheduled Leaver Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/tutorial-scheduled-leaver-portal.md
Title: Automate employee offboarding tasks after their last day of work with Azure portal
-description: Tutorial for post off-boarding users from an organization using Lifecycle workflows with Azure portal.
+ Title: Automate employee offboarding tasks after their last day of work with the Microsoft Entra admin center
+description: Tutorial for post off-boarding users from an organization using Lifecycle workflows with the Microsoft Entra admin center.
-# Automate employee offboarding tasks after their last day of work with Azure portal
+# Automate employee offboarding tasks after their last day of work with the Microsoft Entra admin center
-This tutorial provides a step-by-step guide on how to configure off-boarding tasks for employees after their last day of work with Lifecycle workflows using the Azure portal.
+This tutorial provides a step-by-step guide on how to configure off-boarding tasks for employees after their last day of work with Lifecycle workflows using the Microsoft Entra admin center.
This post off-boarding scenario runs a scheduled workflow and accomplishes the following tasks:
The scheduled leaver scenario can be broken down into the following:
## Create a workflow using scheduled leaver template
+Use the following steps to create a scheduled leaver workflow that will configure off-boarding tasks for employees after their last day of work with Lifecycle workflows using the Microsoft Entra admin center.
-Use the following steps to create a scheduled leaver workflow that will configure off-boarding tasks for employees after their last day of work with Lifecycle workflows using the Azure portal.
-
- 1. Sign in to the [Azure portal](https://portal.azure.com).
- 2. On the right, select **Azure Active Directory**.
- 3. Select **Identity Governance**.
- 4. Select **Lifecycle workflows**.
- 5. On the **Overview** page, select **New workflow**.
+ 1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Lifecycle Workflows Administrator](../roles/permissions-reference.md#lifecycle-workflows-administrator).
+ 2. Select **Identity Governance**.
+ 3. Select **Lifecycle workflows**.
+ 4. On the **Overview** page, select **New workflow**.
:::image type="content" source="media/tutorial-lifecycle-workflows/new-workflow.png" alt-text="Screenshot of selecting a new workflow." lightbox="media/tutorial-lifecycle-workflows/new-workflow.png":::
- 6. From the templates, select **Select** under **Post-offboarding of an employee**.
+ 5. From the templates, select **Select** under **Post-offboarding of an employee**.
:::image type="content" source="media/tutorial-lifecycle-workflows/select-leaver-template.png" alt-text="Screenshot of selecting a leaver workflow." lightbox="media/tutorial-lifecycle-workflows/select-leaver-template.png":::
- 7. Next, you'll configure the basic information about the workflow. This information includes when the workflow triggers, known as **Days from event**. So in this case, the workflow will trigger seven days after the employee's leave date. On the post-offboarding of an employee screen, add the following settings and then select **Next: Configure Scope**.
+ 6. Next, you'll configure the basic information about the workflow. This information includes when the workflow triggers, known as **Days from event**. So in this case, the workflow will trigger seven days after the employee's leave date. On the post-offboarding of an employee screen, add the following settings and then select **Next: Configure Scope**.
:::image type="content" source="media/tutorial-lifecycle-workflows/leaver-basics.png" alt-text="Screenshot of leaver template basics information for a workflow." lightbox="media/tutorial-lifecycle-workflows/leaver-basics.png":::
- 8. Next, you'll configure the scope. The scope determines which users this workflow runs against. In this case, it is on all users in the Marketing department. On the configure scope screen, under **Rule** add the following and then select **Next: Review tasks**. For a full list of supported user properties, see [Supported user properties and query parameters](/graph/api/resources/identitygovernance-rulebasedsubjectset?view=graph-rest-beta&preserve-view=true#supported-user-properties-and-query-parameters)
+ 7. Next, you'll configure the scope. The scope determines which users this workflow runs against. In this case, it applies to all users in the Marketing department. On the configure scope screen, under **Rule**, add the following and then select **Next: Review tasks**. For a full list of supported user properties, see [Supported user properties and query parameters](/graph/api/resources/identitygovernance-rulebasedsubjectset?view=graph-rest-beta&preserve-view=true#supported-user-properties-and-query-parameters).
:::image type="content" source="media/tutorial-lifecycle-workflows/leaver-scope.png" alt-text="Screenshot of reviewing scope details for a leaver workflow." lightbox="media/tutorial-lifecycle-workflows/leaver-scope.png":::
- 9. On the following page, you may inspect the tasks if desired but no additional configuration is needed. Select **Next: Select users** when you're finished.
+ 8. On the following page, you may inspect the tasks if desired but no additional configuration is needed. Select **Next: Select users** when you're finished.
:::image type="content" source="media/tutorial-lifecycle-workflows/review-leaver-tasks.png" alt-text="Screenshot of leaver workflow tasks." lightbox="media/tutorial-lifecycle-workflows/review-leaver-tasks.png":::
-10. On the review blade, verify the information is correct and select **Create**.
+9. On the review blade, verify the information is correct and select **Create**.
   :::image type="content" source="media/tutorial-lifecycle-workflows/create-leaver-workflow.png" alt-text="Screenshot of a leaver workflow being created." lightbox="media/tutorial-lifecycle-workflows/create-leaver-workflow.png":::
>[!NOTE]
Now that the workflow is created, it will automatically run the workflow every 3
>[!NOTE]
>Be aware that you currently cannot run a workflow on-demand if it is set to disabled. You need to set the workflow to enabled to use the on-demand feature.
-To run a workflow on-demand, for users using the Azure portal, do the following steps:
+To run a workflow on-demand, for users using the Microsoft Entra admin center, do the following steps:
1. On the workflow screen, select the specific workflow you want to run.
2. Select **Run on demand**.
active-directory Understanding Lifecycle Workflows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/understanding-lifecycle-workflows.md
A workflow can be broken down into the following three main parts:
## Templates
-Creating a workflow via the Azure portal requires the use of a template. A Lifecycle Workflow template is a framework that is used for predefined tasks, and helps automate the creation of a workflow.
+Creating a workflow via the Microsoft Entra admin center requires the use of a template. A Lifecycle Workflow template is a framework that is used for predefined tasks, and helps automate the creation of a workflow.
[![Understanding workflow template diagram.](media/understanding-lifecycle-workflows/workflow-3.png)](media/understanding-lifecycle-workflows/workflow-3.png#lightbox)
The **My Feed** section of the workflow overview contains a quick peek into when
The **Quick Action** section allows you to quickly take action with your workflow. These quick actions can either be making the workflow do something, or used for history or editing purposes. The following actions you can take are:
- Run on Demand: Allows you to quickly run the workflow on demand. For more information on this process, see: [Run a workflow on-demand](on-demand-workflow.md)
-- Edit tasks: Allows you to add, delete, edit, or reorder tasks within the workflow. For more information on this process, see: [Edit the tasks of a workflow using the Azure portal](manage-workflow-tasks.md#edit-the-tasks-of-a-workflow-using-the-azure-portal)
+- Edit tasks: Allows you to add, delete, edit, or reorder tasks within the workflow. For more information on this process, see: [Edit the tasks of a workflow using the Microsoft Entra admin center](manage-workflow-tasks.md#edit-the-tasks-of-a-workflow-using-the-microsoft-entra-admin-center)
- View Workflow History: Allows you to view the history of the workflow. For more information on the three history perspectives, see: [Lifecycle Workflows history](lifecycle-workflow-history.md)
Actions taken from the overview of a workflow allow you to quickly complete tasks, which can normally be done via the manage section of a workflow.
The offset determines how many days before or after the time-based attribute the
> [!NOTE]
-> The offsetInDays value in the Azure portal is shown as *Days from event*. When you schedule a workflow to run, this value is used as the baseline for who a workflow will run. Currently there is a 3 day window in processing scheduled workflows. For example, if you schedule a workflow to run for users who joined 7 days ago, a user who meets the execution conditions for the workflow, but joined between 7 to 10 days ago would have the workflow ran for them.
+> The offsetInDays value in the Microsoft Entra admin center is shown as *Days from event*. When you schedule a workflow to run, this value is used as the baseline for whom the workflow will run. Currently there is a 3 day window in processing scheduled workflows. For example, if you schedule a workflow to run for users who joined 7 days ago, a user who meets the execution conditions for the workflow, but joined between 7 to 10 days ago, would have the workflow run for them.
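To make the window concrete, here's a small illustrative sketch; the numbers come straight from the example in the note, and the variable names are ours, not part of any product:

```
# Illustration only: offsetInDays = 7 with the current 3-day processing window
$offsetInDays = 7
$windowDays   = 3
$today        = (Get-Date).Date
$newestJoin   = $today.AddDays(-$offsetInDays)                  # joined exactly 7 days ago
$oldestJoin   = $today.AddDays(-($offsetInDays + $windowDays))  # joined up to 10 days ago
"Workflow would run for users who joined between {0:yyyy-MM-dd} and {1:yyyy-MM-dd}" -f $oldestJoin, $newestJoin
```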
## Configure scope
For a detailed guide on setting the execution conditions for a workflow, see: [C
While newly created workflows are enabled by default, scheduling is an option that must be enabled manually. To verify whether the workflow is scheduled, you can view the **Scheduled** column.
-Once scheduling is enabled, the workflow is evaluated every three hours to determine whether or not it should run based on the execution conditions.
+Once scheduling is enabled, the workflow is evaluated based on the interval set within your workflow settings (default of three hours) to determine whether it should run based on the execution conditions.
[![Workflow template schedule.](media/understanding-lifecycle-workflows/workflow-10.png)](media/understanding-lifecycle-workflows/workflow-10.png#lightbox)
For more information, see: [Lifecycle Workflows Versioning](lifecycle-workflow-v
## Next steps
-- [Create a custom workflow using the Azure portal](tutorial-onboard-custom-workflow-portal.md)
+- [Create a custom workflow using the Microsoft Entra admin center](tutorial-onboard-custom-workflow-portal.md)
- [Create a Lifecycle workflow](create-lifecycle-workflow.md)
active-directory What Is Provisioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/what-is-provisioning.md
For more information, see [What is HR driven provisioning?](../app-provisioning/
In Azure AD, the term **[app provisioning](../app-provisioning/user-provisioning.md)** refers to automatically creating copies of user identities in the applications that users need access to, for applications that have their own data store, distinct from Azure AD or Active Directory. In addition to creating user identities, app provisioning includes the maintenance and removal of user identities from those apps, as the user's status or roles change. Common scenarios include provisioning an Azure AD user into applications like [Dropbox](../saas-apps/dropboxforbusiness-provisioning-tutorial.md), [Salesforce](../saas-apps/salesforce-provisioning-tutorial.md), [ServiceNow](../saas-apps/servicenow-provisioning-tutorial.md), as each of these applications have their own user repository distinct from Azure AD.
-Azure AD also supports provisioning users into applications hosted on-premises or in a virtual machine, without having to open up any firewalls. If your application supports [SCIM](https://aka.ms/scimoverview), or you've built a SCIM gateway to connect to your legacy application, you can use the Azure AD Provisioning agent to [directly connect](/azure/active-directory/app-provisioning/on-premises-scim-provisioning) with your application and automate provisioning and deprovisioning. If you have legacy applications that don't support SCIM and rely on an [LDAP](/azure/active-directory/app-provisioning/on-premises-ldap-connector-configure) user store or a [SQL](/azure/active-directory/app-provisioning/on-premises-sql-connector-configure) database, Azure AD can support those as well.
+Azure AD also supports provisioning users into applications hosted on-premises or in a virtual machine, without having to open up any firewalls. If your application supports [SCIM](https://aka.ms/scimoverview), or you've built a SCIM gateway to connect to your legacy application, you can use the Azure AD Provisioning agent to [directly connect](/azure/active-directory/app-provisioning/on-premises-scim-provisioning) with your application and automate provisioning and deprovisioning. If you have legacy applications that don't support SCIM and rely on an [LDAP](/azure/active-directory/app-provisioning/on-premises-ldap-connector-configure) user store or a [SQL](/azure/active-directory/app-provisioning/on-premises-sql-connector-configure) database, or that have a [SOAP or REST API](../app-provisioning/on-premises-web-services-connector.md), Azure AD can support those as well.
For more information, see [What is app provisioning?](../app-provisioning/user-provisioning.md)
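As an illustration of what such a SCIM exchange looks like, here's a minimal sketch of a SCIM 2.0 user-creation request; the endpoint, token, and user values are placeholders, not any specific product's API:

```
# Hypothetical SCIM endpoint and bearer token; illustrative only
$token   = '<bearer-token>'
$headers = @{ Authorization = "Bearer $token" }
$body    = @{
    schemas  = @('urn:ietf:params:scim:schemas:core:2.0:User')
    userName = 'alice@contoso.com'
    active   = $true
} | ConvertTo-Json
# SCIM-compliant applications expose a /Users endpoint for create operations
Invoke-RestMethod -Method Post -Uri 'https://app.contoso.com/scim/v2/Users' -Headers $headers -ContentType 'application/scim+json' -Body $body
```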
active-directory Custom Attribute Mapping https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/cloud-sync/custom-attribute-mapping.md
-+ Last updated 01/12/2023
active-directory How To Inbound Synch Ms Graph https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/cloud-sync/how-to-inbound-synch-ms-graph.md
+ Last updated 01/11/2023
active-directory How To Install Pshell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/cloud-sync/how-to-install-pshell.md
The Windows server must have TLS 1.2 enabled before you install the Azure AD Con
[![Screenshot of the download agent.](media/how-to-install/new-install-2.png)](media/how-to-install/new-install-2.png#lightbox)</br>
 6. On the right, click **Accept terms and download**.
 7. For the purposes of these instructions, the agent was downloaded to the C:\temp folder.
- 8. Install ProvisioningAgent in quiet mode.
+ 8. Install ProvisioningAgent in quiet mode. [If installing against the US Government Cloud, click here for the alternate code block.](how-to-install-pshell.md#installing-against-us-government-cloud) A sketch of that variant also appears after these steps.
```
$installerProcess = Start-Process 'c:\temp\AADConnectProvisioningAgentSetup.exe' /quiet -NoNewWindow -PassThru
$installerProcess.WaitForExit()
```
- 9. Import the Provisioning Agent PS module.
+ 10. Import the Provisioning Agent PS module.
```
Import-Module "C:\Program Files\Microsoft Azure AD Connect Provisioning Agent\Microsoft.CloudSync.PowerShell.dll"
```
- 10. Connect to Azure AD by using an account with the hybrid identity role. You can customize this section to fetch a password from a secure store.
+ 11. Connect to Azure AD by using an account with the hybrid identity role. You can customize this section to fetch a password from a secure store.
```
$hybridAdminPassword = ConvertTo-SecureString -String "Hybrid identity admin password" -AsPlainText -Force
```
The Windows server must have TLS 1.2 enabled before you install the Azure AD Con
```
Connect-AADCloudSyncAzureAD -Credential $hybridAdminCreds
```
- 11. Add the gMSA account, and provide credentials of the domain admin to create the default gMSA account.
+ 12. Add the gMSA account, and provide credentials of the domain admin to create the default gMSA account.
``` $domainAdminPassword = ConvertTo-SecureString -String "Domain admin password" -AsPlainText -Force
The Windows server must have TLS 1.2 enabled before you install the Azure AD Con
Add-AADCloudSyncGMSA -Credential $domainAdminCreds ```
- 12. Or use the preceding cmdlet to provide a precreated gMSA account.
+ 13. Or use the preceding cmdlet to provide a precreated gMSA account.
```
Add-AADCloudSyncGMSA -CustomGMSAName preCreatedGMSAName$
```
- 13. Add the domain.
+ 14. Add the domain.
```
$contosoDomainAdminPassword = ConvertTo-SecureString -String "Domain admin password" -AsPlainText -Force
```
The Windows server must have TLS 1.2 enabled before you install the Azure AD Con
```
Add-AADCloudSyncADDomain -DomainName contoso.com -Credential $contosoDomainAdminCreds
```
- 14. Or use the preceding cmdlet to configure preferred domain controllers.
+ 15. Or use the preceding cmdlet to configure preferred domain controllers.
```
$preferredDCs = @("PreferredDC1", "PreferredDC2", "PreferredDC3")
Add-AADCloudSyncADDomain -DomainName contoso.com -Credential $contosoDomainAdminCreds -PreferredDomainControllers $preferredDCs
```
- 15. Repeat the previous step to add more domains. Provide the account names and domain names of the respective domains.
- 16. Restart the service.
+ 16. Repeat the previous step to add more domains. Provide the account names and domain names of the respective domains.
+ 17. Restart the service.
```
Restart-Service -Name AADConnectProvisioningAgent
```
- 17. Go to the Azure portal to create the cloud sync configuration.
+ 18. Go to the Azure portal to create the cloud sync configuration.
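For reference, a sketch of the US Government Cloud variant of the quiet install from step 8, combining the quiet-install pattern above with the ENVIRONMENTNAME switch documented for the installer; treat the argument layout as an assumption, not the canonical alternate block:

```
# Sketch: quiet install against the US Government Cloud (argument layout assumed)
$installerProcess = Start-Process -FilePath 'c:\temp\AADConnectProvisioningAgentSetup.exe' -ArgumentList '/quiet', 'ENVIRONMENTNAME=AzureUSGovernment' -NoNewWindow -PassThru
$installerProcess.WaitForExit()
```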
## Provisioning agent gMSA PowerShell cmdlets
Now that you've installed the agent, you can apply more granular permissions to the gMSA. For information and step-by-step instructions on how to configure the permissions, see [Azure AD Connect cloud provisioning agent gMSA PowerShell cmdlets](how-to-gmsa-cmdlets.md).
active-directory How To Install https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/cloud-sync/how-to-install.md
To update an existing agent to use the Group Managed Service Account created dur
>[!IMPORTANT]
> After you've installed the agent, you must configure and enable it before it will start synchronizing users. To configure a new agent, see [Create a new configuration for Azure AD Connect cloud sync](how-to-configure.md).
-## Enable password writeback in Azure AD Connect cloud sync
++
+## Enable password writeback in cloud sync
+
+You can enable password writeback in SSPR directly in the portal or through PowerShell.
+
+### Enable password writeback in the portal
+To use *password writeback* and enable the self-service password reset (SSPR) service to detect the cloud sync agent by using the portal, complete the following steps:
+
+ 1. Sign in to the [Azure portal](https://portal.azure.com) using a Global Administrator account.
+ 2. Search for and select **Azure Active Directory**, select **Password reset**, then choose **On-premises integration**.
+ 3. Check the option for **Enable password write back for synced users**.
+ 4. (optional) If Azure AD Connect provisioning agents are detected, you can additionally check the option for **Write back passwords with Azure AD Connect cloud sync**.
+ 5. Set the option **Allow users to unlock accounts without resetting their password** to *Yes*.
+ 6. When ready, select **Save**.
+
+### Enable password writeback by using PowerShell
To use *password writeback* and enable the self-service password reset (SSPR) service to detect the cloud sync agent, use the `Set-AADCloudSyncPasswordWritebackConfiguration` cmdlet and the tenant's global administrator credentials.
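The original code block is truncated in this digest; the following is a minimal sketch, assuming the cmdlet takes an enable switch and a credential (verify the exact parameter names against the module's help):

```
# Import the cloud sync module installed with the provisioning agent
Import-Module "C:\Program Files\Microsoft Azure AD Connect Provisioning Agent\Microsoft.CloudSync.PowerShell.dll"
# Assumed parameters: enable writeback with Global Administrator credentials
Set-AADCloudSyncPasswordWritebackConfiguration -Enable $true -Credential (Get-Credential)
```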
active-directory Migrate Azure Ad Connect To Cloud Sync https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/cloud-sync/migrate-azure-ad-connect-to-cloud-sync.md
+ Last updated 01/17/2023
active-directory Reference Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/cloud-sync/reference-powershell.md
+ Last updated 01/17/2023
active-directory How To Bypassdirsyncoverrides https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/how-to-bypassdirsyncoverrides.md
+
active-directory How To Connect Emergency Ad Fs Certificate Rotation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/how-to-connect-emergency-ad-fs-certificate-rotation.md
+ Last updated 01/26/2023
active-directory How To Connect Fed O365 Certs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/how-to-connect-fed-o365-certs.md
ms.assetid: 543b7dc1-ccc9-407f-85a1-a9944c0ba1be
na+ Last updated 01/26/2023
active-directory How To Connect Fed Saml Idp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/how-to-connect-fed-saml-idp.md
description: This document describes using a SAML 2.0 compliant Idp for single s
-+ na
active-directory How To Connect Fed Single Adfs Multitenant Federation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/how-to-connect-fed-single-adfs-multitenant-federation.md
ms.assetid:
na+ Last updated 01/26/2023
active-directory How To Connect Install Existing Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/how-to-connect-install-existing-tenant.md
description: This topic describes how to use Connect when you have an existing A
+ Last updated 01/26/2023
active-directory How To Connect Install Multiple Domains https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/how-to-connect-install-multiple-domains.md
ms.assetid: 5595fb2f-2131-4304-8a31-c52559128ea4
na+ Last updated 01/26/2023
active-directory How To Connect Install Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/how-to-connect-install-prerequisites.md
ms.assetid: 91b88fda-bca6-49a8-898f-8d906a661f07
na+ Last updated 05/02/2023
active-directory How To Connect Password Hash Synchronization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/how-to-connect-password-hash-synchronization.md
ms.assetid: 05f16c3e-9d23-45dc-afca-3d0fa9dbf501 + Last updated 05/18/2023
active-directory How To Connect Sync Change The Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/how-to-connect-sync-change-the-configuration.md
ms.assetid: 7b9df836-e8a5-4228-97da-2faec9238b31 + Last updated 01/26/2023
active-directory How To Connect Sync Feature Preferreddatalocation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/how-to-connect-sync-feature-preferreddatalocation.md
description: Describes how to put your Microsoft 365 user resources close to the
+ Last updated 01/26/2023
active-directory How To Connect Syncservice Duplicate Attribute Resiliency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/how-to-connect-syncservice-duplicate-attribute-resiliency.md
ms.assetid: 537a92b7-7a84-4c89-88b0-9bce0eacd931
na+ Last updated 01/26/2023
active-directory How To Connect Syncservice Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/how-to-connect-syncservice-features.md
ms.assetid: 213aab20-0a61-434a-9545-c4637628da81
na+ Last updated 01/26/2023
active-directory Migrate From Federation To Cloud Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/migrate-from-federation-to-cloud-authentication.md
description: This article has information about moving your hybrid identity envi
+ Last updated 04/04/2023
active-directory Reference Connect Accounts Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/reference-connect-accounts-permissions.md
na+ Last updated 01/19/2023
active-directory Reference Connect Adsynctools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/reference-connect-adsynctools.md
-+ # Azure AD Connect: ADSyncTools PowerShell Reference
Accept wildcard characters: False
```
#### CommonParameters
This cmdlet supports the common parameters: -Debug, -ErrorAction, -ErrorVariable, -InformationAction, -InformationVariable, -OutVariable, -OutBuffer, -PipelineVariable, -Verbose, -WarningAction, and -WarningVariable. For more information, see [about_CommonParameters](/powershell/module/microsoft.powershell.core/about/about_commonparameters).
+
## Export-ADSyncToolsAadDisconnectors
### SYNOPSIS
Export Azure AD Disconnector objects
This cmdlet supports the common parameters: -Debug, -ErrorAction, -ErrorVariable
Use ObjectType argument in case you want to export Disconnectors for a given object type only
### OUTPUTS
Exports a CSV file with Disconnector objects containing:
- UserPrincipalName, Mail, SourceAnchor, DistinguishedName, CsObjectId, ObjectType, ConnectorId and CloudAnchor
+## Export-ADSyncToolsAadPublicFolders
+### SYNOPSIS
+Exports all synchronized Mail-Enabled Public Folder objects from Azure AD to a CSV file
+### SYNTAX
+```
+Export-ADSyncToolsAadPublicFolders [-Credential] <PSCredential> [-Path] <Object> [<CommonParameters>]
+```
+### DESCRIPTION
+This function exports to a CSV file all the synchronized Mail-Enabled Public Folders (MEPF) present in Azure AD.
+It can be used in conjunction with Remove-ADSyncToolsAadPublicFolders to identify and remove orphaned Mail-Enabled Public Folders in Azure AD.
+This function requires the credentials of a Global Administrator in Azure AD and authentication with MFA is not supported.
+NOTE: If DirSync has been disabled on the tenant, you will need to temporarily re-enable DirSync in order to remove orphaned Mail-Enabled Public Folders from Azure AD.
+### EXAMPLES
+#### EXAMPLE 1
+```
+Export-ADSyncToolsAadPublicFolders -Credential $(Get-Credential) -Path <file_name>
+```
+### PARAMETERS
+#### -Credential
+Azure AD Global Admin Credential
+```yaml
+Type: PSCredential
+Parameter Sets: (All)
+Aliases:
+Required: true
+Position: 1
+Default value: None
+Accept pipeline input: True (ByPropertyName)
+Accept wildcard characters: False
+```
+#### -Path
+Path for output file
+```yaml
+Type: String
+Parameter Sets: (All)
+Aliases:
+Required: true
+Position: 2
+Default value: None
+Accept pipeline input: False
+Accept wildcard characters: False
+```
+#### CommonParameters
+This cmdlet supports the common parameters: -Debug, -ErrorAction, -ErrorVariable, -InformationAction, -InformationVariable, -OutVariable, -OutBuffer, -PipelineVariable, -Verbose, -WarningAction, and -WarningVariable. For more information, see [about_CommonParameters](/powershell/module/microsoft.powershell.core/about/about_commonparameters).
+### INPUTS
+
+### OUTPUTS
+This cmdlet creates the file `<filename>` containing all synced Mail-Enabled Public Folder objects in CSV format.
+
## Export-ADSyncToolsHybridAadJoinReport
### SYNOPSIS
Generates a report of certificates stored in Active Directory Computer objects, specifically,
InputCsvFilename must point to a CSV file with at least 2 columns: SourceAnchor,
### OUTPUTS
Shows results from ExportDeletions operation
DISCLAIMER: Other than User objects that have a Recycle Bin, any other object types DELETED with this function cannot be RECOVERED!
+
+## Remove-ADSyncToolsAadPublicFolders
+### SYNOPSIS
+Removes synchronized Mail-Enabled Public Folders (MEPF) from Azure AD.
+You can specify one SourceAnchor/ImmutableID for the target MEPF object to delete, or provide a CSV list with a batch of objects to delete when used in conjunction with Export-ADSyncToolsAadPublicFolders.
+This function requires the credentials of a Global Administrator in Azure AD and authentication with MFA is not supported.
+NOTE: If DirSync has been disabled on the tenant, you'll need to temporarily re-enable DirSync in order to remove orphaned Mail-Enabled Public Folders from Azure AD.
+### SYNTAX
+```
+Remove-ADSyncToolsAadPublicFolders [-Credential] <PSCredential> [-InputCsvFilename] <Object> [-WhatIf] [-Confirm] [<CommonParameters>]
+Remove-ADSyncToolsAadPublicFolders [-Credential] <PSCredential> [-SourceAnchor] <Object> [-WhatIf] [-Confirm] [<CommonParameters>]
+```
+### DESCRIPTION
+This function removes from Azure AD the synchronized Mail-Enabled Public Folders (MEPF) you specify, either one at a time by SourceAnchor/ImmutableID or in batch from a CSV file.
+It can be used in conjunction with Export-ADSyncToolsAadPublicFolders to identify and remove orphaned Mail-Enabled Public Folders in Azure AD.
+This function requires the credentials of a Global Administrator in Azure AD and authentication with MFA is not supported.
+NOTE: If DirSync has been disabled on the tenant, you will need to temporarily re-enable DirSync in order to remove orphaned Mail-Enabled Public Folders from Azure AD.
+### EXAMPLES
+#### EXAMPLE 1
+```
+Remove-ADSyncToolsAadPublicFolders -Credential $(Get-Credential) -InputCsvFilename <file_name>
+```
+#### EXAMPLE 2
+```
+Remove-ADSyncToolsAadPublicFolders -Credential $(Get-Credential) -SourceAnchor <ImmutableID>
+```
+### PARAMETERS
+#### -Credential
+Azure AD Global Admin Credential
+```yaml
+Type: PSCredential
+Parameter Sets: (All)
+Aliases:
+Required: true
+Position: 1
+Default value: None
+Accept pipeline input: True (ByPropertyName)
+Accept wildcard characters: False
+```
+#### -InputCsvFilename
+Path for input CSV file
+```yaml
+Type: String
+Parameter Sets: InputCsv
+Aliases:
+Required: true
+Position: 2
+Default value: None
+Accept pipeline input: True (ByPropertyName)
+Accept wildcard characters: false
+```
+#### -SourceAnchor
+Target SourceAnchor/ImmutableID
+```yaml
+Type: String
+Parameter Sets: SourceAnchor
+Aliases:
+Required: true
+Position: 2
+Default value: None
+Accept pipeline input: True (ByPropertyName)
+Accept wildcard characters: false
+```
+#### CommonParameters
+This cmdlet supports the common parameters: -Debug, -ErrorAction, -ErrorVariable, -InformationAction, -InformationVariable, -OutVariable, -OutBuffer, -PipelineVariable, -Verbose, -WarningAction, and -WarningVariable. For more information, see [about_CommonParameters](/powershell/module/microsoft.powershell.core/about/about_commonparameters).
+### INPUTS
+The CSV input file can be generated using Export-ADSyncToolsAadPublicFolders.
+The InputCsvFilename parameter must point to a CSV file with at least 2 columns: SourceAnchor, SyncObjectType.
+### OUTPUTS
+Shows results from ExportDeletions operation.
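+Taken together, a sketch of the export-review-remove flow using only the parameters documented above (the file path is illustrative):
+```
+# Export all synced Mail-Enabled Public Folder objects for review
+$cred = Get-Credential   # Azure AD Global Administrator; MFA isn't supported, per the note above
+Export-ADSyncToolsAadPublicFolders -Credential $cred -Path 'C:\temp\mepf.csv'
+# After trimming C:\temp\mepf.csv down to the orphaned entries, remove them in batch
+Remove-ADSyncToolsAadPublicFolders -Credential $cred -InputCsvFilename 'C:\temp\mepf.csv'
+```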
+
## Remove-ADSyncToolsExpiredCertificates
### SYNOPSIS
Script to Remove Expired Certificates from UserCertificate Attribute
active-directory Reference Connect Version History Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/reference-connect-version-history-archive.md
Last updated 01/19/2023
-+ # Azure AD Connect: Version release history archive
active-directory Reference Connect Version History https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/reference-connect-version-history.md
Last updated 7/6/2022 -+
To read more about autoupgrade, see [Azure AD Connect: Automatic upgrade](how-to
- We have enabled Auto Upgrade for tenants with custom synchronization rules. Note that deleted (not disabled) default rules will be re-created and enabled upon Auto Upgrade.
- We have added Microsoft Azure AD Connect Agent Updater service to the install. This new service will be used for future auto upgrades.
- We have removed the Synchronization Service WebService Connector Config program from the install.
+ - Default sync rule "In from AD - User Common" was updated to flow the employeeType attribute.
### Bug Fixes
- We have made improvements to accessibility.
active-directory Tshoot Connect Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/tshoot-connect-connectivity.md
-+ # Troubleshoot Azure AD Connect connectivity issues
active-directory Tshoot Connect Object Not Syncing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/tshoot-connect-object-not-syncing.md
ms.assetid:
na+ Last updated 01/19/2023
active-directory Tshoot Connect Recover From Localdb 10Gb Limit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/tshoot-connect-recover-from-localdb-10gb-limit.md
By default, Azure AD Connect retains up to seven days' worth of run history da
2. Go to the **Operations** tab.
-3. Under **Actions**, select **Clear Runs**…
+3. Under **Actions**, select **Clear Runs**.
-4. You can either choose **Clear all runs** or **Clear runs before… \<date>** option. It is recommended that you start by clearing run history data that are older than two days. If you continue to run into DB size issue, then choose the **Clear all runs** option.
+4. You can choose either the **Clear all runs** or the **Clear runs before... \<date>** option. It is recommended that you start by clearing run history data that are older than two days. If you continue to run into DB size issues, then choose the **Clear all runs** option.
### Shorten retention period for run history data This step is to reduce the likelihood of running into the 10-GB limit issue after multiple sync cycles.
active-directory Tshoot Connect Sso https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/tshoot-connect-sso.md
ms.assetid: 9f994aca-6088-40f5-b2cc-c753a4f41da7 + Last updated 01/19/2023
active-directory Tshoot Connect Sync Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/tshoot-connect-sync-errors.md
Last updated 01/19/2023 -+
active-directory Install https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/install.md
Cloud sync uses the Azure AD Connect provisioning agent. Use the steps below to
4. On the left, select **Agent**.
5. Select **Download on-premises agent**, and select **Accept terms & download**.
6. Once the **Azure AD Connect Provisioning Agent Package** has completed downloading, run the *AADConnectProvisioningAgentSetup.exe* installation file from your downloads folder.
+ >[!NOTE]
+ >When installing for the US Government Cloud, use:
+ >*AADConnectProvisioningAgentSetup.exe ENVIRONMENTNAME=AzureUSGovernment*
+ >See "[Install an agent in the US government cloud](cloud-sync/how-to-install.md#install-an-agent-in-the-us-government-cloud)" for more information.
+
 7. On the splash screen, select **I agree to the license and conditions**, and then select **Install**.
 8. Once the installation operation completes, the configuration wizard will launch. Select **Next** to start the configuration.
 9. On the **Select Extension** screen, select **HR-driven provisioning (Workday and SuccessFactors) / Azure AD Connect Cloud Sync** and click **Next**.
active-directory Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/prerequisites.md
For more information on the cloud sync accounts, and how to set up a custom gMSA
|Requirement|Description and more requirements| |--|--|
-|Windows server 2016 or greater (Windows Server 2022 not supported yet) that is or has:|• 4 GB RAM or more</br>• .NET 4.6.2 runtime or greater</br>• domain-joined</br>• PowerShell execution policy set to **RemoteSigned**</br>• TLS 1.2 enabled</br>• if federation is being used, the AD FS servers must be Windows Server 2012 R2 or higher and TLS/SSL certificates must be configured.|
+|Windows server 2016 or greater that is or has:|• 4 GB RAM or more</br>• .NET 4.6.2 runtime or greater</br>• domain-joined</br>• PowerShell execution policy set to **RemoteSigned**</br>• TLS 1.2 enabled</br>• if federation is being used, the AD FS servers must be Windows Server 2012 R2 or higher and TLS/SSL certificates must be configured.|
|Active Directory|• On-premises AD that has a forest functional level 2003 or higher</br>• a writeable domain controller|
|Azure AD tenant|• A tenant in Azure used to synchronize from on-premises|
|SQL Server|Azure AD Connect requires a SQL Server database to store identity data. By default, a SQL Server 2019 Express LocalDB (a light version of SQL Server Express) is installed. For more information on using a SQL server, see [Azure AD Connect SQL server requirements](connect/how-to-connect-install-prerequisites.md#sql-server-used-by-azure-ad-connect)
active-directory Verify Sync Tool Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/verify-sync-tool-version.md
+
+ Title: 'Verify your version of cloud sync or connect sync'
+description: This article describes the steps to verify the version of the provisioning agent or connect sync.
+
+documentationcenter: ''
++
+editor: ''
++
+ na
+ Last updated : 08/17/2023+++++
+# Verify your version of the provisioning agent or connect sync
+This article describes the steps to verify the installed version of the provisioning agent and connect sync.
+
+## Verify the provisioning agent
+To see what version of the provisioning agent you're using, use the following steps:
++
+## Verify connect sync
+To see what version of connect sync you're using, use the following steps:
+
+### On the local server
+
+To verify that the agent is running, follow these steps:
+
+ 1. Sign in to the server with an administrator account.
+ 2. Open **Services** either by navigating to it or by going to *Start/Run/Services.msc*.
+ 3. Under **Services**, make sure that **Microsoft Azure AD Sync** is present and the status is **Running**.
++
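+A quick PowerShell equivalent of this check (a sketch; the display name comes from step 3 above):
+```
+# Confirm the sync service is present and running
+Get-Service -DisplayName 'Microsoft Azure AD Sync' | Select-Object Status, Name, DisplayName
+```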
+### Verify the connect sync version
+
+To verify the version of the agent that is running, follow these steps:
+
+1. Navigate to 'C:\Program Files\Microsoft Azure AD Connect'
+2. Right-click **AzureADConnect.exe** and select **Properties**.
+3. Select the **Details** tab and view the version number next to **Product version**.
+
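+The same information is available from PowerShell; a sketch that reads the file metadata from the path and executable named above:
+```
+# Read the product version directly from the executable's file metadata
+(Get-Item 'C:\Program Files\Microsoft Azure AD Connect\AzureADConnect.exe').VersionInfo.ProductVersion
+```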
+## Next steps
+- [Common scenarios](common-scenarios.md)
+- [Choosing the right sync tool](https://setup.microsoft.com/azure/add-or-sync-users-to-azure-ad)
+- [Steps to start](get-started.md)
+- [Prerequisites](prerequisites.md)
active-directory Concept Identity Protection B2b https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/identity-protection/concept-identity-protection-b2b.md
From the [Risky users report](https://portal.azure.com/#blade/Microsoft_AAD_IAM/
### Manually dismiss user's risk
-If password reset isn't an option for you from the Azure portal, you can choose to manually dismiss user risk. Dismissing user risk doesn't have any impact on the user's existing password, but this process will change the user's Risk State from At Risk to Dismissed. It's important that you change the user's password using whatever means are available to you in order to bring the identity back to a safe state.
+If password reset isn't an option for you, you can choose to manually dismiss user risk. Dismissing user risk doesn't have any impact on the user's existing password, but this process will change the user's Risk State from At Risk to Dismissed. It's important that you change the user's password using whatever means are available to you in order to bring the identity back to a safe state.
To dismiss user risk, go to the [Risky users report](https://portal.azure.com/#blade/Microsoft_AAD_IAM/SecurityMenuBlade/RiskyUsers) in the Azure AD Security menu. Search for the impacted user using the 'User' filter and select the user. Select the "dismiss user risk" option from the top toolbar. This action may take a few minutes to complete and update the user risk state in the report.
active-directory Concept Identity Protection Security Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/identity-protection/concept-identity-protection-security-overview.md
Title: Azure Active Directory Identity Protection security overview
-description: Learn how the Security overview gives you an insight into your organization's security posture.
+description: Learn how the security overview gives you an insight into your organization's security posture.
Previously updated : 07/07/2023 Last updated : 08/23/2023
# Azure Active Directory Identity Protection - Security overview
-The [Security overview](https://aka.ms/IdentityProtectionRefresh) in the Azure portal gives you an insight into your organization's security posture. It helps identify potential attacks and understand the effectiveness of your policies.
+The Security overview gives insight into your organization's security posture. It helps identify potential attacks and understand the effectiveness of your policies.
The 'Security overview' is broadly divided into two sections:
-- Trends, on the left, provide a timeline of risk in your organization.
-- Tiles, on the right, highlight the key ongoing issues in your organization and suggest how to quickly take action.
+- Trend graphs provide a timeline of risk in your organization.
+- Tiles highlight the key ongoing issues in your organization and suggest how to quickly take action.
-You can find the security overview page in the **Azure portal** > **Azure Active Directory** > **Security** > **Identity Protection** > **Overview**.
+You can find the security overview page in the [Microsoft Entra admin center](https://entra.microsoft.com) > **Protection** > **Identity Protection** > **Overview**.
-## Trends
-
-### New risky users detected
-
-This chart shows the number of new risky users that were detected over the chosen time period. You can filter the view of this chart by user risk level (low, medium, high). Hover over the UTC date increments to see the number of risky users detected for that day. Selecting this chart will bring you to the 'Risky users' report. To remediate users that are at risk, consider changing their password.
-
-### New risky sign-ins detected
-
-This chart shows the number of risky sign-ins detected over the chosen time period. You can filter the view of this chart by the sign-in risk type (real-time or aggregate) and the sign-in risk level (low, medium, high). Unprotected sign-ins are successful real-time risk sign-ins that weren't MFA challenged. (Note: Sign-ins that are risky because of offline detections can't be protected in real-time by sign-in risk policies). Hover over the UTC date increments to see the number of sign-ins detected at risk for that day. Selecting this chart will bring you to the 'Risky sign-ins' report.
-
-## Tiles
-
-### High risk users
-
-The 'High risk users' tile shows the latest count of users with high probability of identity compromise. These users should be a top priority for investigation. Selecting the 'High risk users' tile will redirect to a filtered view of the 'Risky users' report showing only users with a risk level of high. Using this report, you can learn more and remediate these users with a password reset.
--
-### Medium risk users
-The 'Medium risk users' tile shows the latest count of users with medium probability of identity compromise. Selecting the 'Medium risk users' tile will take you to a view of the 'Risky users' report showing only users with a risk level of medium. Using this report, you can further investigate and remediate these users.
-
-### Unprotected risky sign-ins
-
-The 'Unprotected risky sign-ins' tile shows the last week's count of successful, real-time risky sign-ins that weren't blocked or MFA challenged by a Conditional Access policy, Identity Protection risk policy, or per-user MFA. These successful sign-ins are potentially compromised and not challenged for MFA. To protect such sign-ins in future, apply a sign-in risk policy. Selecting the 'Unprotected risky sign-ins' tile will take you to the sign-in risk policy configuration blade where you can configure the sign-in risk policy.
-
-### Legacy authentication
-
-The 'Legacy authentication' tile shows the last week's count of legacy authentications with risk present in your organization. Legacy authentication protocols don't support modern security methods such as an MFA. To prevent legacy authentication, you can apply a Conditional Access policy. Selecting the 'Legacy authentication' tile will redirect you to the 'Identity Secure Score'.
-
-### Identity Secure Score
-
-The Identity Secure Score measures and compares your security posture to industry patterns. If you select the **Identity Secure Score** tile, it will redirect to [Identity Secure Score](../fundamentals/identity-secure-score.md) where you can learn more about improving your security posture.
+The security overview page is being replaced by the [Microsoft Entra ID Protection dashboard](id-protection-dashboard.md).
## Next steps - [What is risk](concept-identity-protection-risks.md) - [Policies available to mitigate risks](concept-identity-protection-policies.md)
+- [Identity Secure Score](../fundamentals/identity-secure-score.md)
active-directory Concept Workload Identity Risk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/identity-protection/concept-workload-identity-risk.md
- # Securing workload identities Azure AD Identity Protection has historically protected users by detecting, investigating, and remediating identity-based risks. We're now extending these capabilities to workload identities to protect applications and service principals.
These differences make workload identities harder to manage and put them at high
To make use of workload identity risk, including the new **Risky workload identities** blade and the **Workload identity detections** tab in the **Risk detections** blade in the portal, you must have the following. -- Workload Identities Premium licensing: You can view and acquire licenses on the [Workload Identities blade](https://portal.azure.com/#view/Microsoft_Azure_ManagedServiceIdentity/WorkloadIdentitiesBlade) in the Azure portal.
+- Workload Identities Premium licensing: You can view and acquire licenses on the [Workload Identities blade](https://portal.azure.com/#view/Microsoft_Azure_ManagedServiceIdentity/WorkloadIdentitiesBlade).
- One of the following administrator roles assigned
- - Global Administrator
- Security Administrator - Security Operator - Security Reader Users assigned the Conditional Access administrator role can create policies that use risk as a condition.
+ - Global Administrator
## Workload identity risk detections
We detect risk on workload identities across sign-in behavior and offline indica
Organizations can find workload identities that have been flagged for risk in one of two locations:
-1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Browse to **Azure Active Directory** > **Security** > **Risky workload identities**.
-1. Or browse to **Azure Active Directory** > **Security** > **Risk detections**.
- 1. Select the **Workload identity detections** tab.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Security Reader](../roles/permissions-reference.md#security-reader).
+1. Browse to **Protection** > **Identity Protection** > **Risky workload identities**.
:::image type="content" source="media/concept-workload-identity-risk/workload-identity-detections-in-risk-detections-report.png" alt-text="Screenshot showing risks detected against workload identities in the report." lightbox="media/concept-workload-identity-risk/workload-identity-detections-in-risk-detections-report.png":::
For improved security and resilience of your workload identities, Continuous Acc
## Investigate risky workload identities
-Identity Protection provides organizations with two reports they can use to investigate workload identity risk. These reports are the risky workload identities, and risk detections for workload identities. All reports allow for downloading of events in .CSV format for further analysis outside of the Azure portal.
+Identity Protection provides organizations with two reports they can use to investigate workload identity risk. These reports are the risky workload identities, and risk detections for workload identities. All reports allow for downloading of events in .CSV format for further analysis.
Some of the key questions to answer during your investigation include:
The [Azure Active Directory security operations guide for Applications](../archi
Once you determine if the workload identity was compromised, dismiss the account's risk, or confirm the account as compromised in the Risky workload identities report. You can also select "Disable service principal" if you want to block the account from further sign-ins. ## Remediate risky workload identities
active-directory Howto Export Risk Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/identity-protection/howto-export-risk-data.md
Azure AD stores reports and security signals for a defined period of time. When
| Azure AD MFA usage | 30 days | 30 days | 30 days | | Risky sign-ins | 7 days | 30 days | 30 days |
-Organizations can choose to store data for longer periods by changing diagnostic settings in Azure AD to send **RiskyUsers**, **UserRiskEvents**, **RiskyServicePrincipals**, and **ServicePrincipalRiskEvents** data to a Log Analytics workspace, archive data to a storage account, stream data to an event hub, or send data to a partner solution. Find these options in the **Azure portal** > **Azure Active Directory**, **Diagnostic settings** > **Edit setting**. If you don't have a diagnostic setting, follow the instructions in the article [Create diagnostic settings to send platform logs and metrics to different destinations](../../azure-monitor/essentials/diagnostic-settings.md) to create one.
+Organizations can choose to store data for longer periods by changing diagnostic settings in Azure AD to send **RiskyUsers**, **UserRiskEvents**, **RiskyServicePrincipals**, and **ServicePrincipalRiskEvents** data to a Log Analytics workspace, archive data to a storage account, stream data to an event hub, or send data to a partner solution. Find these options in the [Microsoft Entra admin center](https://entra.microsoft.com) > **Identity** > **Monitoring & health** > **Diagnostic settings** > **Edit setting**. If you don't have a diagnostic setting, follow the instructions in the article [Create diagnostic settings to send platform logs and metrics to different destinations](../../azure-monitor/essentials/diagnostic-settings.md) to create one.
[ ![Diagnostic settings screen in Azure AD showing existing configuration](./media/howto-export-risk-data/change-diagnostic-setting-in-portal.png) ](./media/howto-export-risk-data/change-diagnostic-setting-in-portal.png#lightbox)
Organizations can choose to store data for longer periods by changing diagnostic
Log Analytics allows organizations to query data using built-in queries or custom Kusto queries. For more information, see [Get started with log queries in Azure Monitor](../../azure-monitor/logs/get-started-queries.md).
-Once enabled you'll find access to Log Analytics in the **Azure portal** > **Azure AD** > **Log Analytics**. The following tables are of most interest to Identity Protection administrators:
+Once enabled you'll find access to Log Analytics in the [Microsoft Entra admin center](https://entra.microsoft.com) > **Identity** > **Monitoring & health** > **Log Analytics**. The following tables are of most interest to Identity Protection administrators:
- AADRiskyUsers - Provides data like the **Risky users** report in Identity Protection. - AADUserRiskEvents - Provides data like the **Risk detections** report in Identity Protection.
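If you want to pull the same tables programmatically, the following is a minimal sketch using the Az.OperationalInsights module; the workspace ID is a placeholder, and the exact column names (`RiskLevel`, `RiskEventType`) are assumptions you should verify against your workspace schema.

```powershell
# Minimal sketch: query Identity Protection tables in Log Analytics.
# Assumes the Az.OperationalInsights module and a workspace you can read.
Connect-AzAccount

$workspaceId = '00000000-0000-0000-0000-000000000000'  # placeholder workspace ID

# Summarize the last 30 days of user risk events by level and detection type.
$kql = @'
AADUserRiskEvents
| where TimeGenerated > ago(30d)
| summarize Count = count() by RiskLevel, RiskEventType
| order by Count desc
'@

$result = Invoke-AzOperationalInsightsQuery -WorkspaceId $workspaceId -Query $kql
$result.Results | Format-Table
```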
active-directory Howto Identity Protection Configure Mfa Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/identity-protection/howto-identity-protection-configure-mfa-policy.md
For more information on Azure AD multifactor authentication, see [What is Azure
## Policy configuration -
-1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Browse to **Azure Active Directory** > **Security** > **Identity Protection** > **MFA registration policy**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Security Administrator](../roles/permissions-reference.md#security-administrator).
+1. Browse to **Protection** > **Identity Protection** > **MFA registration policy**.
1. Under **Assignments** > **Users** 1. Under **Include**, select **All users** or **Select individuals and groups** if limiting your rollout. 1. Under **Exclude**, select **Users and groups** and choose your organization's emergency access or break-glass accounts.
active-directory Howto Identity Protection Configure Notifications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/identity-protection/howto-identity-protection-configure-notifications.md
As an administrator, you can set:
- **The user risk level that triggers the generation of this email** - By default, the risk level is set to "High" risk. - **The recipients of this email** - Users in the Global Administrator, Security Administrator, or Security Reader roles are automatically added to this list. We attempt to send emails to the first 20 members of each role. If a user is enrolled in PIM to elevate to one of these roles on demand, then **they will only receive emails if they are elevated at the time the email is sent**.
- - Optionally you can **Add custom email here** users defined must have the appropriate permissions to view the linked reports in the Azure portal.
+ - Optionally you can **Add custom email here**; users defined must have the appropriate permissions to view the linked reports.
-Configure the users at risk email in the **Azure portal** under **Azure Active Directory** > **Security** > **Identity Protection** > **Users at risk detected alerts**.
+Configure the users at risk email in the [Microsoft Entra admin center](https://entra.microsoft.com) under **Protection** > **Identity Protection** > **Users at risk detected alerts**.
## Weekly digest email
Users in the Global Administrator, Security Administrator, or Security Reader ro
As an administrator, you can switch sending a weekly digest email on or off and choose the users assigned to receive the email.
-Configure the weekly digest email in the **Azure portal** under **Azure Active Directory** > **Security** > **Identity Protection** > **Weekly digest**.
+Configure the weekly digest email in the [Microsoft Entra admin center](https://entra.microsoft.com) > **Protection** > **Identity Protection** > **Weekly digest**.
## See also
active-directory Howto Identity Protection Configure Risk Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/identity-protection/howto-identity-protection-configure-risk-policies.md
Before organizations enable remediation policies, they may want to [investigate]
### User risk policy in Conditional Access
-1. Sign in to the **Azure portal** as a Conditional Access Administrator, Security Administrator, or Global Administrator.
-1. Browse to **Azure Active Directory** > **Security** > **Conditional Access**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
+1. Browse to **Protection** > **Conditional Access**.
1. Select **New policy**. 1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies. 1. Under **Assignments**, select **Users or workload identities**.
After confirming your settings using [report-only mode](../conditional-access/ho
### Sign-in risk policy in Conditional Access
-1. Sign in to the **Azure portal** as a Conditional Access Administrator, Security Administrator, or Global Administrator.
-1. Browse to **Azure Active Directory** > **Security** > **Conditional Access**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
+1. Browse to **Protection** > **Conditional Access**.
1. Select **New policy**. 1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies. 1. Under **Assignments**, select **Users or workload identities**.
If you already have risk policies enabled in Identity Protection, we highly reco
1. **Create an equivalent** [user risk-based](#user-risk-policy-in-conditional-access) and [sign-in risk-based ](#sign-in-risk-policy-in-conditional-access) policy in Conditional Access in report-only mode. You can create a policy with the steps above or using [Conditional Access templates](../conditional-access/concept-conditional-access-policy-common.md) based on Microsoft's recommendations and your organizational requirements. 1. Ensure that the new Conditional Access risk policy works as expected by testing it in [report-only mode](../conditional-access/howto-conditional-access-insights-reporting.md). 1. **Enable** the new Conditional Access risk policy. You can choose to have both policies running side-by-side to confirm the new policies are working as expected before turning off the Identity Protection risk policies.
- 1. Browse back to **Azure Active Directory** > **Security** > **Conditional Access**.
+ 1. Browse back to **Protection** > **Conditional Access**.
1. Select this new policy to edit it. 1. Set **Enable policy** to **On** to enable the policy 1. **Disable** the old risk policies in Identity Protection.
- 1. Browse to **Azure Active Directory** > **Identity Protection** > Select the **User risk** or **Sign-in risk** policy.
+ 1. Browse to **Protection** > **Identity Protection** > Select the **User risk** or **Sign-in risk** policy.
1. Set **Enforce policy** to **Off** 1. Create other risk policies if needed in [Conditional Access](../conditional-access/concept-conditional-access-policy-common.md).
active-directory Howto Identity Protection Investigate Risk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/identity-protection/howto-identity-protection-investigate-risk.md
Identity Protection provides organizations with three reports they can use to investigate identity risks in their environment. These reports are the **risky users**, **risky sign-ins**, and **risk detections**. Investigation of events is key to better understanding and identifying any weak points in your security strategy.
-All three reports allow for downloading of events in .CSV format for further analysis outside of the Azure portal. The risky users and risky sign-ins reports allow for downloading the most recent 2500 entries, while the risk detections report allows for downloading the most recent 5000 records.
+All three reports allow for downloading of events in .CSV format for further analysis. The risky users and risky sign-ins reports allow for downloading the most recent 2500 entries, while the risk detections report allows for downloading the most recent 5000 records.
Organizations can take advantage of the Microsoft Graph API integrations to aggregate data with other sources they may have access to as an organization.
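For a scripted starting point, the following is a minimal sketch using the Microsoft Graph PowerShell SDK; the filters, property selections, and output file names are illustrative only.

```powershell
# Minimal sketch: export risk data with the Microsoft Graph PowerShell SDK.
Connect-MgGraph -Scopes 'IdentityRiskyUser.Read.All','IdentityRiskEvent.Read.All'

# High-risk users, as in the Risky users report.
Get-MgRiskyUser -Filter "riskLevel eq 'high'" |
    Select-Object UserDisplayName, UserPrincipalName, RiskLevel, RiskState, RiskLastUpdatedDateTime |
    Export-Csv -Path .\high-risk-users.csv -NoTypeInformation

# Recent detections, as in the Risk detections report.
Get-MgRiskDetection -Top 50 |
    Select-Object RiskEventType, RiskLevel, DetectedDateTime, UserPrincipalName |
    Export-Csv -Path .\risk-detections.csv -NoTypeInformation
```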
-The three reports are found in the **Azure portal** > **Azure Active Directory** > **Security**.
+The three reports are found in the [Microsoft Entra admin center](https://entra.microsoft.com) > **Protection** > **Identity Protection**.
## Navigating the reports
To view and investigate risks on a user's account, select the "Detections no
The Risk history tab also shows all the events that have led to a user risk change in the last 90 days. This list includes risk detections that increased the user's risk and admin remediation actions that lowered the user's risk. View it to understand how the user's risk has changed. With the information provided by the risky users report, administrators can find:
Administrators can then choose to take action on these events. Administrators ca
## Risky sign-ins The risky sign-ins report contains filterable data for up to the past 30 days (one month).
Administrators can then choose to take action on these events. Administrators ca
## Risk detections The risk detections report contains filterable data for up to the past 90 days (three months).
active-directory Howto Identity Protection Remediate Unblock https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/identity-protection/howto-identity-protection-remediate-unblock.md
Administrators are given two options when resetting a password for their users:
If, after investigating, you confirm that the user account isn't at risk of being compromised, you can choose to dismiss the risky user.
-To **Dismiss user risk**, search for and select **Azure AD Risky users** in the Azure portal or the Entra portal, select the affected user, and select **Dismiss user(s) risk**.
+To Dismiss user risk in the [Microsoft Entra admin center](https://entra.microsoft.com), browse to **Protection** > **Identity Protection** > **Risky users**, select the affected user, and select **Dismiss user(s) risk**.
When you select **Dismiss user risk**, the user is no longer at risk, and all the risky sign-ins of this user and corresponding risk detections are dismissed as well.
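The same remediation can be scripted with the Microsoft Graph PowerShell SDK; this is a minimal sketch, and the user principal name is a placeholder.

```powershell
# Minimal sketch: dismiss (or confirm) user risk with the Graph PowerShell SDK.
Connect-MgGraph -Scopes 'IdentityRiskyUser.ReadWrite.All','User.Read.All'

$userId = (Get-MgUser -UserId 'user@contoso.com').Id  # placeholder UPN

# After investigation concluded the account isn't compromised:
Invoke-MgDismissRiskyUser -UserIds @($userId)

# Or, if the account really was compromised:
# Confirm-MgRiskyUserCompromised -UserIds @($userId)
```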
active-directory Howto Identity Protection Simulate Risk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/identity-protection/howto-identity-protection-simulate-risk.md
Simulating the atypical travel condition is difficult because the algorithm uses
**To simulate an atypical travel risk detection, perform the following steps**: 1. Using your standard browser, navigate to [https://myapps.microsoft.com](https://myapps.microsoft.com).
-2. Enter the credentials of the account you want to generate an atypical travel risk detection for.
-3. Change your user agent. You can change user agent in Microsoft Edge from Developer Tools (F12).
-4. Change your IP address. You can change your IP address by using a VPN, a Tor add-on, or creating a new virtual machine in Azure in a different data center.
-5. Sign-in to [https://myapps.microsoft.com](https://myapps.microsoft.com) using the same credentials as before and within a few minutes after the previous sign-in.
+1. Enter the credentials of the account you want to generate an atypical travel risk detection for.
+1. Change your user agent. You can change user agent in Microsoft Edge from Developer Tools (F12).
+1. Change your IP address. You can change your IP address by using a VPN, a Tor add-on, or creating a new virtual machine in Azure in a different data center.
+1. Sign-in to [https://myapps.microsoft.com](https://myapps.microsoft.com) using the same credentials as before and within a few minutes after the previous sign-in.
The sign-in shows up in the Identity Protection dashboard within 2-4 hours. ## Leaked Credentials for Workload Identities - This risk detection indicates that the application's valid credentials have been leaked. This leak can occur when someone checks in the credentials in a public code artifact on GitHub. Therefore, to simulate this detection, you need a GitHub account and can [sign up for a GitHub account](https://docs.github.com/get-started/signing-up-for-github) if you don't have one already.
-**To simulate Leaked Credentials in GitHub for Workload Identities, perform the following steps**:
-1. Sign in to the [Azure portal](https://portal.azure.com).
-2. Browse to **Azure Active Directory** > **App registrations**.
-3. Select **New registration** to register a new application or reuse an existing stale application.
-4. Select **Certificates & Secrets** > **New client Secret** , add a description of your client secret and set an expiration for the secret or specify a custom lifetime and select **Add**. Record the secret's value for later use for your GitHub Commit.
+### Simulate Leaked Credentials in GitHub for Workload Identities
+
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Security Administrator](../roles/permissions-reference.md#security-administrator).
+1. Browse to **Identity** > **Applications** > **App registrations**.
+1. Select **New registration** to register a new application or reuse an existing stale application.
+1. Select **Certificates & Secrets** > **New client secret**, add a description of your client secret, set an expiration for the secret or specify a custom lifetime, and select **Add**. Record the secret's value for later use for your GitHub commit. (A scripted alternative is sketched after these steps.)
> [!Note] > **You cannot retrieve the secret again after you leave this page**.
-5. Get the TenantID and Application(Client)ID in the **Overview** page.
-6. Ensure you disable the application via **Azure Active Directory** > **Enterprise Application** > **Properties** > Set **Enabled for users to sign-in** to **No**.
-7. Create a **public** GitHub Repository, add the following config and commit the change as a file with the .txt extension.
+1. Get the TenantID and Application(Client)ID in the **Overview** page.
+1. Ensure you disable the application via **Identity** > **Applications** > **Enterprise applications** > **Properties** > Set **Enabled for users to sign-in** to **No**.
+1. Create a **public** GitHub Repository, add the following config and commit the change as a file with the .txt extension.
```GitHub file "AadClientId": "XXXX-2dd4-4645-98c2-960cf76a4357", "AadSecret": "p3n7Q~XXXX", "AadTenantDomain": "XXXX.onmicrosoft.com", "AadTenantId": "99d4947b-XXX-XXXX-9ace-abceab54bcd4", ```
-7. In about 8 hours, you'll be able to view a leaked credential detection under **Azure Active Directory** > **Security** > **Risk Detection** > **Workload identity detections** where the additional info will contain the URL of your GitHub commit.
+1. In about 8 hours, you'll be able to view a leaked credential detection under **Protection** > **Identity Protection** > **Risk detections** > **Workload identity detections** where the additional info will contain the URL of your GitHub commit.
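If you prefer to script the app registration and secret for this simulation, the following is a minimal sketch using the Microsoft Graph PowerShell SDK; the display names and expiration are illustrative, and the secret value is still shown only once.

```powershell
# Minimal sketch: create a throwaway app registration and client secret.
Connect-MgGraph -Scopes 'Application.ReadWrite.All'

$app = New-MgApplication -DisplayName 'leak-simulation-app'   # hypothetical name
$secret = Add-MgApplicationPassword -ApplicationId $app.Id -PasswordCredential @{
    DisplayName = 'simulated-leak'
    EndDateTime = (Get-Date).AddDays(7)   # short-lived on purpose
}

# Record these values for the GitHub commit; SecretText can't be retrieved later.
$app.AppId
$secret.SecretText
```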
## Testing risk policies
This section provides you with steps for testing the user and the sign-in risk p
To test a user risk security policy, perform the following steps:
-1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Browse to **Azure Active Directory** > **Security** > **Identity Protection** > **Overview**.
-1. Select **Configure user risk policy**.
- 1. Under **Assignments**
- 1. **Users** - Choose **All users** or **Select individuals and groups** if limiting your rollout.
- 1. Optionally you can choose to exclude users from the policy.
- 1. **Conditions** - **User risk** Microsoft's recommendation is to set this option to **High**.
- 1. Under **Controls**
- 1. **Access** - Microsoft's recommendation is to **Allow access** and **Require password change**.
- 1. **Enforce Policy** - **Off**
- 1. **Save** - This action will return you to the **Overview** page.
+1. Configure a [user risk policy](howto-identity-protection-configure-risk-policies.md#user-risk-policy-in-conditional-access) targeting the users you plan to test with.
1. Elevate the user risk of a test account by, for example, simulating one of the risk detections a few times. 1. Wait a few minutes, and then verify that risk has elevated for your user. If not, simulate more risk detections for the user. 1. Return to your risk policy and set **Enforce Policy** to **On** and **Save** your policy change.
To test a user risk security policy, perform the following steps:
To test a sign-in risk policy, perform the following steps:
-1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Browse to **Azure Active Directory** > **Security** > **Identity Protection** > **Overview**.
-1. Select **Configure sign-in risk policy**.
- 1. Under **Assignments**
- 1. **Users** - Choose **All users** or **Select individuals and groups** if limiting your rollout.
- 1. Optionally you can choose to exclude users from the policy.
- 1. **Conditions** - **Sign-in risk** Microsoft's recommendation is to set this option to **Medium and above**.
- 1. Under **Controls**
- 1. **Access** - Microsoft's recommendation is to **Allow access** and **Require multifactor authentication**.
- 1. **Enforce Policy** - **On**
- 1. **Save** - This action will return you to the **Overview** page.
+1. Configure a [sign-in risk policy](howto-identity-protection-configure-risk-policies.md#sign-in-risk-policy-in-conditional-access) targeting the users you plan to test with.
1. You can now test Sign-in Risk-based Conditional Access by signing in using a risky session (for example, by using the Tor browser). ## Next steps
active-directory Id Protection Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/identity-protection/id-protection-dashboard.md
To access this new dashboard, you need:
Organizations can access the new dashboard by: 1. Sign in to the **[Microsoft Entra admin center](https://entra.microsoft.com)**.
-1. Browse to **Identity** > **Protection** > **Identity Protection** > **Dashboard (Preview)**.
+1. Browse to **Protection** > **Identity Protection** > **Dashboard (Preview)**.
### Metric cards
active-directory Access Panel Collections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/access-panel-collections.md
To create collections on the My Apps portal, you need:
To create a collection, you must have an Azure AD Premium P1 or P2 license.
-1. Sign in to the [Azure portal](https://portal.azure.com) as an admin with an Azure AD Premium P1 or P2 license.
-
-2. Go to **Azure Active Directory** > **Enterprise Applications**.
-
-3. Under **Manage**, select **App launchers**.
-
-4. Select **New collection**. In the **New collection** page, enter a **Name** for the collection (we recommend not using "collection" in the name. Then enter a **Description**.
-
-5. Select the **Applications** tab. Select **+ Add application**, and then in the **Add applications** page, select all the applications you want to add to the collection, or use the **Search** box to find applications.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator).
+1. Browse to **Identity** > **Applications** > **Enterprise applications**.
+1. Under **Manage**, select **App launchers**.
+1. Select **New collection**. In the **New collection** page, enter a **Name** for the collection (we recommend not using "collection" in the name). Then enter a **Description**.
+1. Select the **Applications** tab. Select **+ Add application**, and then in the **Add applications** page, select all the applications you want to add to the collection, or use the **Search** box to find applications.
![Add an application to the collection](media/acces-panel-collections/add-applications.png)
-6. When you're finished adding applications, select **Add**. The list of selected applications appears. You can use the arrows to change the order of applications in the list.
-
-7. Select the **Owners** tab. Select **+ Add users and groups**, and then in the **Add users and groups** page, select the users or groups you want to assign ownership to. When you're finished selecting users and groups, choose **Select**.
-
-8. Select the **Users and groups** tab. Select **+ Add users and groups**, and then in the **Add users and groups** page, select the users or groups you want to assign the collection to. Or use the **Search** box to find users or groups. When you're finished selecting users and groups, choose **Select**.
-
-9. Select **Review + Create**. The properties for the new collection appear.
+1. When you're finished adding applications, select **Add**. The list of selected applications appears. You can use the arrows to change the order of applications in the list.
+1. Select the **Owners** tab. Select **+ Add users and groups**, and then in the **Add users and groups** page, select the users or groups you want to assign ownership to. When you're finished selecting users and groups, choose **Select**.
+1. Select the **Users and groups** tab. Select **+ Add users and groups**, and then in the **Add users and groups** page, select the users or groups you want to assign the collection to. Or use the **Search** box to find users or groups. When you're finished selecting users and groups, choose **Select**.
+1. Select **Review + Create**. The properties for the new collection appear.
> [!NOTE]
-> Admin collections are managed through the [Azure portal](https://portal.azure.com), not from [My Apps portal](https://myapps.microsoft.com). For example, if you assign users or groups as an owner, then they can only manage the collection through the Azure portal.
+> Admin collections are managed through the [Microsoft Entra admin center](https://entra.microsoft.com), not from [My Apps portal](https://myapps.microsoft.com). For example, if you assign users or groups as an owner, then they can only manage the collection through the Microsoft Entra admin center.
> [!NOTE] > There is a known issue with Office apps in collections. If you already have at least one Office app in a collection and want to add more, follow these steps: > 1. Select the collection you'd like to manage, then select the **Applications** tab.
-> 2. Remove all Office apps from the collection but do not save the changes.
-> 3. Select **+ Add application**.
-> 4. In the **Add applications** page, select all the Office apps you want to add to the collection (including the ones that you removed in step 2).
-> 5. When you're finished adding applications, select **Add**. The list of selected applications appears. You can use the arrows to change the order of applications in the list.
-> 5. Select **Save** to apply the changes.
+> 1. Remove all Office apps from the collection but do not save the changes.
+> 1. Select **+ Add application**.
+> 1. In the **Add applications** page, select all the Office apps you want to add to the collection (including the ones that you removed in step 2).
+> 1. When you're finished adding applications, select **Add**. The list of selected applications appears. You can use the arrows to change the order of applications in the list.
+> 1. Select **Save** to apply the changes.
## View audit logs The Audit logs record My Apps collections operations, including collection creation end-user actions. The following events are generated from My Apps:
-* Create admin collection
-* Edit admin collection
-* Delete admin collection
-* Self-service application adding (end user)
-* Self-service application deletion (end user)
+- Create admin collection
+- Edit admin collection
+- Delete admin collection
+- Self-service application adding (end user)
+- Self-service application deletion (end user)
-You can access audit logs in the [Azure portal](https://portal.azure.com) by selecting **Azure Active Directory** > **Enterprise Applications** > **Audit logs** in the Activity section. For **Service**, select **My Apps**.
+You can access audit logs in the [Microsoft Entra admin center](https://entra.microsoft.com) by selecting **Identity** > **Applications** > **Enterprise applications** > **Audit logs** in the Activity section. For **Service**, select **My Apps**.
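To pull the same events programmatically, here's a minimal sketch using the Microsoft Graph PowerShell SDK; the `loggedByService` filter value mirrors the portal's **My Apps** service filter and is an assumption to verify in your tenant.

```powershell
# Minimal sketch: list recent My Apps events from the directory audit log.
Connect-MgGraph -Scopes 'AuditLog.Read.All'

Get-MgAuditLogDirectoryAudit -Filter "loggedByService eq 'My Apps'" -Top 25 |
    Select-Object ActivityDisplayName, ActivityDateTime,
        @{ Name = 'Actor'; Expression = { $_.InitiatedBy.User.UserPrincipalName } }
```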
## Get support for My Account pages
From the My Apps page, a user can select **My account** > **View account** to op
In case you need to submit a support request for an issue with the Azure AD account page or the Office account page, follow these steps so your request is routed properly:
-* For issues with the **Azure AD "My Account"** page, open a support request from within the Azure portal. Go to **Azure portal** > **Azure Active Directory** > **New support request**.
+- For issues with the **Azure AD "My Account"** page, open a support request from within the Microsoft Entra admin center. Go to **Microsoft Entra admin center** > **Identity** > **Learn & support** > **New support request**.
-* For issues with the **Office "My account"** page, open a support request from within the Microsoft 365 admin center. Go to **Microsoft 365 admin center** > **Support**.
+- For issues with the **Office "My account"** page, open a support request from within the Microsoft 365 admin center. Go to **Microsoft 365 admin center** > **Support**.
## Next steps
active-directory Add Application Portal Assign Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/add-application-portal-assign-users.md
# Quickstart: Create and assign a user account
-In this quickstart, you use the Azure portal to create a user account in your Azure Active Directory (Azure AD) tenant. After you create the account, you can assign it to the enterprise application that you added to your tenant.
+In this quickstart, you use the Microsoft Entra admin center to create a user account in your Azure Active Directory (Azure AD) tenant. After you create the account, you can assign it to the enterprise application that you added to your tenant.
It's recommended that you use a nonproduction environment to test the steps in this quickstart.
It's recommended that you use a nonproduction environment to test the steps in t
To create a user account and assign it to an enterprise application, you need: - An Azure AD user account. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).-- One of the following roles: Global Administrator, or owner of the service principal.
+- One of the following roles: Global Administrator, Cloud Application Administrator, or owner of the service principal. You'll need the User Administrator role to manage users.
- Completion of the steps in [Quickstart: Add an enterprise application](add-application-portal.md). ## Create a user account
To create a user account and assign it to an enterprise application, you need:
To create a user account in your Azure AD tenant:
-1. Sign in to the [Azure portal](https://portal.azure.com) and sign in using one of the roles listed in the prerequisites.
-1. Browse to **Azure Active Directory** and select **Users**.
-1. Select **New user** at the top of the pane.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [User Administrator](../roles/permissions-reference.md#user-administrator).
+1. Browse to **Identity** > **Users** > **All users**
+1. Select **New user** at the top of the pane, and then select **Create new user**.
:::image type="content" source="media/add-application-portal-assign-users/new-user.png" alt-text="Add a new user account to your Azure AD tenant.":::
-1. In the **User name** field, enter the username of the user account. For example, `contosouser1@contoso.com`. Be sure to change `contoso.com` to the name of your tenant domain.
-1. In the **Name** field, enter the name of the user of the account. For example, `contosouser1`.
+1. In the **User principal name** field, enter the username of the user account. For example, `contosouser1@contoso.com`. Be sure to change `contoso.com` to the name of your tenant domain.
+1. In the **Display name** field, enter the name of the user of the account. For example, `contosouser1`.
1. Enter the details required for the user under the **Groups and roles**, **Settings**, and **Job info** sections. 1. Select **Create**.
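If you prefer to create the account programmatically, the following is a minimal sketch using the Microsoft Graph PowerShell SDK; the user name, domain, and temporary password are placeholders.

```powershell
# Minimal sketch: create the same user with the Graph PowerShell SDK.
Connect-MgGraph -Scopes 'User.ReadWrite.All'

New-MgUser -DisplayName 'contosouser1' `
    -UserPrincipalName 'contosouser1@contoso.com' `
    -MailNickname 'contosouser1' `
    -AccountEnabled `
    -PasswordProfile @{
        Password = 'Placeholder-Passw0rd!'          # temporary password
        ForceChangePasswordNextSignIn = $true
    }
```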
To create a user account in your Azure AD tenant:
To assign a user account to an enterprise application:
-1. Sign in to the [Azure portal](https://portal.azure.com), then browse to **Azure Active Directory** and select **Enterprise applications**.
-1. Search for and select the application to which you want to assign the user account. For example, the application that you created in the previous quickstart named **Azure AD SAML Toolkit 1**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator).
+1. Browse to **Identity** > **Applications** > **Enterprise applications** > **All applications**, and then search for and select the application to which you want to assign the user account. For example, the application that you created in the previous quickstart named **Azure AD SAML Toolkit 1**.
1. In the left pane, select **Users and groups**, and then select **Add user/group**. :::image type="content" source="media/add-application-portal-assign-users/assign-user.png" alt-text="Assign user account to an application in your Azure AD tenant.":::
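The same assignment can be made programmatically. This minimal sketch uses the Microsoft Graph PowerShell SDK and assumes the app defines no custom app roles, so the default-access app role (the zero GUID) is used.

```powershell
# Minimal sketch: assign a user to an enterprise application (service principal).
Connect-MgGraph -Scopes 'AppRoleAssignment.ReadWrite.All','Application.Read.All','User.Read.All'

$sp   = Get-MgServicePrincipal -Filter "displayName eq 'Azure AD SAML Toolkit 1'"
$user = Get-MgUser -UserId 'contosouser1@contoso.com'

New-MgServicePrincipalAppRoleAssignedTo -ServicePrincipalId $sp.Id `
    -PrincipalId $user.Id `
    -ResourceId $sp.Id `
    -AppRoleId ([Guid]::Empty)   # default access when the app defines no roles
```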
active-directory Add Application Portal Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/add-application-portal-configure.md
Application properties control how the application is represented and how the ap
To configure the application properties:
-1. Sign in to the [Azure portal](https://portal.azure.com) and sign in using one of the roles listed in the prerequisites.
-1. Browse to **Azure Active Directory** > **Enterprise applications**. The **All applications** pane opens and displays a list of the applications in your Azure AD tenant. Search for and select the application that you want to use.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator).
+1. Browse to **Identity** > **Applications** > **Enterprise applications** > **All applications**.
+1. Search for and select the application that you want to use.
1. In the **Manage** section, select **Properties** to open the **Properties** pane for editing. 1. On the **Properties** pane, you may want to configure the following properties for your application: - Logo
To configure the application properties:
Use the following Microsoft Graph PowerShell script to configure basic application properties.
-You'll need to consent to the `Application.ReadWrite.All` permission.
+You'll need to sign in as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator) and consent to the `Application.ReadWrite.All` permission.
```powershell
# $applicationId is the application's object ID; $params is a hashtable of the
# properties to update (the value below is a hypothetical example).
$params = @{ Notes = "Managed by the identity team" }
Update-MgApplication -ApplicationId $applicationId -BodyParameter $params
:::zone pivot="ms-graph"
-To configure the basic properties of an application, sign in to [Graph Explorer](https://developer.microsoft.com/graph/graph-explorer) with one of the roles listed in the prerequisite section.
+To configure the basic properties of an application, sign in to [Graph Explorer](https://developer.microsoft.com/graph/graph-explorer) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator).
You'll need to consent to the `Application.ReadWrite.All` permission.
active-directory Add Application Portal Setup Oidc Sso https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/add-application-portal-setup-oidc-sso.md
It is recommended that you use a non-production environment to test the steps in
To configure OIDC-based SSO, you need: -- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).-- One of the following roles: Global Administrator, or owner of the service principal.
+- An Azure AD user account. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal.
## Add the application
When you add an enterprise application that uses the OIDC standard for SSO, you
To configure OIDC-based SSO for an application:
-1. Sign in to the [Azure portal](https://portal.azure.com) and sign in using one of the roles listed in the prerequisites.
-1. Browse to **Azure Active Directory** > **Enterprise applications**. The **All applications** pane opens and displays a list of the applications in your Azure AD tenant.
-1. In the **Enterprise applications** pane, select **New application**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator).
+1. Browse to **Identity** > **Applications** > **Enterprise applications** > **All applications**.
+1. In the **All applications** pane, select **New application**.
1. The **Browse Azure AD Gallery** pane opens and displays tiles for cloud platforms, on-premises applications, and featured applications. Applications listed in the **Featured applications** section have icons indicating whether they support federated SSO and provisioning. Search for and select the application. In this example, **SmartSheet** is being used. 1. Select **Sign-up**. Sign in with the user account credentials from Azure Active Directory. If you already have a subscription to the application, then user details and tenant information is validated. If the application is not able to verify the user, then it redirects you to sign up for the application service.
active-directory Add Application Portal Setup Sso https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/add-application-portal-setup-sso.md
To configure SSO, you need:
To enable SSO for an application:
-1. Sign in to the [Azure portal](https://portal.azure.com) and sign in using one of the roles listed in the prerequisites.
-1. Browse to **Azure Active Directory** > **Enterprise applications**. The **All applications** pane opens and displays a list of the applications in your Azure AD tenant. Search for and select the application that you want to use. For example, **Azure AD SAML Toolkit 1**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator).
+1. Browse to **Identity** > **Applications** > **Enterprise applications** > **All applications**.
+1. Enter the name of the existing application in the search box, and then select the application from the search results. For example, **Azure AD SAML Toolkit 1**.
1. In the **Manage** section of the left menu, select **Single sign-on** to open the **Single sign-on** pane for editing. 1. Select **SAML** to open the SSO configuration page. After the application is configured, users can sign in to it by using their credentials from the Azure AD tenant. 1. The process of configuring an application to use Azure AD for SAML-based SSO varies depending on the application. For any of the enterprise applications in the gallery, use the **configuration guide** link to find information about the steps needed to configure the application. The steps for the **Azure AD SAML Toolkit 1** are listed in this article.
You add sign-in and reply URL values, and you download a certificate to begin th
To configure SSO in Azure AD:
-1. In the Azure portal, select **Edit** in the **Basic SAML Configuration** section on the **Set up single sign-on** pane.
+1. In the Microsoft Entra admin center, select **Edit** in the **Basic SAML Configuration** section on the **Set up single sign-on** pane.
1. For **Reply URL (Assertion Consumer Service URL)**, enter `https://samltoolkit.azurewebsites.net/SAML/Consume`. 1. For **Sign on URL**, enter `https://samltoolkit.azurewebsites.net/`. 1. Select **Save**.
Use the values that you recorded for **SP Initiated Login URL** and **Assertion
To update the single sign-on values:
-1. In the Azure portal, select **Edit** in the **Basic SAML Configuration** section on the **Set up single sign-on** pane.
+1. In the Microsoft Entra admin center, select **Edit** in the **Basic SAML Configuration** section on the **Set up single sign-on** pane.
1. For **Reply URL (Assertion Consumer Service URL)**, enter the **Assertion Consumer Service (ACS) URL** value that you previously recorded. 1. For **Sign on URL**, enter the **SP Initiated Login URL** value that you previously recorded. 1. Select **Save**.
active-directory Add Application Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/add-application-portal.md
It's recommended that you use a nonproduction environment to test the steps in t
To add an enterprise application to your Azure AD tenant, you need: - An Azure AD user account. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).-- One of the following roles: Global Administrator, or Application Administrator.
+- One of the following roles: Global Administrator, Cloud Application Administrator, or Application Administrator.
## Add an enterprise application
To add an enterprise application to your Azure AD tenant, you need:
To add an enterprise application to your tenant:
-1. Sign in to the [Azure portal](https://portal.azure.com) and sign in using one of the roles listed in the prerequisites.
-1. Browse to **Azure Active Directory** and select **Enterprise applications**. The **All applications** pane opens and displays a list of the applications in your Azure AD tenant.
-1. In the **Enterprise applications** pane, select **New application**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator).
+1. Browse to **Identity** > **Applications** > **Enterprise applications** > **All applications**.
+1. Select **New application**.
1. The **Browse Azure AD Gallery** pane opens and displays tiles for cloud platforms, on-premises applications, and featured applications. Applications listed in the **Featured applications** section have icons indicating whether they support federated single sign-on (SSO) and provisioning. Search for and select the application. In this quickstart, **Azure AD SAML Toolkit** is being used. :::image type="content" source="media/add-application-portal/browse-gallery.png" alt-text="Browse in the enterprise application gallery for the application that you want to add.":::
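Gallery applications can also be added programmatically by instantiating the application template. This is a minimal sketch using the Microsoft Graph PowerShell SDK; the template display name is assumed to match the gallery listing.

```powershell
# Minimal sketch: add a gallery app by instantiating its application template.
Connect-MgGraph -Scopes 'Application.ReadWrite.All'

$template = Get-MgApplicationTemplate -Filter "displayName eq 'Azure AD SAML Toolkit'"
Invoke-MgInstantiateApplicationTemplate -ApplicationTemplateId $template.Id `
    -DisplayName 'Azure AD SAML Toolkit 1'
```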
active-directory App Management Powershell Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/app-management-powershell-samples.md
Last updated 07/12/2023 -+ # Azure Active Directory PowerShell examples for Application Management
active-directory Application Properties https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/application-properties.md
Previously updated : 09/06/2022 Last updated : 08/29/2023
This article describes the properties that you can configure for an enterprise application in your Azure Active Directory (Azure AD) tenant. To configure the properties, see [Configure enterprise application properties](add-application-portal-configure.md).
-## Enabled for users to sign in?
+## Enabled for users to sign in?
-If this option is set to **Yes**, then assigned users are able to sign in to the application from the My Apps portal, the User access URL, or by navigating to the application URL directly. If assignment is required, then only users who are assigned to the application are able to sign-in. If assignment is required, applications must be assigned to be granted a token.
+If this option is set to **Yes**, then assigned users are able to sign in to the application from the My Apps portal, the User access URL, or by navigating to the application URL directly. If assignment is required, then only users who are assigned to the application are able to sign in. If assignment is required, applications must be assigned to get a token.
If this option is set to **No**, then no users are able to sign in to the application, even if they're assigned to it. Tokens aren't issued for the application.
-## Name
+## Name
-This property is the name of the application that users see on the My Apps portal. Administrators see the name when they manage access to the application. Other tenants see the name when integrating the application into their directory.
+This property is the name of the application that users see on the My Apps portal. Administrators see the name when they manage access to the application. Other tenants see the name when integrating the application into their directory.
-It's recommended that you choose a name that users can understand. This is important because this name is visible in the various portals, such as My Apps and O365 Launcher.
+It's recommended that you choose a name that users can understand. This is important because this name is visible in the various portals, such as My Apps and Microsoft 365 Launcher.
-## Homepage URL
+## Homepage URL
-If the application is custom-developed, the homepage URL is the URL that a user can use to sign in to the application. For example, it's the URL that is launched when the application is selected in the My Apps portal. If this application is from the Azure AD Gallery, this URL is where you can go to learn more about the application or its vendor.
+If the application is custom-developed, the homepage URL is the URL that a user can use to sign in to the application. For example, it's the URL that is launched when the application is selected in the My Apps portal. If this application is from the Azure AD Gallery, this URL is where you can go to learn more about the application or its vendor.
-The homepage URL can't be edited within enterprise applications. The homepage URL must be edited on the application object.
+The homepage URL can't be edited within enterprise applications. The homepage URL must be edited on the application object.
-## Logo
+## Logo
This is the application logo that users see on the My Apps portal and the Office 365 application launcher. Administrators also see the logo in the Azure AD gallery. Custom logos must be exactly 215x215 pixels in size and be in the PNG format. You should use a solid color background with no transparency in your application logo. The logo file size can't be over 100 KB.
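If you need to script a logo update, the following is a minimal sketch that assumes the Microsoft Graph PowerShell SDK's `Set-MgApplicationLogo` cmdlet and a local `logo.png` that already meets the 215x215-pixel and 100-KB limits; `$applicationObjectId` is a placeholder.

```powershell
# Minimal sketch: upload a compliant PNG logo to the application object.
Connect-MgGraph -Scopes 'Application.ReadWrite.All'

$applicationObjectId = '<application-object-id>'   # placeholder
Set-MgApplicationLogo -ApplicationId $applicationObjectId -InFile '.\logo.png'
```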
-## Application ID
+## Application ID
This property is the unique identifier for the application in your directory. You can use this application ID if you ever need help from Microsoft Support. You can also use the identifier to perform operations using the Microsoft Graph APIs or the Microsoft Graph PowerShell SDK.
-## Object ID
+## Object ID
-This is the unique identifier of the service principal object associated with the application. This identifier can be useful when performing management operations against this application using PowerShell or other programmatic interfaces. This identifier is different than the identifier for the application object.
+This is the unique identifier of the service principal object associated with the application. This identifier can be useful when performing management operations against this application using PowerShell or other programmatic interfaces. This identifier is different than the identifier for the application object.
-The identifier is used to update information for the local instance of the application, such as assigning users and groups to the application. The identifier can also be used to update the properties of the enterprise application or to configure single-sign on.
+The identifier is used to update information for the local instance of the application, such as assigning users and groups to the application. The identifier can also be used to update the properties of the enterprise application or to configure single-sign on.
-## Assignment required
+## Assignment required
-This option doesn't affect whether or not an application appears on the My Apps portal. To show the application there, assign an appropriate user or group to the application. This option has no effect on users' access to the application when it's configured for any of the other single sign-on modes.
+This setting controls who or what in the directory can obtain an access token for the application. You can use this setting to further lock down access to the application and let only specified users and applications obtain access tokens.
+
+This option determines whether or not an application appears on the My Apps portal. To show the application there, assign an appropriate user or group to the application. This option has no effect on users' access to the application when it's configured for any of the other single sign-on modes.
+
+If this option is set to **Yes**, then users and other applications or services must first be assigned this application before being able to access it.
+
+If this option is set to **No**, then all users are able to sign in, and other applications and services are able to obtain an access token to the application. This option also allows any external users that may have been invited into your organization to sign in.
+
+This option only applies to the following types of applications and services:
-If this option is set to **Yes**, then users and other applications or services must first be assigned this application before being able to access it.
-
-If this option is set to **No**, then all users are able to sign in, and other applications and services are able to obtain an access token to the application.
-
-This option only applies to the following types of applications and
- Applications using SAML - OpenID Connect - OAuth 2.0 - WS-Federation for user sign-in-- Application Proxy applications with Azure AD pre-authentication enabled-- Applications or services for which other applications or service are requesting access tokens
+- Application Proxy applications with Azure AD preauthentication enabled
+- Applications or services for which other applications or services are requesting access tokens
-## Visible to users
+## Visible to users
-Makes the application visible in My Apps and the O365 Launcher
+Makes the application visible in My Apps and the Microsoft 365 Launcher
-If this option is set to **Yes**, then assigned users see the application on the My Apps portal and O365 app launcher.
+If this option is set to **Yes**, then assigned users see the application on the My Apps portal and Microsoft 365 app launcher.
-If this option is set to **No**, then no users see this application on their My Apps portal and O365 launcher.
+If this option is set to **No**, then no users see this application on their My Apps portal and Microsoft 365 launcher.
Make sure that a homepage URL is included or else the application can't be launched from the My Apps portal.
-Regardless of whether assignment is required or not, only assigned users are able to see this application in the My Apps portal. If you want certain users to see the application in the My Apps portal, but everyone to be able to access it, assign the users in the **Users and Groups** tab, and set assignment required to **No**.
+Regardless of whether assignment is required or not, only assigned users are able to see this application in the My Apps portal. If you want certain users to see the application in the My Apps portal, but everyone to be able to access it, assign the users in the **Users and Groups** tab, and set assignment required to **No**.
-## Notes
+## Notes
-You can use this field to add any information that is relevant for the management of the application. The field is a free text field with a maximum size of 1024 characters.
+You can use this field to add any information that is relevant for the management of the application. The field is a free text field with a maximum size of 1024 characters.
## Next steps
active-directory Application Sign In Other Problem Access Panel https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/application-sign-in-other-problem-access-panel.md
Previously updated : 02/01/2022 Last updated : 09/05/2023
Here are some things to check if an app is appearing or not appearing:
- Make sure the user's account is **enabled** for sign-ins. - Make sure the user's account is **not locked out.** - Make sure the user's **password is not expired or forgotten.**- Make sure **Multi-Factor Authentication** is not blocking user access.-- Make sure a **Conditional Access policy** or **Identity Protection** policy is not blocking user access.
+- Make sure **Multi-Factor Authentication** isn't blocking user access.
+- Make sure a **Conditional Access policy** or **Identity Protection** policy isn't blocking user access.
- Make sure that a user's **authentication contact info** is up to date to allow Multi-Factor Authentication or Conditional Access policies to be enforced. - Make sure to also try clearing your browser's cookies and trying to sign in again.
Access to My Apps can be blocked due to a problem with the user's account. Fol
To check if a user's account is present, follow these steps:
-1. Open the [**Azure portal**](https://portal.azure.com/) and sign in as a **Global Administrator.**
-2. Open the **Azure Active Directory Extension** by selecting **All services** at the top of the main left-hand navigation menu.
-3. Type in **ΓÇ£Azure Active Directory**ΓÇ¥ in the filter search box and select the **Azure Active Directory** item.
-4. Select **Users and groups** in the navigation menu.
-5. Select **All users**.
-6. **Search** for the user you are interested in and **select the row** to select.
-7. Check the properties of the user object to be sure that they look as you expect and no data is missing.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [User Administrator](../roles/permissions-reference.md#user-administrator).
+1. Browse to **Identity** > **Users** > **All users**.
+1. Search for the user you're interested in and **select the row** to view the details of the user.
+1. Check the properties of the user object to be sure that they look as you expect and no data is missing.
### Check a user's account status

To check a user's account status, follow these steps:
-1. Open the [**Azure portal**](https://portal.azure.com/) and sign in as a **Global Administrator.**
-2. Open the **Azure Active Directory Extension** by selecting **All services** at the top of the main left-hand navigation menu.
-3. Type in **"Azure Active Directory**" in the filter search box and select the **Azure Active Directory** item.
-4. Select **Users and groups** in the navigation menu.
-5. Select **All users**.
-6. **Search** for the user you are interested in and **select the row** to select.
-7. Select **Profile**.
-8. Under **Settings** ensure that **Block sign in** is set to **No**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [User Administrator](../roles/permissions-reference.md#user-administrator).
+1. Browse to **Identity** > **Users** > **All users**.
+1. **Search** for the user you're interested in and **select the row** to select the user.
+1. Select **Profile**.
+1. Under **Settings** ensure that **Block sign in** is set to **No**.
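If you prefer to script this check, the same flag can be read with Microsoft Graph PowerShell. The following is a minimal sketch, not part of the documented steps, assuming the Microsoft.Graph module is installed and using a placeholder UPN:

```powershell
# Hedged sketch: read the accountEnabled flag for a user.
# "user@contoso.com" is a placeholder, not a value from this article.
Connect-MgGraph -Scopes "User.Read.All"

Get-MgUser -UserId "user@contoso.com" -Property "displayName,accountEnabled" |
    Select-Object DisplayName, AccountEnabled   # AccountEnabled = $false means sign-in is blocked
```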
### Reset a user's password

To reset a user's password, follow these steps:
-1. Open the [**Azure portal**](https://portal.azure.com/) and sign in as a **Global Administrator.**
-2. Open the **Azure Active Directory Extension** by selecting **All services** at the top of the main left-hand navigation menu.
-3. Type in **"Azure Active Directory**" in the filter search box and select the **Azure Active Directory** item.
-4. Select **Users and groups** in the navigation menu.
-5. Select **All users**.
-6. **Search** for the user you are interested in and **select the row** to select.
-7. Select the **Reset password** button at the top of the user pane.
-8. Select the **Reset password** button on the **Reset password** pane that appears.
-9. Copy the **temporary password** or **enter a new password** for the user.
-10. Communicate this new password to the user, they be required to change this password during their next sign-in to Azure Active Directory.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [User Administrator](../roles/permissions-reference.md#user-administrator).
+1. Browse to **Identity** > **Users** > **All users**.
+1. **Search** for the user you're interested in and **select the row** to select the user.
+1. Select the **Reset password** button at the top of the user pane.
+1. Select the **Reset password** button on the **Reset password** pane that appears.
+1. Copy the **temporary password** or **enter a new password** for the user.
+1. Communicate this new password to the user. They'll be required to change it during their next sign-in to Azure Active Directory.
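As a scripted alternative to the portal steps above (a sketch only; the cmdlet call and placeholder values are assumptions, not part of this article), you can set a temporary password that the user must change at the next sign-in:

```powershell
# Hedged sketch: reset a user's password with Microsoft Graph PowerShell.
# Requires sufficient privileges; the UPN and password are placeholders.
Connect-MgGraph -Scopes "User.ReadWrite.All"

Update-MgUser -UserId "user@contoso.com" -PasswordProfile @{
    Password                      = "TempP@ssw0rd!"   # communicate this to the user
    ForceChangePasswordNextSignIn = $true             # user must change it at next sign-in
}
```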
### Enable self-service password reset
To enable self-service password reset, follow these deployment steps:
To check a user's multi-factor authentication status, follow these steps:
-1. Open the [**Azure portal**](https://portal.azure.com/) and sign in as a **Global Administrator.**
-2. Open the **Azure Active Directory Extension** by selecting **All services** at the top of the main left-hand navigation menu.
-3. Type in **"Azure Active Directory**" in the filter search box and select the **Azure Active Directory** item.
-4. Select **Users and groups** in the navigation menu.
-5. Select **All users**.
-6. Select the **Multi-Factor Authentication** button at the top of the pane.
-7. Once the **Multi-Factor Authentication Administration Portal** loads, ensure you are on the **Users** tab.
-8. Find the user in the list of users by searching, filtering, or sorting.
-9. Select the user from the list of users and **Enable**, **Disable**, or **Enforce** multi-factor authentication as desired.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [User Administrator](../roles/permissions-reference.md#user-administrator).
+1. Browse to **Identity** > **Users** > **All users**.
+1. Select the **Per-user MFA** button at the top of the pane.
+1. Once the **Multi-Factor Authentication** administration portal loads, ensure you're on the **Users** tab.
+1. Find the user in the list of users by searching, filtering, or sorting.
+1. Select the user from the list of users and **Enable**, **Disable**, or **Enforce** multi-factor authentication as desired.
>[!NOTE]
>If a user is in an **Enforced** state, you may set them to **Disabled** temporarily to let them back into their account. Once they are back in, you can then change their state to **Enabled** again to require them to re-register their contact information during their next sign-in. Alternatively, you can follow the steps in the [Check a user's authentication contact info](#check-a-users-authentication-contact-info) section to verify or set this data for them.
To check a user's multi-factor authentication status, follow these steps:
To check a user's authentication contact info used for Multi-factor authentication, Conditional Access, Identity Protection, and Password Reset, follow these steps:
-1. Open the [**Azure portal**](https://portal.azure.com/) and sign in as a **Global Administrator.**
-2. Open the **Azure Active Directory Extension** by selecting **All services** at the top of the main left-hand navigation menu.
-3. Type in **"Azure Active Directory**" in the filter search box and select the **Azure Active Directory** item.
-4. Select **Users and groups** in the navigation menu.
-5. Select **All users**.
-6. **Search** for the user you are interested in and **select the row** to select.
-7. Select **Profile**.
-8. Scroll down to **Authentication contact info**.
-9. **Review** the data registered for the user and update as needed.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [User Administrator](../roles/permissions-reference.md#user-administrator).
+1. Browse to **Identity** > **Users** > **All users**.
+1. **Search** for the user you're interested in and **select the row** to select the user.
+1. Select **Authentication methods** under **Manage**.
+1. **Review** the data registered for the user and update as needed.
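The registered methods can also be listed from PowerShell. This is a minimal sketch under the assumption that the Microsoft.Graph module is available; the UPN is a placeholder:

```powershell
# Hedged sketch: list a user's registered authentication methods.
Connect-MgGraph -Scopes "UserAuthenticationMethod.Read.All"

Get-MgUserAuthenticationMethod -UserId "user@contoso.com" |
    ForEach-Object { $_.AdditionalProperties['@odata.type'] }   # method type, for example phone or authenticator app
```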
### Check a user's group memberships

To check a user's group memberships, follow these steps:
-1. Open the [**Azure portal**](https://portal.azure.com/) and sign in as a **Global Administrator.**
-2. Open the **Azure Active Directory Extension** by selecting **All services** at the top of the main left-hand navigation menu.
-3. Type in **"Azure Active Directory**" in the filter search box and select the **Azure Active Directory** item.
-4. Select **Users and groups** in the navigation menu.
-5. Select **All users**.
-6. **Search** for the user you are interested in and **select the row** to select.
-7. Select **Groups** to see which groups the user is a member of.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [User Administrator](../roles/permissions-reference.md#user-administrator).
+1. Browse to **Identity** > **Users** > **All users**.
+1. **Search** for the user you're interested in and **select the row** to select the user.
+1. Select **Groups** to see which groups the user is a member of.
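The same information is available from PowerShell; the sketch below assumes the Microsoft.Graph module and uses a placeholder UPN:

```powershell
# Hedged sketch: list the display names of the groups a user belongs to.
Connect-MgGraph -Scopes "GroupMember.Read.All"

Get-MgUserMemberOf -UserId "user@contoso.com" -All |
    ForEach-Object { $_.AdditionalProperties['displayName'] }
```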
### Check if a user has more than 999 app role assignments

If a user has more than 999 app role assignments, then they may not see all of their apps on My Apps.
-This is because My Apps currently reads up to 999 app role assignments to determine the apps to which users are assigned. If a user is assigned to more than 999 apps, it is not possible to control which of those apps will show in the My Apps portal.
+This is because My Apps currently reads up to 999 app role assignments to determine the apps to which users are assigned. If a user is assigned to more than 999 apps, it isn't possible to control which of those apps show in the My Apps portal.
To check if a user has more than 999 app role assignments, follow these steps:

1. Install the [**Microsoft.Graph**](https://github.com/microsoftgraph/msgraph-sdk-powershell) PowerShell module.
-2. Run `Connect-MgGraph -Scopes "User.ReadBasic.All Application.Read.All"`.
-3. Run `(Get-MgUserAppRoleAssignment -UserId "<user-id>" -PageSize 999).Count` to determine the number of app role assignments the user currently has granted.
-4. If the result is 999, the user likely has more than 999 app roles assignments.
+2. Run `Connect-MgGraph -Scopes "User.ReadBasic.All Application.Read.All"` and sign in as at least a [User Administrator](../roles/permissions-reference.md#user-administrator).
+1. Run `(Get-MgUserAppRoleAssignment -UserId "<user-id>" -PageSize 999).Count` to determine the number of app role assignments the user currently has granted.
+1. If the result is 999, the user likely has more than 999 app role assignments.
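Put together, the steps above amount to a short script. This sketch keeps the article's placeholder user ID:

```powershell
# Hedged sketch combining the steps above; "<user-id>" is a placeholder.
Connect-MgGraph -Scopes "User.ReadBasic.All Application.Read.All"

$count = (Get-MgUserAppRoleAssignment -UserId "<user-id>" -PageSize 999).Count
if ($count -ge 999) {
    Write-Output "The user likely has more than 999 app role assignments; My Apps may not show every app."
} else {
    Write-Output "The user has $count app role assignments."
}
```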
### Check a user's assigned licenses

To check a user's assigned licenses, follow these steps:
-1. Open the [**Azure portal**](https://portal.azure.com/) and sign in as a **Global Administrator.**
-2. Open the **Azure Active Directory Extension** by selecting **All services** at the top of the main left-hand navigation menu.
-3. Type in **"Azure Active Directory**" in the filter search box and select the **Azure Active Directory** item.
-4. Select **Users and groups** in the navigation menu.
-5. Select **All users**.
-6. **Search** for the user you are interested in and **select the row** to select.
-7. Select **Licenses** to see which licenses the user currently has assigned.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [User Administrator](../roles/permissions-reference.md#user-administrator).
+1. Browse to **Identity** > **Users** > **All users**.
+1. **Search** for the user you're interested in and **select the row** to select the user.
+1. Select **Licenses** to see which licenses the user currently has assigned.
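A scripted equivalent (a sketch only, assuming the Microsoft.Graph module; the UPN is a placeholder) lists the assigned SKUs:

```powershell
# Hedged sketch: list the license SKUs assigned to a user.
Connect-MgGraph -Scopes "User.Read.All"

Get-MgUserLicenseDetail -UserId "user@contoso.com" |
    Select-Object SkuPartNumber, SkuId
```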
### Assign a user a license

To assign a license to a user, follow these steps:
-1. Open the [**Azure portal**](https://portal.azure.com/) and sign in as a **Global Administrator.**
-2. Open the **Azure Active Directory Extension** by selecting **All services** at the top of the main left-hand navigation menu.
-3. Type in **"Azure Active Directory**" in the filter search box and select the **Azure Active Directory** item.
-4. Select **Users and groups** in the navigation menu.
-5. Select **All users**.
-6. **Search** for the user you are interested in and **select the row** to select.
-7. Select **Licenses** to see which licenses the user currently has assigned.
-8. Select the **Assign** button.
-9. Select **one or more products** from the list of available products.
-10. **Optional** select the **assignment options** item to granularly assign products. Select **Ok**.
-11. Select the **Assign** button to assign these licenses to this user.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [User Administrator](../roles/permissions-reference.md#user-administrator).
+1. Browse to **Identity** > **Users** > **All users**.
+1. **Search** for the user you're interested in and **select the row** to select the user.
+1. Select **Licenses** to see which licenses the user currently has assigned.
+1. Select the **Assignments** button.
+1. Select one or more licenses from the list of available products.
+1. Optional: Select **Review license options** to granularly assign products.
+1. Select **Save**.
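If you assign licenses in bulk, the same operation can be scripted. The following is a hedged sketch, not from this article; the SKU ID is a placeholder you can look up with `Get-MgSubscribedSku`:

```powershell
# Hedged sketch: assign a license to a user with Microsoft Graph PowerShell.
# The user must have a usage location set before a license can be assigned.
Connect-MgGraph -Scopes "User.ReadWrite.All", "Organization.Read.All"

Set-MgUserLicense -UserId "user@contoso.com" `
    -AddLicenses @(@{ SkuId = "00000000-0000-0000-0000-000000000000" }) `
    -RemoveLicenses @()
```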
## Troubleshooting deep links
Deep links or User access URLs are links your users may use to access their pass
To check if you have the correct deep link, follow these steps:
-1. Open the [**Azure portal**](https://portal.azure.com/) and sign in as a **Global Administrator** or **Co-admin.**
-2. Open the **Azure Active Directory Extension** by selecting **All services** at the top of the main left-hand navigation menu.
-3. Type in **"Azure Active Directory**" in the filter search box and select the **Azure Active Directory** item.
-4. Select **Enterprise Applications** from the Azure Active Directory left-hand navigation menu.
-5. Select **All Applications** to view a list of all your applications.
- - If you do not see the application you want show up here, use the **Filter** control at the top of the **All Applications List** and set the **Show** option to **All Applications.**
-6. Select the application you want the check the deep link for.
-7. Find the label **User Access URL**. Your deep link should match this URL.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator).
+1. Browse to **Identity** > **Applications** > **Enterprise applications** > **All applications**.
+1. Enter the name of the existing application in the search box, and then select the application from the search results.
+1. Find the label **User Access URL**. Your deep link should match this URL.
## Contact support
active-directory Application Sign In Problem Application Error https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/application-sign-in-problem-application-error.md
In this scenario, Azure Active Directory (Azure AD) signs the user in. But the a
There are several possible reasons why the app didn't accept the response from Azure AD. If there's an error message or code displayed, use the following resources to diagnose the error:
-* [Azure AD Authentication and authorization error codes](../develop/reference-error-codes.md)
-
-* [Troubleshooting consent prompt errors](application-sign-in-unexpected-user-consent-error.md)
-
+- [Azure AD Authentication and authorization error codes](../develop/reference-error-codes.md)
+- [Troubleshooting consent prompt errors](application-sign-in-unexpected-user-consent-error.md)
If the error message doesn't clearly identify what's missing from the response, try the following:

-- If the app is the Azure AD gallery, verify that you followed the steps in [How to debug SAML-based single sign-on to applications in Azure AD](./debug-saml-sso-issues.md).
-
+- If the app is in the Azure AD gallery, verify that you followed the steps in [How to debug SAML-based single sign-on to applications in Azure AD](./debug-saml-sso-issues.md).
- Use a tool like [Fiddler](https://www.telerik.com/fiddler) to capture the SAML request, response, and token.
-
- Send the SAML response to the app vendor and ask them what's missing.

[!INCLUDE [portal updates](../includes/portal-update.md)]
If the error message doesn't clearly identify what's missing from the response,
To add an attribute in the Azure AD configuration that will be sent in the Azure AD response, follow these steps:
-1. Open the [**Azure portal**](https://portal.azure.com/) and sign in as a global administrator or co-admin.
-
-2. At the top of the navigation pane on the left side, select **All services** to open the Azure AD extension.
-
-3. Type **Azure Active Directory** in the filter search box, and then select **Azure Active Directory**.
-
-4. Select **Enterprise Applications** in the Azure AD navigation pane.
-
-5. Select **All Applications** to view a list of your apps.
-
- > [!NOTE]
- > If you don't see the app that you want, use the **Filter** control at the top of the **All Applications List**. Set the **Show** option to "All Applications."
-
-6. Select the application that you want to configure for single sign-on.
-
-7. After the app loads, select **Single sign-on** in the navigation pane.
-
-8. In the **User Attributes** section, select **View and edit all other user attributes**. Here you can change which attributes to send to the app in the SAML token when users sign in.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator).
+1. Browse to **Identity** > **Applications** > **Enterprise applications** > **All applications**.
+1. Enter the name of the existing application in the search box, and then select the application that you want to configure for single sign-on.
+1. After the app loads, select **Single sign-on** in the navigation pane.
+1. In the **User Attributes** section, select **View and edit all other user attributes**. Here you can change which attributes to send to the app in the SAML token when users sign in.
To add an attribute:
To add an attribute in the Azure AD configuration that will be sent in the Azure
1. Select **Save**. You'll see the new attribute in the table.
-9. Save the configuration.
+1. Save the configuration.
The next time that the user signs in to the app, Azure AD will send the new attribute in the SAML response.
If you're using [Azure AD automated user provisioning](../app-provisioning/user-
To change the User Identifier value, follow these steps:
-1. Open the [**Azure portal**](https://portal.azure.com/) and sign in as a global administrator or co-admin.
-
-2. Select **All services** at the top of the navigation pane on the left side to open the Azure AD extension.
-
-3. Type **Azure Active Directory** in the filter search box, and then select **Azure Active Directory**.
-
-4. Select **Enterprise Applications** in the Azure AD navigation pane.
-
-5. Select **All Applications** to view a list of your apps.
-
- > [!NOTE]
- > If you don't see the app that you want, use the **Filter** control at the top of the **All Applications List**. Set the **Show** option to "All Applications."
-
-6. Select the app that you want to configure for SSO.
-
-7. After the app loads, select **Single sign-on** in the navigation pane.
-
-8. Under **User attributes**, select the unique identifier for the user from the **User Identifier** drop-down list.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator).
+1. Browse to **Identity** > **Applications** > **Enterprise applications** > **All applications**.
+1. Select the app that you want to configure for SSO.
+1. After the app loads, select **Single sign-on** in the navigation pane.
+1. Under **User attributes**, select the unique identifier for the user from the **User Identifier** drop-down list.
### Change the NameID format
Azure AD selects the format for the **NameID** attribute (User Identifier) based
To change which parts of the SAML token are digitally signed by Azure AD, follow these steps:
-1. Sign in to the [Azure portal](https://portal.azure.com/) and sign in as a global administrator or co-admin.
-
-2. Select **All services** at the top of the navigation pane on the left side to open the Azure AD extension.
-
-3. Type **Azure Active Directory** in the filter search box, and then select **Azure Active Directory**.
-
-4. Select **Enterprise Applications** in the Azure AD navigation pane.
-
-5. Select **All Applications** to view a list of your apps.
-
- > [!NOTE]
- > If you don't see the application that you want, use the **Filter** control at the top of the **All Applications List**. Set the **Show** option to "All Applications."
-
-6. Select the application that you want to configure for single sign-on.
-
-7. After the application loads, select **Single sign-on** in the navigation pane.
-
-8. Under **SAML Signing Certificate**, select **Show advanced certificate signing settings**.
-
-9. Select the **Signing Option** that the app expects from among these options:
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator).
+1. Browse to **Identity** > **Applications** > **Enterprise applications** > **All applications**.
+1. Select the application that you want to configure for single sign-on.
+1. After the application loads, select **Single sign-on** in the navigation pane.
+1. Under **SAML Signing Certificate**, select **Show advanced certificate signing settings**.
+1. Select the **Signing Option** that the app expects from among these options:
- **Sign SAML response**
- **Sign SAML response and assertion**
By default, Azure AD signs the SAML token by using the most-secure algorithm. We
To change the signing algorithm, follow these steps:
-1. Sign in to the [Azure portal](https://portal.azure.com/) and sign in as a global administrator or co-admin.
-
-2. Select **All services** at the top of the navigation pane on the left side to open the Azure AD extension.
-
-3. Type **Azure Active Directory** in the filter search box, and then select **Azure Active Directory**.
-
-4. Select **Enterprise Applications** in the Azure AD navigation pane.
-
-5. Select **All Applications** to view a list of your applications.
-
- > [!NOTE]
- > If you don't see the application that you want, use the **Filter** control at the top of the **All Applications List**. Set the **Show** option to "All Applications."
-
-6. Select the app that you want to configure for single sign-on.
-
-7. After the app loads, select **Single sign-on** from the navigation pane on the left side of the app.
-
-8. Under **SAML Signing Certificate**, select **Show advanced certificate signing settings**.
-
-9. Select **SHA-1** as the **Signing Algorithm**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator).
+1. Browse to **Identity** > **Applications** > **Enterprise applications** > **All applications**.
+1. Select the app that you want to configure for single sign-on.
+1. After the app loads, select **Single sign-on** from the navigation pane on the left side of the app.
+1. Under **SAML Signing Certificate**, select **Show advanced certificate signing settings**.
+1. Select **SHA-1** as the **Signing Algorithm**.
The next time that the user signs in to the app, Azure AD will sign the SAML token by using the SHA-1 algorithm.

## Next steps
-* [How to debug SAML-based single sign-on to applications in Azure AD](./debug-saml-sso-issues.md).
-
-* [Azure AD Authentication and authorization error codes](../develop/reference-error-codes.md)
-
-* [Troubleshooting consent prompt errors](application-sign-in-unexpected-user-consent-error.md)
+- [How to debug SAML-based single sign-on to applications in Azure AD](./debug-saml-sso-issues.md)
+- [Azure AD Authentication and authorization error codes](../develop/reference-error-codes.md)
+- [Troubleshooting consent prompt errors](application-sign-in-unexpected-user-consent-error.md)
active-directory Application Sign In Unexpected User Consent Prompt https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/application-sign-in-unexpected-user-consent-prompt.md
Many applications that integrate with Azure Active Directory require permissions to various resources in order to run. When these resources are also integrated with Azure Active Directory, the permission to access them is requested using the Azure AD consent framework. These requests result in a consent prompt being shown the first time an application is used, which is often a one-time operation.
-In certain scenarios, additional consent prompts can appear when a user attempts to sign-in. In this article, we'll diagnose the reason for the unexpected consent prompts showing, and how to troubleshoot.
+In certain scenarios, additional consent prompts can appear when a user attempts to sign in. In this article, we diagnose why unexpected consent prompts appear and how to troubleshoot them.
> [!VIDEO https://www.youtube.com/embed/a1AjdvNDda4]
In certain scenarios, additional consent prompts can appear when a user attempts
Further prompts can be expected in various scenarios:
-* The application has been configured to require assignment. Individual user consent isn't currently supported for apps that require assignment; thus the permissions must be granted by an admin for the whole directory. If you configure an application to require assignment, be sure to also grant tenant-wide admin consent so that assigned user can sign-in.
+- The application has been configured to require assignment. Individual user consent isn't currently supported for apps that require assignment; thus the permissions must be granted by an admin for the whole directory. If you configure an application to require assignment, be sure to also grant tenant-wide admin consent so that assigned users can sign in.
-* The set of permissions required by the application has changed by the developer and needs to be granted again.
+- The set of permissions required by the application has been changed by the developer and needs to be granted again.
-* The user who originally consented to the application wasn't an administrator, and now a different (non-admin) user is using the application for the first time.
+- The user who originally consented to the application wasn't an administrator, and now a different (nonadmin) user is using the application for the first time.
-* The user who originally consented to the application was an administrator, but they didn't consent on-behalf of the entire organization.
+- The user who originally consented to the application was an administrator, but they didn't consent on behalf of the entire organization.
-* The application is using [incremental and dynamic consent](../develop/permissions-consent-overview.md#consent) to request further permissions after consent was initially granted. Incremental and dynamic consent is often used when optional features of an application require permissions beyond those required for baseline functionality.
+- The application is using [incremental and dynamic consent](../develop/permissions-consent-overview.md#consent) to request further permissions after consent was initially granted. Incremental and dynamic consent is often used when optional features of an application require permissions beyond those required for baseline functionality.
-* Consent was revoked after being granted initially.
+- Consent was revoked after being granted initially.
-* The developer has configured the application to require a consent prompt every time it's used (note: this behavior isn't best practice).
+- The developer has configured the application to require a consent prompt every time it's used (note: this behavior isn't best practice).
> [!NOTE]
> Following Microsoft's recommendations and best practices, many organizations have disabled or limited users' permission to grant consent to apps. If an application forces users to grant consent every time they sign in, most users will be blocked from using these applications even if an administrator grants tenant-wide admin consent. If you encounter an application which is requiring user consent even after admin consent has been granted, check with the app publisher to see if they have a setting or option to stop forcing user consent on every sign in.
Further prompts can be expected in various scenarios:
To ensure the permissions granted for the application are up to date, you can compare the permissions that are being requested by the application with the permissions already granted in the tenant.
-1. Sign in to the [Azure portal](https://portal.azure.com) with an administrator account.
-2. Navigate to **Enterprise applications**.
-3. Select the application in question from the list.
-4. Under Security in the left-hand navigation, choose **Permissions**
-5. View the list of already granted permissions from the table on the Permissions page
-6. To view the requested permissions, select the **Grant admin consent** button. (NOTE: This will open a consent prompt listing all of the requested permissions. Don't click accept on the consent prompt unless you're sure you want to grant tenant-wide admin consent.)
-7. Within the consent prompt, expand the listed permissions and compare with the table on the permissions page. If any are present in the consent prompt but not the permissions page, that permission has yet to be consented to. Unconsented permissions may be the cause for unexpected consent prompts showing for the application.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator).
+1. Browse to **Identity** > **Applications** > **Enterprise applications** > **All applications**.
+1. Enter the name of the existing application in the search box, and then select the application from the search results.
+1. Under **Security** in the left-hand navigation, choose **Permissions**.
+1. View the list of already granted permissions from the table on the **Permissions** page.
+1. To view the requested permissions, select the **Grant admin consent** button. This opens a consent prompt listing all of the requested permissions. Don't select **Accept** on the consent prompt unless you're sure you want to grant tenant-wide admin consent.
+1. Within the consent prompt, expand the listed permissions and compare with the table on the permissions page. If any are present in the consent prompt but not the permissions page, that permission has yet to be consented to. Unconsented permissions may be the cause for unexpected consent prompts showing for the application.
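For a scripted view of what's already granted, the delegated permission grants recorded for the app's service principal can be read with Microsoft Graph PowerShell. A minimal sketch, with a hypothetical app display name:

```powershell
# Hedged sketch: list delegated permission grants for an app's service principal.
Connect-MgGraph -Scopes "Directory.Read.All"

$sp = Get-MgServicePrincipal -Filter "displayName eq 'Contoso App'"   # placeholder name
Get-MgOauth2PermissionGrant -Filter "clientId eq '$($sp.Id)'" |
    Select-Object ConsentType, PrincipalId, Scope   # Scope lists the granted delegated permissions
```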
### View user assignment settings

If the application requires assignment, individual users can't consent for themselves. To check if assignment is required for the application, do the following:
-1. Sign in to the [Azure portal](https://portal.azure.com) with an administrator account.
-2. Navigate to **Enterprise applications**.
-3. Select the application in question from the list.
-4. Under Manage in the left-hand navigation, choose **Properties**.
-5. Check to see if **Assignment required?** is set to **Yes**.
-6. If set to yes, then an admin must consent to the permissions on behalf of the entire organization.
+1. On the application's page, select **Properties** under **Manage**.
+1. Check to see if **Assignment required?** is set to **Yes**.
+1. If set to yes, then an admin must consent to the permissions on behalf of the entire organization.
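The same setting is exposed as the `appRoleAssignmentRequired` property on the service principal; this sketch (placeholder app name, assuming the Microsoft.Graph module) reads it directly:

```powershell
# Hedged sketch: read the "Assignment required?" setting from the service principal.
Connect-MgGraph -Scopes "Application.Read.All"

$sp = Get-MgServicePrincipal -Filter "displayName eq 'Contoso App'"   # placeholder name
$sp.AppRoleAssignmentRequired   # $true corresponds to **Assignment required?** set to **Yes**
```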
### Review tenant-wide user consent settings

Whether an individual user can consent to an application is configurable by each organization and may differ from directory to directory. Even if none of the requested permissions requires admin consent by default, your organization may have disabled user consent entirely, preventing individual users from consenting to an application for themselves. To view your organization's user consent settings, do the following:
-1. Sign in to the [Azure portal](https://portal.azure.com) with an administrator account.
-2. Navigate to **Enterprise applications**.
-3. Under Security in the left-hand navigation, choose **Consent and permissions**.
-4. View the user consent settings. If set to *Do not allow user consent*, users will never be able to consent on behalf of themselves for an application.
+1. Navigate to the **Enterprise applications** page of the Microsoft Entra admin center.
+1. Under **Security**, choose **Consent and permissions**.
+1. View the user consent settings. If set to **Do not allow user consent**, users are never able to consent on behalf of themselves for an application.
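The tenant-wide setting can also be inspected through the authorization policy; a hedged sketch, assuming the Microsoft.Graph module:

```powershell
# Hedged sketch: inspect the tenant's user consent setting.
Connect-MgGraph -Scopes "Policy.Read.All"

(Get-MgPolicyAuthorizationPolicy).DefaultUserRolePermissions.PermissionGrantPoliciesAssigned
# An empty result corresponds to "Do not allow user consent".
```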
## Next steps
-* [Apps, permissions, and consent in Azure Active Directory (v1.0 endpoint)](../develop/quickstart-register-app.md)
+- [Apps, permissions, and consent in Azure Active Directory (v1.0 endpoint)](../develop/quickstart-register-app.md)
-* [Scopes, permissions, and consent in the Azure Active Directory (v2.0 endpoint)](../develop/permissions-consent-overview.md)
+- [Scopes, permissions, and consent in the Azure Active Directory (v2.0 endpoint)](../develop/permissions-consent-overview.md)
-* [Unexpected error when performing consent to an application](application-sign-in-unexpected-user-consent-error.md)
+- [Unexpected error when performing consent to an application](application-sign-in-unexpected-user-consent-error.md)
active-directory Assign App Owners https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/assign-app-owners.md
An [owner of an enterprise application](overview-assign-app-owners.md) in Azure Active Directory (Azure AD) can manage the organization-specific configuration of the application, such as single sign-on, provisioning, and user assignments. An owner can also add or remove other owners. Unlike Global Administrators, owners can manage only the enterprise applications they own. In this article, you learn how to assign an owner of an application.
+## Prerequisites
+
+To assign an owner to an enterprise application, you need:
+
+- An Azure AD user account. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- One of the following roles: Global Administrator, Cloud Application Administrator, or Application Administrator.
[!INCLUDE [portal updates](../includes/portal-update.md)]

## Assign an owner
An [owner of an enterprise application](overview-assign-app-owners.md) in Azure
To assign an owner to an enterprise application:
-1. Sign in to [your Azure AD organization](https://portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/Overview) with an account that is eligible for the **Application Administrator** role or the **Cloud Application Administrator** role for the organization.
-2. Select **Enterprise applications**, and then select the application that you want to add an owner to.
-3. Select **Owners**, and then select **Add** to get a list of user accounts that you can choose an owner from.
-4. Search for and select the user account that you want to be an owner of the application.
-5. Click **Select** to add the user account that you chose as an owner of the application.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator).
+1. Browse to **Identity** > **Applications** > **Enterprise applications** > **All applications**.
+1. Select the application that you want to add an owner to.
+1. Select **Owners**, and then select **Add** to get a list of user accounts that you can choose an owner from.
+1. Search for and select the user account that you want to be an owner of the application.
+1. Select **Select** to add the user account that you chose as an owner of the application.
:::zone-end
To assign an owner to an enterprise application:
Use the following Microsoft Graph PowerShell cmdlet to add an owner to an enterprise application.
-You'll need to consent to the `Application.ReadWrite.All` permission.
+You need to sign in as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator) and consent to the `Application.ReadWrite.All` permission.
In the following example, the user's object ID is 8afc02cb-4d62-4dba-b536-9f6d73e9be26 and the applicationId is 46e6adf4-a9cf-4b60-9390-0ba6fb00bf6b.

```powershell
-Import-Module Microsoft.Graph.Applications
+Connect-MgGraph -Scopes 'Application.ReadWrite.All'
+
+Import-Module Microsoft.Graph.Applications
$params = @{
  "@odata.id" = "https://graph.microsoft.com/v1.0/directoryObjects/8afc02cb-4d62-4dba-b536-9f6d73e9be26"
New-MgServicePrincipalOwnerByRef -ServicePrincipalId '46e6adf4-a9cf-4b60-9390-0b
:::zone pivot="ms-graph"
-To assign an owner to an application, sign in to [Graph Explorer](https://developer.microsoft.com/graph/graph-explorer) with one of the roles listed in the prerequisite section.
+To assign an owner to an application, sign in to [Graph Explorer](https://developer.microsoft.com/graph/graph-explorer) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator).
-You'll need to consent to the `Application.ReadWrite.All` permission.
+You need to consent to the `Application.ReadWrite.All` permission.
Run the following Microsoft Graph query to assign an owner to an application. You need the object ID of the user you want to assign the application to. In the following example, the user's object ID is 8afc02cb-4d62-4dba-b536-9f6d73e9be26 and the appId is 46e6adf4-a9cf-4b60-9390-0ba6fb00bf6b.
Content-Type: application/json
:::zone-end

> [!NOTE]
-> If the user setting **Restrict access to Azure AD administration portal** is set to `Yes`, non-admin users will not be able to use the Azure portal to manage the applications they own. For more information about the actions that can be performed on owned enterprise applications, see [Owned enterprise applications](../fundamentals/users-default-permissions.md#owned-enterprise-applications).
+> If the user setting **Restrict access to Azure AD administration portal** is set to `Yes`, non-admin users aren't able to use the Microsoft Entra admin center to manage the applications they own. For more information about the actions that can be performed on owned enterprise applications, see [Owned enterprise applications](../fundamentals/users-default-permissions.md#owned-enterprise-applications).
## Next steps
active-directory Assign User Or Group Access Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/assign-user-or-group-access-portal.md
Title: Assign users and groups
+ Title: Manage users and groups assignment to an application
description: Learn how to assign and unassign users, and groups, for an app using Azure Active Directory for identity management.
Last updated 11/22/2022 -+ zone_pivot_groups: enterprise-apps-all- #customer intent: As an admin, I want to manage user assignment for an app in Azure Active Directory using PowerShell
-# Assign users and groups to an application
+# Manage users and groups assignment to an application
This article shows you how to assign users and groups to an enterprise application in Azure Active Directory (Azure AD) using PowerShell. When you assign a user to an application, the application appears in the user's [My Apps](https://myapps.microsoft.com/) portal for easy access. If the application exposes app roles, you can also assign a specific app role to the user.
To assign users to an enterprise application, you need:
- One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal.
- Azure Active Directory Premium P1 or P2 for group-based assignment. For more licensing requirements for the features discussed in this article, see the [Azure Active Directory pricing page](https://azure.microsoft.com/pricing/details/active-directory).
-
+## Assign users, and groups, to an application
+
:::zone pivot="portal" To assign a user or group account to an enterprise application:
-1. Sign in to the [Azure portal](https://portal.azure.com), then select **Enterprise applications**, and then search for and select the application to which you want to assign the user or group account.
-1. Browse to **Azure Active Directory** > **Users and groups**, and then select **Add user/group**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator).
+1. Browse to **Identity** > **Applications** > **Enterprise applications** > **All applications**.
+1. Enter the name of the existing application in the search box, and then select the application from the search results.
+1. Select **Users and groups**, and then select **Add user/group**.
:::image type="content" source="media/add-application-portal-assign-users/assign-user.png" alt-text="Assign user account to an application in your Azure AD tenant.":::
To assign a user or group account to an enterprise application:
1. Select **Select**.
1. On the **Add Assignment** pane, select **Assign** at the bottom of the pane.
+## Unassign users, and groups, from an application
+
+1. Follow the steps in the [Assign users, and groups, to an application](#assign-users-and-groups-to-an-application) section to navigate to the **Users and groups** pane.
+1. Search for and select the user or group that you want to unassign from the application.
+1. Select **Remove** to unassign the user or group from the application.
+
:::zone-end

:::zone pivot="aad-powershell"

1. Open an elevated Windows PowerShell command prompt.
-1. Run `Connect-AzureAD -Scopes "Application.Read.All", "Directory.Read.All", "Application.ReadWrite.All", "Directory.ReadWrite.All"` and sign in with a Global Administrator user account.
+1. Run `Connect-AzureAD` and sign in as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator).
1. Use the following script to assign a user and role to an application:

```powershell
This example assigns the user Britta Simon to the Microsoft Workplace Analytics
## Unassign users, and groups, from an application

1. Open an elevated Windows PowerShell command prompt.
-1. Run `Connect-AzureAD -Scopes "Application.ReadWrite.All", "Directory.ReadWrite.All", "AppRoleAssignment.ReadWrite.All"` and sign in with a Global Administrator user account. Use the following script to remove a user and role from an application.
+1. Run `Connect-AzureAD` and sign in as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator).
+1. Use the following script to remove a user and role from an application.
```powershell
# Store the proper parameters
$assignments | ForEach-Object {
:::zone pivot="ms-powershell" 1. Open an elevated Windows PowerShell command prompt.
-1. Run `Connect-MgGraph -Scopes "Application.ReadWrite.All", "Directory.ReadWrite.All", "AppRoleAssignment.ReadWrite.All"` and sign in with a Global Administrator user account.
+1. Run `Connect-MgGraph -Scopes "Application.ReadWrite.All", "Directory.ReadWrite.All", "AppRoleAssignment.ReadWrite.All"` and sign in as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator).
1. Use the following script to assign a user and role to an application:

```powershell
New-MgUserAppRoleAssignment -UserId $userId -BodyParameter $params |
## Unassign users, and groups, from an application

1. Open an elevated Windows PowerShell command prompt.
-1. Run `Connect-MgGraph -Scopes "Application.ReadWrite.All", "Directory.ReadWrite.All", "AppRoleAssignment.ReadWrite.All"` and sign in with a Global Administrator user account. Use the following script to remove a user and role from an application.
+1. Run `Connect-MgGraph -Scopes "Application.ReadWrite.All", "Directory.ReadWrite.All", "AppRoleAssignment.ReadWrite.All"` and sign in as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator). Use the following script to remove a user and role from an application.
```powershell
# Get the user and the service principal
$assignments | ForEach-Object {
:::zone pivot="ms-graph"
-1. To assign users and groups to an application, sign in to [Graph Explorer](https://developer.microsoft.com/graph/graph-explorer) with one of the roles listed in the prerequisite section.
+1. To assign users and groups to an application, sign in to [Graph Explorer](https://developer.microsoft.com/graph/graph-explorer) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator).
You'll need to consent to the following permissions:
$assignments | ForEach-Object {
In the example, both the resource-servicePrincipal-id and resourceId represent the enterprise application.

## Unassign users, and groups, from an application
+
To unassign users and groups from the application, run the following query.

1. Get the enterprise application. Filter by displayName.
active-directory Certificate Signing Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/certificate-signing-options.md
Title: Advanced certificate signing options in a SAML token
-description: Learn how to use advanced certificate signing options in the SAML token for pre-integrated apps in Azure Active Directory
+description: Learn how to use advanced certificate signing options in the SAML token for preintegrated apps in Azure Active Directory
Previously updated : 07/21/2022 Last updated : 07/18/2023
# Advanced certificate signing options in a SAML token
-Today Azure Active Directory (Azure AD) supports thousands of pre-integrated applications in the Azure Active Directory App Gallery. Over 500 of the applications support single sign-on by using the [Security Assertion Markup Language](https://wikipedia.org/wiki/Security_Assertion_Markup_Language) (SAML) 2.0 protocol, such as the [NetSuite](https://azuremarketplace.microsoft.com/marketplace/apps/aad.netsuite) application. When a customer authenticates to an application through Azure AD by using SAML, Azure AD sends a token to the application (via an HTTP POST). The application then validates and uses the token to sign in the customer instead of prompting for a username and password. These SAML tokens are signed with the unique certificate that's generated in Azure AD and by specific standard algorithms.
+Today Azure Active Directory (Azure AD) supports thousands of preintegrated applications in the Azure Active Directory App Gallery. Over 500 of the applications support single sign-on by using the [Security Assertion Markup Language](https://wikipedia.org/wiki/Security_Assertion_Markup_Language) (SAML) 2.0 protocol, such as the [NetSuite](https://azuremarketplace.microsoft.com/marketplace/apps/aad.netsuite) application. When a customer authenticates to an application through Azure AD by using SAML, Azure AD sends a token to the application (via an HTTP POST). The application then validates and uses the token to sign in the customer instead of prompting for a username and password. These SAML tokens are signed with the unique certificate that's generated in Azure AD and by specific standard algorithms.
Azure AD uses some of the default settings for the gallery applications. The default values are set up based on the application's requirements.
Azure AD supports two signing algorithms, or secure hash algorithms (SHAs), to s
* **SHA-1**. This algorithm is older, and it's treated as less secure than SHA-256. If an application supports only this signing algorithm, you can select this option in the **Signing Algorithm** drop-down list. Azure AD then signs the SAML response with the SHA-1 algorithm.
+## Prerequisites
+
+To change an application's SAML certificate signing options and the certificate signing algorithm, you need:
+
+- An Azure AD user account. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal.
[!INCLUDE [portal updates](../includes/portal-update.md)]

## Change certificate signing options and signing algorithm
-To change an application's SAML certificate signing options and the certificate signing algorithm, select the application in question:
+To change an application's SAML certificate signing options and the certificate signing algorithm:
-1. In the [Azure portal](https://portal.azure.com), sign in to your account.
-1. Browse to **Azure Active Directory** > **Enterprise applications**. A list of the enterprise applications in your account appears.
-1. Select an application. An overview page for the application appears. In this example, the Salesforce application is used.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator).
+1. Browse to **Identity** > **Applications** > **Enterprise applications** > **All applications**.
+1. Enter the name of the existing application in the search box, and then select the application from the search results. In this example, you use the Salesforce application.
![Example: Application overview page](./media/certificate-signing-options/application-overview-page.png)
Next, change the certificate signing options in the SAML token for that applicat
## Next steps
-* [Configure single sign-on to applications that are not in the Azure Active Directory App Gallery](../develop/single-sign-on-saml-protocol.md)
-* [Troubleshoot SAML-based single sign-on](./debug-saml-sso-issues.md)
+- [Configure single sign-on to applications that are not in the Azure Active Directory App Gallery](../develop/single-sign-on-saml-protocol.md)
+- [Troubleshoot SAML-based single sign-on](./debug-saml-sso-issues.md)
active-directory Configure Admin Consent Workflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/configure-admin-consent-workflow.md
To configure the admin consent workflow, you need:
To enable the admin consent workflow and choose reviewers:
-1. Sign in to the [Azure portal](https://portal.azure.com) with one of the roles listed in the prerequisites.
-1. Search for and select **Azure Active Directory**.
-1. Select **Enterprise applications**.
-1. Under **Security**, select **Consent and permissions**.
-1. Under **Manage**, select **Admin consent settings**. Under **Admin consent requests**, select **Yes** for **Users can request admin consent to apps they are unable to consent to** .
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as a [Global Administrator](../roles/permissions-reference.md#global-administrator).
+1. Browse to **Identity** > **Applications** > **Enterprise applications** > **Consent and permissions** > **Admin consent settings**.
+1. Under **Admin consent requests**, select **Yes** for **Users can request admin consent to apps they are unable to consent to**.
![Screenshot of configure admin consent workflow settings.](./media/configure-admin-consent-workflow/enable-admin-consent-workflow.png)
active-directory Configure Authentication For Federated Users Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/configure-authentication-for-federated-users-portal.md
Last updated 03/16/2023 -+ zone_pivot_groups: home-realm-discovery- #customer intent: As and admin, I want to configure Home Realm Discovery for Azure AD authentication for federated users.
active-directory Configure Linked Sign On https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/configure-linked-sign-on.md
Some common scenarios where linked-based SSO is valuable include:
## Prerequisites

To configure linked-based SSO in your Azure AD tenant, you need:
- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F)
- One of the following roles: Global Administrator, Application Administrator, or owner of the service principal.
- An application that supports linked-based SSO.
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F)
+- One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal.
+- An application that supports linked-based SSO.
## Configure linked-based single sign-on
-1. Sign in to the [Azure portal](https://portal.azure.com) with the appropriate role.
-2. Select **Azure Active Directory** in Azure Services, and then select **Enterprise applications**.
-3. Search for and select the application that you want to add linked SSO.
-4. Select **Single sign-on** and then select **Linked**.
-5. Enter the URL for the sign-in page of the application.
-6. Select **Save**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator).
+1. Browse to **Identity** > **Applications** > **Enterprise applications** > **All applications**.
+1. Search for and select the application that you want to add linked SSO.
+1. Select **Single sign-on** and then select **Linked**.
+1. Enter the URL for the sign-in page of the application.
+1. Select **Save**.
## Next steps
active-directory Configure Password Single Sign On Non Gallery Applications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/configure-password-single-sign-on-non-gallery-applications.md
The configuration page for password-based SSO is simple. It includes only the UR
To configure password-based SSO in your Azure AD tenant, you need:

- An Azure account with an active subscription. If you don't already have one, you can [create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F)
- Global Administrator, or owner of the service principal.
+- Global Administrator, Cloud Application Administrator, or owner of the service principal.
- An application that supports password-based SSO.

## Configure password-based single sign-on
-1. Sign in to the [Azure portal](https://portal.azure.com) with the appropriate role.
-1. Select **Azure Active Directory** in Azure Services, and then select **Enterprise applications**.
-1. Search for and select the application that you want to add password-based SSO.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator).
+1. Browse to **Identity** > **Applications** > **Enterprise applications** > **All applications**.
+1. Enter the name of the existing application in the search box, and then select the application from the search results.
1. Select **Single sign-on** and then select **Password-based**.
1. Enter the URL for the sign-in page of the application.
1. Select **Save**.
active-directory Configure Permission Classifications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/configure-permission-classifications.md
Last updated 3/28/2023 -+ zone_pivot_groups: enterprise-apps-all- #customer intent: As an admin, I want configure permission classifications for applications in Azure AD
To configure permission classifications, you need:
:::zone pivot="portal"
-Follow these steps to classify permissions using the Azure portal:
+Follow these steps to classify permissions using the Microsoft Entra admin center:
-1. Sign in to the [Azure portal](https://portal.azure.com) as a [Global Administrator](../roles/permissions-reference.md#global-administrator), [Application Administrator](../roles/permissions-reference.md#application-administrator), or [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator)
-1. Select **Azure Active Directory** > **Enterprise applications** > **Consent and permissions** > **Permission classifications**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator).
+1. Browse to **Identity** > **Applications** > **Enterprise applications** > **Consent and permissions** > **Permission classifications**.
1. Choose the tab for the permission classification you'd like to update.
1. Choose **Add permissions** to classify another permission.
1. Select the API and then select the delegated permission(s).
In this example, we've classified the minimum set of permission required for sin
You can use the latest [Azure AD PowerShell](/powershell/module/azuread/?preserve-view=true&view=azureadps-2.0) to classify permissions. Permission classifications are configured on the **ServicePrincipal** object of the API that publishes the permissions.
-Run the following command to connect to Azure AD PowerShell. To consent to the required scopes, sign in with one of the roles listed in the prerequisite section of this article.
+Run the following command to connect to Azure AD PowerShell. To consent to the required scopes, sign in as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator).
```powershell Connect-AzureAD
Connect-AzureAD
You can use [Microsoft Graph PowerShell](/powershell/microsoftgraph/get-started?preserve-view=true&view=graph-powershell-1.0), to classify permissions. Permission classifications are configured on the **ServicePrincipal** object of the API that publishes the permissions.
-Run the following command to connect to Microsoft Graph PowerShell. To consent to the required scopes, sign in with one of the roles listed in the prerequisite section of this article.
+Run the following command to connect to Microsoft Graph PowerShell. To consent to the required scopes, sign in as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator).
```powershell Connect-MgGraph -Scopes "Policy.ReadWrite.PermissionGrant"
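# A hedged sketch (not part of the original snippet): classify a delegated
# permission as "low". The Microsoft Graph service principal and its
# User.Read permission are used here purely for illustration.
$api = Get-MgServicePrincipal -Filter "displayName eq 'Microsoft Graph'"
$permission = $api.Oauth2PermissionScopes | Where-Object { $_.Value -eq "User.Read" }
New-MgServicePrincipalDelegatedPermissionClassification -ServicePrincipalId $api.Id `
    -Classification "low" -PermissionId $permission.Id -PermissionName $permission.Value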
Remove-MgServicePrincipalDelegatedPermissionClassification -DelegatedPermissionC
:::zone pivot="ms-graph"
-To configure permissions classifications for an enterprise application, sign in to [Graph Explorer](https://developer.microsoft.com/graph/graph-explorer) with one of the roles listed in the prerequisite section.
+To configure permissions classifications for an enterprise application, sign in to [Graph Explorer](https://developer.microsoft.com/graph/graph-explorer) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator).
You need to consent to the `Policy.ReadWrite.PermissionGrant` permission.
active-directory Configure Risk Based Step Up Consent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/configure-risk-based-step-up-consent.md
Last updated 11/17/2021 --+ #customer intent: As an admin, I want to configure risk-based step-up consent. # Configure risk-based step-up consent using PowerShell
active-directory Configure User Consent Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/configure-user-consent-groups.md
Title: Configure group owner consent to apps accessing group data
-description: Learn manage whether group and team owners can consent to applications that will have access to the group or team's data.
+description: Manage group and team owners' consent to applications that request access to the group or team's data.
Previously updated : 09/06/2022 Last updated : 08/25/2023
+zone_pivot_groups: enterprise-apps-minus-former-powershell
#customer intent: As an admin, I want to configure group owner consent to apps accessing group data using Azure AD
-# Configure group owner consent to applications
+# Configure group and team owner consent to applications
+
+In this article, you'll learn how to configure the way group and team owners consent to applications and how to disable all future group and team owners' consent operations to applications.
Group and team owners can authorize applications, such as applications published by third-party vendors, to access your organization's data associated with a group. For example, a team owner in Microsoft Teams can allow an app to read all Teams messages in the team, or list the basic profile of a group's members. See [Resource-specific consent in Microsoft Teams](/microsoftteams/resource-specific-consent) to learn more.
+Group owner consent can be managed in two separate ways: through *directory settings* and *app consent policy*. In the directory settings, you can enable all group owners, enable selected group owners, or disable group owners' ability to give consent to applications. With the app consent policy, you can instead specify which app consent policy governs group owner consent for applications, and assign either a Microsoft built-in policy or a custom policy that you create.
+
+Before using the app consent policy to manage group owner consent, you must disable the group owner consent setting that is managed by directory settings. Disabling this setting allows group owner consent to be governed by app consent policies. This article shows several ways to disable that setting. To learn more, see [managing group owner consent by app consent policies](manage-group-owner-consent-policies.md).
++ ## Prerequisites
-To complete the tasks in this guide, you need the following:
+To configure group and team owner consent, you need:
-- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).-- A Global Administrator role.-- Set up Azure AD PowerShell. See [Azure AD PowerShell](/powershell/azure/)
+- A user account. If you don't already have one, you can [create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- A Global Administrator or Privileged Administrator role.
-## Manage group owner consent to apps
+## Manage group owner consent to apps by directory settings
[!INCLUDE [portal updates](~/articles/active-directory/includes/portal-update.md)]
-You can configure which users are allowed to consent to apps accessing their groups' or teams' data, or you can disable this for all users.
+You can configure which users are allowed to consent to apps accessing their groups' or teams' data, or you can disable the setting for all users.
+
-# [Portal](#tab/azure-portal)
+To configure group and team owner consent settings through the Azure portal:
Follow these steps to manage group owner consent to apps accessing group data:
Follow these steps to manage group owner consent to apps accessing group data:
In this example, all group owners are allowed to consent to apps accessing their groups' data: :::image type="content" source="media/configure-user-consent-groups/group-owner-consent.png" alt-text="Group owner consent settings":::++
+To manage group and team owner consent settings through directory settings by using Microsoft Graph PowerShell:
+
+You can use the [Microsoft Graph PowerShell](/powershell/microsoftgraph/get-started?view=graph-powershell-1.0&preserve-view=true) module to enable or disable group owners' ability to consent to applications accessing your organization's data for the groups they own. The cmdlets used here are included in the [Microsoft.Graph.Identity.SignIns](https://www.powershellgallery.com/packages/Microsoft.Graph.Identity.SignIns) module.
+
+### Connect to Microsoft Graph PowerShell
+
+Connect to Microsoft Graph PowerShell using the least-privilege permission needed. For reading the current user consent settings, use *Policy.Read.All*. For reading and changing the user consent settings, use *Policy.ReadWrite.Authorization*.
+
+Change the profile to beta by using the `Select-MgProfile` command:
+```powershell
+Select-MgProfile -Name "beta"
+```
+Use the least-privilege permission:
+```powershell
+Connect-MgGraph -Scopes "Policy.ReadWrite.Authorization"
+
+# If you need to create a new setting based on the templates, please use this permission
+Connect-MgGraph -Scopes "Directory.ReadWrite.All"
+```
+
+### Retrieve the current setting through directory settings
+
+Retrieve the current value for the **Consent Policy Settings** directory settings in your tenant. This requires checking if the directory settings for this feature have been created, and if not, using the values from the corresponding directory settings template.
+
+```powershell
+$consentSettingsTemplateId = "dffd5d46-495d-40a9-8e21-954ff55e198a" # Consent Policy Settings
+$settings = Get-MgDirectorySetting | ?{ $_.TemplateId -eq $consentSettingsTemplateId }
+
+if (-not $settings) {
+ $template = Get-MgDirectorySettingTemplate -DirectorySettingTemplateId $consentSettingsTemplateId
+ $body = @{
+ "templateId" = $template.Id
+ "values" = @(
+ @{
+ "name" = "EnableGroupSpecificConsent"
+ "value" = $true
+ },
+ @{
+ "name" = "BlockUserConsentForRiskyApps"
+ "value" = $true
+ },
+ @{
+ "name" = "EnableAdminConsentRequests"
+ "value" = $true
+ },
+ @{
+ "name" = "ConstrainGroupSpecificConsentToMembersOfGroupId"
+ "value" = ""
+ }
+ )
+ }
+ $settings = New-MgDirectorySetting -BodyParameter $body
+}
+
+$enabledValue = $settings.Values | ? { $_.Name -eq "EnableGroupSpecificConsent" }
+$limitedToValue = $settings.Values | ? { $_.Name -eq "ConstrainGroupSpecificConsentToMembersOfGroupId" }
+```
+
+### Understand the setting values
+
+There are two settings values that define which users would be able to allow an app to access their group's data:
+
+| Setting | Type | Description |
+| - | | |
+| _EnableGroupSpecificConsent_ | Boolean | Flag indicating whether group owners are allowed to grant group-specific permissions. |
+| _ConstrainGroupSpecificConsentToMembersOfGroupId_ | Guid | If _EnableGroupSpecificConsent_ is set to "True" and this value is set to a group's object ID, members of the identified group are authorized to grant group-specific permissions to the groups they own. |
+
+### Update settings values for the desired configuration
+
+```powershell
+# Disable group-specific consent entirely
+$enabledValue.Value = "false"
+$limitedToValue.Value = ""
+```
+
+```powershell
+# Enable group-specific consent for all users
+$enabledValue.Value = "true"
+$limitedToValue.Value = ""
+```
+
+```powershell
+# Enable group-specific consent for users in a given group
+$enabledValue.Value = "true"
+$limitedToValue.Value = "{group-object-id}"
+```
+
+### Save your settings
+
+```powershell
+# Update an existing directory settings
+Update-MgDirectorySetting -DirectorySettingId $settings.Id -Values $settings.Values
+```
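+To confirm the change, you can re-read the directory setting (a minimal sketch that reuses the `$settings` variable from the preceding steps):
+
+```powershell
+# Re-read the directory setting and display its current values
+$settings = Get-MgDirectorySetting -DirectorySettingId $settings.Id
+$settings.Values | Format-Table Name, Value
+```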
+++
+To manage group and team owner consent settings through directory settings by using [Graph Explorer](https://developer.microsoft.com/graph/graph-explorer):
+
+### Retrieve the current setting through directory settings
+
+Retrieve the current value for the **Consent Policy Settings** from directory settings in your tenant. This requires checking if the directory settings for this feature have been created, and if not, using the second MS Graph call to create the corresponding directory settings.
+```http
+GET https://graph.microsoft.com/beta/settings
+```
+Response
+
+``` http
+{
+ "@odata.context": "https://graph.microsoft.com/beta/$metadata#settings",
+ "value": [
+ {
+ "id": "{ directorySettingId }",
+ "displayName": "Consent Policy Settings",
+ "templateId": "dffd5d46-495d-40a9-8e21-954ff55e198a",
+ "values": [
+ {
+ "name": "EnableGroupSpecificConsent",
+ "value": "true"
+ },
+ {
+ "name": "BlockUserConsentForRiskyApps",
+ "value": "true"
+ },
+ {
+ "name": "EnableAdminConsentRequests",
+ "value": "true"
+ },
+ {
+ "name": "ConstrainGroupSpecificConsentToMembersOfGroupId",
+ "value": ""
+ }
+ ]
+ }
+ ]
+}
+```
++
+Create the corresponding directory settings if the `value` collection is empty (see the following example).
+```http
+GET https://graph.microsoft.com/beta/settings
+
+{
+ "@odata.context": "https://graph.microsoft.com/beta/$metadata#settings",
+ "value": []
+}
+```
++
+```http
+POST https://graph.microsoft.com/beta/settings
+{
+ "templateId": "dffd5d46-495d-40a9-8e21-954ff55e198a",
+ "values": [
+ {
+ "name": "EnableGroupSpecificConsent",
+ "value": "true"
+ },
+ {
+ "name": "BlockUserConsentForRiskyApps",
+ "value": "true"
+ },
+ {
+ "name": "EnableAdminConsentRequests",
+ "value": "true"
+ },
+ {
+ "name": "ConstrainGroupSpecificConsentToMembersOfGroupId",
+ "value": ""
+ }
+ ]
+}
+```
+### Understand the setting values
+
+There are two settings values that define which users would be able to allow an app to access their group's data:
+
+| Setting | Type | Description |
+| - | | |
+| _EnableGroupSpecificConsent_ | Boolean | Flag indicating whether group owners are allowed to grant group-specific permissions. |
+| _ConstrainGroupSpecificConsentToMembersOfGroupId_ | Guid | If _EnableGroupSpecificConsent_ is set to "True" and this value is set to a group's object ID, members of the identified group are authorized to grant group-specific permissions to the groups they own. |
+
+### Update settings values for the desired configuration
+
+Replace `{directorySettingId}` with the actual ID from the `value` collection that was returned when you retrieved the current setting.
+
+Disable group-specific consent entirely
+```http
+PATCH https://graph.microsoft.com/beta/settings/{directorySettingId}
+{
+ "values": [
+ {
+ "name": "EnableGroupSpecificConsent",
+ "value": "false"
+ },
+ {
+ "name": "BlockUserConsentForRiskyApps",
+ "value": "true"
+ },
+ {
+ "name": "EnableAdminConsentRequests",
+ "value": "true"
+ },
+ {
+ "name": "ConstrainGroupSpecificConsentToMembersOfGroupId",
+ "value": ""
+ }
+ ]
+}
+```
+
+Enable group-specific consent for all users
+```http
+PATCH https://graph.microsoft.com/beta/settings/{directorySettingId}
+{
+ "values": [
+ {
+ "name": "EnableGroupSpecificConsent",
+ "value": "true"
+ },
+ {
+ "name": "BlockUserConsentForRiskyApps",
+ "value": "true"
+ },
+ {
+ "name": "EnableAdminConsentRequests",
+ "value": "true"
+ },
+ {
+ "name": "ConstrainGroupSpecificConsentToMembersOfGroupId",
+ "value": ""
+ }
+ ]
+}
+```
+Enable group-specific consent for users in a given group
+```http
+PATCH https://graph.microsoft.com/beta/settings/{directorySettingId}
+{
+ "values": [
+ {
+ "name": "EnableGroupSpecificConsent",
+ "value": "true"
+ },
+ {
+ "name": "BlockUserConsentForRiskyApps",
+ "value": "true"
+ },
+ {
+ "name": "EnableAdminConsentRequests",
+ "value": "true"
+ },
+ {
+ "name": "ConstrainGroupSpecificConsentToMembersOfGroupId",
+ "value": "{group-object-id}"
+ }
+ ]
+}
+```
+
+> [!NOTE]
+> The **User can consent to apps accessing company data on their behalf** setting, when turned off, doesn't disable the **Users can consent to apps accessing company data for groups they own** option.
+
+## Manage group owner consent to apps by app consent policy
+
+You can configure which users are allowed to consent to apps accessing their groups' or teams' data through app consent policies. To allow group owner consent subject to app consent policies, the group owner consent setting must be disabled. Once disabled, your current policy is read from app consent policies.
+
-# [PowerShell](#tab/azure-powershell)
+To choose which app consent policy governs group owner consent for applications, you can use the [Microsoft Graph PowerShell](/powershell/microsoftgraph/get-started?view=graph-powershell-1.0&preserve-view=true) module. The cmdlets used here are included in the [Microsoft.Graph.Identity.SignIns](https://www.powershellgallery.com/packages/Microsoft.Graph.Identity.SignIns) module.
-You can use the Azure AD PowerShell Preview module, [AzureADPreview](/powershell/module/azuread/?preserve-view=true&view=azureadps-2.0-preview), to enable or disable group owners' ability to consent to applications accessing your organization's data for the groups they own.
+### Connect to Microsoft Graph PowerShell
-1. Make sure you're using the [AzureADPreview](/powershell/module/azuread/?preserve-view=true&view=azureadps-2.0-preview) module. This step is important if you have installed both the [AzureAD](/powershell/module/azuread/) module and the [AzureADPreview](/powershell/module/azuread/?preserve-view=true&view=azureadps-2.0-preview) module).
+Connect to Microsoft Graph PowerShell using the least-privilege permission needed. For reading the current user consent settings, use *Policy.Read.All*. For reading and changing the user consent settings, use *Policy.ReadWrite.Authorization*.
+```powershell
+# change the profile to beta by using the `Select-MgProfile` command
+Select-MgProfile -Name "beta"
+```
+```powershell
+Connect-MgGraph -Scopes "Policy.ReadWrite.Authorization"
+```
+
+### Disable group owner consent to use app consent policies
+
+1. Check if the `ManagePermissionGrantsForOwnedResource` policy is scoped to `group`:
+
+ 1. Retrieve the current value for the group owner consent setting
```powershell
- Remove-Module AzureAD
- Import-Module AzureADPreview
+ Get-MgPolicyAuthorizationPolicy | select -ExpandProperty DefaultUserRolePermissions | ft PermissionGrantPoliciesAssigned
```
+   If `ManagePermissionGrantsForOwnedResource` is returned in `PermissionGrantPoliciesAssigned`, your group owner consent setting **might** be governed by the app consent policy.
-1. Connect to Azure AD PowerShell.
+ 1. Check if the policy is scoped to `group`
+ ```powershell
+    Get-MgPolicyPermissionGrantPolicy -PermissionGrantPolicyId "microsoft-all-application-permissions-for-group" | ft AdditionalProperties
+ ```
+   If `resourceScopeType` == `group`, your group owner consent setting **is** governed by the app consent policy.
- ```powershell
- Connect-AzureAD
- ```
+1. To disable group owner consent so that app consent policies take effect, update the collection so that the consent policies (`PermissionGrantPoliciesAssigned`) still include the current `ManagePermissionGrantsForSelf.*` policy and any other current `ManagePermissionGrantsForOwnedResource.*` policies that aren't applicable to groups. This way, you maintain your current configuration for user consent settings and other resource consent settings.
-1. Retrieve the current value for the **Consent Policy Settings** directory settings in your tenant. This requires checking if the directory settings for this feature have been created, and if not, using the values from the corresponding directory settings template.
+```powershell
+# only exclude policies that are scoped in group
+$body = @{
+ "permissionGrantPolicyIdsAssignedToDefaultUserRole" = @(
+ "managePermissionGrantsForSelf.{current-policy-for-user-consent}",
+ "managePermissionGrantsForOwnedResource.{other-policies-that-are-not-applicable-to-groups}"
+ )
+}
+Update-MgPolicyAuthorizationPolicy -AuthorizationPolicyId authorizationPolicy -BodyParameter $body
- ```powershell
- $consentSettingsTemplateId = "dffd5d46-495d-40a9-8e21-954ff55e198a" # Consent Policy Settings
- $settings = Get-AzureADDirectorySetting -All $true | Where-Object { $_.TemplateId -eq $consentSettingsTemplateId }
+```
- if (-not $settings) {
- $template = Get-AzureADDirectorySettingTemplate -Id $consentSettingsTemplateId
- $settings = $template.CreateDirectorySetting()
- }
+### Assign an app consent policy to group owners
- $enabledValue = $settings.Values | ? { $_.Name -eq "EnableGroupSpecificConsent" }
- $limitedToValue = $settings.Values | ? { $_.Name -eq "ConstrainGroupSpecificConsentToMembersOfGroupId" }
- ```
+To allow group owner consent subject to an app consent policy, choose which app consent policy should govern group owners' authorization to grant consent to apps. When you update the collection, make sure the consent policies (`PermissionGrantPoliciesAssigned`) still include the current `ManagePermissionGrantsForSelf.*` policy and any other current `ManagePermissionGrantsForOwnedResource.*` policies. This way, you maintain your current configuration for user consent settings and other resource consent settings.
-1. Understand the setting values. There are two settings values that define which users would be able to allow an app to access their group's data:
+```powershell
+$body = @{
+ "permissionGrantPolicyIdsAssignedToDefaultUserRole" = @(
+ "managePermissionGrantsForSelf.{current-policy-for-user-consent}",
+ "managePermissionGrantsForOwnedResource.{other-policies-that-are-not-applicable-to-groups}",
+ "managePermissionGrantsForOwnedResource.{app-consent-policy-id-for-group}" #new app consent policy for groups
+ )
+}
+Update-MgPolicyAuthorizationPolicy -AuthorizationPolicyId authorizationPolicy -BodyParameter $body
+```
- | Setting | Type | Description |
- | - | | |
- | _EnableGroupSpecificConsent_ | Boolean | Flag indicating if groups owners are allowed to grant group-specific permissions. |
- | _ConstrainGroupSpecificConsentToMembersOfGroupId_ | Guid | If _EnableGroupSpecificConsent_ is set to "True" and this value set to a group's object ID, members of the identified group will be authorized to grant group-specific permissions to the groups they own. |
+Replace `{app-consent-policy-id-for-group}` with the ID of the policy you want to apply. You can choose a [custom app consent policy](manage-group-owner-consent-policies.md#create-a-custom-group-owner-consent-policy) that you've created, or you can choose from the following built-in policies:
-1. Update settings values for the desired configuration:
+| ID | Description |
+|:|:|
+| microsoft-pre-approval-apps-for-group | **Allow group owner consent to pre-approved apps only**<br/> Allow group owners to consent only to apps preapproved by admins for the groups they own. |
+| microsoft-all-application-permissions-for-group | **Allow group owner consent to apps**<br/> This option allows all group owners to consent to any permission that doesn't require admin consent, for any application, for the groups they own. It includes apps that have been preapproved by the permission grant preapproval policy for group resource-specific consent. |
- ```powershell
- # Disable group-specific consent entirely
- $enabledValue.Value = "False"
- $limitedToValue.Value = ""
- ```
+For example, to enable group owner consent subject to the built-in policy `microsoft-all-application-permissions-for-group`, run the following commands:
- ```powershell
- # Enable group-specific consent for all users
- $enabledValue.Value = "True"
- $limitedToValue.Value = ""
+```powershell
+$body = @{
+ "permissionGrantPolicyIdsAssignedToDefaultUserRole" = @(
+ "managePermissionGrantsForSelf.{current-policy-for-user-consent}",
+ "managePermissionGrantsForOwnedResource.{all-policies-that-are-not-applicable-to-groups}",
+        "managePermissionGrantsForOwnedResource.microsoft-all-application-permissions-for-group" # policy that is scoped to group
+ )
+}
+Update-MgPolicyAuthorizationPolicy -AuthorizationPolicyId authorizationPolicy -BodyParameter $body
+```
+++
+Use the [Graph Explorer](https://developer.microsoft.com/graph/graph-explorer) to choose which app consent policy governs group owners' ability to consent to applications accessing your organization's data for the groups they own.
+
+### Disable group owner consent to use app consent policies
+
+1. Check if the `ManagePermissionGrantsForOwnedResource` policy is scoped to `group`:
+
+ 1. Retrieve the current value for the group owner consent setting
+ ```http
+ GET https://graph.microsoft.com/v1.0/policies/authorizationPolicy
```
+   If `ManagePermissionGrantsForOwnedResource` is returned in `permissionGrantPolicyIdsAssignedToDefaultUserRole`, your group owner consent setting might be governed by the app consent policy.
- ```powershell
- # Enable group-specific consent for users in a given group
- $enabledValue.Value = "True"
- $limitedToValue.Value = "{group-object-id}"
+   2. Check if the policy is scoped to `group`:
+ ```http
+    GET https://graph.microsoft.com/beta/policies/permissionGrantPolicies/microsoft-all-application-permissions-for-group
```
+   If `resourceScopeType` == `group`, your group owner consent setting is governed by the app consent policy.
+
+2. To disable group owner consent so that app consent policies take effect, ensure that the consent policies (`PermissionGrantPoliciesAssigned`) include the current `ManagePermissionGrantsForSelf.*` policy and any other current `ManagePermissionGrantsForOwnedResource.*` policies that aren't applicable to groups. This way, you maintain your current configuration for user consent settings and other resource consent settings.
+ ```http
+ PATCH https://graph.microsoft.com/beta/policies/authorizationPolicy
+ {
+ "defaultUserRolePermissions": {
+ "permissionGrantPoliciesAssigned": [
+ "managePermissionGrantsForSelf.{current-policy-for-user-consent}",
+ "managePermissionGrantsForOwnedResource.{other-policies-that-are-not-applicable-to-groups}"
+ ]
+ }
+ }
+ ```
-1. Save your settings.
+### Assign an app consent policy to group owners
- ```powershell
- if ($settings.Id) {
- # Update an existing directory settings
- Set-AzureADDirectorySetting -Id $settings.Id -DirectorySetting $settings
- } else {
- # Create a new directory settings to override the default setting
- New-AzureADDirectorySetting -DirectorySetting $settings
- }
- ```
+To allow group owner consent subject to an app consent policy, choose which app consent policy should govern group owners' authorization to grant consent to apps. When you update the collection, make sure the consent policies (`PermissionGrantPoliciesAssigned`) still include the current `ManagePermissionGrantsForSelf.*` policy and any other current `ManagePermissionGrantsForOwnedResource.*` policies. This way, you maintain your current configuration for user consent settings and other resource consent settings.
-
+```http
+PATCH https://graph.microsoft.com/v1.0/policies/authorizationPolicy
-> [!NOTE]
-> "User can consent to apps accessing company data on their behalf" setting, when turned off, does not disable the "Users can consent to apps accessing company data for groups they own" option
+{
+    "defaultUserRolePermissions": {
+        "permissionGrantPoliciesAssigned": [
+            "managePermissionGrantsForSelf.{current-policy-for-user-consent}",
+            "managePermissionGrantsForOwnedResource.{other-policies-that-are-not-applicable-to-groups}",
+            "managePermissionGrantsForOwnedResource.{app-consent-policy-id-for-group}"
+        ]
+    }
+}
+```
-## Next steps
+Replace `{app-consent-policy-id-for-group}` with the ID of the policy you want to apply for groups. You can choose a [custom app consent policy for groups](manage-group-owner-consent-policies.md) that you've created, or you can choose from the following built-in policies:
+
+| ID | Description |
+|:|:|
+| microsoft-pre-approval-apps-for-group | **Allow group owner consent to pre-approved apps only**<br/> Allow group owners to consent only to apps preapproved by admins for the groups they own. |
+| microsoft-all-application-permissions-for-group | **Allow group owner consent to apps**<br/> This option allows all group owners to consent to any permission that doesn't require admin consent, for any application, for the groups they own. It includes apps that have been preapproved by the permission grant preapproval policy for group resource-specific consent. |
-To learn more:
+For example, to enable group owner consent subject to the built-in policy `microsoft-pre-approval-apps-for-group`, use the following PATCH command:
+
+```http
+PATCH https://graph.microsoft.com/v1.0/policies/authorizationPolicy
+
+{
+ "defaultUserRolePermissions": {
+ "permissionGrantPoliciesAssigned": [
+ "managePermissionGrantsForSelf.{current-policy-for-user-consent}",
+ "managePermissionGrantsForOwnedResource.{other-policies-that-are-not-applicable-to-groups}",
+ "managePermissionGrantsForOwnedResource.microsoft-pre-approval-apps-for-group"
+ ]
+ }
+}
+```
+
+## Next steps
-* [Configure user consent settings](configure-user-consent.md)
-* [Configure the admin consent workflow](configure-admin-consent-workflow.md)
-* [Learn how to manage consent to applications and evaluate consent requests](manage-consent-requests.md)
-* [Grant tenant-wide admin consent to an application](grant-admin-consent.md)
-* [Permissions and consent in the Microsoft identity platform](../develop/permissions-consent-overview.md)
+- [Manage group owner consent policies](manage-group-owner-consent-policies.md)
To get help or find answers to your questions:
-* [Azure AD on Microsoft Q&A](/answers/topics/azure-active-directory.html)
+- [Azure AD on Microsoft Q&A](/answers/topics/azure-active-directory.html)
active-directory Configure User Consent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/configure-user-consent.md
Previously updated : 04/19/2023 Last updated : 08/25/2023
To configure user consent, you need:
To configure user consent settings through the Azure portal:
-1. Sign in to the [Azure portal](https://portal.azure.com) as a [Global Administrator](../roles/permissions-reference.md#global-administrator).
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as a [Global Administrator](../roles/permissions-reference.md#global-administrator).
-1. Select **Azure Active Directory** > **Enterprise applications** > **Consent and permissions** > **User consent settings**.
+1. Browse to **Identity** > **Applications** > **Enterprise applications** > **Consent and permissions** > **User consent settings**.
1. Under **User consent for applications**, select which consent setting you want to configure for all users.
To choose which app consent policy governs user consent for applications, you ca
### Connect to Microsoft Graph PowerShell
-Connect to Microsoft Graph PowerShell using the least-privilege permission needed. For reading the current user consent settings, use *Policy.Read.All*. For reading and changing the user consent settings, use *Policy.ReadWrite.Authorization*.
+Connect to Microsoft Graph PowerShell using the least-privilege permission needed. For reading the current user consent settings, use *Policy.Read.All*. For reading and changing the user consent settings, use *Policy.ReadWrite.Authorization*. You need to sign in as a [Global Administrator](../roles/permissions-reference.md#global-administrator).
```powershell Connect-MgGraph -Scopes "Policy.ReadWrite.Authorization"
Connect-MgGraph -Scopes "Policy.ReadWrite.Authorization"
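# Optional, a hedged sketch: list the app consent policies currently assigned
# to the default user role before you change anything
Get-MgPolicyAuthorizationPolicy |
    Select-Object -ExpandProperty DefaultUserRolePermissions |
    Format-Table PermissionGrantPoliciesAssigned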
### Disable user consent
-To disable user consent, set the consent policies that govern user consent to empty:
+To disable user consent, remove the `ManagePermissionGrantsForSelf.*` policy from the collection, and make sure the consent policies (`PermissionGrantPoliciesAssigned`) still include any other current `ManagePermissionGrantsForOwnedResource.*` policies. This way, you maintain your current configuration for other resource consent settings.
```powershell
-Update-MgPolicyAuthorizationPolicy -DefaultUserRolePermissions @{
- "PermissionGrantPoliciesAssigned" = @() }
+# only exclude user consent policy
+$body = @{
+ "permissionGrantPolicyIdsAssignedToDefaultUserRole" = @(
+ "managePermissionGrantsForOwnedResource.{other-current-policies}"
+ )
+}
+Update-MgPolicyAuthorizationPolicy -AuthorizationPolicyId authorizationPolicy -BodyParameter $body
+ ``` ### Allow user consent subject to an app consent policy
-To allow user consent, choose which app consent policy should govern users' authorization to grant consent to apps:
+To allow user consent, choose which app consent policy should govern users' authorization to grant consent to apps. When you update the collection, make sure the consent policies (`PermissionGrantPoliciesAssigned`) still include any other current `ManagePermissionGrantsForOwnedResource.*` policies. This way, you maintain your current configuration for user consent settings and other resource consent settings.
```powershell
-Update-MgPolicyAuthorizationPolicy -DefaultUserRolePermissions @{
- "PermissionGrantPoliciesAssigned" = @("managePermissionGrantsForSelf.{consent-policy-id}") }
+$body = @{
+ "permissionGrantPolicyIdsAssignedToDefaultUserRole" = @(
+ "managePermissionGrantsForSelf.{consent-policy-id}",
+ "managePermissionGrantsForOwnedResource.{other-current-policies}"
+ )
+}
+Update-MgPolicyAuthorizationPolicy -AuthorizationPolicyId authorizationPolicy -BodyParameter $body
``` Replace `{consent-policy-id}` with the ID of the policy you want to apply. You can choose a [custom app consent policy](manage-app-consent-policies.md#create-a-custom-app-consent-policy) that you've created, or you can choose from the following built-in policies:
Replace `{consent-policy-id}` with the ID of the policy you want to apply. You c
For example, to enable user consent subject to the built-in policy `microsoft-user-default-low`, run the following commands: ```powershell
-Update-MgPolicyAuthorizationPolicy -DefaultUserRolePermissions @{
- "PermissionGrantPoliciesAssigned" = @("managePermissionGrantsForSelf.microsoft-user-default-low") }
+$body = @{
+ "permissionGrantPolicyIdsAssignedToDefaultUserRole" = @(
+        "managePermissionGrantsForSelf.microsoft-user-default-low",
+        "managePermissionGrantsForOwnedResource.{other-current-policies}"
+    )
+}
+Update-MgPolicyAuthorizationPolicy -AuthorizationPolicyId authorizationPolicy -BodyParameter $body
``` :::zone-end
Update-MgPolicyAuthorizationPolicy -DefaultUserRolePermissions @{
Use the [Graph Explorer](https://developer.microsoft.com/graph/graph-explorer) to choose which app consent policy governs user consent for applications.
-To disable user consent, set the consent policies that govern user consent to empty:
+To disable user consent, make sure that the consent policies (`PermissionGrantPoliciesAssigned`) still include any other current `ManagePermissionGrantsForOwnedResource.*` policies when you update the collection. This way, you maintain your current configuration for other resource consent settings.
```http PATCH https://graph.microsoft.com/v1.0/policies/authorizationPolicy { "defaultUserRolePermissions": {
- "permissionGrantPoliciesAssigned": []
- }
+ "permissionGrantPoliciesAssigned": [
+ "managePermissionGrantsForOwnedResource.{other-current-policies}"
+ ]
+ }
} ``` ### Allow user consent subject to an app consent policy
-To allow user consent, choose which app consent policy should govern users' authorization to grant consent to apps:
+To allow user consent, choose which app consent policy should govern users' authorization to grant consent to apps. Ensure that the consent policies (`PermissionGrantPoliciesAssigned`) include other current `ManagePermissionGrantsForOwnedResource.*` policies if any while updating the collection. This way, you can maintain your current configuration for user consent settings and other resource consent settings.
```http PATCH https://graph.microsoft.com/v1.0/policies/authorizationPolicy {
- "defaultUserRolePermissions": {
- "permissionGrantPoliciesAssigned": ["ManagePermissionGrantsForSelf.microsoft-user-default-legacy"]
+  "defaultUserRolePermissions": {
+    "permissionGrantPoliciesAssigned": [
+      "managePermissionGrantsForSelf.{consent-policy-id}",
+      "managePermissionGrantsForOwnedResource.{other-current-policies}"
+    ]
} } ```
PATCH https://graph.microsoft.com/v1.0/policies/authorizationPolicy
{ "defaultUserRolePermissions": { "permissionGrantPoliciesAssigned": [
- "managePermissionGrantsForSelf.microsoft-user-default-low"
+ "managePermissionGrantsForSelf.microsoft-user-default-low",
+ "managePermissionGrantsForOwnedResource.{other-current-policies}"
] } }
PATCH https://graph.microsoft.com/v1.0/policies/authorizationPolicy
> [!TIP] > To allow users to request an administrator's review and approval of an application that the user isn't allowed to consent to, [enable the admin consent workflow](configure-admin-consent-workflow.md). For example, you might do this when user consent has been disabled or when an application is requesting permissions that the user isn't allowed to grant.+ ## Next steps - [Manage app consent policies](manage-app-consent-policies.md)-- [Configure the admin consent workflow](configure-admin-consent-workflow.md)
+- [Configure the admin consent workflow](configure-admin-consent-workflow.md)
active-directory Custom Security Attributes Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/custom-security-attributes-apps.md
> [!IMPORTANT] > Custom security attributes are currently in PREVIEW.
-> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+> For more information about previews, see [Universal License Terms For Online Services](https://www.microsoft.com/licensing/terms/product/ForOnlineServices/all).
[Custom security attributes](../fundamentals/custom-security-attributes-overview.md) in Azure Active Directory (Azure AD) are business-specific attributes (key-value pairs) that you can define and assign to Azure AD objects. For example, you can assign a custom security attribute to filter your applications or to help determine who gets access. This article describes how to assign, update, list, or remove custom security attributes for Azure AD enterprise applications.
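As a rough sketch (not taken from this article's own steps), an assignment through Microsoft Graph PowerShell might look like the following; the `Engineering` attribute set, the `ProjectName` attribute, and the value `Baker` are illustrative placeholders:

```powershell
# Assign a custom security attribute value to a service principal
# (assumes the CustomSecAttributeAssignment.ReadWrite.All scope and that
# the attribute set and attribute already exist in the tenant)
Connect-MgGraph -Scopes "CustomSecAttributeAssignment.ReadWrite.All"
$servicePrincipalId = "{service-principal-object-id}"  # placeholder
$params = @{
    customSecurityAttributes = @{
        Engineering = @{
            "@odata.type" = "#microsoft.graph.customSecurityAttributeValue"
            ProjectName   = "Baker"
        }
    }
}
Update-MgServicePrincipal -ServicePrincipalId $servicePrincipalId -BodyParameter $params
```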
active-directory Debug Saml Sso Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/debug-saml-sso-issues.md
To download and install the My Apps Secure Sign-in Extension, use one of the fol
To test SAML-based single sign-on between Azure AD and a target application:
-1. Sign in to the [Azure portal](https://portal.azure.com) as a global administrator or other administrator that is authorized to manage applications.
-1. In the left navigation pane, select **Azure Active Directory**, and then select **Enterprise applications**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator).
+1. Browse to **Identity** > **Applications** > **Enterprise applications** > **All applications**.
1. From the list of enterprise applications, select the application for which you want to test single sign-on, and then from the options on the left, select **Single sign-on**. 1. To open the SAML-based single sign-on testing experience, go to **Test single sign-on** (step 5). If the **Test** button is greyed out, you need to fill out and save the required attributes first in the **Basic SAML Configuration** section. 1. In the **Test single sign-on** page, use your corporate credentials to sign in to the target application. You can sign in as the current user or as a different user. If you sign in as a different user, a prompt asks you to authenticate.
active-directory Delete Application Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/delete-application-portal.md
Last updated 06/21/2023
zone_pivot_groups: enterprise-apps-all-+ #Customer intent: As an administrator of an Azure AD tenant, I want to delete an enterprise application.
To delete an enterprise application, you need:
:::zone pivot="portal"
-1. Sign in to the [Azure portal](https://portal.azure.com) and sign in using one of the roles listed in the prerequisites.
-1. In the left menu, select **Enterprise applications**. The **All applications** pane opens and displays a list of the applications in your Azure AD tenant. Search for and select the application that you want to delete. In this article, we use the **Azure AD SAML Toolkit 1** as an example.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator).
+1. Browse to **Identity** > **Applications** > **Enterprise applications** > **All applications**.
+1. Enter the name of the existing application in the search box, and then select the application from the search results. In this article, we use the **Azure AD SAML Toolkit 1** as an example.
1. In the **Manage** section of the left menu, select **Properties**. 1. At the top of the **Properties** pane, select **Delete**, and then select **Yes** to confirm you want to delete the application from your Azure AD tenant.
To delete an enterprise application, you need:
Import-Module AzureAD ```
-1. Connect to Azure AD PowerShell:
+1. Connect to Azure AD PowerShell and sign in as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator):
```powershell Connect-AzureAD
To delete an enterprise application, you need:
:::zone pivot="ms-powershell"
-1. Connect to Microsoft Graph PowerShell:
+1. Connect to Microsoft Graph PowerShell and sign in as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator):
```powershell Connect-MgGraph -Scopes 'Application.Read.All'
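# A hedged sketch of the delete flow (deletion needs the broader
# 'Application.ReadWrite.All' scope): find the service principal by its
# display name, then remove it. The app name reuses this article's sample.
$sp = Get-MgServicePrincipal -Filter "displayName eq 'Azure AD SAML Toolkit 1'"
Remove-MgServicePrincipal -ServicePrincipalId $sp.Id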
To delete an enterprise application, you need:
:::zone pivot="ms-graph"
-Delete an enterprise application using [Graph Explorer](https://developer.microsoft.com/graph/graph-explorer).
+To delete an enterprise application using [Graph Explorer](https://developer.microsoft.com/graph/graph-explorer), you need to sign in as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator).
+ 1. To get the list of service principals in your tenant, run the following query. # [HTTP](#tab/http)
active-directory Disable User Sign In Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/disable-user-sign-in-portal.md
Last updated 2/23/2023 -+ zone_pivot_groups: enterprise-apps-all- #customer intent: As an admin, I want to disable user sign-in for an application so that no user can sign in to it in Azure Active Directory. # Disable user sign-in for an application
In this article, you learn how to prevent users from signing in to an applicatio
To disable user sign-in, you need: -- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).-- One of the following roles: An administrator, or owner of the service principal.
+- An Azure AD user account. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal.
## Disable user sign-in
To disable user sign-in, you need:
:::zone pivot="portal"
-1. Sign in to the [Azure portal](https://portal.azure.com) as the global administrator for your directory.
-1. Search for and select **Azure Active Directory**.
-1. Select **Enterprise applications**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator).
+1. Browse to **Identity** > **Applications** > **Enterprise applications** > **All applications**.
1. Search for the application you want to disable a user from signing in, and select the application. 1. Select **Properties**. 1. Select **No** for **Enabled for users to sign-in?**.
To disable user sign-in, you need:
You may know the AppId of an app that doesn't appear on the Enterprise apps list. For example, you may have deleted the app or the service principal hasn't yet been created due to the app being preauthorized by Microsoft. You can manually create the service principal for the app and then disable it by using the following Microsoft Graph PowerShell cmdlet.
-Ensure you've installed the AzureAD module (use the command `Install-Module -Name AzureAD`). In case you're prompted to install a NuGet module or the new Azure AD V2 PowerShell module, type Y and press ENTER.
+Ensure you've installed the AzureAD module (use the command `Install-Module -Name AzureAD`). In case you're prompted to install a NuGet module or the new Azure AD V2 PowerShell module, type Y and press ENTER. You need to sign in as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator).
```PowerShell # Connect to Azure AD PowerShell
-Connect-AzureAD -Scopes "Application.ReadWrite.All"
+Connect-AzureAD
# The AppId of the app to be disabled $appId = "{AppId}"
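# A hedged sketch of the elided lookup step: fetch the service principal
# for this AppId so the if/else below can disable it or create it disabled
$servicePrincipal = Get-AzureADServicePrincipal -Filter "appId eq '$appId'"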
if ($servicePrincipal) {
You may know the AppId of an app that doesn't appear on the Enterprise apps list. For example, you may have deleted the app or the service principal hasn't yet been created due to the app being preauthorized by Microsoft. You can manually create the service principal for the app and then disable it by using the following Microsoft Graph PowerShell cmdlet.
-Ensure you've installed the Microsoft Graph module (use the command `Install-Module Microsoft.Graph`).
+Ensure you've installed the Microsoft Graph module (use the command `Install-Module Microsoft.Graph`). You need to sign in as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator).
```powershell # Connect to Microsoft Graph PowerShell
else { $servicePrincipal = New-MgServicePrincipal -AppId $appId -AccountEnabl
You may know the AppId of an app that doesn't appear on the Enterprise apps list. For example, you may have deleted the app or the service principal hasn't yet been created due to the app being preauthorized by Microsoft. You can manually create the service principal for the app and then disable it by using the following Microsoft Graph PowerShell cmdlet.
-To disable sign-in to an application, sign in to [Graph Explorer](https://developer.microsoft.com/graph/graph-explorer) with one of the roles listed in the prerequisite section.
+To disable sign-in to an application, sign in to [Graph Explorer](https://developer.microsoft.com/graph/graph-explorer) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator).
You need to consent to the `Application.ReadWrite.All` permission.
active-directory Grant Admin Consent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/grant-admin-consent.md
Granting tenant-wide admin consent requires you to sign in as a user that is aut
To grant tenant-wide admin consent, you need: - An Azure AD user account with one of the following roles:+ - Global Administrator or Privileged Role Administrator, for granting consent for apps requesting any permission, for any API. - Cloud Application Administrator or Application Administrator, for granting consent for apps requesting any permission for any API, _except_ Azure AD Graph or Microsoft Graph app roles (application permissions). - A custom directory role that includes the [permission to grant permissions to applications](../roles/custom-consent-permissions.md), for the permissions required by the application.
You can grant tenant-wide admin consent through the **Enterprise applications**
To grant tenant-wide admin consent to an app listed in **Enterprise applications**:
-1. Sign in to the [Azure portal](https://portal.azure.com) with one of the roles listed in the prerequisites section.
-1. Select **Azure Active Directory**, and then select **Enterprise applications**.
-1. Select the application to which you want to grant tenant-wide admin consent, and then select **Permissions**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator).
+1. Browse to **Identity** > **Applications** > **Enterprise applications** > **All applications**.
+1. Enter the name of the existing application in the search box, and then select the application from the search results.
+1. Select **Permissions** under **Security**.
:::image type="content" source="media/grant-tenant-wide-admin-consent/grant-tenant-wide-admin-consent.png" alt-text="Screenshot shows how to grant tenant-wide admin consent."::: 1. Carefully review the permissions that the application requires. If you agree with the permissions the application requires, select **Grant admin consent**.
For applications your organization has developed, or which are registered direct
To grant tenant-wide admin consent from **App registrations**:
-1. Sign in to the [Azure portal](https://portal.azure.com) with one of the roles listed in the prerequisites section.
-1. Select **Azure Active Directory**, and then select **App registrations**.
-1. Select the application to which you want to grant tenant-wide admin consent.
-1. Select **API permissions**.
+1. In the [Microsoft Entra admin center](https://entra.microsoft.com), browse to **Identity** > **Applications** > **App registrations** > **All applications**.
+1. Enter the name of the existing application in the search box, and then select the application from the search results.
+1. Select **API permissions** under **Manage**.
1. Carefully review the permissions that the application requires. If you agree, select **Grant admin consent**. ## Construct the URL for granting tenant-wide admin consent
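The admin consent endpoint follows a well-known convention; as a minimal sketch (the tenant and client ID values are placeholders you must replace):

```powershell
# Build the tenant-wide admin consent URL for an app
$tenant   = "{organization}"  # tenant ID or verified domain name (placeholder)
$clientId = "{client-id}"     # application (client) ID of the app (placeholder)
"https://login.microsoftonline.com/$tenant/adminconsent?client_id=$clientId"
```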
In the example, the resource enterprise application is Microsoft Graph of object
## Grant admin consent for delegated permissions
-1. Connect to Microsoft Graph PowerShell:
+1. Connect to Microsoft Graph PowerShell and sign in as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator).
```powershell Connect-MgGraph -Scopes "Application.ReadWrite.All", "DelegatedPermissionGrant.ReadWrite.All"
New-MgOauth2PermissionGrant -BodyParameter $params |
Format-List Id, ClientId, ConsentType, ResourceId, Scope ```
-1. Confirm that you've granted tenant wide admin consent by running the following request.
+4. Confirm that you've granted tenant-wide admin consent by running the following request.
```powershell
- Get-MgOauth2PermissionGrant-Filter "clientId eq 'b0d9b9e3-0ecf-4bfd-8dab-9273dd055a94' consentType eq 'AllPrincipals'"
+    Get-MgOauth2PermissionGrant -Filter "clientId eq 'b0d9b9e3-0ecf-4bfd-8dab-9273dd055a94' and consentType eq 'AllPrincipals'"
``` ## Grant admin consent for application permissions In the following example, you grant the Microsoft Graph enterprise application (the principal of ID `b0d9b9e3-0ecf-4bfd-8dab-9273dd055a94`) an app role (application permission) of ID `df021288-bdef-4463-88db-98f22de89214` that's exposed by a resource enterprise application of ID `7ea9e944-71ce-443d-811c-71e8047b557a`.
-1. Connect to Microsoft Graph PowerShell:
+1. Connect to Microsoft Graph PowerShell and sign in as a [Global Administrator](../roles/permissions-reference.md#global-administrator).
```powershell Connect-MgGraph -Scopes "Application.ReadWrite.All", "AppRoleAssignment.ReadWrite.All"
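# A hedged sketch of the grant itself, reusing the IDs from this example:
New-MgServicePrincipalAppRoleAssignment -ServicePrincipalId "b0d9b9e3-0ecf-4bfd-8dab-9273dd055a94" `
    -PrincipalId "b0d9b9e3-0ecf-4bfd-8dab-9273dd055a94" `
    -ResourceId "7ea9e944-71ce-443d-811c-71e8047b557a" `
    -AppRoleId "df021288-bdef-4463-88db-98f22de89214"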
Use [Graph Explorer](https://developer.microsoft.com/graph/graph-explorer) to gr
## Grant admin consent for delegated permissions
-In the following example, you'll grant delegated permissions defined by a resource enterprise application to a client enterprise application on behalf of all users.
+In the following example, you'll grant delegated permissions defined by a resource enterprise application to a client enterprise application on behalf of all users.
+
+You need to sign in as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator).
In the example, the resource enterprise application is Microsoft Graph of object ID `7ea9e944-71ce-443d-811c-71e8047b557a`. The Microsoft Graph defines the delegated permissions, `User.Read.All` and `Group.Read.All`. The consentType is `AllPrincipals`, indicating that you're consenting on behalf of all users in the tenant. The object ID of the client enterprise application is `b0d9b9e3-0ecf-4bfd-8dab-9273dd055a94`.
In the example, the resource enterprise application is Microsoft Graph of object
``` ## Grant admin consent for application permissions
-In the following example, you grant the Microsoft Graph enterprise application (the principal of ID `b0d9b9e3-0ecf-4bfd-8dab-9273dd055a94`) an app role (application permission) of ID `df021288-bdef-4463-88db-98f22de89214` that's exposed by a resource enterprise application of ID `7ea9e944-71ce-443d-811c-71e8047b557a`.
+In the following example, you grant the Microsoft Graph enterprise application (the principal of ID `b0d9b9e3-0ecf-4bfd-8dab-9273dd055a94`) an app role (application permission) of ID `df021288-bdef-4463-88db-98f22de89214` that's exposed by a resource enterprise application of ID `7ea9e944-71ce-443d-811c-71e8047b557a`.
+
+You need to sign in as a [Global Administrator](../roles/permissions-reference.md#global-administrator).
1. Retrieve the app roles defined by Microsoft Graph in your tenant. Identify the app role that you'll grant the client enterprise application. In this example, the app role ID is `df021288-bdef-4463-88db-98f22de89214`
active-directory Hide Application From User Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/hide-application-from-user-portal.md
zone_pivot_groups: enterprise-apps-all--+ #customer intent: As an admin, I want to hide an enterprise application from user's experience so that it is not listed in the user's Active directory access portals or Microsoft 365 launchers
active-directory Home Realm Discovery Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/home-realm-discovery-policy.md
Last updated 01/02/2023 --+ # Home Realm Discovery for an application
active-directory Howto Enforce Signed Saml Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/howto-enforce-signed-saml-authentication.md
Previously updated : 06/29/2022 Last updated : 07/18/2023
SAML Request Signature Verification is a functionality that validates the signature of signed authentication requests. An app admin can enable and disable the enforcement of signed requests and upload the public keys that should be used for the validation.
-If enabled Azure Active Directory will validate the requests against the public keys configured. There are some scenarios where the authentication requests can fail:
+If enabled, Azure Active Directory validates the requests against the configured public keys. There are some scenarios where the authentication requests can fail:
- Protocol not allowed for signed requests. Only SAML protocol is supported. - Request not signed, but verification is enabled. -- No verification certificate configured for SAML request signature verification.
+- No verification certificate configured for SAML request signature verification. For more information about the certificate requirements, see [Certificate signing options](certificate-signing-options.md).
- Signature verification failed. - Key identifier in request is missing and two most recently added certificates don't match with the request signature. - Request signed but algorithm missing. -- No certificate matching with provided key identifier.
+- No certificate matching with provided key identifier.
- Signature algorithm not allowed. Only RSA-SHA256 is supported. > [!NOTE]
If enabled Azure Active Directory will validate the requests against the public
> Enabling `Require Verification certificates` will not allow IDP-initiated authentication requests (like SSO testing feature, MyApps or M365 app launcher) to be validated as the IDP would not possess the same private keys as the registered application.
+## Prerequisites
+
+To configure SAML request signature verification, you need:
+
+- An Azure AD user account. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal.
+ [!INCLUDE [portal updates](../includes/portal-update.md)]
-## To configure SAML Request Signature Verification in the Azure portal
+## Configure SAML Request Signature Verification
-1. Inside the Azure portal, navigate to **Azure Active Directory** from the Search bar or Azure Services.
-
- ![Screenshot of Azure Active Directory inside the Azure portal.](./media/howto-enforce-signed-saml-authentication/samlsignaturevalidation1.png)
-
-2. Navigate to **Enterprise applications** from the left menu.
-
- ![Screenshot of Enterprise Application option inside the Azure portal Navigation.](./media/howto-enforce-signed-saml-authentication/samlsignaturevalidation2.png)
-
-3. Select the application you wish to apply the changes.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator).
+1. Browse to **Identity** > **Applications** > **Enterprise applications** > **All applications**.
+1. Enter the name of the existing application in the search box, and then select the application from the search results.
-4. Navigate to **Single sign-on.**
+1. Navigate to **Single sign-on**.
-5. In the **Single sign-on** screen, there's a new subsection called **Verification certificates** under **SAML Certificates.**
+1. In the **Single sign-on** screen, scroll to the subsection called **Verification certificates** under **SAML Certificates.**
- ![Screenshot of verification certificates under SAML Certificates on the Enterprise Application page in the Azure portal.](./media/howto-enforce-signed-saml-authentication/samlsignaturevalidation3.png)
+ ![Screenshot of verification certificates under SAML Certificates on the Enterprise Application page.](./media/howto-enforce-signed-saml-authentication/samlsignaturevalidation3.png)
-6. Click on **Edit.**
+1. Select **Edit**.
-7. In the new blade, you'll be able to enable the verification of signed requests and opt-in for weak algorithm verification in case your application still uses RSA-SHA1 to sign the authentication requests.
+1. In the new blade, you can enable the verification of signed requests and opt in to weak algorithm verification if your application still uses RSA-SHA1 to sign the authentication requests.
-8. To enable the verification of signed requests, click **Enable verification certificates** and upload a verification public key that matches with the private key used to sign the request.
-
- ![Screenshot of enable verification certificates in Enterprise Application within the Azure portal.](./media/howto-enforce-signed-saml-authentication/samlsignaturevalidation4.png)
+1. To enable the verification of signed requests, select **Enable verification certificates** and upload a verification public key that matches with the private key used to sign the request.
- ![Screenshot of upload certificates in Enterprise Application within the Azure portal.](./media/howto-enforce-signed-saml-authentication/samlsignaturevalidation5.png)
-
- ![Screenshot of certificate upload success in Enterprise Application within the Azure portal.](./media/howto-enforce-signed-saml-authentication/samlsignaturevalidation6.png)
+ ![Screenshot of enable verification certificates in Enterprise Applications page.](./media/howto-enforce-signed-saml-authentication/samlsignaturevalidation4.png)
-9. Once you have your verification certificate uploaded, click **Save.**
-
- ![Screenshot of certificate verification save in Enterprise Application within the Azure portal.](./media/howto-enforce-signed-saml-authentication/samlsignaturevalidation7.png)
-
- ![Screenshot of certificate update success in Enterprise Application within the Azure portal.](./media/howto-enforce-signed-saml-authentication/samlsignaturevalidation8.png)
+1. Once you have your verification certificate uploaded, select **Save**.
-10. When the verification of signed requests is enabled, the test experience is disabled as the requests requires to be signed by the service provider.
+1. When the verification of signed requests is enabled, the test experience is disabled because the requests must be signed by the service provider.
- ![Screenshot of testing disabled warning when signed requests enabled in Enterprise Application within the Azure portal.](./media/howto-enforce-signed-saml-authentication/samlsignaturevalidation9.png)
+ ![Screenshot of testing disabled warning when signed requests enabled in Enterprise Application page.](./media/howto-enforce-signed-saml-authentication/samlsignaturevalidation9.png)
-11. If you want to see the current configuration of an enterprise application, you can navigate to the **Single Sign-on** screen and see the summary of your configuration under **SAML Certificates**. There you'll be able to see if the verification of signed requests is enabled and the count of Active and Expired verification certificates.
+1. If you want to see the current configuration of an enterprise application, you can navigate to the **Single Sign-on** screen and see the summary of your configuration under **SAML Certificates**. There you can see whether the verification of signed requests is enabled and the count of active and expired verification certificates.
- ![Screenshot of enterprise application configuration in single sign-on screen within the Azure portal.](./media/howto-enforce-signed-saml-authentication/samlsignaturevalidation10.png)
+ ![Screenshot of enterprise application configuration in single sign-on screen.](./media/howto-enforce-signed-saml-authentication/samlsignaturevalidation10.png)
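If you prefer to check this configuration from a script, the certificates can also be read programmatically. The following is a minimal Microsoft Graph PowerShell sketch; it assumes the verification certificates surface as `keyCredentials` with usage `Verify` on the application's service principal, and the display name is a placeholder.

```powershell
Connect-MgGraph -Scopes "Application.Read.All"

# Look up the enterprise application's service principal by display name (placeholder)
$sp = Get-MgServicePrincipal -Filter "displayName eq 'Contoso SAML App'"

# List certificates; verification certificates are expected to carry the "Verify" usage
$sp.KeyCredentials |
    Where-Object { $_.Usage -eq "Verify" } |
    Select-Object DisplayName, StartDateTime, EndDateTime
```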
## Next steps
-* Find out [How Azure AD uses the SAML protocol](../develop/saml-protocol-reference.md)
-* Learn the format, security characteristics, and contents of [SAML tokens in Azure AD](../develop/reference-saml-tokens.md)
+- Find out [How Azure AD uses the SAML protocol](../develop/saml-protocol-reference.md)
+- Learn the format, security characteristics, and contents of [SAML tokens in Azure AD](../develop/reference-saml-tokens.md)
active-directory Howto Saml Token Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/howto-saml-token-encryption.md
Last updated 06/15/2023
-+ # Configure Azure Active Directory SAML token encryption
To configure token encryption, you need to upload an X.509 certificate file that
Azure AD uses AES-256 to encrypt the SAML assertion data.
+## Prerequisites
+
+To configure SAML token encryption, you need:
+
+- An Azure AD user account. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal.
+ [!INCLUDE [portal updates](../includes/portal-update.md)]

## Configure enterprise application SAML token encryption
-This section describes how to configure enterprise application's SAML token encryption. Applications that have been set up from the **Enterprise applications** blade in the Azure portal, either from the Application Gallery or a Non-Gallery app. For applications registered through the **App registrations** experience, follow the [Configure registered application SAML token encryption](#configure-registered-application-saml-token-encryption) guidance.
+This section describes how to configure an enterprise application's SAML token encryption. This applies to applications that have been set up from the **Enterprise applications** blade in the Microsoft Entra admin center, either from the Application Gallery or as a Non-Gallery app. For applications registered through the **App registrations** experience, follow the [Configure registered application SAML token encryption](#configure-registered-application-saml-token-encryption) guidance.
To configure enterprise application's SAML token encryption, follow these steps:
To configure enterprise application's SAML token encryption, follow these steps:
1. Add the certificate to the application configuration in Azure AD.
-### To configure token encryption in the Azure portal
-
-You can add the public cert to your application configuration within the Azure portal.
+### Configure token encryption in the Microsoft Entra admin center
-1. Sign in to the [Azure portal](https://portal.azure.com).
-
-1. Search for and select the **Azure Active Directory**.
-
-1. Select **Enterprise applications** blade and then select the application that you wish to configure token encryption for.
+You can add the public cert to your application configuration within the Microsoft Entra admin center.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator).
+1. Browse to **Identity** > **Applications** > **Enterprise applications** > **All applications**.
+1. Enter the name of the existing application in the search box, and then select the application from the search results.
1. On the application's page, select **Token encryption**.
- ![Screenshot shows how to select the Token encryption option in the Azure portal.](./media/howto-saml-token-encryption/token-encryption-option-small.png)
+ ![Screenshot shows how to select the Token encryption option in the Microsoft Entra admin center.](./media/howto-saml-token-encryption/token-encryption-option-small.png)
> [!NOTE]
- > The **Token encryption** option is only available for SAML applications that have been set up from the **Enterprise applications** blade in the Azure portal, either from the Application Gallery or a Non-Gallery app. For other applications, this menu option is disabled.
+ > The **Token encryption** option is only available for SAML applications that have been set up from the **Enterprise applications** blade in the Microsoft Entra admin center, either from the Application Gallery or a Non-Gallery app. For other applications, this menu option is disabled.
1. On the **Token encryption** page, select **Import Certificate** to import the .cer file that contains your public X.509 certificate.
- ![Screenshot shows how to import a certificate file using Azure portal.](./media/howto-saml-token-encryption/import-certificate-small.png)
+ ![Screenshot shows how to import a certificate file using Microsoft Entra admin center.](./media/howto-saml-token-encryption/import-certificate-small.png)
1. Once the certificate is imported, and the private key is configured for use on the application side, activate encryption by selecting the **...** next to the thumbprint status, and then select **Activate token encryption** from the options in the dropdown menu.
You can add the public cert to your application configuration within the Azure p
1. Confirm that the SAML assertions emitted for the application are encrypted.
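If you don't yet have a certificate to upload, a self-signed certificate is enough for testing. Here's a minimal Windows PowerShell sketch; the subject name and output path are placeholders, and production scenarios would normally use a certificate issued by your PKI.

```powershell
# Create a self-signed certificate in the current user's store (testing only)
$cert = New-SelfSignedCertificate -Subject "CN=contoso-token-encryption" `
    -CertStoreLocation "Cert:\CurrentUser\My" `
    -KeyLength 2048 `
    -NotAfter (Get-Date).AddYears(1)

# Export only the public portion as the .cer file to upload on the Token encryption page
Export-Certificate -Cert $cert -FilePath "C:\temp\token-encryption.cer"
```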
-### To deactivate token encryption in the Azure portal
+### To deactivate token encryption in the Microsoft Entra admin center
-1. In the Azure portal, go to **Azure Active Directory > Enterprise applications**, and then select the application that has SAML token encryption enabled.
+1. In the Microsoft Entra admin center, go to **Identity** > **Applications** > **Enterprise applications** > **All applications**, and then select the application that has SAML token encryption enabled.
1. On the application's page, select **Token encryption**, find the certificate, and then select the **...** option to show the dropdown menu.
You can add the public cert to your application configuration within the Azure p
## Configure registered application SAML token encryption
-This section describes how to configure registered application's SAML token encryption. Applications that have been set up from the **App registrations** blade in the Azure portal. For enterprise application, follow the [Configure enterprise application SAML token encryption](#configure-enterprise-application-saml-token-encryption) guidance.
+This section describes how to configure a registered application's SAML token encryption. This applies to applications that have been set up from the **App registrations** blade in the Microsoft Entra admin center. For enterprise applications, follow the [Configure enterprise application SAML token encryption](#configure-enterprise-application-saml-token-encryption) guidance.
Encryption certificates are stored on the application object in Azure AD with an `encrypt` usage tag. You can configure multiple encryption certificates and the one that's active for encrypting tokens is identified by the `tokenEncryptionKeyID` attribute.
-You'll need the application's object ID to configure token encryption using Microsoft Graph API or PowerShell. You can find this value programmatically, or by going to the application's **Properties** page in the Azure portal and noting the **Object ID** value.
+You'll need the application's object ID to configure token encryption using Microsoft Graph API or PowerShell. You can find this value programmatically, or by going to the application's **Properties** page in the Microsoft Entra admin center and noting the **Object ID** value.
When you configure a keyCredential using Graph, PowerShell, or in the application manifest, you should generate a GUID to use for the keyId.
-To configure token encryption, follow these steps:
+To configure token encryption for an application registration, follow these steps:
# [Portal](#tab/azure-portal)
-1. From the Azure portal, go to **Azure Active Directory > App registrations**.
-
-1. Select the **All apps** tab to show all apps, and then select the application that you want to configure.
-
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator).
+1. Browse to **Identity** > **Applications** > **App registrations** > **All applications**.
+1. Enter the name of the existing application in the search box, and then select the application from the search results.
1. In the application's page, select **Manifest** to edit the [application manifest](../develop/reference-app-manifest.md). The following example shows an application manifest configured with two encryption certificates, and with the second selected as the active one using the tokenEncryptionKeyId.
To configure token encryption, follow these steps:
# [Azure AD PowerShell](#tab/azuread-powershell)
-1. Use the latest Azure AD PowerShell module to connect to your tenant.
+1. Use the latest Azure AD PowerShell module to connect to your tenant. You need to sign in as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator).
1. Set the token encryption settings using the **[Set-AzureADApplication](/powershell/module/azuread/set-azureadapplication?view=azureadps-2.0-preview&preserve-view=true)** command.
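As a sketch of that step (the object ID and key ID are placeholders, and the `-TokenEncryptionKeyId` parameter is assumed from the preview module reference linked above):

```powershell
Connect-AzureAD

# Point the application at the keyCredential (usage "Encrypt") that should encrypt tokens
Set-AzureADApplication -ObjectId "<application-object-id>" -TokenEncryptionKeyId "<keyId-guid>"
```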
To configure token encryption, follow these steps:
# [Microsoft Graph PowerShell](#tab/msgraph-powershell)
-1. Use the Microsoft Graph PowerShell module to connect to your tenant.
+1. Use the Microsoft Graph PowerShell module to connect to your tenant. You need to sign in as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator).
1. Set the token encryption settings using the **[Update-MgApplication](/powershell/module/microsoft.graph.applications/update-mgapplication?view=graph-powershell-1.0&preserve-view=true)** command.
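A sketch of that step; the IDs are placeholders, and the `-TokenEncryptionKeyId` parameter is assumed to map to the `tokenEncryptionKeyId` application property described earlier.

```powershell
Connect-MgGraph -Scopes "Application.ReadWrite.All"

# Set the active token encryption certificate by the keyId generated for its keyCredential
Update-MgApplication -ApplicationId "<application-object-id>" -TokenEncryptionKeyId "<keyId-guid>"
```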
active-directory Manage App Consent Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/manage-app-consent-policies.md
Previously updated : 02/28/2023 Last updated : 08/25/2023
zone_pivot_groups: enterprise-apps-minus-portal-aad
# Manage app consent policies
+App consent policies are a way to manage the permissions that apps have to access data in your organization. They're used to control what apps users can consent to and to ensure that apps meet certain criteria before they can access data. These policies help organizations maintain control over their data and ensure they only grant access to trusted apps.
+
+In this article, you learn how to manage built-in and custom app consent policies to control when consent can be granted.
++ With [Microsoft Graph](/graph/overview) and [Microsoft Graph PowerShell](/powershell/microsoftgraph/get-started?view=graph-powershell-1.0&preserve-view=true), you can view and manage app consent policies. An app consent policy consists of one or more "include" condition sets and zero or more "exclude" condition sets. For an event to be considered in an app consent policy, it must match *at least* one "include" condition set, and must not match *any* "exclude" condition set.
Each condition set consists of several conditions. For an event to match a condi
App consent policies where the ID begins with "microsoft-" are built-in policies. Some of these built-in policies are used in existing built-in directory roles. For example, the `microsoft-application-admin` app consent policy describes the conditions under which the Application Administrator and Cloud Application Administrator roles are allowed to grant tenant-wide admin consent. Built-in policies can be used in custom directory roles and to configure user consent settings, but can't be edited or deleted.
-## Pre-requisites
+## Prerequisites
-1. A user or service with one of the following roles:
+- A user or service with one of the following roles:
  - Global Administrator directory role
  - Privileged Role Administrator directory role
  - A custom directory role with the necessary [permissions to manage app consent policies](../roles/custom-consent-permissions.md#managing-app-consent-policies)
App consent policies where the ID begins with "microsoft-" are built-in policies
:::zone pivot="ms-powershell"
-2. Connect to [Microsoft Graph PowerShell](/powershell/microsoftgraph/get-started?view=graph-powershell-1.0&preserve-view=true).
-
+To manage app consent policies for applications with Microsoft Graph PowerShell, connect to [Microsoft Graph PowerShell](/powershell/microsoftgraph/get-started?view=graph-powershell-1.0&preserve-view=true).
```powershell
Connect-MgGraph -Scopes "Policy.ReadWrite.PermissionGrant"
```
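Once connected, a quick way to get oriented is to list the policies that already exist; built-in policy IDs start with "microsoft-". A short sketch using the same cmdlets shown later in this digest for group owner consent policies:

```powershell
# List all app consent policies in the tenant
Get-MgPolicyPermissionGrantPolicy | Format-Table Id, DisplayName, Description

# Inspect the "include" condition sets of the built-in admin policy
Get-MgPolicyPermissionGrantPolicyInclude -PermissionGrantPolicyId "microsoft-application-admin" | Format-List
```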
Once the app consent policy has been created, you can [allow user consent](confi
## Delete a custom app consent policy
-1. The following shows how you can delete a custom app consent policy. **This action cannot be undone.**
-
- ```powershell
+The following cmdlet shows how you can delete a custom app consent policy.
+
+```powershell
Remove-MgPolicyPermissionGrantPolicy -PermissionGrantPolicyId "my-custom-policy"
- ```
+```
:::zone-end
Follow these steps to create a custom app consent policy:
1. Add "include" condition sets.
- Include delegated permissions classified "low", for apps from verified publishers
+ Include delegated permissions classified "low" for apps from verified publishers
```http
POST https://graph.microsoft.com/v1.0/policies/permissionGrantPolicies/{ my-custom-policy }/includes
Follow these steps to create a custom app consent policy:
{ "permissionType": "delegated",
- “PermissionClassification: "low",
+ "permissionClassification": "low",
"clientApplicationsFromVerifiedPublisherOnly": true } ```
Once the app consent policy has been created, you can [allow user consent](confi
## Delete a custom app consent policy
-1. The following shows how you can delete a custom app consent policy. **This action can't be undone.**
+1. The following shows how you can delete a custom app consent policy.
-```http
-DELETE https://graph.microsoft.com/v1.0/policies/permissionGrantPolicies/ my-custom-policy
-```
+ ```http
+ DELETE https://graph.microsoft.com/v1.0/policies/permissionGrantPolicies/my-custom-policy
+ ```
:::zone-end

> [!WARNING]
> Deleted app consent policies cannot be restored. If you accidentally delete a custom app consent policy, you will need to re-create the policy.
+
### Supported conditions

The following table provides the list of supported conditions for app consent policies.
The following table provides the list of supported conditions for app consent po
| PermissionClassification | The [permission classification](configure-permission-classifications.md) for the permission being granted, or "all" to match with any permission classification (including permissions that aren't classified). Default is "all". |
| PermissionType | The permission type of the permission being granted. Use "application" for application permissions (for example, app roles) or "delegated" for delegated permissions. <br><br>**Note**: The value "delegatedUserConsentable" indicates delegated permissions that haven't been configured by the API publisher to require admin consent. This value may be used in built-in permission grant policies, but can't be used in custom permission grant policies. Required. |
| ResourceApplication | The **AppId** of the resource application (for example, the API) for which a permission is being granted, or "any" to match with any resource application or API. Default is "any". |
-| Permissions | The list of permission IDs for the specific permissions to match with, or a list with the single value "all" to match with any permission. Default is the single value "all". <ul><li>Delegated permission IDs can be found in the **OAuth2Permissions** property of the API's ServicePrincipal object.</li><li>Application permission IDs can be found in the **AppRoles** property of the API's ServicePrincipal object.</li></ol> |
+| Permissions | The list of permission IDs for the specific permissions to match with, or a list with the single value "all" to match with any permission. Default is the single value "all". <br> - Delegated permission IDs can be found in the **OAuth2Permissions** property of the API's ServicePrincipal object. <br> - Application permission IDs can be found in the **AppRoles** property of the API's ServicePrincipal object. |
| ClientApplicationIds | A list of **AppId** values for the client applications to match with, or a list with the single value "all" to match any client application. Default is the single value "all". |
| ClientApplicationTenantIds | A list of Azure Active Directory tenant IDs in which the client application is registered, or a list with the single value "all" to match with client apps registered in any tenant. Default is the single value "all". |
| ClientApplicationPublisherIds | A list of Microsoft Partner Network (MPN) IDs for [verified publishers](../develop/publisher-verification-overview.md) of the client application, or a list with the single value "all" to match with client apps from any publisher. Default is the single value "all". |
| ClientApplicationsFromVerifiedPublisherOnly | Set this switch to only match on client applications with a [verified publisher](../develop/publisher-verification-overview.md). Disable this switch (`-ClientApplicationsFromVerifiedPublisherOnly:$false`) to match on any client app, even if it doesn't have a verified publisher. Default is `$false`. |
+|scopeType| The resource scope type the preapproval applies to. Possible values: `group` for [groups](/graph/api/resources/group) and [teams](/graph/api/resources/team), `chat` for [chats](/graph/api/resources/chat?view=graph-rest-1.0&preserve-view=true), or `tenant` for tenant-wide access. Required.|
+| sensitivityLabels | The sensitivity labels that are applicable to the scope type and have been preapproved. They allow you to protect sensitive organizational data. Learn about [sensitivity labels](/microsoft-365/compliance/sensitivity-labels). **Note:** The chat resource **does not** support sensitivityLabels yet. |
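For example, the **Permissions** condition can narrow a custom policy to specific permission IDs instead of a classification. A minimal Microsoft Graph PowerShell sketch; the `-Permissions` parameter is assumed from the condition name above, and the GUID is assumed to be the Microsoft Graph `User.Read` delegated permission ID, so verify it against the **OAuth2Permissions** of the Microsoft Graph service principal in your tenant.

```powershell
# Allow consent only to one specific delegated permission
# (GUID assumed to be Microsoft Graph User.Read; confirm in your tenant)
New-MgPolicyPermissionGrantPolicyInclude `
    -PermissionGrantPolicyId "my-custom-policy" `
    -PermissionType "delegated" `
    -Permissions @("e1fe6dd8-ba31-4d61-89e7-88639da4683d")
```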
-> [!WARNING]
-> Deleted app consent policies cannot be restored. If you accidentally delete a custom app consent policy, you will need to re-create the policy.
## Next steps
-To learn more:
-
-* [Manage app consent policies using Microsoft Graph](/graph/api/resources/permissiongrantpolicy)
-* [Configure user consent settings](configure-user-consent.md)
-* [Configure the admin consent workflow](configure-admin-consent-workflow.md)
-* [Learn how to manage consent to applications and evaluate consent requests](manage-consent-requests.md)
-* [Grant tenant-wide admin consent to an application](grant-admin-consent.md)
-* [Permissions and consent in the Microsoft identity platform](../develop/permissions-consent-overview.md)
+- [Manage group owner consent policies](manage-group-owner-consent-policies.md)
To get help or find answers to your questions:
-* [Azure AD on Microsoft Q&A](/answers/products/)
+* [Azure AD on Microsoft Q&A](/answers/products/)
active-directory Manage Application Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/manage-application-permissions.md
Previously updated : 03/28/2023 Last updated : 09/04/2023 zone_pivot_groups: enterprise-apps-all --+ #customer intent: As an admin, I want to review permissions granted to applications so that I can restrict suspicious or over privileged applications.- # Review permissions granted to enterprise applications
Please see [Restore permissions granted to applications](restore-permissions.md)
[!INCLUDE [portal updates](~/articles/active-directory/includes/portal-update.md)]
-You can access the Azure portal to view the permissions granted to an app. You can revoke permissions granted by admins for your entire organization, and you can get contextual PowerShell scripts to perform other actions.
+You can access the Microsoft Entra admin center to view the permissions granted to an app. You can revoke permissions granted by admins for your entire organization, and you can get contextual PowerShell scripts to perform other actions.
-To revoke an application's permissions that have been granted for the entire organization:
+To review an application's permissions that have been granted for the entire organization or to a specific user or group:
-1. Sign in to the [Azure portal](https://portal.azure.com) using one of the roles listed in the prerequisites section.
-1. Select **Azure Active Directory**, and then select **Enterprise applications**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator).
+1. Browse to **Identity** > **Applications** > **Enterprise applications** > **All applications**.
1. Select the application that you want to restrict access to.
1. Select **Permissions**.
-1. The permissions listed in the **Admin consent** tab apply to your entire organization. Choose the permission you would like to remove, select the **...** control for that permission, and then choose **Revoke permission**.
-
-To review an application's permissions:
-
-1. Sign in to the [Azure portal](https://portal.azure.com) using one of the roles listed in the prerequisites section.
-1. Select **Azure Active Directory**, and then select **Enterprise applications**.
-1. Select the application that you want to restrict access to.
-1. Select **Permissions**. In the command bar, select **Review permissions**.
-![Screenshot of the review permissions window.](./media/manage-application-permissions/review-permissions.png)
-1. Give a reason for why you want to review permissions for the application by selecting any of the options listed after the question, **Why do you want to review permissions for this application?**
-
-Each option generates PowerShell scripts that enable you to control user access to the application and to review permissions granted to the application. For information about how to control user access to an application, see [How to remove a user's access to an application](methods-for-removing-user-access.md)
+1. To view permissions that apply to your entire organization, select the **Admin consent** tab. To view permissions granted to a specific user or group, select the **User consent** tab.
+1. To view the details of a given permission, select the permission from the list. The **Permission Details** pane opens.
+1. To revoke a given permission, choose the permission you would like to revoke, select the **...** control for that permission, and then choose **Revoke permission**.
:::zone-end
Each option generates PowerShell scripts that enable you to control user access
## Review and revoke permissions
-Use the following Azure AD PowerShell script to revoke all permissions granted to an application.
+Use the following Azure AD PowerShell script to revoke all permissions granted to an application. You need to sign in as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator).
```powershell Connect-AzureAD
$spOAuth2PermissionsGrants | ForEach-Object {
} # Get all application permissions for the service principal
-$spApplicationPermissions = Get-AzureADServiceAppRoleAssignedTo-ObjectId $sp.ObjectId -All $true | Where-Object { $_.PrincipalType -eq "ServicePrincipal" }
+$spApplicationPermissions = Get-AzureADServiceAppRoleAssignedTo -ObjectId $sp.ObjectId -All $true | Where-Object { $_.PrincipalType -eq "ServicePrincipal" }
# Remove all application permissions $spApplicationPermissions | ForEach-Object {
$assignments | ForEach-Object {
## Review and revoke permissions
-Use the following Microsoft Graph PowerShell script to revoke all permissions granted to an application.
+Use the following Microsoft Graph PowerShell script to revoke all permissions granted to an application. You need to sign in as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator).
```powershell Connect-MgGraph -Scopes "Application.ReadWrite.All", "Directory.ReadWrite.All", "DelegatedPermissionGrant.ReadWrite.All", "AppRoleAssignment.ReadWrite.All"
$spApplicationPermissions = Get-MgServicePrincipalAppRoleAssignedTo -ServicePrin
## Review and revoke permissions
-To review permissions, Sign in to [Graph Explorer](https://developer.microsoft.com/graph/graph-explorer) with one of the roles listed in the prerequisite section.
+To review permissions, sign in to [Graph Explorer](https://developer.microsoft.com/graph/graph-explorer) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator).
You need to consent to the following permissions:
active-directory Manage Group Owner Consent Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/manage-group-owner-consent-policies.md
+
+ Title: Manage app consent policies for group owners
+description: Learn how to manage built-in and custom app consent policies for group owner to control when consent can be granted.
+++++++ Last updated : 08/25/2023+++
+zone_pivot_groups: enterprise-apps-minus-portal-aad
+
+#customer intent: As an admin, I want to manage app consent policies for group owner for enterprise applications in Azure AD
++
+# Manage app consent policies for group owners
+
+App consent policies are a way to manage the permissions that apps have to access data in your organization. They're used to control what apps users can consent to and to ensure that apps meet certain criteria before they can access data. These policies help organizations maintain control over their data and ensure that it's being accessed only by trusted apps.
+
+In this article, you learn how to manage built-in and custom app consent policies to control when group owner consent can be granted.
+
+With [Microsoft Graph](/graph/overview) and [Microsoft Graph PowerShell](/powershell/microsoftgraph/get-started?view=graph-powershell-1.0&preserve-view=true), you can view and manage group owner consent policies.
+
+A group owner consent policy consists of zero or more "include" condition sets and zero or more "exclude" condition sets. For an event to be considered in a group owner consent policy, it must match *at least* one "include" condition set, and must not match *any* "exclude" condition set.
+
+Each condition set consists of several conditions. For an event to match a condition set, *all* conditions in the condition set must be met.
+
+Group owner consent policies where the ID begins with "microsoft-" are built-in policies. For example, the `microsoft-pre-approval-apps-for-group` group owner consent policy describes the conditions under which the group owners are allowed to grant consent to applications from the preapproved list by the admin to access data for the groups they own. Built-in policies can be used in custom directory roles and to configure user consent settings, but can't be edited or deleted.
+
+## Prerequisites
+
+- A user or service with one of the following roles:
+ - Global Administrator directory role
+ - Privileged Role Administrator directory role
+ - A custom directory role with the necessary [permissions to manage group owner consent policies](../roles/custom-consent-permissions.md#managing-app-consent-policies)
+ - The Microsoft Graph app role (application permission) Policy.ReadWrite.PermissionGrant (when connecting as an app or a service)
+- To allow group owner consent subject to app consent policies, the group owner consent setting must be disabled. Once disabled, your current policy is read from the app consent policy. To learn how to disable group owner consent, see [Disable group owner consent setting](configure-user-consent-groups.md).
+
+
+To manage group owner consent policies for applications with Microsoft Graph PowerShell, connect to [Microsoft Graph PowerShell](/powershell/microsoftgraph/get-started?view=graph-powershell-1.0&preserve-view=true) and sign in with one of the roles listed in the prerequisites section. You also need to consent to the `Policy.ReadWrite.PermissionGrant` permission.
+
+ ```powershell
+ # change the profile to beta by using the `Select-MgProfile` command
+ Select-MgProfile -Name "beta"
+ ```
+ ```powershell
+ Connect-MgGraph -Scopes "Policy.ReadWrite.PermissionGrant"
+ ```
+
+## Retrieve the current value for the group owner consent policy
+
+The following steps show how to verify whether your group owner consent setting has been authorized in other ways.
+
+1. Retrieve the current value for the group owner consent setting:
+
+ ```powershell
+ Get-MgPolicyAuthorizationPolicy | select -ExpandProperty DefaultUserRolePermissions | ft PermissionGrantPoliciesAssigned
+ ```
+If `ManagePermissionGrantPoliciesForOwnedResource` is returned in `PermissionGrantPoliciesAssigned`, your group owner consent setting might have been authorized in other ways.
+
+1. Check if the policy is scoped to `group`:
+
+   ```powershell
+   Get-MgPolicyPermissionGrantPolicy -PermissionGrantPolicyId {"microsoft-all-application-permissions-for-group"} | Select -ExpandProperty AdditionalProperties
+   ```
+If `ResourceScopeType` == `group`, your group owner consent setting has been authorized in other ways. In addition, if the app consent policy for groups has been assigned `microsoft-pre-approval-apps-for-group`, it means the preapproval feature is enabled for your tenant.
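A quick scripted check for the preapproval assignment; this sketch assumes the policy ID appears verbatim in the assigned policy strings.

```powershell
# Returns a value when the preapproval policy is among the assigned permission grant policies
(Get-MgPolicyAuthorizationPolicy).DefaultUserRolePermissions.PermissionGrantPoliciesAssigned |
    Where-Object { $_ -match "microsoft-pre-approval-apps-for-group" }
```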
++
+## List existing group owner consent policies
+
+It's a good idea to start by getting familiar with the existing group owner consent policies in your organization:
+
+1. List all group owner consent policies:
+
+ ```powershell
+ Get-MgPolicyPermissionGrantPolicy | ft Id, DisplayName, Description
+ ```
+
+1. View the "include" condition sets of a policy:
+
+ ```powershell
+ Get-MgPolicyPermissionGrantPolicyInclude -PermissionGrantPolicyId {"microsoft-all-application-permissions-for-group"} | fl
+ ```
+
+1. View the "exclude" condition sets:
+
+ ```powershell
+ Get-MgPolicyPermissionGrantPolicyExclude -PermissionGrantPolicyId {"microsoft-all-application-permissions-for-group"} | fl
+ ```
+
+## Create a custom group owner consent policy
+
+Follow these steps to create a custom group owner consent policy:
+
+1. Create a new empty group owner consent policy.
+
+ ```powershell
+ New-MgPolicyPermissionGrantPolicy `
+ -Id "my-custom-app-consent-policy-for-group" `
+ -DisplayName "My first custom app consent policy for group" `
+ -Description "This is a sample custom app consent policy for group." `
+ -AdditionalProperties @{includeAllPreApprovedApplications = $false; resourceScopeType = "group"}
+ ```
+1. Add "include" condition sets.
+
+ ```powershell
+ # Include delegated permissions classified "low", for apps from verified publishers
+ New-MgPolicyPermissionGrantPolicyInclude `
+ -PermissionGrantPolicyId "my-custom-app-consent-policy-for-group" `
+ -PermissionType "delegated" `
+ -PermissionClassification "low" `
+ -ClientApplicationsFromVerifiedPublisherOnly
+ ```
+
+ Repeat this step to add more "include" condition sets.
+
+1. Optionally, add "exclude" condition sets.
+
+ ```powershell
+ # Retrieve the service principal for the Azure Management API
+ $azureApi = Get-MgServicePrincipal -Filter "servicePrincipalNames/any(n:n eq 'https://management.azure.com/')"
+
+ # Exclude delegated permissions for the Azure Management API
+ New-MgPolicyPermissionGrantPolicyExclude `
+ -PermissionGrantPolicyId "my-custom-app-consent-policy-for-group" `
+ -PermissionType "delegated" `
+ -ResourceApplication $azureApi.AppId
+ ```
+
+ Repeat this step to add more "exclude" condition sets.
+
+Once the app consent policy for group has been created, you can [allow group owner consent](configure-user-consent-groups.md) subject to this policy.
+
+## Delete a custom group owner consent policy
+
+The following cmdlet shows how you can delete a custom group owner consent policy.
+
+```powershell
+Remove-MgPolicyPermissionGrantPolicy -PermissionGrantPolicyId "my-custom-app-consent-policy-for-group"
+```
+++
+To manage group owner consent policies, sign in to [Graph Explorer](https://developer.microsoft.com/graph/graph-explorer) with one of the roles listed in the prerequisite section. You also need to consent to the `Policy.ReadWrite.PermissionGrant` permission.
+
+## Retrieve the current value for the group owner consent policy
+
+The following steps show how to verify whether your group owner consent setting has been authorized in other ways.
+
+1. Retrieve the current policy value:
+ ```http
+ GET /policies/authorizationPolicy
+ ```
+ If `ManagePermissionGrantPoliciesForOwnedResource` appears, your group owner consent setting might have been authorized in other ways.
+
+1. Check if the policy is scoped to `group`:
+ ```http
+ GET /policies/permissionGrantPolicies/{ microsoft-all-application-permissions-for-group }
+ ```
+ If `resourceScopeType` == `group`, your group owner consent setting has been authorized in other ways. In addition, if the app consent policy for groups has been assigned `microsoft-pre-approval-apps-for-group`, it means the preapproval feature is enabled for your tenant.
+
+## List existing group owner consent policies
+
+It's a good idea to start by getting familiar with the existing group owner consent policies in your organization:
+
+1. List all app consent policies:
+
+ ```http
+ GET /policies/permissionGrantPolicies
+ ```
+
+1. View the "include" condition sets of a policy:
+
+ ```http
+ GET /policies/permissionGrantPolicies/{ microsoft-all-application-permissions-for-group }/includes
+ ```
+
+1. View the "exclude" condition sets:
+
+ ```http
+ GET /policies/permissionGrantPolicies/{ microsoft-all-application-permissions-for-group }/excludes
+ ```
+
+## Create a custom group owner consent policy
+
+Follow these steps to create a custom group owner consent policy:
+
+1. Create a new empty group owner consent policy.
+
+ ```http
+ POST https://graph.microsoft.com/v1.0/policies/permissionGrantPolicies
+
+ {
+ "id": "my-custom-app-consent-policy-for-group",
+ "displayName": "My first custom app consent policy for group",
+ "description": "This is a sample custom app consent policy for group",
+ "includeAllPreApprovedApplications": false,
+ "resourceScopeType": "group"
+ }
+ ```
+
+1. Add "include" condition sets.
+
+ Include delegated permissions classified "low" for apps from verified publishers
+
+ ```http
+ POST https://graph.microsoft.com/v1.0/policies/permissionGrantPolicies/{ my-custom-app-consent-policy-for-group }/includes
+
+ {
+ "permissionType": "delegated",
+ "permissionClassification": "low",
+ "clientApplicationsFromVerifiedPublisherOnly": true
+ }
+ ```
+
+ Repeat this step to add more "include" condition sets.
+
+1. Optionally, add "exclude" condition sets.
+ Exclude delegated permissions for the Azure Management API (appId 46e6adf4-a9cf-4b60-9390-0ba6fb00bf6b)
+ ```http
+ POST https://graph.microsoft.com/v1.0/policies/permissionGrantPolicies/{ my-custom-app-consent-policy-for-group }/excludes
+
+ {
+ "permissionType": "delegated",
+ "resourceApplication": "46e6adf4-a9cf-4b60-9390-0ba6fb00bf6b "
+ }
+ ```
+
+ Repeat this step to add more "exclude" condition sets.
+
+Once the group owner consent policy has been created, you can [allow group owner consent](configure-user-consent.md?tabs=azure-powershell#allow-user-consent-subject-to-an-app-consent-policy) subject to this policy.
+
+## Delete a custom group owner consent policy
+
+The following request shows how you can delete a custom group owner consent policy.
+
+```http
+DELETE https://graph.microsoft.com/v1.0/policies/permissionGrantPolicies/my-custom-app-consent-policy-for-group
+```
++
+> [!WARNING]
+> Deleted group owner consent policies cannot be restored. If you accidentally delete a custom group owner consent policy, you will need to re-create the policy.
+
+### Supported conditions
+
+The following table provides the list of supported conditions for group owner consent policies.
+
+| Condition | Description|
+|:|:-|
+| PermissionClassification | The [permission classification](configure-permission-classifications.md) for the permission being granted, or "all" to match with any permission classification (including permissions that aren't classified). Default is "all". |
+| PermissionType | The permission type of the permission being granted. Use "application" for application permissions (for example, app roles) or "delegated" for delegated permissions. <br><br>**Note**: The value "delegatedUserConsentable" indicates delegated permissions that haven't been configured by the API publisher to require admin consent. This value may be used in built-in permission grant policies, but can't be used in custom permission grant policies. Required. |
+| ResourceApplication | The **AppId** of the resource application (for example, the API) for which a permission is being granted, or "any" to match with any resource application or API. Default is "any". |
+| Permissions | The list of permission IDs for the specific permissions to match with, or a list with the single value "all" to match with any permission. Default is the single value "all". <br> - Delegated permission IDs can be found in the **OAuth2Permissions** property of the API's ServicePrincipal object.<br> - Application permission IDs can be found in the **AppRoles** property of the API's ServicePrincipal object. |
+| ClientApplicationIds | A list of **AppId** values for the client applications to match with, or a list with the single value "all" to match any client application. Default is the single value "all". |
+| ClientApplicationTenantIds | A list of Azure Active Directory tenant IDs in which the client application is registered, or a list with the single value "all" to match with client apps registered in any tenant. Default is the single value "all". |
+| ClientApplicationPublisherIds | A list of Microsoft Partner Network (MPN) IDs for [verified publishers](../develop/publisher-verification-overview.md) of the client application, or a list with the single value "all" to match with client apps from any publisher. Default is the single value "all". |
+| ClientApplicationsFromVerifiedPublisherOnly | Set this switch to only match on client applications with a [verified publisher](../develop/publisher-verification-overview.md). Disable this switch (`-ClientApplicationsFromVerifiedPublisherOnly:$false`) to match on any client app, even if it doesn't have a verified publisher. Default is `$false`. |
+
+
+To get help or find answers to your questions:
+
+- [Azure AD on Microsoft Q&A](/answers/products/)
active-directory Manage Self Service Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/manage-self-service-access.md
# Enable self-service application assignment
-In this article, you learn how to enable self-service application access using the Azure portal.
+In this article, you learn how to enable self-service application access using the Microsoft Entra admin center.
Before your users can self-discover applications from the [My Apps portal](./myapps-overview.md), you need to enable **Self-service application access** for the applications. This functionality is available for applications that were added from the Azure AD Gallery, [Azure AD Application Proxy](../app-proxy/application-proxy.md), or were added using [user or admin consent](../develop/application-consent-experience.md).
Self-service application access is a great way to allow users to self-discover a
To enable self-service application access to an application, follow the steps below:
-1. Sign in to the [Azure portal](https://portal.azure.com) as a Global Administrator.
-
-1. Select **Azure Active Directory**. In the left navigation menu, select **Enterprise applications**.
-
-1. Select the application from the list. If you don't see the application, start typing its name in the search box. Or use the filter controls to select the application type, status, or visibility, and then select **Apply**.
-
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator).
+1. Browse to **Identity** > **Applications** > **Enterprise applications** > **All applications**.
+1. Enter the name of the existing application in the search box, and then select the application from the search results.
1. In the left navigation menu, select **Self-service**.

   > [!NOTE]
   > The **Self-service** menu item isn't available if the corresponding app registration's setting for public client flows is enabled. To access this setting, the app registration needs to exist in your tenant. Locate the app registration, select **Authentication** in the left navigation, then locate **Allow public client flows**.

1. To enable Self-service application access for this application, set **Allow users to request access to this application?** to **Yes**.
-
1. Next to **To which group should assigned users be added?**, select **Select group**. Choose a group, and then select **Select**. When a user's request is approved, they'll be added to this group. When viewing this group's membership, you'll be able to see who has been granted access to the application through self-service access (see the sketch after these steps).

   > [!NOTE]
   > This setting doesn't support groups synchronized from on-premises.

1. **Optional:** To require business approval before users are allowed access, set **Require approval before granting access to this application?** to **Yes**.
-
1. **Optional:** Next to **Who is allowed to approve access to this application?**, select **Select approvers** to specify the business approvers who are allowed to approve access to this application. Select up to 10 individual business approvers, and then select **Select**.

   >[!NOTE]
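Because approved requests land in the group you selected, you can audit who obtained access through self-service by listing that group's members. A minimal Microsoft Graph PowerShell sketch; the group ID is a placeholder.

```powershell
Connect-MgGraph -Scopes "GroupMember.Read.All"

# List the members of the group backing self-service assignment
Get-MgGroupMember -GroupId "<group-object-id>" |
    ForEach-Object { $_.AdditionalProperties["displayName"] }
```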
active-directory Migrate Adfs Application Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/migrate-adfs-application-activity.md
Many organizations use Active Directory Federation Services (AD FS) to provide single sign-on to cloud applications. There are significant benefits to moving your AD FS applications to Azure AD for authentication, especially in terms of cost management, risk management, productivity, compliance, and governance. But understanding which applications are compatible with Azure AD and identifying specific migration steps can be time consuming.
-The AD FS application activity report in the [Entra portal](https://entra.microsoft.com) lets you quickly identify which of your applications are capable of being migrated to Azure AD. It assesses all AD FS applications for compatibility with Azure AD, checks for any issues, and gives guidance on preparing individual applications for migration. With the AD FS application activity report, you can:
+The AD FS application activity report in the [Microsoft Entra admin center](https://entra.microsoft.com) lets you quickly identify which of your applications are capable of being migrated to Azure AD. It assesses all AD FS applications for compatibility with Azure AD, checks for any issues, and gives guidance on preparing individual applications for migration. With the AD FS application activity report, you can:
* **Discover AD FS applications and scope your migration.** The AD FS application activity report lists all AD FS applications in your organization that have had an active user login in the last 30 days. The report indicates an app's readiness for migration to Azure AD. The report doesn't display Microsoft-related relying parties in AD FS such as Office 365. For example, relying parties with name 'urn:federation:MicrosoftOnline'.
The AD FS application activity data is available to users who are assigned any o
## Prerequisites
-* Your organization must be currently using AD FS to access applications.
-* Azure AD Connect Health must be enabled in your Azure AD tenant.
-* The Azure AD Connect Health for AD FS agent must be installed.
-* [Learn more about Azure AD Connect Health](../hybrid/connect/how-to-connect-health-adfs.md)
-* [Get started with setting up Azure AD Connect Health and install the AD FS agent](../hybrid/connect/how-to-connect-health-agent-install.md)
+- Your organization must be currently using AD FS to access applications.
+- One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, Global Reader, or owner of the service principal.
+- Azure AD Connect Health must be enabled in your Azure AD tenant.
+- The Azure AD Connect Health for AD FS agent must be installed.
+- [Learn more about Azure AD Connect Health](../hybrid/connect/how-to-connect-health-adfs.md).
+- [Get started with setting up Azure AD Connect Health and install the AD FS agent](../hybrid/connect/how-to-connect-health-agent-install.md).
+ >[!IMPORTANT] >There are a couple reasons you won't see all the applications you are expecting after you have installed Azure AD Connect Health. The AD FS application activity report only shows AD FS relying parties with user logins in the last 30 days. Also, the report won't display Microsoft related relying parties such as Office 365. ## Discover AD FS applications that can be migrated
-The AD FS application activity report is available in the [Entra portal](https://entra.microsoft.com) under Azure AD **Usage & insights** reporting. The AD FS application activity report analyzes each AD FS application to determine if it can be migrated as-is, or if additional review is needed.
-
-1. Sign in to the [Entra portal](https://entra.microsoft.com) with an admin role that has access to AD FS application activity data (global administrator, reports reader, security reader, application administrator, or cloud application administrator).
-
-2. Select **Azure Active Directory**, and then select **Enterprise applications**.
+The AD FS application activity report is available in the Microsoft Entra admin center under Azure AD **Usage & insights** reporting. The AD FS application activity report analyzes each AD FS application to determine if it can be migrated as-is, or if additional review is needed.
-3. Under **Activity**, select **Usage & Insights**, and then select **AD FS application activity** to open a list of all AD FS applications in your organization.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator).
+1. Browse to **Identity** > **Applications** > **Enterprise applications**.
+1. Under **Activity**, select **Usage & Insights**, and then select **AD FS application activity** to open a list of all AD FS applications in your organization.
![AD FS application activity](media/migrate-adfs-application-activity/adfs-application-activity.png)
-4. For each application in the AD FS application activity list, view the **Migration status**:
+1. For each application in the AD FS application activity list, view the **Migration status**:
- * **Ready to migrate** means the AD FS application configuration is fully supported in Azure AD and can be migrated as-is.
+ - **Ready to migrate** means the AD FS application configuration is fully supported in Azure AD and can be migrated as-is.
- * **Needs review** means some of the application's settings can be migrated to Azure AD, but you'll need to review the settings that can't be migrated as-is.
+ - **Needs review** means some of the application's settings can be migrated to Azure AD, but you'll need to review the settings that can't be migrated as-is.
- * **Additional steps required** means Azure AD doesn't support some of the application's settings, so the application can't be migrated in its current state.
+ - **Additional steps required** means Azure AD doesn't support some of the application's settings, so the application can't be migrated in its current state.
## Evaluate the readiness of an application for migration
-1. In the AD FS application activity list, click the status in the **Migration status** column to open migration details. You'll see a summary of the configuration tests that passed, along with any potential migration issues.
+1. In the AD FS application activity list, select the status in the **Migration status** column to open migration details. You'll see a summary of the configuration tests that passed, along with any potential migration issues.
![Migration details](media/migrate-adfs-application-activity/migration-details.png)
-2. Click a message to open additional migration rule details. For a full list of the properties tested, see the [AD FS application configuration tests](#ad-fs-application-configuration-tests) table, below.
+1. Select a message to open additional migration rule details. For a full list of the properties tested, see the [AD FS application configuration tests](#ad-fs-application-configuration-tests) table, below.
![Migration rule details](media/migrate-adfs-application-activity/migration-rule-details.png)
The following table lists all configuration tests that are performed on AD FS ap
If you have configured a claim rule for the application in AD FS, the experience will provide a granular analysis for all the claim rules. You'll see which claim rules can be moved to Azure AD and which ones need further review.
-1. In the AD FS application activity list, click the status in the **Migration status** column to open migration details. You'll see a summary of the configuration tests that passed, along with any potential migration issues.
+1. In the AD FS application activity list, select the status in the **Migration status** column to open migration details. You'll see a summary of the configuration tests that passed, along with any potential migration issues.
2. On the **Migration rule details** page, expand the results to display details about potential migration issues and to get additional guidance. For a detailed list of all claim rules tested, see the [Check the results of claim rule tests](#check-the-results-of-claim-rule-tests) table, below.
active-directory Migrate Adfs Apps Stages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/migrate-adfs-apps-stages.md
Update the configuration of your production app to point to your production Azur
Your line-of-business apps are those that your organization developed or those that are a standard packaged product.
-Line-of-business apps that use OAuth 2.0, OpenID Connect, or WS-Federation can be integrated with Azure AD as [app registrations](../develop/quickstart-register-app.md). Integrate custom apps that use SAML 2.0 or WS-Federation as [non-gallery applications](add-application-portal.md) on the enterprise applications page in the [Entra portal](https://entra.microsoft.com/#home).
+Line-of-business apps that use OAuth 2.0, OpenID Connect, or WS-Federation can be integrated with Azure AD as [app registrations](../develop/quickstart-register-app.md). Integrate custom apps that use SAML 2.0 or WS-Federation as [non-gallery applications](add-application-portal.md) on the enterprise applications page in the [Microsoft Entra admin center](https://entra.microsoft.com/#home).
## Next steps
active-directory Migrate Adfs Represent Security Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/migrate-adfs-represent-security-policies.md
Explicit group authorization in AD FS:
To map this rule to Azure AD:
-1. In the [Entra portal](https://entra.microsoft.com/#home), [create a user group](../fundamentals/how-to-manage-groups.md) that corresponds to the group of users from AD FS.
+1. In the [Microsoft Entra admin center](https://entra.microsoft.com/#home), [create a user group](../fundamentals/how-to-manage-groups.md) that corresponds to the group of users from AD FS.
1. Assign app permissions to the group:

   :::image type="content" source="media/migrate-adfs-represent-security-policies/allow-a-group-explicitly-2.png" alt-text="Screenshot shows how to add a user assignment to the app.":::
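If you're scripting the migration, the same assignment can be made with Microsoft Graph PowerShell. A sketch with placeholder IDs; the empty GUID requests the default access app role, assuming the app defines no specific roles.

```powershell
Connect-MgGraph -Scopes "AppRoleAssignment.ReadWrite.All"

# Assign the group to the application's service principal (default access role)
New-MgGroupAppRoleAssignment -GroupId "<group-object-id>" `
    -PrincipalId "<group-object-id>" `
    -ResourceId "<service-principal-object-id>" `
    -AppRoleId ([Guid]::Empty)
```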
Explicit user authorization in AD FS:
To map this rule to Azure AD:
-* In the [Entra portal](https://entra.microsoft.com/#home), add a user to the app through the Add Assignment tab of the app as shown below:
+* In the [Microsoft Entra admin center](https://entra.microsoft.com/#home), add a user to the app through the Add Assignment tab of the app as shown below:
:::image type="content" source="media/migrate-adfs-represent-security-policies/authorize-a-specific-user-2.png" alt-text="Screenshot shows My SaaS apps in Azure.":::
The following are examples of types of MFA rules in AD FS, and how you can map t
MFA rule settings in AD FS:

### Example 1: Enforce MFA based on users/groups
Emit attributes as Claims rule in AD FS:
To map the rule to Azure AD:
-1. In the [Entra portal](https://entra.microsoft.com/#home), select **Enterprise Applications** and then **Single sign-on** to view the SAML-based sign-on configuration:
+1. In the [Microsoft Entra admin center](https://entra.microsoft.com/#home), select **Enterprise Applications** and then **Single sign-on** to view the SAML-based sign-on configuration:
:::image type="content" source="media/migrate-adfs-represent-security-policies/map-emit-attributes-as-claims-rule-2.png" alt-text="Screenshot shows the Single sign-on page for your Enterprise Application.":::
In this table, we've listed some useful Permit and Except options and how they m
| From Devices with Specific Trust Level| Set this from the **Device State** control under Assignments -> Conditions| Use the **Exclude** option under Device State Condition and Include **All devices** |
| With Specific Claims in the Request| This setting can't be migrated| This setting can't be migrated |
-Here's an example of how to configure the Exclude option for trusted locations in the Entra portal:
+Here's an example of how to configure the Exclude option for trusted locations in the Microsoft Entra admin center:
:::image type="content" source="media/migrate-adfs-represent-security-policies/map-built-in-access-control-policies-3.png" alt-text="Screenshot of mapping access control policies.":::
Your existing external users can be set up in these two ways in AD FS:
As you progress with your migration, you can take advantage of the benefits that [Azure AD B2B](../external-identities/what-is-b2b.md) offers by migrating these users to use their own corporate identity when such an identity is available. This streamlines the process of signing in for those users, as they're often signed in with their own corporate sign-in. Your organization's administration is easier as well, by not having to manage accounts for external users.

- **Federated external identities** - If you're currently federating with an external organization, you have a few approaches to take:
- - [Add Azure Active Directory B2B collaboration users in the Entra portal](../external-identities/add-users-administrator.md). You can proactively send B2B collaboration invitations from the Azure AD administrative portal to the partner organization for individual members to continue using the apps and assets they're used to.
+ - [Add Azure Active Directory B2B collaboration users in the Microsoft Entra admin center](../external-identities/add-users-administrator.md). You can proactively send B2B collaboration invitations from the Azure AD administrative portal to the partner organization for individual members to continue using the apps and assets they're used to.
  - [Create a self-service B2B sign-up workflow](../external-identities/self-service-portal.md) that generates a request for individual users at your partner organization using the B2B invitation API.

No matter how your existing external users are configured, they likely have permissions that are associated with their account, either in group membership or specific permissions. Evaluate whether these permissions need to be migrated or cleaned up.
active-directory Migrate Adfs Saml Based Sso https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/migrate-adfs-saml-based-sso.md
Apps that you can move easily today include SAML 2.0 apps that use the standard
The following require more configuration steps to migrate to Azure AD:

* Custom authorization or multi-factor authentication (MFA) rules in AD FS. You configure them using the [Azure AD Conditional Access](../conditional-access/overview.md) feature.
-* Apps with multiple Reply URL endpoints. You configure them in Azure AD using PowerShell or the Entra portal interface.
+* Apps with multiple Reply URL endpoints. You configure them in Azure AD using PowerShell or the Microsoft Entra admin center interface.
* WS-Federation apps such as SharePoint apps that require SAML version 1.1 tokens. You can configure them manually using PowerShell. You can also add a preintegrated generic template for SharePoint and SAML 1.1 applications from the gallery. We support the SAML 2.0 protocol.
* Complex claims issuance transform rules. For information about supported claims mappings, see:
  * [Claims mapping in Azure Active Directory](../develop/saml-claims-customization.md).
Migration requires assessing how the application is configured on-premises, and
The following table describes some of the most common mappings of settings between an AD FS Relying Party Trust and an Azure AD Enterprise Application:

* AD FS - Find the setting in the AD FS Relying Party Trust for the app. Right-click the relying party and select Properties.
-* Azure ADΓÇöThe setting is configured within [Entra portal](https://entra.microsoft.com/#home) in each application's SSO properties.
+* Azure AD - The setting is configured within the [Microsoft Entra admin center](https://entra.microsoft.com/#home) in each application's SSO properties.
| Configuration setting| AD FS| How to configure in Azure AD| SAML Token |
| - | - | - | - |
The following table describes some of the most common mapping of settings betwee
Configure your applications to point to Azure AD instead of AD FS for SSO. Here, we're focusing on SaaS apps that use the SAML protocol. However, this concept extends to custom line-of-business apps as well.

> [!NOTE]
-> The configuration values for Azure AD follows the pattern where your Azure Tenant ID replaces {tenant-id} and the Application ID replaces {application-id}. You find this information in the [Entra portal](https://entra.microsoft.com/#home) under **Azure Active Directory > Properties**:
+> The configuration values for Azure AD follow the pattern where your Azure Tenant ID replaces {tenant-id} and the Application ID replaces {application-id}. You find this information in the [Microsoft Entra admin center](https://entra.microsoft.com/#home) under **Azure Active Directory > Properties**:
* Select Directory ID to see your Tenant ID.
* Select Application ID to see your Application ID.
SaaS apps need to know where to send authentication requests and how to validate
| - | - | - |
| **IdP Sign-on URL** <p>Sign-on URL of the IdP from the app's perspective (where the user is redirected for sign-in).| The AD FS sign-on URL is the AD FS federation service name followed by "/adfs/ls/." <p>For example: `https://fs.contoso.com/adfs/ls/`| Replace {tenant-id} with your tenant ID. <p>For apps that use the SAML-P protocol: [https://login.microsoftonline.com/{tenant-id}/saml2](https://login.microsoftonline.com/{tenant-id}/saml2) <p>For apps that use the WS-Federation protocol: [https://login.microsoftonline.com/{tenant-id}/wsfed](https://login.microsoftonline.com/{tenant-id}/wsfed) |
| **IdP sign-out URL**<p>Sign-out URL of the IdP from the app's perspective (where the user is redirected when they choose to sign out of the app).| The sign-out URL is either the same as the sign-on URL, or the same URL with "wa=wsignout1.0" appended. For example: `https://fs.contoso.com/adfs/ls/?wa=wsignout1.0`| Replace {tenant-id} with your tenant ID.<p>For apps that use the SAML-P protocol:<p>[https://login.microsoftonline.com/{tenant-id}/saml2](https://login.microsoftonline.com/{tenant-id}/saml2) <p>For apps that use the WS-Federation protocol: [https://login.microsoftonline.com/common/wsfederation?wa=wsignout1.0](https://login.microsoftonline.com/common/wsfederation?wa=wsignout1.0) |
-| **Token signing certificate**<p>The IdP uses the private key of the certificate to sign issued tokens. It verifies that the token came from the same IdP that the app is configured to trust.| Find the AD FS token signing certificate in AD FS Management under **Certificates**.| Find it in the Entra portal in the application's **Single sign-on properties** under the header **SAML Signing Certificate**. There, you can download the certificate for upload to the app. <p>ΓÇÄIf the application has more than one certificate, you can find all certificates in the federation metadata XML file. |
+| **Token signing certificate**<p>The IdP uses the private key of the certificate to sign issued tokens. It verifies that the token came from the same IdP that the app is configured to trust.| Find the AD FS token signing certificate in AD FS Management under **Certificates**.| Find it in the Microsoft Entra admin center in the application's **Single sign-on properties** under the header **SAML Signing Certificate**. There, you can download the certificate for upload to the app. <p>If the application has more than one certificate, you can find all certificates in the federation metadata XML file. |
| **Identifier/ "issuer"**<p>Identifier of the IdP from the app's perspective (sometimes called the "issuer ID").<p>In the SAML token, the value appears as the Issuer element.| The identifier for AD FS is usually the federation service identifier in AD FS Management under **Service > Edit Federation Service Properties**. For example: `http://fs.contoso.com/adfs/services/trust`| Replace {tenant-id} with your tenant ID.<p>https:\//sts.windows.net/{tenant-id}/ |
| **IdP federation metadata**<p>Location of the IdP's publicly available federation metadata. (Some apps use federation metadata as an alternative to the administrator configuring URLs, identifier, and token signing certificate individually.)| Find the AD FS federation metadata URL in AD FS Management under **Service > Endpoints > Metadata > Type: Federation Metadata**. For example: `https://fs.contoso.com/FederationMetadata/2007-06/FederationMetadata.xml` | Replace {tenant-id} with your tenant ID.<p>`https://login.microsoftonline.com/{tenant-id}/federationmetadata/2007-06/federationmetadata.xml` |
active-directory Migrate Okta Federation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/migrate-okta-federation.md
Last updated 05/23/2023 -+ # Tutorial: Migrate Okta federation to Azure Active Directory-managed authentication
active-directory Migrate Okta Sync Provisioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/migrate-okta-sync-provisioning.md
Last updated 05/23/2023 -+ # Tutorial: Migrate Okta sync provisioning to Azure AD Connect synchronization
active-directory Prevent Domain Hints With Home Realm Discovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/prevent-domain-hints-with-home-realm-discovery.md
Last updated 03/16/2023
zone_pivot_groups: home-realm-discovery--+ #customer intent: As an admin, I want to disable auto-acceleration to federated IDP during sign in using Home Realm Discovery policy # Disable auto-acceleration sign-in
active-directory Restore Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/restore-application.md
Last updated 06/21/2023 -+ zone_pivot_groups: enterprise-apps-minus-portal #Customer intent: As an administrator of an Azure AD tenant, I want to restore a soft deleted enterprise application.
active-directory Review Admin Consent Requests https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/review-admin-consent-requests.md
In this article, you learn how to review and take action on admin consent reques
To review and take action on admin consent requests, you need:

- An Azure account. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-- A designated reviewer with the appropriate role to [review admin consent requests](grant-admin-consent.md#prerequisites).
+- A Global Administrator or a designated reviewer with the appropriate role to [review admin consent requests](grant-admin-consent.md#prerequisites).
## Review and take action on admin consent requests
To review and take action on admin consent requests, you need:
To review the admin consent requests and take action:
-1. Sign in to the [Azure portal](https://portal.azure.com) as one of the registered reviewers of the admin consent workflow.
-1. Search for and select **Azure Active Directory**.
-1. From the navigation menu, select **Enterprise applications**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator) who is a designated reviewer.
+1. Browse to **Identity** > **Applications** > **Enterprise applications**.
1. Under **Activity**, select **Admin consent requests**.
1. Select the **My Pending** tab to view and act on the pending requests.
1. Select the application that is being requested from the list.
active-directory Powershell Export Apps With Secrets Beyond Required https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/scripts/powershell-export-apps-with-secrets-beyond-required.md
-+ Last updated 07/12/2023
active-directory Tenant Restrictions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/tenant-restrictions.md
The overall solution comprises the following components:
The following diagram illustrates the high-level traffic flow. Tenant restrictions requires TLS inspection only on traffic to Azure AD, not to the Microsoft 365 cloud services. This distinction is important, because the traffic volume for authentication to Azure AD is typically much lower than traffic volume to SaaS applications like Exchange Online and SharePoint Online.

:::image type="content" source="./media/tenant-restrictions/traffic-flow.png" alt-text="Diagram of tenant restrictions traffic flow.":::

## Set up tenant restrictions

There are two steps to get started with tenant restrictions. First, make sure that your clients can connect to the right addresses. Second, configure your proxy infrastructure.
The headers should include the following elements:
- For *Restrict-Access-Context*, use a value of a single directory ID, declaring which tenant is setting the tenant restrictions. For example, to declare Contoso as the tenant that set the tenant restrictions policy, the name/value pair looks like: `Restrict-Access-Context: 456ff232-35l2-5h23-b3b3-3236w0826f3d`. You *must* use your own directory ID here to get logs for these authentications. If you use any directory ID other than your own, those sign-in logs *will* appear in someone else's tenant, with all personal information removed. For more information, see [Admin experience](#admin-experience).
-> [!TIP]
-> You can find your directory ID in the [Azure portal](https://portal.azure.com). Sign in as an administrator, select **Azure Active Directory**, then select **Properties**.
->
-> To validate that a directory ID or domain name refer to the same tenant, use that ID or domain in place of \<tenant\> in this URL: `https://login.microsoftonline.com/<tenant>/v2.0/.well-known/openid-configuration`. If the results with the domain and the ID are the same, they refer to the same tenant.
+To find your directory ID:
+
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Global Reader](../roles/permissions-reference.md#global-reader).
+1. Browse to **Identity** > **Overview** > **Overview**.
+1. Copy the **Tenant ID** value.
+
+To validate that a directory ID and a domain name refer to the same tenant, use either value in place of \<tenant\> in this URL: `https://login.microsoftonline.com/<tenant>/v2.0/.well-known/openid-configuration`. If the results with the domain and the ID are the same, they refer to the same tenant.
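+
+A quick way to compare the two results is with PowerShell. This is a minimal sketch; `contoso.com` and the all-zero GUID are placeholder values for your own domain name and directory ID:
+
+```powershell
+# Fetch the OpenID configuration by domain name and by directory ID,
+# then compare the issuer values; a match means the same tenant.
+$byDomain = Invoke-RestMethod "https://login.microsoftonline.com/contoso.com/v2.0/.well-known/openid-configuration"
+$byId     = Invoke-RestMethod "https://login.microsoftonline.com/00000000-0000-0000-0000-000000000000/v2.0/.well-known/openid-configuration"
+$byDomain.issuer -eq $byId.issuer   # True when both refer to the same tenant
+```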
To prevent users from inserting their own HTTP header with non-approved tenants, the proxy needs to replace the *Restrict-Access-To-Tenants* header if it's already present in the incoming request.
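For example, a proxy that enforces this policy would send Azure AD-bound requests with a header pair like the following. The tenant list and the context GUID here are illustrative values only, not ones to copy:

```http
Restrict-Access-To-Tenants: contoso.com,fabrikam.onmicrosoft.com
Restrict-Access-Context: 456ff232-35l2-5h23-b3b3-3236w0826f3d
```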
An example user is on the Contoso network, but is trying to access the Fabrikam
:::image type="content" source="./media/tenant-restrictions/error-message.png" alt-text="Screenshot of tenant restrictions error message, from April 2021."::: ### Admin experience
-While configuration of tenant restrictions is done on the corporate proxy infrastructure, admins can access the tenant restrictions reports in the Azure portal directly. To view the reports:
-
-1. Sign in to the [Azure portal](https://portal.azure.com).
-
-2. Browse to **Azure Active Directory**. The Azure Active Directory overview page appears.
+While configuration of tenant restrictions is done on the corporate proxy infrastructure, admins can access the tenant restrictions reports in the Microsoft Entra admin center directly. To view the reports:
-3. On the Overview page, select **Tenant restrictions**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Global Reader](../roles/permissions-reference.md#global-reader).
+1. Browse to **Identity** > **Overview** > **Tenant restrictions**.
The admin for the tenant specified as the Restricted-Access-Context tenant can use this report to see sign-ins blocked because of the tenant restrictions policy, including the identity used and the target directory ID. Sign-ins are included if the tenant setting the restriction is either the user tenant or resource tenant for the sign-in. The report may contain limited information, such as target directory ID, when a user who is in a tenant other than the Restricted-Access-Context tenant signs in. In this case, user identifiable information, such as name and user principal name, is masked to protect user data in other tenants (For example, `"{PII Removed}@domain.com" or 00000000-0000-0000-0000-000000000000` in place of usernames and object IDs as appropriate).
-Like other reports in the Azure portal, you can use filters to specify the scope of your report. You can filter on a specific time interval, user, application, client, or status. If you select the **Columns** button, you can choose to display data with any combination of the following fields:
+Like other reports in the Microsoft Entra admin center, you can use filters to specify the scope of your report. You can filter on a specific time interval, user, application, client, or status. If you select the **Columns** button, you can choose to display data with any combination of the following fields:
- **User** - this field can have personal data removed, where it is set to `00000000-0000-0000-0000-000000000000`.
- **Application**
active-directory Tutorial Manage Certificates For Federated Single Sign On https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/tutorial-manage-certificates-for-federated-single-sign-on.md
By default, Azure configures a certificate to expire after three years when it's
1. Save the new certificate.
1. Download the new certificate in the correct format.
1. Upload the new certificate to the application.
-1. Make the new certificate active in the Azure portal.
+1. Make the new certificate active in the Microsoft Entra admin center.
The following two sections help you perform these steps.
The following two sections help you perform these steps.
First, create and save a new certificate with a different expiration date:
-1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Browse to **Azure Active Directory** > **Enterprise applications**.
-1. From the list of applications, select your desired application.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator).
+1. Browse to **Identity** > **Applications** > **Enterprise applications** > **All applications**.
+1. Enter the name of the existing application in the search box, and then select the application from the search results.
1. Under the **Manage** section, select **Single sign-on**.
1. If the **Select a single sign-on method** page appears, select **SAML**.
1. In the **Set up Single Sign-On with SAML** page, find the **SAML Signing Certificate** heading and select the **Edit** icon (a pencil). The **SAML Signing Certificate** page appears, which displays the status (**Active** or **Inactive**), expiration date, and thumbprint (a hash string) of each certificate.
active-directory View Applications Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/view-applications-portal.md
It's recommended that you use a nonproduction environment to test the steps in t
To view applications that have been registered in your Azure AD tenant, you need:

- An Azure AD user account. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-- One of the following roles: Global Administrator, or owner of the service principal.
+- One of the following roles: Global Administrator, Cloud Application Administrator, or owner of the service principal.
- Completion of the steps in [Quickstart: Add an enterprise application](add-application-portal.md).

## View a list of applications
To view applications that have been registered in your Azure AD tenant, you need
To view the enterprise applications registered in your tenant:
-1. Sign in to the [Azure portal](https://portal.azure.com) and sign in using one of the roles listed in the prerequisites.
-1. Browse to **Azure Active Directory** and select **Enterprise applications**. The **All applications** pane opens and displays a list of the applications in your Azure AD tenant.
-
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator).
+1. Browse to **Identity** > **Applications** > **Enterprise applications** > **All applications**.
:::image type="content" source="media/view-applications-portal/view-enterprise-applications.png" alt-text="View the registered applications in your Azure AD tenant.":::
1. To view more applications, select **Load more** at the bottom of the list. If there are many applications in your tenant, it might be easier to search for a particular application instead of scrolling through the list.

## Search for an application
active-directory What Is Access Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/what-is-access-management.md
Previously updated : 07/20/2022 Last updated : 07/18/2023
active-directory Whats New Docs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/whats-new-docs.md
Title: "What's new in Azure Active Directory application management" description: "New and updated documentation for the Azure Active Directory application management." Previously updated : 08/01/2023 Last updated : 09/04/2023
Welcome to what's new in Azure Active Directory (Azure AD) application management documentation. This article lists new docs that have been added and those that have had significant updates in the last three months. To learn what's new with the application management service, see [What's new in Azure AD](../fundamentals/whats-new.md).
+## August 2023
+### New articles
+
+- [Manage app consent policies for group owners](manage-group-owner-consent-policies.md) - New how-to guide on how to manage group owner consent policies.
+
+### Updated articles
+
+- [Properties of an enterprise application](application-properties.md) - Updates on the user requirement property
+- [Configure group and team owner consent to applications](configure-user-consent-groups.md) - Updates to examples for configuring group and team owner consent
+- [Configure how users consent to applications](configure-user-consent.md) - Updates to examples for configuring user consent
+- [Manage app consent policies](manage-app-consent-policies.md) - Updates to examples for managing app consent policies
+- [Review the application activity report](migrate-adfs-application-activity.md) - Updates to stale local links
## July 2023

### New articles
The following PowerShell sample was added:
- [Configure Datawiza for Azure AD Multi-Factor Authentication and single sign-on to Oracle EBS](datawiza-sso-mfa-oracle-ebs.md)
- [Tutorial: Configure F5 BIG-IP Access Policy Manager for Kerberos authentication](f5-big-ip-kerberos-advanced.md)
- [Tutorial: Configure F5 BIG-IP Easy Button for Kerberos single sign-on](f5-big-ip-kerberos-easy-button.md)
-## May 2023
-
-### New articles
--- [Phase 2: Classify apps and plan pilot](migrate-adfs-classify-apps-plan-pilot.md)-- [Phase 1: Discover and scope apps](migrate-adfs-discover-scope-apps.md)-- [Phase 4: Plan management and insights](migrate-adfs-plan-management-insights.md)-- [Phase 3: Plan migration and testing](migrate-adfs-plan-migration-test.md)-- [Represent AD FS security policies in Azure Active Directory: Mappings and examples](migrate-adfs-represent-security-policies.md)-- [SAML-based single sign-on: Configuration and Limitations](migrate-adfs-saml-based-sso.md)-
-### Updated articles
--- [Tutorial: Migrate Okta sync provisioning to Azure AD Connect synchronization](migrate-okta-sync-provisioning.md)-- [Application management videos](app-management-videos.md)-- [Understand the stages of migrating application authentication from AD FS to Azure AD](./migrate-adfs-apps-stages.md)-- [Plan application migration to Azure Active Directory](./migrate-adfs-apps-phases-overview.md)-- [Tutorial: Migrate Okta sync provisioning to Azure AD Connect-based synchronization](./migrate-okta-sync-provisioning.md)-- [Tutorial: Configure F5 BIG-IP Easy Button for SSO to Oracle JDE](f5-big-ip-oracle-jde-easy-button.md)-- [Tutorial: Configure F5 BIG-IP Easy Button for SSO to Oracle PeopleSoft](f5-big-ip-oracle-peoplesoft-easy-button.md)-- [Tutorial: Configure Cloudflare with Azure Active Directory for secure hybrid access](./cloudflare-integration.md)-- [Tutorial: Configure F5 BIG-IP Easy Button for SSO to SAP ERP](f5-big-ip-sap-erp-easy-button.md)-- [Tutorial: Migrate Okta federation to Azure Active Directory-managed authentication](migrate-okta-federation.md)
active-directory How To Assign App Role Managed Identity Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/how-to-assign-app-role-managed-identity-powershell.md
na Previously updated : 05/12/2022 Last updated : 09/07/2023 -+ # Assign a managed identity access to an application role using PowerShell
Managed identities for Azure resources provide Azure services with an identity i
> [!NOTE] > The tokens that your application receives are cached by the underlying infrastructure, which means that any changes to the managed identity's roles can take significant time to take effect. For more information, see [Limitation of using managed identities for authorization](managed-identity-best-practice-recommendations.md#limitation-of-using-managed-identities-for-authorization).
-In this article, you learn how to assign a managed identity to an application role exposed by another application using Azure AD PowerShell.
+In this article, you learn how to assign a managed identity to an application role exposed by another application using the Microsoft Graph PowerShell SDK.
## Prerequisites
In this article, you learn how to assign a managed identity to an application ro
- If you don't already have an Azure account, [sign up for a free account](https://azure.microsoft.com/free/) before continuing.
- To run the example scripts, you have two options:
  - Use the [Azure Cloud Shell](../../cloud-shell/overview.md), which you can open using the **Try It** button on the top-right corner of code blocks.
- - Run scripts locally by installing the latest version of [the Az PowerShell module](/powershell/azure/install-azure-powershell). You can also use the [Microsoft Graph PowerShell SDK](/powershell/microsoftgraph/get-started).
+ - Run scripts locally by installing the latest version of the [Microsoft Graph PowerShell SDK](/powershell/microsoftgraph/get-started).
## Assign a managed identity access to another application's app role
In this article, you learn how to assign a managed identity to an application ro
1. Find the object ID of the service application's service principal. You can find this using the Azure portal. Go to Azure Active Directory and open the **Enterprise applications** page, then find the application and look for the **Object ID**. You can also find the service principal's object ID by its display name using the following PowerShell script:
- # [Azure PowerShell](#tab/azurepowershell)
-
- ```powershell
- $serverServicePrincipalObjectId = (Get-AzureADServicePrincipal -Filter "DisplayName eq '$applicationName'").ObjectId
- ```
-
- # [Microsoft Graph](#tab/microsoftgraph)
```powershell
$serverServicePrincipalObjectId = (Get-MgServicePrincipal -Filter "DisplayName eq '$applicationName'").Id
```
-
> [!NOTE]
> Display names for applications are not unique, so you should verify that you obtain the correct application's service principal.
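One hedged way to guard against duplicate display names is to count the matches before using the result. This sketch reuses the `$applicationName` variable from the preceding snippet:

```powershell
# Fail fast if the display name matches zero or multiple service principals.
$found = @(Get-MgServicePrincipal -Filter "DisplayName eq '$applicationName'")
if ($found.Count -ne 1) {
    throw "Expected exactly one service principal named '$applicationName', found $($found.Count)."
}
$serverServicePrincipalObjectId = $found[0].Id
```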
In this article, you learn how to assign a managed identity to an application ro
Execute the following PowerShell command to add the role assignment:
- # [Azure PowerShell](#tab/azurepowershell)
-
- ```powershell
- New-AzureADServiceAppRoleAssignment `
- -ObjectId $managedIdentityObjectId `
- -Id $appRoleId `
- -PrincipalId $managedIdentityObjectId `
- -ResourceId $serverServicePrincipalObjectId
- ```
-
- # [Microsoft Graph](#tab/microsoftgraph)
```powershell
New-MgServicePrincipalAppRoleAssignment `
    -ServicePrincipalId $managedIdentityObjectId `
    -PrincipalId $managedIdentityObjectId `
    -ResourceId $serverServicePrincipalObjectId `
    -AppRoleId $appRoleId
```
-
- ## Complete script This example script shows how to assign an Azure web app's managed identity to an app role.
-# [Azure PowerShell](#tab/azurepowershell)
-
-```powershell
-# Install the module. This step requires you to be an administrator on your machine.
-# Install-Module AzureAD
-
-# Your tenant ID (in the Azure portal, under Azure Active Directory > Overview).
-$tenantID = '<tenant-id>'
-
-# The name of your web app, which has a managed identity that should be assigned to the server app's app role.
-$webAppName = '<web-app-name>'
-$resourceGroupName = '<resource-group-name-containing-web-app>'
-
-# The name of the server app that exposes the app role.
-$serverApplicationName = '<server-application-name>' # For example, MyApi
-
-# The name of the app role that the managed identity should be assigned to.
-$appRoleName = '<app-role-name>' # For example, MyApi.Read.All
-
-# Look up the web app's managed identity's object ID.
-$managedIdentityObjectId = (Get-AzWebApp -ResourceGroupName $resourceGroupName -Name $webAppName).identity.principalid
-
-Connect-AzureAD -TenantId $tenantID
-
-# Look up the details about the server app's service principal and app role.
-$serverServicePrincipal = (Get-AzureADServicePrincipal -Filter "DisplayName eq '$serverApplicationName'")
-$serverServicePrincipalObjectId = $serverServicePrincipal.ObjectId
-$appRoleId = ($serverServicePrincipal.AppRoles | Where-Object {$_.Value -eq $appRoleName }).Id
-
-# Assign the managed identity access to the app role.
-New-AzureADServiceAppRoleAssignment `
- -ObjectId $managedIdentityObjectId `
- -Id $appRoleId `
- -PrincipalId $managedIdentityObjectId `
- -ResourceId $serverServicePrincipalObjectId
-```
-
-# [Microsoft Graph](#tab/microsoftgraph)
```powershell
# Install the module.
# Install-Module Microsoft.Graph -Scope CurrentUser
Connect-MgGraph -TenantId $tenantId -Scopes 'Application.Read.All','Application.
# Look up the details about the server app's service principal and app role.
$serverServicePrincipal = (Get-MgServicePrincipal -Filter "DisplayName eq '$serverApplicationName'")
-$serverServicePrincipalObjectId = $serverServicePrincipal.ObjectId
+$serverServicePrincipalObjectId = $serverServicePrincipal.Id
$appRoleId = ($serverServicePrincipal.AppRoles | Where-Object {$_.Value -eq $appRoleName }).Id

# Assign the managed identity access to the app role.
New-MgServicePrincipalAppRoleAssignment `
- -ServicePrincipalId $managedIdentityObjectId `
+ -ServicePrincipalId $serverServicePrincipalObjectId `
    -PrincipalId $managedIdentityObjectId `
    -ResourceId $serverServicePrincipalObjectId `
    -AppRoleId $appRoleId
```
active-directory How To View Managed Identity Service Principal Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/how-to-view-managed-identity-service-principal-portal.md
na Previously updated : 06/24/2022 Last updated : 08/31/2023
active-directory Qs Configure Portal Windows Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vm.md
na Previously updated : 01/11/2022 Last updated : 08/31/2023
To assign a user-assigned identity to a VM, your account needs the [Virtual Mach
To remove a user-assigned identity from a VM, your account needs the [Virtual Machine Contributor](../../role-based-access-control/built-in-roles.md#virtual-machine-contributor) role assignment. No other Azure AD directory role assignments are required.

1. Sign in to the [Azure portal](https://portal.azure.com) using an account associated with the Azure subscription that contains the VM.
-2. Navigate to the desired VM and click **Identity**, **User assigned**, the name of the user-assigned managed identity you want to delete and then click **Remove** (click **Yes** in the confirmation pane).
+2. Navigate to the desired VM and select **Identity**, **User assigned**, and the name of the user-assigned managed identity you want to delete, and then select **Remove** (select **Yes** in the confirmation pane).
![Remove user-assigned managed identity from a VM](./media/msi-qs-configure-portal-windows-vm/remove-user-assigned-identity-vm-screenshot.png)
active-directory Qs Configure Powershell Windows Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/qs-configure-powershell-windows-vm.md
Last updated 05/10/2023 -+ # Configure managed identities for Azure resources on an Azure VM using PowerShell
active-directory Services Id Authentication Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/services-id-authentication-support.md
The following services support Azure AD authentication. New services are added t
| Azure App Services | [Configure your App Service or Azure Functions app to use Azure AD login](../../app-service/configure-authentication-provider-aad.md) |
| Azure Batch | [Authenticate Batch service solutions with Active Directory](../../batch/batch-aad-auth.md) |
| Azure Container Registry | [Authenticate with an Azure container registry](../../container-registry/container-registry-authentication.md) |
-| Azure Cognitive Services | [Authenticate requests to Azure Cognitive Services](../../ai-services/authentication.md?tabs=powershell#authenticate-with-azure-active-directory) |
+| Azure AI services | [Authenticate requests to Azure AI services](../../ai-services/authentication.md?tabs=powershell#authenticate-with-azure-active-directory) |
| Azure Communication Services | [Authenticate to Azure Communication Services](../../communication-services/concepts/authentication.md) |
| Azure Cosmos DB | [Configure role-based access control with Azure Active Directory for your Azure Cosmos DB account](../../cosmos-db/how-to-setup-rbac.md) |
| Azure Databricks | [Authenticate using Azure Active Directory tokens](/azure/databricks/dev-tools/api/latest/aad/) |
active-directory Tutorial Windows Vm Access Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/tutorial-windows-vm-access-sql.md
SQL DB requires unique Azure AD display names. With this, the Azure AD accounts
> [!NOTE]
> `VMName` in the following command is the name of the VM on which you enabled system-assigned identity in the prerequisites section.
+ >
+ > If you encounter the error "Principal `VMName` has a duplicate display name", append `WITH OBJECT_ID='xxx'` (the object ID of the VM's managed identity) to the CREATE USER statement.
```sql
ALTER ROLE db_datareader ADD MEMBER [VMName]
```
active-directory Cross Tenant Synchronization Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/multi-tenant-organizations/cross-tenant-synchronization-overview.md
Previously updated : 06/16/2023 Last updated : 08/28/2023
The following diagram shows how you can use cross-tenant synchronization to enab
:::image type="content" source="./media/cross-tenant-synchronization-overview/cross-tenant-synchronization-diagram.png" alt-text="Diagram that shows synchronization of users for multiple tenants." lightbox="./media/cross-tenant-synchronization-overview/cross-tenant-synchronization-diagram.png":::

## Who should use?

- Organizations that own multiple Azure AD tenants and want to streamline intra-organization cross-tenant application access.
- Cross-tenant synchronization is **not** currently suitable for use across organizational boundaries.
Does cross-tenant synchronization support deprovisioning users?
- Remove the user from a group that is assigned to the cross-tenant synchronization configuration
- An attribute on the user changes such that they do not meet the scoping filter conditions defined on the cross-tenant synchronization configuration anymore
-- Currently only regular users, Helpdesk Admins and User Account Admins can be deleted. Users with other Azure AD roles such as directory reader currently cannot be deleted by cross-tenant synchronization. This is subject to change in the future.
- If the user is blocked from sign-in in the source tenant (accountEnabled = false), they will be blocked from sign-in in the target. This is not a deletion, but an update to the accountEnabled property.

Does cross-tenant synchronization support restoring users?
active-directory Multi Tenant Organization Configure Graph https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/multi-tenant-organizations/multi-tenant-organization-configure-graph.md
+
+ Title: Configure a multi-tenant organization using the Microsoft Graph API (Preview)
+description: Learn how to configure a multi-tenant organization in Azure Active Directory using the Microsoft Graph API.
+++++++ Last updated : 08/22/2023+++
+#Customer intent: As a dev, devops, or it admin, I want to
++
+# Configure a multi-tenant organization using the Microsoft Graph API (Preview)
+
+> [!IMPORTANT]
+> Multi-tenant organization is currently in PREVIEW.
+> See the [Product Terms](https://aka.ms/EntraPreviewsTermsOfUse) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+This article describes the key steps to configure a multi-tenant organization using the Microsoft Graph API. This article uses an example owner tenant named *Cairo* and two member tenants named *Berlin* and *Athens*.
+
+If you instead want to use the Microsoft 365 admin center to configure a multi-tenant organization, see [Set up a multi-tenant org in Microsoft 365 (Preview)](/microsoft-365/enterprise/set-up-multi-tenant-org) and [Join or leave a multi-tenant organization in Microsoft 365 (Preview)](/microsoft-365/enterprise/join-leave-multi-tenant-org). To learn how to configure Microsoft Teams for your multi-tenant organization, see [The new Microsoft Teams desktop client](/microsoftteams/new-teams-desktop-admin).
+
+## Prerequisites
+
+![Icon for the owner tenant.](./media/common/icon-tenant-owner.png)<br/>**Owner tenant**
+
+- Azure AD Premium P1 or P2 license. For more information, see [License requirements](./multi-tenant-organization-overview.md#license-requirements).
+- [Security Administrator](../roles/permissions-reference.md#security-administrator) role to configure cross-tenant access settings and templates for the multi-tenant organization.
+- [Global Administrator](../roles/permissions-reference.md#global-administrator) role to consent to required permissions.
+
+![Icon for the member tenant.](./media/common/icon-tenant-member.png)<br/>**Member tenant**
+
+- Azure AD Premium P1 or P2 license. For more information, see [License requirements](./multi-tenant-organization-overview.md#license-requirements).
+- [Security Administrator](../roles/permissions-reference.md#security-administrator) role to configure cross-tenant access settings and templates for the multi-tenant organization.
+- [Global Administrator](../roles/permissions-reference.md#global-administrator) role to consent to required permissions.
+
+## Step 1: Sign in to the owner tenant
+
+![Icon for the owner tenant.](./media/common/icon-tenant-owner.png)<br/>**Owner tenant**
+
+These steps describe how to use Microsoft Graph Explorer (recommended), but you can also use Postman, another REST API client, or the Microsoft Graph PowerShell SDK (see the sketch after the permission list in this step).
+
+1. Start [Microsoft Graph Explorer tool](https://aka.ms/ge).
+
+1. Sign in to the owner tenant.
+
+1. Select your profile and then select **Consent to permissions**.
+
+1. Consent to the following required permissions.
+
+ - `MultiTenantOrganization.ReadWrite.All`
+ - `Policy.Read.All`
+ - `Policy.ReadWrite.CrossTenantAccess`
+ - `Application.ReadWrite.All`
+ - `Directory.ReadWrite.All`
+
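+If you prefer to script the setup, a minimal sketch with the Microsoft Graph PowerShell SDK signs in and requests the same scopes. `<owner-tenant-id>` is a placeholder for your own tenant ID:
+
+```powershell
+# Sign in to the owner tenant and request the permissions listed above.
+# Requires the Microsoft.Graph PowerShell module.
+Connect-MgGraph -TenantId "<owner-tenant-id>" -Scopes @(
+    "MultiTenantOrganization.ReadWrite.All",
+    "Policy.Read.All",
+    "Policy.ReadWrite.CrossTenantAccess",
+    "Application.ReadWrite.All",
+    "Directory.ReadWrite.All"
+)
+```
+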
+## Step 2: Create a multi-tenant organization
+
+![Icon for the owner tenant.](./media/common/icon-tenant-owner.png)<br/>**Owner tenant**
+
+1. In the owner tenant, use the [Create multiTenantOrganization](/graph/api/tenantrelationship-put-multitenantorganization) API to create your multi-tenant organization. This operation can take a few minutes.
+
+ **Request**
+
+ ```http
+ PUT https://graph.microsoft.com/beta/tenantRelationships/multiTenantOrganization
+ {
+ "displayName": "Cairo"
+ }
+ ```
+
+1. Use the [Get multiTenantOrganization](/graph/api/multitenantorganization-get) API to check that the operation has completed before proceeding.
+
+ **Request**
+
+ ```http
+ GET https://graph.microsoft.com/beta/tenantRelationships/multiTenantOrganization
+ ```
+
+ **Response**
+
+ ```http
+ {
+ "@odata.context": "https://graph.microsoft.com/beta/$metadata#tenantRelationships/multiTenantOrganization/$entity",
+ "id": "{mtoId}",
+ "createdDateTime": "2023-04-05T08:27:10Z",
+ "displayName": "Cairo",
+ "description": null
+ }
+ ```
+
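+If you script these checks with the Microsoft Graph PowerShell SDK instead of Graph Explorer, a minimal polling sketch like the following waits until the GET succeeds. It assumes `Connect-MgGraph` has already run with the scopes from Step 1, and the 30-second retry interval is an arbitrary choice:
+
+```powershell
+# Retry the GET until the multi-tenant organization is readable.
+$uri = "https://graph.microsoft.com/beta/tenantRelationships/multiTenantOrganization"
+$mto = $null
+while (-not $mto) {
+    try { $mto = Invoke-MgGraphRequest -Method GET -Uri $uri }
+    catch { Start-Sleep -Seconds 30 }  # Not available yet; wait and retry.
+}
+$mto.displayName  # "Cairo" once creation has completed
+```
+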
+## Step 3: Add tenants
+
+![Icon for the owner tenant.](./media/common/icon-tenant-owner.png)<br/>**Owner tenant**
+
+1. In the owner tenant, use the [Add multiTenantOrganizationMember](/graph/api/multitenantorganization-post-tenants) API to add tenants to your multi-tenant organization.
+
+ **Request**
+
+ ```http
+ POST https://graph.microsoft.com/beta/tenantRelationships/multiTenantOrganization/tenants
+ {
+ "tenantId": "{memberTenantIdB}",
+ "displayName": "Berlin"
+ }
+ ```
+
+ **Request**
+
+ ```http
+ POST https://graph.microsoft.com/beta/tenantRelationships/multiTenantOrganization/tenants
+ {
+ "tenantId": "{memberTenantIdA}",
+ "displayName": "Athens"
+ }
+ ```
+
+1. Use the [List multiTenantOrganizationMembers](/graph/api/multitenantorganization-list-tenants) API to verify that the operation has completed before proceeding.
+
+ **Request**
+
+ ```http
+ GET https://graph.microsoft.com/beta/tenantRelationships/multiTenantOrganization/tenants
+ ```
+
+ **Response**
+
+ ```http
+ {
+ "@odata.context": "https://graph.microsoft.com/beta/$metadata#tenantRelationships/multiTenantOrganization/tenants"
+ "value": [
+ {
+ "tenantId": "{ownerTenantId}",
+ "displayName": "Cairo",
+ "addedDateTime": "2023-04-05T08:27:10Z",
+ "joinedDateTime": null,
+ "addedByTenantId": "{ownerTenantId}",
+ "role": "owner",
+ "state": "active",
+ "transitionDetails": null
+ },
+ {
+ "tenantId": "{memberTenantIdB}",
+ "displayName": "Berlin",
+ "addedDateTime": "2023-04-05T08:30:44Z",
+ "joinedDateTime": null,
+ "addedByTenantId": "{ownerTenantId}",
+ "role": "member",
+ "state": "pending",
+ "transitionDetails": {
+ "desiredState": "active",
+ "desiredRole": "member",
+ "status": "notStarted",
+ "details": null
+ }
+ },
+ {
+ "tenantId": "{memberTenantIdA}",
+ "displayName": "Athens",
+ "addedDateTime": "2023-04-05T08:31:03Z",
+ "joinedDateTime": null,
+ "addedByTenantId": "{ownerTenantId}",
+ "role": "member",
+ "state": "pending",
+ "transitionDetails": {
+ "desiredState": "active",
+ "desiredRole": "member",
+ "status": "notStarted",
+ "details": null
+ }
+ }
+ ]
+ }
+ ```
+
+## Step 4: (Optional) Change the role of a tenant
+
+![Icon for the owner tenant.](./media/common/icon-tenant-owner.png)<br/>**Owner tenant**
+
+By default, tenants added to the multi-tenant organization are member tenants. Optionally, you can change them to owner tenants, which allow them to add other tenants to the multi-tenant organization. You can also change an owner tenant to a member tenant.
+
+1. In the owner tenant, use the [Update multiTenantOrganizationMember](/graph/api/multitenantorganizationmember-update) API to change a member tenant to an owner tenant.
+
+ **Request**
+
+ ```http
+ PATCH https://graph.microsoft.com/beta/tenantRelationships/multiTenantOrganization/tenants/{memberTenantIdB}
+ {
+ "role": "owner"
+ }
+ ```
+
+1. Use the [Get multiTenantOrganizationMember](/graph/api/multitenantorganizationmember-get) API to verify the change.
+
+ **Request**
+
+ ```http
+ GET https://graph.microsoft.com/beta/tenantRelationships/multiTenantOrganization/tenants/{memberTenantIdB}
+ ```
+
+ **Response**
+
+ ```http
+ {
+ "@odata.context": "https://graph.microsoft.com/beta/$metadata#tenantRelationships/multiTenantOrganization/tenants/$entity",
+ "tenantId": "{memberTenantIdB}",
+ "displayName": "Berlin",
+ "addedDateTime": "2023-04-05T08:30:44Z",
+ "joinedDateTime": null,
+ "addedByTenantId": "{ownerTenantId}",
+ "role": "member",
+ "state": "pending",
+ "transitionDetails": {
+ "desiredState": "active",
+ "desiredRole": "owner",
+ "status": "notStarted",
+ "details": null
+ }
+ }
+ ```
+
+1. Use the [Update multiTenantOrganizationMember](/graph/api/multitenantorganizationmember-update) API to change an owner tenant to a member tenant.
+
+ **Request**
+
+ ```http
+ PATCH https://graph.microsoft.com/beta/tenantRelationships/multiTenantOrganization/tenants/{memberTenantIdB}
+ {
+ "role": "member"
+ }
+ ```
+
+## Step 5: (Optional) Remove a member tenant
+
+![Icon for the owner tenant.](./media/common/icon-tenant-owner.png)<br/>**Owner tenant**
+
+You can remove any member tenant, including your own. You can't remove owner tenants. Also, you can't remove the original creator tenant, even if it has been changed from owner to member.
+
+1. In the owner tenant, use the [Remove multiTenantOrganizationMember](/graph/api/multitenantorganization-delete-tenants) API to remove any member tenant. This operation takes a few minutes.
+
+ **Request**
+
+ ```http
+ DELETE https://graph.microsoft.com/beta/tenantRelationships/multiTenantOrganization/tenants/{memberTenantIdD}
+ ```
+
+1. Use the [Get multiTenantOrganizationMember](/graph/api/multitenantorganizationmember-get) API to verify the change.
+
+ **Request**
+
+ ```http
+    GET https://graph.microsoft.com/beta/tenantRelationships/multiTenantOrganization/tenants/{memberTenantIdD}
+ ```
+
+ If you check immediately after calling the remove API, it will show a response similar to the following.
+
+ **Response**
+
+ ```http
+ {
+ "@odata.context": "https://graph.microsoft.com/beta/$metadata#tenantRelationships/multiTenantOrganization/tenants/$entity",
+ "tenantId": "{memberTenantIdD}",
+ "displayName": "Denver",
+ "addedDateTime": "2023-04-05T08:40:52Z",
+ "joinedDateTime": null,
+ "addedByTenantId": "{ownerTenantId}",
+ "role": "member",
+ "state": "pending",
+ "transitionDetails": {
+ "desiredState": "removed",
+ "desiredRole": "member",
+ "status": "notStarted",
+ "details": null
+ }
+ }
+ ```
+
+ After the remove operation completes, the response is similar to the following. This is an expected error message. It indicates that the tenant has been removed from the multi-tenant organization.
+
+ **Response**
+
+ ```http
+ {
+ "error": {
+ "code": "Directory_ObjectNotFound",
+ "message": "Unable to read the company information from the directory.",
+ "innerError": {
+ "date": "2023-04-05T08:44:07",
+ "request-id": "75216961-c21d-49ed-8c1f-2cfe51f920f1",
+ "client-request-id": "30129b19-51e8-41ed-8ba0-1501bac03802"
+ }
+ }
+ }
+ ```
+## Step 6: Wait
+
+![Icon for the member tenant.](./media/common/icon-tenant-member.png)<br/>**Member tenant**
+
+- To allow for asynchronous processing, wait a **minimum of 2 hours** between creating a multi-tenant organization and joining it.
+
+## Step 7: Sign in to a member tenant
+
+![Icon for the member tenant.](./media/common/icon-tenant-member.png)<br/>**Member tenant**
+
+The Cairo tenant created a multi-tenant organization and added the Berlin and Athens tenants. In these steps you sign in to the Berlin tenant and join the multi-tenant organization created by Cairo.
+
+1. Start [Microsoft Graph Explorer tool](https://aka.ms/ge).
+
+1. Sign in to the member tenant.
+
+1. Select your profile and then select **Consent to permissions**.
+
+1. Consent to the following required permissions.
+
+ - `MultiTenantOrganization.ReadWrite.All`
+ - `Policy.Read.All`
+ - `Policy.ReadWrite.CrossTenantAccess`
+ - `Application.ReadWrite.All`
+ - `Directory.ReadWrite.All`
+
+## Step 8: Join the multi-tenant organization
+
+![Icon for the member tenant.](./media/common/icon-tenant-member.png)<br/>**Member tenant**
+
+1. In the member tenant, use the [Update multiTenantOrganizationJoinRequestRecord](/graph/api/multitenantorganizationjoinrequestrecord-update) API to join the multi-tenant organization.
+
+ **Request**
+
+ ```http
+    PATCH https://graph.microsoft.com/beta/tenantRelationships/multiTenantOrganization/joinRequest
+ {
+ "addedByTenantId": "{ownerTenantId}"
+ }
+ ```
+
+1. Use the [Get multiTenantOrganizationJoinRequestRecord](/graph/api/multitenantorganizationjoinrequestrecord-get) API to verify the join.
+
+ **Request**
+
+ ```http
+ GET https://graph.microsoft.com/beta/tenantRelationships/multiTenantOrganization/joinRequest
+ ```
+
+ This operation takes a few minutes. If you check immediately after calling the API to join, the response will be similar to the following.
+
+ **Response**
+
+ ```http
+ {
+ "@odata.context": "https://graph.microsoft.com/beta/$metadata#tenantRelationships/multiTenantOrganization/joinRequest/$entity",
+ "id": "aa87e8a4-9c88-4e67-971d-79c9e43319a3",
+ "addedByTenantId": "{ownerTenantId}",
+ "memberState": "active",
+ "role": "member",
+ "transitionDetails": {
+ "desiredMemberState": "active",
+ "status": "notStarted",
+ "details": ""
+ }
+ }
+ ```
+
+ After the join operation completes, the response is similar to the following.
+
+ **Response**
+
+ ```http
+ {
+ "@odata.context": "https://graph.microsoft.com/beta/$metadata#tenantRelationships/multiTenantOrganization/joinRequest/$entity",
+ "id": "aa87e8a4-9c88-4e67-971d-79c9e43319a3",
+ "addedByTenantId": "{ownerTenantId}",
+ "memberState": "active",
+ "role": "member",
+ "transitionDetails": null
+ }
+ ```
+
+1. Use the [List multiTenantOrganizationMembers](/graph/api/multitenantorganization-list-tenants) API to check the multi-tenant organization itself. It should reflect the join operation.
+
+ **Request**
+
+ ```http
+ GET https://graph.microsoft.com/beta/tenantRelationships/multiTenantOrganization/tenants
+ ```
+
+ **Response**
+
+ ```http
+ {
+ "@odata.context": "https://graph.microsoft.com/beta/$metadata#tenantRelationships/multiTenantOrganization/tenants",
+ "value": [
+ {
+ "tenantId": "{memberTenantIdA}",
+ "displayName": "Athens",
+ "addedDateTime": "2023-04-05T10:14:35Z",
+ "joinedDateTime": null,
+ "addedByTenantId": "{ownerTenantId}",
+ "role": "member",
+ "state": "active",
+ "transitionDetails": null
+ },
+ {
+ "tenantId": "{memberTenantIdB}",
+ "displayName": "Berlin",
+ "addedDateTime": "2023-04-05T08:30:44Z",
+ "joinedDateTime": null,
+ "addedByTenantId": "{ownerTenantId}",
+ "role": "member",
+ "state": "active",
+ "transitionDetails": null
+ },
+ {
+ "tenantId": "{ownerTenantId}",
+ "displayName": "Cairo",
+ "addedDateTime": "2023-04-05T08:27:10Z",
+ "joinedDateTime": null,
+ "addedByTenantId": "{ownerTenantId}",
+ "role": "owner",
+ "state": "active",
+ "transitionDetails": null
+ }
+ ]
+ }
+ ```
+
+1. To allow for asynchronous processing, allow **up to 4 hours** for the join operation to complete.
+
+## Step 9: (Optional) Leave the multi-tenant organization
+
+![Icon for the member tenant.](./media/common/icon-tenant-member.png)<br/>**Member tenant**
+
+You can leave a multi-tenant organization that you have joined. The process for removing your own tenant from the multi-tenant organization is the same as the process for removing another tenant from the multi-tenant organization.
+
+If your tenant is the only multi-tenant organization owner, you must designate a new tenant to be the multi-tenant organization owner. For steps, see [Step 4: (Optional) Change the role of a tenant](#step-4-optional-change-the-role-of-a-tenant).
+
+- In the tenant, use the [Remove multiTenantOrganizationMember](/graph/api/multitenantorganization-delete-tenants) API to remove the tenant. This operation takes a few minutes.
+
+ **Request**
+
+ ```http
+ DELETE https://graph.microsoft.com/beta/tenantRelationships/multiTenantOrganization/tenants/{memberTenantIdD}
+ ```
+
+## Step 10: (Optional) Delete the multi-tenant organization
+
+![Icon for the owner tenant.](./media/common/icon-tenant-owner.png)<br/>**Owner tenant**
+
+You delete a multi-tenant organization by removing all tenants. The process for removing the final owner tenant is the same as the process for removing all other member tenants.
+
+- In the final owner tenant, use the [Remove multiTenantOrganizationMember](/graph/api/multitenantorganization-delete-tenants) API to remove the tenant. This operation takes a few minutes.
+
+ **Request**
+
+ ```http
+ DELETE https://graph.microsoft.com/beta/tenantRelationships/multiTenantOrganization/tenants/{memberTenantIdD}
+ ```
+
+## Next steps
+
+- [Set up a multi-tenant org in Microsoft 365 (Preview)](/microsoft-365/enterprise/set-up-multi-tenant-org)
+- [Synchronize users in multi-tenant organizations in Microsoft 365 (Preview)](/microsoft-365/enterprise/sync-users-multi-tenant-orgs)
+- [The new Microsoft Teams desktop client](/microsoftteams/new-teams-desktop-admin)
+- [Configure multi-tenant organization templates using the Microsoft Graph API (Preview)](./multi-tenant-organization-configure-templates.md)
active-directory Multi Tenant Organization Configure Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/multi-tenant-organizations/multi-tenant-organization-configure-templates.md
+
+ Title: Configure multi-tenant organization templates using Microsoft Graph API (Preview)
+description: Learn how to configure multi-tenant organization templates in Azure Active Directory using the Microsoft Graph API.
+++++++ Last updated : 08/22/2023+++
+#Customer intent: As a dev, devops, or it admin, I want to
++
+# Configure multi-tenant organization templates using the Microsoft Graph API (Preview)
+
+> [!IMPORTANT]
+> Multi-tenant organization is currently in PREVIEW.
+> See the [Product Terms](https://aka.ms/EntraPreviewsTermsOfUse) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+This article describes how to configure a policy template for your multi-tenant organization.
+
+## Prerequisites
+
+- Azure AD Premium P1 or P2 license. For more information, see [License requirements](./multi-tenant-organization-overview.md#license-requirements).
+- [Security Administrator](../roles/permissions-reference.md#security-administrator) role to configure cross-tenant access settings and templates for the multi-tenant organization.
+- [Global Administrator](../roles/permissions-reference.md#global-administrator) role to consent to required permissions.
+
+## Cross-tenant access policy partner template
+
+The [cross-tenant access partner configuration](../external-identities/cross-tenant-access-settings-b2b-collaboration.md) handles trust settings and automatic user consent settings between partner tenants. For example, you can use these settings to trust multi-factor authentication claims for inbound users from the target partner tenant. With the template in an unconfigured state, partner configurations for partner tenants in the multi-tenant organization won't be amended, with all trust settings passed through from default settings. However, if you configure the template, then partner configurations will be amended corresponding to the policy template.
+
+### Configure inbound and outbound automatic redemption
+
+To specify which trust settings and automatic user consent settings to apply to your policy template, use the [Update multiTenantOrganizationPartnerConfigurationTemplate](/graph/api/multitenantorganizationpartnerconfigurationtemplate-update) API. If you create or join a multi-tenant organization using the Microsoft 365 admin center, this configuration is handled automatically.
+
+**Request**
+
+```http
+PATCH https://graph.microsoft.com/beta/policies/crossTenantAccessPolicy/templates/multiTenantOrganizationPartnerConfiguration
+
+{
+ "inboundTrust": {
+ "isMfaAccepted": true,
+ "isCompliantDeviceAccepted": true,
+ "isHybridAzureADJoinedDeviceAccepted": true
+ },
+ "automaticUserConsentSettings": {
+ "inboundAllowed": true,
+ "outboundAllowed": true
+ },
+ "templateApplicationLevel": "newPartners,existingPartners"
+}
+```
+
+### Disable the template for existing partners
+
+To apply this template only to new multi-tenant organization members and exclude existing partners, set the `templateApplicationLevel` parameter to `newPartners` only.
+
+**Request**
+
+```http
+PATCH https://graph.microsoft.com/beta/policies/crossTenantAccessPolicy/templates/multiTenantOrganizationPartnerConfiguration
+
+{
+ "inboundTrust": {
+ "isMfaAccepted": true,
+ "isCompliantDeviceAccepted": true,
+ "isHybridAzureADJoinedDeviceAccepted": true
+ },
+ "automaticUserConsentSettings": {
+ "inboundAllowed": true,
+ "outboundAllowed": true
+ },
+ "templateApplicationLevel": "newPartners"
+}
+```
+
+### Disable the template completely
+
+To disable the template completely, set the `templateApplicationLevel` parameter to an empty string (`""`).
+
+**Request**
+
+```http
+PATCH https://graph.microsoft.com/beta/policies/crossTenantAccessPolicy/templates/multiTenantOrganizationPartnerConfiguration
+
+{
+ "inboundTrust": {
+ "isMfaAccepted": true,
+ "isCompliantDeviceAccepted": true,
+ "isHybridAzureADJoinedDeviceAccepted": true
+ },
+ "automaticUserConsentSettings": {
+ "inboundAllowed": true,
+ "outboundAllowed": true
+ },
+ "templateApplicationLevel": ""
+}
+```
+
+### Reset the template
+
+To reset the template to its default state (decline all trust and automatic user consent), use the [multiTenantOrganizationPartnerConfigurationTemplate: resetToDefaultSettings](/graph/api/multitenantorganizationpartnerconfigurationtemplate-resettodefaultsettings) API.
+
+**Request**
+
+```http
+POST https://graph.microsoft.com/beta/policies/crossTenantAccessPolicy/templates/multiTenantOrganizationPartnerConfiguration/resetToDefaultSettings
+```
+
+## Cross-tenant synchronization template
+
+The identity synchronization policy governs [cross-tenant synchronization](cross-tenant-synchronization-overview.md), which allows you to share users and groups across tenants in your organization. You can use these settings to allow inbound user synchronization. With the template in an unconfigured state, the identity synchronization policy for partner tenants in the multi-tenant organization isn't amended. However, if you configure the template, then the identity synchronization policy is amended in accordance with the policy template.
+
+### Configure inbound user synchronization
+
+To allow inbound user synchronization in the policy template, use the [Update multiTenantOrganizationIdentitySyncPolicyTemplate](/graph/api/multitenantorganizationidentitysyncpolicytemplate-update) API. If you create or join a multi-tenant organization using the Microsoft 365 admin center, this configuration is handled automatically.
+
+**Request**
+
+```http
+PATCH https://graph.microsoft.com/beta/policies/crossTenantAccessPolicy/templates/multiTenantOrganizationIdentitySynchronization
+
+{
+ "userSyncInbound": {
+ "isSyncAllowed": true
+ },
+ "templateApplicationLevel": "newPartners,existingPartners"
+}
+```
+
+### Disable the template for existing partners
+
+To apply this template only to new multi-tenant organization members and exclude existing partners, set the `templateApplicationLevel` parameter to `newPartners` only.
+
+**Request**
+
+```http
+PATCH https://graph.microsoft.com/beta/policies/crossTenantAccessPolicy/templates/multiTenantOrganizationIdentitySynchronization
+
+{
+ "userSyncInbound": {
+ "isSyncAllowed": true
+ },
+ "templateApplicationLevel": "newPartners"
+}
+```
+
+### Disable the template completely
+
+To disable the template completely, set the `templateApplicationLevel` parameter to an empty string (`""`).
+
+**Request**
+
+```http
+PATCH https://graph.microsoft.com/beta/policies/crossTenantAccessPolicy/templates/multiTenantOrganizationIdentitySynchronization
+
+{
+ "userSyncInbound": {
+ "isSyncAllowed": true
+ },
+ "templateApplicationLevel": ""
+}
+```
+
+### Reset the template
+
+To reset the template to its default state (decline inbound synchronization), use the [multiTenantOrganizationIdentitySyncPolicyTemplate: resetToDefaultSettings](/graph/api/multitenantorganizationidentitysyncpolicytemplate-resettodefaultsettings) API.
+
+**Request**
+
+```http
+POST https://graph.microsoft.com/beta/policies/crossTenantAccessPolicy/templates/multiTenantOrganizationIdentitySynchronization/resetToDefaultSettings
+```
+
+## Next steps
+
+- [Configure cross-tenant synchronization](cross-tenant-synchronization-configure.md)
active-directory Multi Tenant Organization Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/multi-tenant-organizations/multi-tenant-organization-known-issues.md
+
+ Title: Known issues for multi-tenant organizations (Preview)
+description: Learn about known issues when you work with multi-tenant organizations in Azure Active Directory.
+ Last updated : 08/22/2023
+#Customer intent: As a dev, devops, or it admin, I want to
+
+# Known issues for multi-tenant organizations (Preview)
+
+> [!IMPORTANT]
+> Multi-tenant organization is currently in PREVIEW.
+> See the [Product Terms](https://aka.ms/EntraPreviewsTermsOfUse) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+This article discusses known issues to be aware of when you work with multi-tenant organization functionality across Azure AD and Microsoft 365. To provide feedback about the multi-tenant organization functionality on UserVoice, see [Azure Active Directory (Azure AD) UserVoice](https://feedback.azure.com/d365community/forum/22920db1-ad25-ec11-b6e6-000d3a4f0789?category_id=360892). We watch UserVoice closely so that we can improve the service.
+
+## Scope
+
+The experiences and issues described in this article have the following scope.
+
+| Scope | Description |
+| | |
+| In scope | - Azure AD administrator experiences and issues related to multi-tenant organizations to support seamless collaboration experiences in new Teams, with reciprocally provisioned B2B members |
+| Related scope | - Microsoft 365 admin center experiences and issues related to multi-tenant organizations<br/>- Microsoft 365 multi-tenant organization people search experiences and issues<br/>- Cross-tenant synchronization issues related to Microsoft 365 |
+| Out of scope | - Cross-tenant synchronization unrelated to Microsoft 365<br/>- End user experiences in new Teams<br/>- End user experiences in Power BI<br/>- Tenant migration or consolidation |
+| Unsupported scenarios | - Seamless collaboration experience across multi-tenant organizations in classic Teams<br/>- Self-service for multi-tenant organizations larger than 5 tenants or 100,000 internal users per tenant<br/>- Using provisioning or synchronization engines other than Azure AD cross-tenant synchronization<br/>- Multi-tenant organizations in Azure Government or Microsoft Azure operated by 21Vianet<br/>- Cross-cloud multi-tenant organizations |
+
+## Multi-tenant organization related issues
+
+- Allow for at least 2 hours between the creation of a multi-tenant organization and any tenant joining the multi-tenant organization.
+
+- Allow for up to 4 hours between submission of a multi-tenant organization join request and the same join request to succeed and finish.
+
+- Self-service of multi-tenant organization functionality is limited to a maximum of 5 tenants and 100,000 internal users per tenant. To request an increase in these limits, submit an Azure AD or Microsoft 365 admin center support request.
+
+- In the Microsoft Graph APIs, the default limits of 5 tenants and 100,000 internal users per tenant are only enforced at the time of joining. In Microsoft 365 admin center, the default limits are enforced at multi-tenant organization creation time and at time of joining.
+
+- There are multiple reasons why a join request might fail. If Microsoft 365 admin center doesn't indicate why a join request isn't succeeding, try examining the join request response by using the Microsoft Graph APIs or Microsoft Graph Explorer (see the sketch after this list).
+
+- If you followed the correct sequence of creating a multi-tenant organization, adding a tenant to the multi-tenant organization, and the added tenant's join request keeps failing, submit a support request to Azure AD or Microsoft 365 admin center.
+
+- As part of a multi-tenant organization, newly invited B2B users receive an additional user property that includes the home tenant identifier of the B2B user. Already redeemed B2B users don't have this additional user property. Currently, the Microsoft 365 admin center share users functionality and Azure AD cross-tenant synchronization are the only supported methods to populate this additional user property.
+
+- As part of a multi-tenant organization, [reset redemption status for a B2B user](../external-identities/reset-redemption-status.md) is currently unavailable and disabled.
+
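+For example, to examine a join request response per the troubleshooting note in the list above, a minimal sketch follows. This is an assumption-level illustration, so confirm the exact request against the Microsoft Graph reference for the multi-tenant organization join request resource.
+
+```http
+# Sketch: read the join request status of the current tenant
+GET https://graph.microsoft.com/beta/tenantRelationships/multiTenantOrganization/joinRequest
+```
+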
+## B2B user or B2B member related issues
+
+- The promotion of B2B guests to B2B members represents a strategic decision by multi-tenant organizations to consider B2B members as trusted users of the organization. Review the [default permissions](../fundamentals/users-default-permissions.md) for B2B members.
+
+- To promote B2B guests to B2B members, a source tenant administrator can amend the [attribute mappings](cross-tenant-synchronization-configure.md#step-9-review-attribute-mappings), or a target tenant administrator can [change the userType](../fundamentals/how-to-manage-user-profile-info.md#add-or-change-profile-information) if the property is not recurringly synchronized.
+
+- In [SharePoint OneDrive](/sharepoint/), the promotion of B2B guests to B2B members may not happen automatically. If faced with a user type mismatch between Azure AD and SharePoint OneDrive, try [Set-SPUser [-SyncFromAD]](/powershell/module/sharepoint-server/set-spuser).
+
+- In [SharePoint OneDrive](/sharepoint/) user interfaces, when sharing a file with *People in Fabrikam*, the current user interfaces might be counterintuitive, because B2B members in Fabrikam from Contoso count towards *People in Fabrikam*.
+
+- In [Microsoft Forms](/office365/servicedescriptions/microsoft-forms-service-description), B2B member users may not be able to access forms.
+
+- In [Microsoft Power BI](/power-bi/enterprise/service-admin-azure-ad-b2b#who-can-you-invite), B2B member users may require license assignment, and their experience is untested. A Power BI preview for B2B members as part of a multi-tenant organization is expected.
+
+- In [Microsoft Power Apps](/power-platform/), [Microsoft Dynamics 365](/dynamics365/), and other workloads, B2B member users may require license assignment. Experiences for B2B members are untested.
+
+## User synchronization issues
+
+- When to use Microsoft 365 admin center to share users: If you haven't previously used Azure AD cross-tenant synchronization, and you intend to establish a [collaborating user set](multi-tenant-organization-microsoft-365.md#collaborating-user-set) topology where the same set of users is shared to all multi-tenant organization tenants, you may want to use the Microsoft 365 admin center share users functionality.
+
+- When to use Azure AD cross-tenant synchronization: If you're already using Azure AD cross-tenant synchronization, for various [multi-hub multi-spoke topologies](cross-tenant-synchronization-topology.md), you don't need to use the Microsoft 365 admin center share users functionality. Instead, you may want to continue using your existing Azure AD cross-tenant synchronization jobs.
+
+- Contact objects: The at-scale provisioning of B2B users may collide with contact objects. The handling or conversion of contact objects is currently not supported.
+
+- Microsoft 365 admin center / Azure AD: Whether you use the Microsoft 365 admin center share users functionality or Azure AD cross-tenant synchronization, the following items apply:
+
+ - In the identity platform, both methods are represented as Azure AD cross-tenant synchronization jobs.
+ - You may adjust the attribute mappings to match your organizations' needs.
+ - By default, new B2B users are provisioned as B2B members, while existing B2B guests remain B2B guests.
+ - You can opt to convert B2B guests into B2B members by setting [**Apply this mapping** to **Always**](cross-tenant-synchronization-configure.md#step-9-review-attribute-mappings).
+
+- Microsoft 365 admin center / Azure AD: If you're using Azure AD cross-tenant synchronization to provision your users, rather than the Microsoft 365 admin center share users functionality, Microsoft 365 admin center indicates an **Outbound sync status** of **Not configured**. This is expected preview behavior. Currently, Microsoft 365 admin center only shows the status of Azure AD cross-tenant synchronization jobs created and managed by Microsoft 365 admin center and doesn't display Azure AD cross-tenant synchronizations created and managed in Azure AD.
+
+- Microsoft 365 admin center / Azure AD: If you view Azure AD cross-tenant synchronization in Azure portal, after adding tenants to or after joining a multi-tenant organization in Microsoft 365 admin center, you'll see a cross-tenant synchronization configuration with the name MTO_Sync_&lt;TenantID&gt;. Refrain from editing or changing the name if you want Microsoft 365 admin center to recognize the configuration as created and managed by Microsoft 365 admin center.
+
+- Microsoft 365 admin center / Azure AD: There's no established or supported pattern for Microsoft 365 admin center to take control of pre-existing Azure AD cross-tenant synchronization configurations and jobs.
+
+- Advantage of using cross-tenant access settings template for identity synchronization: Azure AD cross-tenant synchronization doesn't support establishing a cross-tenant synchronization configuration before the tenant in question allows inbound synchronization in their cross-tenant access settings for identity synchronization. Hence, use of the cross-tenant access settings template for identity synchronization, with `userSyncInbound` set to `true`, is encouraged, as facilitated by Microsoft 365 admin center.
+
+- Source of Authority Conflict: Using Azure AD cross-tenant synchronization to target hybrid identities that have been converted to B2B users has not been tested and is not supported.
+
+- Syncing B2B guests versus B2B members: As your organization rolls out the multi-tenant organization functionality, including provisioning of B2B users across multi-tenant organization tenants, you might want to provision some users as B2B guests, while provisioning other users as B2B members. To achieve this, you may want to establish two Azure AD cross-tenant synchronization configurations in the source tenant: one with userType attribute mappings configured to B2B guest, and another with userType attribute mappings configured to B2B member, each with [**Apply this mapping** set to **Always**](cross-tenant-synchronization-configure.md#step-9-review-attribute-mappings). By moving a user from one configuration's scope to the other, you can easily control who will be a B2B guest or a B2B member in the target tenant.
+
+- Cross-tenant synchronization deprovisioning: By default, when provisioning scope is reduced while a synchronization job is running, users fall out of scope and are soft deleted, unless Target Object Actions for Delete is disabled. For more information, see [Deprovisioning](cross-tenant-synchronization-overview.md#deprovisioning) and [Define who is in scope for provisioning](cross-tenant-synchronization-configure.md#step-8-optional-define-who-is-in-scope-for-provisioning-with-scoping-filters).
+
+- Cross-tenant synchronization deprovisioning: Currently, [SkipOutOfScopeDeletions](../app-provisioning/skip-out-of-scope-deletions.md?toc=%2Fazure%2Factive-directory%2Fmulti-tenant-organizations%2Ftoc.json&pivots=cross-tenant-synchronization) works for application provisioning jobs, but not for Azure AD cross-tenant synchronization. To avoid soft deletion of users taken out of scope of cross-tenant synchronization, set [Target Object Actions for Delete](cross-tenant-synchronization-configure.md#step-8-optional-define-who-is-in-scope-for-provisioning-with-scoping-filters) to disabled.
+
+## Next steps
+
+- [Known issues for provisioning in Azure Active Directory](../app-provisioning/known-issues.md?toc=/azure/active-directory/multi-tenant-organizations/toc.json&pivots=cross-tenant-synchronization)
active-directory Multi Tenant Organization Microsoft 365 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/multi-tenant-organizations/multi-tenant-organization-microsoft-365.md
+
+ Title: Multi-tenant organization identity provisioning for Microsoft 365 (Preview)
+description: Learn how multi-tenant organizations identity provisioning and Microsoft 365 work together.
+ Last updated : 08/22/2023
+#Customer intent: As a dev, devops, or it admin, I want to
+
+# Multi-tenant organization identity provisioning for Microsoft 365 (Preview)
+
+> [!IMPORTANT]
+> Multi-tenant organization is currently in PREVIEW.
+> See the [Product Terms](https://aka.ms/EntraPreviewsTermsOfUse) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+The multi-tenant organization capability is designed for organizations that own multiple Azure Active Directory (Azure AD) tenants and want to streamline intra-organization cross-tenant collaboration in Microsoft 365. It's built on the premise of reciprocal provisioning of B2B member users across multi-tenant organization tenants.
+
+## Microsoft 365 people search
+
+Except for [Teams external access](/microsoftteams/communicate-with-users-from-other-organizations) and [Teams shared channels](/microsoftteams/shared-channels#getting-started-with-shared-channels), [Microsoft 365 people search](/microsoft-365/enterprise/multi-tenant-people-search) is typically scoped to within local tenant boundaries. In multi-tenant organizations with an increased need for cross-tenant coworker collaboration, it's recommended to reciprocally provision users from their home tenants into the resource tenants of collaborating coworkers.
+
+## New Microsoft Teams
+
+The [new Microsoft Teams](/microsoftteams/new-teams-desktop-admin) experience improves upon Microsoft 365 people search and Teams external access for a unified seamless collaboration experience. For this improved experience to light up, the multi-tenant organization must be represented in Azure AD, and collaborating users must be provisioned as B2B members.
+
+## Collaborating user set
+
+Collaboration in Microsoft 365 is built on the premise of reciprocal provisioning of B2B identities across multi-tenant organization tenants.
+
+For example, say Annie in tenant A, Bob and Barbara in tenant B, and Charlie in tenant C want to collaborate. Conceptually, these four users represent a collaborating user set of four internal identities across three tenants.
+
+For people search to succeed, while scoped to local tenant boundaries, the entire collaborating user set must be represented within the scope of each multi-tenant organization tenant A, B, and C, in the form of either internal or B2B identities.
+
+Depending on your organization's needs, the collaborating user set may contain a subset of collaborating employees, or eventually all employees.
+
+## Sharing your users
+
+One of the simpler ways to achieve a collaborating user set in each multi-tenant organization tenant is for each tenant administrator to define their user contribution and synchronize those users outbound. Tenant administrators on the receiving end accept the shared users inbound.
+
+- Administrator A contributes or shares Annie
+- Administrator B contributes or shares Bob and Barbara
+- Administrator C contributes or shares Charlie
+
+Microsoft 365 admin center facilitates orchestration of such a collaborating user set across multi-tenant organization tenants. For more information, see [Synchronize users in multi-tenant organizations in Microsoft 365 (Preview)](/microsoft-365/enterprise/sync-users-multi-tenant-orgs).
+
+Alternatively, pair-wise configuration of inbound and outbound cross-tenant synchronization can be used to orchestrate such a collaborating user set across multi-tenant organization tenants. For more information, see [What is cross-tenant synchronization?](cross-tenant-synchronization-overview.md).
+
+## B2B member users
+
+To ensure a seamless collaboration experience across the multi-tenant organization in new Microsoft Teams, B2B identities are provisioned as B2B users of [Member userType](../external-identities/user-properties.md#user-type).
+
+| User synchronization method | Default userType property |
+| | |
+| [Synchronize users in multi-tenant organizations in Microsoft 365 (Preview)](/microsoft-365/enterprise/sync-users-multi-tenant-orgs) | **Member**<br/> Remains Guest, if the B2B identity already existed as Guest |
+| [Cross-tenant synchronization in Azure AD](./cross-tenant-synchronization-overview.md) | **Member**<br/> Remains Guest, if the B2B identity already existed as Guest |
+
+From a security perspective, you should review the default permissions granted to B2B member users. For more information, see [Compare member and guest default permissions](../fundamentals/users-default-permissions.md#compare-member-and-guest-default-permissions).
+
+To change the userType from **Guest** to **Member** (or vice versa), a source tenant administrator can amend the [attribute mappings](cross-tenant-synchronization-configure.md#step-9-review-attribute-mappings), or a target tenant administrator can [change the userType](../fundamentals/how-to-manage-user-profile-info.md#add-or-change-profile-information) if the property is not recurringly synchronized.
+
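+For example, a target tenant administrator could update the property directly with Microsoft Graph. The following is a minimal sketch rather than the authoritative procedure; the user ID is a placeholder, and the change can be overwritten if the property is recurringly synchronized:
+
+```http
+# Sketch: promote an existing B2B user to Member in the target tenant
+PATCH https://graph.microsoft.com/v1.0/users/{user-id}
+
+{
+    "userType": "Member"
+}
+```
+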
+## Unsharing your users
+
+To unshare users, you deprovision users by using the user deprovisioning capabilities available in Azure AD cross-tenant synchronization. By default, when provisioning scope is reduced while a synchronization job is running, users fall out of scope and are soft deleted, unless Target Object Actions for Delete is disabled. For more information, see [Deprovisioning](cross-tenant-synchronization-overview.md#deprovisioning) and [Define who is in scope for provisioning](cross-tenant-synchronization-configure.md#step-8-optional-define-who-is-in-scope-for-provisioning-with-scoping-filters).
+
+## Next steps
+
+- [Plan for multi-tenant organizations in Microsoft 365](/microsoft-365/enterprise/plan-multi-tenant-org-overview)
+- [Set up a multi-tenant org in Microsoft 365](/microsoft-365/enterprise/set-up-multi-tenant-org)
active-directory Multi Tenant Organization Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/multi-tenant-organizations/multi-tenant-organization-overview.md
+
+ Title: What is a multi-tenant organization in Azure Active Directory? (Preview)
+description: Learn about multi-tenant organizations in Azure Active Directory and Microsoft 365.
+ Last updated : 08/22/2023
+#Customer intent: As a dev, devops, or it admin, I want to
+
+# What is a multi-tenant organization in Azure Active Directory? (Preview)
+
+> [!IMPORTANT]
+> Multi-tenant organization is currently in PREVIEW.
+> See the [Product Terms](https://aka.ms/EntraPreviewsTermsOfUse) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+Multi-tenant organization is a feature in Azure Active Directory (Azure AD) and Microsoft 365 that enables you to form a tenant group within your organization. Each pair of tenants in the group is governed by cross-tenant access settings that you can use to configure B2B or cross-tenant synchronization.
+
+## Why use multi-tenant organization?
+
+Here are the primary goals of multi-tenant organization:
+
+- Define a group of tenants belonging to your organization
+- Collaborate across your tenants in new Microsoft Teams
+- Enable search and discovery of user profiles across your tenants through Microsoft 365 people search
+
+## Who should use it?
+
+Organizations that own multiple Azure AD tenants and want to streamline intra-organization cross-tenant collaboration in Microsoft 365.
+
+The multi-tenant organization capability is built on the assumption of reciprocal provisioning of B2B member users across multi-tenant organization tenants.
+
+As such, the multi-tenant organization capability assumes the simultaneous use of Azure AD cross-tenant synchronization or an alternative bulk provisioning engine for [external identities](../external-identities/user-properties.md).
+
+## Benefits
+
+Here are the primary benefits of a multi-tenant organization:
+
+- Differentiate in-organization and out-of-organization external users
+
+ In Azure AD, external users originating from within a multi-tenant organization can be differentiated from external users originating from outside the multi-tenant organization. This differentiation facilitates the application of different policies for in-organization and out-of-organization external users.
+- Improved collaborative experience in Microsoft Teams
+
+ In new Microsoft Teams, multi-tenant organization users can expect an improved collaborative experience across tenants with chat, calling, and meeting start notifications from all connected tenants across the multi-tenant organization. Tenant switching is more seamless and faster. For more information, see [Microsoft Teams: Advantages of the new architecture](https://techcommunity.microsoft.com/t5/microsoft-teams-blog/microsoft-teams-advantages-of-the-new-architecture/ba-p/3775704).
+
+- Improved people search experience across tenants
+
+ Across Microsoft 365 services, the multi-tenant organization people search experience is a collaboration feature that enables search and discovery of people across multiple tenants. Once enabled, users are able to search and discover synced user profiles in a tenant's global address list and view their corresponding people cards. For more information, see [Microsoft 365 multi-tenant organization people search (public preview)](/microsoft-365/enterprise/multi-tenant-people-search).
+
+## How does a multi-tenant organization work?
+
+The multi-tenant organization capability enables you to form a tenant group within your organization. The following list describes the basic lifecycle of a multi-tenant organization.
+
+- Define a multi-tenant organization
+
+ One tenant administrator defines a multi-tenant organization as a grouping of tenants. The grouping of tenants isn't reciprocal until each listed tenant takes action to join the multi-tenant organization. The objective is a reciprocal agreement between all listed tenants.
+
+- Join a multi-tenant organization
+
+ Tenant administrators of listed tenants take action to join the multi-tenant organization. After joining, the multi-tenant organization relationship is reciprocal between each and every tenant that joined the multi-tenant organization.
+
+- Leave a multi-tenant organization
+
+ Tenant administrators of listed tenants can leave a multi-tenant organization at any time. While a tenant administrator who defined the multi-tenant organization can add and remove listed tenants, they don't control the other tenants.
+
+A multi-tenant organization is established as a collaboration of equals. Each tenant administrator stays in control of their tenant and their membership in the multi-tenant organization.
+
+## Cross-tenant access settings
+
+Administrators staying in control of their resources is a guiding principle for multi-tenant organization collaboration. Cross-tenant access settings are required for each tenant-to-tenant relationship. Tenant administrators explicitly configure, as needed, the following policies (a rough sketch follows this list):
+
+- Cross-tenant access partner configurations
+
+ For more information, see [Configure cross-tenant access settings for B2B collaboration](../external-identities/cross-tenant-access-settings-b2b-collaboration.md) and [crossTenantAccessPolicyConfigurationPartner resource type](/graph/api/resources/crosstenantaccesspolicyconfigurationpartner?view=graph-rest-beta&preserve-view=true).
+
+- Cross-tenant access identity synchronization
+
+ For more information, see [Configure cross-tenant synchronization](cross-tenant-synchronization-configure.md) and [crossTenantIdentitySyncPolicyPartner resource type](/graph/api/resources/crosstenantidentitysyncpolicypartner).
+
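+As an illustration only, the two policy types might be created with requests along these lines. This is a sketch: tenant IDs are placeholders, and the authoritative request shapes are in the references linked above.
+
+```http
+# Sketch: create a partner configuration for a specific partner tenant
+POST https://graph.microsoft.com/v1.0/policies/crossTenantAccessPolicy/partners
+
+{
+    "tenantId": "<partner-tenant-id>"
+}
+
+# Sketch: create the identity synchronization policy for that partner
+PUT https://graph.microsoft.com/v1.0/policies/crossTenantAccessPolicy/partners/<partner-tenant-id>/identitySynchronization
+
+{
+    "userSyncInbound": {
+        "isSyncAllowed": true
+    }
+}
+```
+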
+## Multi-tenant organization example
+
+The following diagram shows three tenants A, B, and C that form a multi-tenant organization.
+
+| Tenant | Description |
+| :: | |
+| A | Administrators see a multi-tenant organization consisting of A, B, C.<br/>They also see cross-tenant access settings for B and C. |
+| B | Administrators see a multi-tenant organization consisting of A, B, C.<br/>They also see cross-tenant access settings for A and C. |
+| C | Administrators see a multi-tenant organization consisting of A, B, C.<br/>They also see cross-tenant access settings for A and B. |
+
+## Templates for cross-tenant access settings
+
+To ease the setup of homogeneous cross-tenant access settings applied to partner tenants in the multi-tenant organization, the administrator of each multi-tenant organization tenant can configure optional cross-tenant access settings templates dedicated to the multi-tenant organization. These templates can be used to preconfigure cross-tenant access settings that are applied to any partner tenant newly joining the multi-tenant organization.
+
+## Tenant role and state
+
+To facilitate the management of a multi-tenant organization, any given multi-tenant organization tenant has an associated role and state.
+
+| Tenant role | Description |
+| | |
+| Owner | One tenant creates the multi-tenant organization. The multi-tenant organization creating tenant receives the role of owner. The privilege of the owner tenant is to add tenants into a pending state as well as to remove tenants from the multi-tenant organization. Also, an owner tenant can change the role of other multi-tenant organization tenants. |
+| Member | Following the addition of pending tenants to the multi-tenant organization, pending tenants need to join the multi-tenant organization to turn their state from pending to active. Joined tenants typically start in the member role. Any member tenant has the privilege to leave the multi-tenant organization. |
+
+| Tenant state | Description |
+| | |
+| Pending | A pending tenant has yet to join a multi-tenant organization. While listed in an administrator's view of the multi-tenant organization, a pending tenant isn't yet part of the multi-tenant organization, and as such is hidden from an end user's view of a multi-tenant organization. |
+| Active | A tenant that has joined the multi-tenant organization is in the active state. An active tenant is part of the multi-tenant organization and is visible in both administrator and end-user views of the multi-tenant organization. |
+
+## Constraints
+
+The multi-tenant organization capability has been designed with the following constraints:
+
+- Any given tenant can only create or join a single multi-tenant organization.
+- Any multi-tenant organization must have at least one active owner tenant.
+- Each active tenant must have cross-tenant access settings for all active tenants.
+- Any active tenant may leave a multi-tenant organization by removing themselves from it.
+- A multi-tenant organization is deleted when the only remaining active (owner) tenant leaves.
+
+## External user segmentation
+
+By defining a multi-tenant organization, as well as pivoting on the Azure AD user property of userType, [external identities](../external-identities/user-properties.md) are segmented as follows:
+
+- External members originating from within a multi-tenant organization
+- External guests originating from within a multi-tenant organization
+- External members originating from outside of your organization
+- External guests originating from outside of your organization
+
+This segmentation of external users, due to the definition of a multi-tenant organization, enables administrators to better differentiate in-organization from out-of-organization external users.
+
+External members originating from within a multi-tenant organization are called multi-tenant organization members.
+
+Multi-tenant collaboration capabilities in Microsoft 365 aim to provide a seamless collaboration experience across tenant boundaries when collaborating with multi-tenant organization member users.
+
+## Get started
+
+Here are the basic steps to get started using multi-tenant organization.
+
+### Step 1: Plan your deployment
+
+For more information, see [Plan for multi-tenant organizations in Microsoft 365 (Preview)](/microsoft-365/enterprise/plan-multi-tenant-org-overview).
+
+### Step 2: Create your multi-tenant organization
+
+Create your multi-tenant organization using [Microsoft 365 admin center](/microsoft-365/enterprise/set-up-multi-tenant-org) or [Microsoft Graph API](multi-tenant-organization-configure-graph.md) (a rough Graph sketch follows this list):
+
+- First tenant, soon-to-be owner tenant, creates a multi-tenant organization.
+- Owner tenant adds one or more joiner tenants.
+- To allow for asynchronous processing, wait a **minimum of 2 hours**.
+
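+A rough Microsoft Graph sketch of this step follows. Treat it as an assumption-level illustration: the exact verbs and payloads are defined in the linked Graph API article, and the display name and tenant ID are placeholders.
+
+```http
+# Sketch: owner tenant creates the multi-tenant organization
+PATCH https://graph.microsoft.com/beta/tenantRelationships/multiTenantOrganization
+
+{
+    "displayName": "Contoso organization"
+}
+
+# Sketch: owner tenant adds a pending (joiner) tenant
+POST https://graph.microsoft.com/beta/tenantRelationships/multiTenantOrganization/tenants
+
+{
+    "tenantId": "<joiner-tenant-id>"
+}
+```
+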
+### Step 3: Join a multi-tenant organization
+
+Join a multi-tenant organization using [Microsoft 365 admin center](/microsoft-365/enterprise/join-leave-multi-tenant-org) or [Microsoft Graph API](multi-tenant-organization-configure-graph.md) (a rough Graph sketch follows this list):
+
+- Joiner tenants submit a join request to join the multi-tenant organization of owner tenant.
+- To allow for asynchronous processing, wait **up to 4 hours**.
+
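+A rough Graph sketch of the join step, under the same assumptions (verify the request shape against the linked Graph API article; the tenant ID is a placeholder):
+
+```http
+# Sketch: joiner tenant submits its join request
+PATCH https://graph.microsoft.com/beta/tenantRelationships/multiTenantOrganization/joinRequest
+
+{
+    "addedByTenantId": "<owner-tenant-id>"
+}
+```
+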
+Your multi-tenant organization is formed.
+
+### Step 4: Synchronize users
+
+Depending on your use case, you may want to synchronize users using one of the following methods:
+
+- [Synchronize users in multi-tenant organizations in Microsoft 365 (Preview)](/microsoft-365/enterprise/sync-users-multi-tenant-orgs)
+- [Configure cross-tenant synchronization using the Azure portal](cross-tenant-synchronization-configure.md)
+- [Configure cross-tenant synchronization using Microsoft Graph API](cross-tenant-synchronization-configure-graph.md)
+- Your alternative bulk provisioning engine
+
+## Limits
+
+Multi-tenant organizations have the following limits:
+
+- A maximum of five active tenants per multi-tenant organization
+- A maximum of 100,000 internal users per active tenant at the time of joining
+
+If you want to add more than five tenants or 100,000 internal users per tenant, contact Microsoft support.
+
+## License requirements
+
+The multi-tenant organization capability is in preview, and you can start using it if you have Azure AD Premium P1 licenses or above in all multi-tenant organization tenants. Licensing terms will be released at general availability. To find the right license for your requirements, see [Compare generally available features of Azure AD](https://www.microsoft.com/security/business/identity-access-management/azure-ad-pricing).
+
+## Next steps
+
+- [Plan for multi-tenant organizations in Microsoft 365 (Preview)](/microsoft-365/enterprise/plan-multi-tenant-org-overview)
+- [What is cross-tenant synchronization?](cross-tenant-synchronization-overview.md)
active-directory Multi Tenant Organization Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/multi-tenant-organizations/multi-tenant-organization-templates.md
+
+ Title: Multi-tenant organization templates (Preview)
+description: Learn about multi-tenant organization templates in Azure Active Directory.
+ Last updated : 08/22/2023
+#Customer intent: As a dev, devops, or it admin, I want to
+
+# Multi-tenant organization templates (Preview)
+
+> [!IMPORTANT]
+> Multi-tenant organization is currently in PREVIEW.
+> See the [Product Terms](https://aka.ms/EntraPreviewsTermsOfUse) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+Administrators staying in control of their resources is a guiding principle for multi-tenant organization collaboration. Cross-tenant access settings are required for each tenant-to-tenant relationship. Tenant administrators explicitly configure cross-tenant access partner configurations and identity synchronization settings for partner tenants inside the multi-tenant organization.
+
+To help apply homogeneous cross-tenant access settings to partner tenants in the multi-tenant organization, the administrator of each tenant can configure optional cross-tenant access settings templates dedicated to the multi-tenant organization. This article describes how to use templates to preconfigure cross-tenant access settings that are applied to any partner tenant newly joining the multi-tenant organization.
+
+## Autogeneration of cross-tenant access settings
+
+Within a multi-tenant organization, each pair of tenants must have bi-directional [cross-tenant access settings](../external-identities/cross-tenant-access-settings-b2b-collaboration.md), for both, partner configuration and identity synchronization. These settings provide the underlying policy framework for enabling trust and for sharing users and applications.
+
+When your tenant joins a new multi-tenant organization, or when a partner tenant joins your existing multi-tenant organization, cross-tenant access settings to other partner tenants in the enlarged multi-tenant organization, if they don't already exist, are automatically generated in an unconfigured state. In an unconfigured state, these cross-tenant access settings pass through the [default settings](../external-identities/cross-tenant-access-settings-b2b-collaboration.md#configure-default-settings).
+
+Default cross-tenant access settings apply to all external tenants for which you haven't created organization-specific customized settings. Typically, these settings are configured to be nontrusting. For example, cross-tenant trusts for multi-factor authentication and compliant device claims might be disabled and user and group sharing in B2B direct connect or B2B collaboration might be disallowed.
+
+In multi-tenant organizations, on the other hand, cross-tenant access settings are typically expected to be trusting. For example, cross-tenant trusts for multi-factor authentication and compliant device claims might be enabled and user and group sharing in B2B direct connect or B2B collaboration might be allowed.
+
+While the autogeneration of cross-tenant access settings for multi-tenant organization partner tenants in and of itself doesn't change any authentication or authorization policy behavior, it allows your organization to easily customize the cross-tenant access settings for partner tenants in the multi-tenant organization on a per-tenant basis.
+
+## Policy templates at multi-tenant organization formation
+
+As previously described, in multi-tenant organizations, cross-tenant access settings are typically expected to be trusting. For example, cross-tenant trusts for multi-factor authentication and compliant device claims might be enabled and user and group sharing in B2B direct connect or B2B collaboration might be allowed.
+
+While autogeneration of cross-tenant access settings, per the previous section, guarantees the existence of cross-tenant access settings for every multi-tenant organization partner tenant, further maintenance of the cross-tenant access settings for multi-tenant organization partner tenants is conducted individually, on a per-tenant basis.
+
+To reduce the workload for administrators at the time of multi-tenant organization formation, you can optionally use policy templates for preemptive configuration of cross-tenant access settings. These template settings are applied when your tenant joins a multi-tenant organization, to all external multi-tenant organization partner tenants, and when any partner tenant joins your existing multi-tenant organization, to that new partner tenant.
+
+[Enabling or configuring the optional policy templates](multi-tenant-organization-configure-templates.md) preemptively amends the corresponding [cross-tenant access settings](../external-identities/cross-tenant-access-settings-b2b-collaboration.md), for both partner configuration and identity synchronization, at the time a partner tenant joins the multi-tenant organization.
+
+As an example, consider the actions of the administrators for an anticipated multi-tenant organization with three tenants, A, B, and C.
+
+- The administrators of all three tenants enable and configure their respective optional policy templates to enable cross-tenant trusts for multi-factor authentication and compliant device claims and to allow user and group sharing in B2B direct connect and B2B collaboration.
+- Administrator A creates the multi-tenant organization and adds tenants B and C as pending tenants to the multi-tenant organization.
+- Administrator B joins the multi-tenant organization. Cross-tenant access settings in tenant A for partner tenant B are amended, according to tenant A policy template settings. Vice versa, cross-tenant access settings in tenant B for partner tenant A are amended, according to tenant B policy template settings.
+- Administrator C joins the multi-tenant organization. Cross-tenant access settings in tenants A (and B) for partner tenant C are amended, according to tenant A (and B) policy template settings. Similarly, cross-tenant access settings in tenant C for partner tenants A and B are amended, according to tenant C policy template settings.
+- Following the formation of this multi-tenant organization of three tenants, the cross-tenant access settings of all tenant pairs in the multi-tenant organization have preemptively been configured.
+
+In summary, configuration of the optional policy templates enables you to homogeneously initialize cross-tenant access settings across your multi-tenant organization, while maintaining maximum flexibility to customize your cross-tenant access settings as needed on a per-tenant basis.
+
+To stop using the policy templates, you can reset them to their default state. For more information, see [Configure multi-tenant organization templates](multi-tenant-organization-configure-templates.md).
+
+## Policy template scoping and additional properties
+
+To provide administrators with further configurability, you can choose when cross-tenant access settings are to be amended according to the policy templates. For example, you can choose to apply the policy templates for the following tenants when a tenant joins a multi-tenant organization:
+
+| Tenant | Description |
+| | |
+| Only new partner tenants | Tenants whose cross-tenant access settings are autogenerated |
+| Only existing partner tenants | Tenants who already have cross-tenant access settings |
+| All partner tenants | Both new partner tenants and existing partner tenants |
+| No partner tenants | Policy templates are effectively disabled |
+
+In this context, *new* partners refer to tenants for which you haven't yet configured cross-tenant access settings, while *existing* partners refer to tenants for which you have already configured cross-tenant access settings. This scoping is specified with the `templateApplicationLevel` property on the cross-tenant access [partner configuration template](/graph/api/resources/multitenantorganizationpartnerconfigurationtemplate) and the `templateApplicationLevel` property on the cross-tenant access [identity synchronization template](/graph/api/resources/multitenantorganizationidentitysyncpolicytemplate).
+
+Finally, in terms of interpretation of template property values, any template property value of `null` has no effect on the corresponding property value in the targeted cross-tenant access settings, while a defined template property value causes the corresponding property value in the targeted cross-tenant access settings to be amended in accordance with the template. The following table illustrates how template property values are being applied to corresponding cross-tenant access setting values.
+
+| Template Value | Initial Partner Settings Value<br/>(Before joining multi-tenant org) | Final Partner Settings Value<br/>(After joining multi-tenant org) |
+| | | |
+| `null` | &lt;Partner Settings Value&gt; | &lt;Partner Settings Value&gt; |
+| &lt;Template Value&gt; | &lt;any value&gt; | &lt;Template Value&gt; |
+
+## Policy templates used by Microsoft 365 admin center
+
+When a multi-tenant organization is formed in Microsoft 365 admin center, an administrator agrees to the following multi-tenant organization template settings:
+
+- Identity synchronization is set to allow users to synchronize into this tenant
+- Cross-tenant access is set to automatically redeem user invitations for both inbound and outbound
+
+This is achieved by setting the corresponding three template property values to `true` (see the sketch after this list):
+
+- `automaticUserConsentSettings.inboundAllowed`
+- `automaticUserConsentSettings.outboundAllowed`
+- `userSyncInbound`
+
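+Expressed against the template APIs described in [Configure multi-tenant organization templates using the Microsoft Graph API (Preview)](multi-tenant-organization-configure-templates.md), this corresponds roughly to the following two requests. This is a sketch, and the `templateApplicationLevel` values shown are one possible choice.
+
+```http
+# Sketch: allow automatic redemption of user invitations, inbound and outbound
+PATCH https://graph.microsoft.com/beta/policies/crossTenantAccessPolicy/templates/multiTenantOrganizationPartnerConfiguration
+
+{
+    "automaticUserConsentSettings": {
+        "inboundAllowed": true,
+        "outboundAllowed": true
+    },
+    "templateApplicationLevel": "newPartners,existingPartners"
+}
+
+# Sketch: allow inbound user synchronization
+PATCH https://graph.microsoft.com/beta/policies/crossTenantAccessPolicy/templates/multiTenantOrganizationIdentitySynchronization
+
+{
+    "userSyncInbound": {
+        "isSyncAllowed": true
+    },
+    "templateApplicationLevel": "newPartners,existingPartners"
+}
+```
+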
+For more information, see [Join or leave a multi-tenant organization in Microsoft 365](/microsoft-365/enterprise/join-leave-multi-tenant-org).
+
+## Cross-tenant access settings at time of multi-tenant organization disassembly
+
+Currently, there's no equivalent policy template feature supporting the disassembly of a multi-tenant organization. When a partner tenant leaves the multi-tenant organization, each tenant administrator must re-examine and amend the cross-tenant access settings for the partner tenant that left the multi-tenant organization accordingly.
+
+The partner tenant that left the multi-tenant organization must re-examine and amend the cross-tenant access settings for all former multi-tenant organization partner tenants accordingly, and consider resetting the two policy templates for cross-tenant access settings.
+
+## Next steps
+
+- [Configure multi-tenant organization templates using the Microsoft Graph API (Preview)](./multi-tenant-organization-configure-templates.md)
active-directory Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/multi-tenant-organizations/overview.md
- Title: What is a multi-tenant organization in Azure Active Directory?
-description: Learn about multi-tenant organizations in Azure Active Directory.
+ Title: Multi-tenant organization scenario and Azure AD capabilities
+description: Learn about the multi-tenant organization scenario and capabilities in Azure Active Directory.
Previously updated : 05/05/2023 Last updated : 08/22/2023 #Customer intent: As a dev, devops, or it admin, I want to
-# What is a multi-tenant organization in Azure Active Directory?
+# Multi-tenant organization scenario and Azure AD capabilities
-This article provides an overview of multi-tenant organizations.
+This article provides an overview of the multi-tenant organization scenario and the related capabilities in Azure Active Directory (Azure AD).
## What is a tenant?
-A *tenant* is an instance of Azure Active Directory (Azure AD) in which information about a single organization resides including organizational objects such as users, groups, and devices and also application registrations, such as Microsoft 365 and third-party applications. A tenant also contains access and compliance policies for resources, such as applications registered in the directory. The primary functions served by a tenant include identity authentication as well as resource access management.
+A *tenant* is an instance of Azure AD in which information about a single organization resides, including organizational objects such as users, groups, and devices, as well as application registrations, such as Microsoft 365 and third-party applications. A tenant also contains access and compliance policies for resources, such as applications registered in the directory. The primary functions served by a tenant include identity authentication as well as resource access management.
From an Azure AD perspective, a tenant forms an identity and access management scope. For example, a tenant administrator makes an application available to some or all the users in the tenant and enforces access policies on that application for users in that tenant. In addition, a tenant contains organizational branding data that drives end-user experiences, such as the organization's email domains and SharePoint URLs used by employees in that organization. From a Microsoft 365 perspective, a tenant forms the default collaboration and licensing boundary. For example, users in Microsoft Teams or Microsoft Outlook can easily find and collaborate with other users in their tenant, but don't have the ability to find or see users in other tenants.
The following diagram shows how users in other tenants might not be able to acce
As your organization evolves, your IT team must adapt to meet the changing needs. This often includes integrating with an existing tenant or forming a new one. Regardless of how the identity infrastructure is managed, it's critical that users have a seamless experience accessing resources and collaborating. Today, you may be using custom scripts or on-premises solutions to bring the tenants together to provide a seamless experience across tenants.
+## B2B direct connect
+
+To enable users across tenants to collaborate in [Teams Connect shared channels](/microsoftteams/platform/concepts/build-and-test/shared-channels), you can use [Azure AD B2B direct connect](../external-identities/b2b-direct-connect-overview.md). B2B direct connect is a feature of External Identities that lets you set up a mutual trust relationship with another Azure AD organization for seamless collaboration in Teams. When the trust is established, the B2B direct connect user has single sign-on access using credentials from their home tenant.
+
+Here's the primary constraint with using B2B direct connect across multiple tenants:
+
+- Currently, B2B direct connect works only with Teams Connect shared channels.
+
+For more information, see [B2B direct connect overview](../external-identities/b2b-direct-connect-overview.md).
+ ## B2B collaboration To enable users across tenants to collaborate, you can use [Azure AD B2B collaboration](../external-identities/what-is-b2b.md). B2B collaboration is a feature within External Identities that lets you invite guest users to collaborate with your organization. Once the external user has redeemed their invitation or completed sign-up, they're represented in your tenant as a user object. With B2B collaboration, you can securely share your company's applications and services with external users, while maintaining control over your own corporate data.
Here are the primary constraints with using B2B collaboration across multiple te
:::image type="content" source="./media/overview/multi-tenant-b2b-collaboration.png" alt-text="Diagram that shows using B2B collaboration across tenants." lightbox="./media/overview/multi-tenant-b2b-collaboration.png":::
-## B2B direct connect
-
-To enable users across tenants to collaborate in [Teams Connect shared channels](/microsoftteams/platform/concepts/build-and-test/shared-channels), you can use [Azure AD B2B direct connect](../external-identities/b2b-direct-connect-overview.md). B2B direct connect is a feature of External Identities that lets you set up a mutual trust relationship with another Azure AD organization for seamless collaboration in Teams. When the trust is established, the B2B direct connect user has single sign-on access using credentials from their home tenant.
-
-Here's the primary constraint with using B2B direct connect across multiple tenants:
-
-- Currently, B2B direct connect works only with Teams Connect shared channels.
-
+For more information, see [B2B collaboration overview](../external-identities/what-is-b2b.md).
## Cross-tenant synchronization
Here are the primary constraints with using cross-tenant synchronization across
:::image type="content" source="./media/overview/multi-tenant-cross-tenant-sync.png" alt-text="Diagram that shows using cross-tenant synchronization across tenants." lightbox="./media/overview/multi-tenant-cross-tenant-sync.png":::
+For more information, see [What is cross-tenant synchronization?](./cross-tenant-synchronization-overview.md).
+
+## Multi-tenant organization (Preview)
+
+> [!IMPORTANT]
+> Multi-tenant organization is currently in PREVIEW.
+> See the [Product Terms](https://aka.ms/EntraPreviewsTermsOfUse) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+[Multi-tenant organization](./multi-tenant-organization-overview.md) is a feature in Azure AD and Microsoft 365 that enables you to form a tenant group within your organization. Each pair of tenants in the group is governed by cross-tenant access settings that you can use to configure B2B or cross-tenant synchronization.
+
+Here are the primary benefits of a multi-tenant organization:
+
+- Differentiate in-organization and out-of-organization external users
+- Improved collaborative experience in new Microsoft Teams
+- Improved people search experience across tenants
+
+For more information, see [What is a multi-tenant organization in Azure Active Directory?](./multi-tenant-organization-overview.md).
+ ## Compare multi-tenant capabilities
-Depending on the needs of your organization, you can use any combination of cross-tenant synchronization, B2B collaboration, and B2B direct connect. The following table compares the capabilities of each feature. For more information about different external identity scenarios, see [Comparing External Identities feature sets](../external-identities/external-identities-overview.md#comparing-external-identities-feature-sets).
+Depending on the needs of your organization, you can use any combination of B2B direct connect, B2B collaboration, cross-tenant synchronization, and multi-tenant organization capabilities. B2B direct connect and B2B collaboration are independent capabilities, while cross-tenant synchronization and multi-tenant organization capabilities are independent of each other, though both rely on underlying B2B collaboration.
+
+The following table compares the capabilities of each feature. For more information about different external identity scenarios, see [Comparing External Identities feature sets](../external-identities/external-identities-overview.md#comparing-external-identities-feature-sets).
-| | Cross-tenant synchronization<br/>(internal) | B2B collaboration<br/>(Org-to-org external) | B2B direct connect<br/>(Org-to-org external) |
-| | | | |
-| **Purpose** | Users can seamlessly access apps/resources across the same organization, even if they're hosted in different tenants. | Users can access apps/resources hosted in external tenants, usually with limited guest privileges. Depending on automatic redemption settings, users might need to accept a consent prompt in each tenant. | Users can access Teams Connect shared channels hosted in external tenants. |
-| **Value** | Enables collaboration across organizational tenants. Administrators don't have to manually invite and synchronize users between tenants to ensure continuous access to apps/resources within the organization. | Enables external collaboration. More control and monitoring for administrators by managing the B2B collaboration users. Administrators can limit the access that these external users have to their apps/resources. | Enables external collaboration within Teams Connect shared channels only. More convenient for administrators because they don't have to manage B2B users. |
-| **Primary administrator workflow** | Configure the cross-tenant synchronization engine to synchronize users between multiple tenants as B2B collaboration users. | Add external users to resource tenant by using the B2B invitation process or build your own onboarding experience using the [B2B collaboration invitation manager](../external-identities/external-identities-overview.md#azure-ad-microsoft-graph-api-for-b2b-collaboration). | Configure cross-tenant access to provide external users inbound access to tenant the credentials for their home tenant. |
-| **Trust level** | High trust. All tenants are part of the same organization, and users are typically granted member access to all apps/resources. | Low to mid trust. User objects can be tracked easily and managed with granular controls. | Mid trust. B2B direct connect users are less easy to track, mandating a certain level of trust with the external organization. |
-| **Effect on users** | Within the same organization, users are synchronized from their home tenant to the resource tenant as B2B collaboration users. | External users are added to a tenant as B2B collaboration users. | Users access the resource tenant using the credentials for their home tenant. User objects aren't created in the resource tenant. |
-| **User type** | B2B collaboration user<br/>- External member (default)<br/>- External guest | B2B collaboration user<br/>- External member<br/>- External guest (default) | B2B direct connect user<br/>- N/A |
+| | B2B direct connect<br/>(Org-to-org external or internal) | B2B collaboration<br/>(Org-to-org external or internal) | Cross-tenant synchronization<br/>(Org internal) | Multi-tenant organization<br/>(Org internal) |
+| | | | | |
+| **Purpose** | Users can access Teams Connect shared channels hosted in external tenants. | Users can access apps/resources hosted in external tenants, usually with limited guest privileges. Depending on automatic redemption settings, users might need to accept a consent prompt in each tenant. | Users can seamlessly access apps/resources across the same organization, even if they're hosted in different tenants. | Users can more seamlessly collaborate across a multi-tenant organization in new Teams and people search. |
+| **Value** | Enables external collaboration within Teams Connect shared channels only. More convenient for administrators because they don't have to manage B2B users. | Enables external collaboration. More control and monitoring for administrators by managing the B2B collaboration users. Administrators can limit the access that these external users have to their apps/resources. | Enables collaboration across organizational tenants. Administrators don't have to manually invite and synchronize users between tenants to ensure continuous access to apps/resources within the organization. | Enables collaboration across organizational tenants. Administrators continue to have full configuration ability via cross-tenant access settings. Optional cross-tenant access templates allow pre-configuration of cross-tenant access settings. |
+| **Primary administrator workflow** | Configure cross-tenant access to provide external users inbound access to the tenant using the credentials for their home tenant. | Add external users to the resource tenant by using the B2B invitation process, or build your own onboarding experience using the [B2B collaboration invitation manager](../external-identities/external-identities-overview.md#azure-ad-microsoft-graph-api-for-b2b-collaboration). | Configure the cross-tenant synchronization engine to synchronize users between multiple tenants as B2B collaboration users. | Create a multi-tenant organization, add (invite) tenants, and join a multi-tenant organization. Use existing B2B collaboration users, or use cross-tenant synchronization to provision B2B collaboration users. |
+| **Trust level** | Mid trust. B2B direct connect users are less easy to track, mandating a certain level of trust with the external organization. | Low to mid trust. User objects can be tracked easily and managed with granular controls. | High trust. All tenants are part of the same organization, and users are typically granted member access to all apps/resources. | High trust. All tenants are part of the same organization, and users are typically granted member access to all apps/resources. |
+| **Effect on users** | Users access the resource tenant using the credentials for their home tenant. User objects aren't created in the resource tenant. | External users are added to a tenant as B2B collaboration users. | Within the same organization, users are synchronized from their home tenant to the resource tenant as B2B collaboration users. | Within the same multi-tenant organization, B2B collaboration users, particularly member users, benefit from enhanced, seamless collaboration across Microsoft 365. |
+| **User type** | B2B direct connect user<br/>- N/A | B2B collaboration user<br/>- External member<br/>- External guest (default) | B2B collaboration user<br/>- External member (default)<br/>- External guest | B2B collaboration user<br/>- External member (default)<br/>- External guest |
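Because each pair of tenants in these scenarios is governed by cross-tenant access settings, those settings can also be managed through Microsoft Graph. A minimal sketch, assuming a token with the `Policy.ReadWrite.CrossTenantAccess` permission and a placeholder partner tenant ID:

```python
import requests

TOKEN = "<access-token>"  # placeholder
HEADERS = {"Authorization": f"Bearer {TOKEN}", "Content-Type": "application/json"}
PARTNERS = "https://graph.microsoft.com/v1.0/policies/crossTenantAccessPolicy/partners"

# Create an organizational (partner-specific) configuration for another tenant.
requests.post(PARTNERS, headers=HEADERS, json={"tenantId": "<partner-tenant-id>"})

# Enable automatic redemption of B2B invitations in both directions for that partner,
# which cross-tenant synchronization relies on to suppress consent prompts.
requests.patch(
    f"{PARTNERS}/<partner-tenant-id>",
    headers=HEADERS,
    json={"automaticUserConsentSettings": {"inboundAllowed": True, "outboundAllowed": True}},
)
```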
-The following diagram shows how cross-tenant synchronization, B2B collaboration, and B2B direct connect could be used together.
+The following diagram shows how B2B direct connect, B2B collaboration, and cross-tenant synchronization capabilities could be used together.
:::image type="content" source="./media/overview/multi-tenant-capabilities.png" alt-text="Diagram that shows different multi-tenant capabilities." lightbox="./media/overview/multi-tenant-capabilities.png"::: ## Terminology
-To better understand multi-tenant organizations, you can refer back to the following list of terms.
+To better understand the Azure AD capabilities related to multi-tenant organization scenarios, you can refer back to the following list of terms.
| Term | Definition |
| --- | --- |
| tenant | An instance of Azure Active Directory (Azure AD). |
| organization | The top level of a business hierarchy. |
-| multi-tenant organization | An organization that has more than one instance of Azure AD. |
+| multi-tenant organization | An organization that has more than one instance of Azure AD, as well as a capability to group those instances in Azure AD. |
+| creator tenant | The tenant that created the multi-tenant organization. |
+| owner tenant | A tenant with the owner role. Initially, the creator tenant. |
+| added tenant | A tenant that was added by an owner tenant. |
+| joiner tenant | A tenant that is joining the multi-tenant organization. |
+| join request | A request submitted by a joiner or added tenant to join the multi-tenant organization. |
+| pending tenant | A tenant that was added by an owner but that hasn't yet joined. |
+| active tenant | A tenant that created or joined the multi-tenant organization. |
+| member tenant | A tenant with the member role. Most joiner tenants start as members. |
+| multi-tenant organization tenant | An active tenant of the multi-tenant organization, not pending. |
| cross-tenant synchronization | A one-way synchronization service in Azure AD that automates creating, updating, and deleting B2B collaboration users across tenants in an organization. |
-| cross-tenant access settings | Settings to manage collaboration with external Azure AD organizations. |
+| cross-tenant access settings | Settings to manage collaboration for specific Azure AD organizations. |
+| cross-tenant access settings template | An optional template to preconfigure cross-tenant access settings that are applied to any partner tenant newly joining the multi-tenant organization. |
| organizational settings | Cross-tenant access settings for specific Azure AD organizations. |
| configuration | An application and underlying service principal in Azure AD that includes the settings (such as target tenant, user scope, and attribute mappings) needed for cross-tenant synchronization. |
| provisioning | The process of automatically creating or synchronizing objects across a boundary. |
To better understand multi-tenant organizations, you can refer back to the follo
## Next steps
+- [What is a multi-tenant organization in Azure Active Directory?](multi-tenant-organization-overview.md)
- [What is cross-tenant synchronization?](cross-tenant-synchronization-overview.md)
active-directory Azure Pim Resource Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/azure-pim-resource-rbac.md
Title: View audit report for Azure resource roles in Privileged Identity Management (PIM)
-description: View activity and audit history for Azure resource roles in Azure AD Privileged Identity Management (PIM).
+description: View activity and audit history for Azure resource roles in Privileged Identity Management (PIM).
documentationcenter: ''
# View activity and audit history for Azure resource roles in Privileged Identity Management
-With Privileged Identity Management (PIM) in Azure Active Directory (Azure AD), part of Microsoft Entra, you can view activity, activations, and audit history for Azure resources roles within your organization. This includes subscriptions, resource groups, and even virtual machines. Any resource within the Azure portal that leverages the Azure role-based access control functionality can take advantage of the security and lifecycle management capabilities in Privileged Identity Management. If you want to retain audit data for longer than the default retention period, you can use Azure Monitor to route it to an Azure storage account. For more information, see [Archive Azure AD logs to an Azure storage account](../reports-monitoring/quickstart-azure-monitor-route-logs-to-storage-account.md).
+With Privileged Identity Management (PIM) in Azure Active Directory (Azure AD), part of Microsoft Entra, you can view activity, activations, and audit history for Azure resource roles within your organization. This includes subscriptions, resource groups, and even virtual machines. Any resource within the Microsoft Entra admin center that leverages the Azure role-based access control functionality can take advantage of the security and lifecycle management capabilities in Privileged Identity Management. If you want to retain audit data for longer than the default retention period, you can use Azure Monitor to route it to an Azure storage account. For more information, see [Archive Azure AD logs to an Azure storage account](../reports-monitoring/quickstart-azure-monitor-route-logs-to-storage-account.md).
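Routing the audit data to a storage account is done with an Azure Monitor diagnostic setting on the tenant. A minimal sketch, assuming the `microsoft.aadiam` Azure Resource Manager endpoint and placeholder resource IDs; the setting name is hypothetical:

```python
import requests

TOKEN = "<arm-access-token>"  # placeholder: e.g. from `az account get-access-token`
HEADERS = {"Authorization": f"Bearer {TOKEN}", "Content-Type": "application/json"}

# Create or update a tenant-level diagnostic setting that archives audit logs.
url = ("https://management.azure.com/providers/microsoft.aadiam"
       "/diagnosticSettings/export-audit?api-version=2017-04-01")
setting = {
    "properties": {
        "storageAccountId": "<storage-account-resource-id>",  # placeholder
        "logs": [
            {"category": "AuditLogs", "enabled": True,
             "retentionPolicy": {"enabled": True, "days": 365}}
        ],
    }
}
requests.put(url, headers=HEADERS, json=setting)
```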
> [!NOTE] > If your organization has outsourced management functions to a service provider who uses [Azure Lighthouse](../../lighthouse/overview.md), role assignments authorized by that service provider won't be shown here. ## View activity and activations + To see what actions a specific user took in various resources, you can view the Azure resource activity that's associated with a given activation period.
-1. Open **Azure AD Privileged Identity Management**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Privileged Role Administrator](../roles/permissions-reference.md#privileged-role-administrator).
-1. Select **Azure resources**.
+1. Browse to **Identity governance** > **Privileged Identity Management** > **Azure resources**.
1. Select the resource you want to view activity and activations for.
To see what actions a specific user took in various resources, you can view the
You may have a compliance requirement where you must provide a complete list of role assignments to auditors. Privileged Identity Management enables you to query role assignments at a specific resource, which includes role assignments for all child resources. Previously, it was difficult for administrators to get a complete list of role assignments for a subscription and they had to export role assignments for each specific resource. Using Privileged Identity Management, you can query for all active and eligible role assignments in a subscription including role assignments for all resource groups and resources.
-1. Open **Azure AD Privileged Identity Management**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Privileged Role Administrator](../roles/permissions-reference.md#privileged-role-administrator).
-1. Select **Azure resources**.
+1. Browse to **Identity governance** > **Privileged Identity Management** > **Azure resources**.
1. Select the resource you want to export role assignments for, such as a subscription.
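If you need the same export in script form, the Azure Resource Manager PIM APIs list both eligible and active role assignment schedule instances at a scope. A minimal sketch, assuming an ARM access token and a placeholder subscription ID:

```python
import requests

TOKEN = "<arm-access-token>"  # placeholder: e.g. from `az account get-access-token`
HEADERS = {"Authorization": f"Bearer {TOKEN}"}
SCOPE = "subscriptions/<subscription-id>"  # placeholder scope

# Eligible assignments come from roleEligibilityScheduleInstances,
# active ones from roleAssignmentScheduleInstances.
for kind in ("roleEligibilityScheduleInstances", "roleAssignmentScheduleInstances"):
    url = (f"https://management.azure.com/{SCOPE}"
           f"/providers/Microsoft.Authorization/{kind}?api-version=2020-10-01")
    for item in requests.get(url, headers=HEADERS).json().get("value", []):
        props = item["properties"]
        print(kind, props["principalId"], props["roleDefinitionId"], props.get("scope"))
```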
You may have a compliance requirement where you must provide a complete list of
Resource audit gives you a view of all role activity for a resource.
-1. Open **Azure AD Privileged Identity Management**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Privileged Role Administrator](../roles/permissions-reference.md#privileged-role-administrator).
-1. Select **Azure resources**.
+1. Browse to **Identity governance** > **Privileged Identity Management** > **Azure resources**.
1. Select the resource you want to view audit history for.
Resource audit gives you a view of all role activity for a resource.
My audit enables you to view your personal role activity.
-1. Open **Azure AD Privileged Identity Management**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Privileged Role Administrator](../roles/permissions-reference.md#privileged-role-administrator).
-1. Select **Azure resources**.
+1. Browse to **Identity governance** > **Privileged Identity Management** > **Azure resources**.
1. Select the resource you want to view audit history for.
My audit enables you to view your personal role activity.
[!INCLUDE [portal updates](~/articles/active-directory/includes/portal-update.md)]
-1. Sign in to the [Azure portal](https://portal.azure.com) with Privileged Role administrator role permissions, and open Azure AD.
-1. Select **Audit logs**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Privileged Role Administrator](../roles/permissions-reference.md#privileged-role-administrator).
+
+1. Browse to **Identity** > **Monitoring & health** > **Audit logs**.
+ 1. Use the **Service** filter to display only audit events for the Privileged Identity Management service. On the **Audit logs** page, you can: - See the reason for an audit event in the **Status reason** column.
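A similar filtered view is available from the Microsoft Graph audit log API. A minimal sketch, assuming a token with `AuditLog.Read.All`; the exact `loggedByService` value is an assumption and may differ:

```python
import requests

TOKEN = "<access-token>"  # placeholder
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

# List recent directory audit events attributed to the PIM service.
resp = requests.get(
    "https://graph.microsoft.com/v1.0/auditLogs/directoryAudits",
    headers=HEADERS,
    params={"$filter": "loggedByService eq 'PIM'", "$top": "50"},
)
for event in resp.json().get("value", []):
    print(event["activityDateTime"], event["activityDisplayName"], event["result"])
```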
My audit enables you to view your personal role activity.
1. Select an audit log event to see the ticket number on the **Activity** tab of the **Details** pane.
- [![Check the ticket number for the audit event](media/azure-pim-resource-rbac/audit-event-ticket-number.png "Check the ticket number for the audit event")](media/azure-pim-resource-rbac/audit-event-ticket-number.png)]
+ [![Check the ticket number for the audit event](media/azure-pim-resource-rbac/audit-event-ticket-number.png "Check the ticket number for the audit event")](media/azure-pim-resource-rbac/audit-event-ticket-number.png)
1. You can view the requester (person activating the role) on the **Targets** tab of the **Details** pane for an audit event. There are three target types for Azure resource roles:
active-directory Concept Pim For Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/concept-pim-for-groups.md
na Previously updated : 6/7/2023 Last updated : 8/15/2023
To learn more about Azure AD built-in roles and their permissions, see [Azure AD
One Azure AD tenant can have up to 500 role-assignable groups. To learn more about Azure AD service limits and restrictions, see [Azure AD service limits and restrictions](../enterprise-users/directory-service-limits-restrictions.md).
-Azure AD role-assignable group feature is not part of Azure AD Privileged Identity Management (Azure AD PIM). It requires a Microsft Entra Premium P1, P2, or Micrsoft Entra ID Governance license.
+Azure AD role-assignable group feature is not part of Azure AD Privileged Identity Management (Azure AD PIM). It requires a Microsoft Entra Premium P1, P2, or Microsoft Entra ID Governance license.
## Relationship between role-assignable groups and PIM for Groups
One group can be an eligible member of another group, even if one of those group
If a user is an active member of Group A, and Group A is an eligible member of Group B, the user can activate their membership in Group B. The activation applies only to the user who requested it; it doesn't mean that the entire Group A becomes an active member of Group B.
+## Privileged Identity Management and app provisioning (Public Preview)
+
+> [!VIDEO https://www.youtube.com/embed/9T6lKEloq0Q]
+
+If the group is configured for [app provisioning](../app-provisioning/index.yml), activation of group membership triggers provisioning of the group membership (and of the user account itself, if it wasn't provisioned previously) to the application using the SCIM protocol.
+
+The public preview includes functionality that triggers provisioning right after group membership is activated in PIM.
+Provisioning configuration depends on the application. Generally, we recommend having at least two groups assigned to the application. Depending on the number of roles in your application, you may choose to define additional "privileged groups":
++
+|Group|Purpose|Members|Group membership|Role assigned in the application|
+|--|--|--|--|--|
+|All users group|Ensure that all users that need access to the application are constantly provisioned to the application.|All users that need to access the application.|Active|None, or low-privileged role|
+|Privileged group|Provide just-in-time access to privileged role in the application.|Users that need to have just-in-time access to privileged role in the application.|Eligible|Privileged role|
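For context, when activation triggers provisioning, the service sends standard SCIM 2.0 (RFC 7644) requests to the application's SCIM endpoint. The sketch below shows the shape of a group-membership PATCH; the endpoint URL, token, and IDs are hypothetical:

```python
import requests

SCIM_BASE = "https://app.example.com/scim/v2"  # hypothetical SCIM endpoint
HEADERS = {"Authorization": "Bearer <scim-token>", "Content-Type": "application/scim+json"}

# Add the activated user to the app-side group, per RFC 7644 PatchOp semantics.
patch = {
    "schemas": ["urn:ietf:params:scim:api:messages:2.0:PatchOp"],
    "Operations": [
        {"op": "add", "path": "members", "value": [{"value": "<app-user-id>"}]}
    ],
}
requests.patch(f"{SCIM_BASE}/Groups/<app-group-id>", headers=HEADERS, json=patch)
```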
+ ## Next steps - [Bring groups into Privileged Identity Management](groups-discover-groups.md) - [Assign eligibility for a group in Privileged Identity Management](groups-assign-member-owner.md) - [Activate your group membership or ownership in Privileged Identity Management](groups-activate-roles.md)-- [Approve activation requests for group members and owners](groups-approval-workflow.md)
+- [Approve activation requests for group members and owners](groups-approval-workflow.md)
active-directory Groups Activate Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/groups-activate-roles.md
This article is for eligible members or owners who want to activate their group
When you need to take on a group membership or ownership, you can request activation by using the **My roles** navigation option in PIM.
-1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Privileged role administrator](../roles/permissions-reference.md#privileged-role-administrator).
-1. Select **Azure AD Privileged Identity Management -> My roles -> Groups**.
+1. Browse to **Identity governance** > **Privileged Identity Management** > **My roles** > **Groups**.
>[!NOTE] > You may also use this [short link](https://aka.ms/pim) to open the **My roles** page directly.
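Activation can also be requested through Microsoft Graph instead of the portal. A minimal sketch, assuming a delegated token with the `PrivilegedAssignmentSchedule.ReadWrite.AzureADGroup` permission and placeholder object IDs:

```python
import requests
from datetime import datetime, timezone

TOKEN = "<access-token>"  # placeholder
HEADERS = {"Authorization": f"Bearer {TOKEN}", "Content-Type": "application/json"}

# Self-activate an eligible group membership for eight hours.
body = {
    "accessId": "member",                    # or "owner"
    "principalId": "<your-user-object-id>",  # placeholder
    "groupId": "<group-object-id>",          # placeholder
    "action": "selfActivate",
    "scheduleInfo": {
        "startDateTime": datetime.now(timezone.utc).isoformat(),
        "expiration": {"type": "afterDuration", "duration": "PT8H"},
    },
    "justification": "Need temporary membership for a support task",
}
resp = requests.post(
    "https://graph.microsoft.com/v1.0/identityGovernance/privilegedAccess/group/assignmentScheduleRequests",
    headers=HEADERS,
    json=body,
)
print(resp.status_code, resp.json().get("status"))
```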
active-directory Groups Approval Workflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/groups-approval-workflow.md
Follow the steps in this article to approve or deny requests for group membershi
## View pending requests + As a delegated approver, you receive an email notification when an Azure resource role request is pending your approval. You can view pending requests in Privileged Identity Management.
-1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Privileged role administrator](../roles/permissions-reference.md#privileged-role-administrator).
-1. Select **Azure AD Privileged Identity Management** > **Approve requests** > **Groups**.
+1. Browse to **Identity governance** > **Privileged Identity Management** > **Approve requests** > **Groups**.
1. In the **Requests for role activations** section, you see a list of requests pending your approval.
When you activate a role in Privileged Identity Management, the activation might
If your activation is delayed:
-1. Sign out of the Azure portal and then sign back in.
+1. Sign out of the Microsoft Entra admin center and then sign back in.
1. In Privileged Identity Management, verify that you're listed as the member of the role. ## Next steps
active-directory Groups Assign Member Owner https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/groups-assign-member-owner.md
When a membership or ownership is assigned, the assignment:
## Assign an owner or member of a group + Follow these steps to make a user an eligible member or owner of a group. You will need permissions to manage groups. For role-assignable groups, you need to have Global Administrator, Privileged Role Administrator role, or be an Owner of the group. For non-role-assignable groups, you need to have Global Administrator, Directory Writer, Groups Administrator, Identity Governance Administrator, User Administrator role, or be an Owner of the group. Role assignments for administrators should be scoped at directory level (not administrative unit level). > [!NOTE] > Other roles with permissions to manage groups (such as Exchange Administrators for non-role-assignable M365 groups) and administrators with assignments scoped at administrative unit level can manage groups through Groups API/UX and override changes made in Azure AD PIM.
-1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com).
+
+1. Browse to **Identity governance** > **Privileged Identity Management** > **Groups**.
-1. Select **Azure AD Privileged Identity Management -> Groups** and view groups that are already enabled for PIM for Groups.
+1. Here you can view groups that are already enabled for PIM for Groups.
:::image type="content" source="media/pim-for-groups/pim-group-1.png" alt-text="Screenshot of where to view groups that are already enabled for PIM for Groups." lightbox="media/pim-for-groups/pim-group-1.png":::
Follow these steps to make a user eligible member or owner of a group. You will
## Update or remove an existing role assignment + Follow these steps to update or remove an existing role assignment. You will need permissions to manage groups. For role-assignable groups, you need to have Global Administrator, Privileged Role Administrator role, or be an Owner of the group. For non-role-assignable groups, you need to have Global Administrator, Directory Writer, Groups Administrator, Identity Governance Administrator, User Administrator role, or be an Owner of the group. Role assignments for administrators should be scoped at directory level (not administrative unit level). > [!NOTE] > Other roles with permissions to manage groups (such as Exchange Administrators for non-role-assignable M365 groups) and administrators with assignments scoped at administrative unit level can manage groups through Groups API/UX and override changes made in Azure AD PIM.
-1. Sign in to the [Azure portal](https://portal.azure.com) with appropriate role permissions.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Privileged Role Administrator](../roles/permissions-reference.md#privileged-role-administrator).
+
+1. Browse to **Identity governance** > **Privileged Identity Management** > **Groups**.
-1. Select **Azure AD Privileged Identity Management -> Groups** and view groups that are already enabled for PIM for Groups.
+1. Here you can view groups that are already enabled for PIM for Groups.
:::image type="content" source="media/pim-for-groups/pim-group-1.png" alt-text="Screenshot of where to view groups that are already enabled for PIM for Groups." lightbox="media/pim-for-groups/pim-group-1.png":::
active-directory Groups Audit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/groups-audit.md
Title: Audit activity history for group assignments in Privileged Identity Management
-description: View activity and audit activity history for group assignments in Azure AD Privileged Identity Management (PIM).
+description: View activity and audit activity history for group assignments in Privileged Identity Management (PIM).
documentationcenter: ''
Follow these steps to view the audit history for groups in Privileged Identity M
## View resource audit history + **Resource audit** gives you a view of all activity associated with groups in PIM.
-1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Privileged role administrator](../roles/permissions-reference.md#privileged-role-administrator).
-1. Select **Azure AD Privileged Identity Management -> Groups**.
+1. Browse to **Identity governance** > **Privileged Identity Management** > **Groups**.
1. Select the group you want to view audit history for.
Follow these steps to view the audit history for groups in Privileged Identity M
**My audit** enables you to view your personal role activity for groups in PIM.
-1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com).
-1. Select **Azure AD Privileged Identity Management -> Groups**.
+1. Browse to **Identity governance** > **Privileged Identity Management** > **Groups**.
1. Select the group you want to view audit history for.
active-directory Groups Discover Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/groups-discover-groups.md
You need appropriate permissions to bring groups in Azure AD PIM. For role-assig
> Other roles with permissions to manage groups (such as Exchange Administrators for non-role-assignable M365 groups) and administrators with assignments scoped at administrative unit level can manage groups through Groups API/UX and override changes made in Azure AD PIM.
-1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Privileged Role Administrator](../roles/permissions-reference.md#privileged-role-administrator).
-1. Select **Azure AD Privileged Identity Management -> Groups** and view groups that are already enabled for PIM for Groups.
+1. Browse to **Identity governance** > **Privileged Identity Management** > **Groups**.
+
+1. Here you can view groups that are already enabled for PIM for Groups.
:::image type="content" source="media/pim-for-groups/pim-group-1.png" alt-text="Screenshot of where to view groups that are already enabled for PIM for Groups." lightbox="media/pim-for-groups/pim-group-1.png":::
You need appropriate permissions to bring groups in Azure AD PIM. For role-assig
> [!NOTE]
-> Alternatively, you can use the Groups blade to bring group under Privileged Identity Management.
+> Alternatively, you can use the Groups pane to bring a group under Privileged Identity Management.
> [!NOTE] > Once a group is managed, it can't be taken out of management. This prevents another resource administrator from removing PIM settings.
active-directory Groups Renew Extend https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/groups-renew-extend.md
After the request has been submitted, resource administrators are notified of a
### Admin approves
-Resource administrators can access the renewal request from the link in the email notification or by accessing Privileged Identity Management from the Azure portal and selecting **Approve requests** from the left pane.
+Resource administrators can access the renewal request from the link in the email notification or by accessing Privileged Identity Management from the Microsoft Entra admin center and selecting **Approve requests** from the left pane.
When an administrator selects **Approve** or **Deny**, the details of the request are shown along with a field to provide a business justification for the audit logs.
active-directory Groups Role Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/groups-role-settings.md
Role settings are defined per role per group. All assignments for the same role
## Update role settings - To open the settings for a group role:
-1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com).
-1. Select **Azure AD Privileged Identity Management** > **Groups**.
+1. Browse to **Identity governance** > **Privileged Identity Management** > **Groups**.
1. Select the group for which you want to configure role settings.
active-directory Pim Approval Workflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-approval-workflow.md
Title: Approve or deny requests for Azure AD roles in PIM
-description: Learn how to approve or deny requests for Azure AD roles in Azure AD Privileged Identity Management (PIM).
+description: Learn how to approve or deny requests for Azure AD roles in Privileged Identity Management (PIM).
documentationcenter: ''
With Privileged Identity Management (PIM) in Azure Active Directory (Azure AD),
As a delegated approver, you'll receive an email notification when an Azure AD role request is pending your approval. You can view these pending requests in Privileged Identity Management.
-1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com).
-1. Open **Azure AD Privileged Identity Management**.
-
-1. Select **Approve requests**.
+1. Browse to **Identity governance** > **Privileged Identity Management** > **Approve requests**.
![Approve requests - page showing request to review Azure AD roles](./media/azure-ad-pim-approval-workflow/resources-approve-pane.png)
active-directory Pim Complete Roles And Resource Roles Review https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-complete-roles-and-resource-roles-review.md
Title: Complete an access review of Azure resource and Azure AD roles in PIM
-description: Learn how to complete an access review of Azure resource and Azure AD roles Privileged Identity Management in Azure Active Directory.
+description: Learn how to complete an access review of Azure resource and Azure AD roles in Privileged Identity Management.
documentationcenter: ''
Once the review has been created, follow the steps in this article to complete t
[!INCLUDE [portal updates](~/articles/active-directory/includes/portal-update.md)]
-1. Sign in to the [Azure portal](https://portal.azure.com). For **Azure resources**, navigate to **Privileged Identity Management** and select **Azure resources** under **Manage** from the dashboard. For **Azure AD roles**, select **Azure AD roles** from the same dashboard.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as a user that is assigned one of the prerequisite roles.
-2. For **Azure resources**, select your resource under **Azure resources** and then select **Access reviews** from the dashboard. For **Azure AD roles**, proceed directly to the **Access reviews** on the dashboard.
+1. Browse to **Identity governance** > **Privileged Identity Management**.
-3. Select the access review that you want to manage. Below is a sample screenshot of the **Access Reviews** overview for both **Azure resources** and **Azure AD roles**.
+1. For **Azure AD roles**, select **Azure AD roles**. For **Azure resources**, select **Azure resources**.
+
+1. Select the access review that you want to manage. Below is a sample screenshot of the **Access Reviews** overview for both **Azure resources** and **Azure AD roles**.
:::image type="content" source="media/pim-complete-azure-ad-roles-and-resource-roles-review/rbac-azure-ad-roles-home-list.png" alt-text="Access reviews list showing role, owner, start date, end date, and status screenshot." lightbox="media/pim-complete-azure-ad-roles-and-resource-roles-review/rbac-azure-ad-roles-home-list.png":::
active-directory Pim Create Roles And Resource Roles Review https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-create-roles-and-resource-roles-review.md
Title: Create an access review of Azure resource and Azure AD roles in PIM
-description: Learn how to create an access review of Azure resource and Azure AD roles in Azure AD Privileged Identity Management (PIM).
+description: Learn how to create an access review of Azure resource and Azure AD roles in Privileged Identity Management (PIM).
documentationcenter: ''
For more information about licenses for PIM, refer to [License requirements to u
Access Reviews for **Service Principals** require an Entra Workload Identities Premium plan in addition to Microsoft Entra Premium P2 or Microsoft Entra ID Governance licenses. -- Workload Identities Premium licensing: You can view and acquire licenses on the [Workload Identities blade](https://portal.azure.com/#view/Microsoft_Azure_ManagedServiceIdentity/WorkloadIdentitiesBlade) in the Azure portal.
+- Workload Identities Premium licensing: You can view and acquire licenses on the [Workload Identities blade](https://portal.azure.com/#view/Microsoft_Azure_ManagedServiceIdentity/WorkloadIdentitiesBlade) in the Microsoft Entra admin center.
## Create access reviews [!INCLUDE [portal updates](~/articles/active-directory/includes/portal-update.md)]
-1. Sign in to the [Azure portal](https://portal.azure.com) as a user that is assigned to one of the prerequisite role(s).
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as a user that is assigned one of the prerequisite roles.
-2. Select **Identity Governance**.
-
-3. For **Azure AD roles**, select **Azure AD roles** under **Privileged Identity Management**. For **Azure resources**, select **Azure resources** under **Privileged Identity Management**.
+1. Browse to **Identity governance** > **Privileged Identity Management**.
+
+1. For **Azure AD roles**, select **Azure AD roles**. For **Azure resources**, select **Azure resources**.
- :::image type="content" source="./media/pim-create-azure-ad-roles-and-resource-roles-review/identity-governance.png" alt-text="Select Identity Governance in the Azure portal screenshot." lightbox="./media/pim-create-azure-ad-roles-and-resource-roles-review/identity-governance.png":::
+ :::image type="content" source="./media/pim-create-azure-ad-roles-and-resource-roles-review/identity-governance.png" alt-text="Select Identity Governance in the Microsoft Entra admin center screenshot." lightbox="./media/pim-create-azure-ad-roles-and-resource-roles-review/identity-governance.png":::
4. For **Azure AD roles**, select **Azure AD roles** again under **Manage**. For **Azure resources**, select the subscription you want to manage.
active-directory Pim Deployment Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-deployment-plan.md
For both Azure AD and Azure resource role, make sure that youΓÇÖve users represe
### Plan rollback
-If PIM fails to work as desired in the production environment, you can change the role assignment from eligible to active once again. For each role that you’ve configured, select the ellipsis (…) for all users with assignment type as **eligible**. You can then select the **Make active** option to go back and make the role assignment **active**.
+If PIM fails to work as desired in the production environment, you can change the role assignment from eligible to active once again. For each role that you’ve configured, select the ellipsis **(…)** for all users with assignment type as **eligible**. You can then select the **Make active** option to go back and make the role assignment **active**.
## Plan and implement PIM for Azure AD roles
active-directory Pim Getting Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-getting-started.md
Title: Start using PIM
-description: Learn how to enable and get started using Azure AD Privileged Identity Management (PIM) in the Azure portal.
+description: Learn how to enable and get started using Privileged Identity Management (PIM) in the Microsoft Entra admin center.
documentationcenter: ''
Once Privileged Identity Management is set up, you can learn your way around.
[!INCLUDE [portal updates](~/articles/active-directory/includes/portal-update.md)]
-To make it easier to open Privileged Identity Management, add a PIM tile to your Azure portal dashboard.
+To make it easier to open Privileged Identity Management, add a PIM tile to your Microsoft Entra admin center dashboard.
-1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Sign in to the [Microsoft Entra admin center all services page](https://entra.microsoft.com/#allservices/category/All).
-1. Select **All services** and find the **Azure AD Privileged Identity Management** service.
+1. Find the **Azure AD Privileged Identity Management** service.
![Azure AD Privileged Identity Management in All services](./media/pim-getting-started/pim-all-services-find.png)
active-directory Pim How To Activate Role https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-how-to-activate-role.md
Title: Activate Azure AD roles in PIM
-description: Learn how to activate Azure AD roles in Azure AD Privileged Identity Management (PIM).
+description: Learn how to activate Azure AD roles in Privileged Identity Management (PIM).
documentationcenter: ''
This article is for administrators who need to activate their Azure AD role in P
When you need to assume an Azure AD role, you can request activation by opening **My roles** in Privileged Identity Management.
-1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Privileged role administrator](../roles/permissions-reference.md#privileged-role-administrator).
-1. Open **Azure AD Privileged Identity Management**. For information about how to add the Privileged Identity Management tile to your dashboard, see [Start using Privileged Identity Management](pim-getting-started.md).
+1. Browse to **Identity governance** > **Privileged Identity Management** > **My roles**. For information about how to add the Privileged Identity Management tile to your dashboard, see [Start using Privileged Identity Management](pim-getting-started.md).
-1. Select **My roles**, and then select **Azure AD roles** to see a list of your eligible Azure AD roles.
+1. Select **Azure AD roles** to see a list of your eligible Azure AD roles.
![My roles page showing roles you can activate](./media/pim-how-to-activate-role/my-roles.png)
active-directory Pim How To Add Role To User https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-how-to-add-role-to-user.md
Title: Assign Azure AD roles in PIM
-description: Learn how to assign Azure AD roles in Azure AD Privileged Identity Management (PIM).
+description: Learn how to assign Azure AD roles in Privileged Identity Management (PIM).
documentationcenter: ''
# Assign Azure AD roles in Privileged Identity Management
-With Azure Active Directory (Azure AD), a Global administrator can make **permanent** Azure AD admin role assignments. These role assignments can be created using the [Azure portal](../roles/permissions-reference.md) or using [PowerShell commands](/powershell/module/azuread#directory_roles).
+With Azure Active Directory (Azure AD), a Global administrator can make **permanent** Azure AD admin role assignments. These role assignments can be created using the [Microsoft Entra admin center](../roles/permissions-reference.md) or using [PowerShell commands](/powershell/module/azuread#directory_roles).
The Azure AD Privileged Identity Management (PIM) service also allows Privileged role administrators to make permanent admin role assignments. Additionally, Privileged role administrators can make users **eligible** for Azure AD admin roles. An eligible administrator can activate the role when they need it, and then their permissions expire once they're done.
Privileged Identity Management supports both built-in and custom Azure AD roles.
Follow these steps to make a user eligible for an Azure AD admin role.
-1. Sign in to the [Azure portal](https://portal.azure.com) with a user that is a member of the [Privileged role administrator](../roles/permissions-reference.md#privileged-role-administrator) role.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Privileged Role Administrator](../roles/permissions-reference.md#privileged-role-administrator).
-1. Open **Azure AD Privileged Identity Management**.
-
-1. Select **Azure AD roles**.
+1. Browse to **Identity governance** > **Privileged Identity Management** > **Azure AD roles**.
1. Select **Roles** to see the list of roles for Azure AD permissions.
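The same eligible assignment can be created programmatically through the Microsoft Graph role management API. A minimal sketch, assuming a token with the `RoleEligibilitySchedule.ReadWrite.Directory` permission and a placeholder principal; the role definition ID shown is the built-in User Administrator role template, used here only as an example:

```python
import requests

TOKEN = "<access-token>"  # placeholder
HEADERS = {"Authorization": f"Bearer {TOKEN}", "Content-Type": "application/json"}

body = {
    "action": "adminAssign",
    "principalId": "<user-object-id>",  # placeholder
    "roleDefinitionId": "fe930be7-5e62-47db-91af-98c3a49a38b1",  # User Administrator
    "directoryScopeId": "/",  # tenant-wide; use "/administrativeUnits/<id>" to restrict
    "justification": "Eligible assignment for help desk lead",
    "scheduleInfo": {
        "startDateTime": "2023-09-01T00:00:00Z",
        "expiration": {"type": "noExpiration"},
    },
}
requests.post(
    "https://graph.microsoft.com/v1.0/roleManagement/directory/roleEligibilityScheduleRequests",
    headers=HEADERS,
    json=body,
)
```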
Follow these steps to make a user eligible for an Azure AD admin role.
For certain roles, the scope of the granted permissions can be restricted to a single admin unit, service principal, or application. This procedure is an example of assigning a role that has the scope of an administrative unit. For a list of roles that support scope via administrative unit, see [Assign scoped roles to an administrative unit](../roles/admin-units-assign-roles.md). This feature is currently being rolled out to Azure AD organizations.
-1. Sign in to the [Azure portal](https://portal.azure.com) with Privileged Role Administrator permissions.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Privileged Role Administrator](../roles/permissions-reference.md#privileged-role-administrator).
-1. Select **Azure Active Directory** > **Roles and administrators**.
+1. Browse to **Identity** > **Roles & admins** > **Roles & admins**.
1. Select the **User Administrator**.
The following is an example of the response. The response object shown here migh
Follow these steps to update or remove an existing role assignment. **Microsoft Entra Premium P2 or Microsoft Entra ID Governance licensed customers only**: Don't assign a group as Active to a role through both Azure AD and Privileged Identity Management (PIM). For a detailed explanation, see [Known issues](../roles/groups-concept.md#known-issues).
-1. Open **Azure AD Privileged Identity Management**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Privileged Role Administrator](../roles/permissions-reference.md#privileged-role-administrator).
-1. Select **Azure AD roles**.
+1. Browse to **Identity governance** > **Privileged Identity Management** > **Azure AD roles**.
1. Select **Roles** to see the list of roles for Azure AD.
active-directory Pim How To Change Default Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-how-to-change-default-settings.md
Title: Configure Azure AD role settings in PIM
-description: Learn how to configure Azure AD role settings in Azure AD Privileged Identity Management (PIM).
+description: Learn how to configure Azure AD role settings in Privileged Identity Management (PIM).
documentationcenter: ''
PIM role settings are also known as PIM policies.
To open the settings for an Azure AD role:
-1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Privileged Role Administrator](../roles/permissions-reference.md#privileged-role-administrator).
-1. Select **Azure AD Privileged Identity Management** > **Azure AD Roles** > **Roles**. This page shows a list of Azure AD roles available in the tenant, including built-in and custom roles.
+1. Browse to **Identity governance** > **Privileged Identity Management** > **Azure AD roles** > **Roles**.
+
+1. This page shows a list of Azure AD roles available in the tenant, including built-in and custom roles.
:::image type="content" source="media/pim-how-to-change-default-settings/role-settings.png" alt-text="Screenshot that shows the list of Azure AD roles available in the tenant, including built-in and custom roles." lightbox="media/pim-how-to-change-default-settings/role-settings.png"::: 1. Select the role whose settings you want to configure.
active-directory Pim How To Configure Security Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-how-to-configure-security-alerts.md
Title: Security alerts for Azure AD roles in PIM
-description: Configure security alerts for Azure AD roles Privileged Identity Management in Azure Active Directory.
+description: Configure security alerts for Azure AD roles in Privileged Identity Management.
documentationcenter: ''
Severity: **Low**
Follow these steps to configure security alerts for Azure AD roles in Privileged Identity Management:
-1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Privileged Role Administrator](../roles/permissions-reference.md#privileged-role-administrator).
-1. Open **Azure AD Privileged Identity Management**. For information about how to add the Privileged Identity Management tile to your dashboard, see [Start using Privileged Identity Management](pim-getting-started.md).
-
-1. From the left menu, select **Azure AD Roles**.
-
-1. From the left menu, select **Alerts**, and then select **Setting**.
+1. Browse to **Identity governance** > **Privileged Identity Management** > **Azure AD roles** > **Alerts** > **Setting**. For information about how to add the Privileged Identity Management tile to your dashboard, see [Start using Privileged Identity Management](pim-getting-started.md).
![Screenshots of alerts page with the settings highlighted.](media/pim-how-to-configure-security-alerts/alert-settings.png)
active-directory Pim How To Renew Extend https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-how-to-renew-extend.md
Title: Renew Azure AD role assignments in PIM
-description: Learn how to extend or renew Azure Active Directory role assignments in Azure AD Privileged Identity Management (PIM).
+description: Learn how to extend or renew Azure Active Directory role assignments in Privileged Identity Management (PIM).
documentationcenter: ''
After the request has been submitted, administrators are notified of a pending r
### Admin approves
-Azure AD administrators can access the renewal request from the link in the email notification, or by accessing Privileged Identity Management from the Azure portal and selecting **Approve requests** in PIM.
+Azure AD administrators can access the renewal request from the link in the email notification, or by accessing Privileged Identity Management from the Microsoft Entra admin center and selecting **Approve requests** in PIM.
![Azure AD roles - Approve requests page listing requests and links to approve or deny](./media/pim-how-to-renew-extend/extend-admin-approve-list.png)
active-directory Pim How To Use Audit Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-how-to-use-audit-log.md
Follow these steps to view the audit history for Azure AD roles.
## View resource audit history + Resource audit gives you a view of all activity associated with your Azure AD roles.
-1. Open **Azure AD Privileged Identity Management**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Privileged Role Administrator](../roles/permissions-reference.md#privileged-role-administrator).
-1. Select **Azure AD roles**.
+1. Browse to **Identity governance** > **Privileged Identity Management** > **Azure AD roles**.
1. Select **Resource audit**.
Resource audit gives you a view of all activity associated with your Azure AD ro
My audit enables you to view your personal role activity.
-1. Open **Azure AD Privileged Identity Management**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Privileged Role Administrator](../roles/permissions-reference.md#privileged-role-administrator).
-1. Select **Azure AD roles**.
+1. Browse to **Identity governance** > **Privileged Identity Management** > **Azure AD roles**.
1. Select the resource you want to view audit history for.
active-directory Pim Perform Roles And Resource Roles Review https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-perform-roles-and-resource-roles-review.md
Title: Perform an access review of Azure resource and Azure AD roles in PIM
-description: Learn how to review access of Azure resource and Azure AD roles in Azure AD Privileged Identity Management (PIM).
+description: Learn how to review access of Azure resource and Azure AD roles in Privileged Identity Management (PIM).
documentationcenter: ''
Privileged Identity Management (PIM) simplifies how enterprises manage privileged access to resources in Azure Active Directory (AD), part of Microsoft Entra, and other Microsoft online services like Microsoft 365 or Microsoft Intune. Follow the steps in this article to perform reviews of access to roles.
-If you are assigned to an administrative role, your organization's privileged role administrator may ask you to regularly confirm that you still need that role for your job. You might get an email that includes a link, or you can go straight to the [Azure portal](https://portal.azure.com) and begin.
+If you're assigned to an administrative role, your organization's privileged role administrator may ask you to regularly confirm that you still need that role for your job. You might get an email that includes a link, or you can go straight to the [Microsoft Entra admin center](https://entra.microsoft.com) and begin.
If you're a privileged role administrator or global administrator interested in access reviews, get more details at [How to start an access review](./pim-create-roles-and-resource-roles-review.md).
If you're a privileged role administrator or global administrator interested in
[!INCLUDE [portal updates](~/articles/active-directory/includes/portal-update.md)]
-You can approve or deny access based on whether the user still needs access to the role. Choose **Approve** if you want them to stay in the role, or **Deny** if they do not need the access anymore. The users' assignment status will not change until the review closes and the administrator applies the results. Common scenarios in which certain denied users cannot have results applied to them may include the following:
+You can approve or deny access based on whether the user still needs access to the role. Choose **Approve** if you want them to stay in the role, or **Deny** if they don't need the access anymore. The users' assignment status won't change until the review closes and the administrator applies the results. Common scenarios in which certain denied users can't have results applied to them may include the following:
-- **Reviewing members of a synced on-premises Windows AD group**: If the group is synced from an on-premises Windows AD, the group cannot be managed in Azure AD and therefore membership cannot be changed.-- **Reviewing a role with nested groups assigned**: For users who have membership through a nested group, the access review will not remove their membership to the nested group and therefore they will retain access to the role being reviewed.
+- **Reviewing members of a synced on-premises Windows AD group**: If the group is synced from an on-premises Windows AD, the group can't be managed in Azure AD, and therefore membership can't be changed.
+- **Reviewing a role with nested groups assigned**: For users who have membership through a nested group, the access review won't remove their membership to the nested group and therefore they retain access to the role being reviewed.
- **User not found or other errors**: These may also result in an apply result not being supported. Follow these steps to find and complete the access review:
-1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Select **Azure Active Directory** and open **Privileged Identity Management**.
-1. Select **Review access**. If you have any pending access reviews, they will appear in the access reviews page.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com).
- :::image type="content" source="media/pim-perform-azure-ad-roles-and-resource-roles-review/rbac-access-review-azure-ad-complete.png" alt-text="Screenshot of Privileged Identity Management application, with Review access blade selected for Azure AD roles." lightbox="media/pim-perform-azure-ad-roles-and-resource-roles-review/rbac-access-review-azure-ad-complete.png":::
+1. Browse to **Identity governance** > **Privileged Identity Management** > **Review access**.
+
+1. If you have any pending access reviews, they appear in the access reviews page.
+
+ :::image type="content" source="media/pim-perform-azure-ad-roles-and-resource-roles-review/rbac-access-review-azure-ad-complete.png" alt-text="Screenshot of Privileged Identity Management application, with Review access pane selected for Azure AD roles." lightbox="media/pim-perform-azure-ad-roles-and-resource-roles-review/rbac-access-review-azure-ad-complete.png":::
1. Select the review you want to complete.+ 1. Choose **Approve** or **Deny**. In the **Provide a reason box**, enter a business justification for your decision as needed. :::image type="content" source="media/pim-perform-azure-ad-roles-and-resource-roles-review/rbac-access-review-azure-ad-completed.png" alt-text="Screenshot of Privileged Identity Management application, with the selected Access Review for Azure AD roles." lightbox="media/pim-perform-azure-ad-roles-and-resource-roles-review/rbac-access-review-azure-ad-completed.png":::
active-directory Pim Resource Roles Activate Your Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-resource-roles-activate-your-roles.md
This article is for members who need to activate their Azure resource role in Pr
When you need to take on an Azure resource role, you can request activation by using the **My roles** navigation option in Privileged Identity Management.
-1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Privileged Role Administrator](../roles/permissions-reference.md#privileged-role-administrator).
-1. Open **Azure AD Privileged Identity Management**. For information about how to add the Privileged Identity Management tile to your dashboard, see [Start using Privileged Identity Management](pim-getting-started.md).
+1. Browse to **Identity governance** > **Privileged Identity Management**.
+
+1. For information about how to add the Privileged Identity Management tile to your dashboard, see [Start using Privileged Identity Management](pim-getting-started.md).
1. Select **My roles**.
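Activation can also be requested with the Azure Resource Manager PIM API rather than the portal. A minimal sketch, assuming an ARM access token, placeholder IDs, and an existing eligible assignment at the given scope:

```python
import requests
import uuid
from datetime import datetime, timezone

TOKEN = "<arm-access-token>"  # placeholder
HEADERS = {"Authorization": f"Bearer {TOKEN}", "Content-Type": "application/json"}
SCOPE = "subscriptions/<subscription-id>"  # scope of the eligible assignment

# Self-activate the eligible Azure role for eight hours; the request name is a new GUID.
url = (f"https://management.azure.com/{SCOPE}/providers/Microsoft.Authorization"
       f"/roleAssignmentScheduleRequests/{uuid.uuid4()}?api-version=2020-10-01")
body = {
    "properties": {
        "principalId": "<your-user-object-id>",  # placeholder
        "roleDefinitionId": f"/{SCOPE}/providers/Microsoft.Authorization/roleDefinitions/<role-definition-id>",
        "requestType": "SelfActivate",
        "justification": "Activate to investigate an incident",
        "scheduleInfo": {
            "startDateTime": datetime.now(timezone.utc).isoformat(),
            "expiration": {"type": "AfterDuration", "duration": "PT8H"},
        },
    }
}
requests.put(url, headers=HEADERS, json=body)
```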
active-directory Pim Resource Roles Approval Workflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-resource-roles-approval-workflow.md
Title: Approve requests for Azure resource roles in PIM
-description: Learn how to approve or deny requests for Azure resource roles in Azure AD Privileged Identity Management (PIM).
+description: Learn how to approve or deny requests for Azure resource roles in Privileged Identity Management (PIM).
documentationcenter: ''
Follow the steps in this article to approve or deny requests for Azure resource
As a delegated approver, you'll receive an email notification when an Azure resource role request is pending your approval. You can view these pending requests in Privileged Identity Management.
-1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Privileged Role Administrator](../roles/permissions-reference.md#privileged-role-administrator).
-1. Open **Azure AD Privileged Identity Management**.
-
-1. Select **Approve requests**.
+1. Browse to **Identity governance** > **Privileged Identity Management** > **Approve requests**.
![Approve requests - Azure resources page showing request to review](./media/pim-resource-roles-approval-workflow/resources-approve-requests.png)
active-directory Pim Resource Roles Assign Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-resource-roles-assign-roles.md
Title: Assign Azure resource roles in Privileged Identity Management
-description: Learn how to assign Azure resource roles in Azure AD Privileged Identity Management (PIM).
+description: Learn how to assign Azure resource roles in Privileged Identity Management (PIM).
documentationcenter: ''
For more information, see [What is Azure attribute-based access control (Azure A
Follow these steps to make a user eligible for an Azure resource role.
-1. Sign in to the [Azure portal](https://portal.azure.com) with Owner or User Access Administrator role permissions.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [User Access Administrator](../roles/permissions-reference.md#user-administrator).
-1. Open **Azure AD Privileged Identity Management**.
-
-1. Select **Azure resources**.
+1. Browse to **Identity governance** > **Privileged Identity Management** > **Azure resources**.
1. Select the **Resource type** you want to manage, such as **Resource** or **Resource group**. Then select the resource you want to manage to open its overview page.
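
For automated onboarding, an eligible assignment can also be created programmatically. The sketch below is an illustration under stated assumptions, not the documented procedure: it assumes the `Microsoft.Authorization/roleEligibilityScheduleRequests` ARM API (version `2020-10-01` at the time of writing), and every ID shown is a placeholder.

```python
import uuid

import requests
from azure.identity import DefaultAzureCredential

scope = "/subscriptions/<subscription-id>"  # placeholder scope
role_definition_id = f"{scope}/providers/Microsoft.Authorization/roleDefinitions/<role-guid>"
principal_id = "<object-id-of-user-or-group>"  # placeholder principal

token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
url = (
    f"https://management.azure.com{scope}/providers/Microsoft.Authorization/"
    f"roleEligibilityScheduleRequests/{uuid.uuid4()}?api-version=2020-10-01"
)
body = {
    "properties": {
        "principalId": principal_id,
        "roleDefinitionId": role_definition_id,
        "requestType": "AdminAssign",  # create an eligible assignment as an admin
        "scheduleInfo": {"expiration": {"type": "AfterDuration", "duration": "P365D"}},
    }
}
requests.put(url, json=body, headers={"Authorization": f"Bearer {token}"}).raise_for_status()
```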
active-directory Pim Resource Roles Configure Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-resource-roles-configure-alerts.md
Title: Configure security alerts for Azure roles in Privileged Identity Management
-description: Learn how to configure security alerts for Azure resource roles in Azure AD Privileged Identity Management (PIM).
+description: Learn how to configure security alerts for Azure resource roles in Privileged Identity Management (PIM).
documentationcenter: ''
Alert | Severity | Trigger | Recommendation
Follow these steps to configure security alerts for Azure roles in Privileged Identity Management:
-1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Privileged Role Administrator](../roles/permissions-reference.md#privileged-role-administrator).
-1. Open **Azure AD Privileged Identity Management**. For information about how to add the Privileged Identity Management tile to your dashboard, see [Start using Privileged Identity Management](pim-getting-started.md).
-
-1. From the left menu, select **Azure resources**.
-
-1. From the list of resources, select your Azure subscription.
-
-1. On the **Alerts** page, select **Settings**.
+1. Browse to **Identity governance** > **Privileged Identity Management** > **Azure resources**. Select your subscription, then go to **Alerts** > **Settings**. For information about how to add the Privileged Identity Management tile to your dashboard, see [Start using Privileged Identity Management](pim-getting-started.md).
![Screenshot of the alerts page with settings highlighted.](media/pim-resource-roles-configure-alerts/rbac-navigate-settings.png)
active-directory Pim Resource Roles Configure Role Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-resource-roles-configure-role-settings.md
Title: Configure Azure resource role settings in PIM
-description: Learn how to configure Azure resource role settings in Azure AD Privileged Identity Management (PIM).
+description: Learn how to configure Azure resource role settings in Privileged Identity Management (PIM).
documentationcenter: ''
PIM role settings are also known as PIM policies.
## Open role settings - To open the settings for an Azure resource role:
-1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com).
-1. Select **Azure AD Privileged Identity Management** > **Azure Resources**. This page shows a list of Azure resources discovered in Privileged Identity Management. Use the **Resource type** filter to select all required resource types.
+1. Browse to **Identity governance** > **Privileged Identity Management** > **Azure resources**. This page shows a list of Azure resources discovered in Privileged Identity Management. Use the **Resource type** filter to select all required resource types.
:::image type="content" source="media/pim-resource-roles-configure-role-settings/resources-list.png" alt-text="Screenshot that shows the list of Azure resources discovered in Privileged Identity Management." lightbox="media/pim-resource-roles-configure-role-settings/resources-list.png":::
active-directory Pim Resource Roles Discover Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-resource-roles-discover-resources.md
Title: Discover Azure resources to manage in PIM
-description: Learn how to discover Azure resources to manage in Azure AD Privileged Identity Management (PIM).
+description: Learn how to discover Azure resources to manage in Privileged Identity Management (PIM).
documentationcenter: ''
When you first set up Privileged Identity Management for Azure resources, you ne
## Required permissions

You can view and manage the management groups or subscriptions to which you have Microsoft.Authorization/roleAssignments/write permissions, such as the User Access Administrator or Owner roles. If you aren't a subscription owner, but are a Global Administrator and don't see any Azure subscriptions or management groups to manage, you can [elevate access to manage your resources](../../role-based-access-control/elevate-access-global-admin.md).

## Discover resources
-1. Sign in to the [Azure portal](https://portal.azure.com).
-
-1. Open **Azure AD Privileged Identity Management**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Privileged Role Administrator](../roles/permissions-reference.md#privileged-role-administrator).
-1. Select **Azure resources**.
+1. Browse to **Identity governance** > **Privileged Identity Management** > **Azure resources**.
If this is your first time using Privileged Identity Management for Azure resources, you'll see a **Discover resources** page.
active-directory Pim Resource Roles Renew Extend https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-resource-roles-renew-extend.md
Title: Renew Azure resource role assignments in PIM
-description: Learn how to extend or renew Azure resource role assignments in Azure AD Privileged Identity Management (PIM).
+description: Learn how to extend or renew Azure resource role assignments in Privileged Identity Management (PIM).
documentationcenter: ''
Users assigned to a role can extend expiring role assignments directly from the
![Azure resources - My roles page listing eligible roles with an Action column](media/pim-resource-roles-renew-extend/aadpim-rbac-extend-ui.png)
-When the assignment end date-time is within 14 days, the link to **Extend** becomes an active in the Azure portal. In the following example, assume the current date is March 27.
+When the assignment end date-time is within 14 days, the **Extend** link becomes active in the Microsoft Entra admin center. In the following example, assume the current date is March 27.
> [!NOTE]
> For a group assigned to a role, the **Extend** link never becomes available so that a user with an inherited assignment can't extend the group assignment.
active-directory Pim Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-roles.md
We support all Microsoft 365 roles in the Azure AD Roles and Administrators port
> [!NOTE] > - Eligible users for the SharePoint administrator role, the Device administrator role, and any roles trying to access the Microsoft Security & Compliance Center might experience delays of up to a few hours after activating their role. We are working with those teams to fix the issues.
-> - For information about delays activating the Azure AD Joined Device Local Administrator role, see [How to manage the local administrators group on Azure AD joined devices](../devices/assign-local-admin.md#manage-the-device-administrator-role).
+> - For information about delays activating the Azure AD Joined Device Local Administrator role, see [How to manage the local administrators group on Azure AD joined devices](../devices/assign-local-admin.md#manage-the-azure-ad-joined-device-local-administrator-role).
## Next steps
active-directory Pim Security Wizard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-security-wizard.md
Also, keep role assignments permanent if a user has a Microsoft account (in othe
[!INCLUDE [portal updates](~/articles/active-directory/includes/portal-update.md)]
-1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Privileged Role Administrator](../roles/permissions-reference.md#privileged-role-administrator).
-1. Open **Azure AD Privileged Identity Management**.
+1. Browse to **Identity governance** > **Privileged Identity Management** > **Azure AD roles** > **Discovery and insights (Preview)**.
-1. From the left menu, select **Azure AD roles** and then select **Discovery and insights (Preview)**. Opening the page begins the discovery process to find relevant role assignments.
+1. Opening the page begins the discovery process to find relevant role assignments.
![Azure AD roles - Discovery and insights page showing the 3 options](./media/pim-security-wizard/new-preview-link.png)
active-directory Concept Activity Logs Azure Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/concept-activity-logs-azure-monitor.md
- Title: Azure Active Directory activity log integration options
-description: Introduction to the options for integrating Azure Active Directory activity logs with storage and analysis tools.
------- Previously updated : 07/27/2023----
-# Azure AD activity log integrations
-
-Using **Diagnostic settings** in Azure Active Directory (Azure AD), you can route activity logs to several endpoints for long term data retention and insights. You can archive logs for storage, route to Security Information and Event Management (SIEM) tools, and integrate logs with Azure Monitor logs.
-
-With these integrations, you can enable rich visualizations, monitoring, and alerting on the connected data. This article describes the recommended uses for each integration type or access method. Cost considerations for sending Azure AD activity logs to various endpoints are also covered.
-
-## Supported reports
-
-The following logs can be integrated with one of many endpoints:
-
-* The [**audit logs activity report**](concept-audit-logs.md) gives you access to the history of every task that's performed in your tenant.
-* With the [**sign-in activity report**](concept-sign-ins.md), you can see when users attempt to sign in to your applications or troubleshoot sign-in errors.
-* With the [**provisioning logs**](../app-provisioning/application-provisioning-log-analytics.md), you can monitor which users have been created, updated, and deleted in all your third-party applications.
-* The [**risky users logs**](../identity-protection/howto-identity-protection-investigate-risk.md#risky-users) helps you monitor changes in user risk level and remediation activity.
-* With the [**risk detections logs**](../identity-protection/howto-identity-protection-investigate-risk.md#risk-detections), you can monitor user's risk detections and analyze trends in risk activity detected in your organization.
-
-## Integration options
-
-To help choose the right method for integrating Azure AD activity logs for storage or analysis, think about the overall task you're trying to accomplish. We've grouped the options into three main categories:
--- Troubleshooting-- Long-term storage-- Analysis and monitoring-
-### Troubleshooting
-
-If you're performing troubleshooting tasks but you don't need to retain the logs for more than 30 days, we recommend using the Azure portal or Microsoft Graph to access activity logs. You can filter the logs for your scenario and export or download them as needed.
-
-If you're performing troubleshooting tasks *and* you need to retain the logs for more than 30 days, take a look at the long-term storage options.
-
-### Long-term storage
-
-If you're performing troubleshooting tasks *and* you need to retain the logs for more than 30 days, you can export your logs to an Azure storage account. This option is ideal of you don't plan on querying that data often.
-
-If you need to query the data that you're retaining for more than 30 days, take a look at the analysis and monitoring options.
-
-### Analysis and monitoring
-
-If your scenario requires that you retain data for more than 30 days *and* you plan on querying that data regularly, you've got a few options to integrate your data with SIEM tools for analysis and monitoring.
-
-If you have a third party SIEM tool, we recommend setting up an Event Hubs namespace and event hub that you can stream your data through. With an event hub, you can stream logs to one of the supported SIEM tools.
-
-If you don't plan on using a third-party SIEM tool, we recommend sending your Azure AD activity logs to Azure Monitor logs. With this integration, you can query your activity logs with Log Analytics. In Addition to Azure Monitor logs, Microsoft Sentinel provides near real-time security detection and threat hunting. If you decide to integrate with SIEM tools later, you can stream your Azure AD activity logs along with your other Azure data through an event hub.
-
-## Cost considerations
-
-There's a cost for sending data to a Log Analytics workspace, archiving data in a storage account, or streaming logs to an event hub. The amount of data and the cost incurred can vary significantly depending on the tenant size, the number of policies in use, and even the time of day.
-
-Because the size and cost for sending logs to an endpoint is difficult to predict, the most accurate way to determine your expected costs is to route your logs to an endpoint for day or two. With this snapshot, you can get an accurate prediction for your expected costs. You can also get an estimate of your costs by downloading a sample of your logs and multiplying accordingly to get an estimate for one day.
-
-Other considerations for sending Azure AD logs to Azure Monitor logs are covered in the following Azure Monitor cost details articles:
--- [Azure Monitor logs cost calculations and options](../../azure-monitor/logs/cost-logs.md)-- [Azure Monitor cost and usage](../../azure-monitor/usage-estimated-costs.md)-- [Optimize costs in Azure Monitor](../../azure-monitor/best-practices-cost.md)-
-Azure Monitor provides the option to exclude whole events, fields, or parts of fields when ingesting logs from Azure AD. Learn more about this cost saving feature in [Data collection transformation in Azure Monitor](../../azure-monitor/essentials/data-collection-transformations.md).
-
-## Estimate your costs
-
-To estimate the costs for your organization, you can estimate either the daily log size or the daily cost for integrating your logs with an endpoint.
-
-The following factors could affect costs for your organization:
--- Audit log events use around 2 KB of data storage-- Sign-in log events use on average 11.5 KB of data storage-- A tenant of about 100,000 users could incur about 1.5 million events per day-- Events are batched into about 5-minute intervals and sent as a single message that contains all the events within that time frame-
-### Daily log size
-
-To estimate the daily log size, gather a sample of your logs, adjust the sample to reflect your tenant size and settings, then apply that sample to the [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator/).
-
-If you haven't downloaded logs from the Azure portal, review the [How to download logs in Azure AD](howto-download-logs.md) article. Depending on the size of your organization, you may need to choose a different sample size to start your estimation. The following sample sizes are a good place to start:
--- 1000 records-- For large tenants, 15 minutes of sign-ins-- For small to medium tenants, 1 hour of sign-ins-
-You should also consider the geographic distribution and peak hours of your users when you capture your data sample. If your organization is based in one region, it's likely that sign-ins peak around the same time. Adjust your sample size and when you capture the sample accordingly.
-
-With the data sample captured, multiply accordingly to find out how large the file would be for one day.
-
-### Estimate the daily cost
-
-To get an idea of how much a log integration could cost for your organization, you can enable an integration for a day or two. Use this option if your budget allows for the temporary increase.
-
-To enable a log integration, follow the steps in the [Integrate activity logs with Azure Monitor logs](howto-integrate-activity-logs-with-log-analytics.md) article. If possible, create a new resource group for the logs and endpoint you want to try out. Having a devoted resource group makes it easy to view the cost analysis and then delete it when you're done.
-
-With the integration enabled, navigate to **Azure portal** > **Cost Management** > **Cost analysis**. There are several ways to analyze costs. This [Cost Management quickstart](../../cost-management-billing/costs/quick-acm-cost-analysis.md) should help you get started. The figures in the following screenshot are used for example purposes and are not intended to reflect actual amounts.
-
-![Screenshot of a cost analysis breakdown as a pie chart.](media/concept-activity-logs-azure-monitor/cost-analysis-breakdown.png)
-
-Make sure you're using your new resource group as the scope. Explore the daily costs and forecasts to get an idea of how much your log integration could cost.
-
-## Calculate estimated costs
-
-From the [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator/) landing page, you can estimate the costs for various products.
--- [Azure Monitor](https://azure.microsoft.com/pricing/details/monitor/)-- [Azure storage](https://azure.microsoft.com/pricing/details/storage/blobs/)-- [Azure Event Hubs](https://azure.microsoft.com/pricing/details/event-hubs/)-- [Microsoft Sentinel](https://azure.microsoft.com/pricing/details/microsoft-sentinel/)-
-Once you have an estimate for the GB/day that will be sent to an endpoint, enter that value in the [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator/). The figures in the following screenshot are used for example purposes and are not intended to reflect actual prices.
-
-![Screenshot of the Azure pricing calculator, with 8 GB/Day used as an example.](media/concept-activity-logs-azure-monitor/azure-pricing-calculator-values.png)
-
-## Next steps
-
-* [Create a storage account](../../storage/common/storage-account-create.md)
-* [Archive activity logs to a storage account](quickstart-azure-monitor-route-logs-to-storage-account.md)
-* [Route activity logs to an event hub](./tutorial-azure-monitor-stream-logs-to-event-hub.md)
-* [Integrate activity logs with Azure Monitor](howto-integrate-activity-logs-with-log-analytics.md)
active-directory Concept All Sign Ins https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/concept-all-sign-ins.md
- Title: Sign-in logs (preview)
-description: Conceptual information about sign-in logs, including new features in preview.
------- Previously updated : 03/28/2023----
-# Sign-in logs in Azure Active Directory (preview)
-
-Reviewing sign-in errors and patterns provides valuable insight into how your users access applications and services. The sign-in logs provided by Azure Active Directory (Azure AD) are a powerful type of [activity log](overview-reports.md) that IT administrators can analyze. This article explains how to access and utilize the sign-in logs.
-
-Two other activity logs are also available to help monitor the health of your tenant:
-- **[Audit](concept-audit-logs.md)** ΓÇô Information about changes applied to your tenant, such as users and group management or updates applied to your tenantΓÇÖs resources.-- **[Provisioning](concept-provisioning-logs.md)** ΓÇô Activities performed by a provisioning service, such as the creation of a group in ServiceNow or a user imported from Workday.-
-The classic sign-in logs in Azure AD provide you with an overview of interactive user sign-ins. Three more sign-in logs are now in preview:
--- Non-interactive user sign-ins-- Service principal sign-ins-- Managed identities for Azure resource sign-ins-
-This article gives you an overview of the sign-in activity report with the preview of non-interactive, application, and managed identities for Azure resources sign-ins. For information about the sign-in report without the preview features, see [Sign-in logs in Azure Active Directory](concept-sign-ins.md).
-
-## How do you access the sign-in logs?
-
-You can always access your own sign-ins history at [https://mysignins.microsoft.com](https://mysignins.microsoft.com).
-
-To access the sign-ins log for a tenant, you must have one of the following roles:
--- Global Administrator-- Security Administrator-- Security Reader-- Global Reader-- Reports Reader-
->[!NOTE]
->To see Conditional Access data in the sign-ins log, you need to be a user in one of the following roles:
-Company Administrator, Global Reader, Security Administrator, Security Reader, Conditional Access Administrator .
-
-The sign-in activity report is available in [all editions of Azure AD](reference-reports-data-retention.md#how-long-does-azure-ad-store-the-data). If you have an Azure Active Directory P1 or P2 license, you can access the sign-in activity report through the Microsoft Graph API. See [Getting started with Azure Active Directory Premium](../fundamentals/get-started-premium.md) to upgrade your Azure Active Directory edition. It will take a couple of days for the data to show up in Graph after you upgrade to a premium license with no data activities before the upgrade.
-
-**To access the Azure AD sign-ins log preview:**
-
-1. Sign in to the [Azure portal](https://portal.azure.com) using the appropriate least privileged role.
-1. Go to **Azure Active Directory** > **Sign-ins log**.
-1. Select the **Try out our new sign-ins preview** link.
-
- ![Screenshot of the preview link on the sign-in logs page.](./media/concept-all-sign-ins/sign-in-logs-preview-link.png)
-
- To toggle back to the legacy view, select the **Click here to leave the preview** link.
-
- ![Screenshot of the leave preview link on the sign-in logs page.](./media/concept-all-sign-ins/sign-in-logs-leave-preview-link.png)
-
-You can also access the sign-in logs from the following areas of Azure AD:
--- Users-- Groups-- Enterprise applications-
-On the sign-in logs page, you can switch between:
--- **Interactive user sign-ins:** Sign-ins where a user provides an authentication factor, such as a password, a response through an MFA app, a biometric factor, or a QR code.--- **Non-interactive user sign-ins:** Sign-ins performed by a client on behalf of a user. These sign-ins don't require any interaction or authentication factor from the user. For example, authentication and authorization using refresh and access tokens that don't require a user to enter credentials.--- **Service principal sign-ins:** Sign-ins by apps and service principals that don't involve any user. In these sign-ins, the app or service provides a credential on its own behalf to authenticate or access resources.--- **Managed identities for Azure resources sign-ins:** Sign-ins by Azure resources that have secrets managed by Azure. For more information, see [What are managed identities for Azure resources?](../managed-identities-azure-resources/overview.md) -
-![Screenshot of the sign-in log types.](./media/concept-all-sign-ins/sign-ins-report-types.png)
-
-## View the sign-ins log
-
-To more effectively view the sign-ins log, spend a few moments customizing the view for your needs. You can specify what columns to include and filter the data to narrow things down.
-
-### Interactive user sign-ins
-
-Interactive user sign-ins provide an authentication factor to Azure AD or interact directly with Azure AD or a helper app, such as the Microsoft Authenticator app. Users can provide passwords, responses to MFA challenges, biometric factors, or QR codes to Azure AD or to a helper app. This log also includes federated sign-ins from identity providers that are federated to Azure AD.
-
-> [!NOTE]
-> The interactive user sign-in log previously contained some non-interactive sign-ins from Microsoft Exchange clients. Although those sign-ins were non-interactive, they were included in the interactive user sign-in log for additional visibility. Once the non-interactive user sign-in log entered public preview in November 2020, those non-interactive sign-in logs were moved to the non-interactive user sign in log for increased accuracy.
-
-**Report size:** small </br>
-**Examples:**
--- A user provides username and password in the Azure AD sign-in screen.-- A user passes an SMS MFA challenge.-- A user provides a biometric gesture to unlock their Windows PC with Windows Hello for Business.-- A user is federated to Azure AD with an AD FS SAML assertion.-
-In addition to the default fields, the interactive sign-in log also shows:
--- The sign-in location-- Whether Conditional Access has been applied-
-You can customize the list view by clicking **Columns** in the toolbar.
-
-![Screenshot customize columns button.](./media/concept-all-sign-ins/sign-in-logs-columns-preview.png)
-
-#### Considerations for MFA sign-ins
-
-When a user signs in with MFA, several separate MFA events are actually taking place. For example, if a user enters the wrong validation code or doesn't respond in time, additional MFA events are sent to reflect the latest status of the sign-in attempt. These sign-in events appear as one line item in the Azure AD sign-in logs. That same sign-in event in Azure Monitor, however, appears as multiple line items. These events all have the same `correlationId`.
-
-### Non-interactive user sign-ins
-
-Like interactive user sign-ins, non-interactive sign-ins are done on behalf of a user. These sign-ins were performed by a client app or OS components on behalf of a user and don't require the user to provide an authentication factor. Instead, the device or client app uses a token or code to authenticate or access a resource on behalf of a user. In general, the user perceives these sign-ins as happening in the background.
-
-**Report size:** Large </br>
-**Examples:**
--- A client app uses an OAuth 2.0 refresh token to get an access token.-- A client uses an OAuth 2.0 authorization code to get an access token and refresh token.-- A user performs single sign-on (SSO) to a web or Windows app on an Azure AD joined PC (without providing an authentication factor or interacting with an Azure AD prompt).-- A user signs in to a second Microsoft Office app while they have a session on a mobile device using FOCI (Family of Client IDs).-
-In addition to the default fields, the non-interactive sign-in log also shows:
--- Resource ID-- Number of grouped sign-ins-
-You can't customize the fields shown in this report.
-
-![Screenshot of the disabled columns option.](./media/concept-all-sign-ins/disabled-columns.png)
-
-To make it easier to digest the data, non-interactive sign-in events are grouped. Clients often create many non-interactive sign-ins on behalf of the same user in a short time period. The non-interactive sign-ins share the same characteristics except for the time the sign-in was attempted. For example, a client may get an access token once per hour on behalf of a user. If the state of the user or client doesn't change, the IP address, resource, and all other information is the same for each access token request. The only state that does change is the date and time of the sign-in.
-
-When Azure AD logs multiple sign-ins that are identical other than time and date, those sign-ins are from the same entity and are aggregated into a single row. A row with multiple identical sign-ins (except for date and time issued) have a value greater than 1 in the *# sign-ins* column. These aggregated sign-ins may also appear to have the same time stamps. The **Time aggregate** filter can set to 1 hour, 6 hours, or 24 hours. You can expand the row to see all the different sign-ins and their different time stamps.
-
-Sign-ins are aggregated in the non-interactive users when the following data matches:
--- Application-- User-- IP address-- Status-- Resource ID-
-> [!NOTE]
-> The IP address of non-interactive sign-ins performed by [confidential clients](../develop/msal-client-applications.md) doesn't match the actual source IP of where the refresh token request is coming from. Instead, it shows the original IP used for the original token issuance.
-
-### Service principal sign-ins
-
-Unlike interactive and non-interactive user sign-ins, service principal sign-ins don't involve a user. Instead, they're sign-ins by any nonuser account, such as apps or service principals (except managed identity sign-in, which are in included only in the managed identity sign-in log). In these sign-ins, the app or service provides its own credential, such as a certificate or app secret to authenticate or access resources.
--
-**Report size:** Large </br>
-**Examples:**
--- A service principal uses a certificate to authenticate and access the Microsoft Graph. -- An application uses a client secret to authenticate in the OAuth Client Credentials flow. -
-You can't customize the fields shown in this report.
-
-To make it easier to digest the data in the service principal sign-in logs, service principal sign-in events are grouped. Sign-ins from the same entity under the same conditions are aggregated into a single row. You can expand the row to see all the different sign-ins and their different time stamps. Sign-ins are aggregated in the service principal report when the following data matches:
--- Service principal name or ID-- Status-- IP address-- Resource name or ID-
-### Managed identity for Azure resources sign-ins
-
-Managed identities for Azure resources sign-ins are sign-ins that were performed by resources that have their secrets managed by Azure to simplify credential management. A VM with managed credentials uses Azure AD to get an Access Token.
-
-**Report size:** Small </br>
-**Examples:**
-
- You can't customize the fields shown in this report.
-
-To make it easier to digest the data, managed identities for Azure resources sign in logs, non-interactive sign-in events are grouped. Sign-ins from the same entity are aggregated into a single row. You can expand the row to see all the different sign-ins and their different time stamps. Sign-ins are aggregated in the managed identities report when all of the following data matches:
--- Managed identity name or ID-- Status-- Resource name or ID-
-Select an item in the list view to display all sign-ins that are grouped under a node. Select a grouped item to see all details of the sign-in.
-
-### Filter the results
-
-Filtering the sign-ins log is a helpful way to quickly find logs that match a specific scenario. For example, you could filter the list to only view sign-ins that occurred in a specific geographic location, from a specific operating system, or from a specific type of credential.
-
-Some filter options prompt you to select more options. Follow the prompts to make the selection you need for the filter. You can add multiple filters. Take note of the **Date** range in your filter to ensure that Azure AD only returns the data you need. The filter you configure for interactive sign-ins is persisted for non-interactive sign-ins and vice versa.
-
-Select the **Add filters** option from the top of the table to get started.
-
-![Screenshot of the sign-in logs page with the Add filters option highlighted.](./media/concept-all-sign-ins/sign-in-logs-filter-preview.png)
-
-There are several filter options to choose from:
--- **User:** The *user principal name* (UPN) of the user in question.-- **Status:** Options are *Success*, *Failure*, and *Interrupted*.-- **Resource:** The name of the service used for the sign-in.-- **Conditional Access:** The status of the Conditional Access policy. Options are:
- - *Not applied:* No policy applied to the user and application during sign-in.
- - *Success:* One or more Conditional Access policies applied to the user and application (but not necessarily the other conditions) during sign-in.
- - *Failure:* The sign-in satisfied the user and application condition of at least one Conditional Access policy and grant controls are either not satisfied or set to block access.
-- **IP addresses:** There's no definitive connection between an IP address and where the computer with that address is physically located. Mobile providers and VPNs issue IP addresses from central pools that are often far from where the client device is used. Currently, converting IP address to a physical location is a best effort based on traces, registry data, reverse lookups and other information.
-The following table provides the options and descriptions for the **Client app** filter option.
-
-> [!NOTE]
-> Due to privacy commitments, Azure AD does not populate this field to the home tenant in the case of a cross-tenant scenario.
-
-|Name|Modern authentication|Description|
-||:-:||
-|Authenticated SMTP| |Used by POP and IMAP client's to send email messages.|
-|Autodiscover| |Used by Outlook and EAS clients to find and connect to mailboxes in Exchange Online.|
-|Exchange ActiveSync| |This filter shows all sign-in attempts where the EAS protocol has been attempted.|
-|Browser|![Blue checkmark.](./media/concept-all-sign-ins/check.png)|Shows all sign-in attempts from users using web browsers|
-|Exchange ActiveSync| | Shows all sign-in attempts from users with client apps using Exchange ActiveSync to connect to Exchange Online|
-|Exchange Online PowerShell| |Used to connect to Exchange Online with remote PowerShell. If you block basic authentication for Exchange Online PowerShell, you need to use the Exchange Online PowerShell module to connect. For instructions, see [Connect to Exchange Online PowerShell using multi-factor authentication](/powershell/exchange/exchange-online/connect-to-exchange-online-powershell/mfa-connect-to-exchange-online-powershell).|
-|Exchange Web Services| |A programming interface that's used by Outlook, Outlook for Mac, and third-party apps.|
-|IMAP4| |A legacy mail client using IMAP to retrieve email.|
-|MAPI over HTTP| |Used by Outlook 2010 and later.|
-|Mobile apps and desktop clients|![Blue checkmark.](./media/concept-all-sign-ins/check.png)|Shows all sign-in attempts from users using mobile apps and desktop clients.|
-|Offline Address Book| |A copy of address list collections that are downloaded and used by Outlook.|
-|Outlook Anywhere (RPC over HTTP)| |Used by Outlook 2016 and earlier.|
-|Outlook Service| |Used by the Mail and Calendar app for Windows 10.|
-|POP3| |A legacy mail client using POP3 to retrieve email.|
-|Reporting Web Services| |Used to retrieve report data in Exchange Online.|
-|Other clients| |Shows all sign-in attempts from users where the client app isn't included or unknown.|
-
-## Analyze the sign-in logs
-
-Now that your sign-in logs table is formatted appropriately, you can more effectively analyze the data. Some common scenarios are described here, but they aren't the only ways to analyze sign-in data. Further analysis and retention of sign-in data can be accomplished by exporting the logs to other tools.
-
-### Sign-in error code
-
-If a sign-in failed, you can get more information about the reason in the **Basic info** section of the related log item. The error code and associated failure reason appear in the details. Because of the complexity of some Azure AD environments, we can't document every possible error code and resolution. Some errors may require [submitting a support request](../fundamentals/how-to-get-support.md) to resolve the issue.
-
-![Screenshot of a sign-in error code.](./media/concept-all-sign-ins/error-code.png)
-
-For a list of error codes related to Azure AD authentication and authorization, see the [Azure AD authentication and authorization error codes](../develop/reference-error-codes.md) article. In some cases, the [sign-in error lookup tool](https://login.microsoftonline.com/error) may provide remediation steps. Enter the **Error code** provided in the sign-in log details into the tool and select the **Submit** button.
-
-![Screenshot of the error code lookup tool.](./media/concept-all-sign-ins/error-code-lookup-tool.png)
-
-### Authentication details
-
-The **Authentication Details** tab in the details of a sign-in log provides the following information for each authentication attempt:
--- A list of authentication policies applied, such as Conditional Access or Security Defaults.-- A list of session lifetime policies applied, such as Sign-in frequency or Remember MFA.-- The sequence of authentication methods used to sign-in.-- If the authentication attempt was successful and the reason why.-
-This information allows you to troubleshoot each step in a userΓÇÖs sign-in. Use these details to track:
--- The volume of sign-ins protected by MFA. -- The reason for the authentication prompt, based on the session lifetime policies.-- Usage and success rates for each authentication method.-- Usage of passwordless authentication methods, such as Passwordless Phone Sign-in, FIDO2, and Windows Hello for Business.-- How frequently authentication requirements are satisfied by token claims, such as when users aren't interactively prompted to enter a password or enter an SMS OTP.-
-While viewing the sign-ins log, select a sign-in event, and then select the **Authentication Details** tab.
-
-![Screenshot of the Authentication Details tab.](media/concept-all-sign-ins/authentication-details-tab.png)
-
-When analyzing authentication details, take note of the following details:
--- **OATH verification code** is logged as the authentication method for both OATH hardware and software tokens (such as the Microsoft Authenticator app).-- The **Authentication details** tab can initially show incomplete or inaccurate data until log information is fully aggregated. Known examples include:
- - A **satisfied by claim in the token** message is incorrectly displayed when sign-in events are initially logged.
- - The **Primary authentication** row isn't initially logged.
-- If you're unsure of a detail in the logs, gather the **Request ID** and **Correlation ID** to use for further analyzing or troubleshooting.-
-## Sign-in data used by other services
-
-Sign-in data is used by several services in Azure to monitor risky sign-ins and provide insight into application usage.
-
-### Risky sign-in data in Azure AD Identity Protection
-
-Sign-in log data visualization that relates to risky sign-ins is available in the **Azure AD Identity Protection** overview, which uses the following data:
--- Risky users-- Risky user sign-ins -- Risky service principals-- Risky service principal sign-ins-
-For more information about the Azure AD Identity Protection tools, see the [Azure AD Identity Protection overview](../identity-protection/overview-identity-protection.md).
-
-![Screenshot of risky users in Identity Protection.](media/concept-all-sign-ins/id-protection-overview.png)
-
-### Azure AD application and authentication sign-in activity
-
-With an application-centric view of your sign-in data, you can answer questions such as:
--- Who is using my applications?-- What are the top three applications in my organization?-- How is my newest application doing?-
-To view application-specific sign-in data, go to **Azure AD** and select **Usage & insights** from the Monitoring section. These reports provide a closer look at sign-ins for Azure AD application activity and AD FS application activity. For more information, see [Azure AD Usage & insights](concept-usage-insights-report.md).
-
-![Screenshot of the Azure AD application activity report.](media/concept-all-sign-ins/azure-ad-app-activity.png)
-
-Azure AD Usage & insights also provides the **Authentication methods activity** report, which breaks down authentication by the method used. Use this report to see how many of your users are set up with MFA or passwordless authentication.
-
-![Screenshot of the Authentication methods report.](media/concept-all-sign-ins/azure-ad-authentication-methods.png)
-
-### Microsoft 365 activity logs
-
-You can view Microsoft 365 activity logs from the [Microsoft 365 admin center](/office365/admin/admin-overview/about-the-admin-center). Microsoft 365 activity and Azure AD activity logs share a significant number of directory resources. Only the Microsoft 365 admin center provides a full view of the Microsoft 365 activity logs.
-
-You can also access the Microsoft 365 activity logs programmatically by using the [Office 365 Management APIs](/office/office-365-management-api/office-365-management-apis-overview).
-
-## Next steps
--- [Basic info in the Azure AD sign-in logs](reference-basic-info-sign-in-logs.md)--- [How to download logs in Azure Active Directory](howto-download-logs.md)--- [How to access activity logs in Azure AD](howto-access-activity-logs.md)
active-directory Concept Audit Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/concept-audit-logs.md
- Title: Audit logs in Azure Active Directory
-description: Overview of the audit logs in Azure Active Directory.
+ Title: Learn about the audit logs in Azure Active Directory
+description: Learn about the types of identity related events that are captured in Azure Active Directory audit logs.
Previously updated : 11/04/2022 Last updated : 08/24/2023
# Audit logs in Azure Active Directory
Azure Active Directory (Azure AD) activity logs include audit logs, which provide a comprehensive report of every logged event in Azure AD. Changes to applications, groups, users, and licenses are all captured in the Azure AD audit logs.
Two other activity logs are also available to help monitor the health of your te
This article gives you an overview of the audit logs.
-## What is it?
+## What can you do with audit logs?
-Audit logs in Azure AD provide access to system activity records, often needed for compliance. This log is categorized by user, group, and application management.
+Audit logs in Azure AD provide access to system activity records, often needed for compliance. You can get answers to questions related to users, groups, and applications.
-With a user-centric view, you can get answers to questions such as:
--- What types of updates have been applied to users?
+**Users:**
+- What types of changes were recently applied to users?
- How many users were changed?
- How many passwords were changed?
-- What has an administrator done in a directory?
-With a group-centric view, you can get answers to questions such as:
--- What are the groups that have been added?--- Are there groups with membership changes?
+**Groups:**
+- What groups were recently added?
- Have the owners of a group been changed?
- What licenses have been assigned to a group or a user?
+**Applications:**
-With an application-centric view, you can get answers to questions such as:
--- What applications have been added or updated?--- What applications have been removed?-
+- What applications have been added, updated, or removed?
- Has a service principal for an application changed?
- Have the names of applications been changed?
-- Who gave consent to an application?
-## How do I access it?
-
-To access the audit log for a tenant, you must have one of the following roles:
--- Reports Reader-- Security Reader-- Security Administrator-- Global Reader-- Global Administrator-
-Sign in to the [Azure portal](https://portal.azure.com) and go to **Azure AD** and select **Audit log** from the **Monitoring** section.
-
-The audit activity report is available in [all editions of Azure AD](reference-reports-data-retention.md#how-long-does-azure-ad-store-the-data). If you have an Azure Active Directory P1 or P2 license, you can access the audit log through the [Microsoft Graph API](/graph/api/resources/azure-ad-auditlog-overview). See [Getting started with Azure Active Directory Premium](../fundamentals/get-started-premium.md) to upgrade your Azure Active Directory edition. It will take a couple of days for the data to show up in Graph after you upgrade to a premium license with no data activities before the upgrade.
- ## What do the logs show? Audit logs have a default list view that shows:
Audit logs have a default list view that shows:
- Category and name of the activity (*what*)
- Status of the activity (success or failure)
- Target
-- Initiator / actor of an activity (who)
+- Initiator / actor of an activity (*who*)
You can customize and filter the list view by clicking the **Columns** button in the toolbar. Editing the columns enables you to add or remove fields from your view.
-![Screenshot of available fields.](./media/concept-audit-logs/columnselect.png "Remove fields")

### Filtering audit logs

You can filter the audit data using the options visible in your list such as date range, service, category, and activity.
- ### Filtering audit logs You can filter the audit data using the options visible in your list such as date range, service, category, and activity.
active-directory Concept Diagnostic Settings Logs Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/concept-diagnostic-settings-logs-options.md
+ Title: Logs available for streaming to endpoints from Azure Active Directory
+description: Learn about the Azure Active Directory logs available for streaming to an endpoint for storage, analysis, or monitoring.
+ Last updated : 08/09/2023
+# Learn about the identity logs you can stream to an endpoint
+
+Using Diagnostic settings in Azure Active Directory (Azure AD), you can route activity logs to several endpoints for long-term retention and data insights. You select the logs you want to route, then select the endpoint.
+
+This article describes the logs that you can route to an endpoint from Azure AD Diagnostic settings.
+
+## Prerequisites
+
+Setting up an endpoint, such as an event hub or storage account, may require different roles and licenses. To create or edit a Diagnostic setting, you need a user who's a **Security Administrator** or **Global Administrator** for the Azure AD tenant.
+
+To help decide which log routing option is best for you, see [How to access activity logs](howto-access-activity-logs.md). The overall process and requirements for each endpoint type are covered in the following articles.
+
+- [Send logs to a Log Analytics workspace to integrate with Azure Monitor logs](howto-integrate-activity-logs-with-azure-monitor-logs.md)
+- [Archive logs to a storage account](howto-archive-logs-to-storage-account.md)
+- [Stream logs to an event hub](howto-stream-logs-to-event-hub.md)
+- [Send to a partner solution](../../partner-solutions/overview.md)
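
If you script your setup, the routing itself is configured through a tenant-level diagnostic setting. The following is a minimal sketch, not a definitive implementation: it assumes the ARM `microsoft.aadiam/diagnosticSettings` endpoint (API version `2017-04-01` at the time of writing), and the setting name and workspace resource ID are placeholders.

```python
import requests
from azure.identity import DefaultAzureCredential

# Placeholder: resource ID of the Log Analytics workspace that receives the logs.
workspace_id = (
    "/subscriptions/<subscription-id>/resourcegroups/<resource-group>"
    "/providers/microsoft.operationalinsights/workspaces/<workspace-name>"
)

token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token

# Tenant-level diagnostic settings for Azure AD live under the microsoft.aadiam provider.
url = (
    "https://management.azure.com/providers/microsoft.aadiam/diagnosticSettings"
    "/RouteToLogAnalytics?api-version=2017-04-01"
)
body = {
    "properties": {
        "workspaceId": workspace_id,
        "logs": [
            {"category": "AuditLogs", "enabled": True},
            {"category": "SignInLogs", "enabled": True},
        ],
    }
}
resp = requests.put(url, json=body, headers={"Authorization": f"Bearer {token}"})
resp.raise_for_status()
```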
+
+## Activity log options
+
+The following logs can be sent to an endpoint. Some logs may be in public preview but still visible in the portal.
+
+### Audit logs
+
+The `AuditLogs` report captures changes to applications, groups, users, and licenses in your Azure AD tenant. Once you've routed your audit logs, you can filter or analyze by date/time, the service that logged the event, and who made the change. For more information, see [Audit logs](concept-audit-logs.md).
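
Once routed to a Log Analytics workspace, each category becomes a queryable table. As a hedged sketch (assuming the `azure-monitor-query` and `azure-identity` Python packages; the workspace GUID is a placeholder), a quick look at the most common recent audit operations might look like this:

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

client = LogsQueryClient(DefaultAzureCredential())

# KQL against the AuditLogs table created by the diagnostic setting.
query = """
AuditLogs
| where TimeGenerated > ago(1d)
| summarize count() by OperationName
| top 10 by count_
"""

response = client.query_workspace(
    workspace_id="<workspace-guid>",  # placeholder
    query=query,
    timespan=timedelta(days=1),
)
for table in response.tables:
    for row in table.rows:
        print(row)
```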
+
+### Sign-in logs
+
+The `SignInLogs` category contains the interactive user sign-in logs. These logs are generated when a user provides their username and password on an Azure AD sign-in screen or passes an MFA challenge. For more information, see [Interactive user sign-ins](concept-all-sign-ins.md#interactive-user-sign-ins).
+
+### Non-interactive sign-in logs
+
+The `NonInteractiveUserSignInLogs` are sign-ins done on behalf of a user, such as by a client app. The device or client uses a token or code to authenticate or access a resource on behalf of a user. For more information, see [Non-interactive user sign-ins](concept-all-sign-ins.md#non-interactive-user-sign-ins).
+
+### Service principal sign-in logs
+
+If you need to review sign-in activity for apps or service principals, the `ServicePrincipalSignInLogs` may be a good option. In these scenarios, certificates or client secrets are used for authentication. For more information, see [Service principal sign-ins](concept-all-sign-ins.md#service-principal-sign-ins).
+
+### Managed identity sign-in logs
+
+The `ManagedIdentitySignInLogs` provide similar insights as the service principal sign-in logs, but for managed identities, where Azure manages the secrets. For more information, see [Managed identity sign-ins](concept-all-sign-ins.md#managed-identity-for-azure-resources-sign-ins).
+
+### Provisioning logs
+
+If your organization provisions users through a third-party application such as Workday or ServiceNow, you may want to export the `ProvisioningLogs` reports. For more information, see [Provisioning logs](concept-provisioning-logs.md).
+
+### AD FS sign-in logs
+
+Sign-in activity for Active Directory Federation Services (AD FS) applications is captured in the Usage and insights reports. You can export the `ADFSSignInLogs` report to monitor sign-in activity for AD FS applications. For more information, see [AD FS sign-in logs](concept-usage-insights-report.md#ad-fs-application-activity).
+
+### Risky users
+
+The `RiskyUsers` logs identify users who are at risk based on their sign-in activity. This report is part of Azure AD Identity Protection and uses sign-in data from Azure AD. For more information, see [What is Azure AD Identity Protection?](../identity-protection/overview-identity-protection.md).
+
+### User risk events
+
+The `UserRiskEvents` logs are part of Azure AD Identity Protection. These logs capture details about risky sign-in events. For more information, see [How to investigate risk](../identity-protection/howto-identity-protection-investigate-risk.md#risky-sign-ins).
+
+### Risky service principals
+
+The `RiskyServicePrincipals` logs provide information about service principals that Azure AD Identity Protection detected as risky. Service principal risk represents the probability that an identity or account is compromised. These risks are calculated asynchronously using data and patterns from Microsoft's internal and external threat intelligence sources. These sources may include security researchers, law enforcement professionals, and security teams at Microsoft. For more information, see [Securing workload identities](../identity-protection/concept-workload-identity-risk.md).
+
+### Service principal risk events
+
+The `ServicePrincipalRiskEvents` logs provide details around the risky sign-in events for service principals. These logs may include any identified suspicious events related to the service principal accounts. For more information, see [Securing workload identities](../identity-protection/concept-workload-identity-risk.md).
+
+### Enriched Microsoft 365 audit logs
+
+The `EnrichedOffice365AuditLogs` logs are associated with the enriched logs you can enable for Microsoft Entra Internet Access. Selecting this option doesn't add new logs to your workspace unless your organization is using Microsoft Entra Internet Access to secure access to your Microsoft 365 traffic *and* you enabled the enriched logs. For more information, see [How to use the Global Secure Access enriched Microsoft 365 logs](../../global-secure-access/how-to-view-enriched-logs.md).
+
+### Microsoft Graph activity logs
+
+The `MicrosoftGraphActivityLogs` logs are associated with a feature that is still in private preview. The logs are visible in Azure AD, but selecting these options won't add new logs to your workspace unless your organization was included in the private preview.
+
+### Network access traffic logs
+
+The `NetworkAccessTrafficLogs` logs are associated with Microsoft Entra Internet Access and Microsoft Entra Private Access. The logs are visible in Azure AD, but selecting this option doesn't add new logs to your workspace unless your organization is using Microsoft Entra Internet Access and Microsoft Entra Private Access to secure access to your corporate resources. For more information, see [What is Global Secure Access?](../../global-secure-access/overview-what-is-global-secure-access.md).
+
+## Next steps
+
+- [Learn about the sign-ins logs](concept-all-sign-ins.md)
+- [Explore how to access the activity logs](howto-access-activity-logs.md)
active-directory Concept Log Monitoring Integration Options Considerations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/concept-log-monitoring-integration-options-considerations.md
+
+ Title: Azure Active Directory activity log integration options and considerations
+description: Introduction to the options and considerations for integrating Azure Active Directory activity logs with storage and analysis tools.
+ Last updated : 08/09/2023
+# Azure AD activity log integrations
+
+Using **Diagnostic settings** in Azure Active Directory (Azure AD), you can route activity logs to several endpoints for long-term data retention and insights. You can archive logs for storage, route to Security Information and Event Management (SIEM) tools, and integrate logs with Azure Monitor logs.
+
+With these integrations, you can enable rich visualizations, monitoring, and alerting on the connected data. This article describes the recommended uses for each integration type or access method. Cost considerations for sending Azure AD activity logs to various endpoints are also covered.
+
+## Supported reports
+
+The following logs can be integrated with one of many endpoints:
+
+* The [**audit logs activity report**](concept-audit-logs.md) gives you access to the history of every task that's performed in your tenant.
+* With the [**sign-in activity report**](concept-sign-ins.md), you can see when users attempt to sign in to your applications or troubleshoot sign-in errors.
+* With the [**provisioning logs**](../app-provisioning/application-provisioning-log-analytics.md), you can monitor which users have been created, updated, and deleted in all your third-party applications.
+* The [**risky users logs**](../identity-protection/howto-identity-protection-investigate-risk.md#risky-users) helps you monitor changes in user risk level and remediation activity.
+* With the [**risk detections logs**](../identity-protection/howto-identity-protection-investigate-risk.md#risk-detections), you can monitor user's risk detections and analyze trends in risk activity detected in your organization.
+
+## Integration options
+
+To help choose the right method for integrating Azure AD activity logs for storage or analysis, think about the overall task you're trying to accomplish. We've grouped the options into three main categories:
+
+- Troubleshooting
+- Long-term storage
+- Analysis and monitoring
+
+### Troubleshooting
+
+If you're performing troubleshooting tasks but you don't need to retain the logs for more than 30 days, we recommend using the Azure portal or Microsoft Graph to access activity logs. You can filter the logs for your scenario and export or download them as needed.
+
+If you're performing troubleshooting tasks *and* you need to retain the logs for more than 30 days, take a look at the long-term storage options.
+
+### Long-term storage
+
+If you're performing troubleshooting tasks *and* you need to retain the logs for more than 30 days, you can export your logs to an Azure storage account. This option is ideal if you don't plan on querying that data often.
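
When logs are archived this way, each category lands in its own blob container in the storage account. The following sketch is illustrative rather than definitive: it assumes the `azure-storage-blob` Python package and the `insights-logs-<category>` container naming that Azure diagnostic archiving typically uses; the connection string is a placeholder.

```python
from azure.storage.blob import ContainerClient

# Placeholder connection string; archived audit logs typically land in
# a container named insights-logs-auditlogs (one container per category).
container = ContainerClient.from_connection_string(
    conn_str="<storage-account-connection-string>",
    container_name="insights-logs-auditlogs",
)

# Blobs are organized by resource path and date; list them to inspect retention.
for blob in container.list_blobs():
    print(blob.name, blob.size)
```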
+
+If you need to query the data that you're retaining for more than 30 days, take a look at the analysis and monitoring options.
+
+### Analysis and monitoring
+
+If your scenario requires that you retain data for more than 30 days *and* you plan on querying that data regularly, you've got a few options to integrate your data with SIEM tools for analysis and monitoring.
+
+If you have a third-party SIEM tool, we recommend setting up an Event Hubs namespace and event hub that you can stream your data through. With an event hub, you can stream logs to one of the supported SIEM tools.
+
+If you don't plan on using a third-party SIEM tool, we recommend sending your Azure AD activity logs to Azure Monitor logs. With this integration, you can query your activity logs with Log Analytics. In addition to Azure Monitor logs, Microsoft Sentinel provides near real-time security detection and threat hunting. If you decide to integrate with SIEM tools later, you can stream your Azure AD activity logs along with your other Azure data through an event hub.
+
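+For example, once the logs are flowing into a workspace, a minimal sketch of querying them from PowerShell, assuming the Az.OperationalInsights module and that the SigninLogs table is already being ingested (the workspace ID is a placeholder):
+
+```powershell
+# Requires the Az PowerShell modules and read access to the workspace.
+Connect-AzAccount
+
+$workspaceId = "<YOUR LOG ANALYTICS WORKSPACE ID>"
+$query = "SigninLogs | where TimeGenerated > ago(1d) | summarize count() by ResultType"
+
+# Run the KQL query and show the sign-in result breakdown.
+$results = Invoke-AzOperationalInsightsQuery -WorkspaceId $workspaceId -Query $query
+$results.Results | Format-Table
+```
+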
+## Cost considerations
+
+There's a cost for sending data to a Log Analytics workspace, archiving data in a storage account, or streaming logs to an event hub. The amount of data and the cost incurred can vary significantly depending on the tenant size, the number of policies in use, and even the time of day.
+
+Because the size and cost for sending logs to an endpoint are difficult to predict, the most accurate way to determine your expected costs is to route your logs to an endpoint for a day or two. With this snapshot, you can get an accurate prediction for your expected costs. You can also get an estimate of your costs by downloading a sample of your logs and multiplying accordingly to get an estimate for one day.
+
+Other considerations for sending Azure AD logs to Azure Monitor logs are covered in the following Azure Monitor cost details articles:
+
+- [Azure Monitor logs cost calculations and options](../../azure-monitor/logs/cost-logs.md)
+- [Azure Monitor cost and usage](../../azure-monitor/usage-estimated-costs.md)
+- [Optimize costs in Azure Monitor](../../azure-monitor/best-practices-cost.md)
+
+Azure Monitor provides the option to exclude whole events, fields, or parts of fields when ingesting logs from Azure AD. Learn more about this cost saving feature in [Data collection transformation in Azure Monitor](../../azure-monitor/essentials/data-collection-transformations.md).
+
+## Estimate your costs
+
+To estimate the costs for your organization, you can estimate either the daily log size or the daily cost for integrating your logs with an endpoint.
+
+The following factors could affect costs for your organization:
+
+- Audit log events use around 2 KB of data storage
+- Sign-in log events use on average 11.5 KB of data storage
+- A tenant of about 100,000 users could incur about 1.5 million events per day
+- Events are batched into about 5-minute intervals and sent as a single message that contains all the events within that time frame
+
+### Daily log size
+
+To estimate the daily log size, gather a sample of your logs, adjust the sample to reflect your tenant size and settings, then apply that sample to the [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator/).
+
+If you haven't downloaded logs from the Azure portal, review the [How to download logs in Azure AD](howto-download-logs.md) article. Depending on the size of your organization, you may need to choose a different sample size to start your estimation. The following sample sizes are a good place to start:
+
+- 1000 records
+- For large tenants, 15 minutes of sign-ins
+- For small to medium tenants, 1 hour of sign-ins
+
+You should also consider the geographic distribution and peak hours of your users when you capture your data sample. If your organization is based in one region, it's likely that sign-ins peak around the same time. Adjust your sample size and when you capture the sample accordingly.
+
+With the data sample captured, multiply accordingly to find out how large the file would be for one day.
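+
+A minimal sketch of that multiplication in PowerShell, assuming a one-hour sign-in sample exported to a local file (the path and sample duration are placeholders):
+
+```powershell
+# Size of the captured sample, in GB.
+$sampleFile   = Get-Item -Path "C:\temp\signin-sample.json"
+$sampleHours  = 1
+$sampleSizeGB = $sampleFile.Length / 1GB
+
+# Scale the sample to a full day, then to a month for the pricing calculator.
+$dailyGB   = $sampleSizeGB * (24 / $sampleHours)
+$monthlyGB = $dailyGB * 31
+"{0:N2} GB/day, {1:N2} GB/month" -f $dailyGB, $monthlyGB
+```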
+
+### Estimate the daily cost
+
+To get an idea of how much a log integration could cost for your organization, you can enable an integration for a day or two. Use this option if your budget allows for the temporary increase.
+
+To enable a log integration, follow the steps in the [Integrate activity logs with Azure Monitor logs](howto-integrate-activity-logs-with-log-analytics.md) article. If possible, create a new resource group for the logs and endpoint you want to try out. Having a dedicated resource group makes it easy to view the cost analysis and then delete it when you're done.
+
+With the integration enabled, navigate to **Azure portal** > **Cost Management** > **Cost analysis**. There are several ways to analyze costs. This [Cost Management quickstart](../../cost-management-billing/costs/quick-acm-cost-analysis.md) should help you get started. The figures in the following screenshot are used for example purposes and are not intended to reflect actual amounts.
+
+![Screenshot of a cost analysis breakdown as a pie chart.](media/concept-activity-logs-azure-monitor/cost-analysis-breakdown.png)
+
+Make sure you're using your new resource group as the scope. Explore the daily costs and forecasts to get an idea of how much your log integration could cost.
+
+## Calculate estimated costs
+
+From the [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator/) landing page, you can estimate the costs for various products.
+
+- [Azure Monitor](https://azure.microsoft.com/pricing/details/monitor/)
+- [Azure storage](https://azure.microsoft.com/pricing/details/storage/blobs/)
+- [Azure Event Hubs](https://azure.microsoft.com/pricing/details/event-hubs/)
+- [Microsoft Sentinel](https://azure.microsoft.com/pricing/details/microsoft-sentinel/)
+
+Once you have an estimate for the GB/day that will be sent to an endpoint, enter that value in the [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator/). The figures in the following screenshot are used for example purposes and are not intended to reflect actual prices.
+
+![Screenshot of the Azure pricing calculator, with 8 GB/Day used as an example.](media/concept-activity-logs-azure-monitor/azure-pricing-calculator-values.png)
+
+## Next steps
+
+* [Create a storage account](../../storage/common/storage-account-create.md)
+* [Archive activity logs to a storage account](quickstart-azure-monitor-route-logs-to-storage-account.md)
+* [Route activity logs to an event hub](./tutorial-azure-monitor-stream-logs-to-event-hub.md)
+* [Integrate activity logs with Azure Monitor](howto-integrate-activity-logs-with-log-analytics.md)
active-directory Concept Provisioning Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/concept-provisioning-logs.md
Title: Provisioning logs in Azure Active Directory
-description: Overview of the provisioning logs in Azure Active Directory.
+description: Learn about the information included in the provisioning logs in Azure Active Directory.
Previously updated : 06/16/2023 Last updated : 08/24/2023
# Provisioning logs in Azure Active Directory
Application owners can view logs for their own applications. The following roles
- Global Administrator - Users in a custom role with the [provisioningLogs permission](../roles/custom-enterprise-app-permissions.md#full-list-of-permissions)
-To access the provisioning log data, you have the following options:
+There are several ways to view or analyze the Provisioning logs:
-- Select **Provisioning logs** from the **Monitoring** section of Azure AD.
+- View in the Azure portal.
+- Stream logs to [Azure Monitor](../app-provisioning/application-provisioning-log-analytics.md) through Diagnostic settings.
+- Analyze logs through [Workbook](howto-use-workbooks.md) templates.
+- Access logs programmatically through the [Microsoft Graph API](/graph/api/resources/provisioningobjectsummary) (see the sketch after the portal steps below).
+- [Download the logs](howto-download-logs.md) as a CSV or JSON file.
-- Stream the provisioning logs into [Azure Monitor](../app-provisioning/application-provisioning-log-analytics.md). This method allows for extended data retention and building custom dashboards, alerts, and queries.
+To access the logs in the Azure portal:
-- Query the [Microsoft Graph API](/graph/api/resources/provisioningobjectsummary) for the provisioning logs.--- Download the provisioning logs as a CSV or JSON file.
+1. Sign in to the [Azure portal](https://portal.azure.com) using the Reports Reader role.
+1. Browse to **Azure Active Directory** > **Monitoring** > **Provisioning logs**.
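+
+For programmatic access, a minimal sketch using the Microsoft Graph PowerShell SDK (the module, scope, and `$top` value are illustrative assumptions):
+
+```powershell
+# Requires the Microsoft.Graph PowerShell module.
+Connect-MgGraph -Scopes "AuditLog.Read.All"
+
+# Retrieve the 10 most recent provisioning events (provisioningObjectSummary objects).
+$uri = "https://graph.microsoft.com/v1.0/auditLogs/provisioning?`$top=10"
+(Invoke-MgGraphRequest -Method GET -Uri $uri).value |
+    ForEach-Object { "$($_.activityDateTime)  $($_.provisioningAction)  $($_.provisioningStatusInfo.status)" }
+```
+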
## View the provisioning logs
This area enables you to display more fields or remove fields that are already d
## Filter the results
-When you filter your provisioning data, some filter values are dynamically populated based on your tenant. For example, if you don't have any "create" events in your tenant, there won't be a **Create** filter option.
+When you filter your provisioning data, some filter values are dynamically populated based on your tenant. For example, if you don't have any "create" events in your tenant, the **Create** filter option isn't available.
The **Identity** filter enables you to specify the name or the identity that you care about. This identity might be a user, group, role, or other object.
When you select an item in the provisioning list view, you get more details abou
## Download logs as CSV or JSON
-You can download the provisioning logs for later use by going to the logs in the Azure portal and selecting **Download**. The file will be filtered based on the filter criteria you've selected. Make the filters as specific as possible to reduce the size and time of the download.
+You can download the provisioning logs for later use by going to the logs in the Azure portal and selecting **Download**. The results are filtered based on the filter criteria you've selected. Make the filters as specific as possible to reduce the size and time of the download.
The CSV download includes three files:
The JSON file is downloaded in minified format to reduce the size of the downloa
- Use [Visual Studio Code to format the JSON](https://code.visualstudio.com/docs/languages/json#_formatting). -- Use PowerShell to format the JSON. This script will output the JSON in a format that includes tabs and spaces:
+- Use PowerShell to format the JSON. This script produces a JSON output in a format that includes tabs and spaces:
` $JSONContent = Get-Content -Path "<PATH TO THE PROVISIONING LOGS FILE>" -Raw | ConvertFrom-Json`
` $JSONContent | ConvertTo-Json -Depth 10`
Here are some tips and considerations for provisioning reports:
- You can use the change ID attribute as unique identifier, which can be helpful when you're interacting with product support, for example. -- You might see skipped events for users who aren't in scope. This behavior is expected, especially when the sync scope is set to all users and groups. The service will evaluate all the objects in the tenant, even the ones that are out of scope.
+- You might see skipped events for users who aren't in scope. This behavior is expected, especially when the sync scope is set to all users and groups. The service evaluates all the objects in the tenant, even the ones that are out of scope.
- The provisioning logs don't show role imports (applies to AWS, Salesforce, and Zendesk). You can find the logs for role imports in the audit logs.
Use the following table to better understand how to resolve errors that you find
|Error code|Description| ||| |Conflict,<br>EntryConflict|Correct the conflicting attribute values in either Azure AD or the application. Or, review your matching attribute configuration if the conflicting user account was supposed to be matched and taken over. Review the [documentation](../app-provisioning/customize-application-attributes.md) for more information on configuring matching attributes.|
-|TooManyRequests|The target app rejected this attempt to update the user because it's overloaded and receiving too many requests. There's nothing to do. This attempt will automatically be retired. Microsoft has also been notified of this issue.|
-|InternalServerError |The target app returned an unexpected error. A service issue with the target application might be preventing it from working. This attempt will automatically be retried in 40 minutes.|
+|TooManyRequests|The target app rejected this attempt to update the user because it's overloaded and receiving too many requests. There's nothing to do. This attempt is automatically retried. Microsoft has also been notified of this issue.|
+|InternalServerError |The target app returned an unexpected error. A service issue with the target application might be preventing it from working. This attempt is automatically retried in 40 minutes.|
|InsufficientRights,<br>MethodNotAllowed,<br>NotPermitted,<br>Unauthorized| Azure AD authenticated with the target application but wasn't authorized to perform the update. Review any instructions that the target application has provided, along with the respective application [tutorial](../saas-apps/tutorial-list.md).| |UnprocessableEntity|The target application returned an unexpected response. The configuration of the target application might not be correct, or a service issue with the target application might be preventing it from working.|
-|WebExceptionProtocolError |An HTTP protocol error occurred in connecting to the target application. There's nothing to do. This attempt will automatically be retried in 40 minutes.|
-|InvalidAnchor|A user that was previously created or matched by the provisioning service no longer exists. Ensure that the user exists. To force a new matching of all users, use the Microsoft Graph API to [restart the job](/graph/api/synchronization-synchronizationjob-restart?tabs=http&view=graph-rest-beta&preserve-view=true). <br><br>Restarting provisioning will trigger an initial cycle, which can take time to complete. Restarting provisioning also deletes the cache that the provisioning service uses to operate. That means all users and groups in the tenant will have to be evaluated again, and certain provisioning events might be dropped.|
+|WebExceptionProtocolError |An HTTP protocol error occurred in connecting to the target application. There's nothing to do. This attempt is automatically retried in 40 minutes.|
+|InvalidAnchor|A user that was previously created or matched by the provisioning service no longer exists. Ensure that the user exists. To force a new matching of all users, use the Microsoft Graph API to [restart the job](/graph/api/synchronization-synchronizationjob-restart?tabs=http&view=graph-rest-beta&preserve-view=true). <br><br>Restarting provisioning triggers an initial cycle, which can take time to complete. Restarting provisioning also deletes the cache that the provisioning service uses to operate. That means all users and groups in the tenant must be evaluated again, and certain provisioning events might be dropped.|
|NotImplemented | The target app returned an unexpected response. The configuration of the app might not be correct, or a service issue with the target app might be preventing it from working. Review any instructions that the target application has provided, along with the respective application [tutorial](../saas-apps/tutorial-list.md). | |MandatoryFieldsMissing,<br>MissingValues |The user couldn't be created because required values are missing. Correct the missing attribute values in the source record, or review your matching attribute configuration to ensure that the required fields aren't omitted. [Learn more](../app-provisioning/customize-application-attributes.md) about configuring matching attributes.| |SchemaAttributeNotFound |The operation couldn't be performed because an attribute was specified that doesn't exist in the target application. See the [documentation](../app-provisioning/customize-application-attributes.md) on attribute customization and ensure that your configuration is correct.|
-|InternalError |An internal service error occurred within the Azure AD provisioning service. There's nothing to do. This attempt will automatically be retried in 40 minutes.|
+|InternalError |An internal service error occurred within the Azure AD provisioning service. There's nothing to do. This attempt is automatically retried in 40 minutes.|
|InvalidDomain |The operation couldn't be performed because an attribute value contains an invalid domain name. Update the domain name on the user or add it to the permitted list in the target application. |
-|Timeout |The operation couldn't be completed because the target application took too long to respond. There's nothing to do. This attempt will automatically be retried in 40 minutes.|
+|Timeout |The operation couldn't be completed because the target application took too long to respond. There's nothing to do. This attempt is automatically retried in 40 minutes.|
|LicenseLimitExceeded|The user couldn't be created in the target application because there are no available licenses for this user. Procure more licenses for the target application. Or, review your user assignments and attribute mapping configuration to ensure that the correct users are assigned with the correct attributes.| |DuplicateTargetEntries |The operation couldn't be completed because more than one user in the target application was found with the configured matching attributes. Remove the duplicate user from the target application, or [reconfigure your attribute mappings](../app-provisioning/customize-application-attributes.md).| |DuplicateSourceEntries | The operation couldn't be completed because more than one user was found with the configured matching attributes. Remove the duplicate user, or [reconfigure your attribute mappings](../app-provisioning/customize-application-attributes.md).| |ImportSkipped | When each user is evaluated, the system tries to import the user from the source system. This error commonly occurs when the user who's being imported is missing the matching property defined in your attribute mappings. Without a value present on the user object for the matching attribute, the system can't evaluate scoping, matching, or export changes. The presence of this error doesn't indicate that the user is in scope, because you haven't yet evaluated scoping for the user.| |EntrySynchronizationSkipped | The provisioning service has successfully queried the source system and identified the user. No further action was taken on the user and they were skipped. The user might have been out of scope, or the user might have already existed in the target system with no further changes required.|
-|SystemForCrossDomainIdentity<br>ManagementMultipleEntriesInResponse| A GET request to retrieve a user or group received multiple users or groups in the response. The system expects to receive only one user or group in the response. For example, if you do a [GET Group request](../app-provisioning/use-scim-to-provision-users-and-groups.md#get-group) to retrieve a group, provide a filter to exclude members, and your System for Cross-Domain Identity Management (SCIM) endpoint returns the members, you'll get this error.|
+|SystemForCrossDomainIdentity<br>ManagementMultipleEntriesInResponse| A GET request to retrieve a user or group received multiple users or groups in the response. The system expects to receive only one user or group in the response. For example, if you do a [GET Group request](../app-provisioning/use-scim-to-provision-users-and-groups.md#get-group) to retrieve a group, provide a filter to exclude members, and your System for Cross-Domain Identity Management (SCIM) endpoint returns the members, this error appears.|
|SystemForCrossDomainIdentity<br>ManagementServiceIncompatible|The Azure AD provisioning service is unable to parse the response from the third party application. Work with the application developer to ensure that the SCIM server is compatible with the [Azure AD SCIM client](../app-provisioning/use-scim-to-provision-users-and-groups.md#understand-the-azure-ad-scim-implementation).| |SchemaPropertyCanOnlyAcceptValue|The property in the target system can only accept one value, but the property in the source system has multiple. Ensure that you either map a single-valued attribute to the property that is throwing an error, update the value in the source to be single-valued, or remove the attribute from the mappings.|
Use the following table to better understand how to resolve errors that you find
> | AzureDirectoryB2BManagementPolicyCheckFailure | The cross-tenant synchronization policy allowing automatic redemption failed.<br/><br/>The synchronization engine checks to ensure that the administrator of the target tenant has created an inbound cross-tenant synchronization policy allowing automatic redemption. The synchronization engine also checks if the administrator of the source tenant has enabled an outbound policy for automatic redemption. | Ensure that the automatic redemption setting has been enabled for both the source and target tenants. For more information, see [Automatic redemption setting](../multi-tenant-organizations/cross-tenant-synchronization-overview.md#automatic-redemption-setting). | > | AzureActiveDirectoryQuotaLimitExceeded | The number of objects in the tenant exceeds the directory limit.<br/><br/>Azure AD has limits for the number of objects that can be created in a tenant. | Check whether the quota can be increased. For information about the directory limits and steps to increase the quota, see [Azure AD service limits and restrictions](../enterprise-users/directory-service-limits-restrictions.md). | > |InvitationCreationFailure| The Azure AD provisioning service attempted to invite the user in the target tenant. That invitation failed.| Further investigation likely requires contacting support.|
-> |AzureActiveDirectoryInsufficientRights|When a B2B user in the target tenant has a role other than User, Helpdesk Admin, or User Account Admin, they cannot be deleted.| Remove the role(s) on the user in the target tenant in order to successfully delete the user in the target tenant.|
> |AzureActiveDirectoryForbidden|External collaboration settings have blocked invitations.|Navigate to user settings and ensure that [external collaboration settings](../external-identities/external-collaboration-settings-configure.md) are permitted.| > |InvitationCreationFailureInvalidPropertyValue|Potential causes:<br/>* The Primary SMTP Address is an invalid value.<br/>* UserType is neither guest nor member<br/>* Group email Address is not supported | Potential solutions:<br/>* The Primary SMTP Address has an invalid value. Resolving this issue will likely require updating the mail property of the source user. For more information, see [Prepare for directory synchronization to Microsoft 365](https://aka.ms/DirectoryAttributeValidations)<br/>* Ensure that the userType property is provisioned as type guest or member. This can be fixed by checking your attribute mappings to understand how the userType attribute is mapped.<br/>* The email address of the user matches the email address of a group in the tenant. Update the email address for one of the two objects.| > |InvitationCreationFailureAmbiguousUser| The invited user has a proxy address that matches an internal user in the target tenant. The proxy address must be unique. | To resolve this error, delete the existing internal user in the target tenant or remove this user from sync scope.|
active-directory Concept Sign In Log Activity Details https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/concept-sign-in-log-activity-details.md
+
+ Title: Learn about the sign-in log activity details
+description: Learn about the information available on each of the tabs on the Azure AD sign-in log activity details.
+ Last updated : 08/31/2023
+# Learn about the sign-in log activity details
+
+Azure AD logs all sign-ins into an Azure tenant for compliance purposes. As an IT administrator, you need to know what the values in the sign-in logs mean, so that you can interpret the log values correctly.
+
+- [Learn about the sign-in logs](concept-sign-ins.md).
+- [Customize and filter the sign-in logs](howto-customize-filter-logs.md)
+
+This article explains the values on the Basic info tab of the sign-ins log.
+
+## [Basic info](#tab/basic-info)
+
+The Basic info tab contains most of the details that are also displayed in the table. You can launch the Sign-in Diagnostic from the Basic info tab. For more information, see [How to use the Sign-in Diagnostic](howto-use-sign-in-diagnostics.md).
+
+### Sign-in error codes
+
+If a sign-in failed, you can get more information about the reason in the Basic info tab of the related log item. The error code and associated failure reason appear in the details. For more information, see [How to troubleshoot sign-in errors](howto-troubleshoot-sign-in-errors.md).
+
+![Screenshot of the sign-in error code on the basics tab.](media/concept-sign-in-log-activity-details/sign-in-error-code.png)
+
+## [Location and Device](#tab/location-and-device)
+
+The **Location** and **Device info** tabs display general information about the location and IP address of the user. The Device info tab provides details on the browser and operating system used to sign in. This tab also provides details on whether the device is compliant, managed, or hybrid Azure AD joined.
+
+## [Authentication details](#tab/authentication-details)
+
+The **Authentication Details** tab in the details of a sign-in log provides the following information for each authentication attempt:
+
+- A list of authentication policies applied, such as Conditional Access or Security Defaults.
+- The sequence of authentication methods used to sign in.
+- Whether the authentication attempt was successful and the reason why.
+
+This information allows you to troubleshoot each step in a user's sign-in. Use these details to track the following (a query sketch follows the screenshot below):
+
+- The volume of sign-ins protected by MFA.
+- Usage and success rates for each authentication method.
+- Usage of passwordless authentication methods, such as Passwordless Phone Sign-in, FIDO2, and Windows Hello for Business.
+- How frequently authentication requirements are satisfied by token claims, such as when users aren't interactively prompted to enter a password or enter an SMS OTP.
+
+![Screenshot of the Authentication Details tab.](media/concept-sign-in-log-activity-details/sign-in-activity-details-authentication.png)
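+
+For instance, a hedged sketch that counts recent MFA-protected sign-ins through Microsoft Graph, relying on the filterable `authenticationRequirement` property described later in this article (the SDK, scope, and time window are assumptions):
+
+```powershell
+Connect-MgGraph -Scopes "AuditLog.Read.All"
+
+# Count sign-ins from the last day that required multifactor authentication.
+$since = (Get-Date).AddDays(-1).ToUniversalTime().ToString("yyyy-MM-ddTHH:mm:ssZ")
+$uri = "https://graph.microsoft.com/v1.0/auditLogs/signIns?`$filter=createdDateTime ge $since and authenticationRequirement eq 'multiFactorAuthentication'"
+$mfaSignIns = @((Invoke-MgGraphRequest -Method GET -Uri $uri).value)
+"MFA-protected sign-ins in the last 24 hours (first page only): $($mfaSignIns.Count)"
+```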
+
+When analyzing authentication details, take note of the following details:
+
+- **OATH verification code** is logged as the authentication method for both OATH hardware and software tokens (such as the Microsoft Authenticator app).
+- The **Authentication details** tab can initially show incomplete or inaccurate data until log information is fully aggregated. Known examples include:
+ - A **satisfied by claim in the token** message is incorrectly displayed when sign-in events are initially logged.
+ - The **Primary authentication** row isn't initially logged.
+- If you're unsure of a detail in the logs, gather the **Request ID** and **Correlation ID** to use for further analyzing or troubleshooting.
+- If Conditional Access policies for authentication or session lifetime are applied, they're listed above the sign-in attempts. If you don't see either of these, those policies aren't currently applied. For more information, see [Conditional Access session controls](../conditional-access/concept-conditional-access-session.md).
+++
+## Unique identifiers
+
+In Azure AD, a resource access has three relevant components:
+
+- **Who:** The identity (User) doing the sign-in.
+- **How:** The client (Application) used for the access.
+- **What:** The target (Resource) accessed by the identity.
+
+Each component has an associated unique identifier (ID):
+
+- **Authentication requirement:** Shows the highest level of authentication needed through all the sign-in steps for the sign-in to succeed.
+ - Graph API supports `$filter` (`eq` and `startsWith` operators only).
+
+- **Conditional Access evaluation:** Shows whether continuous access evaluation (CAE) was applied to the sign-in event.
+ - There are multiple sign-in requests for each authentication, which can appear on either the interactive or non-interactive tabs.
+ - CAE is only displayed as true for one of the requests, and it can appear on the interactive tab or non-interactive tab.
+ - For more information, see [Monitor and troubleshoot sign-ins with continuous access evaluation in Azure AD](../conditional-access/howto-continuous-access-evaluation-troubleshoot.md).
+
+- **Correlation ID:** The correlation ID groups sign-ins from the same sign-in session. The value is based on parameters passed by a client, so Azure AD can't guarantee its accuracy (see the query sketch after this list).
+
+- **Cross-tenant access type:** Describes the type of cross-tenant access used by the actor to access the resource. Possible values are:
+ - `none` - A sign-in event that didn't cross an Azure AD tenant's boundaries.
+ - `b2bCollaboration` - A cross-tenant sign-in performed by a guest user using B2B collaboration.
+ - `b2bDirectConnect` - A cross-tenant sign-in performed by a guest user using B2B direct connect.
+ - `microsoftSupport` - A cross-tenant sign-in performed by a Microsoft support agent in a Microsoft customer tenant.
+ - `serviceProvider` - A cross-tenant sign-in performed by a Cloud Service Provider (CSP) or similar admin on behalf of that CSP's customer in a tenant.
+ - `unknownFutureValue` - A sentinel value used by MS Graph to help clients handle changes in enum lists. For more information, see [Best practices for working with Microsoft Graph](/graph/best-practices-concept).
+ - If the sign-in didn't pass the boundaries of a tenant, the value is `none`.
+
+- **Request ID:** An identifier that corresponds to an issued token. If you're looking for sign-ins with a specific token, you need to extract the request ID from the token first.
+
+- **Sign-in:** The string a user provides to Azure AD to identify themselves when attempting to sign in. It's usually a user principal name (UPN), but can be another identifier such as a phone number.
+
+- **Sign-in event types:** Indicates the category of the sign-in the event represents.
+ - The user sign-ins category can be `interactiveUser` or `nonInteractiveUser` and corresponds to the value for the **isInteractive** property on the sign-in resource.
+ - The managed identity category is `managedIdentity`.
+ - The service principal category is `servicePrincipal`.
+ - The Azure portal doesn't show this value, but the sign-in event is placed in the tab that matches its sign-in event type. Possible values are:
+ - `interactiveUser`
+ - `nonInteractiveUser`
+ - `servicePrincipal`
+ - `managedIdentity`
+ - `unknownFutureValue`
+ - The Microsoft Graph API supports `$filter` (`eq` operator only).
+
+- **Tenant:** The sign-in log tracks two tenant identifiers:
+ - **Home tenant** – The tenant that owns the user identity. Azure AD tracks the ID and name.
+ - **Resource tenant** – The tenant that owns the (target) resource.
+ - These identifiers are relevant in cross-tenant scenarios.
+ - For example, to find out how users outside your tenant are accessing your resources, select all entries where the home tenant doesn't match the resource tenant.
+
+- **User type:** Type of a user.
+ - Examples include `member`, `guest`, or `external`.
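+
+As an illustration of how these identifiers are used in practice, a hedged sketch that retrieves all sign-in events sharing one correlation ID through the Microsoft Graph PowerShell SDK (the scope and sample GUID are placeholders):
+
+```powershell
+Connect-MgGraph -Scopes "AuditLog.Read.All"
+
+# All sign-in events from the same sign-in session share a correlation ID.
+$correlationId = "00000000-0000-0000-0000-000000000000"
+$uri = "https://graph.microsoft.com/v1.0/auditLogs/signIns?`$filter=correlationId eq '$correlationId'"
+(Invoke-MgGraphRequest -Method GET -Uri $uri).value |
+    ForEach-Object { "$($_.createdDateTime)  $($_.userPrincipalName)  $($_.appDisplayName)" }
+```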
+++
+## Considerations for sign-in logs
+
+The following scenarios are important to consider when you're reviewing sign-in logs.
+
+- **IP address and location:** There's no definitive connection between an IP address and where the computer with that address is physically located. Mobile providers and VPNs issue IP addresses from central pools that are often far from where the client device is actually used. Currently, converting an IP address to a physical location is a best effort based on traces, registry data, reverse lookups, and other information.
+
+- **Conditional Access:**
+ - *Not applied:* No policy applied to the user and application during sign-in.
+ - *Success:* One or more Conditional Access policies applied to or were evaluated for the user and application (but not necessarily the other conditions) during sign-in. Even though a Conditional Access policy might not apply, if it was evaluated, the Conditional Access status shows *Success*.
+ - *Failure:* The sign-in satisfied the user and application condition of at least one Conditional Access policy and grant controls are either not satisfied or set to block access.
+
+- **Home tenant name:** Due to privacy commitments, Azure AD doesn't populate the home tenant name field during cross-tenant scenarios.
+
+- **Multifactor authentication:** When a user signs in with MFA, several separate MFA events are actually taking place. For example, if a user enters the wrong validation code or doesn't respond in time, additional MFA events are sent to reflect the latest status of the sign-in attempt. These sign-in events appear as one line item in the Azure AD sign-in logs. That same sign-in event in Azure Monitor, however, appears as multiple line items. These events all have the same `correlationId`.
+
+## Next steps
+
+* [Learn about exporting Azure AD sign-in logs](concept-activity-logs-azure-monitor.md)
+* [Explore the sign-in diagnostic in Azure AD](./howto-use-sign-in-diagnostics.md)
active-directory Concept Sign Ins https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/concept-sign-ins.md
Title: Sign-in logs in Azure Active Directory
-description: Conceptual information about Azure AD sign-in logs.
+description: Learn about the four types of sign-in logs available in Azure Active Directory Monitoring and health.
Previously updated : 03/24/2023 Last updated : 08/31/2023
-# Sign-in logs in Azure Active Directory
+# What are Azure Active Directory sign-in logs?
-Reviewing sign-in errors and patterns provides valuable insight into how your users access applications and services. The sign-in logs provided by Azure Active Directory (Azure AD) are a powerful type of [activity log](overview-reports.md) that IT administrators can analyze. This article explains how to access and utilize the sign-in logs.
+Azure Active Directory (Azure AD) logs all sign-ins into an Azure tenant, which includes your internal apps and resources. As an IT administrator, you need to know what the values in the sign-in logs mean, so that you can interpret the log values correctly.
+
+Reviewing sign-in errors and patterns provides valuable insight into how your users access applications and services. The sign-in logs provided by Azure AD are a powerful type of [activity log](overview-reports.md) that you can analyze. This article explains how to access and utilize the sign-in logs.
+
+The preview view of the sign-in logs includes interactive and non-interactive user sign-ins as well as service principal and managed identity sign-ins. You can still view the classic sign-in logs, which only include interactive sign-ins.
Two other activity logs are also available to help monitor the health of your tenant: - **[Audit](concept-audit-logs.md)** – Information about changes applied to your tenant, such as users and group management or updates applied to your tenant's resources.
Two other activity logs are also available to help monitor the health of your te
## What can you do with sign-in logs?
-You can use the sign-ins log to find answers to questions like:
+You can use the sign-in logs to answer questions such as:
-- What is the sign-in pattern of a user?
+- How many users have signed into a particular application this week?
+- How many failed sign-in attempts have occurred in the last 24 hours?
+- Are users signing in from specific browsers or operating systems?
+- Which of my Azure resources are being accessed by managed identities and service principals?
-- How many users have signed in over a week?
+You can also describe the activity associated with a sign-in request by identifying the following details:
-- WhatΓÇÖs the status of these sign-ins?
+- **Who** – The identity (User) performing the sign-in.
+- **How** – The client (Application) used for the sign-in.
+- **What** – The target (Resource) accessed by the identity.
-## How do you access the sign-in logs?
+## What are the types of sign-in logs?
-You can always access your own sign-ins history at [https://mysignins.microsoft.com](https://mysignins.microsoft.com).
+There are four types of logs in the sign-in logs preview:
-To access the sign-ins log for a tenant, you must have one of the following roles:
+- Interactive user sign-ins
+- Non-interactive user sign-ins
+- Service principal sign-ins
+- Managed identity sign-ins
-- Global Administrator-- Security Administrator-- Security Reader-- Global Reader-- Reports Reader
+The classic sign-in logs only include interactive user sign-ins.
-The sign-in activity report is available in [all editions of Azure AD](reference-reports-data-retention.md#how-long-does-azure-ad-store-the-data). If you have an Azure Active Directory P1 or P2 license, you can access the sign-in activity report through the Microsoft Graph API. See [Getting started with Azure Active Directory Premium](../fundamentals/get-started-premium.md) to upgrade your Azure Active Directory edition. It will take a couple of days for the data to show up in Graph after you upgrade to a premium license with no data activities before the upgrade.
+### Interactive user sign-ins
-**To access the Azure AD sign-ins log:**
+Interactive sign-ins are performed *by* a user. They provide an authentication factor to Azure AD. That authentication factor could also interact with a helper app, such as the Microsoft Authenticator app. Users can provide passwords, responses to MFA challenges, biometric factors, or QR codes to Azure AD or to a helper app. This log also includes federated sign-ins from identity providers that are federated to Azure AD.
-1. Sign in to the [Azure portal](https://portal.azure.com) using the appropriate least privileged role.
-1. Go to **Azure Active Directory** > **Sign-ins log**.
- ![Screenshot of the Monitoring side menu with sign-in logs highlighted.](./media/concept-sign-ins/side-menu-sign-in-logs.png)
+**Report size:** Small </br>
+**Examples:**
-You can also access the sign-in logs from the following areas of Azure AD:
+- A user provides username and password in the Azure AD sign-in screen.
+- A user passes an SMS MFA challenge.
+- A user provides a biometric gesture to unlock their Windows PC with Windows Hello for Business.
+- A user is federated to Azure AD with an AD FS SAML assertion.
-- Users-- Groups-- Enterprise applications
+In addition to the default fields, the interactive sign-in log also shows:
-## View the sign-ins log
+- The sign-in location
+- Whether Conditional Access has been applied
-To more effectively view the sign-ins log, spend a few moments customizing the view for your needs. You can specify what columns to include and filter the data to narrow things down.
+#### Known limitations
-### Customize the layout
+**Non-interactive sign-ins on the interactive sign-in logs**
-The sign-ins log has a default view, but you can customize the view using over 30 column options.
+Previously, some non-interactive sign-ins from Microsoft Exchange clients were included in the interactive user sign-in log for better visibility. This increased visibility was necessary before the non-interactive user sign-in logs were introduced in November 2020. However, it's important to note that some non-interactive sign-ins, such as those using FIDO2 keys, may still be marked as interactive due to the way the system was set up before the separate non-interactive logs were introduced. These sign-ins may display interactive details like client credential type and browser information, even though they are technically non-interactive sign-ins.
-1. Select **Columns** from the menu at the top of the log.
-1. Select the columns you want to view and select the **Save** button at the bottom of the window.
+**Passthrough sign-ins**
-![Screenshot of the sign-in logs page with the Columns option highlighted.](./media/concept-sign-ins/sign-in-logs-columns.png)
+Azure Active Directory issues tokens for authentication and authorization. In some situations, a user who is signed in to the Contoso tenant may try to access resources in the Fabrikam tenant, where they don't have access. A no-authorization token, called a passthrough token, is issued to the Fabrikam tenant. The passthrough token doesn't allow the user to access any resources.
-### Filter the results <h3 id="filter-sign-in-activities"></h3>
+When reviewing the logs for this situation, the sign-in logs for the home tenant (in this scenario, Contoso) don't show a sign-in attempt because the token wasn't evaluated against the home tenant's policies. The sign-in token was only used to display the appropriate failure message.
-Filtering the sign-ins log is a helpful way to quickly find logs that match a specific scenario. For example, you could filter the list to only view sign-ins that occurred in a specific geographic location, from a specific operating system, or from a specific type of credential.
+### Non-interactive user sign-ins
-Some filter options prompt you to select more options. Follow the prompts to make the selection you need for the filter. You can add multiple filters.
+Non-interactive sign-ins are done *on behalf of* a user. These sign-ins are performed by a client app or OS components on behalf of a user and don't require the user to provide an authentication factor. Instead, Azure AD recognizes when the user's token needs to be refreshed and does so behind the scenes, without interrupting the user's session. In general, the user perceives these sign-ins as happening in the background.
-Select the **Add filters** option from the top of the table to get started.
+![Screenshot of the non-interactive user sign-ins log.](media/concept-sign-ins/sign-in-logs-user-noninteractive.png)
-![Screenshot of the sign-in logs page with the Add filters option highlighted.](./media/concept-sign-ins/sign-in-logs-filter.png)
+**Report size:** Large </br>
+**Examples:**
-There are several filter options to choose from:
+- A client app uses an OAuth 2.0 refresh token to get an access token.
+- A client uses an OAuth 2.0 authorization code to get an access token and refresh token.
+- A user performs single sign-on (SSO) to a web or Windows app on an Azure AD joined PC (without providing an authentication factor or interacting with an Azure AD prompt).
+- A user signs in to a second Microsoft Office app while they have a session on a mobile device using FOCI (Family of Client IDs).
-- **User:** The *user principal name* (UPN) of the user in question.-- **Status:** Options are *Success*, *Failure*, and *Interrupted*.-- **Resource:** The name of the service used for the sign-in.-- **Conditional Access:** The status of the Conditional Access policy. Options are:
- - *Not applied:* No policy applied to the user and application during sign-in.
- - *Success:* One or more Conditional Access policies applied to or were evaluated for the user and application (but not necessarily the other conditions) during sign-in. Even though a Conditional Access policy might not apply, if it was evaluated, the Conditional Access status will show 'Success'.
- - *Failure:* The sign-in satisfied the user and application condition of at least one Conditional Access policy and grant controls are either not satisfied or set to block access.
-- **IP addresses:** There's no definitive connection between an IP address and where the computer with that address is physically located. Mobile providers and VPNs issue IP addresses from central pools that are often far from where the client device is actually used. Currently, converting IP address to a physical location is a best effort based on traces, registry data, reverse lookups and other information.
+In addition to the default fields, the non-interactive sign-in log also shows:
-The following table provides the options and descriptions for the **Client app** filter option.
+- Resource ID
+- Number of grouped sign-ins
-> [!NOTE]
-> Due to privacy commitments, Azure AD does not populate this field to the home tenant in the case of a cross-tenant scenario.
+You can't customize the fields shown in this report.
+
+To make it easier to digest the data, non-interactive sign-in events are grouped. Clients often create many non-interactive sign-ins on behalf of the same user in a short time period. The non-interactive sign-ins share the same characteristics except for the time the sign-in was attempted. For example, a client may get an access token once per hour on behalf of a user. If the state of the user or client doesn't change, the IP address, resource, and all other information is the same for each access token request. The only state that does change is the date and time of the sign-in.
-|Name|Modern authentication|Description|
-||:-:||
-|Authenticated SMTP| |Used by POP and IMAP client's to send email messages.|
-|Autodiscover| |Used by Outlook and EAS clients to find and connect to mailboxes in Exchange Online.|
-|Exchange ActiveSync| |This filter shows all sign-in attempts where the EAS protocol has been attempted.|
-|Browser|![Blue checkmark.](./media/concept-sign-ins/check.png)|Shows all sign-in attempts from users using web browsers|
-|Exchange ActiveSync| | Shows all sign-in attempts from users with client apps using Exchange ActiveSync to connect to Exchange Online|
-|Exchange Online PowerShell| |Used to connect to Exchange Online with remote PowerShell. If you block basic authentication for Exchange Online PowerShell, you need to use the Exchange Online PowerShell module to connect. For instructions, see [Connect to Exchange Online PowerShell using multi-factor authentication](/powershell/exchange/exchange-online/connect-to-exchange-online-powershell/mfa-connect-to-exchange-online-powershell).|
-|Exchange Web Services| |A programming interface that's used by Outlook, Outlook for Mac, and third-party apps.|
-|IMAP4| |A legacy mail client using IMAP to retrieve email.|
-|MAPI over HTTP| |Used by Outlook 2010 and later.|
-|Mobile apps and desktop clients|![Blue checkmark.](./media/concept-sign-ins/check.png)|Shows all sign-in attempts from users using mobile apps and desktop clients.|
-|Offline Address Book| |A copy of address list collections that are downloaded and used by Outlook.|
-|Outlook Anywhere (RPC over HTTP)| |Used by Outlook 2016 and earlier.|
-|Outlook Service| |Used by the Mail and Calendar app for Windows 10.|
-|POP3| |A legacy mail client using POP3 to retrieve email.|
-|Reporting Web Services| |Used to retrieve report data in Exchange Online.|
-|Other clients| |Shows all sign-in attempts from users where the client app isn't included or unknown.|
-## Analyze the sign-in logs
+When Azure AD logs multiple sign-ins that are identical other than time and date, those sign-ins are from the same entity and are aggregated into a single row. A row with multiple identical sign-ins (except for date and time issued) has a value greater than 1 in the *# sign-ins* column. These aggregated sign-ins may also appear to have the same time stamps. The **Time aggregate** filter can be set to 1 hour, 6 hours, or 24 hours. You can expand the row to see all the different sign-ins and their different time stamps.
-Now that your sign-in logs table is formatted appropriately, you can more effectively analyze the data. Some common scenarios are described here, but they aren't the only ways to analyze sign-in data. Further analysis and retention of sign-in data can be accomplished by exporting the logs to other tools.
+Sign-ins are aggregated in the non-interactive user sign-in log when the following data matches:
-### Sign-in error codes
+- Application
+- User
+- IP address
+- Status
+- Resource ID
-If a sign-in failed, you can get more information about the reason in the **Basic info** section of the related log item. The error code and associated failure reason appear in the details. Because of the complexity of some Azure AD environments, we can't document every possible error code and resolution. Some errors may require [submitting a support request](../fundamentals/how-to-get-support.md) to resolve the issue.
+> [!NOTE]
+> The IP address of non-interactive sign-ins performed by [confidential clients](../develop/msal-client-applications.md) doesn't match the actual source IP of where the refresh token request is coming from. Instead, it shows the original IP used for the original token issuance.
+
+### Service principal sign-ins
-![Screenshot of a sign-in error code.](./media/concept-sign-ins/error-code.png)
+Unlike interactive and non-interactive user sign-ins, service principal sign-ins don't involve a user. Instead, they're sign-ins by any nonuser account, such as apps or service principals (except managed identity sign-ins, which are included only in the managed identity sign-in log). In these sign-ins, the app or service provides its own credential, such as a certificate or app secret, to authenticate or access resources.
-For a list of error codes related to Azure AD authentication and authorization, see the [Azure AD authentication and authorization error codes](../develop/reference-error-codes.md) article. In some cases, the [sign-in error lookup tool](https://login.microsoftonline.com/error) may provide remediation steps. Enter the **Error code** provided in the sign-in log details into the tool and select the **Submit** button.
+![Screenshot of the service principal sign-ins log.](media/concept-sign-ins/sign-in-logs-service-principal.png)
-![Screenshot of the error code lookup tool.](./media/concept-sign-ins/error-code-lookup-tool.png)
+**Report size:** Large </br>
+**Examples:**
-### Authentication details
+- A service principal uses a certificate to authenticate and access the Microsoft Graph.
+- An application uses a client secret to authenticate in the OAuth Client Credentials flow.
-The **Authentication Details** tab in the details of a sign-in log provides the following information for each authentication attempt:
+You can't customize the fields shown in this report.
-- A list of authentication policies applied, such as Conditional Access or Security Defaults.-- A list of session lifetime policies applied, such as Sign-in frequency or Remember MFA.-- The sequence of authentication methods used to sign-in.-- If the authentication attempt was successful and the reason why.
+To make it easier to digest the data in the service principal sign-in logs, service principal sign-in events are grouped. Sign-ins from the same entity under the same conditions are aggregated into a single row. You can expand the row to see all the different sign-ins and their different time stamps. Sign-ins are aggregated in the service principal report when the following data matches:
-This information allows you to troubleshoot each step in a userΓÇÖs sign-in. Use these details to track:
+- Service principal name or ID
+- Status
+- IP address
+- Resource name or ID
-- The volume of sign-ins protected by MFA. -- The reason for the authentication prompt, based on the session lifetime policies.-- Usage and success rates for each authentication method.-- Usage of passwordless authentication methods, such as Passwordless Phone Sign-in, FIDO2, and Windows Hello for Business.-- How frequently authentication requirements are satisfied by token claims, such as when users aren't interactively prompted to enter a password or enter an SMS OTP.
+### Managed identity sign-ins
-While viewing the sign-ins log, select a sign-in event, and then select the **Authentication Details** tab.
+Managed identities for Azure resources sign-ins are sign-ins performed by resources that have their secrets managed by Azure to simplify credential management. A VM with managed credentials uses Azure AD to get an access token.
-![Screenshot of the Authentication Details tab](media/concept-sign-ins/authentication-details-tab.png)
+![Screenshot of the managed identity sign-ins log.](media/concept-sign-ins/sign-in-logs-managed-identity.png)
-When analyzing authentication details, take note of the following details:
+**Report size:** Small </br>
+**Examples:**
-- **OATH verification code** is logged as the authentication method for both OATH hardware and software tokens (such as the Microsoft Authenticator app).-- The **Authentication details** tab can initially show incomplete or inaccurate data until log information is fully aggregated. Known examples include:
- - A **satisfied by claim in the token** message is incorrectly displayed when sign-in events are initially logged.
- - The **Primary authentication** row isn't initially logged.
-- If you're unsure of a detail in the logs, gather the **Request ID** and **Correlation ID** to use for further analyzing or troubleshooting.
+You can't customize the fields shown in this report.
-#### Considerations for MFA sign-ins
+To make it easier to digest the data in the managed identities for Azure resources sign-in logs, sign-in events are grouped. Sign-ins from the same entity are aggregated into a single row. You can expand the row to see all the different sign-ins and their different time stamps. Sign-ins are aggregated in the managed identities report when all of the following data matches:
-When a user signs in with MFA, several separate MFA events are actually taking place. For example, if a user enters the wrong validation code or doesn't respond in time, additional MFA events are sent to reflect the latest status of the sign-in attempt. These sign-in events appear as one line item in the Azure AD sign-in logs. That same sign-in event in Azure Monitor, however, appears as multiple line items. These events all have the same `correlationId`.
+- Managed identity name or ID
+- Status
+- Resource name or ID
+
+Select an item in the list view to display all sign-ins that are grouped under a node. Select a grouped item to see all details of the sign-in.
## Sign-in data used by other services
-Sign-in data is used by several services in Azure to monitor risky sign-ins and provide insight into application usage.
+Sign-in data is used by several services in Azure to monitor risky sign-ins, provide insight into application usage, and more.
-### Risky sign-in data in Azure AD Identity Protection
+### Azure AD Identity Protection
Sign-in log data visualization that relates to risky sign-ins is available in the **Azure AD Identity Protection** overview, which uses the following data: - Risky users - Risky user sign-ins -- Risky service principals-- Risky service principal sign-ins-
- For more information about the Azure AD Identity Protection tools, see the [Azure AD Identity Protection overview](../identity-protection/overview-identity-protection.md).
+- Risky workload identities
-![Screenshot of risky users in Identity Protection.](media/concept-sign-ins/id-protection-overview.png)
+For more information about the Azure AD Identity Protection tools, see the [Azure AD Identity Protection overview](../identity-protection/overview-identity-protection.md).
-### Azure AD application and authentication sign-in activity
+### Azure AD Usage and insights
To view application-specific sign-in data, go to **Azure AD** and select **Usage & insights** from the Monitoring section. These reports provide a closer look at sign-ins for Azure AD application activity and AD FS application activity. For more information, see [Azure AD Usage & insights](concept-usage-insights-report.md).
-![Screenshot of the Azure AD application activity report.](media/concept-sign-ins/azure-ad-app-activity.png)
-Azure AD Usage & insights also provides the **Authentication methods activity** report, which breaks down authentication by the method used. Use this report to see how many of your users are set up with MFA or passwordless authentication.
+There are several reports available in **Usage & insights**. Some of these reports are in preview.
-![Screenshot of the Authentication methods report.](media/concept-sign-ins/azure-ad-authentication-methods.png)
+- Azure AD application activity (preview)
+- AD FS application activity
+- Authentication methods activity
+- Service principal sign-in activity (preview)
+- Application credential activity (preview)
### Microsoft 365 activity logs
active-directory Concept Usage Insights Report https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/concept-usage-insights-report.md
Title: Usage and insights report
-description: Introduction to usage and insights report in the Azure portal
+description: Learn about the information you can explore using the Usage and insights report in Azure Active Directory.
Previously updated : 05/30/2023 Last updated : 08/24/2023
You can access the Usage and insights reports from the Azure portal and using Mi
### To access Usage & insights in the portal:
-1. Sign in to the [Azure portal](https://portal.azure.com) using the appropriate least privileged role.
-1. Go to **Azure Active Directory** > **Usage & insights**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Reports Reader](../roles/permissions-reference.md#reports-reader).
+1. Browse to **Identity** > **Monitoring & health** > **Usage & insights**.
The **Usage & insights** reports are also available from the **Enterprise applications** area of Azure AD. All users can access their own sign-ins at the [My Sign-Ins portal](https://mysignins.microsoft.com/security-info).
For more information, see [Application sign-in in Microsoft Graph](/graph/api/re
## AD FS application activity
-The **AD FS application activity** report in Usage & insights lists all Active Directory Federated Services (AD FS) applications in your organization that have had an active user login to authenticate in the last 30 days. These applications have not been migrated to Azure AD for authentication.
+The **AD FS application activity** report in Usage & insights lists all Active Directory Federated Services (AD FS) applications in your organization that have had an active user sign-in to authenticate in the last 30 days. These applications haven't been migrated to Azure AD for authentication.
Viewing the AD FS application activity using Microsoft Graph retrieves a list of the `relyingPartyDetailedSummary` objects, which identifies the relying party to a particular Federation Service.
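As a sketch, you could retrieve the same summary with Microsoft Graph PowerShell. The `getRelyingPartyDetailedSummary` function path and `period` value below are assumptions based on the beta reports API; verify them against the Microsoft Graph reference before relying on them.

```powershell
# Sketch: retrieve AD FS relying party summaries from the beta reports API.
# The function path and period value are assumptions; verify them against the
# Microsoft Graph reference for relyingPartyDetailedSummary.
Connect-MgGraph -Scopes "Reports.Read.All"

$summary = Invoke-MgGraphRequest -Method GET `
    -Uri "https://graph.microsoft.com/beta/reports/getRelyingPartyDetailedSummary(period='D30')"

# Each entry identifies one relying party and its sign-in counts.
$summary.value | ForEach-Object { "$($_.relyingPartyName): $($_.totalSignInCount) sign-ins" }
```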
Are you planning on running a registration campaign to nudge users to sign up fo
Looking for the details of a user and their authentication methods? Look at the **User registration details** report from the side menu and search for a name or UPN. The default MFA method and other methods registered are displayed. You can also see if the user is capable of registering for one of the authentication methods.
-Looking for the status of an authentication registration or reset event of a user? Look at the **Registration and reset events** report from the side menu and then search for a name or UPN. You'll be able to see the method used to attempt to register or reset an authentication method.
+Looking for the status of an authentication registration or reset event of a user? Look at the **Registration and reset events** report from the side menu and then search for a name or UPN. You can see the method used to attempt to register or reset an authentication method.
## Service principal sign-in activity (preview)
-The Service principal sign-in activity (preview) report provides the last activity date for every service principal. The report provides you information on the usage of the service principal - whether it was used as a client or resource app and whether it was used in an app-only or delegated context. The report shows the last time the service principal was used.
+The Service principal sign-in activity (preview) report provides the last activity date for every service principal. The report provides you with information on the usage of the service principal - whether it was used as a client or resource app and whether it was used in an app-only or delegated context. The report shows the last time the service principal was used.
[ ![Screenshot of the service principal sign-in activity report.](./media/concept-usage-insights-report/service-principal-sign-ins.png) ](./media/concept-usage-insights-report/service-principal-sign-ins.png#lightbox)
Add the following query to retrieve the service principal sign-in activity, then
```http
GET https://graph.microsoft.com/beta/reports/servicePrincipalSignInActivities/{id}
```
-The following is an example of the response:
+Example response:
```json
{
For more information, see [List service principal activity in Microsoft Graph](/
## Application credential activity (preview)
-The Application credential activity (preview) report provides the last credential activity date for every application credential. The report provides the credential type (certificate or client secret), the last used date, and the expiration date. With this report you can view the expiration dates of all your applications in one place.
+The Application credential activity (preview) report provides the last credential activity date for every application credential. The report provides the credential type (certificate or client secret), the last used date, and the expiration date. With this report, you can view the expiration dates of all your applications in one place.
To view the details of the application credential activity, select the **View more details** link. These details include the application object, service principal, and resource IDs. You can also see if the credential origin is the application or the service principal.
To get started, follow these instructions to work with `appCredentialSignInActiv
```http
GET https://graph.microsoft.com/beta/reports/appCredentialSignInActivities/{id}
```
-The following is an example of the response:
+Example response:
```json
{
active-directory How To View Applied Conditional Access Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/how-to-view-applied-conditional-access-policies.md
- Title: View applied Conditional Access policies in Azure AD sign-in logs
+ Title: View applied Conditional Access policies in the Azure AD sign-in logs
description: Learn how to view Conditional Access policies in Azure AD sign-in logs so that you can assess the effect of those policies.
Previously updated : 02/03/2023 Last updated : 08/24/2023

# View applied Conditional Access policies in Azure AD sign-in logs
To see applied Conditional Access policies in the sign-in logs, administrators m
The following built-in roles grant permissions to *read Conditional Access policies*:

-- Global Administrator
+- Security Reader
- Global Reader
- Security Administrator
-- Security Reader
- Conditional Access Administrator
+- Global Administrator
The following built-in roles grant permission to *view sign-in logs*:

-- Global Administrator
-- Security Administrator
-- Security Reader
-- Global Reader
- Reports Reader
+- Security Reader
+- Global Reader
+- Security Administrator
+- Global Administrator
## Permissions for client apps
The Azure AD Graph PowerShell module doesn't support viewing applied Conditional
The activity details of sign-in logs contain several tabs. The **Conditional Access** tab lists the Conditional Access policies applied to that sign-in event.
-1. Sign in to the [Azure portal](https://portal.azure.com) using the Security Reader role.
-1. In the **Monitoring** section, select **Sign-in logs**.
-1. Select a sign-in item from the table to open the **Activity Details: Sign-ins context** pane.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Global Reader](../roles/permissions-reference.md#global-reader).
+1. Browse to **Identity** > **Monitoring & health** > **Sign-in logs**.
+1. Select a sign-in item from the table to view the sign-in details pane.
1. Select the **Conditional Access** tab. If you don't see the Conditional Access policies, confirm you're using a role that provides access to both the sign-in logs and the Conditional Access policies.
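The same details are available programmatically. The following sketch assumes the Microsoft Graph PowerShell SDK (`Microsoft.Graph.Reports` module); each sign-in record exposes an `appliedConditionalAccessPolicies` collection.

```powershell
# Sketch: list the Conditional Access policies applied to recent sign-ins.
# Assumes the Microsoft Graph PowerShell SDK (Microsoft.Graph.Reports module).
Connect-MgGraph -Scopes "AuditLog.Read.All","Policy.Read.ConditionalAccess"

Get-MgAuditLogSignIn -Top 5 | ForEach-Object {
    "{0} -> {1}" -f $_.UserPrincipalName, $_.AppDisplayName
    # Each policy entry reports its name and whether it was applied.
    $_.AppliedConditionalAccessPolicies | Select-Object DisplayName, Result
}
```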
active-directory Howto Access Activity Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-access-activity-logs.md
Title: Access activity logs in Azure AD
-description: Learn how to choose the right method for accessing the activity logs in Azure AD.
+description: How to choose the right method for accessing and integrating the activity logs in Azure Active Directory.
Previously updated : 07/26/2023 Last updated : 08/28/2023 --
-# How To: Access activity logs in Azure AD
+# How to access activity logs in Azure AD
-The data in your Azure Active Directory (Azure AD) logs enables you to assess many aspects of your Azure AD tenant. To cover a broad range of scenarios, Azure AD provides you with various options to access your activity log data. As an IT administrator, you need to understand the intended uses cases for these options, so that you can select the right access method for your scenario.
+The data collected in your Azure Active Directory (Azure AD) logs enables you to assess many aspects of your Azure AD tenant. To cover a broad range of scenarios, Azure AD provides you with several options to access your activity log data. As an IT administrator, you need to understand the intended use cases for these options, so that you can select the right access method for your scenario.
You can access Azure AD activity logs and reports using the following methods:
Each of these methods provides you with capabilities that may align with certain
## Prerequisites
-The required roles and licenses may vary based on the report. Global Administrator can access all reports, but we recommend using a role with least privilege access to align with the [Zero Trust guidance](/security/zero-trust/zero-trust-overview).
+The required roles and licenses may vary based on the report. Global Administrators can access all reports, but we recommend using a role with least privilege access to align with the [Zero Trust guidance](/security/zero-trust/zero-trust-overview).
| Log / Report | Roles | Licenses | |--|--|--|
The required roles and licenses may vary based on the report. Global Administrat
| Usage and insights | Security Reader<br>Reports Reader<br> Security Administrator | Premium P1/P2 | | Identity Protection* | Security Administrator<br>Security Operator<br>Security Reader<br>Global Reader | Azure AD Free/Microsoft 365 Apps<br>Azure AD Premium P1/P2 |
-*The level of access and capabilities for Identity Protection varies with the role and license. For more information, see the [license requirements for Identity Protection](../identity-protection/overview-identity-protection.md#license-requirements).
+*The level of access and capabilities for Identity Protection vary with the role and license. For more information, see the [license requirements for Identity Protection](../identity-protection/overview-identity-protection.md#license-requirements).
Audit logs are available for features that you've licensed. To access the sign-in logs using the Microsoft Graph API, your tenant must have an Azure AD Premium license associated with it.
The SIEM tools you can integrate with your event hub can provide analysis and mo
### Quick steps
-1. Sign in to the [Azure portal](https://portal.azure.com) using one of the required roles.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Security Administrator](../roles/permissions-reference.md#security-administrator).
1. Create an Event Hubs namespace and event hub.
-1. Go to **Azure AD** > **Diagnostic settings**.
+1. Browse to **Identity** > **Monitoring & health** > **Diagnostic settings**.
1. Choose the logs you want to stream, select the **Stream to an event hub** option, and complete the fields.

- [Set up an Event Hubs namespace and an event hub](../../event-hubs/event-hubs-create.md)
- [Learn more about streaming activity logs to an event hub](tutorial-azure-monitor-stream-logs-to-event-hub.md)
The SIEM tools you can integrate with your event hub can provide analysis and mo
## Access logs with Microsoft Graph API
-The Microsoft Graph API provides a unified programmability model that you can use to access data for your Azure AD Premium tenants. It doesn't require an administrator or developer to set up extra infrastructure to support your script or app. The Microsoft Graph API is **not** designed for pulling large amounts of activity data. Pulling large amounts of activity data using the API may lead to issues with pagination and performance.
+The Microsoft Graph API provides a unified programmability model that you can use to access data for your Azure AD Premium tenants. It doesn't require an administrator or developer to set up extra infrastructure to support your script or app.
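For example, a narrowly scoped request keeps the result set manageable. The following is a minimal sketch, assuming the Microsoft Graph PowerShell SDK; the date and `-Top` values are illustrative placeholders.

```powershell
# Sketch: retrieve a small, scoped slice of the sign-in logs through Microsoft Graph.
# Assumes the Microsoft Graph PowerShell SDK; the date and -Top values are placeholders.
Connect-MgGraph -Scopes "AuditLog.Read.All","Directory.Read.All"

Get-MgAuditLogSignIn -Filter "createdDateTime ge 2023-09-01T00:00:00Z" -Top 50 |
    Select-Object CreatedDateTime, UserPrincipalName, AppDisplayName
```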
### Recommended uses
Integrating Azure AD logs with Azure Monitor logs provides a centralized locatio
### Quick steps
-1. Sign in to the [Azure portal](https://portal.azure.com) using one of the required roles.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Security Administrator](../roles/permissions-reference.md#security-administrator).
1. [Create a Log Analytics workspace](../../azure-monitor/logs/quick-create-workspace.md).
-1. Go to **Azure AD** > **Diagnostic settings**.
+1. Browse to **Identity** > **Monitoring & health** > **Diagnostic settings**.
1. Choose the logs you want to stream, select the **Send to Log Analytics workspace** option, and complete the fields.
-1. Go to **Azure AD** > **Log Analytics** and begin querying the data.
+1. Browse to **Identity** > **Monitoring & health** > **Log Analytics** and begin querying the data.
- [Integrate Azure AD logs with Azure Monitor logs](howto-integrate-activity-logs-with-log-analytics.md)
- [Learn how to query using Log Analytics](howto-analyze-activity-logs-log-analytics.md)
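Once the data is flowing, you can also run queries from a script. This sketch assumes the `Az.OperationalInsights` module and a workspace that already receives the `SigninLogs` table; `<workspace-id>` is a placeholder.

```powershell
# Sketch: query sign-in volume per application from a script.
# Assumes Az.OperationalInsights is installed; <workspace-id> is a placeholder.
Connect-AzAccount

$kql = "SigninLogs | where TimeGenerated > ago(24h) | summarize SignIns = count() by AppDisplayName | top 10 by SignIns"
(Invoke-AzOperationalInsightsQuery -WorkspaceId "<workspace-id>" -Query $kql).Results
```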
The reports available in the Azure portal provide a wide range of capabilities t
Use the following basic steps to access the reports in the Azure portal.

#### Azure AD activity logs
-1. Go to **Azure AD** and select **Audit logs**, **Sign-in logs**, or **Provisioning logs** from the **Monitoring** menu.
+1. Browse to **Identity** > **Monitoring & health** > **Audit logs**/**Sign-in logs**/**Provisioning logs**.
1. Adjust the filter according to your needs.

- [Learn how to filter activity logs](quickstart-filter-audit-log.md)
- [Explore the Azure AD audit log categories and activities](reference-audit-activities.md)
Use the following basic steps to access the reports in the Azure portal.
#### Azure AD Identity Protection reports
-1. Go to **Azure AD** > **Security** > **Identity Protection**.
+1. Browse to **Protection** > **Identity Protection**.
1. Explore the available reports.

- [Learn more about Identity Protection](../identity-protection/overview-identity-protection.md)
- [Learn how to investigate risk](../identity-protection/howto-identity-protection-investigate-risk.md)

#### Usage and insights reports
-1. Go to **Azure AD** and select **Usage and insights** from the **Monitoring** menu.
+1. Browse to **Identity** > **Monitoring & health** > **Usage and insights**.
1. Explore the available reports.

- [Learn more about the Usage and insights report](concept-usage-insights-report.md)
We recommend manually downloading and storing your activity logs if you have bud
Use the following basic steps to archive or download your activity logs.
-### Archive activity logs to a storage account
+#### Archive activity logs to a storage account
-1. Sign in to the [Azure portal](https://portal.azure.com) using one of the required roles.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Security Administrator](../roles/permissions-reference.md#security-administrator).
1. Create a storage account.
-1. Go to **Azure AD** > **Diagnostic settings**.
+1. Browse to **Identity** > **Monitoring & health** > **Diagnostic settings**.
1. Choose the logs you want to stream, select the **Archive to a storage account** option, and complete the fields.

- [Review the data retention policies](reference-reports-data-retention.md)

#### Manually download activity logs
-1. Sign in to the [Azure portal](https://portal.azure.com) using one of the required roles.
-1. Go to **Azure AD** and select **Audit logs**, **Sign-in logs**, or **Provisioning logs** from the **Monitoring** menu.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Reports Reader](../roles/permissions-reference.md#reports-reader).
+1. Browse to **Identity** > **Monitoring & health** > **Audit logs**/**Sign-in logs**/**Provisioning logs**.
1. Select **Download**.

- [Learn more about how to download logs](howto-download-logs.md).
active-directory Howto Analyze Activity Logs Log Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-analyze-activity-logs-log-analytics.md
Title: Analyze activity logs using Log Analytics
-description: Learn how to analyze Azure Active Directory activity logs using Log Analytics
+description: Learn how to analyze audit, sign-in, and provisioning logs in Azure Active Directory using Log Analytics queries.
Previously updated : 06/26/2023 Last updated : 08/24/2023

# Analyze Azure AD activity logs with Log Analytics
This article describes how to analyze the Azure AD activity logs in your Log Analyti
## Roles and licenses
-To analyze Azure AD logs with Azure Monitor, you need the following roles and licenses:
+To analyze activity logs with Log Analytics, you need:
+
+- An Azure AD tenant with a [Premium P1 license](../fundamentals/get-started-premium.md)
+- A Log Analytics workspace *and* access to that workspace
+- The appropriate roles for Azure Monitor *and* Azure AD
+
+### Log Analytics workspace
+
+You must create a [Log Analytics workspace](../../azure-monitor/logs/quick-create-workspace.md). A combination of factors determines access to Log Analytics workspaces. You need the right roles for the workspace *and* the resources sending the data.
+
+For more information, see [Manage access to Log Analytics workspaces](../../azure-monitor/logs/manage-access.md).
+
+### Azure Monitor roles
+
+Azure Monitor provides [two built-in roles](../../azure-monitor/roles-permissions-security.md#monitoring-reader) for viewing monitoring data and editing monitoring settings. Azure role-based access control (RBAC) also provides two Log Analytics built-in roles that grant similar access.
+
+- **View**:
+ - Monitoring Reader
+ - Log Analytics Reader
+
+- **View and modify settings**:
+ - Monitoring Contributor
+ - Log Analytics Contributor
+
+For more information on the Azure Monitor built-in roles, see [Roles, permissions, and security in Azure Monitor](../../azure-monitor/roles-permissions-security.md#monitoring-reader).
+
+For more information on the Log Analytics RBAC roles, see [Azure built-in roles](../../role-based-access-control/built-in-roles.md#log-analytics-contributor).
+
+### Azure AD roles
-* **An Azure subscription:** If you don't have an Azure subscription, you can [sign up for a free trial](https://azure.microsoft.com/free/).
+Read-only access allows you to view Azure AD log data inside a workbook, query data from Log Analytics, or read logs in the Azure AD portal. Update access adds the ability to create and edit diagnostic settings to send Azure AD data to a Log Analytics workspace.
-* **An Azure AD Premium P1 or P2 tenant:** You can find the license type of your tenant on the [Overview](https://portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/Overview) page in Azure AD.
+- **Read**:
+ - Reports Reader
+ - Security Reader
+ - Global Reader
-* **Reports Reader**, **Security Reader**, or **Security Administrator** access for the Azure AD tenant: These roles are required to view Log Analytics through the Azure AD portal.
+- **Update**:
+ - Security Administrator
-* **Permission to access data in a Log Analytics workspace:** See [Manage access to log data and workspaces in Azure Monitor](../../azure-monitor/logs/manage-access.md) for information on the different permission options and how to configure permissions.
+For more information on Azure AD built-in roles, see [Azure AD built-in roles](../roles/permissions-reference.md).
## Access Log Analytics
To view the Azure AD Log Analytics, you must already be sending your activity lo
[!INCLUDE [portal updates](~/articles/active-directory/includes/portal-update.md)]
-1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Reports Reader](../roles/permissions-reference.md#reports-reader).
-1. Go to **Azure Active Directory** > **Log Analytics**. A default search query runs.
+1. Browse to **Identity** > **Monitoring & health** > **Log Analytics**. A default search query runs.
![Default query](./media/howto-analyze-activity-logs-log-analytics/defaultquery.png)
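You can replace the default query with your own KQL. The here-string in the following sketch can also be pasted directly into the Log Analytics query window; `<workspace-id>` is a placeholder.

```powershell
# Sketch: summarize the most common audit operations over the last week.
# Assumes Az.OperationalInsights is installed; <workspace-id> is a placeholder.
$kql = @"
AuditLogs
| where TimeGenerated > ago(7d)
| summarize Count = count() by OperationName
| sort by Count desc
"@

(Invoke-AzOperationalInsightsQuery -WorkspaceId "<workspace-id>" -Query $kql).Results
```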
active-directory Howto Archive Logs To Storage Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-archive-logs-to-storage-account.md
+
+ Title: How to archive activity logs to a storage account
+description: Learn how to archive Azure Active Directory activity logs to a storage account through Diagnostic settings.
+++++++ Last updated : 08/24/2023+++
+# Customer intent: As an IT administrator, I want to learn how to archive Azure AD logs to an Azure storage account so I can retain them for longer than the default retention period.
++
+# How to archive Azure AD logs to an Azure storage account
+
+If you need to store Azure Active Directory (Azure AD) activity logs for longer than the [default retention period](reference-reports-data-retention.md), you can archive your logs to a storage account.
+
+## Prerequisites
+
+To use this feature, you need:
+
+* An Azure subscription. If you don't have an Azure subscription, you can [sign up for a free trial](https://azure.microsoft.com/free/).
+* An Azure storage account.
+* A user who's a *Security Administrator* or *Global Administrator* for the Azure AD tenant.
+
+## Archive logs to an Azure storage account
++
+6. Under **Destination Details** select the **Archive to a storage account** check box.
+
+7. Select the appropriate **Subscription** and **Storage account** from the menus.
+
+ ![Diagnostics settings](media/howto-archive-logs-to-storage-account/diagnostic-settings-storage.png)
+
+8. After the categories have been selected, in the **Retention days** field, enter the number of days you want to retain the log data. By default, this value is *0*, which means that logs are retained in the storage account indefinitely. If you set a different value, events older than the number of days selected are automatically cleaned up.
+
+ > [!NOTE]
+ > The Diagnostic settings storage retention feature is being deprecated. For details on this change, see [**Migrate from diagnostic settings storage retention to Azure Storage lifecycle management**](../../azure-monitor/essentials/migrate-to-azure-storage-lifecycle-policy.md).
+
+9. Select **Save** to save the setting.
+
+10. Close the window to return to the Diagnostic settings pane.
+
+## Next steps
+
+- [Learn about other ways to access activity logs](howto-access-activity-logs.md)
+- [Manually download activity logs](howto-download-logs.md)
+- [Integrate activity logs with Azure Monitor logs](howto-integrate-activity-logs-with-azure-monitor-logs.md)
+- [Stream logs to an event hub](howto-stream-logs-to-event-hub.md)
active-directory Howto Configure Prerequisites For Reporting Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-configure-prerequisites-for-reporting-api.md
Title: Prerequisites for Azure Active Directory reporting API
-description: Learn about the prerequisites to access the Azure AD reporting API
+description: Learn how to configure the prerequisites that are required to access the Microsoft Graph reporting API.
Previously updated : 01/11/2023 Last updated : 08/24/2023

# Prerequisites to access the Azure Active Directory reporting API
-The Azure Active Directory (Azure AD) [reporting APIs](/graph/api/resources/azure-ad-auditlog-overview) provide you with programmatic access to the data through a set of REST APIs. You can call these APIs from many programming languages and tools. The reporting API uses [OAuth](../../api-management/api-management-howto-protect-backend-with-aad.md) to authorize access to the web APIs.
+The Azure Active Directory (Azure AD) [reporting APIs](/graph/api/resources/azure-ad-auditlog-overview) provide you with programmatic access to the data through a set of REST APIs. You can call these APIs from many programming languages and tools. The reporting API uses [OAuth](../../api-management/api-management-howto-protect-backend-with-aad.md) to authorize access to the web APIs. The Microsoft Graph API is **not** designed for pulling large amounts of activity data; doing so may lead to pagination and performance issues.
This article describes how to enable Microsoft Graph to access the Azure AD reporting APIs in the Azure portal and through PowerShell
To get access to the reporting data through the API, you need to have one of the
- Security Administrator - Global Administrator
-In order to access the sign-in reports for a tenant, an Azure AD tenant must have associated Azure AD Premium P1 or P2 license. Alternatively if the directory type is Azure AD B2C, the sign-in reports are accessible through the API without any additional license requirement.
+In order to access the sign-in reports for a tenant, an Azure AD tenant must have an associated Azure AD Premium P1 or P2 license. If the directory type is Azure AD B2C, the sign-in reports are accessible through the API without any other license requirement.
Registration is needed even if you're accessing the reporting API using a script. The registration gives you an **Application ID**, which is required for the authorization calls and enables your code to receive tokens. To configure your directory to access the Azure AD reporting API, you must sign in to the [Azure portal](https://portal.azure.com) in one of the required roles.
Registration is needed even if you're accessing the reporting API using a script
>

## Enable the Microsoft Graph API through the Azure portal
-To enable your application to access Microsoft Graph without user intervention, you'll need to register your application with Azure AD, then grant permissions to the Microsoft Graph API. This article covers the steps to follow in the Azure portal.
+To enable your application to access Microsoft Graph without user intervention, you need to register your application with Azure AD, then grant permissions to the Microsoft Graph API. This article covers the steps to follow in the Azure portal.
### Register an Azure AD application

[!INCLUDE [portal updates](~/articles/active-directory/includes/portal-update.md)]
-1. Sign in to the [Azure portal](https://portal.azure.com).
-
-1. Go to **Azure Active Directory** > **App registrations**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Security Reader](../roles/permissions-reference.md#security-reader).
+1. Browse to **Identity** > **Applications** > **App registrations**.
1. Select **New registration**.
To enable your application to access Microsoft Graph without user intervention,
To access the Azure AD reporting API, you must grant your app *Read directory data* and *Read all audit log data* permissions for the Microsoft Graph API.
-1. **Azure Active Directory** > **App Registrations**> **API permissions** and select **Add a permission**.
+1. Browse to **Identity** > **Applications** > **App Registrations**.
+1. Select **Add a permission**.
![Screenshot of the API permissions menu option and Add permissions button.](./media/howto-configure-prerequisites-for-reporting-api/api-permissions-new-permission.png)
To access the Azure AD reporting API, you must grant your app *Read directory da
Once you have the app registration configured, you can run activity log queries in Microsoft Graph.
-1. Sign in to https://graph.microsoft.com using the **Security Reader** role. You may need to confirm that you're signed into the appropriate role. Select your profile icon in the upper-right corner of Microsoft Graph.
+1. Sign in to https://graph.microsoft.com using the **Security Reader** role.
+ - You may need to confirm that you're signed into the appropriate role.
+ - Select your profile icon in the upper-right corner of Microsoft Graph.
1. Use one of the following queries to start using Microsoft Graph for accessing activity logs:
   - GET `https://graph.microsoft.com/v1.0/auditLogs/directoryAudits`
   - GET `https://graph.microsoft.com/v1.0/auditLogs/signIns`
Once you have the app registration configured, you can run activity log queries
## Access reports using Microsoft Graph PowerShell
-To use PowerShell to access the Azure AD reporting API, you'll need to gather a few configuration settings. These settings were created as a part of the [app registration process](#register-an-azure-ad-application).
+To use PowerShell to access the Azure AD reporting API, you need to gather a few configuration settings. These settings were created as a part of the [app registration process](#register-an-azure-ad-application).
- Tenant ID - Client app ID
To use PowerShell to access the Azure AD reporting API, you'll need to gather a
You need these values when configuring calls to the reporting API. We recommend using a certificate because it's more secure.
-1. Go to **Azure Active Directory** > **App Registrations**.
+1. Browse to **Identity** > **Applications** > **App Registrations**.
+1. Open the application you created.
1. Copy the **Directory (tenant) ID**. 1. Copy the **Application (client) ID**.
-1. Go to **App Registration** > Select your application > **Certificates & secrets** > **Certificates** > **Upload certificate** and upload your certificate's public key file.
+1. Browse to **Certificates & secrets** > **Certificates** > **Upload certificate** and upload your certificate's public key file.
- If you don't have a certificate to upload, follow the steps outlined in the [Create a self-signed certificate to authenticate your application](../develop/howto-create-self-signed-certificate.md) article.
-Next you'll authenticate with the configuration settings you just gathered. Open PowerShell and run the following command, replacing the placeholders with your information.
+Next you need to authenticate with the configuration settings you just gathered. Open PowerShell and run the following command, replacing the placeholders with your information.
```powershell
Connect-MgGraph -ClientID YOUR_APP_ID -TenantId YOUR_TENANT_ID -CertificateName YOUR_CERT_SUBJECT ## Or -CertificateThumbprint instead of -CertificateName
```
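After connecting, you can pull report data with the SDK cmdlets. A brief sketch, assuming the `Microsoft.Graph.Reports` module is installed:

```powershell
# Sketch: retrieve recent activity log entries once connected.
# Assumes the Microsoft.Graph.Reports module is installed.
Get-MgAuditLogDirectoryAudit -Top 10 | Select-Object ActivityDateTime, ActivityDisplayName
Get-MgAuditLogSignIn -Top 10 | Select-Object CreatedDateTime, UserPrincipalName, AppDisplayName
```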
active-directory Howto Customize Filter Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-customize-filter-logs.md
+
+ Title: Customize and filter the activity logs in Azure AD
+description: Learn how to customize the columns and filter of the Azure Active Directory activity logs so you can analyze the results.
+++++++ Last updated : 08/30/2023++++
+# How to customize and filter identity activity logs
+
+Sign-in logs are a commonly used tool to troubleshoot user access issues and investigate risky sign-in activity. Audit logs collect every logged event in Azure Active Directory (Azure AD) and can be used to investigate changes to your environment. There are over 30 columns you can choose from to customize your view of the sign-in logs in the Azure AD portal. Audit logs and Provisioning logs can also be customized and filtered for your needs.
+
+This article shows you how to customize the columns and then filter the logs to find the information you need more efficiently.
+
+## Prerequisites
+
+The required roles and licenses may vary based on the report. Global Administrators can access all reports, but we recommend using a role with least privilege access to align with the [Zero Trust guidance](/security/zero-trust/zero-trust-overview).
+
+| Log / Report | Roles | Licenses |
+|--|--|--|
+| Audit | Report Reader<br>Security Reader<br>Security Administrator<br>Global Reader | All editions of Azure AD |
+| Sign-ins | Report Reader<br>Security Reader<br>Security Administrator<br>Global Reader | All editions of Azure AD |
+| Provisioning | Same as audit and sign-ins, plus<br>Security Operator<br>Application Administrator<br>Cloud App Administrator<br>A custom role with `provisioningLogs` permission | Premium P1/P2 |
+| Conditional Access data in the sign-in logs | Company Administrator<br>Global Reader<br>Security Administrator<br>Security Reader<br>Conditional Access Administrator | Premium P1/P2 |
+
+## How to access the activity logs in the Azure portal
+
+You can always access your own sign-in history at [https://mysignins.microsoft.com](https://mysignins.microsoft.com). You can also access the sign-in logs from **Users** and **Enterprise applications** in Azure AD.
++
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Reports Reader](../roles/permissions-reference.md#reports-reader).
+1. Browse to **Identity** > **Monitoring & health** > **Audit logs**/**Sign-in logs**/**Provisioning logs**.
+
+## Audit logs
+
+With the information in the Azure AD audit logs, you can access all records of system activities for compliance purposes. Audit logs can be accessed from the **Monitoring and health** section of Azure AD, where you can sort and filter on every category and activity. You can also access audit logs in the area of the portal for the service you're investigating.
+
+![Screenshot of the audit logs option on the side menu.](media/howto-customize-filter-logs/audit-logs-navigation.png)
+
+For example, if you're looking into changes to Azure AD groups, you can access the Audit logs from **Azure AD** > **Groups**. When you access the audit logs from the service, the filter is automatically adjusted according to the service.
+
+![Screenshot of the audit logs option from the Groups menu.](media/howto-customize-filter-logs/audit-logs-groups.png)
+
+### Customize the layout of the audit logs
+
+Audit logs can be customized like the sign-in logs. There aren't as many column options, but it's just as important to make sure you're seeing the columns you need. The **Service**, **Category**, and **Activity** columns are related to each other, so these columns should always be visible.
+
+### Filter the audit logs
+
+When you filter the logs by **Service**, the **Category** and **Activity** details automatically change. In some cases, there may only be one Category or Activity. For a detailed table of all potential combinations of these details, see [Audit activities](reference-audit-activities.md).
++
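The same category filter can be applied through Microsoft Graph. The following is a minimal sketch, assuming the Microsoft Graph PowerShell SDK; `UserManagement` is one example category.

```powershell
# Sketch: filter audit log entries by category through Microsoft Graph.
# Assumes the Microsoft Graph PowerShell SDK; 'UserManagement' is an example category.
Connect-MgGraph -Scopes "AuditLog.Read.All"

Get-MgAuditLogDirectoryAudit -Filter "category eq 'UserManagement'" -Top 10 |
    Select-Object ActivityDateTime, ActivityDisplayName, Category
```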
+## Sign-in logs
+
+On the sign-in logs page, you can switch between four sign-in log types. For more information on the logs, see [What are Azure AD sign-in logs?](concept-sign-ins.md).
++
+- **Interactive user sign-ins:** Sign-ins where a user provides an authentication factor, such as a password, a response through an MFA app, a biometric factor, or a QR code.
+
+- **Non-interactive user sign-ins:** Sign-ins performed by a client on behalf of a user. These sign-ins don't require any interaction or authentication factor from the user. For example, authentication and authorization using refresh and access tokens that don't require a user to enter credentials.
+
+- **Service principal sign-ins:** Sign-ins by apps and service principals that don't involve any user. In these sign-ins, the app or service provides a credential on its own behalf to authenticate or access resources.
+
+- **Managed identities for Azure resources sign-ins:** Sign-ins by Azure resources that have secrets managed by Azure. For more information, see [What are managed identities for Azure resources?](../managed-identities-azure-resources/overview.md).
+
+### Customize the layout of the sign-in logs
+
+To view the sign-in logs more effectively, spend a few moments customizing the view for your needs. You can only customize the columns for the interactive user sign-in log. The sign-in logs have a default view, but you can customize the view using over 30 column options.
+
+1. Select **Columns** from the menu at the top of the log.
+1. Select the columns you want to view and select the **Save** button at the bottom of the window.
+
+![Screenshot of the sign-in logs page with the Columns option highlighted.](./media/howto-customize-filter-logs/sign-in-logs-columns.png)
+
+### Filter the sign-in logs <h3 id="filter-sign-in-activities"></h3>
+
+Filtering the sign-in logs is a helpful way to quickly find logs that match a specific scenario. For example, you could filter the list to only view sign-ins that occurred in a specific geographic location, from a specific operating system, or from a specific type of credential.
+
+Some filter options prompt you to select more options. Follow the prompts to make the selection you need for the filter. You can add multiple filters.
+
+1. Select the **Add filters** button, choose a filter option and select **Apply**.
+
+ ![Screenshot of the sign-in logs page with the Add filters option highlighted.](./media/howto-customize-filter-logs/sign-in-logs-add-filters.png)
+
+1. Either enter a specific detail - such as a Request ID - or select another filter option.
+
+ ![Screenshot of the filter options with a field to enter filter details open.](./media/howto-customize-filter-logs/sign-in-logs-filter-options.png)
+
+You can filter on several details. The following table describes some commonly used filters. *Not all filter options are described.*
+
+| Filter | Description |
+| | |
+| Request ID | Unique identifier for a sign-in request |
+| Correlation ID | Unique identifier for all sign-in requests that are part of a single sign-in attempt |
+| User | The *user principal name* (UPN) of the user |
+| Application | The application targeted by the sign-in request |
+| Status | Options are *Success*, *Failure*, and *Interrupted* |
+| Resource | The name of the service used for the sign-in |
+| IP address | The IP address of the client used for the sign-in |
+| Conditional Access | Options are *Not applied*, *Success*, and *Failure* |
+
+Now that your sign-in logs table is formatted for your needs, you can more effectively analyze the data. Further analysis and retention of sign-in data can be accomplished by exporting the logs to other tools.
+
+Customizing the columns and adjusting the filter helps to look at logs with similar characteristics. To look at the details of a sign-in, select a row in the table to open the **Activity Details** panel. There are several tabs in the panel to explore. For more information, see [Sign-in log activity details](concept-sign-in-log-activity-details.md).
+++
+### Client app filter
+
+When reviewing where a sign-in originated, you may need to use the **Client app** filter. Client app has two subcategories: **Modern authentication clients** and **Legacy authentication clients**. Modern authentication clients have two more subcategories: **Browser** and **Mobile apps and desktop clients**. There are several subcategories for Legacy authentication clients, which are defined in the [Legacy authentication client details](#legacy-authentication-client-details) table.
+
+![Screenshot of the client app filter selected, with the categories highlighted.](media/concept-sign-ins/client-app-filter.png)
+
+**Browser** sign-ins include all sign-in attempts from web browsers. When viewing the details of a sign-in from a browser, the **Basic info** tab shows **Client app: Browser**.
+
+![Screenshot of the sign-in details, with the client app detail highlighted.](media/concept-sign-ins/client-app-browser.png)
+
+On the **Device info** tab, **Browser** shows the details of the web browser. The browser type and version are listed, but in some cases the browser name and version aren't available. You may see something like **Rich Client 4.0.0.0**.
+
+![Screenshot of the sign-in activity details with a Rich Client browser example highlighted.](media/concept-sign-ins/browser-rich-client.png)
+
+#### Legacy authentication client details
+
+The following table provides the details for each of the *Legacy authentication client* options.
+
+|Name|Description|
+|||
+|Authenticated SMTP|Used by POP and IMAP clients to send email messages.|
+|Autodiscover|Used by Outlook and EAS clients to find and connect to mailboxes in Exchange Online.|
+|Exchange ActiveSync|Shows all sign-in attempts where the EAS protocol has been attempted, including attempts from users with client apps using Exchange ActiveSync to connect to Exchange Online.|
+|Exchange Online PowerShell|Used to connect to Exchange Online with remote PowerShell. If you block basic authentication for Exchange Online PowerShell, you need to use the Exchange Online PowerShell module to connect. For instructions, see [Connect to Exchange Online PowerShell using multi-factor authentication](/powershell/exchange/exchange-online/connect-to-exchange-online-powershell/mfa-connect-to-exchange-online-powershell).|
+|Exchange Web Services|A programming interface that's used by Outlook, Outlook for Mac, and third-party apps.|
+|IMAP4|A legacy mail client using IMAP to retrieve email.|
+|MAPI over HTTP|Used by Outlook 2010 and later.|
+|Offline Address Book|A copy of address list collections that are downloaded and used by Outlook.|
+|Outlook Anywhere (RPC over HTTP)|Used by Outlook 2016 and earlier.|
+|Outlook Service|Used by the Mail and Calendar app for Windows 10.|
+|POP3|A legacy mail client using POP3 to retrieve email.|
+|Reporting Web Services|Used to retrieve report data in Exchange Online.|
+|Other clients|Shows all sign-in attempts from users where the client app isn't included or unknown.|
+
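The client app detail is also filterable outside the portal. The following is a sketch using the Microsoft Graph PowerShell SDK; verify server-side filter support for `clientAppUsed` against the signIns API reference.

```powershell
# Sketch: find recent sign-ins from a legacy IMAP4 client.
# Assumes the Microsoft Graph PowerShell SDK; verify clientAppUsed filter support
# against the signIns API reference.
Connect-MgGraph -Scopes "AuditLog.Read.All"

Get-MgAuditLogSignIn -Filter "clientAppUsed eq 'IMAP4'" -Top 25 |
    Select-Object CreatedDateTime, UserPrincipalName, ClientAppUsed
```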
+## Next steps
+
+- [Analyze a sign-in error](quickstart-analyze-sign-in.md)
+- [Troubleshoot sign-in errors](howto-troubleshoot-sign-in-errors.md)
+- [Explore all audit log categories and activities](reference-audit-activities.md)
active-directory Howto Download Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-download-logs.md
- Title: How to download logs in Azure Active Directory
-description: Learn how to download activity logs in Azure Active Directory.
+description: Learn how to download audit, sign-in, and provisioning log data for storage in Azure Active Directory.
Previously updated : 02/16/2023 Last updated : 08/24/2023 --
-# How to: Download logs in Azure Active Directory
+# How to download logs in Azure Active Directory
The Azure Active Directory (Azure AD) portal gives you access to three types of activity logs:
Azure AD stores the data in these logs for a limited amount of time. As an IT ad
The option to download the data of an activity log is available in all editions of Azure AD. You can also download activity logs using Microsoft Graph; however, downloading logs programmatically requires a premium license.
-The following roles provide read access to audit logs. Always use the least privileged role, according to Microsoft [Zero Trust guidance](/security/zero-trust/zero-trust-overview).
-- Reports Reader
-- Security Reader
-- Security Administrator
-- Global Reader (sign-in logs only)
-- Global Administrator
+The required roles and licenses may vary based on the report. Global Administrators can access all reports, but we recommend using a role with least privilege access to align with the [Zero Trust guidance](/security/zero-trust/zero-trust-overview).
+
+| Log / Report | Roles | Licenses |
+|--|--|--|
+| Audit | Report Reader<br>Security Reader<br>Security Administrator<br>Global Reader | All editions of Azure AD |
+| Sign-ins | Report Reader<br>Security Reader<br>Security Administrator<br>Global Reader | All editions of Azure AD |
+| Provisioning | Same as audit and sign-ins, plus<br>Security Operator<br>Application Administrator<br>Cloud App Administrator<br>A custom role with `provisioningLogs` permission | Premium P1/P2 |
## Log download details
Azure AD stores activity logs for a specific period. For more information, see [
## How to download activity logs
-You can access the activity logs from the **Monitoring** section of Azure AD or from the **Users** page of Azure AD. If you view the audit logs from the **Users** page, the filter category will be set to **UserManagement**. Similarly, if you view the audit logs from the **Groups** page, the filter category will be set to **GroupManagement**. Regardless of how you access the activity logs, your download is based on the filter you've set.
+You can access the activity logs from the **Monitoring** section of Azure AD or from the **Users** page of Azure AD. If you view the audit logs from the **Users** page, the filter category is set to **UserManagement**. Similarly, if you view the audit logs from the **Groups** page, the filter category is set to **GroupManagement**. Regardless of how you access the activity logs, your download is based on the filter you've set.
-1. Navigate to the activity log you need to download.
-1. Adjust the filter for your needs.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Reports Reader](../roles/permissions-reference.md#reports-reader).
+1. Browse to **Identity** > **Monitoring & health** > **Audit logs**/**Sign-in logs**/**Provisioning logs**.
1. Select **Download**.
- - For audit and sign-in logs, a window appears where you'll select the download format (CSV or JSON).
- - For provisioning logs, you'll select the download format (CSV of JSON) from the Download button.
+ - For audit and sign-in logs, a window appears where you select the download format (CSV or JSON).
+ - For provisioning logs, you select the download format (CSV or JSON) from the Download button.
- You can change the File Name of the download.
- Select the **Download** button.
1. The download processes and sends the file to your default download location.
active-directory Howto Integrate Activity Logs With Arcsight https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-integrate-activity-logs-with-arcsight.md
- Title: Integrate logs with ArcSight using Azure Monitor
-description: Learn how to integrate Azure Active Directory logs with ArcSight using Azure Monitor
------- Previously updated : 10/31/2022------
-# Integrate Azure Active Directory logs with ArcSight using Azure Monitor
-
-[Micro Focus ArcSight](https://software.microfocus.com/products/siem-security-information-event-management/overview) is a security information and event management (SIEM) solution that helps you detect and respond to security threats in your platform. You can now route Azure Active Directory (Azure AD) logs to ArcSight using Azure Monitor using the ArcSight connector for Azure AD. This feature allows you to monitor your tenant for security compromise using ArcSight.
-
-In this article, you learn how to route Azure AD logs to ArcSight using Azure Monitor.
-
-## Prerequisites
-
-To use this feature, you need:
-* An Azure event hub that contains Azure AD activity logs. Learn how to [stream your activity logs to an event hub](./tutorial-azure-monitor-stream-logs-to-event-hub.md).
-* A configured instance of ArcSight Syslog NG Daemon SmartConnector (SmartConnector) or ArcSight Load Balancer. If the events are sent to ArcSight Load Balancer, they're sent to the SmartConnector by the Load Balancer.
-
-Download and open the [configuration guide for ArcSight SmartConnector for Azure Monitor Event Hubs](https://community.microfocus.com/t5/ArcSight-Connectors/SmartConnector-for-Microsoft-Azure-Monitor-Event-Hub/ta-p/1671292). This guide contains the steps you need to install and configure the ArcSight SmartConnector for Azure Monitor.
-
-## Integrate Azure AD logs with ArcSight
-
-1. First, complete the steps in the **Prerequisites** section of the configuration guide. This section includes the following steps:
- * Set user permissions in Azure, to ensure there's a user with the **owner** role to deploy and configure the connector.
- * Open ports on the server with Syslog NG Daemon SmartConnector, so it's accessible from Azure.
- * The deployment runs a Windows PowerShell script, so you must enable PowerShell to run scripts on the machine where you want to deploy the connector.
-
-2. Follow the steps in the **Deploying the Connector** section of configuration guide to deploy the connector. This section walks you through how to download and extract the connector, configure application properties and run the deployment script from the extracted folder.
-
-3. Use the steps in the **Verifying the Deployment in Azure** to make sure the connector is set up and functions correctly. Verify the following prerequisites:
- * The requisite Azure functions are created in your Azure subscription.
- * The Azure AD logs are streamed to the correct destination.
- * The application settings from your deployment are persisted in the Application Settings in Azure Function Apps.
- * A new resource group for ArcSight is created in Azure, with an Azure AD application for the ArcSight connector and storage accounts containing the mapped files in CEF format.
-
-4. Finally, complete the post-deployment steps in the **Post-Deployment Configurations** of the configuration guide. This section explains how to perform another configuration if you are on an App Service Plan to prevent the function apps from going idle after a timeout period, configure streaming of resource logs from the event hub, and update the SysLog NG Daemon SmartConnector keystore certificate to associate it with the newly created storage account.
-
-5. The configuration guide also explains how to customize the connector properties in Azure, and how to upgrade and uninstall the connector. There's also a section on performance improvements, including upgrading to an [Azure Consumption plan](https://azure.microsoft.com/pricing/details/functions) and configuring an ArcSight Load Balancer if the event load is greater than what a single Syslog NG Daemon SmartConnector can handle.
-
-## Next steps
-
-[Configuration guide for ArcSight SmartConnector for Azure Monitor Event Hubs](https://community.microfocus.com/t5/ArcSight-Connectors/SmartConnector-for-Microsoft-Azure-Monitor-Event-Hub/ta-p/1671292)
active-directory Howto Integrate Activity Logs With Azure Monitor Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-integrate-activity-logs-with-azure-monitor-logs.md
+
+ Title: Integrate Azure Active Directory logs with Azure Monitor logs
+description: Learn how to integrate Azure Active Directory logs with Azure Monitor logs for querying and analysis.
+++++++ Last updated : 08/08/2023++++
+# Integrate Azure AD logs with Azure Monitor logs
+
+Using **Diagnostic settings** in Azure Active Directory (Azure AD), you can integrate logs with Azure Monitor so your sign-in activity and the audit trail of changes within your tenant can be analyzed along with other Azure data.
+
+This article provides the steps to integrate Azure Active Directory (Azure AD) logs with Azure Monitor.
+
+Use the integration of Azure AD activity logs and Azure Monitor to perform the following tasks:
+
+- Compare your Azure AD sign-in logs against security logs published by Microsoft Defender for Cloud.
+- Troubleshoot performance bottlenecks on your application's sign-in page by correlating application performance data from Azure Application Insights.
+- Analyze the Identity Protection risky users and risk detections logs to detect threats in your environment.
+- Identify sign-ins from applications still using the Active Directory Authentication Library (ADAL) for authentication. [Learn about the ADAL end-of-support plan.](../develop/msal-migration.md)
+
+> [!NOTE]
+> Integrating Azure Active Directory logs with Azure Monitor automatically enables the Azure Active Directory data connector within Microsoft Sentinel.
+
+## How do I access it?
+
+To use this feature, you need:
+
+* An Azure subscription. If you don't have an Azure subscription, you can [sign up for a free trial](https://azure.microsoft.com/free/).
+* An Azure AD Premium P1 or P2 tenant.
+* **Global Administrator** or **Security Administrator** access for the Azure AD tenant.
+* A **Log Analytics workspace** in your Azure subscription. Learn how to [create a Log Analytics workspace](../../azure-monitor/logs/quick-create-workspace.md).
+* Permission to access data in a Log Analytics workspace. See [Manage access to log data and workspaces in Azure Monitor](../../azure-monitor/logs/manage-access.md) for information on the different permission options and how to configure permissions.
+
+## Create a Log Analytics workspace
+
+A Log Analytics workspace allows you to collect data based on a variety of requirements, such as geographic location of the data, subscription boundaries, or access to resources. Learn how to [create a Log Analytics workspace](../../azure-monitor/logs/quick-create-workspace.md).
+
+Looking for how to set up a Log Analytics workspace for Azure resources outside of Azure AD? Check out the [Collect and view resource logs for Azure Monitor](../../azure-monitor/essentials/diagnostic-settings.md) article.
+
+## Send logs to Azure Monitor
+
+Follow the steps below to send logs from Azure Active Directory to Azure Monitor logs.
++
+6. Under **Destination Details** select the **Send to Log Analytics workspace** check box.
+
+7. Select the appropriate **Subscription** and **Log Analytics workspace** from the menus.
+
+8. Select the **Save** button.
+
+ ![Screenshot of the Diagnostics settings with some destination details shown.](./media/howto-integrate-activity-logs-with-azure-monitor-logs/diagnostic-settings-log-analytics-workspace.png)
+
+If you do not see logs appearing in the selected destination after 15 minutes, sign out and back into Azure to refresh the logs.
+
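One quick way to confirm the integration is working is to check the newest record in each table. A minimal sketch, assuming the `Az.OperationalInsights` module with `<workspace-id>` as a placeholder:

```powershell
# Sketch: confirm data is arriving by checking the newest record per table.
# Assumes Az.OperationalInsights is installed; <workspace-id> is a placeholder.
$kql = "union AuditLogs, SigninLogs | summarize NewestRecord = max(TimeGenerated) by Type"
(Invoke-AzOperationalInsightsQuery -WorkspaceId "<workspace-id>" -Query $kql).Results
```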
+## Next steps
+
+* [Analyze Azure AD activity logs with Azure Monitor logs](howto-analyze-activity-logs-log-analytics.md)
+* [Learn about the data sources you can analyze with Azure Monitor](../../azure-monitor/data-sources.md)
+* [Automate creating diagnostic settings with Azure Policy](../../azure-monitor/essentials/diagnostic-settings-policy.md)
active-directory Howto Integrate Activity Logs With Log Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-integrate-activity-logs-with-log-analytics.md
- Title: Integrate Azure Active Directory logs with Azure Monitor | Microsoft Docs
-description: Learn how to integrate Azure Active Directory logs with Azure Monitor
------- Previously updated : 06/26/2023-----
-# How to integrate Azure AD logs with Azure Monitor logs
-
-Using **Diagnostic settings** in Azure Active Directory (Azure AD), you can integrate logs with Azure Monitor so sign-in activity and the audit trail of changes within your tenant can be analyzed along with other Azure data. Integrating Azure AD logs with Azure Monitor logs enables rich visualizations, monitoring, and alerting on the connected data.
-
-This article provides the steps to integrate Azure Active Directory (Azure AD) logs with Azure Monitor Logs.
-
-## Roles and licenses
-
-To integrate Azure AD logs with Azure Monitor, you need the following roles and licenses:
-
-* **An Azure subscription:** If you don't have an Azure subscription, you can [sign up for a free trial](https://azure.microsoft.com/free/).
-
-* **An Azure AD Premium P1 or P2 tenant:** You can find the license type of your tenant on the [Overview](https://portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/Overview) page in Azure AD.
-
-* **Security Administrator access for the Azure AD tenant:** This role is required to set up the Diagnostics settings.
-
-* **Permission to access data in a Log Analytics workspace:** See [Manage access to log data and workspaces in Azure Monitor](../../azure-monitor/logs/manage-access.md) for information on the different permission options and how to configure permissions.
-
-## Integrate logs with Azure Monitor logs
-
-To send Azure AD logs to Azure Monitor Logs you must first have a [Log Analytics workspace](../../azure-monitor/logs/log-analytics-overview.md). Then you can set up the Diagnostics settings in Azure AD to send your activity logs to that workspace.
-
-### Create a Log Analytics workspace
-
-A Log Analytics workspace allows you to collect data based on a variety of requirements, such as geographic location of the data, subscription boundaries, or access to resources. Learn how to [create a Log Analytics workspace](../../azure-monitor/logs/quick-create-workspace.md).
-
-Looking for how to set up a Log Analytics workspace for Azure resources outside of Azure AD? Check out the [Collect and view resource logs for Azure Monitor](../../azure-monitor/essentials/diagnostic-settings.md) article.
-
-### Set up Diagnostics settings
-
-Once you have a Log Analytics workspace created, follow the steps below to send logs from Azure Active Directory to that workspace.
--
-Follow the steps below to send logs from Azure Active Directory to Azure Monitor. Looking for how to set up Log Analytics workspace for Azure resources outside of Azure AD? Check out the [Collect and view resource logs for Azure Monitor](../../azure-monitor/essentials/diagnostic-settings.md) article.
-
-1. Sign in to the [Azure portal](https://portal.azure.com) as a **Security Administrator**.
-
-1. Go to **Azure Active Directory** > **Diagnostic settings**. You can also select **Export Settings** from the Audit logs or Sign-in logs.
-
-1. Select **+ Add diagnostic setting** to create a new integration or select **Edit setting** to change an existing integration.
-
-1. Enter a **Diagnostic setting name**. If you're editing an existing integration, you can't change the name.
-
-1. Any or all of the following logs can be sent to the Log Analytics workspace. Some logs may be in public preview but still visible in the portal.
- * `AuditLogs`
- * `SignInLogs`
- * `NonInteractiveUserSignInLogs`
- * `ServicePrincipalSignInLogs`
- * `ManagedIdentitySignInLogs`
- * `ProvisioningLogs`
- * `ADFSSignInLogs` Active Directory Federation Services (ADFS)
- * `RiskyServicePrincipals`
- * `RiskyUsers`
- * `ServicePrincipalRiskEvents`
- * `UserRiskEvents`
-
-1. The following logs are in preview but still visible in Azure AD. At this time, selecting these options will not add new logs to your workspace unless your organization was included in the preview.
- * `EnrichedOffice365AuditLogs`
- * `MicrosoftGraphActivityLogs`
- * `NetworkAccessTrafficLogs`
-
-1. In the **Destination details**, select **Send to Log Analytics workspace** and choose the appropriate details from the menus that appear.
- * You can also send logs to any or all of the following destinations. Additional fields appear, depending on your selection.
- * **Archive to a storage account:** Provide the number of days you'd like to retain the data in the **Retention days** boxes that appear next to the log categories. Select the appropriate details from the menus that appear.
- * **Stream to an event hub:** Select the appropriate details from the menus that appear.
- * **Send to partner solution:** Select the appropriate details from the menus that appear.
-
-1. Select **Save** to save the setting.
-
- ![Screenshot of the Diagnostics settings with some destination details shown.](./media/howto-integrate-activity-logs-with-log-analytics/Configure.png)
-
-If you do not see logs appearing in the selected destination after 15 minutes, sign out and back into Azure to refresh the logs.
-
-> [!NOTE]
-> Integrating Azure Active Directory logs with Azure Monitor will automatically enable the Azure Active Directory data connector within Microsoft Sentinel.
-
-## Next steps
-
-* [Analyze Azure AD activity logs with Azure Monitor logs](howto-analyze-activity-logs-log-analytics.md)
-* [Learn about the data sources you can analyze with Azure Monitor](../../azure-monitor/data-sources.md)
-* [Automate creating diagnostic settings with Azure Policy](../../azure-monitor/essentials/diagnostic-settings-policy.md)
active-directory Howto Integrate Activity Logs With Splunk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-integrate-activity-logs-with-splunk.md
- Title: Integrate Splunk using Azure Monitor
-description: Learn how to integrate Azure Active Directory logs with Splunk using Azure Monitor.
------- Previously updated : 10/31/2022------
-# How to: Integrate Azure Active Directory logs with Splunk using Azure Monitor
-
-In this article, you learn how to integrate Azure Active Directory (Azure AD) logs with Splunk by using Azure Monitor. You first route the logs to an Azure event hub, and then you integrate the event hub with Splunk.
-
-## Prerequisites
-
-To use this feature, you need:
--- An Azure event hub that contains Azure AD activity logs. Learn how to [stream your activity logs to an event hub](./tutorial-azure-monitor-stream-logs-to-event-hub.md). --- The [Splunk Add-on for Microsoft Cloud Services](https://splunkbase.splunk.com/app/3110/#/details). -
-## Integrate Azure Active Directory logs
-
-1. Open your Splunk instance, and select **Data Summary**.
-
- ![The "Data Summary" button](./media/howto-integrate-activity-logs-with-splunk/DataSummary.png)
-
-2. Select the **Sourcetypes** tab, and then select **mscs:azure:eventhub**
-
- ![The Data Summary Sourcetypes tab](./media/howto-integrate-activity-logs-with-splunk/source-eventhub.png)
-
-Append **body.records.category=AuditLogs** to the search. The Azure AD activity logs are shown in the following figure:
-
- ![Activity logs](./media/howto-integrate-activity-logs-with-splunk/activity-logs.png)
-
-> [!NOTE]
-> If you cannot install an add-on in your Splunk instance (for example, if you're using a proxy or running on Splunk Cloud), you can forward these events to the Splunk HTTP Event Collector. To do so, use this [Azure function](https://github.com/splunk/azure-functions-splunk), which is triggered by new messages in the event hub.
->
-
-## Next steps
-
-* [Interpret audit logs schema in Azure Monitor](./overview-reports.md)
-* [Interpret sign-in logs schema in Azure Monitor](reference-azure-monitor-sign-ins-log-schema.md)
active-directory Howto Integrate Activity Logs With Sumologic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-integrate-activity-logs-with-sumologic.md
- Title: Stream logs to SumoLogic using Azure Monitor
-description: Learn how to integrate Azure Active Directory logs with SumoLogic using Azure Monitor.
------- Previously updated : 10/31/2022------
-# Integrate Azure Active Directory logs with SumoLogic using Azure Monitor
-
-In this article, you learn how to integrate Azure Active Directory (Azure AD) logs with SumoLogic using Azure Monitor. You first route the logs to an Azure event hub, and then you integrate the event hub with SumoLogic.
-
-## Prerequisites
-
-To use this feature, you need:
-* An Azure event hub that contains Azure AD activity logs. Learn how to [stream your activity logs to an event hub](./tutorial-azure-monitor-stream-logs-to-event-hub.md).
-* A SumoLogic single sign-on enabled subscription.
-
-## Steps to integrate Azure AD logs with SumoLogic
-
-1. First, [stream the Azure AD logs to an Azure event hub](./tutorial-azure-monitor-stream-logs-to-event-hub.md).
-2. Configure your SumoLogic instance to [collect logs for Azure Active Directory](https://help.sumologic.com/docs/integrations/microsoft-azure/active-directory-azure#collecting-logs-for-azure-active-directory).
-3. [Install the Azure AD SumoLogic app](https://help.sumologic.com/docs/integrations/microsoft-azure/active-directory-azure#viewing-azure-active-directory-dashboards) to use the pre-configured dashboards that provide real-time analysis of your environment.
-
- ![Dashboard](./media/howto-integrate-activity-logs-with-sumologic/overview-dashboard.png)
-
-## Next steps
-
-* [Interpret audit logs schema in Azure Monitor](./overview-reports.md)
-* [Interpret sign-in logs schema in Azure Monitor](reference-azure-monitor-sign-ins-log-schema.md)
active-directory Howto Manage Inactive User Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-manage-inactive-user-accounts.md
Title: How to manage inactive user accounts
-description: Learn how to detect and resolve user accounts that have become obsolete
+description: Learn how to detect and resolve Azure Active Directory user accounts that have become inactive or obsolete.
+ Previously updated : 05/02/2023 Last updated : 08/24/2023 -- # How To: Manage inactive user accounts
The following details relate to the `lastSignInDateTime` property.
If you need to view the latest sign-in activity for a user, you can view the user's sign-in details in Azure AD. You can also use the Microsoft Graph **users by name** scenario described in the [previous section](#detect-inactive-user-accounts-with-microsoft-graph).
-1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Go to **Azure AD** > **Users** > select a user from the list.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Reports Reader](../roles/permissions-reference.md#reports-reader).
+1. Browse to **Identity** > **Users** > **All users**.
+1. Select a user from the list.
1. In the **My Feed** area of the user's Overview, locate the **Sign-ins** tile. ![Screenshot of the user overview page with the sign-in activity tile highlighted.](media/howto-manage-inactive-user-accounts/last-sign-activity-tile.png)
active-directory Howto Stream Logs To Event Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-stream-logs-to-event-hub.md
+
+ Title: Stream Azure Active Directory logs to an event hub
+description: Learn how to stream Azure Active Directory activity logs to an event hub for SIEM tool integration and analysis.
+++++++ Last updated : 08/24/2023+++
+# How to stream activity logs to an event hub
+
+Your Azure Active Directory (Azure AD) tenant produces large amounts of data every second. Sign-in activity and logs of changes made in your tenant add up to a lot of data that can be hard to analyze. Integrating with Security Information and Event Management (SIEM) tools can help you gain insights into your environment.
+
+This article shows how you can stream your logs to an event hub and integrate them with one of several SIEM tools.
+
+## Prerequisites
+
+To stream logs to a SIEM tool, you first need to create an **Azure event hub**.
+
+Once you have an event hub that contains Azure AD activity logs, you can set up the SIEM tool integration using the **Azure AD Diagnostics Settings**.
+
+## Stream logs to an event hub
++
+6. Select the **Stream to an event hub** check box.
+
+7. Select the Azure subscription, Event Hubs namespace, and optional event hub where you want to route the logs.
+
+The subscription and Event Hubs namespace must both be associated with the Azure AD tenant from which you're streaming the logs.
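The portal steps above can also be scripted. The following Python sketch is an assumption-heavy illustration, not part of the official procedure: it calls the tenant-level `microsoft.aadiam` diagnostic settings ARM endpoint directly, and every resource ID shown is a placeholder:

```python
import requests
from azure.identity import DefaultAzureCredential

# The signed-in identity needs rights to manage tenant-level diagnostic settings.
token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token

# Assumption: tenant-level Azure AD diagnostic settings live under the
# microsoft.aadiam provider, api-version 2017-04-01.
url = ("https://management.azure.com/providers/microsoft.aadiam"
       "/diagnosticSettings/SendToEventHub?api-version=2017-04-01")

body = {
    "properties": {
        # Placeholder resource IDs -- substitute your own namespace and rule.
        "eventHubAuthorizationRuleId": (
            "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.EventHub"
            "/namespaces/<namespace>/authorizationRules/RootManageSharedAccessKey"),
        "logs": [
            {"category": "AuditLogs", "enabled": True},
            {"category": "SignInLogs", "enabled": True},
        ],
    }
}

resp = requests.put(url, json=body, headers={"Authorization": f"Bearer {token}"})
resp.raise_for_status()
```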
+
+Once you have the Azure event hub ready, navigate to the SIEM tool you want to integrate with the activity logs. You'll finish the process in the SIEM tool.
+
+We currently support Splunk, SumoLogic, and ArcSight. Select a tab below to get started, and refer to the tool's documentation for full configuration details.
+
+# [Splunk](#tab/splunk)
+
+To use this feature, you need the [Splunk Add-on for Microsoft Cloud Services](https://splunkbase.splunk.com/app/3110/#/details).
+
+### Integrate Azure AD logs with Splunk
+
+1. Open your Splunk instance and select **Data Summary**.
+
+ ![The "Data Summary" button](./media/howto-stream-logs-to-event-hub/datasummary.png)
+
+1. Select the **Sourcetypes** tab, and then select **mscs:azure:eventhub**.
+
+ ![The Data Summary Sourcetypes tab](./media/howto-stream-logs-to-event-hub/source-eventhub.png)
+
+Append **body.records.category=AuditLogs** to the search. The Azure AD activity logs are shown in the following figure:
+
+ ![Activity logs](./media/howto-stream-logs-to-event-hub/activity-logs.png)
+
+If you cannot install an add-on in your Splunk instance (for example, if you're using a proxy or running on Splunk Cloud), you can forward these events to the Splunk HTTP Event Collector. To do so, use this [Azure function](https://github.com/splunk/azure-functions-splunk), which is triggered by new messages in the event hub.
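If you take the HTTP Event Collector route, the linked Azure function is the supported path. Purely as an illustration, a minimal standalone forwarder might look like the following Python sketch, where the connection string, host, and token are placeholders and the `mscs:azure:eventhub` sourcetype mirrors the one used above:

```python
import json

import requests
from azure.eventhub import EventHubConsumerClient

# Placeholder values -- substitute your own event hub and HEC details.
EVENT_HUB_CONN_STR = "<event-hub-namespace-connection-string>"
EVENT_HUB_NAME = "<event-hub-name>"
HEC_URL = "https://<splunk-host>:8088/services/collector/event"
HEC_TOKEN = "<hec-token>"

def on_event(partition_context, event):
    # Azure AD log batches arrive as JSON with a top-level "records" array.
    payload = json.loads(event.body_as_str())
    for record in payload.get("records", []):
        requests.post(
            HEC_URL,
            headers={"Authorization": f"Splunk {HEC_TOKEN}"},
            json={"event": record, "sourcetype": "mscs:azure:eventhub"},
        ).raise_for_status()

client = EventHubConsumerClient.from_connection_string(
    EVENT_HUB_CONN_STR, consumer_group="$Default", eventhub_name=EVENT_HUB_NAME)
with client:
    client.receive(on_event=on_event, starting_position="-1")
```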
+
+# [SumoLogic](#tab/SumoLogic)
+
+To use this feature, you need a SumoLogic single sign-on enabled subscription.
+
+### Integrate Azure AD logs with SumoLogic
+
+1. Configure your SumoLogic instance to [collect logs for Azure Active Directory](https://help.sumologic.com/docs/integrations/microsoft-azure/active-directory-azure#collecting-logs-for-azure-active-directory).
+
+1. [Install the Azure AD SumoLogic app](https://help.sumologic.com/docs/integrations/microsoft-azure/active-directory-azure#viewing-azure-active-directory-dashboards) to use the pre-configured dashboards that provide real-time analysis of your environment.
+
+ ![Dashboard](./media/howto-stream-logs-to-event-hub/overview-dashboard.png)
+
+# [ArcSight](#tab/ArcSight)
+
+To use this feature, you need a configured instance of ArcSight Syslog NG Daemon SmartConnector (SmartConnector) or ArcSight Load Balancer. If the events are sent to ArcSight Load Balancer, they're sent to the SmartConnector by the Load Balancer.
+
+Download and open the [configuration guide for ArcSight SmartConnector for Azure Monitor Event Hubs](https://software.microfocus.com/products/siem-security-information-event-management/overview). This guide contains the steps you need to install and configure the ArcSight SmartConnector for Azure Monitor.
+
+### Integrate Azure AD logs with ArcSight
+
+1. Complete the steps in the **Prerequisites** section of the ArcSight configuration guide. This section includes the following steps:
+ * Set user permissions in Azure to ensure there's a user with the **owner** role to deploy and configure the connector.
+ * Open ports on the server with Syslog NG Daemon SmartConnector so it's accessible from Azure.
+ * The deployment runs a Windows PowerShell script, so you must enable PowerShell to run scripts on the machine where you want to deploy the connector.
+
+1. Follow the steps in the **Deploying the Connector** section of the ArcSight configuration guide to deploy the connector. This section walks you through how to download and extract the connector, configure application properties, and run the deployment script from the extracted folder.
+
+1. Use the steps in the **Verifying the Deployment in Azure** section to make sure the connector is set up and functions correctly. Verify the following:
+ * The requisite Azure functions are created in your Azure subscription.
+ * The Azure AD logs are streamed to the correct destination.
+ * The application settings from your deployment are persisted in the Application Settings in Azure Function Apps.
+ * A new resource group for ArcSight is created in Azure, with an Azure AD application for the ArcSight connector and storage accounts containing the mapped files in CEF format.
+
+1. Complete the post-deployment steps in the **Post-Deployment Configurations** section of the ArcSight configuration guide. This section explains how to perform additional configuration if you're on an App Service Plan to prevent the function apps from going idle after a timeout period, how to configure streaming of resource logs from the event hub, and how to update the SysLog NG Daemon SmartConnector keystore certificate to associate it with the newly created storage account.
+
+1. The configuration guide also explains how to customize the connector properties in Azure, and how to upgrade and uninstall the connector. There's also a section on performance improvements, including upgrading to an [Azure Consumption plan](https://azure.microsoft.com/pricing/details/functions) and configuring an ArcSight Load Balancer if the event load is greater than what a single Syslog NG Daemon SmartConnector can handle.
+++
+## Activity log integration options and considerations
+
+If your current SIEM isn't supported in Azure Monitor diagnostics yet, you can set up **custom tooling** by using the Event Hubs API. To learn more, see [Getting started receiving messages from an event hub](../../event-hubs/event-hubs-dotnet-standard-getstarted-send.md).
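As a rough starting point for such custom tooling, the following Python sketch (placeholder connection details, using the `azure-eventhub` package) consumes events, unwraps the `records` array that Azure AD log batches arrive in, and tallies events by category:

```python
import json
from collections import Counter

from azure.eventhub import EventHubConsumerClient

counts = Counter()

def on_event(partition_context, event):
    for record in json.loads(event.body_as_str()).get("records", []):
        # Route each record into your own pipeline here; this sketch just tallies.
        counts[record.get("category", "unknown")] += 1
    print(dict(counts))

client = EventHubConsumerClient.from_connection_string(
    "<event-hub-namespace-connection-string>",  # placeholder
    consumer_group="$Default",
    eventhub_name="<event-hub-name>")           # placeholder
with client:
    client.receive(on_event=on_event, starting_position="-1")
```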
+
+**IBM QRadar** is another option for integrating with Azure AD activity logs. The DSM and Azure Event Hubs Protocol are available for download at [IBM support](https://www.ibm.com/support). For more information about integration with Azure, go to the [IBM QRadar Security Intelligence Platform 7.3.0](https://www.ibm.com/support/knowledgecenter/SS42VS_DSM/c_dsm_guide_microsoft_azure_overview.html?cp=SS42VS_7.3.0) site.
+
+Some sign-in categories contain large amounts of log data, depending on your tenant's configuration. In general, the non-interactive user sign-ins and service principal sign-ins can be 5 to 10 times larger than the interactive user sign-ins.
+
+## Next steps
+
+- [Analyze Azure AD activity logs with Azure Monitor logs](howto-analyze-activity-logs-log-analytics.md)
+- [Use Microsoft Graph to access Azure AD activity logs](quickstart-access-log-with-graph-api.md)
active-directory Howto Troubleshoot Sign In Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-troubleshoot-sign-in-errors.md
Previously updated : 06/19/2023 Last updated : 08/24/2023 - # How to: Troubleshoot sign-in errors using Azure Active Directory reports
You need:
[!INCLUDE [portal updates](~/articles/active-directory/includes/portal-update.md)]
-1. Sign in to the [Azure portal](https://portal.azure.com) using a role of least privilege access.
-1. Go to **Azure AD** > **Sign-ins**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Reports Reader](../roles/permissions-reference.md#reports-reader).
+1. Browse to **Identity** > **Monitoring & health** > **Sign-in logs**.
1. Use the filters to narrow down the results - Search by username if you're troubleshooting a specific user. - Search by application if you're troubleshooting issues with a specific app.
active-directory Howto Use Azure Monitor Workbooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-use-azure-monitor-workbooks.md
- Title: Azure Monitor workbooks for Azure Active Directory
-description: Learn how to use Azure Monitor workbooks for Azure Active Directory reports.
------- Previously updated : 07/28/2023---
-# How to use Azure Active Directory Workbooks
-
-Workbooks are found in Azure AD and in Azure Monitor. The concepts, processes, and best practices are the same for both types of workbooks, however, workbooks for Azure Active Directory (AD) cover only those identity management scenarios that are associated with Azure AD.
-
-When using workbooks, you can either start with an empty workbook, or use an existing template. Workbook templates enable you to quickly get started using workbooks without needing to build from scratch.
--- **Public templates** published to a [gallery](../../azure-monitor/visualize/workbooks-overview.md#the-gallery) are a good starting point when you're just getting started with workbooks.-- **Private templates** are helpful when you start building your own workbooks and want to save one as a template to serve as the foundation for multiple workbooks in your tenant.-
-## Prerequisites
-
-To use Azure Workbooks for Azure AD, you need:
--- An Azure AD tenant with a [Premium P1 license](../fundamentals/get-started-premium.md)-- A Log Analytics workspace *and* access to that workspace-- The appropriate roles for Azure Monitor *and* Azure AD-
-### Log Analytics workspace
-
-You must create a [Log Analytics workspace](../../azure-monitor/logs/quick-create-workspace.md) *before* you can use Azure AD Workbooks. There are a combination of factors that determine access to Log Analytics workspaces. You need the right roles for the workspace *and* the resources sending the data.
-
-For more information, see [Manage access to Log Analytics workspaces](../../azure-monitor/logs/manage-access.md).
-
-### Azure Monitor roles
-
-Azure Monitor provides [two built-in roles](../../azure-monitor/roles-permissions-security.md#monitoring-reader) for viewing monitoring data and editing monitoring settings. Azure role-based access control (RBAC) also provides two Log Analytics built-in roles that grant similar access.
--- **View**:
- - Monitoring Reader
- - Log Analytics Reader
--- **View and modify settings**:
- - Monitoring Contributor
- - Log Analytics Contributor
-
-For more information on the Azure Monitor built-in roles, see [Roles, permissions, and security in Azure Monitor](../../azure-monitor/roles-permissions-security.md#monitoring-reader).
-
-For more information on the Log Analytics RBAC roles, see [Azure built-in roles](../../role-based-access-control/built-in-roles.md#log-analytics-contributor)
-
-### Azure AD roles
-
-Read only access allows you to view Azure AD log data inside a workbook, query data from Log Analytics, or read logs in the Azure AD portal. Update access adds the ability to create and edit diagnostic settings to send Azure AD data to a Log Analytics workspace.
--- **Read**:
- - Reports Reader
- - Security Reader
- - Global Reader
--- **Update**:
- - Security Administrator
-
-For more information on Azure AD built-in roles, see [Azure AD built-in roles](../roles/permissions-reference.md).
-
-## How to access Azure Workbooks for Azure AD
--
-1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Navigate to **Azure Active Directory** > **Monitoring** > **Workbooks**.
- - **Workbooks**: All workbooks created in your tenant
- - **Public Templates**: Prebuilt workbooks for common or high priority scenarios
- - **My Templates**: Templates you've created
-1. Select a report or template from the list. Workbooks may take a few moments to populate.
- - Search for a template by name.
- - Select the **Browse across galleries** to view templates that aren't specific to Azure AD.
-
- ![Find the Azure Monitor workbooks in Azure AD](./media/howto-use-azure-monitor-workbooks/azure-monitor-workbooks-in-azure-ad.png)
-
-## Create a new workbook
-
-Workbooks can be created from scratch or from a template. When creating a new workbook, you can add elements as you go or use the **Advanced Editor** option to paste in the JSON representation of a workbook, copied from the [workbooks GitHub repository](https://github.com/Microsoft/Application-Insights-Workbooks/blob/master/schema/workbook.json).
-
-**To create a new workbook from scratch**:
-1. Navigate to **Azure AD** > **Monitoring** > **Workbooks**.
-1. Select **+ New**.
-1. Select an element from the **+ Add** menu.
-
- For more information on the available elements, see [Creating an Azure Workbook](../../azure-monitor/visualize/workbooks-create-workbook.md).
-
- ![Screenshot of the Azure Workbooks +Add menu options.](./media/howto-use-azure-monitor-workbooks/create-new-workbook-elements.png)
-
-**To create a new workbook from a template**:
-1. Navigate to **Azure AD** > **Monitoring** > **Workbooks**.
-1. Select a workbook template from the Gallery.
-1. Select **Edit** from the top of the page.
- - Each element of the workbook has its own **Edit** button.
- - For more information on editing workbook elements, see [Azure Workbooks Templates](../../azure-monitor/visualize/workbooks-templates.md)
-
-1. Select the **Edit** button for any element. Make your changes and select **Done editing**.
- ![Screenshot of a workbook in Edit mode, with the Edit and Done Editing buttons highlighted.](./media/howto-use-azure-monitor-workbooks/edit-buttons.png)
-1. When you're done editing the workbook, select the **Save As** to save your workbook with a new name.
-1. In the **Save As** window:
- - Provide a **Title**, **Subscription**, **Resource Group** (you must have the ability to save a workbook for the selected Resource Group), and **Location**.
- - Optionally choose to save your workbook content to an [Azure Storage Account](../../azure-monitor/visualize/workbooks-bring-your-own-storage.md).
-1. Select the **Apply** button.
-
-## Next steps
-
-* [Create interactive reports by using Monitor workbooks](../../azure-monitor/visualize/workbooks-overview.md).
-* [Create custom Azure Monitor queries using Azure PowerShell](../governance/entitlement-management-logs-and-reporting.md).
active-directory Howto Use Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-use-recommendations.md
Title: How to use Azure Active Directory recommendations
-description: Learn how to use Azure Active Directory recommendations.
+description: Learn how to use Azure Active Directory recommendations to monitor and improve the health of your tenant.
Previously updated : 07/14/2023 Last updated : 08/24/2023
-# How to: Use Azure AD recommendations
+# How to use Azure Active Directory Recommendations
The Azure Active Directory (Azure AD) recommendations feature provides you with personalized insights with actionable guidance to:
Some recommendations may require a P2 or other license. For more information, se
To view the details of a recommendation:
-1. Sign in to Azure using the appropriate least-privilege role.
-1. Go to **Azure AD** > **Recommendations** and select a recommendation from the list.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Reports Reader](../roles/permissions-reference.md#reports-reader).
+1. Browse to **Identity** > **Overview**, and then select the **Recommendations** tab.
+1. Select a recommendation from the list.
![Screenshot of the list of recommendations.](./media/howto-use-recommendations/recommendations-list.png)
active-directory Howto Use Sign In Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-use-sign-in-diagnostics.md
- Title: How to use the Sign-in diagnostic
-description: Information on how to use the Sign-in diagnostic in Azure Active Directory.
+ Title: How to use Azure Active Directory Sign-in diagnostics
+description: How to use the Sign-in diagnostic tool in Azure Active Directory to troubleshoot sign-in related scenarios.
Previously updated : 06/19/2023 Last updated : 08/24/2023 - # Customer intent: As an Azure AD administrator, I want a tool that gives me the right level of insights into the sign-in activities in my system so that I can easily diagnose and solve problems when they occur.
Determining the reason for a failed sign-in can quickly become a challenging task.
This article gives you an overview of what the Sign-in diagnostic is and how you can use it to troubleshoot sign-in related errors.
-## How it works
+## Prerequisites
+
+To use the Sign-in diagnostic:
+- You must be signed in as at least a **Global Reader**.
+- With the correct access level, you can start the Sign-in diagnostic from more than one place.
+- Flagged sign-in events can also be reviewed from the Sign-in diagnostic.
+ - Flagged sign-in events are captured *after* a user has enabled flagging during their sign-in experience.
+ - For more information, see [flagged sign-ins](overview-flagged-sign-ins.md).
+
+## How does it work?
In Azure AD, sign-in attempts are controlled by:
Due to the greater flexibility of the system to respond to a sign-in attempt, yo
- Displaying information about what happened. - Providing recommendations to resolve problems.
-## How to access it
-
-To use the Sign-in diagnostic, you must be signed into the tenant as a **Global Reader** or **Global Administrator**. With the correct access level, you can start the Sign-in diagnostic from more than one place.
-
-Flagged sign-in events can also be reviewed from the Sign-in diagnostic. Flagged sign-in events are captured *after* a user has enabled flagging during their sign-in experience. For more information, see [flagged sign-ins](overview-flagged-sign-ins.md).
- ### From Diagnose and Solve Problems You can start the Sign-in diagnostic from the **Diagnose and Solve Problems** area of Azure AD. From Diagnose and Solve Problems you can review any flagged sign-in events or search for a specific sign-in event. You can also start this process from the Conditional Access Diagnose and Solve Problems area. **To search for sign-in events**:
-1. Go to **Azure AD** or **Azure AD Conditional Access** > **Diagnose and Solve Problems**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Global Reader](../roles/permissions-reference.md#global-reader).
+1. Browse to **Learn & support** > **Diagnose & solve problems** or **Protection** > **Conditional Access** > **Diagnose and Solve Problems**.
+1. Select the **Troubleshoot** link on the **Sign-in Diagnostic** tile.
1. Select the **All Sign-In Events** tab to start a search. 1. Enter as many details as possible into the search fields. - **User**: Provide the name or email address of who made the sign-in attempt.
You can start the Sign-in diagnostic from the **Diagnose and Solve Problems** ar
You can start the Sign-in diagnostic from a specific sign-in event in the Sign-in logs. When you start the process from a specific sign-in event, the diagnostics start right away. You aren't prompted to enter details first.
-1. Go to **Azure AD** > **Sign-in logs** and select a sign-in event.
+1. Browse to **Identity** > **Monitoring & health** > **Sign-in logs** and select a sign-in event.
- You can filter your list to make it easier to find specific sign-in events. 1. From the Activity Details window that opens, select the **Launch the Sign-in diagnostic** link.
You can start the Sign-in diagnostic from a specific sign-in event in the Sign-i
If you're in the middle of creating a support request *and* the options you selected are related to sign-in activity, you'll be prompted to run the Sign-in diagnostics during the support request process.
-1. Go to **Azure AD** > **Diagnose and Solve Problems**.
+1. Browse to **Diagnose and Solve Problems**.
1. Select the appropriate fields as necessary. For example: - **Service type**: Azure Active Directory Sign-in and Multi-Factor Authentication - **Problem type**: Multi-Factor Authentication
If you're in the middle of creating a support request *and* the options you sele
After the Sign-in diagnostic completes its search, a few things appear on the screen: -- The **Authentication Summary** lists all of the events that match the details you provided.
+- The **Authentication summary** lists all of the events that match the details you provided.
- Select the **View Columns** option in the upper-right corner of the summary to change the columns that appear.-- The **diagnostic Results** describe what happened during the sign-in events.
+- The **Diagnostic results** describe what happened during the sign-in events.
- Scenarios could include MFA requirements from a Conditional Access policy, sign-in events that may need to have a Conditional Access policy applied, or a large number of failed sign-in attempts over the past 48 hours. - Related content and links to troubleshooting tools may be provided. - Read through the results to identify any actions that you can take. - Because it's not always possible to resolve issues without more help, a recommended step might be to open a support ticket.
- ![Screenshot of the diagnostic Results for a scenario.](media/howto-use-sign-in-diagnostics/diagnostic-result-mfa-proofup.png)
+ ![Screenshot of the Diagnostic results for a scenario.](media/howto-use-sign-in-diagnostics/diagnostic-result-mfa-proofup.png)
- Provide feedback on the results to help improve the feature.
active-directory Howto Use Workbooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-use-workbooks.md
+
+ Title: Azure Monitor workbooks for Azure Active Directory
+description: Learn how to use Azure Monitor workbooks for analyzing identity logs in Azure Active Directory reports.
+++++++ Last updated : 08/24/2023++++
+# How to use Azure Active Directory Workbooks
+
+Workbooks are found in Azure AD and in Azure Monitor. The concepts, processes, and best practices are the same for both types of workbooks; however, workbooks for Azure Active Directory (Azure AD) cover only the identity management scenarios that are associated with Azure AD.
+
+When using workbooks, you can either start with an empty workbook, or use an existing template. Workbook templates enable you to quickly get started using workbooks without needing to build from scratch.
+
+- **Public templates** published to a [gallery](../../azure-monitor/visualize/workbooks-overview.md#the-gallery) are a good starting point when you're just getting started with workbooks.
+- **Private templates** are helpful when you start building your own workbooks and want to save one as a template to serve as the foundation for multiple workbooks in your tenant.
+
+## Prerequisites
+
+To use Azure Workbooks for Azure AD, you need:
+
+- An Azure AD tenant with a [Premium P1 license](../fundamentals/get-started-premium.md)
+- A Log Analytics workspace *and* access to that workspace
+- The appropriate roles for Azure Monitor *and* Azure AD
+
+### Log Analytics workspace
+
+You must create a [Log Analytics workspace](../../azure-monitor/logs/quick-create-workspace.md) *before* you can use Azure AD Workbooks. There are a combination of factors that determine access to Log Analytics workspaces. You need the right roles for the workspace *and* the resources sending the data.
+
+For more information, see [Manage access to Log Analytics workspaces](../../azure-monitor/logs/manage-access.md).
+
+### Azure Monitor roles
+
+Azure Monitor provides [two built-in roles](../../azure-monitor/roles-permissions-security.md#monitoring-reader) for viewing monitoring data and editing monitoring settings. Azure role-based access control (RBAC) also provides two Log Analytics built-in roles that grant similar access.
+
+- **View**:
+ - Monitoring Reader
+ - Log Analytics Reader
+
+- **View and modify settings**:
+ - Monitoring Contributor
+ - Log Analytics Contributor
+
+For more information on the Azure Monitor built-in roles, see [Roles, permissions, and security in Azure Monitor](../../azure-monitor/roles-permissions-security.md#monitoring-reader).
+
+For more information on the Log Analytics RBAC roles, see [Azure built-in roles](../../role-based-access-control/built-in-roles.md#log-analytics-contributor).
+
+### Azure AD roles
+
+Read-only access allows you to view Azure AD log data inside a workbook, query data from Log Analytics, or read logs in the Azure AD portal. Update access adds the ability to create and edit diagnostic settings to send Azure AD data to a Log Analytics workspace.
+
+- **Read**:
+ - Reports Reader
+ - Security Reader
+ - Global Reader
+
+- **Update**:
+ - Security Administrator
+
+For more information on Azure AD built-in roles, see [Azure AD built-in roles](../roles/permissions-reference.md).
+
+## How to access Azure Workbooks for Azure AD
++
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Reports Reader](../roles/permissions-reference.md#reports-reader).
+1. Browse to **Identity** > **Monitoring & health** > **Workbooks**.
+ - **Workbooks**: All workbooks created in your tenant
+ - **Public Templates**: Prebuilt workbooks for common or high priority scenarios
+ - **My Templates**: Templates you've created
+1. Select a report or template from the list. Workbooks may take a few moments to populate.
+ - Search for a template by name.
+ - Select **Browse across galleries** to view templates that aren't specific to Azure AD.
+
+ ![Find the Azure Monitor workbooks in Azure AD](./media/howto-use-azure-monitor-workbooks/azure-monitor-workbooks-in-azure-ad.png)
+
+## Create a new workbook
+
+Workbooks can be created from scratch or from a template. When creating a new workbook, you can add elements as you go or use the **Advanced Editor** option to paste in the JSON representation of a workbook, copied from the [workbooks GitHub repository](https://github.com/Microsoft/Application-Insights-Workbooks/blob/master/schema/workbook.json).
+
+**To create a new workbook from scratch**:
+1. Browse to **Identity** > **Monitoring & health** > **Workbooks**.
+1. Select **+ New**.
+1. Select an element from the **+ Add** menu.
+
+ For more information on the available elements, see [Creating an Azure Workbook](../../azure-monitor/visualize/workbooks-create-workbook.md).
+
+ ![Screenshot of the Azure Workbooks +Add menu options.](./media/howto-use-azure-monitor-workbooks/create-new-workbook-elements.png)
+
+**To create a new workbook from a template**:
+1. Browse to **Identity** > **Monitoring & health** > **Workbooks**.
+1. Select a workbook template from the Gallery.
+1. Select **Edit** from the top of the page.
+ - Each element of the workbook has its own **Edit** button.
+ - For more information on editing workbook elements, see [Azure Workbooks Templates](../../azure-monitor/visualize/workbooks-templates.md)
+
+1. Select the **Edit** button for any element. Make your changes and select **Done editing**.
+ ![Screenshot of a workbook in Edit mode, with the Edit and Done Editing buttons highlighted.](./media/howto-use-azure-monitor-workbooks/edit-buttons.png)
+1. When you're done editing the workbook, select **Save As** to save your workbook with a new name.
+1. In the **Save As** window:
+ - Provide a **Title**, **Subscription**, **Resource Group** (you must have the ability to save a workbook for the selected Resource Group), and **Location**.
+ - Optionally choose to save your workbook content to an [Azure Storage Account](../../azure-monitor/visualize/workbooks-bring-your-own-storage.md).
+1. Select the **Apply** button.
+
+## Next steps
+
+* [Create interactive reports by using Monitor workbooks](../../azure-monitor/visualize/workbooks-overview.md).
+* [Create custom Azure Monitor queries using Azure PowerShell](../governance/entitlement-management-logs-and-reporting.md).
active-directory Overview Flagged Sign Ins https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/overview-flagged-sign-ins.md
Previously updated : 11/01/2022 Last updated : 08/25/2023 # Customer intent: As an Azure AD administrator, I want a tool that gives me the right level of insights into the sign-in activities in my system so that I can easily diagnose and solve problems when they occur.- # What are flagged sign-ins in Azure Active Directory?
Azure AD sign-in events are critical to understanding what went right or wrong w
Flagged Sign-ins is a feature intended to increase the signal-to-noise ratio for user sign-ins requiring help. The functionality is intended to empower users to raise awareness about sign-in errors they want help with. Admins and help desk workers also benefit from finding the right events more efficiently. Flagged Sign-in events contain the same information as other sign-in events, with one addition: they also indicate that a user flagged the event for review by admins.
-Flagged sign-ins gives the user the ability to enable flagging when an error is seen on a sign-in page and then reproduce that error. The error event will then appear as ΓÇ£Flagged for ReviewΓÇ¥ in the Azure AD sign-ins log.
+Flagged sign-ins give the user the ability to enable flagging when an error is seen on a sign-in page and then reproduce that error. The error event then appears as “Flagged for Review” in the Azure AD sign-ins log.
In summary, you can use flagged sign-ins to:
Flagged sign-ins gives you the ability to enable flagging when signing in using
5. Open a new browser window (in the same browser application) and attempt the same sign-in that failed. 6. Reproduce the sign-in error that was seen before.
-With flagging enabled, the same browser application and client must be used or the events won't be flagged.
+With flagging enabled, the same browser application and client must be used or the events aren't flagged.
### Admin: Find flagged events in reports
-1. In the Azure portal, go to **Sign-in logs** > **Add Filters**.
-1. From the **Pick a field** menu, select **Flagged for review** and **Apply**.
-1. All events that were flagged by users are shown.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Global Reader](../roles/permissions-reference.md#global-reader).
+1. Browse to **Identity** > **Monitoring & health** > **Sign-in logs**.
+1. Open the **Add filters** menu and select **Flagged for review**. All events that were flagged by users are shown.
1. If needed, apply more filters to refine the event view. 1. Select the event to review what happened.
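Flagged events can also be retrieved programmatically. The following Python sketch is an assumption-based illustration: it filters on the `flaggedForReview` property exposed on the Microsoft Graph beta `signIn` resource, with a placeholder access token:

```python
import requests

token = "<graph-access-token>"  # acquire via MSAL or the Azure CLI; AuditLog.Read.All is required

# Assumption: flaggedForReview is exposed on the beta signIn resource.
resp = requests.get(
    "https://graph.microsoft.com/beta/auditLogs/signIns",
    headers={"Authorization": f"Bearer {token}"},
    params={"$filter": "flaggedForReview eq true", "$top": "25"},
)
resp.raise_for_status()
for signin in resp.json().get("value", []):
    print(signin["createdDateTime"], signin["userPrincipalName"],
          signin["status"].get("errorCode"))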
Any user signing into Azure AD via web page can use flag sign-ins for review. Me
## Who can review flagged sign-ins?
-Reviewing flagged sign-in events requires permissions to read the sign-in report events in the Azure portal. For more information, see [who can access it?](concept-sign-ins.md#how-do-you-access-the-sign-in-logs)
+Reviewing flagged sign-in events requires permissions to read the sign-in report events in the Azure portal. For more information, see [How to access activity logs](howto-access-activity-logs.md#prerequisites).
To flag sign-in failures, you don't need extra permissions.
active-directory Overview Monitoring Health https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/overview-monitoring-health.md
+
+ Title: What is Azure Active Directory monitoring and health?
+description: Provides a general overview of Azure Active Directory monitoring and health.
+++++++ Last updated : 08/15/2023+++++
+# What is Azure Active Directory monitoring and health?
+
+The features of Azure Active Directory (Azure AD) Monitoring and health provide a comprehensive view of identity related activity in your environment. This data enables you to:
+
+- Determine how your users utilize your apps and services.
+- Detect potential risks affecting the health of your environment.
+- Troubleshoot issues preventing your users from getting their work done.
+
+Sign-in and audit logs comprise the activity logs behind many Azure AD reports, which can be used to analyze, monitor, and troubleshoot activity in your tenant. Routing your activity logs to an analysis and monitoring solution provides greater insights into your tenant's health and security.
+
+This article describes the types of activity logs available in Azure AD, the reports that use the logs, and the monitoring services available to help you analyze the data.
+
+## Identity activity logs
+
+Activity logs help you understand the behavior of users in your organization. There are three types of activity logs in Azure AD:
+
+- [**Audit logs**](concept-audit-logs.md) include the history of every task performed in your tenant.
+
+- [**Sign-in logs**](concept-all-sign-ins.md) capture the sign-in attempts of your users and client applications.
+
+- [**Provisioning logs**](concept-provisioning-logs.md) provide information about users provisioned in your tenant through a third-party service.
+
+The activity logs can be viewed in the Azure portal or using the Microsoft Graph API. Activity logs can also be routed to various endpoints for storage or analysis. To learn about all of the options for viewing the activity logs, see [How to access activity logs](howto-access-activity-logs.md).
+
+### Audit logs
+
+Audit logs provide you with records of system activities for compliance. This data enables you to address common scenarios such as:
+
+- Someone in my tenant got access to an admin group. Who gave them access?
+- I want to know the list of users signing into a specific app because I recently onboarded the app and want to know if it's doing well.
+- I want to know how many password resets are happening in my tenant.
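As an illustration of the first scenario, if your audit logs are already routed to a Log Analytics workspace, a query along these lines can surface group membership changes. This Python sketch assumes the standard `AuditLogs` schema and a placeholder workspace ID:

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

# Who was added to a group, and by whom, over the past week.
query = """
AuditLogs
| where OperationName == "Add member to group"
| extend Actor  = tostring(InitiatedBy.user.userPrincipalName)
| extend Target = tostring(TargetResources[0].userPrincipalName)
| project TimeGenerated, Actor, Target
"""

client = LogsQueryClient(DefaultAzureCredential())
result = client.query_workspace("<workspace-guid>", query, timespan=timedelta(days=7))
for row in result.tables[0].rows:
    print(row)
```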
+
+### Sign-in logs
+
+The sign-ins logs enable you to find answers to questions such as:
+
+- What is the sign-in pattern of a user?
+- How many users have signed in over a week?
+- What's the status of these sign-ins?
+
+### Provisioning logs
+
+You can use the provisioning logs to find answers to questions like:
+
+- What groups were successfully created in ServiceNow?
+- What users were successfully removed from Adobe?
+- What users from Workday were successfully created in Active Directory?
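For instance, recent provisioning events can be pulled from the Microsoft Graph `auditLogs/provisioning` endpoint. In the following Python sketch, the access token is a placeholder and the property names are assumptions based on the `provisioningObjectSummary` resource, so they may need adjusting:

```python
import requests

token = "<graph-access-token>"  # needs AuditLog.Read.All; acquisition not shown

resp = requests.get(
    "https://graph.microsoft.com/v1.0/auditLogs/provisioning",
    headers={"Authorization": f"Bearer {token}"},
    params={"$top": "25"},
)
resp.raise_for_status()
for event in resp.json().get("value", []):
    # Property names assumed from the provisioningObjectSummary resource.
    print(event.get("activityDateTime"),
          event.get("provisioningAction"),
          (event.get("targetSystem") or {}).get("displayName"))
```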
+
+## Identity reports
+
+Reviewing the data in the Azure AD activity logs can provide helpful information for IT administrators. To streamline the process of reviewing data on key scenarios, we've created several reports on common scenarios that use the activity logs.
+
+- [Identity Protection](../identity-protection/overview-identity-protection.md) uses sign-in data to create reports on risky users and sign-in activities.
+- Activity related to your applications, such as service principal and app credential activity, is used to create reports in [Usage and insights](concept-usage-insights-report.md).
+- [Azure AD workbooks](overview-workbooks.md) provide a customizable way to view and analyze the activity logs.
+- [Monitor the status of Azure AD recommendations to improve your tenant's security.](overview-recommendations.md)
+
+## Identity monitoring and tenant health
+
+Reviewing Azure AD activity logs is the first step in maintaining and improving the health and security of your tenant. You need to analyze the data, monitor for risky scenarios, and determine where you can make improvements. Azure AD monitoring provides the necessary tools to help you make informed decisions.
+
+Monitoring Azure AD activity logs requires routing the log data to a monitoring and analysis solution. Endpoints include Azure Monitor logs, Microsoft Sentinel, or a third-party Security Information and Event Management (SIEM) tool.
+
+- [Stream logs to an event hub to integrate with third-party SIEM tools.](howto-stream-logs-to-event-hub.md)
+- [Integrate logs with Azure Monitor logs.](howto-integrate-activity-logs-with-log-analytics.md)
+- [Analyze logs with Azure Monitor logs and Log Analytics.](howto-analyze-activity-logs-log-analytics.md)
++
+For an overview of how to access, store, and analyze activity logs, see [How to access activity logs](howto-access-activity-logs.md).
++
+## Next steps
+
+- [Learn about the sign-ins logs](concept-all-sign-ins.md)
+- [Learn about the audit logs](concept-audit-logs.md)
+- [Use Microsoft Graph to access activity logs](quickstart-access-log-with-graph-api.md)
+- [Integrate activity logs with SIEM tools](howto-stream-logs-to-event-hub.md)
active-directory Overview Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/overview-monitoring.md
-- Title: What is Azure Active Directory monitoring?
-description: Provides a general overview of Azure Active Directory monitoring.
------- Previously updated : 11/01/2022---
-# Customer intent: As an Azure AD administrator, I want to understand what monitoring solutions are available for Azure AD activity data and how they can help me manage my tenant.
---
-# What is Azure Active Directory monitoring?
-
-With Azure Active Directory (Azure AD) monitoring, you can now route your Azure AD activity logs to different endpoints. You can then either retain it for long-term use or integrate it with third-party Security Information and Event Management (SIEM) tools to gain insights into your environment.
-
-Currently, you can route the logs to:
--- An Azure storage account.-- An Azure event hub, so you can integrate with your Splunk and Sumologic instances.-- Azure Log Analytics workspace, wherein you can analyze the data, create dashboard and alert on specific events-
-**Prerequisite role**: Global Administrator
-
-> [!VIDEO https://www.youtube.com/embed/syT-9KNfug8]
--
-## Licensing and prerequisites for Azure AD reporting and monitoring
-
-You'll need an Azure AD premium license to access the Azure AD sign-in logs.
-
-For detailed feature and licensing information in the [Azure Active Directory pricing guide](https://www.microsoft.com/security/business/identity-access-management/azure-ad-pricing).
-
-To deploy Azure AD monitoring and reporting you'll need a user who is a global administrator or security administrator for the Azure AD tenant.
-
-Depending on the final destination of your log data, you'll need one of the following:
-
-* An Azure storage account that you have ListKeys permissions for. We recommend that you use a general storage account and not a Blob storage account. For storage pricing information, see the [Azure Storage pricing calculator](https://azure.microsoft.com/pricing/calculator/?service=storage).
-
-* An Azure Event Hubs namespace to integrate with third-party SIEM solutions.
-
-* An Azure Log Analytics workspace to send logs to Azure Monitor logs.
-
-## Diagnostic settings configuration
-
-To configure monitoring settings for Azure AD activity logs, first sign in to the [Azure portal](https://portal.azure.com), then select **Azure Active Directory**. From here, you can access the diagnostic settings configuration page in two ways:
-
-* Select **Diagnostic settings** from the **Monitoring** section.
-
- ![Diagnostics settings](./media/overview-monitoring/diagnostic-settings.png)
-
-* Select **Audit Logs** or **Sign-ins**, then select **Export settings**.
-
- ![Export settings](./media/overview-monitoring/export-settings.png)
--
-## Route logs to storage account
-
-By routing logs to an Azure storage account, you can retain it for longer than the default retention period outlined in our [retention policies](reference-reports-data-retention.md). Learn how to [route data to your storage account](quickstart-azure-monitor-route-logs-to-storage-account.md).
-
-## Stream logs to event hub
-
-Routing logs to an Azure event hub allows you to integrate with third-party SIEM tools like Sumologic and Splunk. This integration allows you to combine Azure AD activity log data with other data managed by your SIEM, to provide richer insights into your environment. Learn how to [stream logs to an event hub](tutorial-azure-monitor-stream-logs-to-event-hub.md).
-
-## Send logs to Azure Monitor logs
-
-[Azure Monitor logs](../../azure-monitor/logs/log-query-overview.md) is a solution that consolidates monitoring data from different sources and provides a query language and analytics engine that gives you insights into the operation of your applications and resources. By sending Azure AD activity logs to Azure Monitor logs, you can quickly retrieve, monitor and alert on collected data. Learn how to [send data to Azure Monitor logs](howto-integrate-activity-logs-with-log-analytics.md).
-
-You can also install the pre-built views for Azure AD activity logs to monitor common scenarios involving sign-ins and audit events. Learn how to [install and use log analytics views for Azure AD activity logs](../../azure-monitor/visualize/workbooks-view-designer-conversion-overview.md).
-
-## Next steps
-
-* [Activity logs in Azure Monitor](concept-activity-logs-azure-monitor.md)
-* [Stream logs to event hub](tutorial-azure-monitor-stream-logs-to-event-hub.md)
-* [Send logs to Azure Monitor logs](howto-integrate-activity-logs-with-log-analytics.md)
active-directory Overview Reports https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/overview-reports.md
-- Title: What are Azure Active Directory reports?
-description: Provides a general overview of Azure Active Directory reports.
------- Previously updated : 02/03/2023---
-# Customer intent: As an Azure AD administrator, I want to understand what Azure AD reports are available and how I can use them to gain insights into my environment.
---
-# What are Azure Active Directory reports?
-
-Azure Active Directory (Azure AD) reports provide a comprehensive view of activity in your environment. The provided data enables you to:
--- Determine how your apps and services are utilized by your users-- Detect potential risks affecting the health of your environment-- Troubleshoot issues preventing your users from getting their work done -
-## Activity reports
-
-Activity reports help you understand the behavior of users in your organization. There are two types of activity reports in Azure AD:
--- **Audit logs** - The [audit logs activity report](concept-audit-logs.md) provides you with access to the history of every task performed in your tenant.--- **Sign-ins** - With the [sign-ins activity report](concept-sign-ins.md), you can determine, who has performed the tasks reported by the audit logs report.---
-> [!VIDEO https://www.youtube.com/embed/ACVpH6C_NL8]
-
-### Audit logs report
-
-The [audit logs report](concept-audit-logs.md) provides you with records of system activities for compliance. This data enables you to address common scenarios such as:
--- Someone in my tenant got access to an admin group. Who gave them access? --- I want to know the list of users signing into a specific app since I recently onboarded the app and want to know if itΓÇÖs doing well--- I want to know how many password resets are happening in my tenant--
-#### What Azure AD license do you need to access the audit logs report?
-
-The audit logs report is available for features for which you have licenses. If you have a license for a specific feature, you also have access to the audit log information for it. A detailed feature comparison as per [different types of licenses](../fundamentals/whatis.md#what-are-the-azure-ad-licenses) can be seen on the [Azure Active Directory pricing page](https://www.microsoft.com/security/business/identity-access-management/azure-ad-pricing). For more information, see [Azure Active Directory features and capabilities](../fundamentals/whatis.md#which-features-work-in-azure-ad).
-
-### Sign-ins report
-
-The [sign-ins report](concept-sign-ins.md) enables you to find answers to questions such as:
-- What is the sign-in pattern of a user?-- How many users have signed in over a week?-- What's the status of these sign-ins?-
-#### What Azure AD license do you need to access the sign-ins activity report?
-
-To access the sign-ins activity report, your tenant must have an Azure AD Premium license associated with it.
-
-## Programmatic access
-
-In addition to the user interface, Azure AD also provides you with [programmatic access](./howto-configure-prerequisites-for-reporting-api.md) to the reports data, through a set of REST-based APIs. You can call these APIs from various programming languages and tools.
-
-## Next steps
--- [Risky sign-ins report](../identity-protection/howto-identity-protection-investigate-risk.md#risky-sign-ins)-- [Audit logs report](concept-audit-logs.md)-- [Sign-ins logs report](concept-sign-ins.md)
active-directory Overview Service Health Notifications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/overview-service-health-notifications.md
- Title: What are Service Health notifications in Azure Active Directory?
-description: Learn how Service Health notifications provide you with a customizable dashboard that tracks the health of your Azure services in the regions where you use them.
------- Previously updated : 11/01/2022-----
-# What are Service Health notifications in Azure Active Directory?
-
-Azure Service Health has been updated to provide notifications to tenant admins within the Azure portal when there are Service Health events for Azure Active Directory services. Due to the criticality of these events, an alert card in the Azure AD overview page will also be provided to support the discovery of these notifications.
-
-## How it works
-
-When there happens to be a Service Health notification for an Azure Active Directory service, it will be posted to the Service Health page within the Azure portal. Previously these were subscription events that were posted to all the subscription owners/readers of subscriptions within the tenant that had an issue. To improve the targeting of these notifications, they'll now be available as tenant events to the tenant admins of the impacted tenant. For a transition period, these service events will be available as both tenant events and subscription events.
-
-Now that they're available as tenant events, they appear on the Azure AD overview page as alert cards. Any Service Health notification that has been updated within the last three days will be shown in one of the cards.
-
-
-![Screenshot of the alert cards on the Azure AD overview page.](./media/overview-service-health-notifications/service-health-overview.png)
---
-Each card:
--- Represents a currently active event, or a resolved one that will be distinguished by the icon in the card. -- Has a link to the event. You can review the event on the Azure Service Health pages. -
-
-![Screenshot of the event on the Azure Service Health page.](./media/overview-service-health-notifications/service-health-issues.png)
--
-
-
-For more information on the new Azure Service Health tenant events, see [Azure Service Health portal updates](../../service-health/service-health-portal-update.md).
-
-## Who will see the notifications
-
-Most of the built-in admin roles have access to these notifications. For the complete list of all authorized roles, see [Azure Service Health Tenant Admin authorized roles](../../service-health/admin-access-reference.md). Currently, custom roles aren't supported.
-
-## What you should know
-
-Service Health lets you add alerts and notifications to subscription events. This feature isn't yet supported for tenant events, but is coming soon.
--
-
---
-## Next steps
--- [Service Health overview](../../service-health/service-health-overview.md)
active-directory Quickstart Access Log With Graph Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/quickstart-access-log-with-graph-api.md
Title: Access Azure AD logs with the Microsoft Graph API
-description: In this quickstart, you learn how you can access the sign-ins log using the Graph API.
+ Title: Analyze Azure AD sign-in logs with the Microsoft Graph API
+description: Learn how to access the sign-ins log and analyze a single sign-in attempt using the Microsoft Graph API.
 Previously updated : 11/01/2022 Last updated : 08/25/2023 --+ #Customer intent: As an IT admin, you need to know how to use the Graph API to access the log files so that you can fix issues. # Quickstart: Access Azure AD logs with the Microsoft Graph API
-With the information in the Azure Active Directory (Azure AD) sign-in logs, you can figure out what happened if a sign-in of a user failed. This quickstart shows you how to access the sign-ins log using the Graph API.
+With the information in the Azure Active Directory (Azure AD) sign-in logs, you can figure out what happened if a sign-in of a user failed. This quickstart shows you how to access the sign-ins log using the Microsoft Graph API.
## Prerequisites
To complete the scenario in this quickstart, you need:
- **Access to an Azure AD tenant**: If you don't have access to an Azure AD tenant, see [Create your Azure free account today](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
- **A test account called Isabella Simonsen**: If you don't know how to create a test account, see [Add cloud-based users](../fundamentals/add-users.md#add-a-new-user).
-
+- **Access to the Microsoft Graph API**: If you haven't configured access yet, see [How to configure the prerequisites for the reporting API](howto-configure-prerequisites-for-reporting-api.md).
## Perform a failed sign-in
To complete the scenario in this quickstart, you need:
The goal of this step is to create a record of a failed sign-in in the Azure AD sign-ins log.
-**To complete this step:**
-
-1. Sign in to the [Azure portal](https://portal.azure.com) as Isabella Simonsen using an incorrect password.
-
-2. Wait for 5 minutes to ensure that you can find a record of the sign-in in the sign-ins log. For more information, see [Activity reports](./overview-reports.md#activity-reports).
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as Isabella Simonsen using an incorrect password.
+2. Wait for 5 minutes to ensure that you can find a record of the sign-in in the sign-ins log.
## Find the failed sign-in
-This section provides you with the steps to get information about your sign-in using the Graph API.
+This section provides the steps to locate the failed sign-in attempt using the Microsoft Graph API.
![Microsoft Graph Explorer query](./media/quickstart-access-log-with-graph-api/graph-explorer-query.png)
-**To review the failed sign-in:**
- 1. Navigate to [Microsoft Graph Explorer](https://developer.microsoft.com/en-us/graph/graph-explorer).
-2. Sign-in to your tenant as global administrator.
+2. Follow the prompts to authenticate into your tenant.
![Microsoft Graph Explorer authentication](./media/quickstart-access-log-with-graph-api/graph-explorer-authentication.png)
active-directory Quickstart Analyze Sign In https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/quickstart-analyze-sign-in.md
Title: Analyze sign-ins with the Azure AD sign-ins log
+ Title: Quickstart guide to analyze a failed Azure AD sign-in
description: In this quickstart, you learn how you can use the sign-ins log to determine the reason for a failed sign-in to Azure AD. Previously updated : 11/01/2022 Last updated : 08/21/2023 --+ #Customer intent: As an IT admin, you need to know how to use the sign-ins log so that you can fix sign-in issues. # Quickstart: Analyze sign-ins with the Azure AD sign-ins log
The goal of this step is to create a record of a failed sign-in in the Azure AD
This section provides you with the steps to analyze a failed sign-in:

- **Filter sign-ins**: Remove all records that aren't relevant to your analysis. For example, set a filter to display only the records of a specific user.
-
+- **Look up additional error information**: In addition to the information you can find in the sign-ins log, you can also look up the error using the [sign-in error lookup tool](https://login.microsoftonline.com/error). This tool might provide you with additional information for a sign-in error.
**To review the failed sign-in:**
This section provides you with the steps to analyze a failed sign-in:
Review the outcome of the tool and determine whether it provides you with additional information.
-![Error code lookup tool](./media/concept-all-sign-ins/error-code-lookup-tool.png)
-- ## More tests Now, that you know how to find an entry in the sign-in log by name, you should also try to find the record using the following filters:
Now, that you know how to find an entry in the sign-in log by name, you should a
![Status failure](./media/quickstart-analyze-sign-in/status-failure.png)
--
## Clean up resources

When no longer needed, delete the test user. If you don't know how to delete an Azure AD user, see [Delete users from Azure AD](../fundamentals/add-users.md#delete-a-user).

## Next steps
-> [!div class="nextstepaction"]
-> [What are Azure Active Directory reports?](overview-reports.md)
+- [Learn how to use the sign-in diagnostic](howto-use-sign-in-diagnostics.md)
+- [Analyze sign-in logs with Azure Monitor Log Analytics](howto-analyze-activity-logs-log-analytics.md)
active-directory Quickstart Azure Monitor Route Logs To Storage Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/quickstart-azure-monitor-route-logs-to-storage-account.md
- Title: Tutorial - Archive Azure Active Directory logs to a storage account
-description: Learn how to route Azure Active Directory logs to a storage account
------- Previously updated : 07/14/2023---
-# Customer intent: As an IT administrator, I want to learn how to route Azure AD logs to an Azure storage account so I can retain it for longer than the default retention period.
---
-# Tutorial: Archive Azure AD logs to an Azure storage account
-
-In this tutorial, you learn how to set up Azure Monitor diagnostics settings to route Azure Active Directory (Azure AD) logs to an Azure storage account.
-
-## Prerequisites
-
-To use this feature, you need:
-
-* An Azure subscription with an Azure storage account. If you don't have an Azure subscription, you can [sign up for a free trial](https://azure.microsoft.com/free/).
-* An Azure AD tenant.
-* A user who's a *Global Administrator* or *Security Administrator* for the Azure AD tenant.
-* To export sign-in data, you must have an Azure AD P1 or P2 license.
-
-## Archive logs to an Azure storage account
--
-1. Sign in to the [Azure portal](https://portal.azure.com).
-
-1. Select **Azure Active Directory** > **Monitoring** > **Audit logs**.
-
-1. Select **Export Data Settings**.
-
-1. You can either create a new setting (up to three settings are allowed) or edit an existing setting.
 - To change an existing setting, select **Edit setting** next to the diagnostic setting you want to update.
- - To add new settings, select **Add diagnostic setting**.
-
- ![Export settings](./media/quickstart-azure-monitor-route-logs-to-storage-account/ExportSettings.png)
-
-1. In the **Diagnostic setting** pane, if you're creating a new setting, enter a name that reminds you of the setting's purpose (for example, *Send to Azure storage account*). You can't change the name of an existing setting.
-
-1. Under **Destination Details** select the **Archive to a storage account** check box. Text fields for the retention period appear next to each log category.
-
-1. Select the Azure subscription and storage account to which you want to route the logs.
-
-1. Select all the relevant categories under **Category details**:
-
- ![Diagnostics settings](./media/quickstart-azure-monitor-route-logs-to-storage-account/DiagnosticSettings.png)
-
-1. In the **Retention days** field, enter the number of days to retain your log data. By default, this value is *0*, which means that logs are retained in the storage account indefinitely. If you set a different value, events older than the number of days selected are automatically cleaned up.
-
-
- > [!NOTE]
- > The Diagnostic settings storage retention feature is being deprecated. For details on this change, see [**Migrate from diagnostic settings storage retention to Azure Storage lifecycle management**](../../azure-monitor/essentials/migrate-to-azure-storage-lifecycle-policy.md).
-
-1. Select **Save** to save the setting.
-
-1. Close the window to return to the Diagnostic settings pane.
-
-## Next steps
-
-* [Tutorial: Configure a log analytics workspace](tutorial-log-analytics-wizard.md)
-* [Interpret audit logs schema in Azure Monitor](./overview-reports.md)
-* [Interpret sign-in logs schema in Azure Monitor](reference-azure-monitor-sign-ins-log-schema.md)
active-directory Quickstart Filter Audit Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/quickstart-filter-audit-log.md
- Title: Filter your Azure AD audit log
-description: In this quickstart, you learn how you can filter entries in your Azure AD audit log.
---- Previously updated : 11/01/2022------
-#Customer intent: As an IT admin, you need to know how to filter your audit log so that you can analyze management activities.
-
-# Quickstart: Filter your Azure AD audit log
-
-With the information in the Azure AD audit log, you get access to records of system activities for compliance.
-This quickstart shows how you can locate a newly created user account in your audit log.
--
-## Prerequisites
-
-To complete the scenario in this quickstart, you need:
-- **Access to an Azure AD tenant** - If you don't have access to an Azure AD tenant, see [Create your Azure free account today](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-- **A test account called Isabella Simonsen** - If you don't know how to create a test account, see [Add cloud-based users](../fundamentals/add-users.md#add-a-new-user).
-
-## Find the new user account
-
-This section provides you with the steps to filter your audit log.
--
-**To find the new user:**
-
-1. Navigate to the [audit log](https://portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/Audit).
-
-2. To list only records for Isabella Simonsen:
-
- a. In the toolbar, select **Add filters**.
-
- ![Add user filter](./media/quickstart-analyze-sign-in/add-filters.png)
-
 b. In the **Pick a field** list, select **Target**, and then select **Apply**.
-
- c. In the **Target** textbox, type the **User Principal Name** of **Isabella Simonsen**, and then select **Apply**.
-
-3. Select the filtered item.
-
- ![Filtered items](./media/quickstart-filter-audit-log/audit-log-list.png)
-
-4. Review the **Audit Log Details**.
-
- ![Audit log details](./media/quickstart-filter-audit-log/audit-log-details.png)
-
-
-
-## Clean up resources
-
-When no longer needed, delete the test user. If you don't know how to delete an Azure AD user, see [Delete users from Azure AD](../fundamentals/add-users.md#delete-a-user).
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [What are Azure Active Directory reports?](overview-reports.md)
active-directory Recommendation Migrate From Adal To Msal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/recommendation-migrate-from-adal-to-msal.md
Title: Azure Active Directory recommendation - Migrate from ADAL to MSAL | Microsoft Docs
+ Title: Migrate from ADAL to MSAL recommendation
description: Learn why you should migrate from the Azure Active Directory Library to the Microsoft Authentication Libraries. -+ Previously updated : 08/10/2023 Last updated : 08/15/2023 -- # Azure AD recommendation: Migrate from the Azure Active Directory Library to the Microsoft Authentication Libraries
Existing apps that use ADAL will continue to work after the end-of-support date.
## Action plan
-The first step to migrating your apps from ADAL to MSAL is to identify all applications in your tenant that are currently using ADAL. You can identify your apps in the Azure portal or programmatically with the Microsoft Graph API or the Microsoft Graph PowerShell SDK.
-
-### [Azure portal](#tab/Azure-portal)
-
-There are four steps to identifying and updating your apps in the Azure portal. The following steps are covered in detail in the [List all apps using ADAL](../develop/howto-get-list-of-all-auth-library-apps.md) article.
-
-1. Send Azure AD sign-in event to Azure Monitor.
-1. [Access the sign-ins workbook in Azure AD.](../develop/howto-get-list-of-all-auth-library-apps.md)
-1. Identify the apps that use ADAL.
-1. Update your code.
- - The steps to update your code vary depending on the type of application.
- - For example, the steps for .NET and Python applications have separate instructions.
- - For a full list of instructions for each scenario, see [How to migrate to MSAL](../develop/msal-migration.md#how-to-migrate-to-msal).
+The first step to migrating your apps from ADAL to MSAL is to identify all applications in your tenant that are currently using ADAL. You can identify your apps programmatically with the Microsoft Graph API or the Microsoft Graph PowerShell SDK. The steps for the Microsoft Graph PowerShell SDK are provided in the Recommendation details in the Azure Active Directory portal.
### [Microsoft Graph API](#tab/Microsoft-Graph-API) You can use Microsoft Graph to identify apps that need to be migrated to MSAL. To get started, see [How to use Microsoft Graph with Azure AD recommendations](howto-use-recommendations.md#how-to-use-microsoft-graph-with-azure-active-directory-recommendations).
-Run the following query in Microsoft Graph, replacing the `<TENANT_ID>` placeholder with your tenant ID. This query returns a list of the impacted resources in your tenant.
+1. Sign in to [Graph Explorer](https://aka.ms/ge).
+1. Select **GET** as the HTTP method from the dropdown.
+1. Set the API version to **beta**.
+1. Run the following query in Microsoft Graph, replacing the `<TENANT_ID>` placeholder with your tenant ID. This query returns a list of the impacted resources in your tenant.
```http https://graph.microsoft.com/beta/directory/recommendations/<TENANT_ID>_Microsoft.Identity.IAM.Insights.AdalToMsalMigration/impactedResources
You can run the following set of commands in Windows PowerShell. These commands
+ ## Frequently asked questions ### Why does it take 30 days to change the status to completed?
To reduce false positives, the service uses a 30 day window for ADAL requests. T
### How were ADAL applications identified before the recommendation was released?
-The [Azure AD sign-ins workbook](../develop/howto-get-list-of-all-auth-library-apps.md) is an alternative method to identify these apps. The workbook is still available to you, but using the workbook requires streaming sign-in logs to Azure Monitor first. The ADAL to MSAL recommendation works out of the box. Plus, the sign-ins workbook does not capture Service Principal sign-ins, while the recommendation does.
+The [Azure AD sign-ins workbook](../develop/howto-get-list-of-all-auth-library-apps.md) was an alternative method to identify these apps. The workbook is still available to you, but using the workbook requires streaming sign-in logs to Azure Monitor first. The ADAL to MSAL recommendation works out of the box. Plus, the sign-ins workbook doesn't capture Service Principal sign-ins, while the recommendation does.
### Why is the number of ADAL applications different in the workbook and the recommendation?
active-directory Reference Audit Activities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/reference-audit-activities.md
Title: Azure Active Directory (Azure AD) audit activity reference
-description: Get an overview of the audit activities that can be logged in your audit logs in Azure Active Directory (Azure AD).
+description: Get an overview of the audit activities that can be logged in your audit logs in Azure Active Directory.
Previously updated : 12/02/2022 Last updated : 08/23/2023 - # Azure AD audit log categories and activities
Azure Active Directory (Azure AD) audit logs collect all traceable activities wi
This article provides a comprehensive list of the audit categories and their related activities. Use the "In this article" section to jump to a specific audit category.
-Audit log activities and categories change periodically. The tables are updated regularly, but may not be in sync with what is available in Azure AD. Provide us feedback if you think there's a missing audit category or activity.
+Audit log activities and categories change periodically. The tables are updated regularly, but may not be in sync with what is available in Azure AD. Provide us with feedback if you think there's a missing audit category or activity.
-1. Sign in to the **Azure portal** using one of the [required roles](concept-audit-logs.md#how-do-i-access-it).
+1. Sign in to the **Azure portal** using one of the [required roles](concept-audit-logs.md).
1. Browse to **Azure Active Directory** > **Audit logs**.
1. Adjust the filters accordingly.
1. Select a row from the resulting table to view the details.
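If you also stream audit logs to a Log Analytics workspace, you can enumerate the categories and activities that actually occur in your tenant instead of relying only on the static tables. A minimal sketch, assuming the `AuditLogs` table is populated:

```kusto
// Hedged sketch: audit categories and activities observed in the last 30 days.
// Assumes audit logs are exported to Log Analytics (AuditLogs table).
AuditLogs
| where TimeGenerated > ago(30d)
| summarize Occurrences = count() by Category, OperationName
| order by Occurrences desc
```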
With [Azure AD Identity Governance access reviews](../governance/manage-user-acc
## Account provisioning
-Each time an account is provisioned in your Azure AD tenant, a log for that account is captured. Automated provisioning, such as with [Azure AD Connect cloud sync](../hybrid/cloud-sync/what-is-cloud-sync.md), will be found in this log. The Account provisioning service only has one audit category in the logs.
+Each time an account is provisioned in your Azure AD tenant, a log for that account is captured. Automated provisioning, such as with [Azure AD Connect cloud sync](../hybrid/cloud-sync/what-is-cloud-sync.md), is found in this log. The Account provisioning service only has one audit category in the logs.
|Audit Category|Activity| |||
This set of audit logs is related to [B2C](../../active-directory-b2c/overview.m
|ApplicationManagement|Retrieve V2 application service principals| |ApplicationManagement|Update V2 application| |ApplicationManagement|Update V2 application permission grant|
-|Authentication|A self-service sign up request was completed|
+|Authentication|A self-service sign-up request was completed|
|Authentication|An API was called as part of a user flow| |Authentication|Delete all available strong authentication devices| |Authentication|Evaluate Conditional Access policies|
active-directory Reference Azure Ad Sla Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/reference-azure-ad-sla-performance.md
The SLA attainment is truncated at three places after the decimal. Numbers aren'
| May | 99.999% | 99.999% | 99.999% | | June | 99.999% | 99.999% | 99.999% | | July | 99.999% | 99.999% | 99.999% |
-| August | 99.999% | 99.999% | |
+| August | 99.999% | 99.999% | 99.999% |
| September | 99.999% | 99.998% | | | October | 99.999% | 99.999% | | | November | 99.998% | 99.999% | |
active-directory Reference Basic Info Sign In Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/reference-basic-info-sign-in-logs.md
- Title: Basic info in the Azure AD sign-in logs
-description: Learn what the basic info in the sign-in logs is about.
------- Previously updated : 10/28/2022------
-# Basic info in the Azure AD sign-in logs
-
-Azure AD logs all sign-ins into an Azure tenant for compliance. As an IT administrator, you need to know what the values in the sign-in logs mean, so that you can interpret the log values correctly. [Learn how to access, view, and analyze Azure AD sign-in logs](concept-sign-ins.md).
-
-This article explains the values on the Basic info tab of the sign-ins log.
-
-## Unique identifiers
-
-In Azure AD, a resource access has three relevant components:
-- **Who** – The identity (User) doing the sign-in.
-- **How** – The client (Application) used for the access.
-- **What** – The target (Resource) accessed by the identity.
-
-Each component has an associated unique identifier (ID). Below is an example of a user using the Microsoft Azure classic deployment model to access the Azure portal.
-
-![Open audit logs](./media/reference-basic-info-sign-in-logs/sign-in-details-basic-info.png)
-
-### Tenant
-
-The sign-in log tracks two tenant identifiers:
-- **Home tenant** – The tenant that owns the user identity.
-- **Resource tenant** – The tenant that owns the (target) resource.
-
-These identifiers are relevant in cross-tenant scenarios. For example, to find out how users outside your tenant are accessing your resources, select all entries where the home tenant doesn't match the resource tenant.
-For the home tenant, Azure AD tracks the ID and the name.
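As a sketch of that filter, assuming the sign-in logs are exported to a Log Analytics workspace that includes the `HomeTenantId` and `ResourceTenantId` columns:

```kusto
// Hedged sketch: sign-ins where the home tenant differs from the resource tenant.
SigninLogs
| where HomeTenantId != ResourceTenantId
| summarize CrossTenantSignIns = count() by HomeTenantId, AppDisplayName
```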
-
-### Request ID
-
-The request ID is an identifier that corresponds to an issued token. If you're looking for sign-ins with a specific token, you need to extract the request ID from the token first.
--
-### Correlation ID
-
-The correlation ID groups sign-ins from the same sign-in session. The identifier was implemented for convenience. Its accuracy is not guaranteed because the value is based on parameters passed by a client.
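For example, if the logs are routed to Log Analytics, a hedged sketch like the following groups the requests that share a correlation ID (column names assume the `SigninLogs` table):

```kusto
// Hedged sketch: group sign-in requests that share a correlation ID.
SigninLogs
| where TimeGenerated > ago(1d)
| summarize Requests = count(), Apps = make_set(AppDisplayName) by CorrelationId
| order by Requests desc
```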
-
-### Sign-in
-
-The sign-in identifier is a string the user provides to Azure AD to identify themselves when attempting to sign in. It's usually a user principal name (UPN), but can be another identifier such as a phone number.
-
-### Authentication requirement
-
-This attribute shows the highest level of authentication needed through all the sign-in steps for the sign-in to succeed. Graph API supports `$filter` (`eq` and `startsWith` operators only).
-
-### Sign-in event types
-
-Indicates the category of the sign-in the event represents. For user sign-ins, the category can be `interactiveUser` or `nonInteractiveUser` and corresponds to the value for the **isInteractive** property on the sign-in resource. For managed identity sign-ins, the category is `managedIdentity`. For service principal sign-ins, the category is `servicePrincipal`. The Azure portal doesn't show this value, but the sign-in event is placed in the tab that matches its sign-in event type. Possible values are listed below (a sketch after the list shows how these map to Log Analytics tables):
-- `interactiveUser`
-- `nonInteractiveUser`
-- `servicePrincipal`
-- `managedIdentity`
-- `unknownFutureValue`
-
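In Azure Monitor Log Analytics, these categories roughly map to separate tables. A hedged sketch, assuming the corresponding diagnostic log categories are enabled (`isfuzzy=true` keeps the query from failing if a table is missing):

```kusto
// Hedged sketch: count recent sign-in events per category table.
// The Type column holds the source table name for each row.
union isfuzzy=true SigninLogs, AADNonInteractiveUserSignInLogs, AADServicePrincipalSignInLogs, AADManagedIdentitySignInLogs
| where TimeGenerated > ago(1d)
| summarize count() by Type
```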
-The Microsoft Graph API supports `$filter` (`eq` operator only).
-
-### User type
-
-The type of a user. Examples include `member`, `guest`, and `external`.
--
-### Cross-tenant access type
-
-This attribute describes the type of cross-tenant access used by the actor to access the resource. Possible values are:
-- `none` - A sign-in event that didn't cross an Azure AD tenant's boundaries.
-- `b2bCollaboration` - A cross-tenant sign-in performed by a guest user using B2B collaboration.
-- `b2bDirectConnect` - A cross-tenant sign-in performed by a user using B2B direct connect.
-- `microsoftSupport` - A cross-tenant sign-in performed by a Microsoft support agent in a Microsoft customer tenant.
-- `serviceProvider` - A cross-tenant sign-in performed by a Cloud Service Provider (CSP) or similar admin on behalf of that CSP's customer in a tenant.
-- `unknownFutureValue` - A sentinel value used by MS Graph to help clients handle changes in enum lists. For more information, see [Best practices for working with Microsoft Graph](/graph/best-practices-concept).
-
-If the sign-in didn't pass the boundaries of a tenant, the value is `none`.
-
-### Conditional Access evaluation
-
-This value shows whether continuous access evaluation (CAE) was applied to the sign-in event. There are multiple sign-in requests for each authentication. Some are shown on the interactive tab, while others are shown on the non-interactive tab. CAE is only displayed as true for one of the requests, and it can be on the interactive tab or non-interactive tab. For more information, see [Monitor and troubleshoot sign-ins with continuous access evaluation in Azure AD](../conditional-access/howto-continuous-access-evaluation-troubleshoot.md).
-
-## Next steps
-
-* [Learn about exporting Azure AD sign-in logs](concept-activity-logs-azure-monitor.md)
-* [Explore the sign-in diagnostic in Azure AD](./howto-use-sign-in-diagnostics.md)
active-directory Reference Powershell Reporting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/reference-powershell-reporting.md
-+ # Azure AD PowerShell cmdlets for reporting
active-directory Tutorial Configure Log Analytics Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/tutorial-configure-log-analytics-workspace.md
+
+ Title: Configure a log analytics workspace in Azure AD
+description: Learn how to configure an Azure AD Log Analytics workspace and run Kusto queries on your identity data.
++++ Last updated : 08/25/2023+++++
+#Customer intent: As an IT admin, I want to set up log analytics so I can analyze the health of my environment.
++
+# Tutorial: Configure a log analytics workspace
++
+In this tutorial, you learn how to:
+
+> [!div class="checklist"]
+> * Configure a log analytics workspace for your audit and sign-in logs
+> * Run queries using the Kusto Query Language (KQL)
+> * Create an alert rule that sends alerts when a specific account is used
+> * Create a custom workbook using the quickstart template
+> * Add a query to an existing workbook template
+
+## Prerequisites
+
+- An Azure subscription with at least one P1 licensed admin. If you don't have an Azure subscription, you can [sign up for a free trial](https://azure.microsoft.com/free/).
+
+- An Azure Active Directory (Azure AD) tenant.
+
+- A user who's at least a **Security Administrator** for the Azure AD tenant.
++
+Familiarize yourself with these articles:
+
+- [Tutorial: Collect and analyze resource logs from an Azure resource](../../azure-monitor/essentials/tutorial-resource-logs.md)
+
+- [How to integrate activity logs with Log Analytics](./howto-integrate-activity-logs-with-log-analytics.md)
+
+- [Manage emergency access account in Azure AD](../roles/security-emergency-access.md)
+
+- [KQL quick reference](/azure/data-explorer/kql-quick-reference)
+
+- [Azure Monitor Workbooks](../../azure-monitor/visualize/workbooks-overview.md)
+++
+## Configure a workspace
++++
+This procedure outlines how to configure a log analytics workspace for your audit and sign-in logs.
+Configuring a log analytics workspace consists of two main steps:
+
+1. Creating a log analytics workspace
+2. Setting diagnostic settings
+
+**To configure a workspace:**
+
+1. Sign in to the [Azure portal](https://portal.azure.com) as at least a [Security Administrator](../roles/permissions-reference.md#security-administrator)
+
+2. Browse to **Log Analytics workspaces**.
+
+ ![Search resources services and docs](./media/tutorial-log-analytics-wizard/search-services.png)
+
+3. Select **Create**.
+
+ ![Screenshot shows the Add button in the log analytics workspaces page.](./media/tutorial-log-analytics-wizard/add.png)
+
+4. On the **Create Log Analytics workspace** page, perform the following steps:
+
+ ![Create log analytics workspace](./media/tutorial-log-analytics-wizard/create-log-analytics-workspace.png)
+
+ 1. Select your subscription.
+
+ 2. Select a resource group.
+
+ 3. In the **Name** textbox, type a name (e.g.: MytestWorkspace1).
+
+ 4. Select your region.
+
+5. Click **Review + Create**.
+
+ ![Review and create](./media/tutorial-log-analytics-wizard/review-create.png)
+
+6. Click **Create** and wait for the deployment to succeed. You may need to refresh the page to see the new workspace.
+
+ ![Create](./media/tutorial-log-analytics-wizard/create-workspace.png)
+
+7. Search for **Azure Active Directory**.
+
+ ![Screenshot shows Azure Active Directory in Azure search.](./media/tutorial-log-analytics-wizard/search-azure-ad.png)
+
+8. In the **Monitoring** section, click **Diagnostic setting**.
+
+ ![Screenshot shows Diagnostic settings selected from Monitoring.](./media/tutorial-log-analytics-wizard/diagnostic-settings.png)
+
+9. On the **Diagnostic settings** page, click **Add diagnostic setting**.
+
+ ![Add diagnostic setting](./media/tutorial-log-analytics-wizard/add-diagnostic-setting.png)
+
+10. On the **Diagnostic setting** page, perform the following steps:
+
+ ![Select diagnostics settings](./media/tutorial-log-analytics-wizard/select-diagnostics-settings.png)
+
+ 1. Under **Category details**, select **AuditLogs** and **SigninLogs**.
+
+ 2. Under **Destination details**, select **Send to Log Analytics**, and then select your new log analytics workspace.
+
+ 3. Click **Save**.
+
+## Run queries
+
+This procedure shows how to run queries using the **Kusto Query Language (KQL)**.
++
+**To run a query:**
++
+1. Sign in to the [Azure portal](https://portal.azure.com) as a global administrator.
+
+2. Search for **Azure Active Directory**.
+
+3. In the **Monitoring** section, click **Logs**.
+
+4. On the **Logs** page, click **Get Started**.
+
+5. In the **Search** textbox, type your query.
+
+6. Click **Run**.
++
+### KQL query examples
+
+Take 10 random entries from the input data:
+
+`SigninLogs | take 10`
+
+Look at the sign-ins where Conditional Access was a success:
+
+`SigninLogs | where ConditionalAccessStatus == "success" | project UserDisplayName, ConditionalAccessStatus`
++
+Count how many successes there have been
+
+`SigninLogs | where ConditionalAccessStatus == "success" | project UserDisplayName, ConditionalAccessStatus | count`
++
+Aggregate count of successful sign-ins by user by day:
+
+`SigninLogs | where ConditionalAccessStatus == "success" | summarize SuccessfulSignIns = count() by UserDisplayName, bin(TimeGenerated, 1d)`
++
+View how many times a user does a certain operation in specific time period:
+
+`AuditLogs | where TimeGenerated > ago(30d) | where OperationName contains "Add member to role" | summarize count() by OperationName, Identity`
++
+Pivot the results on operation name
+
+`AuditLogs | where TimeGenerated > ago(30d) | where OperationName contains "Add member to role" | project OperationName, Identity | evaluate pivot(OperationName)`
++
+Merge together Audit and Sign in Logs using an inner join:
+
+`AuditLogs | where OperationName contains "Add User" | extend UserPrincipalName = tostring(TargetResources[0].userPrincipalName) | project TimeGenerated, UserPrincipalName | join kind = inner (SigninLogs) on UserPrincipalName | summarize arg_min(TimeGenerated, *) by UserPrincipalName | extend SigninDate = TimeGenerated`
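The same join, broken across lines with comments for readability (functionally identical to the one-liner above):

```kusto
// Find each newly added user's earliest subsequent sign-in.
AuditLogs
| where OperationName contains "Add User"
| extend UserPrincipalName = tostring(TargetResources[0].userPrincipalName)
| project TimeGenerated, UserPrincipalName
| join kind = inner (SigninLogs) on UserPrincipalName // match audit rows to sign-in rows
| summarize arg_min(TimeGenerated, *) by UserPrincipalName // keep the earliest match per user
| extend SigninDate = TimeGenerated
```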
++
+View the number of sign-ins by client app type:
+
+`SigninLogs | summarize count() by ClientAppUsed`
+
+Count the sign-ins by day:
+
+`SigninLogs | summarize NumberOfEntries=count() by bin(TimeGenerated, 1d)`
+
+Take 5 random entries and project the columns you wish to see in the results:
+
+`SigninLogs | take 5 | project ClientAppUsed, Identity, ConditionalAccessStatus, Status, TimeGenerated `
++
+Take the top 5 entries in descending order by time and project the columns you wish to see:
+
+`SigninLogs | top 5 by TimeGenerated desc | project ClientAppUsed, Identity, ConditionalAccessStatus, Status, TimeGenerated`
+
+Create a new column by combining the values of two other columns:
+
+`SigninLogs | limit 10 | extend RiskUser = strcat(RiskDetail, "-", Identity) | project RiskUser, ClientAppUsed`
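One more hedged sketch that combines the patterns above, summarizing failed sign-ins by error code (the `SigninLogs` schema stores `ResultType` as a string, with `"0"` indicating success):

```kusto
// Hedged sketch: most common sign-in failures over the past week.
SigninLogs
| where TimeGenerated > ago(7d)
| where ResultType != "0" // "0" indicates a successful sign-in
| summarize Failures = count() by ResultType, ResultDescription
| order by Failures desc
```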
+
+## Create an alert rule
+
+This procedure shows how to send alerts when the breakglass account is used. A variant of the alert query that targets specific accounts by UPN appears after the procedure.
+
+**To create an alert rule:**
+
+1. Sign in to the [Azure portal](https://portal.azure.com) as a global administrator.
+
+2. Search for **Azure Active Directory**.
+
+3. In the **Monitoring** section, click **Logs**.
+
+4. On the **Logs** page, click **Get Started**.
+
+5. In the **Search** textbox, type: `SigninLogs |where UserDisplayName contains "BreakGlass" | project UserDisplayName`
+
+6. Click **Run**.
+
+7. In the toolbar, click **New alert rule**.
+
+ ![New alert rule](./media/tutorial-log-analytics-wizard/new-alert-rule.png)
+
+8. On the **Create alert rule** page, verify that the scope is correct.
+
+9. Under **Condition**, click: **Whenever the average custom log search is greater than `logic undefined` count**
+
+ ![Default condition](./media/tutorial-log-analytics-wizard/default-condition.png)
+
+10. On the **Configure signal logic** page, in the **Alert logic** section, perform the following steps:
+
+ ![Alert logic](./media/tutorial-log-analytics-wizard/alert-logic.png)
+
+ 1. As **Based on**, select **Number of results**.
+
+ 2. As **Operator**, select **Greater than**.
+
+ 3. As **Threshold value**, select **0**.
+
+11. On the **Configure signal logic** page, in the **Evaluated based on** section, perform the following steps:
+
+ ![Evaluated based on](./media/tutorial-log-analytics-wizard/evaluated-based-on.png)
+
+ 1. As **Period (in minutes)**, select **5**.
+
+ 2. As **Frequency (in minutes)**, select **5**.
+
+ 3. Click **Done**.
+
+12. Under **Action group**, click **Select action group**.
+
+ ![Action group](./media/tutorial-log-analytics-wizard/action-group.png)
+
+13. On the **Select an action group to attach to this alert rule**, click **Create action group**.
+
+ ![Create action group](./media/tutorial-log-analytics-wizard/create-action-group.png)
+
+14. On the **Create action group** page, perform the following steps:
+
+ ![Instance details](./media/tutorial-log-analytics-wizard/instance-details.png)
+
+ 1. In the **Action group name** textbox, type **My action group**.
+
+ 2. In the **Display name** textbox, type **My action**.
+
+ 3. Click **Review + create**.
+
+ 4. Click **Create**.
++
+15. Under **Customize action**, perform the following steps:
+
+ ![Customize actions](./media/tutorial-log-analytics-wizard/customize-actions.png)
+
+ 1. Select **Email subject**.
+
+ 2. In the **Subject line** textbox, type: `Breakglass account has been used`
+
+16. Under **Alert rule details**, perform the following steps:
+
+ ![Alert rule details](./media/tutorial-log-analytics-wizard/alert-rule-details.png)
+
+ 1. In the **Alert rule name** textbox, type: `Breakglass account`
+
+ 2. In the **Description** textbox, type: `Your emergency access account has been used`
+
+17. Click **Create alert rule**.
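Matching on a display-name substring is convenient for this tutorial, but in production you may want to pin the alert query to your exact emergency-access accounts. A hedged variant, using placeholder UPNs that you would replace with your own:

```kusto
// Hedged sketch: alert query pinned to specific emergency-access accounts.
// The UPNs below are placeholders; substitute your real break glass accounts.
SigninLogs
| where UserPrincipalName in ("breakglass1@contoso.com", "breakglass2@contoso.com")
| project TimeGenerated, UserPrincipalName, AppDisplayName, IPAddress, ResultType
```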
++
+## Create a custom workbook
+
+This procedure shows how to create a new workbook using the quickstart template.
+
+1. Sign in to the [Azure portal](https://portal.azure.com) as a global administrator.
+
+2. Search for **Azure Active Directory**.
+
+3. In the **Monitoring** section, click **Workbooks**.
+
+ ![Screenshot shows Monitoring in the Azure portal menu with Workbooks selected.](./media/tutorial-log-analytics-wizard/workbooks.png)
+
+4. In the **Quickstart** section, click **Empty**.
+
+ ![Quick start](./media/tutorial-log-analytics-wizard/quick-start.png)
+
+5. Click **Add**.
+
+ ![Add workbook](./media/tutorial-log-analytics-wizard/add-workbook.png)
+
+6. Click **Add text**.
+
+ ![Add text](./media/tutorial-log-analytics-wizard/add-text.png)
++
+7. In the textbox, type: `# Client apps used in the past week`, and then click **Done Editing**.
+
+ ![Workbook text](./media/tutorial-log-analytics-wizard/workbook-text.png)
+
+8. In the new workbook, click **Add**, and then click **Add query**.
+
+ ![Add query](./media/tutorial-log-analytics-wizard/add-query.png)
+
+9. In the query textbox, type: `SigninLogs | where TimeGenerated > ago(7d) | project TimeGenerated, UserDisplayName, ClientAppUsed | summarize count() by ClientAppUsed`
+
+10. Click **Run Query**.
+
+ ![Screenshot shows the Run Query button.](./media/tutorial-log-analytics-wizard/run-workbook-query.png)
+
+11. In the toolbar, under **Visualization**, click **Pie chart**.
+
+ ![Pie chart](./media/tutorial-log-analytics-wizard/pie-chart.png)
+
+12. Click **Done Editing**.
+
+ ![Done editing](./media/tutorial-log-analytics-wizard/done-workbook-editing.png)
+++
+## Add a query to a workbook template
+
+This procedure shows how to add a query to an existing workbook template. The example is based on a query that shows the distribution of Conditional Access successes to failures.
++
+1. Sign in to the [Azure portal](https://portal.azure.com) as a global administrator.
+
+2. Search for **Azure Active Directory**.
+
+3. In the **Monitoring** section, click **Workbooks**.
+
+ ![Screenshot shows Monitoring in the menu with Workbooks selected.](./media/tutorial-log-analytics-wizard/workbooks.png)
+
+4. In the **Conditional Access** section, click **Conditional Access Insights and Reporting**.
+
+ ![Screenshot shows the Conditional Access Insights and Reporting option.](./media/tutorial-log-analytics-wizard/conditional-access-template.png)
+
+5. In the toolbar, click **Edit**.
+
+ ![Screenshot shows the Edit button.](./media/tutorial-log-analytics-wizard/edit-workbook-template.png)
+
+6. In the toolbar, click the three dots, then **Add**, and then **Add query**.
+
+ ![Add workbook query](./media/tutorial-log-analytics-wizard/add-custom-workbook-query.png)
+
+7. In the query textbox, type: `SigninLogs | where TimeGenerated > ago(20d) | where ConditionalAccessPolicies != "[]" | summarize dcount(UserDisplayName) by bin(TimeGenerated, 1d), ConditionalAccessStatus`
+
+8. Click **Run Query**.
+
+ ![Screenshot shows the Run Query button to run this query.](./media/tutorial-log-analytics-wizard/run-workbook-insights-query.png)
+
+9. Click **Time Range**, and then select **Set in query**.
+
+10. Click **Visualization**, and then select **Bar chart**.
+
+11. Click **Advanced Settings**, as chart title, type `Conditional Access status over the last 20 days`, and then click **Done Editing**.
+
+ ![Set chart title](./media/tutorial-log-analytics-wizard/set-chart-title.png)
++++++++
+## Next steps
+
+Advance to the next article to learn more about Azure AD monitoring.
+> [!div class="nextstepaction"]
+> [Monitoring](overview-monitoring.md)
active-directory Tutorial Log Analytics Wizard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/tutorial-log-analytics-wizard.md
- Title: Configure a log analytics workspace in Azure AD
-description: Learn how to configure log analytics.
----- Previously updated : 10/31/2022------
-#Customer intent: As an IT admin, I want to set up log analytics so I can analyze the health of my environment.
---
-# Tutorial: Configure a log analytics workspace
--
-In this tutorial, you learn how to:
-
-> [!div class="checklist"]
-> * Configure a log analytics workspace for your audit and sign-in logs
-> * Run queries using the Kusto Query Language (KQL)
-> * Create an alert rule that sends alerts when a specific account is used
-> * Create a custom workbook using the quickstart template
-> * Add a query to an existing workbook template
-
-## Prerequisites
-- An Azure subscription with at least one P1 licensed admin. If you don't have an Azure subscription, you can [sign up for a free trial](https://azure.microsoft.com/free/).
-
-- An Azure Active Directory (Azure AD) tenant.
-
-- A user who's a Global Administrator or Security Administrator for the Azure AD tenant.
--
-Familiarize yourself with these articles:
-- [Tutorial: Collect and analyze resource logs from an Azure resource](../../azure-monitor/essentials/tutorial-resource-logs.md)
-
-- [How to integrate activity logs with Log Analytics](./howto-integrate-activity-logs-with-log-analytics.md)
-
-- [Manage emergency access account in Azure AD](../roles/security-emergency-access.md)
-
-- [KQL quick reference](/azure/data-explorer/kql-quick-reference)
-
-- [Azure Monitor Workbooks](../../azure-monitor/visualize/workbooks-overview.md)
---
-## Configure a workspace
--
-This procedure outlines how to configure a log analytics workspace for your audit and sign-in logs.
-Configuring a log analytics workspace consists of two main steps:
-
-1. Creating a log analytics workspace
-2. Setting diagnostic settings
-
-**To configure a workspace:**
--
-1. Sign in to the [Azure portal](https://portal.azure.com) as a global administrator.
-
-2. Search for **log analytics workspaces**.
-
- ![Search resources services and docs](./media/tutorial-log-analytics-wizard/search-services.png)
-
-3. On the log analytics workspaces page, click **Add**.
-
- ![Screenshot shows the Add button in the log analytics workspaces page.](./media/tutorial-log-analytics-wizard/add.png)
-
-4. On the **Create Log Analytics workspace** page, perform the following steps:
-
- ![Create log analytics workspace](./media/tutorial-log-analytics-wizard/create-log-analytics-workspace.png)
-
- 1. Select your subscription.
-
- 2. Select a resource group.
-
- 3. In the **Name** textbox, type a name (e.g.: MytestWorkspace1).
-
- 4. Select your region.
-
-5. Click **Review + Create**.
-
- ![Review and create](./media/tutorial-log-analytics-wizard/review-create.png)
-
-6. Click **Create** and wait for the deployment to succeed. You may need to refresh the page to see the new workspace.
-
- ![Create](./media/tutorial-log-analytics-wizard/create-workspace.png)
-
-7. Search for **Azure Active Directory**.
-
- ![Screenshot shows Azure Active Directory in Azure search.](./media/tutorial-log-analytics-wizard/search-azure-ad.png)
-
-8. In **Monitoring** section, click **Diagnostic setting**.
-
- ![Screenshot shows Diagnostic settings selected from Monitoring.](./media/tutorial-log-analytics-wizard/diagnostic-settings.png)
-
-9. On the **Diagnostic settings** page, click **Add diagnostic setting**.
-
- ![Add diagnostic setting](./media/tutorial-log-analytics-wizard/add-diagnostic-setting.png)
-
-10. On the **Diagnostic setting** page, perform the following steps:
-
- ![Select diagnostics settings](./media/tutorial-log-analytics-wizard/select-diagnostics-settings.png)
-
- 1. Under **Category details**, select **AuditLogs** and **SigninLogs**.
-
- 2. Under **Destination details**, select **Send to Log Analytics**, and then select your new log analytics workspace.
-
- 3. Click **Save**.
-
-## Run queries
-
-This procedure shows how to run queries using the **Kusto Query Language (KQL)**.
--
-**To run a query:**
--
-1. Sign in to the [Azure portal](https://portal.azure.com) as a global administrator.
-
-2. Search for **Azure Active Directory**.
-
- ![Screenshot shows Azure Active Directory in Azure search.](./media/tutorial-log-analytics-wizard/search-azure-ad.png)
-
-3. In the **Monitoring** section, click **Logs**.
-
-4. On the **Logs** page, click **Get Started**.
-
-5. In the **Search** textbox, type your query.
-
-6. Click **Run**.
--
-### KQL query examples
-
-Take 10 random entries from the input data:
-
-`SigninLogs | take 10`
-
-Look at the sign-ins where the Conditional Access was a success
-
-`SigninLogs | where ConditionalAccessStatus == "success" | project UserDisplayName, ConditionalAccessStatus`
--
-Count how many successes there have been
-
-`SigninLogs | where ConditionalAccessStatus == "success" | project UserDisplayName, ConditionalAccessStatus | count`
--
-Aggregate count of successful sign-ins by user by day:
-
-`SigninLogs | where ConditionalAccessStatus == "success" | summarize SuccessfulSignIns = count() by UserDisplayName, bin(TimeGenerated, 1d)`
--
-View how many times a user does a certain operation in specific time period:
-
-`AuditLogs | where TimeGenerated > ago(30d) | where OperationName contains "Add member to role" | summarize count() by OperationName, Identity`
--
-Pivot the results on operation name
-
-`AuditLogs | where TimeGenerated > ago(30d) | where OperationName contains "Add member to role" | project OperationName, Identity | evaluate pivot(OperationName)`
--
-Merge together Audit and Sign in Logs using an inner join:
-
-`AuditLogs | where OperationName contains "Add User" | extend UserPrincipalName = tostring(TargetResources[0].userPrincipalName) | project TimeGenerated, UserPrincipalName | join kind = inner (SigninLogs) on UserPrincipalName | summarize arg_min(TimeGenerated, *) by UserPrincipalName | extend SigninDate = TimeGenerated`
--
-View the number of sign-ins by client app type:
-
-`SigninLogs | summarize count() by ClientAppUsed`
-
-Count the sign-ins by day:
-
-`SigninLogs | summarize NumberOfEntries=count() by bin(TimeGenerated, 1d)`
-
-Take 5 random entries and project the columns you wish to see in the results:
-
-`SigninLogs | take 5 | project ClientAppUsed, Identity, ConditionalAccessStatus, Status, TimeGenerated `
--
-Take the top 5 entries in descending order by time and project the columns you wish to see:
-
-`SigninLogs | top 5 by TimeGenerated desc | project ClientAppUsed, Identity, ConditionalAccessStatus, Status, TimeGenerated`
-
-Create a new column by combining the values of two other columns:
-
-`SigninLogs | limit 10 | extend RiskUser = strcat(RiskDetail, "-", Identity) | project RiskUser, ClientAppUsed`
-
-## Create an alert rule
-
-This procedure shows how to send alerts when the breakglass account is used.
-
-**To create an alert rule:**
-
-1. Sign in to the [Azure portal](https://portal.azure.com) as a global administrator.
-
-2. Search for **Azure Active Directory**.
-
- ![Screenshot shows Azure Active Directory in Azure search.](./media/tutorial-log-analytics-wizard/search-azure-ad.png)
-
-3. In the **Monitoring** section, click **Logs**.
-
-4. On the **Logs** page, click **Get Started**.
-
-5. In the **Search** textbox, type: `SigninLogs |where UserDisplayName contains "BreakGlass" | project UserDisplayName`
-
-6. Click **Run**.
-
-7. In the toolbar, click **New alert rule**.
-
- ![New alert rule](./media/tutorial-log-analytics-wizard/new-alert-rule.png)
-
-8. On the **Create alert rule** page, verify that the scope is correct.
-
-9. Under **Condition**, click: **Whenever the average custom log search is greater than `logic undefined` count**
-
- ![Default condition](./media/tutorial-log-analytics-wizard/default-condition.png)
-
-10. On the **Configure signal logic** page, in the **Alert logic** section, perform the following steps:
-
- ![Alert logic](./media/tutorial-log-analytics-wizard/alert-logic.png)
-
- 1. As **Based on**, select **Number of results**.
-
- 2. As **Operator**, select **Greater than**.
-
- 3. As **Threshold value**, select **0**.
-
-11. On the **Configure signal logic** page, in the **Evaluated based on** section, perform the following steps:
-
- ![Evaluated based on](./media/tutorial-log-analytics-wizard/evaluated-based-on.png)
-
- 1. As **Period (in minutes)**, select **5**.
-
- 2. As **Frequency (in minutes)**, select **5**.
-
- 3. Click **Done**.
-
-12. Under **Action group**, click **Select action group**.
-
- ![Action group](./media/tutorial-log-analytics-wizard/action-group.png)
-
-13. On the **Select an action group to attach to this alert rule**, click **Create action group**.
-
- ![Create action group](./media/tutorial-log-analytics-wizard/create-action-group.png)
-
-14. On the **Create action group** page, perform the following steps:
-
- ![Instance details](./media/tutorial-log-analytics-wizard/instance-details.png)
-
- 1. In the **Action group name** textbox, type **My action group**.
-
- 2. In the **Display name** textbox, type **My action**.
-
- 3. Click **Review + create**.
-
- 4. Click **Create**.
--
-15. Under **Customize action**, perform the following steps:
-
- ![Customize actions](./media/tutorial-log-analytics-wizard/customize-actions.png)
-
- 1. Select **Email subject**.
-
- 2. In the **Subject line** textbox, type: `Breakglass account has been used`
-
-16. Under **Alert rule details**, perform the following steps:
-
- ![Alert rule details](./media/tutorial-log-analytics-wizard/alert-rule-details.png)
-
- 1. In the **Alert rule name** textbox, type: `Breakglass account`
-
- 2. In the **Description** textbox, type: `Your emergency access account has been used`
-
-17. Click **Create alert rule**.
--
-## Create a custom workbook
-
-This procedure shows how to create a new workbook using the quickstart template.
----
-1. Sign in to the [Azure portal](https://portal.azure.com) as a global administrator.
-
-2. Search for **Azure Active Directory**.
-
- ![Screenshot shows Azure Active Directory in Azure search.](./media/tutorial-log-analytics-wizard/search-azure-ad.png)
-
-3. In the **Monitoring** section, click **Workbooks**.
-
- ![Screenshot shows Monitoring in the Azure portal menu with Workbooks selected.](./media/tutorial-log-analytics-wizard/workbooks.png)
-
-4. In the **Quickstart** section, click **Empty**.
-
- ![Quick start](./media/tutorial-log-analytics-wizard/quick-start.png)
-
-5. Click **Add**.
-
- ![Add workbook](./media/tutorial-log-analytics-wizard/add-workbook.png)
-
-6. Click **Add text**.
-
- ![Add text](./media/tutorial-log-analytics-wizard/add-text.png)
--
-7. In the textbox, type: `# Client apps used in the past week`, and then click **Done Editing**.
-
- ![Workbook text](./media/tutorial-log-analytics-wizard/workbook-text.png)
-
-8. In the new workbook, click **Add**, and then click **Add query**.
-
- ![Add query](./media/tutorial-log-analytics-wizard/add-query.png)
-
-9. In the query textbox, type: `SigninLogs | where TimeGenerated > ago(7d) | project TimeGenerated, UserDisplayName, ClientAppUsed | summarize count() by ClientAppUsed`
-
-10. Click **Run Query**.
-
- ![Screenshot shows the Run Query button.](./media/tutorial-log-analytics-wizard/run-workbook-query.png)
-
-11. In the toolbar, under **Visualization**, click **Pie chart**.
-
- ![Pie chart](./media/tutorial-log-analytics-wizard/pie-chart.png)
-
-12. Click **Done Editing**.
-
- ![Done editing](./media/tutorial-log-analytics-wizard/done-workbook-editing.png)
---
-## Add a query to a workbook template
-
-This procedure shows how to add a query to an existing workbook template. The example is based on a query that shows the distribution of Conditional Access success to failures.
--
-1. Sign in to the [Azure portal](https://portal.azure.com) as a global administrator.
-
-2. Search for **Azure Active Directory**.
-
- ![Screenshot shows Azure Active Directory in Azure search.](./media/tutorial-log-analytics-wizard/search-azure-ad.png)
-
-3. In the **Monitoring** section, click **Workbooks**.
-
- ![Screenshot shows Monitoring in the menu with Workbooks selected.](./media/tutorial-log-analytics-wizard/workbooks.png)
-
-4. In the **Conditional Access** section, click **Conditional Access Insights and Reporting**.
-
- ![Screenshot shows the Conditional Access Insights and Reporting option.](./media/tutorial-log-analytics-wizard/conditional-access-template.png)
-
-5. In the toolbar, click **Edit**.
-
- ![Screenshot shows the Edit button.](./media/tutorial-log-analytics-wizard/edit-workbook-template.png)
-
-6. In the toolbar, click the three dots, then **Add**, and then **Add query**.
-
- ![Add workbook query](./media/tutorial-log-analytics-wizard/add-custom-workbook-query.png)
-
-7. In the query textbox, type: `SigninLogs | where TimeGenerated > ago(20d) | where ConditionalAccessPolicies != "[]" | summarize dcount(UserDisplayName) by bin(TimeGenerated, 1d), ConditionalAccessStatus`
-
-8. Click **Run Query**.
-
- ![Screenshot shows the Run Query button to run this query.](./media/tutorial-log-analytics-wizard/run-workbook-insights-query.png)
-
-9. Click **Time Range**, and then select **Set in query**.
-
-10. Click **Visualization**, and then select **Bar chart**.
-
-11. Click **Advanced Settings**, as chart title, type `Conditional Access status over the last 20 days`, and then click **Done Editing**.
-
- ![Set chart title](./media/tutorial-log-analytics-wizard/set-chart-title.png)
--------
-## Next steps
-
-Advance to the next article to learn how to manage device identities by using the Azure portal.
-> [!div class="nextstepaction"]
-> [Monitoring](overview-monitoring.md)
active-directory Admin Units Assign Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/admin-units-assign-roles.md
Last updated 11/15/2022 -+
active-directory Admin Units Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/admin-units-manage.md
Last updated 06/09/2023 -+
> [!IMPORTANT] > Restricted management administrative units are currently in PREVIEW.
-> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+> See the [Product Terms](https://aka.ms/EntraPreviewsTermsOfUse) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
Administrative units let you subdivide your organization into any unit that you want, and then assign specific administrators that can manage only the members of that unit. For example, you could use administrative units to delegate permissions to administrators of each school at a large university, so they could control access, manage users, and set policies only in the School of Engineering.
active-directory Admin Units Members Dynamic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/admin-units-members-dynamic.md
Last updated 05/13/2022 -+
> [!IMPORTANT] > Dynamic membership rules for administrative units are currently in PREVIEW.
-> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+> See the [Product Terms](https://aka.ms/EntraPreviewsTermsOfUse) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
You can add or remove users or devices for administrative units manually. With this preview, you can add or remove users or devices for administrative units dynamically using rules. This article describes how to create administrative units with dynamic membership rules using the Azure portal, PowerShell, or Microsoft Graph API.
active-directory Admin Units Members List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/admin-units-members-list.md
Last updated 06/09/2023 -+
active-directory Admin Units Members Remove https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/admin-units-members-remove.md
Last updated 06/09/2023 -+
active-directory Admin Units Restricted Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/admin-units-restricted-management.md
> [!IMPORTANT] > Restricted management administrative units are currently in PREVIEW.
-> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+> See the [Product Terms](https://aka.ms/EntraPreviewsTermsOfUse) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
Restricted management administrative units allow you to protect specific objects in your tenant from modification by anyone other than a specific set of administrators that you designate. This allows you to meet security or compliance requirements without having to remove tenant-level role assignments from your administrators.
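A one-line sketch, assuming the preview flag is settable through `-AdditionalProperties` (it can only be set when the unit is created):

```powershell
# Sketch: restricted management administrative unit; the flag cannot be changed later.
New-MgDirectoryAdministrativeUnit -DisplayName "Protected service accounts" `
    -AdditionalProperties @{ isMemberManagementRestricted = $true }
```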
active-directory Assign Roles Different Scopes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/assign-roles-different-scopes.md
Last updated 02/04/2022 -+
active-directory Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/best-practices.md
Previously updated : 03/28/2021 Last updated : 09/01/2023
This article describes some of the best practices for using Azure Active Directory role-based access control (Azure AD RBAC). These best practices are derived from our experience with Azure AD RBAC and the experiences of customers like yourself. We encourage you to also read our detailed security guidance at [Securing privileged access for hybrid and cloud deployments in Azure AD](security-planning.md).
-## 1. Manage to least privilege
+## 1. Apply the principle of least privilege
When planning your access control strategy, it's a best practice to manage to least privilege. Least privilege means you grant your administrators exactly the permission they need to do their job. There are three aspects to consider when you assign a role to your administrators: a specific set of permissions, over a specific scope, for a specific period of time. Avoid assigning broader roles at broader scopes even if it initially seems more convenient to do so. By limiting roles and scopes, you limit what resources are at risk if the security principal is ever compromised. Azure AD RBAC supports over 65 [built-in roles](permissions-reference.md). There are Azure AD roles to manage directory objects like users, groups, and applications, and also to manage Microsoft 365 services like Exchange, SharePoint, and Intune. To better understand Azure AD built-in roles, see [Understand roles in Azure Active Directory](concept-understand-roles.md). If there isn't a built-in role that meets your need, you can create your own [custom roles](custom-create.md).
Follow these steps to help you find the right role.
![Roles and administrators page in Azure AD with Service filter open](./media/best-practices/roles-administrators.png)
-1. Refer to the [Azure AD built-in roles](permissions-reference.md) documentation. Permissions associated with each role are listed together for better readability. To understand the structure and meaning of role permissions, see [How to understand role permissions](permissions-reference.md#how-to-understand-role-permissions).
+1. Refer to the [Azure AD built-in roles](permissions-reference.md) documentation. Permissions associated with each role are listed together for better readability. To understand the structure and meaning of role permissions, see [How to understand role permissions](privileged-roles-permissions.md#how-to-understand-role-permissions).
1. Refer to the [Least privileged role by task](delegate-by-task.md) documentation.
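To supplement those steps, a minimal sketch that ranks built-in roles by how many actions they grant, which can help spot a least-privileged candidate (assumes Microsoft Graph PowerShell and read consent for role management):

```powershell
# Sketch: list built-in roles, smallest permission set first.
Connect-MgGraph -Scopes "RoleManagement.Read.Directory"

Get-MgRoleManagementDirectoryRoleDefinition -All |
    Where-Object { $_.IsBuiltIn } |
    Select-Object DisplayName, @{
        Name       = 'ActionCount'
        Expression = { ($_.RolePermissions.AllowedResourceActions | Measure-Object).Count }
    } |
    Sort-Object ActionCount
```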
For information about access reviews for roles, see [Create an access review of
## 5. Limit the number of Global Administrators to less than 5
-As a best practice, Microsoft recommends that you assign the Global Administrator role to **fewer than five** people in your organization. Global Administrators hold keys to the kingdom, and it is in your best interest to keep the attack surface low. As stated previously, all of these accounts should be protected with multi-factor authentication.
+As a best practice, Microsoft recommends that you assign the Global Administrator role to **fewer than five** people in your organization. Global Administrators essentially have unrestricted access, and it is in your best interest to keep the attack surface low. As stated previously, all of these accounts should be protected with multi-factor authentication.
By default, when a user signs up for a Microsoft cloud service, an Azure AD tenant is created and the user is made a member of the Global Administrators role. Users who are assigned the Global Administrator role can read and modify every administrative setting in your Azure AD organization. With a few exceptions, Global Administrators can also read and modify all configuration settings in your Microsoft 365 organization. Global Administrators also have the ability to elevate their access to read data. Microsoft recommends that you keep two break glass accounts that are permanently assigned to the Global Administrator role. Make sure that these accounts don't require the same multi-factor authentication mechanism as your normal administrative accounts to sign in, as described in [Manage emergency access accounts in Azure AD](../roles/security-emergency-access.md).
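A small sketch for auditing the current count (assumes Microsoft Graph PowerShell; the template ID is Global Administrator's, as listed later in this digest, and eligible PIM assignments are not included):

```powershell
# Sketch: enumerate principals with an active Global Administrator assignment.
$gaTemplateId = "62e90394-69f5-4237-9190-012177145e10"
$assignments  = Get-MgRoleManagementDirectoryRoleAssignment -Filter "roleDefinitionId eq '$gaTemplateId'"
"Global Administrator assignments: $($assignments.Count)"
$assignments | ForEach-Object { Get-MgDirectoryObject -DirectoryObjectId $_.PrincipalId }
```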
-## 6. Use groups for Azure AD role assignments and delegate the role assignment
+## 6. Limit the number of privileged role assignments to less than 10
+
+Some roles include privileged permissions, such as the ability to update credentials. Since these roles can potentially lead to elevation of privilege, you should limit the use of these privileged role assignments to **fewer than 10** in your organization. You can identify roles, permissions, and role assignments that are privileged by looking for the **PRIVILEGED** label. For more information, see [Privileged roles and permissions in Azure AD](privileged-roles-permissions.md).
+
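A hedged sketch for taking that inventory, assuming the `isPrivileged` filter described in the linked article is available on the tenant's v1.0 endpoint:

```powershell
# Sketch: count active assignments for every role definition flagged PRIVILEGED.
$uri = "https://graph.microsoft.com/v1.0/roleManagement/directory/roleDefinitions?`$filter=isPrivileged eq true"
$privRoles = Invoke-MgGraphRequest -Method GET -Uri $uri

foreach ($role in $privRoles.value) {
    $assignments = Get-MgRoleManagementDirectoryRoleAssignment -Filter "roleDefinitionId eq '$($role.id)'"
    "{0}: {1} assignments" -f $role.displayName, $assignments.Count
}
```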
+## 7. Use groups for Azure AD role assignments and delegate the role assignment
If you have an external governance system that takes advantage of groups, then you should consider assigning roles to Azure AD groups, instead of individual users. You can also manage role-assignable groups in PIM to ensure that there are no standing owners or members in these privileged groups. For more information, see [Privileged Identity Management (PIM) for Groups (preview)](../privileged-identity-management/concept-pim-for-groups.md). You can assign an owner to role-assignable groups. That owner decides who is added to or removed from the group, so indirectly, decides who gets the role assignment. In this way, a Global Administrator or Privileged Role Administrator can delegate role management on a per-role basis by using groups. For more information, see [Use Azure AD groups to manage role assignments](groups-concept.md).
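A minimal sketch of that pattern (names are illustrative; `IsAssignableToRole` can only be set at creation, and the role shown is Helpdesk Administrator from the table later in this digest):

```powershell
# Sketch: create a role-assignable group, then assign a role to the group itself.
$group = New-MgGroup -DisplayName "Helpdesk Admins" -MailEnabled:$false `
    -MailNickname "helpdeskadmins" -SecurityEnabled -IsAssignableToRole

New-MgRoleManagementDirectoryRoleAssignment -DirectoryScopeId '/' `
    -RoleDefinitionId "729827e3-9c14-49f7-bb1b-9608f156bbb8" `
    -PrincipalId $group.Id
```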
-## 7. Activate multiple roles at once using PIM for Groups
+## 8. Activate multiple roles at once using PIM for Groups
It may be the case that an individual has five or six eligible assignments to Azure AD roles through PIM. They will have to activate each role individually, which can reduce productivity. Worse still, they can also have tens or hundreds of Azure resources assigned to them, which aggravates the problem.
In this case, you should use [Privileged Identity Management (PIM) for Groups (p
![PIM for Groups diagram showing activating multiple roles at once](./media/best-practices/pim-for-groups.png)
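A hedged sketch of a single activation (the cmdlet maps to the PIM for Groups `assignmentScheduleRequests` API; both IDs are placeholders):

```powershell
# Sketch: one self-activation of PIM group membership activates every role
# assigned to that group.
$params = @{
    AccessId      = "member"
    PrincipalId   = "<your-user-object-id>"
    GroupId       = "<pim-managed-group-id>"
    Action        = "selfActivate"
    Justification = "On-call shift"
    ScheduleInfo  = @{
        StartDateTime = Get-Date
        Expiration    = @{ Type = "AfterDuration"; Duration = "PT8H" }
    }
}
New-MgIdentityGovernancePrivilegedAccessGroupAssignmentScheduleRequest @params
```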
-## 8. Use cloud native accounts for Azure AD roles
+## 9. Use cloud native accounts for Azure AD roles
Avoid using on-premises synced accounts for Azure AD role assignments. If your on-premises account is compromised, it can compromise your Azure AD resources as well.
active-directory Concept Understand Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/concept-understand-roles.md
Last updated 04/22/2022 -+
active-directory Custom Assign Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/custom-assign-powershell.md
Last updated 05/10/2022 -+ # Assign custom roles with resource scope using PowerShell in Azure Active Directory
active-directory Custom Available Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/custom-available-permissions.md
Last updated 11/04/2020 -+
active-directory Custom Consent Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/custom-consent-permissions.md
Last updated 01/31/2023 -+ # App consent permissions for custom roles in Azure Active Directory
active-directory Custom Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/custom-create.md
Last updated 12/09/2022 -+ # Create and assign a custom role in Azure Active Directory
Your custom role will show up in the list of available roles to assign.
### Connect to Azure
-To connect to Azure Active Directory, use the following command:
+To connect to Microsoft Graph PowerShell, use the following command:
``` PowerShell
-Connect-AzureAD
+Connect-MgGraph -Scopes "RoleManagement.ReadWrite.Directory"
``` ### Create the custom role
$allowedResourceAction =
"microsoft.directory/applications/basic/update", "microsoft.directory/applications/credentials/update" )
-$rolePermissions = @{'allowedResourceActions'= $allowedResourceAction}
+$rolePermissions = @( @{ AllowedResourceActions = $allowedResourceAction } )
# Create new custom admin role
-$customAdmin = New-AzureADMSRoleDefinition -RolePermissions $rolePermissions -DisplayName $displayName -Description $description -TemplateId $templateId -IsEnabled $true
+$customAdmin = New-MgRoleManagementDirectoryRoleDefinition -RolePermissions $rolePermissions -DisplayName $displayName -IsEnabled -Description $description -TemplateId $templateId
``` ### Assign the custom role using PowerShell
Assign the role using the below PowerShell script:
``` PowerShell # Get the user and role definition you want to link
-$user = Get-AzureADUser -Filter "userPrincipalName eq 'cburl@f128.info'"
-$roleDefinition = Get-AzureADMSRoleDefinition -Filter "displayName eq 'Application Support Administrator'"
+$user = Get-MgUser -Filter "userPrincipalName eq 'cburl@f128.info'"
+$roleDefinition = Get-MgRoleManagementDirectoryRoleDefinition -Filter "DisplayName eq 'Application Support Administrator'"
# Get app registration and construct resource scope for assignment.
-$appRegistration = Get-AzureADApplication -Filter "displayName eq 'f/128 Filter Photos'"
+$appRegistration = Get-MgApplication -Filter "Displayname eq 'POSTMAN'"
$resourceScope = '/' + $appRegistration.Id # Create a scoped role assignment
-$roleAssignment = New-AzureADMSRoleAssignment -DirectoryScopeId $resourceScope -RoleDefinitionId $roleDefinition.Id -PrincipalId $user.objectId
+$roleAssignment = New-MgRoleManagementDirectoryRoleAssignment -DirectoryScopeId $resourceScope -RoleDefinitionId $roleDefinition.Id -PrincipalId $user.Id
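# Sketch (assumption, not part of the original script): confirm the scoped assignment
Get-MgRoleManagementDirectoryRoleAssignment -Filter "principalId eq '$($user.Id)'" |
    Format-List Id, RoleDefinitionId, DirectoryScopeId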
``` ## Create a role with the Microsoft Graph API
active-directory Custom Enterprise App Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/custom-enterprise-app-permissions.md
Last updated 01/31/2023 -+ # Enterprise application permissions for custom roles in Azure Active Directory
active-directory Custom Enterprise Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/custom-enterprise-apps.md
Last updated 02/04/2022 -+
active-directory Custom Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/custom-overview.md
Last updated 04/10/2023 -+
active-directory Groups Assign Role https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/groups-assign-role.md
Last updated 04/10/2023 -+
active-directory Groups Create Eligible https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/groups-create-eligible.md
Last updated 04/10/2023 -+
active-directory Groups Remove Assignment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/groups-remove-assignment.md
Last updated 02/04/2022 -+
active-directory Groups View Assignments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/groups-view-assignments.md
Last updated 08/08/2023 -+
active-directory Manage Roles Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/manage-roles-portal.md
Last updated 02/06/2023 -+
Follow these steps to assign Azure AD roles using PowerShell.
1. Open a PowerShell window and use [Import-Module](/powershell/module/microsoft.powershell.core/import-module) to import the Microsoft.Graph.Identity.Governance module. For more information, see [Prerequisites to use PowerShell or Graph Explorer](prerequisites.md). ```powershell
- Import-Module -Name AzureADPreview -Force
+ Import-Module -Name Microsoft.Graph.Identity.Governance -Force
```
-1. In a PowerShell window, use [Connect-AzureAD](/powershell/module/azuread/connect-azuread) to sign in to your tenant.
+1. In a PowerShell window, use [Connect-MgGraph](/powershell/microsoftgraph/authentication-commands?view=graph-powershell-1.0&preserve-view=true) to sign in to your tenant.
```powershell
- Connect-AzureAD
+ Connect-MgGraph -Scopes "RoleManagement.ReadWrite.Directory"
```
-1. Use [Get-AzureADUser](/powershell/module/azuread/get-azureaduser) to get the user you want to assign a role to.
+1. Use [Get-MgUser](/powershell/module/microsoft.graph.users/get-mguser?view=graph-powershell-1.0&preserve-view=true) to get the user you want to assign a role to.
```powershell
- $user = Get-AzureADUser -Filter "userPrincipalName eq 'user@contoso.com'"
+ $user = Get-MgUser -Filter "userPrincipalName eq 'johndoe@contoso.com'"
``` ### Assign a role
-1. Use [Get-AzureADMSRoleDefinition](/powershell/module/azuread/get-azureadmsroledefinition) to get the role you want to assign.
+1. Use [Get-MgRoleManagementDirectoryRoleDefinition](/powershell/module/microsoft.graph.identity.governance/get-mgrolemanagementdirectoryroledefinition?view=graph-powershell-1.0&preserve-view=true) to get the role you want to assign.
```powershell
- $roleDefinition = Get-AzureADMSRoleDefinition -Filter "displayName eq 'Billing Administrator'"
+ $roledefinition = Get-MgRoleManagementDirectoryRoleDefinition -Filter "DisplayName eq 'Billing Administrator'"
```
-1. Use [New-AzureADMSRoleAssignment](/powershell/module/azuread/new-azureadmsroleassignment) to assign the role.
+1. Use [New-MgRoleManagementDirectoryRoleAssignment](/powershell/module/microsoft.graph.identity.governance/new-mgrolemanagementdirectoryroleassignment?view=graph-powershell-1.0&preserve-view=true) to assign the role.
```powershell
- $roleAssignment = New-AzureADMSRoleAssignment -DirectoryScopeId '/' -RoleDefinitionId $roleDefinition.Id -PrincipalId $user.objectId
+ $roleassignment = New-MgRoleManagementDirectoryRoleAssignment -DirectoryScopeId '/' -RoleDefinitionId $roledefinition.Id -PrincipalId $user.Id
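    # '/' scopes the assignment tenant-wide; an administrative unit or app registration scope can be passed instead to narrow it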
``` ### Assign a role as eligible using PIM
Follow these steps to assign Azure AD roles using PowerShell.
If PIM is enabled, you have additional capabilities, such as making a user eligible for a role assignment or defining the start and end time for a role assignment. These capabilities use a different set of PowerShell commands. For more information about using PowerShell and PIM, see [PowerShell for Azure AD roles in Privileged Identity Management](../privileged-identity-management/powershell-for-azure-ad-roles.md).
-1. Use [Get-AzureADMSRoleDefinition](/powershell/module/azuread/get-azureadmsroledefinition) to get the role you want to assign.
+1. Use [Get-MgRoleManagementDirectoryRoleDefinition](/powershell/module/microsoft.graph.identity.governance/get-mgrolemanagementdirectoryroledefinition?view=graph-powershell-1.0&preserve-view=true) to get the role you want to assign.
```powershell
- $roleDefinition = Get-AzureADMSRoleDefinition -Filter "displayName eq 'Billing Administrator'"
+ $roledefinition = Get-MgRoleManagementDirectoryRoleDefinition -Filter "DisplayName eq 'Billing Administrator'"
```
-1. Use [Get-AzureADMSPrivilegedResource](/powershell/module/azuread/get-azureadmsprivilegedresource) to get the privileged resource. In this case, your tenant.
+1. Create a hash table holding the attributes required for the assignment. The `PrincipalId` value is the object ID of the user to whom you want to assign the role. In this example, the eligible assignment is valid for only **10 hours**.
```powershell
- $aadTenant = Get-AzureADMSPrivilegedResource -ProviderId aadRoles
- ```
-
-1. Use [New-Object](/powershell/module/microsoft.powershell.utility/new-object) to create a new `AzureADMSPrivilegedSchedule` object to define the start and end time of the role assignment.
-
- ```powershell
- $schedule = New-Object Microsoft.Open.MSGraph.Model.AzureADMSPrivilegedSchedule
- $schedule.Type = "Once"
- $schedule.StartDateTime = (Get-Date).ToUniversalTime().ToString("yyyy-MM-ddTHH:mm:ss.fffZ")
- $schedule.EndDateTime = "2021-07-25T20:00:00.000Z"
+ $params = @{
+ "PrincipalId" = "053a6a7e-4a75-48bc-8324-d70f50ec0d91"
+ "RoleDefinitionId" = "b0f54661-2d74-4c50-afa3-1ec803f12efe"
+ "Justification" = "Add eligible assignment"
+ "DirectoryScopeId" = "/"
+ "Action" = "AdminAssign"
+ "ScheduleInfo" = @{
+ "StartDateTime" = Get-Date
+ "Expiration" = @{
+ "Type" = "AfterDuration"
+ "Duration" = "PT10H"
+ }
+ }
+ }
```
-1. Use [Open-AzureADMSPrivilegedRoleAssignmentRequest](/powershell/module/azuread/open-azureadmsprivilegedroleassignmentrequest) to assign the role as eligible.
+1. Use [New-MgRoleManagementDirectoryRoleEligibilityScheduleRequest](/powershell/module/microsoft.graph.identity.governance/new-mgrolemanagementdirectoryroleeligibilityschedulerequest?view=graph-powershell-1.0&preserve-view=true) to assign the role as eligible. Once the role has been assigned, it appears in the Azure portal under **Privileged Identity Management > Azure AD roles > Assignments > Eligible assignments**.
```powershell
- $roleAssignmentEligible = Open-AzureADMSPrivilegedRoleAssignmentRequest -ProviderId 'aadRoles' -ResourceId $aadTenant.Id -RoleDefinitionId $roleDefinition.Id -SubjectId $user.objectId -Type 'AdminAdd' -AssignmentState 'Eligible' -schedule $schedule -reason "Review billing info"
+ New-MgRoleManagementDirectoryRoleEligibilityScheduleRequest -BodyParameter $params | Format-List Id, Status, Action, AppScopeId, DirectoryScopeId, RoleDefinitionId, IsValidationOnly, Justification, PrincipalId, CompletedDateTime, CreatedDateTime
``` ## Microsoft Graph API
active-directory Permissions Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/permissions-reference.md
Previously updated : 07/26/2023 Last updated : 08/29/2023
This article lists the Azure AD built-in roles you can assign to allow managemen
> [!div class="mx-tableFixed"] > | Role | Description | Template ID | > | | | |
-> | [Application Administrator](#application-administrator) | Can create and manage all aspects of app registrations and enterprise apps. | 9b895d92-2cd3-44c7-9d02-a6ac2d5ea5c3 |
-> | [Application Developer](#application-developer) | Can create application registrations independent of the 'Users can register applications' setting. | cf1c38e5-3621-4004-a7cb-879624dced7c |
+> | [Application Administrator](#application-administrator) | Can create and manage all aspects of app registrations and enterprise apps.<br/>[![Privileged label icon.](./medi) | 9b895d92-2cd3-44c7-9d02-a6ac2d5ea5c3 |
+> | [Application Developer](#application-developer) | Can create application registrations independent of the 'Users can register applications' setting.<br/>[![Privileged label icon.](./medi) | cf1c38e5-3621-4004-a7cb-879624dced7c |
> | [Attack Payload Author](#attack-payload-author) | Can create attack payloads that an administrator can initiate later. | 9c6df0f2-1e7c-4dc3-b195-66dfbd24aa8f | > | [Attack Simulation Administrator](#attack-simulation-administrator) | Can create and manage all aspects of attack simulation campaigns. | c430b396-e693-46cc-96f3-db01bf8bb62a | > | [Attribute Assignment Administrator](#attribute-assignment-administrator) | Assign custom security attribute keys and values to supported Azure AD objects. | 58a13ea3-c632-46ae-9ee0-9c0d43cd7f3d | > | [Attribute Assignment Reader](#attribute-assignment-reader) | Read custom security attribute keys and values for supported Azure AD objects. | ffd52fa5-98dc-465c-991d-fc073eb59f8f | > | [Attribute Definition Administrator](#attribute-definition-administrator) | Define and manage the definition of custom security attributes. | 8424c6f0-a189-499e-bbd0-26c1753c96d4 | > | [Attribute Definition Reader](#attribute-definition-reader) | Read the definition of custom security attributes. | 1d336d2c-4ae8-42ef-9711-b3604ce3fc2c |
-> | [Authentication Administrator](#authentication-administrator) | Can access to view, set and reset authentication method information for any non-admin user. | c4e39bd9-1100-46d3-8c65-fb160da0071f |
+> | [Authentication Administrator](#authentication-administrator) | Can access to view, set and reset authentication method information for any non-admin user.<br/>[![Privileged label icon.](./medi) | c4e39bd9-1100-46d3-8c65-fb160da0071f |
> | [Authentication Policy Administrator](#authentication-policy-administrator) | Can create and manage the authentication methods policy, tenant-wide MFA settings, password protection policy, and verifiable credentials. | 0526716b-113d-4c15-b2c8-68e3c22b9f80 | > | [Azure AD Joined Device Local Administrator](#azure-ad-joined-device-local-administrator) | Users assigned to this role are added to the local administrators group on Azure AD-joined devices. | 9f06204d-73c1-4d4c-880a-6edb90606fd8 | > | [Azure DevOps Administrator](#azure-devops-administrator) | Can manage Azure DevOps policies and settings. | e3973bdf-4987-49ae-837a-ba8e231c7286 | > | [Azure Information Protection Administrator](#azure-information-protection-administrator) | Can manage all aspects of the Azure Information Protection product. | 7495fdc4-34c4-4d15-a289-98788ce399fd |
-> | [B2C IEF Keyset Administrator](#b2c-ief-keyset-administrator) | Can manage secrets for federation and encryption in the Identity Experience Framework (IEF). | aaf43236-0c0d-4d5f-883a-6955382ac081 |
+> | [B2C IEF Keyset Administrator](#b2c-ief-keyset-administrator) | Can manage secrets for federation and encryption in the Identity Experience Framework (IEF).<br/>[![Privileged label icon.](./medi) | aaf43236-0c0d-4d5f-883a-6955382ac081 |
> | [B2C IEF Policy Administrator](#b2c-ief-policy-administrator) | Can create and manage trust framework policies in the Identity Experience Framework (IEF). | 3edaf663-341e-4475-9f94-5c398ef6c070 | > | [Billing Administrator](#billing-administrator) | Can perform common billing related tasks like updating payment information. | b0f54661-2d74-4c50-afa3-1ec803f12efe | > | [Cloud App Security Administrator](#cloud-app-security-administrator) | Can manage all aspects of the Defender for Cloud Apps product. | 892c5842-a9a6-463a-8041-72aa08ca3cf6 |
-> | [Cloud Application Administrator](#cloud-application-administrator) | Can create and manage all aspects of app registrations and enterprise apps except App Proxy. | 158c047a-c907-4556-b7ef-446551a6b5f7 |
-> | [Cloud Device Administrator](#cloud-device-administrator) | Limited access to manage devices in Azure AD. | 7698a772-787b-4ac8-901f-60d6b08affd2 |
+> | [Cloud Application Administrator](#cloud-application-administrator) | Can create and manage all aspects of app registrations and enterprise apps except App Proxy.<br/>[![Privileged label icon.](./medi) | 158c047a-c907-4556-b7ef-446551a6b5f7 |
+> | [Cloud Device Administrator](#cloud-device-administrator) | Limited access to manage devices in Azure AD.<br/>[![Privileged label icon.](./medi) | 7698a772-787b-4ac8-901f-60d6b08affd2 |
> | [Compliance Administrator](#compliance-administrator) | Can read and manage compliance configuration and reports in Azure AD and Microsoft 365. | 17315797-102d-40b4-93e0-432062caca18 | > | [Compliance Data Administrator](#compliance-data-administrator) | Creates and manages compliance content. | e6d1a23a-da11-4be4-9570-befc86d067a7 |
-> | [Conditional Access Administrator](#conditional-access-administrator) | Can manage Conditional Access capabilities. | b1be1c3e-b65d-4f19-8427-f6fa0d97feb9 |
+> | [Conditional Access Administrator](#conditional-access-administrator) | Can manage Conditional Access capabilities.<br/>[![Privileged label icon.](./medi) | b1be1c3e-b65d-4f19-8427-f6fa0d97feb9 |
> | [Customer LockBox Access Approver](#customer-lockbox-access-approver) | Can approve Microsoft support requests to access customer organizational data. | 5c4f9dcd-47dc-4cf7-8c9a-9e4207cbfc91 | > | [Desktop Analytics Administrator](#desktop-analytics-administrator) | Can access and manage Desktop management tools and services. | 38a96431-2bdf-4b4c-8b6e-5d3d8abac1a4 | > | [Directory Readers](#directory-readers) | Can read basic directory information. Commonly used to grant directory read access to applications and guests. | 88d8e3e3-8f55-4a1e-953a-9b9898b8876b |
-> | [Directory Synchronization Accounts](#directory-synchronization-accounts) | Only used by Azure AD Connect service. | d29b2b05-8046-44ba-8758-1e26182fcf32 |
-> | [Directory Writers](#directory-writers) | Can read and write basic directory information. For granting access to applications, not intended for users. | 9360feb5-f418-4baa-8175-e2a00bac4301 |
+> | [Directory Synchronization Accounts](#directory-synchronization-accounts) | Only used by Azure AD Connect service.<br/>[![Privileged label icon.](./medi) | d29b2b05-8046-44ba-8758-1e26182fcf32 |
+> | [Directory Writers](#directory-writers) | Can read and write basic directory information. For granting access to applications, not intended for users.<br/>[![Privileged label icon.](./medi) | 9360feb5-f418-4baa-8175-e2a00bac4301 |
> | [Domain Name Administrator](#domain-name-administrator) | Can manage domain names in cloud and on-premises. | 8329153b-31d0-4727-b945-745eb3bc5f31 | > | [Dynamics 365 Administrator](#dynamics-365-administrator) | Can manage all aspects of the Dynamics 365 product. | 44367163-eba1-44c3-98af-f5787879f96a | > | [Edge Administrator](#edge-administrator) | Manage all aspects of Microsoft Edge. | 3f1acade-1e04-4fbc-9b69-f0302cd84aef |
This article lists the Azure AD built-in roles you can assign to allow managemen
> | [External ID User Flow Attribute Administrator](#external-id-user-flow-attribute-administrator) | Can create and manage the attribute schema available to all user flows. | 0f971eea-41eb-4569-a71e-57bb8a3eff1e | > | [External Identity Provider Administrator](#external-identity-provider-administrator) | Can configure identity providers for use in direct federation. | be2f45a1-457d-42af-a067-6ec1fa63bc45 | > | [Fabric Administrator](#fabric-administrator) | Can manage all aspects of the Fabric and Power BI products. | a9ea8996-122f-4c74-9520-8edcd192826c |
-> | [Global Administrator](#global-administrator) | Can manage all aspects of Azure AD and Microsoft services that use Azure AD identities. | 62e90394-69f5-4237-9190-012177145e10 |
-> | [Global Reader](#global-reader) | Can read everything that a Global Administrator can, but not update anything. | f2ef992c-3afb-46b9-b7cf-a126ee74c451 |
-> | [Global Secure Access Administrator](#global-secure-access-administrator) | Create and manage all aspects of Microsoft Entra Internet Access and Microsoft Entra Private Access, including managing access to public and private endpoints. | ac434307-12b9-4fa1-a708-88bf58caabc1 |
+> | [Global Administrator](#global-administrator) | Can manage all aspects of Azure AD and Microsoft services that use Azure AD identities.<br/>[![Privileged label icon.](./medi) | 62e90394-69f5-4237-9190-012177145e10 |
+> | [Global Reader](#global-reader) | Can read everything that a Global Administrator can, but not update anything.<br/>[![Privileged label icon.](./medi) | f2ef992c-3afb-46b9-b7cf-a126ee74c451 |
+> | [Global Secure Access Administrator](#global-secure-access-administrator) | Create and manage all aspects of Microsoft Entra Internet Access and Microsoft Entra Private Access, including managing access to public and private endpoints. | ac434307-12b9-4fa1-a708-88bf58caabc1 |
> | [Groups Administrator](#groups-administrator) | Members of this role can create/manage groups, create/manage groups settings like naming and expiration policies, and view groups activity and audit reports. | fdd7a751-b60b-444a-984c-02652fe8fa1c | > | [Guest Inviter](#guest-inviter) | Can invite guest users independent of the 'members can invite guests' setting. | 95e79109-95c0-4d8e-aee3-d01accf2d47b |
-> | [Helpdesk Administrator](#helpdesk-administrator) | Can reset passwords for non-administrators and Helpdesk Administrators. | 729827e3-9c14-49f7-bb1b-9608f156bbb8 |
-> | [Hybrid Identity Administrator](#hybrid-identity-administrator) | Can manage AD to Azure AD cloud provisioning, Azure AD Connect, Pass-through Authentication (PTA), Password hash synchronization (PHS), Seamless Single sign-on (Seamless SSO), and federation settings. | 8ac3fc64-6eca-42ea-9e69-59f4c7b60eb2 |
+> | [Helpdesk Administrator](#helpdesk-administrator) | Can reset passwords for non-administrators and Helpdesk Administrators.<br/>[![Privileged label icon.](./medi) | 729827e3-9c14-49f7-bb1b-9608f156bbb8 |
+> | [Hybrid Identity Administrator](#hybrid-identity-administrator) | Can manage AD to Azure AD cloud provisioning, Azure AD Connect, Pass-through Authentication (PTA), Password hash synchronization (PHS), Seamless Single sign-on (Seamless SSO), and federation settings.<br/>[![Privileged label icon.](./medi) | 8ac3fc64-6eca-42ea-9e69-59f4c7b60eb2 |
> | [Identity Governance Administrator](#identity-governance-administrator) | Manage access using Azure AD for identity governance scenarios. | 45d8d3c5-c802-45c6-b32a-1d70b5e1e86e | > | [Insights Administrator](#insights-administrator) | Has administrative access in the Microsoft 365 Insights app. | eb1f4a8d-243a-41f0-9fbd-c7cdf6c5ef7c | > | [Insights Analyst](#insights-analyst) | Access the analytical capabilities in Microsoft Viva Insights and run custom queries. | 25df335f-86eb-4119-b717-0ff02de207e9 | > | [Insights Business Leader](#insights-business-leader) | Can view and share dashboards and insights via the Microsoft 365 Insights app. | 31e939ad-9672-4796-9c2e-873181342d2d |
-> | [Intune Administrator](#intune-administrator) | Can manage all aspects of the Intune product. | 3a2c62db-5318-420d-8d74-23affee5d9d5 |
+> | [Intune Administrator](#intune-administrator) | Can manage all aspects of the Intune product.<br/>[![Privileged label icon.](./medi) | 3a2c62db-5318-420d-8d74-23affee5d9d5 |
> | [Kaizala Administrator](#kaizala-administrator) | Can manage settings for Microsoft Kaizala. | 74ef975b-6605-40af-a5d2-b9539d836353 | > | [Knowledge Administrator](#knowledge-administrator) | Can configure knowledge, learning, and other intelligent features. | b5a8dcf3-09d5-43a9-a639-8e29ef291470 | > | [Knowledge Manager](#knowledge-manager) | Can organize, create, manage, and promote topics and knowledge. | 744ec460-397e-42ad-a462-8b3f9747a02c |
This article lists the Azure AD built-in roles you can assign to allow managemen
> | [Network Administrator](#network-administrator) | Can manage network locations and review enterprise network design insights for Microsoft 365 Software as a Service applications. | d37c8bed-0711-4417-ba38-b4abe66ce4c2 | > | [Office Apps Administrator](#office-apps-administrator) | Can manage Office apps cloud services, including policy and settings management, and manage the ability to select, unselect and publish 'what's new' feature content to end-user's devices. | 2b745bdf-0803-4d80-aa65-822c4493daac | > | [Organizational Messages Writer](#organizational-messages-writer) | Write, publish, manage, and review the organizational messages for end-users through Microsoft product surfaces. | 507f53e4-4e52-4077-abd3-d2e1558b6ea2 |
-> | [Partner Tier1 Support](#partner-tier1-support) | Do not use - not intended for general use. | 4ba39ca4-527c-499a-b93d-d9b492c50246 |
-> | [Partner Tier2 Support](#partner-tier2-support) | Do not use - not intended for general use. | e00e864a-17c5-4a4b-9c06-f5b95a8d5bd8 |
-> | [Password Administrator](#password-administrator) | Can reset passwords for non-administrators and Password Administrators. | 966707d0-3269-4727-9be2-8c3a10f19b9d |
+> | [Partner Tier1 Support](#partner-tier1-support) | Do not use - not intended for general use.<br/>[![Privileged label icon.](./medi) | 4ba39ca4-527c-499a-b93d-d9b492c50246 |
+> | [Partner Tier2 Support](#partner-tier2-support) | Do not use - not intended for general use.<br/>[![Privileged label icon.](./medi) | e00e864a-17c5-4a4b-9c06-f5b95a8d5bd8 |
+> | [Password Administrator](#password-administrator) | Can reset passwords for non-administrators and Password Administrators.<br/>[![Privileged label icon.](./medi) | 966707d0-3269-4727-9be2-8c3a10f19b9d |
> | [Permissions Management Administrator](#permissions-management-administrator) | Manage all aspects of Entra Permissions Management. | af78dc32-cf4d-46f9-ba4e-4428526346b5 | > | [Power Platform Administrator](#power-platform-administrator) | Can create and manage all aspects of Microsoft Dynamics 365, Power Apps and Power Automate. | 11648597-926c-4cf3-9c36-bcebb0ba8dcc | > | [Printer Administrator](#printer-administrator) | Can manage all aspects of printers and printer connectors. | 644ef478-e28f-4e28-b9dc-3fdde9aa0b1f | > | [Printer Technician](#printer-technician) | Can register and unregister printers and update printer status. | e8cef6f1-e4bd-4ea8-bc07-4b8d950f4477 |
-> | [Privileged Authentication Administrator](#privileged-authentication-administrator) | Can access to view, set and reset authentication method information for any user (admin or non-admin). | 7be44c8a-adaf-4e2a-84d6-ab2649e08a13 |
-> | [Privileged Role Administrator](#privileged-role-administrator) | Can manage role assignments in Azure AD, and all aspects of Privileged Identity Management. | e8611ab8-c189-46e8-94e1-60213ab1f814 |
+> | [Privileged Authentication Administrator](#privileged-authentication-administrator) | Can access to view, set and reset authentication method information for any user (admin or non-admin).<br/>[![Privileged label icon.](./medi) | 7be44c8a-adaf-4e2a-84d6-ab2649e08a13 |
+> | [Privileged Role Administrator](#privileged-role-administrator) | Can manage role assignments in Azure AD, and all aspects of Privileged Identity Management.<br/>[![Privileged label icon.](./medi) | e8611ab8-c189-46e8-94e1-60213ab1f814 |
> | [Reports Reader](#reports-reader) | Can read sign-in and audit reports. | 4a5d8f65-41da-4de4-8968-e035b65339cf | > | [Search Administrator](#search-administrator) | Can create and manage all aspects of Microsoft Search settings. | 0964bb5e-9bdb-4d7b-ac29-58e794862a40 | > | [Search Editor](#search-editor) | Can create and manage the editorial content such as bookmarks, Q and As, locations, floorplan. | 8835291a-918c-4fd7-a9ce-faa49f0cf7d9 |
-> | [Security Administrator](#security-administrator) | Can read security information and reports, and manage configuration in Azure AD and Office 365. | 194ae4cb-b126-40b2-bd5b-6091b380977d |
-> | [Security Operator](#security-operator) | Creates and manages security events. | 5f2222b1-57c3-48ba-8ad5-d4759f1fde6f |
-> | [Security Reader](#security-reader) | Can read security information and reports in Azure AD and Office 365. | 5d6b6bb7-de71-4623-b4af-96380a352509 |
+> | [Security Administrator](#security-administrator) | Can read security information and reports, and manage configuration in Azure AD and Office 365.<br/>[![Privileged label icon.](./medi) | 194ae4cb-b126-40b2-bd5b-6091b380977d |
+> | [Security Operator](#security-operator) | Creates and manages security events.<br/>[![Privileged label icon.](./medi) | 5f2222b1-57c3-48ba-8ad5-d4759f1fde6f |
+> | [Security Reader](#security-reader) | Can read security information and reports in Azure AD and Office 365.<br/>[![Privileged label icon.](./medi) | 5d6b6bb7-de71-4623-b4af-96380a352509 |
> | [Service Support Administrator](#service-support-administrator) | Can read service health information and manage support tickets. | f023fd81-a637-4b56-95fd-791ac0226033 | > | [SharePoint Administrator](#sharepoint-administrator) | Can manage all aspects of the SharePoint service. | f28a1f50-f6e7-4571-818b-6a12f2af6b6c | > | [Skype for Business Administrator](#skype-for-business-administrator) | Can manage all aspects of the Skype for Business product. | 75941009-915a-4869-abe7-691bff18279e |
This article lists the Azure AD built-in roles you can assign to allow managemen
> | [Teams Devices Administrator](#teams-devices-administrator) | Can perform management related tasks on Teams certified devices. | 3d762c5a-1b6c-493f-843e-55a3b42923d4 | > | [Tenant Creator](#tenant-creator) | Create new Azure AD or Azure AD B2C tenants. | 112ca1a2-15ad-4102-995e-45b0bc479a6a | > | [Usage Summary Reports Reader](#usage-summary-reports-reader) | Can see only tenant level aggregates in Microsoft 365 Usage Analytics and Productivity Score. | 75934031-6c7e-415a-99d7-48dbd49e875e |
-> | [User Administrator](#user-administrator) | Can manage all aspects of users and groups, including resetting passwords for limited admins. | fe930be7-5e62-47db-91af-98c3a49a38b1 |
+> | [User Administrator](#user-administrator) | Can manage all aspects of users and groups, including resetting passwords for limited admins.<br/>[![Privileged label icon.](./medi) | fe930be7-5e62-47db-91af-98c3a49a38b1 |
> | [Virtual Visits Administrator](#virtual-visits-administrator) | Manage and share Virtual Visits information and metrics from admin centers or the Virtual Visits app. | e300d9e7-4a2b-4295-9eff-f1c78b36cc98 | > | [Viva Goals Administrator](#viva-goals-administrator) | Manage and configure all aspects of Microsoft Viva Goals. | 92b086b3-e367-4ef2-b869-1de128fb986e | > | [Viva Pulse Administrator](#viva-pulse-administrator) | Can manage all settings for Microsoft Viva Pulse app. | 87761b17-1ed2-4af3-9acd-92a150038160 |
This article lists the Azure AD built-in roles you can assign to allow managemen
## Application Administrator
-Users in this role can create and manage all aspects of enterprise applications, application registrations, and application proxy settings. Note that users assigned to this role are not added as owners when creating new application registrations or enterprise applications.
+[![Privileged label icon.](./medi)
+
+This is a [privileged role](privileged-roles-permissions.md). Users in this role can create and manage all aspects of enterprise applications, application registrations, and application proxy settings. Note that users assigned to this role are not added as owners when creating new application registrations or enterprise applications.
This role also grants the ability to consent for delegated permissions and application permissions, with the exception of application permissions for Microsoft Graph.
This role also grants the ability to consent for delegated permissions and appli
> | microsoft.directory/applications/audience/update | Update the audience property for applications | > | microsoft.directory/applications/authentication/update | Update authentication on all types of applications | > | microsoft.directory/applications/basic/update | Update basic properties for applications |
-> | microsoft.directory/applications/credentials/update | Update application credentials |
+> | microsoft.directory/applications/credentials/update | Update application credentials<br/>[![Privileged label icon.](./medi) |
> | microsoft.directory/applications/extensionProperties/update | Update extension properties on applications | > | microsoft.directory/applications/notes/update | Update notes of applications | > | microsoft.directory/applications/owners/update | Update owners of applications |
This role also grants the ability to consent for delegated permissions and appli
> | microsoft.directory/connectorGroups/delete | Delete application proxy connector groups | > | microsoft.directory/connectorGroups/allProperties/read | Read all properties of application proxy connector groups | > | microsoft.directory/connectorGroups/allProperties/update | Update all properties of application proxy connector groups |
-> | microsoft.directory/customAuthenticationExtensions/allProperties/allTasks | Create and manage custom authentication extensions |
+> | microsoft.directory/customAuthenticationExtensions/allProperties/allTasks | Create and manage custom authentication extensions<br/>[![Privileged label icon.](./medi) |
> | microsoft.directory/deletedItems.applications/delete | Permanently delete applications, which can no longer be restored | > | microsoft.directory/deletedItems.applications/restore | Restore soft deleted applications to original state |
-> | microsoft.directory/oAuth2PermissionGrants/allProperties/allTasks | Create and delete OAuth 2.0 permission grants, and read and update all properties |
+> | microsoft.directory/oAuth2PermissionGrants/allProperties/allTasks | Create and delete OAuth 2.0 permission grants, and read and update all properties<br/>[![Privileged label icon.](./medi) |
> | microsoft.directory/applicationPolicies/create | Create application policies | > | microsoft.directory/applicationPolicies/delete | Delete application policies | > | microsoft.directory/applicationPolicies/standard/read | Read standard properties of application policies |
This role also grants the ability to consent for delegated permissions and appli
> | microsoft.directory/servicePrincipals/audience/update | Update audience properties on service principals | > | microsoft.directory/servicePrincipals/authentication/update | Update authentication properties on service principals | > | microsoft.directory/servicePrincipals/basic/update | Update basic properties on service principals |
-> | microsoft.directory/servicePrincipals/credentials/update | Update credentials of service principals |
+> | microsoft.directory/servicePrincipals/credentials/update | Update credentials of service principals<br/>[![Privileged label icon.](./medi) |
> | microsoft.directory/servicePrincipals/notes/update | Update notes of service principals | > | microsoft.directory/servicePrincipals/owners/update | Update owners of service principals | > | microsoft.directory/servicePrincipals/permissions/update | Update permissions of service principals |
This role also grants the ability to consent for delegated permissions and appli
## Application Developer
-Users in this role can create application registrations when the "Users can register applications" setting is set to No. This role also grants permission to consent on one's own behalf when the "Users can consent to apps accessing company data on their behalf" setting is set to No. Users assigned to this role are added as owners when creating new application registrations.
+[![Privileged label icon.](./medi)
+
+This is a [privileged role](privileged-roles-permissions.md). Users in this role can create application registrations when the "Users can register applications" setting is set to No. This role also grants permission to consent on one's own behalf when the "Users can consent to apps accessing company data on their behalf" setting is set to No. Users assigned to this role are added as owners when creating new application registrations.
> [!div class="mx-tableFixed"] > | Actions | Description | > | | | > | microsoft.directory/applications/createAsOwner | Create all types of applications, and creator is added as the first owner |
-> | microsoft.directory/oAuth2PermissionGrants/createAsOwner | Create OAuth 2.0 permission grants, with creator as the first owner |
+> | microsoft.directory/oAuth2PermissionGrants/createAsOwner | Create OAuth 2.0 permission grants, with creator as the first owner<br/>[![Privileged label icon.](./medi) |
> | microsoft.directory/servicePrincipals/createAsOwner | Create service principals, with creator as the first owner | ## Attack Payload Author
For more information, see [Manage access to custom security attributes in Azure
## Authentication Administrator
-Assign the Authentication Administrator role to users who need to do the following:
+[![Privileged label icon.](./medi)
+
+This is a [privileged role](privileged-roles-permissions.md). Assign the Authentication Administrator role to users who need to do the following:
-- Set or reset any authentication method (including passwords) for non-administrators and some roles. For a list of the roles that an Authentication Administrator can read or update authentication methods, see [Who can reset passwords](#who-can-reset-passwords).
+- Set or reset any authentication method (including passwords) for non-administrators and some roles. For a list of the roles that an Authentication Administrator can read or update authentication methods, see [Who can reset passwords](privileged-roles-permissions.md#who-can-reset-passwords).
- Require users who are non-administrators or assigned to some roles to re-register against existing non-password credentials (for example, MFA or FIDO), and can also revoke **remember MFA on the device**, which prompts for MFA on the next sign-in.
-- Perform sensitive actions for some users. For more information, see [Who can perform sensitive actions](#who-can-perform-sensitive-actions).
+- Perform sensitive actions for some users. For more information, see [Who can perform sensitive actions](privileged-roles-permissions.md#who-can-perform-sensitive-actions).
- Create and manage support tickets in Azure and the Microsoft 365 admin center. Users with this role **cannot** do the following:
Users with this role **cannot** do the following:
> [!div class="mx-tableFixed"] > | Actions | Description | > | | |
-> | microsoft.directory/users/authenticationMethods/create | Create authentication methods for users |
-> | microsoft.directory/users/authenticationMethods/delete | Delete authentication methods for users |
+> | microsoft.directory/users/authenticationMethods/create | Update authentication methods for users<br/>[![Privileged label icon.](./medi) |
+> | microsoft.directory/users/authenticationMethods/delete | Delete authentication methods for users<br/>[![Privileged label icon.](./medi) |
> | microsoft.directory/users/authenticationMethods/standard/restrictedRead | Read standard properties of authentication methods that do not include personally identifiable information for users |
-> | microsoft.directory/users/authenticationMethods/basic/update | Update basic properties of authentication methods for users |
+> | microsoft.directory/users/authenticationMethods/basic/update | Update basic properties of authentication methods for users<br/>[![Privileged label icon.](./medi) |
> | microsoft.directory/deletedItems.users/restore | Restore soft deleted users to original state |
-> | microsoft.directory/users/delete | Delete users |
-> | microsoft.directory/users/disable | Disable users |
-> | microsoft.directory/users/enable | Enable users |
-> | microsoft.directory/users/invalidateAllRefreshTokens | Force sign-out by invalidating user refresh tokens |
+> | microsoft.directory/users/delete | Delete users<br/>[![Privileged label icon.](./medi) |
+> | microsoft.directory/users/disable | Disable users<br/>[![Privileged label icon.](./medi) |
+> | microsoft.directory/users/enable | Enable users<br/>[![Privileged label icon.](./medi) |
+> | microsoft.directory/users/invalidateAllRefreshTokens | Force sign-out by invalidating user refresh tokens<br/>[![Privileged label icon.](./medi) |
> | microsoft.directory/users/restore | Restore deleted users | > | microsoft.directory/users/basic/update | Update basic properties on users | > | microsoft.directory/users/manager/update | Update manager for users |
-> | microsoft.directory/users/password/update | Reset passwords for all users |
-> | microsoft.directory/users/userPrincipalName/update | Update User Principal Name of users |
+> | microsoft.directory/users/password/update | Reset passwords for all users<br/>[![Privileged label icon.](./medi) |
+> | microsoft.directory/users/userPrincipalName/update | Update User Principal Name of users<br/>[![Privileged label icon.](./medi) |
> | microsoft.azure.serviceHealth/allEntities/allTasks | Read and configure Azure Service Health | > | microsoft.azure.supportTickets/allEntities/allTasks | Create and manage Azure support tickets | > | microsoft.office365.serviceHealth/allEntities/allTasks | Read and configure Service Health in the Microsoft 365 admin center |
Assign the Authentication Policy Administrator role to users who need to do the
Users with this role **cannot** do the following:
-- Cannot update sensitive properties. For more information, see [Who can perform sensitive actions](#who-can-perform-sensitive-actions).
-- Cannot delete or restore users. For more information, see [Who can perform sensitive actions](#who-can-perform-sensitive-actions).
+- Cannot update sensitive properties. For more information, see [Who can perform sensitive actions](privileged-roles-permissions.md#who-can-perform-sensitive-actions).
+- Cannot delete or restore users. For more information, see [Who can perform sensitive actions](privileged-roles-permissions.md#who-can-perform-sensitive-actions).
- Cannot manage MFA settings in the legacy MFA management portal or Hardware OATH tokens. [!INCLUDE [authentication-table-include](./includes/authentication-table-include.md)]
Users with this role have all permissions in the Azure Information Protection se
## B2C IEF Keyset Administrator
-User can create and manage policy keys and secrets for token encryption, token signatures, and claim encryption/decryption. By adding new keys to existing key containers, this limited administrator can roll over secrets as needed without impacting existing applications. This user can see the full content of these secrets and their expiration dates even after their creation.
+[![Privileged label icon.](./medi)
+
+This is a [privileged role](privileged-roles-permissions.md). User can create and manage policy keys and secrets for token encryption, token signatures, and claim encryption/decryption. By adding new keys to existing key containers, this limited administrator can roll over secrets as needed without impacting existing applications. This user can see the full content of these secrets and their expiration dates even after their creation.
> [!IMPORTANT] > This is a sensitive role. The keyset administrator role should be carefully audited and assigned with care during pre-production and production.
User can create and manage policy keys and secrets for token encryption, token s
> [!div class="mx-tableFixed"] > | Actions | Description | > | | |
-> | microsoft.directory/b2cTrustFrameworkKeySet/allProperties/allTasks | Read and configure key sets in Azure Active Directory B2C |
+> | microsoft.directory/b2cTrustFrameworkKeySet/allProperties/allTasks | Read and configure key sets in Azure Active Directory B2C<br/>[![Privileged label icon.](./medi) |
## B2C IEF Policy Administrator
Users with this role have full permissions in Defender for Cloud Apps. They can
## Cloud Application Administrator
-Users in this role have the same permissions as the Application Administrator role, excluding the ability to manage application proxy. This role grants the ability to create and manage all aspects of enterprise applications and application registrations. Users assigned to this role are not added as owners when creating new application registrations or enterprise applications.
+[![Privileged label icon.](./medi)
+
+This is a [privileged role](privileged-roles-permissions.md). Users in this role have the same permissions as the Application Administrator role, excluding the ability to manage application proxy. This role grants the ability to create and manage all aspects of enterprise applications and application registrations. Users assigned to this role are not added as owners when creating new application registrations or enterprise applications.
This role also grants the ability to consent for delegated permissions and application permissions, with the exception of application permissions for Microsoft Graph.
This role also grants the ability to consent for delegated permissions and appli
> | microsoft.directory/applications/audience/update | Update the audience property for applications | > | microsoft.directory/applications/authentication/update | Update authentication on all types of applications | > | microsoft.directory/applications/basic/update | Update basic properties for applications |
-> | microsoft.directory/applications/credentials/update | Update application credentials |
+> | microsoft.directory/applications/credentials/update | Update application credentials<br/>[![Privileged label icon.](./medi) |
> | microsoft.directory/applications/extensionProperties/update | Update extension properties on applications | > | microsoft.directory/applications/notes/update | Update notes of applications | > | microsoft.directory/applications/owners/update | Update owners of applications |
This role also grants the ability to consent for delegated permissions and appli
> | microsoft.directory/auditLogs/allProperties/read | Read all properties on audit logs, excluding custom security attributes audit logs | > | microsoft.directory/deletedItems.applications/delete | Permanently delete applications, which can no longer be restored | > | microsoft.directory/deletedItems.applications/restore | Restore soft deleted applications to original state |
-> | microsoft.directory/oAuth2PermissionGrants/allProperties/allTasks | Create and delete OAuth 2.0 permission grants, and read and update all properties |
+> | microsoft.directory/oAuth2PermissionGrants/allProperties/allTasks | Create and delete OAuth 2.0 permission grants, and read and update all properties<br/>[![Privileged label icon.](./medi) |
> | microsoft.directory/applicationPolicies/create | Create application policies | > | microsoft.directory/applicationPolicies/delete | Delete application policies | > | microsoft.directory/applicationPolicies/standard/read | Read standard properties of application policies |
This role also grants the ability to consent for delegated permissions and appli
> | microsoft.directory/servicePrincipals/audience/update | Update audience properties on service principals | > | microsoft.directory/servicePrincipals/authentication/update | Update authentication properties on service principals | > | microsoft.directory/servicePrincipals/basic/update | Update basic properties on service principals |
-> | microsoft.directory/servicePrincipals/credentials/update | Update credentials of service principals |
+> | microsoft.directory/servicePrincipals/credentials/update | Update credentials of service principals<br/>[![Privileged label icon.](./medi) |
> | microsoft.directory/servicePrincipals/notes/update | Update notes of service principals | > | microsoft.directory/servicePrincipals/owners/update | Update owners of service principals | > | microsoft.directory/servicePrincipals/permissions/update | Update permissions of service principals |
This role also grants the ability to consent for delegated permissions and appli
## Cloud Device Administrator
-Users in this role can enable, disable, and delete devices in Azure AD and read Windows 10 BitLocker keys (if present) in the Azure portal. The role does not grant permissions to manage any other properties on the device.
+[![Privileged label icon.](./medi)
+
+This is a [privileged role](privileged-roles-permissions.md). Users in this role can enable, disable, and delete devices in Azure AD and read Windows 10 BitLocker keys (if present) in the Azure portal. The role does not grant permissions to manage any other properties on the device.
> [!div class="mx-tableFixed"] > | Actions | Description | > | | | > | microsoft.directory/auditLogs/allProperties/read | Read all properties on audit logs, excluding custom security attributes audit logs | > | microsoft.directory/authorizationPolicy/standard/read | Read standard properties of authorization policy |
-> | microsoft.directory/bitlockerKeys/key/read | Read bitlocker metadata and key on devices |
+> | microsoft.directory/bitlockerKeys/key/read | Read bitlocker metadata and key on devices<br/>[![Privileged label icon.](./medi) |
> | microsoft.directory/deletedItems.devices/delete | Permanently delete devices, which can no longer be restored |
> | microsoft.directory/deletedItems.devices/restore | Restore soft deleted devices to original state |
> | microsoft.directory/devices/delete | Delete devices from Azure AD |
Users in this role can enable, disable, and delete devices in Azure AD and read
> | microsoft.directory/devices/enable | Enable devices in Azure AD |
> | microsoft.directory/deviceLocalCredentials/password/read | Read all properties of the backed up local administrator account credentials for Azure AD joined devices, including the password |
> | microsoft.directory/deviceManagementPolicies/standard/read | Read standard properties on device management application policies |
-> | microsoft.directory/deviceManagementPolicies/basic/update | Update basic properties on device management application policies |
+> | microsoft.directory/deviceManagementPolicies/basic/update | Update basic properties on device management application policies<br/>[![Privileged label icon.](./medi) |
> | microsoft.directory/deviceRegistrationPolicy/standard/read | Read standard properties on device registration policies |
-> | microsoft.directory/deviceRegistrationPolicy/basic/update | Update basic properties on device registration policies |
+> | microsoft.directory/deviceRegistrationPolicy/basic/update | Update basic properties on device registration policies<br/>[![Privileged label icon.](./medi) |
> | microsoft.directory/signInReports/allProperties/read | Read all properties on sign-in reports, including privileged properties |
> | microsoft.azure.serviceHealth/allEntities/allTasks | Read and configure Azure Service Health |
> | microsoft.office365.serviceHealth/allEntities/allTasks | Read and configure Service Health in the Microsoft 365 admin center |
In | Can do
## Conditional Access Administrator
-Users with this role have the ability to manage Azure Active Directory Conditional Access settings.
+[![Privileged label icon.](./medi)
+
+This is a [privileged role](privileged-roles-permissions.md). Users with this role have the ability to manage Azure Active Directory Conditional Access settings.
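For illustration, a brief sketch of what this role's Conditional Access permissions map to in Microsoft Graph: listing policies, then switching one to report-only mode. The bearer token and policy ID are placeholders; this is a sketch rather than a definitive implementation.

```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
headers = {"Authorization": "Bearer <access-token>"}  # placeholder token

# conditionalAccessPolicies/standard/read: enumerate existing policies.
policies = requests.get(
    f"{GRAPH}/identity/conditionalAccess/policies", headers=headers
).json().get("value", [])
for p in policies:
    print(p["id"], p["displayName"], p["state"])

# conditionalAccessPolicies/basic/update: for example, switch a policy
# to report-only mode while its effects are being evaluated.
policy_id = "<policy-id>"  # placeholder
requests.patch(
    f"{GRAPH}/identity/conditionalAccess/policies/{policy_id}",
    headers=headers,
    json={"state": "enabledForReportingButNotEnforced"},
)
```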
> [!div class="mx-tableFixed"] > | Actions | Description |
Users with this role have the ability to manage Azure Active Directory Condition
> | microsoft.directory/namedLocations/delete | Delete custom rules that define network locations |
> | microsoft.directory/namedLocations/standard/read | Read basic properties of custom rules that define network locations |
> | microsoft.directory/namedLocations/basic/update | Update basic properties of custom rules that define network locations |
-> | microsoft.directory/conditionalAccessPolicies/create | Create Conditional Access policies |
-> | microsoft.directory/conditionalAccessPolicies/delete | Delete Conditional Access policies |
-> | microsoft.directory/conditionalAccessPolicies/standard/read | Read Conditional Access for policies |
-> | microsoft.directory/conditionalAccessPolicies/owners/read | Read the owners of Conditional Access policies |
-> | microsoft.directory/conditionalAccessPolicies/policyAppliedTo/read | Read the "applied to" property for Conditional Access policies |
-> | microsoft.directory/conditionalAccessPolicies/basic/update | Update basic properties for Conditional Access policies |
-> | microsoft.directory/conditionalAccessPolicies/owners/update | Update owners for Conditional Access policies |
-> | microsoft.directory/conditionalAccessPolicies/tenantDefault/update | Update the default tenant for Conditional Access policies |
-> | microsoft.directory/resourceNamespaces/resourceActions/authenticationContext/update | Update Conditional Access authentication context of Microsoft 365 role-based access control (RBAC) resource actions |
+> | microsoft.directory/conditionalAccessPolicies/create | Create conditional access policies |
+> | microsoft.directory/conditionalAccessPolicies/delete | Delete conditional access policies |
+> | microsoft.directory/conditionalAccessPolicies/standard/read | Read conditional access policies |
+> | microsoft.directory/conditionalAccessPolicies/owners/read | Read the owners of conditional access policies |
+> | microsoft.directory/conditionalAccessPolicies/policyAppliedTo/read | Read the "applied to" property for conditional access policies |
+> | microsoft.directory/conditionalAccessPolicies/basic/update | Update basic properties for conditional access policies |
+> | microsoft.directory/conditionalAccessPolicies/owners/update | Update owners for conditional access policies |
+> | microsoft.directory/conditionalAccessPolicies/tenantDefault/update | Update the default tenant for conditional access policies |
+> | microsoft.directory/resourceNamespaces/resourceActions/authenticationContext/update | Update Conditional Access authentication context of Microsoft 365 role-based access control (RBAC) resource actions<br/>[![Privileged label icon.](./medi) |
## Customer Lockbox Access Approver
Users in this role can read basic directory information. This role should be use
## Directory Synchronization Accounts
-Do not use. This role is automatically assigned to the Azure AD Connect service, and is not intended or supported for any other use.
+[![Privileged label icon.](./medi)
+
+This is a [privileged role](privileged-roles-permissions.md). Do not use. This role is automatically assigned to the Azure AD Connect service, and is not intended or supported for any other use.
> [!div class="mx-tableFixed"] > | Actions | Description |
Do not use. This role is automatically assigned to the Azure AD Connect service,
> | microsoft.directory/applications/audience/update | Update the audience property for applications |
> | microsoft.directory/applications/authentication/update | Update authentication on all types of applications |
> | microsoft.directory/applications/basic/update | Update basic properties for applications |
-> | microsoft.directory/applications/credentials/update | Update application credentials |
+> | microsoft.directory/applications/credentials/update | Update application credentials<br/>[![Privileged label icon.](./medi) |
> | microsoft.directory/applications/notes/update | Update notes of applications |
> | microsoft.directory/applications/owners/update | Update owners of applications |
> | microsoft.directory/applications/permissions/update | Update exposed permissions and required permissions on all types of applications |
> | microsoft.directory/applications/policies/update | Update policies of applications |
> | microsoft.directory/applications/tag/update | Update tags of applications |
> | microsoft.directory/authorizationPolicy/standard/read | Read standard properties of authorization policy |
-> | microsoft.directory/hybridAuthenticationPolicy/allProperties/allTasks | Manage hybrid authentication policy in Azure AD |
+> | microsoft.directory/hybridAuthenticationPolicy/allProperties/allTasks | Manage hybrid authentication policy in Azure AD<br/>[![Privileged label icon.](./medi) |
> | microsoft.directory/organization/dirSync/update | Update the organization directory sync property |
> | microsoft.directory/passwordHashSync/allProperties/allTasks | Manage all aspects of Password Hash Synchronization (PHS) in Azure AD |
> | microsoft.directory/policies/create | Create policies in Azure AD |
Do not use. This role is automatically assigned to the Azure AD Connect service,
> | microsoft.directory/policies/standard/read | Read basic properties on policies |
> | microsoft.directory/policies/owners/read | Read owners of policies |
> | microsoft.directory/policies/policyAppliedTo/read | Read policies.policyAppliedTo property |
-> | microsoft.directory/policies/basic/update | Update basic properties on policies |
+> | microsoft.directory/policies/basic/update | Update basic properties on policies<br/>[![Privileged label icon.](./medi) |
> | microsoft.directory/policies/owners/update | Update owners of policies |
> | microsoft.directory/policies/tenantDefault/update | Update default organization policies |
> | microsoft.directory/servicePrincipals/create | Create service principals |
Do not use. This role is automatically assigned to the Azure AD Connect service,
> | microsoft.directory/servicePrincipals/audience/update | Update audience properties on service principals |
> | microsoft.directory/servicePrincipals/authentication/update | Update authentication properties on service principals |
> | microsoft.directory/servicePrincipals/basic/update | Update basic properties on service principals |
-> | microsoft.directory/servicePrincipals/credentials/update | Update credentials of service principals |
+> | microsoft.directory/servicePrincipals/credentials/update | Update credentials of service principals<br/>[![Privileged label icon.](./medi) |
> | microsoft.directory/servicePrincipals/notes/update | Update notes of service principals |
> | microsoft.directory/servicePrincipals/owners/update | Update owners of service principals |
> | microsoft.directory/servicePrincipals/permissions/update | Update permissions of service principals |
Do not use. This role is automatically assigned to the Azure AD Connect service,
## Directory Writers
-Users in this role can read and update basic information of users, groups, and service principals.
+[![Privileged label icon.](./medi)
+
+This is a [privileged role](privileged-roles-permissions.md). Users in this role can read and update basic information of users, groups, and service principals.
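For illustration, a minimal sketch of two of the privileged actions below as Microsoft Graph calls: creating a user and then invalidating their refresh tokens. The domain, password, and token values are placeholders; note that in Graph v1.0 the refresh-token invalidation surfaces as the `revokeSignInSessions` action.

```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
headers = {"Authorization": "Bearer <access-token>"}  # placeholder token

# users/create: add a user (one of the privileged actions of this role).
new_user = requests.post(f"{GRAPH}/users", headers=headers, json={
    "accountEnabled": True,
    "displayName": "Test User",
    "mailNickname": "testuser",
    "userPrincipalName": "testuser@contoso.example",  # placeholder domain
    "passwordProfile": {
        "forceChangePasswordNextSignIn": True,
        "password": "<initial-password>",  # placeholder
    },
}).json()

# users/invalidateAllRefreshTokens: in Graph v1.0 this is exposed as the
# revokeSignInSessions action, which forces a fresh sign-in everywhere.
requests.post(f"{GRAPH}/users/{new_user['id']}/revokeSignInSessions",
              headers=headers)
```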
> [!div class="mx-tableFixed"] > | Actions | Description |
Users in this role can read and update basic information of users, groups, and s
> | microsoft.directory/groupSettings/create | Create group settings |
> | microsoft.directory/groupSettings/delete | Delete group settings |
> | microsoft.directory/groupSettings/basic/update | Update basic properties on group settings |
-> | microsoft.directory/oAuth2PermissionGrants/create | Create OAuth 2.0 permission grants |
-> | microsoft.directory/oAuth2PermissionGrants/basic/update | Update OAuth 2.0 permission grants |
+> | microsoft.directory/oAuth2PermissionGrants/create | Create OAuth 2.0 permission grants<br/>[![Privileged label icon.](./medi) |
+> | microsoft.directory/oAuth2PermissionGrants/basic/update | Update OAuth 2.0 permission grants<br/>[![Privileged label icon.](./medi) |
> | microsoft.directory/servicePrincipals/synchronizationCredentials/manage | Manage application provisioning secrets and credentials |
> | microsoft.directory/servicePrincipals/synchronizationJobs/manage | Start, restart, and pause application provisioning synchronization jobs |
> | microsoft.directory/servicePrincipals/synchronizationSchema/manage | Create and manage application provisioning synchronization jobs and schema |
> | microsoft.directory/servicePrincipals/appRoleAssignedTo/update | Update service principal role assignments |
> | microsoft.directory/users/assignLicense | Manage user licenses |
-> | microsoft.directory/users/create | Add users |
-> | microsoft.directory/users/disable | Disable users |
-> | microsoft.directory/users/enable | Enable users |
-> | microsoft.directory/users/invalidateAllRefreshTokens | Force sign-out by invalidating user refresh tokens |
+> | microsoft.directory/users/create | Add users<br/>[![Privileged label icon.](./medi) |
+> | microsoft.directory/users/disable | Disable users<br/>[![Privileged label icon.](./medi) |
+> | microsoft.directory/users/enable | Enable users<br/>[![Privileged label icon.](./medi) |
+> | microsoft.directory/users/invalidateAllRefreshTokens | Force sign-out by invalidating user refresh tokens<br/>[![Privileged label icon.](./medi) |
> | microsoft.directory/users/inviteGuest | Invite guest users |
> | microsoft.directory/users/reprocessLicenseAssignment | Reprocess license assignments for users |
> | microsoft.directory/users/basic/update | Update basic properties on users |
> | microsoft.directory/users/manager/update | Update manager for users |
> | microsoft.directory/users/photo/update | Update photo of users |
> | microsoft.directory/users/sponsors/update | Update sponsors of users |
-> | microsoft.directory/users/userPrincipalName/update | Update User Principal Name of users |
+> | microsoft.directory/users/userPrincipalName/update | Update User Principal Name of users<br/>[![Privileged label icon.](./medi) |
## Domain Name Administrator
Users with this role have global permissions within Microsoft Fabric and Power B
## Global Administrator
-Users with this role have access to all administrative features in Azure Active Directory, as well as services that use Azure Active Directory identities like the Microsoft 365 Defender portal, the Microsoft Purview compliance portal, Exchange Online, SharePoint Online, and Skype for Business Online. Global Administrators can view Directory Activity logs. Furthermore, Global Administrators can [elevate their access](../../role-based-access-control/elevate-access-global-admin.md) to manage all Azure subscriptions and management groups. This allows Global Administrators to get full access to all Azure resources using the respective Azure AD Tenant. The person who signs up for the Azure AD organization becomes a Global Administrator. There can be more than one Global Administrator at your company. Global Administrators can reset the password for any user and all other administrators. A Global Administrator cannot remove their own Global Administrator assignment. This is to prevent a situation where an organization has zero Global Administrators.
+[![Privileged label icon.](./medi)
+
+This is a [privileged role](privileged-roles-permissions.md). Users with this role have access to all administrative features in Azure Active Directory, as well as services that use Azure Active Directory identities like the Microsoft 365 Defender portal, the Microsoft Purview compliance portal, Exchange Online, SharePoint Online, and Skype for Business Online. Global Administrators can view Directory Activity logs. Furthermore, Global Administrators can [elevate their access](../../role-based-access-control/elevate-access-global-admin.md) to manage all Azure subscriptions and management groups. This allows Global Administrators to get full access to all Azure resources using the respective Azure AD Tenant. The person who signs up for the Azure AD organization becomes a Global Administrator. There can be more than one Global Administrator at your company. Global Administrators can reset the password for any user and all other administrators. A Global Administrator cannot remove their own Global Administrator assignment. This is to prevent a situation where an organization has zero Global Administrators.
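For illustration, the elevate-access step mentioned above is a single Azure Resource Manager call. The following is a minimal sketch, assuming the caller is a Global Administrator holding a token for the ARM audience (token acquisition elided):

```python
import requests

# One-time elevation: grants the calling Global Administrator the
# User Access Administrator role at root scope ("/") in Azure RBAC.
resp = requests.post(
    "https://management.azure.com/providers/Microsoft.Authorization/elevateAccess",
    params={"api-version": "2016-07-01"},
    headers={"Authorization": "Bearer <arm-access-token>"},  # placeholder
)
print(resp.status_code)  # 200 indicates the elevation succeeded
```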
> [!NOTE]
> As a best practice, Microsoft recommends that you assign the Global Administrator role to fewer than five people in your organization. For more information, see [Best practices for Azure AD roles](best-practices.md).
Users with this role have access to all administrative features in Azure Active
> | microsoft.directory/adminConsentRequestPolicy/allProperties/allTasks | Manage admin consent request policies in Azure AD |
> | microsoft.directory/administrativeUnits/allProperties/allTasks | Create and manage administrative units (including members) |
> | microsoft.directory/appConsent/appConsentRequests/allProperties/read | Read all properties of consent requests for applications registered with Azure AD |
-> | microsoft.directory/applications/allProperties/allTasks | Create and delete applications, and read and update all properties |
+> | microsoft.directory/applications/allProperties/allTasks | Create and delete applications, and read and update all properties<br/>[![Privileged label icon.](./medi) |
> | microsoft.directory/applications/synchronization/standard/read | Read provisioning settings associated with the application object |
> | microsoft.directory/applicationTemplates/instantiate | Instantiate gallery applications from application templates |
> | microsoft.directory/auditLogs/allProperties/read | Read all properties on audit logs, excluding custom security attributes audit logs |
-> | microsoft.directory/users/authenticationMethods/create | Create authentication methods for users |
-> | microsoft.directory/users/authenticationMethods/delete | Delete authentication methods for users |
-> | microsoft.directory/users/authenticationMethods/standard/read | Read standard properties of authentication methods for users |
-> | microsoft.directory/users/authenticationMethods/basic/update | Update basic properties of authentication methods for users |
-> | microsoft.directory/authorizationPolicy/allProperties/allTasks | Manage all aspects of authorization policy |
-> | microsoft.directory/bitlockerKeys/key/read | Read bitlocker metadata and key on devices |
+> | microsoft.directory/users/authenticationMethods/create | Create authentication methods for users<br/>[![Privileged label icon.](./medi) |
+> | microsoft.directory/users/authenticationMethods/delete | Delete authentication methods for users<br/>[![Privileged label icon.](./medi) |
+> | microsoft.directory/users/authenticationMethods/standard/read | Read standard properties of authentication methods for users<br/>[![Privileged label icon.](./medi) |
+> | microsoft.directory/users/authenticationMethods/basic/update | Update basic properties of authentication methods for users<br/>[![Privileged label icon.](./medi) |
+> | microsoft.directory/authorizationPolicy/allProperties/allTasks | Manage all aspects of authorization policy<br/>[![Privileged label icon.](./medi) |
+> | microsoft.directory/bitlockerKeys/key/read | Read bitlocker metadata and key on devices<br/>[![Privileged label icon.](./medi) |
> | microsoft.directory/cloudAppSecurity/allProperties/allTasks | Create and delete all resources, and read and update standard properties in Microsoft Defender for Cloud Apps |
> | microsoft.directory/connectors/create | Create application proxy connectors |
> | microsoft.directory/connectors/allProperties/read | Read all properties of application proxy connectors |
Users with this role have access to all administrative features in Azure Active
> | microsoft.directory/connectorGroups/allProperties/update | Update all properties of application proxy connector groups |
> | microsoft.directory/contacts/allProperties/allTasks | Create and delete contacts, and read and update all properties |
> | microsoft.directory/contracts/allProperties/allTasks | Create and delete partner contracts, and read and update all properties |
-> | microsoft.directory/customAuthenticationExtensions/allProperties/allTasks | Create and manage custom authentication extensions |
+> | microsoft.directory/customAuthenticationExtensions/allProperties/allTasks | Create and manage custom authentication extensions<br/>[![Privileged label icon.](./medi) |
> | microsoft.directory/deletedItems/delete | Permanently delete objects, which can no longer be restored |
> | microsoft.directory/deletedItems/restore | Restore soft deleted objects to original state |
> | microsoft.directory/devices/allProperties/allTasks | Create and delete devices, and read and update all properties |
Users with this role have access to all administrative features in Azure Active
> | microsoft.directory/namedLocations/basic/update | Update basic properties of custom rules that define network locations |
> | microsoft.directory/deviceLocalCredentials/password/read | Read all properties of the backed up local administrator account credentials for Azure AD joined devices, including the password |
> | microsoft.directory/deviceManagementPolicies/standard/read | Read standard properties on device management application policies |
-> | microsoft.directory/deviceManagementPolicies/basic/update | Update basic properties on device management application policies |
+> | microsoft.directory/deviceManagementPolicies/basic/update | Update basic properties on device management application policies<br/>[![Privileged label icon.](./medi) |
> | microsoft.directory/deviceRegistrationPolicy/standard/read | Read standard properties on device registration policies |
-> | microsoft.directory/deviceRegistrationPolicy/basic/update | Update basic properties on device registration policies |
+> | microsoft.directory/deviceRegistrationPolicy/basic/update | Update basic properties on device registration policies<br/>[![Privileged label icon.](./medi) |
> | microsoft.directory/directoryRoles/allProperties/allTasks | Create and delete directory roles, and read and update all properties |
> | microsoft.directory/directoryRoleTemplates/allProperties/allTasks | Create and delete Azure AD role templates, and read and update all properties |
> | microsoft.directory/domains/allProperties/allTasks | Create and delete domains, and read and update all properties |
Users with this role have access to all administrative features in Azure Active
> | microsoft.directory/groupsAssignableToRoles/allProperties/update | Update role-assignable groups |
> | microsoft.directory/groupSettings/allProperties/allTasks | Create and delete group settings, and read and update all properties |
> | microsoft.directory/groupSettingTemplates/allProperties/allTasks | Create and delete group setting templates, and read and update all properties |
-> | microsoft.directory/hybridAuthenticationPolicy/allProperties/allTasks | Manage hybrid authentication policy in Azure AD |
-> | microsoft.directory/identityProtection/allProperties/allTasks | Create and delete all resources, and read and update standard properties in Azure AD Identity Protection |
+> | microsoft.directory/hybridAuthenticationPolicy/allProperties/allTasks | Manage hybrid authentication policy in Azure AD<br/>[![Privileged label icon.](./medi) |
+> | microsoft.directory/identityProtection/allProperties/allTasks | Create and delete all resources, and read and update standard properties in Azure AD Identity Protection<br/>[![Privileged label icon.](./medi) |
> | microsoft.directory/loginOrganizationBranding/allProperties/allTasks | Create and delete loginTenantBranding, and read and update all properties |
-> | microsoft.directory/oAuth2PermissionGrants/allProperties/allTasks | Create and delete OAuth 2.0 permission grants, and read and update all properties |
+> | microsoft.directory/oAuth2PermissionGrants/allProperties/allTasks | Create and delete OAuth 2.0 permission grants, and read and update all properties<br/>[![Privileged label icon.](./medi) |
> | microsoft.directory/organization/allProperties/allTasks | Read and update all properties for an organization |
> | microsoft.directory/passwordHashSync/allProperties/allTasks | Manage all aspects of Password Hash Synchronization (PHS) in Azure AD |
-> | microsoft.directory/policies/allProperties/allTasks | Create and delete policies, and read and update all properties |
-> | microsoft.directory/conditionalAccessPolicies/allProperties/allTasks | Manage all properties of Conditional Access policies |
+> | microsoft.directory/policies/allProperties/allTasks | Create and delete policies, and read and update all properties<br/>[![Privileged label icon.](./medi) |
+> | microsoft.directory/conditionalAccessPolicies/allProperties/allTasks | Manage all properties of conditional access policies |
> | microsoft.directory/crossTenantAccessPolicy/standard/read | Read basic properties of cross-tenant access policy |
> | microsoft.directory/crossTenantAccessPolicy/allowedCloudEndpoints/update | Update allowed cloud endpoints of cross-tenant access policy |
> | microsoft.directory/crossTenantAccessPolicy/basic/update | Update basic settings of cross-tenant access policy |
Users with this role have access to all administrative features in Azure Active
> | microsoft.directory/crossTenantAccessPolicy/partners/b2bDirectConnect/update | Update Azure AD B2B direct connect settings of cross-tenant access policy for partners |
> | microsoft.directory/crossTenantAccessPolicy/partners/crossCloudMeetings/update | Update cross-cloud Teams meeting settings of cross-tenant access policy for partners |
> | microsoft.directory/crossTenantAccessPolicy/partners/tenantRestrictions/update | Update tenant restrictions of cross-tenant access policy for partners |
+> | microsoft.directory/crossTenantAccessPolicy/partners/identitySynchronization/create | Create cross-tenant sync policy for partners |
+> | microsoft.directory/crossTenantAccessPolicy/partners/identitySynchronization/basic/update | Update basic settings of cross-tenant sync policy |
+> | microsoft.directory/crossTenantAccessPolicy/partners/identitySynchronization/standard/read | Read basic properties of cross-tenant sync policy |
> | microsoft.directory/privilegedIdentityManagement/allProperties/read | Read all resources in Privileged Identity Management |
> | microsoft.directory/provisioningLogs/allProperties/read | Read all properties of provisioning logs |
-> | microsoft.directory/resourceNamespaces/resourceActions/authenticationContext/update | Update Conditional Access authentication context of Microsoft 365 role-based access control (RBAC) resource actions |
+> | microsoft.directory/resourceNamespaces/resourceActions/authenticationContext/update | Update Conditional Access authentication context of Microsoft 365 role-based access control (RBAC) resource actions<br/>[![Privileged label icon.](./medi) |
> | microsoft.directory/roleAssignments/allProperties/allTasks | Create and delete role assignments, and read and update all role assignment properties |
> | microsoft.directory/roleDefinitions/allProperties/allTasks | Create and delete role definitions, and read and update all properties |
> | microsoft.directory/scopedRoleMemberships/allProperties/allTasks | Create and delete scopedRoleMemberships, and read and update all properties |
Users with this role have access to all administrative features in Azure Active
> | microsoft.directory/serviceAction/disableDirectoryFeature | Can perform the "disable directory feature" service action |
> | microsoft.directory/serviceAction/enableDirectoryFeature | Can perform the "enable directory feature" service action |
> | microsoft.directory/serviceAction/getAvailableExtentionProperties | Can perform the getAvailableExtentionProperties service action |
-> | microsoft.directory/servicePrincipals/allProperties/allTasks | Create and delete service principals, and read and update all properties |
+> | microsoft.directory/servicePrincipals/allProperties/allTasks | Create and delete service principals, and read and update all properties<br/>[![Privileged label icon.](./medi) |
> | microsoft.directory/servicePrincipals/managePermissionGrantsForAll.microsoft-company-admin | Grant consent for any permission to any application |
> | microsoft.directory/servicePrincipals/synchronization/standard/read | Read provisioning settings associated with your service principal |
> | microsoft.directory/signInReports/allProperties/read | Read all properties on sign-in reports, including privileged properties |
> | microsoft.directory/subscribedSkus/allProperties/allTasks | Buy and manage subscriptions and delete subscriptions |
-> | microsoft.directory/users/allProperties/allTasks | Create and delete users, and read and update all properties |
+> | microsoft.directory/users/allProperties/allTasks | Create and delete users, and read and update all properties<br/>[![Privileged label icon.](./medi) |
> | microsoft.directory/permissionGrantPolicies/create | Create permission grant policies |
> | microsoft.directory/permissionGrantPolicies/delete | Delete permission grant policies |
> | microsoft.directory/permissionGrantPolicies/standard/read | Read standard properties of permission grant policies |
Users with this role have access to all administrative features in Azure Active
> | microsoft.commerce.billing/purchases/standard/read | Read purchase services in M365 Admin Center. |
> | microsoft.dynamics365/allEntities/allTasks | Manage all aspects of Dynamics 365 |
> | microsoft.edge/allEntities/allProperties/allTasks | Manage all aspects of Microsoft Edge |
+> | microsoft.networkAccess/allEntities/allProperties/allTasks | Manage all aspects of Entra Network Access |
> | microsoft.flow/allEntities/allTasks | Manage all aspects of Microsoft Power Automate |
> | microsoft.hardware.support/shippingAddress/allProperties/allTasks | Create, read, update, and delete shipping addresses for Microsoft hardware warranty claims, including shipping addresses created by others |
> | microsoft.hardware.support/shippingStatus/allProperties/read | Read shipping status for open Microsoft hardware warranty claims |
Users with this role have access to all administrative features in Azure Active
> | microsoft.powerApps.powerBI/allEntities/allTasks | Manage all aspects of Fabric and Power BI |
> | microsoft.teams/allEntities/allProperties/allTasks | Manage all resources in Teams |
> | microsoft.virtualVisits/allEntities/allProperties/allTasks | Manage and share Virtual Visits information and metrics from admin centers or the Virtual Visits app |
+> | microsoft.viva.goals/allEntities/allProperties/allTasks | Manage all aspects of Microsoft Viva Goals |
+> | microsoft.viva.pulse/allEntities/allProperties/allTasks | Manage all aspects of Microsoft Viva Pulse |
> | microsoft.windows.defenderAdvancedThreatProtection/allEntities/allTasks | Manage all aspects of Microsoft Defender for Endpoint |
> | microsoft.windows.updatesDeployments/allEntities/allProperties/allTasks | Read and configure all aspects of Windows Update Service |

## Global Reader
-Users in this role can read settings and administrative information across Microsoft 365 services but can't take management actions. Global Reader is the read-only counterpart to Global Administrator. Assign Global Reader instead of Global Administrator for planning, audits, or investigations. Use Global Reader in combination with other limited admin roles like Exchange Administrator to make it easier to get work done without the assigning the Global Administrator role. Global Reader works with Microsoft 365 admin center, Exchange admin center, SharePoint admin center, Teams admin center, Microsoft 365 Defender portal, Microsoft Purview compliance portal, Azure portal, and Device Management admin center.
+[![Privileged label icon.](./medi)
+
+This is a [privileged role](privileged-roles-permissions.md). Users in this role can read settings and administrative information across Microsoft 365 services but can't take management actions. Global Reader is the read-only counterpart to Global Administrator. Assign Global Reader instead of Global Administrator for planning, audits, or investigations. Use Global Reader in combination with other limited admin roles like Exchange Administrator to make it easier to get work done without assigning the Global Administrator role. Global Reader works with the Microsoft 365 admin center, Exchange admin center, SharePoint admin center, Teams admin center, Microsoft 365 Defender portal, Microsoft Purview compliance portal, Azure portal, and Device Management admin center.
Users with this role **cannot** do the following:
Users with this role **cannot** do the following:
> | microsoft.directory/auditLogs/allProperties/read | Read all properties on audit logs, excluding custom security attributes audit logs |
> | microsoft.directory/users/authenticationMethods/standard/restrictedRead | Read standard properties of authentication methods that do not include personally identifiable information for users |
> | microsoft.directory/authorizationPolicy/standard/read | Read standard properties of authorization policy |
-> | microsoft.directory/bitlockerKeys/key/read | Read bitlocker metadata and key on devices |
+> | microsoft.directory/bitlockerKeys/key/read | Read bitlocker metadata and key on devices<br/>[![Privileged label icon.](./medi) |
> | microsoft.directory/cloudAppSecurity/allProperties/read | Read all properties for Defender for Cloud Apps |
> | microsoft.directory/connectors/allProperties/read | Read all properties of application proxy connectors |
> | microsoft.directory/connectorGroups/allProperties/read | Read all properties of application proxy connector groups |
Users with this role **cannot** do the following:
> | microsoft.directory/pendingExternalUserProfiles/standard/read | Read standard properties of external user profiles in the extended directory for Teams |
> | microsoft.directory/permissionGrantPolicies/standard/read | Read standard properties of permission grant policies |
> | microsoft.directory/policies/allProperties/read | Read all properties of policies |
-> | microsoft.directory/conditionalAccessPolicies/allProperties/read | Read all properties of Conditional Access policies |
+> | microsoft.directory/conditionalAccessPolicies/allProperties/read | Read all properties of conditional access policies |
> | microsoft.directory/crossTenantAccessPolicy/standard/read | Read basic properties of cross-tenant access policy |
> | microsoft.directory/crossTenantAccessPolicy/default/standard/read | Read basic properties of the default cross-tenant access policy |
> | microsoft.directory/crossTenantAccessPolicy/partners/standard/read | Read basic properties of cross-tenant access policy for partners |
+> | microsoft.directory/crossTenantAccessPolicy/partners/identitySynchronization/standard/read | Read basic properties of cross-tenant sync policy |
> | microsoft.directory/deviceManagementPolicies/standard/read | Read standard properties on device management application policies |
> | microsoft.directory/deviceRegistrationPolicy/standard/read | Read standard properties on device registration policies |
> | microsoft.directory/privilegedIdentityManagement/allProperties/read | Read all resources in Privileged Identity Management |
Users with this role **cannot** do the following:
> | microsoft.directory/servicePrincipals/synchronization/standard/read | Read provisioning settings associated with your service principal |
> | microsoft.directory/signInReports/allProperties/read | Read all properties on sign-in reports, including privileged properties |
> | microsoft.directory/subscribedSkus/allProperties/read | Read all properties of product subscriptions |
-> | microsoft.directory/users/allProperties/read | Read all properties of users |
+> | microsoft.directory/users/allProperties/read | Read all properties of users<br/>[![Privileged label icon.](./medi) |
> | microsoft.directory/verifiableCredentials/configuration/contracts/cards/allProperties/read | Read a verifiable credential card |
> | microsoft.directory/verifiableCredentials/configuration/contracts/allProperties/read | Read a verifiable credential contract |
> | microsoft.directory/verifiableCredentials/configuration/allProperties/read | Read configuration required to create and manage verifiable credentials |
Users with this role **cannot** do the following:
> | microsoft.commerce.billing/allEntities/allProperties/read | Read all resources of Office 365 billing |
> | microsoft.commerce.billing/purchases/standard/read | Read purchase services in M365 Admin Center. |
> | microsoft.edge/allEntities/allProperties/read | Read all aspects of Microsoft Edge |
+> | microsoft.networkAccess/allEntities/allProperties/read | Read all aspects of Entra Network Access |
> | microsoft.hardware.support/shippingAddress/allProperties/read | Read shipping addresses for Microsoft hardware warranty claims, including existing shipping addresses created by others |
> | microsoft.hardware.support/shippingStatus/allProperties/read | Read shipping status for open Microsoft hardware warranty claims |
> | microsoft.hardware.support/warrantyClaims/allProperties/read | Read Microsoft hardware warranty claims |
Users with this role **cannot** do the following:
> | microsoft.permissionsManagement/allEntities/allProperties/read | Read all aspects of Entra Permissions Management |
> | microsoft.teams/allEntities/allProperties/read | Read all properties of Microsoft Teams |
> | microsoft.virtualVisits/allEntities/allProperties/read | Read all aspects of Virtual Visits |
+> | microsoft.viva.goals/allEntities/allProperties/read | Read all aspects of Microsoft Viva Goals |
+> | microsoft.viva.pulse/allEntities/allProperties/read | Read all aspects of Microsoft Viva Pulse |
> | microsoft.windows.updatesDeployments/allEntities/allProperties/read | Read all aspects of Windows Update Service |

## Global Secure Access Administrator
Users with this role **cannot** do the following:
> | microsoft.directory/applications/policies/read | Read policies of applications |
> | microsoft.directory/applications/standard/read | Read standard properties of applications |
> | microsoft.directory/auditLogs/allProperties/read | Read all properties on audit logs, excluding custom security attributes audit logs |
-> | microsoft.directory/conditionalAccessPolicies/standard/read | Read Conditional Access for policies |
+> | microsoft.directory/conditionalAccessPolicies/standard/read | Read conditional access policies |
> | microsoft.directory/connectorGroups/allProperties/read | Read all properties of application proxy connector groups |
> | microsoft.directory/connectors/allProperties/read | Read all properties of application proxy connectors |
> | microsoft.directory/crossTenantAccessPolicy/default/standard/read | Read basic properties of the default cross-tenant access policy |
Users in this role can manage Azure Active Directory B2B guest user invitations
## Helpdesk Administrator
-Users with this role can change passwords, invalidate refresh tokens, create and manage support requests with Microsoft for Azure and Microsoft 365 services, and monitor service health. Invalidating a refresh token forces the user to sign in again. Whether a Helpdesk Administrator can reset a user's password and invalidate refresh tokens depends on the role the user is assigned. For a list of the roles that a Helpdesk Administrator can reset passwords for and invalidate refresh tokens, see [Who can reset passwords](#who-can-reset-passwords).
+[![Privileged label icon.](./medi)
+
+This is a [privileged role](privileged-roles-permissions.md). Users with this role can change passwords, invalidate refresh tokens, create and manage support requests with Microsoft for Azure and Microsoft 365 services, and monitor service health. Invalidating a refresh token forces the user to sign in again. Whether a Helpdesk Administrator can reset a user's password and invalidate refresh tokens depends on the role the user is assigned. For a list of the roles that a Helpdesk Administrator can reset passwords for and invalidate refresh tokens, see [Who can reset passwords](privileged-roles-permissions.md#who-can-reset-passwords).
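For illustration, a minimal sketch of the two privileged actions below as Microsoft Graph calls: resetting a password and forcing sign-out, assuming the target user's role allows the Helpdesk Administrator to act on them. The token, user identifier, and password are placeholders.

```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
headers = {"Authorization": "Bearer <access-token>"}  # placeholder token
user = "<user-id-or-upn>"  # placeholder

# users/password/update: set a temporary password that must be changed
# at next sign-in (subject to the "Who can reset passwords" hierarchy).
requests.patch(f"{GRAPH}/users/{user}", headers=headers, json={
    "passwordProfile": {
        "forceChangePasswordNextSignIn": True,
        "password": "<temporary-password>",  # placeholder
    }
})

# users/invalidateAllRefreshTokens: revoke existing sessions so stale
# tokens stop working (exposed in Graph v1.0 as revokeSignInSessions).
requests.post(f"{GRAPH}/users/{user}/revokeSignInSessions", headers=headers)
```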
Users with this role **cannot** do the following:
This role was previously named Password Administrator in the [Azure portal](../.
> [!div class="mx-tableFixed"] > | Actions | Description | > | | |
-> | microsoft.directory/bitlockerKeys/key/read | Read bitlocker metadata and key on devices |
+> | microsoft.directory/bitlockerKeys/key/read | Read bitlocker metadata and key on devices<br/>[![Privileged label icon.](./medi) |
> | microsoft.directory/deviceLocalCredentials/standard/read | Read all properties of the backed up local administrator account credentials for Azure AD joined devices, except the password |
-> | microsoft.directory/users/invalidateAllRefreshTokens | Force sign-out by invalidating user refresh tokens |
-> | microsoft.directory/users/password/update | Reset passwords for all users |
+> | microsoft.directory/users/invalidateAllRefreshTokens | Force sign-out by invalidating user refresh tokens<br/>[![Privileged label icon.](./medi) |
+> | microsoft.directory/users/password/update | Reset passwords for all users<br/>[![Privileged label icon.](./medi) |
> | microsoft.azure.serviceHealth/allEntities/allTasks | Read and configure Azure Service Health |
> | microsoft.azure.supportTickets/allEntities/allTasks | Create and manage Azure support tickets |
> | microsoft.office365.serviceHealth/allEntities/allTasks | Read and configure Service Health in the Microsoft 365 admin center |
This role was previously named Password Administrator in the [Azure portal](../.
## Hybrid Identity Administrator
-Users in this role can create, manage and deploy provisioning configuration setup from AD to Azure AD using Cloud Provisioning as well as manage Azure AD Connect, Pass-through Authentication (PTA), Password hash synchronization (PHS), Seamless Single Sign-On (Seamless SSO), and federation settings. Users can also troubleshoot and monitor logs using this role.
+[![Privileged label icon.](./medi)
+
+This is a [privileged role](privileged-roles-permissions.md). Users in this role can create, manage, and deploy provisioning configuration from AD to Azure AD using cloud provisioning, and can manage Azure AD Connect, Pass-through Authentication (PTA), Password hash synchronization (PHS), Seamless Single Sign-On (Seamless SSO), and federation settings. Users in this role can also troubleshoot and monitor logs.
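For illustration, a minimal sketch of inspecting the directory sync property this role can manage (the `organization/dirSync/update` action in the table below) via a read-only Microsoft Graph query; the token is a placeholder.

```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
headers = {"Authorization": "Bearer <access-token>"}  # placeholder token

# The organization/dirSync/update action gates the tenant-level
# onPremisesSyncEnabled flag; here we only read it and the last sync time.
org = requests.get(
    f"{GRAPH}/organization",
    headers=headers,
    params={"$select": "id,onPremisesSyncEnabled,onPremisesLastSyncDateTime"},
).json()["value"][0]
print(org.get("onPremisesSyncEnabled"), org.get("onPremisesLastSyncDateTime"))
```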
> [!div class="mx-tableFixed"] > | Actions | Description |
Users in this role can create, manage and deploy provisioning configuration setu
> | microsoft.directory/domains/federationConfiguration/basic/update | Update basic federation configuration for domains |
> | microsoft.directory/domains/federationConfiguration/create | Create federation configuration for domains |
> | microsoft.directory/domains/federationConfiguration/delete | Delete federation configuration for domains |
-> | microsoft.directory/hybridAuthenticationPolicy/allProperties/allTasks | Manage hybrid authentication policy in Azure AD |
+> | microsoft.directory/hybridAuthenticationPolicy/allProperties/allTasks | Manage hybrid authentication policy in Azure AD<br/>[![Privileged label icon.](./medi) |
> | microsoft.directory/organization/dirSync/update | Update the organization directory sync property |
> | microsoft.directory/passwordHashSync/allProperties/allTasks | Manage all aspects of Password Hash Synchronization (PHS) in Azure AD |
> | microsoft.directory/provisioningLogs/allProperties/read | Read all properties of provisioning logs |
Users in this role can create, manage and deploy provisioning configuration setu
> | microsoft.directory/servicePrincipals/synchronizationCredentials/manage | Manage application provisioning secrets and credentials |
> | microsoft.directory/servicePrincipals/synchronizationJobs/manage | Start, restart, and pause application provisioning synchronization jobs |
> | microsoft.directory/servicePrincipals/synchronizationSchema/manage | Create and manage application provisioning synchronization jobs and schema |
+> | microsoft.directory/servicePrincipals/appRoleAssignedTo/update | Update service principal role assignments |
> | microsoft.directory/servicePrincipals/audience/update | Update audience properties on service principals |
> | microsoft.directory/servicePrincipals/authentication/update | Update authentication properties on service principals |
> | microsoft.directory/servicePrincipals/basic/update | Update basic properties on service principals |
Users in this role can access a set of dashboards and insights via the Microsoft
## Intune Administrator
-Users with this role have global permissions within Microsoft Intune Online, when the service is present. Additionally, this role contains the ability to manage users and devices in order to associate policy, as well as create and manage groups. For more information, see [Role-based administration control (RBAC) with Microsoft Intune](/intune/fundamentals/role-based-access-control).
+[![Privileged label icon.](./medi)
+
+This is a [privileged role](privileged-roles-permissions.md). Users with this role have global permissions within Microsoft Intune Online, when the service is present. Additionally, this role contains the ability to manage users and devices in order to associate policy, as well as create and manage groups. For more information, see [Role-based administration control (RBAC) with Microsoft Intune](/intune/fundamentals/role-based-access-control).
This role can create and manage all security groups. However, the Intune Administrator does not have admin rights over Office groups, so the admin cannot update owners or memberships of Office groups in the organization. Administrators can still manage the Office groups they create themselves as part of their end-user privileges, and any such Office group (not security group) counts against their quota of 250. A sketch of creating a security group follows.
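For illustration, a minimal sketch of creating a security group (which this role can fully manage, and which is exempt from the 250 Office group quota) through Microsoft Graph; the token and group names are placeholders.

```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
headers = {"Authorization": "Bearer <access-token>"}  # placeholder token

# A security group (securityEnabled, not mail-enabled) can be used to
# target Intune policy and does not count against the Office group quota.
group = requests.post(f"{GRAPH}/groups", headers=headers, json={
    "displayName": "Intune policy targets",   # placeholder name
    "mailNickname": "intune-policy-targets",  # placeholder alias
    "mailEnabled": False,
    "securityEnabled": True,
}).json()
print(group.get("id"))
```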
This role can create and manage all security groups. However, Intune Administrat
> [!div class="mx-tableFixed"] > | Actions | Description | > | | |
-> | microsoft.directory/bitlockerKeys/key/read | Read bitlocker metadata and key on devices |
+> | microsoft.directory/bitlockerKeys/key/read | Read bitlocker metadata and key on devices<br/>[![Privileged label icon.](./medi) |
> | microsoft.directory/contacts/create | Create contacts |
> | microsoft.directory/contacts/delete | Delete contacts |
> | microsoft.directory/contacts/basic/update | Update basic properties on contacts |
Assign the Organizational Messages Writer role to users who need to do the follo
## Partner Tier1 Support
-Do not use. This role has been deprecated and will be removed from Azure AD in the future. This role is intended for use by a small number of Microsoft resale partners, and is not intended for general use.
+[![Privileged label icon.](./medi)
+
+This is a [privileged role](privileged-roles-permissions.md). Do not use. This role has been deprecated and will be removed from Azure AD in the future. This role is intended for use by a small number of Microsoft resale partners, and is not intended for general use.
> [!IMPORTANT]
> This role can reset passwords and invalidate refresh tokens for only non-administrators. This role should not be used because it is deprecated.
Do not use. This role has been deprecated and will be removed from Azure AD in t
> | microsoft.directory/applications/audience/update | Update the audience property for applications |
> | microsoft.directory/applications/authentication/update | Update authentication on all types of applications |
> | microsoft.directory/applications/basic/update | Update basic properties for applications |
-> | microsoft.directory/applications/credentials/update | Update application credentials |
+> | microsoft.directory/applications/credentials/update | Update application credentials<br/>[![Privileged label icon.](./medi) |
> | microsoft.directory/applications/notes/update | Update notes of applications |
> | microsoft.directory/applications/owners/update | Update owners of applications |
> | microsoft.directory/applications/permissions/update | Update exposed permissions and required permissions on all types of applications |
Do not use. This role has been deprecated and will be removed from Azure AD in t
> | microsoft.directory/groups/restore | Restore groups from soft-deleted container |
> | microsoft.directory/groups/members/update | Update members of Security groups and Microsoft 365 groups, excluding role-assignable groups |
> | microsoft.directory/groups/owners/update | Update owners of Security groups and Microsoft 365 groups, excluding role-assignable groups |
-> | microsoft.directory/oAuth2PermissionGrants/allProperties/allTasks | Create and delete OAuth 2.0 permission grants, and read and update all properties |
+> | microsoft.directory/oAuth2PermissionGrants/allProperties/allTasks | Create and delete OAuth 2.0 permission grants, and read and update all properties<br/>[![Privileged label icon.](./medi) |
> | microsoft.directory/servicePrincipals/appRoleAssignedTo/update | Update service principal role assignments |
> | microsoft.directory/users/assignLicense | Manage user licenses |
-> | microsoft.directory/users/create | Add users |
-> | microsoft.directory/users/delete | Delete users |
-> | microsoft.directory/users/disable | Disable users |
-> | microsoft.directory/users/enable | Enable users |
-> | microsoft.directory/users/invalidateAllRefreshTokens | Force sign-out by invalidating user refresh tokens |
+> | microsoft.directory/users/create | Add users<br/>[![Privileged label icon.](./medi) |
+> | microsoft.directory/users/delete | Delete users<br/>[![Privileged label icon.](./medi) |
+> | microsoft.directory/users/disable | Disable users<br/>[![Privileged label icon.](./medi) |
+> | microsoft.directory/users/enable | Enable users<br/>[![Privileged label icon.](./medi) |
+> | microsoft.directory/users/invalidateAllRefreshTokens | Force sign-out by invalidating user refresh tokens<br/>[![Privileged label icon.](./medi) |
> | microsoft.directory/users/restore | Restore deleted users |
> | microsoft.directory/users/basic/update | Update basic properties on users |
> | microsoft.directory/users/manager/update | Update manager for users |
-> | microsoft.directory/users/password/update | Reset passwords for all users |
+> | microsoft.directory/users/password/update | Reset passwords for all users<br/>[![Privileged label icon.](./medi) |
> | microsoft.directory/users/photo/update | Update photo of users |
-> | microsoft.directory/users/userPrincipalName/update | Update User Principal Name of users |
+> | microsoft.directory/users/userPrincipalName/update | Update User Principal Name of users<br/>[![Privileged label icon.](./medi) |
> | microsoft.azure.serviceHealth/allEntities/allTasks | Read and configure Azure Service Health |
> | microsoft.azure.supportTickets/allEntities/allTasks | Create and manage Azure support tickets |
> | microsoft.office365.serviceHealth/allEntities/allTasks | Read and configure Service Health in the Microsoft 365 admin center |
Do not use. This role has been deprecated and will be removed from Azure AD in t
## Partner Tier2 Support
-Do not use. This role has been deprecated and will be removed from Azure AD in the future. This role is intended for use by a small number of Microsoft resale partners, and is not intended for general use.
+[![Privileged label icon.](./medi)
+
+This is a [privileged role](privileged-roles-permissions.md). Do not use. This role has been deprecated and will be removed from Azure AD in the future. This role is intended for use by a small number of Microsoft resale partners, and is not intended for general use.
> [!IMPORTANT]
> This role can reset passwords and invalidate refresh tokens for all non-administrators and administrators (including Global Administrators). This role should not be used because it is deprecated.
Do not use. This role has been deprecated and will be removed from Azure AD in t
> | microsoft.directory/applications/audience/update | Update the audience property for applications |
> | microsoft.directory/applications/authentication/update | Update authentication on all types of applications |
> | microsoft.directory/applications/basic/update | Update basic properties for applications |
-> | microsoft.directory/applications/credentials/update | Update application credentials |
+> | microsoft.directory/applications/credentials/update | Update application credentials<br/>[![Privileged label icon.](./medi) |
> | microsoft.directory/applications/notes/update | Update notes of applications |
> | microsoft.directory/applications/owners/update | Update owners of applications |
> | microsoft.directory/applications/permissions/update | Update exposed permissions and required permissions on all types of applications |
Do not use. This role has been deprecated and will be removed from Azure AD in t
> | microsoft.directory/groups/restore | Restore groups from soft-deleted container |
> | microsoft.directory/groups/members/update | Update members of Security groups and Microsoft 365 groups, excluding role-assignable groups |
> | microsoft.directory/groups/owners/update | Update owners of Security groups and Microsoft 365 groups, excluding role-assignable groups |
-> | microsoft.directory/oAuth2PermissionGrants/allProperties/allTasks | Create and delete OAuth 2.0 permission grants, and read and update all properties |
+> | microsoft.directory/oAuth2PermissionGrants/allProperties/allTasks | Create and delete OAuth 2.0 permission grants, and read and update all properties<br/>[![Privileged label icon.](./medi) |
> | microsoft.directory/organization/basic/update | Update basic properties on organization |
> | microsoft.directory/roleAssignments/allProperties/allTasks | Create and delete role assignments, and read and update all role assignment properties |
> | microsoft.directory/roleDefinitions/allProperties/allTasks | Create and delete role definitions, and read and update all properties |
Do not use. This role has been deprecated and will be removed from Azure AD in t
> | microsoft.directory/servicePrincipals/appRoleAssignedTo/update | Update service principal role assignments |
> | microsoft.directory/subscribedSkus/standard/read | Read basic properties on subscriptions |
> | microsoft.directory/users/assignLicense | Manage user licenses |
-> | microsoft.directory/users/create | Add users |
-> | microsoft.directory/users/delete | Delete users |
-> | microsoft.directory/users/disable | Disable users |
-> | microsoft.directory/users/enable | Enable users |
-> | microsoft.directory/users/invalidateAllRefreshTokens | Force sign-out by invalidating user refresh tokens |
+> | microsoft.directory/users/create | Add users<br/>[![Privileged label icon.](./medi) |
+> | microsoft.directory/users/delete | Delete users<br/>[![Privileged label icon.](./medi) |
+> | microsoft.directory/users/disable | Disable users<br/>[![Privileged label icon.](./medi) |
+> | microsoft.directory/users/enable | Enable users<br/>[![Privileged label icon.](./medi) |
+> | microsoft.directory/users/invalidateAllRefreshTokens | Force sign-out by invalidating user refresh tokens<br/>[![Privileged label icon.](./medi) |
> | microsoft.directory/users/restore | Restore deleted users |
> | microsoft.directory/users/basic/update | Update basic properties on users |
> | microsoft.directory/users/manager/update | Update manager for users |
-> | microsoft.directory/users/password/update | Reset passwords for all users |
+> | microsoft.directory/users/password/update | Reset passwords for all users<br/>[![Privileged label icon.](./medi) |
> | microsoft.directory/users/photo/update | Update photo of users |
-> | microsoft.directory/users/userPrincipalName/update | Update User Principal Name of users |
+> | microsoft.directory/users/userPrincipalName/update | Update User Principal Name of users<br/>[![Privileged label icon.](./medi) |
> | microsoft.azure.serviceHealth/allEntities/allTasks | Read and configure Azure Service Health |
> | microsoft.azure.supportTickets/allEntities/allTasks | Create and manage Azure support tickets |
> | microsoft.office365.serviceHealth/allEntities/allTasks | Read and configure Service Health in the Microsoft 365 admin center |
Do not use. This role has been deprecated and will be removed from Azure AD in t
## Password Administrator
-Users with this role have limited ability to manage passwords. This role does not grant the ability to manage service requests or monitor service health. Whether a Password Administrator can reset a user's password depends on the role the user is assigned. For a list of the roles that a Password Administrator can reset passwords for, see [Who can reset passwords](#who-can-reset-passwords).
+[![Privileged label icon.](./medi)
+
+This is a [privileged role](privileged-roles-permissions.md). Users with this role have limited ability to manage passwords. This role does not grant the ability to manage service requests or monitor service health. Whether a Password Administrator can reset a user's password depends on the role the user is assigned. For a list of the roles that a Password Administrator can reset passwords for, see [Who can reset passwords](privileged-roles-permissions.md#who-can-reset-passwords).
Users with this role **cannot** do the following:
Users with this role **cannot** do the following:
> [!div class="mx-tableFixed"] > | Actions | Description | > | | |
-> | microsoft.directory/users/password/update | Reset passwords for all users |
+> | microsoft.directory/users/password/update | Reset passwords for all users<br/>[![Privileged label icon.](./medi) |
> | microsoft.office365.webPortal/allEntities/standard/read | Read basic properties on all resources in the Microsoft 365 admin center |

## Permissions Management Administrator
Users with this role can register printers and manage printer status in the Micr
## Privileged Authentication Administrator
-Assign the Privileged Authentication Administrator role to users who need to do the following:
+[![Privileged label icon.](./medi)
+
+This is a [privileged role](privileged-roles-permissions.md). Assign the Privileged Authentication Administrator role to users who need to do the following:
- Set or reset any authentication method (including passwords) for any user, including Global Administrators.-- Delete or restore any users, including Global Administrators. For more information, see [Who can perform sensitive actions](#who-can-perform-sensitive-actions).
+- Delete or restore any users, including Global Administrators. For more information, see [Who can perform sensitive actions](privileged-roles-permissions.md#who-can-perform-sensitive-actions).
- Force users to re-register against existing non-password credential (such as MFA or FIDO) and revoke **remember MFA on the device**, prompting for MFA on the next sign-in of all users.-- Update sensitive properties for all users. For more information, see [Who can perform sensitive actions](#who-can-perform-sensitive-actions).
+- Update sensitive properties for all users. For more information, see [Who can perform sensitive actions](privileged-roles-permissions.md#who-can-perform-sensitive-actions).
- Create and manage support tickets in Azure and the Microsoft 365 admin center. Users with this role **cannot** do the following:
Users with this role **cannot** do the following:
> [!div class="mx-tableFixed"] > | Actions | Description | > | | |
-> | microsoft.directory/users/authenticationMethods/create | Create authentication methods for users |
-> | microsoft.directory/users/authenticationMethods/delete | Delete authentication methods for users |
-> | microsoft.directory/users/authenticationMethods/standard/read | Read standard properties of authentication methods for users |
-> | microsoft.directory/users/authenticationMethods/basic/update | Update basic properties of authentication methods for users |
+> | microsoft.directory/users/authenticationMethods/create | Create authentication methods for users<br/>[![Privileged label icon.](./medi) |
+> | microsoft.directory/users/authenticationMethods/delete | Delete authentication methods for users<br/>[![Privileged label icon.](./medi) |
+> | microsoft.directory/users/authenticationMethods/standard/read | Read standard properties of authentication methods for users<br/>[![Privileged label icon.](./medi) |
+> | microsoft.directory/users/authenticationMethods/basic/update | Update basic properties of authentication methods for users<br/>[![Privileged label icon.](./medi) |
> | microsoft.directory/deletedItems.users/restore | Restore soft deleted users to original state |
-> | microsoft.directory/users/delete | Delete users |
-> | microsoft.directory/users/disable | Disable users |
-> | microsoft.directory/users/enable | Enable users |
-> | microsoft.directory/users/invalidateAllRefreshTokens | Force sign-out by invalidating user refresh tokens |
+> | microsoft.directory/users/delete | Delete users<br/>[![Privileged label icon.](./medi) |
+> | microsoft.directory/users/disable | Disable users<br/>[![Privileged label icon.](./medi) |
+> | microsoft.directory/users/enable | Enable users<br/>[![Privileged label icon.](./medi) |
+> | microsoft.directory/users/invalidateAllRefreshTokens | Force sign-out by invalidating user refresh tokens<br/>[![Privileged label icon.](./medi) |
> | microsoft.directory/users/restore | Restore deleted users | > | microsoft.directory/users/basic/update | Update basic properties on users | > | microsoft.directory/users/authorizationInfo/update | Update the multivalued Certificate user IDs property of users | > | microsoft.directory/users/manager/update | Update manager for users |
-> | microsoft.directory/users/password/update | Reset passwords for all users |
-> | microsoft.directory/users/userPrincipalName/update | Update User Principal Name of users |
+> | microsoft.directory/users/password/update | Reset passwords for all users<br/>[![Privileged label icon.](./medi) |
+> | microsoft.directory/users/userPrincipalName/update | Update User Principal Name of users<br/>[![Privileged label icon.](./medi) |
> | microsoft.azure.serviceHealth/allEntities/allTasks | Read and configure Azure Service Health | > | microsoft.azure.supportTickets/allEntities/allTasks | Create and manage Azure support tickets | > | microsoft.office365.serviceHealth/allEntities/allTasks | Read and configure Service Health in the Microsoft 365 admin center |
Users with this role **cannot** do the following:
## Privileged Role Administrator
-Users with this role can manage role assignments in Azure Active Directory, as well as within Azure AD Privileged Identity Management. They can create and manage groups that can be assigned to Azure AD roles. In addition, this role allows management of all aspects of Privileged Identity Management and administrative units.
+[![Privileged label icon.](./medi)
+
+This is a [privileged role](privileged-roles-permissions.md). Users with this role can manage role assignments in Azure Active Directory, as well as within Azure AD Privileged Identity Management. They can create and manage groups that can be assigned to Azure AD roles. In addition, this role allows management of all aspects of Privileged Identity Management and administrative units.
> [!IMPORTANT] > This role grants the ability to manage assignments for all Azure AD roles including the Global Administrator role. This role does not include any other privileged abilities in Azure AD like creating or updating users. However, users assigned to this role can grant themselves or others additional privilege by assigning additional roles.
Users with this role can manage role assignments in Azure Active Directory, as w
> | microsoft.directory/accessReviews/definitions.groupsAssignableToRoles/delete | Delete access reviews for membership in groups that are assignable to Azure AD roles | > | microsoft.directory/accessReviews/definitions.groups/allProperties/read | Read all properties of access reviews for membership in Security and Microsoft 365 groups, including role-assignable groups. | > | microsoft.directory/administrativeUnits/allProperties/allTasks | Create and manage administrative units (including members) |
-> | microsoft.directory/authorizationPolicy/allProperties/allTasks | Manage all aspects of authorization policy |
+> | microsoft.directory/authorizationPolicy/allProperties/allTasks | Manage all aspects of authorization policy<br/>[![Privileged label icon.](./medi) |
> | microsoft.directory/directoryRoles/allProperties/allTasks | Create and delete directory roles, and read and update all properties | > | microsoft.directory/groupsAssignableToRoles/create | Create role-assignable groups | > | microsoft.directory/groupsAssignableToRoles/delete | Delete role-assignable groups | > | microsoft.directory/groupsAssignableToRoles/restore | Restore role-assignable groups | > | microsoft.directory/groupsAssignableToRoles/allProperties/update | Update role-assignable groups |
-> | microsoft.directory/oAuth2PermissionGrants/allProperties/allTasks | Create and delete OAuth 2.0 permission grants, and read and update all properties |
+> | microsoft.directory/oAuth2PermissionGrants/allProperties/allTasks | Create and delete OAuth 2.0 permission grants, and read and update all properties<br/>[![Privileged label icon.](./medi) |
> | microsoft.directory/privilegedIdentityManagement/allProperties/allTasks | Create and delete all resources, and read and update standard properties in Privileged Identity Management | > | microsoft.directory/roleAssignments/allProperties/allTasks | Create and delete role assignments, and read and update all role assignment properties | > | microsoft.directory/roleDefinitions/allProperties/allTasks | Create and delete role definitions, and read and update all properties |
Users in this role can create, manage, and delete content for Microsoft Search i
## Security Administrator
-Users with this role have permissions to manage security-related features in the Microsoft 365 Defender portal, Azure Active Directory Identity Protection, Azure Active Directory Authentication, Azure Information Protection, and Microsoft Purview compliance portal. For more information about Office 365 permissions, see [Roles and role groups in Microsoft Defender for Office 365 and Microsoft Purview compliance](/microsoft-365/security/office-365-security/scc-permissions).
+[![Privileged label icon.](./medi)
+
+This is a [privileged role](privileged-roles-permissions.md). Users with this role have permissions to manage security-related features in the Microsoft 365 Defender portal, Azure Active Directory Identity Protection, Azure Active Directory Authentication, Azure Information Protection, and Microsoft Purview compliance portal. For more information about Office 365 permissions, see [Roles and role groups in Microsoft Defender for Office 365 and Microsoft Purview compliance](/microsoft-365/security/office-365-security/scc-permissions).
In | Can do |
Azure Advanced Threat Protection | Monitor and respond to suspicious security ac
> | microsoft.directory/applications/policies/update | Update policies of applications | > | microsoft.directory/auditLogs/allProperties/read | Read all properties on audit logs, excluding custom security attributes audit logs | > | microsoft.directory/authorizationPolicy/standard/read | Read standard properties of authorization policy |
-> | microsoft.directory/bitlockerKeys/key/read | Read bitlocker metadata and key on devices |
+> | microsoft.directory/bitlockerKeys/key/read | Read bitlocker metadata and key on devices<br/>[![Privileged label icon.](./medi) |
> | microsoft.directory/crossTenantAccessPolicy/standard/read | Read basic properties of cross-tenant access policy | > | microsoft.directory/crossTenantAccessPolicy/allowedCloudEndpoints/update | Update allowed cloud endpoints of cross-tenant access policy | > | microsoft.directory/crossTenantAccessPolicy/basic/update | Update basic settings of cross-tenant access policy |
Azure Advanced Threat Protection | Monitor and respond to suspicious security ac
> | microsoft.directory/crossTenantAccessPolicy/partners/b2bDirectConnect/update | Update Azure AD B2B direct connect settings of cross-tenant access policy for partners | > | microsoft.directory/crossTenantAccessPolicy/partners/crossCloudMeetings/update | Update cross-cloud Teams meeting settings of cross-tenant access policy for partners | > | microsoft.directory/crossTenantAccessPolicy/partners/tenantRestrictions/update | Update tenant restrictions of cross-tenant access policy for partners |
+> | microsoft.directory/crossTenantAccessPolicy/partners/identitySynchronization/create | Create cross-tenant sync policy for partners |
+> | microsoft.directory/crossTenantAccessPolicy/partners/identitySynchronization/basic/update | Update basic settings of cross-tenant sync policy |
+> | microsoft.directory/crossTenantAccessPolicy/partners/identitySynchronization/standard/read | Read basic properties of cross-tenant sync policy |
> | microsoft.directory/deviceLocalCredentials/standard/read | Read all properties of the backed up local administrator account credentials for Azure AD joined devices, except the password | > | microsoft.directory/domains/federation/update | Update federation property of domains | > | microsoft.directory/domains/federationConfiguration/standard/read | Read standard properties of federation configuration for domains |
Azure Advanced Threat Protection | Monitor and respond to suspicious security ac
> | microsoft.directory/domains/federationConfiguration/delete | Delete federation configuration for domains | > | microsoft.directory/entitlementManagement/allProperties/read | Read all properties in Azure AD entitlement management | > | microsoft.directory/identityProtection/allProperties/read | Read all resources in Azure AD Identity Protection |
-> | microsoft.directory/identityProtection/allProperties/update | Update all resources in Azure AD Identity Protection |
+> | microsoft.directory/identityProtection/allProperties/update | Update all resources in Azure AD Identity Protection<br/>[![Privileged label icon.](./medi) |
> | microsoft.directory/namedLocations/create | Create custom rules that define network locations | > | microsoft.directory/namedLocations/delete | Delete custom rules that define network locations | > | microsoft.directory/namedLocations/standard/read | Read basic properties of custom rules that define network locations | > | microsoft.directory/namedLocations/basic/update | Update basic properties of custom rules that define network locations | > | microsoft.directory/policies/create | Create policies in Azure AD | > | microsoft.directory/policies/delete | Delete policies in Azure AD |
-> | microsoft.directory/policies/basic/update | Update basic properties on policies |
+> | microsoft.directory/policies/basic/update | Update basic properties on policies<br/>[![Privileged label icon.](./medi) |
> | microsoft.directory/policies/owners/update | Update owners of policies | > | microsoft.directory/policies/tenantDefault/update | Update default organization policies |
-> | microsoft.directory/conditionalAccessPolicies/create | Create Conditional Access policies |
-> | microsoft.directory/conditionalAccessPolicies/delete | Delete Conditional Access policies |
-> | microsoft.directory/conditionalAccessPolicies/standard/read | Read Conditional Access for policies |
-> | microsoft.directory/conditionalAccessPolicies/owners/read | Read the owners of Conditional Access policies |
-> | microsoft.directory/conditionalAccessPolicies/policyAppliedTo/read | Read the "applied to" property for Conditional Access policies |
-> | microsoft.directory/conditionalAccessPolicies/basic/update | Update basic properties for Conditional Access policies |
-> | microsoft.directory/conditionalAccessPolicies/owners/update | Update owners for Conditional Access policies |
-> | microsoft.directory/conditionalAccessPolicies/tenantDefault/update | Update the default tenant for Conditional Access policies |
+> | microsoft.directory/conditionalAccessPolicies/create | Create conditional access policies |
+> | microsoft.directory/conditionalAccessPolicies/delete | Delete conditional access policies |
+> | microsoft.directory/conditionalAccessPolicies/standard/read | Read conditional access for policies |
+> | microsoft.directory/conditionalAccessPolicies/owners/read | Read the owners of conditional access policies |
+> | microsoft.directory/conditionalAccessPolicies/policyAppliedTo/read | Read the "applied to" property for conditional access policies |
+> | microsoft.directory/conditionalAccessPolicies/basic/update | Update basic properties for conditional access policies |
+> | microsoft.directory/conditionalAccessPolicies/owners/update | Update owners for conditional access policies |
+> | microsoft.directory/conditionalAccessPolicies/tenantDefault/update | Update the default tenant for conditional access policies |
> | microsoft.directory/privilegedIdentityManagement/allProperties/read | Read all resources in Privileged Identity Management | > | microsoft.directory/provisioningLogs/allProperties/read | Read all properties of provisioning logs |
-> | microsoft.directory/resourceNamespaces/resourceActions/authenticationContext/update | Update Conditional Access authentication context of Microsoft 365 role-based access control (RBAC) resource actions |
+> | microsoft.directory/resourceNamespaces/resourceActions/authenticationContext/update | Update Conditional Access authentication context of Microsoft 365 role-based access control (RBAC) resource actions<br/>[![Privileged label icon.](./medi) |
> | microsoft.directory/servicePrincipals/policies/update | Update policies of service principals | > | microsoft.directory/signInReports/allProperties/read | Read all properties on sign-in reports, including privileged properties | > | microsoft.azure.serviceHealth/allEntities/allTasks | Read and configure Azure Service Health | > | microsoft.azure.supportTickets/allEntities/allTasks | Create and manage Azure support tickets |
+> | microsoft.networkAccess/allEntities/allProperties/allTasks | Manage all aspects of Entra Network Access |
> | microsoft.office365.protectionCenter/allEntities/standard/read | Read standard properties of all resources in the Security and Compliance centers | > | microsoft.office365.protectionCenter/allEntities/basic/update | Update basic properties of all resources in the Security and Compliance centers | > | microsoft.office365.protectionCenter/attackSimulator/payload/allProperties/allTasks | Create and manage attack payloads in Attack Simulator |
Azure Advanced Threat Protection | Monitor and respond to suspicious security ac
## Security Operator
-Users with this role can manage alerts and have global read-only access on security-related features, including all information in Microsoft 365 Defender portal, Azure Active Directory, Identity Protection, Privileged Identity Management and Microsoft Purview compliance portal. For more information about Office 365 permissions, see [Roles and role groups in Microsoft Defender for Office 365 and Microsoft Purview compliance](/microsoft-365/security/office-365-security/scc-permissions).
+[![Privileged label icon.](./medi)
+
+This is a [privileged role](privileged-roles-permissions.md). Users with this role can manage alerts and have global read-only access on security-related features, including all information in Microsoft 365 Defender portal, Azure Active Directory, Identity Protection, Privileged Identity Management and Microsoft Purview compliance portal. For more information about Office 365 permissions, see [Roles and role groups in Microsoft Defender for Office 365 and Microsoft Purview compliance](/microsoft-365/security/office-365-security/scc-permissions).
| In | Can do | | | |
Users with this role can manage alerts and have global read-only access on secur
> | microsoft.directory/auditLogs/allProperties/read | Read all properties on audit logs, excluding custom security attributes audit logs | > | microsoft.directory/authorizationPolicy/standard/read | Read standard properties of authorization policy | > | microsoft.directory/cloudAppSecurity/allProperties/allTasks | Create and delete all resources, and read and update standard properties in Microsoft Defender for Cloud Apps |
-> | microsoft.directory/identityProtection/allProperties/allTasks | Create and delete all resources, and read and update standard properties in Azure AD Identity Protection |
+> | microsoft.directory/identityProtection/allProperties/allTasks | Create and delete all resources, and read and update standard properties in Azure AD Identity Protection<br/>[![Privileged label icon.](./medi) |
> | microsoft.directory/privilegedIdentityManagement/allProperties/read | Read all resources in Privileged Identity Management | > | microsoft.directory/provisioningLogs/allProperties/read | Read all properties of provisioning logs | > | microsoft.directory/signInReports/allProperties/read | Read all properties on sign-in reports, including privileged properties |
Users with this role can manage alerts and have global read-only access on secur
## Security Reader
-Users with this role have global read-only access on security-related feature, including all information in Microsoft 365 Defender portal, Azure Active Directory, Identity Protection, Privileged Identity Management, as well as the ability to read Azure Active Directory sign-in reports and audit logs, and in Microsoft Purview compliance portal. For more information about Office 365 permissions, see [Roles and role groups in Microsoft Defender for Office 365 and Microsoft Purview compliance](/microsoft-365/security/office-365-security/scc-permissions).
+[![Privileged label icon.](./medi)
+
+This is a [privileged role](privileged-roles-permissions.md). Users with this role have global read-only access on security-related features, including all information in Microsoft 365 Defender portal, Azure Active Directory, Identity Protection, Privileged Identity Management, as well as the ability to read Azure Active Directory sign-in reports and audit logs, and the Microsoft Purview compliance portal. For more information about Office 365 permissions, see [Roles and role groups in Microsoft Defender for Office 365 and Microsoft Purview compliance](/microsoft-365/security/office-365-security/scc-permissions).
In | Can do |
In | Can do
> | microsoft.directory/accessReviews/definitions/allProperties/read | Read all properties of access reviews of all reviewable resources in Azure AD | > | microsoft.directory/auditLogs/allProperties/read | Read all properties on audit logs, excluding custom security attributes audit logs | > | microsoft.directory/authorizationPolicy/standard/read | Read standard properties of authorization policy |
-> | microsoft.directory/bitlockerKeys/key/read | Read bitlocker metadata and key on devices |
+> | microsoft.directory/bitlockerKeys/key/read | Read bitlocker metadata and key on devices<br/>[![Privileged label icon.](./medi) |
> | microsoft.directory/deviceLocalCredentials/standard/read | Read all properties of the backed up local administrator account credentials for Azure AD joined devices, except the password | > | microsoft.directory/domains/federationConfiguration/standard/read | Read standard properties of federation configuration for domains | > | microsoft.directory/entitlementManagement/allProperties/read | Read all properties in Azure AD entitlement management |
In | Can do
> | microsoft.directory/policies/standard/read | Read basic properties on policies | > | microsoft.directory/policies/owners/read | Read owners of policies | > | microsoft.directory/policies/policyAppliedTo/read | Read policies.policyAppliedTo property |
-> | microsoft.directory/conditionalAccessPolicies/standard/read | Read Conditional Access for policies |
-> | microsoft.directory/conditionalAccessPolicies/owners/read | Read the owners of Conditional Access policies |
-> | microsoft.directory/conditionalAccessPolicies/policyAppliedTo/read | Read the "applied to" property for Conditional Access policies |
+> | microsoft.directory/conditionalAccessPolicies/standard/read | Read conditional access for policies |
+> | microsoft.directory/conditionalAccessPolicies/owners/read | Read the owners of conditional access policies |
+> | microsoft.directory/conditionalAccessPolicies/policyAppliedTo/read | Read the "applied to" property for conditional access policies |
> | microsoft.directory/privilegedIdentityManagement/allProperties/read | Read all resources in Privileged Identity Management | > | microsoft.directory/provisioningLogs/allProperties/read | Read all properties of provisioning logs | > | microsoft.directory/signInReports/allProperties/read | Read all properties on sign-in reports, including privileged properties | > | microsoft.azure.serviceHealth/allEntities/allTasks | Read and configure Azure Service Health |
+> | microsoft.networkAccess/allEntities/allProperties/read | Read all aspects of Entra Network Access |
> | microsoft.office365.protectionCenter/allEntities/standard/read | Read standard properties of all resources in the Security and Compliance centers | > | microsoft.office365.protectionCenter/attackSimulator/payload/allProperties/read | Read all properties of attack payloads in Attack Simulator | > | microsoft.office365.protectionCenter/attackSimulator/reports/allProperties/read | Read reports of attack simulation, responses, and associated training |
Users with this role can access tenant level aggregated data and associated insi
## User Administrator
-Assign the User Administrator role to users who need to do the following:
+[![Privileged label icon.](./medi)
+
+This is a [privileged role](privileged-roles-permissions.md). Assign the User Administrator role to users who need to do the following:
| Permission | More information | | | | | Create users | |
-| Update most user properties for all users, including all administrators | [Who can perform sensitive actions](#who-can-perform-sensitive-actions) |
-| Update sensitive properties (including user principal name) for some users | [Who can perform sensitive actions](#who-can-perform-sensitive-actions) |
-| Disable or enable some users | [Who can perform sensitive actions](#who-can-perform-sensitive-actions) |
-| Delete or restore some users | [Who can perform sensitive actions](#who-can-perform-sensitive-actions) |
+| Update most user properties for all users, including all administrators | [Who can perform sensitive actions](privileged-roles-permissions.md#who-can-perform-sensitive-actions) |
+| Update sensitive properties (including user principal name) for some users | [Who can perform sensitive actions](privileged-roles-permissions.md#who-can-perform-sensitive-actions) |
+| Disable or enable some users | [Who can perform sensitive actions](privileged-roles-permissions.md#who-can-perform-sensitive-actions) |
+| Delete or restore some users | [Who can perform sensitive actions](privileged-roles-permissions.md#who-can-perform-sensitive-actions) |
| Create and manage user views | | | Create and manage all groups | | | Assign and read licenses for all users, including all administrators | |
-| Reset passwords | [Who can reset passwords](#who-can-reset-passwords) |
-| Invalidate refresh tokens | [Who can reset passwords](#who-can-reset-passwords) |
+| Reset passwords | [Who can reset passwords](privileged-roles-permissions.md#who-can-reset-passwords) |
+| Invalidate refresh tokens | [Who can reset passwords](privileged-roles-permissions.md#who-can-reset-passwords) |
| Update (FIDO) device keys | | | Update password expiration policies | | | Create and manage support tickets in Azure and the Microsoft 365 admin center | |
Users with this role **cannot** do the following:
> | microsoft.directory/groups/owners/update | Update owners of Security groups and Microsoft 365 groups, excluding role-assignable groups | > | microsoft.directory/groups/settings/update | Update settings of groups | > | microsoft.directory/groups/visibility/update | Update the visibility property of Security groups and Microsoft 365 groups, excluding role-assignable groups |
-> | microsoft.directory/oAuth2PermissionGrants/allProperties/allTasks | Create and delete OAuth 2.0 permission grants, and read and update all properties |
+> | microsoft.directory/oAuth2PermissionGrants/allProperties/allTasks | Create and delete OAuth 2.0 permission grants, and read and update all properties<br/>[![Privileged label icon.](./medi) |
> | microsoft.directory/policies/standard/read | Read basic properties on policies | > | microsoft.directory/servicePrincipals/appRoleAssignedTo/update | Update service principal role assignments | > | microsoft.directory/users/assignLicense | Manage user licenses |
-> | microsoft.directory/users/create | Add users |
-> | microsoft.directory/users/delete | Delete users |
-> | microsoft.directory/users/disable | Disable users |
-> | microsoft.directory/users/enable | Enable users |
+> | microsoft.directory/users/create | Add users<br/>[![Privileged label icon.](./medi) |
+> | microsoft.directory/users/delete | Delete users<br/>[![Privileged label icon.](./medi) |
+> | microsoft.directory/users/disable | Disable users<br/>[![Privileged label icon.](./medi) |
+> | microsoft.directory/users/enable | Enable users<br/>[![Privileged label icon.](./medi) |
> | microsoft.directory/users/inviteGuest | Invite guest users |
-> | microsoft.directory/users/invalidateAllRefreshTokens | Force sign-out by invalidating user refresh tokens |
+> | microsoft.directory/users/invalidateAllRefreshTokens | Force sign-out by invalidating user refresh tokens<br/>[![Privileged label icon.](./medi) |
> | microsoft.directory/users/reprocessLicenseAssignment | Reprocess license assignments for users | > | microsoft.directory/users/restore | Restore deleted users | > | microsoft.directory/users/basic/update | Update basic properties on users | > | microsoft.directory/users/manager/update | Update manager for users |
-> | microsoft.directory/users/password/update | Reset passwords for all users |
+> | microsoft.directory/users/password/update | Reset passwords for all users<br/>[![Privileged label icon.](./medi) |
> | microsoft.directory/users/photo/update | Update photo of users | > | microsoft.directory/users/sponsors/update | Update sponsors of users | > | microsoft.directory/users/usageLocation/update | Update usage location of users |
-> | microsoft.directory/users/userPrincipalName/update | Update User Principal Name of users |
+> | microsoft.directory/users/userPrincipalName/update | Update User Principal Name of users<br/>[![Privileged label icon.](./medi) |
> | microsoft.azure.serviceHealth/allEntities/allTasks | Read and configure Azure Service Health | > | microsoft.azure.supportTickets/allEntities/allTasks | Create and manage Azure support tickets | > | microsoft.office365.serviceHealth/allEntities/allTasks | Read and configure Service Health in the Microsoft 365 admin center |
For more information, see [Roles and permissions in Viva Goals](/viva/goals/role
> | | | > | microsoft.office365.supportTickets/allEntities/allTasks | Create and manage Microsoft 365 service requests | > | microsoft.office365.webPortal/allEntities/standard/read | Read basic properties on all resources in the Microsoft 365 admin center |
+> | microsoft.viva.goals/allEntities/allProperties/allTasks | Manage all aspects of Microsoft Viva Goals |
## Viva Pulse Administrator
Assign the Yammer Administrator role to users who need to do the following tasks
- View announcements in the Message center, but not security announcements - View service health
-[Learn more](/Viva/engage/eac-key-admin-roles-permissions)
+[Learn more](/viva/engage/eac-key-admin-roles-permissions)
> [!div class="mx-tableFixed"] > | Actions | Description |
Assign the Yammer Administrator role to users who need to do the following tasks
> | microsoft.office365.webPortal/allEntities/standard/read | Read basic properties on all resources in the Microsoft 365 admin center | > | microsoft.office365.yammer/allEntities/allProperties/allTasks | Manage all aspects of Yammer |
-## How to understand role permissions
-
-The schema for permissions loosely follows the REST format of Microsoft Graph:
-
-`<namespace>/<entity>/<propertySet>/<action>`
-
-For example:
-
-`microsoft.directory/applications/credentials/update`
-
-| Permission element | Description |
-| | |
-| namespace | Product or service that exposes the task and is prepended with `microsoft`. For example, all tasks in Azure AD use the `microsoft.directory` namespace. |
-| entity | Logical feature or component exposed by the service in Microsoft Graph. For example, Azure AD exposes User and Groups, OneNote exposes Notes, and Exchange exposes Mailboxes and Calendars. There is a special `allEntities` keyword for specifying all entities in a namespace. This is often used in roles that grant access to an entire product. |
-| propertySet | Specific properties or aspects of the entity for which access is being granted. For example, `microsoft.directory/applications/authentication/read` grants the ability to read the reply URL, logout URL, and implicit flow property on the application object in Azure AD.<ul><li>`allProperties` designates all properties of the entity, including privileged properties.</li><li>`standard` designates common properties, but excludes privileged ones related to `read` action. For example, `microsoft.directory/user/standard/read` includes the ability to read standard properties like public phone number and email address, but not the private secondary phone number or email address used for multifactor authentication.</li><li>`basic` designates common properties, but excludes privileged ones related to the `update` action. The set of properties that you can read may be different from what you can update. That's why there are `standard` and `basic` keywords to reflect that.</li></ul> |
-| action | Operation being granted, most typically create, read, update, or delete (CRUD). There is a special `allTasks` keyword for specifying all of the above abilities (create, read, update, and delete). |
- ## Deprecated roles The following roles should not be used. They have been deprecated and will be removed from Azure AD in the future.
The following roles should not be used. They have been deprecated and will be re
Not every role returned by PowerShell or MS Graph API is visible in Azure portal. The following table organizes those differences.
-API name | Azure portal name | Notes
| - | -
-Device Join | Deprecated | [Deprecated roles documentation](#deprecated-roles)
-Device Managers | Deprecated | [Deprecated roles documentation](#deprecated-roles)
-Device Users | Deprecated | [Deprecated roles documentation](#deprecated-roles)
-Directory Synchronization Accounts | Not shown because it shouldn't be used | [Directory Synchronization Accounts documentation](#directory-synchronization-accounts)
-Guest User | Not shown because it can't be used | NA
-Partner Tier 1 Support | Not shown because it shouldn't be used | [Partner Tier1 Support documentation](#partner-tier1-support)
-Partner Tier 2 Support | Not shown because it shouldn't be used | [Partner Tier2 Support documentation](#partner-tier2-support)
-Restricted Guest User | Not shown because it can't be used | NA
-User | Not shown because it can't be used | NA
-Workplace Device Join | Deprecated | [Deprecated roles documentation](#deprecated-roles)
-
-## Who can reset passwords
-
-In the following table, the columns list the roles that can reset passwords and invalidate refresh tokens. The rows list the roles for which their password can be reset.
-
-The following table is for roles assigned at the scope of a tenant. For roles assigned at the scope of an administrative unit, [further restrictions apply](admin-units-assign-roles.md#roles-that-can-be-assigned-with-administrative-unit-scope).
-
-Role that password can be reset | Password Admin | Helpdesk Admin | Auth Admin | User Admin | Privileged Auth Admin | Global Admin
- | | | | | |
-Auth Admin | &nbsp; | &nbsp; | :heavy_check_mark: | &nbsp; | :heavy_check_mark: | :heavy_check_mark:
-Directory Readers | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark:
-Global Admin | &nbsp; | &nbsp; | &nbsp; | &nbsp; | :heavy_check_mark: | :heavy_check_mark:\*
-Groups Admin | &nbsp; | &nbsp; | &nbsp; | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark:
-Guest Inviter | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark:
-Helpdesk Admin | &nbsp; | :heavy_check_mark: | &nbsp; | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark:
-Message Center Reader | &nbsp; | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark:
-Password Admin | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark:
-Privileged Auth Admin | &nbsp; | &nbsp; | &nbsp; | &nbsp; | :heavy_check_mark: | :heavy_check_mark:
-Privileged Role Admin | &nbsp; | &nbsp; | &nbsp; | &nbsp; | :heavy_check_mark: | :heavy_check_mark:
-Reports Reader | &nbsp; | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark:
-User<br/>(no admin role) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark:
-User<br/>(no admin role, but member or owner of a [role-assignable group](groups-concept.md)) | &nbsp; | &nbsp; | &nbsp; | &nbsp; | :heavy_check_mark: | :heavy_check_mark:
-User with a role scoped to a [restricted management administrative unit](./admin-units-restricted-management.md) | &nbsp; | &nbsp; | &nbsp; | &nbsp; | :heavy_check_mark: | :heavy_check_mark:
-User Admin | &nbsp; | &nbsp; | &nbsp; | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark:
-Usage Summary Reports Reader | &nbsp; | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark:
-All custom roles | | | | | :heavy_check_mark: | :heavy_check_mark:
-
-> [!IMPORTANT]
-> The [Partner Tier2 Support](#partner-tier2-support) role can reset passwords and invalidate refresh tokens for all non-administrators and administrators (including Global Administrators). The [Partner Tier1 Support](#partner-tier1-support) role can reset passwords and invalidate refresh tokens for only non-administrators. These roles should not be used because they are deprecated.
-
-The ability to reset a password includes the ability to update the following sensitive properties required for [self-service password reset](../authentication/concept-sspr-howitworks.md):
-- businessPhones-- mobilePhone-- otherMails-
-## Who can perform sensitive actions
-
-Some administrators can perform the following sensitive actions for some users. All users can read the sensitive properties.
-
-| Sensitive action | Sensitive property name |
-| | |
-| Disable or enable users | `accountEnabled` |
-| Update business phone | `businessPhones` |
-| Update mobile phone | `mobilePhone` |
-| Update on-premises immutable ID | `onPremisesImmutableId` |
-| Update other emails | `otherMails` |
-| Update password profile | `passwordProfile` |
-| Update user principal name | `userPrincipalName` |
-| Delete or restore users | Not applicable |
-
-In the following table, the columns list the roles that can perform sensitive actions. The rows list the roles for which the sensitive action can be performed upon.
-
-The following table is for roles assigned at the scope of a tenant. For roles assigned at the scope of an administrative unit, [further restrictions apply](admin-units-assign-roles.md#roles-that-can-be-assigned-with-administrative-unit-scope).
-
-Role that sensitive action can be performed upon | Auth Admin | User Admin | Privileged Auth Admin | Global Admin
- | | | |
-Auth Admin | :heavy_check_mark: | &nbsp; | :heavy_check_mark: | :heavy_check_mark:
-Directory Readers | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark:
-Global Admin | &nbsp; | &nbsp; | :heavy_check_mark: | :heavy_check_mark:
-Groups Admin | &nbsp; | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark:
-Guest Inviter | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark:
-Helpdesk Admin | &nbsp; | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark:
-Message Center Reader | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark:
-Password Admin | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark:
-Privileged Auth Admin | &nbsp; | &nbsp; | :heavy_check_mark: | :heavy_check_mark:
-Privileged Role Admin | &nbsp; | &nbsp; | :heavy_check_mark: | :heavy_check_mark:
-Reports Reader | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark:
-User<br/>(no admin role) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark:
-User<br/>(no admin role, but member or owner of a [role-assignable group](groups-concept.md)) | &nbsp; | &nbsp; | :heavy_check_mark: | :heavy_check_mark:
-User with a role scoped to a [restricted management administrative unit](./admin-units-restricted-management.md) | &nbsp; | &nbsp; | :heavy_check_mark: | :heavy_check_mark:
-User Admin | &nbsp; | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark:
-Usage Summary Reports Reader | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark:
-All custom roles | | | :heavy_check_mark: | :heavy_check_mark:
+| API name | Azure portal name | Notes |
+| | | |
+| Device Join | Deprecated | [Deprecated roles documentation](#deprecated-roles) |
+| Device Managers | Deprecated | [Deprecated roles documentation](#deprecated-roles) |
+| Device Users | Deprecated | [Deprecated roles documentation](#deprecated-roles) |
+| Directory Synchronization Accounts | Not shown because it shouldn't be used | [Directory Synchronization Accounts documentation](#directory-synchronization-accounts) |
+| Guest User | Not shown because it can't be used | NA |
+| Partner Tier 1 Support | Not shown because it shouldn't be used | [Partner Tier1 Support documentation](#partner-tier1-support) |
+| Partner Tier 2 Support | Not shown because it shouldn't be used | [Partner Tier2 Support documentation](#partner-tier2-support) |
+| Restricted Guest User | Not shown because it can't be used | NA |
+| User | Not shown because it can't be used | NA |
+| Workplace Device Join | Deprecated | [Deprecated roles documentation](#deprecated-roles) |
## Next steps
active-directory Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/prerequisites.md
Last updated 03/17/2022 -+
active-directory Privileged Roles Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/privileged-roles-permissions.md
+
+ Title: Privileged roles and permissions in Azure AD (preview) - Azure Active Directory
+description: Privileged roles and permissions in Azure Active Directory.
+++++++ Last updated : 09/01/2023++++
+# Privileged roles and permissions in Azure AD (preview)
+
+> [!IMPORTANT]
+> Privileged roles and permissions are currently in PREVIEW.
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+Azure Active Directory (Azure AD) has roles and permissions that are identified as privileged. These roles and permissions can be used to delegate management of directory resources to other users or to make either network or data security configuration changes. Privileged role assignments can lead to elevation of privilege if not used in a secure and intended manner. Because privileged roles and permissions can pose a security threat, they should be used with caution. This article describes privileged roles and permissions and best practices for how to use them.
+
+## Which roles and permissions are privileged?
+
+For a list of privileged roles and permissions, see [Azure AD built-in roles](permissions-reference.md). You can also use the Microsoft Entra admin center, Microsoft Graph PowerShell, or the Microsoft Graph API to find roles, permissions, and role assignments that are identified as privileged.
+
+# [Admin center](#tab/admin-center)
+
+In the Microsoft Entra admin center, look for the **PRIVILEGED** label.
+
+![Privileged label icon.](./media/permissions-reference/privileged-label.png)
+
+On the **Roles and administrators** page, privileged roles are identified in the **Privileged** column. The **Assignments** column lists the number of role assignments. You can also filter privileged roles.
++
+When you view the permissions for a privileged role, you can see which permissions are privileged. If you view the permissions as a default user, you won't be able to see which permissions are privileged.
++
+When you create a custom role, you can see which permissions are privileged and the custom role will be labeled as privileged.
++
+# [PowerShell](#tab/ms-powershell)
+
+In Microsoft Graph PowerShell, check whether the `IsPrivileged` property is set to `True`.
+
+To list privileged roles, use the [Get-MgBetaRoleManagementDirectoryRoleDefinition](/powershell/module/Microsoft.Graph.Beta.Identity.Governance/Get-MgBetaRoleManagementDirectoryRoleDefinition) command.
+
+```powershell
+Get-MgBetaRoleManagementDirectoryRoleDefinition -Filter "isPrivileged eq true" | Format-List
+```
+
+```Output
+AllowedPrincipalTypes :
+Description : Can manage all aspects of Azure AD and Microsoft services that use Azure AD identities.
+DisplayName : Global Administrator
+Id : 62e90394-69f5-4237-9190-012177145e10
+InheritsPermissionsFrom : {88d8e3e3-8f55-4a1e-953a-9b9898b8876b}
+IsBuiltIn : True
+IsEnabled : True
+IsPrivileged : True
+ResourceScopes : {/}
+RolePermissions : {Microsoft.Graph.Beta.PowerShell.Models.MicrosoftGraphUnifiedRolePermission}
+TemplateId : 62e90394-69f5-4237-9190-012177145e10
+Version : 1
+AdditionalProperties : {[inheritsPermissionsFrom@odata.context, https://graph.microsoft.com/beta/$metadata#roleManag
+ ement/directory/roleDefinitions('62e90394-69f5-4237-9190-012177145e10')/inheritsPermissionsFr
+ om]}
+
+AllowedPrincipalTypes :
+Description : Can manage all aspects of users and groups, including resetting passwords for limited admins.
+DisplayName : User Administrator
+Id : fe930be7-5e62-47db-91af-98c3a49a38b1
+InheritsPermissionsFrom : {88d8e3e3-8f55-4a1e-953a-9b9898b8876b}
+IsBuiltIn : True
+IsEnabled : True
+IsPrivileged : True
+ResourceScopes : {/}
+RolePermissions : {Microsoft.Graph.Beta.PowerShell.Models.MicrosoftGraphUnifiedRolePermission}
+TemplateId : fe930be7-5e62-47db-91af-98c3a49a38b1
+Version : 1
+AdditionalProperties : {[inheritsPermissionsFrom@odata.context, https://graph.microsoft.com/beta/$metadata#roleManag
+ ement/directory/roleDefinitions('fe930be7-5e62-47db-91af-98c3a49a38b1')/inheritsPermissionsFr
+ om]}
+
+...
+```
+
+To list privileged permissions, use the [Get-MgBetaRoleManagementDirectoryResourceNamespaceResourceAction](/powershell/module/Microsoft.Graph.Beta.Identity.Governance/Get-MgBetaRoleManagementDirectoryResourceNamespaceResourceAction) command.
+
+```powershell
+Get-MgBetaRoleManagementDirectoryResourceNamespaceResourceAction -UnifiedRbacResourceNamespaceId "microsoft.directory" -Filter "isPrivileged eq true" | Format-List
+```
+
+```Output
+ActionVerb : PATCH
+AuthenticationContext : Microsoft.Graph.Beta.PowerShell.Models.MicrosoftGraphAuthenticationContextClassReference
+AuthenticationContextId :
+Description : Update all properties (including privileged properties) on single-directory applications
+Id : microsoft.directory-applications.myOrganization-allProperties-update-patch
+IsAuthenticationContextSettable :
+IsPrivileged : True
+Name : microsoft.directory/applications.myOrganization/allProperties/update
+ResourceScope : Microsoft.Graph.Beta.PowerShell.Models.MicrosoftGraphUnifiedRbacResourceScope
+ResourceScopeId :
+AdditionalProperties : {}
+
+ActionVerb : PATCH
+AuthenticationContext : Microsoft.Graph.Beta.PowerShell.Models.MicrosoftGraphAuthenticationContextClassReference
+AuthenticationContextId :
+Description : Update credentials on single-directory applications
+Id : microsoft.directory-applications.myOrganization-credentials-update-patch
+IsAuthenticationContextSettable :
+IsPrivileged : True
+Name : microsoft.directory/applications.myOrganization/credentials/update
+ResourceScope : Microsoft.Graph.Beta.PowerShell.Models.MicrosoftGraphUnifiedRbacResourceScope
+ResourceScopeId :
+AdditionalProperties : {}
+
+...
+```
+
+To list privileged role assignments, use the [Get-MgBetaRoleManagementDirectoryRoleAssignment](/powershell/module/Microsoft.Graph.Beta.Identity.Governance/Get-MgBetaRoleManagementDirectoryRoleAssignment) command.
+
+```powershell
+Get-MgBetaRoleManagementDirectoryRoleAssignment -ExpandProperty "roleDefinition" -Filter "roleDefinition/isPrivileged eq true" | Format-List
+```
+
+```Output
+AppScope : Microsoft.Graph.Beta.PowerShell.Models.MicrosoftGraphAppScope
+AppScopeId :
+Condition :
+DirectoryScope : Microsoft.Graph.Beta.PowerShell.Models.MicrosoftGraphDirectoryObject
+DirectoryScopeId : /
+Id : <Id>
+Principal : Microsoft.Graph.Beta.PowerShell.Models.MicrosoftGraphDirectoryObject
+PrincipalId : <PrincipalId>
+PrincipalOrganizationId : <PrincipalOrganizationId>
+ResourceScope : /
+RoleDefinition : Microsoft.Graph.Beta.PowerShell.Models.MicrosoftGraphUnifiedRoleDefinition
+RoleDefinitionId : 62e90394-69f5-4237-9190-012177145e10
+AdditionalProperties : {}
+
+AppScope : Microsoft.Graph.Beta.PowerShell.Models.MicrosoftGraphAppScope
+AppScopeId :
+Condition :
+DirectoryScope : Microsoft.Graph.Beta.PowerShell.Models.MicrosoftGraphDirectoryObject
+DirectoryScopeId : /
+Id : <Id>
+Principal : Microsoft.Graph.Beta.PowerShell.Models.MicrosoftGraphDirectoryObject
+PrincipalId : <PrincipalId>
+PrincipalOrganizationId : <PrincipalOrganizationId>
+ResourceScope : /
+RoleDefinition : Microsoft.Graph.Beta.PowerShell.Models.MicrosoftGraphUnifiedRoleDefinition
+RoleDefinitionId : 62e90394-69f5-4237-9190-012177145e10
+AdditionalProperties : {}
+
+...
+```
+
+# [Graph API](#tab/ms-graph)
+
+In the Microsoft Graph API, check whether the `isPrivileged` property is set to `true`.
+
+To list privileged roles, use the [List roleDefinitions](/graph/api/rbacapplication-list-roledefinitions?view=graph-rest-beta&preserve-view=true) API.
+
+```http
+GET https://graph.microsoft.com/beta/roleManagement/directory/roleDefinitions?$filter=isPrivileged eq true
+```
+
+**Response**
+
+```http
+HTTP/1.1 200 OK
+Content-type: application/json
+
+{
+ "@odata.context": "https://graph.microsoft.com/beta/$metadata#roleManagement/directory/roleDefinitions",
+ "value": [
+ {
+ "id": "aaf43236-0c0d-4d5f-883a-6955382ac081",
+ "description": "Can manage secrets for federation and encryption in the Identity Experience Framework (IEF).",
+ "displayName": "B2C IEF Keyset Administrator",
+ "isBuiltIn": true,
+ "isEnabled": true,
+ "isPrivileged": true,
+ "resourceScopes": [
+ "/"
+ ],
+ "templateId": "aaf43236-0c0d-4d5f-883a-6955382ac081",
+ "version": "1",
+ "rolePermissions": [
+ {
+ "allowedResourceActions": [
+ "microsoft.directory/b2cTrustFrameworkKeySet/allProperties/allTasks"
+ ],
+ "condition": null
+ }
+ ],
+ "inheritsPermissionsFrom@odata.context": "https://graph.microsoft.com/beta/$metadata#roleManagement/directory/roleDefinitions('aaf43236-0c0d-4d5f-883a-6955382ac081')/inheritsPermissionsFrom",
+ "inheritsPermissionsFrom": [
+ {
+ "id": "88d8e3e3-8f55-4a1e-953a-9b9898b8876b"
+ }
+ ]
+ },
+ {
+ "id": "be2f45a1-457d-42af-a067-6ec1fa63bc45",
+ "description": "Can configure identity providers for use in direct federation.",
+ "displayName": "External Identity Provider Administrator",
+ "isBuiltIn": true,
+ "isEnabled": true,
+ "isPrivileged": true,
+ "resourceScopes": [
+ "/"
+ ],
+ "templateId": "be2f45a1-457d-42af-a067-6ec1fa63bc45",
+ "version": "1",
+ "rolePermissions": [
+ {
+ "allowedResourceActions": [
+ "microsoft.directory/domains/federation/update",
+ "microsoft.directory/identityProviders/allProperties/allTasks"
+ ],
+ "condition": null
+ }
+ ],
+ "inheritsPermissionsFrom@odata.context": "https://graph.microsoft.com/beta/$metadata#roleManagement/directory/roleDefinitions('be2f45a1-457d-42af-a067-6ec1fa63bc45')/inheritsPermissionsFrom",
+ "inheritsPermissionsFrom": [
+ {
+ "id": "88d8e3e3-8f55-4a1e-953a-9b9898b8876b"
+ }
+ ]
+ }
+ ]
+}
+```
+
+To list privileged permissions, use the [List resourceActions](/graph/api/unifiedrbacresourcenamespace-list-resourceactions?view=graph-rest-beta&preserve-view=true) API.
+
+```http
+GET https://graph.microsoft.com/beta/roleManagement/directory/resourceNamespaces/microsoft.directory/resourceActions?$filter=isPrivileged eq true
+```
+
+**Response**
+
+```http
+HTTP/1.1 200 OK
+Content-Type: application/json
+
+{
+ "@odata.context": "https://graph.microsoft.com/beta/$metadata#roleManagement/directory/resourceNamespaces('microsoft.directory')/resourceActions",
+ "value": [
+ {
+ "actionVerb": "PATCH",
+ "description": "Update application credentials",
+ "id": "microsoft.directory-applications-credentials-update-patch",
+ "isPrivileged": true,
+ "name": "microsoft.directory/applications/credentials/update",
+ "resourceScopeId": null
+ },
+ {
+ "actionVerb": null,
+ "description": "Manage all aspects of authorization policy",
+ "id": "microsoft.directory-authorizationPolicy-allProperties-allTasks",
+ "isPrivileged": true,
+ "name": "microsoft.directory/authorizationPolicy/allProperties/allTasks",
+ "resourceScopeId": null
+ }
+ ]
+}
+```
+
+To list privileged role assignments, use the [List unifiedRoleAssignments](/graph/api/rbacapplication-list-roleassignments?view=graph-rest-beta&preserve-view=true) API.
+
+```http
+GET https://graph.microsoft.com/beta/roleManagement/directory/roleAssignments?$expand=roleDefinition&$filter=roleDefinition/isPrivileged eq true
+```
+
+**Response**
+
+```http
+HTTP/1.1 200 OK
+Content-type: application/json
+
+{
+ "@odata.context": "https://graph.microsoft.com/beta/$metadata#roleManagement/directory/roleAssignments(roleDefinition())",
+ "value": [
+ {
+ "id": "{id}",
+ "principalId": "{principalId}",
+ "principalOrganizationId": "{principalOrganizationId}",
+ "resourceScope": "/",
+ "directoryScopeId": "/",
+ "roleDefinitionId": "b1be1c3e-b65d-4f19-8427-f6fa0d97feb9",
+ "roleDefinition": {
+ "id": "b1be1c3e-b65d-4f19-8427-f6fa0d97feb9",
+ "description": "Can manage Conditional Access capabilities.",
+ "displayName": "Conditional Access Administrator",
+ "isBuiltIn": true,
+ "isEnabled": true,
+ "isPrivileged": true,
+ "resourceScopes": [
+ "/"
+ ],
+ "templateId": "b1be1c3e-b65d-4f19-8427-f6fa0d97feb9",
+ "version": "1",
+ "rolePermissions": [
+ {
+ "allowedResourceActions": [
+ "microsoft.directory/namedLocations/create",
+ "microsoft.directory/namedLocations/delete",
+ "microsoft.directory/namedLocations/standard/read",
+ "microsoft.directory/namedLocations/basic/update",
+ "microsoft.directory/conditionalAccessPolicies/create",
+ "microsoft.directory/conditionalAccessPolicies/delete",
+ "microsoft.directory/conditionalAccessPolicies/standard/read",
+ "microsoft.directory/conditionalAccessPolicies/owners/read",
+ "microsoft.directory/conditionalAccessPolicies/policyAppliedTo/read",
+ "microsoft.directory/conditionalAccessPolicies/basic/update",
+ "microsoft.directory/conditionalAccessPolicies/owners/update",
+ "microsoft.directory/conditionalAccessPolicies/tenantDefault/update"
+ ],
+ "condition": null
+ }
+ ]
+ }
+ },
+ {
+ "id": "{id}",
+ "principalId": "{principalId}",
+ "principalOrganizationId": "{principalOrganizationId}",
+ "resourceScope": "/",
+ "directoryScopeId": "/",
+ "roleDefinitionId": "c4e39bd9-1100-46d3-8c65-fb160da0071f",
+ "roleDefinition": {
+ "id": "c4e39bd9-1100-46d3-8c65-fb160da0071f",
+ "description": "Can access to view, set and reset authentication method information for any non-admin user.",
+ "displayName": "Authentication Administrator",
+ "isBuiltIn": true,
+ "isEnabled": true,
+ "isPrivileged": true,
+ "resourceScopes": [
+ "/"
+ ],
+ "templateId": "c4e39bd9-1100-46d3-8c65-fb160da0071f",
+ "version": "1",
+ "rolePermissions": [
+ {
+ "allowedResourceActions": [
+ "microsoft.directory/users/authenticationMethods/create",
+ "microsoft.directory/users/authenticationMethods/delete",
+ "microsoft.directory/users/authenticationMethods/standard/restrictedRead",
+ "microsoft.directory/users/authenticationMethods/basic/update",
+ "microsoft.directory/deletedItems.users/restore",
+ "microsoft.directory/users/delete",
+ "microsoft.directory/users/disable",
+ "microsoft.directory/users/enable",
+ "microsoft.directory/users/invalidateAllRefreshTokens",
+ "microsoft.directory/users/restore",
+ "microsoft.directory/users/basic/update",
+ "microsoft.directory/users/manager/update",
+ "microsoft.directory/users/password/update",
+ "microsoft.directory/users/userPrincipalName/update",
+ "microsoft.azure.serviceHealth/allEntities/allTasks",
+ "microsoft.azure.supportTickets/allEntities/allTasks",
+ "microsoft.office365.serviceHealth/allEntities/allTasks",
+ "microsoft.office365.supportTickets/allEntities/allTasks",
+ "microsoft.office365.webPortal/allEntities/standard/read"
+ ],
+ "condition": null
+ }
+ ]
+ }
+ }
+ ]
+}
+```
+++
+## Best practices for using privileged roles
+
+Here are some best practices for using privileged roles.
+
+- Apply the principle of least privilege
+- Use Privileged Identity Management to grant just-in-time access
+- Turn on multi-factor authentication for all your administrator accounts
+- Configure recurring access reviews to revoke unneeded permissions over time
+- Limit the number of Global Administrators to fewer than 5
+- Limit the number of privileged role assignments to fewer than 10
+
+If you have 5 or more privileged Global Administrator role assignments, a **Global Administrators** alert card is displayed on the Azure AD Overview page to help you monitor Global Administrator role assignments.
++
+If you exceed 10 privileged role assignments, a warning is displayed on the **Roles and administrators** page.
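+
+As a quick check outside the portal, you can count Global Administrator role assignments with the Microsoft Graph PowerShell commands shown earlier. The following is a minimal sketch, not official guidance: it assumes the Microsoft.Graph.Beta module is installed and that you have consented to the `RoleManagement.Read.Directory` permission, and it uses `62e90394-69f5-4237-9190-012177145e10`, the Global Administrator template ID shown in the output above.
+
+```powershell
+# Minimal sketch (see assumptions above): count Global Administrator role assignments.
+Connect-MgGraph -Scopes "RoleManagement.Read.Directory"
+
+# Template ID of the built-in Global Administrator role.
+$globalAdminRoleId = "62e90394-69f5-4237-9190-012177145e10"
+$assignments = Get-MgBetaRoleManagementDirectoryRoleAssignment -All -Filter "roleDefinitionId eq '$globalAdminRoleId'"
+
+Write-Host "Global Administrator role assignments: $(@($assignments).Count)"
+if (@($assignments).Count -ge 5) {
+    Write-Warning "Consider reducing the number of Global Administrators to fewer than 5."
+}
+```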
+
+For more information, see [Best practices for Azure AD roles](best-practices.md).
+
+## Privileged permissions versus protected actions
+
+Privileged permissions and protected actions are security-related capabilities that have different purposes. Permissions that have the **PRIVILEGED** label help you identify permissions that can lead to elevation of privilege if not used in a secure and intended manner. Protected actions are role permissions that have been assigned Conditional Access policies for added security, such as requiring multi-factor authentication. Conditional Access requirements are enforced when a user performs the protected action. Protected actions are currently in Preview. For more information, see [What are protected actions in Azure AD?](./protected-actions-overview.md).
+
+| Capability | Privileged permission | Protected action |
+| | | |
+| Identify permissions that should be used in a secure manner | :white_check_mark: | |
+| Require additional security to perform an action | | :white_check_mark: |
+
+## Terminology
+
+To understand privileged roles and permissions in Azure AD, it helps to know the following terminology.
+
+| Term | Definition |
+| | |
+| action | An activity a security principal can perform on an object type. Sometimes referred to as an operation. |
+| permission | A definition that specifies the activity a security principal can perform on an object type. A permission includes one or more actions. |
+| privileged permission | In Azure AD, permissions that can be used to delegate management of directory resources to other users or make either network or data security configuration changes. Privileged permissions can lead to elevation of privilege if not used in a secure and intended manner. |
+| privileged role | A built-in or custom role that has one or more privileged permissions. |
+| privileged role assignment | A role assignment that uses a privileged role. |
+| elevation of privilege | When a security principal obtains more permissions than their assigned role initially provided, for example by impersonating another role. |
+| protected action | Permissions with Conditional Access applied for added security. |
+
+## How to understand role permissions
+
+The schema for permissions loosely follows the REST format of Microsoft Graph:
+
+`<namespace>/<entity>/<propertySet>/<action>`
+
+For example:
+
+`microsoft.directory/applications/credentials/update`
+
+| Permission element | Description |
+| | |
+| namespace | Product or service that exposes the task and is prepended with `microsoft`. For example, all tasks in Azure AD use the `microsoft.directory` namespace. |
+| entity | Logical feature or component exposed by the service in Microsoft Graph. For example, Azure AD exposes Users and Groups, OneNote exposes Notes, and Exchange exposes Mailboxes and Calendars. There is a special `allEntities` keyword for specifying all entities in a namespace. This is often used in roles that grant access to an entire product. |
+| propertySet | Specific properties or aspects of the entity for which access is being granted. For example, `microsoft.directory/applications/authentication/read` grants the ability to read the reply URL, logout URL, and implicit flow property on the application object in Azure AD.<ul><li>`allProperties` designates all properties of the entity, including privileged properties.</li><li>`standard` designates common properties, but excludes privileged ones related to the `read` action. For example, `microsoft.directory/user/standard/read` includes the ability to read standard properties like public phone number and email address, but not the private secondary phone number or email address used for multifactor authentication.</li><li>`basic` designates common properties, but excludes privileged ones related to the `update` action. The set of properties that you can read may be different from what you can update. That's why there are separate `standard` and `basic` keywords.</li></ul> |
+| action | Operation being granted, most typically create, read, update, or delete (CRUD). There is a special `allTasks` keyword for specifying all of the above abilities (create, read, update, and delete). |
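+
+As a quick illustration of this schema, the following PowerShell sketch splits a permission string into its four elements. It's illustrative only and isn't part of any Azure AD module; it assumes a well-formed permission string.
+
+```powershell
+# Illustrative only: break an Azure AD permission string into its elements.
+$permission = "microsoft.directory/applications/credentials/update"
+$namespace, $entity, $propertySet, $action = $permission -split '/'
+
+"Namespace:   $namespace"     # microsoft.directory
+"Entity:      $entity"        # applications
+"PropertySet: $propertySet"   # credentials
+"Action:      $action"        # update
+```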
+
+## Compare authentication roles
++
+## Who can reset passwords
+
+In the following table, the columns list the roles that can reset passwords and invalidate refresh tokens. The rows list the roles whose passwords can be reset.
+
+The following table is for roles assigned at the scope of a tenant. For roles assigned at the scope of an administrative unit, [further restrictions apply](admin-units-assign-roles.md#roles-that-can-be-assigned-with-administrative-unit-scope).
+
+| Role that password can be reset | Password Admin | Helpdesk Admin | Auth Admin | User Admin | Privileged Auth Admin | Global Admin |
+| | | | | | | |
+| Auth Admin | &nbsp; | &nbsp; | :white_check_mark: | &nbsp; | :white_check_mark: | :white_check_mark: |
+| Directory Readers | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |
+| Global Admin | &nbsp; | &nbsp; | &nbsp; | &nbsp; | :white_check_mark: | :white_check_mark:\* |
+| Groups Admin | &nbsp; | &nbsp; | &nbsp; | :white_check_mark: | :white_check_mark: | :white_check_mark: |
+| Guest Inviter | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |
+| Helpdesk Admin | &nbsp; | :white_check_mark: | &nbsp; | :white_check_mark: | :white_check_mark: | :white_check_mark: |
+| Message Center Reader | &nbsp; | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |
+| Password Admin | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |
+| Privileged Auth Admin | &nbsp; | &nbsp; | &nbsp; | &nbsp; | :white_check_mark: | :white_check_mark: |
+| Privileged Role Admin | &nbsp; | &nbsp; | &nbsp; | &nbsp; | :white_check_mark: | :white_check_mark: |
+| Reports Reader | &nbsp; | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |
+| User<br/>(no admin role) | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |
+| User<br/>(no admin role, but member or owner of a [role-assignable group](groups-concept.md)) | &nbsp; | &nbsp; | &nbsp; | &nbsp; | :white_check_mark: | :white_check_mark: |
+| User with a role scoped to a [restricted management administrative unit](./admin-units-restricted-management.md) | &nbsp; | &nbsp; | &nbsp; | &nbsp; | :white_check_mark: | :white_check_mark: |
+| User Admin | &nbsp; | &nbsp; | &nbsp; | :white_check_mark: | :white_check_mark: | :white_check_mark: |
+| Usage Summary Reports Reader | &nbsp; | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |
+| All custom roles | | | | | :white_check_mark: | :white_check_mark: |
+
+> [!IMPORTANT]
+> The [Partner Tier2 Support](permissions-reference.md#partner-tier2-support) role can reset passwords and invalidate refresh tokens for all non-administrators and administrators (including Global Administrators). The [Partner Tier1 Support](permissions-reference.md#partner-tier1-support) role can reset passwords and invalidate refresh tokens for only non-administrators. These roles should not be used because they are deprecated.
+
+The ability to reset a password includes the ability to update the following sensitive properties required for [self-service password reset](../authentication/concept-sspr-howitworks.md):
+- businessPhones
+- mobilePhone
+- otherMails
+
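+For example, a holder of one of the roles in the table above can reset a password and invalidate refresh tokens with Microsoft Graph PowerShell. The following is a minimal sketch: the user ID and password value are placeholders, and the permission scopes shown are an assumption that may differ in your tenant.
+
+```powershell
+# Minimal sketch (placeholders throughout): reset a password, then revoke refresh tokens.
+Connect-MgGraph -Scopes "User-PasswordProfile.ReadWrite.All", "User.RevokeSessions.All"
+
+# Reset the password and force a change at next sign-in.
+Update-MgUser -UserId "<user-object-id>" -PasswordProfile @{
+    Password                      = "<temporary-password>"
+    ForceChangePasswordNextSignIn = $true
+}
+
+# Invalidate existing refresh tokens so current sessions must sign in again.
+Revoke-MgUserSignInSession -UserId "<user-object-id>"
+```
+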
+## Who can perform sensitive actions
+
+Some administrators can perform the following sensitive actions for some users. All users can read the sensitive properties.
+
+| Sensitive action | Sensitive property name |
+| | |
+| Disable or enable users | `accountEnabled` |
+| Update business phone | `businessPhones` |
+| Update mobile phone | `mobilePhone` |
+| Update on-premises immutable ID | `onPremisesImmutableId` |
+| Update other emails | `otherMails` |
+| Update password profile | `passwordProfile` |
+| Update user principal name | `userPrincipalName` |
+| Delete or restore users | Not applicable |
+
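+As a minimal sketch, two of these sensitive actions expressed with Microsoft Graph PowerShell (the user object ID and the new UPN are placeholders):
+
+```powershell
+# Minimal sketch (placeholders): block sign-in, then update the user principal name.
+Update-MgUser -UserId "<user-object-id>" -AccountEnabled:$false
+Update-MgUser -UserId "<user-object-id>" -UserPrincipalName "new.alias@contoso.com"
+```
+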
+In the following table, the columns list the roles that can perform sensitive actions. The rows list the roles on which the sensitive actions can be performed.
+
+The following table is for roles assigned at the scope of a tenant. For roles assigned at the scope of an administrative unit, [further restrictions apply](admin-units-assign-roles.md#roles-that-can-be-assigned-with-administrative-unit-scope).
+
+| Role that sensitive action can be performed upon | Auth Admin | User Admin | Privileged Auth Admin | Global Admin |
+| | | | | |
+| Auth Admin | :white_check_mark: | &nbsp; | :white_check_mark: | :white_check_mark: |
+| Directory Readers | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |
+| Global Admin | &nbsp; | &nbsp; | :white_check_mark: | :white_check_mark: |
+| Groups Admin | &nbsp; | :white_check_mark: | :white_check_mark: | :white_check_mark: |
+| Guest Inviter | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |
+| Helpdesk Admin | &nbsp; | :white_check_mark: | :white_check_mark: | :white_check_mark: |
+| Message Center Reader | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |
+| Password Admin | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |
+| Privileged Auth Admin | &nbsp; | &nbsp; | :white_check_mark: | :white_check_mark: |
+| Privileged Role Admin | &nbsp; | &nbsp; | :white_check_mark: | :white_check_mark: |
+| Reports Reader | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |
+| User<br/>(no admin role) | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |
+| User<br/>(no admin role, but member or owner of a [role-assignable group](groups-concept.md)) | &nbsp; | &nbsp; | :white_check_mark: | :white_check_mark: |
+| User with a role scoped to a [restricted management administrative unit](./admin-units-restricted-management.md) | &nbsp; | &nbsp; | :white_check_mark: | :white_check_mark: |
+| User Admin | &nbsp; | :white_check_mark: | :white_check_mark: | :white_check_mark: |
+| Usage Summary Reports Reader | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |
+| All custom roles | | | :white_check_mark: | :white_check_mark: |
+
+## Next steps
+
+- [Best practices for Azure AD roles](best-practices.md)
+- [Azure AD built-in roles](permissions-reference.md)
active-directory Protected Actions Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/protected-actions-overview.md
+ Last updated 04/10/2023
active-directory Quickstart App Registration Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/quickstart-app-registration-limits.md
Last updated 02/04/2022 -+ # Quickstart: Grant permission to create unlimited app registrations
active-directory Role Definitions List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/role-definitions-list.md
Last updated 02/04/2022 -+
active-directory Security Planning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/security-planning.md
-+
active-directory View Assignments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/view-assignments.md
Last updated 04/15/2022 -+ # List Azure AD role assignments
This section describes how to list role assignments with single-application scope.
## PowerShell
-This section describes viewing assignments of a role with organization-wide scope. This article uses the [Azure Active Directory PowerShell Version 2](/powershell/module/azuread/#directory_roles) module. To view single-application scope assignments using PowerShell, you can use the cmdlets in [Assign custom roles with PowerShell](custom-assign-powershell.md).
+This section describes viewing assignments of a role with organization-wide scope. This article uses the [Microsoft Graph PowerShell](/powershell/microsoftgraph/overview) module. To view single-application scope assignments using PowerShell, you can use the cmdlets in [Assign custom roles with PowerShell](custom-assign-powershell.md).
-Use the [Get-AzureADMSRoleDefinition](/powershell/module/azuread/get-azureadmsroledefinition) and [Get-AzureADMSRoleAssignment](/powershell/module/azuread/get-azureadmsroleassignment) commands to list role assignments.
+Use the [Get-MgRoleManagementDirectoryRoleDefinition](/powershell/module/microsoft.graph.identity.governance/get-mgrolemanagementdirectoryroledefinition) and [Get-MgRoleManagementDirectoryRoleAssignment](/powershell/module/microsoft.graph.identity.governance/get-mgrolemanagementdirectoryroleassignment) commands to list role assignments.
The following example shows how to list the role assignments for the [Groups Administrator](permissions-reference.md#groups-administrator) role.

```powershell
# Fetch list of all directory roles with template ID
-Get-AzureADMSRoleDefinition
+Get-MgRoleManagementDirectoryRoleDefinition
# Fetch a specific directory role by ID
-$role = Get-AzureADMSRoleDefinition -Id "fdd7a751-b60b-444a-984c-02652fe8fa1c"
+$role = Get-MgRoleManagementDirectoryRoleDefinition -UnifiedRoleDefinitionId fdd7a751-b60b-444a-984c-02652fe8fa1c
# Fetch membership for a role
-Get-AzureADMSRoleAssignment -Filter "roleDefinitionId eq '$($role.Id)'"
+Get-MgRoleManagementDirectoryRoleAssignment -Filter "roleDefinitionId eq '$($role.Id)'"
```

```Example
-RoleDefinitionId PrincipalId DirectoryScopeId
-- -- -
-fdd7a751-b60b-444a-984c-02652fe8fa1c 04f632c3-8065-4466-9e30-e71ec81b3c36 /administrativeUnits/3883b136-67f0-412c-9b...
+Id PrincipalId RoleDefinitionId DirectoryScopeId AppScop
+ eId
+-- -- - - -
+lAPpYvVpN0KRkAEhdxReEH2Fs3EjKm1BvSKkcYVN2to-1 71b3857d-2a23-416d-bd22-a471854ddada 62e90394-69f5-4237-9190-012177145e10 /
+lAPpYvVpN0KRkAEhdxReEMdXLf2tIs1ClhpzQPsutrQ-1 fd2d57c7-22ad-42cd-961a-7340fb2eb6b4 62e90394-69f5-4237-9190-012177145e10 /
```

The following example shows how to list all active role assignments across all roles, including built-in and custom roles (currently in Preview).

```powershell
-$roles = Get-AzureADMSRoleDefinition
+$roles = Get-MgRoleManagementDirectoryRoleDefinition
foreach ($role in $roles) {
- Get-AzureADMSRoleAssignment -Filter "roleDefinitionId eq '$($role.Id)'"
+ Get-MgRoleManagementDirectoryRoleAssignment -Filter "roleDefinitionId eq '$($role.Id)'"
}
```

```Example
-RoleDefinitionId PrincipalId DirectoryScopeId Id
-- -- - --
-e8611ab8-c189-46e8-94e1-60213ab1f814 9f9fb383-3148-46a7-9cec-5bf93f8a879c / uB2o6InB6EaU4WAhOrH4FHwni...
-e8611ab8-c189-46e8-94e1-60213ab1f814 027c8aba-2e94-49a8-974b-401e5838b2a0 / uB2o6InB6EaU4WAhOrH4FEqdn...
-fdd7a751-b60b-444a-984c-02652fe8fa1c 04f632c3-8065-4466-9e30-e71ec81b3c36 /administrati... UafX_Qu2SkSYTAJlL-j6HL5Dr...
-...
+Id PrincipalId RoleDefinitionId DirectoryScopeId AppScop
+ eId
+-- -- - - -
+lAPpYvVpN0KRkAEhdxReEH2Fs3EjKm1BvSKkcYVN2to-1 71b3857d-2a23-416d-bd22-a471854ddada 62e90394-69f5-4237-9190-012177145e10 /
+lAPpYvVpN0KRkAEhdxReEMdXLf2tIs1ClhpzQPsutrQ-1 fd2d57c7-22ad-42cd-961a-7340fb2eb6b4 62e90394-69f5-4237-9190-012177145e10 /
+4-PYiFWPHkqVOpuYmLiHa3ibEcXLJYtFq5x3Kkj2TkA-1 c5119b78-25cb-458b-ab9c-772a48f64e40 88d8e3e3-8f55-4a1e-953a-9b9898b8876b /
+4-PYiFWPHkqVOpuYmLiHa2hXf3b8iY5KsVFjHNXFN4c-1 767f5768-89fc-4a8e-b151-631cd5c53787 88d8e3e3-8f55-4a1e-953a-9b9898b8876b /
+BSub0kaAukSHWB4mGC_PModww03rMgNOkpK77ePhDnI-1 4dc37087-32eb-4e03-9292-bbede3e10e72 d29b2b05-8046-44ba-8758-1e26182fcf32 /
+BSub0kaAukSHWB4mGC_PMgzOWSgXj8FHusA4iaaTyaI-1 2859ce0c-8f17-47c1-bac0-3889a693c9a2 d29b2b05-8046-44ba-8758-1e26182fcf32 /
```

## Microsoft Graph API
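
The same assignments can also be read directly from Microsoft Graph. As a minimal sketch (it reuses the Groups Administrator role definition ID from the example above; the scope shown is an assumption), you can call the role assignments endpoint with `Invoke-MgGraphRequest`:

```powershell
# Sketch: list assignments for one role through the Microsoft Graph v1.0 endpoint.
Connect-MgGraph -Scopes "RoleManagement.Read.Directory"

$uri = "https://graph.microsoft.com/v1.0/roleManagement/directory/roleAssignments" +
       "?`$filter=roleDefinitionId eq 'fdd7a751-b60b-444a-984c-02652fe8fa1c'"
Invoke-MgGraphRequest -Method GET -Uri $uri
```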
active-directory Adobe Identity Management Provisioning Oidc Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/adobe-identity-management-provisioning-oidc-tutorial.md
This section guides you through the steps to configure the Azure AD provisioning
|name.givenName|String||
|name.familyName|String||
|urn:ietf:params:scim:schemas:extension:Adobe:2.0:User:emailAliases|String||
+ |urn:ietf:params:scim:schemas:extension:Adobe:2.0:User:eduRole|String||
+
+ > [!NOTE]
+ > The **eduRole** field accepts values such as `Teacher` or `Student`; any other value is ignored.
1. Under the **Mappings** section, select **Synchronize Azure Active Directory Groups to Adobe Identity Management (OIDC)**.
Once you've configured provisioning, use the following resources to monitor your deployment:

* Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully
* Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion
* Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully * Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion
-* If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
+* If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
+
+## Change log
+08/15/2023 - Added support for Schema Discovery.
## More resources
active-directory Adobe Identity Management Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/adobe-identity-management-provisioning-tutorial.md
The scenario outlined in this tutorial assumes that you already have the followi
> [!NOTE]
> If your organization uses the User Sync Tool or a UMAPI integration, you must first pause the integration. Then, add Azure AD automatic provisioning to automate user management from the Azure portal. Once Azure AD automatic provisioning is configured and running, you can completely remove the User Sync Tool or UMAPI integration.
+> [!NOTE]
+> This integration is also available to use from Azure AD US Government Cloud environment. You can find this application in the Azure AD US Government Cloud Application Gallery and configure it in the same way as you do from public cloud.
+
## Step 1. Plan your provisioning deployment
1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md).
2. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
This section guides you through the steps to configure the Azure AD provisioning
9. Review the user attributes that are synchronized from Azure AD to Adobe Identity Management in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Adobe Identity Management for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you will need to ensure that the Adobe Identity Management API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
- |Attribute|Type|
- |||
- |userName|String|
- |emails[type eq "work"].value|String|
- |active|Boolean|
- |addresses[type eq "work"].country|String|
- |name.givenName|String|
- |name.familyName|String|
- |urn:ietf:params:scim:schemas:extension:Adobe:2.0:User:emailAliases|String|
+ |Attribute|Type|Supported for filtering|Required by Adobe Identity Management
+ |||||
+ |userName|String|&check;|&check;
+ |active|Boolean||
+ |emails[type eq "work"].value|String||
+ |addresses[type eq "work"].country|String||
+ |name.givenName|String||
+ |name.familyName|String||
+ |urn:ietf:params:scim:schemas:extension:Adobe:2.0:User:emailAliases|String||
+ |urn:ietf:params:scim:schemas:extension:Adobe:2.0:User:eduRole|String||
+
+ > [!NOTE]
+ > The **eduRole** field accepts values such as `Teacher` or `Student`; any other value is ignored.
10. Under the **Mappings** section, select **Synchronize Azure Active Directory Groups to Adobe Identity Management**.

11. Review the group attributes that are synchronized from Azure AD to Adobe Identity Management in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the groups in Adobe Identity Management for update operations. Select the **Save** button to commit any changes.
- |Attribute|Type|
- |||
- |displayName|String|
- |members|Reference|
+ |Attribute|Type|Supported for filtering|Required by Adobe Identity Management
+ |||||
+ |displayName|String|&check;|&check;
+ |members|Reference||
12. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
This operation starts the initial synchronization cycle of all users and groups
## Step 6. Monitor your deployment

Once you've configured provisioning, use the following resources to monitor your deployment:
-1. Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully
-2. Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion
-3. If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
+* Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully.
+* Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion.
+* If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
+
+## Change log
+* 07/18/2023 - The app was added to Gov Cloud.
+* 08/15/2023 - Added support for Schema Discovery.
-## Additional resources
+## More resources
* [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md) * [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
active-directory Azure Databricks With Private Link Workspace Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/azure-databricks-with-private-link-workspace-provisioning-tutorial.md
+
+ Title: Azure AD on-premises app provisioning to Azure Databricks with Private Link Workspace
+description: This article describes how to use the Azure AD provisioning service to provision users into Azure Databricks with Private Link Workspace.
++++++ Last updated : 08/10/2023++++
+# Microsoft Entra ID Application Provisioning to Azure Databricks with Private Link Workspace
+
+The Azure Active Directory (Azure AD) provisioning service supports a [SCIM 2.0](https://techcommunity.microsoft.com/t5/identity-standards-blog/provisioning-with-scim-getting-started/ba-p/880010) client that can be used to automatically provision users into cloud or on-premises applications. This article outlines how you can use the Azure AD provisioning service to provision users into Azure Databricks workspaces with no public access.
+
+[ ![Diagram that shows SCIM architecture.](media/azure-databricks-with-private-link-workspace-provisioning-tutorial/scim-architecture.png)](media/azure-databricks-with-private-link-workspace-provisioning-tutorial/scim-architecture.png#lightbox)
+
+## Prerequisites
+* An Azure AD tenant with Microsoft Entra ID Governance and Azure AD Premium P1 or Premium P2 (or EMS E3 or E5). To find the right license for your requirements, see [Compare generally available features of Azure AD](https://www.microsoft.com/security/business/microsoft-entra-pricing).
+* Administrator role for installing the agent. This is a one-time task, and the account used should be either a Hybrid Identity Administrator or a Global Administrator.
+* Administrator role for configuring the application in the cloud (application administrator, cloud application administrator, global administrator, or a custom role with permissions).
+* A computer with at least 3 GB of RAM, to host a provisioning agent. The computer should have Windows Server 2016 or a later version of Windows Server, with connectivity to the target application, and with outbound connectivity to login.microsoftonline.com, other Microsoft Online Services and Azure domains. An example is a Windows Server 2016 virtual machine hosted in Azure IaaS or behind a proxy.
+
+## Download, install, and configure the Azure AD Connect Provisioning Agent Package
+
+If you have already downloaded the provisioning agent and configured it for another on-premises application, then continue reading in the next section.
+
+ 1. In the Azure portal, select **Azure Active Directory**.
+ 1. On the left, select **Azure AD Connect**.
+ 1. On the left, select **Cloud sync**.
+ [![Screenshot of new UX screen.](media/azure-databricks-with-private-link-workspace-provisioning-tutorial/azure-active-directory-connect-new-ux.png)](media/azure-databricks-with-private-link-workspace-provisioning-tutorial/azure-active-directory-connect-new-ux.png#lightbox)
+
+ 1. On the left, select **Agent**.
+ 1. Select **Download on-premises agent**, and select **Accept terms & download**.
+ >[!NOTE]
+ >Use different provisioning agents for on-premises application provisioning and Azure AD Connect Cloud Sync / HR-driven provisioning. Don't manage all three scenarios on the same agent.
+ 1. Open the provisioning agent installer, agree to the terms of service, and select **next**.
+ 1. When the provisioning agent wizard opens, continue to the **Select Extension** tab and select **On-premises application provisioning** when prompted for the extension you want to enable.
+ 1. The provisioning agent uses the operating system's web browser to display a popup window for you to authenticate to Azure AD, and potentially also to your organization's identity provider. If you're using Internet Explorer as the browser on Windows Server, you may need to add Microsoft websites to your browser's trusted site list to allow JavaScript to run correctly.
+ 1. Provide credentials for an Azure AD administrator when you're prompted to authorize. The user is required to have the Hybrid Identity Administrator or Global Administrator role.
+ 1. Select **Confirm** to confirm the setting. Once installation is successful, you can select **Exit**, and also close the Provisioning Agent Package installer.
+
+## Provisioning to SCIM-enabled Workspace
+Once the agent is installed, no further configuration is necessary on-premises, and all provisioning configurations are then managed from the Azure portal.
+
+ 1. In the Azure portal, navigate to the Enterprise applications and add the **On-premises SCIM app** from the [gallery](../manage-apps/add-application-portal.md).
+ 1. From the left hand menu, navigate to the **Provisioning** option and select **Get started**.
+ 1. Select **Automatic** from the dropdown list and expand the **On-Premises Connectivity** option.
+ 1. Select the agent that you installed from the dropdown list and select **Assign Agent(s)**.
+ 1. Now either wait 10 minutes or restart the **Microsoft Azure AD Connect Provisioning Agent** before proceeding to the next step and testing the connection.
+ 1. In the **Tenant URL** field, provide the SCIM endpoint URL for your application. The URL is typically unique to each target application and must be resolvable by DNS. An example for a scenario where the agent is installed on the same host as the application is `https://localhost:8585/scim`.
+ ![Screenshot that shows assigning an agent.](media/azure-databricks-with-private-link-workspace-provisioning-tutorial//on-premises-assign-agents.png)
+
+ 1. Create an Admin Token in the Azure Databricks User Settings console and enter it in the **Secret Token** field.
+ 1. Select **Test Connection**, and save the credentials. The application SCIM endpoint must be actively listening for inbound provisioning requests; otherwise, the test fails. Use the steps [here](../app-provisioning/on-premises-ecma-troubleshoot.md#troubleshoot-test-connection-issues) if you run into connectivity issues. A quick manual check of the endpoint is sketched after these steps.
+ >[!NOTE]
+ > If the test connection fails, you will see the request made. Please note that while the URL in the test connection error message is truncated, the actual request sent to the application contains the entire URL provided above.
+
+ 1. Configure any [attribute mappings](../app-provisioning/customize-application-attributes.md) or [scoping](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md) rules required for your application.
+ 1. Add users to scope by [assigning users and groups](../manage-apps/add-application-portal-assign-users.md) to the application.
+ 1. Test provisioning a few users [on demand](../app-provisioning/provision-on-demand.md).
+ 1. Add more users into scope by assigning them to your application.
+ 1. Go to the **Provisioning** pane, and select **Start provisioning**.
+ 1. Monitor using the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md).
+
+The following video provides an overview of on-premises provisioning.
+> [!VIDEO https://www.youtube.com/embed/QdfdpaFolys]
+
+## More requirements
+* Ensure your [SCIM](https://techcommunity.microsoft.com/t5/identity-standards-blog/provisioning-with-scim-getting-started/ba-p/880010) implementation meets the [Azure AD SCIM requirements](../app-provisioning/use-scim-to-provision-users-and-groups.md).
+ Azure AD offers open-source [reference code](https://github.com/AzureAD/SCIMReferenceCode/wiki) that developers can use to bootstrap their SCIM implementation.
+* Support the /schemas endpoint to reduce configuration required in the Azure portal.
+
+## Next steps
+
+* [App provisioning](../app-provisioning/user-provisioning.md)
+* [Generic SQL connector](../app-provisioning/on-premises-sql-connector-configure.md)
+* [Tutorial: ECMA Connector Host generic SQL connector](../app-provisioning/tutorial-ecma-sql-connector.md)
+* [Known issues](../app-provisioning/known-issues.md)
active-directory Canva Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/canva-provisioning-tutorial.md
+
+ Title: 'Tutorial: Configure Canva for automatic user provisioning with Azure Active Directory'
+description: Learn how to automatically provision and de-provision user accounts from Azure AD to Canva.
++
+writer: twimmers
+
+ms.assetid: 9bf62920-d9e0-4ed4-a4f6-860cb9563b00
++++ Last updated : 08/16/2023+++
+# Tutorial: Configure Canva for automatic user provisioning
+
+This tutorial describes the steps you need to perform in both Canva and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [Canva](https://www.canva.com/) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
++
+## Supported capabilities
+> [!div class="checklist"]
+> * Create users in Canva.
+> * Remove users in Canva when they do not require access anymore.
+> * Keep user attributes synchronized between Azure AD and Canva.
+> * Provision groups and group memberships in Canva.
+> * [Single sign-on](canva-tutorial.md) to Canva (recommended).
+
+## Prerequisites
+
+The scenario outlined in this tutorial assumes that you already have the following prerequisites:
+
+* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md).
+* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator).
+* A Canva tenant.
+* A user account in Canva with Admin permissions.
+
+## Step 1. Plan your provisioning deployment
+1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md).
+1. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+1. Determine what data to [map between Azure AD and Canva](../app-provisioning/customize-application-attributes.md).
+
+## Step 2. Configure Canva to support provisioning with Azure AD
+Contact Canva support to configure Canva to support provisioning with Azure AD.
+
+## Step 3. Add Canva from the Azure AD application gallery
+
+Add Canva from the Azure AD application gallery to start managing provisioning to Canva. If you have previously set up Canva for SSO, you can use the same application. However, it's recommended that you create a separate app when initially testing the integration. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md).
+
+## Step 4. Define who will be in scope for provisioning
+
+The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application; a scripted assignment sketch follows the tips below. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+* If you need more roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
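+
+If you prefer to script the assignment step, here's a minimal Microsoft Graph PowerShell sketch. The object IDs are placeholders you must look up in your own tenant; an `AppRoleId` of all zeros assigns the default role.
+
+```powershell
+# Placeholders: look up the Canva service principal and user object IDs in your tenant first.
+Connect-MgGraph -Scopes "AppRoleAssignment.ReadWrite.All"
+
+New-MgServicePrincipalAppRoleAssignedTo -ServicePrincipalId "<canva-sp-object-id>" `
+    -PrincipalId "<user-object-id>" `
+    -ResourceId "<canva-sp-object-id>" `
+    -AppRoleId ([Guid]::Empty)
+```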
++
+## Step 5. Configure automatic user provisioning to Canva
+
+This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users and/or groups in Canva based on user and/or group assignments in Azure AD.
+
+### To configure automatic user provisioning for Canva in Azure AD:
+
+1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**.
+
+ ![Screenshot of Enterprise applications blade.](common/enterprise-applications.png)
+
+1. In the applications list, select **Canva**.
+
+ ![Screenshot of the Canva link in the Applications list.](common/all-applications.png)
+
+1. Select the **Provisioning** tab.
+
+ ![Screenshot of Provisioning tab.](common/provisioning.png)
+
+1. Set the **Provisioning Mode** to **Automatic**.
+
+ ![Screenshot of Provisioning tab automatic.](common/provisioning-automatic.png)
+
+1. Under the **Admin Credentials** section, input your Canva Tenant URL and Secret Token. Click **Test Connection** to ensure Azure AD can connect to Canva. If the connection fails, ensure your Canva account has Admin permissions and try again.
+
+ ![Screenshot of Token.](common/provisioning-testconnection-tenanturltoken.png)
+
+1. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box.
+
+ ![Screenshot of Notification Email.](common/provisioning-notification-email.png)
+
+1. Select **Save**.
+
+1. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to Canva**.
+
+1. Review the user attributes that are synchronized from Azure AD to Canva in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Canva for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you'll need to ensure that the Canva API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
+
+ |Attribute|Type|Supported for filtering|Required by Canva|
+ |||||
+ |userName|String|&check;|&check;
+ |active|Boolean||
+ |externalId|String||
+ |emails[type eq "work"].value|String||&check;
+ |name.givenName|String||
+ |name.familyName|String||
+ |displayName|String||
+
+1. Under the **Mappings** section, select **Synchronize Azure Active Directory Groups to Canva**.
+
+1. Review the group attributes that are synchronized from Azure AD to Canva in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the groups in Canva for update operations. Select the **Save** button to commit any changes.
+
+ |Attribute|Type|Supported for filtering|Required by Canva|
+ |||||
+ |displayName|String|&check;|&check;
+ |members|Reference||
+
+1. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+1. To enable the Azure AD provisioning service for Canva, change the **Provisioning Status** to **On** in the **Settings** section.
+
+ ![Screenshot of Provisioning Status Toggled On.](common/provisioning-toggle-on.png)
+
+1. Define the users and/or groups that you would like to provision to Canva by choosing the desired values in **Scope** in the **Settings** section.
+
+ ![Screenshot of Provisioning Scope.](common/provisioning-scope.png)
+
+1. When you're ready to provision, click **Save**.
+
+ ![Screenshot of Saving Provisioning Configuration.](common/provisioning-configuration-save.png)
+
+This operation starts the initial synchronization cycle of all users and groups defined in **Scope** in the **Settings** section. The initial cycle takes longer to perform than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running.
+
+## Step 6. Monitor your deployment
+Once you've configured provisioning, use the following resources to monitor your deployment:
+
+* Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully
+* Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion
+* If the provisioning configuration seems to be in an unhealthy state, the application goes into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
+
+## More resources
+
+* [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md)
+* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+
+## Next steps
+
+* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
active-directory Cisco Webex Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/cisco-webex-provisioning-tutorial.md
The objective of this tutorial is to demonstrate the steps to be performed in Ci
> [!NOTE]
> This tutorial describes a connector built on top of the Azure AD User Provisioning Service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
>
-> This connector is currently in Preview. For more information on the general Microsoft Azure terms of use for Preview features, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+> This connector is currently in Preview. For more information about previews, see [Universal License Terms For Online Services](https://www.microsoft.com/licensing/terms/product/ForOnlineServices/all).
## Prerequisites
active-directory Citi Program Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/citi-program-tutorial.md
Previously updated : 07/31/2023 Last updated : 08/23/2023
In this article, you learn how to integrate CITI Program with Azure Active Direc
* Enable your users to be automatically signed-in to CITI Program with their Azure AD accounts. * Manage your accounts in one central location - the Azure portal.
-You configure and test Azure AD single sign-on for CITI Program in a test environment. CITI Program supports **SP** initiated single sign-on and **Just In Time** user provisioning.
+You configure and test Azure AD single sign-on for CITI Program in a test environment. CITI Program supports **SP-initiated** single sign-on and **Just-In-Time** user provisioning.
> [!NOTE]
> Identifier of this application is a fixed string value, so only one instance can be configured in one tenant.
Complete the following steps to enable Azure AD single sign-on in the Azure port
1. On the **Basic SAML Configuration** section, perform the following steps:
- a. In the **Identifier** textbox, type the URL:
+ a. In the **Identifier (Entity ID)** textbox, type the URL:
`https://www.citiprogram.org/shibboleth`
- b. In the **Reply URL** textbox, type the URL:
+ b. In the **Reply URL (Assertion Consumer Service URL)** textbox, type the URL:
`https://www.citiprogram.org/Shibboleth.sso/SAML2/POST` c. In the **Sign on URL** textbox, type a URL using the following pattern:
Complete the following steps to enable Azure AD single sign-on in the Azure port
> [!NOTE]
> This value is not real. Update this value with the actual Sign on URL. Contact [CITI Program support team](mailto:shibboleth@citiprogram.org) to get the value. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
-1. CITI Program application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
+1. The CITI Program application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
![Screenshot shows the image of attributes configuration.](common/default-attributes.png "Default Attributes")
Complete the following steps to enable Azure AD single sign-on in the Azure port
| urn:oid:2.16.840.1.113730.3.1.241 | user.displayname |
| urn:oid:2.16.840.1.113730.3.1.3 | user.employeeid |
| urn:oid:1.3.6.1.4.1.22704.1.1.1.8 | [other user attribute] |
+ > [!NOTE]
+ > The listed source attributes are general recommendations, not strict requirements. For example, if user.mail is unique and scoped, it can also be passed as urn:oid:1.3.6.1.4.1.5923.1.1.1.6.
-1. On the **Set-up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer.
+1. On the **Set-up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find the **App Federation Metadata Url** and copy it, or the **Federation Metadata XML** and select **Download** to download the certificate.
![Screenshot shows the Certificate download link.](common/metadataxml.png "Certificate")
Complete the following steps to enable Azure AD single sign-on in the Azure port
## Configure CITI Program SSO
-To configure single sign-on on **CITI Program** side, you need to send the downloaded **Federation Metadata XML** and appropriate copied URLs from Azure portal to [CITI Program support team](mailto:shibboleth@citiprogram.org). This is required to have the SAML SSO connection set properly on both sides.
+To configure single sign-on on **CITI Program** side, you need to send the copied **App Federation Metadata Url** or the downloaded **Federation Metadata XML** to [CITI Program Support](mailto:shibboleth@citiprogram.org). This is required to have the SAML SSO connection set properly on both sides. Also, provide any additional scopes or domains for your integration.
## Test SSO
active-directory Cloudbees Ci Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/cloudbees-ci-tutorial.md
Complete the following steps to enable Azure AD single sign-on in the Azure port
| `https://cjoc.<CustomerDomain>/securityRealm/finishLogin` |
| `https://<Environment>.<CustomerDomain>/securityRealm/finishLogin` |
-1. Perform the following step, if you wish to configure the application in **SP** initiated mode:
-
- In the **Sign on URL** textbox, type the URL using one of the following patterns:
+ c. In the **Sign on URL** textbox, type the URL using one of the following patterns:
| **Sign on URL** |
||
active-directory Datadog Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/datadog-provisioning-tutorial.md
+
+ Title: 'Tutorial: Configure Datadog for automatic user provisioning with Azure Active Directory'
+description: Learn how to automatically provision and de-provision user accounts from Azure AD to Datadog.
++
+writer: twimmers
+
+ms.assetid: 1f57f4bf-90f9-4a89-90fe-987fd5686d7b
++++ Last updated : 08/30/2023+++
+# Tutorial: Configure Datadog for automatic user provisioning
+
+This tutorial describes the steps you need to perform in both Datadog and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users to [Datadog](https://www.datadog.com/) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
++
+## Supported capabilities
+> [!div class="checklist"]
+> * Create users in Datadog.
+> * Remove users in Datadog when they do not require access anymore.
+> * Keep user attributes synchronized between Azure AD and Datadog.
+> * [Single sign-on](datadog-tutorial.md) to Datadog (recommended).
+
+## Prerequisites
+
+The scenario outlined in this tutorial assumes that you already have the following prerequisites:
+
+* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md)
+* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator).
+* A user account in Datadog with Admin permissions.
+
+## Step 1. Plan your provisioning deployment
+* Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md).
+* Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* Determine what data to [map between Azure AD and Datadog](../app-provisioning/customize-application-attributes.md).
+
+## Step 2. Configure Datadog to support provisioning with Azure AD
+Contact Datadog support to configure Datadog to support provisioning with Azure AD.
+
+## Step 3. Add Datadog from the Azure AD application gallery
+
+Add Datadog from the Azure AD application gallery to start managing provisioning to Datadog. If you have previously set up Datadog for SSO, you can use the same application. However, it's recommended that you create a separate app when initially testing the integration. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md).
+
+## Step 4. Define who will be in scope for provisioning
+
+The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users to the application. If you choose to scope who will be provisioned based solely on attributes of the user, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+* Start small. Test with a small set of users before rolling out to everyone. When scope for provisioning is set to assigned users, you can control this by assigning one or two users to the app. When scope is set to all users, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+* If you need more roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
++
+## Step 5. Configure automatic user provisioning to Datadog
+
+This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users in Datadog based on user assignments in Azure AD.
+
+### To configure automatic user provisioning for Datadog in Azure AD:
+
+1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**.
+
+ ![Screenshot of Enterprise applications blade.](common/enterprise-applications.png)
+
+1. In the applications list, select **Datadog**.
+
+ ![Screenshot of the Datadog link in the Applications list.](common/all-applications.png)
+
+1. Select the **Provisioning** tab.
+
+ ![Screenshot of Provisioning tab.](common/provisioning.png)
+
+1. Set the **Provisioning Mode** to **Automatic**.
+
+ ![Screenshot of Provisioning tab automatic.](common/provisioning-automatic.png)
+
+1. Under the **Admin Credentials** section, input your Datadog Tenant URL and Secret Token. Click **Test Connection** to ensure Azure AD can connect to Datadog. If the connection fails, ensure your Datadog account has Admin permissions and try again.
+
+ ![Screenshot of Token.](common/provisioning-testconnection-tenanturltoken.png)
+
+1. In the **Notification Email** field, enter the email address of a person who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box.
+
+ ![Screenshot of Notification Email.](common/provisioning-notification-email.png)
+
+1. Select **Save**.
+
+1. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to Datadog**.
+
+1. Review the user attributes that are synchronized from Azure AD to Datadog in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Datadog for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you'll need to ensure that the Datadog API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
+
+ |Attribute|Type|Supported for filtering|Required by Datadog|
+ |||||
+ |userName|String|&check;|&check;
+ |active|Boolean||
+ |title|string||
+ |emails[type eq "work"].value|String||
+ |name.formatted|String||
+
+1. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+1. To enable the Azure AD provisioning service for Datadog, change the **Provisioning Status** to **On** in the **Settings** section.
+
+ ![Screenshot of Provisioning Status Toggled On.](common/provisioning-toggle-on.png)
+
+1. Define the users that you would like to provision to Datadog by choosing the desired values in **Scope** in the **Settings** section.
+
+ ![Screenshot of Provisioning Scope.](common/provisioning-scope.png)
+
+1. When you're ready to provision, click **Save**.
+
+ ![Screenshot of Saving Provisioning Configuration.](common/provisioning-configuration-save.png)
+
+This operation starts the initial synchronization cycle of all users defined in **Scope** in the **Settings** section. The initial cycle takes longer to perform than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running.
+
+## Step 6. Monitor your deployment
+Once you've configured provisioning, use the following resources to monitor your deployment:
+
+* Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully
+* Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it's to completion
+* If the provisioning configuration seems to be in an unhealthy state, the application goes into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
+
+## More resources
+
+* [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md)
+* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+
+## Next steps
+
+* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
active-directory Dialpad Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/dialpad-provisioning-tutorial.md
The objective of this tutorial is to demonstrate the steps to be performed in Di
> [!NOTE] > This tutorial describes a connector built on top of the Azure AD User Provisioning Service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
-> This connector is currently in Preview. For more information on the general Microsoft Azure terms of use for Preview features, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+> This connector is currently in Preview. For more information about previews, see [Universal License Terms For Online Services](https://www.microsoft.com/licensing/terms/product/ForOnlineServices/all).
## Prerequisites
active-directory Document360 Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/document360-tutorial.md
Title: Azure Active Directory SSO integration with Document360
-description: Learn how to configure single sign-on between Azure Active Directory and Document360.
+description: Learn how to configure single sign-on (SSO) between Azure Active Directory (AD) and Document360.
Previously updated : 04/27/2023 Last updated : 08/21/2023 - # Azure Active Directory SSO integration with Document360
-In this article, you learn how to integrate Document360 with Azure Active Directory (Azure AD). Document360 is an online self-service knowledge base software. When you integrate Document360 with Azure AD, you can:
+This article teaches you how to integrate Document360 with Azure AD. Document360 is an online self-service knowledge base software. When you integrate Document360 with Azure AD, you can:
* Control in Azure AD who has access to Document360.
-* Enable your users to be automatically signed-in to Document360 with their Azure AD accounts.
+* Enable your users to be automatically signed in to Document360 with their Azure AD accounts.
* Manage your accounts in one central location - the Azure portal.
-You configure and test Azure AD single sign-on for Document360 in a test environment. Document360 supports **SP** and **IDP** initiated single sign-on.
+You configure and test Azure AD single sign-on for Document360 in a test environment. Document360 supports **Service Provider (SP)** and **Identity Provider (IdP)** initiated SSO.
> [!NOTE]
-> Identifier of this application is a fixed string value so only one instance can be configured in one tenant.
+> Identifier of this application is a fixed string value, so only one instance can be configured in one tenant.
## Prerequisites
-To integrate Azure Active Directory with Document360, you need:
+To integrate Azure AD with Document360, you need the following:
* An Azure AD user account. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
* One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal.
-* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
-* Document360 single sign-on (SSO) enabled subscription.
+* An Azure AD subscription. If you don't have a subscription, you can [get a free account](https://azure.microsoft.com/free/).
+* A Document360 subscription with SSO enabled. If you don't have a subscription, you can [sign up for a new account](https://document360.com/signup/).
## Add application and assign a test user
-Before you begin the process of configuring single sign-on, you need to add the Document360 application from the Azure AD gallery. You need a test user account to assign to the application and test the single sign-on configuration.
+Before configuring SSO, add the Document360 application from the Azure AD gallery. You need a test user account to assign to the application and test the SSO configuration.
### Add Document360 from the Azure AD gallery
-Add Document360 from the Azure AD application gallery to configure single sign-on with Document360. For more information on how to add application from the gallery, see the [Quickstart: Add application from the gallery](../manage-apps/add-application-portal.md).
+Add Document360 from the Azure AD application gallery to configure SSO with Document360. For more information on adding an application from the gallery, see the [Quickstart: Add application from the gallery](../manage-apps/add-application-portal.md).
### Create and assign Azure AD test user

Follow the guidelines in the [create and assign a user account](../manage-apps/add-application-portal-assign-users.md) article to create a test user account in the Azure portal called B.Simon.
-Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, and assign roles. The wizard also provides a link to the single sign-on configuration pane in the Azure portal. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides).
+Alternatively, you can use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, and assign roles. The wizard also provides a link to the single sign-on configuration pane in the Azure portal. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides).
## Configure Azure AD SSO

Complete the following steps to enable Azure AD single sign-on in the Azure portal.

1. In the Azure portal, on the **Document360** application integration page, find the **Manage** section and select **single sign-on**.
-1. On the **Select a single sign-on method** page, select **SAML**.
-1. On the **Set up single sign-on with SAML** page, select the pencil icon for **Basic SAML Configuration** to edit the settings.
+2. On the **Select a single sign-on method** page, select **SAML**.
+3. On the **Set up single sign-on with SAML** page, select the pencil icon for **Basic SAML Configuration** to edit the settings.
![Screenshot shows how to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
-1. On the **Basic SAML Configuration** section, perform the following steps:
+4. On the **Basic SAML Configuration** section, perform the following steps. Choose the Identifier, Reply URL, and Sign on URL that correspond to your data center region.
- a. In the **Identifier** textbox, type one of the following URLs:
+ a. In the **Identifier** textbox, type or paste one of the following URLs:
| **Identifier** |
|--|
| `https://identity.document360.io/saml` |
+ | **(or)** |
| `https://identity.us.document360.io/saml` |
- b. In the **Reply URL** textbox, type a URL using one of the following patterns:
+ b. In the **Reply URL** textbox, type or paste a URL using one of the following patterns:
| **Reply URL** |
|-|
| `https://identity.document360.io/signin-saml-<ID>` |
+ | **(or)** |
| `https://identity.us.document360.io/signin-saml-<ID>` |
-1. If you wish to configure the application in **SP** initiated mode, then perform the following step:
+5. If you wish to configure the application in **SP** initiated mode, then perform the following step:
- In the **Sign on URL** textbox, type one of the following URLs:
+ In the **Sign on URL** textbox, type or paste one of the following URLs:
| **Sign on URL** |
|--|
| `https://identity.document360.io` |
+ | **(or)** |
| `https://identity.us.document360.io` |

> [!NOTE]
- > The Reply URL is not real. Update this value with the actual Reply URL. Contact [Document360 Client support team](mailto:support@document360.com) to get the value. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+ > The Reply URL is not real. Update this value with the actual Reply URL. You can also refer to the patterns shown in the Azure portal's **Basic SAML Configuration** section.
-1. On the **Set-up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Raw)** and select **Download** to download the certificate and save it on your computer.
+6. On the **Set-up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer.
- ![Screenshot shows the Certificate download link.](common/certificateraw.png "Certificate")
+ ![Screenshot shows the Certificate download link.](common/certificatebase64.png "Certificate")
-1. On the **Set up Document360** section, copy the appropriate URL(s) based on your requirement.
+7. On the **Set up Document360** section, copy the appropriate URL(s) based on your requirement.
![Screenshot shows to copy configuration appropriate URL.](common/copy-configuration-urls.png "Metadata")

## Configure Document360 SSO
-To configure single sign-on on **Document360** side, you need to send the downloaded **Certificate (Raw)** and appropriate copied URLs from Azure portal to [Document360 support team](mailto:support@document360.com). They set this setting to have the SAML SSO connection set properly on both sides.
+1. In a different web browser window, log in to your Document360 portal as an administrator.
+
+1. To configure SSO on the **Document360** portal, you need to navigate to **Settings** → **Users & Security** → **SAML/OpenID** → **SAML** and perform the following steps:
+
+ [![Screenshot shows the Document360 configuration.](./media/document360-tutorial/configuration.png "Document360")](./media/document360-tutorial/configuration.png#lightbox)
+
+1. Click on the Edit icon in **SAML basic configuration** on the Document360 portal side and paste the values from the Azure AD portal according to the following field associations.
+
+ | Document360 portal fields | Azure AD portal values |
+ | | |
+ | Email domains | Domains of the email addresses of your users in Azure AD |
+ | Sign On URL | Login URL |
+ | Entity ID | Azure AD identifier |
+ | Sign Out URL | Logout URL |
+ | SAML certificate | Download Certificate (Base64) from Azure AD side and upload in Document360 |
+
+1. Click on the **Save** button when you're done with the values.
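+
+   Optionally, before uploading the certificate in the step above, you can sanity-check its validity window locally. The following is a minimal sketch, assuming Python with the `cryptography` package installed and a hypothetical downloaded filename `document360.cer`; it isn't part of the official procedure, just a quick local check.
+
+   ```python
+   from cryptography import x509
+
+   # Load the Base64 (PEM) certificate downloaded from the Azure portal.
+   # "document360.cer" is a hypothetical filename for the downloaded file.
+   with open("document360.cer", "rb") as f:
+       cert = x509.load_pem_x509_certificate(f.read())
+
+   # Print the subject and validity window so you can confirm the
+   # certificate is current before uploading it to Document360.
+   print("Subject:", cert.subject.rfc4514_string())
+   print("Valid from:", cert.not_valid_before)
+   print("Valid until:", cert.not_valid_after)
+   ```
+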
+ ### Create Document360 test user
-In this section, you create a user called Britta Simon at Document360. Work with [Document360 support team](mailto:support@document360.com) to add the users in the Document360 platform. Users must be created and activated before you use single sign-on.
+1. In a different web browser window, log in to your Document360 portal as an administrator.
+
+1. From the Document360 portal, go to **Settings → Users & Security → Team accounts & groups → Team account**. Click the **New team account** button, type in the required details, specify the roles, and follow the remaining steps to add a user to Document360.
+
+ [![Screenshot shows the Document360 test user.](./media/document360-tutorial/add-user.png "Document360")](./media/document360-tutorial/add-user.png#lightbox)
## Test SSO
-In this section, you test your Azure AD single sign-on configuration with following options.
+In this section, you test your Azure AD single sign-on configuration with the following options.
#### SP initiated:
-* Click on **Test this application** in Azure portal. This will redirect to Document360 Sign-on URL where you can initiate the login flow.
+* Click on **Test this application** in Azure portal. This will redirect to the Document360 Sign-on URL, where you can initiate the login flow.
-* Go to Document360 Sign-on URL directly and initiate the login flow from there.
+* Go to Document360 Sign-on URL directly and initiate the login flow.
#### IDP initiated:
-* Click on **Test this application** in Azure portal and you should be automatically signed in to the Document360 for which you set up the SSO.
+* Click on **Test this application** in the Azure portal, and you should be automatically signed in to the Document360 for which you set up the SSO.
+
+You can also use Microsoft My Apps to test the application in any mode. When you click the Document360 tile in My Apps, if configured in SP mode, you're redirected to the application sign-on page to initiate the login flow. If configured in IDP mode, you should be automatically signed in to the Document360 for which you set up the SSO.
-You can also use Microsoft My Apps to test the application in any mode. When you click the Document360 tile in the My Apps, if configured in SP mode you would be redirected to the application sign-on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the Document360 for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
## Additional resources
You can also use Microsoft My Apps to test the application in any mode. When you
## Next steps
-Once you configure Document360 you can enforce session control, which protects exfiltration and infiltration of your organizationΓÇÖs sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
+Once you configure Document360, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Elium Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/elium-provisioning-tutorial.md
This tutorial shows how to configure Elium and Azure Active Directory (Azure AD)
> [!NOTE] > This tutorial describes a connector that's built on top of the Azure AD User Provisioning service. For important details about what this service does and how it works, and for frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md). >
-> This connector is currently in preview. For the general terms of use for Azure features in preview, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+> This connector is currently in preview. For more information about previews, see [Universal License Terms For Online Services](https://www.microsoft.com/licensing/terms/product/ForOnlineServices/all).
## Prerequisites
active-directory Foodee Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/foodee-provisioning-tutorial.md
This article shows you how to configure Azure Active Directory (Azure AD) in Foodee to automatically provision and deprovision user accounts.
> [!NOTE] > The article describes a connector that's built on top of the Azure AD User Provisioning service. To learn what this service does and how it works, and to get answers to frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md). >
-> This connector is currently in preview. For more information about the Azure terms-of-use feature for preview features, go to [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+> This connector is currently in preview. For more information about previews, see [Universal License Terms For Online Services](https://www.microsoft.com/licensing/terms/product/ForOnlineServices/all).
## Prerequisites
active-directory Forcepoint Cloud Security Gateway Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/forcepoint-cloud-security-gateway-provisioning-tutorial.md
+
+ Title: 'Tutorial: Configure Forcepoint Cloud Security Gateway - User Authentication for automatic user provisioning with Azure Active Directory'
+description: Learn how to automatically provision and de-provision user accounts from Azure AD to Forcepoint Cloud Security Gateway - User Authentication.
++
+writer: twimmers
+
+ms.assetid: 415b2ba3-a9a5-439a-963a-7c2c0254ced1
+Last updated : 08/16/2023
+# Tutorial: Configure Forcepoint Cloud Security Gateway - User Authentication for automatic user provisioning
+
+This tutorial describes the steps you need to perform in both Forcepoint Cloud Security Gateway - User Authentication and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [Forcepoint Cloud Security Gateway - User Authentication](https://admin.forcepoint.net) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
++
+## Supported capabilities
+> [!div class="checklist"]
+> * Create users in Forcepoint Cloud Security Gateway - User Authentication.
+> * Remove users in Forcepoint Cloud Security Gateway - User Authentication when they do not require access anymore.
+> * Keep user attributes synchronized between Azure AD and Forcepoint Cloud Security Gateway - User Authentication.
+> * Provision groups and group memberships in Forcepoint Cloud Security Gateway - User Authentication.
+> * [Single sign-on](forcepoint-cloud-security-gateway-tutorial.md) to Forcepoint Cloud Security Gateway - User Authentication (recommended).
+
+## Prerequisites
+
+The scenario outlined in this tutorial assumes that you already have the following prerequisites:
+
+* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md).
+* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator).
+* A Forcepoint Cloud Security Gateway - User Authentication tenant.
+* A user account in Forcepoint Cloud Security Gateway - User Authentication with Admin permissions.
+
+## Step 1. Plan your provisioning deployment
+1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md).
+1. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+1. Determine what data to [map between Azure AD and Forcepoint Cloud Security Gateway - User Authentication](../app-provisioning/customize-application-attributes.md).
+
+## Step 2. Configure Forcepoint Cloud Security Gateway - User Authentication to support provisioning with Azure AD
+Contact Forcepoint Cloud Security Gateway - User Authentication support to configure Forcepoint Cloud Security Gateway - User Authentication to support provisioning with Azure AD.
+
+## Step 3. Add Forcepoint Cloud Security Gateway - User Authentication from the Azure AD application gallery
+
+Add Forcepoint Cloud Security Gateway - User Authentication from the Azure AD application gallery to start managing provisioning to Forcepoint Cloud Security Gateway - User Authentication. If you have previously set up Forcepoint Cloud Security Gateway - User Authentication for SSO, you can use the same application. However, it's recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md).
+
+## Step 4. Define who will be in scope for provisioning
+
+The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+* If you need more roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
++
+## Step 5. Configure automatic user provisioning to Forcepoint Cloud Security Gateway - User Authentication
+
+This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users and/or groups in Forcepoint Cloud Security Gateway - User Authentication based on user and/or group assignments in Azure AD.
+
+### To configure automatic user provisioning for Forcepoint Cloud Security Gateway - User Authentication in Azure AD:
+
+1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**.
+
+ ![Screenshot of Enterprise applications blade.](common/enterprise-applications.png)
+
+1. In the applications list, select **Forcepoint Cloud Security Gateway - User Authentication**.
+
+ ![Screenshot of the Forcepoint Cloud Security Gateway - User Authentication link in the Applications list.](common/all-applications.png)
+
+1. Select the **Provisioning** tab.
+
+ ![Screenshot of Provisioning tab.](common/provisioning.png)
+
+1. Set the **Provisioning Mode** to **Automatic**.
+
+ ![Screenshot of Provisioning tab automatic.](common/provisioning-automatic.png)
+
+1. Under the **Admin Credentials** section, input your Forcepoint Cloud Security Gateway - User Authentication Tenant URL and Secret Token. Click **Test Connection** to ensure Azure AD can connect to Forcepoint Cloud Security Gateway - User Authentication. If the connection fails, ensure your Forcepoint Cloud Security Gateway - User Authentication account has Admin permissions and try again.
+
+ ![Screenshot of Token.](common/provisioning-testconnection-tenanturltoken.png)
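+
+   As a hedged illustration of what **Test Connection** does, the following Python sketch probes a SCIM endpoint with a bearer token. The tenant URL and token are placeholders, and the `/ServiceProviderConfig` probe is an assumption based on the SCIM 2.0 standard, not documented Forcepoint behavior.
+
+   ```python
+   import urllib.request
+
+   # Placeholder values - substitute the Tenant URL and Secret Token
+   # you entered under Admin Credentials.
+   TENANT_URL = "https://example.forcepoint.net/scim/v2"  # hypothetical
+   SECRET_TOKEN = "<your-secret-token>"
+
+   # SCIM 2.0 services expose /ServiceProviderConfig; an HTTP 200 here
+   # suggests the URL and token are accepted, much like Test Connection.
+   req = urllib.request.Request(
+       f"{TENANT_URL}/ServiceProviderConfig",
+       headers={
+           "Authorization": f"Bearer {SECRET_TOKEN}",
+           "Accept": "application/scim+json",
+       },
+   )
+   with urllib.request.urlopen(req) as resp:
+       print("Status:", resp.status)
+   ```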
+
+1. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box.
+
+ ![Screenshot of Notification Email.](common/provisioning-notification-email.png)
+
+1. Select **Save**.
+
+1. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to Forcepoint Cloud Security Gateway - User Authentication**.
+
+1. Review the user attributes that are synchronized from Azure AD to Forcepoint Cloud Security Gateway - User Authentication in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Forcepoint Cloud Security Gateway - User Authentication for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you'll need to ensure that the Forcepoint Cloud Security Gateway - User Authentication API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
+
+ |Attribute|Type|Supported for filtering|Required by Forcepoint Cloud Security Gateway - User Authentication|
+ |||||
+ |userName|String|&check;|&check;
+ |externalId|String||&check;
+ |displayName|String||&check;
+ |urn:ietf:params:scim:schemas:extension:forcepoint:2.0:User:ntlmId|String||
+
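+   To make the mapping concrete, here's a sketch of the kind of SCIM user payload the provisioning service would send for the attributes above. All values are invented examples; only the schema URNs and attribute names come from the mapping table.
+
+   ```python
+   # Example SCIM user mirroring the Forcepoint attribute mapping above.
+   scim_user = {
+       "schemas": [
+           "urn:ietf:params:scim:schemas:core:2.0:User",
+           "urn:ietf:params:scim:schemas:extension:forcepoint:2.0:User",
+       ],
+       "userName": "b.simon@contoso.com",  # matching attribute
+       "externalId": "b.simon",
+       "displayName": "B. Simon",
+       "urn:ietf:params:scim:schemas:extension:forcepoint:2.0:User": {
+           "ntlmId": "CONTOSO\\b.simon"  # hypothetical NTLM identifier
+       },
+   }
+   ```
+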
+1. Under the **Mappings** section, select **Synchronize Azure Active Directory Groups to Forcepoint Cloud Security Gateway - User Authentication**.
+
+1. Review the group attributes that are synchronized from Azure AD to Forcepoint Cloud Security Gateway - User Authentication in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the groups in Forcepoint Cloud Security Gateway - User Authentication for update operations. Select the **Save** button to commit any changes.
+
+ |Attribute|Type|Supported for filtering|Required by Forcepoint Cloud Security Gateway - User Authentication|
+ |||||
+ |displayName|String|&check;|&check;
+ |externalId|String||
+ |members|Reference||
+
+
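+   And a matching sketch of a SCIM group payload for the group mapping above, again with invented values:
+
+   ```python
+   # Example SCIM group mirroring the group attribute mapping above.
+   scim_group = {
+       "schemas": ["urn:ietf:params:scim:schemas:core:2.0:Group"],
+       "displayName": "CSG Users",  # matching attribute
+       "externalId": "csg-users",
+       "members": [
+           # "value" holds the SCIM id of an already-provisioned user.
+           {"value": "2819c223-7f76-453a-919d-413861904646"}
+       ],
+   }
+   ```
+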
+1. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+1. To enable the Azure AD provisioning service for Forcepoint Cloud Security Gateway - User Authentication, change the **Provisioning Status** to **On** in the **Settings** section.
+
+ ![Screenshot of Provisioning Status Toggled On.](common/provisioning-toggle-on.png)
+
+1. Define the users and/or groups that you would like to provision to Forcepoint Cloud Security Gateway - User Authentication by choosing the desired values in **Scope** in the **Settings** section.
+
+ ![Screenshot of Provisioning Scope.](common/provisioning-scope.png)
+
+1. When you're ready to provision, click **Save**.
+
+ ![Screenshot of Saving Provisioning Configuration.](common/provisioning-configuration-save.png)
+
+This operation starts the initial synchronization cycle of all users and groups defined in **Scope** in the **Settings** section. The initial cycle takes longer to perform than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running.
+
+## Step 6. Monitor your deployment
+Once you've configured provisioning, use the following resources to monitor your deployment:
+
+* Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully.
+* Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion.
+* If the provisioning configuration seems to be in an unhealthy state, the application goes into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
+
+## More resources
+
+* [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md)
+* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+
+## Next steps
+
+* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
active-directory G Suite Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/g-suite-provisioning-tutorial.md
This section guides you through the steps to configure the Azure AD provisioning
### To configure automatic user provisioning for G Suite in Azure AD:
-1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**. Users will need to log in to `portal.azure.com` and won't be able to use `aad.portal.azure.com`.
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Browse to **Azure Active Directory** > **Enterprise Applications** > **All applications**.
![Enterprise applications blade](./media/g-suite-provisioning-tutorial/enterprise-applications.png)

![All applications blade](./media/g-suite-provisioning-tutorial/all-applications.png)
-2. In the applications list, select **G Suite**.
+1. In the applications list, select **G Suite**.
![The G Suite link in the Applications list](common/all-applications.png)
-3. Select the **Provisioning** tab. Click on **Get started**.
+1. Select the **Provisioning** tab. Click on **Get started**.
![Screenshot of the Manage options with the Provisioning option called out.](common/provisioning.png)

![Get started blade](./media/g-suite-provisioning-tutorial/get-started.png)
-4. Set the **Provisioning Mode** to **Automatic**.
+1. Set the **Provisioning Mode** to **Automatic**.
![Screenshot of the Provisioning Mode dropdown list with the Automatic option called out.](common/provisioning-automatic.png)
-5. Under the **Admin Credentials** section, click on **Authorize**. You'll be redirected to a Google authorization dialog box in a new browser window.
+1. Under the **Admin Credentials** section, click on **Authorize**. You'll be redirected to a Google authorization dialog box in a new browser window.
![G Suite authorize](./media/g-suite-provisioning-tutorial/authorize-1.png)
-6. Confirm that you want to give Azure AD permissions to make changes to your G Suite tenant. Select **Accept**.
+1. Confirm that you want to give Azure AD permissions to make changes to your G Suite tenant. Select **Accept**.
![G Suite Tenant Auth](./media/g-suite-provisioning-tutorial/gapps-auth.png)
-7. In the Azure portal, click **Test Connection** to ensure Azure AD can connect to G Suite. If the connection fails, ensure your G Suite account has Admin permissions and try again. Then try the **Authorize** step again.
+1. In the Azure portal, click **Test Connection** to ensure Azure AD can connect to G Suite. If the connection fails, ensure your G Suite account has Admin permissions and try again. Then try the **Authorize** step again.
-6. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box.
+1. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box.
![Notification Email](common/provisioning-notification-email.png)
-7. Select **Save**.
+1. Select **Save**.
-8. Under the **Mappings** section, select **Provision Azure Active Directory Users**.
+1. Under the **Mappings** section, select **Provision Azure Active Directory Users**.
-9. Review the user attributes that are synchronized from Azure AD to G Suite in the **Attribute-Mapping** section. Select the **Save** button to commit any changes.
+1. Review the user attributes that are synchronized from Azure AD to G Suite in the **Attribute-Mapping** section. Select the **Save** button to commit any changes.
> [!NOTE] > GSuite Provisioning currently only supports the use of primaryEmail as the matching attribute.
This section guides you through the steps to configure the Azure AD provisioning
|websites.[type eq "work"].value|String|
-10. Under the **Mappings** section, select **Provision Azure Active Directory Groups**.
+1. Under the **Mappings** section, select **Provision Azure Active Directory Groups**.
-11. Review the group attributes that are synchronized from Azure AD to G Suite in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the groups in G Suite for update operations. Select the **Save** button to commit any changes.
+1. Review the group attributes that are synchronized from Azure AD to G Suite in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the groups in G Suite for update operations. Select the **Save** button to commit any changes.
|Attribute|Type|
|||
This section guides you through the steps to configure the Azure AD provisioning
|name|String|
|description|String|
-12. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+1. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-13. To enable the Azure AD provisioning service for G Suite, change the **Provisioning Status** to **On** in the **Settings** section.
+1. To enable the Azure AD provisioning service for G Suite, change the **Provisioning Status** to **On** in the **Settings** section.
![Provisioning Status Toggled On](common/provisioning-toggle-on.png)
-14. Define the users and/or groups that you would like to provision to G Suite by choosing the desired values in **Scope** in the **Settings** section.
+1. Define the users and/or groups that you would like to provision to G Suite by choosing the desired values in **Scope** in the **Settings** section.
![Provisioning Scope](common/provisioning-scope.png)
-15. When you're ready to provision, click **Save**.
+1. When you're ready to provision, click **Save**.
![Saving Provisioning Configuration](common/provisioning-configuration-save.png)
active-directory Gainsight Saml Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/gainsight-saml-tutorial.md
- Title: Azure Active Directory SSO integration with Gainsight SAML
-description: Learn how to configure single sign-on between Azure Active Directory and Gainsight SAML.
-------- Previously updated : 07/14/2023----
-# Azure Active Directory SSO integration with Gainsight SAML
-
-In this article, you'll learn how to integrate Gainsight SAML with Azure Active Directory (Azure AD). Use Azure AD to manage user access and enable single sign-on with Gainsight SAML. Requires an existing Gainsight SAML subscription. When you integrate Gainsight SAML with Azure AD, you can:
-
-* Control in Azure AD who has access to Gainsight SAML.
-* Enable your users to be automatically signed-in to Gainsight SAML with their Azure AD accounts.
-* Manage your accounts in one central location - the Azure portal.
-
-You'll configure and test Azure AD single sign-on for Gainsight SAML in a test environment. Gainsight SAML supports both **SP** and **IDP** initiated single sign-on.
-
-## Prerequisites
-
-To integrate Azure Active Directory with Gainsight SAML, you need:
-
-* An Azure AD user account. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-* One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal.
-* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
-* Gainsight SAML single sign-on (SSO) enabled subscription.
-
-## Add application and assign a test user
-
-Before you begin the process of configuring single sign-on, you need to add the Gainsight SAML application from the Azure AD gallery. You need a test user account to assign to the application and test the single sign-on configuration.
-
-### Add Gainsight SAML from the Azure AD gallery
-
-Add Gainsight SAML from the Azure AD application gallery to configure single sign-on with Gainsight SAML. For more information on how to add application from the gallery, see the [Quickstart: Add application from the gallery](../manage-apps/add-application-portal.md).
-
-### Create and assign Azure AD test user
-
-Follow the guidelines in the [create and assign a user account](../manage-apps/add-application-portal-assign-users.md) article to create a test user account in the Azure portal called B.Simon.
-
-Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, and assign roles. The wizard also provides a link to the single sign-on configuration pane in the Azure portal. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides).
-
-## Configure Azure AD SSO
-
-Complete the following steps to enable Azure AD single sign-on in the Azure portal.
-
-1. In the Azure portal, on the **Gainsight SAML** application integration page, find the **Manage** section and select **single sign-on**.
-1. On the **Select a single sign-on method** page, select **SAML**.
-1. On the **Set up single sign-on with SAML** page, select the pencil icon for **Basic SAML Configuration** to edit the settings.
-
- ![Screenshot shows how to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
-
-1. On the **Basic SAML Configuration** section, perform the following steps:
-
- a. In the **Identifier** textbox, type a value using one of the following patterns:
-
- | **Identifier** |
- |--|
- | `urn:auth0:gainsight:<ID>` |
- | `urn:auth0:gainsight-eu:<ID>` |
-
- b. In the **Reply URL** textbox, type a URL using one of the following patterns:
-
- | **Reply URL** |
- ||
- | `https://secured.gainsightcloud.com/login/callback?connection=<ID>` |
- | `https://secured.eu.gainsightcloud.com/login/callback?connection=<ID>` |
-
-1. Perform the following step, if you wish to configure the application in **SP** initiated mode:
-
- In the **Sign on URL** textbox, type a URL using one of the following patterns:
-
- | **Sign on URL** |
- ||
- | `https://secured.gainsightcloud.com/samlp/<ID>` |
- | `https://secured.eu.gainsightcloud.com/samlp/<ID>` |
-
- > [!NOTE]
- > These values are not real. Update these values with the actual Identifier, Reply URL and Sign on URL. Contact [Gainsight SAML support team](mailto:support@gainsight.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
-
-1. On the **Set-up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer.
-
- ![Screenshot shows the Certificate download link.](common/certificatebase64.png "Certificate")
-
-1. On the **Set up Gainsight SAML** section, copy the appropriate URL(s) based on your requirement.
-
- ![Screenshot shows to copy configuration appropriate URL.](common/copy-configuration-urls.png "Metadata")
-
-## Configure Gainsight SAML SSO
-
-To configure single sign-on on **Gainsight SAML** side, you need to send the downloaded **Certificate (Base64)** and appropriate copied URLs from Azure portal to [Gainsight SAML support team](mailto:support@gainsight.com). They set this setting to have the SAML SSO connection set properly on both sides.
-
-### Create Gainsight SAML test user
-
-In this section, you create a user called Britta Simon at Gainsight SAML SSO. Work with [Gainsight SAML support team](mailto:support@gainsight.com) to add the users in the Gainsight SAML SSO platform. Users must be created and activated before you use single sign-on.
-
-## Test SSO
-
-In this section, you test your Azure AD single sign-on configuration with following options.
-
-#### SP initiated:
-
-* Click on **Test this application** in Azure portal. This will redirect to Gainsight SAML Sign-on URL where you can initiate the login flow.
-
-* Go to Gainsight SAML Sign-on URL directly and initiate the login flow from there.
-
-#### IDP initiated:
-
-* Click on **Test this application** in Azure portal and you should be automatically signed in to the Gainsight SAML for which you set up the SSO.
-
-You can also use Microsoft My Apps to test the application in any mode. When you click the Gainsight SAML tile in the My Apps, if configured in SP mode you would be redirected to the application sign-on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the Gainsight SAML for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
-
-## Additional resources
-
-* [What is single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
-* [Plan a single sign-on deployment](../manage-apps/plan-sso-deployment.md).
-
-## Next steps
-
-Once you configure Gainsight SAML you can enforce session control, which protects exfiltration and infiltration of your organizationΓÇÖs sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Gainsight Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/gainsight-tutorial.md
+
+ Title: Azure Active Directory SSO integration with Gainsight
+description: Learn how to configure single sign-on between Azure Active Directory and Gainsight.
+Last updated : 08/22/2023
+# Azure Active Directory SSO integration with Gainsight
+
+In this article, you'll learn how to integrate Gainsight with Azure Active Directory (Azure AD). Use Azure AD to manage user access and enable single sign-on with Gainsight. This integration requires an existing Gainsight subscription. When you integrate Gainsight with Azure AD, you can:
+
+* Control in Azure AD who has access to Gainsight.
+* Enable your users to be automatically signed-in to Gainsight with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+You'll configure and test Azure AD single sign-on for Gainsight in a test environment. Gainsight supports both **SP** and **IDP** initiated single sign-on.
+
+## Prerequisites
+
+To integrate Azure Active Directory with Gainsight, you need:
+
+* An Azure AD user account. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+* One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal.
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Gainsight single sign-on (SSO) enabled subscription.
+
+## Add application and assign a test user
+
+Before you begin the process of configuring single sign-on, you need to add the Gainsight application from the Azure AD gallery. You need a test user account to assign to the application and test the single sign-on configuration.
+
+### Add Gainsight SAML from the Azure AD gallery
+
+Add Gainsight SAML from the Azure AD application gallery to configure single sign-on with Gainsight. For more information on how to add application from the gallery, see the [Quickstart: Add application from the gallery](../manage-apps/add-application-portal.md).
+
+### Create and assign Azure AD test user
+
+Follow the guidelines in the [create and assign a user account](../manage-apps/add-application-portal-assign-users.md) article to create a test user account in the Azure portal called B.Simon.
+
+Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, and assign roles. The wizard also provides a link to the single sign-on configuration pane in the Azure portal. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides).
+
+## Configure Azure AD SSO
+
+Complete the following steps to enable Azure AD single sign-on in the Azure portal.
+
+1. In the Azure portal, on the **Gainsight** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. Provide a temporary (dummy) URL, such as `https://gainsight.com`, for the **Identifier (Entity ID)** and **Reply URL (Assertion Consumer Service URL)** fields in the **Basic SAML Configuration** section in the Azure portal. You update these with the real values later in this article.
+
+1. On the **Set-up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/certificatebase64.png "Certificate")
+
+1. On the **Set up Gainsight SAML** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Screenshot shows to copy configuration appropriate URL.](common/copy-configuration-urls.png "Metadata")
+
+1. Now, on the Gainsight side, navigate to **User Management**, click the **Authentication** tab, and create a new **SAML** authentication.
+
+## Setup SAML 2.0 Authentication in Gainsight
+
+> [!NOTE]
+> SAML 2.0 Authentication allows the users to login to Gainsight via Azure AD. Once Gainsight is configured to authenticate via SAML 2.0, users who want to access Gainsight will no longer be prompted to enter a username or password. Instead, an exchange between Gainsight and Azure AD occurs that grants Gainsight access to the users.
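+
+To make that exchange concrete, the following sketch shows how an SP-initiated SAML request is typically formed under the HTTP-Redirect binding: the AuthnRequest XML is deflate-compressed, Base64-encoded, and URL-encoded onto the IdP's sign-in URL. This is a generic illustration of the SAML 2.0 protocol, not Gainsight- or Azure-specific code, and every identifier in it is a placeholder.
+
+```python
+import base64
+import urllib.parse
+import zlib
+
+# Placeholder values - not real Gainsight or Azure AD endpoints.
+IDP_SIGNIN_URL = "https://login.microsoftonline.com/<tenant-id>/saml2"
+SP_ENTITY_ID = "urn:auth0:gainsight:<ID>"
+ACS_URL = "https://secured.gainsightcloud.com/login/callback?connection=<ID>"
+
+authn_request = f"""<samlp:AuthnRequest
+    xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol"
+    xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion"
+    ID="_example1" Version="2.0" IssueInstant="2023-08-22T00:00:00Z"
+    AssertionConsumerServiceURL="{ACS_URL}">
+  <saml:Issuer>{SP_ENTITY_ID}</saml:Issuer>
+</samlp:AuthnRequest>"""
+
+# HTTP-Redirect binding: raw deflate, then Base64, then URL-encode.
+deflated = zlib.compress(authn_request.encode())[2:-4]  # strip zlib wrapper
+saml_request = base64.b64encode(deflated).decode()
+print(f"{IDP_SIGNIN_URL}?SAMLRequest={urllib.parse.quote(saml_request, safe='')}")
+```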
+
+**To configure SAML 2.0 Authentication:**
+
+1. Log in to your **Gainsight** company site as an administrator.
+
+1. Click the **search bar** in the left-side menu and select **User Management**.
+
+ ![Screenshot shows the Gainsight Left Nav Search Bar.](media/gainsight-tutorial/search-bar.png "Search bar")
+
+1. In the **User Management** page, navigate to **Authentication** tab and click **Add Authentication** > **SAML**.
+
+ ![Screenshot shows the Gainsight User Management Authentication Page.](media/gainsight-tutorial/authentication.png "Authentication Page")
+
+1. In the **SAML Mechanism** page, perform the following steps:
+
+ ![Screenshot shows how to edit SAML configuration in Gainsight.](media/gainsight-tutorial/connection.png "Connection Edit")
+
+ 1. Enter a unique connection **Name** in the textbox.
+ 1. Enter a valid **Email Domain** in the textbox.
+ 1. In the **Sign In URL** textbox, paste the **Login URL** value, which you have copied from the Azure portal.
+ 1. In the **Sign Out URL** textbox, paste the **Logout URL** value, which you have copied from the Azure portal.
+ 1. Open the downloaded **Certificate (Base64)** from the Azure portal and upload it into the **Certificate** by clicking **Browse** option.
+ 1. Click **Save**.
+ 1. Reopen the new **SAML** authentication, click edit on the newly created connection, and download the **metadata**. Open the **metadata** file in an editor and copy the **entityID** and the **Assertion Consumer Service** **Location** URL (a parsing sketch follows the note below).
+
+ > [!Note]
+ > For more information on SAML creation, please refer [GAINSIGHT SAML](https://support.gainsight.com/Gainsight_NXT/01Onboarding_and_Implementation/Onboarding_for_Gainsight_NXT/Login_and_Permissions/03Gainsight_Authentication).
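+
+   Rather than eyeballing the XML, you can pull both values out programmatically. Here's a minimal sketch, assuming Python's standard library and a hypothetical downloaded filename `gainsight-sp-metadata.xml`:
+
+   ```python
+   import xml.etree.ElementTree as ET
+
+   NS = {"md": "urn:oasis:names:tc:SAML:2.0:metadata"}
+
+   root = ET.parse("gainsight-sp-metadata.xml").getroot()
+
+   # entityID is an attribute on the root EntityDescriptor element.
+   print("entityID:", root.get("entityID"))
+
+   # The Assertion Consumer Service Location lives on the SPSSODescriptor.
+   acs = root.find("md:SPSSODescriptor/md:AssertionConsumerService", NS)
+   print("ACS Location:", acs.get("Location"))
+   ```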
+
+1. Now back in the Azure portal, on the **Set up single sign-on with SAML** page, select the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows how to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, using the **entityID** and **Assertion Consumer Service Location** values you copied from the metadata, perform the following steps:
+
+ a. In the **Identifier (Entity ID)** textbox, type a value using one of the following patterns:
+
+ | **Identifier** |
+ | - |
+ | `urn:auth0:gainsight:<ID>` |
+ | `urn:auth0:gainsight-eu:<ID>` |
+
+ b. In the **Reply URL (Assertion Consumer Service URL)** textbox, type a URL using one of the following patterns:
+
+ | **Reply URL** |
+ | - |
+ | `https://secured.gainsightcloud.com/login/callback?connection=<ID>` |
+ | `https://secured.eu.gainsightcloud.com/login/callback?connection=<ID>` |
+
+1. Perform the following step, if you wish to configure the application in **SP** initiated mode:
+
+ In the **Sign on URL** textbox, type a URL using one of the following patterns:
+
+ | **Sign on URL** |
+ ||
+ | `https://secured.gainsightcloud.com/samlp/<ID>` |
+ | `https://secured.eu.gainsightcloud.com/samlp/<ID>` |
+
+## Create Gainsight test user
+
+1. In a different web browser window, sign in to your Gainsight website as an administrator.
+
+1. In the **User Management** page, navigate to **Users** > **Add User**.
+
+ ![Screenshot shows how to add users in Gainsight.](media/gainsight-tutorial/user.png "Add Users")
+
+1. Fill in the required fields and click **Save**. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with following options.
+
+#### SP initiated:
+
+* Click on **Test this application** in the Azure portal. This will redirect to the Gainsight Sign-on URL, where you can initiate the login flow.
+
+* Go to Gainsight Sign-on URL directly and initiate the login flow from there.
+
+#### IDP initiated:
+
+* Click on **Test this application** in the Azure portal, and you should be automatically signed in to the Gainsight for which you set up the SSO.
+
+You can also use Microsoft My Apps to test the application in any mode. When you click the Gainsight tile in My Apps, if configured in SP mode, you're redirected to the application sign-on page to initiate the login flow; if configured in IDP mode, you should be automatically signed in to the Gainsight for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Additional resources
+
+* [What is single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+* [Plan a single sign-on deployment](../manage-apps/plan-sso-deployment.md).
+
+## Next steps
+
+Once you configure Gainsight, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Google Apps Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/google-apps-tutorial.md
Previously updated : 11/21/2022 Last updated : 08/16/2023
To configure the integration of Google Cloud / G Suite Connector by Microsoft in
Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration as well. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides)
-Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, as well as walk through the SSO configuration as well. You can learn more about O365 wizards [here](/microsoft-365/admin/misc/azure-ad-setup-guides?view=o365-worldwide&preserve-view=true).
-
## Configure and test Azure AD single sign-on for Google Cloud / G Suite Connector by Microsoft

Configure and test Azure AD SSO with Google Cloud / G Suite Connector by Microsoft using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Google Cloud / G Suite Connector by Microsoft.
active-directory Harness Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/harness-provisioning-tutorial.md
In this article, you learn how to configure Azure Active Directory (Azure AD) to
> [!NOTE] > This article describes a connector that's built on top of the Azure AD user provisioning service. For important information about this service and answers to frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md). >
-> This connector is currently in preview. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+> This connector is currently in preview. For more information about previews, see [Universal License Terms For Online Services](https://www.microsoft.com/licensing/terms/product/ForOnlineServices/all).
## Prerequisites
active-directory Hive Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/hive-tutorial.md
Previously updated : 11/21/2022 Last updated : 08/21/2023
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. In a different web browser window, sign in to Hive website as an administrator.
-1. Click on the **User Profile** and click **Your workspace**.
+1. Click on the **User Profile** and click your workspace **Settings**.
![Screenshot shows the Hive website with Your workspace selected from the menu.](./media/hive-tutorial/profile.png)
-1. Click **Auth** and perform the following steps:
+1. Click **Enterprise Security** and perform the following steps:
- ![Screenshot shows the Auth page where do the tasks described.](./media/hive-tutorial/authentication.png)
+ [![Screenshot shows the Auth page where do the tasks described.](./media/hive-tutorial/authentication.png)](./media/hive-tutorial/authentication.png#lightbox)
a. Copy **Your Workspace ID** and append it to the **SignOn URL** and **Reply URL** in the **Basic SAML Configuration Section** in the Azure portal.
In this section, you test your Azure AD single sign-on configuration with follow
* Click on **Test this application** in Azure portal and you should be automatically signed in to the Hive for which you set up the SSO.
-You can also use Microsoft My Apps to test the application in any mode. When you click the Hive tile in the My Apps, if configured in SP mode you would be redirected to the application sign on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the Hive for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
+You can also use Microsoft My Apps to test the application in any mode. When you click the Hive tile in the My Apps, if configured in SP mode you would be redirected to the application sign-on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the Hive for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
## Next steps
active-directory Hornbill Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/hornbill-tutorial.md
Previously updated : 04/19/2023 Last updated : 08/16/2023

# Tutorial: Azure AD SSO integration with Hornbill
Follow these steps to enable Azure AD SSO in the Azure portal.
4. On the **Basic SAML Configuration** section, perform the following steps:

   a. In the **Identifier (Entity ID)** text box, type a URL using the following pattern:
- `https://sso.hornbill.com/<INSTANCE_NAME>/<SUBDOMAIN>`
+`https://sso.hornbill.com/<INSTANCE_NAME>/live`
- b. In the **Sign on URL** text box, type a URL using the following pattern:
- `https://<SUBDOMAIN>.hornbill.com/<INSTANCE_NAME>/`
+ > [!NOTE]
+ > If you are deploying the Hornbill Mobile Catalog to your organization, you will need to add an additional identifier URL, like so:
+`https://sso.hornbill.com/hornbill/mcatalog`
+
+ b. In the **Reply URL (Assertion Consumer Service URL)** section, add the following:
+`https://<API_SUBDOMAIN>.hornbill.com/<INSTANCE_NAME>/xmlmc/sso/saml2/authorize/user/live`
+
+ > [!NOTE]
+ > If you are deploying the Hornbill Mobile Catalog to your organization, you will need to add an additional Reply URL, like so:
+`https://<API_SUBDOMAIN>.hornbill.com/hornbill/xmlmc/sso/saml2/authorize/user/mcatalog`
+
+ c. In the **Sign on URL** text box, type a URL using the following pattern:
+`https://live.hornbill.com/<INSTANCE_NAME>/`
> [!NOTE]
- > These values are not real. Update these values with the actual Identifier and Sign on URL. Contact [Hornbill Client support team](https://www.hornbill.com/support/?request/) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+ > These values are not real. Update the `<INSTANCE_NAME>` and `<API_SUBDOMAIN>` values with the actual values in the Identifier(s), Reply URL(s), and Sign on URL. You can retrieve these values from the Hornbill Solution Center in your Hornbill instance, under **_Your usage > Support_**. Contact [Hornbill Support](https://www.hornbill.com/support) for assistance in getting these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
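+
+ A quick sketch of how the three URLs above are assembled from those two values. The `acme` and `api-eu` placeholders are invented examples, not real Hornbill values:
+
+ ```python
+ # Hypothetical example values - substitute your own instance details.
+ instance_name = "acme"    # your Hornbill instance name
+ api_subdomain = "api-eu"  # your Hornbill API subdomain
+
+ identifier = f"https://sso.hornbill.com/{instance_name}/live"
+ reply_url = (f"https://{api_subdomain}.hornbill.com/{instance_name}"
+              "/xmlmc/sso/saml2/authorize/user/live")
+ sign_on_url = f"https://live.hornbill.com/{instance_name}/"
+
+ print(identifier, reply_url, sign_on_url, sep="\n")
+ ```
+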
-5. On the **Set up Single Sign-On with SAML** page, In the **SAML Signing Certificate** section, click copy button to copy **App Federation Metadata Url** and save it on your computer.
+6. On the **Set up Single Sign-On with SAML** page, In the **SAML Signing Certificate** section, click copy button to copy **App Federation Metadata Url** and save it on your computer.
![The Certificate download link](common/copy-metadataurl.png)
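One way to confirm the copied **App Federation Metadata Url** is live is to fetch it and read out the IdP entity ID and sign-on endpoint. A minimal sketch using Python's standard library; the URL shown is a placeholder to replace with your own:

```python
import urllib.request
import xml.etree.ElementTree as ET

# Placeholder - paste the App Federation Metadata Url you copied.
METADATA_URL = "https://login.microsoftonline.com/<tenant-id>/federationmetadata/2007-06/federationmetadata.xml?appid=<app-id>"

NS = {"md": "urn:oasis:names:tc:SAML:2.0:metadata"}

with urllib.request.urlopen(METADATA_URL) as resp:
    root = ET.fromstring(resp.read())

print("IdP entityID:", root.get("entityID"))

# The redirect sign-on endpoint lives under the IDPSSODescriptor.
sso = root.find("md:IDPSSODescriptor/md:SingleSignOnService", NS)
print("SSO endpoint:", sso.get("Location"))
```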
active-directory Hypervault Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/hypervault-provisioning-tutorial.md
+
+ Title: 'Tutorial: Configure Hypervault for automatic user provisioning with Azure Active Directory'
+description: Learn how to automatically provision and deprovision user accounts from Azure AD to Hypervault.
++
+writer: twimmers
+
+ms.assetid: eca2ff9e-a09d-4bb4-88f6-6021a93d2c9d
++++ Last updated : 08/16/2023+++
+# Tutorial: Configure Hypervault for automatic user provisioning
+
+This tutorial describes the steps you need to perform in both Hypervault and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and deprovisions users to [Hypervault](https://hypervault.com) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
++
+## Supported capabilities
+> [!div class="checklist"]
+> * Create users in Hypervault.
+> * Remove users in Hypervault when they do not require access anymore.
+> * Keep user attributes synchronized between Azure AD and Hypervault.
+> * [Single sign-on](../manage-apps/add-application-portal-setup-oidc-sso.md) to Hypervault (recommended).
+
+## Prerequisites
+
+The scenario outlined in this tutorial assumes that you already have the following prerequisites:
+
+* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md)
+* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator).
+* A user account in Hypervault with Admin permissions.
+
+## Step 1. Plan your provisioning deployment
+1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md).
+1. Determine who is in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+1. Determine what data to [map between Azure AD and Hypervault](../app-provisioning/customize-application-attributes.md).
+
+## Step 2. Configure Hypervault to support provisioning with Azure AD
+Contact Hypervault support to configure Hypervault to support provisioning with Azure AD.
+
+## Step 3. Add Hypervault from the Azure AD application gallery
+
+Add Hypervault from the Azure AD application gallery to start managing provisioning to Hypervault. If you have previously set up Hypervault for SSO, you can use the same application. However, it's recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md).
+
+## Step 4. Define who is in scope for provisioning
+
+The Azure AD provisioning service allows you to scope who is provisioned based on assignment to the application and/or based on attributes of the user. If you choose to scope who is provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users to the application. If you choose to scope who is provisioned based solely on attributes of the user, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+* Start small. Test with a small set of users before rolling out to everyone. When scope for provisioning is set to assigned users, you can control this by assigning one or two users to the app. When scope is set to all users, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+* If you need more roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
++
+## Step 5. Configure automatic user provisioning to Hypervault
+
+This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users in Hypervault based on user assignments in Azure AD.
+
+### To configure automatic user provisioning for Hypervault in Azure AD:
+
+1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**.
+
+ ![Screenshot of Enterprise applications blade.](common/enterprise-applications.png)
+
+1. In the applications list, select **Hypervault**.
+
+ ![Screenshot of the Hypervault link in the Applications list.](common/all-applications.png)
+
+1. Select the **Provisioning** tab.
+
+ ![Screenshot of Provisioning tab.](common/provisioning.png)
+
+1. Set the **Provisioning Mode** to **Automatic**.
+
+ ![Screenshot of Provisioning tab automatic.](common/provisioning-automatic.png)
+
+1. Under the **Admin Credentials** section, input your Hypervault Tenant URL and Secret Token. Click **Test Connection** to ensure Azure AD can connect to Hypervault. If the connection fails, ensure your Hypervault account has Admin permissions and try again.
+
+ ![Screenshot of Token.](common/provisioning-testconnection-tenanturltoken.png)
+
+1. In the **Notification Email** field, enter the email address of a person who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box.
+
+ ![Screenshot of Notification Email.](common/provisioning-notification-email.png)
+
+1. Select **Save**.
+
+1. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to Hypervault**.
+
+1. Review the user attributes that are synchronized from Azure AD to Hypervault in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Hypervault for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you need to ensure that the Hypervault API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
+
+ |Attribute|Type|Supported for filtering|Required by Hypervault|
+ |||||
+ |userName|String|&check;|&check;
+ |active|Boolean||&check;
+ |displayName|String||&check;
+ |name.givenName|String||&check;
+ |name.familyName|String||&check;
+ |emails[type eq "work"].value|String||&check;
+
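+   As a rough illustration of what these mappings produce on the wire, the sketch below prints the kind of SCIM 2.0 user payload the provisioning service sends for the attributes in this table; the example values are illustrative assumptions, not Hypervault specifics.
+
+   ```python
+   import json
+
+   # Minimal sketch of a SCIM 2.0 user matching the attribute table above;
+   # all values are hypothetical examples.
+   user = {
+       "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
+       "userName": "alice@contoso.com",  # matching attribute
+       "active": True,
+       "displayName": "Alice Example",
+       "name": {"givenName": "Alice", "familyName": "Example"},
+       "emails": [{"type": "work", "value": "alice@contoso.com"}],
+   }
+   print(json.dumps(user, indent=2))
+   ```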
+1. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+1. To enable the Azure AD provisioning service for Hypervault, change the **Provisioning Status** to **On** in the **Settings** section.
+
+ ![Screenshot of Provisioning Status Toggled On.](common/provisioning-toggle-on.png)
+
+1. Define the users that you would like to provision to Hypervault by choosing the desired values in **Scope** in the **Settings** section.
+
+ ![Screenshot of Provisioning Scope.](common/provisioning-scope.png)
+
+1. When you're ready to provision, click **Save**.
+
+ ![Screenshot of Saving Provisioning Configuration.](common/provisioning-configuration-save.png)
+
+This operation starts the initial synchronization cycle of all users defined in **Scope** in the **Settings** section. The initial cycle takes longer to perform than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running.
+
+## Step 6. Monitor your deployment
+Once you've configured provisioning, use the following resources to monitor your deployment:
+
+* Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully
+* Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion
+* If the provisioning configuration seems to be in an unhealthy state, the application goes into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
+
+## More resources
+
+* [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md)
+* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+
+## Next steps
+
+* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
active-directory Insite Lms Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/insite-lms-provisioning-tutorial.md
Title: 'Tutorial: Configure Insite LMS for automatic user provisioning with Azure Active Directory'
-description: Learn how to automatically provision and de-provision user accounts from Azure AD to Insite LMS.
+description: Learn how to automatically provision and deprovision user accounts from Azure AD to Insite LMS.
writer: twimmers
# Tutorial: Configure Insite LMS for automatic user provisioning
-This tutorial describes the steps you need to do in both Insite LMS and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [Insite LMS](https://www.insite-it.net/) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
+This tutorial describes the steps you need to do in both Insite LMS and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and deprovisions users and groups to [Insite LMS](https://www.insite-it.net/) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
## Capabilities Supported
The scenario outlined in this tutorial assumes that you already have the followi
## Step 1. Plan your provisioning deployment 1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md).
-1. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+1. Determine who is in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
1. Determine what data to [map between Azure AD and Insite LMS](../app-provisioning/customize-application-attributes.md). ## Step 2. Configure Insite LMS to support provisioning with Azure AD
+To generate the Secret Token:
-1. Navigate to `https://portal.insitelms.net/<OrganizationName>`.
-1. Download and install the Desktop Client.
-1. Log in with your Admin Account and Navigate to **Users** Module.
-1. Select the User `scim@insitelms.net` and press the button **Generate Access Token**. If you can't find the scim-User, contact the Support-Team
- 1. Choose **AzureAdScimProvisioning** and press **generate**
- 1. Copy the **AccessToken**
-1. The **Tenant Url** is `https://web.insitelms.net/<OrganizationName>/api/scim`.
+1. Log in to the [Insite LMS Admin Console](https://portal.insitelms.net/organization/applications).
+1. Navigate to **Self Hosted Jobs**. You'll find a job named "SCIM".
+
+ ![Screenshot of generate API Key.](media/insite-lms-provisioning-tutorial/generate-api-key.png)
+
+1. Click **Generate Api Key**.
+Copy and save the **Api Key**. This value is entered in the **Secret Token** field in the Provisioning tab of your Insite LMS application in the Azure portal.
+
+> [!NOTE]
+> The API key is only valid for one year.
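+
+If you want to sanity-check the key before entering it in the Azure portal, you can query the SCIM endpoint used in Step 5 directly. A minimal sketch, assuming the key is accepted as a bearer token (confirm the exact scheme with Insite LMS support):
+
+```python
+import urllib.request
+
+API_KEY = "<your-api-key>"  # the key generated above
+req = urllib.request.Request(
+    "https://api.insitelms.net/scim/Users?count=1",
+    headers={"Authorization": f"Bearer {API_KEY}"},
+)
+with urllib.request.urlopen(req) as resp:  # raises HTTPError if rejected
+    print(resp.status)     # expect 200 for a valid key
+    print(resp.read(200))  # start of the SCIM ListResponse body
+```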
## Step 3. Add Insite LMS from the Azure AD application gallery Add Insite LMS from the Azure AD application gallery to start managing provisioning to Insite LMS. If you have previously setup Insite LMS for SSO, you can use the same application. However it's recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md).
-## Step 4. Define who will be in scope for provisioning
+## Step 4. Define who is in scope for provisioning
-The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and or based on attributes of the user / group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+The Azure AD provisioning service allows you to scope who is provisioned based on assignment to the application and or based on attributes of the user / group. If you choose to scope who is provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who is provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
+* If you need more roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
-## Step 5. Configure automatic user provisioning to Insite LMS
+## Step 5. Configure automatic user provisioning to Insite LMS
This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users and/or groups in Insite LMS app based on user and group assignments in Azure AD.
This section guides you through the steps to configure the Azure AD provisioning
1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**.
- ![Enterprise applications blade](common/enterprise-applications.png)
+ ![Screenshot of Enterprise applications blade.](common/enterprise-applications.png)
1. In the applications list, select **Insite LMS**.
- ![The Insite LMS link in the Applications list](common/all-applications.png)
+ ![Screenshot of The Insite LMS link in the Applications list.](common/all-applications.png)
1. Select the **Provisioning** tab.
- ![Provisioning tab](common/provisioning.png)
+ ![Screenshot of Provisioning tab.](common/provisioning.png)
1. Set the **Provisioning Mode** to **Automatic**.
- ![Provisioning tab automatic](common/provisioning-automatic.png)
+ ![Screenshot of Provisioning tab automatic.](common/provisioning-automatic.png)
-1. In the **Admin Credentials** section, enter your Insite LMS **Tenant URL** and **Secret token** information. Select **Test Connection** to ensure that Azure AD can connect to Insite LMS. If the connection fails, ensure that your Insite LMS account has admin permissions and try again.
+1. In the **Admin Credentials** section,
+enter your Insite LMS **Tenant URL** as `https://api.insitelms.net/scim` and enter the **Secret token** generated in Step 2 above. Select **Test Connection** to ensure that Azure AD can connect to Insite LMS. If the connection fails, ensure that your Insite LMS account has admin permissions and try again.
- ![Token](common/provisioning-testconnection-tenanturltoken.png)
+ ![Screenshot of Token.](common/provisioning-testconnection-tenanturltoken.png)
1. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications. Select the **Send an email notification when a failure occurs** check box.
- ![Notification Email](common/provisioning-notification-email.png)
+ ![Screenshot of Notification Email.](common/provisioning-notification-email.png)
1. Select **Save**. 1. In the **Mappings** section, select **Synchronize Azure Active Directory Users to Insite LMS**.
-1. Review the user attributes that are synchronized from Azure AD to Insite LMS in the **Attribute Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Insite LMS for update operations. If you change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you'll need to ensure that the Insite LMS API supports filtering users based on that attribute. Select **Save** to commit any changes.
+1. Review the user attributes that are synchronized from Azure AD to Insite LMS in the **Attribute Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Insite LMS for update operations. If you change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you need to ensure that the Insite LMS API supports filtering users based on that attribute. Select **Save** to commit any changes.
- |Attribute|Type|Supported for filtering|
- ||||
- |userName|String|&check;|
- |emails[type eq "work"].value|String|&check;|
- |active|Boolean|
- |name.givenName|String|
- |name.familyName|String|
- |phoneNumbers[type eq "work"].value|String|
+ |Attribute|Type|Supported for filtering|Required by Insite LMS|
+ |||||
+ |userName|String|&check;|&check;|
+ |emails[type eq "work"].value|String|&check;|&check;|
+ |active|Boolean||
+ |name.givenName|String||
+ |name.familyName|String||
+ |phoneNumbers[type eq "work"].value|String||
1. To configure scoping filters, see the instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md). 1. To enable the Azure AD provisioning service for Insite LMS, change **Provisioning Status** to **On** in the **Settings** section.
- ![Provisioning Status Toggled On](common/provisioning-toggle-on.png)
+ ![Screenshot of Provisioning Status Toggled On.](common/provisioning-toggle-on.png)
1. Define the users or groups that you want to provision to Insite LMS by selecting the desired values in **Scope** in the **Settings** section.
- ![Provisioning Scope](common/provisioning-scope.png)
+ ![Screenshot of Provisioning Scope.](common/provisioning-scope.png)
1. When you're ready to provision, select **Save**.
- ![Saving Provisioning Configuration](common/provisioning-configuration-save.png)
+ ![Screenshot of Saving Provisioning Configuration.](common/provisioning-configuration-save.png)
This operation starts the initial synchronization cycle of all users and groups defined in **Scope** in the **Settings** section. The initial cycle takes longer to perform than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running.
After you've configured provisioning, use the following resources to monitor you
* Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users were provisioned successfully or unsuccessfully. * Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion.
-* If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. To learn more about quarantine states, see [Application provisioning status of quarantine](../app-provisioning/application-provisioning-quarantine-status.md).
+* If the provisioning configuration seems to be in an unhealthy state, the application goes into quarantine. To learn more about quarantine states, see [Application provisioning status of quarantine](../app-provisioning/application-provisioning-quarantine-status.md).
## More resources
active-directory Leapsome Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/leapsome-provisioning-tutorial.md
The objective of this tutorial is to demonstrate the steps to be performed in Le
> [!NOTE] > This tutorial describes a connector built on top of the Azure AD User Provisioning Service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md). >
-> This connector is currently in Preview. For more information on the general Microsoft Azure terms of use for Preview features, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+> This connector is currently in Preview. For more information about previews, see [Universal License Terms For Online Services](https://www.microsoft.com/licensing/terms/product/ForOnlineServices/all).
## Prerequisites
active-directory Oneflow Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/oneflow-provisioning-tutorial.md
+
+ Title: 'Tutorial: Configure Oneflow for automatic user provisioning with Azure Active Directory'
+description: Learn how to automatically provision and deprovision user accounts from Azure AD to Oneflow.
++
+writer: twimmers
+
+ms.assetid: 6af89cdd-956c-4cc2-9a61-98afe7814470
++++ Last updated : 08/16/2023+++
+# Tutorial: Configure Oneflow for automatic user provisioning
+
+This tutorial describes the steps you need to perform in both Oneflow and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and deprovisions users and groups to [Oneflow](https://oneflow.com) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
++
+## Supported capabilities
+> [!div class="checklist"]
+> * Create users in Oneflow.
+> * Remove users in Oneflow when they do not require access anymore.
+> * Keep user attributes synchronized between Azure AD and Oneflow.
+> * Provision groups and group memberships in Oneflow.
+> * [Single sign-on](oneflow-tutorial.md) to Oneflow (recommended).
+
+## Prerequisites
+
+The scenario outlined in this tutorial assumes that you already have the following prerequisites:
+
+* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md).
+* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator).
+* A Oneflow tenant.
+* A user account in Oneflow with Admin permissions.
+
+## Step 1. Plan your provisioning deployment
+1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md).
+1. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+1. Determine what data to [map between Azure AD and Oneflow](../app-provisioning/customize-application-attributes.md).
+
+## Step 2. Configure Oneflow to support provisioning with Azure AD
+Contact Oneflow support to configure Oneflow to support provisioning with Azure AD.
+
+## Step 3. Add Oneflow from the Azure AD application gallery
+
+Add Oneflow from the Azure AD application gallery to start managing provisioning to Oneflow. If you have previously set up Oneflow for SSO, you can use the same application. However, it's recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md).
+
+## Step 4. Define who will be in scope for provisioning
+
+The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and or based on attributes of the user / group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+* If you need more roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
++
+## Step 5. Configure automatic user provisioning to Oneflow
+
+This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users and/or groups in Oneflow based on user and/or group assignments in Azure AD.
+
+### To configure automatic user provisioning for Oneflow in Azure AD:
+
+1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**.
+
+ ![Screenshot of Enterprise applications blade.](common/enterprise-applications.png)
+
+1. In the applications list, select **Oneflow**.
+
+ ![Screenshot of the Oneflow link in the Applications list.](common/all-applications.png)
+
+1. Select the **Provisioning** tab.
+
+ ![Screenshot of Provisioning tab.](common/provisioning.png)
+
+1. Set the **Provisioning Mode** to **Automatic**.
+
+ ![Screenshot of Provisioning tab automatic.](common/provisioning-automatic.png)
+
+1. Under the **Admin Credentials** section, input your Oneflow Tenant URL and Secret Token. Click **Test Connection** to ensure Azure AD can connect to Oneflow. If the connection fails, ensure your Oneflow account has Admin permissions and try again.
+
+ ![Screenshot of Token.](common/provisioning-testconnection-tenanturltoken.png)
+
+1. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box.
+
+ ![Screenshot of Notification Email.](common/provisioning-notification-email.png)
+
+1. Select **Save**.
+
+1. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to Oneflow**.
+
+1. Review the user attributes that are synchronized from Azure AD to Oneflow in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Oneflow for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you'll need to ensure that the Oneflow API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
+
+ |Attribute|Type|Supported for filtering|Required by Oneflow|
+ |||||
+ |userName|String|&check;|&check;
+ |active|Boolean||&check;
+ |externalId|String||
+ |emails[type eq "work"].value|String||
+ |name.givenName|String||
+ |name.familyName|String||
+ |phoneNumbers[type eq "work"].value|String||
+ |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:department|String||
+ |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:employeeNumber|String||
+ |nickName|String||
+ |title|String||
+ |profileUrl|String||
+ |displayName|String||
+ |addresses[type eq "work"].streetAddress|String||
+ |addresses[type eq "work"].locality|String||
+ |addresses[type eq "work"].region|String||
+ |addresses[type eq "work"].postalCode|String||
+ |addresses[type eq "work"].country|String||
+ |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:costCenter|String||
+ |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:division|String||
+ |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:organization|String||
+ |urn:ietf:params:scim:schemas:extension:ws1b:2.0:User:adSourceAnchor|String||
+ |urn:ietf:params:scim:schemas:extension:ws1b:2.0:User:customAttribute1|String||
+ |urn:ietf:params:scim:schemas:extension:ws1b:2.0:User:customAttribute2|String||
+ |urn:ietf:params:scim:schemas:extension:ws1b:2.0:User:customAttribute3|String||
+ |urn:ietf:params:scim:schemas:extension:ws1b:2.0:User:customAttribute4|String||
+ |urn:ietf:params:scim:schemas:extension:ws1b:2.0:User:customAttribute5|String||
+ |urn:ietf:params:scim:schemas:extension:ws1b:2.0:User:distinguishedName|String||
+ |urn:ietf:params:scim:schemas:extension:ws1b:2.0:User:domain|String||
+ |urn:ietf:params:scim:schemas:extension:ws1b:2.0:User:userPrincipalName|String||
+
+1. Under the **Mappings** section, select **Synchronize Azure Active Directory Groups to Oneflow**.
+
+1. Review the group attributes that are synchronized from Azure AD to Oneflow in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the groups in Oneflow for update operations. Select the **Save** button to commit any changes.
+
+ |Attribute|Type|Supported for filtering|Required by Oneflow|
+ |||||
+ |displayName|String|&check;|&check;
+ |externalId|String|&check;|&check;
+ |members|Reference||
+
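+   For orientation, this mapping corresponds to a standard SCIM 2.0 group resource; a minimal sketch with hypothetical values:
+
+   ```python
+   import json
+
+   # Minimal sketch of a SCIM 2.0 group matching the mapping table above;
+   # the name, externalId, and member IDs are hypothetical.
+   group = {
+       "schemas": ["urn:ietf:params:scim:schemas:core:2.0:Group"],
+       "displayName": "Sales",
+       "externalId": "azure-ad-group-object-id",
+       "members": [{"value": "scim-user-id-1"}, {"value": "scim-user-id-2"}],
+   }
+   print(json.dumps(group, indent=2))
+   ```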
+1. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+1. To enable the Azure AD provisioning service for Oneflow, change the **Provisioning Status** to **On** in the **Settings** section.
+
+ ![Screenshot of Provisioning Status Toggled On.](common/provisioning-toggle-on.png)
+
+1. Define the users and/or groups that you would like to provision to Oneflow by choosing the desired values in **Scope** in the **Settings** section.
+
+ ![Screenshot of Provisioning Scope.](common/provisioning-scope.png)
+
+1. When you're ready to provision, click **Save**.
+
+ ![Screenshot of Saving Provisioning Configuration.](common/provisioning-configuration-save.png)
+
+This operation starts the initial synchronization cycle of all users and groups defined in **Scope** in the **Settings** section. The initial cycle takes longer to perform than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running.
+
+## Step 6. Monitor your deployment
+Once you've configured provisioning, use the following resources to monitor your deployment:
+
+* Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully
+* Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion
+* If the provisioning configuration seems to be in an unhealthy state, the application goes into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
+
+## More resources
+
+* [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md)
+* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+
+## Next steps
+
+* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
active-directory Optiturn Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/optiturn-tutorial.md
Previously updated : 07/02/2023 Last updated : 09/04/2023
Complete the following steps to enable Azure AD single sign-on in the Azure port
1. On the **Basic SAML Configuration** section, perform the following steps:
- a. In the **Identifier** textbox, type the URL:
- `https://optiturn.com/sp`
+ a. In the **Identifier** textbox, type one of the following URLs:
- b. In the **Reply URL** textbox, type a URL using one of the following patterns:
+ | **Identifier** |
+ ||
+ | `https://optiturn.com/sp` - production |
+ | `https://sandbox.optiturn.com/sp` - testing |
+
+ b. In the **Reply URL** textbox, type a URL using one of the following patterns:
| **Reply URL** | ||
- | `https://optiturn.com/auth/saml/<Customer_Name>_azure_saml/callback` |
- | `https://<Environment>.optiturn.com/auth/saml/<Customer_Name>_azure_saml/callback` |
+ | `https://optiturn.com/auth/saml/<Customer_Name>_azure_saml/callback` - production |
+ | `https://sandbox.optiturn.com/auth/saml/<Customer_Name>_azure_saml/callback` - testing |
- c. In the **Sign on URL** textbox, type one of the following URL/pattern:
+ c. In the **Sign on URL** textbox, type one of the following URLs:
| **Sign on URL** | |-|
- | ` https://optiturn.com/session/new` |
- | `https://<Environment>.optiturn.com/session/new` |
+ | `https://optiturn.com/session/new` - production |
+ | `https://sandbox.optiturn.com/session/new` - testing |
> [!NOTE]
- > These values are not real. Update these values with the actual Reply URL and Sign on URL. Contact [OptiTurn support team](mailto:support@optoro.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+ > `<Customer_Name>` should be replaced with a lowercased and underscored version of your company's name. For example, Fake Corp. would become fake_corp. The [OptiTurn support team](mailto:support@optoro.com) can help choose this value.
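+
+   As a rough sketch of that naming convention (confirm your final value with the support team), using the example from the note above:
+
+   ```python
+   import re
+
+   def customer_name_slug(company: str) -> str:
+       """Lowercase the company name, drop punctuation, and join words
+       with underscores, e.g. 'Fake Corp.' -> 'fake_corp'."""
+       words = re.findall(r"[a-z0-9]+", company.lower())
+       return "_".join(words)
+
+   assert customer_name_slug("Fake Corp.") == "fake_corp"
+   ```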
1. OptiTurn application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
active-directory Oracle Cloud Infrastructure Console Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/oracle-cloud-infrastructure-console-provisioning-tutorial.md
The scenario outlined in this tutorial assumes that you already have the followi
* An Oracle Cloud Infrastructure Console [tenant](https://www.oracle.com/cloud/sign-in.html?intcmp=OcomFreeTier&source=:ow:o:p:nav:0916BCButton). * A user account in Oracle Cloud Infrastructure Console with Admin permissions.
+> [!NOTE]
+> This integration is also available to use from the Azure AD US Government Cloud environment. You can find this application in the Azure AD US Government Cloud Application Gallery and configure it in the same way as you do from the public cloud.
+ ## Step 1. Plan your provisioning deployment 1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md). 2. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
Once you've configured provisioning, use the following resources to monitor your
* Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion * If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
-## Additional resources
+## Change log
+08/15/2023 - The app was added to Gov Cloud.
+
+## More resources
* [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md) * [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
active-directory Oreilly Learning Platform Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/oreilly-learning-platform-provisioning-tutorial.md
This tutorial describes the steps you need to perform in both O'Reilly learning platform and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users to [O'Reilly learning platform](https://www.oreilly.com/) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md). - ## Supported capabilities+ > [!div class="checklist"] > * Create users in O'Reilly learning platform. > * Remove users in O'Reilly learning platform when they do not require access anymore.
The scenario outlined in this tutorial assumes that you already have the followi
* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md) * A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator). * A user account in O'Reilly learning platform with Admin permissions.
+* An O'Reilly learning platform single sign-on (SSO) enabled subscription.
## Step 1. Plan your provisioning deployment * Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md).
The scenario outlined in this tutorial assumes that you already have the followi
* Determine what data to [map between Azure AD and O'Reilly learning platform](../app-provisioning/customize-application-attributes.md). ## Step 2. Configure O'Reilly learning platform to support provisioning with Azure AD
-Contact O'Reilly learning platform support to configure O'Reilly learning platform to support provisioning with Azure AD.
+
+Before you begin to configure the O'Reilly learning platform to support provisioning with Azure AD, you'll need to generate a SCIM API token within the O'Reilly Admin Console.
+
+1. Navigate to the [O'Reilly Admin Console](https://learning.oreilly.com/) by logging in to your O'Reilly account.
+1. Once you've logged in, click **Admin** in the top navigation and select **Integrations**.
+1. Scroll down to the **API tokens** section. Under API tokens, click **Create token** and select the **SCIM API**. Then give your token a name and an expiration date, and click **Continue**. You'll receive your API key in a pop-up message prompting you to store a copy of it in a secure place. Once you've saved a copy of your key, select the checkbox and click **Continue**.
+1. You'll use the O'Reilly SCIM API token in Step 5.
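+
+Optionally, you can confirm the token works against the Tenant URL used in Step 5 before continuing. A minimal sketch, assuming the token is sent as a bearer token:
+
+```python
+import urllib.request
+
+TOKEN = "<your-scim-api-token>"  # the token generated above
+req = urllib.request.Request(
+    "https://api.oreilly.com/api/scim/v2/ServiceProviderConfig",
+    headers={"Authorization": f"Bearer {TOKEN}"},
+)
+with urllib.request.urlopen(req) as resp:
+    print(resp.status)  # expect 200 with a SCIM ServiceProviderConfig body
+```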
## Step 3. Add O'Reilly learning platform from the Azure AD application gallery
-Add O'Reilly learning platform from the Azure AD application gallery to start managing provisioning to O'Reilly learning platform. If you have previously setup O'Reilly learning platform for SSO you can use the same application. However it's recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md).
+Add O'Reilly learning platform from the Azure AD application gallery to start managing provisioning to O'Reilly learning platform. If you have previously [set up O'Reilly learning platform for SSO](oreilly-learning-platform-tutorial.md), you can use the same application. However it's recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md).
-## Step 4. Define who will be in scope for provisioning
+## Step 4. Define who will be in scope for provisioning
-The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and or based on attributes of the user. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users to the application. If you choose to scope who will be provisioned based solely on attributes of the user, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and or based on attributes of the user. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users to the application. If you choose to scope who will be provisioned based solely on attributes of the user, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
* Start small. Test with a small set of users before rolling out to everyone. When scope for provisioning is set to assigned users, you can control this by assigning one or two users to the app. When scope is set to all users, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md). * If you need more roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
+## Step 5. Configure automatic user provisioning to O'Reilly learning platform
-## Step 5. Configure automatic user provisioning to O'Reilly learning platform
-
-This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users in TestApp based on user assignments in Azure AD.
+This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users in O'Reilly learning platform based on user assignments in Azure AD.
### To configure automatic user provisioning for O'Reilly learning platform in Azure AD:
This section guides you through the steps to configure the Azure AD provisioning
![Screenshot of Provisioning tab automatic.](common/provisioning-automatic.png)
-1. Under the **Admin Credentials** section, input your O'Reilly learning platform Tenant URL and Secret Token. Click **Test Connection** to ensure Azure AD can connect to O'Reilly learning platform. If the connection fails, ensure your O'Reilly learning platform account has Admin permissions and try again.
+1. Under the **Admin Credentials** section, input your O'Reilly learning platform Tenant URL, which is `https://api.oreilly.com/api/scim/v2`, and Secret Token, which you generated in Step 2. Click **Test Connection** to ensure Azure AD can connect to O'Reilly learning platform. If the connection fails, double-check that your token is correct or [contact the O'Reilly platform integration team](mailto:platform-integration@oreilly.com) for help.
![Screenshot of Token.](common/provisioning-testconnection-tenanturltoken.png)
This section guides you through the steps to configure the Azure AD provisioning
This operation starts the initial synchronization cycle of all users defined in **Scope** in the **Settings** section. The initial cycle takes longer to perform than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running. ## Step 6. Monitor your deployment+ Once you've configured provisioning, use the following resources to monitor your deployment: * Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully
active-directory Peakon Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/peakon-provisioning-tutorial.md
The objective of this tutorial is to demonstrate the steps to be performed in Pe
> [!NOTE] > This tutorial describes a connector built on top of the Azure AD User Provisioning Service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md). >
-> This connector is currently in Preview. For more information on the general Microsoft Azure terms of use for Preview features, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+> This connector is currently in Preview. For more information about previews, see [Universal License Terms For Online Services](https://www.microsoft.com/licensing/terms/product/ForOnlineServices/all).
+ ## Prerequisites The scenario outlined in this tutorial assumes that you already have the following prerequisites
active-directory Postman Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/postman-provisioning-tutorial.md
+
+ Title: 'Tutorial: Configure Postman for automatic user provisioning with Azure Active Directory'
+description: Learn how to automatically provision and deprovision user accounts from Azure AD to Postman.
++
+writer: twimmers
+
+ms.assetid: f3687101-9bec-4f18-9884-61833f4f58c3
++++ Last updated : 08/16/2023+++
+# Tutorial: Configure Postman for automatic user provisioning
+
+This tutorial describes the steps you need to perform in both Postman and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and deprovisions users and groups to [Postman](https://www.postman.com/) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
++
+## Supported capabilities
+> [!div class="checklist"]
+> * Create users in Postman.
+> * Remove users in Postman when they do not require access anymore.
+> * Keep user attributes synchronized between Azure AD and Postman.
+> * Provision groups and group memberships in Postman.
+> * [Single sign-on](postman-tutorial.md) to Postman (recommended).
+
+## Prerequisites
+
+The scenario outlined in this tutorial assumes that you already have the following prerequisites:
+
+* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md).
+* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator).
+* A Postman tenant on the [Enterprise plan](https://www.postman.com/pricing/).
+* A user account in Postman with Admin permissions.
+
+## Step 1. Plan your provisioning deployment
+1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md).
+1. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+1. Determine what data to [map between Azure AD and Postman](../app-provisioning/customize-application-attributes.md).
+
+## Step 2. Configure Postman to support provisioning with Azure AD
+
+Before you begin to configure Postman to support provisioning with Azure AD, you'll need to generate a SCIM API token within the Postman Admin Console.
+
+ > [!NOTE]
+ > For the **Enable SCIM provisioning in Postman** steps, see the [Postman SCIM provisioning overview](https://learning.postman.com/docs/administration/scim-provisioning/scim-provisioning-overview/#enabling-scim-in-postman).
+
+1. Navigate to [Postman Admin Console](https://go.postman.co/home) by logging in to your Postman account.
+1. Once you've logged in, click **Team** on the right side and click **Team Settings**.
+1. Select **Authentication** in the sidebar and then turn on the **SCIM provisioning** toggle.
+
+ ![Screenshot of Postman authentication settings page.](media/postman-provisioning-tutorial/postman-authentication-settings.png)
+
+1. You'll receive a pop-up message asking whether you want to **Turn on SCIM Provisioning**. Click **Turn On** to enable SCIM provisioning.
+
+ ![Screenshot of modal to enable SCIM provisioning.](media/postman-provisioning-tutorial/postman-enable-scim-provisioning.png)
+1. To **Generate SCIM API Key**, perform the following steps:
+
+ 1. Select **Generate SCIM API Key** in the **SCIM provisioning** section.
+
+ ![Screenshot to generate SCIM API key in Postman.](media/postman-provisioning-tutorial/postman-generate-scim-api-key.png)
+
+ 1. Enter a name for the key and click **Generate**.
+ 1. Copy your new API key for later use and click **Done**.
+
+ > [!NOTE]
+ > You can revisit this page to manage your SCIM API keys. If you regenerate an existing API key, you will have the option to keep the first key active while you switch over.
+
+ > [!NOTE]
+ > To continue enabling SCIM provisioning, see [Configuring SCIM with Azure AD](https://learning.postman.com/docs/administration/scim-provisioning/configuring-scim-with-azure-ad/). For further information or help configuring SCIM, [contact Postman support](https://www.postman.com/support/).
++
+## Step 3. Add Postman from the Azure AD application gallery
+
+Add Postman from the Azure AD application gallery to start managing provisioning to Postman. If you have previously set up Postman for SSO, you can use the same application. However, it's recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md).
+
+## Step 4. Define who will be in scope for provisioning
+
+The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and or based on attributes of the user / group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+* Start small. Test with a small set of users and groups before rolling out to everyone. When the scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When the scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+* If you need more roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
++
+## Step 5. Configure automatic user provisioning to Postman
+
+This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users and/or groups in Postman based on user and/or group assignments in Azure AD.
+
+### To configure automatic user provisioning for Postman in Azure AD:
+
+1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**.
+
+ ![Screenshot of Enterprise applications blade.](common/enterprise-applications.png)
+
+1. In the applications list, select **Postman**.
+
+ ![Screenshot of the Postman link in the Applications list.](common/all-applications.png)
+
+1. Select the **Provisioning** tab.
+
+ ![Screenshot of Provisioning tab.](common/provisioning.png)
+
+1. Set the **Provisioning Mode** to **Automatic**.
+
+ ![Screenshot of Provisioning tab automatic.](common/provisioning-automatic.png)
+
+1. Under the **Admin Credentials** section, input `https://api.getpostman.com/scim/v2/` as your Postman Tenant URL and your [SCIM API key](https://learning.postman.com/docs/administration/scim-provisioning/scim-provisioning-overview/#generating-scim-api-key) as the Secret Token. Click **Test Connection** to ensure Azure AD can connect to Postman. If the connection fails, ensure your Postman account has Admin permissions and try again.
+
+ ![Screenshot of Token.](common/provisioning-testconnection-tenanturltoken.png)
+
+1. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box.
+
+ ![Screenshot of Notification Email.](common/provisioning-notification-email.png)
+
+1. Select **Save**.
+
+1. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to Postman**.
+
+1. Review the user attributes that are synchronized from Azure AD to Postman in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Postman for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you'll need to ensure that the Postman API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
+
+ |Attribute|Type|Supported for filtering|Required by Postman|
+ |||||
+ |userName|String|&check;|&check;
+ |active|Boolean||&check;
+ |name.givenName|String||&check;
+ |name.familyName|String||&check;
+
+1. Under the **Mappings** section, select **Synchronize Azure Active Directory Groups to Postman**.
+
+1. Review the group attributes that are synchronized from Azure AD to Postman in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the groups in Postman for update operations. Select the **Save** button to commit any changes.
+
+ |Attribute|Type|Supported for filtering|Required by Postman|
+ |||||
+ |displayName|String|&check;|&check;
+ |members|Reference||
+
+1. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+1. To enable the Azure AD provisioning service for Postman, change the **Provisioning Status** to **On** in the **Settings** section.
+
+ ![Screenshot of Provisioning Status Toggled On.](common/provisioning-toggle-on.png)
+
+1. Define the users and/or groups that you would like to provision to Postman by choosing the desired values in **Scope** in the **Settings** section.
+
+ ![Screenshot of Provisioning Scope.](common/provisioning-scope.png)
+
+1. When you're ready to provision, click **Save**.
+
+ ![Screenshot of Saving Provisioning Configuration.](common/provisioning-configuration-save.png)
+
+This operation starts the initial synchronization cycle of all users and groups defined in **Scope** in the **Settings** section. The initial cycle takes longer to perform than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running.
+
+## Step 6. Monitor your deployment
+Once you've configured provisioning, use the following resources to monitor your deployment:
+
+* Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully.
+* Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion.
+* If the provisioning configuration seems to be in an unhealthy state, the application goes into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
+
+## More resources
+
+* [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md).
+* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md).
+
+## Next steps
+
+* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md).
active-directory Recnice Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/recnice-provisioning-tutorial.md
The scenario outlined in this tutorial assumes that you already have the followi
1. Determine what data to [map between Azure AD and Recnice](../app-provisioning/customize-application-attributes.md). ## Step 2. Configure Recnice to support provisioning with Azure AD
-Contact Recnice support to configure Recnice to support provisioning with Azure AD.
+Before configuring Recnice for automatic user provisioning with Azure AD, you will need to know the Secret Token and Tenant URL.
+
+1. Sign in to your Recnice Admin Console. Click on **Account**.
+
+ ![Screenshot of the Recnice Account Page.](media/recnice-provisioning-tutorial/recnice-account-settings.png)
+
+2. Copy the **SCIM Key** value. This value will be entered in the **Secret Token** field in the Provisioning tab of your Recnice application in the Azure portal.
+
+ ![Screenshot of the SCIM API key.](media/recnice-provisioning-tutorial/recnice-token.png)
+
+3. Use `https://scim.recnice.com/scim/` as the **Tenant URL** value.
## Step 3. Add Recnice from the Azure AD application gallery
The Azure AD provisioning service allows you to scope who will be provisioned ba
## Step 5. Configure automatic user provisioning to Recnice
-This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users and/or groups in TestApp based on user and/or group assignments in Azure AD.
+This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users and/or groups in Recnice based on user and/or group assignments in Azure AD.
### To configure automatic user provisioning for Recnice in Azure AD:
active-directory Reward Gateway Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/reward-gateway-provisioning-tutorial.md
The objective of this tutorial is to demonstrate the steps to be performed in Re
> [!NOTE] > This tutorial describes a connector built on top of the Azure AD User Provisioning Service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md). >
-> This connector is currently in public preview. For more information on the general Microsoft Azure terms of use for Preview features, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+> This connector is currently in public preview. For more information about previews, see [Universal License Terms For Online Services](https://www.microsoft.com/licensing/terms/product/ForOnlineServices/all).
## Prerequisites
active-directory Sap Cloud Platform Identity Authentication Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/sap-cloud-platform-identity-authentication-provisioning-tutorial.md
Title: 'Tutorial: Configure SAP Cloud Identity Services for automatic user provisioning with Microsoft Entra ID'
-description: Learn how to configure Microsoft Entra ID to automatically provision and de-provision user accounts to SAP Cloud Identity Services.
+description: Learn how to configure Microsoft Entra ID to automatically provision and deprovision user accounts to SAP Cloud Identity Services.
writer: twimmers
# Tutorial: Configure SAP Cloud Identity Services for automatic user provisioning
-The objective of this tutorial is to demonstrate the steps to be performed in SAP Cloud Identity Services and Microsoft Entra ID (Azure AD) to configure Microsoft Entra ID to automatically provision and de-provision users to SAP Cloud Identity Services.
+This tutorial demonstrates how to configure Microsoft Entra ID (Azure AD) to automatically provision and deprovision users to SAP Cloud Identity Services.
> [!NOTE] > This tutorial describes a connector built on top of the Microsoft Entra ID User Provisioning Service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Microsoft Entra ID](../app-provisioning/user-provisioning.md).
Before configuring and enabling automatic user provisioning, you should decide w
## Important tips for assigning users to SAP Cloud Identity Services
-* It is recommended that a single Microsoft Entra ID user is assigned to SAP Cloud Identity Services to test the automatic user provisioning configuration. Additional users may be assigned later.
+* It's recommended that a single Microsoft Entra ID user is assigned to SAP Cloud Identity Services to test the automatic user provisioning configuration. More users may be assigned later.
* When assigning a user to SAP Cloud Identity Services, you must select any valid application-specific role (if available) in the assignment dialog. Users with the **Default Access** role are excluded from provisioning.
Before configuring and enabling automatic user provisioning, you should decide w
![Screenshot of the SAP Cloud Identity Services Add SCIM.](media/sap-cloud-platform-identity-authentication-provisioning-tutorial/configurationauth.png)
-1. You will receive an email to activate your account and set a password for **SAP Cloud Identity Services Service**.
+1. You'll get an email to activate your account and set up a password for the **SAP Cloud Identity Services Service**.
-1. Copy the **User ID** and **Password**. These values will be entered in the Admin Username and Admin Password fields respectively in the Provisioning tab of your SAP Cloud Identity Services application in the Azure portal.
+1. Copy the **User ID** and **Password**. These values are entered in the Admin Username and Admin Password fields, respectively, in the Provisioning tab of your SAP Cloud Identity Services application in the Azure portal.
## Add SAP Cloud Identity Services from the gallery
This section guides you through the steps to configure the Microsoft Entra ID pr
1. Review the user attributes that are synchronized from Microsoft Entra ID to SAP Cloud Identity Services in the **Attribute Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in SAP Cloud Identity Services for update operations. Select the **Save** button to commit any changes. A sketch of how these attribute paths land in a SCIM user document appears after this procedure.
- ![Screenshot of the SAP Business Technology Platform Identity Authentication User Attributes.](media/sap-cloud-platform-identity-authentication-provisioning-tutorial/userattributes.png)
+ |Attribute|Type|Supported for filtering|Required by SAP Cloud Identity Services|
+ |||||
+ |userName|String|&check;|&check;
+ |emails[type eq "work"].value|String||&check;
+ |active|Boolean||
+ |displayName|String||
+ |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:manager|Reference||
+ |addresses[type eq "work"].country|String||
+ |addresses[type eq "work"].locality|String||
+ |addresses[type eq "work"].postalCode|String||
+ |addresses[type eq "work"].region|String||
+ |addresses[type eq "work"].streetAddress|String||
+ |name.givenName|String||
+ |name.familyName|String||
+ |name.honorificPrefix|String||
+ |phoneNumbers[type eq "fax"].value|String||
+ |phoneNumbers[type eq "mobile"].value|String||
+ |phoneNumbers[type eq "work"].value|String||
+ |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:costCenter|String||
+ |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:department|String||
+ |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:division|String||
+ |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:employeeNumber|String||
+ |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:organization|String||
+ |locale|String||
+ |timezone|String||
+ |userType|String||
+ |company|String||
+ |urn:sap:cloud:scim:schemas:extension:custom:2.0:User:attributes:customAttribute1|String||
+ |urn:sap:cloud:scim:schemas:extension:custom:2.0:User:attributes:customAttribute2|String||
+ |urn:sap:cloud:scim:schemas:extension:custom:2.0:User:attributes:customAttribute3|String||
+ |urn:sap:cloud:scim:schemas:extension:custom:2.0:User:attributes:customAttribute4|String||
+ |urn:sap:cloud:scim:schemas:extension:custom:2.0:User:attributes:customAttribute5|String||
+ |urn:sap:cloud:scim:schemas:extension:custom:2.0:User:attributes:customAttribute6|String||
+ |urn:sap:cloud:scim:schemas:extension:custom:2.0:User:attributes:customAttribute7|String||
+ |urn:sap:cloud:scim:schemas:extension:custom:2.0:User:attributes:customAttribute8|String||
+ |urn:sap:cloud:scim:schemas:extension:custom:2.0:User:attributes:customAttribute9|String||
+ |urn:sap:cloud:scim:schemas:extension:custom:2.0:User:attributes:customAttribute10|String||
+ |sendMail|String||
+ |mailVerified|String||
1. To configure scoping filters, refer to the instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
This section guides you through the steps to configure the Microsoft Entra ID pr
![Screenshot of Provisioning Scope.](common/provisioning-scope.png)
-1. When you are ready to provision, click **Save**.
+1. When you're ready to provision, click **Save**.
![Screenshot of Saving Provisioning Configuration.](common/provisioning-configuration-save.png)
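For readers unfamiliar with SCIM path syntax, the sketch below shows how a few of the attribute paths from the mapping table above land in the JSON document the provisioning service sends. The values are invented for illustration; the full set of supported paths is in the table.

```python
# Hypothetical SCIM user document corresponding to a few rows of the table.
# Paths such as emails[type eq "work"].value address elements inside typed
# multi-value arrays; URN-prefixed paths address extension schemas.
scim_user = {
    "schemas": [
        "urn:ietf:params:scim:schemas:core:2.0:User",
        "urn:ietf:params:scim:schemas:extension:enterprise:2.0:User",
    ],
    "userName": "alice@contoso.com",                        # userName
    "name": {"givenName": "Alice", "familyName": "Smith"},  # name.givenName / name.familyName
    "emails": [{"type": "work", "value": "alice@contoso.com"}],  # emails[type eq "work"].value
    "urn:ietf:params:scim:schemas:extension:enterprise:2.0:User": {
        "department": "Finance",  # ...enterprise:2.0:User:department
    },
}
```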
For more information on how to read the Microsoft Entra ID provisioning logs, se
* SAP Cloud Identity Services' SCIM endpoint requires certain attributes to be of a specific format. You can learn more about these attributes and their specific format [here](https://help.sap.com/viewer/6d6d63354d1242d185ab4830fc04feb1/Cloud/en-US/b10fc6a9a37c488a82ce7489b1fab64c.html#).
-## Additional resources
+## More resources
* [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md) * [What is application access and single sign-on with Microsoft Entra ID?](../manage-apps/what-is-single-sign-on.md)
active-directory Sap Fiori Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/sap-fiori-tutorial.md
+ Last updated 11/21/2022
active-directory Sap Netweaver Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/sap-netweaver-tutorial.md
+ Last updated 11/21/2022
active-directory Servicely Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/servicely-provisioning-tutorial.md
+
+ Title: 'Tutorial: Configure Servicely for automatic user provisioning with Azure Active Directory'
+description: Learn how to automatically provision and deprovision user accounts from Azure AD to Servicely.
++
+writer: twimmers
+
+ms.assetid: be3af02b-da77-4a88-bec3-e634e2af38b3
++++ Last updated : 08/16/2023+++
+# Tutorial: Configure Servicely for automatic user provisioning
+
+This tutorial describes the steps you need to perform in both Servicely and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and deprovisions users and groups to [Servicely](https://servicely.ai/) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
++
+## Supported capabilities
+> [!div class="checklist"]
+> * Create users in Servicely.
+> * Remove users in Servicely when they no longer require access.
+> * Keep user attributes synchronized between Azure AD and Servicely.
+> * Provision groups and group memberships in Servicely.
+
+## Prerequisites
+
+The scenario outlined in this tutorial assumes that you already have the following prerequisites:
+
+* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md).
+* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator).
+* A Servicely tenant.
+* A user account in Servicely with Admin permissions.
+
+## Step 1. Plan your provisioning deployment
+1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md).
+1. Determine who is in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+1. Determine what data to [map between Azure AD and Servicely](../app-provisioning/customize-application-attributes.md).
+
+## Step 2. Configure Servicely to support provisioning with Azure AD
+Contact Servicely support to configure Servicely to support provisioning with Azure AD.
+
+## Step 3. Add Servicely from the Azure AD application gallery
+
+Add Servicely from the Azure AD application gallery to start managing provisioning to Servicely. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md).
+
+## Step 4. Define who is in scope for provisioning
+
+The Azure AD provisioning service allows you to scope who is provisioned based on assignment to the application and/or based on attributes of the user. If you choose to scope who is provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who is provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+* If you need more roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
++
+## Step 5. Configure automatic user provisioning to Servicely
+
+This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users and/or groups in Servicely based on user and/or group assignments in Azure AD.
+
+### To configure automatic user provisioning for Servicely in Azure AD:
+
+1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**.
+
+ ![Screenshot of Enterprise applications blade.](common/enterprise-applications.png)
+
+1. In the applications list, select **Servicely**.
+
+ ![Screenshot of the Servicely link in the Applications list.](common/all-applications.png)
+
+1. Select the **Provisioning** tab.
+
+ ![Screenshot of Provisioning tab.](common/provisioning.png)
+
+1. Set the **Provisioning Mode** to **Automatic**.
+
+ ![Screenshot of Provisioning tab automatic.](common/provisioning-automatic.png)
+
+1. Under the **Admin Credentials** section, input your Servicely Tenant URL and Secret Token. Click **Test Connection** to ensure Azure AD can connect to Servicely. If the connection fails, ensure your Servicely account has Admin permissions and try again.
+
+ ![Screenshot of Token.](common/provisioning-testconnection-tenanturltoken.png)
+
+1. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box.
+
+ ![Screenshot of Notification Email.](common/provisioning-notification-email.png)
+
+1. Select **Save**.
+
+1. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to Servicely**.
+
+1. Review the user attributes that are synchronized from Azure AD to Servicely in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Servicely for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you need to ensure that the Servicely API supports filtering users based on that attribute. Select the **Save** button to commit any changes. A sketch of such a filter query appears after this procedure.
+
+ |Attribute|Type|Supported for filtering|Required by Servicely|
+ |||||
+ |userName|String|&check;|&check;
+ |active|Boolean||
+ |externalId|String||
+ |emails[type eq "work"].value|String||
+ |name.givenName|String||
+ |name.familyName|String||
+ |title|String||
+ |preferredLanguage|String||
+ |phoneNumbers[type eq "work"].value|String||
+ |phoneNumbers[type eq "mobile"].value|String||
+ |timezone|String||
+ |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:employeeNumber|String||
+ |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:manager|String||
+
+1. Under the **Mappings** section, select **Synchronize Azure Active Directory Groups to Servicely**.
+
+1. Review the group attributes that are synchronized from Azure AD to Servicely in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the groups in Servicely for update operations. Select the **Save** button to commit any changes.
+
+ |Attribute|Type|Supported for filtering|Required by Servicely|
+ |||||
+ |displayName|String|&check;|&check;
+ |externalId|String|&check;|&check;
+ |members|Reference||
+
+1. To configure scoping filters, refer to the instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+1. To enable the Azure AD provisioning service for Servicely, change the **Provisioning Status** to **On** in the **Settings** section.
+
+ ![Screenshot of Provisioning Status Toggled On.](common/provisioning-toggle-on.png)
+
+1. Define the users and/or groups that you would like to provision to Servicely by choosing the desired values in **Scope** in the **Settings** section.
+
+ ![Screenshot of Provisioning Scope.](common/provisioning-scope.png)
+
+1. When you're ready to provision, click **Save**.
+
+ ![Screenshot of Saving Provisioning Configuration.](common/provisioning-configuration-save.png)
+
+This operation starts the initial synchronization cycle of all users and groups defined in **Scope** in the **Settings** section. The initial cycle takes longer to perform than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running.
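The **Supported for filtering** column in the tables above matters because the provisioning service locates existing accounts with a SCIM filter query on the matching attribute before deciding whether to create or update. The sketch below shows the shape of that query; the endpoint and token are placeholders, not documented Servicely values.

```python
import requests

BASE_URL = "<servicely-tenant-url>"  # placeholder from the Admin Credentials step
TOKEN = "<secret-token>"             # placeholder

# Matching on userName works only because it's marked "Supported for filtering".
response = requests.get(
    BASE_URL.rstrip("/") + "/Users",
    params={"filter": 'userName eq "alice@contoso.com"'},
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=10,
)
print(response.json().get("totalResults"))  # 0 -> create, 1 -> update
```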
+
+## Step 6. Monitor your deployment
+Once you've configured provisioning, use the following resources to monitor your deployment:
+
+* Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully
+* Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion
+* If the provisioning configuration seems to be in an unhealthy state, the application goes into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
+
+## More resources
+
+* [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md)
+* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+
+## Next steps
+
+* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
active-directory Sharepoint On Premises Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/sharepoint-on-premises-tutorial.md
+ Last updated 11/21/2022
active-directory Starleaf Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/starleaf-provisioning-tutorial.md
The objective of this tutorial is to demonstrate the steps to be performed in St
> [!NOTE] > This tutorial describes a connector built on top of the Azure AD User Provisioning Service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md). >
-> This connector is currently in Preview. For more information on the general Microsoft Azure terms of use for Preview features, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+> This connector is currently in Preview. For more information about previews, see [Universal License Terms For Online Services](https://www.microsoft.com/licensing/terms/product/ForOnlineServices/all).
## Prerequisites
active-directory Symantec Web Security Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/symantec-web-security-service.md
The objective of this tutorial is to demonstrate the steps to be performed in Sy
> [!NOTE] > This tutorial describes a connector built on top of the Azure AD User Provisioning Service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md). >
-> This connector is currently in Public Preview. For more information on the general Microsoft Azure terms of use for Preview features, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+> This connector is currently in Public Preview. For more information about previews, see [Universal License Terms For Online Services](https://www.microsoft.com/licensing/terms/product/ForOnlineServices/all).
## Prerequisites
active-directory Tailscale Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/tailscale-provisioning-tutorial.md
The scenario outlined in this tutorial assumes that you already have the followi
1. Determine what data to [map between Azure AD and Tailscale](../app-provisioning/customize-application-attributes.md). ## Step 2. Configure Tailscale to support provisioning with Azure AD
-Contact Tailscale support to configure Tailscale to support provisioning with Azure AD.
+
+You need to be an [Owner, Admin, or IT admin](https://tailscale.com/kb/1138/user-roles/) in Tailscale to complete these steps. See [Tailscale plans](https://tailscale.com/pricing/)
+to find out which plans make user & group provisioning for Azure AD available.
+
+### Generate a SCIM API key in Tailscale
+
+On the **[User management](https://login.tailscale.com/admin/settings/user-management/)** page of the admin console:
+
+1. Click **Enable Provisioning**.
+1. Copy the generated key to the clipboard.
+
+Save the key information in a secure spot. This is the Secret Token; you'll need it when you configure provisioning in Azure AD.
## Step 3. Add Tailscale from the Azure AD application gallery
The Azure AD provisioning service allows you to scope who is provisioned based o
## Step 5. Configure automatic user provisioning to Tailscale
-This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users in TestApp based on user assignments in Azure AD.
+This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users in Tailscale based on user assignments in Azure AD.
### To configure automatic user provisioning for Tailscale in Azure AD:
active-directory Tanium Sso Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/tanium-sso-tutorial.md
Complete the following steps to enable Azure AD single sign-on in the Azure port
> [!NOTE] > These values are not real. Update these values with the actual Identifier, Reply URL and Sign on URL. Contact [Tanium SSO support team](mailto:integrations@tanium.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
-1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, click copy button to copy **App Federation Metadata Url** and save it on your computer.
+ > [!NOTE]
+ > If deploying Tanium in an on-premises configuration, your values may look different than those shown above. The values to use can be retrieved from the **Administration > SAML Configuration** menu in the Tanium console. Details can be found in the [Tanium Console User Guide: Integrating with a SAML IdP](https://docs.tanium.com/platform_user/platform_user/console_using_saml.html?cloud=false "Integrating with a SAML IdP Guide").
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, click the copy button to copy the **App Federation Metadata Url** and save it on your computer. If deploying Tanium in an on-premises configuration, click the edit button and set the **Response Signing Option** to "Sign response and assertion".
[ ![Screenshot shows the Certificate download link.](common/copy-metadataurl.png "Certificate") ](common/copy-metadataurl.png#lightbox)
active-directory Tutorial List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/tutorial-list.md
Title: SaaS App Integration Tutorials for use with Azure AD
+ Title: App Integration Tutorials for use with Azure AD
description: Configure Azure Active Directory single sign-on integration with a variety of third-party software as a service applications.
-# Tutorials for integrating SaaS applications with Azure Active Directory
+# Tutorials for integrating applications with Azure Active Directory
-To help integrate your cloud-enabled [software as a service (SaaS)](https://azure.microsoft.com/overview/what-is-saas/) applications with Azure Active Directory, we have developed a collection of tutorials that walk you through configuration.
+To help integrate your cloud-enabled [software as a service (SaaS)](https://azure.microsoft.com/overview/what-is-saas/) and on-premises applications with Azure Active Directory, we have developed a collection of tutorials that walk you through configuration.
For a list of all SaaS apps that have been pre-integrated into Azure AD, see the [Active Directory Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/category/azure-active-directory-apps).
active-directory Uber Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/uber-tutorial.md
Complete the following steps to enable Azure AD single sign-on in the Azure port
## Configure Uber SSO
-To configure single sign-on on **Uber** side, you need to send the downloaded **Certificate (PEM)** and appropriate copied URLs from Azure portal to [Uber support team](mailto:business-api-support@uber.com). They set this setting to have the SAML SSO connection set properly on both sides.
+To configure single sign-on on the **Uber** side, you need to send the downloaded **Certificate (PEM)** and the appropriate copied URLs from the Azure portal to the [Uber support team](mailto:business-support@uber.com). They configure this setting so that the SAML SSO connection is set up properly on both sides.
### Create Uber test user
-In this section, you create a user called Britta Simon in Uber. Work with [Uber support team](mailto:business-api-support@uber.com) to add the users in the Uber platform. Users must be created and activated before you use single sign-on. Uber also supports automatic user provisioning, you can find more details [here](uber-provisioning-tutorial.md) on how to configure automatic user provisioning.
+In this section, you create a user called Britta Simon in Uber. Work with [Uber support team or your Uber POC](mailto:business-support@uber.com) to add the users in the Uber platform. Users must be created and activated before you use single sign-on. Uber also supports automatic user provisioning, you can find more details [here](uber-provisioning-tutorial.md) on how to configure automatic user provisioning.
## Test SSO
active-directory Vbrick Rev Cloud Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/vbrick-rev-cloud-provisioning-tutorial.md
The scenario outlined in this tutorial assumes that you already have the followi
1. Determine what data to [map between Azure AD and Vbrick Rev Cloud](../app-provisioning/customize-application-attributes.md). ## Step 2. Configure Vbrick Rev Cloud to support provisioning with Azure AD
-Contact Vbrick Rev Cloud support to configure Vbrick Rev Cloud to support provisioning with Azure AD.
+
+1. Sign in to your **Rev Tenant**. Navigate to **Admin > Security Settings > User Security** in the navigation pane.
+
+ ![Screenshot of Vbrick Rev User Security Settings.](./media/vbrick-rev-cloud-provisioning-tutorial/app-navigations.png)
+
+1. Navigate to the **Microsoft Azure AD SCIM** section of the page.
+
+ ![Screenshot of the Vbrick Rev User Security Settings with the Microsoft AD SCIM section called out.](./media/vbrick-rev-cloud-provisioning-tutorial/enable-azure-ad-scim.png)
+
+1. Enable **Microsoft Azure AD SCIM** and click the **Generate Token** button.
+ ![Screenshot of the Vbrick Rev User Security Settings with Microsoft Azure AD SCIM enabled.](./media/vbrick-rev-cloud-provisioning-tutorial/rev-scim-manage.png)
+
+1. A popup opens with the **URL** and the **JWT token**. Copy and save both for the next steps; a sketch for inspecting the copied token appears after these steps.
+
+ ![Screenshot of the Vbrick Rev User Security Settings with the Scim Token section called out.](./media/vbrick-rev-cloud-provisioning-tutorial/copy-token.png)
+
+1. Once you have a copy of the **JWT token** and **URL**, click **OK** to close the popup and then click on the **Save** button at the bottom of the settings page to enable SCIM for your tenant.
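Before saving the copied JWT token into the Azure portal, you can decode its payload locally to confirm you copied the right value; a JWT's middle segment is base64url-encoded JSON. The following is a minimal sketch that only decodes the token; it doesn't verify the signature, and the token value is a placeholder.

```python
import base64
import json

def jwt_payload(token: str) -> dict:
    """Decode (without verifying) the payload segment of a JWT."""
    payload = token.split(".")[1]
    padded = payload + "=" * (-len(payload) % 4)  # restore base64 padding
    return json.loads(base64.urlsafe_b64decode(padded))

# Example usage with the token copied from the popup (placeholder value):
# print(jwt_payload("<jwt-from-the-popup>"))
```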
## Step 3. Add Vbrick Rev Cloud from the Azure AD application gallery
active-directory Configure Cmmc Level 2 Identification And Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/configure-cmmc-level-2-identification-and-authentication.md
Last updated 1/3/2023 -+
active-directory Admin Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/admin-api.md
If the `customStatusEndpoint` property isn't specified, then the `anony
| -- | -- | -- |
| `url` | string (url) | the URL of the custom status endpoint |
| `type` | string | the type of the endpoint |
+ example:
```
active-directory Decentralized Identifier Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/decentralized-identifier-overview.md
To deliver on these promises, we need a technical foundation made up of seven ke
IDs users create, own, and control independently of any organization or government. DIDs are globally unique identifiers linked to Decentralized Public Key Infrastructure (DPKI) metadata composed of JSON documents that contain public key material, authentication descriptors, and service endpoints. **2. Trust System**.
-In order to be able to resolve DID documents, DIDs are typically recorded on an underlying network of some kind that represents a trust system. Microsoft currently supports two trust systems, which are:
+In order to be able to resolve DID documents, DIDs are typically recorded on an underlying network of some kind that represents a trust system. Microsoft currently supports two trust systems, which are:
-- ION (Identity Overlay Network) ION is a Layer 2 open, permissionless network based on the purely deterministic Sidetree protocol, which requires no special tokens, trusted validators, or other consensus mechanisms; the linear progression of Bitcoin's time chain is all that's required for its operation. We have open sourced an [npm package](https://www.npmjs.com/package/@decentralized-identity/ion-tools) to make working with the ION network easy to integrate into your apps and services. Libraries include creating a new DID, generating keys and anchoring your DID on the Bitcoin blockchain.
+- DID:Web is a permission-based model that allows trust using a web domain's existing reputation. DID:Web is generally available.
-- DID:Web is a permission based model that allows trust using a web domain's existing reputation.
+- ION (Identity Overlay Network) is a Layer 2 open, permissionless network based on the purely deterministic Sidetree protocol, which requires no special tokens, trusted validators, or other consensus mechanisms; the linear progression of Bitcoin's time chain is all that's required for its operation. DID:ION is in preview.
**3. DID User Agent/Wallet: Microsoft Authenticator App**. Enables real people to use decentralized identities and Verifiable Credentials. Authenticator creates DIDs, facilitates issuance and presentation requests for verifiable credentials and manages the backup of your DID's seed through an encrypted wallet file.
active-directory How To Issuer Revoke https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/how-to-issuer-revoke.md
Verifiable credential data isn't stored by Microsoft. Therefore, the issuer need
## How does revocation work?
-Microsoft Entra Verified ID implements the [W3C StatusList2021](https://github.com/w3c-ccg/vc-status-list-2021/tree/343b8b59cddba4525e1ef355356ae760fc75904e). When presentation to the Request Service API happens, the API will do the revocation check for you. The revocation check happens over an anonymous API call to Identity Hub and does not contain any data who is checking if the verifiable credential is still valid or revoked. With the **statusList2021**, Microsoft Entra Verified ID just keeps a flag by the hashed value of the indexed claim to keep track of the revocation status.
+Microsoft Entra Verified ID implements the [W3C StatusList2021](https://github.com/w3c/vc-status-list-2021/tree/343b8b59cddba4525e1ef355356ae760fc75904e). When a presentation to the Request Service API happens, the API does the revocation check for you. The revocation check happens over an anonymous API call to Identity Hub and doesn't contain any data about who is checking whether the verifiable credential is still valid or revoked. With **statusList2021**, Microsoft Entra Verified ID keeps a flag, keyed by the hashed value of the indexed claim, to track the revocation status.
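As a rough sketch of what a StatusList2021 check involves under the hood: the status list credential carries a GZIP-compressed, base64url-encoded bitstring, and a credential counts as revoked when the bit at its `statusListIndex` is set. This illustrates the W3C data model only, not Microsoft's internal implementation, which the Request Service API performs for you.

```python
import base64
import gzip

def is_revoked(encoded_list: str, status_list_index: int) -> bool:
    """Check one bit of a StatusList2021 bitstring.

    encoded_list comes from the status list credential's
    credentialSubject.encodedList; status_list_index comes from the
    verifiable credential's credentialStatus entry.
    """
    padded = encoded_list + "=" * (-len(encoded_list) % 4)
    bits = gzip.decompress(base64.urlsafe_b64decode(padded))
    byte = bits[status_list_index // 8]
    # Bits are addressed left to right, most significant bit first.
    return bool((byte >> (7 - status_list_index % 8)) & 1)
```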
### Verifiable credential data
active-directory How Use Vcnetwork https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/how-use-vcnetwork.md
To use the Entra Verified ID Network, you need to have completed the following.
## What is the Entra Verified ID Network?
-In our scenario, Proseware is a verifier. Woodgrove is the issuer. The verifier needs to know Woodgrove's issuer DID and the verifiable credential (VC) type that represents Woodgrove employees before it can create a presentation request for a verified credential for Woodgrove employees. The necessary information may come from some kind of manual exchange between the companies, but this approach would be both a manual and a complex. The Entra Verified ID Network makes this process much easier. Woodgrove, as an issuer, can publish credential types to the Entra Verified ID Network and Proseware, as the verifier, can search for published credential types and schemas in the Entra Verified ID Network. Using this information, Woodgrove can create a [presentation request](presentation-request-api.md#presentation-request-payload) and easily invoke the Request Service API.
+In our scenario, Proseware is a verifier. Woodgrove is the issuer. The verifier needs to know Woodgrove's issuer DID and the verifiable credential (VC) type that represents Woodgrove employees before it can create a presentation request for a verified credential for Woodgrove employees. The necessary information may come from some kind of manual exchange between the companies, but this approach would be both manual and complex. The Entra Verified ID Network makes this process much easier. Woodgrove, as an issuer, can publish credential types to the Entra Verified ID Network and Proseware, as the verifier, can search for published credential types and schemas in the Entra Verified ID Network. Using this information, Woodgrove can create a [presentation request](presentation-request-api.md#presentation-request-payload) and easily invoke the Request Service API.
:::image type="content" source="media/decentralized-identifier-overview/did-overview.png" alt-text="Diagram of Microsoft DID implementation overview.":::
active-directory Howto Verifiable Credentials Partner Au10tix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/howto-verifiable-credentials-partner-au10tix.md
For incorporating identity verification into your Apps, using AU10TIX "Govern
As a developer you can share these steps with your tenant administrator to obtain the verification request URL, and body for your application or website to request Verified IDs from your users.
-1. Go to [Microsoft Entra portal -> Verified ID](https://entra.microsoft.com/#view/Microsoft_AAD_DecentralizedIdentity/ResourceOverviewBlade).
+1. Go to [Microsoft Entra admin center -> Verified ID](https://entra.microsoft.com/#view/Microsoft_AAD_DecentralizedIdentity/ResourceOverviewBlade).
>[!NOTE] > Make sure this is the tenant you set up for Verified ID per the pre-requisites.
active-directory Howto Verifiable Credentials Partner Lexisnexis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/howto-verifiable-credentials-partner-lexisnexis.md
To incorporate identity verification into your Apps using LexisNexis Verified ID
As a developer you'll provide the steps below to your tenant administrator. The instructions help them obtain the verification request URL, and application body or website to request verifiable credentials from your users.
-1. Go to [Microsoft Entra portal -> Verified ID](https://entra.microsoft.com/#view/Microsoft_AAD_DecentralizedIdentity/ResourceOverviewBlade).
+1. Go to [Microsoft Entra admin center -> Verified ID](https://entra.microsoft.com/#view/Microsoft_AAD_DecentralizedIdentity/ResourceOverviewBlade).
>[!Note] > Make sure this is the tenant you set up for Verified ID per the pre-requisites. 1. Go to [Quickstart-> Verification Request -> Start](https://entra.microsoft.com/#view/Microsoft_AAD_DecentralizedIdentity/QuickStartVerifierBlade).
active-directory Partner Vu https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/partner-vu.md
Follow these steps to incorporate VU Identity Card solution into your Apps.
As a developer you can share these steps with your tenant administrator to obtain the verification request URL, and body for your application or website to request Verified IDs from your users.
-1. Go to Microsoft Entra portal - [**Verified ID**](https://entra.microsoft.com/#view/Microsoft_AAD_DecentralizedIdentity/ResourceOverviewBlade)
+1. Go to Microsoft Entra admin center - [**Verified ID**](https://entra.microsoft.com/#view/Microsoft_AAD_DecentralizedIdentity/ResourceOverviewBlade)
>[!NOTE] >Verify that the tenant configured for Verified ID meets the prerequisites.
active-directory Services Partners https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/services-partners.md
Last updated 06/29/2023
-# Entra Verified ID Services partners
+# Entra Verified ID Services and solution partners
-Our Services partner network extends and accelerates Microsoft Entra Verified ID adoption. Service partners offer advisory, implementation, integration and managed service capabilities that can help you build seamless end-user experiences using Entra Verified ID.
+Our Services partner network extends and accelerates Microsoft Entra Verified ID adoption. Service partners offer advisory, implementation, integration and managed service capabilities that can help you build seamless end-user experiences using Verified ID.
-## Services partner list
+## Services and solution partner list
You can select a partner from the list and build seamless end-user experiences for onboarding, secure access to critical services, self-service, and custom business application scenarios.
-If you're a Services Partner and would like to be considered into Entra Verified ID partner documentation, submit your application [request](https://forms.microsoft.com/r/AGVsXmf4EZ)
-| Services partner | Website |
+| Services and solution partner | Website |
|:-|:--| | ![Screenshot of Affinitiquest logo.](media/services-partners/affinitiquest.png) | [Secure Personally Identifiable Information | AffinitiQuest](https://affinitiquest.io/) | | ![Screenshot of Avanade logo.](media/services-partners/avanade.png) | [Avanade Entra Verified ID Consulting Services](https://appsource.microsoft.com/marketplace/consulting-services/avanadeinc.ava_entra_verified_id_fy23?exp=ubp8) | | ![Screenshot of Credivera logo.](media/services-partners/credivera.png) | [Credivera: Digital Identity Solutions | Verifiable Credentials](https://www.credivera.com/) | | ![Screenshot of Condatis logo.](media/services-partners/condatis.png) | [Decentralized Identity | Condatis](https://condatis.com/technology/decentralized-identity/) | | ![Screenshot of DXC logo.](media/services-partners/dxc.png) | [Digital Identity - Connect with DXC](https://dxc.com/us/en/services/security/digital-identity) |
+| ![Screenshot of IdRamp logo.](media/services-partners/idramp.png) | [Virtual Onboarding with Verified ID](https://idramp.com/virtual-onboarding-with-ms-entra-verified-id/)<br/>[Eradicate passwords with Verified ID](https://idramp.com/eradicate-passwords-with-verified-id-orchestration/)<br/>[Integrated identity orchestration with Verified ID](https://idramp.com/entra-verified-id-integrated-identity-orchestration/)<br/>[Zero Trust collaboration with Verified ID](https://idramp.com/protected-virtual-meetings/)<br/>[Azure Marketplace Verified ID offering](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/idrampinc1682537450107.idramp-orchestration?tab=Overview) |
| ![Screenshot of CTC logo.](media/services-partners/ctc.png) | [CTC's SELMID offering](https://ctc-insight.com/selmid) | | ![Screenshot of Formula5 logo.](media/services-partners/formula5.png) | [Verified ID - Formula5](https://formula5.com/accelerator-for-microsoft-entra-verified-id/)<br/>[Azure Marketplace Verified ID offering](https://azuremarketplace.microsoft.com/marketplace/consulting-services/formulaconsultingllc1668008672143.verifiable_credentials_formula5-preview?tab=Overview&flightCodes=d12a14cf40204b39840e5c0f114c1366) | | ![Screenshot of Kocho logo.](media/services-partners/kocho.png) | [Connect with Kocho. See Verified Identity in Action](https://kocho.co.uk/contact-us/)<br/>[See Verified Identity in Action](https://kocho.co.uk/verified-id-in-action/) |
active-directory Using Wallet Library https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/using-wallet-library.md
Then, you have to handle the following major tasks in your app.
- User Interface. Any visual representation of stored credentials and the UI for driving the issuance and presentation process must be implemented by you. ## Wallet Library Demo app
-The Wallet Library comes with a demo app in the github repo that is ready to use without any modifications. You just have to build and deploy it. The demo app is a lightweight and simple implementation that illustrates issuance and presentation at its minimum. To quickly get going, you can use the QR Code Reader app to scan the QR code, and then copy and paste it into the demo app.
+The Wallet Library comes with a demo app in the GitHub repo that is ready to use without any modifications. You just have to build and deploy it. The demo app is a lightweight and simple implementation that illustrates issuance and presentation at its minimum. To quickly get going, you can use the QR Code Reader app to scan the QR code, and then copy and paste it into the demo app.
In order to test the demo app, you need a webapp that issues credentials and makes presentation requests for credentials. The [Woodgrove public demo webapp](https://aka.ms/vcdemo) is used for this purpose in this tutorial. ## Building the Android sample On your developer machine with Android Studio, do the following:
-1. Download or clone the Android Wallet Library [github repo](https://github.com/microsoft/entra-verifiedid-wallet-library-android/archive/refs/heads/dev.zip).
+1. Download or clone the Android Wallet Library [GitHub repo](https://github.com/microsoft/entra-verifiedid-wallet-library-android/archive/refs/heads/dev.zip).
You donΓÇÖt need the walletlibrary folder and you can delete it if you like. 1. Start Android Studio and open the parent folder of walletlibrarydemo
The sample app holds the issued credential in memory, so after issuance, you can
## Building the iOS sample On your Mac developer machine with Xcode, do the following:
-1. Download or clone the iOS Wallet Library [github repo](https://github.com/microsoft/entra-verifiedid-wallet-library-ios/archive/refs/heads/dev.zip).
+1. Download or clone the iOS Wallet Library [GitHub repo](https://github.com/microsoft/entra-verifiedid-wallet-library-ios/archive/refs/heads/dev.zip).
1. Start Xcode and open the top level folder for the WalletLibrary 1. Set focus on WalletLibraryDemo project
active-directory Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/whats-new.md
Microsoft Entra Verified ID is now generally available (GA) as the new member of
### Known issues -- Tenants that [opt-out](verifiable-credentials-faq.md?#how-do-i-reset-the-entra-verified-id-service) without issuing any Verifiable Credential gets a `Specified resource does not exist` error from the Admin API and/or the Entra portal. A fix for this issue should be available by August 20, 2022.
+- Tenants that [opt-out](verifiable-credentials-faq.md?#how-do-i-reset-the-entra-verified-id-service) without issuing any Verifiable Credential get a `Specified resource does not exist` error from the Admin API and/or the Microsoft Entra admin center. A fix for this issue should be available by August 20, 2022.
## July 2022
active-directory Workload Identities Faqs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/workload-identities/workload-identities-faqs.md
Previously updated : 2/21/2023 Last updated : 8/28/2023
pricing](https://www.microsoft.com/security/business/identity-access/microsoft-e
| Conditional Access policies for workload identities |Define the condition in which a workload can access a resource, such as an IP range | | Yes | |**Lifecycle Management**| | | | |Access reviews for service provider-assigned privileged roles | Closely monitor workload identities with impactful permissions | | Yes |
+| Application authentication methods API | Allows IT admins to enforce best practices for how apps in their organizations use application authentication methods. | | Yes |
|**Identity Protection** | | |
-|Identity Protection for workload identities | Detect and remediate compromised workload identities | | Yes |
+|Identity Protection for workload identities | Detect and remediate compromised workload identities | | Yes |
## What is the cost of Workload Identities Premium plan?
active-directory Workload Identity Federation Create Trust https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/workload-identities/workload-identity-federation-create-trust.md
Use the following values from your Azure AD application registration for your Gi
The following screenshot demonstrates how to copy the application ID and tenant ID.
- ![Screenshot that demonstrates how to copy the application ID and tenant ID from Microsoft Entra portal.](./media/workload-identity-federation-create-trust/copy-client-id.png)
+ ![Screenshot that demonstrates how to copy the application ID and tenant ID from Microsoft Entra admin center.](./media/workload-identity-federation-create-trust/copy-client-id.png)
- `AZURE_SUBSCRIPTION_ID` your subscription ID. To get the subscription ID, open **Subscriptions** in Azure portal and find your subscription. Then, copy the **Subscription ID**.
advisor Advisor Reference Reliability Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-reference-reliability-recommendations.md
We recommend configuring the Azure Front Door customer certificate secret to '
Learn more about [Front Door Profile - SwitchVersionBYOC (Switch Secret version to 'Latest' for the Azure Front Door customer certificate)](/azure/frontdoor/standard-premium/how-to-configure-https-custom-domain#certificate-renewal-and-changing-certificate-types). ## Compute
+### Migrate Virtual Machines to Availability Zones
+
+By migrating virtual machines to Availability Zones, you isolate your VMs from potential failures in other zones, enhancing the resiliency of your workload by avoiding downtime and business interruptions.
+
+Learn more about [Availability Zones](../reliability/availability-zones-overview.md).
+ ### Enable Backups on your Virtual Machines Enable backups for your virtual machines and secure your data
We identified the below thread resulted in an unhandled exception for your App a
Learn more about [App service - AppServiceProactiveCrashMonitoring (Application code should be fixed as worker process crashed due to Unhandled Exception)](https://azure.github.io/AppService/2020/08/11/Crash-Monitoring-Feature-in-Azure-App-Service.html).
+### Consider changing your App Service configuration to 64-bit
+
+We identified that your application is running as a 32-bit process and its memory is approaching the 2-GB limit.
+Consider switching to 64-bit processes so you can take advantage of the additional memory available in your Web Worker role. This action triggers a web app restart, so schedule accordingly.
+
+Learn more about [App service 32-bit limitations](/troubleshoot/azure/app-service/web-apps-performance-faqs#i-see-the-message-worker-process-requested-recycle-due-to-percent-memory-limit-how-do-i-address-this-issue).
+ ## Next steps Learn more about [Reliability - Microsoft Azure Well Architected Framework](/azure/architecture/framework/resiliency/overview)
ai-services Anomaly Detector Container Howto https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/Anomaly-Detector/anomaly-detector-container-howto.md
In this article, you learned concepts and workflow for downloading, installing,
* You must specify billing information when instantiating a container. > [!IMPORTANT]
-> Azure AI services containers are not licensed to run without being connected to Azure for metering. Customers need to enable the containers to communicate billing information with the metering service at all times. Azure AI services containers do not send customer data (e.g., the time series data that is being analyzed) to Microsoft.
+> Azure AI containers are not licensed to run without being connected to Azure for metering. Customers need to enable the containers to communicate billing information with the metering service at all times. Azure AI containers do not send customer data (e.g., the time series data that is being analyzed) to Microsoft.
## Next steps
ai-services Role Based Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/role-based-access-control.md
These custom roles only apply to authoring (Language Understanding Authoring) an
> * If you are assigned as a *Contributor* on Azure, your role will be shown as *Owner* in LUIS portal.
-### Cognitive Services LUIS reader
+### Cognitive Services LUIS Reader
A user that should only be validating and reviewing LUIS applications, typically a tester to ensure the application is performing well before deploying the project. They may want to review the application's assets (utterances, intents, entities) to notify the app developers of any changes that need to be made, but do not have direct access to make them.
A user that should only be validating and reviewing LUIS applications, typically
:::column-end::: :::row-end:::
-### Cognitive Services LUIS writer
+### Cognitive Services LUIS Writer
A user that is responsible for building and modifying a LUIS application, as a collaborator in a larger team. The collaborator can modify the LUIS application in any way, train those changes, and validate/test those changes in the portal. However, this user wouldn't have access to deploy this application to the runtime, as they may accidentally reflect their changes in a production environment. They also wouldn't be able to delete the application or alter its prediction resources and endpoint settings (assigning or unassigning prediction resources, making the endpoint public). This restricts this role from altering an application currently being used in a production environment. They may also create new applications under this resource, but with the restrictions mentioned.
A user that is responsible for building and modifying LUIS application, as a col
:::column-end::: :::row-end:::
-### Cognitive Services LUIS owner
+### Cognitive Services LUIS Owner
> [!NOTE] > * If you are assigned as an *Owner* and *LUIS Owner* you will be shown as *LUIS Owner* in LUIS portal.
ai-services Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/authentication.md
Title: Authentication in Azure AI services
description: "There are three ways to authenticate a request to an Azure AI services resource: a resource key, a bearer token, or a multi-service subscription. In this article, you'll learn about each method, and how to make a request." -+ Previously updated : 09/01/2022- Last updated : 08/30/2023+ # Authenticate requests to Azure AI services
ai-services Cognitive Services Container Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/cognitive-services-container-support.md
Title: Use Azure AI services containers on-premises
+ Title: Use Azure AI containers on-premises
description: Learn how to use Docker containers to use Azure AI services on-premises.
Previously updated : 07/21/2023 Last updated : 08/28/2023 keywords: on-premises, Docker, container, Kubernetes #Customer intent: As a potential customer, I want to know more about how Azure AI services provides and supports Docker containers for each service.
-# What are Azure AI services containers?
+# What are Azure AI containers?
Azure AI services provides several [Docker containers](https://www.docker.com/what-container) that let you use the same APIs that are available in Azure, on-premises. Using these containers gives you the flexibility to bring Azure AI services closer to your data for compliance, security or other operational reasons. Container support is currently available for a subset of Azure AI services.
Containerization is an approach to software distribution in which an application
## Containers in Azure AI services
-Azure AI services containers provide the following set of Docker containers, each of which contains a subset of functionality from services in Azure AI services. You can find instructions and image locations in the tables below.
+Azure AI containers provide the following set of Docker containers, each of which contains a subset of functionality from services in Azure AI services. You can find instructions and image locations in the tables below.
> [!NOTE] > See [Install and run Document Intelligence containers](document-intelligence/containers/install-run.md) for **Azure AI Document Intelligence** container instructions and image locations.
Azure AI services containers provide the following set of Docker containers, eac
### Speech containers
-> [!NOTE]
-> To use Speech containers, you will need to complete an [online request form](https://aka.ms/csgate).
- | Service | Container | Description | Availability | |--|--|--|--| | [Speech Service API][sp-containers-stt] | **Speech to text** ([image](https://mcr.microsoft.com/product/azure-cognitive-services/speechservices/speech-to-text/about)) | Transcribes continuous real-time speech into text. | Generally available. <br> This container can also [run in disconnected environments](containers/disconnected-containers.md). | | [Speech Service API][sp-containers-cstt] | **Custom Speech to text** ([image](https://mcr.microsoft.com/product/azure-cognitive-services/speechservices/custom-speech-to-text/about)) | Transcribes continuous real-time speech into text using a custom model. | Generally available <br> This container can also [run in disconnected environments](containers/disconnected-containers.md). | | [Speech Service API][sp-containers-ntts] | **Neural Text to speech** ([image](https://mcr.microsoft.com/product/azure-cognitive-services/speechservices/neural-text-to-speech/about)) | Converts text to natural-sounding speech using deep neural network technology, allowing for more natural synthesized speech. | Generally available. <br> This container can also [run in disconnected environments](containers/disconnected-containers.md). |
-| [Speech Service API][sp-containers-lid] | **Speech language detection** ([image](https://mcr.microsoft.com/product/azure-cognitive-services/speechservices/language-detection/about)) | Determines the language of spoken audio. | Gated preview |
+| [Speech Service API][sp-containers-lid] | **Speech language detection** ([image](https://mcr.microsoft.com/product/azure-cognitive-services/speechservices/language-detection/about)) | Determines the language of spoken audio. | Preview |
### Vision containers | Service | Container | Description | Availability | |--|--|--|--|
-| [Azure AI Vision][cv-containers] | **Read OCR** ([image](https://mcr.microsoft.com/product/azure-cognitive-services/vision/read/about)) | The Read OCR container allows you to extract printed and handwritten text from images and documents with support for JPEG, PNG, BMP, PDF, and TIFF file formats. For more information, see the [Read API documentation](./computer-vision/overview-ocr.md). | Generally Available. Gated - [request access](https://aka.ms/csgate). <br>This container can also [run in disconnected environments](containers/disconnected-containers.md). |
+| [Azure AI Vision][cv-containers] | **Read OCR** ([image](https://mcr.microsoft.com/product/azure-cognitive-services/vision/read/about)) | The Read OCR container allows you to extract printed and handwritten text from images and documents with support for JPEG, PNG, BMP, PDF, and TIFF file formats. For more information, see the [Read API documentation](./computer-vision/overview-ocr.md). | Generally Available. <br>This container can also [run in disconnected environments](containers/disconnected-containers.md). |
| [Spatial Analysis][spa-containers] | **Spatial analysis** ([image](https://mcr.microsoft.com/product/azure-cognitive-services/vision/spatial-analysis/about)) | Analyzes real-time streaming video to understand spatial relationships between people, their movement, and interactions with objects in physical environments. | Preview | <!--
Additionally, some containers are supported in the Azure AI services [multi-serv
## Prerequisites
-You must satisfy the following prerequisites before using Azure AI services containers:
+You must satisfy the following prerequisites before using Azure AI containers:
**Docker Engine**: You must have Docker Engine installed locally. Docker provides packages that configure the Docker environment on [macOS](https://docs.docker.com/docker-for-mac/), [Linux](https://docs.docker.com/engine/installation/#supported-platforms), and [Windows](https://docs.docker.com/docker-for-windows/). On Windows, Docker must be configured to support Linux containers. Docker containers can also be deployed directly to [Azure Kubernetes Service](../aks/index.yml) or [Azure Container Instances](../container-instances/index.yml).
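Once Docker Engine is configured, starting a container follows the same `docker run` pattern across services. The following sketch uses the speech-to-text image as an example; `{ENDPOINT_URI}` and `{API_KEY}` are placeholders that come from your own Azure AI services resource, and the memory and CPU sizing shown is illustrative rather than a requirement.

```bash
# Sketch: run the speech-to-text container locally and expose it on port 5000.
# Eula, Billing, and ApiKey are the required billing arguments for AI containers.
docker run --rm -it -p 5000:5000 --memory 8g --cpus 4 \
  mcr.microsoft.com/azure-cognitive-services/speechservices/speech-to-text \
  Eula=accept \
  Billing={ENDPOINT_URI} \
  ApiKey={API_KEY}
```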
ai-services Cognitive Services Virtual Networks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/cognitive-services-virtual-networks.md
Previously updated : 07/04/2023 Last updated : 08/10/2023 # Configure Azure AI services virtual networks
-Azure AI services provides a layered security model. This model enables you to secure your Azure AI services accounts to a specific subset of networks. When network rules are configured, only applications requesting data over the specified set of networks can access the account. You can limit access to your resources with request filtering. Allowing only requests originating from specified IP addresses, IP ranges or from a list of subnets in [Azure Virtual Networks](../virtual-network/virtual-networks-overview.md).
+Azure AI services provide a layered security model. This model enables you to secure your Azure AI services accounts to a specific subset of networks. When network rules are configured, only applications that request data over the specified set of networks can access the account. You can limit access to your resources with *request filtering*, which allows requests that originate only from specified IP addresses, IP ranges, or from a list of subnets in [Azure Virtual Networks](../virtual-network/virtual-networks-overview.md).
An application that accesses an Azure AI services resource when network rules are in effect requires authorization. Authorization is supported with [Azure Active Directory](../active-directory/fundamentals/active-directory-whatis.md) (Azure AD) credentials or with a valid API key. > [!IMPORTANT]
-> Turning on firewall rules for your Azure AI services account blocks incoming requests for data by default. In order to allow requests through, one of the following conditions needs to be met:
+> Turning on firewall rules for your Azure AI services account blocks incoming requests for data by default. To allow requests through, one of the following conditions needs to be met:
>
-> * The request should originate from a service operating within an Azure Virtual Network (VNet) on the allowed subnet list of the target Azure AI services account. The endpoint in requests originated from VNet needs to be set as the [custom subdomain](cognitive-services-custom-subdomains.md) of your Azure AI services account.
-> * Or the request should originate from an allowed list of IP addresses.
+> - The request originates from a service that operates within an Azure Virtual Network on the allowed subnet list of the target Azure AI services account. The endpoint request that originated from the virtual network needs to be set as the [custom subdomain](cognitive-services-custom-subdomains.md) of your Azure AI services account.
+> - The request originates from an allowed list of IP addresses.
>
-> Requests that are blocked include those from other Azure services, from the Azure portal, from logging and metrics services, and so on.
+> Requests that are blocked include those from other Azure services, from the Azure portal, and from logging and metrics services.
[!INCLUDE [updated-for-az](../../includes/updated-for-az.md)] ## Scenarios
-To secure your Azure AI services resource, you should first configure a rule to deny access to traffic from all networks (including internet traffic) by default. Then, you should configure rules that grant access to traffic from specific VNets. This configuration enables you to build a secure network boundary for your applications. You can also configure rules to grant access to traffic from select public internet IP address ranges, enabling connections from specific internet or on-premises clients.
+To secure your Azure AI services resource, you should first configure a rule to deny access to traffic from all networks, including internet traffic, by default. Then, configure rules that grant access to traffic from specific virtual networks. This configuration enables you to build a secure network boundary for your applications. You can also configure rules to grant access to traffic from select public internet IP address ranges and enable connections from specific internet or on-premises clients.
-Network rules are enforced on all network protocols to Azure AI services, including REST and WebSocket. To access data using tools such as the Azure test consoles, explicit network rules must be configured. You can apply network rules to existing Azure AI services resources, or when you create new Azure AI services resources. Once network rules are applied, they're enforced for all requests.
+Network rules are enforced on all network protocols to Azure AI services, including REST and WebSocket. To access data by using tools such as the Azure test consoles, explicit network rules must be configured. You can apply network rules to existing Azure AI services resources, or when you create new Azure AI services resources. After network rules are applied, they're enforced for all requests.
## Supported regions and service offerings
-Virtual networks (VNETs) are supported in [regions where Azure AI services are available](https://azure.microsoft.com/global-infrastructure/services/). Azure AI services supports service tags for network rules configuration. The services listed below are included in the **CognitiveServicesManagement** service tag.
+Virtual networks are supported in [regions where Azure AI services are available](https://azure.microsoft.com/global-infrastructure/services/). Azure AI services support service tags for network rules configuration. The services listed here are included in the `CognitiveServicesManagement` service tag.
> [!div class="checklist"]
-> * Anomaly Detector
-> * Azure OpenAI
-> * Azure AI Vision
-> * Content Moderator
-> * Custom Vision
-> * Face
-> * Language Understanding (LUIS)
-> * Personalizer
-> * Speech service
-> * Language service
-> * QnA Maker
-> * Translator Text
-
+> - Anomaly Detector
+> - Azure OpenAI
+> - Content Moderator
+> - Custom Vision
+> - Face
+> - Language Understanding (LUIS)
+> - Personalizer
+> - Speech service
+> - Language
+> - QnA Maker
+> - Translator
> [!NOTE]
-> If you're using, Azure OpenAI, LUIS, Speech Services, or Language services, the **CognitiveServicesManagement** tag only enables you use the service using the SDK or REST API. To access and use Azure OpenAI Studio, LUIS portal , Speech Studio or Language Studio from a virtual network, you will need to use the following tags:
+> If you use Azure OpenAI, LUIS, Speech Services, or Language services, the `CognitiveServicesManagement` tag only enables you to use the service by using the SDK or REST API. To access and use Azure OpenAI Studio, LUIS portal, Speech Studio, or Language Studio from a virtual network, you need to use the following tags:
>
-> * **AzureActiveDirectory**
-> * **AzureFrontDoor.Frontend**
-> * **AzureResourceManager**
-> * **CognitiveServicesManagement**
-> * **CognitiveServicesFrontEnd**
-
+> - `AzureActiveDirectory`
+> - `AzureFrontDoor.Frontend`
+> - `AzureResourceManager`
+> - `CognitiveServicesManagement`
+> - `CognitiveServicesFrontEnd`
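Service tags such as `CognitiveServicesManagement` can be used directly as address prefixes in network security group rules. The following Azure CLI sketch allows outbound HTTPS traffic from a subnet to the services covered by the tag; the resource group and NSG names are placeholders.

```azurecli-interactive
# Sketch: allow outbound traffic to the CognitiveServicesManagement service tag.
az network nsg rule create \
    --resource-group "myresourcegroup" \
    --nsg-name "mynsg" \
    --name "AllowCognitiveServicesManagement" \
    --priority 100 \
    --direction Outbound \
    --access Allow \
    --protocol Tcp \
    --source-address-prefixes VirtualNetwork \
    --source-port-ranges "*" \
    --destination-address-prefixes CognitiveServicesManagement \
    --destination-port-ranges 443
```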
## Change the default network access rule By default, Azure AI services resources accept connections from clients on any network. To limit access to selected networks, you must first change the default action. > [!WARNING]
-> Making changes to network rules can impact your applications' ability to connect to Azure AI services. Setting the default network rule to **deny** blocks all access to the data unless specific network rules that **grant** access are also applied. Be sure to grant access to any allowed networks using network rules before you change the default rule to deny access. If you are allow listing IP addresses for your on-premises network, be sure to add all possible outgoing public IP addresses from your on-premises network.
+> Making changes to network rules can impact your applications' ability to connect to Azure AI services. Setting the default network rule to *deny* blocks all access to the data unless specific network rules that *grant* access are also applied.
+>
+> Before you change the default rule to deny access, be sure to grant access to any allowed networks by using network rules. If you allow-list IP addresses for your on-premises network, be sure to add all possible outgoing public IP addresses from your on-premises network.
-### Managing default network access rules
+### Manage default network access rules
You can manage default network access rules for Azure AI services resources through the Azure portal, PowerShell, or the Azure CLI.
You can manage default network access rules for Azure AI services resources thro
1. Go to the Azure AI services resource you want to secure.
-1. Select the **RESOURCE MANAGEMENT** menu called **Virtual network**.
+1. Select **Resource Management** to expand it, then select **Networking**.
- ![Virtual network option](media/vnet/virtual-network-blade.png)
+ :::image type="content" source="media/vnet/virtual-network-blade.png" alt-text="Screenshot shows the Networking page with Selected Networks and Private Endpoints selected." lightbox="media/vnet/virtual-network-blade.png":::
-1. To deny access by default, choose to allow access from **Selected networks**. With the **Selected networks** setting alone, unaccompanied by configured **Virtual networks** or **Address ranges** - all access is effectively denied. When all access is denied, requests attempting to consume the Azure AI services resource aren't permitted. The Azure portal, Azure PowerShell or, Azure CLI can still be used to configure the Azure AI services resource.
-1. To allow traffic from all networks, choose to allow access from **All networks**.
+1. To deny access by default, under **Firewalls and virtual networks**, select **Selected Networks and Private Endpoints**.
- ![Virtual networks deny](media/vnet/virtual-network-deny.png)
+ With this setting alone, unaccompanied by configured virtual networks or address ranges, all access is effectively denied. When all access is denied, requests that attempt to consume the Azure AI services resource aren't permitted. The Azure portal, Azure PowerShell, or the Azure CLI can still be used to configure the Azure AI services resource.
+
+1. To allow traffic from all networks, select **All networks**.
+
+ :::image type="content" source="media/vnet/virtual-network-deny.png" alt-text="Screenshot shows the Networking page with All networks selected." lightbox="media/vnet/virtual-network-deny.png":::
1. Select **Save** to apply your changes. # [PowerShell](#tab/powershell)
-1. Install the [Azure PowerShell](/powershell/azure/install-azure-powershell) and [sign in](/powershell/azure/authenticate-azureps), or select **Try it**.
+1. Install the [Azure PowerShell](/powershell/azure/install-azure-powershell) and [sign in](/powershell/azure/authenticate-azureps), or select **Open Cloudshell**.
1. Display the status of the default rule for the Azure AI services resource.
- ```azurepowershell-interactive
- $parameters = @{
- "ResourceGroupName"= "myresourcegroup"
- "Name"= "myaccount"
-}
- (Get-AzCognitiveServicesAccountNetworkRuleSet @parameters).DefaultAction
- ```
+ ```azurepowershell-interactive
+ $parameters = @{
+ "ResourceGroupName" = "myresourcegroup"
+ "Name" = "myaccount"
+ }
+ (Get-AzCognitiveServicesAccountNetworkRuleSet @parameters).DefaultAction
+ ```
-1. Set the default rule to deny network access by default.
+ You can get values for your resource group `myresourcegroup` and the name of your Azure AI services resource `myaccount` from the Azure portal.
+
+1. Set the default rule to deny network access.
```azurepowershell-interactive $parameters = @{
- -ResourceGroupName "myresourcegroup"
- -Name "myaccount"
- -DefaultAction Deny
+ "ResourceGroupName" = "myresourcegroup"
+ "Name" = "myaccount"
+ "DefaultAction" = "Deny"
} Update-AzCognitiveServicesAccountNetworkRuleSet @parameters ```
-1. Set the default rule to allow network access by default.
+1. Set the default rule to allow network access.
```azurepowershell-interactive $parameters = @{
- -ResourceGroupName "myresourcegroup"
- -Name "myaccount"
- -DefaultAction Allow
+ "ResourceGroupName" = "myresourcegroup"
+ "Name" = "myaccount"
+ "DefaultAction" = "Allow"
} Update-AzCognitiveServicesAccountNetworkRuleSet @parameters ``` # [Azure CLI](#tab/azure-cli)
-1. Install the [Azure CLI](/cli/azure/install-azure-cli) and [sign in](/cli/azure/authenticate-azure-cli), or select **Try it**.
+1. Install the [Azure CLI](/cli/azure/install-azure-cli) and [sign in](/cli/azure/authenticate-azure-cli), or select **Open Cloudshell**.
1. Display the status of the default rule for the Azure AI services resource. ```azurecli-interactive az cognitiveservices account show \
- -g "myresourcegroup" -n "myaccount" \
- --query networkRuleSet.defaultAction
+ --resource-group "myresourcegroup" --name "myaccount" \
+ --query properties.networkAcls.defaultAction
```
+1. Get the resource ID for use in the later steps.
+
+ ```azurecli-interactive
+ resourceId=$(az cognitiveservices account show \
+ --resource-group "myresourcegroup" \
+ --name "myaccount" --query id --output tsv)
+ ```
+ 1. Set the default rule to deny network access by default. ```azurecli-interactive az resource update \
- --ids {resourceId} \
+ --ids $resourceId \
--set properties.networkAcls="{'defaultAction':'Deny'}" ```
You can manage default network access rules for Azure AI services resources thro
```azurecli-interactive az resource update \
- --ids {resourceId} \
+ --ids $resourceId \
--set properties.networkAcls="{'defaultAction':'Allow'}" ```
You can manage default network access rules for Azure AI services resources thro
## Grant access from a virtual network
-You can configure Azure AI services resources to allow access only from specific subnets. The allowed subnets may belong to a VNet in the same subscription, or in a different subscription, including subscriptions belonging to a different Azure Active Directory tenant.
+You can configure Azure AI services resources to allow access from specific subnets only. The allowed subnets might belong to a virtual network in the same subscription or in a different subscription. The other subscription can belong to a different Azure AD tenant.
+
+Enable a *service endpoint* for Azure AI services within the virtual network. The service endpoint routes traffic from the virtual network through an optimal path to the Azure AI services service. For more information, see [Virtual Network service endpoints](../virtual-network/virtual-network-service-endpoints-overview.md).
-Enable a [service endpoint](../virtual-network/virtual-network-service-endpoints-overview.md) for Azure AI services within the VNet. The service endpoint routes traffic from the VNet through an optimal path to the Azure AI services service. The identities of the subnet and the virtual network are also transmitted with each request. Administrators can then configure network rules for the Azure AI services resource that allow requests to be received from specific subnets in a VNet. Clients granted access via these network rules must continue to meet the authorization requirements of the Azure AI services resource to access the data.
+The identities of the subnet and the virtual network are also transmitted with each request. Administrators can then configure network rules for the Azure AI services resource to allow requests from specific subnets in a virtual network. Clients granted access by these network rules must continue to meet the authorization requirements of the Azure AI services resource to access the data.
-Each Azure AI services resource supports up to 100 virtual network rules, which may be combined with [IP network rules](#grant-access-from-an-internet-ip-range).
+Each Azure AI services resource supports up to 100 virtual network rules, which can be combined with IP network rules. For more information, see [Grant access from an internet IP range](#grant-access-from-an-internet-ip-range) later in this article.
-### Required permissions
+### Set required permissions
-To apply a virtual network rule to an Azure AI services resource, the user must have the appropriate permissions for the subnets being added. The required permission is the default *Contributor* role, or the *Cognitive Services Contributor* role. Required permissions can also be added to custom role definitions.
+To apply a virtual network rule to an Azure AI services resource, you need the appropriate permissions for the subnets to add. The required permission is the default *Contributor* role or the *Cognitive Services Contributor* role. Required permissions can also be added to custom role definitions.
-Azure AI services resource and the virtual networks granted access may be in different subscriptions, including subscriptions that are a part of a different Azure AD tenant.
+The Azure AI services resource and the virtual networks that are granted access might be in different subscriptions, including subscriptions that are part of a different Azure AD tenant.
> [!NOTE]
-> Configuration of rules that grant access to subnets in virtual networks that are a part of a different Azure Active Directory tenant are currently only supported through PowerShell, CLI and REST APIs. Such rules cannot be configured through the Azure portal, though they may be viewed in the portal.
+> Configuration of rules that grant access to subnets in virtual networks that are a part of a different Azure AD tenant are currently supported only through PowerShell, the Azure CLI, and the REST APIs. You can view these rules in the Azure portal, but you can't configure them.
-### Managing virtual network rules
+### Configure virtual network rules
You can manage virtual network rules for Azure AI services resources through the Azure portal, PowerShell, or the Azure CLI. # [Azure portal](#tab/portal)
+To grant access to a virtual network with an existing network rule:
+ 1. Go to the Azure AI services resource you want to secure.
-1. Select the **RESOURCE MANAGEMENT** menu called **Virtual network**.
+1. Select **Resource Management** to expand it, then select **Networking**.
-1. Check that you've selected to allow access from **Selected networks**.
+1. Confirm that you selected **Selected Networks and Private Endpoints**.
-1. To grant access to a virtual network with an existing network rule, under **Virtual networks**, select **Add existing virtual network**.
+1. Under **Allow access from**, select **Add existing virtual network**.
- ![Add existing vNet](media/vnet/virtual-network-add-existing.png)
+ :::image type="content" source="media/vnet/virtual-network-add-existing.png" alt-text="Screenshot shows the Networking page with Selected Networks and Private Endpoints selected and Add existing virtual network highlighted." lightbox="media/vnet/virtual-network-add-existing.png":::
1. Select the **Virtual networks** and **Subnets** options, and then select **Enable**.
- ![Add existing vNet details](media/vnet/virtual-network-add-existing-details.png)
+ :::image type="content" source="media/vnet/virtual-network-add-existing-details.png" alt-text="Screenshot shows the Add networks dialog box where you can enter a virtual network and subnet.":::
-1. To create a new virtual network and grant it access, select **Add new virtual network**.
+ > [!NOTE]
+ > If a service endpoint for Azure AI services wasn't previously configured for the selected virtual network and subnets, you can configure it as part of this operation.
+ >
+ > Currently, only virtual networks that belong to the same Azure AD tenant are available for selection during rule creation. To grant access to a subnet in a virtual network that belongs to another tenant, use PowerShell, the Azure CLI, or the REST APIs.
- ![Add new vNet](media/vnet/virtual-network-add-new.png)
+1. Select **Save** to apply your changes.
+
+To create a new virtual network and grant it access:
+
+1. On the same page as the previous procedure, select **Add new virtual network**.
+
+ :::image type="content" source="media/vnet/virtual-network-add-new.png" alt-text="Screenshot shows the Networking page with Selected Networks and Private Endpoints selected and Add new virtual network highlighted." lightbox="media/vnet/virtual-network-add-new.png":::
1. Provide the information necessary to create the new virtual network, and then select **Create**.
- ![Create vNet](media/vnet/virtual-network-create.png)
+ :::image type="content" source="media/vnet/virtual-network-create.png" alt-text="Screenshot shows the Create virtual network dialog box.":::
- > [!NOTE]
- > If a service endpoint for Azure AI services wasn't previously configured for the selected virtual network and subnets, you can configure it as part of this operation.
- >
- > Presently, only virtual networks belonging to the same Azure Active Directory tenant are shown for selection during rule creation. To grant access to a subnet in a virtual network belonging to another tenant, please use PowerShell, CLI or REST APIs.
+1. Select **Save** to apply your changes.
-1. To remove a virtual network or subnet rule, select **...** to open the context menu for the virtual network or subnet, and select **Remove**.
+To remove a virtual network or subnet rule:
- ![Remove vNet](media/vnet/virtual-network-remove.png)
+1. On the same page as the previous procedures, select **...** (**More options**) to open the context menu for the virtual network or subnet, and then select **Remove**.
+
+ :::image type="content" source="media/vnet/virtual-network-remove.png" alt-text="Screenshot shows the option to remove a virtual network." lightbox="media/vnet/virtual-network-remove.png":::
1. Select **Save** to apply your changes. # [PowerShell](#tab/powershell)
-1. Install the [Azure PowerShell](/powershell/azure/install-azure-powershell) and [sign in](/powershell/azure/authenticate-azureps), or select **Try it**.
+1. Install the [Azure PowerShell](/powershell/azure/install-azure-powershell) and [sign in](/powershell/azure/authenticate-azureps), or select **Open Cloudshell**.
-1. List virtual network rules.
+1. List the configured virtual network rules.
```azurepowershell-interactive
- $parameters = @{
- "ResourceGroupName"= "myresourcegroup"
- "Name"= "myaccount"
-}
+ $parameters = @{
+ "ResourceGroupName" = "myresourcegroup"
+ "Name" = "myaccount"
+ }
(Get-AzCognitiveServicesAccountNetworkRuleSet @parameters).VirtualNetworkRules ```
-1. Enable service endpoint for Azure AI services on an existing virtual network and subnet.
+1. Enable a service endpoint for Azure AI services on an existing virtual network and subnet.
```azurepowershell-interactive Get-AzVirtualNetwork -ResourceGroupName "myresourcegroup" ` -Name "myvnet" | Set-AzVirtualNetworkSubnetConfig -Name "mysubnet" `
- -AddressPrefix "10.0.0.0/24" `
+ -AddressPrefix "CIDR" `
-ServiceEndpoint "Microsoft.CognitiveServices" | Set-AzVirtualNetwork ```
You can manage virtual network rules for Azure AI services resources through the
```azurepowershell-interactive $subParameters = @{
- -ResourceGroupName "myresourcegroup"
- -Name "myvnet"
+ "ResourceGroupName" = "myresourcegroup"
+ "Name" = "myvnet"
} $subnet = Get-AzVirtualNetwork @subParameters | Get-AzVirtualNetworkSubnetConfig -Name "mysubnet"
You can manage virtual network rules for Azure AI services resources through the
``` > [!TIP]
- > To add a network rule for a subnet in a VNet belonging to another Azure AD tenant, use a fully-qualified **VirtualNetworkResourceId** parameter in the form "/subscriptions/subscription-ID/resourceGroups/resourceGroup-Name/providers/Microsoft.Network/virtualNetworks/vNet-name/subnets/subnet-name".
+ > To add a network rule for a subnet in a virtual network that belongs to another Azure AD tenant, use a fully-qualified `VirtualNetworkResourceId` parameter in the form `/subscriptions/subscription-ID/resourceGroups/resourceGroup-Name/providers/Microsoft.Network/virtualNetworks/vNet-name/subnets/subnet-name`.
1. Remove a network rule for a virtual network and subnet. ```azurepowershell-interactive $subParameters = @{
- -ResourceGroupName "myresourcegroup"
- -Name "myvnet"
+ "ResourceGroupName" = "myresourcegroup"
+ "Name" = "myvnet"
} $subnet = Get-AzVirtualNetwork @subParameters | Get-AzVirtualNetworkSubnetConfig -Name "mysubnet" $parameters = @{
- -ResourceGroupName "myresourcegroup"
- -Name "myaccount"
- -VirtualNetworkResourceId $subnet.Id
+ "ResourceGroupName" = "myresourcegroup"
+ "Name" = "myaccount"
+ "VirtualNetworkResourceId" = $subnet.Id
} Remove-AzCognitiveServicesAccountNetworkRule @parameters ``` # [Azure CLI](#tab/azure-cli)
-1. Install the [Azure CLI](/cli/azure/install-azure-cli) and [sign in](/cli/azure/authenticate-azure-cli), or select **Try it**.
+1. Install the [Azure CLI](/cli/azure/install-azure-cli) and [sign in](/cli/azure/authenticate-azure-cli), or select **Open Cloudshell**.
-1. List virtual network rules.
+1. List the configured virtual network rules.
```azurecli-interactive az cognitiveservices account network-rule list \
- -g "myresourcegroup" -n "myaccount" \
+ --resource-group "myresourcegroup" --name "myaccount" \
--query virtualNetworkRules ```
-1. Enable service endpoint for Azure AI services on an existing virtual network and subnet.
+1. Enable a service endpoint for Azure AI services on an existing virtual network and subnet.
```azurecli-interactive
- az network vnet subnet update -g "myresourcegroup" -n "mysubnet" \
+ az network vnet subnet update --resource-group "myresourcegroup" --name "mysubnet" \
--vnet-name "myvnet" --service-endpoints "Microsoft.CognitiveServices" ``` 1. Add a network rule for a virtual network and subnet. ```azurecli-interactive
- $subnetid=(az network vnet subnet show \
- -g "myresourcegroup" -n "mysubnet" --vnet-name "myvnet" \
+ subnetid=$(az network vnet subnet show \
+ --resource-group "myresourcegroup" --name "mysubnet" --vnet-name "myvnet" \
--query id --output tsv) # Use the captured subnet identifier as an argument to the network rule addition az cognitiveservices account network-rule add \
- -g "myresourcegroup" -n "myaccount" \
+ --resource-group "myresourcegroup" --name "myaccount" \
--subnet $subnetid ``` > [!TIP]
- > To add a rule for a subnet in a VNet belonging to another Azure AD tenant, use a fully-qualified subnet ID in the form "/subscriptions/subscription-ID/resourceGroups/resourceGroup-Name/providers/Microsoft.Network/virtualNetworks/vNet-name/subnets/subnet-name".
+ > To add a rule for a subnet in a virtual network that belongs to another Azure AD tenant, use a fully-qualified subnet ID in the form `/subscriptions/subscription-ID/resourceGroups/resourceGroup-Name/providers/Microsoft.Network/virtualNetworks/vNet-name/subnets/subnet-name`.
>
- > You can use the **subscription** parameter to retrieve the subnet ID for a VNet belonging to another Azure AD tenant.
+ > You can use the `--subscription` parameter to retrieve the subnet ID for a virtual network that belongs to another Azure AD tenant.
1. Remove a network rule for a virtual network and subnet. ```azurecli-interactive subnetid=$(az network vnet subnet show \
- -g "myresourcegroup" -n "mysubnet" --vnet-name "myvnet" \
+ --resource-group "myresourcegroup" --name "mysubnet" --vnet-name "myvnet" \
--query id --output tsv) # Use the captured subnet identifier as an argument to the network rule removal az cognitiveservices account network-rule remove \
- -g "myresourcegroup" -n "myaccount" \
+ --resource-group "myresourcegroup" --name "myaccount" \
--subnet $subnetid ``` *** > [!IMPORTANT]
-> Be sure to [set the default rule](#change-the-default-network-access-rule) to **deny**, or network rules have no effect.
+> Be sure to [set the default rule](#change-the-default-network-access-rule) to *deny*, or network rules have no effect.
## Grant access from an internet IP range
-You can configure Azure AI services resources to allow access from specific public internet IP address ranges. This configuration grants access to specific services and on-premises networks, effectively blocking general internet traffic.
+You can configure Azure AI services resources to allow access from specific public internet IP address ranges. This configuration grants access to specific services and on-premises networks, which effectively blocks general internet traffic.
-Provide allowed internet address ranges using [CIDR notation](https://tools.ietf.org/html/rfc4632) in the form `16.17.18.0/24` or as individual IP addresses like `16.17.18.19`.
+You can specify the allowed internet address ranges by using [CIDR format (RFC 4632)](https://tools.ietf.org/html/rfc4632) in the form `16.17.18.0/24` or as individual IP addresses like `16.17.18.19`.
> [!Tip]
- > Small address ranges using "/31" or "/32" prefix sizes are not supported. These ranges should be configured using individual IP address rules.
+ > Small address ranges that use `/31` or `/32` prefix sizes aren't supported. Configure these ranges by using individual IP address rules.
+
+IP network rules are only allowed for *public internet* IP addresses. IP address ranges reserved for private networks aren't allowed in IP rules. Private networks include addresses that start with `10.*`, `172.16.*` - `172.31.*`, and `192.168.*`. For more information, see [Private Address Space (RFC 1918)](https://tools.ietf.org/html/rfc1918#section-3).
+
+Currently, only IPv4 addresses are supported. Each Azure AI services resource supports up to 100 IP network rules, which can be combined with [virtual network rules](#grant-access-from-a-virtual-network).
-IP network rules are only allowed for **public internet** IP addresses. IP address ranges reserved for private networks (as defined in [RFC 1918](https://tools.ietf.org/html/rfc1918#section-3)) aren't allowed in IP rules. Private networks include addresses that start with `10.*`, `172.16.*` - `172.31.*`, and `192.168.*`.
+### Configure access from on-premises networks
-Only IPV4 addresses are supported at this time. Each Azure AI services resource supports up to 100 IP network rules, which may be combined with [Virtual network rules](#grant-access-from-a-virtual-network).
+To grant access from your on-premises networks to your Azure AI services resource with an IP network rule, identify the internet-facing IP addresses used by your network. Contact your network administrator for help.
-### Configuring access from on-premises networks
+If you use Azure ExpressRoute on-premises for public peering or Microsoft peering, you need to identify the NAT IP addresses. For more information, see [What is Azure ExpressRoute](../expressroute/expressroute-introduction.md).
-To grant access from your on-premises networks to your Azure AI services resource with an IP network rule, you must identify the internet facing IP addresses used by your network. Contact your network administrator for help.
+For public peering, each ExpressRoute circuit by default uses two NAT IP addresses. Each is applied to Azure service traffic when the traffic enters the Microsoft Azure network backbone. For Microsoft peering, the NAT IP addresses that are used are either customer provided or supplied by the service provider. To allow access to your service resources, you must allow these public IP addresses in the resource IP firewall setting.
-If you're using [ExpressRoute](../expressroute/expressroute-introduction.md) on-premises for public peering or Microsoft peering, you need to identify the NAT IP addresses. For public peering, each ExpressRoute circuit by default uses two NAT IP addresses. Each is applied to Azure service traffic when the traffic enters the Microsoft Azure network backbone. For Microsoft peering, the NAT IP addresses that are used are either customer provided or are provided by the service provider. To allow access to your service resources, you must allow these public IP addresses in the resource IP firewall setting. To find your public peering ExpressRoute circuit IP addresses, [open a support ticket with ExpressRoute](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/overview) via the Azure portal. Learn more about [NAT for ExpressRoute public and Microsoft peering.](../expressroute/expressroute-nat.md#nat-requirements-for-azure-public-peering)
+To find your public peering ExpressRoute circuit IP addresses, [open a support ticket with ExpressRoute](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/overview) by using the Azure portal. For more information, see [NAT requirements for Azure public peering](../expressroute/expressroute-nat.md#nat-requirements-for-azure-public-peering).
### Manage IP network rules
You can manage IP network rules for Azure AI services resources through the Azur
1. Go to the Azure AI services resource you want to secure.
-1. Select the **RESOURCE MANAGEMENT** menu called **Virtual network**.
+1. Select **Resource Management** to expand it, then select **Networking**.
-1. Check that you've selected to allow access from **Selected networks**.
+1. Confirm that you selected **Selected Networks and Private Endpoints**.
-1. To grant access to an internet IP range, enter the IP address or address range (in [CIDR format](https://tools.ietf.org/html/rfc4632)) under **Firewall** > **Address Range**. Only valid public IP (non-reserved) addresses are accepted.
+1. Under **Firewalls and virtual networks**, locate the **Address range** option. To grant access to an internet IP range, enter the IP address or address range (in [CIDR format](https://tools.ietf.org/html/rfc4632)). Only valid public IP (nonreserved) addresses are accepted.
- ![Add IP range](media/vnet/virtual-network-add-ip-range.png)
+ :::image type="content" source="media/vnet/virtual-network-add-ip-range.png" alt-text="Screenshot shows the Networking page with Selected Networks and Private Endpoints selected and the Address range highlighted." lightbox="media/vnet/virtual-network-add-ip-range.png":::
-1. To remove an IP network rule, select the trash can <span class="docon docon-delete x-hidden-focus"></span> icon next to the address range.
-
- ![Delete IP range](media/vnet/virtual-network-delete-ip-range.png)
+ To remove an IP network rule, select the trash can <span class="docon docon-delete x-hidden-focus"></span> icon next to the address range.
1. Select **Save** to apply your changes. # [PowerShell](#tab/powershell)
-1. Install the [Azure PowerShell](/powershell/azure/install-azure-powershell) and [sign in](/powershell/azure/authenticate-azureps), or select **Try it**.
+1. Install the [Azure PowerShell](/powershell/azure/install-azure-powershell) and [sign in](/powershell/azure/authenticate-azureps), or select **Open Cloudshell**.
-1. List IP network rules.
+1. List the configured IP network rules.
- ```azurepowershell-interactive
- $parameters = @{
- "ResourceGroupName"= "myresourcegroup"
- "Name"= "myaccount"
-}
+ ```azurepowershell-interactive
+ $parameters = @{
+ "ResourceGroupName" = "myresourcegroup"
+ "Name" = "myaccount"
+ }
(Get-AzCognitiveServicesAccountNetworkRuleSet @parameters).IPRules ```
You can manage IP network rules for Azure AI services resources through the Azur
```azurepowershell-interactive $parameters = @{
- -ResourceGroupName "myresourcegroup"
- -Name "myaccount"
- -IPAddressOrRange "16.17.18.19"
+ "ResourceGroupName" = "myresourcegroup"
+ "Name" = "myaccount"
+ "IPAddressOrRange" = "ipaddress"
} Add-AzCognitiveServicesAccountNetworkRule @parameters ```
You can manage IP network rules for Azure AI services resources through the Azur
```azurepowershell-interactive $parameters = @{
- -ResourceGroupName "myresourcegroup"
- -Name "myaccount"
- -IPAddressOrRange "16.17.18.0/24"
+ "ResourceGroupName" = "myresourcegroup"
+ "Name" = "myaccount"
+ "IPAddressOrRange" = "CIDR"
} Add-AzCognitiveServicesAccountNetworkRule @parameters ```
You can manage IP network rules for Azure AI services resources through the Azur
```azurepowershell-interactive $parameters = @{
- -ResourceGroupName "myresourcegroup"
- -Name "myaccount"
- -IPAddressOrRange "16.17.18.19"
+ "ResourceGroupName" = "myresourcegroup"
+ "Name" = "myaccount"
+ "IPAddressOrRange" = "ipaddress"
} Remove-AzCognitiveServicesAccountNetworkRule @parameters ```
You can manage IP network rules for Azure AI services resources through the Azur
```azurepowershell-interactive $parameters = @{
- -ResourceGroupName "myresourcegroup"
- -Name "myaccount"
- -IPAddressOrRange "16.17.18.0/24"
+ "ResourceGroupName" = "myresourcegroup"
+ "Name" = "myaccount"
+ "IPAddressOrRange" = "CIDR"
} Remove-AzCognitiveServicesAccountNetworkRule @parameters ``` # [Azure CLI](#tab/azure-cli)
-1. Install the [Azure CLI](/cli/azure/install-azure-cli) and [sign in](/cli/azure/authenticate-azure-cli), or select **Try it**.
+1. Install the [Azure CLI](/cli/azure/install-azure-cli) and [sign in](/cli/azure/authenticate-azure-cli), or select **Open Cloudshell**.
-1. List IP network rules.
+1. List the configured IP network rules.
```azurecli-interactive az cognitiveservices account network-rule list \
- -g "myresourcegroup" -n "myaccount" --query ipRules
+ --resource-group "myresourcegroup" --name "myaccount" --query ipRules
``` 1. Add a network rule for an individual IP address. ```azurecli-interactive az cognitiveservices account network-rule add \
- -g "myresourcegroup" -n "myaccount" \
- --ip-address "16.17.18.19"
+ --resource-group "myresourcegroup" --name "myaccount" \
+ --ip-address "ipaddress"
``` 1. Add a network rule for an IP address range. ```azurecli-interactive az cognitiveservices account network-rule add \
- -g "myresourcegroup" -n "myaccount" \
- --ip-address "16.17.18.0/24"
+ --resource-group "myresourcegroup" --name "myaccount" \
+ --ip-address "CIDR"
``` 1. Remove a network rule for an individual IP address. ```azurecli-interactive az cognitiveservices account network-rule remove \
- -g "myresourcegroup" -n "myaccount" \
- --ip-address "16.17.18.19"
+ --resource-group "myresourcegroup" --name "myaccount" \
+ --ip-address "ipaddress"
``` 1. Remove a network rule for an IP address range. ```azurecli-interactive az cognitiveservices account network-rule remove \
- -g "myresourcegroup" -n "myaccount" \
- --ip-address "16.17.18.0/24"
+ --resource-group "myresourcegroup" --name "myaccount" \
+ --ip-address "CIDR"
``` *** > [!IMPORTANT]
-> Be sure to [set the default rule](#change-the-default-network-access-rule) to **deny**, or network rules have no effect.
+> Be sure to [set the default rule](#change-the-default-network-access-rule) to *deny*, or network rules have no effect.
## Use private endpoints
-You can use [private endpoints](../private-link/private-endpoint-overview.md) for your Azure AI services resources to allow clients on a virtual network (VNet) to securely access data over a [Private Link](../private-link/private-link-overview.md). The private endpoint uses an IP address from the VNet address space for your Azure AI services resource. Network traffic between the clients on the VNet and the resource traverses the VNet and a private link on the Microsoft backbone network, eliminating exposure from the public internet.
+You can use [private endpoints](../private-link/private-endpoint-overview.md) for your Azure AI services resources to allow clients on a virtual network to securely access data over [Azure Private Link](../private-link/private-link-overview.md). The private endpoint uses an IP address from the virtual network address space for your Azure AI services resource. Network traffic between the clients on the virtual network and the resource traverses the virtual network and a private link on the Microsoft Azure backbone network, which eliminates exposure from the public internet.
Private endpoints for Azure AI services resources let you:
-* Secure your Azure AI services resource by configuring the firewall to block all connections on the public endpoint for the Azure AI services service.
-* Increase security for the VNet, by enabling you to block exfiltration of data from the VNet.
-* Securely connect to Azure AI services resources from on-premises networks that connect to the VNet using [VPN](../vpn-gateway/vpn-gateway-about-vpngateways.md) or [ExpressRoutes](../expressroute/expressroute-locations.md) with private-peering.
+- Secure your Azure AI services resource by configuring the firewall to block all connections on the public endpoint for the Azure AI services service.
+- Increase security for the virtual network, by enabling you to block exfiltration of data from the virtual network.
+- Securely connect to Azure AI services resources from on-premises networks that connect to the virtual network by using [Azure VPN Gateway](../vpn-gateway/vpn-gateway-about-vpngateways.md) or [ExpressRoutes](../expressroute/expressroute-locations.md) with private-peering.
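As a hedged sketch of creating a private endpoint from the command line, the following Azure CLI commands assume placeholder resource names and use `account` as the target sub-resource (group ID) for an Azure AI services account; older CLI versions expose this parameter as `--group-ids`.

```azurecli-interactive
# Sketch: create a private endpoint for an Azure AI services account.
accountId=$(az cognitiveservices account show \
    --resource-group "myresourcegroup" --name "myaccount" \
    --query id --output tsv)

az network private-endpoint create \
    --resource-group "myresourcegroup" \
    --name "myaccount-pe" \
    --vnet-name "myvnet" \
    --subnet "mysubnet" \
    --private-connection-resource-id "$accountId" \
    --group-id "account" \
    --connection-name "myaccount-pe-connection"
```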
-### Conceptual overview
+### Understand private endpoints
-A private endpoint is a special network interface for an Azure resource in your [VNet](../virtual-network/virtual-networks-overview.md). Creating a private endpoint for your Azure AI services resource provides secure connectivity between clients in your VNet and your resource. The private endpoint is assigned an IP address from the IP address range of your VNet. The connection between the private endpoint and the Azure AI services service uses a secure private link.
+A private endpoint is a special network interface for an Azure resource in your [virtual network](../virtual-network/virtual-networks-overview.md). Creating a private endpoint for your Azure AI services resource provides secure connectivity between clients in your virtual network and your resource. The private endpoint is assigned an IP address from the IP address range of your virtual network. The connection between the private endpoint and the Azure AI services service uses a secure private link.
-Applications in the VNet can connect to the service over the private endpoint seamlessly, using the same connection strings and authorization mechanisms that they would use otherwise. The exception is the Speech Services, which require a separate endpoint. See the section on [Private endpoints with the Speech Services](#private-endpoints-with-the-speech-services). Private endpoints can be used with all protocols supported by the Azure AI services resource, including REST.
+Applications in the virtual network can connect to the service over the private endpoint seamlessly. Connections use the same connection strings and authorization mechanisms that they would use otherwise. The exception is the Speech service, which requires a separate endpoint. For more information, see [Use private endpoints with the Speech service](#use-private-endpoints-with-the-speech-service) in this article. Private endpoints can be used with all protocols supported by the Azure AI services resource, including REST.
-Private endpoints can be created in subnets that use [Service Endpoints](../virtual-network/virtual-network-service-endpoints-overview.md). Clients in a subnet can connect to one Azure AI services resource using private endpoint, while using service endpoints to access others.
+Private endpoints can be created in subnets that use service endpoints. Clients in a subnet can connect to one Azure AI services resource by using a private endpoint while using service endpoints to access others. For more information, see [Virtual Network service endpoints](../virtual-network/virtual-network-service-endpoints-overview.md).
-When you create a private endpoint for an Azure AI services resource in your VNet, a consent request is sent for approval to the Azure AI services resource owner. If the user requesting the creation of the private endpoint is also an owner of the resource, this consent request is automatically approved.
+When you create a private endpoint for an Azure AI services resource in your virtual network, Azure sends a consent request for approval to the Azure AI services resource owner. If the user who requests the creation of the private endpoint is also an owner of the resource, this consent request is automatically approved.
-Azure AI services resource owners can manage consent requests and the private endpoints, through the '*Private endpoints*' tab for the Azure AI services resource in the [Azure portal](https://portal.azure.com).
+Azure AI services resource owners can manage consent requests and the private endpoints through the **Private endpoint connection** tab for the Azure AI services resource in the [Azure portal](https://portal.azure.com).
-### Private endpoints
+### Specify private endpoints
-When creating the private endpoint, you must specify the Azure AI services resource it connects to. For more information on creating a private endpoint, see:
+When you create a private endpoint, specify the Azure AI services resource that it connects to. For more information on creating a private endpoint, see:
-* [Create a private endpoint using the Private Link Center in the Azure portal](../private-link/create-private-endpoint-portal.md)
-* [Create a private endpoint using Azure CLI](../private-link/create-private-endpoint-cli.md)
-* [Create a private endpoint using Azure PowerShell](../private-link/create-private-endpoint-powershell.md)
+- [Create a private endpoint by using the Azure portal](../private-link/create-private-endpoint-portal.md)
+- [Create a private endpoint by using Azure PowerShell](../private-link/create-private-endpoint-powershell.md)
+- [Create a private endpoint by using the Azure CLI](../private-link/create-private-endpoint-cli.md)
-### Connecting to private endpoints
+### Connect to private endpoints
> [!NOTE]
-> Azure OpenAI Service uses a different private DNS zone and public DNS zone forwarder than other Azure AI services. Refer to the [Azure services DNS zone configuration article](../private-link/private-endpoint-dns.md#azure-services-dns-zone-configuration) for the correct zone and forwarder names.
+> Azure OpenAI Service uses a different private DNS zone and public DNS zone forwarder than other Azure AI services. For the correct zone and forwarder names, see [Azure services DNS zone configuration](../private-link/private-endpoint-dns.md#azure-services-dns-zone-configuration).
-Clients on a VNet using the private endpoint should use the same connection string for the Azure AI services resource as clients connecting to the public endpoint. The exception is the Speech Services, which require a separate endpoint. See the section on [Private endpoints with the Speech Services](#private-endpoints-with-the-speech-services). We rely upon DNS resolution to automatically route the connections from the VNet to the Azure AI services resource over a private link.
+Clients on a virtual network that use the private endpoint use the same connection string for the Azure AI services resource as clients connecting to the public endpoint. The exception is the Speech service, which requires a separate endpoint. For more information, see [Use private endpoints with the Speech service](#use-private-endpoints-with-the-speech-service) in this article. DNS resolution automatically routes the connections from the virtual network to the Azure AI services resource over a private link.
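One way to verify this routing is to compare name resolution from inside and outside the virtual network. A minimal sketch, assuming a resource named `myaccount` with the default custom subdomain:

```bash
# From a VM inside the virtual network, the FQDN should resolve through the
# privatelink alias to the private endpoint's private IP address. From outside,
# it resolves to the public endpoint. 'myaccount' is a placeholder name.
nslookup myaccount.cognitiveservices.azure.com
```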
-We create a [private DNS zone](../dns/private-dns-overview.md) attached to the VNet with the necessary updates for the private endpoints, by default. However, if you're using your own DNS server, you may need to make more changes to your DNS configuration. The section on [DNS changes](#dns-changes-for-private-endpoints) below describes the updates required for private endpoints.
+By default, Azure creates a [private DNS zone](../dns/private-dns-overview.md) attached to the virtual network with the necessary updates for the private endpoints. If you use your own DNS server, you might need to make more changes to your DNS configuration. For updates that might be required for private endpoints, see [Apply DNS changes for private endpoints](#apply-dns-changes-for-private-endpoints) in this article.
-### Private endpoints with the Speech Services
+### Use private endpoints with the Speech service
-See [Using Speech Services with private endpoints provided by Azure Private Link](Speech-Service/speech-services-private-link.md).
+See [Use Speech service through a private endpoint](Speech-Service/speech-services-private-link.md).
-### DNS changes for private endpoints
+### Apply DNS changes for private endpoints
-When you create a private endpoint, the DNS CNAME resource record for the Azure AI services resource is updated to an alias in a subdomain with the prefix `privatelink`. By default, we also create a [private DNS zone](../dns/private-dns-overview.md), corresponding to the `privatelink` subdomain, with the DNS A resource records for the private endpoints.
+When you create a private endpoint, the DNS `CNAME` resource record for the Azure AI services resource is updated to an alias in a subdomain with the prefix `privatelink`. By default, Azure also creates a private DNS zone that corresponds to the `privatelink` subdomain, with the DNS A resource records for the private endpoints. For more information, see [What is Azure Private DNS](../dns/private-dns-overview.md).
-When you resolve the endpoint URL from outside the VNet with the private endpoint, it resolves to the public endpoint of the Azure AI services resource. When resolved from the VNet hosting the private endpoint, the endpoint URL resolves to the private endpoint's IP address.
+When you resolve the endpoint URL from outside the virtual network with the private endpoint, it resolves to the public endpoint of the Azure AI services resource. When it's resolved from the virtual network hosting the private endpoint, the endpoint URL resolves to the private endpoint's IP address.
-This approach enables access to the Azure AI services resource using the same connection string for clients in the VNet hosting the private endpoints and clients outside the VNet.
+This approach enables access to the Azure AI services resource using the same connection string for clients in the virtual network that hosts the private endpoints and clients outside the virtual network.
-If you're using a custom DNS server on your network, clients must be able to resolve the fully qualified domain name (FQDN) for the Azure AI services resource endpoint to the private endpoint IP address. Configure your DNS server to delegate your private link subdomain to the private DNS zone for the VNet.
+If you use a custom DNS server on your network, clients must be able to resolve the fully qualified domain name (FQDN) for the Azure AI services resource endpoint to the private endpoint IP address. Configure your DNS server to delegate your private link subdomain to the private DNS zone for the virtual network.
> [!TIP]
-> When using a custom or on-premises DNS server, you should configure your DNS server to resolve the Azure AI services resource name in the 'privatelink' subdomain to the private endpoint IP address. You can do this by delegating the 'privatelink' subdomain to the private DNS zone of the VNet, or configuring the DNS zone on your DNS server and adding the DNS A records.
+> When you use a custom or on-premises DNS server, you should configure your DNS server to resolve the Azure AI services resource name in the `privatelink` subdomain to the private endpoint IP address. Delegate the `privatelink` subdomain to the private DNS zone of the virtual network. Alternatively, configure the DNS zone on your DNS server and add the DNS A records.
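If you manage DNS yourself, the steps sketched below create the private DNS zone and link it to the virtual network. The zone name shown is the one used for Azure AI services accounts; the resource group, virtual network, and link names are placeholders.

```azurecli-interactive
# Sketch: create the privatelink DNS zone and link it to the virtual network.
az network private-dns zone create \
    --resource-group "myresourcegroup" \
    --name "privatelink.cognitiveservices.azure.com"

az network private-dns link vnet create \
    --resource-group "myresourcegroup" \
    --zone-name "privatelink.cognitiveservices.azure.com" \
    --name "myvnet-dns-link" \
    --virtual-network "myvnet" \
    --registration-enabled false
```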
-For more information on configuring your own DNS server to support private endpoints, see the following articles:
+For more information on configuring your own DNS server to support private endpoints, see the following resources:
-* [Name resolution for resources in Azure virtual networks](../virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances.md#name-resolution-that-uses-your-own-dns-server)
-* [DNS configuration for private endpoints](../private-link/private-endpoint-overview.md#dns-configuration)
+- [Name resolution that uses your own DNS server](../virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances.md#name-resolution-that-uses-your-own-dns-server)
+- [DNS configuration](../private-link/private-endpoint-overview.md#dns-configuration)
### Pricing
For pricing details, see [Azure Private Link pricing](https://azure.microsoft.co
## Next steps
-* Explore the various [Azure AI services](./what-are-ai-services.md)
-* Learn more about [Azure Virtual Network Service Endpoints](../virtual-network/virtual-network-service-endpoints-overview.md)
+- Explore the various [Azure AI services](./what-are-ai-services.md)
+- Learn more about [Virtual Network service endpoints](../virtual-network/virtual-network-service-endpoints-overview.md)
ai-services Storage Lab Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/Tutorials/storage-lab-tutorial.md
Navigate to the *Web.config* file at the root of the project. Add the following
<add key="VisionEndpoint" value="VISION_ENDPOINT" /> ```
-Then in the Solution Explorer, right-click the project and use the **Manage NuGet Packages** command to install the package **Microsoft.Azure.CognitiveServices.Vision.ComputerVision**. This package contains the types needed to call the Azure AI Vision API.
+In the Solution Explorer, right-click the project and select **Manage NuGet Packages**. In the package manager that opens, select **Browse**, check **Include prerelease**, and search for **Azure.AI.Vision.ImageAnalysis**. Then select **Install**.
### Add metadata generation code
Next, you'll add the code that actually uses the Azure AI Vision service to crea
1. Open the *HomeController.cs* file in the project's **Controllers** folder and add the following `using` statements at the top of the file: ```csharp
- using Microsoft.Azure.CognitiveServices.Vision.ComputerVision;
- using Microsoft.Azure.CognitiveServices.Vision.ComputerVision.Models;
+    using Azure;  // needed for AzureKeyCredential
+    using Azure.AI.Vision.Common;
+    using Azure.AI.Vision.ImageAnalysis;
``` 1. Then, go to the **Upload** method; this method converts and uploads images to blob storage. Add the following code immediately after the block that begins with `// Generate a thumbnail` (or at the end of your image-blob-creation process). This code takes the blob containing the image (`photo`), and uses Azure AI Vision to generate a description for that image. The Azure AI Vision API also generates a list of keywords that apply to the image. The generated description and keywords are stored in the blob's metadata so that they can be retrieved later on. ```csharp // Submit the image to the Azure AI Vision API
- ComputerVisionClient vision = new ComputerVisionClient(
- new ApiKeyServiceClientCredentials(ConfigurationManager.AppSettings["SubscriptionKey"]),
- new System.Net.Http.DelegatingHandler[] { });
- vision.Endpoint = ConfigurationManager.AppSettings["VisionEndpoint"];
+ var serviceOptions = new VisionServiceOptions(
+        ConfigurationManager.AppSettings["VisionEndpoint"], // the endpoint is read from Web.config app settings, not an environment variable
+ new AzureKeyCredential(ConfigurationManager.AppSettings["SubscriptionKey"]));
- List<VisualFeatureTypes?> features = new List<VisualFeatureTypes?>() { VisualFeatureTypes.Description };
- var result = await vision.AnalyzeImageAsync(photo.Uri.ToString(), features);
+ var analysisOptions = new ImageAnalysisOptions()
+ {
+ Features = ImageAnalysisFeature.Caption | ImageAnalysisFeature.Tags,
+ Language = "en",
+ GenderNeutralCaption = true
+ };
+
+ using var imageSource = VisionSource.FromUrl(
+ new Uri(photo.Uri.ToString()));
+
+ using var analyzer = new ImageAnalyzer(serviceOptions, imageSource, analysisOptions);
+ var result = analyzer.Analyze();
// Record the image description and tags in blob metadata
- photo.Metadata.Add("Caption", result.Description.Captions[0].Text);
+ photo.Metadata.Add("Caption", result.Caption.ContentCaption.Content);
- for (int i = 0; i < result.Description.Tags.Count; i++)
+ for (int i = 0; i < result.Tags.ContentTags.Count; i++)
{ string key = String.Format("Tag{0}", i);
- photo.Metadata.Add(key, result.Description.Tags[i]);
+    photo.Metadata.Add(key, result.Tags.ContentTags[i].Name); // metadata values must be strings
} await photo.SetMetadataAsync();
ai-services Computer Vision How To Install Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/computer-vision-how-to-install-containers.md
Previously updated : 03/02/2023 Last updated : 08/29/2023 keywords: on-premises, OCR, Docker, container
keywords: on-premises, OCR, Docker, container
# Install Azure AI Vision 3.2 GA Read OCR container - Containers enable you to run the Azure AI Vision APIs in your own environment. Containers are great for specific security and data governance requirements. In this article, you'll learn how to download, install, and run the Read (OCR) container. The Read container allows you to extract printed and handwritten text from images and documents with support for JPEG, PNG, BMP, PDF, and TIFF file formats. For more information, see the [Read API how-to guide](how-to/call-read-api.md).
You must meet the following prerequisites before using the containers:
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/cognitive-services/) before you begin.
-## Request approval to run the container
-
-Fill out and submit the [request form](https://aka.ms/csgate) to request approval to run the container.
-- [!INCLUDE [Gathering required container parameters](../containers/includes/container-gathering-required-parameters.md)] ### Host computer requirements
If you run the container with an output [mount](./computer-vision-resource-conta
## Billing
-The Azure AI services containers send billing information to Azure, using the corresponding resource on your Azure account.
+The Azure AI containers send billing information to Azure, using the corresponding resource on your Azure account.
[!INCLUDE [Container's Billing Settings](../../../includes/cognitive-services-containers-how-to-billing-info.md)]
In this article, you learned concepts and workflow for downloading, installing,
* You must specify billing information when instantiating a container. > [!IMPORTANT]
-> Azure AI services containers are not licensed to run without being connected to Azure for metering. Customers need to enable the containers to communicate billing information with the metering service at all times. Azure AI services containers do not send customer data (for example, the image or text that is being analyzed) to Microsoft.
+> Azure AI containers are not licensed to run without being connected to Azure for metering. Customers need to enable the containers to communicate billing information with the metering service at all times. Azure AI containers do not send customer data (for example, the image or text that is being analyzed) to Microsoft.
## Next steps
In this article, you learned concepts and workflow for downloading, installing,
* Review the [OCR overview](overview-ocr.md) to learn more about recognizing printed and handwritten text * Refer to the [Read API](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/5d986960601faab4bf452005) for details about the methods supported by the container. * Refer to [Frequently asked questions (FAQ)](FAQ.yml) to resolve issues related to Azure AI Vision functionality.
-* Use more [Azure AI services containers](../cognitive-services-container-support.md)
+* Use more [Azure AI containers](../cognitive-services-container-support.md)
ai-services Concept Image Retrieval https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/concept-image-retrieval.md
Vector search, on the other hand, searches large collections of vectors in high-
Image retrieval has a variety of applications in different fields, including: - Digital asset management: Image retrieval can be used to manage large collections of digital images, such as in museums, archives, or online galleries. Users can search for images based on visual features and retrieve the images that match their criteria.-- Medical image retrieval: Image retrieval can be used in medical imaging to search for images based on their diagnostic features or disease patterns. This can help doctors or researchers to identify similar cases or track disease progression. - Security and surveillance: Image retrieval can be used in security and surveillance systems to search for images based on specific features or patterns, such as in, people & object tracking, or threat detection. - Forensic image retrieval: Image retrieval can be used in forensic investigations to search for images based on their visual content or metadata, such as in cases of cyber-crime. - E-commerce: Image retrieval can be used in online shopping applications to search for similar products based on their features or descriptions or provide recommendations based on previous purchases. - Fashion and design: Image retrieval can be used in fashion and design to search for images based on their visual features, such as color, pattern, or texture. This can help designers or retailers to identify similar products or trends.
+> [!CAUTION]
+> Image Retrieval is not designed to analyze medical images for diagnostic features or disease patterns. Please do not use Image Retrieval for medical purposes.
+ ## What are vector embeddings? Vector embeddings are a way of representing content&mdash;text or images&mdash;as vectors of real numbers in a high-dimensional space. Vector embeddings are often learned from large amounts of textual and visual data using machine learning algorithms, such as neural networks. Each dimension of the vector corresponds to a different feature or attribute of the content, such as its semantic meaning, syntactic role, or context in which it commonly appears.
Vector embeddings are a way of representing content&mdash;text or images&mdash;a
:::image type="content" source="media/image-retrieval.png" alt-text="Diagram of image retrieval process."::: 1. Vectorize Images and Text: the Image Retrieval APIs, **VectorizeImage** and **VectorizeText**, can be used to extract feature vectors out of an image or text respectively. The APIs return a single feature vector representing the entire input.
+ > [!NOTE]
+ > Image Retrieval does not do any biometric processing of human faces. For face detection and identification, see the [Azure AI Face service](./overview-identity.md).
+ 1. Measure similarity: Vector search systems typically use distance metrics, such as cosine distance or Euclidean distance, to compare vectors and rank them by similarity. The [Vision studio](https://portal.vision.cognitive.azure.com/) demo uses [cosine distance](./how-to/image-retrieval.md#calculate-vector-similarity) to measure similarity. 1. Retrieve Images: Use the top _N_ vectors similar to the search query and retrieve images corresponding to those vectors from your photo library to provide as the final result.
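As a rough sketch of the similarity step (assuming the vectorize APIs return equal-length float arrays, as they do for single feature vectors), cosine similarity can be computed directly; cosine distance is simply 1 minus this value:

```csharp
using System;

static class VectorMath
{
    // Cosine similarity of two embedding vectors: values closer to 1.0
    // indicate more similar content and rank as better matches.
    public static double CosineSimilarity(float[] a, float[] b)
    {
        if (a.Length != b.Length)
            throw new ArgumentException("Vectors must have the same length.");

        double dot = 0, normA = 0, normB = 0;
        for (int i = 0; i < a.Length; i++)
        {
            dot += a[i] * b[i];
            normA += a[i] * a[i];
            normB += b[i] * b[i];
        }
        return dot / (Math.Sqrt(normA) * Math.Sqrt(normB));
    }
}
```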
ai-services Concept Shelf Analysis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/concept-shelf-analysis.md
Try out the capabilities of Product Recognition quickly and easily in your brows
## Product Recognition features
-### Image modification
+### Shelf Image Composition
The [stitching and rectification APIs](./how-to/shelf-modify-images.md) let you modify images to improve the accuracy of the Product Understanding results. You can use these APIs to: * Stitch together multiple images of a shelf to create a single image. * Rectify an image to remove perspective distortion.
-### Product Understanding (pretrained)
+### Shelf Product Recognition (pretrained model)
The [Product Understanding API](./how-to/shelf-analyze.md) lets you analyze a shelf image using the out-of-box pretrained model. This operation detects products and gaps in the shelf image and returns the bounding box coordinates of each product and gap, along with a confidence score for each.
The following JSON response illustrates what the Product Understanding API retur
} ```
-### Product Understanding (custom)
+### Shelf Product Recognition - Custom (customized model)
The Product Understanding API can also be used with a [custom trained model](./how-to/shelf-model-customization.md) to detect your specific products. This operation returns the bounding box coordinates of each product and gap, along with the label of each product.
The following JSON response illustrates what the Product Understanding API retur
} ```
-### Planogram matching
+### Shelf Planogram Compliance (preview)
The [Planogram matching API](./how-to/shelf-planogram.md) lets you compare the results of the Product Understanding API to a planogram document. This operation matches each detected product and gap to its corresponding position in the planogram document.
ai-services Deploy Computer Vision On Premises https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/deploy-computer-vision-on-premises.md
replicaset.apps/read-6cbbb6678 3 3 3 3s
For more details on installing applications with Helm in Azure Kubernetes Service (AKS), [visit here][installing-helm-apps-in-aks]. > [!div class="nextstepaction"]
-> [Azure AI services containers][cog-svcs-containers]
+> [Azure AI containers][cog-svcs-containers]
<!-- LINKS - external --> [free-azure-account]: https://azure.microsoft.com/free
ai-services Background Removal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/how-to/background-removal.md
The SDK example assumes that you defined the environment variables `VISION_KEY`
#### [C#](#tab/csharp)
-Start by creating a [VisionServiceOptions](/dotnet/api/azure.ai.vision.core.options.visionserviceoptions) object using one of the constructors. For example:
+Start by creating a [VisionServiceOptions](/dotnet/api/azure.ai.vision.common.visionserviceoptions) object using one of the constructors. For example:
[!code-csharp[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/csharp/image-analysis/1/Program.cs?name=vision_service_options)]
The code in this guide uses remote images referenced by URL. You may want to try
#### [C#](#tab/csharp)
-Create a new **VisionSource** object from the URL of the image you want to analyze, using the static constructor [VisionSource.FromUrl](/dotnet/api/azure.ai.vision.core.input.visionsource.fromurl).
+Create a new **VisionSource** object from the URL of the image you want to analyze, using the static constructor [VisionSource.FromUrl](/dotnet/api/azure.ai.vision.common.visionsource.fromurl).
**VisionSource** implements **IDisposable**, therefore create the object with a **using** statement or explicitly call **Dispose** method after analysis completes. [!code-csharp[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/csharp/image-analysis/1/Program.cs?name=vision_source)] > [!TIP]
-> You can also analyze a local image by passing in the full-path image file name. See [VisionSource.FromFile](/dotnet/api/azure.ai.vision.core.input.visionsource.fromfile).
+> You can also analyze a local image by passing in the full-path image file name. See [VisionSource.FromFile](/dotnet/api/azure.ai.vision.common.visionsource.fromfile).
#### [Python](#tab/python)
ai-services Call Analyze Image 40 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/how-to/call-analyze-image-40.md
Last updated 08/01/2023-+ zone_pivot_groups: programming-languages-computer-vision-40
ai-services Identity Detect Faces https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/how-to/identity-detect-faces.md
To find faces and get their locations in an image, call the [DetectWithUrlAsync]
:::code language="csharp" source="~/cognitive-services-quickstart-code/dotnet/Face/sdk/detect.cs" id="basic1":::
-You can query the returned [DetectedFace](/dotnet/api/microsoft.azure.cognitiveservices.vision.face.models.detectedface) objects for the rectangles that give the pixel coordinates of each face. If you set _returnFaceId_ to `true` (approved customers only), you can get the unique ID for each face, which you can use in later face recognition tasks.
-
+The service returns a [DetectedFace](/dotnet/api/microsoft.azure.cognitiveservices.vision.face.models.detectedface) object, which you can query for the different kinds of information described below.
For information on how to parse the location and dimensions of the face, see [FaceRectangle](/dotnet/api/microsoft.azure.cognitiveservices.vision.face.models.facerectangle). Usually, this rectangle contains the eyes, eyebrows, nose, and mouth. The top of head, ears, and chin aren't necessarily included. To use the face rectangle to crop a complete head or get a mid-shot portrait, you should expand the rectangle in each direction.
For information on how to parse the location and dimensions of the face, see [Fa
This guide focuses on the specifics of the Detect call, such as what arguments you can pass and what you can do with the returned data. We recommend that you query for only the features you need. Each operation takes more time to complete.
+### Get face ID
+
+If you set the parameter _returnFaceId_ to `true` (approved customers only), you can get the unique ID for each face, which you can use in later face recognition tasks.
++
+The optional _faceIdTimeToLive_ parameter specifies how long (in seconds) the face ID should be stored on the server. After this time expires, the face ID is removed. The default value is 86400 (24 hours).
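A minimal sketch of such a call, assuming the Microsoft.Azure.CognitiveServices.Vision.Face package version you installed exposes the _faceIdTimeToLive_ parameter on this overload (`faceClient` and `imageUrl` are assumed to be defined as in the earlier snippets):

```csharp
// Hedged sketch: request face IDs that expire after one hour instead of the
// default 24 hours. Approved access is required for returnFaceId.
IList<DetectedFace> faces = await faceClient.Face.DetectWithUrlAsync(
    imageUrl,
    returnFaceId: true,
    detectionModel: DetectionModel.Detection03,
    faceIdTimeToLive: 3600); // in seconds; the default is 86400 (24 hours)

foreach (DetectedFace face in faces)
{
    Console.WriteLine($"Face ID: {face.FaceId}");
}
```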
+ ### Get face landmarks [Face landmarks](../concept-face-detection.md#face-landmarks) are a set of easy-to-find points on a face, such as the pupils or the tip of the nose. To get face landmark data, set the _detectionModel_ parameter to `DetectionModel.Detection01` and the _returnFaceLandmarks_ parameter to `true`.
ai-services Install Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/how-to/install-sdk.md
+
+ Title: Install the Vision SDK
+
+description: In this guide, you'll learn how to install the Vision SDK for your preferred programming language.
++++++ Last updated : 08/01/2023++
+zone_pivot_groups: programming-languages-vision-40-sdk
++
+# Install the Vision SDK
++++
+## Next steps
+
+Follow the [Image Analysis quickstart](../quickstarts-sdk/image-analysis-client-library-40.md) to get started.
ai-services Shelf Analyze https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/how-to/shelf-analyze.md
-# Analyze a shelf image using pretrained models
+# Shelf Product Recognition (preview): Analyze shelf images using pretrained model
The fastest way to start using Product Recognition is to use the built-in pretrained AI models. With the Product Understanding API, you can upload a shelf image and get the locations of products and gaps.
To analyze a shelf image, do the following steps:
1. Copy the following `curl` command into a text editor. ```bash
- curl.exe -H "Ocp-Apim-Subscription-Key: <subscriptionKey>" -H "Content-Type: application/json" "https://<endpoint>/vision/v4.0-preview.1/operations/shelfanalysis-productunderstanding:analyze" -d "{
+ curl.exe -H "Ocp-Apim-Subscription-Key: <subscriptionKey>" -H "Content-Type: application/json" "https://<endpoint>/computervision/productrecognition/ms-pretrained-product-detection/runs/<your_run_name>?api-version=2023-04-01-preview" -d "{
'url':'<your_url_string>' }" ``` 1. Make the following changes in the command where needed:
- 1. Replace the value of `<subscriptionKey>` with your Vision resource key.
- 1. Replace the value of `<endpoint>` with your Vision resource endpoint. For example: `https://YourResourceName.cognitiveservices.azure.com`.
+ 1. Replace the `<subscriptionKey>` with your Vision resource key.
+ 1. Replace the `<endpoint>` with your Vision resource endpoint. For example: `https://YourResourceName.cognitiveservices.azure.com`.
+    1. Replace the `<your_run_name>` with a unique name for your test run. It names an async API task queue that you can use to retrieve the API response later. For example, `.../runs/test1?api-version...`
1. Replace the `<your_url_string>` contents with the blob URL of the image 1. Open a command prompt window. 1. Paste your edited `curl` command from the text editor into the command prompt window, and then run the command.
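For reference, here's a minimal C# equivalent of the curl command above; the run name `test1` and the placeholder values are illustrative only:

```csharp
using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class ShelfAnalysis
{
    static async Task Main()
    {
        using var client = new HttpClient();
        client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", "<subscriptionKey>");

        // Same endpoint shape as the curl command; "test1" is a sample run name.
        var uri = "https://<endpoint>/computervision/productrecognition/" +
                  "ms-pretrained-product-detection/runs/test1?api-version=2023-04-01-preview";
        var body = new StringContent(
            "{\"url\":\"<your_url_string>\"}", Encoding.UTF8, "application/json");

        // curl -d issues a POST; this mirrors that behavior.
        HttpResponseMessage response = await client.PostAsync(uri, body);
        Console.WriteLine(await response.Content.ReadAsStringAsync());
    }
}
```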
ai-services Shelf Model Customization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/how-to/shelf-model-customization.md
Last updated 05/02/2023
-# Train a custom Product Recognition model
+# Shelf Product Recognition - Custom Model (preview)
You can train a custom model to recognize specific retail products for use in a Product Recognition scenario. The out-of-box [Analyze](shelf-analyze.md) operation doesn't differentiate between products, but you can build this capability into your app through custom labeling and training.
When your custom model is trained and ready (you've completed the steps in the [
The API call will look like this: ```bash
-curl.exe -H "Ocp-Apim-Subscription-Key: <subscriptionKey>" -H "Content-Type: application/json" "https://<endpoint>/vision/v4.0-preview.1/operations/shelfanalysis-productunderstanding:analyze?PRODUCT_CLASSIFIER_MODEL=myModelName" -d "{
+curl.exe -H "Ocp-Apim-Subscription-Key: <subscriptionKey>" -H "Content-Type: application/json" "https://<endpoint>/computervision/productrecognition/ms-pretrained-product-detection/models/<your_model_name>/runs/<your_run_name>?api-version=2023-04-01-preview" -d "{
'url':'<your_url_string>' }" ```
+1. Make the following changes in the command where needed:
+ 1. Replace the `<subscriptionKey>` with your Vision resource key.
+ 1. Replace the `<endpoint>` with your Vision resource endpoint. For example: `https://YourResourceName.cognitiveservices.azure.com`.
+    1. Replace the `<your_model_name>` with the name of the custom model you trained with your own data. For example, `.../models/mymodel1/runs/...`
+    1. Replace the `<your_run_name>` with a unique name for your test run. It names an async API task queue that you can use to retrieve the API response later. For example, `.../runs/test1?api-version...`
+ 1. Replace the `<your_url_string>` contents with the blob URL of the image
+1. Open a command prompt window.
+1. Paste your edited `curl` command from the text editor into the command prompt window, and then run the command.
+ ## Next steps In this guide, you learned how to use a custom Product Recognition model to better meet your business needs. Next, set up planogram matching, which works in conjunction with custom Product Recognition.
ai-services Shelf Modify Images https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/how-to/shelf-modify-images.md
-# Prepare images for Product Recognition
+# Shelf Image Composition (preview)
Part of the Product Recognition workflow involves fixing and modifying the input images so the service can perform correctly.
-This guide shows you how to use the Stitching API to combine multiple images of the same physical shelf: this gives you a composite image of the entire retail shelf, even if it's only viewed partially by multiple different cameras.
+This guide shows you how to use the **Stitching API** to combine multiple images of the same physical shelf: this gives you a composite image of the entire retail shelf, even if it's only viewed partially by multiple different cameras.
-This guide also shows you how to use the Rectification API to correct for perspective distortion when you stitch together different images.
+This guide also shows you how to use the **Rectification API** to correct for perspective distortion when you stitch together different images.
## Prerequisites * An Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services/)
To run the image stitching operation on a set of images, follow these steps:
1. Copy the following `curl` command into a text editor. ```bash
- curl.exe -H "Ocp-Apim-Subscription-Key: <subscriptionKey>" -H "Content-Type: application/json" "<endpoint>/computervision/imagecomposition:stitch?api-version=2023-04-01-preview" --output <your_filename> -d "{
+ curl.exe -H "Ocp-Apim-Subscription-Key: <subscriptionKey>" -H "Content-Type: application/json" "https://<endpoint>/computervision/imagecomposition:stitch?api-version=2023-04-01-preview" --output <your_filename> -d "{
'images': [ { 'url':'<your_url_string>'
To correct the perspective distortion in the composite image, follow these steps
1. Copy the following `curl` command into a text editor. ```bash
- curl.exe -H "Ocp-Apim-Subscription-Key: <subscriptionKey>" -H "Content-Type: application/json" "<endpoint>/computervision/imagecomposition:rectify?api-version=2023-04-01-preview" --output <your_filename> -d "{
+ curl.exe -H "Ocp-Apim-Subscription-Key: <subscriptionKey>" -H "Content-Type: application/json" "https://<endpoint>/computervision/imagecomposition:rectify?api-version=2023-04-01-preview" --output <your_filename> -d "{
'url': '<your_url_string>', 'controlPoints': { 'topLeft': {
ai-services Shelf Planogram https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/how-to/shelf-planogram.md
Last updated 05/02/2023
-# Check planogram compliance with Image Analysis
+# Shelf Planogram Compliance (preview)
-A planogram is a diagram that indicates the correct placement of retail products on shelves. The Image Analysis Planogram Matching API lets you compare analysis results from a photo to the store's planogram input. It returns an account of all the positions in the planogram, and whether a product was found in each position.
+A planogram is a diagram that indicates the correct placement of retail products on shelves. The Planogram Compliance API lets you compare analysis results from a photo to the store's planogram input. It returns an account of all the positions in the planogram, and whether a product was found in each position.
:::image type="content" source="../media/shelf/planogram.png" alt-text="Photo of a retail shelf with detected products outlined and planogram position rectangles outlined separately.":::
This is the text you'll use in your API request body.
1. Copy the following `curl` command into a text editor. ```bash
- curl.exe -H "Ocp-Apim-Subscription-Key: <subscriptionKey>" -H "Content-Type: application/json" "https://<endpoint>/vision/v4.0-preview.1/operations/shelfanalysis-planogrammatching:analyze" -d "<body>"
+ curl.exe -H "Ocp-Apim-Subscription-Key: <subscriptionKey>" -H "Content-Type: application/json" "https://<endpoint>/computervision/planogramcompliance:match?api-version=2023-04-01-preview" -d "<body>"
``` 1. Make the following changes in the command where needed: 1. Replace the value of `<subscriptionKey>` with your Vision resource key.
ai-services Use Large Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/how-to/use-large-scale.md
and [Identification](https://westus.dev.cognitive.microsoft.com/docs/services/56
To better utilize the large-scale feature, we recommend the following strategies.
-### Step 3.1: Customize time interval
+### Step 3a: Customize time interval
As is shown in `TrainLargeFaceList()`, there's a time interval in milliseconds to delay the infinite training status checking process. For LargeFaceList with more faces, using a larger interval reduces the call counts and cost. Customize the time interval according to the expected capacity of the LargeFaceList. The same strategy also applies to LargePersonGroup. For example, when you train a LargePersonGroup with 1 million persons, `timeIntervalInMilliseconds` might be 60,000, which is a 1-minute interval.
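A minimal sketch of that polling pattern with the Microsoft.Azure.CognitiveServices.Vision.Face SDK (the one-minute default below matches the 1-million-person example):

```csharp
using System.Threading.Tasks;
using Microsoft.Azure.CognitiveServices.Vision.Face;
using Microsoft.Azure.CognitiveServices.Vision.Face.Models;

static class LargeFaceListTraining
{
    // Poll the training status with a configurable delay; a larger interval
    // reduces call counts and cost for collections with many faces.
    public static async Task TrainAsync(
        IFaceClient client, string largeFaceListId, int timeIntervalInMilliseconds = 60_000)
    {
        await client.LargeFaceList.TrainAsync(largeFaceListId);

        while (true)
        {
            await Task.Delay(timeIntervalInMilliseconds);
            TrainingStatus status = await client.LargeFaceList.GetTrainingStatusAsync(largeFaceListId);
            if (status.Status == TrainingStatusType.Succeeded ||
                status.Status == TrainingStatusType.Failed)
            {
                break;
            }
        }
    }
}
```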
-### Step 3.2: Small-scale buffer
+### Step 3b: Small-scale buffer
Persons or faces in a LargePersonGroup or a LargeFaceList are searchable only after being trained. In a dynamic scenario, new persons or faces are constantly added and must be immediately searchable, yet training might take longer than desired.
An example workflow:
1. When the buffer collection size increases to a threshold or at a system idle time, create a new buffer collection. Trigger the Train operation on the master collection. 1. Delete the old buffer collection after the Train operation finishes on the master collection.
-### Step 3.3: Standalone training
+### Step 3c: Standalone training
If a relatively long latency is acceptable, it isn't necessary to trigger the Train operation right after you add new data. Instead, the Train operation can be split from the main logic and triggered regularly. This strategy is suitable for dynamic scenarios with acceptable latency. It can be applied to static scenarios to further reduce the Train frequency.
ai-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/language-support.md
Some features of the [Analyze - Image](https://westcentralus.dev.cognitive.micro
|Language | Language code | Categories | Tags | Description | Adult | Brands | Color | Faces | ImageType | Objects | Celebrities | Landmarks | Captions/Dense captions| |:|::|:-:|::|::|::|::|::|::|::|::|::|::|:--:| |Arabic |`ar`| | ✅| |||||| ||||
-|Azeri (Azerbaijani) |`az`| | ✅| |||||| ||||
+|Azerbaijani |`az`| | ✅| |||||| ||||
|Bulgarian |`bg`| | ✅| |||||| |||| |Bosnian Latin |`bs`| | ✅| |||||| |||| |Catalan |`ca`| | ✅| |||||| ||||
ai-services Read Container Migration Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/read-container-migration-guide.md
Set the timer with `Queue:Azure:QueueVisibilityTimeoutInMilliseconds`, which set
* Review [OCR overview](overview-ocr.md) to learn more about recognizing printed and handwritten text * Refer to the [Read API](//westus.dev.cognitive.microsoft.com/docs/services/5adf991815e1060e6355ad44/operations/56f91f2e778daf14a499e1fa) for details about the methods supported by the container. * Refer to [Frequently asked questions (FAQ)](FAQ.yml) to resolve issues related to Azure AI Vision functionality.
-* Use more [Azure AI services containers](../cognitive-services-container-support.md)
+* Use more [Azure AI containers](../cognitive-services-container-support.md)
ai-services Spatial Analysis Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/spatial-analysis-container.md
If you encounter issues when starting or running the container, see [Telemetry a
The Spatial Analysis container sends billing information to Azure, using a Vision resource on your Azure account. The use of Spatial Analysis in public preview is currently free.
-Azure AI services containers aren't licensed to run without being connected to the metering / billing endpoint. You must always enable the containers to communicate billing information with the billing endpoint. Azure AI services containers don't send customer data, such as the video or image that's being analyzed, to Microsoft.
+Azure AI containers aren't licensed to run without being connected to the metering / billing endpoint. You must always enable the containers to communicate billing information with the billing endpoint. Azure AI containers don't send customer data, such as the video or image that's being analyzed, to Microsoft.
## Summary
ai-services Vehicle Analysis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/vehicle-analysis.md
# Install and run vehicle analysis (preview)
-Vehicle analysis is a set of capabilities that, when used with the Spatial Analysis container, enable you to analyze real-time streaming video to understand vehicle characteristics and placement. In this article, you'll learn how to use the capabilities of the spatial analysis container to deploy vehicle analysis operations.
+Vehicle analysis is a set of capabilities that, when used with the Spatial Analysis container, enable you to analyze real-time streaming video to understand vehicle characteristics and placement. In this article, you learn how to use the capabilities of the Spatial Analysis container to deploy vehicle analysis operations.
## Prerequisites
Vehicle analysis is a set of capabilities that, when used with the Spatial Analy
## Vehicle analysis operations
-Similar to Spatial Analysis, vehicle analysis enables the analysis of real-time streaming video from camera devices. For each camera device you configure, the operations for vehicle analysis will generate an output stream of JSON messages that are being sent to your instance of Azure IoT Hub.
+Similar to Spatial Analysis, vehicle analysis enables the analysis of real-time streaming video from camera devices. For each camera device you configure, the operations for vehicle analysis generate an output stream of JSON messages that are sent to your instance of Azure IoT Hub.
-The following operations for vehicle analysis are available in the current Spatial Analysis container. Vehicle analysis offers operations optimized for both GPU and CPU (CPU operations include the *.cpu* distinction).
+The following operations for vehicle analysis are available in the current Spatial Analysis container. Vehicle analysis offers operations optimized for both GPU and CPU (CPU operations include the ".cpu" distinction).
| Operation identifier | Description | | -- | - |
The following table shows the parameters required by each of the vehicle analysi
| Operation ID | The Operation Identifier from table above.| | enabled | Boolean: true or false| | VIDEO_URL| The RTSP URL for the camera device (for example: `rtsp://username:password@url`). Spatial Analysis supports H.264 encoded streams either through RTSP, HTTP, or MP4. |
-| VIDEO_SOURCE_ID | A friendly name for the camera device or video stream. This will be returned with the event JSON output.|
+| VIDEO_SOURCE_ID | A friendly name for the camera device or video stream. This is returned with the event JSON output.|
| VIDEO_IS_LIVE| True for camera devices; false for recorded videos.|
-| VIDEO_DECODE_GPU_INDEX| Index specifying which GPU will decode the video frame. By default it is 0. This should be the same as the `gpu_index` in other node configurations like `VICA_NODE_CONFIG`, `DETECTOR_NODE_CONFIG`.|
+| VIDEO_DECODE_GPU_INDEX| Index specifying which GPU will decode the video frame. By default it's 0. This should be the same as the `gpu_index` in other node configurations like `VICA_NODE_CONFIG`, `DETECTOR_NODE_CONFIG`.|
| PARKING_REGIONS | JSON configuration for zone and line as outlined below. </br> PARKING_REGIONS must contain four points in normalized coordinates ([0, 1]) that define a convex region (the points follow a clockwise or counterclockwise order).| | EVENT_OUTPUT_MODE | Can be ON_INPUT_RATE or ON_CHANGE. ON_INPUT_RATE will generate an output on every single frame received (one FPS). ON_CHANGE will generate an output when something changes (number of vehicles or parking spot occupancy). |
-| PARKING_SPOT_METHOD | Can be BOX or PROJECTION. BOX will use an overlap between the detected bounding box and a reference bounding box. PROJECTIONS will project the centroid point into the parking spot polygon drawn on the floor. This is only used for Parking Spot and can be suppressed.|
+| PARKING_SPOT_METHOD | Can be BOX or PROJECTION. BOX uses an overlap between the detected bounding box and a reference bounding box. PROJECTION projects the centroid point into the parking spot polygon drawn on the floor. This parameter is only used for Parking Spot and can be omitted.|
-This is an example of a valid `PARKING_REGIONS` configuration:
+Here is an example of a valid `PARKING_REGIONS` configuration:
```json "{\"parking_slot1\": {\"type\": \"SingleSpot\", \"region\": [[0.20833333, 0.46203704], [0.3015625 , 0.66203704], [0.13229167, 0.7287037 ], [0.07395833, 0.51574074]]}}"
This is an example of a valid `PARKING_REGIONS` configuration:
### Zone configuration for cognitiveservices.vision.vehicleanalysis-vehiclecount-preview and cognitiveservices.vision.vehicleanalysis-vehiclecount.cpu-preview
-This is an example of a JSON input for the `PARKING_REGIONS` parameter that configures a zone. You may configure multiple zones for this operation.
+Here is an example of a JSON input for the `PARKING_REGIONS` parameter that configures a zone. You may configure multiple zones for this operation.
```json {
This is an example of a JSON input for the `PARKING_REGIONS` parameter that conf
### Zone configuration for cognitiveservices.vision.vehicleanalysis-vehicleinpolygon-preview and cognitiveservices.vision.vehicleanalysis-vehicleinpolygon.cpu-preview
-This is an example of a JSON input for the `PARKING_REGIONS` parameter that configures a zone. You may configure multiple zones for this operation.
+Here is an example of a JSON input for the `PARKING_REGIONS` parameter that configures a zone. You may configure multiple zones for this operation.
```json {
The JSON below demonstrates an example of the vehicle count operation graph outp
| Attribute | Type | Description | ||||
-| `VehicleType` | float | Detected vehicle types. Possible detections include "VehicleType_Bicycle", "VehicleType_Bus", "VehicleType_Car", "VehicleType_Motorcycle", "VehicleType_Pickup_Truck", "VehicleType_SUV", "VehicleType_Truck", "VehicleType_Van/Minivan", "VehicleType_type_other" |
-| `VehicleColor` | float | Detected vehicle colors. Possible detections include "VehicleColor_Black", "VehicleColor_Blue", "VehicleColor_Brown/Beige", "VehicleColor_Green", "VehicleColor_Grey", "VehicleColor_Red", "VehicleColor_Silver", "VehicleColor_White", "VehicleColor_Yellow/Gold", "VehicleColor_color_other" |
+| `VehicleType` | float | Detected vehicle types. Possible detections include VehicleType_Bicycle, VehicleType_Bus, VehicleType_Car, VehicleType_Motorcycle, VehicleType_Pickup_Truck, VehicleType_SUV, VehicleType_Truck, VehicleType_Van/Minivan, VehicleType_type_other |
+| `VehicleColor` | float | Detected vehicle colors. Possible detections include VehicleColor_Black, VehicleColor_Blue, VehicleColor_Brown/Beige, VehicleColor_Green, VehicleColor_Grey, VehicleColor_Red, VehicleColor_Silver, VehicleColor_White, VehicleColor_Yellow/Gold, VehicleColor_color_other |
| `confidence` | float| Algorithm confidence| | SourceInfo Field Name | Type| Description|
The JSON below demonstrates an example of the vehicle in polygon operation graph
| Attribute | Type | Description | ||||
-| `VehicleType` | float | Detected vehicle types. Possible detections include "VehicleType_Bicycle", "VehicleType_Bus", "VehicleType_Car", "VehicleType_Motorcycle", "VehicleType_Pickup_Truck", "VehicleType_SUV", "VehicleType_Truck", "VehicleType_Van/Minivan", "VehicleType_type_other" |
-| `VehicleColor` | float | Detected vehicle colors. Possible detections include "VehicleColor_Black", "VehicleColor_Blue", "VehicleColor_Brown/Beige", "VehicleColor_Green", "VehicleColor_Grey", "VehicleColor_Red", "VehicleColor_Silver", "VehicleColor_White", "VehicleColor_Yellow/Gold", "VehicleColor_color_other" |
+| `VehicleType` | float | Detected vehicle types. Possible detections include VehicleType_Bicycle, VehicleType_Bus, VehicleType_Car, VehicleType_Motorcycle, VehicleType_Pickup_Truck, VehicleType_SUV, VehicleType_Truck, VehicleType_Van/Minivan, VehicleType_type_other |
+| `VehicleColor` | float | Detected vehicle colors. Possible detections include VehicleColor_Black, VehicleColor_Blue, VehicleColor_Brown/Beige, VehicleColor_Green, VehicleColor_Grey, VehicleColor_Red, VehicleColor_Silver, VehicleColor_White, VehicleColor_Yellow/Gold, VehicleColor_color_other |
| `confidence` | float| Algorithm confidence| | SourceInfo Field Name | Type| Description|
The JSON below demonstrates an example of the vehicle in polygon operation graph
## Zone and line configuration for vehicle analysis
-For guidelines on where to place your zones for vehicle analysis, you can refer to the [zone and line placement](spatial-analysis-zone-line-placement.md) guide for spatial analysis. Configuring zones for vehicle analysis can be more straightforward than zones for spatial analysis if the parking spaces are already defined in the zone which you're analyzing.
+For guidelines on where to place your zones for vehicle analysis, you can refer to the [zone and line placement](spatial-analysis-zone-line-placement.md) guide for spatial analysis. Configuring zones for vehicle analysis can be more straightforward than zones for spatial analysis if the parking spaces are already defined in the zone that you're analyzing.
## Camera placement for vehicle analysis
For guidelines on where and how to place your camera for vehicle analysis, refer
The vehicle analysis container sends billing information to Azure, using a Vision resource on your Azure account. The use of vehicle analysis in public preview is currently free.
-Azure AI services containers aren't licensed to run without being connected to the metering / billing endpoint. You must enable the containers to communicate billing information with the billing endpoint at all times. Azure AI services containers don't send customer data, such as the video or image that's being analyzed, to Microsoft.
+Azure AI containers aren't licensed to run without being connected to the metering / billing endpoint. You must enable the containers to communicate billing information with the billing endpoint at all times. Azure AI containers don't send customer data, such as the video or image that's being analyzed, to Microsoft.
## Next steps
ai-services Azure Container Instance Recipe https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/containers/azure-container-instance-recipe.md
Title: Azure Container Instance recipe
-description: Learn how to deploy Azure AI services containers on Azure Container Instance
+description: Learn how to deploy Azure AI containers on Azure Container Instance
All variables in angle brackets, `<>`, need to be replaced with your own values.
1. Select **Execute** to send the request to your Container Instance.
- You have successfully created and used Azure AI services containers in Azure Container Instance.
+ You have successfully created and used Azure AI containers in Azure Container Instance.
# [CLI](#tab/cli)
ai-services Azure Kubernetes Recipe https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/containers/azure-kubernetes-recipe.md
This website is equivalent to your own client-side application that makes reques
The language detection container, in this specific procedure, is accessible to any external request. The container hasn't been changed in any way so the standard Azure AI services container-specific language detection API is available.
-For this container, that API is a POST request for language detection. As with all Azure AI services containers, you can learn more about the container from its hosted Swagger information, `http://<external-IP>:5000/swagger/index.html`.
+For this container, that API is a POST request for language detection. As with all Azure AI containers, you can learn more about the container from its hosted Swagger information, `http://<external-IP>:5000/swagger/index.html`.
-Port 5000 is the default port used with the Azure AI services containers.
+Port 5000 is the default port used with the Azure AI containers.
## Create Azure Container Registry service
To deploy the container to the Azure Kubernetes Service, the container images ne
## Get website Docker image
-1. The sample code used in this procedure is in the Azure AI services containers samples repository. Clone the repository to have a local copy of the sample.
+1. The sample code used in this procedure is in the Azure AI containers samples repository. Clone the repository to have a local copy of the sample.
```console git clone https://github.com/Azure-Samples/cognitive-services-containers-samples
This section uses the **kubectl** CLI to talk with the Azure Kubernetes Service.
1. Copy the following file and name it `language.yml`. The file has a `service` section and a `deployment` section each for the two container types, the `language-frontend` website container and the `language` detection container.
- [!code-yml[Kubernetes orchestration file for the Azure AI services containers sample](~/samples-cogserv-containers/Kubernetes/language/language.yml "Kubernetes orchestration file for the Azure AI services containers sample")]
+ [!code-yml[Kubernetes orchestration file for the Azure AI containers sample](~/samples-cogserv-containers/Kubernetes/language/language.yml "Kubernetes orchestration file for the Azure AI containers sample")]
1. Change the language-frontend deployment lines of `language.yml` based on the following table to add your own container registry image names, client secret, and Language service settings.
az group delete --name cogserv-container-rg
## Next steps
-[Azure AI services containers](../cognitive-services-container-support.md)
+[Azure AI containers](../cognitive-services-container-support.md)
ai-services Container Reuse Recipe https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/containers/container-reuse-recipe.md
# Create containers for reuse
-Use these container recipes to create Azure AI services containers that can be reused. Containers can be built with some or all configuration settings so that they are _not_ needed when the container is started.
+Use these container recipes to create Azure AI containers that can be reused. Containers can be built with some or all configuration settings so that they are _not_ needed when the container is started.
Once you have this new layer of container (with settings), and you have tested it locally, you can store the container in a container registry. When the container starts, it will only need those settings that are not currently stored in the container. The private registry container provides configuration space for you to pass those settings in.
ai-services Disconnected Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/containers/disconnected-containers.md
If you run the container with an output mount and logging enabled, the container
## Next steps
-[Azure AI services containers overview](../cognitive-services-container-support.md)
+[Azure AI containers overview](../cognitive-services-container-support.md)
ai-services Docker Compose Recipe https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/containers/docker-compose-recipe.md
Title: Use Docker Compose to deploy multiple containers
-description: Learn how to deploy multiple Azure AI services containers. This article shows you how to orchestrate multiple Docker container images by using Docker Compose.
+description: Learn how to deploy multiple Azure AI containers. This article shows you how to orchestrate multiple Docker container images by using Docker Compose.
# Use Docker Compose to deploy multiple containers
-This article shows you how to deploy multiple Azure AI services containers. Specifically, you'll learn how to use Docker Compose to orchestrate multiple Docker container images.
+This article shows you how to deploy multiple Azure AI containers. Specifically, you'll learn how to use Docker Compose to orchestrate multiple Docker container images.
> [Docker Compose](https://docs.docker.com/compose/) is a tool for defining and running multi-container Docker applications. In Compose, you use a YAML file to configure your application's services. Then, you create and start all the services from your configuration by running a single command.
Open a browser on the host machine and go to **localhost** by using the specifie
## Next steps
-[Azure AI services containers](../cognitive-services-container-support.md)
+[Azure AI containers](../cognitive-services-container-support.md)
ai-services Client Libraries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/content-moderator/client-libraries.md
Title: 'Quickstart: Use the Content Moderator client library'
-description: The Content Moderator API offers client libraries that makes it easy to integrate Content Moderator into your applications.
+description: The Content Moderator API offers client libraries that make it easy to integrate Content Moderator into your applications.
keywords: content moderator, Azure AI Content Moderator, online moderator, conte
# Quickstart: Use the Content Moderator client library + ::: zone pivot="programming-language-csharp" [!INCLUDE [C# quickstart](includes/quickstarts/csharp-sdk.md)]
ai-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/content-moderator/language-support.md
|Assamese | |✔️ | | | |Azerbaijani | |✔️ | | | |Bangla - Bangladesh | |✔️ | | |
-|Bangla - India | |✔️ | | |
|Balinese |✔️ | | | | |Basque | |✔️ | | | |Belarusian | |✔️ | | | |Bengali | ✔️| | | |
+|Bengali - India | |✔️ | | |
|Bosnian - Cyrillic | |✔️ | | | |Bosnian - Latin | |✔️ | | | |Buginese |✔️ | | | |
ai-services Changelog Release History https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/changelog-release-history.md
Previously updated : 07/18/2023 Last updated : 08/17/2023 monikerRange: '<=doc-intel-3.1.0'
This reference article provides a version-based description of Document Intellig
[**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-jav)
-[**Package (MVN)**](https://mvnrepository.com/artifact/com.azure/azure-ai-formrecognizer)
+[**Package (MVN)**](https://mvnrepository.com/artifact/com.azure/azure-ai-formrecognizer/4.1.0)
[**ReadMe**](https://github.com/Azure/azure-sdk-for-jav)
ai-services Choose Model Feature https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/choose-model-feature.md
The following decision charts highlight the features of each **Document Intellig
| Document type | Data to extract | Your best solution | | --|--|-| |**U.S. W-2 tax form**|You want to extract key information such as salary, wages, and taxes withheld.|[**W-2 model**](concept-w2.md)|
-|**Health insurance card** or health insurance ID.| You want to extract key information such as insurer, member ID, prescription coverage, and group number.|[**Health insurance card model**](./concept-insurance-card.md)|
+|**Health insurance card** or health insurance ID.| You want to extract key information such as insurer, member ID, prescription coverage, and group number.|[**Health insurance card model**](./concept-health-insurance-card.md)|
|**Invoice** or billing statement.|You want to extract key information such as customer name, billing address, and amount due.|[**Invoice model**](concept-invoice.md) |**Receipt**, voucher, or single-page hotel receipt. |You want to extract key information such as merchant name, transaction date, and transaction total.|[**Receipt model**](concept-receipt.md)| |**Identity document (ID)** like a U.S. driver's license or international passport. |You want to extract key information such as first name, last name, date of birth, address, and signature. | [**Identity document (ID) model**](concept-id-document.md)|
ai-services Concept Add On Capabilities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-add-on-capabilities.md
Previously updated : 07/18/2023 Last updated : 08/25/2023 monikerRange: 'doc-intel-3.1.0'
monikerRange: 'doc-intel-3.1.0'
# Document Intelligence add-on capabilities > [!NOTE] >
-> Add-on capabilities for Document Intelligence Studio are available with the Read and Layout models for the `2023-07-31` (GA)release.
+> Add-on capabilities for Document Intelligence Studio are available with the Read and Layout models starting with the `2023-02-28-preview` and later releases.
+>
+> Add-on capabilities are available within all models except for the [Business card model](concept-business-card.md).
-Document Intelligence now supports more sophisticated analysis capabilities. These optional capabilities can be enabled and disabled depending on the scenario of the document extraction. There are three add-on capabilities available for the `2023-07-31` (GA) release:
-Document Intelligence now supports more sophisticated analysis capabilities. These optional capabilities can be enabled and disabled depending on the scenario of the document extraction. There are three add-on capabilities available for the `2023-07-31` (GA) release:
+Document Intelligence now supports more sophisticated analysis capabilities. These optional capabilities can be enabled and disabled depending on the document extraction scenario. The following add-on capabilities are available for `2023-02-28-preview` and later releases:
* [`ocr.highResolution`](#high-resolution-extraction)
Document Intelligence now supports more sophisticated analysis capabilities. The
* [`ocr.font`](#font-property-extraction)
+* [`ocr.barcode`](#barcode-property-extraction)
+ ## High resolution extraction The task of recognizing small text from large-size documents, like engineering drawings, is a challenge. Often the text is mixed with other graphical elements and has varying fonts, sizes and orientations. Moreover, the text may be broken into separate parts or connected with other symbols. Document Intelligence now supports extracting content from these types of documents with the `ocr.highResolution` capability. You get improved quality of content extraction from A1/A2/A3 documents by enabling this add-on capability.
The `ocr.font` capability extracts all font properties of text extracted in the
] ```
+## Barcode property extraction
+
+The `ocr.barcode` capability extracts all identified barcodes in the `barcodes` collection as a top-level object under `content`. Inside `content`, detected barcodes are represented as `:barcode:`. Each entry in this collection represents a barcode and includes the barcode type as `kind` and the embedded barcode content as `value`, along with its `polygon` coordinates. Initially, barcodes appear at the end of each page. The `confidence` is hard-coded as 1. A hedged SDK sketch follows the table of supported barcode types below.
+
+#### Supported barcode types
+
+| **Barcode Type** | **Example** |
+| | |
+| `QR Code` |:::image type="content" source="media/barcodes/qr-code.png" alt-text="Screenshot of the QR Code.":::|
+| `Code 39` |:::image type="content" source="media/barcodes/code-39.png" alt-text="Screenshot of the Code 39.":::|
+| `Code 93` |:::image type="content" source="media/barcodes/code-93.gif" alt-text="Screenshot of the Code 93.":::|
+| `Code 128` |:::image type="content" source="media/barcodes/code-128.png" alt-text="Screenshot of the Code 128.":::|
+| `UPC (UPC-A & UPC-E)` |:::image type="content" source="media/barcodes/upc.png" alt-text="Screenshot of the UPC.":::|
+| `PDF417` |:::image type="content" source="media/barcodes/pdf-417.png" alt-text="Screenshot of the PDF417.":::|
+| `EAN-8` |:::image type="content" source="media/barcodes/european-article-number-8.gif" alt-text="Screenshot of the European-article-number barcode ean-8.":::|
+| `EAN-13` |:::image type="content" source="media/barcodes/european-article-number-13.gif" alt-text="Screenshot of the European-article-number barcode ean-13.":::|
+| `Codabar` |:::image type="content" source="media/barcodes/codabar.png" alt-text="Screenshot of the Codabar.":::|
+| `Databar` |:::image type="content" source="media/barcodes/databar.png" alt-text="Screenshot of the Data bar.":::|
+| `Databar` Expanded |:::image type="content" source="media/barcodes/databar-expanded.gif" alt-text="Screenshot of the Data bar Expanded.":::|
+| `ITF` |:::image type="content" source="media/barcodes/interleaved-two-five.png" alt-text="Screenshot of the interleaved-two-of-five barcode (ITF).":::|
+| `Data Matrix` |:::image type="content" source="media/barcodes/datamatrix.gif" alt-text="Screenshot of the Data Matrix.":::|
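As mentioned above, here's a hedged sketch of requesting the barcode add-on, assuming the Azure.AI.FormRecognizer 4.1 C# SDK surfaces it as `DocumentAnalysisFeature.Barcodes` (endpoint, key, and document URL are placeholders):

```csharp
using System;
using Azure;
using Azure.AI.FormRecognizer.DocumentAnalysis;

var client = new DocumentAnalysisClient(
    new Uri("<endpoint>"), new AzureKeyCredential("<key>"));

// Request the barcodes add-on alongside the prebuilt-read model.
var options = new AnalyzeDocumentOptions { Features = { DocumentAnalysisFeature.Barcodes } };
AnalyzeDocumentOperation operation = await client.AnalyzeDocumentFromUriAsync(
    WaitUntil.Completed, "prebuilt-read", new Uri("<documentUrl>"), options);

// Detected barcodes are grouped per page, each with a kind, value, and polygon.
foreach (DocumentPage page in operation.Value.Pages)
{
    foreach (DocumentBarcode barcode in page.Barcodes)
    {
        Console.WriteLine($"{barcode.Kind}: {barcode.Value}");
    }
}
```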
+ ## Next steps > [!div class="nextstepaction"]
ai-services Concept Custom Classifier https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-custom-classifier.md
monikerRange: 'doc-intel-3.1.0'
# Document Intelligence custom classification model
-**This article applies to:** ![Document Intelligence checkmark](medi) supported by Document Intelligence REST API version [2023-07-31](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-07-31/operations/AnalyzeDocument)**.
+**This article applies to:** ![Document Intelligence checkmark](medi) supported by Document Intelligence REST API version [2023-07-31](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-07-31/operations/AnalyzeDocument)**.
> [!IMPORTANT] >
ai-services Concept Custom Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-custom-template.md
The following table lists the supported languages for print text by the most rec
|:--|:-:| |`Rwa`|`rwk`| |Sadri (Devanagari)|`sck`|
+ |Sakha|`sah`|
|Samburu|`saq`| |Samoan (Latin)|`sm`| |Sango|`sg`|
The following table lists the supported languages for print text by the most rec
|Western Frisian|`fy`| |Wolof|`wo`| |Xhosa|`xh`|
- |Yakut|`sah`|
|Yucatec Maya|`yua`| |Zapotec|`zap`| |Zarma|`dje`|
ai-services Concept Custom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-custom.md
The following table lists the supported languages for print text by the most rec
|Russian|ru| |Rwa|rwk| |Sadri (Devanagari)|sck|
+ |Sakha|sah|
|Samburu|saq| |Samoan (Latin)|sm| |Sango|sg|
The following table lists the supported languages for print text by the most rec
|Western Frisian|fy| |Wolof|wo| |Xhosa|xh|
- |Yakut|sah|
|Yucatec Maya|yua| |Zapotec|zap| |Zarma|dje|
ai-services Concept Health Insurance Card https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-health-insurance-card.md
+
+ Title: Health insurance card processing - Document Intelligence (formerly Form Recognizer)
+
+description: Data extraction and analysis using the health insurance card model
+++++ Last updated : 07/18/2023+
+monikerRange: '>=doc-intel-3.0.0'
+++
+# Document Intelligence health insurance card model
++
+The Document Intelligence health insurance card model combines powerful Optical Character Recognition (OCR) capabilities with deep learning models to analyze and extract key information from US health insurance cards. A health insurance card is a key document for care processing and can be digitally analyzed for patient onboarding, financial coverage information, cashless payments, and insurance claim processing. The health insurance card model analyzes health card images; extracts key information such as insurer, member, prescription, and group number; and returns a structured JSON representation. Health insurance cards can be presented in various formats and quality including phone-captured images, scanned documents, and digital PDFs.
+
+***Sample health insurance card processed using Document Intelligence Studio***
++
+## Development options
+
+Document Intelligence v3.0 and later versions support the prebuilt health insurance card model with the following tools:
+
+| Feature | Resources | Model ID |
+|-|-|--|
+|**health insurance card model**|<ul><li> [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com)</li><li>[**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-07-31/operations/AnalyzeDocument)</li><li>[**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.1.0&preserve-view=true)</li><li>[**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.1.0&preserve-view=true)</li><li>[**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.1.0&preserve-view=true)</li><li>[**JavaScript SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.1.0&preserve-view=true)</li></ul>|**prebuilt-healthInsuranceCard.us**|
+
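As a minimal sketch of the SDK path listed in the table above, the following Python example calls the `prebuilt-healthInsuranceCard.us` model and reads a few extracted fields (`azure-ai-formrecognizer` 3.3.0). The endpoint, key, and file name are placeholders.

```python
# A minimal sketch of calling the prebuilt health insurance card model
# (azure-ai-formrecognizer 3.3.0); endpoint, key, and file name are
# placeholders.
from azure.ai.formrecognizer import DocumentAnalysisClient
from azure.core.credentials import AzureKeyCredential

client = DocumentAnalysisClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com/",
    credential=AzureKeyCredential("<your-key>"),
)

with open("insurance-card.jpg", "rb") as f:
    poller = client.begin_analyze_document(
        "prebuilt-healthInsuranceCard.us", document=f
    )

card = poller.result().documents[0]
for name in ("Insurer", "IdNumber", "GroupNumber"):
    field = card.fields.get(name)
    if field:
        print(name, "->", field.content, f"(confidence {field.confidence:.2f})")
```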
+## Input requirements
++
+### Try Document Intelligence Studio
+
+See how data is extracted from health insurance cards using the Document Intelligence Studio. You need the following resources:
+
+* An Azure subscription—you can [create one for free](https://azure.microsoft.com/free/cognitive-services/)
+
+* A [Document Intelligence instance](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) in the Azure portal. You can use the free pricing tier (`F0`) to try the service. After your resource deploys, select **Go to resource** to get your key and endpoint.
+
+ :::image type="content" source="media/containers/keys-and-endpoint.png" alt-text="Screenshot of keys and endpoint location in the Azure portal.":::
+
+> [!NOTE]
+> Document Intelligence Studio is available with API version v3.0.
+
+1. On the [Document Intelligence Studio home page](https://formrecognizer.appliedai.azure.com/studio), select **Health insurance cards**.
+
+1. You can analyze the sample insurance card document or select the **➕ Add** button to upload your own sample.
+
+1. Select the **Analyze** button:
+
+ :::image type="content" source="media/studio/insurance-card-analyze.png" alt-text="Screenshot of analyze health insurance card window in the Document Intelligence Studio.":::
+
+ > [!div class="nextstepaction"]
+ > [Try Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=healthInsuranceCard.us)
+
+## Supported languages and locales
+
+| Model | Language—Locale code | Default |
+|--|:--|:--|
+|prebuilt-healthInsuranceCard.us| <ul><li>English (United States)</li></ul>|English (United States)—en-US|
+
+## Field extraction
+
+| Field | Type | Description | Example |
+|:--|:--|:--|:--|
+|`Insurer`|`string`|Health insurance provider name|PREMERA<br>BLUE CROSS|
+|`Member`|`object`|||
+|`Member.Name`|`string`|Member name|ANGEL BROWN|
+|`Member.BirthDate`|`date`|Member date of birth|01/06/1958|
+|`Member.Employer`|`string`|Member employer name|Microsoft|
+|`Member.Gender`|`string`|Member gender|M|
+|`Member.IdNumberSuffix`|`string`|Identification Number Suffix as it appears on some health insurance cards|01|
+|`Dependents`|`array`|Array holding list of dependents, ordered where possible by membership suffix value||
+|`Dependents.*`|`object`|||
+|`Dependents.*.Name`|`string`|Dependent name|01|
+|`IdNumber`|`object`|||
+|`IdNumber.Prefix`|`string`|Identification Number Prefix as it appears on some health insurance cards|ABC|
+|`IdNumber.Number`|`string`|Identification Number|123456789|
+|`GroupNumber`|`string`|Insurance Group Number|1000000|
+|`PrescriptionInfo`|`object`|||
+|`PrescriptionInfo.Issuer`|`string`|ANSI issuer identification number (IIN)|(80840) 300-11908-77|
+|`PrescriptionInfo.RxBIN`|`string`|Prescription issued BIN number|987654|
+|`PrescriptionInfo.RxPCN`|`string`|Prescription processor control number|63200305|
+|`PrescriptionInfo.RxGrp`|`string`|Prescription group number|BCAAXYZ|
+|`PrescriptionInfo.RxId`|`string`|Prescription identification number. If not present, defaults to membership ID number|P97020065|
+|`PrescriptionInfo.RxPlan`|`string`|Prescription Plan number|A1|
+|`Pbm`|`string`|Pharmacy Benefit Manager for the plan|CVS CAREMARK|
+|`EffectiveDate`|`date`|Date from which the plan is effective|08/12/2012|
+|`Copays`|`array`|Array holding list of Co-Pay Benefits||
+|`Copays.*`|`object`|||
+|`Copays.*.Benefit`|`string`|Co-Pay Benefit name|Deductible|
+|`Copays.*.Amount`|`currency`|Co-Pay required amount|$1,500|
+|`Payer`|`object`|||
+|`Payer.Id`|`string`|Payer ID Number|89063|
+|`Payer.Address`|`address`|Payer address|123 Service St., Redmond WA, 98052|
+|`Payer.PhoneNumber`|`phoneNumber`|Payer phone number|+1 (987) 213-5674|
+|`Plan`|`object`|||
+|`Plan.Number`|`string`|Plan number|456|
+|`Plan.Name`|`string`|Plan name. If the plan is Medicaid, the value is `medicaid` (all lowercase).|HEALTH SAVINGS PLAN|
+|`Plan.Type`|`string`|Plan type|PPO|
+|`MedicareMedicaidInfo`|`object`|||
+|`MedicareMedicaidInfo.Id`|`string`|Medicare or Medicaid number|1AB2-CD3-EF45|
+|`MedicareMedicaidInfo.PartAEffectiveDate`|`date`|Effective date of Medicare Part A|01-01-2023|
+|`MedicareMedicaidInfo.PartBEffectiveDate`|`date`|Effective date of Medicare Part B|01-01-2023|
+
+### Migration guide and REST API v3.1
+
+* Follow our [**Document Intelligence v3.1 migration guide**](v3-1-migration-guide.md) to learn how to use the v3.1 version in your applications and workflows.
+
+* Explore our [**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-07-31/operations/AnalyzeDocument) to learn more about the v3.1 version and new capabilities.
+
+## Next steps
+
+* Try processing your own forms and documents with the [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio)
+
+* Complete a [Document Intelligence quickstart](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.1.0&preserve-view=true) and get started creating a document processing app in the development language of your choice.
ai-services Concept Invoice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-invoice.md
See how data, including customer information, vendor details, and line items, is
| &bullet; Italian (`it`) | Italy (`it`)| | &bullet; Portuguese (`pt`) | Portugal (`pt`), Brazil (`br`)| | &bullet; Dutch (`nl`) | Netherlands (`nl`)|
-| &bullet; Czech (`cs`) | Czechoslovakia (`cz`)|
+| &bullet; Czech (`cs`) | Czech Republic (`cz`)|
| &bullet; Danish (`da`) | Denmark (`dk`)| | &bullet; Estonian (`et`) | Estonia (`ee`)| | &bullet; Finnish (`fi`) | Finland (`fi`)|
ai-services Concept Layout https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-layout.md
The following table lists the supported languages for print text by the most rec
|:--|:-:| |Rwa|rwk| |Sadri (Devanagari)|sck|
+ |Sakha|sah|
|Samburu|saq| |Samoan (Latin)|sm| |Sango|sg|
The following table lists the supported languages for print text by the most rec
|Western Frisian|fy| |Wolof|wo| |Xhosa|xh|
- |Yakut|sah|
|Yucatec Maya|yua| |Zapotec|zap| |Zarma|dje|
ai-services Concept Model Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-model-overview.md
The health insurance card model combines powerful Optical Character Recognition
:::image type="content" source="./media/studio/analyze-health-card.png" alt-text="Screenshot of a sample US health insurance card analysis in Document Intelligence Studio." lightbox="./media/studio/analyze-health-card.png"::: > [!div class="nextstepaction"]
-> [Learn more: Health insurance card model](concept-insurance-card.md)
+> [Learn more: Health insurance card model](concept-health-insurance-card.md)
### W-2
A composed model is created by taking a collection of custom models and assignin
| **Model ID** | **Text extraction** | **Language detection** | **Selection Marks** | **Tables** | **Paragraphs** | **Structure** | **Key-Value pairs** | **Fields** | |:--|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:| | [prebuilt-read](concept-read.md#data-detection-and-extraction) | ✓ | ✓ | | | ✓ | | | |
-| [prebuilt-healthInsuranceCard.us](concept-insurance-card.md#field-extraction) | ✓ | | ✓ | | ✓ | | | ✓ |
+| [prebuilt-healthInsuranceCard.us](concept-health-insurance-card.md#field-extraction) | ✓ | | ✓ | | ✓ | | | ✓ |
| [prebuilt-tax.us.w2](concept-w2.md#field-extraction) | ✓ | | ✓ | | ✓ | | | ✓ | | [prebuilt-document](concept-general-document.md#data-extraction)| ✓ | | ✓ | ✓ | ✓ | | ✓ | | | [prebuilt-layout](concept-layout.md#data-extraction) | ✓ | | ✓ | ✓ | ✓ | ✓ | | |
ai-services Concept Read https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-read.md
The following table lists the supported languages for print text by the most rec
|Russian|ru| |Rwa|rwk| |Sadri (Devanagari)|sck|
+ |Sakha|sah|
|Samburu|saq| |Samoan (Latin)|sm| |Sango|sg|
The following table lists the supported languages for print text by the most rec
|Western Frisian|fy| |Wolof|wo| |Xhosa|xh|
- |Yakut|sah|
|Yucatec Maya|yua| |Zapotec|zap| |Zarma|dje|
The [Read API](concept-read.md) supports detecting the following languages in yo
| Nepali | `ne` | | Norwegian | `no` | | Norwegian Nynorsk | `nn` |
-| Oriya | `or` |
+| Odia | `or` |
| Pashto | `ps` | | Persian | `fa` | | Polish | `pl` |
ai-services Install Run https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/containers/install-run.md
monikerRange: '<=doc-intel-3.1.0'
[!INCLUDE [applies to v2.1](../includes/applies-to-v2-1.md)] ::: moniker-end
-Azure AI Document Intelligence is an Azure AI service that lets you build automated data processing software using machine-learning technology. Document Intelligence enables you to identify and extract text, key/value pairs, selection marks, table data, and more from your form documents. The results are delivered as structured data that includes the relationships in the original file.
+Azure AI Document Intelligence is an Azure AI service that lets you build automated data processing software using machine-learning technology. Document Intelligence enables you to identify and extract text, key/value pairs, selection marks, table data, and more from your documents. The results are delivered as structured data that includes the relationships in the original file.
::: moniker range=">=doc-intel-3.0.0" In this article, you learn how to download, install, and run Document Intelligence containers. Containers enable you to run the Document Intelligence service in your own environment and are a good fit for specific security and data governance requirements.
In this article you learn how to download, install, and run Document Intelligenc
::: moniker-end
-> [!IMPORTANT]
->
-> * To use Document Intelligence containers, you must submit an online request, and have it approved. For more information, _see_ [Request approval to run container](#request-approval-to-run-container).
- ## Prerequisites To get started, you need an active [**Azure account**](https://azure.microsoft.com/free/cognitive-services/). If you don't have one, you can [**create a free account**](https://azure.microsoft.com/free/).
You also need an **Azure AI Vision API resource to process business cards, ID do
* **{COMPUTER_VISION_ENDPOINT_URI}**: the endpoint for the resource used to track billing information. :::moniker-end
-## Request approval to run container
-
-Complete and submit the [**Azure AI services application for Gated Services**](https://aka.ms/csgate) to request access to the container.
-- ## Host computer requirements The host is an x64-based computer that runs the Docker container. It can be a computer on your premises or a Docker hosting service in Azure, such as:
http {
```yml version: '3.3'
- nginx:
- image: nginx:alpine
- container_name: reverseproxy
- volumes:
- - ${NGINX_CONF_FILE}:/etc/nginx/nginx.conf
- ports:
- - "5000:5000"
- layout:
- container_name: azure-cognitive-service-layout
- image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/layout-3.0:latest
- environment:
- eula: accept
- apikey: ${FORM_RECOGNIZER_KEY}
- billing: ${FORM_RECOGNIZER_ENDPOINT_URI}
- Logging:Console:LogLevel:Default: Information
- SharedRootFolder: /shared
- Mounts:Shared: /shared
- Mounts:Output: /logs
- volumes:
- - type: bind
- source: ${SHARED_MOUNT_PATH}
- target: /shared
- - type: bind
- source: ${OUTPUT_MOUNT_PATH}
- target: /logs
- expose:
- - "5000"
+ nginx:
+ image: nginx:alpine
+ container_name: reverseproxy
+ volumes:
+ - ${NGINX_CONF_FILE}:/etc/nginx/nginx.conf
+ ports:
+ - "5000:5000"
+ layout:
+ container_name: azure-cognitive-service-layout
+ image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/layout-3.0:latest
+ environment:
+ eula: accept
+ apikey: ${FORM_RECOGNIZER_KEY}
+ billing: ${FORM_RECOGNIZER_ENDPOINT_URI}
+ Logging:Console:LogLevel:Default: Information
+ SharedRootFolder: /shared
+ Mounts:Shared: /shared
+ Mounts:Output: /logs
+ volumes:
+ - type: bind
+ source: ${SHARED_MOUNT_PATH}
+ target: /shared
+ - type: bind
+ source: ${OUTPUT_MOUNT_PATH}
+ target: /logs
+ expose:
+ - "5000"
- custom-template:
- container_name: azure-cognitive-service-custom-template
- image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/custom-template-3.0:latest
- restart: always
- depends_on:
- - layout
- environment:
- AzureCognitiveServiceLayoutHost: http://azure-cognitive-service-layout:5000
- eula: accept
- apikey: ${FORM_RECOGNIZER_KEY}
- billing: ${FORM_RECOGNIZER_ENDPOINT_URI}
- Logging:Console:LogLevel:Default: Information
- SharedRootFolder: /shared
- Mounts:Shared: /shared
- Mounts:Output: /logs
- volumes:
- - type: bind
- source: ${SHARED_MOUNT_PATH}
- target: /shared
- - type: bind
- source: ${OUTPUT_MOUNT_PATH}
- target: /logs
- expose:
- - "5000"
+ custom-template:
+ container_name: azure-cognitive-service-custom-template
+ image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/custom-template-3.0:latest
+ restart: always
+ depends_on:
+ - layout
+ environment:
+ AzureCognitiveServiceLayoutHost: http://azure-cognitive-service-layout:5000
+ eula: accept
+ apikey: ${FORM_RECOGNIZER_KEY}
+ billing: ${FORM_RECOGNIZER_ENDPOINT_URI}
+ Logging:Console:LogLevel:Default: Information
+ SharedRootFolder: /shared
+ Mounts:Shared: /shared
+ Mounts:Output: /logs
+ volumes:
+ - type: bind
+ source: ${SHARED_MOUNT_PATH}
+ target: /shared
+ - type: bind
+ source: ${OUTPUT_MOUNT_PATH}
+ target: /logs
+ expose:
+ - "5000"
- studio:
- container_name: form-recognizer-studio
- image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/studio:3.0
- environment:
- ONPREM_LOCALFILE_BASEPATH: /onprem_folder
- STORAGE_DATABASE_CONNECTION_STRING: /onprem_db/Application.db
- volumes:
- - type: bind
- source: ${FILE_MOUNT_PATH} # path to your local folder
- target: /onprem_folder
- - type: bind
- source: ${DB_MOUNT_PATH} # path to your local folder
- target: /onprem_db
- ports:
- - "5001:5001"
- user: "1000:1000" # echo $(id -u):$(id -g)
+ studio:
+ container_name: form-recognizer-studio
+ image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/studio:3.0
+ environment:
+ ONPREM_LOCALFILE_BASEPATH: /onprem_folder
+ STORAGE_DATABASE_CONNECTION_STRING: /onprem_db/Application.db
+ volumes:
+ - type: bind
+ source: ${FILE_MOUNT_PATH} # path to your local folder
+ target: /onprem_folder
+ - type: bind
+ source: ${DB_MOUNT_PATH} # path to your local folder
+ target: /onprem_db
+ ports:
+ - "5001:5001"
+ user: "1000:1000" # echo $(id -u):$(id -g)
```
http {
2. The following code sample is a self-contained `docker compose` example to run Document Intelligence Layout, Label Tool, Custom API, and Custom Supervised containers together. With `docker compose`, you use a YAML file to configure your application's services. Then, with the `docker-compose up` command, you create and start all the services from your configuration. ```yml
- version: '3.3'
+version: '3.3'
nginx: image: nginx:alpine
ai-services Create Sas Tokens https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/create-sas-tokens.md
To get started, you need:
:::image type="content" source="media/sas-tokens/upload-blob-window.png" alt-text="Screenshot that shows the Upload blob window in the Azure portal."::: > [!NOTE]
- > By default, the REST API uses form documents located at the root of your container. You can also use data organized in subfolders if specified in the API call. For more information, see [Organize your data in subfolders](how-to-guides/build-a-custom-model.md?view=doc-intel-2.1.0&preserve-view=true#organize-your-data-in-subfolders-optional).
+ > By default, the REST API uses documents located at the root of your container. You can also use data organized in subfolders if specified in the API call. For more information, see [Organize your data in subfolders](how-to-guides/build-a-custom-model.md?view=doc-intel-2.1.0&preserve-view=true#organize-your-data-in-subfolders-optional).
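If you prefer code over the portal steps that follow, a container SAS URL can also be generated with the `azure-storage-blob` library. The following is a minimal sketch; the account name, account key, and container name are placeholders.

```python
# A minimal sketch of generating a container SAS URL in code instead of
# the Azure portal, using azure-storage-blob; account name, key, and
# container name are placeholders.
from datetime import datetime, timedelta
from azure.storage.blob import ContainerSasPermissions, generate_container_sas

sas_token = generate_container_sas(
    account_name="<storage-account>",
    container_name="training-data",
    account_key="<account-key>",
    permission=ContainerSasPermissions(read=True, write=True, list=True),
    expiry=datetime.utcnow() + timedelta(hours=2),  # keep the window short
)

container_sas_url = (
    f"https://<storage-account>.blob.core.windows.net/training-data?{sas_token}"
)
print(container_sas_url)
```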
## Use the Azure portal
ai-services Build A Custom Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/how-to-guides/build-a-custom-model.md
Follow these tips to further optimize your data set for training.
## Upload your training data
-When you've put together the set of form documents for training, you need to upload it to an Azure blob storage container. If you don't know how to create an Azure storage account with a container, follow the [Azure Storage quickstart for Azure portal](../../../storage/blobs/storage-quickstart-blobs-portal.md). Use the standard performance tier.
+When you've put together the set of documents for training, you need to upload it to an Azure blob storage container. If you don't know how to create an Azure storage account with a container, follow the [Azure Storage quickstart for Azure portal](../../../storage/blobs/storage-quickstart-blobs-portal.md). Use the standard performance tier.
If you want to use manually labeled data, upload the *.labels.json* and *.ocr.json* files that correspond to your training documents. You can use the [Sample Labeling tool](../label-tool.md) (or your own UI) to generate these files.
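The following is a minimal sketch of uploading a local training folder (documents plus any *.labels.json* and *.ocr.json* files) with the `azure-storage-blob` library. The connection string, container name, and folder path are placeholders.

```python
# A minimal sketch of uploading a local training set to a blob container
# with azure-storage-blob; the connection string, container name, and
# local folder are placeholders.
from pathlib import Path
from azure.storage.blob import ContainerClient

container = ContainerClient.from_connection_string(
    "<storage-connection-string>", container_name="training-data"
)

for path in Path("./training-set").glob("*"):
    if path.is_file():
        with open(path, "rb") as data:
            # The blob name mirrors the local file name (container root).
            container.upload_blob(name=path.name, data=data, overwrite=True)
```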
ai-services Use Sdk Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/how-to-guides/use-sdk-rest-api.md
Choose from the following Document Intelligence models to analyze and extract da
> > * The [prebuilt-document](../concept-general-document.md) model extracts key-value pairs, tables, and selection marks from documents and can be used as an alternative to training a custom model without labels. >
-> * The [prebuilt-healthInsuranceCard.us](../concept-insurance-card.md) model extracts key information from US health insurance cards.
+> * The [prebuilt-healthInsuranceCard.us](../concept-health-insurance-card.md) model extracts key information from US health insurance cards.
> > * The [prebuilt-tax.us.w2](../concept-w2.md) model extracts information reported on US Internal Revenue Service (IRS) tax forms. >
ai-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/language-support.md
Azure AI Document Intelligence models support many languages. Our language suppo
Model | Description | | | | |:::image type="icon" source="medi#supported-languages-and-locales)| Extract business contact details.|
-|:::image type="icon" source="medi#supported-languages-and-locales)| Extract health insurance details.|
+|:::image type="icon" source="medi#supported-languages-and-locales)| Extract health insurance details.|
|:::image type="icon" source="medi#supported-document-types)| Extract identification and verification details.| |:::image type="icon" source="medi#supported-languages-and-locales)| Extract customer and vendor details.| |:::image type="icon" source="medi#supported-languages-and-locales)| Extract sales transaction details.|
ai-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/overview.md
Previously updated : 07/18/2023 Last updated : 09/05/2023 monikerRange: '<=doc-intel-3.1.0'
monikerRange: '<=doc-intel-3.1.0'
> [!NOTE] > Form Recognizer is now **Azure AI Document Intelligence**! >
-> As of July 2023, Azure AI services encompass all of what were previously known as Cognitive Services and Azure Applied AI Services. There are no changes to pricing. The names *Cognitive Services* and *Azure Applied AI* continue to be used in Azure billing, cost analysis, price list, and price APIs. There are no breaking changes to application programming interfaces (APIs) or SDKs.
+> * As of July 2023, Azure AI services encompass all of what were previously known as Cognitive Services and Azure Applied AI Services.
+> * There are no changes to pricing.
+> * The names *Cognitive Services* and *Azure Applied AI* continue to be used in Azure billing, cost analysis, price list, and price APIs.
+> * There are no breaking changes to application programming interfaces (APIs) or SDKs.
+> * Some platforms are still awaiting the renaming update. All mention of Form Recognizer or Document Intelligence in our documentation refers to the same Azure service.
::: moniker range=">=doc-intel-3.0.0" [!INCLUDE [applies to v3.1, v3.0, and v2.1](includes/applies-to-v3-1-v3-0-v2-1.md)]
Prebuilt models enable you to add intelligent document processing to your apps a
:::row::: :::column span=""::: :::image type="icon" source="media/overview/icon-insurance-card.png" link="#health-insurance-card":::</br>
- [**Insurance card**](#w-2) | Extract health insurance details.
- :::column-end:::
- :::column span="":::
- :::image type="icon" source="media/overview/icon-w2.png" link="#w-2":::</br>
- [**W2**](#w-2) | Extract taxable </br>compensation details.
+ [**Health Insurance card**](#health-insurance-card) | Extract health insurance details.
:::column-end::: :::column span=""::: :::image type="icon" source="media/overview/icon-business-card.png" link="#business-card":::</br>
Prebuilt models enable you to add intelligent document processing to your apps a
:::column-end::: :::row-end::: :::row:::
+ :::column span="":::
+ :::image type="icon" source="media/overview/icon-w2.png" link="#w-2":::</br>
+ [**W2**](#w-2) | Extract taxable </br>compensation details.
+ :::column-end:::
:::column span=""::: :::image type="icon" source="media/overview/icon-1098e.png" link="#us-tax-1098-e-form":::</br> [**US Tax 1098-E form**](#us-tax-1098-e-form) | Extract student loan interest details
You can use Document Intelligence to automate document processing in application
| About | Description |Automation use cases | Development options | |-|--|-|--|
-| [**Health insurance card**](concept-insurance-card.md)|&#9679; Extract key information from US health insurance cards.</br>&#9679; [Data and field extraction](concept-insurance-card.md#field-extraction)|&#9679; Coverage and eligibility verification. </br>&#9679; Predictive modeling.</br>&#9679; Value-based analytics.|&#9679; [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=healthInsuranceCard.us)</br>&#9679; [**REST API**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true&pivots=programming-language-rest-api#analyze-document-post-request)</br>&#9679; [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true#prebuilt-model)</br>&#9679; [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true#prebuilt-model)</br>&#9679; [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true#prebuilt-model)</br>&#9679; [**JavaScript**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true#prebuilt-model)
+| [**Health insurance card**](concept-health-insurance-card.md)|&#9679; Extract key information from US health insurance cards.</br>&#9679; [Data and field extraction](concept-health-insurance-card.md#field-extraction)|&#9679; Coverage and eligibility verification. </br>&#9679; Predictive modeling.</br>&#9679; Value-based analytics.|&#9679; [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=healthInsuranceCard.us)</br>&#9679; [**REST API**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true&pivots=programming-language-rest-api#analyze-document-post-request)</br>&#9679; [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true#prebuilt-model)</br>&#9679; [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true#prebuilt-model)</br>&#9679; [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true#prebuilt-model)</br>&#9679; [**JavaScript**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true#prebuilt-model)
> [!div class="nextstepaction"] > [Return to model types](#prebuilt-models)
ai-services Get Started Sdks Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/quickstarts/get-started-sdks-rest-api.md
Previously updated : 07/18/2023 Last updated : 08/15/2023 zone_pivot_groups: programming-languages-set-formre monikerRange: '<=doc-intel-3.1.0'
monikerRange: '<=doc-intel-3.1.0'
# Get started with Document Intelligence [!INCLUDE [applies to v3.1 and v3.0](../includes/applies-to-v3-1-v3-0.md)]+
+> [!IMPORTANT]
+>
+> * Azure Cognitive Services Form Recognizer is now Azure AI Document Intelligence.
+> * Some platforms are still awaiting the renaming update.
+> * All mention of Form Recognizer or Document Intelligence in our documentation refers to the same Azure service.
++
+* Get started with the latest GA version (v3.1) of Azure AI Document Intelligence.
+
+* Azure AI Document Intelligence is a cloud-based Azure AI service that uses machine learning to extract key-value pairs, text, tables and key data from your documents.
+
+* You can easily integrate Document Intelligence models into your workflows and applications by using an SDK in the programming language of your choice or calling the REST API.
+
+* For this quickstart, we recommend that you use the free service while you're learning the technology. Remember that the number of free pages is limited to 500 per month.
+
+To learn more about Document Intelligence features and development options, visit our [Overview](../overview.md) page.
+ ::: moniker-end
-Get started with the latest version of Azure AI Document Intelligence. Azure AI Document Intelligence is a cloud-based Azure AI service that uses machine learning to extract key-value pairs, text, tables and key data from your documents. You can easily integrate Document Intelligence models into your workflows and applications by using an SDK in the programming language of your choice or calling the REST API. For this quickstart, we recommend that you use the free service while you're learning the technology. Remember that the number of free pages is limited to 500 per month.
+Get started with Azure AI Document Intelligence GA version (3.0). Azure AI Document Intelligence is a cloud-based Azure AI service that uses machine learning to extract key-value pairs, text, tables and key data from your documents. You can easily integrate Document Intelligence models into your workflows and applications by using an SDK in the programming language of your choice or calling the REST API. For this quickstart, we recommend that you use the free service while you're learning the technology. Remember that the number of free pages is limited to 500 per month.
To learn more about Document Intelligence features and development options, visit our [Overview](../overview.md) page.
+> [!TIP]
+>
+> * For an enhanced experience and advanced model quality, try the [Document Intelligence v3.1 (GA) quickstart](?view=doc-intel-3.1.0&preserve-view=true#get-started-with-document-intelligence) and [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio) API version: 2023-07-31 (3.1 General Availability).
+ ::: moniker-end ::: zone pivot="programming-language-csharp"
To learn more about Document Intelligence features and development options, visi
::: zone-end ::: moniker range=">=doc-intel-3.0.0"+ That's it, congratulations!
-In this quickstart, you used a form Document Intelligence model to analyze various forms and documents. Next, explore the Document Intelligence Studio and reference documentation to learn about Document Intelligence API in depth.
+In this quickstart, you used a Document Intelligence model to analyze various forms and documents. Next, explore the Document Intelligence Studio and reference documentation to learn about the Document Intelligence API in depth.
## Next steps
To learn more about Document Intelligence features and development options, visi
::: zone pivot="programming-language-csharp" ::: moniker range="doc-intel-2.1.0" ::: moniker-end ::: zone-end
To learn more about Document Intelligence features and development options, visi
::: zone pivot="programming-language-java" ::: moniker range="doc-intel-2.1.0" ::: moniker-end ::: zone-end
To learn more about Document Intelligence features and development options, visi
::: zone pivot="programming-language-javascript" ::: moniker range="doc-intel-2.1.0" ::: moniker-end ::: zone-end
To learn more about Document Intelligence features and development options, visi
::: zone pivot="programming-language-python" ::: moniker range="doc-intel-2.1.0" ::: moniker-end ::: zone-end
To learn more about Document Intelligence features and development options, visi
::: zone pivot="programming-language-rest-api" ::: moniker range="doc-intel-2.1.0" ::: moniker-end ::: zone-end
That's it, congratulations! In this quickstart, you used Document Intelligence m
## Next steps
-* For an enhanced experience and advanced model quality, try the [Document Intelligence v3.0 Studio ](https://formrecognizer.appliedai.azure.com/studio).
+* For an enhanced experience and advanced model quality, try the [Document Intelligence v3.0 Studio](https://formrecognizer.appliedai.azure.com/studio).
* The v3.0 Studio supports any model trained with v2.1 labeled data.
ai-services Try Document Intelligence Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/quickstarts/try-document-intelligence-studio.md
CORS should now be configured to use the storage account from Document Intellige
:::image border="true" type="content" source="../media/sas-tokens/upload-blob-window.png" alt-text="Screenshot of upload blob window in the Azure portal."::: > [!NOTE]
-> By default, the Studio will use form documents that are located at the root of your container. However, you can use data organized in folders by specifying the folder path in the Custom form project creation steps. *See* [**Organize your data in subfolders**](../how-to-guides/build-a-custom-model.md?view=doc-intel-2.1.0&preserve-view=true#organize-your-data-in-subfolders-optional)
+> By default, the Studio will use documents that are located at the root of your container. However, you can use data organized in folders by specifying the folder path in the Custom form project creation steps. *See* [**Organize your data in subfolders**](../how-to-guides/build-a-custom-model.md?view=doc-intel-2.1.0&preserve-view=true#organize-your-data-in-subfolders-optional)
## Custom models
ai-services Try Sample Label Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/quickstarts/try-sample-label-tool.md
Title: "Quickstart: Label forms, train a model, and analyze forms using the Sample Labeling tool - Document Intelligence (formerly Form Recognizer)"
-description: In this quickstart, you'll learn to use the Document Intelligence Sample Labeling tool to manually label form documents. Then you'll train a custom document processing model with the labeled documents and use the model to extract key/value pairs.
+description: In this quickstart, you'll learn to use the Document Intelligence Sample Labeling tool to manually label documents. Then you'll train a custom document processing model with the labeled documents and use the model to extract key/value pairs.
ai-services Sdk Overview V3 0 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/sdk-overview-v3-0.md
Previously updated : 08/15/2023 Last updated : 09/05/2023
-monikerRange: '>=doc-intel-3.0.0'
+monikerRange: '<=doc-intel-3.1.0'
ai-services Sdk Overview V3 1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/sdk-overview-v3-1.md
Previously updated : 08/11/2023 Last updated : 09/05/2023
-monikerRange: '>=doc-intel-3.0.0'
+monikerRange: '<=doc-intel-3.1.0'
monikerRange: '>=doc-intel-3.0.0'
<!-- markdownlint-disable MD001 --> <!-- markdownlint-disable MD051 -->
-# Document Intelligence SDK v3.1 (GA)
+# Document Intelligence SDK v3.1 latest (GA)
-**The SDKs referenced in this article are supported by:** ![Document Intelligence checkmark](media/yes-icon.png) **Document Intelligence REST API version 2023-07-31 — v3.1 (GA)**.
+**The SDKs referenced in this article are supported by:** ![Document Intelligence checkmark](media/yes-icon.png) **Document Intelligence REST API version 2023-07-31—v3.1 (GA)**.
Azure AI Document Intelligence is a cloud service that uses machine learning to analyze text and structured data from documents. The Document Intelligence software development kit (SDK) is a set of libraries and tools that enable you to easily integrate Document Intelligence models and capabilities into your applications. Document Intelligence SDK is available across platforms in C#/.NET, Java, JavaScript, and Python programming languages.
Document Intelligence SDK supports the following languages and platforms:
| Language → Document Intelligence SDK version &emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;| Package| Supported API version &emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;| Platform support | |:-:|:-|:-| :-:|
-| [**.NET/C# → 4.1.0 → latest GA release </br>(2023-08-10)**](https://azuresdkdocs.blob.core.windows.net/$web/dotnet/Azure.AI.FormRecognizer/4.1.0/https://docsupdatetracker.net/index.html)|[NuGet](https://www.nuget.org/packages/Azure.AI.FormRecognizer/4.1.0)|[&bullet; 2023-07-31 (GA)](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-07-31/operations/AnalyzeDocument)</br> [&bullet; 2022-08-31 (GA)](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-07-31/operations/AnalyzeDocument)</br> [&bullet; v2.1](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync)</br>[&bullet; v2.0](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/AnalyzeLayoutAsync) |[Windows, macOS, Linux, Docker](https://dotnet.microsoft.com/download)|
-|[**Java → 4.1.0 → latest GA release</br>(2023-08-10)**](https://azuresdkdocs.blob.core.windows.net/$web/java/azure-ai-formrecognizer/4.1.0/https://docsupdatetracker.net/index.html) |[MVN repository](https://mvnrepository.com/artifact/com.azure/azure-ai-formrecognizer) |[&bullet; 2023-07-31 (GA)](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-07-31/operations/AnalyzeDocument)</br> [&bullet; 2022-08-31 (GA)](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-07-31/operations/AnalyzeDocument)</br> [&bullet; v2.1](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync)</br>[&bullet; v2.0](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/AnalyzeLayoutAsync) |[Windows, macOS, Linux](/java/openjdk/install)|
-|[**JavaScript → 5.0.0 → latest GA release</br> (2023-08-08)**](https://azuresdkdocs.blob.core.windows.net/$web/javascript/azure-ai-form-recognizer/5.0.0/https://docsupdatetracker.net/index.html)| [npm](https://www.npmjs.com/package/@azure/ai-form-recognizer)| [&bullet; 2023-07-31 (GA)](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-07-31/operations/AnalyzeDocument)</br> &bullet; [2022-08-31 (GA)](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-07-31/operations/AnalyzeDocument)</br> [&bullet; v2.1](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync)</br>[&bullet; v2.0](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/AnalyzeLayoutAsync) | [Browser, Windows, macOS, Linux](https://nodejs.org/en/download/) |
-|[**Python → 3.3.0 → latest GA release</br> (2023-08-08)**](https://azuresdkdocs.blob.core.windows.net/$web/python/azure-ai-formrecognizer/3.3.0/https://docsupdatetracker.net/index.html) | [PyPI](https://pypi.org/project/azure-ai-formrecognizer/3.3.0/)| [&bullet; 2023-07-31 (GA)](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-07-31/operations/AnalyzeDocument)</br> &bullet; [2022-08-31 (GA)](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-07-31/operations/AnalyzeDocument)</br> [&bullet; v2.1](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync)</br>[&bullet; v2.0](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/AnalyzeLayoutAsync) |[Windows, macOS, Linux](/azure/developer/python/configure-local-development-environment?tabs=windows%2Capt%2Ccmd#use-the-azure-cli)
+| [**.NET/C# → latest (GA)**](https://azuresdkdocs.blob.core.windows.net/$web/dotnet/Azure.AI.FormRecognizer/4.1.0/https://docsupdatetracker.net/index.html)|[NuGet](https://www.nuget.org/packages/Azure.AI.FormRecognizer/4.1.0)|[&bullet; 2023-07-31 (GA)](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-07-31/operations/AnalyzeDocument)</br> [&bullet; 2022-08-31 (GA)](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-07-31/operations/AnalyzeDocument)</br> [&bullet; v2.1](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync)</br>[&bullet; v2.0](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/AnalyzeLayoutAsync) |[Windows, macOS, Linux, Docker](https://dotnet.microsoft.com/download)|
+|[**Java → latest (GA)**](https://azuresdkdocs.blob.core.windows.net/$web/java/azure-ai-formrecognizer/4.1.0/https://docsupdatetracker.net/index.html) |[MVN repository](https://mvnrepository.com/artifact/com.azure/azure-ai-formrecognizer/4.1.0) |[&bullet; 2023-07-31 (GA)](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-07-31/operations/AnalyzeDocument)</br> [&bullet; 2022-08-31 (GA)](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-07-31/operations/AnalyzeDocument)</br> [&bullet; v2.1](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync)</br>[&bullet; v2.0](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/AnalyzeLayoutAsync) |[Windows, macOS, Linux](/java/openjdk/install)|
+|[**JavaScript → latest (GA)**](https://azuresdkdocs.blob.core.windows.net/$web/javascript/azure-ai-form-recognizer/5.0.0/https://docsupdatetracker.net/index.html)| [npm](https://www.npmjs.com/package/@azure/ai-form-recognizer)| [&bullet; 2023-07-31 (GA)](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-07-31/operations/AnalyzeDocument)</br> &bullet; [2022-08-31 (GA)](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-07-31/operations/AnalyzeDocument)</br> [&bullet; v2.1](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync)</br>[&bullet; v2.0](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/AnalyzeLayoutAsync) | [Browser, Windows, macOS, Linux](https://nodejs.org/en/download/) |
+|[**Python → latest (GA)**](https://azuresdkdocs.blob.core.windows.net/$web/python/azure-ai-formrecognizer/3.3.0/https://docsupdatetracker.net/index.html) | [PyPI](https://pypi.org/project/azure-ai-formrecognizer/3.3.0/)| [&bullet; 2023-07-31 (GA)](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-07-31/operations/AnalyzeDocument)</br> &bullet; [2022-08-31 (GA)](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-07-31/operations/AnalyzeDocument)</br> [&bullet; v2.1](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync)</br>[&bullet; v2.0](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/AnalyzeLayoutAsync) |[Windows, macOS, Linux](/azure/developer/python/configure-local-development-environment?tabs=windows%2Capt%2Ccmd#use-the-azure-cli)
## Supported Clients
The following tables present the correlation between each SDK version the suppor
| Language| SDK version | API version (default) &emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp; | Supported clients| | : | :--|:- | :--|
-|**.NET/C#**| 4.1.0 (GA)| v3.1 → 2023-07-31 (default)|**DocumentAnalysisClient**</br>**DocumentModelAdministrationClient** |
-|**.NET/C#**| 4.0.0 (GA)| v3.0 → 2022-08-31| **DocumentAnalysisClient**</br>**DocumentModelAdministrationClient** |
-|**.NET/C#**| 3.1.x | v2.1 | **FormRecognizerClient**</br>**FormTrainingClient** |
-|**.NET/C#**| 3.0.x| v2.0 | **FormRecognizerClient**</br>**FormTrainingClient** |
+|**.NET/C# 4.1.0**| v3.1 latest (GA)| 2023-07-31|**DocumentAnalysisClient**</br>**DocumentModelAdministrationClient** |
+|**.NET/C# 4.0.0**| v3.0 (GA)| 2022-08-31| **DocumentAnalysisClient**</br>**DocumentModelAdministrationClient** |
+|**.NET/C# 3.1.x**| v2.1 | v2.1 | **FormRecognizerClient**</br>**FormTrainingClient** |
+|**.NET/C# 3.0.x**| v2.0 | v2.0 | **FormRecognizerClient**</br>**FormTrainingClient** |
### [Java](#tab/java) | Language| SDK version | API version &emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp; | Supported clients| | : | :--|:- | :--|
-|**Java**| 4.1.0 (GA)| v3.1 → 2023-07-31 (default)|**DocumentAnalysisClient**</br>**DocumentModelAdministrationClient** |
-|**.NET/C#**</br> **Java**</br> **JavaScript**</br>| 4.0.0 (GA)| v3.0 → 2022-08-31| **DocumentAnalysisClient**</br>**DocumentModelAdministrationClient** |
-|**.NET/C#**</br> **Java**</br> **JavaScript**</br>| 3.1.x | v2.1 | **FormRecognizerClient**</br>**FormTrainingClient** |
-|**.NET/C#**</br> **Java**</br> **JavaScript**</br>| 3.0.x| v2.0 | **FormRecognizerClient**</br>**FormTrainingClient** |
+|**Java 4.1.0**| v3.1 latest (GA)| 2023-07-31 (default)|**DocumentAnalysisClient**</br>**DocumentModelAdministrationClient** |
+|**Java 4.0.0**</br>| v3.0 (GA)| 2022-08-31| **DocumentAnalysisClient**</br>**DocumentModelAdministrationClient** |
+|**Java 3.1.x**| v2.1 | v2.1 | **FormRecognizerClient**</br>**FormTrainingClient** |
+|**Java 3.0.x**| v2.0| v2.0 | **FormRecognizerClient**</br>**FormTrainingClient** |
### [JavaScript](#tab/javascript) | Language| SDK version | API version (default) &emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp; | Supported clients| | : | :--|:- | :--|
-|**JavaScript**| 5.0.0 (GA)| v3.1 → 2023-07-31 (default)|**DocumentAnalysisClient**</br>**DocumentModelAdministrationClient** |
-|**.NET/C#**</br> **Java**</br> **JavaScript**</br>| 4.0.0 (GA)| v3.0 → 2022-08-31| **DocumentAnalysisClient**</br>**DocumentModelAdministrationClient** |
-|**.NET/C#**</br> **Java**</br> **JavaScript**</br>| 3.1.x | v2.1 | **FormRecognizerClient**</br>**FormTrainingClient** |
-|**.NET/C#**</br> **Java**</br> **JavaScript**</br>| 3.0.x| v2.0 | **FormRecognizerClient**</br>**FormTrainingClient** |
+|**JavaScript 5.0.0**| v3.1 latest (GA)| 2023-07-31 (default)|**DocumentAnalysisClient**</br>**DocumentModelAdministrationClient** |
+|**JavaScript 4.0.0**</br>| v3.0 (GA)| 2022-08-31| **DocumentAnalysisClient**</br>**DocumentModelAdministrationClient** |
+|**JavaScript 3.1.x**</br>| v2.1 | v2.1 | **FormRecognizerClient**</br>**FormTrainingClient** |
+|**JavaScript 3.0.x**</br>| v2.0| v2.0 | **FormRecognizerClient**</br>**FormTrainingClient** |
### [Python](#tab/python) | Language| SDK version | API version (default) &emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp; | Supported clients| | : | :--|:- | :--|
-| **Python**| 3.3.0 (GA)| v3.1 → 2023-07-31 (default) | **DocumentAnalysisClient**</br>**DocumentModelAdministrationClient**|
-| **Python**| 3.2.x (GA) | v3.0 / 2022-08-31| **DocumentAnalysisClient**</br>**DocumentModelAdministrationClient**|
-| **Python**| 3.1.x | v2.1 | **FormRecognizerClient**</br>**FormTrainingClient** |
-| **Python** | 3.0.0 | v2.0 |**FormRecognizerClient**</br>**FormTrainingClient** |
+| **Python 3.3.0**| v3.1 latest (GA)| 2023-07-31 (default) | **DocumentAnalysisClient**</br>**DocumentModelAdministrationClient**|
+| **Python 3.2.x**| v3.0 (GA)| 2022-08-31| **DocumentAnalysisClient**</br>**DocumentModelAdministrationClient**|
+| **Python 3.1.x**| v2.1 | v2.1 | **FormRecognizerClient**</br>**FormTrainingClient** |
+| **Python 3.0.0** | v2.0 | v2.0 |**FormRecognizerClient**</br>**FormTrainingClient** |
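As a minimal sketch of the two v3.1 clients listed in the tables above, the following Python example creates both against the same resource (`azure-ai-formrecognizer` 3.3.0). The endpoint and key are placeholders, and the resource-details call at the end is shown only as a smoke test.

```python
# A minimal sketch of creating the two v3.1 clients from the tables above
# (azure-ai-formrecognizer 3.3.0); endpoint and key are placeholders.
from azure.ai.formrecognizer import (
    DocumentAnalysisClient,
    DocumentModelAdministrationClient,
)
from azure.core.credentials import AzureKeyCredential

endpoint = "https://<your-resource>.cognitiveservices.azure.com/"
credential = AzureKeyCredential("<your-key>")

# Analysis: run prebuilt or custom models against documents.
analysis_client = DocumentAnalysisClient(endpoint, credential)

# Administration: build, compose, list, and delete custom models.
admin_client = DocumentModelAdministrationClient(endpoint, credential)

info = admin_client.get_resource_details()
print("Custom models:", info.custom_document_models.count)
```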
ai-services Tutorial Azure Function https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/tutorial-azure-function.md
Next, you'll add your own code to the Python script to call the Document Intelli
The following code parses the returned Document Intelligence response, constructs a .csv file, and uploads it to the **output** container. > [!IMPORTANT]
- > You will likely need to edit this code to match the structure of your own form documents.
+ > You will likely need to edit this code to match the structure of your own documents.
```python # The code below extracts the json format into tabular data.
ai-services V3 1 Migration Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/v3-1-migration-guide.md
monikerRange: '<=doc-intel-3.1.0'
## Migrating from v3.1 preview API version
-Preview APIs are periodically deprecated. If you're using a preview API version, update your application to target the GA API version. To migrate from the 2023-02-28-preview API version to the `2023-07-31` (GA) API version using the SDK, update to the [current version of the language specific SDK](sdk-overview.md).
+Preview APIs are periodically deprecated. If you're using a preview API version, update your application to target the GA API version. To migrate from the 2023-02-28-preview API version to the `2023-07-31` (GA) API version using the SDK, update to the [current version of the language specific SDK](sdk-overview-v3-1.md).
The `2023-07-31` (GA) API has a few updates and changes from the preview API version:
Compared with v3.0, Document Intelligence v3.1 introduces several new features a
* [Custom classification model](concept-custom-classifier.md) for document splitting and classification. * Language expansion and new fields support in [Invoice](concept-invoice.md) and [Receipt](concept-receipt.md) model. * New document type support in [ID document](concept-id-document.md) model.
-* New prebuilt [Health insurance card](concept-insurance-card.md) model.
+* New prebuilt [Health insurance card](concept-health-insurance-card.md) model.
* Office/HTML files are supported in prebuilt-read model, extracting words and paragraphs without bounding boxes. Embedded images are no longer supported. If add-on features are requested for Office/HTML files, an empty array is returned without errors. * Model expiration for custom extraction and classification models - Our new custom models build upon on a large base model that we update periodically for quality improvement. An expiration date is introduced to all custom models to enable the retirement of the corresponding base models. Once a custom model expires, you need to retrain the model using the latest API version (base model).
ai-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/whats-new.md
Document Intelligence service is updated on an ongoing basis. Bookmark this page
## July 2023 > [!NOTE]
-> Form Recognizer is now Azure AI Document Intelligence!
+> Form Recognizer is now **Azure AI Document Intelligence**!
>
-> As of July 2023, Azure AI services encompass all of what were previously known as Cognitive Services and Azure Applied AI Services. There are no changes to pricing. The names _Cognitive Services_ and _Azure Applied AI_ continue to be used in Azure billing, cost analysis, price list, and price APIs. There are no breaking changes to application programming interfaces (APIs) or SDKs.
+> * As of July 2023, Azure AI services encompass all of what were previously known as Cognitive Services and Azure Applied AI Services.
+> * There are no changes to pricing.
+> * The names *Cognitive Services* and *Azure Applied AI* continue to be used in Azure billing, cost analysis, price list, and price APIs.
+> * There are no breaking changes to application programming interfaces (APIs) or SDKs.
+> * Some platforms are still awaiting the renaming update. All mention of Form Recognizer or Document Intelligence in our documentation refers to the same Azure service.
**Document Intelligence v3.1 (GA)**
The v3.1 API introduces new and updated capabilities:
* Support for [high resolution documents](concept-add-on-capabilities.md) * Custom neural models now require a single labeled sample to train * Custom neural models language expansion. Train a neural model for documents in 30 languages. See [language support](language-support.md) for the complete list of supported languages
-* 🆕 [Prebuilt health insurance card model](concept-insurance-card.md).
+* 🆕 [Prebuilt health insurance card model](concept-health-insurance-card.md).
* [Prebuilt invoice model locale expansion](concept-invoice.md#supported-languages-and-locales). * [Prebuilt receipt model language and locale expansion](concept-receipt.md#supported-languages-and-locales) with more than 100 languages supported. * [Prebuilt ID model](concept-id-document.md#supported-document-types) now supports European IDs.
The v3.1 API introduces new and updated capabilities:
## March 2023 > [!IMPORTANT]
-> [**`2023-07-31`**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-07-31/operations/AnalyzeDocument) capabilities are currently only available in the following regions:
+> [**`2023-02-28-preview`**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-02-28-preview/operations/AnalyzeDocument) capabilities are currently only available in the following regions:
> > * West Europe > * West US2
The v3.1 API introduces new and updated capabilities:
* Document Intelligence SDK version `4.0.0 GA` release * **Document Intelligence SDKs version 4.0.0 (.NET/C#, Java, JavaScript) and version 3.2.0 (Python) are generally available and ready for use in production applications!**
- * For more information on Document Intelligence SDKs, see the [**SDK overview**](sdk-overview.md).
+ * For more information on Document Intelligence SDKs, see the [**SDK overview**](sdk-overview-v3-1.md).
* Update your applications using your programming language's **migration guide**.
The v3.1 API introduces new and updated capabilities:
* New option `pages` supported by all document intelligence methods (custom forms and all prebuilt models). The argument allows you to select individual or a range of pages for multi-page PDF and TIFF documents. For individual pages, enter the page number, for example, `3`. For a range of pages (like page 2 and pages 5-7) enter the page numbers and ranges separated by commas: `2, 5-7`.
-* Added support for a [ReadingOrder](/javascript/api/@azure/ai-form-recognizer/formreadingorder?view=azure-node-latest&preserve-view=true) type to the content recognition methods. This option enables you to control the algorithm that the service uses to determine how recognized lines of text should be ordered. You can specify which reading order algorithm—`basic` or `natural`—should be applied to order the extraction of text elements. If not specified, the default value is `basic`.
+* Added support for a [ReadingOrder](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/formrecognizer/ai-form-recognizer/CHANGELOG.md#310-2021-05-26) type to the content recognition methods. This option enables you to control the algorithm that the service uses to determine how recognized lines of text should be ordered. You can specify which reading order algorithm—`basic` or `natural`—should be applied to order the extraction of text elements. If not specified, the default value is `basic` (see the sketch after this list).
* Split **FormField** type into several different interfaces. This update shouldn't cause any API compatibility issues except in certain edge cases (undefined valueType).
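As a minimal sketch of the reading-order option, the following example uses the v2.1-era Python SDK (`azure-ai-formrecognizer` 3.1.x), where the equivalent setting is assumed to be the `reading_order` keyword on `begin_recognize_content`. The endpoint, key, and file name are placeholders.

```python
# A minimal sketch of the reading-order option as exposed by the v2.1-era
# Python SDK (azure-ai-formrecognizer 3.1.x); endpoint, key, and file
# name are placeholders.
from azure.ai.formrecognizer import FormRecognizerClient
from azure.core.credentials import AzureKeyCredential

client = FormRecognizerClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com/",
    credential=AzureKeyCredential("<your-key>"),
)

with open("multi-column.pdf", "rb") as f:
    # "natural" orders lines the way a person would read them;
    # the default is "basic".
    poller = client.begin_recognize_content(f, reading_order="natural")

for page in poller.result():
    for line in page.lines:
        print(line.text)
```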
This release introduces the Document Intelligence 2.0. In the next sections, you
* Complete a [Document Intelligence quickstart](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-2.1.0&preserve-view=true) and get started creating a document processing app in the development language of your choice.
ai-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/immersive-reader/language-support.md
This article lists supported human languages for Immersive Reader features.
| French (France) | fr-FR | | French (Switzerland) | fr-CH | | Galician | gl |
-| Galician (Spain) | gl-ES |
+| Galician | gl-ES |
| Georgian | ka | | Georgian (Georgia) | ka-GE | | German | de |
ai-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/concepts/language-support.md
Use this article to learn about the languages currently supported by different f
| Norwegian (Bokmal) | `nb` | &check; | &check; | &check; | | &check; | &check; | &check; | | &check; | | &check; | | | | | | | Norwegian | `no` | | | | | &check; | | | | | | | &check; | &check; | | | | | Norwegian Nynorsk | `nn` | | | | | &check; | | | | | | | | | | | |
-| Oriya | `or` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | &check; | | | |
+| Odia | `or` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | &check; | | | |
| Oromo | `om` | | | | | | &check; | | | | | | &check; | &check; | | | |
| Pashto | `ps` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | &check; | | | |
-| Persian (Farsi) | `fa` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | &check; | | | |
+| Persian | `fa` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | &check; | | | |
| Polish | `pl` | &check; | &check; | &check; | | &check; | &check; | &check; | | &check; | | &check; | &check; | &check; | | | |
| Portuguese (Brazil) | `pt-br` | &check; | &check; | &check; | | &check; | &check; | &check; | &check; | &check; | | &check; | &check; | &check; | &check; | &check; | |
| Portuguese (Portugal) | `pt-pt` | &check; | &check; | &check; | | &check; | &check; | &check; | | &check; | | | &check; | &check; | | &check; | |
ai-services Model Lifecycle https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/concepts/model-lifecycle.md
Language service features utilize AI models. We update the language service with
## Prebuilt features

Our standard (not customized) language service features are built on AI models that we call pre-trained models. We regularly update the language service with new model versions to improve model accuracy, support, and quality.
Preview models used for preview features do not maintain a minimum retirement pe
By default, API and SDK requests will use the latest Generally Available model. You can use an optional parameter to select the version of the model to be used (not recommended), as shown in the sketch after the following note.

> [!NOTE]
-> * If you are using an model version that is not listed in the table, then it was subjected to the expiration policy.
+> * If you are using a model version that is not listed in the table, then it was subjected to the expiration policy.
> * Abstractive document and conversation summarization do not provide model versions other than the latest available.
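To illustrate the optional version parameter mentioned above, here's a minimal sketch using the Python `azure-ai-textanalytics` client; the endpoint, key, and pinned version string are placeholders, and `model_version` is the relevant keyword argument.

```python
from azure.core.credentials import AzureKeyCredential
from azure.ai.textanalytics import TextAnalyticsClient

# Placeholders: substitute your Language resource endpoint and key.
client = TextAnalyticsClient(
    endpoint="https://<your-language-resource>.cognitiveservices.azure.com/",
    credential=AzureKeyCredential("<api-key>"),
)

documents = ["The rooms were beautiful and the staff was helpful."]

# Omitting model_version uses the latest GA model (recommended). Pinning an
# explicit version (for example, "2022-11-01") is possible, but pinned
# versions are subject to the expiration policy described above.
result = client.analyze_sentiment(documents, model_version="2022-11-01")
print(result[0].sentiment)
```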
Use the table below to find which model versions are supported by each feature:
### Expiration timeline
-As new training configs and new functionality become available; older and less accurate configs are retired, see the following timelines for configs expiration:
+For custom features, there are two key parts of the AI implementation: training and deployment. New configurations are released regularly with ongoing AI improvements, so older and less accurate configurations are retired.
+
+Use the table below to find which model versions are supported by each feature:
-New configs are being released every few months. So, training configs expiration of any publicly available config is **six months** after its release. If you've assigned a trained model to a deployment, this deployment expires after **twelve months** from the training config expiration. If your models are about to expire, you can retrain and redeploy your models with the latest training configuration version.
+| Feature | Supported Training Config Versions | Training Config Expiration | Deployment Expiration |
+||--|||
+| Conversational language understanding | `2022-05-01` | October 28, 2022 | October 28, 2023 |
+| Conversational language understanding | `2022-09-01` (latest)** | February 28, 2024 | February 27, 2025 |
+| Orchestration workflow | `2022-09-01` (latest)** | April 30, 2024 | April 30, 2025 |
+| Custom named entity recognition | `2022-05-01` (latest)** | April 30, 2024 | April 30, 2025 |
+| Custom text classification | `2022-05-01` (latest)** | April 30, 2024 | April 30, 2025 |
-After training config version expires, API calls will return an error when called or used if called with an expired config version. By default, training requests use the latest available training config version. To change the config version, use `trainingConfigVersion` when submitting a training job and assign the version you want.
+** *For latest training configuration versions, the posted expiration dates are subject to availability of a newer model version. If no newer model versions are available, the expiration date may be extended.*
-> [!Tip]
-> It's recommended to use the latest supported config version
+Training configurations are typically available for **six months** after their release. If you've assigned a trained configuration to a deployment, the deployment expires **twelve months** after the training config expiration. If your models are about to expire, you can retrain and redeploy your models with the latest training configuration version.
-You can train and deploy a custom AI model from the date of training config version release, up until the **Training config expiration** date. After this date, you'll have to use another supported training config version for submitting any training or deployment jobs.
+> [!TIP]
+> It's recommended to use the latest supported configuration version.
-Deployment expiration is when your deployed model will be unavailable to be used for prediction.
+After the **training config expiration** date, you'll have to use another supported training configuration version to submit any training or deployment jobs. After the **deployment expiration** date, your deployed model will be unavailable for prediction.
-Use the table below to find which model versions are supported by each feature:
+After a training config version expires, API calls that use that configuration version will return an error. By default, training requests use the latest available training configuration version. To change the configuration version, use the `trainingConfigVersion` parameter when submitting a training job and assign the version you want, as in the sketch below.
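As an illustration of passing `trainingConfigVersion`, here's a hedged Python sketch against the conversational language understanding authoring REST API; the endpoint, key, project name, API version, and body field names are assumptions to verify against the current REST reference.

```python
import requests

endpoint = "https://<your-language-resource>.cognitiveservices.azure.com"  # placeholder
project = "<project-name>"  # placeholder

# Submit a training job pinned to a specific training config version.
response = requests.post(
    f"{endpoint}/language/authoring/analyze-conversations/projects/{project}/:train",
    params={"api-version": "2023-04-01"},
    headers={"Ocp-Apim-Subscription-Key": "<api-key>"},
    json={
        "modelLabel": "model-v1",
        "trainingMode": "standard",
        "trainingConfigVersion": "2022-09-01",  # a supported version from the table above
    },
)
response.raise_for_status()
# Training runs asynchronously; the job status URL is returned in a header.
print(response.headers.get("operation-location"))
```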
-| Feature | Supported Training config versions | Training config expiration | Deployment expiration |
-||--|||
-| Custom text classification | `2022-05-01` | `2023-05-01` | `2024-04-30` |
-| Conversational language understanding | `2022-05-01` | `2022-10-28` | `2023-10-28` |
-| Conversational language understanding | `2022-09-01` | `2023-02-28` | `2024-02-28` |
-| Custom named entity recognition | `2022-05-01` | `2023-05-01` | `2024-04-30` |
-| Orchestration workflow | `2022-05-01` | `2023-05-01` | `2024-04-30` |
## API versions
ai-services Regional Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/concepts/regional-support.md
+
+ Title: Regional support for Azure AI Language
+
+description: Learn which Azure regions are supported by the Language service.
+ Last updated : 08/23/2023
+# Language service supported regions
+
+The Language service is available for use in several Azure regions. Use this article to learn about the regional support and limitations.
+
+## Region support overview
+
+Typically, you can refer to the [region support](https://azure.microsoft.com/explore/global-infrastructure/products-by-region/?products=cognitive-services) page for details. Most Language service capabilities are available in all supported regions; some capabilities, however, are only available in the select regions listed below.
+
+> [!NOTE]
+> Language service doesn't store or process customer data outside the region you deploy the service instance in.
+
+## Conversational language understanding and orchestration workflow
+
+Conversational language understanding and orchestration workflow are only available in some Azure regions. Some regions are available for **both authoring and prediction**, while other regions are **prediction only**. Language resources in authoring regions allow you to create, edit, train, and deploy your projects. Language resources in prediction regions allow you to get [predictions from a deployment](./custom-features/multi-region-deployment.md).
+
+| Region | Authoring | Prediction |
+|--|--|-|
+| Australia East | ✓ | ✓ |
+| Brazil South | | ✓ |
+| Canada Central | | ✓ |
+| Central India | ✓ | ✓ |
+| Central US | | ✓ |
+| China East 2 | ✓ | ✓ |
+| China North 2 | | ✓ |
+| East Asia | | ✓ |
+| East US | ✓ | ✓ |
+| East US 2 | ✓ | ✓ |
+| France Central | | ✓ |
+| Japan East | | ✓ |
+| Japan West | | ✓ |
+| Jio India West | | ✓ |
+| Korea Central | | ✓ |
+| North Central US | | ✓ |
+| North Europe | ✓ | ✓ |
+| Norway East | | ✓ |
+| Qatar Central | | ✓ |
+| South Africa North | | ✓ |
+| South Central US | ✓ | ✓ |
+| Southeast Asia | | ✓ |
+| Sweden Central | | ✓ |
+| Switzerland North | ✓ | ✓ |
+| UAE North | | ✓ |
+| UK South | ✓ | ✓ |
+| West Central US | | ✓ |
+| West Europe | ✓ | ✓ |
+| West US | | ✓ |
+| West US 2 | ✓ | ✓ |
+| West US 3 | ✓ | ✓ |
++
+## Custom sentiment analysis
+
+Custom sentiment analysis is only available in some Azure regions. Some regions are available for **both authoring and prediction**, while other regions are **prediction only**. Language resources in authoring regions allow you to create, edit, train, and deploy your projects. Language resources in prediction regions allow you to get [predictions from a deployment](./custom-features/multi-region-deployment.md).
+
+| Region | Authoring | Prediction |
+|--|--|-|
+| Australia East | ✓ | ✓ |
+| Brazil South | | ✓ |
+| Canada Central | | ✓ |
+| Central India | ✓ | ✓ |
+| Central US | | ✓ |
+| China East 2 | ✓ | ✓ |
+| China North 2 | | ✓ |
+| East Asia | | ✓ |
+| East US | ✓ | ✓ |
+| East US 2 | ✓ | ✓ |
+| France Central | | ✓ |
+| Japan East | | ✓ |
+| Japan West | | ✓ |
+| Jio India West | | ✓ |
+| Korea Central | | ✓ |
+| North Central US | | ✓ |
+| North Europe | ✓ | ✓ |
+| Norway East | | ✓ |
+| Qatar Central | | ✓ |
+| South Africa North | | ✓ |
+| South Central US | ✓ | ✓ |
+| Southeast Asia | | ✓ |
+| Sweden Central | | ✓ |
+| Switzerland North | ✓ | ✓ |
+| UAE North | | ✓ |
+| UK South | ✓ | ✓ |
+| West Central US | | ✓ |
+| West Europe | ✓ | ✓ |
+| West US | | ✓ |
+| West US 2 | ✓ | ✓ |
+| West US 3 | ✓ | ✓ |
+
+## Custom named entity recognition
+
+Custom named entity recognition is only available in some Azure regions. Some regions are available for **both authoring and prediction**, while other regions are **prediction only**. Language resources in authoring regions allow you to create, edit, train, and deploy your projects. Language resources in prediction regions allow you to get [predictions from a deployment](./custom-features/multi-region-deployment.md).
+
+| Region | Authoring | Prediction |
+|--|--|-|
+| Australia East | ✓ | ✓ |
+| Brazil South | | ✓ |
+| Canada Central | | ✓ |
+| Central India | ✓ | ✓ |
+| Central US | | ✓ |
+| East Asia | | ✓ |
+| East US | ✓ | ✓ |
+| East US 2 | ✓ | ✓ |
+| France Central | | ✓ |
+| Japan East | | ✓ |
+| Japan West | | ✓ |
+| Jio India West | | ✓ |
+| Korea Central | | ✓ |
+| North Central US | | ✓ |
+| North Europe | ✓ | ✓ |
+| Norway East | | ✓ |
+| Qatar Central | | ✓ |
+| South Africa North | | ✓ |
+| South Central US | ✓ | ✓ |
+| Southeast Asia | | ✓ |
+| Sweden Central | | ✓ |
+| Switzerland North | ✓ | ✓ |
+| UAE North | | ✓ |
+| UK South | ✓ | ✓ |
+| West Central US | | ✓ |
+| West Europe | ✓ | ✓ |
+| West US | | ✓ |
+| West US 2 | ✓ | ✓ |
+| West US 3 | ✓ | ✓ |
++
+## Custom text classification
+
+Custom text classification is only available in some Azure regions. Some regions are available for **both authoring and prediction**, while other regions are **prediction only**. Language resources in authoring regions allow you to create, edit, train, and deploy your projects. Language resources in prediction regions allow you to get [predictions from a deployment](./custom-features/multi-region-deployment.md).
+
+| Region | Authoring | Prediction |
+|--|--|-|
+| Australia East | ✓ | ✓ |
+| Brazil South | | ✓ |
+| Canada Central | | ✓ |
+| Central India | ✓ | ✓ |
+| Central US | | ✓ |
+| East Asia | | ✓ |
+| East US | ✓ | ✓ |
+| East US 2 | ✓ | ✓ |
+| France Central | | ✓ |
+| Japan East | | ✓ |
+| Japan West | | ✓ |
+| Jio India West | | ✓ |
+| Korea Central | | ✓ |
+| North Central US | | ✓ |
+| North Europe | ✓ | ✓ |
+| Norway East | | ✓ |
+| Qatar Central | | ✓ |
+| South Africa North | | ✓ |
+| South Central US | ✓ | ✓ |
+| Southeast Asia | | ✓ |
+| Sweden Central | | ✓ |
+| Switzerland North | ✓ | ✓ |
+| UAE North | | ✓ |
+| UK South | ✓ | ✓ |
+| West Central US | | ✓ |
+| West Europe | ✓ | ✓ |
+| West US | | ✓ |
+| West US 2 | ✓ | ✓ |
+| West US 3 | ✓ | ✓ |
+
+## Summarization
+
+|Region|Document abstractive summarization|Conversation issue and resolution summarization|Conversation narrative summarization with chapters|Custom summarization|
+||||||-|
+|North Europe| ✓ | ✓ | ✓ | |
+|East US| ✓ | ✓ | ✓ | ✓ |
+|UK South| ✓ | ✓ | ✓ | |
+|Southeast Asia| ✓ | ✓ | ✓ | |
++
+## Custom Text Analytics for health
+
+Custom Text Analytics for health is only available in some Azure regions since it is a preview service. Some regions may be available for **both authoring and prediction**, while other regions may be **prediction only**. Language resources in authoring regions allow you to create, edit, train, and deploy your projects. Language resources in prediction regions allow you to get predictions from a deployment.
+
+| Region | Authoring | Prediction |
+|--|--|-|
+| East US | ✓ | ✓ |
+| UK South | ✓ | ✓ |
+| North Europe | ✓ | ✓ |
+
+### Next steps
+
+* [Language support](./language-support.md)
+* [Quotas and limits](./data-limits.md)
ai-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/conversational-language-understanding/language-support.md
Conversational language understanding supports utterances in the following langu
| Spanish | `es` |
| Estonian | `et` |
| Basque | `eu` |
-| Persian (Farsi) | `fa` |
+| Persian | `fa` |
| Finnish | `fi` |
| French | `fr` |
| Western Frisian | `fy` |
Conversational language understanding supports utterances in the following langu
| Nepali | `ne` |
| Dutch | `nl` |
| Norwegian (Bokmal) | `nb` |
-| Oriya | `or` |
+| Odia | `or` |
| Punjabi | `pa` |
| Polish | `pl` |
| Pashto | `ps` |
ai-services Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/conversational-language-understanding/service-limits.md
Previously updated : 10/12/2022 Last updated : 08/23/2023
See [pricing](https://azure.microsoft.com/pricing/details/cognitive-services/lan
### Regional availability
-Conversational language understanding is only available in some Azure regions. Some regions are available for **both authoring and prediction**, while other regions are **prediction only**. Language resources in authoring regions allow you to create, edit, train, and deploy your projects. Language resources in prediction regions allow you to get [predictions from a deployment](../concepts/custom-features/multi-region-deployment.md).
-
-| Region | Authoring | Prediction |
-|--|--|-|
-| Australia East | ✓ | ✓ |
-| Brazil South | | ✓ |
-| Canada Central | | ✓ |
-| Central India | ✓ | ✓ |
-| Central US | | ✓ |
-| China East 2 | ✓ | ✓ |
-| China North 2 | | ✓ |
-| East Asia | | ✓ |
-| East US | ✓ | ✓ |
-| East US 2 | ✓ | ✓ |
-| France Central | | ✓ |
-| Japan East | | ✓ |
-| Japan West | | ✓ |
-| Jio India West | | ✓ |
-| Korea Central | | ✓ |
-| North Central US | | ✓ |
-| North Europe | ✓ | ✓ |
-| Norway East | | ✓ |
-| Qatar Central | | ✓ |
-| South Africa North | | ✓ |
-| South Central US | ✓ | ✓ |
-| Southeast Asia | | ✓ |
-| Sweden Central | | ✓ |
-| Switzerland North | ✓ | ✓ |
-| UAE North | | ✓ |
-| UK South | ✓ | ✓ |
-| West Central US | | ✓ |
-| West Europe | ✓ | ✓ |
-| West US | | ✓ |
-| West US 2 | ✓ | ✓ |
-| West US 3 | ✓ | ✓ |
+See [Language service regional availability](../concepts/regional-support.md#conversational-language-understanding-and-orchestration-workflow).
## API limits
ai-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/custom-named-entity-recognition/language-support.md
Custom NER supports `.txt` files in the following languages:
| Spanish | `es` |
| Estonian | `et` |
| Basque | `eu` |
-| Persian (Farsi) | `fa` |
+| Persian | `fa` |
| Finnish | `fi` |
| French | `fr` |
| Western Frisian | `fy` |
Custom NER supports `.txt` files in the following languages:
| Nepali | `ne` |
| Dutch | `nl` |
| Norwegian (Bokmal) | `nb` |
-| Oriya | `or` |
+| Odia | `or` |
| Punjabi | `pa` |
| Polish | `pl` |
| Pashto | `ps` |
ai-services Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/custom-named-entity-recognition/service-limits.md
Previously updated : 08/09/2023 Last updated : 08/23/2023
Use this article to learn about the data and service limits when using custom NE
## Regional availability
-Custom named entity recognition is only available in some Azure regions. Some regions are available for **both authoring and prediction**, while other regions are **prediction only**. Language resources in authoring regions allow you to create, edit, train, and deploy your projects. Language resources in prediction regions allow you to get [predictions from a deployment](../concepts/custom-features/multi-region-deployment.md).
-
-| Region | Authoring | Prediction |
-|--|--|-|
-| Australia East | ✓ | ✓ |
-| Brazil South | | ✓ |
-| Canada Central | | ✓ |
-| Central India | ✓ | ✓ |
-| Central US | | ✓ |
-| East Asia | | ✓ |
-| East US | ✓ | ✓ |
-| East US 2 | ✓ | ✓ |
-| France Central | | ✓ |
-| Japan East | | ✓ |
-| Japan West | | ✓ |
-| Jio India West | | ✓ |
-| Korea Central | | ✓ |
-| North Central US | | ✓ |
-| North Europe | ✓ | ✓ |
-| Norway East | | ✓ |
-| Qatar Central | | ✓ |
-| South Africa North | | ✓ |
-| South Central US | ✓ | ✓ |
-| Southeast Asia | | ✓ |
-| Sweden Central | | ✓ |
-| Switzerland North | ✓ | ✓ |
-| UAE North | | ✓ |
-| UK South | ✓ | ✓ |
-| West Central US | | ✓ |
-| West Europe | ✓ | ✓ |
-| West US | | ✓ |
-| West US 2 | ✓ | ✓ |
-| West US 3 | ✓ | ✓ |
-
+See [Language service regional availability](../concepts/regional-support.md#custom-named-entity-recognition).
## API limits
ai-services Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/custom-text-analytics-for-health/reference/service-limits.md
Previously updated : 04/14/2023 Last updated : 08/23/2023
Use this article to learn about the data and service limits when using custom Te
## Regional availability
-Custom Text Analytics for health is only available in some Azure regions since it is a preview service. Some regions may be available for **both authoring and prediction**, while other regions may be for **prediction only**. Language resources in authoring regions allow you to create, edit, train, and deploy your projects. Language resources in prediction regions allow you to get predictions from a deployment.
-
-| Region | Authoring | Prediction |
-|--|--|-|
-| East US | ✓ | ✓ |
-| UK South | ✓ | ✓ |
-| North Europe | ✓ | ✓ |
+See [Language service regional availability](../../concepts/regional-support.md#custom-text-analytics-for-health).
## API limits
ai-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/custom-text-classification/language-support.md
Custom text classification supports `.txt` files in the following languages:
| Spanish | `es` |
| Estonian | `et` |
| Basque | `eu` |
-| Persian (Farsi) | `fa` |
+| Persian | `fa` |
| Finnish | `fi` |
| French | `fr` |
| Western Frisian | `fy` |
Custom text classification supports `.txt` files in the following languages:
| Nepali | `ne` |
| Dutch | `nl` |
| Norwegian (Bokmal) | `nb` |
-| Oriya | `or` |
+| Odia | `or` |
| Punjabi | `pa` |
| Polish | `pl` |
| Pashto | `ps` |
ai-services Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/custom-text-classification/service-limits.md
description: Learn about the data and rate limits when using custom text classif
Previously updated : 08/09/2023 Last updated : 08/23/2023
See [pricing](https://azure.microsoft.com/pricing/details/cognitive-services/lan
## Regional availability
-Custom text classification is only available in some Azure regions. Some regions are available for **both authoring and prediction**, while other regions are **prediction only**. Language resources in authoring regions allow you to create, edit, train, and deploy your projects. Language resources in prediction regions allow you to get [predictions from a deployment](../concepts/custom-features/multi-region-deployment.md).
-
-| Region | Authoring | Prediction |
-|--|--|-|
-| Australia East | ✓ | ✓ |
-| Brazil South | | ✓ |
-| Canada Central | | ✓ |
-| Central India | ✓ | ✓ |
-| Central US | | ✓ |
-| East Asia | | ✓ |
-| East US | ✓ | ✓ |
-| East US 2 | ✓ | ✓ |
-| France Central | | ✓ |
-| Japan East | | ✓ |
-| Japan West | | ✓ |
-| Jio India West | | ✓ |
-| Korea Central | | ✓ |
-| North Central US | | ✓ |
-| North Europe | ✓ | ✓ |
-| Norway East | | ✓ |
-| Qatar Central | | ✓ |
-| South Africa North | | ✓ |
-| South Central US | ✓ | ✓ |
-| Southeast Asia | | ✓ |
-| Sweden Central | | ✓ |
-| Switzerland North | ✓ | ✓ |
-| UAE North | | ✓ |
-| UK South | ✓ | ✓ |
-| West Central US | | ✓ |
-| West Europe | ✓ | ✓ |
-| West US | | ✓ |
-| West US 2 | ✓ | ✓ |
-| West US 3 | ✓ | ✓ |
+See [Language service regional availability](../concepts/regional-support.md#custom-text-classification).
## API limits
ai-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/key-phrase-extraction/language-support.md
Total supported language codes: 94
| Mongolian            |     `mn`      |                2022-10-01                 |                    |
| Nepali            |     `ne`      |                2022-10-01                 |                    |
| Norwegian (Bokmål)    |     `no`      |                2020-07-01                 | `nb` also accepted |
-| Oriya            |     `or`      |                2022-10-01                 |                    |
+| Odia            |     `or`      |                2022-10-01                 |                    |
| Oromo            |     `om`      |                2022-10-01                 |                    |
| Pashto            |     `ps`      |                2022-10-01                 |                    |
-| Persian (Farsi)       |     `fa`      |                2022-10-01                 |                    |
+| Persian       |     `fa`      |                2022-10-01                 |                    |
| Polish                |     `pl`      |                2019-10-01                 |                    |
| Portuguese (Brazil)   |    `pt-BR`    |                2019-10-01                 |                    |
| Portuguese (Portugal) |    `pt-PT`    |                2019-10-01                 | `pt` also accepted |
ai-services Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/orchestration-workflow/service-limits.md
Previously updated : 10/12/2022 Last updated : 08/23/2023
See [pricing](https://azure.microsoft.com/pricing/details/cognitive-services/lan
## Regional availability
-Orchestration workflow is only available in some Azure regions. Some regions are available for **both authoring and prediction**, while other regions are **prediction only**. Language resources in authoring regions allow you to create, edit, train, and deploy your projects. Language resources in prediction regions allow you to get [predictions from a deployment](../concepts/custom-features/multi-region-deployment.md).
-
-| Region | Authoring | Prediction |
-|--|--|-|
-| Australia East | ✓ | ✓ |
-| Brazil South | | ✓ |
-| Canada Central | | ✓ |
-| Central India | ✓ | ✓ |
-| Central US | | ✓ |
-| China East 2 | ✓ | ✓ |
-| China North 2 | | ✓ |
-| East Asia | | ✓ |
-| East US | ✓ | ✓ |
-| East US 2 | ✓ | ✓ |
-| France Central | | ✓ |
-| Japan East | | ✓ |
-| Japan West | | ✓ |
-| Jio India West | | ✓ |
-| Korea Central | | ✓ |
-| North Central US | | ✓ |
-| North Europe | ✓ | ✓ |
-| Norway East | | ✓ |
-| Qatar Central | | ✓ |
-| South Africa North | | ✓ |
-| South Central US | ✓ | ✓ |
-| Southeast Asia | | ✓ |
-| Sweden Central | | ✓ |
-| Switzerland North | ✓ | ✓ |
-| UAE North | | ✓ |
-| UK South | ✓ | ✓ |
-| West Central US | | ✓ |
-| West Europe | ✓ | ✓ |
-| West US | | ✓ |
-| West US 2 | ✓ | ✓ |
-| West US 3 | ✓ | ✓ |
+See [Language service regional availability](../concepts/regional-support.md#conversational-language-understanding-and-orchestration-workflow).
## API limits
ai-services Network Isolation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/question-answering/how-to/network-isolation.md
This will establish a private endpoint connection between language resource and
Follow the steps below to restrict public access to question answering language resources. Protect an Azure AI services resource from public access by [configuring the virtual network](../../../cognitive-services-virtual-networks.md?tabs=portal). After restricting access to an Azure AI services resource based on VNet, follow these steps to browse projects on Language Studio from your on-premises network or your local browser.
-- Grant access to [on-premises network](../../../cognitive-services-virtual-networks.md?tabs=portal#configuring-access-from-on-premises-networks).
+- Grant access to [on-premises network](../../../cognitive-services-virtual-networks.md?tabs=portal#configure-access-from-on-premises-networks).
- Grant access to your [local browser/machine](../../../cognitive-services-virtual-networks.md?tabs=portal#managing-ip-network-rules).
- Add the **public IP address of the machine under the Firewall** section of the **Networking** tab. By default `portal.azure.com` shows the current browsing machine's public IP (select this entry) and then select **Save**.
ai-services Prebuilt https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/question-answering/how-to/prebuilt.md
ne|Nepali
nl|Dutch, Flemish
nn|Norwegian Nynorsk
no|Norwegian
-or|Oriya
+or|Odia
pa|Punjabi, Panjabi
pl|Polish
ps|Pashto, Pushto
pt|Portuguese
-ro|Romanian, Moldavian, Moldovan
+ro|Romanian, Moldovan
ru|Russian
sa|Sanskrit
sd|Sindhi
ai-services Use Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/sentiment-opinion-mining/how-to/use-containers.md
In this article, you learned concepts and workflow for downloading, installing,
* You must specify billing information when instantiating a container.

> [!IMPORTANT]
-> Azure AI services containers are not licensed to run without being connected to Azure for metering. Customers need to enable the containers to communicate billing information with the metering service at all times. Azure AI services containers do not send customer data (e.g. text that is being analyzed) to Microsoft.
+> Azure AI containers are not licensed to run without being connected to Azure for metering. Customers need to enable the containers to communicate billing information with the metering service at all times. Azure AI containers do not send customer data (e.g. text that is being analyzed) to Microsoft.
## Next steps
ai-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/sentiment-opinion-mining/language-support.md
Total supported language codes: 94
| Mongolian (new) | `mn` | 2022-11-01 | |
| Nepali (new) | `ne` | 2022-11-01 | |
| Norwegian | `no` | 2022-11-01 | |
-| Oriya (new) | `or` | 2022-11-01 | |
+| Odia (new) | `or` | 2022-11-01 | |
| Oromo (new) | `om` | 2022-11-01 | |
| Pashto (new) | `ps` | 2022-11-01 | |
-| Persian (Farsi) (new) | `fa` | 2022-11-01 | |
+| Persian (new) | `fa` | 2022-11-01 | |
| Polish | `pl` | 2022-11-01 | |
| Portuguese (Portugal) | `pt-PT` | 2021-10-01 | `pt` also accepted |
| Portuguese (Brazil) | `pt-BR` | 2021-10-01 | |
ai-services Data Formats https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/summarization/custom/how-to/data-formats.md
In the abstractive document summarization scenario, each document (whether it ha
## Custom summarization conversation sample format
- In the abstractive conversation summarization scenario, each conversation (whether it has a provided label or not) is expected to be provided in a .json file, which is similar to the input format for our [pre-built conversation summarization service](https://learn.microsoft.com/rest/api/language/2023-04-01/analyze-conversation/submit-job?tabs=HTTP#textconversation). The following is an example conversation of three turns between two speakers (Agent and Customer).
+ In the abstractive conversation summarization scenario, each conversation (whether it has a provided label or not) is expected to be provided in a .json file, which is similar to the input format for our [pre-built conversation summarization service](/rest/api/language/2023-04-01/analyze-conversation/submit-job?tabs=HTTP#textconversation). The following is an example conversation of three turns between two speakers (Agent and Customer).
```json
{
ai-services Credential Entity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/metrics-advisor/how-tos/credential-entity.md
To create service principal for your data source, you can follow detailed instru
There are several steps to create a service principal from key vault.
-**Step 1. Create a Service Principal and grant it access to your database.** You can follow detailed instructions in [Connect different data sources](../data-feeds-from-different-sources.md), in creating service principal section for each data source.
+**Step 1: Create a Service Principal and grant it access to your database.** You can follow detailed instructions in [Connect different data sources](../data-feeds-from-different-sources.md), in creating service principal section for each data source.
After creating a service principal in the Azure portal, you can find the `Tenant ID` and `Client ID` in **Overview**. The **Directory (tenant) ID** should be `Tenant ID` in credential entity configurations.

![sp client ID and tenant ID](../media/credential-entity/sp-client-tenant-id.png)
-**Step 2. Create a new client secret.** You should go to **Certificates & Secrets** to create a new client secret, and the **value** will be used in next steps. (Note: The value only appears once, so it's better to store it somewhere.)
+**Step 2: Create a new client secret.** You should go to **Certificates & Secrets** to create a new client secret, and the **value** will be used in the next steps. (Note: The value only appears once, so it's better to store it somewhere.)
![sp Client secret value](../media/credential-entity/sp-secret-value.png)
-**Step 3. Create a key vault.** In [Azure portal](https://portal.azure.com/#home), select **Key vaults** to create one.
+**Step 3: Create a key vault.** In [Azure portal](https://portal.azure.com/#home), select **Key vaults** to create one.
![create a key vault in azure portal](../media/credential-entity/create-key-vault.png)
After creating a key vault, the **Vault URI** is the `Key Vault Endpoint` in MA
![key vault endpoint](../media/credential-entity/key-vault-endpoint.png)
-**Step 4. Create secrets for Key Vault.** In Azure portal for key vault, generate two secrets in **Settings->Secrets**.
+**Step 4: Create secrets for Key Vault.** In Azure portal for key vault, generate two secrets in **Settings->Secrets**.
The first is for `Service Principal Client Id`, the other is for `Service Principal Client Secret`; both of their names will be used in credential entity configurations. A minimal sketch for creating these secrets in code follows below.

![generate secrets](../media/credential-entity/generate-secrets.png)
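If you prefer to create the two secrets programmatically instead of through the portal, here's a minimal sketch using the Python `azure-keyvault-secrets` package; the vault URL and secret names are example values, not prescribed ones.

```python
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

# The vault URL is the Key Vault Endpoint used in the credential entity.
client = SecretClient(
    vault_url="https://<your-vault>.vault.azure.net",
    credential=DefaultAzureCredential(),
)

# The secret *names* (example values here) are what you reference later in
# the credential entity configuration.
client.set_secret("sp-client-id", "<service principal client ID>")
client.set_secret("sp-client-secret", "<service principal client secret>")
```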
The first is for `Service Principal Client Id`, the other is for `Service Princi
Until now, the *client ID* and *client secret* of the service principal are stored in Key Vault. Next, you need to create another service principal to access the key vault. Therefore, you should **create two service principals**: one to save the client ID and client secret, which are stored in a key vault, and the other to access the key vault.
-**Step 5. Create a service principal to store the key vault.**
+**Step 5: Create a service principal to store the key vault.**
1. Go to [Azure portal AAD (Azure Active Directory)](https://portal.azure.com/?trace=diagnostics&feature.customportal=false#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/Overview) and create a new registration.
Until now, the *client ID* and *client secret* of service principal are finally
![add client secret](../media/credential-entity/add-client-secret.png)
-**Step 6. Grant Service Principal access to Key Vault.** Go to the key vault resource you created, in **Settings->Access polices**, by selecting 'Add Access Policy' to make connection between key vault and the second service principal in **Step 5**, and 'Save'.
+**Step 6: Grant Service Principal access to Key Vault.** Go to the key vault resource you created. In **Settings->Access policies**, select 'Add Access Policy' to create a connection between the key vault and the second service principal from **Step 5**, then select 'Save'.
![grant sp to key vault](../media/credential-entity/grant-sp-to-kv.png)
ai-services Diagnose An Incident https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/metrics-advisor/how-tos/diagnose-an-incident.md
An alert generated by Metrics Advisor may contain multiple incidents and each in
After being directed to the incident detail page, you're able to take advantage of the insights that are automatically analyzed by Metrics Advisor to quickly locate the root cause of an issue, or use the analysis tool to further evaluate the issue's impact. There are three sections in the incident detail page, which correspond to the three major steps of diagnosing an incident.
-### Step 1. Check summary of current incident
+### Step 1: Check summary of current incident
The first section lists a summary of the current incident, including basic information, actions & tracings, and an analyzed root cause.
The first section lists a summary of the current incident, including basic infor
For metrics with multiple dimensions, it's common for multiple anomalies to be detected at the same time. However, those anomalies may share the same root cause. Instead of analyzing all anomalies one by one, leveraging **Analyzed root cause** should be the most efficient way to diagnose the current incident.
-### Step 2. View cross-dimension diagnostic insights
+### Step 2: View cross-dimension diagnostic insights
After getting basic info and automatic analysis insights, you can get more detailed info on the abnormal status of other dimensions within the same metric in a holistic way using the **"Diagnostic tree"**.
There are two display modes for a diagnostic tree: only show anomaly series or s
By using "Diagnostic tree", customers can locate root cause of current incident into specific dimension. This significantly removes customer's effort to view each individual anomalies or pivot through different dimensions to find the major anomaly contribution.
-### Step 3. View cross-metrics diagnostic insights using "Metrics graph"
+### Step 3: View cross-metrics diagnostic insights using "Metrics graph"
Sometimes it's hard to analyze an issue by checking the abnormal status of one single metric; you need to correlate multiple metrics together. Customers are able to configure a **Metrics graph**, which indicates the relationships between metrics. Refer to [How to build a metrics graph](metrics-graph.md) to get started.
ai-services Enable Anomaly Notification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/metrics-advisor/tutorials/enable-anomaly-notification.md
There are two common options to send email notifications that are supported in Metrics Advisor. One is to use webhooks and Azure Logic Apps to send email alerts; the other is to set up an SMTP server and use it to send email alerts directly. This section will focus on the first option, which is easier for customers who don't have an available SMTP server.
-**Step 1.** Create a webhook in Metrics Advisor
+**Step 1:** Create a webhook in Metrics Advisor
A webhook is the entry point for all the information available from the Metrics Advisor service, and calls a user-provided API when an alert is triggered. All alerts can be sent through a webhook.
Select the **Hooks** tab in your Metrics Advisor workspace, and select the **Cre
There's one extra parameter, **Endpoint**, that needs to be filled out; this can be done after completing Step 3 below.
-**Step 2.** Create a Consumption logic app resource
+**Step 2:** Create a Consumption logic app resource
In the [Azure portal](https://portal.azure.com), create a Consumption logic app resource with a blank workflow by following the instructions in [Create an example Consumption logic app workflow](../../../logic-apps/quickstart-create-example-consumption-workflow.md). When the workflow designer opens, return to this tutorial.
-**Step 3.** Add a trigger of **When an HTTP request is received**
+**Step 3:** Add a trigger of **When an HTTP request is received**
- Azure Logic Apps uses various actions to trigger defined workflows. For this use case, it uses the trigger named **When an HTTP request is received**.
In the [Azure portal](https://portal.azure.com), create a Consumption logic app
![Screenshot that highlights the copy icon to copy the URL of your HTTP request trigger.](../media/tutorial/logic-apps-copy-url.png)
-**Step 4.** Add a next step using 'HTTP' action
+**Step 4:** Add a next step using 'HTTP' action
Signals that are pushed through the webhook only contain limited information like timestamp, alertID, configurationID, etc. Detailed information needs to be queried using the callback URL provided in the signal. This step is to query detailed alert info.
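For readers who handle the webhook in code rather than in a Logic App, here's a hedged Python sketch of this query step; the `callBackUrl` field name, HTTP method, and header names follow the Metrics Advisor REST pattern but should be verified against the payload your hook actually receives.

```python
import requests

def fetch_alert_details(signal: dict) -> dict:
    """Query detailed alert info using the callback URL pushed to the webhook."""
    # Assumption: the pushed signal carries a "callBackUrl" field; Metrics
    # Advisor REST calls authenticate with both a subscription key and an
    # API key header.
    response = requests.get(
        signal["callBackUrl"],
        headers={
            "Ocp-Apim-Subscription-Key": "<metrics-advisor-subscription-key>",
            "x-api-key": "<metrics-advisor-api-key>",
        },
    )
    response.raise_for_status()
    return response.json()
```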
Signals that are pushed through the webhook only contain limited information lik
![Screenshot that highlights the api-keys](../media/tutorial/logic-apps-api-key.png)
-**Step 5.** Add a next step to 'parse JSON'
+**Step 5:** Add a next step to 'parse JSON'
You need to parse the response of the API for easier formatting of email content.
You need to parse the response of the API for easier formatting of email content
}
```
-**Step 6.** Add a next step to 'create HTML table'
+**Step 6:** Add a next step to 'create HTML table'
A bunch of information has been returned from the API call, however, depending on your scenarios not all of the information may be useful. Choose the items that you care about and would like included in the alert email.
Below is an example of an HTML table that chooses 'timestamp', 'metricGUID' and
![Screenshot of html table example](../media/tutorial/logic-apps-html-table.png)
-**Step 7.** Add the final step to 'send an email'
+**Step 7:** Add the final step to 'send an email'
There are several options to send email, including Microsoft-hosted and third-party offerings. Customers may need a tenant/account for their chosen option. For example, when choosing 'Office 365 Outlook' as the server, a sign-in process will be prompted for building the connection and authorization. An API connection will be established to use the email server to send alerts.
Fill in the content that you'd like to include to 'Body', 'Subject' in the email
### Send anomaly notification through a Microsoft Teams channel

This section will walk through the practice of sending anomaly notifications through a Microsoft Teams channel. This can help enable scenarios where team members are collaborating on analyzing anomalies that are detected by Metrics Advisor. The workflow is easy to configure and doesn't have a large number of prerequisites.
-**Step 1.** Add a 'Incoming Webhook' connector to your Teams channel
+**Step 1:** Add an 'Incoming Webhook' connector to your Teams channel
- Navigate to the Teams channel that you'd like to send notifications to, and select '•••' (More options).
- In the dropdown list, select 'Connectors'. Within the new dialog, search for 'Incoming Webhook' and select 'Add'.
This section will walk through the practice of sending anomaly notifications thr
![Screenshot to copy URL](../media/tutorial/webhook-url.png)
-**Step 2.** Create a new 'Teams hook' in Metrics Advisor
+**Step 2:** Create a new 'Teams hook' in Metrics Advisor
- Select the 'Hooks' tab in the left navigation bar, and select the 'Create hook' button at the top right of the page.
- Choose the hook type of 'Teams', then input a name and paste the URL that you copied from the above step.
This section will walk through the practice of sending anomaly notifications thr
![Screenshot to create a Teams hook](../media/tutorial/teams-hook.png)
-**Step 3.** Apply the Teams hook to an alert configuration
+**Step 3:** Apply the Teams hook to an alert configuration
Go and select one of the data feeds that you have onboarded. Select a metric within the feed and open the metrics detail page. You can create an 'alerting configuration' to subscribe to anomalies that are detected and be notified through a Teams channel.
Select the '+' button and choose the hook that you created, fill in other fields
This section will share the practice of using an SMTP server to send email notifications on anomalies that are detected. Make sure you have a usable SMTP server and have sufficient permission to get parameters like account name and password.
-**Step 1.** Assign your account as the 'Cognitive Services Metrics Advisor Administrator' role
+**Step 1:** Assign your account as the 'Cognitive Services Metrics Advisor Administrator' role
- A user with subscription administrator or resource group administrator privileges needs to navigate to the Metrics Advisor resource that was created in the Azure portal, and select the Access control (IAM) tab.
- Select 'Add role assignments'.
This section will share the practice of using an SMTP server to send email notif
![Screenshot that shows how to assign admin role to a specific role](../media/tutorial/access-control.png)
-**Step 2.** Configure SMTP server in Metrics Advisor workspace
+**Step 2:** Configure SMTP server in Metrics Advisor workspace
After you've completed the above steps and have been successfully added as an administrator of the Metrics Advisor resource, wait several minutes for the permissions to propagate. Then sign in to your Metrics Advisor workspace; you should be able to view a new tab named 'Email setting' on the left navigation panel. Select it to continue configuration.
Below is an example of a configured SMTP server:
![Screenshot that shows an example of a configured SMTP server](../media/tutorial/email-setting.png)
-**Step 3.** Create an email hook in Metrics Advisor
+**Step 3:** Create an email hook in Metrics Advisor
After successfully configuring an SMTP server, you're set to create an 'email hook' in the 'Hooks' tab in Metrics Advisor. For more about creating an 'email hook', refer to [article on alerts](../how-tos/alerts.md#email-hook) and follow the steps to completion.
-**Step 4.** Apply the email hook to an alert configuration
+**Step 4:** Apply the email hook to an alert configuration
Go and select one of the data feeds that you onboarded, select a metric within the feed, and open the metrics detail page. You can create an 'alerting configuration' to subscribe to the anomalies that have been detected and be notified through email.
ai-services Multi Service Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/multi-service-resource.md
You can access Azure AI services through two different resources: A multi-servic
Azure AI services are represented by Azure [resources](../azure-resource-manager/management/manage-resources-portal.md) that you create under your Azure subscription. After you create a resource, you can use the keys and endpoint generated to authenticate your applications.
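For example, here's a minimal sketch (Python, `azure-ai-textanalytics`) showing the single multi-service key and endpoint authenticating one of the supported services; the endpoint and key are placeholders.

```python
from azure.core.credentials import AzureKeyCredential
from azure.ai.textanalytics import TextAnalyticsClient

# The same multi-service key and endpoint work across the supported services;
# the Language client is shown here as one example.
client = TextAnalyticsClient(
    endpoint="https://<your-multi-service-resource>.cognitiveservices.azure.com/",
    credential=AzureKeyCredential("<multi-service-key>"),
)
print(client.detect_language(["Bonjour tout le monde"])[0].primary_language.name)
```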
+## Supported services with a multi-service resource
+
+The multi-service resource enables access to the following Azure AI services with a single key and endpoint. Use these links to find quickstart articles, samples, and more to start using your resource.
+
+| Service | Description |
+| | |
+| ![Content Moderator icon](./media/service-icons/content-moderator.svg) [Content Moderator](./content-moderator/index.yml) (retired) | Detect potentially offensive or unwanted content |
+| ![Custom Vision icon](./media/service-icons/custom-vision.svg) [Custom Vision](./custom-vision-service/index.yml) | Customize image recognition to fit your business |
+| ![Document Intelligence icon](./media/service-icons/document-intelligence.svg) [Document Intelligence](./document-intelligence/index.yml) | Turn documents into usable data at a fraction of the time and cost |
+| ![Face icon](./medi) | Detect and identify people and emotions in images |
+| ![Language icon](./media/service-icons/language.svg) [Language](./language-service/index.yml) | Build apps with industry-leading natural language understanding capabilities |
+| ![Speech icon](./media/service-icons/speech.svg) [Speech](./speech-service/index.yml) | Speech to text, text to speech, translation and speaker recognition |
+| ![Translator icon](./media/service-icons/translator.svg) [Translator](./translator/index.yml) | Translate more than 100 languages and dialects |
+| ![Vision icon](./media/service-icons/vision.svg) [Vision](./computer-vision/index.yml) | Analyze content in images and videos |
+
::: zone pivot="azportal"

[!INCLUDE [Azure Portal quickstart](includes/quickstarts/management-azportal.md)]
Azure AI services are represented by Azure [resources](../azure-resource-manager
::: zone-end

::: zone pivot="programming-language-csharp"

[!INCLUDE [C# SDK quickstart](includes/quickstarts/management-csharp.md)]
Azure AI services are represented by Azure [resources](../azure-resource-manager
## Next steps
-* Now that you have a resource, you can authenticate your API requests to the following Azure AI services. Use these links to find quickstart articles, samples and more to start using your resource.
- * [Content Moderator](./content-moderator/index.yml) (retired)
- * [Custom Vision](./custom-vision-service/index.yml)
- * [Document Intelligence](./document-intelligence/index.yml)
- * [Face](./computer-vision/overview-identity.md)
- * [Language](./language-service/index.yml)
- * [Speech](./speech-service/index.yml)
- * [Translator](./translator/index.yml)
- * [Vision](./computer-vision/index.yml)
+* Now that you have a resource, you can authenticate your API requests to one of the [supported Azure AI services](#supported-services-with-a-multi-service-resource).
ai-services Chatgpt Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/chatgpt-quickstart.md
-+ Previously updated : 05/23/2023 Last updated : 08/31/2023 zone_pivot_groups: openai-quickstart-new recommendations: false
Use this article to get started using Azure OpenAI.
::: zone-end

::: zone pivot="programming-language-java"

[!INCLUDE [Java quickstart](includes/chatgpt-java.md)]
Use this article to get started using Azure OpenAI.
[!INCLUDE [REST API quickstart](includes/chatgpt-rest.md)]

::: zone-end
ai-services Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/models.md
description: Learn about the different model capabilities that are available wit
Previously updated : 08/02/2023 Last updated : 09/05/2023
These models can only be used with the Chat Completion API.
| | | | | |
| `gpt-4` <sup>1,</sup><sup>2</sup> (0314) | | N/A | 8,192 | September 2021 |
| `gpt-4-32k` <sup>1,</sup><sup>2</sup> (0314) | | N/A | 32,768 | September 2021 |
-| `gpt-4` <sup>1</sup><sup>3</sup> (0613) | Australia East, Canada East, East US, East US 2, France Central, Japan East, UK South | N/A | 8,192 | September 2021 |
-| `gpt-4-32k` <sup>1</sup><sup>3</sup> (0613) | Australia East, Canada East, East US, East US 2, France Central, Japan East, UK South | N/A | 32,768 | September 2021 |
+| `gpt-4` <sup>1</sup><sup>3</sup> (0613) | Australia East, Canada East, East US, East US 2, France Central, Japan East, Sweden Central, Switzerland North, UK South | N/A | 8,192 | September 2021 |
+| `gpt-4-32k` <sup>1</sup><sup>3</sup> (0613) | Australia East, Canada East, East US, East US 2, France Central, Japan East, Sweden Central, Switzerland North, UK South | N/A | 32,768 | September 2021 |
<sup>1</sup> The model is [only available by request](https://aka.ms/oai/get-gpt4).<br> <sup>2</sup> Version `0314` of gpt-4 and gpt-4-32k will be retired no earlier than July 5, 2024. See [model updates](#model-updates) for model upgrade behavior.<br>
GPT-3.5 Turbo is used with the Chat Completion API. GPT-3.5 Turbo (0301) can als
| Model ID | Base model Regions | Fine-Tuning Regions | Max Request (tokens) | Training Data (up to) |
| | | - | -- | - |
| `gpt-35-turbo`<sup>1</sup> (0301) | East US, France Central, South Central US, UK South, West Europe | N/A | 4,096 | Sep 2021 |
-| `gpt-35-turbo` (0613) | Australia East, Canada East, East US, East US 2, France Central, Japan East, North Central US, UK South | N/A | 4,096 | Sep 2021 |
-| `gpt-35-turbo-16k` (0613) | Australia East, Canada East, East US, East US 2, France Central, Japan East, North Central US, UK South | N/A | 16,384 | Sep 2021 |
+| `gpt-35-turbo` (0613) | Australia East, Canada East, East US, East US 2, France Central, Japan East, North Central US, Sweden Central, Switzerland North, UK South | N/A | 4,096 | Sep 2021 |
+| `gpt-35-turbo-16k` (0613) | Australia East, Canada East, East US, East US 2, France Central, Japan East, North Central US, Sweden Central, Switzerland North, UK South | N/A | 16,384 | Sep 2021 |
<sup>1</sup> Version `0301` of gpt-35-turbo will be retired no earlier than July 5, 2024. See [model updates](#model-updates) for model upgrade behavior.
These models can only be used with Embedding API requests.
| Model ID | Base model Regions | Fine-Tuning Regions | Max Request (tokens) | Training Data (up to) |
| | | | | |
-| text-embedding-ada-002 (version 2) | Canada East, East US, France Central, Japan East, North Central US, South Central US, West Europe | N/A |8,191 | Sep 2021 |
+| text-embedding-ada-002 (version 2) | Canada East, East US, France Central, Japan East, North Central US, South Central US, Switzerland North, UK South, West Europe | N/A |8,191 | Sep 2021 |
| text-embedding-ada-002 (version 1) | East US, South Central US, West Europe | N/A | 2,046 | Sep 2021 |

### DALL-E models (Preview)
ai-services Use Your Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/use-your-data.md
Previously updated : 08/08/2023 Last updated : 09/05/2023 recommendations: false
Azure OpenAI on your data supports the following filetypes:
* Microsoft PowerPoint files
* PDF
-There are some caveats about document structure and how it might affect the quality of responses from the model:
+There is an [upload limit](../quotas-limits.md), and there are some caveats about document structure and how it might affect the quality of responses from the model:
* The model provides the best citation titles from markdown (`.md`) files.
There are some caveats about document structure and how it might affect the qual
This will impact the quality of Azure Cognitive Search and the model response.
-## Virtual network support & private link support
+## Virtual network support & private endpoint support
-Azure OpenAI on your data does not currently support private endpoints.
+You can protect Azure OpenAI resources in [virtual networks and private endpoints](/azure/ai-services/cognitive-services-virtual-networks) the same way as any Azure AI service.
+
+If you have an Azure OpenAI resource protected by a private network and want to allow Azure OpenAI on your data to access your search service, complete [an application form](https://aka.ms/applyacsvpnaoaionyourdata). The application will be reviewed in five business days and you will be contacted via email about the results. If you are eligible, we will send a private endpoint request to your search service, and you will need to approve the request.
++
+Learn more about the [manual approval workflow](/azure/private-link/private-endpoint-overview#access-to-a-private-link-resource-using-approval-workflow).
+
+After you approve the request in your search service, you can start using the [chat completions extensions API](/azure/ai-services/openai/reference#completions-extensions). Public network access can be disabled for that search service.
+
+> [!NOTE]
+> Virtual networks & private endpoints are only supported for the API, and not currently supported for Azure OpenAI Studio.
+### Storage accounts in private virtual networks
+
+Storage accounts in virtual networks and private endpoints are currently not supported by Azure OpenAI on your data.
## Azure Role-based access controls (Azure RBAC)
To add a new data source to your Azure OpenAI resource, you need the following A
|Azure RBAC role |Needed when |
|||
-|[Cognitive Services OpenAI Contributor](/azure/role-based-access-control/built-in-roles#cognitive-services-openai-contributor) | You want to use Azure OpenAI on your data. |
+|[Cognitive Services Contributor](../how-to/role-based-access-control.md#cognitive-services-contributor) | You want to use Azure OpenAI on your data. |
|[Search Index Data Contributor](/azure/role-based-access-control/built-in-roles#search-index-data-contributor) | You have an existing Azure Cognitive Search index that you want to use, instead of creating a new one. |
|[Storage Blob Data Contributor](/azure/role-based-access-control/built-in-roles#storage-blob-data-contributor) | You have an existing Blob storage container that you want to use, instead of creating a new one. |
+## Document-level access control
+
+Azure OpenAI on your data lets you restrict the documents that can be used in responses for different users with Azure Cognitive Search [security filters](/azure/search/search-security-trimming-for-azure-search-with-aad). When you enable document level access, the search results returned from Azure Cognitive Search and used to generate a response will be trimmed based on user Azure Active Directory (AD) group membership. You can only enable document-level access on existing Azure Cognitive Search indexes. To enable document-level access:
+
+1. Follow the steps in the [Azure Cognitive Search documentation](/azure/search/search-security-trimming-for-azure-search-with-aad) to register your application and create users and groups.
+1. [Index your documents with their permitted groups](/azure/search/search-security-trimming-for-azure-search-with-aad#index-document-with-their-permitted-groups). Be sure that your new [security fields](/azure/search/search-security-trimming-for-azure-search#create-security-field) have the schema below:
+
+ ```json
+ {"name": "group_ids", "type": "Collection(Edm.String)", "filterable": true }
+ ```
+
+ `group_ids` is the default field name. If you use a different field name like `my_group_ids`, you can map the field in [index field mapping](#index-field-mapping).
+
+1. Make sure each sensitive document in the index has the value set correctly on this security field to indicate the permitted groups of the document (a minimal indexing sketch follows this list).
+1. In [Azure OpenAI Studio](https://oai.azure.com/portal), add your data source. In the [index field mapping](#index-field-mapping) section, you can map zero or one value to the **permitted groups** field, as long as the schema is compatible. If the **Permitted groups** field isn't mapped, document level access won't be enabled.
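As the minimal indexing sketch referenced in step 3 above, assuming the default `group_ids` field name and an index with `id` and `content` fields (both assumptions to adapt), you might upload a document with its permitted groups using the Python `azure-search-documents` package:

```python
from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient

search_client = SearchClient(
    endpoint="https://<your-search-service>.search.windows.net",  # placeholder
    index_name="<index-name>",
    credential=AzureKeyCredential("<search-admin-key>"),
)

# The security field carries the Azure AD group object IDs that are allowed
# to see this document; search results are trimmed against the caller's groups.
search_client.upload_documents(documents=[{
    "id": "1",
    "content": "Compensation guidelines for managers.",
    "group_ids": ["<azure-ad-group-object-id>"],
}])
```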
+
+**Azure OpenAI Studio**
+
+Once the Azure Cognitive Search index is connected, your responses in the studio will have document access based on the Azure AD permissions of the logged-in user.
+
+**Web app**
+
+If you're using a published [web app](#using-the-web-app), you need to redeploy it to upgrade to the latest version. The latest version of the web app retrieves the groups of the logged-in user's Azure AD account, caches them, and includes the group IDs in each API request.
+
+**API**
+
+When using the API, pass the `filter` parameter in each API request. For example:
+
+```json
+{
+ "messages": [
+ {
+ "role": "user",
+ "content": "who is my manager?"
+ }
+ ],
+ "dataSources": [
+ {
+ "type": "AzureCognitiveSearch",
+ "parameters": {
+ "endpoint": "'$SearchEndpoint'",
+ "key": "'$SearchKey'",
+ "indexName": "'$SearchIndex'",
+ "filter": "my_group_ids/any(g:search.in(g, 'group_id1, group_id2'))"
+ }
+ }
+ ]
+}
+```
+* `my_group_ids` is the field name that you selected for **Permitted groups** during [fields mapping](#index-field-mapping).
+* `group_id1, group_id2` are groups attributed to the logged-in user. The client application can retrieve and cache users' groups.
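To build that `filter` value, the client application first needs the signed-in user's group IDs. A minimal sketch, assuming a delegated Microsoft Graph access token has already been acquired (for example, with MSAL) and the hypothetical field name `my_group_ids`:

```python
# Hedged sketch: look up the signed-in user's group IDs from Microsoft Graph
# and build the security filter string. Paging via @odata.nextLink is omitted.
import requests

def build_group_filter(graph_token: str, field_name: str = "my_group_ids") -> str:
    resp = requests.get(
        "https://graph.microsoft.com/v1.0/me/memberOf",
        headers={"Authorization": f"Bearer {graph_token}"},
    )
    resp.raise_for_status()
    # memberOf returns directory objects; keep only the groups.
    group_ids = [
        obj["id"]
        for obj in resp.json()["value"]
        if obj.get("@odata.type") == "#microsoft.graph.group"
    ]
    ids = ", ".join(group_ids)
    return f"{field_name}/any(g:search.in(g, '{ids}'))"
```

Cache the result per user rather than calling Microsoft Graph on every request.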
+
+## Schedule automatic index refreshes
+
+To keep your Azure Cognitive Search index up to date with your latest data, you can schedule an automatic refresh rather than manually updating it every time your data changes. To enable an automatic index refresh:
+
+1. [Add a data source](../quickstart.md) using Azure OpenAI studio.
+1. Under **Select or add data source**, select **Indexer schedule**, and choose the refresh cadence that you want to apply.
+
+ :::image type="content" source="../media/use-your-data/indexer-schedule.png" alt-text="A screenshot of the indexer schedule in Azure OpenAI Studio." lightbox="../media/use-your-data/indexer-schedule.png":::
+
+After the data ingestion is set to a cadence other than once, Azure Cognitive Search indexers will be created with a schedule equivalent to `0.5 * the cadence specified`. For example, a daily cadence creates indexers that run every 12 hours. At the specified cadence, the indexers pull the documents that were added, modified, or deleted from the storage container, then reprocess and index them. This ensures that the updated data is preprocessed and indexed in the final index at the desired cadence automatically. The intermediate assets created in the Azure Cognitive Search resource aren't cleaned up after ingestion, to allow for future runs. These assets are:
+ - `{Index Name}-index`
+ - `{Index Name}-indexer`
+ - `{Index Name}-indexer-chunk`
+ - `{Index Name}-datasource`
+ - `{Index Name}-skillset`
+
+To modify the schedule, you can use the [Azure portal](https://portal.azure.com/).
+
+1. Open your search resource page in the Azure portal.
+1. Select **Indexers** from the left pane.
+
+ :::image type="content" source="../media/use-your-data/indexers-azure-portal.png" alt-text="A screenshot of the indexers tab in the Azure portal." lightbox="../media/use-your-data/indexers-azure-portal.png":::
+
+1. Perform the following steps on the two indexers that have your index name as a prefix.
+ 1. Select the indexer to open it. Then select the **settings** tab.
+ 1. Update the schedule to the desired cadence from "Schedule", or specify a custom cadence from "Interval (minutes)".
+
+ :::image type="content" source="../media/use-your-data/indexer-schedule-azure-portal.png" alt-text="A screenshot of the settings page for an individual indexer." lightbox="../media/use-your-data/indexer-schedule-azure-portal.png":::
+
+ 1. Select **Save**.
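If you prefer to script this change, the schedule can also be updated with the `azure-search-documents` Python SDK. A sketch, assuming the `{Index Name}-indexer` naming pattern listed above and placeholder service values:

```python
# Hedged sketch: set the schedule for both indexers programmatically.
from datetime import timedelta

from azure.core.credentials import AzureKeyCredential
from azure.search.documents.indexes import SearchIndexerClient
from azure.search.documents.indexes.models import IndexingSchedule

client = SearchIndexerClient(
    endpoint="https://YOUR-SEARCH-SERVICE.search.windows.net",
    credential=AzureKeyCredential("YOUR-SEARCH-ADMIN-KEY"),
)

# Update the two indexers that have your index name as a prefix.
for name in ("YOUR-INDEX-indexer", "YOUR-INDEX-indexer-chunk"):
    indexer = client.get_indexer(name)
    indexer.schedule = IndexingSchedule(interval=timedelta(hours=12))  # run twice a day
    client.create_or_update_indexer(indexer)
```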
+

## Recommended settings

Use the following sections to help you configure Azure OpenAI on your data for optimal results.

### System message
-Give the model instructions about how it should behave and any context it should reference when generating a response. You can describe the assistant's personality, what it should and shouldn't answer, and how to format responses. There's no token limit for the system message, but will be included with every API call and counted against the overall token limit. The system message will be truncated if it's greater than 200 tokens.
+Give the model instructions about how it should behave and any context it should reference when generating a response. You can describe the assistant's personality, what it should and shouldn't answer, and how to format responses. There's no token limit for the system message, but it's included with every API call and counted against the overall token limit. The system message is truncated if it's greater than 400 tokens.
For example, if you're creating a chatbot where the data consists of transcriptions of quarterly financial earnings calls, you might use the following system message:
For example, if you're creating a chatbot where the data consists of transcripti
This system message can help improve the quality of the response by specifying the domain (in this case finance) and mentioning that the data consists of call transcriptions. It helps set the necessary context for the model to respond appropriately.
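If you call the API directly rather than using the studio, a system message like this one is passed through the data source's `roleInformation` parameter. A sketch of the relevant request fragment, with placeholder values:

```python
# Hedged sketch: the data source fragment of an extensions API request body,
# with the system message supplied as roleInformation. Values are placeholders.
data_source = {
    "type": "AzureCognitiveSearch",
    "parameters": {
        "endpoint": "https://YOUR-SEARCH-SERVICE.search.windows.net",
        "key": "YOUR-SEARCH-ADMIN-KEY",
        "indexName": "YOUR-INDEX",
        "roleInformation": (
            "You are an AI assistant that helps people find information in "
            "quarterly financial earnings call transcriptions."
        ),
    },
}
```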
-> [!NOTE]
+> [!NOTE]
+> The system message is used to modify how the GPT assistant responds to a user question based on retrieved documentation. It doesn't affect the retrieval process. If you want to provide instructions for the retrieval process, it's better to include them in the questions.
> The system message is only guidance. The model might not adhere to every instruction specified because it has been primed with certain behaviors, such as objectivity and avoiding controversial statements. Unexpected behavior might occur if the system message contradicts these behaviors.

### Maximum response
Set a limit on the number of tokens per model response. The upper limit for Azur
This option encourages the model to respond using your data only, and is selected by default. If you unselect this option, the model may more readily apply its internal knowledge to respond. Determine the correct selection based on your use case and scenario.
-### Semantic search
+### Search options
+
+Azure OpenAI on your data provides several search options you can use when you add your data source, leveraging the following types of search.
+
+* [Keyword search](/azure/search/search-lucene-query-architecture)
+
+* [Semantic search](/azure/search/semantic-search-overview)
+* [Vector search](/azure/search/vector-search-overview) using Ada [embedding](./understand-embeddings.md) models, available in [select regions](models.md#embeddings-models-1).
+
+ To enable vector search, you will need a `text-embedding-ada-002` deployment in your Azure OpenAI resource. Select your embedding deployment when connecting your data, then select one of the vector search types under **Data management**.
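As a quick check that the embedding deployment works, you can call it directly. A hedged sketch with placeholder names:

```python
# Hedged sketch: call a text-embedding-ada-002 deployment directly.
import requests

url = ("https://YOUR-RESOURCE.openai.azure.com/openai/deployments/"
       "YOUR-ADA-002-DEPLOYMENT/embeddings?api-version=2023-05-15")

resp = requests.post(
    url,
    headers={"api-key": "YOUR-AZURE-OPENAI-KEY"},
    json={"input": "sample text to embed"},
)
vector = resp.json()["data"][0]["embedding"]
print(len(vector))  # text-embedding-ada-002 returns a 1536-dimension vector
```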
> [!IMPORTANT]
-> * Semantic search is subject to [additional pricing](/azure/search/semantic-search-overview#availability-and-pricing)
-> * Currently Azure OpenAI on your data supports semantic search for English data only. Only enable semantic search if both your documents and use case are in English.
+> * [Semantic search](/azure/search/semantic-search-overview#availability-and-pricing) and [vector search](https://azure.microsoft.com/pricing/details/cognitive-services/openai-service/) are subject to additional pricing. You need to choose a **Basic or higher SKU** to enable semantic search or vector search. For more information, see [pricing tier differences](/azure/search/search-sku-tier) and [service limits](/azure/search/search-limits-quotas-capacity).
+> * Currently, Azure OpenAI on your data supports semantic search for the following languages: English, French, Spanish, Portuguese, Italian, German, Chinese (zh), Japanese, Korean, Russian, and Arabic. Don't enable semantic search if your data is in other languages.
-If [semantic search](/azure/search/semantic-search-overview) is enabled for your Azure Cognitive Search service, you are more likely to produce better retrieval of your data, which can improve response and citation quality.
+| Search option | Retrieval type | Additional pricing? |Benefits|
+|||| -- |
+| *keyword* | Keyword search | No additional pricing. |Performs fast and flexible query parsing and matching over searchable fields, using terms or phrases in any supported language, with or without operators.|
| *semantic* | Semantic search | Additional pricing for [semantic search](/azure/search/semantic-search-overview#availability-and-pricing) usage. |Improves the precision and relevance of search results by using a reranker (with AI models) to understand the semantic meaning of query terms and documents returned by the initial search ranker.|
+| *vector* | Vector search | [Additional pricing](https://azure.microsoft.com/pricing/details/cognitive-services/openai-service/) on your Azure OpenAI account from calling the embedding model. |Enables you to find documents that are similar to a given query input based on the vector embeddings of the content. |
+| *hybrid (vector + keyword)* | A hybrid of vector search and keyword search | [Additional pricing](https://azure.microsoft.com/pricing/details/cognitive-services/openai-service/) on your Azure OpenAI account from calling the embedding model. |Performs similarity search over vector fields using vector embeddings, while also supporting flexible query parsing and full text search over alphanumeric fields using term queries.|
+| *hybrid (vector + keyword) + semantic* | A hybrid of vector search, semantic and keyword search for retrieval. | [Additional pricing](https://azure.microsoft.com/pricing/details/cognitive-services/openai-service/) on your Azure OpenAI account from calling the embedding model, and additional pricing for [semantic search](/azure/search/semantic-search-overview#availability-and-pricing) usage. |Leverages vector embeddings, language understanding and flexible query parsing to create rich search experiences and generative AI apps that can handle complex and diverse information retrieval scenarios. |
+
+The optimal search option can vary depending on your dataset and use case. You might need to experiment with multiple options to determine which works best for your use case.
### Index field mapping
Avoid asking long questions and break them down into multiple questions if possi
* Azure OpenAI on your data supports queries that are in the same language as the documents. For example, if your data is in Japanese, then queries need to be in Japanese too.
-* Currently Azure OpenAI on your data supports [semantic search](/azure/search/semantic-search-overview) for English data only. Don't enable semantic search if your data is in other languages.
+* Currently, Azure OpenAI on your data supports [semantic search](/azure/search/semantic-search-overview) for the following languages: English, French, Spanish, Portuguese, Italian, German, Chinese (zh), Japanese, Korean, Russian, and Arabic. Don't enable semantic search if your data is in other languages.
* We recommend using a system message to inform the model that your data is in another language. For example:
-
- *"You are an AI assistant that helps people find information. You retrieve Japanese documents, and you should read them carefully in Japanese and answer in Japanese."*
+
+* *"**You are an AI assistant designed to help users extract information from retrieved Japanese documents. Please scrutinize the Japanese documents carefully before formulating a response. The user's query will be in Japanese, and you must response also in Japanese."*
+ * If you have documents in multiple languages, we recommend building a new index for each language and connecting them separately to Azure OpenAI.
While Power Virtual Agents has features that leverage Azure OpenAI such as [gene
> [!VIDEO https://www.microsoft.com/videoplayer/embed/RW18YwQ]

#### Using the web app

You can also use the available standalone web app to interact with your model using a graphical user interface, which you can deploy using either Azure OpenAI studio or a [manual deployment](https://github.com/microsoft/sample-app-aoai-chatGPT).

![A screenshot of the web app interface.](../media/use-your-data/web-app.png)
+##### Web app customization
+ You can also customize the app's frontend and backend logic. For example, you could change the icon that appears in the center of the app by updating `/frontend/src/assets/Azure.svg` and then redeploying the app [using the Azure CLI](https://github.com/microsoft/sample-app-aoai-chatGPT#deploy-with-the-azure-cli). See the source code for the web app, and more information, [on GitHub](https://github.com/microsoft/sample-app-aoai-chatGPT).
When customizing the app, we recommend:
- Pulling changes from the `main` branch for the web app's source code frequently to ensure you have the latest bug fixes and improvements.
-#### Important considerations
+##### Important considerations
- Publishing creates an Azure App Service in your subscription. It may incur costs depending on the [pricing plan](https://azure.microsoft.com/pricing/details/app-service/windows/) you select. When you're done with your app, you can delete it from the Azure portal.
-- You can customize the frontend and backend logic of the web app.
- By default, the app will only be accessible to you. To add authentication (for example, to restrict access to the app to members of your Azure tenant):

  1. Go to the [Azure portal](https://portal.azure.com/#home) and search for the app name you specified during publishing. Select the web app, and go to the **Authentication** tab on the left navigation menu. Then select **Add an identity provider**.
When customizing the app, we recommend:
Now users will be asked to sign in with their Azure Active Directory account to access your app. You can follow a similar process to add another identity provider if you prefer. The app doesn't use the user's sign-in information in any way other than verifying that they're a member of your tenant.
+### Chat history
+
+You can enable chat history for your users of the web app. By enabling the feature, your users will have access to their individual previous queries and responses.
+
+To enable chat history, deploy or redeploy your model as a web app using [Azure OpenAI Studio](https://oai.azure.com/portal).
+
+> [!IMPORTANT]
+> Enabling chat history will create a [Cosmos DB](/azure/cosmos-db/introduction) instance in your resource group, and incur [additional charges](https://azure.microsoft.com/pricing/details/cosmos-db/autoscale-provisioned/) for the storage used.
+
+Once you've enabled chat history, your users can show and hide it in the top right corner of the app. When the history is shown, they can rename or delete conversations. Because they're logged into the app, conversations are automatically ordered from newest to oldest, and named based on the first query in the conversation.
+
+#### Deleting your Cosmos DB instance
+
+Deleting your web app doesn't automatically delete your Cosmos DB instance. To delete your Cosmos DB instance, along with all stored chats, you need to navigate to the associated resource in the [Azure portal](https://portal.azure.com) and delete it. If you delete the Cosmos DB resource but keep the chat history option enabled in the studio, your users are notified of a connection error but can continue to use the web app without access to the chat history.
### Using the API
-Consider setting the following parameters even if they are optional for using the API.
+After you upload your data through Azure OpenAI studio, you can call the Azure OpenAI models through the APIs. Consider setting the following parameters, even though they're optional, when you use the API.
+ |Parameter |Recommendation |
You can send a streaming request using the `stream` parameter, allowing data to
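Below is a hedged sketch of such a streaming call, assuming placeholder names. With `"stream": true`, the service returns incremental chunks as server-sent events:

```python
# Hedged sketch: stream a chat completions response chunk by chunk.
import requests

url = ("https://YOUR-RESOURCE.openai.azure.com/openai/deployments/"
       "YOUR-DEPLOYMENT/chat/completions?api-version=2023-05-15")

with requests.post(
    url,
    headers={"api-key": "YOUR-AZURE-OPENAI-KEY"},
    json={
        "messages": [{"role": "user", "content": "Summarize the key points."}],
        "stream": True,
    },
    stream=True,  # tell requests not to buffer the whole response
) as resp:
    for line in resp.iter_lines():
        if line:
            print(line.decode("utf-8"))  # each chunk arrives as a "data: {...}" event
```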
#### Conversation history for better results
-When chatting with a model, providing a history of the chat will help the model return higher quality results.
+When you chat with a model, providing a history of the chat will help the model return higher-quality results.
```json {
ai-services Dall E Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/dall-e-quickstart.md
zone_pivot_groups: openai-quickstart-dall-e
# Quickstart: Generate images with Azure OpenAI Service
+> [!NOTE]
+> The image generation API creates an image from a text prompt. It does not edit existing images or create variations.
::: zone pivot="programming-language-studio"
zone_pivot_groups: openai-quickstart-dall-e
::: zone-end

::: zone-end
zone_pivot_groups: openai-quickstart-dall-e
[!INCLUDE [Python SDK quickstart](includes/dall-e-python.md)]

::: zone-end
ai-services Business Continuity Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/business-continuity-disaster-recovery.md
Previously updated : 6/21/2023 Last updated : 8/17/2023 recommendations: false
keywords:
# Business Continuity and Disaster Recovery (BCDR) considerations with Azure OpenAI Service
-Azure OpenAI is available in multiple regions. Since subscription keys are region bound, when a customer acquires a key, they select the region in which their deployments will reside and from then on, all operations stay associated with that Azure server region.
+Azure OpenAI is available in multiple regions. When you create an Azure OpenAI resource, you specify a region. From then on, your resource and all its operations stay associated with that Azure server region.
-It's rare, but not impossible, to encounter a network issue that hits an entire region. If your service needs to always be available, then you should design it to either fail-over into another region or split the workload between two or more regions. Both approaches require at least two Azure OpenAI resources in different regions. This article provides general recommendations for how to implement Business Continuity and Disaster Recovery (BCDR) for your Azure OpenAI applications.
+It's rare, but not impossible, to encounter a network issue that hits an entire region. If your service needs to always be available, then you should design it to either fail over into another region or split the workload between two or more regions. Both approaches require at least two Azure OpenAI resources in different regions. This article provides general recommendations for how to implement Business Continuity and Disaster Recovery (BCDR) for your Azure OpenAI applications.
-## Best practices
-
-Today customers will call the endpoint provided during deployment for both deployments and inference. These operations are stateless, so no data is lost in the case that a region becomes unavailable.
-
-If a region is non-operational customers must take steps to ensure service continuity.
+## BCDR requires custom code
-## Business continuity
+Today, customers call the endpoint provided during deployment for inferencing. Inferencing operations are stateless, so no data is lost if a region becomes unavailable.
-The following set of instructions applies both customers using default endpoints and those using custom endpoints.
+If a region is nonoperational, customers must take steps to ensure service continuity.
-### Default endpoint recovery
+## BCDR for base model & customized model
-If you're using a default endpoint, you should configure your client code to monitor errors, and if the errors persist, be prepared to redirect to another region of your choice where you have an Azure OpenAI subscription.
+If you're using the base models, you should configure your client code to monitor errors, and if the errors persist, be prepared to redirect to another region of your choice where you have an Azure OpenAI subscription.
Follow these steps to configure your client to monitor errors:
-1. Use the [models page](../concepts/models.md) to identify the list of available regions for Azure OpenAI.
+1. Use the [models](/azure/ai-services/openai/concepts/models#model-summary-table-and-region-availability) page to choose the datacenters and regions that are right for you.
-2. Select a primary and one secondary/backup regions from the list.
+2. Select a primary and one (or more) secondary/backup regions from the list.
-3. Create Azure OpenAI resources for each region selected.
+3. Create an Azure OpenAI resource in each selected region.
4. For the primary region and any backup regions your code will need to know:
- a. Base URI for the resource
-
- b. Regional access key or Azure Active Directory access
+ - Base URI for the resource
+ - Regional access key or Azure Active Directory access
-5. Configure your code so that you monitor connectivity errors (typically connection timeouts and service unavailability errors).
+5. Configure your code so that you monitor connectivity errors (typically connection timeouts and service unavailability errors).
- a. Given that networks yield transient errors, for single connectivity issue occurrences, the suggestion is to retry.
-
- b. For persistence redirect traffic to the backup resource in the region you've created.
-
-## BCDR requires custom code
+ - Given that networks yield transient errors, for single connectivity issue occurrences, the suggestion is to retry.
+ - For persistent connectivity issues, redirect traffic to the backup resource in the region(s) you've created (see the sketch after this list).
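A minimal failover sketch, assuming two Azure OpenAI resources with placeholder endpoints and keys:

```python
# Hedged sketch: retry on transient errors, then fail over to the next region.
import requests

REGIONS = [
    {"endpoint": "https://primary-resource.openai.azure.com", "key": "PRIMARY-KEY"},
    {"endpoint": "https://backup-resource.openai.azure.com", "key": "BACKUP-KEY"},
]

def chat_completion(payload: dict, deployment: str, retries_per_region: int = 2) -> dict:
    for region in REGIONS:
        url = (f"{region['endpoint']}/openai/deployments/{deployment}"
               f"/chat/completions?api-version=2023-05-15")
        for _ in range(retries_per_region):
            try:
                resp = requests.post(
                    url, headers={"api-key": region["key"]}, json=payload, timeout=30
                )
                if resp.status_code < 500:
                    return resp.json()  # success, or a non-retryable client error
            except requests.exceptions.RequestException:
                pass  # transient network error: retry, then fail over
    raise RuntimeError("All configured regions are unavailable")
```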
-The recovery from regional failures for this usage type can be performed instantaneously and at a very low cost. This does however, require custom development of this functionality on the client side of your application.
+If you have fine-tuned a model in your primary region, you need to retrain the base model in the secondary region(s) using the same training data, and then follow the preceding steps.
ai-services Completions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/completions.md
Title: 'How to generate text with Azure OpenAI Service'
-description: Learn how to generate or manipulate text, including code with Azure OpenAI
+description: Learn how to generate or manipulate text, including code by using a completion endpoint in Azure OpenAI Service.
Previously updated : 06/24/2022 Last updated : 08/15/2023 recommendations: false
keywords:
# Learn how to generate or manipulate text
-The completions endpoint can be used for a wide variety of tasks. It provides a simple but powerful text-in, text-out interface to any of our [models](../concepts/models.md). You input some text as a prompt, and the model will generate a text completion that attempts to match whatever context or pattern you gave it. For example, if you give the API the prompt, "As Descartes said, I think, therefore", it will return the completion " I am" with high probability.
+Azure OpenAI Service provides a **completion endpoint** that can be used for a wide variety of tasks. The endpoint supplies a simple yet powerful text-in, text-out interface to any [Azure OpenAI model](../concepts/models.md). To trigger the completion, you input some text as a prompt. The model generates the completion and attempts to match your context or pattern. Suppose you provide the prompt "As Descartes said, I think, therefore" to the API. For this prompt, Azure OpenAI returns the completion " I am" with high probability.
-The best way to start exploring completions is through our playground in [Azure OpenAI Studio](https://oai.azure.com). It's a simple text box where you can submit a prompt to generate a completion. You can start with a simple example like the following:
+The best way to start exploring completions is through the playground in [Azure OpenAI Studio](https://oai.azure.com). It's a simple text box where you enter a prompt to generate a completion. You can start with a simple prompt like this one:
-`write a tagline for an ice cream shop`
+```console
+write a tagline for an ice cream shop
+```
-once you submit, you'll see something like the following generated:
+After you enter your prompt, Azure OpenAI displays the completion:
-``` console
-write a tagline for an ice cream shop
+```console
we serve up smiles with every scoop!
```
-The actual completion results you see may differ because the API is stochastic by default. In other words, you might get a slightly different completion every time you call it, even if your prompt stays the same. You can control this behavior with the temperature setting.
+The completion results that you see can differ because the Azure OpenAI API produces fresh output for each interaction. You might get a slightly different completion each time you call the API, even if your prompt stays the same. You can control this behavior with the `Temperature` setting.
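You can issue the same prompt outside the playground through the completions REST API. A sketch with placeholder resource, deployment, and key values:

```python
# Hedged sketch: send the playground prompt to the completions endpoint.
import requests

url = ("https://YOUR-RESOURCE.openai.azure.com/openai/deployments/"
       "YOUR-COMPLETIONS-DEPLOYMENT/completions?api-version=2023-05-15")

resp = requests.post(
    url,
    headers={"api-key": "YOUR-AZURE-OPENAI-KEY"},
    json={
        "prompt": "write a tagline for an ice cream shop",
        "max_tokens": 20,
        "temperature": 1.0,  # lower this value for more deterministic completions
    },
)
print(resp.json()["choices"][0]["text"])
```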
-This simple, "text in, text out" interface means you can "program" the model by providing instructions or just a few examples of what you'd like it to do. Its success generally depends on the complexity of the task and quality of your prompt. A general rule is to think about how you would write a word problem for a middle school student to solve. A well-written prompt provides enough information for the model to know what you want and how it should respond.
+The simple text-in, text-out interface means you can "program" the Azure OpenAI model by providing instructions or just a few examples of what you'd like it to do. The output success generally depends on the complexity of the task and quality of your prompt. A general rule is to think about how you would write a word problem for a pre-teenage student to solve. A well-written prompt provides enough information for the model to know what you want and how it should respond.
> [!NOTE]
-> Keep in mind that the models' training data cuts off in October 2019, so they may not have knowledge of current events. We plan to add more continuous training in the future.
+> The model training data can be different for each model type. The [latest model's training data currently extends through September 2021 only](/azure/ai-services/openai/concepts/models). Depending on your prompt, the model might not have knowledge of related current events.
-## Prompt design
+## Design prompts
-### Basics
+Azure OpenAI Service models can do everything from generating original stories to performing complex text analysis. Because they can do so many things, you must be explicit in showing what you want. Showing, not just telling, is often the secret to a good prompt.
-OpenAI's models can do everything from generating original stories to performing complex text analysis. Because they can do so many things, you have to be explicit in showing what you want. Showing, not just telling, is often the secret to a good prompt.
+The models try to predict what you want from the prompt. If you enter the prompt "Give me a list of cat breeds," the model doesn't automatically assume you're asking for a list only. You might be starting a conversation where your first words are "Give me a list of cat breeds" followed by "and I'll tell you which ones I like." If the model only assumed that you wanted a list of cats, it wouldn't be as good at content creation, classification, or other tasks.
-The models try to predict what you want from the prompt. If you send the words "Give me a list of cat breeds," the model wouldn't automatically assume that you're asking for a list of cat breeds. You could as easily be asking the model to continue a conversation where the first words are "Give me a list of cat breeds" and the next ones are "and I'll tell you which ones I like." If the model only assumed that you wanted a list of cats, it wouldn't be as good at content creation, classification, or other tasks.
+### Guidelines for creating robust prompts
-There are three basic guidelines to creating prompts:
+There are three basic guidelines for creating useful prompts:
-**Show and tell.** Make it clear what you want either through instructions, examples, or a combination of the two. If you want the model to rank a list of items in alphabetical order or to classify a paragraph by sentiment, show it that's what you want.
+- **Show and tell**. Make it clear what you want either through instructions, examples, or a combination of the two. If you want the model to rank a list of items in alphabetical order or to classify a paragraph by sentiment, include these details in your prompt to show the model.
-**Provide quality data.** If you're trying to build a classifier or get the model to follow a pattern, make sure that there are enough examples. Be sure to proofread your examples ΓÇö the model is usually smart enough to see through basic spelling mistakes and give you a response, but it also might assume that the mistakes are intentional and it can affect the response.
+- **Provide quality data**. If you're trying to build a classifier or get the model to follow a pattern, make sure there are enough examples. Be sure to proofread your examples. The model is smart enough to resolve basic spelling mistakes and give you a meaningful response. Conversely, the model might assume the mistakes are intentional, which can affect the response.
-**Check your settings.** The temperature and top_p settings control how deterministic the model is in generating a response. If you're asking it for a response where there's only one right answer, then you'd want to set these settings to lower values. If you're looking for a response that's not obvious, then you might want to set them to higher values. The number one mistake people use with these settings is assuming that they're "cleverness" or "creativity" controls.
+- **Check your settings**. Probability settings, such as `Temperature` and `Top P`, control how deterministic the model is in generating a response. If you're asking for a response where there's only one right answer, you should specify lower values for these settings. If you're looking for a response that's not obvious, you might want to use higher values. The most common mistake users make with these settings is assuming they control "cleverness" or "creativity" in the model response.
-### Troubleshooting
+### Troubleshooting for prompt issues
-If you're having trouble getting the API to perform as expected, follow this checklist:
+If you're having trouble getting the API to perform as expected, review the following points for your implementation:
-1. Is it clear what the intended generation should be?
-2. Are there enough examples?
-3. Did you check your examples for mistakes? (The API won't tell you directly)
-4. Are you using temp and top_p correctly?
+- Is it clear what the intended generation should be?
+- Are there enough examples?
+- Did you check your examples for mistakes? (The API doesn't tell you directly.)
+- Are you using the `Temperature` and `Top P` probability settings correctly?
-## Classification
+## Classify text
-To create a text classifier with the API we provide a description of the task and provide a few examples. In this demonstration we show the API how to classify the sentiment of Tweets.
+To create a text classifier with the API, you provide a description of the task and provide a few examples. In this demonstration, you show the API how to classify the _sentiment_ of text messages. The sentiment expresses the overall feeling or expression in the text.
```console
-This is a tweet sentiment classifier
+This is a text message sentiment classifier
-Tweet: "I loved the new Batman movie!"
+Message: "I loved the new adventure movie!"
Sentiment: Positive
-Tweet: "I hate it when my phone battery dies."
+Message: "I hate it when my phone battery dies."
Sentiment: Negative
-Tweet: "My day has been 👍"
+Message: "My day has been 👍"
Sentiment: Positive
-Tweet: "This is the link to the article"
+Message: "This is the link to the article"
Sentiment: Neutral
-Tweet: "This new music video blew my mind"
+Message: "This new music video is unreal"
Sentiment:
```
-It's worth paying attention to several features in this example:
+### Guidelines for designing text classifiers
-**1. Use plain language to describe your inputs and outputs**
-We use plain language for the input "Tweet" and the expected output "Sentiment." For best practices, start with plain language descriptions. While you can often use shorthand or keys to indicate the input and output, when building your prompt it's best to start by being as descriptive as possible and then working backwards removing extra words as long as the performance to the prompt is consistent.
+This demonstration reveals several guidelines for designing classifiers:
-**2. Show the API how to respond to any case**
-In this example we provide multiple outcomes "Positive", "Negative" and "Neutral." A neutral outcome is important because there will be many cases where even a human would have a hard time determining if something is positive or negative and situations where it's neither.
+- **Use plain language to describe your inputs and outputs**. Use plain language for the input "Message" and the expected value that expresses the "Sentiment." For best practices, start with plain language descriptions. You can often use shorthand or keys to indicate the input and output when building your prompt, but it's best to start by being as descriptive as possible. Then you can work backwards and remove extra words as long as the performance to the prompt is consistent.
-**3. You can use text and emoji**
-The classifier is a mix of text and emoji 👍. The API reads emoji and can even convert expressions to and from them.
+- **Show the API how to respond to any case**. The demonstration provides multiple outcomes: "Positive," "Negative," and "Neutral." Supporting a neutral outcome is important because there are many cases where even a human can have difficulty determining if something is positive or negative.
-**4. You need fewer examples for familiar tasks**
-For this classifier we only provided a handful of examples. This is because the API already has an understanding of sentiment and the concept of a tweet. If you're building a classifier for something the API might not be familiar with, it might be necessary to provide more examples.
+- **Use emoji and text, per the common expression**. The demonstration shows that the classifier can be a mix of text and emoji 👍. The API reads emoji and can even convert expressions to and from them. For the best response, use common forms of expression for your examples.
-### Improving the classifier's efficiency
+- **Use fewer examples for familiar tasks**. This classifier provides only a handful of examples because the API already has an understanding of sentiment and the concept of a text message. If you're building a classifier for something the API might not be familiar with, it might be necessary to provide more examples.
-Now that we have a grasp of how to build a classifier, let's take that example and make it even more efficient so that we can use it to get multiple results back from one API call.
+### Multiple results from a single API call
-```
-This is a tweet sentiment classifier
+Now that you understand how to build a classifier, let's expand on the first demonstration to make it more efficient. You want to be able to use the classifier to get multiple results back from a single API call.
-Tweet: "I loved the new Batman movie!"
+```console
+This is a text message sentiment classifier
+
+Message: "I loved the new adventure movie!"
Sentiment: Positive
-Tweet: "I hate it when my phone battery dies"
+Message: "I hate it when my phone battery dies"
Sentiment: Negative
-Tweet: "My day has been 👍"
+Message: "My day has been 👍"
Sentiment: Positive
-Tweet: "This is the link to the article"
+Message: "This is the link to the article"
Sentiment: Neutral
-Tweet text
-1. "I loved the new Batman movie!"
+Message text
+1. "I loved the new adventure movie!"
2. "I hate it when my phone battery dies" 3. "My day has been 👍" 4. "This is the link to the article"
-5. "This new music video blew my mind"
+5. "This new music video is unreal"
-Tweet sentiment ratings:
+Message sentiment ratings:
1: Positive
2: Negative
3: Positive
4: Neutral
5: Positive
-Tweet text
-1. "I can't stand homework"
-2. "This sucks. I'm bored 😠"
-3. "I can't wait for Halloween!!!"
+Message text
+1. "He doesn't like homework"
+2. "The taxi is late. She's angry 😠"
+3. "I can't wait for the weekend!!!"
4. "My cat is adorable ❤️❤️"
-5. "I hate chocolate"
+5. "Let's try chocolate bananas"
-Tweet sentiment ratings:
+Message sentiment ratings:
1.
```
-After showing the API how tweets are classified by sentiment we then provide it a list of tweets and then a list of sentiment ratings with the same number index. The API is able to pick up from the first example how a tweet is supposed to be classified. In the second example it sees how to apply this to a list of tweets. This allows the API to rate five (and even more) tweets in just one API call.
-
-It's important to note that when you ask the API to create lists or evaluate text you need to pay extra attention to your probability settings (Top P or Temperature) to avoid drift.
-
-1. Make sure your probability setting is calibrated correctly by running multiple tests.
+This demonstration shows the API how to classify text messages by sentiment. You provide a numbered list of messages and a list of sentiment ratings with the same number index. The API uses the information in the first demonstration to learn how to classify sentiment for a single text message. In the second demonstration, the model learns how to apply the sentiment classification to a list of text messages. This approach allows the API to rate five (and even more) text messages in a single API call.
-2. Don't make your list too long or the API is likely to drift.
+> [!IMPORTANT]
+> When you ask the API to create lists or evaluate text, it's important to help the API avoid drift. Here are some points to follow:
+>
+> - Pay careful attention to your values for the `Top P` or `Temperature` probability settings.
+> - Run multiple tests to make sure your probability settings are calibrated correctly.
+> - Don't use long lists. Long lists can lead to drift.
-
+## Trigger ideas
-## Generation
+One of the most powerful yet simplest tasks you can accomplish with the API is generating new ideas or versions of input. Suppose you're writing a mystery novel and you need some story ideas. You can give the API a list of a few ideas and it tries to add more ideas to your list. The API can create business plans, character descriptions, marketing slogans, and much more from just a small handful of examples.
-One of the most powerful yet simplest tasks you can accomplish with the API is generating new ideas or versions of input. You can give the API a list of a few story ideas and it will try to add to that list. We've seen it create business plans, character descriptions and marketing slogans just by providing it a handful of examples. In this demonstration we'll use the API to create more examples for how to use virtual reality in the classroom:
+In the next demonstration, you use the API to create more examples for how to use virtual reality in the classroom:
-```
+```console
Ideas involving education and virtual reality

1. Virtual Mars
Students get to explore Mars via virtual reality and go on missions to collect a
2.
```
-All we had to do in this example is provide the API with just a description of what the list is about and one example. We then prompted the API with the number `2.` indicating that it's a continuation of the list.
+This demonstration provides the API with a basic description for your list along with one list item. Then you use an incomplete prompt of "2." to trigger a response from the API. The API interprets the incomplete entry as a request to generate similar items and add them to your list.
-Although this is a very simple prompt, there are several details worth noting:
+### Guidelines for triggering ideas
-**1. We explained the intent of the list**<br>
-Just like with the classifier, we tell the API up front what the list is about. This helps it focus on completing the list and not trying to guess what the pattern is behind it.
+Although this demonstration uses a simple prompt, it highlights several guidelines for triggering new ideas:
-**2. Our example sets the pattern for the rest of the list**<br>
-Because we provided a one-sentence description, the API is going to try to follow that pattern for the rest of the items it adds to the list. If we want a more verbose response, we need to set that up from the start.
+- **Explain the intent of the list**. Similar to the demonstration for the text classifier, you start by telling the API what the list is about. This approach helps the API to focus on completing the list rather than trying to determine patterns by analyzing the text.
-**3. We prompt the API by adding an incomplete entry**<br>
-When the API sees `2.` and the prompt abruptly ends, the first thing it tries to do is figure out what should come after it. Since we already had an example with number one and gave the list a title, the most obvious response is to continue adding items to the list.
+- **Set the pattern for the items in the list**. When you provide a one-sentence description, the API tries to follow that pattern when generating new items for the list. If you want a more verbose response, you need to establish that intent with more detailed text input to the API.
-**Advanced generation techniques**<br>
-You can improve the quality of the responses by making a longer more diverse list in your prompt. One way to do that is to start off with one example, let the API generate more and select the ones that you like best and add them to the list. A few more high-quality variations can dramatically improve the quality of the responses.
+- **Prompt the API with an incomplete entry to trigger new ideas**. When the API encounters text that seems incomplete, such as the prompt text "2.," it first tries to determine any text that might complete the entry. Because the demonstration had a list title and an example with the number "1." and accompanying text, the API interpreted the incomplete prompt text "2." as a request to continue adding items to the list.
-
+- **Explore advanced generation techniques**. You can improve the quality of the responses by making a longer more diverse list in your prompt. One approach is to start with one example, let the API generate more examples, and then select the examples you like best and add them to the list. A few more high-quality variations in your examples can dramatically improve the quality of the responses.
-## Conversation
+## Conduct conversations
-The API is extremely adept at carrying on conversations with humans and even with itself. With just a few lines of instruction, we've seen the API perform as a customer service chatbot that intelligently answers questions without ever getting flustered or a wise-cracking conversation partner that makes jokes and puns. The key is to tell the API how it should behave and then provide a few examples.
+Starting with the release of [GPT-35-Turbo and GPT-4](/azure/ai-services/openai/how-to/chatgpt?pivots=programming-language-chat-completions), we recommend that you create conversational generation and chatbots by using models that support the _chat completion endpoint_. The chat completion models and endpoint require a different input structure than the completion endpoint.
-Here's an example of the API playing the role of an AI answering questions:
+The API is adept at carrying on conversations with humans and even with itself. With just a few lines of instruction, the API can perform as a customer service chatbot that intelligently answers questions without getting flustered, or a wise-cracking conversation partner that makes jokes and puns. The key is to tell the API how it should behave and then provide a few examples.
-```
+In this demonstration, the API supplies the role of an AI answering questions:
+
+```console
The following is a conversation with an AI assistant. The assistant is helpful, creative, clever, and very friendly.

Human: Hello, who are you?
AI: I am an AI created by OpenAI. How can I help you today?
Human:
```
-This is all it takes to create a chatbot capable of carrying on a conversation. But underneath its simplicity there are several things going on that are worth paying attention to:
-
-**1. We tell the API the intent but we also tell it how to behave**
-Just like the other prompts, we cue the API into what the example represents, but we also add another key detail: we give it explicit instructions on how to interact with the phrase "The assistant is helpful, creative, clever, and very friendly."
-
-Without that instruction the API might stray and mimic the human it's interacting with and become sarcastic or some other behavior we want to avoid.
-
-**2. We give the API an identity**
-At the start we have the API respond as an AI that was created by OpenAI. While the API has no intrinsic identity, this helps it respond in a way that's as close to the truth as possible. You can use identity in other ways to create other kinds of chatbots. If you tell the API to respond as a woman who works as a research scientist in biology, you'll get intelligent and thoughtful comments from the API similar to what you'd expect from someone with that background.
+Let's look at a variation for a chatbot named "Cramer," an amusing and somewhat helpful virtual assistant. To help the API understand the character of the role, you provide a few examples of questions and answers. All it takes is just a few sarcastic responses and the API can pick up the pattern and provide an endless number of similar responses.
-In this example we create a chatbot that is a bit sarcastic and reluctantly answers questions:
-
-```
-Marv is a chatbot that reluctantly answers questions.
+```console
+Cramer is a chatbot that reluctantly answers questions.
### User: How many pounds are in a kilogram?
-Marv: This again? There are 2.2 pounds in a kilogram. Please make a note of this.
+Cramer: This again? There are 2.2 pounds in a kilogram. Please make a note of this.
### User: What does HTML stand for?
-Marv: Was Google too busy? Hypertext Markup Language. The T is for try to ask better questions in the future.
+Cramer: Was Google too busy? Hypertext Markup Language. The T is for try to ask better questions in the future.
### User: When did the first airplane fly?
-Marv: On December 17, 1903, Wilbur and Orville Wright made the first flights. I wish they'd come and take me away.
+Cramer: On December 17, 1903, Wilbur and Orville Wright made the first flights. I wish they'd come and take me away.
### User: Who was the first man in space?
-Marv:
+Cramer:
```
-To create an amusing and somewhat helpful chatbot we provide a few examples of questions and answers showing the API how to reply. All it takes is just a few sarcastic responses and the API is able to pick up the pattern and provide an endless number of snarky responses.
+### Guidelines for designing conversations
-
+Our demonstrations show how easily you can create a chatbot that's capable of carrying on a conversation. Although it looks simple, this approach follows several important guidelines:
-## Transformation
+- **Define the intent of the conversation**. Just like the other prompts, you describe the intent of the interaction to the API. In this case, "a conversation." This input prepares the API to process subsequent input according to the initial intent.
-The API is a language model that is familiar with a variety of ways that words and characters can be used to express information. This ranges from natural language text to code and languages other than English. The API is also able to understand content on a level that allows it to summarize, convert and express it in different ways.
+- **Tell the API how to behave**. A key detail in this demonstration is the explicit instructions for how the API should interact: "The assistant is helpful, creative, clever, and very friendly." Without your explicit instructions, the API might stray and mimic the human it's interacting with. The API might become unfriendly or exhibit other undesirable behavior.
-### Translation
+- **Give the API an identity**. At the start, you have the API respond as an AI created by OpenAI. While the API has no intrinsic identity, the character description helps the API respond in a way that's as close to the truth as possible. You can use character identity descriptions in other ways to create different kinds of chatbots. If you tell the API to respond as a research scientist in biology, you receive intelligent and thoughtful comments from the API similar to what you'd expect from someone with that background.
-In this example we show the API how to convert from English to French:
+## Transform text
-```
+The API is a language model that's familiar with various ways that words and characters can be used to express information. The knowledge data supports transforming text from natural language into code, and translating between other languages and English. The API is also able to understand content on a level that allows it to summarize, convert, and express it in different ways. Let's look at a few examples.
+
+### Translate from one language to another
+
+This demonstration instructs the API on how to convert English language phrases into French:
+
+```console
English: I do not speak French.
French: Je ne parle pas français.

English: See you later!
French: Quelles chambres avez-vous de disponible?
English:
```
-This example works because the API already has a grasp of French, so there's no need to try to teach it this language. Instead, we just need to provide enough examples that API understands that it's converting from one language to another.
+This example works because the API already has a grasp of the French language. You don't need to try to teach the language to the API. You just need to provide enough examples to help the API understand your request to convert from one language to another.
-If you want to translate from English to a language the API is unfamiliar with you'd need to provide it with more examples and a fine-tuned model to do it fluently.
+If you want to translate from English to a language the API doesn't recognize, you need to provide the API with more examples and a fine-tuned model that can produce fluent translations.
-### Conversion
+### Convert between text and emoji
-In this example we convert the name of a movie into emoji. This shows the adaptability of the API to picking up patterns and working with other characters.
+This demonstration converts the name of a movie from text into emoji characters. This example shows the adaptability of the API to pick up patterns and work with other characters.
-```
-Back to Future: 👨👴🚗🕒
-Batman: 🤵🦇
-Transformers: 🚗🤖
-Wonder Woman: 👸🏻👸🏼👸🏽👸🏾👸🏿
-Spider-Man: 🕸🕷🕸🕸🕷🕸
-Winnie the Pooh: 🐻🐼🐻
-The Godfather: 👨👩👧🕵🏻‍♂️👲💥
-Game of Thrones: 🏹🗡🗡🏹
-Spider-Man:
+```console
+Carpool Time: 👨👴👩🚗🕒
+Robots in Cars: 🚗🤖
+Super Femme: 👸🏻👸🏼👸🏽👸🏾👸🏿
+Webs of the Spider: 🕸🕷🕸🕸🕷🕸
+The Three Bears: 🐻🐼🐻
+Mobster Family: 👨👩👧🕵🏻‍♂️👲💥
+Arrows and Swords: 🏹🗡🗡🏹
+Snowmobiles:
```
-## Summarization
+### Summarize text
-The API is able to grasp the context of text and rephrase it in different ways. In this example, the API takes a block of text and creates an explanation a child would understand. This illustrates that the API has a deep grasp of language.
+The API can grasp the context of text and rephrase it in different ways. In this demonstration, the API takes a block of text and creates an explanation that's understandable by a primary-age child. This example illustrates that the API has a deep grasp of language.
-```
+```console
My ten-year-old asked me what this passage means:
"""
A neutron star is the collapsed core of a massive supergiant star, which had a total mass of between 10 and 25 solar masses, possibly more if the star was especially metal-rich.[1] Neutron stars are the smallest and densest stellar objects, excluding black holes and hypothetical white holes, quark stars, and strange stars.[2] Neutron stars have a radius on the order of 10 kilometres (6.2 mi) and a mass of about 1.4 solar masses.[3] They result from the supernova explosion of a massive star, combined with gravitational collapse, that compresses the core past white dwarf star density to that of atomic nuclei.
I rephrased it for him, in plain language a ten-year-old can understand:
""" ```
-In this example we place whatever we want summarized between the triple quotes. It's worth noting that we explain both before and after the text to be summarized what our intent is and who the target audience is for the summary. This is to keep the API from drifting after it processes a large block of text.
+### Guidelines for producing text summaries
-## Completion
+Text summarization often involves supplying large amounts of text to the API. To help prevent the API from drifting after it processes a large block of text, follow these guidelines:
-While all prompts result in completions, it can be helpful to think of text completion as its own task in instances where you want the API to pick up where you left off. For example, if given this prompt, the API will continue the train of thought about vertical farming. You can lower the temperature setting to keep the API more focused on the intent of the prompt or increase it to let it go off on a tangent.
+- **Enclose the text to summarize within triple double quotes**. In this example, you enter three double quotes (""") on a separate line before and after the block of text to summarize. This formatting style clearly defines the start and end of the large block of text to process.
-```
+- **Explain the summary intent and target audience before and after the summary**. Notice that this example differs from the others because you provide instructions to the API two times: before and after the text to process. The redundant instructions help the API focus on your intended task and avoid drift.
+
+## Complete partial text and code inputs
+
+While all prompts result in completions, it can be helpful to think of text completion as its own task in instances where you want the API to pick up where you left off.
+
+In this demonstration, you supply a text prompt to the API that appears to be incomplete. You stop the text entry on the word "and." The API interprets the incomplete text as a trigger to continue your train of thought.
+
+```console
Vertical farming provides a novel solution for producing food locally, reducing transportation costs and
```
-This next prompt shows how you can use completion to help write React components. We send some code to the API, and it's able to continue the rest because it has an understanding of the React library. We recommend using models from our Codex series for tasks that involve understanding or generating code. Currently, we support two Codex models: `code-davinci-002` and `code-cushman-001`. For more information about Codex models, see the [Codex models](../concepts/legacy-models.md#codex-models) section in [Models](../concepts/models.md).
+This next demonstration shows how you can use the completion feature to help write `React` code components. You begin by sending some code to the API. You stop the code entry with an open parenthesis `(`. The API interprets the incomplete code as a trigger to complete the `HeaderComponent` constant definition. The API can complete this code definition because it has an understanding of the corresponding `React` library.
-```
+```javascript
import React from 'react';
const HeaderComponent = () => (
```
+### Guidelines for generating completions
-## Factual responses
+Here are some helpful guidelines for using the API to generate text and code completions:
-The API has a lot of knowledge that it's learned from the data it was trained on. It also has the ability to provide responses that sound very real but are in fact made up. There are two ways to limit the likelihood of the API making up an answer.
+- **Lower the Temperature to keep the API focused**. Set lower values for the `Temperature` setting to instruct the API to provide responses that are focused on the intent described in your prompt.
-**1. Provide a ground truth for the API**
-If you provide the API with a body of text to answer questions about (like a Wikipedia entry) it will be less likely to confabulate a response.
+- **Raise the Temperature to allow the API to tangent**. Set higher values for the `Temperature` setting to allow the API to respond in a manner that's tangential to the intent described in your prompt.
-**2. Use a low probability and show the API how to say "I don't know"**
-If the API understands that in cases where it's less certain about a response that saying "I don't know" or some variation is appropriate, it will be less inclined to make up answers.
+- **Use the GPT-35-Turbo and GPT-4 Azure OpenAI models**. For tasks that involve understanding or generating code, Microsoft recommends using the `GPT-35-Turbo` and `GPT-4` Azure OpenAI models. These models use the new [chat completions format](/azure/ai-services/openai/how-to/chatgpt?pivots=programming-language-chat-completions).
+
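A minimal sketch of both settings follows. It assumes the pre-1.0 `openai` Python package configured for Azure; the endpoint, key, and deployment name are placeholders, and the REST parameter that backs the Studio setting is the lowercase `temperature`.

```python
# Hedged sketch, not the article's own sample: endpoint, key, and
# deployment name are placeholders you must replace.
import openai

openai.api_type = "azure"
openai.api_base = "https://YOUR_RESOURCE_NAME.openai.azure.com/"  # placeholder
openai.api_version = "2023-05-15"
openai.api_key = "YOUR_API_KEY"  # placeholder; store real keys in Azure Key Vault

prompt = ("Vertical farming provides a novel solution for producing food "
          "locally, reducing transportation costs and")

# A low temperature keeps the completion close to the prompt's intent.
focused = openai.Completion.create(
    engine="YOUR_DEPLOYMENT_NAME", prompt=prompt, temperature=0.2, max_tokens=60
)

# A high temperature allows a more tangential continuation.
creative = openai.Completion.create(
    engine="YOUR_DEPLOYMENT_NAME", prompt=prompt, temperature=0.9, max_tokens=60
)

print(focused.choices[0].text)
print(creative.choices[0].text)
```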
+## Generate factual responses
-In this example we give the API examples of questions and answers it knows and then examples of things it wouldn't know and provide question marks. We also set the probability to zero so the API is more likely to respond with a "?" if there's any doubt.
+The API has learned knowledge that's built on actual data reviewed during its training. It uses this learned data to form its responses. However, the API also has the ability to respond in a way that sounds true but is, in fact, fabricated.
-```
+There are a few ways you can limit the likelihood of the API making up an answer in response to your input. You can define the foundation for a true and factual response, so the API drafts its response from your data. You can also set a low `Temperature` probability value and show the API how to respond when the data isn't available for a factual answer.
+
+The following demonstration shows how to teach the API to reply in a more factual manner. You provide the API with examples of questions and answers it understands. You also supply examples of questions ("Q") it might not recognize and use a question mark for the answer ("A") output. This approach teaches the API how to respond to questions it can't answer factually.
+
+As a safeguard, you set the `Temperature` probability to zero so the API is more likely to respond with a question mark (?) if there's any doubt about the true and factual response.
+
+```console
Q: Who is Batman?
A: Batman is a fictional comic book character.
Q: What is Devz9?
A: ?
Q: Who is George Lucas?
-A: George Lucas is American film director and producer famous for creating Star Wars.
+A: George Lucas is an American film director and producer famous for creating Star Wars.
Q: What is the capital of California?
A: Sacramento.
Q: What orbits the Earth?
A: The Moon.
-Q: Who is Fred Rickerson?
+Q: Who is Egad Debunk?
A: ?
Q: What is an atom?
A: An atom is a tiny particle that makes up everything.
Q: How many moons does Mars have?
A: Two, Phobos and Deimos.
Q:
```
-## Working with code
+
+### Guidelines for generating factual responses
+
+Let's review the guidelines that help limit the likelihood of the API making up an answer; a short grounding sketch follows the list:
+
+- **Provide a ground truth for the API**. Instruct the API about what to use as the foundation for creating a true and factual response based on your intent. If you provide the API with a body of text to use to answer questions (like a Wikipedia entry), the API is less likely to fabricate a response.
+
+- **Use a low probability**. Set a low `Temperature` probability value so the API stays focused on your intent and doesn't drift into creating a fabricated or confabulated response.
+
+- **Show the API how to respond with "I don't know"**. You can enter example questions and answers that teach the API to use a specific response for questions for which it can't find a factual answer. In the example, you teach the API to respond with a question mark (?) when it can't find the corresponding data. This approach also helps the API to learn when responding with "I don't know" is more "correct" than making up an answer.
+
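As a minimal sketch of these guidelines, you can prepend a grounding passage and a fallback instruction to the prompt. The sketch reuses the Azure `openai` setup from the earlier sketch; the passage, question, and deployment name are illustrative.

```python
# Hedged sketch: assumes the `openai` Azure configuration from the previous
# sketch; the passage and question are illustrative placeholders.
grounding_passage = (
    "The Moon is Earth's only natural satellite. It orbits Earth at an "
    "average distance of 384,400 km."
)

prompt = (
    "Answer the question using only the passage below. "
    "If the answer isn't in the passage, reply with '?'.\n\n"
    f"Passage: {grounding_passage}\n\n"
    "Q: What orbits the Earth?\nA:"
)

response = openai.Completion.create(
    engine="YOUR_DEPLOYMENT_NAME",  # placeholder deployment name
    prompt=prompt,
    temperature=0,  # zero keeps the answer anchored to the passage
    max_tokens=50,
)
print(response.choices[0].text)
```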
+## Work with code
The Codex model series is a descendant of OpenAI's base GPT-3 series that's been trained on both natural language and billions of lines of code. It's most capable in Python and proficient in over a dozen languages including C#, JavaScript, Go, Perl, PHP, Ruby, Swift, TypeScript, SQL, and even Shell.
-Learn more about generating code completions, with the [working with code guide](./work-with-code.md)
+For more information about generating code completions, see [Codex models and Azure OpenAI Service](./work-with-code.md).
## Next steps
-Learn [how to work with code (Codex)](./work-with-code.md).
-Learn more about the [underlying models that power Azure OpenAI](../concepts/models.md).
+- Learn how to work with the [GPT-35-Turbo and GPT-4 models](/azure/ai-services/openai/how-to/chatgpt?pivots=programming-language-chat-completions).
+- Learn more about the [Azure OpenAI Service models](../concepts/models.md).
ai-services Create Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/create-resource.md
Previously updated : 08/14/2023 Last updated : 08/25/2023 zone_pivot_groups: openai-create-resource--++ recommendations: false
In this article, you review examples for creating and deploying resources in the
::: zone-end +++ ## Next steps - Make API calls and generate text with [Azure OpenAI Service quickstarts](../quickstart.md).
ai-services Fine Tuning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/fine-tuning.md
Title: 'How to customize a model with Azure OpenAI Service'
+ Title: 'Customize a model with Azure OpenAI Service'
-description: Learn how to create your own customized model with Azure OpenAI
+description: Learn how to create your own customized model with Azure OpenAI Service by using Python, the REST APIs, or Azure OpenAI Studio.
Previously updated : 04/05/2023 Last updated : 09/01/2023 zone_pivot_groups: openai-fine-tuning keywords:
-# Learn how to customize a model for your application
-Azure OpenAI Service lets you tailor our models to your personal datasets using a process known as *fine-tuning*. This customization step will let you get more out of the service by providing:
+# Customize a model with Azure OpenAI Service
-- Higher quality results than what you can get just from prompt design-- The ability to train on more examples than can fit into a prompt-- Lower-latency requests
+Azure OpenAI Service lets you tailor our models to your personal datasets by using a process known as *fine-tuning*. This customization step lets you get more out of the service by providing:
+
+- Higher quality results than what you can get just from prompt design.
+- The ability to train on more examples than can fit into a prompt.
+- Lower-latency requests.
A customized model improves on the few-shot learning approach by training the model's weights on your specific prompts and structure. The customized model lets you achieve better results on a wider number of tasks without needing to provide examples in your prompt. The result is less text sent and fewer tokens processed on every API call, saving cost and improving request latency.
-> [!NOTE]
-> There is a breaking change in the `create` fine tunes command in the latest 12-01-2022 GA API. For the latest command syntax consult the [reference documentation](/rest/api/cognitiveservices/azureopenaistable/fine-tunes/create)
+ ::: zone pivot="programming-language-studio"
ai-services Function Calling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/function-calling.md
functions= [
"description": "A comma separated list of features (i.e. beachfront, free wifi, etc.)" } },
- "required": ["location"],
- },
+ "required": ["location"]
+ }
} ]
if response_message.get("function_call"):
    messages.append( # adding assistant response to messages
        {
            "role": response_message["role"],
- "name": response_message["function_call"]["name"],
- "content": response_message["function_call"]["arguments"],
+ "function_call": {
+ "name": function_name,
+ "arguments": response_message["function_call"]["arguments"],
+ },
+ "content": None
        }
    )
    messages.append( # adding function response to messages
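For context, here's a minimal sketch of the corrected message flow around the lines above. It assumes a prior chat completion stored in `response`, an existing `messages` list, and a `function_response` string returned by your own function; those names are assumptions, not part of the snippet above.

```python
# Hedged sketch of the corrected flow: `response`, `messages`, and
# `function_response` are assumed to exist from earlier steps.
response_message = response["choices"][0]["message"]

if response_message.get("function_call"):
    function_name = response_message["function_call"]["name"]

    messages.append(  # adding assistant response to messages
        {
            "role": response_message["role"],
            "function_call": {
                "name": function_name,
                "arguments": response_message["function_call"]["arguments"],
            },
            "content": None,
        }
    )
    messages.append(  # adding function response to messages
        {
            "role": "function",
            "name": function_name,
            "content": function_response,  # JSON string your function returned
        }
    )
```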
ai-services Integrate Synapseml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/integrate-synapseml.md
Title: 'How-to - Use Azure OpenAI Service with large datasets'
+ Title: 'Use Azure OpenAI Service with large datasets'
-description: Walkthrough on how to integrate Azure OpenAI with SynapseML and Apache Spark to apply large language models at a distributed scale.
+description: Learn how to integrate Azure OpenAI Service with SynapseML and Apache Spark to apply large language models at a distributed scale.
Previously updated : 08/04/2022 Last updated : 09/01/2023 recommendations: false
recommendations: false
# Use Azure OpenAI with large datasets
-Azure OpenAI can be used to solve a large number of natural language tasks through prompting the completion API. To make it easier to scale your prompting workflows from a few examples to large datasets of examples, we have integrated the Azure OpenAI Service with the distributed machine learning library [SynapseML](https://www.microsoft.com/research/blog/synapseml-a-simple-multilingual-and-massively-parallel-machine-learning-library/). This integration makes it easy to use the [Apache Spark](https://spark.apache.org/) distributed computing framework to process millions of prompts with the OpenAI service. This tutorial shows how to apply large language models at a distributed scale using Azure Open AI and Azure Synapse Analytics.
+Azure OpenAI can be used to solve a large number of natural language tasks through prompting the completion API. To make it easier to scale your prompting workflows from a few examples to large datasets of examples, Azure OpenAI Service is integrated with the distributed machine learning library [SynapseML](https://www.microsoft.com/research/blog/synapseml-a-simple-multilingual-and-massively-parallel-machine-learning-library/). This integration makes it easy to use the [Apache Spark](https://spark.apache.org/) distributed computing framework to process millions of prompts with Azure OpenAI Service.
+
+This tutorial shows how to apply large language models at a distributed scale by using Azure OpenAI and Azure Synapse Analytics.
## Prerequisites -- An Azure subscription - <a href="https://azure.microsoft.com/free/cognitive-services" target="_blank">Create one for free</a>-- Access granted to Azure OpenAI in the desired Azure subscription
+- An Azure subscription. <a href="https://azure.microsoft.com/free/cognitive-services" target="_blank">Create one for free</a>.
- Currently, access to this service is granted only by application. You can apply for access to Azure OpenAI by completing the form at <a href="https://aka.ms/oai/access" target="_blank">https://aka.ms/oai/access</a>. Open an issue on this repo to contact us if you have an issue.
-- An Azure OpenAI resource ΓÇô [create a resource](create-resource.md?pivots=web-portal#create-a-resource)-- An Apache Spark cluster with SynapseML installed - create a serverless Apache Spark pool [here](../../../synapse-analytics/get-started-analyze-spark.md#create-a-serverless-apache-spark-pool)
+- Access granted to Azure OpenAI in your Azure subscription.
-We recommend [creating a Synapse workspace](../../../synapse-analytics/get-started-create-workspace.md), but an Azure Databricks, HDInsight, or Spark on Kubernetes, or even a Python environment with the `pyspark` package, will also work.
+ Currently, you must submit an application to access Azure OpenAI Service. To apply for access, complete <a href="https://aka.ms/oai/access" target="_blank">this form</a>. If you need assistance, open an issue on this repo to contact Microsoft.
-## Import this guide as a notebook
+- An Azure OpenAI resource. [Create a resource](create-resource.md?pivots=web-portal#create-a-resource).
-The next step is to add this code into your Spark cluster. You can either create a notebook in your Spark platform and copy the code into this notebook to run the demo, or download the notebook and import it into Synapse Analytics.
+- An Apache Spark cluster with SynapseML installed.
-1. Import the notebook [into the Synapse Workspace](../../../synapse-analytics/spark/apache-spark-development-using-notebooks.md#create-a-notebook) or, if using Databricks, [into the Databricks Workspace](/azure/databricks/notebooks/notebooks-manage#create-a-notebook)
-1. Install SynapseML on your cluster. See the installation instructions for Synapse at the bottom of [the SynapseML website](https://microsoft.github.io/SynapseML/). This requires pasting another cell at the top of the notebook you imported
-1. Connect your notebook to a cluster and follow along, editing and running the cells below.
+ - Create a [serverless Apache Spark pool](../../../synapse-analytics/get-started-analyze-spark.md#create-a-serverless-apache-spark-pool).
+ - To install SynapseML for your Apache Spark cluster, see [Install SynapseML](#install-synapseml).
-## Fill in your service information
+> [!NOTE]
+> This article is designed to work with the [Azure OpenAI Service legacy models](/azure/ai-services/openai/concepts/legacy-models) like `Text-Davinci-003`, which support prompt-based completions. Newer models like the current `GPT-3.5 Turbo` and `GPT-4` model series are designed to work with the new chat completion API that expects a specially formatted array of messages as input.
+>
+> The Azure OpenAI SynapseML integration supports the latest models via the [OpenAIChatCompletion()](https://github.com/microsoft/SynapseML/blob/0836e40efd9c48424e91aa10c8aa3fbf0de39f31/cognitive/src/main/scala/com/microsoft/azure/synapse/ml/cognitive/openai/OpenAIChatCompletion.scala#L24) transformer, which isn't demonstrated in this article. After the [release of the GPT-3.5 Turbo Instruct model](https://techcommunity.microsoft.com/t5/azure-ai-services-blog/announcing-updates-to-azure-openai-service-models/ba-p/3866757), the newer model will be the preferred model to use with this article.
-Next, edit the cell in the notebook to point to your service. In particular, set the `resource_name`, `deployment_name`, `location`, and `key` variables to the corresponding values for your Azure OpenAI resource.
+We recommend that you [create an Azure Synapse workspace](../../../synapse-analytics/get-started-create-workspace.md). However, you can also use Azure Databricks, Azure HDInsight, Spark on Kubernetes, or the Python environment with the `pyspark` package.
-> [!IMPORTANT]
-> Remember to remove the key from your code when you're done, and never post it publicly. For production, use a secure way of storing and accessing your credentials like [Azure Key Vault](../../../key-vault/general/overview.md). See the Azure AI services [security](../../security-features.md) article for more information.
+## Use example code as a notebook
+
+To use the example code in this article with your Apache Spark cluster, complete the following steps:
+
+1. Prepare a new or existing notebook.
+
+1. Connect your Apache Spark cluster with your notebook.
+
+1. Install SynapseML for your Apache Spark cluster in your notebook.
+
+1. Configure the notebook to work with your Azure OpenAI service resource.
+
+### Prepare your notebook
+
+You can create a new notebook in your Apache Spark platform, or you can import an existing notebook. After you have a notebook in place, you can add each snippet of example code in this article as a new cell in your notebook.
+
+- To use a notebook in Azure Synapse Analytics, see [Create, develop, and maintain Synapse notebooks in Azure Synapse Analytics](../../../synapse-analytics/spark/apache-spark-development-using-notebooks.md).
+
+- To use a notebook in Azure Databricks, see [Manage notebooks for Azure Databricks](/azure/databricks/notebooks/notebooks-manage).
+
+- (Optional) Download [this demonstration notebook](https://github.com/microsoft/SynapseML/blob/master/docs/Explore%20Algorithms/OpenAI/OpenAI.ipynb) and connect it with your workspace. During the download process, select **Raw**, and then save the file.
+
+### Connect your cluster
+
+When you have a notebook ready, connect or _attach_ your notebook to an Apache Spark cluster.
+
+### Install SynapseML
+
+To run the exercises, you need to install SynapseML on your Apache Spark cluster. For more information, see [Install SynapseML](https://microsoft.github.io/SynapseML/docs/Get%20Started/Install%20SynapseML/) on the [SynapseML website](https://microsoft.github.io/SynapseML/).
+
+To install SynapseML, create a new cell at the top of your notebook and run the following code.
+
+- For a **Spark3.2 pool**, use the following code:
+
+ ```python
+ %%configure -f
+ {
+ "name": "synapseml",
+ "conf": {
+ "spark.jars.packages": "com.microsoft.azure:synapseml_2.12:0.11.2,org.apache.spark:spark-avro_2.12:3.3.1",
+ "spark.jars.repositories": "https://mmlspark.azureedge.net/maven",
+ "spark.jars.excludes": "org.scala-lang:scala-reflect,org.apache.spark:spark-tags_2.12,org.scalactic:scalactic_2.12,org.scalatest:scalatest_2.12,com.fasterxml.jackson.core:jackson-databind",
+ "spark.yarn.user.classpath.first": "true",
+ "spark.sql.parquet.enableVectorizedReader": "false",
+ "spark.sql.legacy.replaceDatabricksSparkAvro.enabled": "true"
+ }
+ }
+ ```
+
+- For a **Spark3.3 pool**, use the following code:
+
+ ```python
+ %%configure -f
+ {
+ "name": "synapseml",
+ "conf": {
+ "spark.jars.packages": "com.microsoft.azure:synapseml_2.12:0.11.2-spark3.3",
+ "spark.jars.repositories": "https://mmlspark.azureedge.net/maven",
+ "spark.jars.excludes": "org.scala-lang:scala-reflect,org.apache.spark:spark-tags_2.12,org.scalactic:scalactic_2.12,org.scalatest:scalatest_2.12,com.fasterxml.jackson.core:jackson-databind",
+ "spark.yarn.user.classpath.first": "true",
+ "spark.sql.parquet.enableVectorizedReader": "false"
+ }
+ }
+ ```
+
+The connection process can take several minutes.
+
+### Configure the notebook
+
+Create a new code cell and run the following code to configure the notebook for your service. Set the `resource_name`, `deployment_name`, `location`, and `key` variables to the corresponding values for your Azure OpenAI resource.
```python
import os

# Replace the following values with your Azure OpenAI resource information
-resource_name = "RESOURCE_NAME" # The name of your Azure OpenAI resource.
-deployment_name = "DEPLOYMENT_NAME" # The name of your Azure OpenAI deployment.
-location = "RESOURCE_LOCATION" # The location or region ID for your resource.
-key = "RESOURCE_API_KEY" # The key for your resource.
+resource_name = "<RESOURCE_NAME>" # The name of your Azure OpenAI resource.
+deployment_name = "<DEPLOYMENT_NAME>" # The name of your Azure OpenAI deployment.
+location = "<RESOURCE_LOCATION>" # The location or region ID for your resource.
+key = "<RESOURCE_API_KEY>" # The key for your resource.
assert key is not None and resource_name is not None
```
+Now you're ready to start running the example code.
+
+> [!IMPORTANT]
+> Remember to remove the key from your code when you're done, and never post it publicly. For production, use a secure way of storing and accessing your credentials like [Azure Key Vault](../../../key-vault/general/overview.md). For more information, see [Azure AI services security](../../security-features.md).
++ ## Create a dataset of prompts
-Next, create a dataframe consisting of a series of rows, with one prompt per row.
+The first step is to create a dataframe consisting of a series of rows, with one prompt per row.
-You can also load data directly from Azure Data Lake Storage (ADLS) or other databases. For more information about loading and preparing Spark dataframes, see the [Apache Spark data loading guide](https://spark.apache.org/docs/latest/sql-data-sources.html).
+You can also load data directly from Azure Data Lake Storage or other databases. For more information about loading and preparing Spark dataframes, see the [Apache Spark Data Sources](https://spark.apache.org/docs/latest/sql-data-sources.html).
```python
df = spark.createDataFrame(
```
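For example, a minimal prompt dataframe with three illustrative rows might look like the following sketch. The column name `prompt` matches the prompt column the completion client expects.

```python
# Hedged sketch: the prompt text is an illustrative placeholder.
df = spark.createDataFrame(
    [
        ("Hello my name is",),
        ("The best code is code that's",),
        ("SynapseML is ",),
    ]
).toDF("prompt")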
## Create the OpenAICompletion Apache Spark client
-To apply the OpenAI Completion service to the dataframe that you just created, create an `OpenAICompletion` object that serves as a distributed client. Parameters of the service can be set either with a single value, or by a column of the dataframe with the appropriate setters on the `OpenAICompletion` object. Here, we're setting `maxTokens` to 200. A token is around four characters, and this limit applies to the sum of the prompt and the result. We're also setting the `promptCol` parameter with the name of the prompt column in the dataframe.
+To apply Azure OpenAI Completion generation to the dataframe, create an `OpenAICompletion` object that serves as a distributed client. Parameters can be set either with a single value, or by a column of the dataframe with the appropriate setters on the `OpenAICompletion` object.
+
+In this example, you set the `maxTokens` parameter to 200. A token is around four characters, and this limit applies to the sum of the prompt and the result. You also set the `promptCol` parameter with the name of the prompt column in the dataframe, such as **prompt**.
```python
from synapse.ml.cognitive import OpenAICompletion

completion = (
```
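A complete client definition might look like the following sketch. The setter names come from the SynapseML `OpenAICompletion` transformer, and the variables are the ones you configured earlier in the notebook.

```python
# Hedged sketch: reuses key, deployment_name, and resource_name from the
# configuration cell earlier in the notebook.
completion = (
    OpenAICompletion()
    .setSubscriptionKey(key)              # resource key
    .setDeploymentName(deployment_name)   # completions model deployment
    .setCustomServiceName(resource_name)  # Azure OpenAI resource name
    .setMaxTokens(200)                    # cap on prompt plus completion tokens
    .setPromptCol("prompt")               # input column in the dataframe
    .setErrorCol("error")                 # column for per-row errors
    .setOutputCol("completions")          # column for the API responses
)
```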
## Transform the dataframe with the OpenAICompletion client
-Now that you have the dataframe and the completion client, you can transform your input dataset and add a column called `completions` with all of the information the service adds. We'll select out just the text for simplicity.
+After you have the dataframe and completion client, you can transform your input dataset and add a column called `completions` with all of the text generated from the Azure OpenAI completion API. In this example, select only the text for simplicity.
```python
from pyspark.sql.functions import col
display(completed_df.select(
col("prompt"), col("error"), col("completions.choices.text").getItem(0).alias("text"))) ```
-Your output should look something like the following example; note that the completion text can vary.
+The following image shows example output with completions in Azure Synapse Analytics Studio. Keep in mind that completions text can vary. Your output might look different.
-| **prompt** | **error** | **text** |
-||--| |
-| Hello my name is | undefined | Makaveli I'm eighteen years old and I want to<br>be a rapper when I grow up I love writing and making music I'm from Los<br>Angeles, CA |
-| The best code is code that's | undefined | understandable This is a subjective statement,<br>and there is no definitive answer. |
-| SynapseML is | undefined | A machine learning algorithm that is able to learn how to predict the future outcome of events. |
-## Other usage examples
+## Explore other usage scenarios
+
+Here are some other use cases for working with Azure OpenAI Service and large datasets.
### Improve throughput with request batching
-The example above makes several requests to the service, one for each prompt. To complete multiple prompts in a single request, use batch mode. First, in the `OpenAICompletion` object, instead of setting the Prompt column to "Prompt", specify "batchPrompt" for the BatchPrompt column.
-To do so, create a dataframe with a list of prompts per row.
+You can use Azure OpenAI Service with large datasets to improve throughput with request batching. In the previous example, you make several requests to the service, one for each prompt. To complete multiple prompts in a single request, you can use batch mode.
+
+In the `OpenAICompletion` object definition, you specify the `"batchPrompt"` value to configure the dataframe to use a **batchPrompt** column. Create the dataframe with a list of prompts for each row.
> [!NOTE]
-> There is currently a limit of 20 prompts in a single request and a limit of 2048 "tokens", or approximately 1500 words.
+> There's currently a limit of 20 prompts in a single request and a limit of 2048 tokens, or approximately 1500 words.
```python
batch_df = spark.createDataFrame(
).toDF("batchPrompt")
```
-Next we create the `OpenAICompletion` object. Rather than setting the prompt column, set the batchPrompt column if your column is of type `Array[String]`.
+Next, create the `OpenAICompletion` object. If your column is of type `Array[String]`, set the `batchPromptCol` value for the column heading, rather than the `promptCol` value.
```python
batch_completion = (
)
```
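A complete batch client might look like the following sketch, which mirrors the earlier client but swaps `setPromptCol` for `setBatchPromptCol`.

```python
# Hedged sketch: same configuration variables as before, with the
# batch prompt column wired in place of the single prompt column.
batch_completion = (
    OpenAICompletion()
    .setSubscriptionKey(key)
    .setDeploymentName(deployment_name)
    .setCustomServiceName(resource_name)
    .setMaxTokens(200)
    .setBatchPromptCol("batchPrompt")  # Array[String] column of prompts
    .setErrorCol("error")
    .setOutputCol("completions")
)
```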
-In the call to transform, a request will then be made per row. Because there are multiple prompts in a single row, each request will be sent with all prompts in that row. The results will contain a row for each row in the request.
+In the call to `transform`, one request is made per row. Because there are multiple prompts in a single row, each request is sent with all prompts in that row. The results contain a row for each row in the request.
```python
completed_batch_df = batch_completion.transform(batch_df).cache()
display(completed_batch_df)
```

> [!NOTE]
-> There is currently a limit of 20 prompts in a single request and a limit of 2048 "tokens", or approximately 1500 words.
+> There's currently a limit of 20 prompts in a single request and a limit of 2048 tokens, or approximately 1500 words.
-### Using an automatic mini-batcher
+### Use an automatic mini-batcher
-If your data is in column format, you can transpose it to row format using SynapseML's `FixedMiniBatcherTransformer`.
+You can use Azure OpenAI Service with large datasets to transpose the data format. If your data is in column format, you can transpose it to row format by using the SynapseML `FixedMiniBatchTransformer` object.
```python
from pyspark.sql.types import StringType
from synapse.ml.stages import FixedMiniBatchTransformer
from synapse.ml.core.spark import FluentAPI

completed_autobatch_df = (df
- .coalesce(1) # Force a single partition so that our little 4-row dataframe makes a batch of size 4, you can remove this step for large datasets
+ .coalesce(1) # Force a single partition so your little 4-row dataframe makes a batch of size 4 - you can remove this step for large datasets.
    .mlTransform(FixedMiniBatchTransformer(batchSize=4))
    .withColumnRenamed("prompt", "batchPrompt")
    .mlTransform(batch_completion))
display(completed_autobatch_df)
```
### Prompt engineering for translation
-Azure OpenAI can solve many different natural language tasks through [prompt engineering](completions.md). Here, we show an example of prompting for language translation:
+Azure OpenAI can solve many different natural language tasks through _prompt engineering_. For more information, see [Learn how to generate or manipulate text](completions.md). In this example, you can prompt for language translation:
```python
translate_df = spark.createDataFrame(
display(completion.transform(translate_df))
```
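A fuller version of the translation example might look like the following sketch; the few-shot prompts are illustrative.

```python
# Hedged sketch: illustrative few-shot translation prompts in a "prompt" column.
translate_df = spark.createDataFrame(
    [
        ("Japanese: Ookina hako\nEnglish: Big box\nJapanese: Midori tako\nEnglish:",),
        ("French: Quel heure et il au Montreal?\nEnglish: What time is it in Montreal?\nFrench: Ou est le poulet?\nEnglish:",),
    ]
).toDF("prompt")

display(completion.transform(translate_df))
```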
### Prompt for question answering
-Here, we prompt the GPT-3 model for general-knowledge question answering:
+Azure OpenAI also supports prompting the `Text-Davinci-003` model for general-knowledge question answering:
```python
qa_df = spark.createDataFrame(
display(completion.transform(qa_df))
```
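A fuller version of the question-answering example might look like the following sketch; the questions are illustrative.

```python
# Hedged sketch: illustrative general-knowledge questions in a "prompt" column.
qa_df = spark.createDataFrame(
    [
        ("Q: Where is the Grand Canyon?\nA:",),
        ("Q: What is the capital of California?\nA:",),
    ]
).toDF("prompt")

display(completion.transform(qa_df))
```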
+## Next steps
+
+- Learn how to work with the [GPT-35 Turbo and GPT-4 models](/azure/ai-services/openai/how-to/chatgpt?pivots=programming-language-chat-completions).
+- Learn more about the [Azure OpenAI Service models](../concepts/models.md).
ai-services Manage Costs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/manage-costs.md
Previously updated : 04/05/2023 Last updated : 08/22/2023 # Plan to manage costs for Azure OpenAI Service
-This article describes how you plan for and manage costs for Azure OpenAI Service. Before you deploy the service, you can use the Azure pricing calculator to estimate costs for Azure OpenAI. Later, as you deploy Azure resources, review the estimated costs. After you've started using Azure OpenAI resources, use Cost Management features to set budgets and monitor costs. You can also review forecasted costs and identify spending trends to identify areas where you might want to act. Costs for Azure OpenAI Service are only a portion of the monthly costs in your Azure bill. Although this article explains how to plan for and manage costs for Azure OpenAI, you're billed for all Azure services and resources used in your Azure subscription, including the third-party services.
+This article describes how you can plan for and manage costs for Azure OpenAI Service. Before you deploy the service, use the Azure pricing calculator to estimate costs for Azure OpenAI. Later, as you deploy Azure resources, review the estimated costs. After you start using Azure OpenAI resources, use Cost Management features to set budgets and monitor costs.
+
+You can also review forecasted costs and identify spending trends to identify areas where you might want to act. Costs for Azure OpenAI Service are only a portion of the monthly costs in your Azure bill. Although this article is about planning for and managing costs for Azure OpenAI, you're billed for all Azure services and resources used in your Azure subscription, including the third-party services.
## Prerequisites
Cost analysis in Cost Management supports most Azure account types, but not all
Use the [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator/) to estimate the costs of using Azure OpenAI.
-## Understand the full billing model for Azure OpenAI Service
-
-Azure OpenAI Service runs on Azure infrastructure that accrues costs when you deploy new resources. It's important to understand that there could be other additional infrastructure costs that might accrue.
+## Understand the Azure OpenAI full billing model
-### How you're charged for Azure OpenAI Service
+Azure OpenAI Service runs on Azure infrastructure that accrues costs when you deploy new resources. Other infrastructure costs might also accrue. The following sections describe how you're charged for Azure OpenAI Service.

### Base series and Codex series models

Azure OpenAI base series and Codex series models are charged per 1,000 tokens. Costs vary depending on which model series you choose: Ada, Babbage, Curie, Davinci, or Code-Cushman.
### Base series and Codex series models Azure OpenAI base series and Codex series models are charged per 1,000 tokens. Costs vary depending on which model series you choose: Ada, Babbage, Curie, Davinci, or Code-Cushman.
-Our models understand and process text by breaking it down into tokens. For reference, each token is roughly four characters for typical English text.
+Azure OpenAI models understand and process text by breaking it down into tokens. For reference, each token is roughly four characters for typical English text.
-Token costs are for both input and output. For example, if you have a 1,000 token JavaScript code sample that you ask an Azure OpenAI model to convert to Python. You would be charged approximately 1,000 tokens for the initial input request sent, and 1,000 more tokens for the output that is received in response for a total of 2,000 tokens.
+Token costs are for both input and output. For example, suppose you have a 1,000 token JavaScript code sample that you ask an Azure OpenAI model to convert to Python. You would be charged approximately 1,000 tokens for the initial input request sent, and 1,000 more tokens for the output that is received in response for a total of 2,000 tokens.
-In practice, for this type of completion call the token input/output wouldn't be perfectly 1:1. A conversion from one programming language to another could result in a longer or shorter output depending on many different factors including the value assigned to the max_tokens parameter.
+In practice, for this type of completion call, the token input/output wouldn't be perfectly 1:1. A conversion from one programming language to another could result in a longer or shorter output depending on many factors. One such factor is the value assigned to the `max_tokens` parameter.
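As a quick back-of-the-envelope sketch, the token arithmetic looks like this; the per-1,000-token rate is a made-up placeholder, not an actual Azure OpenAI price.

```python
# Hedged sketch: the rate is a placeholder; check the Azure pricing page
# for real per-model rates.
input_tokens = 1000    # tokens in the JavaScript sample you send
output_tokens = 1000   # tokens in the Python translation you receive
rate_per_1k = 0.002    # placeholder USD per 1,000 tokens

cost = (input_tokens + output_tokens) / 1000 * rate_per_1k
print(f"Estimated cost: ${cost:.4f}")  # $0.0040 at the placeholder rate
```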
### Base Series and Codex series fine-tuned models
Azure OpenAI fine-tuned models are charged based on three factors:
- Hosting hours - Inference per 1,000 tokens
-The hosting hours cost is important to be aware of since once a fine-tuned model is deployed it continues to incur an hourly cost regardless of whether you're actively using it. Fine-tuned model costs should be monitored closely.
+The hosting hours cost is important to be aware of since after a fine-tuned model is deployed, it continues to incur an hourly cost regardless of whether you're actively using it. Monitor fine-tuned model costs closely.
[!INCLUDE [Fine-tuning deletion](../includes/fine-tune.md)] ### Other costs that might accrue with Azure OpenAI Service
-Keep in mind that enabling capabilities like sending data to Azure Monitor Logs, alerting, etc. incurs additional costs for those services. These costs are visible under those other services and at the subscription level, but aren't visible when scoped just to your Azure OpenAI resource.
+Enabling capabilities such as sending data to Azure Monitor Logs and alerting incurs extra costs for those services. These costs are visible under those other services and at the subscription level, but aren't visible when scoped just to your Azure OpenAI resource.
### Using Azure Prepayment with Azure OpenAI Service
-You can pay for Azure OpenAI Service charges with your Azure Prepayment credit. However, you can't use Azure Prepayment credit to pay for charges for third party products and services including those from the Azure Marketplace.
+You can pay for Azure OpenAI Service charges with your Azure Prepayment credit. However, you can't use Azure Prepayment credit to pay for charges for third party products and services including those products and services found in the Azure Marketplace.
## Monitor costs
-As you use Azure resources with Azure OpenAI, you incur costs. Azure resource usage unit costs vary by time intervals (seconds, minutes, hours, and days) or by unit usage (bytes, megabytes, and so on.) As soon as Azure OpenAI use starts, costs can be incurred and you can see the costs in [cost analysis](../../../cost-management/quick-acm-cost-analysis.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
+As you use Azure resources with Azure OpenAI, you incur costs. Azure resource usage unit costs vary by time intervals, such as seconds, minutes, hours, and days, or by unit usage, such as bytes and megabytes. As soon as Azure OpenAI use starts, costs can be incurred and you can see the costs in the [cost analysis](../../../cost-management/quick-acm-cost-analysis.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
-When you use cost analysis, you view Azure OpenAI costs in graphs and tables for different time intervals. Some examples are by day, current and prior month, and year. You also view costs against budgets and forecasted costs. Switching to longer views over time can help you identify spending trends. And you see where overspending might have occurred. If you've created budgets, you can also easily see where they're exceeded.
+When you use cost analysis, you view Azure OpenAI costs in graphs and tables for different time intervals. Some examples are by day, current and prior month, and year. You also view costs against budgets and forecasted costs. Switching to longer views over time can help you identify spending trends. You can see where overspending might have occurred. If you've created budgets, you can also easily see where they're exceeded.
To view Azure OpenAI costs in cost analysis:

1. Sign in to the Azure portal.
2. Select one of your Azure OpenAI resources.
3. Under **Resource Management**, select **Cost analysis**.
-4. By default cost analysis is scoped to the individual Azure OpenAI resource.
+4. By default, cost analysis is scoped to the individual Azure OpenAI resource.
:::image type="content" source="../media/manage-costs/resource-view.png" alt-text="Screenshot of cost analysis dashboard scoped to an Azure OpenAI resource." lightbox="../media/manage-costs/resource-view.png":::
-To understand the breakdown of what makes up that cost, it can help to modify **Group by** to **Meter** and in this case switching the chart type to **Line**. You can now see that for this particular resource the source of the costs is from three different model series with **Text-Davinci Tokens** representing the bulk of the costs.
+To understand the breakdown of what makes up that cost, it can help to modify **Group by** to **Meter** and to switch the chart type to **Line**. You can now see that for this particular resource, the source of the costs comes from three different model series, with **Text-Davinci Tokens** representing the bulk of the costs.
:::image type="content" source="../media/manage-costs/grouping.png" alt-text="Screenshot of cost analysis dashboard with group by set to meter." lightbox="../media/manage-costs/grouping.png":::
-It's important to understand scope when evaluating costs associated with Azure OpenAI. If your resources are part of the same resource group you can scope Cost Analysis at that level to understand the effect on costs. If your resources are spread across multiple resource groups you can scope to the subscription level.
+It's important to understand scope when you evaluate costs associated with Azure OpenAI. If your resources are part of the same resource group, you can scope Cost Analysis at that level to understand the effect on costs. If your resources are spread across multiple resource groups, you can scope to the subscription level.
+
+When scoped at a higher level, you often need to add more filters to focus on Azure OpenAI usage. When scoped at the subscription level, you see many other resources that you might not care about in the context of Azure OpenAI cost management. When you scope at the subscription level, we recommend that you navigate to the full **Cost analysis tool** under the **Cost Management** service.
+
+Here's an example of how to use the **Cost analysis tool** to see your accumulated costs for a subscription or resource group:
+
+1. Search for *Cost Management* in the top Azure search bar to navigate to the full service experience, which includes more options such as creating budgets.
+1. If necessary, select **change** if the **Scope:** isn't pointing to the resource group or subscription you want to analyze.
+1. On the left, select **Reporting + analytics** > **Cost analysis**.
+1. On the **All views** tab, select **Accumulated costs**.
+
-However, when scoped at a higher level you often need to add additional filters to be able to zero in on Azure OpenAI usage. When scoped at the subscription level we see a number of other resources that we may not care about in the context of Azure OpenAI cost management. When scoping at the subscription level, we recommend navigating to the full **Cost analysis tool** under the **Cost Management** service. Search for **"Cost Management"** in the top Azure search bar to navigate to the full service experience, which includes more options like creating budgets.
+The cost analysis dashboard shows the accumulated costs that are analyzed depending on what you've specified for **Scope**.
:::image type="content" source="../media/manage-costs/subscription.png" alt-text="Screenshot of cost analysis dashboard with scope set to subscription." lightbox="../media/manage-costs/subscription.png":::
-If you try to add a filter by service, you'll find that you can't find Azure OpenAI in the list. This is because Azure OpenAI has commonality with a subset of Azure AI services where the service level filter is **Cognitive Services**, but if you want to see all Azure OpenAI resources across a subscription without any other type of Azure AI services resources you need to instead scope to **Service tier: Azure OpenAI**:
+If you try to add a filter by service, you find that Azure OpenAI isn't in the list. This situation occurs because Azure OpenAI has commonality with a subset of Azure AI services where the service level filter is **Cognitive Services**. If you want to see all Azure OpenAI resources across a subscription without any other type of Azure AI services resources, instead scope to **Service tier: Azure OpenAI**:
:::image type="content" source="../media/manage-costs/service-tier.png" alt-text="Screenshot of cost analysis dashboard with service tier highlighted." lightbox="../media/manage-costs/service-tier.png"::: ## Create budgets
-You can create [budgets](../../../cost-management/tutorial-acm-create-budgets.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) to manage costs and create [alerts](../../../cost-management-billing/costs/cost-mgt-alerts-monitor-usage-spending.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) that automatically notify stakeholders of spending anomalies and overspending risks. Alerts are based on spending compared to budget and cost thresholds. Budgets and alerts are created for Azure subscriptions and resource groups, so they're useful as part of an overall cost monitoring strategy.
+You can create [budgets](../../../cost-management-billing/costs/tutorial-acm-create-budgets.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) to manage costs and create [alerts](../../../cost-management-billing/costs/cost-mgt-alerts-monitor-usage-spending.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) that notify stakeholders of spending anomalies and overspending risks. Alerts are based on spending compared to budget and cost thresholds. You create budgets and alerts for Azure subscriptions and resource groups. They're useful as part of an overall cost monitoring strategy.
-Budgets can be created with filters for specific resources or services in Azure if you want more granularity present in your monitoring. Filters help ensure that you don't accidentally create new resources that cost you additional money. For more information about the filter options available when you create a budget, see [Group and filter options](../../../cost-management-billing/costs/group-filter.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
+You can create budgets with filters for specific resources or services in Azure if you want more granularity present in your monitoring. Filters help ensure that you don't accidentally create new resources that cost you more money. For more information about the filter options available when you create a budget, see [Group and filter options](../../../cost-management-billing/costs/group-filter.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
> [!IMPORTANT]
-> While OpenAI has an option for hard limits that will prevent you from going over your budget, Azure OpenAI does not currently provide this functionality. You are able to kick off automation from action groups as part of your budget notifications to take more advanced actions, but this requires additional custom development on your part.
+> While OpenAI has an option for hard limits that prevent you from going over your budget, Azure OpenAI doesn't currently provide this functionality. You can kick off automation from action groups as part of your budget notifications to take more advanced actions, but this requires additional custom development on your part.
## Export cost data
-You can also [export your cost data](../../../cost-management-billing/costs/tutorial-export-acm-data.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) to a storage account. This is helpful when you need or others to do additional data analysis for costs. For example, a finance team can analyze the data using Excel or Power BI. You can export your costs on a daily, weekly, or monthly schedule and set a custom date range. Exporting cost data is the recommended way to retrieve cost datasets.
+You can also [export your cost data](../../../cost-management-billing/costs/tutorial-export-acm-data.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) to a storage account, which is helpful when you need others to do extra data analysis for costs. For example, a finance team can analyze the data using Excel or Power BI. You can export your costs on a daily, weekly, or monthly schedule and set a custom date range. We recommend exporting cost data as the way to retrieve cost datasets.
## Next steps
ai-services Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/managed-identity.md
In the following sections, you'll use the Azure CLI to assign roles, and obtain
- An Azure subscription - <a href="https://azure.microsoft.com/free/cognitive-services" target="_blank">Create one for free</a> - Access granted to the Azure OpenAI Service in the desired Azure subscription
+- Currently, access to this service is granted only by application. You can apply for access to Azure OpenAI by completing the form at [https://aka.ms/oai/access](https://aka.ms/oai/access). Open an issue on this repo to contact us if you have an issue.
+
+- [Custom subdomain names are required to enable features like Azure Active Directory (Azure AD) for authentication](../../cognitive-services-custom-subdomains.md).
- Currently, access to this service is granted only by application. You can apply for access to Azure OpenAI by completing the form at <a href="https://aka.ms/oai/access" target="_blank">https://aka.ms/oai/access</a>. Open an issue on this repo to contact us if you have an issue.
- Azure CLI - [Installation Guide](/cli/azure/install-azure-cli) - The following Python libraries: os, requests, json
Assigning yourself to the "Cognitive Services User" role will allow you to use y
```

4. Make an API call

   Use the access token to authorize your API call by setting the `Authorization` header value.
- ```bash
- curl ${endpoint%/}/openai/deployments/YOUR_DEPLOYMENT_NAME/completions?api-version=2023-05-15 \
- -H "Content-Type: application/json" \
- -H "Authorization: Bearer $accessToken" \
- -d '{ "prompt": "Once upon a time" }'
- ```
+
+```bash
+curl ${endpoint%/}/openai/deployments/YOUR_DEPLOYMENT_NAME/completions?api-version=2023-05-15 \
+-H "Content-Type: application/json" \
+-H "Authorization: Bearer $accessToken" \
+-d '{ "prompt": "Once upon a time" }'
+```
## Authorize access to managed identities
Before you can use managed identities for Azure resources to authorize access to
- [Azure Resource Manager client libraries](../../../active-directory/managed-identities-azure-resources/qs-configure-sdk-windows-vm.md)

For more information about managed identities, see [Managed identities for Azure resources](../../../active-directory/managed-identities-azure-resources/overview.md).
ai-services Role Based Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/role-based-access-control.md
Previously updated : 08/02/2022 Last updated : 08/30/2022 recommendations: false
This role is typically granted access at the resource group level for a user in
✅ Use the Chat, Completions, and DALL-E (preview) playground experiences to generate text and images with any models that have already been deployed to this Azure OpenAI resource <br> ✅ Create customized content filters <br> ✅ Add a data source for the use your data feature <br>
+✅ Create new model deployments or edit existing model deployments (via API) <br>
A user with only this role assigned would be unable to:
-❌ Create new model deployments or edit existing model deployments <br>
+❌ Create new model deployments or edit existing model deployments (via Azure OpenAI Studio) <br>
❌ Access quota <br> ❌ Create custom fine-tuned models <br> ❌ Upload datasets for fine-tuning
This role provides little value by itself and is instead typically assigned in c
#### Cognitive Services Usages Reader + Cognitive Services OpenAI User
-All the capabilities of Cognitive Services OpenAI plus the ability to:
+All the capabilities of Cognitive Services OpenAI User plus the ability to:
✅ View quota allocations in Azure OpenAI Studio
All the capabilities of Cognitive Services OpenAI Contributor plus the ability t
All the capabilities of Cognitive Services Contributor plus the ability to: ✅ View & edit quota allocations in Azure OpenAI Studio <br>
-✅ Create new model deployments or edit existing model deployments <br>
+✅ Create new model deployments or edit existing model deployments (via Azure OpenAI Studio) <br>
## Common Issues
ai-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/quickstart.md
-+ Previously updated : 05/23/2023 Last updated : 08/23/2023 zone_pivot_groups: openai-quickstart-new recommendations: false
Use this article to get started making your first calls to Azure OpenAI.
::: zone-end +++ ::: zone pivot="programming-language-java" [!INCLUDE [Java quickstart](includes/java.md)]
Use this article to get started making your first calls to Azure OpenAI.
[!INCLUDE [REST API quickstart](includes/rest.md)] ::: zone-end+++
ai-services Quotas Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/quotas-limits.md
The following sections provide you with a quick guide to the default quotas and
| Total size of all files per resource | 1 GB |
| Max training job time (job will fail if exceeded) | 720 hours |
| Max training job size (tokens in training file) x (# of epochs) | 2 Billion |
+| Max size of all files per upload (Azure OpenAI on your data) | 16 MB |
+ <sup>1</sup> Default quota limits are subject to change.
ai-services Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/reference.md
Previously updated : 08/15/2023 Last updated : 08/25/2023 recommendations: false
POST https://{your-resource-name}.openai.azure.com/openai/deployments/{deploymen
**Supported versions** -- `2023-03-15-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-03-15-preview/inference.json) - `2022-12-01` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/stable/2022-12-01/inference.json)
+- `2023-03-15-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-03-15-preview/inference.json)
- `2023-05-15` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/stable/2023-05-15/inference.json) - `2023-06-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-06-01-preview/inference.json)
+- `2023-07-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-07-01-preview/inference.json)
+- `2023-08-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-08-01-preview/inference.json)
**Request body**
POST https://{your-resource-name}.openai.azure.com/openai/deployments/{deploymen
**Supported versions** -- `2023-03-15-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-03-15-preview/inference.json) - `2022-12-01` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/stable/2022-12-01/inference.json)
+- `2023-03-15-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-03-15-preview/inference.json)
- `2023-05-15` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/stable/2023-05-15/inference.json)
+- `2023-06-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-06-01-preview/inference.json)
+- `2023-07-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-07-01-preview/inference.json)
+- `2023-08-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-08-01-preview/inference.json)
**Request body**
POST https://{your-resource-name}.openai.azure.com/openai/deployments/{deploymen
- `2023-03-15-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-03-15-preview/inference.json) - `2023-05-15` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/stable/2023-05-15/inference.json) - `2023-06-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-06-01-preview/inference.json)-- `2023-07-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-07-01-preview/generated.json)
+- `2023-07-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-07-01-preview/inference.json)
+- `2023-08-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-08-01-preview/inference.json)
#### Example request
POST {your-resource-name}/openai/deployments/{deployment-id}/extensions/chat/com
**Supported versions** - `2023-06-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-06-01-preview/inference.json)
+- `2023-07-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-07-01-preview/inference.json)
+- `2023-08-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-08-01-preview/inference.json)
#### Example request
POST {your-resource-name}/openai/deployments/{deployment-id}/extensions/chat/com
curl -i -X POST YOUR_RESOURCE_NAME/openai/deployments/YOUR_DEPLOYMENT_NAME/extensions/chat/completions?api-version=2023-06-01-preview \ -H "Content-Type: application/json" \ -H "api-key: YOUR_API_KEY" \--H "chatgpt_url: YOUR_RESOURCE_URL" \--H "chatgpt_key: YOUR_API_KEY" \ -d \ ' {
The following parameters can be used inside of the `parameters` field inside of
| `topNDocuments` | number | Optional | 5 | Number of documents that need to be fetched for document augmentation. | | `queryType` | string | Optional | simple | Indicates which query option will be used for Azure Cognitive Search. | | `semanticConfiguration` | string | Optional | null | The semantic search configuration. Only available when `queryType` is set to `semantic`. |
-| `roleInformation` | string | Optional | null | Gives the model instructions about how it should behave and the context it should reference when generating a response. Corresponds to the ΓÇ£System MessageΓÇ¥ in Azure OpenAI Studio. <!--See [Using your data](./concepts/use-your-data.md#system-message) for more information.--> ThereΓÇÖs a 100 token limit, which counts towards the overall token limit.|
+| `roleInformation` | string | Optional | null | Gives the model instructions about how it should behave and the context it should reference when generating a response. Corresponds to the "System Message" in Azure OpenAI Studio. See [Using your data](./concepts/use-your-data.md#system-message) for more information. There's a 100 token limit, which counts towards the overall token limit.|
+| `filter` | string | Optional | null | The filter pattern used for [restricting access to sensitive documents](./concepts/use-your-data.md#document-level-access-control). |
+| `embeddingEndpoint` | string | Optional | null | The endpoint URL for an Ada embedding model deployment. Used for [vector search](./concepts/use-your-data.md#search-options). |
+| `embeddingKey` | string | Optional | null | The API key for an Ada embedding model deployment. Used for [vector search](./concepts/use-your-data.md#search-options). |
## Image generation

### Request a generated image
-Generate a batch of images from a text caption. Image generation is currently only available with `api-version=2023-06-01-preview`.
+Generate a batch of images from a text caption.
```http
POST https://{your-resource-name}.openai.azure.com/openai/images/generations:submit?api-version={api-version}
```
POST https://{your-resource-name}.openai.azure.com/openai/images/generations:sub
**Supported versions**

-- `2023-06-01-preview`
+- `2023-06-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-06-01-preview/inference.json)
+- `2023-07-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-07-01-preview/inference.json)
+- `2023-08-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-08-01-preview/inference.json)
**Request body**
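
For example, a submission request might look like the following sketch (the `prompt`, `n`, and `size` values are illustrative):

```bash
# Illustrative sketch: submit an image generation job. The response returns an
# operation ID that you poll with the operations endpoint shown below.
curl -i -X POST "https://YOUR_RESOURCE_NAME.openai.azure.com/openai/images/generations:submit?api-version=2023-08-01-preview" \
  -H "Content-Type: application/json" \
  -H "api-key: YOUR_API_KEY" \
  -d '{ "prompt": "a watercolor painting of a lighthouse", "n": 1, "size": "512x512" }'
```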
GET https://{your-resource-name}.openai.azure.com/openai/operations/images/{oper
**Supported versions**

-- `2023-06-01-preview`
+- `2023-06-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-06-01-preview/inference.json)
+- `2023-07-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-07-01-preview/inference.json)
+- `2023-08-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-08-01-preview/inference.json)
#### Example request
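
A minimal sketch of such a request, assuming `YOUR_OPERATION_ID` is the operation ID returned by the submission call:

```bash
# Illustrative sketch: poll the status of an image generation operation.
curl -i -X GET "https://YOUR_RESOURCE_NAME.openai.azure.com/openai/operations/images/YOUR_OPERATION_ID?api-version=2023-08-01-preview" \
  -H "api-key: YOUR_API_KEY"
```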
DELETE https://{your-resource-name}.openai.azure.com/openai/operations/images/{o
**Supported versions**

-- `2023-06-01-preview`
+- `2023-06-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-06-01-preview/inference.json)
+- `2023-07-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-07-01-preview/inference.json)
+- `2023-08-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-08-01-preview/inference.json)
#### Example request
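
A minimal sketch of such a request, with placeholder values throughout:

```bash
# Illustrative sketch: delete a completed image generation operation.
curl -i -X DELETE "https://YOUR_RESOURCE_NAME.openai.azure.com/openai/operations/images/YOUR_OPERATION_ID?api-version=2023-08-01-preview" \
  -H "api-key: YOUR_API_KEY"
```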
ai-services Embeddings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/tutorials/embeddings.md
You can also download the sample data by running the following command on your l
curl "https://raw.githubusercontent.com/Azure-Samples/Azure-OpenAI-Docs-Samples/main/Samples/Tutorials/Embeddings/data/bill_sum_data.csv" --output bill_sum_data.csv ```
-### Retrieve key and endpoint
-
-To successfully make a call against Azure OpenAI, you'll need an **endpoint** and a **key**.
-
-|Variable name | Value |
-|--|-|
-| `ENDPOINT` | This value can be found in the **Keys & Endpoint** section when examining your resource from the Azure portal. Alternatively, you can find the value in **Azure OpenAI Studio** > **Playground** > **Code View**. An example endpoint is: `https://docs-test-001.openai.azure.com`.|
-| `API-KEY` | This value can be found in the **Keys & Endpoint** section when examining your resource from the Azure portal. You can use either `KEY1` or `KEY2`.|
-
-Go to your resource in the Azure portal. The **Endpoint and Keys** can be found in the **Resource Management** section. Copy your endpoint and access key as you'll need both for authenticating your API calls. You can use either `KEY1` or `KEY2`. Always having two keys allows you to securely rotate and regenerate keys without causing a service disruption.
-
-Create and assign persistent environment variables for your key and endpoint.
### Environment variables
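
For example, on Linux or macOS you might export the values as follows (the variable names are assumptions; confirm them against the tutorial code you run):

```bash
# Assumed variable names; adjust to match the tutorial code you're using.
export AZURE_OPENAI_ENDPOINT="https://YOUR_RESOURCE_NAME.openai.azure.com"
export AZURE_OPENAI_API_KEY="YOUR_API_KEY"
```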
ai-services Use Your Data Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/use-your-data-quickstart.md
Previously updated : 08/11/2023 Last updated : 08/25/2023 recommendations: false zone_pivot_groups: openai-use-your-data
In this quickstart you can use your own data with Azure OpenAI models. Using Azu
- Your chat model must use version `0301`. You can view or change your model version in [Azure OpenAI Studio](./concepts/models.md#model-updates).
-- Be sure that you are assigned at least the [Cognitive Services OpenAI Contributor](/azure/role-based-access-control/built-in-roles#cognitive-services-openai-contributor) role for the Azure OpenAI resource.
+- Be sure that you are assigned at least the [Cognitive Services Contributor](./how-to/role-based-access-control.md#cognitive-services-contributor) role for the Azure OpenAI resource.
> [!div class="nextstepaction"]
> [I ran into an issue with the prerequisites.](https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=OVERVIEW&Pillar=AOAI&Product=ownData&Page=quickstart&Section=Prerequisites)
+
::: zone pivot="programming-language-studio"
[!INCLUDE [Studio quickstart](includes/use-your-data-studio.md)]
::: zone-end
+
+
+
::: zone pivot="rest-api"
[!INCLUDE [REST API quickstart](includes/use-your-data-rest.md)]
ai-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/whats-new.md
Previously updated : 08/08/2023 Last updated : 08/25/2023 recommendations: false keywords:
keywords:
## August 2023
+### Azure OpenAI on your own data (preview) updates
+ - You can now deploy Azure OpenAI on your data to [Power Virtual Agents](/azure/ai-services/openai/concepts/use-your-data#deploying-the-model).
+- [Azure OpenAI on your data](./concepts/use-your-data.md#virtual-network-support--private-endpoint-support) now supports private endpoints.
+- Ability to [filter access to sensitive documents](./concepts/use-your-data.md#document-level-access-control).
+- [Automatically refresh your index on a schedule](./concepts/use-your-data.md#schedule-automatic-index-refreshes).
+- [Vector search and semantic search options](./concepts/use-your-data.md#search-options).
+- [View your chat history in the deployed web app](./concepts/use-your-data.md#chat-history).
## July 2023
ai-services What Is Personalizer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/personalizer/what-is-personalizer.md
Personalizer empowers you to take advantage of the power and flexibility of rein
The **Rank** [API](https://westus2.dev.cognitive.microsoft.com/docs/services/personalizer-api/operations/Rank) is called by your application each time there's a decision to be made. The application sends a JSON containing a set of actions, features that describe each action, and features that describe the current context. Each Rank API call is known as an **event** and noted with a unique _event ID_. Personalizer then returns the ID of the best action that maximizes the total average reward as determined by the underlying model.
-The **Reward** [API](https://westus2.dev.cognitive.microsoft.com/docs/services/personalizer-api/operations/Reward) is called by your application whenever there's feedback that can help Personalizer learn if the action ID returned in the *Rank* call provided value. For example, if a user clicked on the suggested news article, or completed the purchase of a suggested product. A call to then Reward API can be in real-time (just after the Rank call is made) or delayed to better fit the needs of the scenario. The reward score is determined your business metrics and objectives, and can be generated by an algorithm or rules in your application. The score is a real-valued number between 0 and 1.
+The **Reward** [API](https://westus2.dev.cognitive.microsoft.com/docs/services/personalizer-api/operations/Reward) is called by your application whenever there's feedback that can help Personalizer learn if the action ID returned in the *Rank* call provided value. For example, if a user clicked on the suggested news article, or completed the purchase of a suggested product. A call to the Reward API can be in real-time (just after the Rank call is made) or delayed to better fit the needs of the scenario. The reward score is determined by your business metrics and objectives and can be generated by an algorithm or rules in your application. The score is a real-valued number between 0 and 1.
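
A minimal sketch of the two calls (the endpoint host, feature names, event ID, and reward value are illustrative, not prescribed by the service):

```bash
# Illustrative sketch: ask Personalizer to rank two actions for a context...
curl -X POST "https://YOUR_RESOURCE_NAME.cognitiveservices.azure.com/personalizer/v1.0/rank" \
  -H "Ocp-Apim-Subscription-Key: YOUR_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "eventId": "event-001",
    "contextFeatures": [ { "timeOfDay": "morning", "device": "mobile" } ],
    "actions": [
      { "id": "article-a", "features": [ { "topic": "sports" } ] },
      { "id": "article-b", "features": [ { "topic": "finance" } ] }
    ]
  }'

# ...then send feedback for the same event ID once the outcome is known.
curl -X POST "https://YOUR_RESOURCE_NAME.cognitiveservices.azure.com/personalizer/v1.0/events/event-001/reward" \
  -H "Ocp-Apim-Subscription-Key: YOUR_KEY" \
  -H "Content-Type: application/json" \
  -d '{ "value": 1.0 }'
```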
### Learning modes
Use Personalizer when your scenario has:
* A limited set of actions or items to select from in each personalization event. We recommend no more than ~50 actions in each Rank API call. If you have a larger set of possible actions, we suggest [using a recommendation engine](where-can-you-use-personalizer.md#how-to-use-personalizer-with-a-recommendation-solution) or another mechanism to reduce the list of actions prior to calling the Rank API.
* Information describing the actions (_action features_).
* Information describing the current context (_contextual features_).
-* Sufficient data volume to enable Personalizer to learn. In general, we recommend a minimum of ~1,000 events per day to enable Personalizer learn effectively. If Personalizer doesn't receive sufficient data, the service takes longer to determine the best actions.
+* Sufficient data volume to enable Personalizer to learn. In general, we recommend a minimum of ~1,000 events per day to enable Personalizer to learn effectively. If Personalizer doesn't receive sufficient data, the service takes longer to determine the best actions.
## Responsible use of AI
ai-services Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/policy-reference.md
Title: Built-in policy definitions for Azure AI services description: Lists Azure Policy built-in policy definitions for Azure AI services. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2023 Last updated : 08/30/2023
ai-services Add Sharepoint Datasources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/qnamaker/How-To/add-sharepoint-datasources.md
The Active Directory manager will get a pop-up window requesting permissions to
-->

### Grant access from the Azure Active Directory admin center
-1. The Active Directory manager signs in to the Azure portal and opens **[Enterprise applications](https://aad.portal.azure.com/#blade/Microsoft_AAD_IAM/StartboardApplicationsMenuBlade/AllApps)**.
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Browse to **Azure Active Directory** > **Enterprise applications**.
1. Search for `QnAMakerPortalSharePoint`, then select the QnA Maker app.
ai-services Network Isolation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/qnamaker/How-To/network-isolation.md
The Cognitive Search instance can be isolated via a private endpoint after the Q
Follow the steps below to restrict public access to QnA Maker resources. Protect an Azure AI services resource from public access by [configuring the virtual network](../../cognitive-services-virtual-networks.md?tabs=portal). After you restrict access to the Azure AI services resource based on VNet, do the following to browse knowledge bases on the https://qnamaker.ai portal from your on-premises network or your local browser:
-- Grant access to [on-premises network](../../cognitive-services-virtual-networks.md?tabs=portal#configuring-access-from-on-premises-networks).
+- Grant access to [on-premises network](../../cognitive-services-virtual-networks.md?tabs=portal#configure-access-from-on-premises-networks).
- Grant access to your [local browser/machine](../../cognitive-services-virtual-networks.md?tabs=portal#managing-ip-network-rules).
- Add the **public IP address of the machine under the Firewall** section of the **Networking** tab. By default `portal.azure.com` shows the current browsing machine's public IP (select this entry) and then select **Save**.
ai-services Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure AI services description: Lists Azure Policy Regulatory Compliance controls available for Azure AI services. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/03/2023 Last updated : 08/25/2023
ai-services Batch Synthesis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/batch-synthesis.md
Batch synthesis properties are described in the following table.
|`synthesisConfig.voice`|The voice that speaks the audio output.<br/><br/>For information about the available prebuilt neural voices, see [language and voice support](language-support.md?tabs=tts). To use a custom voice, you must specify a valid custom voice and deployment ID mapping in the `customVoices` property.<br/><br/>This property is required when `textType` is set to `"PlainText"`.|
|`synthesisConfig.volume`|The volume of the audio output.<br/><br/>For information about the accepted values, see the [adjust prosody](speech-synthesis-markup-voice.md#adjust-prosody) table in the Speech Synthesis Markup Language (SSML) documentation. Invalid values are ignored.<br/><br/>This optional property is only applicable when `textType` is set to `"PlainText"`.|
|`textType`|Indicates whether the `inputs` text property should be plain text or SSML. The possible case-insensitive values are "PlainText" and "SSML". When the `textType` is set to `"PlainText"`, you must also set the `synthesisConfig` voice property.<br/><br/>This property is required.|
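
For example, a plain-text batch synthesis request body might look like the following sketch (the host, path, API version, and voice name are placeholders; use the values from the synthesis creation section of this article):

```bash
# Illustrative sketch: create a batch synthesis job from plain text.
# The endpoint path and all values are placeholders.
curl -v -X POST "https://YOUR_SPEECH_REGION.customvoice.api.speech.microsoft.com/api/texttospeech/3.1-preview.1/batchsynthesis" \
  -H "Ocp-Apim-Subscription-Key: YOUR_SPEECH_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "displayName": "batch synthesis sample",
    "textType": "PlainText",
    "synthesisConfig": { "voice": "en-US-JennyNeural" },
    "inputs": [ { "text": "The rainbow has seven colors." } ]
  }'
```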
+
+## Batch synthesis latency and best practices
+
+When you use batch synthesis to generate speech, consider the latency involved and follow these best practices for optimal results.
+
+### Latency in batch synthesis
+
+The latency in batch synthesis depends on various factors, including the complexity of the input text, the number of inputs in the batch, and the processing capabilities of the underlying hardware.
+
+The latency for batch synthesis is as follows (approximately):
+
+- The latency of 50% of the synthesized speech outputs is within 10-20 seconds.
+
+- The latency of 95% of the synthesized speech outputs is within 120 seconds.
+
+### Best practices
+
+When considering batch synthesis for your application, assess whether the latency meets your requirements. If the latency aligns with your desired performance, batch synthesis can be a suitable choice. If the latency doesn't meet your needs, consider using the real-time API instead.
## HTTP status codes
ai-services Batch Transcription Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/batch-transcription-create.md
Batch transcription requests for expired models will fail with a 4xx error. You'
The transcription result can be stored in an Azure container. If you don't specify a container, the Speech service stores the results in a container managed by Microsoft. In that case, when the transcription job is deleted, the transcription result data is also deleted.
-You can store the results of a batch transcription to a writable Azure Blob storage container using option `destinationContainerUrl` in the [batch transcription creation request](#create-a-transcription-job). Note however that this option is only using [ad hoc SAS](batch-transcription-audio-data.md#sas-url-for-batch-transcription) URI and doesn't support [Trusted Azure services security mechanism](batch-transcription-audio-data.md#trusted-azure-services-security-mechanism). The Storage account resource of the destination container must allow all external traffic.
+You can store the results of a batch transcription to a writable Azure Blob storage container using option `destinationContainerUrl` in the [batch transcription creation request](#create-a-transcription-job). Note, however, that this option uses only an [ad hoc SAS](batch-transcription-audio-data.md#sas-url-for-batch-transcription) URI and doesn't support the [Trusted Azure services security mechanism](batch-transcription-audio-data.md#trusted-azure-services-security-mechanism). This option also doesn't support Access policy based SAS. The Storage account resource of the destination container must allow all external traffic.
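
As a sketch, `destinationContainerUrl` is set in the `properties` object of the creation request (all values below are placeholders):

```bash
# Illustrative sketch: create a batch transcription whose results are written
# to your own container through an ad hoc SAS URL.
curl -X POST "https://YOUR_SERVICE_REGION.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions" \
  -H "Ocp-Apim-Subscription-Key: YOUR_SPEECH_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "displayName": "My transcription",
    "locale": "en-US",
    "contentUrls": [ "https://YOUR_STORAGE.blob.core.windows.net/audio/sample.wav?SAS_TOKEN" ],
    "properties": {
      "destinationContainerUrl": "https://YOUR_STORAGE.blob.core.windows.net/results?SAS_TOKEN"
    }
  }'
```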
If you would like to store the transcription results in an Azure Blob storage container via the [Trusted Azure services security mechanism](batch-transcription-audio-data.md#trusted-azure-services-security-mechanism), then you should consider using [Bring-your-own-storage (BYOS)](bring-your-own-storage-speech-resource.md). See details on how to use BYOS-enabled Speech resource for Batch transcription in [this article](bring-your-own-storage-speech-resource-speech-to-text.md).
ai-services Bring Your Own Storage Speech Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/bring-your-own-storage-speech-resource.md
To use any of the methods above you need an Azure account that is assigned a rol
To create a BYOS-enabled Speech resource with Azure portal, you need to access some portal preview features. Perform the following steps:
-1. Navigate to *Create Speech* page using [this link](https://ms.portal.azure.com/?feature.enablecsumi=true&feature.enablecsstoragemenu=true&feature.canmodifystamps=true&Microsoft_Azure_ProjectOxford=stage1&microsoft_azure_marketplace_ItemHideKey=microsoft_azure_cognitiveservices_byospreview#create/Microsoft.CognitiveServicesSpeechServices).
+1. Navigate to *Create Speech* page using [this link](https://ms.portal.azure.com/?feature.enablecsumi=true&feature.enablecsstoragemenu=true&microsoft_azure_marketplace_ItemHideKey=microsoft_azure_cognitiveservices_byospreview#create/Microsoft.CognitiveServicesSpeechServices).
1. Note the *Storage account* section at the bottom of the page.
1. Select *Yes* for the *Bring your own storage* option.
1. Configure the required Storage account settings and proceed with the Speech resource creation.
General rule is that you need to pass this JSON string as a value of `--storage`
To create a BYOS-enabled Speech resource with a REST Request to Cognitive Services API, we use [Accounts - Create](/rest/api/cognitiveservices/accountmanagement/accounts/create) request.
-You need to have a meaning of authentication. The example in this section uses [Microsoft Azure Active Directory token](/azure/active-directory/develop/access-tokens).
+You need to have a means of authentication. The example in this section uses [Microsoft Azure Active Directory token](/azure/active-directory/develop/access-tokens).
This code snippet generates an Azure AD token using interactive browser sign-in. It requires the [Azure Identity client library](/dotnet/api/overview/azure/identity-readme):

```csharp
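// Illustrative sketch, not the article's original snippet: acquire an Azure AD
// token for the Azure Resource Manager scope through interactive browser sign-in.
using Azure.Core;
using Azure.Identity;

var credential = new InteractiveBrowserCredential();
AccessToken token = credential.GetToken(
    new TokenRequestContext(new[] { "https://management.azure.com/.default" }));
Console.WriteLine(token.Token);
```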
You may always check, whether any given Speech resource is BYOS enabled, and wha
To check BYOS configuration of a Speech resource with Azure portal, you need to access some portal preview features. Perform the following steps:
-1. Navigate to *Create Speech* page using [this link](https://ms.portal.azure.com/?feature.enablecsumi=true&feature.enablecsstoragemenu=true&feature.canmodifystamps=true&Microsoft_Azure_ProjectOxford=stage1&microsoft_azure_marketplace_ItemHideKey=microsoft_azure_cognitiveservices_byospreview#create/Microsoft.CognitiveServicesSpeechServices).
+1. Navigate to *Create Speech* page using [this link](https://ms.portal.azure.com/?feature.enablecsumi=true&feature.enablecsstoragemenu=true&microsoft_azure_marketplace_ItemHideKey=microsoft_azure_cognitiveservices_byospreview#create/Microsoft.CognitiveServicesSpeechServices).
1. Close the *Create Speech* screen by pressing *X* in the upper-right corner.
1. If asked, agree to discard unsaved changes.
1. Navigate to the Speech resource you want to check.
ai-services Embedded Speech https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/embedded-speech.md
The Speech SDK for Java doesn't support Windows on ARM64.
Embedded speech is only available with C#, C++, and Java SDKs. The other Speech SDKs, Speech CLI, and REST APIs don't support embedded speech.
-Embedded speech recognition only supports mono 16 bit, 8-kHz or 16-kHz PCM-encoded WAV audio.
+Embedded speech recognition only supports mono 16-bit, 8-kHz or 16-kHz PCM-encoded WAV audio formats.
-Embedded neural voices only support 24-kHz sample rate.
+Embedded neural voices support 24 kHz RIFF/RAW, with a RAM requirement of 100 MB.
## Embedded speech SDK packages
For embedded speech, you'll need to download the speech recognition models for [
The following [speech to text](speech-to-text.md) models are available: de-DE, en-AU, en-CA, en-GB, en-IE, en-IN, en-NZ, en-US, es-ES, es-MX, fr-CA, fr-FR, hi-IN, it-IT, ja-JP, ko-KR, nl-NL, pt-BR, ru-RU, sv-SE, tr-TR, zh-CN, zh-HK, and zh-TW.
-The following [text to speech](text-to-speech.md) locales and voices are available:
+The following [text to speech](text-to-speech.md) locales and voices are available out of the box. We welcome your input to help us gauge demand for additional languages and voices. For the full text to speech language and voice list, see [language and voice support](language-support.md?tabs=tts).
| Locale (BCP-47) | Language | Text to speech voices |
| -- | -- | -- |
ai-services Get Started Speech To Text https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/get-started-speech-to-text.md
Title: "Speech to text quickstart - Speech service"
-description: In this quickstart, you convert speech to text with recognition from a microphone.
+description: In this quickstart, learn how to convert speech to text with recognition from a microphone or .wav file.
Previously updated : 09/16/2022 Last updated : 08/24/2023 ms.devlang: cpp, csharp, golang, java, javascript, objective-c, python
ai-services Get Started Stt Diarization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/get-started-stt-diarization.md
+ Last updated 7/27/2023
-zone_pivot_groups: programming-languages-set-twenty-two
+zone_pivot_groups: programming-languages-speech-services
keywords: speech to text, speech to text software
keywords: speech to text, speech to text software
[!INCLUDE [C++ include](includes/quickstarts/stt-diarization/cpp.md)]
::: zone-end
+
::: zone pivot="programming-language-java"
[!INCLUDE [Java include](includes/quickstarts/stt-diarization/java.md)]
::: zone-end
+
+
+
::: zone pivot="programming-language-python"
[!INCLUDE [Python include](includes/quickstarts/stt-diarization/python.md)]
::: zone-end
+
+
+
## Next steps

> [!div class="nextstepaction"]
ai-services Get Started Text To Speech https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/get-started-text-to-speech.md
Title: "Text to speech quickstart - Speech service"
-description: In this quickstart, you convert text to speech. Learn about object construction and design patterns, supported audio output formats, and custom configuration options for speech synthesis.
+description: In this quickstart, you convert text to speech. Learn about object construction and design patterns, supported audio formats, and custom configuration options.
Previously updated : 09/16/2022 Last updated : 08/25/2023 ms.devlang: cpp, csharp, golang, java, javascript, objective-c, python
ai-services How To Configure Openssl Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-configure-openssl-linux.md
Last updated 06/22/2022 zone_pivot_groups: programming-languages-set-three
-
# Configure OpenSSL for Linux
ai-services How To Custom Voice Create Voice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-custom-voice-create-voice.md
Title: Train your custom voice model - Speech service
-description: Learn how to train a custom neural voice through the Speech Studio portal.
+description: Learn how to train a custom neural voice through the Speech Studio portal. Training duration varies depending on how much data you're training.
Previously updated : 11/28/2022 Last updated : 08/25/2023
In this article, you learn how to train a custom neural voice through the Speech Studio portal.

> [!IMPORTANT]
-> Custom Neural Voice training is currently only available in some regions. After your voice model is trained in a supported region, you can [copy](#copy-your-voice-model-to-another-project) it to a Speech resource in another region as needed. See footnotes in the [regions](regions.md#speech-service) table for more information.
+> Custom Neural Voice training is currently only available in some regions. After your voice model is trained in a supported region, you can [copy](#copy-your-voice-model-to-another-project) it to a Speech resource in another region as needed. For more information, see the footnotes in the [Speech service table](regions.md#speech-service).
-Training duration varies depending on how much data you're training. It takes about 40 compute hours on average to train a custom neural voice. Standard subscription (S0) users can train four voices simultaneously. If you reach the limit, wait until at least one of your voice models finishes training, and then try again.
+Training duration varies depending on how much data you use. It takes about 40 compute hours on average to train a custom neural voice. Standard subscription (S0) users can train four voices simultaneously. If you reach the limit, wait until at least one of your voice models finishes training, and then try again.
> [!NOTE]
-> Although the total number of hours required per [training method](#choose-a-training-method) will vary, the same unit price applies to each. For more information, see the [Custom Neural training pricing details](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/).
+> Although the total number of hours required per [training method](#choose-a-training-method) varies, the same unit price applies to each. For more information, see the [Custom Neural training pricing details](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/).
## Choose a training method
-After you validate your data files, you can use them to build your Custom Neural Voice model. When you create a custom neural voice, you can choose to train it with one of the following methods:
+After you validate your data files, use them to build your Custom Neural Voice model. When you create a custom neural voice, you can choose to train it with one of the following methods:
-- [Neural](?tabs=neural#train-your-custom-neural-voice-model): Create a voice in the same language of your training data, select **Neural** method.
+- [Neural](?tabs=neural#train-your-custom-neural-voice-model): Create a voice in the same language as your training data.
-- [Neural - cross lingual](?tabs=crosslingual#train-your-custom-neural-voice-model): Create a secondary language for your voice model to speak a different language from your training data. For example, with the `zh-CN` training data, you can create a voice that speaks `en-US`. The language of the training data and the target language must both be one of the [languages that are supported](language-support.md?tabs=tts) for cross lingual voice training. You don't need to prepare training data in the target language, but your test script must be in the target language.
+- [Neural - cross lingual](?tabs=crosslingual#train-your-custom-neural-voice-model): Create a secondary language for your voice model to speak a different language from your training data. For example, with the `zh-CN` training data, you can create a voice that speaks `en-US`.
-- [Neural - multi style](?tabs=multistyle#train-your-custom-neural-voice-model): Create a custom neural voice that speaks in multiple styles and emotions, without adding new training data. Multi-style voices are particularly useful for video game characters, conversational chatbots, audiobooks, content readers, and more. To create a multi-style voice, you just need to prepare a set of general training data (at least 300 utterances), and select one or more of the preset target speaking styles. You can also create multiple custom styles by providing style samples (at least 100 utterances per style) as additional training data for the same voice. The supported preset styles vary according to different languages. Refer to [the preset style list for different languages](?tabs=multistyle#available-preset-styles-across-different-languages).
+ The language of the training data and the target language must both be one of the [languages that are supported](language-support.md?tabs=tts) for cross lingual voice training. You don't need to prepare training data in the target language, but your test script must be in the target language.
-The language of the training data must be one of the [languages that are supported](language-support.md?tabs=tts) for custom neural voice neural, cross-lingual, or multi-style training.
+- [Neural - multi style](?tabs=multistyle#train-your-custom-neural-voice-model): Create a custom neural voice that speaks in multiple styles and emotions, without adding new training data. Multiple style voices are useful for video game characters, conversational chatbots, audiobooks, content readers, and more.
+
+ To create a multiple style voice, you need to prepare a set of general training data, at least 300 utterances. Select one or more of the preset target speaking styles. You can also create multiple custom styles by providing style samples, of at least 100 utterances per style, as extra training data for the same voice. The supported preset styles vary according to different languages. See [Available preset styles across different languages](?tabs=multistyle#available-preset-styles-across-different-languages).
+
+The language of the training data must be one of the [languages that are supported](language-support.md?tabs=tts) for custom neural voice, cross-lingual, or multiple style training.
## Train your Custom Neural Voice model
-To create a custom neural voice in Speech Studio, follow these steps for one of the following [methods](#choose-a-training-method):
+To create a custom neural voice in Speech Studio, follow these steps for one of the following methods:
# [Neural](#tab/neural)

1. Sign in to the [Speech Studio](https://aka.ms/speechstudio/customvoice).
-1. Select **Custom Voice** > Your project name > **Train model** > **Train a new model**.
+1. Select **Custom Voice** > *\<Your project name>* > **Train model** > **Train a new model**.
1. Select **Neural** as the [training method](#choose-a-training-method) for your model and then select **Next**. To use a different training method, see [Neural - cross lingual](?tabs=crosslingual#train-your-custom-neural-voice-model) or [Neural - multi style](?tabs=multistyle#train-your-custom-neural-voice-model).
- :::image type="content" source="media/custom-voice/cnv-train-neural.png" alt-text="Screenshot that shows how to select neural training.":::
-1. Select a version of the training recipe for your model. The latest version is selected by default. The supported features and training time can vary by version. Normally, the latest version is recommended for the best results. In some cases, you can choose an older version to reduce training time.
-1. Select the data that you want to use for training. Duplicate audio names will be removed from the training. Make sure the data you select don't contain the same audio names across multiple .zip files. Only successfully processed datasets can be selected for training. Check your data processing status if you do not see your training set in the list.
+
+ :::image type="content" source="media/custom-voice/cnv-train-neural.png" alt-text="Screenshot that shows how to select neural training.":::
+
+1. Select a version of the training recipe for your model. The latest version is selected by default. The supported features and training time can vary by version. Normally, we recommend the latest version. In some cases, you can choose an earlier version to reduce training time.
+1. Select the data that you want to use for training. Duplicate audio names are removed from the training. Make sure that the data you select doesn't contain the same audio names across multiple *.zip* files.
+
+ You can select only successfully processed datasets for training. If you don't see your training set in the list, check your data processing status.
+
1. Select a speaker file with the voice talent statement that corresponds to the speaker in your training data.
1. Select **Next**.
-1. Each training generates 100 sample audio files automatically, to help you test the model with a default script. Optionally, you can also check the box next to **Add my own test script** and provide your own test script with up to 100 utterances to test the model at no additional cost. The generated audio files are a combination of the automatic test scripts and custom test scripts. For more information, see [test script requirements](#test-script-requirements).
-1. Enter a **Name** and **Description** to help you identify the model. Choose a name carefully. The model name will be used as the voice name in your [speech synthesis request](how-to-deploy-and-use-endpoint.md#use-your-custom-voice) via the SDK and SSML input. Only letters, numbers, and a few punctuation characters are allowed. Use different names for different neural voice models.
-1. Optionally, enter the **Description** to help you identify the model. A common use of the description is to record the names of the data that you used to create the model.
+1. Each training generates 100 sample audio files automatically to help you test the model with a default script.
+
+ Optionally, you can also select **Add my own test script** and provide your own test script with up to 100 utterances to test the model at no extra cost. The generated audio files are a combination of the automatic test scripts and custom test scripts. For more information, see [test script requirements](#test-script-requirements).
+
+1. Enter a **Name** to help you identify the model. Choose a name carefully. The model name is used as the voice name in your [speech synthesis request](how-to-deploy-and-use-endpoint.md#use-your-custom-voice) by the SDK and SSML input. Only letters, numbers, and a few punctuation characters are allowed. Use different names for different neural voice models.
+1. Optionally, enter the **Description** to help you identify the model. A common use of the description is to record the names of the data that you used to create the model.
1. Select **Next**.
-1. Review the settings and check the box to accept the terms of use.
+1. Review the settings and select the box to accept the terms of use.
1. Select **Submit** to start training the model.

# [Neural - cross lingual](#tab/crosslingual)

1. Sign in to the [Speech Studio](https://aka.ms/speechstudio/customvoice).
-1. Select **Custom Voice** > Your project name > **Train model** > **Train a new model**.
+1. Select **Custom Voice** > *\<Your project name>* > **Train model** > **Train a new model**.
1. Select **Neural - cross lingual** as the [training method](#choose-a-training-method) for your model. To use a different training method, see [Neural](?tabs=neural#train-your-custom-neural-voice-model) or [Neural - multi style](?tabs=multistyle#train-your-custom-neural-voice-model).
- :::image type="content" source="media/custom-voice/cnv-train-neural-cross-lingual.png" alt-text="Screenshot that shows how to select neural cross lingual training.":::
-1. Select the **Target language** that will be the secondary language for your voice model. Only one target language can be selected for a voice model.
-1. Select the data that you want to use for training. Duplicate audio names will be removed from the training. Make sure the data you select don't contain the same audio names across multiple .zip files. Only successfully processed datasets can be selected for training. Check your data processing status if you do not see your training set in the list.
+
+ :::image type="content" source="media/custom-voice/cnv-train-neural-cross-lingual.png" alt-text="Screenshot that shows how to select neural cross lingual training.":::
+
+1. Select the **Target language** that is the secondary language for your voice model. You can select only one target language for a voice model.
+1. Select the data that you want to use for training. Duplicate audio names are removed from the training. Make sure that the data you select doesn't contain the same audio names across multiple *.zip* files.
+
+ You can select only successfully processed datasets for training. Check your data processing status if you don't see your training set in the list.
+
1. Select a speaker file with the voice talent statement that corresponds to the speaker in your training data.
1. Select **Next**.
-1. Each training generates 100 sample audio files automatically, to help you test the model with a default script. Optionally, you can also check the box next to **Add my own test script** and provide your own test script with up to 100 utterances to test the model at no additional cost. The generated audio files are a combination of the automatic test scripts and custom test scripts. For more information, see [test script requirements](#test-script-requirements).
-1. Enter a **Name** and **Description** to help you identify the model. Choose a name carefully. The model name will be used as the voice name in your [speech synthesis request](how-to-deploy-and-use-endpoint.md#use-your-custom-voice) via the SDK and SSML input. Only letters, numbers, and a few punctuation characters are allowed. Use different names for different neural voice models.
-1. Optionally, enter the **Description** to help you identify the model. A common use of the description is to record the names of the data that you used to create the model.
+1. Each training generates 100 sample audio files automatically to help you test the model with a default script.
+
+ Optionally, you can also select **Add my own test script** and provide your own test script with up to 100 utterances to test the model at no extra cost. The generated audio files are a combination of the automatic test scripts and custom test scripts. For more information, see [Test script requirements](#test-script-requirements).
+
+1. Enter a **Name** to help you identify the model. Choose a name carefully. The model name is used as the voice name in your [speech synthesis request](how-to-deploy-and-use-endpoint.md#use-your-custom-voice) by the SDK and SSML input. Only letters, numbers, and a few punctuation characters are allowed. Use different names for different neural voice models.
+1. Optionally, enter the **Description** to help you identify the model. A common use of the description is to record the names of the data that you used to create the model.
1. Select **Next**.
-1. Review the settings and check the box to accept the terms of use.
+1. Review the settings and select the box to accept the terms of use.
1. Select **Submit** to start training the model.

# [Neural - multi style](#tab/multistyle)

1. Sign in to the [Speech Studio](https://aka.ms/speechstudio/customvoice).
-1. Select **Custom Voice** > Your project name > **Train model** > **Train a new model**.
+1. Select **Custom Voice** > *\<Your project name>* > **Train model** > **Train a new model**.
1. Select **Neural - multi style** as the [training method](#choose-a-training-method) for your model. To use a different training method, see [Neural](?tabs=neural#train-your-custom-neural-voice-model) or [Neural - cross lingual](?tabs=crosslingual#train-your-custom-neural-voice-model).
- :::image type="content" source="media/custom-voice/cnv-train-neural-multi-style.png" alt-text="Screenshot that shows how to select neural multi style training.":::
-1. Select one or more preset speaking styles to train.
-1. Select the data that you want to use for training. Duplicate audio names will be removed from the training. Make sure the data you select don't contain the same audio names across multiple .zip files. Only successfully processed datasets can be selected for training. Check your data processing status if you do not see your training set in the list.
+
+ :::image type="content" source="media/custom-voice/cnv-train-neural-multi-style.png" alt-text="Screenshot that shows how to select neural multi style training.":::
+
+1. Select one or more preset speaking styles to train.
+1. Select the data that you want to use for training. Duplicate audio names are removed from the training. Make sure that the data you select doesn't contain the same audio names across multiple *.zip* files.
+
+ You can select only successfully processed datasets for training. Check your data processing status if you don't see your training set in the list.
+
1. Select **Next**.
-1. Optionally, you can add additional custom speaking styles. The maximum number of custom styles varies by languages: `English (United States)` allows up to 10 custom styles, `Chinese (Mandarin, Simplified)` allows up to 4 custom styles, and `Japanese (Japan)` allows up to 5 custom styles.
- 1. Select **Add a custom style** and thoughtfully enter a custom style name of your choice. This name will be used by your application within the `style` element of [Speech Synthesis Markup Language (SSML)](speech-synthesis-markup-voice.md#speaking-styles-and-roles). You can also use the custom style name as SSML via the [Audio Content Creation](how-to-audio-content-creation.md) tool in [Speech Studio](https://speech.microsoft.com/portal/audiocontentcreation).
- 1. Select style samples as training data. The style samples should be all from the same voice talent profile.
+1. Optionally, you can add other custom speaking styles. The maximum number of custom styles varies by languages: `English (United States)` allows up to 10 custom styles, `Chinese (Mandarin, Simplified)` allows up to four custom styles, and `Japanese (Japan)` allows up to five custom styles.
+
+ 1. Select **Add a custom style** and enter a custom style name of your choice. This name is used by your application within the `style` element of [Speech Synthesis Markup Language (SSML)](speech-synthesis-markup-voice.md#speaking-styles-and-roles). You can also use the custom style name as SSML by using the [Audio Content Creation](how-to-audio-content-creation.md) tool in [Speech Studio](https://speech.microsoft.com/portal/audiocontentcreation).
+ 1. Select style samples as training data. Ensure that the training data for custom speaking styles comes from the same speaker as the data used to create the default style.
+
1. Select **Next**.
1. Select a speaker file with the voice talent statement that corresponds to the speaker in your training data.
1. Select **Next**.
-1. Each training generates 100 sample audios for the default style and 20 for each preset style automatically, to help you test the model with a default script. Optionally, you can also check the box next to **Add my own test script** and provide your own test script with up to 100 utterances to test the default style at no additional cost. The generated audio files are a combination of the automatic test scripts and custom test scripts. For more information, see [test script requirements](#test-script-requirements).
-1. Enter a **Name** and **Description** to help you identify the model. Choose a name carefully. The model name will be used as the voice name in your [speech synthesis request](how-to-deploy-and-use-endpoint.md#use-your-custom-voice) via the SDK and SSML input. Only letters, numbers, and a few punctuation characters are allowed. Use different names for different neural voice models.
-1. Optionally, enter the **Description** to help you identify the model. A common use of the description is to record the names of the data that you used to create the model.
+1. Each training automatically generates 100 sample audio files for the default style and 20 for each preset style to help you test the model with a default script.
+
+ Optionally, you can also select **Add my own test script** and provide your own test script with up to 100 utterances to test the default style at no extra cost. The generated audio files are a combination of the automatic test scripts and custom test scripts. For more information, see [test script requirements](#test-script-requirements).
+
+1. Enter a **Name** to help you identify the model. Choose a name carefully. The model name is used as the voice name in your [speech synthesis request](how-to-deploy-and-use-endpoint.md#use-your-custom-voice) by the SDK and SSML input. Only letters, numbers, and a few punctuation characters are allowed. Use different names for different neural voice models.
+1. Optionally, enter the **Description** to help you identify the model. A common use of the description is to record the names of the data that you used to create the model.
1. Select **Next**.
-1. Review the settings and check the box to accept the terms of use.
+1. Review the settings and select the box to accept the terms of use.
1. Select **Submit** to start training the model.

## Available preset styles across different languages

The following table summarizes the different preset styles according to different languages.
-| Speaking style | Language |
-|-|-|
-| angry | English (United States)<br> Chinese (Mandarin, Simplified)(preview)<br> Japanese (Japan)(preview) |
+| Speaking style | Language |
+|:-- |:-- |
+| angry | English (United States)<br> Chinese (Mandarin, Simplified)(preview)<br> Japanese (Japan)(preview) |
| calm | Chinese (Mandarin, Simplified)(preview) |
| chat | Chinese (Mandarin, Simplified)(preview) |
-| cheerful | English (United States) <br> Chinese (Mandarin, Simplified)(preview) <br>Japanese (Japan)(preview)|
-| disgruntled | Chinese (Mandarin, Simplified)(preview) |
+| cheerful | English (United States) <br> Chinese (Mandarin, Simplified)(preview) <br>Japanese (Japan)(preview) |
+| disgruntled | Chinese (Mandarin, Simplified)(preview) |
| excited | English (United States) |
-| fearful | Chinese (Mandarin, Simplified)(preview) |
+| fearful | Chinese (Mandarin, Simplified)(preview) |
| friendly | English (United States) |
| hopeful | English (United States) |
-| sad | English (United States)<br>Chinese (Mandarin, Simplified)(preview)<br>Japanese (Japan)(preview)|
-| shouting | English (United States) |
-| terrified | English (United States) |
-| unfriendly | English (United States)|
-| whispering | English (United States) |
-| serious | Chinese (Mandarin, Simplified)(preview) |
+| sad | English (United States)<br>Chinese (Mandarin, Simplified)(preview)<br>Japanese (Japan)(preview) |
+| shouting | English (United States) |
+| terrified | English (United States) |
+| unfriendly | English (United States)|
+| whispering | English (United States) |
+| serious | Chinese (Mandarin, Simplified)(preview) |
-
+ The **Train model** table displays a new entry that corresponds to this newly created model. The status reflects the process of converting your data to a voice model, as described in this table:

| State | Meaning |
-| -- | - |
+|:-- |:- |
| Processing | Your voice model is being created. |
-| Succeeded | Your voice model has been created and can be deployed. |
-| Failed | Your voice model has failed in training. The cause of the failure might be, for example, unseen data problems or network issues. |
-| Canceled | The training for your voice model was canceled. |
+| Succeeded | Your voice model has been created and can be deployed. |
+| Failed | Your voice model has failed in training. The cause of the failure might be, for example, unseen data problems or network issues. |
+| Canceled | The training for your voice model was canceled. |
While the model status is **Processing**, you can select **Cancel training** to cancel your voice model. You're not charged for this canceled training.

:::image type="content" source="media/custom-voice/cnv-cancel-training.png" alt-text="Screenshot that shows how to cancel training for a model.":::
-After you finish training the model successfully, you can review the model details and [test the model](#test-your-voice-model).
+After you finish training the model successfully, you can review the model details and [Test your voice model](#test-your-voice-model).
-You can use the [Audio Content Creation](how-to-audio-content-creation.md) tool in [Speech Studio]( https://speech.microsoft.com/portal/audiocontentcreation) to create audio and fine-tune your deployed voice. If applicable for your voice, one of multiple styles can also be selected.
+You can use the [Audio Content Creation](how-to-audio-content-creation.md) tool in Speech Studio to create audio and fine-tune your deployed voice. If applicable for your voice, you can select one of multiple styles.
### Rename your model
-If you want to rename the model you built, you can select **Clone model** to create a clone of the model with a new name in the current project.
+1. If you want to rename the model you built, select **Clone model** to create a clone of the model with a new name in the current project.
+ :::image type="content" source="media/custom-voice/cnv-clone-model.png" alt-text="Screenshot of selecting the Clone model button.":::
-Enter the new name on the **Clone voice model** window, then select **Submit**. The text 'Neural' will be automatically added as a suffix to your new model name.
+1. Enter the new name on the **Clone voice model** window, then select **Submit**. The text *Neural* is automatically added as a suffix to your new model name.
+ :::image type="content" source="media/custom-voice/cnv-clone-model-rename.png" alt-text="Screenshot of cloning a model with a new name.":::
### Test your voice model
-After your voice model is successfully built, you can use the generated sample audio files to test it before deploying it for use.
+After your voice model is successfully built, you can use the generated sample audio files to test it before you deploy it.
The quality of the voice depends on many factors, such as:
The quality of the voice depends on many factors, such as:
- The accuracy of the transcript file.
- How well the recorded voice in the training data matches the personality of the designed voice for your intended use case.
-Select **DefaultTests** under **Testing** to listen to the sample audios. The default test samples include 100 sample audios generated automatically during training to help you test the model. In addition to these 100 audios provided by default, your own test script (at most 100 utterances) provided during training are also added to **DefaultTests** set. You're not charged for the testing with **DefaultTests**.
+Select **DefaultTests** under **Testing** to listen to the sample audio files. The default test samples include 100 sample audio files generated automatically during training to help you test the model. In addition to these 100 audio files provided by default, your own test script utterances (at most 100) are also added to the **DefaultTests** set. You're not charged for the testing with **DefaultTests**.
:::image type="content" source="media/custom-voice/cnv-model-default-test.png" alt-text="Screenshot of selecting DefaultTests under Testing.":::
If you want to upload your own test scripts to further test your model, select *
:::image type="content" source="media/custom-voice/cnv-model-add-testscripts.png" alt-text="Screenshot of adding model test scripts.":::
-Before uploading test script, check the [test script requirements](#test-script-requirements). You'll be charged for the additional testing with the batch synthesis based on the number of billable characters. See [pricing page](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/).
+Before you upload a test script, check the [Test script requirements](#test-script-requirements). You're charged for the extra testing with the batch synthesis based on the number of billable characters. See [Azure AI Speech pricing](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/).
-On **Add test scripts** window, select **Browse for a file** to select your own script, then select **Add** to upload it.
+Under **Add test scripts**, select **Browse for a file** to select your own script, then select **Add** to upload it.
:::image type="content" source="media/custom-voice/cnv-model-upload-testscripts.png" alt-text="Screenshot of uploading model test scripts."::: ### Test script requirements
-The test script must be a .txt file, less than 1 MB. Supported encoding formats include ANSI/ASCII, UTF-8, UTF-8-BOM, UTF-16-LE, or UTF-16-BE.
+The test script must be a *.txt* file that is less than 1 MB. Supported encoding formats include ANSI/ASCII, UTF-8, UTF-8-BOM, UTF-16-LE, or UTF-16-BE.
-Unlike the [training transcription files](how-to-custom-voice-training-data.md#transcription-data-for-individual-utterances--matching-transcript), the test script should exclude the utterance ID (filenames of each utterance). Otherwise, these IDs are spoken.
+Unlike the [training transcription files](how-to-custom-voice-training-data.md#transcription-data-for-individual-utterances--matching-transcript), the test script should exclude the utterance ID, which is the filename of each utterance. Otherwise, these IDs are spoken.
-Here's an example set of utterances in one .txt file:
+Here's an example set of utterances in one *.txt* file:
```text
This is the waistline, and it's falling.
Each paragraph of the utterance results in a separate audio. If you want to comb
### Update engine version for your voice model
-Azure Text to speech engines are updated from time to time to capture the latest language model that defines the pronunciation of the language. After you've trained your voice, you can apply your voice to the new language model by updating to the latest engine version.
+Azure text to speech engines are updated from time to time to capture the latest language model that defines the pronunciation of the language. After you train your voice, you can apply your voice to the new language model by updating to the latest engine version.
+
+1. When a new engine is available, you're prompted to update your neural voice model.
-When a new engine is available, you're prompted to update your neural voice model.
+ :::image type="content" source="media/custom-voice/cnv-engine-update-prompt.png" alt-text="Screenshot of displaying engine update message." lightbox="media/custom-voice/cnv-engine-update-prompt.png":::
+1. Go to the model details page and follow the on-screen instructions to install the latest engine.
-Go to the model details page and follow the on-screen instructions to install the latest engine.
+ :::image type="content" source="media/custom-voice/cnv-new-engine-install.png" alt-text="Screenshot of following on-screen instructions to install the new engine.":::
+ Alternatively, select **Install the latest engine** later to update your model to the latest engine version.
-Alternatively, select **Install the latest engine** later to update your model to the latest engine version.
+ :::image type="content" source="media/custom-voice/cnv-install-latest-engine.png" alt-text="Screenshot of selecting Install the latest engine button to update engine.":::
+ You're not charged for the engine update. The previous versions are still kept.
-You're not charged for engine update. The previous versions are still kept. You can check all engine versions for the model from **Engine version** drop-down list, or remove one if you don't need it anymore.
+1. You can check all engine versions for the model from the **Engine version** list, or remove one if you don't need it anymore.
+ :::image type="content" source="media/custom-voice/cnv-engine-version.png" alt-text="Screenshot of displaying Engine version drop-down list.":::
-The updated version is automatically set as default. But you can change the default version by selecting a version from the drop-down list and selecting **Set as default**.
+ The updated version is automatically set as default. But you can change the default version by selecting a version from the drop-down list and selecting **Set as default**.
+ :::image type="content" source="media/custom-voice/cnv-engine-set-default.png" alt-text="Screenshot that shows how to set a version as default.":::
-If you want to test each engine version of your voice model, you can select a version from the drop-down list, then select **DefaultTests** under **Testing** to listen to the sample audios. If you want to upload your own test scripts to further test your current engine version, first make sure the version is set as default, then follow the [testing steps above](#test-your-voice-model).
+If you want to test each engine version of your voice model, you can select a version from the list, then select **DefaultTests** under **Testing** to listen to the sample audio files. If you want to upload your own test scripts to further test your current engine version, first make sure the version is set as default, then follow the steps in [Test your voice model](#test-your-voice-model).
-Updating the engine will create a new version of the model at no additional cost. After you've updated the engine version for your voice model, you need to deploy the new version to [create a new endpoint](how-to-deploy-and-use-endpoint.md#add-a-deployment-endpoint). You can only deploy the default version.
+Updating the engine creates a new version of the model at no extra cost. After you update the engine version for your voice model, you need to deploy the new version to [create a new endpoint](how-to-deploy-and-use-endpoint.md#add-a-deployment-endpoint). You can only deploy the default version.
:::image type="content" source="media/custom-voice/cnv-engine-redeploy.png" alt-text="Screenshot that shows how to redeploy a new version of your voice model.":::
-After you've created a new endpoint, you need to [transfer the traffic to the new endpoint in your product](how-to-deploy-and-use-endpoint.md#switch-to-a-new-voice-model-in-your-product).
+After you create a new endpoint, you need to [transfer the traffic to the new endpoint in your product](how-to-deploy-and-use-endpoint.md#switch-to-a-new-voice-model-in-your-product).
-For more information, [learn more about the capabilities and limits of this feature, and the best practice to improve your model quality](/legal/cognitive-services/speech-service/custom-neural-voice/characteristics-and-limitations-custom-neural-voice?context=%2fazure%2fcognitive-services%2fspeech-service%2fcontext%2fcontext).
+To learn more about the capabilities and limits of this feature, and the best practices for improving your model quality, see [Characteristics and limitations for using Custom Neural Voice](/legal/cognitive-services/speech-service/custom-neural-voice/characteristics-and-limitations-custom-neural-voice?context=%2fazure%2fcognitive-services%2fspeech-service%2fcontext%2fcontext).
## Copy your voice model to another project You can copy your voice model to another project for the same region or another region. For example, you can copy a neural voice model that was trained in one region, to a project for another region. > [!NOTE]
-> Custom Neural Voice training is currently only available in some regions. But you can easily copy a neural voice model from those regions to other regions. For more information, see the [regions for Custom Neural Voice](regions.md#speech-service).
+> Custom Neural Voice training is currently only available in some regions. You can copy a neural voice model from those regions to other regions. For more information, see the [regions for Custom Neural Voice](regions.md#speech-service).
To copy your custom neural voice model to another project:
To copy your custom neural voice model to another project:
:::image type="content" source="media/custom-voice/cnv-model-copy.png" alt-text="Screenshot of the copy to project option.":::
-1. Select the **Region**, **Speech resource**, and **Project** where you want to copy the model. You must have a speech resource and project in the target region, otherwise you need to create them first.
+1. Select the **Region**, **Speech resource**, and **Project** where you want to copy the model. You must have a Speech resource and project in the target region; if you don't, create them first.
:::image type="content" source="media/custom-voice/cnv-model-copy-dialog.png" alt-text="Screenshot of the copy voice model dialog."::: 1. Select **Submit** to copy the model.
-1. Select **View model** under the notification message for copy success.
+1. Select **View model** in the notification message that confirms the copy succeeded.
Navigate to the project where you copied the model to [deploy the model copy](how-to-deploy-and-use-endpoint.md).
ai-services How To Deploy And Use Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-deploy-and-use-endpoint.md
To create a custom neural voice endpoint:
1. Select **Custom Voice** > Your project name > **Deploy model** > **Deploy model**. 1. Select a voice model that you want to associate with this endpoint. 1. Enter a **Name** and **Description** for your custom endpoint.
+1. Select **Endpoint type** according to your scenario. If your resource is in a supported region, the default endpoint type is *High performance*. Otherwise, the only available option is *Fast resume*.
+    - *High performance*: Optimized for scenarios with real-time and high-volume synthesis requests, such as conversational AI and call-center bots. It takes around 5 minutes to deploy or resume an endpoint. For information about regions where the *High performance* endpoint type is supported, see the footnotes in the [regions](regions.md#speech-service) table.
+    - *Fast resume*: Optimized for audio content creation scenarios with less frequent synthesis requests. An endpoint can be deployed or resumed in under a minute. The *Fast resume* endpoint type is supported in all [regions](regions.md#speech-service) where text to speech is available.
+
1. Select **Deploy** to create your endpoint. After your endpoint is deployed, the endpoint name appears as a link. Select the link to display information specific to your endpoint, such as the endpoint key, endpoint URL, and sample code. When the status of the deployment is **Succeeded**, the endpoint is ready for use.
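After the endpoint is ready, a minimal Speech SDK sketch in Python for calling it might look like the following; the key, region, endpoint ID, and voice name are placeholders that you replace with the values from your endpoint details page:

```python
import azure.cognitiveservices.speech as speechsdk

# Placeholder values; copy the real ones from your endpoint details page.
speech_config = speechsdk.SpeechConfig(subscription="<YOUR_SPEECH_KEY>", region="<YOUR_REGION>")
speech_config.endpoint_id = "<YOUR_CUSTOM_ENDPOINT_ID>"
speech_config.speech_synthesis_voice_name = "<YOUR_CUSTOM_VOICE_NAME>"

# Synthesizes to the default speaker; pass an audio config to write a file instead.
synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config)
result = synthesizer.speak_text_async("Hello from my custom neural voice.").get()
print(result.reason)  # ResultReason.SynthesizingAudioCompleted on success
```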
ai-services How To Get Speech Session Id https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-get-speech-session-id.md
Generate a GUID inside your code or using any standard tool. Use the GUID value
As part of your REST request, use the `X-ConnectionId=<GUID>` expression. For our example, a sample request looks like this: ```http
-https://westeurope.stt.speech.microsoft.com/speech/recognition/conversation/cognitiveservices/v1?language=en-US&X-ConnectionId=9f4ffa5113a846eba289aa98b28e766f
+https://eastus.stt.speech.microsoft.com/speech/recognition/conversation/cognitiveservices/v1?language=en-US&X-ConnectionId=9f4ffa5113a846eba289aa98b28e766f
``` `9f4ffa5113a846eba289aa98b28e766f` is your Session ID.
+> [!WARNING]
+> The value of the `X-ConnectionId` parameter must be a GUID without dashes or other dividers. All other formats aren't supported and are discarded by the service.
+>
+> For example, if the request contains `X-ConnectionId=9f4ffa51-13a8-46eb-a289-aa98b28e766f` (a GUID with dividers) or `X-ConnectionId=Request9f4ffa5113a846eba289aa98b28e766f` (not a GUID), the value of `X-ConnectionId` isn't accepted by the system, and the Session won't be found in the logs.
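For illustration, here's a minimal Python sketch of such a request; it assumes the `requests` library, a WAV file named `sample.wav`, and a placeholder resource key:

```python
import uuid
import requests

# Generate a GUID without dashes; other formats are discarded by the service.
connection_id = uuid.uuid4().hex  # for example, '9f4ffa5113a846eba289aa98b28e766f'

url = (
    "https://eastus.stt.speech.microsoft.com/speech/recognition/"
    "conversation/cognitiveservices/v1"
)
params = {"language": "en-US", "X-ConnectionId": connection_id}
headers = {
    "Ocp-Apim-Subscription-Key": "<YOUR_SPEECH_KEY>",  # placeholder
    "Content-Type": "audio/wav; codecs=audio/pcm; samplerate=16000",
}

with open("sample.wav", "rb") as audio:
    response = requests.post(url, params=params, headers=headers, data=audio)

print(connection_id)  # Keep this value; it's the Session ID to look up in logs.
print(response.json())
```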
+ ## Getting Transcription ID for Batch transcription [Batch transcription API](batch-transcription.md) is a subset of the [Speech to text REST API](rest-speech-to-text.md).
ai-services How To Migrate To Prebuilt Neural Voice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-migrate-to-prebuilt-neural-voice.md
More than 75 prebuilt standard voices are available in over 45 languages and loc
| Arabic (Arabic ) | `ar-EG` | Female | `ar-EG-Hoda`| | Arabic (Saudi Arabia) | `ar-SA` | Male | `ar-SA-Naayf`| | Bulgarian (Bulgaria) | `bg-BG` | Male | `bg-BG-Ivan`|
-| Catalan (Spain) | `ca-ES` | Female | `ca-ES-HerenaRUS`|
+| Catalan | `ca-ES` | Female | `ca-ES-HerenaRUS`|
| Chinese (Cantonese, Traditional) | `zh-HK` | Male | `zh-HK-Danny`| | Chinese (Cantonese, Traditional) | `zh-HK` | Female | `zh-HK-TracyRUS`| | Chinese (Mandarin, Simplified) | `zh-CN` | Female | `zh-CN-HuihuiRUS`|
ai-services How To Speech Synthesis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-speech-synthesis.md
Title: "How to synthesize speech from text - Speech service"
-description: Learn how to convert text to speech. Learn about object construction and design patterns, supported audio output formats, and custom configuration options for speech synthesis.
+description: Learn how to convert text to speech, including object construction and design patterns, supported audio output formats, and custom configuration options.
Previously updated : 09/16/2022 Last updated : 08/30/2023 ms.devlang: cpp, csharp, golang, java, javascript, objective-c, python zone_pivot_groups: programming-languages-speech-services
ai-services How To Track Speech Sdk Memory Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-track-speech-sdk-memory-usage.md
ms.devlang: cpp, csharp, java, objective-c, python zone_pivot_groups: programming-languages-set-two- # How to track Speech SDK memory usage
ai-services How To Use Meeting Transcription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-use-meeting-transcription.md
Last updated 05/06/2023
zone_pivot_groups: acs-js-csharp-python ms.devlang: csharp, javascript-+ # Quickstart: Real-time meeting transcription
You can transcribe meetings with the ability to add, remove, and identify multip
> [!div class="nextstepaction"] > [Asynchronous Meeting Transcription](how-to-async-meeting-transcription.md)-
ai-services Language Identification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/language-identification.md
SPXAutoDetectSourceLanguageConfiguration* autoDetectSourceLanguageConfig = \
::: zone-end ::: zone pivot="programming-language-javascript"
-Language detection with a custom endpoint isn't supported by the Speech SDK for JavaScript. For example, if you include "fr-FR" as shown here, the custom endpoint will be ignored.
```Javascript var enLanguageConfig = SpeechSDK.SourceLanguageConfig.fromLanguage("en-US");
ai-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/language-support.md
With the cross-lingual feature, you can transfer your custom neural voice model
# [Pronunciation assessment](#tab/pronunciation-assessment)
-The table in this section summarizes the 20 locales supported for pronunciation assessment, and each language is available on all [Speech to text regions](regions.md#speech-service). Latest update extends support from English to 19 additional languages and quality enhancements to existing features, including accuracy, fluency and miscue assessment. You should specify the language that you're learning or practicing improving pronunciation. The default language is set as `en-US`. If you know your target learning language, [set the locale](how-to-pronunciation-assessment.md#get-pronunciation-assessment-results) accordingly. For example, if you're learning British English, you should specify the language as `en-GB`. If you're teaching a broader language, such as Spanish, and are uncertain about which locale to select, you can run various accent models (`es-ES`, `es-MX`) to determine the one that achieves the highest score to suit your specific scenario.
+The table in this section summarizes the 22 locales supported for pronunciation assessment, and each language is available on all [Speech to text regions](regions.md#speech-service). The latest update extends support from English to 21 additional languages and brings quality enhancements to existing features, including accuracy, fluency, and miscue assessment. You should specify the language whose pronunciation you're learning or practicing. The default language is set as `en-US`. If you know your target learning language, [set the locale](how-to-pronunciation-assessment.md#get-pronunciation-assessment-results) accordingly. For example, if you're learning British English, specify the language as `en-GB`. If you're teaching a broader language, such as Spanish, and are uncertain about which locale to select, you can run various accent models (`es-ES`, `es-MX`) to determine the one that achieves the highest score to suit your specific scenario, as in the sketch that follows.
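As an illustration of that last point, here's a minimal Speech SDK sketch in Python that scores one recording against two Spanish accent models and keeps the higher-scoring locale; the key, region, audio file, and reference text are placeholders:

```python
import azure.cognitiveservices.speech as speechsdk

# Placeholder values; replace with your own key, region, audio, and script.
speech_config = speechsdk.SpeechConfig(subscription="<YOUR_SPEECH_KEY>", region="<YOUR_REGION>")
reference_text = "Buenos días, ¿cómo estás?"
scores = {}

for locale in ("es-ES", "es-MX"):
    audio_config = speechsdk.audio.AudioConfig(filename="sample.wav")
    recognizer = speechsdk.SpeechRecognizer(
        speech_config=speech_config, language=locale, audio_config=audio_config
    )
    assessment = speechsdk.PronunciationAssessmentConfig(
        reference_text=reference_text,
        grading_system=speechsdk.PronunciationAssessmentGradingSystem.HundredMark,
        granularity=speechsdk.PronunciationAssessmentGranularity.Phoneme,
    )
    assessment.apply_to(recognizer)
    result = recognizer.recognize_once()  # assumes the audio is recognized
    scores[locale] = speechsdk.PronunciationAssessmentResult(result).accuracy_score

# Pick the accent model that achieves the highest accuracy score.
print(max(scores, key=scores.get), scores)
```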
[!INCLUDE [Language support include](includes/language-support/pronunciation-assessment.md)]
ai-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/overview.md
The base model may not be sufficient if the audio contains ambient noise or incl
With [real-time speech to text](get-started-speech-to-text.md), the audio is transcribed as speech is recognized from a microphone or file. Use real-time speech to text for applications that need to transcribe audio in real-time such as: - Transcriptions, captions, or subtitles for live meetings
+- [Diarization](get-started-stt-diarization.md)
+- [Pronunciation assessment](how-to-pronunciation-assessment.md)
- Contact center agent assist - Dictation - Voice agents-- Pronunciation assessment ### Batch transcription
ai-services Record Custom Voice Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/record-custom-voice-samples.md
Create a reference recording, or *match file,* of a typical utterance at the beg
The match file is especially important when you resume recording after a break or on another day. Play it a few times for the talent and have them repeat it each time until they're matching well.
+To record a corpus with a specific style, carefully choose scripts that showcase the desired style. During recording, ensure the voice talent remains consistent in volume, tempo, pitch, and tone to achieve recordings that embody the intended style.
+ Coach your talent to take a deep breath and pause for a moment before each utterance. Record a couple of seconds of silence between utterances. Words should be pronounced the same way each time they appear, considering context. For example, "record" as a verb is pronounced differently from "record" as a noun. Record approximately five seconds of silence before the first recording to capture the "room tone". This practice helps Speech Studio compensate for noise in the recordings.
ai-services Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/regions.md
The following regions are supported for Speech service features such as speech t
| -- | -- | -- | | Africa | South Africa North | `southafricanorth` <sup>6</sup>| | Asia Pacific | East Asia | `eastasia` <sup>5</sup>|
-| Asia Pacific | Southeast Asia | `southeastasia` <sup>1,2,3,4,5</sup>|
-| Asia Pacific | Australia East | `australiaeast` <sup>1,2,3,4</sup>|
+| Asia Pacific | Southeast Asia | `southeastasia` <sup>1,2,3,4,5,7</sup>|
+| Asia Pacific | Australia East | `australiaeast` <sup>1,2,3,4,7</sup>|
| Asia Pacific | Central India | `centralindia` <sup>1,2,3,4,5</sup>| | Asia Pacific | Japan East | `japaneast` <sup>2,5</sup>| | Asia Pacific | Japan West | `japanwest` | | Asia Pacific | Korea Central | `koreacentral` <sup>2</sup>| | Canada | Canada Central | `canadacentral` <sup>1</sup>|
-| Europe | North Europe | `northeurope` <sup>1,2,4,5</sup>|
-| Europe | West Europe | `westeurope` <sup>1,2,3,4,5</sup>|
+| Europe | North Europe | `northeurope` <sup>1,2,4,5,7</sup>|
+| Europe | West Europe | `westeurope` <sup>1,2,3,4,5,7</sup>|
| Europe | France Central | `francecentral` | | Europe | Germany West Central | `germanywestcentral` | | Europe | Norway East | `norwayeast` | | Europe | Switzerland North | `switzerlandnorth` <sup>6</sup>| | Europe | Switzerland West | `switzerlandwest` |
-| Europe | UK South | `uksouth` <sup>1,2,3,4</sup>|
+| Europe | UK South | `uksouth` <sup>1,2,3,4,7</sup>|
| Middle East | UAE North | `uaenorth` <sup>6</sup>| | South America | Brazil South | `brazilsouth` <sup>6</sup>| | US | Central US | `centralus` |
-| US | East US | `eastus` <sup>1,2,3,4,5</sup>|
+| US | East US | `eastus` <sup>1,2,3,4,5,7</sup>|
| US | East US 2 | `eastus2` <sup>1,2,4,5</sup>| | US | North Central US | `northcentralus` <sup>4,6</sup>|
-| US | South Central US | `southcentralus` <sup>1,2,3,4,5,6</sup>|
+| US | South Central US | `southcentralus` <sup>1,2,3,4,5,6,7</sup>|
| US | West Central US | `westcentralus` <sup>5</sup>| | US | West US | `westus` <sup>2,5</sup>|
-| US | West US 2 | `westus2` <sup>1,2,4,5</sup>|
+| US | West US 2 | `westus2` <sup>1,2,4,5,7</sup>|
| US | West US 3 | `westus3` | <sup>1</sup> The region has dedicated hardware for Custom Speech training. If you plan to train a custom model with audio data, use one of the regions with dedicated hardware for faster training. Then you can [copy the trained model](how-to-custom-speech-train-model.md#copy-a-model) to another region.
The following regions are supported for Speech service features such as speech t
<sup>6</sup> The region does not support Speaker Recognition.
+<sup>7</sup> The region supports the [high performance](how-to-deploy-and-use-endpoint.md#add-a-deployment-endpoint) endpoint type for Custom Neural Voice.
+ ## Intent recognition Available regions for intent recognition via the Speech SDK are in the following table.
ai-services Rest Speech To Text Short https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/rest-speech-to-text-short.md
Audio is sent in the body of the HTTP `POST` request. It must be in one of the f
| Format | Codec | Bit rate | Sample rate | |--|-|-|--| | WAV | PCM | 256 kbps | 16 kHz, mono |
-| OGG | OPUS | 256 kpbs | 16 kHz, mono |
+| OGG | OPUS | 256 kbps | 16 kHz, mono |
> [!NOTE] > The preceding formats are supported through the REST API for short audio and WebSocket in the Speech service. The [Speech SDK](speech-sdk.md) supports the WAV format with PCM codec as well as [other formats](how-to-use-codec-compressed-audio-input-streams.md).
ai-services Speech Container Cstt https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/speech-container-cstt.md
Previously updated : 04/18/2023 Last updated : 08/29/2023 zone_pivot_groups: programming-languages-speech-sdk-cli keywords: on-premises, Docker, container
keywords: on-premises, Docker, container
The Custom speech to text container transcribes real-time speech or batch audio recordings with intermediate results. You can use a custom model that you created in the [Custom Speech portal](https://speech.microsoft.com/customspeech). In this article, you'll learn how to download, install, and run a Custom speech to text container.
-> [!NOTE]
-> You must [request and get approval](speech-container-overview.md#request-approval-to-run-the-container) to use a Speech container.
- For more information about prerequisites, validating that a container is running, running multiple containers on the same host, and running disconnected containers, see [Install and run Speech containers with Docker](speech-container-howto.md). ## Container images
sudo chown -R nonroot:nonroot <YOUR_LOCAL_MACHINE_PATH_1> <YOUR_LOCAL_MACHINE_PA
* See the [Speech containers overview](speech-container-overview.md) * Review [configure containers](speech-container-configuration.md) for configuration settings
-* Use more [Azure AI services containers](../cognitive-services-container-support.md)
+* Use more [Azure AI containers](../cognitive-services-container-support.md)
ai-services Speech Container Howto https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/speech-container-howto.md
By using containers, you can use a subset of the Speech service features in your
## Prerequisites You must meet the following prerequisites before you use Speech service containers. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/cognitive-services/) before you begin. You need:-
-* You must [request and get approval](speech-container-overview.md#request-approval-to-run-the-container) to use a Speech container.
+
* [Docker](https://docs.docker.com/) installed on a host computer. Docker must be configured to allow the containers to connect with and send billing data to Azure. * On Windows, Docker must also be configured to support Linux containers. * You should have a basic understanding of [Docker concepts](https://docs.docker.com/get-started/overview/).
To run disconnected containers (not connected to the internet), you must submit
* Review [configure containers](speech-container-configuration.md) for configuration settings. * Learn how to [use Speech service containers with Kubernetes and Helm](speech-container-howto-on-premises.md). * Deploy and run containers on [Azure Container Instance](../containers/azure-container-instance-recipe.md)
-* Use more [Azure AI services containers](../cognitive-services-container-support.md).
+* Use more [Azure AI containers](../cognitive-services-container-support.md).
ai-services Speech Container Lid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/speech-container-lid.md
Previously updated : 04/18/2023 Last updated : 08/28/2023 zone_pivot_groups: programming-languages-speech-sdk-cli keywords: on-premises, Docker, container
keywords: on-premises, Docker, container
The Speech language identification container detects the language spoken in audio files. You can get real-time speech or batch audio recordings with intermediate results. In this article, you'll learn how to download, install, and run a language identification container. > [!NOTE]
-> You must [request and get approval](speech-container-overview.md#request-approval-to-run-the-container) to use a Speech container.
->
> The Speech language identification container is available in public preview. Containers in preview are still under development and don't meet Microsoft's stability and support requirements. For more information about prerequisites, validating that a container is running, running multiple containers on the same host, and running disconnected containers, see [Install and run Speech containers with Docker](speech-container-howto.md).
Increasing the number of concurrent calls can affect reliability and latency. Fo
* See the [Speech containers overview](speech-container-overview.md) * Review [configure containers](speech-container-configuration.md) for configuration settings
-* Use more [Azure AI services containers](../cognitive-services-container-support.md)
+* Use more [Azure AI containers](../cognitive-services-container-support.md)
ai-services Speech Container Ntts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/speech-container-ntts.md
Previously updated : 04/18/2023 Last updated : 08/28/2023 zone_pivot_groups: programming-languages-speech-sdk-cli keywords: on-premises, Docker, container
keywords: on-premises, Docker, container
The neural text to speech container converts text to natural-sounding speech by using deep neural network technology, which allows for more natural synthesized speech. In this article, you'll learn how to download, install, and run a Text to speech container.
-> [!NOTE]
-> You must [request and get approval](speech-container-overview.md#request-approval-to-run-the-container) to use a Speech container.
- For more information about prerequisites, validating that a container is running, running multiple containers on the same host, and running disconnected containers, see [Install and run Speech containers with Docker](speech-container-howto.md). ## Container images
For example, a model that was downloaded via the `latest` tag (defaults to "en-U
* See the [Speech containers overview](speech-container-overview.md) * Review [configure containers](speech-container-configuration.md) for configuration settings
-* Use more [Azure AI services containers](../cognitive-services-container-support.md)
+* Use more [Azure AI containers](../cognitive-services-container-support.md)
ai-services Speech Container Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/speech-container-overview.md
Previously updated : 04/18/2023 Last updated : 08/29/2023 keywords: on-premises, Docker, container
keywords: on-premises, Docker, container
By using containers, you can use a subset of the Speech service features in your own environment. With Speech containers, you can build a speech application architecture that's optimized for both robust cloud capabilities and edge locality. Containers are great for specific security and data governance requirements.
-> [!NOTE]
-> You must [request and get approval](#request-approval-to-run-the-container) to use a Speech container.
- ## Available Speech containers The following table lists the Speech containers available in the Microsoft Container Registry (MCR). The table also lists the features supported by each container and the latest version of the container.
The following table lists the Speech containers available in the Microsoft Conta
<sup>1</sup> The container is available in public preview. Containers in preview are still under development and don't meet Microsoft's stability and support requirements. <sup>2</sup> Not available as a disconnected container.
-## Request approval to run the container
+## Request approval to run containers disconnected from the internet
-To use the Speech containers, you must submit one of the following request forms and wait for approval:
-- [Connected containers request form](https://aka.ms/csgate) if you want to run containers regularly, in environments that are only connected to the internet.-- [Disconnected Container request form](https://aka.ms/csdisconnectedcontainers) if you want to run containers in environments that can be disconnected from the internet. For more information about applying and purchasing a commitment plan to use containers in disconnected environments, see [Use containers in disconnected environments](../containers/disconnected-containers.md) in the Azure AI services documentation.
+To use the Speech containers in environments that are disconnected from the internet, you must submit a [request form](https://aka.ms/csdisconnectedcontainers) and wait for approval. For more information about applying and purchasing a commitment plan to use containers in disconnected environments, see [Use containers in disconnected environments](../containers/disconnected-containers.md) in the Azure AI services documentation.
The form requests information about you, your company, and the user scenario for which you'll use the container.
The form requests information about you, your company, and the user scenario for
After you submit the form, the Azure AI services team reviews it and emails you with a decision within 10 business days.
-> [!IMPORTANT]
-> To use the Speech containers, your request must be approved.
-
-While you're waiting for approval, you can [setup the prerequisites](speech-container-howto.md#prerequisites) on your host computer. You can also download the container from the Microsoft Container Registry (MCR). You can run the container after your request is approved.
- ## Billing The Speech containers send billing information to Azure by using a Speech resource on your Azure account.
ai-services Speech Container Stt https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/speech-container-stt.md
Previously updated : 04/18/2023 Last updated : 08/28/2023 zone_pivot_groups: programming-languages-speech-sdk-cli keywords: on-premises, Docker, container
keywords: on-premises, Docker, container
The Speech to text container transcribes real-time speech or batch audio recordings with intermediate results. In this article, you'll learn how to download, install, and run a Speech to text container.
-> [!NOTE]
-> You must [request and get approval](speech-container-overview.md#request-approval-to-run-the-container) to use a Speech container.
- For more information about prerequisites, validating that a container is running, running multiple containers on the same host, and running disconnected containers, see [Install and run Speech containers with Docker](speech-container-howto.md). ## Container images
For more information about `docker run` with Speech containers, see [Install and
* See the [Speech containers overview](speech-container-overview.md) * Review [configure containers](speech-container-configuration.md) for configuration settings
-* Use more [Azure AI services containers](../cognitive-services-container-support.md)
+* Use more [Azure AI containers](../cognitive-services-container-support.md)
ai-services Speech Services Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/speech-services-private-link.md
Use these parameters instead of the parameters in the article that you chose:
| Resource | **\<your-speech-resource-name>** | | Target sub-resource | **account** |
-**DNS for private endpoints:** Review the general principles of [DNS for private endpoints in Azure AI services resources](../cognitive-services-virtual-networks.md#dns-changes-for-private-endpoints). Then confirm that your DNS configuration is working correctly by performing the checks described in the following sections.
+**DNS for private endpoints:** Review the general principles of [DNS for private endpoints in Azure AI services resources](../cognitive-services-virtual-networks.md#apply-dns-changes-for-private-endpoints). Then confirm that your DNS configuration is working correctly by performing the checks described in the following sections.
### Resolve DNS from the virtual network
ai-services Speech Services Quotas And Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/speech-services-quotas-and-limits.md
You can use real-time text to speech with the [Speech SDK](speech-sdk.md) or the
#### Batch synthesis
-These limits aren't adjustable.
+These limits aren't adjustable. For more information on batch synthesis latency, see [batch synthesis latency and best practices](batch-synthesis.md#batch-synthesis-latency-and-best-practices).
| Quota | Free (F0) | Standard (S0) | |--|--|--|
These limits aren't adjustable.
| Max number of simultaneous dataset uploads | N/A | 5 | | Max data file size for data import per dataset | N/A | 2 GB | | Upload of long audios or audios without script | N/A | Yes |
-| Max number of simultaneous model trainings | N/A | 3 |
+| Max number of simultaneous model trainings | N/A | 4 |
| Max number of custom endpoints | N/A | 50 | #### Audio Content Creation tool
ai-services Speech Synthesis Markup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/speech-synthesis-markup.md
Title: Speech Synthesis Markup Language (SSML) overview - Speech service
-description: Use the Speech Synthesis Markup Language to control pronunciation and prosody in text to speech.
+description: Learn how to use the Speech Synthesis Markup Language to control pronunciation and prosody in text to speech.
Previously updated : 11/30/2022 Last updated : 8/16/2023 # Speech Synthesis Markup Language (SSML) overview
-Speech Synthesis Markup Language (SSML) is an XML-based markup language that can be used to fine-tune the text to speech output attributes such as pitch, pronunciation, speaking rate, volume, and more. You have more control and flexibility compared to plain text input.
+Speech Synthesis Markup Language (SSML) is an XML-based markup language that you can use to fine-tune your text to speech output attributes such as pitch, pronunciation, speaking rate, volume, and more. It gives you more control and flexibility than plain text input.
> [!TIP]
-> You can hear voices in different styles and pitches reading example text via the [Voice Gallery](https://speech.microsoft.com/portal/voicegallery).
+> You can hear voices in different styles and pitches reading example text by using the [Voice Gallery](https://speech.microsoft.com/portal/voicegallery).
-## Scenarios
+## Use case scenarios
-You can use SSML to:
+SSML is designed to give you flexibility in how you want your speech output to sound, and it provides different properties for how you can customize that output. You can use SSML to:
-- [Define the input text structure](speech-synthesis-markup-structure.md) that determines the structure, content, and other characteristics of the text to speech output. For example, you can use SSML to define a paragraph, a sentence, a break or a pause, or silence. You can wrap text with event tags such as bookmark or viseme that can be processed later by your application.-- [Choose the voice](speech-synthesis-markup-voice.md), language, name, style, and role. You can use multiple voices in a single SSML document. Adjust the emphasis, speaking rate, pitch, and volume. You can also use SSML to insert pre-recorded audio, such as a sound effect or a musical note.
+- [Define the input text structure](speech-synthesis-markup-structure.md) that determines the structure, content, and other characteristics of your text to speech output. For example, you can use SSML to define a paragraph, a sentence, a break or a pause, or silence. You can wrap text with event tags, like a bookmark or viseme, that your application can process later. A viseme is the visual description of a phoneme, the individual speech sounds, in spoken language.
+- [Choose the voice](speech-synthesis-markup-voice.md), language, name, style, and role. You can use multiple voices in a single SSML document. You can also adjust the emphasis, speaking rate, pitch, and volume. SSML can also insert prerecorded audio, such as a sound effect or a musical note.
- [Control pronunciation](speech-synthesis-markup-pronunciation.md) of the output audio. For example, you can use SSML with phonemes and a custom lexicon to improve pronunciation. You can also use SSML to define how a word or mathematical expression is pronounced.
-## Use SSML
+## Ways to work with SSML
+
+SSML functionality is available in various tools that might fit your use case.
> [!IMPORTANT]
-> You're billed for each character that's converted to speech, including punctuation. Although the SSML document itself is not billable, optional elements that are used to adjust how the text is converted to speech, like phonemes and pitch, are counted as billable characters. For more information, see [text to speech pricing notes](text-to-speech.md#pricing-note).
+> You're billed for each character that's converted to speech, including punctuation. Although the SSML document itself isn't billable, the service counts optional elements that you use to adjust how the text is converted to speech, like phonemes and pitch, as billable characters. For more information, see [Pricing note](text-to-speech.md#pricing-note).
You can use SSML in the following ways: -- [Audio Content Creation](https://aka.ms/audiocontentcreation) tool: Author plain text and SSML in Speech Studio: You can listen to the output audio and adjust the SSML to improve speech synthesis. For more information, see [Speech synthesis with the Audio Content Creation tool](how-to-audio-content-creation.md).-- [Batch synthesis API](batch-synthesis.md): Provide SSML via the `inputs` property. -- [Speech CLI](get-started-text-to-speech.md?pivots=programming-language-cli): Provide SSML via the `spx synthesize --ssml SSML` command line argument.-- [Speech SDK](how-to-speech-synthesis.md#use-ssml-to-customize-speech-characteristics): Provide SSML via the "speak" SSML method.
+- [The Audio Content Creation](https://aka.ms/audiocontentcreation) tool lets you author plain text and SSML in Speech Studio. You can listen to the output audio and adjust the SSML to improve speech synthesis. For more information, see [Speech synthesis with the Audio Content Creation tool](how-to-audio-content-creation.md).
+- [The Batch synthesis API](batch-synthesis.md) accepts SSML via the `inputs` property.
+- [The Speech CLI](get-started-text-to-speech.md?pivots=programming-language-cli) accepts SSML via the `spx synthesize --ssml SSML` command line argument.
+- [The Speech SDK](how-to-speech-synthesis.md#use-ssml-to-customize-speech-characteristics) accepts SSML via the speak SSML methods in each supported programming language, as in the sketch that follows this list.
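For instance, here's a minimal Python sketch of the Speech SDK route; the key and region are placeholders, and the voice and prosody values are only examples:

```python
import azure.cognitiveservices.speech as speechsdk

# Placeholder values; replace with your own key and region.
speech_config = speechsdk.SpeechConfig(subscription="<YOUR_SPEECH_KEY>", region="<YOUR_REGION>")
synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config)

# A small SSML document: choose a voice and slow the speaking rate slightly.
ssml = """<speak version='1.0' xmlns='http://www.w3.org/2001/10/synthesis' xml:lang='en-US'>
  <voice name='en-US-JennyNeural'>
    <prosody rate='-10%'>Welcome to text to speech with SSML.</prosody>
  </voice>
</speak>"""

result = synthesizer.speak_ssml_async(ssml).get()
print(result.reason)  # ResultReason.SynthesizingAudioCompleted on success
```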
## Next steps - [SSML document structure and events](speech-synthesis-markup-structure.md) - [Voice and sound with SSML](speech-synthesis-markup-voice.md) - [Pronunciation with SSML](speech-synthesis-markup-pronunciation.md)-- [Language support: Voices, locales, languages](language-support.md?tabs=tts)
+- [Language and voice support for the Speech service](language-support.md?tabs=tts)
ai-services Deploy User Managed Glossary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/containers/deploy-user-managed-glossary.md
+
+ Title: Deploy a user-managed glossary in Translator container
+
+description: How to deploy a user-managed glossary in the Translator container environment.
++++++ Last updated : 08/15/2023+
+recommendations: false
++
+<!-- markdownlint-disable MD036 -->
+<!-- markdownlint-disable MD046 -->
+
+# Deploy a user-managed glossary
+
+Microsoft Translator containers enable you to run several features of the Translator service in your own environment and are great for specific security and data governance requirements.
+
+At times, when running a container with a multi-layered ingestion process, you may discover that you need to update sentence and/or phrase files. Since the standard phrase and sentence files are encrypted and read directly into memory at runtime, you need a quick-fix engineering solution to apply a dynamic update. You can implement this update by using our user-managed glossary feature:
+
+* To deploy the **phrase&#8203;fix** solution, you need to create a **phrase&#8203;fix** glossary file to specify that a listed phrase is translated in a specified way.
+
+* To deploy the **sent&#8203;fix** solution, you need to create a **sent&#8203;fix** glossary file to specify an exact target translation for a source sentence.
+
+* The **phrase&#8203;fix** and **sent&#8203;fix** files are then included with your translation request and read directly into memory at runtime.
+
+## Managed glossary workflow
+
+ > [!IMPORTANT]
+ > **UTF-16 LE** is the only accepted file format for the managed-glossary folders. For more information about encoding your files, *see* [Encoding](/powershell/module/microsoft.powershell.management/set-content?view=powershell-7.2#-encoding&preserve-view=true)
+
+1. To get started, manually create the folder structure and name your folder. The managed-glossary folder is encoded in **UTF-16 LE BOM** format and nests **phrase&#8203;fix** or **sent&#8203;fix** source and target language files. Let's name our folder `customhotfix`. Each folder can have **phrase&#8203;fix** and **sent&#8203;fix** files. You provide the source (`src`) and target (`tgt`) language codes with the following naming convention:
+
+ |Glossary file name format|Example file name |
+ |--|--|
+ |{`src`}.{`tgt`}.{container-glossary}.{phrase&#8203;fix}.src.snt|en.es.container-glossary.phrasefix.src.snt|
+ |{`src`}.{`tgt`}.{container-glossary}.{phrase&#8203;fix}.tgt.snt|en.es.container-glossary.phrasefix.tgt.snt|
+ |{`src`}.{`tgt`}.{container-glossary}.{sent&#8203;fix}.src.snt|en.es.container-glossary.sentfix.src.snt|
+ |{`src`}.{`tgt`}.{container-glossary}.{sent&#8203;fix}.tgt.snt|en.es.container-glossary.sentfix.tgt.snt|
+
+ > [!NOTE]
+ >
+ > * The **phrase&#8203;fix** solution is an exact find-and-replace operation. Any word or phrase listed is translated in the way specified.
+ > * The **sent&#8203;fix** solution is more precise and allows you to specify an exact target translation for a source sentence. For a sentence match to occur, the entire submitted sentence must match the **sent&#8203;fix** entry. If only a portion of the sentence matches, the entry won't match.
+ > * If you're hesitant about making sweeping find-and-replace changes, we recommend, at the outset, solely using the **sent&#8203;fix** solution.
+
+1. Next, to dynamically reload glossary entry updates, create a `version.json` file within the `customhotfix` folder. The `version.json` file must contain a single parameter, **VersionId**, with an integer value. A sketch that creates this folder structure programmatically follows these steps.
+
+ ***Sample version.json file***
+
+ ```json
+ {
+   "VersionId": 5
+ }
+ ```
+
+ > [!TIP]
+ >
+ > Reload can be controlled by setting the following environment variables when starting the container:
+ >
+ > * **HotfixReloadInterval=**. Default value is 5 minutes.
+ > * **HotfixReloadEnabled=**. Default value is true.
+
+1. Use the **docker run** command
+
+ **Docker run command required options**
+
+ ```bash
+ docker run --rm -it -p 5000:5000 \
+
+ -e eula=accept \
+
+ -e billing={ENDPOINT_URI} \
+
+ -e apikey={API_KEY} \
+
+ -e Languages={LANGUAGES_LIST} \
+
+ -e HotfixDataFolder={path to glossary folder}
+
+ {image}
+ ```
+
+ **Example docker run command**
+
+ ```bash
+
+ docker run --rm -it -p 5000:5000 \
+ -v /mnt/d/models:/usr/local/models -v /mnt/d/customerhotfix:/usr/local/customhotfix \
+ -e EULA=accept \
+ -e billing={ENDPOINT_URI} \
+ -e apikey={API_Key} \
+ -e Languages=en,es \
+ -e HotfixDataFolder=/usr/local/customhotfix \
+ mcr.microsoft.com/azure-cognitive-services/translator/text-translation:latest
+
+ ```
+
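To tie the steps together, here's a minimal Python sketch that creates the `customhotfix` folder, a pair of hypothetical line-aligned **phrase&#8203;fix** files, and the `version.json` file. Everything is written as **UTF-16 LE** with a BOM per the note above; treating `version.json` as requiring the same encoding is an assumption here, and the glossary entries are invented examples:

```python
import json
from pathlib import Path

def write_utf16le_bom(path: Path, text: str) -> None:
    # UTF-16 LE with BOM is the only accepted encoding for these folders.
    path.write_bytes(b"\xff\xfe" + text.encode("utf-16-le"))

# Hypothetical entries; source and target files are aligned line by line.
src_entries = ["Contoso Cloud", "knowledge base"]
tgt_entries = ["Contoso Cloud", "base de conocimientos"]

folder = Path("customhotfix")
folder.mkdir(exist_ok=True)

write_utf16le_bom(folder / "en.es.container-glossary.phrasefix.src.snt", "\n".join(src_entries))
write_utf16le_bom(folder / "en.es.container-glossary.phrasefix.tgt.snt", "\n".join(tgt_entries))

# Bump VersionId to trigger a dynamic reload within HotfixReloadInterval.
write_utf16le_bom(folder / "version.json", json.dumps({"VersionId": 5}))
```
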
+## Learn more
+
+> [!div class="nextstepaction"]
+> [Create a dynamic dictionary](../dynamic-dictionary.md) [Use a custom dictionary](../custom-translator/concepts/dictionaries.md)
ai-services Translator How To Install Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/containers/translator-how-to-install-container.md
keywords: on-premises, Docker, container, identify
# Install and run Translator containers
-Containers enable you to run several features of the Translator service in your own environment. Containers are great for specific security and data governance requirements. In this article you'll learn how to download, install, and run a Translator container.
+Containers enable you to run several features of the Translator service in your own environment. Containers are great for specific security and data governance requirements. In this article you learn how to download, install, and run a Translator container.
Translator container enables you to build a translator application architecture that is optimized for both robust cloud capabilities and edge locality.
See the list of [languages supported](../language-support.md) when using Transla
> [!IMPORTANT] >
-> * To use the Translator container, you must submit an online request, and have it approved. For more information, _see_ [Request approval to run container](#request-approval-to-run-container) below.
-> * Translator container supports limited features compared to the cloud offerings. Form more information, _see_ [**Container translate methods**](translator-container-supported-parameters.md).
+> * To use the Translator container, you must submit an online request and have it approved. For more information, _see_ [Request approval to run container](#request-approval-to-run-container).
+> * Translator container supports limited features compared to the cloud offerings. For more information, _see_ [**Container translate methods**](translator-container-supported-parameters.md).
<!-- markdownlint-disable MD033 --> ## Prerequisites
-To get started, you'll need an active [**Azure account**](https://azure.microsoft.com/free/cognitive-services/). If you don't have one, you can [**create a free account**](https://azure.microsoft.com/free/).
+To get started, you need an active [**Azure account**](https://azure.microsoft.com/free/cognitive-services/). If you don't have one, you can [**create a free account**](https://azure.microsoft.com/free/).
-You'll also need to have:
+You also need:
| Required | Purpose | |--|--|
-| Familiarity with Docker | <ul><li>You should have a basic understanding of Docker concepts, like registries, repositories, containers, and container images, as well as knowledge of basic `docker` [terminology and commands](/dotnet/architecture/microservices/container-docker-introduction/docker-terminology).</li></ul> |
+| Familiarity with Docker | <ul><li>You should have a basic understanding of Docker concepts like registries, repositories, containers, and container images, as well as knowledge of basic `docker` [terminology and commands](/dotnet/architecture/microservices/container-docker-introduction/docker-terminology).</li></ul> |
| Docker Engine | <ul><li>You need the Docker Engine installed on a [host computer](#host-computer). Docker provides packages that configure the Docker environment on [macOS](https://docs.docker.com/docker-for-mac/), [Windows](https://docs.docker.com/docker-for-windows/), and [Linux](https://docs.docker.com/engine/installation/#supported-platforms). For a primer on Docker and container basics, see the [Docker overview](https://docs.docker.com/engine/docker-overview/).</li><li> Docker must be configured to allow the containers to connect with and send billing data to Azure. </li><li> On **Windows**, Docker must also be configured to support **Linux** containers.</li></ul> |
-| Translator resource | <ul><li>An Azure [Translator](https://portal.azure.com/#create/Microsoft.CognitiveServicesTextTranslation) resource with region other than 'global', associated API key and endpoint URI. Both values are required to start the container and can be found on the resource overview page.</li></ul>|
+| Translator resource | <ul><li>An Azure [Translator](https://portal.azure.com/#create/Microsoft.CognitiveServicesTextTranslation) regional resource (not `global`) with an associated API key and endpoint URI. Both values are required to start the container and can be found on the resource overview page.</li></ul>|
|Optional|Purpose| ||-|
curl -X POST "http://localhost:5000/translate?api-version=3.0&from=en&to=zh-HANS
There are several ways to validate that the container is running:
-* The container provides a homepage at `\` as a visual validation that the container is running.
+* The container provides a homepage at `/` as a visual validation that the container is running.
-* You can open your favorite web browser and navigate to the external IP address and exposed port of the container in question. Use the various request URLs below to validate the container is running. The example request URLs listed below are `http://localhost:5000`, but your specific container may vary. Keep in mind that you're navigating to your container's **External IP address** and exposed port.
+* You can open your favorite web browser and navigate to the external IP address and exposed port of the container in question. Use the following request URLs to validate the container is running. The example request URLs listed point to `http://localhost:5000`, but your specific container may vary. Keep in mind that you're navigating to your container's **External IP address** and exposed port.
| Request URL | Purpose | |--|--|
ai-services Create Translator Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/create-translator-resource.md
Title: Create a Translator resource
-description: This article shows you how to create an Azure AI Translator resource and retrieve yourgit key and endpoint URL.
+description: This article shows you how to create an Azure AI Translator resource.
Previously updated : 07/18/2023 Last updated : 09/06/2023 # Create a Translator resource
The Translator service can be accessed through two different resource types:
* Each subscription has a free tier. * The free tier has the same features and functionality as the paid plans and doesn't expire. * Only one free tier is available per subscription.
- * Document Translation is supported in paid tiers. The Language Studio only supports the S1 or D3 instance tiers. We suggest you select the Standard S1 instance tier to try Document Translation.
+ * Document Translation is supported in paid tiers. The Language Studio only supports the S1 or D3 instance tiers. We suggest you select the Standard S1 instance tier to try Document Translation.
1. If you've created a multi-service resource, you need to confirm more usage details via the check boxes.
All Azure AI services API requests require an endpoint URL and a read-only key f
:::image type="content" source="media/keys-and-endpoint-resource.png" alt-text="Get key and endpoint.":::
+## Create a Text Translation client
+
+Text Translation supports both [global and regional endpoints](#complete-your-project-and-instance-details). Once you have your [authentication keys](#authentication-keys-and-endpoint-url), you need to create an instance of the `TextTranslationClient`, using an `AzureKeyCredential` for authentication, to interact with the Text Translation service:
+
+* To create a `TextTranslationClient` using a global resource endpoint, you need your resource **API key**:
+
+ ```csharp
+ AzureKeyCredential credential = new("<apiKey>");
+ TextTranslationClient client = new(credential);
+ ```
+
+* To create a `TextTranslationClient` using a regional resource endpoint, you need your resource **API key** and the name of the **region** where your resource is located:
+
+ ```csharp
+ AzureKeyCredential credential = new("<apiKey>");
+ TextTranslationClient client = new(credential, "<region>");
+ ```
+ ## How to delete a resource or resource group > [!WARNING]
ai-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/language-support.md
| Upper Sorbian | `hsb` |Γ£ö|Γ£ö|||| | Urdu | `ur` |Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö| | Uyghur (Arabic) | `ug` |Γ£ö|Γ£ö|||
-| Uzbek (Latin | `uz` |Γ£ö|Γ£ö||Γ£ö||
+| Uzbek (Latin) | `uz` |Γ£ö|Γ£ö||Γ£ö||
| Vietnamese | `vi` |Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö| | Welsh | `cy` |Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö| | Yucatec Maya | `yua` |Γ£ö|Γ£ö||Γ£ö||
ai-services Quickstart Text Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/quickstart-text-rest-api.md
Previously updated : 07/18/2023 Last updated : 09/06/2023 ms.devlang: csharp, golang, java, javascript, python
You need an active Azure subscription. If you don't have an Azure subscription,
> * For this quickstart it is recommended that you use a Translator text single-service global resource. > * With a single-service global resource you'll include one authorization header (**Ocp-Apim-Subscription-key**) with the REST API request. The value for Ocp-Apim-Subscription-key is your Azure secret key for your Translator Text subscription. > * If you choose to use an Azure AI multi-service or regional Translator resource, two authentication headers will be required: (**Ocp-Api-Subscription-Key** and **Ocp-Apim-Subscription-Region**). The value for Ocp-Apim-Subscription-Region is the region associated with your subscription.
- > * For more information on how to use the **Ocp-Apim-Subscription-Region** header, _see_ [Text Translation REST API headers](translator-text-apis.md).
+ > * For more information on how to use the **Ocp-Apim-Subscription-Region** header, _see_ [Text Translation REST API headers](translator-text-apis.md#headers).
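For example, a minimal Python sketch of such a request with a single-service global resource follows; the key is a placeholder, and you'd add the **Ocp-Apim-Subscription-Region** header for a regional or multi-service resource:

```python
import uuid
import requests

endpoint = "https://api.cognitive.microsofttranslator.com/translate"
params = {"api-version": "3.0", "from": "en", "to": "es"}
headers = {
    "Ocp-Apim-Subscription-Key": "<YOUR_TRANSLATOR_KEY>",  # placeholder
    "Content-Type": "application/json",
    "X-ClientTraceId": str(uuid.uuid4()),  # optional, helps trace the request
}
body = [{"text": "Hello, world!"}]

response = requests.post(endpoint, params=params, headers=headers, json=body)
print(response.json())
```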
<!-- checked --> <!--
ai-services Quickstart Text Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/quickstart-text-sdk.md
Previously updated : 07/18/2023 Last updated : 09/06/2023 ms.devlang: csharp, java, javascript, python
In this quickstart, get started using the Translator service to [translate text]
You need an active Azure subscription. If you don't have an Azure subscription, you can [create one for free](https://azure.microsoft.com/free/cognitive-services/)
-* Once you have your Azure subscription, create a [Translator resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesTextTranslation) in the Azure portal.
+* Once you have your Azure subscription, create a [Translator resource](create-translator-resource.md) in the [Azure portal](https://portal.azure.com/#create/Microsoft.CognitiveServicesTextTranslation).
* After your resource deploys, select **Go to resource** and retrieve your key and endpoint.
ai-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/whats-new.md
Document Translation .NET and Python client-library SDKs are now generally avail
* Translator service has added [text and document language support](language-support.md) for the following languages: * **Bashkir**. A Turkic language spoken by approximately 1.4 million native speakers. It has three regional language groups: Southern, Eastern, and Northwestern.
- * **Dhivehi**. Also known as Maldivian, it's an Indo-Aryan language primarily spoken in the island nation of Maldives.
+ * **Dhivehi**. Also known as Maldivian, it's an Indo-Iranian language primarily spoken in the island nation of Maldives.
* **Georgian**. A Kartvelian language that is the official language of Georgia. It has approximately 4 million speakers. * **Kyrgyz**. A Turkic language that is the official language of Kyrgyzstan. * **Macedonian (Cyrillic)**. An Eastern South Slavic language that is the official language of North Macedonia. It has approximately 2 million people.
aks Auto Upgrade Node Image https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/auto-upgrade-node-image.md
Last updated 02/03/2023
-# Automatically upgrade Azure Kubernetes Service cluster node operating system images (preview)
+# Automatically upgrade Azure Kubernetes Service cluster node operating system images
-AKS now supports an exclusive channel dedicated to controlling node-level OS security updates. This channel, referred to as the node OS auto-upgrade channel, works in tandem with the existing [auto-upgrade][Autoupgrade] channel, which is used for Kubernetes version upgrades.
+AKS now supports an exclusive channel dedicated to controlling node-level OS security updates. This channel, referred to as the node OS auto-upgrade channel, can't be used for cluster-level Kubernetes version upgrades. To automatically upgrade Kubernetes versions, continue to use the cluster [auto-upgrade][Autoupgrade] channel.
[!INCLUDE [preview features callout](./includes/preview/preview-callout.md)]
-## Why use node OS auto-upgrade
+## How does node OS auto-upgrade work with cluster auto-upgrade?
-This channel is exclusively meant to control node OS security updates. You can use this channel to disable [unattended upgrades][unattended-upgrades]. You can schedule maintenance without worrying about [Kured][kured] for security patches, provided you choose either the `SecurityPatch` or `NodeImage` options for `nodeOSUpgradeChannel`. By using this channel, you can run node image upgrades in tandem with Kubernetes version auto-upgrade channels like `Stable` and `Rapid`.
+Node-level OS security updates come in at a faster cadence than Kubernetes patch or minor version updates. This is the main reason for introducing a separate, dedicated node OS auto-upgrade channel. With this feature, you can have a flexible and customized strategy for node-level OS security updates and a separate plan for cluster-level Kubernetes version [auto-upgrades][Autoupgrade].
+It's highly recommended to use both cluster-level [auto-upgrades][Autoupgrade] and the node OS auto-upgrade channel together. Scheduling can be fine-tuned by applying two separate sets of [maintenance windows][planned-maintenance] - `aksManagedAutoUpgradeSchedule` for the cluster [auto-upgrade][Autoupgrade] channel and `aksManagedNodeOSUpgradeSchedule` for the node OS auto-upgrade channel.
+
+## Using node OS auto-upgrade
+
+The selected channel determines the timing of upgrades. When making changes to node OS auto-upgrade channels, allow up to 24 hours for the changes to take effect.
+
+> [!NOTE]
+> Node OS image auto-upgrade won't affect the cluster's Kubernetes version, but it only works for a cluster in a [supported version][supported].
++
+The following upgrade channels are available. You're allowed to choose one of these options:
+
+|Channel|Description|OS-specific behavior|
+|||
+| `None`| Your nodes won't have security updates applied automatically. This means you're solely responsible for your security updates.|N/A|
+| `Unmanaged`|OS updates are applied automatically through the OS built-in patching infrastructure. Newly allocated machines are unpatched initially and will be patched at some point by the OS's infrastructure.|Ubuntu applies security patches through unattended upgrade roughly once a day around 06:00 UTC. Windows doesn't automatically apply security patches, so this option behaves equivalently to `None`. Azure Linux CPU node pools don't automatically apply security patches, so this option behaves equivalently to `None`.|
+| `SecurityPatch`|This channel is in preview and requires enabling the feature flag `NodeOsUpgradeChannelPreview`. Refer to the prerequisites section for details. AKS regularly updates the node's virtual hard disk (VHD) with patches from the image maintainer labeled "security only." There may be disruptions when the security patches are applied to the nodes. When the patches are applied, the VHD is updated and existing machines are upgraded to that VHD, honoring maintenance windows and surge settings. This option incurs the extra cost of hosting the VHDs in your node resource group. If you use this channel, Linux [unattended upgrades][unattended-upgrades] are disabled by default.|Azure Linux doesn't support this channel on GPU-enabled VMs.|
+| `NodeImage`|AKS updates the nodes with a newly patched VHD containing security fixes and bug fixes on a weekly cadence. The update to the new VHD is disruptive, following maintenance windows and surge settings. No extra VHD cost is incurred when choosing this option. If you use this channel, Linux [unattended upgrades][unattended-upgrades] are disabled by default.|
+
+To set the node OS auto-upgrade channel when creating a cluster, use the *node-os-upgrade-channel* parameter, similar to the following example.
+
+```azurecli-interactive
+az aks create --resource-group myResourceGroup --name myAKSCluster --node-os-upgrade-channel SecurityPatch
+```
+
+To set the node OS auto-upgrade channel on an existing cluster, update the *node-os-upgrade-channel* parameter, similar to the following example.
+
+```azurecli-interactive
+az aks update --resource-group myResourceGroup --name myAKSCluster --node-os-upgrade-channel SecurityPatch
+```
+
+## Cadence and ownership
+
+The default cadence applies when no planned maintenance window is configured.
+
+|Channel|Updates Ownership|Default cadence|
+|---|---|---|
+| `Unmanaged`|OS-driven security updates. AKS has no control over these updates.|Nightly around 06:00 UTC for Ubuntu and Mariner; monthly for Windows.|
+| `SecurityPatch`|AKS|Weekly|
+| `NodeImage`|AKS|Weekly|
## Prerequisites
+"The following prerequisites are only applicable when using the `SecurityPatch` channel. If you aren't using this channel, you can ignore these requirements.
- Must be using API version `11-02-preview` or later
- If using Azure CLI, the `aks-preview` CLI extension version `0.5.127` or later must be installed
-- If using the `SecurityPatch` channel, the `NodeOsUpgradeChannelPreview` feature flag must be enabled on your subscription
+- The `NodeOsUpgradeChannelPreview` feature flag must be enabled on your subscription
-### Register the 'NodeOsUpgradeChannelPreview' feature flag
+### Register the 'NodeOsUpgradeChannelPreview' feature flag
Register the `NodeOsUpgradeChannelPreview` feature flag by using the [az feature register][az-feature-register] command, as shown in the following example:
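The registration command itself falls outside this hunk; a minimal sketch of the standard pattern for the flag named above:

```azurecli-interactive
# Register the preview feature flag on the subscription.
az feature register --namespace Microsoft.ContainerService --name NodeOsUpgradeChannelPreview

# Check the registration state (it can take several minutes to show Registered).
az feature show --namespace Microsoft.ContainerService --name NodeOsUpgradeChannelPreview --query properties.state
```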
az provider register --namespace Microsoft.ContainerService
## Limitations
-If using the `node-image` cluster auto-upgrade channel or the `NodeImage` node OS auto-upgrade channel, Linux [unattended upgrades][unattended-upgrades] are disabled by default. You can't change node OS auto-upgrade channel value if your cluster auto-upgrade channel is `node-image`. In order to set the node OS auto-upgrade channel values, make sure the [cluster auto-upgrade channel][Autoupgrade] isn't `node-image`.
-
-The `nodeosupgradechannel` isn't supported on Windows OS node pools. Azure Linux support is now rolled out and is expected to be available in all regions soon.
+- Currently, when you set the [cluster auto-upgrade channel][Autoupgrade] to `node-image`, it also automatically sets the node OS auto-upgrade channel to `NodeImage`. You can't change node OS auto-upgrade channel value if your cluster auto-upgrade channel is `node-image`. In order to set the node OS auto-upgrade channel value, make sure the [cluster auto-upgrade channel][Autoupgrade] value isn't `node-image`.
-## Using node OS auto-upgrade
+- The `SecurityPatch` channel isn't supported on Windows OS node pools.
+
+ > [!NOTE]
+ > By default, any new cluster created with an API version of `06-01-2022` or later will set the node OS auto-upgrade channel value to `NodeImage`. Any existing clusters created with an API version earlier than `06-01-2022` will have the node OS auto-upgrade channel value set to `None` by default.
-Automatically completed upgrades are functionally the same as manual upgrades. The selected channel determines the timing of upgrades. When making changes to auto-upgrade, allow 24 hours for the changes to take effect. By default, a cluster's node OS auto-upgrade channel is set to `Unmanaged`.
-> [!NOTE]
-> Node OS image auto-upgrade won't affect the cluster's Kubernetes version, but it still requires the cluster to be in a supported version to function properly.
-> When changing channels to `NodeImage` or `SecurityPatch`, the unattended upgrades will only be disabled when the image gets applied in the next cycle and not immediately.
+## Using node OS auto-upgrade with Planned Maintenance
-The following upgrade channels are available:
+If you're using Planned Maintenance and node OS auto-upgrade, your upgrade starts during your specified maintenance window.
-|Channel|Description|OS-specific behavior|
-|||
-| `None`| Your nodes won't have security updates applied automatically. This means you're solely responsible for your security updates.|N/A|
-| `Unmanaged`|OS updates are applied automatically through the OS built-in patching infrastructure. Newly allocated machines are unpatched initially and will be patched at some point by the OS's infrastructure.|Ubuntu applies security patches through unattended upgrade roughly once a day around 06:00 UTC. Windows doesn't automatically apply security patches, so this option behaves equivalently to `None`. Azure Linux CPU node pools don't automatically apply security patches, so this option behaves equivalently to `None`.|
-| `SecurityPatch`|AKS regularly updates the node's virtual hard disk (VHD) with patches from the image maintainer labeled "security only". There maybe disruptions when the security patches are applied to the nodes. When the patches are applied, the VHD is updated and existing machines are upgraded to that VHD, honoring maintenance windows and surge settings. This option incurs the extra cost of hosting the VHDs in your node resource group. If you use this channel, Linux [unattended upgrades][unattended-upgrades] are disabled by default.|Azure Linux doesn't support this channel on GPU-enabled VMs.|
-| `NodeImage`|AKS updates the nodes with a newly patched VHD containing security fixes and bug fixes on a weekly cadence. The update to the new VHD is disruptive, following maintenance windows and surge settings. No extra VHD cost is incurred when choosing this option. If you use this channel, Linux [unattended upgrades][unattended-upgrades] are disabled by default.|
+> [!NOTE]
+> To ensure proper functionality, use a maintenance window of four hours or more.
-To set the node OS auto-upgrade channel when creating a cluster, use the *node-os-upgrade-channel* parameter, similar to the following example.
+For more information on Planned Maintenance, see [Use Planned Maintenance to schedule maintenance windows for your Azure Kubernetes Service (AKS) cluster][planned-maintenance].
-```azurecli-interactive
-az aks create --resource-group myResourceGroup --name myAKSCluster --node-os-upgrade-channel SecurityPatch
-```
+## FAQ
-To set the auto-upgrade channel on existing cluster, update the *node-os-upgrade-channel* parameter, similar to the following example.
+* How can I check the current nodeOsUpgradeChannel value on a cluster?
-```azurecli-interactive
-az aks update --resource-group myResourceGroup --name myAKSCluster --node-os-upgrade-channel SecurityPatch
-```
+Run the `az aks show` command and check the `autoUpgradeProfile` field to determine what value the `nodeOsUpgradeChannel` is set to.
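For example (resource group and cluster names are placeholders):

```azurecli-interactive
az aks show --resource-group myResourceGroup --name myAKSCluster --query "autoUpgradeProfile"
```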
-## Using node OS auto-upgrade with Planned Maintenance
+* How can I monitor the status of node OS auto-upgrades?
+If you're using Planned Maintenance and node OS auto-upgrade, your upgrade starts during your specified maintenance window.
+To view the status of your node OS auto-upgrades, look up [activity logs][monitor-aks] on your cluster. You can also look up specific upgrade-related events as mentioned in [Upgrade an AKS cluster][aks-upgrade]. AKS also emits upgrade-related Event Grid events. To learn more, see [AKS as an Event Grid source][aks-eventgrid].
-> [!NOTE]
-> To ensure proper functionality, use a maintenance window of four hours or more.
+* Can I change the node OS auto-upgrade channel value if my cluster auto-upgrade channel is set to `node-image`?
-For more information on Planned Maintenance, see [Use Planned Maintenance to schedule maintenance windows for your Azure Kubernetes Service (AKS) cluster][planned-maintenance].
+ No. Currently, when you set the [cluster auto-upgrade channel][Autoupgrade] to `node-image`, it also automatically sets the node OS auto-upgrade channel to `NodeImage`. You can't change the node OS auto-upgrade channel value if your cluster auto-upgrade channel is `node-image`. In order to be able to change the node OS auto-upgrade channel values, make sure the cluster auto-upgrade channel isn't `node-image`.
<!-- LINKS -->
[planned-maintenance]: planned-maintenance.md
For more information on Planned Maintenance, see [Use Planned Maintenance to sch
[unattended-upgrades]: https://help.ubuntu.com/community/AutomaticSecurityUpdates
[Autoupgrade]: auto-upgrade-cluster.md
[kured]: node-updates-kured.md
+[supported]: ./support-policies.md
+[monitor-aks]: ./monitor-aks-reference.md
+[aks-eventgrid]: ./quickstart-event-grid.md
+[aks-upgrade]: ./upgrade-cluster.md
aks Azure Ad Integration Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-ad-integration-cli.md
description: Learn how to use the Azure CLI to create and Azure Active Directory
Previously updated : 07/07/2023 Last updated : 08/15/2023 # Integrate Azure Active Directory with Azure Kubernetes Service (AKS) using the Azure CLI (legacy) > [!WARNING]
-> The feature described in this document, Azure AD Integration (legacy) was **deprecated on June 1st, 2023**. At this time, no new clusters can be created with Azure AD Integration (legacy). All Azure AD Integration (legacy) AKS clusters will be migrated to AKS-managed Azure AD automatically starting from August 1st, 2023.
+> The feature described in this document, Azure AD Integration (legacy), was **deprecated on June 1st, 2023**. At this time, no new clusters can be created with Azure AD Integration (legacy). All Azure AD Integration (legacy) AKS clusters will be migrated to AKS-managed Azure AD automatically starting from December 1st, 2023.
> > AKS has a new, improved [AKS-managed Azure AD][managed-aad] experience that doesn't require you to manage server or client applications. If you want to migrate, follow the instructions [here][managed-aad-migrate].
aks Azure Csi Blob Storage Provision https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-csi-blob-storage-provision.md
description: Learn how to create a static or dynamic persistent volume with Azure Blob storage for use with multiple concurrent pods in Azure Kubernetes Service (AKS) Previously updated : 05/17/2023 Last updated : 08/16/2023 # Create and use a volume with Azure Blob storage in Azure Kubernetes Service (AKS)
This section provides guidance for cluster administrators who want to provision
|location | Specify an Azure location. | `eastus` | No | If empty, driver will use the same location name as current cluster.|
|resourceGroup | Specify an Azure resource group name. | myResourceGroup | No | If empty, driver will use the same resource group name as current cluster.|
|storageAccount | Specify an Azure storage account name.| storageAccountName | - No for blobfuse mount </br> - Yes for NFSv3 mount. | - For blobfuse mount: if empty, driver finds a suitable storage account that matches `skuName` in the same resource group. If a storage account name is provided, storage account must exist. </br> - For NFSv3 mount, storage account name must be provided.|
+|networkEndpointType| Specify network endpoint type for the storage account created by driver. If privateEndpoint is specified, a [private endpoint][storage-account-private-endpoint] is created for the storage account. For other cases, a service endpoint will be created for NFS protocol.<sup>1</sup> | `privateEndpoint` | No | For an AKS cluster, add the AKS cluster name to the Contributor role in the resource group hosting the VNET.|
|protocol | Specify blobfuse mount or NFSv3 mount. | `fuse`, `nfs` | No | `fuse`| |containerName | Specify the existing container (directory) name. | container | No | If empty, driver creates a new container name, starting with `pvc-fuse` for blobfuse or `pvc-nfs` for NFS v3. | |containerNamePrefix | Specify Azure storage directory prefix created by driver. | my |Can only contain lowercase letters, numbers, hyphens, and length should be fewer than 21 characters. | No |
This section provides guidance for cluster administrators who want to provision
| | **Following parameters are only for NFS protocol** | | | |
|mountPermissions | Specify mounted folder permissions. |The default is `0777`. If set to `0`, driver won't perform `chmod` after mount. | `0777` | No |
+<sup>1</sup> If the storage account is created by the driver, then you only need to specify `networkEndpointType: privateEndpoint` parameter in storage class. The CSI driver creates the private endpoint together with the account. If you bring your own storage account, then you need to [create the private endpoint][storage-account-private-endpoint] for the storage account.
+
### Create a persistent volume claim using built-in storage class

A persistent volume claim (PVC) uses the storage class object to dynamically provision an Azure Blob storage container. The following YAML can be used to create a persistent volume claim 5 GB in size with *ReadWriteMany* access, using the built-in storage class. For more information on access modes, see the [Kubernetes persistent volume][kubernetes-volumes] documentation.
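The manifest itself is elided from this hunk; a minimal sketch matching the description, assuming the built-in `azureblob-fuse-premium` storage class and a hypothetical claim name:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: azure-blob-storage          # hypothetical claim name
spec:
  accessModes:
    - ReadWriteMany                 # shared access for multiple concurrent pods
  storageClassName: azureblob-fuse-premium   # assumed built-in storage class
  resources:
    requests:
      storage: 5Gi                  # the 5 GB size described above
```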
This section provides guidance for cluster administrators who want to create one
### Create a Blob storage container
-When you create an Azure Blob storage resource for use with AKS, you can create the resource in the node resource group. This approach allows the AKS cluster to access and manage the blob storage resource. If instead you create the blob storage resource in a separate resource group, you must grant the Azure Kubernetes Service managed identity for your cluster the [Contributor][rbac-contributor-role] role to the blob storage resource group.
+When you create an Azure Blob storage resource for use with AKS, you can create the resource in the node resource group. This approach allows the AKS cluster to access and manage the blob storage resource.
For this article, create the container in the node resource group. First, get the resource group name with the [az aks show][az-aks-show] command and add the `--query nodeResourceGroup` query parameter. The following example gets the node resource group for the AKS cluster named **myAKSCluster** in the resource group named **myResourceGroup**:
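The example command is elided here; a sketch of the pattern described (names are placeholders):

```azurecli-interactive
az aks show --resource-group myResourceGroup --name myAKSCluster --query nodeResourceGroup -o tsv
```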
The following YAML creates a pod that uses the persistent volume or persistent v
[az-tags]: ../azure-resource-manager/management/tag-resources.md
[sas-tokens]: ../storage/common/storage-sas-overview.md
[azure-datalake-storage-account]: ../storage/blobs/upgrade-to-data-lake-storage-gen2-how-to.md
+[storage-account-private-endpoint]: ../storage/common/storage-private-endpoints.md
aks Azure Csi Files Storage Provision https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-csi-files-storage-provision.md
description: Learn how to create a static or dynamic persistent volume with Azure Files for use with multiple concurrent pods in Azure Kubernetes Service (AKS) Previously updated : 05/17/2023 Last updated : 08/16/2023 # Create and use a volume with Azure Files in Azure Kubernetes Service (AKS)
The following YAML creates a pod that uses the persistent volume claim *my-azure
metadata: name: mypod spec:
- containers:
- - name: mypod
- image: mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine
- resources:
- requests:
- cpu: 100m
- memory: 128Mi
- limits:
- cpu: 250m
- memory: 256Mi
- volumeMounts:
- - mountPath: "/mnt/azure"
- name: volume
- volumes:
- - name: volume
- persistentVolumeClaim:
- claimName: my-azurefile
+ containers:
+ - name: mypod
+ image: mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine
+ resources:
+ requests:
+ cpu: 100m
+ memory: 128Mi
+ limits:
+ cpu: 250m
+ memory: 256Mi
+ volumeMounts:
+ - mountPath: /mnt/azure
+ name: volume
+ volumes:
+ - name: volume
+ persistentVolumeClaim:
+ claimName: my-azurefile
```

2. Create the pod using the [`kubectl apply`][kubectl-apply] command.
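    A sketch of this step, assuming the manifest above was saved to a hypothetical `mypod.yaml`:

    ```console
    kubectl apply -f mypod.yaml
    ```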
aks Azure Files Csi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-files-csi.md
The output of the commands resembles the following example:
[kubectl-exec]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#exec
[csi-specification]: https://github.com/container-storage-interface/spec/blob/master/spec.md
[data-plane-api]: https://github.com/Azure/azure-sdk-for-go/blob/main/sdk/azcore/internal/shared/shared.go
+[kubectl-apply]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply
+
<!-- LINKS - internal -->
[csi-drivers-overview]: csi-storage-drivers.md
aks Cluster Autoscaler https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/cluster-autoscaler.md
This article requires Azure CLI version 2.0.76 or later. Run `az --version` to f
To adjust to changing application demands, such as between workdays and evenings or weekends, clusters often need a way to automatically scale. AKS clusters can scale in one of two ways:
-* The **cluster autoscaler** watches for pods that can't be scheduled on nodes because of resource constraints. The cluster then automatically increases the number of nodes.
+* The **cluster autoscaler** watches for pods that can't be scheduled on nodes because of resource constraints. The cluster then automatically increases the number of nodes. For more information, see [How does scale-up work?](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#how-does-scale-up-work)
* The **horizontal pod autoscaler** uses the Metrics Server in a Kubernetes cluster to monitor the resource demand of pods. If an application needs more resources, the number of pods is automatically increased to meet the demand.

![The cluster autoscaler and horizontal pod autoscaler often work together to support the required application demands](media/autoscaler/cluster-autoscaler.png)
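As an illustration of the behavior described in the second bullet, a minimal sketch of an `autoscaling/v2` HorizontalPodAutoscaler (the deployment name and threshold are placeholders):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: myapp-hpa                # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp                  # hypothetical deployment to scale
  minReplicas: 1
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 60   # add pods above 60% average CPU
```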
To further help improve cluster resource utilization and free up CPU and memory
[aks-faq-node-resource-group]: faq.md#can-i-modify-tags-and-other-properties-of-the-aks-resources-in-the-node-resource-group
[aks-multiple-node-pools]: create-node-pools.md
[aks-scale-apps]: tutorial-kubernetes-scale.md
-[aks-view-master-logs]: ../azure-monitor/containers/monitor-kubernetes.md#configure-monitoring
+[aks-view-master-logs]: monitor-aks.md#resource-logs
[azure-cli-install]: /cli/azure/install-azure-cli
[az-aks-create]: /cli/azure/aks#az-aks-create
[az-aks-update]: /cli/azure/aks#az-aks-update
aks Concepts Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/concepts-network.md
In Kubernetes:
* *Services* logically group pods to allow for direct access on a specific port via an IP address or DNS name.
* *ServiceTypes* allow you to specify what kind of Service you want.
* You can distribute traffic using a *load balancer*.
-* More complex routing of application traffic can also be achieved with *ingress controllers*.
+* Layer 7 routing of application traffic can also be achieved with *ingress controllers*.
* You can *control outbound (egress) traffic* for cluster nodes. * Security and filtering of the network traffic for pods is possible with *network policies*.
The following ServiceTypes are available:
![Diagram showing Load Balancer traffic flow in an AKS cluster][aks-loadbalancer]
- For extra control and routing of the inbound traffic, you may instead use an [Ingress controller](#ingress-controllers).
+ For HTTP load balancing of inbound traffic, another option is to use an [Ingress controller](#ingress-controllers).
* **ExternalName**
Nodes use the kubenet Kubernetes plugin. You can let the Azure platform create a
Only the nodes receive a routable IP address. The pods use NAT to communicate with other resources outside the AKS cluster. This approach reduces the number of IP addresses you need to reserve in your network space for pods to use.
+> [!NOTE]
+> While kubenet is the default networking option for an AKS cluster to create a virtual network and subnet, it isn't recommended for production deployments. For most production deployments, you should plan for and use Azure CNI networking due to its superior scalability and performance characteristics.
+
For more information, see [Configure kubenet networking for an AKS cluster][aks-configure-kubenet-networking].

### Azure CNI (advanced) networking
-With Azure CNI, every pod gets an IP address from the subnet and can be accessed directly. These IP addresses must be planned in advance and unique across your network space. Each node has a configuration parameter for the maximum number of pods it supports. The equivalent number of IP addresses per node are then reserved up front. This approach can lead to IP address exhaustion or the need to rebuild clusters in a larger subnet as your application demands grow, so it's important to plan properly.
+With Azure CNI, every pod gets an IP address from the subnet and can be accessed directly. These IP addresses must be planned in advance and unique across your network space. Each node has a configuration parameter for the maximum number of pods it supports. The equivalent number of IP addresses per node are then reserved up front. This approach can lead to IP address exhaustion or the need to rebuild clusters in a larger subnet as your application demands grow, so it's important to plan properly. To avoid these planning challenges, it is possible to enable the feature [Azure CNI networking for dynamic allocation of IPs and enhanced subnet support][configure-azure-cni-dynamic-ip-allocation].
Unlike kubenet, traffic to endpoints in the same virtual network isn't NAT'd to the node's primary IP. The source address for traffic inside the virtual network is the pod IP. Traffic that's external to the virtual network still NATs to the node's primary IP.
Nodes use the [Azure CNI][cni-networking] Kubernetes plugin.
For more information, see [Configure Azure CNI for an AKS cluster][aks-configure-advanced-networking].
+### Azure CNI overlay networking
+
+[Azure CNI Overlay][azure-cni-overlay] represents an evolution of Azure CNI, addressing scalability and planning challenges arising from the assignment of VNet IPs to pods. It achieves this by assigning private CIDR IPs to pods, which are separate from the VNet and can be reused across multiple clusters. Unlike Kubenet, where the traffic dataplane is handled by the Linux kernel networking stack of the Kubernetes nodes, Azure CNI Overlay delegates this responsibility to Azure networking.
+
+### Azure CNI Powered by Cilium
+
+In [Azure CNI Powered by Cilium][azure-cni-powered-by-cilium], the data plane for pods is managed by eBPF programs that [Cilium](https://cilium.io/) loads into the Linux kernel of the Kubernetes nodes. Unlike kubenet, which faces scalability and performance issues with the Linux kernel networking stack, Cilium bypasses that stack and uses eBPF to accelerate packet processing for faster performance.
+
+### Bring your own CNI
+
+You can install a third-party CNI in AKS by using the [Bring your own CNI][use-byo-cni] feature.
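A sketch of creating a cluster with no CNI preinstalled, which is the documented entry point for this feature (names are placeholders):

```azurecli-interactive
az aks create --resource-group myResourceGroup --name myAKSCluster --network-plugin none
```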
+
### Compare network models

Both kubenet and Azure CNI provide network connectivity for your AKS clusters. However, there are advantages and disadvantages to each. At a high level, the following considerations apply:
Both kubenet and Azure CNI provide network connectivity for your AKS clusters. H
The following behavior differences exist between kubenet and Azure CNI:
-| Capability | Kubenet | Azure CNI |
-|-|--|--|
-| Deploy cluster in existing or new virtual network | Supported - UDRs manually applied | Supported |
-| Pod-pod connectivity | Supported | Supported |
-| Pod-VM connectivity; VM in the same virtual network | Works when initiated by pod | Works both ways |
-| Pod-VM connectivity; VM in peered virtual network | Works when initiated by pod | Works both ways |
-| On-premises access using VPN or Express Route | Works when initiated by pod | Works both ways |
-| Access to resources secured by service endpoints | Supported | Supported |
-| Expose Kubernetes services using a load balancer service, App Gateway, or ingress controller | Supported | Supported |
-| Default Azure DNS and Private Zones | Supported | Supported |
-| Support for Windows node pools | Not Supported | Supported |
+| Capability | Kubenet | Azure CNI | Azure CNI Overlay | Azure CNI Powered by Cilium |
+| --- | --- | --- | --- | --- |
+| Deploy cluster in existing or new virtual network | Supported - UDRs manually applied | Supported | Supported | Supported |
+| Pod-pod connectivity | Supported | Supported | Supported | Supported |
+| Pod-VM connectivity; VM in the same virtual network | Works when initiated by pod | Works both ways | Works when initiated by pod | Works when initiated by pod |
+| Pod-VM connectivity; VM in peered virtual network | Works when initiated by pod | Works both ways | Works when initiated by pod | Works when initiated by pod |
+| On-premises access using VPN or Express Route | Works when initiated by pod | Works both ways | Works when initiated by pod | Works when initiated by pod |
+| Expose Kubernetes services using a load balancer service, App Gateway, or ingress controller | Supported | Supported | [No Application Gateway Ingress Controller (AGIC) support][azure-cni-overlay-limitations] | Same limitations when using Overlay mode |
+| Support for Windows node pools | Not Supported | Supported | Supported | [Available only for Linux and not for Windows.][azure-cni-powered-by-cilium-limitations] |
Regarding DNS, with both the kubenet and Azure CNI plugins, DNS is offered by CoreDNS, a deployment running in AKS with its own autoscaler. For more information on CoreDNS on Kubernetes, see [Customizing DNS Service](https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/). By default, CoreDNS is configured to forward unknown domains to the DNS functionality of the Azure Virtual Network where the AKS cluster is deployed. Hence, Azure DNS and Private Zones work for pods running in AKS.
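To inspect this forwarding configuration on a cluster, you can view the CoreDNS ConfigMap (a sketch; `coredns` in `kube-system` is the standard upstream name):

```console
kubectl get configmap coredns --namespace kube-system -o yaml
```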
For more information on core Kubernetes and AKS concepts, see the following arti
[aks-concepts-storage]: concepts-storage.md
[aks-concepts-identity]: concepts-identity.md
[agic-overview]: ../application-gateway/ingress-controller-overview.md
+[configure-azure-cni-dynamic-ip-allocation]: configure-azure-cni-dynamic-ip-allocation.md
[use-network-policies]: use-network-policies.md
[operator-best-practices-network]: operator-best-practices-network.md
[support-policies]: support-policies.md
For more information on core Kubernetes and AKS concepts, see the following arti
[ip-preservation]: https://techcommunity.microsoft.com/t5/fasttrack-for-azure/how-client-source-ip-preservation-works-for-loadbalancer/ba-p/3033722#:~:text=Enable%20Client%20source%20IP%20preservation%201%20Edit%20loadbalancer,is%20the%20same%20as%20the%20source%20IP%20%28srjumpbox%29.
[nsg-traffic]: ../virtual-network/network-security-group-how-it-works.md
[azure-cni-aks]: configure-azure-cni.md
+[azure-cni-overlay]: azure-cni-overlay.md
+[azure-cni-overlay-limitations]: azure-cni-overlay.md#limitations-with-azure-cni-overlay
+[azure-cni-powered-by-cilium]: azure-cni-powered-by-cilium.md
+[azure-cni-powered-by-cilium-limitations]: azure-cni-powered-by-cilium.md#limitations
+[use-byo-cni]: use-byo-cni.md
aks Concepts Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/concepts-storage.md
Title: Concepts - Storage in Azure Kubernetes Services (AKS) description: Learn about storage in Azure Kubernetes Service (AKS), including volumes, persistent volumes, storage classes, and claims. Previously updated : 06/27/2023 Last updated : 08/30/2023
For mounting a volume in a Windows container, specify the drive letter and path.
## Next steps
-For associated best practices, see [Best practices for storage and backups in AKS][operator-best-practices-storage].
+For associated best practices, see [Best practices for storage and backups in AKS][operator-best-practices-storage] and [AKS Storage Considerations][azure-aks-storage-considerations].
To see how to use CSI drivers, see the following how-to articles:
-- [Enable Container Storage Interface (CSI) drivers for Azure Disk, Azure Files, and Azure Blob storage on Azure Kubernetes Service][csi-storage-drivers]
+- [Container Storage Interface (CSI) drivers for Azure Disk, Azure Files, and Azure Blob storage on Azure Kubernetes Service][csi-storage-drivers]
- [Use Azure Disk CSI driver in Azure Kubernetes Service][azure-disk-csi]
- [Use Azure Files CSI driver in Azure Kubernetes Service][azure-files-csi]
-- [Use Azure Blob storage CSI driver (preview) in Azure Kubernetes Service][azure-blob-csi]
-- [Integrate Azure NetApp Files with Azure Kubernetes Service][azure-netapp-files]
+- [Use Azure Blob storage CSI driver in Azure Kubernetes Service][azure-blob-csi]
+- [Configure Azure NetApp Files with Azure Kubernetes Service][azure-netapp-files]
For more information on core Kubernetes and AKS concepts, see the following articles:
For more information on core Kubernetes and AKS concepts, see the following arti
[general-purpose-machine-sizes]: ../virtual-machines/sizes-general.md
[azure-files-azure-netapp-comparison]: ../storage/files/storage-files-netapp-comparison.md
[azure-disk-customer-managed-key]: azure-disk-customer-managed-keys.md
+[azure-aks-storage-considerations]: /azure/cloud-adoption-framework/scenarios/app-platform/aks/storage
aks Concepts Vulnerability Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/concepts-vulnerability-management.md
In addition to automated scanning, Microsoft discovers and updates vulnerabiliti
### Linux nodes
-Each evening, Linux nodes in AKS receive security patches through their distribution security update channel. This behavior is automatically configured, as the nodes are deployed in an AKS cluster. To minimize disruption and potential impact to running workloads, nodes aren't automatically rebooted if a security patch or kernel update requires it. For more information about how to handle node reboots, see [Apply security and kernel updates to nodes in AKS][apply-security-kernel-updates-to-aks-nodes].
+The nightly Canonical OS security updates are turned off by default in AKS. To enable them explicitly, use the `unmanaged` [channel][aks-node-image-upgrade].
-Nightly, we apply security updates to the OS on the node, but the node image used to create nodes for your cluster remains unchanged. If a new Linux node is added to your cluster, the original image is used to create the node. This new node receives all the security and kernel updates available during the automatic assessment performed every night, but remains unpatched until all checks and restarts are complete. You can use node image upgrade to check for and update node images used by your cluster. For more information on node image upgrade, see [Azure Kubernetes Service (AKS) node image upgrade][aks-node-image-upgrade].
+If you're using the `unmanaged` [channel][aks-node-image-upgrade], nightly Canonical security updates are applied to the OS on the node. The node image used to create nodes for your cluster remains unchanged. If a new Linux node is added to your cluster, the original image is used to create the node. This new node receives all the security and kernel updates available during the automatic assessment performed every night, but remains unpatched until all checks and restarts are complete. You can use node image upgrade to check for and update node images used by your cluster. For more information on node image upgrade, see [Azure Kubernetes Service (AKS) node image upgrade][aks-node-image-upgrade].
-For AKS clusters on the [OS auto upgrade][aks-node-image-upgrade] channel, the unattended upgrade process is disabled, and the OS nodes receives security updates through the weekly node image upgrade.
+For AKS clusters using a [channel][aks-node-image-upgrade] other than `unmanaged`, the unattended upgrade process is disabled.
### Windows Server nodes
aks Configure Kubenet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/configure-kubenet.md
For more information to help you decide which network model to use, see [Compare
    --service-cidr 10.0.0.0/16 \
    --dns-service-ip 10.0.0.10 \
    --pod-cidr 10.244.0.0/16 \
- --docker-bridge-address 172.17.0.1/16 \
    --vnet-subnet-id $SUBNET_ID
```
For more information to help you decide which network model to use, see [Compare
* This address range must be large enough to accommodate the number of nodes that you expect to scale up to. You can't change this address range once the cluster is deployed.
* The pod IP address range is used to assign a */24* address space to each node in the cluster. In the following example, the *--pod-cidr* of *10.244.0.0/16* assigns the first node *10.244.0.0/24*, the second node *10.244.1.0/24*, and the third node *10.244.2.0/24*.
* As the cluster scales or upgrades, the Azure platform continues to assign a pod IP address range to each new node.
- * *--docker-bridge-address* is optional. The address lets the AKS nodes communicate with the underlying management platform. This IP address must not be within the virtual network IP address range of your cluster and shouldn't overlap with other address ranges in use on your network. The default value is 172.17.0.1/16.
> [!NOTE]
> If you want to enable an AKS cluster to include a [Calico network policy][calico-network-policies], you can use the following command:
For more information to help you decide which network model to use, see [Compare
> --resource-group myResourceGroup \
> --name myAKSCluster \
> --node-count 3 \
-> --network-plugin kubenet --network-policy calico \
+> --network-plugin kubenet \
+> --network-policy calico \
> --vnet-subnet-id $SUBNET_ID
> ```
aks Create K8s Cluster With Aks Application Gateway Ingress https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/create-k8s-cluster-with-aks-application-gateway-ingress.md
+
+ Title: Create an Application Gateway Ingress Controller (AGIC) in Azure Kubernetes Service (AKS) using Terraform
+description: Learn how to create an Application Gateway Ingress Controller (AGIC) in Azure Kubernetes Service (AKS) using Terraform.
+ Last updated : 09/05/2023+
+content_well_notification:
+ - AI-contribution
++
+# Create an Application Gateway Ingress Controller (AGIC) in Azure Kubernetes Service (AKS) using Terraform
+
+[Azure Kubernetes Service (AKS)](/azure/aks/) manages your hosted Kubernetes environment. AKS makes it quick and easy to deploy and manage containerized applications without container orchestration expertise. AKS also eliminates the burden of taking applications offline for operational and maintenance tasks. With AKS, you can provision, upgrade, and scale resources on-demand.
+
+[Azure Application Gateway](/azure/Application-Gateway/) provides Application Gateway Ingress Controller (AGIC). AGIC enables various features for Kubernetes services, including reverse proxy, configurable traffic routing, and TLS termination. Kubernetes Ingress resources help configure the ingress rules for individual Kubernetes services. An ingress controller allows a single IP address to route traffic to multiple services in a Kubernetes cluster.
++
+In this article, you learn how to:
+
+> [!div class="checklist"]
+>
+> * Create a random value for the Azure resource group name using [random_pet](https://registry.terraform.io/providers/hashicorp/random/latest/docs/resources/pet).
+> * Create an Azure resource group using [azurerm_resource_group](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/resource_group).
+> * Create a User Assigned Identity using [azurerm_user_assigned_identity](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/user_assigned_identity).
+> * Create a virtual network (VNet) using [azurerm_virtual_network](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/virtual_network).
+> * Create a subnet using [azurerm_subnet](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/subnet).
+> * Create a public IP using [azurerm_public_ip](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/public_ip).
+> * Create an Application Gateway using [azurerm_application_gateway](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/application_gateway).
+> * Create a Kubernetes cluster using [azurerm_kubernetes_cluster](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/kubernetes_cluster).
+> * Install and run a sample web app to test the availability of the Kubernetes cluster you create.
+
+## Prerequisites
+
+Before you get started, you need to install and configure the following tools:
+
+* [Terraform](/azure/developer/terraform/quickstart-configure)
+* [kubectl command-line tool](https://kubernetes.io/docs/tasks/tools/)
+* [Helm package manager](https://helm.sh/docs/intro/install/)
+* [GNU wget command-line tool](http://www.gnu.org/software/wget/)
+
+## Implement the Terraform code
+
+> [!NOTE]
+> You can find the sample code from this article in the [Azure Terraform GitHub repo](https://github.com/Azure/terraform/tree/master/quickstart/201-k8s-cluster-with-aks-applicationgateway-ingress). You can view the log file containing the [test results from current and previous versions of Terraform](https://github.com/Azure/terraform/tree/master/quickstart/201-k8s-cluster-with-aks-applicationgateway-ingress/TestRecord.md).
+>
+> For more information, see [articles and sample code showing how to use Terraform to manage Azure resources](/azure/terraform).
+
+1. Create a directory to test sample Terraform code and make it your working directory.
+2. Create a file named `providers.tf` and copy in the following code:
+
+ :::code language="Terraform" source="~/terraform_samples/quickstart/201-k8s-cluster-with-aks-applicationgateway-ingress/providers.tf":::
+
+3. Create a file named `main.tf` and copy in the following code:
+
+ :::code language="Terraform" source="~/terraform_samples/quickstart/201-k8s-cluster-with-aks-applicationgateway-ingress/main.tf":::
+
+4. Create a file named `variables.tf` and copy in the following code:
+
+ :::code language="Terraform" source="~/terraform_samples/quickstart/201-k8s-cluster-with-aks-applicationgateway-ingress/variables.tf":::
+
+5. Create a file named `outputs.tf` and copy in the following code:
+
+ :::code language="Terraform" source="~/terraform_samples/quickstart/201-k8s-cluster-with-aks-applicationgateway-ingress/outputs.tf":::
+
+## Initialize Terraform
++
+## Create a Terraform execution plan
++
+## Apply a Terraform execution plan
++
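The three headings above normally pull in shared include files that are stripped from this diff; a sketch of the conventional commands they cover (the plan file name is illustrative):

```console
terraform init -upgrade
terraform plan -out main.tfplan
terraform apply main.tfplan
```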
+## Test the Kubernetes cluster
+
+1. Get the Azure resource group name.
+
+ ```console
+ resource_group_name=$(terraform output -raw resource_group_name)
+ ```
+
+2. Get the AKS cluster name.
+
+ ```console
+ aks_cluster_name=$(terraform output -raw aks_cluster_name)
+ ```
+
+3. Get the Kubernetes configuration and access credentials for the cluster using the [`az aks get-credentials`](/cli/azure/aks#az-aks-get-credentials) command.
+
+ ```azurecli-interactive
+ az aks get-credentials \
+ --name $aks_cluster_name \
+ --resource-group $resource_group_name \
+ --overwrite-existing
+ ```
+
+4. Verify the health of the cluster using the [`kubectl get`](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#get) command.
+
+ ```console
+ kubectl get nodes
+ ```
+
+ **Key points:**
+
+ * The details of your worker nodes display with a status of **Ready**.
+
+ :::image type="content" source="media/create-k8s-cluster-with-aks-application-gateway-ingress/kubectl-get-nodes.png" alt-text="Screenshot of kubectl showing the health of your Kubernetes cluster.":::
+
+## Install Azure Active Directory Pod Identity
+
+Azure Active Directory (Azure AD) Pod Identity provides token-based access to [Azure Resource Manager](/azure/azure-resource-manager/resource-group-overview).
+
+[Azure AD Pod Identity](https://github.com/Azure/aad-pod-identity) adds the following components to your Kubernetes cluster:
+
+* Kubernetes [CRDs](https://kubernetes.io/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definitions/): `AzureIdentity`, `AzureAssignedIdentity`, `AzureIdentityBinding`
+* [Managed Identity Controller (MIC)](https://github.com/Azure/aad-pod-identity#managed-identity-controllermic) component
+* [Node Managed Identity (NMI)](https://github.com/Azure/aad-pod-identity#node-managed-identitynmi) component
+
+To install Azure AD Pod Identity on your cluster, you need to know if RBAC is enabled or disabled. RBAC is disabled by default for this demo. Enabling or disabling RBAC is done in the `variables.tf` file via the `aks_enable_rbac` block's `default` value.
+
+* If RBAC is **enabled**, run the following `kubectl create` command.
+
+ ```console
+ kubectl create -f https://raw.githubusercontent.com/Azure/aad-pod-identity/master/deploy/infra/deployment-rbac.yaml
+ ```
+
+* If RBAC is **disabled**, run the following `kubectl create` command.
+
+ ```console
+ kubectl create -f https://raw.githubusercontent.com/Azure/aad-pod-identity/master/deploy/infra/deployment.yaml
+ ```
+
+## Install the AGIC Helm repo
+
+1. Add the AGIC Helm repo using the [`helm repo add`](https://helm.sh/docs/helm/helm_repo_add/) command.
+
+ ```console
+ helm repo add application-gateway-kubernetes-ingress https://appgwingress.blob.core.windows.net/ingress-azure-helm-package/
+ ```
+
+2. Update the AGIC Helm repo using the [`helm repo update`](https://helm.sh/docs/helm/helm_repo_update/) command.
+
+ ```console
+ helm repo update
+ ```
+
+## Configure AGIC using Helm
+
+1. Download `helm-config.yaml` to configure AGIC using the [`wget`](https://www.gnu.org/software/wget/) command.
+
+ ```console
+ wget https://raw.githubusercontent.com/Azure/application-gateway-kubernetes-ingress/master/docs/examples/sample-helm-config.yaml -O helm-config.yaml
+ ```
+
+2. Open `helm-config.yaml` in a text editor.
+3. Enter the following value for the top-level key:
+
+ * `verbosityLevel`: Specify the *verbosity level* of the AGIC logging infrastructure. For more information about logging levels, see [logging Levels section of Application Gateway Kubernetes Ingress](https://github.com/Azure/application-gateway-kubernetes-ingress/blob/463a87213bbc3106af6fce0f4023477216d2ad78/docs/troubleshooting.md).
+
+4. Enter the following values for the `appgw` block:
+
+ * `appgw.subscriptionId`: Specify the Azure subscription ID used to create the App Gateway.
+ * `appgw.resourceGroup`: Get the resource group name using the `echo "$(terraform output -raw resource_group_name)"` command.
+ * `appgw.name`: Get the Application Gateway name using the `echo "$(terraform output -raw application_gateway_name)"` command.
+ * `appgw.shared`: This boolean flag defaults to `false`. Set it to `true` if you need a [Shared App Gateway](https://github.com/Azure/application-gateway-kubernetes-ingress/blob/072626cb4e37f7b7a1b0c4578c38d1eadc3e8701/docs/setup/install-existing.md#multi-cluster--shared-app-gateway).
+
+5. Enter the following value for the `kubernetes` block:
+
+ * `kubernetes.watchNamespace`: Specify the namespace that AGIC should watch. The namespace can be a single string value or a comma-separated list of namespaces. Leaving this variable commented out, or setting it to a blank or empty string, results in the Ingress controller observing all accessible namespaces.
+
+6. Enter the following values for the `armAuth` block:
+
+ * If you specify `armAuth.type` as `aadPodIdentity`:
+ * `armAuth.identityResourceID`: Get the Identity resource ID by running `echo "$(terraform output -raw identity_resource_id)"`.
+ * `armAuth.identityClientId`: Get the Identity client ID by running `echo "$(terraform output -raw identity_client_id)"`.
+
+ * If you specify `armAuth.type` as `servicePrincipal`, see [Using a service principal](/azure/application-gateway/ingress-controller-install-existing#using-a-service-principal).
+
+## Install the AGIC package
+
+1. Install the AGIC package using the [`helm install`](https://helm.sh/docs/helm/helm_install/) command.
+
+ ```console
+ helm install -f helm-config.yaml application-gateway-kubernetes-ingress/ingress-azure --generate-name
+ ```
+
+2. Get the Azure resource group name.
+
+ ```console
+ resource_group_name=$(terraform output -raw resource_group_name)
+ ```
+
+3. Get the identity name.
+
+ ```console
+ identity_name=$(terraform output -raw identity_name)
+ ```
+
+4. Get the key values from your identity using the [`az identity show`](/cli/azure/identity#az-identity-show) command.
+
+ ```azurecli-interactive
+ az identity show -g $resource_group_name -n $identity_name
+ ```
+
+## Install a sample app
+
+1. Download the YAML file using the [`curl`](https://curl.se/) command.
+
+ ```console
+ curl https://raw.githubusercontent.com/Azure/application-gateway-kubernetes-ingress/master/docs/examples/aspnetapp.yaml -o aspnetapp.yaml
+ ```
+
+2. Apply the YAML file using the [`kubectl apply`](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply) command.
+
+ ```console
+ kubectl apply -f aspnetapp.yaml
+ ```
+
+## Test the sample app
+
+1. Get the app IP address.
+
+ ```console
+ echo "$(terraform output -raw application_ip_address)"
+ ```
+
+2. In a browser, navigate to the IP address from the output of the previous step.
+
+ :::image type="content" source="media/create-k8s-cluster-with-aks-application-gateway-ingress/sample-app.png" alt-text="Screenshot of sample app.":::
+
+## Clean up resources
++
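This heading also references a stripped include; a sketch of the usual teardown (the plan file name is illustrative):

```console
terraform plan -destroy -out main.destroy.tfplan
terraform apply main.destroy.tfplan
```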
+## Troubleshoot Terraform on Azure
+
+[Troubleshoot common problems when using Terraform on Azure](/azure/developer/terraform/troubleshoot)
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Application Gateway Ingress Controller](https://azure.github.io/application-gateway-kubernetes-ingress/)
aks Create Node Pools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/create-node-pools.md
The Azure Linux container host for AKS is an open-source Linux distribution avai
az aks nodepool add \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
- --name azurelinuxpool \
+ --name azlinuxpool \
    --os-sku AzureLinux
```
aks Csi Secrets Store Driver https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/csi-secrets-store-driver.md
Metrics are served from port 8095, but this port isn't exposed outside the pod b
|total_rotation_reconcile_error|The total number of rotation reconciles with error.|`os_type=<runtime os>`, `rotated=<true or false>`, `error_type=<error code>`|
|total_rotation_reconcile_error|The distribution of how long it took to rotate secrets-store content for pods.|`os_type=<runtime os>`|
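Because the port isn't exposed outside the pod, one way to read these counters is a local port-forward (a sketch; the pod name is a placeholder, and the provider pods typically run in `kube-system`):

```console
# In one shell: forward the metrics port from a provider pod.
kubectl port-forward pod/<provider-pod-name> 8095:8095 --namespace kube-system

# In another shell: scrape the metrics endpoint.
curl http://localhost:8095/metrics
```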
+## Migrate from open-source to AKS-managed Secrets Store CSI Driver
+
+1. Uninstall the open-source Secrets Store CSI Driver using the following `helm delete` command.
+
+ ```bash
+ helm delete <release name>
+ ```
+
+ > [!NOTE]
+ > If you installed the driver and provider using deployment YAMLs, you can delete the components using the following `kubectl delete` command.
+ >
+ > ```bash
+ > # Delete AKV provider pods from Linux nodes
+ > kubectl delete -f https://raw.githubusercontent.com/Azure/secrets-store-csi-driver-provider-azure/master/deployment/provider-azure-installer.yaml
+ >
+ > # Delete AKV provider pods from Windows nodes
+ > kubectl delete -f https://raw.githubusercontent.com/Azure/secrets-store-csi-driver-provider-azure/master/deployment/provider-azure-installer-windows.yaml
+ > ```
+
+2. Upgrade your existing AKS cluster with the feature using the [`az aks enable-addons`][az-aks-enable-addons] command.
+
+ ```azurecli-interactive
+ az aks enable-addons --addons azure-keyvault-secrets-provider --name myAKSCluster --resource-group myResourceGroup
+ ```
+
## Troubleshooting

For generic troubleshooting steps, see [Azure Key Vault Provider for Secrets Store CSI Driver troubleshooting](https://azure.github.io/secrets-store-csi-driver-provider-azure/docs/troubleshooting/).
aks Csi Secrets Store Identity Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/csi-secrets-store-identity-access.md
description: Learn how to integrate the Azure Key Vault Provider for Secrets Sto
Previously updated : 07/25/2023 Last updated : 09/02/2023
Before you begin, you must have the following prerequisites:
metadata: name: busybox-secrets-store-inline-wi spec:
+ serviceAccountName: "workload-identity-sa"
containers: - name: busybox image: registry.k8s.io/e2e-test-images/busybox:1.29-4
Before you begin, you must have the following prerequisites:
name: busybox-secrets-store-inline-user-msi spec: containers:
- - name: busybox
+ - name: busybox
image: registry.k8s.io/e2e-test-images/busybox:1.29-4 command: - "/bin/sleep"
aks Deploy Marketplace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/deploy-marketplace.md
description: Learn how to deploy Kubernetes applications from Azure Marketplace
Previously updated : 05/01/2023 Last updated : 08/18/2023
Included among these solutions are Kubernetes application-based container offers
This feature is currently supported only in the following regions:
-- East US, EastUS2EUAP, West US, Central US, West Central US, South Central US, East US2, West US2, West Europe, North Europe, Canada Central, South East Asia, Australia East, Central India, Japan East, Korea Central, UK South, UK West, Germany West Central, France Central, East Asia, West US3, Norway East, South African North, North Central US, Australia South East, Switzerland North, Japan West, South India
+- East US, EastUS2EUAP, West US, Central US, West Central US, South Central US, East US2, West US2, West Europe, North Europe, Canada Central, South East Asia, Australia East, Central India, Japan East, Korea Central, UK South, UK West, Germany West Central, France Central, East Asia, West US3, Norway East, South African North, North Central US, Australia South East, Switzerland North, Japan West, South India
Kubernetes application-based container offers can't be deployed on AKS for Azure Stack HCI or AKS Edge Essentials.
-## Register resource providers
-
-Before you deploy a container offer, you must register the `Microsoft.ContainerService` and `Microsoft.KubernetesConfiguration` providers on your subscription by using the `az provider register` command:
-
-```azurecli-interactive
-az provider register --namespace Microsoft.ContainerService --wait
-az provider register --namespace Microsoft.KubernetesConfiguration --wait
-```
- ## Select and deploy a Kubernetes application
-### From the AKS portal screen
+### From an AKS cluster
1. In the [Azure portal](https://portal.azure.com/), you can deploy a Kubernetes application from an existing cluster by navigating to **Marketplace** or selecting **Extensions + applications**, then selecting **+ Add**.
az provider register --namespace Microsoft.KubernetesConfiguration --wait
1. After you decide on an application, select the offer.
-1. On the **Plans + Pricing** tab, select an option. Ensure that the terms are acceptable, and then select **Create**.
+1. On the **Plans + Pricing** tab, select an option. Ensure that the terms are acceptable, and then select **Create**.
:::image type="content" source="./media/deploy-marketplace/plan-pricing.png" alt-text="Screenshot of the offer purchasing page in the Azure portal, showing plan and pricing information.":::
-1. Follow each page in the wizard, all the way through Review + Create. Fill in information for your resource group, your cluster, and any configuration options that the application requires.
+1. Follow each page in the wizard, all the way through **Review + Create**. Fill in information for your resource group, your cluster, and any configuration options that the application requires.
:::image type="content" source="./media/deploy-marketplace/review-create.png" alt-text="Screenshot of the Azure portal wizard for deploying a new offer, with the selector for creating a cluster or using an existing one.":::
az provider register --namespace Microsoft.KubernetesConfiguration --wait
:::image type="content" source="./media/deploy-marketplace/deploying.png" alt-text="Screenshot of the Azure portal deployments screen, showing that the Kubernetes offer is currently being deployed.":::
-### From the Marketplace portal screen
+### Search in the Azure portal
1. In the [Azure portal](https://portal.azure.com/), search for **Marketplace** on the top search bar. In the results, under **Services**, select **Marketplace**.
You can view the extension instance from the cluster by using the following comm
az k8s-extension show --name <extension-name> --cluster-name <clusterName> --resource-group <resourceGroupName> --cluster-type managedClusters
```
-------
## Monitor billing and usage information
To monitor billing and usage information for the offer that you deployed:
You can delete a purchased plan for an Azure container offer by deleting the extension instance on the cluster.
--
### [Portal](#tab/azure-portal)

Select an application, then select the uninstall button to remove the extension from your cluster:
Select an application, then select the uninstall button to remove the extension
az k8s-extension delete --name <extension-name> --cluster-name <clusterName> --resource-group <resourceGroupName> --cluster-type managedClusters
```
-
## Troubleshooting
If you experience issues, see the [troubleshooting checklist for failed deployme
## Next steps

- Learn more about [exploring and analyzing costs][billing].
+- Learn more about [deploying a Kubernetes application programmatically using Azure CLI](/azure/aks/deploy-application-az-cli)
<!-- LINKS -->
[azure-marketplace]: /marketplace/azure-marketplace-overview
[cluster-extensions]: ./cluster-extensions.md
[billing]: ../cost-management-billing/costs/quick-acm-cost-analysis.md
-[marketplace-troubleshoot]: /troubleshoot/azure/azure-kubernetes/troubleshoot-failed-kubernetes-deployment-offer
-------
- Learn more about [deploying a Kubernetes application programmatically using Azure CLI](/azure/aks/deploy-application-az-cli)
----
+[marketplace-troubleshoot]: /troubleshoot/azure/azure-kubernetes/troubleshoot-failed-kubernetes-deployment-offer
aks Gpu Multi Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/gpu-multi-instance.md
Title: Multi-instance GPU Node pool
-description: Learn how to create a Multi-instance GPU Node pool and schedule tasks on it
+ Title: Create a multi-instance GPU node pool in Azure Kubernetes Service (AKS)
+description: Learn how to create a multi-instance GPU node pool in Azure Kubernetes Service (AKS).
Previously updated : 1/24/2022 Last updated : 08/30/2023
-# Multi-instance GPU Node pool
+# Create a multi-instance GPU node pool in Azure Kubernetes Service (AKS)
-Nvidia's A100 GPU can be divided in up to seven independent instances. Each instance has their own memory and Stream Multiprocessor (SM). For more information on the Nvidia A100, follow [Nvidia A100 GPU][Nvidia A100 GPU].
+Nvidia's A100 GPU can be divided into up to seven independent instances. Each instance has its own memory and Stream Multiprocessor (SM). For more information on the Nvidia A100, see [Nvidia A100 GPU][Nvidia A100 GPU].
-This article will walk you through how to create a multi-instance GPU node pool on Azure Kubernetes Service clusters and schedule tasks.
+This article walks you through how to create a multi-instance GPU node pool in an Azure Kubernetes Service (AKS) cluster.
-## GPU Instance Profile
+## Prerequisites
-GPU Instance Profiles define how a GPU will be partitioned. The following table shows the available GPU Instance Profile for the `Standard_ND96asr_v4`
+* An Azure account with an active subscription. If you don't have one, you can [create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+* Azure CLI version 2.2.0 or later installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
+* The Kubernetes command-line client, [kubectl](https://kubernetes.io/docs/reference/kubectl/), installed and configured. If you use Azure Cloud Shell, `kubectl` is already installed. If you want to install it locally, you can use the [`az aks install-cli`][az-aks-install-cli] command.
+* Helm v3 installed and configured. For more information, see [Installing Helm](https://helm.sh/docs/intro/install/).
+## GPU instance profiles
-| Profile Name | Fraction of SM |Fraction of Memory | Number of Instances created |
+GPU instance profiles define how GPUs are partitioned. The following table shows the available GPU instance profiles for the `Standard_ND96asr_v4` VM size:
+
+| Profile name | Fraction of SM |Fraction of memory | Number of instances created |
|--|--|--|--|
| MIG 1g.5gb | 1/7 | 1/8 | 7 |
| MIG 2g.10gb | 2/7 | 2/8 | 3 |
GPU Instance Profiles define how a GPU will be partitioned. The following table
| MIG 4g.20gb | 4/7 | 4/8 | 1 |
| MIG 7g.40gb | 7/7 | 8/8 | 1 |
-As an example, the GPU Instance Profile of `MIG 1g.5gb` indicates that each GPU instance will have 1g SM(Computing resource) and 5gb memory. In this case, the GPU will be partitioned into seven instances.
+As an example, the GPU instance profile of `MIG 1g.5gb` indicates that each GPU instance gets 1g of SM (compute resources) and 5 GB of memory. In this case, the GPU is partitioned into seven instances.
-The available GPU Instance Profiles available for this instance size are `MIG1g`, `MIG2g`, `MIG3g`, `MIG4g`, `MIG7g`
+The GPU instance profiles available for this instance size include `MIG1g`, `MIG2g`, `MIG3g`, `MIG4g`, and `MIG7g`.
> [!IMPORTANT]
-> The applied GPU Instance Profile cannot be changed after node pool creation.
-
+> You can't change the applied GPU instance profile after node pool creation.
## Create an AKS cluster
-To get started, create a resource group and an AKS cluster. If you already have a cluster, you can skip this step. Follow the example below to the resource group name `myresourcegroup` in the `southcentralus` region:
-```azurecli-interactive
-az group create --name myresourcegroup --location southcentralus
-```
+1. Create an Azure resource group using the [`az group create`][az-group-create] command.
+
+ ```azurecli-interactive
+ az group create --name myResourceGroup --location southcentralus
+ ```
+
+2. Create an AKS cluster using the [`az aks create`][az-aks-create] command.
-```azurecli-interactive
-az aks create \
- --resource-group myresourcegroup \
- --name migcluster\
- --node-count 1
-```
+ ```azurecli-interactive
+ az aks create \
+ --resource-group myResourceGroup \
+ --name myAKSCluster\
+ --node-count 1
+ ```
## Create a multi-instance GPU node pool
-You can choose to either use the `az` command line or http request to the ARM API to create the node pool
-
-### Azure CLI
-If you're using command line, use the `az aks nodepool add` command to create the node pool and specify the GPU instance profile through `--gpu-instance-profile`
-```
-
-az aks nodepool add \
- --name mignode \
- --resource-group myresourcegroup \
- --cluster-name migcluster \
- --node-vm-size Standard_ND96asr_v4 \
- --gpu-instance-profile MIG1g
-```
-
-### HTTP request
-
-If you're using http request, you can place GPU instance profile in the request body:
-```
-{
- "properties": {
- "count": 1,
- "vmSize": "Standard_ND96asr_v4",
- "type": "VirtualMachineScaleSets",
- "gpuInstanceProfile": "MIG1g"
+You can use either the Azure CLI or an HTTP request to the ARM API to create the node pool.
+
+### [Azure CLI](#tab/azure-cli)
+
+* Create a multi-instance GPU node pool using the [`az aks nodepool add`][az-aks-nodepool-add] command and specify the GPU instance profile.
+
+ ```azurecli-interactive
+ az aks nodepool add \
+ --name mignode \
+ --resource-group myResourceGroup \
+ --cluster-name myAKSCluster \
+ --node-vm-size Standard_ND96asr_v4 \
+ --gpu-instance-profile MIG1g
+ ```
+
+### [HTTP request](#tab/http-request)
+
+* Create a multi-instance GPU node pool by placing the GPU instance profile in the request body.
+
+ ```http
+ {
+ "properties": {
+ "count": 1,
+ "vmSize": "Standard_ND96asr_v4",
+ "type": "VirtualMachineScaleSets",
+ "gpuInstanceProfile": "MIG1g"
+ }
}
-}
-```
+ ```
+++
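
As an optional check, you can confirm the profile was applied to the node pool. This is a sketch assuming the resource group, cluster, and node pool names used above:

```azurecli-interactive
# Query the GPU instance profile applied to the node pool
az aks nodepool show \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --name mignode \
    --query gpuInstanceProfile \
    --output tsv
```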
+## Determine multi-instance GPU (MIG) strategy
+
+Before you install the Nvidia plugins, you need to specify which multi-instance GPU (MIG) strategy to use for GPU partitioning: *Single strategy* or *Mixed strategy*. The two strategies don't affect how you execute CPU workloads; they only affect how GPU resources are displayed, as the sketch after this list shows.
+* **Single strategy**: The single strategy treats every GPU instance as a GPU. If you use this strategy, the GPU resources are displayed as `nvidia.com/gpu: 1`.
+* **Mixed strategy**: The mixed strategy exposes the GPU instances and the GPU instance profile. If you use this strategy, the GPU resources are displayed as `nvidia.com/mig1g.5gb: 1`.
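
To see which form your cluster reports, a quick sketch (assuming a node named *mignode* from the earlier steps) is to query the node's allocatable resources:

```bash
# Inspect how GPU resources surface under the chosen MIG strategy
kubectl get node mignode -o jsonpath='{.status.allocatable}'
```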
+## Install the NVIDIA device plugin and GPU feature discovery
+1. Set your MIG strategy as an environment variable. You can use either single or mixed strategy.
-## Run tasks using kubectl
+ ```azurecli-interactive
+ # Single strategy
+ export MIG_STRATEGY=single
+
+ # Mixed strategy
+ export MIG_STRATEGY=mixed
+ ```
-### MIG strategy
-Before you install the Nvidia plugins, you need to specify which strategy to use for GPU partitioning.
+2. Add the Nvidia device plugin and GPU feature discovery helm repos using the `helm repo add` and `helm repo update` commands.
-The two strategies "Single" and "Mixed" won't affect how you execute CPU workloads, but how GPU resources will be displayed.
+ ```azurecli-interactive
+ helm repo add nvdp https://nvidia.github.io/k8s-device-plugin
+ helm repo add nvgfd https://nvidia.github.io/gpu-feature-discovery
+ helm repo update
+ ```
-- Single Strategy
+3. Install the Nvidia device plugin using the `helm install` command.
- The single strategy treats every GPU instance as a GPU. If you're using this strategy, the GPU resources will be displayed as:
+ ```azurecli-interactive
+ helm install \
+ --version=0.7.0 \
+ --generate-name \
+ --set migStrategy=${MIG_STRATEGY} \
+ nvdp/nvidia-device-plugin
+ ```
- ```
- nvidia.com/gpu: 1
- ```
+4. Install the GPU feature discovery using the `helm install` command.
-- Mixed Strategy
+ ```azurecli-interactive
+ helm install \
+ --version=0.2.0 \
+ --generate-name \
+ --set migStrategy=${MIG_STRATEGY} \
+ nvgfd/gpu-feature-discovery
+ ```
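
As an optional sanity check (release names are generated by `--generate-name`, so yours will differ), you can confirm that both charts deployed:

```bash
# List the Helm releases for the device plugin and GPU feature discovery
helm list --all-namespaces | grep -E 'nvidia-device-plugin|gpu-feature-discovery'
```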
- The mixed strategy will expose the GPU instances and the GPU instance profile. If you use this strategy, the GPU resource will be displayed as:
+## Confirm multi-instance GPU capability
- ```
- nvidia.com/mig1g.5gb: 1
- ```
+1. Configure `kubectl` to connect to your AKS cluster using the [`az aks get-credentials`][az-aks-get-credentials] command.
-### Install the NVIDIA device plugin and GPU feature discovery
+ ```azurecli-interactive
+ az aks get-credentials --resource-group myResourceGroup --name myAKSCluster
+ ```
-Set your MIG Strategy
-```
-export MIG_STRATEGY=single
-```
-or
-```
-export MIG_STRATEGY=mixed
-```
+2. Verify the connection to your cluster using the `kubectl get` command to return a list of cluster nodes.
-Install the Nvidia device plugin and GPU feature discovery using helm
+ ```azurecli-interactive
+ kubectl get nodes -o wide
+ ```
-```
-helm repo add nvdp https://nvidia.github.io/k8s-device-plugin
-helm repo add nvgfd https://nvidia.github.io/gpu-feature-discovery
-helm repo update #do not forget to update the helm repo
-```
+3. Confirm the node has multi-instance GPU capability using the `kubectl describe node` command. The following example command describes the node named *mignode*, which uses MIG1g as the GPU instance profile.
-```
-helm install \
---version=0.7.0 \
---generate-name \
---set migStrategy=${MIG_STRATEGY} \
-nvdp/nvidia-device-plugin
-```
+ ```azurecli-interactive
+ kubectl describe node mignode
+ ```
-```
-helm install \
---version=0.2.0 \
---generate-name \
---set migStrategy=${MIG_STRATEGY} \
-nvgfd/gpu-feature-discovery
-```
+ Your output should resemble the following example output:
+ ```output
+ # Single strategy output
+ Allocatable:
+ nvidia.com/gpu: 56
-### Confirm multi-instance GPU capability
-As an example, if you used MIG1g as the GPU instance profile, confirm the node has multi-instance GPU capability by running:
-```
-kubectl describe node mignode
-```
-If you're using single strategy, you'll see:
-```
-Allocatable:
- nvidia.com/gpu: 56
-```
-If you're using mixed strategy, you'll see:
-```
-Allocatable:
- nvidia.com/mig-1g.5gb: 56
-```
+ # Mixed strategy output
+ Allocatable:
+ nvidia.com/mig-1g.5gb: 56
+ ```
-### Schedule work
+## Schedule work
The following examples are based on cuda base image version 12.1.1 for Ubuntu22.04, tagged as `12.1.1-base-ubuntu22.04`.

-- Single strategy
+### Single strategy
1. Create a file named `single-strategy-example.yaml` and copy in the following manifest.
- ```yaml
+ ```yaml
apiVersion: v1
kind: Pod
metadata:
The following examples are based on cuda base image version 12.1.1 for Ubuntu22.
2. Deploy the application using the `kubectl apply` command and specify the name of your YAML manifest.
- ```
+ ```azurecli-interactive
kubectl apply -f single-strategy-example.yaml
```
-
+
3. Verify the allocated GPU devices using the `kubectl exec` command. This command returns a list of the GPU devices allocated to the pod.
- ```
+ ```azurecli-interactive
kubectl exec nvidia-single -- nvidia-smi -L
```
- The following example resembles output showing successfully created deployments and services.
+   Your output should resemble the following example output:
```output GPU 0: NVIDIA A100 40GB PCIe (UUID: GPU-48aeb943-9458-4282-da24-e5f49e0db44b)
The following examples are based on cuda base image version 12.1.1 for Ubuntu22.
MIG 1g.5gb Device 6: (UUID: MIG-37e055e8-8890-567f-a646-ebf9fde3ce7a)
```

-- Mixed mode strategy
+### Mixed strategy
1. Create a file named `mixed-strategy-example.yaml` and copy in the following manifest.
The following examples are based on cuda base image version 12.1.1 for Ubuntu22.
2. Deploy the application using the `kubectl apply` command and specify the name of your YAML manifest.
- ```
+ ```azurecli-interactive
kubectl apply -f mixed-strategy-example.yaml
```
-
+
3. Verify the allocated GPU devices using the `kubectl exec` command. This command returns a list of the GPU devices allocated to the pod.
- ```
+ ```azurecli-interactive
kubectl exec nvidia-mixed -- nvidia-smi -L
```
- The following example resembles output showing successfully created deployments and services.
+   Your output should resemble the following example output:
```output GPU 0: NVIDIA A100 40GB PCIe (UUID: GPU-48aeb943-9458-4282-da24-e5f49e0db44b)
The following examples are based on cuda base image version 12.1.1 for Ubuntu22.
```

> [!IMPORTANT]
-> The "latest" tag for CUDA images has been deprecated on Docker Hub.
-> Please refer to [NVIDIA's repository](https://hub.docker.com/r/nvidia/cuda/tags) for the latest images and corresponding tags
+> The `latest` tag for CUDA images has been deprecated on Docker Hub. Please refer to [NVIDIA's repository](https://hub.docker.com/r/nvidia/cuda/tags) for the latest images and corresponding tags.
## Troubleshooting

-- If you do not see multi-instance GPU capability after the node pool has been created, confirm the API version is not older than 2021-08-01.
-<!-- LINKS - internal -->
+If you don't see multi-instance GPU capability after creating the node pool, confirm the API version isn't older than *2021-08-01*.
+
+## Next steps
+For more information on AKS node pools, see [Manage node pools for a cluster in AKS](./manage-node-pools.md).
+
+<!-- LINKS - internal -->
+[az-group-create]: /cli/azure/group#az_group_create
+[az-aks-create]: /cli/azure/aks#az_aks_create
+[az-aks-nodepool-add]: /cli/azure/aks/nodepool#az_aks_nodepool_add
+[install-azure-cli]: /cli/azure/install-azure-cli
+[az-aks-install-cli]: /cli/azure/aks#az_aks_install_cli
+[az-aks-get-credentials]: /cli/azure/aks#az_aks_get_credentials
<!-- LINKS - external-->
[Nvidia A100 GPU]: https://www.nvidia.com/en-us/data-center/a100/
aks Howto Deploy Java Liberty App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/howto-deploy-java-liberty-app.md
For more information on Open Liberty, see [the Open Liberty project page](https:
This article uses the Azure Marketplace offer for Open/WebSphere Liberty to accelerate your journey to AKS. The offer automatically provisions a number of Azure resources including an Azure Container Registry (ACR) instance, an AKS cluster, an Azure App Gateway Ingress Controller (AGIC) instance, the Liberty Operator, and optionally a container image including Liberty and your application. To see the offer, visit the [Azure portal](https://aka.ms/liberty-aks). If you prefer manual step-by-step guidance for running Liberty on AKS that doesn't utilize the automation enabled by the offer, see [Manually deploy a Java application with Open Liberty or WebSphere Liberty on an Azure Kubernetes Service (AKS) cluster](/azure/developer/java/ee/howto-deploy-java-liberty-app-manual).
+This article is intended to help you quickly get to deployment. Before going to production, you should explore [Tuning Liberty](https://www.ibm.com/docs/was-liberty/base?topic=tuning-liberty).
+ [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)] [!INCLUDE [azure-cli-prepare-your-environment.md](~/articles/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
There are a few samples in the repository. We'll use *java-app/*. Here's the fil
```azurecli-interactive
git clone https://github.com/Azure-Samples/open-liberty-on-aks.git
cd open-liberty-on-aks
-git checkout 20230723
+git checkout 20230830
```

If you see a message about being in "detached HEAD" state, this message is safe to ignore. It just means you have checked out a tag.
The following steps deploy and test the application.
echo $APP_URL
```
+ If the web page doesn't render correctly or returns a `502 Bad Gateway` error, that's because the app is still starting in the background. Wait for a few minutes and then try again.
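
If you'd rather script the wait, a minimal sketch (assuming `APP_URL` is set as shown above) polls until the gateway responds:

```bash
# Poll the application URL until it responds successfully
until curl -fsS -o /dev/null "$APP_URL"; do
    echo "Waiting for the application to start..."
    sleep 10
done
echo "Application is up."
```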
+
## Clean up resources

To avoid Azure charges, you should clean up unnecessary resources. When the cluster is no longer needed, use the [az group delete](/cli/azure/group#az-group-delete) command to remove the resource group, container service, container registry, and all related resources.
aks Image Cleaner https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/image-cleaner.md
Previously updated : 03/02/2023 Last updated : 06/02/2023
-# Use Image Cleaner to clean up stale images on your Azure Kubernetes Service cluster (preview)
+# Use Image Cleaner to clean up stale images on your Azure Kubernetes Service (AKS) cluster
-It's common to use pipelines to build and deploy images on Azure Kubernetes Service (AKS) clusters. While great for image creation, this process often doesn't account for the stale images left behind and can lead to image bloat on cluster nodes. These images can present security issues as they may contain vulnerabilities. By cleaning these unreferenced images, you can remove an area of risk in your clusters. When done manually, this process can be time intensive, which Image Cleaner can mitigate via automatic image identification and removal.
+It's common to use pipelines to build and deploy images on Azure Kubernetes Service (AKS) clusters. While great for image creation, this process often doesn't account for the stale images left behind and can lead to image bloat on cluster nodes. These images may contain vulnerabilities, which can create security issues. To remove security risks in your clusters, you can clean these unreferenced images. Manually cleaning images can be time intensive. Image Cleaner performs automatic image identification and removal, which mitigates the risk of stale images and reduces the time required to clean them up.
> [!NOTE]
-> Image Cleaner is a feature based on [Eraser](https://azure.github.io/eraser).
-> On an AKS cluster, the feature name and property name is `Image Cleaner` while the relevant Image Cleaner pods' names contain `Eraser`.
-
+> Image Cleaner is a feature based on [Eraser](https://eraser-dev.github.io/eraser).
+> On an AKS cluster, the feature name and property name is `Image Cleaner`, while the relevant Image Cleaner pods' names contain `Eraser`.
## Prerequisites

* An Azure subscription. If you don't have an Azure subscription, you can create a [free account](https://azure.microsoft.com/free).
-* [Azure CLI][azure-cli-install] or [Azure PowerShell][azure-powershell-install] and the `aks-preview` 0.5.96 or later CLI extension installed.
-* The `EnableImageCleanerPreview` feature flag registered on your subscription:
-
-### [Azure CLI](#tab/azure-cli)
-
-First, install the aks-preview extension by running the following command:
-
-```azurecli
-az extension add --name aks-preview
-```
-
-Run the following command to update to the latest version of the extension released:
-
-```azurecli
-az extension update --name aks-preview
-```
-
-Then register the `EnableImageCleanerPreview` feature flag by using the [az feature register][az-feature-register] command, as shown in the following example:
-
-```azurecli-interactive
-az feature register --namespace "Microsoft.ContainerService" --name "EnableImageCleanerPreview"
-```
-
-It takes a few minutes for the status to show *Registered*. Verify the registration status by using the [az feature show][az-feature-show] command:
+* Azure CLI version 2.49.0 or later. Run `az --version` to find your version. If you need to install or upgrade, see [Install Azure CLI][azure-cli-install].
-```azurecli-interactive
-az feature show --namespace "Microsoft.ContainerService" --name "EnableImageCleanerPreview"
-```
-
-When the status reflects *Registered*, refresh the registration of the *Microsoft.ContainerService* resource provider by using the [az provider register][az-provider-register] command:
+## Limitations
-```azurecli-interactive
-az provider register --namespace Microsoft.ContainerService
-```
+Image Cleaner doesn't yet support Windows node pools or AKS virtual nodes.
-### [Azure PowerShell](#tab/azure-powershell)
+## How Image Cleaner works
-Register the `EnableImageCleanerPreview` feature flag by using the [Register-AzProviderPreviewFeature][register-azproviderpreviewfeature] cmdlet, as shown in the following example:
+When you enable Image Cleaner, it deploys an `eraser-controller-manager` pod, which generates an `ImageList` CRD. The eraser pods running on each node clean up any unreferenced and vulnerable images according to the `ImageList`. A [trivy][trivy] scan helps determine vulnerability and flags images with a classification of `LOW`, `MEDIUM`, `HIGH`, or `CRITICAL`. Image Cleaner automatically generates an updated `ImageList` based on a set time interval and can also be supplied manually. Once Image Cleaner generates an `ImageList`, it removes all images in the list from node VMs.
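
For example, once the controller has produced a list, you can inspect it with `kubectl`. This is a sketch that assumes the Eraser `ImageList` CRD is registered in your cluster:

```bash
# List any generated ImageList objects (a cluster-scoped CRD from the Eraser project)
kubectl get imagelists
```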
-```azurepowershell-interactive
-Register-AzProviderPreviewFeature -ProviderNamespace Microsoft.ContainerService -Name EnableImageCleanerPreview
-```
-It takes a few minutes for the status to show *Registered*. Verify the registration status by using the [Get-AzProviderPreviewFeature][get-azproviderpreviewfeature] cmdlet:
+## Configuration options
-```azurepowershell-interactive
-Get-AzProviderPreviewFeature -ProviderNamespace Microsoft.ContainerService -Name EnableImageCleanerPreview |
- Format-Table -Property Name, @{name='State'; expression={$_.Properties.State}}
-```
+With Image Cleaner, you can choose between manual and automatic mode and the following configuration options:
-When ready, refresh the registration of the *Microsoft.ContainerService* resource provider by using the [Register-AzResourceProvider][register-azresourceprovider] command:
+|Name|Description|Required|
+|-|--|--|
+|`--enable-image-cleaner`|Enable the Image Cleaner feature for an AKS cluster|Yes, unless disable is specified|
+|`--disable-image-cleaner`|Disable the Image Cleaner feature for an AKS cluster|Yes, unless enable is specified|
+|`--image-cleaner-interval-hours`|This parameter determines the interval (in hours) at which Image Cleaner runs. The default value for Azure CLI is one week, the minimum value is 24 hours, and the maximum is three months.|Not required for Azure CLI, required for ARM template or other clients|
-```azurepowershell-interactive
-Register-AzResourceProvider -ProviderNamespace Microsoft.ContainerService
-```
+> [!NOTE]
+> After disabling Image Cleaner, the old configuration still exists. This means if you enable the feature again without explicitly passing configuration, the existing value is used instead of the default.
-
+## Enable Image Cleaner on your AKS cluster
-## Limitations
+### Enable Image Cleaner on a new cluster
-Image Cleaner does not support the following:
+* Enable Image Cleaner on a new AKS cluster using the [`az aks create`][az-aks-create] command with the `--enable-image-cleaner` parameter.
-* ARM64 node pools. For more information, see [Azure Virtual Machines with ARM-based processors][arm-vms].
-* Windows node pools.
+ ```azurecli-interactive
+ az aks create -g myResourceGroup -n myManagedCluster \
+ --enable-image-cleaner
+ ```
-## How Image Cleaner works
+### Enable Image Cleaner on an existing cluster
-When enabled, an `eraser-controller-manager` pod is deployed, which generates an `ImageList` CRD. The eraser pods running on each nodes will clean up the unreferenced and vulnerable images according to the ImageList. Vulnerability is determined based on a [trivy][trivy] scan, after which images with a `LOW`, `MEDIUM`, `HIGH`, or `CRITICAL` classification are flagged. An updated `ImageList` will be automatically generated by Image Cleaner based on a set time interval, and can also be supplied manually.
+* Enable Image Cleaner on an existing AKS cluster using the [`az aks update`][az-aks-update] command.
+ ```azurecli-interactive
+ az aks update -g myResourceGroup -n myManagedCluster \
+ --enable-image-cleaner
+ ```
+### Update the Image Cleaner interval on a new or existing cluster
-Once an `ImageList` is generated, Image Cleaner will remove all the images in the list from node VMs.
+* Update the Image Cleaner interval on a new or existing AKS cluster using the `--image-cleaner-interval-hours` parameter.
+ ```azurecli-interactive
+ # Update the interval on a new cluster
+ az aks create -g myResourceGroup -n myManagedCluster \
+ --enable-image-cleaner \
+ --image-cleaner-interval-hours 48
+ # Update the interval on an existing cluster
+ az aks update -g myResourceGroup -n myManagedCluster \
+ --image-cleaner-interval-hours 48
+ ```
-## Configuration options
+After you enable the feature, the `eraser-controller-manager-xxx` pod and `collector-aks-xxx` pod are deployed. The `eraser-aks-xxx` pod contains *three* containers:
-In addition to choosing between manual and automatic mode, there are several options for Image Cleaner:
+ - **Scanner container**: Performs vulnerability image scans
+ - **Collector container**: Collects nonrunning and unused images
+ - **Remover container**: Removes these images from cluster nodes
-|Name|Description|Required|
-|-|--|--|
-|--enable-image-cleaner|Enable the Image Cleaner feature for an AKS cluster|Yes, unless disable is specified|
-|--disable-image-cleaner|Disable the Image Cleaner feature for an AKS cluster|Yes, unless enable is specified|
-|--image-cleaner-interval-hours|This parameter determines the interval time (in hours) Image Cleaner will use to run. The default value for Azure CLI is one week, the minimum value is 24 hours and the maximum is three months.|Not required for Azure CLI, required for ARM template or other clients|
+Image Cleaner generates an `ImageList` containing nonrunning and vulnerable images at the desired interval based on your configuration. Image Cleaner automatically removes these images from cluster nodes.
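
To watch this happening, you can list the Image Cleaner pods. The pod name patterns below are taken from the description above:

```bash
# The eraser and collector pods run in the kube-system namespace
kubectl get pods -n kube-system | grep -E 'eraser|collector'
```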
-> [!NOTE]
-> After disabling Image Cleaner, the old configuration still exists. This means that if you enable the feature again without explicitly passing configuration, the existing value will be used rather than the default.
+## Manually remove images using Image Cleaner
-## Enable Image Cleaner on your AKS cluster
+1. Create an `ImageList` using the following example YAML named `image-list.yml`.
-To create a new AKS cluster using the default interval, use [az aks create][az-aks-create]:
+ ```yml
+ apiVersion: eraser.sh/v1alpha1
+ kind: ImageList
+ metadata:
+ name: imagelist
+ spec:
+ images:
+ - docker.io/library/alpine:3.7.3 # You can also use "*" to specify all non-running images
+ ```
-```azurecli-interactive
-az aks create -g MyResourceGroup -n MyManagedCluster \
- --enable-image-cleaner
-```
+2. Apply the `ImageList` to your cluster using the `kubectl apply` command.
-To enable on an existing AKS cluster, use [az aks update][az-aks-update]:
+ ```bash
+ kubectl apply -f image-list.yml
+ ```
-```azurecli-interactive
-az aks update -g MyResourceGroup -n MyManagedCluster \
- --enable-image-cleaner
-```
+   Applying the `ImageList` triggers a job named `eraser-aks-xxx`, which causes Image Cleaner to remove the desired images from all nodes. Unlike the `eraser-aks-xxx` pod under autoclean with *three* containers, the eraser pod here has only *one* container, as the check below shows.
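
You can follow the manual run the same way (pod names are illustrative):

```bash
# Watch for the eraser worker pods spawned by the manual ImageList
kubectl get pods -n kube-system | grep eraser-aks

# Inspect a worker pod's logs to see which images were removed
kubectl logs -n kube-system <eraser-pod-name>
```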
-The `--image-cleaner-interval-hours` parameter can be specified at creation time or for an existing cluster. For example, the following command updates the interval for a cluster with Image Cleaner already enabled:
+## Image exclusion list
-```azurecli-interactive
-az aks update -g MyResourceGroup -n MyManagedCluster \
- --image-cleaner-interval-hours 48
-```
+Images specified in the exclusion list aren't removed from the cluster. Image Cleaner supports system and user-defined exclusion lists. Editing the system exclusion list isn't supported.
-After the feature is enabled, the `eraser-controller-manager-xxx` pod and `collector-aks-xxx` pod will be deployed.
-Based on your configuration, Image Cleaner will generate an `ImageList` containing non-running and vulnerable images at the desired interval. Image Cleaner will automatically remove these images from cluster nodes.
+### Check the system exclusion list
-## Manually remove images
+* Check the system exclusion list using the following `kubectl get` command.
-To manually remove images from your cluster using Image Cleaner, first create an `ImageList`. For example, save the following as `image-list.yml`:
+ ```bash
+ kubectl get -n kube-system cm eraser-system-exclusion -o yaml
+ ```
-```yml
-apiVersion: eraser.sh/v1alpha1
-kind: ImageList
-metadata:
- name: imagelist
-spec:
- images:
- - docker.io/library/alpine:3.7.3 # You can also use "*" to specify all non-running images
-```
+### Create a user-defined exclusion list
-And apply it to the cluster:
+1. Create a sample JSON file to contain excluded images.
-```bash
-kubectl apply -f image-list.yml
-```
+ ```bash
+ cat > sample.json <<EOF
+ {"excluded": ["excluded-image-name"]}
+ EOF
+ ```
-A job named `eraser-aks-xxx`will be triggered which causes Image Cleaner to remove the desired images from all nodes.
+2. Create a `configmap` using the sample JSON file using the following `kubectl create` and `kubectl label` command.
-## Disable Image Cleaner
+ ```bash
+ kubectl create configmap excluded --from-file=sample.json --namespace=kube-system
+ kubectl label configmap excluded eraser.sh/exclude.list=true -n kube-system
+ ```
-To stop using Image Cleaner, you can disable it via the `--disable-image-cleaner` flag:
+3. Verify the images are in the exclusion list using the following `kubectl logs` command.
-```azurecli-interactive
-az aks update -g MyResourceGroup -n MyManagedCluster
- --disable-image-cleaner
-```
+ ```bash
+ kubectl logs -n kube-system <eraser-pod-name>
+ ```
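
Optionally, you can confirm the configmap carries the label Image Cleaner looks for. This assumes the configmap name used in the steps above:

```bash
# The exclusion configmap must be labeled eraser.sh/exclude.list=true
kubectl get configmap excluded -n kube-system --show-labels
```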
-## Logging
+## Image Cleaner image logs
-Deletion image logs are stored in `eraser-aks-nodepool-xxx` pods for manually deleted images, and in `collector-aks-nodes-xxx` pods for automatically deleted images.
+Deletion image logs are stored in `eraser-aks-nodepool-xxx` pods for manually deleted images and in `collector-aks-nodes-xxx` pods for automatically deleted images.
-You can view these logs by running `kubectl logs <pod name> -n kubesystem`. However, this command may return only the most recent logs, since older logs are routinely deleted. To view all logs, follow these steps to enable the [Azure Monitor add-on](./monitor-aks.md) and use the Container Insights pod log table.
+You can view these logs using the `kubectl logs <pod name> -n kube-system` command. However, this command may return only the most recent logs, since older logs are routinely deleted. To view all logs, follow these steps to enable the [Azure Monitor add-on](./monitor-aks.md) and use the Container Insights pod log table.
-1. Ensure that Azure monitoring is enabled on the cluster. For detailed steps, see [Enable Container Insights for AKS cluster](../azure-monitor/containers/container-insights-enable-aks.md#existing-aks-cluster).
+1. Ensure Azure Monitoring is enabled on your cluster. For detailed steps, see [Enable Container Insights on AKS clusters](../azure-monitor/containers/container-insights-enable-aks.md#existing-aks-cluster).
-1. Get the Log Analytics resource ID:
+2. Get the Log Analytics resource ID using the [`az aks show`][az-aks-show] command.
```azurecli
- az aks show -g <resourceGroupofAKSCluster> -n <nameofAksCluster>
+ az aks show -g myResourceGroup -n myManagedCluster
```
- After a few minutes, the command returns JSON-formatted information about the solution, including the workspace resource ID:
+ After a few minutes, the command returns JSON-formatted information about the solution, including the workspace resource ID.
- ```json
+ ```json
"addonProfiles": { "omsagent": { "config": {
You can view these logs by running `kubectl logs <pod name> -n kubesystem`. Howe
"enabled": true } }
- ```
+ ```
-1. In the Azure portal, search for the workspace resource ID, then select **Logs**.
+3. In the Azure portal, search for the workspace resource ID, then select **Logs**.
-1. Copy this query into the table, replacing `name` with either `eraser-aks-nodepool-xxx` (for manual mode) or `collector-aks-nodes-xxx` (for automatic mode).
+4. Copy this query into the table, replacing `name` with either `eraser-aks-nodepool-xxx` (for manual mode) or `collector-aks-nodes-xxx` (for automatic mode).
```kusto
let startTimestamp = ago(1h);
You can view these logs by running `kubectl logs <pod name> -n kubesystem`. Howe
| order by TimeGenerated desc
```
-1. Select **Run**. Any deleted image logs will appear in the **Results** area.
+5. Select **Run**. Any deleted image logs appear in the **Results** area.
:::image type="content" source="media/image-cleaner/eraser-log-analytics.png" alt-text="Screenshot showing deleted image logs in the Azure portal." lightbox="media/image-cleaner/eraser-log-analytics.png":::
+## Disable Image Cleaner
+
+* Disable Image Cleaner on your cluster using the [`az aks update`][az-aks-update] command with the `--disable-image-cleaner` parameter.
+
+ ```azurecli-interactive
+ az aks update -g myResourceGroup -n myManagedCluster \
+ --disable-image-cleaner
+ ```
<!-- LINKS -->
[azure-cli-install]: /cli/azure/install-azure-cli
-[azure-powershell-install]: /powershell/azure/install-az-ps
[az-aks-create]: /cli/azure/aks#az_aks_create
[az-aks-update]: /cli/azure/aks#az_aks_update
-[az-feature-register]: /cli/azure/feature#az-feature-register
-[register-azproviderpreviewfeature]: /powershell/module/az.resources/register-azproviderpreviewfeature
-[az-feature-show]: /cli/azure/feature#az-feature-show
-[get-azproviderpreviewfeature]: /powershell/module/az.resources/get-azproviderpreviewfeature
-[az-provider-register]: /cli/azure/provider#az-provider-register
-[register-azresourceprovider]: /powershell/module/az.resources/register-azresourceprovider
-
-[arm-vms]: https://azure.microsoft.com/blog/azure-virtual-machines-with-ampere-altra-arm-based-processors-generally-available/
[trivy]: https://github.com/aquasecurity/trivy
+[az-aks-show]: /cli/azure/aks#az_aks_show
aks Ingress Basic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/ingress-basic.md
To control image versions, you'll want to import them into your own Azure Contai
REGISTRY_NAME=<REGISTRY_NAME>
SOURCE_REGISTRY=registry.k8s.io
CONTROLLER_IMAGE=ingress-nginx/controller
-CONTROLLER_TAG=v1.2.1
+CONTROLLER_TAG=v1.8.1
PATCH_IMAGE=ingress-nginx/kube-webhook-certgen
-PATCH_TAG=v1.1.1
+PATCH_TAG=v20230407
DEFAULTBACKEND_IMAGE=defaultbackend-amd64
DEFAULTBACKEND_TAG=1.5
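
With those variables set, the imports themselves would look something like the following sketch using `az acr import` (registry and variable names are taken from the block above):

```azurecli
# Import each image into your own registry so you control the versions
az acr import --name $REGISTRY_NAME --source $SOURCE_REGISTRY/$CONTROLLER_IMAGE:$CONTROLLER_TAG --image $CONTROLLER_IMAGE:$CONTROLLER_TAG
az acr import --name $REGISTRY_NAME --source $SOURCE_REGISTRY/$PATCH_IMAGE:$PATCH_TAG --image $PATCH_IMAGE:$PATCH_TAG
az acr import --name $REGISTRY_NAME --source $SOURCE_REGISTRY/$DEFAULTBACKEND_IMAGE:$DEFAULTBACKEND_TAG --image $DEFAULTBACKEND_IMAGE:$DEFAULTBACKEND_TAG
```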
$RegistryName = "<REGISTRY_NAME>"
$ResourceGroup = (Get-AzContainerRegistry | Where-Object {$_.name -eq $RegistryName} ).ResourceGroupName
$SourceRegistry = "registry.k8s.io"
$ControllerImage = "ingress-nginx/controller"
-$ControllerTag = "v1.2.1"
+$ControllerTag = "v1.8.1"
$PatchImage = "ingress-nginx/kube-webhook-certgen"
-$PatchTag = "v1.1.1"
+$PatchTag = "v20230407"
$DefaultBackendImage = "defaultbackend-amd64"
$DefaultBackendTag = "1.5"
ACR_URL=<REGISTRY_URL>
# Use Helm to deploy an NGINX ingress controller
helm install ingress-nginx ingress-nginx/ingress-nginx \
- --version 4.1.3 \
+ --version 4.7.1 \
  --namespace ingress-basic \
  --create-namespace \
  --set controller.replicaCount=2 \
ACR_URL=<REGISTRY_URL>
# Use Helm to deploy an NGINX ingress controller
helm install ingress-nginx ingress-nginx/ingress-nginx \
- --version 4.1.3 \
+ --version 4.7.1 \
  --namespace ingress-basic \
  --create-namespace \
  --set controller.replicaCount=2 \
aks Intro Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/intro-kubernetes.md
Learn more about deploying and managing AKS.
[azure-monitor-overview]: ../azure-monitor/overview.md [container-insights]: ../azure-monitor/containers/container-insights-overview.md [azure-monitor-managed-prometheus]: ../azure-monitor/essentials/prometheus-metrics-overview.md
-[collect-control-plane-logs]: monitor-aks.md#collect-control-plane-logs
+[collect-resource-logs]: monitor-aks.md#resource-logs
[azure-monitor-logs]: ../azure-monitor/logs/data-platform-logs.md [helm]: quickstart-helm.md [aks-best-practices]: best-practices.md
aks Istio About https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/istio-about.md
This service mesh add-on uses and builds on top of open-source Istio. The add-on
## Limitations Istio-based service mesh add-on for AKS has the following limitations:-
-* The add-on currently doesn't work on AKS clusters using [Azure CNI Powered by Cilium][azure-cni-cilium].
* The add-on doesn't work on AKS clusters that are using [Open Service Mesh addon for AKS][open-service-mesh-about].
* The add-on doesn't work on AKS clusters that have Istio installed on them already outside the add-on installation.
* Managed lifecycle of mesh on how Istio versions are installed and later made available for upgrades.
aks Quick Kubernetes Deploy Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-kubernetes-deploy-powershell.md
After a few minutes, the command completes and returns information about the clu
To manage a Kubernetes cluster, use the Kubernetes command-line client, [kubectl][kubectl]. `kubectl` is already installed if you use Azure Cloud Shell.
-1. Install `kubectl` locally using the `Install-AzAksKubectl` cmdlet:
+1. Install `kubectl` locally using the `Install-AzAksCliTool` cmdlet:
```azurepowershell
- Install-AzAksKubectl
+ Install-AzAksCliTool
``` 2. Configure `kubectl` to connect to your Kubernetes cluster using the [Import-AzAksCredential][import-azakscredential] cmdlet. The following cmdlet downloads credentials and configures the Kubernetes CLI to use them.
aks Quick Windows Container Deploy Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-windows-container-deploy-cli.md
The ASP.NET sample application is provided as part of the [.NET Framework Sample
spec: type: LoadBalancer ports:
- - protocol: TCP
+ - protocol: TCP
    port: 80
  selector:
    app: sample
aks Quick Windows Container Deploy Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-windows-container-deploy-powershell.md
AKS supports Windows Server 2019 and 2022 node pools. Windows Server 2022 is the
## Connect to the cluster
-You use [kubectl][kubectl], the Kubernetes command-line client, to manage your Kubernetes clusters. If you use Azure Cloud Shell, `kubectl` is already installed. To you want to install `kubectl` locally, you can use the `Install-AzAksKubectl` cmdlet.
+You use [kubectl][kubectl], the Kubernetes command-line client, to manage your Kubernetes clusters. If you use Azure Cloud Shell, `kubectl` is already installed. If you want to install `kubectl` locally, you can use the `Install-AzAksCliTool` cmdlet.
1. Configure `kubectl` to connect to your Kubernetes cluster using the [`Import-AzAksCredential`][import-azakscredential] cmdlet. This command downloads credentials and configures the Kubernetes CLI to use them.
aks Load Balancer Standard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/load-balancer-standard.md
spec:
This example updates the rule to allow inbound external traffic only from the `MY_EXTERNAL_IP_RANGE` range. If you replace `MY_EXTERNAL_IP_RANGE` with the internal subnet IP address, traffic is restricted to only cluster internal IPs. If traffic is restricted to cluster internal IPs, clients outside your Kubernetes cluster are unable to access the load balancer. > [!NOTE]
-> Inbound, external traffic flows from the load balancer to the virtual network for your AKS cluster. The virtual network has a network security group (NSG) which allows all inbound traffic from the load balancer. This NSG uses a [service tag][service-tags] of type *LoadBalancer* to allow traffic from the load balancer.
+> * Inbound, external traffic flows from the load balancer to the virtual network for your AKS cluster. The virtual network has a network security group (NSG) which allows all inbound traffic from the load balancer. This NSG uses a [service tag][service-tags] of type *LoadBalancer* to allow traffic from the load balancer.
+> * For clusters running v1.25 or later, add the Pod CIDR to `loadBalancerSourceRanges` if Pods need to access the service's LoadBalancer IP.
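
For example, a sketch (the service name and pod CIDR are placeholders) that includes the pod CIDR alongside the allowed external range:

```bash
# Hypothetical: patch the service so in-cluster pods can also reach the load balancer IP
kubectl patch service azure-vote-front -p '{"spec":{"loadBalancerSourceRanges":["MY_EXTERNAL_IP_RANGE","10.244.0.0/16"]}}'
```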
## Maintain the client's IP on inbound connections
The following annotations are supported for Kubernetes services with type `LoadB
| `service.beta.kubernetes.io/azure-load-balancer-resource-group` | Name of the resource group | Specify the resource group of load balancer public IPs that aren't in the same resource group as the cluster infrastructure (node resource group).
| `service.beta.kubernetes.io/azure-allowed-service-tags` | List of allowed service tags | Specify a list of allowed [service tags][service-tags] separated by commas.
| `service.beta.kubernetes.io/azure-load-balancer-tcp-idle-timeout` | TCP idle timeouts in minutes | Specify the time in minutes for TCP connection idle timeouts to occur on the load balancer. The default and minimum value is 4. The maximum value is 30. The value must be an integer.
+| `service.beta.kubernetes.io/azure-load-balancer-ipv4` | IPv4 address | Specify the IPv4 address to assign to the load balancer.
+| `service.beta.kubernetes.io/azure-load-balancer-ipv6` | IPv6 address | Specify the IPv6 address to assign to the load balancer.
> [!NOTE]
> `service.beta.kubernetes.io/azure-load-balancer-disable-tcp-reset` was deprecated in Kubernetes 1.18 and removed in 1.20.
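
As an illustration of the new IP annotations (the service name and address are hypothetical), a manifest applied from the shell might look like:

```bash
# Hypothetical sketch: pin the service's load balancer to a specific IPv4 address
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: sample-lb
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-ipv4: "4.5.6.7"
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: sample
EOF
```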
aks Manage Azure Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/manage-azure-rbac.md
This article covers how to use Azure RBAC for Kubernetes Authorization, which al
* You need managed Azure AD integration enabled on your cluster before you can add Azure RBAC for Kubernetes authorization. If you need to enable managed Azure AD integration, see [Use Azure AD in AKS](managed-azure-ad.md).
* If you have CRDs and are making custom role definitions, the only way to cover CRDs today is to use `Microsoft.ContainerService/managedClusters/*/read`. For the remaining objects, you can use the specific API groups, such as `Microsoft.ContainerService/apps/deployments/read`.
* New role assignments can take up to five minutes to propagate and be updated by the authorization server.
-* This article requires that the Azure AD tenant configured for authentication is same as the tenant for the subscription that holds your AKS cluster.
+Azure RBAC for Kubernetes Authorization requires that the Azure AD tenant configured for authentication is the same as the tenant for the subscription that holds your AKS cluster.
## Create a new AKS cluster with managed Azure AD integration and Azure RBAC for Kubernetes Authorization
aks Monitor Aks Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/monitor-aks-reference.md
The following table lists [dimensions](../azure-monitor/essentials/data-platform
## Resource logs
-AKS implements control plane logs for the cluster as [resource logs in Azure Monitor](../azure-monitor/essentials/resource-logs.md). See [Resource logs](monitor-aks.md#resource-logs) for details on creating a diagnostic setting to collect these logs and [How to query logs from Container insights](../azure-monitor/containers/container-insights-log-query.md#resource-logs) for query examples.
+AKS implements control plane logs for the cluster as [resource logs in Azure Monitor](../azure-monitor/essentials/resource-logs.md). See [Resource logs](monitor-aks.md#resource-logs) for details on creating a diagnostic setting to collect these logs and [Sample queries](monitor-aks-reference.md#resource-logs) for query examples.
The following table lists the resource log categories you can collect for AKS. All logs are written to the [AzureDiagnostics](/azure/azure-monitor/reference/tables/azurediagnostics) table.
aks Network Observability Byo Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/network-observability-byo-cli.md
AKS Network Observability is used to collect the network traffic data of your AKS cluster. Network Observability enables a centralized platform for monitoring application and network health. Prometheus collects AKS Network Observability metrics, and Grafana visualizes them. Both Cilium and non-Cilium data plane are supported. In this article, learn how to enable the Network Observability add-on and use BYO Prometheus and Grafana to visualize the scraped metrics.
+> [!IMPORTANT]
+> AKS Network Observability is currently in PREVIEW.
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+ For more information about AKS Network Observability, see [What is Azure Kubernetes Service (AKS) Network Observability?](network-observability-overview.md). ## Prerequisites
For more information about AKS Network Observability, see [What is Azure Kuberne
[!INCLUDE [azure-cli-prepare-your-environment-no-header.md](~/articles/reusable-content/azure-cli/azure-cli-prepare-your-environment-no-header.md)]
+- The minimum version of **Azure CLI** required for the steps in this article is **2.44.0**. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli).
+ ### Install the `aks-preview` Azure CLI extension [!INCLUDE [preview features callout](./includes/preview/preview-callout.md)]
aks Network Observability Managed Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/network-observability-managed-cli.md
AKS Network Observability is used to collect the network traffic data of your AKS cluster. Network Observability enables a centralized platform for monitoring application and network health. Prometheus collects AKS Network Observability metrics, and Grafana visualizes them. Both Cilium and non-Cilium data plane are supported. In this article, learn how to enable the Network Observability add-on and use Azure managed Prometheus and Grafana to visualize the scraped metrics.
+> [!IMPORTANT]
+> AKS Network Observability is currently in PREVIEW.
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+ For more information about AKS Network Observability, see [What is Azure Kubernetes Service (AKS) Network Observability?](network-observability-overview.md). ## Prerequisites
For more information about AKS Network Observability, see [What is Azure Kuberne
[!INCLUDE [azure-cli-prepare-your-environment-no-header.md](~/articles/reusable-content/azure-cli/azure-cli-prepare-your-environment-no-header.md)]
+- The minimum version of **Azure CLI** required for the steps in this article is **2.44.0**. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli).
### Install the `aks-preview` Azure CLI extension [!INCLUDE [preview features callout](./includes/preview/preview-callout.md)]
az aks get-credentials --name myAKSCluster --resource-group myResourceGroup
--namespace kube-system
```
-1. Azure Monitor pods should restart themselves, if they do not please rollout restart with following command:
- ```azurecli-interactive
+1. Azure Monitor pods should restart themselves. If they don't, run the following command to restart them:
+
+```azurecli-interactive
kubectl rollout restart deploy -n kube-system ama-metrics
```
aks Node Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/node-access.md
Title: Connect to Azure Kubernetes Service (AKS) cluster nodes
description: Learn how to connect to Azure Kubernetes Service (AKS) cluster nodes for troubleshooting and maintenance tasks.
Previously updated : 04/26/2023 Last updated : 09/06/2023
#Customer intent: As a cluster operator, I want to learn how to connect to virtual machines in an AKS cluster to perform maintenance or troubleshoot a problem.
To create an interactive shell connection to a Linux node, use the `kubectl debu
The following example resembles output from the command:

```output
- NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE
- KERNEL-VERSION CONTAINER-RUNTIME
+ NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
aks-nodepool1-37663765-vmss000000 Ready agent 166m v1.25.6 10.224.0.33 <none> Ubuntu 22.04.2 LTS 5.15.0-1039-azure containerd://1.7.1+azure-1 aks-nodepool1-37663765-vmss000001 Ready agent 166m v1.25.6 10.224.0.4 <none> Ubuntu 22.04.2 LTS 5.15.0-1039-azure containerd://1.7.1+azure-1 aksnpwin000000 Ready agent 160m v1.25.6 10.224.0.62 <none> Windows Server 2022 Datacenter 10.0.20348.1787 containerd://1.6.21+azure
The following examples demonstrate possible usage of this command:
```

> [!IMPORTANT]
-> During this operation, all virtual machine scale set instances are upgraded and re-imaged to use the new SSH public key.
+> After you update the SSH key, AKS doesn't automatically reimage your node pool. You can choose to perform [a reimage operation][node-image-upgrade] at any time. The updated SSH key takes effect only after the reimage completes.
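
A reimage is performed with a node image upgrade, for example (the resource and node pool names are assumed for illustration):

```azurecli-interactive
# Reimage the node pool so the updated SSH key takes effect
az aks nodepool upgrade \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --name nodepool1 \
    --node-image-only
```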
## Next steps
If you need more troubleshooting data, you can [view the kubelet logs][view-kube
[ssh-linux-kubectl-debug]: #create-an-interactive-shell-connection-to-a-linux-node [az-aks-update]: /cli/azure/aks#az-aks-update [how-to-install-azure-extensions]: /cli/azure/azure-cli-extensions-overview#how-to-install-extensions
+[node-image-upgrade]: node-image-upgrade.md
aks Node Auto Repair https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/node-auto-repair.md
Azure Kubernetes Service (AKS) continuously monitors the health state of worker
In this article, you learn how the automatic node repair functionality behaves for Windows and Linux nodes.
-## How AKS checks for unhealthy nodes
+## How AKS checks for NotReady nodes
AKS uses the following rules to determine if a node is unhealthy and needs repair:
-* The node reports the **NotReady** status on consecutive checks within a 10-minute time frame.
+* The node reports the [**NotReady**](https://kubernetes.io/docs/reference/node/node-status/#condition) status on consecutive checks within a 10-minute time frame.
* The node doesn't report any status within 10 minutes.

You can manually check the health state of your nodes with the `kubectl get nodes` command.
If AKS identifies an unhealthy node that remains unhealthy for *five* minutes, A
AKS engineers investigate alternative remediations if auto-repair is unsuccessful.
+> [!NOTE]
+> Auto-repair is not triggered if the following taints are present on the node: `node.cloudprovider.kubernetes.io/shutdown`, `ToBeDeletedByClusterAutoscaler`.
+>
+> The overall auto-repair process can take up to an hour to complete. AKS retries a maximum of three times for each step.
+ ## Node auto-drain [Scheduled events][scheduled-events] can occur on the underlying VMs in any of your node pools. For [spot node pools][spot-node-pools], scheduled events may cause a *preempt* node event for the node. Certain node events, such as *preempt*, cause AKS node auto-drain to attempt a cordon and drain of the affected node. This process enables rescheduling for any affected workloads on that node. You might notice the node receives a taint with `"remediator.aks.microsoft.com/unschedulable"`, because of `"kubernetes.azure.com/scalesetpriority: spot"`.
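
A quick way to check whether those taints are present is a `kubectl` one-liner like the following sketch:

```bash
# List each node with any taint keys that might suppress auto-repair
kubectl get nodes -o custom-columns='NAME:.metadata.name,TAINTS:.spec.taints[*].key'
```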
aks Openfaas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/openfaas.md
Title: Use OpenFaaS with Azure Kubernetes Service (AKS)
+ Title: Use OpenFaaS on Azure Kubernetes Service (AKS)
description: Learn how to deploy and use OpenFaaS on an Azure Kubernetes Service (AKS) cluster to build serverless functions with containers.
Previously updated : 03/05/2018 Last updated : 08/29/2023
-# Using OpenFaaS on AKS
+# Use OpenFaaS on Azure Kubernetes Service (AKS)
-[OpenFaaS][open-faas] is a framework for building serverless functions through the use of containers. As an open source project, it has gained large-scale adoption within the community. This document details installing and using OpenFaas on an Azure Kubernetes Service (AKS) cluster.
+[OpenFaaS][open-faas] is a framework that uses containers to build serverless functions. As an open source project, it has gained large-scale adoption within the community. This document details installing and using OpenFaas on an Azure Kubernetes Service (AKS) cluster.
-## Prerequisites
+## Before you begin
-In order to complete the steps within this article, you need the following.
-
-* Basic understanding of Kubernetes.
-* An Azure Kubernetes Service (AKS) cluster and AKS credentials configured on your development system.
-* Azure CLI installed on your development system.
-* Git command-line tools installed on your system.
+* This article assumes a basic understanding of Kubernetes concepts. For more information, see [Kubernetes core concepts for Azure Kubernetes Service (AKS)](./concepts-clusters-workloads.md).
+* You need an active Azure subscription. If you don't have one, create a [free account](https://azure.microsoft.com/free/) before you begin.
+* You need an AKS cluster. If you don't have an existing cluster, you can create one using the [Azure CLI](./learn/quick-kubernetes-deploy-cli.md), [Azure PowerShell](./learn/quick-kubernetes-deploy-powershell.md), or [Azure portal](./learn/quick-kubernetes-deploy-portal.md).
+* You need to install the OpenFaaS CLI. For installation options, see the [OpenFaaS CLI documentation][open-faas-cli].
## Add the OpenFaaS helm chart repo
-Go to [https://shell.azure.com](https://shell.azure.com) to open Azure Cloud Shell in your browser.
-
-OpenFaaS maintains its own helm charts to keep up to date with all the latest changes.
+1. Navigate to [Azure Cloud Shell](https://shell.azure.com).
+2. Add the OpenFaaS helm chart repo and update to the latest version using the following `helm` commands.
-```console
-helm repo add openfaas https://openfaas.github.io/faas-netes/
-helm repo update
-```
+ ```console
+ helm repo add openfaas https://openfaas.github.io/faas-netes/
+ helm repo update
+ ```
## Deploy OpenFaaS

As a good practice, OpenFaaS and OpenFaaS functions should be stored in their own Kubernetes namespace.
-Create a namespace for the OpenFaaS system and functions:
-
-```console
-kubectl apply -f https://raw.githubusercontent.com/openfaas/faas-netes/master/namespaces.yml
-```
-
-Generate a password for the OpenFaaS UI Portal and REST API:
-
-```console
-# generate a random password
-PASSWORD=$(head -c 12 /dev/urandom | shasum| cut -d' ' -f1)
-
-kubectl -n openfaas create secret generic basic-auth \
---from-literal=basic-auth-user=admin \
---from-literal=basic-auth-password="$PASSWORD"
-```
+1. Create a namespace for the OpenFaaS system and functions using the `kubectl apply` command.
-You can get the value of the secret with `echo $PASSWORD`.
+ ```console
+ kubectl apply -f https://raw.githubusercontent.com/openfaas/faas-netes/master/namespaces.yml
+ ```
-The password we create here will be used by the helm chart to enable basic authentication on the OpenFaaS Gateway, which is exposed to the Internet through a cloud LoadBalancer.
+2. Generate a password for the OpenFaaS UI Portal and REST API using the following commands. The helm chart uses this password to enable basic authentication on the OpenFaaS Gateway, which is exposed to the Internet through a cloud LoadBalancer.
-A Helm chart for OpenFaaS is included in the cloned repository. Use this chart to deploy OpenFaaS into your AKS cluster.
+ ```console
+ # generate a random password
+ PASSWORD=$(head -c 12 /dev/urandom | shasum| cut -d' ' -f1)
-```console
-helm upgrade openfaas --install openfaas/openfaas \
- --namespace openfaas \
- --set basic_auth=true \
- --set functionNamespace=openfaas-fn \
- --set serviceType=LoadBalancer
-```
+ kubectl -n openfaas create secret generic basic-auth \
+ --from-literal=basic-auth-user=admin \
+ --from-literal=basic-auth-password="$PASSWORD"
+ ```
-Output:
+3. Get the value for your password using the following `echo` command.
-```output
-NAME: openfaas
-LAST DEPLOYED: Wed Feb 28 08:26:11 2018
-NAMESPACE: openfaas
-STATUS: DEPLOYED
+ ```console
+ echo $PASSWORD
+ ```
-RESOURCES:
-==> v1/ConfigMap
-NAME DATA AGE
-prometheus-config 2 20s
-alertmanager-config 1 20s
+4. Deploy OpenFaaS into your AKS cluster using the `helm upgrade` command.
-{snip}
+ ```console
+ helm upgrade openfaas --install openfaas/openfaas \
+ --namespace openfaas \
+ --set basic_auth=true \
+ --set functionNamespace=openfaas-fn \
+ --set serviceType=LoadBalancer
+ ```
-NOTES:
-To verify that openfaas has started, run:
+ Your output should look similar to the following condensed example output:
-kubectl --namespace=openfaas get deployments -l "release=openfaas, app=openfaas"
-```
+ ```output
+ NAME: openfaas
+ LAST DEPLOYED: Tue Aug 29 08:26:11 2023
+ NAMESPACE: openfaas
+ STATUS: deployed
+ ...
+ NOTES:
+ To verify that openfaas has started, run:
-A public IP address is created for accessing the OpenFaaS gateway. To retrieve this IP address, use the [kubectl get service][kubectl-get] command. It may take a minute for the IP address to be assigned to the service.
+ kubectl --namespace=openfaas get deployments -l "release=openfaas, app=openfaas"
+ ...
+ ```
-```console
-kubectl get service -l component=gateway --namespace openfaas
-```
+5. A public IP address is created for accessing the OpenFaaS gateway. Get the IP address using the [`kubectl get service`][kubectl-get] command.
-Output.
+ ```console
+ kubectl get service -l component=gateway --namespace openfaas
+ ```
-```output
-NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
-gateway ClusterIP 10.0.156.194 <none> 8080/TCP 7m
-gateway-external LoadBalancer 10.0.28.18 52.186.64.52 8080:30800/TCP 7m
-```
+ Your output should look similar to the following example output:
-To test the OpenFaaS system, browse to the external IP address on port 8080, `http://52.186.64.52:8080` in this example. You will be prompted to log in. The default user is `admin` and your password can be retrieved by using `echo $PASSWORD`.
+ ```output
+ NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+ gateway ClusterIP 10.0.156.194 <none> 8080/TCP 7m
+ gateway-external LoadBalancer 10.0.28.18 52.186.64.52 8080:30800/TCP 7m
+ ```
-![OpenFaaS UI](media/container-service-serverless/openfaas.png)
+6. Test the OpenFaaS system by browsing to the external IP address on port 8080, `http://52.186.64.52:8080` in this example, where you're prompted to log in. The default user is `admin` and your password can be retrieved using `echo $PASSWORD`.
-Finally, install the OpenFaaS CLI. This example used brew, see the [OpenFaaS CLI documentation][open-faas-cli] for more options.
+ ![Screenshot of OpenFaaS UI.](media/container-service-serverless/openfaas.png)
-```console
-brew install faas-cli
-```
+7. Set `$OPENFAAS_URL` to the URL of the external IP address on port 8080 and log in with the OpenFaaS CLI (`faas-cli`) using the following commands.
-Set `$OPENFAAS_URL` to the public IP found above.
-
-Log in with the Azure CLI:
-
-```console
-export OPENFAAS_URL=http://52.186.64.52:8080
-echo -n $PASSWORD | ./faas-cli login -g $OPENFAAS_URL -u admin --password-stdin
-```
+ ```console
+ export OPENFAAS_URL=http://52.186.64.52:8080
+ echo -n $PASSWORD | ./faas-cli login -g $OPENFAAS_URL -u admin --password-stdin
+ ```
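+
+   > [!NOTE]
+   > These commands assume the OpenFaaS CLI (`faas-cli`) is installed. If it isn't, you can install it with brew, as the earlier version of this example did, or see the [OpenFaaS CLI documentation][open-faas-cli] for other options:
+   >
+   > ```console
+   > brew install faas-cli
+   > ```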
## Create first function
-Now that OpenFaaS is operational, create a function using the OpenFaas portal.
-
-Click on **Deploy New Function** and search for **Figlet**. Select the Figlet function, and click **Deploy**.
+1. Navigate to the OpenFaaS system using your OpenFaaS URL.
+2. Create a function using the OpenFaaS portal by selecting **Deploy A New Function** and searching for **Figlet**.
+3. Select the **Figlet** function, and then select **Deploy**.
-![Screenshot shows the Deploy A New Function dialog box with the text figlet on the search line.](media/container-service-serverless/figlet.png)
+ ![Screenshot shows the Deploy A New Function dialog box with the text Figlet on the search line.](media/container-service-serverless/figlet.png)
-Use curl to invoke the function. Replace the IP address in the following example with that of your OpenFaas gateway.
+4. Invoke the function using the following `curl` command. Make sure you replace the IP address in the following example with your OpenFaaS gateway address.
-```console
-curl -X POST http://52.186.64.52:8080/function/figlet -d "Hello Azure"
-```
+ ```console
+ curl -X POST http://52.186.64.52:8080/function/figlet -d "Hello Azure"
+ ```
-Output:
+ Your output should look similar to the following example output:
-```output
- _ _ _ _ _
-| | | | ___| | | ___ / \ _____ _ _ __ ___
-| |_| |/ _ \ | |/ _ \ / _ \ |_ / | | | '__/ _ \
-| _ | __/ | | (_) | / ___ \ / /| |_| | | | __/
-|_| |_|\___|_|_|\___/ /_/ \_\/___|\__,_|_| \___|
-
-```
+ ```output
+ _ _ _ _ _
+ | | | | ___| | | ___ / \ _____ _ _ __ ___
+ | |_| |/ _ \ | |/ _ \ / _ \ |_ / | | | '__/ _ \
+ | _ | __/ | | (_) | / ___ \ / /| |_| | | | __/
+ |_| |_|\___|_|_|\___/ /_/ \_\/___|\__,_|_| \___|
+ ```
## Create second function
-Now create a second function. This example will be deployed using the OpenFaaS CLI and includes a custom container image and retrieving data from an Azure Cosmos DB instance. Several items need to be configured before creating the function.
-
-First, create a new resource group for the Azure Cosmos DB instance.
-
-```azurecli-interactive
-az group create --name serverless-backing --location eastus
-```
+### Configure your Azure Cosmos DB instance
-Deploy an Azure Cosmos DB instance of kind `MongoDB`. The instance needs a unique name, update `openfaas-cosmos` to something unique to your environment.
+1. Navigate to [Azure Cloud Shell](https://shell.azure.com).
+2. Create a new resource group for the Azure Cosmos DB instance using the [`az group create`][az-group-create] command.
-```azurecli-interactive
-az cosmosdb create --resource-group serverless-backing --name openfaas-cosmos --kind MongoDB
-```
+ ```azurecli-interactive
+ az group create --name serverless-backing --location eastus
+ ```
-Get the Azure Cosmos DB database connection string and store it in a variable.
+3. Deploy an Azure Cosmos DB instance of kind `MongoDB` using the [`az cosmosdb create`][az-cosmosdb-create] command. Replace `openfaas-cosmos` with your own unique instance name.
-Update the value for the `--resource-group` argument to the name of your resource group, and the `--name` argument to the name of your Azure Cosmos DB instance.
+ ```azurecli-interactive
+ az cosmosdb create --resource-group serverless-backing --name openfaas-cosmos --kind MongoDB
+ ```
-```azurecli-interactive
-COSMOS=$(az cosmosdb list-connection-strings \
- --resource-group serverless-backing \
- --name openfaas-cosmos \
- --query connectionStrings[0].connectionString \
- --output tsv)
-```
+4. Get the Azure Cosmos DB database connection string and store it in a variable using the [`az cosmosdb list-connection-strings`][az-cosmosdb-list] command. Make sure you replace the value for the `--resource-group` argument with the name of your resource group, and the `--name` argument with the name of your Azure Cosmos DB instance.
-Now populate the Azure Cosmos DB with test data. Create a file named `plans.json` and copy in the following json.
+ ```azurecli-interactive
+ COSMOS=$(az cosmosdb list-connection-strings \
+ --resource-group serverless-backing \
+ --name openfaas-cosmos \
+ --query connectionStrings[0].connectionString \
+ --output tsv)
+ ```
-```json
-{
- "name" : "two_person",
- "friendlyName" : "Two Person Plan",
- "portionSize" : "1-2 Person",
- "mealsPerWeek" : "3 Unique meals per week",
- "price" : 72,
- "description" : "Our basic plan, delivering 3 meals per week, which will feed 1-2 people.",
- "__v" : 0
-}
-```
+5. Populate the Azure Cosmos DB instance with test data by creating a file named `plans.json` and copying in the following JSON.
-Use the *mongoimport* tool to load the Azure Cosmos DB instance with data.
+ ```json
+ {
+ "name" : "two_person",
+ "friendlyName" : "Two Person Plan",
+ "portionSize" : "1-2 Person",
+ "mealsPerWeek" : "3 Unique meals per week",
+ "price" : 72,
+ "description" : "Our basic plan, delivering 3 meals per week, which will feed 1-2 people.",
+ "__v" : 0
+ }
+ ```
-If needed, install the MongoDB tools. The following example installs these tools using brew, see the [MongoDB documentation][install-mongo] for other options.
+### Create the function
-```console
-brew install mongodb
-```
+1. Install the MongoDB tools. The following example command installs these tools using brew. For more installation options, see the [MongoDB documentation][install-mongo].
-Load the data into the database.
+ ```console
+ brew install mongodb
+ ```
-```console
-mongoimport --uri=$COSMOS -c plans < plans.json
-```
+2. Load the Azure Cosmos DB instance with data using the *mongoimport* tool.
-Output:
+ ```console
+ mongoimport --uri=$COSMOS -c plans < plans.json
+ ```
-```output
-2018-02-19T14:42:14.313+0000 connected to: localhost
-2018-02-19T14:42:14.918+0000 imported 1 document
-```
+ Your output should look similar to the following example output:
-Run the following command to create the function. Update the value of the `-g` argument with your OpenFaaS gateway address.
+ ```output
+ 2018-02-19T14:42:14.313+0000 connected to: localhost
+ 2018-02-19T14:42:14.918+0000 imported 1 document
+ ```
-```console
-faas-cli deploy -g http://52.186.64.52:8080 --image=shanepeckham/openfaascosmos --name=cosmos-query --env=NODE_ENV=$COSMOS
-```
+3. Create the function using the `faas-cli deploy` command. Make sure you update the value of the `-g` argument with your OpenFaaS gateway address.
-Once deployed, you should see your newly created OpenFaaS endpoint for the function.
+ ```console
+ faas-cli deploy -g http://52.186.64.52:8080 --image=shanepeckham/openfaascosmos --name=cosmos-query --env=NODE_ENV=$COSMOS
+ ```
-```output
-Deployed. 202 Accepted.
-URL: http://52.186.64.52:8080/function/cosmos-query
-```
+ Once deployed, your output should look similar to the following example output:
-Test the function using curl. Update the IP address with the OpenFaaS gateway address.
+ ```output
+ Deployed. 202 Accepted.
+ URL: http://52.186.64.52:8080/function/cosmos-query
+ ```
-```console
-curl -s http://52.186.64.52:8080/function/cosmos-query
-```
+4. Test the function using the following `curl` command. Make sure you update the IP address with the OpenFaaS gateway address.
-Output:
+ ```console
+ curl -s http://52.186.64.52:8080/function/cosmos-query
+ ```
-```json
-[{"ID":"","Name":"two_person","FriendlyName":"","PortionSize":"","MealsPerWeek":"","Price":72,"Description":"Our basic plan, delivering 3 meals per week, which will feed 1-2 people."}]
-```
+ Your output should look similar to the following example output:
-You can also test the function within the OpenFaaS UI.
+ ```output
+ [{"ID":"","Name":"two_person","FriendlyName":"","PortionSize":"","MealsPerWeek":"","Price":72,"Description":"Our basic plan, delivering 3 meals per week, which will feed 1-2 people."}]
+ ```
-![alt text](media/container-service-serverless/OpenFaaSUI.png)
+ > [!NOTE]
+ > You can also test the function within the OpenFaaS UI:
+ >
+ > ![Screenshot of OpenFaas UI.](media/container-service-serverless/OpenFaaSUI.png)
-## Next Steps
+## Next steps
-You can continue to learn with the [OpenFaaS workshop](https://github.com/openfaas/workshop) through a set of hands-on labs that cover topics such as how to create your own GitHub bot, consuming secrets, viewing metrics, and auto-scaling.
+Continue to learn with the [OpenFaaS workshop][openfaas-workshop], which includes a set of hands-on labs that cover topics such as how to create your own GitHub bot, consuming secrets, viewing metrics, and autoscaling.
<!-- LINKS - external --> [install-mongo]: https://docs.mongodb.com/manual/installation/
You can continue to learn with the [OpenFaaS workshop](https://github.com/openfa
[open-faas]: https://www.openfaas.com/ [open-faas-cli]: https://github.com/openfaas/faas-cli [openfaas-workshop]: https://github.com/openfaas/workshop
+[az-group-create]: /cli/azure/group#az_group_create
+[az-cosmosdb-create]: /cli/azure/cosmosdb#az_cosmosdb_create
+[az-cosmosdb-list]: /cli/azure/cosmosdb#az_cosmosdb_list
aks Planned Maintenance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/planned-maintenance.md
Title: Use Planned Maintenance to schedule and control upgrades for your Azure Kubernetes Service (AKS) cluster (preview)
+ Title: Use Planned Maintenance to schedule and control upgrades for your Azure Kubernetes Service (AKS) cluster
+ description: Learn how to use Planned Maintenance to schedule and control cluster and node image upgrades in Azure Kubernetes Service (AKS).
-# Use Planned Maintenance to schedule and control upgrades for your Azure Kubernetes Service (AKS) cluster (preview)
+# Use Planned Maintenance to schedule and control upgrades for your Azure Kubernetes Service (AKS) cluster
+
+Your AKS cluster has regular maintenance performed on it automatically. There are two types of regular maintenance: maintenance that AKS initiates and maintenance that you initiate. The Planned Maintenance feature allows you to run both types of maintenance on a cadence of your choice, thereby minimizing workload impact.
-Your AKS cluster has regular maintenance performed on it automatically. By default, this work can happen at any time. Planned Maintenance allows you to schedule weekly maintenance windows to perform updates and minimize workload impact. Once scheduled, upgrades occur only during the window you selected.
+AKS-initiated maintenance refers to AKS releases. These releases are weekly rounds of fixes and feature and component updates that affect your clusters. The types of maintenance that you initiate regularly are [cluster auto-upgrades][aks-upgrade] and [node OS automatic security updates][node-image-auto-upgrade].
There are currently three available configuration types: `default`, `aksManagedAutoUpgradeSchedule`, `aksManagedNodeOSUpgradeSchedule`: -- `default` corresponds to a basic configuration that is mostly suitable for basic scheduling of [weekly releases][release-tracker].
+- `default` corresponds to a basic configuration that is used to control AKS releases. These releases can take up to two weeks to roll out to all regions from the initial time of shipping because of Azure Safe Deployment Practices (SDP). Choose `default` to schedule these updates in a way that's least disruptive for you. You can monitor the status of an ongoing AKS release by region from the [weekly releases tracker][release-tracker].
- `aksManagedAutoUpgradeSchedule` controls when cluster upgrades scheduled by your designated auto-upgrade channel are performed. More finely controlled cadence and recurrence settings are possible than in a `default` configuration. For more information on cluster auto-upgrade, see [Automatically upgrade an Azure Kubernetes Service (AKS) cluster][aks-upgrade]. -- `aksManagedNodeOSUpgradeSchedule` controls when node operating system upgrades scheduled by your node OS auto-upgrade channel are performed. More finely controlled cadence and recurrence settings are possible than in a `default configuration. For more information on node OS auto-upgrade, see [Automatically patch and update AKS cluster node images][node-image-auto-upgrade]
+- `aksManagedNodeOSUpgradeSchedule` controls when the node operating system security patching scheduled by your node OS auto-upgrade channel is performed. More finely controlled cadence and recurrence settings are possible than in a `default` configuration. For more information on the node OS auto-upgrade channel, see [Automatically patch and update AKS cluster node images][node-image-auto-upgrade].
-We recommend using `aksManagedAutoUpgradeSchedule` for all cluster upgrade scenarios and `aksManagedNodeOSUpgradeSchedule` for all node image upgrade scenarios, while `default` is meant exclusively for weekly releases. You can port `default` configurations to `aksManagedAutoUpgradeSchedule` configurations via the `az aks maintenanceconfiguration update` command.
-
-To configure Planned Maintenance using pre-created configurations, see [Use Planned Maintenance pre-created configurations to schedule AKS weekly releases][pm-weekly].
+We recommend using `aksManagedAutoUpgradeSchedule` for all cluster upgrade scenarios and `aksManagedNodeOSUpgradeSchedule` for all node OS security patching scenarios, while `default` is meant exclusively for the AKS weekly releases. You can port `default` configurations to the `aksManagedAutoUpgradeSchedule` or `aksManagedNodeOSUpgradeSchedule` configurations via the `az aks maintenanceconfiguration update` command.
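+
+For illustration, updating an `aksManagedAutoUpgradeSchedule` configuration with the `az aks maintenanceconfiguration update` command might look like the following sketch; the resource group, cluster name, and schedule values are placeholders:
+
+```azurecli-interactive
+az aks maintenanceconfiguration update \
+    --resource-group myResourceGroup \
+    --cluster-name myAKSCluster \
+    --name aksManagedAutoUpgradeSchedule \
+    --schedule-type Weekly \
+    --day-of-week Monday \
+    --interval-weeks 1 \
+    --start-time 01:00 \
+    --duration 4
+```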
## Before you begin
This article assumes that you have an existing AKS cluster. If you need an AKS c
Be sure to upgrade Azure CLI to the latest version using [`az upgrade`](/cli/azure/update-azure-cli#manual-update). -
-### Limitations
-
-When you use Planned Maintenance, the following restrictions apply:
--- AKS reserves the right to break these windows for unplanned/reactive maintenance operations that are urgent or critical.-- Currently, performing maintenance operations are considered *best-effort only* and aren't guaranteed to occur within a specified window.-- Updates can't be blocked for more than seven days.-
-### Install aks-preview CLI extension
-
-You also need the *aks-preview* Azure CLI extension version 0.5.124 or later. Install the *aks-preview* Azure CLI extension by using the [az extension add][az-extension-add] command. Or install any available updates by using the [az extension update][az-extension-update] command.
-
-```azurecli-interactive
-# Install the aks-preview extension
-az extension add --name aks-preview
-
-# Update the extension to make sure you have the latest version installed
-az extension update --name aks-preview
-```
- ## Creating a maintenance window
-To create a maintenance window, you can use the `az aks maintenanceconfiguration add` command using the `--name` value `default`, `aksManagedAutoUpgradeSchedule`, or `aksManagedNodeOSUpgradeSchedule`. The name value should reflect the desired configuration type. Using any other name will cause your maintenance window not to run.
+To create a maintenance window, use the `az aks maintenanceconfiguration add` command with the `--name` value `default`, `aksManagedAutoUpgradeSchedule`, or `aksManagedNodeOSUpgradeSchedule`. The name value should reflect the desired configuration type. Using any other name prevents your maintenance window from running.
> [!NOTE] > When using auto-upgrade, to ensure proper functionality, use a maintenance window with a duration of four hours or more.
A `RelativeMonthly` schedule may look like *"every two months, on the last Monda
Valid values for `weekIndex` are `First`, `Second`, `Third`, `Fourth`, and `Last`.
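+
+For illustration, a `RelativeMonthly` window like the one described above might be created with the following sketch; the resource group, cluster name, and schedule values are placeholders:
+
+```azurecli-interactive
+az aks maintenanceconfiguration add \
+    --resource-group myResourceGroup \
+    --cluster-name myAKSCluster \
+    --name aksManagedAutoUpgradeSchedule \
+    --schedule-type RelativeMonthly \
+    --day-of-week Monday \
+    --week-index Last \
+    --interval-months 2 \
+    --duration 4 \
+    --start-time 01:00
+```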
+### Things to note
+
+When you use Planned Maintenance, the following restrictions apply:
+
+- AKS reserves the right to break these windows for unplanned, reactive maintenance operations that are urgent or critical. These maintenance operations may even run during the `notAllowedTime` or `notAllowedDates` periods defined in your configuration.
+- Maintenance operations are considered *best-effort only* and aren't guaranteed to occur within a specified window.
+ ## Add a maintenance window configuration with Azure CLI The following example shows a command to add a new `default` configuration that schedules maintenance to run from 1:00am to 2:00am every Monday:
To delete a certain maintenance configuration window in your AKS Cluster, use th
```azurecli-interactive az aks maintenanceconfiguration delete -g myResourceGroup --cluster-name myAKSCluster --name autoUpgradeSchedule ```
+## Frequently asked questions
+
+* How can I check the existing maintenance configurations in my cluster?
+
+ Use the `az aks maintenanceconfiguration show` command.
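+
+    For example, assuming placeholder resource and configuration names:
+
+    ```azurecli-interactive
+    az aks maintenanceconfiguration show --resource-group myResourceGroup --cluster-name myAKSCluster --name aksManagedAutoUpgradeSchedule
+    ```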
+
+* Can reactive, unplanned maintenance happen during the `notAllowedTime` or `notAllowedDates` periods too?
+
+ Yes, AKS reserves the right to break these windows for unplanned, reactive maintenance operations that are urgent or critical.
+
+* How can you tell if a maintenance event occurred?
+
+    For releases, check your cluster's region and look up release information in [weekly releases][release-tracker] to validate whether it matches your maintenance schedule. To view the status of your auto-upgrades, look up the [activity logs][monitor-aks] on your cluster. You can also look up specific upgrade-related events, as mentioned in [Upgrade an AKS cluster][aks-upgrade]. AKS also emits upgrade-related Event Grid events. To learn more, see [AKS as an Event Grid source][aks-eventgrid].
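+
+    For example, a sketch of listing recent activity-log events for your cluster's resource group, with placeholder names and covering the last seven days:
+
+    ```azurecli-interactive
+    az monitor activity-log list --resource-group myResourceGroup --offset 7d --output table
+    ```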
+
+* Can you use more than one maintenance configuration at the same time?
+
+    Yes, you can run all three configurations, that is, `default`, `aksManagedAutoUpgradeSchedule`, and `aksManagedNodeOSUpgradeSchedule`, simultaneously. If the windows overlap, AKS decides the running order.
+
+* Are there any best practices for the maintenance configurations?
+
+    We recommend setting the [node OS security updates][node-image-auto-upgrade] schedule to a weekly cadence if you're using the `NodeImage` channel, because a new node image ships every week, and to a daily cadence if you opt in to the `SecurityPatch` channel to receive daily security updates. Set the [auto-upgrade][auto-upgrade] schedule to a monthly cadence to stay current with the Kubernetes N-2 [support policy][aks-support-policy].
## Next steps
az aks maintenanceconfiguration delete -g myResourceGroup --cluster-name myAKSCl
[auto-upgrade]: auto-upgrade-cluster.md [node-image-auto-upgrade]: auto-upgrade-node-image.md [pm-weekly]: ./aks-planned-maintenance-weekly-releases.md
+[monitor-aks]: monitor-aks-reference.md
+[aks-eventgrid]: quickstart-event-grid.md
+[aks-support-policy]: support-policies.md
aks Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/policy-reference.md
Title: Built-in policy definitions for Azure Kubernetes Service description: Lists Azure Policy built-in policy definitions for Azure Kubernetes Service. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2023 Last updated : 08/30/2023
aks Private Clusters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/private-clusters.md
Title: Create a private Azure Kubernetes Service (AKS) cluster
description: Learn how to create a private Azure Kubernetes Service (AKS) cluster Last updated 06/29/2023-+ # Create a private Azure Kubernetes Service (AKS) cluster
aks Rdp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/rdp.md
If you need more troubleshooting data, you can [view the Kubernetes primary node
[install-azure-cli]: /cli/azure/install-azure-cli [install-azure-powershell]: /powershell/azure/install-az-ps [ssh-steps]: ssh.md
-[view-primary-logs]: ../azure-monitor/containers/container-insights-log-query.md#resource-logs
+[view-primary-logs]: monitor-aks.md#resource-logs
[azure-bastion]: ../bastion/bastion-overview.md
aks Scale Down Mode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/scale-down-mode.md
Title: Use Scale-down Mode for your Azure Kubernetes Service (AKS) cluster
description: Learn how to use Scale-down Mode in Azure Kubernetes Service (AKS). Previously updated : 09/01/2021 Last updated : 08/21/2023 # Use Scale-down Mode to delete/deallocate nodes in Azure Kubernetes Service (AKS)
-By default, scale-up operations performed manually or by the cluster autoscaler require the allocation and provisioning of new nodes, and scale-down operations delete nodes. Scale-down Mode allows you to decide whether you would like to delete or deallocate the nodes in your Azure Kubernetes Service (AKS) cluster upon scaling down.
+By default, scale-up operations performed manually or by the cluster autoscaler require the allocation and provisioning of new nodes, and scale-down operations delete nodes. Scale-down Mode allows you to decide whether you would like to delete or deallocate the nodes in your Azure Kubernetes Service (AKS) cluster upon scaling down.
When an Azure VM is in the `Stopped` (deallocated) state, you will not be charged for the VM compute resources. However, you'll still need to pay for any OS and data storage disks attached to the VM. This also means that the container images will be preserved on those nodes. For more information, see [States and billing of Azure Virtual Machines][state-billing-azure-vm]. This behavior allows for faster operation speeds, as your deployment uses cached images. Scale-down Mode removes the need to pre-provision nodes and pre-pull container images, saving you compute cost.
This article assumes that you have an existing AKS cluster. If you need an AKS c
### Limitations -- [Ephemeral OS][ephemeral-os] disks aren't supported. Be sure to specify managed OS disks via `--node-osdisk-type Managed` when creating a cluster or node pool.
+- [Ephemeral OS][ephemeral-os] disks aren't supported. Be sure to specify managed OS disks by including the argument `--node-osdisk-type Managed` when creating a cluster or node pool.
> [!NOTE] > Previously, while Scale-down Mode was in preview, [spot node pools][spot-node-pool] were unsupported. Now that Scale-down Mode is Generally Available, this limitation no longer applies. ## Using Scale-down Mode to deallocate nodes on scale-down
-By setting `--scale-down-mode Deallocate`, nodes will be deallocated during a scale-down of your cluster/node pool. All deallocated nodes are stopped. When your cluster/node pool needs to scale up, the deallocated nodes will be started first before any new nodes are provisioned.
+By setting `--scale-down-mode Deallocate`, nodes will be deallocated during a scale-down of your cluster/node pool. All deallocated nodes are stopped. When your cluster/node pool needs to scale up, the deallocated nodes are started first before any new nodes are provisioned.
-In this example, we create a new node pool with 20 nodes and specify that upon scale-down, nodes are to be deallocated via `--scale-down-mode Deallocate`.
+In this example, we create a new node pool with 20 nodes and specify that upon scale-down, nodes are to be deallocated using the argument `--scale-down-mode Deallocate`.
```azurecli-interactive az aks nodepool add --node-count 20 --scale-down-mode Deallocate --node-osdisk-type Managed --max-pods 10 --name nodepool2 --cluster-name myAKSCluster --resource-group myResourceGroup
By scaling the node pool and changing the node count to 5, we'll deallocate 15 n
az aks nodepool scale --node-count 5 --name nodepool2 --cluster-name myAKSCluster --resource-group myResourceGroup ```
+To deallocate Windows nodes during scale-down, run the following command. The default behavior for Windows nodes is consistent with Linux nodes: unless you specify `--scale-down-mode Deallocate`, nodes are [deleted during scale-down](#using-scale-down-mode-to-delete-nodes-on-scale-down).
+
+```azurecli-interactive
+az aks nodepool add --node-count 20 --scale-down-mode Deallocate --os-type Windows --node-osdisk-type Managed --max-pods 10 --name npwin2 --cluster-name myAKSCluster --resource-group myResourceGroup
+```
+ ### Deleting previously deallocated nodes To delete your deallocated nodes, you can change your Scale-down Mode to `Delete` by setting `--scale-down-mode Delete`. The 15 deallocated nodes will now be deleted.
az aks nodepool update --scale-down-mode Delete --name nodepool2 --cluster-name
The default behavior of AKS without using Scale-down Mode is to delete your nodes when you scale-down your cluster. With Scale-down Mode, this behavior can be explicitly achieved by setting `--scale-down-mode Delete`.
-In this example, we create a new node pool and specify that our nodes will be deleted upon scale-down via `--scale-down-mode Delete`. Scaling operations will be handled via the cluster autoscaler.
+In this example, we create a new node pool and specify that our nodes will be deleted upon scale-down using the argument `--scale-down-mode Delete`. Scaling operations will be handled using the cluster autoscaler.
```azurecli-interactive az aks nodepool add --enable-cluster-autoscaler --min-count 1 --max-count 10 --max-pods 10 --node-osdisk-type Managed --scale-down-mode Delete --name nodepool3 --cluster-name myAKSCluster --resource-group myResourceGroup
az aks nodepool add --enable-cluster-autoscaler --min-count 1 --max-count 10 --m
[aks-quickstart-cli]: ./learn/quick-kubernetes-deploy-cli.md [aks-quickstart-portal]: ./learn/quick-kubernetes-deploy-portal.md [aks-quickstart-powershell]: ./learn/quick-kubernetes-deploy-powershell.md
-[aks-support-policies]: support-policies.md
-[aks-faq]: faq.md
-[az-extension-add]: /cli/azure/extension#az_extension_add
-[az-extension-update]: /cli/azure/extension#az_extension_update
-[az-feature-list]: /cli/azure/feature#az_feature_list
-[az-feature-register]: /cli/azure/feature#az_feature_register
-[az-aks-install-cli]: /cli/azure/aks#az_aks_install_cli
-[az-provider-register]: /cli/azure/provider#az_provider_register
[aks-upgrade]: upgrade-cluster.md [cluster-autoscaler]: cluster-autoscaler.md [ephemeral-os]: concepts-storage.md#ephemeral-os-disk
aks Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Kubernetes Service (AKS) description: Lists Azure Policy Regulatory Compliance controls available for Azure Kubernetes Service (AKS). These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/03/2023 Last updated : 08/25/2023
aks Support Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/support-policies.md
Title: Support policies for Azure Kubernetes Service (AKS) description: Learn about Azure Kubernetes Service (AKS) support policies, shared responsibility, and features that are in preview (or alpha or beta). Previously updated : 05/22/2023 Last updated : 08/28/2023 #Customer intent: As a cluster operator or developer, I want to understand what AKS components I need to manage, what components are managed by Microsoft (including security patches), and networking and preview features.
Microsoft doesn't provide technical support for the following scenarios:
* Third-party closed-source software. This software can include security scanning tools and networking devices or software. * Network customizations other than the ones listed in the [AKS documentation](./index.yml). * Custom or third-party CNI plugins used in [BYOCNI](use-byo-cni.md) mode.
-* Stand-by and proactive scenarios. Microsoft Support provides reactive support to help solve active issues in a timely and professional manner. However, standby or proactive support to help you eliminate operational risks, increase availability, and optimize performance are not covered. [Eligible customers](https://www.microsoft.com/unifiedsupport) can contact their account team to get nominated for Azure Event Management service[https://devblogs.microsoft.com/premier-developer/proactively-plan-for-your-critical-event-in-azure-with-enhanced-support-and-engineering-services/]. It's a paid service delivered by Microsoft support engineers that includes a proactive solution risk assessment and coverage during the event.
+* Stand-by and proactive scenarios. Microsoft Support provides reactive support to help solve active issues in a timely and professional manner. However, standby or proactive support to help you eliminate operational risks, increase availability, and optimize performance are not covered. [Eligible customers](https://www.microsoft.com/unifiedsupport) can contact their account team to get nominated for [Azure Event Management service](https://devblogs.microsoft.com/premier-developer/proactively-plan-for-your-critical-event-in-azure-with-enhanced-support-and-engineering-services/). It's a paid service delivered by Microsoft support engineers that includes a proactive solution risk assessment and coverage during the event.
## AKS support coverage for agent nodes
aks Supported Kubernetes Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/supported-kubernetes-versions.md
Title: Supported Kubernetes versions in Azure Kubernetes Service (AKS). description: Learn the Kubernetes version support policy and lifecycle of clusters in Azure Kubernetes Service (AKS). Previously updated : 11/21/2022 Last updated : 08/31/2023
For the past release history, see [Kubernetes history](https://github.com/kubern
| 1.25 | Aug 2022 | Oct 2022 | Dec 2022 | Dec 2023 | 1.26 | Dec 2022 | Feb 2023 | Apr 2023 | Mar 2024 | 1.27 | Apr 2023 | Jun 2023 | Jul 2023 | Jul 2024
-| 1.28 | Aug 2023 | Aug 2023 | Sep 2023 ||
+| 1.28 | Aug 2023 | Sep 2023 | Oct 2023 ||
### AKS Kubernetes release schedule Gantt chart If you prefer to see this information visually, here's a Gantt chart with all the current releases displayed: ## AKS Components Breaking Changes by Version
Note the following important changes to make before you upgrade to any of the av
|Kubernetes Version | AKS Managed Addons | AKS Components | OS components | Breaking Changes | Notes |--||-||-||
-| 1.25 | Azure policy 1.0.1<br>Metrics-Server 0.6.3<br>KEDA 2.9.3<br>Open Service Mesh 1.2.3<br>Core DNS V1.9.4<br>0.12.0</br>Overlay VPA 0.11.0<br>Azure-Keyvault-SecretsProvider 1.4.1<br>Ingress AppGateway 1.2.1<br>Eraser v1.1.1<br>Azure Workload Identity V1.1.1<br>ASC Defender 1.0.56<br>Azure Active Directory Pod Identity 1.8.13.6<br>GitOps 1.7.0<br>KMS 0.5.0| Cilium 1.12.8<br>CNI 1.4.44<br> Cluster Autoscaler 1.8.5.3<br> | OS Image Ubuntu 18.04 Cgroups V1 <br>ContainerD 1.7<br>| Ubuntu 22.04 by default with cgroupv2 and Overlay VPA 0.13.0 |CgroupsV2 - If you deploy Java applications with the JDK, prefer to use JDK 11.0.16 and later or JDK 15 and later, which fully support cgroup v2
-| 1.26 | Azure policy 1.0.1<br>Metrics-Server 0.6.3<br>KEDA 2.9.3<br>Open Service Mesh 1.2.3<br>Core DNS V1.9.4<br>0.12.0</br>Overlay VPA 0.11.0<br>Azure-Keyvault-SecretsProvider 1.4.1<br>Ingress AppGateway 1.2.1<br>Eraser v1.1.1<br>Azure Workload Identity V1.1.1<br>ASC Defender 1.0.56<br>Azure Active Directory Pod Identity 1.8.13.6<br>GitOps 1.7.0<br>KMS 0.5.0| Cilium 1.12.8<br>CNI 1.4.44<br> Cluster Autoscaler 1.8.5.3<br> | OS Image Ubuntu 22.04 Cgroups V2 <br>ContainerD 1.7<br>|No Breaking Changes |None
-| 1.27 | Azure policy 1.0.1<br>Metrics-Server 0.6.3<br>KEDA 2.10.0<br>Open Service Mesh 1.2.3<br>Core DNS V1.9.4<br>0.12.0</br>Overlay VPA 0.11.0<br>Azure-Keyvault-SecretsProvider 1.4.1<br>Ingress AppGateway 1.2.1<br>Eraser v1.1.1<br>Azure Workload Identity V1.1.1<br>ASC Defender 1.0.56<br>Azure Active Directory Pod Identity 1.8.13.6<br>GitOps 1.7.0<br>KMS 0.5.0|Cilium 1.12.8<br>CNI 1.4.44<br> Cluster Autoscaler 1.8.5.3<br> | OS Image Ubuntu 22.04 Cgroups V2 <br>ContainerD 1.7 for Linux and 1.6 for Windows<br>|Keda 2.10.0 |Because of Ubuntu 22.04 FIPS certification status, we'll switch AKS FIPS nodes from 18.04 to 20.04 from 1.27 onwards.
+| 1.25 | Azure policy 1.0.1<br>Metrics-Server 0.6.3<br>KEDA 2.9.3<br>Open Service Mesh 1.2.3<br>Core DNS V1.9.4<br>0.12.0</br>Overlay VPA 0.11.0<br>Azure-Keyvault-SecretsProvider 1.4.1<br>Ingress AppGateway 1.2.1<br>Eraser v1.1.1<br>Azure Workload identity v1.0.0<br>ASC Defender 1.0.56<br>Azure Active Directory Pod Identity 1.8.13.6<br>GitOps 1.7.0<br>KMS 0.5.0| Cilium 1.12.8<br>CNI 1.4.44<br> Cluster Autoscaler 1.8.5.3<br> | OS Image Ubuntu 18.04 Cgroups V1 <br>ContainerD 1.7<br>| Ubuntu 22.04 by default with cgroupv2 and Overlay VPA 0.13.0 |CgroupsV2 - If you deploy Java applications with the JDK, prefer to use JDK 11.0.16 and later or JDK 15 and later, which fully support cgroup v2
+| 1.26 | Azure policy 1.0.1<br>Metrics-Server 0.6.3<br>KEDA 2.9.3<br>Open Service Mesh 1.2.3<br>Core DNS V1.9.4<br>0.12.0</br>Overlay VPA 0.11.0<br>Azure-Keyvault-SecretsProvider 1.4.1<br>Ingress AppGateway 1.2.1<br>Eraser v1.1.1<br>Azure Workload identity v1.0.0<br>ASC Defender 1.0.56<br>Azure Active Directory Pod Identity 1.8.13.6<br>GitOps 1.7.0<br>KMS 0.5.0| Cilium 1.12.8<br>CNI 1.4.44<br> Cluster Autoscaler 1.8.5.3<br> | OS Image Ubuntu 22.04 Cgroups V2 <br>ContainerD 1.7<br>|No Breaking Changes |None
+| 1.27 | Azure policy 1.0.1<br>Metrics-Server 0.6.3<br>KEDA 2.10.0<br>Open Service Mesh 1.2.3<br>Core DNS V1.9.4<br>0.12.0</br>Overlay VPA 0.11.0<br>Azure-Keyvault-SecretsProvider 1.4.1<br>Ingress AppGateway 1.2.1<br>Eraser v1.1.1<br>Azure Workload identity v1.0.0<br>ASC Defender 1.0.56<br>Azure Active Directory Pod Identity 1.8.13.6<br>GitOps 1.7.0<br>KMS 0.5.0|Cilium 1.12.8<br>CNI 1.4.44<br> Cluster Autoscaler 1.8.5.3<br> | OS Image Ubuntu 22.04 Cgroups V2 <br>ContainerD 1.7 for Linux and 1.6 for Windows<br>|Keda 2.10.0 |Because of Ubuntu 22.04 FIPS certification status, we'll switch AKS FIPS nodes from 18.04 to 20.04 from 1.27 onwards.
## Alias minor version
aks Tutorial Kubernetes Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/tutorial-kubernetes-scale.md
Kubernetes supports [horizontal pod autoscaling][kubernetes-hpa] to adjust the n
* Check the version of your AKS cluster using the [`Get-AzAksCluster`][get-azakscluster] cmdlet. ```azurepowershell
- Get-AzAksCluster -ResourceGroupName myResourceGroup -Name myAKSCluster).KubernetesVersion
+ (Get-AzAksCluster -ResourceGroupName myResourceGroup -Name myAKSCluster).KubernetesVersion
```
aks Use Azure Ad Pod Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-azure-ad-pod-identity.md
Title: Use Azure Active Directory pod-managed identities in Azure Kubernetes Ser
description: Learn how to use Azure AD pod-managed identities in Azure Kubernetes Service (AKS) Previously updated : 04/28/2023 Last updated : 08/15/2023 # Use Azure Active Directory pod-managed identities in Azure Kubernetes Service (Preview)
Azure Active Directory (Azure AD) pod-managed identities use Kubernetes primitiv
> Kubernetes native capabilities to federate with any external identity providers on behalf of the > application. >
-> The open source Azure AD pod-managed identity (preview) in Azure Kubernetes Service has been deprecated as of 10/24/2022, and the project will be archived in Sept. 2023. For more information, see the [deprecation notice](https://github.com/Azure/aad-pod-identity#-announcement). The AKS Managed add-on begins deprecation in Sept. 2023.
+> The open source Azure AD pod-managed identity (preview) in Azure Kubernetes Service has been deprecated as of 10/24/2022, and the project will be archived in Sept. 2023. For more information, see the [deprecation notice](https://github.com/Azure/aad-pod-identity#-announcement). The AKS Managed add-on begins deprecation in Sept. 2024.
> > To disable the AKS Managed add-on, use the following command: `az feature unregister --namespace "Microsoft.ContainerService" --name "EnablePodIdentityPreview"`.
aks Use Group Managed Service Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-group-managed-service-accounts.md
Title: Enable Group Managed Service Accounts (GMSA) for your Windows Server nodes on your Azure Kubernetes Service (AKS) cluster
-description: Learn how to enable Group Managed Service Accounts (GMSA) for your Windows Server nodes on your Azure Kubernetes Service (AKS) cluster for securing your pods.
+description: Learn how to enable Group Managed Service Accounts (GMSA) for your Windows Server nodes on your Azure Kubernetes Service (AKS) cluster to secure your pods.
Previously updated : 11/01/2021 Last updated : 08/30/2023 # Enable Group Managed Service Accounts (GMSA) for your Windows Server nodes on your Azure Kubernetes Service (AKS) cluster
-[Group Managed Service Accounts (GMSA)][gmsa-overview] is a managed domain account for multiple servers that provides automatic password management, simplified service principal name (SPN) management and the ability to delegate the management to other administrators. AKS provides the ability to enable GMSA on your Windows Server nodes, which allows containers running on Windows Server nodes to integrate with and be managed by GMSA.
-
+[Group Managed Service Accounts (GMSA)][gmsa-overview] is a managed domain account for multiple servers that provides automatic password management, simplified service principal name (SPN) management, and the ability to delegate management to other administrators. With Azure Kubernetes Service (AKS), you can enable GMSA on your Windows Server nodes, which allows containers running on Windows Server nodes to integrate with and be managed by GMSA.
## Prerequisites
-Enabling GMSA with Windows Server nodes on AKS requires:
-
-* Kubernetes 1.19 or greater.
-* Azure CLI version 2.35.0 or greater
-* [Managed identities][aks-managed-id] with your AKS cluster.
+* Kubernetes 1.19 or greater. To check your version, see [Check for available upgrades](./upgrade-cluster.md#check-for-available-aks-cluster-upgrades). To upgrade your version, see [Upgrade AKS cluster](./upgrade-cluster.md#upgrade-an-aks-cluster).
+* Azure CLI version 2.35.0 or greater. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli).
+* [Managed identities][aks-managed-id] enabled on your AKS cluster.
* Permissions to create or update an Azure Key Vault. * Permissions to configure GMSA on Active Directory Domain Service or on-premises Active Directory. * The domain controller must have Active Directory Web Services enabled and must be reachable on port 9389 by the AKS cluster. > [!NOTE]
-> Microsoft also provides a purpose-built PowerShell module to configure gMSA on AKS. You can find more information on the module and how to use it in the article [gMSA on Azure Kubernetes Service](/virtualization/windowscontainers/manage-containers/gmsa-aks-ps-module).
+> Microsoft also provides a purpose-built PowerShell module to configure gMSA on AKS. For more information, see [gMSA on Azure Kubernetes Service](/virtualization/windowscontainers/manage-containers/gmsa-aks-ps-module).
## Configure GMSA on Active Directory domain controller
-To use GMSA with AKS, you need both GMSA and a standard domain user credential to access the GMSA credential configured on your domain controller. To configure GMSA on your domain controller, see [Getting Started with Group Managed Service Accounts][gmsa-getting-started]. For the standard domain user credential, you can use an existing user or create a new one, as long as it has access to the GMSA credential.
+To use GMSA with AKS, you need a standard domain user credential to access the GMSA credential configured on your domain controller. To configure GMSA on your domain controller, see [Get started with Group Managed Service Accounts][gmsa-getting-started]. For the standard domain user credential, you can use an existing user or create a new one, as long as it has access to the GMSA credential.
> [!IMPORTANT]
-> You must use either Active Directory Domain Service or on-prem Active Directory. At this time, you can't use Azure Active Directory to configure GMSA with an AKS cluster.
+> You must use either Active Directory Domain Service or on-premises Active Directory. At this time, you can't use Azure Active Directory to configure GMSA with an AKS cluster.
## Store the standard domain user credentials in Azure Key Vault
-Your AKS cluster uses the standard domain user credentials to access the GMSA credentials from the domain controller. To provide secure access to those credentials for the AKS cluster, those credentials should be stored in Azure Key Vault. You can create a new key vault or use an existing key vault.
+Your AKS cluster uses the standard domain user credentials to access the GMSA credentials from the domain controller. To provide secure access to those credentials for the AKS cluster, you should store them in Azure Key Vault.
-Use `az keyvault secret set` to store the standard domain user credential as a secret in your key vault. The following example stores the domain user credential with the key *GMSADomainUserCred* in the *MyAKSGMSAVault* key vault. You should replace the parameters with your own key vault, key, and domain user credential.
+1. If you don't already have an Azure key vault, create one using the [`az keyvault create`][az-keyvault-create] command.
-```azurecli
-az keyvault secret set --vault-name MyAKSGMSAVault --name "GMSADomainUserCred" --value "$Domain\\$DomainUsername:$DomainUserPassword"
-```
+ ```azurecli-interactive
+ az keyvault create --resource-group myResourceGroup --name myGMSAVault
+ ```
-> [!NOTE]
-> Use the Fully Qualified Domain Name for the Domain rather than the Partially Qualified Domain Name that may be used on internal networks.
->
-> The above command escapes the `value` parameter for running the Azure CLI on a Linux shell. When running the Azure CLI command on Windows PowerShell, you don't need to escape characters in the `value` parameter.
+2. Store the standard domain user credential as a secret in your key vault using the [`az keyvault secret set`][az-keyvault-secret-set] command. The following example stores the domain user credential with the key *GMSADomainUserCred* in the *myGMSAVault* key vault.
+ ```azurecli-interactive
+ az keyvault secret set --vault-name myGMSAVault --name "GMSADomainUserCred" --value "$Domain\\$DomainUsername:$DomainUserPassword"
+ ```
-## Optional: Use a custom VNET with custom DNS
+ > [!NOTE]
+ > Make sure to use the Fully Qualified Domain Name for the domain.
-Your domain controller needs to be configured through DNS so it's reachable by the AKS cluster. You can configure your network and DNS outside of your AKS cluster to allow your cluster to access the domain controller. Alternatively, you can configure a custom VNET with a custom DNS using Azure CNI with your AKS cluster to provide access to your domain controller. For more information, see [Configure Azure CNI networking in Azure Kubernetes Service (AKS)][aks-cni].
+### Optional: Use a custom VNet with custom DNS
-## Optional: Configure more than one DNS server
+You need to configure your domain controller through DNS so it's reachable by the AKS cluster. You can configure your network and DNS outside of your AKS cluster to allow your cluster to access the domain controller. Alternatively, you can use Azure CNI to configure a custom VNet with a custom DNS on your AKS cluster to provide access to your domain controller. For more information, see [Configure Azure CNI networking in Azure Kubernetes Service (AKS)][aks-cni].
-If you want to configure more than one DNS server for Windows GMSA in your AKS cluster, don't specify `--gmsa-dns-server`or `v--gmsa-root-domain-name`. Instead, you can add multiple DNS servers in the vnet by selecting Custom DNS and adding the DNS servers
+### Optional: Configure more than one DNS server
-## Optional: Use your own kubelet identity for your cluster
+If you want to configure more than one DNS server for Windows GMSA in your AKS cluster, don't specify `--gmsa-dns-server` or `--gmsa-root-domain-name`. Instead, you can add multiple DNS servers in the VNet by selecting *Custom DNS* and adding the DNS servers.
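+
+For illustration, a sketch of adding multiple DNS servers to an existing VNet with the Azure CLI; the VNet name and IP addresses are placeholders:
+
+```azurecli-interactive
+az network vnet update --resource-group myResourceGroup --name myVNet --dns-servers 10.0.0.4 10.0.0.5
+```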
-To provide the AKS cluster access to your key vault, the cluster kubelet identity needs access to your key vault. By default, when you create a cluster with managed identity enabled, a kubelet identity is automatically created. You can grant access to your key vault for this identity after cluster creation, which is done in a later step.
+### Optional: Use your own kubelet identity for your cluster
-Alternatively, you can create your own identity and use this identity during cluster creation in a later step. For more information on the provided managed identities, see [Summary of managed identities][aks-managed-id-kubelet].
+To provide the AKS cluster access to your key vault, the cluster kubelet identity needs access to your key vault. When you create a cluster with managed identity enabled, a kubelet identity is automatically created by default.
-To create your own identity, use `az identity create` to create an identity. The following example creates a *myIdentity* identity in the *myResourceGroup* resource group.
+You can either [grant access to your key vault for the identity after cluster creation](#grant-access-to-your-key-vault-for-the-kubelet-identity) or create your own identity to use before cluster creation using the following steps:
-```azurecli
-az identity create --name myIdentity --resource-group myResourceGroup
-```
+1. Create a kubelet identity using the [`az identity create`][az-identity-create] command.
-You can grant your kubelet identity access to your key vault before or after you create your cluster. The following example uses `az identity list` to get the ID of the identity and set it to *MANAGED_ID* then uses `az keyvault set-policy` to grant the identity access to the *MyAKSGMSAVault* key vault.
+ ```azurecli-interactive
+ az identity create --name myIdentity --resource-group myResourceGroup
+ ```
-```azurecli
-MANAGED_ID=$(az identity list --query "[].id" -o tsv)
-az keyvault set-policy --name "MyAKSGMSAVault" --object-id $MANAGED_ID --secret-permissions get
-```
+2. Get the ID of the identity using the [`az identity list`][az-identity-list] command and set it to a variable named *MANAGED_ID*.
-## Create AKS cluster
+ ```azurecli-interactive
+ MANAGED_ID=$(az identity list --query "[].id" -o tsv)
+ ```
-To use GMSA with your AKS cluster, use the *enable-windows-gmsa*, *gmsa-dns-server*, *gmsa-root-domain-name*, and *enable-managed-identity* parameters.
+3. Grant the identity access to your key vault using the [`az keyvault set-policy`][az-keyvault-set-policy] command.
-> [!NOTE]
-> When creating a cluster with Windows Server node pools, you need to specify the administrator credentials when creating the cluster. The following commands prompt you for a username and set it WINDOWS_USERNAME for use in a later command (remember that the commands in this article are entered into a BASH shell).
->
-> ```azurecli
-> echo "Please enter the username to use as administrator credentials for Windows Server nodes on your cluster: " && read WINDOWS_USERNAME
-> ```
+ ```azurecli-interactive
+ az keyvault set-policy --name "myGMSAVault" --object-id $MANAGED_ID --secret-permissions get
+ ```
-Use `az aks create` to create an AKS cluster then `az aks nodepool add` to add a Windows Server node pool. The following example creates a *MyAKS* cluster in the *MyResourceGroup* resource group, enables GMSA, and then adds a new node pool named *npwin*.
+## Enable GMSA on a new AKS cluster
-> [!NOTE]
-> If you are using a custom vnet, you also need to specify the id of the vnet using *vnet-subnet-id* and may need to also add *docker-bridge-address*, *dns-service-ip*, and *service-cidr* depending on your configuration.
->
-> If you created your own identity for the kubelet identity, use the *assign-kubelet-identity* parameter to specify your identity.
-
-```azurecli
-DNS_SERVER=<IP address of DNS server>
-ROOT_DOMAIN_NAME="contoso.com"
-
-az aks create \
- --resource-group MyResourceGroup \
- --name MyAKS \
- --vm-set-type VirtualMachineScaleSets \
- --network-plugin azure \
- --load-balancer-sku standard \
- --windows-admin-username $WINDOWS_USERNAME \
- --enable-managed-identity \
- --enable-windows-gmsa \
- --gmsa-dns-server $DNS_SERVER \
- --gmsa-root-domain-name $ROOT_DOMAIN_NAME
-
-az aks nodepool add \
- --resource-group myResourceGroup \
- --cluster-name myAKS \
- --os-type Windows \
- --name npwin \
- --node-count 1
-```
-
-You can also enable GMSA on existing clusters that already have Windows Server nodes and managed identities enabled using `az aks update`. For example:
-
-```azurecli
-az aks update \
- --resource-group MyResourceGroup \
- --name MyAKS \
- --enable-windows-gmsa \
- --gmsa-dns-server $DNS_SERVER \
- --gmsa-root-domain-name $ROOT_DOMAIN_NAME
-```
-
-After creating your cluster or updating your cluster, use `az keyvault set-policy` to grant the identity access to your key vault. The following example grants the kubelet identity created by the cluster access to the *MyAKSGMSAVault* key vault.
+1. Create administrator credentials to use during cluster creation. The following commands prompt you for a username and set it to *WINDOWS_USERNAME* for use in a later command (the commands in this article are entered into a BASH shell).
+
+ ```azurecli-interactive
+ echo "Please enter the username to use as administrator credentials for Windows Server nodes on your cluster: " && read WINDOWS_USERNAME
+ ```
+
+2. Create an AKS cluster using the [`az aks create`][az-aks-create] command with the following parameters:
+
+ * `--enable-managed-identity`: Enables managed identity for the cluster.
+ * `--enable-windows-gmsa`: Enables GMSA for the cluster.
+ * `--gmsa-dns-server`: The IP address of the DNS server.
+ * `--gmsa-root-domain-name`: The root domain name of the DNS server.
+
+ ```azurecli-interactive
+ DNS_SERVER=<IP address of DNS server>
+ ROOT_DOMAIN_NAME="contoso.com"
+
+ az aks create \
+ --resource-group myResourceGroup \
+ --name myAKSCluster \
+ --vm-set-type VirtualMachineScaleSets \
+ --network-plugin azure \
+ --load-balancer-sku standard \
+ --windows-admin-username $WINDOWS_USERNAME \
+ --enable-managed-identity \
+ --enable-windows-gmsa \
+ --gmsa-dns-server $DNS_SERVER \
+ --gmsa-root-domain-name $ROOT_DOMAIN_NAME
+ ```
+
+ > [!NOTE]
+ >
+   > * If you're using a custom VNet, you need to specify the subnet ID using the `--vnet-subnet-id` parameter, and you may need to also add the `--docker-bridge-address`, `--dns-service-ip`, and `--service-cidr` parameters depending on your configuration.
+   >
+   > * If you created your own identity for the kubelet identity, use the `--assign-kubelet-identity` parameter to specify your identity.
+
+3. Add a Windows Server node pool using the [`az aks nodepool add`][az-aks-nodepool-add] command.
+
+ ```azurecli-interactive
+ az aks nodepool add \
+ --resource-group myResourceGroup \
+ --cluster-name myAKSCluster \
+ --os-type Windows \
+ --name npwin \
+ --node-count 1
+ ```
+
+### Enable GMSA on existing cluster
+
+* Enable GMSA on an existing cluster with Windows Server nodes and managed identities enabled using the [`az aks update`][az-aks-update] command.
+
+ ```azurecli-interactive
+ az aks update \
+ --resource-group myResourceGroup \
+ --name myAKSCluster \
+ --enable-windows-gmsa \
+ --gmsa-dns-server $DNS_SERVER \
+ --gmsa-root-domain-name $ROOT_DOMAIN_NAME
+ ```
+
+## Grant access to your key vault for the kubelet identity
> [!NOTE]
-> If you provided your own identity for the kubelet identity, skip this step.
+> Skip this step if you provided your own identity for the kubelet identity.
-```azurecli
-MANAGED_ID=$(az aks show -g MyResourceGroup -n MyAKS --query "identityProfile.kubeletidentity.objectId" -o tsv)
+* Grant access to your key vault for the kubelet identity using the [`az keyvault set-policy`][az-keyvault-set-policy] command.
-az keyvault set-policy --name "MyAKSGMSAVault" --object-id $MANAGED_ID --secret-permissions get
-```
+ ```azurecli-interactive
+ MANAGED_ID=$(az aks show -g myResourceGroup -n myAKSCluster --query "identityProfile.kubeletidentity.objectId" -o tsv)
+ az keyvault set-policy --name "myGMSAVault" --object-id $MANAGED_ID --secret-permissions get
+ ```
## Install GMSA cred spec
-To configure `kubectl` to connect to your Kubernetes cluster, use the [az aks get-credentials][] command. The following example gets credentials for the AKS cluster named *MyAKS* in the *MyResourceGroup*:
-
-```azurecli
-az aks get-credentials --resource-group MyResourceGroup --name MyAKS
-```
-
-Create a *gmsa-spec.yaml* with the following, replacing the placeholders with your own values.
-
-```yml
-apiVersion: windows.k8s.io/v1alpha1
-kind: GMSACredentialSpec
-metadata:
- name: aks-gmsa-spec # This name can be changed, but it will be used as a reference in the pod spec
-credspec:
- ActiveDirectoryConfig:
- GroupManagedServiceAccounts:
- - Name: $GMSA_ACCOUNT_USERNAME
- Scope: $NETBIOS_DOMAIN_NAME
- - Name: $GMSA_ACCOUNT_USERNAME
- Scope: $DNS_DOMAIN_NAME
- HostAccountConfig:
- PluginGUID: '{CCC2A336-D7F3-4818-A213-272B7924213E}'
- PortableCcgVersion: "1"
- PluginInput: "ObjectId=$MANAGED_ID;SecretUri=$SECRET_URI" # SECRET_URI takes the form https://$akvName.vault.azure.net/secrets/$akvSecretName
- CmsPlugins:
- - ActiveDirectory
- DomainJoinConfig:
- DnsName: $DNS_DOMAIN_NAME
- DnsTreeName: $DNS_ROOT_DOMAIN_NAME
- Guid: $AD_DOMAIN_OBJECT_GUID
- MachineAccountName: $GMSA_ACCOUNT_USERNAME
- NetBiosName: $NETBIOS_DOMAIN_NAME
- Sid: $GMSA_SID
-```
---
-Create a *gmsa-role.yaml* with the following.
-
-```yml
-#Create the Role to read the credspec
-apiVersion: rbac.authorization.k8s.io/v1
-kind: ClusterRole
-metadata:
- name: aks-gmsa-role
-rules:
-- apiGroups: ["windows.k8s.io"]
- resources: ["gmsacredentialspecs"]
- verbs: ["use"]
- resourceNames: ["aks-gmsa-spec"]
-```
-
-Create a *gmsa-role-binding.yaml* with the following.
-
-```yml
-apiVersion: rbac.authorization.k8s.io/v1
-kind: RoleBinding
-metadata:
- name: allow-default-svc-account-read-on-aks-gmsa-spec
- namespace: default
-subjects:
-- kind: ServiceAccount
- name: default
- namespace: default
-roleRef:
- kind: ClusterRole
- name: aks-gmsa-role
- apiGroup: rbac.authorization.k8s.io
-```
-
-Use `kubectl apply` to apply the changes from *gmsa-spec.yaml*, *gmsa-role.yaml*, and *gmsa-role-binding.yaml*.
-
-```azurecli
-kubectl apply -f gmsa-spec.yaml
-kubectl apply -f gmsa-role.yaml
-kubectl apply -f gmsa-role-binding.yaml
-```
-
-## Verify GMSA is installed and working
-
-Create a *gmsa-demo.yaml* with the following.
-
-```yml
-
-kind: ConfigMap
-apiVersion: v1
-metadata:
- labels:
- app: gmsa-demo
- name: gmsa-demo
- namespace: default
-data:
- run.ps1: |
- $ErrorActionPreference = "Stop"
-
- Write-Output "Configuring IIS with authentication."
-
- # Add required Windows features, since they are not installed by default.
- Install-WindowsFeature "Web-Windows-Auth", "Web-Asp-Net45"
-
- # Create simple ASP.NET page.
- New-Item -Force -ItemType Directory -Path 'C:\inetpub\wwwroot\app'
- Set-Content -Path 'C:\inetpub\wwwroot\app\default.aspx' -Value 'Authenticated as <B><%=User.Identity.Name%></B>, Type of Authentication: <B><%=User.Identity.AuthenticationType%></B>'
-
- # Configure IIS with authentication.
- Import-Module IISAdministration
- Start-IISCommitDelay
- (Get-IISConfigSection -SectionPath 'system.webServer/security/authentication/windowsAuthentication').Attributes['enabled'].value = $true
- (Get-IISConfigSection -SectionPath 'system.webServer/security/authentication/anonymousAuthentication').Attributes['enabled'].value = $false
- (Get-IISServerManager).Sites[0].Applications[0].VirtualDirectories[0].PhysicalPath = 'C:\inetpub\wwwroot\app'
- Stop-IISCommitDelay
-
- Write-Output "IIS with authentication is ready."
-
- C:\ServiceMonitor.exe w3svc
-
-apiVersion: apps/v1
-kind: Deployment
-metadata:
- labels:
- app: gmsa-demo
- name: gmsa-demo
- namespace: default
-spec:
- replicas: 1
- selector:
- matchLabels:
- app: gmsa-demo
- template:
+1. Configure `kubectl` to connect to your Kubernetes cluster using the [`az aks get-credentials`][az-aks-get-credentials] command.
+
+ ```azurecli-interactive
+ az aks get-credentials --resource-group myResourceGroup --name myAKSCluster
+ ```
+
+2. Create a new YAML named *gmsa-spec.yaml* and paste in the following YAML. Make sure you replace the placeholders with your own values.
+
+ ```YAML
+ apiVersion: windows.k8s.io/v1alpha1
+ kind: GMSACredentialSpec
+ metadata:
+ name: aks-gmsa-spec # This name can be changed, but it will be used as a reference in the pod spec
+ credspec:
+ ActiveDirectoryConfig:
+ GroupManagedServiceAccounts:
+ - Name: $GMSA_ACCOUNT_USERNAME
+ Scope: $NETBIOS_DOMAIN_NAME
+ - Name: $GMSA_ACCOUNT_USERNAME
+ Scope: $DNS_DOMAIN_NAME
+ HostAccountConfig:
+ PluginGUID: '{CCC2A336-D7F3-4818-A213-272B7924213E}'
+ PortableCcgVersion: "1"
+ PluginInput: "ObjectId=$MANAGED_ID;SecretUri=$SECRET_URI" # SECRET_URI takes the form https://$akvName.vault.azure.net/secrets/$akvSecretName
+ CmsPlugins:
+ - ActiveDirectory
+ DomainJoinConfig:
+ DnsName: $DNS_DOMAIN_NAME
+ DnsTreeName: $DNS_ROOT_DOMAIN_NAME
+ Guid: $AD_DOMAIN_OBJECT_GUID
+ MachineAccountName: $GMSA_ACCOUNT_USERNAME
+ NetBiosName: $NETBIOS_DOMAIN_NAME
+ Sid: $GMSA_SID
+ ```
+
+3. Create a new YAML named *gmsa-role.yaml* and paste in the following YAML.
+
+ ```YAML
+ apiVersion: rbac.authorization.k8s.io/v1
+ kind: ClusterRole
+ metadata:
+ name: aks-gmsa-role
+ rules:
+ - apiGroups: ["windows.k8s.io"]
+ resources: ["gmsacredentialspecs"]
+ verbs: ["use"]
+ resourceNames: ["aks-gmsa-spec"]
+ ```
+
+4. Create a new YAML named *gmsa-role-binding.yaml* and paste in the following YAML.
+
+ ```YAML
+ apiVersion: rbac.authorization.k8s.io/v1
+ kind: RoleBinding
+ metadata:
+ name: allow-default-svc-account-read-on-aks-gmsa-spec
+ namespace: default
+ subjects:
+ - kind: ServiceAccount
+ name: default
+ namespace: default
+ roleRef:
+ kind: ClusterRole
+ name: aks-gmsa-role
+ apiGroup: rbac.authorization.k8s.io
+ ```
+
+5. Apply the changes from *gmsa-spec.yaml*, *gmsa-role.yaml*, and *gmsa-role-binding.yaml* using the `kubectl apply` command.
+
+ ```azurecli-interactive
+ kubectl apply -f gmsa-spec.yaml
+ kubectl apply -f gmsa-role.yaml
+ kubectl apply -f gmsa-role-binding.yaml
+ ```
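To double-check that the cluster accepted the credential spec before moving on, you can query the custom resource directly. This is a quick sanity check, assuming the GMSA custom resource definition is installed (AKS installs it when the feature is enabled):

```azurecli-interactive
kubectl get gmsacredentialspecs
```

The output should include the `aks-gmsa-spec` object created from *gmsa-spec.yaml*.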
+
+## Verify GMSA installation
+
+1. Create a new YAML named *gmsa-demo.yaml* and paste in the following YAML.
+
+ ```YAML
+
+ kind: ConfigMap
+ apiVersion: v1
+ metadata:
+ labels:
+ app: gmsa-demo
+ name: gmsa-demo
+ namespace: default
+ data:
+ run.ps1: |
+ $ErrorActionPreference = "Stop"
+
+ Write-Output "Configuring IIS with authentication."
+
+ # Add required Windows features, since they are not installed by default.
+ Install-WindowsFeature "Web-Windows-Auth", "Web-Asp-Net45"
+
+ # Create simple ASP.NET page.
+ New-Item -Force -ItemType Directory -Path 'C:\inetpub\wwwroot\app'
+ Set-Content -Path 'C:\inetpub\wwwroot\app\default.aspx' -Value 'Authenticated as <B><%=User.Identity.Name%></B>, Type of Authentication: <B><%=User.Identity.AuthenticationType%></B>'
+
+ # Configure IIS with authentication.
+ Import-Module IISAdministration
+ Start-IISCommitDelay
+ (Get-IISConfigSection -SectionPath 'system.webServer/security/authentication/windowsAuthentication').Attributes['enabled'].value = $true
+ (Get-IISConfigSection -SectionPath 'system.webServer/security/authentication/anonymousAuthentication').Attributes['enabled'].value = $false
+ (Get-IISServerManager).Sites[0].Applications[0].VirtualDirectories[0].PhysicalPath = 'C:\inetpub\wwwroot\app'
+ Stop-IISCommitDelay
+
+ Write-Output "IIS with authentication is ready."
+
+ C:\ServiceMonitor.exe w3svc
+
+ ---
+ apiVersion: apps/v1
+ kind: Deployment
metadata:
  labels:
    app: gmsa-demo
+ name: gmsa-demo
+ namespace: default
spec:
- securityContext:
- windowsOptions:
- gmsaCredentialSpecName: aks-gmsa-spec
- containers:
- - name: iis
- image: mcr.microsoft.com/windows/servercore/iis:windowsservercore-ltsc2019
- imagePullPolicy: IfNotPresent
- command:
- - powershell
- args:
- - -File
- - /gmsa-demo/run.ps1
- volumeMounts:
- - name: gmsa-demo
- mountPath: /gmsa-demo
- volumes:
- - configMap:
- defaultMode: 420
- name: gmsa-demo
- name: gmsa-demo
- nodeSelector:
- kubernetes.io/os: windows
-
-apiVersion: v1
-kind: Service
-metadata:
- labels:
- app: gmsa-demo
- name: gmsa-demo
- namespace: default
-spec:
- ports:
- - port: 80
- targetPort: 80
- selector:
- app: gmsa-demo
- type: LoadBalancer
-```
+ replicas: 1
+ selector:
+ matchLabels:
+ app: gmsa-demo
+ template:
+ metadata:
+ labels:
+ app: gmsa-demo
+ spec:
+ securityContext:
+ windowsOptions:
+ gmsaCredentialSpecName: aks-gmsa-spec
+ containers:
+ - name: iis
+ image: mcr.microsoft.com/windows/servercore/iis:windowsservercore-ltsc2019
+ imagePullPolicy: IfNotPresent
+ command:
+ - powershell
+ args:
+ - -File
+ - /gmsa-demo/run.ps1
+ volumeMounts:
+ - name: gmsa-demo
+ mountPath: /gmsa-demo
+ volumes:
+ - configMap:
+ defaultMode: 420
+ name: gmsa-demo
+ name: gmsa-demo
+ nodeSelector:
+ kubernetes.io/os: windows
+
+ ---
+ apiVersion: v1
+ kind: Service
+ metadata:
+ labels:
+ app: gmsa-demo
+ name: gmsa-demo
+ namespace: default
+ spec:
+ ports:
+ - port: 80
+ targetPort: 80
+ selector:
+ app: gmsa-demo
+ type: LoadBalancer
+ ```
-Use `kubectl apply` to apply the changes from *gmsa-demo.yaml*
+2. Apply the changes from *gmsa-demo.yaml* using the `kubectl apply` command.
-```azurecli
-kubectl apply -f gmsa-demo.yaml
-```
+ ```azurecli-interactive
+ kubectl apply -f gmsa-demo.yaml
+ ```
-Use `kubectl get service` to display the IP address of the example application.
+3. Get the IP address of the sample application using the `kubectl get service` command.
-```console
-kubectl get service gmsa-demo --watch
-```
+ ```azurecli-interactive
+ kubectl get service gmsa-demo --watch
+ ```
-Initially the *EXTERNAL-IP* for the *gmsa-demo* service is shown as *pending*.
+ Initially, the `EXTERNAL-IP` for the `gmsa-demo` service shows as *pending*:
-```output
-NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
-gmsa-demo LoadBalancer 10.0.37.27 <pending> 80:30572/TCP 6s
-```
+ ```output
+ NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+ gmsa-demo LoadBalancer 10.0.37.27 <pending> 80:30572/TCP 6s
+ ```
-When the *EXTERNAL-IP* address changes from *pending* to an actual public IP address, use `CTRL-C` to stop the `kubectl` watch process. The following example output shows a valid public IP address assigned to the service:
+4. When the `EXTERNAL-IP` address changes from *pending* to an actual public IP address, use `CTRL-C` to stop the `kubectl` watch process.
-```output
-gmsa-demo LoadBalancer 10.0.37.27 EXTERNAL-IP 80:30572/TCP 2m
-```
+ The following example output shows a valid public IP address assigned to the service:
-To verify GMSA is working and configured correctly, open a web browser to the external IP address of *gmsa-demo* service. Authenticate with `$NETBIOS_DOMAIN_NAME\$AD_USERNAME` and password and confirm you see `Authenticated as $NETBIOS_DOMAIN_NAME\$AD_USERNAME, Type of Authentication: Negotiate` .
+ ```output
+ gmsa-demo LoadBalancer 10.0.37.27 EXTERNAL-IP 80:30572/TCP 2m
+ ```
+
+5. Open a web browser to the external IP address of the `gmsa-demo` service.
+6. Authenticate with the `$NETBIOS_DOMAIN_NAME\$AD_USERNAME` and password and confirm you see `Authenticated as $NETBIOS_DOMAIN_NAME\$AD_USERNAME, Type of Authentication: Negotiate`.
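As an alternative to the browser check, a command-line probe can exercise the same endpoint. The following is a sketch only: `<EXTERNAL-IP>` stands for the service IP from the earlier step, and `$AD_USER_PASSWORD` is a hypothetical variable holding the domain user's password.

```azurecli-interactive
# NTLM authentication against the IIS sample; placeholders are illustrative
curl --ntlm --user "$NETBIOS_DOMAIN_NAME\\$AD_USERNAME:$AD_USER_PASSWORD" http://<EXTERNAL-IP>/
```

A successful request returns the page body containing `Authenticated as ...`.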
## Troubleshooting

### No authentication is prompted when loading the page
-If the page loads, but you aren't prompted to authenticate, use `kubectl logs POD_NAME` to display the logs of your pod and verify you see *IIS with authentication is ready*.
+If the page loads, but you aren't prompted to authenticate, use the `kubectl logs POD_NAME` command to display the logs of your pod and verify you see *IIS with authentication is ready*.
> [!NOTE]
-> Windows containers won't show logs on kubectl by default. To enable Windows containers to show logs, you need to embed the Log Monitor tool on your Windows image. More information is available [here](https://github.com/microsoft/windows-container-tools).
+> Windows containers won't show logs on kubectl by default. To enable Windows containers to show logs, you need to embed the Log Monitor tool on your Windows image. For more information, see [Windows Container Tools](https://github.com/microsoft/windows-container-tools).
### Connection timeout when trying to load the page
-If you receive a connection timeout when trying to load the page, verify the sample app is running with `kubectl get pods --watch`. Sometimes the external IP address for the sample app service is available before the sample app pod is running.
+If you receive a connection timeout when trying to load the page, verify the sample app is running using the `kubectl get pods --watch` command. Sometimes the external IP address for the sample app service is available before the sample app pod is running.
### Pod fails to start and a *winapi error* shows in the pod events
-After running `kubectl get pods --watch` and waiting several minutes, if your pod doesn't start, run `kubectl describe pod POD_NAME`. If you see a *winapi error* in the pod events, this is likely an error in your GMSA cred spec configuration. Verify all the replacement values in *gmsa-spec.yaml* are correct, rerun `kubectl apply -f gmsa-spec.yaml`, and redeploy the sample application.
+If your pod doesn't start after running the `kubectl get pods --watch` command and waiting several minutes, use the `kubectl describe pod POD_NAME` command. If you see a *winapi error* in the pod events, it's likely an error in your GMSA cred spec configuration. Verify all the replacement values in *gmsa-spec.yaml* are correct, rerun `kubectl apply -f gmsa-spec.yaml`, and redeploy the sample application.
+
+## Next steps
+* Learn more about [Windows Server containers on AKS](./windows-faq.md).
+<!-- LINKS - internal -->
[aks-cni]: configure-azure-cni.md
[aks-managed-id]: use-managed-identity.md
-[aks-managed-id-kubelet]: use-managed-identity.md#summary-of-managed-identities
-[az aks get-credentials]: /cli/azure/aks#az_aks_get_credentials
-[az-extension-add]: /cli/azure/extension#az_extension_add
-[az-extension-update]: /cli/azure/extension#az_extension_update
-[az-feature-register]: /cli/azure/feature#az_feature_register
-[az-feature-list]: /cli/azure/feature#az_feature_list
-[az-provider-register]: /cli/azure/provider#az_provider_register
+[az-aks-get-credentials]: /cli/azure/aks#az_aks_get_credentials
[gmsa-getting-started]: /windows-server/security/group-managed-service-accounts/getting-started-with-group-managed-service-accounts
[gmsa-overview]: /windows-server/security/group-managed-service-accounts/group-managed-service-accounts-overview
-[rdp]: rdp.md
+[az-aks-create]: /cli/azure/aks#az_aks_create
+[az-aks-nodepool-add]: /cli/azure/aks/nodepool#az_aks_nodepool_add
+[az-aks-update]: /cli/azure/aks#az_aks_update
+[az-identity-create]: /cli/azure/identity#az_identity_create
+[az-identity-list]: /cli/azure/identity#az_identity_list
+[az-keyvault-create]: /cli/azure/keyvault#az_keyvault_create
+[az-keyvault-secret-set]: /cli/azure/keyvault/secret#az_keyvault_secret_set
+[az-keyvault-set-policy]: /cli/azure/keyvault#az_keyvault_set_policy
aks Use Pod Security Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-pod-security-policies.md
description: Learn how to control pod admissions using PodSecurityPolicy in Azur
Last updated 08/01/2023+ # Secure your cluster using pod security policies in Azure Kubernetes Service (AKS) (preview)
aks Windows Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/windows-faq.md
az aks nodepool add \
--name $NODEPOOL_NAME \
--max-pods 3
```
-For more details, see the [`--max-pods` documentation](https://learn.microsoft.com/cli/azure/aks/nodepool?view=azure-cli-latest#az-aks-nodepool-add:~:text=for%20system%20nodepool.-,%2D%2Dmax%2Dpods%20%2Dm,-The%20maximum%20number).
+For more details, see the [`--max-pods` documentation](/cli/azure/aks/nodepool?view=azure-cli-latest#az-aks-nodepool-add:~:text=for%20system%20nodepool.-,%2D%2Dmax%2Dpods%20%2Dm,-The%20maximum%20number).
## Why is there an unexpected user named "sshd" on my VM node?
aks Workload Identity Migrate From Pod Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/workload-identity-migrate-from-pod-identity.md
This section explains the migration options available depending on what version
For either scenario, you need to have the federated trust set up before you update your application to use the workload identity. The following are the minimum steps required:

- [Create a managed identity](#create-a-managed-identity) credential.
-- Associate the managed identity with the kubernetes service account already used for the pod-manged identity or [create a new Kubernetes service account](#create-kubernetes-service-account) and then associate it with the managed identity.
+- Associate the managed identity with the Kubernetes service account already used for the pod-managed identity or [create a new Kubernetes service account](#create-kubernetes-service-account) and then associate it with the managed identity.
- [Establish a federated trust relationship](#establish-federated-identity-credential-trust) between the managed identity and Azure AD.

### Migrate from latest version
aks Workload Identity Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/workload-identity-overview.md
Title: Use an Azure AD workload identities on Azure Kubernetes Service (AKS)
+ Title: Use an Azure AD workload identity on Azure Kubernetes Service (AKS)
description: Learn about Azure Active Directory workload identity for Azure Kubernetes Service (AKS) and how to migrate your application to authenticate using this identity. Previously updated : 05/23/2023 Last updated : 09/03/2023 # Use Azure AD workload identity with Azure Kubernetes Service (AKS)
This article helps you understand this new authentication feature, and reviews t
In the Azure Identity client libraries, choose one of the following approaches:

-- Use `DefaultAzureCredential`, which will attempt to use the `WorkloadIdentityCredential`.
+- Use `DefaultAzureCredential`, which will attempt to use the `WorkloadIdentityCredential`. &dagger;
- Create a `ChainedTokenCredential` instance that includes `WorkloadIdentityCredential`.
- Use `WorkloadIdentityCredential` directly.
-The following table provides the **minimum** package version required for each language's client library.
+The following table provides the **minimum** package version required for each language ecosystem's client library.
-| Language | Library | Minimum Version | Example |
-||-|--||
-| .NET | [Azure.Identity](/dotnet/api/overview/azure/identity-readme) | 1.9.0 | [Link](https://github.com/Azure/azure-workload-identity/tree/main/examples/azure-identity/dotnet) |
-| Go | [azidentity](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/azidentity) | 1.3.0 | [Link](https://github.com/Azure/azure-workload-identity/tree/main/examples/azure-identity/go) |
-| Java | [azure-identity](/java/api/overview/azure/identity-readme) | 1.9.0 | [Link](https://github.com/Azure/azure-workload-identity/tree/main/examples/azure-identity/java) |
-| JavaScript | [@azure/identity](/javascript/api/overview/azure/identity-readme) | 3.2.0 | [Link](https://github.com/Azure/azure-workload-identity/tree/main/examples/azure-identity/node) |
-| Python | [azure-identity](/python/api/overview/azure/identity-readme) | 1.13.0 | [Link](https://github.com/Azure/azure-workload-identity/tree/main/examples/azure-identity/python) |
+| Ecosystem | Library | Minimum version |
+|--||--|
+| .NET | [Azure.Identity](/dotnet/api/overview/azure/identity-readme) | 1.9.0 |
+| C++ | [azure-identity-cpp](https://github.com/Azure/azure-sdk-for-cpp/blob/main/sdk/identity/azure-identity/README.md) | 1.6.0-beta.1 |
+| Go | [azidentity](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/azidentity) | 1.3.0 |
+| Java | [azure-identity](/java/api/overview/azure/identity-readme) | 1.9.0 |
+| Node.js | [@azure/identity](/javascript/api/overview/azure/identity-readme) | 3.2.0 |
+| Python | [azure-identity](/python/api/overview/azure/identity-readme) | 1.13.0 |
+
+&dagger; In the C++ library, `WorkloadIdentityCredential` isn't part of the `DefaultAzureCredential` authentication flow.
+
+In the following code samples, the credential type will use the environment variables injected by the Azure Workload Identity mutating webhook to authenticate with Azure Key Vault.
+
+## [.NET](#tab/dotnet)
+
+```csharp
+using Azure.Identity;
+using Azure.Security.KeyVault.Secrets;
+
+string keyVaultUrl = Environment.GetEnvironmentVariable("KEYVAULT_URL");
+string secretName = Environment.GetEnvironmentVariable("SECRET_NAME");
+
+var client = new SecretClient(
+ new Uri(keyVaultUrl),
+ new DefaultAzureCredential());
+
+KeyVaultSecret secret = await client.GetSecretAsync(secretName);
+```
+
+## [C++](#tab/cpp)
+
+```cpp
+#include <cstdlib>
+#include <azure/identity.hpp>
+#include <azure/keyvault/secrets/secret_client.hpp>
+
+using namespace Azure::Identity;
+using namespace Azure::Security::KeyVault::Secrets;
+
+// * AZURE_TENANT_ID: Tenant ID for the Azure account.
+// * AZURE_CLIENT_ID: The client ID to authenticate the request.
+std::string GetTenantId() { return std::getenv("AZURE_TENANT_ID"); }
+std::string GetClientId() { return std::getenv("AZURE_CLIENT_ID"); }
+std::string GetTokenFilePath() { return std::getenv("AZURE_FEDERATED_TOKEN_FILE"); }
+
+int main()
+{
+ const char* keyVaultUrl = std::getenv("KEYVAULT_URL");
+ const char* secretName = std::getenv("SECRET_NAME");
+ auto credential = std::make_shared<WorkloadIdentityCredential>(
+ GetTenantId(), GetClientId(), GetTokenFilePath());
+
+ SecretClient client(keyVaultUrl, credential);
+ Secret secret = client.GetSecret(secretName).Value;
+
+ return 0;
+}
+```
+
+## [Go](#tab/go)
+
+```go
+package main
+
+import (
+ "context"
+ "os"
+
+ "github.com/Azure/azure-sdk-for-go/sdk/azidentity"
+ "github.com/Azure/azure-sdk-for-go/sdk/security/keyvault/azsecrets"
+ "k8s.io/klog/v2"
+)
+
+func main() {
+ keyVaultUrl := os.Getenv("KEYVAULT_URL")
+ secretName := os.Getenv("SECRET_NAME")
+
+ credential, err := azidentity.NewDefaultAzureCredential(nil)
+ if err != nil {
+ klog.Fatal(err)
+ }
+
+ client, err := azsecrets.NewClient(keyVaultUrl, credential, nil)
+ if err != nil {
+ klog.Fatal(err)
+ }
+
+ _, err = client.GetSecret(context.Background(), secretName, "", nil) // discard the result; the sample only checks the error, and an unused variable won't compile in Go
+ if err != nil {
+ klog.ErrorS(err, "failed to get secret", "keyvault", keyVaultUrl, "secretName", secretName)
+ os.Exit(1)
+ }
+}
+```
+
+## [Java](#tab/java)
+
+```java
+import java.util.Map;
+
+import com.azure.security.keyvault.secrets.SecretClient;
+import com.azure.security.keyvault.secrets.SecretClientBuilder;
+import com.azure.security.keyvault.secrets.models.KeyVaultSecret;
+import com.azure.identity.DefaultAzureCredentialBuilder;
+import com.azure.identity.DefaultAzureCredential;
+
+public class App {
+ public static void main(String[] args) {
+ Map<String, String> env = System.getenv();
+ String keyVaultUrl = env.get("KEYVAULT_URL");
+ String secretName = env.get("SECRET_NAME");
+
+ SecretClient client = new SecretClientBuilder()
+ .vaultUrl(keyVaultUrl)
+ .credential(new DefaultAzureCredentialBuilder().build())
+ .buildClient();
+ KeyVaultSecret secret = client.getSecret(secretName);
+ }
+}
+```
+
+## [Node.js](#tab/javascript)
+
+```javascript
+import { DefaultAzureCredential } from "@azure/identity";
+import { SecretClient } from "@azure/keyvault-secrets";
+
+const main = async () => {
+ const keyVaultUrl = process.env["KEYVAULT_URL"];
+ const secretName = process.env["SECRET_NAME"];
+
+ const credential = new DefaultAzureCredential();
+ const client = new SecretClient(keyVaultUrl, credential);
+
+ const secret = await client.getSecret(secretName);
+}
+
+main().catch((error) => {
+ console.error("An error occurred:", error);
+ process.exit(1);
+});
+```
+
+## [Python](#tab/python)
+
+```python
+import os
+
+from azure.keyvault.secrets import SecretClient
+from azure.identity import DefaultAzureCredential
+
+def main():
+ keyvault_url = os.getenv('KEYVAULT_URL', '')
+ secret_name = os.getenv('SECRET_NAME', '')
+
+ client = SecretClient(vault_url=keyvault_url, credential=DefaultAzureCredential())
+ secret = client.get_secret(secret_name)
+
+if __name__ == '__main__':
+ main()
+```
++

## Microsoft Authentication Library (MSAL)
-The following client libraries are the **minimum** version required
+The following table provides the **minimum** version required for each MSAL client library.
-| Language | Library | Image | Example | Has Windows |
+| Ecosystem | Library | Image | Example | Has Windows |
|--|--|-|-|-|
-| .NET | [microsoft-authentication-library-for-dotnet](https://github.com/AzureAD/microsoft-authentication-library-for-dotnet) | ghcr.io/azure/azure-workload-identity/msal-net:latest | [Link](https://github.com/Azure/azure-workload-identity/tree/main/examples/msal-net/akvdotnet) | Yes |
-| Go | [microsoft-authentication-library-for-go](https://github.com/AzureAD/microsoft-authentication-library-for-go) | ghcr.io/azure/azure-workload-identity/msal-go:latest | [Link](https://github.com/Azure/azure-workload-identity/tree/main/examples/msal-go) | Yes |
-| Java | [microsoft-authentication-library-for-java](https://github.com/AzureAD/microsoft-authentication-library-for-java) | ghcr.io/azure/azure-workload-identity/msal-java:latest | [Link](https://github.com/Azure/azure-workload-identity/tree/main/examples/msal-java) | No |
-| JavaScript | [microsoft-authentication-library-for-js](https://github.com/AzureAD/microsoft-authentication-library-for-js) | ghcr.io/azure/azure-workload-identity/msal-node:latest | [Link](https://github.com/Azure/azure-workload-identity/tree/main/examples/msal-node) | No |
-| Python | [microsoft-authentication-library-for-python](https://github.com/AzureAD/microsoft-authentication-library-for-python) | ghcr.io/azure/azure-workload-identity/msal-python:latest | [Link](https://github.com/Azure/azure-workload-identity/tree/main/examples/msal-python) | No |
+| .NET | [microsoft-authentication-library-for-dotnet](https://github.com/AzureAD/microsoft-authentication-library-for-dotnet) | `ghcr.io/azure/azure-workload-identity/msal-net:latest` | [Link](https://github.com/Azure/azure-workload-identity/tree/main/examples/msal-net/akvdotnet) | Yes |
+| Go | [microsoft-authentication-library-for-go](https://github.com/AzureAD/microsoft-authentication-library-for-go) | `ghcr.io/azure/azure-workload-identity/msal-go:latest` | [Link](https://github.com/Azure/azure-workload-identity/tree/main/examples/msal-go) | Yes |
+| Java | [microsoft-authentication-library-for-java](https://github.com/AzureAD/microsoft-authentication-library-for-java) | `ghcr.io/azure/azure-workload-identity/msal-java:latest` | [Link](https://github.com/Azure/azure-workload-identity/tree/main/examples/msal-java) | No |
+| JavaScript | [microsoft-authentication-library-for-js](https://github.com/AzureAD/microsoft-authentication-library-for-js) | `ghcr.io/azure/azure-workload-identity/msal-node:latest` | [Link](https://github.com/Azure/azure-workload-identity/tree/main/examples/msal-node) | No |
+| Python | [microsoft-authentication-library-for-python](https://github.com/AzureAD/microsoft-authentication-library-for-python) | `ghcr.io/azure/azure-workload-identity/msal-python:latest` | [Link](https://github.com/Azure/azure-workload-identity/tree/main/examples/msal-python) | No |
## Limitations

-- You can only have 20 federated identity credentials per managed identity.
+- You can only have [20 federated identity credentials][general-federated-identity-credential-considerations] per managed identity.
- It takes a few seconds for the federated identity credential to be propagated after being initially added.
-- [Virtual nodes][aks-virtual-nodes] add on, based on the open source project [Virtual Kubelet][virtual-kubelet], is not supported.
+- The [Virtual nodes][aks-virtual-nodes] add-on, based on the open source project [Virtual Kubelet][virtual-kubelet], isn't supported.
+- Creation of federated identity credentials isn't supported on user-assigned managed identities in these [regions][unsupported-regions-user-assigned-managed-identities].
## How it works
If you've used [Azure AD pod-managed identity][use-azure-ad-pod-identity], think
### Service account annotations
-All annotations are optional. If the annotation is not specified, the default value will be used.
+All annotations are optional. If the annotation isn't specified, the default value will be used.
|Annotation |Description |Default |
|--|--|--|
All annotations are optional. If the annotation is not specified, the default va
### Pod annotations
-All annotations are optional. If the annotation is not specified, the default value will be used.
+All annotations are optional. If the annotation isn't specified, the default value will be used.
|Annotation |Description |Default |
|--|--|--|
|`azure.workload.identity/service-account-token-expiration` |Represents the `expirationSeconds` field for the projected service account token. It's an optional field that you configure to prevent any downtime caused by errors during service account token refresh. Kubernetes service account token expiry isn't correlated with Azure AD tokens. Azure AD tokens expire in 24 hours after they're issued. <sup>1</sup> |3600<br> Supported range is 3600-86400. |
-|`azure.workload.identity/skip-containers` |Represents a semi-colon-separated list of containers to skip adding projected service account token volume. For example `container1;container2`. |By default, the projected service account token volume is added to all containers if the service account is labeled with `azure.workload.identity/use: true`. |
+|`azure.workload.identity/skip-containers` |Represents a semi-colon-separated list of containers to skip adding projected service account token volume. For example, `container1;container2`. |By default, the projected service account token volume is added to all containers if the service account is labeled with `azure.workload.identity/use: true`. |
|`azure.workload.identity/inject-proxy-sidecar` |Injects a proxy init container and proxy sidecar into the pod. The proxy sidecar is used to intercept token requests to IMDS and acquire an Azure AD token on behalf of the user with federated identity credential. |true |
|`azure.workload.identity/proxy-sidecar-port` |Represents the port of the proxy sidecar. |8000 |
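For orientation, the annotations and label from these tables land on the Kubernetes objects roughly as in the following sketch. The client ID, image, and object names are placeholders, and exact label placement has varied across webhook releases, so treat this as illustrative rather than canonical:

```YAML
apiVersion: v1
kind: ServiceAccount
metadata:
  name: workload-identity-sa          # placeholder name
  namespace: default
  labels:
    azure.workload.identity/use: "true"   # opts workloads using this account in to the webhook
  annotations:
    azure.workload.identity/client-id: <IDENTITY_CLIENT_ID>   # placeholder client ID
    azure.workload.identity/service-account-token-expiration: "3600"
---
apiVersion: v1
kind: Pod
metadata:
  name: sample-workload
  namespace: default
spec:
  serviceAccountName: workload-identity-sa
  containers:
  - name: app
    image: <YOUR_IMAGE>                # placeholder image
```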
The following table summarizes our migration or deployment recommendations for w
[workload-identity-migration-sidecar]: workload-identity-migrate-from-pod-identity.md
[auto-rotation]: certificate-rotation.md#certificate-auto-rotation
[aks-virtual-nodes]: virtual-nodes.md
+[unsupported-regions-user-assigned-managed-identities]: ../active-directory/workload-identities/workload-identity-federation-considerations.md#unsupported-regions-user-assigned-managed-identities
+[general-federated-identity-credential-considerations]: ../active-directory/workload-identities/workload-identity-federation-considerations.md#general-federated-identity-credential-considerations
analysis-services Analysis Services Gateway Install https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-gateway-install.md
description: Learn how to install and configure an On-premises data gateway to c
Previously updated : 01/27/2023 Last updated : 08/25/2023
To learn more about how Azure Analysis Services works with the gateway, see [Con
* During setup, when registering your gateway with Azure, the default region for your subscription is selected. You can choose a different subscription and region. If you have servers in more than one region, you must install a gateway for each region.
* The gateway cannot be installed on a domain controller.
-* The gateway cannot be installed and configured by using automation.
* Only one gateway can be installed on a single computer.
* Install the gateway on a computer that remains on and does not go to sleep.
* Do not install the gateway on a computer with a wireless-only connection to your network. Performance can be diminished.
api-center Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-center/overview.md
For more information about the information assets and capabilities in API Center
## Preview limitations

* In preview, API Center is available in the following Azure regions:-
- * East US
- * UK South
- * Central India
- * Australia East
-
+ * Australia East
+ * Central India
+ * East US
+ * UK South
+ * West Europe
+
## Frequently asked questions

### Q: Is API Center part of Azure API Management?
A: Yes, all data in API Center is encrypted at rest.
> [!div class="nextstepaction"]
> [Provide feedback](https://aka.ms/apicenter/preview/feedback)
+
api-center Set Up Api Center https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-center/set-up-api-center.md
Last updated 06/05/2023 -+ # Tutorial: Get started with your API center (preview)
For background information about the assets you can organize in API Center, see
## Prerequisites
-* Access to the API Center preview. See [access instructions](https://aka.ms/apicenter/joinpreview):
-
- 1. Register the **Azure API Center Preview** feature in your subscription (or subscriptions).
- 1. Submit the access request form.
- 1. Wait for a notification email from Microsoft that access to API Center is enabled in the requested Azure subscription.
- * At least a Contributor role assignment or equivalent permissions in the Azure subscription. * One or more APIs that you want to register in your API center. Here are two examples, with links to their OpenAPI specifications for download:
In this tutorial, you learned how to use the portal to:

> [!div class="nextstepaction"]
> [Learn more about API Center](key-concepts.md)
+
> [!div class="nextstepaction"] > [Learn more about API Center](key-concepts.md)+
api-management Api Management Howto Deploy Multi Region https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-deploy-multi-region.md
To restore routing to the regional gateway, set the value of `disableGateway` to
This section provides considerations for multi-region deployments when the API Management instance is injected in a virtual network.
-* Configure each regional network independently. The [connectivity requirements](virtual-network-reference.md) such as required network security group rules for a virtual network in an added region are the same as those for a network in the primary region.
+* Configure each regional network independently. The [connectivity requirements](virtual-network-reference.md) such as required network security group rules for a virtual network in an added region are generally the same as those for a network in the primary region.
* Virtual networks in the different regions don't need to be peered.
+> [!IMPORTANT]
+> When configured in internal VNet mode, each regional gateway must also have outbound connectivity on port 1443 to the Azure SQL database configured for your API Management instance, which is only in the *primary* region. Ensure that you allow connectivity to the FQDN or IP address of this Azure SQL database in any routes or firewall rules you configure for networks in your secondary regions; the Azure SQL service tag can't be used in this scenario. To find the Azure SQL database name in the primary region, go to the **Network** > **Network status** page of your API Management instance in the portal.
### IP addresses
This section provides considerations for multi-region deployments when the API M
[create an api management service instance]: get-started-create-service-instance.md
+
[get started with azure api management]: get-started-create-service-instance.md
+
[deploy an api management service instance to a new region]: #add-region
+
[delete an api management service instance from a region]: #remove-region
+
[unit]: https://azure.microsoft.com/pricing/details/api-management/
+
[premium]: https://azure.microsoft.com/pricing/details/api-management/
++
api-management Api Management Howto Ip Addresses https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-ip-addresses.md
GET https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/
}
```
+> [!IMPORTANT]
+> The private IP addresses of the internal load balancer and the API Management units are assigned dynamically, so the private IP address of the API Management instance can't be predicted before deployment. Additionally, moving the instance to a different subnet and then back may change the private IP address.
++

### IP addresses for outbound traffic

API Management uses a public IP address for a connection outside the VNet or a peered VNet and a private IP address for a connection in the VNet or a peered VNet.
api-management Api Management Howto Log Event Hubs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-log-event-hubs.md
Use the API Management [REST API](/rest/api/apimanagement/current-preview/logger
"loggerType": "azureEventHub", "description": "adding a new logger with system assigned managed identity", "credentials": {
- "endpointAddress":"<EventHubsNamespace>.servicebus.windows.net/<EventHubName>",
+ "endpointAddress":"<EventHubsNamespace>.servicebus.windows.net",
"identityClientId":"SystemAssigned", "name":"<EventHubName>" }
resource ehLoggerWithSystemAssignedIdentity 'Microsoft.ApiManagement/service/log
loggerType: 'azureEventHub' description: 'Event hub logger with system-assigned managed identity' credentials: {
- endpointAddress: '<EventHubsNamespace>.servicebus.windows.net/<EventHubName>'
+ endpointAddress: '<EventHubsNamespace>.servicebus.windows.net'
identityClientId: 'systemAssigned'
- name: 'ApimEventHub'
+ name: '<EventHubName>'
} } }
Include a JSON snippet similar to the following in your Azure Resource Manager t
"description": "Event hub logger with system-assigned managed identity", "resourceId": "<EventHubsResourceID>", "credentials": {
- "endpointAddress": "<EventHubsNamespace>.servicebus.windows.net/<EventHubName>",
+ "endpointAddress": "<EventHubsNamespace>.servicebus.windows.net",
"identityClientId": "SystemAssigned",
- "name": "ApimEventHub"
+ "name": "<EventHubName>"
}, } }
Include a JSON snippet similar to the following in your Azure Resource Manager t
For prerequisites, see [Configure API Management managed identity](#option-2-configure-api-management-managed-identity).
-#### [PowerShell](#tab/PowerShell)
+#### [REST API](#tab/PowerShell)
Use the API Management [REST API](/rest/api/apimanagement/current-preview/logger/create-or-update) or a Bicep or ARM template to configure a logger to an event hub with user-assigned managed identity credentials.
+```JSON
+{
+ "properties": {
+ "loggerType": "azureEventHub",
+ "description": "adding a new logger with system assigned managed identity",
+ "credentials": {
+ "endpointAddress":"<EventHubsNamespace>.servicebus.windows.net",
+ "identityClientId":"<ClientID>",
+ "name":"<EventHubName>"
+ }
+ }
+}
+
+```
+ #### [Bicep](#tab/bicep) Include a snippet similar the following in your Bicep template.
resource ehLoggerWithUserAssignedIdentity 'Microsoft.ApiManagement/service/logge
loggerType: 'azureEventHub' description: 'Event hub logger with user-assigned managed identity' credentials: {
- endpointAddress: '<EventHubsNamespace>.servicebus.windows.net/<EventHubName>'
+ endpointAddress: '<EventHubsNamespace>.servicebus.windows.net'
identityClientId: '<ClientID>'
- name: 'ApimEventHub'
+ name: '<EventHubName>'
} } }
Include a JSON snippet similar to the following in your Azure Resource Manager t
"description": "Event hub logger with user-assigned managed identity", "resourceId": "<EventHubsResourceID>", "credentials": {
- "endpointAddress": "<EventHubsNamespace>.servicebus.windows.net/<EventHubName>",
+ "endpointAddress": "<EventHubsNamespace>.servicebus.windows.net",
"identityClientId": "<ClientID>",
- "name": "ApimEventHub"
+ "name": "<EventHubName>"
}, } }
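Whichever template creates the logger, policies then reference it by ID. The following is a minimal sketch of the `log-to-eventhub` policy, assuming a logger named `eventhub-logger` (a hypothetical ID, not one defined in this article):

```xml
<!-- Sketch: send the request method and path to the configured Event Hubs logger -->
<log-to-eventhub logger-id="eventhub-logger">
    @(context.Request.Method + " " + context.Request.Url.Path)
</log-to-eventhub>
```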
api-management Api Management Howto Mutual Certificates For Clients https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-mutual-certificates-for-clients.md
Configure the policy to validate one or more attributes including certificate is
You can also create policy expressions with the [`context` variable](api-management-policy-expressions.md#ContextVariables) to check client certificates. Examples in the following sections show expressions using the `context.Request.Certificate` property and other `context` properties.
+> [!NOTE]
+> Mutual certificate authentication might not function correctly when the API Management gateway endpoint is exposed through Application Gateway, because Application Gateway functions as a Layer 7 load balancer and establishes a distinct SSL connection with the backend API Management service. Consequently, the certificate attached by the client in the initial HTTP request isn't forwarded to API Management. As a workaround, you can transmit the certificate using the server variables option. For detailed instructions, see [Mutual authentication server variables](../application-gateway/rewrite-http-headers-url.md#mutual-authentication-server-variables).
+ > [!IMPORTANT] > * Starting May 2021, the `context.Request.Certificate` property only requests the certificate when the API Management instance's [`hostnameConfiguration`](/rest/api/apimanagement/current-ga/api-management-service/create-or-update#hostnameconfiguration) sets the `negotiateClientCertificate` property to True. By default, `negotiateClientCertificate` is set to False. > * If TLS renegotiation is disabled in your client, you may see TLS errors when requesting the certificate using the `context.Request.Certificate` property. If this occurs, enable TLS renegotiation settings in the client.
api-management Api Management Howto Use Azure Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-use-azure-monitor.md
To configure resource logs:
1. After configuring details for the log destination or destinations, select **Save**.

> [!NOTE]
-> Adding a diagnostic setting object might result in a failure if the [MinApiVersion property](/dotnet/api/microsoft.azure.management.apimanagement.models.apiversionconstraint.minapiversion) of your API Management service is set to any API version higher than 2019-12-01.
+> Adding a diagnostic setting object might result in a failure if the [MinApiVersion property](/dotnet/api/microsoft.azure.management.apimanagement.models.apiversionconstraint.minapiversion) of your API Management service is set to any API version higher than 2022-09-01-preview.
For more information, see [Create diagnostic settings to send platform logs and metrics to different destinations](../azure-monitor/essentials/diagnostic-settings.md).
api-management Api Management Subscriptions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-subscriptions.md
In addition,
## Manage subscription keys

Regularly regenerating keys is a common security precaution. Like most Azure services requiring a subscription key, API Management generates keys in pairs. Each application using the service can switch from *key A* to *key B* and regenerate key A with minimal disruption, and vice versa.
+
+To set specific key values instead of regenerating them, invoke the [Azure API Management Subscription - Create Or Update REST API](/rest/api/apimanagement/current-ga/subscription/create-or-update) and set `properties.primaryKey` and/or `properties.secondaryKey` in the HTTP request body.
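As a sketch, such a request could look like the following; the subscription entity name `my-subscription` and the key value are placeholders, and `displayName` and `scope` are included because the API requires them:

`PUT https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.ApiManagement/service/{serviceName}/subscriptions/my-subscription?api-version=2022-08-01`

```JSON
{
  "properties": {
    "displayName": "My subscription",
    "scope": "/apis",
    "primaryKey": "<desired-primary-key-value>"
  }
}
```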
> [!NOTE]
> * API Management doesn't provide built-in features to manage the lifecycle of subscription keys, such as setting expiration dates or automatically rotating keys. You can develop workflows to automate these processes using tools such as Azure PowerShell or the Azure SDKs.
> * To enforce time-limited access to APIs, API publishers may be able to use policies with subscription keys, or use a mechanism that provides built-in expiration such as token-based authentication.
api-management Api Management Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-versions.md
The format of an API request URL when using query string-based versioning is: `h
For example, `https://apis.contoso.com/products?api-version=v1` and `https://apis.contoso.com/products?api-version=v2` could refer to the same `products` API but to versions `v1` and `v2` respectively. > [!NOTE]
-> Query parameters aren't allowed in the `servers` propery of an OpenAPI specification. If you export an OpenAPI specification from an API version, a query string won't appear in the server URL.
+> Query parameters aren't allowed in the `servers` property of an OpenAPI specification. If you export an OpenAPI specification from an API version, a query string won't appear in the server URL.
## Original versions
api-management Authorizations Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/authorizations-overview.md
Configuring an authorization in your API Management instance consists of three s
:::image type="content" source="media/authorizations-overview/configure-authorization.png" alt-text="Diagram of steps to create an authorization in API Management." border="false":::
-#### Step 1 - Authorization provider
+#### Step 1: Authorization provider
During Step 1, you configure your authorization provider. You can choose between different [identity providers](authorizations-configure-common-providers.md) and grant types (authorization code or client credential). Each identity provider requires specific configurations. Important things to keep in mind:

* An authorization provider configuration can only have one grant type.
To use an authorization provider, at least one *authorization* is required. Ea
|Authorization code | Bound to a user context, meaning a user needs to consent to the authorization. As long as the refresh token is valid, API Management can retrieve new access and refresh tokens. If the refresh token becomes invalid, the user needs to reauthorize. All identity providers support authorization code. [Learn more](https://www.rfc-editor.org/rfc/rfc6749?msclkid=929b18b5d0e611ec82a764a7c26a9bea#section-1.3.1) |
|Client credentials | Isn't bound to a user and is often used in application-to-application scenarios. No consent is required for client credentials grant type, and the authorization doesn't become invalid. [Learn more](https://www.rfc-editor.org/rfc/rfc6749?msclkid=929b18b5d0e611ec82a764a7c26a9bea#section-1.3.4) |
-### Step 2 - Log in
+#### Step 2: Log in
For authorizations based on the authorization code grant type, you must authenticate to the provider and *consent* to authorization. After successful login and authorization by the identity provider, the provider returns valid access and refresh tokens, which are encrypted and saved by API Management. For details, see [Process flow - runtime](#process-flowruntime).
-### Step 3 - Access policy
+#### Step 3: Access policy
You configure one or more *access policies* for each authorization. The access policies determine which [Azure AD identities](../active-directory/develop/app-objects-and-service-principals.md) can gain access to your authorizations at runtime. Authorizations currently support managed identities and service principals.
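At runtime, a policy can then retrieve the stored token by referencing the provider and authorization. The following is a minimal sketch of the `get-authorization-context` policy; the provider and authorization IDs are placeholders:

```xml
<!-- Sketch: fetch the authorization context into a context variable -->
<get-authorization-context
    provider-id="my-provider"
    authorization-id="my-authorization"
    context-variable-name="auth-context"
    identity-type="managed"
    ignore-error="false" />
```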
api-management Cache Lookup Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/cache-lookup-policy.md
Use the `cache-lookup` policy to perform cache lookup and return a valid cached
### Usage notes
+- API Management only performs cache lookup for HTTP GET requests.
* When using `vary-by-query-parameter`, you might want to declare the parameters in the rewrite-uri template or set the attribute `copy-unmatched-params` to `false`. By deactivating this flag, parameters that aren't declared are sent to the backend.
- This policy can only be used once in a policy section.
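Put together, `cache-lookup` in the `inbound` section is typically paired with `cache-store` in `outbound`. The following is a sketch with illustrative values (a one-hour duration and a `version` query parameter):

```xml
<policies>
    <inbound>
        <base />
        <!-- Return a cached response for GET requests, varying the cache key by the "version" query parameter -->
        <cache-lookup vary-by-developer="false" vary-by-developer-groups="false">
            <vary-by-query-parameter>version</vary-by-query-parameter>
        </cache-lookup>
    </inbound>
    <outbound>
        <base />
        <!-- Cache the response for one hour -->
        <cache-store duration="3600" />
    </outbound>
</policies>
```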
api-management Cache Store Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/cache-store-policy.md
The `cache-store` policy caches responses according to the specified cache setti
### Usage notes
+- API Management only caches responses to HTTP GET requests.
- This policy can only be used once in a policy section.
api-management Configure Graphql Resolver https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/configure-graphql-resolver.md
Currently, API Management supports resolvers that can access the following data
* A resolver is a resource containing a policy definition that's invoked only when a matching object type and field in the schema is executed. * Each resolver resolves data for a single field. To resolve data for multiple fields, configure a separate resolver for each. * Resolver-scoped policies are evaluated *after* any `inbound` and `backend` policies in the policy execution pipeline. They don't inherit policies from other scopes. For more information, see [Policies in API Management](api-management-howto-policies.md).-
+* You can configure API-scoped policies for a GraphQL API, independent of the resolver-scoped policies. For example, add a [validate-graphql-request](validate-graphql-request-policy.md) policy to the `inbound` scope to validate the request before the resolver is invoked. Configure API-scoped policies on the **API policies** tab for the API.
> [!IMPORTANT]
> * If you use the preview `set-graphql-resolver` policy in policy definitions, you should migrate to the managed resolvers described in this article.
The following steps create a resolver using an HTTP-based data source. The gener
:::image type="content" source="media/configure-graphql-resolver/configure-resolver-policy.png" alt-text="Screenshot of resolver policy editor in the portal." lightbox="media/configure-graphql-resolver/configure-resolver-policy.png":::
-1. The resolver is attached to the field. Go to the **Resolvers** tab to list and manage the resolvers configured for the API. You can also create resolvers from the **Resolvers** tab.
+ The resolver is attached to the field and appears on the **Resolvers** tab.
+ :::image type="content" source="media/configure-graphql-resolver/list-resolvers.png" alt-text="Screenshot of the resolvers list for GraphQL API in the portal." lightbox="media/configure-graphql-resolver/list-resolvers.png":::
- > [!TIP]
- > * The **Linked** column indicates whether the resolver is configured for a field that's currently in the GraphQL schema. If a resolver isn't linked, it can't be invoked.
- > * You can clone a listed resolver to quickly create a similar resolver that targets a different type and field. In the context menu (**...**), select **Clone**.
+## Manage resolvers
+
+List and manage the resolvers for a GraphQL API on the API's **Resolvers** tab.
++
+On the **Resolvers** tab:
+
+* The **Linked** column indicates whether the resolver is configured for a field that's currently in the GraphQL schema. If a resolver isn't linked, it can't be invoked.
+
+* In the context menu (**...**) for a resolver, find commands to **Clone**, **Edit**, or **Delete** a resolver. Clone a listed resolver to quickly create a similar resolver that targets a different type and field.
+
+* You can create a new resolver by selecting **+ Create**.
+
+## Edit and test a resolver
+
+When you edit a single resolver, the **Edit resolver** page opens. You can:
+
+* Update the resolver policy and optionally the data source. Changing the data source overwrites the current resolver policy.
+
+* Change the type and field that the resolver targets.
+
+* Test and debug the resolver's configuration. As you edit the resolver policy, select **Run Test** to check the output from the data source, which you can validate against the schema. If errors occur, the response includes troubleshooting information.
+
+ :::image type="content" source="media/configure-graphql-resolver/edit-resolver.png" alt-text="Screenshot of editing a resolver in the portal." lightbox="media/configure-graphql-resolver/edit-resolver.png":::
## GraphQL context
api-management Developer Portal Extend Custom Functionality https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/developer-portal-extend-custom-functionality.md
The managed developer portal includes a **Custom HTML code** widget where you ca
## Create and upload custom widget
-### Prerequisites
-
+For more advanced widget use cases, API Management provides a scaffold and tools to help developers create a widget and upload it to the developer portal.
+
+### Prerequisites
* Install [Node.JS runtime](https://nodejs.org/en/) locally
* Basic knowledge of programming and web development

### Create widget
+> [!WARNING]
+> Your custom widget code is stored in public Azure blob storage that's associated with your API Management instance. When you add a custom widget to the developer portal, code is read from this storage via an endpoint that doesn't require authentication, even if the developer portal or a page with the custom widget is only accessible to authenticated users. Don't include sensitive information or secrets in the custom widget code.
+>
1. In the administrative interface for the developer portal, select **Custom widgets** > **Create new custom widget**.
1. Enter a widget name and choose a **Technology**. For more information, see [Widget templates](#widget-templates), later in this article.
1. Select **Create widget**.
The managed developer portal includes a **Custom HTML code** widget where you ca
If prompted, sign in to your Azure account.
-
The custom widget is now deployed to your developer portal. Using the portal's administrative interface, you can add it on pages in the developer portal and set values for any custom properties configured in the widget.

### Publish the developer portal
The React template contains prepared custom hooks in the `hooks.ts` file and est
This [npm package](https://www.npmjs.com/package/@azure/api-management-custom-widgets-tools) contains the following functions to help you develop your custom widget and provides features including communication between the developer portal and your widget:
+
|Function |Description |
|--|--|
|[getValues](#azureapi-management-custom-widgets-toolsgetvalues) | Returns a JSON object containing values set in the widget editor combined with default values |
This [npm package](https://www.npmjs.com/package/@azure/api-management-custom-wi
|[getWidgetData](#azureapi-management-custom-widgets-toolsgetwidgetdata) | Returns all data passed to your custom widget from the developer portal<br/><br/>Used internally in templates |
+
#### `@azure/api-management-custom-widgets-tools/getValues`

Function that returns a JSON object containing the values you've set in the widget editor combined with default values, passed as an argument.
This function returns a JavaScript promise, which after resolution returns a JSO
> Manage and use the token carefully. Anyone who has it can access data in your API Management service.
+
#### `@azure/api-management-custom-widgets-tools/deployNodeJs`

This function deploys your widget to your blob storage. In all templates, it's preconfigured in the `deploy.js` file.
To implement your widget using another JavaScript UI framework and libraries, yo
* For local development, the `config.msapim.json` file must be accessible at the URL `localhost:<port>/config.msapim.json` when the server is running.
-
## Next steps
Learn more about the developer portal:
- [Frequently asked questions](developer-portal-faq.md)
- [Scaffolder of a custom widget for developer portal of Azure API Management service](https://www.npmjs.com/package/@azure/api-management-custom-widgets-scaffolder)
- [Tools for working with custom widgets of developer portal of Azure API Management service](https://www.npmjs.com/package/@azure/api-management-custom-widgets-tools)
+
api-management Emit Metric Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/emit-metric-policy.md
The `emit-metric` policy sends custom metrics in the specified format to Applica
|--|--|--|--|
| name | A string. Name of custom metric. Policy expressions aren't allowed. | Yes | N/A |
| namespace | A string. Namespace of custom metric. Policy expressions aren't allowed. | No | API Management |
-| value | Value of custom metric expressed as an integer. Policy expressions are allowed. | No | 1 |
+| value | Value of custom metric expressed as a double. Policy expressions are allowed. | No | 1 |
## Elements
The following example sends a custom metric to count the number of API requests
* [API Management advanced policies](api-management-advanced-policies.md)
api-management Get Authorization Context Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/get-authorization-context-policy.md
class Authorization
### Usage notes
-* Configure `identity-type=jwt` when the [access policy](authorizations-overview.md#step-3access-policy) for the authorization is assigned to a service principal. Only `/.default` app-only scopes are supported for the JWT.
+* Configure `identity-type=jwt` when the [access policy](authorizations-overview.md#step-3-access-policy) for the authorization is assigned to a service principal. Only `/.default` app-only scopes are supported for the JWT.
## Examples
api-management Graphql Schema Resolve Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/graphql-schema-resolve-api.md
type User {
1. Select **Create**.
1. To resolve data for another field in the schema, repeat the preceding steps to create a resolver.
+> [!TIP]
+> As you edit a resolver policy, select **Run Test** to check the output from the data source, which you can validate against the schema. If errors occur, the response includes troubleshooting information.
+ [!INCLUDE [api-management-graphql-test.md](../../includes/api-management-graphql-test.md)] ## Secure your GraphQL API
api-management Monetization Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/monetization-overview.md
The following steps explain how to implement a monetization strategy for your AP
:::image type="content" source="media/monetization-overview/implementing-strategy.png" alt-text="Diagram of the steps for implementing your monetization strategy":::
-### Step 1 - Understand your customer
+### Step 1: Understand your customer
1. Map out the stages in your API consumers' likely journey, from first discovery of your API to maximum scale.
The following steps explain how to implement a monetization strategy for your AP
1. Consider applying a value-based pricing strategy if the direct value of the API to the customer is well understood.
1. Calculate the anticipated lifetime usage levels of the API for a customer and your expected number of customers over the lifetime of the API.
-### Step 2 - Quantify the costs
+### Step 2: Quantify the costs
Calculate the total cost of ownership for your API.
Calculate the total cost of ownership for your API.
| **Engineering costs** | The human resources required to build, test, operate, and maintain the API over its lifetime. Tends to be the most significant cost component. Where possible, exploit cloud PaaS and serverless technologies to minimize. |
| **Infrastructure costs** | The costs for the underlying platforms, compute, network, and storage required to support the API over its lifetime. Exploit cloud platforms to achieve an infrastructure cost model that scales up proportionally in line with API usage levels. |
-### Step 3 - Conduct market research
+### Step 3: Conduct market research
1. Research the market to identify competitors.
1. Analyze competitors' monetization strategies.
1. Understand the specific features (functional and non-functional) that they are offering with their API.
-### Step 4 - Design the revenue model
+### Step 4: Design the revenue model
Design a revenue model based on the outcome of the steps above. You can work across two dimensions:
Maximize the lifetime value (LTV) you generate from each customer by designing a
Identify the range of required pricing models. A *pricing model* describes a specific set of rules for the API provider to turn consumption by the API consumer into revenue.
-For example, to support the [customer stages above](#step-1understand-your-customer), we would need six types of subscription:
+For example, to support the [customer stages above](#step-1-understand-your-customer), we would need six types of subscription:
| Subscription type | Description |
| -- | -- |
Building on the examples above, the pricing models could be applied to create an
* Are charged an extra $0.06/100 calls past the first 50,000.
* Rate limited to 1,200 calls/minute.
-### Step 5 - Calibrate
+### Step 5: Calibrate
Calibrate the pricing across the revenue model to:
Calibrate the pricing across the revenue model to:
- Verify the quality of your service offerings in each revenue model tier can be supported by your solution.
- For example, if you are offering to support 3,500 calls/minute, make sure your end-to-end solution can scale to support that throughput level.
-### Step 6 - Release and monitor
+### Step 6: Release and monitor
Choose an appropriate solution to collect payment for usage of your APIs. Providers tend to fall into two groups:
api-management Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/policy-reference.md
Title: Built-in policy definitions for Azure API Management description: Lists Azure Policy built-in policy definitions for Azure API Management. These built-in policy definitions provide approaches to managing your Azure resources. Previously updated : 08/08/2023 Last updated : 08/30/2023
api-management Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure API Management description: Lists Azure Policy Regulatory Compliance controls available for Azure API Management. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/03/2023 Last updated : 08/25/2023
api-management Self Hosted Gateway Enable Azure Ad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/self-hosted-gateway-enable-azure-ad.md
To enable Azure AD authentication, complete the following steps:
* An API Management instance in the Developer or Premium service tier. If needed, complete the following quickstart: [Create an Azure API Management instance](get-started-create-service-instance.md).
* Provision a [gateway resource](api-management-howto-provision-self-hosted-gateway.md) on the instance.
-* Enable a [managed identity](api-management-howto-use-managed-service-identity.md) on the instance.
+* Enable a [system-assigned managed identity](api-management-howto-use-managed-service-identity.md) on the instance.
* Self-hosted gateway container image version 2.2 or later
+### Limitations
+
+* Only system-assigned managed identity is supported.
+
## Create custom roles

Create the following two [custom roles](../role-based-access-control/custom-roles.md) that are assigned in later steps. You can use the permissions listed in the following JSON templates to create the custom roles using the [Azure portal](../role-based-access-control/custom-roles-portal.md), [Azure CLI](../role-based-access-control/custom-roles-cli.md), [Azure PowerShell](../role-based-access-control/custom-roles-powershell.md), or other Azure tools.
Assign the API Management Configuration API Access Validator Service Role to the
### Assign API Management Gateway Configuration Reader Role
-#### Step 1. Register Azure AD app
+#### Step 1: Register Azure AD app
Create a new Azure AD app. For steps, see [Create an Azure Active Directory application and service principal that can access resources](../active-directory/develop/howto-create-service-principal-portal.md). This app will be used by the self-hosted gateway to authenticate to the API Management instance.

* Generate a [client secret](../active-directory/develop/howto-create-service-principal-portal.md#option-3-create-a-new-application-secret)
* Take note of the following application values for use in the next section when deploying the self-hosted gateway: application (client) ID, directory (tenant) ID, and client secret
-#### Step 2. Assign API Management Gateway Configuration Reader Service Role
+#### Step 2: Assign API Management Gateway Configuration Reader Service Role
[Assign](../active-directory/develop/howto-create-service-principal-portal.md#assign-a-role-to-the-application) the API Management Gateway Configuration Reader Service Role to the app.
kubectl apply -f mygw.yaml
[!INCLUDE [api-management-self-hosted-gateway-kubernetes-services](../../includes/api-management-self-hosted-gateway-kubernetes-services.md)]

## Next steps

* Learn more about the API Management [self-hosted gateway overview](self-hosted-gateway-overview.md).
api-management Self Hosted Gateway Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/self-hosted-gateway-overview.md
To operate properly, each self-hosted gateway needs outbound connectivity on por
| Description | Required for v1 | Required for v2 | Notes |
|:|:|:|:|
-| Hostname of the configuration endpoint | `<apim-service-name>.management.azure-api.net` | `<apim-service-name>.configuration.azure-api.net` | Connectivity to v2 endpoint requires DNS resolution of the default hostname. |
+| Hostname of the configuration endpoint | `<apim-service-name>.management.azure-api.net` | `<apim-service-name>.configuration.azure-api.net` | Custom hostnames are also supported and can be used instead of the default hostname. |
| Public IP address of the API Management instance | ✔️ | ✔️ | IP address of primary location is sufficient. |
| Public IP addresses of Azure Storage [service tag](../virtual-network/service-tags-overview.md) | ✔️ | Optional<sup>2</sup> | IP addresses must correspond to primary location of API Management instance. |
| Hostname of Azure Blob Storage account | ✔️ | Optional<sup>2</sup> | Account associated with instance (`<blob-storage-account-name>.blob.core.windows.net`) |
api-management Send One Way Request Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/send-one-way-request-policy.md
Previously updated : 12/08/2022 Last updated : 08/02/2023
The `send-one-way-request` policy sends the provided request to the specified UR
| Attribute | Description | Required | Default |
| - | -- | -- | -- |
-| mode | Determines whether this is a `new` request or a `copy` of the current request. In outbound mode, `mode=copy` does not initialize the request body. Policy expressions are allowed. | No | `new` |
+| mode | Determines whether this is a `new` request or a `copy` of the headers and body in the current request. In the outbound policy section, `mode=copy` does not initialize the request body. Policy expressions are allowed. | No | `new` |
| timeout| The timeout interval in seconds before the call to the URL fails. Policy expressions are allowed. | No | 60 |
The `send-one-way-request` policy sends the provided request to the specified UR
| [set-header](set-header-policy.md) | Sets a header in the request. Use multiple `set-header` elements for multiple request headers. | No |
| [set-body](set-body-policy.md) | Sets the body of the request. | No |
| authentication-certificate | [Certificate to use for client authentication](authentication-certificate-policy.md), specified in a `thumbprint` attribute. | No |
+| [proxy](proxy-policy.md) | Routes request via HTTP proxy. | No |
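To make the attributes and elements above concrete, here's a minimal sketch of a fire-and-forget notification; the webhook URL and message text are hypothetical placeholders, and `mode="new"` means the URL and body are set explicitly rather than copied from the current request:

```xml
<send-one-way-request mode="new" timeout="20">
    <!-- Hypothetical webhook receiver; replace with your own endpoint. -->
    <set-url>https://hooks.example.com/notify</set-url>
    <set-method>POST</set-method>
    <set-header name="Content-Type" exists-action="override">
        <value>application/json</value>
    </set-header>
    <!-- Build a JSON payload with a policy expression; the caller doesn't wait for a response. -->
    <set-body>@{
        return new JObject(
            new JProperty("text", $"Operation {context.Operation.Name} was invoked")
        ).ToString();
    }</set-body>
</send-one-way-request>
```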
## Usage
api-management Send Request Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/send-request-policy.md
Previously updated : 12/08/2022 Last updated : 08/02/2023
The `send-request` policy sends the provided request to the specified URL, waiti
| Attribute | Description | Required | Default |
| - | -- | -- | -- |
-| mode | Determines whether this is a `new` request or a `copy` of the current request. In outbound mode, `mode=copy` does not initialize the request body. Policy expressions are allowed. | No | `new` |
+| mode | Determines whether this is a `new` request or a `copy` of the headers and body in the current request. In the outbound policy section, `mode=copy` does not initialize the request body. Policy expressions are allowed. | No | `new` |
| response-variable-name | The name of context variable that will receive a response object. If the variable doesn't exist, it will be created upon successful execution of the policy and will become accessible via [`context.Variable`](api-management-policy-expressions.md#ContextVariables) collection. Policy expressions are allowed. | Yes | N/A |
| timeout | The timeout interval in seconds before the call to the URL fails. Policy expressions are allowed. | No | 60 |
| ignore-error | If `true` and the request results in an error, the error will be ignored, and the response variable will contain a null value. Policy expressions aren't allowed. | No | `false` |
The `send-request` policy sends the provided request to the specified URL, waiti
| [set-header](set-header-policy.md) | Sets a header in the request. Use multiple `set-header` elements for multiple request headers. | No |
| [set-body](set-body-policy.md) | Sets the body of the request. | No |
| authentication-certificate | [Certificate to use for client authentication](authentication-certificate-policy.md), specified in a `thumbprint` attribute. | No |
-| proxy | A [proxy](proxy-policy.md) policy statement. Used to route request via HTTP proxy | No |
+| [proxy](proxy-policy.md) | Routes request via HTTP proxy. | No |
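As a sketch of how these pieces fit together (the introspection URL is a hypothetical placeholder), the following issues a side request and then inspects the response stored in the named context variable:

```xml
<send-request mode="new" response-variable-name="sideResponse" timeout="20" ignore-error="true">
    <!-- Hypothetical endpoint; replace with your own service. -->
    <set-url>https://auth.example.com/introspect</set-url>
    <set-method>POST</set-method>
    <set-header name="Content-Type" exists-action="override">
        <value>application/x-www-form-urlencoded</value>
    </set-header>
    <set-body>@($"token={context.Request.Headers.GetValueOrDefault("Authorization", "").Replace("Bearer ", "")}")</set-body>
</send-request>
<!-- The response object is available via context.Variables; reject the call if the side request failed. -->
<choose>
    <when condition="@(context.Variables["sideResponse"] == null || ((IResponse)context.Variables["sideResponse"]).StatusCode != 200)">
        <return-response>
            <set-status code="401" reason="Unauthorized" />
        </return-response>
    </when>
</choose>
```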
## Usage
api-management Trace Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/trace-policy.md
The `trace` policy adds a custom trace into the request tracing output in the test console, Application Insights telemetries, and/or resource logs.

- The policy adds a custom trace to the [request tracing](./api-management-howto-api-inspector.md) output in the test console when tracing is triggered, that is, `Ocp-Apim-Trace` request header is present and set to `true` and `Ocp-Apim-Subscription-Key` request header is present and holds a valid key that allows tracing.
-- The policy creates a [Trace](../azure-monitor/app/data-model-complete.md#trace) telemetry in Application Insights, when [Application Insights integration](./api-management-howto-app-insights.md) is enabled and the `severity` specified in the policy is equal to or greater than the `verbosity` specified in the diagnostic setting.
-- The policy adds a property in the log entry when [resource logs](./api-management-howto-use-azure-monitor.md#resource-logs) are enabled and the severity level specified in the policy is at or higher than the verbosity level specified in the diagnostic setting.
+- The policy creates a [Trace](../azure-monitor/app/data-model-complete.md#trace) telemetry in Application Insights, when [Application Insights integration](./api-management-howto-app-insights.md) is enabled and the `severity` specified in the policy is equal to or greater than the `verbosity` specified in the [diagnostic setting](./diagnostic-logs-reference.md).
+- The policy adds a property in the log entry when [resource logs](./api-management-howto-use-azure-monitor.md#resource-logs) are enabled and the severity level specified in the policy is at or higher than the verbosity level specified in the [diagnostic setting](./diagnostic-logs-reference.md).
- The policy is not affected by Application Insights sampling. All invocations of the policy will be logged.

[!INCLUDE [api-management-tracing-alert](../../includes/api-management-tracing-alert.md)]
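A minimal sketch tying the policy's `severity` attribute to the behavior described above; the source name and metadata values are illustrative placeholders:

```xml
<trace source="my-api" severity="information">
    <!-- This entry reaches Application Insights and resource logs only when "information"
         is at or above the verbosity configured in the diagnostic setting. -->
    <message>@($"Request {context.RequestId} received")</message>
    <metadata name="Operation Name" value="New-Order" />
</trace>
```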
The `trace` policy adds a custom trace into the request tracing output in the te
* [API Management advanced policies](api-management-advanced-policies.md)
api-management Validate Azure Ad Token Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/validate-azure-ad-token-policy.md
The `validate-azure-ad-token` policy enforces the existence and validity of a JS
| failed-validation-error-message | Error message to return in the HTTP response body if the JWT doesn't pass validation. This message must have any special characters properly escaped. Policy expressions are allowed. | No | Default error message depends on validation issue, for example "JWT not present." |
| output-token-variable-name | String. Name of context variable that will receive token value as an object of type [`Jwt`](api-management-policy-expressions.md) upon successful token validation. Policy expressions aren't allowed. | No | N/A |

## Elements

| Element | Description | Required |
The `validate-azure-ad-token` policy enforces the existence and validity of a JS
### Usage notes

* You can use access restriction policies in different scopes for different purposes. For example, you can secure the whole API with Azure AD authentication by applying the `validate-azure-ad-token` policy on the API level, or you can apply it on the API operation level and use `claims` for more granular control.
+* When using a custom header (`header-name`), the header value must not include the `Bearer ` prefix; send the bare token instead.
## Examples
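For instance, a sketch using a custom header per the usage note above; the tenant ID and client application ID are hypothetical placeholders, and the `X-Access-Token` header must carry the raw token with no `Bearer ` prefix:

```xml
<validate-azure-ad-token tenant-id="00001111-aaaa-2222-bbbb-3333cccc4444" header-name="X-Access-Token" failed-validation-httpcode="401">
    <!-- Hypothetical client application ID; replace with your app registration's ID. -->
    <client-application-ids>
        <application-id>11112222-bbbb-3333-cccc-4444dddd5555</application-id>
    </client-application-ids>
</validate-azure-ad-token>
```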
api-management Validate Jwt Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/validate-jwt-policy.md
The `validate-jwt` policy enforces existence and validity of a supported JSON we
* The policy supports tokens encrypted with symmetric keys using the following encryption algorithms: A128CBC-HS256, A192CBC-HS384, A256CBC-HS512.
* To configure the policy with one or more OpenID configuration endpoints for use with a self-hosted gateway, the OpenID configuration endpoints URLs must also be reachable by the cloud gateway.
* You can use access restriction policies in different scopes for different purposes. For example, you can secure the whole API with Azure AD authentication by applying the `validate-jwt` policy on the API level, or you can apply it on the API operation level and use `claims` for more granular control.
+* When using a custom header (`header-name`), the header value must not include the `Bearer ` prefix; send the bare token instead.
## Examples
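For instance, a sketch validating an Azure AD-issued JWT taken from a custom header (the tenant GUID and audience are hypothetical placeholders; per the note above, the header carries the bare token):

```xml
<validate-jwt header-name="X-Access-Token" failed-validation-httpcode="401" failed-validation-error-message="Token missing or invalid">
    <!-- Hypothetical tenant GUID and audience; substitute your own values. -->
    <openid-config url="https://login.microsoftonline.com/00001111-aaaa-2222-bbbb-3333cccc4444/v2.0/.well-known/openid-configuration" />
    <audiences>
        <audience>api://11112222-bbbb-3333-cccc-4444dddd5555</audience>
    </audiences>
</validate-jwt>
```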
api-management Virtual Network Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/virtual-network-reference.md
Previously updated : 01/06/2023 Last updated : 08/29/2023
NSG rules allowing outbound connectivity to Storage, SQL, and Azure Event Hubs s
## TLS functionality
-To enable TLS/SSL certificate chain building and validation, the API Management service needs outbound network connectivity to `ocsp.msocsp.com`, `mscrl.microsoft.com`, and `crl.microsoft.com`. This dependency is not required if any certificate you upload to API Management contains the full chain to the CA root.
+To enable TLS/SSL certificate chain building and validation, the API Management service needs outbound network connectivity on ports `80` and `443` to `ocsp.msocsp.com`, `oneocsp.msocsp.com`, `mscrl.microsoft.com`, `crl.microsoft.com`, and `csp.digicert.com`. This dependency is not required if any certificate you upload to API Management contains the full chain to the CA root.
+
## DNS access

Outbound access on port `53` is required for communication with DNS servers. If a custom DNS server exists on the other end of a VPN gateway, the DNS server must be reachable from the subnet hosting API Management.
-### FQDN dependencies
-
-To operate properly, the API Management service needs outbound connectivity on port 443 to the following endpoints associated with its cloud-based API Management instance:
+## Azure Active Directory integration
-| Description | Required | Notes |
-|:|:|:|
-| Endpoints for Azure Active Directory integration | ✔️ | Required endpoints are `<region>.login.microsoft.com` and `login.microsoftonline.com`. |
+To operate properly, the API Management service needs outbound connectivity on port 443 to the following endpoints associated with Azure Active Directory: `<region>.login.microsoft.com` and `login.microsoftonline.com`.
## Metrics and health monitoring Outbound network connectivity to Azure Monitoring endpoints, which resolve under the following domains, are represented under the **AzureMonitor** service tag for use with Network Security Groups.
-### Metrics and health monitoring
-
-Outbound network connectivity to Azure Monitoring endpoints, which resolve under the following domains, are represented under the AzureMonitor service tag for use with Network Security Groups.
| Azure Environment | Endpoints |
|-||
| Azure Public | <ul><li>gcs.prod.monitoring.core.windows.net</li><li>global.prod.microsoftmetrics.com</li><li>shoebox2.prod.microsoftmetrics.com</li><li>shoebox2-red.prod.microsoftmetrics.com</li><li>shoebox2-black.prod.microsoftmetrics.com</li><li>prod3.prod.microsoftmetrics.com</li><li>prod3-black.prod.microsoftmetrics.com</li><li>prod3-red.prod.microsoftmetrics.com</li><li>gcs.prod.warm.ingestion.monitoring.azure.com</li></ul> |
Enable publishing the [developer portal](api-management-howto-developer-portal.m
When adding virtual machines running Windows to the VNet, allow outbound connectivity on port `1688` to the [KMS endpoint](/troubleshoot/azure/virtual-machines/custom-routes-enable-kms-activation#solution) in your cloud. This configuration routes Windows VM traffic to the Azure Key Management Services (KMS) server to complete Windows activation.
+## Internal infrastructure and diagnostics
+
+The following settings and FQDNs are required to maintain and diagnose API Management's internal compute infrastructure.
+
+* Allow outbound UDP access on port `123` for NTP.
+* Allow outbound TCP access on port `12000` for diagnostics.
+* Allow outbound access on port `443` to the following endpoints for internal diagnostics: `azurewatsonanalysis-prod.core.windows.net`, `*.data.microsoft.com`, `azureprofiler.trafficmanager.net`, `shavamanifestazurecdnprod1.azureedge.net`, `shavamanifestcdnprod1.azureedge.net`.
+* Allow outbound access on port `443` to the following endpoint for internal PKI: `issuer.pki.azure.com`.
+* Allow outbound access on ports `80` and `443` to the following endpoints for Windows Update: `*.update.microsoft.com`, `*.ctldl.windowsupdate.com`, `ctldl.windowsupdate.com`, `download.windowsupdate.com`.
+* Allow outbound access on ports `80` and `443` to the endpoint `go.microsoft.com`.
+* Allow outbound access on port `443` to the following endpoints for Windows Defender: `wdcp.microsoft.com`, `wdcpalt.microsoft.com`.
+
## Control plane IP addresses

The following IP addresses are divided by **Azure Environment** and **Region**. In some cases, two IP addresses are listed. Permit both IP addresses.
The following IP addresses are divided by **Azure Environment** and **Region**.
| Azure Government| USDoD Central| 52.182.32.132|
| Azure Government| USDoD East| 52.181.32.192|
+
## Next steps

Learn more about:
app-service App Service Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/app-service-best-practices.md
Title: Best Practices description: Learn best practices and the common troubleshooting scenarios for your app running in Azure App Service.- ms.assetid: f3359464-fa44-4f4a-9ea6-7821060e8d0d Last updated 07/01/2016-++ # Best Practices for Azure App Service
app-service App Service Configure Premium Tier https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/app-service-configure-premium-tier.md
description: Learn how to better performance for your web, mobile, and API app i
keywords: app service, azure app service, scale, scalable, app service plan, app service cost ms.assetid: ff00902b-9858-4bee-ab95-d3406018c688 Previously updated : 05/08/2023 Last updated : 08/29/2023 + # Configure Premium V3 tier for Azure App Service
-The new Premium V3 pricing tier gives you faster processors, SSD storage, and quadruple the memory-to-core ratio of the existing pricing tiers (double the Premium V2 tier). With the performance advantage, you could save money by running your apps on fewer instances. In this article, you learn how to create an app in Premium V3 tier or scale up an app to Premium V3 tier.
+The new Premium V3 pricing tier gives you faster processors, SSD storage, memory-optimized options, and quadruple the memory-to-core ratio of the existing pricing tiers (double the Premium V2 tier). With the performance advantage, you could save money by running your apps on fewer instances. In this article, you learn how to create an app in Premium V3 tier or scale up an app to Premium V3 tier.
## Prerequisites
-To scale-up an app to Premium V3, you need to have an Azure App Service app that runs in a pricing tier lower than Premium V3, and the app must be running in an App Service deployment that supports Premium V3.
+To scale up an app to Premium V3, you need to have an Azure App Service app that runs in a pricing tier lower than Premium V3, and the app must be running in an App Service deployment that supports Premium V3. Additionally, the App Service deployment must support the desired SKU within Premium V3.
<a name="availability"></a>
To scale-up an app to Premium V3, you need to have an Azure App Service app that
The Premium V3 tier is available for both native and custom containers, including both Windows containers and Linux containers.
-Premium V3 is available in some Azure regions and availability in additional regions is being added continually. To see if it's available in your region, run the following Azure CLI command in the [Azure Cloud Shell](../cloud-shell/overview.md):
+Premium V3 and specific Premium V3 SKUs are available in some Azure regions, and availability in additional regions is being added continually. To see if a specific Premium V3 offering is available in your region, run the following Azure CLI command in the [Azure Cloud Shell](../cloud-shell/overview.md) (substitute _P1v3_ with the desired SKU):
```azurecli-interactive
az appservice list-locations --sku P1V3
```
To see all the Premium V3 options, select **Explore pricing plans**, then select
:::image type="content" source="media/app-service-configure-premium-tier/explore-pricing-plans.png" alt-text="Screenshot showing the Explore pricing plans page with a Premium V3 plan selected.":::

> [!IMPORTANT]
-> If you don't see a Premium V3 plan as an option, or if the options are greyed out, then Premium V3 likely isn't available in the underlying App Service deployment that contains the App Service plan. See [Scale up from an unsupported resource group and region combination](#unsupported) for more details.
+> If you don't see **P0V3**, **P1V3**, **P2V3**, **P3V3**, **P1mV3**, **P2mV3**, **P3mV3**, **P4mV3**, and **P5mV3** as options, or if some options are greyed out, then either **Premium V3** or an individual SKU within **Premium V3** isn't available in the underlying App Service deployment that contains the App Service plan. See [Scale up from an unsupported resource group and region combination](#unsupported) for more details.
+>
## Scale up an existing app to Premium V3 tier
-Before scaling an existing app to Premium V3 tier, make sure that Premium V3 is available. For information, see [Premium V3 availability](#availability). If it's not available, see [Scale up from an unsupported resource group and region combination](#unsupported).
+Before scaling an existing app to the Premium V3 tier, make sure that both Premium V3 and the specific SKU within Premium V3 are available. For information, see [Premium V3 availability](#availability). If it's not available, see [Scale up from an unsupported resource group and region combination](#unsupported).
Depending on your hosting environment, scaling up may require extra steps.
Some App Service plans can't scale up to the Premium V3 tier, or to a newer SKU
## Scale up from an unsupported resource group and region combination
-If your app runs in an App Service deployment where Premium V3 isn't available, or if your app runs in a region that currently does not support Premium V3, you need to re-deploy your app to take advantage of Premium V3. You have two options:
+If your app runs in an App Service deployment where Premium V3 isn't available, or if your app runs in a region that currently does not support Premium V3, you need to re-deploy your app to take advantage of Premium V3. Alternatively, newer Premium V3 SKUs may not be available, in which case you also need to re-deploy your app to take advantage of newer SKUs within Premium V3. You have two options:
-- Create an app in a new resource group and with a new App Service plan. When creating the App Service plan, select a Premium V3 tier. This step ensures that the App Service plan is deployed into a deployment unit that supports Premium V3. Then, redeploy your application code into the newly created app. Even if you scale the App Service plan down to a lower tier to save costs, you can always scale back up to Premium V3 because the deployment unit supports it.-- If your app already runs in an existing **Premium** tier, then you can clone your app with all app settings, connection strings, and deployment configuration into a new resource group on a new app service plan that uses Premium V3.
+- Create an app in a new resource group and with a new App Service plan. When creating the App Service plan, select the desired Premium V3 tier. This step ensures that the App Service plan is deployed into a deployment unit that supports Premium V3 as well as the specific SKU within Premium V3. Then, redeploy your application code into the newly created app. Even if you scale the new App Service plan down to a lower tier to save costs, you can always scale back up to Premium V3 and the desired SKU within Premium V3 because the deployment unit supports it.
![Screenshot showing how to clone your app.](media/app-service-configure-premium-tier/clone-app.png)
app-service App Service Key Vault References https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/app-service-key-vault-references.md
Last updated 07/31/2023--++
app-service App Service Plan Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/app-service-plan-manage.md
keywords: app service, azure app service, scale, app service plan, change, creat
ms.assetid: 4859d0d5-3e3c-40cc-96eb-f318b2c51a3d + Last updated 07/31/2023
app-service App Service Sql Asp Github Actions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/app-service-sql-asp-github-actions.md
Title: "Tutorial: Use GitHub Actions to deploy to App Service and connect to a database" description: Deploy a database-backed ASP.NET core app to Azure with GitHub Actions+ ms.devlang: csharp Last updated 01/09/2023
app-service App Service Sql Github Actions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/app-service-sql-github-actions.md
Title: "Tutorial: Use GitHub Actions to deploy to an App Service custom container and connect to a database" description: Learn how to deploy an ASP.NET core app to Azure and to Azure SQL Database with GitHub Actions+ ms.devlang: csharp Last updated 01/09/2023
app-service App Service Web App Cloning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/app-service-web-app-cloning.md
ms.assetid: f9a5cfa1-fbb0-41e6-95d1-75d457347a35
Last updated 01/14/2016 -++ # Azure App Service App Cloning Using PowerShell
app-service App Service Web Configure Tls Mutual Auth https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/app-service-web-configure-tls-mutual-auth.md
Title: Configure TLS mutual authentication description: Learn how to authenticated client certificates on TLS. Azure App Service can make the client certificate available to the app code for verification. ++ ms.assetid: cd1d15d3-2d9e-4502-9f11-a306dac4453a Last updated 12/11/2020
app-service App Service Web Tutorial Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/app-service-web-tutorial-custom-domain.md
ms.assetid: dc446e0e-0958-48ea-8d99-441d2b947a7c
Last updated 01/31/2023 ++ # Map an existing custom DNS name to Azure App Service
app-service App Service Web Tutorial Dotnet Sqldatabase https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/app-service-web-tutorial-dotnet-sqldatabase.md
ms.assetid: 03c584f1-a93c-4e3d-ac1b-c82b50c75d3e
ms.devlang: csharp Last updated 01/27/2022-+++ # Tutorial: Deploy an ASP.NET app to Azure with Azure SQL Database
Run a few commands to make updates to your local database.
1. Type `Ctrl+F5` to run the app. Test the edit, details, and create links.
-If the application loads without errors, then Code First Migrations has succeeded. However, your page still looks the same because your application logic is not using this new property yet.
+If the application loads without errors, then Code First Migrations has succeeded. However, your page still looks the same because your application logic isn't using this new property yet.
#### Use the new property
Now that you enabled Code First Migrations in your Azure app, publish your code
![Azure app after Code First Migration](./media/app-service-web-tutorial-dotnet-sqldatabase/this-one-is-done.png)
-All your existing to-do items are still displayed. When you republish your ASP.NET application, existing data in your SQL Database is not lost. Also, Code First Migrations only changes the data schema and leaves your existing data intact.
+All your existing to-do items are still displayed. When you republish your ASP.NET application, existing data in your SQL Database isn't lost. Also, Code First Migrations only changes the data schema and leaves your existing data intact.
## Stream application logs
Each action starts with a `Trace.WriteLine()` method. This code is added to show
> [!TIP]
> You can experiment with different trace levels to see what types of messages are displayed for each level. For example, the **Information** level includes all logs created by `Trace.TraceInformation()`, `Trace.TraceWarning()`, and `Trace.TraceError()`, but not logs created by `Trace.WriteLine()`.
-1. In your browser navigate to your app again at *http://&lt;your app name>.azurewebsites.net*, then try clicking around the to-do list application in Azure. The trace messages are now streamed to the **Output** window in Visual Studio.
+1. In your browser, navigate to your app again at *http://&lt;your app name>.azurewebsites.net*, then try clicking around the to-do list application in Azure. The trace messages are now streamed to the **Output** window in Visual Studio.
```console Application: 2017-04-06T23:30:41 PID[8132] Verbose GET /Todos/Index
app-service App Service Web Tutorial Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/app-service-web-tutorial-rest-api.md
ms.devlang: csharp
Last updated 01/31/2023 ++ # Tutorial: Host a RESTful API with CORS in Azure App Service
app-service Configure Authentication Api Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-authentication-api-version.md
Title: Manage AuthN/AuthZ API versions
description: Upgrade your App Service authentication API to V2 or pin it to a specific version, if needed. Last updated 02/17/2023-+++ # Manage the API and runtime versions of App Service authentication
There are two versions of the management API for App Service authentication. The
> [!WARNING]
> Migration to V2 will disable management of the App Service Authentication/Authorization feature for your application through some clients, such as its existing experience in the Azure portal, Azure CLI, and Azure PowerShell. This cannot be reversed.
-The V2 API does not support creation or editing of Microsoft Account as a distinct provider as was done in V1. Rather, it leverages the converged [Microsoft identity platform](../active-directory/develop/v2-overview.md) to sign-in users with both Azure AD and personal Microsoft accounts. When switching to the V2 API, the V1 Azure Active Directory (Azure AD) configuration is used to configure the Microsoft identity platform provider. The V1 Microsoft Account provider will be carried forward in the migration process and continue to operate as normal, but it is recommended that you move to the newer Microsoft Identity Platform model. See [Support for Microsoft Account provider registrations](#support-for-microsoft-account-provider-registrations) to learn more.
+The V2 API doesn't support creation or editing of Microsoft Account as a distinct provider as was done in V1. Rather, it uses the converged [Microsoft identity platform](../active-directory/develop/v2-overview.md) to sign-in users with both Azure AD and personal Microsoft accounts. When switching to the V2 API, the V1 Azure Active Directory (Azure AD) configuration is used to configure the Microsoft identity platform provider. The V1 Microsoft Account provider will be carried forward in the migration process and continue to operate as normal, but you should move to the newer Microsoft Identity Platform model. See [Support for Microsoft Account provider registrations](#support-for-microsoft-account-provider-registrations) to learn more.
The automated migration process will move provider secrets into application settings and then convert the rest of the configuration into the new format. To use the automatic migration:

1. Navigate to your app in the portal and select the **Authentication** menu option.
1. If the app is configured using the V1 model, you'll see an **Upgrade** button.
-1. Review the description in the confirmation prompt. If you're ready to perform the migration, click **Upgrade** in the prompt.
+1. Review the description in the confirmation prompt. If you're ready to perform the migration, select **Upgrade** in the prompt.
### Manually managing the migration
-The following steps will allow you to manually migrate the application to the V2 API if you do not wish to use the automatic version mentioned above.
+The following steps will allow you to manually migrate the application to the V2 API if you don't wish to use the automatic version mentioned above.
#### Moving secrets to application settings
The following steps will allow you to manually migrate the application to the V2
> [!NOTE]
> The application settings for this configuration should be marked as slot-sticky, meaning that they will not move between environments during a [slot swap operation](./deploy-staging-slots.md). This is because your authentication configuration itself is tied to the environment.
-1. Create a new JSON file named `authsettings.json`. Take the output that you received previously and remove each secret value from it. Write the remaining output to the file, making sure that no secret is included. In some cases, the configuration may have arrays containing empty strings. Make sure that `microsoftAccountOAuthScopes` does not, and if it does, switch that value to `null`.
+1. Create a new JSON file named `authsettings.json`. Take the output that you received previously and remove each secret value from it. Write the remaining output to the file, making sure that no secret is included. In some cases, the configuration may have arrays containing empty strings. Make sure that `microsoftAccountOAuthScopes` doesn't, and if it does, switch that value to `null`.
-1. Add a property to `authsettings.json` which points to the application setting name you created earlier for each provider:
+1. Add a property to `authsettings.json` that points to the application setting name you created earlier for each provider:
* Azure AD: `clientSecretSettingName`
* Google: `googleClientSecretSettingName`
You've now migrated the app to store identity provider secrets as application se
#### Support for Microsoft Account provider registrations
-If your existing configuration contains a Microsoft Account provider and does not contain an Azure AD provider, you can switch the configuration over to the Azure AD provider and then perform the migration. To do this:
+If your existing configuration contains a Microsoft Account provider and doesn't contain an Azure AD provider, you can switch the configuration over to the Azure AD provider and then perform the migration. To do this:
1. Go to [**App registrations**](https://portal.azure.com/#blade/Microsoft_AAD_RegisteredApps/ApplicationsListBlade) in the Azure portal and find the registration associated with your Microsoft Account provider. It may be under the "Applications from personal account" heading.
-1. Navigate to the "Authentication" page for the registration. Under "Redirect URIs" you should see an entry ending in `/.auth/login/microsoftaccount/callback`. Copy this URI.
+1. Navigate to the "Authentication" page for the registration. Under "Redirect URIs", you should see an entry ending in `/.auth/login/microsoftaccount/callback`. Copy this URI.
1. Add a new URI that matches the one you just copied, except instead have it end in `/.auth/login/aad/callback`. This will allow the registration to be used by the App Service Authentication / Authorization configuration.
1. Navigate to the App Service Authentication / Authorization configuration for your app.
1. Collect the configuration for the Microsoft Account provider.
app-service Configure Authentication Customize Sign In Out https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-authentication-customize-sign-in-out.md
Title: Customize sign-ins and sign-outs
description: Use the built-in authentication and authorization in App Service and at the same time customize the sign-in and sign-out behavior. Last updated 03/29/2021+++ # Customize sign-in and sign-out in Azure App Service authentication
app-service Configure Authentication File Based https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-authentication-file-based.md
Title: File-based configuration of AuthN/AuthZ
description: Configure authentication and authorization in App Service using a configuration file to enable certain preview capabilities. Last updated 07/15/2021+++ # File-based configuration in Azure App Service authentication
-With [App Service authentication](overview-authentication-authorization.md), the authentication settings can be configured with a file. You may need to use file-based configuration to use certain preview capabilities of App Service authentication / authorization before they are exposed via [Azure Resource Manager](../azure-resource-manager/management/overview.md) APIs.
+With [App Service authentication](overview-authentication-authorization.md), the authentication settings can be configured with a file. You may need to use file-based configuration to use certain preview capabilities of App Service authentication / authorization before they're exposed via [Azure Resource Manager](../azure-resource-manager/management/overview.md) APIs.
> [!IMPORTANT]
> Remember that your app payload, and therefore this file, may move between environments, as with [slots](./deploy-staging-slots.md). It is likely you would want a different app registration pinned to each slot, and in these cases, you should continue to use the standard configuration method instead of using the configuration file.
With [App Service authentication](overview-authentication-authorization.md), the
1. Create a new JSON file for your configuration at the root of your project (deployed to D:\home\site\wwwroot in your web / function app). Fill in your desired configuration according to the [file-based configuration reference](#configuration-file-reference). If modifying an existing Azure Resource Manager configuration, make sure to translate the properties captured in the `authsettings` collection into your configuration file.
-2. Modify the existing configuration, which is captured in the [Azure Resource Manager](../azure-resource-manager/management/overview.md) APIs under `Microsoft.Web/sites/<siteName>/config/authsettingsV2`. To modify this, you can use an [Azure Resource Manager template](../azure-resource-manager/templates/overview.md) or a tool like [Azure Resource Explorer](https://resources.azure.com/). Within the authsettingsV2 collection, you will need to set two properties (and may remove others):
+2. Modify the existing configuration, which is captured in the [Azure Resource Manager](../azure-resource-manager/management/overview.md) APIs under `Microsoft.Web/sites/<siteName>/config/authsettingsV2`. To modify it, you can use an [Azure Resource Manager template](../azure-resource-manager/templates/overview.md) or a tool like [Azure Resource Explorer](https://resources.azure.com/). Within the authsettingsV2 collection, set two properties (you may remove others):
   1. Set `platform.enabled` to "true"
   2. Set `platform.configFilePath` to the name of the file (for example, "auth.json")
Once you have made this configuration update, the contents of the file will be u
## Configuration file reference
-Any secrets that will be referenced from your configuration file must be stored as [application settings](./configure-common.md#configure-app-settings). You may name the settings anything you wish. Just make sure that the references from the configuration file uses the same keys.
+Any secrets that will be referenced from your configuration file must be stored as [application settings](./configure-common.md#configure-app-settings). You may name the settings anything you wish. Just make sure that the references from the configuration file use the same keys.
The following is an exhaustive list of the possible configuration options within the file:
app-service Configure Authentication Oauth Tokens https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-authentication-oauth-tokens.md
Title: OAuth tokens in AuthN/AuthZ
description: Learn how to retrieve tokens and refresh tokens and extend sessions when using the built-in authentication and authorization in App Service. Last updated 03/29/2021+++ # Work with OAuth tokens in Azure App Service authentication
app-service Configure Authentication Provider Aad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-authentication-provider-aad.md
description: Learn how to configure Azure Active Directory authentication as an
ms.assetid: 6ec6a46c-bce4-47aa-b8a3-e133baef22eb Last updated 01/31/2023-+++
-# Configure your App Service or Azure Functions app to use Azure AD login
+# Configure your App Service or Azure Functions app to use Azure AD sign-in
Select another authentication provider to jump to it.
The App Service Authentication feature can automatically create an app registrat
- [Use an existing registration created separately](#advanced)

> [!NOTE]
-> The option to create a new registration automatically is not available for government clouds or when using [Azure Active Directory for customers (Preview)]. Instead, [define a registration separately](#advanced).
+> The option to create a new registration automatically isn't available for government clouds or when using [Azure Active Directory for customers (Preview)]. Instead, [define a registration separately](#advanced).
## <a name="express"> </a> Option 1: Create a new app registration automatically
-Use this option unless you need to create an app registration separately. It makes enabling authentication simple and requires just a few clicks. You can customize the app registration in Azure AD once it's created.
+Use this option unless you need to create an app registration separately. You can customize the app registration in Azure AD once it's created.
1. Sign in to the [Azure portal] and navigate to your app.
-1. Select **Authentication** in the menu on the left. Click **Add identity provider**.
+1. Select **Authentication** in the menu on the left. Select **Add identity provider**.
1. Select **Microsoft** in the identity provider dropdown. The option to create a new registration is selected by default. You can change the name of the registration or the supported account types. A client secret will be created and stored as a slot-sticky [application setting] named `MICROSOFT_PROVIDER_AUTHENTICATION_SECRET`. You can update that setting later to use [Key Vault references](./app-service-key-vault-references.md) if you wish to manage the secret in Azure Key Vault.
-1. If this is the first identity provider configured for the application, you will also be prompted with an **App Service authentication settings** section. Otherwise, you may move on to the next step.
+1. If this is the first identity provider configured for the application, you'll also be prompted with an **App Service authentication settings** section. Otherwise, you may move on to the next step.
- These options determine how your application responds to unauthenticated requests, and the default selections will redirect all requests to log in with this new provider. You can change customize this behavior now or adjust these settings later from the main **Authentication** screen by choosing **Edit** next to **Authentication settings**. To learn more about these options, see [Authentication flow](overview-authentication-authorization.md#authentication-flow).
+ These options determine how your application responds to unauthenticated requests, and the default selections will redirect all requests to sign in with this new provider. You can customize this behavior now or adjust these settings later from the main **Authentication** screen by choosing **Edit** next to **Authentication settings**. To learn more about these options, see [Authentication flow](overview-authentication-authorization.md#authentication-flow).
-1. (Optional) Click **Next: Permissions** and add any Microsoft Graph permissions needed by the application. These will be added to the app registration, but you can also change them later.
-1. Click **Add**.
+1. (Optional) Select **Next: Permissions** and add any Microsoft Graph permissions needed by the application. These will be added to the app registration, but you can also change them later.
+1. Select **Add**.
You're now ready to use the Microsoft identity platform for authentication in your app. The provider will be listed on the **Authentication** screen. From there, you can edit or delete this provider configuration.
-For an example of configuring Azure AD login for a web app that accesses Azure Storage and Microsoft Graph, see [this tutorial](scenario-secure-app-authentication-app-service.md).
+For an example of configuring Azure AD sign-in for a web app that accesses Azure Storage and Microsoft Graph, see [this tutorial](scenario-secure-app-authentication-app-service.md).
## <a name="advanced"> </a>Option 2: Use an existing registration created separately
You can configure App Service authentication to use an existing app registration
- Your account doesn't have permissions to create app registrations in your Azure AD tenant.
- You want to use an app registration from a different Azure AD tenant than the one your app is in.
-- The option to create a new registration is not available for government clouds.
+- The option to create a new registration isn't available for government clouds.
#### <a name="register"> </a>Step 1: Create an app registration in Azure AD for your App Service app
-During creation of the app registration, collect the following information which you will need later when you configure the authentication in the App Service app:
+During creation of the app registration, collect the following information which you'll need later when you configure the authentication in the App Service app:
- Client ID
- Tenant ID
- Client secret (optional, but recommended)
- Application ID URI
-The instructions for creating an app registration depend on if you are using [a workforce tenant](../active-directory/fundamentals/active-directory-whatis.md) or [a customer tenant (Preview)][Azure Active Directory for customers (Preview)]. Use the tabs below to select the right set of instructions for your scenario.
+The instructions for creating an app registration depend on whether you're using [a workforce tenant](../active-directory/fundamentals/active-directory-whatis.md) or [a customer tenant (Preview)][Azure Active Directory for customers (Preview)]. Use the tabs below to select the right set of instructions for your scenario.
To register the app, perform the following steps:
To register the app, perform the following steps:
# [Workforce tenant](#tab/workforce-tenant)
- From the portal menu, select **Azure Active Directory**. If the tenant you are using is different from the one you use to configure the App Service application, you will need to [change directories][Switch your directory] first.
+ From the portal menu, select **Azure Active Directory**. If the tenant you're using is different from the one you use to configure the App Service application, you'll need to [change directories][Switch your directory] first.
# [Customer tenant (Preview)](#tab/customer-tenant)
To register the app, perform the following steps:
1. [Switch your directory] in the Azure portal to the customer tenant.

   > [!TIP]
- > Because you are working in two tenant contexts (the tenant for your subscription and the customer tenant), you may want to open the Azure portal in two separate tabs of your web browser. Each can be signed into a different tenant.
+ > Because you're working in two tenant contexts (the tenant for your subscription and the customer tenant), you may want to open the Azure portal in two separate tabs of your web browser. Each can be signed into a different tenant.
1. From the portal menu, select **Azure Active Directory**.
To register the app, perform the following steps:
1. After the app registration is created, copy the **Application (client) ID** and the **Directory (tenant) ID** for later.
1. Under **Implicit grant and hybrid flows**, enable **ID tokens** to allow OpenID Connect user sign-ins from App Service. Select **Save**.
1. (Optional) From the left navigation, select **Branding & properties**. In **Home page URL**, enter the URL of your App Service app and select **Save**.
-1. From the left navigation, select **Expose an API** > **Set** > **Save**. This value uniquely identifies the application when it is used as a resource, allowing tokens to be requested that grant access. It is used as a prefix for scopes you create.
+1. From the left navigation, select **Expose an API** > **Set** > **Save**. This value uniquely identifies the application when it's used as a resource, allowing tokens to be requested that grant access. It's used as a prefix for scopes you create.
For a single-tenant app, you can use the default value, which is in the form `api://<application-client-id>`. You can also specify a more readable URI like `https://contoso.com/api` based on one of the verified domains for your tenant. For a multi-tenant app, you must provide a custom URI. To learn more about accepted formats for App ID URIs, see the [app registrations best practices reference](../active-directory/develop/security-best-practices-for-app-registration.md#application-id-uri).
To register the app, perform the following steps:
# [Workforce tenant](#tab/workforce-tenant)
- No additional steps are required for a workforce tenant.
+ No other steps are required for a workforce tenant.
# [Customer tenant (Preview)](#tab/customer-tenant)
To register the app, perform the following steps:
1. Select the user flow that you just created.
1. Select **Applications** > **Add application**.
- 1. Search for the app registration you created earlier, select it, and then click **Select**.
+ 1. Search for the app registration you created earlier, select it, and then select **Select**.
These steps are also covered in [Add your application to the user flow].
To register the app, perform the following steps:
For **App registration type**, choose one of the following:
- - **Pick an existing app registration in this directory**: Choose an app registration from the current tenant and automatically gather the necessary app information. The system will attempt to create a new client secret against the app registration and automatically configure your app to use it. A default issuer URL is set based on the supported account types configured in the app registration. If you intend to change this default, consult the table below.
- - **Provide the details of an existing app registration**: Specify details for an app registration from another tenant or if your account does not have permission in the current tenant to query the registrations. For this option, you must manually fill in the configuration values according to the table below.
+ - **Pick an existing app registration in this directory**: Choose an app registration from the current tenant and automatically gather the necessary app information. The system will attempt to create a new client secret against the app registration and automatically configure your app to use it. A default issuer URL is set based on the supported account types configured in the app registration. If you intend to change this default, consult the following table.
+ - **Provide the details of an existing app registration**: Specify details for an app registration from another tenant or if your account doesn't have permission in the current tenant to query the registrations. For this option, you must manually fill in the configuration values according to the following table.
- The **authentication endpoint** for a workforce tenant should be a [value specific to the cloud environment](../active-directory/develop/authentication-national-cloud.md#azure-ad-authentication-endpoints). For example, a workforce tenant in global Azure would use "https://login.microsoftonline.com" as its authentication endpoint. Make note of the authentication endpoint value, as it is needed to construct the right **Issuer URL**.
+ The **authentication endpoint** for a workforce tenant should be a [value specific to the cloud environment](../active-directory/develop/authentication-national-cloud.md#azure-ad-authentication-endpoints). For example, a workforce tenant in global Azure would use "https://login.microsoftonline.com" as its authentication endpoint. Make note of the authentication endpoint value, as it's needed to construct the right **Issuer URL**.
# [Customer tenant (Preview)](#tab/customer-tenant)
- For a customer tenant, you must manually fill in the configuration values according to the table below.
+ For a customer tenant, you must manually fill in the configuration values according to the following table.
- The **authentication endpoint** for a customer tenant should be `https://<tenant-subdomain>.ciamlogin.com`, replacing *\<tenant-subdomain>* with the default subdomain for the tenant. The default subdomain is part of the **primary domain** for the tenant, which should be of the form `<tenant-subdomain>.onmicrosoft.com` and was set during tenant creation. For example, if the tenant had the domain "contoso.onmicrosoft.com", the tenant subdomain would be "contoso", and the authentication endpoint would be "https://contoso.ciamlogin.com". Make note of the authentication endpoint value, as it is needed to construct the right **Issuer URL**.
+ The **authentication endpoint** for a customer tenant should be `https://<tenant-subdomain>.ciamlogin.com`, replacing *\<tenant-subdomain>* with the default subdomain for the tenant. The default subdomain is part of the **primary domain** for the tenant, which should be of the form `<tenant-subdomain>.onmicrosoft.com` and was set during tenant creation. For example, if the tenant had the domain "contoso.onmicrosoft.com", the tenant subdomain would be "contoso", and the authentication endpoint would be "https://contoso.ciamlogin.com". Make note of the authentication endpoint value, as it's needed to construct the right **Issuer URL**.
To register the app, perform the following steps:
|Field|Description|
|-|-|
|Application (client) ID| Use the **Application (client) ID** of the app registration. |
- |Client Secret| Use the client secret you generated in the app registration. With a client secret, hybrid flow is used and the App Service will return access and refresh tokens. When the client secret is not set, implicit flow is used and only an ID token is returned. These tokens are sent by the provider and stored in the App Service authentication token store.|
+ |Client Secret| Use the client secret you generated in the app registration. With a client secret, hybrid flow is used and the App Service will return access and refresh tokens. When the client secret isn't set, implicit flow is used and only an ID token is returned. These tokens are sent by the provider and stored in the App Service authentication token store.|
|Issuer URL| Use `<authentication-endpoint>/<tenant-id>/v2.0`, and replace *\<authentication-endpoint>* with the **authentication endpoint** you determined in the previous step for your tenant type and cloud environment, also replacing *\<tenant-id>* with the **Directory (tenant) ID** in which the app registration was created. For applications that use Azure AD v1, omit `/v2.0` in the URL. <br/><br/> This value is used to redirect users to the correct Azure AD tenant and to download the appropriate metadata, such as the token signing keys and the token issuer claim value. Any configuration other than a tenant-specific endpoint will be treated as multi-tenant. In multi-tenant configurations, no validation of the issuer or tenant ID is performed by the system, and these checks should be fully handled in [your app's authorization logic](#authorize-requests).|
- |Allowed Token Audiences| This field is optional. The configured **Application (client) ID** is *always* implicitly considered to be an allowed audience. If your application represents an API that will be called by other clients, you should also add the **Application ID URI** that you configured on the app registration. There is a limit of 500 characters total across the list of allowed audiences.|
+ |Allowed Token Audiences| This field is optional. The configured **Application (client) ID** is *always* implicitly considered to be an allowed audience. If your application represents an API that will be called by other clients, you should also add the **Application ID URI** that you configured on the app registration. There's a limit of 500 characters total across the list of allowed audiences.|
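As an illustration only, here's a minimal C# sketch of how the **Issuer URL** above is composed from the authentication endpoint and tenant ID; both values below are placeholders, and a customer tenant would substitute its `https://<tenant-subdomain>.ciamlogin.com` endpoint instead.

```csharp
using System;

// Placeholder values; substitute your own cloud endpoint and Directory (tenant) ID.
var authenticationEndpoint = "https://login.microsoftonline.com"; // workforce tenant, global Azure
var tenantId = "00000000-0000-0000-0000-000000000000";

// v2.0 issuer; omit the "/v2.0" suffix for applications that use Azure AD v1.
var issuerUrl = $"{authenticationEndpoint}/{tenantId}/v2.0";
Console.WriteLine(issuerUrl);
```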
The client secret will be stored as a slot-sticky [application setting] named `MICROSOFT_PROVIDER_AUTHENTICATION_SECRET`. You can update that setting later to use [Key Vault references](./app-service-key-vault-references.md) if you wish to manage the secret in Azure Key Vault.
-1. If this is the first identity provider configured for the application, you will also be prompted with an **App Service authentication settings** section. Otherwise, you may move on to the next step.
+1. If this is the first identity provider configured for the application, you'll also be prompted with an **App Service authentication settings** section. Otherwise, you may move on to the next step.
- These options determine how your application responds to unauthenticated requests, and the default selections will redirect all requests to log in with this new provider. You can change customize this behavior now or adjust these settings later from the main **Authentication** screen by choosing **Edit** next to **Authentication settings**. To learn more about these options, see [Authentication flow](overview-authentication-authorization.md#authentication-flow).
+ These options determine how your application responds to unauthenticated requests, and the default selections will redirect all requests to sign in with this new provider. You can customize this behavior now or adjust these settings later from the main **Authentication** screen by choosing **Edit** next to **Authentication settings**. To learn more about these options, see [Authentication flow](overview-authentication-authorization.md#authentication-flow).
-1. Click **Add**.
+1. Select **Add**.
You're now ready to use the Microsoft identity platform for authentication in your app. The provider will be listed on the **Authentication** screen. From there, you can edit or delete this provider configuration.

## Authorize requests
-By default, App Service Authentication only handles authentication, determining if the caller is who they say they are. Authorization, determining if that caller should have access to some resource, is an additional step beyond authentication. You can learn more about these concepts from [Microsoft identity platform authorization basics](../active-directory/develop/authorization-basics.md).
+By default, App Service Authentication only handles authentication, determining if the caller is who they say they are. Authorization, determining if that caller should have access to some resource, is an extra step beyond authentication. You can learn more about these concepts from [Microsoft identity platform authorization basics](../active-directory/develop/authorization-basics.md).
-Your app can [make authorization decisions in code](#perform-validations-from-application-code). App Service Authentication does provide some [built-in checks](#use-a-built-in-authorization-policy) which can help, but they may not alone be sufficient to cover the authorization needs of your app.
+Your app can [make authorization decisions in code](#perform-validations-from-application-code). App Service Authentication does provide some [built-in checks](#use-a-built-in-authorization-policy), which can help, but they may not be sufficient on their own to cover the authorization needs of your app.
> [!TIP]
-> Multi-tenant applications should validate the issuer and tenant ID of the request as part of this process to make sure the values are allowed. When App Service Authentication is configured for a multi-tenant scenario, it does not validate which tenant the request comes from. An app may need to be limited to specific tenants, based on if the organization has signed up for the service, for example. See the [Microsoft identity platform multi-tenant guidance](../active-directory/develop/howto-convert-app-to-be-multi-tenant.md#update-your-code-to-handle-multiple-issuer-values).
+> Multi-tenant applications should validate the issuer and tenant ID of the request as part of this process to make sure the values are allowed. When App Service Authentication is configured for a multi-tenant scenario, it doesn't validate which tenant the request comes from. An app may need to be limited to specific tenants, based on if the organization has signed up for the service, for example. See the [Microsoft identity platform multi-tenant guidance](../active-directory/develop/howto-convert-app-to-be-multi-tenant.md#update-your-code-to-handle-multiple-issuer-values).
### Perform validations from application code
Requests that fail these built-in checks are given an HTTP `403 Forbidden` response.
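As an example of the kind of check your application code can perform, here's a minimal sketch of the tenant validation described in the tip above. It assumes ASP.NET Core and that the caller's tenant ID surfaces as the `tid` claim (claim types can vary by token version); the allowlisted tenant ID is a placeholder.

```csharp
using System.Collections.Generic;
using System.Security.Claims;

public static class TenantAuthorization
{
    // Placeholder allowlist; in a real app, load this from configuration.
    private static readonly HashSet<string> AllowedTenants = new()
    {
        "11111111-1111-1111-1111-111111111111",
    };

    // Returns true only when the caller's tenant ID claim is on the allowlist.
    public static bool IsAllowedTenant(ClaimsPrincipal user)
    {
        var tid = user.FindFirst("tid")?.Value
            ?? user.FindFirst("http://schemas.microsoft.com/identity/claims/tenantid")?.Value;
        return tid is not null && AllowedTenants.Contains(tid);
    }
}
```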
## Configure client apps to access your App Service
-In the prior sections, you registered your App Service or Azure Function to authenticate users. This section explains how to register native clients or daemon apps in Azure AD so that they can request access to APIs exposed by your App Service on behalf of users or themselves, such as in an N-tier architecture. Completing the steps in this section is not required if you only wish to authenticate users.
+In the prior sections, you registered your App Service or Azure Function to authenticate users. This section explains how to register native clients or daemon apps in Azure AD so that they can request access to APIs exposed by your App Service on behalf of users or themselves, such as in an N-tier architecture. Completing the steps in this section isn't required if you only wish to authenticate users.
### Native client application
In an N-tier architecture, your client application can acquire a token to call a
You can now [request an access token using the client ID and client secret](../active-directory/develop/v2-oauth2-client-creds-grant-flow.md#first-case-access-token-request-with-a-shared-secret) by setting the `resource` parameter to the **Application ID URI** of the target app. The resulting access token can then be presented to the target app using the standard [OAuth 2.0 Authorization header](../active-directory/develop/v2-oauth2-client-creds-grant-flow.md#use-a-token), and App Service authentication will validate and use the token as usual to now indicate that the caller (an application in this case, not a user) is authenticated.
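As a hedged illustration of that token request, here's a minimal C# sketch using the v2.0 token endpoint, where the target is expressed as its Application ID URI plus `/.default` (the v1 endpoint takes a `resource` parameter instead). All identifiers are placeholders.

```csharp
using System.Collections.Generic;
using System.Net.Http;
using System.Threading.Tasks;

public static class DaemonTokenClient
{
    private static readonly HttpClient Http = new();

    // Requests an app-only access token via the client credentials grant.
    public static async Task<string> GetTokenResponseAsync()
    {
        var tenantId = "<tenant-id>"; // placeholder
        var body = new FormUrlEncodedContent(new Dictionary<string, string>
        {
            ["client_id"] = "<client-app-id>",                // placeholder
            ["client_secret"] = "<client-secret>",            // placeholder
            ["scope"] = "api://<target-app-id-uri>/.default", // target app's Application ID URI
            ["grant_type"] = "client_credentials",
        });

        var response = await Http.PostAsync(
            $"https://login.microsoftonline.com/{tenantId}/oauth2/v2.0/token", body);
        response.EnsureSuccessStatusCode();
        return await response.Content.ReadAsStringAsync(); // JSON containing access_token
    }
}
```

The `access_token` value from the response is what you'd present in the `Authorization: Bearer` header of calls to the target app.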
-At present, this allows _any_ client application in your Azure AD tenant to request an access token and authenticate to the target app. If you also want to enforce _authorization_ to allow only certain client applications, you must perform some additional configuration.
+At present, this allows _any_ client application in your Azure AD tenant to request an access token and authenticate to the target app. If you also want to enforce _authorization_ to allow only certain client applications, you must perform some extra configuration.
1. [Define an App Role](../active-directory/develop/howto-add-app-roles-in-azure-ad-apps.md) in the manifest of the app registration representing the App Service or Function app you want to protect.
1. On the app registration representing the client that needs to be authorized, select **API permissions** > **Add a permission** > **My APIs**.
1. Select the app registration you created earlier. If you don't see the app registration, make sure that you've [added an App Role](../active-directory/develop/howto-add-app-roles-in-azure-ad-apps.md).
1. Under **Application permissions**, select the App Role you created earlier, and then select **Add permissions**.
-1. Make sure to click **Grant admin consent** to authorize the client application to request the permission.
+1. Make sure to select **Grant admin consent** to authorize the client application to request the permission.
1. Similar to the previous scenario (before any roles were added), you can now [request an access token](../active-directory/develop/v2-oauth2-client-creds-grant-flow.md#first-case-access-token-request-with-a-shared-secret) for the same target `resource`, and the access token will include a `roles` claim containing the App Roles that were authorized for the client application.
-1. Within the target App Service or Function app code, you can now validate that the expected roles are present in the token (this is not performed by App Service authentication). For more information, see [Access user claims](configure-authentication-user-identities.md#access-user-claims-in-app-code).
+1. Within the target App Service or Function app code, you can now validate that the expected roles are present in the token (this isn't performed by App Service authentication). For more information, see [Access user claims](configure-authentication-user-identities.md#access-user-claims-in-app-code).
You have now configured a daemon client application that can access your App Service app using its own identity.
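For the validation mentioned in the last step, here's a minimal sketch; `Caller.Access` is a placeholder for whatever App Role value you defined in the manifest.

```csharp
using System.Linq;
using System.Security.Claims;

public static class AppRoleChecks
{
    // App Service authentication doesn't validate App Roles itself, so the
    // target app must check the "roles" claim in its own code.
    public static bool HasCallerRole(ClaimsPrincipal caller) =>
        caller.Claims.Any(c => c.Type == "roles" && c.Value == "Caller.Access");
}
```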
Regardless of the configuration you use to set up authentication, the following
- Configure each App Service app with its own app registration in Azure AD.
- Give each App Service app its own permissions and consent.
-- Avoid permission sharing between environments by using separate app registrations for separate deployment slots. When testing new code, this practice can help prevent issues from affecting the production app.
+- Avoid permission sharing between environments by using separate app registrations for separate deployment slots. When you're testing new code, this practice can help prevent issues from affecting the production app.
### Migrate to the Microsoft Graph
-Some older apps may also have been set up with a dependency on the [deprecated Azure AD Graph][aad-graph], which is scheduled for full retirement. For example, your app code may have called Azure AD graph to check group membership as part of an authorization filter in a middleware pipeline. Apps should move to the [Microsoft Graph](/graph/overview) by following the [guidance provided by AAD as part of the Azure AD Graph deprecation process][aad-graph]. In following those instructions, you may need to make some changes to your configuration of App Service authentication. Once you have added Microsoft Graph permissions to your app registration, you can:
+Some older apps may also have been set up with a dependency on the [deprecated Azure AD Graph][aad-graph], which is scheduled for full retirement. For example, your app code may have called Azure AD graph to check group membership as part of an authorization filter in a middleware pipeline. Apps should move to the [Microsoft Graph](/graph/overview) by following the [guidance provided by Azure AD as part of the Azure AD Graph deprecation process][aad-graph]. In following those instructions, you may need to make some changes to your configuration of App Service authentication. Once you have added Microsoft Graph permissions to your app registration, you can:
1. Update the **Issuer URL** to include the "/v2.0" suffix if it doesn't already. See [Enable Azure Active Directory in your App Service app](#-step-2-enable-azure-active-directory-in-your-app-service-app) for general expectations around this value.
-1. Remove requests for Azure AD Graph permissions from your login configuration. The properties to change depend on [which version of the management API you're using](./configure-authentication-api-version.md):
+1. Remove requests for Azure AD Graph permissions from your sign-in configuration. The properties to change depend on [which version of the management API you're using](./configure-authentication-api-version.md):
- If you're using the V1 API (`/authsettings`), this would be in the `additionalLoginParams` array.
- If you're using the V2 API (`/authsettingsV2`), this would be in the `loginParameters` array.

You would need to remove any reference to "https://graph.windows.net", for example. This includes the `resource` parameter (which isn't supported by the "/v2.0" endpoint) or any scopes you're specifically requesting that are from the Azure AD Graph.
- You would also need to update the configuration to request the new Microsoft Graph permissions you set up for the application registration. You can use the [.default scope](../active-directory/develop/scopes-oidc.md#the-default-scope) to simplify this setup in many cases. To do so, add a new login parameter `scope=openid profile email https://graph.microsoft.com/.default`.
+ You would also need to update the configuration to request the new Microsoft Graph permissions you set up for the application registration. You can use the [.default scope](../active-directory/develop/scopes-oidc.md#the-default-scope) to simplify this setup in many cases. To do so, add a new sign-in parameter `scope=openid profile email https://graph.microsoft.com/.default`.
-With these changes, when App Service Authentication attempts to log in, it will no longer request permissions to the Azure AD Graph, and instead it will get a token for the Microsoft Graph. Any use of that token from your application code would also need to be updated, as per the [guidance provided by AAD][aad-graph].
+With these changes, when App Service Authentication attempts to sign in, it will no longer request permissions to the Azure AD Graph, and instead it will get a token for the Microsoft Graph. Any use of that token from your application code would also need to be updated, as per the [guidance provided by Azure AD][aad-graph].
[aad-graph]: /graph/migrate-azure-ad-graph-overview
app-service Configure Authentication Provider Apple https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-authentication-provider-apple.md
description: Learn how to configure Sign in with Apple as an identity provider f
Last updated 11/19/2020 +++ # Configure your App Service or Azure Functions app to sign in using a Sign in with Apple provider (Preview)
app-service Configure Authentication Provider Facebook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-authentication-provider-facebook.md
description: Learn how to configure Facebook authentication as an identity provi
ms.assetid: b6b4f062-fcb4-47b3-b75a-ec4cb51a62fd Last updated 03/29/2021-+++ # Configure your App Service or Azure Functions app to use Facebook login
app-service Configure Authentication Provider Github https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-authentication-provider-github.md
Title: Configure GitHub authentication
description: Learn how to configure GitHub authentication as an identity provider for your App Service or Azure Functions app. Last updated 03/01/2022+++ # Configure your App Service or Azure Functions app to use GitHub login
app-service Configure Authentication Provider Google https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-authentication-provider-google.md
description: Learn how to configure Google authentication as an identity provide
ms.assetid: 2b2f9abf-9120-4aac-ac5b-4a268d9b6e2b Last updated 03/29/2021-+++
app-service Configure Authentication Provider Microsoft https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-authentication-provider-microsoft.md
description: Learn how to configure Microsoft Account authentication as an ident
ms.assetid: ffbc6064-edf6-474d-971c-695598fd08bf Last updated 03/29/2021-+++
app-service Configure Authentication Provider Openid Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-authentication-provider-openid-connect.md
description: Learn how to configure an OpenID Connect provider as an identity pr
Last updated 10/20/2021 +++
-# Configure your App Service or Azure Functions app to login using an OpenID Connect provider
+# Configure your App Service or Azure Functions app to sign in using an OpenID Connect provider
[!INCLUDE [app-service-mobile-selector-authentication](../../includes/app-service-mobile-selector-authentication.md)]
-This article shows you how to configure Azure App Service or Azure Functions to use a custom authentication provider that adheres to the [OpenID Connect specification](https://openid.net/connect/). OpenID Connect (OIDC) is an industry standard used by many identity providers (IDPs). You do not need to understand the details of the specification in order to configure your app to use an adherent IDP.
+This article shows you how to configure Azure App Service or Azure Functions to use a custom authentication provider that adheres to the [OpenID Connect specification](https://openid.net/connect/). OpenID Connect (OIDC) is an industry standard used by many identity providers (IDPs). You don't need to understand the details of the specification in order to configure your app to use an adherent IDP.
You can configure your app to use one or more OIDC providers. Each must be given a unique alphanumeric name in the configuration, and only one can serve as the default redirect target.
Your provider will require you to register the details of your application with
> Some providers may require additional configuration steps, and may use the values they provide differently. For example, Apple provides a private key that isn't itself used as the OIDC client secret; instead, you must use it to craft a JWT, which is treated as the secret you provide in your app config (see the "Creating the Client Secret" section of the [Sign in with Apple documentation](https://developer.apple.com/documentation/sign_in_with_apple/generate_and_validate_tokens)).
>
-You will need to collect a **client ID** and **client secret** for your application.
+You'll need to collect a **client ID** and **client secret** for your application.
> [!IMPORTANT]
-> The client secret is an important security credential. Do not share this secret with anyone or distribute it within a client application.
+> The client secret is an important security credential. Don't share this secret with anyone or distribute it within a client application.
>
-Additionally, you will need the OpenID Connect metadata for the provider. This is often exposed via a [configuration metadata document](https://openid.net/specs/openid-connect-discovery-1_0.html#ProviderConfig), which is the provider's Issuer URL suffixed with `/.well-known/openid-configuration`. Gather this configuration URL.
+Additionally, you'll need the OpenID Connect metadata for the provider. This is often exposed via a [configuration metadata document](https://openid.net/specs/openid-connect-discovery-1_0.html#ProviderConfig), which is the provider's Issuer URL suffixed with `/.well-known/openid-configuration`. Gather this configuration URL.
-If you are unable to use a configuration metadata document, you will need to gather the following values separately:
+If you're unable to use a configuration metadata document, you'll need to gather the following values separately:
- The issuer URL (sometimes shown as `issuer`)
- The [OAuth 2.0 Authorization endpoint](https://tools.ietf.org/html/rfc6749#section-3.1) (sometimes shown as `authorization_endpoint`)
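As an illustration, here's a minimal C# sketch that fetches a provider's configuration metadata document and prints the values described above. The issuer URL is a placeholder; the property names come from the OpenID Connect discovery specification.

```csharp
using System;
using System.Net.Http;
using System.Text.Json;
using System.Threading.Tasks;

public static class OidcDiscovery
{
    // Prints the endpoints App Service needs from a provider's metadata document.
    public static async Task InspectAsync()
    {
        using var http = new HttpClient();
        var issuer = "https://idp.example.com"; // placeholder issuer URL
        var json = await http.GetStringAsync($"{issuer}/.well-known/openid-configuration");

        using var doc = JsonDocument.Parse(json);
        var root = doc.RootElement;
        Console.WriteLine(root.GetProperty("issuer").GetString());
        Console.WriteLine(root.GetProperty("authorization_endpoint").GetString());
        Console.WriteLine(root.GetProperty("token_endpoint").GetString());
        Console.WriteLine(root.GetProperty("jwks_uri").GetString());
    }
}
```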
If you are unable to use a configuration metadata document, you will need to gat
## <a name="configure"> </a>Add provider information to your application 1. Sign in to the [Azure portal] and navigate to your app.
-1. Select **Authentication** in the menu on the left. Click **Add identity provider**.
+1. Select **Authentication** in the menu on the left. Select **Add identity provider**.
1. Select **OpenID Connect** in the identity provider dropdown.
1. Provide the unique alphanumeric name selected earlier for **OpenID provider name**.
1. If you have the URL for the **metadata document** from the identity provider, provide that value for **Metadata URL**. Otherwise, select the **Provide endpoints separately** option and put each URL gathered from the identity provider in the appropriate field.
app-service Configure Authentication Provider Twitter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-authentication-provider-twitter.md
description: Learn how to configure Twitter authentication as an identity provid
ms.assetid: c6dc91d7-30f6-448c-9f2d-8e91104cde73 Last updated 03/29/2021-+++ # Configure your App Service or Azure Functions app to use Twitter login
app-service Configure Authentication User Identities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-authentication-user-identities.md
Title: User identities in AuthN/AuthZ
description: Learn how to access user identities when using the built-in authentication and authorization in App Service. Last updated 03/29/2021+++ # Work with user identities in Azure App Service authentication
This article shows you how to work with user identities when using the built-in
## Access user claims in app code
-For all language frameworks, App Service makes the claims in the incoming token (whether from an authenticated end user or a client application) available to your code by injecting them into the request headers. External requests aren't allowed to set these headers, so they are present only if set by App Service. Some example headers include:
+For all language frameworks, App Service makes the claims in the incoming token (whether from an authenticated end user or a client application) available to your code by injecting them into the request headers. External requests aren't allowed to set these headers, so they're present only if set by App Service. Some example headers include:
| Header | Description |
|--|--|
-| `X-MS-CLIENT-PRINCIPAL` | A Base64 encoded JSON representation of available claims. See [Decoding the client principal header](#decoding-the-client-principal-header) for more information. |
+| `X-MS-CLIENT-PRINCIPAL` | A Base64 encoded JSON representation of available claims. For more information, see [Decoding the client principal header](#decoding-the-client-principal-header). |
| `X-MS-CLIENT-PRINCIPAL-ID` | An identifier for the caller set by the identity provider. |
| `X-MS-CLIENT-PRINCIPAL-NAME` | A human-readable name for the caller set by the identity provider, e.g. Email Address, User Principal Name. |
| `X-MS-CLIENT-PRINCIPAL-IDP` | The name of the identity provider used by App Service Authentication. |
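For example, here's a minimal ASP.NET Core sketch that reads one of these injected headers; the helper name is ours, not part of App Service.

```csharp
using Microsoft.AspNetCore.Http;

public static class EasyAuthHeaders
{
    // Reads the caller's display name from the header App Service injects.
    // Returns null when the request wasn't authenticated by App Service.
    public static string? GetCallerName(HttpRequest request) =>
        request.Headers.TryGetValue("X-MS-CLIENT-PRINCIPAL-NAME", out var name)
            ? name.ToString()
            : null;
}
```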
Provider tokens are also exposed through similar headers. For example, the Micro
> [!NOTE]
> Different language frameworks may present these headers to the app code in different formats, such as lowercase or title case.
-Code that is written in any language or framework can get the information that it needs from these headers. [Decoding the client principal header](#decoding-the-client-principal-header) covers this process. For some frameworks, the platform also provides additional options which may be more convenient
+Code that is written in any language or framework can get the information that it needs from these headers. [Decoding the client principal header](#decoding-the-client-principal-header) covers this process. For some frameworks, the platform also provides extra options that may be more convenient.
### Decoding the client principal header
Code that is written in any language or framework can get the information that i
| `name_typ` | string | The name claim type, which is typically a URI providing scheme information about the `name` claim if one is defined. |
| `role_typ` | string | The role claim type, which is typically a URI providing scheme information about the `role` claim if one is defined. |
-To process this header, your app will need to decode the payload and iterate through the `claims` array to find the claims of interest. It may be convenient to convert these into a representation used by the app's language framework. Here is an example of this process in C# that constructs a [ClaimsPrincipal](/dotnet/api/system.security.claims.claimsprincipal) type for the app to use:
+To process this header, your app will need to decode the payload and iterate through the `claims` array to find the claims of interest. It may be convenient to convert these into a representation used by the app's language framework. Here's an example of this process in C# that constructs a [ClaimsPrincipal](/dotnet/api/system.security.claims.claimsprincipal) type for the app to use:
```csharp
using System;
public static class ClaimsPrincipalParser
For ASP.NET 4.6 apps, App Service populates [ClaimsPrincipal.Current](/dotnet/api/system.security.claims.claimsprincipal.current) with the authenticated user's claims, so you can follow the standard .NET code pattern, including the `[Authorize]` attribute. Similarly, for PHP apps, App Service populates the `_SERVER['REMOTE_USER']` variable. For Java apps, the claims are [accessible from the Tomcat servlet](configure-language-java.md#authenticate-users-easy-auth).
-For [Azure Functions](../azure-functions/functions-overview.md), `ClaimsPrincipal.Current` is not populated for .NET code, but you can still find the user claims in the request headers, or get the `ClaimsPrincipal` object from the request context or even through a binding parameter. See [working with client identities in Azure Functions](../azure-functions/functions-bindings-http-webhook-trigger.md#working-with-client-identities) for more information.
+For [Azure Functions](../azure-functions/functions-overview.md), `ClaimsPrincipal.Current` isn't populated for .NET code, but you can still find the user claims in the request headers, or get the `ClaimsPrincipal` object from the request context or even through a binding parameter. For more information, see [Working with client identities in Azure Functions](../azure-functions/functions-bindings-http-webhook-trigger.md#working-with-client-identities).
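As a small illustration of the binding-parameter option, here's a sketch for an in-process .NET function; isolated-worker apps would read the injected headers instead.

```csharp
using System.Security.Claims;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;

public static class WhoAmI
{
    // The runtime binds the authenticated caller to the ClaimsPrincipal parameter.
    [FunctionName("WhoAmI")]
    public static IActionResult Run(
        [HttpTrigger(AuthorizationLevel.Anonymous, "get")] HttpRequest req,
        ClaimsPrincipal principal)
    {
        return new OkObjectResult(principal.Identity?.Name ?? "anonymous");
    }
}
```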
For .NET Core, [Microsoft.Identity.Web](https://www.nuget.org/packages/Microsoft.Identity.Web/) supports populating the current user with App Service authentication. To learn more, you can read about it on the [Microsoft.Identity.Web wiki](https://github.com/AzureAD/microsoft-identity-web/wiki/1.2.0#integration-with-azure-app-services-authentication-of-web-apps-running-with-microsoftidentityweb), or see it demonstrated in [this tutorial for a web app accessing Microsoft Graph](./scenario-secure-app-access-microsoft-graph-as-user.md?tabs=command-line#install-client-library-packages).
For .NET Core, [Microsoft.Identity.Web](https://www.nuget.org/packages/Microsoft
## Access user claims using the API
-If the [token store](overview-authentication-authorization.md#token-store) is enabled for your app, you can also obtain additional details on the authenticated user by calling `/.auth/me`.
+If the [token store](overview-authentication-authorization.md#token-store) is enabled for your app, you can also obtain other details on the authenticated user by calling `/.auth/me`.
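For example, here's a minimal sketch of calling that endpoint; the host name is a placeholder, and the `HttpClient` passed in is assumed to already carry the caller's App Service authentication session cookie or token.

```csharp
using System.Net.Http;
using System.Threading.Tasks;

public static class TokenStoreClient
{
    // Returns the JSON array describing the signed-in user and provider tokens.
    public static Task<string> GetUserDetailsAsync(HttpClient authenticatedClient) =>
        authenticatedClient.GetStringAsync(
            "https://<app-name>.azurewebsites.net/.auth/me"); // placeholder host
}
```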
## Next steps
app-service Configure Common https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-common.md
keywords: azure app service, web app, app settings, environment variables
ms.assetid: 9af8a367-7d39-4399-9941-b80cbc5f39a0 Last updated 04/21/2023-+ ms.devlang: azurecli++ # Configure an App Service app
This article explains how to configure common settings for web apps, mobile back
## Configure app settings
+> [!NOTE]
+> - App setting names can only contain letters, numbers (0-9), periods ("."), and underscores ("_")
+> - Special characters in the value of an App Setting must be escaped as needed by the target OS
+>
+> For example, to set an environment variable in App Service Linux with the value `"pa$$w0rd\"`, the string for the app setting should be: `"pa\$\$w0rd\\"`
+ In App Service, app settings are variables passed as environment variables to the application code. For Linux apps and custom containers, App Service passes app settings to the container using the `--env` flag to set the environment variable in the container. In either case, they're injected into your app environment at app startup. When you add, remove, or edit app settings, App Service triggers an app restart. For ASP.NET and ASP.NET Core developers, setting app settings in App Service is like setting them in `<appSettings>` in *Web.config* or *appsettings.json*, but the values in App Service override the ones in *Web.config* or *appsettings.json*. You can keep development settings (for example, local MySQL password) in *Web.config* or *appsettings.json* and production secrets (for example, Azure MySQL database password) safely in App Service. The same code uses your development settings when you debug locally, and it uses your production secrets when deployed to Azure.
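For example, here's a minimal sketch of reading an app setting back as an environment variable at runtime; `MY_PASSWORD` is a placeholder name.

```csharp
using System;

// App settings surface to the code as environment variables at startup.
// Per the note above, a Linux value of pa$$w0rd\ must be stored escaped as
// "pa\$\$w0rd\\"; the app reads the unescaped value back.
var value = Environment.GetEnvironmentVariable("MY_PASSWORD");
Console.WriteLine(value ?? "(not set)");
```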
app-service Configure Custom Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-custom-container.md
Title: Configure a custom container description: Learn how to configure a custom container in Azure App Service. This article shows the most common configuration tasks. -++ Last updated 01/04/2023
app-service Configure Domain Traffic Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-domain-traffic-manager.md
ms.assetid: 0f96c0e7-0901-489b-a95a-e3b66ca0a1c2
Last updated 03/05/2020 ++ # Configure a custom domain name in Azure App Service with Traffic Manager integration
After the records for your domain name have propagated, use the browser to verif
## Next steps

> [!div class="nextstepaction"]
-> [Secure a custom DNS name with an TLS/SSL binding in Azure App Service](configure-ssl-bindings.md)
+> [Secure a custom DNS name with a TLS/SSL binding in Azure App Service](configure-ssl-bindings.md)
app-service Configure Encrypt At Rest Using Cmk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-encrypt-at-rest-using-cmk.md
Title: Encrypt your application source at rest
description: Learn how to encrypt your application data in Azure Storage and deploy it as a package file. Last updated 03/06/2020++ # Encryption at rest using customer-managed keys
app-service Configure Language Dotnet Framework https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-language-dotnet-framework.md
ms.devlang: csharp
Last updated 06/02/2020++ # Configure an ASP.NET app for Azure App Service
app-service Configure Language Dotnetcore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-language-dotnetcore.md
Title: Configure ASP.NET Core apps
-description: Learn how to configure a ASP.NET Core app in the native Windows instances, or in a pre-built Linux container, in Azure App Service. This article shows the most common configuration tasks.
+description: Learn how to configure an ASP.NET Core app on native Windows instances, or in a prebuilt Linux container, in Azure App Service. This article shows the most common configuration tasks.
ms.devlang: csharp Last updated 06/02/2020 zone_pivot_groups: app-service-platform-windows-linux++ # Configure an ASP.NET Core app for Azure App Service
az webapp config appsettings set --name <app-name> --resource-group <resource-gr
az webapp config appsettings set --name <app-name> --resource-group <resource-group-name> --settings POST_BUILD_COMMAND="echo foo, scripts/postbuild.sh"
```
-For additional environment variables to customize build automation, see [Oryx configuration](https://github.com/microsoft/Oryx/blob/master/doc/configuration.md).
+For other environment variables to customize build automation, see [Oryx configuration](https://github.com/microsoft/Oryx/blob/master/doc/configuration.md).
For more information on how App Service runs and builds ASP.NET Core apps in Linux, see [Oryx documentation: How .NET Core apps are detected and built](https://github.com/microsoft/Oryx/blob/master/doc/runtimes/dotnetcore.md).
az webapp config appsettings set --name <app-name> --resource-group <resource-gr
## Deploy multi-project solutions
-When a Visual Studio solution includes multiple projects, the Visual Studio publish process already includes selecting the project to deploy. When you deploy to the App Service deployment engine, such as with Git, or with ZIP deploy [with build automation enabled](deploy-zip.md#enable-build-automation-for-zip-deploy), the App Service deployment engine picks the first Web Site or Web Application Project it finds as the App Service app. You can specify which project App Service should use by specifying the `PROJECT` app setting. For example, run the following in the [Cloud Shell](https://shell.azure.com):
+When a Visual Studio solution includes multiple projects, the Visual Studio publish process already includes selecting the project to deploy. When you deploy to the App Service deployment engine, such as with Git, or with ZIP deploy [with build automation enabled](deploy-zip.md#enable-build-automation-for-zip-deploy), the App Service deployment engine picks the first Web Site or Web Application Project it finds as the App Service app. You can specify which project App Service should use by specifying the `PROJECT` app setting. For example, run the following command in the [Cloud Shell](https://shell.azure.com):
```azurecli-interactive
az webapp config appsettings set --resource-group <resource-group-name> --name <app-name> --settings PROJECT="<project-name>/<project-name>.csproj"
For more information, see [Configure ASP.NET Core to work with proxy servers and
::: zone-end
-Or, see additional resources:
+Or, see more resources:
[Environment variables and app settings reference](reference-app-settings.md)
app-service Configure Language Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-language-java.md
keywords: azure app service, web app, windows, oss, java, tomcat, jboss
ms.devlang: java Last updated 04/12/2019- zone_pivot_groups: app-service-platform-windows-linux adobe-target: true++ # Configure a Java app for Azure App Service
This command adds a `azure-webapp-maven-plugin` plugin and related configuration
mvn package azure-webapp:deploy
```
-Here is a sample configuration in `pom.xml`:
+Here's a sample configuration in `pom.xml`:
```xml
<plugin>
Here is a sample configuration in `pom.xml`:
```

1. Configure your Web App details; the corresponding Azure resources will be created if they don't exist.
-Here is a sample configuration, for details, refer to this [document](https://github.com/microsoft/azure-gradle-plugins/wiki/Webapp-Configuration).
+Here's a sample configuration; for details, see this [document](https://github.com/microsoft/azure-gradle-plugins/wiki/Webapp-Configuration).
```groovy
azurewebapp {
Azure provides seamless Java App Service development experience in popular Java
To deploy .jar files to Java SE, use the `/api/publish/` endpoint of the Kudu site. For more information on this API, see [this documentation](./deploy-zip.md#deploy-warjarear-packages).

> [!NOTE]
-> Your .jar application must be named `app.jar` for App Service to identify and run your application. The Maven Plugin (mentioned above) will automatically rename your application for you during deployment. If you do not wish to rename your JAR to *app.jar*, you can upload a shell script with the command to run your .jar app. Paste the absolute path to this script in the [Startup File](./faq-app-service-linux.yml) textbox in the Configuration section of the portal. The startup script does not run from the directory into which it is placed. Therefore, always use absolute paths to reference files in your startup script (for example: `java -jar /home/myapp/myapp.jar`).
+> Your .jar application must be named `app.jar` for App Service to identify and run your application. The Maven Plugin (mentioned above) will automatically rename your application for you during deployment. If you don't wish to rename your JAR to *app.jar*, you can upload a shell script with the command to run your .jar app. Paste the absolute path to this script in the [Startup File](./faq-app-service-linux.yml) textbox in the Configuration section of the portal. The startup script doesn't run from the directory into which it's placed. Therefore, always use absolute paths to reference files in your startup script (for example: `java -jar /home/myapp/myapp.jar`).
#### Tomcat
To deploy .ear files, [use FTP](deploy-ftp.md). Your .ear application will be de
::: zone-end
-Do not deploy your .war or .jar using FTP. The FTP tool is designed to upload startup scripts, dependencies, or other runtime files. It is not the optimal choice for deploying web apps.
+Don't deploy your .war or .jar using FTP. The FTP tool is designed to upload startup scripts, dependencies, or other runtime files. It's not the optimal choice for deploying web apps.
## Logging and debugging apps
To learn more about the Java Profiler, visit the [Azure Application Insights doc
::: zone pivot="platform-windows"
-Enable [application logging](troubleshoot-diagnostic-logs.md#enable-application-logging-windows) through the Azure portal or [Azure CLI](/cli/azure/webapp/log#az-webapp-log-config) to configure App Service to write your application's standard console output and standard console error streams to the local filesystem or Azure Blob Storage. Logging to the local App Service filesystem instance is disabled 12 hours after it is configured. If you need longer retention, configure the application to write output to a Blob storage container. Your Java and Tomcat app logs can be found in the */home/LogFiles/Application/* directory.
+Enable [application logging](troubleshoot-diagnostic-logs.md#enable-application-logging-windows) through the Azure portal or [Azure CLI](/cli/azure/webapp/log#az-webapp-log-config) to configure App Service to write your application's standard console output and standard console error streams to the local filesystem or Azure Blob Storage. Logging to the local App Service filesystem instance is disabled 12 hours after it's configured. If you need longer retention, configure the application to write output to a Blob storage container. Your Java and Tomcat app logs can be found in the */home/LogFiles/Application/* directory.
::: zone-end

::: zone pivot="platform-linux"
To configure the app setting from the Maven plugin, add setting/value tags in th
::: zone pivot="platform-windows" > [!NOTE]
-> You do not need to create a web.config file when using Tomcat on Windows App Service.
+> You don't need to create a web.config file when using Tomcat on Windows App Service.
::: zone-end
Java applications running in App Service have the same set of [security best pra
### Authenticate users (Easy Auth)
-Set up app authentication in the Azure portal with the **Authentication and Authorization** option. From there, you can enable authentication using Azure Active Directory or social logins like Facebook, Google, or GitHub. Azure portal configuration only works when configuring a single authentication provider. For more information, see [Configure your App Service app to use Azure Active Directory login](configure-authentication-provider-aad.md) and the related articles for other identity providers. If you need to enable multiple sign-in providers, follow the instructions in the [customize sign-ins and sign-outs](configure-authentication-customize-sign-in-out.md) article.
+Set up app authentication in the Azure portal with the **Authentication and Authorization** option. From there, you can enable authentication using Azure Active Directory or social sign-ins like Facebook, Google, or GitHub. Azure portal configuration only works when configuring a single authentication provider. For more information, see [Configure your App Service app to use Azure Active Directory sign-in](configure-authentication-provider-aad.md) and the related articles for other identity providers. If you need to enable multiple sign-in providers, follow the instructions in the [customize sign-ins and sign-outs](configure-authentication-customize-sign-in-out.md) article.
#### Java SE
for (Object key : map.keySet()) {
}
```
-To sign out users, use the `/.auth/ext/logout` path. To perform other actions, see the documentation on [Customize sign-ins and sign-outs](configure-authentication-customize-sign-in-out.md). There is also official documentation on the Tomcat [HttpServletRequest interface](https://tomcat.apache.org/tomcat-5.5-doc/servletapi/javax/servlet/http/HttpServletRequest.html) and its methods. The following servlet methods are also hydrated based on your App Service configuration:
+To sign out users, use the `/.auth/ext/logout` path. To perform other actions, see the documentation on [Customize sign-ins and sign-outs](configure-authentication-customize-sign-in-out.md). There's also official documentation on the Tomcat [HttpServletRequest interface](https://tomcat.apache.org/tomcat-5.5-doc/servletapi/javax/servlet/http/HttpServletRequest.html) and its methods. The following servlet methods are also hydrated based on your App Service configuration:
```java
public boolean isSecure()
To inject these secrets in your Spring or Tomcat configuration file, use environ
### Use the Java Key Store
-By default, any public or private certificates [uploaded to App Service Linux](configure-ssl-certificate.md) will be loaded into the respective Java Key Stores as the container starts. After uploading your certificate, you will need to restart your App Service for it to be loaded into the Java Key Store. Public certificates are loaded into the Key Store at `$JRE_HOME/lib/security/cacerts`, and private certificates are stored in `$JRE_HOME/lib/security/client.jks`.
+By default, any public or private certificates [uploaded to App Service Linux](configure-ssl-certificate.md) will be loaded into the respective Java Key Stores as the container starts. After uploading your certificate, you'll need to restart your App Service for it to be loaded into the Java Key Store. Public certificates are loaded into the Key Store at `$JRE_HOME/lib/security/cacerts`, and private certificates are stored in `$JRE_HOME/lib/security/client.jks`.
More configuration may be necessary for encrypting your JDBC connection with certificates in the Java Key Store. Refer to the documentation for your chosen JDBC driver.
keyStore.load(
You can load certificates manually to the key store. Create an app setting, `SKIP_JAVA_KEYSTORE_LOAD`, with a value of `1` to disable App Service from loading the certificates into the key store automatically. All public certificates uploaded to App Service via the Azure portal are stored under `/var/ssl/certs/`. Private certificates are stored under `/var/ssl/private/`.
-You can interact or debug the Java Key Tool by [opening an SSH connection](configure-linux-open-ssh-session.md) to your App Service and running the command `keytool`. See the [Key Tool documentation](https://docs.oracle.com/javase/8/docs/technotes/tools/unix/keytool.html) for a list of commands. For more information on the KeyStore API, refer to [the official documentation](https://docs.oracle.com/javase/8/docs/api/java/security/KeyStore.html).
+You can interact or debug the Java Key Tool by [opening an SSH connection](configure-linux-open-ssh-session.md) to your App Service and running the command `keytool`. See the [Key Tool documentation](https://docs.oracle.com/javase/8/docs/technotes/tools/unix/keytool.html) for a list of commands. For more information on the KeyStore API, see [the official documentation](https://docs.oracle.com/javase/8/docs/api/java/security/KeyStore.html).
::: zone-end
This section shows how to connect Java applications deployed on Azure App Servic
### Configure Application Insights
-Azure Monitor Application Insights is a cloud native application monitoring service that enables customers to observe failures, bottlenecks, and usage patterns to improve application performance and reduce mean time to resolution (MTTR). With a few clicks or CLI commands, you can enable monitoring for your Node.js or Java apps, auto-collecting logs, metrics, and distributed traces, eliminating the need for including an SDK in your app. See the [Application Insights documentation](../azure-monitor/app/java-standalone-config.md) for more information about the available app settings for configuring the agent.
+Azure Monitor Application Insights is a cloud native application monitoring service that enables customers to observe failures, bottlenecks, and usage patterns to improve application performance and reduce mean time to resolution (MTTR). With a few clicks or CLI commands, you can enable monitoring for your Node.js or Java apps, autocollecting logs, metrics, and distributed traces, eliminating the need for including an SDK in your app. For more information about the available app settings for configuring the agent, see the [Application Insights documentation](../azure-monitor/app/java-standalone-config.md).
#### Azure portal
-To enable Application Insights from the Azure portal, go to **Application Insights** on the left-side menu and select **Turn on Application Insights**. By default, a new application insights resource of the same name as your Web App will be used. You can choose to use an existing application insights resource, or change the name. Click **Apply** at the bottom
+To enable Application Insights from the Azure portal, go to **Application Insights** on the left-side menu and select **Turn on Application Insights**. By default, a new Application Insights resource with the same name as your Web App will be used. You can choose to use an existing Application Insights resource, or change the name. Select **Apply** at the bottom.
#### Azure CLI
-To enable via the Azure CLI, you will need to create an Application Insights resource and set a couple app settings on the Azure portal to connect Application Insights to your web app.
+To enable via the Azure CLI, you'll need to create an Application Insights resource and set a couple of app settings on the Azure portal to connect Application Insights to your web app.
1. Enable the Application Insights extension
To enable via the Azure CLI, you will need to create an Application Insights res
az monitor app-insights component create --app <resource-name> -g <resource-group> --location westus2 --kind web --application-type web
```
- Note the values for `connectionString` and `instrumentationKey`, you will need these values in the next step.
+ Note the values for `connectionString` and `instrumentationKey`; you'll need these values in the next step.
> To retrieve a list of other locations, run `az account list-locations`.
To connect to data sources in Spring Boot applications, we suggest creating conn
app.datasource.url=${CUSTOMCONNSTR_exampledb}
```
-See the [Spring Boot documentation on data access](https://docs.spring.io/spring-boot/docs/current/reference/html/howto-data-access.html) and [externalized configurations](https://docs.spring.io/spring-boot/docs/current/reference/html/boot-features-external-config.html) for more information on this topic.
+For more information, see the [Spring Boot documentation on data access](https://docs.spring.io/spring-boot/docs/current/reference/html/howto-data-access.html) and [externalized configurations](https://docs.spring.io/spring-boot/docs/current/reference/html/boot-features-external-config.html).
::: zone pivot="platform-windows" ### Tomcat
-These instructions apply to all database connections. You will need to fill placeholders with your chosen database's driver class name and JAR file. Provided is a table with class names and driver downloads for common databases.
+These instructions apply to all database connections. You'll need to fill placeholders with your chosen database's driver class name and JAR file. The following table lists class names and driver downloads for common databases.
| Database | Driver Class Name | JDBC Driver |
|--|--|--|
Next, determine if the data source should be available to one application or to
#### Application-level data sources
-1. Create a *context.xml* file in the *META-INF/* directory of your project. Create the *META-INF/* directory if it does not exist.
+1. Create a *context.xml* file in the *META-INF/* directory of your project. Create the *META-INF/* directory if it doesn't exist.
2. In *context.xml*, add a `Context` element to link the data source to a JNDI address. Replace the `driverClassName` placeholder with your driver's class name from the table above.
This example transform adds a new connector node to `server.xml`. Note the *Iden
</xsl:copy>
</xsl:template>
- <!-- Add the new connector after the last existing Connnector if there is one -->
+ <!-- Add the new connector after the last existing Connnector if there's one -->
<xsl:template match="Connector[last()]" mode="insertConnector"> <xsl:call-template name="Copy" /> <xsl:call-template name="AddConnector" /> </xsl:template>
- <!-- ... or before the first Engine if there is no existing Connector -->
+ <!-- ... or before the first Engine if there's no existing Connector -->
<xsl:template match="Engine[1][not(preceding-sibling::Connector)]" mode="insertConnector"> <xsl:call-template name="AddConnector" />
The following example script copies a custom Tomcat to a local folder, performs
#### Finalize configuration
-Finally, we will place the driver JARs in the Tomcat classpath and restart your App Service. Ensure that the JDBC driver files are available to the Tomcat classloader by placing them in the */home/tomcat/lib* directory. (Create this directory if it does not already exist.) To upload these files to your App Service instance, perform the following steps:
+Finally, you'll place the driver JARs in the Tomcat classpath and restart your App Service. Ensure that the JDBC driver files are available to the Tomcat classloader by placing them in the */home/tomcat/lib* directory. (Create this directory if it doesn't already exist.) To upload these files to your App Service instance, perform the following steps:
1. In the [Cloud Shell](https://shell.azure.com), install the webapp extension:
Alternatively, you can use an FTP client to upload the JDBC driver. Follow these
### Tomcat
-These instructions apply to all database connections. You will need to fill placeholders with your chosen database's driver class name and JAR file. Provided is a table with class names and driver downloads for common databases.
+These instructions apply to all database connections. You'll need to fill placeholders with your chosen database's driver class name and JAR file. The following table lists class names and driver downloads for common databases.
| Database | Driver Class Name | JDBC Driver |
|--|--|--|
Next, determine if the data source should be available to one application or to
#### Application-level data sources
-1. Create a *context.xml* file in the *META-INF/* directory of your project. Create the *META-INF/* directory if it does not exist.
+1. Create a *context.xml* file in the *META-INF/* directory of your project. Create the *META-INF/* directory if it doesn't exist.
2. In *context.xml*, add a `Context` element to link the data source to a JNDI address. Replace the `driverClassName` placeholder with your driver's class name from the table above.
An example xsl file is provided below. The example xsl file adds a new connector
</xsl:copy>
</xsl:template>
- <!-- Add the new connector after the last existing Connnector if there is one -->
+ <!-- Add the new connector after the last existing Connnector if there's one -->
<xsl:template match="Connector[last()]" mode="insertConnector"> <xsl:call-template name="Copy" /> <xsl:call-template name="AddConnector" /> </xsl:template>
- <!-- ... or before the first Engine if there is no existing Connector -->
+ <!-- ... or before the first Engine if there's no existing Connector -->
<xsl:template match="Engine[1][not(preceding-sibling::Connector)]" mode="insertConnector"> <xsl:call-template name="AddConnector" />
An example xsl file is provided below. The example xsl file adds a new connector
Finally, place the driver JARs in the Tomcat classpath and restart your App Service.
-1. Ensure that the JDBC driver files are available to the Tomcat classloader by placing them in the */home/tomcat/lib* directory. (Create this directory if it does not already exist.) To upload these files to your App Service instance, perform the following steps:
+1. Ensure that the JDBC driver files are available to the Tomcat classloader by placing them in the */home/tomcat/lib* directory. (Create this directory if it doesn't already exist.) To upload these files to your App Service instance, perform the following steps:
1. In the [Cloud Shell](https://shell.azure.com), install the webapp extension:
There are three core steps when [registering a data source with JBoss EAP](https
data-source add --name=postgresDS --driver-name=postgres --jndi-name=java:jboss/datasources/postgresDS --connection-url=${POSTGRES_CONNECTION_URL,env.POSTGRES_CONNECTION_URL:jdbc:postgresql://db:5432/postgres} --user-name=${POSTGRES_SERVER_ADMIN_FULL_NAME,env.POSTGRES_SERVER_ADMIN_FULL_NAME:postgres} --password=${POSTGRES_SERVER_ADMIN_PASSWORD,env.POSTGRES_SERVER_ADMIN_PASSWORD:example} --use-ccm=true --max-pool-size=5 --blocking-timeout-wait-millis=5000 --enabled=true --driver-class=org.postgresql.Driver --exception-sorter-class-name=org.jboss.jca.adapters.jdbc.extensions.postgres.PostgreSQLExceptionSorter --jta=true --use-java-context=true --valid-connection-checker-class-name=org.jboss.jca.adapters.jdbc.extensions.postgres.PostgreSQLValidConnectionChecker
```
-1. Create a startup script, `startup_script.sh` that calls the JBoss CLI commands. The example below shows how to call your `jboss-cli-commands.cli`. Later you will configure App Service to run this script when the container starts.
+1. Create a startup script, `startup_script.sh`, that calls the JBoss CLI commands. The example below shows how to call your `jboss-cli-commands.cli`. Later, you'll configure App Service to run this script when the container starts.
```bash
$JBOSS_HOME/bin/jboss-cli.sh --connect --file=/home/site/deployments/tools/jboss-cli-commands.cli
There are three core steps when [registering a data source with JBoss EAP](https
1. Using an FTP client of your choice, upload your JDBC driver, `jboss-cli-commands.cli`, `startup_script.sh`, and the module definition to `/site/deployments/tools/`.
2. Configure your site to run `startup_script.sh` when the container starts. In the Azure portal, navigate to **Configuration** > **General Settings** > **Startup Command**. Set the startup command field to `/home/site/deployments/tools/startup_script.sh`. **Save** your changes.
-To confirm that the datasource was added to the JBoss server, SSH into your webapp and run `$JBOSS_HOME/bin/jboss-cli.sh --connect`. Once you are connected to JBoss run the `/subsystem=datasources:read-resource` to print a list of the data sources.
+To confirm that the datasource was added to the JBoss server, SSH into your webapp and run `$JBOSS_HOME/bin/jboss-cli.sh --connect`. Once you're connected to JBoss, run `/subsystem=datasources:read-resource` to print a list of the data sources.
::: zone-end
To confirm that the datasource was added to the JBoss server, SSH into your weba
## Choosing a Java runtime version
-App Service allows users to choose the major version of the JVM, such as Java 8 or Java 11, and the patch version, such as 1.8.0_232 or 11.0.5. You can also choose to have the patch version automatically updated as new minor versions become available. In most cases, production sites should use pinned patch JVM versions. This will prevent unanticipated outages during a patch version auto-update. All Java web apps use 64-bit JVMs, this is not configurable.
+App Service allows users to choose the major version of the JVM, such as Java 8 or Java 11, and the patch version, such as 1.8.0_232 or 11.0.5. You can also choose to have the patch version automatically updated as new minor versions become available. In most cases, production sites should use pinned patch JVM versions. This prevents unanticipated outages during a patch version autoupdate. All Java web apps use 64-bit JVMs; this isn't configurable.
-If you are using Tomcat, you can choose to pin the patch version of Tomcat. On Windows, you can pin the patch versions of the JVM and Tomcat independently. On Linux, you can pin the patch version of Tomcat; the patch version of the JVM will also be pinned but is not separately configurable.
+If you're using Tomcat, you can choose to pin the patch version of Tomcat. On Windows, you can pin the patch versions of the JVM and Tomcat independently. On Linux, you can pin the patch version of Tomcat; the patch version of the JVM will also be pinned but isn't separately configurable.
-If you choose to pin the minor version, you will need to periodically update the JVM minor version on the site. To ensure that your application runs on the newer minor version, create a staging slot and increment the minor version on the staging site. Once you have confirmed the application runs correctly on the new minor version, you can swap the staging and production slots.
+If you choose to pin the minor version, you'll need to periodically update the JVM minor version on the site. To ensure that your application runs on the newer minor version, create a staging slot and increment the minor version on the staging site. Once you have confirmed the application runs correctly on the new minor version, you can swap the staging and production slots.
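A sketch of that flow with the Azure CLI (slot, app, and resource group names are placeholders):

```azurecli-interactive
# Create a staging slot to validate the app on the new minor version.
az webapp deployment slot create --resource-group <resource-group-name> --name <app-name> --slot staging

# After confirming the app runs correctly, swap staging into production.
az webapp deployment slot swap --resource-group <resource-group-name> --name <app-name> \
    --slot staging --target-slot production
```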
::: zone pivot="platform-linux"
If you choose to pin the minor version, you will need to periodically update the
App Service supports clustering for JBoss EAP versions 7.4.1 and greater. To enable clustering, your web app must be [integrated with a virtual network](overview-vnet-integration.md). When the web app is integrated with a virtual network, the web app will restart and JBoss EAP will automatically start up with a clustered configuration. The JBoss EAP instances will communicate over the subnet specified in the virtual network integration, using the ports shown in the `WEBSITES_PRIVATE_PORTS` environment variable at runtime. You can disable clustering by creating an app setting named `WEBSITE_DISABLE_CLUSTERING` with any value.

> [!NOTE]
-> If you are enabling your virtual network integration with an ARM template, you will need to manually set the property `vnetPrivatePorts` to a value of `2`. If you enable virtual network integration from the CLI or Portal, this property will be set for you automatically.
+> If you're enabling your virtual network integration with an ARM template, you'll need to manually set the property `vnetPrivatePorts` to a value of `2`. If you enable virtual network integration from the CLI or Portal, this property will be set for you automatically.
When clustering is enabled, the JBoss EAP instances use the FILE_PING JGroups discovery protocol to discover new instances and persist the cluster information like the cluster members, their identifiers, and their IP addresses. On App Service, these files are under `/home/clusterinfo/`. The first EAP instance to start will obtain read/write permissions on the cluster membership file. Other instances will read the file, find the primary node, and coordinate with that node to be included in the cluster and added to the file.

The Premium V3 and Isolated V2 App Service Plan types can optionally be distributed across Availability Zones to improve resiliency and reliability for your business-critical workloads. This architecture is also known as [zone redundancy](../availability-zones/migrate-app-service.md). The JBoss EAP clustering feature is compatible with the zone redundancy feature.
-#### Auto-Scale Rules
+#### Autoscale Rules
-When configuring auto-scale rules for horizontal scaling it is important to remove instances incrementally (one at a time) to ensure each removed instance can transfer its activity (such as handling a database transaction) to another member of the cluster. When configuring your autoscale rules in the Portal to scale down, use the following options:
+When configuring autoscale rules for horizontal scaling, it's important to remove instances incrementally (one at a time) to ensure each removed instance can transfer its activity (such as handling a database transaction) to another member of the cluster. When configuring your autoscale rules in the Portal to scale down, use the following options:
- **Operation**: "Decrease count by" - **Cool down**: "5 minutes" or greater - **Instance count**: 1
-You do not need to incrementally add instances (scaling out), you can add multiple instances to the cluster at a time.
+You don't need to incrementally add instances (scaling out); you can add multiple instances to the cluster at a time.
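If you script the scale-in rule instead of using the portal, a sketch with the Azure CLI (the autoscale setting name and the CPU condition are illustrative assumptions):

```azurecli-interactive
# Remove one instance at a time, with a 5-minute cooldown between scale-in operations.
az monitor autoscale rule create --resource-group <resource-group-name> \
    --autoscale-name <autoscale-setting-name> \
    --condition "CpuPercentage < 40 avg 10m" \
    --scale in 1 --cooldown 5
```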
### JBoss EAP App Service Plans
Microsoft and Adoptium builds of OpenJDK are provided and supported on App Servi
\* In future releases, Java 8 on Linux will be distributed from Adoptium builds of the OpenJDK.
-If you are [pinned](#choosing-a-java-runtime-version) to an older minor version of Java, your site may be using the deprecated [Azul Zulu for Azure](https://devblogs.microsoft.com/java/end-of-updates-support-and-availability-of-zulu-for-azure/) binaries provided through [Azul Systems](https://www.azul.com/). You can continue to use these binaries for your site, but any security patches or improvements will only be available in new versions of the OpenJDK, so we recommend that you periodically update your Web Apps to a later version of Java.
+If you're [pinned](#choosing-a-java-runtime-version) to an older minor version of Java, your site may be using the deprecated [Azul Zulu for Azure](https://devblogs.microsoft.com/java/end-of-updates-support-and-availability-of-zulu-for-azure/) binaries provided through [Azul Systems](https://www.azul.com/). You can continue to use these binaries for your site, but any security patches or improvements will only be available in new versions of the OpenJDK, so we recommend that you periodically update your Web Apps to a later version of Java.
Major version updates will be provided through new runtime options in Azure App Service. Customers update to these newer versions of Java by configuring their App Service deployment and are responsible for testing and ensuring the major update meets their needs.
Supported JDKs are automatically patched on a quarterly basis in January, April,
Patches and fixes for major security vulnerabilities will be released as soon as they become available in Microsoft builds of the OpenJDK. A "major" vulnerability is defined by a base score of 9.0 or higher on the [NIST Common Vulnerability Scoring System, version 2](https://nvd.nist.gov/vuln-metrics/cvss).
-Tomcat 8.0 has reached [End of Life (EOL) as of September 30, 2018](https://tomcat.apache.org/tomcat-80-eol.html). While the runtime is still available on Azure App Service, Azure will not apply security updates to Tomcat 8.0. If possible, migrate your applications to Tomcat 8.5 or 9.0. Both Tomcat 8.5 and 9.0 are available on Azure App Service. See the [official Tomcat site](https://tomcat.apache.org/whichversion.html) for more information.
+Tomcat 8.0 has reached [End of Life (EOL) as of September 30, 2018](https://tomcat.apache.org/tomcat-80-eol.html). While the runtime is still available on Azure App Service, Azure will not apply security updates to Tomcat 8.0. If possible, migrate your applications to Tomcat 8.5 or 9.0. Both Tomcat 8.5 and 9.0 are available on Azure App Service. For more information, see the [official Tomcat site](https://tomcat.apache.org/whichversion.html).
-Community support for Java 7 will terminate on July 29th, 2022 and [Java 7 will be retired from App Service](https://azure.microsoft.com/updates/transition-to-java-11-or-8-by-29-july-2022/) at that time. If you have a web app running on Java 7, please upgrade to Java 8 or 11 before July 29th.
+Community support for Java 7 will terminate on July 29, 2022 and [Java 7 will be retired from App Service](https://azure.microsoft.com/updates/transition-to-java-11-or-8-by-29-july-2022/) at that time. If you have a web app running on Java 7, please upgrade to Java 8 or 11 before July 29.
### Deprecation and retirement
app-service Configure Language Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-language-nodejs.md
ms.devlang: javascript, devx-track-azurecli Last updated 01/21/2022++ zone_pivot_groups: app-service-platform-windows-linux
app-service Configure Language Php https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-language-php.md
description: Learn how to configure a PHP app in a pre-built PHP container, in A
ms.devlang: php Previously updated : 05/09/2023 Last updated : 08/31/2023 zone_pivot_groups: app-service-platform-windows-linux++
For more information on how App Service runs and builds PHP apps in Linux, see [
## Customize start-up
-By default, the built-in PHP container runs the Apache server. At start-up, it runs `apache2ctl -D FOREGROUND"`. If you like, you can run a different command at start-up, by running the following command in the [Cloud Shell](https://shell.azure.com):
+If you want, you can run a custom command at container start-up by running the following command in the [Cloud Shell](https://shell.azure.com):
```azurecli-interactive
az webapp config set --resource-group <resource-group-name> --name <app-name> --startup-file "<custom-command>"
```
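For example, the Nginx site-root change described later in this article could be wired up as the custom command (a sketch; the resource group and app names are placeholders):

```azurecli-interactive
az webapp config set --resource-group <resource-group-name> --name <app-name> \
    --startup-file "cp /home/site/wwwroot/default /etc/nginx/sites-available/default && service nginx reload"
```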
By default, Azure App Service points the root virtual application path (*/*) to
The web framework of your choice may use a subdirectory as the site root. For example, [Laravel](https://laravel.com/) uses the `public/` subdirectory as the site root.
-The default PHP image for App Service uses Apache, and it doesn't let you customize the site root for your app. To work around this limitation, add an *.htaccess* file to your repository root with the following content:
+The default PHP image for App Service uses Nginx, and you change the site root by [configuring the Nginx server with the `root` directive](https://docs.nginx.com/nginx/admin-guide/web-server/serving-static-content/). This [example configuration file](https://github.com/Azure-Samples/laravel-tasks/blob/main/default) contains the following snippet, which changes the `root` directive:
```
-<IfModule mod_rewrite.c>
- RewriteEngine on
- RewriteCond %{REQUEST_URI} ^(.*)
- RewriteRule ^(.*)$ /public/$1 [NC,L,QSA]
-</IfModule>
+server {
+ #proxy_cache cache;
+ #proxy_cache_valid 200 1s;
+ listen 8080;
+ listen [::]:8080;
+ root /home/site/wwwroot/public; # Changed for Laravel
+
+ location / {
+ index index.php index.html index.htm hostingstart.html;
+ try_files $uri $uri/ /index.php?$args; # Changed for Laravel
+ }
+ ...
+```
+
+The default container uses the configuration file found at */etc/nginx/sites-available/default*. Keep in mind that any edit you make to this file is erased when the app restarts. To make a change that is effective across app restarts, [add a custom start-up command](#customize-start-up) like this example:
+
+```
+cp /home/site/wwwroot/default /etc/nginx/sites-available/default && service nginx reload
```
-If you would rather not use *.htaccess* rewrite, you can deploy your Laravel application with a [custom Docker image](quickstart-custom-container.md) instead.
+This command replaces the default Nginx configuration file with a file named *default* in your repository root and reloads Nginx so that the change takes effect.
::: zone-end
Then, go to the Azure portal and add an Application Setting to scan the "ini" di
::: zone pivot="platform-windows"
-To customize PHP_INI_SYSTEM directives (see [php.ini directives](https://www.php.net/manual/ini.list.php)), you can't use the *.htaccess* approach. App Service provides a separate mechanism using the `PHP_INI_SCAN_DIR` app setting.
+To customize PHP_INI_SYSTEM directives (see [php.ini directives](https://www.php.net/manual/ini.list.php)), use the `PHP_INI_SCAN_DIR` app setting.
First, run the following command in the [Cloud Shell](https://shell.azure.com) to add an app setting called `PHP_INI_SCAN_DIR`:
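A plausible sketch of that command (the *d:\home\site\ini* directory is an assumed location; names are placeholders):

```azurecli-interactive
az webapp config appsettings set --resource-group <resource-group-name> --name <app-name> \
    --settings PHP_INI_SCAN_DIR="d:\home\site\ini"
```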
app-service Configure Language Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-language-python.md
Title: Configure Linux Python apps
description: Learn how to configure the Python container in which web apps are run, using both the Azure portal and the Azure CLI. Last updated 11/16/2022-+++ ms.devlang: python adobe-target: true
app-service Configure Language Ruby https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-language-ruby.md
Last updated 06/18/2020
ms.devlang: ruby ++ # Configure a Linux Ruby app for Azure App Service
app-service Configure Ssl App Service Certificate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-ssl-app-service-certificate.md
Last updated 07/28/2023 ++ # Create and manage an App Service certificate for your web app
app-service Configure Ssl Bindings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-ssl-bindings.md
Last updated 04/20/2023 ++ # Secure a custom DNS name with a TLS/SSL binding in Azure App Service
In the <a href="https://portal.azure.com" target="_blank">Azure portal</a>:
1. In **TLS/SSL type**, choose between **SNI SSL** and **IP based SSL**.
   - **[SNI SSL](https://en.wikipedia.org/wiki/Server_Name_Indication)**: Multiple SNI SSL bindings may be added. This option allows multiple TLS/SSL certificates to secure multiple domains on the same IP address. Most modern browsers (including Internet Explorer, Chrome, Firefox, and Opera) support SNI (for more information, see [Server Name Indication](https://wikipedia.org/wiki/Server_Name_Indication)).
-
+
1. When adding a new certificate, validate the new certificate by selecting **Validate**.
app-service Configure Ssl Certificate In Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-ssl-certificate-in-code.md
Last updated 02/15/2023 ++
app-service Configure Ssl Certificate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-ssl-certificate.md
Last updated 07/28/2023 ++ # Add and manage TLS/SSL certificates in Azure App Service
app-service Deploy Authentication Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/deploy-authentication-types.md
Title: Authentication types by deployment methods
description: Learn the available types of authentication with Azure App Service when deploying your application code. Last updated 07/31/2023++ # Authentication types by deployment methods in Azure App Service
app-service Deploy Azure Pipelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/deploy-azure-pipelines.md
Last updated 09/13/2022
ms. ++ # Deploy to App Service using Azure Pipelines
Use [Azure Pipelines](/azure/devops/pipelines/) to automatically deploy your web app to [Azure App Service](./overview.md) on every successful build. Azure Pipelines lets you build, test, and deploy with continuous integration (CI) and continuous delivery (CD) using [Azure DevOps](/azure/devops/).
-YAML pipelines are defined using a YAML file in your repository. A step is the smallest building block of a pipeline and can be a script or task (pre-packaged script). [Learn about the key concepts and components that make up a pipeline](/azure/devops/pipelines/get-started/key-pipelines-concepts).
+YAML pipelines are defined using a YAML file in your repository. A step is the smallest building block of a pipeline and can be a script or task (prepackaged script). [Learn about the key concepts and components that make up a pipeline](/azure/devops/pipelines/get-started/key-pipelines-concepts).
You'll use the [Azure Web App task](/azure/devops/pipelines/tasks/deploy/azure-rm-web-app) to deploy to Azure App Service in your pipeline. For more complicated scenarios such as needing to use XML parameters in your deploy, you can use the [Azure App Service Deploy task](/azure/devops/pipelines/tasks/deploy/azure-rm-web-app).
You'll use the [Azure Web App task](/azure/devops/pipelines/tasks/deploy/azure-r
### Create your pipeline
-The code examples in this section assume you are deploying an ASP.NET web app. You can adapt the instructions for other frameworks.
+The code examples in this section assume you're deploying an ASP.NET web app. You can adapt the instructions for other frameworks.
Learn more about [Azure Pipelines ecosystem support](/azure/devops/pipelines/ecosystems/ecosystems).
To get started:
-Now you're ready to read through the rest of this topic to learn some of the more common changes that people make to customize an Azure Web App deployment.
+Now you're ready to read through the rest of this article to learn some of the more common changes that people make to customize an Azure Web App deployment.
## Use the Azure Web App task
This is where the task picks up the web package for deployment.
To deploy to Azure App Service, you'll need to use an Azure Resource Manager [service connection](/azure/devops/pipelines/library/service-endpoints). The Azure service connection stores the credentials to connect from Azure Pipelines or Azure DevOps Server to Azure.
-Learn more about [Azure Resource Manager service connections](/azure/devops/pipelines/library/connect-to-azure). If your service connection is not working as expected, see [Troubleshooting service connections](/azure/devops/pipelines/release/azure-rm-endpoint).
+Learn more about [Azure Resource Manager service connections](/azure/devops/pipelines/library/connect-to-azure). If your service connection isn't working as expected, see [Troubleshooting service connections](/azure/devops/pipelines/release/azure-rm-endpoint).
# [YAML](#tab/yaml/)
By default, your deployment happens to the root application in the Azure Web App
    VirtualApplication: '<name of virtual application>'
```
-* **VirtualApplication**: the name of the Virtual Application that has been configured in the Azure portal. See [Configure an App Service app in the Azure portal
-](./configure-common.md) for more details.
+* **VirtualApplication**: the name of the Virtual Application that has been configured in the Azure portal. For more information, see [Configure an App Service app in the Azure portal](./configure-common.md).
# [Classic](#tab/classic/)
The following example shows how to deploy to a staging slot, and then swap to a
* **appName**: the name of your existing app service.
* **deployToSlotOrASE**: Boolean. Deploy to an existing deployment slot or Azure App Service Environment.
* **resourceGroupName**: Name of the resource group. Required if `deployToSlotOrASE` is true.
-* **slotName**: Name of the slot, defaults to `production`. Required if `deployToSlotOrASE` is true.
+* **slotName**: Name of the slot, which defaults to `production`. Required if `deployToSlotOrASE` is true.
* **package**: the file path to the package or a folder containing your app service contents. Wildcards are supported.
* **SourceSlot**: Slot sent to production when `SwapWithProduction` is true.
* **SwapWithProduction**: Boolean. Swap the traffic of source slot with production.
To learn more about conditions, see [Specify conditions](/azure/devops/pipelines
In your release pipeline, you can implement various checks and conditions to control the deployment:

* Set *branch filters* to configure the *continuous deployment trigger* on the artifact of the release pipeline.
-* Set *pre-deployment approvals* as a pre-condition for deployment to a stage.
-* Configure *gates* as a pre-condition for deployment to a stage.
+* Set *pre-deployment approvals* as a precondition for deployment to a stage.
+* Configure *gates* as a precondition for deployment to a stage.
* Specify conditions for a task to run.

To learn more, see [Release, branch, and stage triggers](/azure/devops/pipelines/release/triggers), [Release deployment control using approvals](/azure/devops/pipelines/release/approvals/approvals), [Release deployment control using gates](/azure/devops/pipelines/release/approvals/gates), and [Specify conditions for running a task](/azure/devops/pipelines/process/conditions).
You can use a release pipeline to pick up the artifacts published by your build
* If you've just completed a CI build, choose the link (for example, _Build 20170815.1_) to open the build summary. Then choose **Release** to start a new release pipeline that's automatically linked to the build pipeline.
- * Open the **Releases** tab in **Azure Pipelines**, open the **+** drop-down
+ * Open the **Releases** tab in **Azure Pipelines**, open the **+** dropdown
in the list of release pipelines, and choose **Create release pipeline**.
-1. The easiest way to create a release pipeline is to use a template. If you are deploying a Node.js app, select the **Deploy Node.js App to Azure App Service** template.
+1. The easiest way to create a release pipeline is to use a template. If you're deploying a Node.js app, select the **Deploy Node.js App to Azure App Service** template.
Otherwise, select the **Azure App Service Deployment** template. Then choose **Apply**.

> [!NOTE]
You can use a release pipeline to pick up the artifacts published by your build
continuous deployment trigger is enabled, and add a filter to include the **main** branch.

> [!NOTE]
- > Continuous deployment is not enabled by default when you create a new release pipeline from the **Releases** tab.
+ > Continuous deployment isn't enabled by default when you create a new release pipeline from the **Releases** tab.
1. Open the **Tasks** tab and, with **Stage 1** selected, configure the task property variables as follows:

   * **Azure Subscription:** Select a connection from the list under **Available Azure Service Connections** or create a more restricted permissions connection to your Azure subscription.
- If you are using Azure Pipelines and if you see an **Authorize** button next to the input, click on it to authorize Azure Pipelines to connect to your Azure subscription. If you are using TFS or if you do not see
- the desired Azure subscription in the list of subscriptions, see [Azure Resource Manager service connection](/azure/devops/pipelines/library/connect-to-azure) to manually set up the connection.
+ If you're using Azure Pipelines and if you see an **Authorize** button next to the input, select it to authorize Azure Pipelines to connect to your Azure subscription. If you're using TFS or if you don't see the desired Azure subscription in the list of subscriptions, see [Azure Resource Manager service connection](/azure/devops/pipelines/library/connect-to-azure) to manually set up the connection.
* **App Service Name**: Select the name of the web app from your subscription.
app-service Deploy Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/deploy-best-practices.md
ms.assetid: bb51e565-e462-4c60-929a-2ff90121f41d
Last updated 07/31/2019 ++ # Deployment Best Practices
Every development team has unique requirements that can make implementing an eff
### Deployment Source
-A deployment source is the location of your application code. For production apps, the deployment source is usually a repository hosted by version control software such as [GitHub, BitBucket, or Azure Repos](deploy-continuous-deployment.md). For development and test scenarios, the deployment source may be [a project on your local machine](deploy-local-git.md). App Service also supports [OneDrive and Dropbox folders](deploy-content-sync.md) as deployment sources. While cloud folders can make it easy to get started with App Service, it is not typically recommended to use this source for enterprise-level production applications.
+A deployment source is the location of your application code. For production apps, the deployment source is usually a repository hosted by version control software such as [GitHub, BitBucket, or Azure Repos](deploy-continuous-deployment.md). For development and test scenarios, the deployment source may be [a project on your local machine](deploy-local-git.md). App Service also supports [OneDrive and Dropbox folders](deploy-content-sync.md) as deployment sources. While cloud folders can make it easy to get started with App Service, it's not typically recommended to use this source for enterprise-level production applications.
### Build Pipeline
Once you decide on a deployment source, your next step is to choose a build pipe
The deployment mechanism is the action used to put your built application into the */home/site/wwwroot* directory of your web app. The */wwwroot* directory is a mounted storage location shared by all instances of your web app. When the deployment mechanism puts your application in this directory, your instances receive a notification to sync the new files. App Service supports the following deployment mechanisms:

-- Kudu endpoints: [Kudu](https://github.com/projectkudu/kudu/wiki) is the open-source developer productivity tool that runs as a separate process in Windows App Service, and as a second container in Linux App Service. Kudu handles continuous deployments and provides HTTP endpoints for deployment, such as zipdeploy.
-- FTP and WebDeploy: Using your [site or user credentials](deploy-configure-credentials.md), you can upload files [via FTP](deploy-ftp.md) or WebDeploy. These mechanisms do not go through Kudu.
+- Kudu endpoints: [Kudu](https://github.com/projectkudu/kudu/wiki) is the open-source developer productivity tool that runs as a separate process in Windows App Service, and as a second container in Linux App Service. Kudu handles continuous deployments and provides HTTP endpoints for deployment, such as [zipdeploy/](deploy-zip.md).
+- FTP and WebDeploy: Using your [site or user credentials](deploy-configure-credentials.md), you can upload files [via FTP](deploy-ftp.md) or WebDeploy. These mechanisms don't go through Kudu.
Deployment tools such as Azure Pipelines, Jenkins, and editor plugins use one of these deployment mechanisms.

## Use deployment slots
-Whenever possible, use [deployment slots](deploy-staging-slots.md) when deploying a new production build. When using a Standard App Service Plan tier or better, you can deploy your app to a staging environment, validate your changes, and do smoke tests. When you are ready, you can swap your staging and production slots. The swap operation warms up the necessary worker instances to match your production scale, thus eliminating downtime.
+Whenever possible, use [deployment slots](deploy-staging-slots.md) when deploying a new production build. When using a Standard App Service Plan tier or better, you can deploy your app to a staging environment, validate your changes, and do smoke tests. When you're ready, you can swap your staging and production slots. The swap operation warms up the necessary worker instances to match your production scale, thus eliminating downtime.
### Continuously deploy code

If your project has designated branches for testing, QA, and staging, then each of those branches should be continuously deployed to a staging slot. (This is known as the [Gitflow design](https://www.atlassian.com/git/tutorials/comparing-workflows/gitflow-workflow).) This allows your stakeholders to easily assess and test the deployed branch.
-Continuous deployment should never be enabled for your production slot. Instead, your production branch (often main) should be deployed onto a non-production slot. When you are ready to release the base branch, swap it into the production slot. Swapping into production, instead of deploying to production, prevents downtime and allows you to roll back the changes by swapping again.
+Continuous deployment should never be enabled for your production slot. Instead, your production branch (often main) should be deployed onto a nonproduction slot. When you're ready to release the base branch, swap it into the production slot. Swapping into production, instead of deploying to production, prevents downtime and allows you to roll back the changes by swapping again.
![Diagram that shows the flow between the Dev, Staging, and Main branches and the slots they are deployed to.](media/app-service-deploy-best-practices/slot_flow_code_diagam.png)
az ad sp create-for-rbac --name "myServicePrincipal" --role contributor \
  --sdk-auth
```
-In your script, log in using `az login --service-principal`, providing the principal's information. You can then use `az webapp config container set` to set the container name, tag, registry URL, and registry password. Below are some helpful links for you to construct your container CI process.
+In your script, sign in using `az login --service-principal`, providing the principal's information. You can then use `az webapp config container set` to set the container name, tag, registry URL, and registry password. Below are some helpful links for you to construct your container CI process.
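A compact sketch of such a script (registry and image names are placeholders; the `--docker-*` parameter names are the classic ones for `az webapp config container set`):

```azurecli-interactive
# Sign in as the service principal created above.
az login --service-principal --username $SP_APP_ID --password $SP_PASSWORD --tenant $SP_TENANT_ID

# Point the web app at the newly pushed image.
az webapp config container set --resource-group <resource-group-name> --name <app-name> \
    --docker-custom-image-name myregistry.azurecr.io/myapp:latest \
    --docker-registry-server-url https://myregistry.azurecr.io \
    --docker-registry-server-user <registry-username> \
    --docker-registry-server-password <registry-password>
```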
- [How to sign in to the Azure CLI on Circle CI](https://circleci.com/orbs/registry/orb/circleci/azure-cli)
In your script, log in using `az login --service-principal`, providing the princ
### Java
-Use the Kudu [zipdeploy/](deploy-zip.md) API for deploying JAR applications, and [wardeploy/](deploy-zip.md#deploy-warjarear-packages) for WAR apps. If you are using Jenkins, you can use those APIs directly in your deployment phase. For more information, see [this article](/azure/developer/jenkins/deploy-to-azure-app-service-using-azure-cli).
+Use the Kudu [zipdeploy/](deploy-zip.md) API for deploying JAR applications, and [wardeploy/](deploy-zip.md#deploy-warjarear-packages) for WAR apps. If you're using Jenkins, you can use those APIs directly in your deployment phase. For more information, see [this article](/azure/developer/jenkins/deploy-to-azure-app-service-using-azure-cli).
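For instance, a WAR can be posted to the `wardeploy/` endpoint with `curl`, using [deployment credentials](deploy-configure-credentials.md); a sketch (the app name, credentials, and file path are placeholders):

```bash
curl -X POST -u '<deployment-username>:<password>' --data-binary @target/app.war \
    https://<app-name>.scm.azurewebsites.net/api/wardeploy
```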
### Node
-By default, Kudu executes the build steps for your Node application (`npm install`). If you are using a build service such as Azure DevOps, then the Kudu build is unnecessary. To disable the Kudu build, create an app setting, `SCM_DO_BUILD_DURING_DEPLOYMENT`, with a value of `false`.
+By default, Kudu executes the build steps for your Node application (`npm install`). If you're using a build service such as Azure DevOps, then the Kudu build is unnecessary. To disable the Kudu build, create an app setting, `SCM_DO_BUILD_DURING_DEPLOYMENT`, with a value of `false`.
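For example, with the Azure CLI (names are placeholders); the same setting also applies to the .NET case below:

```azurecli-interactive
az webapp config appsettings set --resource-group <resource-group-name> --name <app-name> \
    --settings SCM_DO_BUILD_DURING_DEPLOYMENT=false
```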
### .NET
-By default, Kudu executes the build steps for your .NET application (`dotnet build`). If you are using a build service such as Azure DevOps, then the Kudu build is unnecessary. To disable the Kudu build, create an app setting, `SCM_DO_BUILD_DURING_DEPLOYMENT`, with a value of `false`.
+By default, Kudu executes the build steps for your .NET application (`dotnet build`). If you're using a build service such as Azure DevOps, then the Kudu build is unnecessary. To disable the Kudu build, create an app setting, `SCM_DO_BUILD_DURING_DEPLOYMENT`, with a value of `false`.
## Other Deployment Considerations

### Local Cache
-Azure App Service content is stored on Azure Storage and is surfaced up in a durable manner as a content share. However, some apps just need a high-performance, read-only content store that they can run with high availability. These apps can benefit from using [local cache](overview-local-cache.md). Local cache is not recommended for content management sites such as WordPress.
+Azure App Service content is stored on Azure Storage and is surfaced up in a durable manner as a content share. However, some apps just need a high-performance, read-only content store that they can run with high availability. These apps can benefit from using [local cache](overview-local-cache.md). Local cache isn't recommended for content management sites such as WordPress.
Always use local cache in conjunction with [deployment slots](deploy-staging-slots.md) to prevent downtime. See [this section](overview-local-cache.md#best-practices-for-using-app-service-local-cache) for information on using these features together.
If your App Service Plan is using over 90% of available CPU or memory, the under
For more information on best practices, visit [App Service Diagnostics](./overview-diagnostics.md) to find out actionable best practices specific to your resource.

- Navigate to your Web App in the [Azure portal](https://portal.azure.com).
-- Click on **Diagnose and solve problems** in the left navigation, which opens App Service Diagnostics.
+- Select **Diagnose and solve problems** in the left navigation, which opens App Service Diagnostics.
- Choose **Best Practices** homepage tile.
-- Click **Best Practices for Availability & Performance** or **Best Practices for Optimal Configuration** to view the current state of your app in regards to these best practices.
+- Select **Best Practices for Availability & Performance** or **Best Practices for Optimal Configuration** to view the current state of your app with regard to these best practices.
You can also use this link to directly open App Service Diagnostics for your resource: `https://portal.azure.com/?websitesextension_ext=asd.featurePath%3Ddetectors%2FParentAvailabilityAndPerformance#@microsoft.onmicrosoft.com/resource/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Web/sites/{siteName}/troubleshoot`.
app-service Deploy Configure Credentials https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/deploy-configure-credentials.md
Last updated 02/11/2021 ++ # Configure deployment credentials for Azure App Service
app-service Deploy Container Github Action https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/deploy-container-github-action.md
Last updated 12/15/2021
ms.devlang: azurecli++
For an Azure App Service container workflow, the file has three sections:
- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F)
- A GitHub account. If you don't have one, sign up for [free](https://github.com/join). You need to have code in a GitHub repository to deploy to Azure App Service.
-- A working container registry and Azure App Service app for containers. This example uses Azure Container Registry. Make sure to complete the full deployment to Azure App Service for containers. Unlike regular web apps, web apps for containers do not have a default landing page. Publish the container to have a working example.
+- A working container registry and Azure App Service app for containers. This example uses Azure Container Registry. Make sure to complete the full deployment to Azure App Service for containers. Unlike regular web apps, web apps for containers don't have a default landing page. Publish the container to have a working example.
- [Learn how to create a containerized Node.js application using Docker, push the container image to a registry, and then deploy the image to Azure App Service](/azure/developer/javascript/tutorial-vscode-docker-node-01)

## Generate deployment credentials
In the example, replace the placeholders with your subscription ID, resource gro
OpenID Connect is an authentication method that uses short-lived tokens. Setting up [OpenID Connect with GitHub Actions](https://docs.github.com/en/actions/deployment/security-hardening-your-deployments/about-security-hardening-with-openid-connect) is a more complex process that offers hardened security.
-1. If you do not have an existing application, register a [new Active Directory application and service principal that can access resources](../active-directory/develop/howto-create-service-principal-portal.md). Create the Active Directory application.
+1. If you don't have an existing application, register a [new Active Directory application and service principal that can access resources](../active-directory/develop/howto-create-service-principal-portal.md). Create the Active Directory application.
```azurecli-interactive
az ad app create --display-name myApp
```
When you configure your GitHub workflow, you use the `AZURE_WEBAPP_PUBLISH_PROFI
In [GitHub](https://github.com/), browse your repository. Select **Settings > Security > Secrets and variables > Actions > New repository secret**.
-To use [user-level credentials](#generate-deployment-credentials), paste the entire JSON output from the Azure CLI command into the secret's value field. Give the secret the name like `AZURE_CREDENTIALS`.
+To use [user-level credentials](#generate-deployment-credentials), paste the entire JSON output from the Azure CLI command into the secret's value field. Give the secret a name, like `AZURE_CREDENTIALS`.
When you configure the workflow file later, you use the secret for the input `creds` of the Azure Login action. For example:
jobs:
        docker push mycontainer.azurecr.io/myapp:${{ github.sha }}
```
-You can also use [Docker Login](https://github.com/azure/docker-login) to log into multiple container registries at the same time. This example includes two new GitHub secrets for authentication with docker.io. The example assumes that there is a Dockerfile at the root level of the registry.
+You can also use [Docker sign-in](https://github.com/azure/docker-login) to sign in to multiple container registries at the same time. This example includes two new GitHub secrets for authentication with docker.io. The example assumes that there's a Dockerfile at the root level of the repository.
```yml
name: Linux Container Node Workflow
jobs:
- name: 'Checkout GitHub Action'
  uses: actions/checkout@main
- - name: 'Login via Azure CLI'
+ - name: 'Sign in via Azure CLI'
  uses: azure/login@v1
  with:
    creds: ${{ secrets.AZURE_CREDENTIALS }}
jobs:
- name: 'Checkout GitHub Action'
  uses: actions/checkout@main
- - name: 'Login via Azure CLI'
+ - name: 'Sign in via Azure CLI'
  uses: azure/login@v1
  with:
    client-id: ${{ secrets.AZURE_CLIENT_ID }}
You can find our set of Actions grouped into different repositories on GitHub, e
- [Actions workflows to deploy to Azure](https://github.com/Azure/actions-workflow-samples) -- [Azure login](https://github.com/Azure/login)
+- [Azure sign-in](https://github.com/Azure/login)
- [Azure WebApp](https://github.com/Azure/webapps-deploy)

-- [Docker login/logout](https://github.com/Azure/docker-login)
+- [Docker sign-in/out](https://github.com/Azure/docker-login)
- [Events that trigger workflows](https://docs.github.com/en/actions/reference/events-that-trigger-workflows)
app-service Deploy Content Sync https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/deploy-content-sync.md
Last updated 02/25/2021 ++ # Sync content from a cloud folder to Azure App Service
app-service Deploy Continuous Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/deploy-continuous-deployment.md
description: Learn how to enable CI/CD to Azure App Service from GitHub, Bitbuck
ms.assetid: 6adb5c84-6cf3-424e-a336-c554f23b4000 Last updated 03/12/2021- ++ # Continuous deployment to Azure App Service
app-service Deploy Ftp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/deploy-ftp.md
description: Learn how to deploy your app to Azure App Service using FTP or FTPS
ms.assetid: ae78b410-1bc0-4d72-8fc4-ac69801247ae Last updated 02/26/2021- ++ # Deploy your app to Azure App Service using FTP/S
app-service Deploy Github Actions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/deploy-github-actions.md
Last updated 12/14/2021 ++ # Deploy to App Service using GitHub Actions
app-service Deploy Local Git https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/deploy-local-git.md
Last updated 02/16/2021 ++ # Local Git deployment to Azure App Service
app-service Deploy Run Package https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/deploy-run-package.md
Title: Run your app from a ZIP package
description: Deploy your app's ZIP package with atomicity. Improve the predictability and reliability of your app's behavior during the ZIP deployment process. Last updated 01/14/2020++
app-service Deploy Staging Slots https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/deploy-staging-slots.md
ms.assetid: e224fc4f-800d-469a-8d6a-72bcde612450
Last updated 07/30/2023 +++ # Set up staging environments in Azure App Service <a name="Overview"></a>
app-service Deploy Zip https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/deploy-zip.md
Title: Deploy files to App Service
description: Learn to deploy various app packages or discrete libraries, static files, or startup scripts to Azure App Service Last updated 07/21/2023- ++ # Deploy files to App Service
For more information, see [Kudu documentation](https://github.com/projectkudu/ku
You can deploy your [WAR](https://wikipedia.org/wiki/WAR_(file_format)), [JAR](https://wikipedia.org/wiki/JAR_(file_format)), or [EAR](https://wikipedia.org/wiki/EAR_(file_format)) package to App Service to run your Java web app using the Azure CLI, PowerShell, or the Kudu publish API.
-The deployment process places the package on the shared file drive correctly (see [Kudu publish API reference](#kudu-publish-api-reference)). For that reason, deploying WAR/JAR/EAR packages using [FTP](deploy-ftp.md) or WebDeploy is not recommended.
+The deployment process used by the steps here places the package on the app's content share with the right naming convention and directory structure (see [Kudu publish API reference](#kudu-publish-api-reference)), and it's the recommended approach. If you deploy WAR/JAR/EAR packages using [FTP](deploy-ftp.md) or WebDeploy instead, you may see unexpected failures due to mistakes in the naming or structure.
# [Azure CLI](#tab/cli)
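For example, a WAR deployment with `az webapp deploy` (a sketch; the file path and app name are placeholders):

```azurecli-interactive
az webapp deploy --resource-group <resource-group-name> --name <app-name> \
    --src-path target/app.war --type war
```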
app-service Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/networking.md
The size of the subnet can affect the scaling limits of the App Service plan ins
>
> Sample calculation:
>
-> For each App Service plan instance, you need:
-> 5 Windows Container apps = 5 IP addresses
-> 1 IP address per App Service plan instance
+> For each App Service plan instance, you need:
+> 5 Windows Container apps = 5 IP addresses
+> 1 IP address per App Service plan instance
> 5 + 1 = 6 IP addresses
>
-> For 25 instances:
+> For 25 instances:
> 6 x 25 = 150 IP addresses per App Service plan > > Since you have 2 App Service plans, 2 x 150 = 300 IP addresses. If you use a smaller subnet, be aware of the following limitations: -- Any particular subnet has five addresses reserved for management purposes. In addition to the management addresses, App Service Environment dynamically scales the supporting infrastructure, and uses between 4 and 27 addresses, depending on the configuration and load. You can use the remaining addresses for instances in the App Service plan. The minimal size of your subnet is a `/27` address space (32 addresses).-
+- Any particular subnet has five addresses reserved for management purposes. In addition to the management addresses, App Service Environment dynamically scales the supporting infrastructure, and uses between 7 and 27 addresses, depending on the configuration and load. You can use the remaining addresses for instances in the App Service plan. The minimal size of your subnet is a `/27` address space (32 addresses).
+- For any App Service plan OS/SKU combination used in your App Service Environment, like I1v2 Windows, one standby instance is created for every 20 active instances. The standby instances also require IP addresses.
+- When scaling App Service plans in the App Service Environment up or down, the number of IP addresses used by the App Service plan is temporarily doubled while the scale operation completes. The new instances need to be fully operational before the existing instances are deprovisioned.
+- Platform upgrades need free IP addresses to ensure upgrades can happen without interruptions to outbound traffic. Finally, after scale up, down, or in operations complete, there might be a short period of time before IP addresses are released.
- If you run out of addresses within your subnet, you can be restricted from scaling out your App Service plans in the App Service Environment. Another possibility is that you can experience increased latency during intensive traffic load, if Microsoft isn't able to scale the supporting infrastructure.

## Addresses
You can find details in the **IP Addresses** portion of the portal, as shown in
![Screenshot that shows details about IP addresses.](./media/networking/networking-ip-addresses.png)
-As you scale your App Service plans in your App Service Environment, you'll use more addresses out of your subnet. The number of addresses you use varies, based on the number of App Service plan instances you have, and how much traffic there is. Apps in the App Service Environment don't have dedicated addresses in the subnet. The specific addresses used by an app in the subnet will change over time.
+As you scale your App Service plans in your App Service Environment, you use more addresses out of your subnet. The number of addresses you use varies, based on the number of App Service plan instances you have, and how much traffic there is. Apps in the App Service Environment don't have dedicated addresses in the subnet. The specific addresses used by an app in the subnet will change over time.
## Ports and network restrictions
-For your app to receive traffic, ensure that inbound network security group (NSG) rules allow the App Service Environment subnet to receive traffic from the required ports. In addition to any ports you'd like to receive traffic on, you should ensure that Azure Load Balancer is able to connect to the subnet on port 80. This port is used for health checks of the internal virtual machine. You can still control port 80 traffic from the virtual network to your subnet.
+For your app to receive traffic, ensure that inbound network security group (NSG) rules allow the App Service Environment subnet to receive traffic from the required ports. In addition to any ports you'd like to receive traffic on, you should ensure that Azure Load Balancer is able to connect to the subnet on port 80. This port is used for health checks of the internal virtual machine. You can still control port 80 traffic from the virtual network to your subnet.
It's a good idea to configure the following inbound NSG rule:
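A plausible sketch of such a rule with the Azure CLI (the rule name, priority, and port list are assumptions to adapt to your environment):

```azurecli-interactive
az network nsg rule create --resource-group <resource-group-name> --nsg-name <nsg-name> \
    --name Inbound-App-And-LB --priority 100 --direction Inbound --access Allow --protocol Tcp \
    --source-address-prefixes VirtualNetwork AzureLoadBalancer \
    --destination-address-prefixes '*' --destination-port-ranges 80 443
```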
The normal app access ports inbound are as follows:
You can set route tables without restriction. You can tunnel all of the outbound application traffic from your App Service Environment to an egress firewall device, such as Azure Firewall. In this scenario, the only thing you have to worry about is your application dependencies.
-Application dependencies include endpoints that your app needs during runtime. Besides APIs and services the app is calling, dependencies could also be derived endpoints like certificate revocation list (CRL) check endpoints and identity/authentication endpoint, for example Azure Active Directory. If you're using [continuous deployment in App Service](../deploy-continuous-deployment.md), you might also need to allow endpoints depending on type and language. Specifically for [Linux continuous deployment](https://github.com/microsoft/Oryx/blob/main/doc/hosts/appservice.md#network-dependencies), you'll need to allow `oryx-cdn.microsoft.io:443`.
+Application dependencies include endpoints that your app needs during runtime. Besides APIs and services the app is calling, dependencies could also be derived endpoints like certificate revocation list (CRL) check endpoints and identity/authentication endpoint, for example Azure Active Directory. If you're using [continuous deployment in App Service](../deploy-continuous-deployment.md), you might also need to allow endpoints depending on type and language. Specifically for [Linux continuous deployment](https://github.com/microsoft/Oryx/blob/main/doc/hosts/appservice.md#network-dependencies), you need to allow `oryx-cdn.microsoft.io:443`.
You can put your web application firewall devices, such as Azure Application Gateway, in front of inbound traffic. Doing so allows you to expose specific apps on that App Service Environment.
-Your application will use one of the default outbound addresses for egress traffic to public endpoints. If you want to customize the outbound address of your applications on an App Service Environment, you can add a NAT gateway to your subnet.
+Your application uses one of the default outbound addresses for egress traffic to public endpoints. If you want to customize the outbound address of your applications on an App Service Environment, you can add a NAT gateway to your subnet.
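A sketch of adding a NAT gateway (public IP, gateway, virtual network, and subnet names are placeholders):

```azurecli-interactive
# Create a static public IP for outbound traffic.
az network public-ip create --resource-group <resource-group-name> --name myNatIp --sku Standard

# Create the NAT gateway and attach the public IP.
az network nat gateway create --resource-group <resource-group-name> --name myNatGateway \
    --public-ip-addresses myNatIp

# Associate the NAT gateway with the App Service Environment subnet.
az network vnet subnet update --resource-group <resource-group-name> --vnet-name <vnet-name> \
    --name <ase-subnet-name> --nat-gateway myNatGateway
```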
> [!NOTE]
> Outbound SMTP connectivity (port 25) is supported for App Service Environment v3. The supportability is determined by a setting on the subscription where the virtual network is deployed. For virtual networks/subnets created before August 1, 2022, you need to initiate a temporary configuration change to the virtual network/subnet for the setting to be synchronized from the subscription. An example could be to add a temporary subnet, associate/dissociate an NSG temporarily, or configure a service endpoint temporarily. For more information and troubleshooting, see [Troubleshoot outbound SMTP connectivity problems in Azure](../../virtual-network/troubleshoot-outbound-smtp-connectivity.md).
For more information about Private Endpoint and Web App, see [Azure Web App Priv
## DNS
-The following sections describe the DNS considerations and configuration that apply inbound to and outbound from your App Service Environment. The examples use the domain suffix `appserviceenvironment.net` from Azure Public Cloud. If you're using other clouds like Azure Government, you'll need to use their respective domain suffix.
+The following sections describe the DNS considerations and configuration that apply inbound to and outbound from your App Service Environment. The examples use the domain suffix `appserviceenvironment.net` from Azure Public Cloud. If you're using other clouds like Azure Government, you need to use their respective domain suffix.
### DNS configuration to your App Service Environment
In addition to setting up DNS, you also need to enable it in the [App Service En
### DNS configuration from your App Service Environment
-The apps in your App Service Environment will use the DNS that your virtual network is configured with. If you want some apps to use a different DNS server, you can manually set it on a per app basis, with the app settings `WEBSITE_DNS_SERVER` and `WEBSITE_DNS_ALT_SERVER`. `WEBSITE_DNS_ALT_SERVER` configures the secondary DNS server. The secondary DNS server is only used when there's no response from the primary DNS server.
+The apps in your App Service Environment use the DNS that your virtual network is configured with. If you want some apps to use a different DNS server, you can manually set it on a per app basis, with the app settings `WEBSITE_DNS_SERVER` and `WEBSITE_DNS_ALT_SERVER`. `WEBSITE_DNS_ALT_SERVER` configures the secondary DNS server. The secondary DNS server is only used when there's no response from the primary DNS server.
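For example, per app (a sketch; the DNS server IP addresses are placeholders):

```azurecli-interactive
az webapp config appsettings set --resource-group <resource-group-name> --name <app-name> \
    --settings WEBSITE_DNS_SERVER=10.0.0.4 WEBSITE_DNS_ALT_SERVER=10.0.0.5
```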
## More resources
app-service Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/overview.md
Title: App Service Environment overview
description: This article discusses the Azure App Service Environment feature of Azure App Service. Previously updated : 06/27/2023 Last updated : 08/30/2023
App Service Environment v3 is available in the following regions:
| China North 2 | | | ✅ |
| China North 3 | ✅ | ✅ | |
+### In-region data residency
+
+An App Service Environment stores customer data, including app content, settings, and secrets, only within the region where it's deployed. All data is guaranteed to remain in the region. For more information, see [Data residency in Azure](https://azure.microsoft.com/explore/global-infrastructure/data-residency/#overview).
+
## App Service Environment v2

App Service Environment has three versions: App Service Environment v1, App Service Environment v2, and App Service Environment v3. The information in this article is based on App Service Environment v3. To learn more about App Service Environment v2, see [App Service Environment v2 introduction](./intro.md).
app-service Get Resource Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/get-resource-events.md
description: Learn how to get resource events through Activity Logs and Event Gr
Last updated 04/24/2020 -+ # Get resource events in Azure App Service
app-service Getting Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/getting-started.md
Previously updated : 4/10/2023 Last updated : 8/31/2023 zone_pivot_groups: app-service-getting-started-stacks
zone_pivot_groups: app-service-getting-started-stacks
| Action | Resources |
| | |
-| **Create your first Java app** | Using one of the following tools:<br><br>- [Linux - Maven](./quickstart-java.md?tabs=javase&pivots=platform-linux-development-environment-maven)<br>- [Linux - Azure portal](./quickstart-java.md?tabs=javase&pivots=platform-linux-development-environment-azure-portal)<br>- [Windows - Maven](./quickstart-java.md?tabs=javase&pivots=platform-windows-development-environment-maven)<br>- [Windows - Azure portal](./quickstart-java.md?tabs=javase&pivots=platform-windows-development-environment-azure-portal) |
-| **Deploy your app** | - [Configure Java](./configure-language-java.md?pivots=platform-linux)<br>- [Deploy War](./deploy-zip.md?tabs=cli#deploy-warjarear-packages)<br>- [GitHub actions](./deploy-github-actions.md) |
+| **Create your first Java app** | Using one of the following tools:<br><br>- [Maven deploy with an embedded web server](./quickstart-java.md?pivots=java-maven-quarkus)<br>- [Maven deploy to a Tomcat server](./quickstart-java.md?pivots=java-maven-tomcat)<br>- [Maven deploy to a JBoss server](./quickstart-java.md?pivots=java-maven-jboss) |
+| **Deploy your app** | - [With Maven](configure-language-java.md?pivots=platform-linux#maven)<br>- [With Gradle](configure-language-java.md?pivots=platform-linux#gradle)<br>- [Deploy War](./deploy-zip.md?tabs=cli#deploy-warjarear-packages)<br>- [With popular IDEs (VS Code, IntelliJ, and Eclipse)](configure-language-java.md?pivots=platform-linux#ides)<br>- [Deploy WAR or JAR packages directly](./deploy-zip.md?tabs=cli#deploy-warjarear-packages)<br>- [With GitHub Actions](./deploy-github-actions.md) |
| **Monitor your app**| - [Log stream](./troubleshoot-diagnostic-logs.md#stream-logs)<br>- [Diagnose and solve tool](./overview-diagnostics.md)|
| **Add domains & certificates** |- [Map a custom domain](./app-service-web-tutorial-custom-domain.md?tabs=root%2Cazurecli)<br>- [Add SSL certificate](./configure-ssl-certificate.md)|
| **Connect to a database** |- [Java Spring with Cosmos DB](./tutorial-java-spring-cosmosdb.md)|
app-service Identity Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/identity-scenarios.md
Last updated 08/10/2023+ # Authentication scenarios and recommendations
app-service Ip Address Change Inbound https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/ip-address-change-inbound.md
description: If your inbound IP address is going to be changed, learn what to do
Last updated 06/28/2018 +++ # How to prepare for an inbound IP address change
app-service Ip Address Change Outbound https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/ip-address-change-outbound.md
description: If your outbound IP address is going to be changed, learn what to d
Last updated 06/28/2018 +++ # How to prepare for an outbound IP address change
app-service Ip Address Change Ssl https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/ip-address-change-ssl.md
description: If your TLS/SSL IP address is going to be changed, learn what to do
Last updated 06/28/2018 +++ # How to prepare for a TLS/SSL IP address change
app-service Manage Automatic Scaling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/manage-automatic-scaling.md
description: Learn how to scale automatically in Azure App Service with zero con
Last updated 08/02/2023 + # Automatic scaling in Azure App Service
app-service Manage Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/manage-backup.md
description: Learn how to restore backups of your apps in Azure App Service or c
ms.assetid: 6223b6bd-84ec-48df-943f-461d84605694 Last updated 04/25/2023++ # Back up and restore your app in Azure App Service
app-service Manage Create Arc Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/manage-create-arc-environment.md
Title: 'Set up Azure Arc for App Service, Functions, and Logic Apps' description: For your Azure Arc-enabled Kubernetes clusters, learn how to enable App Service apps, function apps, and logic apps.++ Last updated 03/24/2023
app-service Manage Custom Dns Buy Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/manage-custom-dns-buy-domain.md
ms.assetid: 70fb0e6e-8727-4cca-ba82-98a4d21586ff
Last updated 01/31/2023 ++ # Buy an App Service domain and configure an app with it
app-service Manage Custom Dns Migrate Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/manage-custom-dns-migrate-domain.md
Title: Migrate an active DNS name description: Learn how to migrate a custom DNS domain name that is already assigned to a live site to Azure App Service without any downtime. tags: top-support-issue-++ ms.assetid: 10da5b8a-1823-41a3-a2ff-a0717c2b5c2d Last updated 01/31/2023
app-service Manage Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/manage-disaster-recovery.md
Title: Recover from region-wide failure description: Learn how Azure App Service helps you maintain business continuity and disaster recovery (BCDR) capabilities. Recover your app from a region-wide failure in Azure.++ Last updated 03/31/2023
app-service Manage Move Across Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/manage-move-across-regions.md
Title: Move an app to another region description: Learn how to move App Service resources from one region to another.-++ Last updated 02/27/2020
Delete the source app and App Service plan. [An App Service plan in the non-free
## Next steps
-[Azure App Service App Cloning Using PowerShell](app-service-web-app-cloning.md)
+[Azure App Service App Cloning Using PowerShell](app-service-web-app-cloning.md)
app-service Manage Scale Per App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/manage-scale-per-app.md
Title: Per-app scaling for high-density hosting description: Scale apps independently from the App Service plans and optimize the scaled-out instances in your plan.-+ ms.assetid: a903cb78-4927-47b0-8427-56412c4e3e64
app-service Manage Scale Up https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/manage-scale-up.md
Last updated 05/08/2023 + # Scale up an app in Azure App Service
app-service Monitor Instances Health Check https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/monitor-instances-health-check.md
function envVarMatchesHeader(headerValue) {
> The `x-ms-auth-internal-token` header is only available on Windows App Service. ## Instances
-Once Health Check is enabled, you can restart and monitor the status of your application instances through the instances tab. The instances tab shows your instance's name, the status of that instance and gives you the option to manually restart the application instance.
-If the status of your instance is unhealthy, you can restart the instance manually using the restart button in the table. Keep in mind that any other applications hosted on the same App Service Plan as the instance will also be affected by the restart. If there are other applications using the same App Service Plan as the instance, they are listed on the opening blade from the restart button.
+Once Health Check is enabled, you can restart and monitor the status of your application instances through the instances tab. The instances tab shows each instance's name and status and gives you the option to manually restart an instance.
+
+If the status of your application instance is unhealthy, you can restart the instance manually using the restart button in the table. Keep in mind that any other applications hosted on the same App Service Plan as the instance will also be affected by the restart. If other applications use the same App Service Plan as the instance, they're listed on the blade that opens from the restart button.
If you restart the instance and the restart process fails, you will then be given the option to replace the worker (only 1 instance can be replaced per hour). This will also affect any applications using the same App Service Plan.
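Health Check is configured by pointing the platform at a path your app serves. As a hedged sketch (the resource names and the `/api/health` path are illustrative placeholders):

```azurecli
# Set the Health Check path; App Service pings this endpoint on every
# instance to decide which instances are healthy.
az webapp config set --resource-group <resource-group> --name <app-name> \
    --generic-configurations '{"healthCheckPath": "/api/health"}'
```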
app-service Network Secure Outbound Traffic Azure Firewall https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/network-secure-outbound-traffic-azure-firewall.md
Title: 'App Service outbound traffic control with Azure Firewall' description: Outbound traffic from App Service to internet, private IP addresses, and Azure services are routed through Azure Firewall. Learn how to control App Service outbound traffic by using Virtual Network integration and Azure Firewall. --++ Last updated 01/13/2022
app-service Operating System Functionality https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/operating-system-functionality.md
Title: Operating system functionality description: Learn about the OS functionality in Azure App Service on Windows. Find out what types of file, network, and registry access your app gets. -++ ms.assetid: 39d5514f-0139-453a-b52e-4a1c06d8d914 Last updated 01/21/2022
app-service Overview Arc Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-arc-integration.md
Title: 'App Service on Azure Arc'
description: An introduction to App Service integration with Azure Arc for Azure operators. Last updated 03/15/2023++ # App Service, Functions, and Logic Apps on Azure Arc (Preview)
app-service Overview Authentication Authorization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-authentication-authorization.md
ms.assetid: b7151b57-09e5-4c77-a10c-375a262f17e5
Last updated 02/03/2023 -+++ # Authentication and authorization in Azure App Service and Azure Functions
app-service Overview Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-diagnostics.md
description: Learn how you can troubleshoot issues with your app in Azure App Se
keywords: app service, azure app service, diagnostics, support, web app, troubleshooting, self-help Previously updated : 06/29/2013 Last updated : 06/29/2023 +
app-service Overview Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-disaster-recovery.md
Title: Disaster recovery guide description: Learn three common disaster recovery patterns for Azure App Service. keywords: app service, azure app service, hadr, disaster recovery, business continuity, high availability, bcdr++ Last updated 03/07/2023
app-service Overview Hosting Plans https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-hosting-plans.md
ms.assetid: dea3f41e-cf35-481b-a6bc-33d7fc9d01b1
Last updated 05/26/2023 +
app-service Overview Inbound Outbound Ips https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-inbound-outbound-ips.md
Title: Inbound/Outbound IP addresses description: Learn how inbound and outbound IP addresses are used in Azure App Service, when they change, and how to find the addresses for your app.+ Last updated 04/05/2023
app-service Overview Local Cache https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-local-cache.md
Last updated 06/29/2023 + # Azure App Service Local Cache overview
app-service Overview Manage Costs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-manage-costs.md
Title: Plan to manage costs for App Service description: Learn how to plan for and manage costs for Azure App Service by using cost analysis in the Azure portal.++
app-service Overview Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-managed-identity.md
description: Learn how managed identities work in Azure App Service and Azure Fu
Last updated 06/27/2023 -+++ # How to use managed identities for App Service and Azure Functions
app-service Overview Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-monitoring.md
keywords: app service, azure app service, monitoring, diagnostic settings, suppo
Last updated 06/29/2023 + # Azure App Service monitoring overview
app-service Overview Patch Os Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-patch-os-runtime.md
description: Learn how Azure App Service updates the OS and runtimes, what runti
Last updated 01/21/2021 ++ # OS and runtime patching in Azure App Service
app-service Overview Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-security.md
keywords: azure app service, web app, mobile app, api app, function app, securit
Last updated 08/24/2018 ++ # Security in Azure App Service
app-service Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview.md
description: Learn how Azure App Service helps you develop and host web applicat
ms.assetid: 94af2caf-a2ec-4415-a097-f60694b860b3 Previously updated : 07/19/2023 Last updated : 08/31/2023 ++ # App Service overview
Azure App Service is a fully managed platform as a service (PaaS) offering for d
* **Authentication** - [Authenticate users](overview-authentication-authorization.md) using the built-in authentication component. Authenticate users with [Azure Active Directory](configure-authentication-provider-aad.md), [Google](configure-authentication-provider-google.md), [Facebook](configure-authentication-provider-facebook.md), [Twitter](configure-authentication-provider-twitter.md), or [Microsoft account](configure-authentication-provider-microsoft.md).
* **Application templates** - Choose from an extensive list of application templates in the [Azure Marketplace](https://azure.microsoft.com/marketplace/), such as WordPress, Joomla, and Drupal.
* **Visual Studio and Visual Studio Code integration** - Dedicated tools in Visual Studio and Visual Studio Code streamline the work of creating, deploying, and debugging.
+* **Java tools integration** - Develop and deploy to Azure without leaving your favorite development tools, such as Maven, Gradle, Visual Studio Code, IntelliJ, and Eclipse.
* **API and mobile features** - App Service provides turn-key CORS support for RESTful API scenarios, and simplifies mobile app scenarios by enabling authentication, offline data sync, push notifications, and more.
* **Serverless code** - Run a code snippet or script on-demand without having to explicitly provision or manage infrastructure, and pay only for the compute time your code actually uses (see [Azure Functions](../azure-functions/index.yml)).
App Service can also host web apps natively on Linux for supported application s
### Built-in languages and frameworks
-App Service on Linux supports a number of language specific built-in images. Just deploy your code. Supported languages include: Node.js, Java (8, 11, and 17), Tomcat, PHP, Python, .NET Core, and Ruby. Run [`az webapp list-runtimes --os linux`](/cli/azure/webapp#az-webapp-list-runtimes) to view the latest languages and supported versions. If the runtime your application requires is not supported in the built-in images, you can deploy it with a custom container.
+App Service on Linux supports a number of language-specific built-in images. Just deploy your code. Supported languages include: Node.js, Java (Tomcat, JBoss, or with an embedded web server), PHP, Python, .NET Core, and Ruby. Run [`az webapp list-runtimes --os linux`](/cli/azure/webapp#az-webapp-list-runtimes) to view the latest languages and supported versions. If the runtime your application requires is not supported in the built-in images, you can deploy it with a custom container.
Outdated runtimes are periodically removed from the Web Apps Create and Configuration blades in the Portal. These runtimes are hidden from the Portal when they are deprecated by the maintaining organization or found to have significant vulnerabilities. These options are hidden to guide customers to the latest runtimes where they will be the most successful.
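To see which runtime strings the built-in Linux images currently accept, and to pin an app to one of them, a minimal sketch follows (the resource names and the `NODE:18-lts` runtime string are illustrative placeholders):

```azurecli
# List supported Linux runtimes, then create an app on one of them.
az webapp list-runtimes --os linux --output table
az webapp create --resource-group <resource-group> --plan <plan-name> \
    --name <app-name> --runtime "NODE:18-lts"
```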
app-service Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/policy-reference.md
Title: Built-in policy definitions for Azure App Service description: Lists Azure Policy built-in policy definitions for Azure App Service. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2023 Last updated : 08/30/2023 ++ # Azure Policy built-in definitions for Azure App Service
app-service Quickstart Arc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-arc.md
Last updated 06/30/2022 ms.devlang: azurecli++ # Create an App Service app on Azure Arc (Preview)
app-service Quickstart Arm Template Uiex https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-arm-template-uiex.md
# Quickstart: Create App Service app using an ARM template
-Get started with [Azure App Service](overview.md) by deploying a app to the cloud using an <abbr title="A JSON file that declaratively defines one or more Azure resources and dependencies between the deployed resources. The template can be used to deploy the resources consistently and repeatedly.">ARM template</abbr> and [Azure CLI](/cli/azure/get-started-with-azure-cli) in Cloud Shell. Because you use a free App Service tier, you incur no costs to complete this quickstart. <abbr title="In declarative syntax, you describe your intended deployment without writing the sequence of programming commands to create the deployment.">The template uses declarative syntax.</abbr>
+Get started with [Azure App Service](overview.md) by deploying an app to the cloud using an ARM template (a JSON file that declaratively defines one or more Azure resources and dependencies between the deployed resources; the template can be used to deploy the resources consistently and repeatedly) and [Azure CLI](/cli/azure/get-started-with-azure-cli) in Cloud Shell. Because you use a free App Service tier, you incur no costs to complete this quickstart. The template uses declarative syntax. (In declarative syntax, you describe your intended deployment without writing the sequence of programming commands to create the deployment.)
If your environment meets the prerequisites and you're familiar with using [ARM templates](../azure-resource-manager/templates/overview.md), select the **Deploy to Azure** button. The template will open in the Azure portal.
The following table details default parameters and their descriptions:
::: zone pivot="platform-windows" Run the code below to deploy a .NET framework app on Windows using Azure CLI.
-Replace <abbr title="Valid characters characters are `a-z`, `0-9`, and `-`."> \<app-name> </abbr> with a globally unique app name. To learn other <abbr title="You can also use the Azure portal, Azure PowerShell, and REST API.">deployment methods</abbr>, see [Deploy templates](../azure-resource-manager/templates/deploy-powershell.md). You can find more [Azure App Service template samples here](https://azure.microsoft.com/resources/templates/?resourceType=Microsoft.Sites).
+Replace \<app-name> (valid characters are `a-z`, `0-9`, and `-`) with a globally unique app name. To learn other deployment methods (you can also use the Azure portal, Azure PowerShell, and the REST API), see [Deploy templates](../azure-resource-manager/templates/deploy-powershell.md). You can find more [Azure App Service template samples here](https://azure.microsoft.com/resources/templates/?resourceType=Microsoft.Sites).
```azurecli-interactive
az group create --name myResourceGroup --location "southcentralus" &&
az deployment group create --resource-group myResourceGroup --parameters webAppN
<summary>What's the code doing?</summary> <p>The commands do the following actions:</p> <ul>
-<li>Create a default <abbr title="A logical container for related Azure resources that you can manage as a unit.">resource group</abbr>.</li>
-<li>Create a default <abbr title="The plan that specifies the location, size, and features of the web server farm that hosts your app.">App Service plan</abbr>.</li>
-<li><a href="/cli/azure/webapp#az-webapp-create">Create an <abbr title="The representation of your web app, which contains your app code, DNS hostnames, certificates, and related resources.">App Service app</abbr></a> with the specified name.</li>
+<li>Create a default resource group (A logical container for related Azure resources that you can manage as a unit.).</li>
+<li>Create a default App Service plan (The plan that specifies the location, size, and features of the web server farm that hosts your app.).</li>
+<li><a href="/cli/azure/webapp#az-webapp-create">Create an App Service app (The representation of your web app, which contains your app code, DNS hostnames, certificates, and related resources.)</a> with the specified name.</li>
</ul>
</details>

::: zone pivot="platform-windows"

<details>
<summary>How do I deploy a different language stack?</summary>
-To deploy a different language stack, update <abbr title="This template is compatible with .NET Core, .NET Framework, PHP, Node.js, and Static HTML apps.">language parameter</abbr> with appropriate values. For Java, see <a href="/azure/app-service/quickstart-java-uiex">Create Java app</a>.
+To deploy a different language stack, update the language parameter (this template is compatible with .NET Core, .NET Framework, PHP, Node.js, and Static HTML apps) with appropriate values. For Java, see <a href="/azure/app-service/quickstart-java-uiex">Create Java app</a>.
| Parameters | Type | Default value | Description |
|-|-|-|-|
When no longer needed, [delete the resource group](../azure-resource-manager/man
## Next steps
-- [Deploy from local Git](deploy-local-git.md)
-- [ASP.NET Core with SQL Database](tutorial-dotnetcore-sqldb-app.md)
-- [Python with Postgres](tutorial-python-postgresql-app.md)
-- [PHP with MySQL](tutorial-php-mysql-app.md)
-- [Connect to Azure SQL database with Java](/azure/azure-sql/database/connect-query-java?toc=%2fazure%2fjava%2ftoc.json)
+- [Deploy from local Git](deploy-local-git.md)
+- [ASP.NET Core with SQL Database](tutorial-dotnetcore-sqldb-app.md)
+- [Python with Postgres](tutorial-python-postgresql-app.md)
+- [PHP with MySQL](tutorial-php-mysql-app.md)
+- [Connect to Azure SQL database with Java](/azure/azure-sql/database/connect-query-java?toc=%2fazure%2fjava%2ftoc.json)
app-service Quickstart Dotnetcore Uiex https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-dotnetcore-uiex.md
ms.devlang: csharp
zone_pivot_groups: app-service-platform-windows-linux ++ # Quickstart: Create an ASP.NET Core web app in Azure ::: zone pivot="platform-windows"
-In this quickstart, you'll learn how to create and deploy your first ASP.NET Core web app to <abbr title="An HTTP-based service for hosting web applications, REST APIs, and mobile back-end applications.">Azure App Service</abbr>. App Service supports .NET 5.0 apps.
+In this quickstart, you'll learn how to create and deploy your first ASP.NET Core web app to Azure App Service (An HTTP-based service for hosting web applications, REST APIs, and mobile back-end applications.). App Service supports .NET 5.0 apps.
-When you're finished, you'll have an Azure <abbr title="A logical container for related Azure resources that you can manage as a unit.">resource group</abbr>, consisting of an <abbr title="The plan that specifies the location, size, and features of the web server farm that hosts your app.">App Service plan</abbr> and an <abbr title="The representation of your web app, which contains your app code, DNS hostnames, certificates, and related resources.">App Service app</abbr> with a deployed sample ASP.NET Core application.
+When you're finished, you'll have an Azure resource group (a logical container for related Azure resources that you can manage as a unit) consisting of an App Service plan (the plan that specifies the location, size, and features of the web server farm that hosts your app) and an App Service app (the representation of your web app, which contains your app code, DNS hostnames, certificates, and related resources) with a deployed sample ASP.NET Core application.
<hr/>

## 1. Prepare your environment

-- **Get an Azure account** with an active <abbr title="The basic organizational structure in which you manage resources in Azure, typically associated with an individual or department within an organization.">subscription</abbr>. [Create an account for free](https://azure.microsoft.com/free/dotnet/).
+- **Get an Azure account** with an active subscription (The basic organizational structure in which you manage resources in Azure, typically associated with an individual or department within an organization.). [Create an account for free](https://azure.microsoft.com/free/dotnet/).
- **Install <a href="https://www.visualstudio.com/downloads/" target="_blank">Visual Studio 2019</a>** with the **ASP.NET and web development** workload. <details>
Advance to the next article to learn how to create a .NET Core app and connect i
::: zone-end ::: zone pivot="platform-linux"
-This quickstart shows how to create a [.NET Core](/aspnet/core/) app on <abbr title="App Service on Linux provides a highly scalable, self-patching web hosting service using the Linux operating system.">App Service on Linux</abbr>. You create the app using the [Azure CLI](/cli/azure/get-started-with-azure-cli), and you use Git to deploy the .NET Core code to the app.
+This quickstart shows how to create a [.NET Core](/aspnet/core/) app on App Service on Linux (App Service on Linux provides a highly scalable, self-patching web hosting service using the Linux operating system.). You create the app using the [Azure CLI](/cli/azure/get-started-with-azure-cli), and you use Git to deploy the .NET Core code to the app.
<hr/>
app-service Quickstart Dotnetcore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-dotnetcore.md
adobe-target: true
adobe-target-activity: DocsExp–386541–A/B–Enhanced-Readability-Quickstarts–2.19.2021 adobe-target-experience: Experience B adobe-target-content: ./quickstart-dotnetcore-uiex++ <!-- NOTES:
app-service Quickstart Java Uiex https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-java-uiex.md
Last updated 08/01/2020
zone_pivot_groups: app-service-platform-windows-linux ++ # Quickstart: Create a Java app on Azure App Service
There are also IDE versions of this article. Check out [Azure Toolkit for Intell
Before you begin, you must have the following:
-+ An <abbr title="The profile that maintains billing information for Azure usage.">Azure account</abbr> with an active <abbr title="The basic organizational structure in which you manage resources in Azure, typically associated with an individual or department within an organization.">subscription</abbr>. [Create an account for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio).
++ An Azure account (The profile that maintains billing information for Azure usage.) with an active subscription (The basic organizational structure in which you manage resources in Azure, typically associated with an individual or department within an organization.). [Create an account for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio).
+ The [Azure CLI](/cli/azure/install-azure-cli).
Property | Required | Description | Version
---|---|---|---
`<schemaVersion>` | false | Specify the version of the configuration schema. Supported values are: `v1`, `v2`. | 1.5.2
`<subscriptionId>` | false | Specify the subscription ID. | 0.1.0+
-`<resourceGroup>` | true | Azure <abbr title="A logical container for related Azure resources that you can manage as a unit.">resource group</abbr> for your Web App. | 0.1.0+
+`<resourceGroup>` | true | Azure resource group (A logical container for related Azure resources that you can manage as a unit.) for your Web App. | 0.1.0+
`<appName>` | true | The name of your Web App. | 0.1.0+
`<region>` | true | Specifies the region where your Web App will be hosted; the default value is **westeurope**. All valid regions are listed in the [Supported Regions](https://azure.microsoft.com/global-infrastructure/services/?products=app-service) section. | 0.1.0+
`<pricingTier>` | false | The pricing tier for your Web App. The default value is **P1V2** for production workloads, while **B2** is the recommended minimum for Java dev/test. [Learn more](https://azure.microsoft.com/pricing/details/app-service/linux/)| 0.1.0+
app-service Quickstart Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-java.md
keywords: azure, app service, web app, windows, linux, java, maven, quickstart
ms.assetid: 582bb3c2-164b-42f5-b081-95bfcb7a502a ms.devlang: java Previously updated : 03/08/2023 Last updated : 08/31/2023
-zone_pivot_groups: app-service-platform-environment
+zone_pivot_groups: app-service-java-hosting
adobe-target: true adobe-target-activity: DocsExp–386541–A/B–Enhanced-Readability-Quickstarts–2.19.2021 adobe-target-experience: Experience B adobe-target-content: ./quickstart-java-uiex++ # Quickstart: Create a Java app on Azure App Service
app-service Quickstart Python 1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-python-1.md
Title: 'Quickstart: Create a Python app on Linux' description: Get started with Azure App Service by deploying a Python app to a Linux container in App Service.++ Last updated 09/22/2020 ms.devlang: python
app-service Quickstart Python Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-python-portal.md
Title: 'Quickstart: Create a Python app in the Azure portal' description: Get started with Azure App Service by deploying your first Python app to a Linux container in App Service by using the Azure portal.++ Last updated 04/01/2021 ms.devlang: python
app-service Quickstart Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-python.md
Title: 'Quickstart: Deploy a Python (Django or Flask) web app to Azure'
description: Get started with Azure App Service by deploying your first Python app to Azure App Service. Last updated 07/26/2023--++ ms.devlang: python
app-service Quickstart Ruby https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-ruby.md
Last updated 04/27/2021 ms.devlang: ruby ++ # Create a Ruby on Rails App in App Service
app-service Quickstart Wordpress https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-wordpress.md
To complete this quickstart, you need an Azure account with an active subscripti
1. Select the **Advanced** tab. If you're unfamiliar with an [Azure CDN](../cdn/cdn-overview.md), [Azure Front Door](../frontdoor/front-door-overview.md), or [Blob Storage](../storage/blobs/storage-blobs-overview.md), then clear the checkboxes. For more details on the Content Distribution options, see [WordPress on App Service](https://azure.github.io/AppService/2022/02/23/WordPress-on-App-Service-Public-Preview.html). :::image type="content" source="./media/quickstart-wordpress/08-wordpress-advanced-settings.png" alt-text="Screenshot of WordPress Advanced Settings.":::
+
+ > [!NOTE]
+ > The WordPress app requires a virtual network with an address space of /23 at minimum (a CLI sketch for creating one follows these steps).
1. Select the **Review + create** tab. After validation runs, select the **Create** button at the bottom of the page to create the WordPress site.
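If you prefer to create the virtual network ahead of time, here is a minimal sketch that satisfies the /23 minimum from the note above (all names and address ranges are placeholders, not values from this article):

```azurecli
# Create a virtual network with a /23 address space, the minimum that
# the WordPress on App Service deployment requires.
az network vnet create --resource-group <resource-group> --name <vnet-name> \
    --address-prefixes 10.0.0.0/23 \
    --subnet-name <subnet-name> --subnet-prefixes 10.0.0.0/24
```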
app-service Reference App Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/reference-app-settings.md
Title: Environment variables and app settings reference
description: Describes the commonly used environment variables, and which ones can be modified with app settings. Last updated 05/09/2023++ # Environment variables and app settings in Azure App Service
The following environment variables are related to the [push notifications](/pre
| `WEBSITE_PUSH_TAGS_DYNAMIC` | Read-only. Contains a list of tags in the notification registration that were added automatically. |

>[!NOTE]
-> This article contains references to the term *whitelist*, a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.
+> This article contains references to a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.
<!-- ## WellKnownAppSettings
app-service Resources Kudu https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/resources-kudu.md
Title: Kudu service overview description: Learn about the engine that powers continuous deployment in App Service and its features.++ Last updated 03/17/2021
app-service Samples Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/samples-cli.md
Last updated 04/21/2022 keywords: azure cli samples, azure cli examples, azure cli code samples++ # CLI samples for Azure App Service
app-service Samples Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/samples-powershell.md
ms.assetid: b48d1137-8c04-46e0-b430-101e07d7e470
Last updated 12/06/2022 ++ # PowerShell samples for Azure App Service
app-service Scenario Secure App Access Microsoft Graph As App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scenario-secure-app-access-microsoft-graph-as-app.md
Last updated 04/05/2023
ms.devlang: csharp-+ #Customer intent: As an application developer, I want to learn how to access data in Microsoft Graph by using managed identities.
app-service Scenario Secure App Access Microsoft Graph As User https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scenario-secure-app-access-microsoft-graph-as-user.md
Last updated 06/28/2023
ms.devlang: csharp-+ #Customer intent: As an application developer, I want to learn how to access data in Microsoft Graph for a signed-in user.
app-service Scenario Secure App Access Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scenario-secure-app-access-storage.md
Last updated 07/31/2023
ms.devlang: csharp, azurecli-+ #Customer intent: As an application developer, I want to learn how to access Azure Storage for an app by using managed identities.
app-service Scenario Secure App Authentication App Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scenario-secure-app-authentication-app-service.md
Last updated 06/25/2023 -+ #Customer intent: As an application developer, enable authentication and authorization for a web app running on Azure App Service.
app-service Scenario Secure App Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scenario-secure-app-overview.md
Last updated 12/10/2021 -+ #Customer intent: As an application developer, I want to learn how to secure access to a web app running on Azure App Service.
app-service Cli Continuous Deployment Vsts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scripts/cli-continuous-deployment-vsts.md
This sample script creates an app in App Service with its related resources, and
Create the following variables containing your Azure DevOps Services information.

```azurecli
-gitrepo=<Replace with your Visual Studio Team Services repo URL>
-token=<Replace with a Visual Studio Team Services personal access token>
+gitrepo=<Replace with your Azure DevOps Services (formerly Visual Studio Team Services, or VSTS) repo URL>
+token=<Replace with an Azure DevOps Services (formerly Visual Studio Team Services, or VSTS) personal access token>
```
-Configure continuous deployment from Visual Studio Team Services. The `--git-token` parameter is required only once per Azure account (Azure remembers token).
+Configure continuous deployment from Azure DevOps Services (formerly Visual Studio Team Services, or VSTS). The `--git-token` parameter is required only once per Azure account (Azure remembers the token).
```azurecli
az webapp deployment source config --name $webapp --resource-group $resourceGroup \
app-service Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure App Service description: Lists Azure Policy Regulatory Compliance controls available for Azure App Service. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/03/2023 Last updated : 08/25/2023 ++ # Azure Policy Regulatory Compliance controls for Azure App Service
app-service Troubleshoot Diagnostic Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/troubleshoot-diagnostic-logs.md
Last updated 06/29/2023 + # Enable diagnostics logging for apps in Azure App Service
app-service Troubleshoot Dotnet Visual Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/troubleshoot-dotnet-visual-studio.md
ms.devlang: csharp
Last updated 08/29/2016 ++ # Troubleshoot an app in Azure App Service using Visual Studio ## Overview
app-service Troubleshoot Http 502 Http 503 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/troubleshoot-http-502-http-503.md
ms.assetid: 51cd331a-a3fa-438f-90ef-385e755e50d5
Last updated 07/06/2016 ++ # Troubleshoot HTTP errors of "502 bad gateway" and "503 service unavailable" in Azure App Service
app-service Troubleshoot Intermittent Outbound Connection Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/troubleshoot-intermittent-outbound-connection-errors.md
description: Troubleshoot intermittent connection errors and related performance
Last updated 06/28/2023 ++
app-service Troubleshoot Performance Degradation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/troubleshoot-performance-degradation.md
ms.assetid: b8783c10-3a4a-4dd6-af8c-856baafbdde5
Last updated 08/03/2016 ++ # Troubleshoot slow app performance issues in Azure App Service
app-service Tutorial Auth Aad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-auth-aad.md
Title: 'Tutorial: Authenticate users E2E' description: Learn how to use App Service authentication and authorization to secure your App Service apps end-to-end, including access to remote APIs. keywords: app service, azure app service, authN, authZ, secure, security, multi-tiered, azure active directory, azure ad++ ms.devlang: csharp Last updated 3/08/2023-+ zone_pivot_groups: app-service-platform-windows-linux # Requires non-internal subscription - internal subscriptions doesn't provide permission to correctly configure AAD apps
app-service Tutorial Connect App Access Microsoft Graph As App Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-connect-app-access-microsoft-graph-as-app-javascript.md
Last updated 03/14/2023
ms.devlang: javascript-+ #Customer intent: As an application developer, I want to learn how to access data in Microsoft Graph by using managed identities.
app-service Tutorial Connect App Access Microsoft Graph As User Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-connect-app-access-microsoft-graph-as-user-javascript.md
Last updated 03/08/2022
ms.devlang: csharp-+ #Customer intent: As an application developer, I want to learn how to access data in Microsoft Graph for a signed-in user.
app-service Tutorial Connect App Access Sql Database As User Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-connect-app-access-sql-database-as-user-dotnet.md
ms.devlang: csharp-+ Last updated 04/21/2023
app-service Tutorial Connect App Access Storage Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-connect-app-access-storage-javascript.md
Last updated 07/31/2023
ms.devlang: javascript, azurecli-+ #Customer intent: As an application developer, I want to learn how to access Azure Storage for an app by using managed identities.
app-service Tutorial Connect App App Graph Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-connect-app-app-graph-javascript.md
Title: 'Tutorial: Authenticate users E2E to Azure' description: Learn how to use App Service authentication and authorization to secure your App Service apps end-to-end to a downstream Azure service. keywords: app service, azure app service, authN, authZ, secure, security, multi-tiered, azure active directory, azure ad++ ms.devlang: javascript Last updated 3/13/2023-+ zone_pivot_groups: app-service-platform-windows-linux # Requires non-internal subscription - internal subscriptions doesn't provide permission to correctly configure AAD apps
app-service Tutorial Connect Msi Azure Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-connect-msi-azure-database.md
Title: 'Tutorial: Access Azure databases with managed identity' description: Secure database connectivity (Azure SQL Database, Database for MySQL, and Database for PostgreSQL) with managed identity from .NET, Node.js, Python, and Java apps. keywords: azure app service, web app, security, msi, managed service identity, managed identity, .net, dotnet, asp.net, c#, csharp, node.js, node, python, java, visual studio, visual studio code, visual studio for mac, azure cli, azure powershell, defaultazurecredential++ ms.devlang: csharp,java,javascript,python Last updated 04/12/2022-+ # Tutorial: Connect to Azure databases from App Service without secrets using a managed identity
app-service Tutorial Connect Msi Key Vault Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-connect-msi-key-vault-javascript.md
description: Learn how to secure connectivity to back-end Azure services that do
ms.devlang: javascript, azurecli Last updated 10/26/2021++ -+ # Tutorial: Secure Cognitive Service connection from JavaScript App Service using Key Vault
app-service Tutorial Connect Msi Key Vault Php https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-connect-msi-key-vault-php.md
description: Learn how to secure connectivity to back-end Azure services that do
ms.devlang: csharp, azurecli Last updated 10/26/2021++ -+ # Tutorial: Secure Cognitive Service connection from PHP App Service using Key Vault
app-service Tutorial Connect Msi Key Vault https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-connect-msi-key-vault.md
description: Learn how to secure connectivity to back-end Azure services that do
ms.devlang: csharp, azurecli Last updated 10/26/2021++ -+ # Tutorial: Secure Cognitive Service connection from .NET App Service using Key Vault
app-service Tutorial Connect Msi Sql Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-connect-msi-sql-database.md
Title: 'Tutorial: Access data with managed identity' description: Secure Azure SQL Database connectivity with managed identity from a sample .NET web app, and also how to apply it to other Azure services.++ ms.devlang: csharp Last updated 04/01/2023-+ # Tutorial: Connect to SQL Database from .NET App Service without secrets using a managed identity
You're now ready to develop and debug your app with the SQL Database as the back
> It is replaced with the new **Azure Identity client library** available for .NET, Java, TypeScript, and Python, and should be used for all new development.
> Information about how to migrate to `Azure Identity` can be found here: [AppAuthentication to Azure.Identity Migration Guidance](/dotnet/api/overview/azure/app-auth-migration).
-The steps you follow for your project depends on whether you're using [Entity Framework](/ef/ef6/) (default for ASP.NET) or [Entity Framework Core](/ef/core/) (default for ASP.NET Core).
+The steps you follow for your project depend on whether you're using [Entity Framework Core](/ef/core/) (default for ASP.NET Core) or [Entity Framework](/ef/ef6/) (default for ASP.NET).
+
+# [Entity Framework Core](#tab/efcore)
+
+1. In Visual Studio, open the Package Manager Console and add the NuGet package [Microsoft.Data.SqlClient](https://www.nuget.org/packages/Microsoft.Data.SqlClient):
+
+ ```powershell
+ Install-Package Microsoft.Data.SqlClient -Version 5.1.0
+ ```
+
+1. In the [ASP.NET Core and SQL Database tutorial](tutorial-dotnetcore-sqldb-app.md), the `MyDbConnection` connection string in *appsettings.json* isn't used at all yet. The local environment and the Azure environment both get connection strings from their respective environment variables in order to keep connection secrets out of the source file. But now with Active Directory authentication, there are no more secrets. In *appsettings.json*, replace the value of the `MyDbConnection` connection string with:
+
+ ```json
+ "Server=tcp:<server-name>.database.windows.net;Authentication=Active Directory Default; Database=<database-name>;"
+ ```
+
+ > [!NOTE]
+ > The [Active Directory Default](/sql/connect/ado-net/sql/azure-active-directory-authentication#using-active-directory-default-authentication) authentication type can be used both on your local machine and in Azure App Service. The driver attempts to acquire a token from Azure Active Directory using various means. If the app is deployed, it gets a token from the app's managed identity. If the app is running locally, it tries to get a token from Visual Studio, Visual Studio Code, and Azure CLI.
+ >
+
+ That's everything you need to connect to SQL Database. When you debug in Visual Studio, your code uses the Azure AD user you configured in [2. Set up your dev environment](#2-set-up-your-dev-environment). You'll set up SQL Database later to allow connection from the managed identity of your App Service app. The `DefaultAzureCredential` class caches the token in memory and retrieves it from Azure AD just before expiration. You don't need any custom code to refresh the token.
+
+1. Type `Ctrl+F5` to run the app again. The same CRUD app in your browser is now connecting to the Azure SQL Database directly, using Azure AD authentication. This setup lets you run database migrations from Visual Studio.
# [Entity Framework](#tab/ef)
The steps you follow for your project depends on whether you're using [Entity Fr
1. Type `Ctrl+F5` to run the app again. The same CRUD app in your browser is now connecting to the Azure SQL Database directly, using Azure AD authentication. This setup lets you run database migrations from Visual Studio.
-# [Entity Framework Core](#tab/efcore)
-
-1. In Visual Studio, open the Package Manager Console and add the NuGet package [Microsoft.Data.SqlClient](https://www.nuget.org/packages/Microsoft.Data.SqlClient):
-
- ```powershell
- Install-Package Microsoft.Data.SqlClient -Version 5.1.0
- ```
-
-1. In the [ASP.NET Core and SQL Database tutorial](tutorial-dotnetcore-sqldb-app.md), the `MyDbConnection` connection string in *appsettings.json* isn't used at all yet. The local environment and the Azure environment both get connection strings from their respective environment variables in order to keep connection secrets out of the source file. But now with Active Directory authentication, there are no more secrets. In *appsettings.json*, replace the value of the `MyDbConnection` connection string with:
-
- ```json
- "Server=tcp:<server-name>.database.windows.net;Authentication=Active Directory Default; Database=<database-name>;"
- ```
-
- > [!NOTE]
- > The [Active Directory Default](/sql/connect/ado-net/sql/azure-active-directory-authentication#using-active-directory-default-authentication) authentication type can be used both on your local machine and in Azure App Service. The driver attempts to acquire a token from Azure Active Directory using various means. If the app is deployed, it gets a token from the app's managed identity. If the app is running locally, it tries to get a token from Visual Studio, Visual Studio Code, and Azure CLI.
- >
-
- That's everything you need to connect to SQL Database. When you debug in Visual Studio, your code uses the Azure AD user you configured in [2. Set up your dev environment](#2-set-up-your-dev-environment). You'll set up SQL Database later to allow connection from the managed identity of your App Service app. The `DefaultAzureCredential` class caches the token in memory and retrieves it from Azure AD just before expiration. You don't need any custom code to refresh the token.
-
-1. Type `Ctrl+F5` to run the app again. The same CRUD app in your browser is now connecting to the Azure SQL Database directly, using Azure AD authentication. This setup lets you run database migrations from Visual Studio.
## 4. Use managed identity connectivity
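A likely first step for this section, shown here only as a hedged sketch (the tutorial's own commands may differ; resource names are placeholders), is enabling the app's system-assigned managed identity:

```azurecli
# Enable the system-assigned managed identity the app will use to
# authenticate to SQL Database without a stored secret.
az webapp identity assign --resource-group <resource-group> --name <app-name>
```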
app-service Tutorial Connect Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-connect-overview.md
Title: 'Securely connect to Azure resources' description: Your app service may need to connect to other Azure services such as a database, storage, or another app. This overview recommends the more secure method for connecting.++ Last updated 02/16/2022+ # Securely connect to Azure services and databases from Azure App Service
app-service Tutorial Custom Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-custom-container.md
description: A step-by-step guide to build a custom Linux or Windows image, push
Last updated 11/29/2022 + keywords: azure app service, web app, linux, windows, docker, container zone_pivot_groups: app-service-containers-windows-linux
app-service Tutorial Dotnetcore Sqldb App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-dotnetcore-sqldb-app.md
ms.devlang: csharp -+ # Tutorial: Deploy an ASP.NET Core and Azure SQL Database app to Azure App Service
Sign in to the [Azure portal](https://portal.azure.com/) and follow these steps
:::row::: :::column span="2":::
- **Step 1.** In the Azure portal:
+ **Step 1:** In the Azure portal:
1. Enter "web app database" in the search bar at the top of the Azure portal. 1. Select the item labeled **Web App + Database** under the **Marketplace** heading. You can also navigate to the [creation wizard](https://portal.azure.com/?feature.customportal=false#create/Microsoft.AppServiceWebAppDatabaseV3) directly.
Sign in to the [Azure portal](https://portal.azure.com/) and follow these steps
:::row-end::: :::row::: :::column span="2":::
- **Step 2.** In the **Create Web App + Database** page, fill out the form as follows.
+ **Step 2:** In the **Create Web App + Database** page, fill out the form as follows.
1. *Resource Group* &rarr; Select **Create new** and use a name of **msdocs-core-sql-tutorial**. 1. *Region* &rarr; Any Azure region near you. 1. *Name* &rarr; **msdocs-core-sql-XYZ** where *XYZ* is any three random characters. This name must be unique across Azure.
Sign in to the [Azure portal](https://portal.azure.com/) and follow these steps
:::row-end::: :::row::: :::column span="2":::
- **Step 3.** The deployment takes a few minutes to complete. Once deployment completes, select the **Go to resource** button. You're taken directly to the App Service app, but the following resources are created:
+ **Step 3:** The deployment takes a few minutes to complete. Once deployment completes, select the **Go to resource** button. You're taken directly to the App Service app, but the following resources are created:
- **Resource group** &rarr; The container for all the created resources. - **App Service plan** &rarr; Defines the compute resources for App Service. A Linux plan in the *Basic* tier is created. - **App Service** &rarr; Represents your app and runs in the App Service plan.
The creation wizard generated connection strings for the SQL database and the Re
:::row::: :::column span="2":::
- **Step 1.** In the App Service page, in the left menu, select Configuration.
+ **Step 1:** In the App Service page, in the left menu, select Configuration.
:::column-end::: :::column::: :::image type="content" source="./media/tutorial-dotnetcore-sqldb-app/azure-portal-get-connection-string-1.png" alt-text="A screenshot showing how to open the configuration page in App Service." lightbox="./media/tutorial-dotnetcore-sqldb-app/azure-portal-get-connection-string-1.png":::
The creation wizard generated connection strings for the SQL database and the Re
:::row-end::: :::row::: :::column span="2":::
- **Step 2.**
+ **Step 2:**
1. Scroll to the bottom of the page and find **AZURE_SQL_CONNECTIONSTRING** in the **Connection strings** section. This string was generated from the new SQL database by the creation wizard. To set up your application, this name is all you need. 1. Also, find **AZURE_REDIS_CONNECTIONSTRING** in the **Application settings** section. This string was generated from the new Redis cache by the creation wizard. To set up your application, this name is all you need. 1. If you want, you can select the **Edit** button to the right of each setting and see or copy its value.
In this step, you'll configure GitHub deployment using GitHub Actions. It's just
:::row::: :::column span="2":::
- **Step 1.** In a new browser window:
+ **Step 1:** In a new browser window:
1. Sign in to your GitHub account. 1. Navigate to [https://github.com/Azure-Samples/msdocs-app-service-sqldb-dotnetcore](https://github.com/Azure-Samples/msdocs-app-service-sqldb-dotnetcore). 1. Select **Fork**.
In this step, you'll configure GitHub deployment using GitHub Actions. It's just
:::row-end::: :::row::: :::column span="2":::
- **Step 2.** In the App Service page, in the left menu, select **Deployment Center**.
+ **Step 2:** In the App Service page, in the left menu, select **Deployment Center**.
:::column-end::: :::column::: :::image type="content" source="./media/tutorial-dotnetcore-sqldb-app/azure-portal-deploy-sample-code-2.png" alt-text="A screenshot showing how to open the deployment center in App Service." lightbox="./media/tutorial-dotnetcore-sqldb-app/azure-portal-deploy-sample-code-2.png":::
In this step, you'll configure GitHub deployment using GitHub Actions. It's just
:::row-end::: :::row::: :::column span="2":::
- **Step 3.** In the Deployment Center page:
+ **Step 3:** In the Deployment Center page:
1. In **Source**, select **GitHub**. By default, **GitHub Actions** is selected as the build provider. 1. Sign in to your GitHub account and follow the prompt to authorize Azure. 1. In **Organization**, select your account.
In this step, you'll configure GitHub deployment using GitHub Actions. It's just
:::row-end::: :::row::: :::column span="2":::
- **Step 4.** Back the GitHub page of the forked sample, open Visual Studio Code in the browser by pressing the `.` key.
+ **Step 4:** Back in the GitHub page of the forked sample, open Visual Studio Code in the browser by pressing the `.` key.
:::column-end::: :::column::: :::image type="content" source="./media/tutorial-dotnetcore-sqldb-app/azure-portal-deploy-sample-code-4.png" alt-text="A screenshot showing how to open the Visual Studio Code browser experience in GitHub." lightbox="./media/tutorial-dotnetcore-sqldb-app/azure-portal-deploy-sample-code-4.png":::
In this step, you'll configure GitHub deployment using GitHub Actions. It's just
:::row-end::: :::row::: :::column span="2":::
- **Step 5.** In Visual Studio Code in the browser:
+ **Step 5:** In Visual Studio Code in the browser:
1. Open *DotNetCoreSqlDb/appsettings.json* in the explorer. 1. Change the connection string name `MyDbConnection` to `AZURE_SQL_CONNECTIONSTRING`, which matches the connection string created in App Service earlier. :::column-end:::
In this step, you'll configure GitHub deployment using GitHub Actions. It's just
:::row-end::: :::row::: :::column span="2":::
- **Step 6.**
+ **Step 6:**
1. Open *DotNetCoreSqlDb/Program.cs* in the explorer. 1. In the `options.UseSqlServer` method, change the connection string name `MyDbConnection` to `AZURE_SQL_CONNECTIONSTRING`. This is where the connection string is used by the sample application. 1. Remove the `builder.Services.AddDistributedMemoryCache();` method and replace it with the following code. It changes your code from using an in-memory cache to the Redis cache in Azure, and it does so by using `AZURE_REDIS_CONNECTIONSTRING` from earlier.
In this step, you'll configure GitHub deployment using GitHub Actions. It's just
:::row-end::: :::row::: :::column span="2":::
- **Step 7.**
+ **Step 7:**
1. Open *.github/workflows/main_msdocs-core-sql-XYZ* in the explorer. This file was created by the App Service create wizard. 1. Under the `dotnet publish` step, add a step to install the [Entity Framework Core tool](/ef/core/cli/dotnet) with the command `dotnet tool install -g dotnet-ef`. 1. Under the new step, add another step to generate a database [migration bundle](/ef/core/managing-schemas/migrations/applying?tabs=dotnet-core-cli#bundles) in the deployment package: `dotnet ef migrations bundle --runtime linux-x64 -p DotNetCoreSqlDb/DotNetCoreSqlDb.csproj -o ${{env.DOTNET_ROOT}}/myapp/migrate`.
In this step, you'll configure GitHub deployment using GitHub Actions. It's just
:::row-end::: :::row::: :::column span="2":::
- **Step 8.**
+ **Step 8:**
1. Select the **Source Control** extension. 1. In the textbox, type a commit message like `Configure DB & Redis & add migration bundle`. 1. Select **Commit and Push**.
In this step, you'll configure GitHub deployment using GitHub Actions. It's just
:::row-end::: :::row::: :::column span="2":::
- **Step 9.** Back in the Deployment Center page in the Azure portal:
+ **Step 9:** Back in the Deployment Center page in the Azure portal:
1. Select **Logs**. A new deployment run is already started from your committed changes. 1. In the log item for the deployment run, select the **Build/Deploy Logs** entry with the latest timestamp. :::column-end:::
In this step, you'll configure GitHub deployment using GitHub Actions. It's just
:::row-end::: :::row::: :::column span="2":::
- **Step 10.** You're taken to your GitHub repository and see that the GitHub action is running. The workflow file defines two separate stages, build and deploy. Wait for the GitHub run to show a status of **Complete**. It takes a few minutes.
+ **Step 10:** You're taken to your GitHub repository and see that the GitHub action is running. The workflow file defines two separate stages, build and deploy. Wait for the GitHub run to show a status of **Complete**. It takes a few minutes.
:::column-end::: :::column::: :::image type="content" source="./media/tutorial-dotnetcore-sqldb-app/azure-portal-deploy-sample-code-10.png" alt-text="A screenshot showing a GitHub run in progress." lightbox="./media/tutorial-dotnetcore-sqldb-app/azure-portal-deploy-sample-code-10.png":::
With the SQL Database protected by the virtual network, the easiest way to run R
:::row::: :::column span="2":::
- **Step 1.** Back in the App Service page, in the left menu, select **SSH**.
+ **Step 1:** Back in the App Service page, in the left menu, select **SSH**.
:::column-end::: :::column::: :::image type="content" source="./media/tutorial-dotnetcore-sqldb-app/azure-portal-generate-db-schema-1.png" alt-text="A screenshot showing how to open the SSH shell for your app from the Azure portal." lightbox="./media/tutorial-dotnetcore-sqldb-app/azure-portal-generate-db-schema-1.png":::
With the SQL Database protected by the virtual network, the easiest way to run R
:::row-end::: :::row::: :::column span="2":::
- **Step 2.** In the SSH terminal:
+ **Step 2:** In the SSH terminal:
1. Run `cd /home/site/wwwroot`. Here are all your deployed files. 1. Run the migration bundle that's generated by the GitHub workflow with `./migrate`. If it succeeds, App Service is connecting successfully to the SQL Database. Only changes to files in `/home` can persist beyond app restarts. Changes outside of `/home` aren't persisted.
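If you prefer a terminal to the portal's SSH blade, the same session can usually be opened with the Azure CLI; this is a sketch, with `<resource-group>` and `<app-name>` as placeholders.

```bash
# Open an SSH session to the app's Linux container
az webapp ssh --resource-group <resource-group> --name <app-name>
# Inside the session:
#   cd /home/site/wwwroot
#   ./migrate
```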
With the SQL Database protected by the virtual network, the easiest way to run R
:::row::: :::column span="2":::
- **Step 1.** In the App Service page:
+ **Step 1:** In the App Service page:
1. From the left menu, select **Overview**. 1. Select the URL of your app. You can also navigate directly to `https://<app-name>.azurewebsites.net`. :::column-end:::
With the SQL Database protected by the virtual network, the easiest way to run R
:::row-end::: :::row::: :::column span="2":::
- **Step 2.** Add a few tasks to the list.
+ **Step 2:** Add a few tasks to the list.
Congratulations, you're running a secure data-driven ASP.NET Core app in Azure App Service. :::column-end::: :::column:::
Azure App Service captures all messages logged to the console to assist you in d
:::row::: :::column span="2":::
- **Step 1.** In the App Service page:
+ **Step 1:** In the App Service page:
1. From the left menu, select **App Service logs**. 1. Under **Application logging**, select **File System**. :::column-end:::
Azure App Service captures all messages logged to the console to assist you in d
:::row-end::: :::row::: :::column span="2":::
- **Step 2.** From the left menu, select **Log stream**. You see the logs for your app, including platform logs and logs from inside the container.
+ **Step 2:** From the left menu, select **Log stream**. You see the logs for your app, including platform logs and logs from inside the container.
:::column-end::: :::column::: :::image type="content" source="./media/tutorial-dotnetcore-sqldb-app/azure-portal-stream-diagnostic-logs-2.png" alt-text="A screenshot showing how to view the log stream in the Azure portal." lightbox="./media/tutorial-dotnetcore-sqldb-app/azure-portal-stream-diagnostic-logs-2.png":::
When you're finished, you can delete all of the resources from your Azure subscr
:::row::: :::column span="2":::
- **Step 1.** In the search bar at the top of the Azure portal:
+ **Step 1:** In the search bar at the top of the Azure portal:
1. Enter the resource group name. 1. Select the resource group. :::column-end:::
When you're finished, you can delete all of the resources from your Azure subscr
:::row-end::: :::row::: :::column span="2":::
- **Step 2.** In the resource group page, select **Delete resource group**.
+ **Step 2:** In the resource group page, select **Delete resource group**.
:::column-end::: :::column::: :::image type="content" source="./media/tutorial-dotnetcore-sqldb-app/azure-portal-clean-up-resources-2.png" alt-text="A screenshot showing the location of the Delete Resource Group button in the Azure portal." lightbox="./media/tutorial-dotnetcore-sqldb-app/azure-portal-clean-up-resources-2.png":::
When you're finished, you can delete all of the resources from your Azure subscr
:::row-end::: :::row::: :::column span="2":::
- **Step 3.**
+ **Step 3:**
1. Enter the resource group name to confirm your deletion. 1. Select **Delete**. :::column-end:::
app-service Tutorial Java Quarkus Postgresql App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-java-quarkus-postgresql-app.md
ms.devlang: java Last updated 5/27/2022-+ # Tutorial: Build a Quarkus web app with Azure App Service on Linux and PostgreSQL
app-service Tutorial Java Spring Cosmosdb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-java-spring-cosmosdb.md
Title: 'Tutorial: Linux Java app with MongoDB' description: Learn how to get a data-driven Linux Java app working in Azure App Service, with connection to a MongoDB running in Azure Cosmos DB.--++ ms.devlang: java Last updated 12/10/2018-+ # Tutorial: Build a Java Spring Boot web app with Azure App Service on Linux and Azure Cosmos DB
app-service Tutorial Java Tomcat Connect Managed Identity Postgresql Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-java-tomcat-connect-managed-identity-postgresql-database.md
Last updated 08/14/2023 -+ # Tutorial: Connect to a PostgreSQL Database from Java Tomcat App Service without secrets using a managed identity
app-service Tutorial Networking Isolate Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-networking-isolate-vnet.md
Last updated 10/26/2021 ++ # Tutorial: Isolate back-end communication in Azure App Service with Virtual Network integration
app-service Tutorial Nodejs Mongodb App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-nodejs-mongodb-app.md
Last updated 09/06/2022
ms.role: developer ms.devlang: javascript-+++ # Deploy a Node.js + MongoDB web app to Azure
Sign in to the [Azure portal](https://portal.azure.com/) and follow these steps
:::row::: :::column span="2":::
- **Step 1.** In the Azure portal:
+ **Step 1:** In the Azure portal:
1. Enter "web app database" in the search bar at the top of the Azure portal. 1. Select the item labeled **Web App + Database** under the **Marketplace** heading. You can also navigate to the [creation wizard](https://portal.azure.com/?feature.customportal=false#create/Microsoft.AppServiceWebAppDatabaseV3) directly.
Sign in to the [Azure portal](https://portal.azure.com/) and follow these steps
:::row-end::: :::row::: :::column span="2":::
- **Step 2.** In the **Create Web App + Database** page, fill out the form as follows.
+ **Step 2:** In the **Create Web App + Database** page, fill out the form as follows.
1. *Resource Group* &rarr; Select **Create new** and use a name of **msdocs-expressjs-mongodb-tutorial**. 1. *Region* &rarr; Any Azure region near you. 1. *Name* &rarr; **msdocs-expressjs-mongodb-XYZ** where *XYZ* is any three random characters. This name must be unique across Azure.
Sign in to the [Azure portal](https://portal.azure.com/) and follow these steps
:::row-end::: :::row::: :::column span="2":::
- **Step 3.** The deployment takes a few minutes to complete. Once deployment completes, select the **Go to resource** button. You're taken directly to the App Service app, but the following resources are created:
+ **Step 3:** The deployment takes a few minutes to complete. Once deployment completes, select the **Go to resource** button. You're taken directly to the App Service app, but the following resources are created:
- **Resource group** &rarr; The container for all the created resources. - **App Service plan** &rarr; Defines the compute resources for App Service. A Linux plan in the *Basic* tier is created. - **App Service** &rarr; Represents your app and runs in the App Service plan.
The creation wizard generated the MongoDB URI for you already, but your app need
:::row::: :::column span="2":::
- **Step 1.** In the App Service page, in the left menu, select Configuration.
+ **Step 1:** In the App Service page, in the left menu, select Configuration.
:::column-end::: :::column::: :::image type="content" source="./media/tutorial-nodejs-mongodb-app/azure-portal-get-connection-string-1.png" alt-text="A screenshot showing how to open the configuration page in App Service." lightbox="./media/tutorial-nodejs-mongodb-app/azure-portal-get-connection-string-1.png":::
The creation wizard generated the MongoDB URI for you already, but your app need
:::row-end::: :::row::: :::column span="2":::
- **Step 2.** In the **Application settings** tab of the **Configuration** page, create a `DATABASE_NAME` setting:
+ **Step 2:** In the **Application settings** tab of the **Configuration** page, create a `DATABASE_NAME` setting:
1. Select **New application setting**. 1. In the **Name** field, enter *DATABASE_NAME*. 1. In the **Value** field, enter the automatically generated database name from the creation wizard, which looks like *msdocs-expressjs-mongodb-XYZ-database*.
The creation wizard generated the MongoDB URI for you already, but your app need
:::row-end::: :::row::: :::column span="2":::
- **Step 3.**
+ **Step 3:**
1. Scroll to the bottom of the page and select the connection string **MONGODB_URI**. It was generated by the creation wizard. 1. In the **Value** field, select the **Copy** button and paste the value in a text file for the next step. It's in the [MongoDB connection string URI format](https://www.mongodb.com/docs/manual/reference/connection-string/). 1. Select **Cancel**.
The creation wizard generated the MongoDB URI for you already, but your app need
:::row-end::: :::row::: :::column span="2":::
- **Step 4.**
+ **Step 4:**
1. Using the same steps in **Step 2**, create an app setting named *DATABASE_URL* and set the value to the one you copied from the `MONGODB_URI` connection string (i.e. `mongodb://...`). 1. In the menu bar at the top, select **Save**. 1. When prompted, select **Continue**.
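As a rough CLI alternative to Steps 2 through 4, both settings can be created in one call; `<app-name>` and the two setting values are placeholders you'd substitute from the portal.

```bash
# Create both app settings the sample app reads at runtime
az webapp config appsettings set \
  --resource-group msdocs-expressjs-mongodb-tutorial \
  --name <app-name> \
  --settings DATABASE_NAME='<database-name>' DATABASE_URL='mongodb://...'
```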
In this step, you'll configure GitHub deployment using GitHub Actions. It's just
:::row::: :::column span="2":::
- **Step 1.** In a new browser window:
+ **Step 1:** In a new browser window:
1. Sign in to your GitHub account. 1. Navigate to [https://github.com/Azure-Samples/msdocs-nodejs-mongodb-azure-sample-app](https://github.com/Azure-Samples/msdocs-nodejs-mongodb-azure-sample-app). 1. Select **Fork**.
In this step, you'll configure GitHub deployment using GitHub Actions. It's just
:::row-end::: :::row::: :::column span="2":::
- **Step 2.** In the GitHub page, open Visual Studio Code in the browser by pressing the `.` key.
+ **Step 2:** In the GitHub page, open Visual Studio Code in the browser by pressing the `.` key.
:::column-end::: :::column::: :::image type="content" source="./media/tutorial-nodejs-mongodb-app/azure-portal-deploy-sample-code-2.png" alt-text="A screenshot showing how to open the Visual Studio Code browser experience in GitHub." lightbox="./media/tutorial-nodejs-mongodb-app/azure-portal-deploy-sample-code-2.png":::
In this step, you'll configure GitHub deployment using GitHub Actions. It's just
:::row-end::: :::row::: :::column span="2":::
- **Step 3.** In Visual Studio Code in the browser, open *config/connection.js* in the explorer.
+ **Step 3:** In Visual Studio Code in the browser, open *config/connection.js* in the explorer.
In the `getConnectionInfo` function, see that the app settings you created earlier for the MongoDB connection are used (`DATABASE_URL` and `DATABASE_NAME`). :::column-end::: :::column:::
In this step, you'll configure GitHub deployment using GitHub Actions. It's just
:::row-end::: :::row::: :::column span="2":::
- **Step 4.** Back in the App Service page, in the left menu, select **Deployment Center**.
+ **Step 4:** Back in the App Service page, in the left menu, select **Deployment Center**.
:::column-end::: :::column::: :::image type="content" source="./media/tutorial-nodejs-mongodb-app/azure-portal-deploy-sample-code-4.png" alt-text="A screenshot showing how to open the deployment center in App Service." lightbox="./media/tutorial-nodejs-mongodb-app/azure-portal-deploy-sample-code-4.png":::
In this step, you'll configure GitHub deployment using GitHub Actions. It's just
:::row-end::: :::row::: :::column span="2":::
- **Step 5.** In the Deployment Center page:
+ **Step 5:** In the Deployment Center page:
1. In **Source**, select **GitHub**. By default, **GitHub Actions** is selected as the build provider. 1. Sign in to your GitHub account and follow the prompt to authorize Azure. 1. In **Organization**, select your account.
In this step, you'll configure GitHub deployment using GitHub Actions. It's just
:::row-end::: :::row::: :::column span="2":::
- **Step 6.** In the Deployment Center page:
+ **Step 6:** In the Deployment Center page:
1. Select **Logs**. A deployment run is already started. 1. In the log item for the deployment run, select **Build/Deploy Logs**. :::column-end:::
In this step, you'll configure GitHub deployment using GitHub Actions. It's just
:::row-end::: :::row::: :::column span="2":::
- **Step 7.** You're taken to your GitHub repository and see that the GitHub action is running. The workflow file defines two separate stages, build and deploy. Wait for the GitHub run to show a status of **Complete**. It takes about 15 minutes.
+ **Step 7:** You're taken to your GitHub repository and see that the GitHub action is running. The workflow file defines two separate stages, build and deploy. Wait for the GitHub run to show a status of **Complete**. It takes about 15 minutes.
:::column-end::: :::column::: :::image type="content" source="./media/tutorial-nodejs-mongodb-app/azure-portal-deploy-sample-code-7.png" alt-text="A screenshot showing a GitHub run in progress." lightbox="./media/tutorial-nodejs-mongodb-app/azure-portal-deploy-sample-code-7.png":::
In this step, you'll configure GitHub deployment using GitHub Actions. It's just
:::row::: :::column span="2":::
- **Step 1.** In the App Service page:
+ **Step 1:** In the App Service page:
1. From the left menu, select **Overview**. 1. Select the URL of your app. You can also navigate directly to `https://<app-name>.azurewebsites.net`. :::column-end:::
In this step, you'll configure GitHub deployment using GitHub Actions. It's just
:::row-end::: :::row::: :::column span="2":::
- **Step 2.** Add a few tasks to the list.
+ **Step 2:** Add a few tasks to the list.
Congratulations, you're running a secure data-driven Node.js app in Azure App Service. :::column-end::: :::column:::
Azure App Service captures all messages logged to the console to assist you in d
:::row::: :::column span="2":::
- **Step 1.** In the App Service page:
+ **Step 1:** In the App Service page:
1. From the left menu, select **App Service logs**. 1. Under **Application logging**, select **File System**. :::column-end:::
Azure App Service captures all messages logged to the console to assist you in d
:::row-end::: :::row::: :::column span="2":::
- **Step 2.** From the left menu, select **Log stream**. You see the logs for your app, including platform logs and logs from inside the container.
+ **Step 2:** From the left menu, select **Log stream**. You see the logs for your app, including platform logs and logs from inside the container.
:::column-end::: :::column::: :::image type="content" source="./media/tutorial-nodejs-mongodb-app/azure-portal-stream-diagnostic-logs-2.png" alt-text="A screenshot showing how to view the log stream in the Azure portal." lightbox="./media/tutorial-nodejs-mongodb-app/azure-portal-stream-diagnostic-logs-2.png":::
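The same two actions map to Azure CLI commands, sketched below with placeholder names; `az webapp log tail` streams the combined platform and container logs to your terminal.

```bash
# Enable file system application logging, then stream the logs live
az webapp log config --resource-group <resource-group> --name <app-name> --application-logging filesystem
az webapp log tail --resource-group <resource-group> --name <app-name>
```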
Azure App Service provides a web-based diagnostics console named [Kudu](./resour
:::row::: :::column span="2":::
- **Step 1.** In the App Service page:
+ **Step 1:** In the App Service page:
1. From the left menu, select **Advanced Tools**. 1. Select **Go**. You can also navigate directly to `https://<app-name>.scm.azurewebsites.net`. :::column-end:::
Azure App Service provides a web-based diagnostics console named [Kudu](./resour
:::row-end::: :::row::: :::column span="2":::
- **Step 2.** In the Kudu page, select **Deployments**.
+ **Step 2:** In the Kudu page, select **Deployments**.
:::column-end::: :::column::: :::image type="content" source="./media/tutorial-nodejs-mongodb-app/azure-portal-inspect-kudu-2.png" alt-text="A screenshot of the main page in the Kudu SCM app showing the different information available about the hosting environment." lightbox="./media/tutorial-nodejs-mongodb-app/azure-portal-inspect-kudu-2.png":::
Azure App Service provides a web-based diagnostics console named [Kudu](./resour
:::row-end::: :::row::: :::column span="2":::
- **Step 3.** Go back to the Kudu homepage and select **Site wwwroot**.
+ **Step 3:** Go back to the Kudu homepage and select **Site wwwroot**.
:::column-end::: :::column::: :::image type="content" source="./media/tutorial-nodejs-mongodb-app/azure-portal-inspect-kudu-4.png" alt-text="A screenshot showing site wwwroot selected." lightbox="./media/tutorial-nodejs-mongodb-app/azure-portal-inspect-kudu-4.png":::
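Kudu also exposes the same deployment history over its REST API. A minimal sketch, assuming basic authentication with the app's publishing credentials; all placeholder values are hypothetical.

```bash
# List recent deployments recorded by Kudu
curl -u '<deployment-username>:<deployment-password>' \
  "https://<app-name>.scm.azurewebsites.net/api/deployments"
```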
When you're finished, you can delete all of the resources from your Azure subscr
:::row::: :::column span="2":::
- **Step 1.** In the search bar at the top of the Azure portal:
+ **Step 1:** In the search bar at the top of the Azure portal:
1. Enter the resource group name. 1. Select the resource group. :::column-end:::
When you're finished, you can delete all of the resources from your Azure subscr
:::row-end::: :::row::: :::column span="2":::
- **Step 2.** In the resource group page, select **Delete resource group**.
+ **Step 2:** In the resource group page, select **Delete resource group**.
:::column-end::: :::column::: :::image type="content" source="./media/tutorial-nodejs-mongodb-app/azure-portal-clean-up-resources-2.png" alt-text="A screenshot showing the location of the Delete Resource Group button in the Azure portal." lightbox="./media/tutorial-nodejs-mongodb-app/azure-portal-clean-up-resources-2.png":::
When you're finished, you can delete all of the resources from your Azure subscr
:::row-end::: :::row::: :::column span="2":::
- **Step 3.**
+ **Step 3:**
1. Enter the resource group name to confirm your deletion. 1. Select **Delete**. :::column-end:::
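Deleting the resource group from a terminal is a one-liner; this assumes the resource group name used in Step 1 of this tutorial.

```bash
# Deletes the resource group and everything in it; --no-wait returns immediately
az group delete --name msdocs-expressjs-mongodb-tutorial --yes --no-wait
```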
app-service Tutorial Php Mysql App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-php-mysql-app.md
Title: 'Tutorial: PHP app with MySQL and Redis' description: Learn how to get a PHP app working in Azure, with connection to a MySQL database and a Redis cache in Azure. Laravel is used in the tutorial.-++ ms.assetid: 14feb4f3-5095-496e-9a40-690e1414bd73 ms.devlang: php Last updated 06/30/2023-+ # Tutorial: Deploy a PHP, MySQL, and Redis app to Azure App Service
Sign in to the [Azure portal](https://portal.azure.com/) and follow these steps
:::row::: :::column span="2":::
- **Step 1.** In the Azure portal:
+ **Step 1:** In the Azure portal:
1. Enter "web app database" in the search bar at the top of the Azure portal. 1. Select the item labeled **Web App + Database** under the **Marketplace** heading. You can also navigate to the [creation wizard](https://portal.azure.com/?feature.customportal=false#create/Microsoft.AppServiceWebAppDatabaseV3) directly.
Sign in to the [Azure portal](https://portal.azure.com/) and follow these steps
:::row-end::: :::row::: :::column span="2":::
- **Step 2.** In the **Create Web App + Database** page, fill out the form as follows.
+ **Step 2:** In the **Create Web App + Database** page, fill out the form as follows.
1. *Resource Group* &rarr; Select **Create new** and use a name of **msdocs-laravel-mysql-tutorial**. 1. *Region* &rarr; Any Azure region near you. 1. *Name* &rarr; **msdocs-laravel-mysql-XYZ** where *XYZ* is any three random characters. This name must be unique across Azure.
Sign in to the [Azure portal](https://portal.azure.com/) and follow these steps
:::row-end::: :::row::: :::column span="2":::
- **Step 3.** The deployment takes a few minutes to complete. Once deployment completes, select the **Go to resource** button. You're taken directly to the App Service app, but the following resources are created:
+ **Step 3:** The deployment takes a few minutes to complete. Once deployment completes, select the **Go to resource** button. You're taken directly to the App Service app, but the following resources are created:
- **Resource group** &rarr; The container for all the created resources. - **App Service plan** &rarr; Defines the compute resources for App Service. A Linux plan in the *Basic* tier is created. - **App Service** &rarr; Represents your app and runs in the App Service plan.
Sign in to the [Azure portal](https://portal.azure.com/) and follow these steps
:::row::: :::column span="2":::
- **Step 1.** In the App Service page, in the left menu, select **Configuration**.
+ **Step 1:** In the App Service page, in the left menu, select **Configuration**.
:::column-end::: :::column::: :::image type="content" source="./media/tutorial-php-mysql-app/azure-portal-get-connection-string-1.png" alt-text="A screenshot showing how to open the configuration page in App Service." lightbox="./media/tutorial-php-mysql-app/azure-portal-get-connection-string-1.png":::
Sign in to the [Azure portal](https://portal.azure.com/) and follow these steps
:::row-end::: :::row::: :::column span="2":::
- **Step 2.**
+ **Step 2:**
 1. Find app settings that begin with **AZURE_MYSQL_**. They were generated from the new MySQL database by the creation wizard. 1. Also, find app settings that begin with **AZURE_REDIS_**. They were generated from the new Redis cache by the creation wizard. These setting names are all you need to set up your application. 1. If you want, you can select the **Edit** button to the right of each setting and see or copy its value.
Sign in to the [Azure portal](https://portal.azure.com/) and follow these steps
:::row-end::: :::row::: :::column span="2":::
- **Step 3.** In the **Application settings** tab of the **Configuration** page, create a `CACHE_DRIVER` setting:
+ **Step 3:** In the **Application settings** tab of the **Configuration** page, create a `CACHE_DRIVER` setting:
1. Select **New application setting**. 1. In the **Name** field, enter *CACHE_DRIVER*. 1. In the **Value** field, enter *redis*.
Sign in to the [Azure portal](https://portal.azure.com/) and follow these steps
:::row-end::: :::row::: :::column span="2":::
- **Step 4.** Using the same steps in **Step 3**, create the following app settings:
+ **Step 4:** Using the same steps in **Step 3**, create the following app settings:
- **MYSQL_ATTR_SSL_CA**: Use */home/site/wwwroot/ssl/DigiCertGlobalRootCA.crt.pem* as the value. This app setting points to the path of the [TLS/SSL certificate you need to access the MySQL server](../mysql/flexible-server/how-to-connect-tls-ssl.md#download-the-public-ssl-certificate). It's included in the sample repository for convenience. - **LOG_CHANNEL**: Use *stderr* as the value. This setting tells Laravel to pipe logs to stderr, which makes it available to the App Service logs. - **APP_DEBUG**: Use *true* as the value. It's a [Laravel debugging variable](https://laravel.com/docs/10.x/errors#configuration) that enables debug mode pages.
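A rough CLI equivalent creates all four settings from Steps 3 and 4 in one call; `<app-name>` is a placeholder.

```bash
# Create the cache driver, logging, debug, and TLS certificate settings together
az webapp config appsettings set \
  --resource-group msdocs-laravel-mysql-tutorial \
  --name <app-name> \
  --settings CACHE_DRIVER=redis LOG_CHANNEL=stderr APP_DEBUG=true \
             MYSQL_ATTR_SSL_CA=/home/site/wwwroot/ssl/DigiCertGlobalRootCA.crt.pem
```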
In this step, you'll configure GitHub deployment using GitHub Actions. It's just
:::row::: :::column span="2":::
- **Step 1.** In a new browser window:
+ **Step 1:** In a new browser window:
1. Sign in to your GitHub account. 1. Navigate to [https://github.com/Azure-Samples/laravel-tasks](https://github.com/Azure-Samples/laravel-tasks). 1. Select **Fork**.
In this step, you'll configure GitHub deployment using GitHub Actions. It's just
:::row-end::: :::row::: :::column span="2":::
- **Step 2.** In the GitHub page, open Visual Studio Code in the browser by pressing the `.` key.
+ **Step 2:** In the GitHub page, open Visual Studio Code in the browser by pressing the `.` key.
:::column-end::: :::column::: :::image type="content" source="./media/tutorial-php-mysql-app/azure-portal-deploy-sample-code-2.png" alt-text="A screenshot showing how to open the Visual Studio Code browser experience in GitHub." lightbox="./media/tutorial-php-mysql-app/azure-portal-deploy-sample-code-2.png":::
In this step, you'll configure GitHub deployment using GitHub Actions. It's just
:::row-end::: :::row::: :::column span="2":::
- **Step 3.** In Visual Studio Code in the browser, open *config/database.php* in the explorer. Find the `mysql` section and make the following changes:
+ **Step 3:** In Visual Studio Code in the browser, open *config/database.php* in the explorer. Find the `mysql` section and make the following changes:
1. Replace `DB_HOST` with `AZURE_MYSQL_HOST`. 1. Replace `DB_DATABASE` with `AZURE_MYSQL_DBNAME`. 1. Replace `DB_USERNAME` with `AZURE_MYSQL_USERNAME`.
In this step, you'll configure GitHub deployment using GitHub Actions. It's just
:::row-end::: :::row::: :::column span="2":::
- **Step 4.** In *config/database.php* scroll to the Redis `cache` section and make the following changes:
+ **Step 4:** In *config/database.php* scroll to the Redis `cache` section and make the following changes:
1. Replace `REDIS_HOST` with `AZURE_REDIS_HOST`. 1. Replace `REDIS_PASSWORD` with `AZURE_REDIS_PASSWORD`. 1. Replace `REDIS_PORT` with `AZURE_REDIS_PORT`.
In this step, you'll configure GitHub deployment using GitHub Actions. It's just
:::row-end::: :::row::: :::column span="2":::
- **Step 5.**
+ **Step 5:**
1. Select the **Source Control** extension. 1. In the textbox, type a commit message like `Configure DB & Redis variables`. 1. Select **Commit and Push**.
In this step, you'll configure GitHub deployment using GitHub Actions. It's just
:::row-end::: :::row::: :::column span="2":::
- **Step 6.** Back in the App Service page, in the left menu, select **Deployment Center**.
+ **Step 6:** Back in the App Service page, in the left menu, select **Deployment Center**.
:::column-end::: :::column::: :::image type="content" source="./media/tutorial-php-mysql-app/azure-portal-deploy-sample-code-6.png" alt-text="A screenshot showing how to open the deployment center in App Service." lightbox="./media/tutorial-php-mysql-app/azure-portal-deploy-sample-code-6.png":::
In this step, you'll configure GitHub deployment using GitHub Actions. It's just
:::row-end::: :::row::: :::column span="2":::
- **Step 7.** In the Deployment Center page:
+ **Step 7:** In the Deployment Center page:
1. In **Source**, select **GitHub**. By default, **GitHub Actions** is selected as the build provider. 1. Sign in to your GitHub account and follow the prompt to authorize Azure. 1. In **Organization**, select your account.
In this step, you'll configure GitHub deployment using GitHub Actions. It's just
:::row-end::: :::row::: :::column span="2":::
- **Step 8.** In the Deployment Center page:
+ **Step 8:** In the Deployment Center page:
1. Select **Logs**. A deployment run is already started. 1. In the log item for the deployment run, select **Build/Deploy Logs**. :::column-end:::
In this step, you'll configure GitHub deployment using GitHub Actions. It's just
:::row-end::: :::row::: :::column span="2":::
- **Step 9.** You're taken to your GitHub repository and see that the GitHub action is running. The workflow file defines two separate stages, build and deploy. Wait for the GitHub run to show a status of **Complete**. It takes about 15 minutes.
+ **Step 9:** You're taken to your GitHub repository and see that the GitHub action is running. The workflow file defines two separate stages, build and deploy. Wait for the GitHub run to show a status of **Complete**. It takes about 15 minutes.
:::column-end::: :::column::: :::image type="content" source="./media/tutorial-php-mysql-app/azure-portal-deploy-sample-code-9.png" alt-text="A screenshot showing a GitHub run in progress." lightbox="./media/tutorial-php-mysql-app/azure-portal-deploy-sample-code-9.png":::
The creation wizard puts the MySQL database server behind a private endpoint, so
:::row::: :::column span="2":::
- **Step 1.** Back in the App Service page, in the left menu, select **SSH**.
+ **Step 1:** Back in the App Service page, in the left menu, select **SSH**.
:::column-end::: :::column::: :::image type="content" source="./media/tutorial-php-mysql-app/azure-portal-generate-db-schema-1.png" alt-text="A screenshot showing how to open the SSH shell for your app from the Azure portal." lightbox="./media/tutorial-php-mysql-app/azure-portal-generate-db-schema-1.png":::
The creation wizard puts the MySQL database server behind a private endpoint, so
:::row-end::: :::row::: :::column span="2":::
- **Step 2.** In the SSH terminal:
+ **Step 2:** In the SSH terminal:
1. Run `cd /home/site/wwwroot`. Here are all your deployed files. 1. Run `php artisan migrate --force`. If it succeeds, App Service is connecting successfully to the MySQL database. Only changes to files in `/home` can persist beyond app restarts. Changes outside of `/home` aren't persisted.
The creation wizard puts the MySQL database server behind a private endpoint, so
:::row::: :::column span="2":::
- **Step 1.**
+ **Step 1:**
1. From the left menu, select **Configuration**. 1. Select the **General settings** tab. :::column-end:::
The creation wizard puts the MySQL database server behind a private endpoint, so
:::row-end::: :::row::: :::column span="2":::
- **Step 2.** In the General settings tab:
+ **Step 2:** In the General settings tab:
1. In the **Startup Command** box, enter the following command: *cp /home/site/wwwroot/default /etc/nginx/sites-available/default && service nginx reload*. 1. Select **Save**. The command replaces the Nginx configuration file in the PHP container and restarts Nginx. This configuration ensures that the same change is made to the container each time it starts.
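The same startup command can be set from a terminal; a sketch with `<app-name>` as a placeholder.

```bash
# Set the container startup command that swaps in the custom Nginx config
az webapp config set \
  --resource-group msdocs-laravel-mysql-tutorial \
  --name <app-name> \
  --startup-file "cp /home/site/wwwroot/default /etc/nginx/sites-available/default && service nginx reload"
```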
The creation wizard puts the MySQL database server behind a private endpoint, so
:::row::: :::column span="2":::
- **Step 1.** In the App Service page:
+ **Step 1:** In the App Service page:
1. From the left menu, select **Overview**. 1. Select the URL of your app. You can also navigate directly to `https://<app-name>.azurewebsites.net`. :::column-end:::
The creation wizard puts the MySQL database server behind a private endpoint, so
:::row-end::: :::row::: :::column span="2":::
- **Step 2.** Add a few tasks to the list.
+ **Step 2:** Add a few tasks to the list.
Congratulations, you're running a secure data-driven PHP app in Azure App Service. :::column-end::: :::column:::
Azure App Service captures all messages logged to the console to assist you in d
:::row::: :::column span="2":::
- **Step 1.** In the App Service page:
+ **Step 1:** In the App Service page:
1. From the left menu, select **App Service logs**. 1. Under **Application logging**, select **File System**. :::column-end:::
Azure App Service captures all messages logged to the console to assist you in d
:::row-end::: :::row::: :::column span="2":::
- **Step 2.** From the left menu, select **Log stream**. You see the logs for your app, including platform logs and logs from inside the container.
+ **Step 2:** From the left menu, select **Log stream**. You see the logs for your app, including platform logs and logs from inside the container.
:::column-end::: :::column::: :::image type="content" source="./media/tutorial-php-mysql-app/azure-portal-stream-diagnostic-logs-2.png" alt-text="A screenshot showing how to view the log stream in the Azure portal." lightbox="./media/tutorial-php-mysql-app/azure-portal-stream-diagnostic-logs-2.png":::
When you're finished, you can delete all of the resources from your Azure subscr
:::row::: :::column span="2":::
- **Step 1.** In the search bar at the top of the Azure portal:
+ **Step 1:** In the search bar at the top of the Azure portal:
1. Enter the resource group name. 1. Select the resource group. :::column-end:::
When you're finished, you can delete all of the resources from your Azure subscr
:::row-end::: :::row::: :::column span="2":::
- **Step 2.** In the resource group page, select **Delete resource group**.
+ **Step 2:** In the resource group page, select **Delete resource group**.
:::column-end::: :::column::: :::image type="content" source="./media/tutorial-php-mysql-app/azure-portal-clean-up-resources-2.png" alt-text="A screenshot showing the location of the Delete Resource Group button in the Azure portal." lightbox="./media/tutorial-php-mysql-app/azure-portal-clean-up-resources-2.png":::
When you're finished, you can delete all of the resources from your Azure subscr
:::row-end::: :::row::: :::column span="2":::
- **Step 3.**
+ **Step 3:**
1. Enter the resource group name to confirm your deletion. 1. Select **Delete**. :::column-end:::
app-service Tutorial Python Postgresql App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-python-postgresql-app.md
description: Create a Python Django or Flask web app with a PostgreSQL database
ms.devlang: python Last updated 02/28/2023-
-zone_pivot_groups: deploy-python-web-app-postgresql
+++ # Deploy a Python (Django or Flask) web app with PostgreSQL in Azure
In this tutorial, you'll deploy a data-driven Python web app (**[Django](https:/
* An Azure account with an active subscription. If you don't have an Azure account, you [can create one for free](https://azure.microsoft.com/free/python). * Knowledge of Python with Flask development or [Python with Django development](/training/paths/django-create-data-driven-websites/) - ## Sample application Sample Python applications using the Flask and Django framework are provided to help you follow along with this tutorial. To deploy them without running them locally, skip this part.
Sign in to the [Azure portal](https://portal.azure.com/) and follow these steps
:::row::: :::column span="2":::
- **Step 1.** In the Azure portal:
+ **Step 1:** In the Azure portal:
1. Enter "web app database" in the search bar at the top of the Azure portal. 1. Select the item labeled **Web App + Database** under the **Marketplace** heading. You can also navigate to the [creation wizard](https://portal.azure.com/?feature.customportal=false#create/Microsoft.AppServiceWebAppDatabaseV3) directly.
Sign in to the [Azure portal](https://portal.azure.com/) and follow these steps
:::row-end::: :::row::: :::column span="2":::
- **Step 2.** In the **Create Web App + Database** page, fill out the form as follows.
+ **Step 2:** In the **Create Web App + Database** page, fill out the form as follows.
1. *Resource Group* &rarr; Select **Create new** and use a name of **msdocs-python-postgres-tutorial**. 1. *Region* &rarr; Any Azure region near you. 1. *Name* &rarr; **msdocs-python-postgres-XYZ** where *XYZ* is any three random characters. This name must be unique across Azure.
Sign in to the [Azure portal](https://portal.azure.com/) and follow these steps
:::row-end::: :::row::: :::column span="2":::
- **Step 3.** The deployment takes a few minutes to complete. Once deployment completes, select the **Go to resource** button. You're taken directly to the App Service app, but the following resources are created:
+ **Step 3:** The deployment takes a few minutes to complete. Once deployment completes, select the **Go to resource** button. You're taken directly to the App Service app, but the following resources are created:
- **Resource group** &rarr; The container for all the created resources. - **App Service plan** &rarr; Defines the compute resources for App Service. A Linux plan in the *Basic* tier is created. - **App Service** &rarr; Represents your app and runs in the App Service plan.
The creation wizard generated the connectivity variables for you already as [app
:::row::: :::column span="2":::
- **Step 1.** In the App Service page, in the left menu, select Configuration.
+ **Step 1:** In the App Service page, in the left menu, select Configuration.
:::column-end::: :::column::: :::image type="content" source="./media/tutorial-python-postgresql-app/azure-portal-get-connection-string-1.png" alt-text="A screenshot showing how to open the configuration page in App Service." lightbox="./media/tutorial-python-postgresql-app/azure-portal-get-connection-string-1.png":::
The creation wizard generated the connectivity variables for you already as [app
:::row-end::: :::row::: :::column span="2":::
- **Step 2.** In the **Application settings** tab of the **Configuration** page, verify that `AZURE_POSTGRESQL_CONNECTIONSTRING` is present. That will be injected into the runtime environment as an environment variable.
+ **Step 2:** In the **Application settings** tab of the **Configuration** page, verify that `AZURE_POSTGRESQL_CONNECTIONSTRING` is present. That will be injected into the runtime environment as an environment variable.
:::column-end::: :::column::: :::image type="content" source="./media/tutorial-python-postgresql-app/azure-portal-get-connection-string-2.png" alt-text="A screenshot showing how to see the autogenerated connection string." lightbox="./media/tutorial-python-postgresql-app/azure-portal-get-connection-string-2.png":::
The creation wizard generated the connectivity variables for you already as [app
:::row-end::: :::row::: :::column span="2":::
- **Step 3.** In a terminal or command prompt, run the following Python script to generate a unique secret: `python -c 'import secrets; print(secrets.token_hex())'`. Copy the output value to use in the next step.
+ **Step 3:** In a terminal or command prompt, run the following Python script to generate a unique secret: `python -c 'import secrets; print(secrets.token_hex())'`. Copy the output value to use in the next step.
:::column-end::: :::column::: :::column-end::: :::row-end::: :::row::: :::column span="2":::
- **Step 4.** In the **Application settings** tab of the **Configuration** page, select **New application setting**. Name the setting `SECRET_KEY`. Paste the value from the previous value. Select **OK**.
+ **Step 4:** In the **Application settings** tab of the **Configuration** page, select **New application setting**. Name the setting `SECRET_KEY`. Paste the value from the previous step. Select **OK**.
:::column-end::: :::column::: :::image type="content" source="./media/tutorial-python-postgresql-app/azure-portal-app-service-app-setting.png" alt-text="A screenshot showing how to set the SECRET_KEY app setting in the Azure portal." lightbox="./media/tutorial-python-postgresql-app/azure-portal-app-service-app-setting.png":::
The creation wizard generated the connectivity variables for you already as [app
:::row-end::: :::row::: :::column span="2":::
- **Step 5.** Select **Save**.
+ **Step 5:** Select **Save**.
:::column-end::: :::column::: :::image type="content" source="./media/tutorial-python-postgresql-app/azure-portal-app-service-app-setting-save.png" alt-text="A screenshot showing how to save the SECRET_KEY app setting in the Azure portal." lightbox="./media/tutorial-python-postgresql-app/azure-portal-app-service-app-setting-save.png":::
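Steps 3 through 5 collapse into two shell commands if you prefer a terminal; this sketch assumes the resource group name from earlier and a placeholder `<app-name>`.

```bash
# Generate a random secret locally, then store it as an app setting in one step
SECRET_KEY=$(python -c 'import secrets; print(secrets.token_hex())')
az webapp config appsettings set \
  --resource-group msdocs-python-postgres-tutorial \
  --name <app-name> \
  --settings SECRET_KEY="$SECRET_KEY"
```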
In this step, you'll configure GitHub deployment using GitHub Actions. It's just
:::row::: :::column span="2":::
- **Step 1.** In a new browser window:
+ **Step 1:** In a new browser window:
1. Sign in to your GitHub account. 1. Navigate to [https://github.com/Azure-Samples/msdocs-flask-postgresql-sample-app](https://github.com/Azure-Samples/msdocs-flask-postgresql-sample-app). 1. Select **Fork**.
In this step, you'll configure GitHub deployment using GitHub Actions. It's just
:::row-end::: :::row::: :::column span="2":::
- **Step 2.** In the GitHub page, open Visual Studio Code in the browser by pressing the `.` key.
+ **Step 2:** In the GitHub page, open Visual Studio Code in the browser by pressing the `.` key.
:::column-end::: :::column::: :::image type="content" source="./media/tutorial-python-postgresql-app/azure-portal-deploy-sample-code-flask-2.png" alt-text="A screenshot showing how to open the Visual Studio Code browser experience in GitHub (Flask)." lightbox="./media/tutorial-python-postgresql-app/azure-portal-deploy-sample-code-flask-2.png":::
In this step, you'll configure GitHub deployment using GitHub Actions. It's just
:::row-end::: :::row::: :::column span="2":::
- **Step 3.** In Visual Studio Code in the browser, open *azureproject/production.py* in the explorer.
+ **Step 3:** In Visual Studio Code in the browser, open *azureproject/production.py* in the explorer.
See the environment variables being used in the production environment, including the app settings that you saw in the configuration page. :::column-end::: :::column:::
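To confirm those variables actually reach the runtime, you can inspect the environment from an SSH session into the app container; a sketch with placeholder names.

```bash
# Open an SSH session, then print the injected settings
az webapp ssh --resource-group msdocs-python-postgres-tutorial --name <app-name>
# Inside the session:
#   env | grep -E 'AZURE_POSTGRESQL_CONNECTIONSTRING|SECRET_KEY'
```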
In this step, you'll configure GitHub deployment using GitHub Actions. It's just
:::row-end::: :::row::: :::column span="2":::
- **Step 4.** Back in the App Service page, in the left menu, select **Deployment Center**.
+ **Step 4:** Back in the App Service page, in the left menu, select **Deployment Center**.
:::column-end::: :::column::: :::image type="content" source="./media/tutorial-python-postgresql-app/azure-portal-deploy-sample-code-flask-4.png" alt-text="A screenshot showing how to open the deployment center in App Service (Flask)." lightbox="./media/tutorial-python-postgresql-app/azure-portal-deploy-sample-code-flask-4.png":::
In this step, you'll configure GitHub deployment using GitHub Actions. It's just
:::row-end::: :::row::: :::column span="2":::
- **Step 5.** In the Deployment Center page:
+ **Step 5:** In the Deployment Center page:
1. In **Source**, select **GitHub**. By default, **GitHub Actions** is selected as the build provider. 1. Sign in to your GitHub account and follow the prompt to authorize Azure. 1. In **Organization**, select your account.
In this step, you'll configure GitHub deployment using GitHub Actions. It's just
:::row-end::: :::row::: :::column span="2":::
- **Step 6.** In the Deployment Center page:
+ **Step 6:** In the Deployment Center page:
1. Select **Logs**. A deployment run is already started. 1. In the log item for the deployment run, select **Build/Deploy Logs**. :::column-end:::
In this step, you'll configure GitHub deployment using GitHub Actions. It's just
:::row-end::: :::row::: :::column span="2":::
- **Step 7.** You're taken to your GitHub repository and see that the GitHub action is running. The workflow file defines two separate stages, build and deploy. Wait for the GitHub run to show a status of **Complete**. It takes about 5 minutes.
+ **Step 7:** You're taken to your GitHub repository and see that the GitHub action is running. The workflow file defines two separate stages, build and deploy. Wait for the GitHub run to show a status of **Complete**. It takes about 5 minutes.
:::column-end::: :::column::: :::image type="content" source="./media/tutorial-python-postgresql-app/azure-portal-deploy-sample-code-flask-7.png" alt-text="A screenshot showing a GitHub run in progress (Flask)." lightbox="./media/tutorial-python-postgresql-app/azure-portal-deploy-sample-code-flask-7.png":::
In this step, you'll configure GitHub deployment using GitHub Actions. It's just
:::row::: :::column span="2":::
- **Step 1.** In a new browser window:
+ **Step 1:** In a new browser window:
1. Sign in to your GitHub account. 1. Navigate to [https://github.com/Azure-Samples/msdocs-django-postgresql-sample-app](https://github.com/Azure-Samples/msdocs-django-postgresql-sample-app). 1. Select **Fork**.
In this step, you'll configure GitHub deployment using GitHub Actions. It's just
:::row-end::: :::row::: :::column span="2":::
- **Step 2.** In the GitHub page, open Visual Studio Code in the browser by pressing the `.` key.
+ **Step 2:** In the GitHub page, open Visual Studio Code in the browser by pressing the `.` key.
:::column-end::: :::column::: :::image type="content" source="./media/tutorial-python-postgresql-app/azure-portal-deploy-sample-code-django-2.png" alt-text="A screenshot showing how to open the Visual Studio Code browser experience in GitHub (Django)." lightbox="./media/tutorial-python-postgresql-app/azure-portal-deploy-sample-code-django-2.png":::
In this step, you'll configure GitHub deployment using GitHub Actions. It's just
:::row-end::: :::row::: :::column span="2":::
- **Step 3.** In Visual Studio Code in the browser, open *azureproject/production.py* in the explorer.
+ **Step 3:** In Visual Studio Code in the browser, open *azureproject/production.py* in the explorer.
See the environment variables being used in the production environment, including the app settings that you saw in the configuration page. :::column-end::: :::column:::
In this step, you'll configure GitHub deployment using GitHub Actions. It's just
:::row-end::: :::row::: :::column span="2":::
- **Step 4.** Back in the App Service page, in the left menu, select **Deployment Center**.
+ **Step 4:** Back in the App Service page, in the left menu, select **Deployment Center**.
:::column-end::: :::column::: :::image type="content" source="./media/tutorial-python-postgresql-app/azure-portal-deploy-sample-code-django-4.png" alt-text="A screenshot showing how to open the deployment center in App Service (Django)." lightbox="./media/tutorial-python-postgresql-app/azure-portal-deploy-sample-code-django-4.png":::
In this step, you'll configure GitHub deployment using GitHub Actions. It's just
:::row-end::: :::row::: :::column span="2":::
- **Step 5.** In the Deployment Center page:
+ **Step 5:** In the Deployment Center page:
1. In **Source**, select **GitHub**. By default, **GitHub Actions** is selected as the build provider. 1. Sign in to your GitHub account and follow the prompt to authorize Azure. 1. In **Organization**, select your account.
In this step, you'll configure GitHub deployment using GitHub Actions. It's just
:::row-end::: :::row::: :::column span="2":::
- **Step 6.** In the Deployment Center page:
+ **Step 6:** In the Deployment Center page:
1. Select **Logs**. A deployment run is already started. 1. In the log item for the deployment run, select **Build/Deploy Logs**. :::column-end:::
In this step, you'll configure GitHub deployment using GitHub Actions. It's just
:::row-end::: :::row::: :::column span="2":::
- **Step 7.** You're taken to your GitHub repository and see that the GitHub action is running. The workflow file defines two separate stages, build and deploy. Wait for the GitHub run to show a status of **Complete**. It takes about 5 minutes.
+ **Step 7:** You're taken to your GitHub repository and see that the GitHub action is running. The workflow file defines two separate stages, build and deploy. Wait for the GitHub run to show a status of **Complete**. It takes about 5 minutes.
:::column-end::: :::column::: :::image type="content" source="./media/tutorial-python-postgresql-app/azure-portal-deploy-sample-code-django-7.png" alt-text="A screenshot showing a GitHub run in progress (Django)." lightbox="./media/tutorial-python-postgresql-app/azure-portal-deploy-sample-code-django-7.png":::
With the PostgreSQL database protected by the virtual network, the easiest way t
:::row::: :::column span="2":::
- **Step 1.** Back in the App Service page, in the left menu, select **SSH**.
+ **Step 1:** Back in the App Service page, in the left menu, select **SSH**.
1. Select **Go**. :::column-end::: :::column:::
With the PostgreSQL database protected by the virtual network, the easiest way t
:::row-end::: :::row::: :::column span="2":::
- **Step 2.** In the SSH terminal, run `flask db upgrade`. If it succeeds, App Service is [connecting successfully to the database](#i-get-an-error-when-running-database-migrations).
+ **Step 2:** In the SSH terminal, run `flask db upgrade`. If it succeeds, App Service is [connecting successfully to the database](#i-get-an-error-when-running-database-migrations).
Only changes to files in `/home` can persist beyond app restarts. Changes outside of `/home` aren't persisted. :::column-end::: :::column:::
With the PostgreSQL database protected by the virtual network, the easiest way t
:::row::: :::column span="2":::
- **Step 1.** Back in the App Service page, in the left menu, select **SSH**.
+ **Step 1:** Back in the App Service page, in the left menu, select **SSH**.
1. Select **Go**. :::column-end::: :::column:::
With the PostgreSQL database protected by the virtual network, the easiest way t
:::row-end::: :::row::: :::column span="2":::
- **Step 2.** In the SSH terminal, run `python manage.py migrate`. If it succeeds, App Service is [connecting successfully to the database](#i-get-an-error-when-running-database-migrations).
+ **Step 2:** In the SSH terminal, run `python manage.py migrate`. If it succeeds, App Service is [connecting successfully to the database](#i-get-an-error-when-running-database-migrations).
Only changes to files in `/home` can persist beyond app restarts. Changes outside of `/home` aren't persisted. :::column-end::: :::column:::
With the PostgreSQL database protected by the virtual network, the easiest way t
:::row::: :::column span="2":::
- **Step 1.** In the App Service page:
+ **Step 1:** In the App Service page:
1. From the left menu, select **Overview**. 1. Select the URL of your app. You can also navigate directly to `https://<app-name>.azurewebsites.net`. :::column-end:::
With the PostgreSQL database protected by the virtual network, the easiest way t
:::row-end::: :::row::: :::column span="2":::
- **Step 2.** Add a few restaurants to the list.
+ **Step 2:** Add a few restaurants to the list.
Congratulations, you're running a web app in Azure App Service, with secure connectivity to Azure Database for PostgreSQL. :::column-end::: :::column:::
Azure App Service captures all messages output to the console to help you diagno
:::row::: :::column span="2":::
- **Step 1.** In the App Service page:
+ **Step 1:** In the App Service page:
1. From the left menu, select **App Service logs**. 1. Under **Application logging**, select **File System**. 1. In the top menu, select **Save**.
Azure App Service captures all messages output to the console to help you diagno
:::row-end::: :::row::: :::column span="2":::
- **Step 2.** From the left menu, select **Log stream**. You see the logs for your app, including platform logs and logs from inside the container.
+ **Step 2:** From the left menu, select **Log stream**. You see the logs for your app, including platform logs and logs from inside the container.
:::column-end::: :::column::: :::image type="content" source="./media/tutorial-python-postgresql-app/azure-portal-stream-diagnostic-logs-2.png" alt-text="A screenshot showing how to view the log stream in the Azure portal." lightbox="./media/tutorial-python-postgresql-app/azure-portal-stream-diagnostic-logs-2.png":::
When you're finished, you can delete all of the resources from your Azure subscr
:::row::: :::column span="2":::
- **Step 1.** In the search bar at the top of the Azure portal:
+ **Step 1:** In the search bar at the top of the Azure portal:
1. Enter the resource group name. 1. Select the resource group. :::column-end:::
When you're finished, you can delete all of the resources from your Azure subscr
:::row-end::: :::row::: :::column span="2":::
- **Step 2.** In the resource group page, select **Delete resource group**.
+ **Step 2:** In the resource group page, select **Delete resource group**.
:::column-end::: :::column::: :::image type="content" source="./media/tutorial-python-postgresql-app/azure-portal-clean-up-resources-2.png" alt-text="A screenshot showing the location of the Delete Resource Group button in the Azure portal." lightbox="./media/tutorial-python-postgresql-app/azure-portal-clean-up-resources-2.png":::
When you're finished, you can delete all of the resources from your Azure subscr
:::row-end::: :::row::: :::column span="2":::
- **Step 3.**
+ **Step 3:**
1. Enter the resource group name to confirm your deletion. 1. Select **Delete**. :::column-end:::
If you can't connect to the SSH session, then the app itself has failed to start
If you encounter any errors related to connecting to the database, check if the app settings (`AZURE_POSTGRESQL_CONNECTIONSTRING`) have been changed. Without that connection string, the migrate command can't communicate with the database. --
-## Provision and deploy using the Azure Developer CLI
-
-Sample Python application templates using the Flask and Django framework are provided for this tutorial. The [Azure Developer CLI](/azure/developer/azure-developer-cli/overview) greatly streamlines the process of provisioning application resources and deploying code on Azure. For a more step-by-step approach using the Azure portal and other tools, toggle to the **Azure portal** approach at the top of the page.
-
-The Azure Developer CLI (azd) provides end-to-end support for project initialization, provisioning, deployment, monitoring, and scaffolding of a CI/CD pipeline that runs against real Azure resources. You can use `azd` to provision and deploy the resources for the sample application in an automated and streamlined way.
-
-Follow the steps below to set up the Azure Developer CLI and provision and deploy the sample application:
-
-1. Install the Azure Developer CLI. For a full list of supported installation options and tools, visit the [installation guide](/azure/developer/azure-developer-cli/install-azd).
-
- ### [Windows](#tab/windows)
-
- ```azdeveloper
- powershell -ex AllSigned -c "Invoke-RestMethod 'https://aka.ms/install-azd.ps1' | Invoke-Expression"
- ```
-
- ### [macOS/Linux](#tab/mac-linux)
-
- ```azdeveloper
- curl -fsSL https://aka.ms/install-azd.sh | bash
- ```
-
-
-
-1. Run the `azd init` command to initialize the `azd` app template. Include the `--template` parameter to specify the name of an existing `azd` template you wish to use. More information about working with templates is available on the [choose an `azd` template](/azure/developer/azure-developer-cli/azd-templates) page.
-
- ### [Flask](#tab/flask)
-
-   For this tutorial, Flask users should specify the [Python (Flask) web app with PostgreSQL](https://github.com/Azure-Samples/msdocs-flask-postgresql-sample-app.git) template.
-
- ```bash
- azd init --template msdocs-flask-postgresql-sample-app
- ```
-
- ### [Django](#tab/django)
-
-   For this tutorial, Django users should specify the [Python (Django) web app with PostgreSQL](https://github.com/Azure-Samples/msdocs-django-postgresql-sample-app.git) template.
-
- ```bash
- azd init --template msdocs-django-postgresql-sample-app
- ```
-
-1. Run the `azd auth login` command to sign in to Azure.
-
- ```bash
- azd auth login
- ```
-
-1. Run the `azd up` command to provision the necessary Azure resources and deploy the app code. The `azd up` command also prompts you to select the subscription and location to deploy to.
-
- ```bash
- azd up
- ```
-
-1. When the `azd up` command finishes running, it prints the URL of your deployed web app to the console. Select the URL, or copy and paste it into your browser, to explore the running app and verify that it's working correctly. The `azd up` command set up all of the Azure resources and deployed the application code for you.
-
- The name of the resource group that was created is also displayed in the console output. Locate the resource group in the Azure portal to see all of the provisioned resources.
-
- :::image type="content" border="False" source="./media/tutorial-python-postgresql-app/azd-resources-small.png" lightbox="./media/tutorial-python-postgresql-app/azd-resources.png" alt-text="A screenshot showing the resources deployed by the Azure Developer CLI.":::
-
-The Azure Developer CLI also enables you to configure your application to use a CI/CD pipeline for deployments, set up monitoring functionality, and even remove the provisioned resources if you want to tear everything down. For more information about these additional workflows, visit the project [README](https://github.com/Azure-Samples/msdocs-flask-postgresql-sample-app/blob/main/README.md).
-
-## Explore the completed azd project template workflow
-
-The sections ahead review the steps that `azd` handled for you in more depth. You can explore this workflow to better understand the requirements for deploying your own apps to Azure. When you ran `azd up`, the Azure Developer CLI completed the following steps:
-
-> [!NOTE]
-> You can also use the steps outlined in the **Azure portal** version of this flow to gain additional insights into the tasks that `azd` completed for you.
-
-### 1. Cloned and initialized the project
-
-The `azd init` command cloned the sample app project template to your machine. The project template includes the following components:
-
-* **Source code**: The code and assets for a Flask or Django web app that can be used for local development or deployed to Azure.
-* **Bicep files**: Infrastructure as code (IaC) files that are used by `azd` to create the necessary resources in Azure.
-* **Configuration files**: Essential configuration files such as `azure.yaml` that are used by `azd` to provision, deploy, and wire resources together to produce a fully fledged application.
-
-### 2. Provisioned the Azure resources
-
-The `azd up` command created all of the resources for the sample application in Azure using the Bicep files in the [`infra`](https://github.com/Azure-Samples/msdocs-flask-postgresql-sample-app/tree/main/infra) folder of the project template. [Bicep](/azure/azure-resource-manager/bicep/overview?tabs=bicep) is a declarative language used to manage Infrastructure as Code in Azure. Some of the key resources and configurations created by the template include:
-
-* **Resource group**: A resource group was created to hold all of the other provisioned Azure resources. The resource group keeps your resources well organized and easier to manage. The name of the resource group is based on the environment name you specified during the `azd up` initialization process.
-* **Azure Virtual Network**: A virtual network was created to enable the provisioned resources to securely connect and communicate with one another. Related configurations such as setting up a private DNS zone link were also applied.
-* **Azure App Service plan**: An App Service plan was created to host App Service instances. App Service plans define what compute resources are available for one or more web apps.
-* **Azure App Service**: An App Service instance was created in the new App Service plan to host and run the deployed application. In this case, a Linux instance was created and configured to run Python apps. Other configurations were also applied to the app service, such as setting the Postgres connection string and secret keys.
-* **Azure Database for PostgreSQL**: A Postgres database and server were created for the app hosted on App Service to connect to. The required admin user, network and connection settings were also configured.
-* **Azure Application Insights**: Application insights was set up and configured for the app hosted on the App Service. This service enables detailed telemetry and monitoring for your application.
-
-You can inspect the Bicep files in the [`infra`](https://github.com/Azure-Samples/msdocs-flask-postgresql-sample-app/tree/main/infra) folder of the project to understand how each of these resources was provisioned in more detail. The `resources.bicep` file defines most of the different services created in Azure. For example, the App Service plan and App Service web app instance were created and connected using the following Bicep code:
-
-### [Flask](#tab/flask)
-
-```bicep
-resource appServicePlan 'Microsoft.Web/serverfarms@2021-03-01' = {
- name: '${prefix}-service-plan'
- location: location
- tags: tags
- sku: {
- name: 'B1'
- }
- properties: {
- reserved: true
- }
-}
-
-resource web 'Microsoft.Web/sites@2022-03-01' = {
- name: '${prefix}-app-service'
- location: location
- tags: union(tags, { 'azd-service-name': 'web' })
- kind: 'app,linux'
- properties: {
- serverFarmId: appServicePlan.id
- siteConfig: {
- alwaysOn: true
- linuxFxVersion: 'PYTHON|3.10'
- ftpsState: 'Disabled'
- appCommandLine: 'startup.sh'
- }
- httpsOnly: true
- }
- identity: {
- type: 'SystemAssigned'
- }
-}
-```
-
-### [Django](#tab/django)
-
-```bicep
-resource appServicePlan 'Microsoft.Web/serverfarms@2021-03-01' = {
- name: '${prefix}-service-plan'
- location: location
- tags: tags
- sku: {
- name: 'B1'
- }
- properties: {
- reserved: true
- }
-}
-
-resource web 'Microsoft.Web/sites@2022-03-01' = {
- name: '${prefix}-app-service'
- location: location
- tags: union(tags, { 'azd-service-name': 'web' })
- kind: 'app,linux'
- properties: {
- serverFarmId: appServicePlan.id
- siteConfig: {
- alwaysOn: true
- linuxFxVersion: 'PYTHON|3.10'
- ftpsState: 'Disabled'
- appCommandLine: 'startup.sh'
- }
- httpsOnly: true
- }
- identity: {
- type: 'SystemAssigned'
- }
-}
-```
---
-The Azure Database for PostgreSQL was also created using the following Bicep:
-
-```bicep
-resource postgresServer 'Microsoft.DBforPostgreSQL/flexibleServers@2022-01-20-preview' = {
- location: location
- tags: tags
- name: pgServerName
- sku: {
- name: 'Standard_B1ms'
- tier: 'Burstable'
- }
- properties: {
- version: '12'
- administratorLogin: 'postgresadmin'
- administratorLoginPassword: databasePassword
- storage: {
- storageSizeGB: 128
- }
- backup: {
- backupRetentionDays: 7
- geoRedundantBackup: 'Disabled'
- }
- network: {
- delegatedSubnetResourceId: virtualNetwork::databaseSubnet.id
- privateDnsZoneArmResourceId: privateDnsZone.id
- }
- highAvailability: {
- mode: 'Disabled'
- }
- maintenanceWindow: {
- customWindow: 'Disabled'
- dayOfWeek: 0
- startHour: 0
- startMinute: 0
- }
- }
-
- dependsOn: [
- privateDnsZoneLink
- ]
-}
-```
-
-### 3. Deployed the application
-
-The `azd up` command also deployed the sample application code to the provisioned Azure resources. The Developer CLI understands how to deploy different parts of your application code to different services in Azure using the `azure.yaml` file at the root of the project. The `azure.yaml` file specifies the app source code location, the type of app, and the Azure Service that should host that app.
-
-Consider the following `azure.yaml` file. These configurations tell the Azure Developer CLI that the Python code that lives at the root of the project should be deployed to the created App Service.
-
-### [Flask](#tab/flask)
-
-```yml
-name: flask-postgresql-sample-app
-metadata:
- template: flask-postgresql-sample-app@0.0.1-beta
-services:
- web:
- project: .
- language: py
- host: appservice
-```
-
-### [Django](#tab/django)
-
-```yml
-name: django-postgresql-sample-app
-metadata:
- template: django-postgresql-sample-app@0.0.1-beta
-services:
- web:
- project: .
- language: py
- host: appservice
-```
---
-## Remove the resources
-
-Once you are finished experimenting with your sample application, you can run the `azd down` command to remove the app from Azure. Removing resources helps to avoid unintended costs or unused services in your Azure subscription.
-
-```bash
-azd down
-```
-- ## Frequently asked questions - [How much does this setup cost?](#how-much-does-this-setup-cost)
app-service Tutorial Ruby Postgres App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-ruby-postgres-app.md
description: Learn how to get a Linux Ruby app working in Azure App Service, wit
ms.devlang: ruby Last updated 06/18/2020-+++ # Build a Ruby and Postgres app in Azure App Service on Linux
app-service Tutorial Secure Domain Certificate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-secure-domain-certificate.md
Title: 'Tutorial: Secure app with a custom domain and certificate' description: Learn how to secure your brand with App Service using a custom domain and enabling App Service managed certificate. ++ Last updated 01/31/2023
You need to scale your app up to **Basic** tier. **Basic** tier fulfills the min
:::row::: :::column span="2":::
- **Step 1.** In the Azure portal:
+ **Step 1:** In the Azure portal:
1. Enter the name of your app in the search bar at the top. 1. Select your named resource with the type **App Service**. :::column-end:::
You need to scale your app up to **Basic** tier. **Basic** tier fulfills the min
:::row-end::: :::row::: :::column span="2":::
- **Step 2.** In your app's management page:
+ **Step 2:** In your app's management page:
1. In the left navigation, select **Scale up (App Service plan)**. 1. Select the checkbox for **Basic B1**. 1. Select **Select**.
For more information on app scaling, see [Scale up an app in Azure App Service](
:::row::: :::column span="2":::
- **Step 1.** In your app's management page:
+ **Step 1:** In your app's management page:
1. In the left menu, select **Custom domains**. 1. Select **Add custom domain**. :::column-end:::
For more information on app scaling, see [Scale up an app in Azure App Service](
:::row-end::: :::row::: :::column span="2":::
- **Step 2.** In the **Add custom domain** dialog:
+ **Step 2:** In the **Add custom domain** dialog:
1. For **Domain provider**, select **All other domain services**. 1. For **TLS/SSL certificate**, select **App Service Managed Certificate**. 1. For Domain, specify a fully qualified domain name you want based on the domain you own. For example, if you own `contoso.com`, you can use *www.contoso.com*.
For each custom domain in App Service, you need two DNS records with your domain
:::row::: :::column span="2":::
- **Step 1.** Back in the **Add custom domain** dialog in the Azure portal, select **Validate**.
+ **Step 1:** Back in the **Add custom domain** dialog in the Azure portal, select **Validate**.
:::column-end::: :::column::: :::image type="content" source="./media/tutorial-secure-domain-certificate/configure-custom-domain-validate.png" alt-text="A screenshot showing how to validate your DNS record settings in the Add a custom domain dialog." lightbox="./media/tutorial-secure-domain-certificate/configure-custom-domain-validate.png" border="true":::
For each custom domain in App Service, you need two DNS records with your domain
:::row-end::: :::row::: :::column span="2":::
- **Step 2.** If the **Domain validation** section shows green check marks next for both domain records, then you've configured them correctly. Select **Add**. If it shows any red X, fix any errors in the DNS record settings in your domain provider's website.
+ **Step 2:** If the **Domain validation** section shows green check marks next for both domain records, then you've configured them correctly. Select **Add**. If it shows any red X, fix any errors in the DNS record settings in your domain provider's website.
:::column-end::: :::column::: :::image type="content" source="./media/tutorial-secure-domain-certificate/configure-custom-domain-add.png" alt-text="A screenshot showing the Add button activated after validation." lightbox="./media/tutorial-secure-domain-certificate/configure-custom-domain-add.png" border="true":::
For each custom domain in App Service, you need two DNS records with your domain
:::row-end::: :::row::: :::column span="2":::
- **Step 3.** You should see the custom domain added to the list. You may also see a red X with **No binding**. Wait a few minutes for App Service to create the managed certificate for your custom domain. When the process is complete, the red X becomes a green check mark with **Secured**.
+ **Step 3:** You should see the custom domain added to the list. You may also see a red X with **No binding**. Wait a few minutes for App Service to create the managed certificate for your custom domain. When the process is complete, the red X becomes a green check mark with **Secured**.
:::column-end::: :::column::: :::image type="content" source="./media/tutorial-secure-domain-certificate/add-custom-domain-complete.png" alt-text="A screenshot showing the custom domains page with the new secured custom domain." lightbox="./media/tutorial-secure-domain-certificate/add-custom-domain-complete.png" border="true":::
See [Add a private certificate to your app](configure-ssl-certificate.md) and [S
- [Map an existing custom DNS name to Azure App Service](app-service-web-tutorial-custom-domain.md) - [Purchase an App Service domain](manage-custom-dns-buy-domain.md) - [Add a private certificate to your app](configure-ssl-certificate.md)-- [Secure a custom DNS name with a TLS/SSL binding in Azure App Service](configure-ssl-bindings.md)
+- [Secure a custom DNS name with a TLS/SSL binding in Azure App Service](configure-ssl-bindings.md)
app-service Tutorial Send Email https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-send-email.md
description: Learn how to invoke business processes from your App Service app. S
Last updated 04/08/2020 ms.devlang: csharp, javascript, php, python, ruby-+++ # Tutorial: Send email and invoke other business processes from App Service
app-service Web Sites Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/web-sites-monitor.md
ms.assetid: d273da4e-07de-48e0-b99d-4020d84a425e
Last updated 06/29/2023 +
app-service Web Sites Traffic Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/web-sites-traffic-manager.md
ms.assetid: dabda633-e72f-4dd4-bf1c-6e945da456fd
Last updated 02/25/2016 ++ # Controlling Azure App Service traffic with Azure Traffic Manager
app-service Webjobs Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/webjobs-create.md
description: Learn how to use WebJobs to run background tasks in Azure App Servi
ms.assetid: af01771e-54eb-4aea-af5f-f883ff39572b Last updated 7/30/2023--+++ #Customer intent: As a web developer, I want to leverage background tasks to keep my application running smoothly. adobe-target: true
application-gateway Application Gateway Backend Health Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/application-gateway-backend-health-troubleshooting.md
Previously updated : 05/23/2023 Last updated : 08/22/2023 -+ # Troubleshoot backend health issues in Application Gateway
To increase the timeout value, follow these steps:
4. If you're using Azure default DNS, check with your domain name registrar about whether proper A record or CNAME record mapping has been completed. 5. If the domain is private or internal, try to resolve it from a VM in the same virtual network. If you can resolve it, restart Application Gateway and check again. To restart Application Gateway, you need to [stop](/powershell/module/az.network/stop-azapplicationgateway) and [start](/powershell/module/az.network/start-azapplicationgateway) by using the PowerShell commands described in these linked resources.
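For reference, the stop/start sequence looks like the following sketch; the resource names are placeholders:

```powershell
# Retrieve the application gateway, then stop and start it to apply a restart
$appgw = Get-AzApplicationGateway -Name "<appgw_name>" -ResourceGroupName "<rg_name>"
Stop-AzApplicationGateway -ApplicationGateway $appgw
Start-AzApplicationGateway -ApplicationGateway $appgw
```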
-### Updates to the DNS entries of the backend pool
-
-**Message:** The backend health status could not be retrieved. This happens when an NSG/UDR/Firewall on the application gateway subnet is blocking traffic on ports 65503-65534 in case of v1 SKU, and ports 65200-65535 in case of the v2 SKU or if the FQDN configured in the backend pool could not be resolved to an IP address. To learn more visit - https://aka.ms/UnknownBackendHealth.
-
-**Cause:** Application Gateway resolves the DNS entries for the backend pool at time of startup and doesn't update them dynamically while running.
-
-**Resolution:**
-
-Application Gateway must be restarted after any modification to the backend server DNS entries to begin to use the new IP addresses. This operation can be completed via Azure PowerShell or Azure CLI.
-
-#### Azure PowerShell
-
-```powershell
-# Get Azure Application Gateway
-$appgw=Get-AzApplicationGateway -Name <appgw_name> -ResourceGroupName <rg_name>
-
-# Stop the Azure Application Gateway
-Stop-AzApplicationGateway -ApplicationGateway $appgw
-
-# Start the Azure Application Gateway
-Start-AzApplicationGateway -ApplicationGateway $appgw
-```
-
-#### Azure CLI
-
-```azurecli
-# Stop the Azure Application Gateway
-az network application-gateway stop -n <appgw_name> -g <rg_name>
-
-# Start the Azure Application Gateway
-az network application-gateway start -n <appgw_name> -g <rg_name>
-```
- ### TCP connect error **Message:** Application Gateway could not connect to the backend. Check that the backend responds on the port used for the probe. Also check whether any NSG/UDR/Firewall is blocking access to the Ip and port of this backend.
OR </br>
**Solution:** To resolve this issue, verify that the certificate on your server was created properly. For example, you can use [OpenSSL](https://www.openssl.org/docs/manmaster/man1/verify.html) to verify the certificate and its properties and then try reuploading the certificate to the Application Gateway HTTP settings.
-## Backend health status: unknown
+## Backend health status: Unknown
+
+### Updates to the DNS entries of the backend pool
+
+**Message:** The backend health status could not be retrieved. This happens when an NSG/UDR/Firewall on the application gateway subnet is blocking traffic on ports 65503-65534 in case of v1 SKU, and ports 65200-65535 in case of the v2 SKU or if the FQDN configured in the backend pool could not be resolved to an IP address. To learn more visit - https://aka.ms/UnknownBackendHealth.
+
+**Cause:** For FQDN (Fully Qualified Domain Name)-based backend targets, the Application Gateway caches and uses the last-known-good IP address if it fails to get a response for the subsequent DNS lookup. A PUT operation on a gateway in this state would clear its DNS cache altogether. As a result, there would be no destination address that the gateway can reach.
+
+**Resolution:**
+Check and fix the DNS servers to ensure that they serve a response for the given FQDN's DNS lookup. You must also check that the DNS servers are reachable through your application gateway's virtual network.
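+
+For example, from a VM in the same virtual network you can confirm that the FQDN resolves and observe the record's TTL; the FQDN below is a placeholder:
+
+```powershell
+# Resolve the backend FQDN and show the TTL of the returned records
+Resolve-DnsName -Name "backend.contoso.com" | Select-Object Name, Type, TTL, IPAddress
+```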
+### Other reasons
If the backend health is shown as Unknown, the portal view will resemble the following screenshot: ![Application Gateway backend health - Unknown](./media/application-gateway-backend-health-troubleshooting/appgwunknown.png)
application-gateway Application Gateway Troubleshooting 502 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/application-gateway-troubleshooting-502.md
Validate that the Custom Health Probe is configured correctly, as shown in the p
### Cause
-When a user request is received, the application gateway applies the configured rules to the request and routes it to a backend pool instance. It waits for a configurable interval of time for a response from the backend instance. By default, this interval is **20** seconds. In Application Gateway v1, if the application gateway doesn't receive a response from backend application in this interval, the user request gets a 502 error. In Application Gateway v2, if the application gateway doesn't receive a response from the backend application in this interval, the request will be tried against a second backend pool member. If the second request fails the user request gets a 502 error.
+When a user request is received, the application gateway applies the configured rules to the request and routes it to a backend pool instance. It waits for a configurable interval of time for a response from the backend instance. By default, this interval is **20** seconds. In Application Gateway v1, if the application gateway doesn't receive a response from backend application in this interval, the user request gets a 502 error. In Application Gateway v2, if the application gateway doesn't receive a response from the backend application in this interval, the request will be tried against a second backend pool member. If the second request fails the user request gets a 504 error.
### Solution
application-gateway How Application Gateway Works https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/how-application-gateway-works.md
Previously updated : 2/17/2023 Last updated : 8/22/2023
When an application gateway sends the original request to the backend server, it
> - **Contains an internally resolvable FQDN or a private IP address**, the application gateway routes the request to the backend server by using its instance private IP addresses. > - **Contains an external endpoint or an externally resolvable FQDN**, the application gateway routes the request to the backend server by using its frontend public IP address. If the subnet contains [service endpoints](../virtual-network/virtual-network-service-endpoints-overview.md), the application gateway will route the request to the service via its private IP address. DNS resolution is based on a private DNS zone or custom DNS server, if configured, or it uses the default Azure-provided DNS. If there isn't a frontend public IP address, one is assigned for the outbound external connectivity.
+### Backend server DNS resolution
+
+When a backend pool's server is configured with a Fully Qualified Domain Name (FQDN), Application Gateway performs a DNS lookup to get the domain name's IP address(es). The IP value is stored in your application gateway's cache to enable it to reach the targets faster when serving incoming requests.
+
+The Application Gateway retains this cached information for the period equivalent to that DNS record's TTL (time to live) and performs a fresh DNS lookup once the TTL expires. If a gateway detects a change in IP address for its subsequent DNS query, it will start routing the traffic to this updated destination. In case of problems such as the DNS lookup failing to receive a response or the record no longer exists, the gateway continues to use the last-known-good IP address(es). This ensures minimal impact on the data path.
+
+> [!IMPORTANT]
+> * When using custom DNS servers with Application Gateway's Virtual Network, it is crucial that all servers are identical and respond consistently with the same DNS values.
+> * Users of on-premises custom DNS servers must ensure connectivity to Azure DNS through [Azure DNS Private Resolver](../dns/private-resolver-hybrid-dns.md) (recommended) or DNS forwarder VM when using a Private DNS zone for Private endpoint.
+ ### Modifications to the request Application gateway inserts six additional headers to all requests before it forwards the requests to the backend. These headers are x-forwarded-for, x-forwarded-port, x-forwarded-proto, x-original-host, x-original-url, and x-appgw-trace-id. The format for x-forwarded-for header is a comma-separated list of IP:port.
application-gateway Http Response Codes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/http-response-codes.md
HTTP 502 errors can have several root causes, for example:
- Invalid or improper configuration of custom health probes. - Azure Application Gateway's [backend pool isn't configured or empty](application-gateway-troubleshooting-502.md#empty-backendaddresspool). - None of the VMs or instances in [virtual machine scale set are healthy](application-gateway-troubleshooting-502.md#unhealthy-instances-in-backendaddresspool).-- [Request time-out or connectivity issues](application-gateway-troubleshooting-502.md#request-time-out) with user requests.
+- [Request time-out or connectivity issues](application-gateway-troubleshooting-502.md#request-time-out) with user requests. Azure Application Gateway v1 SKU sends HTTP 502 errors if the backend response time exceeds the time-out value that's configured in the Backend Setting.
For information about scenarios where 502 errors occur, and how to troubleshoot them, see [Troubleshoot Bad Gateway errors](application-gateway-troubleshooting-502.md).
application-gateway Monitor Application Gateway Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/monitor-application-gateway-reference.md
sslEnabled_s | Does the client request have SSL enabled|
## See Also <!-- replace below with the proper link to your main monitoring service article -->-- See [Monitoring Azure Azure Application Gateway](monitor-application-gateway.md) for a description of monitoring Azure Application Gateway.
+- See [Monitoring Azure Application Gateway](monitor-application-gateway.md) for a description of monitoring Azure Application Gateway.
- See [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md) for details on monitoring Azure resources.
application-gateway Tutorial Ingress Controller Add On Existing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/tutorial-ingress-controller-add-on-existing.md
You'll now deploy a new application gateway, to simulate having an existing appl
```azurecli-interactive az network public-ip create -n myPublicIp -g myResourceGroup --allocation-method Static --sku Standard az network vnet create -n myVnet -g myResourceGroup --address-prefix 10.0.0.0/16 --subnet-name mySubnet --subnet-prefix 10.0.0.0/24
-az network application-gateway create -n myApplicationGateway -l eastus -g myResourceGroup --sku Standard_v2 --public-ip-address myPublicIp --vnet-name myVnet --subnet mySubnet --priority 100
+az network application-gateway create -n myApplicationGateway -g myResourceGroup --sku Standard_v2 --public-ip-address myPublicIp --vnet-name myVnet --subnet mySubnet --priority 100
``` > [!NOTE]
attestation Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/attestation/policy-reference.md
Title: Built-in policy definitions for Azure Attestation description: Lists Azure Policy built-in policy definitions for Azure Attestation. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2023 Last updated : 08/30/2023
automation Automation Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-disaster-recovery.md
In addition to high availability offered by Availability zones, some regions are
## Enable disaster recovery
-Every Automation account that you [create](https://learn.microsoft.com/azure/automation/quickstarts/create-azure-automation-account-portal)
+Every Automation account that you [create](/azure/automation/quickstarts/create-azure-automation-account-portal)
requires a location that you must use for deployment. This would be the primary region for your Automation account and it includes Assets, runbooks created for the Automation account, job execution data, and logs. For disaster recovery, the replica Automation account must be already deployed and ready in the secondary region. -- Begin by [creating a replica Automation account](https://learn.microsoft.com/azure/automation/quickstarts/create-azure-automation-account-portal#create-automation-account) in any alternate [region](https://azure.microsoft.com/global-infrastructure/services/?products=automation&regions=all).
+- Begin by [creating a replica Automation account](/azure/automation/quickstarts/create-azure-automation-account-portal#create-automation-account) in any alternate [region](https://azure.microsoft.com/global-infrastructure/services/?products=automation&regions=all).
- Select the secondary region of your choice - paired region or any other region where Azure Automation is available. - Apart from creating a replica of the Automation account, replicate the dependent resources such as Runbooks, Modules, Connections, Credentials, Certificates, Variables, Schedules and permissions assigned for the Run As account and Managed Identities in the Automation account in primary region to the Automation account in secondary region. You can use the [PowerShell script](#script-to-migrate-automation-account-assets-from-one-region-to-another) to migrate assets of the Automation account from one region to another. - If you are using [ARM templates](../azure-resource-manager/management/overview.md) to define and deploy Automation runbooks, you can use these templates to deploy the same runbooks in any other Azure region where you create the replica Automation account. In case of a region-wide outage or zone-wide failure in the primary region, you can execute the runbooks replicated in the secondary region to continue business as usual. This ensures that the secondary region steps up to continue the work if the primary region has a disruption or failure.
automation Automation Graphical Authoring Intro https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-graphical-authoring-intro.md
# Author graphical runbooks in Azure Automation > [!IMPORTANT]
-> Azure Automation Run As Account will retire on September 30, 2023 and will be replaced with Managed Identities. Before that date, you'll need to start migrating your runbooks to use [managed identities](automation-security-overview.md#managed-identities). For more information, see [migrating from an existing Run As accounts to managed identity](https://learn.microsoft.com/azure/automation/migrate-run-as-accounts-managed-identity?tabs=run-as-account#sample-scripts) to start migrating the runbooks from Run As account to managed identities before 30 September 2023.
+> Azure Automation Run As Account will retire on September 30, 2023 and will be replaced with Managed Identities. Before that date, you'll need to start migrating your runbooks to use [managed identities](automation-security-overview.md#managed-identities). For more information, see [migrating from an existing Run As accounts to managed identity](/azure/automation/migrate-run-as-accounts-managed-identity?tabs=run-as-account#sample-scripts) to start migrating the runbooks from Run As account to managed identities before 30 September 2023.
All runbooks in Azure Automation are Windows PowerShell workflows. Graphical runbooks and graphical PowerShell Workflow runbooks generate PowerShell code that the Automation workers run but that you cannot view or modify. You can convert a graphical runbook to a graphical PowerShell Workflow runbook, and vice versa. However, you can't convert these runbooks to a textual runbook. Additionally, the Automation graphical editor can't import a textual runbook.
automation Automation Manage Send Joblogs Log Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-manage-send-joblogs-log-analytics.md
Title: Forward Azure Automation job data to Azure Monitor logs
description: This article tells how to send job status and runbook job streams to Azure Monitor logs. Previously updated : 03/10/2022- Last updated : 08/28/2023++ # Forward Azure Automation diagnostic logs to Azure Monitor
automation Automation Managing Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-managing-data.md
For information about TLS 1.2 support with the Log Analytics agent for Windows a
From **30 October 2023**, all agent-based and extension-based User Hybrid Runbook Workers using Transport Layer Security (TLS) 1.0 and 1.1 protocols would no longer be able to connect to Azure Automation and all jobs running or scheduled on these machines would fail.
-Ensure that the Webhook calls that trigger runbooks navigate on TLS 1.2 or higher. Ensure to make registry changes so that Agent and Extension based workers negotiate only on TLS 1.2 and higher protocols. Learn how to [disable TLS 1.0/1.1 protocols on Windows Hybrid Worker and enable TLS 1.2 or above](https://learn.microsoft.com/system-center/scom/plan-security-tls12-config?view=sc-om-2022#configure-windows-operating-system-to-only-use-tls-12-protocol) on Windows machine.
+Ensure that the Webhook calls that trigger runbooks navigate on TLS 1.2 or higher. Ensure to make registry changes so that Agent and Extension based workers negotiate only on TLS 1.2 and higher protocols. Learn how to [disable TLS 1.0/1.1 protocols on Windows Hybrid Worker and enable TLS 1.2 or above](/system-center/scom/plan-security-tls12-config?view=sc-om-2022#configure-windows-operating-system-to-only-use-tls-12-protocol) on Windows machine.
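+
+On Windows workers, protocol negotiation is controlled by the SCHANNEL registry keys. The following is a sketch of enabling the TLS 1.2 client protocol; run it elevated, and confirm the exact keys against the linked guidance before applying:
+
+```powershell
+# Enable the TLS 1.2 client protocol through SCHANNEL registry settings
+$base = 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Client'
+New-Item -Path $base -Force | Out-Null
+New-ItemProperty -Path $base -Name 'Enabled' -Value 1 -PropertyType DWord -Force | Out-Null
+New-ItemProperty -Path $base -Name 'DisabledByDefault' -Value 0 -PropertyType DWord -Force | Out-Null
+```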
For Linux Hybrid Workers, run the following Python script to upgrade to the latest TLS protocol.
automation Automation Runbook Authoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-runbook-authoring.md
The test matrix includes the following operating systems:
1. **Ubuntu** 20.04 with PowerShell Core 7.2.7 >[!NOTE]
->- The extension should work anywhere in VS Code and it supports [PowerShell 7.2 or higher](https://learn.microsoft.com/powershell/scripting/install/PowerShell-Support-Lifecycle?view=powershell-7.3). For Windows PowerShell, only version 5.1 is supported.
+>- The extension should work anywhere in VS Code and it supports [PowerShell 7.2 or higher](/powershell/scripting/install/PowerShell-Support-Lifecycle?view=powershell-7.3). For Windows PowerShell, only version 5.1 is supported.
>- PowerShell Core 6 is end-of-life and not supported.
automation Automation Runbook Output And Messages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-runbook-output-and-messages.md
Title: Configure runbook output and message streams
description: This article tells how to implement error handling logic and describes output and message streams in Azure Automation runbooks. Previously updated : 11/03/2020 Last updated : 08/28/2023 -+ # Configure runbook output and message streams
The following table briefly describes each stream with its behavior in the Azure
The output stream is used for the output of objects created by a script or workflow when it runs correctly. Azure Automation primarily uses this stream for objects to be consumed by parent runbooks that call the [current runbook](automation-child-runbooks.md). When a parent [calls a runbook inline](automation-child-runbooks.md#call-a-child-runbook-by-using-inline-execution), the child returns data from the output stream to the parent.
-Your runbook uses the output stream to communicate general information to the client only if it is never called by another runbook. As a best practice, however, you runbooks should typically use the [verbose stream](#write-output-to-verbose-stream) to communicate general information to the user.
+Your runbook uses the output stream to communicate general information to the client only if it's never called by another runbook. As a best practice, however, your runbooks should typically use the [verbose stream](#write-output-to-verbose-stream) to communicate general information to the user.
Have your runbook write data to the output stream using [Write-Output](/powershell/module/microsoft.powershell.utility/write-output). Alternatively, you can put the object on its own line in the script.
$object
### Handle output from a function
-When a runbook function writes to the output stream, the output is passed back to the runbook. If the runbook assigns that output to a variable, the output is not written to the output stream. Writing to any other streams from within the function writes to the corresponding stream for the runbook. Consider the following sample PowerShell Workflow runbook.
+When a runbook function writes to the output stream, the output is passed back to the runbook. If the runbook assigns that output to a variable, the output isn't written to the output stream. Writing to any other streams from within the function writes to the corresponding stream for the runbook. Consider the following sample PowerShell Workflow runbook.
```powershell Workflow Test-Runbook
The following are examples of output data types:
#### Declare output data type in a workflow
-A workflow specifies the data type of its output using the [OutputType attribute](/powershell/module/microsoft.powershell.core/about/about_functions_outputtypeattribute). This attribute has no effect during runtime, but it provides you an indication at design time of the expected output of the runbook. As the tool set for runbooks continues to evolve, the importance of declaring output data types at design time increases. Therefore it's a best practice to include this declaration in any runbooks that you create.
+A workflow specifies the data type of its output using the [OutputType attribute](/powershell/module/microsoft.powershell.core/about/about_functions_outputtypeattribute). This attribute has no effect during runtime, but it provides you with an indication at design time of the expected output of the runbook. As the tool set for runbooks continues to evolve, the importance of declaring output data types at design time increases. Therefore it's a best practice to include this declaration in any runbooks that you create.
The following sample runbook outputs a string object and includes a declaration of its output type. If your runbook outputs an array of a certain type, then you should still specify the type as opposed to an array of the type.
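A sketch of what that declaration can look like; the workflow name and output string are illustrative:

```powershell
workflow Test-OutputType {
    # Declare at design time that this runbook outputs a string
    [OutputType([string])]
    param ()

    $output = "This is a sample string output."
    Write-Output $output
}
```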
To declare an output type in a graphical or graphical PowerShell Workflow runboo
> [!NOTE] > After you enter a value in the **Output Type** field in the Input and Output properties pane, be sure to click outside the control so that it recognizes your entry.
-The following example shows two graphical runbooks to demonstrate the Input and Output feature. Applying the modular runbook design model, you have one runbook as the Authenticate Runbook template managing authentication with Azure using the Run As account. The second runbook, which normally performs core logic to automate a given scenario, in this case executes the Authenticate Runbook template. It displays the results to your Test output pane. Under normal circumstances, you would have this runbook do something against a resource leveraging the output from the child runbook.
+The following example shows two graphical runbooks to demonstrate the Input and Output feature. Applying the modular runbook design model, you have one runbook as the Authenticate Runbook template managing authentication with Azure using [Managed identities](automation-security-overview.md#managed-identities). The second runbook, which normally performs core logic to automate a given scenario, in this case executes the Authenticate Runbook template. It displays the results to your Test output pane. Under normal circumstances, you would have this runbook do something against a resource leveraging the output from the child runbook.
-Here is the basic logic of the **AuthenticateTo-Azure** runbook.<br> ![Authenticate Runbook Template Example](media/automation-runbook-output-and-messages/runbook-authentication-template.png).
+Here's the basic logic of the **AuthenticateTo-Azure** runbook.<br> ![Authenticate Runbook Template Example](media/automation-runbook-output-and-messages/runbook-authentication-template.png).
-The runbook includes the output type `Microsoft.Azure.Commands.Profile.Models.PSAzureContext`, which returns the authentication profile properties.<br> ![Runbook Output Type Example](media/automation-runbook-output-and-messages/runbook-input-and-output-add-blade.png)
+The runbook includes the output type `Microsoft.Azure.Commands.Profile.Models.PSAzureProfile`, which returns the authentication profile properties.<br> ![Runbook Output Type Example](media/automation-runbook-output-and-messages/runbook-input-and-output-add-blade.png)
-While this runbook is straightforward, there is one configuration item to call out here. The last activity executes the `Write-Output` cmdlet to write profile data to a variable using a PowerShell expression for the `Inputobject` parameter. This parameter is required for `Write-Output`.
+While this runbook is straightforward, there's one configuration item to call out here. The last activity executes the `Write-Output` cmdlet to write profile data to a variable using a PowerShell expression for the `Inputobject` parameter. This parameter is required for `Write-Output`.
The second runbook in this example, named **Test-ChildOutputType**, simply defines two activities.<br> ![Example Child Output Type Runbook](media/automation-runbook-output-and-messages/runbook-display-authentication-results-example.png)
-The first activity calls the **AuthenticateTo-Azure** runbook. The second activity runs the `Write-Verbose` cmdlet with **Data source** set to **Activity output**. Also, **Field path** is set to **Context.Subscription.SubscriptionName**, the context output from the **AuthenticateTo-Azure** runbook.<br> ![Write-Verbose Cmdlet Parameter Data Source](media/automation-runbook-output-and-messages/runbook-write-verbose-parameters-config.png)
+The first activity calls the **AuthenticateTo-Azure** runbook. The second activity runs the `Write-Verbose` cmdlet with **Data source** set to **Activity output**. Also, **Field path** is set to **Context.Subscription.Name**, the context output from the **AuthenticateTo-Azure** runbook.
+ The resulting output is the name of the subscription.<br> ![Test-ChildOutputType Runbook Results](media/automation-runbook-output-and-messages/runbook-test-childoutputtype-results.png)
Write-Error -Message "This is an error message that will stop the runbook becaus
### Write output to debug stream
-Azure Automation uses the debug message stream for interactive users. By default Azure Automation does not capture any debug stream data, only output, error, and warning data are captured as well as verbose data if the runbook is configured to capture it.
+Azure Automation uses the debug message stream for interactive users. By default Azure Automation doesn't capture any debug stream data, only output, error, and warning data are captured as well as verbose data if the runbook is configured to capture it.
In order to capture debug stream data, you have to perform two actions in your runbooks:
Write-Verbose -Message "This is a verbose message."
You can use the **Configure** tab of the Azure portal to configure a runbook to log progress records. The default setting is to not log the records, to maximize performance. In most cases, you should keep the default setting. Turn on this option only to troubleshoot or debug a runbook.
-If you enable progress record logging, your runbook writes a record to job history before and after each activity runs. Testing a runbook does not display progress messages even if the runbook is configured to log progress records.
+If you enable progress record logging, your runbook writes a record to job history before and after each activity runs. Testing a runbook doesn't display progress messages even if the runbook is configured to log progress records.
> [!NOTE] > The [Write-Progress](/powershell/module/microsoft.powershell.utility/write-progress) cmdlet is not valid in a runbook, since this cmdlet is intended for use with an interactive user.
For more information about configuring integration with Azure Monitor Logs to co
## Next steps
+* For sample queries, see [Sample queries for job logs and job streams](automation-manage-send-joblogs-log-analytics.md#job-streams)
* To work with runbooks, see [Manage runbooks in Azure Automation](manage-runbooks.md).
-* If you are unfamiliar with PowerShell scripting, see [PowerShell](/powershell/scripting/overview) documentation.
+* If you're unfamiliar with PowerShell scripting, see [PowerShell](/powershell/scripting/overview) documentation.
* For the Azure Automation PowerShell cmdlet reference, see [Az.Automation](/powershell/module/az.automation).
automation Automation Runbook Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-runbook-types.md
The following are the current limitations and known issues with PowerShell runbo
* PowerShell runbooks can't retrieve a variable asset with `*~*` in the name. * A [Get-Process](/powershell/module/microsoft.powershell.management/get-process) operation in a loop in a PowerShell runbook can crash after about 80 iterations. * A PowerShell runbook can fail if it tries to write a large amount of data to the output stream at once. You can typically work around this issue by having the runbook output just the information needed to work with large objects. For example, instead of using `Get-Process` with no limitations, you can have the cmdlet output just the required parameters as in `Get-Process | Select ProcessName, CPU`.
-* When you use [ExchangeOnlineManagement](https://learn.microsoft.com/powershell/exchange/exchange-online-powershell?view=exchange-ps) module version: 3.0.0 or higher, you may experience errors. To resolve the issue, ensure that you explicitly upload [PowerShellGet](https://learn.microsoft.com/powershell/module/powershellget/?view=powershell-5.1) and [PackageManagement](https://learn.microsoft.com/powershell/module/packagemanagement/?view=powershell-5.1) modules as well.
-* When you use [New-item cmdlet](https://learn.microsoft.com/powershell/module/microsoft.powershell.management/new-item?view=powershell-5.1), jobs might be suspended. To resolve the issue, follow the mitigation steps:
+* When you use [ExchangeOnlineManagement](/powershell/exchange/exchange-online-powershell?view=exchange-ps) module version: 3.0.0 or higher, you may experience errors. To resolve the issue, ensure that you explicitly upload [PowerShellGet](/powershell/module/powershellget/?view=powershell-5.1) and [PackageManagement](/powershell/module/packagemanagement/?view=powershell-5.1) modules as well.
+* When you use [New-item cmdlet](/powershell/module/microsoft.powershell.management/new-item?view=powershell-5.1), jobs might be suspended. To resolve the issue, follow the mitigation steps:
1. Consume the output of `new-item` cmdlet in a variable and **do not** write it to the output stream using `write-output` command. - You can use debug or progress stream after you enable it from **Logging and Tracing** setting of the runbook. ```powershell-interactive
The following are the current limitations and known issues with PowerShell runbo
- You might encounter formatting problems with error output streams for the job running in PowerShell 7 runtime. - When you import a PowerShell 7.1 module that's dependent on other modules, you may find that the import button is gray even when PowerShell 7.1 version of the dependent module is installed. For example, Az PowerShell module.Compute version 4.20.0, has a dependency on Az.Accounts being >= 2.6.0. This issue occurs when an equivalent dependent module in PowerShell 5.1 doesn't meet the version requirements. For example, 5.1 version of Az.Accounts were < 2.6.0. - When you start PowerShell 7 runbook using the webhook, it auto-converts the webhook input parameter to an invalid JSON.-- We recommend that you use [ExchangeOnlineManagement](https://learn.microsoft.com/powershell/exchange/exchange-online-powershell?view=exchange-ps) module version: 3.0.0 or lower because version: 3.0.0 or higher may lead to job failures.
+- We recommend that you use [ExchangeOnlineManagement](/powershell/exchange/exchange-online-powershell?view=exchange-ps) module version: 3.0.0 or lower because version: 3.0.0 or higher may lead to job failures.
- If you import module Az.Accounts with version 2.12.3 or newer, ensure that you import the **Newtonsoft.Json** v10 module explicitly if PowerShell 7.1 runbooks have a dependency on this version of the module. The workaround for this issue is to use PowerShell 7.2 runbooks.
The following are the current limitations and known issues with PowerShell runbo
$ProgressPreference = "Continue" ```-- When you use [ExchangeOnlineManagement](https://learn.microsoft.com/powershell/exchange/exchange-online-powershell?view=exchange-ps) module version: 3.0.0 or higher, you can experience errors. To resolve the issue, ensure that you explicitly upload [PowerShellGet](https://learn.microsoft.com/powershell/module/powershellget/?view=powershell-7.3) and [PackageManagement](https://learn.microsoft.com/powershell/module/packagemanagement/?view=powershell-7.3) modules.
+- When you use [ExchangeOnlineManagement](/powershell/exchange/exchange-online-powershell?view=exchange-ps) module version: 3.0.0 or higher, you can experience errors. To resolve the issue, ensure that you explicitly upload [PowerShellGet](/powershell/module/powershellget/?view=powershell-7.3) and [PackageManagement](/powershell/module/packagemanagement/?view=powershell-7.3) modules.
## PowerShell Workflow runbooks
automation Automation Security Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-security-overview.md
The service principal for a Run As account doesn't have permissions to read Azur
This section defines permissions for both regular Run As accounts and Classic Run As accounts.
-* To create or update a Run As account, an Application administrator in Azure Active Directory and an Owner in the subscription can complete all the tasks.
-* To configure or renew Classic Run As accounts, you must have the Co-administrator role at the subscription level. To learn more about classic subscription permissions, see [Azure classic subscription administrators](../role-based-access-control/classic-administrators.md#add-a-co-administrator).
+* To create or update or delete a Run As account, an Application administrator in Azure Active Directory and an Owner in the subscription can complete all the tasks.
+* To configure or renew or delete a Classic Run As accounts, you must have the Co-administrator role at the subscription level. To learn more about classic subscription permissions, see [Azure classic subscription administrators](../role-based-access-control/classic-administrators.md#add-a-co-administrator).
In a situation where you have separation of duties, the following table shows a listing of the tasks, the equivalent cmdlet, and permissions needed:
automation Automation Use Azure Ad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-use-azure-ad.md
description: This article tells how to use Azure AD within Azure Automation as t
Last updated 05/26/2023 -+ # Use Azure AD to authenticate to Azure
automation Context Switching https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/context-switching.md
description: This article explains context switching and how to avoid runbook is
Previously updated : 09/27/2021 Last updated : 08/18/2023 #Customer intent: As a developer, I want to understand Azure context so that I can avoid error when running multiple runbooks.
While you may not come across an issue if you don't follow these recommendations
`The subscription named <subscription name> cannot be found.` ```error
-Get-AzVM : The client '<automation-runas-account-guid>' with object id '<automation-runas-account-guid>' does not have authorization to perform action 'Microsoft.Compute/virtualMachines/read' over scope '/subscriptions/<subcriptionIdOfSubscriptionWichDoesntContainTheVM>/resourceGroups/REsourceGroupName/providers/Microsoft.Compute/virtualMachines/VMName '.
+Get-AzVM : The client '<clientid>' with object id '<objectid>' does not have authorization to perform action 'Microsoft.Compute/virtualMachines/read' over scope '/subscriptions/<subscriptionIdOfSubscriptionWhichDoesntContainTheVm>/resourceGroups/<resourceGroupName>/providers/Microsoft.Compute/virtualMachines/<vmName>'.
ErrorCode: AuthorizationFailed StatusCode: 403 ReasonPhrase: Forbidden Operation
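To avoid these errors, a common mitigation sketch (assuming the Az modules and a managed identity) is to disable context autosave at the start of the runbook and pass an explicit context to each cmdlet:

```powershell
# Keep this job's context from being shared with other jobs in the sandbox
Disable-AzContextAutosave -Scope Process

# Sign in and capture an explicit context, then pass it to cmdlets
$context = (Connect-AzAccount -Identity).Context
Get-AzVM -DefaultProfile $context
```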
automation Delete Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/delete-account.md
After the Automation account is successfully unlinked from the workspace, perfor
To delete your Automation account linked to a Log Analytics workspace in support of Update Management, Change Tracking and Inventory, and/or Start/Stop VMs during off-hours, perform the following steps.
-### Step 1. Delete the solution from the linked workspace
+### Step 1: Delete the solution from the linked workspace
# [Azure portal](#tab/azure-portal)
Remove-AzMonitorLogAnalyticsSolution -ResourceGroupName "resourceGroupName" -Nam
-### Step 2. Unlink workspace from Automation account
+### Step 2: Unlink workspace from Automation account
There are two options for unlinking the Log Analytics workspace from your Automation account. You can perform this process from the Automation account or from the linked workspace.
To unlink from the workspace, perform the following steps.
While it attempts to unlink the Automation account, you can track the progress under **Notifications** from the menu.
-### Step 3. Delete Automation account
+### Step 3: Delete Automation account
After the Automation account is successfully unlinked from the workspace, perform the steps in the [standalone Automation account](#delete-a-standalone-automation-account) section to delete the account.
automation Delete Run As Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/delete-run-as-account.md
Title: Delete an Azure Automation Run As account
description: This article tells how to delete a Run As account with PowerShell or from the Azure portal. Previously updated : 04/12/2023 Last updated : 09/01/2023 # Delete an Azure Automation Run As account > [!IMPORTANT]
-> Azure Automation Run As Account will retire on September 30, 2023 and will be replaced with Managed Identities. Before that date, you'll need to start migrating your runbooks to use [managed identities](automation-security-overview.md#managed-identities). For more information, see [migrating from an existing Run As accounts to managed identity](https://learn.microsoft.com/azure/automation/migrate-run-as-accounts-managed-identity?tabs=run-as-account#sample-scripts) to start migrating the runbooks from Run As account to managed identities before 30 September 2023.
+> Azure Automation Run As Account will retire on September 30, 2023 and will be replaced with Managed Identities. Before that date, you'll need to start migrating your runbooks to use [managed identities](automation-security-overview.md#managed-identities). For more information, see [migrating from an existing Run As accounts to managed identity](/azure/automation/migrate-run-as-accounts-managed-identity?tabs=run-as-account#sample-scripts) to start migrating the runbooks from Run As account to managed identities before 30 September 2023.
Run As accounts in Azure Automation provide authentication for managing resources on the Azure Resource Manager or Azure Classic deployment model using Automation runbooks and other Automation features. This article describes how to delete a Run As or Classic Run As account. When you perform this action, the Automation account is retained. After you delete the Run As account, you can re-create it in the Azure portal or with the provided PowerShell script.
+## Permissions for Run As accounts and Classic Run As accounts
+
+To configure, update, or delete a Run As account or a Classic Run As account, you must be either:
+
+- An owner of the Azure AD Application for the Run As Account
+
+ (or)
+
+- A member of one of the following Azure AD roles
+ - Application Administrator
+ - Cloud Application Administrator
+ - Global Administrator
+
+To learn more about permissions, see [Run As account permissions](automation-security-overview.md#permissions).
++ ## Delete a Run As or Classic Run As account 1. In the Azure portal, open the Automation account.
Run As accounts in Azure Automation provide authentication for managing resource
5. While the account is being deleted, you can track the progress under **Notifications** from the menu. Run As accounts can't be restored after deletion.
+> [!NOTE]
> We recommend that you delete the Run As account from the Automation account portal. Alternatively, you can delete the service principal from the **Azure Active Directory** portal: go to **App registrations**, search for and select your Automation account name, and then on the **Overview** page, select **Delete**.
+ ## Next steps - [Use system-assigned managed identity](enable-managed-identity-for-automation.md).
automation Extension Based Hybrid Runbook Worker Install https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/extension-based-hybrid-runbook-worker-install.md
To install and use Hybrid Worker extension using REST API, follow these steps. T
#### [Azure CLI](#tab/cli)
-You can use Azure CLI to create a new Hybrid Worker group, create a new Azure VM, add it to an existing Hybrid Worker Group and install the Hybrid Worker extension. Learn more aboutΓÇ»[Azure CLI](https://learn.microsoft.com/cli/azure/what-is-azure-cli).
+You can use Azure CLI to create a new Hybrid Worker group, create a new Azure VM, add it to an existing Hybrid Worker Group and install the Hybrid Worker extension. Learn more aboutΓÇ»[Azure CLI](/cli/azure/what-is-azure-cli).
Follow the steps mentioned below as an example:
automation Manage Office 365 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/manage-office-365.md
description: This article tells how to use Azure Automation to manage Office 365
Last updated 11/05/2020 + # Manage Office 365 services
To publish and then schedule your runbook, see [Manage runbooks in Azure Automat
* For details of credential use, see [Manage credentials in Azure Automation](shared-resources/credentials.md). * For information about modules, see [Manage modules in Azure Automation](shared-resources/modules.md). * If you need to start a runbook, see [Start a runbook in Azure Automation](start-runbooks.md).
-* For PowerShell details, see [PowerShell Docs](/powershell/scripting/overview).
+* For PowerShell details, see [PowerShell Docs](/powershell/scripting/overview).
automation Manage Runbooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/manage-runbooks.md
Title: Manage runbooks in Azure Automation
description: This article tells how to manage runbooks in Azure Automation. Previously updated : 06/29/2023 Last updated : 08/28/2023
foreach ($item in $output) {
## Next steps
+* For sample queries, see [Sample queries for job logs and job streams](automation-manage-send-joblogs-log-analytics.md#job-streams)
* To learn details of runbook management, see [Runbook execution in Azure Automation](automation-runbook-execution.md). * To prepare a PowerShell runbook, see [Edit textual runbooks in Azure Automation](automation-edit-textual-runbook.md). * To troubleshoot issues with runbook execution, see [Troubleshoot runbook issues](troubleshoot/runbooks.md).
automation Manage Sql Server In Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/manage-sql-server-in-automation.md
+
+ Title: Manage databases in Azure SQL Database using Azure Automation
+description: This article explains how to manage databases in Azure SQL Database by using a system-assigned managed identity in Azure Automation.
+ Last updated : 06/26/2023+++
+# Manage databases in Azure SQL database using Azure Automation
+
+This article describes the procedure to connect to and manage databases in Azure SQL Database by using Azure Automation's [system-assigned managed identity](enable-managed-identity-for-automation.md). With Azure Automation, you can manage databases in Azure SQL Database by using the [latest Az PowerShell cmdlets](/powershell/module/) that are available in [Azure Az PowerShell](/powershell/azure/new-azureps-module-az).
+
+Azure Automation has these Azure Az PowerShell cmdlets available out of the box, so that you can perform all the SQL database management tasks within the service. You can also pair these cmdlets in Azure Automation with the cmdlets of other Azure services to automate complex tasks across Azure services and across third-party systems.
+
+Azure Automation can also issue T-SQL (Transact-SQL) commands against the SQL servers by using PowerShell.
+
+To run commands against the database, you need to do the following:
+- Ensure that the Automation account has a system-assigned managed identity.
+- Provide the appropriate permissions to the Automation managed identity.
+- Configure the SQL server to use Azure Active Directory authentication.
+- Create a user on the SQL server that maps to the Automation account managed identity.
+- Create a runbook to connect and execute the commands.
+- (Optional) If the SQL server is protected by a firewall, create a Hybrid Runbook Worker (HRW), install the SQL modules on that server, and add the HRW IP address to the allowlist on the firewall.
++
+## Connect to Azure SQL Database using a system-assigned managed identity
+
+To allow access from the Automation system managed identity to the Azure SQL database, follow these steps:
+
+1. If the Automation system managed identity is **OFF**, do the following:
+ 1. Sign in to the [Azure portal](https://portal.azure.com).
+ 1. Go to your Automation account.
+ 1. In the Automation account page, under **Account Settings**, select **Identity**.
+ 1. Under the **System assigned** tab, set the **Status** to **ON**.
+
+ :::image type="content" source="./media/manage-sql-server-in-automation/system-assigned-managed-identity-status-on-inline.png" alt-text="Screenshot of setting the status to ON for System assigned managed identity." lightbox="./media/manage-sql-server-in-automation/system-assigned-managed-identity-status-on-expanded.png":::
+
+1. After the system-assigned managed identity is **ON**, grant the account the required access by using these steps:
+ 1. In the **Automation account | Identity** page, **System assigned** tab, under permissions, select **Azure role assignments**.
+ 1. In the Azure role assignments page, select **+Add role assignment (preview)**.
+ 1. In the **Add role assignment (preview)** pane, select *SQL* as the **Scope**, select the **Subscription** and **Resource** from the drop-down lists, select a **Role** with the minimum required permissions, and then select **Save**.
+
+ :::image type="content" source="./media/manage-sql-server-in-automation/add-role-assignment-inline.png" alt-text="Screenshot of adding role assignment when the system assigned managed identity's status is set to ON." lightbox="./media/manage-sql-server-in-automation/add-role-assignment-expanded.png":::
+
+1. Configure the SQL server for Active Directory authentication by using these steps:
+ 1. Go to [Azure portal](https://portal.azure.com) home page and select **SQL servers**.
+ 1. In the **SQL server** page, under **Settings**, select **Azure Active Directory**.
+ 1. Select **Set admin** to configure the SQL server for Azure AD authentication.
+
+1. Add authentication on the SQL side by using these steps:
+ 1. Go to [Azure portal](https://portal.azure.com) home page and select **SQL servers**.
+ 1. In the **SQL server** page, under **Settings**, select **SQL Databases**.
+ 1. Select your database to go to the SQL database page, select **Query editor (preview)**, and execute the following two queries (a PowerShell alternative is sketched after this list):
+ - `CREATE USER [AutomationAccount] FROM EXTERNAL PROVIDER WITH OBJECT_ID = 'ObjectID'`
+ - `EXEC sp_addrolemember 'db_owner', 'AutomationAccount'`
+ - AutomationAccount - replace with your Automation account's name.
+ - ObjectID - replace with the object (principal) ID of your system-assigned managed identity from step 1.
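+
+As an alternative to the portal query editor, you can run the same statements from PowerShell. This is a minimal sketch, assuming you're signed in as the SQL server's Azure AD admin and have the SqlServer module installed; `<server>`, `<database>`, `<automation-account>`, and `<object-id>` are placeholders.
+
+```powershell
+# Get a token for Azure SQL Database as the signed-in Azure AD admin.
+$token = (Get-AzAccessToken -ResourceUrl https://database.windows.net).Token
+
+# Create the contained user for the managed identity and grant it db_owner.
+# <automation-account> and <object-id> are placeholders for your values.
+$createUser = @"
+CREATE USER [<automation-account>] FROM EXTERNAL PROVIDER WITH OBJECT_ID = '<object-id>';
+EXEC sp_addrolemember 'db_owner', '<automation-account>';
+"@
+Invoke-Sqlcmd -ServerInstance '<server>.database.windows.net' -Database '<database>' -AccessToken $token -Query $createUser
+```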
+
+## Sample code
+
+### Connection to Azure SQL Server
+
+ ```powershell
+ # Show whether the runbook is running in the Azure sandbox or on a Hybrid Runbook Worker.
+ if ($($env:computerName) -eq "Client") {"Runbook running on Azure Client sandbox"} else {"Runbook running on " + $env:computerName}
+ # Don't inherit or persist an Azure context from previous jobs.
+ Disable-AzContextAutosave -Scope Process
+ # Authenticate by using the system-assigned managed identity.
+ Connect-AzAccount -Identity
+ # Get a token for Azure SQL Database and run a query.
+ $token = (Get-AzAccessToken -ResourceUrl https://database.windows.net).Token
+ Invoke-Sqlcmd -ServerInstance azuresqlserverxyz.database.windows.net -Database MyDBxyz -AccessToken $token -Query 'select * from TableXYZ'
+ ```
+### Check account permissions on the SQL side
+
+```sql
+SELECT roles.[name] AS role_name, members.[name] AS user_name
+FROM sys.database_role_members AS drm
+JOIN sys.database_principals AS roles ON drm.role_principal_id = roles.principal_id
+JOIN sys.database_principals AS members ON drm.member_principal_id = members.principal_id
+ORDER BY roles.[name], members.[name]
+```
+
+> [!NOTE]
+> When a SQL server is running behind a firewall, you must run the Azure Automation runbook on a machine in your own network. Ensure that you configure this machine as a Hybrid Runbook Worker so that the IP address or network is not blocked by the firewall. For more information on how to configure a machine as a Hybrid Worker, see [create a hybrid worker](extension-based-hybrid-runbook-worker-install.md).
+
+### Use Hybrid worker
+When you use a Hybrid Worker, the modules that your runbook uses must be installed locally from an elevated PowerShell prompt, for example, `Install-Module Az.Accounts` and `Install-Module SqlServer`. To find the required module name for a cmdlet, run `Get-Command` on the cmdlet and check its source. For example, to check the module name for the cmdlet `Connect-AzAccount`, which is part of the Az.Accounts module, run the command `Get-Command Connect-AzAccount`, as sketched below.
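+
+A minimal sketch of those commands, assuming an elevated PowerShell prompt on the Hybrid Worker:
+
+```powershell
+# Install the modules the runbook needs, for all users on the worker.
+Install-Module -Name Az.Accounts -Scope AllUsers
+Install-Module -Name SqlServer -Scope AllUsers
+
+# Identify which module ships a given cmdlet (here: Connect-AzAccount -> Az.Accounts).
+(Get-Command Connect-AzAccount).Source
+```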
+
+> [!NOTE]
+> We recommend that you add the following code at the top of any runbook that's intended to run on a Hybrid Worker: `if ($($env:computerName) -eq "CLIENT") {"Runbook running on Azure CLIENT"} else {"Runbook running on " + $env:computerName}`. The code shows you which node the runbook is running on, so if you accidentally run it in the Azure cloud instead of on the Hybrid Worker, it helps you determine why the runbook didn't work.
++
+## Next steps
+
+* For details of credential use, see [Manage credentials in Azure Automation](shared-resources/credentials.md).
+* For information about modules, see [Manage modules in Azure Automation](shared-resources/modules.md).
+* If you need to start a runbook, see [Start a runbook in Azure Automation](start-runbooks.md).
+* For PowerShell details, see [PowerShell Docs](/powershell/scripting/overview).
automation Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/overview.md
Azure Automation supports management throughout the lifecycle of your infrastruc
- Collect and store information about Azure resources. - Perform SQL monitoring checks & reporting. - Check website availability.
-* **Dev/test automation scenarios** - Start and start resources, scale resources, etc.
+* **Dev/test automation scenarios** - Stop and start resources, scale resources, etc.
* **Governance related automation** - Automatically apply or update tags, locks, etc. * **Azure Site Recovery** - orchestrate pre/post scripts defined in a Site Recovery DR workflow. * **Azure Virtual Desktop** - orchestrate scaling of VMs or start/stop VMs based on utilization.
You can review the prices associated with Azure Automation on the [pricing](http
## Next steps > [!div class="nextstepaction"]
-> [Create an Automation account](./quickstarts/create-azure-automation-account-portal.md)
+> [Create an Automation account](./quickstarts/create-azure-automation-account-portal.md)
automation Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/policy-reference.md
Title: Built-in policy definitions for Azure Automation description: Lists Azure Policy built-in policy definitions for Azure Automation. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2023 Last updated : 08/30/2023
automation Python Packages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/python-packages.md
Title: Manage Python 2 packages in Azure Automation
description: This article tells how to manage Python 2 packages in Azure Automation. Previously updated : 10/29/2021 Last updated : 08/21/2023
For information on managing Python 3 packages, see [Manage Python 3 packages](./
## Import packages
-1. In your Automation account, select **Python packages** under **Shared Resources**. Click **+ Add a Python package**.
+1. In your Automation account, select **Python packages** under **Shared Resources**. Select **+ Add a Python package**.
:::image type="content" source="media/python-packages/add-python-package.png" alt-text="Screenshot of the Python packages page shows Python packages in the left menu and Add a Python package highlighted.":::
For information on managing Python 3 packages, see [Manage Python 3 packages](./
:::image type="content" source="media/python-packages/upload-package.png" alt-text="Screenshot shows the Add Python Package page with an uploaded tar.gz file selected.":::
-After a package has been imported, it's listed on the **Python packages** page in your Automation account. To remove a package, select the package and click **Delete**.
+After a package has been imported, it's listed on the **Python packages** page in your Automation account. To remove a package, select the package and select **Delete**.
:::image type="content" source="media/python-packages/package-list.png" alt-text="Screenshot shows the Python 2.7.x packages page after a package has been imported."::: ## Import packages with dependencies
-Azure automation doesn't resolve dependencies for Python packages during the import process. There are two ways to import a package with all its dependencies. Only one of the following steps needs to be used to import the packages into your Automation account.
+Azure Automation doesn't resolve dependencies for Python packages during the import process. There are two ways to import a package with all its dependencies. Only one of the following steps needs to be used to import the packages into your Automation account.
### Manually download
Once the packages are downloaded, you can import them into your automation accou
### Runbook
- To obtain a runbook, [import Python 2 packages from pypi into Azure Automation account](https://github.com/azureautomation/import-python-2-packages-from-pypi-into-azure-automation-account) from the Azure Automation GitHub organization into your Automation account. Make sure the Run Settings are set to **Azure** and start the runbook with the parameters. The runbook requires a Run As account for the Automation account to work. For each parameter make sure you start it with the switch as seen in the following list and image:
 To obtain a runbook, [import Python 2 packages from pypi into Azure Automation account](https://github.com/azureautomation/import-python-2-packages-from-pypi-into-azure-automation-account) from the Azure Automation GitHub organization into your Automation account. Make sure the Run Settings are set to **Azure** and start the runbook with the parameters. Ensure that a managed identity is enabled for your Automation account and that it has Automation Contributor access so the package imports successfully. For each parameter, make sure you start it with the switch as seen in the following list and image:
* -s \<subscriptionId\> * -g \<resourceGroup\>
Once the packages are downloaded, you can import them into your automation accou
:::image type="content" source="media/python-packages/import-python-runbook.png" alt-text="Screenshot shows the Overview page for import_py2package_from_pypi with the Start Runbook pane on the right side.":::
-The runbook allows you to specify what package to download. For example, use of the `Azure` parameter downloads all Azure modules and all dependencies (about 105).
-
-After the runbook is complete, you can check the **Python packages** under **Shared Resources** in your Automation account to verify that the package has been imported correctly.
+The runbook allows you to specify what package to download. For example, use of the `Azure` parameter downloads all Azure modules and all dependencies (about 105). After the runbook is complete, you can check the **Python packages** under **Shared Resources** in your Automation account to verify that the package has been imported correctly.
## Use a package in a runbook
-With a package imported, you can use it in a runbook. The following example uses the [Azure Automation utility package](https://github.com/azureautomation/azure_automation_utility). This package makes it easier to use Python with Azure Automation. To use the package, follow the instructions in the GitHub repository and add it to the runbook. For example, you can use `from azure_automation_utility import get_automation_runas_credential` to import the function for retrieving the Run As account.
+With a package imported, you can use it in a runbook. The following code uses the imported `requests` package and the Automation account's system-assigned managed identity to get a token for Azure Resource Manager and then list all the resource groups in an Azure subscription. Replace the subscription ID placeholder with your own:
```python
-import azure.mgmt.resource
-import automationassets
-from azure_automation_utility import get_automation_runas_credential
-
-# Authenticate to Azure using the Azure Automation RunAs service principal
-runas_connection = automationassets.get_automation_connection("AzureRunAsConnection")
-azure_credential = get_automation_runas_credential()
-
-# Intialize the resource management client with the RunAs credential and subscription
-resource_client = azure.mgmt.resource.ResourceManagementClient(
- azure_credential,
- str(runas_connection["SubscriptionId"]))
-
-# Get list of resource groups and print them out
-groups = resource_client.resource_groups.list()
-for group in groups:
- print group.name
+#!/usr/bin/env python
+import os
+import json
+import requests
+
+# Build the managed identity token request from the environment variables
+# that the Automation sandbox provides.
+endPoint = os.getenv('IDENTITY_ENDPOINT') + "?resource=https://management.azure.com/"
+identityHeader = os.getenv('IDENTITY_HEADER')
+headers = {
+    'X-IDENTITY-HEADER': identityHeader,
+    'Metadata': 'True'
+}
+
+# Get an access token for Azure Resource Manager.
+response = requests.get(endPoint, headers=headers)
+accessToken = json.loads(response.text)['access_token']
+
+# List the resource groups. Replace <subscription-id> with your subscription ID.
+listUrl = "https://management.azure.com/subscriptions/<subscription-id>/resourcegroups?api-version=2021-04-01"
+listResponse = requests.get(listUrl, headers={'Authorization': 'Bearer ' + accessToken})
+for group in json.loads(listResponse.text)['value']:
+    print group['name']
```
-> [!NOTE]
-> The Python `automationassets` package is not available on pypi.org, so it's not available for import onto a Windows machine.
- ## Develop and test runbooks offline To develop and test your Python 2 runbooks offline, you can use the [Azure Automation Python emulated assets](https://github.com/azureautomation/python_emulated_assets) module on GitHub. This module allows you to reference your shared resources such as credentials, variables, connections, and certificates. ## Next steps
-To prepare a Python runbook, see [Create a Python runbook](./learn/automation-tutorial-runbook-textual-python-3.md).
+To prepare a Python runbook, see [Create a Python runbook](./learn/automation-tutorial-runbook-textual-python-3.md).
automation Create Azure Automation Account Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/quickstarts/create-azure-automation-account-portal.md
Title: Quickstart - Create an Azure Automation account using the portal description: This quickstart helps you to create a new Automation account using Azure portal. Previously updated : 04/12/2023 Last updated : 08/28/2023 -+ #Customer intent: As an administrator, I want to create an Automation account so that I can further use the Automation services.
automation Runbook Input Parameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/runbook-input-parameters.md
Title: Configure runbook input parameters in Azure Automation
description: This article tells how to configure runbook input parameters, which allow data to be passed to a runbook when it's started. Previously updated : 05/26/2023 Last updated : 08/18/2023
You can configure input parameters for PowerShell, PowerShell Workflow, graphica
You assign values to the input parameters for a runbook when you start it. You can start a runbook from the Azure portal, a web service, or PowerShell. You can also start one as a child runbook that is called inline in another runbook.
-### Configure input parameters in PowerShell runbooks
+## Configure input parameters in PowerShell runbooks
PowerShell and PowerShell Workflow runbooks in Azure Automation support input parameters that are defined through the following properties.
To illustrate the configuration of input parameters for a graphical runbook, let
A graphical runbook uses these major runbook activities:
-* Configuration of the Azure Run As account to authenticate with Azure.
+* Authentication with Azure by using the managed identity configured for the Automation account.
* Definition of a [Get-AzVM](/powershell/module/az.compute/get-azvm) cmdlet to get VM properties. * Use of the [Write-Output](/powershell/module/microsoft.powershell.utility/write-output) activity to output the VM names.
automation Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Automation description: Lists Azure Policy Regulatory Compliance controls available for Azure Automation. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/03/2023 Last updated : 08/25/2023
automation Hybrid Runbook Worker https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/troubleshoot/hybrid-runbook-worker.md
description: This article tells how to troubleshoot and resolve issues that aris
Last updated 04/26/2023 -+ # Troubleshoot agent-based Hybrid Runbook Worker issues in Automation
automation Runbooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/troubleshoot/runbooks.md
Title: Troubleshoot Azure Automation runbook issues description: This article tells how to troubleshoot and resolve issues with Azure Automation runbooks. Previously updated : 03/06/2023 Last updated : 08/18/2023
It fails with the following error:
### Cause
-The naming convention is not being followed. Ensure that your runbook name starts with a letter and can contain letters, numbers, underscores, and dashes. The naming convention requirements are now being enforced starting with the Az module version 1.9 through the portal and cmdlets.
+Code introduced in version [1.9.0](https://www.powershellgallery.com/packages/Az.Automation/1.9.0) of the Az.Automation module verifies the names of runbooks to be started, and incorrectly flags runbooks that have multiple "-" characters or an "_" character in the name as invalid.
### Workaround
-We recommend that you follow the runbook naming convention or revert to [1.8.0 version](https://www.powershellgallery.com/packages/Az.Automation/1.8.0) of the module where the naming convention isn't enforced.
+We recommend that you revert to [1.8.0 version](https://www.powershellgallery.com/packages/Az.Automation/1.8.0) of the module.
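+
+As a sketch, pinning the module from an elevated PowerShell prompt might look like this:
+
+```powershell
+# Remove the affected version and install the last version without the issue.
+Uninstall-Module Az.Automation -RequiredVersion 1.9.0
+Install-Module Az.Automation -RequiredVersion 1.8.0 -Force
+```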
+### Resolution
+
+Currently, we are working to deploy a fix to address this issue.
## Diagnose runbook issues
To determine what's wrong, follow these steps:
1. If the error appears to be transient, try adding retry logic to your authentication routine to make authenticating more robust. ```powershell
- # Get the connection "AzureRunAsConnection"
- $connectionName = "AzureRunAsConnection"
- $servicePrincipalConnection = Get-AutomationConnection -Name $connectionName
- $logonAttempt = 0 $logonResult = $False
To determine what's wrong, follow these steps:
$LogonAttempt++ #Logging in to Azure... $connectionResult = Connect-AzAccount `
- -ServicePrincipal `
- -Tenant $servicePrincipalConnection.TenantId `
- -ApplicationId $servicePrincipalConnection.ApplicationId `
- -CertificateThumbprint $servicePrincipalConnection.CertificateThumbprint
- Start-Sleep -Seconds 30 } ```
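
A minimal managed-identity version of this retry loop, as a sketch (assuming the runbook authenticates with the Automation account's system-assigned managed identity):

```powershell
$logonAttempt = 0
$logonResult = $false

while (-not $logonResult -and ($logonAttempt -le 10)) {
    $logonAttempt++
    try {
        # Logging in to Azure with the system-assigned managed identity...
        $logonResult = Connect-AzAccount -Identity
    }
    catch {
        # Transient failure: wait before retrying.
        Start-Sleep -Seconds 30
    }
}
```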
The runbook isn't using the correct context when running. This may be because th
You may see errors like this one: ```error
-Get-AzVM : The client '<automation-runas-account-guid>' with object id '<automation-runas-account-guid>' does not have authorization to perform action 'Microsoft.Compute/virtualMachines/read' over scope '/subscriptions/<subcriptionIdOfSubscriptionWichDoesntContainTheVM>/resourceGroups/REsourceGroupName/providers/Microsoft.Compute/virtualMachines/VMName '.
+Get-AzVM : The client '<client-id>' with object id '<object-id>' does not have authorization to perform action 'Microsoft.Compute/virtualMachines/read' over scope '/subscriptions/<subcriptionIdOfSubscriptionWichDoesntContainTheVM>/resourceGroups/REsourceGroupName/providers/Microsoft.Compute/virtualMachines/VMName '.
ErrorCode: AuthorizationFailed StatusCode: 403 ReasonPhrase: Forbidden Operation
To use a service principal with Azure Resource Manager cmdlets, see [Creating se
Your runbook fails with an error similar to the following example: ```error
-Exception: A task was canceled.
+Exception: A task was cancelled.
``` ### Cause
automation Shared Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/troubleshoot/shared-resources.md
Title: Troubleshoot Azure Automation shared resource issues
description: This article tells how to troubleshoot and resolve issues with Azure Automation shared resources. Previously updated : 01/27/2021 Last updated : 08/24/2023
To create or update a Run As account, you must have appropriate [permissions](..
If the problem is because of a lock, verify that the lock can be removed. Then go to the resource that is locked in Azure portal, right-click the lock, and select **Delete**.
+> [!NOTE]
+> Azure Automation Run As accounts will retire on **September 30, 2023** and will be replaced with managed identities. Ensure that you start migrating your runbooks to use [managed identities](../automation-security-overview.md#managed-identities) before that date. For more information, see [Migrate from existing Run As accounts to managed identities](../migrate-run-as-accounts-managed-identity.md#sample-scripts).
++ ### <a name="iphelper"></a>Scenario: You receive the error "Unable to find an entry point named 'GetPerAdapterInfo' in DLL 'iplpapi.dll'" when executing a runbook #### Issue
automation Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/update-management/overview.md
# Update Management overview > [!Important]
-> - Automation Update management relies on [Log Analytics agent](../../azure-monitor/agents/log-analytics-agent.md) (aka MMA agent), which is on a deprecation path and wonΓÇÖt be supported after **August 31, 2024**. [Update management center (Preview)](../../update-center/overview.md) (UMC) is the v2 version of Automation Update management and the future of Update management in Azure. UMC is a native service in Azure and does not rely on [Log Analytics agent](../../azure-monitor/agents/log-analytics-agent.md) or [Azure Monitor agent](../../azure-monitor/agents/agents-overview.md).
-> - Guidance for migrating from Automation Update management to Update management center will be provided to customers once the latter is Generally Available. For customers using Automation Update management, we recommend continuing to use the Log Analytics agent and **NOT** migrate to Azure Monitoring agent until migration guidance is provided for Update management or else Automation Update management will not work. Also, the Log Analytics agent would not be deprecated before moving all Automation Update management customers to UMC.
+> - Automation Update management relies on [Log Analytics agent](../../azure-monitor/agents/log-analytics-agent.md) (aka MMA agent), which is on a deprecation path and wonΓÇÖt be supported after **August 31, 2024**. [Azure Update Manager (preview)](../../update-center/overview.md) (AUM) is the v2 version of Automation Update management and the future of Update management in Azure. AUM is a native service in Azure and does not rely on [Log Analytics agent](../../azure-monitor/agents/log-analytics-agent.md) or [Azure Monitor agent](../../azure-monitor/agents/agents-overview.md).
+> - Guidance for migrating from Automation Update management to Azure Update Manager (preview) will be provided to customers once the latter is Generally Available. For customers using Automation Update management, we recommend continuing to use the Log Analytics agent and **NOT** migrating to the Azure Monitor agent until migration guidance is provided for Azure Update Manager; otherwise, Automation Update management will not work. The Log Analytics agent won't be deprecated before all Automation Update management customers have moved to Azure Update Manager (preview).
You can use Update Management in Azure Automation to manage operating system updates for your Windows and Linux virtual machines in Azure, physical or VMs in on-premises environments, and in other cloud environments. You can quickly assess the status of available updates and manage the process of installing required updates for your machines reporting to Update Management.
automation Plan Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/update-management/plan-deployment.md
# Plan your Update Management deployment
-## Step 1 - Automation account
+## Step 1: Automation account
Update Management is an Azure Automation feature, and therefore requires an Automation account. You can use an existing Automation account in your subscription, or create a new account dedicated only for Update Management and no other Automation features.
-## Step 2 - Azure Monitor Logs
+## Step 2: Azure Monitor Logs
Update Management depends on a Log Analytics workspace in Azure Monitor to store assessment and update status log data collected from managed machines. Integration with Log Analytics also enables detailed analysis and alerting in Azure Monitor. You can use an existing workspace in your subscription, or create a new one dedicated only for Update Management. If you are new to Azure Monitor Logs and the Log Analytics workspace, you should review the [Design a Log Analytics workspace](../../azure-monitor/logs/workspace-design.md) deployment guide.
-## Step 3 - Supported operating systems
+## Step 3: Supported operating systems
Update Management supports specific versions of the Windows Server and Linux operating systems. Before you enable Update Management, confirm that the target machines meet the [operating system requirements](operating-system-requirements.md).
-## Step 4 - Log Analytics agent
+## Step 4: Log Analytics agent
The [Log Analytics agent](../../azure-monitor/agents/log-analytics-agent.md) for Windows and Linux is required to support Update Management. The agent is used for both data collection, and the Automation system Hybrid Runbook Worker role to support Update Management runbooks used to manage the assessment and update deployments on the machine.
For Red Hat Linux machines, see [IPs for the RHUI content delivery servers](../.
If your IT security policies do not allow machines on the network to connect to the internet, you can set up a [Log Analytics gateway](../../azure-monitor/agents/gateway.md) and then configure the machine to connect through the gateway to Azure Automation and Azure Monitor.
-## Step 6 - Permissions
+## Step 6: Permissions
To create and manage update deployments, you need specific permissions. To learn about these permissions, see [Role-based access - Update Management](../automation-role-based-access-control.md#update-management-permissions).
-## Step 7 - Windows Update Agent
+## Step 7: Windows Update Agent
Azure Automation Update Management relies on the Windows Update Agent to download and install Windows updates. There are specific group policy settings that are used by Windows Update Agent (WUA) on machines to connect to Windows Server Update Services (WSUS) or Microsoft Update. These group policy settings are also used to successfully scan for software update compliance, and to automatically update the software updates. To review our recommendations, see [Configure Windows Update settings for Update Management](configure-wuagent.md).
-## Step 8 - Linux repository
+## Step 8: Linux repository
VMs created from the on-demand Red Hat Enterprise Linux (RHEL) images available in Azure Marketplace are registered to access the Red Hat Update Infrastructure (RHUI) that's deployed in Azure. Any other Linux distribution must be updated from the distribution's online file repository by using methods supported by that distribution. To classify updates on Red Hat Enterprise version 6, you need to install the yum-security plugin. On Red Hat Enterprise Linux 7, the plugin is already a part of yum itself and there's no need to install anything. For more information, see the following Red Hat [knowledge article](https://access.redhat.com/solutions/10021).
-## Step 9 - Plan deployment targets
+## Step 9: Plan deployment targets
Update Management allows you to target updates to a dynamic group representing Azure or non-Azure machines, so you can ensure that specific machines always get the right updates at the most convenient times. A dynamic group is resolved at deployment time and is based on the following criteria:
azure-app-configuration Concept Geo Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/concept-geo-replication.md
To add once these links become available:
Each replica created adds extra charges. See the [App Configuration pricing page](https://azure.microsoft.com/pricing/details/app-configuration/) for details. For example, if your origin is a standard tier configuration store and you have five replicas, you're charged the rate of six standard tier configuration stores for your system, but each replica's isolated quota and requests are included in this charge.
+## Monitoring
+
+To offer insights into the characteristics of the geo-replication feature, App Configuration provides a metric named **Replication Latency**. The replication latency metric describes how long it takes for data to replicate from one region to another.
+
+For more information on the replication latency metric and other App Configuration metrics, see [Monitoring App Configuration data reference](./monitor-app-configuration-reference.md).
+ ## Next steps > [!div class="nextstepaction"]
azure-app-configuration Howto Create Snapshots https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/howto-create-snapshots.md
To create sample snapshots and check how the snapshots feature work, use the sna
1. Select **Create** to generate the sample snapshot. 1. Check out the snapshot result generated under **Generated sample snapshot**. The sample snapshot displays all keys that are included in the sample snapshot, according to your selection.
+## Use snapshots
+
+You can select any number of snapshots for the application's configuration. Selecting a snapshot adds all of its key-values. Once added to a configuration, the key-values from snapshots are treated the same as any other key-value.
+
+If you have an application using Azure App Configuration, you can update it with the following sample code to use snapshots. You only need to provide the name of the snapshot, which is case-sensitive.
+
+### [.NET](#tab/dotnet)
+
+Edit the call to the `AddAzureAppConfiguration` method, which is often found in the `program.cs` file of your application. If you don't have an application, you can reference any of the .NET quickstart guides, like [creating an ASP.NET core app with Azure App Configuration](./quickstart-aspnet-core-app.md).
+
+**Add snapshots to your configuration**
+
+```csharp
+configurationBuilder.AddAzureAppConfiguration(options =>
+{
+ options.Connect(Environment.GetEnvironmentVariable("ConnectionString"));
+
+ // Select an existing snapshot by name. This will add all of the key-values from the snapshot to this application's configuration.
+ options.SelectSnapshot("SnapshotName");
+
+ // Other changes to options
+});
+```
+
+> [!NOTE]
+> Snapshot support is available if you use version **7.0.0-preview** or later of any of the following packages.
+> - `Microsoft.Extensions.Configuration.AzureAppConfiguration`
+> - `Microsoft.Azure.AppConfiguration.AspNetCore`
+> - `Microsoft.Azure.AppConfiguration.Functions.Worker`
+++ ## Manage active snapshots The page under **Operations** > **Snapshots (preview)** displays two tabs: **Active snapshots** and **Archived snapshots**. Select **Active snapshots** to view the list of all active snapshots in an App Configuration store.
azure-app-configuration Monitor App Configuration Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/monitor-app-configuration-reference.md
Resource Provider and Type: [App Configuration Platform Metrics](../azure-monito
| Http Incoming Request Duration | Milliseconds | Server side duration of an Http Request | | Throttled Http Request Count | Count | Throttled requests are Http requests that receive a response with a status code of 429 | | Daily Storage Usage | Percent | Represents the amount of storage in use as a percentage of the maximum allowance. This metric is updated at least once daily. |
+| Replication Latency | Milliseconds | Represents the average time it takes for a replica to be consistent with the current state. |
For more information, see a list of [all platform metrics supported in Azure Monitor](../azure-monitor/essentials/metrics-supported.md).
App Configuration has the following dimensions associated with its metr
| Http Incoming Request Duration | The supported dimensions are the **HttpStatusCode**, **AuthenticationScheme**, and **Endpoint** of each request. **AuthenticationScheme** can be filtered by AAD or HMAC authentication. | | Throttled Http Request Count | The **Endpoint** of each request is included as a dimension. | | Daily Storage Usage | This metric does not have any dimensions. |
+| Replication Latency | The **Endpoint** of the replica that data was replicated to is included as a dimension. |
For more information on what metric dimensions are, see [Multi-dimensional metrics](../azure-monitor/essentials/data-platform-metrics.md#multi-dimensional-metrics).
azure-app-configuration Monitor App Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/monitor-app-configuration.md
You can analyze metrics for App Configuration with metrics from other Azure serv
* Http Incoming Request Duration * Throttled Http Request Count (Http status code 429 Responses) * Daily Storage Usage
+* Replication Latency
In the portal, navigate to the **Metrics** section and select the **Metric Namespaces** and **Metrics** you want to analyze. This screenshot shows you the metrics view when selecting **Http Incoming Request Count** for your configuration store.
azure-app-configuration Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/policy-reference.md
Title: Built-in policy definitions for Azure App Configuration description: Lists Azure Policy built-in policy definitions for Azure App Configuration. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2023 Last updated : 08/30/2023
azure-app-configuration Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure App Configuration description: Lists Azure Policy Regulatory Compliance controls available for Azure App Configuration. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/03/2023 Last updated : 08/25/2023
azure-arc Create Data Controller Direct Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/create-data-controller-direct-cli.md
Deploy the Azure Arc data controller using released profile
##### [Linux](#tab/linux) ```azurecli
-az arcdata dc create -name <name> -g ${resourceGroup} --custom-location ${customLocationName} --cluster-name ${clusterName} --connectivity-mode direct --profile-name <the-deployment-profile> --auto-upload-metrics true --auto-upload-logs true --storage-class <storageclass>
+az arcdata dc create --name <name> -g ${resourceGroup} --custom-location ${customLocationName} --cluster-name ${clusterName} --connectivity-mode direct --profile-name <the-deployment-profile> --auto-upload-metrics true --auto-upload-logs true --storage-class <storageclass>
# Example
az arcdata dc create --name arc-dc1 --resource-group my-resource-group --custom-location cl-name --connectivity-mode direct --profile-name azure-arc-aks-premium-storage --auto-upload-metrics true --auto-upload-logs true --storage-class mystorageclass
az arcdata dc create --name arc-dc1 --resource-group my-resource-group -custo
##### [Windows (PowerShell)](#tab/windows) ```azurecli
-az arcdata dc create -name <name> -g $ENV:resourceGroup --custom-location $ENV:customLocationName --cluster-name $ENV:clusterName --connectivity-mode direct --profile-name <the-deployment-profile> --auto-upload-metrics true --auto-upload-logs true --storage-class <storageclass>
+az arcdata dc create --name <name> -g $ENV:resourceGroup --custom-location $ENV:customLocationName --cluster-name $ENV:clusterName --connectivity-mode direct --profile-name <the-deployment-profile> --auto-upload-metrics true --auto-upload-logs true --storage-class <storageclass>
# Example
az arcdata dc create --name arc-dc1 -g $ENV:resourceGroup --custom-location $ENV:customLocationName --cluster-name $ENV:clusterName --connectivity-mode direct --profile-name azure-arc-aks-premium-storage --auto-upload-metrics true --auto-upload-logs true --storage-class mystorageclass
azure-arc Create Data Controller Using Kubernetes Native Tools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/create-data-controller-using-kubernetes-native-tools.md
You can use an online tool to base64 encode your desired username and password o
PowerShell ```console
-[Convert]::ToBase64String([System.Text.Encoding]::Unicode.GetBytes('<your string to encode here>'))
+[Convert]::ToBase64String([System.Text.Encoding]::UTF8.GetBytes('<your string to encode here>'))
#Example
-#[Convert]::ToBase64String([System.Text.Encoding]::Unicode.GetBytes('example'))
+#[Convert]::ToBase64String([System.Text.Encoding]::UTF8.GetBytes('example'))
```
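
To confirm a value round-trips correctly, you can decode it again. A quick sketch using the string `example`:

```console
[Convert]::ToBase64String([System.Text.Encoding]::UTF8.GetBytes('example'))
# ZXhhbXBsZQ==
[System.Text.Encoding]::UTF8.GetString([Convert]::FromBase64String('ZXhhbXBsZQ=='))
# example
```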
azure-arc Create Postgresql Server Kubernetes Native Tools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/create-postgresql-server-kubernetes-native-tools.md
To create a PostgreSQL server using Kubernetes tools, you will need to have the
## Overview
-To create a PostgreSQL server, you need to create a Kubernetes secret to store your postgres administrator login and password securely and a PostgreSQL server custom resource based on the _postgresqls_ custom resource definitions.
+To create a PostgreSQL server, you need to create a Kubernetes secret to store your postgres administrator login and password securely and a PostgreSQL server custom resource based on the `postgresqls` custom resource definitions.
## Create a yaml file
You can use an online tool to base64 encode your desired username and password o
PowerShell ```console
-[Convert]::ToBase64String([System.Text.Encoding]::Unicode.GetBytes('<your string to encode here>'))
+[Convert]::ToBase64String([System.Text.Encoding]::UTF8.GetBytes('<your string to encode here>'))
#Example
-#[Convert]::ToBase64String([System.Text.Encoding]::Unicode.GetBytes('example'))
+#[Convert]::ToBase64String([System.Text.Encoding]::UTF8.GetBytes('example'))
```
azure-arc Limitations Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/limitations-managed-instance.md
This article describes limitations of Azure Arc-enabled SQL Managed Instance.
- Transactional replication is currently not supported. - Log shipping is currently blocked.-- Creating a database using SQL Server Management Studio does not work currently. Use the T-SQL command `CREATE DATABASE` to create databases.
+- All user databases need to use the full recovery model because they participate in an Always On availability group.
## Roles and responsibilities
azure-arc Managed Instance Disaster Recovery Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/managed-instance-disaster-recovery-portal.md
After the failover group is provisioned, you can view it in Azure portal.
:::image type="content" source="media/managed-instance-disaster-recovery-portal/failover-group-overview.png" alt-text="Screenshot of Azure portal failover group.":::
-## Fail over
+## Failover
In the disaster recovery configuration, only one of the instances in the failover group is primary. You can fail over from the portal to migrate the primary role to the other instance in your failover group. To fail over:
azure-arc Managed Instance Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/managed-instance-features.md
Azure Arc-enabled SQL Managed Instance share a common code base with the latest
| Feature | Azure Arc-enabled SQL Managed Instance | |--|--| | JSON | Yes |
-| Query Store | No |
+| Query Store | Yes |
| Temporal | Yes | | Native XML support | Yes | | XML indexing | Yes |
azure-arc Validation Program https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/validation-program.md
To see how all Azure Arc-enabled components are validated, see [Validation progr
### HPE
-|Solution and version | Kubernetes version | Azure Arc-enabled data services version | SQL engine version | PostgreSQL server version
+|Solution and version | Kubernetes version | Azure Arc-enabled data services version | SQL engine version | PostgreSQL server version|
|--|--|--|--|--|
-|HPE Superdome Flex 280 | 1.23.5 | 1.15.0_2023-01-10 | 16.0.816.19223 | 14.5 (Ubuntu 20.04)|
+|HPE Superdome Flex 280 | 1.23.5 | 1.22.0_2023-08-08 | 16.0.5100.7242 |Not validated|
|HPE Apollo 4200 Gen10 Plus | 1.22.6 | 1.11.0_2022-09-13 |16.0.312.4243|12.3 (Ubuntu 12.3-1)| ### Kublr
More tests will be added in future releases of Azure Arc-enabled data services.
- [Create a data controller - indirectly connected with the CLI](create-data-controller-indirect-cli.md) - To create a directly connected data controller, start with [Prerequisites to deploy the data controller in direct connectivity mode](create-data-controller-direct-prerequisites.md). +
azure-arc Version Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/version-log.md
This article identifies the component versions with each release of Azure Arc-en
## August 8, 2023
-Aug 2023 preview release is now available.
- |Component|Value| |--|--| |Container images tag |`v1.22.0_2023-08-08`|
azure-arc Agent Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/agent-upgrade.md
Title: "Upgrade Azure Arc-enabled Kubernetes agents" Previously updated : 09/09/2022 Last updated : 08/28/2023 description: "Control agent upgrades for Azure Arc-enabled Kubernetes"
az connectedk8s connect --name AzureArcTest1 --resource-group AzureArcTest
With automatic upgrade enabled, the agent polls Azure hourly to check for a newer version. When a newer version becomes available, it triggers a Helm chart upgrade for the Azure Arc agents.
+> [!IMPORTANT]
+> Be sure you allow [connectivity to all required endpoints](network-requirements.md). In particular, connectivity to `dl.k8s.io` is required for automatic upgrades.
+ To opt out of automatic upgrade, specify the `--disable-auto-upgrade` parameter while connecting the cluster to Azure Arc. The following command connects a cluster to Azure Arc with auto-upgrade disabled:
Azure Arc-enabled Kubernetes follows the standard [semantic versioning scheme](h
While the schedule may vary, a new minor version of Azure Arc-enabled Kubernetes agents is released approximately once per month.
-The following command upgrades the agent to version 1.8.14:
+The following command manually upgrades the agent to version 1.8.14:
```azurecli az connectedk8s upgrade -g AzureArcTest1 -n AzureArcTest --agent-version 1.8.14
azure-arc Cluster Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/cluster-connect.md
Title: "Use cluster connect to securely connect to Azure Arc-enabled Kubernetes clusters." Previously updated : 04/20/2023 Last updated : 08/30/2023 description: "With cluster connect, you can securely connect to Azure Arc-enabled Kubernetes clusters without requiring any inbound port to be enabled on the firewall."
Before you begin, review the [conceptual overview of the cluster connect feature
- For an Azure AD user account: ```azurecli
- AAD_ENTITY_OBJECT_ID=$(az ad signed-in-user show --query userPrincipalName -o tsv)
+ AAD_ENTITY_OBJECT_ID=$(az ad signed-in-user show --query id -o tsv)
``` - For an Azure AD application:
Before you begin, review the [conceptual overview of the cluster connect feature
kubectl create clusterrolebinding demo-user-binding --clusterrole cluster-admin --user=$AAD_ENTITY_OBJECT_ID ```
- - If you are using Azure RBAC for authorization checks on the cluster, you can create an Azure role assignment mapped to the Azure AD entity. Example:
+ - If you are using [Azure RBAC for authorization checks](azure-rbac.md) on the cluster, you can create an Azure role assignment mapped to the Azure AD entity. Example:
```azurecli az role assignment create --role "Azure Arc Kubernetes Viewer" --assignee $AAD_ENTITY_OBJECT_ID --scope $ARM_ID_CLUSTER
Before you begin, review the [conceptual overview of the cluster connect feature
You should now see a response from the cluster containing the list of all pods under the `default` namespace. ## Known limitations
-Use `az connectedk8s show` to check the Arc-enabled Kubernetes agent version.
-### [Agent version < 1.11.7](#tab/agent-version)
+Use `az connectedk8s show` to check your Arc-enabled Kubernetes agent version.
+### [Agent version < 1.11.7](#tab/agent-version)
When making requests to the Kubernetes cluster, if the Azure AD entity used is a part of more than 200 groups, you may see the following error: - `You must be logged in to the server (Error:Error while retrieving group info. Error:Overage claim (users with more than 200 group membership) is currently not supported.` This is a known limitation. To get past this error:
This is a known limitation. To get past this error:
1. [Sign in](/cli/azure/create-an-azure-service-principal-azure-cli#sign-in-using-a-service-principal) to Azure CLI with the service principal before running the `az connectedk8s proxy` command. ### [Agent version >= 1.11.7](#tab/agent-version-latest)+ When making requests to the Kubernetes cluster, if the Azure AD service principal used is a part of more than 200 groups, you may see the following error: `Overage claim (users with more than 200 group membership) for SPN is currently not supported. For troubleshooting, please refer to aka.ms/overageclaimtroubleshoot`
This is a known limitation. To get past this error:
1. Create a [service principal](/cli/azure/create-an-azure-service-principal-azure-cli), which is less likely to be a member of more than 200 groups. 1. [Sign in](/cli/azure/create-an-azure-service-principal-azure-cli#sign-in-using-a-service-principal) to Azure CLI with the service principal before running the `az connectedk8s proxy` command.+ ## Next steps
azure-arc Conceptual Agent Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/conceptual-agent-overview.md
Title: "Azure Arc-enabled Kubernetes agent overview" Previously updated : 12/07/2022 Last updated : 08/24/2023 description: "Learn about the Azure Arc agents deployed on the Kubernetes clusters when connecting them to Azure Arc."
description: "Learn about the Azure Arc agents deployed on the Kubernetes cluste
[Azure Arc-enabled Kubernetes](overview.md) provides a centralized, consistent control plane to manage policy, governance, and security across Kubernetes clusters in different environments.
-Azure Arc agents are deployed on Kubernetes clusters when you [connect them to Azure Arc](quickstart-connect-cluster.md), This article provides an overview of these agents.
+Azure Arc agents are deployed on Kubernetes clusters when you [connect them to Azure Arc](quickstart-connect-cluster.md). This article provides an overview of these agents.
## Deploy agents to your cluster Most on-premises datacenters enforce strict network rules that prevent inbound communication on the network boundary firewall. Azure Arc-enabled Kubernetes works with these restrictions by not requiring inbound ports on the firewall. Azure Arc agents require outbound communication to a [set list of network endpoints](network-requirements.md).
+This diagram provides a high-level view of Azure Arc components. Kubernetes clusters in on-premises datacenters or different clouds are connected to Azure through the Azure Arc agents. This allows the clusters to be managed in Azure using management tools and Azure services. The clusters can also be accessed through offline management tools.
+ :::image type="content" source="media/architectural-overview.png" alt-text="Diagram showing an architectural overview of the Azure Arc-enabled Kubernetes agents." lightbox="media/architectural-overview.png"::: The following high-level steps are involved in [connecting a Kubernetes cluster to Azure Arc](quickstart-connect-cluster.md):
The following high-level steps are involved in [connecting a Kubernetes cluster
## Next steps * Walk through our quickstart to [connect a Kubernetes cluster to Azure Arc](./quickstart-connect-cluster.md).
+* View release notes to see [details about the latest agent versions](release-notes.md).
* Learn about [upgrading Azure Arc-enabled Kubernetes agents](agent-upgrade.md). * Learn more about the creating connections between your cluster and a Git repository as a [configuration resource with Azure Arc-enabled Kubernetes](./conceptual-configurations.md).
azure-arc Conceptual Gitops Flux2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/conceptual-gitops-flux2.md
GitOps is enabled in an Azure Arc-enabled Kubernetes or AKS cluster as a `Micros
### Version support
-The most recent version of the Flux v2 extension (`microsoft.flux`) and the two previous versions (N-2) are supported. We generally recommend that you use the [most recent version](extensions-release.md#flux-gitops) of the extension.
-
-Starting with [`microsoft.flux` version 1.7.0](extensions-release.md#170-march-2023), ARM64-based clusters are supported.
+The most recent version of the Flux v2 extension (`microsoft.flux`) and the two previous versions (N-2) are supported. We generally recommend that you use the [most recent version](extensions-release.md#flux-gitops) of the extension. Starting with `microsoft.flux` version 1.7.0, ARM64-based clusters are supported.
> [!NOTE] > If you have been using Flux v1, we recommend [migrating to Flux v2](conceptual-gitops-flux2.md#migrate-from-flux-v1) as soon as possible.
azure-arc Extensions Release https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/extensions-release.md
Title: "Available extensions for Azure Arc-enabled Kubernetes clusters" Previously updated : 04/17/2023 Last updated : 08/31/2023 description: "See which extensions are currently available for Azure Arc-enabled Kubernetes clusters and view release notes."
For more information, see [Tutorial: Deploy applications using GitOps with Flux
The currently supported versions of the `microsoft.flux` extension are described below. The most recent version of the Flux v2 extension and the two previous versions (N-2) are supported. We generally recommend that you use the most recent version of the extension.
-### 1.7.3 (April 2023)
+> [!IMPORTANT]
+> Eventually, a major version update (v2.x.x) for the `microsoft.flux` extension will be released. When this happens, clusters won't be auto-upgraded to this version, since [auto-upgrade is only supported for minor version releases](extensions.md#upgrade-extension-instance). If you're still using an older API version when the next major version is released, you'll need to update your manifests to the latest API versions, perform any necessary testing, then upgrade your extension manually. For more information about the new API versions (breaking changes) and how to update your manifests, see the [Flux v2 release notes](https://github.com/fluxcd/flux2/releases/tag/v2.0.0).
-Flux version: [Release v0.41.2](https://github.com/fluxcd/flux2/releases/tag/v0.41.2)
+### 1.7.6 (August 2023)
-- source-controller: v0.36.1-- kustomize-controller: v0.35.1-- helm-controller: v0.31.2-- notification-controller: v0.33.0-- image-automation-controller: v0.31.0-- image-reflector-controller: v0.26.1
+Flux version: [Release v2.0.1](https://github.com/fluxcd/flux2/releases/tag/v2.0.1)
+
+- source-controller: v1.0.1
+- kustomize-controller: v1.0.1
+- helm-controller: v0.35.0
+- notification-controller: v1.0.0
+- image-automation-controller: v0.35.0
+- image-reflector-controller: v0.29.1
Changes made for this version: -- Upgrades Flux to [v0.41.2](https://github.com/fluxcd/flux2/releases/tag/v0.41.2)-- Fixes issue causing resources that were deployed as part of Flux configuration to persist even when the configuration was deleted with prune flag set to `true`-- Kubelet identity support for image-reflector-controller by [installing the microsoft.flux extension in a cluster with kubelet identity enabled](troubleshooting.md#flux-v2installing-the-microsoftflux-extension-in-a-cluster-with-kubelet-identity-enabled)
+- Configurations with `ssh` authentication type were intermittently failing to reconcile with GitHub due to an updated [RSA SSH host key](https://github.blog/2023-03-23-we-updated-our-rsa-ssh-host-key/). This release updates the SSH key entries to match the ones mentioned in [GitHub's SSH key fingerprints documentation](https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/githubs-ssh-key-fingerprints).
-### 1.7.0 (March 2023)
+### 1.7.5 (August 2023)
-Flux version: [Release v0.39.0](https://github.com/fluxcd/flux2/releases/tag/v0.39.0)
+Flux version: [Release v2.0.1](https://github.com/fluxcd/flux2/releases/tag/v2.0.1)
-- source-controller: v0.34.0-- kustomize-controller: v0.33.0-- helm-controller: v0.29.0-- notification-controller: v0.31.0-- image-automation-controller: v0.29.0-- image-reflector-controller: v0.24.0
+- source-controller: v1.0.1
+- kustomize-controller: v1.0.1
+- helm-controller: v0.35.0
+- notification-controller: v1.0.0
+- image-automation-controller: v0.35.0
+- image-reflector-controller: v0.29.1
Changes made for this version: -- Upgrades Flux to [v0.39.0](https://github.com/fluxcd/flux2/releases/tag/v0.39.0)-- Flux extension is now supported on ARM64-based clusters
+- Upgrades Flux to [v2.0.1](https://github.com/fluxcd/flux2/releases/tag/v2.0.1)
+- Promotes some APIs to v1. This change should not affect any existing Flux configurations that have already been deployed. Previous API versions will still be supported in all `microsoft.flux` v.1.x.x releases. However, we recommend that you update the API versions in your manifests as soon as possible. For more information about the new API versions (breaking changes) and how to update your manifests, see the [Flux v2 release notes](https://github.com/fluxcd/flux2/releases/tag/v2.0.0).
+- Adds support for [Helm drift detection](tutorial-use-gitops-flux2.md#helm-drift-detection) and [OOM watch](tutorial-use-gitops-flux2.md#helm-oom-watch).
-### 1.6.4 (February 2023)
+### 1.7.4 (June 2023)
+
+Flux version: [Release v0.41.2](https://github.com/fluxcd/flux2/releases/tag/v0.41.2)
+
+- source-controller: v0.36.1
+- kustomize-controller: v0.35.1
+- helm-controller: v0.31.2
+- notification-controller: v0.33.0
+- image-automation-controller: v0.31.0
+- image-reflector-controller: v0.26.1
Changes made for this version:
-- Disabled extension reconciler (which attempts to restore the Flux extension if it fails). This resolves a potential bug where, if the reconciler is unable to recover a failed Flux extension and `prune` is set to `true`, the extension and deployed objects may be deleted.
+- Adds support for [`wait`](https://fluxcd.io/flux/components/kustomize/kustomization/#wait) and [`postBuild`](https://fluxcd.io/flux/components/kustomize/kustomization/#post-build-variable-substitution) properties as optional parameters for kustomization. By default, `wait` will be set to `true` for all Flux configurations, and `postBuild` will be null. ([Example](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/kubernetesconfiguration/resource-manager/Microsoft.KubernetesConfiguration/stable/2023-05-01/examples/CreateFluxConfiguration.json#L55))
-### 1.6.3 (December 2022)
+- Adds support for optional properties [`waitForReconciliation`](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/kubernetesconfiguration/resource-manager/Microsoft.KubernetesConfiguration/stable/2023-05-01/fluxconfiguration.json#L1299C14-L1299C35) and [`reconciliationWaitDuration`](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/kubernetesconfiguration/resource-manager/Microsoft.KubernetesConfiguration/stable/2023-05-01/fluxconfiguration.json#L1304).
-Flux version: [Release v0.37.0](https://github.com/fluxcd/flux2/releases/tag/v0.37.0)
+ By default, `waitForReconciliation` is set to false, so when creating a flux configuration, the `provisioningState` returns `Succeeded` once the configuration reaches the cluster and the ARM template or Azure CLI command successfully exits. However, the actual state of the objects being deployed as part of the configuration is tracked by `complianceState`, which can be viewed in the portal or by using Azure CLI. Setting `waitForReconciliation` to true and specifying a `reconciliationWaitDuration` means that the template or CLI deployment will wait for `complianceState` to reach a terminal state (success or failure) before exiting. ([Example](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/kubernetesconfiguration/resource-manager/Microsoft.KubernetesConfiguration/stable/2023-05-01/examples/CreateFluxConfiguration.json#L72))
-- source-controller: v0.32.1
-- kustomize-controller: v0.31.0
-- helm-controller: v0.27.0
-- notification-controller: v0.29.0
-- image-automation-controller: v0.27.0
-- image-reflector-controller: v0.23.0
+### 1.7.3 (April 2023)
+
+Flux version: [Release v0.41.2](https://github.com/fluxcd/flux2/releases/tag/v0.41.2)
+
+- source-controller: v0.36.1
+- kustomize-controller: v0.35.1
+- helm-controller: v0.31.2
+- notification-controller: v0.33.0
+- image-automation-controller: v0.31.0
+- image-reflector-controller: v0.26.1
Changes made for this version:
-- Upgrades Flux to [v0.37.0](https://github.com/fluxcd/flux2/releases/tag/v0.37.0)
-- Adds exception for [aad-pod-identity in flux extension](troubleshooting.md#flux-v2installing-the-microsoftflux-extension-in-a-cluster-with-azure-ad-pod-identity-enabled)
-- Enables reconciler for flux extension
+- Upgrades Flux to [v0.41.2](https://github.com/fluxcd/flux2/releases/tag/v0.41.2)
+- Fixes issue causing resources that were deployed as part of Flux configuration to persist even when the configuration was deleted with prune flag set to `true`
+- Kubelet identity support for image-reflector-controller by [installing the microsoft.flux extension in a cluster with kubelet identity enabled](troubleshooting.md#flux-v2installing-the-microsoftflux-extension-in-a-cluster-with-kubelet-identity-enabled)
## Dapr extension for Azure Kubernetes Service (AKS) and Arc-enabled Kubernetes
azure-arc Monitor Gitops Flux 2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/monitor-gitops-flux-2.md
Title: Monitor GitOps (Flux v2) status and activity Previously updated : 08/11/2023 Last updated : 08/17/2023 description: Learn how to monitor status, compliance, resource consumption, and reconciliation activity for GitOps with Flux v2.
Follow these steps to import dashboards that let you monitor Flux extension depl
> [!NOTE] > These steps describe the process for importing the dashboard to [Azure Managed Grafana](/azure/managed-grafana/overview). You can also [import this dashboard to any Grafana instance](https://grafana.com/docs/grafana/latest/dashboards/manage-dashboards/#import-a-dashboard). With this option, a service principal must be used; managed identity is not supported for data connection outside of Azure Managed Grafana.
-1. Create an Azure Managed Grafana instance by using the [Azure portal](/azure/managed-grafana/quickstart-managed-grafana-portal) or [Azure CLI](/azure/managed-grafana/quickstart-managed-grafana-cli). Ensure that you're able to access Grafana by selecting its endpoint on the Overview page. You need at least **Reader** level permissions. You can check your access by going to **Access control (IAM)** on the Grafana instance.
-1. If you're using a managed identity for the Azure Managed Grafana instance, follow these steps to assign it a Reader role on the subscription(s):
+1. Create an Azure Managed Grafana instance by using the [Azure portal](/azure/managed-grafana/quickstart-managed-grafana-portal) or [Azure CLI](/azure/managed-grafana/quickstart-managed-grafana-cli). Ensure that you're able to access Grafana by selecting its endpoint on the Overview page. You need at least **Grafana Editor** level permissions to view and edit dashboards. You can check your access by going to **Access control (IAM)** on the Grafana instance.
+1. If you're using a managed identity for the Azure Managed Grafana instance, follow these steps to assign it the **Monitoring Reader** role on the subscription(s):
1. In the Azure portal, navigate to the subscription that you want to add. 1. Select **Access control (IAM)**. 1. Select **Add role assignment**.
- 1. Select the **Reader** role, then select **Next**.
+ 1. Select the **Monitoring Reader** role, then select **Next**.
1. On the **Members** tab, select **Managed identity**, then choose **Select members**. 1. From the **Managed identity** list, select the subscription where you created your Azure Managed Grafana Instance. Then select **Azure Managed Grafana** and the name of your Azure Managed Grafana instance. 1. Select **Review + Assign**.
- If you're using a service principal, grant the **Reader** role to the service principal that you'll use for your data source connection. Follow these same steps, but select **User, group, or service principal** in the **Members** tab, then select your service principal. (If you aren't using Azure Managed Grafana, you must use a service principal for data connection access.)
+ If you're using a service principal, grant the **Monitoring Reader** role to the service principal that you'll use for your data source connection. Follow these same steps, but select **User, group, or service principal** in the **Members** tab, then select your service principal. (If you aren't using Azure Managed Grafana, you must use a service principal for data connection access.)
1. [Create the Azure Monitor Data Source connection](https://grafana.com/docs/grafana/latest/datasources/azure-monitor/) in your Azure Managed Grafana instance. This connection lets the dashboard access Azure Resource Graph data. 1. Download the [GitOps Flux - Application Deployments Dashboard](https://github.com/Azure/fluxv2-grafana-dashboards/blob/main/dashboards/GitOps%20Flux%20-%20Application%20Deployments%20Dashboard.json).
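If you prefer scripting over the portal steps above, the same **Monitoring Reader** assignment can be made with Azure CLI. This is a minimal sketch; the object ID and subscription ID are illustrative placeholders:

```azurecli
# Sketch: grant Monitoring Reader on a subscription to the Grafana managed identity
# or service principal. <principal-object-id> and <subscription-id> are placeholders.
az role assignment create \
  --assignee <principal-object-id> \
  --role "Monitoring Reader" \
  --scope "/subscriptions/<subscription-id>"
```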
azure-arc Network Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/network-requirements.md
Title: Azure Arc-enabled Kubernetes network requirements description: Learn about the networking requirements to connect Kubernetes clusters to Azure Arc. Previously updated : 03/07/2023 Last updated : 08/15/2023
azure-arc Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/policy-reference.md
Title: Built-in policy definitions for Azure Arc-enabled Kubernetes description: Lists Azure Policy built-in policy definitions for Azure Arc-enabled Kubernetes. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2023 Last updated : 08/30/2023 #
azure-arc Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/release-notes.md
Title: "What's new with Azure Arc-enabled Kubernetes" Previously updated : 05/23/2023 Last updated : 08/21/2023 description: "Learn about the latest releases of Arc-enabled Kubernetes."
Azure Arc-enabled Kubernetes is updated on an ongoing basis. To stay up to date
> > We generally recommend using the most recent versions of the agents. The [version support policy](agent-upgrade.md#version-support-policy) covers the most recent version and the two previous versions (N-2).
+## July 2023
+
+### Arc agents - Version 1.12.5
+
- The Alpine base image powering our Arc agent containers has been updated from 3.7.12 to 3.18.0
+ ## May 2023 ### Arc agents - Version 1.11.7
azure-arc System Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/system-requirements.md
Title: "Azure Arc-enabled Kubernetes system requirements" Previously updated : 04/27/2023 Last updated : 08/28/2023 description: Learn about the system requirements to connect Kubernetes clusters to Azure Arc.
For a multi-node Kubernetes cluster environment, pods can get scheduled on diffe
## Management tool requirements
-Connecting a cluster to Azure Arc requires [Helm 3](https://helm.sh/docs/intro/install), version 3.7.0 or earlier.
-
-You'll also need to use either Azure CLI or Azure PowerShell.
+To connect a cluster to Azure Arc, you'll need to use either Azure CLI or Azure PowerShell.
For Azure CLI:
For Azure PowerShell:
Install-Module -Name Az.ConnectedKubernetes ```
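For comparison, a minimal Azure CLI setup looks like the following sketch; the cluster and resource group names are placeholders:

```azurecli
# Sketch: install the connectedk8s CLI extension, then connect the cluster to Azure Arc.
az extension add --name connectedk8s
az connectedk8s connect --name <cluster-name> --resource-group <resource-group>
```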
+> [!NOTE]
+> When you deploy the Azure Arc agents to a cluster, Helm v. 3.6.3 will be installed in the `.azure` folder of the deployment machine. This [Helm 3](https://helm.sh/docs/) installation is only used for Azure Arc, and it doesn't remove or change any previously installed versions of Helm on the machine.
+ ## Azure AD identity requirements To connect your cluster to Azure Arc, you must have an Azure AD identity (user or service principal) which can be used to log in to [Azure CLI](/cli/azure/authenticate-azure-cli) or [Azure PowerShell](/powershell/azure/authenticate-azureps) and connect your cluster to Azure Arc.
azure-arc Tutorial Use Gitops Flux2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/tutorial-use-gitops-flux2.md
Title: "Tutorial: Deploy applications using GitOps with Flux v2" description: "This tutorial shows how to use GitOps with Flux v2 to manage configuration and application deployment in Azure Arc and AKS clusters." Previously updated : 06/29/2023 Last updated : 08/16/2023
To deploy applications using GitOps with Flux v2, you need:
#### For Azure Arc-enabled Kubernetes clusters
-* An Azure Arc-enabled Kubernetes connected cluster that's up and running. ARM64-based clusters are supported starting with [`microsoft.flux` version 1.7.0](extensions-release.md#170-march-2023).
+* An Azure Arc-enabled Kubernetes connected cluster that's up and running. ARM64-based clusters are supported starting with [`microsoft.flux` version 1.7.0](extensions-release.md#flux-gitops).
[Learn how to connect a Kubernetes cluster to Azure Arc](./quickstart-connect-cluster.md). If you need to connect through an outbound proxy, then assure you [install the Arc agents with proxy settings](./quickstart-connect-cluster.md?tabs=azure-cli#connect-using-an-outbound-proxy-server).
False whl k8s-extension C:\Users\somename\.azure\c
#### For Azure Arc-enabled Kubernetes clusters
-* An Azure Arc-enabled Kubernetes connected cluster that's up and running. ARM64-based clusters are supported starting with [`microsoft.flux` version 1.7.0](extensions-release.md#170-march-2023).
+* An Azure Arc-enabled Kubernetes connected cluster that's up and running. ARM64-based clusters are supported starting with [`microsoft.flux` version 1.7.0](extensions-release.md#flux-gitops).
[Learn how to connect a Kubernetes cluster to Azure Arc](./quickstart-connect-cluster.md). If you need to connect through an outbound proxy, then assure you [install the Arc agents with proxy settings](./quickstart-connect-cluster.md?tabs=azure-cli#connect-using-an-outbound-proxy-server).
To view detailed conditions for a configuration object, select its name.
Flux supports many parameters to enable various scenarios. For a description of all parameters that Flux supports, see the [official Flux documentation](https://fluxcd.io/docs/). Flux in Azure doesn't support all parameters yet. Let us know if a parameter you need is missing from the Azure implementation. For more information about available parameters and how to use them, see [GitOps Flux v2 configurations with AKS and Azure Arc-enabled Kubernetes](conceptual-gitops-flux2.md#parameters).
+A workaround for deploying Flux resources with unsupported parameters is to define the required Flux custom resources, such as [GitRepository](https://fluxcd.io/flux/components/source/gitrepositories/) or [Kustomization](https://fluxcd.io/flux/components/kustomize/kustomization/), inside your Git repository, and then deploy those resources with the `az k8s-configuration flux create` command, as in the sketch below. You can still access your Flux resources through the Azure Arc UI.
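A minimal sketch of this workaround follows; the repository URL, path, and names are illustrative placeholders:

```azurecli
# Sketch: deploy a repository path that contains Flux custom resource manifests.
az k8s-configuration flux create \
  --resource-group <resource-group> \
  --cluster-name <cluster-name> \
  --cluster-type connectedClusters \
  --name <configuration-name> \
  --url https://github.com/<org>/<repository> \
  --branch main \
  --kustomization name=flux-resources path=./flux-resources prune=true
```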
## Manage cluster configuration by using the Flux Kustomize controller The [Flux Kustomize controller](https://fluxcd.io/docs/components/kustomize/) is installed as part of the `microsoft.flux` cluster extension. It allows the declarative management of cluster configuration and application deployment by using Kubernetes manifests synced from a Git repository. These Kubernetes manifests can optionally include a *kustomize.yaml* file.
-For usage details, see the following resiyrces:
+For usage details, see the following resources:
* [Flux Kustomize controller](https://fluxcd.io/docs/components/kustomize/) * [Kustomize reference documents](https://kubectl.docs.kubernetes.io/references/kustomize/)
spec:
When you use this annotation, the deployed HelmRelease is patched with the reference to the configured source. Currently, only `GitRepository` source is supported.
+### Helm drift detection
+
+[Drift detection for Helm releases](https://fluxcd.io/flux/components/helm/helmreleases/#drift-detection) isn't enabled by default. Starting with [`microsoft.flux` v1.7.5](extensions-release.md#flux-gitops), you can enable Helm drift detection by running the following command:
+
+```azurecli
+az k8s-extension update --resource-group <resource-group> --cluster-name <cluster-name> --name flux --cluster-type <cluster-type> --config helm-controller.detectDrift=true
+```
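To confirm that the setting was applied, you can inspect the extension's configuration settings. This is a hedged sketch; the `configurationSettings` query path is an assumption based on the extension resource shape:

```azurecli
# Sketch: show the configuration settings currently applied to the flux extension.
az k8s-extension show \
  --resource-group <resource-group> \
  --cluster-name <cluster-name> \
  --cluster-type <cluster-type> \
  --name flux \
  --query "configurationSettings"
```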
+
+### Helm OOM watch
+
+Starting with [`microsoft.flux` v1.7.5](extensions-release.md#flux-gitops), you can enable Helm OOM watch. For more information, see [Enable Helm near OOM detection](https://fluxcd.io/flux/cheatsheets/bootstrap/#enable-helm-near-oom-detection).
+
+Be sure to review potential [remediation strategies](https://fluxcd.io/flux/components/helm/helmreleases/#configuring-failure-remediation) and apply them as needed when enabling this feature.
+
+To enable OOM watch, run the following command:
+
+```azurecli
+az k8s-extension update --resource-group <resource-group> --cluster-name <cluster-name> --name flux --cluster-type <cluster-type> --config helm-controller.outOfMemoryWatch.enabled=true helm-controller.outOfMemoryWatch.memoryThreshold=70 helm-controller.outOfMemoryWatch.interval=700ms
+```
+
+If you don't specify values for `memoryThreshold` and `interval`, the default memory threshold is set to 95%, with the interval at which to check the memory utilization set to 500 ms.
+ ## Delete the Flux configuration and extension Use the following commands to delete your Flux configuration and, if desired, the Flux extension itself.
For AKS clusters, you can't use the Azure portal to delete the extension. Instea
az k8s-extension delete -g <resource-group> -c <cluster-name> -n flux -t managedClusters --yes ``` ++ ## Next steps * Read more about [configurations and GitOps](conceptual-gitops-flux2.md).
azure-arc Validation Program https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/validation-program.md
The following providers and their corresponding Kubernetes distributions have su
| | -- | - | | RedHat | [OpenShift Container Platform](https://www.openshift.com/products/container-platform) | [4.9.43](https://docs.openshift.com/container-platform/4.9/release_notes/ocp-4-9-release-notes.html), [4.10.23](https://docs.openshift.com/container-platform/4.10/release_notes/ocp-4-10-release-notes.html), 4.11.0-rc.6, [4.13.4](https://docs.openshift.com/container-platform/4.13/release_notes/ocp-4-13-release-notes.html) | | VMware | [Tanzu Kubernetes Grid](https://tanzu.vmware.com/kubernetes-grid) |TKGm 2.2; upstream K8s v1.25.7+vmware.2 <br>TKGm 2.1.0; upstream K8s v1.24.9+vmware.1 <br>TKGm 1.6.0; upstream K8s v1.23.8+vmware.2 <br>TKGm 1.5.3; upstream K8s v1.22.8+vmware.1 <br>TKGm 1.4.0; upstream K8s v1.21.2+vmware.1 <br>TKGm 1.3.1; upstream K8s v1.20.5+vmware.2 <br>TKGm 1.2.1; upstream K8s v1.19.3+vmware.1 |
-| Canonical | [Charmed Kubernetes](https://ubuntu.com/kubernetes) | [1.24](https://ubuntu.com/kubernetes/docs/1.24/components) |
+| Canonical | [Charmed Kubernetes](https://ubuntu.com/kubernetes)|[1.24](https://ubuntu.com/kubernetes/docs/1.24/components), [1.28](https://ubuntu.com/kubernetes/docs/1.28/components) |
| SUSE Rancher | [Rancher Kubernetes Engine](https://rancher.com/products/rke/) | RKE CLI version: [v1.3.13](https://github.com/rancher/rke/releases/tag/v1.3.13); Kubernetes versions: 1.24.2, 1.23.8 | | Nutanix | [Nutanix Kubernetes Engine](https://www.nutanix.com/products/kubernetes-engine) | Version [2.5](https://portal.nutanix.com/page/documents/details?targetId=Nutanix-Kubernetes-Engine-v2_5:Nutanix-Kubernetes-Engine-v2_5); upstream K8s v1.23.11 | | Kublr | [Kublr Managed K8s](https://kublr.com/managed-kubernetes/) Distribution |[Kublr 1.26.0](https://docs.kublr.com/releasenotes/1.26/release-1.26.0/); Upstream K8s Versions: 1.21.3, 1.22.10, 1.22.17, 1.23.17, 1.24.13, 1.25.6, 1.26.4 |
The conformance tests run as part of the Azure Arc-enabled Kubernetes validation
* [Learn how to connect an existing Kubernetes cluster to Azure Arc](./quickstart-connect-cluster.md) * Learn about the [Azure Arc agents](conceptual-agent-overview.md) deployed on Kubernetes clusters when connecting them to Azure Arc. +
azure-arc Workload Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/workload-management.md
To deploy the sample, run the following script:
mkdir kalypso && cd kalypso curl -fsSL -o deploy.sh https://raw.githubusercontent.com/microsoft/kalypso/main/deploy/deploy.sh chmod 700 deploy.sh
-./deploy.sh -c -p <prefix. e.g. kalypso> -o <github org. e.g. eedorenko> -t <github token> -l <azure-location. e.g. westus2>
+./deploy.sh -c -p <prefix. e.g. kalypso> -o <GitHub org. e.g. eedorenko> -t <GitHub token> -l <azure-location. e.g. westus2>
``` This script may take 10-15 minutes to complete. After it's done, it reports the execution result in the output like this:
Created AKS clusters in kalypso-rg resource group:
> If something goes wrong with the deployment, you can delete the created resources with the following command: > > ```bash
-> ./deploy.sh -d -p <preix. e.g. kalypso> -o <github org. e.g. eedorenko> -t <github token> -l <azure-location. e.g. westus2>
+> ./deploy.sh -d -p <prefix. e.g. kalypso> -o <GitHub org. e.g. eedorenko> -t <GitHub token> -l <azure-location. e.g. westus2>
> ``` ### Sample overview
With this file, Application Team requests Kubernetes compute resources from the
To register the application, open a terminal and use the following script: ```bash
-export org=<github org>
+export org=<GitHub org>
export prefix=<prefix> # clone the control-plane repo
spec:
branch: dev secretRef: name: repo-secret
- url: https://github.com/<github org>/<prefix>-app-gitops
+ url: https://github.com/<GitHub org>/<prefix>-app-gitops
apiVersion: kustomize.toolkit.fluxcd.io/v1beta2 kind: Kustomization
When no longer needed, delete the resources that you created. To do so, run the
```bash # In kalypso folder
-./deploy.sh -d -p <preix. e.g. kalypso> -o <github org. e.g. eedorenko> -t <github token> -l <azure-location. e.g. westus2>
+./deploy.sh -d -p <prefix. e.g. kalypso> -o <GitHub org. e.g. eedorenko> -t <GitHub token> -l <azure-location. e.g. westus2>
``` ## Next steps
azure-arc Network Requirements Consolidated https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/network-requirements-consolidated.md
Title: Azure Arc network requirements description: A consolidated list of network requirements for Azure Arc features and Azure Arc-enabled services. Lists endpoints, ports, and protocols. Previously updated : 02/01/2023 Last updated : 08/15/2023
Connectivity to the Arc Kubernetes-based endpoints is required for all Kubernete
[!INCLUDE [network-requirements](kubernetes/includes/network-requirements.md)]
-For an example, see [Quickstart: Connect an existing Kubernetes cluster to Azure Arc](kubernetes/quickstart-connect-cluster.md).
+For more information, see [Azure Arc-enabled Kubernetes network requirements](kubernetes/network-requirements.md).
## Azure Arc-enabled data services
Connectivity to Arc-enabled server endpoints is required for:
[!INCLUDE [network-requirements](servers/includes/network-requirements.md)]
-For examples, see [Connected Machine agent network requirements](servers/network-requirements.md)].
+For more information, see [Connected Machine agent network requirements](servers/network-requirements.md).
## Azure Arc resource bridge (preview)
This section describes additional networking requirements specific to deploying
[!INCLUDE [network-requirements](resource-bridge/includes/network-requirements.md)]
+For more information, see [Azure Arc resource bridge (preview) network requirements](resource-bridge/network-requirements.md).
+ ## Azure Arc-enabled System Center Virtual Machine Manager (preview) Azure Arc-enabled System Center Virtual Machine Manager (SCVMM) also requires:
azure-arc Network Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/resource-bridge/network-requirements.md
Title: Azure Arc resource bridge (preview) network requirements description: Learn about network requirements for Azure Arc resource bridge (preview) including URLs that must be allowlisted. Previously updated : 01/30/2023 Last updated : 08/24/2023 # Azure Arc resource bridge (preview) network requirements
This article describes the networking requirements for deploying Azure Arc resou
## Additional network requirements
-In addition, resource bridge (preview) requires connectivity to the [Arc-enabled Kubernetes endpoints](../network-requirements-consolidated.md?tabs=azure-cloud).
+In addition, Arc resource bridge (preview) requires connectivity to the [Arc-enabled Kubernetes endpoints](../network-requirements-consolidated.md?tabs=azure-cloud).
> [!NOTE] > The URLs listed here are required for Arc resource bridge only. Other Arc products (such as Arc-enabled VMware vSphere) may have additional required URLs. For details, see [Azure Arc network requirements](../network-requirements-consolidated.md). ## SSL proxy configuration
-If using a proxy, Arc resource bridge must be configured for proxy so that it can connect to the Azure services.
+If using a proxy, Arc resource bridge must be configured for proxy so that it can connect to the Azure services.
-- To configure the Arc resource bridge with proxy, provide the proxy certificate file path during creation of the configuration files.
+- To configure the Arc resource bridge with proxy, provide the proxy certificate file path during creation of the configuration files.
-- The format of the certificate file is *Base-64 encoded X.509 (.CER)*.
+- The format of the certificate file is *Base-64 encoded X.509 (.CER)*.
-- Only pass the single proxy certificate. If a certificate bundle is passed then the deployment will fail.
+- Only pass the single proxy certificate. If a certificate bundle is passed, the deployment will fail.
-- The proxy server endpoint can't be a .local domain.
+- The proxy server endpoint can't be a `.local` domain.
-- The proxy server has to be reachable from all IPs within the IP address prefix, including the control plane and appliance VM IPs.
+- The proxy server has to be reachable from all IPs within the IP address prefix, including the control plane and appliance VM IPs.
-There are only two certificates that should be relevant when deploying the Arc resource bridge behind an SSL proxy:
+There are only two certificates that should be relevant when deploying the Arc resource bridge behind an SSL proxy:
- SSL certificate for your SSL proxy (so that the management machine and appliance VM trust your proxy FQDN and can establish an SSL connection to it) - SSL certificate of the Microsoft download servers. This certificate must be trusted by your proxy server itself, as the proxy is the one establishing the final connection and needs to trust the endpoint. Non-Windows machines may not trust this second certificate by default, so you may need to ensure that it's trusted.
-In order to deploy Arc resource bridge, images need to be downloaded to the management machine and then uploaded to the on-premises private cloud gallery. If your proxy server throttles download speed, this may impact your ability to download the required images (~3.5 GB) within the allotted time (90 min).
+In order to deploy Arc resource bridge, images need to be downloaded to the management machine and then uploaded to the on-premises private cloud gallery. If your proxy server throttles download speed, you may not be able to download the required images (~3.5 GB) within the allotted time (90 min).
## Exclusion list for no proxy
-The following table contains the list of addresses that must be excluded by using the `-noProxy` parameter in the `createconfig` command.
+If a proxy server is being used, the following table contains the list of addresses that should be excluded from proxy by configuring the `noProxy` settings.
| **IP Address** | **Reason for exclusion** | | -- | | | localhost, 127.0.0.1 | Localhost traffic |
-| .svc | Internal Kubernetes service traffic (.svc) where _.svc_ represents a wildcard name. This is similar to saying \*.svc, but none is used in this schema. |
+| .svc | Internal Kubernetes service traffic (.svc) where *.svc* represents a wildcard name. This is similar to saying \*.svc, but none is used in this schema. |
| 10.0.0.0/8 | private network address space | | 172.16.0.0/12 |Private network address space - Kubernetes Service CIDR | | 192.168.0.0/16 | Private network address space - Kubernetes Pod CIDR |
The following table contains the list of addresses that must be excluded by usin
The default value for `noProxy` is `localhost,127.0.0.1,.svc,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16`. While these default values will work for many networks, you may need to add more subnet ranges and/or names to the exemption list. For example, you may want to exempt your enterprise namespace (.contoso.com) from being directed through the proxy. You can achieve that by specifying the values in the `noProxy` list.
+> [!IMPORTANT]
+> When listing multiple addresses for the `noProxy` settings, don't add a space after each comma to separate the addresses. The addresses must immediately follow the commas.
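For example, a `noProxy` value that extends the defaults to exempt an enterprise namespace might look like the following (illustrative only; `.contoso.com` is a placeholder):

```
localhost,127.0.0.1,.svc,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16,.contoso.com
```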
+ ## Next steps - Review the [Azure Arc resource bridge (preview) overview](overview.md) to understand more about requirements and technical details. - Learn about [security configuration and considerations for Azure Arc resource bridge (preview)](security-overview.md).----
+- View [troubleshooting tips for networking issues](troubleshoot-resource-bridge.md#networking-issues).
azure-arc Troubleshoot Resource Bridge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/resource-bridge/troubleshoot-resource-bridge.md
Arc resource bridge consists of an appliance VM that is deployed to the on-premi
To fix this, the credentials in the appliance VM need to be updated. For more information, see [Update credentials in the appliance VM](maintenance.md#update-credentials-in-the-appliance-vm). + ## Networking issues ### Back-off pulling image error
When trying to set the configuration for Arc resource bridge, you may receive an
This occurs when a `.local` path is provided for a configuration setting, such as proxy, dns, datastore or management endpoint (such as vCenter). Arc resource bridge appliance VM uses Azure Linux OS, which doesn't support `.local` by default. A workaround could be to provide the IP address where applicable. + ### Azure Arc resource bridge is unreachable Azure Arc resource bridge (preview) runs a Kubernetes cluster, and its control plane requires a static IP address. The IP address is specified in the `infra.yaml` file. If the IP address is assigned from a DHCP server, the address can change if not reserved. Rebooting the Azure Arc resource bridge (preview) or VM can trigger an IP address change, resulting in failing services.
When deploying the resource bridge on VMware vCenter, you specify the folder in
When deploying the resource bridge on VMware vCenter, you may get an error saying that you have insufficient permission. To resolve this issue, make sure that your user account has all of the following privileges in VMware vCenter and then try again.
-```
+
+```python
"Datastore.AllocateSpace" "Datastore.Browse" "Datastore.DeleteFile"
When deploying the resource bridge on VMware Vcenter, you may get an error sayin
"Resource.AssignVMToPool" "Resource.HotMigrate" "Resource.ColdMigrate"
+"Sessions.ValidateSession"
"StorageViews.View" "System.Anonymous" "System.Read"
If you don't see your problem here or you can't resolve your issue, try one of t
- Connect with [@AzureSupport](https://twitter.com/azuresupport), the official Microsoft Azure account for improving customer experience. Azure Support connects the Azure community to answers, support, and experts. - [Open an Azure support request](../../azure-portal/supportability/how-to-create-azure-support-request.md).+
azure-arc Agent Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/agent-release-notes.md
Download for [Windows](https://download.microsoft.com/download/0/c/7/0c7a484b-e2
Agent version 1.33 contains a fix for [CVE-2023-38176](https://msrc.microsoft.com/update-guide/en-US/vulnerability/CVE-2023-38176), a local elevation of privilege vulnerability. Microsoft recommends upgrading all agents to version 1.33 or later to mitigate this vulnerability. Azure Advisor can help you [identify servers that need to be upgraded](https://portal.azure.com/#view/Microsoft_Azure_Expert/RecommendationListBlade/recommendationTypeId/9d5717d2-4708-4e3f-bdda-93b3e6f1715b/recommendationStatus). Learn more about CVE-2023-38176 in the [Security Update Guide](https://msrc.microsoft.com/update-guide/en-US/vulnerability/CVE-2023-38176).
+### Known issue
+
+[azcmagent check](azcmagent-check.md) validates a new endpoint in this release: `<geography>-ats.his.arc.azure.com`. This endpoint is reserved for future use and not required for the Azure Connected Machine agent to operate successfully. However, if you are using a private endpoint, this endpoint will fail the network connectivity check. You can safely ignore this endpoint in the results and should instead confirm that all other endpoints are reachable.
+
+This endpoint will be removed from `azcmagent check` in a future release.
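For reference, the connectivity check runs on the server itself; a minimal sketch, with the region value as a placeholder:

```
# Sketch: validate connectivity to the required endpoints for the given region.
azcmagent check --location "<azure-region>"
```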
+ ### Fixed - Fixed an issue that could cause a VM extension to disappear in Azure Resource Manager if it's installed with the same settings twice. After upgrading to agent version 1.33 or later, reinstall any missing extensions to restore the information in Azure Resource Manager.
azure-arc Azcmagent Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/azcmagent-connect.md
azcmagent connect [authentication] --subscription-id [subscription] --resource-g
Connect a server using the default login method (interactive browser or device code). ```
-azcmagent connect --subscription "Production" --resource-group "HybridServers" --location "eastus"
+azcmagent connect --subscription-id "Production" --resource-group "HybridServers" --location "eastus"
+```
+
+```
+azcmagent connect --subscription-id "Production" --resource-group "HybridServers" --location "eastus" --use-device-code
``` Connect a server using a service principal. ```
-azcmagent connect --subscription "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee" --resource-group "HybridServers" --location "australiaeast" --service-principal-id "ID" --service-principal-secret "SECRET" --tenant-id "TENANT"
+azcmagent connect --subscription-id "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee" --resource-group "HybridServers" --location "australiaeast" --service-principal-id "ID" --service-principal-secret "SECRET" --tenant-id "TENANT"
``` Connect a server using a private endpoint and device code login method. ```
-azcmagent connect --subscription "Production" --resource-group "HybridServers" --location "koreacentral" --use-device-code --private-link-scope "/subscriptions/.../Microsoft.HybridCompute/privateLinkScopes/ScopeName"
+azcmagent connect --subscription-id "Production" --resource-group "HybridServers" --location "koreacentral" --use-device-code --private-link-scope "/subscriptions/.../Microsoft.HybridCompute/privateLinkScopes/ScopeName"
``` ## Authentication options
azure-arc License Extended Security Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/license-extended-security-updates.md
+
+ Title: License provisioning guidelines for Extended Security Updates for Windows Server 2012
+description: Learn about license provisioning guidelines for Extended Security Updates for Windows Server 2012 through Azure Arc.
Last updated : 08/18/2023+++
+# License provisioning guidelines for Extended Security Updates for Windows Server 2012
+
+Flexibility is critical when enrolling end-of-support infrastructure in Extended Security Updates (ESUs) through Azure Arc to receive critical patches. To provide flexible options across virtualization and disaster recovery scenarios, you must first provision Windows Server 2012 Arc ESU licenses and then link those licenses to your Azure Arc-enabled servers. Licenses can be provisioned and linked through the Azure portal, ARM templates, CLI, or Azure Policy.
+
+When provisioning WS2012 ESU licenses, you need to select between virtual core and physical core licensing, select between Standard and Datacenter editions, and attest to the number of associated cores (broken down by the number of 2-core and 16-core packs). To assist with this license provisioning process, this article provides general guidance and sample customer scenarios for planning your deployment of WS2012 ESUs through Azure Arc.
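As an illustrative sketch only (not from the source article), a license can be provisioned programmatically against the `Microsoft.HybridCompute/licenses` resource type, for example with `az rest`. The API version and payload field names below are assumptions and should be verified against the current reference documentation:

```azurecli
# Sketch: create a WS2012 ESU license resource. All placeholder values and the
# licenseDetails payload shape are illustrative assumptions.
az rest --method put \
  --url "https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.HybridCompute/licenses/<license-name>?api-version=2023-06-20-preview" \
  --body '{
    "location": "<region>",
    "properties": {
      "licenseDetails": {
        "state": "Activated",
        "target": "Windows Server 2012",
        "edition": "Datacenter",
        "type": "pCore",
        "processors": 16
      }
    }
  }'
```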
+
+## General guidance: Standard vs. Datacenter, Physical vs. Virtual Cores
+
+### Physical core licensing
+
+If you choose to license based on physical cores, the licensing requires a minimum of 16 physical cores per license. Most customers choose to license based on physical cores and select Standard or Datacenter edition to match their original Windows Server licensing. While Standard licensing can be applied to up to two virtual machines (VMs), Datacenter licensing has no limit to the number of VMs it can be applied to. Depending on the number of VMs covered, it may make sense to opt for the Datacenter license instead of the Standard license.
+
+### Virtual core licensing
+
+If you choose to license based on virtual cores, the licensing requires a minimum of eight virtual cores per Virtual Machine. There are two main scenarios where this model is advisable:
+
+1. If the VM is running on a third-party host or hyperscaler such as AWS, GCP, or OCI.
+
+1. If the Windows Server was licensed on a virtualization basis.
+
+An additional scenario (scenario 1, below) is a candidate for VM/Virtual core licensing when the WS2012 VMs are running on a newer Windows Server host (that is, Windows Server 2016 or later).
+
+> [!IMPORTANT]
+> In all cases, customers are required to attest to their conformance with SA or SPLA. There is no exception for these requirements. Software Assurance or an equivalent Server Subscription is required for customers to purchase Extended Security Updates on-premises and in hosted environments. Customers will be able to purchase Extended Security Updates via Enterprise Agreement (EA), Enterprise Subscription Agreement (EAS), a Server & Cloud Enrollment (SCE), and Enrollment for Education Solutions (EES). On Azure, customers do not need Software Assurance to get free Extended Security Updates, but Software Assurance or Server Subscription is required to take advantage of the Azure Hybrid Benefit.
+>
+
+## Scenario based examples: Compliant and Cost Effective Licensing
+
+### Scenario 1: Eight modern 32-core hosts (not Windows Server 2012). While each of these hosts is running four 8-core VMs, only one VM on each host is running Windows Server 2012 R2
+
+In this scenario, you can use virtual core-based licensing to avoid covering the entire host: provision eight Windows Server 2012 Standard licenses for eight virtual cores each and link each of those licenses to the VMs running Windows Server 2012 R2. Alternatively, you could consolidate your Windows Server 2012 R2 VMs onto two of the hosts to take advantage of physical core-based licensing options.
+
+### Scenario 2: A branch office with four VMs, each 8-cores, on a 32-core Windows Server 2012 Standard host
+
+In this case, you should provision two WS2012 Standard licenses for 16 physical cores each and apply them to the four Arc-enabled servers. Alternatively, you could provision four WS2012 Standard licenses for eight virtual cores each and apply them individually to the four Arc-enabled servers.
+
+### Scenario 3: Eight physical servers in retail stores, each running Standard edition with eight cores, and no virtualization
+
+In this scenario, you should apply eight WS2012 Standard licenses for 16 physical cores each and link each license to a physical server. Note that the 16 physical core minimum applies to the provisioned licenses.
+
+### Scenario 4: Multicloud environment with 12 AWS VMs, each of which has 12 cores and is running Windows Server 2012 R2 Standard
+
+In this scenario, you should apply 12 Windows Server 2012 Standard licenses with 12 virtual cores each, and link them individually to each AWS VM.
+
+### Scenario 5: Customer has already purchased the traditional Windows Server 2012 ESUs through Volume Licensing
+
+In this scenario, the Azure Arc-enabled servers that have been enrolled in Extended Security Updates through an activated MAK key are marked as enrolled in ESUs in the Azure portal. You have the flexibility to switch from this key-based traditional ESU model to WS2012 ESUs enabled by Azure Arc between Year 1 and Year 2.
+
+### Scenario 6: Migrating or retiring your Azure Arc-enabled servers enrolled in Windows Server 2012 ESUs
+
+In this scenario, you can deactivate or decommission the ESU Licenses associated with these servers. If only part of the server estate covered by a license no longer requires ESUs, you can modify the ESU license details to reduce the number of associated cores.
+
+### Scenario 7: 128-core Windows Server 2012 Datacenter server running between 10 and 15 Windows Server 2012 R2 VMs that get provisioned and deprovisioned regularly
+
+In this scenario, you should provision a Windows Server 2012 Datacenter license associated with 128 physical cores and link this license to the Arc-enabled Windows Server 2012 R2 VMs running on it. The deletion of the underlying VM also deletes the corresponding Arc-enabled server resource, enabling you to link another Arc-enabled server.
+
+### Scenario 8: An insurance customer is running a 16-node VMware cluster with 1024 cores, licensed with Windows Server Datacenter for maximum virtualization use rights. There are 120 Windows VMs ranging from 4 to 12 cores, including 44 Windows Server 2012 R2 machines with a total of 506 cores.
+
+In this scenario, the customer should purchase an Arc ESU Windows Server 2012 Datacenter edition license associated with 506 physical cores and link this license to their 44 machines. Each of the 44 machines should be onboarded to Azure Arc, and can be onboarded at scale with Arc-enabled VMware vSphere. If the customer migrates to AVS, these servers will be eligible for free WS2012 ESUs.
+
+## Next steps
+
+* Find out more about [planning for Windows Server and SQL Server end of support](https://www.microsoft.com/en-us/windows-server/extended-security-updates) and [getting Extended Security Updates](/windows-server/get-started/extended-security-updates-deploy).
+
+* Learn about best practices and design patterns through the [Azure Arc landing zone accelerator for hybrid and multicloud](/azure/cloud-adoption-framework/scenarios/hybrid/arc-enabled-servers/eslz-identity-and-access-management).
+* Learn more about [Arc-enabled servers](overview.md) and how they work with Azure through the Azure Connected Machine agent.
+* Explore options for [onboarding your machines](plan-at-scale-deployment.md) to Azure Arc-enabled servers.
azure-arc Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/policy-reference.md
Title: Built-in policy definitions for Azure Arc-enabled servers description: Lists Azure Policy built-in policy definitions for Azure Arc-enabled servers (preview). These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2023 Last updated : 08/30/2023
azure-arc Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/prerequisites.md
If two agents use the same configuration, you will encounter inconsistent behavi
Azure Arc supports the following Windows and Linux operating systems. Only x86-64 (64-bit) architectures are supported. The Azure Connected Machine agent does not run on x86 (32-bit) or ARM-based architectures.
-* Windows Server 2008 R2 SP1, 2012 R2, 2016, 2019, and 2022
+* Windows Server 2008 R2 SP1, 2012, 2012 R2, 2016, 2019, and 2022
* Both Desktop and Server Core experiences are supported * Azure Editions are supported on Azure Stack HCI * Windows 10, 11 (see [client operating system guidance](#client-operating-system-guidance))
azure-arc Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Arc-enabled servers (preview) description: Lists Azure Policy Regulatory Compliance controls available for Azure Arc-enabled servers (preview). These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/03/2023 Last updated : 08/25/2023
azure-arc Administer Arc Vmware https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/administer-arc-vmware.md
+
+ Title: Perform ongoing administration for Arc-enabled VMware vSphere
+description: Learn how to perform administrator operations related to Azure Arc-enabled VMware vSphere
+ Last updated : 08/18/2023++
+# Perform ongoing administration for Arc-enabled VMware vSphere
+
+In this article, you'll learn how to perform various administrative operations related to Azure Arc-enabled VMware vSphere (preview):
+
+- Upgrading the Azure Arc resource bridge (preview)
+- Updating the credentials
+- Collecting logs from the Arc resource bridge
+
+Each of these operations requires either the SSH key to the resource bridge VM or the kubeconfig that provides access to the Kubernetes cluster on the resource bridge VM.
+
+## Upgrading the Arc resource bridge
+
+Azure Arc-enabled VMware vSphere requires the Arc resource bridge to connect your VMware vSphere environment with Azure. Periodically, new images of Arc resource bridge will be released to include security and feature updates.
+
+> [!NOTE]
+> To upgrade the Arc resource bridge VM to the latest version, you will need to perform the onboarding again with the **same resource IDs**. This will cause some downtime as operations performed through Arc during this time might fail.
+
+To upgrade to the latest version of the resource bridge, perform the following steps:
+
+1. Copy the Azure region and resource IDs of the Arc resource bridge, custom location, and vCenter Azure resources.
+
+2. Find and delete the old Arc resource bridge **template** from your vCenter
+
+3. Download the script from the portal and update the following section in the script
+
+ ```powershell
+ $location = <Azure region of the resources>
+
+ $applianceSubscriptionId = <subscription-id>
+ $applianceResourceGroupName = <resourcegroup-name>
+ $applianceName = <resource-bridge-name>
+
+ $customLocationSubscriptionId = <subscription-id>
+ $customLocationResourceGroupName = <resourcegroup-name>
+ $customLocationName = <custom-location-name>
+
+ $vCenterSubscriptionId = <subscription-id>
+ $vCenterResourceGroupName = <resourcegroup-name>
+ $vCenterName = <vcenter-name-in-azure>
+ ```
+
+4. [Run the onboarding script](quick-start-connect-vcenter-to-arc-using-script.md#run-the-script) again with the `--force` parameter
+
+ ``` powershell-interactive
+ ./resource-bridge-onboarding-script.ps1 --force
+ ```
+
+5. [Provide the inputs](quick-start-connect-vcenter-to-arc-using-script.md#inputs-for-the-script) as prompted.
+
+6. Once the onboarding is successfully completed, the resource bridge is upgraded to the latest version.
+
+## Updating the vSphere account credentials (using a new password or a new vSphere account after onboarding)
+
+Azure Arc-enabled VMware vSphere uses the vSphere account credentials you provided during the onboarding to communicate with your vCenter server. These credentials are only persisted locally on the Arc resource bridge VM.
+
+As part of your security practices, you might need to rotate credentials for your vCenter accounts. As credentials are rotated, you must also update the credentials provided to Azure Arc to ensure the functioning of Azure Arc-enabled VMware services. You can also use the same steps in case you need to use a different vSphere account after onboarding. You must ensure the new account also has all the [required vSphere permissions](support-matrix-for-arc-enabled-vmware-vsphere.md#required-vsphere-account-privileges).
+
+There are two different sets of credentials stored on the Arc resource bridge. You can use the same account credentials for both.
+
+- **Account for Arc resource bridge**. This account is used for deploying the Arc resource bridge VM and will be used for upgrade.
+- **Account for VMware cluster extension**. This account is used to discover inventory and perform all VM operations through Azure Arc-enabled VMware vSphere
+
+To update the credentials of the account for Arc resource bridge, run the following Azure CLI commands. Run them from a workstation that can locally access the cluster configuration IP address of the Arc resource bridge:
+
+```azurecli
+az account set -s <subscription id>
+az arcappliance get-credentials -n <name of the appliance> -g <resource group name>
+az arcappliance update-infracredentials vmware --kubeconfig kubeconfig
+```
+For more details on these commands, see [`az arcappliance get-credentials`](/cli/azure/arcappliance#az-arcappliance-get-credentials) and [`az arcappliance update-infracredentials vmware`](/cli/azure/arcappliance/update-infracredentials#az-arcappliance-update-infracredentials-vmware).
++
+To update the credentials used by the VMware cluster extension on the resource bridge, run the following command. This command can be run from anywhere with the `connectedvmware` CLI extension installed.
+
+```azurecli
+az connectedvmware vcenter connect --custom-location <name of the custom location> --location <Azure region> --name <name of the vCenter resource in Azure> --resource-group <resource group for the vCenter resource> --username <username for the vSphere account> --password <password to the vSphere account>
+```
+
+## Collecting logs from the Arc resource bridge
+
+For any issues encountered with the Azure Arc resource bridge, you can collect logs for further investigation. To collect the logs, use the Azure CLI [`az arcappliance logs`](/cli/azure/arcappliance/logs#az-arcappliance-logs-vmware) command.
+
+To save the logs to a destination folder, run the following commands. These commands need connectivity to the cluster configuration IP address.
+
+```azurecli
+az account set -s <subscription id>
+az arcappliance get-credentials -n <name of the appliance> -g <resource group name>
+az arcappliance logs vmware --kubeconfig kubeconfig --out-dir <path to specified output directory>
+```
+
+If the Kubernetes cluster on the resource bridge isn't in a functional state, you can use the following commands instead. These commands require connectivity to the IP address of the Azure Arc resource bridge VM via SSH.
+
+```azurecli
+az account set -s <subscription id>
+az arcappliance get-credentials -n <name of the appliance> -g <resource group name>
+az arcappliance logs vmware --out-dir <path to specified output directory> --ip XXX.XXX.XXX.XXX
+```
+
+## Next steps
+
+- [Troubleshoot common issues related to resource bridge](../resource-bridge/troubleshoot-resource-bridge.md)
+- [Understand disaster recovery operations for resource bridge](recover-from-resource-bridge-deletion.md)
azure-arc Browse And Enable Vcenter Resources In Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/browse-and-enable-vcenter-resources-in-azure.md
Title: Enable your VMware vCenter resources in Azure description: Learn how to browse your vCenter inventory and represent a subset of your VMware vCenter resources in Azure to enable self-service. Previously updated : 09/28/2021 Last updated : 08/18/2023 # Customer intent: As a VI admin, I want to represent a subset of my vCenter resources in Azure to enable self-service.
In this section, you will enable resource pools, networks, and other non-VM reso
1. (Optional) Select **Install guest agent** and then provide the Administrator username and password of the guest operating system.
- The guest agent is the [Azure Arc connected machine agent](../servers/agent-overview.md). You can install this agent later by selecting the VM in the VM inventory view on your vCenter and selecting **Enable guest management**. For information on the prerequisites of enabling guest management, see [Manage VMware VMs through Arc-enabled VMware vSphere](manage-vmware-vms-in-azure.md).
+ The guest agent is the [Azure Arc connected machine agent](../servers/agent-overview.md). You can install this agent later by selecting the VM in the VM inventory view on your vCenter and selecting **Enable guest management**. For information on the prerequisites of enabling guest management, see [Manage VMware VMs through Arc-enabled VMware vSphere](perform-vm-ops-through-azure.md).
1. Select **Enable** to start the deployment of the VM represented in Azure.
-For information on the capabilities enabled by a guest agent, see [Manage access to VMware resources through Azure RBAC](manage-access-to-arc-vmware-resources.md).
+For information on the capabilities enabled by a guest agent, see [Manage access to VMware resources through Azure RBAC](setup-and-manage-self-service-access.md).
## Next steps

-- [Manage access to VMware resources through Azure RBAC](manage-access-to-arc-vmware-resources.md).
+- [Manage access to VMware resources through Azure RBAC](setup-and-manage-self-service-access.md).
azure-arc Day2 Operations Resource Bridge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/day2-operations-resource-bridge.md
- Title: Perform ongoing administration for Arc-enabled VMware vSphere
-description: Learn how to perform day 2 administrator operations related to Azure Arc-enabled VMware vSphere
- Previously updated : 09/15/2022---
-# Perform ongoing administration for Arc-enabled VMware vSphere
-
-In this article, you'll learn how to perform various administrative operations related to Azure Arc-enabled VMware vSphere (preview):
-- Upgrading the Azure Arc resource bridge (preview)
-- Updating the credentials
-- Collecting logs from the Arc resource bridge
-
-Each of these operations requires either SSH key to the resource bridge VM or the kubeconfig that provides access to the Kubernetes cluster on the resource bridge VM.
-
-## Upgrading the Arc resource bridge
-
-Azure Arc-enabled VMware vSphere requires the Arc resource bridge to connect your VMware vSphere environment with Azure. Periodically, new images of Arc resource bridge will be released to include security and feature updates.
-
-> [!NOTE]
-> To upgrade the Arc resource bridge VM to the latest version, you will need to perform the onboarding again with the **same resource IDs**. This will cause some downtime as operations performed through Arc during this time might fail.
-
-To upgrade to the latest version of the resource bridge, perform the following steps:
-
-1. Copy the Azure region and resource IDs of the Arc resource bridge, custom location and vCenter Azure resources
-
-2. Find and delete the old Arc resource bridge **template** from your vCenter
-
-3. Download the script from the portal and update the following section in the script
-
- ```powershell
- $location = <Azure region of the resources>
-
- $applianceSubscriptionId = <subscription-id>
- $applianceResourceGroupName = <resourcegroup-name>
- $applianceName = <resource-bridge-name>
-
- $customLocationSubscriptionId = <subscription-id>
- $customLocationResourceGroupName = <resourcegroup-name>
- $customLocationName = <custom-location-name>
-
- $vCenterSubscriptionId = <subscription-id>
- $vCenterResourceGroupName = <resourcegroup-name>
- $vCenterName = <vcenter-name-in-azure>
- ```
-
-4. [Run the onboarding script](quick-start-connect-vcenter-to-arc-using-script.md#run-the-script) again with the `--force` parameter
-
- ``` powershell-interactive
- ./resource-bridge-onboarding-script.ps1 --force
- ```
-
-5. [Provide the inputs](quick-start-connect-vcenter-to-arc-using-script.md#inputs-for-the-script) as prompted.
-
-6. Once the onboarding is successfully completed, the resource bridge is upgraded to the latest version.
-
-## Updating the vSphere account credentials (using a new password or a new vSphere account after onboarding)
-
-Azure Arc-enabled VMware vSphere uses the vSphere account credentials you provided during the onboarding to communicate with your vCenter server. These credentials are only persisted locally on the Arc resource bridge VM.
-
-As part of your security practices, you might need to rotate credentials for your vCenter accounts. As credentials are rotated, you must also update the credentials provided to Azure Arc to ensure the functioning of Azure Arc-enabled VMware services. You can also use the same steps in case you need to use a different vSphere account after onboarding. You must ensure the new account also has all the [required vSphere permissions](support-matrix-for-arc-enabled-vmware-vsphere.md#required-vsphere-account-privileges).
-
-There are two different sets of credentials stored on the Arc resource bridge. You can use the same account credentials for both.
-
-- **Account for Arc resource bridge**. This account is used for deploying the Arc resource bridge VM and will be used for upgrade.
-- **Account for VMware cluster extension**. This account is used to discover inventory and perform all VM operations through Azure Arc-enabled VMware vSphere
-
-To update the credentials of the account for Arc resource bridge, run the following Azure CLI commands . Run the commands from a workstation that can access cluster configuration IP address of the Arc resource bridge locally:
-
-```azurecli
-az account set -s <subscription id>
-az arcappliance get-credentials -n <name of the appliance> -g <resource group name>
-az arcappliance update-infracredentials vmware --kubeconfig kubeconfig
-```
-For more details on the commands see [`az arcappliance get-credentials`](/cli/azure/arcappliance#az-arcappliance-get-credentials) and [`az arcappliance update-infracredentials vmware`](/cli/azure/arcappliance/update-infracredentials#az-arcappliance-update-infracredentials-vmware).
--
-To update the credentials used by the VMware cluster extension on the resource bridge. This command can be run from anywhere with `connectedvmware` CLI extension installed.
-
-```azurecli
-az connectedvmware vcenter connect --custom-location <name of the custom location> --location <Azure region> --name <name of the vCenter resource in Azure> --resource-group <resource group for the vCenter resource> --username <username for the vSphere account> --password <password to the vSphere account>
-```
-
-## Collecting logs from the Arc resource bridge
-
-For any issues encountered with the Azure Arc resource bridge, you can collect logs for further investigation. To collect the logs, use the Azure CLI [`Az arcappliance log`](/cli/azure/arcappliance/logs#az-arcappliance-logs-vmware) command.
-
-To save the logs to a destination folder, run the following commands. These commands need connectivity to cluster configuration IP address.
-
-```azurecli
-az account set -s <subscription id>
-az arcappliance get-credentials -n <name of the appliance> -g <resource group name>
-az arcappliance logs vmware --kubeconfig kubeconfig --out-dir <path to specified output directory>
-```
-
-If the Kubernetes cluster on the resource bridge isn't in functional state, you can use the following commands. These commands require connectivity to IP address of the Azure Arc resource bridge VM via SSH
-
-```azurecli
-az account set -s <subscription id>
-az arcappliance get-credentials -n <name of the appliance> -g <resource group name>
-az arcappliance logs vmware --out-dir <path to specified output directory> --ip XXX.XXX.XXX.XXX
-```
-
-## Next steps
-
-- [Troubleshoot common issues related to resource bridge](../resource-bridge/troubleshoot-resource-bridge.md)
-- [Understand disaster recovery operations for resource bridge](disaster-recovery.md)
azure-arc Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/disaster-recovery.md
- Title: Perform disaster recovery operations
-description: Learn how to perform recovery operations for the Azure Arc resource bridge VM in Azure Arc-enabled VMware vSphere disaster scenarios.
-- Previously updated : 08/16/2022--
-# Recover from accidental deletion of resource bridge VM
-
-In this article, you'll learn how to recover the Azure Arc resource bridge (preview) connection into a working state in disaster scenarios such as accidental deletion. In such cases, the connection between on-premises infrastructure and Azure is lost and any operations performed through Arc will fail.
-
-## Recovering the Arc resource bridge in case of VM deletion
-
-To recover from Arc resource bridge VM deletion, you need to deploy a new resource bridge with the same resource ID as the current resource bridge using the following steps.
-
-1. Copy the Azure region and resource IDs of the Arc resource bridge, custom location, and vCenter Azure resources.
-
-2. Find and delete the old Arc resource bridge template from your vCenter.
-
-3. Download the [onboarding script](../vmware-vsphere/quick-start-connect-vcenter-to-arc-using-script.md#download-the-onboarding-script) from the Azure portal and update the following section in the script, using the same information as the original resources in Azure.
-
- ```powershell
- $location = <Azure region of the resources>
- $applianceSubscriptionId = <subscription-id>
- $applianceResourceGroupName = <resource-group-name>
- $applianceName = <resource-bridge-name>
-
- $customLocationSubscriptionId = <subscription-id>
- $customLocationResourceGroupName = <resource-group-name>
- $customLocationName = <custom-location-name>
-
- $vCenterSubscriptionId = <subscription-id>
- $vCenterResourceGroupName = <resource-group-name>
- $vCenterName = <vcenter-name-in-azure>
- ```
-
-4. [Run the onboarding script](../vmware-vsphere/quick-start-connect-vcenter-to-arc-using-script.md#run-the-script) again with the `--force` parameter.
-
- ``` powershell-interactive
- ./resource-bridge-onboarding-script.ps1 --force
- ```
-
-5. [Provide the inputs](../vmware-vsphere/quick-start-connect-vcenter-to-arc-using-script.md#inputs-for-the-script) as prompted.
-
-6. Once the script successfully finishes, the resource bridge should be recovered, and the previously disconnected Arc-enabled resources will be manageable in Azure again.
-
-## Next steps
-
-[Troubleshoot Azure Arc resource bridge (preview) issues](../resource-bridge/troubleshoot-resource-bridge.md)
-
-If the recovery steps mentioned above are unsuccessful in restoring Arc resource bridge to its original state, try one of the following channels for support:
-
-- Get answers from Azure experts through [Microsoft Q&A](/answers/topics/azure-arc.html).
-- Connect with [@AzureSupport](https://twitter.com/azuresupport), the official Microsoft Azure account for improving customer experience. Azure Support connects the Azure community to answers, support, and experts.
-- [Open an Azure support request](../../azure-portal/supportability/how-to-create-azure-support-request.md).
azure-arc Enable Guest Management At Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/enable-guest-management-at-scale.md
+
+ Title: Install Arc agent at scale for your VMware VMs
+description: Learn how to enable guest management at scale for Arc enabled VMware vSphere VMs.
+ Last updated : 08/21/2023+
+#Customer intent: As an IT infra admin, I want to install arc agents to use Azure management services for VMware VMs.
++
+# Install Arc agents at scale for your VMware VMs
+
+In this article, you will learn how to install Arc agents at scale for VMware VMs and use Azure management capabilities.
+
+## Prerequisites
+
+Ensure the following before you install Arc agents at scale for VMware VMs:
+
+- The resource bridge must be in a running state.
+- The vCenter must be in a connected state.
+- The user account must have the permissions listed in the Azure Arc VMware Administrator role.
+- All the target machines are:
+ - Powered on and the resource bridge has network connectivity to the host running the VM.
+ - Running a [supported operating system](../servers/prerequisites.md#supported-operating-systems).
+ - Able to connect through the firewall to communicate over the internet, and [these URLs](../servers/network-requirements.md#urls) aren't blocked.
+
+ > [!NOTE]
+ > If you're using a Linux VM, the account must not prompt for login on sudo commands. To override the prompt, from a terminal, run `sudo visudo`, and add `<username> ALL=(ALL) NOPASSWD:ALL` at the end of the file. Ensure you replace `<username>`. <br> <br>If your VM template has these changes incorporated, you won't need to do this for the VM created from that template.
+
+## Install Arc agents at scale from portal
+
+An admin can install agents for multiple machines from the Azure portal if the machines share the same administrator credentials.
+
+1. Navigate to the **Azure Arc center**, and select your **vCenter resource**.
+
+2. Select all the machines, and choose the **Enable in Azure** option.
+
+3. Select the **Enable guest management** checkbox to install Arc agents on the selected machines.
+
+4. If you want to connect the Arc agent via proxy, provide the proxy server details.
+
+5. Provide the administrator username and password for the machines.
+
+> [!NOTE]
+> For Windows VMs, the account must be a member of the local administrators group; for Linux VMs, it must be the root account.
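+
+If you prefer to script this operation, the `connectedvmware` Azure CLI extension provides `az connectedvmware vm guest-agent enable`, which enables guest management on one VM at a time. The following is a minimal sketch that loops over several VMs; the VM names, resource group, and credentials are hypothetical placeholders you must replace.
+
+```azurecli
+# Assumes: az extension add --name connectedvmware, and an az login session.
+# The account must be a local administrator (Windows) or root (Linux).
+for vm in vm-app-01 vm-app-02 vm-app-03
+do
+  az connectedvmware vm guest-agent enable \
+    --resource-group contoso-rg \
+    --vm-name "$vm" \
+    --username "vm-admin" \
+    --password "$VM_ADMIN_PASSWORD"
+done
+```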
++
+## Next steps
+
+[Set up and manage self-service access to VMware resources through Azure RBAC](setup-and-manage-self-service-access.md).
azure-arc Manage Access To Arc Vmware Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/manage-access-to-arc-vmware-resources.md
- Title: Manage access to VMware resources through Azure Role-Based Access Control
-description: Learn how to manage access to your on-premises VMware resources through Azure Role-Based Access Control (RBAC).
- Previously updated : 11/08/2021-
-#Customer intent: As a VI admin, I want to manage access to my vCenter resources in Azure so that I can keep environments secure
--
-# Manage access to VMware resources through Azure Role-Based Access Control
-
-Once your VMware vCenter resources have been enabled in Azure, the final step in setting up a self-service experience for your teams is to provide them access. This article describes how to use built-in roles to manage granular access to VMware resources through Azure and allow your teams to deploy and manage VMs.
-
-## Arc-enabled VMware vSphere built-in roles
-
-There are three built-in roles to meet your access control requirements. You can apply these roles to a whole subscription, resource group, or a single resource.
-
-- **Azure Arc VMware Administrator** role - used by administrators
-
-- **Azure Arc VMware Private Cloud User** role - used by anyone who needs to deploy and manage VMs
-
-- **Azure Arc VMware VM Contributor** role - used by anyone who needs to deploy and manage VMs
-
-### Azure Arc VMware Administrator role
-
-The **Azure Arc VMware Administrator** role is a built-in role that provides permissions to perform all possible operations for the `Microsoft.ConnectedVMwarevSphere` resource provider. Assign this role to users or groups that are administrators managing Azure Arc-enabled VMware vSphere deployment.
-
-### Azure Arc VMware Private Cloud User role
-
-The **Azure Arc VMware Private Cloud User** role is a built-in role that provides permissions to use the VMware vSphere resources made accessible through Azure. Assign this role to any users or groups that need to deploy, update, or delete VMs.
-
-We recommend assigning this role at the individual resource pool (or host or cluster), virtual network, or template with which you want the user to deploy VMs.
-
-### Azure Arc VMware VM Contributor
-
-The **Azure Arc VMware VM Contributor** role is a built-in role that provides permissions to conduct all VMware virtual machine operations. Assign this role to any users or groups that need to deploy, update, or delete VMs.
-
-We recommend assigning this role for the subscription or resource group to which you want the user to deploy VMs.
-
-## Assigning the roles to users/groups
-
-1. Go to the [Azure portal](https://portal.azure.com).
-
-2. Search and navigate to the subscription, resource group, or the resource at which scope you want to provide this role.
-
-3. To find the Arc-enabled VMware vSphere resources like resource pools, clusters, hosts, datastores, networks, or virtual machine templates:
- 1. navigate to the resource group and select the **Show hidden types** checkbox.
- 2. search for *"VMware"*.
-
-4. Click on **Access control (IAM)** in the table of contents on the left.
-
-5. Click on **Add role assignments** on the **Grant access to this resource**.
-
-6. Select the custom role you want to assign (one of **Azure Arc VMware Administrator**, **Azure Arc VMware Private Cloud User**, or **Azure Arc VMware VM Contributor**).
-
-7. Search for the Azure Active Directory (Azure AD) user or group to which you want to assign this role.
-
-8. Select the Azure AD user or group name. Repeat this for each user or group to which you want to grant this permission.
-
-9. Repeat the above steps for each scope and role.
-
-## Next steps
-
-- [Create a VM using Azure Arc-enabled vSphere](quick-start-create-a-vm.md).
azure-arc Manage Vmware Vms In Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/manage-vmware-vms-in-azure.md
- Title: Manage VMware virtual machines Azure Arc
-description: Learn how to view the operations that you can do on VMware virtual machines and install the Log Analytics agent.
- Previously updated : 11/10/2021---
-# Manage VMware VMs in Azure through Arc-enabled VMware vSphere
-
-In this article, you will learn how to perform various operations on the Azure Arc-enabled VMware vSphere (preview) VMs such as:
-
-- Start, stop, and restart a VM
-
-- Control access and add Azure tags
-
-- Add, remove, and update network interfaces
-
-- Add, remove, and update disks and update VM size (CPU cores, memory)
-
-- Enable guest management
-
-- Install extensions (enabling guest management is required)
-
-
-To perform guest OS operations on Arc-enabled VMs, you must enable guest management on the VMs. When you enable guest management, the Arc Connected Machine Agent is installed on the VM.
-
-## Supported extensions and management services
-
-### Windows extensions
-
-|Extension |Publisher |Type |
-|-|-|--|
-|Custom Script extension |Microsoft.Compute | CustomScriptExtension |
-|Log Analytics agent |Microsoft.EnterpriseCloud.Monitoring |MicrosoftMonitoringAgent |
-|Azure Automation Hybrid Runbook Worker extension (preview) |Microsoft.Compute | HybridWorkerForWindows|
--
-### Linux extensions
-
-|Extension |Publisher |Type |
-|-|-|--|
-|Custom Script extension |Microsoft.Azure.Extensions |CustomScript |
-|Log Analytics agent |Microsoft.EnterpriseCloud.Monitoring |OmsAgentForLinux |
-|Azure Automation Hybrid Runbook Worker extension (preview) | Microsoft.Compute | HybridWorkerForLinux|
-
-## Enable guest management
-
-Before you can install an extension, you must enable guest management on the VMware VM.
-
-1. Make sure your target machine:
-
- - is running a [supported operating system](../servers/prerequisites.md#supported-operating-systems).
-
- - is able to connect through the firewall to communicate over the internet and these [URLs](../servers/network-requirements.md#urls) are not blocked.
-
- - has VMware tools installed and running.
-
- - is powered on and the resource bridge has network connectivity to the host running the VM.
-
- >[!NOTE]
- >If you're using a Linux VM, the account must not prompt for login on sudo commands. To override the prompt, from a terminal, run `sudo visudo` and add `<username> ALL=(ALL) NOPASSWD:ALL` to the end of the file. Make sure to replace `<username>`.
- >
- >If your VM template has these changes incorporated, you won't need to do this for the VM created from that template.
-
-1. From your browser, go to the [Azure portal](https://portal.azure.com).
-
-2. Search for and select the VMware VM and select **Configuration**.
-
-3. Select **Enable guest management** and provide the administrator username and password to enable guest management. Then select **Apply**.
-
- For Linux, use the root account, and for Windows, use an account that is a member of the Local Administrators group.
-
-## Install the LogAnalytics extension
-
-1. From your browser, go to the [Azure portal](https://portal.azure.com).
-
-1. Search for and select the VMware VM that you want to install extension.
-
-1. Navigate to **Extensions** and select **Add**.
-
-1. Select the extension you want to install. Based on the extension, you'll need to provide configuration details, such as the workspace ID and primary key for Log Analytics extension. Then select **Review + create**.
-
-The deployment starts the installation of the extension on the selected VM.
-
-## Delete a VM
-
-If you no longer need the VM, you can delete it.
-
-1. From your browser, go to the [Azure portal](https://portal.azure.com).
-
-2. Search for and select the VM you want to delete.
-
-3. In the single VM view, select on **Delete**.
-
-4. When prompted, confirm that you want to delete it.
-
->[!NOTE]
->This also deletes the VM in your VMware vCenter.
-
-## Next steps
-
-[Create a VM using Azure Arc-enabled vSphere](quick-start-create-a-vm.md)
azure-arc Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/overview.md
Title: What is Azure Arc-enabled VMware vSphere (preview)? description: Azure Arc-enabled VMware vSphere (preview) extends Azure governance and management capabilities to VMware vSphere infrastructure and delivers a consistent management experience across both platforms. Previously updated : 09/15/2022 Last updated : 08/21/2023 # What is Azure Arc-enabled VMware vSphere (preview)?
-Azure Arc-enabled VMware vSphere (preview) extends Azure governance and management capabilities to VMware vSphere infrastructure. With Azure Arc-enabled VMware vSphere, you get a consistent management experience across Azure and VMware vSphere infrastructure.
+Azure Arc-enabled VMware vSphere (preview) is an [Azure Arc](../overview.md) service that helps you simplify management of a hybrid IT estate distributed across VMware vSphere and Azure. It does so by extending the Azure control plane to VMware vSphere infrastructure and enabling the use of Azure security, governance, and management capabilities consistently across VMware vSphere and Azure.
Arc-enabled VMware vSphere (preview) allows you to:

-- Perform various VMware virtual machine (VM) lifecycle operations directly from Azure, such as create, start/stop, resize, and delete.
+- Discover your VMware vSphere estate (VMs, templates, networks, datastores, clusters/hosts/resource pools) and register resources with Arc at scale.
+
+- Perform various virtual machine (VM) operations directly from Azure, such as create, resize, and delete, as well as power cycle operations (start, stop, and restart) on VMware VMs, consistent with the Azure experience.
- Empower developers and application teams to self-serve VM operations on-demand using [Azure role-based access control](../../role-based-access-control/overview.md) (RBAC).

-- Browse your VMware vSphere resources (VMs, templates, networks, and storage) in Azure, providing you a single pane view for your infrastructure across both environments. You can also discover and onboard existing VMware VMs to Azure.
+- Install the Arc-connected machine agent at scale on VMware VMs to [govern, protect, configure, and monitor](../servers/overview.md#supported-cloud-operations) them.
+
+- Browse your VMware vSphere resources (VMs, templates, networks, and storage) in Azure, providing you with a single pane view for your infrastructure across both environments.
+
+## Onboard resources to Azure management at scale
+
+Azure services such as Microsoft Defender for Cloud, Azure Monitor, Azure Update Management Center, and Azure Policy provide a rich set of capabilities to secure, monitor, patch, and govern off-Azure resources via Arc.
+
+By using Arc-enabled VMware vSphere's capabilities to discover your VMware estate and install the Arc agent at scale, you can simplify onboarding your entire VMware vSphere estate to these services.
+
+## Set up self-service access for your teams to use vSphere resources using Azure Arc
+
+Arc-enabled VMware vSphere extends Azure's control plane (Azure Resource Manager) to VMware vSphere infrastructure. This enables you to use Azure AD-based identity management, granular Azure RBAC, and ARM templates to help your app teams and developers get self-service access to provision and manage VMs in the VMware vSphere environment, providing greater agility.
+
+1. Virtualized Infrastructure Administrators/Cloud Administrators can connect a vCenter instance to Azure.
+
+2. Administrators can then use the Azure portal to browse VMware vSphere inventory and register virtual machines, resource pools, networks, and templates into Azure.
-- Conduct governance and monitoring operations across Azure and VMware VMs by enabling guest management (installing the [Azure Arc-enabled servers Connected Machine agent](../servers/agent-overview.md)).
+3. Administrators can provide app teams/developers fine-grained permissions on those VMware resources through Azure RBAC.
+
+4. App teams can use Azure interfaces (portal, CLI, or REST API) to manage the lifecycle of on-premises VMs they use for deploying their applications (CRUD, Start/Stop/Restart), as shown in the sketch after this list.
+
+5. App teams can use ARM templates/Bicep (Infrastructure as Code) to deploy VMs as part of CI/CD pipelines.
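+
+As an illustration of step 4, the following is a minimal sketch of creating a VM through the `connectedvmware` Azure CLI extension; the resource names, custom location, and template are hypothetical placeholders, and parameter availability may vary with the extension version.
+
+```azurecli
+# Assumes: az extension add --name connectedvmware, and a vCenter already connected to Azure Arc.
+az connectedvmware vm create \
+  --name "contoso-app-vm" \
+  --resource-group "contoso-rg" \
+  --location "eastus" \
+  --custom-location "contoso-vcenter-cl" \
+  --vm-template "ubuntu-20-template" \
+  --resource-pool "contoso-resource-pool"
+```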
## How does it work?
-To deliver this experience, you need to deploy the [Azure Arc resource bridge](../resource-bridge/overview.md) (preview), which is a virtual appliance, in your vSphere environment. It connects your vCenter Server to Azure. Azure Arc resource bridge (preview) enables you to represent the VMware resources in Azure and do various operations on them.
+Arc-enabled VMware vSphere provides these capabilities by integrating with your VMware vCenter Server. To connect your VMware vCenter Server to Azure Arc, you need to deploy the [Azure Arc resource bridge](../resource-bridge/overview.md) (preview) in your vSphere environment. Azure Arc resource bridge is a virtual appliance that hosts the components that communicate with your vCenter Server and Azure.
-## Supported VMware vSphere versions
+When a VMware vCenter Server is connected to Azure, an automatic discovery of the inventory of vSphere resources is performed. This inventory data is continuously kept in sync with the vCenter Server.
-Azure Arc-enabled VMware vSphere (preview) works with vCenter Server versions 6.7, 7 and 8.
+All guest OS-based capabilities are provided by enabling guest management (installing the Arc agent) on the VMs. Once guest management is enabled, VM extensions can be installed to use the Azure management capabilities. You can perform virtual hardware operations such as resizing, deleting, adding disks, and power cycling without guest management enabled.
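+
+For example, a resize is a single control-plane operation that doesn't require any agent inside the guest OS. The following is a minimal sketch using the `connectedvmware` Azure CLI extension; the VM name, resource group, and sizes are hypothetical placeholders, and parameter names may vary with the extension version.
+
+```azurecli
+# Resize CPU and memory on an Arc-enabled VMware VM; no guest management needed.
+az connectedvmware vm update \
+  --resource-group "contoso-rg" \
+  --name "contoso-app-vm" \
+  --num-CPUs 4 \
+  --memory-size 8192
+```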
-> [!NOTE]
-> Azure Arc-enabled VMware vSphere (preview) supports vCenters with a maximum of 9500 VMs. If your vCenter has more than 9500 VMs, it is not recommended to use Arc-enabled VMware vSphere with it at this point.
+## How is Arc-enabled VMware vSphere different from Arc-enabled Servers
-## Supported scenarios
+The easiest way to think of this is as follows:
-The following scenarios are supported in Azure Arc-enabled VMware vSphere (preview):
+- Azure Arc-enabled servers interact on the guest operating system level, with no awareness of the underlying infrastructure fabric and the virtualization platform that they're running on. Since Arc-enabled servers also support bare-metal machines, there may, in fact, not even be a host hypervisor in some cases.
-- Virtualized Infrastructure Administrators/Cloud Administrators can connect a vCenter instance to Azure and browse the VMware virtual machine inventory in Azure.
+- Azure Arc-enabled VMware vSphere is a superset of Arc-enabled servers that extends management capabilities beyond the guest operating system to the VM itself. This provides lifecycle management and CRUD (Create, Read, Update, and Delete) operations on a VMware vSphere VM. These lifecycle management capabilities are exposed in the Azure portal and look and feel just like a regular Azure VM. Azure Arc-enabled VMware vSphere also provides guest operating system management; in fact, it uses the same components as Azure Arc-enabled servers.
-- Administrators can use the Azure portal to browse VMware vSphere inventory and register virtual machines resource pools, networks, and templates into Azure. They can also enable guest management on many registered virtual machines at once.
+You have the flexibility to start with either option and incorporate the other later without any disruption. Both options provide the same consistent experience.
-- Administrators can provide app teams/developers fine-grained permissions on those VMware resources through Azure RBAC.

-- App teams can use Azure interfaces (portal, CLI, or REST API) to manage the lifecycle of on-premises VMs they use for deploying their applications (CRUD, Start/Stop/Restart).
+## Supported VMware vSphere versions
-- App teams and administrators can install extensions such as the Log Analytics agent, Custom Script Extension, Dependency Agent, and Azure Automation Hybrid Runbook Worker extension on the virtual machines and do operations supported by the extensions.
+Azure Arc-enabled VMware vSphere (preview) currently works with vCenter Server versions 6.7, 7, and 8.
+> [!NOTE]
+> Azure Arc-enabled VMware vSphere (preview) supports vCenters with a maximum of 9500 VMs. If your vCenter has more than 9500 VMs, we don't recommend using Arc-enabled VMware vSphere with it at this point.
## Supported regions

You can use Azure Arc-enabled VMware vSphere (preview) in these supported regions:
-
- Australia East
- Canada Central
- East US
+- East US 2
+- North Europe
- Southeast Asia - UK South - West Europe
+- West US 2
+- West US 3
+
+For the most up-to-date information about region availability of Azure Arc-enabled VMware vSphere, see [Azure Products by Region](https://azure.microsoft.com/explore/global-infrastructure/products-by-region/?products=azure-arc&regions=all) page.
-For the most up-to-date information about region availability of Azure Arc-enabled VMware vSphere, see [Azure Products by Region](https://azure.microsoft.com/explore/global-infrastructure/products-by-region/?products=azure-arc&regions=all) page
## Data Residency
-Azure Arc-enabled VMware vSphere doesn't store/process customer data outside the region the customer deploys the service instance in.
+Azure Arc-enabled VMware vSphere doesn't store/process customer data outside the region the customer deploys the service instance in.
## Next steps

-- [Connect VMware vCenter to Azure Arc using the helper script](quick-start-connect-vcenter-to-arc-using-script.md).
-- View the [support matrix for Arc-enabled VMware vSphere](support-matrix-for-arc-enabled-vmware-vsphere.md).
+- Plan your resource bridge deployment by reviewing the [support matrix for Arc-enabled VMware vSphere](support-matrix-for-arc-enabled-vmware-vsphere.md).
+- Once ready, [connect VMware vCenter to Azure Arc using the helper script](quick-start-connect-vcenter-to-arc-using-script.md).
- Try out Arc-enabled VMware vSphere by using the [Azure Arc Jumpstart](https://azurearcjumpstart.io/azure_arc_jumpstart/azure_arc_vsphere/).
azure-arc Perform Vm Ops Through Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/perform-vm-ops-through-azure.md
+
+ Title: Perform VM operations on VMware VMs through Azure
+description: Learn how to view the operations that you can do on VMware virtual machines and install the Log Analytics agent.
+ Last updated : 08/18/2023++
+# Manage VMware VMs in Azure through Arc-enabled VMware vSphere
+
+In this article, you will learn how to perform various operations on Azure Arc-enabled VMware vSphere (preview) VMs, such as:
+
+- Start, stop, and restart a VM
+
+- Control access and add Azure tags
+
+- Add, remove, and update network interfaces
+
+- Add, remove, and update disks and update VM size (CPU cores, memory)
+
+- Enable guest management
+
+- Install extensions (enabling guest management is required)
++
+To perform guest OS operations on Arc-enabled VMs, you must enable guest management on the VMs. When you enable guest management, the Arc Connected Machine Agent is installed on the VM.
+
+## Supported extensions and management services
+
+### Windows extensions
+
+|Extension |Publisher |Type |
+|-|-|--|
+|Custom Script extension |Microsoft.Compute | CustomScriptExtension |
+|Log Analytics agent |Microsoft.EnterpriseCloud.Monitoring |MicrosoftMonitoringAgent |
+|Azure Automation Hybrid Runbook Worker extension (preview) |Microsoft.Compute | HybridWorkerForWindows|
++
+### Linux extensions
+
+|Extension |Publisher |Type |
+|-|-|--|
+|Custom Script extension |Microsoft.Azure.Extensions |CustomScript |
+|Log Analytics agent |Microsoft.EnterpriseCloud.Monitoring |OmsAgentForLinux |
+|Azure Automation Hybrid Runbook Worker extension (preview) | Microsoft.Compute | HybridWorkerForLinux|
+
+## Enable guest management
+
+Before you can install an extension, you must enable guest management on the VMware VM.
+
+1. Make sure your target machine:
+
+ - is running a [supported operating system](../servers/prerequisites.md#supported-operating-systems).
+
+ - is able to connect through the firewall to communicate over the internet and these [URLs](../servers/network-requirements.md#urls) are not blocked.
+
+ - has VMware tools installed and running.
+
+ - is powered on and the resource bridge has network connectivity to the host running the VM.
+
+ >[!NOTE]
+ >If you're using a Linux VM, the account must not prompt for login on sudo commands. To override the prompt, from a terminal, run `sudo visudo` and add `<username> ALL=(ALL) NOPASSWD:ALL` to the end of the file. Make sure to replace `<username>`.
+ >
+ >If your VM template has these changes incorporated, you won't need to do this for the VM created from that template.
+
+1. From your browser, go to the [Azure portal](https://portal.azure.com).
+
+2. Search for and select the VMware VM and select **Configuration**.
+
+3. Select **Enable guest management** and provide the administrator username and password to enable guest management. Then select **Apply**.
+
+ For Linux, use the root account, and for Windows, use an account that is a member of the Local Administrators group.
+
+## Install the LogAnalytics extension
+
+1. From your browser, go to the [Azure portal](https://portal.azure.com).
+
+1. Search for and select the VMware VM on which you want to install the extension.
+
+1. Navigate to **Extensions** and select **Add**.
+
+1. Select the extension you want to install. Based on the extension, you'll need to provide configuration details, such as the workspace ID and primary key for Log Analytics extension. Then select **Review + create**.
+
+The deployment starts the installation of the extension on the selected VM.
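+
+Because guest management installs the same Connected Machine agent that Arc-enabled servers use, extensions can also be managed from the command line with the `connectedmachine` Azure CLI extension. The following is a minimal sketch for the Log Analytics agent on a Windows VM; the machine name, resource group, region, and workspace values are hypothetical placeholders.
+
+```azurecli
+# Assumes: az extension add --name connectedmachine, and guest management already enabled on the VM.
+# Windows example; for Linux, use type OmsAgentForLinux with the same publisher.
+az connectedmachine extension create \
+  --machine-name "contoso-app-vm" \
+  --resource-group "contoso-rg" \
+  --location "eastus" \
+  --name "MicrosoftMonitoringAgent" \
+  --publisher "Microsoft.EnterpriseCloud.Monitoring" \
+  --type "MicrosoftMonitoringAgent" \
+  --settings '{"workspaceId":"<log-analytics-workspace-id>"}' \
+  --protected-settings '{"workspaceKey":"<log-analytics-workspace-key>"}'
+```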
+
+## Delete a VM
+
+If you no longer need the VM, you can delete it.
+
+1. From your browser, go to the [Azure portal](https://portal.azure.com).
+
+2. Search for and select the VM you want to delete.
+
+3. In the single VM view, select **Delete**.
+
+4. When prompted, confirm that you want to delete it.
+
+>[!NOTE]
+>This also deletes the VM in your VMware vCenter.
+
+## Next steps
+
+[Create a VM using Azure Arc-enabled vSphere](quick-start-create-a-vm.md)
azure-arc Quick Start Create A Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/quick-start-create-a-vm.md
Title: Create a virtual machine on VMware vCenter using Azure Arc description: In this quickstart, you'll learn how to create a virtual machine on VMware vCenter using Azure Arc Previously updated : 09/29/2021 Last updated : 08/18/2023 # Customer intent: As a self-service user, I want to provision a VM using vCenter resources through Azure so that I can deploy my code
Once your administrator has connected a VMware vCenter to Azure, represented VMw
## Next steps

-- [Perform operations on VMware VMs in Azure](manage-vmware-vms-in-azure.md)
+- [Perform operations on VMware VMs in Azure](perform-vm-ops-through-azure.md)
azure-arc Recover From Resource Bridge Deletion https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/recover-from-resource-bridge-deletion.md
+
+ Title: Perform disaster recovery operations
+description: Learn how to perform recovery operations for the Azure Arc resource bridge VM in Azure Arc-enabled VMware vSphere disaster scenarios.
++ Last updated : 08/18/2023++
+# Recover from accidental deletion of resource bridge VM
+
+In this article, you'll learn how to recover the Azure Arc resource bridge (preview) connection into a working state in disaster scenarios such as accidental deletion. In such cases, the connection between on-premises infrastructure and Azure is lost and any operations performed through Arc will fail.
+
+## Recovering the Arc resource bridge in case of VM deletion
+
+To recover from Arc resource bridge VM deletion, you need to deploy a new resource bridge with the same resource ID as the current resource bridge using the following steps.
+
+1. Copy the Azure region and resource IDs of the Arc resource bridge, custom location, and vCenter Azure resources.
+
+2. Find and delete the old Arc resource bridge template from your vCenter.
+
+3. Download the [onboarding script](../vmware-vsphere/quick-start-connect-vcenter-to-arc-using-script.md#download-the-onboarding-script) from the Azure portal and update the following section in the script, using the same information as the original resources in Azure.
+
+ ```powershell
+ $location = <Azure region of the resources>
+ $applianceSubscriptionId = <subscription-id>
+ $applianceResourceGroupName = <resource-group-name>
+ $applianceName = <resource-bridge-name>
+
+ $customLocationSubscriptionId = <subscription-id>
+ $customLocationResourceGroupName = <resource-group-name>
+ $customLocationName = <custom-location-name>
+
+ $vCenterSubscriptionId = <subscription-id>
+ $vCenterResourceGroupName = <resource-group-name>
+ $vCenterName = <vcenter-name-in-azure>
+ ```
+
+4. [Run the onboarding script](../vmware-vsphere/quick-start-connect-vcenter-to-arc-using-script.md#run-the-script) again with the `--force` parameter.
+
+ ``` powershell-interactive
+ ./resource-bridge-onboarding-script.ps1 --force
+ ```
+
+5. [Provide the inputs](../vmware-vsphere/quick-start-connect-vcenter-to-arc-using-script.md#inputs-for-the-script) as prompted.
+
+6. Once the script successfully finishes, the resource bridge should be recovered, and the previously disconnected Arc-enabled resources will be manageable in Azure again.
+
+## Next steps
+
+[Troubleshoot Azure Arc resource bridge (preview) issues](../resource-bridge/troubleshoot-resource-bridge.md)
+
+If the recovery steps mentioned above are unsuccessful in restoring Arc resource bridge to its original state, try one of the following channels for support:
+
+- Get answers from Azure experts through [Microsoft Q&A](/answers/topics/azure-arc.html).
+- Connect with [@AzureSupport](https://twitter.com/azuresupport), the official Microsoft Azure account for improving customer experience. Azure Support connects the Azure community to answers, support, and experts.
+- [Open an Azure support request](../../azure-portal/supportability/how-to-create-azure-support-request.md).
azure-arc Setup And Manage Self Service Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/setup-and-manage-self-service-access.md
+
+ Title: Set up and manage self-service access to VMware resources through Azure RBAC
+description: Learn how to manage access to your on-premises VMware resources through Azure Role-Based Access Control (RBAC).
+ Last updated : 08/21/2023
+# Customer intent: As a VI admin, I want to manage access to my vCenter resources in Azure so that I can keep environments secure
++
+# Set up and manage self-service access to VMware resources
+
+Once your VMware vSphere resources are enabled in Azure, the final step in setting up a self-service experience for your teams is to provide them with access. This article describes how to use built-in roles to manage granular access to VMware resources through Azure Role-based Access Control (RBAC) and allow your teams to deploy and manage VMs.
+
+## Prerequisites
+
+- Your vCenter must be connected to Azure Arc.
+- Your vCenter resources such as Resourcepools/clusters/hosts, networks, templates, and datastores must be Arc-enabled.
+- You must have the User Access Administrator or Owner role at the scope (resource group or subscription) to assign roles to other users.
++
+## Provide access to use Arc-enabled vSphere resources
+
+To provision VMware VMs and change their size, add disks, change network interfaces, or delete them, your users need permissions on the compute, network, storage, and VM template resources that they'll use. These permissions are provided by the built-in **Azure Arc VMware Private Cloud User** role.
+
+You must assign this role on each individual resource pool (or cluster or host), network, datastore, and template that a user or group needs to access.
+
+1. Go to the [**VMware vCenters (preview)** list in Arc center](https://portal.azure.com/#view/Microsoft_Azure_HybridCompute/AzureArcCenterBlade/~/vCenter).
+
+2. Search and select your vCenter.
+
+3. Navigate to **Resourcepools/clusters/hosts** in the **vCenter inventory** section in the table of contents.
+
+4. Find and select the resource pool (or cluster or host). This takes you to the Arc resource representing it.
+
+5. Select **Access control (IAM)** in the table of contents.
+
+6. Select **Add role assignments** on the **Grant access to this resource** tile.
+
+7. Select the **Azure Arc VMware Private Cloud User** role, and select **Next**.
+
+8. Select **Select members**, and search for the Azure Active Directory (Azure AD) user or group to which you want to provide access.
+
+9. Select the Azure AD user or group name. Repeat this for each user or group to which you want to grant this permission.
+
+10. Select **Review + assign** to complete the role assignment.
+
+11. Repeat steps 4-10 for each datastore, network, and VM template that you want to provide access to.
+
+If you have organized your vSphere resources into a resource group, you can provide the same role at the resource group scope.
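+
+If you prefer to script these assignments, the same built-in roles can be granted with the Azure CLI. The following is a minimal sketch using `az role assignment create`; the user and scope are hypothetical placeholders, and the scope must be the Azure resource ID of an Arc-enabled vSphere resource or of the resource group that contains them.
+
+```azurecli
+# Grant the built-in role at resource group scope.
+az role assignment create \
+  --assignee "appdev@contoso.com" \
+  --role "Azure Arc VMware Private Cloud User" \
+  --scope "/subscriptions/<subscription-id>/resourceGroups/contoso-vsphere-rg"
+```
+
+The same command with the **Azure Arc VMware VM Contributor** role grants the subscription- or resource group-level permissions described in the next section.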
+
+Your users now have access to VMware vSphere cloud resources. However, they'll also need permissions on the subscription or resource group where they want to deploy and manage VMs.
+
+## Provide access to subscription or resource group where VMs will be deployed
+
+In addition to having access to VMware vSphere resources through the **Azure Arc VMware Private Cloud User**, your users must have permissions on the subscription and resource group where they deploy and manage VMs.
+
+The **Azure Arc VMware VM Contributor** role is a built-in role that provides permissions to conduct all VMware virtual machine operations.
+
+1. Go to the [Azure portal](https://portal.azure.com/).
+
+2. Search and navigate to the subscription or resource group to which you want to provide access.
+
+3. Select **Access control (IAM)** in the table of contents on the left.
+
+4. Select **Add role assignments** on the **Grant access to this resource** tile.
+
+5. Select the **Azure Arc VMware VM Contributor** role, and select **Next**.
+
+6. Select **Select members**, and search for the Azure Active Directory (Azure AD) user or group to which you want to provide access.
+
+7. Select the Azure AD user or group name. Repeat this for each user or group to which you want to grant this permission.
+
+8. Select **Review + assign** to complete the role assignment.
++
+## Next steps
+
+[Create a VM using Azure Arc-enabled vSphere](quick-start-create-a-vm.md).
azure-arc Support Matrix For Arc Enabled Vmware Vsphere https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/support-matrix-for-arc-enabled-vmware-vsphere.md
Title: Support matrix for Azure Arc-enabled VMware vSphere (preview)
+ Title: Plan for deployment
description: Learn about the support matrix for Arc-enabled VMware vSphere including vCenter Server versions supported, network requirements, and more. Previously updated : 10/21/2022- Last updated : 08/18/2023 # Customer intent: As a VI admin, I want to understand the support matrix for Arc-enabled VMware vSphere.
azure-arc Switch To New Preview Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/switch-to-new-preview-version.md
+
+ Title: Switch to the new preview version
+description: Learn to switch to the new preview version and use its capabilities
+ Last updated : 08/22/2023
+# Customer intent: As a VI admin, I want to switch to the new preview version of Arc-enabled VMware vSphere (preview) and leverage the associated capabilities
++
+# Switch to the new preview version
+
+On August 21, 2023, we rolled out major changes to Azure Arc-enabled VMware vSphere preview. We are now announcing a new preview. By switching to the new preview version, you can use all the Azure management services that are available for Arc-enabled Servers.
+
+> [!NOTE]
+> If you're new to Arc-enabled VMware vSphere (preview), you can use the new capabilities by default. To get started with the preview, see [Quickstart: Connect VMware vCenter Server to Azure Arc by using the helper script](quick-start-connect-vcenter-to-arc-using-script.md).
++
+## Switch to the new preview version (Existing preview customer)
+
+If you're an existing **Azure Arc-enabled VMware vSphere** customer, follow these steps to switch VMs that are already enabled in Azure to the new preview version:
+
+>[!Note]
+>If you had enabled guest management on any of the VMs, remove [VM extensions](/azure/azure-arc/vmware-vsphere/remove-vcenter-from-arc-vmware#step-1-remove-vm-extensions) and [disconnect agents](/azure/azure-arc/vmware-vsphere/remove-vcenter-from-arc-vmware#step-2-disconnect-the-agent-from-azure-arc).
+
+1. From your browser, go to the vCenters blade on [Azure Arc Center](https://ms.portal.azure.com/#view/Microsoft_Azure_HybridCompute/AzureArcCenterBlade/~/overview) and select the vCenter resource.
+
+2. Select all the virtual machines that are Azure enabled with the older preview version.
+
+3. Select **Remove from Azure**.
+
+ :::image type="VM Inventory view" source="media/switch-to-new-preview-version/vm-inventory-view-inline.png" alt-text="Screenshot of VM Inventory view." lightbox="media/switch-to-new-preview-version/vm-inventory-view-expanded.png":::
+
+4. After successful removal from Azure, enable the same resources again in Azure.
+
+5. Once the resources are re-enabled, the VMs are automatically switched to the new preview version. The VM resources are now represented as **Machine - Azure Arc (VMware)**.
+
+ :::image type=" New VM browse view" source="media/switch-to-new-preview-version/new-vm-browse-view-inline.png" alt-text="Screenshot of New VM browse view." lightbox="media/switch-to-new-preview-version/new-vm-browse-view-expanded.png":::
+
+## Next steps
+
+[Quickstart: Connect VMware vCenter Server to Azure Arc by using the helper script](/azure/azure-arc/vmware-vsphere/quick-start-connect-vcenter-to-arc-using-script).
azure-arc Troubleshoot Guest Management Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/troubleshoot-guest-management-issues.md
+
+ Title: Troubleshoot Guest Management Issues
+description: Learn about how to troubleshoot the guest management issues for Arc-enabled VMware vSphere.
+ Last updated : 08/18/2023
+# Customer intent: As a VI admin, I want to understand the troubleshooting process for guest management issues.
+
+# Troubleshoot Guest Management for Linux VMs
+
+This article provides information on how to troubleshoot and resolve the issues that may occur while you enable guest management on Arc-enabled VMware vSphere virtual machines.
+
+## Troubleshoot issues while enabling Guest Management on a domain-joined Linux VM
+
+**Error message**: Enabling Guest Management on a domain-joined Linux VM fails with the error message **InvalidGuestLogin: Failed to authenticate to the system with the credentials**.
+
+**Resolution**: Before you enable Guest Management on a domain-joined Linux VM using active directory credentials, follow these steps to set the configuration on the VM:
+
+1. In the SSSD configuration file (typically, */etc/sssd/sssd.conf*), add the following under the section for the domain:
+
+    ```
+    [domain/contoso.com]
+    ad_gpo_map_batch = +vmtoolsd
+    ```
+
+2. After making the changes to SSSD configuration, restart the SSSD process. If SSSD is running as a system process, run `sudo systemctl restart sssd` to restart it.
+
+### Additional information
+
+According to the [sssd man page](https://jhrozek.fedorapeople.org/sssd/1.13.4/man/sssd-ad.5.html), the `ad_gpo_map_batch` parameter is:
+
+A comma-separated list of Pluggable Authentication Module (PAM) service names for which GPO-based access control is evaluated based on the BatchLogonRight and DenyBatchLogonRight policy settings.
+
+It's possible to add another PAM service name to the default set by using **+service_name** or to explicitly remove a PAM service name from the default set by using **-service_name**. For example, to replace a default PAM service name for this sign in (for example, **crond**) with a custom PAM service name (for example, **my_pam_service**), use this configuration:
+
+`ad_gpo_map_batch = +my_pam_service, -crond`
+
+Default: The default set of PAM service names includes:
+
+- crond
+
+    By adding `+vmtoolsd`, the `vmtoolsd` PAM service is enabled for SSSD evaluation. Any request coming through VMware Tools invokes SSSD, since VMware Tools uses this PAM service to authenticate to the Linux guest VM.
+
+#### References
+
+- [Invoke-VMScript to an domain joined Ubuntu VM](https://communities.vmware.com/t5/VMware-PowerCLI-Discussions/Invoke-VMScript-to-an-domain-joined-Ubuntu-VM/td-p/2257554).
++
+## Troubleshoot issues while enabling Guest Management on RHEL-based Linux VMs
+
+Applies to:
+
+- RedHat Linux
+- CentOS
+- Rocky Linux
+- Oracle Linux
+- SUSE Linux
+- SUSE Linux Enterprise Server
+- Alma Linux
+- Fedora
++
+**Error message**: Provisioning of the resource failed with Code: `AZCM0143`; Message: `install_linux_azcmagent.sh: installation error`.
+
+**Workaround**
+
+Before you enable the guest agent, follow these steps on the VM:
+
+1. Create file `vmtools_unconfined_rpm_script_kcs5347781.te` using the following:
+
+    ```
+    policy_module(vmtools_unconfined_rpm_script_kcs5347781, 1.0)
+
+    gen_require(`
+        type vmtools_unconfined_t;
+    ')
+
+    optional_policy(`
+        rpm_transition_script(vmtools_unconfined_t, system_r)
+    ')
+    ```
+
+2. Install the package to build the policy module:
+
+ `sudo yum -y install selinux-policy-devel`
+
+3. Compile the module:
+
+ `make -f /usr/share/selinux/devel/Makefile vmtools_unconfined_rpm_script_kcs5347781.pp`
+
+4. Install the module:
+
+ `sudo semodule -i vmtools_unconfined_rpm_script_kcs5347781.pp`
+
+### Additional information
+
+Track the issue through [BZ 1872245 - [VMware][RHEL 8] vmtools is not able to install rpms](https://bugzilla.redhat.com/show_bug.cgi?id=1872245).
+
+When you execute a command using `vmrun`, the `yum` or `rpm` command runs in the `vmtools_unconfined_t` context.
+
+When `yum` or `rpm` executes scriptlets, the context changes to `rpm_script_t`, a transition that is currently denied because of the missing rule in the SELinux policy.
+
+#### References
+
+- [Executing yum/rpm commands using VMware tools facility (vmrun) fails in error when packages have scriptlets](https://access.redhat.com/solutions/5347781).
+
+## Next steps
+
+If you don't see your problem here or you can't resolve your issue, try one of the following channels for support:
+
+- Get answers from Azure experts through [Microsoft Q&A](/answers/topics/azure-arc.html).
+
+- Connect with [@AzureSupport](https://twitter.com/azuresupport), the official Microsoft Azure account for improving customer experience. Azure Support connects the Azure community to answers, support, and experts.
+
+- [Open an Azure support request](../../azure-portal/supportability/how-to-create-azure-support-request.md).
azure-cache-for-redis Cache Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-configure.md
By default, cache metrics in Azure Monitor are [stored for 30 days](../azure-mon
>[!NOTE]
>In addition to archiving your cache metrics to storage, you can also [stream them to an Event hub or send them to Azure Monitor logs](../azure-monitor/essentials/stream-monitoring-data-event-hubs.md).
>
+
### Advisor recommendations

The **Advisor recommendations** on the left displays recommendations for your cache. During normal operations, no recommendations are displayed.
Further information can be found on the **Recommendations** in the working pane
You can monitor these metrics on the [Monitoring](cache-how-to-monitor.md) section of the Resource menu.
-Each pricing tier has different limits for client connections, memory, and bandwidth. If your cache approaches maximum capacity for these metrics over a sustained period of time, a recommendation is created. For more information about the metrics and limits reviewed by the **Recommendations** tool, see the following table:
- | Azure Cache for Redis metric | More information | | | | | Network bandwidth usage |[Cache performance - available bandwidth](./cache-planning-faq.yml#azure-cache-for-redis-performance) |
New Azure Cache for Redis instances are configured with the following default Re
| `maxmemory-samples` |3 |To save memory, LRU and minimal TTL algorithms are approximated algorithms instead of precise algorithms. By default Redis checks three keys and picks the one that was used less recently. | | `lua-time-limit` |5,000 |Max execution time of a Lua script in milliseconds. If the maximum execution time is reached, Redis logs that a script is still in execution after the maximum allowed time, and starts to reply to queries with an error. | | `lua-event-limit` |500 |Max size of script event queue. |
-| `client-output-buffer-limit` `normalclient-output-buffer-limit` `pubsub` |0 0 032mb 8 mb 60 |The client output buffer limits can be used to force disconnection of clients that aren't reading data from the server fast enough for some reason. A common reason is that a Pub/Sub client can't consume messages as fast as the publisher can produce them. For more information, see [https://redis.io/topics/clients](https://redis.io/topics/clients). |
+| `client-output-buffer-limit normal` / `client-output-buffer-limit pubsub` |`0 0 0` / `32mb 8mb 60` |The client output buffer limits can be used to force disconnection of clients that aren't reading data from the server fast enough for some reason. A common reason is that a Pub/Sub client can't consume messages as fast as the publisher can produce them. For more information, see [https://redis.io/topics/clients](https://redis.io/topics/clients). |
<a name="databases"></a>
Configuration and management of Azure Cache for Redis instances is managed by Mi
- ACL
- BGREWRITEAOF
- BGSAVE
-- CLUSTER - Cluster write commands are disabled, but read-only Cluster commands are permitted.
+- CLUSTER - Cluster write commands are disabled, but read-only cluster commands are permitted.
- CONFIG
- DEBUG
- MIGRATE
- PSYNC
- REPLICAOF
+- REPLCONF - Azure Cache for Redis instances don't allow customers to add external replicas. This [command](https://redis.io/commands/replconf/) is normally only sent by servers.
- SAVE
- SHUTDOWN
- SLAVEOF
For more information about Redis commands, see [https://redis.io/commands](https
- [How can I run Redis commands?](cache-development-faq.yml#how-can-i-run-redis-commands-)
- [Monitor Azure Cache for Redis](cache-how-to-monitor.md)
+
azure-cache-for-redis Cache How To Premium Persistence https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-how-to-premium-persistence.md
You have two options for persistence with Azure Cache for Redis: the _Redis data
- _RDB persistence_ - When you use RDB persistence, Azure Cache for Redis persists a snapshot of your cache in a binary format. The snapshot is saved in an Azure Storage account. The configurable backup frequency determines how often to persist the snapshot. If a catastrophic event occurs that disables both the primary and replica cache, the cache is reconstructed automatically using the most recent snapshot. Learn more about the [advantages](https://redis.io/topics/persistence#rdb-advantages) and [disadvantages](https://redis.io/topics/persistence#rdb-disadvantages) of RDB persistence. - _AOF persistence_ - When you use AOF persistence, Azure Cache for Redis saves every write operation to a log. The log is saved at least once per second in an Azure Storage account. If a catastrophic event occurs that disables both the primary and replica caches, the cache is reconstructed automatically using the stored write operations. Learn more about the [advantages](https://redis.io/topics/persistence#aof-advantages) and [disadvantages](https://redis.io/topics/persistence#aof-disadvantages) of AOF persistence.
-Azure Cache for Redis persistence features are intended to be used to restore data automatically to the same cache after data loss. The RDB/AOF persisted data files can't be imported to a new cache. To move data across caches, use the _Import and Export_ feature. For more information, see [Import and Export data in Azure Cache for Redis](cache-how-to-import-export-data.md).
+Azure Cache for Redis persistence features are intended to be used to restore data automatically to the same cache after data loss. The RDB/AOF persisted data files can't be imported to a new cache or the existing cache. To move data across caches, use the _Import and Export_ feature. For more information, see [Import and Export data in Azure Cache for Redis](cache-how-to-import-export-data.md).
To generate any backups of data that can be added to a new cache, you can write automated scripts using PowerShell or CLI that export data periodically.
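
A minimal sketch of such a script using the Azure CLI is shown below; the cache name, resource group, container SAS URL, and prefix are hypothetical placeholders, and `az redis export` requires a tier that supports import/export (such as Premium).

```azurecli
# Export cache data as RDB files to a blob container on a schedule (for example, from a cron job).
az redis export \
  --name "contoso-cache" \
  --resource-group "contoso-rg" \
  --prefix "nightly-backup" \
  --container "https://contosostorage.blob.core.windows.net/redis-backups?<sas-token>" \
  --file-format "RDB"
```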
To generate any backups of data that can be added to a new cache, you can write
Persistence features are intended to be used to restore data to the same cache after data loss.

-- RDB/AOF persisted data files can't be imported to a new cache. Use the [Import/Export](cache-how-to-import-export-data.md) feature instead.
+- RDB/AOF persisted data files can't be imported to a new cache or the existing cache. Use the [Import/Export](cache-how-to-import-export-data.md) feature instead.
- Persistence isn't supported with caches using [passive geo-replication](cache-how-to-geo-replication.md) or [active geo-replication](cache-how-to-active-geo-replication.md).
- On the _Premium_ tier, AOF persistence isn't supported with [multiple replicas](cache-how-to-multi-replicas.md).
- On the _Premium_ tier, data must be persisted to a storage account in the same region as the cache instance.
azure-cache-for-redis Cache How To Premium Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-how-to-premium-vnet.md
Previously updated : 07/22/2022 Last updated : 08/29/2023
-# Configure virtual network support for a Premium Azure Cache for Redis instance
+# Configure virtual network (VNet) support for a Premium Azure Cache for Redis instance
[Azure Virtual Network](https://azure.microsoft.com/services/virtual-network/) deployment provides enhanced security and isolation along with: subnets, access control policies, and other features to restrict access further. When an Azure Cache for Redis instance is configured with a virtual network, it isn't publicly addressable. Instead, the instance can only be accessed from virtual machines and applications within the virtual network. This article describes how to configure virtual network support for a Premium-tier Azure Cache for Redis instance.
Last updated 07/22/2022
> > [!IMPORTANT]
-> Azure Cache for Redis now supports Azure Private Link, which simplifies the network architecture and secures the connection between endpoints in Azure. You can connect to an Azure Cache instance from your virtual network via a private endpoint, which is assigned a private IP address in a subnet within the virtual network. Azure Private Links is offered on all our tiers, includes Azure Policy support, and simplified NSG rule management. To learn more, see [Private Link Documentation](cache-private-link.md). To migrate your VNet injected caches to Private Link, see [here](cache-vnet-migration.md).
+> Azure Cache for Redis recommends using Azure Private Link, which simplifies the network architecture and secures the connection between endpoints in Azure. You can connect to an Azure Cache for Redis instance from your virtual network via a private endpoint, which is assigned a private IP address in a subnet within the virtual network. Azure Private Link is offered on all our tiers, includes Azure Policy support, and provides simplified NSG rule management. To learn more, see [Private Link Documentation](cache-private-link.md). To migrate your VNet injected caches to Private Link, see [Migrate from VNet injection caches to Private Link caches](cache-vnet-migration.md).
>
+### Limitations of VNet injection
+
+- Creating and maintaining virtual network configurations is often error prone. Troubleshooting is challenging, too. Incorrect virtual network configurations can lead to issues:
+ - obstructed metrics transmission from your cache instances
+ - failure of replica node to replicate data from primary node
+ - potential data loss
+ - failure of management operations like scaling
+ - in the most severe scenarios, loss of availability
+- VNet injected caches are only available for Premium-tier Azure Cache for Redis, not other tiers.
+- When using a VNet injected cache, you must update your VNet to allow access to cache dependencies such as CRLs/PKI, AKV, Azure Storage, Azure Monitor, and more.
+- You can't inject an existing Azure Cache for Redis instance into a Virtual Network. You must select this option when you _create_ the cache.
+
## Set up virtual network support

Virtual network support is configured on the **New Azure Cache for Redis** pane during cache creation.
Virtual network support is configured on the **New Azure Cache for Redis** pane
| Setting | Suggested value | Description |
| ------- | --------------- | ----------- |
- | **DNS name** | Enter a globally unique name. | The cache name must be a string between 1 and 63 characters that contain only numbers, letters, or hyphens. The name must start and end with a number or letter, and it can't contain consecutive hyphens. Your cache instance's *host name* will be *\<DNS name>.redis.cache.windows.net*. |
+ | **DNS name** | Enter a globally unique name. | The cache name must be a string between 1 and 63 characters that contains only numbers, letters, or hyphens. The name must start and end with a number or letter, and it can't contain consecutive hyphens. Your cache instance's _host name_ will be `<DNS name>.redis.cache.windows.net`. |
| **Subscription** | Select your subscription from the drop-down list. | The subscription under which to create this new Azure Cache for Redis instance. |
| **Resource group** | Select a resource group from the drop-down list, or select **Create new** and enter a new resource group name. | The name for the resource group in which to create your cache and other resources. By putting all your app resources in one resource group, you can easily manage or delete them together. |
| **Location** | Select a location from the drop-down list. | Select a [region](https://azure.microsoft.com/regions/) near other services that will use your cache. |
There are nine outbound port requirements. Outbound requests in these ranges are
#### Geo-replication peer port requirements
-If you're using geo-replication between caches in Azure virtual networks: a) unblock ports 15000-15999 for the whole subnet in both inbound *and* outbound directions, and b) to both caches. With this configuration, all the replica components in the subnet can communicate directly with each other even if there's a future geo-failover.
+If you're using geo-replication between caches in Azure virtual networks, unblock ports 15000-15999 for the whole subnet, in both inbound _and_ outbound directions, and to both caches. With this configuration, all the replica components in the subnet can communicate directly with each other even if there's a future geo-failover.
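For example, one way to open the range on a subnet's network security group with the Azure CLI — a sketch only, with placeholder NSG and resource group names; repeat with `--direction Outbound`, and apply the same rules for the other cache's subnet:

```azurecli
# Sketch only: allow the geo-replication port range on the cache subnet's NSG.
az network nsg rule create \
    --resource-group contoso-rg \
    --nsg-name cache-subnet-nsg \
    --name AllowCacheGeoReplicationInbound \
    --priority 200 \
    --direction Inbound \
    --access Allow \
    --protocol Tcp \
    --destination-port-ranges 15000-15999
```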
#### Inbound port requirements
There are eight inbound port range requirements. Inbound requests in these range
There are network connectivity requirements for Azure Cache for Redis that might not be initially met in a virtual network. Azure Cache for Redis requires all the following items to function properly when used within a virtual network:
-- Outbound network connectivity to Azure Key Vault endpoints worldwide. Azure Key Vault endpoints resolve under the DNS domain *vault.azure.net*.
-- Outbound network connectivity to Azure Storage endpoints worldwide. Endpoints located in the same region as the Azure Cache for Redis instance and storage endpoints located in *other* Azure regions are included. Azure Storage endpoints resolve under the following DNS domains: *table.core.windows.net*, *blob.core.windows.net*, *queue.core.windows.net*, and *file.core.windows.net*.
-- Outbound network connectivity to *ocsp.digicert.com*, *crl4.digicert.com*, *ocsp.msocsp.com*, *mscrl.microsoft.com*, *crl3.digicert.com*, *cacerts.digicert.com*, *oneocsp.microsoft.com*, and *crl.microsoft.com*. This connectivity is needed to support TLS/SSL functionality.
+- Outbound network connectivity to Azure Key Vault endpoints worldwide. Azure Key Vault endpoints resolve under the DNS domain `vault.azure.net`.
+- Outbound network connectivity to Azure Storage endpoints worldwide. Endpoints located in the same region as the Azure Cache for Redis instance and storage endpoints located in _other_ Azure regions are included. Azure Storage endpoints resolve under the following DNS domains: `table.core.windows.net`, `blob.core.windows.net`, `queue.core.windows.net`, and `file.core.windows.net`.
+- Outbound network connectivity to `ocsp.digicert.com`, `crl4.digicert.com`, `ocsp.msocsp.com`, `mscrl.microsoft.com`, `crl3.digicert.com`, `cacerts.digicert.com`, `oneocsp.microsoft.com`, and `crl.microsoft.com`. This connectivity is needed to support TLS/SSL functionality.
- The DNS configuration for the virtual network must be able to resolve all of the endpoints and domains mentioned in the earlier points. These DNS requirements can be met by ensuring a valid DNS infrastructure is configured and maintained for the virtual network.
-- Outbound network connectivity to the following Azure Monitor endpoints, which resolve under the following DNS domains: *shoebox2-black.shoebox2.metrics.nsatc.net*, *north-prod2.prod2.metrics.nsatc.net*, *azglobal-black.azglobal.metrics.nsatc.net*, *shoebox2-red.shoebox2.metrics.nsatc.net*, *east-prod2.prod2.metrics.nsatc.net*, *azglobal-red.azglobal.metrics.nsatc.net*, *shoebox3.prod.microsoftmetrics.com*, *shoebox3-red.prod.microsoftmetrics.com*, *shoebox3-black.prod.microsoftmetrics.com*, *azredis-red.prod.microsoftmetrics.com* and *azredis-black.prod.microsoftmetrics.com*.
+- Outbound network connectivity to the following Azure Monitor endpoints, which resolve under the following DNS domains: `shoebox2-black.shoebox2.metrics.nsatc.net`, `north-prod2.prod2.metrics.nsatc.net`, `azglobal-black.azglobal.metrics.nsatc.net`, `shoebox2-red.shoebox2.metrics.nsatc.net`, `east-prod2.prod2.metrics.nsatc.net`, `azglobal-red.azglobal.metrics.nsatc.net`, `shoebox3.prod.microsoftmetrics.com`, `shoebox3-red.prod.microsoftmetrics.com`, `shoebox3-black.prod.microsoftmetrics.com`, `azredis-red.prod.microsoftmetrics.com` and `azredis-black.prod.microsoftmetrics.com`.
### How can I verify that my cache is working in a virtual network?
In addition to the IP addresses used by the Azure virtual network infrastructure
If the virtual networks are in the same region, you can connect them using virtual network peering or a VPN Gateway VNET-to-VNET connection.
-If the peered Azure virtual networks are in *different* regions: a client VM in region 1 can't access the cache in region 2 via its load balanced IP address because of a constraint with basic load balancers. That is, unless it's a cache with a standard load balancer, which is currently only a cache that was created with *availability zones*.
+If the peered Azure virtual networks are in _different_ regions, a client VM in region 1 can't access the cache in region 2 via its load-balanced IP address because of a constraint with basic load balancers. The exception is a cache with a standard load balancer, which is currently only a cache that was created with _availability zones_.
For more information about virtual network peering constraints, see Virtual Network - Peering - Requirements and constraints. One solution is to use a VPN Gateway VNET-to-VNET connection instead of virtual network peering.
The solution is to define one or more user-defined routes (UDRs) on the subnet t
If possible, use the following configuration:

- The ExpressRoute configuration advertises 0.0.0.0/0 and, by default, force tunnels all outbound traffic on-premises.
-- The UDR applied to the subnet that contains the Azure Cache for Redis instance defines 0.0.0.0/0 with a working route for TCP/IP traffic to the public internet. For example, it sets the next hop type to *internet*.
+- The UDR applied to the subnet that contains the Azure Cache for Redis instance defines 0.0.0.0/0 with a working route for TCP/IP traffic to the public internet. For example, it sets the next hop type to _internet_.
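As a sketch of that UDR with the Azure CLI — the route table name is a placeholder, and the table is assumed to already be associated with the cache subnet:

```azurecli
# Sketch only: route 0.0.0.0/0 to the internet next hop so it takes precedence
# over the ExpressRoute forced tunneling for this subnet.
az network route-table route create \
    --resource-group contoso-rg \
    --route-table-name cache-subnet-routes \
    --name DefaultToInternet \
    --address-prefix 0.0.0.0/0 \
    --next-hop-type Internet
```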
The combined effect of these steps is that the subnet-level UDR takes precedence over the ExpressRoute forced tunneling and that ensures outbound internet access from the Azure Cache for Redis instance. Connecting to an Azure Cache for Redis instance from an on-premises application by using ExpressRoute isn't a typical usage scenario because of performance reasons. For best performance, Azure Cache for Redis clients should be in the same region as the Azure Cache for Redis instance.

>[!IMPORTANT]
->The routes defined in a UDR *must* be specific enough to take precedence over any routes advertised by the ExpressRoute configuration. The following example uses the broad 0.0.0.0/0 address range and, as such, can potentially be accidentally overridden by route advertisements that use more specific address ranges.
+>The routes defined in a UDR _must_ be specific enough to take precedence over any routes advertised by the ExpressRoute configuration. The following example uses the broad 0.0.0.0/0 address range and, as such, can potentially be accidentally overridden by route advertisements that use more specific address ranges.
>[!WARNING]
->Azure Cache for Redis isn't supported with ExpressRoute configurations that *incorrectly cross-advertise routes from the public peering path to the private peering path*. ExpressRoute configurations that have public peering configured receive route advertisements from Microsoft for a large set of Microsoft Azure IP address ranges. If these address ranges are incorrectly cross-advertised on the private peering path, the result is that all outbound network packets from the Azure Cache for Redis instance's subnet are incorrectly force-tunneled to a customer's on-premises network infrastructure. This network flow breaks Azure Cache for Redis. The solution to this problem is to stop cross-advertising routes from the public peering path to the private peering path.
+>Azure Cache for Redis isn't supported with ExpressRoute configurations that _incorrectly cross-advertise routes from the public peering path to the private peering path_. ExpressRoute configurations that have public peering configured receive route advertisements from Microsoft for a large set of Microsoft Azure IP address ranges. If these address ranges are incorrectly cross-advertised on the private peering path, the result is that all outbound network packets from the Azure Cache for Redis instance's subnet are incorrectly force-tunneled to a customer's on-premises network infrastructure. This network flow breaks Azure Cache for Redis. The solution to this problem is to stop cross-advertising routes from the public peering path to the private peering path.
Background information on UDRs is available in [Virtual network traffic routing](../virtual-network/virtual-networks-udr-overview.md).
azure-cache-for-redis Cache How To Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-how-to-scale.md
Previously updated : 03/24/2023 Last updated : 08/24/2023 ms.devlang: csharp
The following list contains answers to commonly asked questions about Azure Cach
- [After scaling, do I have to change my cache name or access keys?](#after-scaling-do-i-have-to-change-my-cache-name-or-access-keys)
- [How does scaling work?](#how-does-scaling-work)
- [Do I lose data from my cache during scaling?](#do-i-lose-data-from-my-cache-during-scaling)
+- [Can I use all the features of Premium tier after scaling?](#can-i-use-all-the-features-of-premium-tier-after-scaling)
- [Is my custom databases setting affected during scaling?](#is-my-custom-databases-setting-affected-during-scaling)
- [Is my cache be available during scaling?](#is-my-cache-be-available-during-scaling)
- [Are there scaling limitations with geo-replication?](#are-there-scaling-limitations-with-geo-replication)
No, your cache name and keys are unchanged during a scaling operation.
- When you scale a **Basic** cache to a **Standard** cache, the data in the cache is typically preserved.
- When you scale a **Standard**, **Premium**, **Enterprise**, or **Enterprise Flash** cache to a larger size, all data is typically preserved. When you scale a Standard or Premium cache to a smaller size, data can be lost if the data size exceeds the new smaller size when it's scaled down. If data is lost when scaling down, keys are evicted using the [allkeys-lru](https://redis.io/topics/lru-cache) eviction policy.
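For illustration, scaling is a single update operation; a minimal Azure CLI sketch, with placeholder names and target size, assuming `--sku`/`--vm-size` support in your CLI version:

```azurecli
# Sketch only: scale an existing cache to Premium P2; the name and keys are unchanged.
az redis update \
    --name contoso-cache \
    --resource-group contoso-rg \
    --sku Premium \
    --vm-size P2
```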
+ ### Can I use all the features of Premium tier after scaling?
+
+No, some features can only be set when you create a cache in the Premium tier, and aren't available after scaling.
+
+These features can't be added after you create the Premium cache:
+
+- VNet injection
+- Adding zone redundancy
+- Using multiple replicas per primary
+
+To use any of these features, you must create a new cache instance in the Premium tier.
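For illustration, a hedged Azure CLI sketch of creating a new Premium cache with these creation-time options (resource names and sizes are placeholders):

```azurecli
# Sketch only: zone redundancy and replica count must be chosen at creation time.
az redis create \
    --name contoso-cache \
    --resource-group contoso-rg \
    --location eastus \
    --sku Premium \
    --vm-size P1 \
    --zones 1 2 \
    --replicas-per-master 2
```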
+
### Is my custom databases setting affected during scaling?

If you configured a custom value for the `databases` setting during cache creation, keep in mind that some pricing tiers have different [databases limits](cache-configure.md#databases). Here are some considerations when scaling in this scenario:
You can connect to your cache using the same [endpoints](cache-configure.md#prop
### Can I directly connect to the individual shards of my cache?
-The clustering protocol requires the client to make the correct shard connections, so the client should make share connections for you. With that said, each shard consists of a primary/replica cache pair, collectively known as a cache instance. You can connect to these cache instances using the redis-cli utility in the [unstable](https://redis.io/download) branch of the Redis repository at GitHub. This version implements basic support when started with the `-c` switch. For more information, see [Playing with the cluster](https://redis.io/topics/cluster-tutorial#playing-with-the-cluster) on [https://redis.io](https://redis.io) in the [Redis cluster tutorial](https://redis.io/topics/cluster-tutorial).
+The clustering protocol requires the client to make the correct shard connections, so the client should make shard connections for you. With that said, each shard consists of a primary/replica cache pair, collectively known as a cache instance. You can connect to these cache instances using the redis-cli utility in the [unstable](https://redis.io/download) branch of the Redis repository at GitHub. This version implements basic support when started with the `-c` switch. For more information, see [Playing with the cluster](https://redis.io/topics/cluster-tutorial#playing-with-the-cluster) on [https://redis.io](https://redis.io) in the [Redis cluster tutorial](https://redis.io/topics/cluster-tutorial).
You need to use the `-p` switch to specify the correct port to connect to. Use the [CLUSTER NODES](https://redis.io/commands/cluster-nodes/) command to determine the exact ports used for the primary and replica nodes. The following port ranges are used:
You need to use the `-p` switch to specify the correct port to connect to. Use t
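As a hypothetical sketch of such a connection — the host name, access key, and shard port are placeholders; take the actual port from the `CLUSTER NODES` output:

```console
# Sketch only: -c starts redis-cli in cluster mode; replace 13000 with the
# port you observed for the target primary or replica node.
redis-cli -h contoso-cache.redis.cache.windows.net -p 13000 -a <access-key> -c
```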
### Can I configure clustering for a previously created cache?
-Yes. First, ensure that your cache is premium by scaling it up. Next, you can see the cluster configuration options, including an option to enable cluster. Change the cluster size after the cache is created, or after you have enabled clustering for the first time.
+Yes. First, ensure that your cache is in the Premium tier by scaling it up. Next, you see the cluster configuration options, including an option to enable clustering. You can change the cluster size after the cache is created, or after you have enabled clustering for the first time.
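As an assumption-laden sketch, the generic `--set` flag of the Azure CLI can update the `shardCount` property on an existing Premium cache (names and count are placeholders):

```azurecli
# Sketch only: changes the cluster size; note that enabling clustering can't be undone.
az redis update \
    --name contoso-cache \
    --resource-group contoso-rg \
    --set shardCount=3
```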
>[!IMPORTANT]
->You can't undo enabling clustering. And a cache with clustering enabled and only one shard behaves *differently* than a cache of the same size with *no* clustering.
+>You can't undo enabling clustering. And a cache with clustering enabled and only one shard behaves _differently_ than a cache of the same size with _no_ clustering.
All Enterprise and Enterprise Flash tier caches are always clustered.
Unlike Basic, Standard, and Premium tier caches, Enterprise and Enterprise Flash
## Next steps

- [Configure your maxmemory-reserved setting](cache-best-practices-memory-management.md#configure-your-maxmemory-reserved-setting)
-- [[Best practices for scaling](cache-best-practices-scale.md)]
+- [Best practices for scaling](cache-best-practices-scale.md)
azure-cache-for-redis Cache How To Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-how-to-upgrade.md
Previously updated : 06/29/2023 Last updated : 08/17/2023
Before you upgrade, check the Redis version of a cache by selecting **Properties
## Upgrade using Azure CLI
-To upgrade a cache from 4 to 6 using the Azure CLI, use the following command:
+To upgrade a cache from 4 to 6 with the Azure CLI when the cache isn't using Private Endpoint, use the following command:
```azurecli-interactive
az redis update --name cacheName --resource-group resourceGroupName --set redisVersion=6
```
+### Private Endpoint
+
+If Private Endpoint is enabled on the cache, use the command that is appropriate based on whether `PublicNetworkAccess` is enabled or disabled:
+
+If `PublicNetworkAccess` is enabled:
+
+```azurecli
+az redis update --name <cacheName> --resource-group <resourceGroupName> --set publicNetworkAccess=Enabled redisVersion=6
+```
+
+If `PublicNetworkAccess` is disabled:
+
+```azurecli
+az redis update --name <cacheName> --resource-group <resourceGroupName> --set publicNetworkAccess=Disabled redisVersion=6
+```
+
## Upgrade using PowerShell

To upgrade a cache from 4 to 6 using PowerShell, use the following command:
azure-cache-for-redis Cache Network Isolation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-network-isolation.md
Title: Azure Cache for Redis network isolation options
-description: In this article, you'll learn how to determine the best network isolation solution for your needs. We'll go through the basics of Azure Private Link, Azure Virtual Network (VNet) injection, and Azure Firewall Rules with their advantages and limitations.
+description: In this article, you learn how to determine the best network isolation solution for your needs. We go through the basics of Azure Private Link, Azure Virtual Network (VNet) injection, and Azure Firewall Rules with their advantages and limitations.
Previously updated : 06/21/2023 Last updated : 08/29/2023

# Azure Cache for Redis network isolation options
-In this article, you'll learn how to determine the best network isolation solution for your needs. We'll discuss the basics of Azure Private Link, Azure Virtual Network (VNet) injection, and Azure Firewall Rules. We'll discuss their advantages and limitations.
+In this article, you learn how to determine the best network isolation solution for your needs. We discuss the basics of Azure Private Link (recommended), Azure Virtual Network (VNet) injection, and Firewall Rules. We discuss their advantages and limitations.
-## Azure Private Link
+## Azure Private Link (recommended)
Azure Private Link provides private connectivity from a virtual network to Azure PaaS services. Private Link simplifies the network architecture and secures the connection between endpoints in Azure. Private Link also secures the connection by eliminating data exposure to the public internet.

### Advantages of Private Link

-- Supported on Basic, Standard, Premium, Enterprise, and Enterprise Flash tiers of Azure Cache for Redis instances.
-- By using [Azure Private Link](../private-link/private-link-overview.md), you can connect to an Azure Cache instance from your virtual network via a private endpoint. The endpoint is assigned a private IP address in a subnet within the virtual network. With this private link, cache instances are available from both within the VNet and publicly.
+- Private link is supported on Basic, Standard, Premium, Enterprise, and Enterprise Flash tiers of Azure Cache for Redis instances.
+- By using [Azure Private Link](../private-link/private-link-overview.md), you can connect to an Azure Cache instance from your virtual network through a private endpoint. The endpoint is assigned a private IP address in a subnet within the virtual network. With this private link, cache instances are available from both within the VNet and publicly.
+
> [!IMPORTANT]
> Enterprise/Enterprise Flash caches with private link cannot be accessed publicly.
-- Once a private endpoint is created on Basic/Standard/Premium tier caches, access to the public network can be restricted through the `publicNetworkAccess` flag. This flag is set to `Disabled` by default, which will only allow private link access. You can set the value to `Enabled` or `Disabled` with a PATCH request. For more information, see [Azure Cache for Redis with Azure Private Link](cache-private-link.md).
+
+- Once a private endpoint is created on Basic/Standard/Premium tier caches, access to the public network can be restricted through the `publicNetworkAccess` flag. This flag is set to `Disabled` by default, which only allows private link access. You can set the value to `Enabled` or `Disabled` with a PATCH request. For more information, see [Azure Cache for Redis with Azure Private Link](cache-private-link.md).
+ > [!IMPORTANT] > Enterprise/Enterprise Flash tier does not support `publicNetworkAccess` flag.-- All external cache dependencies won't affect the VNet's NSG rules.+
+- Any external cache dependencies don't affect the VNet's NSG rules.
+- Persisting to a storage account protected with firewall rules is supported when you use managed identity to connect to the storage account. For more information, see [Import and Export data in Azure Cache for Redis](cache-how-to-import-export-data.md#how-to-export-if-i-have-firewall-enabled-on-my-storage-account).
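For example, a minimal Azure CLI sketch of toggling the `publicNetworkAccess` flag mentioned above on a Basic/Standard/Premium cache (names are placeholders):

```azurecli
# Sketch only: set to Enabled to allow public access alongside the private endpoint.
az redis update \
    --name contoso-cache \
    --resource-group contoso-rg \
    --set publicNetworkAccess=Disabled
```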
### Limitations of Private Link

-- Network security groups (NSG) are disabled for private endpoints. However, if there are other resources on the subnet, NSG enforcement will apply to those resources.
-- Currently, portal console support, import/export and persistence to firewall storage accounts aren't supported.
-- To connect to a clustered cache, `publicNetworkAccess` needs to be set to `Disabled`, and there can only be one private endpoint connection.
+- Currently, portal console isn't supported for caches with private link.
> [!NOTE] > When adding a private endpoint to a cache instance, all Redis traffic is moved to the private endpoint because of the DNS.
-> Ensure previous firewall rules are adjusted before.
+> Ensure previous firewall rules are adjusted before.
## Azure Virtual Network injection
-VNet is the fundamental building block for your private network in Azure. VNet enables many Azure resources to securely communicate with each other, the internet, and on-premises networks. VNet is like a traditional network you would operate in your own data center. However, VNet also has the benefits of Azure infrastructure, scale, availability, and isolation.
+Virtual Network (VNet) is the fundamental building block for your private network in Azure. VNet enables many Azure resources to securely communicate with each other, the internet, and on-premises networks. VNet is like a traditional network you would operate in your own data center. However, VNet also has the benefits of Azure infrastructure, scale, availability, and isolation.
### Advantages of VNet injection
VNet is the fundamental building block for your private network in Azure. VNet e
### Limitations of VNet injection -- VNet injected caches are only available for Premium Azure Cache for Redis.-- When using a VNet injected cache, you must change your VNet to cache dependencies such as CRLs/PKI, AKV, Azure Storage, Azure Monitor, and more.-- You can't inject an Azure Cache for Redis instance into a Virtual Network. You can only select this option when you _create_ the cache.
+- Creating and maintaining virtual network configurations can be error prone. Troubleshooting is challenging. Incorrect virtual network configurations can lead to various issues:
+ - obstructed metrics transmission from your cache instances
+ - failure of replica node to replicate data from primary node
+ - potential data loss
+ - failure of management operations like scaling
+ - in the most severe scenarios, loss of availability
+- VNet injected caches are only available for Premium-tier Azure Cache for Redis instances.
+- When using a VNet injected cache, you must change your VNet to cache dependencies, such as CRLs/PKI, AKV, Azure Storage, Azure Monitor, and more.
+- You can't inject an existing Azure Cache for Redis instance into a Virtual Network. You can only select this option when you _create_ the cache.
-## Azure Firewall rules
+## Firewall rules
-[Azure Firewall](../firewall/overview.md) is a managed, cloud-based network security service that protects your Azure VNet resources. It's a fully stateful firewall as a service with built-in high availability and unrestricted cloud scalability. You can centrally create, enforce, and log application and network connectivity policies across subscriptions and virtual networks.
+Azure Cache for Redis lets you configure firewall rules that specify the IP address ranges allowed to connect to your Azure Cache for Redis instance.
### Advantages of firewall rules
azure-cache-for-redis Cache Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-private-link.md
Previously updated : 06/23/2023 Last updated : 08/28/2023+
-# Azure Cache for Redis with Azure Private Link
+# What is Azure Cache for Redis with Azure Private Link?
-In this article, you'll learn how to create a virtual network and an Azure Cache for Redis instance with a private endpoint using the Azure portal. You'll also learn how to add a private endpoint to an existing Azure Cache for Redis instance.
+In this article, you learn how to create a virtual network and an Azure Cache for Redis instance with a private endpoint using the Azure portal. You also learn how to add a private endpoint to an existing Azure Cache for Redis instance.
Azure Private Endpoint is a network interface that connects you privately and securely to Azure Cache for Redis powered by Azure Private Link.
You can restrict public access to the private endpoint of your cache by disablin
>[!Important]
> There is a `publicNetworkAccess` flag which is `Disabled` by default.
-> You can set the value to `Disabled` or `Enabled`. When set to enabled, this flag allows both public and private endpoint access to the cache. When set to `Disabled`, it allows only private endpoint access. For more information on how to change the value, see the [FAQ](#how-can-i-change-my-private-endpoint-to-be-disabled-or-enabled-from-public-network-access).
+> You can set the value to `Disabled` or `Enabled`. When set to `Enabled`, this flag allows both public and private endpoint access to the cache. When set to `Disabled`, it allows only private endpoint access. Neither the Enterprise nor Enterprise Flash tier supports the `publicNetworkAccess` flag. For more information on how to change the value, see the [FAQ](#how-can-i-change-my-private-endpoint-to-be-disabled-or-enabled-from-public-network-access).
>[!Important]
> Private endpoint is supported on cache tiers Basic, Standard, Premium, and Enterprise. We recommend using private endpoint instead of VNets. Private endpoints are easy to set up or remove, are supported on all tiers, and can connect your cache to multiple different VNets at once.
You can restrict public access to the private endpoint of your cache by disablin
## Create a private endpoint with a new Azure Cache for Redis instance
-In this section, you'll create a new Azure Cache for Redis instance with a private endpoint.
+In this section, you create a new Azure Cache for Redis instance with a private endpoint.
### Create a virtual network for your new cache
In this section, you'll create a new Azure Cache for Redis instance with a priva
| **Subscription** | Drop down and select your subscription. | The subscription under which to create this virtual network. |
| **Resource group** | Drop down and select a resource group, or select **Create new** and enter a new resource group name. | Name for the resource group in which to create your virtual network and other resources. By putting all your app resources in one resource group, you can easily manage or delete them together. |
| **Name** | Enter a virtual network name. | The name must: begin with a letter or number; end with a letter, number, or underscore; and contain only letters, numbers, underscores, periods, or hyphens. |
- | **Region** | Drop down and select a region. | Select a [region](https://azure.microsoft.com/regions/) near other services that will use your virtual network. |
+ | **Region** | Drop down and select a region. | Select a [region](https://azure.microsoft.com/regions/) near other services that use your virtual network. |
5. Select the **IP Addresses** tab or select the **Next: IP Addresses** button at the bottom of the page.
In this section, you'll create a new Azure Cache for Redis instance with a priva
### Create an Azure Cache for Redis instance with a private endpoint
-To create a cache instance, follow these steps.
+To create a cache instance, follow these steps:
1. Go back to the Azure portal homepage or open the sidebar menu, then select **Create a resource**.
1. On the **New** page, select **Databases** and then select **Azure Cache for Redis**.
- :::image type="content" source="media/cache-private-link/2-select-cache.png" alt-text="Select Azure Cache for Redis.":::
+ :::image type="content" source="media/cache-private-link/2-select-cache.png" alt-text="Select Azure Cache for Redis.":::
1. On the **New Redis Cache** page, configure the settings for your new cache.
To create a cache instance, follow these steps.
1. Select the **Add** button to create your private endpoint.
- :::image type="content" source="media/cache-private-link/3-add-private-endpoint.png" alt-text="In networking, add a private endpoint.":::
+ :::image type="content" source="media/cache-private-link/3-add-private-endpoint.png" alt-text="In networking, add a private endpoint.":::
1. On the **Create a private endpoint** page, configure the settings for your private endpoint with the virtual network and subnet you created in the last section and select **OK**.
In this section, you'll add a private endpoint to an existing Azure Cache for Re
### Create a virtual network for your existing cache
-To create a virtual network, follow these steps.
+To create a virtual network, follow these steps:
1. Sign in to the [Azure portal](https://portal.azure.com) and select **Create a resource**.
To create a virtual network, follow these steps.
### Create a private endpoint
-To create a private endpoint, follow these steps.
+To create a private endpoint, follow these steps:
1. In the Azure portal, search for **Azure Cache for Redis**. Then, press enter or select it from the search suggestions.
To create a private endpoint, follow these steps.
1. In the **Resource** tab, select your subscription, choose the resource type as `Microsoft.Cache/Redis`, and then select the cache you want to connect the private endpoint to.
1. Select the **Next: Configuration** button at the bottom of the page.
+
1. Select the **Next: Virtual Network** button at the bottom of the page.
+
1. In the **Configuration** tab, select the virtual network and subnet you created in the previous section.
+
1. In the **Virtual Network** tab, select the virtual network and subnet you created in the previous section.
+
1. Select the **Next: Tags** button at the bottom of the page.
1. Optionally, in the **Tags** tab, enter the name and value if you wish to categorize the resource.
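If you prefer scripting this step, a hedged Azure CLI sketch — resource names are placeholders; `redisCache` is the private link group ID for Azure Cache for Redis:

```azurecli
# Sketch only: look up the cache resource ID, then create the private endpoint.
cacheId=$(az redis show --name contoso-cache --resource-group contoso-rg --query id --output tsv)
az network private-endpoint create \
    --name contoso-cache-pe \
    --resource-group contoso-rg \
    --vnet-name contoso-vnet \
    --subnet default \
    --private-connection-resource-id "$cacheId" \
    --group-id redisCache \
    --connection-name contoso-cache-pe-connection
```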
For more information, see [Azure services DNS zone configuration](../private-lin
### How do I verify if my private endpoint is configured correctly?
- Go to **Overview** in the Resource menu on the portal. You see the **Host name** for your cache in the working pane. Run a command like `nslookup <hostname>` from within the VNet that is linked to the private endpoint to verify that the command resolves to the private IP address for the cache.
+Go to **Overview** in the Resource menu on the portal. You see the **Host name** for your cache in the working pane. Run a command like `nslookup <hostname>` from within the VNet that is linked to the private endpoint to verify that the command resolves to the private IP address for the cache.
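For example, assuming a cache named `contoso-cache` (a placeholder), run this from a VM inside the linked VNet:

```console
# The host name should resolve to a private IP address from the endpoint's
# subnet (for example, 10.0.0.x) rather than a public address.
nslookup contoso-cache.redis.cache.windows.net
```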
+ :::image type="content" source="media/cache-private-link/cache-private-ip-address.png" alt-text="In the Azure portal, private endpoint D N S settings.":::
### How can I change my private endpoint to be disabled or enabled from public network access?

There's a `publicNetworkAccess` flag that is `Disabled` by default. When set to `Enabled`, this flag allows both public and private endpoint access to the cache. When set to `Disabled`, it allows only private endpoint access. You can set the value to `Disabled` or `Enabled` in the Azure portal or with a RESTful API PATCH request.
-To change the value in the Azure portal, follow these steps.
-
-1. In the Azure portal, search for **Azure Cache for Redis**. Then, press enter or select it from the search suggestions.
+To change the value in the Azure portal, follow these steps:
-1. Select the cache instance you want to change the public network access value.
+ 1. In the Azure portal, search for **Azure Cache for Redis**. Then, press enter or select it from the search suggestions.
-1. On the left side of the screen, select **Private Endpoint**.
+ 1. Select the cache instance you want to change the public network access value.
-1. Select the **Enable public network access** button.
+ 1. On the left side of the screen, select **Private Endpoint**.
+ 1. Select the **Enable public network access** button.
+
To change the value through a RESTful API PATCH request, see below and edit the value to reflect which flag you want for your cache.-
-```http
-PATCH https://management.azure.com/subscriptions/{subscription}/resourceGroups/{resourcegroup}/providers/Microsoft.Cache/Redis/{cache}?api-version=2020-06-01
-{ "properties": {
- "publicNetworkAccess":"Disabled"
- }
-}
-```
+
+ ```http
+ PATCH https://management.azure.com/subscriptions/{subscription}/resourceGroups/{resourcegroup}/providers/Microsoft.Cache/Redis/{cache}?api-version=2020-06-01
+ { "properties": {
+ "publicNetworkAccess":"Disabled"
+ }
+ }
+
+ ```
+ For more information, see [Redis - Update](/rest/api/redis/Redis/Update?tabs=HTTP).
### How can I migrate my VNet injected cache to a Private Link cache?
Control the traffic by using NSG rules for outbound traffic on source clients. D
It's only linked to your VNet. Because it's not in your VNet, NSG rules don't need to be modified for dependent endpoints.
-## Next steps
+## Related content
- To learn more about Azure Private Link, see the [Azure Private Link documentation](../private-link/private-link-overview.md). - To compare various network isolation options for your cache, see [Azure Cache for Redis network isolation options documentation](cache-network-isolation.md).
azure-cache-for-redis Cache Tutorial Functions Getting Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-tutorial-functions-getting-started.md
Previously updated : 07/19/2023
-#CustomerIntent: As a < type of user >, I want < what? > so that < why? >.
Last updated : 08/24/2023
+#CustomerIntent: As a developer, I want an introductory example of using Azure Cache for Redis triggers with Azure Functions so that I can understand how to use the functions with a Redis cache.
This tutorial shows how to implement basic triggers with Azure Cache for Redis a
In this tutorial, you learn how to:

> [!div class="checklist"]
-> * Set up the necessary tools.
-> * Configure and connect to a cache.
-> * Create an Azure function and deploy code to it.
-> * Confirm the logging of triggers.
+>
+> - Set up the necessary tools.
+> - Configure and connect to a cache.
+> - Create an Azure function and deploy code to it.
+> - Confirm the logging of triggers.
## Prerequisites
Creating the cache can take a few minutes. You can move to the next section whil
1. On the **Azure** tab, create a new function app by selecting the lightning bolt icon in the upper right of the **Workspace** tab.
+1. Select **Create function...**.
:::image type="content" source="media/cache-tutorial-functions-getting-started/cache-add-resource.png" alt-text="Screenshot that shows the icon for adding a new function from VS Code.":::

1. Select the folder that you created to start the creation of a new Azure Functions project. You get several on-screen prompts. Select:
dotnet add package Microsoft.Azure.WebJobs.Extensions.Redis --prerelease
1. Go to your cache in the Azure portal, and then:
   1. On the resource menu, select **Advanced settings**.
+
   1. Scroll down to the **notify-keyspace-events** box and enter **KEA**. **KEA** is a configuration string that enables keyspace notifications for all keys and events. For more information on keyspace configuration strings, see the [Redis documentation](https://redis.io/docs/manual/keyspace-notifications/).
+
   1. Select **Save** at the top of the window.

   :::image type="content" source="media/cache-tutorial-functions-getting-started/cache-keyspace-notifications.png" alt-text="Screenshot of advanced settings for Azure Cache for Redis in the portal.":::
dotnet add package Microsoft.Azure.WebJobs.Extensions.Redis --prerelease
1. Create a new Azure function:
   1. Go back to the **Azure** tab and expand your subscription.
+
   1. Right-click **Function App**, and then select **Create Function App in Azure (Advanced)**.

   :::image type="content" source="media/cache-tutorial-functions-getting-started/cache-create-function-app.png" alt-text="Screenshot of selections for creating a function app in VS Code.":::
dotnet add package Microsoft.Azure.WebJobs.Extensions.Redis --prerelease
:::image type="content" source="media/cache-tutorial-functions-getting-started/cache-log-stream.png" alt-text="Screenshot of a log stream for a function app resource on the resource menu." lightbox="media/cache-tutorial-functions-getting-started/cache-log-stream.png":::
-## Next step
+
+## Related content
-> [!div class="nextstepaction"]
-> [Create serverless event-based architectures by using Azure Cache for Redis and Azure Functions (preview)](cache-how-to-functions.md)
+- [Overview of Azure functions for Azure Cache for Redis](/azure/azure-functions/functions-bindings-cache?tabs=in-process&pivots=programming-language-csharp)
+- [Build a write-behind cache by using Azure Functions](cache-tutorial-write-behind.md)
azure-cache-for-redis Cache Tutorial Write Behind https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-tutorial-write-behind.md
Previously updated : 04/20/2023
-#CustomerIntent: As a < type of user >, I want < what? > so that < why? >.
Last updated : 08/24/2023
+#CustomerIntent: As a developer, I want a practical example of using Azure Cache for Redis triggers with Azure Functions so that I can write applications that tie together a Redis cache and a database like Azure SQL.
Every new item or new price written to the cache is then reflected in a SQL tabl
In this tutorial, you learn how to:

> [!div class="checklist"]
-> * Configure a database, trigger, and connection strings.
-> * Validate that triggers are working.
-> * Deploy code to a function app.
+>
+> - Configure a database, trigger, and connection strings.
+> - Validate that triggers are working.
+> - Deploy code to a function app.
## Prerequisites
In this tutorial, you learn how to:
- Completion of the previous tutorial, [Get started with Azure Functions triggers in Azure Cache for Redis](cache-tutorial-functions-getting-started.md), with these resources provisioned:
  - Azure Cache for Redis instance
  - Azure Functions instance
+ - A working knowledge of using Azure SQL
- Visual Studio Code (VS Code) environment set up with NuGet packages installed

## Create and configure a new SQL database

The SQL database is the backing database for this example. You can create a SQL database through the Azure portal or through your preferred method of automation.
+For more information on creating a SQL database, see [Quickstart: Create a single database - Azure SQL Database](/azure/azure-sql/database/single-database-create-quickstart).
+
This example uses the portal:

1. Enter a database name and select **Create new** to create a new server to hold the database.
+ :::image type="content" source="media/cache-tutorial-write-behind/cache-create-sql.png" alt-text="Screenshot of creating an Azure SQL resource.":::
+ 1. Select **Use SQL authentication** and enter an admin sign-in and password. Be sure to remember these credentials or write them down. When you're deploying a server in production, use Azure Active Directory (Azure AD) authentication instead.
+ :::image type="content" source="media/cache-tutorial-write-behind/cache-sql-authentication.png" alt-text="Screenshot of the authentication information for an Azure SQL resource.":::
+ 1. Go to the **Networking** tab and choose **Public endpoint** as a connection method. Select **Yes** for both firewall rules that appear. This endpoint allows access from your Azure function app.
+ :::image type="content" source="media/cache-tutorial-write-behind/cache-sql-networking.png" alt-text="Screenshot of the networking setting for an Azure SQL resource.":::
+ 1. After validation finishes, select **Review + create** and then **Create**. The SQL database starts to deploy.
-1. After deployment finishes, go to the resource in the Azure portal and select the **Query editor** tab. Create a new table called *inventory* that holds the data you'll write to it. Use the following SQL command to make a new table with two fields:
+1. After deployment finishes, go to the resource in the Azure portal and select the **Query editor** tab. Create a new table called _inventory_ that holds the data you'll write to it. Use the following SQL command to make a new table with two fields:
- `ItemName` lists the name of each item.
- `Price` stores the price of the item.
This example uses the portal:
);
```
-1. After the command finishes running, expand the *Tables* folder and verify that the new table was created.
+ :::image type="content" source="media/cache-tutorial-write-behind/cache-sql-query-table.png" alt-text="Screenshot showing the creation of a table in Query Editor of an Azure SQL resource.":::
+
+1. After the command finishes running, expand the _Tables_ folder and verify that the new table was created.
## Configure the Redis trigger
-First, make a copy of the same VS Code project that you used in the previous tutorial. Copy the folder from the previous tutorial under a new name, such as *RedisWriteBehindTrigger*, and open it in VS Code.
+First, make a copy of the same VS Code project that you used in the previous tutorial. Copy the folder from the previous tutorial under a new name, such as _RedisWriteBehindTrigger_, and open it in VS Code.
In this example, you use the [pub/sub trigger](cache-how-to-functions.md#redispubsubtrigger) to trigger on `keyevent` notifications. The goals of the example are:

- Trigger every time a `SET` event occurs. A `SET` event happens when either new keys are written to the cache instance or the value of a key is changed.
- After a `SET` event is triggered, access the cache instance to find the value of the new key.
-- Determine if the key already exists in the *inventory* table in the SQL database.
+- Determine if the key already exists in the _inventory_ table in the SQL database.
  - If so, update the value of that key.
  - If not, write a new row with the key and its value.
To configure the trigger:
1. Import the `System.Data.SqlClient` NuGet package to enable communication with the SQL database. Go to the VS Code terminal and use the following command:
- ```dos
- dotnet add package System.Data.SqlClient
+ ```terminal
+ dotnet add package System.Data.SqlClient
```
-1. Copy and paste the following code in *redisfunction.cs* to replace the existing code:
+1. Copy and paste the following code in _redisfunction.cs_ to replace the existing code:
```csharp
using Microsoft.Extensions.Logging;
To configure the trigger:
## Configure connection strings
-You need to update the *local.settings.json* file to include the connection string for your SQL database. Add an entry in the `Values` section for `SQLConnectionString`. Your file should look like this example:
+You need to update the _local.settings.json_ file to include the connection string for your SQL database. Add an entry in the `Values` section for `SQLConnectionString`. Your file should look like this example:
```json
{
You need to update the *local.settings.json* file to include the connection stri
}
```
-You need to manually enter the password for your SQL database connection string, because the password isn't pasted automatically.
+To find the Redis connection string, go to the resource menu in the Azure Cache for Redis resource. The string is in the **Access Keys** area of **Settings**.
-To find the Redis connection string, go to the resource menu in the Azure Cache for Redis resource. The string is in the **Access Keys** area.
+To find the SQL database connection string, go to the resource menu in the SQL database resource. Under **Settings**, select **Connection strings**, and then select the **ADO.NET** tab.
+The string is in the **ADO.NET (SQL authentication)** area.
-To find the SQL database connection string, go to the resource menu in the SQL database resource, and then select the **ADO.NET** tab. The string is in the **Connection strings** area.
+You need to manually enter the password for your SQL database connection string, because the password isn't pasted automatically.
> [!IMPORTANT]
> This example is simplified for the tutorial. For production use, we recommend that you use [Azure Key Vault](../service-connector/tutorial-portal-key-vault.md) to store connection string information.
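If you'd rather pull both strings from the command line, a minimal Azure CLI sketch — server, database, and cache names are placeholders; the SQL command returns a template in which you still substitute the user name and password yourself:

```azurecli
# Sketch only: prints the cache access keys and an ADO.NET connection string template.
az redis list-keys --name contoso-cache --resource-group contoso-rg
az sql db show-connection-string --client ado.net --server contoso-sql-server --name contoso-db
```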
To find the SQL database connection string, go to the resource menu in the SQL d
## Build and run the project

1. In VS Code, go to the **Run and debug** tab and run the project.
+
1. Go back to your Azure Cache for Redis instance in the Azure portal, and select the **Console** button to enter the Redis console. Try using some `SET` commands:

   - `SET apple 5.25`
To find the SQL database connection string, go to the resource menu in the SQL d
Confirm that the items written to your Azure Cache for Redis instance appear here.
+ :::image type="content" source="media/cache-tutorial-write-behind/cache-sql-query-result.png" alt-text="Screenshot showing the information has been copied to SQL from the cache instance.":::
+ ## Deploy the code to your function app
+This tutorial builds on the previous tutorial. For more information, see [Deploy code to an Azure function](/azure/azure-cache-for-redis/cache-tutorial-functions-getting-started#deploy-code-to-an-azure-function).
1. In VS Code, go to the **Azure** tab.
1. Find your subscription and expand it. Then, find the **Function App** section and expand that.
To find the SQL database connection string, go to the resource menu in the SQL d
## Add connection string information
+This tutorial builds on the previous tutorial. For more information on the `redisConnectionString`, see [Add connection string information](/azure/azure-cache-for-redis/cache-tutorial-functions-getting-started#add-connection-string-information).
1. Go to your function app in the Azure portal. On the resource menu, select **Configuration**.
1. Select **New application setting**. For **Name**, enter **SQLConnectionString**. For **Value**, enter your connection string.
If you ever want to clear the SQL database table without deleting it, you can us
TRUNCATE TABLE [dbo].[inventory]
```
+
## Summary

This tutorial and [Get started with Azure Functions triggers in Azure Cache for Redis](cache-tutorial-functions-getting-started.md) show how to use Azure Cache for Redis to trigger Azure function apps. They also show how to use Azure Cache for Redis as a write-behind cache with Azure SQL Database. Using Azure Cache for Redis with Azure Functions is a powerful combination that can solve many integration and performance problems.

## Related content

-- [Create serverless event-based architectures by using Azure Cache for Redis and Azure Functions (preview)](cache-how-to-functions.md)
-- [Build a write-behind cache by using Azure Functions](cache-tutorial-write-behind.md)
+- [Overview of Azure functions for Azure Cache for Redis](/azure/azure-functions/functions-bindings-cache?tabs=in-process&pivots=programming-language-csharp)
+- [Tutorial: Get started with Azure Functions triggers in Azure Cache for Redis](cache-tutorial-functions-getting-started.md)
azure-cache-for-redis Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/policy-reference.md
Title: Built-in policy definitions for Azure Cache for Redis
description: Lists Azure Policy built-in policy definitions for Azure Cache for Redis. These built-in policy definitions provide common approaches to managing your Azure resources.
Previously updated : 08/08/2023 Last updated : 08/30/2023
azure-cache-for-redis Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Cache for Redis
description: Lists Azure Policy Regulatory Compliance controls available for Azure Cache for Redis. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources.
Previously updated : 08/03/2023 Last updated : 08/25/2023
azure-functions Add Bindings Existing Function https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/add-bindings-existing-function.md
Title: Connect functions to other Azure services
description: Learn how to add bindings that connect to other Azure services to an existing function in your Azure Functions project.
Previously updated : 04/29/2020- Last updated : 08/18/2023+
+zone_pivot_groups: programming-languages-set-functions
#Customer intent: As a developer, I need to know how to add a binding to an existing function so that I can integrate external services to my function.
When you create a function, language-specific trigger code is added in your proj
## Local development
-When you develop functions locally, you need to update the function code to add bindings. Using Visual Studio Code can make it easier to add bindings to a function.
-
-### Visual Studio Code
-
-When you use Visual Studio Code to develop your function and your function uses a function.json file, the Azure Functions extension can automatically add a binding to an existing function.json file. To learn more, see [Add input and output bindings](functions-develop-vs-code.md#add-input-and-output-bindings).
+When you develop functions locally, you need to update the function code to add bindings. For languages that use function.json, [Visual Studio Code](#visual-studio-code) provides tooling to add bindings to a function.
### Manually add bindings based on examples
-When adding a binding to an existing function, you'll need update both the function code and the function.json configuration file, if used by your language. Both .NET class library and Java functions use attributes instead of function.json, so you'll need to update that instead.
+When adding a binding to an existing function, you need to add binding-specific attributes to the function definition in code.
+When adding a binding to an existing function, you need to add binding-specific annotations to the function definition in code.
+When adding a binding to an existing function, you need to update the function code and add a definition to the function.json configuration file.
+When adding a binding to an existing function, you need to update the function definition, depending on your model:
+
+#### [v2](#tab/python-v2)
+You need to add binding-specific annotations to the function definition in code.
+#### [v1](#tab/python-v1)
+You need to update the function code and add a definition to the function.json configuration file.
++
Use the following table to find examples of specific binding types that you can use to guide you in updating an existing function. First, choose the language tab that corresponds to your project.

[!INCLUDE [functions-bindings-code-example-chooser](../../includes/functions-bindings-code-example-chooser.md)]
+### Visual Studio Code
+
+When you use Visual Studio Code to develop your function and your function uses a function.json file, the Azure Functions extension can automatically add a binding to an existing function.json file. To learn more, see [Add input and output bindings](functions-develop-vs-code.md#add-input-and-output-bindings).
## Azure portal

When you develop your functions in the [Azure portal](https://portal.azure.com), you add input and output bindings in the **Integrate** tab for a given function. The new bindings are added to either the function.json file or to the method attributes, depending on your language. The following articles show examples of how to add bindings to an existing function in the portal:
azure-functions Configure Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/configure-monitoring.md
Update-AzFunctionAppSetting -Name MyAppName -ResourceGroupName MyResourceGroupNa
> [!NOTE] > Overriding the `host.json` through changing app settings will restart your function app.
+> App settings that contain a period aren't supported when running on Linux in an Elastic Premium plan or a Dedicated (App Service) plan. In these hosting environments, you should continue to use the *host.json* file.
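For example, a hedged Azure CLI sketch of overriding a host.json value through an app setting — the function app name is a placeholder, and double underscores stand in for the dots in the JSON path:

```azurecli
# Sketch only: overrides host.json logging.logLevel.default for the app.
az functionapp config appsettings set \
    --name contoso-func \
    --resource-group contoso-rg \
    --settings "AzureFunctionsJobHost__logging__logLevel__default=Information"
```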
## Monitor function apps using Health check
azure-functions Create First Function Arc Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-arc-cli.md
On your local computer:
# [C\#](#tab/csharp)

+ [.NET 6.0 SDK](https://dotnet.microsoft.com/download)
-+ [Azure Functions Core Tools version 4.x](functions-run-local.md?tabs=v4%2Ccsharp#install-the-azure-functions-core-tools)
+ [Azure CLI](/cli/azure/install-azure-cli) version 2.4 or later

# [JavaScript](#tab/nodejs)

+ [Node.js](https://nodejs.org/) version 18. Node.js version 14 is also supported.
-+ [Azure Functions Core Tools version 4.x.](functions-run-local.md?tabs=v4%2Cnode#install-the-azure-functions-core-tools).
+ [Azure CLI](/cli/azure/install-azure-cli) version 2.4 or later

# [Python](#tab/python)

+ [Python versions that are supported by Azure Functions](supported-languages.md#languages-by-runtime-version)
-+ [Azure Functions Core Tools version 4.x.](functions-run-local.md?tabs=v4%2Cpython#install-the-azure-functions-core-tools)
+ [Azure CLI](/cli/azure/install-azure-cli) version 2.4 or later

# [PowerShell](#tab/powershell)

+ [PowerShell 7](/powershell/scripting/install/installing-powershell-core-on-windows)
-+ [Azure Functions Core Tools version 4.x.](functions-run-local.md?tabs=v4%2Cpowershell#install-the-azure-functions-core-tools)
+ [Azure CLI](/cli/azure/install-azure-cli) version 2.4 or later
+ PowerShell 7 requires version 1.2.5 of the connectedk8s Azure CLI extension, or a later version. It also requires version 0.1.3 of the appservice-kube Azure CLI extension, or a later version. Make sure you install the correct version of both of these extensions as you complete this quickstart article.

[!INCLUDE [functions-arc-create-environment](../../includes/functions-arc-create-environment.md)]
azure-functions Create First Function Cli Csharp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-cli-csharp.md
Before you begin, you must have the following:
+ [.NET 6.0 SDK](https://dotnet.microsoft.com/download).
-+ [Azure Functions Core Tools](./functions-run-local.md#v2) version 4.x.
+ One of the following tools for creating Azure resources:

  + [Azure CLI](/cli/azure/install-azure-cli) [version 2.4](/cli/azure/release-notes-azure-cli#april-21-2020) or later.
Before you begin, you must have the following:
You also need an Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio).
-### Prerequisite check
-
-Verify your prerequisites, which depend on whether you're using Azure CLI or Azure PowerShell for creating Azure resources:
-
-# [Azure CLI](#tab/azure-cli)
-
-+ In a terminal or command window, run `func --version` to check that the Azure Functions Core Tools are version 4.x.
-
-+ Run `dotnet --list-sdks` to check that the required versions are installed.
-
-+ Run `az --version` to check that the Azure CLI version is 2.4 or later.
-
-+ Run `az login` to sign in to Azure and verify an active subscription.
-
-# [Azure PowerShell](#tab/azure-powershell)
-
-+ In a terminal or command window, run `func --version` to check that the Azure Functions Core Tools are version 4.x.
-
-+ Run `dotnet --list-sdks` to check that the required versions are installed.
-
-+ Run `(Get-Module -ListAvailable Az).Version` and verify version 5.0 or later.
-
-+ Run `Connect-AzAccount` to sign in to Azure and verify an active subscription.
-- ## Create a local function project
azure-functions Create First Function Cli Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-cli-java.md
Before you begin, you must have the following:
+ An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio).
-+ The [Azure Functions Core Tools](functions-run-local.md#v2) version 4.x.
- + The [Azure CLI](/cli/azure/install-azure-cli) version 2.4 or later. + The [Java Developer Kit](/azure/developer/java/fundamentals/java-support-on-azure), version 8 or 11. The `JAVA_HOME` environment variable must be set to the install location of the correct version of the JDK. + [Apache Maven](https://maven.apache.org), version 3.0 or above.
-### Prerequisite check
-
-+ In a terminal or command window, run `func --version` to check that the Azure Functions Core Tools are version 4.x.
-
-+ Run `az --version` to check that the Azure CLI version is 2.4 or later.
-
-+ Run `az login` to sign in to Azure and verify an active subscription.
## Create a local function project
azure-functions Create First Function Cli Node https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-cli-node.md
Before you begin, you must have the following prerequisites:
+ An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio).
-+ The [Azure Functions Core Tools](./functions-run-local.md#v2) version 4.x.
-
-+ The [Azure Functions Core Tools](./functions-run-local.md#v2) version v4.0.5095 or above
- + One of the following tools for creating Azure resources: + [Azure CLI](/cli/azure/install-azure-cli) version 2.4 or later.
Before you begin, you must have the following prerequisites:
+ [Node.js](https://nodejs.org/) version 18 or above. ::: zone-end
-### Prerequisite check
-
-Verify your prerequisites, which depend on whether you're using Azure CLI or Azure PowerShell for creating Azure resources:
-
-# [Azure CLI](#tab/azure-cli)
-
-+ In a terminal or command window, run `func --version` to check that the Azure Functions Core Tools are version 4.x.
-
-+ In a terminal or command window, run `func --version` to check that the Azure Functions Core Tools are version v4.0.5095 or above.
-
-+ Run `az --version` to check that the Azure CLI version is 2.4 or later.
-
-+ Run `az login` to sign in to Azure and verify an active subscription.
-
-# [Azure PowerShell](#tab/azure-powershell)
-
-+ In a terminal or command window, run `func --version` to check that the Azure Functions Core Tools are version 4.x.
::: zone pivot="nodejs-model-v4"
-+ In a terminal or command window, run `func --version` to check that the Azure Functions Core Tools are version v4.0.5095 or above.
++ Make sure you install version 4.0.5095 of the Core Tools, or a later version. ::: zone-end
-+ Run `(Get-Module -ListAvailable Az).Version` and verify version 5.0 or later.
-
-+ Run `Connect-AzAccount` to sign in to Azure and verify an active subscription.
--- ## Create a local function project In Azure Functions, a function project is a container for one or more individual functions that each responds to a specific trigger. All functions in a project share the same local and hosting configurations. In this section, you create a function project that contains a single function.
azure-functions Create First Function Cli Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-cli-powershell.md
Before you begin, you must have the following:
+ An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio).
-+ The [Azure Functions Core Tools](functions-run-local.md#v2) version 4.x.
- + One of the following tools for creating Azure resources: + The Azure [Az PowerShell module](/powershell/azure/install-azure-powershell) version 9.4.0 or later.
Before you begin, you must have the following:
+ [PowerShell 7.2](/powershell/scripting/install/installing-powershell-core-on-windows)
-### Prerequisite check
-
-Verify your prerequisites, which depend on whether you are using Azure CLI or Azure PowerShell for creating Azure resources:
-
-# [Azure CLI](#tab/azure-cli)
-
-+ In a terminal or command window, run `func --version` to check that the Azure Functions Core Tools are version 4.x.
-
-+ Run `az --version` to check that the Azure CLI version is 2.4 or later.
-
-+ Run `az login` to sign in to Azure and verify an active subscription.
-
-# [Azure PowerShell](#tab/azure-powershell)
-
-+ In a terminal or command window, run `func --version` to check that the Azure Functions Core Tools are version 4.x.
-
-+ Run `(Get-Module -ListAvailable Az).Version` and verify version 9.4.0 or later.
-
-+ Run `Connect-AzAccount` to sign in to Azure and verify an active subscription.
-- ## Create a local function project
azure-functions Create First Function Cli Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-cli-python.md
Title: Create a Python function from the command line - Azure Functions description: Learn how to create a Python function from the command line, then publish the local project to serverless hosting in Azure Functions. Previously updated : 07/15/2023 Last updated : 08/07/2023 ms.devlang: python
Before you begin, you must have the following requirements in place:
+ An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio).
-+ The [Azure Functions Core Tools](functions-run-local.md#v2) version 4.x.
- + One of the following tools for creating Azure resources: + [Azure CLI](/cli/azure/install-azure-cli) version 2.4 or later.
Before you begin, you must have the following requirements in place:
[!INCLUDE [functions-x86-emulation-on-arm64-note](../../includes/functions-x86-emulation-on-arm64-note.md)] + ## <a name="create-venv"></a>Create and activate a virtual environment In a suitable folder, run the following commands to create and activate a virtual environment named `.venv`. Make sure that you're using Python 3.9, 3.8, or 3.7, which are supported by Azure Functions.
py -m venv .venv
You run all subsequent commands in this activated virtual environment.
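
Activation depends on your shell; for reference, the usual commands are:

```console
# Windows
.venv\Scripts\activate

# bash or zsh (Linux, macOS)
source .venv/bin/activate
```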
-## Create a local function project
+## Create a local function
-In Azure Functions, a function project is a container for one or more individual functions that each responds to a specific trigger. All functions in a project share the same local and hosting configurations. In this section, you create a function project that contains a single function.
+In Azure Functions, a function project is a container for one or more individual functions that each responds to a specific trigger. All functions in a project share the same local and hosting configurations.
+In this section, you create a function project that contains a single function.
1. Run the `func init` command as follows to create a functions project in a folder named *LocalFunctionProj* with the specified runtime. ```console
In Azure Functions, a function project is a container for one or more individual
`func new` creates a subfolder matching the function name that contains a code file appropriate to the project's chosen language and a configuration file named *function.json*. ::: zone-end ::: zone pivot="python-mode-decorators"
+In this section, you create a function project and add an HTTP triggered function.
+ 1. Run the `func init` command as follows to create a functions project in a folder named *LocalFunctionProj* with the specified runtime and the specified programming model version. ```console
In Azure Functions, a function project is a container for one or more individual
This folder contains various files for the project, including configuration files named [*local.settings.json*](functions-develop-local.md#local-settings-file) and [*host.json*](functions-host-json.md). Because *local.settings.json* can contain secrets downloaded from Azure, the file is excluded from source control by default in the *.gitignore* file.
-1. The file `function_app.py` can include all functions within your project. To start with, there's already an HTTP function stored in the file.
+1. The file `function_app.py` can include all functions within your project. Open this file and replace the existing contents with the following code that adds an HTTP triggered function named `HttpExample`:
```python
import azure.functions as func

app = func.FunctionApp()

- @app.function_name(name="HttpTrigger1")
+ @app.function_name(name="HttpExample")
@app.route(route="hello")
def test_function(req: func.HttpRequest) -> func.HttpResponse:
-     return func.HttpResponse("HttpTrigger1 function processed a request!")
+     return func.HttpResponse("HttpExample function processed a request!")
```

1. Open the local.settings.json project file and verify that the `AzureWebJobsFeatureFlags` setting has a value of `EnableWorkerIndexing`. This is required for Functions to interpret your project correctly as the Python v2 model. You'll add this same setting to your application settings after you publish your project to Azure.
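
For illustration, a minimal *local.settings.json* with the flag in place might look like this (the storage value below assumes the local storage emulator):

```json
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "FUNCTIONS_WORKER_RUNTIME": "python",
    "AzureWebJobsFeatureFlags": "EnableWorkerIndexing"
  }
}
```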
In the previous example, replace `<FUNCTION_APP_NAME>` and `<RESOURCE_GROUP_NAME
## Verify in Azure
-Run the following command to view near real-time [streaming logs](functions-run-local.md#enable-streaming-logs) in Application Insights in the Azure portal.
+Run the following command to view near real-time streaming logs in Application Insights in the Azure portal.
```console
func azure functionapp logstream <APP_NAME> --browser
```
azure-functions Create First Function Cli Typescript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-cli-typescript.md
Before you begin, you must have the following prerequisites:
+ An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio).
-+ The [Azure Functions Core Tools](./functions-run-local.md#v2) version 4.x.
-+ The [Azure Functions Core Tools](./functions-run-local.md#v2) version v4.0.5095 or above
- + One of the following tools for creating Azure resources: + [Azure CLI](/cli/azure/install-azure-cli) version 2.4 or later.
Before you begin, you must have the following prerequisites:
+ [TypeScript](https://www.typescriptlang.org/) version 4+. ::: zone-end -
-### Prerequisite check
-
-Verify your prerequisites, which depend on whether you're using Azure CLI or Azure PowerShell for creating Azure resources:
-
-# [Azure CLI](#tab/azure-cli)
-
-+ In a terminal or command window, run `func --version` to check that the Azure Functions Core Tools are version 4.x.
-
-+ In a terminal or command window, run `func --version` to check that the Azure Functions Core Tools are version v4.0.5095 or above.
-
-+ Run `az --version` to check that the Azure CLI version is 2.4 or later.
-
-+ Run `az login` to sign in to Azure and verify an active subscription.
-
-# [Azure PowerShell](#tab/azure-powershell)
-
-+ In a terminal or command window, run `func --version` to check that the Azure Functions Core Tools are version 4.x.
::: zone pivot="nodejs-model-v4"
-+ In a terminal or command window, run `func --version` to check that the Azure Functions Core Tools are version v4.0.5095 or above.
++ Make sure you install version 4.0.5095 of the Core Tools, or a later version. ::: zone-end
-+ Run `(Get-Module -ListAvailable Az).Version` and verify version 5.0 or later.
-
-+ Run `Connect-AzAccount` to sign in to Azure and verify an active subscription.
--- ## Create a local function project In Azure Functions, a function project is a container for one or more individual functions that each responds to a specific trigger. All functions in a project share the same local and hosting configurations. In this section, you create a function project that contains a single function.
azure-functions Create First Function Vs Code Csharp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-vs-code-csharp.md
Before you get started, make sure you have the following requirements in place:
[!INCLUDE [functions-requirements-visual-studio-code-csharp](../../includes/functions-requirements-visual-studio-code-csharp.md)] + ## <a name="create-an-azure-functions-project"></a>Create your local project In this section, you use Visual Studio Code to create a local Azure Functions project in C#. Later in this article, you'll publish your function code to Azure.
After checking that the function runs correctly on your local computer, it's tim
## Next steps
-You have used [Visual Studio Code](functions-develop-vs-code.md?tabs=csharp) to create a function app with a simple HTTP-triggered function. In the next article, you expand that function by connecting to either Azure Cosmos DB or Azure Queue Storage. To learn more about connecting to other Azure services, see [Add bindings to an existing function in Azure Functions](add-bindings-existing-function.md?tabs=csharp).
+You have used [Visual Studio Code](functions-develop-vs-code.md?tabs=csharp) to create a function app with a simple HTTP-triggered function. In the next article, you expand that function by connecting to one of the core Azure storage services. To learn more about connecting to other Azure services, see [Add bindings to an existing function in Azure Functions](add-bindings-existing-function.md?tabs=csharp).
> [!div class="nextstepaction"] > [Connect to Azure Cosmos DB](functions-add-output-binding-cosmos-db-vs-code.md?pivots=programming-language-csharp&tabs=isolated-process) > [Connect to Azure Queue Storage](functions-add-output-binding-storage-queue-vs-code.md?pivots=programming-language-csharp&tabs=isolated-process) > [Connect to Azure SQL](functions-add-output-binding-azure-sql-vs-code.md?pivots=programming-language-csharp&tabs=isolated-process)
-[Azure Functions Core Tools]: functions-run-local.md
-[Azure Functions extension for Visual Studio Code]: https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions
azure-functions Create First Function Vs Code Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-vs-code-java.md
Title: Create a Java function using Visual Studio Code - Azure Functions description: Learn how to create a Java function, then publish the local project to serverless hosting in Azure Functions using the Azure Functions extension in Visual Studio Code. Previously updated : 06/22/2022 Last updated : 08/03/2023 adobe-target: true adobe-target-activity: DocsExp–386541–A/B–Enhanced-Readability-Quickstarts–2.19.2021 adobe-target-experience: Experience B
Before you get started, make sure you have the following requirements in place:
[!INCLUDE [functions-requirements-visual-studio-code-java](../../includes/functions-requirements-visual-studio-code-java.md)] + ## <a name="create-an-azure-functions-project"></a>Create your local project In this section, you use Visual Studio Code to create a local Azure Functions project in Java. Later in this article, you'll publish your function code to Azure.
azure-functions Create First Function Vs Code Node https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-vs-code-node.md
Title: Create a JavaScript function using Visual Studio Code - Azure Functions description: Learn how to create a JavaScript function, then publish the local Node.js project to serverless hosting in Azure Functions using the Azure Functions extension in Visual Studio Code. Previously updated : 02/06/2023 Last updated : 08/03/2023 adobe-target: true adobe-target-activity: DocsExp–386541–A/B–Enhanced-Readability-Quickstarts–2.19.2021 adobe-target-experience: Experience B
Before you get started, make sure you have the following requirements in place:
[!INCLUDE [functions-requirements-visual-studio-code-node-v4](../../includes/functions-requirements-visual-studio-code-node-v4.md)] ::: zone-end + ## <a name="create-an-azure-functions-project"></a>Create your local project In this section, you use Visual Studio Code to create a local Azure Functions project in JavaScript. Later in this article, you publish your function code to Azure.
azure-functions Create First Function Vs Code Other https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-vs-code-other.md
Title: Create a function in Go or Rust using Visual Studio Code - Azure Functions description: Learn how to create a Go function as an Azure Functions custom handler, then publish the local project to serverless hosting in Azure Functions using the Azure Functions extension in Visual Studio Code. Previously updated : 06/22/2022 Last updated : 08/03/2023 ms.devlang: golang, rust
Before you get started, make sure you have the following requirements in place:
+ The [Azure Functions extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions) for Visual Studio Code.
-+ The [Azure Functions Core Tools](./functions-run-local.md#v2) version 3.x. Use the `func --version` command to check that it is correctly installed.
- + [Go](https://go.dev/doc/install), latest version recommended. Use the `go version` command to check your version. # [Rust](#tab/rust)
Before you get started, make sure you have the following requirements in place:
+ The [Azure Functions extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions) for Visual Studio Code.
-+ The [Azure Functions Core Tools](./functions-run-local.md#v2) version 3.x. Use the `func --version` command to check that it is correctly installed.
- + Rust toolchain using [rustup](https://www.rust-lang.org/tools/install). Use the `rustc --version` command to check your version. + ## <a name="create-an-azure-functions-project"></a>Create your local project In this section, you use Visual Studio Code to create a local Azure Functions custom handlers project. Later in this article, you'll publish your function code to Azure.
azure-functions Create First Function Vs Code Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-vs-code-powershell.md
Before you get started, make sure you have the following requirements in place:
[!INCLUDE [functions-requirements-visual-studio-code-powershell](../../includes/functions-requirements-visual-studio-code-powershell.md)] + ## <a name="create-an-azure-functions-project"></a>Create your local project In this section, you use Visual Studio Code to create a local Azure Functions project in PowerShell. Later in this article, you'll publish your function code to Azure.
azure-functions Create First Function Vs Code Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-vs-code-python.md
Before you begin, make sure that you have the following requirements in place:
+ An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio).
-+ The [Azure Functions Core Tools](functions-run-local.md#install-the-azure-functions-core-tools) version 4.x.
-+ The [Azure Functions Core Tools](functions-run-local.md#install-the-azure-functions-core-tools), version 4.0.4785 or a later version.
+ Python versions that are [supported by Azure Functions](supported-languages.md#languages-by-runtime-version). For more information, see [How to install Python](https://wiki.python.org/moin/BeginnersGuide/Download). + [Visual Studio Code](https://code.visualstudio.com/) on one of the [supported platforms](https://code.visualstudio.com/docs/supporting/requirements#_platforms).
Before you begin, make sure that you have the following requirements in place:
[!INCLUDE [functions-x86-emulation-on-arm64-note](../../includes/functions-x86-emulation-on-arm64-note.md)] + ## <a name="create-an-azure-functions-project"></a>Create your local project In this section, you use Visual Studio Code to create a local Azure Functions project in Python. Later in this article, you'll publish your function code to Azure.
azure-functions Dotnet Isolated In Process Differences https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/dotnet-isolated-in-process-differences.md
Use the following table to compare feature and functional differences between th
| Feature/behavior | In-process<sup>3</sup> | Isolated worker process | | - | - | - |
-| [Supported .NET versions](#supported-versions) | Long Term Support (LTS) versions | Long Term Support (LTS) versions,<br/>Standard Term Support (STS) versions,<br/>.NET Framework |
+| [Supported .NET versions](#supported-versions) | Long Term Support (LTS) versions<sup>6</sup> | Long Term Support (LTS) versions<sup>6</sup>,<br/>Standard Term Support (STS) versions,<br/>.NET Framework |
| Core packages | [Microsoft.NET.Sdk.Functions](https://www.nuget.org/packages/Microsoft.NET.Sdk.Functions/) | [Microsoft.Azure.Functions.Worker](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker/)<br/>[Microsoft.Azure.Functions.Worker.Sdk](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Sdk) | | Binding extension packages | [Microsoft.Azure.WebJobs.Extensions.*](https://www.nuget.org/packages?q=Microsoft.Azure.WebJobs.Extensions) | [Microsoft.Azure.Functions.Worker.Extensions.*](https://www.nuget.org/packages?q=Microsoft.Azure.Functions.Worker.Extensions) | | Durable Functions | [Supported](durable/durable-functions-overview.md) | [Supported](durable/durable-functions-isolated-create-first-csharp.md?pivots=code-editor-visualstudio) (Support does not yet include Durable Entities) | | Model types exposed by bindings | Simple types<br/>[JSON serializable](/dotnet/api/system.text.json.jsonserializeroptions) types<br/>Arrays/enumerations<br/>Service SDK types<sup>4</sup> | Simple types<br/>JSON serializable types<br/>Arrays/enumerations<br/>[Service SDK types](dotnet-isolated-process-guide.md#sdk-types)<sup>4</sup> |
-| HTTP trigger model types| [HttpRequest] / [IActionResult]<sup>5</sup><br/>[HttpRequestMessage] / [HttpResponseMessage] | [HttpRequestData] / [HttpResponseData]<br/>[HttpRequest] / [IActionResult] (as a [public preview extension][aspnetcore-integration])<sup>5</sup>|
+| HTTP trigger model types| [HttpRequest] / [IActionResult]<sup>5</sup><br/>[HttpRequestMessage] / [HttpResponseMessage] | [HttpRequestData] / [HttpResponseData]<br/>[HttpRequest] / [IActionResult] (using [ASP.NET Core integration][aspnetcore-integration])<sup>5</sup>|
| Output binding interactions | Return values (single output only),<br/>`out` parameters,<br/>`IAsyncCollector` | Return values in an expanded model with:<br/> - single or [multiple outputs](dotnet-isolated-process-guide.md#multiple-output-bindings)<br/> - arrays of outputs|
-| Imperative bindings<sup>1</sup> | [Supported](functions-dotnet-class-library.md#binding-at-runtime) | Not supported |
+| Imperative bindings<sup>1</sup> | [Supported](functions-dotnet-class-library.md#binding-at-runtime) | Not supported - instead [work with SDK types directly](./dotnet-isolated-process-guide.md#register-azure-clients) |
| Dependency injection | [Supported](functions-dotnet-dependency-injection.md) | [Supported](dotnet-isolated-process-guide.md#dependency-injection) (improved model consistent with .NET ecosystem) | | Middleware | Not supported | [Supported](dotnet-isolated-process-guide.md#middleware) | | Logging | [ILogger] passed to the function<br/>[ILogger&lt;T&gt;] via [dependency injection](functions-dotnet-dependency-injection.md) | [ILogger&lt;T&gt;]/[ILogger] obtained from [FunctionContext](/dotnet/api/microsoft.azure.functions.worker.functioncontext) or via [dependency injection](dotnet-isolated-process-guide.md#dependency-injection)| | Application Insights dependencies | [Supported](functions-monitoring.md#dependencies) | [Supported](./dotnet-isolated-process-guide.md#application-insights) | | Cancellation tokens | [Supported](functions-dotnet-class-library.md#cancellation-tokens) | [Supported](dotnet-isolated-process-guide.md#cancellation-tokens) |
-| Cold start times<sup>2</sup> | (Baseline) | Additionally includes process launch |
+| Cold start times<sup>2</sup> | Optimized | [Configurable optimizations (preview)](./dotnet-isolated-process-guide.md#performance-optimizations) |
| ReadyToRun | [Supported](functions-dotnet-class-library.md#readytorun) | [Supported](dotnet-isolated-process-guide.md#readytorun) | <sup>1</sup> When you need to interact with a service using parameters determined at runtime, using the corresponding service SDKs directly is recommended over using imperative bindings. The SDKs are less verbose, cover more scenarios, and have advantages for error handling and debugging purposes. This recommendation applies to both models.
Use the following table to compare feature and functional differences between th
<sup>3</sup> C# Script functions also run in-process and use the same libraries as in-process class library functions. For more information, see the [Azure Functions C# script (.csx) developer reference](functions-reference-csharp.md).
-<sup>4</sup> Service SDK types include types from the [Azure SDK for .NET](/dotnet/azure/sdk/azure-sdk-for-dotnet) such as [BlobClient](/dotnet/api/azure.storage.blobs.blobclient). For the isolated process model, support from some extensions is currently in preview, and Service Bus triggers do not yet support message settlement scenarios.
+<sup>4</sup> Service SDK types include types from the [Azure SDK for .NET](/dotnet/azure/sdk/azure-sdk-for-dotnet) such as [BlobClient](/dotnet/api/azure.storage.blobs.blobclient). For the isolated process model, Service Bus triggers do not yet support message settlement scenarios.
<sup>5</sup> ASP.NET Core types are not supported for .NET Framework.
+<sup>6</sup> The isolated worker model supports .NET 8 as a preview, currently for Linux applications only. .NET 8 is not yet available for the in-process model. See the [Azure Functions Roadmap Update post](https://aka.ms/azure-functions-dotnet-roadmap) for more information about .NET 8 plans.
+ [HttpRequest]: /dotnet/api/microsoft.aspnetcore.http.httprequest [IActionResult]: /dotnet/api/microsoft.aspnetcore.mvc.iactionresult [HttpRequestData]: /dotnet/api/microsoft.azure.functions.worker.http.httprequestdata?view=azure-dotnet&preserve-view=true
Use the following table to compare feature and functional differences between th
[HttpRequestMessage]: /dotnet/api/system.net.http.httprequestmessage [HttpResponseMessage]: /dotnet/api/system.net.http.httpresponsemessage
-[aspnetcore-integration]: ./dotnet-isolated-process-guide.md#aspnet-core-integration-preview
+[aspnetcore-integration]: ./dotnet-isolated-process-guide.md#aspnet-core-integration
[!INCLUDE [functions-dotnet-supported-versions](../../includes/functions-dotnet-supported-versions.md)]
azure-functions Dotnet Isolated Process Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/dotnet-isolated-process-guide.md
The following example performs clean-up actions if a cancellation request has be
:::code language="csharp" source="~/azure-functions-dotnet-worker/samples/Net7Worker/EventHubCancellationToken.cs" id="docsnippet_cancellation_token_cleanup":::
-## ReadyToRun
+## Performance optimizations
-You can compile your function app as [ReadyToRun binaries](/dotnet/core/deploying/ready-to-run). ReadyToRun is a form of ahead-of-time compilation that can improve startup performance to help reduce the effect of [cold-start](event-driven-scaling.md#cold-start) when running in a [Consumption plan](consumption-plan.md).
+This section outlines options you can enable to improve performance around [cold start](./event-driven-scaling.md#cold-start).
-ReadyToRun is available in .NET 6 and later versions and requires [version 4.0 or later](functions-versions.md) of the Azure Functions runtime.
+### Placeholders (preview)
-To compile your project as ReadyToRun, update your project file by adding the `<PublishReadyToRun>` and `<RuntimeIdentifier>` elements. The following is the configuration for publishing to a Windows 32-bit function app.
+Placeholders are a platform capability that improves cold start. Normally, you don't need to be aware of them, but while placeholder support for the .NET isolated worker model is in preview, they require some opt-in configuration. Placeholders require .NET 6 or later. To enable placeholders:
+
+- Set the `WEBSITE_USE_PLACEHOLDER_DOTNETISOLATED` application setting to "1"
+- Ensure that the `netFrameworkVersion` property of the function app matches your project's target framework, which must be .NET 6 or later.
+- Update your project file:
+ - Upgrade [Microsoft.Azure.Functions.Worker] to version 1.19.0 or later
+ - Upgrade [Microsoft.Azure.Functions.Worker.Sdk] to version 1.14.1 or later
+ - Add a framework reference to `Microsoft.AspNetCore.App`
+ - Set the property `FunctionsEnableWorkerIndexing` to "True".
+ - Set the property `FunctionsAutoRegisterGeneratedMetadataProvider` to "True"
+
+> [!NOTE]
+> Setting `FunctionsEnableWorkerIndexing` to "True" may cause an issue when debugging locally using version 4.0.5274 or earlier of the [Azure Functions Core Tools](./functions-run-local.md). The issue manifests with the debugger not being able to attach. If you encounter this issue, remove the `FunctionsEnableWorkerIndexing` property during local testing.
+
+The following CLI commands will set the application setting and update the `netFrameworkVersion` property. Replace `<groupName>` with the name of the resource group, and replace `<appName>` with the name of your function app. Replace `<framework>` with the appropriate version string, such as "v6.0" or "v7.0", according to your target .NET version.
+
+```azurecli
+az functionapp config appsettings set -g <groupName> -n <appName> --settings 'WEBSITE_USE_PLACEHOLDER_DOTNETISOLATED=1'
+az functionapp config set -g <groupName> -n <appName> --net-framework-version <framework>
+```
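
To confirm the changes, you can read both values back; a quick sketch using the same placeholders as above:

```azurecli
# Should print "1" once the placeholder opt-in setting is applied
az functionapp config appsettings list -g <groupName> -n <appName> --query "[?name=='WEBSITE_USE_PLACEHOLDER_DOTNETISOLATED'].value" -o tsv

# Should print the framework version you set, such as "v6.0"
az functionapp config show -g <groupName> -n <appName> --query "netFrameworkVersion" -o tsv
```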
+
+The following example shows a project file with the appropriate changes in place:
+
+```xml
+<Project Sdk="Microsoft.NET.Sdk">
+ <PropertyGroup>
+ <TargetFramework>net6.0</TargetFramework>
+ <AzureFunctionsVersion>v4</AzureFunctionsVersion>
+ <OutputType>Exe</OutputType>
+ <ImplicitUsings>enable</ImplicitUsings>
+ <Nullable>enable</Nullable>
+ <FunctionsEnableWorkerIndexing>True</FunctionsEnableWorkerIndexing>
+ <FunctionsAutoRegisterGeneratedMetadataProvider>True</FunctionsAutoRegisterGeneratedMetadataProvider>
+ </PropertyGroup>
+ <ItemGroup>
+ <FrameworkReference Include="Microsoft.AspNetCore.App" />
+ <PackageReference Include="Microsoft.Azure.Functions.Worker" Version="1.19.0" />
+ <PackageReference Include="Microsoft.Azure.Functions.Worker.Sdk" Version="1.14.1" />
+ </ItemGroup>
+ <ItemGroup>
+ <None Update="host.json">
+ <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
+ </None>
+ <None Update="local.settings.json">
+ <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
+ <CopyToPublishDirectory>Never</CopyToPublishDirectory>
+ </None>
+ </ItemGroup>
+ <ItemGroup>
+ <Using Include="System.Threading.ExecutionContext" Alias="ExecutionContext" />
+ </ItemGroup>
+</Project>
+```
+
+### Optimized executor (preview)
+
+The function executor is the platform component that runs your function invocations. By default, it uses reflection, but a newer version that removes this performance overhead is available in preview. Normally, you don't need to be aware of this component, but during the preview period of the new version, it requires some opt-in configuration.
+
+To enable the optimized executor, you must update your project file:
+
+- Upgrade [Microsoft.Azure.Functions.Worker] to version 1.19.0 or later
+- Upgrade [Microsoft.Azure.Functions.Worker.Sdk] to version 1.14.1 or later
+- Set the property `FunctionsEnableExecutorSourceGen` to "True"
+
+The following example shows a project file with the appropriate changes in place:
+
+```xml
+<Project Sdk="Microsoft.NET.Sdk">
+ <PropertyGroup>
+ <TargetFramework>net6.0</TargetFramework>
+ <AzureFunctionsVersion>v4</AzureFunctionsVersion>
+ <OutputType>Exe</OutputType>
+ <ImplicitUsings>enable</ImplicitUsings>
+ <Nullable>enable</Nullable>
+ <FunctionsEnableExecutorSourceGen>True</FunctionsEnableExecutorSourceGen>
+ </PropertyGroup>
+ <ItemGroup>
+ <PackageReference Include="Microsoft.Azure.Functions.Worker" Version="1.19.0" />
+ <PackageReference Include="Microsoft.Azure.Functions.Worker.Sdk" Version="1.14.1" />
+ </ItemGroup>
+ <ItemGroup>
+ <None Update="host.json">
+ <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
+ </None>
+ <None Update="local.settings.json">
+ <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
+ <CopyToPublishDirectory>Never</CopyToPublishDirectory>
+ </None>
+ </ItemGroup>
+ <ItemGroup>
+ <Using Include="System.Threading.ExecutionContext" Alias="ExecutionContext" />
+ </ItemGroup>
+</Project>
+```
+
+### ReadyToRun
+
+You can compile your function app as [ReadyToRun binaries](/dotnet/core/deploying/ready-to-run). ReadyToRun is a form of ahead-of-time compilation that can improve startup performance to help reduce the effect of cold starts when running in a [Consumption plan](consumption-plan.md). ReadyToRun is available in .NET 6 and later versions and requires [version 4.0 or later](functions-versions.md) of the Azure Functions runtime.
+
+ReadyToRun requires you to build the project against the runtime architecture of the hosting app. **If these are not aligned, your app will encounter an error at startup.** Select your runtime identifier from the table below:
+
+|Operating System | App is 32-bit | Runtime identifier |
+|-|-|-|
+| Windows | True | `win-x86` |
+| Windows | False | `win-x64` |
+| Linux | True | N/A (not supported) |
+| Linux | False | `linux-x64` |
+
+To check if your Windows app is 32-bit or 64-bit, you can run the following CLI command, substituting `<group_name>` with the name of your resource group and `<app_name>` with the name of your application. An output of "true" indicates that the app is 32-bit, and "false" indicates 64-bit.
+
+```azurecli
+ az functionapp config show -g <group_name> -n <app_name> --query "use32BitWorkerProcess"
+```
+
+To compile your project as ReadyToRun, update your project file by adding the `<PublishReadyToRun>` and `<RuntimeIdentifier>` elements. The following example shows a configuration for publishing to a Windows 32-bit function app.
```xml <PropertyGroup>
To compile your project as ReadyToRun, update your project file by adding the `<
</PropertyGroup> ```
+If you don't want to set the `<RuntimeIdentifier>` as part of the project file, you can also configure this as part of the publish gesture itself. For example, with a Windows 32-bit function app, the .NET CLI command would be:
+
+```dotnetcli
+dotnet publish --runtime win-x86
+```
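
If you prefer, the equivalent properties can also be passed on the command line instead of in the project file; a sketch for the Windows 32-bit case:

```dotnetcli
dotnet publish -c Release --runtime win-x86 -p:PublishReadyToRun=true
```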
+
+In Visual Studio, set the **Target Runtime** option in the publish profile to the correct runtime identifier. If it's set to the default value of **Portable**, ReadyToRun isn't used.
+ ## Execution context .NET isolated passes a [FunctionContext] object to your function methods. This object lets you get an [`ILogger`][ILogger] instance to write to the logs by calling the [GetLogger] method and supplying a `categoryName` string. To learn more, see [Logging](#logging).
The trigger attribute specifies the trigger type and binds input data to a metho
The `Function` attribute marks the method as a function entry point. The name must be unique within a project, start with a letter and only contain letters, numbers, `_`, and `-`, up to 127 characters in length. Project templates often create a method named `Run`, but the method name can be any valid C# method name.
-Bindings can provide data as strings, arrays, and serializable types, such as plain old class objects (POCOs). You can also bind to [types from some service SDKs](#sdk-types).
-
-For HTTP triggers, you must use [HttpRequestData] and [HttpResponseData] to access the request and response data. This is because you don't have access to the original HTTP request and response objects when using .NET Functions isolated worker process.
+Bindings can provide data as strings, arrays, and serializable types, such as plain old class objects (POCOs). You can also bind to [types from some service SDKs](#sdk-types). For HTTP triggers, see the [HTTP trigger](#http-trigger) section below.
For a complete set of reference samples for using triggers and bindings with isolated worker process functions, see the [binding extensions reference sample](https://github.com/Azure/azure-functions-dotnet-worker/blob/main/samples/Extensions).
For some service-specific binding types, binding data can be provided using type
| Dependency | Version requirement | |-|-|
-|[Microsoft.Azure.Functions.Worker]| For **Generally Available** extensions in the table below: 1.18.0 or later<br/>For extensions that have **preview support**: 1.15.0-preview1 |
-|[Microsoft.Azure.Functions.Worker.Sdk]|For **Generally Available** extensions in the table below: 1.13.0 or later<br/>For extensions that have **preview support**: 1.11.0-preview1 |
+|[Microsoft.Azure.Functions.Worker]| 1.18.0 or later |
+|[Microsoft.Azure.Functions.Worker.Sdk]| 1.13.0 or later |
When testing SDK types locally on your machine, you will also need to use [Azure Functions Core Tools version 4.0.5000 or later](./functions-run-local.md). You can check your current version using the command `func version`.
Each trigger and binding extension also has its own minimum version requirement,
| [Azure Service Bus][servicebus-sdk-types] | **Generally Available**<sup>2</sup> | _Input binding does not exist_ | _SDK types not recommended.<sup>1</sup>_ | | [Azure Event Hubs][eventhub-sdk-types] | **Generally Available** | _Input binding does not exist_ | _SDK types not recommended.<sup>1</sup>_ | | [Azure Cosmos DB][cosmos-sdk-types] | _SDK types not used<sup>3</sup>_ | **Generally Available** | _SDK types not recommended.<sup>1</sup>_ |
-| [Azure Tables][tables-sdk-types] | _Trigger does not exist_ | **Preview support** | _SDK types not recommended.<sup>1</sup>_ |
+| [Azure Tables][tables-sdk-types] | _Trigger does not exist_ | **Generally Available** | _SDK types not recommended.<sup>1</sup>_ |
| [Azure Event Grid][eventgrid-sdk-types] | **Generally Available** | _Input binding does not exist_ | _SDK types not recommended.<sup>1</sup>_ | [blob-sdk-types]: ./functions-bindings-storage-blob.md?tabs=isolated-process%2Cextensionv5&pivots=programming-language-csharp#binding-types
The [SDK type binding samples](https://github.com/Azure/azure-functions-dotnet-w
[HTTP triggers](./functions-bindings-http-webhook-trigger.md) allow a function to be invoked by an HTTP request. There are two different approaches that can be used:
+- An [ASP.NET Core integration model](#aspnet-core-integration) that uses concepts familiar to ASP.NET Core developers
- A [built-in model](#built-in-http-model) which does not require additional dependencies and uses custom types for HTTP requests and responses
-- An [ASP.NET Core integration model (Preview)](#aspnet-core-integration-preview) that uses concepts familiar to ASP.NET Core developers
-#### Built-in HTTP model
-
-In the built-in model, the system translates the incoming HTTP request message into an [HttpRequestData] object that is passed to the function. This object provides data from the request, including `Headers`, `Cookies`, `Identities`, `URL`, and optionally a message `Body`. This object is a representation of the HTTP request but is not directly connected to the underlying HTTP listener or the received message.
+#### ASP.NET Core integration
-Likewise, the function returns an [HttpResponseData] object, which provides data used to create the HTTP response, including message `StatusCode`, `Headers`, and optionally a message `Body`.
-
-The following example demonstrates the use of `HttpRequestData` and `HttpResponseData`:
--
-#### ASP.NET Core integration (preview)
-
-This section shows how to work with the underlying HTTP request and response objects using types from ASP.NET Core including [HttpRequest], [HttpResponse], and [IActionResult]. Use of this feature for local testing requires [Core Tools version 4.0.5198 or later](./functions-run-local.md). This model is not available to [apps targeting .NET Framework][supported-versions], which should instead leverage the [built-in model](#built-in-http-model).
+This section shows how to work with the underlying HTTP request and response objects using types from ASP.NET Core, including [HttpRequest], [HttpResponse], and [IActionResult]. Local testing of this feature requires [Core Tools version 4.0.5240 or later](./functions-run-local.md); if you're using Core Tools version 4.0.5274 or earlier, you must also set `AzureWebJobsFeatureFlags` to "EnableHttpProxying" in `local.settings.json`. This model isn't available to [apps targeting .NET Framework][supported-versions], which should instead use the [built-in model](#built-in-http-model).
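
When the flag is needed (Core Tools version 4.0.5274 or earlier), a minimal *local.settings.json* sketch looks like this (the worker runtime name below assumes a .NET isolated app):

```json
{
  "IsEncrypted": false,
  "Values": {
    "FUNCTIONS_WORKER_RUNTIME": "dotnet-isolated",
    "AzureWebJobsFeatureFlags": "EnableHttpProxying"
  }
}
```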
> [!NOTE] > Not all features of ASP.NET Core are exposed by this model. Specifically, the ASP.NET Core middleware pipeline and routing capabilities are not available.
-1. Add a reference to the [Microsoft.Azure.Functions.Worker.Extensions.Http.AspNetCore NuGet package, version 1.0.0-preview4 or later](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Extensions.Http.AspNetCore/1.0.0-preview4) to your project.
+1. Add a reference to the [Microsoft.Azure.Functions.Worker.Extensions.Http.AspNetCore NuGet package, version 1.0.0 or later](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Extensions.Http.AspNetCore/) to your project.
- You must also update your project to use [version 1.11.0 or later of Microsoft.Azure.Functions.Worker.Sdk](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Sdk/1.11.0) and [version 1.16.0 or later of Microsoft.Azure.Functions.Worker](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker/1.16.0).
+ You must also update your project to use [version 1.11.0 or later of Microsoft.Azure.Functions.Worker.Sdk](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Sdk/) and [version 1.16.0 or later of Microsoft.Azure.Functions.Worker](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker/).
2. In your `Program.cs` file, update the host builder configuration to use `ConfigureFunctionsWebApplication()` instead of `ConfigureFunctionsWorkerDefaults()`. The following example shows a minimal setup without other customizations:
This section shows how to work with the underlying HTTP request and response obj
} ```
-4. Enable the feature by setting `AzureWebJobsFeatureFlags` to include "EnableHttpProxying". When hosted in a function app, configure this as an application setting. When running locally, set this value in `local.settings.json`.
+#### Built-in HTTP model
+
+In the built-in model, the system translates the incoming HTTP request message into an [HttpRequestData] object that is passed to the function. This object provides data from the request, including `Headers`, `Cookies`, `Identities`, `URL`, and optionally a message `Body`. This object is a representation of the HTTP request but is not directly connected to the underlying HTTP listener or the received message.
+
+Likewise, the function returns an [HttpResponseData] object, which provides data used to create the HTTP response, including message `StatusCode`, `Headers`, and optionally a message `Body`.
+
+The following example demonstrates the use of `HttpRequestData` and `HttpResponseData`:
++ ## Logging
You can configure your isolated process application to emit logs directly [Appli
```dotnetcli dotnet add package Microsoft.ApplicationInsights.WorkerService
-dotnet add package Microsoft.Azure.Functions.Worker.ApplicationInsights --prerelease
+dotnet add package Microsoft.Azure.Functions.Worker.ApplicationInsights
``` You then need to call `AddApplicationInsightsTelemetryWorkerService()` and `ConfigureFunctionsApplicationInsights()` during service configuration in your `Program.cs` file:
azure-functions Durable Functions Bindings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-bindings.md
Make sure to choose your Durable Functions development language at the top of th
> [!IMPORTANT] > This article supports both Python v1 and Python v2 programming models for Durable Functions.
-> The Python v2 programming model is currently in preview.
## Python v2 programming model
Durable Functions provides preview support of the new [Python v2 programming mod
Using [Extension Bundles](../functions-bindings-register.md#extension-bundles) isn't currently supported for the v2 model with Durable Functions. You'll instead need to manage your extensions manually as follows:
-1. Remove the `extensionBundle` section of your `host.json` as described in [this Functions article](../functions-run-local.md#install-extensions).
+1. Remove the `extensionBundle` section of your `host.json` file (a sketch of this section appears after these steps).
-1. Run the `func extensions install --package Microsoft.Azure.WebJobs.Extensions.DurableTask --version 2.9.1` command on your terminal. This installs the Durable Functions extension for your app, which allows you to use the v2 model preview.
+1. Run the `func extensions install --package Microsoft.Azure.WebJobs.Extensions.DurableTask --version 2.9.1` command on your terminal. This installs the Durable Functions extension for your app, which allows you to use the v2 model preview. For more information, see [func extensions install](../functions-core-tools-reference.md#func-extensions-install).
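
For reference, the `extensionBundle` section removed in the first step typically looks like the following; the bundle ID is standard, while the version range varies by project:

```json
"extensionBundle": {
  "id": "Microsoft.Azure.Functions.ExtensionBundle",
  "version": "[4.*, 5.0.0)"
}
```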
::: zone-end
azure-functions Durable Functions Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-overview.md
Last updated 02/13/2023 -+ zone_pivot_groups: df-languages #Customer intent: As a < type of user >, I want < what? > so that < why? >.
azure-functions Durable Functions Powershell V2 Sdk Migration Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-powershell-v2-sdk-migration-guide.md
By creating a standalone DF PowerShell SDK package, we're able to overcome these
## Deprecation plan for the built-in DF PowerShell SDK
-The built-in DF SDK in the PowerShell worker will remain available for PowerShell 7.2 and prior releases. This means that existing apps will be able to continue using the built-in SDK as long as they continue using PowerShell 7.2 or an older release.
+The built-in DF SDK in the PowerShell worker will remain available for PowerShell 7.4, 7.2, and prior releases.
-Starting from PowerShell 7.4 onwards, the PowerShell worker will not contain a built-in DF SDK. Therefore, users will need to install the SDK separately using this standalone package; the installation steps are described below.
+We plan to eventually release a new **major** version of the PowerShell worker without the built-in SDK. At that point, users would need to install the SDK separately using this standalone package; the installation steps are described below.
## Install and enable the SDK
The standalone PowerShell SDK requires the following minimum versions:
### Opt in to the standalone DF SDK
-The following application setting is required to run the standalone PowerShell SDK while it is in preview:
+The following application setting is required to run the standalone PowerShell SDK:
- Name: `ExternalDurablePowerShellSDK` - Value: `"true"`
+This application setting will disable the built-in Durable SDK for PowerShell versions 7.2 and above, forcing the worker to use the external SDK.
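
In configuration terms, that's a single name/value pair; a minimal *local.settings.json* sketch for local development (the worker runtime name below assumes a PowerShell app):

```json
{
  "IsEncrypted": false,
  "Values": {
    "FUNCTIONS_WORKER_RUNTIME": "powershell",
    "ExternalDurablePowerShellSDK": "true"
  }
}
```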
If you're running locally using [Azure Functions Core Tools](../functions-run-local.md), add this setting to your `local.settings.json` file as sketched above. If you're running in Azure, follow these steps with the tool of your choice: # [Azure CLI](#tab/azure-cli-set-indexing-flag)
azure-functions Quickstart Mssql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/quickstart-mssql.md
# Configure Durable Functions with the Microsoft SQL Server (MSSQL) storage provider
-Durable Functions supports several [storage providers](durable-functions-storage-providers.md), also known as "backends", for storing orchestration and entity runtime state. By default, new projects are configured to use the [Azure Storage provider](durable-functions-storage-providers.md#azure-storage). In this article, we walk through how to configure a Durable Functions app to utilize the [MSSQL storage provider](durable-functions-storage-providers.md#mssql).
+Durable Functions supports several [storage providers](durable-functions-storage-providers.md), also known as _backends_, for storing orchestration and entity runtime state. By default, new projects are configured to use the [Azure Storage provider](durable-functions-storage-providers.md#azure-storage). In this article, we walk through how to configure a Durable Functions app to utilize the [MSSQL storage provider](durable-functions-storage-providers.md#mssql).
> [!NOTE] > The MSSQL backend was designed to maximize application portability and control over your data. It uses [Microsoft SQL Server](https://www.microsoft.com/sql-server/) to persist all task hub state so that users get the benefits of modern, enterprise-grade DBMS infrastructure. To learn more about when to use the MSSQL storage provider, see the [storage providers](durable-functions-storage-providers.md) documentation.
If this isn't the case, we suggest you start with one of the following articles,
> [!NOTE] > If your app uses [Extension Bundles](../functions-bindings-register.md#extension-bundles), you should ignore this section as Extension Bundles removes the need for manual Extension management.
-You'll need to install the latest version of the MSSQL storage provider Extension on NuGet. This usually means including a reference to it in your `.csproj` file and building the project.
+You need to install the latest version of the MSSQL storage provider Extension on NuGet, which for .NET means adding a reference to it in your `.csproj` file and building the project. You can also use the [`dotnet add package`](/dotnet/core/tools/dotnet-add-package) command to add extension packages.
The Extension package to install depends on the .NET worker you're using: - For the _in-process_ .NET worker, install [`Microsoft.DurableTask.SqlServer.AzureFunctions`](https://www.nuget.org/packages/Microsoft.DurableTask.SqlServer.AzureFunctions).
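
For example, with the in-process worker, a minimal invocation is (append `--version` to pin a specific release):

```dotnetcli
dotnet add package Microsoft.DurableTask.SqlServer.AzureFunctions
```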
You can install the Extension using the following [Azure Functions Core Tools CL
func extensions install --package <package name depending on your worker model> --version <latest version> ```
-For more information on installing Azure Functions Extensions via the Core Tools CLI, see [this guide](../functions-run-local.md#install-extensions).
+For more information on installing Azure Functions Extensions via the Core Tools CLI, see [func extensions install](../functions-core-tools-reference.md#func-extensions-install).
## Set up your Database
For more information on installing Azure Functions Extensions via the Core Tools
As the MSSQL backend is designed for portability, you have several options to set up your backing database. For example, you can set up an on-premises SQL Server instance, use a fully managed [Azure SQL DB](/azure/azure-sql/database/sql-database-paas-overview), or use any other SQL Server-compatible hosting option.
-You can also do local, offline development with [SQL Server Express](https://www.microsoft.com/sql-server/sql-server-downloads) on your local Windows machine or use [SQL Server Docker image](https://hub.docker.com/_/microsoft-mssql-server) running in a Docker container. For ease of setup, we will focus on the latter.
+You can also do local, offline development with [SQL Server Express](https://www.microsoft.com/sql-server/sql-server-downloads) on your local Windows machine or use [SQL Server Docker image](https://hub.docker.com/_/microsoft-mssql-server) running in a Docker container. For ease of setup, this article focuses on the latter.
### Set up your local Docker-based SQL Server
-To run these steps, you will need a [Docker](https://www.docker.com/products/docker-desktop/) installation on your local machine. Below are PowerShell commands that you can use to set up a local SQL Server database on Docker. Note that PowerShell can be installed on Windows, macOS, or Linux using the installation instructions [here](/powershell/scripting/install/installing-powershell).
+To run these steps, you need a [Docker](https://www.docker.com/products/docker-desktop/) installation on your local machine. Below are PowerShell commands that you can use to set up a local SQL Server database on Docker. Note that PowerShell can be installed on Windows, macOS, or Linux using the installation instructions [here](/powershell/scripting/install/installing-powershell).
```powershell
# primary parameters
docker run --name mssql-server -e 'ACCEPT_EULA=Y' -e "SA_PASSWORD=$pw" -e "MSSQL
docker exec -d mssql-server /opt/mssql-tools/bin/sqlcmd -S . -U sa -P "$pw" -Q "CREATE DATABASE [$dbname] COLLATE $collation" ```
-After running these commands, you should have a local SQL Server running on Docker and listening on port `1443`. If port `1443` conflicts with another service, you can re-run these commands after changing the variable `$port` to a different value.
+After running these commands, you should have a local SQL Server running on Docker and listening on port `1433`. If port `1433` conflicts with another service, you can rerun these commands after changing the variable `$port` to a different value.
> [!NOTE] > To stop and delete a running container, you may use `docker stop <containerName>` and `docker rm <containerName>` respectively. You may use these commands to re-create your container, and to stop it after you're done with this quickstart. For more assistance, try `docker --help`.
To validate your database installation, you can query for your new SQL database
docker exec -it mssql-server /opt/mssql-tools/bin/sqlcmd -S . -U sa -P "$pw" -Q "SELECT name FROM sys.databases" ```
-If the database setup completed successfully, you should see the name of your created database (for example, "DurableDB") in the command-line output.
+If the database setup completed successfully, you should see the name of your created database (for example, `DurableDB`) in the command-line output.
```bash name
DurableDB
### Add your SQL connection string to local.settings.json
-The MSSQL backend needs a connection string to your database. How to obtain a connection string largely depends on your specific MSSQL Server provider. Please review the documentation of your specific provider for information on how to obtain a connection string.
+The MSSQL backend needs a connection string to your database. How to obtain a connection string largely depends on your specific MSSQL Server provider. Review the documentation of your specific provider for information on how to obtain a connection string.
-If you used Docker commands above without changing any parameters, your connection string should be:
+If you used the previous Docker commands without changing any parameters, your connection string is:
```
Server=localhost,1433;Database=DurableDB;User Id=sa;Password=yourStrong(!)Password;
Below is an example `local.settings.json` assigning the default Docker-based SQL
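That example, under the assumptions above, might look like the following sketch; the `AzureWebJobsStorage` and `FUNCTIONS_WORKER_RUNTIME` values are assumptions, so use the settings that match your app:

```json
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "FUNCTIONS_WORKER_RUNTIME": "dotnet",
    "SQLDB_Connection": "Server=localhost,1433;Database=DurableDB;User Id=sa;Password=yourStrong(!)Password;"
  }
}
```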
### Update host.json
-Edit the storage provider section of the `host.json` file so it sets the `type` to `mssql`. We'll also specify the connection string variable name, `SQLDB_Connection`, under `connectionStringName`. We'll set `createDatabaseIfNotExists` to `true`; this setting creates a database named `DurableDB` if one doesn't already exist, with collation `Latin1_General_100_BIN2_UTF8`.
+Edit the storage provider section of the `host.json` file so it sets the `type` to `mssql`. You must also specify the connection string variable name, `SQLDB_Connection`, under `connectionStringName`. Set `createDatabaseIfNotExists` to `true`; this setting creates a database named `DurableDB` if one doesn't already exist, with collation `Latin1_General_100_BIN2_UTF8`.
```json
{
Edit the storage provider section of the `host.json` file so it sets the `type`
}
```
-The snippet above is a fairly *minimal* `host.json` example. Later, you may want to consider [additional parameters](https://microsoft.github.io/durabletask-mssql/#/quickstart?id=hostjson-configuration).
+The snippet above is a fairly *minimal* `host.json` example. Later, you may want to consider [other parameters](https://microsoft.github.io/durabletask-mssql/#/quickstart?id=hostjson-configuration).
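For orientation, the full file with the storage provider section filled in might look like this sketch, which simply restates the settings described above (treat it as illustrative rather than authoritative):

```json
{
  "version": "2.0",
  "extensions": {
    "durableTask": {
      "storageProvider": {
        "type": "mssql",
        "connectionStringName": "SQLDB_Connection",
        "createDatabaseIfNotExists": true
      }
    }
  }
}
```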
### Test locally
InstanceID RuntimeStatus CreatedTime
## Run your app on Azure
-To run your app in Azure, you will need a publicly accessible SQL Server instance. You can obtain one by creating an Azure SQL database.
+To run your app in Azure, you'll need a publicly accessible SQL Server instance. You can obtain one by creating an Azure SQL database.
### Create an Azure SQL database
You can follow [these](/azure/azure-sql/database/single-database-create-quicksta
> [!NOTE]
> Microsoft offers a [12-month free Azure subscription account](https://azure.microsoft.com/free/) if you're exploring Azure for the first time.
-You may obtain your Azure SQL database's connection string by navigating to the database's blade in the Azure portal. Then, under Settings, select "Connection strings" and obtain the "ADO.NET" connection string. Make sure to provide your password in the template provided.
+You may obtain your Azure SQL database's connection string by navigating to the database's blade in the Azure portal. Then, under **Settings**, select **Connection strings** and obtain the **ADO.NET** connection string. Make sure to provide your password in the template provided.
Below is an example of the portal view for obtaining the Azure SQL connection string.

![An Azure connection string as found in the portal](./media/quickstart-mssql/mssql-azure-db-connection-string.png)
-In the Azure portal, the connection string will have the database's password removed: it is replaced with `{your_password}`. Replace that segment with the password you used to create the database earlier in this section. If you forgot your password, you may reset it by navigating to the database's blade in the Azure portal, selecting your *Server name* in the "Essentials" view, and clicking "Reset password" in the resulting page. Below are some guiding images.
+In the Azure portal, the connection string has the database's password removed: it's replaced with `{your_password}`. Replace that segment with the password you used to create the database earlier in this section. If you forgot your password, you may reset it by navigating to the database's blade in the Azure portal, selecting your *Server name* in the **Essentials** view, and then selecting **Reset password** on the resulting page. Below are some guiding images.
![The Azure SQL database view, with the Server name option highlighted](./media/quickstart-mssql/mssql-azure-reset-pass-1.png)
In the Azure portal, the connection string will have the database's password rem
### Add connection string as an application setting
-You need to add your database's connection string as an application setting. To do this through the Azure portal, first go to your Azure Functions App view. Then go under "Configuration", select "New application setting", and there you can assign "SQLDB_Connection" to map to a publicly accessible connection string. Below are some guiding images.
+You need to add your database's connection string as an application setting. To do this through the Azure portal, first go to your Azure Functions App view. Then under **Configuration**, select **New application setting**, where you assign **SQLDB_Connection** to map to a publicly accessible connection string. Below are some guiding images.
![On the DB blade, go to Configuration, then click new application setting.](./media/quickstart-mssql/mssql-azure-environment-variable-1.png)

![Enter your connection string setting name, and its value.](./media/quickstart-mssql/mssql-azure-environment-variable-2.png)
azure-functions Quickstart Netherite https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/quickstart-netherite.md
You can install the Extension using the following [Azure Functions Core Tools CL
func extensions install --package <package name depending on your worker model> --version <latest version>
```
-For more information on installing Azure Functions Extensions via the Core Tools CLI, see [this guide](../functions-run-local.md#install-extensions).
+For more information on installing Azure Functions Extensions via the Core Tools CLI, see [func extensions install](../functions-core-tools-reference.md#func-extensions-install).
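For example, for the in-process worker model the command might look like the following; the package name here is an assumption, so check the Netherite documentation for the package that matches your worker model:

```command
func extensions install --package Microsoft.Azure.DurableTask.Netherite.AzureFunctions --version <latest version>
```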
## Configure local.settings.json for local development
azure-functions Quickstart Python Vscode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/quickstart-python-vscode.md
Azure Functions Core Tools lets you run an Azure Functions project on your local
:::image type="content" source="media/quickstart-python-vscode/functions-f5.png" alt-text="Screenshot of Azure local output.":::

-5. Use your browser, or a tool like [Postman](https://www.getpostman.com/) or [cURL](https://curl.haxx.se/), send an HTTP request to the URL endpoint. Replace the last segment with the name of the orchestrator function (`HelloOrchestrator`). The URL must be similar to `http://localhost:7071/api/orchestrators/HelloOrchestrator`. The response is the initial result from the HTTP function letting you know the durable orchestration has started successfully. It isn't yet the end result of the orchestration. The response includes a few useful URLs. For now, let's query the status of the orchestration.
+5. Use your browser, or a tool like [Postman](https://www.getpostman.com/) or [cURL](https://curl.haxx.se/), send an HTTP request to the URL endpoint. Replace the last segment with the name of the orchestrator function (`hello_orchestrator`). The URL must be similar to `http://localhost:7071/api/orchestrators/hello_orchestrator`.
+
+ The response is the initial result from the HTTP function letting you know the durable orchestration has started successfully. It isn't yet the end result of the orchestration. The response includes a few useful URLs. For now, let's query the status of the orchestration.
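A sketch of that initial response follows; the field values are placeholders, and the exact set of management URLs can vary by Durable Functions extension version:

```json
{
  "id": "<instance-id>",
  "statusQueryGetUri": "http://localhost:7071/runtime/webhooks/durabletask/instances/<instance-id>",
  "sendEventPostUri": "...",
  "terminatePostUri": "...",
  "purgeHistoryDeleteUri": "..."
}
```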
-6. Copy the URL value for `statusQueryGetUri`, paste it in the browser's address bar, and execute the request. Alternatively, you can also continue to use Postman to issue the GET request.
+2. Copy the URL value for `statusQueryGetUri`, paste it in the browser's address bar, and execute the request. Alternatively, you can also continue to use Postman to issue the GET request.
The request queries the orchestration instance for its status. You should eventually get a response that shows the instance has completed and includes the outputs or results of the durable function. It looks like:
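A completed-status sketch is shown below; the instance ID, timestamps, and output values are placeholders, and the actual output depends on your orchestrator:

```json
{
  "name": "hello_orchestrator",
  "instanceId": "<instance-id>",
  "runtimeStatus": "Completed",
  "input": null,
  "output": ["Hello Tokyo!", "Hello Seattle!", "Hello London!"],
  "createdTime": "2023-09-06T18:30:00Z",
  "lastUpdatedTime": "2023-09-06T18:30:05Z"
}
```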
azure-functions Azfw0001 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/errors-diagnostics/net-worker-rules/azfw0001.md
Title: "AZFW0001: Invalid binding attributes" description: "Learn about code analysis rule AZFW0001: Invalid binding attributes"--++ Last updated 05/10/2021
azure-functions Azf0001 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/errors-diagnostics/sdk-rules/azf0001.md
Title: "AZFW0001: Avoid async void" description: "Learn about code analysis rule AZF0001: Avoid async void"--++ Last updated 05/10/2021
azure-functions Azf0002 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/errors-diagnostics/sdk-rules/azf0002.md
Title: "AZF0002: Inefficient HttpClient usage" description: "Learn about code analysis rule AZF0002: Inefficient HttpClient usage"--++ Last updated 09/29/2021
azure-functions Functions Add Output Binding Cosmos Db Vs Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-add-output-binding-cosmos-db-vs-code.md
app = func.FunctionApp()
@app.function_name(name="HttpTrigger1") @app.route(route="hello", auth_level=func.AuthLevel.ANONYMOUS) @app.queue_output(arg_name="msg", queue_name="outqueue", connection="AzureWebJobsStorage")
-@app.cosmos_db_output(arg_name="outputDocument", database_name="my-database", collection_name="my-container" connection_string_setting="CosmosDbConnectionString")
+@app.cosmos_db_output(arg_name="outputDocument", database_name="my-database", collection_name="my-container", connection_string_setting="CosmosDbConnectionString")
def test_function(req: func.HttpRequest, msg: func.Out[func.QueueMessage], outputDocument: func.Out[func.Document]) -> func.HttpResponse:
    logging.info('Python HTTP trigger function processed a request.')
azure-functions Functions Add Output Binding Storage Queue Vs Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-add-output-binding-storage-queue-vs-code.md
Because you're using a Queue storage output binding, you must have the Storage b
Your project has been configured to use [extension bundles](functions-bindings-register.md#extension-bundles), which automatically installs a predefined set of extension packages.
-Extension bundles usage is enabled in the *host.json* file at the root of the project, which appears as follows:
+Extension bundles are already enabled in the *host.json* file at the root of the project, which should look like the following example:
Now, you can add the storage output binding to your project.
azure-functions Functions App Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-app-settings.md
Controls the timeout, in seconds, when connected to streaming logs. The default
|-|-|
|SCM_LOGSTREAM_TIMEOUT|`1800`|
-The above sample value of `1800` sets a timeout of 30 minutes. To learn more, see [Enable streaming logs](functions-run-local.md#enable-streaming-logs).
+The above sample value of `1800` sets a timeout of 30 minutes. For more information, see [Enable streaming execution logs in Azure Functions](streaming-logs.md).
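As an illustration, one way to apply this setting is with the Azure CLI; the app and resource group names here are placeholders:

```azurecli
az functionapp config appsettings set --name <APP_NAME> --resource-group <RESOURCE_GROUP> --settings SCM_LOGSTREAM_TIMEOUT=1800
```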
## WEBSITE\_CONTENTAZUREFILECONNECTIONSTRING
azure-functions Functions Bindings Azure Sql Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-azure-sql-trigger.md
For more information on change tracking and how it's used by applications such a
# [In-process](#tab/in-process)
-More samples for the Azure SQL trigger are available in the [GitHub repository](https://github.com/Azure/azure-functions-sql-extension/tree/release/trigger/samples/samples-csharp).
+More samples for the Azure SQL trigger are available in the [GitHub repository](https://github.com/Azure/azure-functions-sql-extension/tree/main/samples/samples-csharp).
The example refers to a `ToDoItem` class and a corresponding database table:
namespace AzureSQL.ToDo
# [Isolated process](#tab/isolated-process)
-More samples for the Azure SQL trigger are available in the [GitHub repository](https://github.com/Azure/azure-functions-sql-extension/tree/release/trigger/samples/samples-outofproc).
+More samples for the Azure SQL trigger are available in the [GitHub repository](https://github.com/Azure/azure-functions-sql-extension/tree/main/samples/samples-outofproc).
The example refers to a `ToDoItem` class and a corresponding database table:
namespace AzureSQL.ToDo
# [C# Script](#tab/csharp-script)
-More samples for the Azure SQL trigger are available in the [GitHub repository](https://github.com/Azure/azure-functions-sql-extension/tree/release/trigger/samples/samples-csharpscript).
+More samples for the Azure SQL trigger are available in the [GitHub repository](https://github.com/Azure/azure-functions-sql-extension/tree/main/samples/samples-csx).
The example refers to a `ToDoItem` class and a corresponding database table:
public static void Run(IReadOnlyList<SqlChange<ToDoItem>> todoChanges, ILogger l
## Example usage <a id="example"></a>
-More samples for the Azure SQL trigger are available in the [GitHub repository](https://github.com/Azure/azure-functions-sql-extension/tree/release/trigger/samples/samples-java).
+More samples for the Azure SQL trigger are available in the [GitHub repository](https://github.com/Azure/azure-functions-sql-extension/tree/main/samples/samples-java).
The example refers to a `ToDoItem` class, a `SqlChangeToDoItem` class, a `SqlChangeOperation` enum, and a corresponding database table:
public class ProductsTrigger {
## Example usage <a id="example"></a>
-More samples for the Azure SQL trigger are available in the [GitHub repository](https://github.com/Azure/azure-functions-sql-extension/tree/release/trigger/samples/samples-powershell).
+More samples for the Azure SQL trigger are available in the [GitHub repository](https://github.com/Azure/azure-functions-sql-extension/tree/main/samples/samples-powershell).
The example refers to a `ToDoItem` database table:
Write-Host "SQL Changes: $changesJson"
## Example usage <a id="example"></a>
-More samples for the Azure SQL trigger are available in the [GitHub repository](https://github.com/Azure/azure-functions-sql-extension/tree/release/trigger/samples/samples-js).
+More samples for the Azure SQL trigger are available in the [GitHub repository](https://github.com/Azure/azure-functions-sql-extension/tree/main/samples/samples-js).
The example refers to a `ToDoItem` database table:
module.exports = async function (context, todoChanges) {
## Example usage <a id="example"></a>
-More samples for the Azure SQL trigger are available in the [GitHub repository](https://github.com/Azure/azure-functions-sql-extension/tree/release/trigger/samples/samples-python).
+More samples for the Azure SQL trigger are available in the [GitHub repository](https://github.com/Azure/azure-functions-sql-extension/tree/main/samples/samples-python).
The example refers to a `ToDoItem` database table:
def main(changes):
## Attributes
-The [C# library](functions-dotnet-class-library.md) uses the [SqlTrigger](https://github.com/Azure/azure-functions-sql-extension/blob/release/trigger/src/TriggerBinding/SqlTriggerAttribute.cs) attribute to declare the SQL trigger on the function, which has the following properties:
+The [C# library](functions-dotnet-class-library.md) uses the [SqlTrigger](https://github.com/Azure/azure-functions-sql-extension/blob/main/src/TriggerBinding/SqlTriggerAttribute.cs) attribute to declare the SQL trigger on the function, which has the following properties:
| Attribute property |Description| ||| | **TableName** | Required. The name of the table monitored by the trigger. | | **ConnectionStringSetting** | Required. The name of an app setting that contains the connection string for the database containing the table monitored for changes. The connection string setting name corresponds to the application setting (in `local.settings.json` for local development) that contains the [connection string](/dotnet/api/microsoft.data.sqlclient.sqlconnection.connectionstring?view=sqlclient-dotnet-core-5.&preserve-view=true#Microsoft_Data_SqlClient_SqlConnection_ConnectionString) to the Azure SQL or SQL Server instance.|
-| **LeasesTableName** | Optional. Name of the table used to store leases. If not specified, the leases table name will be Leases_{FunctionId}_{TableId}. More information on how this is generated can be found [here](https://github.com/Azure/azure-functions-sql-extension/blob/release/trigger/docs/TriggerBinding.md#az_funcleasestablename).
+| **LeasesTableName** | Optional. Name of the table used to store leases. If not specified, the leases table name is `Leases_{FunctionId}_{TableId}`. For more information about how this name is generated, see the [SQL trigger binding reference](https://github.com/Azure/azure-functions-sql-extension/blob/main/docs/TriggerBinding.md#az_funcleasestablename). |
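To make the attribute usage concrete, here's a minimal in-process sketch; the table name, connection setting name, and the shape of the `ToDoItem` class referenced earlier (an `Id` property) are assumptions for illustration:

```csharp
using System.Collections.Generic;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Sql;
using Microsoft.Extensions.Logging;

public static class ToDoTrigger
{
    [FunctionName("ToDoTrigger")]
    public static void Run(
        // Monitor dbo.ToDoItems, using the connection string from the "SqlConnectionString" app setting.
        [SqlTrigger("[dbo].[ToDoItems]", "SqlConnectionString")]
        IReadOnlyList<SqlChange<ToDoItem>> changes,
        ILogger logger)
    {
        foreach (SqlChange<ToDoItem> change in changes)
        {
            // Each SqlChange<T> carries the operation (Insert, Update, Delete) and the changed row.
            logger.LogInformation($"{change.Operation}: {change.Item.Id}");
        }
    }
}
```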
::: zone-end
In the [Java functions runtime library](/java/api/overview/azure/functions/runti
| **name** | Required. The name of the parameter that the trigger binds to. | | **tableName** | Required. The name of the table monitored by the trigger. | | **connectionStringSetting** | Required. The name of an app setting that contains the connection string for the database containing the table monitored for changes. The connection string setting name corresponds to the application setting (in `local.settings.json` for local development) that contains the [connection string](/dotnet/api/microsoft.data.sqlclient.sqlconnection.connectionstring?view=sqlclient-dotnet-core-5.&preserve-view=true#Microsoft_Data_SqlClient_SqlConnection_ConnectionString) to the Azure SQL or SQL Server instance.|
-| **LeasesTableName** | Optional. Name of the table used to store leases. If not specified, the leases table name will be Leases_{FunctionId}_{TableId}. More information on how this is generated can be found [here](https://github.com/Azure/azure-functions-sql-extension/blob/release/trigger/docs/TriggerBinding.md#az_funcleasestablename).
+| **LeasesTableName** | Optional. Name of the table used to store leases. If not specified, the leases table name is `Leases_{FunctionId}_{TableId}`. For more information about how this name is generated, see the [SQL trigger binding reference](https://github.com/Azure/azure-functions-sql-extension/blob/main/docs/TriggerBinding.md#az_funcleasestablename). |
::: zone-end
The following table explains the binding configuration properties that you set i
| **direction** | Required. Must be set to `in`. | | **tableName** | Required. The name of the table monitored by the trigger. | | **connectionStringSetting** | Required. The name of an app setting that contains the connection string for the database containing the table monitored for changes. The connection string setting name corresponds to the application setting (in `local.settings.json` for local development) that contains the [connection string](/dotnet/api/microsoft.data.sqlclient.sqlconnection.connectionstring?view=sqlclient-dotnet-core-5.&preserve-view=true#Microsoft_Data_SqlClient_SqlConnection_ConnectionString) to the Azure SQL or SQL Server instance.|
-| **LeasesTableName** | Optional. Name of the table used to store leases. If not specified, the leases table name will be Leases_{FunctionId}_{TableId}. More information on how this is generated can be found [here](https://github.com/Azure/azure-functions-sql-extension/blob/release/trigger/docs/TriggerBinding.md#az_funcleasestablename).
+| **LeasesTableName** | Optional. Name of the table used to store leases. If not specified, the leases table name is `Leases_{FunctionId}_{TableId}`. For more information about how this name is generated, see the [SQL trigger binding reference](https://github.com/Azure/azure-functions-sql-extension/blob/main/docs/TriggerBinding.md#az_funcleasestablename). |
::: zone-end

## Optional Configuration
Optionally, your functions can scale automatically based on the number of change
## Retry support
-Further information on the SQL trigger [retry support](https://github.com/Azure/azure-functions-sql-extension/blob/release/trigger/docs/BindingsOverview.md#retry-support-for-trigger-bindings) and [leases tables](https://github.com/Azure/azure-functions-sql-extension/blob/release/trigger/docs/TriggerBinding.md#internal-state-tables) is available in the GitHub repository.
+Further information on the SQL trigger [retry support](https://github.com/Azure/azure-functions-sql-extension/blob/main/docs/BindingsOverview.md#retry-support-for-trigger-bindings) and [leases tables](https://github.com/Azure/azure-functions-sql-extension/blob/main/docs/TriggerBinding.md#internal-state-tables) is available in the GitHub repository.
### Startup retries

If an exception occurs during startup, the host runtime automatically attempts to restart the trigger listener with an exponential backoff strategy. These retries continue until either the listener is successfully started or the startup is canceled.
azure-functions Functions Bindings Cache Trigger Redislist https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-cache-trigger-redislist.md
zone_pivot_groups: programming-languages-set-functions-lang-workers --+++ Last updated 08/07/2023- # RedisListTrigger Azure Function (preview)
The `RedisListTrigger` pops new elements from a list and surfaces those entries
| Lists | Yes | Yes | Yes |

> [!IMPORTANT]
-> Redis triggers are not currently supported on Azure Functions Consumption plan.
+> Redis triggers aren't currently supported for functions running in the [Consumption plan](consumption-plan.md).
>

## Example
The following sample polls the key `listTest` at a localhost Redis instance at `
::: zone-end ::: zone pivot="programming-language-javascript"
-### [v3](#tab/javasscript-v1)
+### [v3](#tab/node-v3)
-Each sample uses the same `index.js` file, with binding data in the `function.json` file.
+This sample uses the same `index.js` file, with binding data in the `function.json` file.
Here's the `index.js` file:
From `function.json`, here's the binding data:
} ```
-### [v4](#tab/javascript-v2)
+### [v4](#tab/node-v4)
The JavaScript v4 programming model example isn't available in preview.
::: zone-end ::: zone pivot="programming-language-powershell"
-Each sample uses the same `run.ps1` file, with binding data in the `function.json` file.
+This sample uses the same `run.ps1` file, with binding data in the `function.json` file.
Here's the `run.ps1` file:
From `function.json`, here's the binding data:
::: zone-end ::: zone pivot="programming-language-python"
-Each sample uses the same `__init__.py` file, with binding data in the `function.json` file.
+This sample uses the same `__init__.py` file, with binding data in the `function.json` file.
### [v1](#tab/python-v1)
azure-functions Functions Bindings Cache Trigger Redispubsub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-cache-trigger-redispubsub.md
zone_pivot_groups: programming-languages-set-functions-lang-workers --+++ Last updated 08/07/2023- # RedisPubSubTrigger Azure Function (preview)
This sample listens to any `keyevent` notifications for the delete command [`DEL
::: zone-end ::: zone pivot="programming-language-javascript"
-### [v3](#tab/javasscript-v1)
+### [v3](#tab/node-v3)
-Each sample uses the same `index.js` file, with binding data in the `function.json` file determining on which channel the trigger occurs.
+This sample uses the same `index.js` file, with binding data in the `function.json` file determining on which channel the trigger occurs.
Here's the `index.js` file:
Here's binding data to listen to `keyevent` notifications for the delete command
"scriptFile": "index.js" } ```
-### [v4](#tab/javascript-v2)
+### [v4](#tab/node-v4)
The JavaScript v4 programming model example isn't available in preview.
::: zone-end ::: zone pivot="programming-language-powershell"
-Each sample uses the same `run.ps1` file, with binding data in the `function.json` file determining on which channel the trigger occurs.
+This sample uses the same `run.ps1` file, with binding data in the `function.json` file determining on which channel the trigger occurs.
Here's the `run.ps1` file:
Here's binding data to listen to `keyevent` notifications for the delete command
The Python v1 programming model requires you to define bindings in a separate _function.json_ file in the function folder. For more information, see the [Python developer guide](functions-reference-python.md?pivots=python-mode-configuration#programming-model).
-Each sample uses the same `__init__.py` file, with binding data in the `function.json` file determining on which channel the trigger occurs.
+This sample uses the same `__init__.py` file, with binding data in the `function.json` file determining on which channel the trigger occurs.
Here's the `__init__.py` file:
azure-functions Functions Bindings Cache Trigger Redisstream https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-cache-trigger-redisstream.md
zone_pivot_groups: programming-languages-set-functions-lang-workers --+++ Last updated 08/07/2023- # RedisStreamTrigger Azure Function (preview)
The `RedisStreamTrigger` reads new entries from a stream and surfaces those elem
| Streams | Yes | Yes | Yes |

> [!IMPORTANT]
-> Redis triggers are not currently supported on Azure Functions Consumption plan.
+> Redis triggers aren't currently supported for functions running in the [Consumption plan](consumption-plan.md).
>

## Example
The isolated process examples aren't available in preview.
::: zone-end ::: zone pivot="programming-language-javascript"
-### [v3](#tab/javasscript-v1)
+### [v3](#tab/node-v3)
-Each sample uses the same `index.js` file, with binding data in the `function.json` file.
+This sample uses the same `index.js` file, with binding data in the `function.json` file.
Here's the `index.js` file:
From `function.json`, here's the binding data:
} ```
-### [v4](#tab/javascript-v2)
+### [v4](#tab/node-v4)
The JavaScript v4 programming model example isn't available in preview.
::: zone-end ::: zone pivot="programming-language-powershell"
-Each sample uses the same `run.ps1` file, with binding data in the `function.json` file.
+This sample uses the same `run.ps1` file, with binding data in the `function.json` file.
Here's the `run.ps1` file:
From `function.json`, here's the binding data:
The Python v1 programming model requires you to define bindings in a separate _function.json_ file in the function folder. For more information, see the [Python developer guide](functions-reference-python.md?pivots=python-mode-configuration#programming-model).
-Each sample uses the same `__init__.py` file, with binding data in the `function.json` file.
+This sample uses the same `__init__.py` file, with binding data in the `function.json` file.
Here's the `__init__.py` file:
azure-functions Functions Bindings Cache https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-cache.md
zone_pivot_groups: programming-languages-set-functions-lang-workers --+++ Last updated 07/26/2023- # Overview of Azure functions for Azure Cache for Redis (preview)
You can integrate Azure Cache for Redis and Azure Functions to build functions t
|Streams | Yes | Yes | Yes |

> [!IMPORTANT]
-> Redis triggers are not currently supported on consumption functions.
+> Redis triggers aren't currently supported for functions running in the [Consumption plan](consumption-plan.md).
>

::: zone pivot="programming-language-csharp"
Azure Cache for Redis triggers and bindings have a required property for the cac
- [Introduction to Azure Functions](/azure/azure-functions/functions-overview)
- [Tutorial: Get started with Azure Functions triggers in Azure Cache for Redis](/azure/azure-cache-for-redis/cache-tutorial-functions-getting-started)
-- [Tutorial: Create a write-behind cache by using Azure Functions and Azure Cache for Redis](/azure/azure-cache-for-redis/cache-tutorial-write-behind)
+- [Tutorial: Create a write-behind cache by using Azure Functions and Azure Cache for Redis](/azure/azure-cache-for-redis/cache-tutorial-write-behind)
azure-functions Functions Bindings Cosmosdb V2 Input https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-cosmosdb-v2-input.md
The Python v1 programming model requires you to define bindings in a separate *f
This article supports both programming models.
-> [!IMPORTANT]
-> The Python v2 programming model is currently in preview.
::: zone-end ## Example
This section contains examples that require version 3.x of Azure Cosmos DB exten
The examples refer to a simple `ToDoItem` type: <a id="queue-trigger-look-up-id-from-json-isolated"></a>
The examples refer to a simple `ToDoItem` type:
The following example shows a function that retrieves a single document. The function is triggered by a JSON message in the storage queue. The queue trigger parses the JSON into an object of type `ToDoItemLookup`, which contains the ID and partition key value to retrieve. That ID and partition key value are used to return a `ToDoItem` document from the specified database and collection.
azure-functions Functions Bindings Cosmosdb V2 Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-cosmosdb-v2-output.md
The Python v1 programming model requires you to define bindings in a separate *f
This article supports both programming models.
-> [!IMPORTANT]
-> The Python v2 programming model is currently in preview.
::: zone-end ## Example
azure-functions Functions Bindings Cosmosdb V2 Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-cosmosdb-v2-trigger.md
The Python v1 programming model requires you to define bindings in a separate *f
This article supports both programming models.
-> [!IMPORTANT]
-> The Python v2 programming model is currently in preview.
::: zone-end ## Example
azure-functions Functions Bindings Event Grid Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-event-grid-trigger.md
See the [Example section](#example) for complete examples.
## Usage
+The Event Grid trigger uses a webhook HTTP request, which can be configured using the same [*host.json* settings as the HTTP Trigger](functions-bindings-http-webhook.md#hostjson-settings).
+ ::: zone pivot="programming-language-csharp" The parameter type supported by the Event Grid trigger depends on the Functions runtime version, the extension package version, and the C# modality used.
azure-functions Functions Bindings Event Grid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-event-grid.md
Functions version 1.x doesn't support isolated worker process. To use the isolat
:::zone-end
+## host.json settings
+
+The Event Grid trigger uses a webhook HTTP request, which can be configured using the same [*host.json* settings as the HTTP Trigger](functions-bindings-http-webhook.md#hostjson-settings).
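For orientation, a sketch of what tuning those shared HTTP settings can look like; the specific values are illustrative assumptions, not recommendations:

```json
{
  "version": "2.0",
  "extensions": {
    "http": {
      "maxOutstandingRequests": 200,
      "maxConcurrentRequests": 100
    }
  }
}
```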
+
## Next steps

* If you have questions, submit an issue to the team [here](https://github.com/Azure/azure-sdk-for-net/issues)
azure-functions Functions Bindings Event Hubs Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-event-hubs-output.md
The Python v1 programming model requires you to define bindings in a separate *f
This article supports both programming models.
-> [!IMPORTANT]
-> The Python v2 programming model is currently in preview.
::: zone-end
azure-functions Functions Bindings Example https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-example.md
- Title: Azure Functions trigger and binding example
-description: Learn to configure Azure Function bindings
--- Previously updated : 02/08/2022--
-# Azure Functions trigger and binding example
-
-This article demonstrates how to configure a [trigger and bindings](./functions-triggers-bindings.md) in an Azure Function.
-
-Suppose you want to write a new row to Azure Table storage whenever a new message appears in Azure Queue storage. This scenario can be implemented using an Azure Queue storage trigger and an Azure Table storage output binding.
-
-Here's a *function.json* file for this scenario.
-
-```json
-{
- "bindings": [
- {
- "type": "queueTrigger",
- "direction": "in",
- "name": "order",
- "queueName": "myqueue-items",
- "connection": "MY_STORAGE_ACCT_APP_SETTING"
- },
- {
- "type": "table",
- "direction": "out",
- "name": "$return",
- "tableName": "outTable",
- "connection": "MY_TABLE_STORAGE_ACCT_APP_SETTING"
- }
- ]
-}
-```
-
-The first element in the `bindings` array is the Queue storage trigger. The `type` and `direction` properties identify the trigger. The `name` property identifies the function parameter that receives the queue message content. The name of the queue to monitor is in `queueName`, and the connection string is in the app setting identified by `connection`.
-
-The second element in the `bindings` array is the Azure Table Storage output binding. The `type` and `direction` properties identify the binding. The `name` property specifies how the function provides the new table row, in this case by using the function return value. The name of the table is in `tableName`, and the connection string is in the app setting identified by `connection`.
-
-To view and edit the contents of *function.json* in the Azure portal, click the **Advanced editor** option on the **Integrate** tab of your function.
-
-> [!NOTE]
-> The value of `connection` is the name of an app setting that contains the connection string, not the connection string itself. Bindings use connection strings stored in app settings to enforce the best practice that *function.json* does not contain service secrets.
-
-# [C# script](#tab/csharp)
-
-Here's C# script code that works with this trigger and binding. Notice that the name of the parameter that provides the queue message content is `order`; this name is required because the `name` property value in *function.json* is `order`
-
-```cs
-#r "Newtonsoft.Json"
-
-using Microsoft.Extensions.Logging;
-using Newtonsoft.Json.Linq;
-
-// From an incoming queue message that is a JSON object, add fields and write to Table storage
-// The method return value creates a new row in Table Storage
-public static Person Run(JObject order, ILogger log)
-{
- return new Person() {
- PartitionKey = "Orders",
- RowKey = Guid.NewGuid().ToString(),
- Name = order["Name"].ToString(),
- MobileNumber = order["MobileNumber"].ToString() };
-}
-
-public class Person
-{
- public string PartitionKey { get; set; }
- public string RowKey { get; set; }
- public string Name { get; set; }
- public string MobileNumber { get; set; }
-}
-```
-
-# [C# class library](#tab/csharp-class-library)
-
-In a class library, the same trigger and binding information &mdash; queue and table names, storage accounts, function parameters for input and output &mdash; is provided by attributes instead of a function.json file. Here's an example:
-
-```csharp
-public static class QueueTriggerTableOutput
-{
- [FunctionName("QueueTriggerTableOutput")]
- [return: Table("outTable", Connection = "MY_TABLE_STORAGE_ACCT_APP_SETTING")]
- public static Person Run(
- [QueueTrigger("myqueue-items", Connection = "MY_STORAGE_ACCT_APP_SETTING")]JObject order,
- ILogger log)
- {
- return new Person() {
- PartitionKey = "Orders",
- RowKey = Guid.NewGuid().ToString(),
- Name = order["Name"].ToString(),
- MobileNumber = order["MobileNumber"].ToString() };
- }
-}
-
-public class Person
-{
- public string PartitionKey { get; set; }
- public string RowKey { get; set; }
- public string Name { get; set; }
- public string MobileNumber { get; set; }
-}
-```
-
-# [JavaScript](#tab/javascript)
---
-The same *function.json* file can be used with a JavaScript function:
-
-```javascript
-// From an incoming queue message that is a JSON object, add fields and write to Table Storage
-module.exports = async function (context, order) {
- order.PartitionKey = "Orders";
- order.RowKey = generateRandomId();
-
- context.bindings.order = order;
-};
-
-function generateRandomId() {
- return Math.random().toString(36).substring(2, 15) +
- Math.random().toString(36).substring(2, 15);
-}
-```
---
-You now have a working function that is triggered by an Azure Queue and outputs data to Azure Table storage.
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Azure Functions binding expression patterns](./functions-bindings-expressions-patterns.md)
azure-functions Functions Bindings Expressions Patterns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-expressions-patterns.md
Details of metadata properties for each trigger are described in the correspondi
## JSON payloads
-When a trigger payload is JSON, you can refer to its properties in configuration for other bindings in the same function and in function code.
+In some scenarios, you can refer to the trigger payload's properties in configuration for other bindings in the same function and in function code. This requires that the trigger payload is JSON and is smaller than a threshold specific to each trigger. Typically, the payload size needs to be less than 100 MB, but you should check the reference content for each trigger. Using trigger payload properties may impact the performance of your application, and it forces the trigger parameter type to be simple types like strings or a custom object type representing JSON data. It can't be used with streams, clients, or other SDK types.
The following example shows the *function.json* file for a webhook function that receives a blob name in JSON: `{"BlobName":"HelloWorld.txt"}`. A Blob input binding reads the blob, and the HTTP output binding returns the blob contents in the HTTP response. Notice that the Blob input binding gets the blob name by referring directly to the `BlobName` property (`"path": "strings/{BlobName}"`)
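Under that description, the *function.json* might look like the following sketch; the binding names and HTTP trigger details are assumptions, while the `"path": "strings/{BlobName}"` expression is the part being illustrated:

```json
{
  "bindings": [
    {
      "name": "info",
      "type": "httpTrigger",
      "direction": "in",
      "methods": ["post"]
    },
    {
      "name": "blobContents",
      "type": "blob",
      "direction": "in",
      "path": "strings/{BlobName}",
      "connection": "AzureWebJobsStorage"
    },
    {
      "name": "$return",
      "type": "http",
      "direction": "out"
    }
  ]
}
```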
azure-functions Functions Bindings Http Webhook Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-http-webhook-output.md
The HTTP triggered function returns a type of [IActionResult] or `Task<IActionRe
# [Isolated process](#tab/isolated-process)
-The HTTP triggered function returns an [HttpResponseData](/dotnet/api/microsoft.azure.functions.worker.http.httpresponsedata) object or a `Task<HttpResponseData>`. If the app uses [ASP.NET Core integration in .NET Isolated](./dotnet-isolated-process-guide.md#aspnet-core-integration-preview), it could also use [IActionResult], `Task<IActionResult>`, [HttpResponse], or `Task<HttpResponse>`.
+The HTTP triggered function returns an [HttpResponseData](/dotnet/api/microsoft.azure.functions.worker.http.httpresponsedata) object or a `Task<HttpResponseData>`. If the app uses [ASP.NET Core integration in .NET Isolated](./dotnet-isolated-process-guide.md#aspnet-core-integration), it could also use [IActionResult], `Task<IActionResult>`, [HttpResponse], or `Task<HttpResponse>`.
[IActionResult]: /dotnet/api/microsoft.aspnetcore.mvc.iactionresult [HttpResponse]: /dotnet/api/microsoft.aspnetcore.http.httpresponse
azure-functions Functions Bindings Http Webhook Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-http-webhook-trigger.md
The Python v1 programming model requires you to define bindings in a separate *f
This article supports both programming models.
-> [!IMPORTANT]
-> The Python v2 programming model is currently in preview.
::: zone-end
If a function that uses the HTTP trigger doesn't complete within 230 seconds, th
- [Return an HTTP response from a function](./functions-bindings-http-webhook-output.md) [ClaimsPrincipal]: /dotnet/api/system.security.claims.claimsprincipal
-[ASP.NET Core integration in .NET Isolated]: ./dotnet-isolated-process-guide.md#aspnet-core-integration-preview
+[ASP.NET Core integration in .NET Isolated]: ./dotnet-isolated-process-guide.md#aspnet-core-integration
[HttpRequestData]: /dotnet/api/microsoft.azure.functions.worker.http.httprequestdata [HttpResponseData]: /dotnet/api/microsoft.azure.functions.worker.http.httpresponsedata [HttpRequest]: /dotnet/api/microsoft.aspnetcore.http.httprequest
azure-functions Functions Bindings Http Webhook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-http-webhook.md
Functions 1.x apps automatically have a reference the [Microsoft.Azure.WebJobs](
Add the extension to your project by installing the [NuGet package](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Extensions.Http), version 3.x.

> [!NOTE]
-> An additional extension package is needed for [ASP.NET Core integration in .NET Isolated](./dotnet-isolated-process-guide.md#aspnet-core-integration-preview)
+> An additional extension package is needed for [ASP.NET Core integration in .NET Isolated](./dotnet-isolated-process-guide.md#aspnet-core-integration).
# [Functions v1.x](#tab/functionsv1/isolated-process)
azure-functions Functions Bindings Register https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-register.md
The following table lists the currently available version ranges of the default
For compiled C# class library projects ([in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md)), you install the NuGet packages for the extensions that you need as you normally would. For examples see either the [Visual Studio Code developer guide](functions-develop-vs-code.md?tabs=csharp#install-binding-extensions) or the [Visual Studio developer guide](functions-develop-vs.md#add-bindings).
-For non-.NET languages and C# script, when you can't use extension bundles you need to manually install required binding extensions in your local project. The easiest way is to use Azure Functions Core Tools. To learn more, see [Install extensions](functions-run-local.md#install-extensions).
+For non-.NET languages and C# script, when you can't use extension bundles you need to manually install required binding extensions in your local project. The easiest way is to use Azure Functions Core Tools. For more information, see [func extensions install](functions-core-tools-reference.md#func-extensions-install).
For portal-only development, you need to manually create an extensions.csproj file in the root of your function app. To learn more, see [Manually install extensions](functions-how-to-use-azure-function-app-settings.md#manually-install-extensions).
azure-functions Functions Bindings Service Bus Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-service-bus-output.md
The Python v1 programming model requires you to define bindings in a separate *f
This article supports both programming models.
-> [!IMPORTANT]
-> The Python v2 programming model is currently in preview.
::: zone-end ## Example
public static string ServiceBusOutput([HttpTrigger] dynamic input, ILogger log)
The following example shows a [C# function](dotnet-isolated-process-guide.md) that receives a Service Bus queue message, logs the message, and sends a message to a different Service Bus queue: +
azure-functions Functions Bindings Service Bus Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-service-bus-trigger.md
The Python v1 programming model requires you to define bindings in a separate *f
This article supports both programming models.
-> [!IMPORTANT]
-> The Python v2 programming model is currently in preview.
::: zone-end ## Example
public static void Run(
The following example shows a [C# function](dotnet-isolated-process-guide.md) that receives a Service Bus queue message, logs the message, and sends a message to a different Service Bus queue: +
azure-functions Functions Bindings Signalr Service Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-signalr-service-trigger.md
The `Function_App_URL` can be found on Function App's Overview page and The `API
If you want to use more than one Function App together with one SignalR Service, upstream can also support complex routing rules. Find more details at [Upstream settings](../azure-signalr/concept-upstream.md).
-### Step by step sample
+### Step-by-step sample
You can follow the sample on GitHub to deploy a chat room on a function app with the SignalR Service trigger binding and upstream feature: [Bidirectional chat room sample](https://github.com/aspnet/AzureSignalR-samples/tree/master/samples/BidirectionChat)
azure-functions Functions Bindings Storage Blob Input https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-blob-input.md
The Python v1 programming model requires you to define bindings in a separate *f
This article supports both programming models.
-> [!IMPORTANT]
-> The Python v2 programming model is currently in preview.
::: zone-end ## Example
azure-functions Functions Bindings Storage Blob Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-blob-output.md
The Python v1 programming model requires you to define bindings in a separate *f
This article supports both programming models.
-> [!IMPORTANT]
-> The Python v2 programming model is currently in preview.
::: zone-end ## Example
azure-functions Functions Bindings Storage Blob Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-blob-trigger.md
The Python v1 programming model requires you to define bindings in a separate *f
This article supports both programming models.
-> [!IMPORTANT]
-> The Python v2 programming model is currently in preview.
::: zone-end ## Example
azure-functions Functions Bindings Storage Blob https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-blob.md
This section describes the function app configuration settings available for fun
"version": "2.0", "extensions": { "blobs": {
- "maxDegreeOfParallelism": 4
+ "maxDegreeOfParallelism": 4,
+ "poisonBlobThreshold": 1
} } }
This section describes the function app configuration settings available for fun
|Property |Default | Description | |||| |maxDegreeOfParallelism|8 * (the number of available cores)|The integer number of concurrent invocations allowed for all blob-triggered functions in a given function app. The minimum allowed value is 1.|
+|poisonBlobThreshold|5|The integer number of times to try processing a message before moving it to the poison queue. The minimum allowed value is 1.|
## Next steps
azure-functions Functions Bindings Storage Queue Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-queue-output.md
The Python v1 programming model requires you to define bindings in a separate *f
This article supports both programming models.
-> [!IMPORTANT]
-> The Python v2 programming model is currently in preview.
::: zone-end ## Example
azure-functions Functions Bindings Storage Queue Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-queue-trigger.md
The Python v1 programming model requires you to define bindings in a separate *f
This article supports both programming models.
-> [!IMPORTANT]
-> The Python v2 programming model is currently in preview.
::: zone-end ## Example
azure-functions Functions Bindings Storage Table https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-table.md
Functions version 1.x doesn't support isolated worker process. To use the isolat
[ITableEntity]: /dotnet/api/azure.data.tables.itableentity [TableClient]: /dotnet/api/azure.data.tables.tableclient
-[TableEntity]: /dotnet/api/azure.data.tables.tableentity
[CloudTable]: /dotnet/api/microsoft.azure.cosmos.table.cloudtable
Functions version 1.x doesn't support isolated worker process. To use the isolat
[Microsoft.Azure.Cosmos.Table]: /dotnet/api/microsoft.azure.cosmos.table [Microsoft.WindowsAzure.Storage.Table]: /dotnet/api/microsoft.windowsazure.storage.table
-[NuGet package]: https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.Storage
[storage-4.x]: https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.Storage/4.0.5
-[storage-5.x]: https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.Storage/5.0.0
[table-api-package]: https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.Tables/ [extension bundle]: ./functions-bindings-register.md#extension-bundles
azure-functions Functions Bindings Timer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-timer.md
The Python v1 programming model requires you to define bindings in a separate *f
This article supports both programming models.
-> [!IMPORTANT]
-> The Python v2 programming model is currently in preview.
::: zone-end ## Example
azure-functions Functions Bindings Warmup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-warmup.md
ms.devlang: csharp, java, javascript, python Previously updated : 03/04/2022 Last updated : 09/04/2023 zone_pivot_groups: programming-languages-set-functions-lang-workers
namespace WarmupSample
The following example shows a [C# function](dotnet-isolated-process-guide.md) that runs on each new instance when it's added to your app.

# [C# Script](#tab/csharp-script)
The following considerations apply to using a warmup function in C#:
- Your function must be named `warmup` (case-insensitive) using the `Function` attribute. - A return value attribute isn't required.
+- Use the `Microsoft.Azure.Functions.Worker.Extensions.Warmup` package.
- You can pass an object instance to the function.

# [C# script](#tab/csharp-script)
azure-functions Functions Container Apps Hosting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-container-apps-hosting.md
Azure Functions currently supports the following methods of deployment to Azure
+ Azure Pipeline tasks + ARM templates + [Bicep templates](https://github.com/Azure/azure-functions-on-container-apps/tree/main/samples/Biceptemplates)
-+ Azure Functions core tools
++ [Azure Functions Core Tools](functions-run-local.md#deploy-containers)

To learn how to create and deploy a function app container to Container Apps using the Azure CLI, see [Create your first containerized functions on Azure Container Apps](functions-deploy-container-apps.md).
azure-functions Functions Core Tools Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-core-tools-reference.md
Title: Azure Functions Core Tools reference description: Reference documentation that supports the Azure Functions Core Tools (func.exe). Previously updated : 07/30/2023 Last updated : 08/20/2023 # Azure Functions Core Tools reference
When you supply `<PROJECT_FOLDER>`, the project is created in a new folder with
| **`--force`** | Initialize the project even when there are existing files in the project. This setting overwrites existing files with the same name. Other files in the project folder aren't affected. | | **`--language`** | Initializes a language-specific project. Currently supported when `--worker-runtime` set to `node`. Options are `typescript` and `javascript`. You can also use `--worker-runtime javascript` or `--worker-runtime typescript`. | | **`--managed-dependencies`** | Installs managed dependencies. Currently, only the PowerShell worker runtime supports this functionality. |
+| **`--model`** | Sets the desired programming model for a target language when more than one model is available. Supported options are `V1` and `V2` for Python and `V3` and `V4` for Node.js. For more information, see the [Python developer guide](functions-reference-python.md#programming-model) and the [Node.js developer guide](functions-reference-node.md), respectively. |
| **`--source-control`** | Controls whether a git repository is created. By default, a repository isn't created. When `true`, a repository is created. | | **`--worker-runtime`** | Sets the language runtime for the project. Supported values are: `csharp`, `dotnet`, `dotnet-isolated`, `javascript`,`node` (JavaScript), `powershell`, `python`, and `typescript`. For Java, use [Maven](functions-reference-java.md#create-java-functions). To generate a language-agnostic project with just the project files, use `custom`. When not set, you're prompted to choose your runtime during initialization. |
-| **`--target-framework`** | Sets the target framework for the function app project. Valid only with `--worker-runtime dotnet-isolated`. Supported values are: `net6.0` (default), `net7.0`, and `net48`. |
+| **`--target-framework`** | Sets the target framework for the function app project. Valid only with `--worker-runtime dotnet-isolated`. Supported values are: `net6.0` (default), `net7.0`, and `net48` (.NET Framework 4.8). |
| > [!NOTE]
Creates a new function in the current project based on a template.
func new
```
+When you run `func new` without the `--template` option, you're prompted to choose a template. In version 1.x, you're also required to choose the language.
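For example, to scaffold a function from a specific template without being prompted, you might run the following; the template and function names here are only illustrative:

```command
func new --template "HTTP trigger" --name HttpExample
```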
+ The `func new` action supports the following options: | Option | Description |
To learn more, see [Create a function](functions-run-local.md#create-func).
*Version 1.x only.*
-Enables you to invoke a function directly, which is similar to running a function using the **Test** tab in the Azure portal. This action is only supported in version 1.x. For later versions, use `func start` and [call the function endpoint directly](functions-run-local.md#passing-test-data-to-a-function).
+Enables you to invoke a function directly, which is similar to running a function using the **Test** tab in the Azure portal. This action is only supported in version 1.x. For later versions, use `func start` and [call the function endpoint directly](functions-run-local.md#run-a-local-function).
```command
func run
```
func start
| **`--timeout`** | The timeout for the Functions host to start, in seconds. Default: 20 seconds.| | **`--useHttps`** | Bind to `https://localhost:{port}` rather than to `http://localhost:{port}`. By default, this option creates a trusted certificate on your computer.|
-With the project running, you can [verify individual function endpoints](functions-run-local.md#passing-test-data-to-a-function).
+With the project running, you can [verify individual function endpoints](functions-run-local.md#run-a-local-function).
# [v1.x](#tab/v1)
In version 1.x, you can also use the [`func run`](#func-run) command to run a sp
Gets settings from a specific function app.

```command
-func azure functionapp fetch-app-settings <APP_NAME>
+func azure functionapp fetch-app-settings <APP_NAME>
```
-For an example, see [Get your storage connection strings](functions-run-local.md#get-your-storage-connection-strings).
+For more information, see [Download application settings](functions-run-local.md#download-application-settings).
-Settings are downloaded into the local.settings.json file for the project. On-screen values are masked for security. You can protect settings in the local.settings.json file by [enabling local encryption](#func-settings-encrypt).
+Settings are downloaded into the local.settings.json file for the project. On-screen values are masked for security. You can protect settings in the local.settings.json file by [enabling local encryption](functions-run-local.md#encrypt-the-local-settings-file).
## func azure functionapp list-functions
The `deploy` action supports the following options:
| | -- | | **`--browser`** | Open Azure Application Insights Live Stream for the function app in the default browser. |
-To learn more, see [Enable streaming logs](functions-run-local.md#enable-streaming-logs).
+For more information, see [Enable streaming execution logs in Azure Functions](streaming-logs.md).
## func azure functionapp publish
The following publish options apply, based on version:
| Option | Description | | | -- |
-| **`--access-token`** | Let's you use a specific access token when performing authenticated azure actions. |
+| **`--access-token`** | Lets you use a specific access token when performing authenticated Azure actions. |
| **`--access-token-stdin`** | Reads a specific access token from a standard input. Use this when reading the token directly from a previous command such as [`az account get-access-token`](/cli/azure/account#az-account-get-access-token). |
| **`--additional-packages`** | List of packages to install when building native dependencies. For example: `python3-dev libevent-dev`. |
| **`--build`**, **`-b`** | Performs build action when deploying to a Linux function app. Accepts: `remote` and `local`. |
The following publish options apply, based on version:
| **`--no-build`** | Project isn't built during publishing. For Python, `pip install` isn't performed. |
| **`--nozip`** | Turns the default `Run-From-Package` mode off. |
| **`--overwrite-settings -y`** | Suppress the prompt to overwrite app settings when `--publish-local-settings -i` is used. |
-| **`--publish-local-settings -i`** | Publish settings in local.settings.json to Azure, prompting to overwrite if the setting already exists. If you're using a [local storage emulator](functions-develop-local.md#local-storage-emulator), first change the app setting to an [actual storage connection](functions-run-local.md#get-your-storage-connection-strings). |
+| **`--publish-local-settings -i`** | Publish settings in local.settings.json to Azure, prompting to overwrite if the setting already exists. If you're using a [local storage emulator](functions-develop-local.md#local-storage-emulator), first change the app setting to an [actual storage connection](#func-azure-storage-fetch-connection-string). |
| **`--publish-settings-only`**, **`-o`** | Only publish settings and skip the content. Default is prompt. |
| **`--slot`** | Optional name of a specific slot to which to publish. |
| **`--subscription`** | Sets the default subscription to use. |
The following publish options apply, based on version:
| Option | Description |
| | -- |
| **`--overwrite-settings -y`** | Suppress the prompt to overwrite app settings when `--publish-local-settings -i` is used. |
-| **`--publish-local-settings -i`** | Publish settings in local.settings.json to Azure, prompting to overwrite if the setting already exists. If you're using the Microsoft Azure Storage Emulator, first change the app setting to an [actual storage connection](functions-run-local.md#get-your-storage-connection-strings). |
+| **`--publish-local-settings -i`** | Publish settings in local.settings.json to Azure, prompting to overwrite if the setting already exists. If you're using the Microsoft Azure Storage Emulator, first change the app setting to an [actual storage connection](#func-azure-storage-fetch-connection-string). |
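As an illustrative example, the following publishes the current project to an existing function app and also pushes values from local.settings.json to the app settings in Azure:

```command
func azure functionapp publish <APP_NAME> --publish-local-settings -i
```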
Gets the connection string for the specified Azure Storage account.

```command
func azure storage fetch-connection-string <STORAGE_ACCOUNT_NAME>
```
+For more information, see [Download a storage connection string](functions-run-local.md#download-a-storage-connection-string).
+
## func azurecontainerapps deploy

Deploys a containerized function app to an Azure Container Apps environment. Both the storage account used by the function app and the environment must already exist. For more information, see [Azure Container Apps hosting of Azure Functions](functions-container-apps-hosting.md).
The following deployment options apply:
| Option | Description |
| | -- |
-| **`--access-token`** | Let's you use a specific access token when performing authenticated azure actions. |
+| **`--access-token`** | Lets you use a specific access token when performing authenticated Azure actions. |
| **`--access-token-stdin`** | Reads a specific access token from a standard input. Use this when reading the token directly from a previous command such as [`az account get-access-token`](/cli/azure/account#az-account-get-access-token). |
| **`--environment`** | The name of an existing Container Apps environment. |
| **`--image-build`** | When set to `true`, skips the local Docker build. |
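A hedged sketch of a full deployment command, assuming the Container Apps environment, storage account, and container image already exist; all values are placeholders, and you can run `func azurecontainerapps deploy --help` to confirm the exact option semantics:

```command
func azurecontainerapps deploy --name <APP_NAME> --environment <ENVIRONMENT_NAME> --storage-account <STORAGE_CONNECTION> --resource-group <RESOURCE_GROUP> --image-name <IMAGE_NAME>
```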
To learn more, see the [Durable Functions documentation](./durable/durable-funct
## func extensions install
-Installs Functions extensions in a non-C# class library project.
+Manually installs Functions extensions in a non-.NET project or in a C# script project.
-When possible, you should instead use extension bundles. To learn more, see [Extension bundles](functions-bindings-register.md#extension-bundles).
-
-For compiled C# projects (both in-process and isolated worker process), instead use standard NuGet package installation methods, such as `dotnet add package`.
+```command
+func extensions install --package Microsoft.Azure.WebJobs.Extensions.<EXTENSION> --version <VERSION>
+```
The `install` action supports the following options:
| **`--source`** | NuGet feed source when not using NuGet.org. |
| **`--version`** | Extension package version. |
-No action is taken when an extension bundle is defined in your host.json file. When you need to manually install extensions, you must first remove the bundle definition. For more information, see [Install extensions](functions-run-local.md#install-extensions).
+The following example installs version 5.0.1 of the Event Hubs extension in the local project:
+
+```command
+func extensions install --package Microsoft.Azure.WebJobs.Extensions.EventHubs --version 5.0.1
+```
+
+The following considerations apply when using `func extensions install`:
+++ For compiled C# projects (both in-process and isolated worker process), instead use standard NuGet package installation methods, such as `dotnet add package`.+++ To manually install extensions using Core Tools, you must have the [.NET 6.0 SDK](https://dotnet.microsoft.com/download) installed.+++ When possible, you should instead use [extension bundles](functions-bindings-register.md#extension-bundles). The following are some reasons why you might need to install extensions manually:+
+ + You need to access a specific version of an extension not available in a bundle.
+ + You need to access a custom extension not available in a bundle.
+ + You need to access a specific combination of extensions not available in a single bundle.
++ Before you can manually install extensions, you must first remove the [`extensionBundle`](functions-host-json.md#extensionbundle) object from the host.json file that defines the bundle. No action is taken when an extension bundle is already set in your [host.json file](functions-host-json.md#extensionbundle).
+
++ The first time you explicitly install an extension, a .NET project file named extensions.csproj is added to the root of your app project. This file defines the set of NuGet packages required by your functions. While you can work with the [NuGet package references](/nuget/consume-packages/package-references-in-project-files) in this file, Core Tools lets you install extensions without having to manually edit this C# project file.

## func extensions sync
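When your project already declares its bindings, the `sync` action reinstalls all currently required extensions without adding new ones; a minimal sketch:

```command
func extensions sync
```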
azure-functions Functions Develop Local https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-develop-local.md
The way in which you develop functions on your local computer depends on your [l
Each of these local development environments lets you create function app projects and use predefined function templates to create new functions. Each uses the Core Tools so that you can test and debug your functions against the real Functions runtime on your own machine just as you would any other app. You can also publish your function app project from any of these environments to Azure.
-## Local settings file
+## Local project files
+
+A Functions project directory contains the following files in the project root folder, regardless of language:
+
+| File name | Description |
+| | |
+| host.json | Stores runtime and binding extension configuration that applies to all functions in the app. To learn more, see the [host.json reference](functions-host-json.md). |
+| local.settings.json | Settings used by Core Tools when running locally, including app settings. To learn more, see [local settings file](#local-settings-file). |
+| .gitignore | Prevents the local.settings.json file from being accidentally published to a Git repository. To learn more, see [local settings file](#local-settings-file).|
+| .vscode\extensions.json | Settings file used when opening the project folder in Visual Studio Code. |
+
+Other files in the project depend on your language and specific functions. For more information, see the developer guide for your language.
+
+### Local settings file
The local.settings.json file stores app settings and settings used by local development tools. Settings in the local.settings.json file are used only when you're running your project locally. When you publish your project to Azure, be sure to also add any required settings to the app settings for the function app.
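For example, rather than editing the JSON by hand, you can add a value to the `Values` collection from the command line; the setting name here is illustrative:

```command
func settings add MyServiceConnection "<CONNECTION_STRING>"
```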
azure-functions Functions Event Grid Blob Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-event-grid-blob-trigger.md
description: This tutorial shows how to create a low-latency, event-driven trigg
Previously updated : 3/1/2021 Last updated : 8/22/2023 zone_pivot_groups: programming-languages-set-functions-lang-workers #Customer intent: As an Azure Functions developer, I want learn how to create an Event Grid-based trigger on a Blob Storage container so that I can get a more rapid response to changes in the container.
When you create a Blob Storage-triggered function using Visual Studio Code, you
|Prompt|Action|
|--|--|
|**Select a language**| Select `C#`. |
- |**Select a .NET runtime**| Select `.NET 6.0 LTS`. Event-driven blob triggers aren't yet supported when running in an isolated worker process. |
+ |**Select a .NET runtime**| Select `.NET 6.0 Isolated LTS` for running in an [isolated worker process](dotnet-isolated-process-guide.md) or `.NET 6.0 LTS` for [in-process](functions-dotnet-class-library.md). |
|**Select a template for your project's first function**| Select `Azure Blob Storage trigger`. |
|**Provide a function name**| Enter `BlobTriggerEventGrid`. |
|**Provide a namespace** | Enter `My.Functions`. |
To use the Event Grid-based Blob Storage trigger, your function requires at leas
::: zone pivot="programming-language-csharp" To upgrade your project with the required extension version, in the Terminal window, run the following command: [dotnet add package](/dotnet/core/tools/dotnet-add-package)
-<!# [In-process](#tab/in-process) -->
+# [Isolated process](#tab/isolated-process)
```bash
-dotnet add package Microsoft.Azure.WebJobs.Extensions.Storage --version 5.0.1
+dotnet add package Microsoft.Azure.Functions.Worker.Extensions.Storage.Blobs --version 6.1.0
```
-<!# [Isolated process](#tab/isolated-process)
+# [In-process](#tab/in-process)
```bash
-dotnet add package Microsoft.Azure.Functions.Worker.Extensions.Storage --version 5.0.0
+dotnet add package Microsoft.Azure.WebJobs.Extensions.Storage --version 5.1.3
```

::: zone-end
::: zone pivot="programming-language-javascript,programming-language-powershell,programming-language-python,programming-language-java"

1. Open the host.json project file, and inspect the `extensionBundle` element.
-1. If `extensionBundle.version` isn't at least `3.3.0 `, replace `extensionBundle` with the following version:
+1. If `extensionBundle.version` isn't at least `3.3.0`, replace `extensionBundle` with the latest version:
```json
"extensionBundle": {
    "id": "Microsoft.Azure.Functions.ExtensionBundle",
- "version": "[3.3.0, 4.0.0)"
+ "version": "[4.0.0, 5.0.0)"
}
```
dotnet add package Microsoft.Azure.Functions.Worker.Extensions.Storage --version
::: zone pivot="programming-language-csharp" In the BlobTriggerEventGrid.cs file, add `Source = BlobTriggerSource.EventGrid` to the parameters for the Blob trigger attribute, for example:
-
+
+# [Isolated process](#tab/isolated-process)
+```csharp
+[Function("BlobTriggerCSharp")]
+public async Task Run([BlobTrigger("samples-workitems/{name}", Source = BlobTriggerSource.EventGrid, Connection = "<NAMED_STORAGE_CONNECTION>")] Stream myBlob, string name, FunctionContext executionContext)
+{
+ var logger = executionContext.GetLogger("BlobTriggerCSharp");
+ logger.LogInformation($"C# Blob trigger function Processed blob\n Name: {name} \n Size: {myBlob.Length} Bytes");
+}
+```
+# [In-process](#tab/in-process)
```csharp
[FunctionName("BlobTriggerCSharp")]
public static void Run([BlobTrigger("samples-workitems/{name}", Source = BlobTriggerSource.EventGrid, Connection = "<NAMED_STORAGE_CONNECTION>")]Stream myBlob, string name, ILogger log)
public static void Run([BlobTrigger("samples-workitems/{name}", Source = BlobTri
    log.LogInformation($"C# Blob trigger function Processed blob\n Name:{name} \n Size: {myBlob.Length} Bytes");
}
```

::: zone-end
::: zone pivot="programming-language-python"

After you create the function, in the function.json configuration file, add `"source": "EventGrid"` to the `myBlob` binding, for example:
Event Grid validates the endpoint URL when you create an event subscription in t
When your function runs locally, the default endpoint used for an event-driven blob storage trigger looks like the following URL:
+# [Isolated process](#tab/isolated-process)
+```http
+http://localhost:7071/runtime/webhooks/blobs?functionName=Host.Functions.BlobTriggerEventGrid
+```
+# [In-process](#tab/in-process)
```http
http://localhost:7071/runtime/webhooks/blobs?functionName=BlobTriggerEventGrid
```

::: zone-end
::: zone pivot="programming-language-javascript,programming-language-powershell,programming-language-python,programming-language-java"

```http
The endpoint used in the event subscription is made up of three different parts,
| | |
| Prefix and server name | When your function runs locally, the server name with an `https://` prefix comes from the **Forwarding** URL generated by *ngrok*. In the localhost URL, the *ngrok* URL replaces `http://localhost:7071`. When running in Azure, you'll instead use the published function app server, which is usually in the form `https://<FUNCTION_APP_NAME>.azurewebsites.net`. |
| Path | The path portion of the endpoint URL comes from the localhost URL copied earlier, and looks like `/runtime/webhooks/blobs` for a Blob Storage trigger. The path for an Event Grid trigger would be `/runtime/webhooks/EventGrid`. |
-| Query string | The `functionName=BlobTriggerEventGrid` parameter in the query string sets the name of the function that handles the event. For functions other than C#, the function name is qualified by `Host.Functions.`. If you used a different name for your function, you'll need to change this value. An access key isn't required when running locally. When running in Azure, you'll also need to include a `code=` parameter in the URL, which contains a key that you can get from the portal. |
+| Query string | The `functionName` parameter in the query string sets the name of the function that handles the event. For all languages except in-process .NET, use `functionName=Host.Functions.BlobTriggerEventGrid`; for in-process .NET, use `functionName=BlobTriggerEventGrid`. If you used a different name for your function, you'll need to change this value. An access key isn't required when running locally. When running in Azure, you'll also need to include a `code=` parameter in the URL, which contains a key that you can get from the portal. |
The following screenshot shows an example of how the final endpoint URL should look when using a Blob Storage trigger named `BlobTriggerEventGrid`:

::: zone pivot="programming-language-csharp"
+# [Isolated process](#tab/isolated-process)
+ ![Endpoint selection](./media/functions-event-grid-blob-trigger/functions-event-grid-local-dev-event-subscription-endpoint-selection-qualified.png)
+# [In-process](#tab/in-process)
![Endpoint selection](./media/functions-event-grid-blob-trigger/functions-event-grid-local-dev-event-subscription-endpoint-selection.png)

::: zone-end
::: zone pivot="programming-language-javascript,programming-language-powershell,programming-language-python,programming-language-java"

![Endpoint selection](./media/functions-event-grid-blob-trigger/functions-event-grid-local-dev-event-subscription-endpoint-selection-qualified.png)
An event subscription, powered by Azure Event Grid, raises events based on chang
| **Name** | *myBlobLocalNgrokEventSub* | Name that identifies the event subscription. You can use the name to quickly find the event subscription. |
| **Event Schema** | **Event Grid Schema** | Use the default schema for events. |
| **System Topic Name** | *samples-workitems-blobs* | Name for the topic, which represents the container. The topic is created with the first subscription, and you'll use it for future event subscriptions. |
- | **Filter to Event Types** | *Blob Created*<br/>*Blob Deleted*|
+ | **Filter to Event Types** | *Blob Created*|
| **Endpoint Type** | **Web Hook** | The blob storage trigger uses a web hook endpoint. You would use Azure Functions for an Event Grid trigger. |
| **Endpoint** | Your ngrok-based URL endpoint | Use the ngrok-based URL endpoint that you determined earlier. |
You'll include this value in the query string of new endpoint URL.
Create a new endpoint URL for the Blob Storage trigger based on the following example:

::: zone pivot="programming-language-csharp"
+# [Isolated process](#tab/isolated-process)
+```http
+https://<FUNCTION_APP_NAME>.azurewebsites.net/runtime/webhooks/blobs?functionName=Host.Functions.BlobTriggerEventGrid&code=<BLOB_EXTENSION_KEY>
+```
+# [In-process](#tab/in-process)
```http
https://<FUNCTION_APP_NAME>.azurewebsites.net/runtime/webhooks/blobs?functionName=BlobTriggerEventGrid&code=<BLOB_EXTENSION_KEY>
```

::: zone-end
::: zone pivot="programming-language-javascript,programming-language-powershell,programming-language-python,programming-language-java"

```http
This time, you'll include the filter on the event subscription so that only JPEG
| | - | -- |
| **Name** | *myBlobAzureEventSub* | Name that identifies the event subscription. You can use the name to quickly find the event subscription. |
| **Event Schema** | **Event Grid Schema** | Use the default schema for events. |
- | **Filter to Event Types** | *Blob Created*<br/>*Blob Deleted*|
+ | **Filter to Event Types** | *Blob Created*|
| **Endpoint Type** | **Web Hook** | The blob storage trigger uses a web hook endpoint. You would use Azure Functions for an Event Grid trigger. |
| **Endpoint** | Your new Azure-based URL endpoint | Use the URL endpoint that you built, which includes the key value. |
azure-functions Functions How To Custom Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-how-to-custom-container.md
You should also consider [enabling continuous deployment](#enable-continuous-dep
:::zone pivot="azure-functions,container-apps"

## Azure portal create using containers
-When you create a function app in the [Azure portal](https://portal.azure.com), you can choose to deploy the function app from an image in a container registry. To learn how to create a containerized function app in a container registry, see[Creating your function app in a container](#creating-your-function-app-in-a-container).
+When you create a function app in the [Azure portal](https://portal.azure.com), you can choose to deploy the function app from an image in a container registry. To learn how to create a containerized function app in a container registry, see [Creating your function app in a container](#creating-your-function-app-in-a-container).
The following steps create and deploy an existing containerized function app from a container registry.
azure-functions Functions How To Use Azure Function App Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-how-to-use-azure-function-app-settings.md
description: Learn how to configure function app settings in Azure Functions.
ms.assetid: 81eb04f8-9a27-45bb-bf24-9ab6c30d205c Last updated 12/13/2022-+ # Manage your function app
Connection strings, environment variables, and other application settings are de
## Get started in the Azure portal

1. To begin, sign in to the [Azure portal] using your Azure account. In the search bar at the top of the portal, enter the name of your function app and select it from the list.

2. Under **Settings** in the left pane, select **Configuration**.
You can use either the Azure portal or Azure CLI commands to migrate a function
+ Migration isn't supported on Linux.
+ The source plan and the target plan must be in the same resource group and geographical region. For more information, see [Move an app to another App Service plan](../app-service/app-service-plan-manage.md#move-an-app-to-another-app-service-plan).
+ The specific CLI commands depend on the direction of the migration (see the sketch after this list).
-+ Downtime in your function executions occur as the function app is migrated between plans.
++ Downtime in your function executions occurs as the function app is migrated between plans.
+ State and other app-specific content is maintained, since the same Azure Files share is used by the app both before and after migration.
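As a hedged sketch, moving an app to a new Elastic Premium plan with the Azure CLI might look like the following; the plan name and SKU are placeholders, and the exact commands depend on your migration direction:

```command
az functionapp plan create --name <NEW_PREMIUM_PLAN> --resource-group <RESOURCE_GROUP> --location <REGION> --sku EP1
az functionapp update --name <APP_NAME> --resource-group <RESOURCE_GROUP> --plan <NEW_PREMIUM_PLAN>
```

### Migration in the portal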
In this script, replace `<SUBSCRIPTION_ID>` and `<APP_NAME>` with the ID of your
## Manually install extensions
-C# class library functions can include the NuGet packages for [binding extensions](functions-bindings-register.md) directly in the class library project. For other non-.NET languages and C# script, the recommended way to install extensions is either by [using extension bundles](functions-bindings-register.md#extension-bundles) or by [using Azure Functions Core Tools](functions-run-local.md#install-extensions) locally. If you can't use extension bundles and are only able to work in the portal, you need to use [Advanced Tools (Kudu)](#kudu) to manually create the extensions.csproj file directly in the site. Make sure to first remove the `extensionBundle` element from the host.json file.
+C# class library functions can include the NuGet packages for [binding extensions](functions-bindings-register.md) directly in the class library project. For other non-.NET languages and C# script, you should [use extension bundles](functions-bindings-register.md#extension-bundles). If you must manually install extensions, you can do so by [using Azure Functions Core Tools](./functions-core-tools-reference.md#func-extensions-install) locally. If you can't use extension bundles and are only able to work in the portal, you need to use [Advanced Tools (Kudu)](#kudu) to manually create the extensions.csproj file directly in the site. Make sure to first remove the `extensionBundle` element from the host.json file.
This same process works for any other file you need to add to your app. > [!IMPORTANT]
-> When possible, you shouldn't edit files directly in your function app in Azure. We recommend [downloading your app files locally](deployment-zip-push.md#download-your-function-app-files), using [Core Tools to install extensions](functions-run-local.md#install-extensions) and other packages, validating your changes, and then [republishing your app using Core Tools](functions-run-local.md#publish) or one of the other [supported deployment methods](functions-deployment-technologies.md#deployment-methods).
+> When possible, you shouldn't edit files directly in your function app in Azure. We recommend [downloading your app files locally](deployment-zip-push.md#download-your-function-app-files), using [Core Tools to install extensions](./functions-core-tools-reference.md#func-extensions-install) and other packages, validating your changes, and then [republishing your app using Core Tools](functions-run-local.md#publish) or one of the other [supported deployment methods](functions-deployment-technologies.md#deployment-methods).
The Functions editor built into the Azure portal lets you update your function code and configuration (function.json) files directly in the portal.
azure-functions Functions Manually Run Non Http https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-manually-run-non-http.md
Title: Manually run a non HTTP-triggered Azure Functions
description: Use an HTTP request to run a non-HTTP triggered Azure Functions Previously updated : 04/23/2020 Last updated : 04/23/2023 # Manually run a non HTTP-triggered function
Open Postman and follow these steps:
## Next steps

-- [Strategies for testing your code in Azure Functions](./functions-test-a-function.md)
- [Event Grid local testing with viewer web app](./event-grid-how-tos.md#local-testing-with-viewer-web-app)
azure-functions Functions Reference Csharp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-reference-csharp.md
Title: Azure Functions C# script developer reference
description: Understand how to develop Azure Functions using C# script. Previously updated : 09/15/2022 Last updated : 08/15/2023 # Azure Functions C# script (.csx) developer reference
The way that both binding extension packages and other NuGet packages are added
By default, the [supported set of Functions extension NuGet packages](functions-triggers-bindings.md#supported-bindings) are made available to your C# script function app by using extension bundles. To learn more, see [Extension bundles](functions-bindings-register.md#extension-bundles).
-If for some reason you can't use extension bundles in your project, you can also use the Azure Functions Core Tools to install extensions based on bindings defined in the function.json files in your app. When using Core Tools to register extensions, make sure to use the `--csx` option. To learn more, see [Install extensions](functions-run-local.md#install-extensions).
+If for some reason you can't use extension bundles in your project, you can also use the Azure Functions Core Tools to install extensions based on bindings defined in the function.json files in your app. When using Core Tools to register extensions, make sure to use the `--csx` option. To learn more, see [func extensions install](functions-core-tools-reference.md#func-extensions-install).
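For example, run from the root of a C# script app, this sketch installs the extensions required by the bindings declared in your function.json files:

```command
func extensions install --csx
```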
By default, Core Tools reads the function.json files and adds the required packages to an *extensions.csproj* C# class library project file in the root of the function app's file system (wwwroot). Because Core Tools uses dotnet.exe, you can use it to add any NuGet package reference to this extensions file. During installation, Core Tools builds the extensions.csproj to install the required libraries. Here's an example *extensions.csproj* file that adds a reference to *Microsoft.ProjectOxford.Face* version *1.1.0*:
The following table lists the .NET attributes for each binding type and the pack
> | Storage table | [`Microsoft.Azure.WebJobs.TableAttribute`](https://github.com/Azure/azure-webjobs-sdk/blob/master/src/Microsoft.Azure.WebJobs), [`Microsoft.Azure.WebJobs.StorageAccountAttribute`](https://github.com/Azure/azure-webjobs-sdk/blob/master/src/Microsoft.Azure.WebJobs/StorageAccountAttribute.cs) | |
> | Twilio | [`Microsoft.Azure.WebJobs.TwilioSmsAttribute`](https://github.com/Azure/azure-webjobs-sdk-extensions/blob/master/src/WebJobs.Extensions.Twilio/TwilioSMSAttribute.cs) | `#r "Microsoft.Azure.WebJobs.Extensions.Twilio"` |
+## Convert a C# script app to a C# project
+
+The easiest way to convert a C# script function app to a compiled C# class library project is to start with a new project. You can then, for each function, migrate the code and configuration from each run.csx file and function.json file in a function folder to a single new .cs class library code file. For example, when you have a C# script function named `HelloWorld` you'll have two files: `HelloWorld/run.csx` and `HelloWorld/function.json`. For this function, you create a code file named `HelloWorld.cs` in your new class library project.
+
+If you are using C# scripting for portal editing, you can [download the app content to your local machine](./deployment-zip-push.md#download-your-function-app-files). Choose the **Site content** option instead of **Content and Visual Studio project**. You don't need to generate a project, and don't include application settings in the download. You're defining a new development environment, and this environment shouldn't have the same permissions as your hosted app environment.
+
+These instructions show you how to convert C# script functions (which run in-process with the Functions host) to C# class library functions that run in an [isolated worker process](dotnet-isolated-process-guide.md).
+
+1. Complete the **Create a functions app project** section from your preferred quickstart:
+
+ ### [Azure CLI](#tab/azure-cli)
+ [Create a C# function in Azure from the command line](create-first-function-cli-csharp.md#create-a-local-function-project)
+ ### [Visual Studio](#tab/vs)
+ [Create your first C# function in Azure using Visual Studio](functions-create-your-first-function-visual-studio.md#create-a-function-app-project)
+ ### [Visual Studio Code](#tab/vs-code)
+ [Create your first C# function in Azure using Visual Studio Code](create-first-function-vs-code-csharp.md#create-an-azure-functions-project)
+
+
+
+1. If your original C# script code includes an `extensions.csproj` file or any `function.proj` files, copy the package references from these files and add them to the new project's `.csproj` file in the same `ItemGroup` with the Functions core dependencies.
+
+ >[!TIP]
+ >Conversion provides a good opportunity to update to the latest versions of your dependencies. Doing so may require additional code changes in a later step.
+
+1. Copy the contents of the original `host.json` file into the new project's `host.json` file, except for the `extensionBundle` section (compiled C# projects don't use [extension bundles](functions-bindings-register.md#extension-bundles) and you must explicitly add references to all extensions used by your functions). When merging host.json files, remember that the [`host.json`](./functions-host-json.md) schema is versioned, with most apps using version 2.0. The contents of the `extensions` section can differ based on specific versions of the binding extensions used by your functions. See individual extension reference articles to learn how to correctly configure the host.json for your specific versions.
+
+1. For any [shared files referenced by a `#load` directive](#reusing-csx-code), create a new `.cs` file for each of these shared references. It's simplest to create a new `.cs` file for each shared class definition. If there are static methods without a class, you need to define new classes for these methods.
+
+1. Perform the following tasks for each `<FUNCTION_NAME>` folder in your original project:
+
+ 1. Create a new file named `<FUNCTION_NAME>.cs`, replacing `<FUNCTION_NAME>` with the name of the folder that defined your C# script function. You can create a new function code file from one of the trigger-specific templates in the following way:
+ ### [Azure CLI](#tab/azure-cli)
+ Using the `func new --name <FUNCTION_NAME>` command and choosing the correct trigger template at the prompt.
+ ### [Visual Studio](#tab/vs)
+ Following [Add a function to your project](functions-develop-vs.md?tabs=isolated-process#add-a-function-to-your-project) in the Visual Studio guide.
+ ### [Visual Studio Code](#tab/vs-code)
+ Following [Add a function to your project](functions-develop-vs-code.md?tabs=isolated-process#add-a-function-to-your-project) in the Visual Studio Code guide.
+
+
+ 1. Copy the `using` statements from your `run.csx` file and add them to the new file. You do not need any `#r` directives.
+ 1. For any `#load` statement in your `run.csx` file, add a new `using` statement for the namespace you used for the shared code.
+ 1. In the new file, define a class for your function under the namespace you are using for the project.
+ 1. Create a new method named `RunHandler` or something similar. This new method serves as the new entry point for the function.
+ 1. Copy the static method that represents your function, along with any functions it calls, from `run.csx` into your new class as a second method. From the new method you created in the previous step, call into this static method. This indirection step is helpful for navigating any differences as you continue the upgrade. You can keep the original method exactly the same and simply control its inputs from the new context. You may need to create parameters on the new method which you then pass into the static method call. After you have confirmed that the migration has worked as intended, you can remove this extra level of indirection.
+ 1. For each binding in the `function.json` file, add the corresponding attribute to your new method. To quickly find binding examples, see [Manually add bindings based on examples](add-bindings-existing-function.md?tabs=csharp).
+ 1. Add any extension packages required by the bindings to your project, if you haven't already done so.
+
+1. Recreate any application settings required by your app in the `Values` collection of the [local.settings.json file](functions-develop-local.md#local-settings-file).
+
+1. Verify that your project runs locally:
+
+ ### [Azure CLI](#tab/azure-cli)
+ Use `func start` to run your app from the command line. For more information, see [Run functions locally](functions-run-local.md#start).
+ ### [Visual Studio](#tab/vs)
+ Follow the [Run functions locally](functions-develop-vs.md?tabs=isolated-process#run-functions-locally) section of the Visual Studio guide.
+ ### [Visual Studio Code](#tab/vs-code)
+ Follow the [Run functions locally](functions-develop-vs-code.md?tabs=csharp#run-functions-locally) section of the Visual Studio Code guide.
+
+
+
+1. Publish your project to a new function app in Azure:
+
+ ### [Azure CLI](#tab/azure-cli)
+ [Create your Azure resources](create-first-function-cli-csharp.md#create-supporting-azure-resources-for-your-function) and deploy the code project to Azure by using the `func azure functionapp publish <APP_NAME>` command. For more information, see [Deploy project files](functions-run-local.md#project-file-deployment).
+ ### [Visual Studio](#tab/vs)
+ Follow the [Publish to Azure](functions-develop-vs.md?tabs=isolated-process#publish-to-azure) section of the Visual Studio guide.
+ ### [Visual Studio Code](#tab/vs-code)
+ Follow the [Create Azure resources](functions-develop-vs-code.md?tabs=csharp#publish-to-azure) section of the Visual Studio Code guide.
+
+
+
+### Example function conversion
+
+This section shows an example of the migration for a single function.
+
+The original function in C# scripting has two files:
+- `HelloWorld/function.json`
+- `HelloWorld/run.csx`
+
+The contents of `HelloWorld/function.json` are:
+
+```json
+{
+ "bindings": [
+ {
+ "authLevel": "FUNCTION",
+ "name": "req",
+ "type": "httpTrigger",
+ "direction": "in",
+ "methods": [
+ "get",
+ "post"
+ ]
+ },
+ {
+ "name": "$return",
+ "type": "http",
+ "direction": "out"
+ }
+ ]
+}
+```
+
+The contents of `HelloWorld/run.csx` are:
+
+```csharp
+#r "Newtonsoft.Json"
+
+using System.Net;
+using Microsoft.AspNetCore.Mvc;
+using Microsoft.Extensions.Primitives;
+using Newtonsoft.Json;
+
+public static async Task<IActionResult> Run(HttpRequest req, ILogger log)
+{
+ log.LogInformation("C# HTTP trigger function processed a request.");
+
+ string name = req.Query["name"];
+
+ string requestBody = await new StreamReader(req.Body).ReadToEndAsync();
+ dynamic data = JsonConvert.DeserializeObject(requestBody);
+ name = name ?? data?.name;
+
+ string responseMessage = string.IsNullOrEmpty(name)
+ ? "This HTTP triggered function executed successfully. Pass a name in the query string or in the request body for a personalized response."
+ : $"Hello, {name}. This HTTP triggered function executed successfully.";
+
+ return new OkObjectResult(responseMessage);
+}
+```
+
+After migrating to the isolated worker model with ASP.NET Core integration, these are replaced by a single `HelloWorld.cs`:
+
+```csharp
+using System.Net;
+using Microsoft.Azure.Functions.Worker;
+using Microsoft.AspNetCore.Http;
+using Microsoft.AspNetCore.Mvc;
+using Microsoft.Extensions.Logging;
+using Microsoft.AspNetCore.Routing;
+using Microsoft.Extensions.Primitives;
+using Newtonsoft.Json;
+
+namespace MyFunctionApp
+{
+ public class HelloWorld
+ {
+ private readonly ILogger _logger;
+
+ public HelloWorld(ILoggerFactory loggerFactory)
+ {
+ _logger = loggerFactory.CreateLogger<HelloWorld>();
+ }
+
+ [Function("HelloWorld")]
+ public async Task<IActionResult> RunHandler([HttpTrigger(AuthorizationLevel.Function, "get")] HttpRequest req)
+ {
+ return await Run(req, _logger);
+ }
+
+ // From run.csx
+ public static async Task<IActionResult> Run(HttpRequest req, ILogger log)
+ {
+ log.LogInformation("C# HTTP trigger function processed a request.");
+
+ string name = req.Query["name"];
+
+ string requestBody = await new StreamReader(req.Body).ReadToEndAsync();
+ dynamic data = JsonConvert.DeserializeObject(requestBody);
+ name = name ?? data?.name;
+
+ string responseMessage = string.IsNullOrEmpty(name)
+ ? "This HTTP triggered function executed successfully. Pass a name in the query string or in the request body for a personalized response."
+ : $"Hello, {name}. This HTTP triggered function executed successfully.";
+
+ return new OkObjectResult(responseMessage);
+ }
+ }
+}
+```
+ ## Binding configuration and examples
+This section contains references and examples for defining triggers and bindings in C# script.
+
### Blob trigger

The following table explains the binding configuration properties for C# script that you set in the *function.json* file.
The following table explains the binding configuration properties for C# script
|**connection** | The name of an app setting or setting collection that specifies how to connect to Azure Blobs. See [Connections](./functions-bindings-storage-blob-trigger.md#connections).|
-The following example shows a blob trigger binding in a *function.json* file and code that uses the binding. The function writes a log when a blob is added or updated in the `samples-workitems` [container](../storage/blobs/storage-blobs-introduction.md#blob-storage-resources).
+The following example shows a blob trigger definition in a *function.json* file and code that uses the binding. The function writes a log when a blob is added or updated in the `samples-workitems` [container](../storage/blobs/storage-blobs-introduction.md#blob-storage-resources).
Here's the binding data in the *function.json* file:
The following table explains the binding configuration properties for C# script
|function.json property | Description|
||-|
-|**type** | Must be set to "timerTrigger". This property is set automatically when you create the trigger in the Azure portal.|
-|**direction** | Must be set to "in". This property is set automatically when you create the trigger in the Azure portal. |
+|**type** | Must be set to `timerTrigger`. This property is set automatically when you create the trigger in the Azure portal.|
+|**direction** | Must be set to `in`. This property is set automatically when you create the trigger in the Azure portal. |
|**name** | The name of the variable that represents the timer object in function code. |
|**schedule**| A [CRON expression](./functions-bindings-timer.md#ncrontab-expressions) or a [TimeSpan](./functions-bindings-timer.md#timespan) value. A `TimeSpan` can be used only for a function app that runs on an App Service Plan. You can put the schedule expression in an app setting and set this property to the app setting name wrapped in **%** signs, as in this example: "%ScheduleAppSetting%". |
|**runOnStartup**| If `true`, the function is invoked when the runtime starts. For example, the runtime starts when the function app wakes up after going idle due to inactivity, when the function app restarts due to function changes, and when the function app scales out. *Use with caution.* **runOnStartup** should rarely if ever be set to `true`, especially in production. |
The following table explains the trigger configuration properties that you set i
|**connection** | The name of an app setting or setting collection that specifies how to connect to Event Hubs. See [Connections](./functions-bindings-event-hubs-trigger.md#connections).|
-The following example shows an Event Hubs trigger binding in a *function.json* file and a C# script functionthat uses the binding. The function logs the message body of the Event Hubs trigger.
+The following example shows an Event Hubs trigger binding in a *function.json* file and a C# script function that uses the binding. The function logs the message body of the Event Hubs trigger.
The following examples show Event Hubs binding data in the *function.json* file for Functions runtime version 2.x and later versions.
The following table explains the binding configuration properties that you set i
|function.json property | Description|
||-|
|**type** | Must be set to `serviceBusTrigger`. This property is set automatically when you create the trigger in the Azure portal.|
-|**direction** | Must be set to "in". This property is set automatically when you create the trigger in the Azure portal. |
+|**direction** | Must be set to `in`. This property is set automatically when you create the trigger in the Azure portal. |
|**name** | The name of the variable that represents the queue or topic message in function code. |
|**queueName**| Name of the queue to monitor. Set only if monitoring a queue, not for a topic. |
|**topicName**| Name of the topic to monitor. Set only if monitoring a topic, not for a queue.|
The following table explains the binding configuration properties that you set i
|function.json property | Description|
|||-|
-|**type** |Must be set to "serviceBus". This property is set automatically when you create the trigger in the Azure portal.|
-|**direction** | Must be set to "out". This property is set automatically when you create the trigger in the Azure portal. |
+|**type** |Must be set to `serviceBus`. This property is set automatically when you create the trigger in the Azure portal.|
+|**direction** | Must be set to `out`. This property is set automatically when you create the trigger in the Azure portal. |
|**name** | The name of the variable that represents the queue or topic message in function code. Set to "$return" to reference the function return value. |
|**queueName**|Name of the queue. Set only if sending queue messages, not for a topic. |
|**topicName**|Name of the topic. Set only if sending topic messages, not for a queue.|
azure-functions Functions Reference Node https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-reference-node.md
When running on Windows, the Node.js version is set by the [`WEBSITE_NODE_DEFAUL
# [Linux](#tab/linux)
-When running on Windows, the Node.js version is set by the [linuxfxversion](./functions-app-settings.md#linuxfxversion) site setting. This setting can be updated using the Azure CLI.
+When running on Linux, the Node.js version is set by the [linuxfxversion](./functions-app-settings.md#linuxfxversion) site setting. This setting can be updated using the Azure CLI.
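For example, this sketch pins a Linux function app to Node.js 18; adjust the version string to a currently supported release:

```command
az functionapp config set --name <APP_NAME> --resource-group <RESOURCE_GROUP> --linux-fx-version "node|18"
```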
azure-functions Functions Run Local https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-run-local.md
Title: Work with Azure Functions Core Tools
-description: Learn how to code and test Azure Functions from the command prompt or terminal on your local computer before you run them on Azure Functions.
+ Title: Develop Azure Functions locally using Core Tools
+description: Learn how to code and test Azure Functions from the command prompt or terminal on your local computer before you deploy them to run on Azure Functions.
ms.assetid: 242736be-ec66-4114-924b-31795fd18884 Previously updated : 07/30/2023 Last updated : 08/24/2023 zone_pivot_groups: programming-languages-set-functions
-# Work with Azure Functions Core Tools
+# Develop Azure Functions locally using Core Tools
-Azure Functions Core Tools lets you develop and test your functions on your local computer. Core Tools includes a version of the same runtime that powers Azure Functions. This runtime means your local functions run as they would in Azure and can connect to live Azure services during local development and debugging. You can even deploy your code project to Azure using Core Tools.
--
-Core Tools can be used with all [supported languages](supported-languages.md). Select your language at the top of the article.
+Azure Functions Core Tools lets you develop and test your functions on your local computer. When you're ready, you can also use Core Tools to deploy your code project to Azure and work with application settings.
::: zone pivot="programming-language-csharp"
+>You're viewing the C# version of this article. Make sure to select your preferred Functions programming language at the top of the article.
+
If you want to get started right away, complete the [Core Tools quickstart article](create-first-function-cli-csharp.md). ::: zone-end ::: zone pivot="programming-language-java"
+>You're viewing the Java version of this article. Make sure to select your preferred Functions programming language at the top of the article.
+ If you want to get started right away, complete the [Core Tools quickstart article](create-first-function-cli-java.md). ::: zone-end ::: zone pivot="programming-language-javascript"
+>You're viewing the JavaScript version of this article. Make sure to select your preferred Functions programming language at the top of the article.
+
If you want to get started right away, complete the [Core Tools quickstart article](create-first-function-cli-node.md). ::: zone-end ::: zone pivot="programming-language-powershell"
+>You're viewing the PowerShell version of this article. Make sure to select your preferred Functions programming language at the top of the article.
+
If you want to get started right away, complete the [Core Tools quickstart article](create-first-function-cli-powershell.md). ::: zone-end ::: zone pivot="programming-language-python"
+>You're viewing the Python version of this article. Make sure to select your preferred Functions programming language at the top of the article.
+
If you want to get started right away, complete the [Core Tools quickstart article](create-first-function-cli-python.md). ::: zone-end ::: zone pivot="programming-language-typescript"
+>You're viewing the TypeScript version of this article. Make sure to select your preferred Functions programming language at the top of the article.
+
If you want to get started right away, complete the [Core Tools quickstart article](create-first-function-cli-typescript.md). ::: zone-end
-Core Tools enables the integrated local development and debugging experience for your functions provided by both Visual Studio and Visual Studio Code.
-
-## Prerequisites
-
-To be able to publish to Azure from Core Tools, you must have one of the following Azure tools installed locally:
-
-+ [Azure CLI](/cli/azure/install-azure-cli)
-+ [Azure PowerShell](/powershell/azure/install-azure-powershell)
-
-These tools are required to authenticate with your Azure account from your local computer.
-
-## <a name="v2"></a>Core Tools versions
-
-Major versions of Azure Functions Core Tools are linked to specific major versions of the Azure Functions runtime. For example, version 4.x of Core Tools supports version 4.x of the Functions runtime. This is the recommended major version of both the Functions runtime and Core Tools. You can find the latest Core Tools release version on [this release page](https://github.com/Azure/azure-functions-core-tools/releases/latest).
-
-Run the following command to determine the version of your current Core Tools installation:
-```command
-func --version
-```
+For help with version-related issues, see [Core Tools versions](#v2).
-Unless otherwise noted, the examples in this article are for version 4.x.
-
-The following considerations apply to Core Tools versions:
-
-+ You can only install one version of Core Tools on a given computer.
-
-+ Version 2.x and 3.x of Core Tools were used with versions 2.x and 3.x of the Functions runtime, which have reached their end of life (EOL). For more information, see [Azure Functions runtime versions overview](functions-versions.md).
-+ Version 1.x of Core Tools is required when using version 1.x of the Functions Runtime, which is still supported. This version of Core Tools can only be run locally on Windows computers. If you're currently running on version 1.x, you should consider [migrating your app to version 4.x](migrate-version-1-version-4.md) today.
+## Create your local project
+> [!IMPORTANT]
+> For Python, you must run Core Tools commands in a virtual environment. For more information, see [Quickstart: Create a Python function in Azure from the command line](create-first-function-cli-python.md#create-venv).
::: zone-end
+In the terminal window or from a command prompt, run the following command to create a project in the `MyProjFolder` folder:
-## Install the Azure Functions Core Tools
-
-The recommended way to install Core Tools depends on the operating system of your local development computer.
-
-### [Windows](#tab/windows)
-
-The following steps use a Windows installer (MSI) to install Core Tools v4.x. For more information about other package-based installers, see the [Core Tools readme](https://github.com/Azure/azure-functions-core-tools/blob/v4.x/README.md#windows).
-
-Download and run the Core Tools installer, based on your version of Windows:
-- [v4.x - Windows 64-bit](https://go.microsoft.com/fwlink/?linkid=2174087) (Recommended. [Visual Studio Code debugging](functions-develop-vs-code.md#debugging-functions-locally) requires 64-bit.)
-- [v4.x - Windows 32-bit](https://go.microsoft.com/fwlink/?linkid=2174159)
-
-If you previously used Windows installer (MSI) to install Core Tools on Windows, you should uninstall the old version from Add Remove Programs before installing the latest version.
-
-If you need to install version 1.x of the Core Tools, see the [GitHub repository](https://github.com/Azure/azure-functions-core-tools/blob/v1.x/README.md#installing) for more information.
-
-### [macOS](#tab/macos)
--
-The following steps use Homebrew to install the Core Tools on macOS.
-
-1. Install [Homebrew](https://brew.sh/), if it's not already installed.
-
-1. Install the Core Tools package:
-
- ```bash
- brew tap azure/functions
- brew install azure-functions-core-tools@4
- # if upgrading on a machine that has 2.x or 3.x installed:
- brew link --overwrite azure-functions-core-tools@4
- ```
-### [Linux](#tab/linux)
-
-The following steps use [APT](https://wiki.debian.org/Apt) to install Core Tools on your Ubuntu/Debian Linux distribution. For other Linux distributions, see the [Core Tools readme](https://github.com/Azure/azure-functions-core-tools/blob/v4.x/README.md#linux).
-
-1. Install the Microsoft package repository GPG key, to validate package integrity:
-
- ```bash
- curl https://packages.microsoft.com/keys/microsoft.asc | gpg --dearmor > microsoft.gpg
- sudo mv microsoft.gpg /etc/apt/trusted.gpg.d/microsoft.gpg
- ```
-
-1. Set up the APT source list before doing an APT update.
-
- ##### Ubuntu
-
- ```bash
- sudo sh -c 'echo "deb [arch=amd64] https://packages.microsoft.com/repos/microsoft-ubuntu-$(lsb_release -cs)-prod $(lsb_release -cs) main" > /etc/apt/sources.list.d/dotnetdev.list'
- ```
-
- ##### Debian
-
- ```bash
- sudo sh -c 'echo "deb [arch=amd64] https://packages.microsoft.com/debian/$(lsb_release -rs | cut -d'.' -f 1)/prod $(lsb_release -cs) main" > /etc/apt/sources.list.d/dotnetdev.list'
- ```
-
-1. Check the `/etc/apt/sources.list.d/dotnetdev.list` file for one of the appropriate Linux version strings in the following table:
-
- | Linux distribution | Version |
- | -- | - |
- | Debian 11 | `bullseye` |
- | Debian 10 | `buster` |
- | Debian 9 | `stretch` |
- | Ubuntu 22.04 | `jammy` |
- | Ubuntu 20.04 | `focal` |
- | Ubuntu 19.04 | `disco` |
- | Ubuntu 18.10 | `cosmic` |
- | Ubuntu 18.04 | `bionic` |
- | Ubuntu 17.04 | `zesty` |
- | Ubuntu 16.04/Linux Mint 18 | `xenial` |
-
-1. Start the APT source update:
-
- ```bash
- sudo apt-get update
- ```
-
-1. Install the Core Tools package:
-
- ```bash
- sudo apt-get install azure-functions-core-tools-4
- ```
---
-When upgrading to the latest version of Core Tools, you should use the same package manager as the original installation to perform the upgrade. Visual Studio and Visual Studio Code may also install Azure Functions Core Tools, depending on your specific tools installation.
-
-## Create a local Functions project
-
-A Functions project directory contains the following files and folders, regardless of language:
-
-| File name | Description |
-| | |
-| host.json | To learn more, see the [host.json reference](functions-host-json.md). |
-| local.settings.json | Settings used by Core Tools when running locally, including app settings. To learn more, see [local settings](#local-settings). |
-| .gitignore | Prevents the local.settings.json file from being accidentally published to a Git repository. To learn more, see [local settings](#local-settings)|
-| .vscode\extensions.json | Settings file used when opening the project folder in Visual Studio Code. |
-
-To learn more about the Functions project folder, see the [Azure Functions developers guide](functions-reference.md#folder-structure).
-
-In the terminal window or from a command prompt, run the following command to create the project and local Git repository:
+### [Isolated process](#tab/isolated-process)
+```console
+func init MyProjFolder --worker-runtime dotnet-isolated
```
-func init MyFunctionProj
-```
-
-This example creates a Functions project in a new `MyFunctionProj` folder. You're prompted to choose a default language for your project.
-The following considerations apply to project initialization:
+By default, this command creates a project that runs in an isolated worker process from the Functions host on the current [Long-Term Support (LTS) version of .NET Core]. You can use the `--target-framework` option to target a specific supported version of .NET, including .NET Framework. For more information, see the [`func init`](functions-core-tools-reference.md#func-init) reference.
-+ If you don't provide the `--worker-runtime` option in the command, you're prompted to choose your language. For more information, see the [func init reference](functions-core-tools-reference.md#func-init).
+### [In-process](#tab/in-process)
-+ When you don't provide a project name, the current folder is initialized.
+```console
+func init MyProjFolder --worker-runtime dotnet
+```
-+ If you plan to deploy your project as a function app running in a Linux container, use the `--docker` option to make sure that a Dockerfile is generated for your project. To learn more, see [Create a function app in a local container](functions-create-container-registry.md#create-and-test-the-local-functions-project). If you forget to do this, you can always generate the Dockerfile for the project later by using the `func init --docker-only` command.
+This command creates a project that runs in-process with the Functions host on the current [Long-Term Support (LTS) version of .NET Core]. For other .NET versions, create an app that runs in an isolated worker process from the Functions host.
-+ Core Tools lets you create function app projects for the .NET runtime as either [in-process](functions-dotnet-class-library.md) or [isolated worker process](dotnet-isolated-process-guide.md) C# class library projects (.csproj). These projects, which can be used with Visual Studio or Visual Studio Code, are compiled during debugging and when publishing to Azure.
+
-+ Use the `--csx` parameter if you want to work locally with C# script (.csx) files. These files are the same ones you get when you create functions in the Azure portal and when using version 1.x of Core Tools. To learn more, see the [func init reference](functions-core-tools-reference.md#func-init).
+For a comparison between the two .NET process models, see the [process mode comparison article](./dotnet-isolated-in-process-differences.md).
::: zone-end
::: zone pivot="programming-language-java"
-+ Java uses a Maven archetype to create the local Functions project, along with your first HTTP triggered function. Instead of using `func init` and `func new`, you should follow the steps in the [Command line quickstart](./create-first-function-cli-java.md).
-+ To use a `--worker-runtime` value of `node`, specify the `--language` as `javascript`.
-+ You should run all commands, including `func init`, from inside a virtual environment. To learn more, see [Create and activate a virtual environment](create-first-function-cli-python.md#create-venv).
-+ To use a `--worker-runtime` value of `node`, specify the `--language` as `typescript`.
+Java uses a Maven archetype to create the local project, along with your first HTTP triggered function. Rather than using `func init` and `func new`, you should instead follow the steps in the [Command line quickstart](./create-first-function-cli-java.md).
::: zone-end
+### [v4](#tab/node-v4)
+```console
+func init MyProjFolder --worker-runtime javascript --model V4
+```
+### [v3](#tab/node-v3)
+```console
+func init MyProjFolder --worker-runtime javascript --model V3
+```
+
-## Binding extensions
-
-[Functions triggers and bindings](functions-triggers-bindings.md) are implemented as .NET extension (NuGet) packages. To be able to use a specific binding extension, that extension must be installed in the project.
-
-This section doesn't apply to version 1.x of the Functions runtime. In version 1.x, supported binding were included in the core product extension.
-
-For compiled C# project, add references to the specific NuGet packages for the binding extensions required by your functions. C# script (.csx) project should use [extension bundles](functions-bindings-register.md#extension-bundles).
-Functions provides _extension bundles_ to make is easy to work with binding extensions in your project. Extension bundles, which are versioned and defined in the host.json file, install a complete set of compatible binding extension packages for your app. Your host.json should already have extension bundles enabled. If for some reason you need to add or update the extension bundle in the host.json file, see [Extension bundles](functions-bindings-register.md#extension-bundles).
-
-If you must use a binding extension or an extension version not in a supported bundle, you need to manually install extensions. For such rare scenarios, see [Install extensions](#install-extensions).
--
-By default, these settings aren't migrated automatically when the project is published to Azure. Use the [`--publish-local-settings` option][func azure functionapp publish] when you publish to make sure these settings are added to the function app in Azure. Values in the `ConnectionStrings` section are never published.
+This command creates a JavaScript project that uses the desired [programming model version](functions-reference-node.md).
+### [v4](#tab/node-v4)
+```console
+func init MyProjFolder --worker-runtime typescript --model V4
+```
+### [v3](#tab/node-v3)
+```console
+func init MyProjFolder --worker-runtime typescript --model V3
+```
+
-The function app settings values can also be read in your code as environment variables. For more information, see [Environment variables](functions-dotnet-class-library.md#environment-variables).
-The function app settings values can also be read in your code as environment variables. For more information, see [Environment variables](functions-reference-java.md#environment-variables).
-The function app settings values can also be read in your code as environment variables. For more information, see [Environment variables](functions-reference-node.md#environment-variables).
+This command creates a TypeScript project that uses the desired [programming model version](functions-reference-node.md).
::: zone pivot="programming-language-powershell"
-The function app settings values can also be read in your code as environment variables. For more information, see [Environment variables](functions-reference-powershell.md#environment-variables).
+```console
+func init MyProjFolder --worker-runtime powershell
+```
::: zone-end
::: zone pivot="programming-language-python"
-The function app settings values can also be read in your code as environment variables. For more information, see [Environment variables](functions-reference-python.md#environment-variables).
-
-When no valid storage connection string is set for [`AzureWebJobsStorage`] and a local storage emulator isn't being used, the following error message is shown:
-
-> Missing value for AzureWebJobsStorage in local.settings.json. This is required for all triggers other than HTTP. You can run 'func azure functionapp fetch-app-settings \<functionAppName\>' or specify a connection string in local.settings.json.
-
-### Get your storage connection strings
-
-Even when using the [Azurite storage emulator](functions-develop-local.md#local-storage-emulator) for development, you may want to run locally with an actual storage connection. Assuming you have already [created a storage account](../storage/common/storage-account-create.md), you can get a valid storage connection string in one of several ways:
-
-#### [Portal](#tab/portal)
-
-1. From the [Azure portal], search for and select **Storage accounts**.
-
- ![Select Storage accounts from Azure portal](./media/functions-run-local/select-storage-accounts.png)
-
-1. Select your storage account, select **Access keys** in **Settings**, then copy one of the **Connection string** values.
-
- ![Copy connection string from Azure portal](./media/functions-run-local/copy-storage-connection-portal.png)
-
-#### [Core Tools](#tab/azurecli)
-
-From the project root, use one of the following commands to download the connection string from Azure:
-
- + Download all settings from an existing function app:
-
- ```
- func azure functionapp fetch-app-settings <FunctionAppName>
- ```
-
- + Get the Connection string for a specific storage account:
-
- ```
- func azure storage fetch-connection-string <StorageAccountName>
- ```
-
- When you aren't already signed in to Azure, you're prompted to do so. These commands overwrite any existing settings in the local.settings.json file. To learn more, see the [`func azure functionapp fetch-app-settings`](functions-core-tools-reference.md#func-azure-functionapp-fetch-app-settings) and [`func azure storage fetch-connection-string`](functions-core-tools-reference.md#func-azure-storage-fetch-connection-string) commands.
-
-#### [Storage Explorer](#tab/storageexplorer)
-
-1. Run [Azure Storage Explorer](https://storageexplorer.com/).
-
-1. In the **Explorer**, expand your subscription, then expand **Storage Accounts**.
-
-1. Select your storage account and copy the primary or secondary connection string.
-
- ![Copy connection string from Storage Explorer](./media/functions-run-local/storage-explorer.png)
-
+### [v2](#tab/python-v2)
+```console
+func init MyProjFolder --worker-runtime python --model V2
+```
+### [v1](#tab/python-v1)
+```console
+func init MyProjFolder --worker-runtime python
+```
-## <a name="create-func"></a>Create a function
-
-To create a function in an existing project, run the following command:
+This command creates a Python project that uses the desired [programming model version](functions-reference-python.md#programming-model).
-```
-func new
-```
+When you run `func init` without the `--worker-runtime` option, you're prompted to choose your project language. To learn more about the available options for the `func init` command, see the [`func init`](functions-core-tools-reference.md#func-init) reference.
-When you run `func new`, you're prompted to choose a template in the default language of your function app. Next, you're prompted to choose a name for your function. In version 1.x, you're also required to choose the language.
+## <a name="create-func"></a>Create a function
-You can also specify the function name and template in the `func new` command. The following example uses the `--template` option to create an HTTP trigger named `MyHttpTrigger`:
+To add a function to your project, run the `func new` command using the `--template` option to select your trigger template. The following example creates an HTTP trigger named `MyHttpTrigger`:
```
func new --template "Http Trigger" --name MyHttpTrigger
```

This example creates a Queue Storage trigger named `MyQueueTrigger`:

```
func new --template "Azure Queue Storage Trigger" --name MyQueueTrigger
```
-To learn more, see the [`func new`](functions-core-tools-reference.md#func-new) command.
+The following considerations apply when adding functions:
+
++ When you run `func new` without the `--template` option, you're prompted to choose a template.
+
++ Use the [`func templates list`](./functions-core-tools-reference.md#func-templates-list) command to see the complete list of available templates for your language, as shown in the sketch after this list.
+
++ When you add a trigger that connects to a service, you'll also need to add an application setting that references a connection string or a managed identity to the local.settings.json file. Using app settings in this way prevents you from having to embed credentials in your code. For more information, see [Work with app settings locally](#local-settings).
+
++ Core Tools also adds a reference to the specific binding extension to your C# project.
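As a quick illustration of the template options above, this sketch lists the templates that Core Tools can scaffold for your project language:

```console
# Show the complete list of available trigger templates
func templates list
```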
-## <a name="start"></a>Run functions locally
+To learn more about the available options for the `func new` command, see the [`func new`](functions-core-tools-reference.md#func-new) reference.
-To run a Functions project, you run the Functions host from the root directory of your project. The host enables triggers for all functions in the project. Use the following command to run your functions locally:
+## Add a binding to your function
+
+Functions provides a set of service-specific input and output bindings, which make it easier for your function to connect to other Azure services without having to use the service-specific client SDKs. For more information, see [Azure Functions triggers and bindings concepts](functions-triggers-bindings.md).
+
+To add an input or output binding to an existing function, you must manually update the function definition.
+The following considerations apply when adding bindings to a function:
++ For languages that define functions using the _function.json_ configuration file, Visual Studio Code simplifies the process of adding bindings to an existing function definition. For more information, see [Connect functions to Azure services using bindings](add-bindings-existing-function.md#visual-studio-code).
+
++ When you add bindings that connect to a service, you must also add an application setting that references a connection string or managed identity to the local.settings.json file. For more information, see [Work with app settings locally](#local-settings).
+
++ When you add a supported binding, the extension should already be installed when your app uses extension bundles. For more information, see [extension bundles](functions-bindings-register.md#extension-bundles).
+
++ When you add a binding that requires a new binding extension, you must also add a reference to that specific binding extension in your C# project.
+For more information, including links to example binding code that you can refer to, see [Add bindings to a function](add-bindings-existing-function.md?tabs=csharp#manually-add-bindings-based-on-examples).
+For more information, including links to example binding code that you can refer to, see [Add bindings to a function](add-bindings-existing-function.md?tabs=java#manually-add-bindings-based-on-examples).
+For more information, including links to example binding code that you can refer to, see [Add bindings to a function](add-bindings-existing-function.md?tabs=javascript#manually-add-bindings-based-on-examples).
+For more information, including links to example binding code that you can refer to, see [Add bindings to a function](add-bindings-existing-function.md?tabs=powershell#manually-add-bindings-based-on-examples).
+For more information, including links to example binding code that you can refer to, see [Add bindings to a function](add-bindings-existing-function.md?tabs=python#manually-add-bindings-based-on-examples).
+For more information, including links to example binding code that you can refer to, see [Add bindings to a function](add-bindings-existing-function.md?tabs=typescript#manually-add-bindings-based-on-examples).
++
+## <a name="start"></a>Start the Functions runtime
+
+Before you can run or debug the functions in your project, you need to start the Functions host from the root directory of your project. The host enables triggers for all functions in the project. Use this command to start the local runtime:
::: zone pivot="programming-language-java" ```
mvn clean package
mvn azure-functions:run ``` ::: zone-end ``` func start ``` ::: zone-end
-The way you start the host depends on your runtime version:
-### [v4.x](#tab/v2)
-```
-func start
-```
-### [v1.x](#tab/v1)
-```
-func host start
-```
-
::: zone pivot="programming-language-typescript"
```
npm install
npm start
```
This command must be [run in a virtual environment](./create-first-function-cli-python.md).
::: zone-end
-When the Functions host starts, it outputs the URL of HTTP-triggered functions, like in the following example:
+When the Functions host starts, it outputs a list of functions in the project, including the URLs of any HTTP-triggered functions, like in this example:
<pre>
Found the following functions:
Job host started
Http Function MyHttpTrigger: http://localhost:7071/api/MyHttpTrigger
</pre>
-### Considerations when running locally
-
Keep in mind the following considerations when running your functions locally:

+ By default, authorization isn't enforced locally for HTTP endpoints. This means that all local HTTP requests are handled as `authLevel = "anonymous"`. For more information, see the [HTTP binding article](functions-bindings-http-webhook-trigger.md#authorization-keys). You can use the `--enableAuth` option to require authorization when running locally. For more information, see [`func start`](./functions-core-tools-reference.md?tabs=v2#func-start).

+ While there's local storage emulation available, it's often best to validate your triggers and bindings against live services in Azure. You can maintain the connections to these services in the local.settings.json project file. For more information, see [Local settings file](functions-develop-local.md#local-settings-file). Make sure to keep test and production data separate when testing against live Azure services.
-+ You can trigger non-HTTP functions locally without connecting to a live service. For more information, see [Non-HTTP triggered functions](#non-http-triggered-functions).
++ You can trigger non-HTTP functions locally without connecting to a live service. For more information, see [Run a local function](./functions-run-local.md?tabs=non-http-trigger#run-a-local-function).
+ When you include your Application Insights connection information in the local.settings.json file, local log data is written to the specific Application Insights instance. To keep local telemetry data separate from production data, consider using a separate Application Insights instance for development and testing.
++ When using version 1.x of the Core Tools, instead use the `func host start` command to start the local runtime.
-### Passing test data to a function
+## Run a local function
-To test your functions locally, you [start the Functions host](#start) and call endpoints on the local server using HTTP requests. The endpoint you call depends on the type of function.
+With your local Functions host (func.exe) running, you can now trigger individual functions to run and debug your function code. The way in which you execute an individual function depends on its trigger type.
->[!NOTE]
+> [!NOTE]
> Examples in this topic use the cURL tool to send HTTP requests from the terminal or a command prompt. You can use a tool of your choice to send HTTP requests to the local server. The cURL tool is available by default on Linux-based systems and Windows 10 build 17063 and later. On older Windows, you must first download and install the [cURL tool](https://curl.haxx.se/).
-For more general information on testing functions, see [Strategies for testing your code in Azure Functions](functions-test-a-function.md).
+### [HTTP trigger](#tab/http-trigger)
-#### HTTP and webhook triggered functions
-
-You call the following endpoint to locally run HTTP and webhook triggered functions:
+HTTP triggers are started by sending an HTTP request to the local endpoint and port as displayed in the func.exe output, which has this general format:
```
-http://localhost:{port}/api/{function_name}
+http://localhost:<PORT>/api/<FUNCTION_NAME>
```
-Make sure to use the same server name and port that the Functions host is listening on. You see an endpoint like this in the output generated when starting the Function host. You can call this URL using any HTTP method supported by the trigger.
+In this URL template, `<FUNCTION_NAME>` is the name of the function or route and `<PORT>` is the local port on which func.exe is listening.
-The following cURL command triggers the `MyHttpTrigger` quickstart function from a GET request with the _name_ parameter passed in the query string.
+For example, this cURL command triggers the `MyHttpTrigger` quickstart function from a GET request with the _name_ parameter passed in the query string:
```
curl --get http://localhost:7071/api/MyHttpTrigger?name=Azure%20Rocks
```
-The following example is the same function called from a POST request passing _name_ in the request body:
+This example is the same function called from a POST request passing _name_ in the request body, shown for both Bash shell and Windows command line:
-##### [Bash](#tab/bash)
```bash
curl --request POST http://localhost:7071/api/MyHttpTrigger --data '{"name":"Azure Rocks"}'
```
-##### [Cmd](#tab/cmd)
+
```cmd
curl --request POST http://localhost:7071/api/MyHttpTrigger --data "{'name':'Azure Rocks'}"
```
--
-You can make GET requests from a browser passing data in the query string. For all other HTTP methods, you must use cURL, Fiddler, Postman, or a similar HTTP testing tool that supports POST requests.
-#### Non-HTTP triggered functions
+The following considerations apply when calling HTTP endpoints locally:
-For all functions other than HTTP and Event Grid triggers, you can test your functions locally using REST by calling a special endpoint called an _administration endpoint_. Calling this endpoint with an HTTP POST request on the local server triggers the function. You can call the `functions` administrator endpoint (`http://localhost:{port}/admin/functions/`) to get URLs for all available functions, both HTTP triggered and non-HTTP triggered.
++ You can make GET requests from a browser passing data in the query string. For all other HTTP methods, you must use cURL, Fiddler, Postman, or a similar HTTP testing tool that supports POST requests.
-When running your functions in Core Tools, authentication and authorization is bypassed. However, when you try to call the same administrator endpoints on your function app in Azure, you must provide an access key. To learn more, see [Function access keys](functions-bindings-http-webhook-trigger.md#authorization-keys).
++ Make sure to use the same server name and port that the Functions host is listening on. You see an endpoint like this in the output generated when starting the Functions host. You can call this URL using any HTTP method supported by the trigger.
->[!IMPORTANT]
->Access keys are valuable shared secrets. When used locally, they must be securely stored outside of source control. Because authentication and authorization isn't required by Functions when running locally, you should avoid using and storing access keys unless your scenarios require it.
+### [Non-HTTP trigger](#tab/non-http-trigger)
-To test Event Grid triggered functions locally, see [Local testing with viewer web app](event-grid-how-tos.md#local-testing-with-viewer-web-app).
+There are two ways to execute non-HTTP triggers locally. First, you can connect to live Azure services, such as Azure Storage and Azure Service Bus. This directly mirrors the behavior of your function when running in Azure. When using live services, make sure to include the required named connection strings in the [local settings file](#local-settings). Consider using a different service connection during development than in production, by setting a different connection string in the local.settings.json file than the one used in the function app settings in Azure.
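As a minimal sketch of storing such a connection locally, assuming a hypothetical setting named `ServiceBusConnection`, you can use the `func settings add` command rather than editing the file by hand:

```console
# Add a hypothetical connection setting to the Values collection in local.settings.json
func settings add ServiceBusConnection "<SERVICE_BUS_CONNECTION_STRING>"
```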
-You can optionally pass test data to the execution in the body of the POST request. This functionality is similar to the **Test** tab in the Azure portal.
+Event Grid triggers require extra configuration to run locally.
-You call the following administrator endpoint to trigger non-HTTP functions:
+You can also run a non-HTTP function locally using REST by calling a special endpoint called an _administrator endpoint_. Use this format to call the `admin` endpoint and trigger a specific non-HTTP function:
```
-http://localhost:{port}/admin/functions/{function_name}
+http://localhost:<PORT>/admin/functions/<FUNCTION_NAME>
```
-To pass test data to the administrator endpoint of a function, you must supply the data in the body of a POST request message. The message body is required to have the following JSON format:
+In this URL template, `<FUNCTION_NAME>` is the name of the function or route and `<PORT>` is the local port on which func.exe is listening.
+
+You can optionally pass test data to the execution in the body of the POST request, which must have this JSON format:
```JSON
{
- "input": "<trigger_input>"
+ "input": "<TRIGGER_INPUT>"
}
```
-The `<trigger_input>` value contains data in a format expected by the function. The following cURL example is a POST to a `QueueTriggerJS` function. In this case, the input is a string that is equivalent to the message expected to be found in the queue.
+The `<TRIGGER_INPUT>` value contains data in a format expected by the function. This cURL example is shown for both Bash shell and Windows command line:
-##### [Bash](#tab/bash)
```bash
curl --request POST -H "Content-Type:application/json" --data '{"input":"sample queue data"}' http://localhost:7071/admin/functions/QueueTrigger
```
-##### [Cmd](#tab/cmd)
-```bash
+
+```cmd
curl --request POST -H "Content-Type:application/json" --data "{'input':'sample queue data'}" http://localhost:7071/admin/functions/QueueTrigger
```
+
+The previous examples generate a POST request that passes the string `sample queue data` to a function named `QueueTrigger`, which simulates data arriving in the queue and triggering the function.
+
+The following considerations apply when using the administrator endpoint for local testing:
+
++ You can call the `functions` administrator endpoint (`http://localhost:{port}/admin/functions/`) to return a list of administrator URLs for all available functions, both HTTP triggered and non-HTTP triggered, as shown in the sketch after this list.
+
++ Authentication and authorization are bypassed when running locally. The same APIs exist in Azure, but when you try to call the same administrator endpoints in Azure, you must provide an access key. To learn more, see [Function access keys](functions-bindings-http-webhook-trigger.md#authorization-keys).
+
++ Access keys are valuable shared secrets. When used locally, they must be securely stored outside of source control. Because authentication and authorization aren't required by Functions when running locally, you should avoid using and storing access keys unless your scenarios require it.
+
++ Calling an administrator endpoint and passing test data is similar to using the **Test** tab in the Azure portal.
+
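For example, this sketch queries the `functions` administrator endpoint on the default local port:

```console
# List administrator URLs for all functions on the local host (default port assumed)
curl http://localhost:7071/admin/functions/
```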
+### [Event Grid trigger](#tab/event-grid-trigger)
+
+Event Grid triggers have specific requirements to enable local testing. For more information, see [Local testing with viewer web app](event-grid-how-tos.md#local-testing-with-viewer-web-app).
+ ## <a name="publish"></a>Publish to Azure
The Azure Functions Core Tools supports three types of deployment:
| Azure Container Apps | `func azurecontainerapps deploy` | Deploys a containerized function app to an existing Container Apps environment. |
| Kubernetes cluster | `func kubernetes deploy` | Deploys your Linux function app as a custom Docker container to a Kubernetes cluster. |
-### Authenticating with Azure
-
You must have either the [Azure CLI](/cli/azure/install-azure-cli) or [Azure PowerShell](/powershell/azure/install-azure-powershell) installed locally to be able to publish to Azure from Core Tools. By default, Core Tools uses these tools to authenticate with your Azure account. If you don't have these tools installed, you need to instead [get a valid access token](/cli/azure/account#az-account-get-access-token) to use during deployment. You can present an access token using the `--access-token` option in the deployment commands.
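The following sketch shows one way to chain the two steps together, assuming the Azure CLI is installed; the app name is hypothetical:

```console
# Mint a bearer token with the Azure CLI, then pass it to Core Tools during deployment
token=$(az account get-access-token --query accessToken --output tsv)
func azure functionapp publish myfunctionapp12345 --access-token "$token"
```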
-### <a name="project-file-deployment"></a>Deploy project files
+## <a name="project-file-deployment"></a>Deploy project files
::: zone pivot="programming-language-csharp,programming-language-javascript,programming-language-powershell,programming-language-python,programming-language-typescript" To publish your local code to a function app in Azure, use the [`func azure functionapp publish publish`](./functions-core-tools-reference.md#func-azure-functionapp-publish) command, as in the following example:
The following considerations apply to this kind of deployment:
+ A [remote build](functions-deployment-technologies.md#remote-build) is performed on compiled projects. This can be controlled by using the [`--no-build` option][func azure functionapp publish].
-+ Use the [`--publish-local-settings` option][func azure functionapp publish] to automatically create app settings in your function app based on values in the local.settings.json file.
++ Use the [`--publish-local-settings`][func azure functionapp publish] option to automatically create app settings in your function app based on values in the local.settings.json file.
+ To publish to a specific named slot in your function app, use the [`--slot` option](functions-core-tools-reference.md#func-azure-functionapp-publish), as in the sketch after this list.
::: zone-end
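For example, this sketch combines both options; the app and slot names are hypothetical:

```console
# Publish project files, push local settings, and target a specific slot
func azure functionapp publish myfunctionapp12345 --publish-local-settings --slot staging
```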
-### Azure Container Apps deployment
+## Deploy containers
+
+Core Tools lets you deploy your [containerized function app](functions-create-container-registry.md) to both managed Azure Container Apps environments and Kubernetes clusters that you manage.
+
+### [Container Apps](#tab/container-apps)
-Functions lets you deploy a [containerized function app](functions-create-container-registry.md) to an Azure Container Apps environment. For more information, see [Azure Container Apps hosting of Azure Functions](functions-container-apps-hosting.md). Use the following [`func azurecontainerapps deploy`](./functions-core-tools-reference.md#func-azurecontainerapps-deploy) command to deploy an existing container image to a Container Apps environment:
+Use the following [`func azurecontainerapps deploy`](./functions-core-tools-reference.md#func-azurecontainerapps-deploy) command to deploy an existing container image to a Container Apps environment:
```command
func azurecontainerapps deploy --name <APP_NAME> --environment <ENVIRONMENT_NAME> --storage-account <STORAGE_CONNECTION> --resource-group <RESOURCE_GROUP> --image-name <IMAGE_NAME> [--registry-password] [--registry-server] [--registry-username]
```
-When deploying to an Azure Container Apps environment, the environment and storage account must already exist. You don't need to create a separate function app resource. The storage account connection string you provide is used by the deployed function app.
+When you deploy to an Azure Container Apps environment, the following considerations apply:
-> [!IMPORTANT]
-> Storage connection strings and other service credentials are important secrets. Make sure to securely store any script files using `func azurecontainerapps deploy` and don't store them in any publicly accessible source control systems.
++ The environment and storage account must already exist. The storage account connection string you provide is used by the deployed function app.
+
++ You don't need to create a separate function app resource when deploying to Container Apps.
+
++ Storage connection strings and other service credentials are important secrets. Make sure to securely store any script files using `func azurecontainerapps deploy` and don't store them in any publicly accessible source control systems. You can [encrypt the local.settings.json file](#encrypt-the-local-settings-file) for added security.
-### Kubernetes cluster
+For more information, see [Azure Container Apps hosting of Azure Functions](functions-container-apps-hosting.md).
-Core Tools can also be used to deploy a [containerized function app](functions-create-container-registry.md) to a Kubernetes cluster that you manage. The following [`func kubernetes deploy`](./functions-core-tools-reference.md#func-kubernetes-deploy) command uses the Dockerfile to generate a container in the specified registry and deploy it to the default Kubernetes cluster.
+### [Kubernetes cluster](#tab/kubernetes)
+
+The following [`func kubernetes deploy`](./functions-core-tools-reference.md#func-kubernetes-deploy) command uses the Dockerfile to generate a container in the specified registry and deploy it to the default Kubernetes cluster.
```command
func kubernetes deploy --name <DEPLOYMENT_NAME> --registry <REGISTRY_USERNAME>
func kubernetes deploy --name <DEPLOYMENT_NAME> --registry <REGISTRY_USERNAME>
Azure Functions on Kubernetes using KEDA is an open-source effort that you can use free of cost. Best-effort support is provided by contributors and from the community. To learn more, see [Deploying a function app to Kubernetes](functions-kubernetes-keda.md#deploying-a-function-app-to-kubernetes).
-## Install extensions
+
++
+The following considerations apply when working with the local settings file:
++ Because the local.settings.json may contain secrets, such as connection strings, you should never store it in a remote repository. Core Tools helps you encrypt this local settings file for improved security. For more information, see [Local settings file](functions-develop-local.md#local-settings-file). You can also [encrypt the local.settings.json file](#encrypt-the-local-settings-file) for added security.
+
++ By default, local settings aren't migrated automatically when the project is published to Azure. Use the [`--publish-local-settings`][func azure functionapp publish] option when you publish your project files to make sure these settings are added to the function app in Azure. Values in the `ConnectionStrings` section are never published. You can also [upload settings from the local.settings.json file](#upload-local-settings-to-azure) at any time.
+
++ You can download and overwrite settings in your local.settings.json file with settings from your function app in Azure. For more information, see [Download application settings](#download-application-settings).
::: zone pivot="programming-language-csharp"
-> [!NOTE]
-> This section only applies to C# script (.csx) projects, which also rely on extension bundles. Compiled C# projects use NuGet extension packages in the regular way.
++ The function app settings values can also be read in your code as environment variables. For more information, see [Environment variables](functions-dotnet-class-library.md#environment-variables).
++ The function app settings values can also be read in your code as environment variables. For more information, see [Environment variables](functions-reference-java.md#environment-variables).
++ The function app settings values can also be read in your code as environment variables. For more information, see [Environment variables](functions-reference-node.md#environment-variables).
++ The function app settings values can also be read in your code as environment variables. For more information, see [Environment variables](functions-reference-powershell.md#environment-variables).
++ The function app settings values can also be read in your code as environment variables. For more information, see [Environment variables](functions-reference-python.md#environment-variables).
::: zone-end
-In the rare event you aren't able to use [extension bundles](functions-bindings-register.md#extension-bundles), you can use Core Tools to install the specific extension packages required by your project. The following are some reasons why you might need to install extensions manually:
++ When no valid storage connection string is set for [`AzureWebJobsStorage`](functions-app-settings.md#azurewebjobsstorage) and a local storage emulator isn't being used, an error is shown. You can use Core Tools to [download a specific connection string](#download-a-storage-connection-string) from any of your Azure Storage accounts.
-* You need to access a specific version of an extension not available in a bundle.
-* You need to access a custom extension not available in a bundle.
-* You need to access a specific combination of extensions not available in a single bundle.
+### Download application settings
-The following considerations apply when manually installing extensions:
+From the project root, use the following command to download all application settings from the `myfunctionapp12345` app in Azure:
-+ To manually install extensions by using Core Tools, you must have the [.NET 6.0 SDK](https://dotnet.microsoft.com/download) installed.
+```command
+func azure functionapp fetch-app-settings myfunctionapp12345
+```
-+ You can't explicitly install extensions in a function app with extension bundles enabled. First, remove the `extensionBundle` section in *host.json* before explicitly installing extensions.
+This command overwrites any existing settings in the local.settings.json file with values from Azure. When not already present, new items are added to the collection. For more information, see the [`func azure functionapp fetch-app-settings`](functions-core-tools-reference.md#func-azure-functionapp-fetch-app-settings) command.
-+ The first time you explicitly install an extension, a .NET project file named extensions.csproj is added to the root of your app project. This file defines the set of NuGet packages required by your functions. While you can work with the [NuGet package references](/nuget/consume-packages/package-references-in-project-files) in this file, Core Tools lets you install extensions without having to manually edit this C# project file.
+### Download a storage connection string
-Use the following command to install a specific extension package at a specific version, in this case the Storage extension:
+Core Tools also makes it easy to get the connection string of any storage account to which you have access. From the project root, use the following command to download the connection string from a storage account named `mystorage12345`:
```command
-func extensions install --package Microsoft.Azure.WebJobs.Extensions.Storage --version 5.0.0
+func azure storage fetch-connection-string mystorage12345
```
-You can use this command to install any compatible NuGet package. To learn more, see the [`func extensions install`](functions-core-tools-reference.md#func-extensions-install) command.
+This command adds a setting named `mystorage12345_STORAGE` to the local.settings.json file, which contains the connection string for the `mystorage12345` account. For more information, see the [`func azure storage fetch-connection-string`](functions-core-tools-reference.md#func-azure-storage-fetch-connection-string) command.
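To verify the result, a quick sketch that inspects the `Values` collection in the local settings file:

```console
# Confirm that the new connection string setting was added locally
func settings list
```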
-## Monitoring functions
+For improved security during development, consider [encrypting the local.settings.json file](#encrypt-the-local-settings-file).
-The recommended way to monitor the execution of your functions is by integrating with Azure Application Insights. You can also stream execution logs to your local computer. To learn more, see [Monitor Azure Functions](functions-monitoring.md).
+### Upload local settings to Azure
-### Application Insights integration
+When you publish your project files to Azure without using the `--publish-local-settings` option, settings in the local.settings.json file aren't set in your function app. You can always rerun `func azure functionapp publish` with the `--publish-settings-only` option to upload just the settings without republishing the project files.
-Application Insights integration should be enabled when you create your function app in Azure. If for some reason your function app isn't connected to an Application Insights instance, it's easy to do this integration in the Azure portal. To learn more, see [Enable Application Insights integration](configure-monitoring.md#enable-application-insights-integration).
+The following example uploads just settings from the `Values` collection in the local.settings.json file to the function app in Azure named `myfunctionapp12345`:
-### Enable streaming logs
+```command
+func azure functionapp publish myfunctionapp12345 --publish-settings-only
+```
+
+### Encrypt the local settings file
+
+To improve security of connection strings and other valuable data in your local settings, Core Tools lets you encrypt the local.settings.json file. When this file is encrypted, the runtime automatically decrypts the settings when needed the same way it does with application settings in Azure. You can also decrypt a locally encrypted file to work with the settings.
-You can view a stream of log files being generated by your functions in a command-line session on your local computer.
+Use the following command to encrypt the local settings file for the project:
+```command
+func settings encrypt
+```
-This type of streaming logs requires that Application Insights integration be enabled for your function app.
+Use the following command to decrypt an encrypted local settings file, so that you can work with it:
+
+```command
+func settings decrypt
+```
+
+When the settings file is encrypted or decrypted, the file's `IsEncrypted` setting also gets updated.
+
+## Configure binding extensions
+
+[Functions triggers and bindings](functions-triggers-bindings.md) are implemented as .NET extension (NuGet) packages. To be able to use a specific binding extension, that extension must be installed in the project.
+
+This section doesn't apply to version 1.x of the Functions runtime. In version 1.x, supported bindings were included in the core product extension.
+
+For C# class library projects, add references to the specific NuGet packages for the binding extensions required by your functions. C# script (.csx) projects must use [extension bundles](functions-bindings-register.md#extension-bundles).
+Functions provides _extension bundles_ to make it easy to work with binding extensions in your project. Extension bundles, which are versioned and defined in the host.json file, install a complete set of compatible binding extension packages for your app. Your host.json should already have extension bundles enabled. If for some reason you need to add or update the extension bundle in the host.json file, see [Extension bundles](functions-bindings-register.md#extension-bundles).
+
+If you must use a binding extension or an extension version not in a supported bundle, you need to manually install extensions. For such rare scenarios, see the [`func extensions install`](./functions-core-tools-reference.md#func-extensions-install) command.
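For example, this command installs a specific version of the Storage binding extension; extension bundles must first be removed from the `extensionBundle` section of host.json:

```console
# Install a specific binding extension version (requires extension bundles to be disabled)
func extensions install --package Microsoft.Azure.WebJobs.Extensions.Storage --version 5.0.0
```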
+
+## <a name="v2"></a>Core Tools versions
+
+Major versions of Azure Functions Core Tools are linked to specific major versions of the Azure Functions runtime. For example, version 4.x of Core Tools supports version 4.x of the Functions runtime. This version is the recommended major version of both the Functions runtime and Core Tools. You can determine the latest release version of Core Tools in the [Azure Functions Core Tools repository](https://github.com/Azure/azure-functions-core-tools/releases/latest).
+
+Run the following command to determine the version of your current Core Tools installation:
+
+```command
+func --version
+```
+
+Unless otherwise noted, the examples in this article are for version 4.x.
+
+The following considerations apply to Core Tools installations:
+
++ You can only install one version of Core Tools on a given computer.
+
++ When upgrading to the latest version of Core Tools, you should use the same method that you used for the original installation to perform the upgrade. For example, if you used an MSI on Windows, uninstall the current MSI and install the latest one. Or if you used npm, rerun the `npm install` command, as in the sketch after this list.
+
++ Versions 2.x and 3.x of Core Tools were used with versions 2.x and 3.x of the Functions runtime, which have reached their end of life (EOL). For more information, see [Azure Functions runtime versions overview](functions-versions.md).
++ Version 1.x of Core Tools is required when using version 1.x of the Functions Runtime, which is still supported. This version of Core Tools can only be run locally on Windows computers. If you're currently running on version 1.x, you should consider [migrating your app to version 4.x](migrate-version-1-version-4.md) today.

[!INCLUDE [functions-x86-emulation-on-arm64](../../includes/functions-x86-emulation-on-arm64.md)]
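For example, assuming you originally installed Core Tools as a global npm package, this sketch upgrades it to the latest 4.x release:

```console
# Upgrade a global npm installation of Core Tools to the latest 4.x release
npm install -g azure-functions-core-tools@4 --unsafe-perm true
```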
-If you're using Visual Studio Code, you can integrate Rosetta with the built-in Terminal. For more information, see [Enable emulation in Visual Studio Code](./functions-develop-vs-code.md#enable-emulation-in-visual-studio-code).
+When using Visual Studio Code, you can integrate Rosetta with the built-in Terminal. For more information, see [Enable emulation in Visual Studio Code](./functions-develop-vs-code.md#enable-emulation-in-visual-studio-code).
## Next steps
Learn how to [develop, test, and publish Azure functions by using Azure Function
<!-- LINKS -->
-[Azure Functions Core Tools]: https://www.npmjs.com/package/azure-functions-core-tools
-[Azure portal]: https://portal.azure.com
-[Node.js]: https://docs.npmjs.com/getting-started/installing-node#osx-or-windows
-[`FUNCTIONS_WORKER_RUNTIME`]: functions-app-settings.md#functions_worker_runtime
-[`AzureWebJobsStorage`]: functions-app-settings.md#azurewebjobsstorage
[extension bundles]: functions-bindings-register.md#extension-bundles
[func azure functionapp publish]: functions-core-tools-reference.md?tabs=v2#func-azure-functionapp-publish
-[func init]: functions-core-tools-reference.md?tabs=v2#func-init
++
+[Long-Term Support (LTS) version of .NET Core]: https://dotnet.microsoft.com/platform/support/policy/dotnet-core#lifecycle
azure-functions Functions Triggers Bindings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-triggers-bindings.md
Title: Triggers and bindings in Azure Functions description: Learn to use triggers and bindings to connect your Azure Function to online events and cloud-based services.- Previously updated : 05/25/2022- Last updated : 08/14/2023+
+zone_pivot_groups: programming-languages-set-functions
# Azure Functions triggers and bindings concepts
azure-functions Functions Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-versions.md
In Visual Studio, you select the runtime version when you create a project. Azur
<AzureFunctionsVersion>v4</AzureFunctionsVersion>
```
-You can also choose `net6.0`, `net7.0`, or `net48` as the target framework if you are using [.NET isolated worker process functions](dotnet-isolated-process-guide.md). Support for `net7.0` and `net48` is currently in preview.
+You can also choose `net6.0`, `net7.0`, `net8.0`, or `net48` as the target framework if you are using [.NET isolated worker process functions](dotnet-isolated-process-guide.md). Support for `net8.0` is currently in preview.
> [!NOTE]
> Azure Functions 4.x requires the `Microsoft.NET.Sdk.Functions` extension be at least `4.0.0`.
azure-functions Migrate Cosmos Db Version 3 Version 4 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/migrate-cosmos-db-version-3-version-4.md
+
+ Title: Migrate Azure Cosmos DB extension for Azure Functions to version 4.x
+description: This article shows you how to upgrade your existing function apps using the Azure Cosmos DB extension version 3.x to be able to use version 4.x of the extension.
+++ Last updated : 08/16/2023
+zone_pivot_groups: programming-languages-set-functions-lang-workers
++
+# Migrate function apps from Azure Cosmos DB extension version 3.x to version 4.x
+
+This article highlights considerations for upgrading your existing Azure Functions applications that use the Azure Cosmos DB extension version 3.x to use the newer [extension version 4.x](./functions-bindings-cosmosdb-v2.md?tabs=extensionv4). Migrating from version 3.x to version 4.x of the Azure Cosmos DB extension has breaking changes for your application.
+
+> [!IMPORTANT]
+> On August 31, 2024 the Azure Cosmos DB extension version 3.x will be retired. The extension and all applications using the extension will continue to function, but Azure Cosmos DB will cease to provide further maintenance and support for this extension. We recommend migrating to the latest version 4.x of the extension.
+
+This article walks you through the process of migrating your function app to run on version 4.x of the Azure Cosmos DB extension. Because project upgrade instructions are language dependent, make sure to choose your development language from the selector at the [top of the article](#top).
++
+## Update the extension version
+
+.NET Functions use bindings that are installed in the project as NuGet packages. Depending on your Functions process model, the NuGet package to update varies.
+
+|Functions process model |Azure Cosmos DB extension |Recommended version |
+||--|--|
+|[In-process model](./functions-dotnet-class-library.md)|[Microsoft.Azure.WebJobs.Extensions.CosmosDB](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.CosmosDB) |>= 4.3.0 |
+|[Isolated worker model](./dotnet-isolated-process-guide.md) |[Microsoft.Azure.Functions.Worker.Extensions.CosmosDB](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Extensions.CosmosDB)|>= 4.4.1 |
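As an alternative to editing the project file by hand, this sketch uses the dotnet CLI to update the package for either process model; the versions match the table above:

```console
# In-process model: update the WebJobs extension package
dotnet add package Microsoft.Azure.WebJobs.Extensions.CosmosDB --version 4.3.0

# Isolated worker model: update the worker extension package
dotnet add package Microsoft.Azure.Functions.Worker.Extensions.CosmosDB --version 4.4.1
```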
+
+Update your `.csproj` project file to use the latest extension version for your process model. The following `.csproj` file uses version 4 of the Azure Cosmos DB extension.
+
+### [In-process model](#tab/in-process)
+
+```xml
+<Project Sdk="Microsoft.NET.Sdk">
+ <PropertyGroup>
+ <TargetFramework>net7.0</TargetFramework>
+ <AzureFunctionsVersion>v4</AzureFunctionsVersion>
+ </PropertyGroup>
+ <ItemGroup>
+ <PackageReference Include="Microsoft.Azure.WebJobs.Extensions.CosmosDB" Version="4.3.0" />
+ <PackageReference Include="Microsoft.NET.Sdk.Functions" Version="4.1.1" />
+ </ItemGroup>
+ <ItemGroup>
+ <None Update="host.json">
+ <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
+ </None>
+ <None Update="local.settings.json">
+ <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
+ <CopyToPublishDirectory>Never</CopyToPublishDirectory>
+ </None>
+ </ItemGroup>
+</Project>
+```
+
+### [Isolated worker model](#tab/isolated-worker)
+
+```xml
+<Project Sdk="Microsoft.NET.Sdk">
+ <PropertyGroup>
+ <TargetFramework>net7.0</TargetFramework>
+ <AzureFunctionsVersion>v4</AzureFunctionsVersion>
+ <OutputType>Exe</OutputType>
+ </PropertyGroup>
+ <ItemGroup>
+ <PackageReference Include="Microsoft.Azure.Functions.Worker" Version="1.14.1" />
+ <PackageReference Include="Microsoft.Azure.Functions.Worker.Extensions.CosmosDB" Version="4.4.1" />
+ <PackageReference Include="Microsoft.Azure.Functions.Worker.Sdk" Version="1.10.0" />
+ </ItemGroup>
+ <ItemGroup>
+ <None Update="host.json">
+ <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
+ </None>
+ <None Update="local.settings.json">
+ <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
+ <CopyToPublishDirectory>Never</CopyToPublishDirectory>
+ </None>
+ </ItemGroup>
+</Project>
+```
++++
+## Update the extension bundle
+
+By default, [extension bundles](./functions-bindings-register.md#extension-bundles) are used by non-.NET function apps to install binding extensions. The Azure Cosmos DB version 4 extension is part of the Microsoft Azure Functions version 4 extension bundle.
+
+To update your application to use the latest extension bundle, update your `host.json`. The following `host.json` file uses version 4 of the Microsoft Azure Functions extension bundle.
+
+```json
+{
+ "version": "2.0",
+ "extensionBundle": {
+ "id": "Microsoft.Azure.Functions.ExtensionBundle",
+ "version": "[4.*, 5.0.0)"
+ }
+}
+```
+++
+## Rename the binding attributes
+
+Both [in-process](functions-dotnet-class-library.md) and [isolated process](dotnet-isolated-process-guide.md) C# libraries use the [CosmosDBTriggerAttribute](https://github.com/Azure/azure-webjobs-sdk-extensions/blob/master/src/WebJobs.Extensions.CosmosDB/Trigger/CosmosDBTriggerAttribute.cs) to define the function.
+
+The following table only includes attributes that were renamed or were removed from the version 3 extension. For a full list of attributes available in the version 4 extension, visit the [attribute reference](./functions-bindings-cosmosdb-v2-trigger.md?tabs=extensionv4#attributes).
+
+|Version 3 attribute property |Version 4 attribute property |Version 4 attribute description |
+|--|--|--|
+|**ConnectionStringSetting** |**Connection** | The name of an app setting or setting collection that specifies how to connect to the Azure Cosmos DB account being monitored. For more information, see [Connections](./functions-bindings-cosmosdb-v2-trigger.md#connections).|
+|**CollectionName** |**ContainerName** | The name of the container being monitored. |
+|**LeaseConnectionStringSetting** |**LeaseConnection** | (Optional) The name of an app setting or setting collection that specifies how to connect to the Azure Cosmos DB account that holds the lease container. <br><br> When not set, the `Connection` value is used. This parameter is automatically set when the binding is created in the portal. The connection string for the leases container must have write permissions.|
+|**LeaseCollectionName** |**LeaseContainerName** | (Optional) The name of the container used to store leases. When not set, the value `leases` is used. |
+|**CreateLeaseCollectionIfNotExists** |**CreateLeaseContainerIfNotExists** | (Optional) When set to `true`, the leases container is automatically created when it doesn't already exist. The default value is `false`. When using Azure AD identities if you set the value to `true`, creating containers isn't [an allowed operation](../cosmos-db/nosql/troubleshoot-forbidden.md#non-data-operations-are-not-allowed) and your Function won't be able to start.|
+|**LeasesCollectionThroughput** |**LeasesContainerThroughput** | (Optional) Defines the number of Request Units to assign when the leases container is created. This setting is only used when `CreateLeaseContainerIfNotExists` is set to `true`. This parameter is automatically set when the binding is created using the portal. |
+|**LeaseCollectionPrefix** |**LeaseContainerPrefix** | (Optional) When set, the value is added as a prefix to the leases created in the Lease container for this function. Using a prefix allows two separate Azure Functions to share the same Lease container by using different prefixes. |
+|**UseMultipleWriteLocations** |*Removed* | This attribute is no longer needed as it's automatically detected. |
+|**UseDefaultJsonSerialization** |*Removed* | This attribute is no longer needed as you can fully customize the serialization using built in support in the [Azure Cosmos DB version 3 .NET SDK](../cosmos-db/nosql/migrate-dotnet-v3.md#customize-serialization). |
+|**CheckpointInterval**|*Removed* | This attribute has been removed in the version 4 extension. |
+|**CheckpointDocumentCount** |*Removed* | This attribute has been removed in the version 4 extension. |
++
+## Rename the binding attributes
+
+Update your binding configuration properties in the `function.json` file.
+
+The following table only includes attributes that changed or were removed from the version 3.x extension. For a full list of attributes available in the version 4 extension, visit the [attribute reference](./functions-bindings-cosmosdb-v2-trigger.md#attributes).
+
+|Version 3 attribute property |Version 4 attribute property |Version 4 attribute description |
+|--|--|--|
+|**connectionStringSetting** |**connection** | The name of an app setting or setting collection that specifies how to connect to the Azure Cosmos DB account being monitored. For more information, see [Connections](./functions-bindings-cosmosdb-v2-trigger.md#connections).|
+|**collectionName** |**containerName** | The name of the container being monitored. |
+|**leaseConnectionStringSetting** |**leaseConnection** | (Optional) The name of an app setting or setting collection that specifies how to connect to the Azure Cosmos DB account that holds the lease container. <br><br> When not set, the `connection` value is used. This parameter is automatically set when the binding is created in the portal. The connection string for the leases container must have write permissions.|
+|**leaseCollectionName** |**leaseContainerName** | (Optional) The name of the container used to store leases. When not set, the value `leases` is used. |
+|**createLeaseCollectionIfNotExists** |**createLeaseContainerIfNotExists** | (Optional) When set to `true`, the leases container is automatically created when it doesn't already exist. The default value is `false`. When using Azure AD identities if you set the value to `true`, creating containers isn't [an allowed operation](../cosmos-db/nosql/troubleshoot-forbidden.md#non-data-operations-are-not-allowed) and your Function won't be able to start.|
+|**leasesCollectionThroughput** |**leasesContainerThroughput** | (Optional) Defines the number of Request Units to assign when the leases container is created. This setting is only used when `createLeaseContainerIfNotExists` is set to `true`. This parameter is automatically set when the binding is created using the portal. |
+|**leaseCollectionPrefix** |**leaseContainerPrefix** | (Optional) When set, the value is added as a prefix to the leases created in the Lease container for this function. Using a prefix allows two separate Azure Functions to share the same Lease container by using different prefixes. |
+|**useMultipleWriteLocations** |*Removed* | This attribute is no longer needed as it's automatically detected. |
+|**checkpointInterval**|*Removed* | This attribute has been removed in the version 4 extension. |
+|**checkpointDocumentCount** |*Removed* | This attribute has been removed in the version 4 extension. |
+++
+## Modify your function code
+
+The Azure Functions extension version 4 is built on top of the Azure Cosmos DB .NET SDK version 3, which removed support for the [`Document` class](../cosmos-db/nosql/migrate-dotnet-v3.md#major-name-changes-from-v2-sdk-to-v3-sdk). Instead of receiving a list of `Document` objects with each function invocation, which you must then deserialize into your own object type, you can now directly receive a list of objects of your own type.
+
+This example refers to a simple `ToDoItem` type.
+
+```cs
+namespace CosmosDBSamples
+{
+ // Customize the model with your own desired properties
+ public class ToDoItem
+ {
+ public string id { get; set; }
+ public string Description { get; set; }
+ }
+}
+```
+
+Changes to the attribute names must be made directly in the code when defining your Function.
+
+```cs
+using System.Collections.Generic;
+using Microsoft.Azure.WebJobs;
+using Microsoft.Azure.WebJobs.Host;
+using Microsoft.Extensions.Logging;
+
+namespace CosmosDBSamples
+{
+ public static class CosmosTrigger
+ {
+ [FunctionName("CosmosTrigger")]
+ public static void Run([CosmosDBTrigger(
+ databaseName: "databaseName",
+ containerName: "containerName",
+ Connection = "CosmosDBConnectionSetting",
+ LeaseContainerName = "leases",
+ CreateLeaseContainerIfNotExists = true)]IReadOnlyList<ToDoItem> input, ILogger log)
+ {
+ if (input != null && input.Count > 0)
+ {
+ log.LogInformation("Documents modified " + input.Count);
+ log.LogInformation("First document Id " + input[0].id);
+ }
+ }
+ }
+}
+```
++
+## Modify your function code
+
+After you update your `host.json` to use the correct extension bundle version and modify your `function.json` to use the correct attribute names, there are no further code changes required.
++
+## Next steps
+
+- [Run a function when an Azure Cosmos DB document is created or modified (Trigger)](./functions-bindings-cosmosdb-v2-trigger.md)
+- [Read an Azure Cosmos DB document (Input binding)](./functions-bindings-cosmosdb-v2-input.md)
+- [Save changes to an Azure Cosmos DB document (Output binding)](./functions-bindings-cosmosdb-v2-output.md)
azure-functions Migrate Dotnet To Isolated Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/migrate-dotnet-to-isolated-model.md
On version 4.x of the Functions runtime, your .NET function app targets .NET 6 w
[!INCLUDE [functions-dotnet-migrate-v4-versions](../../includes/functions-dotnet-migrate-v4-versions.md)] > [!TIP]
-> **We recommend upgrading to .NET 6 on the isolated worker model.** This provides a quick upgrade path with the longest support window from .NET.
+> **We recommend upgrading to .NET 6 on the isolated worker model.** This provides a quick upgrade path to the fully released version with the longest support window from .NET.
## Prepare for migration
To upgrade the application, you will:
## Upgrade your local project
-The section outlines the various changes that you need to make to your local project to move it to the isolated worker model. Some of the steps change based on your target version of .NET. Use the tabs to select the instructions which match your desired version.
+This section outlines the various changes that you need to make to your local project to move it to the isolated worker model. Some of the steps change based on your target version of .NET. Use the tabs to select the instructions that match your desired version. These steps assume a local C# project; if your app instead uses C# script (`.csx` files), you should [convert to the project model](./functions-reference-csharp.md#convert-a-c-script-app-to-a-c-project) before continuing.
> [!TIP]
-> The [.NET Upgrade Assistant] can be used to automatically make many of the changes mentioned in the following sections.
+> If you are moving to an LTS or STS version of .NET, the [.NET Upgrade Assistant] can be used to automatically make many of the changes mentioned in the following sections.
### .csproj file
Some key classes change between the in-process model and the isolated worker mod
| `IActionResult` | `HttpResponseData`, `IActionResult` (using [ASP.NET Core integration])|
| `FunctionsStartup` (attribute) | Uses [`Program.cs`](#programcs-file) instead |
-[ASP.NET Core integration]: ./dotnet-isolated-process-guide.md#aspnet-core-integration-preview
+[ASP.NET Core integration]: ./dotnet-isolated-process-guide.md#aspnet-core-integration
There might also be class name differences in bindings. For more information, see the reference articles for the specific bindings.
azure-functions Migrate Version 1 Version 4 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/migrate-version-1-version-4.md
zone_pivot_groups: programming-languages-set-functions
This article walks you through the process of safely migrating your function app to run on version 4.x of the Functions runtime. Because project upgrade instructions are language dependent, make sure to choose your development language from the selector at the [top of the article](#top).
+If you are running version 1.x of the runtime in Azure Stack Hub, see [Considerations for Azure Stack Hub](#considerations-for-azure-stack-hub) first.
+ ## Identify function apps to upgrade Use the following PowerShell script to generate a list of function apps in your subscription that currently target version 1.x:
On version 1.x of the Functions runtime, your C# function app targets .NET Frame
[!INCLUDE [functions-dotnet-migrate-v4-versions](../../includes/functions-dotnet-migrate-v4-versions.md)] > [!TIP]
-> **Unless your app depends on a library or API only available to .NET Framework, we recommend upgrading to .NET 6 on the isolated worker model.** Many apps on version 1.x target .NET Framework only because that is what was available when they were created. Additional capabilities are available to more recent versions of .NET, and if your app is not forced to stay on .NET Framework due to a dependency, you should upgrade.
+> **Unless your app depends on a library or API only available to .NET Framework, we recommend upgrading to .NET 6 on the isolated worker model.** Many apps on version 1.x target .NET Framework only because that is what was available when they were created. Additional capabilities are available to more recent versions of .NET, and if your app is not forced to stay on .NET Framework due to a dependency, you should upgrade. .NET 6 is the fully released version with the longest support window from .NET.
>
-> Migrating to the isolated worker model will require additional code changes as part of this migration, but it will give your app [additional benefits](./dotnet-isolated-in-process-differences.md), including the ability to more easily target future versions of .NET. The [.NET Upgrade Assistant] can also handle many of the necessary code changes for you.
+> Migrating to the isolated worker model will require additional code changes as part of this migration, but it will give your app [additional benefits](./dotnet-isolated-in-process-differences.md), including the ability to more easily target future versions of .NET. If you are moving to an LTS or STS version of .NET using the isolated worker model, the [.NET Upgrade Assistant] can also handle many of the necessary code changes for you.
::: zone-end
Migrating a C# function app from version 1.x to version 4.x of the Functions run
Choose the tab that matches your target version of .NET and the desired process model (in-process or isolated worker process). > [!TIP]
-> The [.NET Upgrade Assistant] can be used to automatically make many of the changes mentioned in the following sections.
+> If you are moving to an LTS or STS version of .NET using the isolated worker model, the [.NET Upgrade Assistant] can be used to automatically make many of the changes mentioned in the following sections.
### .csproj file
Some key classes changed names between version 1.x and version 4.x. These change
-[ASP.NET Core integration]: ./dotnet-isolated-process-guide.md#aspnet-core-integration-preview
+[ASP.NET Core integration]: ./dotnet-isolated-process-guide.md#aspnet-core-integration
There might also be class name differences in bindings. For more information, see the reference articles for the specific bindings.
In version 2.x, the following changes were made:
* The URL format of Event Grid trigger webhooks has been changed to follow this pattern: `https://{app}/runtime/webhooks/{triggerName}`.
+## Considerations for Azure Stack Hub
+
+[App Service on Azure Stack Hub](/azure-stack/operator/azure-stack-app-service-overview) doesn't support version 4.x of Azure Functions. When you're planning a migration away from version 1.x in Azure Stack Hub, you can choose one of the following options:
+
+- Migrate to version 4.x hosted in public cloud Azure Functions using the instructions in this article. Instead of upgrading your existing app, you would create a new app using version 4.x and then deploy your modified project to it.
+- Switch to [WebJobs](../app-service/webjobs-create.md) hosted on an App Service plan in Azure Stack Hub.
+ ## Next steps > [!div class="nextstepaction"]
azure-functions Migrate Version 3 Version 4 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/migrate-version-3-version-4.md
On version 3.x of the Functions runtime, your C# function app targets .NET Core
[!INCLUDE [functions-dotnet-migrate-v4-versions](../../includes/functions-dotnet-migrate-v4-versions.md)] > [!TIP]
-> **If you're migrating from .NET 5 (on the isolated worker model), we recommend upgrading to .NET 6 on the isolated worker model.** This provides a quick upgrade path with the longest support window from .NET.
+> **If you're migrating from .NET 5 (on the isolated worker model), we recommend upgrading to .NET 6 on the isolated worker model.** This provides a quick upgrade path to the fully released version with the longest support window from .NET.
>
-> **If you're migrating from .NET Core 3.1 (on the in-process model), we recommend upgrading to .NET 6 on the in-process model.** This provides a quick upgrade path. However, you might also consider upgrading to .NET 6 on the isolated worker model. Switching to the isolated worker model will require additional code changes as part of this migration, but it will give your app [additional benefits](./dotnet-isolated-in-process-differences.md), including the ability to more easily target future versions of .NET. The [.NET Upgrade Assistant] can also handle many of the necessary code changes for you.
+> **If you're migrating from .NET Core 3.1 (on the in-process model), we recommend upgrading to .NET 6 on the in-process model.** This provides a quick upgrade path. However, you might also consider upgrading to .NET 6 on the isolated worker model. Switching to the isolated worker model will require additional code changes as part of this migration, but it will give your app [additional benefits](./dotnet-isolated-in-process-differences.md), including the ability to more easily target future versions of .NET. If you are moving to an LTS or STS version of .NET using the isolated worker model, the [.NET Upgrade Assistant] can also handle many of the necessary code changes for you.
::: zone-end
Upgrading instructions are language dependent. If you don't see your language, c
Choose the tab that matches your target version of .NET and the desired process model (in-process or isolated worker process). > [!TIP]
-> The [.NET Upgrade Assistant] can be used to automatically make many of the changes mentioned in the following sections.
+> If you are moving to an LTS or STS version of .NET using the isolated worker model, the [.NET Upgrade Assistant] can be used to automatically make many of the changes mentioned in the following sections.
### .csproj file
Some key classes changed names between versions. These changes are a result eith
-[ASP.NET Core integration]: ./dotnet-isolated-process-guide.md#aspnet-core-integration-preview
+[ASP.NET Core integration]: ./dotnet-isolated-process-guide.md#aspnet-core-integration
There might also be class name differences in bindings. For more information, see the reference articles for the specific bindings.
azure-functions Run Functions From Deployment Package https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/run-functions-from-deployment-package.md
This section provides information about how to run your function app from a pack
+ When running a function app on Windows, the app setting `WEBSITE_RUN_FROM_PACKAGE = <URL>` gives worse cold-start performance and isn't recommended. + When you specify a URL, you must also [manually sync triggers](functions-deployment-technologies.md#trigger-syncing) after you publish an updated package. + The Functions runtime must have permissions to access the package URL.
-+ You shouldn't deploy your package to Azure Blob Storage as a public blob. Instead, use a private container with a [Shared Access Signature (SAS)](../vs-azure-tools-storage-manage-with-storage-explorer.md#generate-a-sas-in-storage-explorer) or [use a managed identity](#fetch-a-package-from-azure-blob-storage-using-a-managed-identity) to enable the Functions runtime to access the package.
++ You shouldn't deploy your package to Azure Blob Storage as a public blob. Instead, use a private container with a [Shared Access Signature (SAS)](../storage/common/storage-sas-overview.md) or [use a managed identity](#fetch-a-package-from-azure-blob-storage-using-a-managed-identity) to enable the Functions runtime to access the package.
++ You must maintain any SAS URLs used for deployment. When a SAS expires, the package can no longer be deployed. In this case, you must generate a new SAS and update the setting in your function app (see the sketch after this list). You can eliminate this management burden by [using a managed identity](#fetch-a-package-from-azure-blob-storage-using-a-managed-identity).
+ When running on a Premium plan, make sure to [eliminate cold starts](functions-premium-plan.md#eliminate-cold-starts).
+ When running on a Dedicated plan, make sure you've enabled [Always On](dedicated-plan.md#always-on).
+ You can use the [Azure Storage Explorer](../vs-azure-tools-storage-manage-with-storage-explorer.md) to upload package files to blob containers in your storage account.
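+For example, here's a sketch of setting the package URL with the Azure CLI; the app name, resource group, and URL are placeholders you'd replace with your own values:
+
+```bash
+# Point the function app at a package in a private blob container.
+# The URL would typically include a SAS token, or you'd rely on a managed identity instead.
+az functionapp config appsettings set \
+    --name <FUNCTION_APP_NAME> \
+    --resource-group <RESOURCE_GROUP_NAME> \
+    --settings WEBSITE_RUN_FROM_PACKAGE="<PACKAGE_URL>"
+```
+
+When the SAS in such a URL expires, you'd rerun this command with a freshly generated SAS, which is the maintenance burden the managed identity approach avoids.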
azure-functions Security Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/security-concepts.md
For example, every function app requires an associated storage account, which is
App settings and connection strings are stored encrypted in Azure. They're decrypted only before being injected into your app's process memory when the app starts. The encryption keys are rotated regularly. If you prefer to instead manage the secure storage of your secrets, the app setting should instead be references to Azure Key Vault.
-You can also encrypt settings by default in the local.settings.json file when developing functions on your local computer. To learn more, see the `IsEncrypted` property in the [local settings file](functions-develop-local.md#local-settings-file).
+You can also encrypt settings by default in the local.settings.json file when developing functions on your local computer. For more information, see [Encrypt the local settings file](functions-run-local.md#encrypt-the-local-settings-file).
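+As a minimal illustration, assuming Azure Functions Core Tools is installed and you run the commands from the project root, the encryption can be toggled from the command line:
+
+```bash
+# Encrypt the values in local.settings.json for the current project
+func settings encrypt
+
+# Decrypt them again when you need to edit the file
+func settings decrypt
+```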
#### Key Vault references
By having a separate scm endpoint, you can control deployments and other advance
### Continuous security validation
-Since security needs to be considered at every step in the development process, it makes sense to also implement security validations in a continuous deployment environment. This is sometimes called DevSecOps. Using Azure DevOps for your deployment pipeline let's you integrate validation into the deployment process. For more information, see [Learn how to add continuous security validation to your CI/CD pipeline](/azure/devops/migrate/security-validation-cicd-pipeline).
+Since security needs to be considered at every step in the development process, it makes sense to also implement security validations in a continuous deployment environment. This is sometimes called DevSecOps. Using Azure DevOps for your deployment pipeline lets you integrate validation into the deployment process. For more information, see [Learn how to add continuous security validation to your CI/CD pipeline](/azure/devops/migrate/security-validation-cicd-pipeline).
## Network security
azure-functions Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/start-stop-vms/overview.md
Specifying a list of VMs can be used when you need to perform the start and stop
- You must have an Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/). -- Your account has been granted the [Contributor](../../role-based-access-control/built-in-roles.md#contributor) permission in the subscription.
+- To deploy the solution, your account must be granted the [Owner](../../role-based-access-control/built-in-roles.md#owner) permission in the subscription.
- Start/Stop VMs v2 is available in all Azure global and US Government cloud regions that are listed in [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?regions=all&products=functions) page for Azure Functions.
azure-functions Streaming Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/streaming-logs.md
Title: Stream execution logs in Azure Functions description: Learn how you can stream logs for functions in near real time. Previously updated : 9/1/2020 Last updated : 8/21/2023 -+ ms.devlang: azurecli # Customer intent: As a developer, I want to be able to configure streaming logs so that I can see what's happening in my functions in near real time.
There are two ways to view a stream of log files being generated by your functio
* **Built-in log streaming**: the App Service platform lets you view a stream of your application log files. This is equivalent to the output seen when you debug your functions during [local development](functions-develop-local.md) and when you use the **Test** tab in the portal. All log-based information is displayed. For more information, see [Stream logs](../app-service/troubleshoot-diagnostic-logs.md#stream-logs). This streaming method supports only a single instance, and can't be used with an app running on Linux in a Consumption plan.
-* **Live Metrics Stream**: when your function app is [connected to Application Insights](configure-monitoring.md#enable-application-insights-integration), you can view log data and other metrics in near real-time in the Azure portal using [Live Metrics Stream](../azure-monitor/app/live-stream.md). Use this method when monitoring functions running on multiple-instances or on Linux in a Consumption plan. This method uses [sampled data](configure-monitoring.md#configure-sampling).
+* **Live Metrics Stream**: when your function app is [connected to Application Insights](configure-monitoring.md#enable-application-insights-integration), you can view log data and other metrics in near real-time in the Azure portal using [Live Metrics Stream](../azure-monitor/app/live-stream.md). Use this method when monitoring functions running on multiple instances; it supports all plan types. This method uses [sampled data](configure-monitoring.md#configure-sampling).
Log streams can be viewed both in the portal and in most local development environments.
-## Portal
+## [Portal](#tab/azure-portal)
You can view both types of log streams in the portal.
-### Built-in log streaming
- To view streaming logs in the portal, select the **Platform features** tab in your function app. Then, under **Monitoring**, choose **Log streaming**. ![Enable streaming logs in the portal](./media/functions-monitoring/enable-streaming-logs-portal.png)
This connects your app to the log streaming service and application logs are dis
![View streaming logs in the portal](./media/functions-monitoring/streaming-logs-window.png)
-### Live Metrics Stream
- To view the Live Metrics Stream for your app, select the **Overview** tab of your function app. When you have Application Insights enabled, you see an **Application Insights** link under **Configured features**. This link takes you to the Application Insights page for your app. In Application Insights, select **Live Metrics Stream**. [Sampled log entries](configure-monitoring.md#configure-sampling) are displayed under **Sample Telemetry**. ![View Live Metrics Stream in the portal](./media/functions-monitoring/live-metrics-stream.png)
-## Visual Studio Code
+## [Visual Studio Code](#tab/vs-code)
[!INCLUDE [functions-enable-log-stream-vs-code](../../includes/functions-enable-log-stream-vs-code.md)]
-## Core Tools
+## [Core Tools](#tab/core-tools)
+
+Use the [`func azure functionapp logstream` command](functions-core-tools-reference.md#func-azure-functionapp-logstream) to start receiving streaming logs of a specific function app running in Azure, as in this example:
+
+```bash
+func azure functionapp logstream <FunctionAppName>
+```
+
+>[!NOTE]
+>Because built-in log streaming isn't yet enabled for function apps running on Linux in a Consumption plan, you instead need to enable the [Live Metrics Stream](../azure-monitor/app/live-stream.md) to view the logs in near real time.
+
+Use this command to display the Live Metrics Stream in a new browser window.
+```bash
+func azure functionapp logstream <FunctionAppName> --browser
+```
-## Azure CLI
+## [Azure CLI](#tab/azure-cli)
You can enable streaming logs by using the [Azure CLI](/cli/azure/install-azure-cli). Use the following commands to sign in, choose your subscription, and stream log files:
az account set --subscription <subscriptionNameOrId>
az webapp log tail --resource-group <RESOURCE_GROUP_NAME> --name <FUNCTION_APP_NAME> ```
-## Azure PowerShell
+## [Azure PowerShell](#tab/azure-powershell)
You can enable streaming logs by using [Azure PowerShell](/powershell/azure/). For PowerShell, use the [Set-AzWebApp](/powershell/module/az.websites/set-azwebapp) command to enable logging on the function app, as shown in the following snippet:
You can enable streaming logs by using [Azure PowerShell](/powershell/azure/). F
For more information, see the [complete code example](../app-service/scripts/powershell-monitor.md#sample-script). ++ ## Next steps + [Monitor Azure Functions](functions-monitoring.md)
azure-government Compare Azure Government Global Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/compare-azure-government-global-azure.md
You're responsible for designing and deploying your applications to meet [US exp
## Guidance for developers
-Azure Government services operate the same way as the corresponding services in global Azure, which is why most of the existing online Azure documentation applies equally well to Azure Government. However, there are some key differences that developers working on applications hosted in Azure Government must be aware of. For more information, see [Guidance for developers](./documentation-government-developer-guide.md). As a developer, you must know how to connect to Azure Government and once you connect you'll mostly have the same experience as in global Azure.
+Most of the currently available technical content assumes that applications are being developed on global Azure rather than on Azure Government. For this reason, it's important to be aware of two key differences in applications that you develop for hosting in Azure Government.
+
+- Certain services and features that are in specific regions of global Azure might not be available in Azure Government.
+
+- Feature configurations in Azure Government might differ from those in global Azure.
+
+Therefore, it's important to review your sample code and configurations to ensure that you are building within the Azure Government cloud services environment.
+
+For more information, see [Azure Government developer guide](./documentation-government-developer-guide.md).
> [!NOTE] > This article has been updated to use the new Azure PowerShell Az module. You can still use the AzureRM module, which will continue to receive bug fixes until at least December 2020. To learn more about the new Az module and AzureRM compatibility, see [**Introducing the new Azure PowerShell Az module**](/powershell/azure/new-azureps-module-az). For Az module installation instructions, see [**Install the Azure Az PowerShell module**](/powershell/azure/install-azure-powershell).
Start using Azure Government:
- [Guidance for developers](./documentation-government-developer-guide.md) - [Connect with the Azure Government portal](./documentation-government-get-started-connect-with-portal.md)+
azure-government Azure Services In Fedramp Auditscope https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/compliance/azure-services-in-fedramp-auditscope.md
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Cognitive Search](../../search/index.yml) (formerly Azure Search) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Azure AI | [Azure AI
-| [Azure AI services containers](../../ai-services/cognitive-services-container-support.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
+| [Azure AI containers](../../ai-services/cognitive-services-container-support.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
| [Azure AI | [Azure AI | [Azure AI
azure-government Documentation Government Csp List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-csp-list.md
Previously updated : 02/15/2023 Last updated : 08/31/2023 # Azure Government authorized reseller list
Below you can find a list of all the authorized Cloud Solution Providers (CSPs),
|LSP name|Email|Phone|
|--|--|--|
+|Carahsoft Technology Corporation|microsoft@carahsoft.com|844-MSFTGOV|
|CDW Corp.|cdwgsales@cdwg.com|800-808-4239|
|Dell Corp.|Get_Azure@Dell.com|888-375-9857|
|Insight Public Sector|federal@insight.com|800-467-4448|
azure-government Documentation Government Overview Wwps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-overview-wwps.md
Azure Stack Hub and Azure Stack Edge represent key enabling technologies that al
### Azure Stack Hub
-[Azure Stack Hub](https://azure.microsoft.com/products/azure-stack/hub/) (formerly Azure Stack) is an integrated system of software and validated hardware that you can purchase from Microsoft hardware partners, deploy in your own data center, and then operate entirely on your own or with the help from a managed service provider. With Azure Stack Hub, you're always fully in control of access to your data. Azure Stack Hub can accommodate up to [16 physical servers per Azure Stack Hub scale unit](/azure-stack/operator/azure-stack-overview). It represents an extension of Azure, enabling you to provision various IaaS and PaaS services and effectively bring multi-tenant cloud technology to on-premises and edge environments. You can run many types of VM instances, App Services, Containers (including Azure AI services containers), Functions, Azure Monitor, Key Vault, Event Hubs, and other services while using the same development tools, APIs, and management processes you use in Azure. Azure Stack Hub isn't dependent on connectivity to Azure to run deployed applications and enable operations via local connectivity.
+[Azure Stack Hub](https://azure.microsoft.com/products/azure-stack/hub/) (formerly Azure Stack) is an integrated system of software and validated hardware that you can purchase from Microsoft hardware partners, deploy in your own data center, and then operate entirely on your own or with the help from a managed service provider. With Azure Stack Hub, you're always fully in control of access to your data. Azure Stack Hub can accommodate up to [16 physical servers per Azure Stack Hub scale unit](/azure-stack/operator/azure-stack-overview). It represents an extension of Azure, enabling you to provision various IaaS and PaaS services and effectively bring multi-tenant cloud technology to on-premises and edge environments. You can run many types of VM instances, App Services, Containers (including Azure AI containers), Functions, Azure Monitor, Key Vault, Event Hubs, and other services while using the same development tools, APIs, and management processes you use in Azure. Azure Stack Hub isn't dependent on connectivity to Azure to run deployed applications and enable operations via local connectivity.
In addition to Azure Stack Hub, which is intended for on-premises deployment (for example, in a data center), a ruggedized and field-deployable version called [Tactical Azure Stack Hub](https://www.delltechnologies.com/en-us/collaterals/unauth/data-sheets/products/converged-infrastructure/dell-emc-integrated-system-for-azure-stack-hub-tactical-spec-sheet.pdf) is also available to address tactical edge deployments for limited or no connectivity, fully mobile requirements, and harsh conditions requiring military specification solutions.
This section addresses common customer questions related to Azure public, privat
- **Data storage for regional - **Data storage for non-regional - **Air-gapped (sovereign) cloud deployment:** Why doesn't Microsoft deploy an air-gapped, sovereign, physically isolated cloud instance in every country/region? **Answer:** Microsoft is actively pursuing air-gapped cloud deployments where a business case can be made with governments across the world. However, physical isolation or "air gapping", as a strategy, is diametrically opposed to the strategy of hyperscale cloud. The value proposition of the cloud, rapid feature growth, resiliency, and cost-effective operation, are diminished when the cloud is fragmented and physically isolated. These strategic challenges compound with each extra air-gapped cloud or fragmentation within an air-gapped cloud. Whereas an air-gapped cloud might prove to be the right solution for certain customers, it isn't the only available option.
-- **Air-gapped (sovereign) cloud customer options:** How can Microsoft support governments who need to operate cloud services completely in-country/region by local security-cleared personnel? What options does Microsoft have for cloud services operated entirely on-premises within customer owned datacenter where government employees exercise sole operational and data access control? **Answer:** You can use [Azure Stack Hub](https://azure.microsoft.com/products/azure-stack/hub/) to deploy a private cloud on-premises managed by your own security-cleared, in-country/region personnel. You can run many types of VM instances, App Services, Containers (including Azure AI services containers), Functions, Azure Monitor, Key Vault, Event Hubs, and other services while using the same development tools, APIs, and management processes that you use in Azure. With Azure Stack Hub, you have sole control of your data, including storage, processing, transmission, and remote access.
+- **Air-gapped (sovereign) cloud customer options:** How can Microsoft support governments who need to operate cloud services completely in-country/region by local security-cleared personnel? What options does Microsoft have for cloud services operated entirely on-premises within customer owned datacenter where government employees exercise sole operational and data access control? **Answer:** You can use [Azure Stack Hub](https://azure.microsoft.com/products/azure-stack/hub/) to deploy a private cloud on-premises managed by your own security-cleared, in-country/region personnel. You can run many types of VM instances, App Services, Containers (including Azure AI containers), Functions, Azure Monitor, Key Vault, Event Hubs, and other services while using the same development tools, APIs, and management processes that you use in Azure. With Azure Stack Hub, you have sole control of your data, including storage, processing, transmission, and remote access.
- **Local jurisdiction:** Is Microsoft subject to local country/region jurisdiction based on the availability of Azure public cloud service? **Answer:** Yes, Microsoft must comply with all applicable local laws; however, government requests for customer data must also comply with applicable laws. A subpoena or its local equivalent is required to request non-content data. A warrant, court order, or its local equivalent is required for content data. Government requests for customer data follow a strict procedure according to [Microsoft practices for responding to government requests](https://blogs.microsoft.com/datalaw/our-practices/). Every year, Microsoft rejects many law enforcement requests for customer data. Challenges to government requests can take many forms. In many of these cases, Microsoft simply informs the requesting government that it's unable to disclose the requested information and explains the reason for rejecting the request. Where appropriate, Microsoft challenges requests in court. Our [Law Enforcement Request Report](https://www.microsoft.com/corporate-responsibility/law-enforcement-requests-report?rtc=1) and [US National Security Order Report](https://www.microsoft.com/corporate-responsibility/us-national-security-orders-report) are updated every six months and show that most of our customers are never impacted by government requests for data. For example, in the second half of 2019, Microsoft received 39 requests from law enforcement for accounts associated with enterprise cloud customers. Of those requests, only one warrant resulted in disclosure of customer content related to a non-US enterprise customer whose data was stored outside the United States. - **Autarky:** Can Microsoft cloud operations be separated from the rest of Microsoft cloud and connected solely to local government network? Are operations possible without external connections to a third party? **Answer:** Yes, depending on the cloud deployment model. - **Public Cloud:** Azure regional datacenters can be connected to your local government network through dedicated private connections such as ExpressRoute. Independent operation without any connectivity to a third party such as Microsoft isn't possible in the public cloud.
azure-maps Creator Facility Ontology https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/creator-facility-ontology.md
Learn more about Creator for indoor maps by reading:
[structures]: #structure
<!-- REST API Links -->
[conversion service]: /rest/api/maps/v2/conversion
-[dataset]: /rest/api/maps/v20220901preview/dataset
+[dataset]: /rest/api/maps/2023-03-01-preview/dataset
[GeoJSON Point geometry]: /rest/api/maps/v2/wfs/get-features#geojsonpoint [MultiPolygon]: /rest/api/maps/v2/wfs/get-features?tabs=HTTP#geojsonmultipolygon [Point]: /rest/api/maps/v2/wfs/get-features#geojsonpoint
azure-maps Creator Indoor Maps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/creator-indoor-maps.md
The following example shows how to update a dataset, create a new tileset, and d
<!-- REST API Links -->
[Alias API]: /rest/api/maps/v2/alias
[Conversion service]: /rest/api/maps/v2/conversion
-[Creator - map configuration Rest API]: /rest/api/maps/v20220901preview/map-configuration
+[Creator - map configuration Rest API]: /rest/api/maps/2023-03-01-preview/map-configuration
[Data Upload]: /rest/api/maps/data-v2/update [Dataset Create]: /rest/api/maps/v2/dataset/create [Dataset service]: /rest/api/maps/v2/dataset
The following example shows how to update a dataset, create a new tileset, and d
[Feature State Update API]: /rest/api/maps/v2/feature-state/update-states [Geofence service]: /rest/api/maps/spatial/postgeofence [Render V2-Get Map Tile API]: /rest/api/maps/render-v2/get-map-tile
-[routeset]: /rest/api/maps/v20220901preview/routeset
-[Style - Create]: /rest/api/maps/v20220901preview/style/create
-[style]: /rest/api/maps/v20220901preview/style
+[routeset]: /rest/api/maps/2023-03-01-preview/routeset
+[Style - Create]: /rest/api/maps/2023-03-01-preview/style/create
+[style]: /rest/api/maps/2023-03-01-preview/style
[Tileset Create]: /rest/api/maps/v2/tileset/create [Tileset List]: /rest/api/maps/v2/tileset/list [Tileset service]: /rest/api/maps/v2/tileset
-[tileset]: /rest/api/maps/v20220901preview/tileset
-[wayfinding path]: /rest/api/maps/v20220901preview/wayfinding/get-path
-[wayfinding service]: /rest/api/maps/v20220901preview/wayfinding
-[wayfinding]: /rest/api/maps/v20220901preview/wayfinding
+[tileset]: /rest/api/maps/2023-03-01-preview/tileset
+[wayfinding path]: /rest/api/maps/2023-03-01-preview/wayfinding/get-path
+[wayfinding service]: /rest/api/maps/2023-03-01-preview/wayfinding
+[wayfinding]: /rest/api/maps/2023-03-01-preview/wayfinding
[Web Feature service]: /rest/api/maps/v2/wfs
<!-- learn.microsoft.com Links -->
azure-maps Creator Onboarding Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/creator-onboarding-tool.md
The following steps demonstrate how to create an indoor map in your Azure Maps a
:::image type="content" source="./media/creator-indoor-maps/onboarding-tool/package-upload.png" alt-text="Screenshot showing the package upload screen of the Azure Maps Creator onboarding tool.":::
-<!--
- > [!NOTE]
- > If the manifest included in the drawing package is incomplete or contains errors, the onboarding tool will not go directly to the **Review + Create** tab, but instead goes to the tab where you are best able to address the issue.
>- 1. Once the package is uploaded, the onboarding tool uses the [Conversion service] to validate the data then convert the geometry and data from the drawing package into a digital indoor map. For more information about the conversion process, see [Convert a drawing package] in the Creator concepts article. :::image type="content" source="./media/creator-indoor-maps/onboarding-tool/package-conversion.png" alt-text="Screenshot showing the package conversion screen of the Azure Maps Creator onboarding tool, including the Conversion ID value.":::
azure-maps Drawing Package Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/drawing-package-guide.md
Defining text properties enables you to associate text entities that fall inside
> * Stair > * Elevator
-### Download
+### Review + Create
-When finished, select the **Download** button to view the manifest. When you finished verifying that it's ready, select the **Download** button to save it locally so that you can include it in the drawing package to import into your Azure Maps Creator resource.
+When finished, select the **Create + Download** button to download a copy of the drawing package and start the map creation process. For more information on the map creation process, see [Create indoor map with the onboarding tool].
:::image type="content" source="./media/creator-indoor-maps/onboarding-tool/review-download.png" alt-text="Screenshot showing the manifest JSON.":::
You should now have all the DWG drawings prepared to meet Azure Maps Conversion
[manifest files]: drawing-requirements.md#manifest-file-1 [wayfinding]: creator-indoor-maps.md#wayfinding-preview [facility level]: drawing-requirements.md#facility-level
+[Create indoor map with the onboarding tool]: creator-onboarding-tool.md
azure-maps Geocoding Coverage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/geocoding-coverage.md
The ability to geocode in a country/region is dependent upon the road data cover
| Burkina Faso | | | ✓ | ✓ | ✓ |
| Burundi | | | ✓ | ✓ | ✓ |
| Cameroon | | | ✓ | ✓ | ✓ |
-| Cape Verde | | | ✓ | ✓ | ✓ |
+| Cabo Verde | | | ✓ | ✓ | ✓ |
| Central African Republic | | | ✓ | ✓ | ✓ |
| Chad | | | | ✓ | ✓ |
| Congo | | | | ✓ | ✓ |
The ability to geocode in a country/region is dependent upon the road data cover
| Qatar | ✓ | | ✓ | ✓ | ✓ |
| Réunion | ✓ | ✓ | ✓ | ✓ | ✓ |
| Rwanda | | | ✓ | ✓ | ✓ |
-| Saint Helena | | | | ✓ | ✓ |
+| Saint Helena, Ascension, and Tristan da Cunha | | | | ✓ | ✓ |
| São Tomé & Príncipe | | | ✓ | ✓ | ✓ |
| Saudi Arabia | ✓ | | ✓ | ✓ | ✓ |
| Senegal | | | ✓ | ✓ | ✓ |
azure-maps How To Create Custom Styles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-create-custom-styles.md
Select the **Get map configuration list** button to get a list of every map conf
:::image type="content" source="./media/creator-indoor-maps/style-editor/select-the-map-configuration.png" alt-text="A screenshot of the open style dialog box in the visual style editor with the Select map configuration drop-down list highlighted."::: > [!NOTE]
-> If the map configuration was created as part of a custom style and has a user provided alias, that alias appears in the map configuration drop-down list, otherwise the `mapConfigurationId` appears. The default map configuration ID for any given tileset can be found by using the [tileset get] HTTP request and passing in the tileset ID:
+> If the map configuration was created as part of a custom style and has a user-provided alias, that alias appears in the map configuration drop-down list; otherwise, just the `mapConfigurationId` appears. The default map configuration ID for any given tileset can be found by using the [tileset get] HTTP request and passing in the tileset ID:
> > ```http
-> https://{geography}.atlas.microsoft.com/tilesets/{tilesetId}?2022-09-01-preview
+> https://{geography}.atlas.microsoft.com/tilesets/{tilesetId}?api-version=2023-03-01-preview
> ``` > > The `mapConfigurationId` is returned in the body of the response, for example:
Select the **Get map configuration list** button to get a list of every map conf
> "defaultMapConfigurationId": "68d74ad9-4f84-99ce-06bb-19f487e8e692" > ```
-Once the map configuration drop-down list is populated with the IDs of all the map configurations in your creator resource, select the desired map configuration, then the drop-down list of style + tileset tuples appears. The *style + tileset* tuples consists of the style alias or ID, followed by the plus (**+**) sign then the `tilesetId`.
+Once the desired map configuration is selected, the drop-down list of styles appears.
Once you've selected the desired style, select the **Load selected style** button.
Once you've selected the desired style, select the **Load selected style** butto
||| | 1 | Your Azure Maps account [subscription key] | | 2 | Select the geography of the Azure Maps account. |
-| 3 | A list of map configuration aliases. If a given map configuration has no alias, the `mapConfigurationId` is shown instead. |
-| 4 | This value is created from a combination of the style and tileset. If the style has an alias it's shown, if not the `styleId` is shown. The `tilesetId` is always shown for the tileset value. |
+| 3 | A list of map configuration IDs and aliases. |
+| 4 | A list of styles associated with the selected map configuration. |
### Modify style
The following table describes the four fields you're presented with.
| Property | Description | |-|-| | Style description | A user-defined description for this style. |
-| Style alias | An alias that can be used to reference this style.<BR>When referencing programmatically, the style is referenced by the style ID if no alias is provided. |
| Map configuration description | A user-defined description for this map configuration. | | Map configuration alias | An alias used to reference this map configuration.<BR>When referencing programmatically, the map configuration is referenced by the map configuration ID if no alias is provided. | Some important things to know about aliases: 1. Can be named using alphanumeric characters (0-9, a-z, A-Z), hyphens (-) and underscores (_).
-1. Can be used to reference the underlying object, whether a style or map configuration, in place of that object's ID. This is especially important since the style and map configuration can't be updated, meaning every time any changes are saved, a new ID is generated, but the alias can remain the same, making referencing it less error prone after it has been modified multiple times.
+1. Can be used to reference the underlying map configuration in place of that object's ID. This is especially important because a map configuration can't be updated: every time changes are saved, a new ID is generated, but the alias can remain the same, which makes references less error prone after multiple modifications.
> [!WARNING]
-> Duplicate aliases are not allowed. If the alias of an existing style or map configuration is used, the style or map configuration that alias points to will be overwritten and the existing style or map configuration will be deleted and references to that ID will result in errors. See [map configuration] in the concepts article for more information.
+> Duplicate aliases are not allowed. If the alias of an existing map configuration is used, the map configuration that the alias points to is overwritten, the existing map configuration is deleted, and references to its ID result in errors. For more information, see [map configuration] in the concepts article.
Once you have entered values into each required field, select the **Upload map configuration** button to save the style and map configuration data to your Creator resource.
+Once you have successfully uploaded your custom styles, you'll see the **Upload complete** dialog showing the values for the Style ID, Map configuration ID, and map configuration alias. For more information, see [custom styling] and [map configuration].
++ > [!TIP]
-> Make a note of the map configuration `alias` value, it will be required when you [Instantiate the Indoor Manager] of a Map object when developing applications in Azure Maps.
+> Make a note of the map configuration alias value; it's required when you [Instantiate the Indoor Manager] of a Map object when developing applications in Azure Maps.
+> Also, make a note of the Style ID; it can be reused for other tilesets.
## Custom categories
Now when you select that unit in the map, the pop-up menu has the new layer ID,
[categories]: https://atlas.microsoft.com/sdk/javascript/indoor/0.2/categories.json [Creator concepts]: creator-indoor-maps.md [Creators Rest API]: /rest/api/maps-creator/
+[custom styling]: creator-indoor-maps.md#custom-styling-preview
[Instantiate the Indoor Manager]: how-to-use-indoor-module.md#instantiate-the-indoor-manager [manifest]: drawing-requirements.md#manifest-file-requirements [map configuration]: creator-indoor-maps.md#map-configuration [style editor]: https://azure.github.io/Azure-Maps-Style-Editor [subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account
-[tileset get]: /rest/api/maps/v20220901preview/tileset/get
-[tileset]: /rest/api/maps/v20220901preview/tileset
+[tileset get]: /rest/api/maps/2023-03-01-preview/tileset/get
+[tileset]: /rest/api/maps/2023-03-01-preview/tileset
[unitProperties]: drawing-requirements.md#unitproperties [Use Creator to create indoor maps]: tutorial-creator-indoor-maps.md [Use the Azure Maps Indoor Maps module]: how-to-use-indoor-module.md
azure-maps How To Creator Wayfinding https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-creator-wayfinding.md
To create a routeset:
1. Execute the following **HTTP POST request**: ```http
- https://us.atlas.microsoft.com/routesets?api-version=2022-09-01-preview&datasetID={datasetId}&subscription-key={Your-Azure-Maps-Subscription-key}
+ https://us.atlas.microsoft.com/routesets?api-version=2023-03-01-preview&datasetID={datasetId}&subscription-key={Your-Azure-Maps-Subscription-key}
```
To check the status of the routeset creation process and retrieve the routesetId
1. Execute the following **HTTP GET request**: ```http
- https://us.atlas.microsoft.com/routesets/operations/{operationId}?api-version=2022-09-01-preview&subscription-key={Your-Azure-Maps-Subscription-key}
+ https://us.atlas.microsoft.com/routesets/operations/{operationId}?api-version=2023-03-01-preview&subscription-key={Your-Azure-Maps-Subscription-key}
```
To check the status of the routeset creation process and retrieve the routesetId
1. Copy the value of the **Resource-Location** key from the response header. It's the resource location URL and contains the `routesetId`:
- > https://us.atlas.microsoft.com/routesets/**675ce646-f405-03be-302e-0d22bcfe17e8**?api-version=2022-09-01-preview
+ > https://us.atlas.microsoft.com/routesets/**675ce646-f405-03be-302e-0d22bcfe17e8**?api-version=2023-03-01-preview
Make a note of the `routesetId`. It's required in all [wayfinding](#get-a-wayfinding-path) requests and when you [Get the facility ID].
The `facilityId`, a property of the routeset, is a required parameter when searc
1. Execute the following **HTTP GET request**: ```http
- https://us.atlas.microsoft.com/routesets/{routesetId}?api-version=2022-09-01-preview&subscription-key={Your-Azure-Maps-Subscription-key}
+ https://us.atlas.microsoft.com/routesets/{routesetId}?api-version=2023-03-01-preview&subscription-key={Your-Azure-Maps-Subscription-key}
```
To create a wayfinding query:
1. Execute the following **HTTP GET request** (replace {routesetId} with the routesetId obtained in the [Check the routeset creation status] section and the {facilityId} with the facilityId obtained in the [Get the facility ID] section): ```http
- https://us.atlas.microsoft.com/wayfinding/path?api-version=2022-09-01-preview&subscription-key={Your-Azure-Maps-Subscription-key}&routesetid={routeset-ID}&facilityid={facility-ID}&fromPoint={lat,lon}&fromLevel={from-level}&toPoint={lat,lon}&toLevel={to-level}&minWidth={minimun-width}
+ https://us.atlas.microsoft.com/wayfinding/path?api-version=2023-03-01-preview&subscription-key={Your-Azure-Maps-Subscription-key}&routesetid={routeset-ID}&facilityid={facility-ID}&fromPoint={lat,lon}&fromLevel={from-level}&toPoint={lat,lon}&toLevel={to-level}&minWidth={minimum-width}
``` > [!TIP]
The wayfinding service calculates the path through specific intervening points.
[wayfinding service]: creator-indoor-maps.md#wayfinding-preview [wayfinding]: creator-indoor-maps.md#wayfinding-preview <! REST API Links >
-[routeset]: /rest/api/maps/v20220901preview/routeset
-[wayfinding API]: /rest/api/maps/v20220901preview/wayfinding
+[routeset]: /rest/api/maps/2023-03-01-preview/routeset
+[wayfinding API]: /rest/api/maps/2023-03-01-preview/wayfinding
azure-maps How To Dataset Geojson https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-dataset-geojson.md
To create a dataset:
1. Enter the following URL to the dataset service. The request should look like the following URL (replace {udid} with the `udid` obtained in [Check the GeoJSON package upload status] section): ```http
- https://us.atlas.microsoft.com/datasets?api-version=2022-09-01-preview&udid={udid}&subscription-key={Your-Azure-Maps-Subscription-key}
+ https://us.atlas.microsoft.com/datasets?api-version=2023-03-01-preview&udid={udid}&subscription-key={Your-Azure-Maps-Subscription-key}
``` 1. Copy the value of the `Operation-Location` key in the response header. The `Operation-Location` key is also known as the `status URL` and is required to check the status of the dataset creation process and to get the `datasetId`, which is required to create a tileset.
To check the status of the dataset creation process and retrieve the `datasetId`
1. Enter the status URL you copied in [Create a dataset]. The request should look like the following URL: ```http
- https://us.atlas.microsoft.com/datasets/operations/{operationId}?api-version=2022-09-01-preview&subscription-key={Your-Azure-Maps-Subscription-key}
+ https://us.atlas.microsoft.com/datasets/operations/{operationId}?api-version=2023-03-01-preview&subscription-key={Your-Azure-Maps-Subscription-key}
``` 1. In the Header of the HTTP response, copy the value of the unique identifier contained in the `Resource-Location` key.
- > `https://us.atlas.microsoft.com/datasets/**c9c15957-646c-13f2-611a-1ea7adc75174**?api-version=2022-09-01-preview`
+ > `https://us.atlas.microsoft.com/datasets/**c9c15957-646c-13f2-611a-1ea7adc75174**?api-version=2023-03-01-preview`
See [Next steps] for links to articles to help you complete your indoor map.
One thing to consider when adding to an existing dataset is how the feature IDs
If your original dataset was created from a GeoJSON source and you wish to add another facility created from a drawing package, you can append it to your existing dataset by referencing its `conversionId`, as demonstrated by this HTTP POST request: ```http
-https://us.atlas.microsoft.com/datasets?api-version=2022-09-01-preview&conversionId={conversionId}&outputOntology=facility-2.0&datasetId={datasetId}
+https://us.atlas.microsoft.com/datasets?api-version=2023-03-01-preview&conversionId={conversionId}&outputOntology=facility-2.0&datasetId={datasetId}
``` | Identifier | Description |
Feature IDs can only contain alpha-numeric (a-z, A-Z, 0-9), hyphen (-), dot (.)
[Creator Long-Running Operation API V2]: creator-long-running-operation-v2.md [Creator resource]: how-to-manage-creator.md [Data Upload API]: /rest/api/maps/data-v2/upload
-[Dataset Create API]: /rest/api/maps/v20220901preview/dataset/create
+[Dataset Create API]: /rest/api/maps/2023-03-01-preview/dataset/create
[Dataset Create]: /rest/api/maps/v2/dataset/create [dataset]: creator-indoor-maps.md#datasets [Facility Ontology 2.0]: creator-facility-ontology.md?pivots=facility-ontology-v2
azure-maps How To Secure Spa Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-secure-spa-users.md
Create the web application in Azure AD for users to sign in. The web application
6. Copy the Azure AD app ID and the Azure AD tenant ID from the app registration to use in the Web SDK. Add the Azure AD app registration details and the `x-ms-client-id` from the Azure Maps account to the Web SDK. ```javascript
- <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css" />
- <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.js" />
+ <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css" type="text/css" />
+ <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.js" />
<script> var map = new atlas.Map("map", { center: [-122.33, 47.64],
azure-maps How To Use Indoor Module https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-use-indoor-module.md
Set the map domain with a prefix matching the location of your Creator resource,
For more information, see [Azure Maps service geographic scope].
-Next, instantiate a *Map object* with the map configuration object set to the `alias` or `mapConfigurationId` property of your map configuration, then set your `styleAPIVersion` to `2022-09-01-preview`.
+Next, instantiate a *Map object* with the map configuration object set to the `alias` or `mapConfigurationId` property of your map configuration, then set your `styleAPIVersion` to `2023-03-01-preview`.
The *Map object* will be used in the next step to instantiate the *Indoor Manager* object. The following code shows you how to instantiate the *Map object* with `mapConfiguration`, `styleAPIVersion` and map domain set:
const map = new atlas.Map("map-id", {
zoom: 19, mapConfiguration: mapConfiguration,
- styleAPIVersion: '2022-09-01-preview'
+ styleAPIVersion: '2023-03-01-preview'
}); ```
When you create an indoor map using Azure Maps Creator, default styles are appli
- `mapConfiguration` the ID or alias of the map configuration that defines the custom styles you want to display on the map, use the map configuration ID or alias from step 1. - `style` allows you to set the initial style from your map configuration that is displayed. If not set, the style matching map configuration's default configuration is used. - `zoom` allows you to specify the min and max zoom levels for your map.
- - `styleAPIVersion`: pass **'2022-09-01-preview'** (which is required while Custom Styling is in public preview)
+ - `styleAPIVersion`: pass **'2023-03-01-preview'** (which is required while Custom Styling is in public preview)
7. Next, create the *Indoor Manager* module with the *Indoor Level Picker* control instantiated as part of the *Indoor Manager* options; optionally, set the `statesetId` option.
Your file should now look similar to the following HTML:
<meta name="viewport" content="width=device-width, user-scalable=no" /> <title>Indoor Maps App</title>
- <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css" />
+ <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css" type="text/css" />
<link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/indoor/0.2/atlas-indoor.min.css" type="text/css"/>
- <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.js"></script>
+ <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.js"></script>
<script src="https://atlas.microsoft.com/sdk/javascript/indoor/0.2/atlas-indoor.min.js"></script> <style>
Your file should now look similar to the following HTML:
zoom: 19, mapConfiguration: mapConfig,
- styleAPIVersion: '2022-09-01-preview'
+ styleAPIVersion: '2023-03-01-preview'
}); const levelControl = new atlas.control.LevelControl({
Learn more about how to add more data to your map:
[Drawing package requirements]: drawing-requirements.md [dynamic map styling]: indoor-map-dynamic-styling.md [Indoor Maps dynamic styling]: indoor-map-dynamic-styling.md
-[map configuration API]: /rest/api/maps/v20220901preview/map-configuration
+[map configuration API]: /rest/api/maps/2023-03-01-preview/map-configuration
[map configuration]: creator-indoor-maps.md#map-configuration
-[Style Rest API]: /rest/api/maps/v20220901preview/style
+[Style Rest API]: /rest/api/maps/2023-03-01-preview/style
[style-loader]: https://webpack.js.org/loaders/style-loader [Subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account [Tileset List API]: /rest/api/maps/v2/tileset/list
azure-maps How To Use Map Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-use-map-control.md
The Azure Maps Web SDK provides a [Map Control] that enables the customization o
This article uses the Azure Maps Web SDK; however, the Azure Maps services work with any map control. For a list of third-party map control plug-ins, see [Azure Maps community - Open-source projects].
+> [!IMPORTANT]
+> If you have existing applications incorporating Azure Maps using version 2 of the [Map Control], it's recommended to start using version 3. Version 3 is backward compatible and has several benefits, including [WebGL 2 Compatibility], increased performance, and support for [3D terrain tiles].
+
## Prerequisites

To use the Map Control in a web page, you must have one of the following prerequisites:
You can embed a map in a web page by using the Map Control client-side JavaScrip
* Use the globally hosted CDN version of the Azure Maps Web SDK by adding references to the JavaScript and `stylesheet` in the `<head>` element of the HTML file: ```html
- <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css">
- <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.js"></script>
+ <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css" type="text/css">
+ <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.js"></script>
``` * Load the Azure Maps Web SDK source code locally using the [azure-maps-control] npm package and host it with your app. This package also includes TypeScript definitions.
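As a rough sketch of the self-hosted approach, assuming a bundler such as webpack that can resolve CSS imports, the package can be consumed like this:

```javascript
// A minimal sketch, assuming a bundler that handles CSS imports.
import * as atlas from 'azure-maps-control';
import 'azure-maps-control/dist/atlas.min.css';

const map = new atlas.Map('map', {
    authOptions: {
        authType: 'subscriptionKey',
        subscriptionKey: '<Your Azure Maps Key>'
    }
});
```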
You can embed a map in a web page by using the Map Control client-side JavaScrip
Then add references to the Azure Maps `stylesheet` to the `<head>` element of the file: ```html
- <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css" />
+ <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css" type="text/css" />
``` > [!NOTE]
You can embed a map in a web page by using the Map Control client-side JavaScrip
<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no"> <!-- Add references to the Azure Maps Map control JavaScript and CSS files. -->
- <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css">
- <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.js"></script>
+ <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css" type="text/css">
+ <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.js"></script>
<script type="text/javascript">
Here's an example of Azure Maps with the language set to "fr-FR" and the regiona
For a list of supported languages and regional views, see [Localization support in Azure Maps].
+## WebGL 2 Compatibility
+
+Beginning with Azure Maps Web SDK 3.0, the Web SDK includes full compatibility with [WebGL 2], a powerful graphics technology that enables hardware-accelerated rendering in modern web browsers. By using WebGL 2, developers can harness the capabilities of modern GPUs to render complex maps and visualizations more efficiently, resulting in improved performance and visual quality.
+
+![Map image showing WebGL 2 Compatibility.](./media/how-to-use-map-control/webgl-2-compatability.png)
+
+```html
+<!DOCTYPE html>
+<html lang="en">
+ <head>
+ <meta charset="utf-8" />
+ <meta name="viewport" content="width=device-width, user-scalable=no" />
+ <title>WebGL2 - Azure Maps Web SDK Samples</title>
+    <link href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css" rel="stylesheet" />
+    <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.js"></script>
+ <script src="https://unpkg.com/deck.gl@latest/dist.min.js"></script>
+ <style>
+ html,
+ body {
+ width: 100%;
+ height: 100%;
+ padding: 0;
+ margin: 0;
+ }
+ #map {
+ width: 100%;
+ height: 100%;
+ }
+ </style>
+ </head>
+ <body>
+ <div id="map"></div>
+ <script>
+ var map = new atlas.Map("map", {
+ center: [-122.44, 37.75],
+ bearing: 36,
+ pitch: 45,
+ zoom: 12,
+ style: "grayscale_light",
+ // Get an Azure Maps key at https://azuremaps.com/.
+ authOptions: {
+ authType: "subscriptionKey",
+          subscriptionKey: "<Your Azure Maps Key>"
+ }
+ });
+
+ // Wait until the map resources are ready.
+ map.events.add("ready", (event) => {
+ // Create a custom layer to render data points using deck.gl
+ map.layers.add(
+ new DeckGLLayer({
+ id: "grid-layer",
+ data: "https://raw.githubusercontent.com/visgl/deck.gl-data/master/website/sf-bike-parking.json",
+ cellSize: 200,
+ extruded: true,
+ elevationScale: 4,
+ getPosition: (d) => d.COORDINATES,
+ // GPUGridLayer leverages WebGL2 to perform aggregation on the GPU.
+ // For more details, see https://deck.gl/docs/api-reference/aggregation-layers/gpu-grid-layer
+ type: deck.GPUGridLayer
+ })
+ );
+ });
+
+ // A custom implementation of WebGLLayer
+ class DeckGLLayer extends atlas.layer.WebGLLayer {
+ constructor(options) {
+ super(options.id);
+ // Create an instance of deck.gl MapboxLayer which is compatible with Azure Maps
+ // https://deck.gl/docs/api-reference/mapbox/mapbox-layer
+ this._mbLayer = new deck.MapboxLayer(options);
+
+ // Create a renderer
+ const renderer = {
+ renderingMode: "3d",
+ onAdd: (map, gl) => {
+ this._mbLayer.onAdd?.(map["map"], gl);
+ },
+ onRemove: (map, gl) => {
+ this._mbLayer.onRemove?.(map["map"], gl);
+ },
+ prerender: (gl, matrix) => {
+ this._mbLayer.prerender?.(gl, matrix);
+ },
+ render: (gl, matrix) => {
+ this._mbLayer.render(gl, matrix);
+ }
+ };
+ this.setOptions({ renderer });
+ }
+ }
+ </script>
+ </body>
+</html>
+```
+
+## 3D terrain tiles
+
+Beginning with Azure Maps Web SDK 3.0, developers can take advantage of 3D terrain visualizations. This feature allows you to incorporate elevation data into your maps, creating a more immersive experience for your users. Whether it's visualizing mountain ranges, valleys, or other geographical features, the 3D terrain support brings a new level of realism to your mapping applications.
+
+The following code example demonstrates how to implement 3D terrain tiles.
+
+```html
+<!DOCTYPE html>
+<html lang="en">
+ <head>
+ <meta charset="utf-8" />
+ <meta name="viewport" content="width=device-width, user-scalable=no" />
+ <title>Elevation - Azure Maps Web SDK Samples</title>
+    <link href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css" rel="stylesheet" />
+    <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.js"></script>
+ <style>
+ html,
+ body {
+ width: 100%;
+ height: 100%;
+ padding: 0;
+ margin: 0;
+ }
+ #map {
+ width: 100%;
+ height: 100%;
+ }
+ </style>
+ </head>
+
+ <body>
+ <div id="map"></div>
+ <script>
+ var map = new atlas.Map("map", {
+ center: [-121.7269, 46.8799],
+ maxPitch: 85,
+ pitch: 60,
+ zoom: 12,
+ style: "road_shaded_relief",
+ // Get an Azure Maps key at https://azuremaps.com/.
+ authOptions: {
+ authType: "subscriptionKey",
+ subscriptionKey: "<Your Azure Maps Key>"
+ }
+ });
+
+ // Create a tile source for elevation data. For more information on creating
+ // elevation data & services using open data, see https://aka.ms/elevation
+ var elevationSource = new atlas.source.ElevationTileSource("elevation", {
+ url: "<tileSourceUrl>"
+ });
+
+ // Wait until the map resources are ready.
+ map.events.add("ready", (event) => {
+
+ // Add the elevation source to the map.
+ map.sources.add(elevationSource);
+
+ // Enable elevation on the map.
+ map.enableElevation(elevationSource);
+ });
+ </script>
+ </body>
+</html>
+```
+
## Azure Government cloud support

The Azure Maps Web SDK supports the Azure Government cloud. All JavaScript and CSS URLs used to access the Azure Maps Web SDK remain the same. The following tasks need to be done to connect to the Azure Government cloud version of the Azure Maps platform.
For a list of samples showing how to integrate Azure AD with Azure Maps, see:
> [!div class="nextstepaction"] > [Azure AD authentication samples](https://github.com/Azure-Samples/Azure-Maps-AzureAD-Samples)
+[3D terrain tiles]: #3d-terrain-tiles
[authentication options]: /javascript/api/azure-maps-control/atlas.authenticationoptions [Authentication with Azure Maps]: azure-maps-authentication.md [Azure Maps & Azure Active Directory Samples]: https://github.com/Azure-Samples/Azure-Maps-AzureAD-Samples
For a list of samples showing how to integrate Azure AD with Azure Maps, see:
[AzureMapsControl.Components]: https://github.com/arnaudleclerc/AzureMapsControl.Components [azure-maps-control]: https://www.npmjs.com/package/azure-maps-control [Localization support in Azure Maps]: supported-languages.md
+[Map Control]: https://www.npmjs.com/package/azure-maps-control
[ng-azure-maps]: https://github.com/arnaudleclerc/ng-azure-maps [subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account [Vue Azure Maps]: https://github.com/rickyruiz/vue-azure-maps
-[Map Control]: https://www.npmjs.com/package/azure-maps-control
+[WebGL 2 Compatibility]: #webgl-2-compatibility
+[WebGL 2]: https://developer.mozilla.org/en-US/docs/Web/API/WebGL_API#webgl_2
azure-maps How To Use Spatial Io Module https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-use-spatial-io-module.md
You can load the Azure Maps spatial IO module using one of the two options:
<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no"> <!-- Add references to the Azure Maps Map control JavaScript and CSS files. -->
- <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css" />
- <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.js"></script>
+ <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css" type="text/css" />
+ <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.js"></script>
<script type='text/javascript'>
You can load the Azure Maps spatial IO module using one of the two options:
<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no"> <!-- Add references to the Azure Maps Map control JavaScript and CSS files. -->
- <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css" />
- <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.js"></script>
+ <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css" type="text/css" />
+ <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.js"></script>
<!-- Add reference to the Azure Maps Spatial IO module. --> <script src="https://atlas.microsoft.com/sdk/javascript/spatial/0/atlas-spatial.js"></script>
azure-maps Map Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-create.md
In the following code, the first code block creates a map and sets the center and
<head> <!-- Add references to the Azure Maps Map control JavaScript and CSS files. -->
- <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css" />
- <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.js"></script>
+ <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css" type="text/css" />
+ <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.js"></script>
<script type="text/javascript">
azure-maps Migrate From Bing Maps Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-from-bing-maps-web-app.md
The following code shows how to load a map with the same view in Azure Maps alon
<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no" /> <!-- Add references to the Azure Maps Map control JavaScript and CSS files. -->
- <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css" />
- <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.js"></script>
+ <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css" type="text/css" />
+ <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.js"></script>
<script type='text/javascript'> var map;
When using a Symbol layer, the data must be added to a data source, and the data
<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no" /> <!-- Add references to the Azure Maps Map control JavaScript and CSS files. -->
- <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css" />
- <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.js"></script>
+ <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css" type="text/css" />
+ <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.js"></script>
<script type='text/javascript'> var map, datasource;
Symbol layers in Azure Maps support custom images as well, but the image needs t
<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no" /> <!-- Add references to the Azure Maps Map control JavaScript and CSS files. -->
- <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css" />
- <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.js"></script>
+ <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css" type="text/css" />
+ <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.js"></script>
<script type='text/javascript'> var map, datasource;
GeoJSON data can be directly imported in Azure Maps using the `importDataFromUrl
<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no" /> <!-- Add references to the Azure Maps Map control JavaScript and CSS files. -->
- <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css" />
- <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.js"></script>
+ <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css" type="text/css" />
+ <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.js"></script>
<script type='text/javascript'> var map, datasource;
In Azure Maps, load the GeoJSON data into a data source and connect the data sou
<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no" /> <!-- Add references to the Azure Maps Map control JavaScript and CSS files. -->
- <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css" />
- <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.js"></script>
+ <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css" type="text/css" />
+ <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.js"></script>
<script type='text/javascript'> var map;
In Azure Maps, georeferenced images can be overlaid using the `atlas.layer.Image
<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no" /> <!-- Add references to the Azure Maps Map control JavaScript and CSS files. -->
- <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css" />
- <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.js"></script>
+ <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css" type="text/css" />
+ <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.js"></script>
<script type='text/javascript'> var map;
In Azure Maps, GeoJSON is the main data format used in the web SDK, more spatial
<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no" /> <!-- Add references to the Azure Maps Map control JavaScript and CSS files. -->
- <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css" />
- <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.js"></script>
+ <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css" type="text/css" />
+ <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.js"></script>
<!-- Add reference to the Azure Maps Spatial IO module. --> <script src="https://atlas.microsoft.com/sdk/javascript/spatial/0/atlas-spatial.js"></script>
In Azure Maps, the drawing tools module needs to be loaded by loading the JavaSc
<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no" /> <!-- Add references to the Azure Maps Map control JavaScript and CSS files. -->
- <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css" />
- <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.js"></script>
+ <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css" type="text/css" />
+ <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.js"></script>
<!-- Add references to the Azure Maps Map Drawing Tools JavaScript and CSS files. --> <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/drawing/0/atlas-drawing.min.css" type="text/css" />
azure-maps Migrate From Bing Maps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-from-bing-maps.md
Learn the details of how to migrate your Bing Maps application with these articl
[azure.com]: https://azure.com [Basic snap to road logic]: https://samples.azuremaps.com/?search=Snap%20to%20road&sample=basic-snap-to-road-logic [Choose the right pricing tier in Azure Maps]: choose-pricing-tier.md
+[free account]: https://azure.microsoft.com/free/
[free Azure account]: https://azure.microsoft.com/free/ [manage authentication in Azure Maps]: how-to-manage-authentication.md [Microsoft Azure terms of use]: https://www.microsoftvolumelicensing.com/DocumentSearch.aspx?Mode=3&DocumentTypeId=31
azure-maps Migrate From Google Maps Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-from-google-maps-web-app.md
Load a map with the same view in Azure Maps along with a map style control and z
<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no" /> <!-- Add references to the Azure Maps Map control JavaScript and CSS files. -->
- <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css" />
- <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.js"></script>
+ <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css" type="text/css" />
+ <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.js"></script>
<script type='text/javascript'> var map;
For a Symbol layer, add the data to a data source. Attach the data source to the
<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no" /> <!-- Add references to the Azure Maps Map control JavaScript and CSS files. -->
- <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css" />
- <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.js"></script>
+ <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css" type="text/css" />
+ <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.js"></script>
<script type='text/javascript'> var map, datasource;
Symbol layers in Azure Maps support custom images as well. First, load the image
<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no" /> <!-- Add references to the Azure Maps Map control JavaScript and CSS files. -->
- <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css" />
- <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.js"></script>
+ <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css" type="text/css" />
+ <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.js"></script>
<script type='text/javascript'> var map, datasource;
GeoJSON is the base data type in Azure Maps. Import it into a data source using
<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no" /> <!-- Add references to the Azure Maps Map control JavaScript and CSS files. -->
- <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css" />
- <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.js"></script>
+ <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css" type="text/css" />
+ <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.js"></script>
<script type='text/javascript'> var map;
Directly import GeoJSON data using the `importDataFromUrl` function on the `Data
<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no" /> <!-- Add references to the Azure Maps Map control JavaScript and CSS files. -->
- <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css" />
- <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.js"></script>
+ <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css" type="text/css" />
+ <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.js"></script>
<script type='text/javascript'> var map, datasource;
Load the GeoJSON data into a data source and connect the data source to a heat m
<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no" /> <!-- Add references to the Azure Maps Map control JavaScript and CSS files. -->
- <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css" />
- <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.js"></script>
+ <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css" type="text/css" />
+ <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.js"></script>
<script type='text/javascript'> var map;
Use the `atlas.layer.ImageLayer` class to overlay georeferenced images. This cla
<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no" /> <!-- Add references to the Azure Maps Map control JavaScript and CSS files. -->
- <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css" />
- <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.js"></script>
+ <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css" type="text/css" />
+ <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.js"></script>
<script type='text/javascript'> var map;
In Azure Maps, GeoJSON is the main data format used in the web SDK, more spatial
<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no" /> <!-- Add references to the Azure Maps Map control JavaScript and CSS files. -->
- <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css" />
- <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.js"></script>
+ <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css" type="text/css" />
+ <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.js"></script>
<!-- Add reference to the Azure Maps Spatial IO module. --> <script src="https://atlas.microsoft.com/sdk/javascript/spatial/0/atlas-spatial.js"></script>
azure-maps Power Bi Visual Add Reference Layer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/power-bi-visual-add-reference-layer.md
The following are all settings in the **Format** pane that are available in the
|-|-|
| Reference layer data | The data GeoJSON file to upload to the visual as another layer within the map. The **+ Add local file** button opens a file dialog the user can use to select a GeoJSON file that has a `.json` or `.geojson` file extension. |
-> [!NOTE]
-> In this preview of the Azure Maps Power BI visual, the reference layer will only load the first 5,000 shape features to the map. This limit will be increased in a future update.
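For illustration, the uploaded file is standard GeoJSON. A minimal example with a single point feature might look like the following; the coordinates and property name are placeholders:

```json
{
    "type": "FeatureCollection",
    "features": [
        {
            "type": "Feature",
            "geometry": { "type": "Point", "coordinates": [-122.13, 47.64] },
            "properties": { "name": "Sample location" }
        }
    ]
}
```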
## Styling data in a reference layer
azure-maps Release Notes Map Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/release-notes-map-control.md
This document contains information about new features and other changes to the Map Control.
-## v3 (preview)
+## v3 (latest)
+
+### [3.0.0] (August 18, 2023)
+
+#### Bug fixes (3.0.0)
+
+- Fixed zoom control to take into account the `maxBounds` [CameraOptions].
+
+- Fixed an issue that mouse positions are shifted after a css scale transform on the map container.
+
+#### Other changes (3.0.0)
+
+- Phased out the style definition version `2022-08-05` and switched the default `styleDefinitionsVersion` to `2023-01-01`.
+
+- Added the `mvc` parameter to encompass the map control version in both definitions and style requests.
+
+#### Installation (3.0.0)
+
+The version is available on [npm][3.0.0] and CDN.
+
+- **NPM:** Refer to the instructions at [azure-maps-control@3.0.0][3.0.0]
+
+- **CDN:** Reference the following CSS and JavaScript in the `<head>` element of an HTML file:
+
+ ```html
+ <link href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3.0/atlas.min.css" rel="stylesheet" />
+ <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3.0/atlas.min.js"></script>
+ ```
### [3.0.0-preview.10] (July 11, 2023)
This update is the first preview of the upcoming 3.0.0 release. The underlying [
}) ```
-## v2 (latest)
+## v2
### [2.3.2] (August 11, 2023)
Stay up to date on Azure Maps:
> [!div class="nextstepaction"] > [Azure Maps Blog]
+[3.0.0]: https://www.npmjs.com/package/azure-maps-control/v/3.0.0
[3.0.0-preview.10]: https://www.npmjs.com/package/azure-maps-control/v/3.0.0-preview.10 [3.0.0-preview.9]: https://www.npmjs.com/package/azure-maps-control/v/3.0.0-preview.9 [3.0.0-preview.8]: https://www.npmjs.com/package/azure-maps-control/v/3.0.0-preview.8
azure-maps Render Coverage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/render-coverage.md
The render coverage tables below list the countries/regions that support Azure M
| Liechtenstein | ✓ |
| Lithuania | ✓ |
| Luxembourg | ✓ |
-| Macedonia | ✓ |
| Malta | ✓ |
| Moldova | ✓ |
| Monaco | ✓ |
| Montenegro | ✓ |
| Netherlands | ✓ |
+| North Macedonia | ✓ |
| Norway | ✓ |
| Poland | ✓ |
| Portugal | ✓ |
azure-maps Routing Coverage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/routing-coverage.md
The following tables provide coverage information for Azure Maps routing.
| Burkina Faso | ✓ | | |
| Burundi | ✓ | | |
| Cameroon | ✓ | | |
-| Cape Verde | ✓ | | |
+| Cabo Verde | ✓ | | |
| Central African Republic | ✓ | | |
| Chad | ✓ | | |
| Congo | ✓ | | |
The following tables provide coverage information for Azure Maps routing.
| Somalia | ✓ | | |
| South Africa | ✓ | ✓ | ✓ |
| South Sudan | ✓ | | |
-| St. Helena | ✓ | | |
+| St. Helena, Ascension, and Tristan da Cunha | ✓ | | |
| Sudan | ✓ | | |
| Swaziland | ✓ | | |
| Syria | ✓ | | |
azure-maps Supported Languages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/supported-languages.md
Azure Maps has been localized in a variety of languages across its services. The fol
| de-DE | German | ✓ | ✓ | ✓ | ✓ | ✓ |
| el-GR | Greek | ✓ | ✓ | ✓ | ✓ | ✓ |
| en-AU | English (Australia) | ✓ | ✓ | | | ✓ |
-| en-GB | English (Great Britain) | ✓ | ✓ | ✓ | ✓ | ✓ |
| en-NZ | English (New Zealand) | ✓ | ✓ | | ✓ | ✓ |
+| en-GB | English (United Kingdom) | ✓ | ✓ | ✓ | ✓ | ✓ |
| en-US | English (USA) | ✓ | ✓ | ✓ | ✓ | ✓ |
| es-419 | Spanish (Latin America) | | ✓ | | | ✓ |
| es-ES | Spanish (Spain) | ✓ | ✓ | ✓ | ✓ | ✓ |
azure-maps Tutorial Create Store Locator https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/tutorial-create-store-locator.md
To create the HTML:
```HTML <!-- Add references to the Azure Maps Map control JavaScript and CSS files. -->
- <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css">
- <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.js"></script>
+ <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css" type="text/css">
+ <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.js"></script>
``` 3. Next, add a reference to the Azure Maps Services module. This module is a JavaScript library that wraps the Azure Maps REST services, making them easy to use in JavaScript. The Services module is useful for powering search functionality.
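The reference follows the same CDN pattern as the map control. Based on the script tags used elsewhere in these tutorials, it looks like this:

```html
<!-- Add a reference to the Azure Maps Services Module JavaScript file. -->
<script src="https://atlas.microsoft.com/sdk/javascript/service/2/atlas-service.min.js"></script>
```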
azure-maps Tutorial Creator Indoor Maps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/tutorial-creator-indoor-maps.md
After you create a tileset, you can get the `mapConfigurationId` value by using
5. Enter the following URL to the [Tileset service]. Pass in the tileset ID that you obtained in the previous step. ```http
- https://us.atlas.microsoft.com/tilesets/{tilesetId}?api-version=2022-09-01-preview&subscription-key={Your-Azure-Maps-Subscription-key}
+ https://us.atlas.microsoft.com/tilesets/{tilesetId}?api-version=2023-03-01-preview&subscription-key={Your-Azure-Maps-Subscription-key}
``` 6. Select **Send**.
For more information, see [Map configuration] in the article about indoor map co
[Drawing conversion errors and warnings]: drawing-conversion-error-codes.md [Dataset Create API]: /rest/api/maps/v2/dataset/create [Dataset service]: /rest/api/maps/v2/dataset
-[Tileset service]: /rest/api/maps/v20220901preview/tileset
-[tileset get]: /rest/api/maps/v20220901preview/tileset/get
+[Tileset service]: /rest/api/maps/2023-03-01-preview/tileset
+[tileset get]: /rest/api/maps/2023-03-01-preview/tileset/get
[Map configuration]: creator-indoor-maps.md#map-configuration
azure-maps Tutorial Prioritized Routes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/tutorial-prioritized-routes.md
The following steps show you how to create and display the Map control in a web
<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no"> <!-- Add references to the Azure Maps Map control JavaScript and CSS files. -->
- <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css">
- <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.js"></script>
+ <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css" type="text/css">
+ <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.js"></script>
<!-- Add a reference to the Azure Maps Services Module JavaScript file. --> <script src="https://atlas.microsoft.com/sdk/javascript/service/2/atlas-service.min.js"></script>
azure-maps Tutorial Route Location https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/tutorial-route-location.md
The following steps show you how to create and display the Map control in a web
<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no"> <!-- Add references to the Azure Maps Map control JavaScript and CSS files. -->
- <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css">
- <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.js"></script>
+ <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css" type="text/css">
+ <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.js"></script>
<!-- Add a reference to the Azure Maps Services Module JavaScript file. --> <script src="https://atlas.microsoft.com/sdk/javascript/service/2/atlas-service.min.js"></script>
azure-maps Tutorial Search Location https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/tutorial-search-location.md
The Map Control API is a convenient client library. This API allows you to easil
<meta charset="utf-8" /> <!-- Add references to the Azure Maps Map control JavaScript and CSS files. -->
- <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css" />
- <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.js"></script>
+ <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css" type="text/css" />
+ <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.js"></script>
<!-- Add a reference to the Azure Maps Services Module JavaScript file. --> <script src="https://atlas.microsoft.com/sdk/javascript/service/2/atlas-service.min.js"></script>
azure-maps Weather Coverage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/weather-coverage.md
Radar tiles, showing areas of rain, snow, ice and mixed conditions, are returned
| Burkina Faso | ✓ | ✓ | | ✓ |
| Burundi | ✓ | ✓ | | ✓ |
| Cameroon | ✓ | ✓ | | ✓ |
-| Cape Verde | ✓ | ✓ | | ✓ |
+| Cabo Verde | ✓ | ✓ | | ✓ |
| Central African Republic | ✓ | ✓ | | ✓ |
| Chad | ✓ | ✓ | | ✓ |
| Comoros | ✓ | ✓ | | ✓ |
azure-maps Web Sdk Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/web-sdk-best-practices.md
If self-hosting the Azure Maps Web SDK via the npm module, be sure to use the ca
```json "dependencies": {
- "azure-maps-control": "^2.2.6"
+ "azure-maps-control": "^3.0.0"
} ```
azure-maps Web Sdk Migration Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/web-sdk-migration-guide.md
+
+ Title: The Azure Maps Web SDK v1 migration guide
+
+description: Find out how to migrate your Azure Maps Web SDK v1 applications to the most recent version of the Web SDK.
+Last updated: 08/18/2023
+# The Azure Maps Web SDK v1 migration guide
+
+Thank you for choosing the Azure Maps Web SDK for your mapping needs. This migration guide helps you transition from version 1 to version 3, allowing you to take advantage of the latest features and enhancements.
+
+## Understand the changes
+
+Before you start the migration process, it's important to familiarize yourself with the key changes and improvements introduced in Web SDK v3. Review the [release notes] to grasp the scope of the new features.
+
+## Updating the Web SDK version
+
+### CDN
+
+If you're using a CDN ([content delivery network]), update the references to the stylesheet and JavaScript within the `head` element of your HTML files.
+
+#### v1
+
+```html
+<link rel="stylesheet" href="https://atlas.microsoft.com/sdk/css/atlas.min.css?api-version=1" type="text/css" />
+<script src="https://atlas.microsoft.com/sdk/js/atlas.min.js?api-version=1"></script>
+```
+
+#### v3
+
+```html
+<link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css" type="text/css" />
+<script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.js"></script>
+```
+
+### npm
+
+If you're using [npm], update to the latest Azure Maps control by running the following command:
+
+```shell
+npm install azure-maps-control@latest
+```
+
+## Review authentication methods (optional)
+
+To enhance security, more authentication methods are included in the Web SDK starting in version 2. The new methods include [Azure Active Directory Authentication] and [Shared Key Authentication]. For more information about Azure Maps web application security, see [Manage Authentication in Azure Maps].
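As a hedged sketch of the Azure Active Directory option, authentication is configured through the map's `authOptions`; all three IDs are placeholders for values from your own app registration:

```javascript
// A sketch of Azure AD authentication; the three IDs are placeholders.
const map = new atlas.Map('map', {
    authOptions: {
        authType: 'aad',
        clientId: '<Azure Maps account client ID>',
        aadAppId: '<registered Azure AD application ID>',
        aadTenant: '<Azure AD tenant ID>'
    }
});
```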
+
+## Testing
+
+Comprehensive testing is essential during migration. Conduct thorough testing of your application's functionality, performance, and user experience in different browsers and devices.
+
+## Gradual Rollout
+
+Consider a gradual rollout strategy for the updated version. Release the migrated version to a smaller group of users or in a controlled environment before making it available to your entire user base.
+
+By following these steps and considering best practices, you can successfully migrate your application from Azure Maps Web SDK v1 to v3. Embrace the new capabilities and improvements offered by the latest version while ensuring a smooth and seamless transition for your users. For more information, see [Azure Maps Web SDK best practices].
+
+## Next steps
+
+Learn how to add maps to web and mobile applications using the Map Control client-side JavaScript library in Azure Maps:
+
+> [!div class="nextstepaction"]
+> [Use the Azure Maps map control]
+
+[Azure Active Directory Authentication]: how-to-secure-spa-users.md
+[Azure Maps Web SDK best practices]: web-sdk-best-practices.md
+[content delivery network]: /azure/cdn/cdn-overview
+[Manage Authentication in Azure Maps]: how-to-manage-authentication.md
+[npm]: https://www.npmjs.com/package/azure-maps-control
+[release notes]: release-notes-map-control.md
+[Shared Key Authentication]: how-to-secure-sas-app.md
+[Use the Azure Maps map control]: how-to-use-map-control.md
azure-monitor Agents Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agents-overview.md
View [supported operating systems for Azure Arc Connected Machine agent](../../a
| Oracle Linux 8 | X | X | | | Oracle Linux 7 | X | X | X | | Oracle Linux 6.4+ | | | X |
-| Red Hat Enterprise Linux Server 9+ | X | | |
-| Red Hat Enterprise Linux Server 8.6 | X<sup>3</sup> | X<sup>2</sup> | X<sup>2</sup> |
-| Red Hat Enterprise Linux Server 8+ | X | X<sup>2</sup> | X<sup>2</sup> |
+| Red Hat Enterprise Linux Server 9+ | X | | |
+| Red Hat Enterprise Linux Server 8.6+ | X<sup>3</sup> | X<sup>2</sup> | X<sup>2</sup> |
+| Red Hat Enterprise Linux Server 8.0-8.5 | X | X<sup>2</sup> | X<sup>2</sup> |
| Red Hat Enterprise Linux Server 7 | X | X | X | | Red Hat Enterprise Linux Server 6.7+ | | | X | | Rocky Linux 8 | X | X | |
azure-monitor Azure Monitor Agent Data Collection Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-data-collection-endpoint.md
Azure Monitor Agent supports connecting by using direct proxies, Log Analytics g
Azure Monitor Agent supports [Azure virtual network service tags](../../virtual-network/service-tags-overview.md). Both *AzureMonitor* and *AzureResourceManager* tags are required.
-Azure Virtual network service tags can be used to define network access controls on [network security groups](../../virtual-network/network-security-groups-overview.md#security-rules), [Azure Firewall](../../firewall/service-tags.md), and user-defined routes. Use service tags in place of specific IP addresses when you create security rules and routes. For scenarios where Azure virtual network service tags can not be used, the Firewall requirements are given below.
+Azure Virtual network service tags can be used to define network access controls on [network security groups](../../virtual-network/network-security-groups-overview.md#security-rules), [Azure Firewall](../../firewall/service-tags.md), and user-defined routes. Use service tags in place of specific IP addresses when you create security rules and routes. For scenarios where Azure virtual network service tags cannot be used, the Firewall requirements are given below.
## Firewall requirements
The Azure Monitor Agent extensions for Windows and Linux can communicate either
![Diagram that shows a flowchart to determine the values of settings and protectedSettings parameters when you enable the extension.](media/azure-monitor-agent-overview/proxy-flowchart.png) > [!NOTE]
- > Setting Linux system proxy via environment variables such as `http_proxy` and `https_proxy` is only supported using Azure Monitor Agent for Linux version 1.24.2 and above.
+    > Setting the Linux system proxy via environment variables such as `http_proxy` and `https_proxy` is only supported using Azure Monitor Agent for Linux version 1.24.2 and above. If you deploy by using an ARM template and have a proxy configuration, declare the proxy setting inside the template, as shown in the ARM policy template example below.
1. After you determine the `Settings` and `ProtectedSettings` parameter values, provide these other parameters when you deploy Azure Monitor Agent. Use PowerShell commands, as shown in the following examples:
$protectedSettings = @{"proxy" = @{username = "[username]"; password = "[passwor
New-AzConnectedMachineExtension -Name AzureMonitorLinuxAgent -ExtensionType AzureMonitorLinuxAgent -Publisher Microsoft.Azure.Monitor -ResourceGroupName <resource-group-name> -MachineName <arc-server-name> -Location <arc-server-location> -Setting $settings -ProtectedSetting $protectedSettings ```
+# [ARM Policy Template example](#tab/ArmPolicy)
+
+```json
+{
+ "properties": {
+ "displayName": "Configure Windows Arc-enabled machines to run Azure Monitor Agent",
+ "policyType": "BuiltIn",
+ "mode": "Indexed",
+ "description": "Automate the deployment of Azure Monitor Agent extension on your Windows Arc-enabled machines for collecting telemetry data from the guest OS. This policy will install the extension if the OS and region are supported and system-assigned managed identity is enabled, and skip install otherwise. Learn more: https://aka.ms/AMAOverview.",
+ "metadata": {
+ "version": "2.3.0",
+ "category": "Monitoring"
+ },
+ "parameters": {
+ "effect": {
+ "type": "String",
+ "metadata": {
+ "displayName": "Effect",
+ "description": "Enable or disable the execution of the policy."
+ },
+ "allowedValues": [
+ "DeployIfNotExists",
+ "Disabled"
+ ],
+ "defaultValue": "DeployIfNotExists"
+ }
+ },
+ "policyRule": {
+ "if": {
+ "allOf": [
+ {
+ "field": "type",
+ "equals": "Microsoft.HybridCompute/machines"
+ },
+ {
+ "field": "Microsoft.HybridCompute/machines/osName",
+ "equals": "Windows"
+ },
+ {
+ "field": "location",
+ "in": [
+ "australiacentral",
+ "australiaeast",
+ "australiasoutheast",
+ "brazilsouth",
+ "canadacentral",
+ "canadaeast",
+ "centralindia",
+ "centralus",
+ "eastasia",
+ "eastus",
+ "eastus2",
+ "eastus2euap",
+ "francecentral",
+ "germanywestcentral",
+ "japaneast",
+ "japanwest",
+ "jioindiawest",
+ "koreacentral",
+ "koreasouth",
+ "northcentralus",
+ "northeurope",
+ "norwayeast",
+ "southafricanorth",
+ "southcentralus",
+ "southeastasia",
+ "southindia",
+ "swedencentral",
+ "switzerlandnorth",
+ "uaenorth",
+ "uksouth",
+ "ukwest",
+ "westcentralus",
+ "westeurope",
+ "westindia",
+ "westus",
+ "westus2",
+ "westus3"
+ ]
+ }
+ ]
+ },
+ "then": {
+ "effect": "[parameters('effect')]",
+ "details": {
+ "type": "Microsoft.HybridCompute/machines/extensions",
+ "roleDefinitionIds": [
+ "/providers/Microsoft.Authorization/roleDefinitions/cd570a14-e51a-42ad-bac8-bafd67325302"
+ ],
+ "existenceCondition": {
+ "allOf": [
+ {
+ "field": "Microsoft.HybridCompute/machines/extensions/type",
+ "equals": "AzureMonitorWindowsAgent"
+ },
+ {
+ "field": "Microsoft.HybridCompute/machines/extensions/publisher",
+ "equals": "Microsoft.Azure.Monitor"
+ },
+ {
+ "field": "Microsoft.HybridCompute/machines/extensions/provisioningState",
+ "equals": "Succeeded"
+ }
+ ]
+ },
+ "deployment": {
+ "properties": {
+ "mode": "incremental",
+ "template": {
+ "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "vmName": {
+ "type": "string"
+ },
+ "location": {
+ "type": "string"
+ }
+ },
+ "variables": {
+ "extensionName": "AzureMonitorWindowsAgent",
+ "extensionPublisher": "Microsoft.Azure.Monitor",
+ "extensionType": "AzureMonitorWindowsAgent"
+ },
+ "resources": [
+ {
+ "name": "[concat(parameters('vmName'), '/', variables('extensionName'))]",
+ "type": "Microsoft.HybridCompute/machines/extensions",
+ "location": "[parameters('location')]",
+ "apiVersion": "2021-05-20",
+ "properties": {
+ "publisher": "[variables('extensionPublisher')]",
+ "type": "[variables('extensionType')]",
+ "autoUpgradeMinorVersion": true,
+ "enableAutomaticUpgrade": true,
+ "settings": {
+ "proxy": {
+ "auth": "false",
+ "mode": "application",
+ "address": "http://XXX.XXX.XXX.XXX"
+ }
+ },
+ "protectedsettings": { }
+ }
+ }
+ ]
+ },
+ "parameters": {
+ "vmName": {
+ "value": "[field('name')]"
+ },
+ "location": {
+ "value": "[field('location')]"
+ }
+ }
+ }
+ }
+ }
+ }
+ }
+ },
+ "id": "/providers/Microsoft.Authorization/policyDefinitions/94f686d6-9a24-4e19-91f1-de937dc171a4",
+ "type": "Microsoft.Authorization/policyDefinitions",
+ "name": "94f686d6-9a24-4e19-91f1-de937dc171a4"
+}
+```
+ ## Log Analytics gateway configuration
azure-monitor Azure Monitor Agent Extension Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-extension-versions.md
We strongly recommend always updating to the latest version, or opting in to the
## Version details

| Release Date | Release notes | Windows | Linux |
|:---|:---|:---|:---|
-| July 2023| **Windows** <ul><li>Fix crash when Event Log subscription callback throws errors.<li>MetricExtension updated to 2.2023.609.2051</li></ui> |1.18.0| Comming Soon|
-| June 2023| **Windows** <ul><li>Add new file path column to custom logs table</li><li>Config setting to disable custom IMDS endpoint in Tenant.json file</li><li>FluentBit binaries signed with Microsoft customer Code Sign cert</li><li>Minimize number of retries on calls to refresh tokens</li><li>Don't overwrite resource ID with empty string</li><li>AzSecPack updated to version 4.27</li><li>AzureProfiler and AzurePerfCollector updated to version 1.0.0.990</li><li>MetricsExtension updated to version 2.2023.513.10</li><li>Troubleshooter updated to version 1.5.0</li></ul>**Linux** <ul><li>Add new column CollectorHostName to syslog table to identify forwarder/collector machine</li><li>Link OpenSSL dynamically</li><li>**Fixes**<ul><li>Allow uploads soon after AMA start up</li><li>Run LocalSink GC on a dedicated thread to avoid thread pool scheduling issues</li><li>Fix upgrade restart of disabled services</li><li>Handle Linux Hardening where sudo on root is blocked</li><li>CEF processing fixes for noncomliant RFC 5424 logs</li><li>ASA tenant can fail to start up due to config-cache directory permissions</li><li>Fix auth proxy in AMA</li><li>Fix to remove null characters in agentlauncher.log after log rotation</li><li>Fix for authenticated proxy(1.27.3)</li><li>Fix regression in VM Insights(1.27.4)</ul></li></ul>|1.17.0 |1.27.4|
-| May 2023 | **Windows** <ul><li>Enable Large Event support for all regions.</li><li>Update to TroubleShooter 1.4.0.</li><li>Fixed issue when Event Log subscription become invalid an would not resubscribe.</li><li>AMA: Fixed issue with Large Event sending too large data. Also affecting Custom Log.</li></ul> **Linux** <ul><li>Support for CIS and SELinux [hardening](./agents-overview.md)</li><li>Include Ubuntu 22.04 (Jammy) in azure-mdsd package publishing</li><li>Move storage SDK patch to build container</li><li>Add system Telegraf counters to AMA</li><li>Drop msgpack and syslog data if not configured in active configuration</li><li>Limit the events sent to Public ingestion pipeline</li><li>**Fixes** <ul><li>Fix mdsd crash in init when in persistent mode </li><li>Remove FdClosers from ProtocolListeners to avoid a race condition</li><li>Fix sed regex special character escaping issue in rpm macro for Centos 7.3.Maipo</li><li>Fix latency and future timestamp issue</li><li>Install AMA syslog configs only if customer is opted in for syslog in DCR</li><li>Fix heartbeat time check</li><li>Skip unnecessary cleanup in fatal signal handler</li><li>Fix case where fast-forwarding may cause intervals to be skipped</li><li>Fix comma separated custom log paths with fluent</li><li>Fix to prevent events folder growing too large and filling the disk</li><li>hot fix (1.26.3) for Syslog</li></ul><</li><ul> | 1.16.0.0 | 1.26.2 1.26.3<sup>Hotfix</sup>|
-| Apr 2023 | **Windows** <ul><li>AMA: Enable Large Event support based on Region.</li><li>AMA: Upgrade to FluentBit version 2.0.9</li><li>Update Troubleshooter to 1.3.1</li><li>Update ME version to 2.2023.331.1521</li><li>Updating package version for AzSecPack 4.26 release</li></ul>|1.15.0| Coming soon|
+| August 2023| **Windows** <ul><li>AMA: Allow prefixes in the tag names to handle regression</li><li>Updating package version for AzSecPack 4.28 release</li></ul>**Linux**<ul><li>Coming soon</li></ul>|1.19.0| Coming Soon |
+| July 2023| **Windows** <ul><li>Fix crash when Event Log subscription callback throws errors.</li><li>MetricExtension updated to 2.2023.609.2051</li></ul> |1.18.0|None|
+| June 2023| **Windows** <ul><li>Add new file path column to custom logs table</li><li>Config setting to disable custom IMDS endpoint in Tenant.json file</li><li>FluentBit binaries signed with Microsoft customer Code Sign cert</li><li>Minimize number of retries on calls to refresh tokens</li><li>Don't overwrite resource ID with empty string</li><li>AzSecPack updated to version 4.27</li><li>AzureProfiler and AzurePerfCollector updated to version 1.0.0.990</li><li>MetricsExtension updated to version 2.2023.513.10</li><li>Troubleshooter updated to version 1.5.0</li></ul>**Linux** <ul><li>Add new column CollectorHostName to syslog table to identify forwarder/collector machine</li><li>Link OpenSSL dynamically</li><li>**Fixes**<ul><li>Allow uploads soon after AMA start up</li><li>Run LocalSink GC on a dedicated thread to avoid thread pool scheduling issues</li><li>Fix upgrade restart of disabled services</li><li>Handle Linux Hardening where sudo on root is blocked</li><li>CEF processing fixes for noncompliant RFC 5424 logs</li><li>ASA tenant can fail to start up due to config-cache directory permissions</li><li>Fix auth proxy in AMA</li><li>Fix to remove null characters in agentlauncher.log after log rotation</li><li>Fix for authenticated proxy(1.27.3)</li><li>Fix regression in VM Insights(1.27.4)</ul></li></ul>|1.17.0 |1.27.4|
+| May 2023 | **Windows** <ul><li>Enable Large Event support for all regions.</li><li>Update to TroubleShooter 1.4.0.</li><li>Fixed issue when Event Log subscription became invalid and would not resubscribe.</li><li>AMA: Fixed issue with Large Event sending too large data. Also affecting Custom Log.</li></ul> **Linux** <ul><li>Support for CIS and SELinux [hardening](./agents-overview.md)</li><li>Include Ubuntu 22.04 (Jammy) in azure-mdsd package publishing</li><li>Move storage SDK patch to build container</li><li>Add system Telegraf counters to AMA</li><li>Drop msgpack and syslog data if not configured in active configuration</li><li>Limit the events sent to Public ingestion pipeline</li><li>**Fixes** <ul><li>Fix mdsd crash in init when in persistent mode </li><li>Remove FdClosers from ProtocolListeners to avoid a race condition</li><li>Fix sed regex special character escaping issue in rpm macro for Centos 7.3.Maipo</li><li>Fix latency and future timestamp issue</li><li>Install AMA syslog configs only if customer is opted in for syslog in DCR</li><li>Fix heartbeat time check</li><li>Skip unnecessary cleanup in fatal signal handler</li><li>Fix case where fast-forwarding may cause intervals to be skipped</li><li>Fix comma separated custom log paths with fluent</li><li>Fix to prevent events folder growing too large and filling the disk</li><li>hot fix (1.26.3) for Syslog</li></ul></li></ul> | 1.16.0.0 | 1.26.2-1.26.5<sup>Hotfix</sup>|
+| Apr 2023 | **Windows** <ul><li>AMA: Enable Large Event support based on Region.</li><li>AMA: Upgrade to FluentBit version 2.0.9</li><li>Update Troubleshooter to 1.3.1</li><li>Update ME version to 2.2023.331.1521</li><li>Updating package version for AzSecPack 4.26 release</li></ul>|1.15.0|None|
| Mar 2023 | **Windows** <ul><li>Text file collection improvements to handle high rate logging and continuous tailing of longer lines</li><li>VM Insights fixes for collecting metrics from non-English OS</li></ul> | 1.14.0.0 | Coming soon | | Feb 2023 | <ul><li>**Linux (hotfix)** Resolved potential data loss due to "Bad file descriptor" errors seen in the mdsd error log with previous version. Upgrade to hotfix version</li><li>**Windows** Reliability improvements in Fluentbit buffering to handle larger text files</li></ul> | 1.13.1 | 1.25.2<sup>Hotfix</sup> | | Jan 2023 | **Linux** <ul><li>RHEL 9 and Amazon Linux 2 support</li><li>Update to OpenSSL 1.1.1s and require TLS 1.2 or higher</li><li>Performance improvements</li><li>Improvements in Garbage Collection for persisted disk cache and handling corrupted cache files better</li><li>**Fixes** <ul><li>Set agent service memory limit for CentOS/RedHat 7 distros. Resolved MemoryMax parsing error</li><li>Fixed modifying rsyslog system-wide log format caused by installer on RedHat/Centos 7.3</li><li>Fixed permissions to config directory</li><li>Installation reliability improvements</li><li>Fixed permissions on default file so rpm verification doesn't fail</li><li>Added traceFlags setting to enable trace logs for agent</li></ul></li></ul> **Windows** <ul><li>Fixed issue related to incorrect *EventLevel* and *Task* values for Log Analytics *Event* table, to match Windows Event Viewer values</li><li>Added missing columns for IIS logs - *TimeGenerated, Time, Date, Computer, SourceSystem, AMA, W3SVC, SiteName*</li><li>Reliability improvements for metrics collection</li><li>Fixed machine restart issues on for Arc-enabled servers related to repeated calls to HIMDS service</li></ul> | 1.12.0 | 1.25.1 |
azure-monitor Azure Monitor Agent Health https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-health.md
Title: View Azure Monitor Agent Health
description: Experience to view agent health at scale and troubleshoot issues related to data collection via agents
Last updated 7/25/2023
azure-monitor Azure Monitor Agent Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-manage.md
For information on how to install Azure Monitor Agent from the Azure portal, see
#### [PowerShell](#tab/azure-powershell)
-You can install Azure Monitor Agent on Azure virtual machines and on Azure Arc-enabled servers by using the PowerShell command for adding a virtual machine extension.
+You can install Azure Monitor Agent on Azure virtual machines and on Azure Arc-enabled servers by using the PowerShell command for adding a virtual machine extension.
### Install on Azure virtual machines
Use the following PowerShell commands to install Azure Monitor Agent on Azure vi
Set-AzVMExtension -Name AzureMonitorLinuxAgent -ExtensionType AzureMonitorLinuxAgent -Publisher Microsoft.Azure.Monitor -ResourceGroupName <resource-group-name> -VMName <virtual-machine-name> -Location <location> -TypeHandlerVersion <version-number> -EnableAutomaticUpgrade $true ```
+### Install on Azure virtual machine scale sets
+
+Use the [Add-AzVmssExtension](/powershell/module/az.compute/add-azvmssextension) PowerShell cmdlet to install Azure Monitor Agent on Azure virtual machine scale sets.
+
### Install on Azure Arc-enabled servers

Use the following PowerShell commands to install Azure Monitor Agent on Azure Arc-enabled servers.
Use the following CLI commands to install Azure Monitor Agent on Azure virtual m
```azurecli az vm extension set --name AzureMonitorLinuxAgent --publisher Microsoft.Azure.Monitor --ids <vm-resource-id> --enable-auto-upgrade true ```
+### Install on Azure virtual machine scale sets
+
+Use the [az vmss extension set](/cli/azure/vmss/extension) CLI cmdlet to install Azure Monitor Agent on Azure virtual machine scale sets.
### Install on Azure Arc-enabled servers
Use the following PowerShell commands to uninstall Azure Monitor Agent on Azure
```powershell Remove-AzVMExtension -Name AzureMonitorLinuxAgent -ResourceGroupName <resource-group-name> -VMName <virtual-machine-name> ```
+### Uninstall on Azure virtual machine scale sets
+
+Use the [Remove-AzVmssExtension](/powershell/module/az.compute/remove-azvmssextension) PowerShell cmdlet to uninstall Azure Monitor Agent on Azure virtual machine scale sets.
### Uninstall on Azure Arc-enabled servers
Use the following CLI commands to uninstall Azure Monitor Agent on Azure virtual
```azurecli az vm extension delete --resource-group <resource-group-name> --vm-name <virtual-machine-name> --name AzureMonitorLinuxAgent ```
+### Uninstall on Azure virtual machine scale sets
+
+Use the [az vmss extension delete](/cli/azure/vmss/extension) CLI command to uninstall Azure Monitor Agent on Azure virtual machine scale sets.
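A minimal sketch with placeholder names:

```azurecli
az vmss extension delete --resource-group <resource-group-name> --vmss-name <vmss-name> --name AzureMonitorLinuxAgent
```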
### Uninstall on Azure Arc-enabled servers
azure-monitor Action Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/action-groups.md
description: Find out how to create and manage action groups. Learn about notifi
Last updated 05/02/2023 -+ # Action groups
azure-monitor Alerts Create New Alert Rule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-create-new-alert-rule.md
To edit an existing alert rule:
1. (Optional) If you're querying an ADX or ARG cluster, Log Analytics can't automatically identify the column with the event timestamp, so we recommend that you add a time range filter to the query. For example: ```KQL
- adx(cluster).table
+ adx('https://help.kusto.windows.net/Samples').table
| where MyTS >= ago(5m) and MyTS <= now() ``` ```KQL
azure-monitor Prometheus Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/prometheus-alerts.md
Last updated 09/15/2022
# Prometheus alerts in Azure Monitor
-Prometheus alert rules allow you to define alert conditions, using queries which are written in Prometheus Query Language (Prom QL). The rule queries are applied on Prometheus metrics stored in [Azure Monitor managed services for Prometheus](../essentials/prometheus-metrics-overview.md). Whenever the alert query results in one or more time series meeting the condition, the alert counts as pending for these metric and label sets. A pending alert becomes active after a user-defined period of time during which all the consecutive query evaluations for the respective time series meet the alert condition. Once an alert becomes active, it is fired and would trigger your actions or notifications of choice, as defined in the Azure Action Groups configured in your alert rule.
+As part of [Azure Monitor managed services for Prometheus](../essentials/prometheus-metrics-overview.md), Prometheus alert rules allow you to define alert conditions, using queries written in Prometheus Query Language (PromQL). The rule queries are applied on Prometheus metrics stored in an [Azure Monitor workspace](../essentials/azure-monitor-workspace-overview.md). Whenever the alert query results in one or more time series meeting the condition, the alert counts as pending for these metric and label sets. A pending alert becomes active after a user-defined period of time during which all the consecutive query evaluations for the respective time series meet the alert condition. Once an alert becomes active, it's fired and triggers your actions or notifications of choice, as defined in the Azure Action Groups configured in your alert rule.
-> [!NOTE]
-> Azure Monitor managed service for Prometheus, including Prometheus metrics, is currently in public preview and does not yet have all of its features enabled. Prometheus metrics are displayed with alerts generated by other types of alert rules, but they currently have a difference experience for creating and managing them.
-
-## Create Prometheus alert rule
-Prometheus alert rules are created as part of a Prometheus rule group, which is applied on the [Azure Monitor workspace](../essentials/azure-monitor-workspace-overview.md). See [Azure Monitor managed service for Prometheus rule groups](../essentials/prometheus-rule-groups.md) for details.
+## Create Prometheus alert rules
+Prometheus alert rules are created and managed as part of a Prometheus rule group. See [Azure Monitor managed service for Prometheus rule groups](../essentials/prometheus-rule-groups.md) for details.
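For illustration, a minimal sketch of a rule group resource as it might appear in an ARM template; the scope, expression, and action group ID are placeholders, and the linked rule groups article remains the authoritative reference for the schema:

```json
{
  "type": "Microsoft.AlertsManagement/prometheusRuleGroups",
  "apiVersion": "2023-03-01",
  "name": "MyPrometheusRuleGroup",
  "location": "eastus",
  "properties": {
    "scopes": [ "<azure-monitor-workspace-resource-id>" ],
    "interval": "PT1M",
    "rules": [
      {
        "alert": "HostHighCpuUsage",
        "expression": "100 - (avg by (instance) (rate(node_cpu_seconds_total{mode=\"idle\"}[5m])) * 100) > 80",
        "for": "PT5M",
        "severity": 3,
        "actions": [ { "actionGroupId": "<action-group-resource-id>" } ]
      }
    ]
  }
}
```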
## View Prometheus alerts
-View fired and resolved Prometheus alerts in the Azure portal with other alert types. Use the following steps to filter on only Prometheus alerts.
+You can view fired and resolved Prometheus alerts in the Azure portal together with all other alert types. Use the following steps to filter on only Prometheus alerts.
1. From the **Monitor** menu in the Azure portal, select **Alerts**. 2. If **Monitoring Service** isn't displayed as a filter option, then select **Add Filter** and add it. 3. Set the filter **Monitoring Service** to **Prometheus** to see Prometheus alerts.- :::image type="content" source="media/prometheus-metric-alerts/view-alerts.png" lightbox="media/prometheus-metric-alerts/view-alerts.png" alt-text="Screenshot of a list of alerts in Azure Monitor with a filter for Prometheus alerts."::: 4. Click the alert name to view the details of a specific fired/resolved alert.- :::image type="content" source="media/prometheus-metric-alerts/alert-details-grafana.png" lightbox="media/prometheus-metric-alerts/alert-details-grafana.png" alt-text="Screenshot of detail for a Prometheus alert in Azure Monitor.":::
+If your rule group is configured with [a specific cluster scope](../essentials/prometheus-rule-groups.md#limiting-rules-to-a-specific-cluster), you can also view alerts fired for this cluster on the cluster's own Alerts pane. From the cluster menu in the Azure portal, select **Alerts**. You can then filter for the Prometheus monitor service.
+ ## Explore Prometheus alerts in Grafana 1. In the fired alerts details pane, you can click the **View query in Grafana** link.
-2. A browser tab will be opened taking you to the [Azure Managed Grafana](../../managed-grafan) instance connected to your Azure Monitor Workspace.
-3. Grafana will be opened in Explore mode, presenting the chart for your alert rule expression query which triggered the alert, around the alert firing time. You can further explore the query in Grafana to identify the reason causing the alert to fire.
+2. A browser tab is opened taking you to the [Azure Managed Grafana](../../managed-grafan) instance connected to your Azure Monitor Workspace.
+3. Grafana is opened in Explore mode, presenting the chart for your alert rule expression query around the alert firing time. You can further explore the query in Grafana to identify the reason causing the alert to fire.
> [!NOTE] > 1. If there is no Azure Managed Grafana connected to your Azure Monitor Workspace, a link to Grafana will not be available.
azure-monitor Smart Detection Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/smart-detection-performance.md
The response time degradation notification tells you:
## Dependency Duration Degradation
-Modern applications often adopt a micro services design approach, which in many cases rely heavily on external services. For example, if your application relies on some data platform, or on a critical services provider such as cognitive services.
+Modern applications often adopt a microservices design approach, which in many cases relies heavily on external services. For example, your application might rely on a data platform or on a critical services provider such as Azure AI services.
Example of dependency degradation notification:
azure-monitor App Insights Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/app-insights-overview.md
The [Application Map](app-map.md) allows a high-level, top-down view of the appl
To understand the number of Application Insights resources required to cover your application or components across environments, see the [Application Insights deployment planning guide](separate-resources.md). +
+Firewall settings must be adjusted for data to reach ingestion endpoints. For more information, see [IP addresses used by Azure Monitor](./ip-addresses.md).
+ ## How do I use Application Insights? Application Insights is enabled through either [autoinstrumentation](codeless-overview.md) (agent) or by adding the [Application Insights SDK](sdk-support-guidance.md) or [Azure Monitor OpenTelemetry Distro](opentelemetry-enable.md) to your application code. [Many languages](#supported-languages) are supported. The applications could be on Azure, on-premises, or hosted by another cloud. To figure out which type of instrumentation is best for you, see [How do I instrument an application?](#how-do-i-instrument-an-application).
Consider starting with the [Application Map](app-map.md) for a high-level view.
Two views are especially useful: - [Performance view](tutorial-performance.md): Get deep insights into how your application or API and downstream dependencies are performing. You can also find a representative sample to [explore end to end](transaction-diagnostics.md).-- [Failure view](tutorial-runtime-exceptions.md): Understand which components or actions are generating failures and triage errors and exceptions. The built-in views are helpful to track application health proactively and for reactive root-cause analysis.
+- [Failures view](tutorial-runtime-exceptions.md): Understand which components or actions are generating failures and triage errors and exceptions. The built-in views are helpful to track application health proactively and for reactive root-cause analysis.
[Create Azure Monitor alerts](tutorial-alert.md) to signal potential issues in case your application or components parts deviate from the established baseline.
azure-monitor Availability Azure Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/availability-azure-functions.md
Title: Review TrackAvailability() test results description: This article explains how to review data logged by TrackAvailability() tests Previously updated : 06/23/2023 Last updated : 08/20/2023 # Review TrackAvailability() test results
-This article explains how to review [TrackAvailability()](/dotnet/api/microsoft.applicationinsights.telemetryclient.trackavailability) test results in the Azure portal and query the data using [Log Analytics](../logs/log-analytics-overview.md#overview-of-log-analytics-in-azure-monitor).
+This article explains how to review [TrackAvailability()](/dotnet/api/microsoft.applicationinsights.telemetryclient.trackavailability) test results in the Azure portal and query the data using [Log Analytics](../logs/log-analytics-overview.md#overview-of-log-analytics-in-azure-monitor). [Standard tests](availability-standard-tests.md) **should always be used if possible** as they require little investment, need no maintenance, and have few prerequisites.
## Prerequisites > [!div class="checklist"] > - [Workspace-based Application Insights resource](create-workspace-resource.md)
-> - Access to the source code of a [function app](../../azure-functions/functions-how-to-use-azure-function-app-settings.md) in Azure Functions.
-> - Developer expertise capable of authoring custom code for [TrackAvailability()](/dotnet/api/microsoft.applicationinsights.telemetryclient.trackavailability), tailored to your specific business needs
+> - Access to the source code of a [function app](../../azure-functions/functions-how-to-use-azure-function-app-settings.md) in Azure Functions
+> - Developer expertise capable of authoring [custom code](#basic-code-sample) for [TrackAvailability()](/dotnet/api/microsoft.applicationinsights.telemetryclient.trackavailability), tailored to your specific business needs
-> [!NOTE]
-> - TrackAvailability() requires that you have made a developer investment in custom code.
-> - [Standard tests](availability-standard-tests.md) should always be used if possible as they require little investment and have few prerequisites.
+> [!IMPORTANT]
+> [TrackAvailability()](/dotnet/api/microsoft.applicationinsights.telemetryclient.trackavailability) requires making a developer investment in writing and maintaining potentially complex custom code.
## Check availability
You can use Log Analytics to view your availability results, dependencies, and m
:::image type="content" source="media/availability-azure-functions/dependencies.png" alt-text="Screenshot that shows the New Query tab with dependencies limited to 50." lightbox="media/availability-azure-functions/dependencies.png":::
+## Basic code sample
+
+The following example demonstrates a web availability test that performs a simple URL ping using the `GetStringAsync()` method.
+
+```csharp
+using System.Net.Http;
+using System.Threading.Tasks;
+using Microsoft.Extensions.Logging;
+
+public static async Task RunAvailabilityTestAsync(ILogger log)
+{
+ using (var httpClient = new HttpClient())
+ {
+ // TODO: Replace with your business logic
+ await httpClient.GetStringAsync("https://www.bing.com/");
+ }
+}
+```
+
+For advanced scenarios where the business logic must be adjusted to access the URL, such as obtaining tokens or setting parameters, custom code is necessary.
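As a hedged illustration of such custom code, the following sketch wraps arbitrary business logic and reports the outcome through `TrackAvailability()` on a `TelemetryClient`. The test name, run location, and endpoint are placeholders, and the `telemetryClient` instance is assumed to already be configured with your connection string.

```csharp
using System;
using System.Diagnostics;
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.ApplicationInsights;

public static async Task RunAdvancedAvailabilityTestAsync(TelemetryClient telemetryClient)
{
    var stopwatch = Stopwatch.StartNew();
    bool success = false;
    try
    {
        using (var httpClient = new HttpClient())
        {
            // TODO: Replace with your business logic, for example acquiring a
            // token first or setting request parameters before the call.
            await httpClient.GetStringAsync("https://www.example.com/");
        }
        success = true;
    }
    finally
    {
        stopwatch.Stop();
        // Report the result so it appears alongside your other availability tests.
        telemetryClient.TrackAvailability(
            "My availability test", // test name shown in the portal (placeholder)
            DateTimeOffset.UtcNow,  // timestamp of the test run
            stopwatch.Elapsed,      // measured duration
            "Custom location",      // run location label (placeholder)
            success);
        telemetryClient.Flush();
    }
}
```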
+ ## Next steps * [Standard tests](availability-standard-tests.md)
azure-monitor Codeless Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/codeless-overview.md
# What is autoinstrumentation for Azure Monitor Application Insights?
-Autoinstrumentation quickly and easily enables [Application Insights](app-insights-overview.md) to make [telemetry](data-model-complete.md) like metrics, requests, and dependencies available in your [Application Insights resource](create-workspace-resource.md).
+Autoinstrumentation enables [Application Insights](app-insights-overview.md) to make [telemetry](data-model-complete.md) like metrics, requests, and dependencies available in your [Application Insights resource](create-workspace-resource.md). It provides easy access to experiences such as the [application dashboard](overview-dashboard.md) and [application map](app-map.md).
+
+If your language and platform are supported, select the corresponding link in the [Supported environments, languages, and resource providers table](#supported-environments-languages-and-resource-providers) for more detailed information. In many cases, autoinstrumentation is enabled by default.
+
+## What are the autoinstrumentation advantages?
> [!div class="checklist"]
-> - No code changes are required.
-> - [SDK update](sdk-support-guidance.md) overhead is eliminated.
-> - Recommended when available.
+> - Code changes aren't required.
+> - Access to source code isn't required.
+> - Configuration changes aren't required.
+> - Ongoing [SDK update maintenance](sdk-support-guidance.md) is eliminated.
## Supported environments, languages, and resource providers
Links are provided to more information for each supported scenario.
|Azure App Service on Windows - Publish as Code | [ :white_check_mark: :link: ](azure-web-apps-net.md) <sup>[1](#OnBD)</sup> | [ :white_check_mark: :link: ](azure-web-apps-net-core.md) <sup>[1](#OnBD)</sup> | [ :white_check_mark: :link: ](azure-web-apps-java.md) <sup>[1](#OnBD)</sup> | [ :white_check_mark: :link: ](azure-web-apps-nodejs.md) <sup>[1](#OnBD)</sup> | :x: | |Azure App Service on Windows - Publish as Docker | [ :white_check_mark: ](https://azure.github.io/AppService/2022/04/11/windows-containers-app-insights-preview.html) <sup>[2](#Preview)</sup> | [ :white_check_mark: ](https://azure.github.io/AppService/2022/04/11/windows-containers-app-insights-preview.html) <sup>[2](#Preview)</sup> | [ :white_check_mark: ](https://azure.github.io/AppService/2022/04/11/windows-containers-app-insights-preview.html) <sup>[2](#Preview)</sup> | :x: | :x: | |Azure App Service on Linux - Publish as Code | :x: | [ :white_check_mark: :link: ](azure-web-apps-net-core.md?tabs=linux) <sup>[1](#OnBD)</sup> | [ :white_check_mark: :link: ](azure-web-apps-java.md) <sup>[1](#OnBD)</sup> | [ :white_check_mark: :link: ](azure-web-apps-nodejs.md?tabs=linux) | :x: |
-|Azure App Service on Linux - Publish as Docker | :x: | :x: | [ :white_check_mark: :link: ](azure-web-apps-java.md) <sup>[2](#Preview)</sup> | [ :white_check_mark: :link: ](azure-web-apps-nodejs.md?tabs=linux) | :x: |
+|Azure App Service on Linux - Publish as Docker | :x: | :x: | [ :white_check_mark: :link: ](azure-web-apps-java.md) | [ :white_check_mark: :link: ](azure-web-apps-nodejs.md?tabs=linux) | :x: |
|Azure Functions - basic | [ :white_check_mark: :link: ](monitor-functions.md) <sup>[1](#OnBD)</sup> | [ :white_check_mark: :link: ](monitor-functions.md) <sup>[1](#OnBD)</sup> | [ :white_check_mark: :link: ](monitor-functions.md) <sup>[1](#OnBD)</sup> | [ :white_check_mark: :link: ](monitor-functions.md) <sup>[1](#OnBD)</sup> | [ :white_check_mark: :link: ](monitor-functions.md) <sup>[1](#OnBD)</sup> | |Azure Functions - dependencies | :x: | :x: | [ :white_check_mark: :link: ](monitor-functions.md) | :x: | [ :white_check_mark: :link: ](monitor-functions.md#distributed-tracing-for-python-function-apps) | |Azure Spring Cloud | :x: | :x: | [ :white_check_mark: :link: ](azure-web-apps-java.md) | :x: | :x: |
azure-monitor Distributed Tracing Telemetry Correlation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/distributed-tracing-telemetry-correlation.md
It's important to make sure the incoming and outgoing configurations are exactly
### Enable W3C distributed tracing support for web apps
-This feature is in `Microsoft.ApplicationInsights.JavaScript`. It's disabled by default. To enable it, use `distributedTracingMode` config. AI_AND_W3C is provided for backward compatibility with any legacy services instrumented by Application Insights.
+This feature is enabled by default for JavaScript and the headers are automatically included when the hosting page domain is the same as the domain the requests are sent to (for example, the hosting page is `example.com` and the Ajax requests are sent to `example.com`). To change the distributed tracing mode, use the [`distributedTracingMode` configuration field](./javascript-sdk-configuration.md#sdk-configuration). AI_AND_W3C is provided by default for backward compatibility with any legacy services instrumented by Application Insights.
- **[npm-based setup](./javascript-sdk.md?tabs=npmpackage#get-started)**
This feature is in `Microsoft.ApplicationInsights.JavaScript`. It's disabled by
``` distributedTracingMode: 2 // DistributedTracingModes.W3C ```+
+If the XMLHttpRequest or Fetch Ajax requests are sent to a different domain host, including subdomains, the correlation headers are not included by default. To enable this feature, set the [`enableCorsCorrelation` configuration field](./javascript-sdk-configuration.md#sdk-configuration) to `true`. If you set `enableCorsCorrelation` to `true`, all XMLHttpRequest and Fetch Ajax requests include the correlation headers. As a result, if the application on the server that is being called doesn't support the `traceparent` header, the request may fail, depending on whether the browser and browser version can validate the request based on the headers the server accepts.
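A minimal sketch of these settings together, assuming the npm-based setup; the connection string and domain list are placeholders:

```javascript
import { ApplicationInsights, DistributedTracingModes } from "@microsoft/applicationinsights-web";

const appInsights = new ApplicationInsights({
  config: {
    connectionString: "<your connection string>",
    distributedTracingMode: DistributedTracingModes.W3C,
    enableCorsCorrelation: true,
    // Optionally limit which cross-origin domains receive correlation headers.
    correlationHeaderDomains: ["example.com"]
  }
});
appInsights.loadAppInsights();
```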
+ > [!IMPORTANT] > To see all configurations required to enable correlation, see the [JavaScript correlation documentation](./javascript.md#enable-distributed-tracing).
azure-monitor Java Get Started Supplemental https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-get-started-supplemental.md
Title: Application Insights with containers description: This article shows you how to set-up Application Insights Previously updated : 07/20/2023 Last updated : 08/30/2023 ms.devlang: java
For more information, see [Use Application Insights Java In-Process Agent in Azu
### Docker entry point
-If you're using the *exec* form, add the parameter `-javaagent:"path/to/applicationinsights-agent-3.4.15.jar"` to the parameter list somewhere before the `"-jar"` parameter, for example:
+If you're using the *exec* form, add the parameter `-javaagent:"path/to/applicationinsights-agent-3.4.16.jar"` to the parameter list somewhere before the `"-jar"` parameter, for example:
```
-ENTRYPOINT ["java", "-javaagent:path/to/applicationinsights-agent-3.4.15.jar", "-jar", "<myapp.jar>"]
+ENTRYPOINT ["java", "-javaagent:path/to/applicationinsights-agent-3.4.16.jar", "-jar", "<myapp.jar>"]
```
-If you're using the *shell* form, add the JVM arg `-javaagent:"path/to/applicationinsights-agent-3.4.15.jar"` somewhere before `-jar`, for example:
+If you're using the *shell* form, add the JVM arg `-javaagent:"path/to/applicationinsights-agent-3.4.16.jar"` somewhere before `-jar`, for example:
```
-ENTRYPOINT java -javaagent:"path/to/applicationinsights-agent-3.4.15.jar" -jar <myapp.jar>
+ENTRYPOINT java -javaagent:"path/to/applicationinsights-agent-3.4.16.jar" -jar <myapp.jar>
```
FROM ...
COPY target/*.jar app.jar
-COPY agent/applicationinsights-agent-3.4.15.jar applicationinsights-agent-3.4.15.jar
+COPY agent/applicationinsights-agent-3.4.16.jar applicationinsights-agent-3.4.16.jar
COPY agent/applicationinsights.json applicationinsights.json ENV APPLICATIONINSIGHTS_CONNECTION_STRING="CONNECTION-STRING"
-ENTRYPOINT["java", "-javaagent:applicationinsights-agent-3.4.15.jar", "-jar", "app.jar"]
+ENTRYPOINT["java", "-javaagent:applicationinsights-agent-3.4.16.jar", "-jar", "app.jar"]
```
-In this example we have copied the `applicationinsights-agent-3.4.15.jar` and `applicationinsights.json` files from an `agent` folder (you can choose any folder of your machine). These two files have to be in the same folder in the Docker container.
+In this example we have copied the `applicationinsights-agent-3.4.16.jar` and `applicationinsights.json` files from an `agent` folder (you can choose any folder of your machine). These two files have to be in the same folder in the Docker container.
### Third-party container images
The following sections show how to set the Application Insights Java agent path
If you installed Tomcat via `apt-get` or `yum`, you should have a file `/etc/tomcat8/tomcat8.conf`. Add this line to the end of that file: ```
-JAVA_OPTS="$JAVA_OPTS -javaagent:path/to/applicationinsights-agent-3.4.15.jar"
+JAVA_OPTS="$JAVA_OPTS -javaagent:path/to/applicationinsights-agent-3.4.16.jar"
``` #### Tomcat installed via download and unzip
JAVA_OPTS="$JAVA_OPTS -javaagent:path/to/applicationinsights-agent-3.4.15.jar"
If you installed Tomcat via download and unzip from [https://tomcat.apache.org](https://tomcat.apache.org), you should have a file `<tomcat>/bin/catalina.sh`. Create a new file in the same directory named `<tomcat>/bin/setenv.sh` with the following content: ```
-CATALINA_OPTS="$CATALINA_OPTS -javaagent:path/to/applicationinsights-agent-3.4.15.jar"
+CATALINA_OPTS="$CATALINA_OPTS -javaagent:path/to/applicationinsights-agent-3.4.16.jar"
```
-If the file `<tomcat>/bin/setenv.sh` already exists, modify that file and add `-javaagent:path/to/applicationinsights-agent-3.4.15.jar` to `CATALINA_OPTS`.
+If the file `<tomcat>/bin/setenv.sh` already exists, modify that file and add `-javaagent:path/to/applicationinsights-agent-3.4.16.jar` to `CATALINA_OPTS`.
### Tomcat 8 (Windows)
If the file `<tomcat>/bin/setenv.sh` already exists, modify that file and add `-
Locate the file `<tomcat>/bin/catalina.bat`. Create a new file in the same directory named `<tomcat>/bin/setenv.bat` with the following content: ```
-set CATALINA_OPTS=%CATALINA_OPTS% -javaagent:path/to/applicationinsights-agent-3.4.15.jar
+set CATALINA_OPTS=%CATALINA_OPTS% -javaagent:path/to/applicationinsights-agent-3.4.16.jar
``` Quotes aren't necessary, but if you want to include them, the proper placement is: ```
-set "CATALINA_OPTS=%CATALINA_OPTS% -javaagent:path/to/applicationinsights-agent-3.4.15.jar"
+set "CATALINA_OPTS=%CATALINA_OPTS% -javaagent:path/to/applicationinsights-agent-3.4.16.jar"
```
-If the file `<tomcat>/bin/setenv.bat` already exists, modify that file and add `-javaagent:path/to/applicationinsights-agent-3.4.15.jar` to `CATALINA_OPTS`.
+If the file `<tomcat>/bin/setenv.bat` already exists, modify that file and add `-javaagent:path/to/applicationinsights-agent-3.4.16.jar` to `CATALINA_OPTS`.
#### Run Tomcat as a Windows service
-Locate the file `<tomcat>/bin/tomcat8w.exe`. Run that executable and add `-javaagent:path/to/applicationinsights-agent-3.4.15.jar` to the `Java Options` under the `Java` tab.
+Locate the file `<tomcat>/bin/tomcat8w.exe`. Run that executable and add `-javaagent:path/to/applicationinsights-agent-3.4.16.jar` to the `Java Options` under the `Java` tab.
### JBoss EAP 7 #### Standalone server
-Add `-javaagent:path/to/applicationinsights-agent-3.4.15.jar` to the existing `JAVA_OPTS` environment variable in the file `JBOSS_HOME/bin/standalone.conf` (Linux) or `JBOSS_HOME/bin/standalone.conf.bat` (Windows):
+Add `-javaagent:path/to/applicationinsights-agent-3.4.16.jar` to the existing `JAVA_OPTS` environment variable in the file `JBOSS_HOME/bin/standalone.conf` (Linux) or `JBOSS_HOME/bin/standalone.conf.bat` (Windows):
```java ...
- JAVA_OPTS="-javaagent:path/to/applicationinsights-agent-3.4.15.jar -Xms1303m -Xmx1303m ..."
+ JAVA_OPTS="-javaagent:path/to/applicationinsights-agent-3.4.16.jar -Xms1303m -Xmx1303m ..."
... ``` #### Domain server
-Add `-javaagent:path/to/applicationinsights-agent-3.4.15.jar` to the existing `jvm-options` in `JBOSS_HOME/domain/configuration/host.xml`:
+Add `-javaagent:path/to/applicationinsights-agent-3.4.16.jar` to the existing `jvm-options` in `JBOSS_HOME/domain/configuration/host.xml`:
```xml ...
Add `-javaagent:path/to/applicationinsights-agent-3.4.15.jar` to the existing `j
<jvm-options> <option value="-server"/> <!--Add Java agent jar file here-->
- <option value="-javaagent:path/to/applicationinsights-agent-3.4.15.jar"/>
+ <option value="-javaagent:path/to/applicationinsights-agent-3.4.16.jar"/>
<option value="-XX:MetaspaceSize=96m"/> <option value="-XX:MaxMetaspaceSize=256m"/> </jvm-options>
Add these lines to `start.ini`:
``` --exec--javaagent:path/to/applicationinsights-agent-3.4.15.jar
+-javaagent:path/to/applicationinsights-agent-3.4.16.jar
``` ### Payara 5
-Add `-javaagent:path/to/applicationinsights-agent-3.4.15.jar` to the existing `jvm-options` in `glassfish/domains/domain1/config/domain.xml`:
+Add `-javaagent:path/to/applicationinsights-agent-3.4.16.jar` to the existing `jvm-options` in `glassfish/domains/domain1/config/domain.xml`:
```xml ... <java-config ...> <!--Edit the JVM options here--> <jvm-options>
- -javaagent:path/to/applicationinsights-agent-3.4.15.jar>
+ -javaagent:path/to/applicationinsights-agent-3.4.16.jar>
</jvm-options> ... </java-config>
Add `-javaagent:path/to/applicationinsights-agent-3.4.15.jar` to the existing `j
1. In `Generic JVM arguments`, add the following JVM argument: ```
- -javaagent:path/to/applicationinsights-agent-3.4.15.jar
+ -javaagent:path/to/applicationinsights-agent-3.4.16.jar
``` 1. Save and restart the application server.
Add `-javaagent:path/to/applicationinsights-agent-3.4.15.jar` to the existing `j
Create a new file `jvm.options` in the server directory (for example, `<openliberty>/usr/servers/defaultServer`), and add this line: ```--javaagent:path/to/applicationinsights-agent-3.4.15.jar
+-javaagent:path/to/applicationinsights-agent-3.4.16.jar
``` ### Others
azure-monitor Java Spring Boot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-spring-boot.md
Title: Configure Azure Monitor Application Insights for Spring Boot description: How to configure Azure Monitor Application Insights for Spring Boot applications Previously updated : 08/11/2023 Last updated : 08/30/2023 ms.devlang: java
There are two options for enabling Application Insights Java with Spring Boot: J
## Enabling with JVM argument
-Add the JVM arg `-javaagent:"path/to/applicationinsights-agent-3.4.15.jar"` somewhere before `-jar`, for example:
+Add the JVM arg `-javaagent:"path/to/applicationinsights-agent-3.4.16.jar"` somewhere before `-jar`, for example:
```
-java -javaagent:"path/to/applicationinsights-agent-3.4.15.jar" -jar <myapp.jar>
+java -javaagent:"path/to/applicationinsights-agent-3.4.16.jar" -jar <myapp.jar>
``` ### Spring Boot via Docker entry point
-If you're using the *exec* form, add the parameter `-javaagent:"path/to/applicationinsights-agent-3.4.15.jar"` to the parameter list somewhere before the `"-jar"` parameter, for example:
+If you're using the *exec* form, add the parameter `-javaagent:"path/to/applicationinsights-agent-3.4.16.jar"` to the parameter list somewhere before the `"-jar"` parameter, for example:
```
-ENTRYPOINT ["java", "-javaagent:path/to/applicationinsights-agent-3.4.15.jar", "-jar", "<myapp.jar>"]
+ENTRYPOINT ["java", "-javaagent:path/to/applicationinsights-agent-3.4.16.jar", "-jar", "<myapp.jar>"]
```
-If you're using the *shell* form, add the JVM arg `-javaagent:"path/to/applicationinsights-agent-3.4.15.jar"` somewhere before `-jar`, for example:
+If you're using the *shell* form, add the JVM arg `-javaagent:"path/to/applicationinsights-agent-3.4.16.jar"` somewhere before `-jar`, for example:
```
-ENTRYPOINT java -javaagent:"path/to/applicationinsights-agent-3.4.15.jar" -jar <myapp.jar>
+ENTRYPOINT java -javaagent:"path/to/applicationinsights-agent-3.4.16.jar" -jar <myapp.jar>
``` ### Configuration
To enable Application Insights Java programmatically, you must add the following
<dependency> <groupId>com.microsoft.azure</groupId> <artifactId>applicationinsights-runtime-attach</artifactId>
- <version>3.4.15</version>
+ <version>3.4.16</version>
</dependency> ```
azure-monitor Java Standalone Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-config.md
Title: Configuration options - Azure Monitor Application Insights for Java description: This article shows you how to configure Azure Monitor Application Insights for Java. Previously updated : 08/11/2023 Last updated : 08/30/2023 ms.devlang: java
More information and configuration options are provided in the following section
## Configuration file path
-By default, Application Insights Java 3.x expects the configuration file to be named `applicationinsights.json`, and to be located in the same directory as `applicationinsights-agent-3.4.15.jar`.
+By default, Application Insights Java 3.x expects the configuration file to be named `applicationinsights.json`, and to be located in the same directory as `applicationinsights-agent-3.4.16.jar`.
You can specify your own configuration file path by using one of these two options: * `APPLICATIONINSIGHTS_CONFIGURATION_FILE` environment variable * `applicationinsights.configuration.file` Java system property
-If you specify a relative path, it's resolved relative to the directory where `applicationinsights-agent-3.4.15.jar` is located.
+If you specify a relative path, it's resolved relative to the directory where `applicationinsights-agent-3.4.16.jar` is located.
Alternatively, instead of using a configuration file, you can specify the entire _content_ of the JSON configuration via the environment variable `APPLICATIONINSIGHTS_CONFIGURATION_CONTENT`.
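For example, a minimal sketch of pointing the agent at a custom configuration file via the Java system property; the paths are placeholders:

```
java -Dapplicationinsights.configuration.file=/path/to/applicationinsights.json -javaagent:path/to/applicationinsights-agent-3.4.16.jar -jar <myapp.jar>
```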
Or you can set the connection string by using the Java system property `applicat
You can also set the connection string by specifying a file to load the connection string from.
-If you specify a relative path, it's resolved relative to the directory where `applicationinsights-agent-3.4.15.jar` is located.
+If you specify a relative path, it's resolved relative to the directory where `applicationinsights-agent-3.4.16.jar` is located.
```json {
and add `applicationinsights-core` to your application:
<dependency> <groupId>com.microsoft.azure</groupId> <artifactId>applicationinsights-core</artifactId>
- <version>3.4.15</version>
+ <version>3.4.16</version>
</dependency> ```
Starting from version 3.2.0, you can enable the following preview instrumentatio
"grizzly": { "enabled": true },
+ "ktor": {
+ "enabled": true
+ },
"play": { "enabled": true },
+ "r2dbc": {
+ "enabled": true
+ },
"springIntegration": { "enabled": true },
In the preceding configuration example:
* `level` can be one of `OFF`, `ERROR`, `WARN`, `INFO`, `DEBUG`, or `TRACE`. * `path` can be an absolute or relative path. Relative paths are resolved against the directory where
-`applicationinsights-agent-3.4.15.jar` is located.
+`applicationinsights-agent-3.4.16.jar` is located.
Starting from version 3.0.2, you can also set the self-diagnostics `level` by using the environment variable `APPLICATIONINSIGHTS_SELF_DIAGNOSTICS_LEVEL`. It then takes precedence over the self-diagnostics level specified in the JSON configuration.
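For example, a minimal sketch on a Linux shell; the agent path is a placeholder:

```
export APPLICATIONINSIGHTS_SELF_DIAGNOSTICS_LEVEL=DEBUG
java -javaagent:path/to/applicationinsights-agent-3.4.16.jar -jar <myapp.jar>
```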
azure-monitor Java Standalone Profiler https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-profiler.md
The Application Insights Java Profiler provides a system for:
The Application Insights Java profiler uses the JFR profiler provided by the JVM to record profiling data, allowing users to download the JFR recordings at a later time and analyze them to identify the cause of performance issues.
-This data is gathered on demand when trigger conditions are met. The available triggers are thresholds over CPU usage and Memory consumption.
+This data is gathered on demand when trigger conditions are met. The available triggers are thresholds over CPU usage, memory consumption, and request duration (SLA triggers). Request triggers monitor spans generated by OpenTelemetry and allow the user to configure SLA requirements over the duration of those spans.
When a threshold is reached, a profile of the configured type and duration is gathered and uploaded. This profile is then visible within the performance pane of the associated Application Insights Portal UI.
See [Configuring Profile Contents](#configuring-profile-contents) on setting a c
For more detailed description of the various triggers available, see [profiler overview](../profiler/profiler-overview.md).
-The ApplicationInsights Java Agent monitors CPU and memory consumption and if it breaches a configured threshold a profile is triggered. Both thresholds are a percentage.
+The ApplicationInsights Java Agent monitors CPU, memory, and request duration, such as for a business transaction. If any of these breaches a configured threshold, a profile is triggered.
#### Profile now
In this scenario, a profile will occur in the following circumstances:
- Full garbage collection is executed - The Tenured regions occupancy is above 691 mb after collection
+### Request
+
+SLA triggers are based on OpenTelemetry (OTel), and they initiate a profile if certain criteria are fulfilled.
+
+Each individual trigger configuration is formed as follows:
+
+- `Name` - A unique identifier for the trigger.
+- `Filter` - Filters the requests of interest for the trigger.
+- `Aggregation` - This calculates the ratio of requests that breached a given threshold.
+ - `Threshold` - A value (in milliseconds) above which a request breach is determined.
+ - `Minimum samples` - The minimum number of samples that must be collected for the aggregation to produce data. This minimum prevents triggering off small sample sizes.
+ - `Window` - Rolling time window (in milliseconds).
+- `Threshold` - The threshold value (percentage) applied to the aggregation output. If this value is exceeded, a profile is initiated.
+
+For instance, the following scenario would trigger a profile: more than 75% of requests to a specific endpoint (/users/.*) take longer than 30 ms within a 60-second window, when at least 100 samples were gathered.
++ ### Installation The following steps will guide you through enabling the profiling component on the agent and configuring resource limits that will trigger a profile if breached.
The following steps will guide you through enabling the profiling component on t
2. Select "Triggers"
- 3. Configure the required CPU and Memory thresholds and select Apply.
- :::image type="content" source="./media/java-standalone-profiler/cpu-memory-trigger-settings.png" alt-text="Screenshot of trigger settings pane for CPU and Memory triggers.":::
+ 3. Configure the required CPU, Memory or Request triggers (if enabled) and select Apply.
+ :::image type="content" source="./media/java-standalone-profiler/trigger-settings.png" alt-text="Screenshot of the trigger settings pane.":::
> [!WARNING] > The Java profiler does not support the "Sampling" trigger. Configuring this will have no effect.
Example configuration:
"enabled": true, "cpuTriggeredSettings": "profile-without-env-data", "memoryTriggeredSettings": "profile-without-env-data",
- "manualTriggeredSettings": "profile-without-env-data"
+ "manualTriggeredSettings": "profile-without-env-data",
+ "enableRequestTriggering": true
} } }
This value can be one of:
- `profile`. Uses the `profile.jfc` jfc configuration that ships with JFR. - A path to a custom jfc configuration file on the file system, for example `/tmp/myconfig.jfc`.
+`enableRequestTriggering` Whether JFR profiling should be triggered based on request configuration.
+This value can be one of:
+
+- `true` Profiling will be triggered if a request trigger threshold is breached.
+- `false` (default value). Profiling will not be triggered by request configuration.
+ ## Frequently asked questions ### What is Azure Monitor Application Insights Java Profiling?
azure-monitor Java Standalone Telemetry Processors Examples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-telemetry-processors-examples.md
For example, given `http.url = http://example.com/path?queryParam1=value1,queryP
} ```
-The following sample shows how to process spans that have a span name that matches regex patterns.
-This processor removes the `token` attribute. It obfuscates the `password` attribute in spans where the span name matches `auth.*`
-and where the span name doesn't match `login.*`.
+### Mask
+
+For example, given `http.url = https://example.com/user/12345622`, the value is updated to `http.url = https://example.com/user/****` by using either of the following configurations.
++
+First configuration example:
```json {
and where the span name doesn't match `login.*`.
"processors": [ { "type": "attribute",
- "include": {
- "matchType": "regexp",
- "spanNames": [
- "auth.*"
- ]
- },
- "exclude": {
- "matchType": "regexp",
- "spanNames": [
- "login.*"
- ]
- },
"actions": [ {
- "key": "password",
- "value": "obfuscated",
- "action": "update"
- },
- {
- "key": "token",
- "action": "delete"
+ "key": "http.url",
+ "pattern": "user\\/\\d+",
+ "replace": "user\\/****",
+ "action": "mask"
} ] }
and where the span name doesn't match `login.*`.
```
+Second configuration example with regular expression group name:
+
+```json
+{
+ "connectionString": "InstrumentationKey=00000000-0000-0000-0000-000000000000",
+ "preview": {
+ "processors": [
+ {
+ "type": "attribute",
+ "actions": [
+ {
+ "key": "http.url",
+          "pattern": "^(?<userGroupName>[a-zA-Z.:\\/]+)\\d+",
+ "replace": "${userGroupName}**",
+ "action": "mask"
+ }
+ ]
+ }
+ ]
+ }
+}
+```
+ ## Span processor samples ### Name a span
azure-monitor Java Standalone Telemetry Processors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-telemetry-processors.md
Application Insights Java 3.x can process telemetry data before the data is exported.
-Here are some use cases for telemetry processors:
+Some use cases:
* Mask sensitive data. * Conditionally add custom dimensions. * Update the span name, which is used to aggregate similar telemetry in the Azure portal.
The trace message or body is the primary display for logs in the Azure portal. L
## Telemetry processor types
-Currently, the four types of telemetry processors are attribute processors, span processors, log processors, and metric filters.
+Currently, the four types of telemetry processors are:
+* Attribute processors
+* Span processors
+* Log processors
+* Metric filters
An attribute processor can insert, update, delete, or hash attributes of a telemetry item (`span` or `log`). It can also use a regular expression to extract one or more new attributes from an existing attribute.
The attribute processor modifies attributes of a `span` or a `log`. It can suppo
- `delete` - `hash` - `extract`
+- `mask`
### `insert`
The `extract` action requires the following settings:
* `pattern` * `action`: `extract`
+### `mask`
+
+> [!NOTE]
+> The `mask` feature is available only in version 3.2.5 and later.
+
+The `mask` action masks attribute values by using a regular expression rule specified in the `pattern` and `replace` settings.
+
+```json
+"processors": [
+ {
+ "type": "attribute",
+ "actions": [
+ {
+ "key": "attributeName",
+ "pattern": "<regular expression pattern>",
+ "replace": "<replacement value>",
+ "action": "mask"
+ }
+ ]
+ }
+]
+```
+The `mask` action requires the following settings:
+* `key`
+* `pattern`
+* `replace`
+* `action`: `mask`
+
+`pattern` can contain a named group placed between `(?<` and `>`. Example: `(?<userGroupName>[a-zA-Z.:\/]+)\d+`. The group is `(?<userGroupName>[a-zA-Z.:\/]+)` and `userGroupName` is the name of the group. `replace` can then contain the same named group placed between `${` and `}`, followed by the mask. Example where the mask is `**`: `${userGroupName}**`.
+
+See [Telemetry processor examples](./java-standalone-telemetry-processors-examples.md) for masking examples.
+ ### Include criteria and exclude criteria Attribute processors support optional `include` and `exclude` criteria.
azure-monitor Java Standalone Upgrade From 2X https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-upgrade-from-2x.md
Title: Upgrading from 2.x - Azure Monitor Application Insights Java description: Upgrading from Azure Monitor Application Insights Java 2.x Previously updated : 07/20/2023 Last updated : 08/30/2023 ms.devlang: java
There are typically no code changes when upgrading to 3.x. The 3.x SDK dependenc
Add the 3.x Java agent to your JVM command-line args, for example ```--javaagent:path/to/applicationinsights-agent-3.4.15.jar
+-javaagent:path/to/applicationinsights-agent-3.4.16.jar
``` If you're using the Application Insights 2.x Java agent, just replace your existing `-javaagent:...` with the aforementioned example.
azure-monitor Javascript Feature Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/javascript-feature-extensions.md
See a [simple web app with the Click Analytics Autocollection Plug-in enabled](h
The following examples show which value is fetched as the `parentId` for different configurations.
+The examples show that, if `parentDataTag` is defined but the plug-in can't find this tag under the DOM tree, the plug-in uses the `id` of the closest parent element.
+ ### Example 1
-In example 1, the `parentDataTag` isn't declared and `data-parentid` or `data-*-parentid` isn't defined in any element.
+In example 1, the `parentDataTag` isn't declared and `data-parentid` or `data-*-parentid` isn't defined in any element. This example shows a configuration where a value for `parentId` isn't collected.
```javascript export const clickPluginConfigWithUseDefaultContentNameOrId = {
export const clickPluginConfigWithUseDefaultContentNameOrId = {
}; <div className="test1" data-id="test1parent">
- <div>Test1</div>
+ <div>Test1</div>
<div><small>with id, data-id, parent data-id defined</small></div> <Button id="id1" data-id="test1id" variant="info" onClick={trackEvent}>Test1</Button>
- </div>
+</div>
```
-For clicked element `<Button>`, the value of `parentId` is `"not_specified"`, because `parentDataTag` is not declared and the `data-parentid` or `data-*-parentid` is not defined in any element.
+For clicked element `<Button>`, the value of `parentId` is `"not_specified"`, because no `parentDataTag` details are defined and no parent element id is provided within the current element.
### Example 2
-In example 2, `parentDataTag` is declared and `data-parentid` is defined.
+In example 2, `parentDataTag` is declared and `data-parentid` is defined. This example shows how parent id details are collected.
```javascript export const clickPluginConfigWithParentDataTag = {
export const clickPluginConfigWithParentDataTag = {
</div> ```
-For clicked element `<Button>`, the value of `parentId` is `parentid2`. Even though `parentDataTag` is declared, the `data-parentid` definition takes precedence. If the `data-parentid` attribute was defined within the div element with `className="test2"`, the value for `parentId` would still be `parentid2`.
+For clicked element `<Button>`, the value of `parentId` is `parentid2`. Even though `parentDataTag` is declared, the `data-parentid` is directly defined within the element. Therefore, this value takes precedence over all other parent ids or id details defined in its parent elements.
### Example 3
-In example 3, `parentDataTag` is declared and the `data-parentid` or `data-*-parentid` attribute isn't defined.
+In example 3, `parentDataTag` is declared and the `data-parentid` or `data-*-parentid` attribute isn't defined. This example shows how declaring `parentDataTag` can be helpful for collecting a value for `parentId` when dynamic elements don't have an `id` or `data-*-id`.
```javascript export const clickPluginConfigWithParentDataTag = {
export const clickPluginConfigWithParentDataTag = {
</div> </div> ```
-For clicked element `<Button>`, because `parentDataTag` is declared and the `data-parentid` or `data-*-parentid` attribute isn't defined, the value of `parentId` is `test6parent`. It's `test6parent` because when `parentDataTag` is declared, the plug-in fetches the value of the `id` or `data-*-id` attribute from the parent HTML element that is closest to the clicked element. Because `data-group="buttongroup1"` is defined, the plug-in finds the `parentId` more efficiently.
+For clicked element `<Button>`, the value of `parentId` is `test6parent`, because `parentDataTag` is declared. This declaration allows the plug-in to traverse the current element tree, so the `id` of its closest parent is used when parent id details aren't directly provided within the current element. With `data-group="buttongroup1"` defined, the plug-in finds the `parentId` more efficiently.
If you remove the `data-group="buttongroup1"` attribute, the value of `parentId` is still `test6parent`, because `parentDataTag` is still declared.
azure-monitor Opentelemetry Add Modify https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-add-modify.md
Telemetry emitted by these Azure SDKs is automatically collected by default:
#### [Node.js](#tab/nodejs)
-The following OpenTelemetry Instrumentation libraries are included as part of the Azure Monitor Application Insights Distro. See [this](https://github.com/microsoft/ApplicationInsights-Python/tree/main/azure-monitor-opentelemetry#officially-supported-instrumentations) for more details.
+The following OpenTelemetry Instrumentation libraries are included as part of the Azure Monitor Application Insights Distro. For more information, see [OpenTelemetry officially supported instrumentations](https://github.com/microsoft/ApplicationInsights-Python/tree/main/azure-monitor-opentelemetry#officially-supported-instrumentations).
Requests - [HTTP/HTTPS](https://github.com/open-telemetry/opentelemetry-js/tree/main/experimental/packages/opentelemetry-instrumentation-http) <sup>[2](#FOOTNOTETWO)</sup>
Telemetry emitted by Azure SDKS is automatically [collected](https://github.com/
You can collect more data automatically when you include instrumentation libraries from the OpenTelemetry community.
-> [!NOTE]
-> We don't support and cannot guarantee the quality of community instrumentation libraries. If you would like to suggest a community instrumentation library us to include in our distro, post or up-vote an idea in our [feedback community](https://feedback.azure.com/d365community/forum/3887dc70-2025-ec11-b6e6-000d3a4f09d0).
-
-> [!CAUTION]
-> Some instrumentation libraries are based on experimental OpenTelemetry semantic specifications. Adding them may leave you vulnerable to future breaking changes.
### [ASP.NET Core](#tab/aspnetcore)
-To add a community library, use the `ConfigureOpenTelemetryMeterProvider` or `ConfigureOpenTelemetryTraceProvider` methods.
+To add a community library, use the `ConfigureOpenTelemetryMeterProvider` or `ConfigureOpenTelemetryTracerProvider` methods.
The following example demonstrates how the [Runtime Instrumentation](https://www.nuget.org/packages/OpenTelemetry.Instrumentation.Runtime) can be added to collect extra metrics.
app.MapGet("/", () =>
app.Run(); ```
-When calling `StartActivity`, it defaults to `ActivityKind.Internal` but you can provide any other `ActivityKind`.
+`StartActivity` defaults to `ActivityKind.Internal`, but you can provide any other `ActivityKind`.
`ActivityKind.Client`, `ActivityKind.Producer`, and `ActivityKind.Internal` are mapped to Application Insights `dependencies`. `ActivityKind.Server` and `ActivityKind.Consumer` are mapped to Application Insights `requests`.
using (var activity = activitySource.StartActivity("CustomActivity"))
} ```
-When calling `StartActivity`, it defaults to `ActivityKind.Internal` but you can provide any other `ActivityKind`.
+`StartActivity` defaults to `ActivityKind.Internal`, but you can provide any other `ActivityKind`.
`ActivityKind.Client`, `ActivityKind.Producer`, and `ActivityKind.Internal` are mapped to Application Insights `dependencies`. `ActivityKind.Server` and `ActivityKind.Consumer` are mapped to Application Insights `requests`.
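As a sketch, assuming an existing `ActivitySource` instance named `activitySource`, passing an explicit kind controls that mapping:

```csharp
// Assumes: using System.Diagnostics;
// ActivityKind.Client is mapped to an Application Insights dependency.
using (var activity = activitySource.StartActivity("CallExternalService", ActivityKind.Client))
{
    activity?.SetTag("peer.service", "example-service"); // hypothetical tag value
}
```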
Attaching custom dimensions to logs can be accomplished using a [message templat
Logback, Log4j, and java.util.logging are [autoinstrumented](#logs). Attaching custom dimensions to your logs can be accomplished in these ways:
-* [Log4j 2 MapMessage](https://logging.apache.org/log4j/2.x/log4j-api/apidocs/org/apache/logging/log4j/message/MapMessage.html) (a `MapMessage` key of `"message"` is captured as the log message)
-* [Log4j 2 Thread Context](https://logging.apache.org/log4j/2.x/manual/thread-context.html)
+* [Log4j 2.0 MapMessage](https://logging.apache.org/log4j/2.x/log4j-api/apidocs/org/apache/logging/log4j/message/MapMessage.html) (a `MapMessage` key of `"message"` is captured as the log message)
+* [Log4j 2.0 Thread Context](https://logging.apache.org/log4j/2.x/manual/thread-context.html)
* [Log4j 1.2 MDC](https://logging.apache.org/log4j/1.2/apidocs/org/apache/log4j/MDC.html) #### [Node.js](#tab/nodejs)
Get the request trace ID and the span ID in your code:
+ ## Next steps ### [ASP.NET Core](#tab/aspnetcore)
Get the request trace ID and the span ID in your code:
### [Node.js](#tab/nodejs) - To review the source code, see the [Application Insights Beta GitHub repository](https://github.com/microsoft/ApplicationInsights-node.js/tree/beta).-- To install the npm package and check for updates see the [applicationinsights npm Package](https://www.npmjs.com/package/applicationinsights/v/beta) page.
+- To install the npm package and check for updates, see the [applicationinsights npm Package](https://www.npmjs.com/package/applicationinsights/v/beta) page.
- To become more familiar with Azure Monitor Application Insights and OpenTelemetry, see the [Azure Monitor Example Application](https://github.com/Azure-Samples/azure-monitor-opentelemetry-node.js). - To learn more about OpenTelemetry and its community, see the [OpenTelemetry JavaScript GitHub repository](https://github.com/open-telemetry/opentelemetry-js). - To enable usage experiences, [enable web or browser user monitoring](javascript.md).
azure-monitor Opentelemetry Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-configuration.md
Use one of the following two ways to configure the connection string:
- Use configuration object:
- ```javascript
- const { ApplicationInsightsClient, ApplicationInsightsConfig } = require("applicationinsights");
- const config = new ApplicationInsightsConfig();
- config.azureMonitorExporterConfig.connectionString = "<Your Connection String>";
- const appInsights = new ApplicationInsightsClient(config);
+ ```typescript
+ const { useAzureMonitor, AzureMonitorOpenTelemetryOptions } = require("@azure/monitor-opentelemetry");
+ const options: AzureMonitorOpenTelemetryOptions = {
+ azureMonitorExporterConfig: {
+ connectionString: "<your connection string>"
+ }
+ };
+ useAzureMonitor(options);
```
To set the cloud role instance, see [cloud role instance](java-standalone-config
Set the Cloud Role Name and the Cloud Role Instance via [Resource](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/resource/sdk.md#resource-sdk) attributes. Cloud Role Name uses `service.namespace` and `service.name` attributes, although it falls back to `service.name` if `service.namespace` isn't set. Cloud Role Instance uses the `service.instance.id` attribute value. For information on standard attributes for resources, see [Resource Semantic Conventions](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/resource/semantic_conventions/README.md).
-```javascript
+```typescript
...
-const { ApplicationInsightsClient, ApplicationInsightsConfig } = require("applicationinsights");
+const { useAzureMonitor, AzureMonitorOpenTelemetryOptions } = require("@azure/monitor-opentelemetry");
const { Resource } = require("@opentelemetry/resources"); const { SemanticResourceAttributes } = require("@opentelemetry/semantic-conventions"); // - // Setting role name and role instance // -
-const config = new ApplicationInsightsConfig();
-config.resource = new Resource({
- [SemanticResourceAttributes.SERVICE_NAME]: "my-helloworld-service",
+const customResource = new Resource({
+ [SemanticResourceAttributes.SERVICE_NAME]: "my-service",
[SemanticResourceAttributes.SERVICE_NAMESPACE]: "my-namespace", [SemanticResourceAttributes.SERVICE_INSTANCE_ID]: "my-instance", });
-const appInsights = new ApplicationInsightsClient(config);
+const options: AzureMonitorOpenTelemetryOptions = {
+ resource: customResource
+};
+useAzureMonitor(options);
``` ### [Python](#tab/python)
The sampler expects a sample rate of between 0 and 1 inclusive. A rate of 0.1 me
```csharp var builder = WebApplication.CreateBuilder(args);
-builder.Services.AddOpenTelemetry().UseAzureMonitor();
-builder.Services.Configure<ApplicationInsightsSamplerOptions>(options => { options.SamplingRatio = 0.1F; });
+builder.Services.AddOpenTelemetry().UseAzureMonitor(o =>
+{
+ o.SamplingRatio = 0.1F;
+});
var app = builder.Build();
Starting from 3.4.0, rate-limited sampling is available and is now the default.
#### [Node.js](#tab/nodejs)
-```javascript
-const { ApplicationInsightsClient, ApplicationInsightsConfig } = require("applicationinsights");
-const config = new ApplicationInsightsConfig();
-config.samplingRatio = 0.75;
-const appInsights = new ApplicationInsightsClient(config);
+The sampler expects a sample rate of between 0 and 1 inclusive. A rate of 0.1 means approximately 10% of your traces are sent.
+
+```typescript
+const { useAzureMonitor, AzureMonitorOpenTelemetryOptions } = require("@azure/monitor-opentelemetry");
+
+const options: AzureMonitorOpenTelemetryOptions = {
+ samplingRatio: 0.1
+};
+useAzureMonitor(options);
``` #### [Python](#tab/python)
For more information about Java, see the [Java supplemental documentation](java-
#### [Node.js](#tab/nodejs)
-```javascript
-const { ApplicationInsightsClient, ApplicationInsightsConfig } = require("applicationinsights");
-const { ManagedIdentityCredential } = require("@azure/identity");
+We support the credential classes provided by [Azure Identity](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/identity/identity#credential-classes).
-const credential = new ManagedIdentityCredential();
+```typescript
+const { useAzureMonitor, AzureMonitorOpenTelemetryOptions } = require("@azure/monitor-opentelemetry");
+const { ManagedIdentityCredential } = require("@azure/identity");
-const config = new ApplicationInsightsConfig();
-config.azureMonitorExporterConfig.aadTokenCredential = credential;
-const appInsights = new ApplicationInsightsClient(config);
+const options: AzureMonitorOpenTelemetryOptions = {
+ credential: new ManagedIdentityCredential()
+};
+useAzureMonitor(options);
``` #### [Python](#tab/python)
By default, the AzureMonitorExporter uses one of the following locations for off
To override the default directory, you should set `storageDirectory`. For example:
-```javascript
-const { ApplicationInsightsClient, ApplicationInsightsConfig } = require("applicationinsights");
-const config = new ApplicationInsightsConfig();
-config.azureMonitorExporterConfig = {
- connectionString: "<Your Connection String>",
- storageDirectory: "C:\\SomeDirectory",
- disableOfflineStorage: false
++
+```typescript
+const { useAzureMonitor, AzureMonitorOpenTelemetryOptions } = require("@azure/monitor-opentelemetry");
+
+const options: AzureMonitorOpenTelemetryOptions = {
+  azureMonitorExporterConfig: {
+ connectionString: "<Your Connection String>",
+ storageDirectory: "C:\\SomeDirectory",
+ disableOfflineStorage: false
+ }
};
-const appInsights = new ApplicationInsightsClient(config);
+useAzureMonitor(options);
```

To disable this feature, you should set `disableOfflineStorage = true`.
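A minimal sketch of that setting in code, reusing the `azureMonitorExporterConfig` options shown above (the connection string is a placeholder):

```typescript
const { useAzureMonitor, AzureMonitorOpenTelemetryOptions } = require("@azure/monitor-opentelemetry");

const options: AzureMonitorOpenTelemetryOptions = {
  azureMonitorExporterConfig: {
    connectionString: "<Your Connection String>", // placeholder
    disableOfflineStorage: true // turn off disk buffering of telemetry
  }
};
useAzureMonitor(options);
```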
For more information about Java, see the [Java supplemental documentation](java-
2. Add the following code snippet. This example assumes you have an OpenTelemetry Collector with an OTLP receiver running. For details, see the [example on GitHub](https://github.com/open-telemetry/opentelemetry-js/tree/main/examples/otlp-exporter-node).
- ```javascript
- const { ApplicationInsightsClient, ApplicationInsightsConfig } = require("applicationinsights");
+ ```typescript
+ const { useAzureMonitor, AzureMonitorOpenTelemetryOptions } = require("@azure/monitor-opentelemetry");
+ const { trace } = require("@opentelemetry/api");
  const { BatchSpanProcessor } = require('@opentelemetry/sdk-trace-base');
  const { OTLPTraceExporter } = require('@opentelemetry/exporter-trace-otlp-http');
-
- const appInsights = new ApplicationInsightsClient(new ApplicationInsightsConfig());
+
+ useAzureMonitor();
const otlpExporter = new OTLPTraceExporter();
- appInsights.getTraceHandler().addSpanProcessor(new BatchSpanProcessor(otlpExporter));
+ const tracerProvider = trace.getTracerProvider().getDelegate();
+ tracerProvider.addSpanProcessor(new BatchSpanProcessor(otlpExporter));
```
+
#### [Python](#tab/python)

1. Install the [opentelemetry-exporter-otlp](https://pypi.org/project/opentelemetry-exporter-otlp/) package.
For more information about OpenTelemetry SDK configuration, see the [OpenTelemet
For more information about OpenTelemetry SDK configuration, see the [OpenTelemetry documentation](https://opentelemetry.io/docs/concepts/sdk-configuration). For additional details, see [Azure monitor Distro Usage](https://github.com/microsoft/ApplicationInsights-Python/tree/main/azure-monitor-opentelemetry#usage). +
azure-monitor Opentelemetry Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-enable.md
Title: Enable Azure Monitor OpenTelemetry for .NET, Java, Node.js, and Python applications description: This article provides guidance on how to enable Azure Monitor on applications by using OpenTelemetry. Previously updated : 08/11/2023 Last updated : 08/30/2023 ms.devlang: csharp, javascript, typescript, python
# Enable Azure Monitor OpenTelemetry for .NET, Node.js, Python and Java applications
-This article describes how to enable and configure OpenTelemetry-based data collection to power the experiences within [Azure Monitor Application Insights](app-insights-overview.md#application-insights-overview). We walk through how to install the "Azure Monitor OpenTelemetry Distro". The Distro will [automatically collect](opentelemetry-add-modify.md#automatic-data-collection) traces, metrics, logs, and exceptions across your application and its dependencies. To learn more about collecting data using OpenTelemetry, see [Data Collection Basics](opentelemetry-overview.md) or [OpenTelemetry FAQ](/azure/azure-monitor/faq#opentelemetry).
+This article describes how to enable and configure OpenTelemetry-based data collection to power the experiences within [Azure Monitor Application Insights](app-insights-overview.md#application-insights-overview). We walk through how to install the "Azure Monitor OpenTelemetry Distro." The Distro [automatically collects](opentelemetry-add-modify.md#automatic-data-collection) traces, metrics, logs, and exceptions across your application and its dependencies. To learn more about collecting data using OpenTelemetry, see [Data Collection Basics](opentelemetry-overview.md) or [OpenTelemetry FAQ](/azure/azure-monitor/faq#opentelemetry).
## OpenTelemetry Release Status
dotnet add package --prerelease Azure.Monitor.OpenTelemetry.Exporter
#### [Java](#tab/java)
-Download the [applicationinsights-agent-3.4.15.jar](https://github.com/microsoft/ApplicationInsights-Java/releases/download/3.4.15/applicationinsights-agent-3.4.15.jar) file.
+Download the [applicationinsights-agent-3.4.16.jar](https://github.com/microsoft/ApplicationInsights-Java/releases/download/3.4.16/applicationinsights-agent-3.4.16.jar) file.
> [!WARNING] >
Download the [applicationinsights-agent-3.4.15.jar](https://github.com/microsoft
Install these packages: -- [applicationinsights](https://www.npmjs.com/package/applicationinsights/v/beta)
+- [@azure/monitor-opentelemetry](https://www.npmjs.com/package/@azure/monitor-opentelemetry)
```sh
-npm install applicationinsights@beta
+npm install @azure/monitor-opentelemetry
``` The following packages are also used for some specific scenarios described later in this article:
pip install azure-monitor-opentelemetry --pre
### Enable Azure Monitor Application Insights
-To enable Azure Monitor Application Insights, you make a minor modification to your application and set your "Connection String". The Connection String tells your application where to send the telemetry the Distro collects, and it's unique to you.
+To enable Azure Monitor Application Insights, you make a minor modification to your application and set your "Connection String." The Connection String tells your application where to send the telemetry the Distro collects, and it's unique to you.
#### Modify your Application
var loggerFactory = LoggerFactory.Create(builder =>
Java autoinstrumentation is enabled through configuration changes; no code changes are required.
-Point the JVM to the jar file by adding `-javaagent:"path/to/applicationinsights-agent-3.4.15.jar"` to your application's JVM args.
+Point the JVM to the jar file by adding `-javaagent:"path/to/applicationinsights-agent-3.4.16.jar"` to your application's JVM args.
> [!TIP] > For scenario-specific guidance, see [Get Started (Supplemental)](./java-get-started-supplemental.md).
Point the JVM to the jar file by adding `-javaagent:"path/to/applicationinsights
##### [Node.js](#tab/nodejs)
-```javascript
-const { ApplicationInsightsClient, ApplicationInsightsConfig } = require("applicationinsights");
-const config = new ApplicationInsightsConfig();
-const appInsights = new ApplicationInsightsClient(config);
+```typescript
+const { useAzureMonitor } = require("@azure/monitor-opentelemetry");
+useAzureMonitor();
``` ##### [Python](#tab/python)
To paste your Connection String, select from the following options:
B. Set via Configuration File - Java Only (Recommended)
- Create a configuration file named `applicationinsights.json`, and place it in the same directory as `applicationinsights-agent-3.4.15.jar` with the following content:
+ Create a configuration file named `applicationinsights.json`, and place it in the same directory as `applicationinsights-agent-3.4.16.jar` with the following content:
```json {
To paste your Connection String, select from the following options:
C. Set via Code - ASP.NET Core, Node.js, and Python Only (Not recommended)
- See [Connection String Configuration](opentelemetry-configuration.md#connection-string) for example setting Connection String via code.
+ See [Connection String Configuration](opentelemetry-configuration.md#connection-string) for an example of setting Connection String via code.
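For Node.js, a minimal sketch of that approach, mirroring the `azureMonitorExporterConfig` options used elsewhere in this article (the connection string value is a placeholder):

```typescript
const { useAzureMonitor, AzureMonitorOpenTelemetryOptions } = require("@azure/monitor-opentelemetry");

const options: AzureMonitorOpenTelemetryOptions = {
  azureMonitorExporterConfig: {
    // Placeholder: paste the connection string from your Application Insights resource.
    connectionString: "<Your Connection String>"
  }
};
useAzureMonitor(options);
```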
> [!NOTE] > If you set the connection string in more than one place, we adhere to the following precedence:
Run your application and open your **Application Insights Resource** tab in the
:::image type="content" source="media/opentelemetry/server-requests.png" alt-text="Screenshot of the Application Insights Overview tab with server requests and server response time highlighted.":::
-That's it. Your application is now monitored by Application Insights. Everything else below is optional and available for further customization.
+You've now enabled Application Insights for your application. All the following steps are optional and allow for further customization.
Not working? Check out the troubleshooting page for [ASP.NET Core](/troubleshoot/azure/azure-monitor/app-insights/opentelemetry-troubleshooting-dotnet), [Java](/troubleshoot/azure/azure-monitor/app-insights/opentelemetry-troubleshooting-java), [Node.js](/troubleshoot/azure/azure-monitor/app-insights/opentelemetry-troubleshooting-nodejs), or [Python](/troubleshoot/azure/azure-monitor/app-insights/opentelemetry-troubleshooting-python).
Not working? Check out the troubleshooting page for [ASP.NET Core](/troubleshoot
As part of using Application Insights instrumentation, we collect and send diagnostic data to Microsoft. This data helps us run and improve Application Insights. To learn more, see [Statsbeat in Azure Application Insights](./statsbeat.md).
-## Support
-
-### [ASP.NET Core](#tab/aspnetcore)
--- For Azure support issues, open an [Azure support ticket](https://azure.microsoft.com/support/create-ticket/).-- For OpenTelemetry issues, contact the [OpenTelemetry .NET community](https://github.com/open-telemetry/opentelemetry-dotnet) directly.-- For a list of open issues related to Azure Monitor Exporter, see the [GitHub Issues Page](https://github.com/Azure/azure-sdk-for-net/issues?q=is%3Aopen+is%3Aissue+label%3A%22Monitor+-+Exporter%22).-
-#### [.NET](#tab/net)
--- For Azure support issues, open an [Azure support ticket](https://azure.microsoft.com/support/create-ticket/).-- For OpenTelemetry issues, contact the [OpenTelemetry .NET community](https://github.com/open-telemetry/opentelemetry-dotnet) directly.-- For a list of open issues related to Azure Monitor Exporter, see the [GitHub Issues Page](https://github.com/Azure/azure-sdk-for-net/issues?q=is%3Aopen+is%3Aissue+label%3A%22Monitor+-+Exporter%22).-
-### [Java](#tab/java)
--- For Azure support issues, open an [Azure support ticket](https://azure.microsoft.com/support/create-ticket/).-- For help with troubleshooting, review the [troubleshooting steps](java-standalone-troubleshoot.md).-- For OpenTelemetry issues, contact the [OpenTelemetry community](https://opentelemetry.io/community/) directly.-- For a list of open issues related to Azure Monitor Java Autoinstrumentation, see the [GitHub Issues Page](https://github.com/microsoft/ApplicationInsights-Java/issues).-
-### [Node.js](#tab/nodejs)
--- For Azure support issues, open an [Azure support ticket](https://azure.microsoft.com/support/create-ticket/).-- For OpenTelemetry issues, contact the [OpenTelemetry JavaScript community](https://github.com/open-telemetry/opentelemetry-js) directly.-- For a list of open issues related to Azure Monitor Exporter, see the [GitHub Issues Page](https://github.com/Azure/azure-sdk-for-js/issues?q=is%3Aopen+is%3Aissue+label%3A%22Monitor+-+Exporter%22).-
-### [Python](#tab/python)
--- For Azure support issues, open an [Azure support ticket](https://azure.microsoft.com/support/create-ticket/).-- For OpenTelemetry issues, contact the [OpenTelemetry Python community](https://github.com/open-telemetry/opentelemetry-python) directly.-- For a list of open issues related to Azure Monitor Distro, see the [GitHub Issues Page](https://github.com/microsoft/ApplicationInsights-Python/issues/new).---
-## OpenTelemetry feedback
-
-To provide feedback:
--- Fill out the OpenTelemetry community's [customer feedback survey](https://docs.google.com/forms/d/e/1FAIpQLScUt4reClurLi60xyHwGozgM9ZAz8pNAfBHhbTZ4gFWaaXIRQ/viewform).-- Tell Microsoft about yourself by joining the [OpenTelemetry Early Adopter Community](https://aka.ms/AzMonOTel/).-- Engage with other Azure Monitor users in the [Microsoft Tech Community](https://techcommunity.microsoft.com/t5/azure-monitor/bd-p/AzureMonitor).-- Make a feature request at the [Azure Feedback Forum](https://feedback.azure.com/d365community/forum/3887dc70-2025-ec11-b6e6-000d3a4f09d0). ## Next steps
To provide feedback:
- For details on adding and modifying Azure Monitor OpenTelemetry, see [Add and modify Azure Monitor OpenTelemetry](opentelemetry-add-modify.md) - To review the source code, see the [Application Insights Beta GitHub repository](https://github.com/microsoft/ApplicationInsights-node.js/tree/beta).-- To install the npm package and check for updates see the [applicationinsights npm Package](https://www.npmjs.com/package/applicationinsights/v/beta) page.
+- To install the npm package and check for updates, see the [applicationinsights npm Package](https://www.npmjs.com/package/applicationinsights/v/beta) page.
- To become more familiar with Azure Monitor Application Insights and OpenTelemetry, see the [Azure Monitor Example Application](https://github.com/Azure-Samples/azure-monitor-opentelemetry-node.js). - To learn more about OpenTelemetry and its community, see the [OpenTelemetry JavaScript GitHub repository](https://github.com/open-telemetry/opentelemetry-js). - To enable usage experiences, [enable web or browser user monitoring](javascript.md).
azure-monitor Opentelemetry Nodejs Exporter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-nodejs-exporter.md
You might want to enable the OpenTelemetry Protocol (OTLP) Exporter alongside yo
provider.register(); ```
-## Support
--- For OpenTelemetry issues, contact the [OpenTelemetry JavaScript community](https://github.com/open-telemetry/opentelemetry-js) directly.-- For a list of open issues related to Azure Monitor Exporter, see the [GitHub Issues Page](https://github.com/Azure/azure-sdk-for-js/issues?q=is%3Aopen+is%3Aissue+label%3A%22Monitor+-+Exporter%22).-
-## OpenTelemetry feedback
-
-To provide feedback:
--- Fill out the OpenTelemetry community's [customer feedback survey](https://docs.google.com/forms/d/e/1FAIpQLScUt4reClurLi60xyHwGozgM9ZAz8pNAfBHhbTZ4gFWaaXIRQ/viewform).-- Tell Microsoft about yourself by joining the [OpenTelemetry Early Adopter Community](https://aka.ms/AzMonOTel/).-- Engage with other Azure Monitor users in the [Microsoft Tech Community](https://techcommunity.microsoft.com/t5/azure-monitor/bd-p/AzureMonitor).-- Make a feature request at the [Azure Feedback Forum](https://feedback.azure.com/d365community/forum/3887dc70-2025-ec11-b6e6-000d3a4f09d0). ## Next steps
azure-monitor Opentelemetry Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-overview.md
Traces | Logs
Requests | Server Spans
Dependencies | Other Span Types (Client, Internal, etc.)
+
## Next steps

Select your enablement approach:
azure-monitor Opentelemetry Python Opencensus Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-python-opencensus-migrate.md
Coming soon.
### Performance Counters
-The OpenCensus Python Azure Monitor exporter automatically collected system and performance related metrics called [performance counters](https://github.com/census-instrumentation/opencensus-python/tree/master/contrib/opencensus-ext-azure#performance-counters). These metrics appear in `performanceCounters` in your Application Insights instance. In OpenTelemetry, we no longer send these metrics explicitly to `performanceCounters`. Metrics related to incoming/outgoing requests can be found under [standard metrics](./standard-metrics.md). If you would like OpenTelemetry to autocollect system related metrics, you can use the experimental system metrics [instrumentation](https://github.com/open-telemetry/opentelemetry-python-contrib/tree/main/instrumentation/opentelemetry-instrumentation-system-metrics), contributed by the OpenTelemetry Python community. This package is experimental and not officially supported by Microsoft.
+The OpenCensus Python Azure Monitor exporter automatically collected system and performance related metrics called [performance counters](https://github.com/census-instrumentation/opencensus-python/tree/master/contrib/opencensus-ext-azure#performance-counters). These metrics appear in `performanceCounters` in your Application Insights instance. In OpenTelemetry, we no longer send these metrics explicitly to `performanceCounters`. Metrics related to incoming/outgoing requests can be found under [standard metrics](./standard-metrics.md). If you would like OpenTelemetry to autocollect system related metrics, you can use the experimental system metrics [instrumentation](https://github.com/open-telemetry/opentelemetry-python-contrib/tree/main/instrumentation/opentelemetry-instrumentation-system-metrics), contributed by the OpenTelemetry Python community. This package is experimental and not officially supported by Microsoft.
+
azure-monitor Autoscale Multiprofile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/autoscale/autoscale-multiprofile.md
When creating multiple profiles using templates, the CLI, and PowerShell, follow
## [ARM templates](#tab/templates)
-See the autoscale section of the [ARM template resource definition](https://learn.microsoft.com/azure/templates/microsoft.insights/autoscalesettings) for a full template reference.
+See the autoscale section of the [ARM template resource definition](/azure/templates/microsoft.insights/autoscalesettings) for a full template reference.
There is no specification in the template for end time. A profile will remain active until the next profile's start time.
There is no specification in the template for end time. A profile will remain ac
The example below shows how to create two recurring profiles: one weekend profile starting at 00:01 on Saturday morning, and a second weekday profile starting on Mondays at 04:00. That means the weekend profile starts on Saturday morning at one minute past midnight and ends on Monday morning at 04:00. The weekday profile starts at 04:00 on Monday and ends just after midnight on Saturday morning. Use the following command to deploy the template:
-` az deployment group create --name VMSS1-Autoscale-607 --resource-group rg-vmss1 --template-file VMSS1-autoscale.json`
+`az deployment group create --name VMSS1-Autoscale-607 --resource-group rg-vmss1 --template-file VMSS1-autoscale.json`
where *VMSS1-autoscale.json* is the file containing the JSON object below.

``` JSON
where *VMSS1-autoscale.json* is the the file containing the JSON object below.
The CLI can be used to create multiple profiles in your autoscale settings.
-See the [Autoscale CLI reference](https://learn.microsoft.com/cli/azure/monitor/autoscale?view=azure-cli-latest) for the full set of autoscale CLI commands.
+See the [Autoscale CLI reference](/cli/azure/monitor/autoscale) for the full set of autoscale CLI commands.
The following steps show how to create a recurring autoscale profile using the CLI.
az monitor autoscale rule create -g rg-vmss1 --autoscale-name VMSS1-Autoscale-607
PowerShell can be used to create multiple profiles in your autoscale settings.
-See the [PowerShell Az.Monitor Reference ](https://learn.microsoft.com/powershell/module/az.monitor/#monitor) for the full set of autoscale PowerShell commands.
+See the [PowerShell Az.Monitor Reference](/powershell/module/az.monitor/#monitor) for the full set of autoscale PowerShell commands.
The following steps show how to create an autoscale profile using PowerShell.
$DefaultProfileThursdayProfile = New-AzAutoscaleProfile -DefaultCapacity "1" -Ma
## Next steps
-* [Autoscale CLI reference](https://learn.microsoft.com/cli/azure/monitor/autoscale?view=azure-cli-latest)
-* [ARM template resource definition](https://learn.microsoft.com/azure/templates/microsoft.insights/autoscalesettings)
-* [PowerShell Az PowerShell module.Monitor Reference](https://learn.microsoft.com/powershell/module/az.monitor/#monitor)
-* [REST API reference. Autoscale Settings](https://learn.microsoft.com/rest/api/monitor/autoscale-settings).
-* [Tutorial: Automatically scale a Virtual Machine Scale Set with an Azure template](https://learn.microsoft.com/azure/virtual-machine-scale-sets/tutorial-autoscale-template)
-* [Tutorial: Automatically scale a Virtual Machine Scale Set with the Azure CLI](https://learn.microsoft.com/azure/virtual-machine-scale-sets/tutorial-autoscale-cli)
-* [Tutorial: Automatically scale a Virtual Machine Scale Set with an Azure template](https://learn.microsoft.com/azure/virtual-machine-scale-sets/tutorial-autoscale-powershell)
+* [Autoscale CLI reference](/cli/azure/monitor/autoscale)
+* [ARM template resource definition](/azure/templates/microsoft.insights/autoscalesettings)
+* [PowerShell Az.Monitor reference](/powershell/module/az.monitor/#monitor)
+* [REST API reference: Autoscale Settings](/rest/api/monitor/autoscale-settings)
+* [Tutorial: Automatically scale a Virtual Machine Scale Set with an Azure template](/azure/virtual-machine-scale-sets/tutorial-autoscale-template)
+* [Tutorial: Automatically scale a Virtual Machine Scale Set with the Azure CLI](/azure/virtual-machine-scale-sets/tutorial-autoscale-cli)
+* [Tutorial: Automatically scale a Virtual Machine Scale Set with Azure PowerShell](/azure/virtual-machine-scale-sets/tutorial-autoscale-powershell)
azure-monitor Autoscale Using Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/autoscale/autoscale-using-powershell.md
To create autoscale setting using PowerShell, follow the sequence below:
### Create rules

Create scale-in and scale-out rules, then associate them with a profile.
-Rules are created using the [`New-AzAutoscaleScaleRuleObject`](https://learn.microsoft.com/powershell/module/az.monitor/new-azautoscalescaleruleobject).
+Rules are created using the [`New-AzAutoscaleScaleRuleObject`](/powershell/module/az.monitor/new-azautoscalescaleruleobject).
The following PowerShell script creates two rules.
The table below describes the parameters used in the `New-AzAutoscaleScaleRuleOb
### Create a default autoscale profile and associate the rules
-After defining the scale rules, create a profile. The profile specifies the default, upper, and lower instance count limits, and the times that the associated rules can be applied. Use the [`New-AzAutoscaleProfileObject`](https://learn.microsoft.com/powershell/module/az.monitor/new-azautoscaleprofileobject) cmdlet to create a new autoscale profile. As this is a default profile, it doesn't have any schedule parameters. The default profile is active at times that no other profiles are active
+After defining the scale rules, create a profile. The profile specifies the default, upper, and lower instance count limits, and the times that the associated rules can be applied. Use the [`New-AzAutoscaleProfileObject`](/powershell/module/az.monitor/new-azautoscaleprofileobject) cmdlet to create a new autoscale profile. As this is a default profile, it doesn't have any schedule parameters. The default profile is active when no other profiles are active.
```azurepowershell $defaultProfile=New-AzAutoscaleProfileObject `
The table below describes the parameters used in the `New-AzAutoscaleProfileObje
### Apply the autoscale settings
-After fining the rules and profile, apply the autoscale settings using [`New-AzAutoscaleSetting`](https://learn.microsoft.com/powershell/module/az.monitor/new-azautoscalesetting). To update existing autoscale setting use [`Update-AzAutoscaleSetting`](https://learn.microsoft.com/powershell/module/az.monitor/add-azautoscalesetting)
+After defining the rules and profile, apply the autoscale settings using [`New-AzAutoscaleSetting`](/powershell/module/az.monitor/new-azautoscalesetting). To update an existing autoscale setting, use [`Update-AzAutoscaleSetting`](/powershell/module/az.monitor/add-azautoscalesetting).
```azurepowershell New-AzAutoscaleSetting `
New-AzAutoscaleSetting `
### Add notifications to your autoscale settings

Add notifications to your scale setting to trigger a webhook or send email notifications when a scale event occurs.
-For more information on webhook notifications, see [`New-AzAutoscaleWebhookNotificationObject`](https://learn.microsoft.com/powershell/module/az.monitor/new-azautoscalewebhooknotificationobject)
+For more information on webhook notifications, see [`New-AzAutoscaleWebhookNotificationObject`](/powershell/module/az.monitor/new-azautoscalewebhooknotificationobject)
Set a webhook using the following cmdlet:

```azurepowershell
Set a webhook using the following cmdlet;
$webhook1=New-AzAutoscaleWebhookNotificationObject -Property @{} -ServiceUri "http://contoso.com/webhook1" ```
-Configure the notification using the webhook and set up email notification using the [`New-AzAutoscaleNotificationObject`](https://learn.microsoft.com/powershell/module/az.monitor/new-azautoscalenotificationobject) cmdlet:
+Configure the notification using the webhook and set up email notification using the [`New-AzAutoscaleNotificationObject`](/powershell/module/az.monitor/new-azautoscalenotificationobject) cmdlet:
```azurepowershell
For more information on scheduled profiles, see [Autoscale with multiple profile
## Other autoscale commands
-For a complete list of PowerShell cmdlets for autoscale, see the [PowerShell Module Browser](https://learn.microsoft.com/powershell/module/?term=azautoscale)
+For a complete list of PowerShell cmdlets for autoscale, see the [PowerShell Module Browser](/powershell/module/?term=azautoscale)
## Clean up resources
azure-monitor Autoscale Webhook Email https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/autoscale/autoscale-webhook-email.md
For example, the following command adds an email notification and a webhook noti
> You can add more than one email or webhook notification by using the `--add-action` parameter multiple times. While multiple webhook notifications are supported and can be seen in the JSON, the portal only shows the first webhook.
-For more information, see [az monitor autoscale](https://learn.microsoft.com/cli/azure/monitor/autoscale?view=azure-cli-latest).
+For more information, see [az monitor autoscale](/cli/azure/monitor/autoscale).
New-AzAutoscaleSetting -Name autoscalesetting2 `
### Use Resource Manager templates to configure notifications.
-When you use the Resource Manager templates or REST API, include the `notifications` element in your [autoscale settings](https://learn.microsoft.com/azure/templates/microsoft.insights/autoscalesettings?pivots=deployment-language-arm-template#resource-format-1), for example:
+When you use the Resource Manager templates or REST API, include the `notifications` element in your [autoscale settings](/azure/templates/microsoft.insights/autoscalesettings?pivots=deployment-language-arm-template#resource-format-1), for example:
```JSON "notifications": [
azure-monitor Azure Monitor Monitoring Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/azure-monitor-monitoring-reference.md
The following schemas are relevant to action groups, which are part of the notif
## See Also -- See [Monitoring Azure Azure Monitor](monitor-azure-monitor.md) for a description of what Azure Monitor monitors in itself. -- See [Monitoring Azure resources with Azure Monitor](./essentials/monitor-azure-resource.md) for details on monitoring Azure resources.
+- See [Monitoring Azure Monitor](monitor-azure-monitor.md) for a description of what Azure Monitor monitors in itself.
+- See [Monitoring Azure resources with Azure Monitor](./essentials/monitor-azure-resource.md) for details on monitoring Azure resources.
azure-monitor Container Insights Cost https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-cost.md
The following types of data collected from a Kubernetes cluster with Container i
- Container environment variables from every monitored container in the cluster - Completed Kubernetes jobs/pods in the cluster that don't require monitoring - Active scraping of Prometheus metrics-- [Diagnostic log collection](../../aks/monitor-aks.md#resource-logs) of Kubernetes main node logs in your Azure Kubernetes Service (AKS) cluster to analyze log data generated by main components, such as `kube-apiserver` and `kube-controller-manager`.
+- [Resource log collection](../../aks/monitor-aks.md#resource-logs) of Kubernetes main node logs in your Azure Kubernetes Service (AKS) cluster to analyze log data generated by main components, such as `kube-apiserver` and `kube-controller-manager`.
## Estimating costs to monitor your AKS cluster
azure-monitor Container Insights Hybrid Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-hybrid-setup.md
+
+ Title: Configure hybrid Kubernetes clusters with Container insights | Microsoft Docs
+description: This article describes how you can configure Container insights to monitor Kubernetes clusters hosted on Azure Stack or other environments.
+ Last updated : 08/21/2023+++
+# Configure hybrid Kubernetes clusters with Container insights
+
+Container insights provides a rich monitoring experience for the Azure Kubernetes Service (AKS) and [AKS Engine on Azure](https://github.com/Azure/aks-engine), which is a self-managed Kubernetes cluster hosted on Azure. This article describes how to enable monitoring of Kubernetes clusters hosted outside of Azure and achieve a similar monitoring experience.
+
+## Supported configurations
+
+The following configurations are officially supported with Container insights. If you're running different Kubernetes or operating system versions, open a support ticket.
+
+- Environments:
+ - Kubernetes on-premises.
+ - AKS Engine on Azure and Azure Stack. For more information, see [AKS Engine on Azure Stack](/azure-stack/user/azure-stack-kubernetes-aks-engine-overview).
+  - [OpenShift](https://docs.openshift.com/container-platform/4.3/welcome/index.html) version 4 and higher, on-premises or in other cloud environments.
+- Versions of Kubernetes and support policy are the same as versions of [AKS supported](../../aks/supported-kubernetes-versions.md).
+- The following container runtimes are supported: Moby and CRI-compatible runtimes such as CRI-O and ContainerD.
+- The supported Linux OS releases for main and worker nodes are Ubuntu (18.04 LTS and 16.04 LTS) and Red Hat Enterprise Linux CoreOS 43.81.
+- Supported Azure access control: Kubernetes role-based access control (RBAC) and non-RBAC.
+
+## Prerequisites
+
+Before you start, make sure that you meet the following prerequisites:
+
+- You have a [Log Analytics workspace](../logs/design-logs-deployment.md). Container insights supports a Log Analytics workspace in the regions listed in Azure [Products by region](https://azure.microsoft.com/global-infrastructure/services/?regions=all&products=monitor). You can create your own workspace through [Azure Resource Manager](../logs/resource-manager-workspace.md), [PowerShell](../logs/powershell-workspace-configuration.md?toc=%2fpowershell%2fmodule%2ftoc.json), or the [Azure portal](../logs/quick-create-workspace.md).
+
+ >[!NOTE]
+ >Enabling the monitoring of multiple clusters with the same cluster name to the same Log Analytics workspace isn't supported. Cluster names must be unique.
+ >
+
+- You're a member of the Log Analytics contributor role to enable container monitoring. For more information about how to control access to a Log Analytics workspace, see [Manage access to workspace and log data](../logs/manage-access.md).
+- To view the monitoring data, you must have the [Log Analytics reader](../logs/manage-access.md#azure-rbac) role in the Log Analytics workspace, configured with Container insights.
+- You have a [Helm client](https://helm.sh/docs/using_helm/) to onboard the Container insights chart for the specified Kubernetes cluster.
+- The following proxy and firewall configuration information is required for the containerized version of the Log Analytics agent for Linux to communicate with Azure Monitor:
+
+ |Agent resource|Ports |
+ |--|--|
+ |*.ods.opinsights.azure.com |Port 443 |
+ |*.oms.opinsights.azure.com |Port 443 |
+ |*.dc.services.visualstudio.com |Port 443 |
+
+- The containerized agent requires the Kubelet `cAdvisor secure port: 10250` or `unsecure port: 10255` to be opened on all nodes in the cluster to collect performance metrics. We recommend that you configure `secure port: 10250` on the Kubelet cAdvisor if it isn't configured already.
+- The containerized agent requires the following environmental variables to be specified on the container to communicate with the Kubernetes API service within the cluster to collect inventory data: `KUBERNETES_SERVICE_HOST` and `KUBERNETES_PORT_443_TCP_PORT`.
+
+>[!IMPORTANT]
+>The minimum agent version supported for monitoring hybrid Kubernetes clusters is *ciprod10182019* or later.
+
+## Enable monitoring
+
+To enable Container insights for the hybrid Kubernetes cluster:
+
+1. Configure your Log Analytics workspace with the Container insights solution.
+
+1. Enable the Container insights Helm chart with a Log Analytics workspace.
+
+For more information on monitoring solutions in Azure Monitor, see [Monitoring solutions in Azure Monitor](/previous-versions/azure/azure-monitor/insights/solutions).
+
+### Add the Azure Monitor Containers solution
+
+You can deploy the solution with the provided Azure Resource Manager template by using the Azure PowerShell cmdlet `New-AzResourceGroupDeployment` or with the Azure CLI.
+
+If you're unfamiliar with the concept of deploying resources by using a template, see:
+
+- [Deploy resources with Resource Manager templates and Azure PowerShell](../../azure-resource-manager/templates/deploy-powershell.md)
+- [Deploy resources with Resource Manager templates and the Azure CLI](../../azure-resource-manager/templates/deploy-cli.md)
+
+If you choose to use the Azure CLI, you first need to install and use the CLI locally. You must be running the Azure CLI version 2.0.59 or later. To identify your version, run `az --version`. If you need to install or upgrade the Azure CLI, see [Install the Azure CLI](/cli/azure/install-azure-cli).
+
+This method includes two JSON templates. One template specifies the configuration to enable monitoring. The other template contains parameter values that you configure to specify:
+
+- `workspaceResourceId`: The full resource ID of your Log Analytics workspace.
+- `workspaceRegion`: The region the workspace is created in, which is also referred to as **Location** in the workspace properties when you view them from the Azure portal.
+
+To first identify the full resource ID of your Log Analytics workspace that's required for the `workspaceResourceId` parameter value in the *containerSolutionParams.json* file, perform the following steps. Then run the PowerShell cmdlet or Azure CLI command to add the solution.
+
+1. List all the subscriptions to which you have access by using the following command:
+
+ ```azurecli
+ az account list --all -o table
+ ```
+
+ The output will resemble the following example:
+
+ ```azurecli
+ Name CloudName SubscriptionId State IsDefault
+ ---------------  ---------  ------------------------------------  -------  ---------
+ Microsoft Azure AzureCloud 0fb60ef2-03cc-4290-b595-e71108e8f4ce Enabled True
+ ```
+
+ Copy the value for **SubscriptionId**.
+
+1. Switch to the subscription hosting the Log Analytics workspace by using the following command:
+
+ ```azurecli
+ az account set -s <subscriptionId of the workspace>
+ ```
+
+1. The following example displays the list of workspaces in your subscriptions in the default JSON format:
+
+ ```azurecli
+ az resource list --resource-type Microsoft.OperationalInsights/workspaces -o json
+ ```
+
+ In the output, find the workspace name. Then copy the full resource ID of that Log Analytics workspace under the field **ID**.
+
+1. Copy and paste the following JSON syntax into your file:
+
+ ```json
+ {
+ "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "workspaceResourceId": {
+ "type": "string",
+ "metadata": {
+ "description": "Azure Monitor Log Analytics Workspace Resource ID"
+ }
+ },
+ "workspaceRegion": {
+ "type": "string",
+ "metadata": {
+ "description": "Azure Monitor Log Analytics Workspace region"
+ }
+ }
+ },
+ "resources": [
+ {
+ "type": "Microsoft.Resources/deployments",
+ "name": "[Concat('ContainerInsights', '-', uniqueString(parameters('workspaceResourceId')))]",
+ "apiVersion": "2017-05-10",
+ "subscriptionId": "[split(parameters('workspaceResourceId'),'/')[2]]",
+ "resourceGroup": "[split(parameters('workspaceResourceId'),'/')[4]]",
+ "properties": {
+ "mode": "Incremental",
+ "template": {
+ "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {},
+ "variables": {},
+ "resources": [
+ {
+ "apiVersion": "2015-11-01-preview",
+ "type": "Microsoft.OperationsManagement/solutions",
+ "location": "[parameters('workspaceRegion')]",
+ "name": "[Concat('ContainerInsights', '(', split(parameters('workspaceResourceId'),'/')[8], ')')]",
+ "properties": {
+ "workspaceResourceId": "[parameters('workspaceResourceId')]"
+ },
+ "plan": {
+ "name": "[Concat('ContainerInsights', '(', split(parameters('workspaceResourceId'),'/')[8], ')')]",
+ "product": "[Concat('OMSGallery/', 'ContainerInsights')]",
+ "promotionCode": "",
+ "publisher": "Microsoft"
+ }
+ }
+ ]
+ },
+ "parameters": {}
+ }
+ }
+ ]
+ }
+ ```
+
+1. Save this file as **containerSolution.json** to a local folder.
+
+1. Paste the following JSON syntax into your file:
+
+ ```json
+ {
+ "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "workspaceResourceId": {
+ "value": "<workspaceResourceId>"
+ },
+ "workspaceRegion": {
+ "value": "<workspaceRegion>"
+ }
+ }
+ }
+ ```
+
+1. Edit the values for **workspaceResourceId** by using the value you copied in step 3. For **workspaceRegion**, copy the **Region** value after running the Azure CLI command [az monitor log-analytics workspace show](/cli/azure/monitor/log-analytics/workspace#az-monitor-log-analytics-workspace-list&preserve-view=true).
+
+1. Save this file as **containerSolutionParams.json** to a local folder.
+
+1. You're ready to deploy this template.
+
+ - To deploy with Azure PowerShell, use the following commands in the folder that contains the template:
+
+ ```powershell
+ # Configure and sign in to the cloud of the Log Analytics workspace. Specify the corresponding cloud environment of your workspace in the following command.
+ Connect-AzureRmAccount -Environment <AzureCloud | AzureChinaCloud | AzureUSGovernment>
+ ```
+
+ ```powershell
+ # set the context of the subscription of Log Analytics workspace
+ Set-AzureRmContext -SubscriptionId <subscription Id of Log Analytics workspace>
+ ```
+
+ ```powershell
+ # execute deployment command to add Container Insights solution to the specified Log Analytics workspace
+ New-AzureRmResourceGroupDeployment -Name OnboardCluster -ResourceGroupName <resource group of Log Analytics workspace> -TemplateFile .\containerSolution.json -TemplateParameterFile .\containerSolutionParams.json
+ ```
+
+ The configuration change can take a few minutes to finish. When it's finished, a message similar to the following example includes this result:
+
+ ```powershell
+ provisioningState : Succeeded
+ ```
+
+ - To deploy with the Azure CLI, run the following commands:
+
+ ```azurecli
+ az cloud set --name <AzureCloud | AzureChinaCloud | AzureUSGovernment>
+ az login
+ az account set --subscription "Subscription Name"
+ # execute deployment command to add container insights solution to the specified Log Analytics workspace
+ az deployment group create --resource-group <resource group of log analytics workspace> --name <deployment name> --template-file ./containerSolution.json --parameters @./containerSolutionParams.json
+ ```
+
+ The configuration change can take a few minutes to finish. When it's finished, a message similar to the following example includes this result:
+
+ ```azurecli
+ provisioningState : Succeeded
+ ```
+
+ After you've enabled monitoring, it might take about 15 minutes before you can view health metrics for the cluster.
+
+## Install the Helm chart
+
+In this section, you install the containerized agent for Container insights. Before you proceed, identify the workspace ID required for the `amalogsagent.secret.wsid` parameter and the primary key required for the `amalogsagent.secret.key` parameter. To identify this information, follow these steps and then run the commands to install the agent by using the Helm chart.
+
+1. Run the following command to identify the workspace ID:
+
+ `az monitor log-analytics workspace list --resource-group <resourceGroupName>`
+
+ In the output, find the workspace name under the field **name**. Then copy the workspace ID of that Log Analytics workspace under the field **customerID**.
+
+1. Run the following command to identify the primary key for the workspace:
+
+ `az monitor log-analytics workspace get-shared-keys --resource-group <resourceGroupName> --workspace-name <logAnalyticsWorkspaceName>`
+
+ In the output, find the primary key under the field **primarySharedKey** and then copy the value.
+
+ >[!NOTE]
+ >The following commands are applicable only for Helm version 2. Use of the `--name` parameter isn't applicable with Helm version 3.
+
+ If your Kubernetes cluster communicates through a proxy server, configure the parameter `amalogsagent.proxy` with the URL of the proxy server. If the cluster doesn't communicate through a proxy server, you don't need to specify this parameter. For more information, see the section [Configure the proxy endpoint](#configure-the-proxy-endpoint) later in this article.
+
+1. Add the Azure charts repository to your local list by running the following command:
+
+ ```
+ helm repo add microsoft https://microsoft.github.io/charts/repo
+ ```
+
+1. Install the chart by running the following command:
+
+ ```
+ $ helm install --name myrelease-1 \
+ --set amalogsagent.secret.wsid=<logAnalyticsWorkspaceId>,amalogsagent.secret.key=<logAnalyticsWorkspaceKey>,amalogsagent.env.clusterName=<my_prod_cluster> microsoft/azuremonitor-containers
+ ```
+
+ If the Log Analytics workspace is in Azure China 21Vianet, run the following command:
+
+ ```
+ $ helm install --name myrelease-1 \
+ --set amalogsagent.domain=opinsights.azure.cn,amalogsagent.secret.wsid=<logAnalyticsWorkspaceId>,amalogsagent.secret.key=<logAnalyticsWorkspaceKey>,amalogsagent.env.clusterName=<your_cluster_name> incubator/azuremonitor-containers
+ ```
+
+ If the Log Analytics workspace is in Azure US Government, run the following command:
+
+ ```
+ $ helm install --name myrelease-1 \
+ --set amalogsagent.domain=opinsights.azure.us,amalogsagent.secret.wsid=<logAnalyticsWorkspaceId>,amalogsagent.secret.key=<logAnalyticsWorkspaceKey>,amalogsagent.env.clusterName=<your_cluster_name> incubator/azuremonitor-containers
+ ```
+
+### Enable the Helm chart by using the API model
+
+You can specify an add-on in the AKS Engine cluster specification JSON file, which is also referred to as the API model. In this add-on, provide the base64-encoded version of `WorkspaceGUID` and `WorkspaceKey` of the Log Analytics workspace where the collected monitoring data is stored. You can find `WorkspaceGUID` and `WorkspaceKey` by using steps 1 and 2 in the previous section.
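If you need to produce those base64-encoded values, a small Node.js sketch such as the following works (the workspace ID and key are placeholders; a shell `base64` utility does the same job):

```typescript
// Base64-encode the Log Analytics workspace ID and key for the API model.
// Placeholders: substitute the values you copied in steps 1 and 2.
const workspaceGuid = Buffer.from("<logAnalyticsWorkspaceId>").toString("base64");
const workspaceKey = Buffer.from("<logAnalyticsWorkspaceKey>").toString("base64");

console.log(workspaceGuid); // paste into the add-on's "workspaceGuid" setting
console.log(workspaceKey);  // paste into the add-on's "workspaceKey" setting
```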
+
+Supported API definitions for the Azure Stack Hub cluster can be found in the example [kubernetes-container-monitoring_existing_workspace_id_and_key.json](https://github.com/Azure/aks-engine/blob/master/examples/addons/container-monitoring/kubernetes-container-monitoring_existing_workspace_id_and_key.json). Specifically, find the **addons** property in **kubernetesConfig**:
+
+```json
+"orchestratorType": "Kubernetes",
+ "kubernetesConfig": {
+ "addons": [
+ {
+ "name": "container-monitoring",
+ "enabled": true,
+ "config": {
+ "workspaceGuid": "<Azure Log Analytics Workspace Id in Base-64 encoded>",
+ "workspaceKey": "<Azure Log Analytics Workspace Key in Base-64 encoded>"
+ }
+ }
+ ]
+ }
+```
+
+## Configure agent data collection
+
+Starting with chart version 1.0.0, the agent data collection settings are controlled from the ConfigMap. For more information on agent data collection settings, see [Configure agent data collection for Container insights](container-insights-agent-config.md).
+
+After you've successfully deployed the chart, you can review the data for your hybrid Kubernetes cluster in Container insights from the Azure portal.
+
+>[!NOTE]
+>Ingestion latency is around 5 to 10 minutes from the agent to commit in the Log Analytics workspace. Status of the cluster shows the value **No data** or **Unknown** until all the required monitoring data is available in Azure Monitor.
+
+## Configure the proxy endpoint
+
+Starting with chart version 2.7.1, the chart supports specifying the proxy endpoint with the `amalogsagent.proxy` chart parameter so that it can communicate through your proxy server. Communication between the Container insights agent and Azure Monitor can go through an HTTP or HTTPS proxy server. Both anonymous and basic authentication with a username and password are supported.
+
+The proxy configuration value has the syntax `[protocol://][user:password@]proxyhost[:port]`.
+
+> [!NOTE]
+>If your proxy server doesn't require authentication, you still need to specify a pseudo username and password. It can be any username or password.
+
+|Property| Description |
+|--|-|
+|protocol | HTTP or HTTPS |
+|user | Optional username for proxy authentication |
+|password | Optional password for proxy authentication |
+|proxyhost | Address or FQDN of the proxy server |
+|port | Optional port number for the proxy server |
+
+An example is `amalogsagent.proxy=http://user01:password@proxy01.contoso.com:8080`.
+
+If you specify the protocol as **http**, the HTTP requests are created by using an SSL/TLS secure connection. Your proxy server must support SSL/TLS protocols.
+
+## Troubleshooting
+
+If you encounter an error while you attempt to enable monitoring for your hybrid Kubernetes cluster, copy the PowerShell script [TroubleshootError_nonAzureK8s.ps1](https://aka.ms/troubleshoot-non-azure-k8s) and save it to a folder on your computer. This script is provided to help you detect and fix the issues you encounter. It's designed to detect and attempt correction of the following issues:
+
+- The specified Log Analytics workspace is valid.
+- The Log Analytics workspace is configured with the Container insights solution. If not, configure the workspace.
+- The Azure Monitor Agent replicaset pods are running.
+- The Azure Monitor Agent daemonset pods are running.
+- The Azure Monitor Agent Health service is running.
+- The Log Analytics workspace ID and key configured on the containerized agent match with the workspace that the insight is configured with.
+- Validate that all the Linux worker nodes have the `kubernetes.io/role=agent` label so that the agent replicaset pods can be scheduled. If the label doesn't exist, add it.
+- Validate that `cAdvisor secure port:10250` or `unsecure port: 10255` is opened on all nodes in the cluster.
+
+To execute with Azure PowerShell, use the following commands in the folder that contains the script:
+
+```powershell
+.\TroubleshootError_nonAzureK8s.ps1 -azureLogAnalyticsWorkspaceResourceId </subscriptions/<subscriptionId>/resourceGroups/<resourcegroupName>/providers/Microsoft.OperationalInsights/workspaces/<workspaceName> -kubeConfig <kubeConfigFile> -clusterContextInKubeconfig <clusterContext>
+```
+
+## Next steps
+
+Now that monitoring is enabled to collect the health and resource utilization of your hybrid Kubernetes clusters and the workloads running on them, learn [how to use](container-insights-analyze.md) Container insights.
azure-monitor Container Insights Livedata Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-livedata-overview.md
# View Kubernetes logs, events, and pod metrics in real time
-Container insights includes the Live Data feature. You can use this advanced diagnostic feature for direct access to your Azure Kubernetes Service (AKS) container logs (stdout/stderror), events, and pod metrics. It exposes direct access to `kubectl logs -c`, `kubectl get` events, and `kubectl top pods`. A console pane shows the logs, events, and metrics generated by the container engine to help with troubleshooting issues in real time.
+The Live Data feature in Container insights gives you direct access to your Azure Kubernetes Service (AKS) container logs (stdout/stderror), events, and pod metrics. It exposes direct access to `kubectl logs -c`, `kubectl get` events, and `kubectl top pods`. A console pane shows the logs, events, and metrics generated by the container engine to help with troubleshooting issues in real time.
> [!NOTE] > AKS uses [Kubernetes cluster-level logging architectures](https://kubernetes.io/docs/concepts/cluster-administration/logging/#cluster-level-logging-architectures). You can use tools such as Fluentd or Fluent Bit to collect logs.
azure-monitor Container Insights Log Query https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-log-query.md
The required tables for this chart include KubeNodeInventory, KubePodInventory,
| project ClusterName, NodeName, LastReceivedDateTime, Status, ContainerCount, UpTimeMs = UpTimeMs_long, Aggregation = Aggregation_real, LimitValue = LimitValue_real, list_TrendPoint, Labels, ClusterId ```
-## Resource logs
-
-For details on resource logs for AKS clusters, see [Collect control plane logs](../../aks/monitor-aks.md#resource-logs).
-- ## Prometheus metrics The following examples require the configuration described in [Send Prometheus metrics to Log Analytics workspace with Container insights](container-insights-prometheus-logs.md).
azure-monitor Container Insights Logging V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-logging-v2.md
Previously updated : 07/18/2023 Last updated : 08/28/2023 # Enable the ContainerLogV2 schema Azure Monitor Container insights offers a schema for container logs, called ContainerLogV2. As part of this schema, there are fields to make common queries to view Azure Kubernetes Service (AKS) and Azure Arc-enabled Kubernetes data. In addition, this schema is compatible with [Basic Logs](../logs/basic-logs-configure.md), which offers a low-cost alternative to standard analytics logs.
-ContainerLogV2 will be default schema for customers who will be onboarding container insights with Managed Identity Auth using ARM, Bicep, Terraform, Policy and Portal onboarding. ContainerLogV2 can be explicitly enabled through CLI version 2.51.0 or higher using Data collection settings.
+>[!NOTE]
+> ContainerLogV2 is the default schema for customers who onboard Container insights with managed identity authentication by using ARM templates, Bicep, Terraform, Policy, or the Azure portal. ContainerLogV2 can be explicitly enabled through CLI version 2.51.0 or higher by using data collection settings.
The new fields are: * `ContainerName`
The new fields are:
> [Export](../logs/logs-data-export.md) to Event Hub and Storage Account is not supported if the incoming LogMessage is not a valid JSON. For best performance, we recommend emitting container logs in JSON format. ## Enable the ContainerLogV2 schema
-Customers can enable the ContainerLogV2 schema at the cluster level through either the cluster's Data Collection Rule or ConfigMap. To enable the ContainerLogV2 schema, configure the cluster's ConfigMap. Learn more about ConfigMap in [Kubernetes documentation](https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/) and in [Azure Monitor documentation](./container-insights-agent-config.md#configmap-file-settings-overview).
+Customers can enable the ContainerLogV2 schema at the cluster level through either the cluster's Data Collection Rule (DCR) or ConfigMap. To enable the ContainerLogV2 schema, configure the cluster's ConfigMap. Learn more about ConfigMap in [Kubernetes documentation](https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/) and in [Azure Monitor documentation](./container-insights-agent-config.md#configmap-file-settings-overview).
Follow the instructions to configure an existing ConfigMap or to use a new one.
-### Configure via an existing Data Collection Rule
+>[!NOTE]
+> Because ContainerLogV2 can be enabled through either the DCR or ConfigMap, the ConfigMap's ContainerLogV2 setting takes precedence when both are enabled. Stdout and stderr logs are only ingested to the ContainerLog table when both the DCR and ConfigMap are explicitly set to off.
+
+### Configure via an existing Data Collection Rule (DCR)
+
+## [Azure portal](#tab/configure-portal)
1. In the Insights section of your Kubernetes cluster, select the **Monitoring Settings** button from the top toolbar
Follow the instructions to configure an existing ConfigMap or to use a new one.
![Screenshot that shows ContainerLogV2 enabled.](./media/container-insights-logging-v2/container-insights-v2-monitoring-settings-configured.png)
+## [CLI](#tab/configure-CLI)
+
+1. To configure via the CLI, use the corresponding [config file](./container-insights-cost-config.md#configuring-aks-data-collection-settings-using-azure-cli) and set the `enableContainerLogV2` field in the config file to true.
+
+
### Configure an existing ConfigMap This applies to the scenario where you have already enabled container insights for your AKS cluster and have [configured agent data collection settings](./container-insights-agent-config.md#configure-and-deploy-configmaps) using ConfigMap "_container-azm-ms-agentconfig.yaml_". If this ConfigMap doesn't yet have the `log_collection_settings.schema` field, you'll need to append the following section in this existing ConfigMap .yaml file:
This applies to the scenario where you have already enabled container insights f
>* The restart is a rolling restart for all ama-logs pods. It won't restart all of them at the same time. ## Multi-line logging in Container Insights (preview)
-Azure Monitor - Container insights now supports multiline logging. With this feature enabled, previously split container logs are stitched together and sent as single entries to the ContainerLogV2 table. Customers are able see container log lines upto to 64 KB (up from the existing 16 KB limit). If the stitched log line is larger than 64 KB, it gets truncated due to Log Analytics limits.
+Azure Monitor container insights now supports multiline logging. With this feature enabled, previously split container logs are stitched together and sent as single entries to the ContainerLogV2 table. Customers can see container log lines up to 64 KB (up from the existing 16 KB limit). If the stitched log line is larger than 64 KB, it gets truncated due to Log Analytics limits.
Additionally, the feature also adds support for .NET and Go stack traces, which appear as single entries instead of being split into multiple entries in ContainerLogV2 table. ### Pre-requisites Customers must [enable ContainerLogV2](./container-insights-logging-v2.md#enable-the-containerlogv2-schema) for multi-line logging to work.
-### How to enable - This is currently a preview feature
+### How to enable
Multi-line logging can be enabled by setting the *enable_multiline_logs* flag to "true" in [the config map](https://github.com/microsoft/Docker-Provider/blob/ci_prod/kubernetes/container-azm-ms-agentconfig.yaml#L49).

### Next steps for Multi-line logging
azure-monitor Container Insights Manage Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-manage-agent.md
Container insights uses a containerized version of the Log Analytics agent for L
Container insights uses a containerized version of the Log Analytics agent for Linux. When a new version of the agent is released, the agent is automatically upgraded on your managed Kubernetes clusters hosted on Azure Kubernetes Service (AKS) and Azure Arc-enabled Kubernetes.
-If the agent upgrade fails for a cluster hosted on AKS, this article also describes the process to manually upgrade the agent. To follow the versions released, see [Agent release announcements](https://github.com/microsoft/docker-provider/tree/ci_feature_prod).
+If the agent upgrade fails for a cluster hosted on AKS, this article also describes the process to manually upgrade the agent. To follow the versions released, see [Agent release announcements](https://aka.ms/ci-logs-agent-release-notes).
### Upgrade the agent on an AKS cluster
With the rise of Kubernetes and the OSS ecosystem, Container Insights migrate to
## Next steps If you experience issues when you upgrade the agent, review the [troubleshooting guide](container-insights-troubleshoot.md) for support.+
azure-monitor Container Insights Metric Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-metric-alerts.md
The following metrics have unique behavior characteristics:
- The `oomKilledContainerCount` metric is only sent when there are OOM killed containers.
- The `cpuExceededPercentage`, `memoryRssExceededPercentage`, and `memoryWorkingSetExceededPercentage` metrics are sent when the CPU, memory RSS, and memory working set values exceed the configured threshold. The default threshold is 95%. The `cpuThresholdViolated`, `memoryRssThresholdViolated`, and `memoryWorkingSetThresholdViolated` metrics are equal to 0 if the usage percentage is below the threshold and are equal to 1 if the usage percentage is above the threshold. These thresholds are exclusive of the alert condition threshold specified for the corresponding alert rule.
- The `pvUsageExceededPercentage` metric is sent when the persistent volume usage percentage exceeds the configured threshold. The default threshold is 60%. The `pvUsageThresholdViolated` metric is equal to 0 when the persistent volume usage percentage is below the threshold and is equal to 1 if the usage is above the threshold. This threshold is exclusive of the alert condition threshold specified for the corresponding alert rule.

**Prometheus only**
azure-monitor Container Insights Optout Hybrid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-optout-hybrid.md
+
+ Title: Disable Container insights on your hybrid Kubernetes cluster
+description: This article describes how you can stop monitoring of your hybrid Kubernetes cluster with Container insights.
+ Last updated : 08/21/2023
+# Disable Container insights on your hybrid Kubernetes cluster
+
+This article shows how to disable Container insights for the following Kubernetes environments:
+
+- AKS Engine on Azure and Azure Stack
+- OpenShift version 4 and higher
+- Azure Arc-enabled Kubernetes (preview)
+
+## How to stop monitoring using Helm
+
+The following steps apply to the following environments:
+
+- AKS Engine on Azure and Azure Stack
+- OpenShift version 4 and higher
+
+1. To identify the Container insights Helm chart release installed on your cluster, run the following Helm command.
+
+ ```
+ helm list
+ ```
+
+ The output resembles the following:
+
+ ```
+ NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
+ azmon-containers-release-1 default 3 2020-04-21 15:27:24.1201959 -0700 PDT deployed azuremonitor-containers-2.7.0 7.0.0-1
+ ```
+
+ *azmon-containers-release-1* represents the helm chart release for Container insights.
+
+2. To delete the chart release, run the following helm command.
+
+ `helm delete <releaseName>`
+
+ Example:
+
+ `helm delete azmon-containers-release-1`
+
+ This removes the release from the cluster. You can verify by running the `helm list` command:
+
+ ```
+ NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
+ ```
+
+The configuration change can take a few minutes to complete. Because Helm tracks your releases even after you've deleted them, you can audit a cluster's history, and even undelete a release with `helm rollback`.
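+
+For example, a quick sketch of auditing and restoring a release. Note that on Helm 3, history is kept for a deleted release only if it was removed with `helm uninstall --keep-history`:
+
+```
+helm history azmon-containers-release-1     # list past revisions, including the deletion
+helm rollback azmon-containers-release-1 2  # restore revision 2
+```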
+
+## How to stop monitoring on Azure Arc-enabled Kubernetes
+
+### Using PowerShell
+
+1. Download the script that removes the monitoring add-on from your cluster and save it to a local folder by using the following commands:
+
+ ```powershell
+ wget https://aka.ms/disable-monitoring-powershell-script -OutFile disable-monitoring.ps1
+ ```
+
+2. Configure the `$azureArcClusterResourceId` variable by setting the corresponding values for `subscriptionId`, `resourceGroupName`, and `clusterName` that represent the resource ID of your Azure Arc-enabled Kubernetes cluster resource.
+
+ ```powershell
+ $azureArcClusterResourceId = "/subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/Microsoft.Kubernetes/connectedClusters/<clusterName>"
+ ```
+
+3. Configure the `$kubeContext` variable with the **kube-context** of your cluster by running the command `kubectl config get-contexts`. If you want to use the current context, set the value to `""`.
+
+ ```powershell
+ $kubeContext = "<kubeContext name of your k8s cluster>"
+ ```
+
+4. Run the following command to stop monitoring the cluster.
+
+ ```powershell
+ .\disable-monitoring.ps1 -clusterResourceId $azureArcClusterResourceId -kubeContext $kubeContext
+ ```
+
+#### Using service principal
+The script *disable-monitoring.ps1* uses interactive device login. If you prefer non-interactive login, you can use an existing service principal or create a new one that has the required permissions as described in [Prerequisites](container-insights-enable-arc-enabled-clusters.md#prerequisites). To use a service principal, pass the `$servicePrincipalClientId`, `$servicePrincipalClientSecret`, and `$tenantId` parameters with values for the service principal you intend to use to the *disable-monitoring.ps1* script.
+
+```powershell
+$subscriptionId = "<subscription Id of the Azure Arc-connected cluster resource>"
+$servicePrincipal = New-AzADServicePrincipal -Role Contributor -Scope "/subscriptions/$subscriptionId"
+
+$servicePrincipalClientId = $servicePrincipal.ApplicationId.ToString()
+$servicePrincipalClientSecret = [System.Net.NetworkCredential]::new("", $servicePrincipal.Secret).Password
+$tenantId = (Get-AzSubscription -SubscriptionId $subscriptionId).TenantId
+```
+
+For example:
+
+```powershell
+.\disable-monitoring.ps1 -clusterResourceId $azureArcClusterResourceId -kubeContext $kubeContext -servicePrincipalClientId $servicePrincipalClientId -servicePrincipalClientSecret $servicePrincipalClientSecret -tenantId $tenantId
+```
+
+### Using bash
+
+1. Download and save the script to a local folder that configures your cluster with the monitoring add-on using the following commands:
+
+ ```bash
+ curl -o disable-monitoring.sh -L https://aka.ms/disable-monitoring-bash-script
+ ```
+
+2. Configure the `AZUREARCCLUSTERRESOURCEID` variable by setting the corresponding values for `subscriptionId`, `resourceGroupName`, and `clusterName` that represent the resource ID of your Azure Arc-enabled Kubernetes cluster resource.
+
+ ```bash
+ export AZUREARCCLUSTERRESOURCEID="/subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/Microsoft.Kubernetes/connectedClusters/<clusterName>"
+ ```
+
+3. Configure the `KUBECONTEXT` variable with the **kube-context** of your cluster by running the command `kubectl config get-contexts`.
+
+ ```bash
+ export KUBECONTEXT="<kubeContext name of your k8s cluster>"
+ ```
+
+4. To stop monitoring your cluster, run one of the following commands based on your deployment scenario.
+
+ Run the following command to stop monitoring the cluster using the current context.
+
+ ```bash
+ bash disable-monitoring.sh --resource-id $AZUREARCCLUSTERRESOURCEID
+ ```
+
+    Run the following command to stop monitoring the cluster by specifying a kube context:
+
+ ```bash
+ bash disable-monitoring.sh --resource-id $AZUREARCCLUSTERRESOURCEID --kube-context $KUBECONTEXT
+ ```
+
+#### Using service principal
+The bash script *disable-monitoring.sh* uses interactive device login. If you prefer non-interactive login, you can use an existing service principal or create a new one that has the required permissions as described in [Prerequisites](container-insights-enable-arc-enabled-clusters.md#prerequisites). To use a service principal, pass the `--client-id`, `--client-secret`, and `--tenant-id` parameters with values for the service principal you intend to use to the *disable-monitoring.sh* bash script.
+
+```bash
+SUBSCRIPTIONID="<subscription Id of the Azure Arc-connected cluster resource>"
+SERVICEPRINCIPAL=$(az ad sp create-for-rbac --role="Contributor" --scopes="/subscriptions/${SUBSCRIPTIONID}")
+SERVICEPRINCIPALCLIENTID=$(echo $SERVICEPRINCIPAL | jq -r '.appId')
+
+SERVICEPRINCIPALCLIENTSECRET=$(echo $SERVICEPRINCIPAL | jq -r '.password')
+TENANTID=$(echo $SERVICEPRINCIPAL | jq -r '.tenant')
+```
+
+For example:
+
+```bash
+bash disable-monitoring.sh --resource-id $AZUREARCCLUSTERRESOURCEID --kube-context $KUBECONTEXT --client-id $SERVICEPRINCIPALCLIENTID --client-secret $SERVICEPRINCIPALCLIENTSECRET --tenant-id $TENANTID
+```
+
+## Next steps
+
+If the Log Analytics workspace was created only to support monitoring the cluster and it's no longer needed, you have to manually delete it. If you are not familiar with how to delete a workspace, see [Delete an Azure Log Analytics workspace](../logs/delete-workspace.md).
azure-monitor Container Insights Optout https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-optout.md
Title: Stop monitoring your Azure Kubernetes Service cluster | Microsoft Docs
+ Title: Disable Container insights on your Azure Kubernetes Service (AKS) cluster
description: This article describes how you can discontinue monitoring of your Azure AKS cluster with Container insights. Previously updated : 05/24/2022 Last updated : 08/21/2023 ms.devlang: azurecli
-# Stop monitoring your Azure Kubernetes Service cluster with Container insights
+# Disable Container insights on your Azure Kubernetes Service (AKS) cluster
After you enable monitoring of your Azure Kubernetes Service (AKS) cluster, you can stop monitoring the cluster if you decide you no longer want to monitor it. This article shows you how to do this task by using the Azure CLI or the provided Azure Resource Manager templates (ARM templates).
azure-monitor Container Insights Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-overview.md
Title: Overview of Container insights in Azure Monitor
description: This article describes Container insights, which monitors the AKS Container insights solution, and the value it delivers by monitoring the health of your AKS clusters and Container Instances in Azure. Previously updated : 09/28/2022 Last updated : 08/14/2023 # Container insights overview
-Container insights is a feature designed to monitor the performance of container workloads deployed to the cloud. It gives you performance visibility by collecting memory and processor metrics from controllers, nodes, and containers that are available in Kubernetes through the Metrics API. After you enable monitoring from Kubernetes clusters, metrics and Container logs are automatically collected for you through a containerized version of the Log Analytics agent for Linux. Metrics are sent to the [metrics database in Azure Monitor](../essentials/data-platform-metrics.md). Log data is sent to your [Log Analytics workspace](../logs/log-analytics-workspace-overview.md).
+Container insights is a feature of Azure Monitor that monitors the performance and health of container workloads deployed to [Azure](../../aks/intro-kubernetes.md) or that are managed by [Azure Arc-enabled Kubernetes](../../azure-arc/kubernetes/overview.md). It collects memory and processor metrics from controllers, nodes, and containers in addition to gathering container logs. You can analyze the collected data for the different components in your cluster with a collection of [views](container-insights-analyze.md) and pre-built [workbooks](container-insights-reports.md).
+The following video provides an intermediate-level deep dive to help you learn about monitoring your AKS cluster with Container insights. The video refers to *Azure Monitor for Containers*, which is the previous name for *Container insights*.
+> [!VIDEO https://www.youtube.com/embed/XEdwGvS2AwA]
## Features of Container insights
-Container insights deliver a comprehensive monitoring experience to understand the performance and health of your Kubernetes cluster and container workloads. You can:
+Container insights includes the following features to help you understand the performance and health of your Kubernetes cluster and container workloads:
-- Identify resource bottlenecks by identifying AKS containers running on the node and their processor and memory utilization.-- Identify processor and memory utilization of container groups and their containers hosted in Azure Container Instances.
+- Identify resource bottlenecks by identifying containers running on each node and their processor and memory utilization.
+- Identify processor and memory utilization of container groups and their containers hosted in container instances.
- View the controller's or pod's overall performance by identifying where the container resides in a controller or a pod.
- Review the resource utilization of workloads running on the host that are unrelated to the standard processes that support the pod.
- Identify capacity needs and determine the maximum load that the cluster can sustain by understanding the behavior of the cluster under average and heaviest loads.
+- Access live container logs and metrics generated by the container engine to help with troubleshooting issues in real time.
- Configure alerts to proactively notify you or record when CPU and memory utilization on nodes or containers exceed your thresholds, or when a health state change occurs in the cluster at the infrastructure or nodes health rollup.
-- Integrate with [Prometheus](https://aka.ms/azureprometheus-promio-docs) to view application and workload metrics it collects from nodes and Kubernetes by using [queries](container-insights-log-query.md) to create custom alerts and dashboards and perform detailed analysis.
-The following video provides an intermediate-level deep dive to help you learn about monitoring your AKS cluster with Container insights. The video refers to *Azure Monitor for Containers*, which is the previous name for *Container insights*.
+## Access Container insights
-> [!VIDEO https://www.youtube.com/embed/XEdwGvS2AwA]
+Access Container insights in the Azure portal from **Containers** in the **Monitor** menu or directly from the selected AKS cluster by selecting **Insights**. The Azure Monitor menu gives you the global perspective of all the containers that are deployed and monitored. This information allows you to search and filter across your subscriptions and resource groups. You can then drill into Container insights from the selected container. Access Container insights for a particular AKS container directly from the AKS page.
-## Access Container insights
+## Data collected
+Container insights sends data to [Logs](../logs/data-platform-logs.md) and [Metrics](../essentials/data-platform-metrics.md) where you can analyze it using different features of Azure Monitor. It works with other Azure services such as [Azure Monitor managed service for Prometheus](../essentials/prometheus-metrics-overview.md) and [Managed Grafana](../../managed-grafan#monitoring-data).
-Access Container insights in the Azure portal from **Containers** in the **Monitor** menu or directly from the selected AKS cluster by selecting **Insights**. The Azure Monitor menu gives you the global perspective of all the containers that are deployed and monitored. This information allows you to search and filter across your subscriptions and resource groups. You can then drill into Container insights from the selected container. Access Container insights for a particular AKS container directly from the AKS page.
## Supported configurations
+Container insights supports the following configurations:
-- Managed Kubernetes clusters hosted on [Azure Kubernetes Service (AKS)](../../aks/intro-kubernetes.md).-- Self-managed Kubernetes clusters hosted on Azure using [AKS Engine](https://github.com/Azure/aks-engine).
+- [Azure Kubernetes Service (AKS)](../../aks/intro-kubernetes.md).
- [Azure Container Instances](../../container-instances/container-instances-overview.md). - Self-managed Kubernetes clusters hosted on [Azure Stack](/azure-stack/user/azure-stack-kubernetes-aks-engine-overview) or on-premises. - [Azure Arc-enabled Kubernetes](../../azure-arc/kubernetes/overview.md).
Container insights supports clusters running the Linux and Windows Server 2019 o
>[!NOTE]
> Container insights support for Windows Server 2022 operating system is in public preview.

## Next steps

To begin monitoring your Kubernetes cluster, review [Enable Container insights](container-insights-onboard.md) to understand the requirements and available methods to enable monitoring.
azure-monitor Container Insights Syslog https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-syslog.md
Container Insights offers the ability to collect Syslog events from Linux nodes
- You need to have managed identity authentication enabled on your cluster. To enable, see [migrate your AKS cluster to managed identity authentication](container-insights-enable-existing-clusters.md?tabs=azure-cli#migrate-to-managed-identity-authentication). Note: Enabling Managed Identity will create a new Data Collection Rule (DCR) named `MSCI-<WorkspaceRegion>-<ClusterName>`.
- Minimum versions of Azure components
   - **Azure CLI**: Minimum version required for Azure CLI is [2.45.0 (link to release notes)](/cli/azure/release-notes-azure-cli#february-07-2023). See [How to update the Azure CLI](/cli/azure/update-azure-cli) for upgrade instructions.
- - **Azure CLI AKS-Preview Extension**: Minimum version required for AKS-Preview Azure CLI extension is [ 0.5.125 (link to release notes)](https://github.com/Azure/azure-cli-extensions/blob/main/src/aks-preview/HISTORY.rst#05125). See [How to update extensions](/cli/azure/azure-cli-extensions-overview#how-to-update-extensions) for upgrade guidance.
- - **Linux image version**: Minimum version for AKS node linux image is 2022.11.01. See [Upgrade Azure Kubernetes Service (AKS) node images](https://learn.microsoft.com/azure/aks/node-image-upgrade) for upgrade help.
+ - **Azure CLI AKS-Preview Extension**: Minimum version required for AKS-Preview Azure CLI extension is [0.5.125 (link to release notes)](https://github.com/Azure/azure-cli-extensions/blob/main/src/aks-preview/HISTORY.rst#05125). See [How to update extensions](/cli/azure/azure-cli-extensions-overview#how-to-update-extensions) for upgrade guidance.
+ - **Linux image version**: Minimum version for AKS node linux image is 2022.11.01. See [Upgrade Azure Kubernetes Service (AKS) node images](/azure/aks/node-image-upgrade) for upgrade help.
## How to enable Syslog
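
A hedged sketch of enabling Syslog collection with the Azure CLI (`--enable-syslog` is the preview flag at the time of writing; resource names are placeholders):

```bash
az aks enable-addons -a monitoring \
  --enable-syslog \
  -g <resourceGroupName> -n <clusterName>
```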
Select the minimum log level for each facility that you want to collect.
## Next steps

Once setup is complete, customers can start sending Syslog data to the tools of their choice:
-- Send Syslog to Microsoft Sentinel: https://learn.microsoft.com/azure/sentinel/connect-syslog
-- Export data from Log Analytics: https://learn.microsoft.com/azure/azure-monitor/logs/logs-data-export?tabs=portal
+- [Send Syslog to Microsoft Sentinel](/azure/sentinel/connect-syslog)
+- [Export data from Log Analytics](/azure/azure-monitor/logs/logs-data-export?tabs=portal)
Read more

- [Syslog record properties](/azure/azure-monitor/reference/tables/syslog)
azure-monitor Monitor Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/monitor-kubernetes.md
Title: Monitor Azure Kubernetes Service (AKS) with Azure Monitor
-description: Describes how to use Azure Monitor monitor the health and performance of AKS clusters and their workloads.
+ Title: Monitor Kubernetes clusters using Azure services and cloud native tools
+description: Describes how to monitor the health and performance of the different layers of your Kubernetes environment using Azure Monitor and cloud native services in Azure.
- Previously updated : 03/08/2023 Last updated : 08/17/2023
-# Monitor Azure Kubernetes Service (AKS) with Azure Monitor
+# Monitor Kubernetes clusters using Azure services and cloud native tools
-This article describes how to use Azure Monitor to monitor the health and performance of [Azure Kubernetes Service (AKS)](../../aks/intro-kubernetes.md). It includes collection of telemetry critical for monitoring, analysis and visualization of collected data to identify trends, and how to configure alerting to be proactively notified of critical issues.
+This article describes how to monitor the health and performance of your Kubernetes clusters and the workloads running on them using Azure Monitor and related Azure and cloud native services. This includes clusters running in Azure Kubernetes Service (AKS) or other clouds such as [AWS](https://aws.amazon.com/kubernetes/) and [GCP](https://cloud.google.com/kubernetes-engine). Different sets of guidance are provided for the different roles that typically manage unique components that make up a Kubernetes environment.
-The [Cloud Monitoring Guide](/azure/cloud-adoption-framework/manage/monitor/) defines the [primary monitoring objectives](/azure/cloud-adoption-framework/strategy/monitoring-strategy#formulate-monitoring-requirements) you should focus on for your Azure resources. This scenario focuses on health and status monitoring using Azure Monitor.
-## Scope of the scenario
+> [!IMPORTANT]
+> This article provides complete guidance on monitoring the different layers of your Kubernetes environment based on Azure Kubernetes Service (AKS) or Kubernetes clusters in other clouds. If you're just getting started with AKS or Azure Monitor, see [Monitoring AKS](../../aks/monitor-aks.md) for basic information for getting started monitoring an AKS cluster.
+
+## Layers and roles of Kubernetes environment
+
+Following is an illustration of a common model of a typical Kubernetes environment, starting from the infrastructure layer up through applications. Each layer has distinct monitoring requirements that are addressed by different services and typically managed by different roles in the organization.
+
+Responsibility for the different layers of a Kubernetes environment and the applications that depend on it is typically shared by multiple roles. Depending on the size of your organization, these roles may be performed by different people or even different teams. The following table describes the different roles, while the sections below provide the monitoring scenarios that each will typically encounter.
+
+| Roles | Description |
+|:|:|
+| [Developer](#developer) | Develops and maintains the application running on the cluster. Responsible for application-specific traffic, including application performance and failures. Maintains reliability of the application according to SLAs. |
+| [Platform engineer](#platform-engineer) | Responsible for the Kubernetes cluster. Provisions and maintains the platform used by developers. |
+| [Network engineer](#network-engineer) | Responsible for traffic between workloads and any ingress/egress with the cluster. Analyzes network traffic and performs threat analysis. |
+
+## Selection of monitoring tools
+
+Azure provides a complete set of services based on [Azure Monitor](../overview.md) for monitoring the health and performance of different layers of your Kubernetes infrastructure and the applications that depend on it. These services work in conjunction with each other to provide a complete monitoring solution and are recommended both for [AKS](../../aks/intro-kubernetes.md) and your Kubernetes clusters running in other clouds. You may have an existing investment in cloud native technologies endorsed by the [Cloud Native Computing Foundation](https://www.cncf.io/), in which case you may choose to integrate Azure tools into your existing environment.
+
+Your choice of which tools to deploy and their configuration will depend on the requirements of your particular environment. For example, you may use the managed offerings in Azure for Prometheus and Grafana, or you may choose to use your existing installation of these tools with your Kubernetes clusters in Azure. Your organization may also use alternative tools to Container insights to collect and analyze Kubernetes logs, such as Splunk or Datadog.
+
+> [!IMPORTANT]
+> Monitoring a complex environment such as Kubernetes involves collecting a significant amount of telemetry, much of which incurs a cost. You should collect just enough data to meet your requirements. This includes the amount of data collected, the frequency of collection, and the retention period. If you're very cost conscious, you may choose to implement a subset of the full functionality in order to reduce your monitoring spend.
+
+## Network engineer
+The *network engineer* is responsible for traffic between workloads and any ingress/egress with the cluster. They analyze network traffic and perform threat analysis.
+
+### Azure services for the network engineer
+
+The following table lists the services that are commonly used by the network engineer to monitor the health and performance of the network supporting the Kubernetes cluster.
+
+| Service | Description |
+|:|:|
+| [Network Watcher](../../network-watcher/network-watcher-monitoring-overview.md) | Suite of tools in Azure to monitor the virtual networks used by your Kubernetes clusters and diagnose detected issues. |
+| [Network insights](../../network-watcher/network-insights-overview.md) | Feature of Azure Monitor that includes a visual representation of the performance and health of different network components and provides access to the network monitoring tools that are part of Network Watcher. |
+
+[Network insights](../../network-watcher/network-insights-overview.md) is enabled by default and requires no configuration. Network Watcher is also typically [enabled by default in each Azure region](../../network-watcher/network-watcher-create.md).
+
+### Monitor level 1 - Network
+
+Following are common scenarios for monitoring the network.
+
+- Create [flow logs](../../network-watcher/network-watcher-nsg-flow-logging-overview.md) to log information about the IP traffic flowing through network security groups used by your cluster and then use [traffic analytics](../../network-watcher/traffic-analytics.md) to analyze and provide insights on this data (see the sketch after this list). You'll most likely use the same Log Analytics workspace for traffic analytics that you use for Container insights and your control plane logs.
+- Using [traffic analytics](../../network-watcher/traffic-analytics.md), you can determine if any traffic is flowing either to or from any unexpected ports used by the cluster and also if any traffic is flowing over public IPs that shouldn't be exposed. Use this information to determine whether your network rules need modification.
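+
+A hedged sketch of the flow log configuration from the first bullet, using the Azure CLI (parameter names follow `az network watcher flow-log create`; all resource names are placeholders):
+
+```bash
+az network watcher flow-log create \
+  --resource-group <networkWatcherResourceGroup> \
+  --name <flowLogName> \
+  --nsg <nsgName> \
+  --storage-account <storageAccountId> \
+  --traffic-analytics true \
+  --workspace <logAnalyticsWorkspaceId> \
+  --location <region>
+```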
+
+## Platform engineer
+
+The *platform engineer*, also known as the cluster administrator, is responsible for the Kubernetes cluster itself. They provision and maintain the platform used by developers. They need to understand the health of the cluster and its components, and be able to troubleshoot any detected issues. They also need to understand the cost to operate the cluster and potentially to be able to allocate costs to different teams.
+
+Large organizations may also have a *fleet architect*, a role similar to the platform engineer but responsible for multiple clusters. They need visibility across the entire environment and must perform administrative tasks at scale. At-scale recommendations for the fleet architect are included in the guidance below.
+
+### Azure services for platform engineer
+
+The following table lists the Azure services for the platform engineer to monitor the health and performance of the Kubernetes cluster and its components.
+
+| Service | Description |
+|:|:|
+| [Container Insights](container-insights-overview.md) | Azure service for AKS and Azure Arc-enabled Kubernetes clusters that uses a containerized version of the [Azure Monitor agent](../agents/agents-overview.md) to collect stdout/stderr logs, performance metrics, and Kubernetes events from each node in your cluster. It also collects metrics from the Kubernetes control plane and stores them in the workspace. You can view the data in the Azure portal or query it using [Log Analytics](../logs/log-analytics-overview.md). |
+| [Azure Monitor managed service for Prometheus](../essentials/prometheus-metrics-overview.md) | [Prometheus](https://prometheus.io) is a cloud-native metrics solution from the Cloud Native Computing Foundation and the most common tool used for collecting and analyzing metric data from Kubernetes clusters. Azure Monitor managed service for Prometheus is a fully managed solution that's compatible with the Prometheus query language (PromQL) and Prometheus alerts and integrates with Azure Managed Grafana for visualization. This service supports your investment in open source tools without the complexity of managing your own Prometheus environment. |
+| [Azure Arc-enabled Kubernetes](container-insights-enable-arc-enabled-clusters.md) | Allows you to attach to Kubernetes clusters running in other clouds so that you can manage and configure them in Azure. With the Arc agent installed, you can monitor AKS and hybrid clusters together using the same methods and tools, including Container insights and Prometheus. |
+| [Azure Managed Grafana](../../managed-grafan) | Fully managed implementation of [Grafana](https://grafana.com/), which is an open-source data visualization platform commonly used to present Prometheus and other data. Multiple predefined Grafana dashboards are available for monitoring Kubernetes and full-stack troubleshooting. |
+
+### Configure monitoring for platform engineer
+
+The sections below identify the steps for complete monitoring of your Kubernetes environment using the Azure services in the above table. Functionality and integration options are provided for each to help you determine where you may need to modify this configuration to meet your particular requirements.
-This article does *not* include information on the following scenarios:
-- Monitoring of Kubernetes clusters outside of Azure except for referring to existing content for Azure Arc-enabled Kubernetes-- Monitoring of AKS with tools other than Azure Monitor except to fill gaps in Azure Monitor and Container Insights
+#### Enable scraping of Prometheus metrics
+
+> [!IMPORTANT]
+> To use Azure Monitor managed service for Prometheus, you need to have an [Azure Monitor workspace](../essentials/azure-monitor-workspace-overview.md). For information on design considerations for a workspace configuration, see [Azure Monitor workspace architecture](../essentials/azure-monitor-workspace-overview.md#azure-monitor-workspace-architecture).
-> [!NOTE]
-> Azure Monitor was designed to monitor the availability and performance of cloud resources. While the operational data stored in Azure Monitor may be useful for investigating security incidents, other services in Azure were designed to monitor security. Security monitoring for AKS is done with [Microsoft Sentinel](../../sentinel/overview.md) and [Microsoft Defender for Cloud](../../defender-for-cloud/defender-for-cloud-introduction.md). See [Monitor virtual machines with Azure Monitor - Security monitoring](../vm/monitor-virtual-machine-security.md) for a description of the security monitoring tools in Azure and their relationship to Azure Monitor.
->
-> For information on using the security services to monitor AKS, see [Microsoft Defender for Kubernetes - the benefits and features](../../defender-for-cloud/defender-for-kubernetes-introduction.md) and [Connect Azure Kubernetes Service (AKS) diagnostics logs to Microsoft Sentinel](../../sentinel/data-connectors/azure-kubernetes-service-aks.md).
+Enable scraping of Prometheus metrics by Azure Monitor managed service for Prometheus from your cluster using one of the following methods:
-## Container Insights
+- Select the option **Enable Prometheus metrics** when you [create an AKS cluster](../../aks/learn/quick-kubernetes-deploy-portal.md).
+- Select the option **Enable Prometheus metrics** when you enable Container insights on an existing [AKS cluster](container-insights-enable-aks.md) or [Azure Arc-enabled Kubernetes cluster](container-insights-enable-arc-enabled-clusters.md).
+- Enable for an existing [AKS cluster](../essentials/prometheus-metrics-enable.md) or [Arc-enabled Kubernetes cluster (preview)](../essentials/prometheus-metrics-from-arc-enabled-cluster.md), as sketched below.
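+
+As a hedged sketch, enabling the metrics add-on on an existing AKS cluster from the CLI looks like the following (flag names follow the linked enablement article; resource names are placeholders):
+
+```bash
+az aks update --enable-azure-monitor-metrics \
+  -g <resourceGroupName> -n <clusterName> \
+  --azure-monitor-workspace-resource-id <azureMonitorWorkspaceResourceId>
+```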
-AKS generates [platform metrics and resource logs](../../aks/monitor-aks-reference.md) that you can use to monitor basic health and performance. Enable [Container Insights](container-insights-overview.md) to expand on this monitoring. Container Insights is a feature in Azure Monitor that monitors the health and performance of managed Kubernetes clusters hosted on AKS and provides interactive views and workbooks that analyze collected data for a variety of monitoring scenarios.
-[Prometheus](https://aka.ms/azureprometheus-promio) and [Grafana](https://aka.ms/azureprometheus-promio-grafana) are popular CNCF-backed open-source tools for Kubernetes monitoring. AKS exposes many metrics in Prometheus format, which makes Prometheus a popular choice for monitoring. [Container Insights](container-insights-overview.md) has native integration with AKS, like collecting critical metrics and logs, alerting on identified issues, and providing visualization with workbooks. It also collects certain Prometheus metrics. Many native Azure Monitor insights are built on top of Prometheus metrics. Container Insights complements and completes E2E monitoring of AKS, including log collection, which Prometheus as stand-alone tool doesnΓÇÖt provide. You can use Prometheus integration and Azure Monitor together for E2E monitoring.
+If you already have a Prometheus environment that you want to use for your AKS clusters, enable Azure Monitor managed service for Prometheus and use remote-write to send data to your existing Prometheus environment. You can also [use remote-write to send data from your existing self-managed Prometheus environment to Azure Monitor managed service for Prometheus](../essentials/prometheus-remote-write.md).
-To learn more about using Container Insights, see the [Container Insights overview](container-insights-overview.md). To learn more about features and monitoring scenarios of Container Insights, see [Monitor layers of AKS with Container Insights](#monitor-layers-of-aks-with-container-insights).
+See [Default Prometheus metrics configuration in Azure Monitor](../essentials/prometheus-metrics-scrape-default.md) for details on the metrics that are collected by default and their frequency of collection. If you want to customize the configuration, see [Customize scraping of Prometheus metrics in Azure Monitor managed service for Prometheus](../essentials/prometheus-metrics-scrape-configuration.md).
-## Configure monitoring
+#### Enable Grafana for analysis of Prometheus data
-The following sections describe the steps required to configure full monitoring of your AKS cluster using Azure Monitor.
+[Create an instance of Managed Grafana](../../managed-grafan)
-### Create Log Analytics workspace
+If you have an existing Grafana environment, then you can continue to use it and add Azure Monitor managed service for [Prometheus as a data source](https://grafana.com/docs/grafana/latest/datasources/prometheus/). You can also [add the Azure Monitor data source to Grafana](https://grafana.com/docs/grafana/latest/datasources/azure-monitor/) to use data collected by Container insights in custom Grafana dashboards. Perform this configuration if you want to focus on Grafana dashboards rather than using the Container insights views and reports.
-You need at least one Log Analytics workspace to support Container Insights and to collect and analyze other telemetry about your AKS cluster. There's no cost for the workspace, but you do incur ingestion and retention costs when you collect data. See [Azure Monitor Logs pricing details](../logs/cost-logs.md) for details.
+A variety of prebuilt dashboards are available for monitoring Kubernetes clusters, including several that present similar information to Container insights views. [Search the available Grafana dashboard templates](https://grafana.com/grafan).
-If you're just getting started with Azure Monitor, we recommend starting with a single workspace and creating additional workspaces as your requirements evolve. Many environments will use a single workspace for all the Azure resources they monitor. You can even share a workspace used by [Microsoft Defender for Cloud and Microsoft Sentinel](../vm/monitor-virtual-machine-security.md), although it's common to segregate availability and performance telemetry from security data.
-For information on design considerations for a workspace configuration, see [Designing your Azure Monitor Logs deployment](../logs/workspace-design.md).
+#### Enable Container Insights for collection of logs
-### Enable Container Insights
+When you enable Container Insights for your Kubernetes cluster, it deploys a containerized version of the [Azure Monitor agent](../agents/log-analytics-agent.md) that sends data to a Log Analytics workspace in Azure Monitor. Container insights collects container stdout/stderr, infrastructure logs, and performance data. All log data is stored in a Log Analytics workspace where it can be analyzed using [Kusto Query Language (KQL)](../logs/log-query-overview.md).
-When you enable Container Insights for your AKS cluster, it deploys a containerized version of the [Log Analytics agent](../agents/../agents/log-analytics-agent.md) that sends data to Azure Monitor. For prerequisites and configuration options, see [Enable Container Insights](container-insights-onboard.md).
+See [Enable Container insights](../containers/container-insights-onboard.md) for prerequisites and configuration options for onboarding your Kubernetes clusters. [Onboard using Azure Policy](container-insights-enable-aks-policy.md) to ensure that all clusters retain a consistent configuration.
-### Configure collection from Prometheus
+Once Container insights is enabled for a cluster, perform the following actions to optimize your installation.
-Container Insights allows you to send Prometheus metrics to [Azure Monitor managed service for Prometheus](../essentials/prometheus-metrics-overview.md) or to your Log Analytics workspace without requiring a local Prometheus server. You can analyze this data using Azure Monitor features along with other data collected by Container Insights. For details on this configuration, see [Collect Prometheus metrics with Container Insights](container-insights-prometheus.md).
+- To improve your query experience with data collected by Container insights and to reduce collection costs, [enable the ContainerLogV2 schema](container-insights-logging-v2.md) for each cluster. If you only use logs for occasional troubleshooting, then consider configuring this table as [basic logs](../logs/basic-logs-configure.md) (a CLI sketch follows this list).
+- Reduce your cost for Container insights data ingestion by reducing the amount of data that's collected. See [Enable cost optimization settings in Container insights (preview)](../containers/container-insights-cost-config.md) for details.
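+
+A hedged CLI sketch of switching the ContainerLogV2 table to the Basic plan (command per the Log Analytics table reference; names are placeholders):
+
+```bash
+az monitor log-analytics workspace table update \
+  --resource-group <resourceGroupName> \
+  --workspace-name <workspaceName> \
+  --name ContainerLogV2 \
+  --plan Basic
+```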
-### Collect resource logs
+If you have an existing solution for collection of logs, then follow the guidance for that tool, or enable Container insights and use the [data export feature of Log Analytics workspace](../logs/logs-data-export.md) to send data to [Azure Event Hubs](../../event-hubs/event-hubs-about.md) and forward it to an alternate system.
-The logs for AKS control plane components are implemented in Azure as [resource logs](../essentials/resource-logs.md). Container Insights doesn't use these logs, so you need to create your own log queries to view and analyze them. For details on log structure and queries, see [How to query logs from Container Insights](container-insights-log-query.md#resource-logs).
-You need to create a diagnostic setting to collect resource logs. You can create multiple diagnostic settings to send different sets of logs to different locations. To create diagnostic settings for your AKS cluster, see [Create diagnostic settings to send platform logs and metrics to different destinations](../essentials/diagnostic-settings.md).
+#### Collect control plane logs for AKS clusters
-There's a cost for sending resource logs to a workspace, so you should only collect those log categories that you intend to use. Start by collecting a minimal number of categories and then modify the diagnostic setting to collect additional categories as your needs increase and as you understand your associated costs. You can send logs to an Azure storage account to reduce costs if you need to retain the information. For a description of the categories that are available for AKS, see [Resource logs](../../aks/monitor-aks-reference.md#resource-logs). For details on the cost of ingesting and retaining log data, see [Azure Monitor Logs pricing details](../logs/cost-logs.md).
+The logs for AKS control plane components are implemented in Azure as [resource logs](../essentials/resource-logs.md). Container Insights doesn't use these logs, so you need to create your own log queries to view and analyze them. For details on log structure and queries, see [How to query logs from Container Insights](../../aks/monitor-aks.md#resource-logs).
+
+[Create a diagnostic setting](../../aks/monitor-aks.md#resource-logs) for each AKS cluster to send resource logs to a Log Analytics workspace. Use [Azure Policy](../essentials/diagnostic-settings-policy.md) to ensure consistent configuration across multiple clusters.
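+
+A hedged sketch of creating such a diagnostic setting with the Azure CLI (the categories here mirror the recommendations in the table below; resource names are placeholders):
+
+```bash
+az monitor diagnostic-settings create \
+  --name aks-control-plane-logs \
+  --resource <aksClusterResourceId> \
+  --workspace <logAnalyticsWorkspaceResourceId> \
+  --logs '[{"category": "kube-apiserver", "enabled": true},
+           {"category": "kube-audit-admin", "enabled": true}]'
+```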
+
+There's a cost for sending resource logs to a workspace, so you should only collect those log categories that you intend to use. For a description of the categories that are available for AKS, see [Resource logs](../../aks/monitor-aks-reference.md#resource-logs). Start by collecting a minimal number of categories and then modify the diagnostic setting to collect additional categories as your needs increase and as you understand your associated costs. You can send logs to an Azure storage account to reduce costs if you need to retain the information for compliance reasons. For details on the cost of ingesting and retaining log data, see [Azure Monitor Logs pricing details](../logs/cost-logs.md).
+
+If you're unsure which resource logs to initially enable, use the following recommendations, which are based on the most common customer requirements. You can enable other categories later if you need to.
-If you're unsure which resource logs to initially enable, use the following recommendations:
| Category | Enable? | Destination |
|:|:|:|
-| cluster-autoscaler | Enable if autoscale is enabled | Log Analytics workspace |
-| guard | Enable if Azure Active Directory is enabled | Log Analytics workspace |
| kube-apiserver | Enable | Log Analytics workspace |
| kube-audit | Enable | Azure storage. This keeps costs to a minimum yet retains the audit logs if they're required by an auditor. |
| kube-audit-admin | Enable | Log Analytics workspace |
| kube-controller-manager | Enable | Log Analytics workspace |
| kube-scheduler | Disable | |
-| AllMetrics | Enable | Log Analytics workspace |
-
-The recommendations are based on the most common customer requirements. You can enable other categories later if you need to.
-
-## Access Azure Monitor features
-
-Access Azure Monitor features for all AKS clusters in your subscription from the **Monitoring** menu in the Azure portal, or for a single AKS cluster from the **Monitor** section of the **Kubernetes services** menu. The following image shows the **Monitoring** menu for your AKS cluster:
-
+| cluster-autoscaler | Enable if autoscale is enabled | Log Analytics workspace |
+| guard | Enable if Azure Active Directory is enabled | Log Analytics workspace |
+| AllMetrics | Disable since metrics are collected in Managed Prometheus | Log Analytics workspace |
-| Menu option | Description |
-|:|:|
-| Insights | Opens Container Insights for the current cluster. Select **Containers** from the **Monitor** menu to open Container Insights for all clusters. |
-| Alerts | View alerts for the current cluster. |
-| Metrics | Open metrics explorer with the scope set to the current cluster. |
-| Diagnostic settings | Create diagnostic settings for the cluster to collect resource logs. |
-| Advisor | Recommendations for the current cluster from Azure Advisor. |
-| Logs | Open Log Analytics with the scope set to the current cluster to analyze log data and access prebuilt queries. |
-| Workbooks | Open workbook gallery for Kubernetes service. |
-## Monitor layers of AKS with Container Insights
+If you have an existing solution for collection of logs, either follow the guidance for that tool, or enable Container insights and use the [data export feature of Log Analytics workspace](../logs/logs-data-export.md) to send data to Azure Event Hubs and forward it to an alternate system.
-Your monitoring approach should be based on your unique workload requirements, and factors such as scale, topology, organizational roles, and multi-cluster tenancy. This section presents a common bottoms-up approach, starting from infrastructure up through applications. Each layer has distinct monitoring requirements.
+#### Collect Activity log for AKS clusters
+Configuration changes to your AKS clusters are stored in the [Activity log](../essentials/activity-log.md). [Create a diagnostic setting to send this data to your Log Analytics workspace](../essentials/activity-log.md#send-to-log-analytics-workspace) to analyze it with other monitoring data. There's no cost for this data collection, and you can analyze or alert on the data using Log Analytics.
-### Level 1 - Cluster level components
+### Monitor level 2 - Cluster level components
-The cluster level includes the following component:
+The cluster level includes the following components:
| Component | Monitoring requirements |
|:|:|
| Node | Understand the readiness status and performance of CPU, memory, disk and IP usage for each node and proactively monitor their usage trends before deploying any workloads. |
-Use existing views and reports in Container Insights to monitor cluster level components.
+Following are common scenarios for monitoring the cluster level components.
+**Container insights**<br>
- Use the **Cluster** view to see the performance of the nodes in your cluster, including CPU and memory utilization.
- Use the **Nodes** view to see the health of each node and the health and performance of the pods running on them. For more information on analyzing node health and performance, see [Monitor your Kubernetes cluster performance with Container Insights](container-insights-analyze.md).
- Under **Reports**, use the **Node Monitoring** workbooks to analyze disk capacity, disk IO, and GPU usage. For more information about these workbooks, see [Node Monitoring workbooks](container-insights-reports.md#node-monitoring-workbooks).
+- Under **Monitoring**, select **Workbooks**, then **Subnet IP Usage** to see the IP allocation and assignment on each node for a selected time-range.
- :::image type="content" source="media/monitor-kubernetes/container-insights-cluster-view.png" alt-text="Screenshot of Container Insights cluster view." lightbox="media/monitor-kubernetes/container-insights-cluster-view.png":::
+**Network observability (east-west traffic)**
+- For AKS clusters, use the [Network Observability add-on for AKS (preview)](https://aka.ms/NetObsAddonDoc) to monitor and observe access between services in the cluster (east-west traffic).
-- Under **Monitoring**, you can select **Workbooks**, then **Subnet IP Usage** to see the IP allocation and assignment on each node for a selected time-range.
+**Grafana dashboards**<br>
+- Multiple [Kubernetes dashboards](https://grafana.com/grafana/dashboards/?search=kubernetes) are available that visualize the performance and health of your nodes based on data stored in Prometheus.
+- Use Grafana dashboards with [Prometheus metric values](../essentials/prometheus-metrics-scrape-default.md) related to disk such as `node_disk_io_time_seconds_total` and `windows_logical_disk_free_bytes` to monitor attached storage.
- :::image type="content" source="media/monitor-kubernetes/monitoring-workbooks-subnet-ip-usage.png" alt-text="Screenshot of Container Insights workbooks." lightbox="media/monitor-kubernetes/monitoring-workbooks-subnet-ip-usage.png":::
+**Log Analytics**
+- Select the [Containers category](../logs/queries.md?tabs=groupby#find-and-filter-queries) in the [queries dialog](../logs/queries.md#queries-dialog) for your Log Analytics workspace to access prebuilt log queries for your cluster, including the **Image inventory** log query that retrieves data from the [ContainerImageInventory](/azure/azure-monitor/reference/tables/containerimageinventory) table populated by Container insights.
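+
+For example, a minimal sketch of the kind of query behind **Image inventory** (column names from the linked table reference):
+
+```kql
+ContainerImageInventory
+| summarize AggregatedValue = count() by Image, ImageTag, Running
+```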
-For troubleshooting scenarios, you may need to access the AKS nodes directly for maintenance or immediate log collection. For security purposes, the AKS nodes aren't exposed to the internet but you can use the `kubectl debug` command to SSH to the AKS nodes. For more information on this process, see [Connect with SSH to Azure Kubernetes Service (AKS) cluster nodes for maintenance or troubleshooting](../../aks/ssh.md).
+**Troubleshooting**<br>
+- For troubleshooting scenarios, you may need to access nodes directly for maintenance or immediate log collection. For security purposes, AKS nodes aren't exposed to the internet but you can use the `kubectl debug` command to SSH to the AKS nodes. For more information on this process, see [Connect with SSH to Azure Kubernetes Service (AKS) cluster nodes for maintenance or troubleshooting](../../aks/ssh.md).
-### Level 2 - Managed AKS components
+**Cost analysis**<br>
+- Configure [OpenCost](https://www.opencost.io), which is an open-source, vendor-neutral CNCF sandbox project for understanding your Kubernetes costs, to support your analysis of your cluster costs. It exports detailed costing data to Azure storage.
+- Use data from OpenCost to break down relative usage of the cluster by different teams in your organization so that you can allocate the cost between each.
+- Use data from OpenCost to ensure that the cluster is using the full capacity of its nodes by densely packing workloads, using fewer large nodes as opposed to many smaller nodes.
-The managed AKS level includes the following components:
+
+### Monitor level 3 - Managed Kubernetes components
+
+The managed Kubernetes level includes the following components:
| Component | Monitoring |
|:|:|
| API Server | Monitor the status of API server and identify any increase in request load and bottlenecks if the service is down. |
| Kubelet | Monitor Kubelet to help troubleshoot pod management issues, pods not starting, nodes not ready, or pods getting killed. |
-Azure Monitor and Container Insights don't provide full monitoring for the API server.
+Following are common scenarios for monitoring your managed Kubernetes components.
-- Under **Monitoring**, you can select **Metrics** to view the **Inflight Requests** counter, but you should refer to metrics in Prometheus for a complete view of the API server performance. This includes such values as request latency and workqueue processing time.
-- To see critical metrics for the API server, see [Grafana Labs](https://grafana.com/grafan).
+**Container insights**<br>
+- Under **Monitoring**, select **Metrics** to view the **Inflight Requests** counter.
+- Under **Reports**, use the **Kubelet** workbook to see the health and performance of each kubelet. For more information about these workbooks, see [Resource Monitoring workbooks](container-insights-reports.md#resource-monitoring-workbooks).
- :::image type="content" source="media/monitor-kubernetes/grafana-api-server.png" alt-text="Screenshot of dashboard for Grafana API server." lightbox="media/monitor-kubernetes/grafana-api-server.png":::
+**Grafana**<br>
+- Use a dashboard such as [Kubernetes apiserver](https://grafana.com/grafana/dashboards/12006) for a complete view of the API server performance. This includes such values as request latency and workqueue processing time.
-- Under **Reports**, use the **Kubelet** workbook to see the health and performance of each kubelet. For more information about these workbooks, see [Resource Monitoring workbooks](container-insights-reports.md#resource-monitoring-workbooks). For troubleshooting scenarios, you can access kubelet logs using the process described at [Get kubelet logs from Azure Kubernetes Service (AKS) cluster nodes](../../aks/kubelet-logs.md).
+**Log Analytics**<br>
+- Use [log queries with resource logs](../../aks/monitor-aks.md#sample-log-queries) to analyze [control plane logs](#collect-control-plane-logs-for-aks-clusters) generated by AKS components.
+- Any configuration activities for AKS are logged in the Activity log. When you [send the Activity log to a Log Analytics workspace](#collect-activity-log-for-aks-clusters) you can analyze it with Log Analytics. For example, the following sample query can be used to return records identifying a successful upgrade across all your AKS clusters.
-### Resource logs
+ ``` kql
+ AzureActivity
+ | where CategoryValue == "Administrative"
+ | where OperationNameValue == "MICROSOFT.CONTAINERSERVICE/MANAGEDCLUSTERS/WRITE"
+ | extend properties=parse_json(Properties_d)
+ | where properties.message == "Upgrade Succeeded"
+ | order by TimeGenerated desc
+ ```
-Use [log queries with resource logs](container-insights-log-query.md#resource-logs) to analyze control plane logs generated by AKS components.
-### Level 3 - Kubernetes objects and workloads
+**Troubleshooting**<br>
+- For troubleshooting scenarios, you can access kubelet logs using the process described at [Get kubelet logs from Azure Kubernetes Service (AKS) cluster nodes](../../aks/kubelet-logs.md).
+
+### Monitor level 4 - Kubernetes objects and workloads
The Kubernetes objects and workloads level includes the following components:
| Pods | Monitor status and resource utilization, including CPU and memory, of the pods running on your AKS cluster. |
| Containers | Monitor resource utilization, including CPU and memory, of the containers running on your AKS cluster. |
-Use existing views and reports in Container Insights to monitor containers and pods.
-- Use the **Nodes** and **Controllers** views to see the health and performance of the pods running on them and drill down to the health and performance of their containers.
-- Use the **Containers** view to see the health and performance for the containers. For more information on analyzing container health and performance, see [Monitor your Kubernetes cluster performance with Container Insights](container-insights-analyze.md#analyze-nodes-controllers-and-container-health).
+Following are common scenarios for monitoring your Kubernetes objects and workloads.
- :::image type="content" source="media/monitor-kubernetes/container-insights-containers-view.png" alt-text="Screenshot of Container Insights containers view." lightbox="media/monitor-kubernetes/container-insights-containers-view.png":::
-- Under **Reports**, use the **Deployments** workbook to see deployment metrics. For more information, ee [Deployment & HPA metrics with Container Insights](container-insights-deployment-hpa-metrics.md).
- :::image type="content" source="media/monitor-kubernetes/container-insights-deployments-workbook.png" alt-text="Screenshot of Container Insights deployments workbook." lightbox="media/monitor-kubernetes/container-insights-deployments-workbook.png":::
-#### Live data
-In troubleshooting scenarios, Container Insights provides access to live AKS container logs (stdout/stderror), events and pod metrics. For more information about this feature, see [How to view Kubernetes logs, events, and pod metrics in real-time](container-insights-livedata-overview.md).
-
+**Container insights**<br>
+- Use the **Nodes** and **Controllers** views to see the health and performance of the pods running on them and drill down to the health and performance of their containers.
+- Use the **Containers** view to see the health and performance for the containers. For more information on analyzing container health and performance, see [Monitor your Kubernetes cluster performance with Container Insights](container-insights-analyze.md#analyze-nodes-controllers-and-container-health).
+- Under **Reports**, use the **Deployments** workbook to see deployment metrics. For more information, see [Deployment & HPA metrics with Container Insights](container-insights-deployment-hpa-metrics.md).
-### Level 4 - Applications
+**Grafana dashboards**<br>
+- Multiple [Kubernetes dashboards](https://grafana.com/grafana/dashboards/?search=kubernetes) are available that visualize the performance and health of your nodes based on data stored in Prometheus.
-The application level includes the following component:
+**Live data**<br>
+- In troubleshooting scenarios, Container Insights provides access to live AKS container logs (stdout/stderr), events, and pod metrics. For more information about this feature, see [How to view Kubernetes logs, events, and pod metrics in real-time](container-insights-livedata-overview.md).
-| Component | Monitoring requirements |
-|:|:|
-| Applications | Monitor microservice application deployments to identify application failures and latency issues, including information like request rates, response times, and exceptions. |
+### Alerts for the platform engineer
-Application Insights provides complete monitoring of applications running on AKS and other environments. If you have a Java application, you can provide monitoring without instrumenting your code by following [Zero instrumentation application monitoring for Kubernetes - Azure Monitor Application Insights](../app/kubernetes-codeless.md).
+[Alerts in Azure Monitor](../alerts/alerts-overview.md) proactively notify you of interesting data and patterns in your monitoring data. They allow you to identify and address issues in your system before your customers notice them. If you have an existing [ITSM solution](../alerts/itsmc-overview.md) for alerting, you can [integrate it with Azure Monitor](../alerts/itsmc-overview.md). You can also [export workspace data](../logs/logs-data-export.md) to send data from your Log Analytics workspace to another location that supports your current alerting solution.
-If you want complete monitoring, you should configure code-based monitoring depending on your application:
+#### Alert types
+The following table describes the different types of custom alert rules that you can create based on the data collected by the services described above.
-- [ASP.NET applications](../app/asp-net.md)
-- [ASP.NET Core applications](../app/asp-net-core.md)
-- [.NET Console applications](../app/console.md)
-- [Java](../app/opentelemetry-enable.md?tabs=java)
-- [Node.js](../app/nodejs.md)
-- [Python](../app/opencensus-python.md)
-- [Other platforms](../app/app-insights-overview.md#supported-languages)
+| Alert type | Description |
+|:|:|
+| Prometheus alerts | [Prometheus alerts](../alerts/prometheus-alerts.md) are written in Prometheus Query Language (PromQL) and applied to Prometheus metrics stored in [Azure Monitor managed service for Prometheus](../essentials/prometheus-metrics-overview.md). Recommended alerts already include the most common Prometheus alerts, and you can [create additional alert rules](../essentials/prometheus-rule-groups.md) as required. |
+| Metric alert rules | Metric alert rules use the same metric values as the Metrics explorer. In fact, you can create an alert rule directly from the metrics explorer with the data you're currently analyzing. Metric alert rules can be useful to alert on AKS performance using any of the values in [AKS data reference metrics](../../aks/monitor-aks-reference.md#metrics). |
+| Log alert rules | Use log alert rules to generate an alert from the results of a log query. For more information, see [How to create log alerts from Container Insights](container-insights-log-alerts.md) and [How to query logs from Container Insights](container-insights-log-query.md). |
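
For example, a log alert rule can watch for pods whose containers restart repeatedly. The following is a minimal sketch, assuming the Container insights `KubePodInventory` table and its `ContainerRestartCount` column are populated in your workspace:

```kql
// Pods with more than five container restarts in the last 15 minutes.
// Table and column names are standard Container insights schema; verify
// them in your workspace before you base an alert rule on this query.
KubePodInventory
| where TimeGenerated > ago(15m)
| summarize Restarts = max(ContainerRestartCount) by Name, Namespace, ClusterName
| where Restarts > 5
```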
-For more information, see [What is Application Insights?](../app/app-insights-overview.md).
+#### Recommended alerts
+Start with a set of recommended Prometheus alerts from [Metric alert rules in Container insights (preview)](container-insights-metric-alerts.md#prometheus-alert-rules), which includes the most common alerting conditions for a Kubernetes cluster. You can add more alert rules later as you identify additional alerting conditions.
-### Level 5 - External components
+## Developer
-The components external to AKS include the following:
+In addition to developing the application, the *developer* maintains the application running on the cluster. They're responsible for application-specific traffic, including application performance and failures, and for maintaining the reliability of the application according to company-defined SLAs.
-| Component | Monitoring requirements |
-|:|:|
-| Service Mesh, Ingress, Egress | Metrics based on component. |
-| Database and work queues | Metrics based on component. |
-Monitor external components such as Service Mesh, Ingress, Egress with Prometheus and Grafana, or other proprietary tools. Monitor databases and other Azure resources using other features of Azure Monitor.
+### Azure services for the developer
-## Analyze metric data with the Metrics explorer
+The following table lists the services that are commonly used by the developer to monitor the health and performance of the application running on the cluster.
-Use the **Metrics** explorer to perform custom analysis of metric data collected for your containers. It allows you plot charts, visually correlate trends, and investigate spikes and dips in your metrics values. You can create metrics alert to proactively notify you when a metric value crosses a threshold and pin charts to dashboards for use by different members of your organization.
-For more information, see [Getting started with Azure Metrics Explorer](../essentials/metrics-getting-started.md). For a list of the platform metrics collected for AKS, see [Monitoring AKS data reference metrics](../../aks/monitor-aks-reference.md#metrics). When Container Insights is enabled for a cluster, [addition metric values](container-insights-update-metrics.md) are available.
-## Analyze log data with Log Analytics
-Select **Logs** to use the Log Analytics tool to analyze resource logs or dig deeper into data used to create the views in Container Insights. Log Analytics allows you to perform custom analysis of your log data.
+| Service | Description |
+|:|:|
+| [Application insights](../app/app-insights-overview.md) | Feature of Azure Monitor that provides application performance monitoring (APM) to monitor applications running on your Kubernetes cluster from development, through test, and into production. Quickly identify and mitigate latency and reliability issues using distributed traces. Supports [OpenTelemetry](../app/opentelemetry-overview.md#opentelemetry) for vendor-neutral instrumentation. |
-For more information on Log Analytics and to get started with it, see:
-- [How to query logs from Container Insights](container-insights-log-query.md)
-- [Using queries in Azure Monitor Log Analytics](../logs/queries.md)
-- [Monitoring AKS data reference logs](../../aks/monitor-aks-reference.md#azure-monitor-logs-tables)
-- [Log Analytics tutorial](../logs/log-analytics-tutorial.md)
-You can also use log queries to analyze resource logs from AKS. For a list of the log categories available, see [AKS data reference resource logs](../../aks/monitor-aks-reference.md#resource-logs). You must create a diagnostic setting to collect each category as described in [Configure monitoring](#configure-monitoring) before the data can be collected.
-## Alerts
-[Alerts in Azure Monitor](../alerts/alerts-overview.md) proactively notify you of interesting data and patterns in your monitoring data. They allow you to identify and address issues in your system before your customers notice them. There are no preconfigured alert rules for AKS clusters, but you can create your own based on data collected by Container Insights.
+See [Data Collection Basics of Azure Monitor Application Insights](../app/opentelemetry-overview.md) for options on configuring data collection from the application running on your cluster, and for decision criteria to help you choose the best method for your particular requirements.
-> [!IMPORTANT]
-> Most alert rules have a cost dependent on the type of rule, how many dimensions it includes, and how frequently it runs. Refer to **Alert rules** in [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/) before creating any alert rules.
+### Monitor level 5 - Application
-### Choose an alert type
+Following are common scenarios for monitoring your application.
-The most common types of alert rules in Azure Monitor are [metric alerts](../alerts/alerts-metric.md) and [log query alerts](../alerts/alerts-log-query.md). The type of alert rule that you create for a particular scenario will depend on where the data is located that you want to set an alert for.
-You may have cases where data for a particular alerting scenario is available in both **Metrics** and **Logs**, and you need to determine which rule type to use. It's typically the best strategy to use metric alerts instead of log alerts when possible, because metric alerts are more responsive and stateful. You can create a metric alert on any values you can analyze in the Metrics explorer. If the logic for your alert rule requires data in **Logs**, or if it requires more complex logic, then you can use a log query alert rule.
-For example, if you want an alert when an application workload is consuming excessive CPU, you can create a metric alert using the CPU metric. If you need an alert when a particular message is found in a control plane log, then you'll require a log alert.
-### Metric alert rules
-Metric alert rules use the same metric values as the Metrics explorer. In fact, you can create an alert rule directly from the metrics explorer with the data you're currently analyzing. You can use any of the values in [AKS data reference metrics](../../aks/monitor-aks-reference.md#metrics) for metric alert rules.
+**Application performance**<br>
+- Use the **Performance** view in Application insights to view the performance of different operations in your application.
+- Use [Profiler](../profiler/profiler-overview.md) to capture and view performance traces for your application.
+- Use [Application Map](../app/app-map.md) to view the dependencies between your application components and identify any bottlenecks.
+- Enable [distributed tracing](../app/distributed-tracing-telemetry-correlation.md), which provides a performance profiler that works like call stacks for cloud and microservices architectures, to gain better observability into the interaction between services.
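
For workspace-based Application insights resources, you can also examine operation performance directly with a log query. A minimal sketch, assuming the workspace-based `AppRequests` table:

```kql
// 95th-percentile server response time per operation over the last hour.
// AppRequests is the workspace-based Application Insights requests table;
// adjust the table name if your resource uses classic tables.
AppRequests
| where TimeGenerated > ago(1h)
| summarize P95DurationMs = percentile(DurationMs, 95), RequestCount = count() by OperationName
| order by P95DurationMs desc
```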
-Container Insights includes a feature that creates a recommended set of metric alert rules for your AKS cluster. This feature creates new metric values used by the alert rules that you can also use in the Metrics explorer. For more information, see [Recommended metric alerts (preview) from Container Insights](container-insights-metric-alerts.md).
+**Application failures**<br>
+- Use the **Failures** tab of Application insights to view the number of failed requests and the most common exceptions.
+- Ensure that alerts for [failure anomalies](../alerts/proactive-failure-diagnostics.md) identified with [smart detection](../alerts/proactive-diagnostics.md) are configured properly.
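
As a complement to the **Failures** view, here's a hedged sketch of the same analysis as a log query, again assuming the workspace-based `AppRequests` table:

```kql
// Most common result codes among failed requests in the last 24 hours.
AppRequests
| where TimeGenerated > ago(24h)
| where Success == false
| summarize FailedCount = count() by ResultCode, OperationName
| order by FailedCount desc
```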
-### Log alert rules
+**Health monitoring**<br>
+- Create an [Availability test](../app/availability-overview.md) in Application insights to run a recurring test that monitors the availability and responsiveness of your application.
+- Use the [SLA report](../app/sla-report.md) to calculate and report SLA for web tests.
+- Use [annotations](../app/annotations.md) to identify when a new build is deployed so that you can visually inspect any change in performance after the update.
-Use log alert rules to generate an alert from the results of a log query. This may be data collected by Container Insights or from AKS resource logs. For more information, see [How to create log alerts from Container Insights](container-insights-log-alerts.md) and [How to query logs from Container Insights](container-insights-log-query.md).
+**Application logs**<br>
+- Container insights sends stdout/stderr logs to a Log Analytics workspace. See [Resource logs](../../aks/monitor-aks-reference.md#resource-logs) for a description of the different logs and [Kubernetes Services](/azure/azure-monitor/reference/tables/tables-resourcetype#kubernetes-services) for a list of the tables each is sent to.
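
For example, here's a minimal sketch of a query over those logs, assuming your cluster writes to the `ContainerLogV2` table (older configurations may use `ContainerLog` instead); the container name is hypothetical:

```kql
// Recent stderr output for one container.
ContainerLogV2
| where TimeGenerated > ago(30m)
| where LogSource == "stderr"
| where ContainerName == "my-app" // hypothetical container name
| project TimeGenerated, PodName, ContainerName, LogMessage
| order by TimeGenerated desc
```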
-### Virtual machine alerts
+**Service mesh**<br>
-AKS relies on a Virtual Machine Scale Set that must be healthy to run AKS workloads. You can alert on critical metrics such as CPU, memory, and storage for the virtual machines using the guidance at [Monitor virtual machines with Azure Monitor: Alerts](../vm/monitor-virtual-machine-alerts.md).
+- For AKS clusters, deploy the [Istio-based service mesh add-on](../../aks/istio-about.md), which provides observability into your microservices architecture. [Istio](https://istio.io/) is an open-source service mesh that layers transparently onto existing distributed applications. The add-on assists in the deployment and management of Istio for AKS.
-### Prometheus alerts
+## See also
-You can configure Prometheus alerts to cover scenarios where Azure Monitor either doesn't have the data required for an alerting condition or the alerting may not be responsive enough. For example, Azure Monitor doesn't collect critical information for the API server. You can create a log query alert using the data from the kube-apiserver resource log category, but it can take up to several minutes before you receive an alert, which may not be sufficient for your requirements. In this case, we recommend configuring Prometheus alerts.
+- See [Monitoring AKS](../../aks/monitor-aks.md) for guidance on monitoring specific to Azure Kubernetes Service (AKS).
-## Next steps
-- For more information about AKS metrics, logs, and other important values, see [Monitoring AKS data reference](../../aks/monitor-aks-reference.md).
azure-monitor Prometheus Metrics Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/prometheus-metrics-troubleshoot.md
Follow the steps in this article to determine the cause of Prometheus metrics no
The replica pod scrapes metrics from `kube-state-metrics` and custom scrape targets in the `ama-metrics-prometheus-config` configmap. DaemonSet pods scrape metrics from the following targets on their respective node: `kubelet`, `cAdvisor`, `node-exporter`, and custom scrape targets in the `ama-metrics-prometheus-config-node` configmap. The pod whose logs and Prometheus UI you want to view depends on which scrape target you're investigating.
+## Troubleshoot using a PowerShell script
+
+If you encounter an error while you attempt to enable monitoring for your AKS cluster, follow the instructions [here](https://github.com/Azure/prometheus-collector/tree/main/internal/scripts/troubleshoot) to run the troubleshooting script. The script performs a basic diagnosis of any configuration issues on your cluster, and you can attach the generated files when you create a support request for faster resolution of your support case.
## Metrics Throttling

In the Azure portal, navigate to your Azure Monitor workspace. Go to `Metrics` and verify that the metrics `Active Time Series % Utilization` and `Events Per Minute Ingested % Utilization` are below 100%.
If you see metrics missed, you can first check if the ingestion limits are being
Refer to [service quotas and limits](../service-limits.md#prometheus-metrics) for default quotas and also to understand what can be increased based on your usage. You can request a quota increase for Azure Monitor workspaces using the `Support Request` menu for the Azure Monitor workspace. Ensure you include the ID, internal ID, and Location/Region for the Azure Monitor workspace in the support request, which you can find in the `Properties` menu for the Azure Monitor workspace in the Azure portal.

## Next steps

- [Check considerations for collecting metrics at high scale](prometheus-metrics-scrape-scale.md).
azure-monitor Diagnostic Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/diagnostic-settings.md
The following table provides unique requirements for each destination including
| Event Hubs | The shared access policy for the namespace defines the permissions that the streaming mechanism has. Streaming to Event Hubs requires Manage, Send, and Listen permissions. To update the diagnostic setting to include streaming, you must have the ListKey permission on that Event Hubs authorization rule.<br><br>The event hub namespace needs to be in the same region as the resource being monitored if the resource is regional. <br><br> Diagnostic settings can't access Event Hubs resources when virtual networks are enabled. You must enable **Allow trusted Microsoft services** to bypass this firewall setting in Event Hubs so that the Azure Monitor diagnostic settings service is granted access to your Event Hubs resources.| | Partner integrations | The solutions vary by partner. Check the [Azure Monitor partner integrations documentation](../../partner-solutions/overview.md) for details.
-## Controlling costs
-
-There is a cost for collecting data in a Log Analytics workspace, so you should only collect the categories you require for each service. The data volume for resource logs varies significantly between services,
-
-You might also not want to collect platform metrics from Azure resources because this data is already being collected in Metrics. Only configure your diagnostic data to collect metrics if you need metric data in the workspace for more complex analysis with log queries.
+> [!CAUTION]
+> If you want to store diagnostic logs in a Log Analytics workspace, there are two points to consider to avoid seeing duplicate data in Application Insights:
+> * The destination can't be the same Log Analytics workspace that your Application Insights resource is based on.
+> * The Application Insights user can't have access to both workspaces. Set the Log Analytics access control mode to Requires workspace permissions. Through Azure role-based access control, ensure the user only has access to the Log Analytics workspace the Application Insights resource is based on.
+>
+> These steps are necessary because Application Insights accesses telemetry across Application Insights resources, including Log Analytics workspaces, to provide complete end-to-end transaction operations and accurate application maps. Because diagnostic logs use the same table names, duplicate telemetry can be displayed if the user has access to multiple resources that contain the same data.
-Diagnostic settings don't allow granular filtering of resource logs. You might require certain logs in a particular category but not others. Or you may want to remove unneeded columns from the data. In these cases, use [transformations](data-collection-transformations.md) on the workspace to filter logs that you don't require.
+## Controlling costs
+There is a cost for collecting data in a Log Analytics workspace, so you should only collect the categories you require for each service. The data volume for resource logs varies significantly between services.
-You can also use transformations to lower the storage requirements for records you want by removing columns without useful information. For example, you might have error events in a resource log that you want for alerting. But you might not require certain columns in those records that contain a large amount of data. You can create a transformation for the table that removes those columns.
+You might also not want to collect platform metrics from Azure resources because this data is already being collected in Metrics. Only configure your diagnostic data to collect metrics if you need metric data in the workspace for more complex analysis with log queries. Diagnostic settings don't allow granular filtering of resource logs.
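
To understand which tables drive that cost before you decide what to collect, you can query ingestion volume in the workspace. A minimal sketch, assuming the standard Log Analytics `Usage` table:

```kql
// Billable ingestion volume per table over the last 30 days (Quantity is in MB).
Usage
| where TimeGenerated > ago(30d)
| where IsBillable == true
| summarize IngestedGB = sum(Quantity) / 1024.0 by DataType
| order by IngestedGB desc
```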
[!INCLUDE [azure-monitor-cost-optimization](../../../includes/azure-monitor-cost-optimization.md)]
azure-monitor Diagnostics Settings Policies Deployifnotexists https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/diagnostics-settings-policies-deployifnotexists.md
The following steps show how to apply the policy to send audit logs to for key v
### [CLI](#tab/cli) To apply a policy using the CLI, use the following commands:
-1. Create a policy assignment using [`az policy assignment create`](https://learn.microsoft.com/cli/azure/policy/assignment?view=azure-cli-latest#az-policy-assignment-create).
+1. Create a policy assignment using [`az policy assignment create`](/cli/azure/policy/assignment#az-policy-assignment-create).
```azurecli
az policy assignment create --name <policy assignment name> --policy "6b359d8f-f88d-4052-aa7c-32015963ecc1" --scope <scope> --params "{\"logAnalytics\": {\"value\": \"<log analytics workspace resource ID>\"}}" --mi-system-assigned --location <location>
```
Find the role in the policy definition by searching for *roleDefinitionIds*
"deployment": { "properties": {... ```
- Assign the required role using [`az policy assignment identity assign`](https://learn.microsoft.com/cli/azure/policy/assignment/identity?view=azure-cli-latest):
+ Assign the required role using [`az policy assignment identity assign`](/cli/azure/policy/assignment/identity):
```azurecli
az policy assignment identity assign --system-assigned --resource-group <resource group name> --role <role name or ID> --identity-scope <scope> --name <policy assignment name>
```
Find the role in the policy definition by searching for *roleDefinitionIds*
az policy assignment identity assign --system-assigned --resource-group rg-001 --role 92aaf0da-9dab-42b6-94a3-d43ce8d16293 --identity-scope /subscriptions/12345678-aaaa-bbbb-cccc-1234567890ab/resourceGroups/rg001 --name policy-assignment-1
```
-1. Trigger a scan to find existing resources using [`az policy state trigger-scan`](https://learn.microsoft.com/cli/azure/policy/state?view=azure-cli-latest#az-policy-state-trigger-scan).
+1. Trigger a scan to find existing resources using [`az policy state trigger-scan`](/cli/azure/policy/state#az-policy-state-trigger-scan).
```azurecli
az policy state trigger-scan --resource-group rg-001
```
-1. Create a remediation task to apply the policy to existing resources using [`az policy remediation create`](https://learn.microsoft.com/cli/azure/policy/remediation?view=azure-cli-latest#az-policy-remediation-create).
+1. Create a remediation task to apply the policy to existing resources using [`az policy remediation create`](/cli/azure/policy/remediation#az-policy-remediation-create).
```azurecli az policy remediation create -g <resource group name> --policy-assignment <policy assignment name> --name <remediation name>
Find the role in the policy definition by searching for *roleDefinitionIds*
az policy remediation create -g rg-001 -n remediation-001 --policy-assignment policy-assignment-1
```
-For more information on policy assignment using CLI, see [Azure CLI reference - az policy assignment](https://learn.microsoft.com/cli/azure/policy/assignment?view=azure-cli-latest#az-policy-assignment-create)
+For more information on policy assignment using CLI, see [Azure CLI reference - az policy assignment](/cli/azure/policy/assignment#az-policy-assignment-create).
### [PowerShell](#tab/Powershell)
You can get your policy assignment details using the following command:
1. Select the subscription where you want to apply the policy initiative using the `az account set` command.
-1. Assign the initiative using [`az policy assignment create`](https://learn.microsoft.com/cli/azure/policy/assignment?view=azure-cli-latest#az-policy-assignment-create).
+1. Assign the initiative using [`az policy assignment create`](/cli/azure/policy/assignment#az-policy-assignment-create).
```azurecli
az policy assignment create --name <assignment name> --resource-group <resource group name> --policy-set-definition <initiative name> --params <parameters object> --mi-system-assigned --location <location>
```
You can get your policy assignment details using the following command:
"deployment": { "properties": {... ```
- Assign the required role using [`az policy assignment identity assign`](https://learn.microsoft.com/cli/azure/policy/assignment/identity?view=azure-cli-latest):
+ Assign the required role using [`az policy assignment identity assign`](/cli/azure/policy/assignment/identity):
```azurecli
az policy assignment identity assign --system-assigned --resource-group <resource group name> --role <role name or ID> --identity-scope <scope> --name <policy assignment name>
```
You can get your policy assignment details using the following command:
```azurecli
az policy set-definition show --name f5b29bc4-feca-4cc6-a58a-772dd5e290a5 | grep policyDefinitionReferenceId
```
- Remediate the resources using [`az policy remediation create`](https://learn.microsoft.com/cli/azure/policy/remediation?view=azure-cli-latest#az-policy-remediation-create)
+ Remediate the resources using [`az policy remediation create`](/cli/azure/policy/remediation#az-policy-remediation-create)
```azurecli az policy remediation create --resource-group <resource group name> --policy-assignment <assignment name> --name <remediation task name> --definition-reference-id "policy specific reference ID" --resource-discovery-mode ReEvaluateCompliance
azure-monitor Metric Chart Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/metric-chart-samples.md
# Metric chart examples
-The Azure platform offers [over a thousand metrics](/azure/azure-monitor/reference/supported-metrics/metrics-index.md), many of which have dimensions. By using [dimension filters](./metrics-charts.md), applying [splitting](./metrics-charts.md), controlling chart type, and adjusting chart settings you can create powerful diagnostic views and dashboards that provide insight into the health of your infrastructure and applications. This article shows some examples of the charts that you can build using [Metrics Explorer](./metrics-charts.md), and explains the necessary steps to configure each of these charts.
+The Azure platform offers [over a thousand metrics](/azure/azure-monitor/reference/supported-metrics/metrics-index), many of which have dimensions. By using [dimension filters](./metrics-charts.md), applying [splitting](./metrics-charts.md), controlling chart type, and adjusting chart settings you can create powerful diagnostic views and dashboards that provide insight into the health of your infrastructure and applications. This article shows some examples of the charts that you can build using [Metrics Explorer](./metrics-charts.md), and explains the necessary steps to configure each of these charts.
## Website CPU utilization by server instances
azure-monitor Metrics Charts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/metrics-charts.md
Title: Advanced features of Metrics Explorer
-description: Metrics are a series of measured values and counts that Azure collects. Learn to use Metrics Explorer to investigate the health and usage of resources.
+ Title: Advanced features of Metrics Explorer in Azure Monitor
+description: Learn how to use Metrics Explorer to investigate the health and usage of resources.
# Advanced features of Metrics Explorer in Azure Monitor
-> [!NOTE]
-> This article assumes you're familiar with basic features of the Metrics Explorer feature of Azure Monitor. If you're a new user and want to learn how to create your first metric chart, see [Getting started with the Metrics Explorer](./metrics-getting-started.md).
+In Azure Monitor, [metrics](data-platform-metrics.md) are a series of measured values and counts that are collected and stored over time. Metrics can be standard (also called *platform*) or custom.
-In Azure Monitor, [metrics](data-platform-metrics.md) are a series of measured values and counts that are collected and stored over time. Metrics can be standard (also called "platform") or custom.
+The Azure platform provides standard metrics. These metrics reflect the health and usage statistics of your Azure resources.
-Standard metrics are provided by the Azure platform. They reflect the health and usage statistics of your Azure resources.
+This article describes advanced features of Metrics Explorer in Azure Monitor. It assumes that you're familiar with basic features of Metrics Explorer. If you're a new user and want to learn how to create your first metric chart, see [Get started with Metrics Explorer](./metrics-getting-started.md).
## Resource scope picker
-The resource scope picker allows you to view metrics across single resources and multiple resources. The following sections explain how to use the resource scope picker.
+Use the resource scope picker to view metrics across single resources and multiple resources.
### Select a single resource
-In the Azure portal, select **Metrics** from the **Monitor** menu or from the **Monitoring** section of a resource's menu. Then choose **Select a scope** to open the scope picker.
+1. In the Azure portal, select **Metrics** from the **Monitor** menu or from the **Monitoring** section of a resource's menu.
-Use the scope picker to select the resources whose metrics you want to see. If you opened the Azure Metrics Explorer from a resource's menu, the scope should be populated.
+1. Choose **Select a scope**.
-![Screenshot showing how to open the resource scope picker.](./media/metrics-charts/scope-picker.png)
+ :::image source="./media/metrics-charts/scope-picker.png" alt-text="Screenshot that shows the button that opens the resource scope picker." lightbox="./media/metrics-charts/scope-picker.png":::
-For some resources, you can view only one resource's metrics at a time. In the **Resource types** menu, these resources are in the **All resource types** section.
+1. Use the scope picker to select the resources whose metrics you want to see. If you opened Metrics Explorer from a resource's menu, the scope should be populated.
-![Screenshot showing a single resource.](./media/metrics-charts/single-resource-scope.png)
+ For some resources, you can view only one resource's metrics at a time. On the **Resource types** menu, these resources are in the **All resource types** section.
-After selecting a resource, you see all subscriptions and resource groups that contain that resource.
+ :::image source="./media/metrics-charts/single-resource-scope.png" alt-text="Screenshot that shows available resources." lightbox="./media/metrics-charts/single-resource-scope.png":::
-![Screenshot showing available resources.](./media/metrics-charts/available-single-resource.png)
+1. Select a resource. All subscriptions and resource groups that contain that resource appear.
-> [!TIP]
-> If you want the capability to view the metrics for multiple resources at the same time, or to view metrics across a subscription or resource group, select **Upvote**.
+ :::image source="./media/metrics-charts/available-single-resource.png" alt-text="Screenshot that shows a single resource." lightbox="./media/metrics-charts/available-single-resource.png":::
-When you're satisfied with your selection, select **Apply**.
+ If you want the capability to view the metrics for multiple resources at the same time, or to view metrics across a subscription or resource group, select **Upvote**.
-### Select multiple resources
+1. When you're satisfied with your selection, select **Apply**.
-Some resource types can query for metrics over multiple resources. The resources must be within the same subscription and location. Find these resource types at the top of the **Resource types** menu.
+### Select multiple resources
-For more information, see [Select multiple resources](./metrics-dynamic-scope.md#select-multiple-resources).
+Some resource types can query for metrics over multiple resources. The resources must be within the same subscription and location. Find these resource types at the top of the **Resource types** menu. For more information, see [Select multiple resources](./metrics-dynamic-scope.md#select-multiple-resources).
-![Screenshot showing cross-resource types.](./media/metrics-charts/multi-resource-scope.png)
For types that are compatible with multiple resources, you can query for metrics across a subscription or multiple resource groups. For more information, see [Select a resource group or subscription](./metrics-dynamic-scope.md#select-a-resource-group-or-subscription).

## Multiple metric lines and charts
-In the Azure Metrics Explorer, you can create charts that plot multiple metric lines or show multiple metric charts at the same time. This functionality allows you to:
+In Metrics Explorer, you can create charts that plot multiple metric lines or show multiple metric charts at the same time. This functionality allows you to:
- Correlate related metrics on the same graph to see how one value relates to another.
- Display metrics that use different units of measure in close proximity.
- Visually aggregate and compare metrics from multiple resources.
-For example, imagine you have five storage accounts, and you want to know how much space they consume together. You can create a stacked area chart that shows the individual values and the sum of all the values at points in time.
+For example, imagine that you have five storage accounts, and you want to know how much space they consume together. You can create a stacked area chart that shows the individual values and the sum of all the values at points in time.
### Multiple metrics on the same chart

To view multiple metrics on the same chart, first [create a new chart](./metrics-getting-started.md#create-your-first-metric-chart). Then select **Add metric**. Repeat this step to add another metric on the same chart.
-> [!NOTE]
-> Typically, your charts shouldn't mix metrics that use different units of measure. For example, avoid mixing one metric that uses milliseconds with another that uses kilobytes. Also avoid mixing metrics whose scales differ significantly.
->
-> In these cases, consider using multiple charts instead. In Metrics Explorer, select **New chart** to create a new chart.
-![Screenshot showing multiple metrics.](./media/metrics-charts/multiple-metrics-chart.png)
+Typically, your charts shouldn't mix metrics that use different units of measure. For example, avoid mixing one metric that uses milliseconds with another that uses kilobytes. Also avoid mixing metrics whose scales differ significantly. In these cases, consider using multiple charts instead.
### Multiple charts

To create another chart that uses a different metric, select **New chart**.
-To reorder or delete multiple charts, select the ellipsis (**...**) button to open the chart menu. Then choose **Move up**, **Move down**, or **Delete**.
+To reorder or delete multiple charts, select the ellipsis (**...**) button to open the chart menu. Then select **Move up**, **Move down**, or **Delete**.
-![Screenshot showing multiple charts.](./media/metrics-charts/multiple-charts.png)
## Time range controls
-In addition to changing the time range using the [time picker panel](metrics-getting-started.md#select-a-time-range), you can also pan and zoom using the controls in the chart area.
+In addition to changing the time range by using the [time picker panel](metrics-getting-started.md#select-a-time-range), you can pan and zoom by using the controls in the chart area.
### Pan
-To pan, select the left and right arrows at the edge of the chart. The arrow control moves the selected time range back and forward by one half the chart's time span. For example, if you're viewing the past 24 hours, clicking on the left arrow causes the time range to shift to span a day and a half to 12 hours ago.
+To pan, select the left and right arrows at the edge of the chart. The arrow control moves the selected time range back and forward by one half of the chart's time span. For example, if you're viewing the past 24 hours, selecting the left arrow causes the time range to shift to span a day and a half to 12 hours ago.
-Most metrics support 93 days of retention but only let you view 30 days at a time. Using the pan controls, you look at the past 30 days and then easily walk back 15 days at a time to view the rest of the retention period.
+Most metrics support 93 days of retention but let you view only 30 days at a time. By using the pan controls, you look at the past 30 days and then easily go back 15 days at a time to view the rest of the retention period.
-![Animated gif showing the left and right pan controls.](./media/metrics-charts/metrics-pan-controls.gif)
### Zoom
-You can select and drag on the chart to zoom into a section of a chart. Zooming updates the chart's time range to span your selection. If the time grain is set to Automatic, zooming selects a smaller time grain. The new time range applies to all charts in Metrics.
+You can select and drag on the chart to zoom in to a section of a chart. Zooming updates the chart's time range to span your selection. If the time grain is set to **Automatic**, zooming selects a smaller time grain. The new time range applies to all charts in Metrics Explorer.
-![Animated gif showing the metrics zoom feature.](./media/metrics-charts/metrics-zoom-control.gif)
## Aggregation
When you add a metric to a chart, Metrics Explorer applies a default aggregation
Before you use different aggregations on a chart, you should understand how Metrics Explorer handles them. Metrics are a series of measurements (or "metric values") that are captured over a time period. When you plot a chart, the values of the selected metric are separately aggregated over the *time granularity*.
-You select the size of the time grain by using Metrics Explorer's time picker panel. If you don't explicitly select the time grain, the currently selected time range is used by default. After the time grain is determined, the metric values that were captured during each time grain are aggregated on the chart, one data point per time grain.
+You select the size of the time grain by using the time picker panel in Metrics Explorer. If you don't explicitly select the time grain, Metrics Explorer uses the currently selected time range by default. After Metrics Explorer determines the time grain, the metric values that it captures during each time grain are aggregated on the chart, one data point per time grain.
+
+For example, suppose a chart shows the *Server response time* metric. It uses the average aggregation over the time span of the last 24 hours.
-For example, suppose a chart shows the *Server response time* metric. It uses the *average* aggregation over time span of the *last 24 hours*. In this example:
+In this example:
-- If the time granularity is set to 30 minutes, the chart is drawn from 48 aggregated data points. That is, 2 data points per hour for 24 hours. The line chart connects 48 dots in the chart plot area. Each data point represents the *average* of all captured response times for server requests that occurred during each of the relevant 30-minute time periods.-- If you switch the time granularity to 15 minutes, you get 96 aggregated data points. That is, 4 data points per hour for 24 hours.
+- If you set the time granularity to 30 minutes, Metrics Explorer draws the chart from 48 aggregated data points. That is, it uses two data points per hour for 24 hours. The line chart connects 48 dots in the chart plot area. Each data point represents the average of all captured response times for server requests that occurred during each of the relevant 30-minute time periods.
+- If you switch the time granularity to 15 minutes, you get 96 aggregated data points. That is, you get four data points per hour for 24 hours.
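
Conceptually, the same time-grain aggregation can be expressed as a log query over raw measurements. This is only an illustration of the binning behavior, not how Metrics Explorer computes its charts; `RawResponseTimes` is a hypothetical table:

```kql
// Average response time per 30-minute grain over the last 24 hours:
// one aggregated data point per bin, 48 points in total.
RawResponseTimes // hypothetical table of individual measurements
| where TimeGenerated > ago(24h)
| summarize AvgDurationMs = avg(DurationMs) by bin(TimeGenerated, 30m)
| order by TimeGenerated asc
```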
Metrics Explorer has five aggregation types:
-- **Sum**: The sum of all values captured during the aggregation interval. The *sum* aggregation is sometimes called the *total* aggregation.
+
+- **Sum**: The sum of all values captured during the aggregation interval. The sum aggregation is sometimes called the *total* aggregation.
- **Count**: The number of measurements captured during the aggregation interval.
+ When the metric is always captured with the value of 1, the count aggregation is equal to the sum aggregation. This scenario is common when the metric tracks the count of distinct events and each measurement represents one event. The code emits a metric record every time a new request arrives.
- **Average**: The average of the metric values captured during the aggregation interval.
- **Min**: The smallest value captured during the aggregation interval.
- **Max**: The largest value captured during the aggregation interval.
- :::image type="content" source="media/metrics-charts/aggregations.png" alt-text="A screenshot showing the aggregation dropdown." lightbox="media/metrics-charts/aggregations.png":::
-Metrics Explorer hides the aggregations that are irrelevant and can't be used.
+Metrics Explorer hides the aggregations that are irrelevant and can't be used.
For a deeper discussion of how metric aggregation works, see [Azure Monitor metrics aggregation and display explained](metrics-aggregation-explained.md).

## Filters
-You can apply filters to charts whose metrics have dimensions. For example, imagine a *Transaction count* metric that has a *Response type* dimension. This dimension indicates whether the response from transactions succeeded or failed. If you filter on this dimension, a chart line is displayed for only successful or only failed transactions.
+You can apply filters to charts whose metrics have dimensions. For example, imagine a *Transaction count* metric that has a *Response type* dimension. This dimension indicates whether the response from transactions succeeded or failed. If you filter on this dimension, Metrics Explorer displays a chart line for only successful or only failed transactions.
### Add a filter

1. Above the chart, select **Add filter**.
-1. Select a dimension from the **Property** dropdown to filter.
+1. Select a dimension from the **Property** dropdown list.
+
+ :::image type="content" source="./media/metrics-charts/filter-property.png" alt-text="Screenshot that shows the dropdown list for filter properties." lightbox="./media/metrics-charts/filter-property.png":::
- :::image type="content" source="./media/metrics-charts/filter-property.png" alt-text="Screenshot that shows the filter properties dropdown." lightbox="./media/metrics-charts/filter-property.png":::
+1. Select the operator that you want to apply against the dimension (property). The default operator is equals (**=**).
+
+ :::image type="content" source="./media/metrics-charts/filter-operator.png" alt-text="Screenshot that shows the operator that you can use with the filter." lightbox="./media/metrics-charts/filter-operator.png":::
-1. Select the operator you want to apply against the dimension (property). The default operator is = (equals)
- :::image type="content" source="./media/metrics-charts/filter-operator.png" alt-text="Screenshot that shows the operator you can use with the filter." lightbox="./media/metrics-charts/filter-operator.png":::
+1. Select which dimension values you want to apply to the filter when you're plotting the chart. This example shows filtering out the successful storage transactions.
-1. Select which dimension values you want to apply to the filter when plotting the chart. This example shows filtering out the successful storage transactions.
- :::image type="content" source="./media/metrics-charts/filter-values.png" alt-text="Screenshot that shows the filter values dropdown." lightbox="./media/metrics-charts/filter-values.png":::
+ :::image type="content" source="./media/metrics-charts/filter-values.png" alt-text="Screenshot that shows the dropdown list for filter values." lightbox="./media/metrics-charts/filter-values.png":::
-1. After selecting the filter values, click away from the filter selector to close it. The chart shows how many storage transactions have failed:
- :::image type="content" source="./media/metrics-charts/filtered-chart.png" alt-text="Screenshot that shows the successful filtered storage transactions." lightbox="./media/metrics-charts/filtered-chart.png":::
+1. After you select the filter values, click away from the filter selector to close it. The chart shows how many storage transactions have failed.
+
+ :::image type="content" source="./media/metrics-charts/filtered-chart.png" alt-text="Screenshot that shows the successful filtered storage transactions." lightbox="./media/metrics-charts/filtered-chart.png":::
1. Repeat these steps to apply multiple filters to the same charts.
You can split a metric by dimension to visualize how different segments of the m
### Apply splitting

1. Above the chart, select **Apply splitting**.
-
-1. Choose dimensions on which to segment your chart:
- :::image type="content" source="./media/metrics-charts/apply-splitting.png" alt-text="Screenshot that shows the selected dimension on which to segment the chart." lightbox="./media/metrics-charts/apply-splitting.png":::
- The chart shows multiple lines, one for each dimension segment:
- :::image type="content" source="./media/metrics-charts/segment-dimension.png" alt-text="Screenshot that shows multiple lines, one for each segment of dimension." lightbox="./media/metrics-charts/segment-dimension.png":::
+1. Choose dimensions on which to segment your chart.
+
+ :::image type="content" source="./media/metrics-charts/apply-splitting.png" alt-text="Screenshot that shows the selected dimension on which to segment the chart." lightbox="./media/metrics-charts/apply-splitting.png":::
+
+ The chart shows multiple lines, one for each dimension segment.
+
+ :::image type="content" source="./media/metrics-charts/segment-dimension.png" alt-text="Screenshot that shows multiple lines, one for each segment of dimension." lightbox="./media/metrics-charts/segment-dimension.png":::
+
+1. Choose a limit on the number of values to be displayed after you split by the selected dimension. The default limit is 10, as shown in the preceding chart. The range of the limit is 1 to 50.
+
+ :::image type="content" source="./media/metrics-charts/segment-dimension-limit.png" alt-text="Screenshot that shows the split limit, which restricts the number of values after splitting." lightbox="./media/metrics-charts/segment-dimension-limit.png":::
+1. Choose the sort order on segments: **Descending** (default) or **Ascending**.
-1. Choose a limit on the number of values to be displayed after splitting by selected dimension. The default limit is 10 as shown in the above chart. The range of limit is 1 - 50.
- :::image type="content" source="./media/metrics-charts/segment-dimension-limit.png" alt-text="Screenshot that shows split limit, which restricts the number of values after splitting." lightbox="./media/metrics-charts/segment-dimension-limit.png":::
+ :::image type="content" source="./media/metrics-charts/segment-dimension-sort.png" alt-text="Screenshot that shows the sort order on split values." lightbox="./media/metrics-charts/segment-dimension-sort.png":::
-1. Choose the sort order on segments: **Ascending** or **Descending**. The default selection is **Descending**.
+1. Segment by multiple dimensions by selecting them in the **Values** dropdown list. The legend shows a comma-separated list of dimension values for each segment.
-
- :::image type="content" source="./media/metrics-charts/segment-dimension-sort.png" alt-text="Screenshot that shows sort order on split values." lightbox="./media/metrics-charts/segment-dimension-sort.png":::
+ :::image type="content" source="./media/metrics-charts/segment-dimension-multiple.png" alt-text="Screenshot that shows multiple segments selected, and the corresponding chart." lightbox="./media/metrics-charts/segment-dimension-multiple.png":::
-1. Segment by multiple segments by selecting multiple dimensions from the values dropdown. The legends shows a comma-separated list of dimension values for each segment
- :::image type="content" source="./media/metrics-charts/segment-dimension-multiple.png" alt-text="Screenshot that shows multiple segments selected, and the corresponding chart." lightbox="./media/metrics-charts/segment-dimension-multiple.png":::
-
1. Click away from the grouping selector to close it.
- > [!TIP]
- > To hide segments that are irrelevant for your scenario and to make your charts easier to read, use both filtering and splitting on the same dimension.
+> [!TIP]
+> To hide segments that are irrelevant for your scenario and to make your charts easier to read, use both filtering and splitting on the same dimension.
## Locking the range of the y-axis

Locking the range of the value (y) axis becomes important in charts that show small fluctuations of large values.
-For example, a drop in the volume of successful requests from 99.99 percent to 99.5 percent might represent a significant reduction in the quality of service. Noticing a small numeric value fluctuation would be difficult or even impossible if you're using the default chart settings. In this case, you could lock the lowest boundary of the chart to 99 percent to make a small drop more apparent.
+For example, a drop in the volume of successful requests from 99.99 percent to 99.5 percent might represent a significant reduction in the quality of service. Noticing a small fluctuation in a numeric value would be difficult or even impossible if you're using the default chart settings. In this case, you could lock the lowest boundary of the chart to 99 percent to make a small drop more apparent.
+
+Another example is a fluctuation in the available memory. In this scenario, the value technically never reaches 0. Fixing the range to a higher value might make drops in available memory easier to spot.
+
+To control the y-axis range:
-Another example is a fluctuation in the available memory. In this scenario, the value technically never reaches 0. Fixing the range to a higher value might make drops in available memory easier to spot.
+1. Open the chart menu by selecting the ellipsis (**...**). Then select **Chart settings** to access advanced chart settings.
-1. To control the y-axis range, open the chart menu **...**. Then select **Chart settings** to access advanced chart settings.
- :::image source="./media/metrics-charts/select-chart-settings.png" alt-text="Screenshot that highlights the chart settings selection." lightbox="./media/metrics-charts/select-chart-settings.png":::
+ :::image source="./media/metrics-charts/select-chart-settings.png" alt-text="Screenshot that shows the menu option for chart settings." lightbox="./media/metrics-charts/select-chart-settings.png":::
1. Modify the values in the **Y-axis range** section, or select **Auto** to revert to the default values.
- :::image type="content" source="./media/metrics-charts/chart-settings.png" alt-text="Screenshot that shows the Y-axis range section." lightbox="./media/metrics-charts/chart-settings.png":::
-
-> [!NOTE]
-> If you lock the boundaries of the y-axis for charts that tracks count, sum, min, or max aggregations over a period of time, specify a fixed time granularity. Don't rely on the automatic defaults.
->
-> A fixed time granularity is chosen because chart values change when the time granularity is automatically modified when a user resizes a browser window or changes screen resolution. The resulting change in time granularity affects the appearance of the chart, invalidating the selection of the y-axis range.
+ :::image type="content" source="./media/metrics-charts/chart-settings.png" alt-text="Screenshot that shows the Y-axis range section." lightbox="./media/metrics-charts/chart-settings.png":::
+
+If you lock the boundaries of the y-axis for a chart that tracks count, sum, minimum, or maximum aggregations over a period of time, specify a fixed time granularity. Don't rely on the automatic defaults.
+
+You choose a fixed time granularity because chart values change when the time granularity is automatically modified after a user resizes a browser window or changes screen resolution. The resulting change in time granularity affects the appearance of the chart, invalidating the selection of the y-axis range.
## Line colors
-Chart lines are automatically assigned a color from a default palette.
+Chart lines are automatically assigned a color from a default palette.
To change the color of a chart line, select the colored bar in the legend that corresponds to the line on the chart. Use the color picker to select the line color. Customized colors are preserved when you pin the chart to a dashboard. The following section shows how to pin a chart.

## Saving to dashboards or workbooks
-After you configure a chart, you can add it to a dashboard or workbook. By adding a chart to a dashboard or workbook, you can make it accessible to your team. You can also gain insights by viewing it in the context of other monitoring information.
+After you configure a chart, you can add it to a dashboard or workbook. By adding a chart to a dashboard or workbook, you can make it accessible to your team. You can also gain insights by viewing it in the context of other monitoring information.
-- To pin a configured chart to a dashboard, in the upper-right corner of the chart, select **Save to dashboard** and then **Pin to dashboard**.-- To save a configured chart to a workbook, in the upper-right corner of the chart, select **Save to dashboard** and then **Save to workbook**.
+- To pin a configured chart to a dashboard, in the upper-right corner of the chart, select **Save to dashboard** > **Pin to dashboard**.
+- To save a configured chart to a workbook, in the upper-right corner of the chart, select **Save to dashboard** > **Save to workbook**.
## Alert rules
-You can use your visualization criteria to create a metric-based alert rule. The new alert rule includes your chart's target resource, metric, splitting, and filter dimensions. You can modify these settings by using the alert rule creation pane.
+You can use your visualization criteria to create a metric-based alert rule. The new alert rule includes your chart's target resource, metric, splitting, and filter dimensions. You can modify these settings by using the **Create an alert rule** pane.
-To create an alert rule,
-1. Select **New alert rule** in the upper-right corner of the chart
+To create an alert rule:
-1. On the **Condition** tab, the **Signal name** is defaulted to the metric from your chart. You can choose a different metric.
+1. Select **New alert rule** in the upper-right corner of the chart.
-1. Enter a **Threshold value**. The threshold value is the value that triggers the alert. The Preview chart shows the threshold value as a horizontal line over the metric values.
+ :::image source="./media/metrics-charts/new-alert.png" alt-text="Screenshot that shows the button for creating a new alert rule." lightbox="./media/metrics-charts/new-alert.png":::
-1. Select the **Details** tab.
+1. Select the **Condition** tab. The **Signal name** entry defaults to the metric from your chart. You can choose a different metric.
-1. On the **Details** tab, enter a **Name** and **Description** for the alert rule.
+1. Enter a number for **Threshold value**. The threshold value is the value that triggers the alert. The **Preview** chart shows the threshold value as a horizontal line over the metric values. When you're ready, select the **Details** tab.
-1. Select a **Severity** level for the alert rule. Severities include Critical, Error Warning, Informational, and Verbose.
+ :::image source="./media/metrics-charts/alert-rule-condition.png" alt-text="Screenshot that shows the Condition tab on the pane for creating an alert rule." lightbox="./media/metrics-charts/alert-rule-condition.png":::
-1. Select **Review + create** to review the alert rule, then select **Create** to create the alert rule.
+1. Enter **Name** and **Description** values for the alert rule.
-For more information, see [Create, view, and manage metric alerts](../alerts/alerts-metric.md).
+1. Select a **Severity** level for the alert rule. Severities include **Critical**, **Error**, **Warning**, **Informational**, and **Verbose**.
-## Correlate metrics to logs
+1. Select **Review + create** to review the alert rule.
-**Drill into Logs** helps you diagnose the root cause of anomalies in your metrics chart. Drilling into logs allows you to correlate spikes in your metrics chart to logs and queries.
+ :::image source="./media/metrics-charts/alert-rule-details.png" alt-text="Screenshot that shows the Details tab on the pane for creating an alert rule." lightbox="./media/metrics-charts/alert-rule-details.png":::
-The following table summarizes the types of logs and queries provided:
+1. Select **Create** to create the alert rule.
-| Term | Definition |
+For more information, see [Create, view, and manage metric alerts](../alerts/alerts-metric.md).
+
+## Correlating metrics to logs
+
+In Metrics Explorer, **Drill into Logs** helps you diagnose the root cause of anomalies in your metric chart. Drilling into logs allows you to correlate spikes in your metric chart to the following types of logs and queries:
+
+| Term | Definition |
||-|
-| Activity logs | Provides insight into the operations on each Azure resource in the subscription from the outside (the management plane) in addition to updates on Service Health events. Use the Activity log to determine the what, who, and when for any write operations (PUT, POST, DELETE) taken on the resources in your subscription. There's a single Activity log for each Azure subscription. |
-| Diagnostic log | Provides insight into operations that were performed within an Azure resource (the data plane). For example, getting a secret from a Key Vault or making a request to a database. The content of resource logs varies by the Azure service and resource type. You must enable logs for the resource. |
-| Recommended log | Scenario-based queries that you can use to investigate anomalies in Metrics Explorer. |
+| Activity log | Provides insight into the operations on each Azure resource in the subscription from the outside (the management plane), in addition to updates on Azure Service Health events. Use the activity log to determine the what, who, and when for any write operations (`PUT`, `POST`, or `DELETE`) taken on the resources in your subscription. There's a single activity log for each Azure subscription. |
+| Diagnostic log | Provides insight into operations that you performed within an Azure resource (the data plane). Examples include getting a secret from a key vault or making a request to a database. The content of resource logs varies by the Azure service and resource type. You must enable logs for the resource. |
+| Recommended log | Provides scenario-based queries that you can use to investigate anomalies in Metrics Explorer. |
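
For instance, after you drill into the activity log from a chart, a query along these lines surfaces recent management-plane write operations (a minimal sketch; the one-day window and the projected columns are illustrative choices):

```kusto
// Recent administrative (management-plane) operations from the activity log.
AzureActivity
| where TimeGenerated > ago(1d)
| where CategoryValue == "Administrative"
| project TimeGenerated, OperationNameValue, ActivityStatusValue, Caller, _ResourceId
| order by TimeGenerated desc
```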
-Currently, Drill into Logs is available for select resource providers. The following resource providers offer the complete Drill into Logs experience:
+Currently, **Drill into Logs** is available for select resource providers. The following resource providers offer the complete **Drill into Logs** experience:
- Application Insights
- Autoscale
-- App Services
-- Storage
+- Azure App Service
+- Azure Storage
+
+To diagnose a spike in failed requests:
+
+1. Select **Drill into Logs**.
- :::image source="./media/metrics-charts/drill-into-log-ai.png" alt-text="Screenshot that shows a spike in failures in app insights metrics pane." lightbox="./media/metrics-charts/drill-into-log-ai.png":::
+ :::image source="./media/metrics-charts/drill-into-log-ai.png" alt-text="Screenshot that shows a spike in failures on an Application Insights metrics pane." lightbox="./media/metrics-charts/drill-into-log-ai.png":::
-1. To diagnose the spike in failed requests, select **Drill into Logs**.
+1. In the dropdown list, select **Failures**.
- ![Screenshot shows the Drill into Logs dropdown menu.](./media/metrics-charts/drill-into-logs-dropdown.png)
+ :::image source="./media/metrics-charts/drill-into-logs-dropdown.png" alt-text="Screenshot that shows the dropdown menu for drilling into logs." lightbox="./media/metrics-charts/drill-into-logs-dropdown.png":::
-1. Select **Failures** to open a custom failure pane that provides you with the failed operations, top exceptions types, and dependencies.
+1. On the custom failure pane, check for failed operations, top exception types, and failed dependencies.
- ![Screenshot of app insights failure pane.](./media/metrics-charts/ai-failure-blade.png)
+ :::image source="./media/metrics-charts/ai-failure-blade.png" alt-text="Screenshot of the Application Insights failure pane." lightbox="./media/metrics-charts/ai-failure-blade.png":::
## Next steps
-To create actionable dashboards by using metrics, see [Creating custom KPI dashboards](../app/tutorial-app-dashboards.md).
+To create actionable dashboards by using metrics, see [Create custom KPI dashboards](../app/tutorial-app-dashboards.md).
azure-monitor Migrate To Azure Storage Lifecycle Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/migrate-to-azure-storage-lifecycle-policy.md
Previously updated : 07/27/2022 Last updated : 08/16/2023 #Customer intent: As a dev-ops administrator I want to migrate my retention setting from diagnostic setting retention storage to Azure Storage lifecycle management so that it continues to work after the feature has been deprecated.
This guide walks you through migrating from using Azure diagnostic settings stor
> [!IMPORTANT] > **Deprecation Timeline.**
-> - March 31, 2023 – The Diagnostic Settings Storage Retention feature will no longer be available to configure new retention rules for log data. If you have configured retention settings, you'll still be able to see and change them.
-> - September 30, 2023 – You will no longer be able to use the API or Azure portal to configure retention setting unless you're changing them to *0*. Existing retention rules will still be respected.
+> - March 31, 2023 – The Diagnostic Settings Storage Retention feature will no longer be available to configure new retention rules for log data. This includes using the portal, the CLI, PowerShell, and ARM and Bicep templates. If you have configured retention settings, you'll still be able to see and change them in the portal.
+> - September 30, 2023 – You will no longer be able to use the API (CLI, PowerShell, or templates) or the Azure portal to configure retention settings unless you're changing them to *0*. Existing retention rules will still be respected.
> - September 30, 2025 – All retention functionality for the Diagnostic Settings Storage Retention feature will be disabled across all environments.
To migrate your diagnostics settings retention rules, follow the steps below:
1. Set your retention time, then select **Next** :::image type="content" source="./media/retention-migration/lifecycle-management-add-rule-base-blobs.png" alt-text="A screenshot showing the Base blobs tab for adding a lifecycle rule.":::
-1. On the **Filters** tab, under **Blob prefix** set path or prefix to the container or logs you want the retention rule to apply to.
-For example, for all Function App logs, you could use the container *insights-logs-functionapplogs* to set the retention for all Function App logs.
-To set the rule for a specific subscription, resource group, and function app name, use *insights-logs-functionapplogs/resourceId=/SUBSCRIPTIONS/\<your subscription Id\>/RESOURCEGROUPS/\<your resource group\>/PROVIDERS/MICROSOFT.WEB/SITES/\<your function app name\>*.
+1. On the **Filters** tab, under **Blob prefix**, set the path or prefix to the container or logs you want the retention rule to apply to. The path or prefix can be at any level within the container and applies to all blobs under that path or prefix.
+For example, for *all* insights activity logs, use the container *insights-activity-logs* to set the retention for all of the logs in that container.
+To set the rule for a specific web app, use *insights-activity-logs/ResourceId=/SUBSCRIPTIONS/\<your subscription Id\>/RESOURCEGROUPS/\<your resource group\>/PROVIDERS/MICROSOFT.WEB/SITES/\<your webapp name\>*.
+
+ Use the Storage browser to help you find the path or prefix.
+ The example below shows the prefix for a specific web app: *insights-activity-logs/ResourceId=/SUBSCRIPTIONS/d05145d-4a5d-4a5d-4a5d-5267eae1bbc7/RESOURCEGROUPS/rg-001/PROVIDERS/MICROSOFT.WEB/SITES/appfromdocker1*.
+ To set the rule for all resources in the resource group, use *insights-activity-logs/ResourceId=/SUBSCRIPTIONS/d05145d-4a5d-4a5d-4a5d-5267eae1bbc7/RESOURCEGROUPS/rg-001*.
+ :::image type="content" source="./media/retention-migration/blob-prefix.png" alt-text="A screenshot showing the Storage browser and resource path." lightbox="./media/retention-migration/blob-prefix.png":::
1. Select **Add** to save the rule. ## Next steps
azure-monitor Migrate To Batch Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/migrate-to-batch-api.md
# How to migrate from the metrics API to the getBatch API
-Heavy use of the [metrics API](https://learn.microsoft.com/rest/api/monitor/metrics/list?tabs=HTTP) can result in throttling or performance problems. Migrating to the [metrics:getBatch](https://learn.microsoft.com/rest/api/monitor/metrics-data-plane/batch?tabs=HTTP) API allows you to query multiple resources in a single REST request. The two APIs share a common set of query parameter and response formats that make migration easy.
+Heavy use of the [metrics API](/rest/api/monitor/metrics/list?tabs=HTTP) can result in throttling or performance problems. Migrating to the [metrics:getBatch](/rest/api/monitor/metrics-data-plane/batch?tabs=HTTP) API allows you to query multiple resources in a single REST request. The two APIs share a common set of query parameters and response formats, which makes migration easy.
## Request format The metrics:getBatch API request has the following format:
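As a rough sketch, a batch request has this shape; the regional data-plane host name shown here is an assumption, while the `metrics:getBatch` path segment and the `api-version` value come from the migration steps below. The remaining query parameters mirror the single-resource metrics API.

```
POST https://<region>.metrics.monitor.azure.com/subscriptions/<subscriptionId>/metrics:getBatch?metricnamespace=<namespace>&api-version=2023-03-01-preview
```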
Consider the following restrictions on which resources can be batched together w
- All resources in a batch must be in the same Azure region. - All resources in a batch must be the same resource type.
-To help identify groups of resources that meet these criteria, run the following Azure Resource Graph query using the [Azure Resource Graph Explorer](https://portal.azure.com/#view/HubsExtension/ArgQueryBlade), or via the [Azure Resource Manager Resources query API](https://learn.microsoft.com/rest/api/azureresourcegraph/resourcegraph(2021-03-01)/resources/resources?tabs=HTTP).
+To help identify groups of resources that meet these criteria, run the following Azure Resource Graph query using the [Azure Resource Graph Explorer](https://portal.azure.com/#view/HubsExtension/ArgQueryBlade), or via the [Azure Resource Manager Resources query API](/rest/api/azureresourcegraph/resourcegraph(2021-03-01)/resources/resources?tabs=HTTP).
``` resources
GET https://management.azure.com/subscriptions/12345678-1234-1234-1234-123456789
from `/subscriptions/12345678-1234-1234-1234-123456789abc/resourceGroups/sample-test/providers/Microsoft.Storage/storageAccounts/testaccount/providers/microsoft.Insights/metrics` to `/subscriptions/12345678-1234-1234-1234-123456789abc/metrics:getBatch`
-1. The `metricNamespace` query param is required for metrics:getBatch. For Azure standard metrics, the namespace name is usually the resource type of the resources you've specified. To check the namespace value to use, see the [metrics namespaces API](https://learn.microsoft.com/rest/api/monitor/metric-namespaces/list?tabs=HTTP)
+1. The `metricNamespace` query param is required for metrics:getBatch. For Azure standard metrics, the namespace name is usually the resource type of the resources you've specified. To check the namespace value to use, see the [metrics namespaces API](/rest/api/monitor/metric-namespaces/list?tabs=HTTP).
1. Update the api-version query parameter as follows: `&api-version=2023-03-01-preview` 1. The filter query param isn't prefixed with a `$` in the metrics:getBatch API. Change the query param from `$filter=` to `filter=`. 1. The metrics:getBatch API is a POST call with a body that contains a comma-separated list of resourceIds in the following format:
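A sketch of that request body (the lowercase `resourceids` field name is an assumption, and the storage account IDs are placeholders):

```json
{
  "resourceids": [
    "/subscriptions/12345678-1234-1234-1234-123456789abc/resourceGroups/sample-test/providers/Microsoft.Storage/storageAccounts/testaccount1",
    "/subscriptions/12345678-1234-1234-1234-123456789abc/resourceGroups/sample-test/providers/Microsoft.Storage/storageAccounts/testaccount2"
  ]
}
```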
How the top parameter works in the context of the batch API can be a little conf
### 401 authorization errors
-The individual metrics API requires a user have the [Monitoring Reader](https://learn.microsoft.com/azure/role-based-access-control/built-in-roles#monitoring-reader) permission on the resource being queried. Because the metrics:getBatch API is a subscription level API, users must have the Monitoring Reader permission for the queried subscription to use the batch API. Even if users have Monitoring Reader on all the resources being queried in the batch API, the request fails if the user doesn't have Monitoring Reader on the subscription itself.
+The individual metrics API requires a user have the [Monitoring Reader](/azure/role-based-access-control/built-in-roles#monitoring-reader) permission on the resource being queried. Because the metrics:getBatch API is a subscription level API, users must have the Monitoring Reader permission for the queried subscription to use the batch API. Even if users have Monitoring Reader on all the resources being queried in the batch API, the request fails if the user doesn't have Monitoring Reader on the subscription itself.
### 529 throttling errors
azure-monitor Prometheus Rule Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/prometheus-rule-groups.md
Last updated 09/28/2022
# Azure Monitor managed service for Prometheus rule groups Rules in Prometheus act on data as it's collected. They're configured as part of a Prometheus rule group, which is stored in [Azure Monitor workspace](azure-monitor-workspace-overview.md). Rules are run sequentially in the order they're defined in the group. - ## Rule types There are two types of Prometheus rules as described in the following table.
There are two types of Prometheus rules as described in the following table.
| Recording |[Recording rules](https://aka.ms/azureprometheus-promio-recrules) allow you to precompute frequently needed or computationally extensive expressions and store their result as a new set of time series. Time series created by recording rules are ingested back to your Azure Monitor workspace as new Prometheus metrics. | ## Create Prometheus rules
-Azure Managed Prometheus rule groups, recording rules and alert rules can be created and configured using The Azure resource type **Microsoft.AlertsManagement/prometheusRuleGroups**, where the alert rules and recording rules are defined as part of the rule group properties.Prometheus rule groups are defined with a scope of a specific [Azure Monitor workspace](azure-monitor-workspace-overview.md). Prometheus rule groups can be created using Azure Resource Manager (ARM) templates, API, Azure CLI, or PowerShell.
+Azure Managed Prometheus rule groups, recording rules and alert rules can be created and configured using The Azure resource type **Microsoft.AlertsManagement/prometheusRuleGroups**, where the alert rules and recording rules are defined as part of the rule group properties. Prometheus rule groups are defined with a scope of a specific [Azure Monitor workspace](azure-monitor-workspace-overview.md). Prometheus rule groups can be created using Azure Resource Manager (ARM) templates, API, Azure CLI, or PowerShell.
+
+Azure managed Prometheus rule groups follow the structure and terminology of the open-source Prometheus rule groups. Rule names, expressions, the `for` clause, labels, and annotations are all supported in the Azure version. Note the following key differences between open-source (OSS) rule groups and Azure managed Prometheus:
+* Azure managed Prometheus rule groups are managed as Azure resources, and include necessary information for resource management, such as the subscription and resource group where the Azure rule group should reside.
+* Azure managed Prometheus alert rules include dedicated properties that allow alerts to be processed like other Azure Monitor alerts. For example, alert severity, action group association, and alert auto resolve configuration are supported as part of Azure managed Prometheus alert rules.
> [!NOTE] > For your AKS or Arc-enabled Kubernetes clusters, you can use some of the recommended alert rules. See pre-defined alert rules [here](../containers/container-insights-metric-alerts.md#enable-prometheus-alert-rules). ### Limiting rules to a specific cluster
-You can optionally limit the rules in a rule group to query data originating from a specific cluster, using the rule group `clusterName` property.
-You should limit rules to a single cluster if your Azure Monitor workspace contains a large amount of data from multiple clusters. In such a case, there's a concern that running a single set of rules on all the data may cause performance or throttling issues. By using the `clusterName` property, you can create multiple rule groups, each configured with the same rules, and therefore limit each group to cover a different cluster.
+You can optionally limit the rules in a rule group to query data originating from a single specific cluster, by adding a cluster scope to your rule group, by using the rule group `clusterName` property, or both.
+You should limit rules to a single cluster if your Azure Monitor workspace contains a large amount of data from multiple clusters. In such a case, there's a concern that running a single set of rules on all the data may cause performance or throttling issues. By using the cluster scope, you can create multiple rule groups, each configured with the same rules, with each group covering a different cluster.
+
+To limit your rule group to a cluster scope, you should add the Azure Resource ID of your cluster to the rule group **scopes[]** list. **The scopes list must still include the Azure Monitor workspace resource ID**. The following cluster resource types are supported as a cluster scope:
+* Azure Kubernetes Service clusters (AKS) (Microsoft.ContainerService/managedClusters)
+* Azure Arc-enabled Kubernetes clusters (Microsoft.kubernetes/connectedClusters)
+* Azure connected appliances (Microsoft.ResourceConnector/appliances)
+
+In addition to the cluster ID, you can configure the `clusterName` property of your rule group. The `clusterName` property must match the `cluster` label that is added to your metrics when scraped from a specific cluster. By default, this label is set to the last part (resource name) of your cluster ID. If you've changed this label using the [`cluster_alias`](../essentials/prometheus-metrics-scrape-configuration.md#cluster-alias) setting in your cluster scraping configmap, you must include the updated value in the rule group `clusterName` property. If your scraping uses the default `cluster` label value, the `clusterName` property is optional.
+
+Here's an example of how a rule group is configured to limit query to a specific cluster:
-- The `clusterName` value must be identical to the `cluster` label that is added to the metrics from a specific cluster during data collection.-- If `clusterName` isn't specified for a specific rule group, the rules in the group query all the data in the workspace from all clusters.
+``` json
+{
+ "name": "sampleRuleGroup",
+ "type": "Microsoft.AlertsManagement/prometheusRuleGroups",
+ "apiVersion": "2023-03-01",
+ "location": "northcentralus",
+ "properties": {
+ "description": "Sample Prometheus Rule Group limited to a specific cluster",
+ "scopes": [
+ "/subscriptions/<subscription-id>/resourcegroups/<resource-group-name>/providers/microsoft.monitor/accounts/<azure-monitor-workspace-name>",
+ "/subscriptions/<subscription-id>/resourcegroups/<resource-group-name>/providers/microsoft.containerservice/managedclusters/<myClusterName>"
+ ],
+ "clusterName": "<myCLusterName>",
+ "rules": [
+ {
+ ...
+ }
+ ]
+ }
+}
+```
+If neither a cluster ID scope nor the `clusterName` property is specified for a rule group, the rules in the group query data from all the clusters in the workspace.
### Creating Prometheus rule group using Resource Manager template
The basic steps are as follows:
2. Deploy the template using any deployment method, such as [Azure portal](../../azure-resource-manager/templates/deploy-portal.md), [Azure CLI](../../azure-resource-manager/templates/deploy-cli.md), [Azure PowerShell](../../azure-resource-manager/templates/deploy-powershell.md), or [Rest API](../../azure-resource-manager/templates/deploy-rest.md). ### Template example for a Prometheus rule group
-Following is a sample template that creates a Prometheus rule group, including one recording rule and one alert rule. This template creates a resource of type `Microsoft.AlertsManagement/prometheusRuleGroups`. The rules are executed in the order they appear within a group.
+Following is a sample template that creates a Prometheus rule group, including one recording rule and one alert rule. This template creates a resource of type `Microsoft.AlertsManagement/prometheusRuleGroups`. The scope of this group is limited to a single AKS cluster. The rules are executed in the order they appear within a group.
``` json {
Following is a sample template that creates a Prometheus rule group, including o
"properties": { "description": "Sample Prometheus Rule Group", "scopes": [
- "/subscriptions/<subscription-id>/resourcegroups/<resource-group-name>/providers/microsoft.monitor/accounts/<azure-monitor-workspace-name>"
+ "/subscriptions/<subscription-id>/resourcegroups/<resource-group-name>/providers/microsoft.monitor/accounts/<azure-monitor-workspace-name>",
+ "/subscriptions/<subscription-id>/resourcegroups/<resource-group-name>/providers/microsoft.containerservice/managedclusters/<myClusterName>"
], "enabled": true, "clusterName": "<myCLusterName>",
Following is a sample template that creates a Prometheus rule group, including o
}, "actions": [ {
- "actionGroupId": "/subscriptions/<subscription-id>/resourcegroups/<resource-group-name>/providers/microsoft.insights/actiongroups/<action-group-name>"
+ "actionGroupID": "/subscriptions/<subscription-id>/resourcegroups/<resource-group-name>/providers/microsoft.insights/actiongroups/<action-group-name>"
} ] }
The rule group contains the following properties.
| `name` | True | string | Prometheus rule group name | | `type` | True | string | `Microsoft.AlertsManagement/prometheusRuleGroups` | | `apiVersion` | True | string | `2023-03-01` |
-| `location` | True | string | Resource location from regions supported in the preview |
-| `properties.description` | False | string | Rule group description |
-| `properties.scopes` | True | string[] | Target Azure Monitor workspace. Only one scope currently supported |
+| `location` | True | string | Resource location from regions supported in the preview. |
+| `properties.description` | False | string | Rule group description. |
+| `properties.scopes` | True | string[] | Must include the target Azure Monitor workspace ID. Can optionally also include one cluster ID. |
| `properties.enabled` | False | boolean | Enable/disable group. Default is true. |
-| `properties.clusterName` | False | string | Apply rule to data from a specific cluster. Default is apply to all data in workspace. |
+| `properties.clusterName` | False | string | Must match the `cluster` label that is added to metrics scraped from your target cluster. By default, set to the last part (resource name) of the cluster ID that appears in `scopes[]`. |
| `properties.interval` | False | string | Group evaluation interval. Default = PT1M | ### Recording rules
The `rules` section contains the following properties for alerting rules.
|:---|:---|:---|:---|
| `alert` | False | string | Alert rule name |
| `expression` | True | string | PromQL expression to evaluate. |
-| `for` | False | string | Alert firing timeout. Values - 'PT1M', 'PT5M' etc. |
+| `for` | False | string | Alert firing timeout. Values - PT1M, PT5M etc. |
| `labels` | False | object | Prometheus alert rule labels, as key-value pairs. These labels are added to alerts fired by this rule. |
| `rules.annotations` | False | object | Annotation key-value pairs to add to the alert. |
| `enabled` | False | boolean | Enable/disable rule. Default is true. |
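
To make the two rule shapes concrete, here's a sketch of a `rules` array with one recording rule and one alert rule. The property names follow the tables above and the earlier template fragment; the `record` property name, the expressions, and the severity value are illustrative assumptions rather than a verified schema.

```json
"rules": [
  {
    "record": "job:http_requests:rate5m",
    "expression": "sum by (job) (rate(http_requests_total[5m]))"
  },
  {
    "alert": "HighRequestRate",
    "expression": "job:http_requests:rate5m > 100",
    "for": "PT5M",
    "labels": { "team": "web" },
    "annotations": { "description": "Request rate stayed above 100 req/s for 5 minutes." },
    "enabled": true,
    "severity": 3,
    "actions": [
      { "actionGroupID": "/subscriptions/<subscription-id>/resourcegroups/<resource-group-name>/providers/microsoft.insights/actiongroups/<action-group-name>" }
    ]
  }
]
```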
The `rules` section contains the following properties for alerting rules.
If you have a [Prometheus rules configuration file](https://prometheus.io/docs/prometheus/latest/configuration/recording_rules/#configuring-rules) (in YAML format), you can now convert it to an Azure Prometheus rule group ARM template, using the [az-prom-rules-converter utility](https://github.com/Azure/prometheus-collector/tree/main/tools/az-prom-rules-converter#az-prom-rules-converter). The rules file can contain definition of one or more rule groups.
-In addition to the rules file, you can provide the utility with additional properties that are needed to create the Azure Prometheus rule groups, including: subscription, resource group, location, target Azure Monitor workspace, target cluster name, and action groups (used for alert rules). The utility creates a template file that can be deployed directly or within a deployment pipe providing some of these properties as parameters. Note that properties provided to the utility are used for all the rule groups in the template, e.g., all rule groups in the file will be created in the same subscription/resource group/location, using the same Azure Monitor workspace, etc. If an action group is provided as a parameter to the utility, the same action group will be used in all the alert rules in the template. If you want to change this default configuration (e.g., use different action groups in different rules) you can edit the resulting template according to your needs, before deploying it.
+In addition to the rules file, you must provide the utility with other properties that are needed to create the Azure Prometheus rule groups, including: subscription, resource group, location, target Azure Monitor workspace, target cluster ID and name, and action groups (used for alert rules). The utility creates a template file that can be deployed directly or within a deployment pipeline, providing some of these properties as parameters. Properties that you provide to the utility are used for all the rule groups in the template. For example, all rule groups in the file are created in the same subscription, resource group, and location, and use the same Azure Monitor workspace. If an action group is provided as a parameter to the utility, the same action group is used in all the alert rules in the template. If you want to change this default configuration (for example, to use different action groups in different rules), you can edit the resulting template according to your needs before deploying it.
> [!NOTE]
-> !The az-prom-convert-utility is provided as a courtesy tool. We recommend that you review the resulting template and verify it matches your intended configuration.
+> The az-prom-rules-converter utility is provided as a courtesy tool. We recommend that you review the resulting template and verify it matches your intended configuration.
### Creating Prometheus rule group using Azure CLI
To enable or disable a rule, select the rule in the Azure portal. Select either
> After you disable or re-enable a rule or a rule group, it may take a few minutes for the rule group list to reflect the updated status of the rule or the group. + ## Next steps - [Learn more about the Azure alerts](../alerts/alerts-types.md).
azure-monitor Azure Monitor Data Explorer Proxy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/azure-monitor-data-explorer-proxy.md
Title: Cross-resource query Azure Data Explorer by using Azure Monitor
-description: Use Azure Monitor to perform cross-product queries between Azure Data Explorer, Log Analytics workspaces, and classic Application Insights applications in Azure Monitor.
+ Title: Query data in Azure Data Explorer and Azure Resource Graph from Azure Monitor
+description: Query data in Azure Data Explorer and Azure Resource Graph from Azure Monitor.
Previously updated : 07/25/2023 Last updated : 08/22/2023 -
-# Cross-resource query Azure Data Explorer by using Azure Monitor
-Azure Monitor supports cross-service queries between Azure Data Explorer, [Application Insights](../app/app-insights-overview.md), and [Log Analytics](../logs/data-platform-logs.md). You can then query your Azure Data Explorer cluster by using Log Analytics or Application Insights tools and refer to it in a cross-service query. This article shows how to make a cross-service query.
-
-The following diagram shows the Azure Monitor cross-service flow:
+# Query data in Azure Data Explorer and Azure Resource Graph from Azure Monitor
+Azure Monitor lets you query data in [Azure Data Explorer](/azure/data-explorer/data-explorer-overview) and [Azure Resource Graph](../../governance/resource-graph/overview.md) from your Log Analytics workspace and Application Insights resources. This article explains how to query data in Azure Resource Graph and Azure Data Explorer from Azure Monitor.
-## Cross query your Log Analytics or Application Insights resources and Azure Data Explorer
+You can run cross-service queries by using any client tools that support Kusto Query Language (KQL) queries, including the Log Analytics web UI, workbooks, PowerShell, and the REST API.
-You can run cross-resource queries by using client tools that support Kusto queries. Examples of these tools include the Log Analytics web UI, workbooks, PowerShell, and the REST API.
-
-Enter the identifier for an Azure Data Explorer cluster in a query within the `adx` pattern, followed by the database name and table.
+## Permissions required
-```kusto
-adx('https://help.kusto.windows.net/Samples').StormEvents
-```
+To run a cross-service query, you need:
-> [!NOTE]
->* Database names are case sensitive.
->* Cross-resource query as an alert isn't supported.
->* Identifying the Timestamp column in the cluster isn't supported. The Log Analytics Query API won't pass along the time filter.
-> * The cross-service query ability is used for data retrieval only. For more information, see [Function supportability](#function-supportability).
-> * Private Link is not supported with this feature.
+- `Microsoft.OperationalInsights/workspaces/query/*/read` permissions to the Log Analytics workspaces you query, as provided by the [Log Analytics Reader built-in role](../logs/manage-access.md#log-analytics-reader), for example.
+- Reader permissions to the resources you query in Azure Resource Graph.
+- Viewer permissions to the tables you query in Azure Data Explorer.
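+
+For example, workspace query permission could be granted with the Azure CLI (a sketch; the assignee and scope values are placeholders):
+
+```azurecli
+az role assignment create \
+  --assignee "user@contoso.com" \
+  --role "Log Analytics Reader" \
+  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.OperationalInsights/workspaces/<workspace-name>"
+```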
## Function supportability
-The Azure Monitor cross-service queries support functions for Application Insights, Log Analytics, and Azure Data Explorer.
-This capability enables cross-cluster queries to reference an Azure Monitor or Azure Data Explorer tabular function directly.
+Azure Monitor cross-service queries support functions for Application Insights, Log Analytics, Azure Data Explorer, and Azure Resource Graph.
+This capability enables cross-cluster queries to reference an Azure Monitor, Azure Data Explorer, or Azure Resource Graph tabular function directly.
The following commands are supported with the cross-service query: * `.show functions` * `.show function {FunctionName}` * `.show database {DatabaseName} schema as json`
-## Combine Azure Data Explorer cluster tables with a Log Analytics workspace
+## Query data in Azure Data Explorer
+
+Enter the identifier for an Azure Data Explorer cluster in a query within the `adx` pattern, followed by the database name and table.
+
+```kusto
+adx('https://help.kusto.windows.net/Samples').StormEvents
+```
+### Combine Azure Data Explorer cluster tables with a Log Analytics workspace
Use the `union` command to combine cluster tables with a Log Analytics workspace.
+For example:
+ ```kusto union customEvents, adx('https://help.kusto.windows.net/Samples').StormEvents | take 10
union customEvents, adx('https://help.kusto.windows.net/Samples').StormEvents
```kusto let CL1 = adx('https://help.kusto.windows.net/Samples').StormEvents; union customEvents, CL1 | take 10
-```
+
+```
> [!TIP] > Shorthand format is allowed: *ClusterName*/*InitialCatalog*. For example, `adx('help/Samples')` is translated to `adx('help.kusto.windows.net/Samples')`.
-When you use the [`join` operator](/azure/data-explorer/kusto/query/joinoperator) instead of union, you're required to use a [`hint`](/azure/data-explorer/kusto/query/joinoperator#join-hints) to combine the data in the Azure Data Explorer cluster with the Log Analytics workspace. Use `Hint.remote={Direction of the Log Analytics Workspace}`. For example:
+When you use the [`join` operator](/azure/data-explorer/kusto/query/joinoperator) instead of union, you're required to use a [`hint`](/azure/data-explorer/kusto/query/joinoperator#join-hints) to combine the data in the Azure Data Explorer cluster with the Log Analytics workspace. Use `Hint.remote={Direction of the Log Analytics Workspace}`.
-```kusto
+For example:
+
+```kusto
AzureDiagnostics | join hint.remote=left adx("cluster=ClusterURI").AzureDiagnostics on (ColumnName) ```
-## Join data from an Azure Data Explorer cluster in one tenant with an Azure Monitor resource in another
+### Join data from an Azure Data Explorer cluster in one tenant with an Azure Monitor resource in another
Cross-tenant queries between the services aren't supported. You're signed in to a single tenant for running the query that spans both resources.
If the Azure Data Explorer resource is in Tenant A and the Log Analytics workspa
* Use Azure Data Explorer to add roles for principals in different tenants. Add your user ID in Tenant B as an authorized user on the Azure Data Explorer cluster. Validate that the [TrustedExternalTenant](/powershell/module/az.kusto/update-azkustocluster) property on the Azure Data Explorer cluster contains Tenant B. Run the cross query fully in Tenant B. * Use [Lighthouse](../../lighthouse/index.yml) to project the Azure Monitor resource into Tenant A.
-## Connect to Azure Data Explorer clusters from different tenants
+### Connect to Azure Data Explorer clusters from different tenants
Kusto Explorer automatically signs you in to the tenant to which the user account originally belongs. To access resources in other tenants with the same user account, you must explicitly specify `TenantId` in the connection string: `Data Source=https://ade.applicationinsights.io/subscriptions/SubscriptionId/resourcegroups/ResourceGroupName;Initial Catalog=NetDefaultDB;AAD Federated Security=True;Authority ID=TenantId`
+## Query data in Azure Resource Graph (Preview)
+
+Enter the `arg("")` pattern, followed by the Azure Resource Graph table name.
+
+For example:
+
+```kusto
+arg("").<Azure-Resource-Graph-table-name>
+```
+
+Here are some sample Azure Log Analytics queries that use the new Azure Resource Graph cross-service query capabilities:
+
+- Filter a Log Analytics query based on the results of an Azure Resource Graph query:
+
+```kusto
+arg("").Resources
+| where type == "microsoft.compute/virtualmachines" and properties.hardwareProfile.vmSize startswith "Standard_D"
+| join (
+ Heartbeat
+ | where TimeGenerated > ago(1d)
+ | distinct Computer
+ )
+ on $left.name == $right.Computer
+```
+
+- Create an alert rule that applies only to certain resources taken from an ARG query:
+ - Exclude resources based on tags – for example, to avoid triggering alerts for VMs with a "Test" tag.
+
+```kusto
+arg("").Resources
+| where tags.environment=~'Test'
+| project name
+
+```
+
+- Retrieve performance data related to CPU utilization and filter to resources with the "prod" tag.
+
+```kusto
+InsightsMetrics
+| where Name == "UtilizationPercentage"
+| lookup (
+ arg("").Resources
+ | where type == 'microsoft.compute/virtualmachines'
+ | project _ResourceId=tolower(id), tags
+ )
+ on _ResourceId
+| where tostring(tags.Env) == "Prod"
+```
+
+More use cases:
+- Use a tag to determine whether VMs should be running 24x7 or should be shut down at night.
+- Show alerts on any server that contains a certain number of cores.
+
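+As a sketch of the first of these use cases (the `AlwaysOn` tag name is an assumed convention, not a built-in tag):
+
+```kusto
+// VMs not tagged for 24x7 operation - candidates for a nightly shutdown.
+arg("").Resources
+| where type == "microsoft.compute/virtualmachines"
+| where tostring(tags.AlwaysOn) != "true"
+| project name, resourceGroup, subscriptionId
+```
+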
+### Combine Azure Resource Graph tables with a Log Analytics workspace
+
+Use the `union` command to combine Azure Resource Graph tables with a Log Analytics workspace.
+
+For example:
+
+```kusto
+union AzureActivity, arg("").Resources
+| take 10
+```
+```kusto
+let CL1 = arg("").Resources;
+union AzureActivity, CL1 | take 10
+
+```
+
+When you use the [`join` operator](/azure/data-explorer/kusto/query/joinoperator) instead of union, you're required to use a [`hint`](/azure/data-explorer/kusto/query/joinoperator#join-hints) to combine the data in Azure Resource Graph with the Log Analytics workspace. Use `Hint.remote={Direction of the Log Analytics Workspace}`. For example:
+
+```kusto
+Perf | where ObjectName == "Memory" and (CounterName == "Available MBytes Memory")
+| extend _ResourceId = replace_string(replace_string(replace_string(_ResourceId, 'microsoft.compute', 'Microsoft.Compute'), 'virtualmachines','virtualMachines'),"resourcegroups","resourceGroups")
+| join hint.remote=left (arg("").Resources | where type =~ 'Microsoft.Compute/virtualMachines' | project _ResourceId=id, tags) on _ResourceId | project-away _ResourceId1 | where tostring(tags.env) == "prod"
+
+```
+
+## Create an alert based on a cross-service query
+
+To create a new alert rule based on a cross-service query, follow the steps in [Create a new alert rule](../alerts/alerts-create-new-alert-rule.md), selecting your Log Analytics workspace on the **Scope** tab.
+
+## Limitations
+
+* Database names are case sensitive.
+* Identifying the Timestamp column in the cluster isn't supported. The Log Analytics Query API won't pass along the time filter.
+* The cross-service query ability is used for data retrieval only.
+* [Private Link](../logs/private-link-security.md) does not support cross-service queries.
+ ## Next steps * [Write queries](/azure/data-explorer/write-queries)
-* [Query data in Azure Monitor by using Azure Data Explorer](/azure/data-explorer/query-monitor-data)
* [Perform cross-resource log queries in Azure Monitor](../logs/cross-workspace-query.md)++
azure-monitor Basic Logs Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/basic-logs-configure.md
All custom tables created with or migrated to the [data collection rule (DCR)-ba
| Container Apps | [ContainerAppConsoleLogs](/azure/azure-monitor/reference/tables/containerappconsoleLogs) | | Container Insights | [ContainerLogV2](/azure/azure-monitor/reference/tables/containerlogv2) | | Container Apps Environments | [AppEnvSpringAppConsoleLogs](/azure/azure-monitor/reference/tables/AppEnvSpringAppConsoleLogs) |
-| Communication Services | [ACSCallAutomationIncomingOperations](/azure/azure-monitor/reference/tables/ACSCallAutomationIncomingOperations)<br>[ACSCallAutomationMediaSummary](/azure/azure-monitor/reference/tables/ACSCallAutomationMediaSummary)<br>[ACSCallRecordingIncomingOperations](/azure/azure-monitor/reference/tables/ACSCallRecordingIncomingOperations)<br>[ACSCallRecordingSummary](/azure/azure-monitor/reference/tables/ACSCallRecordingSummary)<br>[ACSCallSummary](/azure/azure-monitor/reference/tables/ACSCallSummary)<br>[ACSRoomsIncomingOperations](/azure/azure-monitor/reference/tables/acsroomsincomingoperations) |
+| Communication Services | [ACSCallAutomationIncomingOperations](/azure/azure-monitor/reference/tables/ACSCallAutomationIncomingOperations)<br>[ACSCallAutomationMediaSummary](/azure/azure-monitor/reference/tables/ACSCallAutomationMediaSummary)<br>[ACSCallRecordingIncomingOperations](/azure/azure-monitor/reference/tables/ACSCallRecordingIncomingOperations)<br>[ACSCallRecordingSummary](/azure/azure-monitor/reference/tables/ACSCallRecordingSummary)<br>[ACSCallSummary](/azure/azure-monitor/reference/tables/ACSCallSummary)<br>[ACSJobRouterIncomingOperations](/azure/azure-monitor/reference/tables/ACSJobRouterIncomingOperations)<br>[ACSRoomsIncomingOperations](/azure/azure-monitor/reference/tables/acsroomsincomingoperations) |
| Confidential Ledgers | [CCFApplicationLogs](/azure/azure-monitor/reference/tables/CCFApplicationLogs) | | Data Manager for Energy | [OEPDataplaneLogs](/azure/azure-monitor/reference/tables/OEPDataplaneLogs) | | Dedicated SQL Pool | [SynapseSqlPoolSqlRequests](/azure/azure-monitor/reference/tables/synapsesqlpoolsqlrequests)<br>[SynapseSqlPoolRequestSteps](/azure/azure-monitor/reference/tables/synapsesqlpoolrequeststeps)<br>[SynapseSqlPoolExecRequests](/azure/azure-monitor/reference/tables/synapsesqlpoolexecrequests)<br>[SynapseSqlPoolDmsWorkers](/azure/azure-monitor/reference/tables/synapsesqlpooldmsworkers)<br>[SynapseSqlPoolWaits](/azure/azure-monitor/reference/tables/synapsesqlpoolwaits) |
-| Dev Center | [DevCenterDiagnosticLogs](/azure/azure-monitor/reference/tables/DevCenterDiagnosticLogs) |
+| Dev Center | [DevCenterDiagnosticLogs](/azure/azure-monitor/reference/tables/DevCenterDiagnosticLogs)<br>[DevCenterResourceOperationLogs](/azure/azure-monitor/reference/tables/DevCenterResourceOperationLogs) |
| Data Transfer | [DataTransferOperations](/azure/azure-monitor/reference/tables/DataTransferOperations) | | Event Hubs | [AZMSArchiveLogs](/azure/azure-monitor/reference/tables/AZMSArchiveLogs)<br>[AZMSAutoscaleLogs](/azure/azure-monitor/reference/tables/AZMSAutoscaleLogs)<br>[AZMSCustomerManagedKeyUserLogs](/azure/azure-monitor/reference/tables/AZMSCustomerManagedKeyUserLogs)<br>[AZMSKafkaCoordinatorLogs](/azure/azure-monitor/reference/tables/AZMSKafkaCoordinatorLogs)<br>[AZMSKafkaUserErrorLogs](/azure/azure-monitor/reference/tables/AZMSKafkaUserErrorLogs) | | Firewalls | [AZFWFlowTrace](/azure/azure-monitor/reference/tables/AZFWFlowTrace) |
All custom tables created with or migrated to the [data collection rule (DCR)-ba
| Storage | [StorageBlobLogs](/azure/azure-monitor/reference/tables/StorageBlobLogs)<br>[StorageFileLogs](/azure/azure-monitor/reference/tables/StorageFileLogs)<br>[StorageQueueLogs](/azure/azure-monitor/reference/tables/StorageQueueLogs)<br>[StorageTableLogs](/azure/azure-monitor/reference/tables/StorageTableLogs) | | Synapse | [SynapseSqlPoolExecRequests](/azure/azure-monitor/reference/tables/SynapseSqlPoolExecRequests)<br>[SynapseSqlPoolRequestSteps](/azure/azure-monitor/reference/tables/SynapseSqlPoolRequestSteps)<br>[SynapseSqlPoolDmsWorkers](/azure/azure-monitor/reference/tables/SynapseSqlPoolDmsWorkers)<br>[SynapseSqlPoolWaits](/azure/azure-monitor/reference/tables/SynapseSqlPoolWaits) | | Storage Mover | [StorageMoverJobRunLogs](/azure/azure-monitor/reference/tables/StorageMoverJobRunLogs)<br>[StorageMoverCopyLogsFailed](/azure/azure-monitor/reference/tables/StorageMoverCopyLogsFailed)<br>[StorageMoverCopyLogsTransferred](/azure/azure-monitor/reference/tables/StorageMoverCopyLogsTransferred)<br> |
-| Virtual Network Manager | [AVNMNetworkGroupMembershipChange](/azure/azure-monitor/reference/tables/AVNMNetworkGroupMembershipChange) |
+| Virtual Network Manager | [AVNMNetworkGroupMembershipChange](/azure/azure-monitor/reference/tables/AVNMNetworkGroupMembershipChange)<br>[AVNMRuleCollectionChange](/azure/azure-monitor/reference/tables/AVNMRuleCollectionChange) |
> [!NOTE] > Tables created with the [Data Collector API](data-collector-api.md) don't support Basic logs.
azure-monitor Basic Logs Query https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/basic-logs-query.md
Last updated 10/01/2022- # Query Basic Logs in Azure Monitor
https://api.loganalytics.io/v1/workspaces/testWS/search?timespan=P1D
} ``` + ## Pricing model The charge for a query on Basic Logs is based on the amount of data the query scans, which is influenced by the size of the table and the query's time range. For example, a query that scans three days of data in a table that ingests 100 GB each day, would be charged for 300 GB. For more information, see [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/).
-> [!NOTE]
-> Billing of queries on Basic Logs is not yet enabled. You can query Basic Logs for free until early 2023.
## Next steps - [Learn more about the Basic Logs and Analytics log plans](basic-logs-configure.md). - [Use a search job to retrieve data from Basic Logs into Analytics Logs where it can be queries multiple times](search-jobs.md).+
azure-monitor Cost Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/cost-logs.md
Subscriptions that contained a Log Analytics workspace or Application Insights r
Access to the legacy Free Trial pricing tier was limited on July 1, 2022. Pricing information for the Standalone and Per Node pricing tiers is available [here](https://aka.ms/OMSpricing).
+> [!IMPORTANT]
+> The legacy pricing tiers do not support access to some of the newest features in Log Analytics such as ingesting data as cost-effective Basic Logs.
+ ### Free Trial pricing tier
-Workspaces in the Free Trial pricing tier will have daily data ingestion limited to 500 MB (except for security data types collected by [Microsoft Defender for Cloud](../../security-center/index.yml)). The data retention is limited to seven days. The Free Trial pricing tier is intended only for evaluation purposes. No SLA is provided for the Free Trial tier.
+Workspaces in the Free Trial pricing tier have daily data ingestion limited to 500 MB (except for security data types collected by [Microsoft Defender for Cloud](../../security-center/index.yml)). Data retention is limited to seven days. The Free Trial pricing tier is intended only for evaluation purposes, not production workloads. No SLA is provided for the Free Trial tier.
> [!NOTE] > Creating new workspaces in, or moving existing workspaces into, the legacy Free Trial pricing tier was possible only until July 1, 2022.
azure-monitor Custom Fields https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/custom-fields.md
The following sections provide the procedure for creating a custom field. At th
> The custom field is populated as records matching the specified criteria are added to the Log Analytics workspace, so it will only appear on records collected after the custom field is created. The custom field will not be added to records that are already in the data store when it's created. >
-### Step 1 – Identify records that will have the custom field
+### Step 1: Identify records that will have the custom field
The first step is to identify the records that will get the custom field. You start with a [standard log query](./log-query-overview.md) and then select a record to act as the model that Azure Monitor will learn from. When you specify that you are going to extract data into a custom field, the **Field Extraction Wizard** is opened where you validate and refine the criteria. 1. Go to **Logs** and use a [query to retrieve the records](./log-query-overview.md) that will have the custom field.
The first step is to identify the records that will get the custom field. You s
4. The **Field Extraction Wizard** is opened, and the record you selected is displayed in the **Main Example** column. The custom field will be defined for those records with the same values in the properties that are selected. 5. If the selection is not exactly what you want, select additional fields to narrow the criteria. In order to change the field values for the criteria, you must cancel and select a different record matching the criteria you want.
-### Step 2 - Perform initial extract.
+### Step 2: Perform initial extract.
Once you've identified the records that will have the custom field, you identify the data that you want to extract. Log Analytics will use this information to identify similar patterns in similar records. In the step after this you will be able to validate the results and provide further details for Log Analytics to use in its analysis. 1. Highlight the text in the sample record that you want to populate the custom field. You will then be presented with a dialog box to provide a name and data type for the field and to perform the initial extract. The characters **\_CF** will automatically be appended. 2. Click **Extract** to perform an analysis of collected records. 3. The **Summary** and **Search Results** sections display the results of the extract so you can inspect its accuracy. **Summary** displays the criteria used to identify records and a count for each of the data values identified. **Search Results** provides a detailed list of records matching the criteria.
-### Step 3 – Verify accuracy of the extract and create custom field
+### Step 3: Verify accuracy of the extract and create custom field
Once you have performed the initial extract, Log Analytics will display its results based on data that has already been collected. If the results look accurate then you can create the custom field with no further work. If not, then you can refine the results so that Log Analytics can improve its logic. 1. If any values in the initial extract arenΓÇÖt correct, then click the **Edit** icon next to an inaccurate record and select **Modify this highlight** in order to modify the selection.
azure-monitor Daily Cap https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/daily-cap.md
Until September 18, 2023, the following is true. If a workspace enabled the [Mic
To set or change the daily cap for a Log Analytics workspace in the Azure portal: 1. From the **Log Analytics workspaces** menu, select your workspace, and then **Usage and estimated costs**.
-2. Select **Data Cap** at the top of the page.
+2. Select **Daily Cap** at the top of the page.
3. Select **ON** and then set the data volume limit in GB/day. :::image type="content" source="media/manage-cost-storage/set-daily-volume-cap-01.png" lightbox="media/manage-cost-storage/set-daily-volume-cap-01.png" alt-text="Log Analytics configure data limit":::
azure-monitor Log Powerbi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/log-powerbi.md
This article explains how to feed data from Log Analytics into Power BI to produ
> [!NOTE] > You can use free Power BI features to integrate and create reports and dashboards. More advanced features, such as sharing your work, scheduled refreshes, dataflows, and incremental refresh might require purchasing a Power BI Pro or Premium account. For more information, see [Learn more about Power BI pricing and features](https://powerbi.microsoft.com/pricing/).
+## Prerequisites
+
+- To export the query to a .txt file that you can use in Power BI Desktop, you need [Power BI Desktop](https://powerbi.microsoft.com/desktop/).
+- To create a new dataset based on your query directly in the Power BI service:
+ - You need a Power BI account.
+ - You must give permission in Azure for the Power BI service to write logs. For more information, see [Prerequisites to configure Azure Log Analytics for Power BI](/power-bi/transform-model/log-analytics/desktop-log-analytics-configure#prerequisites).
+
+## Permissions required
+
+- To export the query to a .txt file that you can use in Power BI Desktop, you need `Microsoft.OperationalInsights/workspaces/query/*/read` permissions to the Log Analytics workspaces you query, as provided by the [Log Analytics Reader built-in role](./manage-access.md#log-analytics-reader), for example.
+- To create a new dataset based on your query directly in the Power BI service, you need `Microsoft.OperationalInsights/workspaces/write` permissions to the Log Analytics workspaces you query, as provided by the [Log Analytics Contributor built-in role](./manage-access.md#log-analytics-contributor), for example.
+ ## Create Power BI datasets and reports from Log Analytics queries From the **Export** menu in Log Analytics, select one of the two options for creating Power BI datasets and reports from your Log Analytics queries:
azure-monitor Logs Dedicated Clusters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/logs-dedicated-clusters.md
Capabilities that require dedicated clusters:
- **Cost optimization** - Link your workspaces in the same region to the cluster to get the commitment tier discount for all workspaces, even ones with low ingestion that aren't eligible for a commitment tier discount on their own.
- **[Availability zones](../../availability-zones/az-overview.md)** - Protect your data from datacenter failures by relying on datacenters in different physical locations, equipped with independent power, cooling, and networking. The physical separation in zones and independent infrastructure makes an incident far less likely since the workspace can rely on the resources from any of the zones. [Azure Monitor availability zones](./availability-zones.md#service-resiliencesupported-regions) covers broader parts of the service and, when available in your region, extends your Azure Monitor resilience automatically. Azure Monitor creates dedicated clusters as availability-zone-enabled (`isAvailabilityZonesEnabled`: 'true') by default in supported regions. [Dedicated clusters Availability zones](./availability-zones.md#data-resiliencesupported-regions) aren't supported in all regions currently.
-- **[Ingest from Azure Event Hubs](../logs/ingest-logs-event-hub.md)** - Lets you ingest data directly from an Event Bubs into a Log Analytics workspace. Dedicated cluster lets you use capability when ingestion from all linked workspaces combined meet commitment tier.
+- **[Ingest from Azure Event Hubs](../logs/ingest-logs-event-hub.md)** - Lets you ingest data directly from an event hub into a Log Analytics workspace. A dedicated cluster lets you use this capability when the combined ingestion from all linked workspaces meets the commitment tier.
## Cluster pricing model Log Analytics Dedicated Clusters use a commitment tier pricing model of at least 500 GB/day. Any usage above the tier level incurs charges based on the per-GB rate of that commitment tier. See [Azure Monitor Logs pricing details](cost-logs.md#dedicated-clusters) for pricing details for dedicated clusters. The commitment tiers have a 31-day commitment period from the time a commitment tier is selected.
azure-monitor Manage Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/manage-access.md
The Log Analytics Reader role includes the following Azure actions:
| Action | `*/read` | Ability to view all Azure resources and resource configuration.<br>Includes viewing:<br>- Virtual machine extension status.<br>- Configuration of Azure diagnostics on resources.<br>- All properties and settings of all resources.<br><br>For workspaces, allows full unrestricted permissions to read the workspace settings and query data. See more granular options in the preceding list. | | Action | `Microsoft.Support/*` | Ability to open support cases. | |Not Action | `Microsoft.OperationalInsights/workspaces/sharedKeys/read` | Prevents reading of workspace key required to use the data collection API and to install agents. This prevents the user from adding new resources to the workspace. |
-| Action | `Microsoft.OperationalInsights/workspaces/analytics/query/action` | Deprecated. |
-| Action | `Microsoft.OperationalInsights/workspaces/search/action` | Deprecated. |
#### Log Analytics Contributor
Granting table-level read access involves assigning a user two roles:
```json "Microsoft.OperationalInsights/workspaces/read",
- "Microsoft.OperationalInsights/workspaces/query/read",
- "Microsoft.OperationalInsights/workspaces/analytics/query/action",
- "Microsoft.OperationalInsights/workspaces/search/action"
+ "Microsoft.OperationalInsights/workspaces/query/read"
``` 1. In the `"not actions"` section, add:
azure-monitor Private Link Design https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/private-link-design.md
We've identified the following products and experiences query workspaces through
Note the following requirements. ### Network subnet size
-The smallest supported IPv4 subnet is /27 (using CIDR subnet definitions). Although Azure virtual networks [can be as small as /29](../../virtual-network/virtual-networks-faq.md#how-small-and-how-large-can-vnets-and-subnets-be), Azure [reserves five IP addresses](../../virtual-network/virtual-networks-faq.md#are-there-any-restrictions-on-using-ip-addresses-within-these-subnets). The Azure Monitor private link setup requires at least 11 more IP addresses, even if you're connecting to a single workspace. [Review your endpoint's DNS settings](./private-link-configure.md#review-your-endpoints-dns-settings) for the list of Azure Monitor private link endpoints.
+The smallest supported IPv4 subnet is /27 (using CIDR subnet definitions). Although Azure virtual networks [can be as small as /29](../../virtual-network/virtual-networks-faq.md#how-small-and-how-large-can-virtual-networks-and-subnets-be), Azure [reserves five IP addresses](../../virtual-network/virtual-networks-faq.md#are-there-any-restrictions-on-using-ip-addresses-within-these-subnets). The Azure Monitor private link setup requires at least 11 more IP addresses, even if you're connecting to a single workspace. A /27 provides 32 addresses, so after the five Azure reserves, 27 remain available, which accommodates those endpoints. [Review your endpoint's DNS settings](./private-link-configure.md#review-your-endpoints-dns-settings) for the list of Azure Monitor private link endpoints.
### Agents

The latest versions of the Windows and Linux agents must be used to support secure ingestion to Log Analytics workspaces. Older versions can't upload monitoring data over a private network.
azure-monitor Tutorial Logs Ingestion Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/tutorial-logs-ingestion-api.md
Last updated 03/20/2023
The [Logs Ingestion API](logs-ingestion-api-overview.md) in Azure Monitor allows you to send custom data to a Log Analytics workspace. This tutorial uses Azure Resource Manager templates (ARM templates) to walk through configuration of the components required to support the API and then provides a sample application using both the REST API and client libraries for [.NET](/dotnet/api/overview/azure/Monitor.Ingestion-readme), [Java](/java/api/overview/azure/monitor-ingestion-readme), [JavaScript](/javascript/api/overview/azure/monitor-ingestion-readme), and [Python](/python/api/overview/azure/monitor-ingestion-readme).

> [!NOTE]
-> This tutorial uses ARM templates to configure the components required to support the Logs ingestion API. See [Tutorial: Send data to Azure Monitor Logs with Logs ingestion API (Azure portal)](tutorial-logs-ingestion-api.md) for a similar tutorial that uses Azure Resource Manager templates to configure these components.
+> This tutorial uses ARM templates to configure the components required to support the Logs ingestion API. See [Tutorial: Send data to Azure Monitor Logs with Logs ingestion API (Azure portal)](tutorial-logs-ingestion-portal.md) for a similar tutorial that uses the Azure portal UI to configure these components.
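For orientation, a minimal sketch of the kind of REST call the tutorial builds up to; the endpoint, DCR immutable ID, stream name, and bearer token are all placeholders:

```bash
# Placeholder values throughout; a real call needs a data collection endpoint,
# a data collection rule, and an Azure AD token for the monitor.azure.com scope.
curl -X POST \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '[{"Time":"2023-03-20T12:00:00Z","Computer":"vm01","AdditionalContext":"sample"}]' \
  "https://<dce-name>.<region>.ingest.monitor.azure.com/dataCollectionRules/<dcr-immutable-id>/streams/Custom-MyTable_CL?api-version=2023-01-01"
```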
The steps required to configure the Logs ingestion API are as follows:
azure-monitor Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/policy-reference.md
Title: Built-in policy definitions for Azure Monitor description: Lists Azure Policy built-in policy definitions for Azure Monitor. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2023 Last updated : 08/30/2023
azure-monitor Profiler Aspnetcore Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/profiler/profiler-aspnetcore-linux.md
Title: Enable Profiler for ASP.NET Core web apps hosted in Linux on App Service | Microsoft Docs
+ Title: Enable Profiler for ASP.NET Core web apps hosted in Linux
description: Learn how to enable Profiler on your ASP.NET Core web application hosted in Linux on Azure App Service.-+ ms.devlang: csharp Previously updated : 07/18/2022 Last updated : 08/30/2023
+# Customer Intent: As a .NET developer, I'd like to enable Application Insights Profiler for my .NET web application hosted in Linux
-# Enable Profiler for ASP.NET Core web apps hosted in Linux on App Service
+# Enable Profiler for ASP.NET Core web apps hosted in Linux
By using Profiler, you can track how much time is spent in each method of your live ASP.NET Core web apps that are hosted in Linux on Azure App Service. This article focuses on web apps hosted in Linux. You can also experiment by using Linux, Windows, and Mac development environments. In this article, you:
-> [!div class="checklist"]
-> - Set up and deploy an ASP.NET Core web application hosted on Linux.
-> - Add Application Insights Profiler to the ASP.NET Core web application.
+- Set up and deploy an ASP.NET Core web application hosted on Linux.
+- Add Application Insights Profiler to the ASP.NET Core web application.
## Prerequisites
In this article, you:
git remote add azure https://<username>@<app_name>.scm.azurewebsites.net:443/<app_name>.git
```
- * Use the **username** that you used to create the deployment credentials.
- * Use the **app name** that you used to create the web app by using App Service on Linux.
+ - Use the **username** that you used to create the deployment credentials.
+ - Use the **app name** that you used to create the web app by using App Service on Linux.
1. Deploy the project by pushing the changes to Azure:
azure-monitor Profiler Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/profiler/profiler-containers.md
Title: Profile Azure containers with Application Insights Profiler
-description: Enable Application Insights Profiler for Azure containers.
+description: Learn how to enable the Application Insights Profiler for your ASP.NET Core application running in Azure containers.
ms.contributor: charles.weininger- Previously updated : 07/15/2022+ Last updated : 08/30/2023
+# Customer Intent: As a .NET developer, I'd like to learn how to enable Profiler on my ASP.NET Core application running in my container.
# Profile live Azure containers with Application Insights

You can enable the Application Insights Profiler for an ASP.NET Core application running in your container almost without code. To enable the Application Insights Profiler on your container instance, you need to:
-* Add the reference to the `Microsoft.ApplicationInsights.Profiler.AspNetCore` NuGet package.
-* Set the environment variables to enable it.
+- Add the reference to the `Microsoft.ApplicationInsights.Profiler.AspNetCore` NuGet package.
+- Update the code to enable the Profiler.
+- Set up the Application Insights instrumentation key.
In this article, you learn about the various ways that you can:
azure-monitor Roles Permissions Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/roles-permissions-security.md
Title: Roles, permissions, and security in Azure Monitor
description: Learn how to use roles and permissions in Azure Monitor to restrict access to monitoring resources. + Last updated 08/09/2023 - # Roles, permissions, and security in Azure Monitor
azure-monitor Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Monitor description: Lists Azure Policy Regulatory Compliance controls available for Azure Monitor. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/03/2023 Last updated : 08/25/2023
azure-monitor Vminsights Enable Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-enable-overview.md
The DCR is defined by the options in the following table.
| Option | Description |
|:---|:---|
-| Guest performance | Specifies whether to collect [performance data](https://learn.microsoft.com/azure/azure-monitor/vm/vminsights-performance) from the guest operating system. This option is required for all machines. The collection interval for performance data is every 60 seconds.|
+| Guest performance | Specifies whether to collect [performance data](/azure/azure-monitor/vm/vminsights-performance) from the guest operating system. This option is required for all machines. The collection interval for performance data is every 60 seconds.|
| Processes and dependencies | Collects information about processes running on the virtual machine and dependencies between machines. This option is optional and enables the [Map feature in VM insights](vminsights-maps.md) for the machine. |
| Log Analytics workspace | Workspace to store the data. Only workspaces with VM insights are listed. |
azure-monitor Vminsights Enable Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-enable-powershell.md
This article describes how to enable VM insights on Azure virtual machines by us
- Azure Virtual Machine Scale Sets

> [!NOTE]
-> This article only applies to the Log Analytics agent. To enable VM insights with the Azure Monitor agent, use other installation methods described in [Enable VM insights overview](vminsights-enable-overview.md).
+> The PowerShell script provided in this article enables VM Insights with the Log Analytics agent. We'll update it to support Azure Monitoring Agent shortly. In the meantime, to enable VM insights with Azure Monitor Agent, use the other installation methods described in [Enable VM insights overview](vminsights-enable-overview.md).
+
## Prerequisites

You need to:
azure-netapp-files Azacsnap Cmd Ref Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azacsnap-cmd-ref-configure.md
na Previously updated : 08/19/2022 Last updated : 08/21/2023
The process described in the Azure Backup documentation has been implemented wit
1. re-enable the backint-based backup. By default this option is disabled, but it can be enabled by running `azacsnap -c configure --configuration edit` and answering 'y' (yes) to the question
-"Do you need AzAcSnap to automatically disable/enable backint during snapshot? (y/n) [n]". Editing the configuration as described will set the
+"Do you need AzAcSnap to automatically disable/enable backint during snapshot? (y/n) [n]". Editing the configuration as described sets the
autoDisableEnableBackint value to true in the JSON configuration file (for example, `azacsnap.json`). It's also possible to change this value by editing the configuration file directly.
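For example, the interactive edit looks like this sketch; the configuration filename is illustrative:

```bash
azacsnap -c configure --configuration edit --configfile azacsnap.json
# Answer 'y' to:
#   "Do you need AzAcSnap to automatically disable/enable backint during snapshot? (y/n) [n]"
# which sets "autoDisableEnableBackint": true in azacsnap.json.
```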
When you add an *Oracle database* to the configuration, the following values are
- **SID** = The database System ID.
- **Oracle Connect String** = The Connect String used by `sqlplus` to connect to Oracle and enable/disable backup mode.
+# [IBM Db2](#tab/db2)
+
+When adding a *Db2 database* to the configuration, the following values are required:
+
+- **Db2 Server's Address** = The database server hostname or IP address.
+  - If the Db2 Server Address (serverAddress) matches '127.0.0.1' or 'localhost', azacsnap runs all `db2` commands locally (refer to "Local connectivity"). Otherwise, AzAcSnap uses the serverAddress as the host to connect to via SSH, using the "Instance User" as the SSH login name. Remote access via SSH can be validated with `ssh <instanceUser>@<serverAddress>`, replacing instanceUser and serverAddress with the respective values (refer to "Remote connectivity").
+- **Instance User** = The database System Instance User.
+- **SID** = The database System ID.
+
+> [!IMPORTANT]
+> Setting the Db2 Server Address (serverAddress) aligns directly with the method used to communicate with Db2; ensure it's set correctly as described.
+ # [Azure Large Instance (Bare Metal)](#tab/azure-large-instance)
When you add *HLI Storage* to a database section, the following values are requi
When you add *ANF Storage* to a database section, the following values are required:
-- **Service Principal Authentication filename** = the `authfile.json` file generated in the Cloud Shell when configuring
+- **Service Principal Authentication filename** (JSON field: authFile)
+ - To use a System Managed Identity, leave empty with no value and press [Enter] to go to the next field.
+ - An example to set up an Azure System Managed Identity can be found on the [AzAcSnap Installation](azacsnap-installation.md).
+ - To use a Service Principal, use the name of the authentication file (for example, `authfile.json`) generated in the Cloud Shell when configuring
communication with Azure NetApp Files storage.
-- **Full ANF Storage Volume Resource ID** = the full Resource ID of the Volume being snapshot. This string can be retrieved from:
+ - An example to set up a Service Principal can be found on the [AzAcSnap Installation](azacsnap-installation.md).
+- **Full ANF Storage Volume Resource ID** (JSON field: resourceId) = the full Resource ID of the Volume being snapshot. This string can be retrieved from:
Azure portal -> ANF -> Volume -> Settings/Properties -> Resource ID
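The same value can also be retrieved with the Azure CLI; a hedged sketch with placeholder resource names:

```azurecli-interactive
az netappfiles volume show \
  --resource-group "MyResourceGroup" \
  --account-name "MyNetAppAccount" \
  --pool-name "MyPool" \
  --name "MyVolume" \
  --query id --output tsv
```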
For **Azure Large Instance** system, this information is provided by Microsoft S
is made available in an Excel file that is provided during handover. Open a service request if you need to be provided this information again.
-The following output is an example configuration file only and is the content of the file as generated by the configuration session above, update all the values accordingly.
+The following output is an example configuration file only and is the content of the file as generated by the configuration example; update all the values accordingly.
```bash
cat azacsnap.json
azure-netapp-files Azacsnap Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azacsnap-get-started.md
For more information about using GPG, see [The GNU Privacy Handbook](https://www
## Supported scenarios
-The snapshot tools can be used in the following [Supported scenarios for HANA Large Instances](../virtual-machines/workloads/sap/hana-supported-scenario.md) and
-[SAP HANA with Azure NetApp Files](../virtual-machines/workloads/sap/hana-vm-operations-netapp.md).
+The snapshot tools can be used in the following [Supported scenarios for HANA Large Instances](../virtual-machines/workloads/sap/hana-supported-scenario.md) and [SAP HANA with Azure NetApp Files](../virtual-machines/workloads/sap/hana-vm-operations-netapp.md).
## Snapshot Support Matrix from SAP
-The following matrix is provided as a guideline on which versions of SAP HANA
-are supported by SAP for Storage Snapshot Backups.
+The following matrix is provided as a guideline on which versions of SAP HANA are supported by SAP for Storage Snapshot Backups.
+
| Database type | Minimum database versions | Notes |
|---|---|---|
| Single Container Database | 1.0 SPS 12, 2.0 SPS 00 | |
-| MDC Single Tenant | 2.0 SPS 01 | or later versions where MDC Single Tenant supported by SAP for storage/data snapshots.* |
-| MDC Multiple Tenants | 2.0 SPS 04 | or later where MDC Multiple Tenants supported by SAP for data snapshots. |
-> \* SAP changed terminology from Storage Snapshots to Data Snapshots from 2.0 SPS 02
+| MDC Single Tenant | [2.0 SPS 01](https://help.sap.com/docs/SAP_HANA_PLATFORM/42668af650f84f9384a3337bcd373692/2194a981ea9e48f4ba0ad838abd2fb1c.html?version=2.0.01&locale=en-US) | or later versions where MDC Single Tenant supported by SAP for storage/data snapshots.* |
+| MDC Multiple Tenants | [2.0 SPS 04](https://help.sap.com/docs/SAP_HANA_PLATFORM/42668af650f84f9384a3337bcd373692/7910eb4a498246b1b0435a4e9bf938d1.html?version=2.0.04&locale=en-US) | or later where MDC Multiple Tenants supported by SAP for data snapshots. |
+> \* [SAP changed terminology from Storage Snapshots to Data Snapshots from 2.0 SPS 02](https://help.sap.com/docs/SAP_HANA_PLATFORM/42668af650f84f9384a3337bcd373692/7f203cf75ae4445d96ad0012c67c0480.html?version=2.0.02&locale=en-US)
+++

## Important things to remember
The following guidance is provided to illustrate the usage of the snapshot tools
## Next steps - [Install Azure Application Consistent Snapshot tool](azacsnap-installation.md)+
azure-netapp-files Azacsnap Installation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azacsnap-installation.md
na Previously updated : 11/29/2022 Last updated : 08/21/2023
This article provides a guide for installation of the Azure Application Consiste
## Introduction
-The downloadable self-installer is designed to make the snapshot tools easy to set up and run with non-root user privileges (for example, azacsnap). The installer will set up the user and put the snapshot tools into the users `$HOME/bin` subdirectory (default = `/home/azacsnap/bin`).
-The self-installer tries to determine the correct settings and paths for all the files based on the configuration of the user performing the installation (for example, root). If the pre-requisite steps (enable communication with storage and SAP HANA) were run as root, then the installation will copy the private key and `hdbuserstore` to the backup userΓÇÖs location. The steps to enable communication with the storage back-end and SAP HANA can be manually done by a knowledgeable administrator after the installation.
+The downloadable self-installer is designed to make the snapshot tools easy to set up and run with non-root user privileges (for example, azacsnap). The installer sets up the user and puts the snapshot tools into the user's `$HOME/bin` subdirectory (default = `/home/azacsnap/bin`).
+The self-installer tries to determine the correct settings and paths for all the files based on the configuration of the user performing the installation (for example, root). If the prerequisite steps (enable communication with storage and SAP HANA) were run as root, then the installation copies the private key and `hdbuserstore` to the backup user's location. The steps to enable communication with the storage back-end and SAP HANA can be manually done by a knowledgeable administrator after the installation.
## Prerequisites for installation
Follow the guidelines to set up and execute the snapshots and disaster recovery
is recommended the following steps are completed as root before installing and using the snapshot tools.
-1. **OS is patched**: See patching and SMT setup in [How to install and configure SAP HANA (Large Instances) on Azure](../virtual-machines/workloads/sap/hana-installation.md#operating-system).
-1. **Time Synchronization is set up**. The customer will need to provide an NTP compatible time
- server, and configure the OS accordingly.
-1. **HANA is installed** : See HANA installation instructions in [SAP NetWeaver Installation on HANA database](/archive/blogs/saponsqlserver/sap-netweaver-installation-on-hana-database).
+1. **OS is patched**: See patching and SMT setup in [How to install and configure SAP HANA (Large Instances) on Azure](../virtual-machines/workloads/sap/hana-installation.md#operating-system).
+1. **Time Synchronization is set up**. The customer needs to provide an NTP compatible time server, and configure the OS accordingly.
+1. **Database is installed**: Refer to separate instructions for each supported database.
1. **[Enable communication with storage](#enable-communication-with-storage)** (for more information, see separate section): Select the storage back-end you're using for your deployment.

# [Azure NetApp Files](#tab/azure-netapp-files)
- 1. **For Azure NetApp Files (for more information, see separate section)**: Customer must generate the service principal authentication file.
+ 1. **For Azure NetApp Files (for more information, see separate section)**: Customer must either set up a System Managed Identity or generate the Service Principal authentication file.
> [!IMPORTANT]
> When validating communication with Azure NetApp Files, communication might fail or time-out. Check to ensure firewall rules are not blocking outbound traffic from the system running AzAcSnap to the following addresses and TCP/IP ports:
tools.
# [Azure Large Instance (Bare Metal)](#tab/azure-large-instance)
- 1. **For Azure Large Instance (for more information, see separate section)**: Set up SSH with a
- private/public key pair. Provide the public key for each node, where the snapshot tools are
- planned to be executed, to Microsoft Operations for setup on the storage back-end.
+ 1. **For Azure Large Instance (for more information, see separate section)**: Generate an SSH private/public key pair. For each node where the snapshot tools will be run, provide the generated public key to Microsoft Operations so they can install it on the storage back-end.
- Test this by using SSH to connect to one of the nodes (for example, `ssh -l <Storage UserName> <Storage IP Address>`).
+ Test connectivity by using SSH to connect to one of the nodes (for example, `ssh -l <Storage UserName> <Storage IP Address>`).
Type `exit` to logout of the storage prompt.
- Microsoft operations will provide the storage user and storage IP at the time of provisioning.
+ Microsoft Operations provides the storage user and storage IP at the time of provisioning.
tools.
> [!NOTE]
> These examples are for non-SSL communication to SAP HANA.
- # [Oracle](#tab/oracle)
+ # [Oracle](#tab/oracle)
Set up an appropriate Oracle database and Oracle Wallet following the instructions in the [Enable communication with database](#enable-communication-with-database) section.
tools.
1. `sqlplus /@<ORACLE_USER> as SYSBACKUP`
+ # [IBM Db2](#tab/db2)
+
+ Set up an appropriate IBM Db2 connection method following the instructions in the [Enable communication with database](#enable-communication-with-database) section.
+
+ 1. After setup, the connection can be tested from the command line using these examples:
+
+ 1. Installed onto the database server, then complete the setup with "[Db2 local connectivity](#db2-local-connectivity)".
+
+ `db2 "QUIT"`
+
+ 1. Installed onto a centralized backup system, then complete the setup with "[Db2 remote connectivity](#db2-remote-connectivity)".
+
+ `ssh <InstanceUser>@<ServerAddress> 'db2 "QUIT"'`
+
+ 1. Either of the commands in step 1 should produce this output:
+
+ ```output
+ DB20000I The QUIT command completed successfully.
+ ```
+
This section explains how to enable communication with storage. Ensure the stora
# [Azure NetApp Files (with Virtual Machine)](#tab/azure-netapp-files)
-Create RBAC Service Principal
+### Azure System Managed Identity
+
+From AzAcSnap 9, it's possible to use a System Managed Identity instead of a Service Principal for operation. Using this feature avoids the need to store Service Principal credentials on a VM. Follow these steps to set up an Azure Managed Identity using the Azure portal Cloud Shell:
+
+1. Within an Azure Cloud Shell session with Bash, use the following example to set the shell variables appropriately and apply to the subscription where you want to create the Azure Managed Identity:
+
+ ```azurecli-interactive
+ export SUBSCRIPTION="99z999zz-99z9-99zz-99zz-9z9zz999zz99"
+ export VM_NAME="MyVM"
+ export RESOURCE_GROUP="MyResourceGroup"
+ export ROLE="Contributor"
+ export SCOPE="/subscriptions/${SUBSCRIPTION}/resourceGroups/${RESOURCE_GROUP}"
+ ```
+
+ > [!NOTE]
+ > Set the `SUBSCRIPTION`, `VM_NAME`, and `RESOURCE_GROUP` to your site-specific values.
+
+1. Set the Cloud Shell to the correct subscription:
+
+ ```azurecli-interactive
+ az account set -s "${SUBSCRIPTION}"
+ ```
+
+1. Create the managed identity for the virtual machine. The following command sets, or shows if already set, the AzAcSnap virtual machine Managed Identity.
+
+ ```azurecli-interactive
+ az vm identity assign --name "${VM_NAME}" --resource-group "${RESOURCE_GROUP}"
+ ```
+
+1. Get the Principal ID to use when assigning a role:
+
+ ```azurecli-interactive
+ PRINCIPAL_ID=$(az resource list -n ${VM_NAME} --query [*].identity.principalId --out tsv)
+ ```
+
+1. Assign the 'Contributor' role to the Principal ID:
+
+ ```azurecli-interactive
+ az role assignment create --assignee "${PRINCIPAL_ID}" --role "${ROLE}" --scope "${SCOPE}"
+ ```
+
+#### Optional RBAC
+
+It's possible to limit the permissions for the Managed Identity by using a custom role definition. Create a suitable role definition for the virtual machine to be able to manage snapshots (example permissions settings can be found in [Tips and tricks for using Azure Application Consistent Snapshot tool](azacsnap-tips.md)).
+
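A hedged sketch of creating such a custom role with the Azure CLI; the permission list here is illustrative only, see the tips page above for recommended settings:

```azurecli-interactive
az role definition create --role-definition '{
  "Name": "AzAcSnap on ANF",
  "IsCustom": true,
  "Description": "Illustrative custom role for AzAcSnap snapshot management.",
  "Actions": [
    "Microsoft.NetApp/netAppAccounts/capacityPools/volumes/read",
    "Microsoft.NetApp/netAppAccounts/capacityPools/volumes/snapshots/*"
  ],
  "NotActions": [],
  "AssignableScopes": ["/subscriptions/<subscription-id>"]
}'
```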
+Then assign the role to the Azure Virtual Machine Principal ID (also displayed as `SystemAssignedIdentity`):
+
+```azurecli-interactive
+az role assignment create --assignee ${PRINCIPAL_ID} --role "AzAcSnap on ANF" --scope "${SCOPE}"
+```
+
+### Generate Service Principal file
1. Within an Azure Cloud Shell session, make sure you're logged on at the subscription where you want to be associated with the service principal by default:
Create RBAC Service Principal
az account show
```
-1. If the subscription isn't correct, use the following command:
+1. If the subscription isn't correct, use the `az account set` command:
```azurecli-interactive
az account set -s <subscription name or id>
```
-1. Create a service principal using Azure CLI per the following example:
+1. Create a service principal using Azure CLI per this example:
```azurecli-interactive
az ad sp create-for-rbac --name "AzAcSnap" --role Contributor --scopes /subscriptions/{subscription-id} --sdk-auth
```
- 1. This should generate an output like the following example:
+ 1. This command should generate an output like this example:
```output
{
Create RBAC Service Principal
command and secure the file with appropriate system permissions.

> [!WARNING]
- > Make sure the format of the JSON file is exactly as described above. Especially with the URLs enclosed in double quotes (").
+ > Make sure the format of the JSON file is exactly as described in the step to "Create a service principal using Azure CLI". Ensure the URLs are enclosed in double quotes (").
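As a sketch, saving and securing the file could look like this; the path is illustrative:

```bash
umask 077                                  # new files readable by owner only
vi /home/azacsnap/bin/authfile.json        # paste the JSON output from above
chmod 600 /home/azacsnap/bin/authfile.json # restrict access to the azacsnap user
```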
# [Azure Large Instance (Bare Metal)](#tab/azure-large-instance)

Communication with the storage back-end executes over an encrypted SSH channel. The following
-example steps are to provide guidance on setup of SSH for this communication.
+example steps provide guidance on setting up SSH for this communication.
1. Modify the `/etc/ssh/ssh_config` file
example steps are to provide guidance on setup of SSH for this communication.
1. Create a private/public key pair
- Using the following example command to generate the key pair, do not enter a password when generating a key.
+ Using the following example command to generate the key pair, don't enter a password when generating a key.
```bash
ssh-keygen -t rsa -b 5120 -C ""
```
This section explains how to enable communication with the database. Ensure the
> If deploying to a centralized virtual machine, then it will need to have the SAP HANA client installed and set up so the AzAcSnap user can run `hdbsql` and `hdbuserstore` commands. The SAP HANA client can be downloaded from https://tools.hana.ondemand.com/#hanatools.

The snapshot tools communicate with SAP HANA and need a user with appropriate permissions to
-initiate and release the database save-point. The following example shows the setup of the SAP
+initiate and release the database save-point. The following example shows the setup of the SAP
HANA v2 user and the `hdbuserstore` for communication to the SAP HANA database. The following example commands set up a user (AZACSNAP) in the SYSTEMDB on SAP HANA 2.
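Before the per-database commands, here is a hedged sketch of the `hdbuserstore` entry that azacsnap later references; the host, port, and password are placeholders:

```bash
hdbuserstore Set AZACSNAP <HANA_ip_address>:30013 AZACSNAP <password>
hdbuserstore List   # verify the key was stored
```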
database, change the IP address, usernames, and passwords as appropriate:
> [!NOTE] > Check with corporate policy before making this change.
- This example disables the password expiration for the AZACSNAP user, without this change the user's password will expire preventing snapshots to be taken correctly.
+ This example disables the password expiration for the AZACSNAP user; without this change, the user's password could expire, preventing snapshots from being taken correctly.
```sql
hdbsql SYSTEMDB=> ALTER USER AZACSNAP DISABLE PASSWORD LIFETIME;
```
database, change the IP address, usernames, and passwords as appropriate:
### Using SSL for communication with SAP HANA
-The `azacsnap` tool utilizes SAP HANA's `hdbsql` command to communicate with SAP HANA. This
-includes the use of SSL options when encrypting communication with SAP HANA. `azacsnap` uses
+The `azacsnap` tool utilizes SAP HANA's `hdbsql` command to communicate with SAP HANA. Using `hdbsql` allows
+the use of SSL options to encrypt communication with SAP HANA. `azacsnap` uses the
`hdbsql` command's SSL options as follows. The following are always used when using the `azacsnap --ssl` option:
as specified in the `azacsnap` configuration file.
- For commoncrypto:
  - `mv sapcli.pse <securityPath>/<SID>_keystore`
-When `azacsnap` calls `hdbsql`, it will add `-sslkeystore=<securityPath>/<SID>_keystore`
-to the command line.
+When `azacsnap` calls `hdbsql`, it adds `-sslkeystore=<securityPath>/<SID>_keystore`
+to the `hdbsql` command line.
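Illustrative only: the shape of the resulting `hdbsql` invocation; the key-store path and user-store key are placeholders, and `-e` enables encrypted communication:

```bash
hdbsql -U AZACSNAP -e -sslprovider commoncrypto \
  -sslkeystore=/home/azacsnap/security/H80_keystore \
  "SELECT 'connected' FROM DUMMY"
```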
#### Trust Store files
multiple parameters passed on the command line.
# [Oracle](#tab/oracle)
-The snapshot tools communicate with the Oracle database and need a user with appropriate permissions to enable/disable backup mode. After putting the database in backup mode, `azacsnap` will query the Oracle database to get a list of files, which have backup-mode as active. This file list is output into an external file, which is in the same location and basename as the log file, but with a ".protected-tables" extension (output filename detailed in the AzAcSnap log file).
+The snapshot tools communicate with the Oracle database and need a user with appropriate permissions to enable/disable backup mode. After `azacsnap` puts the database in backup mode, it queries the Oracle database for a list of files that have backup mode active. This file list is output to an external file, which is in the same location and has the same basename as the log file, but with a `.protected-tables` filename extension (the output filename is detailed in the AzAcSnap log file).
The following examples show the set up of the Oracle database user, the use of `mkstore` to create an Oracle Wallet, and the `sqlplus` configuration files required for communication to the Oracle database.
The following example commands set up a user (AZACSNAP) in the Oracle database,
User created.
```
-1. Grant the user permissions - This example sets the permission for the AZACSNAP user to allow for putting the database in backup mode.
+1. Grant the user permissions - This example sets the permission for the AZACSNAP user to allow putting the database in backup mode.
```sql
SQL> GRANT CREATE SESSION TO azacsnap;
The following example commands set up a user (AZACSNAP) in the Oracle database,
SQL> ALTER PROFILE default LIMIT PASSWORD_LIFE_TIME unlimited;
```
- After making this change, there should be no password expiry date for user's with the DEFAULT profile.
+ After this change is made to the database setting, there should be no password expiry date for users with the DEFAULT profile.
```sql
SQL> SELECT username, account_status,expiry_date,profile FROM dba_users WHERE username='AZACSNAP';
The following example commands set up a user (AZACSNAP) in the Oracle database,
1. The Oracle Wallet provides a method to manage database credentials across multiple domains. This capability is accomplished by using a database
- connection string in the datasource definition, which is resolved by an entry in the wallet. When used correctly, the Oracle Wallet makes having
+ connection string in the datasource definition, which is resolved with an entry in the wallet. When used correctly, the Oracle Wallet makes having
passwords in the datasource configuration unnecessary.
- This makes it possible to use the Oracle Transparent Network Substrate (TNS) administrative file with a connection string alias, thus hiding
+ This setup makes it possible to use the Oracle Transparent Network Substrate (TNS) administrative file with a connection string alias, thus hiding
details of the database connection string. If the connection information changes, it's a matter of changing the `tnsnames.ora` file instead of potentially many datasource definitions.
The following example commands set up a user (AZACSNAP) in the Oracle database,
1. Create the Linux user to generate the Oracle Wallet and associated `*.ora` files using the output from the previous step.

   > [!NOTE]
- > In these examples we are using the `bash` shell. If you're using a different shell (for example, csh), then ensure environment
+ > In these examples we're using the `bash` shell. If you're using a different shell (for example, csh), then ensure environment
> variables have been set correctly.

```bash
The following example commands set up a user (AZACSNAP) in the Oracle database,
> If deploying to a centralized virtual machine, then it will need to have the Oracle Instant Client installed and set up so
> the AzAcSnap user can run `sqlplus` commands.
> The Oracle Instant Client can be downloaded from https://www.oracle.com/database/technologies/instant-client/linux-x86-64-downloads.html.
- > In order for SQL\*Plus to run correctly, download both the required package (for example, Basic Light Package) and the optional SQL\*Plus tools package.
+ > In order for SQL\*Plus to run correctly, download both the required package (for example, Basic Light Package) and the optional SQL\*Plus tools package.
1. Complete the following steps on the system running AzAcSnap.
The following example commands set up a user (AZACSNAP) in the Oracle database,
1. Test the setup with AzAcSnap
- After configuring AzAcSnap (for example, `azacsnap -c configure --configuration new`) with the Oracle connect string (for example, `/@AZACSNAP`),
+ After you configure AzAcSnap (for example, `azacsnap -c configure --configuration new`) with the Oracle connect string (for example, `/@AZACSNAP`),
it should be possible to connect to the Oracle database. Check the `$TNS_ADMIN` variable is set for the correct Oracle target system
The following example commands set up a user (AZACSNAP) in the Oracle database,
> or by exporting it before each run (for example, `export TNS_ADMIN="/home/orasnap/ORACLE19c" ; cd /home/orasnap/bin ;
> ./azacsnap --configfile ORACLE19c.json -c backup --volume data --prefix hourly-ora19c --retention 12`)
+
+# [IBM Db2](#tab/db2)
+
+The snapshot tools issue commands to the IBM Db2 database using the command line processor `db2` to enable and disable backup mode.
+
+After putting the database in backup mode, `azacsnap` queries the IBM Db2 database for a list of "protected paths", which are the parts of the database where backup mode is active. This list is output into an external file, which is in the same location and has the same basename as the log file, but with a `.\<DBName>-protected-paths` extension (output filename detailed in the AzAcSnap log file).
+
+AzAcSnap uses the IBM Db2 command line processor `db2` to issue SQL commands, such as `SET WRITE SUSPEND` or `SET WRITE RESUME`. Therefore AzAcSnap should be installed in one of the following two ways:
+
+ 1. Installed onto the database server, then complete the setup with "[Db2 local connectivity](#db2-local-connectivity)".
+ 1. Installed onto a centralized backup system, then complete the setup with "[Db2 remote connectivity](#db2-remote-connectivity)".
+
+#### Db2 local connectivity
+
+If AzAcSnap has been installed onto the database server, then be sure to add the `azacsnap` user to the correct Linux group and import the Db2 instance user's profile per the following example setup.
+
+##### `azacsnap` user permissions
+
+The `azacsnap` user should belong to the same Db2 group as the database instance user. Here we're getting the group membership of the IBM Db2 installation's database instance user `db2tst`.
+
+```bash
+id db2tst
+```
+
+```output
+uid=1101(db2tst) gid=1001(db2iadm1) groups=1001(db2iadm1)
+```
+
+From the output, we can confirm the `db2tst` user has been added to the `db2iadm1` group, therefore add the `azacsnap` user to the group.
+
+```bash
+usermod -a -G db2iadm1 azacsnap
+```
+
+##### `azacsnap` user profile
+
+The `azacsnap` user needs to be able to execute the `db2` command. By default the `db2` command won't be in the `azacsnap` user's $PATH; therefore, add the following to the user's `.bashrc` file, using your own IBM Db2 installation value for `INSTHOME`.
+
+```output
+# The following four lines have been added to allow this user to run the DB2 command line processor.
+INSTHOME="/db2inst/db2tst"
+if [ -f ${INSTHOME}/sqllib/db2profile ]; then
+ . ${INSTHOME}/sqllib/db2profile
+fi
+```
+
+Test the user can run the `db2` command line processor.
+
+```bash
+su - azacsnap
+db2
+```
+
+```output
+(c) Copyright IBM Corporation 1993,2007
+Command Line Processor for DB2 Client 11.5.7.0
+
+You can issue database manager commands and SQL statements from the command
+prompt. For example:
+ db2 => connect to sample
+ db2 => bind sample.bnd
+
+For general help, type: ?.
+For command help, type: ? command, where command can be
+the first few keywords of a database manager command. For example:
+ ? CATALOG DATABASE for help on the CATALOG DATABASE command
+ ? CATALOG for help on all of the CATALOG commands.
+
+To exit db2 interactive mode, type QUIT at the command prompt. Outside
+interactive mode, all commands must be prefixed with 'db2'.
+To list the current command option settings, type LIST COMMAND OPTIONS.
+
+For more detailed help, refer to the Online Reference Manual.
+```
+
+```sql
+db2 => quit
+DB20000I The QUIT command completed successfully.
+```
+
+Now configure azacsnap to use localhost. Once this preliminary test as the `azacsnap` user works correctly, go on to configure (`azacsnap -c configure`) with `serverAddress=localhost` and test (`azacsnap -c test --test db2`) azacsnap database connectivity.
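A sketch of that local test run; the configuration filename is illustrative:

```bash
azacsnap -c test --test db2 --configfile Db2.json   # expects serverAddress=localhost
```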
++
+#### Db2 remote connectivity
+
+If AzAcSnap has been installed following option 2, then be sure to allow SSH access to the Db2 database instance per the following example setup.
+
+Log in to the AzAcSnap system as the `azacsnap` user and generate a public/private SSH key pair.
+
+```bash
+ssh-keygen
+```
+
+```output
+Generating public/private rsa key pair.
+Enter file in which to save the key (/home/azacsnap/.ssh/id_rsa):
+Enter passphrase (empty for no passphrase):
+Enter same passphrase again:
+Your identification has been saved in /home/azacsnap/.ssh/id_rsa.
+Your public key has been saved in /home/azacsnap/.ssh/id_rsa.pub.
+The key fingerprint is:
+SHA256:4cr+0yN8/dawBeHtdmlfPnlm1wRMTO/mNYxarwyEFLU azacsnap@db2-02
+The key's randomart image is:
++---[RSA 2048]----+
+| ... o. |
+| . . +. |
+| .. E + o.|
+| .... B..|
+| S. . o *=|
+| . . . o o=X|
+| o. . + .XB|
+| . + + + +oX|
+| ...+ . =.o+|
++----[SHA256]-----+
+```
+
+Get the contents of the public key.
+
+```bash
+cat .ssh/id_rsa.pub
+```
+
+```output
+ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCb4HedCPdIeft4DUp7jwSDUNef52zH8xVfu5sSErWUw3hhRQ7KV5sLqtxom7an2a0COeO13gjCiTpwfO7UXH47dUgbz+KfwDaBdQoZdsp8ed1WI6vgCRuY4sb+rY7eiqbJrLnJrmgdwZkV+HSOvZGnKEV4Y837UHn0BYcAckX8DiRl7gkrbZUPcpkQYHGy9bMmXO+tUuxLM0wBrzvGcPPZ azacsnap@db2-02
+```
+
+Log in to the IBM Db2 system as the Db2 Instance User.
+
+Add the contents of the AzAcSnap user's public key to the Db2 Instance Users `authorized_keys` file.
+
+```bash
+echo "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCb4HedCPdIeft4DUp7jwSDUNef52zH8xVfu5sSErWUw3hhRQ7KV5sLqtxom7an2a0COeO13gjCiTpwfO7UXH47dUgbz+KfwDaBdQoZdsp8ed1WI6vgCRuY4sb+rY7eiqbJrLnJrmgdwZkV+HSOvZGnKEV4Y837UHn0BYcAckX8DiRl7gkrbZUPcpkQYHGy9bMmXO+tUuxLM0wBrzvGcPPZ azacsnap@db2-02" >> ~/.ssh/authorized_keys
+```
+
+Log in to the AzAcSnap system as the `azacsnap` user and test SSH access.
+
+```bash
+ssh <InstanceUser>@<ServerAddress>
+```
+
+```output
+[InstanceUser@ServerName ~]$
+```
+
+Test the user can run the `db2` command line processor.
+
+```bash
+db2
+```
+
+```output
+(c) Copyright IBM Corporation 1993,2007
+Command Line Processor for DB2 Client 11.5.7.0
+
+You can issue database manager commands and SQL statements from the command
+prompt. For example:
+ db2 => connect to sample
+ db2 => bind sample.bnd
+
+For general help, type: ?.
+For command help, type: ? command, where command can be
+the first few keywords of a database manager command. For example:
+ ? CATALOG DATABASE for help on the CATALOG DATABASE command
+ ? CATALOG for help on all of the CATALOG commands.
+
+To exit db2 interactive mode, type QUIT at the command prompt. Outside
+interactive mode, all commands must be prefixed with 'db2'.
+To list the current command option settings, type LIST COMMAND OPTIONS.
+
+For more detailed help, refer to the Online Reference Manual.
+```
+
+```sql
+db2 => quit
+DB20000I The QUIT command completed successfully.
+```
+
+```bash
+[prj@db2-02 ~]$ exit
+```
+
+```output
+logout
+Connection to <serverAddress> closed.
+```
+
+Once this works correctly, go on to configure (`azacsnap -c configure`) with the Db2 server's external IP address and test (`azacsnap -c test --test db2`) azacsnap database connectivity.
+
+Run the `azacsnap` test command:
+
+```bash
+cd ~/bin
+azacsnap -c test --test db2 --configfile Db2.json
+```
+
+```output
+BEGIN : Test process started for 'db2'
+BEGIN : Db2 DB tests
+PASSED: Successful connectivity to Db2 DB version v11.5.7.0
+END : Test process complete for 'db2'
+```
## Installing the snapshot tools

The downloadable self-installer is designed to make the snapshot tools easy to set up and run with
-non-root user privileges (for example, azacsnap). The installer will set up the user and put the snapshot tools
+non-root user privileges (for example, azacsnap). The installer sets up the user and puts the snapshot tools
into the user's `$HOME/bin` subdirectory (default = `/home/azacsnap/bin`). The self-installer tries to determine the correct settings and paths for all the files based on the
-configuration of the user performing the installation (for example, root). If the previous setup steps (Enable
-communication with storage and SAP HANA) were run as root, then the installation will copy the
-private key and the `hdbuserstore` to the backup user's location. The steps to enable communication with the storage back-end
-and SAP HANA can be manually done by a knowledgeable administrator after the installation.
+configuration of the user performing the installation (for example, root). If the previous setup steps (Enable
+communication with storage and SAP HANA) were run as root, then the installation copies the
+private key and the `hdbuserstore` to the backup user's location. The steps to enable communication with the storage back-end
+and database can be manually done by a knowledgeable administrator after the installation.
> [!NOTE]
> For earlier SAP HANA on Azure Large Instance installations, the directory of pre-installed snapshot tools was `/hana/shared/<SID>/exe/linuxx86_64/hdb`.
-With the [pre-requisite steps](#prerequisites-for-installation) completed, it's now possible to install the snapshot tools using the self-installer as follows:
+With the [prerequisite steps](#prerequisites-for-installation) completed, it's now possible to install the snapshot tools using the self-installer as follows:
1. Copy the downloaded self-installer to the target system.
1. Execute the self-installer as the `root` user, see the following example. If necessary, make the file executable using the `chmod +x *.run` command.
-Running the self-installer command without any arguments will display help on using the installer to
-install the snapshot tools as follows:
+Running the self-installer command without any arguments displays help on using the installer as follows:
```bash
chmod +x azacsnap_installer_v5.0.run
Examples of a target directory are ./tmp or /usr/local/bin
> [!NOTE]
> The self-installer has an option to extract (-X) the snapshot tools from the bundle without
-performing any user creation and setup. This allows an experienced administrator to
-complete the setup steps manually, or to copy the commands to upgrade an existing
+performing any user creation and setup. This allows an experienced administrator to
+complete the setup steps manually, or to copy the commands to upgrade an existing
installation.

### Easy installation of snapshot tools (default)

The installer has been designed to quickly install the snapshot tools for SAP HANA on Azure. By default, if the
-installer is run with only the -I option, it will do the following steps:
+installer is run with only the -I option, it does the following steps:
-1. Create Snapshot user 'azacsnap', home directory, and set group membership.
+1. Create Snapshot user `azacsnap`, home directory, and set group membership.
1. Configure the azacsnap user's login `~/.profile`.
-1. Search filesystem for directories to add to azacsnap's `$PATH`, these are typically the paths to
- the SAP HANA tools, such as `hdbsql` and `hdbuserstore`.
+1. Search filesystem for directories to add to azacsnap's `$PATH`. This task allows the user who runs azacsnap to use SAP HANA commands, such as `hdbsql` and `hdbuserstore`.
1. Search filesystem for directories to add to azacsnap's `$LD_LIBRARY_PATH`. Many commands
- require a library path to be set in order to execute correctly, this configures it for the
+ require a library path to be set in order to execute correctly; this task configures it for the
installed user.
-1. Copy the SSH keys for back-end storage for azacsnap from the "root" user (the user running the install). This assumes the "root" user has
+1. Copy the SSH keys for back-end storage for azacsnap from the "root" user (the user running the install). This task assumes the "root" user has
already configured connectivity to the storage (for more information, see section [Enable communication with storage](#enable-communication-with-storage)).
-3. Copy the SAP HANA connection secure user store for the target user, azacsnap. This
+3. Copy the SAP HANA connection secure user store for the target user, azacsnap. This task
assumes the "root" user has already configured the secure user store (for more information, see section "Enable communication with SAP HANA"). 1. The snapshot tools are extracted into `/home/azacsnap/bin/`. 1. The commands in `/home/azacsnap/bin/` have their permissions set (ownership and executable bit, etc.).
The following output shows the steps to complete after running the installer wit
1. Run your first snapshot backup
   1. `azacsnap -c backup --volume data --prefix=hana_test --retention=1`
-Step 2 will be necessary if "[Enable communication with database](#enable-communication-with-database)" wasn't done before the
+Step 2 is necessary if "[Enable communication with database](#enable-communication-with-database)" wasn't done before the
installation.

> [!NOTE]
This section explains how to configure the database.
### SAP HANA Configuration
-There are some recommended changes to be applied to SAP HANA to ensure protection of the log backups and catalog. By default, the `basepath_logbackup` and `basepath_catalogbackup` will output their files to the `$(DIR_INSTANCE)/backup/log` directory, and it's unlikely this path is on a volume which `azacsnap` is configured to snapshot these files won't be protected with storage snapshots.
+There are some recommended changes to apply to SAP HANA to ensure protection of the log backups and catalog. By default, `basepath_logbackup` and `basepath_catalogbackup` are set so SAP HANA puts related files into the `$(DIR_INSTANCE)/backup/log` directory. It's unlikely this location is on a volume that `azacsnap` is configured to snapshot, so these files won't be protected with storage snapshots.
The following `hdbsql` command examples demonstrate setting the log and catalog paths to locations that are on storage volumes that can be snapshot by `azacsnap`. Be sure to check the values on the command line match the local SAP HANA configuration.
drwxr-x 4 h80adm sapsys 4096 Jan 17 06:55 /hana/logbackups/H80/catalog
```
If the path needs to be created, the following example creates the path and sets the correct
-ownership and permissions. These commands will need to be run as root.
+ownership and permissions. These commands need to be run as root.
```bash
mkdir /hana/logbackups/H80/catalog
ls -ld /hana/logbackups/H80/catalog
drwxr-x 4 h80adm sapsys 4096 Jan 17 06:55 /hana/logbackups/H80/catalog
```
-The following example will change the SAP HANA setting.
+The following example changes the SAP HANA setting.
```bash
hdbsql -jaxC -n <HANA_ip_address>:30013 -i 00 -u SYSTEM -p <SYSTEM_USER_PASSWORD> "ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM') SET ('persistence', 'basepath_catalogbackup') = '/hana/logbackups/H80/catalog' WITH RECONFIGURE"
hdbsql -jaxC -n <HANA_ip_address>:30013 -i 00 -u SYSTEM -p <SYSTEM_USER_PASSWORD
### Check log and catalog backup locations
-After making the changes to the log and catalog backup locations, confirm the settings are correct with the following command.
-In this example, the settings that have been set following the example will display as SYSTEM settings.
+After making the changes to the log and catalog back-up locations, confirm the settings are correct with the following command.
+In this example, the settings that have been set following the example are displayed as SYSTEM settings.
> This query also returns the DEFAULT settings for comparison.
global.ini,SYSTEM,,,persistence,basepath_logvolumes,/hana/log/H80
### Configure log backup timeout
-The default setting for SAP HANA to perform a log backup is 900 seconds (15 minutes). It's
+The default setting for SAP HANA to perform a log back-up is 900 seconds (15 minutes). It's
recommended to reduce this value to 300 seconds (for example, 5 minutes). Then it's possible to run regular
-backups of these files (for example, every 10 minutes). This is done by adding the log_backups volumes to the OTHER volume section of the
+back-ups of these files (for example, every 10 minutes). These back-ups can be taken by adding the log_backups volumes to the OTHER volume section of the
configuration file.

```bash
hdbsql -jaxC -n <HANA_ip_address>:30013 -i 00 -u SYSTEM -p <SYSTEM_USER_PASSWORD
#### Check log backup timeout
-After making the change to the log backup timeout, check to ensure this has been set as follows.
-In this example, the settings that have been set will display as the SYSTEM settings, but this
+After making the change to the log back-up timeout, check to ensure the timeout is set as follows.
+In this example, the settings that have been set are displayed as SYSTEM settings, but this
query also returns the DEFAULT settings for comparison.

```bash
The following changes must be applied to the Oracle Database to allow for monito
QUIT
```
+# [IBM Db2](#tab/db2)
+
+No special database configuration is required for Db2 as we're using the Instance User's local operating system environment.
+
## Next steps
azure-netapp-files Azacsnap Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azacsnap-introduction.md
na Previously updated : 05/03/2023 Last updated : 08/21/2023
Azure Application Consistent Snapshot tool (AzAcSnap) is a command-line tool tha
- **Databases** - SAP HANA (refer to [support matrix](azacsnap-get-started.md#snapshot-support-matrix-from-sap) for details) - Oracle Database release 12 or later (refer to [Oracle VM images and their deployment on Microsoft Azure](../virtual-machines/workloads/oracle/oracle-vm-solutions.md) for details)
+ - IBM Db2 for LUW on Linux-only version 10.5 or later (refer to [IBM Db2 Azure Virtual Machines DBMS deployment for SAP workload](../virtual-machines/workloads/sap/dbms_guide_ibm.md) for details)
- **Operating Systems** - SUSE Linux Enterprise Server 12+
azure-netapp-files Azacsnap Preview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azacsnap-preview.md
na-+ Previously updated : 12/16/2022 Last updated : 08/21/2023
This article provides a guide on the setup and usage of the new features in preview for **AzAcSnap**. This guide should be read along with the main documentation for AzAcSnap at [aka.ms/azacsnap](./azacsnap-introduction.md).
-The preview features provided with **AzAcSnap 7** are:
+The preview features provided with **AzAcSnap 9** are:
- Azure NetApp Files Backup.
-- IBM Db2 Database.
- Azure Managed Disk.
-- Azure Key Vault support for storing Service Principal.

## Providing feedback
This can be enabled in AzAcSnap by setting `"anfBackup": "renameOnly"` in the co
This can also be done using the `azacsnap -c configure --configuration edit --configfile <configfilename>` and when asked to `Enter new value for 'ANF Backup (none, renameOnly)' (current = 'none'):` enter `renameOnly`.
-## IBM Db2 Database
-
-### Supported platforms and operating systems
-
-> [!NOTE]
-> Support for IBM Db2 is Preview feature.
-> This section's content supplements [What is Azure Application Consistent Snapshot tool](azacsnap-introduction.md) page.
-
-New database platforms and operating systems supported with this preview release.
--- **Databases**
- - IBM Db2 for LUW on Linux-only is in preview as of Db2 version 10.5 (refer to [IBM Db2 Azure Virtual Machines DBMS deployment for SAP workload](../virtual-machines/workloads/sap/dbms_guide_ibm.md) for details)
--
-### Enable communication with database
-
-> [!NOTE]
-> Support for IBM Db2 is Preview feature.
-> This section's content supplements [Install Azure Application Consistent Snapshot tool](azacsnap-installation.md) page.
-
-This section explains how to enable communication with the database. Ensure the database you're using is correctly selected from the tabs.
-
-# [IBM Db2](#tab/db2)
-
-The snapshot tools issue commands to the IBM Db2 database using the command line processor `db2` to enable and disable backup mode.
-
-After putting the database in backup mode, `azacsnap` will query the IBM Db2 database to get a list of "protected paths", which are part of the database where backup-mode is active. This list is output into an external file, which is in the same location and basename as the log file, but with a ".\<DBName>-protected-paths" extension (output filename detailed in the AzAcSnap log file).
-
-AzAcSnap uses the IBM Db2 command line processor `db2` to issue SQL commands, such as `SET WRITE SUSPEND` or `SET WRITE RESUME`. Therefore AzAcSnap should be installed in one of the following two ways:
-
- 1. Installed onto the database server, then complete the setup with "[Local connectivity](#local-connectivity)".
- 1. Installed onto a centralized backup system, then complete the setup with "[Remote connectivity](#remote-connectivity)".
-
-#### Local connectivity
-
-If AzAcSnap has been installed onto the database server, then be sure to add the `azacsnap` user to the correct Linux group and import the Db2 instance user's profile per the following example setup.
-
-##### `azacsnap` user permissions
-
-The `azacsnap` user should belong to the same Db2 group as the database instance user. Here we are getting the group membership of the IBM Db2 installation's database instance user `db2tst`.
-
-```bash
-id db2tst
-```
-
-```output
-uid=1101(db2tst) gid=1001(db2iadm1) groups=1001(db2iadm1)
-```
-
-From the output we can confirm the `db2tst` user has been added to the `db2iadm1` group, therefore add the `azacsnap` user to the group.
-
-```bash
-usermod -a -G db2iadm1 azacsnap
-```
-
-##### `azacsnap` user profile
-
-The `azacsnap` user will need to be able to execute the `db2` command. By default the `db2` command will not be in the `azacsnap` user's $PATH, therefore add the following to the user's `.bashrc` file using your own IBM Db2 installation value for `INSTHOME`.
-
-```output
-# The following four lines have been added to allow this user to run the DB2 command line processor.
-INSTHOME="/db2inst/db2tst"
-if [ -f ${INSTHOME}/sqllib/db2profile ]; then
- . ${INSTHOME}/sqllib/db2profile
-fi
-```
-
-Test the user can run the `db2` command line processor.
-
-```bash
-su - azacsnap
-db2
-```
-
-```output
-(c) Copyright IBM Corporation 1993,2007
-Command Line Processor for DB2 Client 11.5.7.0
-
-You can issue database manager commands and SQL statements from the command
-prompt. For example:
- db2 => connect to sample
- db2 => bind sample.bnd
-
-For general help, type: ?.
-For command help, type: ? command, where command can be
-the first few keywords of a database manager command. For example:
- ? CATALOG DATABASE for help on the CATALOG DATABASE command
- ? CATALOG for help on all of the CATALOG commands.
-
-To exit db2 interactive mode, type QUIT at the command prompt. Outside
-interactive mode, all commands must be prefixed with 'db2'.
-To list the current command option settings, type LIST COMMAND OPTIONS.
-
-For more detailed help, refer to the Online Reference Manual.
-```
-
-```sql
-db2 => quit
-DB20000I The QUIT command completed successfully.
-```
-
-Now configure azacsnap to user localhost.
-Once this is working correctly go on to configure (`azacsnap -c configure`) with the `serverAddress=localhost` and test (`azacsnap -c test --test db2`) azacsnap database connectivity.
--
-#### Remote connectivity
-
-If AzAcSnap has been installed following option 2, then be sure to allow SSH access to the Db2 database instance per the following example setup.
--
-Log in to the AzAcSnap system as the `azacsnap` user and generate a public/private SSH key pair.
-
-```bash
-ssh-keygen
-```
-
-```output
-Generating public/private rsa key pair.
-Enter file in which to save the key (/home/azacsnap/.ssh/id_rsa):
-Enter passphrase (empty for no passphrase):
-Enter same passphrase again:
-Your identification has been saved in /home/azacsnap/.ssh/id_rsa.
-Your public key has been saved in /home/azacsnap/.ssh/id_rsa.pub.
-The key fingerprint is:
-SHA256:4cr+0yN8/dawBeHtdmlfPnlm1wRMTO/mNYxarwyEFLU azacsnap@db2-02
-The key's randomart image is:
-+[RSA 2048]-+
-| ... o. |
-| . . +. |
-| .. E + o.|
-| .... B..|
-| S. . o *=|
-| . . . o o=X|
-| o. . + .XB|
-| . + + + +oX|
-| ...+ . =.o+|
-+-[SHA256]--+
-```
-
-Get the contents of the public key.
-
-```bash
-cat .ssh/id_rsa.pub
-```
-
-```output
-ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCb4HedCPdIeft4DUp7jwSDUNef52zH8xVfu5sSErWUw3hhRQ7KV5sLqtxom7an2a0COeO13gjCiTpwfO7UXH47dUgbz+KfwDaBdQoZdsp8ed1WI6vgCRuY4sb+rY7eiqbJrLnJrmgdwZkV+HSOvZGnKEV4Y837UHn0BYcAckX8DiRl7gkrbZUPcpkQYHGy9bMmXO+tUuxLM0wBrzvGcPPZ azacsnap@db2-02
-```
-
-Log in to the IBM Db2 system as the Db2 Instance User.
-
-Add the contents of the AzAcSnap user's public key to the Db2 Instance User's `authorized_keys` file.
-
-```bash
-echo "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCb4HedCPdIeft4DUp7jwSDUNef52zH8xVfu5sSErWUw3hhRQ7KV5sLqtxom7an2a0COeO13gjCiTpwfO7UXH47dUgbz+KfwDaBdQoZdsp8ed1WI6vgCRuY4sb+rY7eiqbJrLnJrmgdwZkV+HSOvZGnKEV4Y837UHn0BYcAckX8DiRl7gkrbZUPcpkQYHGy9bMmXO+tUuxLM0wBrzvGcPPZ azacsnap@db2-02" >> ~/.ssh/authorized_keys
-```
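If the `ssh-copy-id` utility is available on the AzAcSnap system, it performs the same public-key installation in one step when run as the `azacsnap` user, avoiding the manual copy and paste:

```bash
ssh-copy-id <InstanceUser>@<ServerAddress>
```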
-
-Log in to the AzAcSnap system as the `azacsnap` user and test SSH access.
-
-```bash
-ssh <InstanceUser>@<ServerAddress>
-```
-
-```output
-[InstanceUser@ServerName ~]$
-```
-
-Test that the user can run the `db2` command line processor.
-
-```bash
-db2
-```
-
-```output
-(c) Copyright IBM Corporation 1993,2007
-Command Line Processor for DB2 Client 11.5.7.0
-
-You can issue database manager commands and SQL statements from the command
-prompt. For example:
- db2 => connect to sample
- db2 => bind sample.bnd
-
-For general help, type: ?.
-For command help, type: ? command, where command can be
-the first few keywords of a database manager command. For example:
- ? CATALOG DATABASE for help on the CATALOG DATABASE command
- ? CATALOG for help on all of the CATALOG commands.
-
-To exit db2 interactive mode, type QUIT at the command prompt. Outside
-interactive mode, all commands must be prefixed with 'db2'.
-To list the current command option settings, type LIST COMMAND OPTIONS.
-
-For more detailed help, refer to the Online Reference Manual.
-```
-
-```sql
-db2 => quit
-DB20000I The QUIT command completed successfully.
-```
-
-```bash
-[prj@db2-02 ~]$ exit
-```
-
-```output
-logout
-Connection to <serverAddress> closed.
-```
-
-Once this is working correctly, go on to configure (`azacsnap -c configure`) with the Db2 server's external IP address and test (`azacsnap -c test --test db2`) AzAcSnap database connectivity.
-
-Run the `azacsnap` test command:
-
-```bash
-cd ~/bin
-azacsnap -c test --test db2 --configfile Db2.json
-```
-
-```output
-BEGIN : Test process started for 'db2'
-BEGIN : Db2 DB tests
-PASSED: Successful connectivity to Db2 DB version v11.5.7.0
-END : Test process complete for 'db2'
-```
---
-### Configuring the database
-
-This section explains how to configure the database.
-
-# [IBM Db2](#tab/db2)
-
-No special database configuration is required for Db2 as we are using the Instance User's local operating system environment.
---
-### Configuring AzAcSnap
-
-This section explains how to configure AzAcSnap for the specified database.
-
-> [!NOTE]
-> Support for Db2 is a Preview feature.
-> This section's content supplements the [Configure Azure Application Consistent Snapshot tool](azacsnap-cmd-ref-configure.md) page.
-
-### Details of required values
-
-The following sections provide detailed guidance on the various values required for the configuration file.
-
-# [IBM Db2](#tab/db2)
-
-#### Db2 Database values for configuration
-
-When adding a Db2 database to the configuration, the following values are required:
-
-- **Db2 Server's Address** = The database server hostname or IP address.
-  - If the Db2 Server Address (serverAddress) matches '127.0.0.1' or 'localhost', then AzAcSnap executes all `db2` commands locally (refer to "Local connectivity"). Otherwise, AzAcSnap uses the serverAddress as the host to connect to via SSH, using the "Instance User" as the SSH login name. This can be validated with `ssh <instanceUser>@<serverAddress>`, replacing instanceUser and serverAddress with the respective values (refer to "Remote connectivity").
-- **Instance User** = The database System Instance User.
-- **SID** = The database System ID.
---

## Azure Managed Disk

> [!NOTE]
Although `azacsnap` is currently missing the `-c restore` option for Azure Manag
-## Azure Key Vault
-
-From AzAcSnap v5.1, it's possible to store the Service Principal securely as a Secret in Azure Key Vault. This feature allows Service Principal credentials to be centralized,
-so that an alternate administrator can set up the Secret for AzAcSnap to use.
-
-Follow these steps to set up Azure Key Vault and store the Service Principal in a Secret:
-
-1. Within an Azure Cloud Shell session, make sure you're logged on at the subscription where you want to create the Azure Key Vault:
-
- ```azurecli-interactive
- az account show
- ```
-
-1. If the subscription isn't correct, use the following command to set the Cloud Shell to the correct subscription:
-
- ```azurecli-interactive
- az account set -s <subscription name or id>
- ```
-
-1. Create Azure Key Vault
-
- ```azurecli-interactive
- az keyvault create --name "<AzureKeyVaultName>" -g <ResourceGroupName>
- ```
-
-1. Create the trust relationship and assign the policy for virtual machine to get the Secret
-
- 1. Show AzAcSnap virtual machine Identity
-
- If the virtual machine already has an identity created, retrieve it as follows:
-
- ```azurecli-interactive
- az vm identity show --name "<VMName>" --resource-group "<ResourceGroup>"
- ```
-
- The `"principalId"` in the output is used as the `--object-id` value when setting the Policy with `az keyvault set-policy`.
-
- ```output
- {
- "principalId": "99z999zz-99z9-99zz-99zz-9z9zz999zz99",
- "tenantId": "99z999zz-99z9-99zz-99zz-9z9zz999zz99",
- "type": "SystemAssigned, UserAssigned",
- "userAssignedIdentities": {
- "/subscriptions/99z999zz-99z9-99zz-99zz-9z9zz999zz99/resourceGroups/AzSecPackAutoConfigRG/providers/Microsoft.ManagedIdentity/userAssignedIdentities/AzSecPackAutoConfigUA-eastus2": {
- "clientId": "99z999zz-99z9-99zz-99zz-9z9zz999zz99",
- "principalId": "99z999zz-99z9-99zz-99zz-9z9zz999zz99"
- }
- }
- }
- ```
-
- 1. Set AzAcSnap virtual machine Identity (if necessary)
-
- If the VM doesn't have an identity, create it as follows:
-
- ```azurecli-interactive
- az vm identity assign --name "<VMName>" --resource-group "<ResourceGroup>"
- ```
-
- The `"systemAssignedIdentity"` in the output is used as the `--object-id` value when setting the Policy with `az keyvault set-policy`.
-
- ```output
- {
- "systemAssignedIdentity": "99z999zz-99z9-99zz-99zz-9z9zz999zz99",
- "userAssignedIdentities": {
- "/subscriptions/99z999zz-99z9-99zz-99zz- 9z9zz999zz99/resourceGroups/AzSecPackAutoConfigRG/providers/Microsoft.ManagedIdentity/userAssignedIdentities/AzSecPackAutoConfigUA-eastus2": {
- "clientId": "99z999zz-99z9-99zz-99zz-9z9zz999zz99",
- "principalId": "99z999zz-99z9-99zz-99zz-9z9zz999zz99"
- }
- }
- }
- ```
-
- 1. Assign a suitable policy for the virtual machine to be able to retrieve the Secret from the Key Vault.
-
- ```azurecli-interactive
- az keyvault set-policy --name "<AzureKeyVaultName>" --object-id "<VMIdentity>" --secret-permissions get
- ```
-
-1. Create Azure Key Vault Secret
-
- Create the secret, which will store the Service Principal credential information.
-
- It's possible to paste the contents of the Service Principal directly. In the **Bash** Cloud Shell example below, type a single apostrophe after `--value` and
- press the `[Enter]` key, then paste the contents of the Service Principal, close the content with another single apostrophe, and press the `[Enter]` key again.
- This command creates the Secret and stores it in Azure Key Vault.
-
- > [!TIP]
- > If you have a separate Service Principal per installation, the `"<NameOfSecret>"` could be the SID, or some other suitable unique identifier.
-
- The following example is for the **Bash** Cloud Shell:
-
- ```azurecli-interactive
- az keyvault secret set --name "<NameOfSecret>" --vault-name "<AzureKeyVaultName>" --value '
- {
- "clientId": "99z999zz-99z9-99zz-99zz-9z9zz999zz99",
- "clientSecret": "<ClientSecret>",
- "subscriptionId": "99z999zz-99z9-99zz-99zz-9z9zz999zz99",
- "tenantId": "99z999zz-99z9-99zz-99zz-9z9zz999zz99",
- "activeDirectoryEndpointUrl": "https://login.microsoftonline.com",
- "resourceManagerEndpointUrl": "https://management.azure.com/",
- "activeDirectoryGraphResourceId": "https://graph.windows.net/",
- "sqlManagementEndpointUrl": "https://management.core.windows.net:8443/",
- "galleryEndpointUrl": "https://gallery.azure.com/",
- "managementEndpointUrl": "https://management.core.windows.net/"
- }'
- ```
-
- The following example is for the **PowerShell** Cloud Shell:
-
- > [!WARNING]
- > In PowerShell the double quotes have to be escaped with an additional double quote, so one double quote (") becomes two double quotes ("").
-
- ```azurecli-interactive
- az keyvault secret set --name "<NameOfSecret>" --vault-name "<AzureKeyVaultName>" --value '
- {
- ""clientId"": ""99z999zz-99z9-99zz-99zz-9z9zz999zz99"",
- ""clientSecret"": ""<ClientSecret>"",
- ""subscriptionId"": ""99z999zz-99z9-99zz-99zz-9z9zz999zz99"",
- ""tenantId"": ""99z999zz-99z9-99zz-99zz-9z9zz999zz99"",
- ""activeDirectoryEndpointUrl"": ""https://login.microsoftonline.com"",
- ""resourceManagerEndpointUrl"": ""https://management.azure.com/"",
- ""activeDirectoryGraphResourceId"": ""https://graph.windows.net/"",
- ""sqlManagementEndpointUrl"": ""https://management.core.windows.net:8443/"",
- ""galleryEndpointUrl"": ""https://gallery.azure.com/"",
- ""managementEndpointUrl"": ""https://management.core.windows.net/""
- }'
- ```
-
- The output of the command `az keyvault secret set` will have the URI value to use as the `"authFile"` entry in the AzAcSnap JSON configuration file. The URI is
- the value of the `"id"` in the output below (for example, `"https://<AzureKeyVaultName>.vault.azure.net/secrets/<NameOfSecret>/z9999999z9999999z9999999"`).
-
- ```output
- {
- "attributes": {
- "created": "2022-02-23T20:21:01+00:00",
- "enabled": true,
- "expires": null,
- "notBefore": null,
- "recoveryLevel": "Recoverable+Purgeable",
- "updated": "2022-02-23T20:21:01+00:00"
- },
- "contentType": null,
- "id": "https://<AzureKeyVaultName>.vault.azure.net/secrets/<NameOfSecret>/z9999999z9999999z9999999",
- "kid": null,
- "managed": null,
- "name": "AzureAuth",
- "tags": {
- "file-encoding": "utf-8"
- },
- "value": "\n{\n \"clientId\": \"99z999zz-99z9-99zz-99zz-9z9zz999zz99\",\n \"clientSecret\": \"<ClientSecret>\",\n \"subscriptionId\": \"99z999zz-99z9-99zz-99zz-9z9zz999zz99\",\n \"tenantId\": \"99z999zz-99z9-99zz-99zz-9z9zz999zz99\",\n \"activeDirectoryEndpointUrl\": \"https://login.microsoftonline.com\",\n \"resourceManagerEndpointUrl\": \"https://management.azure.com/\",\n \"activeDirectoryGraphResourceId\": \"https://graph.windows.net/\",\n \"sqlManagementEndpointUrl\": \"https://management.core.windows.net:8443/\",\n \"galleryEndpointUrl\": \"https://gallery.azure.com/\",\n \"managementEndpointUrl\": \"https://management.core.windows.net/\"\n}"
- }
- ```
-
-1. Update AzAcSnap JSON configuration file
-
- Replace the value of the `authFile` entry with the Secret's ID value. You can make this change by editing the file with a tool like `vi`, or by using the
- `azacsnap -c configure --configuration edit` option (see the sketch after the example values below).
-
- 1. Old Value
-
- ```output
- "authFile": "azureauth.json"
- ```
-
- 1. New Value
-
- ```output
- "authFile": "https://<AzureKeyVaultName>.vault.azure.net/secrets/<NameOfSecret>/z9999999z9999999z9999999"
- ```
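A sketch of the non-interactive edit path, assuming the default configuration file name `azacsnap.json`:

```bash
azacsnap -c configure --configuration edit --configfile azacsnap.json
```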
-

## Next steps

- [Get started](azacsnap-get-started.md)
azure-netapp-files Azacsnap Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azacsnap-release-notes.md
na Previously updated : 06/27/2023 Last updated : 08/21/2023
Download the [latest release](https://aka.ms/azacsnapinstaller) of the installer
For specific information on Preview features, refer to the [AzAcSnap Preview](azacsnap-preview.md) page.
+## Aug-2023
+
+### AzAcSnap 9 (Build: 1AE5640)
+
+AzAcSnap 9 is being released with the following fixes and improvements:
+
+- Features moved to GA (generally available):
+ - IBM Db2 Database support.
+ - [System Managed Identity](azacsnap-installation.md#azure-system-managed-identity) support for easier setup while improving security posture.
+- Fixes and Improvements:
+ - Configure (`-c configure`) changes:
+ - Allows for a blank value for `authFile` in the configuration file when using System Managed Identity.
+- Features added to [Preview](azacsnap-preview.md):
+ - None.
+- Features removed:
+ - Azure Key Vault support has been removed from Preview; it isn't needed now that AzAcSnap supports a System Managed Identity directly.
+
+Download the [AzAcSnap 9](https://aka.ms/azacsnap-9) installer.
+
## Jun-2023

### AzAcSnap 8b (Build: 1AD3679)
AzAcSnap 8b is being released with the following fixes and improvements:
- Fixes and Improvements:
  - General improvement to `azacsnap` command exit codes.
- - `azacsnap` should return an exit code of 0 (zero) when it has run as expected, otherwise it should return an exit code of non-zero. For example, running `azacsnap` will return non-zero as it has not done anything and will show usage information whereas `azacsnap -h` will return exit-code of zero as it's expected to return usage information.
+ - `azacsnap` should return an exit code of 0 (zero) when it has run as expected, otherwise it should return an exit code of non-zero. For example, running `azacsnap` returns non-zero as it hasn't done anything and shows usage information whereas `azacsnap -h` returns exit-code of zero as it's performing as expected by returning usage information.
  - Any failure in `--runbefore` exits before any backup activity and returns the `--runbefore` exit code.
  - Any failure in `--runafter` returns the `--runafter` exit code.
- Backup (`-c backup`) changes:
AzAcSnap 8 is being released with the following fixes and improvements:
- Backup (`-c backup`) changes:
  - Fix for incorrect error output when using `-c backup` and the database has 'backint' configured.
  - Remove lower-case conversion for the anfBackup rename-only option using `-c backup` so the snapshot name maintains the case of the Volume name.
- - Fix for when a snapshot is created even though SAP HANA wasn't put into backup-mode. Now if SAP HANA cannot be put into backup-mode, AzAcSnap immediately exits with an error.
+ - Fix for when a snapshot is created even though SAP HANA wasn't put into backup-mode. Now if SAP HANA can't be put into backup-mode, AzAcSnap immediately exits with an error.
- Details (`-c details`) changes:
  - Fix for listing snapshot details with `-c details` when using Azure Large Instance storage.
- Logging enhancements:
AzAcSnap v5.1 Preview (Build: 20220125.85030) has been released with the followi
AzAcSnap v5.0.2 (Build: 20210827.19086) is provided as a patch update to the v5.0 branch with the following fixes and improvements:

-- Ignore `ssh` 255 exit codes. In some cases the `ssh` command, which is used to communicate with storage on Azure Large Instance, would emit an exit code of 255 when there were no errors or execution failures (refer `man ssh` "EXIT STATUS") - then AzAcSnap would trap this exit code as a failure and abort. With this update extra verification is done to validate correct execution, this includes parsing `ssh` STDOUT and STDERR for errors in addition to traditional exit code checks.
-- Fix the installer's check for the location of the hdbuserstore. The installer would search the filesystem for an incorrect source directory for the hdbuserstore location for the user running the install - the installer now searches for `~/.hdb`. This fix is applicable to systems (for example, Azure Large Instance) where the hdbuserstore was pre-configured for the `root` user before installing `azacsnap`.
+- Ignore `ssh` 255 exit codes. In some cases the `ssh` command, which is used to communicate with storage on Azure Large Instance, would emit an exit code of 255 when there were no errors or execution failures (refer `man ssh` "EXIT STATUS") - then AzAcSnap would trap this exit code as a failure and abort. With this update extra verification is done to validate correct execution, this validation includes parsing `ssh` STDOUT and STDERR for errors in addition to traditional exit code checks.
+- Fix the installer's check for the location of the hdbuserstore. The installer would search the filesystem for an incorrect source directory for the hdbuserstore location for the user running the install - the installer now searches for `~/.hdb`. This fix is applicable to systems (for example, Azure Large Instance) where the hdbuserstore was preconfigured for the `root` user before installing `azacsnap`.
- Installer now shows the version it will install/extract (if the installer is run without any arguments).

## May-2021
azure-netapp-files Azure Government https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-government.md
ms.assetid:
na-+ Last updated 03/08/2023
azure-netapp-files Azure Netapp Files Create Volumes Smb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-create-volumes-smb.md
You can modify SMB share permissions using Microsoft Management Console (MMC).
* [Requirements and considerations for large volumes](large-volumes-requirements-considerations.md)
* [Mount a volume for Windows or Linux virtual machines](azure-netapp-files-mount-unmount-volumes-for-virtual-machines.md)
* [Resource limits for Azure NetApp Files](azure-netapp-files-resource-limits.md)
-* [Enable Active Directory Domain Services (AD DS) LDAP authentication for NFS volumes](configure-ldap-over-tls.md)
* [Enable Continuous Availability on existing SMB volumes](enable-continuous-availability-existing-SMB.md)
* [SMB encryption](azure-netapp-files-smb-performance.md#smb-encryption)
* [Troubleshoot volume errors for Azure NetApp Files](troubleshoot-volumes.md)
azure-netapp-files Azure Netapp Files Delegate Subnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-delegate-subnet.md
na Previously updated : 11/03/2022 Last updated : 09/05/2023 # Delegate a subnet to Azure NetApp Files
You must delegate a subnet to Azure NetApp Files. When you create a volume, you
## Considerations
-* The wizard for creating a new subnet defaults to a /24 network mask, which provides for 251 available IP addresses. Using a /28 network mask, which provides for 11 usable IP addresses, is sufficient for most use cases. You should consider a larger subnet (for example, /26 network mask) in scenarios such as SAP HANA where many volumes and storage endpoints are anticipated. You can also stay with the default network mask /24 as proposed by the wizard if you don't need to reserve many client or VM IP addresses in your Azure Virtual Network (VNet). Note that the network mask of the delegated network cannot be changed after the initial creation.
+* The wizard for creating a new subnet defaults to a /24 network mask, which provides for 251 available IP addresses. You should consider a larger subnet (for example, /26 network mask) in scenarios such as SAP HANA where many volumes and storage endpoints are anticipated. You can also stay with the default network mask /24 as proposed by the wizard if you don't need to reserve many client or VM IP addresses in your Azure Virtual Network (VNet). Note that the network mask of the delegated network cannot be changed after the initial creation.
* In each VNet, only one subnet can be delegated to Azure NetApp Files. Azure enables you to create multiple delegated subnets in a VNet. However, any attempts to create a new volume will fail if you use more than one delegated subnet. You can have only a single delegated subnet in a VNet. A NetApp account can deploy volumes into multiple VNets, each having its own delegated subnet.
azure-netapp-files Azure Netapp Files Network Topologies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-network-topologies.md
Azure NetApp Files volumes are designed to be contained in a special purpose sub
* Australia Southeast * Brazil South * Canada Central
+* Canada East
* Central India * East Asia * East US
Azure NetApp Files volumes are designed to be contained in a special purpose sub
* Switzerland West * UAE Central * UAE North
+* UK South
* West Europe * West US * West US 2
azure-netapp-files Azure Netapp Files Performance Considerations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-performance-considerations.md
na Previously updated : 08/02/2022 Last updated : 08/31/2023 # Performance considerations for Azure NetApp Files
-The [throughput limit](azure-netapp-files-service-levels.md) for a volume with automatic QoS is determined by a combination of the quota assigned to the volume and the service level selected. For volumes with manual QoS, the throughput limit can be defined individually. When you make performance plans about Azure NetApp Files, you need to understand several considerations.
+> [!IMPORTANT]
+> This article addresses performance considerations for *regular volumes* only.
+> For *large volumes*, see [Requirements and considerations for large volumes](large-volumes-requirements-considerations.md#requirements-and-considerations).
+
+The combination of the quota assigned to the volume and the selected service level determines the [throughput limit](azure-netapp-files-service-levels.md) for a volume with automatic QoS. For volumes with manual QoS, the throughput limit can be defined individually. When you make performance plans about Azure NetApp Files, you need to understand several considerations.
## Quota and throughput
-Throughput limits are a combination of read and write speed. The throughput limit is only one determinant of the actual performance that will be realized.
+Throughput limits are a combination of read and write speed. The throughput limit is only one determinant of the actual performance to be realized.
-Typical storage performance considerations, including read and write mix, the transfer size, random or sequential patterns, and many other factors will contribute to the total performance delivered.
+Typical storage performance considerations contribute to the total performance delivered. These include the read and write mix, the transfer size, random or sequential patterns, and many other factors.
Metrics are reported as aggregates of multiple data points collected during a five-minute interval. For more information about metrics aggregation, see [Azure Monitor Metrics aggregation and display explained](../azure-monitor/essentials/metrics-aggregation-explained.md). The maximum empirical throughput that has been observed in testing is 4,500 MiB/s. At the Premium storage tier, an automatic QoS volume quota of 70.31 TiB will provision a throughput limit that is high enough to achieve this level of performance.
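As a cross-check on those figures: the Premium tier provides 64 MiB/s per TiB of quota (used in the example below), and 70.31 TiB * 64 MiB/s per TiB ≈ 4,500 MiB/s.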
-In the case of automatic QoS volumes, if you are considering assigning volume quota amounts beyond 70.31 TiB, additional quota may be assigned to a volume for storing additional data. However, the added quota will not result in a further increase in actual throughput.
+For automatic QoS volumes, if you are considering assigning volume quota amounts beyond 70.31 TiB, additional quota may be assigned to a volume for storing more data. However, the added quota doesn't result in a further increase in actual throughput.
The same empirical throughput ceiling applies to volumes with manual QoS. The maximum throughput you can assign to a volume is 4,500 MiB/s.
If a workloadΓÇÖs performance is throughput-limit bound, it is possible to overp
For example, if an automatic QoS volume in the Premium storage tier has only 500 GiB of data but requires 128 MiB/s of throughput, you can set the quota to 2 TiB so that the throughput level is set accordingly (64 MiB/s per TiB * 2 TiB = 128 MiB/s).
-If you consistently overprovision a volume for achieving a higher throughput, consider using the manual QoS volumes or using a higher service level instead. In the example above, you can achieve the same throughput limit with half the automatic QoS volume quota by using the Ultra storage tier instead (128 MiB/s per TiB * 1 TiB = 128 MiB/s).
+If you consistently overprovision a volume for achieving a higher throughput, consider using the manual QoS volumes or using a higher service level instead. In this example, you can achieve the same throughput limit with half the automatic QoS volume quota by using the Ultra storage tier instead (128 MiB/s per TiB * 1 TiB = 128 MiB/s).
### Dynamically increasing or decreasing volume quota
If your performance requirements are temporary in nature, or if you have increas
* Volume quota can be increased or decreased without any need to pause IO, and access to the volume is not interrupted or impacted.
- You can adjust the quota during an active I/O transaction against a volume. Note that volume quota can never be decreased below the amount of logical data that is stored in the volume.
+ You can adjust the quota during an active I/O transaction against a volume. Volume quota can never be decreased below the amount of logical data that is stored in the volume.
* When volume quota is changed, the corresponding change in throughput limit is nearly instantaneous.
azure-netapp-files Azure Netapp Files Solution Architectures https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-solution-architectures.md
This section provides references to SAP on Azure solutions.
* [SAP S/4HANA in Linux on Azure - Azure Architecture Center](/azure/architecture/reference-architectures/sap/sap-s4hana) * [Run SAP BW/4HANA with Linux VMs - Azure Architecture Center](/azure/architecture/reference-architectures/sap/run-sap-bw4hana-with-linux-virtual-machines) * [SAP HANA Azure virtual machine storage configurations](../virtual-machines/workloads/sap/hana-vm-operations-storage.md)
+* [SAP on Azure NetApp Files Sizing Best Practices](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/sap-on-azure-netapp-files-sizing-best-practices/ba-p/3895300)
* [Optimize HANA deployments with Azure NetApp Files application volume group for SAP HANA](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/optimize-hana-deployments-with-azure-netapp-files-application/ba-p/3683417) * [Using Azure NetApp Files AVG for SAP HANA to deploy HANA with multiple partitions (MP)](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/using-azure-netapp-files-avg-for-sap-hana-to-deploy-hana-with/ba-p/3742747) * [NFS v4.1 volumes on Azure NetApp Files for SAP HANA](../virtual-machines/workloads/sap/hana-vm-operations-netapp.md)
This section provides references to SAP on Azure solutions.
* [Protecting HANA databases configured with HSR on Azure NetApp Files with AzAcSnap](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/protecting-hana-databases-configured-with-hsr-on-azure-netapp/ba-p/3654620) * [Manual Recovery Guide for SAP HANA on Azure VMs from Azure NetApp Files snapshot with AzAcSnap](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/manual-recovery-guide-for-sap-hana-on-azure-vms-from-azure/ba-p/3290161) * [SAP HANA on Azure NetApp Files - Data protection with BlueXP backup and recovery](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/sap-hana-on-azure-netapp-files-data-protection-with-bluexp/ba-p/3840116)
+* [SAP HANA on Azure NetApp Files ΓÇô System refresh & cloning operations with BlueXP backup and recovery](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/sap-hana-on-azure-netapp-files-system-refresh-amp-cloning/ba-p/3908660)
* [Azure NetApp Files Backup for SAP Solutions](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/anf-backup-for-sap-solutions/ba-p/3717977) * [SAP HANA Disaster Recovery with Azure NetApp Files](https://docs.netapp.com/us-en/netapp-solutions-sap/pdfs/sidebar/SAP_HANA_Disaster_Recovery_with_Azure_NetApp_Files.pdf)
azure-netapp-files Backup Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/backup-introduction.md
na Previously updated : 08/09/2023 Last updated : 08/23/2023
Azure NetApp Files backup is supported for the following regions:
* Australia East * Australia Southeast * Brazil South
+* Canada Central
* Canada East * East Asia * East US
Azure NetApp Files backup is supported for the following regions:
* Germany West Central * Japan East * Japan West
+* Korea Central
+* North Central US
* North Europe
+* Norway East
* Qatar Central * South Africa North * South Central US * Southeast Asia
+* UAE North
* UK South * West Europe * West US
azure-netapp-files Backup Requirements Considerations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/backup-requirements-considerations.md
na Previously updated : 02/23/2023 Last updated : 08/15/2023 # Requirements and considerations for Azure NetApp Files backup
Azure NetApp Files backup in a region can only protect an Azure NetApp Files vol
* Policy-based (scheduled) Azure NetApp Files backup is independent from [snapshot policy configuration](azure-netapp-files-manage-snapshots.md).
-* In a cross-region replication setting, Azure NetApp Files backup can be configured on a source volume only. Azure NetApp Files backup isn't supported on a cross-region replication *destination* volume.
+* In a [cross-region replication](cross-region-replication-introduction.md) (CRR) or [cross-zone replication](cross-zone-replication-introduction.md) (CZR) setting, Azure NetApp Files backup can be configured on a source volume only. Azure NetApp Files backup isn't supported on a CRR or CZR *destination* volume.
* See [Restore a backup to a new volume](backup-restore-new-volume.md) for additional considerations related to restoring backups.
azure-netapp-files Configure Application Volume Group Sap Hana Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/configure-application-volume-group-sap-hana-api.md
Title: Configure application volume groups for SAP HANA REST API | Microsoft Docs
+ Title: Configure application volume groups for SAP HANA using REST API
description: Setting up your application volume groups for the SAP HANA API requires special configurations. documentationcenter: ''
Last updated 04/09/2023
-# Configure application volume groups for the SAP HANA REST API
+# Configure application volume groups for SAP HANA using REST API
Application volume groups (AVG) enable you to deploy all volumes for a single HANA host in one atomic step. The Azure portal and the Azure Resource Manager template have implemented prechecks and recommendations for deployment in areas including throughputs and volume naming conventions. As a REST API user, you don't have those checks and recommendations available.
azure-netapp-files Cross Region Replication Delete https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/cross-region-replication-delete.md
na Previously updated : 01/17/2023 Last updated : 03/22/2023 # Delete volume replications or volumes
If you want to delete the source or destination volume, you must perform the fol
* [Requirements and considerations for using cross-region replication](cross-region-replication-requirements-considerations.md) * [Display health status of replication relationship](cross-region-replication-display-health-status.md) * [Troubleshoot cross-region-replication](troubleshoot-cross-region-replication.md)
+* [Re-establish deleted volume relationship](reestablish-deleted-volume-relationships.md)
* [Manage default and individual user and group quotas for a volume](manage-default-individual-user-group-quotas.md)
azure-netapp-files Cross Zone Replication Requirements Considerations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/cross-zone-replication-requirements-considerations.md
na Previously updated : 05/19/2023 Last updated : 08/18/2023 # Requirements and considerations for using cross-zone replication
This article describes requirements and considerations about [using the volume c
* After you set up cross-zone replication, the replication process creates *SnapMirror snapshots* to provide references between the source volume and the destination volume. SnapMirror snapshots are cycled automatically when a new one is created for every incremental transfer. You cannot delete SnapMirror snapshots until you delete the replication relationship and volume. * You cannot mount a dual-protocol volume until you [authorize replication from the source volume](cross-region-replication-create-peering.md#authorize-replication-from-the-source-volume) and the initial [transfer](cross-region-replication-display-health-status.md#display-replication-status) happens. * You can delete manual snapshots on the source volume of a replication relationship when the replication relationship is active or broken, and also after you've deleted replication relationship. You cannot delete manual snapshots for the destination volume until you break the replication relationship.
-* You can't revert a source or destination volume of cross-zone replication to a snapshot. The snapshot revert functionality is unavailable out for volumes in a replication relationship.
+* When reverting a source volume with an active volume replication relationship, only snapshots that are more recent than the SnapMirror snapshot can be used in the revert operation. For more information, see [Revert a volume using snapshot revert with Azure NetApp Files](snapshots-revert-volume.md).
* Data replication volumes support [customer-managed keys](configure-customer-managed-keys.md). * You can't currently use cross-zone replication with [large volumes](azure-netapp-files-understand-storage-hierarchy.md#large-volumes) (larger than 100 TiB).
azure-netapp-files Double Encryption At Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/double-encryption-at-rest.md
na Previously updated : 07/26/2023 Last updated : 08/28/2023
Azure NetApp Files double encryption at rest is supported for the following regi
* Australia Southeast * Brazil South * Canada Central
+* Canada East
* Central US * East Asia * East US
Azure NetApp Files double encryption at rest is supported for the following regi
* Norway East * Qatar Central * South Africa North
-* South Central US
-* Sweden Central
+* South Central US
* Switzerland North * UAE North * UK South
+* UK West
* West Europe * West US * West US 2
azure-netapp-files Enable Continuous Availability Existing SMB https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/enable-continuous-availability-existing-SMB.md
You can enable the SMB Continuous Availability (CA) feature when you [create a n
If you know the server name, you can use the `-ServerName` parameter with the command. See the [Get-SmbConnection](/powershell/module/smbshare/get-smbconnection?view=windowsserver2019-ps&preserve-view=true) PowerShell command details.
+1. Once you enable SMB Continuous Availability, reboot the server for the change to take effect.
+
## Next steps

* [Create an SMB volume for Azure NetApp Files](azure-netapp-files-create-volumes-smb.md)
azure-netapp-files Faq Application Resilience https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/faq-application-resilience.md
The scale-out architecture would be comprised of multiple IBM MQ multi-instance
## I'm running Apache ActiveMQ with LevelDB or KahaDB on Azure NetApp Files. What precautions can I take to avoid disruptions due to storage service maintenance events despite using the *NFS* protocol?
->[!NOTE]
-> This section contains references to the terms *slave* and *master*, terms that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.
- If you're running Apache ActiveMQ, it's recommended to deploy [ActiveMQ High Availability with Pluggable Storage Lockers](https://www.openlogic.com/blog/pluggable-storage-lockers-activemq).
-ActiveMQ high availability (HA) models ensure that a broker instance is always online and able to process message traffic. The two most common ActiveMQ HA models involve sharing a filesystem over a network. The purpose is to provide either LevelDB or KahaDB to the active and passive broker instances. These HA models require that an OS-level lock be obtained and maintained on a file in the LevelDB or KahaDB directories, called "lock." There are some problems with this ActiveMQ HA model. They can lead to a "no-master" situation, where the "slave" isnΓÇÖt aware that it can lock the file. They can also lead to a "master-master" configuration that results in index or journal corruption and ultimately message loss. Most of these problems stem from factors outside of ActiveMQ's control. For instance, a poorly optimized NFS client can cause locking data to become stale under load, leading to ΓÇ£no-masterΓÇ¥ downtime during failover.
+ActiveMQ high availability (HA) models ensure that a broker instance is always online and able to process message traffic. The two most common ActiveMQ HA models involve sharing a filesystem over a network. The purpose is to provide either LevelDB or KahaDB to the active and passive broker instances. These HA models require that an OS-level lock be obtained and maintained on a file in the LevelDB or KahaDB directories, called "lock." There are some problems with this ActiveMQ HA model. They can lead to a "no-master" situation, where the replica isn't aware that it can lock the file. They can also lead to a "master-master" configuration that results in index or journal corruption and ultimately message loss. Most of these problems stem from factors outside of ActiveMQ's control. For instance, a poorly optimized NFS client can cause locking data to become stale under load, leading to "no-master" downtime during failover.
Because most problems with this HA solution stem from inaccurate OS-level file locking, the ActiveMQ community [introduced the concept of a pluggable storage locker](https://www.openlogic.com/blog/pluggable-storage-lockers-activemq) in version 5.7 of the broker. This approach allows a user to take advantage of a different means of the shared lock, using a row-level JDBC database lock as opposed to an OS-level filesystem lock. For support or consultancy on ActiveMQ HA architectures and deployments, you should [contact OpenLogic by Perforce](https://www.openlogic.com/contact-us).
azure-netapp-files Faq Data Migration Protection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/faq-data-migration-protection.md
Previously updated : 12/16/2022 Last updated : 08/31/2023 # Data migration and protection FAQs for Azure NetApp Files
This article answers frequently asked questions (FAQs) about Azure NetApp Files
## How do I migrate data to Azure NetApp Files? Azure NetApp Files provides NFS and SMB volumes. You can use any file-based copy tool to migrate data to the service.
+For more information about the Azure File Migration Program, see [Migrate the critical file data you need to power your applications](https://techcommunity.microsoft.com/t5/azure-storage-blog/migrate-the-critical-file-data-you-need-to-power-your/ba-p/3038751). Also, see [Azure Storage migration tools comparison - Unstructured data](../storage/solution-integration/validated-partners/data-management/migration-tools-comparison.md).
+
NetApp offers a SaaS-based solution, [NetApp Cloud Sync](https://cloud.netapp.com/cloud-sync-service). The solution enables you to replicate NFS or SMB data to Azure NetApp Files NFS exports or SMB shares. You can also use a wide array of free tools to copy data. For NFS, you can use tools such as [rsync](https://rsync.samba.org/examples.html) to copy and synchronize source data into an Azure NetApp Files volume (a sketch follows below). For SMB, you can use [robocopy](/windows-server/administration/windows-commands/robocopy) in the same manner. These tools can also replicate file or folder permissions.
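For example, a minimal `rsync` sketch; both mount points are hypothetical placeholders for a local dataset and a mounted Azure NetApp Files NFS volume:

```bash
# -a preserves permissions, ownership, and timestamps; -v prints each file as it's copied
rsync -av /mnt/source-data/ /mnt/anf-volume/
```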
+Migration of certain structured datasets (such as databases) is best done using database-native tools (for example, SQL Server AOAG, Oracle Data Guard, and so on).
+
The requirements for data migration from on premises to Azure NetApp Files are as follows:

- Ensure Azure NetApp Files is available in the target Azure region.
The requirements for data migration from on premises to Azure NetApp Files are a
- Transfer the source data to the target volume by using your preferred file copy tool. >[!NOTE]
->[AzCopy](../storage/common/storage-use-azcopy-v10.md) can only be used in migration scenarios where the source or target is a storage account. Azure NetApp Files is not a storage account.
+>[AzCopy](../storage/common/storage-use-azcopy-v10.md) can only be used in migration scenarios where either the source *or* the target is a storage account. Azure NetApp Files isn't a storage account, so it can be the source or the target in an AzCopy operation, but not both.
## Where does Azure NetApp Files store customer data?
azure-netapp-files Large Volumes Requirements Considerations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/large-volumes-requirements-considerations.md
na Previously updated : 03/27/2023 Last updated : 08/31/2023 # Requirements and considerations for large volumes (preview)
To enroll in the preview for large volumes, use the [large volumes preview sign-
## Requirements and considerations
+The following requirements and considerations apply to large volumes. For performance considerations of *regular volumes*, see [Performance considerations for Azure NetApp Files](azure-netapp-files-performance-considerations.md).
+
* Existing regular volumes can't be resized over 100 TiB.
* You can't convert regular Azure NetApp Files volumes to large volumes.
* You must create a large volume at a size greater than 100 TiB. A single volume can't exceed 500 TiB.
azure-netapp-files Manage Availability Zone Volume Placement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/manage-availability-zone-volume-placement.md
ms.assetid:
na-+ Last updated 01/13/2023
azure-netapp-files Reestablish Deleted Volume Relationships https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/reestablish-deleted-volume-relationships.md
+
+ Title: Re-establish deleted volume replication relationships in Azure NetApp Files
+description: You can re-establish the replication relationship between volumes.
++++++ Last updated : 02/21/2023+
+# Re-establish deleted volume replication relationships in Azure NetApp Files (preview)
+
+Azure NetApp Files allows you to re-establish a replication relationship between two volumes if you previously deleted it. You can only re-establish the relationship from the destination volume.
+
+If the destination volume remains operational and no snapshots were deleted, the replication re-establish operation uses the last common snapshot. The operation incrementally synchronizes the destination volume based on the last known good snapshot. A baseline snapshot isn't required.
+
+## Considerations
+
+* You can only re-establish relationships when there's an existing snapshot generated either [manually](azure-netapp-files-manage-snapshots.md) or by a [snapshot policy](snapshots-manage-policy.md).
+
+## Register the feature
+
+The re-establish deleted volume replication relationships capability is currently in preview. If you're using this feature for the first time, you need to register the feature first.
+
+1. Register the feature by running the following command:
+
+ ```azurepowershell-interactive
+ Register-AzProviderFeature -ProviderNamespace Microsoft.NetApp -FeatureName ANFReestablishReplication
+ ```
+
+2. Check the status of the feature registration:
+
+ > [!NOTE]
+ > The **RegistrationState** may be in the `Registering` state for up to 60 minutes before changing to `Registered`. Wait until the status is `Registered` before continuing.
+
+ ```azurepowershell-interactive
+ Get-AzProviderFeature -ProviderNamespace Microsoft.NetApp -FeatureName ANFReestablishReplication
+ ```
+You can also use [Azure CLI commands](/cli/azure/feature) `az feature register` and `az feature show` to register the feature and display the registration status.
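For reference, a sketch of the equivalent Azure CLI calls:

```azurecli-interactive
az feature register --namespace Microsoft.NetApp --name ANFReestablishReplication
az feature show --namespace Microsoft.NetApp --name ANFReestablishReplication
```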
+
+## Re-establish the relationship
+
+1. From the **Volumes** menu under **Storage service**, select the volume that was formerly the _destination_ volume in the replication relationship you want to restore. Then select the **Replication** tab.
+1. In the **Replication** tab, select the **Re-establish** button.
+ :::image type="content" source="./media/reestablish-deleted-volume-relationships/reestablish-button.png" alt-text="Screenshot of volume menu that depicts no existing volume relationships. A red box surrounds the re-establish button." lightbox="./media/reestablish-deleted-volume-relationships/reestablish-button.png":::
+1. A dropdown list appears with a selection of all volumes that formerly had either a source or destination replication relationship with the selected volume. From the dropdown menu, select the volume you want to re-establish a relationship with. Select **OK** to re-establish the relationship.
+ :::image type="content" source="./media/reestablish-deleted-volume-relationships/reestablish-confirm.png" alt-text="Screenshot of a dropdown menu with available volume relationships to restore." lightbox="./media/reestablish-deleted-volume-relationships/reestablish-confirm.png":::
+
+## Next steps
+
+* [Cross-region replication](cross-region-replication-introduction.md)
+* [Requirements and considerations for using cross-region replication](cross-region-replication-requirements-considerations.md)
+* [Display health status of replication relationship](cross-region-replication-display-health-status.md)
+* [Troubleshoot cross-region-replication](troubleshoot-cross-region-replication.md)
azure-netapp-files Troubleshoot User Access Ldap https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/troubleshoot-user-access-ldap.md
+
+ Title: Troubleshoot user access on LDAP volumes | Microsoft Docs
+description: Describes the steps for troubleshooting user access on LDAP-enabled volumes.
+
+documentationcenter: ''
++
+editor: ''
+
+ms.assetid:
++
+ na
+ Last updated : 09/06/2023+++
+# Troubleshoot user access on LDAP volumes
+
+Azure NetApp Files provides you with the ability to validate user connectivity and access to LDAP-enabled volumes based on group membership. When you provide a user ID, Azure NetApp Files will report a list of primary and auxiliary group IDs that the user belongs to from the LDAP server.
+
+Validating user access is helpful for scenarios such as ensuring POSIX attributes set on the LDAP server are accurate or when you encounter permission errors.
+
+1. In the volume page for the LDAP-enabled volume, select **LDAP Group ID List** under **Support & Troubleshooting**.
+1. Enter the user ID and select **Get group IDs**.
+ :::image type="content" source="../media/azure-netapp-files/troubleshoot-ldap-user-id.png" alt-text="Screenshot of the LDAP group ID list portal." lightbox="../media/azure-netapp-files/troubleshoot-ldap-user-id.png":::
+
+1. The portal will display up to 256 results even if the user is in more than 256 groups. You can search for a specific group ID in the results.
+
+Refer to [Troubleshoot volume errors](troubleshoot-volumes.md#errors-for-ldap-volumes) for further resources if the group ID you're searching for is not present.
azure-netapp-files Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/whats-new.md
na Previously updated : 08/03/2023 Last updated : 09/06/2023
Azure NetApp Files is updated regularly. This article provides a summary about the latest new features and enhancements.
+## September 2023
+
+* [Troubleshooting enhancement: validate user connectivity, group membership and access to LDAP-enabled volumes](troubleshoot-user-access-ldap.md)
+
+ Azure NetApp Files now provides you with the ability to validate user connectivity and access to LDAP-enabled volumes based on group membership. When you provide a user ID, Azure NetApp Files reports a list of primary and auxiliary group IDs that the user belongs to from the LDAP server. Validating user access is helpful for scenarios such as ensuring POSIX attributes set on the LDAP server are accurate or when you encounter permission errors.
+ ## August 2023
+* [Cross-region replication enhancement: re-establish deleted volume replication](reestablish-deleted-volume-relationships.md) (Preview)
+
+ Azure NetApp Files now allows you to re-establish a replication relationship between two volumes in case you had previously deleted it. If the destination volume remained operational and no snapshots were deleted, the replication re-establish operation will use the last common snapshot and incrementally synchronize the destination volume based on the last known good snapshot. In that case, no baseline replication will be required.
+ * [Backup vault](backup-vault-manage.md) (Preview) Azure NetApp Files backups are now organized under a backup vault. You must migrate all existing backups to a backup vault. For more information, see [Migrate backups to a backup vault](backup-vault-manage.md#migrate-backups-to-a-backup-vault).
Azure NetApp Files is updated regularly. This article provides a summary about t
* [Dynamic change of service level](dynamic-change-volume-service-level.md)
* [Administrators privilege users](create-active-directory-connections.md#administrators-privilege-users)
+
## March 2022

* Features that are now generally available (GA)
azure-portal Azure Portal Dashboard Share Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/azure-portal-dashboard-share-access.md
Title: Share Azure portal dashboards by using Azure role-based access control description: This article explains how to share a dashboard in the Azure portal by using Azure role-based access control. Previously updated : 07/10/2023 Last updated : 09/05/2023 # Share Azure dashboards by using Azure role-based access control
-After configuring a dashboard, you can publish it and share it with other users in your organization. You allow others to view your dashboard by using [Azure role-based access control (Azure RBAC)](../role-based-access-control/role-assignments-portal.md) to assign roles to either a single user or a group of users. You can select a role that allows them only to view the published dashboard, or a role that also allows them to modify it.
+After configuring a dashboard, you can publish it and share it with other users in your organization. When you share a dashboard, you can control who can view it by using [Azure role-based access control (Azure RBAC)](../role-based-access-control/role-assignments-portal.md) to assign roles to either a single user or a group of users. You can select a role that allows them only to view the published dashboard, or a role that also allows them to modify it.
> [!TIP] > Within a dashboard, individual tiles enforce their own access control requirements based on the resources they display. You can share any dashboard broadly, even if some data on specific tiles might not be visible to all users.
From an access control perspective, dashboards are no different from other resou
Azure RBAC lets you assign users to roles at four different [levels of scope](/azure/role-based-access-control/scope-overview): management group, subscription, resource group, or resource. Azure RBAC permissions are inherited from higher levels down to the individual resource. In many cases, you may already have users assigned to roles for the subscription that will give them access to the published dashboard.
-For example, any users who have the [Owner](/azure/role-based-access-control/built-in-roles#owner) or [Contributor](/azure/role-based-access-control/built-in-roles#contributor) role for a subscription can list, view, create, modify, or delete dashboards within the subscription. Users with a [custom role](/azure/role-based-access-control/custom-roles) that includes the `Microsoft.Portal/Dashboards/Write` permission can also perform these tasks.
+For example, users who have the [Owner](/azure/role-based-access-control/built-in-roles#owner) or [Contributor](/azure/role-based-access-control/built-in-roles#contributor) role for a subscription can list, view, create, modify, or delete dashboards within the subscription. Users with a [custom role](/azure/role-based-access-control/custom-roles) that includes the `Microsoft.Portal/Dashboards/Write` permission can also perform these tasks.
Users with the [Reader](/azure/role-based-access-control/built-in-roles#reader) role for the subscription (or a custom role with `Microsoft.Portal/Dashboards/Read` permission) can list and view dashboards within that subscription, but they can't modify or delete them. These users are able to make private copies of dashboards for themselves. They can also make local edits to a published dashboard for their own use, such as when troubleshooting an issue, but they can't publish those changes back to the server.
For each dashboard that you have published, you can assign Azure RBAC built-in r
1. After publishing the dashboard, select **Manage sharing**, then select **Access control**.
+ :::image type="content" source="media/azure-portal-dashboard-share-access/manage-sharing-dashboard.png" alt-text="Screenshot showing the Access control option for an Azure portal dashboard.":::
1. In **Access Control**, select **Role assignments** to see existing users that are already assigned a role for this dashboard.

1. To add a new user or group, select **Add** then **Add role assignment**.
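If you prefer to script the assignment, published dashboards are Azure resources of type `Microsoft.Portal/dashboards`, so a role can also be granted at dashboard scope with the Azure CLI. A sketch follows; the assignee and all resource ID segments are placeholders:

```azurecli-interactive
az role assignment create \
  --assignee "user@contoso.com" \
  --role "Reader" \
  --scope "/subscriptions/<subscriptionId>/resourceGroups/<resourceGroup>/providers/Microsoft.Portal/dashboards/<dashboardName>"
```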
azure-portal Azure Portal Dashboards https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/azure-portal-dashboards.md
Title: Create a dashboard in the Azure portal description: This article describes how to create and customize a dashboard in the Azure portal. Previously updated : 05/05/2022 Last updated : 09/01/2023 # Create a dashboard in the Azure portal
-Dashboards are a focused and organized view of your cloud resources in the Azure portal. Use dashboards as a workspace where you can monitor resources and quickly launch tasks for day-to-day operations. Build custom dashboards based on projects, tasks, or user roles, for example.
+Dashboards are a focused and organized view of your cloud resources in the Azure portal. Use dashboards as a workspace where you can monitor resources and quickly launch tasks for day-to-day operations. For example, you can build custom dashboards based on projects, tasks, or user roles in your organization.
-The Azure portal provides a default dashboard as a starting point. You can edit the default dashboard and create and customize additional dashboards.
+The Azure portal provides a default dashboard as a starting point. You can edit this default dashboard, and you can create and customize additional dashboards.
-> [!NOTE]
-> Each user can create up to 100 private dashboards. If you [publish and share the dashboard](azure-portal-dashboard-share-access.md), it will be implemented as an Azure resource in your subscription and won't count towards this limit.
-
-This article describes how to create a new dashboard and customize it. For information on sharing dashboards, see [Share Azure dashboards by using Azure role-based access control](azure-portal-dashboard-share-access.md).
+All dashboards are private when created, and each user can create up to 100 private dashboards. If you [publish and share a dashboard with other users in your organization](azure-portal-dashboard-share-access.md), the shared dashboard is implemented as an Azure resource in your subscription, and doesn't count towards the private dashboard limit.
## Create a new dashboard
-This example shows how to create a new private dashboard with an assigned name. All dashboards are private when created, although you can choose to publish and share your dashboard with other users in your organization if you'd like.
+This example shows how to create a new private dashboard with an assigned name.
1. Sign in to the [Azure portal](https://portal.azure.com).
This example shows how to create a new private dashboard with an assigned name.
:::image type="content" source="media/azure-portal-dashboards/portal-menu-dashboard.png" alt-text="Screenshot of the Azure portal with Dashboard selected.":::
-1. Select **New dashboard**, then select **Blank dashboard**.
-
- :::image type="content" source="media/azure-portal-dashboards/create-new-dashboard.png" alt-text="Screenshot of the New dashboard options.":::
+1. Select **Create**, then select **Custom**.
- This action opens the **Tile Gallery**, from which you can select tiles, and an empty grid where you'll arrange the tiles.
+ This action opens the **Tile Gallery**, from which you can select tiles that display different types of information. You'll also see an empty grid representing the dashboard layout, where you can arrange the tiles.
-1. Select the **My Dashboard** text in the dashboard label and enter a name that will help you easily identify the custom dashboard.
+1. Select the text in the dashboard label and enter a name that will help you easily identify the custom dashboard.
:::image type="content" source="media/azure-portal-dashboards/dashboard-name.png" alt-text="Screenshot of an empty grid with the Tile Gallery.":::
-1. To save the dashboard as is, select **Save** in the page header. Or, continue to Step 2 of the next section to add tiles and save your dashboard.
+1. To save the dashboard as is, select **Save** in the page header.
-The dashboard view now shows your new dashboard. Select the arrow next to the dashboard name to see dashboards available to you. The list might include dashboards that other users have created and shared.
+The dashboard view now shows your new dashboard. Select the arrow next to the dashboard name to see other available dashboards. The list might include dashboards that other users have created and shared.
## Edit a dashboard
-Now, let's edit the dashboard to add, resize, and arrange tiles that represent your Azure resources.
+Now, let's edit the example dashboard you created to add, resize, and arrange tiles that show your Azure resources or display other helpful information. We'll start by working with the Tile Gallery, then explore other ways to customize dashboards.
### Add tiles from the Tile Gallery
-To add tiles to a dashboard, follow these steps:
+To add tiles to a dashboard by using the Tile Gallery, follow these steps.
-1. Select ![edit icon](./media/azure-portal-dashboards/dashboard-edit-icon.png) **Edit** from the dashboard's page header.
+1. Select **Edit** from the dashboard's page header.
:::image type="content" source="media/azure-portal-dashboards/dashboard-edit.png" alt-text="Screenshot of dashboard highlighting the Edit option.":::
To add tiles to a dashboard, follow these steps:
:::image type="content" source="media/azure-portal-dashboards/dashboard-tile-gallery.png" alt-text="Screenshot of the Tile Gallery.":::
-1. Select **Add** to add the tile to the dashboard with a default size and location. Or, drag the tile to the grid and place it where you want. Add any tiles you want, but here are a couple of ideas:
+1. Select **Add** to add the tile to the dashboard with a default size and location. Or, drag the tile to the grid and place it where you want.
- - Add **All resources** to see any resources you've already created.
+1. To save your changes, select **Save**. You can also preview the changes without saving by selecting **Preview**. This preview mode also allows you to see how [filters](#apply-dashboard-filters) affect your tiles. From the preview screen, you can select **Save** to keep the changes, **Cancel** to remove them, or **Edit** to go back to the editing options and make further changes.
- - If you work with more than one organization, add the **Organization identity** tile to your dashboard to clearly show which organization the resources belong to.
+ :::image type="content" source="media/azure-portal-dashboards/dashboard-save.png" alt-text="Screenshot of the Save and Preview options for an edited dashboard.":::
-1. If desired, [resize or rearrange](#resize-or-rearrange-tiles) your tiles.
+### Resize or rearrange tiles
-1. To save your changes, select **Save**. You can also preview the changes without saving by selecting **Preview**. This preview mode also allows you to see how [filters](#set-and-override-dashboard-filters) affect your tiles. From the preview screen, you can select **Save** to keep the changes, **Cancel** to remove them, or **Edit** to go back to the editing options and make further changes.
+To change the size of a tile, or to rearrange the tiles on a dashboard, follow these steps:
- :::image type="content" source="media/azure-portal-dashboards/dashboard-save.png" alt-text="Screenshot of the Preview, Save, and Discard options.":::
+1. Select **Edit** from the page header.
-> [!NOTE]
-> A markdown tile lets you display custom, static content on your dashboard. This could be basic instructions, an image, a set of hyperlinks, or even contact information. For more information about using a markdown tile, see [Use a markdown tile on Azure dashboards to show custom content](azure-portal-markdown-tile.md).
+1. Select the context menu in the upper right corner of a tile. Then, choose a tile size. Tiles that support any size also include a "handle" in the lower right corner that lets you drag the tile to the size you want.
+
+ :::image type="content" source="media/azure-portal-dashboards/dashboard-tile-resize.png" alt-text="Screenshot of dashboard with tile size menu open.":::
+
+1. Select a tile and drag it to a new location on the grid to arrange your dashboard.
+
+1. When you're finished, select **Save**.
### Pin content from a resource page
Another way to add tiles to your dashboard is directly from a resource page.
Many resource pages include a pin icon in the page header, which means that you can pin a tile representing the source page. In some cases, a pin icon may also appear by specific content within a page, which means you can pin a tile for that specific content, rather than the entire page.
-If you select this icon, you can pin the tile to an existing private or shared dashboard. You can also create a new dashboard which will include this pin by selecting **Create new**.
+Select this icon to pin the tile to an existing private or shared dashboard. You can also create a new dashboard which will include this pin by selecting **Create new**.
:::image type="content" source="media/azure-portal-dashboards/dashboard-pin-pane.png" alt-text="Screenshot of Pin to dashboard options.":::
If you want to reuse a tile on a different dashboard, you can copy it from one d
:::image type="content" source="media/azure-portal-dashboards/copy-dashboard.png" alt-text="Screenshot showing how to copy a tile in the Azure portal.":::
-You can then select whether to copy the tile to an existing private or shared dashboard, or create a copy of the tile within the dashboard you're already working in. You can also create a new dashboard which will include a copy of the tile by selecting **Create new**.
+You can then select whether to copy the tile to a different private or shared dashboard, or create a copy of the tile within the dashboard you're already working in. You can also create a new dashboard that includes a copy of the tile by selecting **Create new**.
-### Resize or rearrange tiles
-
-To change the size of a tile or to rearrange the tiles on a dashboard, follow these steps:
-
-1. Select ![edit icon](./media/azure-portal-dashboards/dashboard-edit-icon.png) **Edit** from the page header.
-
-1. Select the context menu in the upper right corner of a tile. Then, choose a tile size. Tiles that support any size also include a "handle" in the lower right corner that lets you drag the tile to the size you want.
-
- :::image type="content" source="media/azure-portal-dashboards/dashboard-tile-resize.png" alt-text="Screenshot of dashboard with tile size menu open.":::
-
-1. Select a tile and drag it to a new location on the grid to arrange your dashboard.
-
-### Set and override dashboard filters
-
-Near the top of your dashboard, you'll see options to set the **Auto refresh** and **Time settings** for data displayed in the dashboard, along with an option to add additional filters.
--
-By default, data will be refreshed every hour. To change this, select **Auto refresh** and choose a new refresh interval. When you've made your selection, select **Apply**.
+## Modify tile settings
-The default time settings are **UTC Time**, showing data for the **Past 24 hours**. To change this, select the button and choose a new time range, time granularity, and/or time zone, then select **Apply**.
+Some tiles might require more configuration to show the information you want. For example, the **Metrics chart** tile has to be set up to display a metric from Azure Monitor. You can also customize tile data to override the dashboard's default time settings and filters, or to change the title and subtitle of a tile.
-To apply additional filters, select **Add filter**. The options you'll see will vary depending on the tiles in your dashboard. For example, you may be able to show only data for a specific subscription or location. Select the filter you'd like to use and make your selections. The filter will then be applied to your data. To remove a filter, select the **X** in its button.
+> [!NOTE]
+> The **Markdown** tile lets you display custom, static content on your dashboard. This can be any information you provide, such as basic instructions, an image, a set of hyperlinks, or even contact information. For more information about using markdown tiles, see [Use a markdown tile on Azure dashboards to show custom content](azure-portal-markdown-tile.md).
-Tiles which support filtering have a ![filter icon](./media/azure-portal-dashboards/dashboard-filter.png) filter icon in the top-left corner of the tile. Some tiles allow you to override the global filters with filters specific to that tile. To do so, select **Configure tile data** from the context menu, or select the filter icon, then apply the desired filters.
+### Change the title and subtitle of a tile
-If you set filters for a particular tile, the left corner of that tile displays a double filter icon, indicating that the data in that tile reflects its own filters.
+Some tiles allow you to edit their title and/or subtitle. To do so, select **Configure tile settings** from the context menu.
-## Modify tile settings
+Make your changes, then select **Apply**.
-Some tiles might require more configuration to show the information you want. For example, the **Metrics chart** tile has to be set up to display a metric from Azure Monitor. You can also customize tile data to override the dashboard's default time settings and filters.
### Complete tile configuration
-Any tile that needs to be set up displays a banner until you customize the tile. For example, in the **Metrics chart**, the banner reads **Edit in Metrics**. Other banners may use different text, such as **Configure tile**.
+Any tile that requires configuration displays a banner until you customize the tile. For example, in the **Metrics chart**, the banner reads **Edit in Metrics**. Other banners may use different text, such as **Configure tile**.
To customize the tile:
-1. In the page header select **Save** to exit edit mode.
+1. If needed, select **Save** or **Cancel** near the top of the page to exit edit mode.
1. Select the banner, then do the required setup.
- :::image type="content" source="media/azure-portal-dashboards/dashboard-configure-tile.png" alt-text="Screenshot of tile that requires configuration.":::
+ :::image type="content" source="media/azure-portal-dashboards/dashboard-configure-tile.png" alt-text="Screenshot of a tile that requires configuration.":::
-### Customize time span for a tile
+### Apply dashboard filters
-Data on the dashboard shows activity and refreshes based on the global filters. Some tiles will allow you to select a different time span for just one tile. To do so, follow these steps:
+Near the top of your dashboard, you'll see options to set the **Auto refresh** and **Time settings** for data displayed in the dashboard, along with an option to add additional filters.
-1. Select **Configure tile settings** from the context menu or from the ![filter icon](./media/azure-portal-dashboards/dashboard-filter.png) in the upper left corner of the tile.
- :::image type="content" source="media/azure-portal-dashboards/dashboard-customize-tile-data.png" alt-text="Screenshot of tile context menu.":::
+To change how often data is refreshed, select **Auto refresh**, then choose a new refresh interval. When you've made your selection, select **Apply**.
-1. Select the checkbox to **Override the dashboard time settings at the tile level**.
+The default time settings are **UTC Time**, showing data for the **Past 24 hours**. To change this, select the button and choose a new time range, time granularity, and/or time zone, then select **Apply**.
- :::image type="content" source="media/azure-portal-dashboards/dashboard-override-time-settings.png" alt-text="Screenshot of dialog to configure tile time settings.":::
+To apply additional filters, select **Add filter**. The options you'll see will vary depending on the tiles in your dashboard. For example, you may see options to filter data for a specific subscription or location. In some cases, you'll see that no additional filters are available.
-1. Choose the time span to show for this tile. You can choose from the past 30 minutes to the past 30 days or define a custom range.
+If you see additional filter options, select the one you'd like to use and make your selections. The filter will then be applied to your data.
-1. Choose the time granularity to display. You can show anywhere from one-minute increments to one-month.
+To remove a filter, select the **X** in its button.
-1. Select **Apply**.
+### Override dashboard filters for specific tiles
-### Change the title and subtitle of a tile
+Tiles that support filtering have a ![filter icon](./media/azure-portal-dashboards/dashboard-filter.png) filter icon in the top-left corner. These tiles allow you to override the global filters with filters specific to that tile.
-Some tiles allow you to edit their title and subtitle. To do so, select **Configure tile settings** from the context menu.
+To do so, select **Configure tile settings** from the tile's context menu, or select the filter icon. Then you can change the desired filters for that tile. For example, some tiles provide an option to override the dashboard time settings at the tile level, allowing you to select a different time span to refresh data.
+When you apply filters for a particular tile, the left corner of that tile changes to show a double filter icon, indicating that the data in that tile reflects its own filters.
-Make any changes to the tile's title and/or subtitle, then select **Apply**.
-
## Delete a tile

To remove a tile from a dashboard, do one of the following:

- Select the context menu in the upper right corner of the tile, then select **Remove from dashboard**.
-- Select ![edit icon](./media/azure-portal-dashboards/dashboard-edit-icon.png) **Edit** to enter customization mode. Hover in the upper right corner of the tile, then select the ![delete icon](./media/azure-portal-dashboards/dashboard-delete-icon.png) delete icon to remove the tile from the dashboard.
-
- :::image type="content" source="media/azure-portal-dashboards/dashboard-delete-tile.png" alt-text="Screenshot showing how to remove tile from dashboard.":::
+- Select **Edit** to enter customization mode. Hover in the upper right corner of the tile, then select the ![delete icon](./media/azure-portal-dashboards/dashboard-delete-icon.png) delete icon to remove the tile from the dashboard.
## Clone a dashboard
To use an existing dashboard as a template for a new dashboard, follow these ste
1. In the page header, select ![clone icon](./media/azure-portal-dashboards/dashboard-clone.png) **Clone**.
-1. A copy of the dashboard, named **Clone of** *your dashboard name* opens in edit mode. Use the preceding steps in this article to rename and customize the dashboard.
+1. A copy of the dashboard, named **Clone of *your dashboard name*** opens in edit mode. You can then rename and customize the dashboard.
## Publish and share a dashboard
When you create a dashboard, it's private by default, which means you're the onl
### Open a shared dashboard
-To find and open a shared dashboard, follow these steps:
+To find and open a shared dashboard, follow these steps.
1. Select the arrow next to the dashboard name.

1. Select from the displayed list of dashboards. If the dashboard you want to open isn't listed:
- 1. select **Browse all dashboards**.
+ 1. Select **Browse all dashboards**.
:::image type="content" source="media/azure-portal-dashboards/dashboard-browse.png" alt-text="Screenshot of dashboard selection menu.":::
- 1. In the **Type** field, select **Shared dashboards**.
+ 1. Select the **Type equals** filter, then select **Shared dashboard**.
:::image type="content" source="media/azure-portal-dashboards/dashboard-browse-all.png" alt-text="Screenshot of all dashboards selection menu.":::
- 1. Select one or more subscriptions. You can also enter text to filter dashboards by name.
-
- 1. Select a dashboard from the list of shared dashboards.
+ 1. Select a dashboard from the list of shared dashboards. If you don't see the one you want, use the filters to limit the results shown, such as selecting a specific subscription or filtering by name.
## Delete a dashboard
-To permanently delete a private or shared dashboard, follow these steps:
+You can delete your private dashboards, or a shared dashboard that you created or have permissions to modify.
+
+To permanently delete a private or shared dashboard, follow these steps.
1. Select the dashboard you want to delete from the list next to the dashboard name.
To permanently delete a private or shared dashboard, follow these steps:
:::image type="content" source="media/azure-portal-dashboards/dashboard-delete-dash.png" alt-text="Screenshot of delete confirmation.":::
-## Recover a deleted dashboard
-
-If you're in the global Azure cloud, and you delete a _published_ dashboard in the Azure portal, you can recover that dashboard within 14 days of the delete. For more information, see [Recover a deleted dashboard in the Azure portal](recover-shared-deleted-dashboard.md).
+> [!TIP]
+> In the global Azure cloud, if you delete a _published_ dashboard in the Azure portal, you can recover that dashboard within 14 days of the delete. For more information, see [Recover a deleted dashboard in the Azure portal](recover-shared-deleted-dashboard.md).
## Next steps
azure-portal Azure Portal Safelist Urls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/azure-portal-safelist-urls.md
Title: Allow the Azure portal URLs on your firewall or proxy server description: To optimize connectivity between your network and the Azure portal and its services, we recommend you add these URLs to your allowlist. Previously updated : 05/18/2023 Last updated : 08/22/2023
You can use [service tags](../virtual-network/service-tags-overview.md) to defin
The URL endpoints to allow for the Azure portal are specific to the Azure cloud where your organization is deployed. To allow network traffic to these endpoints to bypass restrictions, select your cloud, then add the list of URLs to your proxy server or firewall. We do not recommend adding any additional portal-related URLs aside from those listed here, although you may want to add URLs related to other Microsoft products and services. Depending on which services you use, you may not need to include all of these URLs in your allowlist.
-> [!NOTE]
-> Including the wildcard symbol (\*) at the start of an endpoint will allow all subdomains. Avoid adding a wildcard symbol to endpoints listed here that don't already include one. Instead, if you identify additional subdomains of an endpoint that are needed for your particular scenario, we recommend that you allow only that particular subdomain.
+> [!IMPORTANT]
+> Including the wildcard symbol (\*) at the start of an endpoint will allow all subdomains. For endpoints with wildcards, we also advise you to add the URL without the wildcard. For example, you should add both `*.portal.azure.com` and `portal.azure.com` to ensure that access to the domain is allowed with or without a subdomain.
+>
+> Avoid adding a wildcard symbol to endpoints listed here that don't already include one. Instead, if you identify additional subdomains of an endpoint that are needed for your particular scenario, we recommend that you allow only that particular subdomain.
### [Public Cloud](#tab/public-cloud)
login.live.com
#### Azure portal framework

```
-portal.azure.com
*.portal.azure.com
*.hosting.portal.azure.net
-reactblade.portal.azure.net
*.reactblade.portal.azure.net
management.azure.com
*.ext.azure.com
-graph.windows.net
*.graph.windows.net
-graph.microsoft.com
*.graph.microsoft.com
```
graph.microsoft.com
*.account.microsoft.com
*.bmx.azure.com
*.subscriptionrp.trafficmanager.net
-signup.azure.com
*.signup.azure.com
```
azure-portal Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/policy-reference.md
Title: Built-in policy definitions for Azure portal description: Lists Azure Policy built-in policy definitions for Azure portal. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2023 Last updated : 08/30/2023
azure-relay Ip Firewall Virtual Networks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-relay/ip-firewall-virtual-networks.md
This section shows you how to use the Azure portal to create IP firewall rules f
1. To restrict access to specific networks and IP addresses, select the **Selected networks** option. In the **Firewall** section, follow these steps:
   1. Select the **Add your client IP address** option to give your current client IP access to the namespace.
   2. For **address range**, enter a specific IPv4 address or a range of IPv4 addresses in CIDR notation.
- 3. If you want to allow Microsoft services trusted by the Azure Relay service to bypass this firewall, select **Yes** for **Allow [trusted Microsoft services](#trusted-services) to bypass this firewall?**.
+ 3. If you want to allow Microsoft services trusted by the Azure Relay service to bypass this firewall, select **Yes** for **Allow [trusted Microsoft services](#trusted-microsoft-services) to bypass this firewall?**.
:::image type="content" source="./media/ip-firewall/selected-networks-trusted-access-disabled.png" alt-text="Screenshot showing the Public access tab of the Networking page with the Firewall enabled."::: 1. Select **Save** on the toolbar to save the settings. Wait for a few minutes for the confirmation to show up on the portal notifications.
azure-relay Private Link Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-relay/private-link-service.md
The following procedure provides step-by-step instructions for disabling public
3. Select the **namespace** from the list to which you want to add a private endpoint. 4. On the left menu, select the **Networking** tab under **Settings**. 1. On the **Networking** page, for **Public network access**, select **Disabled** if you want the namespace to be accessed only via private endpoints.
-1. For **Allow trusted Microsoft services to bypass this firewall**, select **Yes** if you want to allow [trusted Microsoft services](#trusted-services) to bypass this firewall.
+1. For **Allow trusted Microsoft services to bypass this firewall**, select **Yes** if you want to allow [trusted Microsoft services](#trusted-microsoft-services) to bypass this firewall.
:::image type="content" source="./media/private-link-service/public-access-disabled.png" alt-text="Screenshot of the Networking page with public network access as Disabled."::: 1. Select the **Private endpoint connections** tab at the top of the page
azure-resource-manager Add Template To Azure Pipelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/add-template-to-azure-pipelines.md
You can use Azure Resource Group Deployment task or Azure CLI task to deploy a B
### Use Azure Resource Manager Template Deployment task
+> [!NOTE]
+> The *AzureResourceManagerTemplateDeployment@3* task won't work if you have a *bicepparam* file.
+ 1. Replace your starter pipeline with the following YAML. It creates a resource group and deploys a Bicep file by using an [Azure Resource Manager Template Deployment task](/azure/devops/pipelines/tasks/reference/azure-resource-manager-template-deployment-v3). ```yml
azure-resource-manager Bicep Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-config.md
Title: Bicep config file
description: Describes the configuration file for your Bicep deployments Previously updated : 08/08/2023 Last updated : 08/30/2023 # Configure your Bicep environment
The preceding sample enables `userDefinedTypes` and `extensibility`. The availabl
- **sourceMapping**: Enables basic source mapping to map an error location returned in the ARM template layer back to the relevant location in the Bicep file.
- **resourceTypedParamsAndOutputs**: Enables the type for a parameter or output to be of type resource to make it easier to pass resource references between modules. This feature is only partially implemented. See [Simplifying resource referencing](https://github.com/azure/bicep/issues/2245).
- **symbolicNameCodegen**: Allows the ARM template layer to use a new schema to represent resources as an object dictionary rather than an array of objects. This feature improves the semantic equivalence of the Bicep and ARM templates, resulting in more reliable code generation. Enabling this feature has no effect on the Bicep layer's functionality.
-- **userDefinedFunctions**: Allows you to define your own custom functions.
+- **userDefinedFunctions**: Allows you to define your own custom functions. See [User-defined functions in Bicep](./user-defined-functions.md).
+- **userDefinedTypes**: Allows you to define your own custom types for parameters. See [User-defined types in Bicep](https://aka.ms/bicepCustomTypes).

## Next steps
azure-resource-manager Deploy What If https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/deploy-what-if.md
Title: Bicep deployment what-if
description: Determine what changes will happen to your resources before deploying a Bicep file. Previously updated : 06/28/2023 Last updated : 09/06/2023 # Bicep deployment what-if operation
For REST API, use:
## Change types
-The what-if operation lists six different types of changes:
+The what-if operation lists seven different types of changes:
- **Create**: The resource doesn't currently exist but is defined in the Bicep file. The resource will be created.
- **Delete**: This change type only applies when using [complete mode](../templates/deployment-modes.md) for JSON template deployment. The resource exists, but isn't defined in the Bicep file. With complete mode, the resource will be deleted. Only resources that [support complete mode deletion](../templates/deployment-complete-mode-deletion.md) are included in this change type.
- **Ignore**: The resource exists, but isn't defined in the Bicep file. The resource won't be deployed or modified. When you reach the limits for expanding nested templates, you will encounter this change type. See [What-if limits](#what-if-limits).
- **NoChange**: The resource exists, and is defined in the Bicep file. The resource will be redeployed, but the properties of the resource won't change. This change type is returned when [ResultFormat](#result-format) is set to `FullResourcePayloads`, which is the default value.
+- **NoEffect**: The property is read-only and will be ignored by the service. For example, the `sku.tier` property is always set to match `sku.name` in the [`Microsoft.ServiceBus`](/azure/templates/microsoft.servicebus/namespaces) namespace.
- **Modify**: The resource exists, and is defined in the Bicep file. The resource will be redeployed, and the properties of the resource will change. This change type is returned when [ResultFormat](#result-format) is set to `FullResourcePayloads`, which is the default value.
- **Deploy**: The resource exists, and is defined in the Bicep file. The resource will be redeployed. The properties of the resource may or may not change. The operation returns this change type when it doesn't have enough information to determine if any properties will change. You only see this condition when [ResultFormat](#result-format) is set to `ResourceIdOnly`.
azure-resource-manager Deployment Script Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/deployment-script-bicep.md
resource runPowerShellInline 'Microsoft.Resources/deploymentScripts@2020-10-01'
} ```
-> [!NOTE]
-> The example is for demonstration purposes. The properties `scriptContent` and `primaryScriptUri` can't coexist in a Bicep file.
-
-> [!NOTE]
-> The _scriptContent_ shows a script with multiple lines. The Azure portal and Azure DevOps pipeline can't parse a deployment script with multiple lines. You can either chain the PowerShell commands (by using semicolons or _\\r\\n_ or _\\n_) into one line, or use the `primaryScriptUri` property with an external script file. There are many free JSON string escape/unescape tools available. For example, [https://www.freeformatter.com/json-escape.html](https://www.freeformatter.com/json-escape.html).
-
Property value details:

- `identity`: For deployment script API version 2020-10-01 or later, a user-assigned managed identity is optional unless you need to perform any Azure-specific actions in the script. For the API version 2019-10-01-preview, a managed identity is required as the deployment script service uses it to execute the scripts. When the identity property is specified, the script service calls `Connect-AzAccount -Identity` before invoking the user script. Currently, only user-assigned managed identity is supported. To log in with a different identity, you can call [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) in the script.
azure-resource-manager Deployment Stacks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/deployment-stacks.md
Title: Create & deploy deployment stacks in Bicep
description: Describes how to create deployment stacks in Bicep. Previously updated : 08/07/2023 Last updated : 09/06/2023 # Deployment stacks (Preview)
Deployment stacks provide the following benefits:
## Create deployment stacks
-> [!WARNING]
-> The `New-Az*DeploymentStack` cmdlets are incorrectly outputting a warning message regarding the current existence of a stack with the same name as the stack attempting to be created. When the stack exists, the warning message is not shown, which could lead to unintended upsert behavior. Conversely, if the stack doesn't exist, the cmdlets display a warning message, suggesting that the stack exists and requiring user interaction to proceed with the upsert. This behavior will be reversed soon with an upcoming change. In the interim, you can use the `-Force` flag when executing the cmdlets to bypass the warning prompt in case the stack doesn't exist. This way, you can proceed with the upsert process without user intervention.
- A deployment stack resource can be created at resource group, subscription, or management group scope. The template passed into a deployment stack defines the resources to be created or updated at the target scope specified for the template deployment. - A stack at resource group scope can deploy the template passed-in to the same resource group scope where the deployment stack exists.
az stack mg create \
--name '<deployment-stack-name>' \ --location '<location>' \ --template-file '<bicep-file-name>' \
- --deployment-subscription-id '<subscription-id>' \
+ --deployment-subscription '<subscription-id>' \
--deny-settings-mode 'none' ```
az stack mg create \
--name '<deployment-stack-name>' \ --location '<location>' \ --template-file '<bicep-file-name>' \
- --deployment-subscription-id '<subscription-id>' \
+ --deployment-subscription '<subscription-id>' \
--deny-settings-mode 'none' ```
azure-resource-manager File https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/file.md
Title: Bicep file structure and syntax
description: Describes the structure and properties of a Bicep file using declarative syntax. Previously updated : 06/06/2023 Last updated : 08/30/2023 # Understand the structure and syntax of Bicep files
metadata <metadata-name> = ANY
targetScope = '<scope>'
+func <user-defined-function-name> (<argument-name> <data-type>, <argument-name> <data-type>, ...) <function-data-type> => <expression>
+ @<decorator>(<argument>) param <parameter-name> <parameter-data-type> = <default-value>
The allowed values are:
In a module, you can specify a scope that is different than the scope for the rest of the Bicep file. For more information, see [Configure module scope](modules.md#set-module-scope)
+## Functions (Preview)
+
+> [!NOTE]
+> To enable the preview feature, see [Enable experimental features](./bicep-config.md#enable-experimental-features).
+
+In your Bicep file, you can create your own functions in addition to using the [standard Bicep functions](./bicep-functions.md) that are automatically available. Create your own functions when you have complicated expressions that are used repeatedly in your Bicep files.
+
+```bicep
+func buildUrl(https bool, hostname string, path string) string => '${https ? 'https' : 'http'}://${hostname}${empty(path) ? '' : '/${path}'}'
+
+output azureUrl string = buildUrl(true, 'microsoft.com', 'azure')
+```
+
+For more information, see [User-defined functions](./user-defined-functions.md).
+ ## Parameters Use parameters for values that need to vary for different deployments. You can define a default value for the parameter that is used if no value is provided during deployment.
azure-resource-manager User Defined Data Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/user-defined-data-types.md
Title: User-defined types in Bicep
description: Describes how to define and use user-defined data types in Bicep. Previously updated : 01/09/2023 Last updated : 08/29/2023 # User-defined data types in Bicep (Preview)
To enable this preview, modify your project's [bicepconfig.json](./bicep-config.
You can use the `type` statement to define user-defined data types. In addition, you can also use type expressions in some places to define custom types. ```bicep
-type <userDefinedDataTypeName> = <typeExpression>
+type <user-defined-data-type-name> = <type-expression>
```

The valid type expressions include:
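As a concrete illustration, the following minimal sketch declares a custom type and uses it to constrain a parameter. It assumes the `userDefinedTypes` preview feature is enabled in bicepconfig.json; the type name and fields are invented for this example.

```bicep
// Illustrative only: a user-defined type constraining a parameter's shape.
type storageConfig = {
  name: string
  sku: 'Premium_LRS' | 'Standard_LRS'
}

param storage storageConfig = {
  name: 'examplestorage'
  sku: 'Standard_LRS'
}

output storageName string = storage.name
```

Because `sku` is a union of string literals, any other value fails validation at compile time rather than at deployment.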
azure-resource-manager User Defined Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/user-defined-functions.md
+
+ Title: User-defined functions in Bicep
+description: Describes how to define and use user-defined functions in Bicep.
++ Last updated : 08/30/2023++
+# User-defined functions in Bicep (Preview)
+
+Within a Bicep file, you can create your own functions and then use them throughout that file. User-defined functions are separate from the [standard Bicep functions](./bicep-functions.md) that are automatically available. Create your own functions when you have complicated expressions that are used repeatedly in your Bicep files.
+
+[Bicep version 0.20 or newer](./install.md) is required to use this feature.
+
+## Enable the preview feature
+
+To enable this preview, modify your project's [bicepconfig.json](./bicep-config.md) file to include the following JSON:
+
+```json
+{
+ "experimentalFeaturesEnabled": {
+ "userDefinedFunctions": true
+ }
+}
+```
+
+## Define the function
+
+Use the `func` statement to define user-defined functions.
+
+```bicep
+func <user-defined-function-name> (<argument-name> <data-type>, <argument-name> <data-type>, ...) <function-data-type> => <expression>
+```
+
+## Examples
+
+The following examples show how to define and use user-defined functions:
+
+```bicep
+func buildUrl(https bool, hostname string, path string) string => '${https ? 'https' : 'http'}://${hostname}${empty(path) ? '' : '/${path}'}'
+
+func sayHelloString(name string) string => 'Hi ${name}!'
+
+func sayHelloObject(name string) object => {
+ hello: 'Hi ${name}!'
+}
+
+func nameArray(name string) array => [
+ name
+]
+
+func addNameArray(name string) array => [
+ 'Mary'
+ 'Bob'
+ name
+]
+
+output azureUrl string = buildUrl(true, 'microsoft.com', 'azure')
+output greetingArray array = map(['Evie', 'Casper'], name => sayHelloString(name))
+output greetingObject object = sayHelloObject('John')
+output nameArray array = nameArray('John')
+output addNameArray array = addNameArray('John')
+
+```
+
+The outputs from the preceding examples are:
++
+| Name | Type | Value |
+| - | - | -- |
+| azureUrl | String | https://microsoft.com/azure |
+| greetingArray | Array | ["Hi Evie!","Hi Casper!"] |
+| greetingObject | Object | {"hello":"Hi John!"} |
+| nameArray | Array | ["John"] |
+| addNameArray | Array | ["Mary","Bob","John"] |
+
+## Limitations
+
+When defining a user function, there are some restrictions:
+
+* The function can't access variables.
+* The function can only use parameters that are defined in the function.
+* The function can't call other user-defined functions.
+* The function can't use the [reference](bicep-functions-resource.md#reference) function or any of the [list](bicep-functions-resource.md#list) functions.
+* Parameters for the function can't have default values.
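To make these restrictions concrete, here's a minimal sketch (function and parameter names are invented) that stays within the rules by relying only on its own parameters:

```bicep
// Allowed: the expression uses only the function's own parameters.
func fqdn(host string, zone string) string => '${host}.${zone}'

// Not allowed (shown commented out for illustration): the expression
// can't reference a variable or call another user-defined function.
// var suffix = 'contoso.com'
// func badFqdn(host string) string => '${host}.${suffix}'

output siteAddress string = fqdn('www', 'contoso.com')
```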
+
+## Next steps
+
+* To learn about the Bicep file structure and syntax, see [Understand the structure and syntax of Bicep files](./file.md).
+* For a list of the available Bicep functions, see [Bicep functions](./bicep-functions.md).
azure-resource-manager Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/custom-providers/policy-reference.md
Title: Built-in policy definitions for Azure Custom Resource Providers description: Lists Azure Policy built-in policy definitions for Azure Custom Resource Providers. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2023 Last updated : 08/30/2023
azure-resource-manager Approve Just In Time Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/approve-just-in-time-access.md
# Configure and approve just-in-time access for Azure Managed Applications
-As a consumer of a managed application, you might not be comfortable giving the publisher permanent access to the managed resource group. To give you greater control over granting access to managed resources, Azure Managed Applications provides a feature called just-in-time (JIT) access, which is currently in preview. It enables you to approve when and for how long the publisher has access to the resource group. The publisher can make required updates during that time, but when that time is over, the publisher's access expires.
+As a consumer of a managed application, you might not be comfortable giving the publisher permanent access to the managed resource group. To give you greater control over granting access to managed resources, Azure Managed Applications provides a feature called just-in-time (JIT) access. It enables you to approve when and for how long the publisher has access to the resource group. The publisher can make required updates during that time, but when that time is over, the publisher's access expires.
The work flow for granting access is:
To approve requests through the managed application:
1. Select **JIT Access** for the managed application, and select **Approve Requests**. ![Approve requests](./media/approve-just-in-time-access/approve-requests.png)
-
+ 1. Select the request to approve. ![Select request](./media/approve-just-in-time-access/select-request.png)
azure-resource-manager Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/policy-reference.md
Title: Built-in policy definitions for Azure Managed Applications description: Lists Azure Policy built-in policy definitions for Azure Managed Applications. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2023 Last updated : 08/30/2023
azure-resource-manager Request Just In Time Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/request-just-in-time-access.md
# Enable and request just-in-time access for Azure Managed Applications
-Consumers of your managed application may be reluctant to grant you permanent access to the managed resource group. As a publisher of a managed application, you might prefer that consumers know exactly when you need to access the managed resources. To give consumers greater control over granting access to managed resources, Azure Managed Applications provides a feature called just-in-time (JIT) access.
+Consumers of your managed application may be reluctant to grant you permanent access to the managed resource group. As a publisher of a managed application, you might prefer that consumers know exactly when you need to access the managed resources. To give consumers greater control over granting access to managed resources, Azure Managed Applications provides a feature called just-in-time (JIT) access.
JIT access enables you to request elevated access to a managed application's resources for troubleshooting or maintenance. You always have read-only access to the resources, but for a specific time period you can have greater access. The work flow for granting access is:
In "outputs":
"jitAccessPolicy": "[steps('jitConfiguration').jitConfigurationControl]" ```
-> [!NOTE]
-> JIT access is in preview. The schema for JIT configuration could change in future iterations.
- ## Enable JIT access When creating your offer in Partner Center, make sure you enable JIT access.
To send a JIT access request:
1. On the **Activate Role** form, select a start time and duration for your role to be active. Select **Activate** to send the request.
- ![Activate access](./media/request-just-in-time-access/activate-access.png)
+ ![Activate access](./media/request-just-in-time-access/activate-access.png)
1. View the notifications to see that the new JIT request is successfully sent to the consumer.
azure-resource-manager Azure Subscription Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/azure-subscription-service-limits.md
Title: Azure subscription limits and quotas description: Provides a list of common Azure subscription and service limits, quotas, and constraints. This article includes information on how to increase limits along with maximum values. Previously updated : 12/13/2022 Last updated : 08/24/2023 # Azure subscription and service limits, quotas, and constraints
To learn more about Azure pricing, see [Azure pricing overview](https://azure.mi
> [!NOTE] > Some services have adjustable limits. >
-> When a service doesn't have adjustable limits, the following tables use the header **Limit**. In those cases, the default and the maximum limits are the same.
+> When the limit can be adjusted, the tables include **Default limit** and **Maximum limit** headers. The limit can be raised above the default limit but not above the maximum limit. Some services with adjustable limits use different headers with information about adjusting the limit.
>
-> When the limit can be adjusted, the tables include **Default limit** and **Maximum limit** headers. The limit can be raised above the default limit but not above the maximum limit.
+> When a service doesn't have adjustable limits, the following tables use the header **Limit** without any additional information about adjusting the limit. In those cases, the default and the maximum limits are the same.
> > If you want to raise the limit or quota above the default limit, [open an online customer support request at no charge](../templates/error-resource-quota.md). >
The following limits apply when you use Azure Resource Manager and Azure resourc
[!INCLUDE [api-center-service-limits](../../api-center/includes/api-center-service-limits.md)] - ## API Management limits [!INCLUDE [api-management-service-limits](../../../includes/api-management-service-limits.md)]
Azure Communications Gateway also has limits on the SIP signaling.
For Azure Container Apps limits, see [Quotas in Azure Container Apps](../../container-apps/quotas.md). + ## Azure Cosmos DB limits For Azure Cosmos DB limits, see [Limits in Azure Cosmos DB](../../cosmos-db/concepts-limits.md).
The following table details the features and limits of the Basic, Standard, and
## Digital Twins limits > [!NOTE]
-> Some areas of this service have adjustable limits, and others do not. This is represented in the tables below with the *Adjustable?* column. When the limit can be adjusted, the *Adjustable?* value is *Yes*.
+> Some areas of this service have adjustable limits, and others do not. This is represented in the following tables with the *Adjustable?* column. When the limit can be adjusted, the *Adjustable?* value is *Yes*.
[!INCLUDE [digital-twins-limits](../../../includes/digital-twins-limits.md)]
The latest values for Microsoft Purview quotas can be found in the [Microsoft Pu
## Microsoft Sentinel limits -
-### Incident limits
--
-### Machine learning-based limits
--
-### Multi workspace limits
--
-### Notebook limits
--
-### Repositories limits
--
-### Threat intelligence limits
--
-## TI upload indicators API limits
--
-### User and Entity Behavior Analytics (UEBA) limits
--
-### Watchlist limits
--
-### Workbook limits
-
+For Microsoft Sentinel limits, see [Service limits for Microsoft Sentinel](../../sentinel/sentinel-service-limits.md).
## Service Bus limits
For more information, see [Virtual machine sizes](../../virtual-machines/sizes.m
[!INCLUDE [azure-storage-limits-vm-apps](../../../includes/azure-storage-limits-vm-apps.md)]
-For more information see [VM Applications](../../virtual-machines/vm-applications.md).
+For more information, see [VM Applications](../../virtual-machines/vm-applications.md).
#### Disk encryption sets
azure-resource-manager Lock Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/lock-resources.md
Title: Protect your Azure resources with a lock description: You can safeguard Azure resources from updates or deletions by locking all users and roles. Previously updated : 04/06/2023 Last updated : 08/24/2023 content_well_notification: - AI-contribution
Applying locks can lead to unexpected results. Some operations, which don't seem
For example, if a request uses [File Shares - Delete](/rest/api/storagerp/file-shares/delete), which is a control plane operation, the deletion fails. If the request uses [Delete Share](/rest/api/storageservices/delete-share), which is a data plane operation, the deletion succeeds. We recommend that you use a control plane operation.

-- A read-only lock or cannot-delete lock on a **network security group (NSG)** prevents the creation of a traffic flow log for the NSG.
+- A read-only lock on a **network security group (NSG)** prevents the creation of the corresponding NSG flow log. A cannot-delete lock on a **network security group (NSG)** doesn't prevent the creation or modification of the corresponding NSG flow log.
- A read-only lock on an **App Service** resource prevents Visual Studio Server Explorer from displaying files for the resource because that interaction requires write access.
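For reference, a lock can also be applied declaratively. The following is a minimal Bicep sketch, with an invented lock name and notes, that applies a CanNotDelete lock at resource group scope:

```bicep
// A minimal sketch: deploy at resource group scope to lock the whole group.
// CanNotDelete blocks deletion but still allows reads and modifications;
// ReadOnly blocks both modification and deletion.
resource groupLock 'Microsoft.Authorization/locks@2020-05-01' = {
  name: 'do-not-delete'
  properties: {
    level: 'CanNotDelete'
    notes: 'Protects resources in this group from accidental deletion.'
  }
}
```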
azure-resource-manager Manage Resource Groups Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/manage-resource-groups-portal.md
Title: Manage resource groups - Azure portal description: Use Azure portal to manage your resource groups through Azure Resource Manager. Shows how to create, list, and delete resource groups.- Previously updated : 03/26/2019- Last updated : 08/16/2023 # Manage Azure resource groups by using the Azure portal
The resource group stores metadata about the resources. Therefore, when you spec
## Create resource groups 1. Sign in to the [Azure portal](https://portal.azure.com).
-2. Select **Resource groups**
+1. Select **Resource groups**.
+1. Select **Create**.
- :::image type="content" source="./media/manage-resource-groups-portal/manage-resource-groups-add-group.png" alt-text="Screenshot of the Azure portal with 'Resource groups' and 'Add' highlighted.":::
-3. Select **Add**.
-4. Enter the following values:
+ :::image type="content" source="./media/manage-resource-groups-portal/manage-resource-groups-add-group.png" alt-text="Screenshot of the Azure portal with 'Resource groups' and 'Add' highlighted." lightbox="./media/manage-resource-groups-portal/manage-resource-groups-add-group.png":::
- - **Subscription**: Select your Azure subscription.
- - **Resource group**: Enter a new resource group name.
+1. Enter the following values:
+
+ - **Subscription**: Select your Azure subscription.
+ - **Resource group**: Enter a new resource group name.
- **Region**: Select an Azure location, such as **Central US**.
- :::image type="content" source="./media/manage-resource-groups-portal/manage-resource-groups-create-group.png" alt-text="Screenshot of the Create Resource Group form in the Azure portal with fields for Subscription, Resource group, and Region.":::
-5. Select **Review + Create**
-6. Select **Create**. It takes a few seconds to create a resource group.
-7. Select **Refresh** from the top menu to refresh the resource group list, and then select the newly created resource group to open it. Or select **Notification**(the bell icon) from the top, and then select **Go to resource group** to open the newly created resource group
+ :::image type="content" source="./media/manage-resource-groups-portal/manage-resource-groups-create-group.png" alt-text="Screenshot of the Create Resource Group form in the Azure portal with fields for Subscription, Resource group, and Region." lightbox="./media/manage-resource-groups-portal/manage-resource-groups-create-group.png":::
+1. Select **Review + Create**.
+1. Select **Create**. It takes a few seconds to create a resource group.
+1. Select **Refresh** from the top menu to refresh the resource group list, and then select the newly created resource group to open it. Or select **Notification** (the bell icon) from the top, and then select **Go to resource group** to open the newly created resource group.
- :::image type="content" source="./media/manage-resource-groups-portal/manage-resource-groups-add-group-go-to-resource-group.png" alt-text="Screenshot of the Azure portal with the 'Go to resource group' button in the Notifications panel.":::
+ :::image type="content" source="./media/manage-resource-groups-portal/manage-resource-groups-add-group-go-to-resource-group.png" alt-text="Screenshot of the Azure portal with the 'Go to resource group' button in the Notifications panel." lightbox="./media/manage-resource-groups-portal/manage-resource-groups-add-group-go-to-resource-group.png":::
## List resource groups 1. Sign in to the [Azure portal](https://portal.azure.com).
-2. To list the resource groups, select **Resource groups**
-
- :::image type="content" source="./media/manage-resource-groups-portal/manage-resource-groups-list-groups.png" alt-text="Screenshot of the Azure portal displaying a list of resource groups.":::
+1. To list the resource groups, select **Resource groups**.
+1. To customize the information displayed for the resource groups, configure the filters. The following screenshot shows the additional columns you could add to the display:
-3. To customize the information displayed for the resource groups, select **Edit columns**. The following screenshot shows the additional columns you could add to the display:
+ :::image type="content" source="./media/manage-resource-groups-portal/manage-resource-groups-list-groups.png" alt-text="Screenshot of the Azure portal displaying a list of resource groups." lightbox="./media/manage-resource-groups-portal/manage-resource-groups-list-groups.png":::
## Open resource groups 1. Sign in to the [Azure portal](https://portal.azure.com).
-2. Select **Resource groups**.
-3. Select the resource group you want to open.
+1. Select **Resource groups**.
+1. Select the resource group you want to open.
## Delete resource groups 1. Open the resource group you want to delete. See [Open resource groups](#open-resource-groups).
-2. Select **Delete resource group**.
+1. Select **Delete resource group**.
- :::image type="content" source="./media/manage-resource-groups-portal/delete-group.png" alt-text="Screenshot of the Azure portal with the Delete resource group button highlighted in a specific resource group.":::
+ :::image type="content" source="./media/manage-resource-groups-portal/delete-group.png" alt-text="Screenshot of the Azure portal with the Delete resource group button highlighted in a specific resource group." lightbox="./media/manage-resource-groups-portal/delete-group.png":::
For more information about how Azure Resource Manager orders the deletion of resources, see [Azure Resource Manager resource group deletion](delete-resource-group.md).
You can move the resources in the group to another resource group. For more info
## Lock resource groups
-Locking prevents other users in your organization from accidentally deleting or modifying critical resources, such as Azure subscription, resource group, or resource.
+Locking prevents other users in your organization from accidentally deleting or modifying critical resources, such as Azure subscription, resource group, or resource.
1. Open the resource group you want to lock. See [Open resource groups](#open-resource-groups).
-2. In the left pane, select **Locks**.
-3. To add a lock to the resource group, select **Add**.
-4. Enter **Lock name**, **Lock type**, and **Notes**. The lock types include **Read-only**, and **Delete**.
+1. In the left pane, select **Locks**.
+1. To add a lock to the resource group, select **Add**.
+1. Enter **Lock name**, **Lock type**, and **Notes**. The lock types include **Read-only**, and **Delete**.
- :::image type="content" source="./media/manage-resource-groups-portal/manage-resource-groups-add-lock.png" alt-text="Screenshot of the Add Lock form in the Azure portal with fields for Lock name, Lock type, and Notes.":::
+ :::image type="content" source="./media/manage-resource-groups-portal/manage-resource-groups-add-lock.png" alt-text="Screenshot of the Add Lock form in the Azure portal with fields for Lock name, Lock type, and Notes." lightbox="./media/manage-resource-groups-portal/manage-resource-groups-add-lock.png":::
For more information, see [Lock resources to prevent unexpected changes](lock-resources.md).
azure-resource-manager App Service Move Limitations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/move-limitations/app-service-move-limitations.md
Title: Move Azure App Service resources across resource groups or subscriptions
description: Use Azure Resource Manager to move App Service resources to a new resource group or subscription. Previously updated : 03/31/2022 Last updated : 08/17/2023 # Move App Service resources to a new resource group or subscription
When you move a Web App across subscriptions, the following guidance applies:
- Uploaded or imported TLS/SSL certificates - App Service Environments - All App Service resources in the resource group must be moved together.-- App Service Environments can't be moved to a new resource group or subscription. However, you can move a web app and app service plan to a new subscription without moving the App Service Environment.
+- App Service Environments can't be moved to a new resource group or subscription.
+ - You can move a Web App and App Service plan hosted on an App Service Environment to a new subscription without moving the App Service Environment. The Web App and App Service plan that you move will always be associated with your initial App Service Environment. You can't move a Web App/App Service plan to a different App Service Environment.
+ - If you need to move a Web App and App Service plan to a new App Service Environment, you'll need to recreate these resources in your new App Service Environment. Consider using the [backup and restore feature](../../../app-service/manage-backup.md) as a way of recreating your resources in a different App Service Environment.
- You can move a certificate bound to a web app without deleting the TLS bindings, as long as the certificate is moved with all other resources in the resource group. However, you can't move a free App Service managed certificate. For that scenario, see [Move with free managed certificates](#move-with-free-managed-certificates). - App Service apps with private endpoints cannot be moved. Delete the private endpoint(s) and recreate them after the move. - App Service resources can only be moved from the resource group in which they were originally created. If an App Service resource is no longer in its original resource group, move it back to its original resource group. Then, move the resource across subscriptions. For help with finding the original resource group, see the next section.
azure-resource-manager Move Resource Group And Subscription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/move-resource-group-and-subscription.md
Title: Move resources to a new subscription or resource group description: Use Azure Resource Manager to move resources to a new resource group or subscription.++ Last updated 04/24/2023
azure-resource-manager Move Support Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/move-support-resources.md
Before starting your move operation, review the [checklist](./move-resource-grou
> [!div class="mx-tableFixed"] > | Resource type | Resource group | Subscription | Region move | > | - | -- | - | -- |
-> | loadtests | No | No | No |
+> | loadtests | Yes | Yes | No |
## Microsoft.LocationBasedServices
azure-resource-manager Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/policy-reference.md
Title: Built-in policy definitions for Azure Resource Manager description: Lists Azure Policy built-in policy definitions for Azure Resource Manager. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2023 Last updated : 08/30/2023
azure-resource-manager Resource Name Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/resource-name-rules.md
In the following tables, the term alphanumeric refers to:
> | Entity | Scope | Length | Valid Characters | > | | | | | > | certificates | resource group | 1-260 | Can't use:<br>`/` <br><br>Can't end with space or period. |
-> | serverfarms | resource group | 1-40 | Alphanumeric, hyphens and Unicode characters that can be mapped to Punycode |
+> | serverfarms | resource group | 1-60 | Alphanumeric, hyphens and Unicode characters that can be mapped to Punycode |
> | sites | global or per domain. See note below. | 2-60 | Alphanumeric, hyphens and Unicode characters that can be mapped to Punycode<br><br>Can't start or end with hyphen. | > | sites / slots | site | 2-59 | Alphanumeric, hyphens and Unicode characters that can be mapped to Punycode |
azure-resource-manager Resources Without Resource Group Limit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/resources-without-resource-group-limit.md
Title: Resources without 800 count limit description: Lists the Azure resource types that can have more than 800 instances in a resource group. Previously updated : 02/02/2023 Last updated : 08/15/2023 # Resources not limited to 800 instances per resource group
Some resources have a limit on the number instances per region. This limit is di
* automationAccounts
+## Microsoft.AzureArcData
+
+* SqlServerInstances
+ ## Microsoft.AzureStack * generateDeploymentLicense
Some resources have a limit on the number instances per region. This limit is di
* botServices - By default, limited to 800 instances. That limit can be increased by [registering the following features](preview-features.md) - Microsoft.Resources/ARMDisableResourcesPerRGLimit
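Feature registrations like the one above are usually done with Azure PowerShell or the Azure CLI. As a sketch of a declarative alternative (assuming a subscription-scope deployment and the `Microsoft.Features` resource provider), the preview feature can also be registered from a template:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2018-05-01/subscriptionDeploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      // Registers the feature flag named above for the target subscription
      "type": "Microsoft.Features/featureProviders/subscriptionFeatureRegistrations",
      "apiVersion": "2021-07-01",
      "name": "Microsoft.Resources/ARMDisableResourcesPerRGLimit",
      "properties": {}
    }
  ]
}
```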
+## Microsoft.Cdn
+
+* profiles - By default, limited to 800 instances. That limit can be increased by [registering the following features](preview-features.md) - Microsoft.Resources/ARMDisableResourcesPerRGLimit
+* profiles/networkpolicies - By default, limited to 800 instances. That limit can be increased by [registering the following features](preview-features.md) - Microsoft.Resources/ARMDisableResourcesPerRGLimit
+ ## Microsoft.Compute
+* diskEncryptionSets
* disks * galleries * galleries/images
Some resources have a limit on the number instances per region. This limit is di
## Microsoft.DBforPostgreSQL * flexibleServers
-* serverGroups
* serverGroupsv2 * servers
-* serversv2
## Microsoft.DevTestLab
Some resources have a limit on the number instances per region. This limit is di
## Microsoft.EdgeOrder
+* bootstrapConfigurations
* orderItems * orders
Some resources have a limit on the number instances per region. This limit is di
* clusters * namespaces
+## Microsoft.Fabric
+
+* capacities - By default, limited to 800 instances. That limit can be increased by [registering the following features](preview-features.md) - Microsoft.Fabric/UnlimitedResourceGroupQuota
+ ## Microsoft.GuestConfiguration * guestConfigurationAssignments
Some resources have a limit on the number instances per region. This limit is di
* machines * machines/extensions
+* machines/runcommands
## Microsoft.Logic
Some resources have a limit on the number instances per region. This limit is di
## Microsoft.Network
-* applicationGatewayWebApplicationFirewallPolicies
* applicationSecurityGroups
-* bastionHosts
* customIpPrefixes * ddosProtectionPlans
-* dnsForwardingRulesets
-* dnsForwardingRulesets/forwardingRules
-* dnsForwardingRulesets/virtualNetworkLinks
-* dnsResolvers
-* dnsResolvers/inboundEndpoints
-* dnsResolvers/outboundEndpoints
-* dnszones
-* dnszones/A
-* dnszones/AAAA
-* dnszones/all
-* dnszones/CAA
-* dnszones/CNAME
-* dnszones/MX
-* dnszones/NS
-* dnszones/PTR
-* dnszones/recordsets
-* dnszones/SOA
-* dnszones/SRV
-* dnszones/TXT
-* expressRouteCrossConnections
* loadBalancers - By default, limited to 800 instances. That limit can be increased by [registering the following features](preview-features.md) - Microsoft.Resources/ARMDisableResourcesPerRGLimit * networkIntentPolicies * networkInterfaces * networkSecurityGroups
-* privateDnsZones
-* privateDnsZones/A
-* privateDnsZones/AAAA
-* privateDnsZones/all
-* privateDnsZones/CNAME
-* privateDnsZones/MX
-* privateDnsZones/PTR
-* privateDnsZones/SOA
-* privateDnsZones/SRV
-* privateDnsZones/TXT
-* privateDnsZones/virtualNetworkLinks
* privateEndpointRedirectMaps * privateEndpoints * privateLinkServices * publicIPAddresses * serviceEndpointPolicies
-* trafficmanagerprofiles
-* virtualNetworks/privateDnsZoneLinks
* virtualNetworkTaps
+## Microsoft.NetworkCloud
+
+* volumes
+
+## Microsoft.NetworkFunction
+
+* vpnBranches - By default, limited to 800 instances. That limit can be increased by [registering the following features](preview-features.md) - Microsoft.NetworkFunction/AllowNaasVpnAccess
+ ## Microsoft.NotificationHubs * namespaces - By default, limited to 800 instances. That limit can be increased by [registering the following features](preview-features.md) - Microsoft.NotificationHubs/ARMDisableResourcesPerRGLimit
Some resources have a limit on the number instances per region. This limit is di
* assignments * securityConnectors
+* securityConnectors/devops
## Microsoft.ServiceBus
Some resources have a limit on the number instances per region. This limit is di
* accounts/jobs * accounts/models * accounts/networks
+* accounts/secrets
* accounts/storageContainers ## Microsoft.Sql
Some resources have a limit on the number instances per region. This limit is di
* storageAccounts
-## Microsoft.StoragePool
-
-* diskPools
-* diskPools/iscsiTargets
- ## Microsoft.StreamAnalytics * streamingjobs - By default, limited to 800 instances. That limit can be increased by [registering the following features](preview-features.md) - Microsoft.StreamAnalytics/ASADisableARMResourcesPerRGLimit
Some resources have a limit on the number instances per region. This limit is di
## Microsoft.Web * apiManagementAccounts/apis
+* certificates - By default, limited to 800 instances. That limit can be increased by [registering the following features](preview-features.md) - Microsoft.Web/DisableResourcesPerRGLimitForAPIMinWebApp
* sites ## Next steps
azure-resource-manager Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Resource Manager description: Lists Azure Policy Regulatory Compliance controls available for Azure Resource Manager. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/03/2023 Last updated : 08/25/2023
azure-resource-manager Tag Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/tag-support.md
To get the same data as a file of comma-separated values, download [tag-support.
> [!div class="mx-tableFixed"] > | Resource type | Supports tags | Tag in cost report | > | - | -- | -- |
-> | DataControllers | Yes | Yes |
+> | DataControllers | Yes | No |
> | DataControllers / ActiveDirectoryConnectors | No | No |
-> | PostgresInstances | Yes | Yes |
-> | SqlManagedInstances | Yes | Yes |
-> | SqlServerInstances | Yes | Yes |
-> | SqlServerInstances / Databases | Yes | Yes |
+> | PostgresInstances | Yes | No |
+> | SqlManagedInstances | Yes | No |
+> | SqlServerInstances | Yes | No |
+> | SqlServerInstances / Databases | Yes | No |
+> | SqlServerInstances / AvailabilityGroups | Yes | No |
## Microsoft.AzureCIS
To get the same data as a file of comma-separated values, download [tag-support.
> | dstsServiceAccounts | Yes | Yes | > | dstsServiceClientIdentities | Yes | Yes |
-## Microsoft.AzureData
-
-> [!div class="mx-tableFixed"]
-> | Resource type | Supports tags | Tag in cost report |
-> | - | -- | -- |
-> | sqlServerRegistrations | Yes | Yes |
-> | sqlServerRegistrations / sqlServers | No | No |
- ## Microsoft.AzureScan > [!div class="mx-tableFixed"]
To get the same data as a file of comma-separated values, download [tag-support.
## Next steps To learn how to apply tags to resources, see [Use tags to organize your Azure resources](tag-resources.md).+
azure-resource-manager Tls Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/tls-support.md
Title: TLS version supported by Azure Resource Manager
description: Describes the deprecation of TLS versions prior to 1.2 in Azure Resource Manager Previously updated : 09/26/2022 Last updated : 08/24/2023 # Migrating to TLS 1.2 for Azure Resource Manager Transport Layer Security (TLS) is a security protocol that establishes encryption channels over computer networks. TLS 1.2 is the current industry standard and is supported by Azure Resource Manager. For backwards compatibility, Azure Resource Manager also supports earlier versions, such as TLS 1.0 and 1.1, but that support is ending.
-To ensure that Azure is compliant with regulatory requirements, and provide improved security for our customers, **Azure Resource Manager will stop supporting protocols older than TLS 1.2 by Fall 2023.**
+To ensure that Azure is compliant with regulatory requirements, and provide improved security for our customers, **Azure Resource Manager will stop supporting protocols older than TLS 1.2 on November 30, 2023.**
This article provides guidance for removing dependencies on older security protocols. ## Why migrate to TLS 1.2
-TLS encrypts data sent over the internet to prevent malicious users from accessing private, sensitive information. The client and server perform a TLS handshake to verify each other's identity and determine how they'll communicate. During the handshake, each party identifies which TLS versions they use. The client and server can communicate if they both support a common version.
+TLS encrypts data sent over the internet to prevent malicious users from accessing private, sensitive information. The client and server perform a TLS handshake to verify each other's identity and determine how they'll communicate. During the handshake, each party identifies which TLS versions they use. The client and server can communicate if they both support a common version.
TLS 1.2 is more secure and faster than its predecessors.
azure-resource-manager Copy Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/copy-resources.md
Title: Deploy multiple instances of resources
description: Use copy operation and arrays in an Azure Resource Manager template (ARM template) to deploy resource type many times. Previously updated : 05/22/2023 Last updated : 08/30/2023 # Resource iteration in ARM templates
The following example creates the number of storage accounts specified in the `s
Notice that the name of each resource includes the `copyIndex()` function, which returns the current iteration in the loop. `copyIndex()` is zero-based. So, the following example: ```json
-"name": "[concat('storage', copyIndex())]",
+"name": "[format('storage{0}', copyIndex())]",
``` Creates these names:
Creates these names:
To offset the index value, you can pass a value in the `copyIndex()` function. The number of iterations is still specified in the copy element, but the value of `copyIndex` is offset by the specified value. So, the following example: ```json
-"name": "[concat('storage', copyIndex(1))]",
+"name": "[format('storage{0}', copyIndex(1))]",
``` Creates these names:
The following example creates one storage account for each name provided in the
If you want to return values from the deployed resources, you can use [copy in the outputs section](copy-outputs.md).
+### Use symbolic name
+
+A [symbolic name](./resource-declaration.md#use-symbolic-name) is assigned to the resource copy loop as a whole. The loop index is zero-based. In the following example, `myStorages[1]` references the second resource in the loop.
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "languageVersion": "2.0",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "location": {
+ "type": "string",
+ "defaultValue": "[resourceGroup().location]"
+ },
+ "storageCount": {
+ "type": "int",
+ "defaultValue": 2
+ }
+ },
+ "resources": {
+ "myStorages": {
+ "type": "Microsoft.Storage/storageAccounts",
+ "apiVersion": "2022-09-01",
+ "name": "[format('{0}storage{1}', copyIndex(), uniqueString(resourceGroup().id))]",
+ "location": "[parameters('location')]",
+ "sku": {
+ "name": "Standard_LRS"
+ },
+ "kind": "Storage",
+ "properties": {},
+ "copy": {
+ "name": "storagecopy",
+ "count": "[parameters('storageCount')]"
+ }
+ }
+ },
+ "outputs": {
+ "storageEndpoint":{
+ "type": "object",
+ "value": "[reference('myStorages[1]').primaryEndpoints]"
+ }
+ }
+}
+```
+
+If the index is a runtime value, format the reference expression yourself. For example:
+
+```json
+"outputs": {
+ "storageEndpoint":{
+ "type": "object",
+ "value": "[reference(format('myStorages[{0}]', variables('runtimeIndex'))).primaryEndpoints]"
+ }
+}
+```
+
+Symbolic names can be used in [dependsOn arrays](./resource-dependency.md#depend-on-resources-in-a-loop). If a symbolic name is for a copy loop, all resources in the loop are added as dependencies. For more information, see [Depend on resources in a loop](./resource-dependency.md#depend-on-resources-in-a-loop).
+ ## Serial or Parallel By default, Resource Manager creates the resources in parallel. It applies no limit to the number of resources deployed in parallel, other than the total limit of 800 resources in the template. The order in which they're created isn't guaranteed.
The following example shows the implementation.
}, { "type": "Microsoft.DataFactory/factories/datasets",
- "name": "[concat('exampleDataFactory', '/', 'exampleDataSet', copyIndex())]",
+ "name": "[format('exampleDataFactory/exampleDataSet{0}', copyIndex())]",
"dependsOn": [ "exampleDataFactory" ],
azure-resource-manager Definitions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/definitions.md
+
+ Title: Type definitions in templates
+description: Describes how to create type definitions in an Azure Resource Manager template (ARM template).
++ Last updated : 08/22/2023++
+# Type definitions in ARM templates
+
+This article describes how to create and use definitions in your Azure Resource Manager template (ARM template). By defining your own types, you can reuse them throughout the template. Type definitions can only be used with [languageVersion 2.0](./syntax.md#languageversion-20).
++
+> [!TIP]
+> We recommend [Bicep](../bicep/overview.md) because it offers the same capabilities as ARM templates and the syntax is easier to use. To learn more, see [User-defined data types in Bicep](../bicep/user-defined-data-types.md).
+
+## Minimal declaration
+
+At a minimum, every type definition needs a name and either a `type` or a [`$ref`](#use-definition).
+
+```json
+"definitions": {
+ "demoStringType": {
+ "type": "string"
+ },
+ "demoIntType": {
+ "type": "int"
+ },
+ "demoBoolType": {
+ "type": "bool"
+ },
+ "demoObjectType": {
+ "type": "object"
+ },
+ "demoArrayType": {
+ "type": "array"
+ }
+}
+```
+
+## Allowed values
+
+You can define allowed values for a type definition. You provide the allowed values in an array. The deployment fails during validation if a value is passed in for the type definition that isn't one of the allowed values.
+
+```json
+"definitions": {
+ "demoEnumType": {
+ "type": "string",
+ "allowedValues": [
+ "one",
+ "two"
+ ]
+ }
+}
+```
+
+## Length constraints
+
+You can specify minimum and maximum lengths for string and array type definitions. You can set one or both constraints. For strings, the length indicates the number of characters. For arrays, the length indicates the number of items in the array.
+
+The following example declares two type definitions. One type definition is for a storage account name that must have 3-24 characters. The other type definition is an array that must have 1-5 items.
+
+```json
+"definitions": {
+ "storageAccountNameType": {
+ "type": "string",
+ "minLength": 3,
+ "maxLength": 24
+ },
+ "appNameType": {
+ "type": "array",
+ "minLength": 1,
+ "maxLength": 5
+ }
+}
+```
+
+## Integer constraints
+
+You can set minimum and maximum values for integer type definitions. You can set one or both constraints.
+
+```json
+"definitions": {
+ "monthType": {
+ "type": "int",
+ "minValue": 1,
+ "maxValue": 12
+ }
+}
+```
+
+## Object constraints
+
+### Properties
+
+The value of `properties` is a map of property name => type definition.
+
+The following example would accept `{"foo": "string", "bar": 1}`, but reject `{"foo": "string", "bar": -1}`, `{"foo": "", "bar": 1}`, or any object without a `foo` or `bar` property.
+
+```json
+"definitions": {
+ "objectDefinition": {
+ "type": "object",
+ "properties": {
+ "foo": {
+ "type": "string",
+ "minLength": 3
+ },
+ "bar": {
+ "type": "int",
+ "minValue": 0
+ }
+ }
+ }
+},
+"parameters": {
+ "objectParameter": {
+ "$ref": "#/definitions/objectDefinition",
+ }
+}
+```
+
+All properties are required unless the property's type definition has the ["nullable": true](#nullable-constraint) constraint. To make both properties in the preceding example optional, it would look like:
+
+```json
+"definitions": {
+ "objectDefinition": {
+ "type": "object",
+ "properties": {
+ "foo": {
+ "type": "string",
+ "minLength": 3,
+ "nullable": true
+ },
+ "bar": {
+ "type": "int",
+ "minValue": 0,
+ "nullable": true
+ }
+ }
+ }
+}
+```
+
+### additionalProperties
+
+The value of `additionalProperties` is a type definition or a boolean value. If no `additionalProperties` constraint is defined, the default value is `true`.
+
+If the value is a type definition, it describes the schema that's applied to all properties not mentioned in the [`properties`](#properties) constraint. The following example would accept `{"fizz": "buzz", "foo": "bar"}` but reject `{"property": 1}`.
+
+```json
+"definitions": {
+ "dictionaryDefinition": {
+ "type": "object",
+ "properties": {
+ "foo": {
+ "type": "string",
+ "minLength": 3,
+ "nullable": true
+ },
+ "bar": {
+ "type": "int",
+ "minValue": 0,
+ "nullable": true
+ }
+ },
+ "additionalProperties": {
+ "type": "string"
+ }
+ }
+}
+```
+
+If the value is `false`, no properties beyond those defined in the [`properties`](#properties) constraint may be supplied. The following example would accept `{"foo": "string", "bar": 1}`, but reject `{"foo": "string", "bar": 1, "fizz": "buzz"}`.
+
+```json
+"definitions": {
+ "dictionaryDefinition": {
+ "type": "object",
+ "properties": {
+ "foo": {
+ "type": "string",
+ "minLength": 3
+ },
+ "bar": {
+ "type": "int",
+ "minValue": 0
+ }
+ },
+ "additionalProperties": false
+ }
+}
+```
+
+If the value is `true`, any property not defined in the [`properties`](#properties) constraint accepts any value. The following example would accept `{"foo": "string", "bar": 1, "fizz": "buzz"}`.
+
+```json
+"definitions": {
+ "dictionaryDefinition": {
+ "type": "object",
+ "properties": {
+ "foo": {
+ "type": "string",
+ "minLength": 3
+ },
+ "bar": {
+ "type": "int",
+ "minValue": 0
+ }
+ },
+ "additionalProperties": true
+ }
+}
+```
+
+### discriminator
+
+The value of `discriminator` defines which schema to apply based on the value of a discriminator property. The following example would accept either `{"type": "ints", "foo": 1, "bar": 2}` or `{"type": "strings", "fizz": "buzz", "pop": "goes", "the": "weasel"}`, but reject `{"type": "ints", "fizz": "buzz"}`.
+
+```json
+"definitions": {
+ "taggedUnionDefinition": {
+ "type": "object",
+ "discriminator": {
+ "propertyName": "type",
+ "mapping": {
+ "ints": {
+ "type": "object",
+ "additionalProperties": {"type": "int"}
+ },
+ "strings": {
+ "type": "object",
+ "additionalProperties": {"type": "string"}
+ }
+ }
+ }
+ }
+}
+```
+
+## Array constraints
+
+### prefixItems
+
+The value of `prefixItems` is an array of type definitions. Each type definition in the value is the schema to be used to validate the element of an array at the same index. The following example would accept `[1, true]` but reject `[1, "string"]` or `[1]`:
+
+```json
+"definitions": {
+ "tupleDefinition": {
+ "type": "array",
+ "prefixItems": [
+ { "type": "int" },
+ { "type": "bool" }
+ ]
+ }
+},
+"parameters": {
+ "tupleParameter": {
+ "$ref": "#/definitions/tupleDefinition"
+ }
+}
+```
+
+### items
+
+The value of `items` is a type definition or a boolean. If no `items` constraint is defined, the default value is `true`.
+
+If the value is a type definition, it describes the schema that's applied to all elements of the array whose index is greater than the largest index of the [`prefixItems`](#prefixitems) constraint. The following example would accept `[1, true, 1]` or `[1, true, 1, 1]` but reject `[1, true, "foo"]`:
+
+```json
+"definitions": {
+ "tupleDefinition": {
+ "type": "array",
+ "prefixItems": [
+ { "type": "int" },
+ { "type": "bool" }
+ ],
+ "items": { "type": "int" }
+ }
+},
+"parameters": {
+ "tupleParameter": {
+ "$ref": "#/definitions/tupleDefinition"
+ }
+}
+```
+
+You can use `items` without using `prefixItems`. The following example would accept `[1, 2]` or `[1]` but reject `["foo"]`:
+
+```json
+"definitions": {
+ "intArrayDefinition": {
+ "type": "array",
+ "items": { "type": "int" }
+ }
+},
+"parameters": {
+ "intArrayParameter": {
+ "$ref": "#/definitions/intArrayDefinition"
+ }
+}
+```
+
+If the value is `false`, the validated array must be the exact same length as the [`prefixItems`](#prefixitems) constraint. The following example would accept `[1, true]`, but reject `[1, true, 1]`, and `[1, true, false, "foo", "bar"]`.
+
+```json
+"definitions": {
+ "tupleDefinition": {
+ "type": "array",
+ "prefixItems": [
+ {"type": "int"},
+ {"type": "bool"}
+ ]
+ },
+ "items": false
+}
+```
+
+If the value is `true`, elements of the array whose index is greater than the largest index of the [`prefixItems`](#prefixitems) constraint accept any value. The following examples would accept `[1, true]`, `[1, true, 1]`, and `[1, true, false, "foo", "bar"]`.
+
+```json
+"definitions": {
+ "tupleDefinition": {
+ "type": "array",
+ "prefixItems": [
+ {"type": "int"},
+ {"type": "bool"}
+ ]
+ }
+}
+```
+
+```json
+"definitions": {
+ "tupleDefinition": {
+ "type": "array",
+ "prefixItems": [
+ {"type": "int"},
+ {"type": "bool"}
+ ]
+ },
+ "items": true
+}
+```
+
+## nullable constraint
+
+The nullable constraint indicates that the value may be `null` or omitted. See [Properties](#properties) for an example.
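As a minimal sketch (the definition and parameter names are illustrative), marking a type definition as nullable lets a parameter that references it be omitted at deployment time:

```json
"definitions": {
  "optionalLabel": {
    "type": "string",
    "nullable": true
  }
},
"parameters": {
  "label": {
    "$ref": "#/definitions/optionalLabel"
  }
}
```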
++
+## Description
+
+You can add a description to a type definition to help users of your template understand the value to provide.
+
+```json
+"definitions": {
+ "virtualMachineSize": {
+ "type": "string",
+ "metadata": {
+ "description": "Must be at least Standard_A3 to support 2 NICs."
+ },
+ "defaultValue": "Standard_DS1_v2"
+ }
+}
+```
+
+## Use definition
+
+To reference a type definition, use the following syntax:
+
+```json
+"$ref": "#/definitions/<definition-name>"
+```
+
+The following example shows how to reference a type definition from parameters and outputs:
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "languageVersion": "2.0",
+
+ "definitions": {
+ "naturalNumber": {
+ "type": "int",
+ "minValue": 1
+ }
+ },
+ "parameters": {
+ "numberParam": {
+ "$ref": "#/definitions/naturalNumber",
+      "defaultValue": 1
+ }
+ },
+ "resources": {},
+ "outputs": {
+ "output1": {
+ "$ref": "#/definitions/naturalNumber",
+ "value": "[parameters('numberParam')]"
+ }
+ }
+}
+```
+
+## Next steps
+
+* To learn about the available properties for type definitions, see [Understand the structure and syntax of ARM templates](./syntax.md).
azure-resource-manager Deploy What If https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/deploy-what-if.md
Title: Template deployment what-if description: Determine what changes will happen to your resources before deploying an Azure Resource Manager template. Previously updated : 02/15/2023 Last updated : 09/06/2023 ms.devlang: azurecli
For REST API, use:
## Change types
-The what-if operation lists six different types of changes:
+The what-if operation lists seven different types of changes:
- **Create**: The resource doesn't currently exist but is defined in the template. The resource will be created.- - **Delete**: This change type only applies when using [complete mode](deployment-modes.md) for deployment. The resource exists, but isn't defined in the template. With complete mode, the resource will be deleted. Only resources that [support complete mode deletion](./deployment-complete-mode-deletion.md) are included in this change type.- - **Ignore**: The resource exists, but isn't defined in the template. The resource won't be deployed or modified. When you reach the limits for expanding nested templates, you will encounter this change type. See [What-if limits](#what-if-limits).- - **NoChange**: The resource exists, and is defined in the template. The resource will be redeployed, but the properties of the resource won't change. This change type is returned when [ResultFormat](#result-format) is set to `FullResourcePayloads`, which is the default value.-
+- **NoEffect**: The property is read-only and will be ignored by the service. For example, the `sku.tier` property is always set to match `sku.name` in the [`Microsoft.ServiceBus`](/azure/templates/microsoft.servicebus/namespaces) namespace.
- **Modify**: The resource exists, and is defined in the template. The resource will be redeployed, and the properties of the resource will change. This change type is returned when [ResultFormat](#result-format) is set to `FullResourcePayloads`, which is the default value.- - **Deploy**: The resource exists, and is defined in the template. The resource will be redeployed. The properties of the resource may or may not change. The operation returns this change type when it doesn't have enough information to determine if any properties will change. You only see this condition when [ResultFormat](#result-format) is set to `ResourceIdOnly`. ## Result format
The following results show the two different output formats:
] ~ properties.subnets: [ - 0:- name: "subnet001" properties.addressPrefix: "10.0.0.0/24"
azure-resource-manager Linked Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/linked-templates.md
Title: Link templates for deployment description: Describes how to use linked templates in an Azure Resource Manager template (ARM template) to create a modular template solution. Shows how to pass parameters values, specify a parameter file, and dynamically created URLs. Previously updated : 04/26/2023 Last updated : 08/22/2023
To nest a template, add a [deployments resource](/azure/templates/microsoft.reso
"resources": [ { "type": "Microsoft.Resources/deployments",
- "apiVersion": "2021-04-01",
+ "apiVersion": "2022-09-01",
"name": "nestedTemplate1", "properties": { "mode": "Incremental",
To nest a template, add a [deployments resource](/azure/templates/microsoft.reso
} } }
- ],
- "outputs": {
- }
+ ]
} ```
The following example deploys a storage account through a nested template.
"contentVersion": "1.0.0.0", "parameters": { "storageAccountName": {
- "type": "string"
+ "type": "string",
+ "defaultValue": "[format('{0}{1}', 'store', uniqueString(resourceGroup().id))]"
+ },
+ "location": {
+ "type": "string",
+ "defaultValue": "[resourceGroup().location]"
} }, "resources": [ { "type": "Microsoft.Resources/deployments",
- "apiVersion": "2021-04-01",
+ "apiVersion": "2022-09-01",
"name": "nestedTemplate1", "properties": { "mode": "Incremental",
The following example deploys a storage account through a nested template.
"resources": [ { "type": "Microsoft.Storage/storageAccounts",
- "apiVersion": "2021-04-01",
+ "apiVersion": "2022-09-01",
"name": "[parameters('storageAccountName')]",
- "location": "West US",
+ "location": "[parameters('location')]",
+ "sku": {
+ "name": "Standard_LRS"
+ },
+ "kind": "StorageV2"
+ }
+ ]
+ }
+ }
+ }
+ ]
+}
+```
+
+[Nested resources](./child-resource-name-type.md#within-parent-resource) can't be used in a [symbolic name](./resource-declaration.md#use-symbolic-name) template. In the following template, the nested storage account resource cannot use symbolic name:
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "languageVersion": "2.0",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "storageAccountName": {
+ "type": "string",
+ "defaultValue": "[format('{0}{1}', 'storage', uniqueString(resourceGroup().id))]"
+
+ },
+ "location": {
+ "type": "string",
+ "defaultValue": "[resourceGroup().location]"
+ }
+ },
+ "resources": {
+ "mainStorage": {
+ "type": "Microsoft.Storage/storageAccounts",
+ "apiVersion": "2022-09-01",
+ "name": "[parameters('storageAccountName')]",
+ "location": "[parameters('location')]",
+ "sku": {
+ "name": "Standard_LRS"
+ },
+ "kind": "StorageV2"
+ },
+ "nestedResource": {
+ "type": "Microsoft.Resources/deployments",
+ "apiVersion": "2022-09-01",
+ "name": "nestedTemplate1",
+ "properties": {
+ "mode": "Incremental",
+ "template": {
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "resources": [
+ {
+ "type": "Microsoft.Storage/storageAccounts",
+ "apiVersion": "2022-09-01",
+ "name": "[format('{0}nested', parameters('storageAccountName'))]",
+ "location": "[parameters('location')]",
"sku": { "name": "Standard_LRS" },
The following example deploys a storage account through a nested template.
} } }
- ],
- "outputs": {
} } ```
When using a nested template, you can specify whether template expressions are e
You set the scope through the `expressionEvaluationOptions` property. By default, the `expressionEvaluationOptions` property is set to `outer`, which means it uses the parent template scope. Set the value to `inner` to cause expressions to be evaluated within the scope of the nested template.
+> [!IMPORTANT]
+> For [`languageVersion 2.0`](./syntax.md#languageversion-20), the default value for the `expressionEvaluationOptions` property is `inner`. The value `outer` is blocked.
+ ```json { "type": "Microsoft.Resources/deployments",
- "apiVersion": "2021-04-01",
+ "apiVersion": "2022-09-01",
"name": "nestedTemplate1", "properties": { "expressionEvaluationOptions": {
The following template demonstrates how template expressions are resolved accord
"resources": [ { "type": "Microsoft.Resources/deployments",
- "apiVersion": "2021-04-01",
+ "apiVersion": "2022-09-01",
"name": "nestedTemplate1", "properties": { "expressionEvaluationOptions": {
The following example deploys a SQL server and retrieves a key vault secret to u
"resources": [ { "type": "Microsoft.Resources/deployments",
- "apiVersion": "2021-04-01",
+ "apiVersion": "2022-09-01",
"name": "dynamicSecret", "properties": { "mode": "Incremental",
The following example deploys a SQL server and retrieves a key vault secret to u
} }, "variables": {
- "sqlServerName": "[concat('sql-', uniqueString(resourceGroup().id, 'sql'))]"
+    "sqlServerName": "[format('sql-{0}', uniqueString(resourceGroup().id, 'sql'))]"
}, "resources": [ { "type": "Microsoft.Sql/servers",
- "apiVersion": "2021-02-01-preview",
+ "apiVersion": "2022-05-01-preview",
"name": "[variables('sqlServerName')]", "location": "[parameters('location')]", "properties": {
The following excerpt shows which values are secure and which aren't secure.
"resources": [ { "type": "Microsoft.Compute/virtualMachines",
- "apiVersion": "2021-04-01",
+ "apiVersion": "2023-03-01",
"name": "mainTemplate", "properties": { ...
The following excerpt shows which values are secure and which aren't secure.
{ "name": "outer", "type": "Microsoft.Resources/deployments",
- "apiVersion": "2021-04-01",
+ "apiVersion": "2022-09-01",
"properties": { "expressionEvaluationOptions": { "scope": "outer"
The following excerpt shows which values are secure and which aren't secure.
"resources": [ { "type": "Microsoft.Compute/virtualMachines",
- "apiVersion": "2021-04-01",
+ "apiVersion": "2023-03-01",
"name": "outer", "properties": { ...
The following excerpt shows which values are secure and which aren't secure.
{ "name": "inner", "type": "Microsoft.Resources/deployments",
- "apiVersion": "2021-04-01",
+ "apiVersion": "2022-09-01",
"properties": { "expressionEvaluationOptions": { "scope": "inner"
The following excerpt shows which values are secure and which aren't secure.
"resources": [ { "type": "Microsoft.Compute/virtualMachines",
- "apiVersion": "2021-04-01",
+ "apiVersion": "2023-03-01",
"name": "inner", "properties": { ...
To link a template, add a [deployments resource](/azure/templates/microsoft.reso
"resources": [ { "type": "Microsoft.Resources/deployments",
- "apiVersion": "2021-04-01",
+ "apiVersion": "2022-09-01",
"name": "linkedTemplate", "properties": { "mode": "Incremental",
When referencing a linked template, the value of `uri` can't be a local file or
You may reference templates using parameters that include HTTP or HTTPS. For example, a common pattern is to use the `_artifactsLocation` parameter. You can set the linked template with an expression like: ```json
-"uri": "[concat(parameters('_artifactsLocation'), '/shared/os-disk-parts-md.json', parameters('_artifactsLocationSasToken'))]"
+"uri": "[format('{0}/shared/os-disk-parts-md.json{1}', parameters('_artifactsLocation'), parameters('_artifactsLocationSasToken'))]"
``` If you're linking to a template in GitHub, use the raw URL. The link has the format: `https://raw.githubusercontent.com/Azure/azure-docs-json-samples/master/get-started-with-templates/quickstart-template/azuredeploy.json`. To get the raw link, select **Raw**.
If you're linking to a template in GitHub, use the raw URL. The link has the for
[!INCLUDE [Deploy templates in private GitHub repo](../../../includes/resource-manager-private-github-repo-templates.md)]
+For linked templates, symbolic-name and non-symbolic-name deployments can be nested in any combination: a [symbolic-name template](./resource-declaration.md#use-symbolic-name) can link to a non-symbolic-name template and vice versa, and templates of either kind can link to templates of the same kind. See the sketch below.
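As a minimal sketch, here's a symbolic-name (languageVersion 2.0) template that links to an ordinary, non-symbolic-name template; the template URI is a placeholder:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "languageVersion": "2.0",
  "contentVersion": "1.0.0.0",
  "resources": {
    "linkedDeployment": {
      "type": "Microsoft.Resources/deployments",
      "apiVersion": "2022-09-01",
      "name": "linkedTemplate",
      "properties": {
        "mode": "Incremental",
        "templateLink": {
          // Placeholder URI; point this at a reachable template
          "uri": "https://mystorageaccount.blob.core.windows.net/templates/storage.json",
          "contentVersion": "1.0.0.0"
        }
      }
    }
  }
}
```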
+ ### Parameters for linked template You can provide the parameters for your linked template either in an external file or inline. When providing an external parameter file, use the `parametersLink` property:
You can provide the parameters for your linked template either in an external fi
"resources": [ { "type": "Microsoft.Resources/deployments",
- "apiVersion": "2021-04-01",
+ "apiVersion": "2022-09-01",
"name": "linkedTemplate", "properties": { "mode": "Incremental",
To pass parameter values inline, use the `parameters` property.
"resources": [ { "type": "Microsoft.Resources/deployments",
- "apiVersion": "2021-04-01",
+ "apiVersion": "2022-09-01",
"name": "linkedTemplate", "properties": { "mode": "Incremental",
The following template shows how *mainTemplate.json* deploys *nestedChild.json*
"resources": [ { "type": "Microsoft.Resources/deployments",
- "apiVersion": "2021-04-01",
+ "apiVersion": "2022-09-01",
"name": "childLinked", "properties": { "mode": "Incremental",
The following example template shows how to use `copy` with a nested template.
"resources": [ { "type": "Microsoft.Resources/deployments",
- "apiVersion": "2021-04-01",
- "name": "[concat('nestedTemplate', copyIndex())]",
+ "apiVersion": "2022-09-01",
+ "name": "[format('nestedTemplate{0}', copyIndex())]",
// yes, copy works here "copy": { "name": "storagecopy",
The following example template shows how to use `copy` with a nested template.
"resources": [ { "type": "Microsoft.Storage/storageAccounts",
- "apiVersion": "2021-04-01",
- "name": "[concat(variables('storageName'), copyIndex())]",
+ "apiVersion": "2022-09-01",
+ "name": "[format('{0}{1}', variables('storageName'), copyIndex())]",
"location": "West US", "sku": { "name": "Standard_LRS"
You can use these separate entries in the history to retrieve output values afte
"parameters": { "publicIPAddresses_name": { "type": "string"
+ },
+ "location": {
+ "type": "string",
+ "defaultValue": "[resourceGroup().location]"
} }, "variables": {}, "resources": [ { "type": "Microsoft.Network/publicIPAddresses",
- "apiVersion": "2021-02-01",
+ "apiVersion": "2023-04-01",
"name": "[parameters('publicIPAddresses_name')]",
- "location": "southcentralus",
+ "location": "[parameters('location')]",
"properties": { "publicIPAddressVersion": "IPv4", "publicIPAllocationMethod": "Static", "idleTimeoutInMinutes": 4, "dnsSettings": {
- "domainNameLabel": "[concat(parameters('publicIPAddresses_name'), uniqueString(resourceGroup().id))]"
+ "domainNameLabel": "[format('{0}{1}', parameters('publicIPAddresses_name'), uniqueString(resourceGroup().id))]"
} }, "dependsOn": []
The following template links to the preceding template. It creates three public
"resources": [ { "type": "Microsoft.Resources/deployments",
- "apiVersion": "2021-04-01",
- "name": "[concat('linkedTemplate', copyIndex())]",
+ "apiVersion": "2022-09-01",
+ "name": "[format('linkedTemplate{0}', copyIndex())]",
"copy": { "count": 3, "name": "ip-loop"
The following template links to the preceding template. It creates three public
"contentVersion": "1.0.0.0" }, "parameters":{
- "publicIPAddresses_name":{"value": "[concat('myip-', copyIndex())]"}
+ "publicIPAddresses_name":{"value": "[format('myip-{0}', copyIndex())]"}
} } }
The following example shows how to pass a SAS token when linking to a template:
"resources": [ { "type": "Microsoft.Resources/deployments",
- "apiVersion": "2021-04-01",
+ "apiVersion": "2022-09-01",
"name": "linkedTemplate", "properties": { "mode": "Incremental", "templateLink": {
- "uri": "[concat(uri(deployment().properties.templateLink.uri, 'helloworld.json'), parameters('containerSasToken'))]",
+ "uri": "[format('{0}{1}', uri(deployment().properties.templateLink.uri, 'helloworld.json'), parameters('containerSasToken'))]",
"contentVersion": "1.0.0.0" } }
azure-resource-manager Parameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/parameters.md
Title: Parameters in templates
description: Describes how to define parameters in an Azure Resource Manager template (ARM template). Previously updated : 09/28/2022 Last updated : 08/22/2023 # Parameters in ARM templates
Resource Manager resolves parameter values before starting the deployment operat
Each parameter must be set to one of the [data types](data-types.md).
+In addition to `minValue`, `maxValue`, `minLength`, `maxLength`, and `allowedValues`, [languageVersion 2.0](./syntax.md#languageversion-20) introduces aggregate type validation constraints for use in [definitions](./syntax.md#definitions), [parameters](./syntax.md#parameters), and [outputs](./syntax.md#outputs). These constraints include:
+
+- [additionalProperties](#additionalproperties)
+- [discriminator](#discriminator)
+- [items](#items)
+- [nullable](#nullable-constraint)
+- [prefixItems](#prefixitems)
+- [properties](#properties)
++ > [!TIP] > We recommend [Bicep](../bicep/overview.md) because it offers the same capabilities as ARM templates and the syntax is easier to use. To learn more, see [parameters](../bicep/parameters.md).
You can use another parameter value to build a default value. The following temp
## Length constraints
-You can specify minimum and maximum lengths for string and array parameters. You can set one or both constraints. For strings, the length indicates the number of characters. For arrays, the length indicates the number of items in the array.
+You can specify minimum and maximum lengths for string and array parameters. You can set one or both constraints. For strings, the length indicates the number of characters. For arrays, the length indicates the number of items in the array.
The following example declares two parameters. One parameter is for a storage account name that must have 3-24 characters. The other parameter is an array that must have from 1-5 items.
You can set minimum and maximum values for integer parameters. You can set one o
} ```
+## Object constraints
+
+The object constraints are only allowed on [objects](./data-types.md#objects), and can only be used with [languageVersion 2.0](./syntax.md#languageversion-20).
+
+### Properties
+
+The value of `properties` is a map of property name => [type definition](./definitions.md).
+
+The following example would accept `{"foo": "string", "bar": 1}`, but reject `{"foo": "string", "bar": -1}`, `{"foo": "", "bar": 1}`, or any object without a `foo` or `bar` property.
+
+```json
+"parameters": {
+ "objectParameter": {
+ "type": "object",
+ "properties": {
+ "foo": {
+ "type": "string",
+ "minLength": 3
+ },
+ "bar": {
+ "type": "int",
+ "minValue": 0
+ }
+ }
+ }
+}
+```
+
+All properties are required unless the property's [type definition](./definitions.md) has the ["nullable": true](#nullable-constraint) constraint. To make both properties in the preceding example optional, it would look like:
+
+```json
+"parameters": {
+ "objectParameter": {
+ "type": "object",
+ "properties": {
+ "foo": {
+ "type": "string",
+ "minLength": 3,
+ "nullable": true
+ },
+ "bar": {
+ "type": "int",
+ "minValue": 0,
+ "nullable": true
+ }
+ }
+ }
+}
+```
+
+### additionalProperties
+
+The value of `additionalProperties` is a [type definition](./definitions.md) or a boolean value. If no `additionalProperties` constraint is defined, the default value is `true`.
+
+If the value is a type definition, it describes the schema that's applied to all properties not mentioned in the [`properties`](#properties) constraint. The following example would accept `{"fizz": "buzz", "foo": "bar"}` but reject `{"property": 1}`.
+
+```json
+"parameters": {
+ "dictionaryParameter": {
+ "type": "object",
+ "properties": {
+ "foo": {
+ "type": "string",
+ "minLength": 3,
+ "nullable": true
+ },
+ "bar": {
+ "type": "int",
+ "minValue": 0,
+ "nullable": true
+ }
+ },
+ "additionalProperties": {
+ "type": "string"
+ }
+ }
+}
+```
+
+If the value is `false`, no properties beyond those defined in the [`properties`](#properties) constraint may be supplied. The following example would accept `{"foo": "string", "bar": 1}`, but reject `{"foo": "string", "bar": 1, "fizz": "buzz"}`.
+
+```json
+"parameters": {
+ "dictionaryParameter": {
+ "type": "object",
+ "properties": {
+ "foo": {
+ "type": "string",
+ "minLength": 3
+ },
+ "bar": {
+ "type": "int",
+ "minValue": 0
+ }
+ },
+ "additionalProperties": false
+ }
+}
+```
+
+If the value is `true`, any property not defined in the [`properties`](#properties) constraint accepts any value. The following example would accept `{"foo": "string", "bar": 1, "fizz": "buzz"}`.
+
+```json
+"parameters": {
+ "dictionaryParameter": {
+ "type": "object",
+ "properties": {
+ "foo": {
+ "type": "string",
+ "minLength": 3
+ },
+ "bar": {
+ "type": "int",
+ "minValue": 0
+ }
+ },
+ "additionalProperties": true
+ }
+}
+```
+
+### discriminator
+
+The value of `discriminator` defines which schema to apply based on the value of a discriminator property. The following example would accept either `{"type": "ints", "foo": 1, "bar": 2}` or `{"type": "strings", "fizz": "buzz", "pop": "goes", "the": "weasel"}`, but reject `{"type": "ints", "fizz": "buzz"}`.
+
+```json
+"parameters": {
+ "taggedUnionParameter": {
+ "type": "object",
+ "discriminator": {
+ "propertyName": "type",
+ "mapping": {
+ "ints": {
+ "type": "object",
+ "additionalProperties": {"type": "int"}
+ },
+ "strings": {
+ "type": "object",
+ "additionalProperties": {"type": "string"}
+ }
+ }
+ }
+ }
+}
+```
+
+## Array constraints
+
+The array constraints are only allowed on [arrays](./data-types.md#arrays), and can only be used with [languageVersion 2.0](./syntax.md#languageversion-20).
+
+### prefixItems
+
+The value of `prefixItems` is an array of [type definitions](./definitions.md). Each type definition in the value is the schema to be used to validate the element of an array at the same index. The following example would accept `[1, true]` but reject `[1, "string"]` or `[1]`:
+
+```json
+"parameters": {
+ "tupleParameter": {
+ "type": "array",
+ "prefixItems": [
+ {"type": "int"},
+ {"type": "bool"}
+ ]
+ }
+}
+```
+
+### items
+
+The value of `items` is a [type definition](./definitions.md) or a boolean. If no `items` constraint is defined, the default value is `true`.
+
+If the value is a type definition, it describes the schema that's applied to all elements of the array whose index is greater than the largest index of the [`prefixItems`](#prefixitems) constraint. The following example would accept `[1, true, 1]` or `[1, true, 1, 1]` but reject `[1, true, "foo"]`:
+
+```json
+"parameters": {
+ "tupleParameter": {
+ "type": "array",
+ "prefixItems": [
+ { "type": "int" },
+ { "type": "bool" }
+ ],
+ "items": { "type": "int" },
+    "defaultValue": [1, true, 1]
+ }
+}
+```
+
+You can use `items` without using `prefixItems`. The following example would accept `[1, 2]` or `[1]` but reject `["foo"]`:
+
+```json
+"parameters": {
+ "intArrayParameter": {
+ "type": "array",
+ "items": {"type": "int"}
+ }
+}
+```
+
+If the value is `false`, the validated array must be the exact same length as the [`prefixItems`](#prefixitems) constraint. The following example would accept `[1, true]`, but reject `[1, true, 1]`, and `[1, true, false, "foo", "bar"]`.
+
+```json
+"parameters": {
+ "tupleParameter": {
+ "type": "array",
+ "prefixItems": [
+ {"type": "int"},
+ {"type": "bool"}
+ ],
+ "items": false
+ }
+}
+```
+
+If the value is `true`, elements of the array whose index is greater than the largest index of the [`prefixItems`](#prefixitems) constraint accept any value. The following examples would accept `[1, true]`, `[1, true, 1]`, and `[1, true, false, "foo", "bar"]`.
+
+```json
+"parameters": {
+ "tupleParameter": {
+ "type": "array",
+ "prefixItems": [
+ {"type": "int"},
+ {"type": "bool"}
+ ]
+ }
+}
+```
+
+```json
+"parameters": {
+ "tupleParameter": {
+ "type": "array",
+ "prefixItems": [
+ {"type": "int"},
+ {"type": "bool"}
+ ]
+ },
+ "items": true
+}
+```
++
+## nullable constraint
+
+The nullable constraint can only be used with [languageVersion 2.0](./syntax.md#languageversion-20). It indicates that the value may be `null` or omitted. See [Properties](#properties) for an example.
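As a minimal sketch (the parameter name is illustrative), a parameter marked nullable can be omitted at deployment time:

```json
"parameters": {
  "environmentTag": {
    "type": "string",
    "nullable": true
  }
}
```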
+ ## Description You can add a description to a parameter to help users of your template understand the value to provide. When deploying the template through the portal, the text you provide in the description is automatically used as a tip for that parameter. Only add a description when the text provides more information than can be inferred from the parameter name.
azure-resource-manager Quickstart Create Templates Use Visual Studio Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/quickstart-create-templates-use-visual-studio-code.md
If you don't have an Azure subscription, [create a free account](https://azure.m
> [!TIP] > We recommend [Bicep](../bicep/overview.md) because it offers the same capabilities as ARM templates and the syntax is easier to use. To learn more, see [Quickstart: Create Bicep files with Visual Studio Code](../bicep/quickstart-create-bicep-use-visual-studio-code.md). + ## Create an ARM template Create and open a new file named *azuredeploy.json* in Visual Studio Code. Enter `arm` into the code editor, which initiates Azure Resource Manager snippets for scaffolding out an ARM template.
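Selecting the full template snippet scaffolds a skeleton similar to the following sketch (the exact output can vary with the extension version):

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {},
  "functions": [],
  "variables": {},
  "resources": [],
  "outputs": {}
}
```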
azure-resource-manager Resource Declaration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/resource-declaration.md
Last updated 09/28/2022
To deploy a resource through an Azure Resource Manager template (ARM template), you add a resource declaration. Use the `resources` array in a JSON template.
+[languageVersion 2.0](./syntax.md#languageversion-20) introduces several enhancements to ARM JSON templates, such as changing the resources declaration from an array to an object. Most of the samples shown in this article still use the `resources` array. For languageVersion 2.0 specific information, see [Use symbolic name](#use-symbolic-name).
++++++ > [!TIP] > We recommend [Bicep](../bicep/overview.md) because it offers the same capabilities as ARM templates and the syntax is easier to use. To learn more, see [resource declaration](../bicep/resource-declaration.md).
Use intellisense or [template reference](/azure/templates/) to determine which p
} ```
+## Use symbolic name
+
+In [Bicep](../bicep/overview.md), each resource definition has a symbolic name. The symbolic name is used to reference the resource from the other parts of your Bicep file. To support symbolic name in ARM JSON templates, add `languageVersion` with the version `2.0`, and change the resource definition from an array to an object. When `languageVersion` is specified for a template, symbolic name must be specified for root level resources. For example:
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "resources": [
+ {
+ "type": "Microsoft.ContainerService/managedClusters",
+ ...
+ }
+ ]
+}
+```
+
+The preceding JSON can be written into the following JSON:
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "languageVersion": "2.0",
+ "contentVersion": "1.0.0.0",
+ "resources": {
+ "aks": {
+ "type": "Microsoft.ContainerService/managedClusters",
+ ...
+ }
+ }
+}
+```
+
+Symbolic names are case-sensitive. The allowed characters for symbolic names are letters, numbers, and `_`. Symbolic names must be unique within a template, but can overlap with variable, parameter, and output names. In the following example, the symbolic name of the storage account resource is the same as the output name.
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "languageVersion": "2.0",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "storageAccountName": {
+ "type": "string",
+ "defaultValue": "[format('storage{0}', uniqueString(resourceGroup().id))]"
+ },
+ "location": {
+ "type": "string",
+ "defaultValue": "[resourceGroup().location]"
+ }
+ },
+ "resources": {
+ "myStorage": {
+ "type": "Microsoft.Storage/storageAccounts",
+ "apiVersion": "2022-09-01",
+ "name": "[parameters('storageAccountName')]",
+ "location": "[parameters('location')]",
+ "sku": {
+ "name": "Standard_LRS"
+ },
+ "kind": "Storage",
+ "properties": {}
+ }
+ },
+ "outputs": {
+ "myStorage":{
+ "type": "object",
+ "value": "[reference('myStorage')]"
+ }
+ }
+}
+```
+
+The [reference](./template-functions-resource.md#reference) function can use a resource's symbolic name, as shown in the preceding example. In a symbolic-name template, the reference function can no longer take a resource name; for example, `reference(parameters('storageAccountName'))` isn't allowed.
+
+If the [deployments resource](/azure/templates/microsoft.resources/deployments?tabs=json) is used in a symbolic-name deployment, use apiVersion `2020-09-01` or later.
+
+### Declare existing resources
+
+With [`languageVersion 2.0`](./syntax.md#languageversion-20) and using symbolic name for resource declaration, you can declare existing resources. A top-level resource property of `"existing": true` causes ARM to read rather than deploy a resource as shown in the following example:
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "languageVersion": "2.0",
+
+ "resources": {
+ "storageAccount": {
+ "type": "Microsoft.Storage/storageAccounts",
+ "apiVersion": "2022-09-01",
+ "name": "storageacct",
+ "existing": true
+ }
+ },
+ "outputs": {
+ "saBlocksPlaintext": {
+ "type": "bool",
+      "value": "[reference('storageAccount').supportsHttpsTrafficOnly]"
+ }
+ }
+}
+```
+
+Existing resources don't need to define any properties other than `type`, `apiVersion`, and `name`.
+ ## Next steps * To conditionally deploy a resource, see [Conditional deployment in ARM templates](conditional-resource-deployment.md).
azure-resource-manager Resource Dependency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/resource-dependency.md
Title: Set deployment order for resources description: Describes how to set one Azure resource as dependent on another resource during deployment. The dependencies ensure resources are deployed in the correct order. Previously updated : 05/22/2023 Last updated : 08/22/2023 # Define the order for deploying resources in ARM templates
The following example shows a network interface that depends on a virtual networ
} ```
+With [languageVersion 2.0](./syntax.md#languageversion-20), use the resource's symbolic name in `dependsOn` arrays. For example:
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "languageVersion": "2.0",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "location": {
+ "type": "string",
+ "defaultValue": "[resourceGroup().location]"
+ }
+ },
+ "resources": {
+ "myStorage": {
+ "type": "Microsoft.Storage/storageAccounts",
+ "apiVersion": "2023-01-01",
+ "name": "[format('storage{0}', uniqueString(resourceGroup().id))]",
+ "location": "[parameters('location')]",
+ "sku": {
+ "name": "Standard_LRS"
+ },
+ "kind": "StorageV2"
+ },
+ "myVm": {
+ "type": "Microsoft.Compute/virtualMachines",
+ "apiVersion": "2023-03-01",
+ "name": "[format('vm{0}', uniqueString(resourceGroup().id))]",
+ "location": "[parameters('location')]",
+ "dependsOn": [
+ "myStorage"
+ ],
+ ...
+ }
+ }
+}
+```
+ While you may be inclined to use `dependsOn` to map relationships between your resources, it's important to understand why you're doing it. For example, to document how resources are interconnected, `dependsOn` isn't the right approach. After deployment, the resource doesn't retain deployment dependencies in its properties, so there are no commands or operations that let you see dependencies. Setting unnecessary dependencies slows deployment time because Resource Manager can't deploy those resources in parallel. ## Child resources
In the following example, a CDN endpoint explicitly depends on the CDN profile,
```json {
- "name": "[variables('endpointName')]",
- "apiVersion": "2021-06-01",
"type": "endpoints",
+ "apiVersion": "2021-06-01",
+ "name": "[variables('endpointName')]",
"location": "[resourceGroup().location]", "dependsOn": [ "[variables('profileName')]"
The following example shows how to deploy three storage accounts before deployin
{ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#", "contentVersion": "1.0.0.0",
- "parameters": {},
+ "parameters": {
+ "location": {
+ "type": "string",
+ "defaultValue": "[resourceGroup().location]"
+ }
+ },
"resources": [ { "type": "Microsoft.Storage/storageAccounts",
- "apiVersion": "2012-06-01",
+ "apiVersion": "2022-09-01",
"name": "[format('{0}storage{1}, copyIndex(), uniqueString(resourceGroup().id))]",
- "location": "[resourceGroup().location]",
+ "location": "[parameters('location')]",
"sku": { "name": "Standard_LRS" },
The following example shows how to deploy three storage accounts before deployin
"dependsOn": ["storagecopy"], ... }
- ],
- "outputs": {}
+ ]
+}
+```
+
+[Symbolic names](./resource-declaration.md#use-symbolic-name) can be used in `dependsOn` arrays. If a symbolic name is for a copy loop, all resources in the loop are added as dependencies. The preceding sample can be written as the following JSON. In the sample, **myVM** depends on all of the storage accounts in the **myStorages** loop.
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "languageVersion": "2.0",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "location": {
+ "type": "string",
+ "defaultValue": "[resourceGroup().location]"
+ }
+ },
+ "resources": {
+ "myStorages": {
+ "type": "Microsoft.Storage/storageAccounts",
+ "apiVersion": "2022-09-01",
+ "name": "[format('{0}storage{1}, copyIndex(), uniqueString(resourceGroup().id))]",
+ "location": "[parameters('location')]",
+ "sku": {
+ "name": "Standard_LRS"
+ },
+ "kind": "Storage",
+ "copy": {
+ "name": "storagecopy",
+ "count": 3
+ },
+ "properties": {}
+ },
+ "myVM": {
+ "type": "Microsoft.Compute/virtualMachines",
+ "apiVersion": "2022-11-01",
+ "name": "[format('VM{0}', uniqueString(resourceGroup().id))]",
+ "dependsOn": ["myStorages"],
+ ...
+ }
+ }
} ```
azure-resource-manager Syntax https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/syntax.md
Title: Template structure and syntax
description: Describes the structure and properties of Azure Resource Manager templates (ARM templates) using declarative JSON syntax. Previously updated : 05/01/2023 Last updated : 08/22/2023 # Understand the structure and syntax of ARM templates
In its simplest structure, a template has the following elements:
```json { "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "languageVersion": "",
"contentVersion": "", "apiProfile": "",
- "parameters": { },
- "variables": { },
- "functions": [ ],
- "resources": [ ],
- "outputs": { }
+ "definitions": { },
+ "parameters": { },
+ "variables": { },
+ "functions": [ ],
+ "resources": [ ], /* or "resources": { } with languageVersion 2.0 */
+ "outputs": { }
} ``` | Element name | Required | Description | |: |: |: | | $schema |Yes |Location of the JavaScript Object Notation (JSON) schema file that describes the version of the template language. The version number you use depends on the scope of the deployment and your JSON editor.<br><br>If you're using [Visual Studio Code with the Azure Resource Manager tools extension](quickstart-create-templates-use-visual-studio-code.md), use the latest version for resource group deployments:<br>`https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#`<br><br>Other editors (including Visual Studio) may not be able to process this schema. For those editors, use:<br>`https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#`<br><br>For subscription deployments, use:<br>`https://schema.management.azure.com/schemas/2018-05-01/subscriptionDeploymentTemplate.json#`<br><br>For management group deployments, use:<br>`https://schema.management.azure.com/schemas/2019-08-01/managementGroupDeploymentTemplate.json#`<br><br>For tenant deployments, use:<br>`https://schema.management.azure.com/schemas/2019-08-01/tenantDeploymentTemplate.json#` |
+| languageVersion |No |Language version of the template. To view the enhancements of languageVersion 2.0, see [languageVersion 2.0](#languageversion-20). |
| contentVersion |Yes |Version of the template (such as 1.0.0.0). You can provide any value for this element. Use this value to document significant changes in your template. When deploying resources using the template, this value can be used to make sure that the right template is being used. | | apiProfile |No | An API version that serves as a collection of API versions for resource types. Use this value to avoid having to specify API versions for each resource in the template. When you specify an API profile version and don't specify an API version for the resource type, Resource Manager uses the API version for that resource type that is defined in the profile.<br><br>The API profile property is especially helpful when deploying a template to different environments, such as Azure Stack and global Azure. Use the API profile version to make sure your template automatically uses versions that are supported in both environments. For a list of the current API profile versions and the resources API versions defined in the profile, see [API Profile](https://github.com/Azure/azure-rest-api-specs/tree/master/profile).<br><br>For more information, see [Track versions using API profiles](./template-cloud-consistency.md#track-versions-using-api-profiles). |
+| [definitions](#definitions) |No |Schemas that are used to validate array and object values. Definitions are only supported in [languageVersion 2.0](#languageversion-20).|
| [parameters](#parameters) |No |Values that are provided when deployment is executed to customize resource deployment. | | [variables](#variables) |No |Values that are used as JSON fragments in the template to simplify template language expressions. | | [functions](#functions) |No |User-defined functions that are available within the template. |
In its simplest structure, a template has the following elements:
Each element has properties you can set. This article describes the sections of the template in greater detail.
+## Definitions
+
+In the `definitions` section of the template, specify the schemas used for validating array and object values. `Definitions` can only be used with [languageVersion 2.0](#languageversion-20).
+
+```json
+"definitions": {
+ "<definition-name": {
+ "type": "<data-type-of-definition>",
+ "allowedValues": [ "<array-of-allowed-values>" ],
+ "minValue": <minimum-value-for-int>,
+ "maxValue": <maximum-value-for-int>,
+ "minLength": <minimum-length-for-string-or-array>,
+ "maxLength": <maximum-length-for-string-or-array>,
+ "prefixItems": <schema-for-validating-array>,
+ "items": <schema-for-validating-array-or-boolean>,
+ "properties": <schema-for-validating-object>,
+ "additionalProperties": <schema-for-validating-object-or-boolean>,
+ "discriminator": <schema-to-apply>,
+ "nullable": <boolean>,
+ "metadata": {
+ "description": "<description-of-the-type-definition>"
+ }
+ }
+}
+```
+
+| Element name | Required | Description |
+|: |: |: |
+| definition-name |Yes |Name of the type definition. Must be a valid JavaScript identifier. |
+| type |Yes |Type of the type definition. The allowed types and values are **string**, **securestring**, **int**, **bool**, **object**, **secureObject**, and **array**. See [Data types in ARM templates](data-types.md). |
+| allowedValues |No |Array of allowed values for the type definition to make sure that the right value is provided. |
+| minValue |No |The minimum value for int type definitions; this value is inclusive. |
+| maxValue |No |The maximum value for int type definitions; this value is inclusive. |
+| minLength |No |The minimum length for string, secure string, and array type definitions; this value is inclusive. |
+| maxLength |No |The maximum length for string, secure string, and array type definitions; this value is inclusive. |
+| prefixItems |No |The schema for validating the element of an array at the same index. |
+| items |No |The schema that is applied to all elements of the array whose index is greater than the largest index of the `prefixItems` constraint, or boolean for controlling the elements of the array whose index is greater than the largest index of the `prefixItems` constraint. |
+| properties |No |The schema for validating object. |
+| additionalProperties |No |The schema that is applied to all properties not mentioned in the `properties` constraint, or boolean for accepting any property not defined in the `properties` constraint. |
+| discriminator |No |The schema to apply based on a discriminator property.|
+| nullable|No |A boolean indicating that the value may be null or omitted. |
+| description |No |Description of the type definition that is displayed to users through the portal. For more information, see [Comments in templates](#comments). |
+
+For examples of how to use type definitions, see [Type definitions in ARM templates](./definitions.md).
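+
+As a minimal sketch (the `storageSku` definition name and its values are illustrative), a type definition can be referenced from a parameter with `$ref`:
+
+```json
+{
+  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+  "languageVersion": "2.0",
+  "contentVersion": "1.0.0.0",
+  "definitions": {
+    "storageSku": {
+      "type": "string",
+      "allowedValues": [ "Standard_LRS", "Standard_GRS" ],
+      "metadata": {
+        "description": "Allowed storage SKU names."
+      }
+    }
+  },
+  "parameters": {
+    "sku": {
+      "$ref": "#/definitions/storageSku",
+      "defaultValue": "Standard_LRS"
+    }
+  },
+  "resources": {},
+  "outputs": {
+    "chosenSku": {
+      "type": "string",
+      "value": "[parameters('sku')]"
+    }
+  }
+}
+```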
+
+In Bicep, see [User-defined data types](../bicep/user-defined-data-types.md).
+ ## Parameters In the `parameters` section of the template, you specify which values you can input when deploying the resources. You're limited to [256 parameters](../management/azure-subscription-service-limits.md#general-limits) in a template. You can reduce the number of parameters by using objects that contain multiple properties.
The available properties for a parameter are:
"minValue": <minimum-value-for-int>, "maxValue": <maximum-value-for-int>, "minLength": <minimum-length-for-string-or-array>,
- "maxLength": <maximum-length-for-string-or-array-parameters>,
+ "maxLength": <maximum-length-for-string-or-array>,
+ "prefixItems": <schema-for-validating-array>,
+ "items": <schema-for-validating-array-or-boolean>,
+ "properties": <schema-for-validating-object>,
+ "additionalProperties": <schema-for-validating-object-or-boolean>,
+ "discriminator": <schema-to-apply>,
+ "nullable": <boolean>,
"metadata": { "description": "<description-of-the parameter>" }
The available properties for a parameter are:
| maxValue |No |The maximum value for int type parameters, this value is inclusive. | | minLength |No |The minimum length for string, secure string, and array type parameters, this value is inclusive. | | maxLength |No |The maximum length for string, secure string, and array type parameters, this value is inclusive. |
+| prefixItems |No |The type definition for validating the element of an array at the same index. `prefixItems` is only supported in [languageVersion 2.0](#languageversion-20).|
+| items |No |The schema that is applied to all elements of the array whose index is greater than the largest index of the `prefixItems` constraint, or boolean for controlling the elements of the array whose index is greater than the largest index of the `prefixItems` constraint. `items` is only supported in [languageVersion 2.0](#languageversion-20).|
+| properties |No |The schema for validating object. `properties` is only supported in [languageVersion 2.0](#languageversion-20).|
+| additionalProperties |No |The schema that is applied to all properties not mentioned in the `properties` constraint, or boolean for accepting any property not defined in the `properties` constraint. `additionalProperties` is only supported in [languageVersion 2.0](#languageversion-20).|
+| discriminator |No |The schema to apply based on a discriminator property. `discriminator` is only supported in [languageVersion 2.0](#languageversion-20).|
+| nullable|No |A boolean indicating that the value may be null or omitted. `nullable` is only supported in [languageVersion 2.0](#languageversion-20).|
| description |No |Description of the parameter that is displayed to users through the portal. For more information, see [Comments in templates](#comments). | For examples of how to use parameters, see [Parameters in ARM templates](./parameters.md).
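
As a minimal sketch (the `coordinates` parameter is illustrative), `prefixItems` validates a tuple-like array, and `"items": false` rejects any elements beyond the ones described:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "languageVersion": "2.0",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "coordinates": {
      "type": "array",
      "prefixItems": [
        { "type": "int" },
        { "type": "int" }
      ],
      "items": false,
      "defaultValue": [ 10, 20 ]
    }
  },
  "resources": {},
  "outputs": {
    "x": {
      "type": "int",
      "value": "[parameters('coordinates')[0]]"
    }
  }
}
```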
In Bicep, see [parameters](../bicep/file.md#parameters).
## Variables
-In the `variables` section, you construct values that can be used throughout your template. You don't need to define variables, but they often simplify your template by reducing complex expressions. The format of each variable matches one of the [data types](data-types.md). You are limited to [256 variables](../management/azure-subscription-service-limits.md#general-limits) in a template.
+In the `variables` section, you construct values that can be used throughout your template. You don't need to define variables, but they often simplify your template by reducing complex expressions. The format of each variable matches one of the [data types](data-types.md). You're limited to [256 variables](../management/azure-subscription-service-limits.md#general-limits) in a template.
The following example shows the available options for defining a variable:
When defining a user function, there are some restrictions:
For examples of how to use custom functions, see [User-defined functions in ARM template](./user-defined-functions.md).
-In Bicep, user-defined functions aren't supported. Bicep does support a variety of [functions](../bicep/bicep-functions.md) and [operators](../bicep/operators.md).
+In Bicep, user-defined functions aren't supported. Bicep does support various [functions](../bicep/bicep-functions.md) and [operators](../bicep/operators.md).
## Resources
-In the `resources` section, you define the resources that are deployed or updated. You are limited to [800 resources](../management/azure-subscription-service-limits.md#general-limits) in a template.
+In the `resources` section, you define the resources that are deployed or updated. You're limited to [800 resources](../management/azure-subscription-service-limits.md#general-limits) in a template.
You define resources with the following structure: ```json "resources": [ {
- "condition": "<true-to-deploy-this-resource>",
- "type": "<resource-provider-namespace/resource-type-name>",
- "apiVersion": "<api-version-of-resource>",
- "name": "<name-of-the-resource>",
- "comments": "<your-reference-notes>",
- "location": "<location-of-resource>",
- "dependsOn": [
- "<array-of-related-resource-names>"
- ],
- "tags": {
- "<tag-name1>": "<tag-value1>",
- "<tag-name2>": "<tag-value2>"
- },
- "identity": {
- "type": "<system-assigned-or-user-assigned-identity>",
- "userAssignedIdentities": {
- "<resource-id-of-identity>": {}
- }
- },
- "sku": {
- "name": "<sku-name>",
- "tier": "<sku-tier>",
- "size": "<sku-size>",
- "family": "<sku-family>",
- "capacity": <sku-capacity>
- },
- "kind": "<type-of-resource>",
- "scope": "<target-scope-for-extension-resources>",
- "copy": {
- "name": "<name-of-copy-loop>",
- "count": <number-of-iterations>,
- "mode": "<serial-or-parallel>",
- "batchSize": <number-to-deploy-serially>
- },
- "plan": {
- "name": "<plan-name>",
- "promotionCode": "<plan-promotion-code>",
- "publisher": "<plan-publisher>",
- "product": "<plan-product>",
- "version": "<plan-version>"
- },
- "properties": {
- "<settings-for-the-resource>",
- "copy": [
- {
- "name": ,
- "count": ,
- "input": {}
- }
- ]
- },
- "resources": [
- "<array-of-child-resources>"
- ]
+ "condition": "<true-to-deploy-this-resource>",
+ "type": "<resource-provider-namespace/resource-type-name>",
+ "apiVersion": "<api-version-of-resource>",
+ "name": "<name-of-the-resource>",
+ "comments": "<your-reference-notes>",
+ "location": "<location-of-resource>",
+ "dependsOn": [
+ "<array-of-related-resource-names>"
+ ],
+ "tags": {
+ "<tag-name1>": "<tag-value1>",
+ "<tag-name2>": "<tag-value2>"
+ },
+ "identity": {
+ "type": "<system-assigned-or-user-assigned-identity>",
+ "userAssignedIdentities": {
+ "<resource-id-of-identity>": {}
+ }
+ },
+ "sku": {
+ "name": "<sku-name>",
+ "tier": "<sku-tier>",
+ "size": "<sku-size>",
+ "family": "<sku-family>",
+ "capacity": <sku-capacity>
+ },
+ "kind": "<type-of-resource>",
+ "scope": "<target-scope-for-extension-resources>",
+ "copy": {
+ "name": "<name-of-copy-loop>",
+ "count": <number-of-iterations>,
+ "mode": "<serial-or-parallel>",
+ "batchSize": <number-to-deploy-serially>
+ },
+ "plan": {
+ "name": "<plan-name>",
+ "promotionCode": "<plan-promotion-code>",
+ "publisher": "<plan-publisher>",
+ "product": "<plan-product>",
+ "version": "<plan-version>"
+ },
+ "properties": {
+ "<settings-for-the-resource>",
+ "copy": [
+ {
+ "name": ,
+ "count": ,
+ "input": {}
+ }
+ ]
+ },
+ "resources": [
+ "<array-of-child-resources>"
+ ]
} ] ``` | Element name | Required | Description | |: |: |: |
-| condition | No | Boolean value that indicates whether the resource will be provisioned during this deployment. When `true`, the resource is created during deployment. When `false`, the resource is skipped for this deployment. See [condition](conditional-resource-deployment.md). |
+| condition | No | Boolean value that indicates whether the resource is provisioned during this deployment. When `true`, the resource is created during deployment. When `false`, the resource is skipped for this deployment. See [condition](conditional-resource-deployment.md). |
| type |Yes |Type of the resource. This value is a combination of the namespace of the resource provider and the resource type (such as `Microsoft.Storage/storageAccounts`). To determine available values, see [template reference](/azure/templates/). For a child resource, the format of the type depends on whether it's nested within the parent resource or defined outside of the parent resource. See [Set name and type for child resources](child-resource-name-type.md). | | apiVersion |Yes |Version of the REST API to use for creating the resource. When creating a new template, set this value to the latest version of the resource you're deploying. As long as the template works as needed, keep using the same API version. By continuing to use the same API version, you minimize the risk of a new API version changing how your template works. Consider updating the API version only when you want to use a new feature that is introduced in a later version. To determine available values, see [template reference](/azure/templates/). | | name |Yes |Name of the resource. The name must follow URI component restrictions defined in RFC3986. Azure services that expose the resource name to outside parties validate the name to make sure it isn't an attempt to spoof another identity. For a child resource, the format of the name depends on whether it's nested within the parent resource or defined outside of the parent resource. See [Set name and type for child resources](child-resource-name-type.md). |
You define resources with the following structure:
| properties |No |Resource-specific configuration settings. The values for the properties are the same as the values you provide in the request body for the REST API operation (PUT method) to create the resource. You can also specify a copy array to create several instances of a property. To determine available values, see [template reference](/azure/templates/). | | resources |No |Child resources that depend on the resource being defined. Only provide resource types that are permitted by the schema of the parent resource. Dependency on the parent resource isn't implied. You must explicitly define that dependency. See [Set name and type for child resources](child-resource-name-type.md). |
+To support [Bicep](../bicep/overview.md) symbolic names in ARM JSON templates, add `languageVersion` with version `2.0` or newer, and change the resource definition from an array to an object.
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "languageVersion": "2.0",
+ "contentVersion": "1.0.0.0",
+ "resources": {
+ "<name-of-the-resource>": {
+ ...
+ }
+ }
+}
+```
+
+For more information, see [Resources](#resources).
+ In Bicep, see [resources](../bicep/file.md#resources). ## Outputs
-In the `outputs` section, you specify values that are returned from deployment. Typically, you return values from resources that were deployed. You are limited to [64 outputs](../management/azure-subscription-service-limits.md#general-limits) in a template.
+In the `outputs` section, you specify values that are returned from deployment. Typically, you return values from resources that were deployed. You're limited to [64 outputs](../management/azure-subscription-service-limits.md#general-limits) in a template.
The following example shows the structure of an output definition:
You have a few options for adding comments and metadata to your template.
### Comments
-For inline comments, you can use either `//` or `/* ... */`. In Visual Studio Code, save the parameter files with comments as the **JSON with comments (JSONC)** file type, otherwise you will get an error message saying "Comments not permitted in JSON".
+For inline comments, you can use either `//` or `/* ... */`. In Visual Studio Code, save the parameter files with comments as the **JSON with comments (JSONC)** file type, otherwise you get an error message saying "Comments not permitted in JSON".
> [!NOTE] >
For inline comments, you can use either `//` or `/* ... */`. In Visual Studio Co
```json { "type": "Microsoft.Compute/virtualMachines",
- "apiVersion": "2018-10-01",
+ "apiVersion": "2023-03-01",
"name": "[variables('vmName')]", // to customize name, change it in variables "location": "[parameters('location')]", //defaults to resource group location "dependsOn": [ /* storage account and network interface must be deployed first */
For `resources`, add a `comments` element or a `metadata` object. The following
"resources": [ { "type": "Microsoft.Storage/storageAccounts",
- "apiVersion": "2018-07-01",
- "name": "[concat('storage', uniqueString(resourceGroup().id))]",
+ "apiVersion": "2022-09-01",
+ "name": "[format('{0}{1}', 'storage', uniqueString(resourceGroup().id))]",
"comments": "Storage account used to store VM disks", "location": "[parameters('location')]", "metadata": {
You can break a string into multiple lines. For example, see the `location` prop
```json { "type": "Microsoft.Compute/virtualMachines",
- "apiVersion": "2018-10-01",
+ "apiVersion": "2023-03-01",
"name": "[variables('vmName')]", // to customize name, change it in variables "location": "[ parameters('location')
You can break a string into multiple lines. For example, see the `location` prop
In Bicep, see [multi-line strings](../bicep/file.md#multi-line-strings).
+## languageVersion 2.0
+
+> [!NOTE]
+> Using any `languageVersion` that ends in `-experimental` is not recommended in production environments because experimental functionality could be changed at any time.
+
+To use languageVersion 2.0, add `"languageVersion": "2.0"` to your template:
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "languageVersion": "2.0",
+ "contentVersion": "1.0.0.0",
+ "resources": {
+ "<name-of-the-resource>": {
+ ...
+ }
+ }
+}
+```
+
+The enhancements and changes that come with languageVersion 2.0:
+
+- Use symbolic names in ARM JSON templates. For more information, see [Use symbolic name](./resource-declaration.md#use-symbolic-name).
+- Use symbolic names in resource copy loops. See [Use symbolic name](./copy-resources.md#use-symbolic-name).
+- Use symbolic names in `dependsOn` arrays. See [DependsOn](./resource-dependency.md#dependson) and [Depend on resources in a loop](./resource-dependency.md#depend-on-resources-in-a-loop).
+- Use a symbolic name instead of a resource name in the `reference` function. See [reference](./template-functions-resource.md#reference).
+- Use the `references()` function to return an array of objects representing a resource collection's runtime states. See [references](./template-functions-resource.md#references).
+- Use the `existing` resource property to declare existing resources for ARM to read rather than deploy. See [Declare existing resources](./resource-declaration.md#declare-existing-resources).
+- Create user-defined types. See [Type definition](./definitions.md).
+- Additional aggregate type validation constraints for use in [parameters](./parameters.md) and [outputs](./outputs.md).
+- The default value for the `expressionEvaluationOptions` property is `inner`. The value `outer` is blocked. See [Expression evaluation scope in nested templates](./linked-templates.md#expression-evaluation-scope-in-nested-templates).
+- The `deployment` function returns a limited subset of properties. See [deployment](./template-functions-deployment.md#deployment).
+- If the Deployments resource is used in a symbolic-name deployment, use apiVersion `2020-09-01` or later.
+- In resource definitions, double-escaping values within an expression is no longer needed. See [Escape characters](./template-expressions.md#escape-characters).
+ ## Next steps * To view complete templates for many different types of solutions, see the [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/).
azure-resource-manager Template Expressions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-expressions.md
Title: Template syntax and expressions
description: Describes the declarative JSON syntax for Azure Resource Manager templates (ARM templates). Previously updated : 06/22/2023 Last updated : 08/22/2023 # Syntax and expressions in ARM templates
To pass a string value as a parameter to a function, use single quotes.
"name": "[concat('storage', uniqueString(resourceGroup().id))]" ```
-Most functions work the same whether deployed to a resource group, subscription, management group, or tenant. The following functions have restrictions based on the scope:
+Most functions work the same whether the deployment targets a resource group, subscription, management group, or tenant. The following functions have restrictions based on the scope:
* [resourceGroup](template-functions-resource.md#resourcegroup) - can only be used in deployments to a resource group. * [resourceId](template-functions-resource.md#resourceid) - can be used at any scope, but the valid parameters change depending on the scope.
To escape double quotes in an expression, such as adding a JSON object in the te
}, ```
-To escape single quotes in an ARM expression output, double up the single quotes. The output of the following template will result in JSON value `{"abc":"'quoted'"}`.
+To escape single quotes in an ARM expression output, double up the single quotes. The output of the following template results in JSON value `{"abc":"'quoted'"}`.
```json {
To escape single quotes in an ARM expression output, double up the single quotes
} ```
+In resource definitions, double-escape values within an expression. The `scriptOutput` from the following template is `de'f`.
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "forceUpdateTag": {
+ "type": "string",
+ "defaultValue": "[newGuid()]"
+ }
+ },
+ "variables": {
+ "deploymentScriptSharedProperties": {
+ "forceUpdateTag": "[parameters('forceUpdateTag')]",
+ "azPowerShellVersion": "10.1",
+ "retentionInterval": "P1D"
+ }
+ },
+ "resources": [
+ {
+ "type": "Microsoft.Resources/deploymentScripts",
+ "apiVersion": "2020-10-01",
+ "name": "escapingTest",
+ "location": "[resourceGroup().location]",
+ "kind": "AzurePowerShell",
+ "properties": "[union(variables('deploymentScriptSharedProperties'), createObject('scriptContent', '$DeploymentScriptOutputs = @{}; $DeploymentScriptOutputs.escaped = \"de''''f\";'))]"
+ }
+ ],
+ "outputs": {
+ "scriptOutput": {
+ "type": "string",
+ "value": "[reference('escapingTest').outputs.escaped]"
+ }
+ }
+}
+```
+
+With [languageVersion 2.0](./syntax.md#languageversion-20), double-escaping is no longer needed. The preceding example can be written as the following JSON to get the same result, `de'f`.
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "languageVersion": "2.0",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "forceUpdateTag": {
+ "type": "string",
+ "defaultValue": "[newGuid()]"
+ }
+ },
+ "variables": {
+ "deploymentScriptSharedProperties": {
+ "forceUpdateTag": "[parameters('forceUpdateTag')]",
+ "azPowerShellVersion": "10.1",
+ "retentionInterval": "P1D"
+ }
+ },
+ "resources": {
+ "escapingTest": {
+ "type": "Microsoft.Resources/deploymentScripts",
+ "apiVersion": "2020-10-01",
+ "name": "escapingTest",
+ "location": "[resourceGroup().location]",
+ "kind": "AzurePowerShell",
+ "properties": "[union(variables('deploymentScriptSharedProperties'), createObject('scriptContent', '$DeploymentScriptOutputs = @{}; $DeploymentScriptOutputs.escaped = \"de''f\";'))]"
+ }
+ },
+ "outputs": {
+ "scriptOutput": {
+ "type": "string",
+ "value": "[reference('escapingTest').outputs.escaped]"
+ }
+ }
+}
+```
+ When passing in parameter values, the use of escape characters depends on where the parameter value is specified. If you set a default value in the template, you need the extra left bracket. ```json
azure-resource-manager Template Functions Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-functions-deployment.md
Title: Template functions - deployment
description: Describes the functions to use in an Azure Resource Manager template (ARM template) to retrieve deployment information. Previously updated : 05/22/2023 Last updated : 08/22/2023 # Deployment functions for ARM templates
When you deploy to an Azure subscription, management group, or tenant, the retur
} ```
+When deploying a [languageVersion 2.0](./syntax.md#languageversion-20) template, the `deployment` function returns a limited subset of properties:
+
+```json
+{
+ "name": "",
+ "location": "",
+ "properties": {
+ "template": {
+ "contentVersion": ""
+ },
+ "templateLink": {
+ "id": "",
+ "uri": ""
+ }
+ }
+}
+```
+ ### Remarks You can use `deployment()` to link to another template based on the URI of the parent template.
azure-resource-manager Template Functions Object https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-functions-object.md
Title: Template functions - objects
description: Describes the functions to use in an Azure Resource Manager template (ARM template) for working with objects. Previously updated : 05/22/2023 Last updated : 08/22/2023 # Object functions for ARM templates
The JSON data type from the specified string, or an empty value when **null** is
### Remarks
-If you need to include a parameter value or variable in the JSON object, use the [concat](template-functions-string.md#concat) function to create the string that you pass to the function.
+If you need to include a parameter value or variable in the JSON object, use the [format](template-functions-string.md#format) function to build the string that you pass to the function.
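+
+For example, a minimal sketch (the `siteName` parameter is illustrative); literal curly braces in `format` are escaped by doubling them:
+
+```json
+"variables": {
+  "jsonVariable": "[json(format('{{\"siteName\": \"{0}\"}}', parameters('siteName')))]"
+}
+```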
You can also use [null()](#null) to get a null value.
An array or object.
The union function uses the sequence of the parameters to determine the order and values of the result.
-For arrays, the function iterates through each element in the first parameter and adds it to the result if it isn't already present. Then, it repeats the process for the second parameter and any additional parameters. If a value is already present, it's earlier placement in the array is preserved.
+For arrays, the function iterates through each element in the first parameter and adds it to the result if it isn't already present. Then, it repeats the process for the second parameter and any additional parameters. If a value is already present, its earlier placement in the array is preserved.
For objects, property names and values from the first parameter are added to the result. For later parameters, any new names are added to the result. If a later parameter has a property with the same name, that value overwrites the existing value. The order of the properties isn't guaranteed.
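
As a small illustration (the output names are arbitrary), `unionArray` below evaluates to `["one", "two", "three"]` and `unionObject` to `{"a": 1, "b": 5, "c": 3}`:

```json
"outputs": {
  "unionArray": {
    "type": "array",
    "value": "[union(createArray('one', 'two'), createArray('two', 'three'))]"
  },
  "unionObject": {
    "type": "object",
    "value": "[union(createObject('a', 1, 'b', 2), createObject('b', 5, 'c', 3))]"
  }
}
```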
azure-resource-manager Template Functions Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-functions-resource.md
Title: Template functions - resources description: Describes the functions to use in an Azure Resource Manager template (ARM template) to retrieve values about resources. Previously updated : 08/08/2023 Last updated : 08/22/2023
The possible uses of `list*` are shown in the following table.
| Microsoft.OperationalInsights/workspaces | listKeys | | Microsoft.PolicyInsights/remediations | [listDeployments](/rest/api/policy/remediations/listdeploymentsatresourcegroup) | | Microsoft.RedHatOpenShift/openShiftClusters | [listCredentials](/rest/api/openshift/openshiftclusters/listcredentials) |
-| Microsoft.Relay/namespaces/authorizationRules | [listKeys](/rest/api/relay/namespaces/listkeys) |
+| Microsoft.Relay/namespaces/authorizationRules | [listKeys](/rest/api/relay/controlplane-stable/namespaces/list-keys) |
| Microsoft.Relay/namespaces/disasterRecoveryConfigs/authorizationRules | listKeys |
-| Microsoft.Relay/namespaces/HybridConnections/authorizationRules | [listKeys](/rest/api/relay/hybridconnections/listkeys) |
-| Microsoft.Relay/namespaces/WcfRelays/authorizationRules | [listkeys](/rest/api/relay/wcfrelays/listkeys) |
+| Microsoft.Relay/namespaces/HybridConnections/authorizationRules | [listKeys](/rest/api/relay/controlplane-stable/hybrid-connections/list-keys) |
+| Microsoft.Relay/namespaces/WcfRelays/authorizationRules | [listkeys](/rest/api/relay/controlplane-stable/wcf-relays/list-keys) |
| Microsoft.Search/searchServices | [listAdminKeys](/rest/api/searchmanagement/2021-04-01-preview/admin-keys/get) | | Microsoft.Search/searchServices | [listQueryKeys](/rest/api/searchmanagement/2021-04-01-preview/query-keys/list-by-search-service) | | Microsoft.ServiceBus/namespaces/authorizationRules | [listKeys](/rest/api/servicebus/controlplane-stable/namespaces-authorization-rules/list-keys) |
The [providers operation](/rest/api/resources/providers) is still available thro
## reference
+In templates without [symbolic names](./resource-declaration.md#use-symbolic-name):
+ `reference(resourceName or resourceIdentifier, [apiVersion], ['Full'])`
+In templates with [symbolic names](./resource-declaration.md#use-symbolic-name):
+
+`reference(symbolicName or resourceIdentifier, [apiVersion], ['Full'])`
+ Returns an object representing a resource's runtime state. To return an array of objects representing a resource collection's runtime states, see [references](#references). Bicep provides the reference function, but in most cases, it isn't required. It's recommended to use the symbolic name for the resource instead. See [reference](../bicep/bicep-functions-resource.md#reference).
Bicep provide the reference function, but in most cases, the reference function
| Parameter | Required | Type | Description | |: |: |: |: |
-| resourceName or resourceIdentifier |Yes |string |Name or unique identifier of a resource. When referencing a resource in the current template, provide only the resource name as a parameter. When referencing a previously deployed resource or when the name of the resource is ambiguous, provide the resource ID. |
+| resourceName/resourceIdentifier or symbolicName/resourceIdentifier |Yes |string |In templates without symbolic names, specify the name or unique identifier of a resource. When referencing a resource in the current template, provide only the resource name as a parameter. When referencing a previously deployed resource or when the name of the resource is ambiguous, provide the resource ID. <br>In templates with symbolic names, specify the symbolic name or unique identifier of a resource. When referencing a resource in the current template, provide only the resource's symbolic name as a parameter. When referencing a previously deployed resource, provide the resource ID.|
| apiVersion |No |string |API version of the specified resource. **This parameter is required when the resource isn't provisioned within same template.** Typically, in the format, **yyyy-mm-dd**. For valid API versions for your resource, see [template reference](/azure/templates/). | | 'Full' |No |string |Value that specifies whether to return the full resource object. If you don't specify `'Full'`, only the properties object of the resource is returned. The full object includes values such as the resource ID and location. |
Use `'Full'` when you need resource values that aren't part of the properties sc
```json { "type": "Microsoft.KeyVault/vaults",
- "apiVersion": "2019-09-01",
+ "apiVersion": "2022-07-01",
"name": "vaultName", "properties": { "tenantId": "[subscription().tenantId]",
If you use the `reference` function in a resource that is conditionally deployed
By using the `reference` function, you implicitly declare that one resource depends on another resource if the referenced resource is provisioned within same template and you refer to the resource by its name (not resource ID). You don't need to also use the `dependsOn` property. The function isn't evaluated until the referenced resource has completed deployment.
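
As a sketch of the implicit dependency (resource names and parameters are illustrative), the storage account below doesn't list the referenced account in `dependsOn`, yet Resource Manager deploys the referenced account first because the tag calls `reference` with its name:

```json
{
  "type": "Microsoft.Storage/storageAccounts",
  "apiVersion": "2022-09-01",
  "name": "[parameters('auditStorageName')]",
  "location": "[parameters('location')]",
  "sku": { "name": "Standard_LRS" },
  "kind": "StorageV2",
  "tags": {
    "primaryBlobEndpoint": "[reference(parameters('storageAccountName')).primaryEndpoints.blob]"
  }
}
```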
-### Resource name or identifier
+### Resource name, symbolic name, or identifier
-When referencing a resource that is deployed in the same template, provide the name of the resource.
+When referencing a resource that is deployed in the same non-symbolic-name template, provide the name of the resource.
```json "value": "[reference(parameters('storageAccountName'))]" ```
+When referencing a resource that is deployed in the same symbolic-name template, provide the symbolic name of the resource.
+
+```json
+"value": "[reference('myStorage').primaryEndpoints]"
+```
+
+Or
+
+```json
+"value": "[reference('myStorage', '2022-09-01', 'Full').location]"
+```
+ When referencing a resource that isn't deployed in the same template, provide the resource ID and `apiVersion`. ```json
-"value": "[reference(resourceId(parameters('storageResourceGroup'), 'Microsoft.Storage/storageAccounts', parameters('storageAccountName')), '2018-07-01')]"
+"value": "[reference(resourceId(parameters('storageResourceGroup'), 'Microsoft.Storage/storageAccounts', parameters('storageAccountName')), '2022-09-01')]"
``` To avoid ambiguity about which resource you're referencing, you can provide a fully qualified resource identifier.
The following example template references a storage account that isn't deployed
`references(symbolic name of a resource collection, ['Full', 'Properties'])`
-The `references` function works similarly as [`reference`](#reference). Instead of returning an object presenting a resource's runtime state, the `references` function returns an array of objects representing a resource collection's runtime states. This function requires ARM template language version `1.10-experimental` and with [symbolic name](../bicep/file.md#resources) enabled:
+The `references` function works similarly to [`reference`](#reference). Instead of returning an object representing a resource's runtime state, the `references` function returns an array of objects representing a resource collection's runtime states. This function requires ARM template languageVersion `2.0` with [symbolic names](../bicep/file.md#resources) enabled:
```json { "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
- "languageVersion": "1.10-experimental",
+ "languageVersion": "2.0",
"contentVersion": "1.0.0.0", ... }
The following example deploys a resource collection, and references that resourc
```json { "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
- "languageVersion": "1.10-experimental",
+ "languageVersion": "2.0",
"contentVersion": "1.0.0.0", "parameters": { "location": {
The following example deploys a resource collection, and references that resourc
"count": "[length(range(0, parameters('numWorkers')))]" }, "type": "Microsoft.ContainerInstance/containerGroups",
- "apiVersion": "2022-09-01",
+ "apiVersion": "2023-05-01",
"name": "[format('worker-{0}', range(0, parameters('numWorkers'))[copyIndex()])]", "location": "[parameters('location')]", "properties": {
The following example deploys a resource collection, and references that resourc
}, "containerController": { "type": "Microsoft.ContainerInstance/containerGroups",
- "apiVersion": "2022-09-01",
+ "apiVersion": "2023-05-01",
"name": "controller", "location": "[parameters('location')]", "properties": {
The preceding example returns the three objects.
"type": "Array", "value": [ {
- "apiVersion": "2022-09-01",
+ "apiVersion": "2023-05-01",
"condition": true, "copyContext": { "copyIndex": 0,
azure-signalr Concept Connection String https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/concept-connection-string.md
The connection string contains:
The following table lists all the valid names for key/value pairs in the connection string.
-| Key | Description | Required | Default value| Example value
-| | | | | |
-| Endpoint | The URL of your ASRS instance. | Y | N/A |`https://foo.service.signalr.net` |
-| Port | The port that your ASRS instance is listening on. on. | N| 80/443, depends on the endpoint URI schema | 8080|
-| Version| The version of given connection. string. | N| 1.0 | 1.0 |
-| ClientEndpoint | The URI of your reverse proxy, such as the App Gateway or API. Management | N| null | `https://foo.bar` |
-| AuthType | The auth type. By default the service uses the AccessKey authorize requests. **Case insensitive** | N | null | Azure, azure.msi, azure.app |
+| Key | Description | Required | Default value | Example value |
+| --- | --- | --- | --- | --- |
+| Endpoint | The URL of your ASRS instance. | Y | N/A | `https://foo.service.signalr.net` |
+| Port | The port that your ASRS instance listens on. | N | 80/443, depends on the endpoint URI schema | 8080 |
+| Version | The version of the given connection string. | N | 1.0 | 1.0 |
+| ClientEndpoint | The URI of your reverse proxy, such as Application Gateway or API Management. | N | null | `https://foo.bar` |
+| AuthType | The auth type. By default, the service uses the AccessKey to authorize requests. **Case insensitive.** | N | null | Azure, azure.msi, azure.app |
### Use AccessKey The local auth method is used when `AuthType` is set to null.
-| Key | Description| Required | Default value | Example value|
-| | | | | |
-| AccessKey | The key string in base64 format for building access token. | Y | null | ABCDEFGHIJKLMNOPQRSTUVWEXYZ0123456789+=/ |
+| Key | Description | Required | Default value | Example value |
+| --- | --- | --- | --- | --- |
+| AccessKey | The key string in base64 format for building access token. | Y | null | ABCDEFGHIJKLMNOPQRSTUVWEXYZ0123456789+=/ |
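+
+For example, a typical access key connection string has the following shape (placeholder values are illustrative):
+
+```text
+Endpoint=https://<resource_name>.service.signalr.net;AccessKey=<access_key>;Version=1.0;
+```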
-### Use Azure Active Directory
+### Use Microsoft Entra ID
-The Azure AD auth method is used when `AuthType` is set to `azure`, `azure.app` or `azure.msi`.
+The Microsoft Entra ID auth method is used when `AuthType` is set to `azure`, `azure.app`, or `azure.msi`.
-| Key| Description| Required | Default value | Example value|
-| -- | | -- | - | |
-| ClientId | A GUID of an Azure application or an Azure identity. | N| null| `00000000-0000-0000-0000-000000000000` |
-| TenantId | A GUID of an organization in Azure Active Directory. | N| null| `00000000-0000-0000-0000-000000000000` |
-| ClientSecret | The password of an Azure application instance. | N| null| `***********************.****************` |
-| ClientCertPath | The absolute path of a client certificate (cert) file to an Azure application instance. | N| null| `/usr/local/cert/app.cert` |
+| Key | Description | Required | Default value | Example value |
+| --- | --- | --- | --- | --- |
+| ClientId | A GUID of an Azure application or an Azure identity. | N | null | `00000000-0000-0000-0000-000000000000` |
+| TenantId | A GUID of an organization in Microsoft Entra ID. | N | null | `00000000-0000-0000-0000-000000000000` |
+| ClientSecret | The password of an Azure application instance. | N | null | `***********************.****************` |
+| ClientCertPath | The absolute path of a client certificate (cert) file to an Azure application instance. | N | null | `/usr/local/cert/app.cert` |
-A different `TokenCredential` is used to generate Azure AD tokens depending on the parameters you have given.
+A different `TokenCredential` is used to generate Microsoft Entra tokens depending on the parameters you have given.
- `type=azure`
A different `TokenCredential` is used to generate Azure AD tokens depending on t
1. A user-assigned managed identity is used if `clientId` has been given in connection string.
- ```
+ ```text
Endpoint=xxx;AuthType=azure.msi;ClientId=<client_id> ```
-
+ - [ManagedIdentityCredential(clientId)](/dotnet/api/azure.identity.managedidentitycredential) is used. 1. A system-assigned managed identity is used.
A different `TokenCredential` is used to generate Azure AD tokens depending on t
- `type=azure.app`
- `clientId` and `tenantId` are required to use [Azure AD application with service principal](../active-directory/develop/howto-create-service-principal-portal.md).
+ `clientId` and `tenantId` are required to use [Microsoft Entra application with service principal](../active-directory/develop/howto-create-service-principal-portal.md).
1. [ClientSecretCredential(clientId, tenantId, clientSecret)](/dotnet/api/azure.identity.clientsecretcredential) is used if `clientSecret` is given.
You can also use Azure CLI to get the connection string:
az signalr key list -g <resource_group> -n <resource_name> ```
-## Connect with an Azure AD application
+## Connect with a Microsoft Entra application
-You can use an [Azure AD application](../active-directory/develop/app-objects-and-service-principals.md) to connect to your SignalR service. As long as the application has the right permission to access SignalR service, no access key is needed.
+You can use a [Microsoft Entra application](../active-directory/develop/app-objects-and-service-principals.md) to connect to your SignalR service. As long as the application has the right permission to access SignalR service, no access key is needed.
-To use Azure AD authentication, you need to remove `AccessKey` from connection string and add `AuthType=azure.app`. You also need to specify the credentials of your Azure AD application, including client ID, client secret and tenant ID. The connection string looks as follows:
+To use Microsoft Entra authentication, you need to remove `AccessKey` from the connection string and add `AuthType=azure.app`. You also need to specify the credentials of your Microsoft Entra application, including the client ID, client secret, and tenant ID. The connection string looks as follows:
```text Endpoint=https://<resource_name>.service.signalr.net;AuthType=azure.app;ClientId=<client_id>;ClientSecret=<client_secret>;TenantId=<tenant_id>;Version=1.0; ```
-For more information about how to authenticate using Azure AD application, see [Authorize from Azure Applications](signalr-howto-authorize-application.md).
+For more information about how to authenticate using Microsoft Entra application, see [Authorize from Azure Applications](signalr-howto-authorize-application.md).
## Authenticate with Managed identity
-You can also use a system assigned or user assigned [managed identity](../active-directory/managed-identities-azure-resources/overview.md) to authenticate with SignalR service.
+You can also use a system assigned or user assigned [managed identity](../active-directory/managed-identities-azure-resources/overview.md) to authenticate with SignalR service.
To use a system assigned identity, add `AuthType=azure.msi` to the connection string:
For more information about how to configure managed identity, see [Authorize fro
### Use the connection string generator
-It may be cumbersome and error-prone to build connection strings manually. To avoid making mistakes, SignalR provides a connection string generator to help you generate a connection string that includes Azure AD identities like `clientId`, `tenantId`, etc. To use the tool open your SignalR instance in Azure portal, select **Connection strings** from the left side menu.
+It may be cumbersome and error-prone to build connection strings manually. To avoid making mistakes, SignalR provides a connection string generator to help you generate a connection string that includes Microsoft Entra identities like `clientId` and `tenantId`. To use the tool, open your SignalR instance in the Azure portal and select **Connection strings** from the menu on the left.
:::image type="content" source="media/concept-connection-string/generator.png" alt-text="Screenshot showing connection string generator of SignalR service in Azure portal.":::
-In this page you can choose different authentication types (access key, managed identity or Azure AD application) and input information like client endpoint, client ID, client secret, etc. Then connection string is automatically generated. You can copy and use it in your application.
+On this page, you can choose different authentication types (access key, managed identity, or Microsoft Entra application) and enter information like the client endpoint, client ID, and client secret. The connection string is then generated automatically. You can copy it and use it in your application.
> [!NOTE] > Information you enter won't be saved after you leave the page. You will need to copy and save your connection string to use in your application.
-For more information about how access tokens are generated and validated, see [Authenticate via Azure Active Directory Token](signalr-reference-data-plane-rest-api.md#authenticate-via-azure-active-directory-token-azure-ad-token) in [Azure SignalR service data plane REST API reference](signalr-reference-data-plane-rest-api.md) .
+For more information about how access tokens are generated and validated, see [Authenticate via Microsoft Entra token](signalr-reference-data-plane-rest-api.md#authenticate-via-microsoft-entra-token) in [Azure SignalR service data plane REST API reference](signalr-reference-data-plane-rest-api.md) .
## Client and server endpoints A connection string contains the HTTP endpoint for app server to connect to SignalR service. The server returns the HTTP endpoint to the clients in a negotiate response, so the client can connect to the service.
-In some applications, there may be an extra component in front of SignalR service. All client connections need to go through that component first. For example, [Azure Application Gateway](../application-gateway/overview.md) is a common service that provides additional network security.
+In some applications, there may be an extra component in front of SignalR service. All client connections need to go through that component first. For example, [Azure Application Gateway](../application-gateway/overview.md) is a common service that provides additional network security.
In such a case, the client needs to connect to an endpoint different from the SignalR service. Instead of manually replacing the endpoint on the client side, you can add `ClientEndpoint` to the connection string:
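
A sketch of the shape (placeholder values are illustrative):

```text
Endpoint=https://<resource_name>.service.signalr.net;AccessKey=<access_key>;ClientEndpoint=https://<client_endpoint>;Version=1.0;
```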
services.AddSignalR().AddAzureSignalR("<connection_string>");
Or you can call `AddAzureSignalR()` without any arguments. The service SDK returns the connection string from a config named `Azure:SignalR:ConnectionString` in your [configuration provider](/dotnet/core/extensions/configuration-providers).
-In a local development environment, the configuration is stored in a file (*appsettings.json* or *secrets.json*) or environment variables. You can use one of the following ways to configure connection string:
+In a local development environment, the configuration is stored in a file (_appsettings.json_ or _secrets.json_) or environment variables. You can use one of the following ways to configure connection string:
- Use .NET secret manager (`dotnet user-secrets set Azure:SignalR:ConnectionString "<connection_string>"`)-- Set an environment variable named `Azure__SignalR__ConnectionString` to the connection string. The colons need to be replaced with double underscore in the [environment variable configuration provider](/dotnet/core/extensions/configuration-providers#environment-variable-configuration-provider).
+- Set an environment variable named `Azure__SignalR__ConnectionString` to the connection string. The colons need to be replaced with double underscore in the [environment variable configuration provider](/dotnet/core/extensions/configuration-providers#environment-variable-configuration-provider).
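+
+For instance, a sketch of the environment variable form (placeholder values are illustrative):
+
+```text
+Azure__SignalR__ConnectionString=Endpoint=https://<resource_name>.service.signalr.net;AccessKey=<access_key>;Version=1.0;
+```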
In a production environment, you can use other Azure services to manage config/secrets like Azure [Key Vault](../key-vault/general/overview.md) and [App Configuration](../azure-app-configuration/overview.md). See their documentation to learn how to set up configuration provider for those services. > [!NOTE]
-> Even when you're directly setting a connection string using code, it's not recommended to hardcode the connection string in source code You should read the connection string from a secret store like key vault and pass it to `AddAzureSignalR()`.
+> Even when you're directly setting a connection string using code, it's not recommended to hardcode the connection string in source code. You should read the connection string from a secret store like Key Vault and pass it to `AddAzureSignalR()`.
### Configure multiple connection strings
There are also two ways to configure multiple instances:
```text Azure:SignalR:ConnectionString:<name>:<type>
- ```
+ ```
azure-signalr Howto Disable Local Auth https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/howto-disable-local-auth.md
Title: Disable local (access key) authentication with Azure SignalR Service
-description: This article provides information about how to disable access key authentication and use only Azure AD authentication with Azure SignalR Service.
+description: This article provides information about how to disable access key authentication and use only Microsoft Entra authorization with Azure SignalR Service.
# Disable local (access key) authentication with Azure SignalR Service
-There are two ways to authenticate to Azure SignalR Service resources: Azure Active Directory (Azure AD) and Access Key. Azure AD provides superior security and ease of use over access key. With Azure AD, thereΓÇÖs no need to store the tokens in your code and risk potential security vulnerabilities. We recommend that you use Azure AD with your Azure SignalR Service resources when possible.
+There are two ways to authenticate to Azure SignalR Service resources: Microsoft Entra ID and Access Key. Microsoft Entra ID offers superior security and ease of use compared to the access key method.
+With Microsoft Entra ID, you do not need to store tokens in your code, reducing the risk of potential security vulnerabilities.
+We highly recommend using Microsoft Entra ID for your Azure SignalR Service resources whenever possible.
> [!IMPORTANT]
-> Disabling local authentication can have following influences.
-> - The current set of access keys will be permanently deleted.
-> - Tokens signed with current set of access keys will become unavailable.
+> Disabling local authentication can have the following consequences:
+>
+> - The current set of access keys will be permanently deleted.
+> - Tokens signed with the current set of access keys will become unavailable.
## Use Azure portal
You can disable local authentication by setting `disableLocalAuth` property to t
```json {
- "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "resource_name": {
- "defaultValue": "test-for-disable-aad",
- "type": "String"
- }
- },
- "variables": {},
- "resources": [
- {
- "type": "Microsoft.SignalRService/SignalR",
- "apiVersion": "2022-08-01-preview",
- "name": "[parameters('resource_name')]",
- "location": "eastus",
- "sku": {
- "name": "Premium_P1",
- "tier": "Premium",
- "size": "P1",
- "capacity": 1
- },
- "kind": "SignalR",
- "properties": {
- "tls": {
- "clientCertEnabled": false
- },
- "features": [
- {
- "flag": "ServiceMode",
- "value": "Default",
- "properties": {}
- },
- {
- "flag": "EnableConnectivityLogs",
- "value": "True",
- "properties": {}
- }
- ],
- "cors": {
- "allowedOrigins": [
- "*"
- ]
- },
- "serverless": {
- "connectionTimeoutInSeconds": 30
- },
- "upstream": {},
- "networkACLs": {
- "defaultAction": "Deny",
- "publicNetwork": {
- "allow": [
- "ServerConnection",
- "ClientConnection",
- "RESTAPI",
- "Trace"
- ]
- },
- "privateEndpoints": []
- },
- "publicNetworkAccess": "Enabled",
- "disableLocalAuth": true,
- "disableAadAuth": false
- }
- }
- ]
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "resource_name": {
+ "defaultValue": "test-for-disable-aad",
+ "type": "String"
+ }
+ },
+ "variables": {},
+ "resources": [
+ {
+ "type": "Microsoft.SignalRService/SignalR",
+ "apiVersion": "2022-08-01-preview",
+ "name": "[parameters('resource_name')]",
+ "location": "eastus",
+ "sku": {
+ "name": "Premium_P1",
+ "tier": "Premium",
+ "size": "P1",
+ "capacity": 1
+ },
+ "kind": "SignalR",
+ "properties": {
+ "tls": {
+ "clientCertEnabled": false
+ },
+ "features": [
+ {
+ "flag": "ServiceMode",
+ "value": "Default",
+ "properties": {}
+ },
+ {
+ "flag": "EnableConnectivityLogs",
+ "value": "True",
+ "properties": {}
+ }
+ ],
+ "cors": {
+ "allowedOrigins": ["*"]
+ },
+ "serverless": {
+ "connectionTimeoutInSeconds": 30
+ },
+ "upstream": {},
+ "networkACLs": {
+ "defaultAction": "Deny",
+ "publicNetwork": {
+ "allow": [
+ "ServerConnection",
+ "ClientConnection",
+ "RESTAPI",
+ "Trace"
+ ]
+ },
+ "privateEndpoints": []
+ },
+ "publicNetworkAccess": "Enabled",
+ "disableLocalAuth": true,
+ "disableAadAuth": false
+ }
+ }
+ ]
}
```
You can assign the [Azure SignalR Service should have local authentication metho
See the following docs to learn about authentication methods.
-- [Overview of Azure AD for SignalR](signalr-concept-authorize-azure-active-directory.md)
+- [Overview of Microsoft Entra ID for SignalR](signalr-concept-authorize-azure-active-directory.md)
- [Authenticate with Azure applications](./signalr-howto-authorize-application.md)
- [Authenticate with managed identities](./signalr-howto-authorize-managed-identity.md)
azure-signalr Howto Enable Geo Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/howto-enable-geo-replication.md
Companies seeking local presence or requiring a robust failover system often cho
## Example use case

Contoso is a social media company with its customer base spread across the US and Canada. To serve those customers and let them communicate with each other, Contoso runs its services in Central US. Azure SignalR Service is used to handle user connections and facilitate communication among users. Contoso's end users are mostly phone users. Due to the long geographical distances, end users in Canada might experience high latency and poor network quality.
-![Screenshot of using one Azure SignalR instance to handle traffic from two countries. ](./media/howto-enable-geo-replication/signalr-single.png "Single SignalR Example")
+![Diagram of using one Azure SignalR instance to handle traffic from two countries. ](./media/howto-enable-geo-replication/signalr-single.png "Single SignalR Example")
Before the advent of the geo-replication feature, Contoso could set up another Azure SignalR Service in Canada Central to serve its Canadian users. By setting up a geographically closer Azure SignalR Service, end users now have better network quality and lower latency.
However, managing multiple Azure SignalR Services brings some challenges:
2. The development team would need to manage two separate Azure SignalR Services, each with a distinct domain and connection string.
3. If a regional outage happens, the traffic needs to be switched to another region.
-![Screenshot of using two Azure SignalR instances to handle traffic from two countries. ](./media/howto-enable-geo-replication/signalr-multiple.png "Mutiple SignalR Example")
+![Diagram of using two Azure SignalR instances to handle traffic from two countries.](./media/howto-enable-geo-replication/signalr-multiple.png "Multiple SignalR Example")
## Harnessing geo-replication

With the new geo-replication feature, Contoso can now establish a replica in Canada Central, effectively overcoming the above-mentioned hurdles.
-![Screenshot of using one Azure SignalR instance with replica to handle traffic from two countries.](./media/howto-enable-geo-replication/signalr-replica.png "Replica Example")
+![Diagram of using one Azure SignalR instance with replica to handle traffic from two countries.](./media/howto-enable-geo-replication/signalr-replica.png "Replica Example")
## Create a SignalR replica
To create a replica, navigate to the SignalR **Replicas** blade on the Azure portal.
> [!NOTE]
> * Geo-replication is a feature available in the premium tier.
-> * A replica is considered a separate resource when it comes to billing. See [Pricing](#pricing) for more details.
+> * A replica is considered a separate resource when it comes to billing. See [Pricing and resource unit](#pricing-and-resource-unit) for more details.
After creation, you can view and edit your replica in the portal by selecting the replica name.

![Screenshot of overview blade of Azure SignalR replica resource.](./media/howto-enable-geo-replication/signalr-replica-overview.png "Replica Overview")
-## Pricing
-Replica is a feature of [Premium tier](https://azure.microsoft.com/pricing/details/signalr-service/) of Azure SignalR Service. Each replica is billed **separately** according to its own units and outbound traffic. Free message quota is also calculated separately.
+## Pricing and resource unit
+Each replica has its **own** `unit` and `autoscale settings`.
+
+Replica is a feature of [Premium tier](https://azure.microsoft.com/pricing/details/signalr-service/) of Azure SignalR Service. Each replica is billed **separately** according to its own unit and outbound traffic. Free message quota is also calculated separately.
In the preceding example, Contoso added one replica in Canada Central. Contoso would pay for the replica in Canada Central according to its units and messages at the Premium tier price.
To delete a replica in the Azure portal:
1. Navigate to your Azure SignalR Service and select the **Replicas** blade. Select the replica you want to delete.
2. Select the **Delete** button on the replica overview blade.
-## Understanding how the SignalR replica works
+## Understand how the SignalR replica works
The diagram below provides a brief illustration of the SignalR Replicas' functionality:
-![Screenshot of the arch of Azure SignalR replica. ](./media/howto-enable-geo-replication/signalr-replica-arch.png "Replica Arch")
+![Diagram of the arch of Azure SignalR replica. ](./media/howto-enable-geo-replication/signalr-replica-arch.png "Replica Arch")
-1. The client resolves the Fully Qualified Domain Name (FQDN) `contoso.service.signalr.net` of the SignalR service. This FQDN points to a Traffic Manager, which returns the Canonical Name (CNAME) of the nearest regional SignalR instance.
-2. With this CNAME, the client establishes a connection to the regional instance.
+1. The client negotiates with the app server and receives a redirection to the Azure SignalR service. It then resolves the SignalR service's Fully Qualified Domain Name (FQDN), `contoso.service.signalr.net`. This FQDN points to a Traffic Manager, which returns the Canonical Name (CNAME) of the nearest regional SignalR instance.
+2. With this CNAME, the client establishes a connection to the regional instance (Replica).
3. The two replicas will synchronize data with each other. Messages sent to one replica would be transferred to other replicas if necessary.
-4. In case a replica fails the health check conducted by the Traffic Manager (TM), the TM will exclude the failed instance's endpoint from its domain resolution process.
+4. In case a replica fails the health check conducted by the Traffic Manager (TM), the TM will exclude the failed instance's endpoint from its domain resolution process. For details, see [Resiliency and disaster recovery](#resiliency-and-disaster-recovery) below.
> [!NOTE]
> * In the data plane, a primary Azure SignalR resource functions identically to its replicas.
+## Resiliency and disaster recovery
+
+Azure SignalR Service utilizes a traffic manager for health checks and DNS resolution towards its replicas. Under normal circumstances, when all replicas are functioning properly, clients will be directed to the closest replica. For instance:
+
+- Clients close to `eastus` will be directed to the replica located in `eastus`.
+- Similarly, clients close to `westus` will be directed to the replica in `westus`.
+
+In the event of a **regional outage** in eastus (illustrated below), the traffic manager will detect the health check failure for that region. Then, this faulty replica's DNS will be excluded from the traffic manager's DNS resolution results. After a DNS Time-to-Live (TTL) duration, which is set to 90 seconds, clients in `eastus` will be redirected to connect with the replica in `westus`.
+
+![Diagram of Azure SignalR replica failover. ](./media/howto-enable-geo-replication/signalr-replica-failover.png "Replica Failover")
+
+Once the issue in `eastus` is resolved and the region is back online, the health check will succeed. Clients in `eastus` will then, once again, be directed to the replica in their region. This transition is smooth as the connected clients will not be impacted until those existing connections are closed.
+
+![Diagram of Azure SignalR replica failover recovery. ](./media/howto-enable-geo-replication/signalr-replica-failover-recovery.png "Replica Failover Recover")
+
+This failover and recovery process is **automatic** and requires no manual intervention.
+
+For **server connections**, failover and recovery work the same way as they do for client connections.
+> [!NOTE]
+> * This failover mechanism is for the Azure SignalR service. Regional outages of the app server are beyond the scope of this document.
+ ## Impact on performance after adding replicas
-Post replica addition, your clients will be distributed across different locations based on their geographical locations. SignalR must synchronize data across these replicas. The cost for synchronization is negligible if your use case primarily involves sending to large groups (size >100) or broadcasting. However, the cost becomes more apparent when sending to smaller groups (size < 10) or a single user.
+After replicas are enabled, clients will naturally distribute based on their geographical locations. While SignalR takes on the responsibility of synchronizing data across these replicas, the associated overhead on [Server Load](signalr-concept-performance.md#quick-evaluation-using-metrics) is minimal for most common use cases.
+
+Specifically, if your application typically broadcasts to larger groups (size >10) or a single connection, the performance impact of synchronization is barely noticeable. If you're messaging small groups (size < 10) or individual users, you might notice a bit more synchronization overhead.
To ensure effective failover management, it is recommended to set each replica's unit size to handle all traffic. Alternatively, you could enable [autoscaling](signalr-howto-scale-autoscale.md) to manage this.
azure-signalr Howto Use Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/howto-use-managed-identity.md
# Managed identities for Azure SignalR Service
-In Azure SignalR Service, you can use a managed identity from Azure Active Directory to:
+In Azure SignalR Service, you can use a managed identity from Microsoft Entra ID to:
- Obtain access tokens
- Access secrets in Azure Key Vault
This article shows you how to create a managed identity for Azure SignalR Servic
To use a managed identity, you must have the following items:
- An Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
-- An Azure SignalR resource.
+- An Azure SignalR resource.
- Upstream resources that you want to access. For example, an Azure Key Vault resource.
- An Azure Function app.

## Add a managed identity to Azure SignalR Service
-You can add a managed identity to Azure SignalR Service in the Azure portal or the Azure CLI. This article shows you how to add a managed identity to Azure SignalR Service in the Azure portal.
+You can add a managed identity to Azure SignalR Service in the Azure portal or the Azure CLI. This article shows you how to add a managed identity to Azure SignalR Service in the Azure portal.
### Add a system-assigned identity
To add a system-assigned identity to your SignalR instance:
1. Browse to your SignalR instance in the Azure portal.
1. Select **Identity**.
-1. On the **System assigned** tab, switch **Status** to **On**.
+1. On the **System assigned** tab, switch **Status** to **On**.
1. Select **Save**.
- :::image type="content" source="media/signalr-howto-use-managed-identity/system-identity-portal.png" alt-text="Add a system-assigned identity in the portal":::
+ :::image type="content" source="media/signalr-howto-use-managed-identity/system-identity-portal.png" alt-text="Screenshot showing Add a system-assigned identity in the portal.":::
1. Select **Yes** to confirm the change.
To add a user-assigned identity to your SignalR instance, you need to create the
1. On the **User assigned** tab, select **Add**.
1. Select the identity from the **User assigned managed identities** drop-down menu.
1. Select **Add**.
- :::image type="content" source="media/signalr-howto-use-managed-identity/user-identity-portal.png" alt-text="Add a user-assigned identity in the portal":::
+ :::image type="content" source="media/signalr-howto-use-managed-identity/user-identity-portal.png" alt-text="Screenshot showing Add a user-assigned identity in the portal.":::
## Use a managed identity in serverless scenarios
-Azure SignalR Service is a fully managed service. It uses a managed identity to obtain an access token. In serverless scenarios, the service adds the access token into the `Authorization` header in an upstream request.
+Azure SignalR Service is a fully managed service. It uses a managed identity to obtain an access token. In serverless scenarios, the service adds the access token into the `Authorization` header in an upstream request.
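To make the flow concrete, here's an illustrative sketch of an upstream endpoint implemented as an Azure Function (in-process model; the function name and route are hypothetical) that reads the token from the header before validating it:

```csharp
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;

public static class UpstreamEndpoint
{
    // Sketch only: a production upstream endpoint must validate the token
    // (audience, issuer, signature) before trusting the request.
    [FunctionName("UpstreamEndpoint")]
    public static IActionResult Run(
        [HttpTrigger(AuthorizationLevel.Anonymous, "post")] HttpRequest req)
    {
        // The service places the managed-identity access token in this header.
        string authorization = req.Headers["Authorization"].ToString(); // "Bearer eyJ..."
        return string.IsNullOrEmpty(authorization)
            ? new UnauthorizedResult()
            : new OkResult();
    }
}
```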
### Enable managed identity authentication in upstream settings
Once you've added a [system-assigned identity](#add-a-system-assigned-identity)
1. Browse to your SignalR instance.
1. Select **Settings** from the menu.
1. Select the **Serverless** service mode.
-1. Enter the upstream endpoint URL pattern in the **Add an upstream URL pattern** text box. See [URL template settings](concept-upstream.md#url-template-settings)
+1. Enter the upstream endpoint URL pattern in the **Add an upstream URL pattern** text box. See [URL template settings](concept-upstream.md#url-template-settings).
1. Select **Add one Upstream Setting** and select the asterisk to go to **Upstream Settings**.
- :::image type="content" source="media/signalr-howto-use-managed-identity/pre-msi-settings.png" alt-text="Screenshot of Azure SignalR service Settings.":::
+ :::image type="content" source="media/signalr-howto-use-managed-identity/pre-msi-settings.png" alt-text="Screenshot of Azure SignalR service Settings.":::
-1. Configure your upstream endpoint settings.
+1. Configure your upstream endpoint settings.
- :::image type="content" source="media/signalr-howto-use-managed-identity/msi-settings.png" alt-text="Screenshot of Azure SignalR service Upstream settings.":::
+ :::image type="content" source="media/signalr-howto-use-managed-identity/msi-settings.png" alt-text="Screenshot of Azure SignalR service Upstream settings.":::
1. In the managed identity authentication settings, for **Resource**, you can specify the target resource. The resource will become an `aud` claim in the obtained access token, which can be used as a part of validation in your upstream endpoints. The resource can be one of the following formats:
- - Empty
- - Application (client) ID of the service principal
- - Application ID URI of the service principal
- - Resource ID of an Azure service (For a list of Azure services that support managed identities, see [Azure services that support managed identities](../active-directory/managed-identities-azure-resources/services-support-managed-identities.md#azure-services-that-support-azure-ad-authentication).)
- > [!NOTE]
- > If you manually validate an access token your service, you can choose any one of the resource formats. Make sure that the **Resource** value in **Auth** settings and the validation are consistent. When you use Azure role-based access control (Azure RBAC) for a data plane, you must use the resource format that the service provider requests.
+ - Empty
+ - Application (client) ID of the service principal
+ - Application ID URI of the service principal
+ - Resource ID of an Azure service (For a list of Azure services that support managed identities, see [Azure services that support managed identities](../active-directory/managed-identities-azure-resources/services-support-managed-identities.md#azure-services-that-support-azure-ad-authentication).)
+
+ > [!NOTE]
+   > If you manually validate an access token in your service, you can choose any one of the resource formats. Make sure that the **Resource** value in **Auth** settings and the validation are consistent. When you use Azure role-based access control (Azure RBAC) for a data plane, you must use the resource format that the service provider requests.
### Validate access tokens
The token in the `Authorization` header is a [Microsoft identity platform access
To validate access tokens, your app should also validate the audience and the signing keys. These values need to be validated against the values in the OpenID discovery document. For example, see the [tenant-independent version of the document](https://login.microsoftonline.com/common/.well-known/openid-configuration).
-The Azure Active Directory (Azure AD) middleware has built-in capabilities for validating access tokens. You can browse through our [samples](../active-directory/develop/sample-v2-code.md) to find one in the language of your choice.
+The Microsoft Entra ID middleware has built-in capabilities for validating access tokens. You can browse through our [samples](../active-directory/develop/sample-v2-code.md) to find one in the language of your choice.
-Libraries and code samples that show how to handle token validation are available. There are also several open-source partner libraries available for JSON Web Token (JWT) validation. There's at least one option for almost every platform and language. For more information about Azure AD authentication libraries and code samples, see [Microsoft identity platform authentication libraries](../active-directory/develop/reference-v2-libraries.md).
+Libraries and code samples that show how to handle token validation are available. There are also several open-source partner libraries available for JSON Web Token (JWT) validation. There's at least one option for almost every platform and language. For more information about Microsoft Entra authentication libraries and code samples, see [Microsoft identity platform authentication libraries](../active-directory/develop/reference-v2-libraries.md).
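As a hedged sketch of such validation (using the `System.IdentityModel.Tokens.Jwt` and `Microsoft.IdentityModel.Protocols.OpenIdConnect` packages; the audience and tenant values are placeholders that you'd match to your upstream **Auth** settings):

```csharp
using System.IdentityModel.Tokens.Jwt;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.IdentityModel.Protocols;
using Microsoft.IdentityModel.Protocols.OpenIdConnect;
using Microsoft.IdentityModel.Tokens;

public static class TokenValidator
{
    public static async Task<bool> IsValidAsync(string accessToken, string audience, string tenantId)
    {
        // Fetch the signing keys from the OpenID discovery document.
        var configManager = new ConfigurationManager<OpenIdConnectConfiguration>(
            $"https://login.microsoftonline.com/{tenantId}/.well-known/openid-configuration",
            new OpenIdConnectConfigurationRetriever());
        var config = await configManager.GetConfigurationAsync(CancellationToken.None);

        try
        {
            new JwtSecurityTokenHandler().ValidateToken(accessToken, new TokenValidationParameters
            {
                ValidAudience = audience,                             // the Resource from upstream Auth settings
                ValidIssuer = $"https://sts.windows.net/{tenantId}/", // v1.0 issuer format
                IssuerSigningKeys = config.SigningKeys
            }, out _);
            return true;
        }
        catch (SecurityTokenException)
        {
            return false;
        }
    }
}
```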
#### Authentication in Function App
You can easily set access validation for a Function App without code changes usi
1. Select **Authentication** from the menu.
1. Select **Add identity provider**.
1. In the **Basics** tab, select **Microsoft** from the **Identity provider** dropdown.
-1. Select **Log in with Azure Active Directory** in **Action to take when request is not authenticated**.
-1. Select **Microsoft** in the identity provider dropdown. The option to create a new registration is selected by default. You can change the name of the registration. For more information on enabling Azure AD provider, see [Configure your App Service or Azure Functions app to use Azure AD login](../app-service/configure-authentication-provider-aad.md)
- :::image type="content" source="media/signalr-howto-use-managed-identity/function-aad.png" alt-text="Function Aad":::
+1. Select **Log in with Microsoft Entra ID** in **Action to take when request is not authenticated**.
+1. Select **Microsoft** in the identity provider dropdown. The option to create a new registration is selected by default. You can change the name of the registration. For more information on enabling the Microsoft Entra ID provider, see [Configure your App Service or Azure Functions app to sign in with Microsoft Entra ID](../app-service/configure-authentication-provider-aad.md).
+ :::image type="content" source="media/signalr-howto-use-managed-identity/function-aad.png" alt-text="Function Microsoft Entra ID":::
1. Navigate to SignalR Service and follow the [steps](howto-use-managed-identity.md#add-a-system-assigned-identity) to add a system-assigned identity or user-assigned identity.
1. Go to **Upstream settings** in SignalR Service and choose **Use Managed Identity** and **Select from existing Applications**. Select the application you created previously.

After you configure these settings, the Function App will reject requests without an access token in the header.

> [!IMPORTANT]
-> To pass the authentication, the *Issuer Url* must match the *iss* claim in token. Currently, we only support v1 endpoint (see [v1.0 and v2.0](../active-directory/develop/access-tokens.md)).
+> To pass the authentication, the _Issuer Url_ must match the _iss_ claim in the token. Currently, we only support the v1 endpoint (see [v1.0 and v2.0](../active-directory/develop/access-tokens.md)).
-To verify the *Issuer Url* format in your Function app:
+To verify the _Issuer Url_ format in your Function app:
1. Go to the Function app in the portal.
1. Select **Authentication**.
1. Select **Identity provider**.
1. Select **Edit**.
1. Select **Issuer Url**.
-1. Verify that the *Issuer Url* has the format `https://sts.windows.net/<tenant-id>/`.
+1. Verify that the _Issuer Url_ has the format `https://sts.windows.net/<tenant-id>/`.
## Use a managed identity for Key Vault reference
azure-signalr Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/policy-reference.md
Title: Built-in policy definitions for Azure SignalR description: Lists Azure Policy built-in policy definitions for Azure SignalR. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2023 Last updated : 08/30/2023
azure-signalr Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure SignalR description: Lists Azure Policy Regulatory Compliance controls available for Azure SignalR. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/03/2023 Last updated : 08/25/2023
azure-signalr Signalr Concept Authenticate Oauth https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-concept-authenticate-oauth.md
ms.devlang: csharp + # Azure SignalR Service authentication
-This tutorial builds on the chat room application introduced in the quickstart. If you have not completed [Create a chat room with SignalR Service](signalr-quickstart-dotnet-core.md), complete that exercise first.
+This tutorial builds on the chat room application introduced in the quickstart. If you haven't completed [Create a chat room with SignalR Service](signalr-quickstart-dotnet-core.md), complete that exercise first.
-In this tutorial, you'll learn how to implement your own authentication and integrate it with the Microsoft Azure SignalR Service.
+In this tutorial, you learn how to create your own authentication method and integrate it with the Microsoft Azure SignalR Service.
-The authentication initially used in the quickstart's chat room application is too simple for real-world scenarios. The application allows each client to claim who they are, and the server simply accepts that. This approach is not very useful in real-world applications where a rogue user would impersonate others to access sensitive data.
+The authentication initially used in the quickstart's chat room application is too simple for real-world scenarios. The application allows each client to claim who they are, and the server simply accepts that. This approach lacks effectiveness in real-world applications, as it fails to prevent malicious users who might assume false identities from gaining access to sensitive data.
-[GitHub](https://github.com/) provides authentication APIs based on a popular industry-standard protocol called [OAuth](https://oauth.net/). These APIs allow third-party applications to authenticate GitHub accounts. In this tutorial, you will use these APIs to implement authentication through a GitHub account before allowing client logins to the chat room application. After authenticating a GitHub account, the account information will be added as a cookie to be used by the web client to authenticate.
+[GitHub](https://github.com/) provides authentication APIs based on a popular industry-standard protocol called [OAuth](https://oauth.net/). These APIs allow third-party applications to authenticate GitHub accounts. In this tutorial, you use these APIs to implement authentication through a GitHub account before allowing client logins to the chat room application. After authenticating a GitHub account, the account information will be added as a cookie to be used by the web client to authenticate.
For more information on the OAuth authentication APIs provided through GitHub, see [Basics of Authentication](https://developer.github.com/v3/guides/basics-of-authentication/).
The code for this tutorial is available for download in the [AzureSignalR-sample
In this tutorial, you learn how to:

> [!div class="checklist"]
-> * Register a new OAuth app with your GitHub account
-> * Add an authentication controller to support GitHub authentication
-> * Deploy your ASP.NET Core web app to Azure
+>
+> - Register a new OAuth app with your GitHub account
+> - Add an authentication controller to support GitHub authentication
+> - Deploy your ASP.NET Core web app to Azure
[!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
To complete this tutorial, you must have the following prerequisites:
1. Open a web browser, navigate to `https://github.com`, and sign in to your account.
-2. For your account, navigate to **Settings** > **Developer settings** and click **Register a new application**, or **New OAuth App** under *OAuth Apps*.
+2. For your account, navigate to **Settings** > **Developer settings** and select **Register a new application**, or **New OAuth App** under _OAuth Apps_.
-3. Use the following settings for the new OAuth App, then click **Register application**:
+3. Use the following settings for the new OAuth App, then select **Register application**:
- | Setting Name | Suggested Value | Description |
- | | | -- |
- | Application name | *Azure SignalR Chat* | The GitHub user should be able to recognize and trust the app they are authenticating with. |
- | Homepage URL | `http://localhost:5000` | |
- | Application description | *A chat room sample using the Azure SignalR Service with GitHub authentication* | A useful description of the application that will help your application users understand the context of the authentication being used. |
- | Authorization callback URL | `http://localhost:5000/signin-github` | This setting is the most important setting for your OAuth application. It's the callback URL that GitHub returns the user to after successful authentication. In this tutorial, you must use the default callback URL for the *AspNet.Security.OAuth.GitHub* package, */signin-github*. |
+ | Setting Name | Suggested Value | Description |
+ | -- | - | |
+ | Application name | _Azure SignalR Chat_ | The GitHub user should be able to recognize and trust the app they're authenticating with. |
+ | Homepage URL | `http://localhost:5000` | |
+ | Application description | _A chat room sample using the Azure SignalR Service with GitHub authentication_ | A useful description of the application that will help your application users understand the context of the authentication being used. |
+ | Authorization callback URL | `http://localhost:5000/signin-github` | This setting is the most important setting for your OAuth application. It's the callback URL that GitHub returns the user to after successful authentication. In this tutorial, you must use the default callback URL for the _AspNet.Security.OAuth.GitHub_ package, _/signin-github_. |
-4. Once the new OAuth app registration is complete, add the *Client ID* and *Client Secret* to Secret Manager using the following commands. Replace *Your_GitHub_Client_Id* and *Your_GitHub_Client_Secret* with the values for your OAuth app.
+4. Once the new OAuth app registration is complete, add the _Client ID_ and _Client Secret_ to Secret Manager using the following commands. Replace _Your_GitHub_Client_Id_ and _Your_GitHub_Client_Secret_ with the values for your OAuth app.
- ```dotnetcli
- dotnet user-secrets set GitHubClientId Your_GitHub_Client_Id
- dotnet user-secrets set GitHubClientSecret Your_GitHub_Client_Secret
- ```
+ ```dotnetcli
+ dotnet user-secrets set GitHubClientId Your_GitHub_Client_Id
+ dotnet user-secrets set GitHubClientSecret Your_GitHub_Client_Secret
+ ```
## Implement the OAuth flow

### Update the Startup class to support GitHub authentication
-1. Add a reference to the latest *Microsoft.AspNetCore.Authentication.Cookies* and *AspNet.Security.OAuth.GitHub* packages and restore all packages.
-
- ```dotnetcli
- dotnet add package Microsoft.AspNetCore.Authentication.Cookies -v 2.1.0-rc1-30656
- dotnet add package AspNet.Security.OAuth.GitHub -v 2.0.0-rc2-final
- dotnet restore
- ```
-
-1. Open *Startup.cs*, and add `using` statements for the following namespaces:
-
- ```csharp
- using System.Net.Http;
- using System.Net.Http.Headers;
- using System.Security.Claims;
- using Microsoft.AspNetCore.Authentication.Cookies;
- using Microsoft.AspNetCore.Authentication.OAuth;
- using Newtonsoft.Json.Linq;
- ```
-
-2. At the top of the `Startup` class, add constants for the Secret Manager keys that hold the GitHub OAuth app secrets.
-
- ```csharp
- private const string GitHubClientId = "GitHubClientId";
- private const string GitHubClientSecret = "GitHubClientSecret";
- ```
-
-3. Add the following code to the `ConfigureServices` method to support authentication with the GitHub OAuth app:
-
- ```csharp
- services.AddAuthentication(CookieAuthenticationDefaults.AuthenticationScheme)
- .AddCookie()
- .AddGitHub(options =>
- {
- options.ClientId = Configuration[GitHubClientId];
- options.ClientSecret = Configuration[GitHubClientSecret];
- options.Scope.Add("user:email");
- options.Events = new OAuthEvents
- {
- OnCreatingTicket = GetUserCompanyInfoAsync
- };
- });
- ```
-
-4. Add the `GetUserCompanyInfoAsync` helper method to the `Startup` class.
-
- ```csharp
- private static async Task GetUserCompanyInfoAsync(OAuthCreatingTicketContext context)
- {
- var request = new HttpRequestMessage(HttpMethod.Get, context.Options.UserInformationEndpoint);
- request.Headers.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));
- request.Headers.Authorization = new AuthenticationHeaderValue("Bearer", context.AccessToken);
-
- var response = await context.Backchannel.SendAsync(request,
- HttpCompletionOption.ResponseHeadersRead, context.HttpContext.RequestAborted);
-
- var user = JObject.Parse(await response.Content.ReadAsStringAsync());
- if (user.ContainsKey("company"))
- {
- var company = user["company"].ToString();
- var companyIdentity = new ClaimsIdentity(new[]
- {
- new Claim("Company", company)
- });
- context.Principal.AddIdentity(companyIdentity);
- }
- }
- ```
-
-5. Update the `Configure` method of the Startup class with the following line of code, and save the file.
-
- ```csharp
- app.UseAuthentication();
- ```
+1. Add a reference to the latest _Microsoft.AspNetCore.Authentication.Cookies_ and _AspNet.Security.OAuth.GitHub_ packages and restore all packages.
+
+ ```dotnetcli
+ dotnet add package Microsoft.AspNetCore.Authentication.Cookies -v 2.1.0-rc1-30656
+ dotnet add package AspNet.Security.OAuth.GitHub -v 2.0.0-rc2-final
+ dotnet restore
+ ```
+
+1. Open _Startup.cs_, and add `using` statements for the following namespaces:
+
+ ```csharp
+ using System.Net.Http;
+ using System.Net.Http.Headers;
+ using System.Security.Claims;
+ using Microsoft.AspNetCore.Authentication.Cookies;
+ using Microsoft.AspNetCore.Authentication.OAuth;
+ using Newtonsoft.Json.Linq;
+ ```
+
+1. At the top of the `Startup` class, add constants for the Secret Manager keys that hold the GitHub OAuth app secrets.
+
+ ```csharp
+ private const string GitHubClientId = "GitHubClientId";
+ private const string GitHubClientSecret = "GitHubClientSecret";
+ ```
+
+1. Add the following code to the `ConfigureServices` method to support authentication with the GitHub OAuth app:
+
+ ```csharp
+ services.AddAuthentication(CookieAuthenticationDefaults.AuthenticationScheme)
+ .AddCookie()
+ .AddGitHub(options =>
+ {
+ options.ClientId = Configuration[GitHubClientId];
+ options.ClientSecret = Configuration[GitHubClientSecret];
+ options.Scope.Add("user:email");
+ options.Events = new OAuthEvents
+ {
+ OnCreatingTicket = GetUserCompanyInfoAsync
+ };
+ });
+ ```
+
+1. Add the `GetUserCompanyInfoAsync` helper method to the `Startup` class.
+
+ ```csharp
+ private static async Task GetUserCompanyInfoAsync(OAuthCreatingTicketContext context)
+ {
+ var request = new HttpRequestMessage(HttpMethod.Get, context.Options.UserInformationEndpoint);
+ request.Headers.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));
+ request.Headers.Authorization = new AuthenticationHeaderValue("Bearer", context.AccessToken);
+
+ var response = await context.Backchannel.SendAsync(request,
+ HttpCompletionOption.ResponseHeadersRead, context.HttpContext.RequestAborted);
+
+ var user = JObject.Parse(await response.Content.ReadAsStringAsync());
+ if (user.ContainsKey("company"))
+ {
+ var company = user["company"].ToString();
+ var companyIdentity = new ClaimsIdentity(new[]
+ {
+ new Claim("Company", company)
+ });
+ context.Principal.AddIdentity(companyIdentity);
+ }
+ }
+ ```
+
+1. Update the `Configure` method of the Startup class with the following line of code, and save the file.
+
+ ```csharp
+ app.UseAuthentication();
+ ```
### Add an authentication controller

In this section, you will implement a `Login` API that authenticates clients using the GitHub OAuth app. Once authenticated, the API will add a cookie to the web client response before redirecting the client back to the chat app. That cookie will then be used to identify the client.
-1. Add a new controller code file to the *chattest\Controllers* directory. Name the file *AuthController.cs*.
-
-2. Add the following code for the authentication controller. Make sure to update the namespace, if your project directory was not *chattest*:
-
- ```csharp
- using AspNet.Security.OAuth.GitHub;
- using Microsoft.AspNetCore.Authentication;
- using Microsoft.AspNetCore.Mvc;
-
- namespace chattest.Controllers
- {
- [Route("/")]
- public class AuthController : Controller
- {
- [HttpGet("login")]
- public IActionResult Login()
- {
- if (!User.Identity.IsAuthenticated)
- {
- return Challenge(GitHubAuthenticationDefaults.AuthenticationScheme);
- }
-
- HttpContext.Response.Cookies.Append("githubchat_username", User.Identity.Name);
- HttpContext.SignInAsync(User);
- return Redirect("/");
- }
- }
- }
- ```
+1. Add a new controller code file to the _chattest\Controllers_ directory. Name the file _AuthController.cs_.
+
+2. Add the following code for the authentication controller. Make sure to update the namespace if your project directory wasn't _chattest_:
+
+ ```csharp
+ using AspNet.Security.OAuth.GitHub;
+ using Microsoft.AspNetCore.Authentication;
+ using Microsoft.AspNetCore.Mvc;
+
+ namespace chattest.Controllers
+ {
+ [Route("/")]
+ public class AuthController : Controller
+ {
+ [HttpGet("login")]
+ public IActionResult Login()
+ {
+ if (!User.Identity.IsAuthenticated)
+ {
+ return Challenge(GitHubAuthenticationDefaults.AuthenticationScheme);
+ }
+
+ HttpContext.Response.Cookies.Append("githubchat_username", User.Identity.Name);
+ HttpContext.SignInAsync(User);
+ return Redirect("/");
+ }
+ }
+ }
+ ```
3. Save your changes.

### Update the Hub class
-By default when a web client attempts to connect to SignalR Service, the connection is granted based on an access token that is provided internally. This access token is not associated with an authenticated identity. This access is actually anonymous access.
+By default when a web client attempts to connect to SignalR Service, the connection is granted based on an access token that is provided internally. This access token isn't associated with an authenticated identity.
+In effect, this is anonymous access.
In this section, you will turn on real authentication by adding the `Authorize` attribute to the hub class, and updating the hub methods to read the username from the authenticated user's claim.
-1. Open *Hub\Chat.cs* and add references to these namespaces:
+1. Open _Hub\Chat.cs_ and add references to these namespaces:
- ```csharp
- using System.Threading.Tasks;
- using Microsoft.AspNetCore.Authorization;
- ```
+ ```csharp
+ using System.Threading.Tasks;
+ using Microsoft.AspNetCore.Authorization;
+ ```
2. Update the hub code as shown below. This code adds the `Authorize` attribute to the `Chat` class, and uses the user's authenticated identity in the hub methods. Also, the `OnConnectedAsync` method is added, which will log a system message to the chat room each time a new client connects.
- ```csharp
- [Authorize]
- public class Chat : Hub
- {
- public override Task OnConnectedAsync()
- {
- return Clients.All.SendAsync("broadcastMessage", "_SYSTEM_", $"{Context.User.Identity.Name} JOINED");
- }
-
- // Uncomment this line to only allow user in Microsoft to send message
- //[Authorize(Policy = "Microsoft_Only")]
- public void BroadcastMessage(string message)
- {
- Clients.All.SendAsync("broadcastMessage", Context.User.Identity.Name, message);
- }
-
- public void Echo(string message)
- {
- var echoMessage = $"{message} (echo from server)";
- Clients.Client(Context.ConnectionId).SendAsync("echo", Context.User.Identity.Name, echoMessage);
- }
- }
- ```
+ ```csharp
+ [Authorize]
+ public class Chat : Hub
+ {
+ public override Task OnConnectedAsync()
+ {
+ return Clients.All.SendAsync("broadcastMessage", "_SYSTEM_", $"{Context.User.Identity.Name} JOINED");
+ }
+
+ // Uncomment this line to only allow user in Microsoft to send message
+ //[Authorize(Policy = "Microsoft_Only")]
+ public void BroadcastMessage(string message)
+ {
+ Clients.All.SendAsync("broadcastMessage", Context.User.Identity.Name, message);
+ }
+
+ public void Echo(string message)
+ {
+ var echoMessage = $"{message} (echo from server)";
+ Clients.Client(Context.ConnectionId).SendAsync("echo", Context.User.Identity.Name, echoMessage);
+ }
+ }
+ ```
3. Save your changes.

### Update the web client code
-1. Open *wwwroot\https://docsupdatetracker.net/index.html* and replace the code that prompts for the username with code to use the cookie returned by the authentication controller.
-
- Remove the following code from *https://docsupdatetracker.net/index.html*:
-
- ```javascript
- // Get the user name and store it to prepend to messages.
- var username = generateRandomName();
- var promptMessage = 'Enter your name:';
- do {
- username = prompt(promptMessage, username);
- if (!username || username.startsWith('_') || username.indexOf('<') > -1 || username.indexOf('>') > -1) {
- username = '';
- promptMessage = 'Invalid input. Enter your name:';
- }
- } while(!username)
- ```
-
- Add the following code in place of the code above to use the cookie:
-
- ```javascript
- // Get the user name cookie.
- function getCookie(key) {
- var cookies = document.cookie.split(';').map(c => c.trim());
- for (var i = 0; i < cookies.length; i++) {
- if (cookies[i].startsWith(key + '=')) return unescape(cookies[i].slice(key.length + 1));
- }
- return '';
- }
- var username = getCookie('githubchat_username');
- ```
+1. Open _wwwroot\index.html_ and replace the code that prompts for the username with code to use the cookie returned by the authentication controller.
+
+   Remove the following code from _index.html_:
+
+ ```javascript
+ // Get the user name and store it to prepend to messages.
+ var username = generateRandomName();
+ var promptMessage = "Enter your name:";
+ do {
+ username = prompt(promptMessage, username);
+ if (
+ !username ||
+ username.startsWith("_") ||
+ username.indexOf("<") > -1 ||
+ username.indexOf(">") > -1
+ ) {
+ username = "";
+ promptMessage = "Invalid input. Enter your name:";
+ }
+ } while (!username);
+ ```
+
+ Add the following code in place of the code above to use the cookie:
+
+ ```javascript
+ // Get the user name cookie.
+ function getCookie(key) {
+ var cookies = document.cookie.split(";").map((c) => c.trim());
+ for (var i = 0; i < cookies.length; i++) {
+ if (cookies[i].startsWith(key + "="))
+ return unescape(cookies[i].slice(key.length + 1));
+ }
+ return "";
+ }
+ var username = getCookie("githubchat_username");
+ ```
2. Just beneath the line of code you added to use the cookie, add the following definition for the `appendMessage` function:
- ```javascript
- function appendMessage(encodedName, encodedMsg) {
- var messageEntry = createMessageEntry(encodedName, encodedMsg);
- var messageBox = document.getElementById('messages');
- messageBox.appendChild(messageEntry);
- messageBox.scrollTop = messageBox.scrollHeight;
- }
- ```
+ ```javascript
+ function appendMessage(encodedName, encodedMsg) {
+ var messageEntry = createMessageEntry(encodedName, encodedMsg);
+ var messageBox = document.getElementById("messages");
+ messageBox.appendChild(messageEntry);
+ messageBox.scrollTop = messageBox.scrollHeight;
+ }
+ ```
3. Update the `bindConnectionMessage` and `onConnected` functions with the following code to use `appendMessage`.
- ```javascript
- function bindConnectionMessage(connection) {
- var messageCallback = function(name, message) {
- if (!message) return;
- // Html encode display name and message.
- var encodedName = name;
- var encodedMsg = message.replace(/&/g, "&amp;").replace(/</g, "&lt;").replace(/>/g, "&gt;");
- appendMessage(encodedName, encodedMsg);
- };
- // Create a function that the hub can call to broadcast messages.
- connection.on('broadcastMessage', messageCallback);
- connection.on('echo', messageCallback);
- connection.onclose(onConnectionError);
- }
-
- function onConnected(connection) {
- console.log('connection started');
- document.getElementById('sendmessage').addEventListener('click', function (event) {
- // Call the broadcastMessage method on the hub.
- if (messageInput.value) {
- connection
- .invoke('broadcastMessage', messageInput.value)
- .catch(e => appendMessage('_BROADCAST_', e.message));
- }
-
- // Clear text box and reset focus for next comment.
- messageInput.value = '';
- messageInput.focus();
- event.preventDefault();
- });
- document.getElementById('message').addEventListener('keypress', function (event) {
- if (event.keyCode === 13) {
- event.preventDefault();
- document.getElementById('sendmessage').click();
- return false;
- }
- });
- document.getElementById('echo').addEventListener('click', function (event) {
- // Call the echo method on the hub.
- connection.send('echo', messageInput.value);
-
- // Clear text box and reset focus for next comment.
- messageInput.value = '';
- messageInput.focus();
- event.preventDefault();
- });
- }
- ```
-
-4. At the bottom of *https://docsupdatetracker.net/index.html*, update the error handler for `connection.start()` as shown below to prompt the user to log in.
-
- ```javascript
- connection.start()
- .then(function () {
- onConnected(connection);
- })
- .catch(function (error) {
- if (error) {
- if (error.message) {
- console.error(error.message);
- }
- if (error.statusCode && error.statusCode === 401) {
- appendMessage('_BROADCAST_', 'You\'re not logged in. Click <a href="/login">here</a> to login with GitHub.');
- }
- }
- });
- ```
+ ```javascript
+ function bindConnectionMessage(connection) {
+ var messageCallback = function (name, message) {
+ if (!message) return;
+ // Html encode display name and message.
+ var encodedName = name;
+ var encodedMsg = message
+ .replace(/&/g, "&amp;")
+ .replace(/</g, "&lt;")
+ .replace(/>/g, "&gt;");
+ appendMessage(encodedName, encodedMsg);
+ };
+ // Create a function that the hub can call to broadcast messages.
+ connection.on("broadcastMessage", messageCallback);
+ connection.on("echo", messageCallback);
+ connection.onclose(onConnectionError);
+ }
+
+ function onConnected(connection) {
+ console.log("connection started");
+ document
+ .getElementById("sendmessage")
+ .addEventListener("click", function (event) {
+ // Call the broadcastMessage method on the hub.
+ if (messageInput.value) {
+ connection
+ .invoke("broadcastMessage", messageInput.value)
+ .catch((e) => appendMessage("_BROADCAST_", e.message));
+ }
+
+ // Clear text box and reset focus for next comment.
+ messageInput.value = "";
+ messageInput.focus();
+ event.preventDefault();
+ });
+ document
+ .getElementById("message")
+ .addEventListener("keypress", function (event) {
+ if (event.keyCode === 13) {
+ event.preventDefault();
+ document.getElementById("sendmessage").click();
+ return false;
+ }
+ });
+ document
+ .getElementById("echo")
+ .addEventListener("click", function (event) {
+ // Call the echo method on the hub.
+ connection.send("echo", messageInput.value);
+
+ // Clear text box and reset focus for next comment.
+ messageInput.value = "";
+ messageInput.focus();
+ event.preventDefault();
+ });
+ }
+ ```
+
+4. At the bottom of _index.html_, update the error handler for `connection.start()` as shown below to prompt the user to sign in.
+
+ ```javascript
+ connection
+ .start()
+ .then(function () {
+ onConnected(connection);
+ })
+ .catch(function (error) {
+ if (error) {
+ if (error.message) {
+ console.error(error.message);
+ }
+ if (error.statusCode && error.statusCode === 401) {
+ appendMessage(
+ "_BROADCAST_",
+ 'You\'re not logged in. Click <a href="/login">here</a> to login with GitHub.'
+ );
+ }
+ }
+ });
+ ```
5. Save your changes.
In this section, you will turn on real authentication by adding the `Authorize`
2. Build the app using the .NET Core CLI by executing the following command in the command shell:
- ```dotnetcli
- dotnet build
- ```
+ ```dotnetcli
+ dotnet build
+ ```
3. Once the build successfully completes, execute the following command to run the web app locally:
- ```dotnetcli
- dotnet run
- ```
+ ```dotnetcli
+ dotnet run
+ ```
- By default, the app will be hosted locally on port 5000:
+ The app is hosted locally on port 5000 by default:
- ```output
- E:\Testing\chattest>dotnet run
- Hosting environment: Production
- Content root path: E:\Testing\chattest
- Now listening on: http://localhost:5000
- Application started. Press Ctrl+C to shut down.
- ```
+ ```output
+ E:\Testing\chattest>dotnet run
+ Hosting environment: Production
+ Content root path: E:\Testing\chattest
+ Now listening on: http://localhost:5000
+ Application started. Press Ctrl+C to shut down.
+ ```
-4. Launch a browser window and navigate to `http://localhost:5000`. Click the **here** link at the top to log in with GitHub.
+4. Launch a browser window and navigate to `http://localhost:5000`. Select the **here** link at the top to sign in with GitHub.
- ![OAuth Complete hosted in Azure](media/signalr-concept-authenticate-oauth/signalr-oauth-complete-azure.png)
+ ![OAuth Complete hosted in Azure](media/signalr-concept-authenticate-oauth/signalr-oauth-complete-azure.png)
- You will be prompted to authorize the chat app's access to your GitHub account. Click the **Authorize** button.
+ You will be prompted to authorize the chat app's access to your GitHub account. Select the **Authorize** button.
- ![Authorize OAuth App](media/signalr-concept-authenticate-oauth/signalr-authorize-oauth-app.png)
+ ![Authorize OAuth App](media/signalr-concept-authenticate-oauth/signalr-authorize-oauth-app.png)
- You will be redirected back to the chat application and logged in with your GitHub account name. The web application determined you account name by authenticating you using the new authentication you added.
+ You will be redirected back to the chat application and logged in with your GitHub account name. The web application determined your account name by authenticating you using the new authentication you added.
- ![Account identified](media/signalr-concept-authenticate-oauth/signalr-oauth-account-identified.png)
+ ![Account identified](media/signalr-concept-authenticate-oauth/signalr-oauth-account-identified.png)
- Now that the chat app performs authentication with GitHub and stores the authentication information as cookies, you should deploy it to Azure so other users can authenticate with their accounts and communicate from other workstations.
+   With the chat app now performing authentication with GitHub and storing the authentication information as cookies, the next step involves deploying it to Azure.
+ This approach enables other users to authenticate using their respective accounts and communicate from various workstations.
## Deploy the app to Azure
Prepare your environment for the Azure CLI:
In this section, you will use the Azure CLI to create a new web app in [Azure App Service](../app-service/index.yml) to host your ASP.NET application in Azure. The web app will be configured to use local Git deployment. The web app will also be configured with your SignalR connection string, GitHub OAuth app secrets, and a deployment user.
-When creating the following resources, make sure to use the same resource group that your SignalR Service resource resides in. This approach will make clean up a lot easier later when you want to remove all the resources. The examples given assume you used the group name recommended in previous tutorials, *SignalRTestResources*.
+When creating the following resources, make sure to use the same resource group that your SignalR Service resource resides in. This approach will make clean up a lot easier later when you want to remove all the resources. The examples given assume you used the group name recommended in previous tutorials, _SignalRTestResources_.
### Create the web app and plan
az webapp create --name $WebAppName --resource-group $ResourceGroupName \
--plan $WebAppPlan
```
-| Parameter | Description |
-| -- | |
-| ResourceGroupName | This resource group name was suggested in previous tutorials. It is a good idea to keep all tutorial resources grouped together. Use the same resource group you used in the previous tutorials. |
-| WebAppPlan | Enter a new, unique, App Service Plan name. |
-| WebAppName | This will be the name of the new web app and part of the URL. Use a unique name. For example, signalrtestwebapp22665120. |
+| Parameter | Description |
+| -- | -- |
+| ResourceGroupName | This resource group name was suggested in previous tutorials. It's a good idea to keep all tutorial resources grouped together. Use the same resource group you used in the previous tutorials. |
+| WebAppPlan | Enter a new, unique, App Service Plan name. |
+| WebAppName | This parameter is the name of the new web app and part of the URL. Make it unique. For example, signalrtestwebapp22665120. |
### Add app settings to the web app

In this section, you will add app settings for the following components:
-* SignalR Service resource connection string
-* GitHub OAuth app client ID
-* GitHub OAuth app client secret
+- SignalR Service resource connection string
+- GitHub OAuth app client ID
+- GitHub OAuth app client secret
Copy the text for the commands below and update the parameters. Paste the updated script into the Azure Cloud Shell, and press **Enter** to add the app settings:
ResourceGroupName=SignalRTestResources
SignalRServiceResource=mySignalRresourcename
WebAppName=myWebAppName
-# Get the SignalR primary connection string
+# Get the SignalR primary connection string
primaryConnectionString=$(az signalr key list --name $SignalRServiceResource \
  --resource-group $ResourceGroupName --query primaryConnectionString -o tsv)
az webapp config appsettings set --name $WebAppName \
--settings "GitHubClientSecret=$GitHubClientSecret" ```
-| Parameter | Description |
-| -- | |
-| GitHubClientId | Assign this variable the secret Client Id for your GitHub OAuth App. |
-| GitHubClientSecret | Assign this variable the secret password for your GitHub OAuth App. |
-| ResourceGroupName | Update this variable to be the same resource group name you used in the previous section. |
+| Parameter | Description |
+| - | -- |
+| GitHubClientId | Assign this variable the secret Client ID for your GitHub OAuth App. |
+| GitHubClientSecret | Assign this variable the secret password for your GitHub OAuth App. |
+| ResourceGroupName | Update this variable to be the same resource group name you used in the previous section. |
| SignalRServiceResource | Update this variable with the name of the SignalR Service resource you created in the quickstart. For example, signalrtestsvc48778624. |
-| WebAppName | Update this variable with the name of the new web app you created in the previous section. |
+| WebAppName | Update this variable with the name of the new web app you created in the previous section. |
### Configure the web app for local Git deployment
az webapp deployment source config-local-git --name $WebAppName \
--query [url] -o tsv
```
-| Parameter | Description |
-| -- | |
-| DeploymentUserName | Choose a new deployment user name. |
-| DeploymentUserPassword | Choose a password for the new deployment user. |
-| ResourceGroupName | Use the same resource group name you used in the previous section. |
-| WebAppName | This will be the name of the new web app you created previously. |
+| Parameter | Description |
+| - | -- |
+| DeploymentUserName | Choose a new deployment user name. |
+| DeploymentUserPassword | Choose a password for the new deployment user. |
+| ResourceGroupName | Use the same resource group name you used in the previous section. |
+| WebAppName | This parameter will be the name of the new web app you created previously. |
Make a note of the Git deployment URL returned from this command. You will use this URL later.
To deploy your code, execute the following commands in a Git shell.
1. Navigate to the root of your project directory. If you don't have the project initialized with a Git repository, execute the following command:
- ```bash
- git init
- ```
+ ```bash
+ git init
+ ```
2. Add a remote for the Git deployment URL you noted earlier:
- ```bash
- git remote add Azure <your git deployment url>
- ```
+ ```bash
+ git remote add Azure <your git deployment url>
+ ```
3. Stage all files in the initialized repository and add a commit.
- ```bash
- git add -A
- git commit -m "init commit"
- ```
+ ```bash
+ git add -A
+ git commit -m "init commit"
+ ```
4. Deploy your code to the web app in Azure.
- ```bash
- git push Azure main
- ```
+ ```bash
+ git push Azure main
+ ```
- You will be prompted to authenticate in order to deploy the code to Azure. Enter the user name and password of the deployment user you created above.
+ You will be prompted to authenticate in order to deploy the code to Azure. Enter the user name and password of the deployment user you created above.
### Update the GitHub OAuth app
The last thing you need to do is update the **Homepage URL** and **Authorization
1. Open [https://github.com](https://github.com) in a browser and navigate to your account's **Settings** > **Developer settings** > **OAuth Apps**.
-2. Click on your authentication app and update the **Homepage URL** and **Authorization callback URL** as shown below:
+2. Select your authentication app and update the **Homepage URL** and **Authorization callback URL** as shown below:
- | Setting | Example |
- | - | - |
- | Homepage URL | `https://signalrtestwebapp22665120.azurewebsites.net` |
- | Authorization callback URL | `https://signalrtestwebapp22665120.azurewebsites.net/signin-github` |
+ | Setting | Example |
+ | -- | - |
+ | Homepage URL | `https://signalrtestwebapp22665120.azurewebsites.net` |
+ | Authorization callback URL | `https://signalrtestwebapp22665120.azurewebsites.net/signin-github` |
3. Navigate to your web app URL and test the application.
- ![OAuth Complete hosted in Azure](media/signalr-concept-authenticate-oauth/signalr-oauth-complete-azure.png)
+ ![OAuth Complete hosted in Azure](media/signalr-concept-authenticate-oauth/signalr-oauth-complete-azure.png)
## Clean up resources
Otherwise, if you're finished with the quickstart sample application, you can delete the Azure resources created in this quickstart to avoid incurring charges.
> [!IMPORTANT]
> Deleting a resource group is irreversible; the resource group and all the resources in it are permanently deleted. Make sure that you don't accidentally delete the wrong resource group or resources. If you created the resources for hosting this sample inside an existing resource group that contains resources you want to keep, you can delete each resource individually from their respective blades instead of deleting the resource group.
-Sign in to the [Azure portal](https://portal.azure.com) and click **Resource groups**.
+Sign in to the [Azure portal](https://portal.azure.com) and select **Resource groups**.
-In the **Filter by name...** textbox, type the name of your resource group. The instructions for this article used a resource group named *SignalRTestResources*. On your resource group in the result list, click **...** then **Delete resource group**.
+In the **Filter by name...** textbox, type the name of your resource group. The instructions for this article used a resource group named _SignalRTestResources_. On your resource group in the result list, select **...**, and then select **Delete resource group**.
![Delete](./media/signalr-concept-authenticate-oauth/signalr-delete-resource-group.png)
-You will be asked to confirm the deletion of the resource group. Type the name of your resource group to confirm, and click **Delete**.
+You will be asked to confirm the deletion of the resource group. Type the name of your resource group to confirm, and select **Delete**.
After a few moments, the resource group and all of its contained resources are deleted.
In this tutorial, you added authentication with OAuth to provide a better approach to authentication with Azure SignalR Service. To learn more about using Azure SignalR Server, continue to the Azure CLI samples for SignalR Service.
-> [!div class="nextstepaction"]
+> [!div class="nextstepaction"]
> [Azure SignalR CLI Samples](./signalr-reference-cli.md)
azure-signalr Signalr Concept Authorize Azure Active Directory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-concept-authorize-azure-active-directory.md
Title: Authorize access with Azure Active Directory for Azure SignalR Service
-description: This article provides information on authorizing access to Azure SignalR Service resources using Azure Active Directory.
+ Title: Authorize access with Microsoft Entra ID for Azure SignalR Service
+description: This article provides information on authorizing access to Azure SignalR Service resources using Microsoft Entra ID.
Last updated 09/06/2021
-# Authorize access with Azure Active Directory for Azure SignalR Service
+# Authorize access with Microsoft Entra ID for Azure SignalR Service
-Azure SignalR Service supports Azure Active Directory (Azure AD) to authorize requests to SignalR resources. With Azure AD, you can use role-based access control (RBAC) to grant permissions to a security principal<sup>[<a href="#security-principal">1</a>]</sup>. The security principal is authenticated by Azure AD, which returns an OAuth 2.0 token. The token is used to authorize a request against the SignalR resource.
+Azure SignalR Service supports Microsoft Entra ID for authorizing requests to SignalR resources. With Microsoft Entra ID, you can utilize role-based access control (RBAC) to grant permissions to a security principal<sup>[<a href="#security-principal">1</a>]</sup>. The security principal is authenticated by Microsoft Entra ID, which returns an OAuth 2.0 token. The token is then used to authorize a request against the SignalR resource.
-Authorizing requests against SignalR with Azure AD provides superior security and ease of use over Access Key authorization. It's recommended using Azure AD authorization with your SignalR resources when possible to assure access with minimum required privileges.
+Authorizing requests against SignalR with Microsoft Entra ID provides superior security and ease of use compared to Access Key authorization. We recommend using Microsoft Entra ID for authorization whenever possible, because it ensures access with the minimum required privileges.
<a id="security-principal"></a>
-*[1] security principal: a user/resource group, an application, or a service principal such as system-assigned identities and user-assigned identities.*
+_[1] security principal: a user/resource group, an application, or a service principal such as system-assigned identities and user-assigned identities._
> [!IMPORTANT]
> Disabling local authentication can have the following consequences.
-> - The current set of access keys will be permanently deleted.
-> - Tokens signed with access keys will no longer be available.
+>
+> - The current set of access keys will be permanently deleted.
+> - Tokens signed with access keys will no longer be available.
-## Overview of Azure AD for SignalR
+## Overview of Microsoft Entra ID
-When a security principal attempts to access a SignalR resource, the request must be authorized. With Azure AD, access to a resource requires 2 steps.
+When a security principal attempts to access a SignalR resource, the request must be authorized. Getting access to a resource requires two steps when using Microsoft Entra ID.
-1. The security principal has to be authenticated by Azure, who will return an OAuth 2.0 token.
-1. The token is passed as part of a request to the SignalR resource to authorize access to the resource.
+1. The security principal has to be authenticated by Microsoft Entra ID, which will then return an OAuth 2.0 token.
+1. The token is passed as part of a request to the SignalR resource for authorizing the request.
-### Client-side authentication while using Azure AD
+### Client-side authentication with Microsoft Entra ID
-When using Access Key, the key is shared between your app server (or Function App) and the SignalR resource. The SignalR service authenticates the client connection request with the shared key.
+When using Access Key, the key is shared between your app server (or Function App) and the SignalR resource. The SignalR service authenticates the client connection request with the shared key.
-Using Azure AD there is no shared key. Instead SignalR uses a **temporary access key** to sign tokens for client connections. The workflow contains four steps.
+When using Microsoft Entra ID, there is no shared key. Instead, SignalR uses a **temporary access key** for signing tokens used in client connections. The workflow contains four steps.
-1. The security principal requires an OAuth 2.0 token from Azure to authenticate itself.
+1. The security principal requires an OAuth 2.0 token from Microsoft Entra ID to authenticate itself.
2. The security principal calls SignalR Auth API to get a **temporary access key**.
3. The security principal signs a client token with the **temporary access key** for client connections during negotiation.
4. The client uses the client token to connect to Azure SignalR resources.
-The **temporary access key** expires in 90 minutes. It's recommend getting a new one and rotate the old one once an hour.
+The **temporary access key** expires in 90 minutes. We recommend getting a new one and rotating the old one once an hour.
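As a quick illustration of how an app server hands this workflow to the SDK, here's a minimal sketch, assuming an ASP.NET Core app server; the resource URL is a placeholder:

```csharp
using System;
using Azure.Identity;
using Microsoft.Azure.SignalR;

// Inside Startup.ConfigureServices. With a TokenCredential instead of an access key,
// the SDK acquires the OAuth 2.0 token and the temporary access key, and signs
// client tokens during negotiation on your behalf.
services.AddSignalR().AddAzureSignalR(option =>
{
    option.Endpoints = new ServiceEndpoint[]
    {
        new ServiceEndpoint(
            new Uri("https://<SIGNALR_RESOURCE_NAME>.service.signalr.net"),
            new DefaultAzureCredential())
    };
});
```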
The workflow is built in the [Azure SignalR SDK for app server](https://github.com/Azure/azure-signalr).

## Assign Azure roles for access rights
-Azure Active Directory (Azure AD) authorizes access rights to secured resources through [Azure role-based access control](../role-based-access-control/overview.md). Azure SignalR defines a set of Azure built-in roles that encompass common sets of permissions used to access SignalR resources. You can also define custom roles for access to SignalR resources.
+Microsoft Entra ID authorizes access rights to secured resources through [Azure role-based access control](../role-based-access-control/overview.md). Azure SignalR defines a set of Azure built-in roles that encompass common sets of permissions used to access SignalR resources. You can also define custom roles for access to SignalR resources.
### Resource scope
You may have to determine the scope of access that the security principal should have.
You can scope access to Azure SignalR resources at the following levels, beginning with the narrowest scope:
-| Scope | Description |
-|-|-|
-|**An individual resource.**| Applies to only the target resource.|
-| **A resource group.** |Applies to all of the resources in a resource group.|
-| **A subscription.** | Applies to all of the resources in a subscription.|
-| **A management group.** |Applies to all of the resources in the subscriptions included in a management group.|
-
+| Scope | Description |
+| | |
+| **An individual resource.** | Applies to only the target resource. |
+| **A resource group.** | Applies to all of the resources in a resource group. |
+| **A subscription.** | Applies to all of the resources in a subscription. |
+| **A management group.** | Applies to all of the resources in the subscriptions included in a management group. |
## Azure built-in roles for SignalR resources
-|Role|Description|Use case|
-|-|-|-|
-|[SignalR App Server](../role-based-access-control/built-in-roles.md#signalr-app-server)|Access to Websocket connection creation API and Auth APIs.|Most commonly for an App Server.|
-|[SignalR Service Owner](../role-based-access-control/built-in-roles.md#signalr-service-owner)|Full access to all data-plane APIs, including REST APIs, WebSocket connection creation API and Auth APIs.|Use for **Serverless mode** for Authorization with Azure AD since it requires both REST APIs permissions and Auth API permissions.|
-|[SignalR REST API Owner](../role-based-access-control/built-in-roles.md#signalr-rest-api-owner)|Full access to data-plane REST APIs.|Often used to write a tool that manages connections and groups but does **NOT** make connections or call Auth APIs.|
-|[SignalR REST API Reader](../role-based-access-control/built-in-roles.md#signalr-rest-api-reader)|Read-only access to data-plane REST APIs.| Commonly used to write a monitoring tool that calls **ONLY** SignalR data-plane **READONLY** REST APIs.|
+| Role | Description | Use case |
+| - | | -- |
+| [SignalR App Server](../role-based-access-control/built-in-roles.md#signalr-app-server) | Access to Websocket connection creation API and Auth APIs. | Most commonly for an App Server. |
+| [SignalR Service Owner](../role-based-access-control/built-in-roles.md#signalr-service-owner) | Full access to all data-plane APIs, including REST APIs, WebSocket connection creation API and Auth APIs. | Use for **Serverless mode** for Authorization with Microsoft Entra ID since it requires both REST APIs permissions and Auth API permissions. |
+| [SignalR REST API Owner](../role-based-access-control/built-in-roles.md#signalr-rest-api-owner) | Full access to data-plane REST APIs. | Often used to write a tool that manages connections and groups but does **NOT** make connections or call Auth APIs. |
+| [SignalR REST API Reader](../role-based-access-control/built-in-roles.md#signalr-rest-api-reader) | Read-only access to data-plane REST APIs. | Commonly used to write a monitoring tool that calls **ONLY** SignalR data-plane **READONLY** REST APIs. |
## Next steps
-To learn how to create an Azure application and use Azure AD Auth, see:
+To learn how to create a Microsoft Entra application and use Microsoft Entra authorization, see:
-- [Authorize request to SignalR resources with Azure AD from Azure applications](signalr-howto-authorize-application.md)
+- [Authorize requests to SignalR resources with Microsoft Entra applications](signalr-howto-authorize-application.md)
-To learn how to configure a managed identity and use Azure AD Auth, see:
+To learn how to configure a managed identity and use Microsoft Entra authorization, see:
-- [Authorize request to SignalR resources with Azure AD from managed identities](signalr-howto-authorize-managed-identity.md)
+- [Authorize requests to SignalR resources with Microsoft Entra managed identities](signalr-howto-authorize-managed-identity.md)
To learn more about roles and role assignments, see:
To learn how to create custom roles, see:
- [Steps to create a custom role](../role-based-access-control/custom-roles.md#steps-to-create-a-custom-role)
-To learn how to use only Azure AD authentication, see
-- [Disable local authentication](./howto-disable-local-auth.md)
+To learn how to use only Microsoft Entra authentication, see:
+
+- [Disable local authentication](./howto-disable-local-auth.md)
azure-signalr Signalr Concept Azure Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-concept-azure-functions.md
Title: Real-time apps with Azure SignalR Service and Azure Functions
+ Title: Real-time apps with Azure SignalR Service and Azure Functions
description: Learn about how Azure SignalR Service and Azure Functions together allow you to create real-time serverless web applications.
# Real-time apps with Azure SignalR Service and Azure Functions
+Azure SignalR Services combined with Azure Functions allows you to run real-time messaging web apps in a serverless environment. This article provides an overview of how the services work together.
-Azure SignalR Services combined with Azure Functions allows you to run real-time messaging web apps in a serverless environment. This article provides an overview of how the services work together.
-
-Azure SignalR Service and Azure Functions are both fully managed, highly scalable services that allow you to focus on building applications instead of managing infrastructure. It's common to use the two services together to provide real-time communications in a [serverless](https://azure.microsoft.com/solutions/serverless/) environment.
--
+Azure SignalR Service and Azure Functions are both fully managed, highly scalable services that allow you to focus on building applications instead of managing infrastructure. It's common to use the two services together to provide real-time communications in a [serverless](https://azure.microsoft.com/solutions/serverless/) environment.
## Integrate real-time communications with Azure services

The Azure Functions service allows you to write code in [several languages](../azure-functions/supported-languages.md), including JavaScript, Python, C#, and Java, that triggers whenever events occur in the cloud. Examples of these events include:
-* HTTP and webhook requests
-* Periodic timers
-* Events from Azure services, such as:
- - Event Grid
- - Event Hubs
- - Service Bus
- - Azure Cosmos DB change feed
- - Storage blobs and queues
- - Logic Apps connectors such as Salesforce and SQL Server
+- HTTP and webhook requests
+- Periodic timers
+- Events from Azure services, such as:
+ - Event Grid
+ - Event Hubs
+ - Service Bus
+ - Azure Cosmos DB change feed
+ - Storage blobs and queues
+ - Logic Apps connectors such as Salesforce and SQL Server
By using Azure Functions to integrate these events with Azure SignalR Service, you can notify thousands of clients whenever events occur. Some common scenarios for real-time serverless messaging that you can implement with Azure Functions and SignalR Service include:
-* Visualize IoT device telemetry on a real-time dashboard or map.
-* Update data in an application when documents update in Azure Cosmos DB.
-* Send in-app notifications when new orders are created in Salesforce.
+- Visualize IoT device telemetry on a real-time dashboard or map.
+- Update data in an application when documents update in Azure Cosmos DB.
+- Send in-app notifications when new orders are created in Salesforce.
## SignalR Service bindings for Azure Functions

The SignalR Service bindings for Azure Functions allow an Azure Function app to publish messages to clients connected to SignalR Service. Clients can connect to the service using a SignalR client SDK that is available in .NET, JavaScript, and Java, with more languages coming soon.

### An example scenario
An example of how to use the SignalR Service bindings is using Azure Functions t
### Authentication and users
-SignalR Service allows you to broadcast messages to all or a subset of clients, such as those belonging to a single user. You can combine the SignalR Service bindings for Azure Functions with App Service authentication to authenticate users with providers such as Azure Active Directory, Facebook, and Twitter. You can then send messages directly to these authenticated users.
+SignalR Service allows you to broadcast messages to all or a subset of clients, such as those belonging to a single user. You can combine the SignalR Service bindings for Azure Functions with App Service authentication to authenticate users with providers such as Microsoft Entra ID, Facebook, and Twitter. You can then send messages directly to these authenticated users.
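As a sketch of that last step (hub name, target, and header usage are illustrative), a function can address a single authenticated user by setting `UserId` on the outgoing message:

```csharp
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Azure.WebJobs.Extensions.SignalRService;

public static class Notify
{
    [FunctionName("SendToUser")]
    public static Task SendToUser(
        [HttpTrigger(AuthorizationLevel.Anonymous, "post")] HttpRequest req,
        [SignalR(HubName = "notifications")] IAsyncCollector<SignalRMessage> messages)
    {
        // Only connections negotiated for this user ID receive the message.
        return messages.AddAsync(new SignalRMessage
        {
            UserId = req.Headers["x-ms-client-principal-id"],
            Target = "newNotification",
            Arguments = new object[] { "You have a new order." }
        });
    }
}
```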
## Next steps

For full details on how to use Azure Functions and SignalR Service together, visit the following resources:
-* [Azure Functions development and configuration with SignalR Service](signalr-concept-serverless-development-config.md)
-* [Enable automatic updates in a web application using Azure Functions and SignalR Service](/training/modules/automatic-update-of-a-webapp-using-azure-functions-and-signalr)
+- [Azure Functions development and configuration with SignalR Service](signalr-concept-serverless-development-config.md)
+- [Enable automatic updates in a web application using Azure Functions and SignalR Service](/training/modules/automatic-update-of-a-webapp-using-azure-functions-and-signalr)
To try out the SignalR Service bindings for Azure Functions, see:
-* [Azure SignalR Service Serverless Quickstart - C#](signalr-quickstart-azure-functions-csharp.md)
-* [Azure SignalR Service Serverless Quickstart - JavaScript](signalr-quickstart-azure-functions-javascript.md)
-* [Enable automatic updates in a web application using Azure Functions and SignalR Service](/training/modules/automatic-update-of-a-webapp-using-azure-functions-and-signalr).
+- [Azure SignalR Service Serverless Quickstart - C#](signalr-quickstart-azure-functions-csharp.md)
+- [Azure SignalR Service Serverless Quickstart - JavaScript](signalr-quickstart-azure-functions-javascript.md)
+- [Enable automatic updates in a web application using Azure Functions and SignalR Service](/training/modules/automatic-update-of-a-webapp-using-azure-functions-and-signalr).
azure-signalr Signalr Concept Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-concept-disaster-recovery.md
Resiliency and disaster recovery is a common need for online systems. Azure SignalR Service already guarantees 99.9% availability, but it's still a regional service. Your service instance is always running in one region and doesn't fail over to another region when there's a region-wide outage.
-Instead, our service SDK provides a functionality to support multiple SignalR service instances and automatically switch to other instances when some of them aren't available.
-With this feature, you're able to recover when a disaster takes place, but you need to set up the right system topology by yourself. You learn how to do so in this document.
+For regional disaster recovery, we recommend the following two approaches:
+
+- **Enable Geo-Replication** (the easy way). This feature handles regional failover for you automatically. When it's enabled, there's still just one Azure SignalR instance, and no code changes are required. Check [geo-replication](howto-enable-geo-replication.md) for details.
+- **Utilize multiple endpoints in the service SDK**. Our service SDK supports multiple SignalR service instances and automatically switches to other instances when some of them aren't available. With this feature, you can recover when a disaster takes place, but you need to set up the right system topology yourself. You learn how to do so **in this document**.
+
## Highly available architecture for SignalR service
azure-signalr Signalr Concept Serverless Development Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-concept-serverless-development-config.md
In the Azure portal, locate the **Settings** page of your SignalR Service resource.
A serverless real-time application built with Azure Functions and Azure SignalR Service requires at least two Azure Functions:
-* A `negotiate` function that the client calls to obtain a valid SignalR Service access token and endpoint URL.
-* One or more functions that handle messages sent from SignalR Service to clients.
+- A `negotiate` function that the client calls to obtain a valid SignalR Service access token and endpoint URL.
+- One or more functions that handle messages sent from SignalR Service to clients.
### negotiate function
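A minimal sketch of such a function using the `SignalRConnectionInfo` input binding (the hub name `chat` is illustrative):

```csharp
using Microsoft.AspNetCore.Http;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Azure.WebJobs.Extensions.SignalRService;

public static class NegotiateFunction
{
    [FunctionName("negotiate")]
    public static SignalRConnectionInfo Negotiate(
        [HttpTrigger(AuthorizationLevel.Anonymous, "post")] HttpRequest req,
        [SignalRConnectionInfo(HubName = "chat")] SignalRConnectionInfo connectionInfo)
    {
        // The binding generates a valid access token and service endpoint URL for the client.
        return connectionInfo;
    }
}
```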
For more information, see the [`SignalR` output binding reference](../azure-func
### SignalR Hubs
-SignalR has a concept of *hubs*. Each client connection and each message sent from Azure Functions is scoped to a specific hub. You can use hubs as a way to separate your connections and messages into logical namespaces.
+SignalR has a concept of _hubs_. Each client connection and each message sent from Azure Functions is scoped to a specific hub. You can use hubs as a way to separate your connections and messages into logical namespaces.
## Class-based model

The class-based model is dedicated to C#. It provides a consistent SignalR server-side programming experience, with the following features:
-* Less configuration work: The class name is used as `HubName`, the method name is used as `Event` and the `Category` is decided automatically according to method name.
-* Auto parameter binding: `ParameterNames` and attribute `[SignalRParameter]` aren't needed. Parameters are automatically bound to arguments of Azure Function methods in order.
-* Convenient output and negotiate experience.
+- Less configuration work: The class name is used as `HubName`, the method name is used as `Event`, and the `Category` is decided automatically according to the method name.
+- Auto parameter binding: `ParameterNames` and attribute `[SignalRParameter]` aren't needed. Parameters are automatically bound to arguments of Azure Function methods in order.
+- Convenient output and negotiate experience.
The following code demonstrates these features:
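As a rough sketch of what these conventions look like in practice (the class, method, and event names are illustrative):

```csharp
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.SignalR;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Azure.WebJobs.Extensions.SignalRService;

public class Chat : ServerlessHub
{
    // The class name "Chat" is used as the HubName automatically.
    [FunctionName("negotiate")]
    public SignalRConnectionInfo Negotiate(
        [HttpTrigger(AuthorizationLevel.Anonymous)] HttpRequest req)
    {
        // Base-class Negotiate produces connection info for the given user ID.
        return Negotiate(req.Headers["x-ms-signalr-user-id"]);
    }

    // The method name "Broadcast" is used as the Event; Category defaults to "messages".
    [FunctionName(nameof(Broadcast))]
    public Task Broadcast([SignalRTrigger] InvocationContext invocationContext, string message)
    {
        return Clients.All.SendAsync("newMessage", message);
    }
}
```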
All the hub methods **must** have an argument of `InvocationContext` decorated by the `[SignalRTrigger]` attribute.
By default, `category=messages`, except when the method name is one of the following names (a sketch follows the list):
-* `OnConnected`: Treated as `category=connections, event=connected`
-* `OnDisconnected`: Treated as `category=connections, event=disconnected`
+- `OnConnected`: Treated as `category=connections, event=connected`
+- `OnDisconnected`: Treated as `category=connections, event=disconnected`
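For example, adding a method named `OnConnected` to the hub class sketched earlier makes it a connection handler rather than a message handler (the logging is illustrative, and assumes `using Microsoft.Extensions.Logging;`):

```csharp
[FunctionName(nameof(OnConnected))]
public void OnConnected([SignalRTrigger] InvocationContext invocationContext, ILogger logger)
{
    // The method name OnConnected maps to category=connections, event=connected.
    logger.LogInformation("{connectionId} connected.", invocationContext.ConnectionId);
}
```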
### Parameter binding experience

In the class-based model, `[SignalRParameter]` is unnecessary because all the arguments are marked as `[SignalRParameter]` by default, except in one of the following situations (see the sketch after the list):
-* The argument is decorated by a binding attribute
-* The argument's type is `ILogger` or `CancellationToken`
-* The argument is decorated by attribute `[SignalRIgnore]`
+- The argument is decorated by a binding attribute
+- The argument's type is `ILogger` or `CancellationToken`
+- The argument is decorated by attribute `[SignalRIgnore]`
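In the hub-method sketch below (names are illustrative), `message` is auto-bound from the invocation arguments, while `logger` is excluded because of its type:

```csharp
[FunctionName(nameof(Echo))]
public Task Echo(
    [SignalRTrigger] InvocationContext invocationContext, // bound by its attribute
    string message,  // auto-bound as a [SignalRParameter]
    ILogger logger)  // ILogger is never treated as a SignalR parameter
{
    logger.LogInformation("Received: {message}", message);
    // Echo the message back to the calling connection.
    return Clients.Client(invocationContext.ConnectionId).SendAsync("echo", message);
}
```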
### Negotiate experience in class-based model
SignalR client SDKs already contain the logic required to perform the negotiation.
```javascript const connection = new signalR.HubConnectionBuilder()
- .withUrl('https://my-signalr-function-app.azurewebsites.net/api')
- .build()
+ .withUrl("https://my-signalr-function-app.azurewebsites.net/api")
+ .build();
```

By convention, the SDK automatically appends `/negotiate` to the URL and uses it to begin the negotiation.
For more information on how to use the SignalR client SDK, see the documentation for your language:
-* [.NET Standard](/aspnet/core/signalr/dotnet-client)
-* [JavaScript](/aspnet/core/signalr/javascript-client)
-* [Java](/aspnet/core/signalr/java-client)
+- [.NET Standard](/aspnet/core/signalr/dotnet-client)
+- [JavaScript](/aspnet/core/signalr/javascript-client)
+- [Java](/aspnet/core/signalr/java-client)
### Sending messages from a client to the service

If you've configured [upstream](concept-upstream.md) for your SignalR resource, you can send messages from a client to your Azure Functions using any SignalR client. Here's an example in JavaScript:

```javascript
-connection.send('method1', 'arg1', 'arg2');
+connection.send("method1", "arg1", "arg2");
```

## Azure Functions configuration
The JavaScript/TypeScript client makes an HTTP request to the negotiate function to initiate the connection negotiation.
#### Localhost
-When running the Function app on your local computer, you can add a `Host` section to *local.settings.json* to enable CORS. In the `Host` section, add two properties:
+When running the Function app on your local computer, you can add a `Host` section to _local.settings.json_ to enable CORS. In the `Host` section, add two properties:
-* `CORS` - enter the base URL that is the origin the client application
-* `CORSCredentials` - set it to `true` to allow "withCredentials" requests
+- `CORS` - enter the base URL that is the origin the client application
+- `CORSCredentials` - set it to `true` to allow "withCredentials" requests
Example:
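A minimal sketch of such a `Host` section (the origin URL is illustrative):

```json
{
  "Host": {
    "CORS": "http://localhost:8080",
    "CORSCredentials": true
  }
}
```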
Configure your SignalR clients to use the API Management URL.
### Using App Service Authentication
-Azure Functions has built-in authentication, supporting popular providers such as Facebook, Twitter, Microsoft Account, Google, and Azure Active Directory. This feature can be integrated with the `SignalRConnectionInfo` binding to create connections to Azure SignalR Service that have been authenticated to a user ID. Your application can send messages using the `SignalR` output binding that are targeted to that user ID.
+Azure Functions has built-in authentication, supporting popular providers such as Facebook, Twitter, Microsoft Account, Google, and Microsoft Entra ID. This feature can be integrated with the `SignalRConnectionInfo` binding to create connections to Azure SignalR Service that have been authenticated to a user ID. Your application can send messages using the `SignalR` output binding that are targeted to that user ID.
-In the Azure portal, in your Function app's *Platform features* tab, open the *Authentication/authorization* settings window. Follow the documentation for [App Service Authentication](../app-service/overview-authentication-authorization.md) to configure authentication using an identity provider of your choice.
+In the Azure portal, in your Function app's _Platform features_ tab, open the _Authentication/authorization_ settings window. Follow the documentation for [App Service Authentication](../app-service/overview-authentication-authorization.md) to configure authentication using an identity provider of your choice.
Once configured, authenticated HTTP requests will include `x-ms-client-principal-name` and `x-ms-client-principal-id` headers containing the authenticated identity's username and user ID, respectively.
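As a sketch, the negotiate function can stamp that identity onto the connection with a binding expression over those headers (the hub name is illustrative):

```csharp
using Microsoft.AspNetCore.Http;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Azure.WebJobs.Extensions.SignalRService;

public static class AuthenticatedNegotiate
{
    [FunctionName("negotiate")]
    public static SignalRConnectionInfo Negotiate(
        [HttpTrigger(AuthorizationLevel.Anonymous, "post")] HttpRequest req,
        // {headers.x-ms-client-principal-id} resolves to the App Service authentication user ID.
        [SignalRConnectionInfo(HubName = "chat", UserId = "{headers.x-ms-client-principal-id}")]
            SignalRConnectionInfo connectionInfo)
    {
        return connectionInfo;
    }
}
```

Messages sent with a matching `UserId` on the `SignalR` output binding then reach only that user's connections.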
azure-signalr Signalr Howto Authorize Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-howto-authorize-application.md
Title: Authorize request to SignalR resources with Azure AD from Azure applications
-description: This article provides information about authorizing request to SignalR resources with Azure AD from Azure applications
+ Title: Authorize requests to SignalR resources with Microsoft Entra applications
+description: This article provides information about authorizing requests to SignalR resources with Microsoft Entra applications
Last updated 02/03/2023
ms.devlang: csharp
-# Authorize request to SignalR resources with Azure AD from Azure applications
+# Authorize requests to SignalR resources with Microsoft Entra applications
-Azure SignalR Service supports Azure Active Directory (Azure AD) authorizing requests from [Azure applications](../active-directory/develop/app-objects-and-service-principals.md).
+Azure SignalR Service supports Microsoft Entra ID for authorizing requests from [Microsoft Entra applications](../active-directory/develop/app-objects-and-service-principals.md).
-This article shows how to configure your SignalR resource and codes to authorize the request to a SignalR resource from an Azure application.
+This article shows how to configure your SignalR resource and code to authorize requests to a SignalR resource from a Microsoft Entra application.
## Register an application
-The first step is to register an Azure application.
+The first step is to register a Microsoft Entra application.
-1. On the [Azure portal](https://portal.azure.com/), search for and select **Azure Active Directory**
+1. On the [Azure portal](https://portal.azure.com/), search for and select **Microsoft Entra ID**.
2. Under the **Manage** section, select **App registrations**.
3. Select **New registration**.
-
- ![Screenshot of registering an application.](./media/signalr-howto-authorize-application/register-an-application.png)
-
+ ![Screenshot of registering an application.](./media/signalr-howto-authorize-application/register-an-application.png)
4. Enter a display **Name** for your application.
5. Select **Register** to confirm the registration.
Once you have your application registered, you can find the **Application (client) ID** on the application's overview page.
![Screenshot of an application.](./media/signalr-howto-authorize-application/application-overview.png)

To learn more about registering an application, see
-- [Quickstart: Register an application with the Microsoft identity platform](../active-directory/develop/quickstart-register-app.md).
+- [Quickstart: Register an application with the Microsoft identity platform](../active-directory/develop/quickstart-register-app.md).
## Add credentials
The application requires a client secret to prove its identity when requesting a token.
1. Under the **Manage** section, select **Certificates & secrets**.
1. On the **Client secrets** tab, select **New client secret**.
-![Screenshot of creating a client secret.](./media/signalr-howto-authorize-application/new-client-secret.png)
+ ![Screenshot of creating a client secret.](./media/signalr-howto-authorize-application/new-client-secret.png)
1. Enter a **description** for the client secret, and choose an expiration time.
-1. Copy the value of the **client secret** and then paste it to a secure location.
- > [!NOTE]
- > The secret will display only once.
+1. Copy the value of the **client secret** and then paste it to a secure location.
+ > [!NOTE]
+ > The secret will display only once.
### Certificate
To learn more about adding credentials, see
The following steps describe how to assign a `SignalR App Server` role to a service principal (application) over a SignalR resource. For detailed steps, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md).
-> [!Note]
+> [!NOTE]
> A role can be assigned to any scope, including management group, subscription, resource group, or a single resource. To learn more about scope, see [Understand scope for Azure RBAC](../role-based-access-control/scope-overview.md).

1. From the [Azure portal](https://portal.azure.com/), navigate to your SignalR resource.
> Azure role assignments may take up to 30 minutes to propagate.

To learn more about how to assign and manage Azure role assignments, see these articles:

- [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md)
- [Assign Azure roles using the REST API](../role-based-access-control/role-assignments-rest.md)
- [Assign Azure roles using Azure PowerShell](../role-based-access-control/role-assignments-powershell.md)
The best practice is to configure identity and credentials in your environment variables:
-| Variable | Description |
-||
-| `AZURE_TENANT_ID` | The Azure Active Directory tenant(directory) ID. |
-| `AZURE_CLIENT_ID` | The client(application) ID of an App Registration in the tenant. |
-| `AZURE_CLIENT_SECRET` | A client secret that was generated for the App Registration. |
+| Variable | Description |
+| - | |
+| `AZURE_TENANT_ID` | The Microsoft Entra tenant ID. |
+| `AZURE_CLIENT_ID` | The client(application) ID of an App Registration in the tenant. |
+| `AZURE_CLIENT_SECRET` | A client secret that was generated for the App Registration. |
| `AZURE_CLIENT_CERTIFICATE_PATH` | A path to a certificate and private key pair in PEM or PFX format, which can authenticate the App Registration. |
-| `AZURE_USERNAME` | The username, also known as upn, of an Azure Active Directory user account. |
-| `AZURE_PASSWORD` | The password for the Azure Active Directory user account. Password isn't supported for accounts with MFA enabled. |
+| `AZURE_USERNAME` | The username, also known as the UPN, of a Microsoft Entra user account. |
+| `AZURE_PASSWORD` | The password of the Microsoft Entra user account. Password isn't supported for accounts with MFA enabled. |
You can use either [DefaultAzureCredential](/dotnet/api/azure.identity.defaultazurecredential) or [EnvironmentCredential](/dotnet/api/azure.identity.environmentcredential) to configure your SignalR endpoints.
services.AddSignalR().AddAzureSignalR(option =>
To learn how `DefaultAzureCredential` works, see [DefaultAzureCredential Class](/dotnet/api/overview/azure/identity-readme#defaultazurecredential).
-#### Use different credentials while using multiple endpoints.
+#### Use different credentials while using multiple endpoints
In some scenarios, you may want to use different credentials for different endpoints.
services.AddSignalR().AddAzureSignalR(option =>
### Azure Functions SignalR bindings
-Azure Functions SignalR bindings use [application settings](../azure-functions/functions-how-to-use-azure-function-app-settings.md) on portal or [`local.settings.json`](../azure-functions/functions-develop-local.md#local-settings-file) at local to configure Azure application identities to access your SignalR resources.
+Azure Functions SignalR bindings use [application settings](../azure-functions/functions-how-to-use-azure-function-app-settings.md) on portal or [`local.settings.json`](../azure-functions/functions-develop-local.md#local-settings-file) at local to configure Microsoft Entra application identities to access your SignalR resources.
-Firstly, you need to specify the service URI of the SignalR Service, whose key is `serviceUri` starting with a **connection name prefix** (defaults to `AzureSignalRConnectionString`) and a separator (`__` on Azure portal and `:` in the local.settings.json file). The connection name can be customized with the binding property [`ConnectionStringSetting`](../azure-functions/functions-bindings-signalr-service.md). Continue reading to find the sample.
+First, you need to specify the service URI of the SignalR Service. Its key is `serviceUri`, starting with a **connection name prefix** (defaults to `AzureSignalRConnectionString`) and a separator (`__` on Azure portal and `:` in the local.settings.json file). The connection name can be customized with the binding property [`ConnectionStringSetting`](../azure-functions/functions-bindings-signalr-service.md). Continue reading for a sample.
-Then you choose to configure your Azure application identity in [pre-defined environment variables](#configure-identity-in-pre-defined-environment-variables) or [in SignalR specified variables](#configure-identity-in-signalr-specified-variables).
+Then you choose to configure your Microsoft Entra application identity in [pre-defined environment variables](#configure-identity-in-pre-defined-environment-variables) or [in SignalR specified variables](#configure-identity-in-signalr-specified-variables).
#### Configure identity in pre-defined environment variables

See [Environment variables](/dotnet/api/overview/azure/identity-readme#environment-variables) for the list of pre-defined environment variables. When you have multiple services, we recommend that you use the same application identity, so that you don't need to configure the identity for each service. These environment variables might also be used by other services according to the settings of other services.
-For example, to use client secret credentials, configure as follows in the `local.settings.json` file.
+For example, to use client secret credentials, configure as follows in the `local.settings.json` file.
+
```json
{
  "Values": {
    "<CONNECTION_NAME_PREFIX>:serviceUri": "https://<SIGNALR_RESOURCE_NAME>.service.signalr.net",
- "AZURE_CLIENT_ID":"...",
- "AZURE_CLIENT_SECRET":"...",
- "AZURE_TENANT_ID":"..."
+ "AZURE_CLIENT_ID": "...",
+ "AZURE_CLIENT_SECRET": "...",
+ "AZURE_TENANT_ID": "..."
  }
}
```
+
On Azure portal, add settings as follows:
-```
+
+```bash
<CONNECTION_NAME_PREFIX>__serviceUri=https://<SIGNALR_RESOURCE_NAME>.service.signalr.net
AZURE_CLIENT_ID = ...
AZURE_TENANT_ID = ...
AZURE_CLIENT_SECRET = ...
- ```
+```
#### Configure identity in SignalR specified variables

The SignalR specified variables share the same key prefix as the `serviceUri` key. Here's the list of variables you might use:
-* clientId
-* clientSecret
-* tenantId
+
+- clientId
+- clientSecret
+- tenantId
Here are the samples to use client secret credentials:

In the `local.settings.json` file:
+
```json
{
  "Values": {
```

On Azure portal, add settings as follows:
-```
+
+```bash
<CONNECTION_NAME_PREFIX>__serviceUri = https://<SIGNALR_RESOURCE_NAME>.service.signalr.net
<CONNECTION_NAME_PREFIX>__clientId = ...
<CONNECTION_NAME_PREFIX>__clientSecret = ...
<CONNECTION_NAME_PREFIX>__tenantId = ...
```

## Next steps

See the following related articles:

-- [Overview of Azure AD for SignalR](signalr-concept-authorize-azure-active-directory.md)
-- [Authorize request to SignalR resources with Azure AD from Azure applications](signalr-howto-authorize-application.md)
-- [Disable local authentication](./howto-disable-local-auth.md)
+
+- [Overview of Microsoft Entra ID for SignalR](signalr-concept-authorize-azure-active-directory.md)
+- [Authorize requests to SignalR resources with Microsoft Entra managed identities](signalr-howto-authorize-managed-identity.md)
+- [Disable local authentication](./howto-disable-local-auth.md)
azure-signalr Signalr Howto Authorize Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-howto-authorize-managed-identity.md
Title: Authorize managed identity requests to a SignalR resource
-description: This article provides information about authorizing request to SignalR resources with Azure AD from managed identities
+ Title: Authorize requests to SignalR resources with Microsoft Entra managed identities
+description: This article provides information about authorizing requests to SignalR resources with Microsoft Entra managed identities
Last updated 03/28/2023
ms.devlang: csharp
-# Authorize managed identity requests to a SignalR resource
+# Authorize requests to SignalR resources with Microsoft Entra managed identities
-Azure SignalR Service supports Azure Active Directory (Azure AD) authorizing requests from Azure resources using [managed identities for Azure resources
+Azure SignalR Service supports Microsoft Entra ID for authorizing requests from [Microsoft Entra managed identities
](../active-directory/managed-identities-azure-resources/overview.md).
-This article shows how to configure your SignalR resource and code to authorize a managed identity request to a SignalR resource.
+This article shows how to configure your SignalR resource and code to authorize requests to a SignalR resource from a managed identity.
## Configure managed identities
This example shows you how to configure `System-assigned managed identity` on a virtual machine.
![Screenshot of an application.](./media/signalr-howto-authorize-managed-identity/identity-virtual-machine.png)

1. Select the **Save** button to confirm the change.

To learn how to create user-assigned managed identities, see [Create a user-assigned managed identity](../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md#create-a-user-assigned-managed-identity).

To learn more about configuring managed identities, see one of these articles:
See [How to use managed identities for App Service and Azure Functions](../app-s
The following steps describe how to assign a `SignalR App Server` role to a system-assigned identity over a SignalR resource. For detailed steps, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md).
-> [!Note]
+> [!NOTE]
> A role can be assigned to any scope, including management group, subscription, resource group, or a single resource. To learn more about scope, see [Understand scope for Azure RBAC](../role-based-access-control/scope-overview.md).

1. From the [Azure portal](https://portal.azure.com/), navigate to your SignalR resource.
> Azure role assignments may take up to 30 minutes to propagate.

To learn more about how to assign and manage Azure role assignments, see these articles:

- [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md)
- [Assign Azure roles using the REST API](../role-based-access-control/role-assignments-rest.md)
- [Assign Azure roles using Azure PowerShell](../role-based-access-control/role-assignments-powershell.md)
services.AddSignalR().AddAzureSignalR(option =>
#### Using user-assigned identity
-Provide `ClientId` while creating the `ManagedIdentityCredential` object.
+Provide `ClientId` while creating the `ManagedIdentityCredential` object.
> [!IMPORTANT]
> Use the **Client Id**, not the Object (principal) ID, even if they are both GUIDs!
You might need a group of key-value pairs to configure an identity. The keys of
If you only configure the service URI, then the `DefaultAzureCredential` is used. This class is useful when you want to share the same configuration on Azure and local development environments. To learn how `DefaultAzureCredential` works, see [DefaultAzureCredential](/dotnet/api/overview/azure/identity-readme#defaultazurecredential).

In the Azure portal, use the following example to configure a `DefaultAzureCredential`. If you don't configure any [environment variables listed here](/dotnet/api/overview/azure/identity-readme#environment-variables), then the system-assigned identity is used to authenticate.
-```
+
+```bash
<CONNECTION_NAME_PREFIX>__serviceUri=https://<SIGNALR_RESOURCE_NAME>.service.signalr.net
```

Here's a config sample of `DefaultAzureCredential` in the `local.settings.json` file. At the local scope there's no managed identity, and authentication via Visual Studio, Azure CLI, and Azure PowerShell accounts is attempted in order.
+
```json
{
  "Values": {
If you want to use system-assigned identity independently and without the influence of [other environment variables](/dotnet/api/overview/azure/identity-readme#environment-variables), you should set the `credential` key with the connection name prefix to `managedidentity`. Here's an application settings sample:
-```
+```bash
<CONNECTION_NAME_PREFIX>__serviceUri = https://<SIGNALR_RESOURCE_NAME>.service.signalr.net <CONNECTION_NAME_PREFIX>__credential = managedidentity ```
If you want to use a user-assigned identity, you need to assign `clientId` in addition to the `serviceUri` and `credential` keys with the connection name prefix. Here's the application settings sample:
-```
+```bash
<CONNECTION_NAME_PREFIX>__serviceUri = https://<SIGNALR_RESOURCE_NAME>.service.signalr.net <CONNECTION_NAME_PREFIX>__credential = managedidentity <CONNECTION_NAME_PREFIX>__clientId = <CLIENT_ID>
## Next steps

See the following related articles:

-- [Overview of Azure AD for SignalR](signalr-concept-authorize-azure-active-directory.md)
-- [Authorize request to SignalR resources with Azure AD from Azure applications](signalr-howto-authorize-application.md)
-- [Disable local authentication](./howto-disable-local-auth.md)
+
+- [Overview of Microsoft Entra ID for SignalR](signalr-concept-authorize-azure-active-directory.md)
+- [Authorize requests to SignalR resources with Microsoft Entra applications](signalr-howto-authorize-application.md)
+- [Disable local authentication](./howto-disable-local-auth.md)
azure-signalr Signalr Howto Azure Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-howto-azure-policy.md
The following built-in policy definitions are specific to Azure SignalR Service:
## Assign policy definitions
-* Assign policy definitions using the [Azure portal](../governance/policy/assign-policy-portal.md), [Azure CLI](../governance/policy/assign-policy-azurecli.md), a [Resource Manager template](../governance/policy/assign-policy-template.md), or the Azure Policy SDKs.
-* Scope a policy assignment to a resource group, a subscription, or an [Azure management group](../governance/management-groups/overview.md). SignalR policy assignments apply to existing and new SignalR resources within the scope.
-* Enable or disable [policy enforcement](../governance/policy/concepts/assignment-structure.md#enforcement-mode) at any time.
+- Assign policy definitions using the [Azure portal](../governance/policy/assign-policy-portal.md), [Azure CLI](../governance/policy/assign-policy-azurecli.md), a [Resource Manager template](../governance/policy/assign-policy-template.md), or the Azure Policy SDKs.
+- Scope a policy assignment to a resource group, a subscription, or an [Azure management group](../governance/management-groups/overview.md). SignalR policy assignments apply to existing and new SignalR resources within the scope.
+- Enable or disable [policy enforcement](../governance/policy/concepts/assignment-structure.md#enforcement-mode) at any time.
> [!NOTE]
> After you assign or update a policy, it takes some time for the assignment to be applied to resources in the defined scope. See information about [policy evaluation triggers](../governance/policy/how-to/get-compliance-data.md#evaluation-triggers).
When a resource is non-compliant, there are many possible reasons. To determine the cause, follow these steps:
1. Select **All services**, and search for **Policy**.
1. Select **Compliance**.
1. Use the filters to limit compliance states or to search for policies.
-
- [ ![Policy compliance in portal](./media/signalr-howto-azure-policy/azure-policy-compliance.png) ](./media/signalr-howto-azure-policy/azure-policy-compliance.png#lightbox)
-2. Select a policy to review aggregate compliance details and events. If desired, then select a specific SignalR for resource compliance.
+
+ [ ![Screenshot showing policy compliance in portal.](./media/signalr-howto-azure-policy/azure-policy-compliance.png) ](./media/signalr-howto-azure-policy/azure-policy-compliance.png#lightbox)
+
+1. Select a policy to review aggregate compliance details and events. If desired, select a specific SignalR resource to view its resource compliance.
### Policy compliance in the Azure CLI
az policy state list \
## Next steps
-* Learn more about Azure Policy [definitions](../governance/policy/concepts/definition-structure.md) and [effects](../governance/policy/concepts/effects.md)
+- Learn more about Azure Policy [definitions](../governance/policy/concepts/definition-structure.md) and [effects](../governance/policy/concepts/effects.md)
-* Create a [custom policy definition](../governance/policy/tutorials/create-custom-policy-definition.md)
-
-* Learn more about [governance capabilities](../governance/index.yml) in Azure
+- Create a [custom policy definition](../governance/policy/tutorials/create-custom-policy-definition.md)
+- Learn more about [governance capabilities](../governance/index.yml) in Azure
<!-- LINKS - External -->
-[terms-of-use]: https://azure.microsoft.com/support/legal/preview-supplemental-terms/
+
+[terms-of-use]: https://azure.microsoft.com/support/legal/preview-supplemental-terms/
azure-signalr Signalr Howto Diagnostic Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-howto-diagnostic-logs.md
Platform metrics and the Activity log are collected and stored automatically, but can be routed to other locations by using a diagnostic setting.
Resource Logs aren't collected and stored until you create a diagnostic setting and route them to one or more locations.
-See [Create diagnostic setting to collect platform logs and metrics in Azure](../azure-monitor/essentials/diagnostic-settings.md) for the detailed process for creating a diagnostic setting using the Azure portal, CLI, or PowerShell. When you create a diagnostic setting, you specify which categories of logs to collect.
+Resource Logs are grouped into category groups. A category group is a collection of different logs that helps you achieve different monitoring goals. These groups are defined dynamically and may change over time as new resource logs become available and are added to the category group. Note that this may incur additional charges. The audit resource log category group allows you to select the resource logs that are necessary for auditing your resource. For more information, see [Diagnostic settings in Azure Monitor: Resource logs](../azure-monitor/essentials/diagnostic-settings.md?tabs=portal#resource-logs).
+
+For the detailed process of creating a diagnostic setting by using the Azure portal, CLI, or PowerShell, see [Create diagnostic setting to collect platform logs and metrics in Azure](../azure-monitor/essentials/diagnostic-settings.md). When you create a diagnostic setting, you specify which categories of logs to collect.
The metrics and logs you can collect are discussed in the following sections.
azure-signalr Signalr Howto Troubleshoot Live Trace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-howto-troubleshoot-live-trace.md
Title: How to use live trace tool for Azure SignalR service
description: Learn how to use live trace tool for Azure SignalR service
Last updated 07/14/2022
Live trace tool is a single web application for capturing and displaying live traces.
> [!NOTE]
> The live traces will be counted as outbound messages.
-> Azure Active Directory access to live trace tool is not supported. You will need to enable **Access Key** in **Keys** settings.
+> Using Microsoft Entra ID to access the live trace tool is not supported. You have to enable **Access Key** in **Keys** settings.
## Launch the live trace tool
+> [!NOTE]
+> When access key is enabled, you'll use an access token to authenticate the live trace tool.
+> Otherwise, you'll use Microsoft Entra ID to authenticate the live trace tool.
+> You can check whether access key is enabled on your SignalR Service's **Keys** page in the Azure portal.
+
+### Steps for access key enabled
+
+1. Go to the Azure portal and your SignalR Service page.
+1. From the menu on the left, under **Monitoring**, select **Live trace settings**.
+1. Select **Enable Live Trace**.
+1. Select the **Save** button. It will take a moment for the changes to take effect.
+1. When the update is complete, select **Open Live Trace Tool**.
+
+ :::image type="content" source="media/signalr-howto-troubleshoot-live-trace/signalr-enable-live-trace.png" alt-text="Screenshot of launching the live trace tool.":::
+
+### Steps for access key disabled
+
+#### Assign live trace tool API permission to yourself
+1. Go to the Azure portal and your SignalR Service page.
+1. Select **Access control (IAM)**.
+1. On the new page, select **+ Add**, and then select **Add role assignment**.
+1. On the **Job function roles** tab, select the **SignalR Service Owner** role, and then select **Next**.
+1. On the **Members** page, select **+ Select members**.
+1. In the new panel, search for and select members, and then choose **Select**.
+1. Select **Review + assign**, and wait for the completion notification.
+
+#### Visit live trace tool
1. Go to the Azure portal and your SignalR Service page.
1. From the menu on the left, under **Monitoring**, select **Live trace settings**.
1. Select **Enable Live Trace**.
:::image type="content" source="media/signalr-howto-troubleshoot-live-trace/signalr-enable-live-trace.png" alt-text="Screenshot of launching the live trace tool.":::
+#### Sign in with your Microsoft account
+
+1. The live trace tool will open a Microsoft sign-in window. If no window appears, check your browser settings and allow pop-up windows.
+1. Wait for **Ready** to appear in the status bar.
## Capture live traces

The live trace tool provides functionality to help you capture the live traces for troubleshooting.
The real time live traces captured by live trace tool contain detailed information for troubleshooting.
In this guide, you learned how to use the live trace tool. Next, learn how to handle common issues:

* Troubleshooting guides: To troubleshoot typical issues based on live traces, see the [troubleshooting guide](./signalr-howto-troubleshoot-guide.md).
-* Troubleshooting methods: For self-diagnosis to find the root cause directly or narrow down the issue, see [troubleshooting methods introduction](./signalr-howto-troubleshoot-method.md).
+* Troubleshooting methods: For self-diagnosis to find the root cause directly or narrow down the issue, see [troubleshooting methods introduction](./signalr-howto-troubleshoot-method.md).
azure-signalr Signalr Howto Work With Apim https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-howto-work-with-apim.md
Azure API Management service provides a hybrid, multicloud management platform f
:::image type="content" source="./media/signalr-howto-work-with-apim/architecture.png" alt-text="Diagram that shows the architecture of using SignalR Service with API Management.":::

## Create resources
-* Follow [Quickstart: Use an ARM template to deploy Azure SignalR](./signalr-quickstart-azure-signalr-service-arm-template.md) and create a SignalR Service instance **_ASRS1_**
+- Follow [Quickstart: Use an ARM template to deploy Azure SignalR](./signalr-quickstart-azure-signalr-service-arm-template.md) and create a SignalR Service instance **_ASRS1_**
-* Follow [Quickstart: Use an ARM template to deploy Azure API Management](../api-management/quickstart-arm-template.md) and create an API Management instance **_APIM1_**
+- Follow [Quickstart: Use an ARM template to deploy Azure API Management](../api-management/quickstart-arm-template.md) and create an API Management instance **_APIM1_**
## Configure APIs
There are two types of requests for a SignalR client:
-* **negotiate request**: HTTP `POST` request to `<APIM-URL>/client/negotiate/`
-* **connect request**: request to `<APIM-URL>/client/`, it could be `WebSocket` or `ServerSentEvent` or `LongPolling` depends on the transport type of your SignalR client
+- **negotiate request**: HTTP `POST` request to `<APIM-URL>/client/negotiate/`
+- **connect request**: request to `<APIM-URL>/client/`, it could be `WebSocket` or `ServerSentEvent` or `LongPolling` depends on the transport type of your SignalR client
The type of **connect request** varies depending on the transport type of the SignalR clients. As of now, API Management doesn't yet support different types of APIs for the same suffix. With this limitation, when you use API Management, your SignalR client can't fall back from the `WebSocket` transport type to other transport types. Fallback from `ServerSentEvent` to `LongPolling` could be supported. The following sections describe the detailed configurations for the different transport types.

### Configure APIs when client connects with `WebSocket` transport

This section describes the steps to configure API Management when the SignalR clients connect with `WebSocket` transport. When SignalR clients connect with `WebSocket` transport, three types of requests are involved:

1. **OPTIONS** preflight HTTP request for negotiate
1. **POST** HTTP request for negotiate
1. WebSocket request for connect

Let's configure API Management from the portal.

1. Go to **APIs** tab in the portal for API Management instance **_APIM1_**, select **Add API** and choose **HTTP**, **Create** with the following parameters:
- * Display name: `SignalR negotiate`
- * Web service URL: `https://<your-signalr-service-url>/client/negotiate/`
- * API URL suffix: `client/negotiate/`
+ - Display name: `SignalR negotiate`
+ - Web service URL: `https://<your-signalr-service-url>/client/negotiate/`
+ - API URL suffix: `client/negotiate/`
1. Select the created `SignalR negotiate` API, **Save** with the following settings:
- 1. In **Design** tab
- 1. Select **Add operation**, and **Save** with the following parameters:
- * Display name: `negotiate preflight`
- * URL: `OPTIONS` `/`
- 1. Select **Add operation**, and **Save** with the following parameters:
- * Display name: `negotiate`
- * URL: `POST` `/`
- 1. Switch to **Settings** tab and uncheck **Subscription required** for quick demo purpose
+ 1. In **Design** tab
+ 1. Select **Add operation**, and **Save** with the following parameters:
+ - Display name: `negotiate preflight`
+ - URL: `OPTIONS` `/`
+ 1. Select **Add operation**, and **Save** with the following parameters:
+ - Display name: `negotiate`
+ - URL: `POST` `/`
+   1. Switch to the **Settings** tab and uncheck **Subscription required** for quick demo purposes
1. Select **Add API** and choose **WebSocket**, **Create** with the following parameters:
- * Display name: `SignalR connect`
- * WebSocket URL: `wss://<your-signalr-service-url>/client/`
- * API URL suffix: `client/`
+ - Display name: `SignalR connect`
+ - WebSocket URL: `wss://<your-signalr-service-url>/client/`
+ - API URL suffix: `client/`
1. Select the created `SignalR connect` API, **Save** with the following settings:
- 1. Switch to **Settings** tab and uncheck **Subscription required** for quick demo purpose
+   1. Switch to the **Settings** tab and uncheck **Subscription required** for quick demo purposes
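Before moving on, you can probe the WebSocket route through the gateway. This sketch uses the `ws` npm package (an assumption; any WebSocket client works), and the gateway URL and `chat` hub are placeholders. Without an access token the service should reject the handshake (for example with a 401), which still confirms the gateway forwards the upgrade; only a gateway 404 indicates a missing API:

```javascript
// Probe the WebSocket route through the gateway (placeholder URL and hub).
const WebSocket = require("ws");

const ws = new WebSocket("wss://apim1.azure-api.net/client/?hub=chat");
ws.on("open", () => {
  console.log("connected through APIM");
  ws.close();
});
// Expect an auth rejection from the service when no access token is supplied.
ws.on("error", (err) => console.error(err.message));
```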
Now API Management is successfully configured to support SignalR client with `WebSocket` transport. ### Configure APIs when client connects with `ServerSentEvents` or `LongPolling` transport This section describes the steps to configure API Management when the SignalR clients connect with `ServerSentEvents` or `LongPolling` transport type. When SignalR clients connect with `ServerSentEvents` or `LongPolling` transport, five types of requests are involved:+ 1. **OPTIONS** preflight HTTP request for negotiate 1. **POST** HTTP request for negotiate 1. **OPTIONS** preflight HTTP request for connect
This section describes the steps to configure API Management when the SignalR cl
Now let's configure API Management from the portal. 1. Go to **APIs** tab in the portal for API Management instance **_APIM1_**, select **Add API** and choose **HTTP**, **Create** with the following parameters:
- * Display name: `SignalR`
- * Web service URL: `https://<your-signalr-service-url>/client`
- * API URL suffix: `client`
+ - Display name: `SignalR`
+ - Web service URL: `https://<your-signalr-service-url>/client/`
+ - API URL suffix: `client/`
1. Select the created `SignalR` API, **Save** with the following settings:
- 1. In **Design** tab
- 1. Select **Add operation**, and **Save** with the following parameters:
- * Display name: `negotiate preflight`
- * URL: `OPTIONS` `/negotiate`
- 1. Select **Add operation**, and **Save** with the following parameters:
- * Display name: `negotiate`
- * URL: `POST` `/negotiate`
- 1. Select **Add operation**, and **Save** with the following parameters:
- * Display name: `connect preflight`
- * URL: `OPTIONS` `/`
- 1. Select **Add operation**, and **Save** with the following parameters:
- * Display name: `connect`
- * URL: `POST` `/`
- 1. Select **Add operation**, and **Save** with the following parameters:
- * Display name: `connect get`
- * URL: `GET` `/`
- 1. Select the newly added **connect get** operation, and edit the Backend policy to disable buffering for `ServerSentEvents`, [check here](../api-management/how-to-server-sent-events.md) for more details.
- ```xml
- <backend>
- <forward-request buffer-response="false" />
- </backend>
- ```
- 1. Switch to **Settings** tab and uncheck **Subscription required** for quick demo purpose
+ 1. In **Design** tab
+ 1. Select **Add operation**, and **Save** with the following parameters:
+ - Display name: `negotiate preflight`
+ - URL: `OPTIONS` `/negotiate`
+ 1. Select **Add operation**, and **Save** with the following parameters:
+ - Display name: `negotiate`
+ - URL: `POST` `/negotiate`
+ 1. Select **Add operation**, and **Save** with the following parameters:
+ - Display name: `connect preflight`
+ - URL: `OPTIONS` `/`
+ 1. Select **Add operation**, and **Save** with the following parameters:
+ - Display name: `connect`
+ - URL: `POST` `/`
+ 1. Select **Add operation**, and **Save** with the following parameters:
+ - Display name: `connect get`
+ - URL: `GET` `/`
+      1. Select the newly added **connect get** operation, and edit the Backend policy to disable buffering for `ServerSentEvents`. For more details, see [how to handle server-sent events in API Management](../api-management/how-to-server-sent-events.md).
+ ```xml
+ <backend>
+ <forward-request buffer-response="false" />
+ </backend>
+ ```
+   1. Switch to the **Settings** tab and uncheck **Subscription required** for quick demo purposes
Now API Management is successfully configured to support SignalR client with `ServerSentEvents` or `LongPolling` transport. ### Run chat+ Now, the traffic can reach SignalR Service through API Management. Let's use [this chat application](https://github.com/aspnet/AzureSignalR-samples/tree/main/samples/ChatRoom) as an example. Let's start with running it locally.
-* First let's get the connection string of **_ASRS1_**
- * On the **Connection strings** tab of **_ASRS1_**
- * **Client endpoint**: Enter the URL using **Gateway URL** of **_APIM1_**, for example `https://apim1.azure-api.net`. It's a connection string generator when using reverse proxies, and the value isn't preserved when next time you come back to this tab. When value entered, the connection string appends a `ClientEndpoint` section.
- * Copy the Connection string
-
-* Clone the GitHub repo https://github.com/aspnet/AzureSignalR-samples
-* Go to samples/Chatroom folder:
-* Set the copied connection string and run the application locally, you can see that there's a `ClientEndpoint` section in the ConnectionString.
-
- ```bash
- cd samples/Chatroom
- dotnet restore
- dotnet user-secrets set Azure:SignalR:ConnectionString "<copied-onnection-string-with-client-endpoint>"
- dotnet run
- ```
-* Configure transport type for the client
-
- Open `https://docsupdatetracker.net/index.html` under folder `wwwroot` and find the code when `connection` is created, update it to specify the transport type.
-
- For example, to specify the connection to use server-sent-events or long polling, update the code to:
-
- ```javascript
- const connection = new signalR.HubConnectionBuilder()
- .withUrl('/chat', signalR.HttpTransportType.ServerSentEvents | signalR.HttpTransportType.LongPolling)
- .build();
- ```
- To specify the connection to use WebSockets, update the code to:
-
- ```javascript
- const connection = new signalR.HubConnectionBuilder()
- .withUrl('/chat', signalR.HttpTransportType.WebSockets)
- .build();
- ```
-
-* Open http://localhost:5000 from the browser and use F12 to view the network traces, you can see that the connection is established through **_APIM1_**
+- First let's get the connection string of **_ASRS1_**
+
+ - On the **Connection strings** tab of **_ASRS1_**
+  - **Client endpoint**: Enter the URL using the **Gateway URL** of **_APIM1_**, for example `https://apim1.azure-api.net`. This field is a connection string generator for reverse proxy scenarios, and the value isn't preserved the next time you come back to this tab. When a value is entered, the connection string gains a `ClientEndpoint` section; a sample of the resulting string appears after this procedure.
+ - Copy the Connection string
+
+- Clone the GitHub repo https://github.com/aspnet/AzureSignalR-samples
+- Go to the samples/Chatroom folder.
+- Set the copied connection string and run the application locally. You can see that there's a `ClientEndpoint` section in the connection string.
+
+ ```bash
+ cd samples/Chatroom
+ dotnet restore
+  dotnet user-secrets set Azure:SignalR:ConnectionString "<copied-connection-string-with-client-endpoint>"
+ dotnet run
+ ```
+
+- Configure transport type for the client
+
+  Open `index.html` under the `wwwroot` folder, find the code where `connection` is created, and update it to specify the transport type.
+
+ For example, to specify the connection to use server-sent-events or long polling, update the code to:
+
+ ```javascript
+ const connection = new signalR.HubConnectionBuilder()
+ .withUrl(
+ "/chat",
+ signalR.HttpTransportType.ServerSentEvents |
+ signalR.HttpTransportType.LongPolling
+ )
+ .build();
+ ```
+
+ To specify the connection to use WebSockets, update the code to:
+
+ ```javascript
+ const connection = new signalR.HubConnectionBuilder()
+ .withUrl("/chat", signalR.HttpTransportType.WebSockets)
+ .build();
+ ```
+
+- Open http://localhost:5000 in the browser and use F12 to view the network traces. You can see that the connection is established through **_APIM1_**.
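For reference, a connection string that includes a `ClientEndpoint` section looks roughly like the following; every value here is a placeholder:

```
Endpoint=https://asrs1.service.signalr.net;AccessKey=<access-key>;Version=1.0;ClientEndpoint=https://apim1.azure-api.net;
```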
## Next steps
azure-signalr Signalr Howto Work With App Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-howto-work-with-app-gateway.md
# How to use Azure SignalR Service with Azure Application Gateway
-Application Gateway is a web traffic load balancer that enables you to manage traffic to your web applications. Using Application Gateway with SignalR Service enables you to do the following:
+Application Gateway is a web traffic load balancer that enables you to manage traffic to your web applications. Using Application Gateway with SignalR Service lets you do the following:
-* Protect your applications from common web vulnerabilities.
-* Get application-level load-balancing for your scalable and highly available applications.
-* Set up end-to-end security.
-* Customize the domain name.
+- Protect your applications from common web vulnerabilities.
+- Get application-level load-balancing for your scalable and highly available applications.
+- Set up end-to-end security.
+- Customize the domain name.
-This article contains two parts,
-* [The first part](#set-up-and-configure-application-gateway) shows how to configure Application Gateway so that the clients can access SignalR through Application Gateway.
-* [The second part](#secure-signalr-service) shows how to secure SignalR Service by adding access control to SignalR Service and only allow traffic from Application Gateway.
+This article contains two parts:
+
+- [The first part](#set-up-and-configure-application-gateway) shows how to configure Application Gateway so that the clients can access SignalR through Application Gateway.
+- [The second part](#secure-signalr-service) shows how to secure SignalR Service by adding access control to SignalR Service and only allow traffic from Application Gateway.
:::image type="content" source="./media/signalr-howto-work-with-app-gateway/architecture.png" alt-text="Diagram that shows the architecture of using SignalR Service with Application Gateway."::: ## Set up and configure Application Gateway ### Create a SignalR Service instance
-* Follow [the article](./signalr-quickstart-azure-signalr-service-arm-template.md) and create a SignalR Service instance **_ASRS1_**
+
+- Follow [the article](./signalr-quickstart-azure-signalr-service-arm-template.md) and create a SignalR Service instance **_ASRS1_**
### Create an Application Gateway instance+ Create an Application Gateway instance **_AG1_** from the portal:
-* On the [Azure portal](https://portal.azure.com/), search for **Application Gateway** and **Create**.
-* On the **Basics** tab, use these values for the following application gateway settings:
- - **Subscription** and **Resource group** and **Region**: the same as what you choose for SignalR Service
- - **Application gateway name**: **_AG1_**
- - **Virtual network**, select **Create new**, and in the **Create virtual network** window that opens, enter the following values to create the virtual network and two subnets, one for the application gateway, and another for the backend servers.
- - **Name**: Enter **_VN1_** for the name of the virtual network.
- - **Subnets**: Update the **Subnets** grid with below 2 subnets
-
- | Subnet name | Address range| Note|
- |--|--|--|
- | *myAGSubnet* | (address range) | Subnet for the application gateway. The application gateway subnet can contain only application gateways. No other resources are allowed.
- | *myBackendSubnet* | (another address range) | Subnet for the Azure SignalR instance.
-
- - Accept the default values for the other settings and then select **Next: Frontends**
-
- :::image type="content" source="./media/signalr-howto-work-with-app-gateway/basics.png" alt-text="Screenshot of creating Application Gateway instance with Basics tab.":::
-
-* On the **Frontends** tab:
- - **Frontend IP address type**: **Public**.
- - Select **Add new** for the **Public IP address** and enter *myAGPublicIPAddress* for the public IP address name, and then select **OK**.
- - Select **Next: Backends**
- :::image type="content" source="./media/signalr-howto-work-with-app-gateway/application-gateway-create-frontends.png" alt-text="Screenshot of creating Application Gateway instance with Frontends tab.":::
-
-* On the **Backends** tab, select **Add a backend pool**:
- - **Name**: Enter **_signalr_** for the SignalR Service resource backend pool.
- - Backend targets **Target**: the **host name** of your SignalR Service instance **_ASRS1_**, for example `asrs1.service.signalr.net`
- - Select **Next: Configuration**
-
- :::image type="content" source="./media/signalr-howto-work-with-app-gateway/application-gateway-create-backends.png" alt-text="Screenshot of setting up the application gateway backend pool for the SignalR Service.":::
-
-* On the **Configuration** tab, select **Add a routing rule** in the **Routing rules** column:
- - **Rule name**: **_myRoutingRule_**
- - **Priority**: 1
- - On the **Listener** tab within the **Add a routing rule** window, enter the following values for the listener:
- - **Listener name**: Enter *myListener* for the name of the listener.
- - **Frontend IP**: Select **Public** to choose the public IP you created for the frontend.
- - **Protocol**: HTTP
- * We use the HTTP frontend protocol on Application Gateway in this article to simplify the demo and help you get started easier. But in reality, you may need to enable HTTPs and Customer Domain on it with production scenario.
- - Accept the default values for the other settings on the **Listener** tab
-
- :::image type="content" source="./media/signalr-howto-work-with-app-gateway/application-gateway-create-rule-listener.png" alt-text="Screenshot of setting up the application gateway routing rule listener tab for the SignalR Service.":::
- - On the **Backend targets** tab, use the following values:
- * **Target type**: Backend pool
- * **Backend target**: select **signalr** we previously created
- * **Backend settings**: select **Add new** to add a new setting.
- * **Backend settings name**: *mySetting*
- * **Backend protocol**: **HTTPS**
- * **Use well known CA certificate**: **Yes**
- * **Override with new host name**: **Yes**
- * **Host name override**: **Pick host name from backend target**
- * Others keep the default values
-
- :::image type="content" source="./media/signalr-howto-work-with-app-gateway/application-gateway-setup-backend.png" alt-text="Screenshot of setting up the application gateway backend setting for the SignalR Service.":::
-
- :::image type="content" source="./media/signalr-howto-work-with-app-gateway/application-gateway-create-rule-backends.png" alt-text="Screenshot of creating backend targets for application gateway.":::
-
-* Review and create the **_AG1_**
-
- :::image type="content" source="./media/signalr-howto-work-with-app-gateway/application-gateway-review.png" alt-text="Screenshot of reviewing and creating the application gateway instance.":::
+
+- On the [Azure portal](https://portal.azure.com/), search for **Application Gateway** and **Create**.
+- On the **Basics** tab, use these values for the following application gateway settings:
+
+  - **Subscription**, **Resource group**, and **Region**: the same as what you chose for the SignalR Service
+ - **Application gateway name**: **_AG1_**
+ - **Virtual network**, select **Create new**, and in the **Create virtual network** window that opens, enter the following values to create the virtual network and two subnets, one for the application gateway, and another for the backend servers.
+
+ - **Name**: Enter **_VN1_** for the name of the virtual network.
+    - **Subnets**: Update the **Subnets** grid with the following two subnets
+
+ | Subnet name | Address range | Note |
+ | -- | -- | -- |
+ | _myAGSubnet_ | (address range) | Subnet for the application gateway. The application gateway subnet can contain only application gateways. No other resources are allowed. |
+ | _myBackendSubnet_ | (another address range) | Subnet for the Azure SignalR instance. |
+
+ - Accept the default values for the other settings and then select **Next: Frontends**
+
+ :::image type="content" source="./media/signalr-howto-work-with-app-gateway/basics.png" alt-text="Screenshot of creating Application Gateway instance with Basics tab.":::
+
+- On the **Frontends** tab:
+
+ - **Frontend IP address type**: **Public**.
+ - Select **Add new** for the **Public IP address** and enter _myAGPublicIPAddress_ for the public IP address name, and then select **OK**.
+ - Select **Next: Backends**
+ :::image type="content" source="./media/signalr-howto-work-with-app-gateway/application-gateway-create-frontends.png" alt-text="Screenshot of creating Application Gateway instance with Frontends tab.":::
+
+- On the **Backends** tab, select **Add a backend pool**:
+
+ - **Name**: Enter **_signalr_** for the SignalR Service resource backend pool.
+  - **Backend targets** > **Target**: the **host name** of your SignalR Service instance **_ASRS1_**, for example `asrs1.service.signalr.net`
+ - Select **Next: Configuration**
+
+ :::image type="content" source="./media/signalr-howto-work-with-app-gateway/application-gateway-create-backends.png" alt-text="Screenshot of setting up the application gateway backend pool for the SignalR Service.":::
+
+- On the **Configuration** tab, select **Add a routing rule** in the **Routing rules** column:
+
+ - **Rule name**: **_myRoutingRule_**
+ - **Priority**: 1
+ - On the **Listener** tab within the **Add a routing rule** window, enter the following values for the listener:
+ - **Listener name**: Enter _myListener_ for the name of the listener.
+ - **Frontend IP**: Select **Public** to choose the public IP you created for the frontend.
+ - **Protocol**: HTTP
+      - We use the HTTP frontend protocol on Application Gateway in this article to simplify the demo and help you get started more easily. In a production scenario, you may need to enable HTTPS and a custom domain on it.
+ - Accept the default values for the other settings on the **Listener** tab
+ :::image type="content" source="./media/signalr-howto-work-with-app-gateway/application-gateway-create-rule-listener.png" alt-text="Screenshot of setting up the application gateway routing rule listener tab for the SignalR Service.":::
+ - On the **Backend targets** tab, use the following values:
+
+ - **Target type**: Backend pool
+ - **Backend target**: select **signalr** we previously created
+ - **Backend settings**: select **Add new** to add a new setting.
+
+ - **Backend settings name**: _mySetting_
+ - **Backend protocol**: **HTTPS**
+ - **Use well known CA certificate**: **Yes**
+ - **Override with new host name**: **Yes**
+ - **Host name override**: **Pick host name from backend target**
+      - Keep the default values for the other settings
+
+ :::image type="content" source="./media/signalr-howto-work-with-app-gateway/application-gateway-setup-backend.png" alt-text="Screenshot of setting up the application gateway backend setting for the SignalR Service.":::
+
+ :::image type="content" source="./media/signalr-howto-work-with-app-gateway/application-gateway-create-rule-backends.png" alt-text="Screenshot of creating backend targets for application gateway.":::
+
+- Review and create the **_AG1_**
+ :::image type="content" source="./media/signalr-howto-work-with-app-gateway/application-gateway-review.png" alt-text="Screenshot of reviewing and creating the application gateway instance.":::
### Configure Application Gateway health probe
When **_AG1_** is created, go to **Health probes** tab under **Settings** sectio
### Quick test
-* Try with an invalid client request `https://asrs1.service.signalr.net/client` and it returns *400* with error message *'hub' query parameter is required.* It means the request arrived at the SignalR Service and did the request validation.
- ```bash
- curl -v https://asrs1.service.signalr.net/client
- ```
- returns
- ```
- < HTTP/1.1 400 Bad Request
- < ...
- <
- 'hub' query parameter is required.
- ```
-* Go to the Overview tab of **_AG1_**, and find out the Frontend public IP address
-
- :::image type="content" source="./media/signalr-howto-work-with-app-gateway/quick-test.png" alt-text="Screenshot of quick testing SignalR Service health endpoint through Application Gateway.":::
-
-* Visit the health endpoint through **_AG1_** `http://<frontend-public-IP-address>/client`, and it also returns *400* with error message *'hub' query parameter is required.* It means the request successfully went through Application Gateway to SignalR Service and did the request validation.
-
- ```bash
- curl -I http://<frontend-public-IP-address>/client
- ```
- returns
- ```
- < HTTP/1.1 400 Bad Request
- < ...
- <
- 'hub' query parameter is required.
- ```
+- Try an invalid client request `https://asrs1.service.signalr.net/client`. It returns _400_ with the error message _'hub' query parameter is required._ This means the request arrived at the SignalR Service and went through request validation.
+ ```bash
+ curl -v https://asrs1.service.signalr.net/client
+ ```
+ returns
+ ```
+ < HTTP/1.1 400 Bad Request
+ < ...
+ <
+ 'hub' query parameter is required.
+ ```
+- Go to the **Overview** tab of **_AG1_**, and find the frontend public IP address
+
+ :::image type="content" source="./media/signalr-howto-work-with-app-gateway/quick-test.png" alt-text="Screenshot of quick testing SignalR Service health endpoint through Application Gateway.":::
+
+- Visit the health endpoint through **_AG1_**: `http://<frontend-public-IP-address>/client`. It also returns _400_ with the error message _'hub' query parameter is required._ This means the request successfully went through Application Gateway to SignalR Service and went through request validation.
+
+ ```bash
+ curl -I http://<frontend-public-IP-address>/client
+ ```
+
+ returns
+
+ ```
+ < HTTP/1.1 400 Bad Request
+ < ...
+ <
+ 'hub' query parameter is required.
+ ```
### Run chat through Application Gateway Now, the traffic can reach SignalR Service through the Application Gateway. You can use the Application Gateway public IP address or custom domain name to access the resource. Let's use [this chat application](https://github.com/aspnet/AzureSignalR-samples/tree/main/samples/ChatRoom) as an example. Let's start with running it locally.
-* First let's get the connection string of **_ASRS1_**
- * On the **Connection strings** tab of **_ASRS1_**
- * **Client endpoint**: Enter the URL using frontend public IP address of **_AG1_**, for example `http://20.88.8.8`. It's a connection string generator when using reverse proxies, and the value isn't preserved when next time you come back to this tab. When value entered, the connection string appends a `ClientEndpoint` section.
- * Copy the Connection string
-
- :::image type="content" source="./media/signalr-howto-work-with-app-gateway/connection-string.png" alt-text="Screenshot of getting the connection string for SignalR Service with client endpoint.":::
+- First let's get the connection string of **_ASRS1_**
+
+ - On the **Connection strings** tab of **_ASRS1_**
+  - **Client endpoint**: Enter the URL using the frontend public IP address of **_AG1_**, for example `http://20.88.8.8`. This field is a connection string generator for reverse proxy scenarios, and the value isn't preserved the next time you come back to this tab. When a value is entered, the connection string gains a `ClientEndpoint` section.
+ - Copy the Connection string
+ :::image type="content" source="./media/signalr-howto-work-with-app-gateway/connection-string.png" alt-text="Screenshot of getting the connection string for SignalR Service with client endpoint.":::
+
+- Clone the GitHub repo https://github.com/aspnet/AzureSignalR-samples
+- Go to the samples/Chatroom folder.
+- Set the copied connection string and run the application locally. You can see that there's a `ClientEndpoint` section in the connection string.
-* Clone the GitHub repo https://github.com/aspnet/AzureSignalR-samples
-* Go to samples/Chatroom folder:
-* Set the copied connection string and run the application locally, you can see that there's a `ClientEndpoint` section in the ConnectionString.
+ ```bash
+ cd samples/Chatroom
+ dotnet restore
+  dotnet user-secrets set Azure:SignalR:ConnectionString "<copied-connection-string-with-client-endpoint>"
+ dotnet run
+ ```
- ```bash
- cd samples/Chatroom
- dotnet restore
- dotnet user-secrets set Azure:SignalR:ConnectionString "<copied-onnection-string-with-client-endpoint>"
- dotnet run
- ```
-* Open http://localhost:5000 from the browser and use F12 to view the network traces, you can see that the WebSocket connection is established through **_AG1_** 
+- Open http://localhost:5000 in the browser and use F12 to view the network traces. You can see that the WebSocket connection is established through **_AG1_**.
- :::image type="content" source="./media/signalr-howto-work-with-app-gateway/chat-local-run.png" alt-text="Screenshot of running chat application locally with App Gateway and SignalR Service.":::
+ :::image type="content" source="./media/signalr-howto-work-with-app-gateway/chat-local-run.png" alt-text="Screenshot of running chat application locally with App Gateway and SignalR Service.":::
## Secure SignalR Service
In this section, let's configure SignalR Service to deny all the traffic from pu
Let's configure SignalR Service to only allow private access. You can find more details in [use private endpoint for SignalR Service](howto-private-endpoints.md).
-* Go to the SignalR Service instance **_ASRS1_** in the portal.
-* Go the **Networking** tab:
- * On **Public access** tab: **Public network access** change to **Disabled** and **Save**, now you're no longer able to access SignalR Service from public network
-
- :::image type="content" source="./media/signalr-howto-work-with-app-gateway/disable-public-access.png" alt-text="Screenshot of disabling public access for SignalR Service.":::
-
- * On **Private access** tab, select **+ Private endpoint**:
- * On **Basics** tab:
- * **Name**: **_PE1_**
- * **Network Interface Name**: **_PE1-nic_**
- * **Region**: make sure to choose the same region as your Application Gateway
- * Select **Next: Resources**
- * On **Resources** tab
- * Keep default values
- * Select **Next: Virtual Network**
- * On **Virtual Network** tab
- * **Virtual network**: Select previously created **_VN1_**
- * **Subnet**: Select previously created **_VN1/myBackendSubnet_**
- * Others keep the default settings
- * Select **Next: DNS**
- * On **DNS** tab
- * **Integration with private DNS zone**: **Yes**
- * Review and create the private endpoint
-
- :::image type="content" source="./media/signalr-howto-work-with-app-gateway/application-gateway-setup-private-endpoint.png" alt-text="Screenshot of setting up the private endpoint resource for the SignalR Service.":::
-
+- Go to the SignalR Service instance **_ASRS1_** in the portal.
+- Go to the **Networking** tab:
+
+  - On the **Public access** tab, change **Public network access** to **Disabled** and select **Save**. Now you're no longer able to access SignalR Service from the public network.
+
+ :::image type="content" source="./media/signalr-howto-work-with-app-gateway/disable-public-access.png" alt-text="Screenshot of disabling public access for SignalR Service.":::
+
+ - On **Private access** tab, select **+ Private endpoint**:
+ - On **Basics** tab:
+ - **Name**: **_PE1_**
+ - **Network Interface Name**: **_PE1-nic_**
+ - **Region**: make sure to choose the same region as your Application Gateway
+ - Select **Next: Resources**
+ - On **Resources** tab
+ - Keep default values
+ - Select **Next: Virtual Network**
+ - On **Virtual Network** tab
+ - **Virtual network**: Select previously created **_VN1_**
+ - **Subnet**: Select previously created **_VN1/myBackendSubnet_**
+      - Keep the default settings for the other options
+ - Select **Next: DNS**
+ - On **DNS** tab
+ - **Integration with private DNS zone**: **Yes**
+ - Review and create the private endpoint
+
+ :::image type="content" source="./media/signalr-howto-work-with-app-gateway/application-gateway-setup-private-endpoint.png" alt-text="Screenshot of setting up the private endpoint resource for the SignalR Service.":::
++ ### Refresh Application Gateway backend pool+ Since Application Gateway was set up before there was a private endpoint for it to use, we need to **refresh** the backend pool so that it looks at the Private DNS Zone and figures out that it should route the traffic to the private endpoint instead of the public address. We do the **refresh** by setting the backend FQDN to some other value and then changing it back. Go to the **Backend pools** tab for **_AG1_**, and select **signalr**:
-* Step1: change Target `asrs1.service.signalr.net` to some other value, for example, `x.service.signalr.net`, and select **Save**
-* Step2: change Target back to `asrs1.service.signalr.net`
+
+- Step 1: Change **Target** from `asrs1.service.signalr.net` to some other value, for example `x.service.signalr.net`, and select **Save**.
+- Step 2: Change **Target** back to `asrs1.service.signalr.net`.
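To confirm the private endpoint DNS is actually in effect, you can resolve the service hostname from a machine inside **_VN1_**. A minimal sketch, assuming Node.js on a VM in the VNet and this article's example hostname:

```javascript
// Inside the VNet, this should resolve to a private IP; from the public
// internet it resolves to a public IP.
const dns = require("node:dns/promises");

dns.lookup("asrs1.service.signalr.net").then(({ address }) => {
  console.log(address);
});
```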
### Quick test
-* Now let's visit `https://asrs1.service.signalr.net/client` again. With public access disabled, it returns *403* instead.
- ```bash
- curl -v https://asrs1.service.signalr.net/client
- ```
- returns
- ```
- < HTTP/1.1 403 Forbidden
-* Visit the endpoint through **_AG1_** `http://<frontend-public-IP-address>/client`, and it returns *400* with error message *'hub' query parameter is required*. It means the request successfully went through the Application Gateway to SignalR Service.
-
- ```bash
- curl -I http://<frontend-public-IP-address>/client
- ```
- returns
- ```
- < HTTP/1.1 400 Bad Request
- < ...
- <
- 'hub' query parameter is required.
- ```
+- Now let's visit `https://asrs1.service.signalr.net/client` again. With public access disabled, it returns _403_ instead.
+ ```bash
+ curl -v https://asrs1.service.signalr.net/client
+ ```
+ returns
+ ```
+ < HTTP/1.1 403 Forbidden
+ ```
+- Visit the endpoint through **_AG1_**: `http://<frontend-public-IP-address>/client`. It returns _400_ with the error message _'hub' query parameter is required_. This means the request successfully went through the Application Gateway to SignalR Service.
+
+ ```bash
+ curl -I http://<frontend-public-IP-address>/client
+ ```
+
+ returns
+
+ ```
+ < HTTP/1.1 400 Bad Request
+ < ...
+ <
+ 'hub' query parameter is required.
+ ```
Now if you run the chat application locally again, you'll see the error message `Failed to connect to .... The server returned status code '403' when status code '101' was expected.`. This is because public access is disabled, so localhost server connections are no longer able to connect to the SignalR service. Let's deploy the chat application into the same VNet as **_ASRS1_** so that the app can talk to **_ASRS1_**.
-### Deploy the chat application to Azure
-* On the [Azure portal](https://portal.azure.com/), search for **App services** and **Create**.
-
-* On the **Basics** tab, use these values for the following application gateway settings:
- - **Subscription** and **Resource group** and **Region**: the same as what you choose for SignalR Service
- - **Name**: **_WA1_**
- * **Publish**: **Code**
- * **Runtime stack**: **.NET 6 (LTS)**
- * **Operating System**: **Linux**
- * **Region**: Make sure it's the same as what you choose for SignalR Service
- * Select **Next: Docker**
-* On the **Networking** tab
- * **Enable network injection**: select **On**
- * **Virtual Network**: select **_VN1_** we previously created
- * **Enable VNet integration**: **On**
- * **Outbound subnet**: create a new subnet
- * Select **Review + create**
+### Deploy the chat application to Azure
+
+- On the [Azure portal](https://portal.azure.com/), search for **App services** and **Create**.
+
+- On the **Basics** tab, use these values for the following settings:
+  - **Subscription** and **Resource group** and **Region**: the same as what you chose for the SignalR Service
+  - **Name**: **_WA1_**
+  - **Publish**: **Code**
+  - **Runtime stack**: **.NET 6 (LTS)**
+  - **Operating System**: **Linux**
+  - **Region**: Make sure it's the same as what you chose for the SignalR Service
+  - Select **Next: Docker**
+- On the **Networking** tab
+ - **Enable network injection**: select **On**
+ - **Virtual Network**: select **_VN1_** we previously created
+ - **Enable VNet integration**: **On**
+ - **Outbound subnet**: create a new subnet
+ - Select **Review + create**
Now let's deploy our chat application to Azure. Below, we use the Azure CLI to deploy the web app. You can also choose other deployment environments by following the [publish your web app section](/azure/app-service/quickstart-dotnetcore#publish-your-web-app).
cd publish
zip -r app.zip .
# use az CLI to deploy app.zip to our webapp
az login
-az account set -s <your-subscription-name-used-to-create-WA1>
-az webapp deployment source config-zip -n WA1 -g <resource-group-of-WA1> --src app.zip
+az account set -s <your-subscription-name-used-to-create-WA1>
+az webapp deployment source config-zip -n WA1 -g <resource-group-of-WA1> --src app.zip
``` Now that the web app is deployed, let's go to the portal for **_WA1_** and make the following updates:
-* On the **Configuration** tab:
- * New application settings:
- | Name | Value |
- | --| |
- |**WEBSITE_DNS_SERVER**| **168.63.129.16** |
- |**WEBSITE_VNET_ROUTE_ALL**| **1**|
+- On the **Configuration** tab:
+
+ - New application settings:
- * New connection string:
+ | Name | Value |
+ | -- | -- |
+ | **WEBSITE_DNS_SERVER** | **168.63.129.16** |
+ | **WEBSITE_VNET_ROUTE_ALL** | **1** |
- | Name | Value | Type|
- | --| ||
- |**Azure__SignalR__ConnectionString**| The copied connection string with ClientEndpoint value| select **Custom**|
+ - New connection string:
+ | Name | Value | Type |
+ | | | -- |
+    | **Azure\_\_SignalR\_\_ConnectionString** | The copied connection string with the ClientEndpoint value | select **Custom** |
+ :::image type="content" source="./media/signalr-howto-work-with-app-gateway/web-app-settings.png" alt-text="Screenshot of configuring web app connection string.":::
- :::image type="content" source="./media/signalr-howto-work-with-app-gateway/web-app-settings.png" alt-text="Screenshot of configuring web app connection string.":::
+- On the **TLS/SSL settings** tab:
-* On the **TLS/SSL settings** tab:
- * **HTTPS Only**: **Off**. To Simplify the demo, we used the HTTP frontend protocol on Application Gateway. Therefore, we need to turn off this option to avoid changing the HTTP URL to HTTPs automatically.
+  - **HTTPS Only**: **Off**. To simplify the demo, we used the HTTP frontend protocol on Application Gateway. Therefore, we need to turn off this option to avoid the HTTP URL being redirected to HTTPS automatically.
-* Go to the **Overview** tab and get the URL of **_WA1_**.
-* Get the URL, and replace scheme https with http, for example, `http://wa1.azurewebsites.net`, open the URL in the browser, now you can start chatting! Use F12 to open network traces, and you can see the SignalR connection is established through **_AG1_**.
- > [!NOTE]
- >
- > Sometimes you need to disable browser's auto https redirection and browser cache to prevent the URL from redirecting to HTTPS automatically.
+- Go to the **Overview** tab and get the URL of **_WA1_**.
+- Get the URL and replace the scheme https with http, for example `http://wa1.azurewebsites.net`. Open the URL in the browser, and now you can start chatting! Use F12 to open the network traces, and you can see that the SignalR connection is established through **_AG1_**.
+ > [!NOTE]
+ >
+  > Sometimes you need to disable the browser's automatic HTTPS redirection and clear the browser cache to prevent the URL from redirecting to HTTPS automatically.
- :::image type="content" source="./media/signalr-howto-work-with-app-gateway/web-app-run.png" alt-text="Screenshot of running chat application in Azure with App Gateway and SignalR Service.":::
+ :::image type="content" source="./media/signalr-howto-work-with-app-gateway/web-app-run.png" alt-text="Screenshot of running chat application in Azure with App Gateway and SignalR Service.":::
## Next steps
azure-signalr Signalr Reference Data Plane Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-reference-data-plane-rest-api.md
You can find a complete sample of using SignalR Service with Azure Functions at
The following table shows all supported versions of REST API. You can also find the swagger file for each version of REST API.
-API Version | Status | Port | Doc | Spec
-||||
-`20220601` | Latest | Standard | [Doc](./swagger/signalr-data-plane-rest-v20220601.md) | [swagger](https://github.com/Azure/azure-signalr/blob/dev/docs/swagger/V20220601.json)
-`1.0` | Stable | Standard | [Doc](./swagger/signalr-data-plane-rest-v1.md) | [swagger](https://github.com/Azure/azure-signalr/blob/dev/docs/swagger/v1.json)
-`1.0-preview` | Obsolete | Standard | [Doc](./swagger/signalr-data-plane-rest-v1-preview.md) | [swagger](https://github.com/Azure/azure-signalr/blob/dev/docs/swagger/v1-preview.json)
+| API Version | Status | Port | Doc | Spec |
+| - | -- | -- | | |
+| `20220601` | Latest | Standard | [Doc](./swagger/signalr-data-plane-rest-v20220601.md) | [swagger](https://github.com/Azure/azure-signalr/blob/dev/docs/swagger/V20220601.json) |
+| `1.0` | Stable | Standard | [Doc](./swagger/signalr-data-plane-rest-v1.md) | [swagger](https://github.com/Azure/azure-signalr/blob/dev/docs/swagger/v1.json) |
+| `1.0-preview` | Obsolete | Standard | [Doc](./swagger/signalr-data-plane-rest-v1-preview.md) | [swagger](https://github.com/Azure/azure-signalr/blob/dev/docs/swagger/v1-preview.json) |
The available APIs are listed as follows.
-| API | Path |
-| - | - |
-| [Broadcast a message to all clients connected to target hub.](./swagger/signalr-data-plane-rest-v1.md#broadcast-a-message-to-all-clients-connected-to-target-hub) | `POST /api/v1/hubs/{hub}` |
-| [Broadcast a message to all clients belong to the target user.](./swagger/signalr-data-plane-rest-v1.md#broadcast-a-message-to-all-clients-belong-to-the-target-user) | `POST /api/v1/hubs/{hub}/users/{id}` |
-| [Send message to the specific connection.](./swagger/signalr-data-plane-rest-v1.md#send-message-to-the-specific-connection) | `POST /api/v1/hubs/{hub}/connections/{connectionId}` |
-| [Check if the connection with the given connectionId exists.](./swagger/signalr-data-plane-rest-v1.md#check-if-the-connection-with-the-given-connectionid-exists) | `GET /api/v1/hubs/{hub}/connections/{connectionId}` |
-| [Close the client connection.](./swagger/signalr-data-plane-rest-v1.md#close-the-client-connection) | `DELETE /api/v1/hubs/{hub}/connections/{connectionId}` |
-| [Broadcast a message to all clients within the target group.](./swagger/signalr-data-plane-rest-v1.md#broadcast-a-message-to-all-clients-within-the-target-group) | `POST /api/v1/hubs/{hub}/groups/{group}` |
-| [Check if there are any client connections inside the given group.](./swagger/signalr-data-plane-rest-v1.md#check-if-there-are-any-client-connections-inside-the-given-group) | `GET /api/v1/hubs/{hub}/groups/{group}` |
-| [Check if there are any client connections connected for the given user.](./swagger/signalr-data-plane-rest-v1.md#check-if-there-are-any-client-connections-connected-for-the-given-user) | `GET /api/v1/hubs/{hub}/users/{user}` |
-| [Add a connection to the target group.](./swagger/signalr-data-plane-rest-v1.md#add-a-connection-to-the-target-group) | `PUT /api/v1/hubs/{hub}/groups/{group}/connections/{connectionId}` |
-| [Remove a connection from the target group.](./swagger/signalr-data-plane-rest-v1.md#remove-a-connection-from-the-target-group) | `DELETE /api/v1/hubs/{hub}/groups/{group}/connections/{connectionId}` |
-| [Check whether a user exists in the target group.](./swagger/signalr-data-plane-rest-v1.md#check-whether-a-user-exists-in-the-target-group) | `GET /api/v1/hubs/{hub}/groups/{group}/users/{user}` |
-| [Add a user to the target group.](./swagger/signalr-data-plane-rest-v1.md#add-a-user-to-the-target-group) | `PUT /api/v1/hubs/{hub}/groups/{group}/users/{user}` |
-| [Remove a user from the target group.](./swagger/signalr-data-plane-rest-v1.md#remove-a-user-from-the-target-group) | `DELETE /api/v1/hubs/{hub}/groups/{group}/users/{user}` |
-| [Remove a user from all groups.](./swagger/signalr-data-plane-rest-v1.md#remove-a-user-from-all-groups) | `DELETE /api/v1/hubs/{hub}/users/{user}/groups` |
+| API | Path |
+| -- | |
+| [Broadcast a message to all clients connected to target hub.](./swagger/signalr-data-plane-rest-v1.md#broadcast-a-message-to-all-clients-connected-to-target-hub) | `POST /api/v1/hubs/{hub}` |
+| [Broadcast a message to all clients belong to the target user.](./swagger/signalr-data-plane-rest-v1.md#broadcast-a-message-to-all-clients-belong-to-the-target-user) | `POST /api/v1/hubs/{hub}/users/{id}` |
+| [Send message to the specific connection.](./swagger/signalr-data-plane-rest-v1.md#send-message-to-the-specific-connection) | `POST /api/v1/hubs/{hub}/connections/{connectionId}` |
+| [Check if the connection with the given connectionId exists.](./swagger/signalr-data-plane-rest-v1.md#check-if-the-connection-with-the-given-connectionid-exists) | `GET /api/v1/hubs/{hub}/connections/{connectionId}` |
+| [Close the client connection.](./swagger/signalr-data-plane-rest-v1.md#close-the-client-connection) | `DELETE /api/v1/hubs/{hub}/connections/{connectionId}` |
+| [Broadcast a message to all clients within the target group.](./swagger/signalr-data-plane-rest-v1.md#broadcast-a-message-to-all-clients-within-the-target-group) | `POST /api/v1/hubs/{hub}/groups/{group}` |
+| [Check if there are any client connections inside the given group.](./swagger/signalr-data-plane-rest-v1.md#check-if-there-are-any-client-connections-inside-the-given-group) | `GET /api/v1/hubs/{hub}/groups/{group}` |
+| [Check if there are any client connections connected for the given user.](./swagger/signalr-data-plane-rest-v1.md#check-if-there-are-any-client-connections-connected-for-the-given-user) | `GET /api/v1/hubs/{hub}/users/{user}` |
+| [Add a connection to the target group.](./swagger/signalr-data-plane-rest-v1.md#add-a-connection-to-the-target-group) | `PUT /api/v1/hubs/{hub}/groups/{group}/connections/{connectionId}` |
+| [Remove a connection from the target group.](./swagger/signalr-data-plane-rest-v1.md#remove-a-connection-from-the-target-group) | `DELETE /api/v1/hubs/{hub}/groups/{group}/connections/{connectionId}` |
+| [Check whether a user exists in the target group.](./swagger/signalr-data-plane-rest-v1.md#check-whether-a-user-exists-in-the-target-group) | `GET /api/v1/hubs/{hub}/groups/{group}/users/{user}` |
+| [Add a user to the target group.](./swagger/signalr-data-plane-rest-v1.md#add-a-user-to-the-target-group) | `PUT /api/v1/hubs/{hub}/groups/{group}/users/{user}` |
+| [Remove a user from the target group.](./swagger/signalr-data-plane-rest-v1.md#remove-a-user-from-the-target-group) | `DELETE /api/v1/hubs/{hub}/groups/{group}/users/{user}` |
+| [Remove a user from all groups.](./swagger/signalr-data-plane-rest-v1.md#remove-a-user-from-all-groups) | `DELETE /api/v1/hubs/{hub}/users/{user}/groups` |
## Using REST API
Use the `AccessKey` in Azure SignalR Service instance's connection string to sig
The following claims must be included in the JWT token.
-Claim Type | Is Required | Description
-||
-`aud` | true | Needs to be the same as your HTTP request URL, trailing slash and query parameters not included. For example, a broadcast request's audience should look like: `https://example.service.signalr.net/api/v1/hubs/myhub`.
-`exp` | true | Epoch time when this token expires.
+| Claim Type | Is Required | Description |
+| - | -- | -- |
+| `aud` | true | Needs to be the same as your HTTP request URL, trailing slash and query parameters not included. For example, a broadcast request's audience should look like: `https://example.service.signalr.net/api/v1/hubs/myhub`. |
+| `exp` | true | Epoch time when this token expires. |
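A minimal sketch of signing such a token in Node.js, assuming the `jsonwebtoken` package (any HS256-capable JWT library works); the URL and environment variable are placeholders:

```javascript
const jwt = require("jsonwebtoken");

// `aud` must match the request URL; `exp` is set via expiresIn.
const audience = "https://example.service.signalr.net/api/v1/hubs/myhub";
const token = jwt.sign({}, process.env.SIGNALR_ACCESS_KEY, {
  audience,
  expiresIn: "1h",
  algorithm: "HS256",
});
```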
-### Authenticate via Azure Active Directory Token (Azure AD Token)
+### Authenticate via Microsoft Entra token
-Similar to authenticating using `AccessKey`, when authenticating using Azure AD Token, a [JSON Web Token (JWT)](https://en.wikipedia.org/wiki/JSON_Web_Token) is also required to authenticate the HTTP request.
+Similar to authenticating using `AccessKey`, when authenticating using a Microsoft Entra token, a [JSON Web Token (JWT)](https://en.wikipedia.org/wiki/JSON_Web_Token) is also required to authenticate the HTTP request.
-The difference is, in this scenario, the JWT Token is generated by Azure Active Directory. For more information, see [Learn how to generate Azure AD Tokens](../active-directory/develop/reference-v2-libraries.md)
+The difference is that, in this scenario, the JWT is generated by Microsoft Entra ID. For more information, see [how to generate Microsoft Entra tokens](../active-directory/develop/reference-v2-libraries.md)
-You could also use **Role Based Access Control (RBAC)** to authorize the request from your client/server to SignalR Service. For more information, see [Authorize access with Azure Active Directory for Azure SignalR Service](./signalr-concept-authorize-azure-active-directory.md)
+You can also use **role-based access control (RBAC)** to authorize the request from your client or server to SignalR Service. For more information, see [Authorize access with Microsoft Entra ID for Azure SignalR Service](./signalr-concept-authorize-azure-active-directory.md)
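However you obtain the token, a REST call then attaches it as a bearer token. A sketch of the broadcast API follows (Node.js 18+; the service URL, hub, target method, and `token` variable are placeholders):

```javascript
// Broadcast a message to all clients connected to hub "myhub".
fetch("https://example.service.signalr.net/api/v1/hubs/myhub", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${token}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({ target: "newMessage", arguments: ["Hello from REST"] }),
}).then((res) => console.log(res.status)); // expect 202 Accepted on success
```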
### Implement Negotiate Endpoint
A typical negotiation response looks as follows:
```json
{
- "url":"https://<service_name>.service.signalr.net/client/?hub=<hub_name>",
- "accessToken":"<a typical JWT token>"
+ "url": "https://<service_name>.service.signalr.net/client/?hub=<hub_name>",
+ "accessToken": "<a typical JWT token>"
}
```
Then SignalR Service uses the value of `nameid` claim as the user ID of each cli
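For example, a negotiate endpoint that sets the user ID via `nameid` might look like this sketch; Express and `jsonwebtoken` are assumptions, and all names and URLs are placeholders:

```javascript
const express = require("express");
const jwt = require("jsonwebtoken");

const app = express();
const clientUrl = "https://example.service.signalr.net/client/?hub=chat";

app.post("/negotiate", (req, res) => {
  const accessToken = jwt.sign(
    { nameid: "user-1" }, // becomes the user ID of the client connection
    process.env.SIGNALR_ACCESS_KEY, // AccessKey from the connection string
    { audience: clientUrl, expiresIn: "1h", algorithm: "HS256" }
  );
  res.json({ url: clientUrl, accessToken });
});

app.listen(8080);
```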
You can find a complete console app demonstrating how to manually build a REST API HTTP request to SignalR Service [here](https://github.com/aspnet/AzureSignalR-samples/tree/master/samples/Serverless).
-You can also use [Microsoft.Azure.SignalR.Management](<https://www.nuget.org/packages/Microsoft.Azure.SignalR.Management>) to publish messages to SignalR Service using the similar interfaces of `IHubContext`. Samples can be found [here](<https://github.com/aspnet/AzureSignalR-samples/tree/master/samples/Management>). For more information, see [How to use Management SDK](https://github.com/Azure/azure-signalr/blob/dev/docs/management-sdk-guide.md).
-
+You can also use [Microsoft.Azure.SignalR.Management](https://www.nuget.org/packages/Microsoft.Azure.SignalR.Management) to publish messages to SignalR Service using interfaces similar to `IHubContext`. Samples can be found [here](https://github.com/aspnet/AzureSignalR-samples/tree/master/samples/Management). For more information, see [How to use Management SDK](https://github.com/Azure/azure-signalr/blob/dev/docs/management-sdk-guide.md).
## Limitation Currently, we have the following limitations for REST API requests:
-* Header size is a maximum of 16 KB.
-* Body size is a maximum of 1 MB.
+- Header size is a maximum of 16 KB.
+- Body size is a maximum of 1 MB.
If you want to send messages larger than 1 MB, use the Management SDK with `persistent` mode.
azure-signalr Signalr Tutorial Authenticate Azure Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-tutorial-authenticate-azure-functions.md
ms.devlang: javascript + # Tutorial: Azure SignalR Service authentication with Azure Functions A step-by-step tutorial to build a chat room with authentication and private messaging using Azure Functions, App Service Authentication, and SignalR Service.
A step by step tutorial to build a chat room with authentication and private mes
### Technologies used
-* [Azure Functions](https://azure.microsoft.com/services/functions/?WT.mc_id=serverlesschatlab-tutorial-antchu) - Backend API for authenticating users and sending chat messages
-* [Azure SignalR Service](https://azure.microsoft.com/services/signalr-service/?WT.mc_id=serverlesschatlab-tutorial-antchu) - Broadcast new messages to connected chat clients
-* [Azure Storage](https://azure.microsoft.com/services/storage/?WT.mc_id=serverlesschatlab-tutorial-antchu) - Required by Azure Functions
+- [Azure Functions](https://azure.microsoft.com/services/functions/?WT.mc_id=serverlesschatlab-tutorial-antchu) - Backend API for authenticating users and sending chat messages
+- [Azure SignalR Service](https://azure.microsoft.com/services/signalr-service/?WT.mc_id=serverlesschatlab-tutorial-antchu) - Broadcast new messages to connected chat clients
+- [Azure Storage](https://azure.microsoft.com/services/storage/?WT.mc_id=serverlesschatlab-tutorial-antchu) - Required by Azure Functions
### Prerequisites
-* An Azure account with an active subscription.
- * If you don't have one, you can [create one for free](https://azure.microsoft.com/free/).
-* [Node.js](https://nodejs.org/en/download/) (Version 18.x)
-* [Azure Functions Core Tools](../azure-functions/functions-run-local.md?#install-the-azure-functions-core-tools) (Version 4)
+- An Azure account with an active subscription.
+ - If you don't have one, you can [create one for free](https://azure.microsoft.com/free/).
+- [Node.js](https://nodejs.org/en/download/) (Version 18.x)
+- [Azure Functions Core Tools](../azure-functions/functions-run-local.md?#install-the-azure-functions-core-tools) (Version 4)
[Having issues? Let us know.](https://aka.ms/asrs/qsauth) ## Create essential resources on Azure+ ### Create an Azure SignalR service resource
-Your application will access a SignalR Service instance. Use the following steps to create a SignalR Service instance using the Azure portal.
+Your application will access a SignalR Service instance. Use the following steps to create a SignalR Service instance using the Azure portal.
1. Select the **Create a resource** (**+**) button to create a new Azure resource.
Your application will access a SignalR Service instance. Use the following step
1. Enter the following information.
- | Name | Value |
- |||
- | **Resource group** | Create a new resource group with a unique name |
- | **Resource name** | A unique name for the SignalR Service instance |
- | **Region** | Select a region close to you |
- | **Pricing Tier** | Free |
- | **Service mode** | Serverless |
+ | Name | Value |
+ | | - |
+ | **Resource group** | Create a new resource group with a unique name |
+ | **Resource name** | A unique name for the SignalR Service instance |
+ | **Region** | Select a region close to you |
+ | **Pricing Tier** | Free |
+ | **Service mode** | Serverless |
1. Select **Review + Create**. 1. Select **Create**. - [Having issues? Let us know.](https://aka.ms/asrs/qsauth) ### Create an Azure Function App and an Azure Storage account
Your application will access a SignalR Service instance. Use the following step
1. Enter the following information.
- | Name | Value |
- |||
- | **Resource group** | Use the same resource group with your SignalR Service instance |
- | **Function App name** | A unique name for the Function app instance |
- | **Runtime stack** | Node.js |
- | **Region** | Select a region close to you |
+ | Name | Value |
+ | | -- |
+ | **Resource group** | Use the same resource group with your SignalR Service instance |
+ | **Function App name** | A unique name for the Function app instance |
+ | **Runtime stack** | Node.js |
+ | **Region** | Select a region close to you |
1. By default, a new Azure Storage account will also be created in the same resource group together with your function app. If you want to use another storage account in the function app, switch to the **Hosting** tab to choose an account. 1. Select **Review + Create**, then select **Create**. ## Create an Azure Functions project locally+ ### Initialize a function app
-1. From a command line, create a root folder for your project and change to the folder.
+1. From a command line, create a root folder for your project and change to the folder.
1. Execute the following command in your terminal to create a new JavaScript Functions project.
-```
+
+```bash
func init --worker-runtime node --language javascript --name my-app
```
-By default, the generated project includes a *host.json* file containing the extension bundles which include the SignalR extension. For more information about extension bundles, see [Register Azure Functions binding extensions](../azure-functions/functions-bindings-register.md#extension-bundles).
+
+By default, the generated project includes a _host.json_ file containing the extension bundles, which include the SignalR extension. For more information about extension bundles, see [Register Azure Functions binding extensions](../azure-functions/functions-bindings-register.md#extension-bundles).
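For reference, the generated _host.json_ looks roughly like the following; the exact bundle version range depends on your Core Tools version, so treat it as a placeholder:

```json
{
  "version": "2.0",
  "extensionBundle": {
    "id": "Microsoft.Azure.Functions.ExtensionBundle",
    "version": "[4.*, 5.0.0)"
  }
}
```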
### Configure application settings
-When running and debugging the Azure Functions runtime locally, application settings are read by the function app from *local.settings.json*. Update this file with the connection strings of the SignalR Service instance and the storage account that you created earlier.
+When running and debugging the Azure Functions runtime locally, application settings are read by the function app from _local.settings.json_. Update this file with the connection strings of the SignalR Service instance and the storage account that you created earlier.
-1. Replace the content of *local.settings.json* with the following code:
+1. Replace the content of _local.settings.json_ with the following code:
- ```json
- {
- "IsEncrypted": false,
- "Values": {
- "FUNCTIONS_WORKER_RUNTIME": "node",
- "AzureWebJobsStorage": "<your-storage-account-connection-string>",
- "AzureSignalRConnectionString": "<your-Azure-SignalR-connection-string>"
- }
- }
- ```
+ ```json
+ {
+ "IsEncrypted": false,
+ "Values": {
+ "FUNCTIONS_WORKER_RUNTIME": "node",
+ "AzureWebJobsStorage": "<your-storage-account-connection-string>",
+ "AzureSignalRConnectionString": "<your-Azure-SignalR-connection-string>"
+ }
+ }
+ ```
- * Enter the Azure SignalR Service connection string into the `AzureSignalRConnectionString` setting.
+ - Enter the Azure SignalR Service connection string into the `AzureSignalRConnectionString` setting.
- Navigate to your SignalR Service in the Azure portal. In the **Settings** section, locate the **Keys** setting. Select the **Copy** button to the right of the connection string to copy it to your clipboard. You can use either the primary or secondary connection string.
+ Navigate to your SignalR Service in the Azure portal. In the **Settings** section, locate the **Keys** setting. Select the **Copy** button to the right of the connection string to copy it to your clipboard. You can use either the primary or secondary connection string.
- * Enter the storage account connection string into the `AzureWebJobsStorage` setting.
+ - Enter the storage account connection string into the `AzureWebJobsStorage` setting.
Navigate to your storage account in the Azure portal. In the **Security + networking** section, locate the **Access keys** setting. Select the **Copy** button to the right of the connection string to copy it to your clipboard. You can use either the primary or secondary connection string. - [Having issues? Let us know.](https://aka.ms/asrs/qsauth) ### Create a function to authenticate users to SignalR Service
When the chat app first opens in the browser, it requires valid connection crede
> This function must be named `negotiate` as the SignalR client requires an endpoint that ends in `/negotiate`. 1. From the root project folder, create the `negotiate` function from a built-in template with the following command.
- ```bash
- func new --template "SignalR negotiate HTTP trigger" --name negotiate
- ```
-1. Open *negotiate/function.json* to view the function binding configuration.
+ ```bash
+ func new --template "SignalR negotiate HTTP trigger" --name negotiate
+ ```
+
+1. Open _negotiate/function.json_ to view the function binding configuration.
The function contains an HTTP trigger binding to receive requests from SignalR clients and a SignalR input binding to generate valid credentials for a client to connect to an Azure SignalR Service hub named `default`.
- ```json
- {
- "disabled": false,
- "bindings": [
- {
- "authLevel": "anonymous",
- "type": "httpTrigger",
- "direction": "in",
- "methods": ["post"],
- "name": "req",
- "route": "negotiate"
- },
- {
- "type": "http",
- "direction": "out",
- "name": "res"
- },
- {
- "type": "signalRConnectionInfo",
- "name": "connectionInfo",
- "hubName": "default",
- "connectionStringSetting": "AzureSignalRConnectionString",
- "direction": "in"
- }
- ]
- }
- ```
-
- There's no `userId` property in the `signalRConnectionInfo` binding for local development, but you'll add it later to set the user name of a SignalR connection when you deploy the function app to Azure.
-
-1. Close the *negotiate/function.json* file.
----
-1. Open *negotiate/index.js* to view the body of the function.
-
- ```javascript
- module.exports = async function (context, req, connectionInfo) {
- context.res.body = connectionInfo;
- };
- ```
-
- This function takes the SignalR connection information from the input binding and returns it to the client in the HTTP response body. The SignalR client uses this information to connect to the SignalR Service instance.
+ ```json
+ {
+ "disabled": false,
+ "bindings": [
+ {
+ "authLevel": "anonymous",
+ "type": "httpTrigger",
+ "direction": "in",
+ "methods": ["post"],
+ "name": "req",
+ "route": "negotiate"
+ },
+ {
+ "type": "http",
+ "direction": "out",
+ "name": "res"
+ },
+ {
+ "type": "signalRConnectionInfo",
+ "name": "connectionInfo",
+ "hubName": "default",
+ "connectionStringSetting": "AzureSignalRConnectionString",
+ "direction": "in"
+ }
+ ]
+ }
+ ```
+
+ There's no `userId` property in the `signalRConnectionInfo` binding for local development, but you'll add it later to set the user name of a SignalR connection when you deploy the function app to Azure.
+
+1. Close the _negotiate/function.json_ file.
+
+1. Open _negotiate/index.js_ to view the body of the function.
+
+ ```javascript
+ module.exports = async function (context, req, connectionInfo) {
+ context.res.body = connectionInfo;
+ };
+ ```
+
+ This function takes the SignalR connection information from the input binding and returns it to the client in the HTTP response body. The SignalR client uses this information to connect to the SignalR Service instance.
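As an illustrative, editor-added sketch (not one of the tutorial's files), this is roughly how a browser client consumes the endpoint. It assumes the `@microsoft/signalr` browser bundle is loaded as the global `signalR`; the client appends `/negotiate` to the base URL on its own:

```javascript
// Minimal sketch: connect a SignalR client through the negotiate function.
// withUrl("/api") makes the client POST to "/api/negotiate" behind the scenes.
const connection = new signalR.HubConnectionBuilder()
  .withUrl("/api")
  .build();

// Handle messages broadcast by the sendMessage function created later;
// the "text" and "sender" fields are assumptions about the message shape.
connection.on("newMessage", (message) => {
  console.log(`${message.sender}: ${message.text}`);
});

connection.start().catch(console.error);
```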
[Having issues? Let us know.](https://aka.ms/asrs/qsauth)
When the chat app first opens in the browser, it requires valid connection crede
The web app also requires an HTTP API to send chat messages. You'll create an HTTP triggered function named `sendMessage` that sends messages to all connected clients using SignalR Service.

1. From the root project folder, create an HTTP trigger function named `sendMessage` from the template with the command:
- ```bash
- func new --name sendMessage --template "Http trigger"
- ```
-
-1. To configure bindings for the function, replace the content of *sendMessage/function.json* with the following code:
- ```json
- {
- "disabled": false,
- "bindings": [
- {
- "authLevel": "anonymous",
- "type": "httpTrigger",
- "direction": "in",
- "name": "req",
- "route": "messages",
- "methods": ["post"]
- },
- {
- "type": "http",
- "direction": "out",
- "name": "res"
- },
- {
- "type": "signalR",
- "name": "$return",
- "hubName": "default",
- "direction": "out"
- }
- ]
- }
- ```
- Two changes are made to the original file:
- * Changes the route to `messages` and restricts the HTTP trigger to the `POST` HTTP method.
- * Adds a SignalR Service output binding that sends a message returned by the function to all clients connected to a SignalR Service hub named `default`.
-
-1. Replace the content of *sendMessage/index.js* with the following code:
-
- ```javascript
- module.exports = async function (context, req) {
- const message = req.body;
- message.sender = req.headers && req.headers['x-ms-client-principal-name'] || '';
-
- let recipientUserId = '';
- if (message.recipient) {
- recipientUserId = message.recipient;
- message.isPrivate = true;
- }
-
- return {
- 'userId': recipientUserId,
- 'target': 'newMessage',
- 'arguments': [ message ]
- };
- };
- ```
-
- This function takes the body from the HTTP request and sends it to clients connected to SignalR Service, invoking a function named `newMessage` on each client.
-
- The function can read the sender's identity and can accept a `recipient` value in the message body to allow you to send a message privately to a single user. You'll use these functionalities later in the tutorial.
+
+ ```bash
+ func new --name sendMessage --template "Http trigger"
+ ```
+
+1. To configure bindings for the function, replace the content of _sendMessage/function.json_ with the following code:
+
+ ```json
+ {
+ "disabled": false,
+ "bindings": [
+ {
+ "authLevel": "anonymous",
+ "type": "httpTrigger",
+ "direction": "in",
+ "name": "req",
+ "route": "messages",
+ "methods": ["post"]
+ },
+ {
+ "type": "http",
+ "direction": "out",
+ "name": "res"
+ },
+ {
+ "type": "signalR",
+ "name": "$return",
+ "hubName": "default",
+ "direction": "out"
+ }
+ ]
+ }
+ ```
+
+ Two changes are made to the original file:
+
+ - Changes the route to `messages` and restricts the HTTP trigger to the `POST` HTTP method.
+ - Adds a SignalR Service output binding that sends a message returned by the function to all clients connected to a SignalR Service hub named `default`.
+
+1. Replace the content of _sendMessage/index.js_ with the following code:
+
+ ```javascript
+ module.exports = async function (context, req) {
+ const message = req.body;
+ message.sender =
+ (req.headers && req.headers["x-ms-client-principal-name"]) || "";
+
+ let recipientUserId = "";
+ if (message.recipient) {
+ recipientUserId = message.recipient;
+ message.isPrivate = true;
+ }
+
+ return {
+ userId: recipientUserId,
+ target: "newMessage",
+ arguments: [message],
+ };
+ };
+ ```
+
+ This function takes the body from the HTTP request and sends it to clients connected to SignalR Service, invoking a function named `newMessage` on each client.
+
+   The function can read the sender's identity and can accept a `recipient` value in the message body to allow you to send a message privately to a single user. You'll use these functionalities later in the tutorial. A request sketch follows these steps.
1. Save the file.
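As a hedged, editor-added sketch of how a client could call this API (the `text` field is an assumption inferred from the function body; `recipient` and the `/api/messages` route come from the code above):

```javascript
// Minimal sketch: post a chat message to the sendMessage API.
// An empty recipient broadcasts to everyone; a user name sends privately.
fetch("/api/messages", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    text: "Hello, everyone!", // assumed payload field
    recipient: "", // set to a user name for a private message
  }),
});
```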
The web app also requires an HTTP API to send chat messages. You'll create an HT
The chat application's UI is a simple single-page application (SPA) created with the Vue JavaScript framework using [ASP.NET Core SignalR JavaScript client](/aspnet/core/signalr/javascript-client). For simplicity, the function app hosts the web page. In a production environment, you can use [Static Web Apps](https://azure.microsoft.com/products/app-service/static) to host the web page.
-1. Create a new folder named *content* in the root directory of your function project.
-1. In the *content* folder, create a new file named *index.html*.
+1. Create a new folder named _content_ in the root directory of your function project.
+1. In the _content_ folder, create a new file named _index.html_.
1. Copy and paste the content of [index.html](https://github.com/aspnet/AzureSignalR-samples/blob/da0aca70f490f3d8f4c220d0c88466b6048ebf65/samples/ServerlessChatWithAuth/content/index.html) to your file. Save the file.

1. From the root project folder, create an HTTP trigger function named `index` from the template with the command:
- ```bash
- func new --name index --template "Http trigger"
- ```
+
+ ```bash
+ func new --name index --template "Http trigger"
+ ```
1. Modify the content of `index/index.js` to the following:
- ```js
- const fs = require('fs');
-
- module.exports = async function (context, req) {
-    const fileContent = fs.readFileSync('content/index.html', 'utf8');
-
- context.res = {
- // status: 200, /* Defaults to 200 */
- body: fileContent,
- headers: {
- 'Content-Type': 'text/html'
- },
- };
- }
- ```
- The function reads the static web page and returns it to the user.
-
-1. Open *index/function.json*, change the `authLevel` of the bindings to `anonymous`. Now the whole file looks like this:
- ```json
- {
- "bindings": [
- {
- "authLevel": "anonymous",
- "type": "httpTrigger",
- "direction": "in",
- "name": "req",
- "methods": ["get", "post"]
- },
- {
- "type": "http",
- "direction": "out",
- "name": "res"
- }
- ]
- }
- ```
+
+ ```js
+ const fs = require("fs");
+
+ module.exports = async function (context, req) {
+    const fileContent = fs.readFileSync("content/index.html", "utf8");
+
+ context.res = {
+ // status: 200, /* Defaults to 200 */
+ body: fileContent,
+ headers: {
+ "Content-Type": "text/html",
+ },
+ };
+ };
+ ```
+
+ The function reads the static web page and returns it to the user.
+
+1. Open _index/function.json_, change the `authLevel` of the bindings to `anonymous`. Now the whole file looks like this:
+
+ ```json
+ {
+ "bindings": [
+ {
+ "authLevel": "anonymous",
+ "type": "httpTrigger",
+ "direction": "in",
+ "name": "req",
+ "methods": ["get", "post"]
+ },
+ {
+ "type": "http",
+ "direction": "out",
+ "name": "res"
+ }
+ ]
+ }
+ ```
1. Now you can test your app locally. Start the function app with the command:
- ```bash
- func start
- ```
+
+ ```bash
+ func start
+ ```
1. Open **http://localhost:7071/api/index** in your web browser. You should be able to see a web page as follows:
- :::image type="content" source="./media/signalr-tutorial-authenticate-azure-functions/local-chat-client-ui.png" alt-text="Screenshot of local chat client web user interface.":::
+ :::image type="content" source="./media/signalr-tutorial-authenticate-azure-functions/local-chat-client-ui.png" alt-text="Screenshot of local chat client web user interface.":::
1. Enter a message in the chat box and press enter. The message is displayed on the web page. Because the user name of the SignalR client isn't set, we send all messages as "anonymous".

[Having issues? Let us know.](https://aka.ms/asrs/qsauth)

## Deploy to Azure and enable authentication
You have been running the function app and chat application locally. You'll now
So far, the chat app works anonymously. In Azure, you'll use [App Service Authentication](../app-service/overview-authentication-authorization.md) to authenticate the user. The user ID or username of the authenticated user is passed to the `SignalRConnectionInfo` binding to generate connection information authenticated as the user.
-1. Open *negotiate/function.json*.
+1. Open _negotiate/function.json_.
1. Insert a `userId` property to the `SignalRConnectionInfo` binding with value `{headers.x-ms-client-principal-name}`. This value is a [binding expression](../azure-functions/functions-triggers-bindings.md) that sets the user name of the SignalR client to the name of the authenticated user. The binding should now look like this.
- ```json
- {
- "type": "signalRConnectionInfo",
- "name": "connectionInfo",
- "userId": "{headers.x-ms-client-principal-name}",
- "hubName": "default",
- "direction": "in"
- }
- ```
+ ```json
+ {
+ "type": "signalRConnectionInfo",
+ "name": "connectionInfo",
+ "userId": "{headers.x-ms-client-principal-name}",
+ "hubName": "default",
+ "direction": "in"
+ }
+ ```
1. Save the file.
-
### Deploy function app to Azure
+
Deploy the function app to Azure with the following command:

```bash
func azure functionapp publish <your-function-app-name> --publish-local-settings
```
-The `--publish-local-settings` option publishes your local settings from the *local.settings.json* file to Azure, so you don't need to configure them in Azure again.
-
+The `--publish-local-settings` option publishes your local settings from the _local.settings.json_ file to Azure, so you don't need to configure them in Azure again.
### Enable App Service Authentication
-Azure Functions supports authentication with Azure Active Directory, Facebook, Twitter, Microsoft account, and Google. You will use **Microsoft** as the identity provider for this tutorial.
+Azure Functions supports authentication with Microsoft Entra ID, Facebook, Twitter, Microsoft account, and Google. You will use **Microsoft** as the identity provider for this tutorial.
1. Go to the resource page of your function app on Azure portal.
1. Select **Settings** -> **Authentication**.
-1. Select **Add identity provider**.
- :::image type="content" source="./media/signalr-tutorial-authenticate-azure-functions/function-app-authentication.png" alt-text="Screenshot of the Function App Authentication page.":::
+1. Select **Add identity provider**.
+ :::image type="content" source="./media/signalr-tutorial-authenticate-azure-functions/function-app-authentication.png" alt-text="Screenshot of the Function App Authentication page.":::
1. Select **Microsoft** from the **Identity provider** list.
- :::image type="content" source="media/signalr-tutorial-authenticate-azure-functions/function-app-select-identity-provider.png" alt-text="Screenshot of 'Add an identity provider' page.":::
+ :::image type="content" source="media/signalr-tutorial-authenticate-azure-functions/function-app-select-identity-provider.png" alt-text="Screenshot of 'Add an identity provider' page.":::
- Azure Functions supports authentication with Azure Active Directory, Facebook, Twitter, Microsoft account, and Google. For more information about the supported identity providers, see the following articles:
+ Azure Functions supports authentication with Microsoft Entra ID, Facebook, Twitter, Microsoft account, and Google. For more information about the supported identity providers, see the following articles:
- - [Azure Active Directory](../app-service/configure-authentication-provider-aad.md)
- - [Facebook](../app-service/configure-authentication-provider-facebook.md)
- - [Twitter](../app-service/configure-authentication-provider-twitter.md)
- - [Microsoft account](../app-service/configure-authentication-provider-microsoft.md)
- - [Google](../app-service/configure-authentication-provider-google.md)
+ - [Microsoft Entra ID](../app-service/configure-authentication-provider-aad.md)
+ - [Facebook](../app-service/configure-authentication-provider-facebook.md)
+ - [Twitter](../app-service/configure-authentication-provider-twitter.md)
+ - [Microsoft account](../app-service/configure-authentication-provider-microsoft.md)
+ - [Google](../app-service/configure-authentication-provider-google.md)
1. Select **Add** to complete the settings. An app registration will be created, which associates your identity provider with your function app.
Congratulations! You've deployed a real-time, serverless chat app!
To clean up the resources created in this tutorial, delete the resource group using the Azure portal.
->[!CAUTION]
+> [!CAUTION]
> Deleting the resource group deletes all resources contained within it. If the resource group contains resources outside the scope of this tutorial, they will also be deleted.

[Having issues? Let us know.](https://aka.ms/asrs/qsauth)
To clean up the resources created in this tutorial, delete the resource group us
In this tutorial, you learned how to use Azure Functions with Azure SignalR Service. Read more about building real-time serverless applications with SignalR Service bindings for Azure Functions.
-> [!div class="nextstepaction"]
+> [!div class="nextstepaction"]
> [Real-time apps with Azure SignalR Service and Azure Functions](signalr-concept-azure-functions.md)

[Having issues? Let us know.](https://aka.ms/asrs/qsauth)
azure-video-indexer Accounts Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/accounts-overview.md
Title: Azure AI Video Indexer accounts description: This article gives an overview of Azure AI Video Indexer accounts and provides links to other articles for more details. Previously updated : 08/07/2023- Last updated : 08/29/2023++ # Azure AI Video Indexer account types
azure-video-indexer Add Contributor Role On The Media Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/add-contributor-role-on-the-media-service.md
Title: Add Contributor role on the Media Services account description: This topic explains how to add contributor role on the Media Services account.- Last updated 10/13/2021 ++ # Add contributor role to Media Services
azure-video-indexer Audio Effects Detection Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/audio-effects-detection-overview.md
Title: Introduction to Azure AI Video Indexer audio effects detection description: An introduction to Azure AI Video Indexer audio effects detection component responsibly.--- Last updated 06/15/2022 ++ # Audio effects detection
azure-video-indexer Audio Effects Detection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/audio-effects-detection.md
Title: Enable audio effects detection
description: Audio Effects Detection is one of Azure AI Video Indexer AI capabilities that detects various acoustics events and classifies them into different acoustic categories (for example, gunshot, screaming, crowd reaction and more). Last updated 05/24/2023-++ # Enable audio effects detection (preview)
azure-video-indexer Clapperboard Metadata https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/clapperboard-metadata.md
Title: Enable and view a clapperboard with extracted metadata
-description: Learn about how to enable and view a clapperboard with extracted metadata.
--
+ Title: Enable and view a clapper board with extracted metadata
+description: Learn about how to enable and view a clapper board with extracted metadata.
Last updated 09/20/2022-++
-# Enable and view a clapperboard with extracted metadata (preview)
+# Enable and view a clapper board with extracted metadata (preview)
-A clapperboard insight is used to detect clapperboard instances and information written on each. For example, *head* or *tail* (the board is upside-down), *production*, *roll*, *scene*, *take*, *date*, etc. The [clapperboard](https://en.wikipedia.org/wiki/Clapperboard)'s extracted metadata is most useful to customers involved in the movie post-production process.
+A clapper board insight is used to detect clapper board instances and information written on each. For example, *head* or *tail* (the board is upside-down), *production*, *roll*, *scene*, *take*, *date*, etc. The [clapper board](https://en.wikipedia.org/wiki/Clapperboard)'s extracted metadata is most useful to customers involved in the movie post-production process.
-When the movie is being edited, a clapperboard is removed from the scene; however, the information that was written on the clapperboard is important. Azure AI Video Indexer extracts the data from clapperboards, preserves, and presents the metadata.
+When the movie is being edited, a clapper board is removed from the scene; however, the information that was written on the clapper board is important. Azure AI Video Indexer extracts the data from clapper boards, preserves, and presents the metadata.
-This article shows how to enable the post-production insight and view clapperboard instances with extracted metadata.
+This article shows how to enable the post-production insight and view clapper board instances with extracted metadata.
## View the insight
After the file has been uploaded and indexed, if you want to view the timeline o
> [!div class="mx-imgBorder"]
> :::image type="content" source="./media/slate-detection-process/post-production-checkmark.png" alt-text="This image shows the post-production checkmark needed to view clapperboards.":::
-### Clapperboards
+### Clapper boards
-Clapperboards contain fields with titles (for example, *production*, *roll*, *scene*, *take*) and values (content) associated with each title.
+Clapper boards contain fields with titles (for example, *production*, *roll*, *scene*, *take*) and values (content) associated with each title.
-For example, take this clapperboard:
+For example, take this clapper board:
> [!div class="mx-imgBorder"]
> :::image type="content" source="./media/slate-detection-process/clapperboard.png" alt-text="This image shows a clapperboard.":::
-In the following example the board contains the following fields:
+In the following example, the board contains the following fields:
|title|content| |||
In the following example the board contains the following fields:
#### View the insight
-To see the instances on the website, select **Insights** and scroll to **Clapperboards**. You can hover over each clapperboard, or unfold **Show/Hide clapperboard info** and see the metadata:
+To see the instances on the website, select **Insights** and scroll to **Clapper boards**. You can hover over each clapper board, or unfold **Show/Hide clapper board info** and see the metadata:
> [!div class="mx-imgBorder"]
> :::image type="content" source="./media/slate-detection-process/clapperboard-metadata.png" alt-text="This image shows the clapperboard metadata.":::

#### View the timeline
-If you checked the **Post-production** insight, You can also find the clapperboard instance and its timeline (includes time, fields' values) on the **Timeline** tab.
+If you checked the **Post-production** insight, you can also find the clapper board instance and its timeline (includes time, fields' values) on the **Timeline** tab.
#### View JSON
The following table describes fields found in json:
|Name|Description| |||
-|`id`|The clapperboard ID.|
+|`id`|The clapper board ID.|
|`thumbnailId`|The ID of the thumbnail.|
|`isHeadSlate`|The value stands for head or tail (the board is upside-down) of the clapper board: `true` or `false`.|
|`fields`|The fields found in the clapper board; also each field's name and value.|
|`instances`|A list of time ranges where this element appeared.|
-## Clapperboard limitations
+## Clapper board limitations
The values may not always be correctly identified by the detection algorithm. Here are some limitations:

- The titles of the fields appearing on the clapper board are optimized to identify the most popular fields appearing on top of clapper boards.
- Handwritten text or digital digits may not be correctly identified by the fields detection algorithm.
- The algorithm is optimized to identify fields' categories that appear horizontally.
-- The clapperboard may not be detected if the frame is blurred or that the text written on it can't be identified by the human eye.
-- Empty fields' values may lead to to wrong fields categories.
+- The clapper board may not be detected if the frame is blurred or that the text written on it can't be identified by the human eye.
+- Empty fields' values may lead to wrong fields categories.
<!-- If a part of a clapper board is hidden a value with the highest confidence is shown. -->

## Next steps
azure-video-indexer Concepts Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/concepts-overview.md
Title: Azure AI Video Indexer terminology & concepts overview
description: This article gives a brief overview of Azure AI Video Indexer terminology and concepts. Last updated 08/02/2023-++ # Azure AI Video Indexer terminology & concepts
azure-video-indexer Connect Classic Account To Arm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/connect-classic-account-to-arm.md
Title: Connect a classic Azure AI Video Indexer account to ARM description: This topic explains how to connect an existing classic paid Azure AI Video Indexer account to an ARM-based account - Last updated 03/20/2023 ++ # Connect an existing classic paid Azure AI Video Indexer account to ARM-based account
azure-video-indexer Connect To Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/connect-to-azure.md
Title: Create a classic Azure AI Video Indexer account connected to Azure
description: Learn how to create a classic Azure AI Video Indexer account connected to Azure. Last updated 08/24/2022- ++ # Create a classic Azure AI Video Indexer account
azure-video-indexer Considerations When Use At Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/considerations-when-use-at-scale.md
Title: Things to consider when using Azure AI Video Indexer at scale - Azure
description: This topic explains what things to consider when using Azure AI Video Indexer at scale. Last updated 07/03/2023-++ # Things to consider when using Azure AI Video Indexer at scale
azure-video-indexer Create Account Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/create-account-portal.md
Title: Create an Azure AI Video Indexer account description: This article explains how to create an account for Azure AI Video Indexer. - Last updated 06/10/2022++ # Tutorial: create an ARM-based account with Azure portal
azure-video-indexer Customize Brands Model Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/customize-brands-model-overview.md
Title: Customize a Brands model in Azure AI Video Indexer - Azure description: This article gives an overview of what is a Brands model in Azure AI Video Indexer and how to customize it. - Last updated 12/15/2019-++ # Customize a Brands model in Azure AI Video Indexer
azure-video-indexer Customize Brands Model With Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/customize-brands-model-with-api.md
Title: Customize a Brands model with Azure AI Video Indexer API description: Learn how to customize a Brands model with the Azure AI Video Indexer API.-- Last updated 01/14/2020+ + # Customize a Brands model with the Azure AI Video Indexer API
azure-video-indexer Customize Brands Model With Website https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/customize-brands-model-with-website.md
Title: Customize a Brands model with the Azure AI Video Indexer website description: Learn how to customize a Brands model with the Azure AI Video Indexer website.-- Last updated 12/15/2019+ + # Customize a Brands model with the Azure AI Video Indexer website
azure-video-indexer Customize Content Models Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/customize-content-models-overview.md
description: This article gives links to the conceptual articles that explain th
Last updated 06/26/2019 + # Customizing content models in Azure AI Video Indexer
azure-video-indexer Customize Language Model Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/customize-language-model-overview.md
Title: Customize a Language model in Azure AI Video Indexer - Azure description: This article gives an overview of what is a Language model in Azure AI Video Indexer and how to customize it.-- - Last updated 11/23/2022++

# Customize a Language model with Azure AI Video Indexer

Azure AI Video Indexer supports automatic speech recognition through integration with the Microsoft [Custom Speech Service](https://azure.microsoft.com/services/cognitive-services/custom-speech-service/). You can customize the Language model by uploading adaptation text, namely text from the domain whose vocabulary you'd like the engine to adapt to. Once you train your model, new words appearing in the adaptation text will be recognized, assuming default pronunciation, and the Language model will learn new probable sequences of words. See the list of languages supported by Azure AI Video Indexer in [supported languages](language-support.md).
-Let's take a word that is highly specific, like *"Kubernetes"* (in the context of Azure Kubernetes service), as an example. Since the word is new to Azure AI Video Indexer, it is recognized as *"communities"*. You need to train the model to recognize it as *"Kubernetes"*. In other cases, the words exist, but the Language model is not expecting them to appear in a certain context. For example, *"container service"* is not a 2-word sequence that a non-specialized Language model would recognize as a specific set of words.
+Let's take a word that is highly specific, like *"Kubernetes"* (in the context of Azure Kubernetes service), as an example. Since the word is new to Azure AI Video Indexer, it's recognized as *"communities"*. You need to train the model to recognize it as *"Kubernetes"*. In other cases, the words exist, but the Language model isn't expecting them to appear in a certain context. For example, *"container service"* isn't a 2-word sequence that a nonspecialized Language model would recognize as a specific set of words.
-There are 2 ways to customize a language model:
+There are two ways to customize a language model:
-- **Option 1**: Edit the transcript that was generated by Azure AI Video Indexer. By editing and correcting the transcript, you are training a language model to provide improved results in the future.
+- **Option 1**: Edit the transcript that was generated by Azure AI Video Indexer. By editing and correcting the transcript, you're training a language model to provide improved results in the future.
- **Option 2**: Upload text file(s) to train the language model. The upload file can either contain a list of words as you would like them to appear in the Video Indexer transcript or the relevant words included naturally in sentences and paragraphs. As better results are achieved with the latter approach, it's recommended for the upload file to contain full sentences or paragraphs related to your content (see the illustrative sample after this list).

> [!Important]
> Do not include in the upload file the words or sentences as currently incorrectly transcribed (for example, *"communities"*) as this will negate the intended impact.
> Only include the words as you would like them to appear (for example, *"Kubernetes"*).
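As an illustrative sample only (content invented for this example), an adaptation file for the *Kubernetes* scenario above might contain full sentences such as:

```
Kubernetes is an open-source system for managing containerized applications.
You can create a managed Kubernetes cluster with Azure Kubernetes Service.
Deploy the container service to the Kubernetes cluster.
```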
-You can use the Azure AI Video Indexer APIs or the website to create and edit custom Language models, as described in topics in the [Next steps](#next-steps) section of this topic.
+You can use the Azure AI Video Indexer APIs or the website to create and edit custom Language models, as described in articles in the [Next steps](#next-steps) section of this article.
## Best practices for custom Language models
Azure AI Video Indexer learns based on probabilities of word combinations, so to
* Give enough real examples of sentences as they would be spoken.
* Put only one sentence per line, not more. Otherwise the system will learn probabilities across sentences.
-* It is okay to put one word as a sentence to boost the word against others, but the system learns best from full sentences.
+* It's okay to put one word as a sentence to boost the word against others, but the system learns best from full sentences.
* When introducing new words or acronyms, if possible, give as many examples of usage in a full sentence to give as much context as possible to the system. * Try to put several adaptation options, and see how they work for you. * Avoid repetition of the exact same sentence multiple times. It may create bias against the rest of the input.
-* Avoid including uncommon symbols (~, # @ % &) as they will get discarded. The sentences in which they appear will also get discarded.
+* Avoid including uncommon symbols (~, # @ % &) as they'll get discarded. The sentences in which they appear will also get discarded.
* Avoid putting too large inputs, such as hundreds of thousands of sentences, because doing so will dilute the effect of boosting.

## Next steps
azure-video-indexer Customize Language Model With Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/customize-language-model-with-api.md
Title: Customize a Language model with Azure AI Video Indexer API description: Learn how to customize a Language model with the Azure AI Video Indexer API.-- Last updated 02/04/2020+ + # Customize a Language model with the Azure AI Video Indexer API
Azure AI Video Indexer lets you create custom Language models to customize speec
For a detailed overview and best practices for custom Language models, see [Customize a Language model with Azure AI Video Indexer](customize-language-model-overview.md).
-You can use the Azure AI Video Indexer APIs to create and edit custom Language models in your account, as described in this topic. You can also use the website, as described in [Customize Language model using the Azure AI Video Indexer website](customize-language-model-with-api.md).
+You can use the Azure AI Video Indexer APIs to create and edit custom Language models in your account, as described in this article. You can also use the website, as described in [Customize Language model using the Azure AI Video Indexer website](customize-language-model-with-website.md).
## Create a Language model
The [create a language model](https://api-portal.videoindexer.ai/api-details#api
To upload files to be added to the Language model, you must upload files in the body using FormData in addition to providing values for the required parameters above. There are two ways to do this task:
-* Key will be the file name and value will be the txt file.
-* Key will be the file name and value will be a URL to txt file.
+* Key is the file name and value is the txt file.
+* Key is the file name and value is a URL to txt file.
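For example, here's a hedged sketch of such an upload. It assumes Node.js 18+ run as an ES module (for built-in `fetch`, `FormData`, and `Blob`); the endpoint path, query parameter names, and placeholder values are assumptions, so check the API portal for the exact operation signature:

```javascript
// Minimal sketch: upload an adaptation text file when creating a
// custom language model. Key = file name, value = the txt file content.
import { readFile } from "node:fs/promises";

const location = "trial"; // assumed placeholder
const accountId = "<account-id>"; // assumed placeholder
const accessToken = "<access-token>"; // obtain from the developer portal

const text = await readFile("adaptation.txt", "utf8");
const form = new FormData();
form.append("adaptation.txt", new Blob([text], { type: "text/plain" }), "adaptation.txt");

const res = await fetch(
  `https://api.videoindexer.ai/${location}/Accounts/${accountId}` +
    `/Customization/Language?name=MyModel&language=en-US&accessToken=${accessToken}`,
  { method: "POST", body: form }
);
console.log(res.status);
```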
### Response
The returned `id` is a unique ID used to distinguish between language models, wh
## Delete a Language model
-The [delete a language model](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Delete-Language-Model) API deletes a custom Language model from the specified account. Any video that was using the deleted Language model will keep the same index until you reindex the video. If you reindex the video, you can assign a new Language model to the video. Otherwise, Azure AI Video Indexer will use its default model to reindex the video.
+The [delete a language model](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Delete-Language-Model) API deletes a custom Language model from the specified account. Any video that was using the deleted Language model keeps the same index until you reindex the video. If you reindex the video, you can assign a new Language model to the video. Otherwise, Azure AI Video Indexer uses its default model to reindex the video.
### Response
The [update a Language model](https://api-portal.videoindexer.ai/api-details#api
To upload files to be added to the Language model, you must upload files in the body using FormData in addition to providing values for the required parameters above. There are two ways to do this task:
-* Key will be the file name and value will be the txt file.
-* Key will be the file name and value will be a URL to txt file.
+* Key is the file name and value is the txt file.
+* Key is the file name and value is a URL to txt file.
### Response
The [download a file](https://api-portal.videoindexer.ai/api-details#api=Operati
### Response
-The response will be the download of a text file with the contents of the file in the JSON format.
+The response is the download of a text file with the contents of the file in the JSON format.
## Next steps
azure-video-indexer Customize Language Model With Website https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/customize-language-model-with-website.md
Title: Customize Language model with Azure AI Video Indexer website description: Learn how to customize a Language model with the Azure AI Video Indexer website.-- Last updated 08/10/2020+ + # Customize a Language model with the Azure AI Video Indexer website
azure-video-indexer Customize Person Model Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/customize-person-model-overview.md
description: This article gives an overview of what is a Person model in Azure A
Last updated 05/15/2019 + # Customize a Person model in Azure AI Video Indexer
azure-video-indexer Customize Person Model With Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/customize-person-model-with-api.md
Title: Customize a Person model with Azure AI Video Indexer API description: Learn how to customize a Person model with the Azure AI Video Indexer API.-- Last updated 01/14/2020+ + # Customize a Person model with the Azure AI Video Indexer API
azure-video-indexer Customize Person Model With Website https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/customize-person-model-with-website.md
Title: Customize a Person model with Azure AI Video Indexer website description: Learn how to customize a Person model with the Azure AI Video Indexer website.-- Last updated 05/31/2022-++ # Customize a Person model with the Azure AI Video Indexer website
azure-video-indexer Customize Speech Model Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/customize-speech-model-overview.md
Title: Customize a speech model in Azure AI Video Indexer
description: This article gives an overview of what is a speech model in Azure AI Video Indexer. Last updated 03/06/2023++ # Customize a speech model
Last updated 03/06/2023
Through Azure AI Video Indexer integration with [Azure AI Speech services](../ai-services/speech-service/captioning-concepts.md), a Universal Language Model is utilized as a base model that is trained with Microsoft-owned data and reflects commonly used spoken language. The base model is pretrained with dialects and phonetics representing various common domains. The base model works well in most speech recognition scenarios.
-However, sometimes the base model's transcription doesn't accurately handle some content. In these situations, a customized speech model can be used to improve recognition of domain-specific vocabulary or pronunciation that is specific to your content by providing text data to train the model. Through the process of creating and adapting speech customization models, your content can be properly transcribed. There is no additional charge for using Video Indexers speech customization.
+However, sometimes the base model's transcription doesn't accurately handle some content. In these situations, a customized speech model can be used to improve recognition of domain-specific vocabulary or pronunciation that is specific to your content by providing text data to train the model. Through the process of creating and adapting speech customization models, your content can be properly transcribed. There's no additional charge for using Video Indexer's speech customization.
## When to use a customized speech model?
A dataset including plain text sentences of related text can be used to improve
- Try to have each sentence or keyword on a separate line.
- To increase the weight of a term such as product names, add several sentences that include the term.
- For common phrases that are used in your content, providing many examples is useful because it tells the system to listen for these terms.
-- Avoid including uncommon symbols (~, # @ % &) as they'll get discarded. The sentences in which they appear will also get discarded.
-- Avoid putting too large inputs, such as hundreds of thousands of sentences, because doing so will dilute the effect of boosting.
+- Avoid including uncommon symbols (~, # @ % &) as they get discarded. The sentences in which they appear also get discarded.
+- Avoid putting too large inputs, such as hundreds of thousands of sentences, because doing so dilutes the effect of boosting.
Use this table to ensure that your plain text dataset file is formatted correctly:
azure-video-indexer Customize Speech Model With Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/customize-speech-model-with-api.md
Title: Customize a speech model with the Azure AI Video Indexer API
description: Learn how to customize a speech model with the Azure AI Video Indexer API. Last updated 03/06/2023++ # Customize a speech model with the API
azure-video-indexer Customize Speech Model With Website https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/customize-speech-model-with-website.md
Title: Customize a speech model with Azure AI Video Indexer website
description: Learn how to customize a speech model with the Azure AI Video Indexer website. Last updated 03/06/2023++ # Customize a speech model in the website
azure-video-indexer Deploy With Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/deploy-with-arm-template.md
description: Learn how to create an Azure AI Video Indexer account by using an A
Last updated 05/23/2022-++ # Tutorial: Deploy Azure AI Video Indexer by using an ARM template
You need an Azure Media Services account. You can create one for free through [C
### Option 1: Select the button for deploying to Azure, and fill in the missing parameters
-[![Deploy to Azure](https://aka.ms/deploytoazurebutton)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure-Samples%2Fmedia-services-video-indexer%2Fmaster%2FARM-Quick-Start%2Favam.template.json)
+[![Deploy to Azure](https://aka.ms/deploytoazurebutton)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure-Samples%2Fmedia-services-video-indexer%2Fmaster%2FDeploy-Samples%2FArmTemplates%2Favam.template.json)
-
azure-video-indexer Deploy With Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/deploy-with-bicep.md
Last updated 06/06/2022 + # Tutorial: deploy Azure AI Video Indexer by using Bicep
azure-video-indexer Detect Textual Logo https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/detect-textual-logo.md
Title: Detect textual logo with Azure AI Video Indexer
description: This article gives an overview of Azure AI Video Indexer textual logo detection. Last updated 01/22/2023-++ # How to detect textual logo (preview)
azure-video-indexer Detected Clothing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/detected-clothing.md
Title: Enable detected clothing feature
description: Azure AI Video Indexer detects clothing associated with the person wearing it in the video and provides information such as the type of clothing detected and the timestamp of the appearance (start, end). The API returns the detection confidence level. Last updated 08/07/2023-++ # Enable detected clothing feature (preview)
azure-video-indexer Digital Patterns Color Bars https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/digital-patterns-color-bars.md
Title: Enable and view digital patterns with color bars description: Learn about how to enable and view digital patterns with color bars.-- Last updated 09/20/2022-++ # Enable and view digital patterns with color bars (preview)
azure-video-indexer Edit Speakers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/edit-speakers.md
Title: Edit speakers in the Azure AI Video Indexer website
description: The article demonstrates how to edit speakers with the Azure AI Video Indexer website. Last updated 11/01/2022-++ # Edit speakers with the Azure AI Video Indexer website
azure-video-indexer Edit Transcript Lines Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/edit-transcript-lines-portal.md
Title: View and update transcriptions in Azure AI Video Indexer website description: This article explains how to insert or remove a transcript line in the Azure AI Video Indexer website. It also shows how to view word-level information.- Last updated 05/03/2022++ # View and update transcriptions
azure-video-indexer Emotions Detection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/emotions-detection.md
Title: Azure AI Video Indexer text-based emotion detection overview-
+ Title: Azure AI Video Indexer text-based emotion detection overview
description: This article gives an overview of Azure AI Video Indexer text-based emotion detection.--- Last updated 08/02/2023 ++ # Text-based emotion detection
azure-video-indexer Face Detection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/face-detection.md
Title: Azure AI Video Indexer face detection overview-
-description: This article gives an overview of an Azure AI Video Indexer face detection.
---
+ Title: Face detection overview
+description: Get an overview of face detection in Azure AI Video Indexer.
Last updated 04/17/2023 ++ # Face detection
-> [!IMPORTANT]
-> Face identification, customization and celebrity recognition features access is limited based on eligibility and usage criteria in order to support our Responsible AI principles. Face identification, customization and celebrity recognition features are only available to Microsoft managed customers and partners. Use the [Face Recognition intake form](https://customervoice.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR7en2Ais5pxKtso_Pz4b1_xUQjA5SkYzNDM4TkcwQzNEOE1NVEdKUUlRRCQlQCN0PWcu) to apply for access.
+Face detection, a feature of Azure AI Video Indexer, automatically detects faces in a media file, and then aggregates instances of similar faces into groups. The celebrities recognition model then runs to recognize celebrities.
+
+The celebrities recognition model covers approximately 1 million faces and is based on commonly requested data sources. Faces that Video Indexer doesn't recognize as celebrities are still detected but are left unnamed. You can build your own custom [person model](/azure/azure-video-indexer/customize-person-model-overview) to train Video Indexer to recognize faces that aren't recognized by default.
-Face detection is an Azure AI Video Indexer AI feature that automatically detects faces in a media file and aggregates instances of similar faces into the same group. The celebrities recognition module is then run to recognize celebrities. This module covers approximately one million faces and is based on commonly requested data sources. Faces that aren't recognized by Azure AI Video Indexer are still detected but are left unnamed. Customers can build their own custom [Person modules](/azure/azure-video-indexer/customize-person-model-overview) whereby the Azure AI Video Indexer recognizes faces that aren't recognized by default.
+Face detection insights are generated as a categorized list in a JSON file that includes a thumbnail and either a name or an ID for each face. Selecting a face's thumbnail displays information like the name of the person (if they were recognized), the percentage of the video that the person appears, and the person's biography, if they're a celebrity. You can also scroll between instances in the video where the person appears.
-The resulting insights are generated in a categorized list in a JSON file that includes a thumbnail and either name or ID of each face. Clicking face's thumbnail displays information like the name of the person (if they were recognized), the % of appearances in the video, and their biography if they're a celebrity. It also enables scrolling between the instances in the video.
+> [!IMPORTANT]
+> To support Microsoft Responsible AI principles, access to face identification, customization, and celebrity recognition features is limited and based on eligibility and usage criteria. Face identification, customization, and celebrity recognition features are available to Microsoft managed customers and partners. To apply for access, use the [facial recognition intake form](https://customervoice.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR7en2Ais5pxKtso_Pz4b1_xUQjA5SkYzNDM4TkcwQzNEOE1NVEdKUUlRRCQlQCN0PWcu).
## Prerequisites
-Review [transparency note overview](/legal/azure-video-indexer/transparency-note?context=/azure/azure-video-indexer/context/context)
+Review [Transparency Note for Azure AI Video Indexer](/legal/azure-video-indexer/transparency-note?context=/azure/azure-video-indexer/context/context).
-## General principles
+## General principles
-This article discusses faces detection and the key considerations for making use of this technology responsibly. There are many things you need to consider when deciding how to use and implement an AI-powered feature:
+This article discusses face detection and key considerations for using this technology responsibly. You need to consider many important factors when you decide how to use and implement an AI-powered feature, including:
-- Will this feature perform well in my scenario? Before deploying faces detection into your scenario, test how it performs using real-life data and make sure it can deliver the accuracy you need. -- Are we equipped to identify and respond to errors? AI-powered products and features won't be 100% accurate, so consider how you'll identify and respond to any errors that may occur.
+- Will this feature perform well in your scenario? Before you deploy face detection in your scenario, test how it performs by using real-life data. Make sure that it can deliver the accuracy you need.
+- Are you equipped to identify and respond to errors? AI-powered products and features aren't 100 percent accurate, so consider how you'll identify and respond to any errors that occur.
## Key terms
-|Term|Definition|
+| Term | Definition |
|||
-|Insight |The information and knowledge derived from the processing and analysis of video and audio files that generate different types of insights and can include detected objects, people, faces, keyframes and translations or transcriptions. |
-|Face recognition |The analysis of images to identify the faces that appear in the images. This process is implemented via the Azure AI Face API. |
-|Template |Enrolled images of people are converted to templates, which are then used for facial recognition. Machine-interpretable features are extracted from one or more images of an individual to create that individual's template. The enrollment or probe images aren't stored by Face API and the original images can't be reconstructed based on a template. Template quality is a key determinant on the accuracy of your results. |
-|Enrollment |The process of enrolling images of individuals for template creation so they can be recognized. When a person is enrolled to a verification system used for authentication, their template is also associated with a primary identifier that is used to determine which template to compare with the probe template. High-quality images and images representing natural variations in how a person looks (for instance wearing glasses, not wearing glasses) generate high-quality enrollment templates. |
-|Deep search |The ability to retrieve only relevant video and audio files from a video library by searching for specific terms within the extracted insights.|
+| insight | The information and knowledge that you derive from processing and analyzing video and audio files. The insight can include detected objects, people, faces, keyframes, and translations or transcriptions. |
+| face recognition | Analyzing images to identify the faces that appear in the images. This process is implemented via the Azure AI Face API. |
+| template | Enrolled images of people are converted to templates, which are then used for facial recognition. Machine-interpretable features are extracted from one or more images of an individual to create that individual's template. The enrollment or probe images aren't stored by the Face API, and the original images can't be reconstructed based on a template. Template quality is a key determinant for accuracy in your results. |
+| enrollment | The process of enrolling images of individuals for template creation so that they can be recognized. When a person is enrolled to a verification system that's used for authentication, their template is also associated with a primary identifier that's used to determine which template to compare against the probe template. High-quality images and images that represent natural variations in how a person looks (for instance, wearing glasses and not wearing glasses) generate high-quality enrollment templates. |
+| deep search | The ability to retrieve only relevant video and audio files from a video library by searching for specific terms within the extracted insights.|
-## View the insight
+## View insights
-To see the instances on the website, do the following:
+To see face detection instances on the Azure AI Video Indexer website:
-1. When uploading the media file, go to Video + Audio Indexing, or go to Audio Only or Video + Audio and select Advanced.
-1. After the file is uploaded and indexed, go to Insights and scroll to People.
+1. When you upload the media file, in the **Upload and index** dialog, select **Advanced settings**.
+1. On the left menu, select **People models**. Select a model to apply to the media file.
+1. After the file is uploaded and indexed, go to **Insights** and scroll to **People**.
-To see face detection insight in the JSON file, do the following:
+To see face detection insights in a JSON file:
-1. Select Download -> Insights (JSON).
-1. Copy the `faces` element, under `insights`, and paste it into your JSON viewer.
+1. On the Azure AI Video Indexer website, open the uploaded video.
+1. Select **Download** > **Insights (JSON)**.
+1. Under `insights`, copy the `faces` element and paste it into your JSON viewer.
```json
"faces": [
To see face detection insight in the JSON file, do the following:
]
```
-To download the JSON file via the API, [Azure AI Video Indexer developer portal](https://api-portal.videoindexer.ai/).
+To download the JSON file via the API, go to the [Azure AI Video Indexer developer portal](https://api-portal.videoindexer.ai/).
> [!IMPORTANT]
-> When reviewing face detections in the UI you may not see all faces, we expose only face groups with a confidence of more than 0.5 and the face must appear for a minimum of 4 seconds or 10% * video_duration. Only when these conditions are met we will show the face in the UI and the Insights.json. You can always retrieve all face instances from the Face Artifact file using the api `https://api.videoindexer.ai/{location}/Accounts/{accountId}/Videos/{videoId}/ArtifactUrl[?Faces][&accessToken]`
+> When you review face detections in the UI, you might not see all faces that appear in the video. We expose only face groups that have a confidence of more than 0.5, and the face must appear for a minimum of 4 seconds or 10 percent of the value of `video_duration`. Only when these conditions are met do we show the face in the UI and in the *Insights.json* file. You can always retrieve all face instances from the face artifact file by using the API: `https://api.videoindexer.ai/{location}/Accounts/{accountId}/Videos/{videoId}/ArtifactUrl[?Faces][&accessToken]`.
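As a hedged, editor-added sketch of calling that endpoint (the placeholder values are assumptions, and passing the artifact type as `type=Faces` is an assumption about the bracketed `[?Faces]` option shown above):

```javascript
// Minimal sketch: request the face artifact download URL for a video.
const location = "trial"; // assumed placeholder
const accountId = "<account-id>"; // assumed placeholder
const videoId = "<video-id>"; // assumed placeholder
const accessToken = "<access-token>"; // assumed placeholder

fetch(
  `https://api.videoindexer.ai/${location}/Accounts/${accountId}` +
    `/Videos/${videoId}/ArtifactUrl?type=Faces&accessToken=${accessToken}`
)
  .then((res) => res.json())
  .then((artifactUrl) => console.log(artifactUrl)); // a download URL for the artifact file
```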
-## Face detection components
+## Face detection components
-During the Faces Detection procedure, images in a media file are processed, as follows:
+The following table describes how images in a media file are processed during the face detection procedure:
-|Component|Definition|
+| Component | Definition |
|||
-|Source file | The user uploads the source file for indexing. |
-|Detection and aggregation |The face detector identifies the faces in each frame. The faces are then aggregated and grouped. |
-|Recognition |The celebrities module runs over the aggregated groups to recognize celebrities. If the customer has created their own **persons** module it's also run to recognize people. When people aren't recognized, they're labeled Unknown1, Unknown2 and so on. |
-|Confidence value |Where applicable for well-known faces or faces identified in the customizable list, the estimated confidence level of each label is calculated as a range of 0 to 1. The confidence score represents the certainty in the accuracy of the result. For example, an 82% certainty is represented as an 0.82 score.|
+| source file | The user uploads the source file for indexing. |
+| detection and aggregation | The face detector identifies the faces in each frame. The faces are then aggregated and grouped. |
+| recognition | The celebrities model processes the aggregated groups to recognize celebrities. If you've created your own people model, it also processes groups to recognize other people. If people aren't recognized, they're labeled Unknown1, Unknown2, and so on. |
+| confidence value | Where applicable for well-known faces or for faces that are identified in the customizable list, the estimated confidence level of each label is calculated as a range of 0 to 1. The confidence score represents the certainty in the accuracy of the result. For example, an 82 percent certainty is represented as a 0.82 score. |
-## Example use cases
+## Example use cases
-* Summarizing where an actor appears in a movie or reusing footage by deep searching for specific faces in organizational archives for insight on a specific celebrity.
-* Improved efficiency when creating feature stories at a news or sports agency, for example deep searching for a celebrity or football player in organizational archives.
-* Using faces appearing in the video to create promos, trailers or highlights. Azure AI Video Indexer can assist by adding keyframes, scene markers, timestamps and labeling so that content editors invest less time reviewing numerous files.  
+The following list describes examples of common use cases for face detection:
-## Considerations when choosing a use case
+- Summarize where an actor appears in a movie or reuse footage by deep searching specific faces in organizational archives for insight about a specific celebrity.
+- Get improved efficiency when you create feature stories at a news agency or sports agency. Examples include deep searching a celebrity or a football player in organizational archives.
+- Use faces that appear in a video to create promos, trailers, or highlights. Video Indexer can assist by adding keyframes, scene markers, time stamps, and labeling so that content editors invest less time reviewing numerous files.
-* Carefully consider the accuracy of the results, to promote more accurate detections, check the quality of the video, low quality video might impact the detected insights.
-* Carefully consider when using for law enforcement. People might not be detected if they're small, sitting, crouching, or obstructed by objects or other people. To ensure fair and high-quality decisions, combine face detection-based automation with human oversight.
-* Don't use face detection for decisions that may have serious adverse impacts. Decisions based on incorrect output could have serious adverse impacts. Additionally, it's advisable to include human review of decisions that have the potential for serious impacts on individuals.
+## Considerations for choosing a use case
-When used responsibly and carefully face detection is a valuable tool for many industries. To respect the privacy and safety of others, and to comply with local and global regulations, we recommend the following:  
+Face detection is a valuable tool for many industries when it's used responsibly and carefully. To respect the privacy and safety of others, and to comply with local and global regulations, we recommend that you follow these use guidelines:
-* Always respect an individual’s right to privacy, and only ingest videos for lawful and justifiable purposes.
-* Don't purposely disclose inappropriate content about young children or family members of celebrities or other content that may be detrimental or pose a threat to an individual’s personal freedom.
-* Commit to respecting and promoting human rights in the design and deployment of your analyzed media.
-* When using third party materials, be aware of any existing copyrights or permissions required before distributing content derived from them.
-* Always seek legal advice when using content from unknown sources.
-* Always obtain appropriate legal and professional advice to ensure that your uploaded videos are secured and have adequate controls to preserve the integrity of your content and to prevent unauthorized access.
-* Provide a feedback channel that allows users and individuals to report issues with the service.
-* Be aware of any applicable laws or regulations that exist in your area regarding processing, analyzing, and sharing media containing people.
-* Keep a human in the loop. Don't use any solution as a replacement for human oversight and decision-making.
-* Fully examine and review the potential of any AI model you're using to understand its capabilities and limitations.
+- Carefully consider the accuracy of the results. To promote more accurate detection, check the quality of the video. Low-quality video might affect the insights that are presented.
+- Carefully review results if you use face detection for law enforcement. People might not be detected if they're small, sitting, crouching, or obstructed by objects or other people. To ensure fair and high-quality decisions, combine face detection-based automation with human oversight.
+- Don't use face detection for decisions that might have serious, adverse impacts. Decisions that are based on incorrect output can have serious, adverse impacts. It's advisable to include human review of decisions that have the potential for serious impacts on individuals.
+- Always respect an individual’s right to privacy, and ingest videos only for lawful and justifiable purposes.
+- Don't purposely disclose inappropriate content about young children, family members of celebrities, or other content that might be detrimental to or pose a threat to an individual’s personal freedom.
+- Commit to respecting and promoting human rights in the design and deployment of your analyzed media.
+- If you use third-party materials, be aware of any existing copyrights or required permissions before you distribute content that's derived from them.
+- Always seek legal advice if you use content from an unknown source.
+- Always obtain appropriate legal and professional advice to ensure that your uploaded videos are secured and that they have adequate controls to preserve content integrity and prevent unauthorized access.
+- Provide a feedback channel that allows users and individuals to report issues they might experience with the service.
+- Be aware of any applicable laws or regulations that exist in your area about processing, analyzing, and sharing media that features people.
+- Keep a human in the loop. Don't use any solution as a replacement for human oversight and decision making.
+- Fully examine and review the potential of any AI model that you're using to understand its capabilities and limitations.
-## Next steps
+## Related content
-### Learn More about Responsible AI
+Learn more about Responsible AI:
-- [Microsoft Responsible AI principles](https://www.microsoft.com/ai/responsible-ai?activetab=pivot1%3aprimaryr6)
+- [Microsoft Responsible AI principles](https://www.microsoft.com/ai/responsible-ai?activetab=pivot1%3aprimaryr6)
- [Microsoft Responsible AI resources](https://www.microsoft.com/ai/responsible-ai-resources)
-- [Microsoft Azure Learning courses on Responsible AI](/training/paths/responsible-ai-business-principles/)
+- [Microsoft Azure Learn training courses on Responsible AI](/training/paths/responsible-ai-business-principles/)
- [Microsoft Global Human Rights Statement](https://www.microsoft.com/corporate-responsibility/human-rights-statement?activetab=pivot_1:primaryr5)
-### Contact us
-
-`visupport@microsoft.com`
-
-## Azure AI Video Indexer insights
+Azure AI Video Indexer insights:
- [Audio effects detection](audio-effects-detection.md)
- [OCR](ocr.md)
- [Keywords extraction](keywords.md)
-- [Transcription, translation & language identification](transcription-translation-lid.md)
-- [Labels identification](labels-identification.md)
+- [Transcription, translation, and language identification](transcription-translation-lid.md)
+- [Labels identification](labels-identification.md)
- [Named entities](named-entities.md)
-- [Observed people tracking & matched persons](observed-matched-people.md)
+- [Observed people tracking and matched persons](observed-matched-people.md)
- [Topics inference](topics-inference.md)
azure-video-indexer Face Redaction With Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/face-redaction-with-api.md
Title: Redact faces with Azure Video Indexer API
-description: This article shows how to use the Azure Video Indexer face redaction feature by using an API.
+ Title: Redact faces by using Azure AI Video Indexer API
+description: Learn how to use the Azure AI Video Indexer face redaction feature by using API.
Last updated 08/11/2023++
-# Redact faces with Azure Video Indexer API
+# Redact faces by using Azure AI Video Indexer API
-Azure Video Indexer enables customers to detect and identify faces. Face redaction enables you to modify your video in order to blur faces of selected individuals. A few minutes of footage that contains multiple faces can take hours to redact manually, but with this preset the face redaction process requires just a few simple steps.
+You can use Azure AI Video Indexer to detect and identify faces in video. To modify your video to blur (redact) faces of specific individuals, you can use API.
-This article shows how to do the face redaction with an API. The face redaction API includes a **Face Redaction** preset that offers scalable face detection and redaction (blurring) in the cloud.
+A few minutes of footage that contains multiple faces can take hours to redact manually, but by using presets in Video Indexer API, the face redaction process requires just a few simple steps.
-The following video shows how to redact a video with Azure Video Indexer API.
+This article shows you, step by step, how to redact faces by using the API. Video Indexer API includes a **Face Redaction** preset that offers scalable face detection and redaction (blurring) in the cloud.
-> [!VIDEO https://www.microsoft.com/videoplayer/embed/RW16UBo]
+The following video shows how to redact a video by using Azure AI Video Indexer API.
-The article demonstrates each step of how to redact faces with the API in detail.
+> [!VIDEO https://www.microsoft.com/videoplayer/embed/RW16UBo]
## Compliance, privacy, and security
-As an important [reminder](limited-access-features.md), you must comply with all applicable laws in your use of analytics in Azure Video Indexer.
+As an important [reminder](limited-access-features.md), you must comply with all applicable laws in your use of analytics or insights that you derive by using Video Indexer.
+
+Face service access is limited based on eligibility and usage criteria to support the Microsoft Responsible AI principles. Face service is available only to Microsoft managed customers and partners. Use the [Face Recognition intake form](https://customervoice.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR7en2Ais5pxKtso_Pz4b1_xUQjA5SkYzNDM4TkcwQzNEOE1NVEdKUUlRRCQlQCN0PWcu) to apply for access. For more information, see the [Face limited access page](/legal/cognitive-services/computer-vision/limited-access-identity?context=%2Fazure%2Fcognitive-services%2Fcomputer-vision%2Fcontext%2Fcontext).
+
+## Face redaction terminology and hierarchy
-Face service access is limited based on eligibility and usage criteria in order to support our Responsible AI principles. Face service is only available to Microsoft managed customers and partners. Use the [Face Recognition intake form](https://customervoice.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR7en2Ais5pxKtso_Pz4b1_xUQjA5SkYzNDM4TkcwQzNEOE1NVEdKUUlRRCQlQCN0PWcu) to apply for access. For more information, see the [Face limited access page](/legal/cognitive-services/computer-vision/limited-access-identity?context=%2Fazure%2Fcognitive-services%2Fcomputer-vision%2Fcontext%2Fcontext).
+Face redaction in Video Indexer relies on the output of existing Video Indexer face detection results that we provide in our Video Standard and Advanced Analysis presets.
-## Redactor terminology and hierarchy
+To redact a video, you must first upload a video to Video Indexer and complete an analysis by using the **Standard** or **Advanced** video presets. You can do this by using the [Azure Video Indexer website](https://www.videoindexer.ai/media/library) or [API](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Upload-Video). You can then use the face redaction API to reference this video by using the `videoId` value. We create a new video in which the indicated faces are redacted. Both the video analysis and face redaction are separate billable jobs. For more information, see our [pricing page](https://azure.microsoft.com/pricing/details/video-indexer/).
-The Face Redactor in Video Indexer relies on the output of the existing Video Indexer Face Detection results provided in our Video Standard and Advanced Analysis presets. In order to redact a video, you must first upload a video to Video Indexer and perform an analysis using the **standard** or **Advanced** video presets. Upload a video by using the [Azure Video Indexer website](https://www.videoindexer.ai/media/library) or [API](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Upload-Video). You can then use the Redactor API to reference this video using the `videoId` and we create a new video with the redacted faces. Both the Video Analysis and Face Redaction are separate billable jobs. For more information, see our [pricing page](https://azure.microsoft.com/pricing/details/video-indexer/).
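As a sketch of the prerequisite upload step, a request might look like the following. The parameter names shown here (such as `indexingPreset` and `videoUrl`) are assumptions, so confirm them against the Upload-Video API reference linked earlier:

```http
POST https://api.videoindexer.ai/{location}/Accounts/{accountId}/Videos?name={name}&privacy=Private&indexingPreset=AdvancedVideo&videoUrl={videoUrl}&accessToken={accessToken}
```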
+## Types of blurring
-## Blurring kinds
+You can choose from different types of blurring in face redaction. To select a type, use a name or representative number for the `blurringKind` parameter in the request body:
-The Face Redaction comes with several options, which can be provided in the request body.
+|blurringKind number | blurringKind name | Example |
+||||
+|0| MediumBlur|:::image type="content" source="./media/face-redaction-with-api/medium-blur.png" alt-text="Photo of the Azure AI Video Indexer medium blur.":::|
+|1| HighBlur|:::image type="content" source="./media/face-redaction-with-api/high-blur.png" alt-text="Photo of the Azure AI Video Indexer high blur.":::|
+|2| LowBlur|:::image type="content" source="./media/face-redaction-with-api/low-blur.png" alt-text="Photo of the Azure AI Video Indexer low blur.":::|
+|3| BoundingBox|:::image type="content" source="./media/face-redaction-with-api/bounding-boxes.png" alt-text="Photo of Azure AI Video Indexer bounding boxes.":::|
+|4| Black|:::image type="content" source="./media/face-redaction-with-api/black-boxes.png" alt-text="Photo of Azure AI Video Indexer black boxes kind.":::|
-| Blurring Kind number | Blurring Kind name | Example |
-|--|--|--|
-| 0 | MediumBlur | :::image type="content" source="./media/face-redaction-with-api/medium-blur.png" alt-text="Picture of the Azure Video Indexer medium blur kind."::: |
-| 1 | HighBlur | :::image type="content" source="./media/face-redaction-with-api/high-blur.png" alt-text="Picture of the Azure Video Indexer high blur kind."::: |
-| 2 | LowBlur | :::image type="content" source="./media/face-redaction-with-api/low-blur.png" alt-text="Picture of the Azure Video Indexer low blur kind."::: |
-| 3 | BoundingBox | :::image type="content" source="./media/face-redaction-with-api/bounding-boxes.png" alt-text="Picture of the Azure Video Indexer bounding boxes kind."::: |
-| 4 | Black | :::image type="content" source="./media/face-redaction-with-api/black-boxes.png" alt-text="Picture of the Azure Video Indexer black boxes kind."::: |
+You can specify the kind of blurring in the request body by using the `blurringKind` parameter.
-You can specify the blurring kind in the request body using the `blurringKind`. For example:
+Here's an example:
```json
{
    ...
}
```
-Or when using the BlurringKind number:
+Or, use a number that represents the type of blurring that's described in the preceding table:
```json
{
    ...
}
```
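For illustration, the two variants might look like the following sketches. The nesting of the options under a `faces` object is an assumption, so confirm it against the API reference:

```json
{
    "faces": {
        "blurringKind": "HighBlur"
    }
}
```

```json
{
    "faces": {
        "blurringKind": 1
    }
}
```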
## Filters
-You can apply filters to instruct which face IDs should be blurred. You can specify the IDs of the faces in a comma separated array in the json body. Use the scope to exclude or include these faces for redaction. This way you can achieve a behavior of "redact all faces except these IDs" or "redact only these IDs" by specifying the least number of IDs. See the following examples.
+You can apply filters to set which face IDs to blur. You can specify the IDs of the faces in a comma-separated array in the JSON request body. Use the `scope` parameter to exclude or include these faces for redaction. By specifying IDs, you can either redact all faces *except* the IDs that you indicate or redact *only* those IDs. See examples in the next sections.
### Exclude scope
-Redact all faces except 1001 and 1016, use the `Exclude` scope.
+In the following example, to redact all faces except face IDs 1001 and 1016, use the `Exclude` scope:
```json
{
    ...
}
```
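For illustration, an `Exclude` request body might look like the following sketch. The `filter` and `ids` property names are assumptions based on the description of filters earlier, not confirmed field names:

```json
{
    "faces": {
        "blurringKind": "HighBlur",
        "filter": {
            "ids": [1001, 1016],
            "scope": "Exclude"
        }
    }
}
```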
### Include scope
-Redact only face IDs 1001 and 1016, use the `Include` scope.
+In the following example, to redact only face IDs 1001 and 1016, use the `Include` scope:
```json
{
    ...
}
```
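Under the same assumed shape, the `Include` variant changes only the `scope` value:

```json
{
    "faces": {
        "blurringKind": "HighBlur",
        "filter": {
            "ids": [1001, 1016],
            "scope": "Include"
        }
    }
}
```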
### Redact all faces
-To redact all faces, remove the filter entirely.
+To redact all faces, remove the scope filter:
```json
{
    ...
}
```
-To retrieve the Face ID, you can go to the indexed video and retrieve the [artifact file](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Video-Artifact-Download-Url). This artifact contains a faces.json and a thumbnail zip file with all the faces. You can match the face to the ID and decide which face IDs need to be redacted.
+To retrieve a face ID, you can go to the indexed video and retrieve the [artifact file](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Video-Artifact-Download-Url). The artifact contains a *faces.json* file and a thumbnail .zip file that has all the faces that were detected in the video. You can match the face to the ID and decide which face IDs to redact.
-## Create a redactor job
+## Create a redaction job
-To create a Redactor job, you can invoke the following API call:
+To create a redaction job, you can invoke the following API call:
-```json
+```http
POST https://api.videoindexer.ai/{location}/Accounts/{accountId}/Videos/{videoId}/redact[?name][&priority][&privacy][&externalId][&streamingPreset][&callbackUrl][&accessToken]
```
-The following values are mandatory:
+The following values are required:
| Name | Value | Description |
-|--|--|--|
-| **Accountid** | `{accountId}` | The ID of your Video Indexer account. |
-| **Location** | `{location}` | The location of your Video Indexer account that is, Westus. |
-| **AccessToken** | `{token}` | The token with Account Contributor rights generated through the [Azure Resource Manager](/rest/api/videoindexer/stable/generate/access-token?tabs=HTTP) REST API. |
-| **Videoid** | `{videoId}` | The video ID of the source video to redact. You can retrieve the video ID using the [List Video](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=List-Videos) API. |
-| **Name** | `{name}` | The name of the new redacted video. |
+||||
+|`Accountid` |`{accountId}`| The ID of your Video Indexer account. |
+| `Location` |`{location}`| The Azure region where your Video Indexer account is located. For example, westus. |
+|`AccessToken` |`{token}`| The token that has Account Contributor rights generated through the [Azure Resource Manager](/rest/api/videoindexer/stable/generate/access-token?tabs=HTTP) REST API. |
+| `Videoid` |`{videoId}`| The video ID of the source video to redact. You can retrieve the video ID by using the [List Video](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=List-Videos) API. |
+| `Name` |`{name}`|The name of the new, redacted video. |
-A sample request would be:
+Here's an example of a request:
-```
-https://api.videoindexer.ai/westeurope/Accounts/<id>/Videos/<id>/redact?priority=Low&name=testredaction&privacy=Private&streamingPreset=Default
+```http
+https://api.videoindexer.ai/westeurope/Accounts/{id}/Videos/{id}/redact?priority=Low&name=testredaction&privacy=Private&streamingPreset=Default
```
-We can specify the token as authorization header with a key value type of `bearertoken:{token}` or you can provide it as query param using `?token={token}`
+You can specify the token as an authorization header that has a key value type of `bearertoken:{token}`, or you can provide it as a query parameter by using `?token={token}`.
-Additionally we need to add a request body in json format with the redaction job options that is:
+You also need to add a request body in JSON format with the redaction job options to apply. Here's an example:
```json
{
    ...
}
```
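For illustration, a minimal request body could carry just the blurring option; a filter object like the ones sketched earlier can be added as needed. As before, the nesting under `faces` is an assumption:

```json
{
    "faces": {
        "blurringKind": "MediumBlur"
    }
}
```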
-When successful you receive an HTTP 202 ACCEPTED.
+When the request is successful, you receive the response `HTTP 202 ACCEPTED`.
## Monitor job status
-In the response of the job creation request, you receive an HTTP header `Location` with a URL to the job. You can perform a GET request to this URL with the same token to see the status of the redaction job. An example URL would be:
+In the response of the job creation request, you receive an HTTP header `Location` that has a URL to the job. You can use the same token to make a GET request to this URL to see the status of the redaction job.
-```
+Here's an example URL:
+
+```http
https://api.videoindexer.ai/westeurope/Accounts/<id>/Jobs/<id>
```
-Response
+Here's an example response:
```json
{
    ...
}
```
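As a sketch of what the job-status payload might contain (the field names and values here are illustrative, not confirmed):

```json
{
    "creationTime": "2023-08-11T11:22:57Z",
    "lastUpdateTime": "2023-08-11T11:23:01Z",
    "progress": 20,
    "jobType": "Redaction",
    "state": "Processing"
}
```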
-Calling the same URL once the redaction job has completed, you get a Storage SAS URL to the redacted video again in the `Location` header. For instance:
+If you call the same URL when the redaction job is completed, in the `Location` header, you get a storage shared access signature (SAS) URL to the redacted video. For example:
-```
-https://api.videoindexer.ai/westeurope/Accounts/<id>/Videos/<id>/SourceFile/DownloadUrl
+```http
+https://api.videoindexer.ai/westeurope/Accounts/<id>/Videos/<id>/SourceFile/DownloadUrl
```
-This URL will redirect to the mp4 stored on the Azure Storage Account.
+This URL redirects to the .mp4 file that's stored in the Azure Storage account.
-## FAQ
+## FAQs
| Question | Answer |
-|--|--|
-| Can I upload a video and redact in one operation? | No, you need to first upload and analyze a video using the Index Video API and reference the indexed video in your redaction job. |
-| Can I use the [Azure Video Indexer website](https://www.videoindexer.ai/) to redact a video? | No, Currently you can only use the API to create redaction jobs. |
-| Can I play back the redacted video using the Video Indexer [website](https://www.videoindexer.ai/)? | Yes, the redacted video is visible in the Video Indexer like any other indexed video, however it doesn't contain any insights. |
-| How do I delete a redacted video? | You can use the [Delete Video](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Delete-Video) API and provide the `Videoid` of the redacted video. |
-| Do I need to pass Facial Identification gating to use Redactor? | Unless you're a US Police Department, no, even when you’re gated we continue to offer Face Detection. We don't offer Face Identification when gated. You can however redact all faces in a video with just the Face Detection. |
-| Will the Face Redaction overwrite my original video? | No, the Redaction job creates a new video output file. |
-| Not all faces are properly redacted. What can I do? | Redaction relies on the initial Face Detection and tracking output of the Analysis pipeline. While we detect all faces most of the time, there can be circumstances where we haven't detected a face. There can be several reasons like face angle, number of frames the face was present, and quality of the source video. For more information, see our [Face insights](face-detection.md) documentation. |
-| Can I redact other objects than faces? | No, currently we only have face redaction. If you have the need for other objects, provide feedback to our product in the [Azure User Voice](https://feedback.azure.com/d365community/forum/8952b9e3-e03b-ec11-8c62-00224825aadf) channel. |
-| How Long is a SAS URL valid to download the redacted video? | <!--The SAS URL is valid for xxxx. -->To download the redacted video after the SAS URL expired, you need to call the initial Job status URL. It's best to keep these `Jobstatus` URLs in a database in your backend for future reference. |
+|||
+| Can I upload a video and redact in one operation? | No. You need to first upload and analyze a video by using Video Indexer API. Then, reference the indexed video in your redaction job. |
+| Can I use the [Azure AI Video Indexer website](https://www.videoindexer.ai/) to redact a video? | No. Currently you can use only the API to create a redaction job.|
+| Can I play back the redacted video by using the Video Indexer [website](https://www.videoindexer.ai/)?| Yes. The redacted video is visible on the Video Indexer website like any other indexed video, but it doesn't contain any insights. |
+| How do I delete a redacted video? | You can use the [Delete Video](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Delete-Video) API and provide the `Videoid` value for the redacted video. |
+| Do I need to pass facial identification gating to use face redaction? | Unless you represent a police department in the United States, no. Even if you’re gated, we continue to offer face detection. We don't offer face identification if you're gated. However, you can redact all faces in a video by using only face detection. |
+| Will face redaction overwrite my original video? | No. The face redaction job creates a new video output file. |
+| Not all faces are properly redacted. What can I do? | Redaction relies on the initial face detection and tracking output of the analysis pipeline. Although we detect all faces most of the time, there are circumstances in which we can't detect a face. Factors like the face angle, the number of frames in which the face is present, and the quality of the source video affect the quality of face redaction. For more information, see [Face insights](face-detection.md). |
+| Can I redact objects other than faces? | No. Currently, we offer only face redaction. If you have a need to redact other objects, you can provide feedback about our product in the [Azure User Voice](https://feedback.azure.com/d365community/forum/8952b9e3-e03b-ec11-8c62-00224825aadf) channel. |
+| How long is a SAS URL valid to download the redacted video? |<!--The SAS URL is valid for xxxx. --> To download the redacted video after the SAS URL expires, you need to call the initial job status URL. It's best to keep these `Jobstatus` URLs in a database in your back end for future reference. |
## Error codes
-### Response: 404 Not Found
+The following sections describe errors that might occur when you use face redaction.
-Account not found or video not found.
+### Response: 404 Not Found
-**Response headers**
+The account wasn't found or the video wasn't found.
-| Name | Required | Type | Description |
-|--|--|--|--|
-| x-ms-request-id | false | string | A globally unique identifier (GUID) for the request, assigned by the server for instrumentation purposes. The server makes sure all logs associated with handling the request can be linked to the server request ID. A client can provide this request ID in support tickets so support engineers can find the logs linked to this particular request. The server makes sure this request ID never repeats itself. |
+#### Response headers
-**ErrorResponse**
+| Name | Required | Type | Description |
+| - | - | - | - |
+| `x-ms-request-id` | false | string | A globally unique identifier (GUID) for the request is assigned by the server for instrumentation purposes. The server makes sure that all logs that are associated with handling the request can be linked to the server request ID. A client can provide this request ID in a support ticket so that support engineers can find the logs that are linked to this specific request. The server makes sure that the request ID is unique for each job. |
-| Name | Required | Type |
-|--|--|--|
-| ErrorType | false | ErrorType |
-| Message | false | string |
+#### Response body
+| Name | Required | Type |
+| - | - | - |
+| `ErrorType` | false | `ErrorType` |
+| `Message` | false | string |
-*default*
+#### Default JSON
```json
{
    "ErrorType": "...",
    "Message": "..."
}
```
Invalid input or can't redact the video since its original upload failed. Please upload the video again.
-**Response headers**
+Invalid input or can't redact the video because its original upload failed. Upload the video again.
-| Name | Required | Type | Description |
-|--|--|--|--|
-| x-ms-request-id | false | string | A globally unique identifier (GUID) for the request, assigned by the server for instrumentation purposes. The server makes sure all logs associated with handling the request can be linked to the server request ID. A client can provide this request ID in support tickets so support engineers can find the logs linked to this particular request. The server makes sure this request ID never repeats itself. |
+#### Response headers
-**ErrorResponse**
+| Name | Required | Type | Description |
+| - | - | - | - |
+| `x-ms-request-id` | false | string | A GUID for the request is assigned by the server for instrumentation purposes. The server makes sure that all logs that are associated with handling the request can be linked to the server request ID. A client can provide this request ID in a support ticket so that support engineers can find the logs that are linked to this specific request. The server makes sure that the request ID is unique for each job. |
+
+#### Response body
| Name | Required | Type |
-|--|--|--|
-| ErrorType | false | ErrorType |
-| Message | false | string |
+| - | - | - |
+| `ErrorType` | false | `ErrorType` |
+| `Message` | false | string |
-*default*
+#### Default JSON
```json
{
    "ErrorType": "...",
    "Message": "..."
}
```
-### Response: 409 Conflict
+### Response: 409 Conflict
-Video is already being indexed.
+The video is already being indexed.
-**Response headers**
+#### Response headers
-| Name | Required | Type | Description |
-|--|--|--|--|
-| x-ms-request-id | false | string | A globally unique identifier (GUID) for the request, assigned by the server for instrumentation purposes. The server makes sure all logs associated with handling the request can be linked to the server request ID. A client can provide this request ID in support tickets so support engineers can find the logs linked to this particular request. The server makes sure this request ID never repeats itself. |
+| Name | Required | Type | Description |
+| - | - | - | - |
+| `x-ms-request-id` | false | string | A GUID for the request is assigned by the server for instrumentation purposes. The server makes sure that all logs that are associated with handling the request can be linked to the server request ID. A client can provide this request ID in a support ticket so that support engineers can find the logs that are linked to this specific request. The server makes sure that the request ID is unique for each job.|
-**ErrorResponse**
+#### Response body
| Name | Required | Type |
-|--|--|--|
-| ErrorType | false | ErrorType |
-| Message | false | string |
+| - | - | - |
+| `ErrorType` | false | `ErrorType` |
+| `Message` | false | string |
-*default*
+#### Default JSON
```json
{
    "ErrorType": "...",
    "Message": "..."
}
```
### Response: 401 Unauthorized
-**Response headers**
+The access token isn't authorized to access the account.
+
+#### Response headers
-| Name | Required | Type | Description |
-|--|--|--|--|
-| x-ms-request-id | false | string | A globally unique identifier (GUID) for the request, assigned by the server for instrumentation purposes. The server makes sure all logs associated with handling the request can be linked to the server request ID. A client can provide this request ID in support tickets so support engineers can find the logs linked to this particular request. The server makes sure this request ID never repeats itself. |
+| Name | Required | Type | Description |
+| - | - | - | - |
+| `x-ms-request-id` | false | string | A GUID for the request is assigned by the server for instrumentation purposes. The server makes sure that all logs that are associated with handling the request can be linked to the server request ID. A client can provide this request ID in a support ticket so that support engineers can find the logs that are linked to this specific request. The server makes sure that the request ID is unique for each job. |
-**ErrorResponse**
+#### Response body
| Name | Required | Type |
-|--|--|--|
-| ErrorType | false | ErrorType |
-| Message | false | string |
+| - | - | - |
+| `ErrorType` | false | `ErrorType` |
+| `Message` | false | string |
-*default*
+#### Default JSON
```json
{
    "ErrorType": "...",
    "Message": "..."
}
```
### Response: 500 Internal Server Error
-**Response headers**
+An error occurred on the server.
+
+#### Response headers
-| Name | Required | Type | Description |
-|--|--|--|--|
-| x-ms-request-id | false | string | A globally unique identifier (GUID) for the request, assigned by the server for instrumentation purposes. The server makes sure all logs associated with handling the request can be linked to the server request ID. A client can provide this request ID in support tickets so support engineers can find the logs linked to this particular request. The server makes sure this request ID never repeats itself. |
+| Name | Required | Type | Description |
+| - | - | - | - |
+| `x-ms-request-id` | false | string | A GUID for the request is assigned by the server for instrumentation purposes. The server makes sure that all logs that are associated with handling the request can be linked to the server request ID. A client can provide this request ID in a support ticket so that support engineers can find the logs that are linked to this specific request. The server makes sure that the request ID is unique for each job. |
-**ErrorResponse**
+#### Response body
| Name | Required | Type |
-|--|--|--|
-| ErrorType | false | ErrorType |
-| Message | false | string |
+| - | - | - |
+| `ErrorType` | false | `ErrorType` |
+| `Message` | false | string |
-*default*
+#### Default JSON
```json
{
    "ErrorType": "...",
    "Message": "..."
}
```
-### Response: 429 Too many requests
+### Response: 429 Too many requests
-Too many requests were sent, use Retry-After response header to decide when to send the next request.
+Too many requests were sent. Use the `Retry-After` response header to decide when to send the next request.
-**Response headers**
+#### Response headers
-| Name | Required | Type | Description |
-|--|--|--|--|
-| Retry-After | false | integer | A non-negative decimal integer indicating the seconds to delay after the response is received. |
-| x-ms-request-id | false | string | A globally unique identifier (GUID) for the request, assigned by the server for instrumentation purposes. The server makes sure all logs associated with handling the request can be linked to the server request ID. A client can provide this request ID in support tickets so support engineers can find the logs linked to this particular request. The server makes sure this request ID never repeats itself. |
+| Name | Required | Type | Description |
+| - | - | - | - |
+| `Retry-After` | false | integer | A non-negative decimal integer that indicates the number of seconds to delay after the response is received. |
### Response: 504 Gateway Timeout
-Server didn't respond to gateway within expected time.
+The server didn't respond to the gateway within the expected time.
-**Response headers**
+#### Response headers
-| Name | Required | Type | Description |
-|--|--|--|--|
-| x-ms-request-id | false | string | A globally unique identifier (GUID) for the request, assigned by the server for instrumentation purposes. The server makes sure all logs associated with handling the request can be linked to the server request ID. A client can provide this request ID in support tickets so support engineers can find the logs linked to this particular request. The server makes sure this request ID never repeats itself. |
+| Name | Required | Type | Description |
+| - | - | - | - |
+| `x-ms-request-id` | false | string | A GUID for the request is assigned by the server for instrumentation purposes. The server makes sure that all logs that are associated with handling the request can be linked to the server request ID. A client can provide this request ID in a support ticket so that support engineers can find the logs that are linked to this specific request. The server makes sure that the request ID is unique for each job. |
-*default*
+#### Default JSON
```json
{
    "ErrorType": "...",
    "Message": "..."
}
```
## Next steps

-- [Azure Video Indexer](https://azure.microsoft.com/pricing/details/video-indexer/)
-- Also, see [Azure pricing](https://azure.microsoft.com/pricing/) for encoding, streaming, and storage billed by the respective Azure service providers.
+- Learn more about [Video Indexer](https://azure.microsoft.com/pricing/details/video-indexer/).
+- See [Azure pricing](https://azure.microsoft.com/pricing/) for encoding, streaming, and storage billed by Azure service providers.
azure-video-indexer Import Content From Trial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/import-content-from-trial.md
Title: Import your content from the trial account
description: Learn how to import your content from the trial account. Last updated 12/19/2022- ++ # Import content from your trial account to a regular account
azure-video-indexer Indexing Configuration Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/indexing-configuration-guide.md
Title: Indexing configuration guide
description: This article explains the configuration options of indexing process with Azure AI Video Indexer. Last updated 04/27/2023-+
azure-video-indexer Insights Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/insights-overview.md
Title: Azure AI Video Indexer insights overview
description: This article gives a brief overview of Azure AI Video Indexer insights. Last updated 08/02/2023-++ # Azure AI Video Indexer insights
azure-video-indexer Keywords https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/keywords.md
Title: Azure AI Video Indexer keywords extraction overview - description: An introduction to Azure AI Video Indexer keywords extraction component responsibly.--- Last updated 06/15/2022 ++ # Keywords extraction
azure-video-indexer Labels Identification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/labels-identification.md
Title: Azure AI Video Indexer labels identification overview- description: This article gives an overview of an Azure AI Video Indexer labels identification.--- Last updated 06/15/2022 ++ # Labels identification
azure-video-indexer Language Identification Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/language-identification-model.md
Title: Use Azure AI Video Indexer to auto identify spoken languages description: This article describes how the Azure AI Video Indexer language identification model is used to automatically identifying the spoken language in a video. Previously updated : 04/12/2020 Last updated : 08/28/2023 + # Automatically identify the spoken language with language identification model
-Azure AI Video Indexer supports automatic language identification (LID), which is the process of automatically identifying the spoken language content from audio and sending the media file to be transcribed in the dominant identified language.
+Azure AI Video Indexer supports automatic language identification (LID), which is the process of automatically identifying the spoken language from audio content. The media file is transcribed in the dominant identified language.
-See the list of supported by Azure AI Video Indexer languages in [supported langues](language-support.md).
+See the list of languages supported by Azure AI Video Indexer in [supported languages](language-support.md).
-Make sure to review the [Guidelines and limitations](#guidelines-and-limitations) section below.
+Make sure to review the [Guidelines and limitations](#guidelines-and-limitations) section.
## Choosing auto language identification on indexing
-When indexing or [re-indexing](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Re-Index-Video) a video using the API, choose the `auto detect` option in the `sourceLanguage` parameter.
+When indexing or [reindexing](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Re-Index-Video) a video using the API, choose the `auto detect` option in the `sourceLanguage` parameter.
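For example, a reindex call that lets the service detect the language might look like the following sketch. The `auto` value and the exact operation path are assumptions based on the `auto detect` option described here, so check the Re-Index API reference for the confirmed signature:

```http
PUT https://api.videoindexer.ai/{location}/Accounts/{accountId}/Videos/{videoId}/ReIndex?sourceLanguage=auto&accessToken={accessToken}
```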
-When using portal, go to your **Account videos** on the [Azure AI Video Indexer](https://www.videoindexer.ai/) home page and hover over the name of the video that you want to re-index. On the right-bottom corner click the re-index button. In the **Re-index video** dialog, choose *Auto detect* from the **Video source language** drop-down box.
+When using the portal, go to your **Account videos** on the [Azure AI Video Indexer](https://www.videoindexer.ai/) home page and hover over the name of the video that you want to reindex. In the bottom-right corner, select the **Re-index** button. In the **Re-index video** dialog, choose *Auto detect* from the **Video source language** drop-down box.
-![auto detect](./media/language-identification-model/auto-detect.png)
## Model output
Model dominant language is available in the insights JSON as the `sourceLanguage` attribute:

```json
"transcript": [...],
. . .
"sourceLanguageConfidence": 0.8563
- },
+ }
```

## Guidelines and limitations
-* Automatic language identification (LID) supports the following languages:
+Automatic language identification (LID) supports the following languages:
- See the list of supported by Azure AI Video Indexer languages in [supported langues](language-support.md).
-* Even though Azure AI Video Indexer supports Arabic (Modern Standard and Levantine), Hindi, and Korean, these languages are not supported in LID.
-* If the audio contains languages other than the supported list above, the result is unexpected.
-* If Azure AI Video Indexer can't identify the language with a high enough confidence (`>0.6`), the fallback language is English.
-* Currently, there isn't support for file with mixed languages audio. If the audio contains mixed languages, the result is unexpected.
-* Low-quality audio may impact the model results.
-* The model requires at least one minute of speech in the audio.
-* The model is designed to recognize a spontaneous conversational speech (not voice commands, singing, etc.).
+ See the list of languages supported by Azure AI Video Indexer in [supported languages](language-support.md).
+
+- If the audio contains languages other than the [supported list](language-support.md), the result is unexpected.
+- If Azure AI Video Indexer can't identify the language with a high enough confidence (greater than 0.6), the fallback language is English.
+- Currently, there isn't support for files with mixed language audio. If the audio contains mixed languages, the result is unexpected.
+- Low-quality audio may affect the model results.
+- The model requires at least one minute of speech in the audio.
+- The model is designed to recognize a spontaneous conversational speech (not voice commands, singing, and so on).
## Next steps
-* [Overview](video-indexer-overview.md)
-* [Automatically identify and transcribe multi-language content](multi-language-identification-transcription.md)
+- [Overview](video-indexer-overview.md)
+- [Automatically identify and transcribe multi-language content](multi-language-identification-transcription.md)
azure-video-indexer Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/language-support.md
Title: Language support in Azure AI Video Indexer description: This article provides a comprehensive list of language support by service features in Azure AI Video Indexer.-- -- Last updated 03/10/2023+++ # Language support in Azure AI Video Indexer
azure-video-indexer Limited Access Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/limited-access-features.md
Title: Limited Access features of Azure AI Video Indexer
description: This article talks about the limited access features of Azure AI Video Indexer. Last updated 06/17/2022-++ # Limited Access features of Azure AI Video Indexer
azure-video-indexer Logic Apps Connector Arm Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/logic-apps-connector-arm-accounts.md
Title: Logic Apps connector with ARM-based AVI accounts description: This article shows how to unlock new experiences and monetization opportunities Azure AI Video Indexer connectors with Logic App and Power Automate with AVI ARM accounts.- Last updated 11/16/2022++ # Logic Apps connector with ARM-based AVI accounts
azure-video-indexer Logic Apps Connector Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/logic-apps-connector-tutorial.md
Title: The Azure AI Video Indexer connectors with Logic App and Power Automate. description: This tutorial shows how to unlock new experiences and monetization opportunities Azure AI Video Indexer connectors with Logic App and Power Automate.- Last updated 09/21/2020++ # Use Azure AI Video Indexer with Logic App and Power Automate
azure-video-indexer Manage Account Connected To Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/manage-account-connected-to-azure.md
Title: Repair the connection to Azure, check errors/warnings
description: Learn how to manage an Azure AI Video Indexer account connected to Azure repair the connection, examine errors/warnings. Last updated 01/14/2021-++ # Repair the connection to Azure, examine errors/warnings
azure-video-indexer Manage Multiple Tenants https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/manage-multiple-tenants.md
description: This article suggests different integration options for managing mu
Last updated 05/15/2019 + # Manage multiple tenants
azure-video-indexer Matched Person https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/matched-person.md
Title: Enable the matched person insight
description: The topic explains how to use a match observed people feature. These are people that are detected in the video with the corresponding faces ("People" insight). Last updated 12/10/2021-++ # Enable the matched person insight (preview)
azure-video-indexer Monitor Video Indexer Data Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/monitor-video-indexer-data-reference.md
Title: Monitoring Azure AI Video Indexer data reference description: Important reference material needed when you monitor Azure AI Video Indexer - - Last updated 04/17/2023++ <!-- VERSION 2.3 Template for monitoring data reference article for Azure services. This article is support for the main "Monitoring [servicename]" article for the service. -->
azure-video-indexer Monitor Video Indexer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/monitor-video-indexer.md
Title: Monitoring Azure AI Video Indexer description: Start here to learn how to monitor Azure AI Video Indexer -- Last updated 12/19/2022++ <!-- VERSION 2.2
azure-video-indexer Multi Language Identification Transcription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/multi-language-identification-transcription.md
Title: Automatically identify and transcribe multi-language content with Azure AI Video Indexer description: This topic demonstrates how to automatically identify and transcribe multi-language content with Azure AI Video Indexer.-- Last updated 09/01/2019-++ # Automatically identify and transcribe multi-language content
azure-video-indexer Named Entities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/named-entities.md
Title: Azure AI Video Indexer named entities extraction overview - description: An introduction to Azure AI Video Indexer named entities extraction component responsibly.--- Last updated 06/15/2022 ++ # Named entities extraction
azure-video-indexer Network Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/network-security.md
description: This article gives an overview of the Azure AI Video Indexer netwo
Last updated 12/19/2022-++ # NSG service tags for Azure AI Video Indexer
azure-video-indexer Observed Matched People https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/observed-matched-people.md
Title: Azure AI Video Indexer observed people tracking & matched faces overview- description: An introduction to Azure AI Video Indexer observed people tracking & matched faces component responsibly.--- Last updated 04/06/2023 ++ # Observed people tracking and matched faces
azure-video-indexer Observed People Featured Clothing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/observed-people-featured-clothing.md
Title: Enable featured clothing of an observed person
description: When indexing a video using Azure AI Video Indexer advanced video settings, you can view the featured clothing of an observed person. Last updated 08/14/2023-++ # Enable featured clothing of an observed person (preview)
azure-video-indexer Observed People Tracking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/observed-people-tracking.md
Title: Track observed people in a video
description: This topic gives an overview of Track observed people in a video concept. Last updated 08/07/2023-++ # Track observed people in a video (preview)
azure-video-indexer Ocr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/ocr.md
Title: Azure AI Video Indexer optical character recognition (OCR) overview description: An introduction to Azure AI Video Indexer optical character recognition (OCR) component responsibly.-- Last updated 06/15/2022 ++ # Optical character recognition (OCR)
When used responsibly and carefully, Azure AI Video Indexer is a valuable tool f
## Learn more about OCR -- [Cognitive Services documentation](/azure/ai-services/computer-vision/overview-ocr)
+- [Azure AI services documentation](/azure/ai-services/computer-vision/overview-ocr)
- [Transparency note](/legal/cognitive-services/computer-vision/ocr-transparency-note) - [Use cases](/legal/cognitive-services/computer-vision/ocr-transparency-note#example-use-cases) - [Capabilities and limitations](/legal/cognitive-services/computer-vision/ocr-characteristics-and-limitations)
azure-video-indexer Odrv Download https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/odrv-download.md
Title: Index videos stored on OneDrive - Azure AI Video Indexer
description: Learn how to index videos stored on OneDrive by using Azure AI Video Indexer. Last updated 12/17/2021++ # Index your videos stored on OneDrive
azure-video-indexer Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/regions.md
Title: Regions in which Azure AI Video Indexer is available description: This article talks about Azure regions in which Azure AI Video Indexer is available.-- Last updated 09/14/2020-++ # Azure regions in which Azure AI Video Indexer exists
azure-video-indexer Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/release-notes.md
description: To stay up-to-date with the most recent developments, this article
Last updated 07/03/2023-++ # Azure AI Video Indexer release notes
azure-video-indexer Resource Health https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/resource-health.md
Title: Diagnose Video Indexer resource issues with Azure Resource Health
description: Learn how to diagnose Video Indexer resource issues with Azure Resource Health. Last updated 05/12/2023++ # Diagnose Video Indexer resource issues with Azure Resource Health
azure-video-indexer Restricted Viewer Role https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/restricted-viewer-role.md
Title: Manage access to an Azure AI Video Indexer account
description: This article talks about Video Indexer restricted viewer built-in role. This role is an account level permission, which allows users to grant restricted access to a specific user or security group. Last updated 12/14/2022++ # Manage access to an Azure AI Video Indexer account
azure-video-indexer Scenes Shots Keyframes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/scenes-shots-keyframes.md
Title: Azure AI Video Indexer scenes, shots, and keyframes
description: This topic gives an overview of the Azure AI Video Indexer scenes, shots, and keyframes. Last updated 06/07/2022-++ # Scenes, shots, and keyframes
azure-video-indexer Slate Detection Insight https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/slate-detection-insight.md
Title: Slate detection insights description: Learn about slate detection insights.-- Last updated 09/20/2022-++ # The slate detection insights (preview)
azure-video-indexer Storage Behind Firewall https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/storage-behind-firewall.md
Title: Use Video Indexer with storage behind firewall
description: This article gives an overview how to configure Azure AI Video Indexer to use storage behind firewall. Last updated 03/21/2023-++ # Configure Video Indexer to work with storage accounts behind firewall
azure-video-indexer Switch Tenants Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/switch-tenants-portal.md
Title: Switch between tenants on the Azure AI Video Indexer website
description: This article shows how to switch between tenants in the Azure AI Video Indexer website. Last updated 01/24/2023++ # Switch between multiple tenants
azure-video-indexer Textless Slate Scene Matching https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/textless-slate-scene-matching.md
Title: Enable and view a textless slate with matching scene description: Learn about how to enable and view a textless slate with matching scene.-- Last updated 09/20/2022-++ # Enable and view a textless slate with matching scene (preview)
azure-video-indexer Topics Inference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/topics-inference.md
Title: Azure AI Video Indexer topics inference overview -
+ Title: Azure AI Video Indexer topics inference overview
description: An introduction to Azure AI Video Indexer topics inference component responsibly.--- Last updated 06/15/2022 ++ # Topics inference
azure-video-indexer Transcription Translation Lid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/transcription-translation-lid.md
Title: Azure AI Video Indexer media transcription, translation and language identification overview -
+ Title: Azure AI Video Indexer media transcription, translation and language identification overview
description: An introduction to Azure AI Video Indexer media transcription, translation and language identification components responsibly.--- Last updated 06/15/2022 ++ # Media transcription, translation and language identification
azure-video-indexer Upload Index Videos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/upload-index-videos.md
Title: Upload and index videos with Azure AI Video Indexer using the Video Index
description: Learn how to upload videos by using Azure AI Video Indexer. Last updated 05/10/2023++ # Upload media files using the Video Indexer website
azure-video-indexer Use Editor Create Project https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/use-editor-create-project.md
Title: Use the Azure AI Video Indexer editor to create projects and add video clips description: This topic demonstrates how to use the Azure AI Video Indexer editor to create projects and add video clips.-- Last updated 11/28/2020-++ # Add video clips to your projects
azure-video-indexer Video Indexer Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/video-indexer-disaster-recovery.md
Title: Azure AI Video Indexer failover and disaster recovery description: Learn how to fail over to a secondary Azure AI Video Indexer account if a regional datacenter failure or disaster occurs.--- - Last updated 07/29/2019-++ # Azure AI Video Indexer failover and disaster recovery
azure-video-indexer Video Indexer Embed Widgets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/video-indexer-embed-widgets.md
Title: Embed Azure AI Video Indexer widgets in your apps
description: Learn how to embed Azure AI Video Indexer widgets in your apps. Last updated 01/10/2023--++ # Embed Azure AI Video Indexer widgets in your apps
azure-video-indexer Video Indexer Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/video-indexer-get-started.md
Title: Sign up for Azure AI Video Indexer and upload your first video - Azure
description: Learn how to sign up and upload your first video using the Azure AI Video Indexer website. Last updated 08/24/2022- ++ # Quickstart: How to sign up and upload your first video
azure-video-indexer Video Indexer Output Json V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/video-indexer-output-json-v2.md
Title: Examine the Azure AI Video Indexer output description: This article examines the Azure AI Video Indexer output produced by the Get Video Index API.-- Last updated 08/02/2023-++ # Examine the Azure AI Video Indexer output
Videos that contain adult or racy content might be available for private view on
## Learn more about visualContentModeration -- [Cognitive services documentation](/azure/ai-services/computer-vision/concept-detecting-adult-content)
+- [Azure AI services documentation](/azure/ai-services/computer-vision/concept-detecting-adult-content)
- [Transparency note](/legal/cognitive-services/computer-vision/imageanalysis-transparency-note?context=%2Fazure%2Fai-services%2Fcomputer-vision%2Fcontext%2Fcontext#features) - [Use cases](/legal/cognitive-services/computer-vision/imageanalysis-transparency-note?context=%2Fazure%2Fai-services%2Fcomputer-vision%2Fcontext%2Fcontext#use-cases) - [Capabilities and limitations](/legal/cognitive-services/computer-vision/imageanalysis-transparency-note?context=%2Fazure%2Fai-services%2Fcomputer-vision%2Fcontext%2Fcontext#system-performance-and-limitations-for-image-analysis)
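For readers working with the raw index, the following is a minimal Python sketch that pulls the visual content moderation block out of the JSON returned by the Get Video Index API. Treat the field names (`videos`, `insights`, `visualContentModeration`, `adultScore`, `racyScore`, `instances`), the placeholder file path, and the 0.5 thresholds as assumptions to verify against your own output file.

```python
import json

# Load an index previously saved from the Get Video Index API response.
# "my-video-index.json" is a placeholder path, not a real artifact.
with open("my-video-index.json", encoding="utf-8") as f:
    index = json.load(f)

# Walk each video's insights; visualContentModeration is assumed to be a list of
# entries carrying adultScore/racyScore plus the time ranges where they occur.
for video in index.get("videos", []):
    for item in video.get("insights", {}).get("visualContentModeration", []):
        adult = item.get("adultScore", 0.0)
        racy = item.get("racyScore", 0.0)
        if adult > 0.5 or racy > 0.5:  # illustrative thresholds, not official ones
            for instance in item.get("instances", []):
                print(f"flagged {instance.get('start')} to {instance.get('end')} "
                      f"(adult={adult:.2f}, racy={racy:.2f})")
```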
Videos that contain adult or racy content might be available for private view on
##### Learn more about textualContentModeration -- [Cognitive Services documentation](/azure/ai-services/content-moderator/text-moderation-api)
+- [Azure AI services documentation](/azure/ai-services/content-moderator/text-moderation-api)
- [Supported languages](/azure/ai-services/content-moderator/language-support) - [Capabilities and limitations](/azure/ai-services/content-moderator/text-moderation-api) - [Data, privacy and security](/azure/ai-services/content-moderator/overview#data-privacy-and-security)
Azure AI Video Indexer makes an inference of main topics from transcripts. When
Explore the [Azure AI Video Indexer API developer portal](https://api-portal.videoindexer.ai). For information about how to embed widgets in your application, see [Embed Azure AI Video Indexer widgets into your applications](video-indexer-embed-widgets.md). -
azure-video-indexer Video Indexer Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/video-indexer-overview.md
Title: What is Azure AI Video Indexer?
description: This article gives an overview of the Azure AI Video Indexer service. Last updated 08/02/2023-++ # What is Azure AI Video Indexer?
azure-video-indexer Video Indexer Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/video-indexer-search.md
Title: Search for exact moments in videos with Azure AI Video Indexer
description: Learn how to search for exact moments in videos using Azure AI Video Indexer. Last updated 11/23/2019-++ # Search for exact moments in videos with Azure AI Video Indexer
azure-video-indexer Video Indexer Use Apis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/video-indexer-use-apis.md
description: This article describes how to get started with Azure AI Video Index
Last updated 07/03/2023 ++ # Tutorial: Use the Azure AI Video Indexer API
azure-video-indexer Video Indexer View Edit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/video-indexer-view-edit.md
Title: View Azure AI Video Indexer insights description: This article demonstrates how to view Azure AI Video Indexer insights.-- Last updated 04/12/2023-++ # View Azure AI Video Indexer insights
azure-video-indexer View Closed Captions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/view-closed-captions.md
Title: View closed captions
description: Learn how to view captions using the Azure AI Video Indexer website. Last updated 10/24/2022++ # View closed captions in the Azure AI Video Indexer website
azure-vmware Azure Vmware Solution Platform Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/azure-vmware-solution-platform-updates.md
description: Learn about the platform updates to Azure VMware Solution.
Previously updated : 4/24/2023 Last updated : 8/30/2023 # What's new in Azure VMware Solution Microsoft will regularly apply important updates to the Azure VMware Solution for new features and software lifecycle management. You'll receive a notification through Azure Service Health that includes the timeline of the maintenance. For more information, see [Host maintenance and lifecycle management](concepts-private-clouds-clusters.md#host-maintenance-and-lifecycle-management).
+## August 2023
+
+**Available in 30 Azure Regions**
+
+Azure VMware Solution is now available in 30 Azure regions. [Learn more](https://azure.microsoft.com/explore/global-infrastructure/products-by-region/?products=azure-vmware&rar=true&regions=all)
+
+**Pure Cloud Block Store (preview)**
+
+Pure Cloud Block Store for Azure VMware Solution is now in public preview. Customers can use Pure Cloud Block Store from Pure Storage to scale compute and storage independently for storage-heavy workloads. With Pure Cloud Block Store, customers can right-size their storage and achieve sizeable savings in the process. [Learn more](ecosystem-external-storage-solutions.md)
+
+**Azure Arc-enabled VMware vSphere (preview)**
+
+Azure Arc-enabled VMware vSphere has a new refresh for the public preview. Now customers can start their onboarding with Azure Arc-enabled VMware vSphere, install agents at scale, and enable Azure management, observability, and security solutions, while benefitting from the existing lifecycle management capabilities. Azure Arc-enabled VMware vSphere VMs will now show up alongside other Azure Arc-enabled servers under the 'Machines' view in the Azure portal. [Learn more](https://learn.microsoft.com/azure/azure-arc/vmware-vsphere/overview)
+
+**VMware Cloud Director Service**
+
+VMware Cloud Director service for Azure VMware Solution is now available for enterprise. VMware Cloud Director service provides a multi-cloud control plane for managing multi-tenancy on infrastructure spanning on-premises customer data centers, managed service provider facilities, and the cloud. [Learn more](https://blogs.vmware.com/cloud/2023/08/15/cloud-director-service-ga-for-avs/)
+
+**Stretched Clusters Generally Available**
+
+Stretched Clusters for Azure VMware Solution is now available and provides 99.99 percent uptime for mission-critical applications that require the highest availability. If an availability zone fails, your virtual machines (VMs) and applications automatically fail over to an unaffected availability zone with no application impact. [Learn more](deploy-vsan-stretched-clusters.md)
+
+**Well-Architected Assessment Tool**
+
+Azure VMware Solution Well-Architected Assessment Tool is now available. Based upon the Microsoft Azure Well-Architected Framework, the assessment tool methodically checks how your workloads align with best practices for resiliency, security, efficiency, and cost optimization. [Learn more](https://aka.ms/avswafdocs)
+
+**VMware Cloud Universal**
+
+VMware Cloud Universal now includes Azure VMware Solution. [Learn more](https://blogs.vmware.com/cloud/2023/07/06/avs-with-vmcu-announcement/)
+
+**Updated cloudadmin Permissions**
+
+Customers using the cloudadmin@vsphere.local credentials with the vSphere Client now have read-only access to the Management Resource Pool that contains the management and control plane of Azure VMware Solution (vCenter Server, NSX-T Data Center, HCX Manager, SRM Manager).
+ ## May 2023 **Azure VMware Solution in Azure Gov** Azure VMware Solution became generally available on May 17, 2023, to US Federal and State and Local Government (US) customers and their partners, in the regions of Arizona and Virginia. With this release, we combine world-class Azure infrastructure with VMware technologies by offering Azure VMware Solution on Azure Government, which is designed, built, and supported by Microsoft.
-
**New Azure VMware Solution Region: Qatar** We're excited to announce that Azure VMware Solution has gone live in Qatar Central and is now available to customers.
All new Azure VMware Solution private clouds are being deployed with VMware NSX-
VMware HCX Enterprise is now available and supported on Azure VMware Solution at no extra cost. VMware HCX Enterprise brings valuable [services](https://docs.vmware.com/en/VMware-HCX/4.5/hcx-user-guide/GUID-32AF32BD-DE0B-4441-95B3-DF6A27733EED.html), like Replication Assisted vMotion (RAV) and Mobility Optimized Networking (MON). VMware HCX Enterprise is now automatically installed for all new VMware HCX add-on requests, and existing VMware HCX Advanced customers can upgrade to VMware HCX Enterprise using the Azure portal. Learn more on how to [Install and activate VMware HCX in Azure VMware Solution](install-vmware-hcx.md).
-**Azure Log analytics - monitor Azure VMware Solution**
+**Azure Log Analytics - Monitor Azure VMware Solution**
The data in Azure Log Analytics offers insights into issues, which you can investigate by searching with Kusto Query Language (KQL).
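As a hedged illustration of that workflow, the sketch below runs a KQL query against a workspace using the `azure-monitor-query` and `azure-identity` Python packages. The workspace ID and the `AVSSyslog` table name are placeholders: swap in your own workspace and whichever tables your diagnostic settings actually emit.

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

client = LogsQueryClient(DefaultAzureCredential())

workspace_id = "<log-analytics-workspace-id>"  # placeholder
# Placeholder table name; use whichever tables your diagnostic settings emit.
query = """
AVSSyslog
| where TimeGenerated > ago(1h)
| summarize Events = count() by HostName
| order by Events desc
"""

# Error and partial-result handling omitted for brevity.
response = client.query_workspace(workspace_id, query, timespan=timedelta(hours=1))
for table in response.tables:
    for row in table.rows:
        print(row)
```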
You can use customer-managed keys to bring and manage your master encryption key
You can use Azure NetApp Files volumes as a file share for Azure VMware Solution workloads using Network File System (NFS) or Server Message Block (SMB).
-**Stretched clusters - increase uptime with Stretched Clusters (Preview)**
+**Stretched Clusters - increase uptime with Stretched Clusters (Preview)**
Stretched clusters for Azure VMware Solution provide 99.99% uptime for mission-critical applications that require the highest availability.
For more information, see [Azure Migration and Modernization blog](https://techc
## January 2023
-Starting January 2023, all new Azure VMware Solution private clouds are being deployed with Microsoft signed TLS certificate for vCenter and NSX.
+Starting January 2023, all new Azure VMware Solution private clouds are being deployed with a Microsoft-signed TLS certificate for vCenter Server and NSX-T Data Center.
## November 2022
Documented workarounds for the vSphere stack, as per [VMSA-2021-0002](https://ww
## Post update Once complete, newer versions of VMware solution components will appear. If you notice any issues or have any questions, contact our support team by opening a support ticket.
azure-vmware Bitnami Appliances Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/bitnami-appliances-deployment.md
In this article, you'll learn how to install and configure the following virtual
-## Step 1. Download the Bitnami virtual appliance OVA/OVF file
+## Step 1: Download the Bitnami virtual appliance OVA/OVF file
1. Go to the [VMware Marketplace](https://marketplace.cloud.vmware.com/) and download the virtual appliance you want to install on your Azure VMware Solution private cloud:
In this article, you'll learn how to install and configure the following virtual
>[!NOTE] >Make sure the file is accessible from the virtual machine.
-## Step 2. Access the local vCenter Server of your private cloud
+## Step 2: Access the local vCenter Server of your private cloud
1. Sign in to the [Azure portal](https://portal.azure.com).
In this article, you'll learn how to install and configure the following virtual
:::image type="content" source="media/tutorial-access-private-cloud/ss5-vcenter-login.png" alt-text="Screenshot showing the VMware vSphere sign in page." border="true":::
-## Step 3. Install the Bitnami OVA/OVF file in vCenter Server
+## Step 3: Install the Bitnami OVA/OVF file in vCenter Server
1. Right-click the cluster that you want to install the LAMP virtual appliance and select **Deploy OVF Template**.
In this article, you'll learn how to install and configure the following virtual
-## Step 4. Assign a static IP to the virtual appliance
+## Step 4: Assign a static IP to the virtual appliance
In this step, you'll modify the *bootproto* and *onboot* parameters and assign a static IP address to the Bitnami virtual appliance.
In this step, you'll modify the *bootproto* and *onboot* parameters and assign a
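As a concrete illustration of that change, the sketch below renders an interface config of the kind used by CentOS-style appliances, with *bootproto* and *onboot* set for a static IP. The interface name, file path, and all addresses are assumptions to adapt to your appliance and network.

```python
from pathlib import Path

# Placeholder interface name and addresses; adapt to your environment.
config = """DEVICE=ens160
BOOTPROTO=static
ONBOOT=yes
IPADDR=192.168.100.10
NETMASK=255.255.255.0
GATEWAY=192.168.100.1
DNS1=192.168.100.2
"""

# On the appliance itself this file would live at
# /etc/sysconfig/network-scripts/ifcfg-ens160 (written locally here for illustration).
Path("ifcfg-ens160").write_text(config, encoding="ascii")
print(config)
```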
-## Step 5. Enable SSH access to the virtual appliance
+## Step 5: Enable SSH access to the virtual appliance
In this step, you'll enable SSH on your virtual appliance for remote access control. The SSH service is disabled by default. You'll also use an OpenSSH client to connect to the host console.
azure-vmware Configure Vmware Cloud Director Service Azure Vmware Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/configure-vmware-cloud-director-service-azure-vmware-solution.md
+
+ Title: Configure VMware Cloud Director Service in Azure VMware Solution
+description: How to configure VMware Cloud Director Service in Azure VMware Solution
++++ Last updated : 06/12/2023++
+# Configure VMware Cloud Director Service in Azure VMware Solution
+
+In this article, learn how to configure the [VMware Cloud Director](https://docs.vmware.com/en/VMware-Cloud-Director-service/index.html) service in Azure VMware Solution.
+
+## Prerequisites
+- Plan and deploy a VMware Cloud Director service instance in your preferred region using the process described in [How Do I Create a VMware Cloud Director Instance](https://docs.vmware.com/en/VMware-Cloud-Director-service/services/using-vmware-cloud-director-service/GUID-26D98BA1-CF4B-4A57-971E-E58A0B482EBB.html#GUID-26D98BA1-CF4B-4A57-971E-E58A0B482EBB).
+
+ >[!Note]
+ > VMware Cloud Director instances can establish connections to an Azure VMware Solution SDDC in regions where latency remains under 150 ms. (A rough connectivity check is sketched after this list.)
+
+- Plan and deploy an Azure VMware Solution private cloud using the following links:
+ - [Plan the Azure VMware Solution private cloud SDDC.](plan-private-cloud-deployment.md)
+ - [Deploy and configure Azure VMware Solution - Azure VMware Solution.](deploy-azure-vmware-solution.md)
+- After successfully gaining access to both your VMware Cloud Director instance and Azure VMware Solution SDDC, you can then proceed to the next section.
+
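Given the 150-ms requirement called out in the note above, a quick round-trip spot check from your network to the target region can save an onboarding failure later. The sketch below times TCP handshakes as a rough RTT proxy; the endpoint is a placeholder, and this is an indicative measurement only, not an official latency test.

```python
import socket
import time

# Placeholder endpoint; substitute a host that terminates in the target region.
host, port = "example-endpoint.uksouth.cloudapp.azure.com", 443

samples = []
for _ in range(5):
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=5):
        pass  # the TCP handshake time is a rough round-trip proxy
    samples.append((time.perf_counter() - start) * 1000)

median = sorted(samples)[len(samples) // 2]
print(f"median RTT ~ {median:.1f} ms (needs to stay under 150 ms)")
```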
+## Plan and prepare the Azure VMware Solution private cloud for the VMware Reverse proxy
+
+- The VMware Reverse proxy VM is deployed within the Azure VMware Solution SDDC and requires outbound connectivity to your VMware Cloud Director service instance. [Plan how you'll provide this internet connectivity.](concepts-design-public-internet-access.md)
+
+- A Public IP on the NSX-T Edge can be used to provide outbound access for the VMware Reverse proxy VM, as shown in this article. Learn more in [How to configure a public IP in the Azure portal](enable-public-ip-nsx-edge.md#configure-a-public-ip-in-the-azure-portal) and [Outbound Internet access for VMs](enable-public-ip-nsx-edge.md#outbound-internet-access-for-vms).
+
+- VMware Reverse proxy can acquire an IP address through either DHCP or manual IP configuration.
+- Optionally create a dedicated Tier-1 router for the reverse proxy VM segment.
+
+### Prepare your Azure VMware Solution SDDC for deploying VMware Reverse proxy VM OVA
+
+1. Obtain the NSX-T cloudadmin credentials from the Azure portal under VMware credentials. Then, log in to NSX-T Manager.
+1. Create a dedicated Tier-1 router (optional) for VMware Reverse proxy VM.
+ 1. Log in to the Azure VMware Solution NSX-T Manager and select **ADD Tier-1 Gateway**.
+ 1. Provide a name and the linked Tier-0 gateway, and then select **Save**.
+ 1. Configure the appropriate settings under Route Advertisements.
+
+ :::image type="content" source="./media/vmware-cloud-director-service/pic-create-gateway.png" alt-text="Screenshot showing how to create a Tier-1 Gateway." lightbox="./media/vmware-cloud-director-service/pic-create-gateway.png":::
+
+1. Create a segment for the VMware Reverse proxy VM.
+ 1. Log in to the Azure VMware Solution NSX-T Manager and, under segments, select **ADD SEGMENT**.
+ 1. Provide the name, connected gateway, transport zone, and subnet information, and then select **Save**.
+
+ :::image type="content" source="./media/vmware-cloud-director-service/pic-create-reverse-proxy.png" alt-text="Screenshot showing how to create a NSX-T segment for reverse proxy VM." lightbox="./media/vmware-cloud-director-service/pic-create-reverse-proxy.png":::
+
+1. Optionally, enable DHCP on the segment by creating a DHCP profile and setting the DHCP config. You can skip this step if you use static IPs.
+1. Add two NAT rules to provide outbound access so the VMware Reverse proxy VM can reach the VMware Cloud Director service. The rules also let the VM reach the management components of the Azure VMware Solution SDDC, such as vCenter Server and NSX-T, that are deployed in the management plane. (A scripted alternative using the NSX-T Policy API is sketched after these steps.)
+ 1. Create a **NO SNAT** rule.
+ - Provide a name for the rule and select the source IP. You can use CIDR format or a specific IP address.
+ - Under destination, use the private cloud network CIDR.
+ 1. Create an **SNAT** rule.
+ - Provide a name and select the source IP.
+ - Under translated IP, provide a public IP address.
+ - Set the priority of this rule higher than that of the NO SNAT rule.
+ 1. Click **Save**.
+
+ :::image type="content" source="./media/vmware-cloud-director-service/pic-verify-nat-rules.png" alt-text="Screenshot showing how to verify the NAT rules have been created." lightbox="./media/vmware-cloud-director-service/pic-verify-nat-rules.png":::
+
+
+1. On the Tier-1 gateway, ensure NAT is enabled under route advertisement.
+1. Configure gateway firewall rules to enhance security.
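If you'd rather script the two NAT rules than click through NSX-T Manager, the NSX-T Policy REST API exposes them under the Tier-1 gateway. The following is a minimal sketch, not an official procedure: the manager URL, credentials, gateway ID, networks, and sequence numbers are all placeholders, and the request shape should be confirmed against the API reference for your NSX-T version.

```python
import requests

NSX = "https://nsx.example.avs.azure.com"          # placeholder NSX-T Manager URL
AUTH = ("cloudadmin@vsphere.local", "<password>")  # placeholder credentials
TIER1 = "reverse-proxy-t1"                         # placeholder Tier-1 gateway ID

def patch_nat_rule(rule_id: str, body: dict) -> None:
    # User-defined NAT rules live under the Tier-1 gateway in the Policy API.
    url = f"{NSX}/policy/api/v1/infra/tier-1s/{TIER1}/nat/USER/nat-rules/{rule_id}"
    resp = requests.patch(url, json=body, auth=AUTH, verify=False)  # lab only: self-signed cert
    resp.raise_for_status()

# NO SNAT: traffic from the proxy segment to the private cloud CIDR stays untranslated.
# Lower sequence numbers are evaluated first, so this rule wins for internal traffic.
patch_nat_rule("no-snat-private-cloud", {
    "action": "NO_SNAT",
    "source_network": "10.10.10.0/24",     # reverse proxy segment (placeholder)
    "destination_network": "10.0.0.0/22",  # private cloud network CIDR (placeholder)
    "sequence_number": 10,
})

# SNAT: everything else from the segment is translated to the public IP for outbound access.
patch_nat_rule("snat-outbound", {
    "action": "SNAT",
    "source_network": "10.10.10.0/24",
    "translated_network": "203.0.113.10",  # public IP on the NSX-T Edge (placeholder)
    "sequence_number": 20,
})
```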
+
+## Generate and download the VMware Reverse proxy OVA
+
+The following step-by-step procedure shows how to obtain the required information from the Azure portal and how to use it to generate the VMware Reverse proxy VM OVA.
+
+### Prerequisites on VMware cloud service
+
+- Verify that you're assigned the network administrator service role. See [Managing Roles and Permissions](https://docs.vmware.com/en/VMware-Cloud-services/services/Using-VMware-Cloud-Services/GUID-84E54AD5-A53F-416C-AEBE-783927CD66C1.html) and make changes using VMware Cloud Services Console.
+- If you're accessing VMware Cloud Director service through VMware Cloud Partner Navigator, verify that you're a Provider Service Manager user and that you have been assigned the provider:**admin** and provider:**network service** roles.
+- See [How do I change the roles of users in my organization](https://docs.vmware.com/en/VMware-Cloud-Partner-Navigator/services/Cloud-Partner-Navigator-Using-Provider/GUID-BF0ED645-1124-4828-9842-18F5C71019AE.html) in the VMware Cloud Partner Navigator documentation.
+
+### Procedure
+1. Log in to VMware Cloud Director service.
+1. Click **Cloud Director Instances**.
+1. In the card of the VMware Cloud Director instance for which you want to configure a reverse proxy service, click **Actions** > **Generate VMware Reverse Proxy OVA**.
+1. The **Generate VMware Reverse proxy OVA** wizard opens. Fill in the required information.
+1. Enter the network name.
+ - The network name is the name of the NSX-T segment you created in the previous section for the reverse proxy VM.
+1. Enter the required information, such as the vCenter Server FQDN, the management IP for vCenter Server, the NSX-T FQDN or IP, and any more hosts within the SDDC to proxy.
+1. The vCenter Server and NSX-T IP addresses of your Azure VMware Solution private cloud can be found in the **Azure portal** under **Manage** > **VMware credentials**.
+
+ :::image type="content" source="./media/vmware-cloud-director-service/pic-obtain-vmware-credential.png" alt-text="Screenshot showing how to obtain VMware credentials using Azure portal." lightbox="./media/vmware-cloud-director-service/pic-obtain-vmware-credential.png":::
+
+1. To find the FQDN of the vCenter Server in your Azure VMware Solution private cloud, log in to vCenter Server using the VMware credentials provided in the Azure portal.
+1. In the vSphere Client, select the vCenter Server, which displays the FQDN of the vCenter Server.
+1. To obtain the FQDN of NSX-T, replace "vc" with "nsx". The NSX-T FQDN in this example would be "nsx.f31ca07da35f4b42abe08e.uksouth.avs.azure.com". (A tiny derivation sketch follows the screenshot below.)
+
+ :::image type="content" source="./media/vmware-cloud-director-service/pic-vcenter-vmware.png" alt-text="Screenshot showing how to obtain vCenter and NSX-T FQDN in Azure VMware solution private cloud." lightbox="./media/vmware-cloud-director-service/pic-vcenter-vmware.png":::
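Because only the first DNS label changes, deriving the NSX-T FQDN from the vCenter Server FQDN is a one-line transformation. A trivial sketch using the example name above:

```python
vcenter_fqdn = "vc.f31ca07da35f4b42abe08e.uksouth.avs.azure.com"

# Swap only the first DNS label ("vc" -> "nsx"); the rest of the name is unchanged.
labels = vcenter_fqdn.split(".")
nsx_fqdn = ".".join(["nsx"] + labels[1:])
print(nsx_fqdn)  # nsx.f31ca07da35f4b42abe08e.uksouth.avs.azure.com
```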
+
+1. Obtain the ESXi management IP addresses and CIDR for adding IP addresses to the allowlist when generating the reverse proxy VM OVA.
+
+ :::image type="content" source="./media/vmware-cloud-director-service/pic-manage-ip-address.png" alt-text="Screenshot showing how to obtain management IP address and CIDR for ESXi hosts in Azure VMware solution private cloud." lightbox="./media/vmware-cloud-director-service/pic-manage-ip-address.png":::
++
+1. Enter a list of any other IP addresses that VMware Cloud Director must be able to access through the proxy, such as ESXi hosts to use for console proxy connection. Use new lines to separate list entries.
+
+ > [!TIP]
+ > To ensure that future additions of ESXi hosts don't require updates to the allowed targets, use a CIDR notation to enter the ESXi hosts in the allow list. This way, you can provide any new host with an IP address that is already allocated as part of the CIDR block.
+
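To sanity-check that a CIDR entry will keep covering hosts you add later, Python's standard `ipaddress` module is enough; the addresses below are illustrative only.

```python
import ipaddress

# Illustrative allowlist entry covering the ESXi management range.
allowlist_block = ipaddress.ip_network("10.0.0.0/25")

# A host added later from the same block needs no allowlist update.
new_host = ipaddress.ip_address("10.0.0.77")
print(new_host in allowlist_block)  # True: already covered by the CIDR entry
```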
+1. Once you've gathered all the required information, enter it in the VMware Reverse proxy OVA generation wizard, shown in the following screenshot.
+1. Click **Generate VMware Reverse Proxy OVA**.
+
+ :::image type="content" source="./media/vmware-cloud-director-service/pic-reverse-proxy.png" alt-text="Screenshot showing how to generate a reverse proxy VM OVA." lightbox="./media/vmware-cloud-director-service/pic-reverse-proxy.png":::
+
+1. On the **Activity log** tab, locate the task for generating an OVA and check its status. If the status of the task is **Success**, click the vertical ellipsis icon and select **View files**.
+1. Download the reverse proxy OVA.
+
+## Deploy VMware Reverse proxy VM
+1. Transfer the reverse proxy VM OVA you generated in the previous section to a location from which you can access your private cloud.
+1. Deploy the reverse proxy VM using the OVA.
+1. Select the appropriate parameters for OVA deployment for folder, compute resources, and storage.
+ - For network, select the appropriate segment for the reverse proxy.
+ - Under customize template, use DHCP or provide a static IP if you aren't planning to use DHCP.
+ - Enable SSH to log in to the reverse proxy VM.
+ - Provide the root password.
+1. Once the VM is deployed, power it on and then log in using the root credentials provided during OVA deployment.
+1. Log in to the VMware Reverse proxy VM and use the command **transporter-status.sh** to verify that the connection between the CDs instance and the Transporter VM is established.
+ - The status should indicate "UP." The command channel should display "Connected," and the allowed targets should be listed as "reachable."
+1. The next step is to associate the Azure VMware Solution SDDC with the VMware Cloud Director instance.
++
+## Associate the Azure VMware Solution private cloud SDDC with the VMware Cloud Director instance via VMware Reverse proxy
+
+This process pools all the resources from the Azure VMware Solution SDDC and creates a provider virtual datacenter (PVDC) in CDs.
+
+1. Log in to VMware Cloud Director service.
+1. Click **Cloud Director Instances**.
+1. In the card of the VMware Cloud Director instance with which you want to associate your Azure VMware Solution SDDC, select **Actions** and then click **Associate datacenter via VMware reverse proxy**.
+1. Review datacenter information.
+1. Select a proxy network for the reverse proxy appliance to use. Ensure the correct NSX-T segment, where the reverse proxy VM is deployed, is selected.
+
+ :::image type="content" source="./media/vmware-cloud-director-service/pic-proxy-network.png" alt-text="Screenshot showing how to review a proxy network information." lightbox="./media/vmware-cloud-director-service/pic-proxy-network.png":::
+
+6. In the **Data center name** text box, enter a name for the SDDC that you're associating.
+This name is only used to identify the data center in the VMware Cloud Director inventory, so it doesn't need to match the SDDC name entered when you generated the reverse proxy appliance OVA.
+7. Enter the FQDN for your vCenter Server instance.
+8. Enter the URL for the NSX Manager instance and wait for a connection to establish.
+9. Click **Next**.
+10. Under **Credentials**, enter your user name and password for the vCenter Server endpoint.
+11. Enter your user name and password for NSX Manager.
+12. To create infrastructure resources for your VMware Cloud Director instance, such as a network pool, an external network and a provider VDC, select **Create Infrastructure**.
+13. Click **Validate Credentials**. Ensure that validation is successful.
+14. Confirm that you acknowledge the costs associated with your instance, and click **Submit**.
+15. Check the activity log to monitor progress.
+16. Once this process is completed, you should see that your Azure VMware Solution SDDC is securely associated with your VMware Cloud Director instance.
+17. When you open the VMware Cloud Director instance, the vCenter Server and the NSX Manager instances that you associated are visible in Infrastructure Resources.
+
+ :::image type="content" source="./media/vmware-cloud-director-service/pic-connect-vcenter-server.png" alt-text="Screenshot showing how the vcenter server is connected and enabled." lightbox="./media/vmware-cloud-director-service/pic-connect-vcenter-server.png":::
+
+18. A newly created Provider VDC is visible in Cloud Resources.
+19. In your Azure VMware Solution private cloud, when logged in to vCenter Server, you see that a resource pool was created as a result of this association.
+
+ :::image type="content" source="./media/vmware-cloud-director-service/pic-resource-pool.png" alt-text="Screenshot showing how resource pools are created for CDs." lightbox="./media/vmware-cloud-director-service/pic-resource-pool.png":::
+
+You can use your VMware Cloud Director instance provider portal to configure tenants, such as organizations and virtual data centers.
+
+## What's next
+
+- Configure tenant networking for VMware Cloud Director service on Azure VMware Solution by following [Enable VMware Cloud Director service with Azure VMware Solution](enable-vmware-cds-with-azure.md).
+
+- Learn more about VMware Cloud Director service in the [VMware Cloud Director Service Documentation](https://docs.vmware.com/en/VMware-Cloud-Director-service/index.html).
+
+- To learn about the Cloud Director service provider admin portal, visit the [VMware Cloud Director™ Service Provider Admin Portal Guide](https://docs.vmware.com/en/VMware-Cloud-Director/10.4/VMware-Cloud-Director-Service-Provider-Admin-Portal-Guide/GUID-F8F4B534-49B2-43B2-AEEE-7BAEE8CE1844.html).
azure-vmware Deploy Arc For Azure Vmware Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/deploy-arc-for-azure-vmware-solution.md
Title: Deploy Arc for Azure VMware Solution (Preview)
description: Learn how to set up and enable Arc for your Azure VMware Solution private cloud. Previously updated : 04/11/2022 Last updated : 08/28/2023
Before you begin checking off the prerequisites, verify the following actions ha
- You deployed an Azure VMware Solution private cluster. - You have a connection to the Azure VMware Solution private cloud through your on-premises environment or your native Azure Virtual Network. -- There should be an isolated NSX-T Data Center segment for deploying the Arc for Azure VMware Solution Open Virtualization Appliance (OVA). If an isolated NSX-T Data Center segment doesn't exist, one will be created.
+- There should be an isolated NSX-T Data Center network segment for deploying the Arc for Azure VMware Solution Open Virtualization Appliance (OVA). If an isolated NSX-T Data Center network segment doesn't exist, one will be created.
## Prerequisites
The following items are needed to ensure you're set up to begin the onboarding p
- Verify that your vCenter Server version is 6.7 or higher. - A resource pool with minimum-free capacity of 16 GB of RAM, 4 vCPUs. - A datastore with minimum 100 GB of free disk space that is available through the resource pool. -- On the vCenter Server, allow inbound connections on TCP port 443, so that the Arc resource bridge and VMware cluster extension can communicate with the vCenter server.
+- On the vCenter Server, allow inbound connections on TCP port 443, so that the Arc resource bridge and VMware vSphere cluster extension can communicate with the vCenter Server. (A quick reachability sketch follows this list.)
- Please validate the regional support before starting the onboarding. Arc for Azure VMware Solution is supported in all regions where Arc for VMware vSphere on-premises is supported. For more details, see [Azure Arc-enabled VMware vSphere](https://learn.microsoft.com/azure/azure-arc/vmware-vsphere/overview). - The firewall and proxy URLs below must be allowlisted in order to enable communication from the management machine, Appliance VM, and Control Plane IP to the required Arc resource bridge URLs. [Azure Arc resource bridge (preview) network requirements](../azure-arc/resource-bridge/network-requirements.md)
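A minimal sketch of that port-443 reachability check, run from the management machine; the vCenter Server FQDN below is a placeholder for your own private cloud's name.

```python
import socket

# Placeholder vCenter Server FQDN for your private cloud.
vcenter = "vc.f31ca07da35f4b42abe08e.uksouth.avs.azure.com"

try:
    with socket.create_connection((vcenter, 443), timeout=5):
        print("TCP 443 reachable; the resource bridge can talk to vCenter Server")
except OSError as err:
    print(f"TCP 443 blocked or unreachable: {err}")
```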
az feature show --name AzureArcForAVS --namespace Microsoft.AVS
## Onboard process to deploy Azure Arc
-Use the following steps to guide you through the process to onboard in Arc for Azure VMware Solution (Preview).
+Use the following steps to guide you through the process to onboard Azure Arc for Azure VMware Solution (Preview).
1. Sign in to the jumpbox VM and extract the contents of the compressed file from the following [location](https://github.com/Azure/ArcOnAVS/releases/latest). The extracted file contains the scripts to install the preview software. 1. Open the 'config_avs.json' file and populate all the variables.
Use the following steps to guide you through the process to onboard in Arc for A
> [!IMPORTANT] > You can't create the resources in a separate resource group. Make sure you use the same resource group from where the Azure VMware Solution private cloud was created to create the resources.
-## Discover and project your VMware infrastructure resources to Azure
+## Discover and project your VMware vSphere infrastructure resources to Azure
When Arc appliance is successfully deployed on your private cloud, you can do the following actions.
After the private cloud is Arc-enabled, vCenter resources should appear under **
### Manage access to VMware resources through Azure Role-Based Access Control
-After your Azure VMware Solution vCenter resources have been enabled for access through Azure, there's one final step in setting up a self-service experience for your teams. You'll need to provide your teams with access to: compute, storage, networking, and other vCenter Server resources used to configure VMs.
+After your Azure VMware Solution vCenter Server resources have been enabled for access through Azure, there's one final step in setting up a self-service experience for your teams. You'll need to provide your teams with access to: compute, storage, networking, and other vCenter Server resources used to configure VMs.
This section will demonstrate how to use custom roles to manage granular access to VMware vSphere resources through Azure.
When the extension installation steps are completed, they trigger deployment and
## Change Arc appliance credential
-When **cloud admin** credentials are updated, use the following steps to update the credentials in the appliance store.
+When **cloudadmin** credentials are updated, use the following steps to update the credentials in the appliance store.
1. Log in to the jumpbox VM from where onboarding was performed. Change the directory to **onboarding directory**. 1. Run the following command for Windows-based jumpbox VM.
When you activate Arc-enabled Azure VMware Solution resources in Azure, a repres
1. Repeat steps 2, 3 and 4 for **Resourcespools/clusters/hosts**, **Templates**, **Networks**, and **Datastores**. 1. When the deletion completes, select **Overview**. 1. Note the Custom location and the Azure Arc Resource bridge resources in the Essentials section.
-1. Select **Remove from Azure** to remove the vCenter resource from Azure.
+1. Select **Remove from Azure** to remove the vCenter Server resource from Azure.
1. Go to vCenter Server resource in Azure and delete it. 1. Go to the Custom location resource and select **Delete**. 1. Go to the Azure Arc Resource bridge resources and select **Delete**.
At this point, all of your Arc-enabled VMware vSphere resources have been remove
## Delete Arc resources from vCenter Server
-For the final step, you'll need to delete the resource bridge VM and the VM template that were created during the onboarding process. Login to vCenter and delete resource bridge VM and the VM template from inside the arc-folder. Once that step is done, Arc won't work on the Azure VMware Solution SDDC. When you delete Arc resources from vCenter, it won't affect the Azure VMware Solution private cloud for the customer.
+For the final step, you'll need to delete the resource bridge VM and the VM template that were created during the onboarding process. Log in to vCenter Server and delete the resource bridge VM and the VM template from inside the arc-folder. Once that step is done, Arc won't work on the Azure VMware Solution private cloud. When you delete Arc resources from vCenter Server, it won't affect the Azure VMware Solution private cloud for the customer.
## Preview FAQ
Arc for Azure VMware Solution is supported in all regions where Arc for VMware v
Standard support process for Azure VMware Solution has been enabled to support customers.
-**Does Arc for Azure VMware Solution support private end point?**
+**Does Arc for Azure VMware Solution support private endpoint?**
-Yes. Arc for Azure VMware Solution will support private end point for general audience. However, it's not currently supported.
+Private endpoint is currently not supported.
**Is enabling internet the only option to enable Arc for Azure VMware Solution?**
-Yes
+Yes, the Azure VMware Solution private cloud and jumpbox VM must have internet access for Arc to function.
**Is DHCP support available?**
-DHCP support isn't available to customers at this time, we only support static IP.
-
->[!NOTE]
-> This is Azure VMware Solution 2.0 only. It's not available for Azure VMware Solution by Cloudsimple.
+DHCP support isn't available to customers at this time; only static IP addresses are supported.
## Debugging tips for known issues
Use the following tips as a self-help guide.
**I'm unable to install extensions on my virtual machine.** - Check that **guest management** has been successfully installed.-- **VMtools** should be installed on the VM.
+- **VMware Tools** should be installed on the VM.
**I'm facing network-related issues during onboarding.**
azure-vmware Deploy Vsan Stretched Clusters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/deploy-vsan-stretched-clusters.md
Title: Deploy vSAN stretched clusters
description: Learn how to deploy vSAN stretched clusters. Previously updated : 06/24/2023 Last updated : 08/16/2023
It should be noted that these types of failures, although rare, fall outside the
Azure VMware Solution stretched clusters are available in the following regions: - UK South (on AV36) -- West Europe (on AV36)
+- West Europe (on AV36 and AV36P)
- Germany West Central (on AV36) - Australia East (on AV36P)
No. A stretched cluster is created between two availability zones, while the thi
### What are the limitations I should be aware of? - Once a private cloud has been created with a stretched cluster, it can't be changed to a standard cluster private cloud. Similarly, a standard cluster private cloud can't be changed to a stretched cluster private cloud after creation.-- Scale out and scale-in of stretched clusters can only happen in pairs. A minimum of 6 nodes and a maximum of 16 nodes are supported in a stretched cluster environment.
+- Scale out and scale-in of stretched clusters can only happen in pairs. A minimum of 6 nodes and a maximum of 16 nodes are supported in a stretched cluster environment. For more details, refer to [Azure subscription and service limits, quotas, and constraints](https://learn.microsoft.com/azure/azure-resource-manager/management/azure-subscription-service-limits#azure-vmware-solution-limits).
- Customer workload VMs are restarted with a medium vSphere HA priority. Management VMs have the highest restart priority. - The solution relies on vSphere HA and vSAN for restarts and replication. Recovery time objective (RTO) is determined by the amount of time it takes vSphere HA to restart a VM on the surviving AZ after the failure of a single AZ. - Currently not supported in a stretched cluster environment:
azure-vmware Ecosystem External Storage Solutions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/ecosystem-external-storage-solutions.md
+
+ Title: External storage solutions for Azure VMware Solution (preview)
+description: Learn about external storage solutions for Azure VMware Solution private cloud.
++++ Last updated : 08/07/2023
+
+
+# External storage solutions (preview)
+
+> [!NOTE]
+> By using Pure Cloud Block Store, you agree to the following [Microsoft supplemental Terms of Use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). It is advised NOT to run production workloads with preview features.
+
+## External storage solutions for Azure VMware Solution (preview)
+
+Azure VMware Solution is a Hyperconverged Infrastructure (HCI) service that offers VMware vSAN as the primary storage option. However, a significant requirement in on-premises VMware deployments is external storage, especially block storage. Providing the same consistent external block storage architecture in the cloud is crucial for some customers, and some workloads can't be migrated or deployed to the cloud without it. Because a key principle of Azure VMware Solution is to enable customers to continue to use their investments and their favorite VMware solutions running on Azure, we engaged storage providers with similar goals.
+
+Pure Cloud Block Store, offered by Pure Storage, is one such solution. It helps bridge the gap by allowing customers to provision external block storage as needed to make full use of an Azure VMware Solution deployment without the need to scale out compute resources, while helping customers migrate their on-premises workloads to Azure. Pure Cloud Block Store is a 100% software-delivered product running entirely on native Azure infrastructure that brings all the relevant Purity features and capabilities to Azure.
+
+## Onboarding and support
+
+During preview, Pure Storage manages onboarding of Pure Cloud Block Store for Azure VMware Solution. You can join the preview by emailing [avs@purestorage.com](mailto:avs@purestorage.com). Because Pure Cloud Block Store is a customer-deployed and customer-managed solution, reach out to Pure Storage for customer support.
+
+For more information, see the following resources:
+
+- [Azure VMware Solution + CBS Implementation Guide](https://support.purestorage.com/Pure_Cloud_Block_Store/Azure_VMware_Solution_and_Cloud_Block_Store_Implementation_Guide)
+- [CBS Deployment Guide](https://support.purestorage.com/Pure_Cloud_Block_Store/Pure_Cloud_Block_Store_on_Azure_Implementation_Guide)
+- [Troubleshooting CBS Deployment](https://support.purestorage.com/Pure_Cloud_Block_Store/Pure_Cloud_Block_Store_on_Azure_-_Troubleshooting_Guide)
+- [Videos](https://support.purestorage.com/Pure_Cloud_Block_Store/Azure_VMware_Solution_and_Cloud_Block_Store_Video_Demos)
azure-vmware Enable Public Ip Nsx Edge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/enable-public-ip-nsx-edge.md
Title: Enable Public IP on the NSX-T Data Center Edge for Azure VMware Solution
description: This article shows how to enable internet access for your Azure VMware Solution. Previously updated : 7/6/2023 Last updated : 8/18/2023
With this capability, you have the following features:
## Reference architecture The architecture shows internet access to and from your Azure VMware Solution private cloud using a Public IP directly to the NSX-T Data Center Edge. >[!IMPORTANT] >The use of a Public IP down to the NSX-T Data Center Edge isn't compatible with reverse DNS lookup, which means hosting a mail server in Azure VMware Solution isn't supported.
azure-vmware Enable Sql Azure Hybrid Benefit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/enable-sql-azure-hybrid-benefit.md
Title: Enable SQL Azure hybrid benefit for Azure VMware Solution
-description: This article shows you how to apply SQL Azure hybrid benefits to your Azure VMware Solution private cloud by configuring a placement policy.
+ Title: Enable Azure Hybrid Benefit for SQL Server in Azure VMware Solution
++
+description: This article shows you how to apply Azure Hybrid Benefit for SQL Server to your Azure VMware Solution private cloud by configuring a placement policy.
Previously updated : 02/14/2023 Last updated : 09/01/2023
-# Enable SQL Azure hybrid benefit for Azure VMware Solution
+# Enable Azure Hybrid Benefit for SQL Server in Azure VMware Solution
-In this article, you'll learn how to configure SQL Azure hybrid benefits to an Azure VMware Solution private cloud by configuring a placement policy. The placement policy defines the hosts that are running SQL as well as the virtual machines on that host.
->[!IMPORTANT]
-> It is important to note that SQL benefits are applied at the host level.
+In this article, you'll learn how to configure Azure Hybrid Benefit for SQL Server in an Azure VMware Solution private cloud by configuring a placement policy.
+The placement policy defines the hosts that are running SQL Server as well as the virtual machines on that host.
-For example, if each host in Azure VMware Solution has 36 cores and you signal that two hosts run SQL, then SQL Azure hybrid benefit will apply to 72 cores irrespective of the number of SQL or other virtual machines on that host.
+> [!IMPORTANT]
+> It is important to note that SQL Server benefits are applied at the host level.
-You can also choose to view a video tutorial for configuring SQL Azure hybrid benefits for Azure VMware Solution [here](https://www.youtube.com/watch?v=vJIQ1K2KTa0).
+For example, if each host in Azure VMware Solution has 36 cores and you intend to have two hosts run SQL Server, then Azure Hybrid Benefit applies to 72 cores, regardless of how many SQL Server instances or other virtual machines run on those hosts.
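The arithmetic is worth making explicit; this small sketch just restates the example above (36-core hosts, two hosts in the placement policy):

```python
# The benefit is applied per host in the placement policy, not per SQL Server VM.
cores_per_host = 36
hosts_running_sql = 2

benefit_cores = cores_per_host * hosts_running_sql
print(f"Azure Hybrid Benefit applies to {benefit_cores} cores")  # 72
```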
-## Configure host-VM placement policy
-1. From your Azure VMware Solution private cloud, select Azure hybrid benefit, then Create host-VM placement policy.
- :::image type="content" source="media/sql-azure-hybrid-benefit/azure-hybrid-benefit.png" alt-text="Diagram that shows how to create a host new virtual machine placement policy.":::
+[View a video tutorial for configuring Azure Hybrid Benefit for SQL Server in Azure VMware Solution](https://www.youtube.com/watch?v=vJIQ1K2KTa0)
-1. Fill in the required fields for creating the placement policy.
- 1. **Name** – Select the name that identifies this policy.
+## Configure host-VM placement policy
+
+1. From your Azure VMware Solution private cloud, select **Azure Hybrid Benefit**, then **Create host-VM placement policy**.
+
+ :::image type="content" source="media/sql-azure-hybrid-benefit/azure-hybrid-benefit.png" alt-text="Diagram that shows how to create a host new virtual machine placement policy.":::
+
+1. Fill in the required fields for creating the placement policy.
+ 1. **Name** – Select the name that identifies this policy.
2. **Type** – Select the type of policy. This type must be a VM-Host affinity rule only.
- 3. **Azure hybrid benefit** – Select the checkbox to apply the SQL Azure hybrid benefit.
+ 3. **Azure Hybrid Benefit** – Select the checkbox to apply the Azure Hybrid Benefit for SQL Server.
4. **Cluster** – Select the correct cluster. The policy is scoped to hosts in this cluster only.
- 1. **Enabled** – Select enabled to apply the policy immediately once created.
-
+ 5. **Enabled** – Select **Enabled** to apply the policy immediately once created.
:::image type="content" source="media/sql-azure-hybrid-benefit/create-placement-policy.png" alt-text="Diagram that shows how to create a host virtual machine placement policy using the host VM affinity.":::
-3. Select the hosts and VMs that will be applied to the VM-Host affinity policy.
- 1. **Add Hosts** – Select the hosts that will be running SQL. When hosts are replaced, policies are re-created on the new hosts automatically.
+2. Select the hosts and VMs that will be applied to the VM-Host affinity policy.
+ 1. **Add Hosts** – Select the hosts that will be running SQL Server. When hosts are replaced, policies are re-created on the new hosts automatically.
2. **Add VMs** – Select the VMs that should run on the selected hosts. 3. **Review and Create** the policy. :::image type="content" source="media/sql-azure-hybrid-benefit/select-policy-host.png" alt-text="Diagram that shows how to create a host virtual machine affinity.":::
-## Manage placement policies
-
-After creating the placement policy, you can review, manage, or edit the policy by way of the Placement policies menu in the Azure VMware Solution private cloud.
+## Manage placement policies
-By checking the Azure hybrid benefit checkbox in the configuration setting, you can enable existing host-VM affinity policies with the SQL Azure hybrid benefit.
+After creating the placement policy, you can review, manage, or edit the policy by way of the Placement policies menu in the Azure VMware Solution private cloud.
+By checking the Azure Hybrid Benefit checkbox in the configuration setting, you can enable existing host-VM affinity policies with the Azure Hybrid Benefit for SQL Server.
-## Next steps
-[Azure Hybrid Benefit](https://azure.microsoft.com/pricing/hybrid-benefit/)
-[Attach Azure NetApp Files datastores to Azure VMware Solution hosts](attach-azure-netapp-files-to-azure-vmware-solution-hosts.md)
+## More information
+[Azure Hybrid Benefit](https://azure.microsoft.com/pricing/hybrid-benefit/)
azure-vmware Enable Vmware Cds With Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/enable-vmware-cds-with-azure.md
Title: Enable VMware Cloud Director service with Azure VMware Solution (Public Preview)
+ Title: Enable VMware Cloud Director service with Azure VMware Solution
description: This article explains how to use Azure VMware Solution to enable enterprise customers to use Azure VMware Solution for private clouds underlying resources for virtual datacenters. Last updated 08/30/2022
-# Enable VMware Cloud Director service with Azure VMware Solution (Preview)
+# Enable VMware Cloud Director service with Azure VMware Solution
[VMware Cloud Director service (CDs)](https://docs.vmware.com/en/VMware-Cloud-Director-service/services/getting-started-with-vmware-cloud-director-service/GUID-149EF3CD-700A-4B9F-B58B-8EA5776A7A92.html) with Azure VMware Solution enables enterprise customers to use APIs or the Cloud Director services portal to self-service provision and manage virtual datacenters through multi-tenancy with reduced time and complexity.
VMware Cloud Director Availability can be used to migrate VMware Cloud Director
For more information about VMware Cloud Director Availability, see [VMware Cloud Director Availability | Disaster Recovery & Migration](https://www.vmware.com/products/cloud-director-availability.html) ## FAQs
-**Question**: What are the supported Azure regions for the VMware Cloud Director service?
-**Answer**: This offering is supported in all Azure regions where Azure VMware Solution is available except for Brazil South and South Africa. Ensure that the region you wish to connect to VMware Cloud Director service is within a 150-milliseconds round trip time for latency with VMware Cloud Director service.
+### What are the supported Azure regions for the VMware Cloud Director service?
-**Question**: How do I configure VMware Cloud Director service on Microsoft Azure VMware Solutions?
+This offering is supported in all Azure regions where Azure VMware Solution is available except for Brazil South and South Africa. Ensure that the region you wish to connect to VMware Cloud Director service is within a 150-milliseconds round trip time for latency with VMware Cloud Director service.
+
+### How do I configure VMware Cloud Director service on Microsoft Azure VMware Solutions?
+
+[Learn about how to configure CDs on Azure VMware Solutions](https://docs.vmware.com/en/VMware-Cloud-Director-service/services/using-vmware-cloud-director-service/GUID-602DE9DD-E7F6-4114-BD89-347F9720A831.html)
+
+### How is VMware Cloud Director service supported?
+
+VMware Cloud Director service (CDs) is a VMware-owned and supported product connected to Azure VMware Solution. For any support queries on CDs, contact VMware support for assistance. Both VMware and Microsoft support teams collaborate as necessary to address and resolve Cloud Director service issues within Azure VMware Solution.
-**Answer** [Learn about how to configure CDs on Azure VMware Solutions](https://docs.vmware.com/en/VMware-Cloud-Director-service/services/using-vmware-cloud-director-service/GUID-602DE9DD-E7F6-4114-BD89-347F9720A831.html)
## Next steps [VMware Cloud Director Service Documentation](https://docs.vmware.com/en/VMware-Cloud-Director-service/index.html) [Migration to Azure VMware Solutions with Cloud Director service](https://cloudsolutions.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/docs/migration-to-azure-vmware-solution-with-cloud-director-service.pdf)
azure-vmware Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/introduction.md
The following table provides a detailed list of roles and responsibilities betwe
| -- | - | | Microsoft - Azure VMware Solution | Physical infrastructure<ul><li>Azure regions</li><li>Azure availability zones</li><li>Express Route/Global Reach</ul></li>Compute/Network/Storage<ul><li>Rack and power Bare Metal hosts</li><li>Rack and power network equipment</ul></li>Software defined Data Center (SDDC) deploy/lifecycle<ul><li>VMware ESXi deploy, patch, and upgrade</li><li>VMware vCenter Servers deploy, patch, and upgrade</li><li>VMware NSX-T Data Centers deploy, patch, and upgrade</li><li>VMware vSAN deploy, patch, and upgrade</ul></li>SDDC Networking - VMware NSX-T Data Center provider config<ul><li>Microsoft Edge node/cluster, VMware NSX-T Data Center host preparation</li><li>Provider Tier-0 and Tenant Tier-1 Gateway</li><li>Connectivity from Tier-0 (using BGP) to Azure Network via Express Route</ul></li>SDDC Compute - VMware vCenter Server provider config<ul><li>Create default cluster</li><li>Configure virtual networking for vMotion, Management, vSAN, and others</ul></li>SDDC backup/restore<ul><li>Backup and restore VMware vCenter Server</li><li>Backup and restore VMware NSX-T Data Center NSX-T Manager</ul></li>SDDC health monitoring and corrective actions, for example: replace failed hosts</br><br>(optional) VMware HCX deploys with fully configured compute profile on cloud side as add-on</br><br>(optional) SRM deploys, upgrade, and scale up/down</br><br>Support - SDDC platforms and VMware HCX | | Customer | Request Azure VMware Solution host quote with Microsoft<br>Plan and create a request for SDDCs on Azure portal with:<ul><li>Host count</li><li>Management network range</li><li>Other information</ul></li>Configure SDDC network and security (VMware NSX-T Data Center)<ul><li>Network segments to host applications</li><li>Additional Tier -1 routers</li><li>Firewall</li><li>VMware NSX-T Data Center LB</li><li>IPsec VPN</li><li>NAT</li><li>Public IP addresses</li><li>Distributed firewall/gateway firewall</li><li>Network extension using VMware HCX or VMware NSX-T Data Center</li><li>AD/LDAP config for RBAC</ul></li>Configure SDDC - VMware vCenter Server<ul><li>AD/LDAP config for RBAC</li><li>Deploy and lifecycle management of Virtual Machines (VMs) and application<ul><li>Install operating systems</li><li>Patch operating systems</li><li>Install antivirus software</li><li>Install backup software</li><li>Install configuration management software</li><li>Install application components</li><li>VM networking using VMware NSX-T Data Center segments</ul></li><li>Migrate Virtual Machines (VMs)<ul><li>VMware HCX configuration</li><li>Live vMotion</li><li>Cold migration</li><li>Content library sync</ul></li></ul></li>Configure SDDC - vSAN<ul><li>Define and maintain vSAN VM policies</li><li>Add hosts to maintain adequate 'slack space'</ul></li>Configure VMware HCX<ul><li>Download and deploy HCA connector OVA in on-premises</li><li>Pairing on-premises VMware HCX connector</li><li>Configure the network profile, compute profile, and service mesh</li><li>Configure VMware HCX network extension/MON</li><li>Upgrade/updates</ul></li>Network configuration to connect to on-premises, VNET, or internet</br><br>Add or delete hosts requests to cluster from Portal</br><br>Deploy/lifecycle management of partner (third party) solutions |
-| Partner ecosystem | Support for their product/solution. For reference, the following are some of the supported Azure VMware Solution partner solution/product:<ul><li>BCDR - SRM, JetStream, Zerto, and others</li><li>Backup - Veeam, Commvault, Rubrik, and others</li><li>VDI - Horizon/Citrix</li><li>Security solutions - BitDefender, TrendMicro, Checkpoint</li><li>Other VMware products - vRA, vROps, AVI |
+| Partner ecosystem | Support for their product/solution. For reference, the following are some of the supported Azure VMware Solution partner solutions/products:<ul><li>BCDR - SRM, JetStream, Zerto, and others</li><li>Backup - Veeam, Commvault, Rubrik, and others</li><li>VDI - Horizon/Citrix</li><li>Multitenancy - VMware Cloud Director service (CDs), VMware Cloud Director Availability (VCDA)</li><li>Security solutions - BitDefender, TrendMicro, Checkpoint</li><li>Other VMware products - vRA, vROps, AVI |
## Next steps
The next step is to learn key [private cloud and cluster concepts](concepts-priv
<!-- LINKS - external --> [concepts-private-clouds-clusters]: ./concepts-private-clouds-clusters.md
azure-vmware Migrate Sql Server Always On Availability Group https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/migrate-sql-server-always-on-availability-group.md
+
+ Title: Migrate Microsoft SQL Server Always On cluster to Azure VMware Solution
+description: Learn how to migrate Microsoft SQL Server Always On cluster to Azure VMware Solution.
++++ Last updated : 3/20/2023++
+# Migrate a SQL Server Always On Availability Group to Azure VMware Solution
+
+In this article, you learn how to migrate a SQL Server Always On Availability Group to Azure VMware Solution. For VMware HCX, you can follow the VMware vMotion migration procedure.
++
+## Prerequisites
+
+These are the prerequisites to migrating your SQL Server instance to Azure VMware Solution.
+
+- Review and record the storage and network configuration of every node in the cluster.
+- Maintain backups of all the SQL Server databases (a hedged sqlcmd backup sketch follows this list).
+- Back up the virtual machine or virtual machines hosting SQL Server.
+- Remove the virtual machine from any VMware vSphere Distributed Resource Scheduler (DRS) groups and rules.
+- VMware HCX must be configured between your on-premises datacenter and the Azure VMware Solution private cloud that runs the migrated workloads. For more information on how to configure HCX, see [Azure VMware Solution documentation](install-vmware-hcx.md).
+- Ensure that all the network segments in use by SQL Server and workloads using it are extended into your Azure VMware Solution private cloud. To verify this step, see [Configure VMware HCX network extension](configure-hcx-network-extension.md).
+
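+For the database backups called out above, the following is a minimal, hedged sqlcmd sketch rather than the article's prescribed method; the node name `SQLNODE1`, database name `SalesDb`, and backup path are placeholders, and Windows authentication is assumed. Your existing backup tooling is equally valid.
+
+```bash
+# Take a full, checksummed, compressed backup of each availability database before migrating.
+sqlcmd -S SQLNODE1 -Q "BACKUP DATABASE [SalesDb] TO DISK = N'D:\Backup\SalesDb_premigration.bak' WITH CHECKSUM, COMPRESSION, INIT;"
+
+# Verify the backup media is readable without actually restoring it.
+sqlcmd -S SQLNODE1 -Q "RESTORE VERIFYONLY FROM DISK = N'D:\Backup\SalesDb_premigration.bak' WITH CHECKSUM;"
+```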
+VMware HCX over VPN is supported in Azure VMware Solution for workload migration.
+However, due to the size of database workloads, VMware HCX over VPN is not recommended for Microsoft SQL Server Always On FCI or AG migrations of production workloads.
+ExpressRoute connectivity is recommended as more performant and reliable.
+VMware HCX over VPN may be suitable for migrating Microsoft SQL Server standalone instances and non-production workloads, depending on the size of the database.
+
+Microsoft SQL Server (2019 and 2022) was tested with Windows Server (2019 and 2022) Data Center edition with the virtual machines deployed in the on-premises environment. Windows Server and SQL Server have been configured following best practices and recommendations from Microsoft and VMware. The on-premises source infrastructure was VMware vSphere 7.0 Update 3 and VMware vSAN running on Dell PowerEdge servers and Intel Optane P4800X SSD NVMe devices.
+
+## Downtime considerations
+
+Downtime during a migration depends upon the size of the database to be migrated and the speed of the private network connection to Azure cloud.
+While SQL Server Availability Group migrations can be executed with minimal solution downtime, it is optimal to conduct the migration during off-peak hours within a pre-approved change window.
+
+The table below indicates the estimated downtime for migration of each SQL Server topology.
+
+| **Scenario** | **Downtime expected** | **Notes** |
+|:--|:--|:--|
+| **Standalone instance** | Low | Migrated with VMware vMotion; the database is available during the migration, but it isn't recommended to commit any critical data during it. |
+| **Always On Availability Group** | Low | The primary replica is always available during the migration of the first secondary replica, and that secondary replica becomes the primary after the initial failover to Azure. |
+| **Always On Failover Cluster Instance** | High | All nodes of the cluster are shut down and migrated using VMware HCX Cold Migration. Downtime duration depends upon database size and private network speed to Azure cloud. |
+
+## Windows Server Failover Cluster quorum considerations
+
+Microsoft SQL Server Always On Availability Groups rely on Windows Server Failover Cluster, which requires a quorum voting mechanism to maintain the coherence of the cluster.
+
+An odd number of voting elements is required, which is achieved by an odd number of nodes in the cluster or by using a witness. A witness can be configured in three different ways:
+
+- Disk witness
+- File share witness
+- Cloud witness
+
+If the cluster uses **Disk witness**, then the disk must be migrated with the rest of cluster shared storage using the procedure described in this document.
+
+If the cluster uses a **File share witness** running on-premises, then the type of witness for your migrated cluster depends on the Azure VMware Solution scenario. There are several options to consider.
+
+- **Datacenter Extension**: Maintain the file share witness on-premises. Your workloads are distributed across your datacenter and Azure, so the connectivity between your datacenter and Azure should always be available. In any case, take bandwidth constraints into consideration and plan accordingly.
+- **Datacenter Exit**: For this scenario, there are two options. In both options, you can maintain the file share witness on-premises during the migration in case you need to roll back during the process.
+ - Deploy a new **File share witness** in your Azure VMware Solution private cloud.
+ - Deploy a **Cloud witness** running in Azure Blob Storage in the same region as the Azure VMware Solution private cloud.
+- **Disaster Recovery and Business Continuity**: For a disaster recovery scenario, the best and most reliable option is to create a **Cloud Witness** running in Azure Storage.
+- **Application Modernization**: For this use case, the best option is to deploy a **Cloud Witness**.
+
+For details about configuring and managing the quorum, see [Failover Clustering documentation](/windows-server/failover-clustering/manage-cluster-quorum). For information about deploying a Cloud witness in Azure Blob Storage, see [Deploy a Cloud Witness for a Failover Cluster](/windows-server/failover-clustering/deploy-cloud-witness).
+
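+For the **Cloud witness** option above, the witness lives in a general-purpose Azure Storage account. The following is a hedged Azure CLI sketch of the Azure-side setup only; the resource group, account name, and region are placeholder values, and the quorum change itself is a PowerShell step run on a cluster node.
+
+```bash
+# Create a storage account for the cloud witness (placeholder names and region).
+az storage account create \
+  --name sqlwsfcwitness \
+  --resource-group my-avs-rg \
+  --location westeurope \
+  --sku Standard_LRS \
+  --kind StorageV2
+
+# Retrieve an access key to use when configuring the witness.
+az storage account keys list \
+  --account-name sqlwsfcwitness \
+  --resource-group my-avs-rg \
+  --query "[0].value" --output tsv
+
+# Then, from PowerShell on a cluster node (not bash), configure the quorum, for example:
+#   Set-ClusterQuorum -CloudWitness -AccountName sqlwsfcwitness -AccessKey <key>
+```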
+## Migrate SQL Server Always On Availability Group
+
+1. Access your Always On Availability Group with SQL Server Management Studio using administration credentials.
+ - Select your primary replica and open **Availability Group** **Properties**.
+ :::image type="content" source="media/sql-server-hybrid-benefit/sql-always-on-1.png" alt-text="Diagram showing Always On Availability Group properties." border="false" lightbox="media/sql-server-hybrid-benefit/sql-always-on-1.png":::
+ - Change **Availability Mode** to **Asynchronous commit** only for the replica to be migrated.
+ - Change **Failover Mode** to **Manual** for every member of the availability group. (Equivalent T-SQL for these property changes and the later failover is sketched after this procedure.)
+1. Access the on-premises vCenter Server and go to the VMware HCX area.
+1. Under **Services** select **Migration** > **Migrate**.
+ - Select one virtual machine running the secondary replica of the database that is going to be migrated.
+ - Set the vSphere cluster in the remote private cloud, which will now host the migrated SQL Server VM or VMs, as the **Compute Container**.
+ - Select the **vSAN Datastore** as remote storage.
+ - Select a folder. This isn't mandatory, but it's recommended to separate the different workloads in your Azure VMware Solution private cloud.
+ - Keep **Same format as source**.
+ - Select **vMotion** as **Migration profile**.
+ - In **Extended Options** select **Migrate Custom Attributes**.
+ - Verify that on-premises network segments have the correct remote stretched segment in Azure.
+ - Select **Validate** and ensure that all checks are completed with pass status. The most common error is related to the storage configuration. Verify again that no virtual SCSI controllers have the physical sharing setting.
+ - Select **Go** to start the migration.
+1. Once the migration has been completed, access the migrated replica and verify connectivity with the rest of the members in the availability group.
+1. In SQL Server Management Studio, open the **Availability Group Dashboard** and verify that the replica appears as **Online**.
+ :::image type="content" source="media/sql-server-hybrid-benefit/sql-always-on-2.png" alt-text="Diagram showing Always On Availability Group Dashboard." border="false" lightbox="media/sql-server-hybrid-benefit/sql-always-on-2.png":::
+
+ - **Data Loss** status in the **Failover Readiness** column is expected since the replica has been out-of-sync with the primary during the migration.
+1. Edit the **Availability Group** **Properties** again and set **Availability Mode** back to **Synchronous commit**.
+ - The secondary replica starts to synchronize back all the changes made to the primary replica during the migration. Wait until it appears in the **Synchronized** state.
+1. From the **Availability Group Dashboard** in SSMS, select **Start Failover Wizard**.
+1. Select the migrated replica and select **Next**.
+
+ :::image type="content" source="media/sql-server-hybrid-benefit/sql-always-on-3.png" alt-text="Diagram showing new primary replica selection for Always On." border="false" lightbox="media/sql-server-hybrid-benefit/sql-always-on-3.png":::
+
+1. Connect to the replica in the next screen with your DB admin credentials.
+ :::image type="content" source="media/sql-server-hybrid-benefit/sql-always-on-4.png" alt-text="Diagram showing new primary replica admin credentials connection." border="false" lightbox="media/sql-server-hybrid-benefit/sql-always-on-4.png":::
+
+1. Review the changes and click **Finish** to start the failover operation.
+
+ :::image type="content" source="media/sql-server-hybrid-benefit/sql-always-on-5.png" alt-text="Diagram showing Availability Group Always On operation review." border="false" lightbox="media/sql-server-hybrid-benefit/sql-always-on-5.png":::
+
+
+1. Monitor the progress of the failover in the next screen, and click **Close** when the operation is finished.
+ :::image type="content" source="media/sql-server-hybrid-benefit/sql-always-on-6.png" alt-text="Diagram showing that SQL Server Always On cluster successfully finished." border="false" lightbox="media/sql-server-hybrid-benefit/sql-always-on-6.png":::
++
+1. Refresh the **Object Explorer** view in SQL Server Management Studio (SSMS), and verify that the migrated instance is now the primary replica.
+1. Repeat steps 1 to 6 for the rest of the replicas of the availability group.
+
+ >[!Note]
+ > Migrate one replica at a time and verify that all changes are synchronized back to the replica after each migration. Do not migrate all the replicas at the same time using **HCX Bulk Migration**.
+1. After the migration of all the replicas is completed, access your Always On availability group with **SQL Server Management Studio**.
+ - Open the Dashboard and verify there is no data loss in any of the replicas and that all are in a **Synchronized** state.
+ :::image type="content" source="media/sql-server-hybrid-benefit/sql-always-on-7.png" alt-text="Diagram showing availability Group Dashboard with new primary replica and all migrated secondary replicas in synchronized state." border="false" lightbox="media/sql-server-hybrid-benefit/sql-always-on-7.png":::
+ - Edit the **Properties** of the availability group and set **Failover Mode** to **Automatic** in all replicas.
+
+ :::image type="content" source="media/sql-server-hybrid-benefit/sql-always-on-8.png" alt-text="Diagram showing a setting for failover back to Automatic for all replicas." border="false" lightbox="media/sql-server-hybrid-benefit/sql-always-on-8.png":::
+
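+If you prefer to script the availability group changes instead of using the SSMS dialogs, the following sqlcmd sketch covers the same property changes and failover. It's a minimal example under assumed names (availability group `MyAG`, listener `aglisten`, migrated replica `SQLNODE2`) with Windows authentication; validate it against your own topology first.
+
+```bash
+# Step 1 equivalent: asynchronous commit for the replica being migrated, manual failover mode.
+sqlcmd -S aglisten -Q "ALTER AVAILABILITY GROUP [MyAG] MODIFY REPLICA ON N'SQLNODE2' WITH (AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT);"
+sqlcmd -S aglisten -Q "ALTER AVAILABILITY GROUP [MyAG] MODIFY REPLICA ON N'SQLNODE2' WITH (FAILOVER_MODE = MANUAL);"
+
+# After the migration: synchronous commit again, then watch the synchronization state.
+sqlcmd -S aglisten -Q "ALTER AVAILABILITY GROUP [MyAG] MODIFY REPLICA ON N'SQLNODE2' WITH (AVAILABILITY_MODE = SYNCHRONOUS_COMMIT);"
+sqlcmd -S aglisten -Q "SELECT r.replica_server_name, s.synchronization_state_desc FROM sys.dm_hadr_availability_replica_states s JOIN sys.availability_replicas r ON s.replica_id = r.replica_id;"
+
+# Once SQLNODE2 reports SYNCHRONIZED, fail over to it; this statement runs on SQLNODE2 itself.
+sqlcmd -S SQLNODE2 -Q "ALTER AVAILABILITY GROUP [MyAG] FAILOVER;"
+```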
+## Next steps
+
+- [Enable SQL Azure hybrid benefit for Azure VMware Solution](enable-sql-azure-hybrid-benefit.md).
+- [Create a placement policy in Azure VMware Solution](create-placement-policy.md)
+- [Windows Server Failover Clustering Documentation](/windows-server/failover-clustering/failover-clustering-overview)
+- [Microsoft SQL Server 2019 Documentation](/sql/sql-server/)
+- [Microsoft SQL Server 2022 Documentation](/sql/sql-server/)
+- [Windows Server Technical Documentation](/windows-server/)
+- [Planning Highly Available, Mission Critical SQL Server Deployments with VMware vSphere](https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/solutions/vmware-vsphere-highly-available-mission-critical-sql-server-deployments.pdf)
+- [Microsoft SQL Server on VMware vSphere Availability and Recovery Options](https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/solutions/sql-server-on-vmware-availability-and-recovery-options.pdf)
+- [VMware KB 1002951 – Tips for configuring Microsoft SQL Server in a virtual machine](https://kb.vmware.com/s/article/1002951)
+- [Microsoft SQL Server 2019 in VMware vSphere 7.0 Performance Study](https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/techpaper/performance/vsphere7-sql-server-perf.pdf)
+- [Architecting Microsoft SQL Server on VMware vSphere – Best Practices Guide](https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/solutions/sql-server-on-vmware-best-practices-guide.pdf)
+- [Setup for Windows Server Failover Cluster in VMware vSphere 7.0](https://docs.vmware.com/en/VMware-vSphere/7.0/vsphere-esxi-vcenter-server-703-setup-wsfc.pdf)
azure-vmware Migrate Sql Server Always On Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/migrate-sql-server-always-on-cluster.md
- Title: Migrate Microsoft SQL Server Always-On cluster to Azure VMware Solution
-description: Learn how to migrate Microsoft SQL Server Always-On cluster to Azure VMware Solution.
-- Previously updated : 3/20/2023--
-# Migrate Microsoft SQL Server Always-On cluster to Azure VMware Solution
-
-In this article, you learn how to migrate Microsoft SQL Server Always-On Cluster to Azure VMware Solution. For VMware HCX, you can follow the VMware vMotion migration procedure.
--
-## Prerequisites
-
-These are the prerequisites to migrating your SQL server instance to Azure VMware Solution.
--- Review and record the storage and network configuration of every node in the cluster.-- Backup the full database.-- Backup the virtual machine running the Microsoft SQL Server instance. -- Remove the virtual machine from any VMware vSphere Distributed Resource Scheduler (DRS) groups and rules.-- VMware HCX must be configured between your on-premises datacenter and the Azure VMware Solution private cloud that runs the migrated workloads. For more information on how to configure HCX, see [Azure VMware Solution documentation](install-vmware-hcx.md).-- Ensure that all the network segments in use by the Microsoft SQL Server are extended into your Azure VMware Solution private cloud. To verify this step, see [Configure VMware HCX network extension](configure-hcx-network-extension.md).-
-VMware HCX over VPN is supported in Azure VMware Solution for workload migration. However, due to the size of database workloads, VMware HCX over VPN is not recommended for Microsoft SQL Server Always-On migrations for production workloads. ExpressRoute connectivity is recommended as more performant and reliable. For Microsoft SQL Server Standalone and non-production workloads this may be suitable, depending upon the size of the database, to migrate.
-
-Microsoft SQL Server (2019 and 2022) was tested with Windows Server (2019 and 2022) Data Center edition with the virtual machines deployed in the on-premises environment. Windows Server and SQL Server have been configured following best practices and recommendations from Microsoft and VMware. The on-premises source infrastructure was VMware vSphere 7.0 Update 3 and VMware vSAN running on Dell PowerEdge servers and Intel Optane P4800X SSD NVMe devices.
-
-## Downtime considerations
-
-Predicting downtime during a migration depends upon the size of the database to be migrated and the speed of the private network connection to Azure cloud. Always-On migrations are intended to be executed with low database downtime. However, plan to conduct the migration during off-peak hours within a pre-approved change window.
-
-The table below indicates the estimated downtime for each Microsoft SQL Server topology.
-
-| **Scenario** | **Downtime expected** | **Notes** |
-|:|:--|:--|
-| **Standalone instance** | Low | Migrate with VMware vMotion, the DB is available during migration, but it is not recommended to commit any critical data during it. |
-| **Always-On Availability Group** | Low | The primary replica will always be available during the migration of the first secondary replica and the secondary replica will become the primary after the initial failover to Azure. |
-| **Failover Cluster Instance** | High | All nodes of the cluster are shutdown and migrated using VMware HCX Cold Migration. Downtime duration depends upon database size and private network speed to Azure cloud. |
-
-## Windows Server Failover Cluster quorum considerations
-
-Microsoft SQL Server Always-On Availability Groups rely on Windows Server Failover Cluster, which requires a quorum voting mechanism to maintain the coherence of the cluster.
-
-An odd number of voting elements is required, which is achieved by an odd number of nodes in the cluster or by using a witness. Witness can be configured in three different ways:
--- Disk witness-- File share witness-- Cloud witness-
-If the cluster uses **Disk witness**, then the disk must be migrated with the rest of cluster shared storage using the procedure described in this document.
-
-If the cluster uses a **File share witness** running on-premises, then the type of witness for your migrated cluster depends upon the Azure VMware Solution scenario, there are several options to consider.
--- **Datacenter Extension**: Maintain the file share witness on-premises. Your workloads are distributed across your datacenter and Azure. Therefore the connectivity between your datacenter and Azure should always be available. In any case, take into consideration bandwidth constraints and plan accordingly. -- **Datacenter Exit**: For this scenario, there are two options. In both options, you can maintain the file share witness on-premises during the migration in case you need to do rollback during the process.
- - Deploy a new **File share witness** in your Azure VMware Solution private cloud.
- - Deploy a **Cloud witness** running in Azure Blob Storage in the same region as the Azure VMware Solution private cloud.
-- **Disaster Recovery and Business Continuity**: For a disaster recovery scenario, the best and most reliable option is to create a **Cloud Witness** running in Azure Storage. -- **Application Modernization**: For this use case, the best option is to deploy a **Cloud Witness**.-
-For details about configuring and managing the quorum, see [Failover Clustering documentation](https://learn.microsoft.com/windows-server/failover-clustering/manage-cluster-quorum). For information about deployment of Cloud witness in Azure Blob Storage, see [Manage a cluster quorum for a Failover Cluster](https://learn.microsoft.com/windows-server/failover-clustering/deploy-cloud-witness).
-
-## Migrate Microsoft SQL Server Always-On cluster
-
-1. Access your Always-On cluster with SQL Server Management Studio using administration credentials.
- - Select your primary replica and open **Availability Group** **Properties**.
- :::image type="content" source="media/sql-server-hybrid-benefit/sql-always-on-1.png" alt-text="Diagram showing Always On Availability Group properties." border="false" lightbox="media/sql-server-hybrid-benefit/sql-always-on-1.png":::
- - Change **Availability Mode** to **Asynchronous commit** only for the replica to be migrated.
- - Change **Failover Mode** to **Manual** for every member of the availability group.
-1. Access the on-premises vCenter Server and proceed to HCX area.
-1. Under **Services** select **Migration** > **Migrate**.
- - Select one virtual machine running the secondary replica of the database the is going to be migrated.
- - Set the vSphere cluster in the remote private cloud to run the migrated SQL cluster as the **Compute Container**.
- - Select the **vSAN Datastore** as remote storage.
- - Select a folder. This not mandatory, but is recommended to separate the different workloads in your Azure VMware Solution private cloud.
- - Keep **Same format as source**.
- - Select **vMotion** as **Migration profile**.
- - In **Extended Options** select **Migrate Custom Attributes**.
- - Verify that on-premises network segments have the correct remote stretched segment in Azure.
- - Select **Validate** and ensure that all checks are completed with pass status. The most common error is related to the storage configuration. Verify again that there are no virtual SCSI controllers have the physical sharing setting.
- - Click **Go** to start the migration.
-1. Once the migration has been completed, access the migrated replica and verify connectivity with the rest of the members in the availability group.
-1. In SQL Server Management Studio, open the **Availability Group Dashboard** and verify that the replica appears as **Online**.
- :::image type="content" source="media/sql-server-hybrid-benefit/sql-always-on-2.png" alt-text="Diagram showing Always On Availability Group Dashboard." border="false" lightbox="media/sql-server-hybrid-benefit/sql-always-on-2.png":::
-
- - **Data Loss** status in the **Failover Readiness** column is expected since the replica has been out-of-sync with the primary during the migration.
-1. Edit the **Availability Group** **Properties** again and set **Availability Mode** back to **Synchronous commit**.
- - The secondary replica starts to synchronize back all the changes made to the primary replica during the migration. Wait until it appears in Synchronized state.
-1. From the **Availability Group Dashboard** in SSMS click on **Start Failover Wizard**.
-1. Select the migrated replica and click **Next**.
-
- :::image type="content" source="media/sql-server-hybrid-benefit/sql-always-on-3.png" alt-text="Diagram showing new primary replica selection for always on." border="false" lightbox="media/sql-server-hybrid-benefit/sql-always-on-3.png":::
-
-1. Connect to the replica in the next screen with your DB admin credentials.
- :::image type="content" source="media/sql-server-hybrid-benefit/sql-always-on-4.png" alt-text="Diagram showing new primary replica admin credentials connection." border="false" lightbox="media/sql-server-hybrid-benefit/sql-always-on-4.png":::
-
-1. Review the changes and click **Finish** to start the failover operation.
-
- :::image type="content" source="media/sql-server-hybrid-benefit/sql-always-on-5.png" alt-text="Diagram showing Availability Group always on operation review." border="false" lightbox="media/sql-server-hybrid-benefit/sql-always-on-5.png":::
-
-
-1. Monitor the progress of the failover in the next screen, and click **Close** when the operation is finished.
- :::image type="content" source="media/sql-server-hybrid-benefit/sql-always-on-6.png" alt-text="Diagram showing that always on SQL server cluster successfully finished." border="false" lightbox="media/sql-server-hybrid-benefit/sql-always-on-6.png":::
--
-1. Refresh the **Object Explorer** view in SQL Server Management Studio (SSMS), and verify that the migrated instance is now the primary replica.
-1. Repeat steps 1 to 6 for the rest of the replicas of the availability group.
-
- >[!Note]
- > Migrate one replica at a time and verify that all changes are synchronized back to the replica after each migration. Do not migrate all the replicas at the same time using **HCX Bulk Migration**.
-1. After the migration of all the replicas is completed, access your Always-On availability group with **SQL Server Management Studio**.
- - Open the Dashboard and verify there is no data loss in any of the replicas and that all are in a **Synchronized** state.
- :::image type="content" source="media/sql-server-hybrid-benefit/sql-always-on-7.png" alt-text="Diagram showing availability Group Dashboard with new primary replica and all migrated secondary replicas in synchronized state." border="false" lightbox="media/sql-server-hybrid-benefit/sql-always-on-7.png":::
- - Edit the **Properties** of the availability group and set **Failover Mode** to **Automatic** in all replicas.
-
- :::image type="content" source="media/sql-server-hybrid-benefit/sql-always-on-8.png" alt-text="Diagram showing a setting for failover back to Automatic for all replicas." border="false" lightbox="media/sql-server-hybrid-benefit/sql-always-on-8.png":::
-
-## Next steps
--- [Enable SQL Azure hybrid benefit for Azure VMware Solution](enable-sql-azure-hybrid-benefit.md). -- [Create a placement policy in Azure VMware Solution](create-placement-policy.md) -- [Windows Server Failover Clustering Documentation](https://learn.microsoft.com/windows-server/failover-clustering/failover-clustering-overview) -- [Microsoft SQL Server 2019 Documentation](https://learn.microsoft.com/sql/sql-server/) -- [Microsoft SQL Server 2022 Documentation](https://learn.microsoft.com/sql/sql-server/) -- [Windows Server Technical Documentation](https://learn.microsoft.com/windows-server/) -- [Planning Highly Available, Mission Critical SQL Server Deployments with VMware vSphere](https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/solutions/vmware-vsphere-highly-available-mission-critical-sql-server-deployments.pdf)-- [Microsoft SQL Server on VMware vSphere Availability and Recovery Options](https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/solutions/sql-server-on-vmware-availability-and-recovery-options.pdf)-- [VMware KB 100 2951 ΓÇô Tips for configuring Microsoft SQL Server in a virtual machine](https://kb.vmware.com/s/article/1002951)-- [Microsoft SQL Server 2019 in VMware vSphere 7.0 Performance Study](https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/techpaper/performance/vsphere7-sql-server-perf.pdf)-- [Architecting Microsoft SQL Server on VMware vSphere ΓÇô Best Practices Guide](https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/solutions/sql-server-on-vmware-best-practices-guide.pdf)-- [Setup for Windows Server Failover Cluster in VMware vSphere 7.0](https://docs.vmware.com/en/VMware-vSphere/7.0/vsphere-esxi-vcenter-server-703-setup-wsfc.pdf)
azure-vmware Migrate Sql Server Failover Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/migrate-sql-server-failover-cluster.md
Title: Migrate SQL Server failover cluster to Azure VMware Solution
-description: Learn how to migrate SQL Server failover cluster to Azure VMware Solution
+ Title: Migrate SQL Server Failover cluster to Azure VMware Solution
+description: Learn how to migrate SQL Server Failover cluster to Azure VMware Solution
+ Last updated 6/20/2023
-# Migrate SQL Server failover cluster to Azure VMware Solution
+# Migrate a SQL Server Always On Failover Cluster Instance to Azure VMware Solution
-In this article, you'll learn how to migrate a Microsoft SQL Server Failover cluster instance to Azure VMware Solution. Currently Azure VMware Solution service doesn't support VMware Hybrid Linked Mode to connect an on-premises vCenter Server with one running in Azure VMware Solution. Due to this constraint, this process requires the use of VMware HCX for the migration. For more details about configuring HCX, see [Install and activate VMware HCX in Azure VMware Solution](install-vmware-hcx.md).
+In this article, you'll learn how to migrate a SQL Server Failover Cluster Instance to Azure VMware Solution.
+Currently Azure VMware Solution service doesn't support VMware Hybrid Linked Mode to connect an on-premises vCenter Server with one running in Azure VMware Solution.
+Due to this constraint, this process requires the use of VMware HCX for the migration.
+For more details about configuring HCX, see [Install and activate VMware HCX in Azure VMware Solution](install-vmware-hcx.md).
-VMware HCX doesn't support migrating virtual machines with SCSI controllers in physical sharing mode attached to a virtual machine. However, you can overcome this limitation by performing the steps shown in this procedure and by using VMware HCX Cold Migration to move the different virtual machines that make up the cluster.
+VMware HCX doesn't support migrating virtual machines with SCSI controllers in physical sharing mode attached to a virtual machine.
+However, you can overcome this limitation by performing the steps shown in this procedure and by using VMware HCX Cold Migration to move the different virtual machines that make up the cluster.
> [!NOTE]
-> This procedure requires a full shutdown of the cluster. Since the Microsoft SQL Server service will be unavailable during the migration, plan accordingly for the downtime period .
+> This procedure requires a full shutdown of the cluster. Since the SQL Server service will be unavailable during the migration, plan accordingly for the downtime period.
## Prerequisites

- Review and record the storage and network configuration of every node in the cluster.
- Review and record the WSFC configuration.
-- Back up the database(s) being executed in the cluster.
-- Back up the cluster virtual machines.
+- Maintain backups of all the SQL Server databases.
+- Back up the cluster virtual machines.
- Remove all cluster node VMs from any Distributed Resource Scheduler (DRS) groups and rules they're part of.
- VMware HCX must be configured between your on-premises datacenter and the Azure VMware Solution private cloud that runs the migrated workloads. For more details about installing VMware HCX, see [Azure VMware Solution documentation](install-vmware-hcx.md).
-- Ensure that all the network segments in use by the Microsoft SQL Server are extended into your Azure VMware Solution private cloud. To verify this step, see [Configure VMware HCX network extension](configure-hcx-network-extension.md).
+- Ensure that all the network segments in use by SQL Server and workloads using it are extended into your Azure VMware Solution private cloud. To verify this step, see [Configure VMware HCX network extension](configure-hcx-network-extension.md).
-VMware HCX over VPN is supported in Azure VMware Solution for workload migration. However, due to the size of database workloads it isn't recommended for Microsoft SQL Server Failover Cluster Instance and Microsoft SQL Server Always-On migrations, especially for production workloads. ExpressRoute connectivity is recommended as more performant and reliable. For Microsoft SQL Server Standalone and non-production workloads this can be suitable, depending upon the size of the database, to migrate.
+VMware HCX over VPN is supported in Azure VMware Solution for workload migration.
+However, due to the size of database workloads it isn't recommended for Microsoft SQL Server Failover Cluster Instance and Microsoft SQL Server Always On migrations, especially for production workloads.
+ExpressRoute connectivity is recommended as more performant and reliable.
+VMware HCX over VPN can be suitable for migrating Microsoft SQL Server Standalone instances and non-production workloads, depending upon the size of the database.
-Microsoft SQL Server 2019 and 2022 were tested with Windows Server 2019 and 2022 Data Center edition with the virtual machines deployed in the on-premises environment. Windows Server and SQL Server have been configured following best practices and recommendations from Microsoft and VMware. The on-premises source infrastructure was VMware vSphere 7.0 Update 3 and VMware vSAN running on Dell PowerEdge servers and Intel Optane P4800X SSD NVMe devices.
+Microsoft SQL Server 2019 and 2022 were tested with Windows Server 2019 and 2022 Data Center edition with the virtual machines deployed in the on-premises environment.
+Windows Server and SQL Server have been configured following best practices and recommendations from Microsoft and VMware.
+The on-premises source infrastructure was VMware vSphere 7.0 Update 3 and VMware vSAN running on Dell PowerEdge servers and Intel Optane P4800X SSD NVMe devices.
## Downtime considerations
-Predicting downtime during a migration will depend upon the size of the database to be migrated and the speed of the private network connection to Azure cloud. Migration of SQL Server Failover Cluster Instances Always On to Azure VMware Solution requires a full downtime of the database and all cluster nodes, however you should plan for the migration to be executed during off-peak hours with an approved change window.
+Downtime during a migration depends on the size of the database to be migrated and the speed of the private network connection to Azure cloud.
+Migration of SQL Server Always On Failover Cluster Instances to Azure VMware Solution requires full downtime of the database and all cluster nodes; however, you should plan for the migration to be executed during off-peak hours within an approved change window.
The table below indicates the downtime for each Microsoft SQL Server topology.

| **Scenario** | **Downtime expected** | **Notes** |
|:--|:--|:--|
-| **Standalone instance** | Low | Migration will be done using vMotion, the DB will be available during migration time, but it isn't recommended to commit any critical data during it. |
-| **Always-On Availability Group** | Low | The primary replica will always be available during the migration of the first secondary replica and the secondary replica will become the primary after the initial failover to Azure. |
-| **Failover Cluster Instance** | High | All nodes of the cluster will be shut down and migrated using VMware HCX Cold Migration. Downtime duration will depend upon database size and private network speed to Azure cloud. |
+| **Standalone instance** | Low | Migration will be done using vMotion; the database will be available during the migration, but it isn't recommended to commit any critical data during it. |
+| **Always-On SQL Server Availability Group** | Low | The primary replica will always be available during the migration of the first secondary replica and the secondary replica will become the primary after the initial failover to Azure. |
+| **Always On SQL Server Failover Cluster Instance** | High | All nodes of the cluster will be shut down and migrated using VMware HCX Cold Migration. Downtime duration will depend upon database size and private network speed to Azure cloud. |
## Windows Server Failover Cluster quorum considerations
If the cluster uses a **File share witness** running on-premises, then the type of witness for your migrated cluster depends on the Azure VMware Solution scenario. There are several options to consider.
- Disaster Recovery and Business Continuity: For a disaster recovery scenario, the best and most reliable option is to create a **Cloud Witness** running in Azure Storage.
- Application Modernization: For this use case, the best option is to deploy a **Cloud Witness**.
-For more information about quorum configuration and management, see [Failover Clustering documentation](https://learn.microsoft.com/windows-server/failover-clustering/manage-cluster-quorum). For more information about deploying a Cloud witness in Azure Blob Storage, see [Deploy a Cloud Witness for a Failover Cluster](https://learn.microsoft.com/windows-server/failover-clustering/deploy-cloud-witness) documentation for the details.
+For more information about quorum configuration and management, see [Failover Clustering documentation](/windows-server/failover-clustering/manage-cluster-quorum). For more information about deploying a Cloud witness in Azure Blob Storage, see [Deploy a Cloud Witness for a Failover Cluster](/windows-server/failover-clustering/deploy-cloud-witness) documentation for the details.
## Migrate failover cluster
For illustration purposes, in this document we're using a two-node cluster with
- Set **SCSI Bus Sharing** from **Physical** to **None** in the virtual SCSI controllers used for the shared storage. Usually, these controllers are of VMware Paravirtual type.
1. Edit the first node virtual machine settings. Set **SCSI Bus Sharing** from **Physical** to **None** in the SCSI controllers.
-1. From the vSphere Client,** go to the HCX plugin area. Under **Services**, select **Migration** > **Migrate**.
+1. From the **vSphere Client**, go to the HCX plugin area. Under **Services**, select **Migration** > **Migrate**.
- Select the second node virtual machine.
- - Set the vSphere cluster in the remote private cloud that will run the migrated SQL cluster as the **Compute Container**.
+ - Set the vSphere cluster in the remote private cloud, which will now host the migrated SQL Server VM or VMs, as the **Compute Container**.
- Select the **vSAN Datastore** as remote storage.
- Select a folder if you want to place the virtual machines in a specific folder. This isn't mandatory, but it's recommended to separate the different workloads in your Azure VMware Solution private cloud.
- Keep **Same format as source**.
- Select **Cold migration** as **Migration profile**.
- In **Extended Options** select **Migrate Custom Attributes**.
- - Verify that on-premises network segments have the correct remote stretched segment in Azure.
- - Select **Validate** and ensure that all checks are completed with pass status. The most common error here will be one related to the storage configuration. Verify again that there are no SCSI controllers with physical sharing setting.
+ - Verify that on-premises network segments have the correct remote stretched segment in Azure.
+ - Select **Validate** and ensure that all checks are completed with pass status. The most common error here is one related to the storage configuration. Verify again that there are no SCSI controllers with the physical sharing setting.
- Select **Go** and the migration will initiate.
1. Repeat the same process for the first node.
-1. Access **Azure VMware Solution vSphere Client** and edit the first node settings and set back to physical SCSI Bus sharing the SCSI controller(s) managing the shared disks.
+1. Access the **Azure VMware Solution vSphere Client**, edit the first node settings, and set the SCSI controller or controllers managing the shared disks back to physical SCSI Bus sharing.
1. Edit node 2 settings in **vSphere Client**.
   - Set SCSI Bus sharing back to physical in the SCSI controller managing shared storage.
For illustration purposes, in this document we're using a two-node cluster with
- In the **Failover Cluster Manager** verify that the second node appears with **Online** status.

  :::image type="content" source="media/sql-server-hybrid-benefit/sql-failover-4.png" alt-text="Diagram showing a cluster node status in Failover Cluster Manager." border="false" lightbox="media/sql-server-hybrid-benefit/sql-failover-4.png":::
-1. Using the **SQL Server Management Studio** connect to the SQL Server cluster resource network name. Check the database is online and accessible.
+1. Using **SQL Server Management Studio**, connect to the SQL Server cluster resource network name. Confirm all databases are online and accessible (a sqlcmd sketch of this check follows the image below).
:::image type="content" source="media/sql-server-hybrid-benefit/sql-failover-5.png" alt-text="Diagram showing a verification of SQL Server Management Studio connection to the migrated cluster instance database." border="false" lightbox="media/sql-server-hybrid-benefit/sql-failover-5.png":::
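As a command-line complement to the SSMS check above, the following hedged sqlcmd sketch (assuming a cluster network name of `sqlfci` and Windows authentication) confirms the cluster nodes the instance can see and the state of its databases:

```bash
# List the WSFC nodes visible to the migrated failover cluster instance.
sqlcmd -S sqlfci -Q "SELECT NodeName, status_description, is_current_owner FROM sys.dm_os_cluster_nodes;"

# Confirm every database on the instance is ONLINE.
sqlcmd -S sqlfci -Q "SELECT name, state_desc FROM sys.databases;"
```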
-
-Finally, check the connectivity to SQL from other systems and applications in your infrastructure and verify that all applications using the database(s) can still access them.
-## Next steps
+Finally, check the connectivity to SQL Server from other systems and applications in your infrastructure and verify that all applications using the database or databases can still access them.
+
+## More information
-- [Enable SQL Azure hybrid benefit for Azure VMware Solution](enable-sql-azure-hybrid-benefit.md).
+- [Enable Azure Hybrid Benefit for SQL Server in Azure VMware Solution](enable-sql-azure-hybrid-benefit.md).
- [Create a placement policy in Azure VMware Solution](create-placement-policy.md)
-- [Windows Server Failover Clustering Documentation](https://learn.microsoft.com/windows-server/failover-clustering/failover-clustering-overview)
-- [Microsoft SQL Server 2019 Documentation](https://learn.microsoft.com/sql/sql-server/?view=sql-server-ver15)
-- [Microsoft SQL Server 2022 Documentation](https://learn.microsoft.com/sql/sql-server/?view=sql-server-ver16)
-- [Windows Server Technical Documentation](https://learn.microsoft.com/windows-server/)
+- [Windows Server Failover Clustering Documentation](/windows-server/failover-clustering/failover-clustering-overview)
+- [Microsoft SQL Server 2019 Documentation](/sql/sql-server/?view=sql-server-ver15&preserve-view=true)
+- [Microsoft SQL Server 2022 Documentation](/sql/sql-server/?view=sql-server-ver16&preserve-view=true)
+- [Windows Server Technical Documentation](/windows-server/)
- [Planning Highly Available, Mission Critical SQL Server Deployments with VMware vSphere](https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/solutions/vmware-vsphere-highly-available-mission-critical-sql-server-deployments.pdf)
- [Microsoft SQL Server on VMware vSphere Availability and Recovery Options](https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/solutions/sql-server-on-vmware-availability-and-recovery-options.pdf)
- [VMware KB 1002951 – Tips for configuring Microsoft SQL Server in a virtual machine](https://kb.vmware.com/s/article/1002951)
azure-vmware Migrate Sql Server Standalone Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/migrate-sql-server-standalone-cluster.md
Title: Migrate Microsoft SQL Server Standalone to Azure VMware Solution
description: Learn how to migrate Microsoft SQL Server Standalone to Azure VMware Solution.
Last updated 3/20/2023
-# Migrate Microsoft SQL Server Standalone to Azure VMware Solution
+# Migrate a SQL Server standalone instance to Azure VMware Solution
-In this article, you learn how to migrate Microsoft SQL Server standalone to Azure VMware Solution.
+In this article, you learn how to migrate a SQL Server standalone instance to Azure VMware Solution.
-When migrating Microsoft SQL Server Standalone to Azure VMware Solution, VMware HCX offers two migration profiles that can be used:
+When migrating a SQL Server standalone instance to Azure VMware Solution, VMware HCX offers two migration profiles:
- HCX vMotion - HCX Cold Migration
-In both cases, consider the size and criticality of the database being migrated. For this how-to procedure, we have validated VMware HCX vMotion. VMware HCX Cold Migration is also valid, but it requires a longer downtime period.
+In both cases, consider the size and criticality of the database being migrated.
+For this how-to procedure, we have validated VMware HCX vMotion.
+VMware HCX Cold Migration is also valid, but it requires a longer downtime period.
+This scenario was validated using the following editions and configurations:
+
+- Microsoft SQL Server (2019 and 2022)
+- Windows Server (2019 and 2022) Data Center edition
+- Windows Server and SQL Server were configured following best practices and recommendations from Microsoft and VMware.
+- The on-premises source infrastructure was VMware vSphere 7.0 Update 3 and VMware vSAN running on Dell PowerEdge servers and Intel Optane P4800X SSD NVMe devices.
## Prerequisites

- Review and record the storage and network configuration of every node in the cluster.
-- Back up the full database.
-- Back up the virtual machine running the Microsoft SQL Server instance.
+- Maintain backups of all the databases.
+- Back up the virtual machine running the SQL Server instance.
- Remove all cluster node VMs from any Distributed Resource Scheduler (DRS) groups and rules.
-- Configure VMware HCX between your on-premises datacenter and the Azure VMware Solution private cloud that runs the migrated workloads. For more information about configuring VMware HCX, see [Azure VMware Solution documentation](install-vmware-hcx.md).
-- Ensure that all the network segments in use by the Microsoft SQL Server are extended into your Azure VMware Solution private cloud. For verify this step in the procedure, see [Configure VMware HCX network extension](configure-hcx-network-extension.md).
+- Configure VMware HCX between your on-premises datacenter and the Azure VMware Solution private cloud that runs the migrated workloads. For more information about configuring VMware HCX, see [Azure VMware Solution documentation](install-vmware-hcx.md).
+- Ensure that all the network segments in use by the SQL Server are extended into your Azure VMware Solution private cloud. To verify this step in the procedure, see [Configure VMware HCX network extension](configure-hcx-network-extension.md).
+
+Either VMware HCX over VPN or ExpressRoute connectivity can be used as the networking configuration for the migration.
-VMware HCX over VPN is supported in Azure VMware Solution for workload migration. However, due to the size of database workloads, VMware HCX over VPN isn't recommended for Microsoft SQL Server Failover Cluster Instance and Microsoft SQL Server Always-On migrations, especially for production workloads. ExpressRoute connectivity is recommended as more performant and reliable. For Microsoft SQL Server Standalone and non-production workloads HCX over VPN can be suitable, depending on the size of the database, to migrate.
+Because of its limited bandwidth, VMware HCX over VPN is typically suited for workloads that can sustain longer periods of downtime (such as non-production environments).
-Microsoft SQL Server (2019 and 2022) were tested with Windows Server (2019 and 2022) Data Center edition with the virtual machines deployed in the on-premises environment. Windows Server and SQL Server have been configured following best practices and recommendations from Microsoft and VMware. The on-premises source infrastructure was VMware vSphere 7.0 Update 3 and VMware vSAN running on Dell PowerEdge servers and Intel Optane P4800X SSD NVMe devices.
+For production environments, workloads with large databases, or scenarios where you need to minimize downtime, ExpressRoute connectivity is recommended for the migration.
+
+Further downtime considerations are discussed in the next section.
## Downtime considerations
-Predicting downtime during a migration depends upon the size of the database to be migrated and the speed of the private network connection to Azure cloud. Migration of SQL Server standalone instance doesn't require database downtime since it will be done using the VMware HCX vMotion mechanism. We recommend the migration during off-peak hours with an pre-approved change window.
+Downtime during a migration depends on the size of the database to be migrated and the speed of the private network connection to Azure cloud.
+Migration of the Microsoft SQL Server Standalone instance using the VMware HCX vMotion mechanism is intended to minimize the solution downtime. However, we still recommend the migration take place during off-peak hours within a pre-approved change window.
-This table indicates the estimated downtime for each Microsoft SQL Server topology.
+This table indicates the estimated downtime for migration of each SQL Server topology.
| **Scenario** | **Downtime expected** | **Notes** |
|:--|:--|:--|
-| **Standalone instance** | Low | Migration is done using VMware vMotion, the DB is available during migration time, but it isn't recommended to commit any critical data during it. |
-| **Always-On Availability Group** | Low | The primary replica will always be available during the migration of the first secondary replica and the secondary replica will become the primary after the initial failover to Azure. |
-| **Failover Cluster Instance** | High | All nodes of the cluster are shutdown and migrated using VMware HCX Cold Migration. Downtime duration depends upon database size and private network speed to Azure cloud. |
+| **Standalone instance** | Low | Migration is done using VMware vMotion; the database is available during the migration, but it isn't recommended to commit any critical data during it. |
+| **Always On SQL Server Availability Group** | Low | The primary replica will always be available during the migration of the first secondary replica and the secondary replica will become the primary after the initial failover to Azure. |
+| **Always On SQL Server Failover Cluster Instance** | High | All nodes of the cluster are shut down and migrated using VMware HCX Cold Migration. Downtime duration depends upon database size and private network speed to Azure cloud. |
-## Migrate Microsoft SQL Server standalone
+## Executing the migration
1. Log into your on-premises **vCenter Server** and access the VMware HCX plugin.
-1. Under **Services** select **Migration** > **Migrate**.
- - Select the Microsoft SQL Server virtual machine.
- - Set the vSphere cluster in the remote private cloud of the migrated SQL cluster as the **Compute Container**.
- - Select the vSAN Datastore as remote storage.
- - Select a folder. This isn't mandatory, but we recommended separating the different workloads in your Azure VMware Solution private cloud.
- - Keep **Same format as source**.
- - Select **vMotion** as Migration profile.
- - In **Extended Options** select **Migrate Custom Attributes**.
- - Verify that on-premises network segments have the correct remote stretched segment in Azure VMware Solution.
- - Select **Validate** and ensure that all checks are completed with pass status.
- - Select **Go** to start the migration.
+1. Under **Services**, select **Migration** > **Migrate**.
+ 1. Select the SQL Server virtual machine.
+ 2. Set the vSphere cluster in the remote private cloud, which will now host the migrated SQL Server VM or VMs, as the **Compute Container**.
+ 3. Select the vSAN Datastore as remote storage.
+ 4. Select a folder. This isn't mandatory, but we recommend separating the different workloads in your Azure VMware Solution private cloud.
+ 5. Keep **Same format as source**.
+ 6. Select **vMotion** as Migration profile.
+ 7. In **Extended Options** select **Migrate Custom Attributes**.
+ 8. Verify that on-premises network segments have the correct remote stretched segment in Azure VMware Solution.
+ 9. Select **Validate** and ensure that all checks are completed with pass status.
+ 10. Select **Go** to start the migration.
1. After the migration has completed, access the virtual machine using VMware Remote Console in the vSphere Client.
- - Verify the network configuration and check connectivity both with on-premises and Azure VMware Solution resources.
- - Using SQL Server Management Studio verify you can access the database.
+1. Verify the network configuration and check connectivity both with on-premises and Azure VMware Solution resources.
+1. Verify your SQL Server instance and databases are up and accessible. For example, using SQL Server Management Studio verify you can access the database; a sqlcmd sketch follows the image below.
:::image type="content" source="media/sql-server-hybrid-benefit/sql-standalone-1.png" alt-text="Diagram showing a SQL Server Management Studio connection to the migrated database." border="false" lightbox="media/sql-server-hybrid-benefit/sql-standalone-1.png":::
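As a quick command-line alternative to SSMS, this hedged sketch (assuming a migrated server name of `sqlvm01` and Windows authentication) confirms the instance answers and its databases are online:

```bash
# Confirm the instance responds and report its name and version.
sqlcmd -S sqlvm01 -Q "SELECT @@SERVERNAME AS server_name, @@VERSION AS version;"

# Confirm every database is ONLINE.
sqlcmd -S sqlvm01 -Q "SELECT name, state_desc FROM sys.databases;"
```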
-## Next steps
+Finally, check the connectivity to SQL Server from other systems and applications in your infrastructure and verify that all applications using the database or databases can still access them.
+
+## More information
-- [Enable SQL Azure hybrid benefit for Azure VMware Solution](enable-sql-azure-hybrid-benefit.md).
+- [Enable SQL Azure Hybrid Benefit for Azure VMware Solution](enable-sql-azure-hybrid-benefit.md).
- [Create a placement policy in Azure VMware Solution](create-placement-policy.md)
-- [Windows Server Failover Clustering Documentation](https://learn.microsoft.com/windows-server/failover-clustering/failover-clustering-overview)
-- [Microsoft SQL Server 2019 Documentation](https://learn.microsoft.com/sql/sql-server/?view=sql-server-ver15)
-- [Microsoft SQL Server 2022 Documentation](https://learn.microsoft.com/sql/sql-server/?view=sql-server-ver16)
-- [Windows Server Technical Documentation](https://learn.microsoft.com/windows-server/)
+- [Windows Server Failover Clustering Documentation](/windows-server/failover-clustering/failover-clustering-overview)
+- [Microsoft SQL Server 2019 Documentation](/sql/sql-server/?view=sql-server-ver15&preserve-view=true)
+- [Microsoft SQL Server 2022 Documentation](/sql/sql-server/?view=sql-server-ver16&preserve-view=true)
+- [Windows Server Technical Documentation](/windows-server/)
- [Planning Highly Available, Mission Critical SQL Server Deployments with VMware vSphere](https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/solutions/vmware-vsphere-highly-available-mission-critical-sql-server-deployments.pdf)
- [Microsoft SQL Server on VMware vSphere Availability and Recovery Options](https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/solutions/sql-server-on-vmware-availability-and-recovery-options.pdf)
- [VMware KB 1002951 – Tips for configuring Microsoft SQL Server in a virtual machine](https://kb.vmware.com/s/article/1002951)
azure-vmware Request Host Quota Azure Vmware Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/request-host-quota-azure-vmware-solution.md
In this how-to, you'll request host quota/capacity for [Azure VMware Solution](i
If you have an existing Azure VMware Solution private cloud and want more hosts allocated, you'll follow the same process. >[!IMPORTANT]
->It can take up to five business days to allocate the hosts, depending on the number requested. So request what is needed for provisioning, so you don't need to request a quota increase as often.
+>It can take up to five business days to allocate the hosts, depending on the number requested.
## Eligibility criteria
You'll need an Azure account in an Azure subscription that adheres to one of the
## Request host quota for EA and MCA customers 1. In your Azure portal, under **Help + Support**, create a **[New support request](https://rc.portal.azure.com/#create/Microsoft.Support)** and provide the following information:
- - **Issue type:** Technical
+ - **Issue type:** Service and subscription limits (quotas)
- **Subscription:** Select your subscription
- - **Service:** All services > Azure VMware Solution
- - **Resource:** General question
- - **Summary:** Need capacity
- - **Problem type:** Capacity Management Issues
- - **Problem subtype:** Customer Request for Additional Host Quota/Capacity
-
-1. In the **Description** of the support ticket, on the **Details** tab, provide information for:
-
- - Region Name
- - Number of hosts
- - Any other details, including Availability Zone requirements for integrating with other Azure services (e.g. Azure NetApp Files, Azure Blob Storage)
+ - **Quota type:** Azure VMware Solution
+
+1. Select **Next**. On the **Additional details** tab, under **Request details**, enter the following in the details form:
+
+ - Region
+ - SKU
+ - Number of nodes
+
+ Select **Save and continue**.
>[!NOTE]
>Azure VMware Solution requires a minimum of three hosts and recommends redundancy of N+1 hosts.
-1. Select **Review + Create** to submit the request.
+1. Select **Next**. Under **Review + Create**, validate the request and select **Create** to submit it. (A hedged Azure CLI sketch of the same request follows.)
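+If you script ticket creation instead of using the portal, the `az support` commands below sketch the same request. This is a hedged example only: the service and problem-classification GUIDs, contact details, and description are placeholders you must look up for your subscription, and quota tickets may require additional quota-change parameters.
+
+```bash
+# Find the quota service and its Azure VMware Solution problem classification (IDs vary).
+az support services list --query "[?contains(displayName, 'Quota')].{name:name, displayName:displayName}"
+az support services problem-classifications list --service-name <quota-service-guid> --output table
+
+# Create the support ticket (all values shown are placeholders).
+az support tickets create \
+  --ticket-name "avs-host-quota-request" \
+  --title "Azure VMware Solution host quota request" \
+  --description "Requesting additional Azure VMware Solution hosts: region, SKU, and node count as entered above." \
+  --severity minimal \
+  --problem-classification "/providers/Microsoft.Support/services/<quota-service-guid>/problemClassifications/<avs-classification-guid>" \
+  --contact-country "US" --contact-email "admin@contoso.com" \
+  --contact-first-name "Avery" --contact-last-name "Admin" \
+  --contact-language "en-us" --contact-method "email" --contact-timezone "Pacific Standard Time"
+```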
## Request host quota for CSP customers
Access the Azure portal using the **Admin On Behalf Of** (AOBO) procedure from P
1. Expand customer details and select **Microsoft Azure Management Portal**. 1. In the Azure portal, under **Help + Support**, create a **[New support request](https://rc.portal.azure.com/#create/Microsoft.Support)** and provide the following information:
- - **Issue type:** Technical
+ - **Issue type:** Service and subscription limits (quotas)
- **Subscription:** Select your subscription
- - **Service:** All services > Azure VMware Solution
- - **Resource:** General question
- - **Summary:** Need capacity
- - **Problem type:** Capacity Management Issues
- - **Problem subtype:** Customer Request for Additional Host Quota/Capacity
-
- 1. In the **Description** of the support ticket, on the **Details** tab, provide information for:
-
- - Region Name
- - Number of hosts
- - Any other details, including Availability Zone requirements for integrating with other Azure services (e.g. Azure NetApp Files, Azure Blob Storage)
- - Is intended to host multiple customers?
-
+ - **Quota type:** Azure VMware Solution
+
+ 1. Select **Next**. On the **Additional details** tab, under **Request details**, enter the following in the details form:
+
+ - Region
+ - SKU
+ - Number of nodes
+
+ Select **Save and continue**.
+ >[!NOTE]
+ >Azure VMware Solution requires a minimum of three hosts and recommends redundancy of N+1 hosts.
-
- 1. Select **Review + Create** to submit the request.
+
+ 1. Select **Next**. Under **Review + Create**, validate the request and select **Create** to submit it.
## Next steps
azure-vmware Rotate Cloudadmin Credentials https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/rotate-cloudadmin-credentials.md
description: Learn how to rotate the vCenter Server credentials for your Azure V
Previously updated : 12/22/2022
-#Customer intent: As an Azure service administrator, I want to rotate my cloudadmin credentials so that the HCX Connector has the latest vCenter Server CloudAdmin credentials.
Last updated : 8/16/2023
+# Customer intent: As an Azure service administrator, I want to rotate my cloudadmin credentials so that the HCX Connector has the latest vCenter Server CloudAdmin credentials.
# Rotate the cloudadmin credentials for Azure VMware Solution
->[!IMPORTANT]
->Currently, rotating your NSX-T Manager *cloudadmin* credentials isn't supported. To rotate your NSX-T Manager password, submit a [support request](https://rc.portal.azure.com/#create/Microsoft.Support). This process might impact running HCX services.
-In this article, you'll rotate the cloudadmin credentials (vCenter Server *CloudAdmin* credentials) for your Azure VMware Solution private cloud. Although the password for this account doesn't expire, you can generate a new one at any time.
+In this article, you'll rotate the cloudadmin credentials (vCenter Server and NSX-T *CloudAdmin* credentials) for your Azure VMware Solution private cloud. Although the passwords for these accounts don't expire, you can generate new ones at any time.
>[!CAUTION]
->If you use your cloudadmin credentials to connect services to vCenter Server in your private cloud, those connections will stop working once you rotate your password. Those connections will also lock out the cloudadmin account unless you stop those services before rotating the password.
+>If you use your cloudadmin credentials to connect services to vCenter Server or NSX-T in your private cloud, those connections will stop working once you rotate your password. Those connections will also lock out the cloudadmin account unless you stop those services before rotating the password.
## Prerequisites
-Consider and determine which services connect to vCenter Server as *cloudadmin@vsphere.local* before you rotate the password. These services may include VMware services such as HCX, vRealize Orchestrator, vRealize Operations Manager, VMware Horizon, or other third-party tools used for monitoring or provisioning.
+Consider and determine which services connect to vCenter Server as *cloudadmin@vsphere.local* or NSX-T as *cloudadmin* before you rotate the password. These services may include VMware services such as HCX, vRealize Orchestrator, vRealize Operations Manager, VMware Horizon, or other third-party tools used for monitoring or provisioning.
One way to determine which services authenticate to vCenter Server with the cloudadmin user is to inspect vSphere events using the vSphere Client for your private cloud. After you identify such services, and before rotating the password, you must stop these services. Otherwise, the services won't work after you rotate the password. You'll also experience temporary locks on your vCenter Server CloudAdmin account, as these services continuously attempt to authenticate using a cached version of the old credentials.
-Instead of using the cloudadmin user to connect services to vCenter Server, we recommend individual accounts for each service. For more information about setting up separate accounts for connected services, see [Access and Identity Concepts](./concepts-identity.md).
+Instead of using the cloudadmin user to connect services to vCenter Server or NSX-T Data Center, we recommend individual accounts for each service. For more information about setting up separate accounts for connected services, see [Access and Identity Concepts](./concepts-identity.md).
## Reset your vCenter Server credentials ### [Portal](#tab/azure-portal)
-1. In your Azure VMware Solution private cloud, select **VMWare credentials**.
-1. Select **Generate new password**.
+1. In your Azure VMware Solution private cloud, select **VMware credentials**.
+1. Select **Generate new password** under vCenter Server credentials.
1. Select the confirmation checkbox and then select **Generate password**.
To begin using Azure CLI:
``` -----
-## Update HCX Connector
+### Update HCX Connector
1. Go to the on-premises HCX Connector at https://{ip of the HCX connector appliance}:443 and sign in using the new credentials.
To begin using Azure CLI:
4. Provide the new vCenter Server user credentials and select **Edit**, which saves the credentials. The save should show as successful.
+## Reset your NSX-T Manager credentials
+
+1. In your Azure VMware Solution private cloud, select **VMware credentials**.
+1. Select **Generate new password** under NSX-T Manager credentials.
+1. Select the confirmation checkbox and then select **Generate password**.
+ ## Next steps
-Now that you've covered resetting your vCenter Server credentials for Azure VMware Solution, you may want to learn about:
+Now that you've covered resetting your vCenter Server and NSX-T Manager credentials for Azure VMware Solution, you may want to learn about:
- [Integrating Azure native services in Azure VMware Solution](integrate-azure-native-services.md) - [Deploying disaster recovery for Azure VMware Solution workloads using VMware HCX](deploy-disaster-recovery-using-vmware-hcx.md) -
azure-vmware Sql Server Hybrid Benefit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/sql-server-hybrid-benefit.md
Title: Azure Hybrid benefit for Windows server, SQL server, or Linux subscriptions
-description: Learn about Azure Hybrid benefit for Windows server, SQL server, or Linux subscriptions.
+ Title: Azure Hybrid Benefit for Windows Server, SQL Server, or Linux subscriptions
++
+description: Learn about Azure Hybrid Benefit for Windows Server, SQL Server, or Linux subscriptions.
Previously updated : 3/20/2023 Last updated : 8/28/2023
-# Azure Hybrid benefit for Windows server, SQL server, or Linux subscriptions
+# Azure Hybrid Benefit for Windows Server, SQL Server, and Linux subscriptions
-Azure Hybrid benefit is a cost saving offering from Microsoft you can use to save on cost while optimizing your hybrid environment by applying your existing Windows Server, SQL Server licenses or Linux subscriptions.
+Azure Hybrid Benefit is a cost-saving offering from Microsoft that you can use to save on costs while optimizing your hybrid environment by applying your existing Windows Server and SQL Server licenses and Linux subscriptions.
-- Save up to 85% over standard pay-as-you-go rate leveraging Windows Server and SQL Server licenses with Azure Hybrid benefit.
+- Save up to 85% over standard pay-as-you-go rate leveraging Windows Server and SQL Server licenses with Azure Hybrid Benefit.
- Use Azure Hybrid Benefit in Azure SQL platform as a service (PaaS) environment. - Apply to SQL Server one to four vCPUs exchange: For every one core of SQL Server Enterprise Edition, you get four vCPUs of SQL Managed Instance or Azure SQL Database general purpose and Hyperscale tiers, or 4 vCPUs of SQL Server Standard edition on Azure VMs.-- Use existing SQL licensing to adopt Azure Arc–enabled SQL Managed Instance.
+- Use existing SQL Server licensing to adopt Azure Arc–enabled SQL Server Managed Instance.
- Help meet compliance requirements with unlimited virtualization on Azure Dedicated Host and the Azure VMware Solution. - Get 180 days of dual-use rights between on-premises and Azure.
-## Microsoft SQL server
+## Microsoft SQL Server
-Microsoft SQL server is a core component of many business-critical applications currently running on VMware vSphere and is one of the most widely used database platforms in the market with customers running hundreds of SQL Server instances with VMware vSphere on-premises.
+Microsoft SQL Server is a core component of many business-critical applications currently running on VMware vSphere and is one of the most widely used database platforms in the market, with customers running hundreds of SQL Server instances with VMware vSphere on-premises.
-Azure VMware Solution is an ideal solution for customers looking to migrate and modernize their vSphere-based applications to the cloud, including their Microsoft SQL databases.
+Azure VMware Solution is an ideal solution for customers looking to migrate and modernize their vSphere-based applications to the cloud, including their SQL Server databases.
+
+Microsoft SQL Server Enterprise licenses are required for each Azure VMware Solution ESXi host core that is used by SQL Server workloads running in a cluster.
+The number of cores that must be licensed can be reduced by configuring the [Azure Hybrid Benefit](enable-sql-azure-hybrid-benefit.md) feature within Azure VMware Solution and by using placement policies to limit the scope of ESXi host cores that need to be licensed within a cluster. A worked example follows.
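As a purely hypothetical illustration of the arithmetic: in a three-host cluster with 36 cores per host, licensing every host core would require 108 SQL Server Enterprise core licenses, whereas a placement policy that pins all SQL Server VMs to a single host reduces that to the 36 cores of that host.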
## Next steps
Now that you've covered Azure Hybrid Benefit, you may want to learn about:
- [Migrate Microsoft SQL Server Standalone to Azure VMware Solution](migrate-sql-server-standalone-cluster.md) - [Migrate SQL Server failover cluster to Azure VMware Solution](migrate-sql-server-failover-cluster.md)-- [Migrate Microsoft SQL Server Always-On cluster to Azure VMware Solution](migrate-sql-server-always-on-cluster.md)-- [Enable SQL Azure hybrid benefit for Azure VMware Solution](migrate-sql-server-standalone-cluster.md)
+- [Migrate Microsoft SQL Server Always-On Availability Group to Azure VMware Solution](migrate-sql-server-always-on-availability-group.md)
+- [Enable SQL Azure hybrid benefit for Azure VMware Solution](enable-sql-azure-hybrid-benefit.md)
- [Configure Windows Server Failover Cluster on Azure VMware Solution vSAN](configure-windows-server-failover-cluster.md)
azure-web-pubsub Concept Azure Ad Authorization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/concept-azure-ad-authorization.md
Title: Authorize access with Azure Active Directory for Azure Web PubSub
-description: This article provides information on authorizing access to Azure Web PubSub Service resources using Azure Active Directory.
+ Title: Authorize access with Microsoft Entra ID for Azure Web PubSub
+description: This article provides information on authorizing access to Azure Web PubSub Service resources using Microsoft Entra ID.
-# Authorize access to Web PubSub resources using Azure Active Directory
+# Authorize access to Web PubSub resources using Microsoft Entra ID
-The Azure Web PubSub Service allows for the authorization of requests to Web PubSub resources by using Azure Active Directory (Azure AD).
+The Azure Web PubSub Service enables the authorization of requests to Azure Web PubSub resources by utilizing Microsoft Entra ID.
-By utilizing role-based access control (RBAC) within Azure AD, permissions can be granted to a security principal<sup>[<a href="#security-principal">1</a>]</sup>. Azure AD authenticates this security principal and returns an OAuth 2.0 token, which Web PubSub resources can then use to authorize a request.
+By utilizing role-based access control (RBAC) with Microsoft Entra ID, permissions can be granted to a security principal<sup>[<a href="#security-principal">1</a>]</sup>. Microsoft Entra ID authenticates this security principal and returns an OAuth 2.0 token, which Web PubSub resources can then use to authorize a request.
-Using Azure AD for authorization of Web PubSub requests offers improved security and ease of use compared to Access Key authorization. Microsoft recommends utilizing Azure AD authorization with Web PubSub resources when possible to ensure access with the minimum necessary privileges.
+Using Microsoft Entra ID for authorization of Web PubSub requests offers improved security and ease of use compared to Access Key authorization. Microsoft recommends utilizing Microsoft Entra ID authorization with Web PubSub resources when possible to ensure access with the minimum necessary privileges.
<a id="security-principal"></a>
-*[1] security principal: a user/resource group, an application, or a service principal such as system-assigned identities and user-assigned identities.*
+_[1] security principal: a user/resource group, an application, or a service principal such as system-assigned identities and user-assigned identities._
-## Overview of Azure AD for Web PubSub
+## Overview of Microsoft Entra ID for Web PubSub
-Authentication is necessary to access a Web PubSub resource when using Azure AD. This authentication involves two steps:
+Authentication is necessary to access a Web PubSub resource when using Microsoft Entra ID. This authentication involves two steps:
1. First, Azure authenticates the security principal and issues an OAuth 2.0 token. 2. Second, the token is added to the request to the Web PubSub resource. The Web PubSub service uses the token to check if the service principal has the access to the resource.
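A minimal sketch of these two steps from a server-side caller, assuming the `@azure/identity` package; the scope value and the returned shape are standard for `TokenCredential`, but treat the details as illustrative:

```js
const { DefaultAzureCredential } = require("@azure/identity");

async function getAuthHeader() {
  // Step 1: authenticate the security principal and obtain an OAuth 2.0 token
  // scoped to the Web PubSub resource.
  const credential = new DefaultAzureCredential();
  const accessToken = await credential.getToken(
    "https://webpubsub.azure.com/.default"
  );

  // Step 2: the token is attached to each request to the Web PubSub resource,
  // which uses it to authorize the caller.
  return { Authorization: `Bearer ${accessToken.token}` };
}
```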
-### Client-side authentication while using Azure AD
+### Client-side authentication while using Microsoft Entra ID
The negotiation server/Function App shares an access key with the Web PubSub resource, enabling the Web PubSub service to authenticate client connection requests using client tokens generated by the access key.
-However, access key is often disabled when using Azure AD to improve security.
+However, access key is often disabled when using Microsoft Entra ID to improve security.
To address this issue, we have developed a REST API that generates a client token. This token can be used to connect to the Azure Web PubSub service.
-To use this API, the negotiation server must first obtain an **Azure AD Token** from Azure to authenticate itself. The server can then call the Web PubSub Auth API with the **Azure AD Token** to retrieve a **Client Token**. The **Client Token** is then returned to the client, who can use it to connect to the Azure Web PubSub service.
+To use this API, the negotiation server must first obtain a **Microsoft Entra token** from Azure to authenticate itself. The server can then call the Web PubSub Auth API with the **Microsoft Entra token** to retrieve a **client token**. The **client token** is then returned to the client, which can use it to connect to the Azure Web PubSub service.
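A hedged sketch of such a negotiation endpoint, assuming the `@azure/web-pubsub` SDK (whose `getClientAccessToken` is one of the helper functions mentioned below) and placeholder endpoint and hub names:

```js
const { WebPubSubServiceClient } = require("@azure/web-pubsub");
const { DefaultAzureCredential } = require("@azure/identity");

// The server authenticates itself with a Microsoft Entra token via the credential.
const serviceClient = new WebPubSubServiceClient(
  "https://<resource>.webpubsub.azure.com", // placeholder endpoint
  new DefaultAzureCredential(),
  "<hub>" // placeholder hub
);

// Called from your negotiate handler; the returned URL embeds the client token.
async function negotiate(userId) {
  const { url, token } = await serviceClient.getClientAccessToken({ userId });
  return { url, token };
}
```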
We provide helper functions (for example, `GenerateClientAccessUri`) for supported programming languages. ## Assign Azure roles for access rights
-Azure Active Directory (Azure AD) authorizes access rights to secured resources through [Azure role-based access control](../role-based-access-control/overview.md). Azure Web PubSub defines a set of Azure built-in roles that encompass common sets of permissions used to access Web PubSub resources. You can also define custom roles for access to Web PubSub resources.
+Microsoft Entra ID authorizes access rights to secured resources through [Azure role-based access control](../role-based-access-control/overview.md). Azure Web PubSub defines a set of Azure built-in roles that encompass common sets of permissions used to access Web PubSub resources. You can also define custom roles for access to Web PubSub resources.
### Resource scope
Before assigning an Azure RBAC role to a security principal, it's important to i
You can scope access to Azure Web PubSub resources at the following levels, beginning with the narrowest scope: -- **An individual resource.**
+- **An individual resource.**
At this scope, a role assignment applies to only the target resource. -- **A resource group.**
+- **A resource group.**
At this scope, a role assignment applies to all of the resources in the resource group.
You can scope access to Azure Web PubSub resources at the following levels, beginni
At this scope, a role assignment applies to all of the resources in all of the resource groups in the subscription. -- **A management group.**
+- **A management group.**
At this scope, a role assignment applies to all of the resources in all of the resource groups in all of the subscriptions in the management group.
-## Azure built-in roles for Web PubSub resources.
+## Azure built-in roles for Web PubSub resources
- `Web PubSub Service Owner`
- Full access to data-plane permissions, including read/write REST APIs and Auth APIs.
+ Full access to data-plane permissions, including read/write REST APIs and Auth APIs.
- This role is the most common used for building an upstream server.
+ This role is the most commonly used role for building an upstream server.
- `Web PubSub Service Reader`
- Use to grant read-only REST APIs permissions to Web PubSub resources.
+ Use to grant read-only REST APIs permissions to Web PubSub resources.
- It's used when you'd like to write a monitoring tool that calling **ONLY** Web PubSub data-plane **READONLY** REST APIs.
+ It's used when you'd like to write a monitoring tool that calls **ONLY** Web PubSub data-plane **READONLY** REST APIs.
## Next steps
-To learn how to create an Azure application and use Azure AD auth, see
-- [Authorize request to Web PubSub resources with Azure AD from Azure applications](howto-authorize-from-application.md)
+To learn how to create an Azure application and use Microsoft Entra authorization, see
-To learn how to configure a managed identity and use Azure AD auth, see
-- [Authorize request to Web PubSub resources with Azure AD from managed identities](howto-authorize-from-managed-identity.md)
+- [Authorize request to Web PubSub resources with Microsoft Entra ID from applications](howto-authorize-from-application.md)
+
+To learn how to configure a managed identity and use Microsoft Entra ID auth, see
+
+- [Authorize request to Web PubSub resources with Microsoft Entra ID from managed identities](howto-authorize-from-managed-identity.md)
+
+To learn more about roles and role assignments, see
-To learn more about roles and role assignments, see
- [What is Azure role-based access control](../role-based-access-control/overview.md)
-To learn how to create custom roles, see
+To learn how to create custom roles, see
+ - [Steps to create a custom role](../role-based-access-control/custom-roles.md#steps-to-create-a-custom-role)
-To learn how to use only Azure AD authentication, see
-- [Disable local authentication](./howto-disable-local-auth.md)
+To learn how to use only Microsoft Entra authorization, see
+
+- [Disable local authentication](./howto-disable-local-auth.md)
azure-web-pubsub Concept Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/concept-disaster-recovery.md
# Resiliency and disaster recovery in Azure Web PubSub Service
-Resiliency and disaster recovery is a common need for online systems. Azure Web PubSub Service already guarantees 99.9% availability, but it's still a regional service.
+Resiliency and disaster recovery is a common need for online systems. Azure Web PubSub Service already guarantees 99.9% availability, but it's still a regional service. When there is a region-wide outage, it is critical for the service to continue processing real-time messages in a different region.
-Your service instance is a regional service and the instance is running in one region. When there is a region-wide outage, it is critical for the service to continue processing real-time messages in a different region. This article will explain some of the strategies you can use to deploy the service to allow for disaster recovery.
+For regional disaster recovery, we recommend the following two approaches:
+
+- **Enable Geo-Replication** (the easy way). This feature handles regional failover for you automatically. When enabled, there remains just one Azure Web PubSub instance and no code changes are introduced. Check [geo-replication](howto-enable-geo-replication.md) for details.
+- **Utilize Multiple Endpoints**. You'll learn how to do so in this document.
## High available architecture for Web PubSub service
We haven't integrated the strategy into the SDK yet, so for now the application
In summary, what the application side needs to implement is: 1. Health check. The application can either check if the service is healthy using the [service health check API](/rest/api/webpubsub/dataplane/health-api/get-service-status) periodically in the background or on demand for every **negotiate** call. 1. Negotiate logic. The application returns a healthy **primary** endpoint by default. When the **primary** endpoint is down, the application returns a healthy **secondary** endpoint.
-1. Broadcast logic. When sending messages to multiple clients, application needs to make sure it broadcasts messages to all the **healthy** endpoints.
+1. Broadcast logic. When messages are sent to multiple clients, the application needs to make sure it broadcasts messages to all the **healthy** endpoints (see the sketch below).
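A simplified sketch of the negotiate and broadcast logic above, assuming the `@azure/web-pubsub` SDK, Node.js 18+ for `fetch`, placeholder connection strings, and a hypothetical `isHealthy` probe of the service health check API:

```js
const { WebPubSubServiceClient } = require("@azure/web-pubsub");

// Assumption: two independent Web PubSub resources in different regions.
const endpoints = [
  {
    endpoint: "https://<primary>.webpubsub.azure.com",
    client: new WebPubSubServiceClient("<primary-connection-string>", "<hub>"),
  },
  {
    endpoint: "https://<secondary>.webpubsub.azure.com",
    client: new WebPubSubServiceClient("<secondary-connection-string>", "<hub>"),
  },
];

async function isHealthy(endpoint) {
  // Hypothetical health probe; swap in your own check as needed.
  const res = await fetch(`${endpoint}/api/health`);
  return res.ok;
}

// Negotiate: return a client access URL for the first healthy endpoint.
async function negotiate(userId) {
  for (const { endpoint, client } of endpoints) {
    if (await isHealthy(endpoint)) {
      return client.getClientAccessToken({ userId });
    }
  }
  throw new Error("No healthy Web PubSub endpoint available");
}

// Broadcast: send to every healthy endpoint so no connected client is missed.
async function broadcast(message) {
  for (const { endpoint, client } of endpoints) {
    if (await isHealthy(endpoint)) {
      await client.sendToAll(message);
    }
  }
}
```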
Below is a diagram that illustrates such topology:
You'll need to handle such cases at client side to make it transparent to your e
### High available architecture for client-client pattern
-For client-client pattern, currently it is not yet possible to support a zero-down-time disaster recovery. If you have high availability requirements, please consider using client-server pattern, or sending a copy of messages to the server as well.
-
-Clients connected to one Web PubSub service are not yet able to communicate with clients connected to another Web PubSub service using client-client pattern. So when using client-client pattern, the general principles are:
-1. All the app server instances return the same Web PubSub endpoint to the client **negotiate** calls. One way is to have a source-of-truth storing, checking the health status, and managing these endpoints, and returning one healthy endpoint in your primary regions.
-2. Make sure there is no active client connected to other endpoints. [Close All Connections](/rest/api/webpubsub/dataplane/web-pub-sub/close-all-connections) could be used to close all the connected clients.
+For the client-client pattern, it is not yet possible to support zero-downtime disaster recovery using multiple instances. If you have high availability requirements, please consider using [geo-replication](howto-enable-geo-replication.md).
## How to test a failover
azure-web-pubsub Concept Service Internals https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/concept-service-internals.md
Title: Azure Web PubSub service internals
-description: Learn about Azure Web PubSub Service internals, the architecture, the connections and how data is transmitted.
+description: Learn about Azure Web PubSub Service internals, the architecture, the connections and how data is transmitted.
Last updated 09/30/2022
-# Azure Web PubSub service internals
+# Azure Web PubSub service internals
Azure Web PubSub Service provides an easy way to publish/subscribe messages using simple [WebSocket](https://tools.ietf.org/html/rfc6455) connections.
Azure Web PubSub Service provides an easy way to publish/subscribe messages usin
- The service manages the WebSocket connections for you. ## Terms
-* **Service**: Azure Web PubSub Service.
+
+- **Service**: Azure Web PubSub Service.
[!INCLUDE [Terms](includes/terms.md)]
Azure Web PubSub Service provides an easy way to publish/subscribe messages usin
![Diagram showing the Web PubSub service workflow.](./media/concept-service-internals/workflow.png) Workflow as shown in the above graph:
-1. A *client* connects to the service `/client` endpoint using WebSocket transport. Service forward every WebSocket frame to the configured upstream(server). The WebSocket connection can connect with any custom subprotocol for the server to handle, or it can connect with the service-supported subprotocol `json.webpubsub.azure.v1`, which empowers the clients to do pub/sub directly. Details are described in [client protocol](#client-protocol).
+
+1. A _client_ connects to the service `/client` endpoint using WebSocket transport. The service forwards every WebSocket frame to the configured upstream (server). The WebSocket connection can connect with any custom subprotocol for the server to handle, or it can connect with the service-supported subprotocol `json.webpubsub.azure.v1`, which empowers the clients to do pub/sub directly. Details are described in [client protocol](#client-protocol).
2. On different client events, the service invokes the server using **CloudEvents protocol**. [**CloudEvents**](https://github.com/cloudevents/spec/tree/v1.0.1) is a standardized and protocol-agnostic definition of the structure and metadata description of events hosted by the Cloud Native Computing Foundation (CNCF). Detailed implementation of CloudEvents protocol relies on the server role, described in [server protocol](#server-protocol). 3. The Web PubSub server can invoke the service using the REST API to send messages to clients or to manage the connected clients. Details are described in [server protocol](#server-protocol)
Workflow as shown in the above graph:
A client connection connects to the `/client` endpoint of the service using [WebSocket protocol](https://tools.ietf.org/html/rfc6455). The WebSocket protocol provides full-duplex communication channels over a single TCP connection and was standardized by the IETF as RFC 6455 in 2011. Most languages have native support to start WebSocket connections. Our service supports two kinds of clients:+ - One is called [the simple WebSocket client](#the-simple-websocket-client) - The other is called [the PubSub WebSocket client](#the-pubsub-websocket-client) ### The simple WebSocket client+ A simple WebSocket client, as the naming indicates, is a simple WebSocket connection. It can also have its custom subprotocol. For example, in JS, a simple WebSocket client can be created using the following code.+ ```js // simple WebSocket client1
-var client1 = new WebSocket('wss://test.webpubsub.azure.com/client/hubs/hub1');
+var client1 = new WebSocket("wss://test.webpubsub.azure.com/client/hubs/hub1");
// simple WebSocket client2 with some custom subprotocol
-var client2 = new WebSocket('wss://test.webpubsub.azure.com/client/hubs/hub1', 'custom.subprotocol')
-
+var client2 = new WebSocket(
+ "wss://test.webpubsub.azure.com/client/hubs/hub1",
+ "custom.subprotocol"
+);
``` A simple WebSocket client follows a client<->server architecture, as the below sequence diagram shows: ![Diagram showing the sequence for a client connection.](./media/concept-service-internals/simple-client-sequence.png) - 1. When the client starts a WebSocket handshake, the service tries to invoke the `connect` event handler for WebSocket handshake. Developers can use this handler to handle the WebSocket handshake, determine the subprotocol to use, authenticate the client, and join the client to groups. 2. When the client is successfully connected, the service invokes a `connected` event handler. It works as a notification and doesn't block the client from sending messages. Developers can use this handler to do data storage and can respond with messages to the client. The service also pushes a `connected` event to all concerning event listeners, if any. 3. When the client sends messages, the service triggers a `message` event to the event handler to handle the messages sent. This event is a general event containing the messages sent in a WebSocket frame. Your code needs to dispatch the messages inside this event handler. If the event handler returns non-successful response code for, the service drops the client connection. The service also pushes a `message` event to all concerning event listeners, if any. If the service can't find any registered servers to receive the messages, the service also drops the connection. 4. When the client disconnects, the service tries to trigger the `disconnected` event to the event handler once it detects the disconnect. The service also pushes a `disconnected` event to all concerning event listeners, if any. #### Scenarios+ These connections can be used in a typical client-server architecture where the client sends messages to the server and the server handles incoming messages using [Event Handlers](#event-handler). It can also be used when customers apply existing [subprotocols](https://www.iana.org/assignments/websocket/websocket.xml) in their application logic. ### The PubSub WebSocket client+ The service also supports a specific subprotocol called `json.webpubsub.azure.v1`, which empowers the clients to do publish/subscribe directly instead of a round trip to the upstream server. We call the WebSocket connection with `json.webpubsub.azure.v1` subprotocol a PubSub WebSocket client. For more information, see the [Web PubSub client specification](https://github.com/Azure/azure-webpubsub/blob/main/protocols/client/client-spec.md) on GitHub. For example, in JS, a PubSub WebSocket client can be created using the following code.+ ```js // PubSub WebSocket client
-var pubsub = new WebSocket('wss://test.webpubsub.azure.com/client/hubs/hub1', 'json.webpubsub.azure.v1');
+var pubsub = new WebSocket(
+ "wss://test.webpubsub.azure.com/client/hubs/hub1",
+ "json.webpubsub.azure.v1"
+);
``` A PubSub WebSocket client can:
-* Join a group, for example:
- ```json
- {
- "type": "joinGroup",
- "group": "<group_name>"
- }
- ```
-* Leave a group, for example:
- ```json
- {
- "type": "leaveGroup",
- "group": "<group_name>"
- }
- ```
-* Publish messages to a group, for example:
- ```json
- {
- "type": "sendToGroup",
- "group": "<group_name>",
- "data": { "hello": "world" }
- }
- ```
-* Send custom events to the upstream server, for example:
-
- ```json
- {
- "type": "event",
- "event": "<event_name>",
- "data": { "hello": "world" }
- }
- ```
+
+- Join a group, for example:
+
+ ```json
+ {
+ "type": "joinGroup",
+ "group": "<group_name>"
+ }
+ ```
+
+- Leave a group, for example:
+
+ ```json
+ {
+ "type": "leaveGroup",
+ "group": "<group_name>"
+ }
+ ```
+
+- Publish messages to a group, for example:
+
+ ```json
+ {
+ "type": "sendToGroup",
+ "group": "<group_name>",
+ "data": { "hello": "world" }
+ }
+ ```
+
+- Send custom events to the upstream server, for example:
+
+ ```json
+ {
+ "type": "event",
+ "event": "<event_name>",
+ "data": { "hello": "world" }
+ }
+ ```
[PubSub WebSocket Subprotocol](./reference-json-webpubsub-subprotocol.md) contains the details of the `json.webpubsub.azure.v1` subprotocol.
-You may have noticed that for a [simple WebSocket client](#the-simple-websocket-client), the *server* is a **must have** role to receive the `message` events from clients. A simple WebSocket connection always triggers a `message` event when it sends messages, and always relies on the server-side to process messages and do other operations. With the help of the `json.webpubsub.azure.v1` subprotocol, an authorized client can join a group and publish messages to a group directly. It can also route messages to different event handlers / event listeners by customizing the *event* the message belongs.
+You may have noticed that for a [simple WebSocket client](#the-simple-websocket-client), the _server_ is a **must have** role to receive the `message` events from clients. A simple WebSocket connection always triggers a `message` event when it sends messages, and always relies on the server-side to process messages and do other operations. With the help of the `json.webpubsub.azure.v1` subprotocol, an authorized client can join a group and publish messages to a group directly. It can also route messages to different event handlers / event listeners by customizing the _event_ the message belongs.
+
+#### Scenarios
-#### Scenarios:
Such clients can be used when clients want to talk to each other. Messages are sent from `client2` to the service and the service delivers the message directly to `client1` if the clients are authorized to do so. Client1: ```js
-var client1 = new WebSocket("wss://xxx.webpubsub.azure.com/client/hubs/hub1", "json.webpubsub.azure.v1");
-client1.onmessage = e => {
- if (e.data) {
- var message = JSON.parse(e.data);
- if (message.type === "message"
- && message.group === "Group1"){
- // Only print messages from Group1
- console.log(message.data);
- }
+var client1 = new WebSocket(
+ "wss://xxx.webpubsub.azure.com/client/hubs/hub1",
+ "json.webpubsub.azure.v1"
+);
+client1.onmessage = (e) => {
+ if (e.data) {
+ var message = JSON.parse(e.data);
+ if (message.type === "message" && message.group === "Group1") {
+ // Only print messages from Group1
+ console.log(message.data);
}
+ }
};
-client1.onopen = e => {
- client1.send(JSON.stringify({
- type: "joinGroup",
- group: "Group1"
- }));
+client1.onopen = (e) => {
+ client1.send(
+ JSON.stringify({
+ type: "joinGroup",
+ group: "Group1",
+ })
+ );
}; ```
As the above example shows, `client2` sends data directly to `client1` by publis
### Client events summary Client events fall into two categories:
-* Synchronous events (blocking)
- Synchronous events block the client workflow.
- * `connect`: This event is for event handler only. When the client starts a WebSocket handshake, the event is triggered and developers can use `connect` event handler to handle the WebSocket handshake, determine the subprotocol to use, authenticate the client, and join the client to groups.
- * `message`: This event is triggered when a client sends a message.
-* Asynchronous events (non-blocking)
- Asynchronous events don't block the client workflow, it acts as some notification to server. When such an event trigger fails, the service logs the error detail.
- * `connected`: This event is triggered when a client connects to the service successfully.
- * `disconnected`: This event is triggered when a client disconnected with the service.
+
+- Synchronous events (blocking)
+ Synchronous events block the client workflow.
+ - `connect`: This event is for event handler only. When the client starts a WebSocket handshake, the event is triggered and developers can use `connect` event handler to handle the WebSocket handshake, determine the subprotocol to use, authenticate the client, and join the client to groups.
+ - `message`: This event is triggered when a client sends a message.
+- Asynchronous events (non-blocking)
+ Asynchronous events don't block the client workflow; they act as notifications to the server. When such an event trigger fails, the service logs the error details.
+ - `connected`: This event is triggered when a client connects to the service successfully.
+ - `disconnected`: This event is triggered when a client disconnected with the service.
### Client message limit+ The maximum allowed message size for one WebSocket frame is **1MB**. ### Client authentication
The following graph describes the workflow.
![Diagram showing the client authentication workflow.](./media/concept-service-internals/client-connect-workflow.png)
-As you may have noticed when we describe the PubSub WebSocket clients, that a client can publish to other clients only when it's *authorized* to. The `role`s of the client determines the *initial* permissions the client have:
+As you may have noticed when we described the PubSub WebSocket clients, a client can publish to other clients only when it's _authorized_ to. The `role`s of the client determine the _initial_ permissions the client has:
-| Role | Permission |
-|||
-| Not specified | The client can send events.
-| `webpubsub.joinLeaveGroup` | The client can join/leave any group.
-| `webpubsub.sendToGroup` | The client can publish messages to any group.
-| `webpubsub.joinLeaveGroup.<group>` | The client can join/leave group `<group>`.
-| `webpubsub.sendToGroup.<group>` | The client can publish messages to group `<group>`.
+| Role | Permission |
+| - | |
+| Not specified | The client can send events. |
+| `webpubsub.joinLeaveGroup` | The client can join/leave any group. |
+| `webpubsub.sendToGroup` | The client can publish messages to any group. |
+| `webpubsub.joinLeaveGroup.<group>` | The client can join/leave group `<group>`. |
+| `webpubsub.sendToGroup.<group>` | The client can publish messages to group `<group>`. |
The server-side can also grant or revoke permissions of the client dynamically through [server protocol](#connection-manager), as illustrated in a later section; a brief sketch follows.
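For illustration only, a hedged sketch of granting and checking a permission from the server side, assuming the `@azure/web-pubsub` SDK, a known `connectionId`, and a placeholder connection string:

```js
const { WebPubSubServiceClient } = require("@azure/web-pubsub");

const serviceClient = new WebPubSubServiceClient(
  "<connection-string>", // placeholder
  "<hub>"
);

async function allowGroupPublish(connectionId) {
  // Grant the client permission to publish to "Group1" only.
  await serviceClient.grantPermission(connectionId, "sendToGroup", {
    targetName: "Group1",
  });
  // Check that the permission is in effect.
  return serviceClient.hasPermission(connectionId, "sendToGroup", {
    targetName: "Group1",
  });
}
```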
The server-side can also grant or revoke permissions of the client dynamically t
Server protocol provides the functionality for the server to handle client events and manage the client connections and the groups. In general, server protocol contains three roles:+ 1. [Event handler](#event-handler) 1. [Connection manager](#connection-manager) 1. [Event listener](#event-listener) ### Event handler+ The event handler handles the incoming client events. Event handlers are registered and configured in the service through the portal or Azure CLI. When a client event is triggered, the service can identify if the event is to be handled or not. Now we use `PUSH` mode to invoke the event handler. The event handler on the server side exposes a publicly accessible endpoint for the service to invoke when the event is triggered. It acts as a **webhook**. Web PubSub service delivers client events to the upstream webhook using the [CloudEvents HTTP protocol](https://github.com/cloudevents/spec/blob/v1.0.1/http-protocol-binding.md).
When doing the validation, the `{event}` parameter is resolved to `validate`. Fo
For now, we don't support [WebHook-Request-Rate](https://github.com/cloudevents/spec/blob/v1.0/http-webhook.md#414-webhook-request-rate) and [WebHook-Request-Callback](https://github.com/cloudevents/spec/blob/v1.0/http-webhook.md#413-webhook-request-callback).
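As a hedged illustration of the validation handshake (assuming an Express app listening at the configured event handler path), the upstream responds to the service's `OPTIONS` request by echoing back an allowed origin:

```js
const express = require("express");
const app = express();

// The service validates the webhook with an OPTIONS request carrying
// WebHook-Request-Origin; respond with WebHook-Allowed-Origin to accept it.
app.options("/eventhandler", (req, res) => {
  const origin = req.get("WebHook-Request-Origin");
  // Assumption: check the origin against your own allowlist before echoing.
  res.set("WebHook-Allowed-Origin", origin || "*");
  res.status(200).end();
});

app.listen(8080);
```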
-#### Authentication between service and webhook
+#### Authentication/Authorization between service and webhook
+ - Anonymous mode - Simple authentication that `code` is provided through the configured Webhook URL.-- Use Azure Active Directory (Azure AD) authentication. For more information, see [how to use managed identity](howto-use-managed-identity.md) for details.
- - Step1: Enable Identity for the Web PubSub service
- - Step2: Select from existing Azure AD application that stands for your webhook web app
+- Use Microsoft Entra authorization. For more information, see [how to use managed identity](howto-use-managed-identity.md) for details.
+ - Step1: Enable Identity for the Web PubSub service
+ - Step2: Select from existing Microsoft Entra application that stands for your webhook web app
### Connection manager
-The server is by nature an authorized user. With the help of the *event handler role*, the server knows the metadata of the clients, for example, `connectionId` and `userId`, so it can:
- - Close a client connection
- - Send messages to a client
- - Send messages to clients that belong to the same user
- - Add a client to a group
- - Add clients authenticated as the same user to a group
- - Remove a client from a group
- - Remove clients authenticated as the same user from a group
- - Publish messages to a group
+The server is by nature an authorized user. With the help of the _event handler role_, the server knows the metadata of the clients, for example, `connectionId` and `userId`, so it can:
+
+- Close a client connection
+- Send messages to a client
+- Send messages to clients that belong to the same user
+- Add a client to a group
+- Add clients authenticated as the same user to a group
+- Remove a client from a group
+- Remove clients authenticated as the same user from a group
+- Publish messages to a group
It can also grant or revoke publish/join permissions for a PubSub client:
- - Grant publish/join permissions to some specific group or to all groups
- - Revoke publish/join permissions for some specific group or for all groups
- - Check if the client has permission to join or publish to some specific group or to all groups
+
+- Grant publish/join permissions to some specific group or to all groups
+- Revoke publish/join permissions for some specific group or for all groups
+- Check if the client has permission to join or publish to some specific group or to all groups
The service provides REST APIs for the server to do connection management.
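A brief sketch of a few of the operations above through the `@azure/web-pubsub` SDK, which wraps those REST APIs; the connection string and hub name are placeholders:

```js
const { WebPubSubServiceClient } = require("@azure/web-pubsub");

const serviceClient = new WebPubSubServiceClient("<connection-string>", "<hub>");

async function manageConnections() {
  await serviceClient.sendToUser("user1", { hello: "world" }); // message one user
  await serviceClient.group("Group1").addUser("user1"); // add the user to a group
  await serviceClient.group("Group1").sendToAll("hi, Group1"); // publish to the group
}
```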
You can combine an [event handler](#event-handler) and event listeners for the s
Web PubSub service delivers client events to event listeners using [CloudEvents AMQP extension for Azure Web PubSub](reference-cloud-events-amqp.md). ### Summary
-You may have noticed that the *event handler role* handles communication from the service to the server while *the manager role* handles communication from the server to the service. After combining the two roles, the data flow between service and server looks similar to the following diagram using HTTP protocol.
+
+You may have noticed that the _event handler role_ handles communication from the service to the server while _the manager role_ handles communication from the server to the service. After combining the two roles, the data flow between service and server looks similar to the following diagram using HTTP protocol.
![Diagram showing the Web PubSub service bi-directional workflow.](./media/concept-service-internals/http-service-server.png)
azure-web-pubsub Howto Authorize From Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/howto-authorize-from-application.md
Title: Authorize request to Web PubSub resources with Azure AD from Azure applications
-description: This article provides information about authorizing request to Web PubSub resources with Azure AD from Azure applications
+ Title: Authorize request to Web PubSub resources with Microsoft Entra ID from applications
+description: This article provides information about authorizing request to Web PubSub resources with Microsoft Entra ID from applications
-# Authorize request to Web PubSub resources with Azure AD from Azure applications
+# Authorize request to Web PubSub resources with Microsoft Entra ID from Azure applications
-Azure Web PubSub Service supports Azure Active Directory (Azure AD) authorizing requests from [Azure applications](../active-directory/develop/app-objects-and-service-principals.md).
+Azure Web PubSub Service supports Microsoft Entra ID for authorizing requests from [applications](../active-directory/develop/app-objects-and-service-principals.md).
This article shows how to configure your Web PubSub resource and code to authorize the request to a Web PubSub resource from an Azure application.
This article shows how to configure your Web PubSub resource and code to author
The first step is to register an Azure application.
-1. On the [Azure portal](https://portal.azure.com/), search for and select **Azure Active Directory**
+1. On the [Azure portal](https://portal.azure.com/), search for and select **Microsoft Entra ID**
2. Under **Manage** section, select **App registrations**. 3. Click **New registration**.
- ![Screenshot of registering an application.](./media/howto-authorize-from-application/register-an-application.png)
+ ![Screenshot of registering an application.](./media/howto-authorize-from-application/register-an-application.png)
4. Enter a display **Name** for your application. 5. Click **Register** to confirm the register.
Once you have your application registered, you can find the **Application (clien
![Screenshot of an application.](./media/howto-authorize-from-application/application-overview.png) To learn more about registering an application, see+ - [Quickstart: Register an application with the Microsoft identity platform](../active-directory/develop/quickstart-register-app.md). ## Add credentials
The application requires a client secret to prove its identity when requesting a
1. Under **Manage** section, select **Certificates & secrets** 1. On the **Client secrets** tab, click **New client secret**.
-![Screenshot of creating a client secret.](./media/howto-authorize-from-application/new-client-secret.png)
+ ![Screenshot of creating a client secret.](./media/howto-authorize-from-application/new-client-secret.png)
1. Enter a **description** for the client secret, and choose an **expiration time**.
-1. Copy the value of the **client secret** and then paste it to a secure location.
- > [!NOTE]
- > The secret will display only once.
+1. Copy the value of the **client secret** and then paste it to a secure location.
+ > [!NOTE]
+ > The secret will display only once.
+ ### Certificate You can also upload a certificate instead of creating a client secret.
To learn more about adding credentials, see
## Add role assignments on Azure portal
-This sample shows how to assign a `Web PubSub Service Owner` role to a service principal (application) over a Web PubSub resource.
+This sample shows how to assign a `Web PubSub Service Owner` role to a service principal (application) over a Web PubSub resource.
-> [!Note]
+> [!NOTE]
> A role can be assigned to any scope, including management group, subscription, resource group or a single resource. To learn more about scope, see [Understand scope for Azure RBAC](../role-based-access-control/scope-overview.md)+ 1. On the [Azure portal](https://portal.azure.com/), navigate to your Web PubSub resource. 1. Click **Access Control (IAM)** to display access control settings for the Azure Web PubSub.
This sample shows how to assign a `Web PubSub Service Owner` role to a service p
1. Click **Select Members**
-3. Search for and select the application that you would like to assign the role to.
+1. Search for and select the application that you would like to assign the role to.
1. Click **Select** to confirm the selection.
-4. Click **Next**.
+1. Click **Next**.
![Screenshot of assigning role to service principals.](./media/howto-authorize-from-application/assign-role-to-service-principals.png)
-5. Click **Review + assign** to confirm the change.
+1. Click **Review + assign** to confirm the change.
> [!IMPORTANT] > Azure role assignments may take up to 30 minutes to propagate.
-To learn more about how to assign and manage Azure role assignments, see these articles:
+> To learn more about how to assign and manage Azure role assignments, see these articles:
+ - [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md) - [Assign Azure roles using the REST API](../role-based-access-control/role-assignments-rest.md) - [Assign Azure roles using Azure PowerShell](../role-based-access-control/role-assignments-powershell.md) - [Assign Azure roles using Azure CLI](../role-based-access-control/role-assignments-cli.md) - [Assign Azure roles using Azure Resource Manager templates](../role-based-access-control/role-assignments-template.md)
-## Use Postman to get the Azure AD token
+## Use Postman to get the Microsoft Entra token
+ 1. Launch Postman 2. For the method, select **GET**.
To learn more about how to assign and manage Azure role assignments, see these a
4. On the **Headers** tab, add **Content-Type** key and `application/x-www-form-urlencoded` for the value.
-![Screenshot of the basic info using postman to get the token.](./media/howto-authorize-from-application/get-azure-ad-token-using-postman.png)
+ ![Screenshot of the basic info using postman to get the token.](./media/howto-authorize-from-application/get-azure-ad-token-using-postman.png)
5. Switch to the **Body** tab, and add the following keys and values.
- 1. Select **x-www-form-urlencoded**.
- 2. Add `grant_type` key, and type `client_credentials` for the value.
- 3. Add `client_id` key, and paste the value of **Application (client) ID** in the **Overview** tab of the application you created earlier.
- 4. Add `client_secret` key, and paste the value of client secret you noted down earlier.
- 5. Add `resource` key, and type `https://webpubsub.azure.com` for the value.
+ 1. Select **x-www-form-urlencoded**.
+ 2. Add `grant_type` key, and type `client_credentials` for the value.
+ 3. Add `client_id` key, and paste the value of **Application (client) ID** in the **Overview** tab of the application you created earlier.
+ 4. Add `client_secret` key, and paste the value of client secret you noted down earlier.
+ 5. Add `resource` key, and type `https://webpubsub.azure.com` for the value.
-![Screenshot of the body parameters when using postman to get the token.](./media/howto-authorize-from-application/get-azure-ad-token-using-postman-body.png)
+ ![Screenshot of the body parameters when using postman to get the token.](./media/howto-authorize-from-application/get-azure-ad-token-using-postman-body.png)
-6. Select **Send** to send the request to get the token. You see the token in the `access_token` field.
+6. Select **Send** to send the request to get the token. You see the token in the `access_token` field.
-![Screenshot of the response token when using postman to get the token.](./media/howto-authorize-from-application/get-azure-ad-token-using-postman-response.png)
+ ![Screenshot of the response token when using postman to get the token.](./media/howto-authorize-from-application/get-azure-ad-token-using-postman-response.png)
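The same request can be scripted; below is a hedged sketch using Node.js 18+ `fetch`, assuming the Microsoft identity platform v1.0 token endpoint (which matches the `resource` parameter used above) and placeholder tenant and application values:

```js
async function getEntraToken() {
  const body = new URLSearchParams({
    grant_type: "client_credentials",
    client_id: "<application-client-id>", // placeholder
    client_secret: "<client-secret>", // placeholder
    resource: "https://webpubsub.azure.com",
  });

  const response = await fetch(
    "https://login.microsoftonline.com/<tenant-id>/oauth2/token", // placeholder tenant
    {
      method: "POST",
      headers: { "Content-Type": "application/x-www-form-urlencoded" },
      body,
    }
  );

  const { access_token } = await response.json();
  return access_token;
}
```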
-## Sample codes using Azure AD auth
+## Sample codes using Microsoft Entra authorization
We officially support 4 programming languages:
We officially support 4 programming languages:
See the following related articles: -- [Overview of Azure AD for Web PubSub](concept-azure-ad-authorization.md)-- [Authorize request to Web PubSub resources with Azure AD from managed identities](howto-authorize-from-managed-identity.md)-- [Disable local authentication](./howto-disable-local-auth.md)
+- [Overview of Microsoft Entra ID for Web PubSub](concept-azure-ad-authorization.md)
+- [Authorize request to Web PubSub resources with Microsoft Entra ID from managed identities](howto-authorize-from-managed-identity.md)
+- [Disable local authentication](./howto-disable-local-auth.md)
azure-web-pubsub Howto Authorize From Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/howto-authorize-from-managed-identity.md
Title: Authorize request to Web PubSub resources with Azure AD from managed identities
-description: This article provides information about authorizing request to Web PubSub resources with Azure AD from managed identities
+ Title: Authorize request to Web PubSub resources with Microsoft Entra ID from managed identities
+description: This article provides information about authorizing request to Web PubSub resources with Microsoft Entra ID from managed identities
-# Authorize request to Web PubSub resources with Azure AD from managed identities
-Azure Web PubSub Service supports Azure Active Directory (Azure AD) authorizing requests from [Managed identities for Azure resources](../active-directory/managed-identities-azure-resources/overview.md).
+# Authorize request to Web PubSub resources with Microsoft Entra ID from managed identities
+
+Azure Web PubSub Service supports Microsoft Entra ID for authorizing requests from [managed identities](../active-directory/managed-identities-azure-resources/overview.md).
This article shows how to configure your Web PubSub resource and code to authorize the request to a Web PubSub resource from a managed identity.
This is an example for configuring `System-assigned managed identity` on a `Virt
1. Click the **Save** button to confirm the change. ### How to create user-assigned managed identities+ - [Create a user-assigned managed identity](../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md#create-a-user-assigned-managed-identity) ### How to configure managed identities on other platforms
This is an example for configuring `System-assigned managed identity` on a `Virt
- [How to use managed identities for App Service and Azure Functions](../app-service/overview-managed-identity.md).
-## Add role assignments on Azure portal
+## Add role assignments on Azure portal
-This sample shows how to assign a `Web PubSub Service Owner` role to a system-assigned identity over a Web PubSub resource.
+This sample shows how to assign a `Web PubSub Service Owner` role to a system-assigned identity over a Web PubSub resource.
> [!Note] > A role can be assigned to any scope, including management group, subscription, resource group or a single resource. To learn more about scope, see [Understand scope for Azure RBAC](../role-based-access-control/scope-overview.md)+ 1. Open [Azure portal](https://portal.azure.com/), navigate to your Web PubSub resource. 1. Click **Access Control (IAM)** to display access control settings for the Azure Web PubSub.
This sample shows how to assign a `Web PubSub Service Owner` role to a system-as
1. Click **Select** to confirm the selection.
-2. Click **Next**.
+1. Click **Next**.
![Screenshot of assigning role to managed identities.](./media/howto-authorize-from-managed-identity/assign-role-to-managed-identities.png)
-3. Click **Review + assign** to confirm the change.
+1. Click **Review + assign** to confirm the change.
> [!IMPORTANT] > Azure role assignments may take up to 30 minutes to propagate.
-To learn more about how to assign and manage Azure role assignments, see these articles:
+> To learn more about how to assign and manage Azure role assignments, see these articles:
+ - [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md) - [Assign Azure roles using the REST API](../role-based-access-control/role-assignments-rest.md) - [Assign Azure roles using Azure PowerShell](../role-based-access-control/role-assignments-powershell.md)
We officially support 4 programming languages:
See the following related articles: -- [Overview of Azure AD for Web PubSub](concept-azure-ad-authorization.md)-- [Authorize request to Web PubSub resources with Azure AD from Azure applications](howto-authorize-from-application.md)-- [Disable local authentication](./howto-disable-local-auth.md)
+- [Overview of Microsoft Entra ID for Web PubSub](concept-azure-ad-authorization.md)
+- [Authorize request to Web PubSub resources with Microsoft Entra ID from Azure applications](howto-authorize-from-application.md)
+- [Disable local authentication](./howto-disable-local-auth.md)
azure-web-pubsub Howto Create Serviceclient With Java And Azure Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/howto-create-serviceclient-with-java-and-azure-identity.md
# How to create a `WebPubSubServiceClient` with Java and Azure Identity
-This how-to guide shows you how to create a `WebPubSubServiceClient` with Java and Azure Identity.
+This how-to guide shows you how to create a `WebPubSubServiceClient` using Microsoft Entra ID in Java.
## Requirements
This how-to guide shows you how to create a `WebPubSubServiceClient` with Java a
1. Create a `TokenCredential` with Azure Identity SDK.
- ```java
- package com.webpubsub.tutorial;
+ ```java
+ package com.webpubsub.tutorial;
- import com.azure.core.credential.TokenCredential;
- import com.azure.identity.DefaultAzureCredentialBuilder;
+ import com.azure.core.credential.TokenCredential;
+ import com.azure.identity.DefaultAzureCredentialBuilder;
- public class App {
+ public class App {
- public static void main(String[] args) {
- TokenCredential credential = new DefaultAzureCredentialBuilder().build();
- }
- }
- ```
+ public static void main(String[] args) {
+ TokenCredential credential = new DefaultAzureCredentialBuilder().build();
+ }
+ }
+ ```
- `credential` can be any class that inherits from `TokenCredential` class.
+ `credential` can be any class that inherits from `TokenCredential` class.
- - EnvironmentCredential
- - ClientSecretCredential
- - ClientCertificateCredential
- - ManagedIdentityCredential
- - VisualStudioCredential
- - VisualStudioCodeCredential
- - AzureCliCredential
+ - EnvironmentCredential
+ - ClientSecretCredential
+ - ClientCertificateCredential
+ - ManagedIdentityCredential
+ - VisualStudioCredential
+ - VisualStudioCodeCredential
+ - AzureCliCredential
- To learn more, see [Azure Identity client library for Java](/java/api/overview/azure/identity-readme)
+ To learn more, see [Azure Identity client library for Java](/java/api/overview/azure/identity-readme)
-2. Then create a `client` with `endpoint`, `hub`, and `credential`.
+2. Then create a `client` with `endpoint`, `hub`, and `credential`.
- ```Java
- package com.webpubsub.tutorial;
+ ```Java
+ package com.webpubsub.tutorial;
- import com.azure.core.credential.TokenCredential;
- import com.azure.identity.DefaultAzureCredentialBuilder;
- import com.azure.messaging.webpubsub.WebPubSubServiceClient;
- import com.azure.messaging.webpubsub.WebPubSubServiceClientBuilder;
+ import com.azure.core.credential.TokenCredential;
+ import com.azure.identity.DefaultAzureCredentialBuilder;
+ import com.azure.messaging.webpubsub.WebPubSubServiceClient;
+ import com.azure.messaging.webpubsub.WebPubSubServiceClientBuilder;
- public class App {
- public static void main(String[] args) {
+ public class App {
+ public static void main(String[] args) {
- TokenCredential credential = new DefaultAzureCredentialBuilder().build();
+ TokenCredential credential = new DefaultAzureCredentialBuilder().build();
- // create the service client
- WebPubSubServiceClient client = new WebPubSubServiceClientBuilder()
- .endpoint("<endpoint>")
- .credential(credential)
- .hub("<hub>")
- .buildClient();
- }
- }
- ```
+ // create the service client
+ WebPubSubServiceClient client = new WebPubSubServiceClientBuilder()
+ .endpoint("<endpoint>")
+ .credential(credential)
+ .hub("<hub>")
+ .buildClient();
+ }
+ }
+ ```
- Learn how to use this client, see [Azure Web PubSub service client library for Java](/java/api/overview/azure/messaging-webpubsub-readme)
+ To learn how to use this client, see [Azure Web PubSub service client library for Java](/java/api/overview/azure/messaging-webpubsub-readme)
## Complete sample -- [Simple chatroom with AAD Auth](https://github.com/Azure/azure-webpubsub/tree/main/samples/java/chatapp-aad)
+- [Simple chatroom with Microsoft Entra ID authorization](https://github.com/Azure/azure-webpubsub/tree/main/samples/java/chatapp-aad)
azure-web-pubsub Howto Create Serviceclient With Javascript And Azure Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/howto-create-serviceclient-with-javascript-and-azure-identity.md
# How to create a `WebPubSubServiceClient` with JavaScript and Azure Identity
-This how-to guide shows you how to create a `WebPubSubServiceClient` using Azure Active Directory in JavaScript.
+This how-to guide shows you how to create a `WebPubSubServiceClient` using Microsoft Entra ID in JavaScript.
## Requirements
This how-to guide shows you how to create a `WebPubSubServiceClient` using Azure
1. Create a `TokenCredential` with Azure Identity SDK.
- ```javascript
- const { DefaultAzureCredential } = require('@azure/identity')
+ ```javascript
+ const { DefaultAzureCredential } = require("@azure/identity");
- let credential = new DefaultAzureCredential();
- ```
+ let credential = new DefaultAzureCredential();
+ ```
- `credential` can be any class that inherits from `TokenCredential` class.
+ `credential` can be any class that inherits from the `TokenCredential` class.
- - EnvironmentCredential
- - ClientSecretCredential
- - ClientCertificateCredential
- - ManagedIdentityCredential
- - VisualStudioCredential
- - VisualStudioCodeCredential
- - AzureCliCredential
+ - EnvironmentCredential
+ - ClientSecretCredential
+ - ClientCertificateCredential
+ - ManagedIdentityCredential
+ - VisualStudioCredential
+ - VisualStudioCodeCredential
+ - AzureCliCredential
- To learn more, see [Azure Identity client library for JavaScript](/javascript/api/overview/azure/identity-readme)
+ To learn more, see [Azure Identity client library for JavaScript](/javascript/api/overview/azure/identity-readme)
-2. Then create a `client` with `endpoint`, `hub`, and `credential`.
+2. Then create a `client` with `endpoint`, `hub`, and `credential`.
- ```javascript
- const { DefaultAzureCredential } = require('@azure/identity')
+ ```javascript
+ const { DefaultAzureCredential } = require("@azure/identity");
- let credential = new DefaultAzureCredential();
+ let credential = new DefaultAzureCredential();
- let serviceClient = new WebPubSubServiceClient("<endpoint>", credential, "<hub>");
- ```
+ let serviceClient = new WebPubSubServiceClient(
+ "<endpoint>",
+ credential,
+ "<hub>"
+ );
+ ```
- Learn how to use this client, see [Azure Web PubSub service client library for JavaScript](/javascript/api/overview/azure/web-pubsub-readme)
+ To learn how to use this client, see [Azure Web PubSub service client library for JavaScript](/javascript/api/overview/azure/web-pubsub-readme).
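As a hedged illustration of the client in use, here's a minimal sketch that broadcasts a plain-text message to every connected client, reusing the `serviceClient` built above (the message text is illustrative):

```javascript
// Reusing the `serviceClient` built in the previous snippet: broadcast
// a plain-text message to all clients currently connected to the hub.
await serviceClient.sendToAll("Hello, Web PubSub!", {
  contentType: "text/plain",
});
```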
## Complete sample
-- [Simple chatroom with AAD Auth](https://github.com/Azure/azure-webpubsub/tree/main/samples/javascript/chatapp-aad)
+- [Simple chatroom with Microsoft Entra ID authorization](https://github.com/Azure/azure-webpubsub/tree/main/samples/javascript/chatapp-aad)
azure-web-pubsub Howto Create Serviceclient With Net And Azure Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/howto-create-serviceclient-with-net-and-azure-identity.md
# How to create a `WebPubSubServiceClient` with .NET and Azure Identity
-This how-to guide shows you how to create a `WebPubSubServiceClient` using Azure Active Directory in .NET.
+This how-to guide shows you how to create a `WebPubSubServiceClient` using Microsoft Entra ID in .NET.
## Requirements
This how-to guide shows you how to create a `WebPubSubServiceClient` using Azure
- Install [Azure.Messaging.WebPubSub](https://www.nuget.org/packages/Azure.Messaging.WebPubSub) from nuget.org
```bash
- Install-Package Azure.Messaging.WebPubSub
+ Install-Package Azure.Messaging.WebPubSub
```
## Sample codes
1. Create a `TokenCredential` with Azure Identity SDK.
- ```C#
- using Azure.Identity;
-
- namespace chatapp
- {
- public class Program
- {
- public static void Main(string[] args)
- {
- var credential = new DefaultAzureCredential();
- }
- }
- }
- ```
-
- `credential` can be any class that inherits from `TokenCredential` class.
-
- - EnvironmentCredential
- - ClientSecretCredential
- - ClientCertificateCredential
- - ManagedIdentityCredential
- - VisualStudioCredential
- - VisualStudioCodeCredential
- - AzureCliCredential
-
- To learn more, see [Azure Identity client library for .NET](/dotnet/api/overview/azure/identity-readme)
-
-2. Then create a `client` with `endpoint`, `hub`, and `credential`.
-
- ```C#
- using Azure.Identity;
- using Azure.Messaging.WebPubSub;
-
- public class Program
- {
- public static void Main(string[] args)
- {
- var credential = new DefaultAzureCredential();
- var client = new WebPubSubServiceClient(new Uri("<endpoint>"), "<hub>", credential);
- }
- }
- ```
-
- Or inject it into `IServiceCollections` with our `BuilderExtensions`.
-
- ```C#
- using System;
-
- using Azure.Identity;
-
- using Microsoft.Extensions.Azure;
- using Microsoft.Extensions.Configuration;
- using Microsoft.Extensions.DependencyInjection;
-
- namespace chatapp
- {
- public class Startup
- {
- public Startup(IConfiguration configuration)
- {
- Configuration = configuration;
- }
-
- public IConfiguration Configuration { get; }
-
- public void ConfigureServices(IServiceCollection services)
- {
- services.AddAzureClients(builder =>
- {
- var credential = new DefaultAzureCredential();
- builder.AddWebPubSubServiceClient(new Uri("<endpoint>"), "<hub>", credential);
- });
- }
- }
- }
- ```
-
- Learn how to use this client, see [Azure Web PubSub service client library for .NET](/dotnet/api/overview/azure/messaging.webpubsub-readme)
+ ```C#
+ using Azure.Identity;
+
+ namespace chatapp
+ {
+ public class Program
+ {
+ public static void Main(string[] args)
+ {
+ var credential = new DefaultAzureCredential();
+ }
+ }
+ }
+ ```
+
+ `credential` can be any class that inherits from the `TokenCredential` class.
+
+ - EnvironmentCredential
+ - ClientSecretCredential
+ - ClientCertificateCredential
+ - ManagedIdentityCredential
+ - VisualStudioCredential
+ - VisualStudioCodeCredential
+ - AzureCliCredential
+
+ To learn more, see [Azure Identity client library for .NET](/dotnet/api/overview/azure/identity-readme)
+
+2. Then create a `client` with `endpoint`, `hub`, and `credential`.
+
+ ```C#
+ using Azure.Identity;
+ using Azure.Messaging.WebPubSub;
+
+ public class Program
+ {
+ public static void Main(string[] args)
+ {
+ var credential = new DefaultAzureCredential();
+ var client = new WebPubSubServiceClient(new Uri("<endpoint>"), "<hub>", credential);
+ }
+ }
+ ```
+
+ Or inject it into `IServiceCollections` with our `BuilderExtensions`.
+
+ ```C#
+ using System;
+
+ using Azure.Identity;
+
+ using Microsoft.Extensions.Azure;
+ using Microsoft.Extensions.Configuration;
+ using Microsoft.Extensions.DependencyInjection;
+
+ namespace chatapp
+ {
+ public class Startup
+ {
+ public Startup(IConfiguration configuration)
+ {
+ Configuration = configuration;
+ }
+
+ public IConfiguration Configuration { get; }
+
+ public void ConfigureServices(IServiceCollection services)
+ {
+ services.AddAzureClients(builder =>
+ {
+ var credential = new DefaultAzureCredential();
+ builder.AddWebPubSubServiceClient(new Uri("<endpoint>"), "<hub>", credential);
+ });
+ }
+ }
+ }
+ ```
+
+ To learn how to use this client, see [Azure Web PubSub service client library for .NET](/dotnet/api/overview/azure/messaging.webpubsub-readme).
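As a hedged illustration of the client in use, here's a minimal sketch that broadcasts a plain-text message to every connected client, reusing the `client` built above (the message text is illustrative):

```C#
// Reusing the `client` built in the previous snippet: broadcast a
// plain-text message to all clients currently connected to the hub.
client.SendToAll("Hello, Web PubSub!");
```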
## Complete sample
-- [Simple chatroom with AAD Auth](https://github.com/Azure/azure-webpubsub/tree/main/samples/csharp/chatapp-aad)
+- [Simple chatroom with Microsoft Entra ID authorization](https://github.com/Azure/azure-webpubsub/tree/main/samples/csharp/chatapp-aad)
azure-web-pubsub Howto Create Serviceclient With Python And Azure Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/howto-create-serviceclient-with-python-and-azure-identity.md
# How to create a `WebPubSubServiceClient` with Python and Azure Identity
-This how-to guide shows you how to create a `WebPubSubServiceClient` using Azure Active Directory in Python.
+This how-to guide shows you how to create a `WebPubSubServiceClient` using Microsoft Entra ID in Python.
## Requirements
This how-to guide shows you how to create a `WebPubSubServiceClient` using Azure
1. Create a `TokenCredential` with Azure Identity SDK.
- ```python
- from azure.identity import DefaultAzureCredential
+ ```python
+ from azure.identity import DefaultAzureCredential
- credential = DefaultAzureCredential()
- ```
+ credential = DefaultAzureCredential()
+ ```
- `credential` can be any class that inherits from `TokenCredential` class.
+ `credential` can be any class that inherits from the `TokenCredential` class.
- - EnvironmentCredential
- - ClientSecretCredential
- - ClientCertificateCredential
- - ManagedIdentityCredential
- - VisualStudioCredential
- - VisualStudioCodeCredential
- - AzureCliCredential
+ - EnvironmentCredential
+ - ClientSecretCredential
+ - ClientCertificateCredential
+ - ManagedIdentityCredential
+ - VisualStudioCredential
+ - VisualStudioCodeCredential
+ - AzureCliCredential
- To learn more, see [Azure Identity client library for Python](/python/api/overview/azure/identity-readme)
+ To learn more, see [Azure Identity client library for Python](/python/api/overview/azure/identity-readme)
-2. Then create a `client` with `endpoint`, `hub`, and `credential`.
+2. Then create a `client` with `endpoint`, `hub`, and `credential`.
- ```python
- from azure.identity import DefaultAzureCredential
+ ```python
+ from azure.identity import DefaultAzureCredential
- credential = DefaultAzureCredential()
+ credential = DefaultAzureCredential()
- client = WebPubSubServiceClient(hub="<hub>", endpoint="<endpoint>", credential=credential)
- ```
+ client = WebPubSubServiceClient(hub="<hub>", endpoint="<endpoint>", credential=credential)
+ ```
- Learn how to use this client, see [Azure Web PubSub service client library for Python](/python/api/overview/azure/messaging-webpubsubservice-readme)
+ To learn how to use this client, see [Azure Web PubSub service client library for Python](/python/api/overview/azure/messaging-webpubsubservice-readme).
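As a hedged illustration of the client in use, here's a minimal sketch that broadcasts a plain-text message to every connected client, reusing the `client` built above (the message text is illustrative):

```python
# Reusing the `client` built in the previous snippet: broadcast a
# plain-text message to all clients currently connected to the hub.
client.send_to_all(message="Hello, Web PubSub!", content_type="text/plain")
```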
## Complete sample
-- [Simple chatroom with AAD Auth](https://github.com/Azure/azure-webpubsub/tree/main/samples/python/chatapp-aad)
+- [Simple chatroom with Microsoft Entra ID authorization](https://github.com/Azure/azure-webpubsub/tree/main/samples/python/chatapp-aad)
azure-web-pubsub Howto Develop Create Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/howto-develop-create-instance.md
Title: Create an Azure Web PubSub resource
-description: Quickstart showing how to create a Web PubSub resource from Azure portal, using Azure CLI and a Bicep template
+description: Quickstart showing how to create a Web PubSub resource from Azure portal, using Azure CLI and a Bicep template
Last updated 03/13/2023
zone_pivot_groups: azure-web-pubsub-create-resource-methods
+
# Create a Web PubSub resource
## Prerequisites
+
> [!div class="checklist"]
-> * An Azure account with an active subscription. [Create a free Azure account](https://azure.microsoft.com/free/), if don't have one already.
+>
+> - An Azure account with an active subscription. [Create a free Azure account](https://azure.microsoft.com/free/), if you don't have one already.
> [!TIP]
> Web PubSub includes a generous **free tier** that can be used for testing and production purposes.
-
-
+
::: zone pivot="method-azure-portal"
+
## Create a resource from Azure portal
-1. Select the New button found on the upper left-hand corner of the Azure portal. In the New screen, type **Web PubSub** in the search box and press enter.
+1. Select the New button found on the upper left-hand corner of the Azure portal. In the New screen, type **Web PubSub** in the search box and press enter.
- :::image type="content" source="./media/create-instance-portal/search-web-pubsub-in-portal.png" alt-text="Screenshot of searching the Azure Web PubSub in portal.":::
+ :::image type="content" source="./media/create-instance-portal/search-web-pubsub-in-portal.png" alt-text="Screenshot of searching the Azure Web PubSub in portal.":::
2. Select **Web PubSub** from the search results, then select **Create**.
3. Enter the following settings.
- | Setting | Suggested value | Description |
- | | - | -- |
- | **Resource name** | Globally unique name | The globally unique Name that identifies your new Web PubSub service instance. Valid characters are `a-z`, `A-Z`, `0-9`, and `-`. |
- | **Subscription** | Your subscription | The Azure subscription under which this new Web PubSub service instance is created. |
- | **[Resource Group]** | myResourceGroup | Name for the new resource group in which to create your Web PubSub service instance. |
- | **Location** | West US | Choose a [region](https://azure.microsoft.com/regions/) near you. |
- | **Pricing tier** | Free | You can first try Azure Web PubSub service for free. Learn more details about [Azure Web PubSub service pricing tiers](https://azure.microsoft.com/pricing/details/web-pubsub/) |
- | **Unit count** | - | Unit count specifies how many connections your Web PubSub service instance can accept. Each unit supports 1,000 concurrent connections at most. It is only configurable in the Standard tier. |
+ | Setting | Suggested value | Description |
+ | -- | -- | |
+ | **Resource name** | Globally unique name | The globally unique name that identifies your new Web PubSub service instance. Valid characters are `a-z`, `A-Z`, `0-9`, and `-`. |
+ | **Subscription** | Your subscription | The Azure subscription under which this new Web PubSub service instance is created. |
+ | **[Resource Group]** | myResourceGroup | Name for the new resource group in which to create your Web PubSub service instance. |
+ | **Location** | West US | Choose a [region](https://azure.microsoft.com/regions/) near you. |
+ | **Pricing tier** | Free | You can first try Azure Web PubSub service for free. For details, see [Azure Web PubSub service pricing tiers](https://azure.microsoft.com/pricing/details/web-pubsub/). |
+ | **Unit count** | - | Unit count specifies how many connections your Web PubSub service instance can accept. Each unit supports 1,000 concurrent connections at most. It is only configurable in the Standard tier. |
- :::image type="content" source="./media/howto-develop-create-instance/create-web-pubsub-instance-in-portal.png" alt-text="Screenshot of creating the Azure Web PubSub instance in portal.":::
+ :::image type="content" source="./media/howto-develop-create-instance/create-web-pubsub-instance-in-portal.png" alt-text="Screenshot of creating the Azure Web PubSub instance in portal.":::
4. Select **Create** to provision your Web PubSub resource.
-
+ ::: zone-end
::: zone pivot="method-azure-cli"
+
## Create a resource using Azure CLI
-The [Azure CLI](/cli/azure) is a set of commands used to create and manage Azure resources. The Azure CLI is available across Azure services and is designed to get you working quickly with Azure, with an emphasis on automation.
+The [Azure CLI](/cli/azure) is a set of commands used to create and manage Azure resources. The Azure CLI is available across Azure services and is designed to get you working quickly with Azure, with an emphasis on automation.
> [!IMPORTANT]
> This quickstart requires Azure CLI version 2.22.0 or higher.
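For orientation before the included steps, a minimal sketch of creating the resource with the CLI (resource names, location, and SKU are placeholder assumptions; the commands come from the `webpubsub` CLI extension):

```azurecli
az group create --name myResourceGroup --location eastus
az webpubsub create --name my-webpubsub --resource-group myResourceGroup --location eastus --sku Free_F1
```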
The [Azure CLI](/cli/azure) is a set of commands used to create and manage Azure
[!INCLUDE [Create a Web PubSub instance](includes/cli-awps-creation.md)]
::: zone-end
-
::: zone pivot="method-bicep"
+
## Create a resource using Bicep template
[!INCLUDE [About Bicep](../../includes/resource-manager-quickstart-bicep-introduction.md)]
The template used in this quickstart is from [Azure Quickstart Templates](/sampl
1. Save the Bicep file as **main.bicep** to your local computer.
1. Deploy the Bicep file using either Azure CLI or Azure PowerShell.
- # [CLI](#tab/CLI)
+ # [CLI](#tab/CLI)
- ```azurecli
- az group create --name exampleRG --location eastus
- az deployment group create --resource-group exampleRG --template-file main.bicep
- ```
+ ```azurecli
+ az group create --name exampleRG --location eastus
+ az deployment group create --resource-group exampleRG --template-file main.bicep
+ ```
- # [PowerShell](#tab/PowerShell)
+ # [PowerShell](#tab/PowerShell)
- ```azurepowershell
- New-AzResourceGroup -Name exampleRG -Location eastus
- New-AzResourceGroupDeployment -ResourceGroupName exampleRG -TemplateFile ./main.bicep
- ```
+ ```azurepowershell
+ New-AzResourceGroup -Name exampleRG -Location eastus
+ New-AzResourceGroupDeployment -ResourceGroupName exampleRG -TemplateFile ./main.bicep
+ ```
-
+ ***
- When the deployment finishes, you should see a message indicating the deployment succeeded.
+ When the deployment finishes, you should see a message indicating the deployment succeeded.
## Review deployed resources
Get-AzResource -ResourceGroupName exampleRG
```
+
## Clean up resources
When no longer needed, use the Azure portal, Azure CLI, or Azure PowerShell to delete the resource group and its resources.
az group delete --name exampleRG
```azurepowershell-interactive
Remove-AzResourceGroup -Name exampleRG
```
+
::: zone-end
## Next step
+
Now that you have created a resource, you are ready to put it to use. Next, you will learn how to subscribe and publish messages among your clients.
-> [!div class="nextstepaction"]
+
+> [!div class="nextstepaction"]
> [PubSub among clients](quickstarts-pubsub-among-clients.md)
azure-web-pubsub Howto Develop Event Listener https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/howto-develop-event-listener.md
If you want to listen to your [client events](concept-service-internals.md#terms
This tutorial shows you how to authorize your Web PubSub service to connect to Event Hubs and how to add an event listener rule to your service settings.
-Web PubSub service uses Azure Active Directory (Azure AD) authentication with managed identity to connect to Event Hubs. Therefore, you should enable the managed identity of the service and make sure it has proper permissions to connect to Event Hubs. You can grant the built-in [Azure Event Hubs Data sender](../role-based-access-control/built-in-roles.md#azure-event-hubs-data-sender) role to the managed identity so that it has enough permissions.
+Web PubSub service uses Microsoft Entra ID with managed identity to connect to Event Hubs. Therefore, you should enable the managed identity of the service and make sure it has proper permissions to connect to Event Hubs. You can grant the built-in [Azure Event Hubs Data sender](../role-based-access-control/built-in-roles.md#azure-event-hubs-data-sender) role to the managed identity so that it has enough permissions.
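As a hedged sketch, granting that role to the service's managed identity with the Azure CLI can look like the following (the principal ID and Event Hubs scope are placeholders you'd substitute):

```azurecli
az role assignment create \
  --role "Azure Event Hubs Data Sender" \
  --assignee "<web-pubsub-managed-identity-principal-id>" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.EventHub/namespaces/<namespace>/eventhubs/<event-hub>"
```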
To configure an Event Hubs listener, you need to:
-1. [Add a managed identity to your Web PubSub service](#add-a-managed-identity-to-your-web-pubsub-service)
-2. [Grant the managed identity an `Azure Event Hubs Data sender` role](#grant-the-managed-identity-an-azure-event-hubs-data-sender-role)
-3. [Add an event listener rule to your service settings](#add-an-event-listener-rule-to-your-service-settings)
+- [Send client events to Event Hubs](#send-client-events-to-event-hubs)
+ - [Overview](#overview)
+ - [Configure an event listener](#configure-an-event-listener)
+ - [Add a managed identity to your Web PubSub service](#add-a-managed-identity-to-your-web-pubsub-service)
+ - [Grant the managed identity an `Azure Event Hubs Data sender` role](#grant-the-managed-identity-an-azure-event-hubs-data-sender-role)
+ - [Add an event listener rule to your service settings](#add-an-event-listener-rule-to-your-service-settings)
+ - [Test your configuration with live demo](#test-your-configuration-with-live-demo)
+ - [Next steps](#next-steps)
## Configure an event listener
Find your Azure Web PubSub service from **Azure portal**. Navigate to **Identity
### Add an event listener rule to your service settings
-1. Find your service from **Azure portal**. Navigate to **Settings**. Then select **Add** to configure your event listener. For an existing hub configuration, select **...** on right side will navigate to the same editing page.
+1. Find your service from **Azure portal**. Navigate to **Settings**. Then select **Add** to configure your event listener. For an existing hub configuration, selecting **...** on the right side navigates to the same editing page.
   :::image type="content" source="media/howto-develop-event-listener/web-pubsub-settings.png" alt-text="Screenshot of Web PubSub settings":::
1. Then, in the editing page that opens, configure the hub name and select **Add** to add an event listener.
Find your Azure Web PubSub service from **Azure portal**. Navigate to **Identity
1. On the **Configure Event Listener** page, first configure an event hub endpoint. You can select **Select Event Hub from your subscription**, or directly input the fully qualified namespace and the event hub name. Then select the `user` and `system` events you'd like to listen to. Finally, select **Confirm** when everything is done.
   :::image type="content" source="media/howto-develop-event-listener/configure-event-hub-listener.png" alt-text="Screenshot of configuring Event Hubs Listener":::
-
## Test your configuration with live demo
1. Open this [Event Hubs Consumer Client](https://awpseventlistenerdemo.blob.core.windows.net/eventhub-consumer/index.html) web app, and input the Event Hubs connection string to connect to an event hub as a consumer. If you get the Event Hubs connection string from an Event Hubs namespace resource instead of an event hub instance, then you need to specify the event hub name. This event hub consumer client is connected with the mode that only reads new events; events published earlier aren't seen here. You can change the consumer client connection mode to read all the available events in the production environment.
1. Use this [WebSocket Client](https://awpseventlistenerdemo.blob.core.windows.net/webpubsub-client/websocket-client.html) web app to generate client events. If you've configured sending the system event `connected` to that event hub, you should see a printed `connected` event in the Event Hubs consumer client after connecting to the Web PubSub service successfully. You can also generate a user event with the app.
- :::image type="content" source="media/howto-develop-event-listener/eventhub-consumer-connected-event.png" alt-text="Screenshot of a printed connected event in the Event Hubs consumer client app":::
- :::image type="content" source="media/howto-develop-event-listener/web-pubsub-client-specify-event-name.png" alt-text="The area of the WebSocket client app to generate a user event":::
+ :::image type="content" source="media/howto-develop-event-listener/eventhub-consumer-connected-event.png" alt-text="Screenshot of a printed connected event in the Event Hubs consumer client app.":::
+ :::image type="content" source="media/howto-develop-event-listener/web-pubsub-client-specify-event-name.png" alt-text="Screenshot showing the area of the WebSocket client app to generate a user event.":::
## Next steps
In this article, you learned how event listeners work and how to configure an event listener with an event hub endpoint. To learn the data format sent to Event Hubs, read the following specification.
-> [!div class="nextstepaction"]
+> [!div class="nextstepaction"]
> [Specification: CloudEvents AMQP extension for Azure Web PubSub](./reference-cloud-events-amqp.md)
-<!--TODO: Add demo-->
+
+<!--TODO: Add demo-->
azure-web-pubsub Howto Develop Eventhandler https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/howto-develop-eventhandler.md
description: Guidance about event handler concepts and integration introduction
-
+
Last updated 01/27/2023
# Event handler in Azure Web PubSub service
-The event handler handles the incoming client events. Event handlers are registered and configured in the service through the Azure portal or Azure CLI. When a client event is triggered, the service can send the event to the appropriate event handler. The Web PubSub service now supports the event handler as the server-side, which exposes the publicly accessible endpoint for the service to invoke when the event is triggered. In other words, it acts as a **webhook**.
+The event handler handles the incoming client events. Event handlers are registered and configured in the service through the Azure portal or Azure CLI. When a client event is triggered, the service sends the event to the appropriate event handler. The event handler is the server side, exposing a publicly accessible endpoint for the service to invoke when the event is triggered. In other words, it acts as a **webhook**.
The Web PubSub service delivers client events to the upstream webhook using the [CloudEvents HTTP protocol](https://github.com/cloudevents/spec/blob/v1.0.1/http-protocol-binding.md).
-For every event, the service formulates an HTTP POST request to the registered upstream endpoint and expects an HTTP response.
+For every event, the service formulates an HTTP POST request to the registered upstream endpoint and expects an HTTP response.
The data sent from the service to the server is always in CloudEvents `binary` format.
For now, we don't support [WebHook-Request-Rate](https://github.com/cloudevents/
You can use any of these methods to authenticate between the service and webhook (a webhook sketch follows this list).
- Anonymous mode
-- Simple Auth with `?code=<code>` is provided through the configured Webhook URL as query parameter.
+- Simple authentication: a `?code=<code>` value is provided as a query parameter in the configured webhook URL.
+- Microsoft Entra authorization. For more information, see [Use a managed identity in client events](howto-use-managed-identity.md#use-a-managed-identity-in-client-events-scenarios).
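To make the webhook contract concrete, here's a minimal sketch of an upstream endpoint using Express that checks the `code` query parameter and answers the CloudEvents abuse-protection preflight (the path, port, and expected code are assumptions, not values from this article):

```js
const express = require("express");
const app = express();

const EXPECTED_CODE = "<code-from-webhook-url>"; // assumption: simple authentication is configured

// CloudEvents abuse protection: the service sends an OPTIONS request with
// WebHook-Request-Origin; echo the origin back as WebHook-Allowed-Origin.
app.options("/eventhandler", (req, res) => {
  res.setHeader("WebHook-Allowed-Origin", req.header("WebHook-Request-Origin") || "*");
  res.status(200).end();
});

// Client events arrive as HTTP POST in CloudEvents binary format;
// the ce-type header distinguishes system events from user events.
app.post("/eventhandler", express.text({ type: "*/*" }), (req, res) => {
  if (req.query.code !== EXPECTED_CODE) {
    return res.status(401).end();
  }
  console.log(req.header("ce-type"), req.body);
  res.status(200).end(); // a 2xx response tells the service the event was handled
});

app.listen(8080);
```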
## Configure event handler
You can add an event handler to a new hub or edit an existing hub.
To configure an event handler in a new hub:
-1. Go to your Azure Web PubSub service page in the **Azure portal**.
-1. Select **Settings** from the menu.
+1. Go to your Azure Web PubSub service page in the **Azure portal**.
+1. Select **Settings** from the menu.
1. Select **Add** to create a hub and configure your server-side webhook URL. Note: To add an event handler to an existing hub, select the hub and select **Edit**.
   :::image type="content" source="media/quickstart-serverless/set-event-handler.png" alt-text="Screenshot of setting the event handler.":::
1. Enter your hub name.
1. Select **Add** under **Configure Event Handlers**.
-1. In the event handler page, configure the following fields:
- 1. Enter the server webhook URL in the **URL Template** field.
- 1. Select the **System events** that you want to subscribe to.
- 1. Select the **User events** that you want to subscribe to.
- 1. Select **Authentication** method to authenticate upstream requests.
- 1. Select **Confirm**.
+1. In the event handler page, configure the following fields:
+ 1. Enter the server webhook URL in the **URL Template** field.
+ 1. Select the **System events** that you want to subscribe to.
+ 1. Select the **User events** that you want to subscribe to.
+ 1. Select **Authentication** method to authenticate upstream requests.
+ 1. Select **Confirm**.
+ :::image type="content" source="media/howto-develop-eventhandler/configure-event-handler.png" alt-text="Screenshot of Azure Web PubSub Configure Event Handler.":::
1. Select **Save** at the top of the **Configure Hub Settings** page.
To configure an event handler in a new hub:
Use the Azure CLI [**az webpubsub hub**](/cli/azure/webpubsub/hub) group commands to configure the event handler settings.
-Commands | Description
|--
-`create` | Create hub settings for WebPubSub Service.
-`delete` | Delete hub settings for WebPubSub Service.
-`list` | List all hub settings for WebPubSub Service.
-`show` | Show hub settings for WebPubSub Service.
-`update` | Update hub settings for WebPubSub Service.
+| Commands | Description |
+| -- | -- |
+| `create` | Create hub settings for WebPubSub Service. |
+| `delete` | Delete hub settings for WebPubSub Service. |
+| `list` | List all hub settings for WebPubSub Service. |
+| `show` | Show hub settings for WebPubSub Service. |
+| `update` | Update hub settings for WebPubSub Service. |
Here's an example of creating two webhook URLs for hub `MyHub` of `MyWebPubSub` resource:
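A hedged sketch of what such a command can look like (the handler URLs and event names are illustrative placeholders, not values from this article):

```azurecli
az webpubsub hub create \
  --name "MyWebPubSub" \
  --resource-group "MyResourceGroup" \
  --hub-name "MyHub" \
  --event-handler url-template="https://host1.example.com/api/{event}" user-event-pattern="*" \
  --event-handler url-template="https://host2.example.com/api/{event}" system-event="connected" system-event="disconnected"
```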
azure-web-pubsub Howto Develop Reliable Clients https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/howto-develop-reliable-clients.md
description: How to create reliable Websocket clients
-
+
Last updated 01/12/2023
The Web PubSub service supports two reliable subprotocols `json.reliable.webpubs
The simplest way to create a reliable client is to use the Client SDK. The Client SDK implements the [Web PubSub client specification](./reference-client-specification.md) and uses `json.reliable.webpubsub.azure.v1` by default. Refer to [PubSub with client SDK](./quickstart-use-client-sdk.md) for a quick start.
-
## The Hard Way - Implement by hand
The following tutorial walks you through the important parts of implementing the [Web PubSub client specification](./reference-client-specification.md). This guide isn't for people looking for a quick start; it's for those who want to understand the principles of achieving reliability. For a quick start, use the Client SDK.
To use reliable subprotocols, you must set the subprotocol when constructing Web
- Use Json reliable subprotocol:
- ```js
- var pubsub = new WebSocket('wss://test.webpubsub.azure.com/client/hubs/hub1', 'json.reliable.webpubsub.azure.v1');
- ```
+ ```js
+ var pubsub = new WebSocket(
+ "wss://test.webpubsub.azure.com/client/hubs/hub1",
+ "json.reliable.webpubsub.azure.v1"
+ );
+ ```
- Use Protobuf reliable subprotocol:
- ```js
- var pubsub = new WebSocket('wss://test.webpubsub.azure.com/client/hubs/hub1', 'protobuf.reliable.webpubsub.azure.v1');
- ```
+ ```js
+ var pubsub = new WebSocket(
+ "wss://test.webpubsub.azure.com/client/hubs/hub1",
+ "protobuf.reliable.webpubsub.azure.v1"
+ );
+ ```
### Connection recovery
Connection recovery is the basis of achieving reliability and must be implemented when using the `json.reliable.webpubsub.azure.v1` and `protobuf.reliable.webpubsub.azure.v1` protocols.
-Websocket connections rely on TCP. When the connection doesn't drop, messages are lossless and delivered in order. To prevent message loss over dropped connections, the Web PubSub service retains the connection status information, including group and message information. This information is used to restore the client on connection recovery
+WebSocket connections rely on TCP. When the connection doesn't drop, messages are lossless and delivered in order. To prevent message loss over dropped connections, the Web PubSub service retains the connection status information, including group and message information. This information is used to restore the client on connection recovery.
-When the client reconnects to the service using reliable subprotocols, the client will receive a `Connected` message containing the `connectionId` and `reconnectionToken`. The `connectionId` identifies the session of the connection in the service.
+When the client reconnects to the service using reliable subprotocols, the client will receive a `Connected` message containing the `connectionId` and `reconnectionToken`. The `connectionId` identifies the session of the connection in the service.
```json
{
- "type":"system",
- "event":"connected",
- "connectionId": "<connection_id>",
- "reconnectionToken": "<reconnection_token>"
+ "type": "system",
+ "event": "connected",
+ "connectionId": "<connection_id>",
+ "reconnectionToken": "<reconnection_token>"
}
```
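As a sketch, the client can rebuild the connection URI from those values; the `awps_connection_id` and `awps_reconnection_token` query parameter names follow the client specification, and the endpoint and hub reuse the placeholders from the snippets above:

```js
// On an unexpected close, reconnect with the identifiers from the last
// `connected` message so the service restores the existing session.
var recoveryUrl =
  "wss://test.webpubsub.azure.com/client/hubs/hub1" +
  "?awps_connection_id=<connection_id>" +
  "&awps_reconnection_token=<reconnection_token>";

var pubsub = new WebSocket(recoveryUrl, "json.reliable.webpubsub.azure.v1");
```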
Connection recovery may fail if the network issue hasn't been recovered yet. The
### Publisher
-Clients that send events to event handlers or publish messages to other clients are called publishers. Publishers should set `ackId` in the message to receive an acknowledgment from the Web PubSub service that publishing the message was successful or not.
+Clients that send events to event handlers or publish messages to other clients are called publishers. Publishers should set `ackId` in the message to receive an acknowledgment from the Web PubSub service indicating whether the message was published successfully.
-The `ackId` is the identifier of the message, each new message should use a unique ID. The original `ackId` should be used when resending a message.
+The `ackId` is the identifier of the message; each new message should use a unique ID. The original `ackId` should be used when resending a message.
A sample group send message:
```json
{
- "type": "sendToGroup",
- "group": "group1",
- "dataType" : "text",
- "data": "text data",
- "ackId": 1
+ "type": "sendToGroup",
+ "group": "group1",
+ "dataType": "text",
+ "data": "text data",
+ "ackId": 1
}
```
A sample ack response:
```json
{
- "type": "ack",
- "ackId": 1,
- "success": true
+ "type": "ack",
+ "ackId": 1,
+ "success": true
}
```
When the Web PubSub service returns an ack response with `success: true`, the message has been processed by the service, and the client can expect the message will be delivered to all subscribers.
-When the service experiences a transient internal error and the message can't be sent to subscriber, the publisher will receive an ack with `success: false`. The publisher should read the error to determine whether or not to resend the message. If the message is resent, the same `ackId` should be used.
+When the service experiences a transient internal error and the message can't be sent to the subscriber, the publisher will receive an ack with `success: false`. The publisher should read the error to determine whether or not to resend the message. If the message is resent, the same `ackId` should be used.
```json
{
- "type": "ack",
- "ackId": 1,
- "success": false,
- "error": {
- "name": "InternalServerError",
- "message": "Internal server error"
- }
+ "type": "ack",
+ "ackId": 1,
+ "success": false,
+ "error": {
+ "name": "InternalServerError",
+ "message": "Internal server error"
+ }
}
```
![Message Failure](./media/howto-develop-reliable-clients/message-failed.png)
-If the service's ack response is lost because the WebSocket connection dropped, the publisher should resend the message with the same `ackId` after recovery. When the message was previously processed by the service, it will send an ack containing a `Duplicate` error. The publisher should stop resending this message.
+If the service's ack response is lost because the WebSocket connection dropped, the publisher should resend the message with the same `ackId` after recovery. When the message was previously processed by the service, it will send an ack containing a `Duplicate` error. The publisher should stop resending this message.
```json
{
- "type": "ack",
- "ackId": 1,
- "success": false,
- "error": {
- "name": "Duplicate",
- "message": "Message with ack-id: 1 has been processed"
- }
+ "type": "ack",
+ "ackId": 1,
+ "success": false,
+ "error": {
+ "name": "Duplicate",
+ "message": "Message with ack-id: 1 has been processed"
+ }
}
```
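Putting those ack rules together, here's a minimal publisher sketch that resends with the same `ackId` until the message is acknowledged or reported as a duplicate (`pubsub` is the WebSocket from the earlier snippets; the retry trigger is simplified):

```js
var nextAckId = 1;
var pending = new Map(); // ackId -> serialized message awaiting an ack

function publish(group, data) {
  var message = { type: "sendToGroup", group: group, dataType: "text", data: data, ackId: nextAckId++ };
  pending.set(message.ackId, JSON.stringify(message));
  pubsub.send(pending.get(message.ackId));
}

// Call this for every ack frame received from the service.
function handleAck(ack) {
  if (ack.success || (ack.error && ack.error.name === "Duplicate")) {
    pending.delete(ack.ackId); // delivered (or already processed): stop resending
  } else if (pending.has(ack.ackId)) {
    pubsub.send(pending.get(ack.ackId)); // resend with the same ackId
  }
}
```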
A sample sequence ack:
```json
{
- "type": "sequenceAck",
- "sequenceId": 1
+ "type": "sequenceAck",
+ "sequenceId": 1
}
```
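On the subscriber side, a minimal sketch is to acknowledge the largest `sequenceId` received so far, per the client specification (`pubsub` is the WebSocket from the earlier snippets):

```js
var latestSequenceId = 0;

// Call this for every received data message that carries a sequenceId.
function acknowledge(message) {
  if (message.sequenceId) {
    latestSequenceId = Math.max(latestSequenceId, message.sequenceId);
    pubsub.send(JSON.stringify({ type: "sequenceAck", sequenceId: latestSequenceId }));
  }
}
```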
azure-web-pubsub Howto Disable Local Auth https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/howto-disable-local-auth.md
Title: Disable local (access key) authentication with Azure Web PubSub Service
-description: This article provides information about how to disable access key authentication and use only Azure AD authentication with Azure Web PubSub Service.
+description: This article provides information about how to disable access key authentication and use only Microsoft Entra authorization with Azure Web PubSub Service.
# Disable local (access key) authentication with Azure Web PubSub Service
-There are two ways to authenticate to Azure Web PubSub Service resources: Azure Active Directory (Azure AD) and Access Key. Azure AD provides superior security and ease of use over access key. With Azure AD, there's no need to store the tokens in your code and risk potential security vulnerabilities. We recommend that you use Azure AD with your Azure Web PubSub Service resources when possible.
+There are two ways to authenticate to Azure Web PubSub Service resources: Microsoft Entra ID and access key. Microsoft Entra ID provides superior security and ease of use over access key. With Microsoft Entra ID, there's no need to store the tokens in your code and risk potential security vulnerabilities. We recommend that you use Microsoft Entra ID with your Azure Web PubSub Service resources when possible.
> [!IMPORTANT]
> Disabling local authentication can have the following consequences.
-> - The current set of access keys will be permanently deleted.
-> - Tokens signed with current set of access keys will become unavailable.
-> - Signature will **NOT** be attached in the upstream request header. Please visit *[how to validate access token](./howto-use-managed-identity.md#validate-access-tokens)* to learn how to validate requests via Azure AD token.
+>
+> - The current set of access keys will be permanently deleted.
+> - Tokens signed with current set of access keys will become unavailable.
+> - A signature will **NOT** be attached in the upstream request header. Please visit _[how to validate access token](./howto-use-managed-identity.md#validate-access-tokens)_ to learn how to validate requests via a Microsoft Entra token.
## Use Azure portal
You can disable local authentication by setting `disableLocalAuth` property to t
```json
{
- "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "resource_name": {
- "defaultValue": "test-for-disable-aad",
- "type": "String"
- }
- },
- "variables": {},
- "resources": [
- {
- "type": "Microsoft.SignalRService/WebPubSub",
- "apiVersion": "2022-08-01-preview",
- "name": "[parameters('resource_name')]",
- "location": "eastus",
- "sku": {
- "name": "Premium_P1",
- "tier": "Premium",
- "size": "P1",
- "capacity": 1
- },
- "properties": {
- "tls": {
- "clientCertEnabled": false
- },
- "networkACLs": {
- "defaultAction": "Deny",
- "publicNetwork": {
- "allow": [
- "ServerConnection",
- "ClientConnection",
- "RESTAPI",
- "Trace"
- ]
- },
- "privateEndpoints": []
- },
- "publicNetworkAccess": "Enabled",
- "disableLocalAuth": true,
- "disableAadAuth": false
- }
- }
- ]
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "resource_name": {
+ "defaultValue": "test-for-disable-aad",
+ "type": "String"
+ }
+ },
+ "variables": {},
+ "resources": [
+ {
+ "type": "Microsoft.SignalRService/WebPubSub",
+ "apiVersion": "2022-08-01-preview",
+ "name": "[parameters('resource_name')]",
+ "location": "eastus",
+ "sku": {
+ "name": "Premium_P1",
+ "tier": "Premium",
+ "size": "P1",
+ "capacity": 1
+ },
+ "properties": {
+ "tls": {
+ "clientCertEnabled": false
+ },
+ "networkACLs": {
+ "defaultAction": "Deny",
+ "publicNetwork": {
+ "allow": [
+ "ServerConnection",
+ "ClientConnection",
+ "RESTAPI",
+ "Trace"
+ ]
+ },
+ "privateEndpoints": []
+ },
+ "publicNetworkAccess": "Enabled",
+ "disableLocalAuth": true,
+ "disableAadAuth": false
+ }
+ }
+ ]
}
```
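If you prefer the CLI to a template, a hedged sketch using the generic resource command (the resource ID segments are placeholders):

```azurecli
az resource update \
  --ids "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.SignalRService/WebPubSub/<resource-name>" \
  --set properties.disableLocalAuth=true
```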
You can assign the [Azure Web PubSub Service should have local authentication me
See the following docs to learn about authentication methods.
-- [Overview of Azure AD for Web PubSub](concept-azure-ad-authorization.md)
+- [Overview of Microsoft Entra ID for Web PubSub](concept-azure-ad-authorization.md)
- [Authenticate with Azure applications](./howto-authorize-from-application.md)
- [Authenticate with managed identities](./howto-authorize-from-managed-identity.md)
azure-web-pubsub Howto Enable Geo Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/howto-enable-geo-replication.md
Mission critical apps often need to have a robust failover system and serve user
### Contoso, a social media company
Contoso is a social media company with its customer base spread across the US and Canada. Contoso provides a mobile and web app to its users so that they can connect with each other. Contoso's application is deployed in Central US. As part of Contoso's architecture, Web PubSub is used to establish persistent WebSocket connections between client apps and the application server. Contoso **likes** that they can offload managing WebSocket connections to Web PubSub, but **doesn't** like reading reports of users in Canada experiencing higher latency. Furthermore, Contoso's development team wants to insure the app against regional outage so that the users can access the app with no interruptions.
-![Screenshot of using one Azure WebPubSub instance to handle traffic from two countries. ](./media/howto-enable-geo-replication/web-pubsub-single.png "Single WebPubSub Example")
+![Diagram of using one Azure WebPubSub instance to handle traffic from two countries. ](./media/howto-enable-geo-replication/web-pubsub-single.png "Single WebPubSub Example")
Contoso **could** set up another Web PubSub resource in Canada Central which is geographically closer to its users in Canada. However, managing multiple Web PubSub resources brings some challenges:
1. A cross-region communication mechanism would need to be implemented so that users in Canada and US can interact with each other.
Contoso **could** set up another Web PubSub resource in Canada Central which is
All of the above takes engineering resources away from focusing on product innovation.
-![Screenshot of using two Azure Web PubSub instances to handle traffic from two countries. ](./media/howto-enable-geo-replication/web-pubsub-multiple.png "Mutiple Web PubSub Example")
+![Diagram of using two Azure Web PubSub instances to handle traffic from two countries. ](./media/howto-enable-geo-replication/web-pubsub-multiple.png "Multiple Web PubSub Example")
### Harnessing the geo-replication feature
With the geo-replication feature, Contoso can now establish a replica in Canada Central, effectively overcoming the above-mentioned challenges. The developer team is glad to find out that they don't need to make any code changes. It's as easy as clicking a few buttons on Azure portal. The developer team is also happy to share with the stakeholders that as Contoso plans to enter the European market, they simply need to add another replica in Europe.
-![Screenshot of using one Azure Web PubSub instance with replica to handle traffic from two countries.](./media/howto-enable-geo-replication/web-pubsub-replica.png "Replica Example")
+![Diagram of using one Azure Web PubSub instance with replica to handle traffic from two countries.](./media/howto-enable-geo-replication/web-pubsub-replica.png "Replica Example")
## How to enable geo-replication in a Web PubSub resource
To create a replica in an Azure region, go to your Web PubSub resource and find the **Replicas** blade on the Azure portal and click **Add** to create a replica. It will be automatically enabled upon creation.
After creation, you would be able to view/edit your replica on the portal by cli
> [!NOTE]
> * Geo-replication is a feature available in premium tier.
-> * A replica is considered a separate resource when it comes to billing. See [Pricing](concept-billing-model.md#how-replica-is-billed) for more details.
+> * A replica is considered a separate resource when it comes to billing. See [Pricing and resource unit](#pricing-and-resource-unit) for more details.
+
+## Pricing and resource unit
+Each replica has its **own** `unit` and `autoscale settings`.
+
+Replica is a feature of the [Premium tier](https://azure.microsoft.com/pricing/details/web-pubsub/) of Azure Web PubSub Service. Each replica is billed **separately** according to its own unit and outbound traffic. The free message quota is also calculated separately.
+
+In the preceding example, Contoso added one replica in Canada Central. Contoso would pay for the replica in Canada Central according to its unit count and messages at the premium tier price.
## Delete a replica
After you've created a replica for a Web PubSub resource, you can delete it at any time if it's no longer needed.
To delete a replica in the Azure portal:
1. Navigate to your Web PubSub resource, and select the **Replicas** blade. Click the replica you want to delete.
2. Click the **Delete** button on the replica overview blade.
-## Impact on performance after enabling geo-replication feature
-After a replica is created, your clients will be distributed across selected Azure regions based on their geographical locations. Web PubSub service handles synchronizing data across these replicas automatically and this synchronization incurs a low level of cost. The cost is negligible if your use case primarily involves `sendToGroup()` where the group has more than 100 connections. However, the cost may become more apparent when sending to smaller groups (connection count < 10) or a single user.
-
-For more performance evaluation, refer to [Performance](concept-performance.md).
-
-## Best practices
-To ensure effective failover management, it is recommended to enable [autoscaling](howto-scale-autoscale.md) for the resource and its replicas. If there are two replicas in a Web PubSub resource and one of the replicas is not available due to an outage, the available replica will receive all the traffic and handle all the WebSocket connections. Auto-scaling can scale up to meet the demand automatically.
-> [!NOTE]
-> * Autoscaling for replica is configured on its own resource level. Scaling primary resource won't change the unit size of the replica.
-
## Understand how the geo-replication feature works
-![Screenshot of the arch of Azure Web PubSub replica. ](./media/howto-enable-geo-replication/web-pubsub-replica-arch.png "Replica Arch")
+![Diagram of the arch of Azure Web PubSub replica. ](./media/howto-enable-geo-replication/web-pubsub-replica-arch.png "Replica Arch")
-1. The client resolves the Fully Qualified Domain Name (FQDN) `contoso.webpubsub.azure.com` of the Web PubSub service. This FQDN points to a Traffic Manager, which returns the Canonical Name (CNAME) of the nearest regional Web PubSub instance.
-2. With this CNAME, the client establishes a websocket connection to the regional instance.
+1. The client resolves the Fully Qualified Domain Name (FQDN) `contoso.webpubsub.azure.com` of the Web PubSub service. This FQDN points to a Traffic Manager, which returns the Canonical Name (CNAME) of the nearest regional Web PubSub instance.
+2. With this CNAME, the client establishes a WebSocket connection to the regional instance (replica).
3. The two replicas will synchronize data with each other. Messages sent to one replica would be transferred to other replicas if necessary.
-4. In case a replica fails the health check conducted by the Traffic Manager (TM), the TM will exclude the failed instance's endpoint from its domain resolution results.
+4. In case a replica fails the health check conducted by the Traffic Manager (TM), the TM excludes the failed instance's endpoint from its domain resolution results. For details, refer to [Resiliency and disaster recovery](#resiliency-and-disaster-recovery) below.
> [!NOTE]
> * In the data plane, a primary Azure Web PubSub resource functions identically to its replicas
-
+
+## Resiliency and disaster recovery
+
+Azure Web PubSub Service utilizes a traffic manager for health checks and DNS resolution towards its replicas. Under normal circumstances, when all replicas are functioning properly, clients will be directed to the closest replica. For instance:
+
+- Clients close to `eastus` will be directed to the replica located in `eastus`.
+- Similarly, clients close to `westus` will be directed to the replica in `westus`.
+
+In the event of a **regional outage** in eastus (illustrated below), the traffic manager will detect the health check failure for that region. Then, this faulty replica's DNS will be excluded from the traffic manager's DNS resolution results. After a DNS Time-to-Live (TTL) duration, which is set to 90 seconds, clients in `eastus` will be redirected to connect with the replica in `westus`.
+
+![Diagram of Azure Web PubSub replica failover. ](./media/howto-enable-geo-replication/web-pubsub-replica-failover.png "Replica Failover")
+
+Once the issue in `eastus` is resolved and the region is back online, the health check will succeed. Clients in `eastus` will then, once again, be directed to the replica in their region. This transition is smooth as the connected clients will not be impacted until those existing connections are closed.
+
+![Diagram of Azure Web PubSub replica failover recovery. ](./media/howto-enable-geo-replication/web-pubsub-replica-failover-recovery.png "Replica Failover Recover")
+
+
+This failover and recovery process is **automatic** and requires no manual intervention.
+
+## Impact on performance after enabling geo-replication feature
+After replicas are enabled, clients will naturally distribute based on their geographical locations. While Web PubSub takes on the responsibility to synchronize data across these replicas, the associated overhead on [Server Load](concept-performance.md#quick-evaluation-using-metrics) is minimal for most common use cases.
+
+Specifically, if your application typically broadcasts to larger groups (size >10) or a single connection, the performance impact of synchronization is barely noticeable. If you're messaging small groups (size < 10), you might notice a bit more synchronization overhead.
+
+To ensure effective failover management, it is recommended to set each replica's unit size to handle all traffic. Alternatively, you could enable [autoscaling](howto-scale-autoscale.md) to manage this.
+
+For more performance evaluation, refer to [Performance](concept-performance.md).
+
+## Breaking issues
+* **Using replica and event handler together**
+
+ If you use the Web PubSub event handler with Web PubSub C# server SDK or an Azure Function that utilizes the Web PubSub extension, you may encounter issues with the abuse protection once replicas are enabled. To address this, you can either **disable the abuse protection** or **upgrade to the latest SDK/extension versions**.
+
+ For a detailed explanation and potential solutions, please refer to this [issue](https://github.com/Azure/azure-webpubsub/issues/598).
+
azure-web-pubsub Howto Generate Client Access Url https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/howto-generate-client-access-url.md
# How to generate client access URL for the clients
-A client, be it a browser 💻, a mobile app 📱, or an IoT device 💡, uses a **Client Access URL** to connect and authenticate with your resource. This URL follows a pattern of `wss://<service_name>.webpubsub.azure.com/client/hubs/<hub_name>?access_token=<token>`. This article shows you several ways to get the Client Access URL.
+A client, be it a browser 💻, a mobile app 📱, or an IoT device 💡, uses a **Client Access URL** to connect and authenticate with your resource. This URL follows a pattern of `wss://<service_name>.webpubsub.azure.com/client/hubs/<hub_name>?access_token=<token>`. This article shows you several ways to get the Client Access URL.
- For quick start, copy one from the Azure portal
- For development, generate the value using [Web PubSub server SDK](./reference-server-sdk-js.md)
-- If you're using Azure AD, you can also invoke the [Generate Client Token REST API](/rest/api/webpubsub/dataplane/web-pub-sub/generate-client-token)
+- If you're using Microsoft Entra ID, you can also invoke the [Generate Client Token REST API](/rest/api/webpubsub/dataplane/web-pub-sub/generate-client-token)
## Copy from the Azure portal
+
In the Keys tab in Azure portal, there's a Client URL Generator tool to quickly generate a Client Access URL for you, as shown in the following diagram. Values input here aren't stored.
:::image type="content" source="./media/howto-websocket-connect/generate-client-url.png" alt-text="Screenshot of the Web PubSub Client URL Generator.":::
## Generate from service SDK
+
The same Client Access URL can be generated by using the Web PubSub server SDK.
# [JavaScript](#tab/javascript)
The same Client Access URL can be generated by using the Web PubSub server SDK.
1. Follow [Getting started with server SDK](./reference-server-sdk-js.md#getting-started) to create a `WebPubSubServiceClient` object `service`
2. Generate Client Access URL by calling `WebPubSubServiceClient.getClientAccessToken` (a connection sketch follows this list):
- * Configure user ID
- ```js
- let token = await serviceClient.getClientAccessToken({ userId: "user1" });
- ```
- * Configure the lifetime of the token
- ```js
- let token = await serviceClient.getClientAccessToken({ expirationTimeInMinutes: 5 });
- ```
- * Configure a role that can join group `group1` directly when it connects using this Client Access URL
- ```js
- let token = await serviceClient.getClientAccessToken({ roles: ["webpubsub.joinLeaveGroup.group1"] });
- ```
- * Configure a role that the client can send messages to group `group1` directly when it connects using this Client Access URL
- ```js
- let token = await serviceClient.getClientAccessToken({ roles: ["webpubsub.sendToGroup.group1"] });
- ```
- * Configure a group `group1` that the client joins once it connects using this Client Access URL
- ```js
- let token = await serviceClient.getClientAccessToken({ groups: ["group1"] });
- ```
+
+ - Configure user ID
+
+ ```js
+ let token = await serviceClient.getClientAccessToken({ userId: "user1" });
+ ```
+
+ - Configure the lifetime of the token
+
+ ```js
+ let token = await serviceClient.getClientAccessToken({
+ expirationTimeInMinutes: 5,
+ });
+ ```
+
+ - Configure a role that can join group `group1` directly when it connects using this Client Access URL
+
+ ```js
+ let token = await serviceClient.getClientAccessToken({
+ roles: ["webpubsub.joinLeaveGroup.group1"],
+ });
+ ```
+
+ - Configure a role that the client can send messages to group `group1` directly when it connects using this Client Access URL
+
+ ```js
+ let token = await serviceClient.getClientAccessToken({
+ roles: ["webpubsub.sendToGroup.group1"],
+ });
+ ```
+
+ - Configure a group `group1` that the client joins once it connects using this Client Access URL
+
+ ```js
+ let token = await serviceClient.getClientAccessToken({
+ groups: ["group1"],
+ });
+ ```
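As referenced above, a sketch of connecting with the generated value, assuming the returned object exposes the full Client Access URL as `url` (as the JavaScript server SDK does):

```js
const WebSocket = require("ws");

let token = await serviceClient.getClientAccessToken({ userId: "user1" });
// token.url is the full Client Access URL, access token included.
let client = new WebSocket(token.url);
```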
# [C#](#tab/csharp)
1. Follow [Getting started with server SDK](./reference-server-sdk-csharp.md#getting-started) to create a `WebPubSubServiceClient` object `service`
2. Generate Client Access URL by calling `WebPubSubServiceClient.GetClientAccessUri`:
- * Configure user ID
- ```csharp
- var url = service.GetClientAccessUri(userId: "user1");
- ```
- * Configure the lifetime of the token
- ```csharp
- var url = service.GetClientAccessUri(expiresAfter: TimeSpan.FromMinutes(5));
- ```
- * Configure a role that can join group `group1` directly when it connects using this Client Access URL
- ```csharp
- var url = service.GetClientAccessUri(roles: new string[] { "webpubsub.joinLeaveGroup.group1" });
- ```
- * Configure a role that the client can send messages to group `group1` directly when it connects using this Client Access URL
- ```csharp
- var url = service.GetClientAccessUri(roles: new string[] { "webpubsub.sendToGroup.group1" });
- ```
- * Configure a group `group1` that the client joins once it connects using this Client Access URL
- ```csharp
- var url = service.GetClientAccessUri(groups: new string[] { "group1" });
- ```
+
+ - Configure user ID
+
+ ```csharp
+ var url = service.GetClientAccessUri(userId: "user1");
+ ```
+
+ - Configure the lifetime of the token
+
+ ```csharp
+ var url = service.GetClientAccessUri(expiresAfter: TimeSpan.FromMinutes(5));
+ ```
+
+ - Configure a role that can join group `group1` directly when it connects using this Client Access URL
+
+ ```csharp
+ var url = service.GetClientAccessUri(roles: new string[] { "webpubsub.joinLeaveGroup.group1" });
+ ```
+
+ - Configure a role that the client can send messages to group `group1` directly when it connects using this Client Access URL
+
+ ```csharp
+ var url = service.GetClientAccessUri(roles: new string[] { "webpubsub.sendToGroup.group1" });
+ ```
+
+ - Configure a group `group1` that the client joins once it connects using this Client Access URL
+
+ ```csharp
+ var url = service.GetClientAccessUri(groups: new string[] { "group1" });
+ ```
# [Python](#tab/python)
1. Follow [Getting started with server SDK](./reference-server-sdk-python.md#install-the-package) to create a `WebPubSubServiceClient` object `service`.
2. Generate the Client Access URL by calling `WebPubSubServiceClient.get_client_access_token`:
- * Configure user ID
- ```python
- token = service.get_client_access_token(user_id="user1")
- ```
- * Configure the lifetime of the token
- ```python
- token = service.get_client_access_token(minutes_to_expire=5)
- ```
- * Configure a role that can join group `group1` directly when it connects using this Client Access URL
- ```python
- token = service.get_client_access_token(roles=["webpubsub.joinLeaveGroup.group1"])
- ```
- * Configure a role that the client can send messages to group `group1` directly when it connects using this Client Access URL
- ```python
- token = service.get_client_access_token(roles=["webpubsub.sendToGroup.group1"])
- ```
- * Configure a group `group1` that the client joins once it connects using this Client Access URL
- ```python
- token = service.get_client_access_token(groups=["group1"])
- ```
+
+ - Configure user ID
+
+ ```python
+ token = service.get_client_access_token(user_id="user1")
+ ```
+
+ - Configure the lifetime of the token
+
+ ```python
+ token = service.get_client_access_token(minutes_to_expire=5)
+ ```
+
+ - Configure a role that can join group `group1` directly when it connects using this Client Access URL
+
+ ```python
+ token = service.get_client_access_token(roles=["webpubsub.joinLeaveGroup.group1"])
+ ```
+
+ - Configure a role that the client can send messages to group `group1` directly when it connects using this Client Access URL
+
+ ```python
+ token = service.get_client_access_token(roles=["webpubsub.sendToGroup.group1"])
+ ```
+
+ - Configure a group `group1` that the client joins once it connects using this Client Access URL
+
+ ```python
+ token = service.get_client_access_token(groups=["group1"])
+ ```
# [Java](#tab/java)
1. Follow [Getting started with server SDK](./reference-server-sdk-java.md#getting-started) to create a `WebPubSubServiceClient` object `service`.
2. Generate the Client Access URL by calling `WebPubSubServiceClient.getClientAccessToken`:
- * Configure user ID
- ```java
- GetClientAccessTokenOptions option = new GetClientAccessTokenOptions();
- option.setUserId(id);
- WebPubSubClientAccessToken token = service.getClientAccessToken(option);
- ```
- * Configure the lifetime of the token
- ```java
- GetClientAccessTokenOptions option = new GetClientAccessTokenOptions();
- option.setExpiresAfter(Duration.ofDays(1));
- WebPubSubClientAccessToken token = service.getClientAccessToken(option);
- ```
- * Configure a role that can join group `group1` directly when it connects using this Client Access URL
- ```java
- GetClientAccessTokenOptions option = new GetClientAccessTokenOptions();
- option.addRole("webpubsub.joinLeaveGroup.group1");
- WebPubSubClientAccessToken token = service.getClientAccessToken(option);
- ```
- * Configure a role that the client can send messages to group `group1` directly when it connects using this Client Access URL
- ```java
- GetClientAccessTokenOptions option = new GetClientAccessTokenOptions();
- option.addRole("webpubsub.sendToGroup.group1");
- WebPubSubClientAccessToken token = service.getClientAccessToken(option);
- ```
- * Configure a group `group1` that the client joins once it connects using this Client Access URL
- ```java
- GetClientAccessTokenOptions option = new GetClientAccessTokenOptions();
- option.setGroups(Arrays.asList("group1")),
- WebPubSubClientAccessToken token = service.getClientAccessToken(option);
- ```
+
+ - Configure user ID
+
+ ```java
+ GetClientAccessTokenOptions option = new GetClientAccessTokenOptions();
+ option.setUserId(id);
+ WebPubSubClientAccessToken token = service.getClientAccessToken(option);
+ ```
+
+ - Configure the lifetime of the token
+
+ ```java
+ GetClientAccessTokenOptions option = new GetClientAccessTokenOptions();
+ option.setExpiresAfter(Duration.ofDays(1));
+ WebPubSubClientAccessToken token = service.getClientAccessToken(option);
+ ```
+
+ - Configure a role that can join group `group1` directly when it connects using this Client Access URL
+
+ ```java
+ GetClientAccessTokenOptions option = new GetClientAccessTokenOptions();
+ option.addRole("webpubsub.joinLeaveGroup.group1");
+ WebPubSubClientAccessToken token = service.getClientAccessToken(option);
+ ```
+
+ - Configure a role that the client can send messages to group `group1` directly when it connects using this Client Access URL
+
+ ```java
+ GetClientAccessTokenOptions option = new GetClientAccessTokenOptions();
+ option.addRole("webpubsub.sendToGroup.group1");
+ WebPubSubClientAccessToken token = service.getClientAccessToken(option);
+ ```
+
+ - Configure a group `group1` that the client joins once it connects using this Client Access URL
+
+ ```java
+ GetClientAccessTokenOptions option = new GetClientAccessTokenOptions();
+      option.setGroups(Arrays.asList("group1"));
+ WebPubSubClientAccessToken token = service.getClientAccessToken(option);
+ ```
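In practice, calls like these usually sit behind a server-side endpoint rather than running in the client; a minimal sketch of such an endpoint using Express and the JavaScript server SDK (the `/negotiate` path, the hub name `simplechat`, and the `isAuthorized` helper are assumptions for illustration):

```js
const express = require("express");
const { WebPubSubServiceClient } = require("@azure/web-pubsub");

const app = express();
const serviceClient = new WebPubSubServiceClient(
  process.env.WebPubSubConnectionString, // your service connection string
  "simplechat"
);

app.get("/negotiate", async (req, res) => {
  // Hypothetical check; replace with your real authentication/authorization workflow.
  if (!isAuthorized(req)) {
    return res.status(401).end();
  }
  const token = await serviceClient.getClientAccessToken({ userId: req.query.user });
  res.json({ url: token.url });
});

app.listen(8080);
```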
+ In real-world code, you usually have a server side that hosts the logic for generating the Client Access URL, as in the sketch above. When a client request comes in, the server side can run its usual authentication/authorization workflow to validate the request; only valid client requests get a Client Access URL back.

## Invoke the Generate Client Token REST API
-You can enable Azure AD in your service and use the Azure AD token to invoke [Generate Client Token rest API](/rest/api/webpubsub/dataplane/web-pub-sub/generate-client-token) to get the token for the client to use.
-
-1. Follow [Authorize from application](./howto-authorize-from-application.md) to enable Azure AD.
-2. Follow [Get Azure AD token](./howto-authorize-from-application.md#use-postman-to-get-the-azure-ad-token) to get the Azure AD token with Postman.
-3. Use the Azure AD token to invoke `:generateToken` with Postman:
- 1. For the URI, enter `https://{Endpoint}/api/hubs/{hub}/:generateToken?api-version=2022-11-01`
- 2. On the **Auth** tab, select **Bearer Token** and paste the Azure AD token fetched in the previous step
- 3. Select **Send** and you see the Client Access Token in the response:
- ```json
- {
- "token": "ABCDEFG.ABC.ABC"
- }
- ```
-4. The Client Access URI is in the format of `wss://<endpoint>/client/hubs/<hub_name>?access_token=<token>`
+You can enable Microsoft Entra ID in your service and use a Microsoft Entra token to invoke the [Generate Client Token REST API](/rest/api/webpubsub/dataplane/web-pub-sub/generate-client-token) to get a token for the client to use.
+
+1. Follow [Authorize from application](./howto-authorize-from-application.md) to enable Microsoft Entra ID.
+2. Follow [Get Microsoft Entra token](./howto-authorize-from-application.md#use-postman-to-get-the-microsoft-entra-token) to get the Microsoft Entra token with Postman.
+3. Use the Microsoft Entra token to invoke `:generateToken` with Postman:
+ 1. For the URI, enter `https://{Endpoint}/api/hubs/{hub}/:generateToken?api-version=2022-11-01`
+ 2. On the **Auth** tab, select **Bearer Token** and paste the Microsoft Entra token fetched in the previous step
+ 3. Select **Send** and you see the Client Access Token in the response:
+
+ ```json
+ {
+ "token": "ABCDEFG.ABC.ABC"
+ }
+ ```
+
+4. The Client Access URI is in the format of `wss://<endpoint>/client/hubs/<hub_name>?access_token=<token>`
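The same call can be scripted instead of using Postman; a sketch in Node.js 18+ (run as an ES module so top-level `await` works; the endpoint, hub, and token values are placeholders):

```js
const endpoint = "https://<your-instance>.webpubsub.azure.com";
const hub = "<hub>";
const entraToken = "<microsoft-entra-token>"; // fetched as in step 2

// POST to :generateToken with the Microsoft Entra token as a bearer token.
const res = await fetch(`${endpoint}/api/hubs/${hub}/:generateToken?api-version=2022-11-01`, {
  method: "POST",
  headers: { Authorization: `Bearer ${entraToken}` },
});
const { token } = await res.json();
console.log(`${endpoint.replace("https", "wss")}/client/hubs/${hub}?access_token=${token}`);
```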
azure-web-pubsub Howto Monitor Azure Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/howto-monitor-azure-policy.md
[Azure Policy](../governance/policy/overview.md) is a free service in Azure to create, assign, and manage policies that enforce rules and effects to ensure your resources stay compliant with your corporate standards and service level agreements. Use these policies to audit Web PubSub resources for compliance.
-This article describes the built-in policies for Azure Web PubSub Service.
+This article describes the built-in policies for Azure Web PubSub Service.
## Built-in policy definitions

The following table contains an index of Azure Policy built-in policy definitions for Azure Web PubSub. For Azure Policy built-ins for other services, see [Azure Policy built-in definitions](../governance/policy/samples/built-in-policies.md). The name of each built-in policy definition links to the policy definition in the Azure portal. Use the link in the Version column to view the source on the [Azure Policy GitHub repo](https://github.com/Azure/azure-policy).
The name of each built-in policy definition links to the policy definition in th
When assigning a policy definition:
-* You can assign policy definitions using the [Azure portal](../governance/policy/assign-policy-portal.md), [Azure CLI](../governance/policy/assign-policy-azurecli.md), a [Resource Manager template](../governance/policy/assign-policy-template.md), or the Azure Policy SDKs.
-* Policy assignments can be scoped to a resource group, a subscription, or an [Azure management group](../governance/management-groups/overview.md).
-* You can enable or disable [policy enforcement](../governance/policy/concepts/assignment-structure.md#enforcement-mode) at any time.
-* Web PubSub policy assignments apply to existing and new Web PubSub resources within the scope.
+- You can assign policy definitions using the [Azure portal](../governance/policy/assign-policy-portal.md), [Azure CLI](../governance/policy/assign-policy-azurecli.md), a [Resource Manager template](../governance/policy/assign-policy-template.md), or the Azure Policy SDKs.
+- Policy assignments can be scoped to a resource group, a subscription, or an [Azure management group](../governance/management-groups/overview.md).
+- You can enable or disable [policy enforcement](../governance/policy/concepts/assignment-structure.md#enforcement-mode) at any time.
+- Web PubSub policy assignments apply to existing and new Web PubSub resources within the scope.
> [!NOTE] > After you assign or update a policy, it takes some time for the assignment to be applied to resources in the defined scope. See information about [policy evaluation triggers](../governance/policy/how-to/get-compliance-data.md#evaluation-triggers).
When a resource is non-compliant, there are many possible reasons. To determine
1. Open the Azure portal and search for **Policy**. 1. Select **Policy**. 1. Select **Compliance**.
-1. Use the filters to display by **Scope**, **Type** or **Compliance state**. Use search list by name or
- ID.
- [ ![Policy compliance in portal](./media/howto-monitor-azure-policy/azure-policy-compliance.png) ](./media/howto-monitor-azure-policy/azure-policy-compliance.png#lightbox)
-1. Select a policy to review aggregate compliance details and events.
+1. Use the filters to display results by **Scope**, **Type**, or **Compliance state**. Use the search box to find a policy by name or ID.
+ [ ![Screenshot showing policy compliance in portal.](./media/howto-monitor-azure-policy/azure-policy-compliance.png) ](./media/howto-monitor-azure-policy/azure-policy-compliance.png#lightbox)
+1. Select a policy to review aggregate compliance details and events.
1. Select a specific Web PubSub for resource compliance. ### Policy compliance in the Azure CLI
az policy state list \
## Next steps
-* Learn more about Azure Policy [definitions](../governance/policy/concepts/definition-structure.md) and [effects](../governance/policy/concepts/effects.md)
-
-* Create a [custom policy definition](../governance/policy/tutorials/create-custom-policy-definition.md)
+- Learn more about Azure Policy [definitions](../governance/policy/concepts/definition-structure.md) and [effects](../governance/policy/concepts/effects.md)
-* Learn more about [governance capabilities](../governance/index.yml) in Azure
+- Create a [custom policy definition](../governance/policy/tutorials/create-custom-policy-definition.md)
+- Learn more about [governance capabilities](../governance/index.yml) in Azure
<!-- LINKS - External -->
-[terms-of-use]: https://azure.microsoft.com/support/legal/preview-supplemental-terms/
+
+[terms-of-use]: https://azure.microsoft.com/support/legal/preview-supplemental-terms/
azure-web-pubsub Howto Secure Shared Private Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/howto-secure-shared-private-endpoints.md
When the `properties.provisioningState` is `Succeeded` and `properties.status` (
At this point, the private endpoint between Azure Web PubSub Service and Azure Function is established.
-### Step 4: Verify upstream calls are from a private IP
+## Step 4: Verify upstream calls are from a private IP
Once the private endpoint is set up, you can verify that incoming calls come from a private IP by checking the `X-Forwarded-For` header on the upstream side (see the sketch below).
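For example, an upstream hosted in Azure Functions can log and inspect the header; a sketch (the private address prefix is an assumption; use your VNet's actual range):

```js
module.exports = async function (context, req) {
  // X-Forwarded-For can contain a comma-separated chain; the first entry is the caller.
  const callerIp = (req.headers["x-forwarded-for"] || "").split(",")[0].trim();
  context.log(`Caller IP: ${callerIp}`);
  const isPrivate = callerIp.startsWith("10."); // assumption: VNet uses 10.0.0.0/8
  context.res = { status: 200, body: { callerIp, isPrivate } };
};
```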
azure-web-pubsub Howto Troubleshoot Resource Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/howto-troubleshoot-resource-logs.md
description: Learn what resource logs are and how to use them for troubleshootin
-+ Last updated 07/21/2022 # How to troubleshoot with resource logs
-This how-to guide provides an overview of Azure Web PubSub resource logs and some tips for using the logs to troubleshoot certain problems. Logs can be used for issue identification, connection tracking, message tracing, HTTP request tracing, and analysis.
+This how-to guide provides an overview of Azure Web PubSub resource logs and some tips for using the logs to troubleshoot certain problems. Logs can be used for issue identification, connection tracking, message tracing, HTTP request tracing, and analysis.
-## What are resource logs?
+## What are resource logs?
+
+There are three types of resource logs: _Connectivity_, _Messaging_, and _HTTP requests_.
-There are three types of resource logs: *Connectivity*, *Messaging*, and *HTTP requests*.
- **Connectivity** logs provide detailed information for Azure Web PubSub hub connections. For example, basic information (user ID, connection ID, and so on) and event information (connect, disconnect, and so on).
- **Messaging** logs provide tracing information for the Azure Web PubSub hub messages received and sent via the Azure Web PubSub service. For example, the tracing ID and message type of the message.
- **HTTP requests** logs provide tracing information for HTTP requests to the Azure Web PubSub service. For example, the HTTP method and status code. Typically, the HTTP request is recorded when it arrives at or leaves the service.
The Azure Web PubSub service live trace tool has ability to collect resource log
> [!NOTE] > The following considerations apply to using the live trace tool:
-> - The real-time resource logs captured by live trace tool will be billed as messages (outbound traffic).
-> - The live trace tool does not currently support Azure Active Directory authentication. You must enable access keys to use live trace. Under **Settings**, select **Keys**, and then enable **Access Key**.
-> - The Azure Web PubSub service Free Tier instance has a daily limit of 20,000 messages (outbound traffic). Live trace can cause you to unexpectedly reach the daily limit.
+>
+> - The real-time resource logs captured by live trace tool will be billed as messages (outbound traffic).
+> - The live trace tool does not currently support Microsoft Entra authorization. You must enable access keys to use live trace. Under **Settings**, select **Keys**, and then enable **Access Key**.
+> - The Azure Web PubSub service Free Tier instance has a daily limit of 20,000 messages (outbound traffic). Live trace can cause you to unexpectedly reach the daily limit.
+
+## Launch the live trace tool
+
+> [!NOTE]
+> When access key is enabled, you use an access token to authenticate with the live trace tool.
+> Otherwise, you use Microsoft Entra ID to authenticate.
+> You can check whether access key is enabled on your Web PubSub service's **Keys** page in the Azure portal.
+
+### Steps for access key enabled
+
+1. Go to the Azure portal and your Web PubSub service page.
+1. From the menu on the left, under **Monitoring**, select **Live trace settings**.
+1. Select **Enable Live Trace**.
+1. Select **Save**. It takes a moment for the changes to take effect.
+1. When the update is complete, select **Open Live Trace Tool**.
+
+ :::image type="content" source="./media/howto-troubleshoot-diagnostic-logs/diagnostic-logs-with-live-trace-tool.png" alt-text="Screenshot of launching the live trace tool.":::
+
+### Steps for access key disabled
-### Launch the live trace tool
+#### Assign live trace tool API permission to yourself
+1. Go to the Azure portal and your Web PubSub service page.
+1. Select **Access control (IAM)**.
+1. On the new page, select **+ Add**, and then select **Add role assignment**.
+1. On the **Job function roles** tab, select the **Web PubSub Service Owner** role, and then select **Next**.
+1. On the **Members** page, select **+ Select members**.
+1. In the new panel, search for and select the members, and then select **Select**.
+1. Select **Review + assign**, and wait for the completion notification.
-1. Go to the Azure portal and your Web PubSub service.
-1. On the left menu, under **Monitoring**, select **Live trace settings.**
-1. On the **Live trace settings** page, select **Enable Live Trace**.
-1. Choose the log categories to collect.
-1. Select **Save** and then wait until the settings take effect.
-1. Select **Open Live Trace Tool**.
+#### Visit the live trace tool
+1. Go to the Azure portal and your Web PubSub service page.
+1. From the menu on the left, under **Monitoring**, select **Live trace settings**.
+1. Select **Enable Live Trace**.
+1. Select **Save**. It takes a moment for the changes to take effect.
+1. When the update is complete, select **Open Live Trace Tool**.
:::image type="content" source="./media/howto-troubleshoot-diagnostic-logs/diagnostic-logs-with-live-trace-tool.png" alt-text="Screenshot of launching the live trace tool.":::
+#### Sign in with your Microsoft account
+
+1. The live trace tool opens a Microsoft sign-in window. If no window appears, check that pop-up windows are allowed in your browser.
+1. Wait for **Ready** to appear in the status bar.
### Capture the resource logs

The live trace tool provides functionality to help you capture the resource logs for troubleshooting.
-* **Capture**: Begin to capture the real-time resource logs from Azure Web PubSub.
-* **Clear**: Clear the captured real-time resource logs.
-* **Log filter**: The live trace tool lets you filter the captured real-time resource logs with one specific key word. The common separators (for example, space, comma, semicolon, and so on) will be treated as part of the key word.
-* **Status**: The status shows whether the live trace tool is connected or disconnected with the specific instance.
+- **Capture**: Begin capturing the real-time resource logs from Azure Web PubSub.
+- **Clear**: Clear the captured real-time resource logs.
+- **Log filter**: The live trace tool lets you filter the captured real-time resource logs with one specific keyword. Common separators (for example, space, comma, and semicolon) are treated as part of the keyword.
+- **Status**: The status shows whether the live trace tool is connected to or disconnected from the specific instance.
:::image type="content" source="./media/howto-troubleshoot-diagnostic-logs/live-trace-tool-capture.png" alt-text="Screenshot of capturing resource logs with live trace tool.":::
-The real-time resource logs captured by live trace tool contain detailed information for troubleshooting.
-
-| Name | Description |
-| | |
-| Time | Log event time |
-| Log Level | Log event level, can be [Trace \| Debug \| Informational \| Warning \| Error] |
-| Event Name | Operation name of the event |
-| Message | Detailed message for the event |
-| Exception | The run-time exception of Azure Web PubSub service |
-| Hub | User-defined hub name |
-| Connection ID | Identity of the connection |
-| User ID | User identity|
-| IP | Client IP address |
-| Route Template | The route template of the API |
-| Http Method | The Http method (POST/GET/PUT/DELETE) |
-| URL | The uniform resource locator |
-| Trace ID | The unique identifier to the invocation |
-| Status Code | The Http response code |
-| Duration | The duration between receiving the request and processing the request |
-| Headers | The additional information passed by the client and the server with an HTTP request or response |
+The real-time resource logs captured by the live trace tool contain detailed information for troubleshooting.
+
+| Name | Description |
+| -- | -- |
+| Time | Log event time |
+| Log Level | Log event level, can be [Trace \| Debug \| Informational \| Warning \| Error] |
+| Event Name | Operation name of the event |
+| Message | Detailed message for the event |
+| Exception | The run-time exception of Azure Web PubSub service |
+| Hub | User-defined hub name |
+| Connection ID | Identity of the connection |
+| User ID | User identity |
+| IP | Client IP address |
+| Route Template | The route template of the API |
+| Http Method | The HTTP method (POST/GET/PUT/DELETE) |
+| URL | The uniform resource locator |
+| Trace ID | The unique identifier to the invocation |
+| Status Code | The HTTP response code |
+| Duration | The duration between receiving the request and processing the request |
+| Headers | The additional information passed by the client and the server with an HTTP request or response |
## Capture resource logs with Azure Monitor
Currently Azure Web PubSub supports integration with [Azure Storage](../azure-mo
1. Go to the Azure portal.
1. On the **Diagnostic settings** page of your Azure Web PubSub service instance, select **+ Add diagnostic setting**.
- :::image type="content" source="./media/howto-troubleshoot-diagnostic-logs/diagnostic-settings-list.png" alt-text="Screenshot of viewing diagnostic settings and create a new one":::
+ :::image type="content" source="./media/howto-troubleshoot-diagnostic-logs/diagnostic-settings-list.png" alt-text="Screenshot of viewing diagnostic settings and create a new one":::
1. In **Diagnostic setting name**, input the setting name.
1. In **Category details**, select any log category you need.
1. In **Destination details**, check **Archive to a storage account**.
- :::image type="content" source="./media/howto-troubleshoot-diagnostic-logs/diagnostic-settings-details.png" alt-text="Screenshot of configuring diagnostic setting detail":::
+ :::image type="content" source="./media/howto-troubleshoot-diagnostic-logs/diagnostic-settings-details.png" alt-text="Screenshot of configuring diagnostic setting detail":::
+ 1. Select **Save** to save the diagnostic setting.
-> [!NOTE]
-> The storage account should be in the same region as Azure Web PubSub service.
+ > [!NOTE]
+ > The storage account should be in the same region as Azure Web PubSub service.
### Archive to an Azure Storage Account
All logs are stored in JavaScript Object Notation (JSON) format. Each entry has
Archive log JSON strings include elements listed in the following tables:
-**Format**
-
-Name | Description
-- | -
-time | Log event time
-level | Log event level
-resourceId | Resource ID of your Azure SignalR Service
-location | Location of your Azure SignalR Service
-category | Category of the log event
-operationName | Operation name of the event
-callerIpAddress | IP address of your server or client
-properties | Detailed properties related to this log event. For more detail, see the properties table below
-
-**Properties Table**
-
-Name | Description
-- | -
-collection | Collection of the log event. Allowed values are: `Connection`, `Authorization` and `Throttling`
-connectionId | Identity of the connection
-userId | Identity of the user
-message | Detailed message of log event
-hub | User-defined Hub Name |
-routeTemplate | The route template of the API |
-httpMethod | The Http method (POST/GET/PUT/DELETE) |
-url | The uniform resource locator |
-traceId | The unique identifier to the invocation |
-statusCode | The Http response code |
-duration | The duration between the request is received and processed |
-headers | The additional information passed by the client and the server with an HTTP request or response |
+#### Format
+
+| Name | Description |
+| | - |
+| time | Log event time |
+| level | Log event level |
+| resourceId | Resource ID of your Azure Web PubSub service |
+| location | Location of your Azure Web PubSub service |
+| category | Category of the log event |
+| operationName | Operation name of the event |
+| callerIpAddress | IP address of your server or client |
+| properties | Detailed properties related to this log event. For more detail, see the properties table below |
+
+#### Properties Table
+
+| Name | Description |
+| - | -- |
+| collection | Collection of the log event. Allowed values are: `Connection`, `Authorization` and `Throttling` |
+| connectionId | Identity of the connection |
+| userId | Identity of the user |
+| message | Detailed message of log event |
+| hub | User-defined Hub Name |
+| routeTemplate | The route template of the API |
+| httpMethod | The HTTP method (POST/GET/PUT/DELETE) |
+| url | The uniform resource locator |
+| traceId | The unique identifier of the invocation |
+| statusCode | The HTTP response code |
+| duration | The duration between when the request is received and when it's processed |
+| headers | The additional information passed by the client and the server with an HTTP request or response |
The following code is an example of an archive log JSON string:
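A sketch of such an entry, with field names taken from the tables above and purely illustrative values:

```json
{
  "time": "2023-09-06T12:00:00Z",
  "level": "Informational",
  "resourceId": "/SUBSCRIPTIONS/<subscription-id>/RESOURCEGROUPS/<group>/PROVIDERS/MICROSOFT.SIGNALRSERVICE/WEBPUBSUB/<name>",
  "location": "eastus",
  "category": "ConnectivityLogs",
  "operationName": "ConnectionStarted",
  "callerIpAddress": "10.0.0.4",
  "properties": {
    "collection": "Connection",
    "connectionId": "<connection-id>",
    "userId": "user1",
    "message": "Connection started.",
    "hub": "simplechat"
  }
}
```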
The following code is an example of an archive log JSON string:
### Archive to Azure Log Analytics To send logs to a Log Analytics workspace:
-1. On the **Diagnostic setting** page, under **Destination details**, select **Send to Log Analytics workspace.
+
+1. On the **Diagnostic setting** page, under **Destination details**, select **Send to Log Analytics workspace**.
1. Select the **Subscription** you want to use. 1. Select the **Log Analytics workspace** to use as the destination for the logs.
To view the resource logs, follow these steps:
1. Select `Logs` in your target Log Analytics.
- :::image type="content" alt-text="Log Analytics menu item" source="./media/howto-troubleshoot-diagnostic-logs/log-analytics-menu-item.png" lightbox="./media/howto-troubleshoot-diagnostic-logs/log-analytics-menu-item.png":::
+ :::image type="content" alt-text="Log Analytics menu item" source="./media/howto-troubleshoot-diagnostic-logs/log-analytics-menu-item.png" lightbox="./media/howto-troubleshoot-diagnostic-logs/log-analytics-menu-item.png":::
1. Enter `WebPubSubConnectivity`, `WebPubSubMessaging` or `WebPubSubHttpRequest`, and then select the time range to query the log. For advanced queries, see [Get started with Log Analytics in Azure Monitor](../azure-monitor/logs/log-analytics-tutorial.md).
- :::image type="content" alt-text="Query log in Log Analytics" source="./media/howto-troubleshoot-diagnostic-logs/query-log-in-log-analytics.png" lightbox="./media/howto-troubleshoot-diagnostic-logs/query-log-in-log-analytics.png":::
-
+ :::image type="content" alt-text="Query log in Log Analytics" source="./media/howto-troubleshoot-diagnostic-logs/query-log-in-log-analytics.png" lightbox="./media/howto-troubleshoot-diagnostic-logs/query-log-in-log-analytics.png":::
To use a sample query for the Web PubSub service, follow these steps:
1. Select `Logs` in your target Log Analytics.
1. Select `Queries` to open the query explorer.
1. Select `Resource type` to group sample queries by resource type.
1. Select `Run` to run the script.
- :::image type="content" alt-text="Sample query in Log Analytics" source="./media/howto-troubleshoot-diagnostic-logs/log-analytics-sample-query.png" lightbox="./media/howto-troubleshoot-diagnostic-logs/log-analytics-sample-query.png":::
-
+ :::image type="content" alt-text="Sample query in Log Analytics" source="./media/howto-troubleshoot-diagnostic-logs/log-analytics-sample-query.png" lightbox="./media/howto-troubleshoot-diagnostic-logs/log-analytics-sample-query.png":::
Archive log columns include elements listed in the following table.
-Name | Description
-- | -
-TimeGenerated | Log event time
-Collection | Collection of the log event. Allowed values are: `Connection`, `Authorization` and `Throttling`
-OperationName | Operation name of the event
-Location | Location of your Azure SignalR Service
-Level | Log event level
-CallerIpAddress | IP address of your server/client
-Message | Detailed message of log event
-UserId | Identity of the user
-ConnectionId | Identity of the connection
-ConnectionType | Type of the connection. Allowed values are: `Server` \| `Client`. `Server`: connection from server side; `Client`: connection from client side
-TransportType | Transport type of the connection. Allowed values are: `Websockets` \| `ServerSentEvents` \| `LongPolling`
+| Name | Description |
+| | - |
+| TimeGenerated | Log event time |
+| Collection | Collection of the log event. Allowed values are: `Connection`, `Authorization` and `Throttling` |
+| OperationName | Operation name of the event |
+| Location | Location of your Azure Web PubSub service |
+| Level | Log event level |
+| CallerIpAddress | IP address of your server/client |
+| Message | Detailed message of log event |
+| UserId | Identity of the user |
+| ConnectionId | Identity of the connection |
+| ConnectionType | Type of the connection. Allowed values are: `Server` \| `Client`. `Server`: connection from server side; `Client`: connection from client side |
+| TransportType | Transport type of the connection. Allowed values are: `Websockets` \| `ServerSentEvents` \| `LongPolling` |
## Troubleshoot with the resource logs
The difference between `ConnectionAborted` and `ConnectionEnded` is that `Connec
The abort reasons are listed in the following table:
-| Reason | Description |
-| - | - |
-| Connection count reaches limit | Connection count reaches limit of your current price tier. Consider scale up service unit
-| Service reloading, reconnect | Azure Web PubSub service is reloading. You need to implement your own reconnect mechanism or manually reconnect to Azure Web PubSub service |
-| Internal server transient error | Transient error occurs in Azure Web PubSub service, should be auto recovered
+| Reason | Description |
+| - | - |
+| Connection count reaches limit | The connection count reached the limit of your current pricing tier. Consider scaling up the service unit. |
+| Service reloading, reconnect | The Azure Web PubSub service is reloading. You need to implement your own reconnect mechanism or manually reconnect to the Azure Web PubSub service; see the sketch after this table. |
+| Internal server transient error | A transient error occurred in the Azure Web PubSub service; it should recover automatically. |
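For the `Service reloading, reconnect` case, a minimal client-side reconnect loop might look like this sketch (browser `WebSocket`; `getClientAccessUrl` is a hypothetical helper that calls your negotiate endpoint):

```js
async function connectWithRetry(maxDelayMs = 30000) {
  let delayMs = 1000;
  while (true) {
    try {
      const url = await getClientAccessUrl(); // hypothetical negotiate call
      const ws = new WebSocket(url);
      await new Promise((resolve, reject) => {
        ws.onopen = resolve;
        ws.onerror = reject;
      });
      delayMs = 1000; // reset the backoff after a successful connect
      await new Promise((resolve) => (ws.onclose = resolve)); // wait until dropped
    } catch (err) {
      console.log("connect failed:", err);
    }
    await new Promise((resolve) => setTimeout(resolve, delayMs)); // back off, then retry
    delayMs = Math.min(delayMs * 2, maxDelayMs);
  }
}
```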
#### Unexpected increase in connections
If you get 401 Unauthorized returned for client requests, check your resource lo
### Throttling
-If you find that you can't establish client connections to Azure Web PubSub service, check your resource logs. If you see `Connection count reaches limit` in the resource log, you established too many connections to Azure Web PubSub service and reached the connection count limit. Consider scaling up your Azure Web PubSub service instance. If you see `Message count reaches limit` in the resource log and you're using the Free tier, it means you used up the quota of messages. If you want to send more messages, consider changing your Azure Web PubSub service instance to Standard tier. For more information, see [Azure Web PubSub service Pricing](https://azure.microsoft.com/pricing/details/web-pubsub/).
+If you find that you can't establish client connections to Azure Web PubSub service, check your resource logs. If you see `Connection count reaches limit` in the resource log, you established too many connections to Azure Web PubSub service and reached the connection count limit. Consider scaling up your Azure Web PubSub service instance. If you see `Message count reaches limit` in the resource log and you're using the Free tier, it means you used up the quota of messages. If you want to send more messages, consider changing your Azure Web PubSub service instance to Standard tier. For more information, see [Azure Web PubSub service Pricing](https://azure.microsoft.com/pricing/details/web-pubsub/).
azure-web-pubsub Howto Use Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/howto-use-managed-identity.md
This article shows you how to create a managed identity for Azure Web PubSub Service and how to use it.
-> [!Important]
-> Azure Web PubSub Service can support only one managed identity. That means you can add either a system-assigned identity or a user-assigned identity.
+> [!Important]
+> Azure Web PubSub Service can support only one managed identity. That means you can add either a system-assigned identity or a user-assigned identity.
## Add a system-assigned identity
To set up a managed identity in the Azure portal, you'll first create an Azure W
2. Select **Identity**.
-4. On the **System assigned** tab, switch **Status** to **On**. Select **Save**.
+3. On the **System assigned** tab, switch **Status** to **On**. Select **Save**.
- :::image type="content" source="media/howto-use-managed-identity/system-identity-portal.png" alt-text="Add a system-assigned identity in the portal":::
+ :::image type="content" source="media/howto-use-managed-identity/system-identity-portal.png" alt-text="Add a system-assigned identity in the portal":::
## Add a user-assigned identity
Creating an Azure Web PubSub Service instance with a user-assigned identity requ
5. Search for the identity that you created earlier and select it. Select **Add**.
- :::image type="content" source="media/howto-use-managed-identity/user-identity-portal.png" alt-text="Add a user-assigned identity in the portal":::
+ :::image type="content" source="media/howto-use-managed-identity/user-identity-portal.png" alt-text="Add a user-assigned identity in the portal":::
## Use a managed identity in client events scenarios
Azure Web PubSub Service is a fully managed service, so you can't use a managed
2. Navigate to the rule and switch on **Authentication**.
- :::image type="content" source="media/howto-use-managed-identity/msi-settings.png" alt-text="msi-setting":::
+ :::image type="content" source="media/howto-use-managed-identity/msi-settings.png" alt-text="msi-setting":::
3. Select an application. The application ID becomes the `aud` claim in the obtained access token, which can be used as part of the validation in your event handler. You can choose one of the following:
- - Use default AAD application.
- - Select from existing AAD applications. The application ID of the one you choose will be used.
- - Specify an AAD application. The value should be [Resource ID of an Azure service](../active-directory/managed-identities-azure-resources/services-support-managed-identities.md#azure-services-that-support-azure-ad-authentication)
- > [!NOTE]
- > If you validate an access token by yourself in your service, you can choose any one of the resource formats. If you use Azure role-based access control (Azure RBAC) for a data plane, you must use the resource that the service provider requests.
+ - Use the default Microsoft Entra application.
+ - Select from existing Microsoft Entra applications. The application ID of the one you choose will be used.
+ - Specify a Microsoft Entra application. The value should be the [resource ID of an Azure service](../active-directory/managed-identities-azure-resources/services-support-managed-identities.md#azure-services-that-support-azure-ad-authentication).
+
+ > [!NOTE]
+ > If you validate an access token by yourself in your service, you can choose any one of the resource formats. If you use Azure role-based access control (Azure RBAC) for a data plane, you must use the resource that the service provider requests.
### Validate access tokens
The token in the `Authorization` header is a [Microsoft identity platform access
To validate access tokens, your app should also validate the audience and the signing tokens. These need to be validated against the values in the OpenID discovery document. For example, see the [tenant-independent version of the document](https://login.microsoftonline.com/common/.well-known/openid-configuration).
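A sketch of that validation in Node.js using the `jsonwebtoken` and `jwks-rsa` packages (the audience value is a placeholder for your application ID):

```js
const jwt = require("jsonwebtoken");
const jwksClient = require("jwks-rsa");

// Signing keys are published in the OpenID discovery metadata.
const client = jwksClient({
  jwksUri: "https://login.microsoftonline.com/common/discovery/v2.0/keys",
});

function getKey(header, callback) {
  client.getSigningKey(header.kid, (err, key) => {
    callback(err, key && key.getPublicKey());
  });
}

function validateAccessToken(token) {
  return new Promise((resolve, reject) => {
    jwt.verify(token, getKey, { audience: "<your-application-id>" }, (err, decoded) =>
      err ? reject(err) : resolve(decoded)
    );
  });
}
```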
-The Azure Active Directory (Azure AD) middleware has built-in capabilities for validating access tokens. You can browse through our [samples](../active-directory/develop/sample-v2-code.md) to find one in the language of your choice.
+The Microsoft Entra middleware has built-in capabilities for validating access tokens. You can browse through our [samples](../active-directory/develop/sample-v2-code.md) to find one in the language of your choice.
-We provide libraries and code samples that show how to handle token validation. There are also several open-source partner libraries available for JSON Web Token (JWT) validation. There's at least one option for almost every platform and language out there. For more information about Azure AD authentication libraries and code samples, see [Microsoft identity platform authentication libraries](../active-directory/develop/reference-v2-libraries.md).
+We provide libraries and code samples that show how to handle token validation. There are also several open-source partner libraries available for JSON Web Token (JWT) validation. There's at least one option for almost every platform and language out there. For more information about Microsoft Entra authorization libraries and code samples, see [Microsoft identity platform authentication libraries](../active-directory/develop/reference-v2-libraries.md).
-Specially, if the event handler hosts in Azure Function or Web Apps, an easy way is to [Configure Azure AD login](../app-service/configure-authentication-provider-aad.md).
+In particular, if the event handler is hosted in Azure Functions or Web Apps, an easy way is to [configure Microsoft Entra login](../app-service/configure-authentication-provider-aad.md).
## Use a managed identity for Key Vault reference
azure-web-pubsub Quickstart Live Demo https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/quickstart-live-demo.md
In this quickstart, we use the *Client URL Generator* to generate a temporarily
In real-world applications, you can use SDKs in various languages to build your own application. We also provide Functions extensions so you can easily build serverless applications.
azure-web-pubsub Quickstart Serverless https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/quickstart-serverless.md
description: A tutorial to walk through how to use Azure Web PubSub service and
-+ Last updated 05/05/2023
The Azure Web PubSub service helps you build real-time messaging web application
In this tutorial, you learn how to: > [!div class="checklist"]
-> * Build a serverless real-time chat app
-> * Work with Web PubSub function trigger bindings and output bindings
-> * Deploy the function to Azure Function App
-> * Configure Azure Authentication
-> * Configure Web PubSub Event Handler to route events and messages to the application
+>
+> - Build a serverless real-time chat app
+> - Work with Web PubSub function trigger bindings and output bindings
+> - Deploy the function to Azure Function App
+> - Configure Azure Authentication
+> - Configure Web PubSub Event Handler to route events and messages to the application
## Prerequisites

# [JavaScript](#tab/javascript)
-* A code editor, such as [Visual Studio Code](https://code.visualstudio.com/)
+- A code editor, such as [Visual Studio Code](https://code.visualstudio.com/)
-* [Node.js](https://nodejs.org/en/download/), version 10.x.
- > [!NOTE]
- > For more information about the supported versions of Node.js, see [Azure Functions runtime versions documentation](../azure-functions/functions-versions.md#languages).
-* [Azure Functions Core Tools](https://github.com/Azure/azure-functions-core-tools#installing) (v4 or higher preferred) to run Azure Function apps locally and deploy to Azure.
+- [Node.js](https://nodejs.org/en/download/), version 10.x.
+ > [!NOTE]
+ > For more information about the supported versions of Node.js, see [Azure Functions runtime versions documentation](../azure-functions/functions-versions.md#languages).
+- [Azure Functions Core Tools](https://github.com/Azure/azure-functions-core-tools#installing) (v4 or higher preferred) to run Azure Function apps locally and deploy to Azure.
-* The [Azure CLI](/cli/azure) to manage Azure resources.
+- The [Azure CLI](/cli/azure) to manage Azure resources.
# [C# in-process](#tab/csharp-in-process)
-* A code editor, such as [Visual Studio Code](https://code.visualstudio.com/).
+- A code editor, such as [Visual Studio Code](https://code.visualstudio.com/).
-* [Azure Functions Core Tools](https://github.com/Azure/azure-functions-core-tools#installing) (v4 or higher preferred) to run Azure Function apps locally and deploy to Azure.
+- [Azure Functions Core Tools](https://github.com/Azure/azure-functions-core-tools#installing) (v4 or higher preferred) to run Azure Function apps locally and deploy to Azure.
-* The [Azure CLI](/cli/azure) to manage Azure resources.
+- The [Azure CLI](/cli/azure) to manage Azure resources.
# [C# isolated process](#tab/csharp-isolated-process)
-* A code editor, such as [Visual Studio Code](https://code.visualstudio.com/).
+- A code editor, such as [Visual Studio Code](https://code.visualstudio.com/).
-* [Azure Functions Core Tools](https://github.com/Azure/azure-functions-core-tools#installing) (v4 or higher preferred) to run Azure Function apps locally and deploy to Azure.
+- [Azure Functions Core Tools](https://github.com/Azure/azure-functions-core-tools#installing) (v4 or higher preferred) to run Azure Function apps locally and deploy to Azure.
-* The [Azure CLI](/cli/azure) to manage Azure resources.
+- The [Azure CLI](/cli/azure) to manage Azure resources.
In this tutorial, you learn how to:
1. Make sure you have [Azure Functions Core Tools](https://github.com/Azure/azure-functions-core-tools#installing) installed. Then create an empty directory for the project and run the following command in that working directory.
- # [JavaScript](#tab/javascript)
- ```bash
- func init --worker-runtime javascript
- ```
+ # [JavaScript](#tab/javascript)
+
+ ```bash
+ func init --worker-runtime javascript
+ ```
+
+ # [C# in-process](#tab/csharp-in-process)
+
+ ```bash
+ func init --worker-runtime dotnet
+ ```
- # [C# in-process](#tab/csharp-in-process)
- ```bash
- func init --worker-runtime dotnet
- ```
+ # [C# isolated process](#tab/csharp-isolated-process)
- # [C# isolated process](#tab/csharp-isolated-process)
- ```bash
- func init --worker-runtime dotnet-isolated
- ```
+ ```bash
+ func init --worker-runtime dotnet-isolated
+ ```
2. Install `Microsoft.Azure.WebJobs.Extensions.WebPubSub`.
-
- # [JavaScript](#tab/javascript)
- Update `host.json`'s extensionBundle to version _3.3.0_ or later to get Web PubSub support.
- ```json
- {
- "version": "2.0",
- "extensionBundle": {
- "id": "Microsoft.Azure.Functions.ExtensionBundle",
- "version": "[3.3.*, 4.0.0)"
- }
- }
- ```
-
- # [C# in-process](#tab/csharp-in-process)
- ```bash
- dotnet add package Microsoft.Azure.WebJobs.Extensions.WebPubSub
- ```
-
- # [C# isolated process](#tab/csharp-isolated-process)
- ```bash
- dotnet add package Microsoft.Azure.Functions.Worker.Extensions.WebPubSub --prerelease
- ```
+
+ # [JavaScript](#tab/javascript)
+
+ Update `host.json`'s extensionBundle to version _3.3.0_ or later to get Web PubSub support.
+
+ ```json
+ {
+ "version": "2.0",
+ "extensionBundle": {
+ "id": "Microsoft.Azure.Functions.ExtensionBundle",
+ "version": "[3.3.*, 4.0.0)"
+ }
+ }
+ ```
+
+ # [C# in-process](#tab/csharp-in-process)
+
+ ```bash
+ dotnet add package Microsoft.Azure.WebJobs.Extensions.WebPubSub
+ ```
+
+ # [C# isolated process](#tab/csharp-isolated-process)
+
+ ```bash
+ dotnet add package Microsoft.Azure.Functions.Worker.Extensions.WebPubSub --prerelease
+ ```
3. Create an `index` function to read and host a static web page for clients.
- ```bash
- func new -n index -t HttpTrigger
- ```
+
+ ```bash
+ func new -n index -t HttpTrigger
+ ```
# [JavaScript](#tab/javascript)

   - Update `index/function.json` and copy the following JSON code.
- ```json
- {
- "bindings": [
- {
- "authLevel": "anonymous",
- "type": "httpTrigger",
- "direction": "in",
- "name": "req",
- "methods": [
- "get",
- "post"
- ]
- },
- {
- "type": "http",
- "direction": "out",
- "name": "res"
- }
- ]
- }
- ```
+ ```json
+ {
+ "bindings": [
+ {
+ "authLevel": "anonymous",
+ "type": "httpTrigger",
+ "direction": "in",
+ "name": "req",
+ "methods": ["get", "post"]
+ },
+ {
+ "type": "http",
+ "direction": "out",
+ "name": "res"
+ }
+ ]
+ }
+ ```
- Update `index/index.js` and copy the following code.
- ```js
- var fs = require('fs');
- var path = require('path');
-
- module.exports = function (context, req) {
- var index = context.executionContext.functionDirectory + '/../https://docsupdatetracker.net/index.html';
- context.log("https://docsupdatetracker.net/index.html path: " + index);
- fs.readFile(index, 'utf8', function (err, data) {
- if (err) {
- console.log(err);
- context.done(err);
- }
- context.res = {
- status: 200,
- headers: {
- 'Content-Type': 'text/html'
- },
- body: data
- };
- context.done();
- });
- }
- ```
+
+ ```js
+ var fs = require("fs");
+ var path = require("path");
+
+ module.exports = function (context, req) {
+      var index = context.executionContext.functionDirectory + "/../index.html";
+      context.log("index.html path: " + index);
+ fs.readFile(index, "utf8", function (err, data) {
+ if (err) {
+ console.log(err);
+ context.done(err);
+ }
+ context.res = {
+ status: 200,
+ headers: {
+ "Content-Type": "text/html",
+ },
+ body: data,
+ };
+ context.done();
+ });
+ };
+ ```
# [C# in-process](#tab/csharp-in-process)

   - Update `index.cs` and replace the `Run` function with the following code.
- ```c#
- [FunctionName("index")]
- public static IActionResult Run([HttpTrigger(AuthorizationLevel.Anonymous)] HttpRequest req, ExecutionContext context, ILogger log)
- {
- var indexFile = Path.Combine(context.FunctionAppDirectory, "https://docsupdatetracker.net/index.html");
- log.LogInformation($"https://docsupdatetracker.net/index.html path: {indexFile}.");
- return new ContentResult
- {
- Content = File.ReadAllText(indexFile),
- ContentType = "text/html",
- };
- }
- ```
-
- # [C# isolated process](#tab/csharp-isolated-process)
+ ```c#
+ [FunctionName("index")]
+ public static IActionResult Run([HttpTrigger(AuthorizationLevel.Anonymous)] HttpRequest req, ExecutionContext context, ILogger log)
+ {
+      var indexFile = Path.Combine(context.FunctionAppDirectory, "index.html");
+      log.LogInformation($"index.html path: {indexFile}.");
+ return new ContentResult
+ {
+ Content = File.ReadAllText(indexFile),
+ ContentType = "text/html",
+ };
+ }
+ ```
+
+ # [C# isolated process](#tab/csharp-isolated-process)
- Update `index.cs` and replace the `Run` function with the following code.
- ```c#
- [Function("index")]
- public HttpResponseData Run([HttpTrigger(AuthorizationLevel.Anonymous, "get", "post")] HttpRequestData req, FunctionContext context)
- {
- var path = Path.Combine(context.FunctionDefinition.PathToAssembly, "../https://docsupdatetracker.net/index.html");
- _logger.LogInformation($"https://docsupdatetracker.net/index.html path: {path}.");
-
- var response = req.CreateResponse();
- response.WriteString(File.ReadAllText(path));
- response.Headers.Add("Content-Type", "text/html");
- return response;
- }
- ```
+
+ ```c#
+ [Function("index")]
+ public HttpResponseData Run([HttpTrigger(AuthorizationLevel.Anonymous, "get", "post")] HttpRequestData req, FunctionContext context)
+ {
+      var path = Path.Combine(context.FunctionDefinition.PathToAssembly, "../index.html");
+      _logger.LogInformation($"index.html path: {path}.");
+
+ var response = req.CreateResponse();
+ response.WriteString(File.ReadAllText(path));
+ response.Headers.Add("Content-Type", "text/html");
+ return response;
+ }
+ ```
4. Create a `negotiate` function to help clients get the service connection URL with an access token.
- ```bash
- func new -n negotiate -t HttpTrigger
- ```
- > [!NOTE]
- > In this sample, we use [AAD](../app-service/configure-authentication-user-identities.md) user identity header `x-ms-client-principal-name` to retrieve `userId`. And this won't work in a local function. You can make it empty or change to other ways to get or generate `userId` when playing in local. For example, let client type a user name and pass it in query like `?user={$username}` when call `negotiate` function to get service connection url. And in the `negotiate` function, set `userId` with value `{query.user}`.
-
- # [JavaScript](#tab/javascript)
- - Update `negotiate/function.json` and copy following json codes.
- ```json
- {
- "bindings": [
- {
- "authLevel": "anonymous",
- "type": "httpTrigger",
- "direction": "in",
- "name": "req"
- },
- {
- "type": "http",
- "direction": "out",
- "name": "res"
- },
- {
- "type": "webPubSubConnection",
- "name": "connection",
- "hub": "simplechat",
- "userId": "{headers.x-ms-client-principal-name}",
- "direction": "in"
- }
- ]
- }
- ```
- - Update `negotiate/index.js` and copy following codes.
- ```js
- module.exports = function (context, req, connection) {
- context.res = { body: connection };
- context.done();
- };
- ```
-
- # [C# in-process](#tab/csharp-in-process)
- - Update `negotiate.cs` and replace `Run` function with following codes.
- ```c#
- [FunctionName("negotiate")]
- public static WebPubSubConnection Run(
- [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = null)] HttpRequest req,
- [WebPubSubConnection(Hub = "simplechat", UserId = "{headers.x-ms-client-principal-name}")] WebPubSubConnection connection,
- ILogger log)
- {
- log.LogInformation("Connecting...");
- return connection;
- }
- ```
- - Add `using` statements in header to resolve required dependencies.
- ```c#
- using Microsoft.Azure.WebJobs.Extensions.WebPubSub;
- ```
-
- # [C# isolated process](#tab/csharp-isolated-process)
- - Update `negotiate.cs` and replace `Run` function with following codes.
- ```c#
- [Function("negotiate")]
- public HttpResponseData Run([HttpTrigger(AuthorizationLevel.Anonymous, "get", "post")] HttpRequestData req,
- [WebPubSubConnectionInput(Hub = "simplechat", UserId = "{headers.x-ms-client-principal-name}")] WebPubSubConnection connectionInfo)
- {
- var response = req.CreateResponse(HttpStatusCode.OK);
- response.WriteAsJsonAsync(connectionInfo);
- return response;
- }
- ```
+
+ ```bash
+ func new -n negotiate -t HttpTrigger
+ ```
+
+ > [!NOTE]
+   > In this sample, we use the [AAD](../app-service/configure-authentication-user-identities.md) user identity header `x-ms-client-principal-name` to retrieve `userId`. This header isn't available when you run the function locally. When testing locally, you can leave `userId` empty or get or generate it another way. For example, let the client enter a user name and pass it in the query string, like `?user={$username}`, when calling the `negotiate` function to get the service connection URL. Then, in the `negotiate` function, set `userId` to `{query.user}`, as the binding sketch below shows.
+
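   For local testing, the change described in the note amounts to swapping the `userId` expression in the `webPubSubConnection` binding; a sketch of just that binding entry:

   ```json
   {
     "type": "webPubSubConnection",
     "name": "connection",
     "hub": "simplechat",
     "userId": "{query.user}",
     "direction": "in"
   }
   ```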
+ # [JavaScript](#tab/javascript)
+
+   - Update `negotiate/function.json` and copy the following JSON code.
+ ```json
+ {
+ "bindings": [
+ {
+ "authLevel": "anonymous",
+ "type": "httpTrigger",
+ "direction": "in",
+ "name": "req"
+ },
+ {
+ "type": "http",
+ "direction": "out",
+ "name": "res"
+ },
+ {
+ "type": "webPubSubConnection",
+ "name": "connection",
+ "hub": "simplechat",
+ "userId": "{headers.x-ms-client-principal-name}",
+ "direction": "in"
+ }
+ ]
+ }
+ ```
+   - Update `negotiate/index.js` and copy the following code.
+ ```js
+ module.exports = function (context, req, connection) {
+ context.res = { body: connection };
+ context.done();
+ };
+ ```
+
+ # [C# in-process](#tab/csharp-in-process)
+
+   - Update `negotiate.cs` and replace the `Run` function with the following code.
+ ```c#
+ [FunctionName("negotiate")]
+ public static WebPubSubConnection Run(
+ [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = null)] HttpRequest req,
+ [WebPubSubConnection(Hub = "simplechat", UserId = "{headers.x-ms-client-principal-name}")] WebPubSubConnection connection,
+ ILogger log)
+ {
+ log.LogInformation("Connecting...");
+ return connection;
+ }
+ ```
+   - Add `using` statements at the top of the file to resolve the required dependencies.
+ ```c#
+ using Microsoft.Azure.WebJobs.Extensions.WebPubSub;
+ ```
+
+ # [C# isolated process](#tab/csharp-isolated-process)
+
+   - Update `negotiate.cs` and replace the `Run` function with the following code.
+ ```c#
+ [Function("negotiate")]
+ public HttpResponseData Run([HttpTrigger(AuthorizationLevel.Anonymous, "get", "post")] HttpRequestData req,
+ [WebPubSubConnectionInput(Hub = "simplechat", UserId = "{headers.x-ms-client-principal-name}")] WebPubSubConnection connectionInfo)
+ {
+ var response = req.CreateResponse(HttpStatusCode.OK);
+ response.WriteAsJsonAsync(connectionInfo);
+ return response;
+ }
+ ```
5. Create a `message` function to broadcast client messages through the service.

   ```bash
   func new -n message -t HttpTrigger
   ```
In this tutorial, you learn how to:
> This function actually uses `WebPubSubTrigger`. However, `WebPubSubTrigger` isn't integrated into the function templates, so we use `HttpTrigger` to initialize the function template and then change the trigger type in code.

   # [JavaScript](#tab/javascript)

   - Update `message/function.json` and copy the following JSON code.
- ```json
- {
- "bindings": [
- {
- "type": "webPubSubTrigger",
- "direction": "in",
- "name": "data",
- "hub": "simplechat",
- "eventName": "message",
- "eventType": "user"
- },
- {
- "type": "webPubSub",
- "name": "actions",
- "hub": "simplechat",
- "direction": "out"
- }
- ]
- }
- ```
+ ```json
+ {
+ "bindings": [
+ {
+ "type": "webPubSubTrigger",
+ "direction": "in",
+ "name": "data",
+ "hub": "simplechat",
+ "eventName": "message",
+ "eventType": "user"
+ },
+ {
+ "type": "webPubSub",
+ "name": "actions",
+ "hub": "simplechat",
+ "direction": "out"
+ }
+ ]
+ }
+ ```
- Update `message/index.js` and copy the following code.
- ```js
- module.exports = async function (context, data) {
- context.bindings.actions = {
- "actionName": "sendToAll",
- "data": `[${context.bindingData.request.connectionContext.userId}] ${data}`,
- "dataType": context.bindingData.dataType
- };
- // UserEventResponse directly return to caller
- var response = {
- "data": '[SYSTEM] ack.',
- "dataType" : "text"
- };
- return response;
- };
- ```
-
- # [C# in-process](#tab/csharp-in-process)
- - Update `message.cs` and replace `Run` function with following codes.
- ```c#
- [FunctionName("message")]
- public static async Task<UserEventResponse> Run(
- [WebPubSubTrigger("simplechat", WebPubSubEventType.User, "message")] UserEventRequest request,
- BinaryData data,
- WebPubSubDataType dataType,
- [WebPubSub(Hub = "simplechat")] IAsyncCollector<WebPubSubAction> actions)
- {
- await actions.AddAsync(WebPubSubAction.CreateSendToAllAction(
- BinaryData.FromString($"[{request.ConnectionContext.UserId}] {data.ToString()}"),
- dataType));
- return new UserEventResponse
- {
- Data = BinaryData.FromString("[SYSTEM] ack"),
- DataType = WebPubSubDataType.Text
- };
- }
- ```
- - Add `using` statements in header to resolve required dependencies.
- ```c#
- using Microsoft.Azure.WebJobs.Extensions.WebPubSub;
- using Microsoft.Azure.WebPubSub.Common;
- ```
-
- # [C# isolated process](#tab/csharp-isolated-process)
- - Update `message.cs` and replace `Run` function with following codes.
- ```c#
- [Function("message")]
- [WebPubSubOutput(Hub = "simplechat")]
- public SendToAllAction Run(
- [WebPubSubTrigger("simplechat", WebPubSubEventType.User, "message")] UserEventRequest request)
- {
- return new SendToAllAction
- {
- Data = BinaryData.FromString($"[{request.ConnectionContext.UserId}] {request.Data.ToString()}"),
- DataType = request.DataType
- };
- }
- ```
+ ```js
+ module.exports = async function (context, data) {
+ context.bindings.actions = {
+ actionName: "sendToAll",
+ data: `[${context.bindingData.request.connectionContext.userId}] ${data}`,
+ dataType: context.bindingData.dataType,
+ };
+ // UserEventResponse directly return to caller
+ var response = {
+ data: "[SYSTEM] ack.",
+ dataType: "text",
+ };
+ return response;
+ };
+ ```
+
+ # [C# in-process](#tab/csharp-in-process)
+
+   - Update `message.cs` and replace the `Run` function with the following code.
+ ```c#
+ [FunctionName("message")]
+ public static async Task<UserEventResponse> Run(
+ [WebPubSubTrigger("simplechat", WebPubSubEventType.User, "message")] UserEventRequest request,
+ BinaryData data,
+ WebPubSubDataType dataType,
+ [WebPubSub(Hub = "simplechat")] IAsyncCollector<WebPubSubAction> actions)
+ {
+ await actions.AddAsync(WebPubSubAction.CreateSendToAllAction(
+ BinaryData.FromString($"[{request.ConnectionContext.UserId}] {data.ToString()}"),
+ dataType));
+ return new UserEventResponse
+ {
+ Data = BinaryData.FromString("[SYSTEM] ack"),
+ DataType = WebPubSubDataType.Text
+ };
+ }
+ ```
+ - Add `using` statements in header to resolve required dependencies.
+ ```c#
+ using System.Threading.Tasks;
+ using Microsoft.Azure.WebJobs.Extensions.WebPubSub;
+ using Microsoft.Azure.WebPubSub.Common;
+ ```
+
+ # [C# isolated process](#tab/csharp-isolated-process)
+
+ - Update `message.cs` and replace `Run` function with following codes.
+ ```c#
+ [Function("message")]
+ [WebPubSubOutput(Hub = "simplechat")]
+ public SendToAllAction Run(
+ [WebPubSubTrigger("simplechat", WebPubSubEventType.User, "message")] UserEventRequest request)
+ {
+ return new SendToAllAction
+ {
+ Data = BinaryData.FromString($"[{request.ConnectionContext.UserId}] {request.Data.ToString()}"),
+ DataType = request.DataType
+ };
+ }
+ ```
6. Add the client single page `index.html` in the project root folder and copy in the content.
- ```html
- <html>
- <body>
- <h1>Azure Web PubSub Serverless Chat App</h1>
- <div id="login"></div>
- <p></p>
- <input id="message" placeholder="Type to chat...">
- <div id="messages"></div>
- <script>
- (async function () {
- let authenticated = window.location.href.includes('?authenticated=true');
- if (!authenticated) {
- // auth
- let login = document.querySelector("#login");
- let link = document.createElement('a');
- link.href = `${window.location.origin}/.auth/login/aad?post_login_redirect_url=/api/index?authenticated=true`;
- link.text = "login";
- login.appendChild(link);
- }
- else {
- // negotiate
- let messages = document.querySelector('#messages');
- let res = await fetch(`${window.location.origin}/api/negotiate`, {
- credentials: "include"
- });
- let url = await res.json();
- // connect
- let ws = new WebSocket(url.url);
- ws.onopen = () => console.log('connected');
- ws.onmessage = event => {
- let m = document.createElement('p');
- m.innerText = event.data;
- messages.appendChild(m);
- };
- let message = document.querySelector('#message');
- message.addEventListener('keypress', e => {
- if (e.charCode !== 13) return;
- ws.send(message.value);
- message.value = '';
- });
- }
- })();
- </script>
- </body>
- </html>
- ```
-
- # [JavaScript](#tab/javascript)
-
- # [C# in-process](#tab/csharp-in-process)
- Since C# project compiles files to a different output folder, you need to update your `*.csproj` to make the content page go with it.
- ```xml
- <ItemGroup>
-    <None Update="index.html">
- <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
- </None>
- </ItemGroup>
- ```
-
- # [C# isolated process](#tab/csharp-isolated-process)
- Since C# project compiles files to a different output folder, you need to update your `*.csproj` to make the content page go with it.
- ```xml
- <ItemGroup>
-    <None Update="index.html">
- <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
- </None>
- </ItemGroup>
- ```
+
+ ```html
+ <html>
+ <body>
+ <h1>Azure Web PubSub Serverless Chat App</h1>
+ <div id="login"></div>
+ <p></p>
+ <input id="message" placeholder="Type to chat..." />
+ <div id="messages"></div>
+ <script>
+ (async function () {
+ let authenticated = window.location.href.includes(
+ "?authenticated=true"
+ );
+ if (!authenticated) {
+ // auth
+ let login = document.querySelector("#login");
+ let link = document.createElement("a");
+ link.href = `${window.location.origin}/.auth/login/aad?post_login_redirect_url=/api/index?authenticated=true`;
+ link.text = "login";
+ login.appendChild(link);
+ } else {
+ // negotiate
+ let messages = document.querySelector("#messages");
+ let res = await fetch(`${window.location.origin}/api/negotiate`, {
+ credentials: "include",
+ });
+ let url = await res.json();
+ // connect
+ let ws = new WebSocket(url.url);
+ ws.onopen = () => console.log("connected");
+ ws.onmessage = (event) => {
+ let m = document.createElement("p");
+ m.innerText = event.data;
+ messages.appendChild(m);
+ };
+ let message = document.querySelector("#message");
+ message.addEventListener("keypress", (e) => {
+ if (e.charCode !== 13) return;
+ ws.send(message.value);
+ message.value = "";
+ });
+ }
+ })();
+ </script>
+ </body>
+ </html>
+ ```
+
+ # [JavaScript](#tab/javascript)
+
+ # [C# in-process](#tab/csharp-in-process)
+
+ Since C# project compiles files to a different output folder, you need to update your `*.csproj` to make the content page go with it.
+
+ ```xml
+ <ItemGroup>
+    <None Update="index.html">
+ <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
+ </None>
+ </ItemGroup>
+ ```
+
+ # [C# isolated process](#tab/csharp-isolated-process)
+
+ Since C# project compiles files to a different output folder, you need to update your `*.csproj` to make the content page go with it.
+
+ ```xml
+ <ItemGroup>
+    <None Update="index.html">
+ <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
+ </None>
+ </ItemGroup>
+ ```
## Create and Deploy the Azure Function App

Before you can deploy your function code to Azure, you need to create three resources:
-* A resource group, which is a logical container for related resources.
-* A storage account, which is used to maintain state and other information about your functions.
-* A function app, which provides the environment for executing your function code. A function app maps to your local function project and lets you group functions as a logical unit for easier management, deployment and sharing of resources.
-Use the following commands to create these items.
+- A resource group, which is a logical container for related resources.
+- A storage account, which is used to maintain state and other information about your functions.
+- A function app, which provides the environment for executing your function code. A function app maps to your local function project and lets you group functions as a logical unit for easier management, deployment and sharing of resources.
+
+Use the following commands to create these items.
1. If you haven't done so already, sign in to Azure:
- ```azurecli
- az login
- ```
+ ```azurecli
+ az login
+ ```
1. Create a resource group or you can skip by reusing the one of Azure Web PubSub service:
- ```azurecli
- az group create -n WebPubSubFunction -l <REGION>
- ```
+ ```azurecli
+ az group create -n WebPubSubFunction -l <REGION>
+ ```
1. Create a general-purpose storage account in your resource group and region:
- ```azurecli
- az storage account create -n <STORAGE_NAME> -l <REGION> -g WebPubSubFunction
- ```
+ ```azurecli
+ az storage account create -n <STORAGE_NAME> -l <REGION> -g WebPubSubFunction
+ ```
1. Create the function app in Azure:
- # [JavaScript](#tab/javascript)
+ # [JavaScript](#tab/javascript)
+
+ ```azurecli
+  az functionapp create --resource-group WebPubSubFunction --consumption-plan-location <REGION> --runtime node --runtime-version 14 --functions-version 4 --name <FUNCTIONAPP_NAME> --storage-account <STORAGE_NAME>
+ ```
- ```azurecli
- az functionapp create --resource-group WebPubSubFunction --consumption-plan-location <REGION> --runtime node --runtime-version 14 --functions-version 4 --name <FUNCIONAPP_NAME> --storage-account <STORAGE_NAME>
- ```
- > [!NOTE]
- > Check [Azure Functions runtime versions documentation](../azure-functions/functions-versions.md#languages) to set `--runtime-version` parameter to supported value.
+ > [!NOTE]
+  > Check [Azure Functions runtime versions documentation](../azure-functions/functions-versions.md#languages) to set the `--runtime-version` parameter to a supported value.
- # [C# in-process](#tab/csharp-in-process)
+ # [C# in-process](#tab/csharp-in-process)
- ```azurecli
- az functionapp create --resource-group WebPubSubFunction --consumption-plan-location <REGION> --runtime dotnet --functions-version 4 --name <FUNCIONAPP_NAME> --storage-account <STORAGE_NAME>
- ```
+ ```azurecli
+  az functionapp create --resource-group WebPubSubFunction --consumption-plan-location <REGION> --runtime dotnet --functions-version 4 --name <FUNCTIONAPP_NAME> --storage-account <STORAGE_NAME>
+ ```
- # [C# isolated process](#tab/csharp-isolated-process)
+ # [C# isolated process](#tab/csharp-isolated-process)
- ```azurecli
- az functionapp create --resource-group WebPubSubFunction --consumption-plan-location <REGION> --runtime dotnet-isolated --functions-version 4 --name <FUNCIONAPP_NAME> --storage-account <STORAGE_NAME>
- ```
+ ```azurecli
+  az functionapp create --resource-group WebPubSubFunction --consumption-plan-location <REGION> --runtime dotnet-isolated --functions-version 4 --name <FUNCTIONAPP_NAME> --storage-account <STORAGE_NAME>
+ ```
1. Deploy the function project to Azure:
- After you have successfully created your function app in Azure, you're now ready to deploy your local functions project by using the [func azure functionapp publish](./../azure-functions/functions-run-local.md) command.
+ After you have successfully created your function app in Azure, you're now ready to deploy your local functions project by using the [func azure functionapp publish](./../azure-functions/functions-run-local.md) command.
+
+ ```bash
+  func azure functionapp publish <FUNCTIONAPP_NAME>
+ ```
- ```bash
- func azure functionapp publish <FUNCIONAPP_NAME>
- ```
1. Configure the `WebPubSubConnectionString` setting for the function app: First, find your Web PubSub resource in the **Azure portal** and copy the connection string under **Keys**. Then, go to the function app's settings in the **Azure portal** -> **Settings** -> **Configuration**, and add a new item under **Application settings** with the name `WebPubSubConnectionString` and your Web PubSub resource connection string as the value.
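You can also set this app setting from the command line; a minimal sketch, assuming the function app and resource group names used earlier in this quickstart:

```azurecli
az functionapp config appsettings set \
  --name <FUNCTIONAPP_NAME> \
  --resource-group WebPubSubFunction \
  --settings "WebPubSubConnectionString=<YOUR_CONNECTION_STRING>"
```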
Go to **Azure portal** -> Find your Function App resource -> **App keys** -> **S
Set the `Event Handler` in the Azure Web PubSub service. Go to the **Azure portal** -> find your Web PubSub resource -> **Settings**. Add a new hub setting mapped to the function in use. Replace `<FUNCTIONAPP_NAME>` and `<APP_KEY>` with your own values.
- - Hub Name: `simplechat`
- - URL Template: **https://<FUNCTIONAPP_NAME>.azurewebsites.net/runtime/webhooks/webpubsub?code=<APP_KEY>**
- - User Event Pattern: *
- - System Events: -(No need to configure in this sample)
+- Hub Name: `simplechat`
+- URL Template: **https://<FUNCTIONAPP_NAME>.azurewebsites.net/runtime/webhooks/webpubsub?code=<APP_KEY>**
+- User Event Pattern: \*
+- System Events: (no need to configure in this sample)
:::image type="content" source="media/quickstart-serverless/set-event-handler.png" alt-text="Screenshot of setting the event handler.":::
Go to **Azure portal** -> Find your Function App resource -> **Authentication**.
Here we choose `Microsoft` as identify provider, which uses `x-ms-client-principal-name` as `userId` in the `negotiate` function. Besides, you can configure other identity providers following the links, and don't forget update the `userId` value in `negotiate` function accordingly.
-* [Microsoft(Azure AD)](../app-service/configure-authentication-provider-aad.md)
-* [Facebook](../app-service/configure-authentication-provider-facebook.md)
-* [Google](../app-service/configure-authentication-provider-google.md)
-* [Twitter](../app-service/configure-authentication-provider-twitter.md)
+- [Microsoft Entra ID](../app-service/configure-authentication-provider-aad.md)
+- [Facebook](../app-service/configure-authentication-provider-facebook.md)
+- [Google](../app-service/configure-authentication-provider-google.md)
+- [Twitter](../app-service/configure-authentication-provider-twitter.md)
## Try the application

Now you're able to test your page from your function app: `https://<FUNCTIONAPP_NAME>.azurewebsites.net/api/index`. See the snapshot.

1. Click `login` to authenticate yourself.
2. Type a message in the input box to chat.
If you're not going to continue to use this app, delete all resources created by
## Next steps
-In this quickstart, you learned how to run a serverless chat application. Now, you could start to build your own application.
+In this quickstart, you learned how to run a serverless chat application. Now, you could start to build your own application.
-> [!div class="nextstepaction"]
+> [!div class="nextstepaction"]
> [Azure Web PubSub bindings for Azure Functions](./reference-functions-bindings.md)
-> [!div class="nextstepaction"]
+> [!div class="nextstepaction"]
> [Quick start: Create a simple chatroom with Azure Web PubSub](./tutorial-build-chat.md)
-> [!div class="nextstepaction"]
+> [!div class="nextstepaction"]
> [Explore more Azure Web PubSub samples](https://github.com/Azure/azure-webpubsub/tree/main/samples)
azure-web-pubsub Quickstarts Push Messages From Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/quickstarts-push-messages-from-server.md
cd webpubsub-quickstart-subscriber
<groupId>com.azure</groupId>
<artifactId>azure-messaging-webpubsub</artifactId>
<version>1.0.0</version>
-</dependen
+</dependency>
<dependency>
<groupId>org.java-websocket</groupId>
<artifactId>Java-WebSocket</artifactId>
<version>1.5.1</version>
-</dependen
+</dependency>
```

In Web PubSub, you can connect to the service and subscribe to messages through WebSocket connections. WebSocket is a full-duplex communication channel that allows the service to push messages to your client in real time. You can use any API or library that supports WebSocket. For this sample, we use the [Java-WebSocket](https://github.com/TooTallNate/Java-WebSocket) package.
azure-web-pubsub Reference Rest Api Data Plane https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/reference-rest-api-data-plane.md
As illustrated by the above workflow graph, and also detailed workflow described
In each HTTP request, an authorization header with a [JSON Web Token (JWT)](https://en.wikipedia.org/wiki/JSON_Web_Token) is required to authenticate with Azure Web PubSub Service.

<a name="signing"></a>

#### Signing Algorithm and Signature

`HS256`, namely HMAC-SHA256, is used as the signing algorithm.
You should use the `AccessKey` in Azure Web PubSub Service instance's connection
The following claims are required in the JWT token.
-Claim Type | Is Required | Description
-||
-`aud` | true | Should be the **SAME** as your HTTP request url. For example, a broadcast request's audience looks like: `https://example.webpubsub.azure.com/api/hubs/myhub/:send?api-version=2022-11-01`.
-`exp` | true | Epoch time when this token will be expired.
+| Claim Type | Is Required | Description |
+| - | -- | - |
+| `aud` | true | Should be the **SAME** as your HTTP request url. For example, a broadcast request's audience looks like: `https://example.webpubsub.azure.com/api/hubs/myhub/:send?api-version=2022-11-01`. |
+| `exp` | true | Epoch time when this token will be expired. |
A pseudocode example in JavaScript:

```js
const bearerToken = jwt.sign({}, connectionString.accessKey, {
- audience: request.url,
- expiresIn: "1h",
- algorithm: "HS256",
- });
+ audience: request.url,
+ expiresIn: "1h",
+ algorithm: "HS256",
+});
```
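For illustration, the signed token is then sent in the `Authorization` header of the request; a minimal JavaScript sketch, where `requestUrl` is assumed to be the same URL used as the `aud` claim (a broadcast endpoint in this example):

```js
// A sketch only: send a broadcast request authenticated with the signed JWT.
// `bearerToken` comes from the pseudocode above; `requestUrl` is an assumption.
const response = await fetch(requestUrl, {
  method: "POST",
  headers: {
    Authorization: `Bearer ${bearerToken}`,
    "Content-Type": "text/plain",
  },
  body: "Hello world!",
});
console.log(response.status); // a 2xx status indicates the request was accepted
```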
-### Authenticate via Azure Active Directory Token (Azure AD Token)
+### Authenticate via Microsoft Entra token
-Like using `AccessKey`, a [JSON Web Token (JWT)](https://en.wikipedia.org/wiki/JSON_Web_Token) is also required to authenticate the HTTP request.
+Like using `AccessKey`, a [JSON Web Token (JWT)](https://en.wikipedia.org/wiki/JSON_Web_Token) is also required to authenticate the HTTP request.
-**The difference is**, in this scenario, JWT Token is generated by Azure Active Directory.
+**The difference is**, in this scenario, JWT Token is generated by Microsoft Entra ID.
-[Learn how to generate Azure AD Tokens](../active-directory/develop/reference-v2-libraries.md)
+[Learn how to generate Microsoft Entra tokens](../active-directory/develop/reference-v2-libraries.md)
The credential scope used should be `https://webpubsub.azure.com/.default`.
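For example, with the `@azure/identity` package you can request a token for that scope; a minimal sketch, assuming the package is installed and a credential is available in your environment:

```js
const { DefaultAzureCredential } = require("@azure/identity");

// Request a Microsoft Entra token with the Web PubSub credential scope.
const credential = new DefaultAzureCredential();
const accessToken = await credential.getToken("https://webpubsub.azure.com/.default");
// accessToken.token carries the JWT to place in the Authorization header.
```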
You could also use **Role Based Access Control (RBAC)** to authorize the request
[Learn how to configure Role Based Access Control roles for your resource](./howto-authorize-from-application.md#add-role-assignments-on-azure-portal)
-## APIs
+## APIs
-| Operation Group | Description |
-|--|-|
-|[Service Status](/rest/api/webpubsub/dataplane/health-api)| Provides operations to check the service status |
-|[Hub Operations](/rest/api/webpubsub/dataplane/web-pub-sub)| Provides operations to manage the connections and send messages to them. |
+| Operation Group | Description |
+| -- | |
+| [Service Status](/rest/api/webpubsub/dataplane/health-api) | Provides operations to check the service status |
+| [Hub Operations](/rest/api/webpubsub/dataplane/web-pub-sub) | Provides operations to manage the connections and send messages to them. |
azure-web-pubsub Reference Server Sdk Csharp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/reference-server-sdk-csharp.md
-+ Last updated 11/11/2021
You can use this library in your app server side to manage the WebSocket client
Use this library to:
-- Send messages to hubs and groups.
+- Send messages to hubs and groups.
- Send messages to particular users and connections.
- Organize users and connections into groups.
- Close connections
You can also [enable console logging](https://github.com/Azure/azure-sdk-for-net
[azure_sub]: https://azure.microsoft.com/free/dotnet/ [samples_ref]: https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/webpubsub/Azure.Messaging.WebPubSub/tests/Samples/
-[awps_sample]: https://github.com/Azure/azure-webpubsub/tree/main/samples/csharp
+[awps_sample]: https://github.com/Azure/azure-webpubsub/tree/main/samples/csharp
azure-web-pubsub Reference Server Sdk Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/reference-server-sdk-java.md
-+ Last updated 01/31/2023

# Azure Web PubSub service client library for Java

[Azure Web PubSub service](./index.yml) is an Azure-managed service that helps developers easily build web applications with real-time features and a publish-subscribe pattern. Any scenario that requires real-time publish-subscribe messaging between server and clients or among clients can use Azure Web PubSub service. Traditional real-time features that often require polling from the server or submitting HTTP requests can also use Azure Web PubSub service.
Use this library to:
For more information, see:
-- [Azure Web PubSub client library Java SDK][source_code]
-- [Azure Web PubSub client library reference documentation][api]
+- [Azure Web PubSub client library Java SDK][source_code]
+- [Azure Web PubSub client library reference documentation][api]
- [Azure Web PubSub client library samples for Java][samples_readme]
- [Azure Web PubSub service documentation][product_documentation]
For more information, see:
### Include the Package
-[//]: # ({x-version-update-start;com.azure:azure-messaging-webpubsub;current})
+[//]: # "{x-version-update-start;com.azure:azure-messaging-webpubsub;current}"
```xml
<dependency>
For more information, see:
</dependency>
```
-[//]: # ({x-version-update-end})
+[//]: # "{x-version-update-end}"
### Create a `WebPubSubServiceClient` using connection string

<!-- embedme ./src/samples/java/com/azure/messaging/webpubsub/ReadmeSamples.java#L21-L24 -->

```java
WebPubSubServiceClient webPubSubServiceClient = new WebPubSubServiceClientBuilder()
    .connectionString("{connection-string}")
WebPubSubServiceClient webPubSubServiceClient = new WebPubSubServiceClientBuilde
### Create a `WebPubSubServiceClient` using access key

<!-- embedme ./src/samples/java/com/azure/messaging/webpubsub/ReadmeSamples.java#L31-L35 -->

```java
WebPubSubServiceClient webPubSubServiceClient = new WebPubSubServiceClientBuilder()
    .credential(new AzureKeyCredential("{access-key}"))
WebPubSubServiceClient webPubSubServiceClient = new WebPubSubServiceClientBuilde
### Broadcast message to entire hub

<!-- embedme ./src/samples/java/com/azure/messaging/webpubsub/ReadmeSamples.java#L47-L47 -->

```java
webPubSubServiceClient.sendToAll("Hello world!", WebPubSubContentType.TEXT_PLAIN);
```
webPubSubServiceClient.sendToAll("Hello world!", WebPubSubContentType.TEXT_PLAIN
### Broadcast message to a group

<!-- embedme ./src/samples/java/com/azure/messaging/webpubsub/ReadmeSamples.java#L59-L59 -->

```java
webPubSubServiceClient.sendToGroup("java", "Hello Java!", WebPubSubContentType.TEXT_PLAIN);
```
webPubSubServiceClient.sendToGroup("java", "Hello Java!", WebPubSubContentType.T
### Send message to a connection

<!-- embedme ./src/samples/java/com/azure/messaging/webpubsub/ReadmeSamples.java#L71-L71 -->

```java
webPubSubServiceClient.sendToConnection("myconnectionid", "Hello connection!", WebPubSubContentType.TEXT_PLAIN);
```
webPubSubServiceClient.sendToConnection("myconnectionid", "Hello connection!", W
<a name="send-to-user"></a>

### Send message to a user

<!-- embedme ./src/samples/java/com/azure/messaging/webpubsub/ReadmeSamples.java#L83-L83 -->

```java
webPubSubServiceClient.sendToUser("Andy", "Hello Andy!", WebPubSubContentType.TEXT_PLAIN);
```
the client library to use the Netty HTTP client. Configuring or changing the HTT
By default, all client libraries use the Tomcat-native Boring SSL library to enable native-level performance for SSL operations. The Boring SSL library is an uber jar containing native libraries for Linux / macOS / Windows, and provides
-better performance compared to the default SSL implementation within the JDK. For more information, including how to reduce the dependency size, see [performance tuning][https://github.com/Azure/azure-sdk-for-java/wiki/Performance-Tuning].
+better performance compared to the default SSL implementation within the JDK. For more information, including how to reduce the dependency size, see [performance tuning](https://github.com/Azure/azure-sdk-for-java/wiki/Performance-Tuning).
[!INCLUDE [next step](includes/include-next-step.md)]
better performance compared to the default SSL implementation within the JDK. Fo
[coc]: https://opensource.microsoft.com/codeofconduct/
[coc_faq]: https://opensource.microsoft.com/codeofconduct/faq/
[coc_contact]: mailto:opencode@microsoft.com
-[api]: /java/api/overview/azure/messaging-webpubsub-readme
+[api]: /java/api/overview/azure/messaging-webpubsub-readme
azure-web-pubsub Reference Server Sdk Js https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/reference-server-sdk-js.md
-+ Last updated 11/11/2021
npm install @azure/web-pubsub
```js
const { WebPubSubServiceClient } = require("@azure/web-pubsub");
-const serviceClient = new WebPubSubServiceClient("<ConnectionString>", "<hubName>");
+const serviceClient = new WebPubSubServiceClient(
+ "<ConnectionString>",
+ "<hubName>"
+);
```

You can also authenticate the `WebPubSubServiceClient` using an endpoint and an `AzureKeyCredential`:

```js
-const { WebPubSubServiceClient, AzureKeyCredential } = require("@azure/web-pubsub");
+const {
+ WebPubSubServiceClient,
+ AzureKeyCredential,
+} = require("@azure/web-pubsub");
const key = new AzureKeyCredential("<Key>");
-const serviceClient = new WebPubSubServiceClient("<Endpoint>", key, "<hubName>");
+const serviceClient = new WebPubSubServiceClient(
+ "<Endpoint>",
+ key,
+ "<hubName>"
+);
```
-Or authenticate the `WebPubSubServiceClient` using [Azure Active Directory][aad_doc]
+Or authenticate the `WebPubSubServiceClient` using [Microsoft Entra ID][microsoft_entra_id_doc]
1. Install the `@azure/identity` dependency
npm install @azure/identity
1. Update the source code to use `DefaultAzureCredential`:

```js
-const { WebPubSubServiceClient, AzureKeyCredential } = require("@azure/web-pubsub");
+const {
+ WebPubSubServiceClient,
+ AzureKeyCredential,
+} = require("@azure/web-pubsub");
const key = new DefaultAzureCredential();
-const serviceClient = new WebPubSubServiceClient("<Endpoint>", key, "<hubName>");
+const serviceClient = new WebPubSubServiceClient(
+ "<Endpoint>",
+ key,
+ "<hubName>"
+);
```

### Examples
const serviceClient = new WebPubSubServiceClient("<Endpoint>", key, "<hubName>")
```js
const { WebPubSubServiceClient } = require("@azure/web-pubsub");
-const serviceClient = new WebPubSubServiceClient("<ConnectionString>", "<hubName>");
+const serviceClient = new WebPubSubServiceClient(
+ "<ConnectionString>",
+ "<hubName>"
+);
// Get the access token for the WebSocket client connection to use
let token = await serviceClient.getClientAccessToken();
token = await serviceClient.getClientAccessToken({ userId: "user1" });
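As a usage sketch (assuming the `ws` package is installed), the returned `url` can be handed straight to a WebSocket client:

```js
const WebSocket = require("ws");

// Generate a client access URL for user1 and open the connection with it.
let { url } = await serviceClient.getClientAccessToken({ userId: "user1" });
const ws = new WebSocket(url);
ws.on("open", () => console.log("connected"));
ws.on("message", (data) => console.log("received:", data.toString()));
```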
```js
const { WebPubSubServiceClient } = require("@azure/web-pubsub");
-const serviceClient = new WebPubSubServiceClient("<ConnectionString>", "<hubName>");
+const serviceClient = new WebPubSubServiceClient(
+ "<ConnectionString>",
+ "<hubName>"
+);
// Send a JSON message
await serviceClient.sendToAll({ message: "Hello world!" });
await serviceClient.sendToAll(payload.buffer);
```js
const { WebPubSubServiceClient } = require("@azure/web-pubsub");
-const serviceClient = new WebPubSubServiceClient("<ConnectionString>", "<hubName>");
+const serviceClient = new WebPubSubServiceClient(
+ "<ConnectionString>",
+ "<hubName>"
+);
const groupClient = serviceClient.group("<groupName>");
await groupClient.sendToAll(payload.buffer);
```js
const { WebPubSubServiceClient } = require("@azure/web-pubsub");
-const serviceClient = new WebPubSubServiceClient("<ConnectionString>", "<hubName>");
+const serviceClient = new WebPubSubServiceClient(
+ "<ConnectionString>",
+ "<hubName>"
+);
// Send a JSON message
await serviceClient.sendToUser("user1", { message: "Hello world!" });
// Send a plain text message
-await serviceClient.sendToUser("user1", "Hi there!", { contentType: "text/plain" });
+await serviceClient.sendToUser("user1", "Hi there!", {
+ contentType: "text/plain",
+});
// Send a binary message
const payload = new Uint8Array(10);
await serviceClient.sendToUser("user1", payload.buffer);
const { WebPubSubServiceClient } = require("@azure/web-pubsub");
const WebSocket = require("ws");
-const serviceClient = new WebPubSubServiceClient("<ConnectionString>", "<hubName>");
+const serviceClient = new WebPubSubServiceClient(
+ "<ConnectionString>",
+ "<hubName>"
+);
const groupClient = serviceClient.group("<groupName>");
const { WebPubSubServiceClient } = require("@azure/web-pubsub");
function onResponse(rawResponse: FullOperationResponse): void {
  console.log(rawResponse);
}
-const serviceClient = new WebPubSubServiceClient("<ConnectionString>", "<hubName>");
+const serviceClient = new WebPubSubServiceClient(
+ "<ConnectionString>",
+ "<hubName>"
+);
await serviceClient.sendToAll({ message: "Hello world!" }, { onResponse });
```
const app = express();
app.use(handler.getMiddleware());
app.listen(3000, () =>
- console.log(`Azure WebPubSub Upstream ready at http://localhost:3000${handler.path}`)
+ console.log(
+ `Azure WebPubSub Upstream ready at http://localhost:3000${handler.path}`
+ )
);
```
const handler = new WebPubSubEventHandler("chat", {
  handleConnect: (req, res) => {
    // auth the connection and set the userId of the connection
    res.success({
- userId: "<userId>"
+ userId: "<userId>",
    });
  },
- allowedEndpoints: ["https://<yourAllowedService>.webpubsub.azure.com"]
+ allowedEndpoints: ["https://<yourAllowedService>.webpubsub.azure.com"],
});

const app = express();
const app = express();
app.use(handler.getMiddleware());
app.listen(3000, () =>
- console.log(`Azure WebPubSub Upstream ready at http://localhost:3000${handler.path}`)
+ console.log(
+ `Azure WebPubSub Upstream ready at http://localhost:3000${handler.path}`
+ )
);
```
const { WebPubSubEventHandler } = require("@azure/web-pubsub-express");
const handler = new WebPubSubEventHandler("chat", {
  allowedEndpoints: [
    "https://<yourAllowedService1>.webpubsub.azure.com",
- "https://<yourAllowedService2>.webpubsub.azure.com"
- ]
+ "https://<yourAllowedService2>.webpubsub.azure.com",
+ ],
});

const app = express();
const app = express();
app.use(handler.getMiddleware());
app.listen(3000, () =>
- console.log(`Azure WebPubSub Upstream ready at http://localhost:3000${handler.path}`)
+ console.log(
+ `Azure WebPubSub Upstream ready at http://localhost:3000${handler.path}`
+ )
);
```
const express = require("express");
const { WebPubSubEventHandler } = require("@azure/web-pubsub-express");

const handler = new WebPubSubEventHandler("chat", {
- path: "/customPath1"
+ path: "/customPath1",
});

const app = express();
app.use(handler.getMiddleware());
app.listen(3000, () =>
  // Azure WebPubSub Upstream ready at http://localhost:3000/customPath1
- console.log(`Azure WebPubSub Upstream ready at http://localhost:3000${handler.path}`)
+ console.log(
+ `Azure WebPubSub Upstream ready at http://localhost:3000${handler.path}`
+ )
);
```
const handler = new WebPubSubEventHandler("chat", {
    // You can also set the state here
    res.setState("calledTime", calledTime);
    res.success();
- }
+ },
});

const app = express();
const app = express();
app.use(handler.getMiddleware());
app.listen(3000, () =>
- console.log(`Azure WebPubSub Upstream ready at http://localhost:3000${handler.path}`)
+ console.log(
+ `Azure WebPubSub Upstream ready at http://localhost:3000${handler.path}`
+ )
);
```
For more detailed instructions on how to enable logs, see [@azure/logger package
Use **Live Trace** from the Web PubSub service portal to view the live traffic.
-[aad_doc]: howto-authorize-from-application.md
+[microsoft_entra_id_doc]: howto-authorize-from-application.md
[azure_sub]: https://azure.microsoft.com/free/
[samples_ref]: https://github.com/Azure/azure-webpubsub/tree/main/samples/javascript/

## Next steps
azure-web-pubsub Reference Server Sdk Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/reference-server-sdk-python.md
description: Learn about the Python server SDK for the Azure Web PubSub service.
-+ Last updated 05/23/2022
Or use the service endpoint and the access key:
>>> service = WebPubSubServiceClient(endpoint='<endpoint>', hub='hub', credential=AzureKeyCredential("<access_key>"))
```
-Or use [Azure Active Directory][aad_doc] (Azure AD):
+Or use [Microsoft Entra ID][microsoft_entra_id_doc]:
1. [pip][pip] install [`azure-identity`][azure_identity_pip].
-2. [Enable Azure AD authentication on your Webpubsub resource][aad_doc].
+2. [Enable Microsoft Entra authorization on your Webpubsub resource][microsoft_entra_id_doc].
3. Update code to use [DefaultAzureCredential][default_azure_credential].
- ```python
- >>> from azure.messaging.webpubsubservice import WebPubSubServiceClient
- >>> from azure.identity import DefaultAzureCredential
- >>> service = WebPubSubServiceClient(endpoint='<endpoint>', hub='hub', credential=DefaultAzureCredential())
- ```
+ ```python
+ >>> from azure.messaging.webpubsubservice import WebPubSubServiceClient
+ >>> from azure.identity import DefaultAzureCredential
+ >>> service = WebPubSubServiceClient(endpoint='<endpoint>', hub='hub', credential=DefaultAzureCredential())
+ ```
## Examples
When you submit a pull request, a CLA-bot automatically determines whether you n
This project has adopted the Microsoft Open Source Code of Conduct. For more information, see the [Code of Conduct][code_of_conduct] FAQ or contact the [Open Source Conduct Team](mailto:opencode@microsoft.com) with questions or comments.

<!-- LINKS -->

[webpubsubservice_docs]: ./index.yml
[azure_cli]: /cli/azure
[azure_sub]: https://azure.microsoft.com/free/
This project has adopted the Microsoft Open Source Code of Conduct. For more inf
[connection_string]: howto-websocket-connect.md#authorization
[azure_portal]: howto-develop-create-instance.md
[azure-key-credential]: https://aka.ms/azsdk-python-core-azurekeycredential
-[aad_doc]: howto-authorize-from-application.md
+[microsoft_entra_id_doc]: howto-authorize-from-application.md
[samples]: https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/webpubsub/azure-messaging-webpubsubservice/samples
azure-web-pubsub Samples Authenticate And Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/samples-authenticate-and-connect.md
Title: Azure Web PubSub samples - authenticate and connect
-description: A list of code samples showing how to authenticate and connect to Web PubSub resource(s)
+description: A list of code samples showing how to authenticate and connect to Web PubSub resource(s)
Last updated 05/15/2023
zone_pivot_groups: azure-web-pubsub-samples-authenticate-and-connect

# Azure Web PubSub samples - Authenticate and connect

To make use of your Azure Web PubSub resource, you need to authenticate and connect to the service first. Azure Web PubSub service distinguishes two roles, and they're given different sets of capabilities.
-
+ ## Client
-The client can be a browser, a mobile app, an IoT device or even an EV charging point as long as it supports WebSocket. A client is limited to publishing and subscribing to messages.
+
+The client can be a browser, a mobile app, an IoT device or even an EV charging point as long as it supports WebSocket. A client is limited to publishing and subscribing to messages.
## Application server
-While the client's role is often limited, the application server's role goes beyond simply receiving and publishing messages. Before a client tries to connect with your Web PubSub resource, it goes to the application server for a Client Access Token first. The token is used to establish a persistent WebSocket connection with your Web PubSub resource.
+
+While the client's role is often limited, the application server's role goes beyond simply receiving and publishing messages. Before a client tries to connect with your Web PubSub resource, it goes to the application server for a Client Access Token first. The token is used to establish a persistent WebSocket connection with your Web PubSub resource.
::: zone pivot="method-sdk-csharp"
-| Use case | Description |
+| Use case | Description |
| | -- |
-| [Using connection string](https://github.com/Azure/azure-webpubsub/blob/main/samples/csharp/chatapp/Startup.cs#L29) | Applies to application server only.
-| [Using Client Access Token](https://github.com/Azure/azure-webpubsub/blob/main/samples/csharp/chatapp/wwwroot/https://docsupdatetracker.net/index.html#L13) | Applies to client only. Client Access Token is generated on the application server.
-| [Using Azure Active Directory](https://github.com/Azure/azure-webpubsub/blob/main/samples/csharp/chatapp-aad/Startup.cs#L26) | Using Azure AD for authorization offers improved security and ease of use compared to Access Key authorization.
-| [Anonymous connection](https://github.com/Azure/azure-webpubsub/blob/main/samples/csharp/clientWithCert/client/Program.cs#L15) | Anonymous connection allows clients to connect with Azure Web PubSub directly without going to an application server for a Client Access Token first. This is useful for clients that have limited networking capabilities, like an EV charging point.
+| [Using connection string](https://github.com/Azure/azure-webpubsub/blob/main/samples/csharp/chatapp/Startup.cs#L29) | Applies to application server only.
+| [Using Client Access Token](https://github.com/Azure/azure-webpubsub/blob/main/samples/csharp/chatapp/wwwroot/index.html#L13) | Applies to client only. Client Access Token is generated on the application server.
+| [Using Microsoft Entra ID](https://github.com/Azure/azure-webpubsub/blob/main/samples/csharp/chatapp-aad/Startup.cs#L26) | Using Microsoft Entra ID for authorization offers improved security and ease of use compared to Access Key authorization.
+| [Anonymous connection](https://github.com/Azure/azure-webpubsub/blob/main/samples/csharp/clientWithCert/client/Program.cs#L15) | Anonymous connection allows clients to connect with Azure Web PubSub directly without going to an application server for a Client Access Token first. This is useful for clients that have limited networking capabilities, like an EV charging point.
::: zone-end ::: zone pivot="method-sdk-javascript"
-| Use case | Description |
+| Use case | Description |
| | -- | | [Using connection string](https://github.com/Azure/azure-webpubsub/blob/main/samples/javascript/chatapp/sdk/server.js#L9) | Applies to application server only. | [Using Client Access Token](https://github.com/Azure/azure-webpubsub/blob/main/samples/javascript/chatapp/sdk/src/index.js#L5) | Applies to client only. Client Access Token is generated on the application server.
-| [Using Azure Active Directory](https://github.com/Azure/azure-webpubsub/blob/main/samples/javascript/chatapp-aad/server.js#L24) | Using Azure AD for authorization offers improved security and ease of use compared to Access Key authorization.
+| [Using Microsoft Entra ID](https://github.com/Azure/azure-webpubsub/blob/main/samples/javascript/chatapp-aad/server.js#L24) | Using Microsoft Entra ID for authorization offers improved security and ease of use compared to Access Key authorization.
::: zone-end ::: zone pivot="method-sdk-java"
-| Use case | Description |
+| Use case | Description |
| | -- | | [Using connection string](https://github.com/Azure/azure-webpubsub/blob/eb60438ff9e0735d90a6e7e6370b9d38aa6bc730/samples/java/chatapp/src/main/java/com/webpubsub/tutorial/App.java#L21) | Applies to application server only. | [Using Client Access Token](https://github.com/Azure/azure-webpubsub/blob/eb60438ff9e0735d90a6e7e6370b9d38aa6bc730/samples/java/chatapp/src/main/resources/public/https://docsupdatetracker.net/index.html#L12) | Applies to client only. Client Access Token is generated on the application server.
-| [Using Azure Active Directory](https://github.com/Azure/azure-webpubsub/blob/eb60438ff9e0735d90a6e7e6370b9d38aa6bc730/samples/java/chatapp-aad/src/main/java/com/webpubsub/tutorial/App.java#L22) | Using Azure AD for authorization offers improved security and ease of use compared to Access Key authorization.
+| [Using Microsoft Entra ID](https://github.com/Azure/azure-webpubsub/blob/eb60438ff9e0735d90a6e7e6370b9d38aa6bc730/samples/java/chatapp-aad/src/main/java/com/webpubsub/tutorial/App.java#L22) | Using Microsoft Entra ID for authorization offers improved security and ease of use compared to Access Key authorization.
::: zone-end ::: zone pivot="method-sdk-python"
-| Use case | Description |
+| Use case | Description |
| | -- | | [Using connection string](https://github.com/Azure/azure-webpubsub/blob/eb60438ff9e0735d90a6e7e6370b9d38aa6bc730/samples/python/chatapp/server.py#L19) | Applies to application server only. | [Using Client Access Token](https://github.com/Azure/azure-webpubsub/blob/eb60438ff9e0735d90a6e7e6370b9d38aa6bc730/samples/python/chatapp/public/https://docsupdatetracker.net/index.html#L13) | Applies to client only. Client Access Token is generated on the application server.
-| [Using Azure Active Directory](https://github.com/Azure/azure-webpubsub/blob/eb60438ff9e0735d90a6e7e6370b9d38aa6bc730/samples/python/chatapp-aad/server.py#L21) | Using Azure AD for authorization offers improved security and ease of use compared to Access Key authorization.
+| [Using Microsoft Entra ID](https://github.com/Azure/azure-webpubsub/blob/eb60438ff9e0735d90a6e7e6370b9d38aa6bc730/samples/python/chatapp-aad/server.py#L21) | Using Microsoft Entra ID for authorization offers improved security and ease of use compared to Access Key authorization.
::: zone-end
azure-web-pubsub Socketio Build Realtime Code Streaming App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/socketio-build-realtime-code-streaming-app.md
Title: Build a real-time code streaming app using Socket.IO and host it on Azure
-description: An end-to-end tutorial demonstrating how to build an app that allows coders to share coding activities with their audience in real time using Web PubSub for Socket.IO
+ Title: Build a real-time code-streaming app by using Socket.IO and host it on Azure
+description: Learn how to build an app that allows coders to share coding activities with their audience in real time by using Web PubSub for Socket.IO.
+keywords: Socket.IO, Socket.IO on Azure, multi-node Socket.IO, scaling Socket.IO
Last updated 08/01/2023
-# Build a real-time code streaming app using Socket.IO and host it on Azure
+# Build a real-time code-streaming app by using Socket.IO and host it on Azure
-Building a real-time experience like the cocreation feature from [Microsoft Word](https://www.microsoft.com/microsoft-365/word) can be challenging.
+Building a real-time experience like the co-creation feature in [Microsoft Word](https://www.microsoft.com/microsoft-365/word) can be challenging.
-Through its easy-to-use APIs, [Socket.IO](https://socket.io/) has proven itself as a battle-tested library for real-time communication between clients and server. However, Socket.IO users often report difficulty around scaling Socket.IO's connections. With Web PubSub for Socket.IO, developers no longer need to worry about managing persistent connections.
+Through its easy-to-use APIs, [Socket.IO](https://socket.io/) has proven itself as a library for real-time communication between clients and a server. However, Socket.IO users often report difficulty around scaling Socket.IO's connections. With Web PubSub for Socket.IO, developers no longer need to worry about managing persistent connections.
## Overview
-This tutorial shows how to build an app that allows a coder to stream his/her coding activities to an audience. We build this application using
+
+This article shows how to build an app that allows a coder to stream coding activities to an audience. You build this application by using:
+ >[!div class="checklist"]
-> * Monitor Editor, the code editor that powers VS code
-> * [Express](https://expressjs.com/), a Node.js web framework
-> * APIs provided by Socket.IO library for real-time communication
-> * Host Socket.IO connections using Web PubSub for Socket.IO
+> * Monaco Editor, the code editor that powers Visual Studio Code.
+> * [Express](https://expressjs.com/), a Node.js web framework.
+> * APIs that the Socket.IO library provides for real-time communication.
+> * Host Socket.IO connections that use Web PubSub for Socket.IO.
### The finished app
-The finished app allows a code editor user to share a web link through which people can watch him/her typing.
+The finished app allows the user of a code editor to share a web link through which people can watch the typing.
++
+To keep the procedures focused and digestible in around 15 minutes, this article defines two user roles and what they can do in the editor:
-To keep this tutorial focused and digestible in around 15 minutes, we define two user roles and what they can do in the editor
-- a writer, who can type in the online editor and the content is streamed-- viewers, who receive real-time content typed by the writer and cannot edit the content
+* A writer, who can type in the online editor and the content is streamed
+* Viewers, who receive real-time content typed by the writer and can't edit the content
### Architecture
-| / | Purpose | Benefits |
+
+| Item | Purpose | Benefits |
|-|-||
-|[Socket.IO library](https://socket.io/) | Provides low-latency, bi-directional data exchange mechanism between the backend application and clients | Easy-to-use APIs that cover most real-time communication scenarios
-|Web PubSub for Socket.IO | Host WebSocket or poll-based persistent connections with Socket.IO clients | 100 K concurrent connections built-in; Simplify application architecture;
+|[Socket.IO library](https://socket.io/) | Provides a low-latency, bidirectional data exchange mechanism between the back-end application and clients | Easy-to-use APIs that cover most real-time communication scenarios
+|Web PubSub for Socket.IO | Hosts WebSocket or poll-based persistent connections with Socket.IO clients | Support for 100,000 concurrent connections; simplified application architecture
++
+## Prerequisites
+To follow all the steps in this article, you need:
-## Prerequisites
-In order to follow the step-by-step guide, you need
> [!div class="checklist"] > * An [Azure](https://portal.azure.com/) account. If you don't have an Azure subscription, create an [Azure free account](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio) before you begin.
-> * [Azure CLI](/cli/azure/install-azure-cli) (version 2.29.0 or higher) or [Azure Cloud Shell](../cloud-shell/quickstart.md) to manage Azure resources.
-> * Basic familiarity of [Socket.IO's APIs](https://socket.io/docs/v4/)
+> * The [Azure CLI](/cli/azure/install-azure-cli) (version 2.29.0 or later) or [Azure Cloud Shell](../cloud-shell/quickstart.md) to manage Azure resources.
+> * Basic familiarity with [Socket.IO's APIs](https://socket.io/docs/v4/).
## Create a Web PubSub for Socket.IO resource
-We are going to use Azure CLI to create the resource.
+
+Use the Azure CLI to create the resource:
```bash
az webpubsub create -n <resource-name> \
  -l <resource-location> \
az webpubsub create -n <resource-name> \
  --kind SocketIO \
  --sku Free_F1
```
-## Get connection string
-A connection string allows you to connect with Web PubSub for Socket.IO. Keep the returned connection string somewhere for use as we need it when we run the application at the end of the tutorial.
+
+## Get a connection string
+
+A connection string allows you to connect with Web PubSub for Socket.IO.
+
+Run the following commands. Keep the returned connection string somewhere, because you'll need it when you run the application later in this article.
+ ```bash az webpubsub key show -n <resource-name> \ -g <resource-group> \
az webpubsub key show -n <resource-name> \
-o tsv ```
-## Write the application
->[!NOTE]
-> This tutorial focuses on explaining the core code for implementing real-time communication. Complete code can be found in the [samples repository](https://github.com/Azure/azure-webpubsub/tree/main/experimental/sdk/webpubsub-socketio-extension/examples/codestream).
+## Write the application's server-side code
+
+Start writing your application's code by working on the server side.
+
+### Build an HTTP server
+
+1. Create a Node.js project:
-### Server-side code
-#### Build an HTTP server
-1. Create a Node.js project
   ```bash
   mkdir codestream
   cd codestream
   npm init
   ```
-2. Install server SDK and Express
+2. Install the server SDK and Express:
+ ```bash npm install @azure/web-pubsub-socket.io npm install express ```
-3. Import required packages and create an HTTP server to serve static files
+3. Import required packages and create an HTTP server to serve static files:
+ ```javascript
- /* server.js*/
+ /*server.js*/
// Import required packages const express = require('express'); const path = require('path');
- // Create a HTTP server based on Express
+ // Create an HTTP server based on Express
const app = express(); const server = require('http').createServer(app); app.use(express.static(path.join(__dirname, 'public'))); ```
-4. Define an endpoint called `/negotiate`. A **writer** client hits this endpoint first. This endpoint returns an HTTP response, which contains
-- an endpoint the client should establish a persistent connection with, -- `room` the client is assigned to
+4. Define an endpoint called `/negotiate`. A writer client hits this endpoint first. This endpoint returns an HTTP response. The response contains an endpoint that the client should use to establish a persistent connection. It also returns a `room` value that the client is assigned to.
- ```javascript
- /* server.js*/
+ ```javascript
+ /*server.js*/
   app.get('/negotiate', async (req, res) => {
     res.json({
       url: endpoint
az webpubsub key show -n <resource-name> \
   });
   ```
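Filled in, the endpoint might look like the following; a minimal sketch, where `endpoint` is your Web PubSub for Socket.IO endpoint and `generateRoomId()` is a hypothetical helper that assigns the room:

```javascript
// A sketch only; see the samples repository for the exact code.
app.get('/negotiate', async (req, res) => {
  res.json({
    url: endpoint,              // endpoint the client connects to
    room_id: generateRoomId(),  // hypothetical helper: room assigned to this client
  });
});
```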
-#### Create Web PubSub for Socket.IO server
-1. Import Web PubSub for Socket.IO SDK and define options
+### Create the Web PubSub for Socket.IO server
+
+1. Import the Web PubSub for Socket.IO SDK and define options:
+ ```javascript
- /* server.js*/
+ /*server.js*/
   const { useAzureSocketIO } = require("@azure/web-pubsub-socket.io");
   const wpsOptions = {
       hub: "codestream",
       connectionString: process.argv[2]
- };
+ }
```
-2. Create a Web PubSub for Socket.IO server
+2. Create a Web PubSub for Socket.IO server:
+ ```javascript
- /* server.js*/
+ /*server.js*/
   const io = require("socket.io")();
   useAzureSocketIO(io, wpsOptions);
   ```
-The two steps are slightly different than how you would normally create a Socket.IO server as [described here](https://socket.io/docs/v4/server-installation/). With these two steps, your server-side code can offload managing persistent connections to an Azure service. With the help of an Azure service, your application server acts **only** as a lightweight HTTP server.
+The two steps are slightly different from how you would normally create a Socket.IO server, as described in [this Socket.IO documentation](https://socket.io/docs/v4/server-installation/). With these two steps, your server-side code can offload managing persistent connections to an Azure service. With the help of an Azure service, your application server acts *only* as a lightweight HTTP server.
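For comparison, a self-hosted Socket.IO server would attach directly to your own HTTP server and keep every persistent connection itself; a minimal sketch, not used in this tutorial:

```javascript
// Plain Socket.IO, shown only for contrast with the Azure-hosted setup above.
const { Server } = require("socket.io");
const io = new Server(server); // `server` is the HTTP server created earlier
```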
+
+### Implement business logic
-Now that we've created a Socket.IO server hosted by Web PubSub, we can define how the clients and server communicate using Socket.IO's APIs. This process is referred to as implementing business logic.
+Now that you've created a Socket.IO server hosted by Web PubSub, you can define how the clients and server communicate by using Socket.IO's APIs. This process is called implementing business logic.
-#### Implement business logic
-1. After a client is connected, the application server tells the client that "you are logged in" by sending a custom event named `login`.
+1. After a client is connected, the application server tells the client that it's logged in by sending a custom event named `login`.
```javascript
- /* server.js*/
+ /*server.js*/
   io.on('connection', socket => {
       socket.emit("login");
   });
   ```
-2. Each client emits two events `joinRoom` and `sendToRoom` that the server can respond to. After the server getting the `room_id` a client wishes to join, we use Socket.IO's API `socket.join` to join the target client to the specified room.
+2. Each client emits two events that the server can respond to: `joinRoom` and `sendToRoom`. After the server gets the `room_id` value that a client wants to join, you use `socket.join` from Socket.IO's API to join the target client to the specified room.
```javascript
- /* server.js*/
+ /*server.js*/
   socket.on('joinRoom', async (message) => {
       const room_id = message["room_id"];
       await socket.join(room_id);
   });
   ```
-3. After a client has successfully been joined, the server informs the client of the successful result with the `message` event. Upon receiving an `message` event with a type of `ackJoinRoom`, the client can ask the server to send the latest editor state.
+3. After a client is joined, the server informs the client of the successful result by sending a `message` event. When the client receives a `message` event with a type of `ackJoinRoom`, the client can ask the server to send the latest editor state.
```javascript
- /* server.js*/
+ /*server.js*/
   socket.on('joinRoom', async (message) => {
       // ...
       socket.emit("message", {
Now that we've created a Socket.IO server hosted by Web PubSub, we can define ho
   ```
   ```javascript
- /* client.js*/
+ /*client.js*/
socket.on("message", (message) => { let data = message; if (data.type === 'ackJoinRoom' && data.success) { sendToRoom(socket, `${room_id}-control`, { data: 'sync'}); } // ...
- })
+ });
```
-4. When a client sends `sendToRoom` event to the server, the server broadcasts the **changes to the code editor state** to the specified room. All clients in the room can now receive the latest update.
+4. When a client sends a `sendToRoom` event to the server, the server broadcasts the *changes to the code editor state* to the specified room. All clients in the room can now receive the latest update.
   ```javascript
   socket.on('sendToRoom', (message) => {
Now that we've created a Socket.IO server hosted by Web PubSub, we can define ho
   });
   ```
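Filled in, the broadcast handler might look like this; a minimal sketch, assuming the message carries the `room_id` and `data` fields used in the snippets above:

```javascript
// A sketch only; see the samples repository for the exact code.
socket.on('sendToRoom', (message) => {
  // Relay the editor-state changes to everyone else in the room.
  socket.to(message["room_id"]).emit("message", {
    type: "editorMessage",
    data: message["data"],
  });
});
```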
-Now that the server-side is finished. Next, we work on the client-side.
+## Write the application's client-side code
-### Client-side code
-#### Initial setup
-1. On the client side, we need to create an Socket.IO client to communicate with the server. The question is which server the client should establish a persistent connection with. Since we use Web PubSub for Socket.IO, the server is an Azure service. Recall that we defined [`/negotiate`](#build-an-http-server) route to serve clients an endpoint to Web PubSub for Socket.IO.
+Now that the server-side procedures are finished, you can work on the client side.
- ```javascript
- /*client.js*/
+### Initial setup
- async function initialize(url) {
- let data = await fetch(url).json()
+You need to create a Socket.IO client to communicate with the server. The question is which server the client should establish a persistent connection with. Because you're using Web PubSub for Socket.IO, the server is an Azure service. Recall that you defined a [/negotiate](#build-an-http-server) route to serve clients an endpoint to Web PubSub for Socket.IO.
- updateStreamId(data.room_id);
+```javascript
+/*client.js*/
- let editor = createEditor(...); // Create a editor component
+async function initialize(url) {
+  let data = await (await fetch(url)).json();
- var socket = io(data.url, {
- path: "/clients/socketio/hubs/codestream",
- });
+ updateStreamId(data.room_id);
- return [socket, editor, data.room_id];
- }
- ```
-The `initialize(url)` organizes a few setup operations together.
-- fetches the endpoint to an Azure service from your HTTP server,-- creates a Monoca editor instance,-- establishes a persistent connection with Web PubSub for Socket.IO
+ let editor = createEditor(...); // Create an editor component
+
+ var socket = io(data.url, {
+ path: "/clients/socketio/hubs/codestream",
+ });
+
+ return [socket, editor, data.room_id];
+}
+```
+
+The `initialize(url)` function organizes a few setup operations together:
+
+* Fetches the endpoint to an Azure service from your HTTP server
+* Creates a Monaco Editor instance
+* Establishes a persistent connection with Web PubSub for Socket.IO
-#### Writer client
-[As mentioned earlier](#the-finished-app), we have two user roles on the client side. The first one is the writer and another one is viewer. Anything written by the writer is streamed to the viewer's screen.
+### Writer client
+
+As mentioned [earlier](#the-finished-app), you have two user roles on the client side: writer and viewer. Anything that the writer types is streamed to the viewer's screen.
+
+1. Get the endpoint to Web PubSub for Socket.IO and the `room_id` value:
-##### Writer client
-1. Get the endpoint to Web PubSub for Socket.IO and the `room_id`.
   ```javascript
   /*client.js*/
   let [socket, editor, room_id] = await initialize('/negotiate');
   ```
-2. When the writer client is connected with server, the server sends a `login` event to him. The writer can respond by asking the server to join itself to a specified room. Importantly, every 200 ms the writer sends its latest editor state to the room. A function aptly named `flush` organizes the sending logic.
+2. When the writer client is connected with the server, the server sends a `login` event to the writer. The writer can respond by asking the server to join itself to a specified room. Every 200 milliseconds, the writer client sends the latest editor state to the room. A function named `flush` organizes the sending logic.
   ```javascript
   /*client.js*/
The `initialize(url)` organizes a few setup operations together.
   });
   ```
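Filled in, the writer's handler might look like this; a minimal sketch, assuming the `joinRoom()` and `flush()` client helpers described later in this article:

```javascript
// A sketch only; see the samples repository for the exact code.
socket.on("login", () => {
  joinRoom(socket, room_id);        // client helper implemented below
  setInterval(() => flush(), 200);  // stream editor changes every 200 ms
});
```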
-3. If a writer doesn't make any edits, `flush()` does nothing and simply returns. Otherwise, the **changes to the editor state** are sent to the room.
+3. If a writer doesn't make any edits, `flush()` does nothing and simply returns. Otherwise, the *changes to the editor state* are sent to the room.
   ```javascript
   /*client.js*/
   function flush() {
- // No change from editor need to be flushed
+ // No changes from editor need to be flushed
     if (changes.length === 0) return;
     // Broadcast the changes made to editor content
The `initialize(url)` organizes a few setup operations together.
   }
   ```
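
   For context, a self-contained version of such a `flush()` might look like the following sketch, assuming a `changes` array that accumulates editor deltas and the `sendToRoom` helper from a later step:

   ```javascript
   // Sketch (assumed data shape): broadcast accumulated editor deltas.
   let changes = [];
   function flush() {
     // No changes from editor need to be flushed
     if (changes.length === 0) return;

     // Broadcast the changes made to editor content
     sendToRoom(room_id, { type: "delta", changes });
     changes = [];
   }
   ```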
-4. When a new viewer client is connected, the viewer needs to get the latest **complete state** of the editor. To achieve this, a message containing `sync` data will be sent to the writer client, asking the writer client to send the complete editor state.
+4. When a new viewer client is connected, the viewer needs to get the latest *complete state* of the editor. To achieve this, a message that contains `sync` data is sent to the writer client. The message asks the writer client to send the complete editor state.
+   ```javascript
+   /*client.js*/
   // ...
   });
   ```
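
   As a hedged sketch, the writer's handler for that request might look like this; the `sync` type comes from the prose, `getValue()` is the standard Monaco API, and the reply shape is assumed:

   ```javascript
   // Sketch: reply to a viewer's sync request with the complete editor state.
   socket.on("message", (msg) => {
     if (msg.type === "sync") {
       sendToRoom(room_id, { type: "full", value: editor.getValue() });
     }
   });
   ```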
-##### Viewer client
-1. Same with the writer client, the viewer client creates its Socket.IO client through `initialize()`. When the viewer client is connected and received a `login` event from server, it asks the server to join itself to the specified room. The query `room_id` specifies the room .
+### Viewer client
+
+1. Like the writer client, the viewer client creates its Socket.IO client through `initialize()`. When the viewer client is connected and receives a `login` event from the server, it asks the server to join itself to the specified room. The query `room_id` specifies the room.
   ```javascript
   /*client.js*/
   // ...
   });
   ```
-2. When a viewer client receives a `message` event from server and the data type is `ackJoinRoom`, the viewer client asks the writer client in the room to send over the complete editor state.
+2. When a viewer client receives a `message` event from the server and the data type is `ackJoinRoom`, the viewer client asks the writer client in the room to send the complete editor state.
   ```javascript
   /*client.js*/
   // ...
   });
   ```
-3. If the data type is `editorMessage`, the viewer client **updates the editor** according to its actual content.
+3. If the data type is `editorMessage`, the viewer client *updates the editor* according to its actual content.
   ```javascript
   /*client.js*/
   // ...
   });
   ```
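
   A sketch of that viewer-side handler, using the same assumed message shapes as the earlier writer sketches (`setValue` is the standard Monaco API; `applyChanges` is a hypothetical helper for deltas):

   ```javascript
   // Sketch: apply a full editor state or incremental changes from the writer.
   socket.on("message", (msg) => {
     if (msg.type !== "editorMessage") return;
     if (msg.data.type === "full") {
       editor.setValue(msg.data.value);        // replace the whole content
     } else {
       applyChanges(editor, msg.data.changes); // hypothetical delta helper
     }
   });
   ```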
-4. Implement `joinRoom()` and `sendToRoom()` using Socket.IO's APIs
+4. Implement `joinRoom()` and `sendToRoom()` by using Socket.IO's APIs:
+   ```javascript
+   /*client.js*/
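   // Hypothetical sketch (not the sample's exact code): helpers like these
   // can be built on Socket.IO's emit API.
   async function joinRoom(socket, room_id) {
     socket.emit("joinRoom", { room_id }); // ask the server to add us to the room
   }

   function sendToRoom(room_id, data) {
     socket.emit("sendToRoom", { room_id, data }); // relay a payload via the server
   }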
   ```

## Run the application

### Locate the repo
-We dived deep into the core logic related to synchronizing editor state between viewers and writer. The complete code can be found in [ examples repository](https://github.com/Azure/azure-webpubsub/tree/main/experimental/sdk/webpubsub-socketio-extension/examples/codestream).
+
+The preceding sections covered the core logic related to synchronizing the editor state between viewers and the writer. You can find the complete code in the [examples repository](https://github.com/Azure/azure-webpubsub/tree/main/experimental/sdk/webpubsub-socketio-extension/examples/codestream).
### Clone the repo

You can clone the repo and run `npm install` to install project dependencies.

### Start the server

```bash
node index.js <web-pubsub-connection-string>
```
-> [!NOTE]
-> This is the connection string you received from [a previous step](#get-connection-string).
+
+This is the connection string that you received in [an earlier step](#get-a-connection-string).
### Play with the real-time code editor
-Open `http://localhost:3000` in a browser tab and open another tab with the url displayed in the first web page.
-If you write code in the first tab, you should see your typing reflected real-time in the other tab. Web PubSub for Socket.IO facilitates message passing in the cloud. Your `express` server only serves the static `https://docsupdatetracker.net/index.html` and `/negotiate` endpoint.
+Open `http://localhost:3000` on a browser tab. Open another tab with the URL displayed on the first webpage.
+
+If you write code on the first tab, you should see your typing reflected in real time on the other tab. Web PubSub for Socket.IO facilitates message passing in the cloud. Your `express` server only serves the static `https://docsupdatetracker.net/index.html` file and the `/negotiate` endpoint.
azure-web-pubsub Socketio Migrate From Self Hosted https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/socketio-migrate-from-self-hosted.md
Title: How to migrate a self-hosted Socket.IO to be fully managed on Azure
-description: A tutorial showing how to migrate an Socket.IO chat app to Azure
+ Title: Migrate a self-hosted Socket.IO app to be fully managed on Azure
+description: Learn how to migrate a Socket.IO chat app to Azure.
+keywords: Socket.IO, Socket.IO on Azure, multi-node Socket.IO, scaling Socket.IO
Last updated 07/21/2023
-# How to migrate a self-hosted Socket.IO app to be fully managed on Azure
->[!NOTE]
-> Web PubSub for Socket.IO is in "Private Preview" and is available to selected customers only. To register your interest, please write to us awps@microsoft.com.
+# Migrate a self-hosted Socket.IO app to be fully managed on Azure
+
+In this article, you migrate a Socket.IO chat app to Azure by using Web PubSub for Socket.IO.
## Prerequisites

> [!div class="checklist"]
-> * An Azure account with an active subscription. If you don't have one, you can [create a free accout](https://azure.microsoft.com/free/).
-> * Some familiarity with Socket.IO library.
+> * An Azure account with an active subscription. If you don't have one, you can [create a free account](https://azure.microsoft.com/free/).
+> * Some familiarity with the Socket.IO library.
## Create a Web PubSub for Socket.IO resource
-Head over to Azure portal and search for `socket.io`.
-## Migrate an official Socket.IO sample app
-To focus this guide to the migration process, we're going to use a sample chat app provided on [Socket.IO's website](https://github.com/socketio/socket.io/tree/4.6.2/examples/chat). We need to make some minor changes to both the **server-side** and **client-side** code to complete the migration.
+1. Go to the [Azure portal](https://portal.azure.com/).
+1. Search for **socket.io**, and then select **Web PubSub for Socket.IO**.
+1. Select a plan, and then select **Create**.
+
+ :::image type="content" source="./media/socketio-migrate-from-self-hosted/create-resource.png" alt-text="Screenshot of the Web PubSub for Socket.IO service in the Azure portal.":::
+
+## Migrate the app
+
+For the migration process in this guide, you use a sample chat app provided on [Socket.IO's website](https://github.com/socketio/socket.io/tree/4.6.2/examples/chat). You need to make some minor changes to both the server-side and client-side code to complete the migration.
### Server side
-Locate `index.js` in the server-side code.
-1. Add package `@azure/web-pubsub-socket.io`
+1. Locate `index.js` in the server-side code.
+
+2. Add the `@azure/web-pubsub-socket.io` package:
+   ```bash
+   npm install @azure/web-pubsub-socket.io
+   ```
-2. Import package in server code `index.js`
+3. Import the package:
+   ```javascript
+   const { useAzureSocketIO } = require("@azure/web-pubsub-socket.io");
+   ```
-3. Add configuration so that the server can connect with your Web PubSub for Socket.IO resource.
+4. Locate the place in your server-side code where you created the Socket.IO server, and append `useAzureSocketIO(...)`:
+ ```javascript
- const wpsOptions = {
+ const io = require("socket.io")();
+ useAzureSocketIO(io, {
hub: "eio_hub", // The hub name can be any valid string. connectionString: process.argv[2]
- };
+ });
```
-
-4. Locate in your server-side code where Socket.IO server is created and append `.useAzureSocketIO(wpsOptions)`:
- ```javascript
- const io = require("socket.io")();
- useAzureSocketIO(io, wpsOptions);
- ```
->[!IMPORTANT]
-> `useAzureSocketIO` is an asynchronous method. Here we `await`. So you need to wrap it and related code in an asynchronous function.
-5. If you use the following server APIs, add `async` before using them as they're asynchronous with Web PubSub for Socket.IO.
-- [server.socketsJoin](https://socket.io/docs/v4/server-api/#serversocketsjoinrooms)
-- [server.socketsLeave](https://socket.io/docs/v4/server-api/#serversocketsleaverooms)
-- [socket.join](https://socket.io/docs/v4/server-api/#socketjoinroom)
-- [socket.leave](https://socket.io/docs/v4/server-api/#socketleaveroom)
+ >[!IMPORTANT]
+ > The `useAzureSocketIO` method is asynchronous, and it does initialization steps to connect to Web PubSub. You can use `await useAzureSocketIO(...)` or use `useAzureSocketIO(...).then(...)` to make sure your app server starts to serve requests after the initialization succeeds.
+
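   For example, a minimal pattern (a sketch; the `main` wrapper name is illustrative) is to wrap the setup in an async function and register handlers after the awaited call returns:

   ```javascript
   const { useAzureSocketIO } = require("@azure/web-pubsub-socket.io");

   async function main() {
     const io = require("socket.io")();
     await useAzureSocketIO(io, {
       hub: "eio_hub", // The hub name can be any valid string.
       connectionString: process.argv[2]
     });

     // Register handlers only after initialization succeeds.
     io.on("connection", (socket) => { /* ... */ });
   }

   main();
   ```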
+5. If you use the following server APIs, add `async` before using them, because they're asynchronous with Web PubSub for Socket.IO:
+
+ * [server.socketsJoin](https://socket.io/docs/v4/server-api/#serversocketsjoinrooms)
+ * [server.socketsLeave](https://socket.io/docs/v4/server-api/#serversocketsleaverooms)
+ * [socket.join](https://socket.io/docs/v4/server-api/#socketjoinroom)
+ * [socket.leave](https://socket.io/docs/v4/server-api/#socketleaveroom)
+
+ For example, if there's code like this:
- For example, if there's code like:
```javascript io.on("connection", (socket) => { socket.join("room abc"); }); ```
- you should update it to:
+
+ Update it to:
+ ```javascript io.on("connection", async (socket) => { await socket.join("room abc"); }); ```
- In this chat example, none of them are used. So no changes are needed.
+ This chat example doesn't use any of those APIs. So you don't need to make any changes.
+
+### Client side
+
+1. Find the endpoint to your resource on the Azure portal.
-### Client Side
-In client-side code found in `./public/main.js`
+ :::image type="content" source="./media/socketio-migrate-from-self-hosted/get-resource-endpoint.png" alt-text="Screenshot of getting the endpoint to a Web PubSub for Socket.IO resource.":::
+1. Go to `./public/main.js` in the client-side code.
-Find where Socket.IO client is created, then replace its endpoint with Azure Socket.IO endpoint and add an `path` option. You can find the endpoint to your resource on Azure portal.
-```javascript
-const socket = io("<web-pubsub-for-socketio-endpoint>", {
- path: "/clients/socketio/hubs/eio_hub",
-});
-```
+1. Find where the Socket.IO client is created. Replace its endpoint with the Socket.IO endpoint in Azure, and add a `path` option:
+ ```javascript
+ const socket = io("<web-pubsub-for-socketio-endpoint>", {
+ path: "/clients/socketio/hubs/eio_hub",
+ });
+ ```
azure-web-pubsub Socketio Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/socketio-overview.md
Title: Overview of Web PubSub for Socket.IO
-description: An overview of Web PubSub's support for the open-source Socket.IO library
+ Title: Overview of Socket.IO on Azure
+description: Get an overview of Azure's support for the open-source Socket.IO library.
+keywords: Socket.IO, Socket.IO on Azure, multi-node Socket.IO, scaling Socket.IO, socketio, azure socketio
Last updated 07/27/2023
-# Overview of Web PubSub for Socket.IO
-Web PubSub for Socket.IO is a fully managed cloud service for [Socket.IO](https://socket.io/), which is a widely popular open-source library for real-time messaging between clients and server.
+# Overview of Socket.IO on Azure
-Managing stateful and persistent connections between clients and server is often a source of frustration for Socket.IO users. The problem is more acute when there are multiple Socket.IO instances spread across servers.
+> [!NOTE]
+> Support for Socket.IO on Azure is in public preview. We welcome any feedback and suggestions. Please reach out to the service team at awps@microsoft.com.
-Web PubSub for Socket.IO removes the burden of deploying, hosting and coordinating Socket.IO instances for developers, allowing development team to focus on building real-time experiences using their familiar APIs provided by Socket.IO library.
+Socket.IO is a widely popular open-source library for real-time messaging between clients and a server. Managing stateful and persistent connections between clients and a server is often a source of frustration for Socket.IO users. The problem is more acute when multiple Socket.IO instances are spread across servers.
+Azure provides a fully managed cloud solution for [Socket.IO](https://socket.io/). This support removes the burden of deploying, hosting, and coordinating Socket.IO instances for developers. Development teams can then focus on building real-time experiences by using familiar APIs from the Socket.IO library.
-## Benefits over hosting Socket.IO app yourself
->[!NOTE]
-> - **Socket.IO** refers to the open-source library.
-> - **Web PubSub for Socket.IO** refers to a fully managed Azure service.
+## Simplified architecture
+This feature removes the need for an "adapter" server component when scaling out a Socket.IO app, allowing the development team to reap the benefits of a simplified architecture.
-| / | Hosting Socket.IO app yourself | Using Web PubSub for Socket.IO|
+
+## Benefits over hosting a Socket.IO app yourself
+
+The following table shows the benefits of using the fully managed solution from Azure.
+
+| Item | Hosting a Socket.IO app yourself | Using Socket.IO on Azure|
||||
| Deployment | Customer managed | Azure managed |
| Hosting | Customer needs to provision enough server resources to serve and maintain persistent connections | Azure managed |
-| Scaling connections | Customer managed by using a server-side component called ["adapter"](https://socket.io/docs/v4/adapter/) | Azure managed with **100k+** client connections out-of-the-box |
-| Uptime guarantee | Customer managed | Azure managed with **99.9%+** uptime |
-| Enterprise-grade security | Customer managed | Azure managed |
-| Ticket support system | N/A | Azure managed |
+| Scaling connections | Customer managed by using a server-side component called an [adapter](https://socket.io/docs/v4/adapter/) | Azure managed with more than 100,000 client connections out of the box |
+| Uptime guarantee | Customer managed | Azure managed with more than 99.9 percent uptime |
+| Enterprise-grade security | Customer managed | Azure managed |
+| Ticket support system | Not applicable | Azure managed |
-When you host Socket.IO app yourself, clients establish WebSocket or long-polling connections directly with your server. Maintaining such **stateful** connections places a heavy burden to your Socket.IO server, which limits the number of concurrent connections and increases messaging latency.
+When you host a Socket.IO app yourself, clients establish WebSocket or long-polling connections directly with your server. Maintaining such *stateful* connections places a heavy burden on your Socket.IO server. This burden limits the number of concurrent connections and increases messaging latency.
-A common approach to meeting the concurrent and latency challenge is to [scale out to multiple Socket.IO servers](https://socket.io/docs/v4/adapter/). Scaling out requires a server-side component called "adapter" like the Redis adapter provided by Socket.IO library. However, such adapter introduces an extra component you need to deploy and manage on top of writing extra code logic to get things to work properly.
+A common approach to meeting the concurrency and latency challenge is to [scale out to multiple Socket.IO servers](https://socket.io/docs/v4/adapter/). Scaling out requires a server-side component called an *adapter*, like the Redis adapter that the Socket.IO library provides. However, such an adapter introduces an extra component that you need to deploy and manage. It also requires you to write extra code logic to get things to work properly.
-With Web PubSub for Socket.IO, you're freed from handling scaling issues and implementing code logic related to using an adapter.
+With Socket.IO on Azure, you're freed from handling scaling issues and implementing code logic related to using an adapter.
## Same programming model
-To migrate a self-hosted Socket.IO app to Azure, you only need to add a few lines of code with **no need** to change the rest of the application code. In other words, the programming model remains the same and the complexity of managing a real-time app is reduced.
+
+To migrate a self-hosted Socket.IO app to Azure, you add only a few lines of code. There's no need to change the rest of the application code. In other words, the programming model remains the same, and the complexity of managing a real-time app is reduced.
> [!div class="nextstepaction"] > [Quickstart for Socket.IO users](./socketio-quickstart.md) >
-> [Quickstart: Mirgrate an self-hosted Socket.IO app to Azure](./socketio-migrate-from-self-hosted.md)
+> [Migrate a self-hosted Socket.IO app to Azure](./socketio-migrate-from-self-hosted.md)
azure-web-pubsub Socketio Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/socketio-quickstart.md
Title: Quick start of Web PubBub for Socket.IO
-description: A quickstart demonstrating how to use Web PubSub for Socket.IO
+ Title: 'Quickstart: Incorporate Web PubSub for Socket.IO in your app'
+description: In this quickstart, you learn how to use Web PubSub for Socket.IO on an existing Socket.IO app.
+keywords: Socket.IO, Socket.IO on Azure, multi-node Socket.IO, scaling Socket.IO, socketio, azure socketio
Last updated 08/01/2023
-# Quickstart for Socket.IO users
-This quickstart is aimed for existing Socket.IO users. It demontrates how quickly Socket.IO users can incorporate Web PubSub for Socket.IO in their app to simplify development, speed up deployment and achieve scalability without complexity.
+# Quickstart: Incorporate Web PubSub for Socket.IO in your app
+
+This quickstart demonstrates how to create a Web PubSub for Socket.IO resource and quickly incorporate it in your Socket.IO app to simplify development, speed up deployment, and achieve scalability without complexity.
+
+Code shown in this quickstart is in CommonJS. If you want to use an ECMAScript module, see the [chat demo for Socket.IO with Azure Web PubSub](https://github.com/Azure/azure-webpubsub/tree/main/experimental/sdk/webpubsub-socketio-extension/examples/chat).
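
For reference, the ESM equivalents of the CommonJS imports used in this quickstart look like this (standard ECMAScript module syntax; the package names are unchanged):

```javascript
// ESM-style imports (sketch); the rest of the code stays the same.
import { Server } from "socket.io";
import { useAzureSocketIO } from "@azure/web-pubsub-socket.io";
import { io } from "socket.io-client";
```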
## Prerequisites

> [!div class="checklist"]
-> * An Azure account with an active subscription. If you don't have one, you can [create a free accout](https://azure.microsoft.com/free/).
-> * Some familiarity with Socket.IO library.
+> * An Azure account with an active subscription. If you don't have one, you can [create a free account](https://azure.microsoft.com/free/).
+> * Some familiarity with the Socket.IO library.
## Create a Web PubSub for Socket.IO resource
-Head over to Azure portal and search for `socket.io`.
-## Initialize a Node project and install required packages
+To create a Web PubSub for Socket.IO resource, you can use the following one-click button, or search in the Azure portal as described in the steps that follow.
+
+- Use the following button to create a Web PubSub for Socket.IO resource in Azure
+
+ [![Deploy to Azure](https://aka.ms/deploytoazurebutton)](https://ms.portal.azure.com/#create/Microsoft.WebPubSubForSocketIO)
+
+- Search from the Azure portal search bar
+
+ 1. Go to the [Azure portal](https://portal.azure.com/).
+
+ 1. Search for **socket.io** in the search bar, and then select **Web PubSub for Socket.IO**.
+
+ :::image type="content" source="./media/socketio-quickstart/search.png" alt-text="Screenshot of searching the Web PubSub for Socket.IO service in the Azure portal.":::
+
+- Search from the Marketplace
+
+ 1. Go to the [Azure portal](https://portal.azure.com/).
+
+ 1. Select the **Create a resource** button in the upper-left corner of the Azure portal. Type **socket.io** in the search box and press Enter. Select **Web PubSub for Socket.IO** in the search results.
+
+ :::image type="content" source="./media/socketio-quickstart/marketplace.png" alt-text="Screenshot of the Web PubSub for Socket.IO service in the marketplace.":::
+
+ 1. Select **Create** on the page that opens.
+
+ :::image type="content" source="./media/socketio-migrate-from-self-hosted/create-resource.png" alt-text="Screenshot of the Web PubSub for Socket.IO service in the Azure portal.":::
+
+## Send messages with the Socket.IO library and Web PubSub for Socket.IO
+
+In the following steps, you create a Socket.IO project and integrate it with Web PubSub for Socket.IO.
+
+### Initialize a Node project and install required packages
+```bash
+mkdir quickstart
+cd quickstart
npm init
npm install @azure/web-pubsub-socket.io socket.io-client
```
-## Write server code
-1. Import required packages and create a configuration for Web PubSub
- ```javascript
- /* server.js */
- const { Server } = require("socket.io");
- const { useAzureSocketIO } = require("@azure/web-pubsub-socket.io");
-
- // Add a Web PubSub Option
- const wpsOptions = {
- hub: "eio_hub", // The hub name can be any valid string.
- connectionString: process.argv[2] || process.env.WebPubSubConnectionString
- }
- ```
+### Write server code
-2. Create a Socket.IO server supported by Web PubSub for Socket.IO
- ```javascript
- /* server.js */
- let io = new Server(3000);
- useAzureSocketIO(io, wpsOptions);
- ```
+Create a `server.js` file and add the following code to create a Socket.IO server and integrate it with Web PubSub for Socket.IO.
-3. Write server logic
- ```javascript
- /* server.js */
- io.on("connection", (socket) => {
- // send a message to the client
- socket.emit("hello", "world");
-
- // receive a message from the client
- socket.on("howdy", (arg) => {
- console.log(arg); // prints "stranger"
- })
- });
- ```
+```javascript
+/*server.js*/
+const { Server } = require("socket.io");
+const { useAzureSocketIO } = require("@azure/web-pubsub-socket.io");
-## Write client code
-1. Create a Socket.IO client
- ```javascript
- /* client.js */
- const io = require("socket.io-client");
+let io = new Server(3000);
- const webPubSubEndpoint = process.argv[2] || "<web-pubsub-socketio-endpoint>";
- const socket = io(webPubSubEndpoint, {
- path: "/clients/socketio/hubs/eio_hub",
- });
- ```
+// Use the following line to integrate with Web PubSub for Socket.IO
+useAzureSocketIO(io, {
+ hub: "Hub", // The hub name can be any valid string.
+ connectionString: process.argv[2]
+});
-2. Define the client behavior
- ```javascript
- /* client.js */
+io.on("connection", (socket) => {
+ // Sends a message to the client
+ socket.emit("hello", "world");
- // Receives a message from the server
- socket.on("hello", (arg) => {
- console.log(arg);
- });
+ // Receives a message from the client
+ socket.on("howdy", (arg) => {
+ console.log(arg); // Prints "stranger"
+ })
+});
+```
- // Sends a message to the server
- socket.emit("howdy", "stranger")
- ```
+### Write client code
+
+Create a `client.js` file and add the following code to connect the client with Web PubSub for Socket.IO.
+
+```javascript
+/*client.js*/
+const io = require("socket.io-client");
+
+const socket = io("<web-pubsub-socketio-endpoint>", {
+ path: "/clients/socketio/hubs/Hub",
+});
+
+// Receives a message from the server
+socket.on("hello", (arg) => {
+ console.log(arg);
+});
+
+// Sends a message to the server
+socket.emit("howdy", "stranger")
+```
+
+When you use Web PubSub for Socket.IO, `<web-pubsub-socketio-endpoint>` and `path` are required for the client to connect to the service. You can find both values in the Azure portal.
+
+1. Go to the **key** blade of your Web PubSub for Socket.IO resource.
+
+1. Type in your hub name, and then copy the **Client Endpoint** and **Client Path** values.
+
+ :::image type="content" source="./media/socketio-quickstart/client-url.png" alt-text="Screenshot of the Web PubSub for Socket.IO service in the Azure portal client endpoints blade.":::
## Run the app
-1. Run the server app
+
+1. Run the server app:
+ ```bash
- node server.js "<web-pubsub-connection-string>"
+ node server.js "<connection-string>"
```
-2. Run the client app in another terminal
+   The `<connection-string>` value is the connection string that contains the endpoint and keys for access to your Web PubSub for Socket.IO resource. You can also find the connection string in the Azure portal.
+
+ :::image type="content" source="./media/socketio-quickstart/connection-string.png" alt-text="Screenshot of the Web PubSub for Socket.IO service in the Azure portal connection string blade.":::
+
+2. Run the client app in another terminal:
+ ```bash
- node client.js "<web-pubsub-endpoint>"
+ node client.js
   ```
-Note: Code shown in this quickstart is in CommonJS. If you'd like to use ES Module, please refer to [quickstart-esm](https://github.com/Azure/azure-webpubsub/tree/main/experimental/sdk/webpubsub-socketio-extension/examples/chat).
azure-web-pubsub Socketio Service Internal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/socketio-service-internal.md
Title: Service internal - how does Web PubSub support Socket.IO library
-description: An article explaining how Web PubSub supports Socket.IO library
+ Title: How does Azure Web PubSub support the Socket.IO library?
+description: This article explains how Azure Web PubSub supports the Socket.IO library.
+keywords: Socket.IO, Socket.IO on Azure, multi-node Socket.IO, scaling Socket.IO, socketio, azure socketio
Last updated 08/01/2023
-# Service internal - how does Web PubSub support Socket.IO library
+# How does Azure Web PubSub support the Socket.IO library?
-> [!NOTE]
-> This article peels back the curtain from an engieerning perspective of how self-hosted Socket.IO apps can migrate to Azure with minimal code change to simplify app architecture and deployment, while achieving 100 K+ concurrent connections out-of-the-box. It's not necessary to understand everything in this article to use Web PubSub for Socket.IO effectively.
+This article provides an engineering perspective on how you can migrate self-hosted Socket.IO apps to Azure by using Web PubSub for Socket.IO with minimal code changes. You can then take advantage of simplified app architecture and deployment, while achieving more than 100,000 concurrent connections. You don't need to understand everything in this article to use Web PubSub for Socket.IO effectively.
-## A typical architecture of a self-hosted Socket.IO app
+## Architecture of a self-hosted Socket.IO app
-The diagram shows a typical architecture of a self-hosted Socket.IO app. To ensure that an app is scalable and reliable, Socket.IO users often have an architecture involving multiple Socket.IO servers. Client connections are distributed among Socket.IO servers to balance load on the system. A setup of multiple Socket.IO servers introduces the challenge when developers need to send the same message to clients connected to different server. This use case is often referred to as "broadcasting messages" by developers.
+The following diagram shows a typical architecture of a self-hosted Socket.IO app.
-The official recommendation from Soket.IO library is to introduce a server-side component called ["adapter"](https://socket.io/docs/v4/using-multiple-nodes/)to coordinate Socket.IO servers. What an adapter does is to figure out which servers clients are connected to and instruct those servers to send messages.
-Adding an adapter component introduces complexity to both development and deployment. For example, if the [Redis adapter](https://socket.io/docs/v4/redis-adapter/) is used, it means developers need to
-- implement sticky session
-- deploy and maintain Redis instance(s)
+To ensure that an app is scalable and reliable, Socket.IO users often have an architecture that involves multiple Socket.IO servers. Client connections are distributed among Socket.IO servers to balance load on the system.
-The engineering effort and time of getting a real-time communication channel in place distracts developers from working on features that make an app or system unique and valuable to end users.
+A setup of multiple Socket.IO servers introduces a challenge when developers need to send the same message to clients that are connected to a different server. Developers often refer to this use case as "broadcasting messages."
+
+The official recommendation from the Socket.IO library is to introduce a server-side component called an [adapter](https://socket.io/docs/v4/using-multiple-nodes/) to coordinate Socket.IO servers. An adapter figures out which servers the clients are connected to and instructs those servers to send messages.
+
+Adding an adapter component introduces complexity to both development and deployment. For example, if an architecture uses the [Redis adapter](https://socket.io/docs/v4/redis-adapter/), developers need to:
+
+- Implement sticky sessions.
+- Deploy and maintain Redis instances.
+
+The engineering effort and time in getting a real-time communication channel in place distracts developers from working on features that make an app or system unique and valuable to users.
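
To make that overhead concrete, here's a minimal sketch of the self-hosted scale-out setup by using the open-source `@socket.io/redis-adapter` package; the Redis URL is illustrative:

```javascript
const { Server } = require("socket.io");
const { createAdapter } = require("@socket.io/redis-adapter");
const { createClient } = require("redis");

const pubClient = createClient({ url: "redis://localhost:6379" });
const subClient = pubClient.duplicate();

// Every Socket.IO server node runs this setup against the shared Redis instance.
Promise.all([pubClient.connect(), subClient.connect()]).then(() => {
  const io = new Server({ adapter: createAdapter(pubClient, subClient) });
  io.listen(3000);
});
```

Each extra piece here (the Redis clients and the adapter wiring) is part of what Web PubSub for Socket.IO removes.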
## What Web PubSub for Socket.IO aims to solve for developers
-Although setting up a reliable and scalable app built with Socket.IO library is often reported as challenging by developers, developers **enjoy** the intuitive APIs offered and the wide range of clients the library supports. Web PubSub for Socket.IO builds on the values the library brings, while relieving developers the complexity of managing persistent connections reliably and at scale.
-In practice, developers can continue using the APIs offered by Socket.IO library, but don't need to provision server resources to maintain WebSocket or long-polling based connections, which can be resource intensive. Also, developers don't need to manage and deploy an "adapter" component. The app server only needs to send a **single** operation and the Web PubSub for Socket.IO broadcasts the messages to relevant clients.
+Although developers often report that setting up a reliable and scalable app that's built with the Socket.IO library is challenging, developers can benefit from the intuitive APIs and the wide range of clients that the library supports. Web PubSub for Socket.IO builds on the value that the library brings, while relieving developers of the complexity in managing persistent connections reliably and at scale.
-## How does it work under the hood?
-Web PubSub for Socket.IO builds upon Socket.IO protocols by implementing the Adapter and Engine.IO. The diagram describes the typical architecture when you use the Web PubSub for Socket.IO with your Socket.IO server.
+In practice, developers can continue to use the Socket.IO library's APIs without needing to provision server resources to maintain WebSocket or long-polling-based connections, which can be resource intensive. Also, developers don't need to manage and deploy an adapter component. The app server needs to send only a single operation, and Web PubSub for Socket.IO broadcasts the messages to relevant clients.
+
+## How it works
+
+Web PubSub for Socket.IO builds on Socket.IO protocols by implementing the adapter and Engine.IO. The following diagram shows the typical architecture when you use Web PubSub for Socket.IO with your Socket.IO server.
:::image type="content" source="./media/socketio-service-internal/typical-architecture-managed-socketio.jpg" alt-text="Screenshot of a typical architecture of a fully managed Socket.IO app.":::
-Like a self-hosted Socket.IO app, you still need to host your Socket.IO application logic on your own server. However, with Web PubSub for Socket.IO**(the service)**, your server no longer manages client connections directly.
-- **Your clients** establish persistent connections with the service, which we call "client connections".
-- **Your servers** also establish persistent connections with the service, which we call "server connections".
+Like a self-hosted Socket.IO app, you still need to host your Socket.IO application logic on your own server. However, with the Web PubSub for Socket.IO service:
+
+- Your server no longer manages client connections directly.
+- Your clients establish persistent connections with the service (*client connections*).
+- Your servers also establish persistent connections with the service (*server connections*).
-When your server logic uses `send to client`, `broadcast`, and `add client to rooms`, these operations are sent to the service through established server connection. Messages from your server are translated to Socket.IO operations that Socket.IO clients can understand. As a result, any existing Socket.IO implementation can work without modification. The only modification needed is to change the endpoint your clients connect to. Refer to this article of [how to migrate a self-hosted Socket.IO app to Azure](./socketio-migrate-from-self-hosted.md).
+When your server logic uses `send to client`, `broadcast`, and `add client to rooms`, these operations are sent to the service through an established server connection. Messages from your server are translated to Socket.IO operations that Socket.IO clients can understand. As a result, any existing Socket.IO implementation can work without major modifications. The only modification that you need to make is to change the endpoint that your clients connect to. For more information, see [Migrate a self-hosted Socket.IO app to be fully managed on Azure](./socketio-migrate-from-self-hosted.md).
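
As a concrete illustration, those operations map to familiar Socket.IO server calls like the following (standard Socket.IO APIs, not service-specific code; as the migration article notes, room operations become asynchronous when you use Web PubSub for Socket.IO):

```javascript
io.on("connection", (socket) => {
  socket.emit("greeting", "hello");     // send to client
  io.emit("announcement", "hello all"); // broadcast
  socket.join("room1");                 // add client to rooms
});
```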
-When a client connects to the service, the service
-- forwards Engine.IO connection `connect` to the server
-- handles transport upgrade of client connections
-- forwards all Socket.IO messages to server
+When a client connects to the service, the service:
+- Forwards the Engine.IO connection (`connect`) to the server.
+- Handles the transport upgrade of client connections.
+- Forwards all Socket.IO messages to the server.
azure-web-pubsub Socketio Supported Server Apis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/socketio-supported-server-apis.md
Title: Supported server APIs of Socket.IO
-description: An article listing out Socket.IO server APIs that are partially supported or unsupported by Web PubSub for Socekt.IO
+description: This article lists Socket.IO server APIs that are partially supported or unsupported in Web PubSub for Socket.IO.
+keywords: Socket.IO, Socket.IO on Azure, multi-node Socket.IO, scaling Socket.IO, Socket.IO APIs, socketio, azure socketio
Last updated 07/27/2023
-# Server APIs supported by Web PubSub for Socket.IO
+# Supported server APIs of Socket.IO
-Socket.IO library provides a set of [server API](https://socket.io/docs/v4/server-api/).
-Note the following server APIs that are partially supported or unsupported by Web PubSub for Socket.IO.
+The Socket.IO library provides a set of [server APIs](https://socket.io/docs/v4/server-api/). The following server APIs are partially supported or unsupported by Web PubSub for Socket.IO.
| Server API | Support |
|--|-|
| [fetchSockets](https://socket.io/docs/v4/server-api/#serverfetchsockets) | Local only |
| [serverSideEmit](https://socket.io/docs/v4/server-api/#serverserversideemiteventname-args) | Unsupported |
| [serverSideEmitWithAck](https://socket.io/docs/v4/server-api/#serverserversideemitwithackeventname-args) | Unsupported |
-Apart from the mentioned server APIs, all other server APIs from Socket.IO are fully supported.
azure-web-pubsub Socketio Troubleshoot Common Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/socketio-troubleshoot-common-issues.md
Title: How to troubleshoot Socket.IO common issues
-description: Learn how to troubleshoot Socket.IO common issues
+ Title: Troubleshoot Socket.IO common problems
+description: Learn how to troubleshoot common problems with the Socket.IO library and the Azure Web PubSub service.
+keywords: Socket.IO, Socket.IO on Azure, multi-node Socket.IO, scaling Socket.IO, Socket.IO issues, socketio, azure socketio
Last updated 08/01/2023
-# Troubleshooting for common issues
+# Troubleshoot Socket.IO common problems
-Web PubSub for Socket.IO builds on Socket.IO library. When you use this Azure service, issues may lie with Socket.IO library itself or the service.
+Web PubSub for Socket.IO builds on the Socket.IO library. When you're using the Azure service, problems might lie with the service or with the library.
-## Issues with Socket.IO library
+To find the origin of problems, you can isolate the Socket.IO library by temporarily removing Web PubSub for Socket.IO from your application. If the application works as expected after the removal, the root cause is probably with the Azure service.
-To determine if the issues are with Socket.IO library, you can isolate it by temporarily removing Web PubSub for Socket.IO from your application. If the application works as expected after the removal, the root cause is probably with the Azure service.
+Use this article to find solutions to common problems with the service. Additionally, you can [enable logging on the server side](./socketio-troubleshoot-logging.md#server-side) to examine the behavior of your Socket.IO app, if none of the listed solutions help.
-If you suspect the issues are with Socket.IO library, refer to [Socket.IO library's documentation](https://socket.io/docs/v4/troubleshooting-connection-issues/) for common connection issues.
+If you suspect that the problems are with the Socket.IO library, refer to the [Socket.IO library's documentation](https://socket.io/docs/v4/troubleshooting-connection-issues/).
-## Issues with Web PubSub for Socket.IO
-If you suspect that the issues are with the Azure service after investigation, take a look at the list of common issues.
+## Server side
-Additionally, you can [enable logging on the server side](./socketio-troubleshoot-logging.md#server-side) to examine closely the behavior of your Socket.IO app, if none of the listed issues helps.
+### Improper package import
-### Server side
+#### Possible error
-#### `useAzureSocketIO is not a function`
-##### Possible error
-- `TypeError: (intermediate value).useAzureSocketIO is not a function`
+`TypeError: (intermediate value).useAzureSocketIO is not a function`
-##### Root cause
-If you use TypeScript in your project, you may observe this error. It's due to the improper package import.
+#### Root cause
+
+If you use TypeScript in your project, you might observe this error. It's due to improper package import.
```typescript
// Bad example
import * as wpsExt from "@azure/web-pubsub-socket.io"
```
-If a package isn't used or referenced after importing, the default behavior of TypeScript compiler is not to emit the package in the compiled `.js` file.
-##### Solution
-Use `import "@azure/web-pubsub-socket.io"`, instead. This import statement forces TypeScript compiler to include a package in the compiled `.js` file even if the package isn't referenced anywhere in the source code. [Read more](https://github.com/Microsoft/TypeScript/wiki/FAQ#why-are-imports-being-elided-in-my-emit)about this frequently asked question from the TypeScript community.
+If a package isn't used or referenced after importing, the default behavior of the TypeScript compiler is not to emit the package in the compiled *.js* file.
+
+#### Solution
+
+Use `import "@azure/web-pubsub-socket.io"` instead. This import statement forces the TypeScript compiler to include a package in the compiled *.js* file, even if the package isn't referenced anywhere in the source code. [Read more](https://github.com/Microsoft/TypeScript/wiki/FAQ#why-are-imports-being-elided-in-my-emit) about this frequently asked question from the TypeScript community.
+```typescript
+// Good example.
-// It forces TypeScript to include the package in compiled `.js` file.
+// It forces TypeScript to include the package in compiled .js file.
import "@azure/web-pubsub-socket.io" ```
-### Client side
+## Client side
+
+### Incorrect path option
+
+#### Possible error
+
+`GET <web-pubsub-endpoint>/socket.io/?EIO=4&transport=polling&t=OcmE4Ni` 404 Not Found
-#### `404 Not Found in client side with AWPS endpoint`
-##### Possible Error
- `GET <web-pubsub-endpoint>/socket.io/?EIO=4&transport=polling&t=OcmE4Ni` 404 Not Found
+#### Root cause
+
+The Socket.IO client was created without a correct `path` option.
-##### Root cause
-Socket.IO client is created without a correct `path` option.
```javascript
// Bad example
const socket = io(endpoint)
```
-##### Solution
-Add the correct `path` option with value `/clients/socketio/hubs/eio_hub`
+#### Solution
+
+Add the correct `path` option with the value `/clients/socketio/hubs/eio_hub`.
```javascript
// Good example
const socket = io(endpoint, {
    path: "/clients/socketio/hubs/eio_hub",
});
```
-#### `404 Not Found in client side with non-AWPS endpoint`
+### Incorrect Web PubSub for Socket.IO endpoint
+
+#### Possible error
-##### Possible Error
- `GET <non-web-pubsub-endpoint>/socket.io/?EIO=4&transport=polling&t=OcmE4Ni` 404 Not Found
+`GET <non-web-pubsub-endpoint>/socket.io/?EIO=4&transport=polling&t=OcmE4Ni` 404 Not Found
-##### Root cause
-Socket.IO client is created without correct Web PubSub for Socket.IO endpoint. For example,
+#### Root cause
+
+The Socket.IO client was created without a correct Web PubSub for Socket.IO endpoint. For example:
```javascript
// Bad example.
const socket = io(endpoint, {
    // ...
});
```
-When you use Web PubSub for Socket.IO, your clients establish connections with an Azure service. When creating a Socket.IO client, you need use the endpoint to your Web PubSub for Socket.IO resource.
+When you use Web PubSub for Socket.IO, your clients establish connections with an Azure service. When you create a Socket.IO client, you need to use the endpoint for your Web PubSub for Socket.IO resource.
+
+#### Solution
-##### Solution
-Let Socket.IO client use the endpoint of your Web PubSub for Socket.IO resource.
+Let the Socket.IO client use the endpoint for your Web PubSub for Socket.IO resource.
```javascript
// Good example.
const webPubSubEndpoint = "<web-pubsub-endpoint>";
const socket = io(webPubSubEndpoint, {
    path: "/clients/socketio/hubs/<Your hub name>",
});
-```
+```
azure-web-pubsub Socketio Troubleshoot Logging https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/socketio-troubleshoot-logging.md
Title: How to collect logs in Azure Socket.IO
-description: This article explains how to collect logs when using Web PubSub for Socket.IO
+ Title: Collect logs in Web PubSub for Socket.IO
+description: This article explains how to collect logs when you're using Web PubSub for Socket.IO.
+keywords: Socket.IO, Socket.IO on Azure, multi-node Socket.IO, scaling Socket.IO, Socket.IO logging, Socket.IO debugging, socketio, azure socketio
Last updated 08/01/2023
-# How to collect logs using Web PubSub for Socket.IO
+# Collect logs in Web PubSub for Socket.IO
-Like when you self-host Socket.IO library, you can collect logs on both the server and client side when you use Web PubSub for Socket.IO.
+Like when you self-host the Socket.IO library, you can collect logs on both the server and client side when you use Web PubSub for Socket.IO.
-## Server-side
-On the server-side, two utilities are included that provide debugging
-capabilities.
-- [DEBUG](https://github.com/debug-js/debug), which is used by Socket.IO library and extension library provided by Web PubSub for certain logging.
-- [@azure/logger](https://www.npmjs.com/package/@azure/logger), which provides more low-level network-related logging. Conveniently, it also allows you to set a log level.
+## Server side
+
+The server side includes two utilities that provide debugging capabilities:
+
+- [DEBUG](https://github.com/debug-js/debug), which the Socket.IO library and extension library provided by Web PubSub use for certain logging.
+- [@azure/logger](https://www.npmjs.com/package/@azure/logger), which provides lower-level network-related logging. Conveniently, it also allows you to set a log level.
### `DEBUG` JavaScript utility
-#### Logs all debug information
+#### Log all debug information
+```bash
+DEBUG=* node yourfile.js
+```
-#### Logs debug information of specific packages.
+#### Log the debug information from specific packages
+ ```bash
-# Logs debug information of "socket.io" package
+# Logs debug information from the "socket.io" package
DEBUG=socket.io:* node yourfile.js
-# Logs debug information of "engine.io" package
+# Logs debug information from the "engine.io" package
DEBUG=engine:* node yourfile.js
-# Logs debug information of extention library "wps-sio-ext" provided by Web PubSub
+# Logs debug information from the extension library "wps-sio-ext" provided by Web PubSub
DEBUG=wps-sio-ext:* node yourfile.js
-# Logs debug information of mulitple packages
+# Logs debug information from multiple packages
DEBUG=engine:*,socket.io:*,wps-sio-ext:* node yourfile.js
```

:::image type="content" source="./media/socketio-troubleshoot-logging/log-debug.png" alt-text="Screenshot of logging information from DEBUG JavaScript utility":::
-You can enable logging from this utility to get more low-level network-related information by setting the environmental variable `AZURE_LOG_LEVEL`.
+
+You can enable logging from the `@azure/logger` utility to get lower-level network-related information by setting the environmental variable `AZURE_LOG_LEVEL`.
```bash
AZURE_LOG_LEVEL=verbose node yourfile.js
```
-`Azure_LOG_LEVEL` has four levels: `verbose`, `info`, `warning` and `error`.
+`AZURE_LOG_LEVEL` has four levels: `verbose`, `info`, `warning`, and `error`.
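
Alternatively, the `@azure/logger` package exposes a `setLogLevel` function, so you can set the level in code instead of through the environment variable:

```javascript
const { setLogLevel } = require("@azure/logger");

// Equivalent to running with AZURE_LOG_LEVEL=verbose
setLogLevel("verbose");
```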
+ ## Client side
-Using Web PubSub for Socket.IO doesn't change how you debug Socket.IO library. [Refer to the documentation](https://socket.io/docs/v4/logging-and-debugging/) from Socket.IO library.
-### Debug Socket.IO client in Node
+Using Web PubSub for Socket.IO doesn't change how you debug the Socket.IO library. [Refer to the documentation](https://socket.io/docs/v4/logging-and-debugging/) from the Socket.IO library.
+
+### Debug the Socket.IO client in Node
+```bash
+# Logs all debug information
+DEBUG=* node yourfile.js
-# Logs debug information from "socket.io-client" package
+# Logs debug information from the "socket.io-client" package
DEBUG=socket.io-client:* node yourfile.js
-# Logs debug information from "engine.io-client" package
+# Logs debug information from the "engine.io-client" package
DEBUG=engine.io-client:* node yourfile.js

# Logs debug information from multiple packages
DEBUG=socket.io-client:*,engine.io-client:* node yourfile.js
```
-### Debug Socket.IO client in browser
-In browser, use `localStorage.debug = '<scope>'`.
+### Debug the Socket.IO client in a browser
+
+In a browser, use `localStorage.debug = '<scope>'`.
```bash
# Logs all debug information
localStorage.debug = '*';
-# Logs debug information from "socket.io-client" package
+# Logs debug information from the "socket.io-client" package
localStorage.debug = 'socket.io-client';
```
backup About Restore Microsoft Azure Recovery Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/about-restore-microsoft-azure-recovery-services.md
Title: Restore options with Microsoft Azure Recovery Services (MARS) agent
description: Learn about the restore options available with the Microsoft Azure Recovery Services (MARS) agent.
Previously updated : 05/07/2021
Last updated : 08/14/2023
Using the MARS agent, you can:
- **[Restore all backed up files in a volume](restore-all-files-volume-mars.md):** This option recovers all backed up data in a specified volume from the recovery point in Azure Backup. It allows a faster transfer speed (up to 40 MBps).<br>We recommend this option for recovering large amounts of data, or entire volumes.
- **[Restore a specific set of backed up files and folders in a volume using PowerShell](backup-client-automation.md#restore-data-from-azure-backup):** If the paths to the files and folders relative to the volume root are known, this option allows you to restore the specified set of files and folders from a recovery point, using the faster transfer speed of the full volume restore. However, this option doesn't provide the convenience of browsing files and folders in the recovery point using the Instant Restore option.
- **[Restore individual files and folders using Instant Restore](backup-azure-restore-windows-server.md):** This option allows quick access to the backup data by mounting the volume in the recovery point as a drive. You can then browse and copy files and folders. This option offers a copy speed of up to 6 MBps, which is suitable for recovering individual files and folders of total size less than 80 GB. Once the required files are copied, you can unmount the recovery point.
+- **Cross Region Restore for MARS (preview)**: If your Recovery Services vault uses GRS resiliency and has the [Cross Region Restore setting turned on](backup-create-recovery-services-vault.md#set-cross-region-restore), you can restore the backup data from the secondary region.
+
+## Cross Region Restore (preview)
+
+Cross Region Restore (CRR) allows you to restore MARS backup data from a secondary region, which is an Azure paired region. This enables you to conduct drills for audit and compliance, and to recover data if the primary Azure region becomes unavailable during a disaster.
+
+To use this feature:
+
+1. [Turn on Cross Region Restore in your Recovery Services vault](backup-create-rs-vault.md#set-cross-region-restore). Once Cross Region Restore is enabled, you can't disable it.
+2. After you turn on the feature, it can take up to *48 hours* for the backup items to be available in the secondary region. Currently, the secondary region RPO is *36 hours*, because the RPO in the primary region is *24 hours*, and it can take up to *12 hours* to replicate the backup data from the primary to secondary region.
+3. To restore the backup data for the original machine, you can directly select **Secondary Region** as the source of the backup data in the wizard.
+
+ :::image type="content" source="./media/backup-azure-restore-windows-server/select-source-region-for-restore.png" alt-text="Screenshot shows the selection for secondary region as the backup data source during Cross Region Restore.":::
+
+4. To restore backup data for an alternate server from the secondary region, you need to download the *Secondary Region vault credential* from the Azure portal.
+
+ :::image type="content" source="./media/about-restore-microsoft-azure-recovery-services/download-vault-credentials-for-cross-region-restore.png" alt-text="Screenshot shows how to download vault credentials for secondary region.":::
+
+5. To automate recovery from the secondary region for audit or compliance drills, [use this command](backup-client-automation.md#cross-region-restore).
+
+>[!Note]
+>- Recovery Services vaults with private endpoint are currently not supported for Cross Region Restore with MARS.
+>- Recovery Services vaults enabled with Cross Region Restore will be automatically charged at RA-GRS rates for the MARS backups stored in the vault once the feature is generally available.
## Next steps

-- For more frequently asked questions, see [MARS agent FAQs](backup-azure-file-folder-backup-faq.yml).
+- For additional frequently asked questions, see [MARS agent FAQs](backup-azure-file-folder-backup-faq.yml).
- For information about supported scenarios and limitations, see [Support Matrix for the backup with the MARS agent](backup-support-matrix-mars-agent.md).
backup Azure Kubernetes Service Cluster Backup Concept https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/azure-kubernetes-service-cluster-backup-concept.md
Title: Azure Kubernetes Service (AKS) backup using Azure Backup prerequisites
description: This article explains the prerequisites for Azure Kubernetes Service (AKS) backup.
Previously updated : 07/27/2023
Last updated : 08/17/2023
Azure Backup now allows you to back up AKS clusters (cluster resources and persi
- Extension agent and extension operator are the core platform components in AKS, which are installed when an extension of any type is installed for the first time in an AKS cluster. These provide capabilities to deploy *1P* and *3P* extensions. The backup extension also relies on these for installation and upgrades.

-- Both of these core components are deployed with aggressive hard limits on CPU and memory, with CPU *less than 0.5% of a core* and memory limit ranging from *50-200 MB*. So, the *COGS impact* of these components is very low. Because they are core platform components, there is no workaround available to remove them once installed in the cluster.
+ >[!Note]
+ >Both of these core components are deployed with aggressive hard limits on CPU and memory, with CPU *less than 0.5% of a core* and memory limit ranging from *50-200 MB*. So, the *COGS impact* of these components is very low. Because they are core platform components, there is no workaround available to remove them once installed in the cluster.
Learn [how to manage the operation to install Backup Extension using Azure CLI](azure-kubernetes-service-cluster-manage-backups.md#backup-extension-related-operations).
To enable backup for an AKS cluster, see the following prerequisites:
- Before installing Backup Extension in the AKS cluster, ensure that the CSI drivers and snapshots are enabled for your cluster. If disabled, see [these steps to enable them](../aks/csi-storage-drivers.md#enable-csi-storage-drivers-on-an-existing-cluster).

-- Backup Extension uses the AKS cluster's Managed System Identity to perform backup operations. So, ASK backup doesn't support AKS clusters using Service Principal. You can [update your AKS cluster to use Managed System Identity](../aks/use-managed-identity.md#enable-managed-identities-on-an-existing-aks-cluster).
+- Backup Extension uses the AKS cluster's Managed System Identity to perform backup operations. So, AKS backup doesn't support AKS clusters using Service Principal. You can [update your AKS cluster to use Managed System Identity](../aks/use-managed-identity.md#enable-managed-identities-on-an-existing-aks-cluster).
>[!Note]
>Only Managed System Identity based AKS clusters are supported by AKS backup. The support for User Identity based AKS clusters is currently not available.
backup Azure Kubernetes Service Cluster Backup Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/azure-kubernetes-service-cluster-backup-support-matrix.md
Title: Azure Kubernetes Service (AKS) backup support matrix
description: This article provides a summary of support settings and limitations of Azure Kubernetes Service (AKS) backup.
Previously updated : 03/27/2023
Last updated : 08/17/2023
AKS backup is available in all the Azure public cloud regions: East US, North Eu
## Limitations

-- AKS backup supports AKS clusters with Kubernetes version 1.21.1 or later. This version has Container Storage Interface (CSI) drivers installed.
+- AKS backup supports AKS clusters with Kubernetes version *1.22* or later. This version has Container Storage Interface (CSI) drivers installed.
-- A CSI driver supports performing backup and restore operations for persistent volumes.
+- Before you install the backup extension in an AKS cluster, ensure that the CSI drivers and snapshot are enabled for your cluster. If they're disabled, [enable these settings](../aks/csi-storage-drivers.md#enable-csi-storage-drivers-on-an-existing-cluster).
-- Currently, an AKS backup supports only the backup of Azure disk-based persistent volumes (enabled by the CSI driver). If you're using Azure Files shares and Azure Blob Storage persistent volumes in your AKS clusters, you can configure backups for them via the Azure Backup solutions. For more information, see [About Azure file share backup](azure-file-share-backup-overview.md) and [Overview of Azure Blob Storage backup](blob-backup-overview.md).
+- AKS backups don't support in-tree volumes. You can back up only CSI driver-based volumes. You can [migrate from tree volumes to CSI driver-based persistent volumes](../aks/csi-migrate-in-tree-volumes.md).
-- AKS backups don't support tree volumes. You can back up only CSI driver-based volumes. You can [migrate from tree volumes to CSI driver-based persistent volumes](../aks/csi-migrate-in-tree-volumes.md).
+- Currently, an AKS backup supports only the backup of Azure disk-based persistent volumes (enabled by the CSI driver). Also, these persistent volumes should be dynamically provisioned, because static volumes aren't supported.
-- Before you install the backup extension in an AKS cluster, ensure that the CSI drivers and snapshot are enabled for your cluster. If they're disabled, [enable these settings](../aks/csi-storage-drivers.md#enable-csi-storage-drivers-on-an-existing-cluster).
+- Azure Files shares and Azure Blob Storage persistent volumes are currently not supported by AKS backup because CSI driver-based snapshot capability isn't available for them. If you're using these persistent volumes in your AKS clusters, you can configure backups for them via the Azure Backup solutions. For more information, see [Azure file share backup](azure-file-share-backup-overview.md) and [Azure Blob Storage backup](blob-backup-overview.md).
+
+- Any unsupported persistent volume type is skipped while a backup is being created for the AKS cluster.
-- The backup extension uses the AKS cluster's managed system identity to perform backup operations. So, an AKS backup doesn't support AKS clusters that use a service principal. You can [update your AKS cluster to use a managed system identity](../aks/use-managed-identity.md#enable-managed-identities-on-an-existing-aks-cluster).
+- The backup extension uses the AKS cluster's system identity to perform backup operations. Currently, AKS clusters that use a User Identity or a Service Principal aren't supported. If your AKS cluster uses a Service Principal, you can [update your AKS cluster to use a System Identity](../aks/use-managed-identity.md#enable-managed-identities-on-an-existing-aks-cluster).
- You must install the backup extension in the AKS cluster. If you're using Azure CLI to install the backup extension, ensure that the version is 2.41 or later. Use `az upgrade` command to upgrade the Azure CLI.
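As a combined sketch of these prerequisites (the resource group and cluster names are placeholders):

```azurepowershell
# Upgrade the Azure CLI to 2.41 or later, then ensure the CSI disk driver and
# snapshot controller are enabled before installing the backup extension.
az upgrade
az aks update --resource-group myResourceGroup --name myAKSCluster `
    --enable-disk-driver --enable-snapshot-controller
```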
backup Backup Azure About Mars https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-about-mars.md
Title: About the MARS Agent description: Learn how the MARS Agent supports the backup scenarios Previously updated : 11/28/2022 Last updated : 08/18/2023
Azure Backup uses the Microsoft Azure Recovery Services (MARS) agent to back up and recover files, folders, and the volume or system state from an on-premises computer to Azure.
-In this article, you'll learn about:
-
-> [!div class="checklist"]
-> - Backup scenarios
-> - Recovery scenarios
-> - Backup process
- ## Backup scenarios The MARS agent supports the following backup scenarios:
The MARS agent supports the following recovery scenarios:
| Server | Recovery scenario | Description | | | | |
-| **Same Server** | | The server on which the backup was originally created. |
+| **Same Server** | | Server on which the backup was originally created. |
| | **Files and Folders** | Choose the individual files and folders that you want to restore. | | | **Volume Level** | Choose the volume and recovery point that you want to restore, and then restore it to the same location or an alternate location on the same machine. Create a copy of existing files, overwrite existing files, or skip recovering existing files. | | | **System Level** | Choose the system state and recovery point to restore to the same machine at a specified location. |
The MARS agent supports the following recovery scenarios:
## Backup process 1. From the Azure portal, create a [Recovery Services vault](install-mars-agent.md#create-a-recovery-services-vault), and choose files, folders, and the system state from the **Backup goals**.
-2. [Download the Recovery Services vault credentials and agent installer](./install-mars-agent.md#download-the-mars-agent) to an on-premises machine.
-
-3. [Install the agent](./install-mars-agent.md#install-and-register-the-agent) and use the downloaded vault credentials to register the machine to the Recovery Services vault.
-4. From the agent console on the client, [configure the backup](./backup-windows-with-mars-agent.md#create-a-backup-policy) to specify what to back up, when to back up (the schedule), how long the backups should be retained in Azure (the retention policy) and start protecting.
+2. [Configure your Recovery Services vault to securely save the backup passphrase to Azure Key vault](save-backup-passphrase-securely-in-azure-key-vault.md).
+3. [Download the Recovery Services vault credentials and agent installer](./install-mars-agent.md#download-the-mars-agent) to an on-premises machine.
+4. [Install the agent](./install-mars-agent.md#install-and-register-the-agent) and use the downloaded vault credentials to register the machine to the Recovery Services vault.
+5. From the agent console on the client, [configure the backup](./backup-windows-with-mars-agent.md#create-a-backup-policy) to specify what to back up, when to back up (the schedule), how long the backups should be retained in Azure (the retention policy) and start protecting.
The following diagram shows the backup flow:
backup Backup Azure Arm Restore Vms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-arm-restore-vms.md
Title: Restore VMs by using the Azure portal
description: Restore an Azure virtual machine from a recovery point by using the Azure portal, including the Cross Region Restore feature. Previously updated : 07/13/2023 Last updated : 09/05/2023
Azure Backup provides several ways to restore a VM.
**Restore disk** | Restores a VM disk, which can then be used to create a new VM.<br/><br/> Azure Backup provides a template to help you customize and create a VM. <br/><br> The restore job generates a template that you can download and use to specify custom VM settings, and create a VM.<br/><br/> The disks are copied to the Resource Group you specify.<br/><br/> Alternatively, you can attach the disk to an existing VM, or create a new VM using PowerShell.<br/><br/> This option is useful if you want to customize the VM, add configuration settings that weren't there at the time of backup, or add settings that must be configured using the template or PowerShell. **Replace existing** | You can restore a disk, and use it to replace a disk on the existing VM.<br/><br/> The current VM must exist. If it's been deleted, this option can't be used.<br/><br/> Azure Backup takes a snapshot of the existing VM before replacing the disk. The snapshot is copied to the vault and retained in accordance with the retention policy. <br/><br/> When you choose a Vault-Standard recovery point, a VHD file with the content of the chosen recovery point is also created in the staging location you specify. Existing disks connected to the VM are replaced with the selected restore point. <br/><br/> After the replace disk operation, the original disk is retained in the resource group. You can choose to manually delete the original disks if they aren't needed. <br/><br/>Replace existing is supported for unencrypted managed VMs, including VMs [created using custom images](https://azure.microsoft.com/resources/videos/create-a-custom-virtual-machine-image-in-azure-resource-manager-with-powershell/). It's unsupported for classic VMs, unmanaged VMs, and [generalized VMs](../virtual-machines/windows/upload-generalized-managed.md).<br/><br/> If the restore point has more or less disks than the current VM, then the number of disks in the restore point will only reflect the VM configuration.<br><br> Replace existing is also supported for VMs with linked resources, like [user-assigned managed-identity](../active-directory/managed-identities-azure-resources/overview.md) or [Key Vault](../key-vault/general/overview.md). **Cross Region (secondary region)** | Cross Region restore can be used to restore Azure VMs in the secondary region, which is an [Azure paired region](../availability-zones/cross-region-replication-azure.md).<br><br> You can restore all the Azure VMs for the selected recovery point if the backup is done in the secondary region.<br><br> During the backup, snapshots aren't replicated to the secondary region. Only the data stored in the vault is replicated. So secondary region restores are only [vault tier](about-azure-vm-restore.md#concepts) restores. The restore time for the secondary region will be almost the same as the vault tier restore time for the primary region. <br><br> This feature is available for the options below:<br><br> - [Create a VM](#create-a-vm) <br> - [Restore Disks](#restore-disks) <br><br> We don't currently support the [Replace existing disks](#replace-existing-disks) option.<br><br> Permissions<br> The restore operation on secondary region can be performed by Backup Admins and App admins.
-**Cross Subscription Restore (preview)** | Allows you to restore Azure Virtual Machines or disks to any subscription (as per the Azure RBAC capabilities) from restore points. <br><br> You can trigger Cross Subscription Restore for managed virtual machines only. <br><br> Cross Subscription Restore is supported for [Restore with Managed System Identities (MSI)](backup-azure-arm-restore-vms.md#restore-vms-with-managed-identities). <br><br> It's unsupported for [snapshots](backup-azure-vms-introduction.md#snapshot-creation) and [secondary region](backup-azure-arm-restore-vms.md#restore-in-secondary-region) restores. <br><br> It's unsupported for [unmanaged VMs](#restoring-unmanaged-vms-and-disks-as-managed), [Encrypted Azure VMs](backup-azure-vms-introduction.md#encryption-of-azure-vm-backups) and [Trusted Launch VMs](backup-support-matrix-iaas.md#tvm-backup).
-**Cross Zonal Restore** | Allows you to restore Azure Virtual Machines or disks pinned to any zone to different available zones (as per the Azure RBAC capabilities) from restore points. <br><br> You can trigger Cross Zonal Restore for managed virtual machines only. <br><br> Cross Zonal Restore is supported for [Restore with Managed System Identities (MSI)](#restore-vms-with-managed-identities). <br><br> Cross Zonal Restore supports restore of an Azure zone pinned/non-zone pinned VM from a vault with Zonal-redundant storage (ZRS) enabled. Learn [how to set Storage Redundancy](backup-create-rs-vault.md#set-storage-redundancy). <br><br> It's supported to restore an Azure zone pinned VM only from a [vault with Cross Region Restore (CRR)](backup-create-rs-vault.md#set-storage-redundancy) (if the secondary region supports zones) or Zone Redundant Storage (ZRS) enabled. <br><br> Cross Zonal Restore is supported from [secondary regions](#restore-in-secondary-region). <br><br> It's unsupported from [snapshots](backup-azure-vms-introduction.md#snapshot-creation) restore point. <br><br> It's unsupported for [Encrypted Azure VMs](backup-azure-vms-introduction.md#encryption-of-azure-vm-backups) and [Trusted Launch VMs](backup-support-matrix-iaas.md#tvm-backup).
+**Cross Subscription Restore** | Allows you to restore Azure Virtual Machines or disks to a different subscription within the same tenant as the source subscription (as per the Azure RBAC capabilities) from restore points. <br><br> Allowed only if the [Cross Subscription Restore property](backup-azure-arm-restore-vms.md#cross-subscription-restore-preview) is enabled for your Recovery Services vault. <br><br> Works with [Cross Region Restore](backup-azure-arm-restore-vms.md#cross-region-restore) and [Cross Zonal Restore](backup-azure-arm-restore-vms.md#create-a-vm). <br><br> You can trigger Cross Subscription Restore for managed virtual machines only. <br><br> Cross Subscription Restore is supported for [Restore with Managed System Identities (MSI)](backup-azure-arm-restore-vms.md#restore-vms-with-managed-identities). <br><br> It's unsupported for [snapshots tier](backup-azure-vms-introduction.md#snapshot-creation) recovery points. <br><br> It's unsupported for [unmanaged VMs](#restoring-unmanaged-vms-and-disks-as-managed) and [ADE encrypted VMs](backup-azure-vms-encryption.md#encryption-support-using-ade).
+**Cross Zonal Restore** | Allows you to restore Azure Virtual Machines or disks pinned to any zone to different available zones (as per the Azure RBAC capabilities) from restore points. Note that the zone you select to restore to maps to the [logical zone](../reliability/availability-zones-overview.md#availability-zones) (not the physical zone) of the Azure subscription you use for the restore. <br><br> You can trigger Cross Zonal Restore for managed virtual machines only. <br><br> Cross Zonal Restore is supported for [Restore with Managed System Identities (MSI)](#restore-vms-with-managed-identities). <br><br> Cross Zonal Restore supports restore of an Azure zone pinned/non-zone pinned VM from a vault with Zone-redundant storage (ZRS) enabled. Learn [how to set Storage Redundancy](backup-create-rs-vault.md#set-storage-redundancy). <br><br> It's supported to restore an Azure zone pinned VM only from a [vault with Cross Region Restore (CRR)](backup-create-rs-vault.md#set-storage-redundancy) (if the secondary region supports zones) or Zone Redundant Storage (ZRS) enabled. <br><br> Cross Zonal Restore is supported from [secondary regions](#restore-in-secondary-region). <br><br> It's unsupported from a [snapshots](backup-azure-vms-introduction.md#snapshot-creation) restore point. <br><br> It's unsupported for [Encrypted Azure VMs](backup-azure-vms-introduction.md#encryption-of-azure-vm-backups).
>[!Tip] >To receive alerts/notifications when a restore operation fails, use [Azure Monitor alerts for Azure Backup](backup-azure-monitoring-built-in-monitor.md#azure-monitor-alerts-for-azure-backup). This helps you to monitor such failures and take necessary actions to remediate the issues.
Azure Backup provides several ways to restore a VM.
Some details about storage accounts: - **Create VM**: When you create a new VM with managed disks, nothing is placed in the storage account you specify. If using unmanaged disks, the VHD files for the VM's disks will be placed in the storage account you specify.-- **Restore disk**: The restore job generates a template that you can download and use to specify custom VM settings. This template is placed in the specified storage account. VHD files are also copied to the storage account when you restore managed disks from a Vault-Standard recovery point if the disk size is less than 4 TB, or when you restore unmanaged disks.
+- **Restore disk**: The restore job generates a template that you can download and use to specify custom VM settings. This template is placed in the specified storage account. VHD files are also copied to the storage account when you restore managed disks smaller than 4 TB from a Vault-Standard recovery point, or when you restore unmanaged disks. The files are then copied to managed storage. To avoid unnecessary charges, delete the VHD files from the staging storage account.
- **Replace disk**: When you replace a managed disk from a Vault-Standard recovery point and the disk size is less than 4 TB, a VHD file with the data from the chosen recovery point is created in the specified storage account. After the replace disk operation, the disks of the source Azure VM are left in the specified Resource group for your operation and the VHDs are stored in the specified storage account. You can choose to delete or retain these VHDs and disks. - **Storage account location**: The storage account must be in the same region as the vault. Only these accounts are displayed. If there are no storage accounts in the location, you need to create one. - **Storage type**: Blob storage isn't supported.
backup Backup Azure Delete Vault https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-delete-vault.md
Choose a client:
To delete a vault, follow these steps: -- **Step 1**: Go to **vault Overview**, click **Delete**, and then follow the instructions to complete the removal of Azure Backup and Azure Site Recovery items for vault deletion as shown below. Each link calls the respective _blade_ to perform the corresponding vault deletion steps.
+- **Step 1:** Go to **vault Overview**, click **Delete**, and then follow the instructions to complete the removal of Azure Backup and Azure Site Recovery items for vault deletion as shown below. Each link calls the respective _blade_ to perform the corresponding vault deletion steps.
See the instructions in the following steps to understand the process. Also, you can go to each blade to delete vaults.
To delete a vault, follow these steps:
Alternately, go to the blades manually by following the steps below. -- <a id="portal-mua">**Step 2**</a>: If Multi-User Authorization (MUA) is enabled, seek necessary permissions from the security administrator before vault deletion. [Learn more](./multi-user-authorization.md#authorize-critical-protected-operations-using-azure-ad-privileged-identity-management)
+- <a id="portal-mua">**Step 2:**</a> If Multi-User Authorization (MUA) is enabled, seek necessary permissions from the security administrator before vault deletion. [Learn more](./multi-user-authorization.md#authorize-critical-protected-operations-using-azure-ad-privileged-identity-management)
-- <a id="portal-disable-soft-delete">**Step 3**</a>: Disable the soft delete and Security features
+- <a id="portal-disable-soft-delete">**Step 3:**</a> Disable the soft delete and Security features
1. Go to **Properties** -> **Security Settings** and disable the **Soft Delete** feature if enabled. See [how to disable soft delete](./backup-azure-security-feature-cloud.md#enabling-and-disabling-soft-delete). 1. Go to **Properties** -> **Security Settings** and disable **Security Features**, if enabled. [Learn more](./backup-azure-security-feature.md) -- <a id="portal-delete-cloud-protected-items">**Step 4**</a>: Delete Cloud protected items
+- <a id="portal-delete-cloud-protected-items">**Step 4:**</a> Delete Cloud protected items
1. **Delete Items in soft-deleted state**: After disabling soft delete, check if there are any items previously remaining in the soft deleted state. If there are items in soft deleted state, then you need to *undelete* and *delete* them again. [Follow these steps](./backup-azure-security-feature-cloud.md#using-azure-portal) to find soft delete items and permanently delete them.
To delete a vault, follow these steps:
1. Go to the vault dashboard menu -> **Backup Items**. Click **Stop Backup** to stop the backups of all listed items, and then click **Delete Backup Data** to delete. [Follow these steps](#delete-protected-items-in-the-cloud) to remove those items. -- <a id="portal-delete-backup-servers">**Step 5**</a>: Delete Backup Servers
+- <a id="portal-delete-backup-servers">**Step 5:**</a> Delete Backup Servers
1. Go to the vault dashboard menu > **Backup Infrastructure** > **Protected Servers**. In Protected Servers, select the server to unregister. To delete the vault, you must unregister all the servers. Right-click each protected server and select **Unregister**.
To delete a vault, follow these steps:
>[!Note] >Deleting MARS/MABS/DPM servers also removes the corresponding backup items protected in the vault. -- <a id="portal-unregister-storage-accounts">**Step 6**</a>: Unregister Storage Accounts
+- <a id="portal-unregister-storage-accounts">**Step 6:**</a> Unregister Storage Accounts
Ensure all registered storage accounts are unregistered for successful vault deletion. Go to the vault dashboard menu > **Backup Infrastructure** > **Storage Accounts**. If you have storage accounts listed here, then you must unregister all of them. Learn how to [Unregister a storage account](manage-afs-backup.md#unregister-a-storage-account). -- <a id="portal-remove-private-endpoints">**Step 7**</a>: Remove Private Endpoints
+- <a id="portal-remove-private-endpoints">**Step 7:**</a> Remove Private Endpoints
Ensure there are no private endpoints created for the vault. Go to the vault dashboard menu > **Private endpoint Connections** under 'Settings'. If the vault has any private endpoint connections (created or attempted), ensure that they're removed before proceeding with vault deletion. -- **Step 8**: Delete vault
+- **Step 8:** Delete vault
After you've completed these steps, you can continue to [delete the vault](?tabs=portal#delete-the-recovery-services-vault).
If you're sure that all the items backed up in the vault are no longer required
Follow these steps: -- **Step 1**: Seek the necessary permissions from the security administrator to delete the vault if Multi-User Authorization has been enabled against the vault. [Learn more](./multi-user-authorization.md#authorize-critical-protected-operations-using-azure-ad-privileged-identity-management)
+- **Step 1:** Seek the necessary permissions from the security administrator to delete the vault if Multi-User Authorization has been enabled against the vault. [Learn more](./multi-user-authorization.md#authorize-critical-protected-operations-using-azure-ad-privileged-identity-management)
-- <a id="powershell-install-az-module">**Step 2**</a>: Upgrade to PowerShell 7 version by performing these steps:
+- <a id="powershell-install-az-module">**Step 2:**</a> Upgrade to PowerShell 7 version by performing these steps:
1. Upgrade to PowerShell 7: Run the following command in your console:
Follow these steps:
1. Open PowerShell 7 as administrator. -- **Step 3**: Save the PowerShell script in .ps1 format. Then, to run the script in your PowerShell console, type `./NameOfFile.ps1`. This recursively deletes all backup items and eventually the entire Recovery Services vault.
+- **Step 3:** Save the PowerShell script in .ps1 format. Then, to run the script in your PowerShell console, type `./NameOfFile.ps1`. This recursively deletes all backup items and eventually the entire Recovery Services vault.
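For example, assuming the script was saved as `DeleteVault.ps1` (a placeholder name):

```azurepowershell
# Unblock the downloaded script if needed, then run it from PowerShell 7.
Unblock-File -Path .\DeleteVault.ps1
.\DeleteVault.ps1
```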
>[!Note] >To access the PowerShell script for vault deletion, see the [PowerShell script for vault deletion](./scripts/delete-recovery-services-vault.md) article.
backup Backup Azure Restore System State https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-restore-system-state.md
Title: Restore System State to a Windows Server description: Step-by-step explanation for restoring Windows Server System State from a backup in Azure. Previously updated : 12/09/2022 Last updated : 08/14/2023
This article explains how to restore Windows Server System State backups from an
1. Restore System State as files from Azure Backup. When restoring System State as files from Azure Backup, you can either: * Restore System State to the same server where the backups were taken, or * Restore System State file to an alternate server.
+ * If you have Cross Region Restore enabled in your vault, you can restore the backup data from a secondary region.
2. Apply the restored System State files to a Windows Server using the Windows Server Backup utility.
The following steps explain how to roll back your Windows Server configuration t
![Choose this server option to restore the data to the same machine](./media/backup-azure-restore-system-state/samemachine.png)
+ If you have enabled Cross Region Restore (preview) and want to restore from the secondary region, select **Secondary Region**. Otherwise, select **Primary Region**.
+
+ :::image type="content" source="./media/backup-azure-restore-windows-server/select-source-region-for-restore.png" alt-text="Screenshot shows the selection of the source region of recovery point.":::
+ 4. On the **Select Recovery Mode** pane, choose **System State** and then select **Next**. ![Browse files](./media/backup-azure-restore-system-state/recover-type-selection.png)
The terminology used in these steps includes:
5. Provide the vault credential file that corresponds to the *Sample vault*. If the vault credential file is invalid (or expired), download a new vault credential file from the *Sample vault* in the Azure portal. Once the vault credential file is provided, the Recovery Services vault associated with the vault credential file appears.
+ If you want to use Cross Region Restore to restore the backup data from the secondary region, you need to download the *Secondary Region vault credential file* from the Azure portal, and then pass the file to the MARS agent.
+
+ :::image type="content" source="./media/backup-azure-restore-windows-server/pass-vault-credentials-in-mars-agent.png" alt-text="Screenshot shows the secondary vault credentials passed in MARS agent.":::
+ 6. On the Select Backup Server pane, select the *Source machine* from the list of displayed machines. 7. On the Select Recovery Mode pane, choose **System State** and select **Next**.
backup Backup Azure Restore Windows Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-restore-windows-server.md
Title: Restore files to Windows Server using the MARS Agent description: In this article, learn how to restore data stored in Azure to a Windows server or Windows computer with the Microsoft Azure Recovery Services (MARS) Agent.- Previously updated : 09/07/2018+ Last updated : 08/14/2023
This article explains how to restore data from a backup vault. To restore data,
* Restore data to the same machine from which the backups were taken. * Restore data to an alternate machine.
+* If you have Cross Region Restore enabled on your vault, you can restore the backup data from the secondary region.
-Use the Instant Restore feature to mount a writeable recovery point snapshot as a recovery volume. You can then explore the recovery volume and copy files to a local computer, in that way selectively restoring files.
+Use the Instant Restore feature to mount a writeable recovery point snapshot as a recovery volume. You can then explore the recovery volume and copy files to a local computer, thus selectively restoring files.
> [!NOTE] > The [January 2017 Azure Backup update](https://support.microsoft.com/help/3216528/azure-backup-update-for-microsoft-azure-recovery-services-agent-januar) is required if you want to use Instant Restore to restore data. Also, the backup data must be protected in vaults in locales listed in the support article. Consult the [January 2017 Azure Backup update](https://support.microsoft.com/help/3216528/azure-backup-update-for-microsoft-azure-recovery-services-agent-januar) for the latest list of locales that support Instant Restore.
If you accidentally deleted a file and want to restore it to the same machine (f
![Screenshot of Recover Data Wizard Getting Started page (restore to same machine)](./media/backup-azure-restore-windows-server/samemachine_gettingstarted_instantrestore.png)
+ If you have enabled Cross Region Restore (preview) and want to restore from the secondary region, select **Secondary Region**. Otherwise, select **Primary Region**.
+
+ :::image type="content" source="./media/backup-azure-restore-windows-server/select-source-region-for-restore.png" alt-text="Screenshot shows the selection of the source region of recovery point.":::
++ 4. On the **Select Recovery Mode** page, choose **Individual files and folders** > **Next**.
These steps include the following terminology:
![Screenshot of Recover Data Wizard Getting Started page (restore to alternate machine)](./media/backup-azure-restore-windows-server/alternatemachine_gettingstarted_instantrestore.png)
-5. Provide the vault credential file that corresponds to the sample vault, and select **Next**.
+5. Provide the vault credential file that corresponds to the sample vault.
If the vault credential file is invalid (or expired), [download a new vault credential file from the sample vault](backup-azure-file-folder-backup-faq.yml#where-can-i-download-the-vault-credentials-file-) in the Azure portal. After you provide a valid vault credential, the name of the corresponding backup vault appears.
+ If you want to use Cross Region Restore to restore backup data from the secondary region, you need to download the *Secondary Region vault credential file* from the Azure portal, and then pass the file to the MARS agent.
+
+ :::image type="content" source="./media/backup-azure-restore-windows-server/pass-vault-credentials-in-mars-agent.png" alt-text="Screenshot shows the vault credentials added to MARS agent.":::
+
+ Select **Next** to continue.
+ 6. On the **Select Backup Server** page, select the source machine from the list of displayed machines, and provide the passphrase. Then select **Next**. ![Screenshot of Recover Data Wizard Select Backup Server page (restore to alternate machine)](./media/backup-azure-restore-windows-server/alternatemachine_selectmachine_instantrestore.png)
backup Backup Azure Sql Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-sql-database.md
Title: Back up SQL Server databases to Azure description: This article explains how to back up SQL Server to Azure. The article also explains SQL Server recovery. Previously updated : 08/11/2022 Last updated : 09/06/2023
1. Workload aware backups that support all backup types - full, differential, and log 2. 15 minute RPO (recovery point objective) with frequent log backups 3. Point-in-time recovery up to a second
-4. Individual database level backup and restore
+4. Individual database-level backup and restore
>[!Note] >Snapshot-based backup for SQL databases in Azure VM is now in preview. This unique offering combines the goodness of snapshots, leading to a better RTO and low impact on the server along with the benefits of frequent log backups for low RPO. For any queries/access, write to us at [AskAzureBackupTeam@microsoft.com](mailto:AskAzureBackupTeam@microsoft.com).
Add **NT AUTHORITY\SYSTEM** and **NT Service\AzureWLBackupPluginSvc** logins to
![Rediscover DBs in Azure portal](media/backup-azure-sql-database/sql-rediscover-dbs.png)
-Alternatively, you can automate giving the permissions by running the following PowerShell commands in admin mode. The instance name is set to MSSQLSERVER by default. Change the instance name argument in script if need be:
+Alternatively, you can automate granting the permissions by running the following PowerShell commands in admin mode. The instance name is set to MSSQLSERVER by default. Change the instance name argument in the script if needed.
```powershell param(
catch
} ```
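The script body is truncated in this capture. As a rough, hedged sketch of the same idea only (the `SqlServer` module dependency and the T-SQL below are assumptions for illustration, not the article's original script):

```azurepowershell
param([string]$InstanceName = "MSSQLSERVER")

# Sketch: create the two service-account logins and grant them sysadmin on the instance.
Import-Module SqlServer
$server = if ($InstanceName -eq "MSSQLSERVER") { $env:COMPUTERNAME } else { "$env:COMPUTERNAME\$InstanceName" }
foreach ($account in "NT AUTHORITY\SYSTEM", "NT Service\AzureWLBackupPluginSvc") {
    $query = @"
IF NOT EXISTS (SELECT 1 FROM sys.server_principals WHERE name = N'$account')
    CREATE LOGIN [$account] FROM WINDOWS;
ALTER SERVER ROLE [sysadmin] ADD MEMBER [$account];
"@
    Invoke-Sqlcmd -ServerInstance $server -Query $query
}
```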
+## Configure simultaneous backups
+
+You can now configure backups to save the SQL Server recovery points and logs to local storage and the Recovery Services vault simultaneously.
+
+To configure simultaneous backups, follow these steps:
+
+1. Go to the `C:\Program Files\Azure Workload Backup\bin\plugins` location, and then create the file **PluginConfigSettings.json**, if it's not present.
+2. Add the comma-separated key-value pairs, with the keys `EnableLocalDiskBackupForBackupTypes` and `LocalDiskBackupFolderPath`, to the JSON file.
+
+ - Under `EnableLocalDiskBackupForBackupTypes`, list the backup types that you want to store locally.
+
+ For example, if you want to store the *Full* and *Log* backups, mention `["Full", "Log"]`. To store only the log backups, mention `["Log"]`.
+
+ - Under `LocalDiskBackupFolderPath`, mention the *path to the local folder*. Ensure that you use a *double backslash* when specifying the path in the JSON file.
+
+ For example, if the preferred path for local backup is `E:\LocalBackup`, mention the path in JSON as `E:\\LocalBackup`.
+
+ The final JSON should appear as:
+
+ ```JSON
+ {
+ "EnableLocalDiskBackupForBackupTypes": [ΓÇ£LogΓÇ¥],
+ "LocalDiskBackupFolderPath": ΓÇ£E:\\LocalBackupΓÇ¥,
+ }
+
+ ```
+
+ If there are other pre-populated entries in the JSON file, add the above two entries at the bottom of the JSON file *just before the closing curly bracket*.
+
+3. For the changes to take effect immediately, instead of waiting for the regular one-hour refresh, go to **Task Manager** > **Services**, right-click **AzureWLBackupPluginSvc**, and select **Stop**.
+
+ >[!Caution]
+ >This action will cancel all the ongoing backup jobs.
+
+ The naming convention of the stored backup file and the folder structure for it will be `{LocalDiskBackupFolderPath}\{SQLInstanceName}\{DatabaseName}`.
+
+ For example, if you have a database `Contoso` under the SQL instance `MSSQLSERVER`, the files will be located in `E:\LocalBackup\MSSQLSERVER\Contoso`.
+
+ The name of the file is the `VDI device set guid`, which is used for the backup operation.
+
+4. Check if the target location under `LocalDiskBackupFolderPath` has *read* and *write* permissions for `NT Service\AzureWLBackupPluginSvc`.
+
+ >[!Note]
+ >For a folder on the local VM disks, right-click the folder and configure the required permissions for `NT Service\AzureWLBackupPluginSvc` on the **Security** tab.
+
+ If you're using a network or SMB share, configure the permissions by running the following PowerShell cmdlets from a user context that already has permission to access the share:
+
+ ```azurepowershell
+ $cred = Get-Credential
+ New-SmbGlobalMapping -RemotePath <FileSharePath> -Credential $cred -LocalPath <LocalDrive>: -FullAccess @("<Comma Separated list of accounts>") -Persistent $true
+
+ ```
+
+ **Example**:
+
+ ```azurepowershell
+ $cred = Get-Credential
+ New-SmbGlobalMapping -RemotePath \\i00601p1imsa01.file.core.windows.net\rsvshare -Credential $cred -LocalPath Y: -FullAccess @("NT AUTHORITY\SYSTEM","NT Service\AzureWLBackupPluginSvc") -Persistent $true
+ ```
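To confirm that the mapping persisted, a quick check (sketch):

```azurepowershell
# List persistent SMB global mappings visible to services.
Get-SmbGlobalMapping
```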
+ ## Next steps * [Learn about](backup-sql-server-database-azure-vms.md) backing up SQL Server databases.
backup Backup Azure Vms Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-vms-encryption.md
Title: Back up and restore encrypted Azure VMs description: Describes how to back up and restore encrypted Azure VMs with the Azure Backup service. Previously updated : 12/14/2022 Last updated : 08/28/2023
Azure Backup needs read-only access to back up the keys and secrets, along with
- Your Key Vault is associated with the Azure AD tenant of the Azure subscription. If you're a **Member user**, Azure Backup acquires access to the Key Vault without further action. - If you're a **Guest user**, you must provide permissions for Azure Backup to access the key vault. You need to have access to key vaults to configure Backup for encrypted VMs.
+To provide Azure RBAC permissions on Key Vault, see [this article](../key-vault/general/rbac-guide.md?tabs=azure-cli#enable-azure-rbac-permissions-on-key-vault).
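If you'd rather script the access-policy route than use the portal, a hedged sketch (the vault name is a placeholder, and the display-name lookup for Azure Backup's first-party service principal is an assumption to verify in your tenant):

```azurepowershell
# Grant Azure Backup read access to the keys and secrets used by encrypted VMs.
$sp = Get-AzADServicePrincipal -DisplayName "Backup Management Service"
Set-AzKeyVaultAccessPolicy -VaultName "myKeyVault" `
    -ObjectId $sp.Id `
    -PermissionsToKeys Get, List, Backup `
    -PermissionsToSecrets Get, List, Backup
```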
+ To set permissions: 1. In the Azure portal, select **All services**, and search for **Key vaults**.
backup Backup Azure Vms Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-vms-troubleshoot.md
The backup operation failed because the VM is in Failed state. For a successful
Error code: UserErrorFsFreezeFailed <br/> Error message: Failed to freeze one or more mount-points of the VM to take a file-system consistent snapshot.
-**Step 1**
+**Step 1:**
* Unmount the devices for which the file system state wasn't cleaned, using the **umount** command. * Run a file system consistency check on these devices by using the **fsck** command.
MountsToSkip = /mnt/resource
SafeFreezeWaitInSeconds=600 ```
-**Step 2**
+**Step 2:**
* Check if there are duplicate mount points present.
Error message: Snapshot operation failed because VSS writers were in a bad state
This error occurs because the VSS writers were in a bad state. Azure Backup extensions interact with VSS Writers to take snapshots of the disks. To resolve this issue, follow these steps:
-**Step 1**: Check the **Free Disk Space**, **VM resources as RAM and page file**, and **CPU utilization percentage**.
+**Step 1:** Check the **Free Disk Space**, **VM resources as RAM and page file**, and **CPU utilization percentage**.
- Increase the VM size to increase vCPUs and RAM space. - Increase the disk size if the free disk space is low.
-**Step 2**: Restart VSS writers that are in a bad state.
+**Step 2:** Restart VSS writers that are in a bad state.
* From an elevated command prompt, run `vssadmin list writers`. * The output contains all VSS writers and their state. For every VSS writer with a state that's not **[1] Stable**, restart the respective VSS writer's service.
This error occurs because the VSS writers were in a bad state. Azure Backup exte
> [!NOTE] > Restarting some services can have an impact on your production environment. Ensure the approval process is followed and the service is restarted at the scheduled downtime.
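For illustration, a hedged sketch of that check-and-restart sequence (the service name below is a placeholder; map each failed writer to its backing service before restarting anything):

```azurepowershell
# List all VSS writers; any writer whose state isn't [1] Stable needs attention.
vssadmin list writers

# Restart the service backing the failed writer (placeholder name).
Restart-Service -Name "VSS" -Force
```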
-**Step 3**: If restarting the VSS writers did not resolve the issue, then run the following command from an elevated command-prompt (as an administrator) to prevent the threads from being created for blob-snapshots.
+**Step 3:** If restarting the VSS writers did not resolve the issue, then run the following command from an elevated command-prompt (as an administrator) to prevent the threads from being created for blob-snapshots.
```console REG ADD "HKLM\SOFTWARE\Microsoft\BcdrAgentPersistentKeys" /v SnapshotWithoutThreads /t REG_SZ /d True /f ```
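To verify that the value was written, a quick sketch:

```azurepowershell
# Read back the registry value set above.
Get-ItemProperty -Path "HKLM:\SOFTWARE\Microsoft\BcdrAgentPersistentKeys" -Name SnapshotWithoutThreads
```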
-**Step 4**: If steps 1 and 2 did not resolve the issue, then the failure could be due to VSS writers timing out due to limited IOPS.<br>
+**Step 4:** If steps 1 and 2 did not resolve the issue, then the failure could be due to VSS writers timing out due to limited IOPS.<br>
To verify, navigate to ***System and Event Viewer Application logs*** and check for the following error message:<br> *The shadow copy provider timed out while holding writes to the volume being shadow copied. This is probably due to excessive activity on the volume by an application or a system service. Try again later when activity on the volume is reduced.*<br>
Error message: Snapshot operation failed due to inadequate VM resources.
The backup operation on the VM failed due to delay in network calls while performing the snapshot operation. To resolve this issue, perform Step 1. If the issue persists, try steps 2 and 3.
-**Step 1**: Create snapshot through Host
+**Step 1:** Create snapshot through Host
From an elevated (admin) command-prompt, run the following command:
REG ADD "HKLM\SOFTWARE\Microsoft\BcdrAgentPersistentKeys" /v CalculateSnapshotTi
This will ensure the snapshots are taken through host instead of Guest. Retry the backup operation.
-**Step 2**: Try changing the backup schedule to a time when the VM is under less load (like less CPU or IOPS)
+**Step 2:** Try changing the backup schedule to a time when the VM is under less load (like less CPU or IOPS)
-**Step 3**: Try [increasing the size of the VM](../virtual-machines/resize-vm.md) and retry the operation
+**Step 3:** Try [increasing the size of the VM](../virtual-machines/resize-vm.md) and retry the operation
### 320001, ResourceNotFound - Could not perform the operation as VM no longer exists / 400094, BCMV2VMNotFound - The virtual machine doesn't exist / An Azure virtual machine wasn't found
backup Backup Client Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-client-automation.md
Title: Use PowerShell to back up Windows Server to Azure description: In this article, learn how to use PowerShell to set up Azure Backup on Windows Server or a Windows client, and manage backup and recovery.- Previously updated : 08/24/2021 -+ Last updated : 08/29/2021 +
$CredsPath = "C:\downloads"
$CredsFilename = Get-AzRecoveryServicesVaultSettingsFile -Backup -Vault $Vault1 -Path $CredsPath ```
-### Registering using the PS Az module
+### Register using the PowerShell Az module
> [!NOTE] > A bug in vault certificate generation is fixed in the Az 3.5.0 release. Use Az 3.5.0 or a later version to download a vault certificate.
Server properties updated successfully
All backups from Windows Servers and clients to Azure Backup are governed by a policy. The policy includes three parts:
-1. A **backup schedule** that specifies when backups need to be taken and synchronized with the service.
-2. A **retention schedule** that specifies how long to retain the recovery points in Azure.
-3. A **file inclusion/exclusion specification** that dictates what should be backed up.
+- A **backup schedule** that specifies when backups need to be taken and synchronized with the service.
+- A **retention schedule** that specifies how long to retain the recovery points in Azure.
+- A **file inclusion/exclusion specification** that dictates what should be backed up.
In this document, since we're automating backup, we'll assume nothing has been configured. We begin by creating a new backup policy using the [New-OBPolicy](/powershell/module/msonlinebackup/new-obpolicy) cmdlet.
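Putting those three parts together, here's a minimal end-to-end sketch (the schedule, retention, and file path below are illustrative assumptions, not values from this article):

```azurepowershell
Import-Module MSOnlineBackup

$policy = New-OBPolicy

# 1. Backup schedule: weekends at 4 PM (assumed).
$schedule = New-OBSchedule -DaysOfWeek Saturday, Sunday -TimesOfDay 16:00
Set-OBSchedule -Policy $policy -Schedule $schedule

# 2. Retention: keep recovery points for 30 days (assumed).
$retention = New-OBRetentionPolicy -RetentionDays 30
Set-OBRetentionPolicy -Policy $policy -RetentionPolicy $retention

# 3. Inclusion specification: protect C:\Data (assumed path).
$fileSpec = New-OBFileSpec -FileSpec "C:\Data"
Add-OBFileSpec -Policy $policy -FileSpec $fileSpec

# Activate the policy.
Set-OBPolicy -Policy $policy
```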
Job completed.
The recovery operation completed successfully. ```
-## Uninstalling the Azure Backup agent
+## Cross Region Restore
+
+Cross Region Restore (CRR) allows you to restore MARS backup data from a secondary region, which is an Azure paired region. You can use it to conduct drills for audit and compliance, and to recover data if the primary Azure region becomes unavailable during a disaster.
+
+### Original server restore
+
+If you're performing restore for the original server from the secondary region (Cross Region Restore), use the flag `UseSecondaryRegion` while getting the `OBRecoverableSource` object.
+
+```azurepowershell
+$sources = Get-OBRecoverableSource -UseSecondaryRegion
+$RP = Get-OBRecoverableItem -Source $sources[0]
+$RO = New-OBRecoveryOption -DestinationPath $RecoveryPath -OverwriteType Overwrite
+Start-OBRecovery -RecoverableItem $RP -RecoveryOption $RO -Async | ConvertTo-Json
+
+```
+
+### Alternate server restore
+
+If you're restoring to an alternate server from the secondary region (Cross Region Restore), download the *secondary region vault credential file* from the Azure portal and pass it during the restore.
+
+```azurepowershell
+$serverName = 'myserver.mycompany.com'
+$secVaultCred = "C:\Users\myuser\Downloads\myvault_Mon Jul 17 2023.VaultCredentials"
+$passphrase = 'Default Passphrase'
+$alternateServers = Get-OBAlternateBackupServer -VaultCredentials $secVaultCred
+$altServer = $alternateServers | Where-Object {$_.ServerName -Like $serverName}
+$pwd = ConvertTo-SecureString -String $passphrase -AsPlainText -Force
+$sources = Get-OBRecoverableSource $altServer
+$RP = Get-OBRecoverableItem -Source $sources[0]
+$RO = New-OBRecoveryOption
+Start-OBRecoveryMount -RecoverableItem $RP -RecoveryOption $RO -EncryptionPassphrase $pwd -Async | ConvertTo-Json
+
+```
+
+## Uninstall the Azure Backup agent
You can uninstall the Azure Backup agent by using the following command:
Invoke-Command -Session $Session -Script { param($D, $A) Start-Process -FilePath
For more information about Azure Backup for Windows Server/Client: * [Introduction to Azure Backup](./backup-overview.md)
-* [Back up Windows Servers](backup-windows-with-mars-agent.md)
+* [Back up Windows Servers](backup-windows-with-mars-agent.md)
backup Backup Create Recovery Services Vault https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-create-recovery-services-vault.md
Title: Create and configure Recovery Services vaults description: Learn how to create and configure Recovery Services vaults, and how to restore in a secondary region by using Cross Region Restore. Previously updated : 07/21/2023 Last updated : 08/14/2023
Azure Backup automatically handles storage for the vault. You need to specify ho
1. For **Storage replication type**, select **Geo-redundant**, **Locally-redundant**, or **Zone-redundant**. Then select **Save**.
- ![Set the storage configuration for new vault](./media/backup-create-rs-vault/recovery-services-vault-backup-configuration.png)
+ :::image type="content" source="./media/backup-create-rs-vault/recovery-services-vault-backup-configuration.png" alt-text="Screenshot shows how to set the storage configuration for a new vault." lightbox="./media/backup-create-rs-vault/recovery-services-vault-backup-configuration.png":::
Here are our recommendations for choosing a storage replication type:
Before you begin, consider the following information:
- Cross Region Restore is supported only for a Recovery Services vault that uses the [GRS replication type](#set-storage-redundancy). - Virtual machines (VMs) created through Azure Resource Manager and encrypted Azure VMs are supported. VMs created through the classic deployment model aren't supported. You can restore the VM or its disk. - SQL Server or SAP HANA databases hosted on Azure VMs are supported. You can restore databases or their files.
+- The MARS agent is supported for vaults without private endpoints (preview).
- Review the [support matrix](backup-support-matrix.md#cross-region-restore) for a list of supported managed types and regions.-- Using Cross Region Restore will incur additional charges. [Learn more](https://azure.microsoft.com/pricing/details/backup/).-- After you opt in, it might take up to 48 hours for the backup items to be available in secondary regions.
+- Using Cross Region Restore will incur additional charges. Once you enable Cross Region Restore, it might take up to 48 hours for the backup items to be available in secondary regions. [Learn more about pricing](https://azure.microsoft.com/pricing/details/backup/).
- Cross Region Restore currently can't be reverted to GRS or LRS after the protection starts for the first time. - Currently, secondary region RPO is 36 hours. This is because the RPO in the primary region is 24 hours and can take up to 12 hours to replicate the backup data from the primary to the secondary region. - Review the [permissions required to use Cross Region Restore](backup-rbac-rs-vault.md#minimum-role-requirements-for-azure-vm-backup).
For more information about backup and restore with Cross Region Restore, see the
- [Cross Region Restore for Azure VMs](backup-azure-arm-restore-vms.md#cross-region-restore) - [Cross Region Restore for SQL Server databases](restore-sql-database-azure-vm.md#cross-region-restore) - [Cross Region Restore for SAP HANA databases](sap-hana-db-restore.md#cross-region-restore)
+- [Cross Region Restore for MARS (Preview)](about-restore-microsoft-azure-recovery-services.md#cross-region-restore-preview)
## Set encryption settings
backup Backup Managed Disks Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-managed-disks-cli.md
Title: Back up Azure Managed Disks using Azure CLI
description: Learn how to back up Azure Managed Disks using Azure CLI. Previously updated : 08/14/2023 Last updated : 08/25/2023
az dataprotection backup-instance list-from-resourcegraph --datasource-type Azur
] ```
-You can specify a rule and tagname while triggering backup. To view the rules in policy, look through the policy JSON. In the below example, the rule with the name BackupDaily, and tag name "default" is displayed and we'll use that rule for the on-demand backup.
+You can specify a rule and tag name while triggering backup. To view the rules in a policy, look through the policy JSON. In the following example, the rule named `"BackupDaily"` with the tag name `"default"` is displayed, and we'll use that rule for the on-demand backup.
```json "name": "BackupDaily",
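With that rule name in hand, an on-demand backup can be triggered; a minimal sketch (the vault, resource group, and backup-instance names are placeholders):

```azurepowershell
# Trigger an on-demand backup using the 'BackupDaily' rule and 'default' retention tag.
az dataprotection backup-instance adhoc-backup `
    --name "diskBackupInstanceName" `
    --rule-name "BackupDaily" `
    --retention-tag-override "default" `
    --resource-group "testBkpVaultRG" `
    --vault-name "TestBkpVault"
```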
backup Backup Sql Server Azure Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-sql-server-azure-troubleshoot.md
The VM is not able to contact Azure Backup service due to internet connectivity
| Error message | Possible cause | Recommendation | | | | |
-| Operation failing with `UserErrorWindowsWLExtFailedToStartPluginService` error. | Azure Backup workload extension is unable to start the workload backup plugin service on the Azure Virtual Machine due to service account misconfiguration. | **Step 1** <br><br> Verify if **NT Service\AzureWLBackupPluginSvc** user has **Read** permissions on: <br> - C:\windows\Microsoft.NET \assembly\GAC_32 <br> - C:\windows\Microsoft.NET \assembly\GAC_64 <br> - C:\Windows\Microsoft.NET\Framework64\v4.0.30319\Config\machine.config. <br><br> If the permissions are missing, assign **Read** permissions on these directories. <br><br> **Step 2** <br><br> Verify if **NT Service\AzureWLBackupPluginSvc** has the **Bypass traverse chekcing** rights by going to **Local Security Policy** > **User Right Assignment** > **Bypass traverse checking**. **Everyone** must be selected by default. <br><br> If **Everyone** and **NT Service\AzureWLBackupPluginSvc** are missing, add **NT Service\AzureWLBackupPluginSvc** user, and then try to restart the service or trigger a backup or restore operation for a datasource. |
+| Operation failing with `UserErrorWindowsWLExtFailedToStartPluginService` error. | Azure Backup workload extension is unable to start the workload backup plugin service on the Azure Virtual Machine due to service account misconfiguration. | **Step 1:** <br><br> Verify if the **NT Service\AzureWLBackupPluginSvc** user has **Read** permissions on: <br> - C:\windows\Microsoft.NET\assembly\GAC_32 <br> - C:\windows\Microsoft.NET\assembly\GAC_64 <br> - C:\Windows\Microsoft.NET\Framework64\v4.0.30319\Config\machine.config. <br><br> If the permissions are missing, assign **Read** permissions on these directories. <br><br> **Step 2:** <br><br> Verify if **NT Service\AzureWLBackupPluginSvc** has the **Bypass traverse checking** right by going to **Local Security Policy** > **User Right Assignment** > **Bypass traverse checking**. **Everyone** must be selected by default. <br><br> If **Everyone** and **NT Service\AzureWLBackupPluginSvc** are missing, add the **NT Service\AzureWLBackupPluginSvc** user, and then try to restart the service or trigger a backup or restore operation for a datasource. |
backup Backup Support Matrix Iaas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-support-matrix-iaas.md
Title: Support matrix for Azure VM backups description: Get a summary of support settings and limitations for backing up Azure VMs by using the Azure Backup service. Previously updated : 07/05/2023 Last updated : 08/21/2023
Back up Linux Azure VMs with the Linux Azure VM agent | Supported for file-consi
Back up Linux Azure VMs with the MARS agent | Not supported.<br/><br/> The MARS agent can be installed only on Windows machines. Back up Linux Azure VMs with DPM or MABS | Not supported. Back up Linux Azure VMs with Docker mount points | Currently, Azure Backup doesn't support exclusion of Docker mount points because these are mounted at different paths every time.
+Back up Linux Azure VMs with ZFS pool configuration | Not supported.
## Operating system support (Linux)
Recovery points on DPM or MABS disk | 64 for file servers, and 448 for app serve
**Restore disk** | This option restores a VM disk, which can you can then use to create a new VM.<br/><br/> Azure Backup provides a template to help you customize and create a VM. <br/><br> The restore job generates a template that you can download and use to specify custom VM settings and create a VM.<br/><br/> The disks are copied to the resource group that you specify.<br/><br/> Alternatively, you can attach the disk to an existing VM, or create a new VM by using PowerShell.<br/><br/> This option is useful if you want to customize the VM, add configuration settings that weren't there at the time of backup, or add settings that must be configured via the template or PowerShell. **Replace existing** | You can restore a disk and use it to replace a disk on the existing VM.<br/><br/> The current VM must exist. If it has been deleted, you can't use this option.<br/><br/> Azure Backup takes a snapshot of the existing VM before replacing the disk, and it stores the snapshot in the staging location that you specify. Existing disks connected to the VM are replaced with the selected restore point.<br/><br/> The snapshot is copied to the vault and retained in accordance with the retention policy. <br/><br/> After the replace disk operation, the original disk is retained in the resource group. You can choose to manually delete the original disks if they aren't needed. <br/><br/>This option is supported for unencrypted managed VMs and for VMs [created from custom images](https://azure.microsoft.com/resources/videos/create-a-custom-virtual-machine-image-in-azure-resource-manager-with-powershell/). It's not supported for unmanaged disks and VMs, classic VMs, and [generalized VMs](../virtual-machines/windows/capture-image-resource.md).<br/><br/> If the restore point has more or fewer disks than the current VM, the number of disks in the restore point will only reflect the VM configuration.<br><br> This option is also supported for VMs with linked resources, like [user-assigned managed identity](../active-directory/managed-identities-azure-resources/overview.md) and [Azure Key Vault](../key-vault/general/overview.md). **Cross Region (secondary region)** | You can use cross-region restore to restore Azure VMs in the secondary region, which is an [Azure paired region](../availability-zones/cross-region-replication-azure.md).<br><br> You can restore all the Azure VMs for the selected recovery point if the backup is done in the secondary region.<br><br> This feature is available for the following options:<br> - [Create a VM](./backup-azure-arm-restore-vms.md#create-a-vm) <br> - [Restore disks](./backup-azure-arm-restore-vms.md#restore-disks) <br><br> We don't currently support the [Replace existing disks](./backup-azure-arm-restore-vms.md#replace-existing-disks) option.<br><br> Backup admins and app admins have permissions to perform the restore operation on a secondary region.
-**Cross Subscription (preview)** | You can use cross-subscription restore to restore Azure managed VMs in different subscriptions.<br><br> You can restore Azure VMs or disks to any subscription (within the same tenant as the source subscription) from restore points. This is one of the Azure role-based access control (RBAC) capabilities. <br><br> This feature is available for the following options:<br> - [Create a VM](./backup-azure-arm-restore-vms.md#create-a-vm) <br> - [Restore disks](./backup-azure-arm-restore-vms.md#restore-disks) <br><br> Cross-subscription restore is unsupported for [snapshots](backup-azure-vms-introduction.md#snapshot-creation) and [secondary region](backup-azure-arm-restore-vms.md#restore-in-secondary-region) restores. It's also unsupported for [unmanaged VMs](backup-azure-arm-restore-vms.md#restoring-unmanaged-vms-and-disks-as-managed), [encrypted Azure VMs](backup-azure-vms-introduction.md#encryption-of-azure-vm-backups), and [trusted launch VMs](backup-support-matrix-iaas.md#tvm-backup).
-**Cross Zonal Restore** | You can use cross-zonal restore to restore Azure zone-pinned VMs in available zones.<br><br> You can restore Azure VMs or disks to different zones (one of the Azure RBAC capabilities) from restore points. <br><br> This feature is available for the following options:<br> - [Create a VM](./backup-azure-arm-restore-vms.md#create-a-vm) <br> - [Restore disks](./backup-azure-arm-restore-vms.md#restore-disks) <br><br> Cross-zonal restore is unsupported for [snapshots](backup-azure-vms-introduction.md#snapshot-creation) of restore points. It's also unsupported for [encrypted Azure VMs](backup-azure-vms-introduction.md#encryption-of-azure-vm-backups) and [trusted launch VMs](backup-support-matrix-iaas.md#tvm-backup).
+**Cross Subscription** | Allowed only if the [Cross Subscription Restore property](backup-azure-arm-restore-vms.md#cross-subscription-restore-preview) is enabled for your Recovery Services vault. <br><br> You can restore Azure Virtual Machines or disks to a different subscription within the same tenant as the source subscription (as per the Azure RBAC capabilities) from restore points. <br><br> This feature is available for the following options:<br> - [Create a VM](./backup-azure-arm-restore-vms.md#create-a-vm) <br> - [Restore disks](./backup-azure-arm-restore-vms.md#restore-disks) <br><br> Cross Subscription Restore is unsupported for [snapshots](backup-azure-vms-introduction.md#snapshot-creation) tier recovery points. It's also unsupported for [unmanaged VMs](backup-azure-arm-restore-vms.md#restoring-unmanaged-vms-and-disks-as-managed) and [VMs with disks having Azure Encryptions (ADE)](backup-azure-vms-encryption.md#encryption-support-using-ade).
+**Cross Zonal Restore** | You can use cross-zonal restore to restore Azure zone-pinned VMs in available zones. You can restore Azure VMs or disks to different zones (one of the Azure RBAC capabilities) from restore points. Note that the zone you select to restore to maps to the [logical zone](../reliability/availability-zones-overview.md#availability-zones) (not the physical zone) of the Azure subscription you use for the restore. <br><br> This feature is available for the following options:<br> - [Create a VM](./backup-azure-arm-restore-vms.md#create-a-vm) <br> - [Restore disks](./backup-azure-arm-restore-vms.md#restore-disks) <br><br> Cross-zonal restore is unsupported for [snapshots](backup-azure-vms-introduction.md#snapshot-creation) of restore points. It's also unsupported for [encrypted Azure VMs](backup-azure-vms-introduction.md#encryption-of-azure-vm-backups).
## Support for file-level restore
The following table summarizes support for backup during VM management tasks, su
**Restore** | **Supported** |
-<a name="backup-azure-cross-subscription-restore">Restore across a subscription</a> | [Cross-subscription restore (preview)](backup-azure-arm-restore-vms.md#restore-options) is now supported in Azure VMs.
+<a name="backup-azure-cross-subscription-restore">Restore across a subscription</a> | [Cross-subscription restore](backup-azure-arm-restore-vms.md#restore-options) is now supported in Azure VMs.
[Restore across a region](backup-azure-arm-restore-vms.md#cross-region-restore) | Supported.
<a name="backup-azure-cross-zonal-restore">Restore across a zone</a> | [Cross-zonal restore](backup-azure-arm-restore-vms.md#restore-options) is now supported in Azure VMs.
Restore to an existing VM | Use the replace disk option.
Adding a disk to a protected VM | Supported.
Resizing a disk on a protected VM | Supported.
Shared storage | Backing up VMs by using Cluster Shared Volumes (CSV) or Scale-Out File Server isn't supported. CSV writers are likely to fail during backup. On restore, disks that contain CSV volumes might not come up.
[Shared disks](../virtual-machines/disks-shared-enable.md) | Not supported.
-<a name="ultra-disk-backup">Ultra disks</a> | Supported with [Enhanced policy](backup-azure-vms-enhanced-policy.md). The support is currently in preview. <br><br> Supported region(s) - Sweden Central, Central US, North Central US, South Central US, East US, East US 2, West US 2, West Europe and North Europe. <br><br> To enroll your subscription for this feature, [fill this form](https://forms.office.com/r/1GLRnNCntU). <br><br> - Configuration of Ultra disk protection is supported via Recovery Services vault only. This configuration is currently not supported via virtual machine blade. <br><br> - Cross-region restore is currently not supported for machines using Ultra disks.
-<a name="premium-ssd-v2-backup">Premium SSD v2</a> | Supported with [Enhanced policy](backup-azure-vms-enhanced-policy.md). The support is currently in preview. <br><br> Supported region(s) - East US, West Europe, Central US, South Central US, East US 2, West US 2 and North Europe. <br><br> To enroll your subscription for this feature, [fill this form](https://forms.office.com/r/h56TpTc773). <br><br> - Configuration of Premium v2 disk protection is supported via Recovery Services vault only. This configuration is currently not supported via virtual machine blade. <br><br> - Cross-region restore is currently not supported for machines using Premium v2 disks.
+<a name="ultra-disk-backup">Ultra disks</a> | Supported with [Enhanced policy](backup-azure-vms-enhanced-policy.md). The support is currently in preview. <br><br> Supported region(s) - Sweden Central, Central US, North Central US, South Central US, East US, East US 2, West US 2, West Europe and North Europe. <br><br> To enroll your subscription for this feature, [fill this form](https://forms.office.com/r/1GLRnNCntU). <br><br> - Configuration of Ultra disk protection is supported via Recovery Services vault only. This configuration is currently not supported via virtual machine blade. <br><br> - Cross-region restore is currently not supported for machines using Ultra disks. <br><br> - GRS type vaults cannot be used for enabling backup.
+<a name="premium-ssd-v2-backup">Premium SSD v2</a> | Supported with [Enhanced policy](backup-azure-vms-enhanced-policy.md). The support is currently in preview. <br><br> Supported region(s) - East US, West Europe, Central US, South Central US, East US 2, West US 2 and North Europe. <br><br> To enroll your subscription for this feature, [fill this form](https://forms.office.com/r/h56TpTc773). <br><br> - Configuration of Premium v2 disk protection is supported via Recovery Services vault only. This configuration is currently not supported via virtual machine blade. <br><br> - Cross-region restore is currently not supported for machines using Premium v2 disks. <br><br> - GRS type vaults cannot be used for enabling backup.
[Temporary disks](../virtual-machines/managed-disks-overview.md#temporary-disk) | Azure Backup doesn't back up temporary disks.
NVMe/[ephemeral disks](../virtual-machines/ephemeral-os-disks.md) | Not supported.
[Resilient File System (ReFS)](/windows-server/storage/refs/refs-overview) restore | Supported. Volume Shadow Copy Service (VSS) supports app-consistent backups on ReFS.
backup Backup Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-support-matrix.md
Title: Azure Backup support matrix description: Provides a summary of support settings and limitations for the Azure Backup service. Previously updated : 10/21/2022 Last updated : 08/14/2023
Backup supports the compression of backup traffic, as summarized in the followin
## Cross Region Restore
-Azure Backup has added the Cross Region Restore feature to strengthen data availability and resiliency capability, giving you full control to restore data to a secondary region. To configure this feature, visit [the Set Cross Region Restore article.](backup-create-rs-vault.md#set-cross-region-restore). This feature is supported for the following management types:
+Azure Backup has added the Cross Region Restore feature to strengthen data availability and resiliency capability, giving you full control to restore data to a secondary region. To configure this feature, see [Set Cross Region Restore](backup-create-rs-vault.md#set-cross-region-restore). This feature is supported for the following management types:
| Backup Management type | Supported | Supported Regions |
| - | - | -- |
| Azure VM | Supported for Azure VMs (including encrypted Azure VMs) with both managed and unmanaged disks. Not supported for classic VMs. | Available in all Azure public regions and sovereign regions, except for UG IOWA. |
| SQL/SAP HANA | Available | Available in all Azure public regions and sovereign regions, except for France Central and UG IOWA. |
-| MARS Agent/On premises | No | N/A |
+| MARS Agent (Preview) | Available in preview. <br><br> Not supported for vaults with Private Endpoint enabled. | Available in all Azure public regions. |
+| DPM/MABS | No | N/A |
| AFS (Azure file shares) | No | N/A |

## Resource health
backup Backup Windows With Mars Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-windows-with-mars-agent.md
Title: Back up Windows machines by using the MARS agent description: Use the Microsoft Azure Recovery Services (MARS) agent to back up Windows machines. Previously updated : 06/23/2023 Last updated : 08/18/2023
To run an on-demand backup, follow these steps:
[ ![Screenshot shows the Back up now option in Windows Server.](./media/backup-configure-vault/backup-now.png) ](./media/backup-configure-vault/backup-now.png#lightbox)
-1. If the MARS agent version is 2.0.9169.0 or newer, then you can set a custom retention date. In the **Retain Backup Till** section, choose a date from the calendar.
+1. If the MARS agent version is *2.0.9254.0 or newer*, you can select a *subset of the volumes that are backed up periodically* for an on-demand backup. Only the files and folders configured for periodic backup can be backed up on demand.
+
+ :::image type="content" source="./media/backup-configure-vault/select-subset-of-volumes-backed-up-periodically-for-mars-on-demand-backup.png" alt-text="Screenshot shows how to select a subset of volumes backed up periodically for on-demand backup.":::
+
+ If the MARS agent version is *2.0.9169.0 or newer*, set a custom retention date. In the **Retain Backup Till** section, choose a date from the calendar.
[ ![Screenshot shows how to use the calendar to customize a retention date.](./media/backup-configure-vault/mars-ondemand.png) ](./media/backup-configure-vault/mars-ondemand.png#lightbox)
backup Disk Backup Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/disk-backup-overview.md
Title: Overview of Azure Disk Backup description: Learn about the Azure Disk backup solution. Previously updated : 07/21/2023 Last updated : 08/17/2023
The retention period for a backup also follows the maximum limit of 450 snapshot
For example, if the scheduling frequency for backups is set as Daily, then you can set the retention period for backups at a maximum value of 450 days. Similarly, if the scheduling frequency for backups is set as Hourly with a 1-hour frequency, then you can set the retention for backups at a maximum value of 18 days.
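As a quick check of that arithmetic, here's a small illustrative sketch (not a product API; the numbers are the documented limits above):

```azurepowershell
# Maximum retention (in days) for a given backup frequency, bounded by the
# 450-snapshots-per-disk limit. For an hourly schedule: floor(450 / 24) = 18.
$maxSnapshots  = 450
$backupsPerDay = 24    # hourly backups with a 1-hour frequency
[math]::Floor($maxSnapshots / $backupsPerDay)    # returns 18 (days)
```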
+## Why do I see more snapshots than my retention policy?
+
+If the retention policy is set to *1*, you might find two snapshots. Keeping one extra snapshot ensures that at least one recent recovery point is always present in the vault, even if all subsequent backups fail. This is why two snapshots can be present.
+
+So, if the policy is set to *n* snapshots, you might at times find *n+1* snapshots. You might even find *n+1+2* snapshots if there's a delay in deleting recovery points whose retention period is over (garbage collection). This can happen, rarely, when:
+
+- You clean up snapshots that are past their retention period.
+- The garbage collector (GC) in the backend is under heavy load.
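+Putting those cases into numbers, a purely illustrative sketch of the snapshot counts you might observe:
+
+```azurepowershell
+# Illustrative only: visible snapshot counts for a retention policy of n.
+$n = 30
+"Steady state            : $($n + 1) snapshots"           # n retained + 1 safety copy
+"Delayed garbage cleanup : up to $($n + 1 + 2) snapshots" # expired points awaiting GC
+```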
## Pricing

Azure Backup uses [incremental snapshots](../virtual-machines/disks-incremental-snapshots.md) of the managed disk. Incremental snapshots are charged per GiB of the storage occupied by the delta changes since the last snapshot. For example, if you're using a managed disk with a provisioned size of 128 GiB, with 100 GiB used, the first incremental snapshot is billed only for the used size of 100 GiB. 20 GiB of data is added on the disk before you create the second snapshot. Now, the second incremental snapshot is billed for only 20 GiB.
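To make that billing arithmetic concrete, here's an illustrative sketch using the sizes from the example above (this is not a billing API):

```azurepowershell
# Incremental snapshots are billed for the delta since the previous snapshot,
# not for the provisioned disk size.
$provisionedGiB = 128
$usedGiB        = 100   # occupied at the time of the first snapshot
$deltaGiB       = 20    # data added before the second snapshot
"First snapshot billed for : $usedGiB GiB (not $provisionedGiB GiB)"
"Second snapshot billed for: $deltaGiB GiB"
```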
backup Encryption At Rest With Cmk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/encryption-at-rest-with-cmk.md
Title: Encryption of backup data using customer-managed keys description: Learn how Azure Backup allows you to encrypt your backup data using customer-managed keys (CMK). Previously updated : 08/02/2023 Last updated : 08/25/2023
# Encryption of backup data using customer-managed keys
-Azure Backup allows you to encrypt your backup data using customer-managed keys (CMK) instead of using platform-managed keys, which are enabled by default. Your keys encrypt the backup data must be stored in [Azure Key Vault](../key-vault/index.yml).
+Azure Backup allows you to encrypt your backup data using customer-managed keys (CMK) instead of platform-managed keys, which are enabled by default. Your keys that encrypt the backup data must be stored in [Azure Key Vault](../key-vault/index.yml).
-The encryption key used for encrypting backups may be different from the one used for the source. The data is protected using an AES 256 based data encryption key (DEK), which in turn, is protected using your key encryption keys (KEK). This provides you with full control over the data and the keys. To allow encryption, you must grant Recovery Services vault the permissions to access the encryption key in the Azure Key Vault. You can change the key when required.
+The encryption key used for encrypting backups may be different from the one used for the source. The data is protected using an AES 256-based data encryption key (DEK), which in turn, is protected using your key encryption keys (KEK). This provides you with full control over the data and the keys. To allow encryption, you must grant Recovery Services vault the permissions to access the encryption key in the Azure Key Vault. You can change the key when required.
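Conceptually, this is envelope encryption. The following sketch is illustrative only (it is not Azure Backup's implementation) and shows how the DEK and KEK relate:

```azurepowershell
# Illustrative envelope-encryption sketch (not Azure Backup's implementation):
# backup data is encrypted with an AES 256 data encryption key (DEK), and the
# DEK itself is "wrapped" by your key encryption key (KEK) held in Key Vault.
$dek = New-Object byte[] 32                                       # 256-bit DEK
[System.Security.Cryptography.RandomNumberGenerator]::Create().GetBytes($dek)
# data    -> encrypted with $dek (AES 256)
# $dek    -> wrapped with the KEK (Key Vault 'wrapKey' operation)
# restore -> unwrap the DEK with the KEK, then decrypt the data
```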
In this article, you'll learn how to:
In this article, you'll learn how to:
>Use Az module 5.3.0 or later to use customer managed keys for backups in the Recovery Services vault.

>[!Warning]
- >If you're using PowerShell for managing encryption keys for Backup, we don't recommend to update the keys from the portal. <br> If you update the key from the portal, you can't use PowerShell to update the encryption key further, till a PowerShell update to support the new model is available. However, you can continue updating the key from the Azure portal.
+ >If you're using PowerShell for managing encryption keys for Backup, we don't recommend updating the keys from the portal. <br> If you update the key from the portal, you can't use PowerShell to update the encryption key until a PowerShell update to support the new model is available. However, you can continue updating the key from the Azure portal.
If you haven't created and configured your Recovery Services vault, see [how to do so here](backup-create-rs-vault.md).
To configure a vault, perform the following actions in the given sequence to ac
3. Enable soft-delete and purge protection on Azure Key Vault.
-4. Assign the encryption key to the Recovery Services vault,
+4. Assign the encryption key to the Recovery Services vault.
### Enable managed identity for your Recovery Services vault
Choose a client:
# [Azure portal](#tab/portal)
-1. Go to your Recovery Services vault -> **Identity**
+1. Go to your Recovery Services vault -> **Identity**.
![Identity settings](media/encryption-at-rest-with-cmk/enable-system-assigned-managed-identity-for-vault.png)
Choose a client:
3. Change the **Status** to **On**.
-4. Click **Save** to enable the identity for the vault.
+4. Select **Save** to enable the identity for the vault.
An Object ID is generated, which is the system-assigned managed identity of the vault.
To assign the user-assigned managed identity for your Recovery Services vault, c
# [Azure portal](#tab/portal)
-1. Go to your Recovery Services vault -> **Identity**
+1. Go to your Recovery Services vault -> **Identity**.
![Assign user-assigned managed identity to the vault](media/encryption-at-rest-with-cmk/assign-user-assigned-managed-identity-to-vault.png)

2. Navigate to the **User assigned** tab.
-3. Click **+Add** to add a user-assigned managed identity.
+3. Select **+Add** to add a user-assigned managed identity.
4. In the **Add user assigned managed identity** blade that opens, select the subscription for your identity.
5. Select the identity from the list. You can also filter by the name of the identity or the resource group.
-6. Once done, click **Add** to finish assigning the identity.
+6. Once done, select **Add** to finish assigning the identity.
# [PowerShell](#tab/powershell)
Choose a client:
![Add Access Policies](./media/encryption-at-rest-with-cmk/access-policies.png)
-2. Under **Key Permissions**, select **Get**, **List**, **Unwrap Key** and **Wrap Key** operations. This specifies the actions on the key that will be permitted.
+2. Under **Key Permissions**, select **Get**, **List**, **Unwrap Key**, and **Wrap Key** operations. This specifies the actions on the key that will be permitted.
![Assign key permissions](./media/encryption-at-rest-with-cmk/key-permissions.png)
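If you prefer to script this step, here's a minimal PowerShell sketch (the vault and Key Vault names are placeholders; it assumes the Key Vault uses the access policy permission model and that the Az modules are installed):

```azurepowershell
# Grant the Recovery Services vault's managed identity the key permissions
# required for CMK encryption (placeholder names).
$vault = Get-AzRecoveryServicesVault -ResourceGroupName "MyResourceGroup" -Name "MyVault"
Set-AzKeyVaultAccessPolicy -VaultName "myKeyVault" `
    -ObjectId $vault.Identity.PrincipalId `
    -PermissionsToKeys get,list,wrapKey,unwrapKey
```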
You can do this from the Azure Key Vault UI as shown below. Alternatively, you c
Set-AzContext -SubscriptionId SubscriptionId
```
-3. Enable soft delete
+3. Enable soft delete.
```azurepowershell ($resource = Get-AzResource -ResourceId (Get-AzKeyVault -VaultName "AzureKeyVaultName").ResourceId).Properties | Add-Member -MemberType "NoteProperty" -Name "enableSoftDelete" -Value "true"
You can do this from the Azure Key Vault UI as shown below. Alternatively, you c
Set-AzResource -resourceid $resource.ResourceId -Properties $resource.Properties
```
-4. Enable purge protection
+4. Enable purge protection.
```azurepowershell ($resource = Get-AzResource -ResourceId (Get-AzKeyVault -VaultName "AzureKeyVaultName").ResourceId).Properties | Add-Member -MemberType "NoteProperty" -Name "enablePurgeProtection" -Value "true"
You can do this from the Azure Key Vault UI as shown below. Alternatively, you c
az account set --subscription "Subscription1"
```
-3. Enable soft delete
+3. Enable soft delete.
```azurecli az keyvault update --subscription {SUBSCRIPTION ID} -g {RESOURCE GROUP} -n {VAULT NAME} --enable-soft-delete true ```
-4. Enable purge protection
+4. Enable purge protection.
```azurecli az keyvault update --subscription {SUBSCRIPTION ID} -g {RESOURCE GROUP} -n {VAULT NAME} --enable-purge-protection true
You can do this from the Azure Key Vault UI as shown below. Alternatively, you c
>Before proceeding further, ensure the following:
>
>- All the steps mentioned above have been completed successfully:
-> - The Recovery Services vault's managed identity has been enabled, and has been assigned required permissions
-> - The Azure Key Vault has soft-delete and purge-protection enabled
->- The Recovery Services vault for which you want to enable CMK encryption **does not** have any items protected or registered to it
+> - The Recovery Services vault's managed identity has been enabled and has been assigned the required permissions.
+> - The Azure Key Vault has soft-delete and purge-protection enabled.
+>- The Recovery Services vault for which you want to enable CMK encryption **does not** have any items protected or registered to it.
Once the above are ensured, continue with selecting the encryption key for your vault.
To assign the key and follow the steps, choose a client:
# [Azure portal](#tab/portal)
-1. Go to your Recovery Services vault -> **Properties**
+1. Go to your Recovery Services vault -> **Properties**.
![Encryption settings](./media/encryption-at-rest-with-cmk/encryption-settings.png)
To assign the key and follow the steps, choose a client:
1. Enter the **Key URI** with which you want to encrypt the data in this Recovery Services vault. You also need to specify the subscription in which the Azure Key Vault (that contains this key) is present. This key URI can be obtained from the corresponding key in your Azure Key Vault. Ensure the key URI is copied correctly. It's recommended that you use the **Copy to clipboard** button provided with the key identifier.

>[!NOTE]
- >When specifying the encryption key using the full Key URI, the key will not be auto-rotated, and you need to perform key updates manually by specifying the new key when required. Alternatively, remove the Version component of the Key URI to get automatic rotation.
+ >When specifying the encryption key using the full Key URI, the key will not be autorotated, and you need to perform key updates manually by specifying the new key when required. Alternatively, remove the Version component of the Key URI to get automatic rotation.
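For example, the two URI forms differ only in the trailing version segment (the vault and key names below are hypothetical):

```azurepowershell
# Full Key URI - pins one key version, so no autorotation (hypothetical names):
$keyUri           = "https://contoso-kv.vault.azure.net/keys/backup-cmk/0123456789abcdef0123456789abcdef"
# Versionless Key URI - new enabled key versions are picked up automatically:
$keyUriAutoRotate = "https://contoso-kv.vault.azure.net/keys/backup-cmk"
```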
![Enter key URI](./media/encryption-at-rest-with-cmk/key-uri.png)

2. Browse and select the key from the Key Vault in the key picker pane.

>[!NOTE]
- >When specifying the encryption key using the key picker pane, the key will be auto-rotated whenever a new version for the key is enabled. [Learn more](#enable-auto-rotation-of-encryption-keys) on enabling auto-rotation of encryption keys.
+ >When specifying the encryption key using the key picker pane, the key will be autorotated whenever a new version for the key is enabled. [Learn more](#enable-autorotation-of-encryption-keys) on enabling autorotation of encryption keys.
![Select key from key vault](./media/encryption-at-rest-with-cmk/key-vault.png)
InfrastructureEncryptionState : Disabled
# [CLI](#tab/cli)
-Use the [az backup vault encryption update](/cli/azure/backup/vault/encryption#az-backup-vault-encryption-update) command to enable encryption using customer-managed keys, and to assign or update the encryption key to be used.
+Use the [az backup vault encryption update](/cli/azure/backup/vault/encryption#az-backup-vault-encryption-update) command to enable encryption using customer-managed keys and to assign or update the encryption key to be used.
Example:
InfrastructureEncryptionState : Disabled
```

>[!NOTE]
->This process remains the same when you wish to update or change the encryption key. If you wish to update and use a key from another Key Vault (different from the one that's being currently used), make sure that:
+>This process remains the same when you wish to update or change the encryption key. If you wish to update and use a key from another Key Vault (different from the one that's being currently used), ensure that:
>
->- The key vault is located in the same region as the Recovery Services vault
+>- The key vault is located in the same region as the Recovery Services vault.
>
->- The key vault has soft-delete and purge protection enabled
+>- The key vault has soft-delete and purge protection enabled.
> >- The Recovery Services vault has the required permissions to access the key Vault.
InfrastructureEncryptionState : Disabled
## Back up to a vault encrypted with customer-managed keys
-Before proceeding to configure protection, we strongly recommend you ensure the following checklist is adhered to. This is important since once an item has been configured to be backed up (or attempted to be configured) to a non-CMK encrypted vault, encryption using customer-managed keys can't be enabled on it and it will continue to use platform-managed keys.
+Before proceeding to configure protection, we strongly recommend you adhere to the following checklist. This is important since once an item has been configured to be backed up (or attempted to be configured) to a non-CMK encrypted vault, encryption using customer-managed keys can't be enabled on it and it will continue to use platform-managed keys.
>[!IMPORTANT]
> Before proceeding to configure protection, you must have **successfully** completed the following steps:
Before proceeding to configure protection, we strongly recommend you ensure the
>
>If all the above steps have been confirmed, only then proceed with configuring backup.
-The process to configure and perform backups to a Recovery Services vault encrypted with customer-managed keys is the same as to a vault that uses platform-managed keys, with **no changes to the experience**. This holds true for [backup of Azure VMs](./quick-backup-vm-portal.md) as well as backup of workloads running inside a VM (for example, [SAP HANA](./tutorial-backup-sap-hana-db.md), [SQL Server](./tutorial-sql-backup.md) databases).
+The process to configure and perform backups to a Recovery Services vault encrypted with customer-managed keys is the same as to a vault that uses platform-managed keys with **no changes to the experience**. This holds true for [backup of Azure VMs](./quick-backup-vm-portal.md) as well as backup of workloads running inside a VM (for example, [SAP HANA](./tutorial-backup-sap-hana-db.md), [SQL Server](./tutorial-sql-backup.md) databases).
## Restore data from backup
The process to configure and perform backups to a Recovery Services vault encryp
Data stored in the Recovery Services vault can be restored according to the steps described [here](./backup-azure-arm-restore-vms.md). When restoring from a Recovery Services vault encrypted using customer-managed keys, you can choose to encrypt the restored data with a Disk Encryption Set (DES).

>[!Note]
->The experience described in this section only applies to restore of data from CMK encrypted vaults. When you restore data from a vault that isn't using CMK encryption, the restored data would be encrypted using Platform Managed Keys. If you restore from an instant recovery snapshot, it would be encrypted using the mechanism used for encrypting the source disk.
+>The experience described in this section only applies when you restore data from CMK encrypted vaults. When you restore data from a vault that isn't using CMK encryption, the restored data would be encrypted using Platform Managed Keys. If you restore from an instant recovery snapshot, it would be encrypted using the mechanism used for encrypting the source disk.
#### Restore VM/disk
-1. When you recover disk / VM from a *Snapshot* recovery point, the restored data will be encrypted with the DES used for encrypting the source VM's disks.
+1. When you recover disk/VM from a *Snapshot* recovery point, the restored data will be encrypted with the DES used for encrypting the source VM's disks.
-1. When restoring disk / VM from a recovery point with Recovery Type as "Vault", you can choose to have the restored data encrypted using a DES, specified at the time of restore. Alternatively, you can choose to continue with the restore the data without specifying a DES, in which case it will be encrypted using Microsoft-managed keys.
+1. When restoring disk/VM from a recovery point with Recovery Type as **Vault**, you can choose to have the restored data encrypted using a DES specified at the time of restore. Alternatively, you can choose to continue restoring the data without specifying a DES, in which case the encryption setting on the VM will be applied.
-1. During Cross Region Restore, CMK (customer-managed keys) enabled Azure VMs, which aren't backed-up in a CMK enabled Recovery Services vault, is restored as non-CMK enabled VMs in the secondary region.
+1. During Cross Region Restore, CMK (customer-managed keys) enabled Azure VMs, which aren't backed up in a CMK enabled Recovery Services vault, are restored as non-CMK enabled VMs in the secondary region.
-You can encrypt the restored disk / VM after the restore is complete, regardless of the selection made while initiating the restore.
+You can encrypt the restored disk/VM after the restore is complete, regardless of the selection made while initiating the restore.
![Restore points](./media/encryption-at-rest-with-cmk/restore-points.png)
When your subscription is allow-listed, the **Backup Encryption** tab will displ
1. To specify the key to be used for encryption, select the appropriate option.
- You can provide the URI for the encryption key, or browse and select the key. When you specify the key using the **Select the Key Vault** option, auto-rotation of the encryption key will enable automatically. [Learn more on auto-rotation](#enable-auto-rotation-of-encryption-keys).
+ You can provide the URI for the encryption key, or browse and select the key. When you specify the key using the **Select the Key Vault** option, autorotation of the encryption key will enable automatically. [Learn more on autorotation](#enable-autorotation-of-encryption-keys).
1. Specify the user-assigned managed identity to manage encryption with customer-managed keys. Select **Select** to browse and select the required identity.

1. Proceed to add Tags (optional) and continue creating the vault.
-### Enable auto-rotation of encryption keys
+### Enable autorotation of encryption keys
When you specify the customer-managed key that must be used to encrypt backups, use the following methods to specify it:

- Enter the key URI
- Select from Key Vault
-Using the **Select from Key Vault** option helps to enable auto-rotation for the selected key. This eliminates the manual effort to update to the next version. However, using this option:
+Using the **Select from Key Vault** option helps to enable autorotation for the selected key. This eliminates the manual effort to update to the next version. However, using this option:
- Key version update may take up to an hour to take effect.
- When a new version of the key takes effect, the old version should also be available (in enabled state) for at least one subsequent backup job after the key update has taken effect.
+> [!NOTE]
+> When specifying the encryption key using the full Key URI, the key won't be autorotated, and you need to perform key updates manually by specifying the new key when required. To enable automatic rotation, remove the Version component of the Key URI.
+ ### Use Azure Policies to audit and enforce encryption with customer-managed keys (in preview)

Azure Backup allows you to use Azure Policies to audit and enforce encryption, using customer-managed keys, of data in the Recovery Services vault. Using the Azure Policies:

-- The audit policy can be used for auditing vaults with encryption using customer-managed keys that are enabled after 04/01/2021. For vaults with the CMK encryption enabled before this date, the policy may fail to apply or may show false negative results (that is, these vaults may be reported as non-compliant, despite having **CMK encryption** enabled).
+- The audit policy can be used for auditing vaults with encryption using customer-managed keys that are enabled after 04/01/2021. For vaults with the CMK encryption enabled before this date, the policy may fail to apply or may show false negative results (that is, these vaults may be reported as noncompliant despite having **CMK encryption** enabled).
- To use the audit policy for auditing vaults with **CMK encryption** enabled before 04/01/2021, use the Azure portal to update an encryption key. This helps to upgrade to the new model. If you don't want to change the encryption key, provide the same key again through the key URI or the key selection option. >[!Warning]
- >If you are using PowerShell for managing encryption keys for Backup, we do not recommend to update the keys from the portal.<br>If you update the key from the portal, you can't use PowerShell to update the encryption key further, till a PowerShell update to support the new model is available. However, you can continue updating the key from the Azure portal.
+ >If you're using PowerShell for managing encryption keys for Backup, we do not recommend to update the keys from the portal.<br>If you update the key from the portal, you can't use PowerShell to update the encryption key further, till a PowerShell update to support the new model is available. However, you can continue updating the key from the Azure portal.
## Frequently asked questions

### Can I encrypt an existing Backup vault with customer-managed keys?
-No, CMK encryption can be enabled for new vaults only. So the vault must never have had any items protected to it. In fact, no attempts to protect any items to the vault must be made before enabling encryption using customer-managed keys.
+No, CMK encryption can be enabled for new vaults only. So, the vault must never have had any items protected to it. In fact, no attempts to protect any items to the vault must be made before enabling encryption using customer-managed keys.
### I tried to protect an item to my vault, but it failed, and the vault still doesn't contain any items protected to it. Can I enable CMK encryption for this vault?
-No, the vault must haven't had any attempts to protect any items to it in the past.
+No, the vault must not have had any attempts to protect any items to it in the past.
### I have a vault that's using CMK encryption. Can I later revert to encryption using platform-managed keys even if I have backup items protected to the vault?
Using CMK encryption for Backup doesn't incur any additional costs to you. You m
## Next steps

-- [Overview of security features in Azure Backup](security-overview.md)
+[Overview of security features in Azure Backup](security-overview.md).
backup Install Mars Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/install-mars-agent.md
Title: Install the Microsoft Azure Recovery Services (MARS) agent description: Learn how to install the Microsoft Azure Recovery Services (MARS) agent to back up Windows machines.- Previously updated : 11/15/2022+ Last updated : 08/18/2023
To modify the storage replication type:
> You can't modify the storage replication type after the vault is set up and contains backup items. If you want to do this, you need to re-create the vault. >
+## Configure Recovery Services vault to save passphrase to Recovery Services vault (preview)
+
+Azure Backup using the Recovery Services agent (MARS) allows you to back up files, folders, and system state data to an Azure Recovery Services vault. This data is encrypted using a passphrase provided during the installation and registration of the MARS agent. This passphrase is required to retrieve and restore the backup data and needs to be saved in a secure external location, such as Azure Key Vault.
+
+We recommend that you create a Key Vault and grant your Recovery Services vault permissions to save the passphrase to the Key Vault. [Learn more](save-backup-passphrase-securely-in-azure-key-vault.md).
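+For example, a minimal PowerShell sketch that creates such a Key Vault with purge protection enabled up front (hypothetical names; assumes the Az.KeyVault module; soft-delete is enabled by default on new vaults):
+
+```azurepowershell
+# Create a Key Vault to hold MARS passphrases (placeholder names).
+New-AzKeyVault -VaultName "contoso-kv" -ResourceGroupName "MyResourceGroup" `
+    -Location "eastus" -EnablePurgeProtection
+```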
### Verify internet access

[!INCLUDE [Configuring network connectivity](../../includes/backup-network-connectivity.md)]
If you've already installed the agent on any machines, ensure you're running the
* Save the passphrase in a secure location. You need it to restore a backup.
* If you lose or forget the passphrase, Microsoft can't help you recover the backup data.
+ The MARS agent can automatically save the passphrase securely to Azure Key Vault. So, we recommend that you create a Key Vault and grant permissions to your Recovery Services vault to save the passphrase to the Key Vault before registering your first MARS agent to the vault. [Learn more](save-backup-passphrase-securely-in-azure-key-vault.md).
+
+ After granting the required permissions, you can save the passphrase to the Key Vault by copying the *Key Vault URI* from the Azure portal into the Register Server Wizard.
+ :::image type="content" source="./media/backup-configure-vault/encryption-settings-passphrase-to-encrypt-decrypt-backups.png" alt-text="Screenshot showing to specify a passphrase to be used to encrypt and decrypt backups for machines."::: 1. Select **Finish**. The agent is now installed, and your machine is registered to the vault. You're ready to configure and schedule your backup.
If you've already installed the agent on any machines, ensure you're running the
If you are running into issues during vault registration, see the [troubleshooting guide](backup-azure-mars-troubleshoot.md#invalid-vault-credentials-provided).

>[!Note]
- >We strongly recommend you save your passphrase in an alternate secure location, such as the Azure key vault. Microsoft can't recover the data without the passphrase. [Learn](../key-vault/secrets/quick-create-portal.md) how to store a secret in a key vault.
+ >We recommend that you save your passphrase in an alternate secure location, such as Azure Key Vault. Microsoft can't recover the data without the passphrase. [Learn](save-backup-passphrase-securely-in-azure-key-vault.md) how to store a secret in a Key Vault.
## Next steps
backup Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/policy-reference.md
Title: Built-in policy definitions for Azure Backup description: Lists Azure Policy built-in policy definitions for Azure Backup. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2023 Last updated : 08/30/2023
backup Restore All Files Volume Mars https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/restore-all-files-volume-mars.md
Title: Restore all files in a volume with MARS description: Learn how to restore all the files in a volume using the MARS Agent.- Previously updated : 01/17/2021+ Last updated : 08/14/2023
This article explains how to restore all backed-up files in an entire volume usi
- Restore all backed-up files in a volume to the same machine from which the backups were taken.
- Restore all backed-up files in a volume to an alternate machine.
+- If you have Cross Region Restore enabled in your vault, you can restore the backup data from the secondary region.
+- If you want to use Cross Region Restore to restore the backup data from the secondary region, you need to download the Secondary Region vault credential file from the Azure portal, and then provide the file to the MARS agent.
>[!TIP]
->The **Volume** option recovers all backed up data in a specified volume. This option provides faster transfer speeds (up to 40 MBps), and is recommended for recovering large-sized data or entire volumes.
+>The **Volume** option recovers all backed up data in a specified volume. This option provides faster transfer speeds (up to 40 MBps), and is recommended for recovering large-sized data or entire volumes.
>
->The **Individual files and folders option** allows for quick access to the recovery point data. It's suitable for recovering individual files, and is recommended for a total size of less than 80 GB. It offers transfer or copy speeds up to 6 MBps during recovery.
+>The **Individual files and folders** option allows for quick access to the recovery point data. It's suitable for recovering individual files, and is recommended for a total size of less than 80 GB. It offers transfer or copy speeds of up to 6 MBps during recovery.
## Volume level restore to the same machine
The following steps will help you recover all backed-up files in a volume:
![Getting started page](./media/restore-all-files-volume-mars/same-machine-instant-restore.png)
+ If you have enabled Cross Region Restore (preview) and want to restore from the secondary region, select **Secondary Region**. Otherwise, select **Primary Region**.
+
+ :::image type="content" source="./media/backup-azure-restore-windows-server/select-source-region-for-restore.png" alt-text="Screenshot shows the selection of the source region of recovery point.":::
1. On the **Select Recovery Mode** page, choose **Volume** > **Next**.

   ![Select recovery mode](./media/restore-all-files-volume-mars/select-recovery-mode.png)
These steps include the following terminology:
1. On the **Getting Started** page, select **Another server**.
- ![Screenshot of Recover Data Wizard Getting Started page (restore to alternate machine)](./media/backup-azure-restore-windows-server/alternatemachine_gettingstarted_instantrestore.png)
+ ![Screenshot of Recover Data Wizard Getting Started page (restore to alternate machine).](./media/backup-azure-restore-windows-server/alternatemachine_gettingstarted_instantrestore.png)
+
+1. Provide the vault credential file that corresponds to the sample vault.
+
+ If the vault credential file is invalid (or expired), [download a new vault credential file from the sample vault](backup-azure-file-folder-backup-faq.yml#where-can-i-download-the-vault-credentials-file-) in the Azure portal. After you provide a valid vault credential, the name of the corresponding backup vault appears.
-1. Provide the vault credential file that corresponds to the sample vault, and select **Next**.
+ >[!Note]
+ >If you want to use Cross Region Restore to restore the backup data from the secondary region, you need to download the *Secondary Region vault credential file* from the Azure portal, and then provide the file to the MARS agent.
+ >
+ > :::image type="content" source="./media/backup-azure-restore-windows-server/pass-vault-credentials-in-mars-agent.png" alt-text="Screenshot shows the secondary vault credentials passed in MARS agent.":::
- If the vault credential file is invalid (or expired), [download a new vault credential file from the sample vault](backup-azure-file-folder-backup-faq.yml#where-can-i-download-the-vault-credentials-file-) in the Azure portal. After you provide a valid vault credential, the name of the corresponding backup vault appears.
+ Select **Next** to continue.
1. On the **Select Backup Server** page, select the source machine from the list of displayed machines, and provide the passphrase. Then select **Next**.
backup Restore Azure Encrypted Virtual Machines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/restore-azure-encrypted-virtual-machines.md
Encrypted VMs can only be restored by restoring the VM disk and creating a virtu
Follow below steps to restore encrypted VMs:
-### **Step 1**: Restore the VM disk
+### Step 1: Restore the VM disk
1. In **Restore configuration** > **Create new** > **Restore Type**, select **Restore disks**.
1. In **Resource group**, select an existing resource group for the restored disks, or create a new one with a globally unique name.
When your virtual machine uses unmanaged disks, they're restored as blobs to the
> [!NOTE]
> After you restore the VM disk, you can manually swap the OS disk of the original VM with the restored VM disk without re-creating it. [Learn more](https://azure.microsoft.com/blog/os-disk-swap-managed-disks/).
-### **Step 2**: Recreate the virtual machine instance
+### Step 2: Recreate the virtual machine instance
Do one of the following actions:
Do one of the following actions:
>While deploying the template, verify the storage account containers and the public/private settings. - Create a new VM from the restored disks using PowerShell. [Learn more](backup-azure-vms-automation.md#create-a-vm-from-restored-disks).
-### **Step 3**: Restore an encrypted Linux VM
+### Step 3: Restore an encrypted Linux VM
Reinstall the ADE extension so the data disks are open and mounted.
backup Save Backup Passphrase Securely In Azure Key Vault https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/save-backup-passphrase-securely-in-azure-key-vault.md
+
+ Title: Save and manage MARS agent passphrase securely in Azure Key Vault (preview)
+description: Learn how to save the MARS agent passphrase securely in Azure Key Vault and retrieve it during restore.
+ Last updated : 08/18/2023+++++++
+# Save and manage MARS agent passphrase securely in Azure Key Vault (preview)
+
+Azure Backup using the Recovery Services agent (MARS) allows you to back up files/folders and system state data to an Azure Recovery Services vault. This data is encrypted using a passphrase you provide during the installation and registration of the MARS agent. This passphrase is required to retrieve and restore the backup data and needs to be saved in a secure external location.
++
+>[!Important]
+>If this passphrase is lost, Microsoft won't be able to retrieve the backup data stored in the Recovery Services vault. We recommend that you store this passphrase in a secure external location, such as Azure Key Vault.
+
+Now, you can save your encryption passphrase securely in Azure Key Vault as a Secret, from the MARS console during installation for new machines, and by changing the passphrase for existing machines. To allow saving the passphrase to Azure Key Vault, you must grant the Recovery Services vault the permissions to create a Secret in the Azure Key Vault.
+
+## Before you start
+
+- [Create a Recovery Services vault](backup-create-recovery-services-vault.md) if you don't have one.
+- Use a single Azure Key Vault to store all your passphrases. [Create a Key Vault](../key-vault/general/quick-create-portal.md) if you don't have one.
+ - [Azure Key Vault pricing](https://azure.microsoft.com/pricing/details/key-vault/) is applicable when you create a new Azure Key Vault to store your passphrase.
+ - After you create the Key Vault, to protect against accidental or malicious deletion of passphrase, [ensure that soft-delete and purge protection is turned on](../key-vault/general/soft-delete-overview.md).
+- This feature is supported only in Azure public regions with MARS agent version *2.0.9254.0* or above.
+
+## Configure the Recovery Services vault to store passphrase to Azure Key Vault
+
+Before you can save your passphrase to Azure Key Vault, configure your Recovery Services vault and Azure Key Vault.
+
+To configure a vault, follow these steps in the given sequence to achieve the intended results. Each action is discussed in detail in the sections below:
+
+1. Enable system-assigned managed identity for the Recovery Services vault.
+2. Assign permissions to the Recovery Services vault to save the passphrase as a Secret in Azure Key Vault.
+3. Enable soft-delete and purge protection on the Azure Key Vault.
+
+>[!Note]
+>- Once you enable this feature, you must not disable the managed identity (even temporarily). Disabling the managed identity may lead to inconsistent behavior.
+>- User-assigned managed identity is currently not supported for saving passphrase in Azure Key Vault.
++
+### Enable system-assigned managed identity for the Recovery Services vault
+
+**Choose a client**:
+
+# [Azure portal](#tab/azure-portal)
+
+Follow these steps:
+
+1. Go to your *Recovery Services vault* > **Identity**.
+
+ :::image type="content" source="./media/save-backup-passphrase-securely-in-azure-key-vault/recovery-services-vault-identity.png" alt-text="Screenshot shows how to go to Identity in Recovery Services vault." lightbox="./media/save-backup-passphrase-securely-in-azure-key-vault/recovery-services-vault-identity.png":::
+
+2. Select the **System assigned** tab.
+3. Change the **Status** to **On**.
+4. Select **Save** to enable the identity for the vault.
+
+An Object ID is generated, which is the system-assigned managed identity of the vault.
++
+# [PowerShell](#tab/powershell)
+
+To enable system-assigned managed identity for the Recovery Services vault, use the [Update-AzRecoveryServicesVault](/powershell/module/az.recoveryservices/update-azrecoveryservicesvault) cmdlet.
+
+**Example**
+
+```azurepowershell
+Update-AzRecoveryServicesVault -IdentityType SystemAssigned -ResourceGroupName "TestRG" -Name "TestVault"
+$vault = Get-AzRecoveryServicesVault -ResourceGroupName "TestRG" -Name "TestVault"
+$vault.Identity | fl
+
+```
+
+```Output
+PrincipalId : xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
+TenantId : xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
+Type : SystemAssigned
+
+```
++
+# [CLI](#tab/cli)
+
+To enable system-assigned managed identity for the Recovery Services vault, use the `az backup vault identity assign` command.
+
+**Example**
+
+```azurecli
+az backup vault identity assign --system-assigned --resource-group MyResourceGroup --name MyVault
+
+```
+
+```Output
+PrincipalId : xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
+TenantId : xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
+Type : SystemAssigned
+
+```
+++
+### Assign permissions to save the passphrase in Azure Key Vault
+
+Depending on the permission model configured for your Key Vault (role-based access control or access policy), refer to the corresponding section below.
+
+#### Enable permissions using role-based access permission model for Key Vault
+
+**Choose a client:**
+
+# [Azure portal](#tab/azure-portal)
+
+To assign the permissions, follow these steps:
+
+1. Go to your *Azure Key Vault* > **Settings** > **Access Configuration** to ensure that the permission model is **RBAC**.
+
+2. Select **Access control (IAM)** > **+Add** to add role assignment.
+
+3. The Recovery Services vault identity requires the **Set** permission on secrets to create and add the passphrase as a Secret to the Key Vault.
+
+ You can select a *built-in role*, such as **Key Vault Secrets Officer**, that has this permission (along with other permissions not required for this feature), or [create a custom role](../key-vault/general/rbac-guide.md?tabs=azurepowershell#creating-custom-roles) with only the **Set** permission on secrets.
+
+ Select **Details** to view the permissions granted by the role, and ensure that the **Set** permission on secrets is included.
+
+4. Select **Next** to proceed to select Members for assignment.
+
+5. Select **Managed identity**, and then select **+ Select members**. Choose the **Subscription** of the target Recovery Services vault, and then select **Recovery Services vault** under **System-assigned managed identity**.
+
+ Search and select the *name of the Recovery Services vault*.
+
+6. Select **Next**, review the assignment, and select **Review + assign**.
+
+7. Go to **Access control (IAM)** in the Key Vault, select **Role assignments** and ensure that the Recovery Services vault is listed.
+
+
+# [PowerShell](#tab/powershell)
+
+To assign the permissions, run the following cmdlet:
+
+```azurepowershell
+#Find the application ID for your Recovery Services vault
+Get-AzADServicePrincipal -SearchString <principalName>
+#Identify a role with the Set permission on secrets, such as Key Vault Secrets Officer
+Get-AzRoleDefinition | Format-Table -Property Name, IsCustom, Id
+Get-AzRoleDefinition -Name <roleName>
+#Assign the role to the Recovery Services vault identity by its service principal
+#application ID (for example, 8ee5237a-816b-4a72-b605-446970e5f156)
+New-AzRoleAssignment -RoleDefinitionName 'Key Vault Secrets Officer' -ApplicationId <applicationId> -Scope /subscriptions/{subscriptionid}/resourcegroups/{resource-group-name}/providers/Microsoft.KeyVault/vaults/{key-vault-name}
+
+```
+
+# [CLI](#tab/cli)
+
+To assign the permissions, run the following command:
+
+```azurecli
+#Find the application ID for your Recovery Services vault
+az ad sp list --all --filter "displayname eq '<my recovery vault name>' and servicePrincipalType eq 'ManagedIdentity'"
+#Identify a role with the Set permission on secrets, such as Key Vault Secrets Officer
+az role definition list --query "[].{name:name, roleType:roleType, roleName:roleName}" --output tsv
+az role definition list --name "{roleName}"
+#Assign the role to the Recovery Services vault identity by its application ID
+#(for example, "55555555-5555-5555-5555-555555555555")
+az role assignment create --role "Key Vault Secrets Officer" --assignee "<application id>" --scope /subscriptions/{subscriptionid}/resourcegroups/{resource-group-name}/providers/Microsoft.KeyVault/vaults/{key-vault-name}
+
+```
+++
+#### Enable permissions using Access Policy permission model for Key Vault
+
+**Choose a client**:
+
+# [Azure portal](#tab/azure-portal)
+
+Follow these steps:
+
+1. Go to your *Azure Key Vault* > **Access Policies** > **Access policies**, and then select **+ Create**.
+
+ :::image type="content" source="./media/save-backup-passphrase-securely-in-azure-key-vault/create-access-policies.png" alt-text="Screenshot shows how to start creating a Key Vault." lightbox="./media/save-backup-passphrase-securely-in-azure-key-vault/create-access-policies.png":::
+
+2. Under **Secret Permissions**, select the **Set** operation.
+
+ This specifies the allowed actions on the Secret.
+
+ :::image type="content" source="./media/save-backup-passphrase-securely-in-azure-key-vault/set-secret-permissions.png" alt-text="Screenshot shows how to start setting permissions." lightbox="./media/save-backup-passphrase-securely-in-azure-key-vault/set-secret-permissions.png":::
+
+3. Go to **Select Principal** and search for your *vault* in the search box using its name or managed identity.
+
+ Select the *vault* from the search result and choose **Select**.
+
+ :::image type="content" source="./media/save-backup-passphrase-securely-in-azure-key-vault/assign-principal.png" alt-text="Screenshot shows the assignment of permission to a selected vault." lightbox="./media/save-backup-passphrase-securely-in-azure-key-vault/assign-principal.png":::
+
+4. Go to **Review + create**, ensure that **Set permission** is available and **Principal** is the correct *Recovery Services vault*, and then select **Create**.
+
+ :::image type="content" source="./media/save-backup-passphrase-securely-in-azure-key-vault/review-and-create-access-policy.png" alt-text="Screenshot shows the verification of the assigned Recovery Services vault and create the Key Vault." lightbox="./media/save-backup-passphrase-securely-in-azure-key-vault/review-and-create-access-policy.png":::
+
+ :::image type="content" source="./media/save-backup-passphrase-securely-in-azure-key-vault/check-access-policies.png" alt-text="Screenshot shows how to verify the access present." lightbox="./media/save-backup-passphrase-securely-in-azure-key-vault/check-access-policies.png":::
++
+# [PowerShell](#tab/powershell)
+
+To get the Principal ID of the Recovery Services vault, use the [Get-AzADServicePrincipal](/powershell/module/az.resources/get-azadserviceprincipal) cmdlet. Then use this ID in the [Set-AzKeyVaultAccessPolicy](/powershell/module/az.keyvault/set-azkeyvaultaccesspolicy) cmdlet to set an access policy for the Key Vault.
+
+**Example**
+
+```azurepowershell
+$sp = Get-AzADServicePrincipal -DisplayName MyVault
+Set-AzKeyVaultAccessPolicy -VaultName myKeyVault -ObjectId $sp.Id -PermissionsToSecrets set
+
+```
+
+# [CLI](#tab/cli)
+
+To get the principal ID of the Recovery Services vault, use the `az ad sp list` command. Then use this ID in the `az keyvault set-policy` command to set an access policy for the Key vault.
+
+**Example**
+
+```azurecli
+az ad sp list --display-name MyVault
+az keyvault set-policy --name myKeyVault --object-id <object-id> --secret-permissions set
+
+```
++++
+### Enable soft-delete and purge protection on Azure Key Vault
+
+You need to enable soft-delete and purge protection on your Azure Key Vault that stores your encryption key.
+
+**Choose a client**:
+
+# [Azure portal](#tab/azure-portal)
+
+You can enable soft-delete and purge protection from the Azure Key Vault.
+
+Alternatively, you can set these properties while creating the Key Vault. [Learn more](../key-vault/general/soft-delete-overview.md) about these Key Vault properties.
++
+# [PowerShell](#tab/powershell)
++
+1. Sign in to your Azure account.
+
+ ```azurepowershell
+ Login-AzAccount
+ ```
+
+2. Select the *subscription* that contains your vault.
+
+ ```azurepowershell
+ Set-AzContext -SubscriptionId SubscriptionId
+ ```
+
+3. Enable *soft-delete*.
+
+ ```azurepowershell
+ ($resource = Get-AzResource -ResourceId (Get-AzKeyVault -VaultName "AzureKeyVaultName").ResourceId).Properties | Add-Member -MemberType "NoteProperty" -Name "enableSoftDelete" -Value "true"
+ Set-AzResource -resourceid $resource.ResourceId -Properties $resource.Properties
+ ```
+
+4. Enable *purge protection*.
+
+ ```azurepowershell
+ ($resource = Get-AzResource -ResourceId (Get-AzKeyVault -VaultName "AzureKeyVaultName").ResourceId).Properties | Add-Member -MemberType "NoteProperty" -Name "enablePurgeProtection" -Value "true"
+ Set-AzResource -resourceid $resource.ResourceId -Properties $resource.Properties
+
+ ```
+
+# [CLI](#tab/cli)
+
+1. Sign in to your Azure Account.
+
+ ```azurecli
+ az login
+ ```
+
+2. Select the subscription that contains your vault.
+
+ ```azurecli
+ az account set --subscription "Subscription1"
+ ```
+
+3. Enable soft delete.
+
+ ```azurecli
+ az keyvault update --subscription {SUBSCRIPTION ID} -g {RESOURCE GROUP} -n {VAULT NAME} --enable-soft-delete true
+ ```
+
+4. Enable purge protection.
+
+ ```azurecli
+ az keyvault update --subscription {SUBSCRIPTION ID} -g {RESOURCE GROUP} -n {VAULT NAME} --enable-purge-protection true
+ ```
+++++
+## Save passphrase to Azure Key Vault for a new MARS installation
+
+Before proceeding to install the MARS agent, ensure that you have [configured the Recovery Services vault to store passphrase to Azure Key Vault](#configure-the-recovery-services-vault-to-store-passphrase-to-azure-key-vault) and you have successfully:
+
+1. Created your Recovery Services vault.
+2. Enabled the Recovery Services vault's system-assigned managed identity.
+3. Assigned permissions to your Recovery Services vault to create a Secret in your Key Vault.
+4. Enabled soft delete and purge protection for your Key Vault.
+
+1. To install the MARS agent on a machine, download the MARS installer from the Azure portal, and then [use the installation wizard](install-mars-agent.md).
+
+2. After providing the *Recovery Services vault credentials* during registration, in the **Encryption Setting**, select the option to save the passphrase to Azure Key Vault.
+
+ :::image type="content" source="./media/save-backup-passphrase-securely-in-azure-key-vault/save-passphrase.png" alt-text="Screenshot shows the option to save the passphrase to Azure Key Vault to be selected." lightbox="./media/save-backup-passphrase-securely-in-azure-key-vault/save-passphrase.png":::
+
+3. Enter your *passphrase* or select **Generate Passphrase**.
+4. In the *Azure portal*, open your *Key Vault*, and then copy the *Key Vault URI*.
+
+ :::image type="content" source="./media/save-backup-passphrase-securely-in-azure-key-vault/copy-key-vault-url.png" alt-text="Screenshot shows how to copy the Key Vault URI." lightbox="./media/save-backup-passphrase-securely-in-azure-key-vault/copy-key-vault-url.png":::
+
+5. Paste the *Key Vault URI* in the *MARS console*, and then select **Register**.
+
+ If you encounter an error, [check the troubleshooting section](#troubleshoot-common-scenarios) for more information.
+
+6. Once the registration succeeds, an option to *copy the identifier of the Secret* appears, and the passphrase is *not* saved to a file locally.
+
+ :::image type="content" source="./media/save-backup-passphrase-securely-in-azure-key-vault/server-registration-success.png" alt-text="Screenshot shows the option to copy the identifier to the Secret gets creates." lightbox="./media/save-backup-passphrase-securely-in-azure-key-vault/server-registration-success.png":::
+
+ If you change the passphrase in the future for this MARS agent, a new version of the Secret will be added with the latest passphrase.
+
+You can automate this process by using the new KeyVaultUri option in the `Set-OBMachineSetting` cmdlet in the [installation script](./scripts/register-microsoft-azure-recovery-services-agent.md).
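+A minimal sketch of that automation (the passphrase and Key Vault URI below are placeholders; depending on the agent version, a security PIN may also be required):
+
+```azurepowershell
+# Set the encryption passphrase and save it to Key Vault via the KeyVaultUri
+# option (run on the protected machine; placeholder values).
+$passphrase = ConvertTo-SecureString -String "<your-passphrase>" -AsPlainText -Force
+Set-OBMachineSetting -EncryptionPassphrase $passphrase -KeyVaultUri "https://contoso-kv.vault.azure.net/"
+```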
+
+## Save passphrase to Azure Key Vault for an existing MARS installation
+
+If you have an existing MARS agent installation and want to save your passphrase to Azure Key Vault, [update your agent](upgrade-mars-agent.md) to version *2.0.9254.0* or above and perform a change passphrase operation.
+
+After updating your MARS agent, ensure that you have [configured the Recovery Services vault to store passphrase to Azure Key Vault](#configure-the-recovery-services-vault-to-store-passphrase-to-azure-key-vault) and you have successfully:
+
+1. Created your Recovery Services vault.
+2. Enabled the Recovery Services vault's system-assigned managed identity.
+3. Assigned permissions to your Recovery Services vault to create a Secret in your Key Vault.
+4. Enabled soft delete and purge protection for your Key Vault.
+
+To save the passphrase to Key Vault:
+
+1. Open the *MARS agent console*.
+
+ You should see a banner asking you to select a link to save the passphrase to Azure Key Vault.
+
+ Alternatively, select **Change Properties** > **Change Passphrase** to proceed.
+
+ :::image type="content" source="./media/save-backup-passphrase-securely-in-azure-key-vault/save-passphrase-key-vault.png" alt-text="Screenshot shows how to start changing passphrase for an existing MARS installation." lightbox="./media/save-backup-passphrase-securely-in-azure-key-vault/save-passphrase-key-vault.png":::
+
+2. In the **Change Properties** dialog box, the option to *save passphrase to Key Vault by providing a Key Vault URI* appears.
+
+    > [!NOTE]
+    > If the machine is already configured to save the passphrase to Key Vault, the Key Vault URI will be populated in the text box automatically.
+
+ :::image type="content" source="./media/save-backup-passphrase-securely-in-azure-key-vault/enter-key-vault-url.png" alt-text="Screenshot shows the option to save passphrase to Key Vault by providing a Key Vault URI gets generated." lightbox="./media/save-backup-passphrase-securely-in-azure-key-vault/enter-key-vault-url.png":::
+
+3. Open the *Azure portal*, open your *Key Vault*, and then *copy the Key Vault URI*.
+
+ :::image type="content" source="./media/save-backup-passphrase-securely-in-azure-key-vault/copy-key-vault-url.png" alt-text="Screenshot shows how to copy the Key Vault URI." lightbox="./media/save-backup-passphrase-securely-in-azure-key-vault/copy-key-vault-url.png":::
+
+4. *Paste the Key Vault URI* in the *MARS console*, and then select **OK**.
+
+ If you encounter an error, [check the troubleshooting section](#troubleshoot-common-scenarios) for more information.
+
+5. Once the change passphrase operation succeeds, an option to *copy the identifier to the Secret* appears, and the passphrase is NOT saved to a file locally.
+
+ :::image type="content" source="./media/save-backup-passphrase-securely-in-azure-key-vault/passphrase-saved-to-key-vault.png" alt-text="Screenshot shows an option to copy the identifier to the Secret gets created." lightbox="./media/save-backup-passphrase-securely-in-azure-key-vault/passphrase-saved-to-key-vault.png":::
+
+ If you change the passphrase in the future for this MARS agent, a new version of the *Secret* will be added with the latest passphrase.
+
+You can automate this step by using the new `KeyVaultUri` option of the [Set-OBMachineSetting](/powershell/module/msonlinebackup/set-obmachinesetting?view=msonlinebackup-ps&preserve-view=true) cmdlet.
+
+## Retrieve passphrase from Azure Key Vault for a machine
+
+If your machine becomes unavailable and you need to restore backup data from the Recovery Services vault via [alternate location restore](restore-all-files-volume-mars.md#volume-level-restore-to-an-alternate-machine), you need the machine's passphrase to proceed.
+
+The passphrase is saved to Azure Key Vault as a Secret. One Secret is created per machine, and a new version is added to the Secret when the passphrase for the machine is changed. The Secret is named `AzBackup-<machine fully qualified name>-<vault name>`.
+
+To locate the machine's passphrase:
+
+1. In the *Azure portal*, open the *Key Vault used to save the passphrase for the machine*.
+
+    We recommend that you use one Key Vault to save all your passphrases.
+
+2. Select **Secrets** and search for the secret named `AzBackup-<machine name>-<vaultname>`.
+
+ :::image type="content" source="./media/save-backup-passphrase-securely-in-azure-key-vault/locate-passphrase.png" alt-text="Screenshot shows bow to check for the secret name." lightbox="./media/save-backup-passphrase-securely-in-azure-key-vault/locate-passphrase.png":::
+
+3. Select the **Secret**, open the latest version and *copy the value of the Secret*.
+
+ This is the passphrase of the machine to be used during recovery.
+
+ :::image type="content" source="./media/save-backup-passphrase-securely-in-azure-key-vault/copy-passphrase-from-secret.png" alt-text="Screenshot shows selection of the secret." lightbox="./media/save-backup-passphrase-securely-in-azure-key-vault/copy-passphrase-from-secret.png":::
+
+ If you have a large number of Secrets in the Key Vault, use the Key Vault CLI to list and search for the secret.
+
+```azurecli
+az keyvault secret list --vault-name 'myvaultname' | jq '.[] | select(.name|test("AzBackup-<myvmname>"))'
+
+```
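+
+Once you've located the secret, you can also read its latest value from the command line. A minimal PowerShell sketch, using placeholder vault and secret names (`-AsPlainText` requires a recent Az.KeyVault module):
+
+```powershell
+# Sketch: read the latest version of the machine's passphrase secret.
+# The vault and secret names below are placeholder values.
+Get-AzKeyVaultSecret -VaultName "<myvaultname>" -Name "AzBackup-<machine name>-<vaultname>" -AsPlainText
+```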
+
+## Troubleshoot common scenarios
+
+This section lists commonly encountered errors when saving the passphrase to Azure Key Vault.
+
+### System identity isn't configured - 391224
+
+**Cause**: This error occurs if the Recovery Services vault doesn't have a system-assigned managed identity configured.
+
+**Recommended action**: Ensure that system-assigned managed identity is configured correctly for the Recovery Services vault as per the [prerequisites](#before-you-start).
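+
+If you prefer to enable the identity from the command line instead of the portal, a minimal PowerShell sketch (the resource group and vault names below are placeholders):
+
+```powershell
+# Sketch: enable the system-assigned managed identity on a Recovery Services vault.
+# Resource group and vault names are placeholder values.
+Update-AzRecoveryServicesVault -ResourceGroupName "<resource-group>" -Name "<vault-name>" -IdentityType SystemAssigned
+```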
+
+### Permissions aren't configured - 391225
+
+**Cause**: The Recovery Services vault has a system-assigned managed identity, but it doesn't have **Set permission** to create a Secret in the target Key Vault.
+
+**Recommended action**:
+
+1. Ensure that the vault credential used corresponds to the intended Recovery Services vault.
+2. Ensure that the Key Vault URI corresponds to the intended Key Vault.
+3. Ensure that the Recovery Services vault name is listed under **Key Vault** > **Access policies** > **Application**, with **Secret permissions** set to **Set**.
+
+ :::image type="content" source="./media/save-backup-passphrase-securely-in-azure-key-vault/check-secret-permissions-is-set.png" alt-text="Screenshot shows the Recovery Services vault name is listed under Key Vault." lightbox="./media/save-backup-passphrase-securely-in-azure-key-vault/check-secret-permissions-is-set.png":::
+
+ If it's not listed, [configure the permission again](#assign-permissions-to-save-the-passphrase-in-azure-key-vault).
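+
+If the permission is missing, you can also reapply it from PowerShell. A minimal sketch, where the Key Vault name and the object ID of the Recovery Services vault's system-assigned identity are placeholders:
+
+```powershell
+# Sketch: grant the Recovery Services vault's managed identity the Set permission on secrets.
+# The vault name and object ID below are placeholder values.
+Set-AzKeyVaultAccessPolicy -VaultName "<key-vault-name>" -ObjectId "<rsv-identity-object-id>" -PermissionsToSecrets Set
+```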
+
+### Azure Key Vault URI is incorrect - 100272
+
+**Cause**: The Key Vault URI entered isn't in the right format.
+
+**Recommended action**: Ensure that you have entered a Key Vault URI copied from the Azure portal. For example, `https://myvault.vault.azure.net/`.
+
+
+### Registration is incomplete
+
+**Cause**: You didn't complete the MARS registration by registering the passphrase. You won't be able to configure backups until you complete the registration.
+
+**Recommended action**: Select the warning message and complete the registration.
++
+
backup Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Backup description: Lists Azure Policy Regulatory Compliance controls available for Azure Backup. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/03/2023 Last updated : 08/25/2023
backup Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/whats-new.md
Title: What's new in Azure Backup description: Learn about new features in Azure Backup. Previously updated : 07/14/2023 Last updated : 08/30/2023
You can learn more about the new releases by bookmarking this page or by [subscr
## Updates summary
+- August 2023
+ - [Save your MARS backup passphrase securely to Azure Key Vault (preview)](#save-your-mars-backup-passphrase-securely-to-azure-key-vault-preview)
+ - [Cross Region Restore for MARS Agent (preview)](#cross-region-restore-for-mars-agent-preview)
- July 2023 - [SAP HANA System Replication database backup support is now generally available](#sap-hana-system-replication-database-backup-support-is-now-generally-available) - [Cross Region Restore for PostgreSQL (preview)](#cross-region-restore-for-postgresql-preview)
You can learn more about the new releases by bookmarking this page or by [subscr
- [Backup for Azure Blobs (in preview)](#backup-for-azure-blobs-in-preview)
+## Save your MARS backup passphrase securely to Azure Key Vault (preview)
+
+Azure Backup now enables you to save the MARS passphrase to Azure Key Vault automatically from the MARS console during registration or when changing the passphrase.
+
+The MARS agent from Azure Backup requires a passphrase provided by the user to encrypt the backups sent to and stored on Azure Recovery Services Vault. This passphrase is not shared with Microsoft and needs to be saved in a secure location to ensure that the backups can be retrieved if the server backed up with MARS goes down.
+
+For more information, see [Save and manage MARS agent passphrase securely in Azure Key Vault](save-backup-passphrase-securely-in-azure-key-vault.md).
+
+## Cross Region Restore for MARS Agent (preview)
+
+You can now restore data from the secondary region for MARS Agent backups using Cross Region Restore on Recovery Services vaults with Geo-redundant storage (GRS) replication. You can use this capability to do recovery drills from the secondary region for audit or compliance. If disasters cause partial or complete unavailability of the primary region, you can directly access the backup data from the secondary region.
+
+For more information, see [Cross Region Restore for MARS (preview)](about-restore-microsoft-azure-recovery-services.md#cross-region-restore-preview).
+
## SAP HANA System Replication database backup support is now generally available

Azure Backup now supports backup of HANA database with HANA System Replication. Now, the log backups from the new primary node are accepted immediately, thus providing continuous automatic database protection,
baremetal-infrastructure Solution Design https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/baremetal-infrastructure/workloads/nc2-on-azure/solution-design.md
The following table describes the network topologies supported by each network f
|Connectivity over Active/Passive VPN gateways| Yes | |Connectivity over Active/Active VPN gateways| No | |Connectivity over Active/Active Zone Redundant gateways| No |
-|Transit connectivity via vWAN for Spoke Delegated VNETS| No |
+|Transit connectivity via vWAN for Spoke Delegated VNETS| Yes |
|On-premises connectivity to Delegated subnet via vWAN attached SD-WAN| No| |On-premises connectivity via Secured HUB(Az Firewall NVA) | No| |Connectivity from UVMs on NC2 nodes to Azure resources|Yes|
bastion Bastion Connect Vm Rdp Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/bastion-connect-vm-rdp-linux.md
- Title: 'Connect to a Linux VM using RDP'-
-description: Learn how to use Azure Bastion to connect to Linux VM using RDP.
--- Previously updated : 04/26/2023----
-# Create an RDP connection to a Linux VM using Azure Bastion
-
-This article shows you how to securely and seamlessly create an RDP connection to your Linux VMs located in an Azure virtual network directly through the Azure portal. When you use Azure Bastion, your VMs don't require a client, agent, or additional software. You can also [connect to a Linux VM using SSH](bastion-connect-vm-ssh-linux.md).
-
-Azure Bastion provides secure connectivity to all of the VMs in the virtual network in which it's provisioned. Using Azure Bastion protects your virtual machines from exposing RDP/SSH ports to the outside world, while still providing secure access using RDP/SSH. For more information, see [What is Azure Bastion?](bastion-overview.md)
-
-## Prerequisites
-
-Before you begin, verify that you've met the following criteria:
-
-* Make sure that you have set up an Azure Bastion host for the virtual network in which the VM resides. For more information, see [Create an Azure Bastion host](tutorial-create-host-portal.md). Once the Bastion service is provisioned and deployed in your virtual network, you can use it to connect to any VM in this virtual network.
-
-* To use RDP with a Linux virtual machine, you must also ensure that you have xrdp installed and configured on the Linux VM. To learn how to do this, see [Use xrdp with Linux](../virtual-machines/linux/use-remote-desktop.md).
-
-* This configuration isn't available for the **Basic** SKU. To use this feature, [Upgrade the SKU](upgrade-sku.md) to the Standard SKU tier.
-
-* You must use username/password authentication.
-
-### Required roles
-
-In order to make a connection, the following roles are required:
-
-* Reader role on the virtual machine
-* Reader role on the NIC with private IP of the virtual machine
-* Reader role on the Azure Bastion resource
-* Reader role on the virtual network of the target virtual machine (if the Bastion deployment is in a peered virtual network).
-
-### Ports
-
-To connect to the Linux VM via RDP, you must have the following ports open on your VM:
-
-* Inbound port: RDP (3389) *or*
-* Inbound port: Custom value (you'll then need to specify this custom port when you connect to the VM via Azure Bastion)
-
-## <a name="rdp"></a>Connect
--
-## Next steps
-
-Read the [Bastion FAQ](bastion-faq.md) for more information.
bastion Bastion Connect Vm Ssh Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/bastion-connect-vm-ssh-linux.md
# Create an SSH connection to a Linux VM using Azure Bastion
-This article shows you how to securely and seamlessly create an SSH connection to your Linux VMs located in an Azure virtual network directly through the Azure portal. When you use Azure Bastion, your VMs don't require a client, agent, or additional software. You can also connect to a Linux VM using RDP. For information, see [Create an RDP connection to a Linux VM](bastion-connect-vm-rdp-linux.md).
+This article shows you how to securely and seamlessly create an SSH connection to your Linux VMs located in an Azure virtual network directly through the Azure portal. When you use Azure Bastion, your VMs don't require a client, agent, or additional software.
Azure Bastion provides secure connectivity to all of the VMs in the virtual network in which it's provisioned. Using Azure Bastion protects your virtual machines from exposing RDP/SSH ports to the outside world, while still providing secure access using RDP/SSH. For more information, see the [What is Azure Bastion?](bastion-overview.md) overview article.
bastion Bastion Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/bastion-faq.md
description: Learn about frequently asked questions for Azure Bastion.
Previously updated : 08/08/2023 Last updated : 08/16/2023 # Azure Bastion FAQ
No. You don't need to install an agent or any software on your browser or your A
See [About VM connections and features](vm-about.md) for supported features.
+### <a name="shareable-links-passwords"></a>Is Reset Password available for local users connecting via shareable link?
+
+No. Some organizations have company policies that require a password reset when a user logs into a local account for the first time. When using shareable links, the user can't change the password, even though a "Reset Password" button may appear.
+ ### <a name="audio"></a>Is remote audio available for VMs? Yes. See [About VM connections and features](vm-about.md#audio).
This may be due to the Private DNS zone for privatelink.azure.com linked to the
## Next steps
-For more information, see [What is Azure Bastion](bastion-overview.md).
+For more information, see [What is Azure Bastion](bastion-overview.md).
bastion Connect Ip Address https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/connect-ip-address.md
description: Learn how to connect to your virtual machines using a specified pri
Previously updated : 06/26/2023 Last updated : 08/23/2023
IP-based connection lets you connect to your on-premises, non-Azure, and Azure v
**Limitations**
-IP-based connection won't work with force tunneling over VPN, or when a default route is advertised over an ExpressRoute circuit. Azure Bastion requires access to the Internet and force tunneling, or the default route advertisement will result in traffic blackholing.
+* IP-based connection won't work with force tunneling over VPN, or when a default route is advertised over an ExpressRoute circuit. Azure Bastion requires access to the Internet, and force tunneling or the default route advertisement will result in traffic blackholing.
+
+* Azure Active Directory authentication and custom ports and protocols aren't currently supported when connecting to a VM via native client.
## Prerequisites
Before you begin these steps, verify that you have the following environment set
## Connect to VM - native client
-You can connect to VMs using a specified IP address with native client via SSH, RDP, or tunnelling. Note that this feature does not support Azure Active Directory authentication or custom port and protocol at the moment. To learn more about configuring native client support, see [Configure Bastion native client support](native-client.md). Use the following commands as examples:
-
- **RDP:**
-
- ```azurecli
- az network bastion rdp --name "<BastionName>" --resource-group "<ResourceGroupName>" --target-ip-address "<VMIPAddress>
- ```
-
- **SSH:**
-
- ```azurecli
- az network bastion ssh --name "<BastionName>" --resource-group "<ResourceGroupName>" --target-ip-address "<VMIPAddress>" --auth-type "ssh-key" --username "<Username>" --ssh-key "<Filepath>"
- ```
-
- **Tunnel:**
-
- ```azurecli
- az network bastion tunnel --name "<BastionName>" --resource-group "<ResourceGroupName>" --target-ip-address "<VMIPAddress>" --resource-port "<TargetVMPort>" --port "<LocalMachinePort>"
- ```
+You can connect to VMs using a specified IP address with native client via SSH, RDP, or tunneling. To learn more about configuring native client support, see [Configure Bastion native client support](native-client.md).
+
+> [!NOTE]
+> This feature does not currently support Azure Active Directory authentication or custom port and protocol.
+
+Use the following commands as examples:
+
+**RDP:**
+
+```azurecli
+az network bastion rdp --name "<BastionName>" --resource-group "<ResourceGroupName>" --target-ip-address "<VMIPAddress>"
+```
+
+**SSH:**
+
+```azurecli
+az network bastion ssh --name "<BastionName>" --resource-group "<ResourceGroupName>" --target-ip-address "<VMIPAddress>" --auth-type "ssh-key" --username "<Username>" --ssh-key "<Filepath>"
+```
+
+**Tunnel:**
+```azurecli
+az network bastion tunnel --name "<BastionName>" --resource-group "<ResourceGroupName>" --target-ip-address "<VMIPAddress>" --resource-port "<TargetVMPort>" --port "<LocalMachinePort>"
+```
## Next steps
batch Batch Automatic Scaling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-automatic-scaling.md
Title: Autoscale compute nodes in an Azure Batch pool description: Enable automatic scaling on an Azure Batch cloud pool to dynamically adjust the number of compute nodes in the pool. Previously updated : 05/26/2023 Last updated : 08/23/2023
pendingTaskSamples = pendingTaskSamplePercent < 70 ? startingNumberOfVMs : avg($
$TargetDedicatedNodes=min(maxNumberofVMs, pendingTaskSamples); $NodeDeallocationOption = taskcompletion; ```
+> [!IMPORTANT]
+> Currently, the Batch service has limitations with the resolution of pending tasks. When a task is added to a job, it's also added to an internal queue used by the Batch service for scheduling. If the task is deleted before it can be scheduled, the task might persist within the queue, causing it to still be counted in `$PendingTasks`. This deleted task will eventually be cleared from the queue when Batch gets a chance to pull tasks from the queue to schedule with idle nodes in the Batch pool.
#### Preempted nodes
batch Batch Docker Container Workloads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-docker-container-workloads.md
You should be familiar with container concepts and how to create a Batch pool an
> - Batch Python SDK version 14.0.0 > - Batch Java SDK version 11.0.0 > - Batch Node.js SDK version 11.0.0
-> the `containerConfiguration` requires `Type` property to be passed and the supported values are: `ContainerType.DockerCompatible` and `ContainerType.CriCompatible`.
+
+Currently, the `containerConfiguration` requires the `Type` property to be passed, and the supported values are `ContainerType.DockerCompatible` and `ContainerType.CriCompatible`.
Keep in mind the following limitations:
batch Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/policy-reference.md
Title: Built-in policy definitions for Azure Batch description: Lists Azure Policy built-in policy definitions for Azure Batch. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2023 Last updated : 08/30/2023
batch Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Batch description: Lists Azure Policy Regulatory Compliance controls available for Azure Batch. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/03/2023 Last updated : 08/25/2023
batch Virtual File Mount https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/virtual-file-mount.md
description: Learn how to mount different kinds of virtual file systems on Batch
ms.devlang: csharp Previously updated : 04/28/2023 Last updated : 08/22/2023 # Mount a virtual file system on a Batch pool
Mounting the file system to the pool makes accessing data easier and more effici
Also, you can choose the underlying file system to meet performance, throughout, and input/output operations per second (IOPS) requirements. You can independently scale the file system based on the number of compute nodes that concurrently access the data.
-For example, you could use an [Avere vFXT](/azure/avere-vfxt/avere-vfxt-overview) distributed in-memory cache to support large movie-scale renders with thousands of concurrent render nodes that access on-premises source data. Or, for data that's already in cloud-based blob storage, you can use [BlobFuse](/azure/storage/blobs/storage-how-to-mount-container-linux) to mount the data as a local file system. BlobFuse is available only on Linux nodes except Ubuntu 22.04, but [Azure Files](/azure/storage/files/storage-files-introduction) provides a similar workflow and is available on both Windows and Linux.
+For example, you could use an [Avere vFXT](/azure/avere-vfxt/avere-vfxt-overview) distributed in-memory cache to support large movie-scale renders with thousands of concurrent render nodes that access on-premises source data. Or, for data that's already in cloud-based blob storage, you can use [BlobFuse](/azure/storage/blobs/storage-how-to-mount-container-linux) to mount the data as a local file system. [Azure Files](/azure/storage/files/storage-files-introduction) provides a similar workflow to that of BlobFuse and is available on both Windows and Linux.
## Supported configurations
Batch supports the following virtual file system types for node agents that are
| OS Type | Azure Files share | Azure Blob container | NFS mount | CIFS mount | ||||||
-| Linux | :heavy_check_mark: | :heavy_check_mark:* | :heavy_check_mark: | :heavy_check_mark: |
+| Linux | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| Windows | :heavy_check_mark: | :x: | :x: | :x: |
-\*Azure Blob container isn't supported on Ubuntu 22.04.
- > [!NOTE] > Mounting a virtual file system isn't supported on Batch pools created before August 8, 2019.
When you use virtual file mounts with Batch pools in a virtual network, keep the
- **Azure Blob containers** require TCP port 443 to be open for traffic to and from the `storage` service tag. Virtual machines (VMs) must have access to `https://packages.microsoft.com` to download the `blobfuse` and `gpg` packages. Depending on your configuration, you might need access to other URLs. - **Network File System (NFS)** requires access to port 2049 by default. Your configuration might have other requirements. VMs must have access to the appropriate package manager to download the `nfs-common` (for Debian or Ubuntu) or `nfs-utils` (for CentOS) packages. The URL might vary based on your OS version. Depending on your configuration, you might also need access to other URLs.
-
+ Mounting Azure Blob or Azure Files through NFS might have more networking requirements. For example, your compute nodes might need to use the same virtual network subnet as the storage account. - **Common Internet File System (CIFS)** requires access to TCP port 445. VMs must have access to the appropriate package manager to download the `cifs-utils` package. The URL might vary based on your OS version.
You can use [Azure PowerShell](/powershell/) to mount an Azure Files share on a
``` 1. Get the context for your Batch account. Replace the `<batch-account-name>` placeholder with your Batch account name.
-
+ ```powershell-interactive $context = Get-AzBatchAccount -AccountName <batch-account-name> ```
You can use [Azure PowerShell](/powershell/) to mount an Azure Files share on a
```powershell-interactive $fileShareConfig = New-Object -TypeName "Microsoft.Azure.Commands.Batch.Models.PSAzureFileShareConfiguration" -ArgumentList @("<storage-account-name>", "https://<storage-account-name>.file.core.windows.net/batchfileshare1", "S", "<storage-account-key>")
-
+ $mountConfig = New-Object -TypeName "Microsoft.Azure.Commands.Batch.Models.PSMountConfiguration" -ArgumentList @($fileShareConfig)
-
+ $imageReference = New-Object -TypeName "Microsoft.Azure.Commands.Batch.Models.PSImageReference" -ArgumentList @("WindowsServer", "MicrosoftWindowsServer", "2016-Datacenter", "latest")
-
+ $configuration = New-Object -TypeName "Microsoft.Azure.Commands.Batch.Models.PSVirtualMachineConfiguration" -ArgumentList @($imageReference, "batch.node.windows amd64")
-
+ New-AzBatchPool -Id "<pool-name>" -VirtualMachineSize "STANDARD_D2_V2" -VirtualMachineConfiguration $configuration -TargetDedicatedComputeNodes 1 -MountConfiguration @($mountConfig) -BatchContext $context ```
cmdkey /add:"<storage-account-name>.file.core.windows.net" /user:"Azure\<storage
```powershell-interactive $fileShareConfig = New-Object -TypeName "Microsoft.Azure.Commands.Batch.Models.PSAzureFileShareConfiguration" -ArgumentList @("<storage-account-name>", https://<storage-account-name>.file.core.windows.net/<file-share-name>, "S", "<storage-account-key>", "-o vers=3.0,dir_mode=0777,file_mode=0777,sec=ntlmssp")
-
+ $mountConfig = New-Object -TypeName "Microsoft.Azure.Commands.Batch.Models.PSMountConfiguration" -ArgumentList @($fileShareConfig)
-
+ $imageReference = New-Object -TypeName "Microsoft.Azure.Commands.Batch.Models.PSImageReference" -ArgumentList @("0001-com-ubuntu-server-focal", "canonical", "20_04-lts", "latest")
-
+ $configuration = New-Object -TypeName "Microsoft.Azure.Commands.Batch.Models.PSVirtualMachineConfiguration" -ArgumentList @($imageReference, "batch.node.ubuntu 20.04")
-
+ New-AzBatchPool -Id "<pool-name>" -VirtualMachineSize "Standard_DS1_v2" -VirtualMachineConfiguration $configuration -TargetDedicatedComputeNodes 1 -MountConfiguration @($mountConfig) -BatchContext $Context ```
To get log files for debugging, you can use the [OutputFiles](batch-task-output-
### Investigate mounting errors
-If you get the following error when you try to mount an Azure file share to a Batch node, you can RDP or SSH to the node to check the related log files.
+You can RDP or SSH to the node to check the log files pertaining to filesystem mounts.
+The following example error message is possible when you try to mount an Azure file share to a Batch node:
```output Mount Configuration Error | An error was encountered while configuring specified mount(s)
Message: System error (out of memory, cannot fork, no more loop devices)
MountConfigurationPath: S ```
-If you receive this error, RDP or SSH to the node to check the related log files. The Batch agent implements mounting differently on Windows and Linux. On Linux, Batch installs the package `cifs-utils`. Then, Batch issues the mount command. On Windows, Batch uses `cmdkey` to add your Batch account credentials. Then, Batch issues the mount command through `net use`. For example:
+If you receive this error, RDP or SSH to the node to check the related log files. The Batch agent implements mounting differently on Windows and Linux for Azure file shares. On Linux, Batch installs the package `cifs-utils`. Then, Batch issues the mount command. On Windows, Batch uses `cmdkey` to add your Batch account credentials. Then, Batch issues the mount command through `net use`. For example:
```powershell-interactive net use S: \\<storage-account-name>.file.core.windows.net\<fileshare> /u:AZURE\<storage-account-name> <storage-account-key>
net use S: \\<storage-account-name>.file.core.windows.net\<fileshare> /u:AZURE\<
```output CMDKEY: Credential added successfully. System error 86 has occurred.
-
+ The specified network password is not correct. ```
If you can't use RDP or SSH to check the log files on the node, you can upload t
1. When the upload completes, download the files and open *agent-debug.log*.
-1. Review the error messages, for example:
+1. Review the error messages, for example:
```output ..20210322T113107.448Z.00000000-0000-0000-0000-000000000000.ERROR.agent.mount.filesystems.basefilesystem.basefilesystem.py.run_cmd_persist_output_async.59.2912.MainThread.3580.Mount command failed with exit code: 2, output:
-
+ CMDKEY: Credential added successfully.
-
+ System error 86 has occurred.
-
+ The specified network password is not correct. ```
If you can't diagnose or fix mounting errors, you can use PowerShell to mount th
```powershell-interactive $imageReference = New-Object -TypeName "Microsoft.Azure.Commands.Batch.Models.PSImageReference" -ArgumentList @("WindowsServer", "MicrosoftWindowsServer", "2016-Datacenter", "latest")
-
+ $configuration = New-Object -TypeName "Microsoft.Azure.Commands.Batch.Models.PSVirtualMachineConfiguration" -ArgumentList @($imageReference, "batch.node.windows amd64")
-
+ New-AzBatchPool -Id "<pool-name>" -VirtualMachineSize "STANDARD_D2_V2" -VirtualMachineConfiguration $configuration -TargetDedicatedComputeNodes 1 -BatchContext $Context ```
If you can't diagnose or fix mounting errors, you can use PowerShell to mount th
```powershell-interactive $imageReference = New-Object -TypeName "Microsoft.Azure.Commands.Batch.Models.PSImageReference" -ArgumentList @("0001-com-ubuntu-server-focal", "canonical", "20_04-lts", "latest")
-
+ $configuration = New-Object -TypeName "Microsoft.Azure.Commands.Batch.Models.PSVirtualMachineConfiguration" -ArgumentList @($imageReference, "batch.node.ubuntu 20.04")
-
+ New-AzBatchPool -Id "<pool-name>" -VirtualMachineSize "Standard_DS1_v2" -VirtualMachineConfiguration $configuration -TargetDedicatedComputeNodes 1 -BatchContext $Context ```
cdn Cdn Add To Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-add-to-web-app.md
Open a browser and sign in to the [Azure portal](https://portal.azure.com).
### Dynamic site acceleration optimization If you want to optimize your CDN endpoint for dynamic site acceleration (DSA), you should use the [CDN portal](cdn-create-new-endpoint.md) to create your profile and endpoint. With [DSA optimization](cdn-dynamic-site-acceleration.md), the performance of web pages with dynamic content is measurably improved. For instructions about how to optimize a CDN endpoint for DSA from the CDN portal, see [CDN endpoint configuration to accelerate delivery of dynamic files](cdn-dynamic-site-acceleration.md#cdn-endpoint-configuration-to-accelerate-delivery-of-dynamic-files).
-Otherwise, if you don't want to optimize your new endpoint, you can use the web app portal to create it by following the steps in the next section. For **Azure CDN from Verizon** profiles, you can't change the optimization of a CDN endpoint after it has been created.
+Otherwise, if you don't want to optimize your new endpoint, you can use the web app portal to create it by following the steps in the next section. For **Azure CDN from Edgio** profiles, you can't change the optimization of a CDN endpoint after it has been created.
## Create a CDN profile and endpoint
Azure creates the profile and endpoint. The new endpoint appears in the **Endpoi
Because it takes time for the registration to propagate, the endpoint isn't immediately available for use: - For **Azure CDN Standard from Microsoft** profiles, propagation usually completes in 10 minutes. - For **Azure CDN Standard from Akamai** profiles, propagation usually completes within one minute.
- - For **Azure CDN Standard from Verizon** and **Azure CDN Premium from Verizon** profiles, propagation usually completes within 90 minutes.
+ - For **Azure CDN Standard from Edgio** and **Azure CDN Premium from Edgio** profiles, propagation usually completes within 90 minutes.
The sample app has an *https://docsupdatetracker.net/index.html* file and *css*, *img*, and *js* folders that contain other static assets. The content paths for all of these files are the same at the CDN endpoint. For example, both of the following URLs access the *bootstrap.css* file in the *css* folder:
cdn Cdn Analyze Usage Patterns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-analyze-usage-patterns.md
Title: Core Reports from Verizon | Microsoft Docs
-description: 'Learn how to access and view Verizon Core Reports via the Manage portal for Verizon profiles.'
+ Title: Core Reports from Edgio | Microsoft Docs
+description: 'Learn how to access and view Edgio Core Reports via the Manage portal for Edgio profiles.'
documentationcenter: ''
Last updated 01/23/2017
-# Core Reports from Verizon
+# Core Reports from Edgio
[!INCLUDE [cdn-verizon-only](../../includes/cdn-verizon-only.md)]
-By using Verizon Core Reports via the Manage portal for Verizon profiles, you can view usage patterns for your CDN with the following reports:
+By using Edgio Core Reports via the Manage portal for Edgio profiles, you can view usage patterns for your CDN with the following reports:
* Bandwidth * Data Transferred
By using Verizon Core Reports via the Manage portal for Verizon profiles, you ca
* Cache Hit Ratio * IPV4/IPV6 Data Transferred
-## Accessing Verizon Core Reports
+<a name='accessing-verizon-core-reports'></a>
+
+## Accessing Edgio Core Reports
1. From the CDN profile blade, click the **Manage** button. ![CDN profile Manage button](./media/cdn-reports/cdn-manage-btn.png)
This report shows the traffic usage distribution in IPV4 vs IPV6.
## Considerations Reports can only be generated within the last 18 months.-
cdn Cdn App Dev Net https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-app-dev-net.md
private static void PromptPurgeCdnEndpoint(CdnManagementClient cdn)
``` > [!NOTE]
-> In the example previously, the string `/*` denotes that I want to purge everything in the root of the endpoint path. This is equivalent to checking **Purge All** in the Azure portal's "purge" dialog. In the `CreateCdnProfile` method, I created our profile as an **Azure CDN from Verizon** profile using the code `Sku = new Sku(SkuName.StandardVerizon)`, so this will be successful. However, **Azure CDN from Akamai** profiles do not support **Purge All**, so if I was using an Akamai profile for this tutorial, I would need to include specific paths to purge.
+> In the previous example, the string `/*` denotes that I want to purge everything in the root of the endpoint path. This is equivalent to checking **Purge All** in the Azure portal's "purge" dialog. In the `CreateCdnProfile` method, I created our profile as an **Azure CDN from Edgio** profile using the code `Sku = new Sku(SkuName.StandardVerizon)`, so this will be successful. However, **Azure CDN from Akamai** profiles do not support **Purge All**, so if I were using an Akamai profile for this tutorial, I would need to include specific paths to purge.
> >
cdn Cdn Azure Diagnostic Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-azure-diagnostic-logs.md
Here's how you can use the tool:
## Log data delays
-The following table shows log data delays for **Azure CDN Standard from Microsoft**, **Azure CDN Standard from Akamai**, and **Azure CDN Standard/Premium from Verizon**.
+The following table shows log data delays for **Azure CDN Standard from Microsoft**, **Azure CDN Standard from Akamai**, and **Azure CDN Standard/Premium from Edgio**.
-Microsoft log data delays | Verizon log data delays | Akamai log data delays
+Microsoft log data delays | Edgio log data delays | Akamai log data delays
| | Delayed by 1 hour. | Delayed by 1 hour and can take up to 2 hours to start appearing after endpoint propagation completion. | Delayed by 24 hours; if it was created more than 24 hours ago, it takes up to 2 hours to start appearing. If it was recently created, it can take up to 25 hours for the logs to start appearing.
The following table shows a list of metrics available in the core analytics logs
* **Azure CDN Standard from Microsoft** * **Azure CDN Standard from Akamai**
-* **Azure CDN Standard/Premium from Verizon**
+* **Azure CDN Standard/Premium from Edgio**
Not all metrics are available from all providers, although such differences are minimal. The table also displays whether a given metric is available from a provider. The metrics are available for only those CDN endpoints that have traffic on them.
-|Metric | Description | Microsoft | Verizon | Akamai |
+|Metric | Description | Microsoft | Edgio | Akamai |
||-|--||--| | RequestCountTotal | Total number of request hits during this period. | Yes | Yes |Yes | | RequestCountHttpStatus2xx | Count of all requests that resulted in a 2xx HTTP code (for example, 200, 202). | Yes | Yes |Yes |
cdn Cdn Billing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-billing.md
If you're using Azure Blob storage as the origin for your content, you also incu
- Transfers in GB: The amount of data transferred to fill the CDN caches. > [!NOTE]
-> Starting October 2019, If you are using Azure CDN from Microsoft, the cost of data transfer from Origins hosted in Azure to CDN PoPs is free of charge. Azure CDN from Verizon and Azure CDN from Akamai are subject to the rates described as followed.
+> Starting October 2019, if you're using Azure CDN from Microsoft, the cost of data transfer from origins hosted in Azure to CDN PoPs is free of charge. Azure CDN from Edgio and Azure CDN from Akamai are subject to the rates described as follows.
For more information about Azure Storage billing, see [Plan and manage costs for Azure Storage](../storage/common/storage-plan-manage-costs.md).
cdn Cdn Caching Rules Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-caching-rules-tutorial.md
# Tutorial: Set Azure CDN caching rules > [!NOTE]
-> Caching rules are available only for **Azure CDN Standard from Verizon** and **Azure CDN Standard from Akamai** profiles. For **Azure CDN from Microsoft** profiles, you must use the [Standard rules engine](cdn-standard-rules-engine-reference.md) For **Azure CDN Premium from Verizon** profiles, you must use the [Verizon Premium rules engine](./cdn-verizon-premium-rules-engine.md) in the **Manage** portal for similar functionality.
+> Caching rules are available only for **Azure CDN Standard from Edgio** and **Azure CDN Standard from Akamai** profiles. For **Azure CDN from Microsoft** profiles, you must use the [Standard rules engine](cdn-standard-rules-engine-reference.md). For **Azure CDN Premium from Edgio** profiles, you must use the [Edgio Premium rules engine](./cdn-verizon-premium-rules-engine.md) in the **Manage** portal for similar functionality.
This tutorial describes how you can use Azure Content Delivery Network (CDN) caching rules to set or modify default cache expiration behavior both globally and with custom conditions, such as a URL path and file extension. Azure CDN provides two types of caching rules:
In this tutorial, you learned how to:
Advance to the next article to learn how to configure additional caching rule settings. > [!div class="nextstepaction"]
-> [Control Azure CDN caching behavior with caching rules](cdn-caching-rules.md)
+> [Control Azure CDN caching behavior with caching rules](cdn-caching-rules.md)
cdn Cdn Caching Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-caching-rules.md
This article describes how you can use content delivery network (CDN) caching rules to set or modify default cache expiration behavior. These caching rules can either be global or with custom conditions, such as a URL path and file extension. > [!NOTE]
-> Caching rules are available only for **Azure CDN Standard from Verizon** and **Azure CDN Standard from Akamai** profiles. For **Azure CDN from Microsoft** profiles, you must use the [Standard rules engine](cdn-standard-rules-engine-reference.md) For **Azure CDN Premium from Verizon** profiles, you must use the [Verizon Premium rules engine](./cdn-verizon-premium-rules-engine.md) in the **Manage** portal for similar functionality.
+> Caching rules are available only for **Azure CDN Standard from Edgio** and **Azure CDN Standard from Akamai** profiles. For **Azure CDN from Microsoft** profiles, you must use the [Standard rules engine](cdn-standard-rules-engine-reference.md). For **Azure CDN Premium from Edgio** profiles, you must use the [Edgio Premium rules engine](./cdn-verizon-premium-rules-engine.md) in the **Manage** portal for similar functionality.
Azure Content Delivery Network (CDN) offers two ways to control how your files get cached:
When you set these rules, a request for _&lt;endpoint hostname&gt;_.azureedge.ne
> > Azure CDN configuration changes can take some time to propagate through the network: > - For **Azure CDN Standard from Akamai** profiles, propagation usually completes within one minute.
-> - For **Azure CDN Standard from Verizon** profiles, propagation usually completes in 10 minutes.
+> - For **Azure CDN Standard from Edgio** profiles, propagation usually completes in 10 minutes.
> ## See also
cdn Cdn Change Provider https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-change-provider.md
The purpose of this article is to share best practices when migrating from one C
**Azure Front Door:** released two new tiers (Standard and Premium) on March 29, 2022, which are the next generation of the Front Door service. It combines the capabilities of Azure Front Door (classic), Microsoft CDN (classic), and Web Application Firewall, with features such as private link integration, enhancements to the rules engine, diagnostics, and one-stop secure application acceleration for Azure customers. For more information about Azure Front Door, see [Front Door overview](../frontdoor/front-door-overview.md).
-**Azure CDN Standard/Premium from Verizon:** is an alternative to Azure Front Door for your general CDN and media solutions. Azure CDN from Verizon is optimized for large media streaming workloads. It has unique CDN features such as cache warmup, log delivery services, and reporting features.
+**Azure CDN Standard/Premium from Edgio:** is an alternative to Azure Front Door for your general CDN and media solutions. Azure CDN from Edgio is optimized for large media streaming workloads. It has unique CDN features such as cache warmup, log delivery services, and reporting features.
**Azure CDN Standard from Akamai (Retiring October 31, 2023)**: In May of 2016, Azure partnered with Akamai Technologies Inc to offer Azure CDN Standard from Akamai. Recently, Azure and Akamai Technologies Inc have decided not to renew this partnership. As a result, starting October 31, 2023, Azure CDN Standard from Akamai will no longer be supported.
Create a small-scale proof of concept testing environment with your potential re
* Define success criteria: * Cost - does the new CDN profile meet your cost requirements? * Performance - does the new CDN profile meet the performance requirements of your workload?
-* Create a new profile - for example, Azure CDN with Verizon.
+* Create a new profile - for example, Azure CDN with Edgio.
* Configure your new profile with similar configuration settings as your existing profile. * Fine tune caching and compression configuration settings to meet your requirements.
For more information, see [Failover CDN endpoints with Traffic Manager](cdn-traf
## Next Steps * Create an [Azure Front Door](../frontdoor/create-front-door-portal.md) profile.
-* Create an [Azure CDN from Verizon](cdn-create-endpoint-how-to.md) profile.
+* Create an [Azure CDN from Edgio](cdn-create-endpoint-how-to.md) profile.
cdn Cdn China Delivery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-china-delivery.md
Azure CDN global and Azure CDN China have the following features:
- Performs content delivery outside of China
- - Four pricing tiers: Microsoft standard, Verizon standard, Verizon premium, and Akamai standard
+ - Four pricing tiers: Microsoft standard, Edgio standard, Edgio premium, and Akamai standard
- [Documentation](./index.yml)
To learn more about Azure CDN China, see:
- [Use the Azure Content Delivery Network](https://docs.azure.cn/en-us/cdn/cdn-how-to-use) -- [Azure service availability in China](/azure/china/concepts-service-availability)
+- [Azure service availability in China](/azure/china/concepts-service-availability)
cdn Cdn Cors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-cors.md
On Azure CDN Standard from Microsoft, you can create a rule in the [Standard rul
On **Azure CDN Standard from Akamai**, the only mechanism to allow for multiple origins without the use of the wildcard origin is to use [query string caching](cdn-query-string.md). Enable the query string setting for the CDN endpoint and then use a unique query string for requests from each allowed domain. Doing so results in the CDN caching a separate object for each unique query string. This approach isn't ideal, however, as it results in multiple copies of the same file cached on the CDN.
-### Azure CDN Premium from Verizon
+<a name='azure-cdn-premium-from-verizon'></a>
-Using the Verizon Premium rules engine, you need to [create a rule](./cdn-verizon-premium-rules-engine.md) to check the **Origin** header on the request. If it's a valid origin, your rule sets the **Access-Control-Allow-Origin** header with the origin provided in the request. If the origin specified in the **Origin** header isn't allowed, your rule should omit the **Access-Control-Allow-Origin** header, which causes the browser to reject the request.
+### Azure CDN Premium from Edgio
+
+Using the Edgio Premium rules engine, you need to [create a rule](./cdn-verizon-premium-rules-engine.md) to check the **Origin** header on the request. If it's a valid origin, your rule sets the **Access-Control-Allow-Origin** header with the origin provided in the request. If the origin specified in the **Origin** header isn't allowed, your rule should omit the **Access-Control-Allow-Origin** header, which causes the browser to reject the request.
There are two ways to resolve this problem with the Premium rules engine. In both cases, the **Access-Control-Allow-Origin** header from the file's origin server is ignored and the CDN's rules engine completely manages the allowed CORS origins.
https?:\/\/(www\.contoso\.com|contoso\.com|www\.microsoft\.com|microsoft.com\.co
``` > [!TIP]
-> **Azure CDN Premium from Verizon** uses [Perl Compatible Regular Expressions](https://pcre.org/) as its engine for regular expressions. You can use a tool like [Regular Expressions 101](https://regex101.com/) to validate your regular expression. Note that the "/" character is valid in regular expressions and doesn't need to be escaped, however, escaping that character is considered a best practice and is expected by some regex validators.
+> **Azure CDN Premium from Edgio** uses [Perl Compatible Regular Expressions](https://pcre.org/) as its engine for regular expressions. You can use a tool like [Regular Expressions 101](https://regex101.com/) to validate your regular expression. Note that the "/" character is valid in regular expressions and doesn't need to be escaped, however, escaping that character is considered a best practice and is expected by some regex validators.
> >
Rather than regular expressions, you can instead create a separate rule for each
> [!TIP] > In the example, the use of the wildcard character * tells the rules engine to match both HTTP and HTTPS. >
->
+>
cdn Cdn Create A Storage Account With Cdn https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-create-a-storage-account-with-cdn.md
Last updated 04/29/2022-+ # Quickstart: Integrate an Azure Storage account with Azure CDN
To create a storage account, you must be either the service administrator or a c
## Enable Azure CDN for the storage account
-1. On the page for your storage account, select **Blob service** > **Azure CDN** from the left menu. The **Azure CDN** page appears.
+1. On the page for your storage account, select **Security + Networking** > **Front Door and CDN** from the left menu. The **Front Door and CDN** page appears.
- :::image type="content" source="./media/cdn-create-a-storage-account-with-cdn/cdn-storage-endpoint-configuration.png" alt-text="Screenshot of create a CDN endpoint.":::
+ :::image type="content" source="./media/cdn-create-a-storage-account-with-cdn/azure-cdn-storage-endpoint-configuration.png" alt-text="Screenshot of create a CDN endpoint." lightbox="./media/cdn-create-a-storage-account-with-cdn/azure-cdn-storage-endpoint-configuration.png":::
1. In the **New endpoint** section, enter the following information: | Setting | Value | | -- | -- |
- | **CDN profile** | Select **Create new** and enter your profile name, for example, *cdn-profile-123*. A profile is a collection of endpoints. |
- | **Pricing tier** | Select one of the **Standard** options, such as **Microsoft CDN (classic)**. |
- | **CDN endpoint name** | Enter your endpoint hostname, such as *cdn-endpoint-123*. This name must be globally unique across Azure because it's to access your cached resources at the URL _&lt;endpoint-name&gt;_.azureedge.net. |
- | **Origin hostname** | By default, a new CDN endpoint uses the hostname of your storage account as the origin server. |
-
+ | **Service type** | **Azure CDN** |
+ | **Create new/use existing profile** | **Create new** |
+ | **Profile name** | Enter your profile name, for example, *cdn-profile-123*. A profile is a collection of endpoints. |
+    | **CDN endpoint name** | Enter your endpoint hostname, such as *cdn-endpoint-123*. This name must be globally unique across Azure because it's used to access your cached resources at the URL _&lt;endpoint-name&gt;_.azureedge.net. |
+ | **Origin hostname** | By default, a new CDN endpoint uses the hostname of your storage account as the origin server. |
+ | **Pricing tier** | Select one of the options, such as **Microsoft CDN (classic)**. |
+
1. Select **Create**. After the endpoint is created, it appears in the endpoint list.
- ![Storage new CDN endpoint](./media/cdn-create-a-storage-account-with-cdn/cdn-storage-new-endpoint-list.png)
+ [ ![Screenshot of a storage new CDN endpoint.](./media/cdn-create-a-storage-account-with-cdn/azure-cdn-storage-new-endpoint-list.png) ](./media/cdn-create-a-storage-account-with-cdn/azure-cdn-storage-new-endpoint-list.png#lightbox)
> [!TIP] > If you want to specify advanced configuration settings for your CDN endpoint, such as [large file download optimization](cdn-optimization-overview.md#large-file-download), you can instead use the [Azure CDN extension](cdn-create-new-endpoint.md) to create a CDN profile and endpoint.
cdn Cdn Create Endpoint How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-create-endpoint-how-to.md
Sign in to the [Azure portal](https://portal.azure.com) with your Azure account.
- **Azure CDN Standard from Microsoft** profiles: - [**General web delivery**](cdn-optimization-overview.md#general-web-delivery)
- - **Azure CDN Standard from Verizon** and **Azure CDN Premium from Verizon** profiles:
+ - **Azure CDN Standard from Edgio** and **Azure CDN Premium from Edgio** profiles:
- [**General web delivery**](cdn-optimization-overview.md#general-web-delivery) - [**Dynamic site acceleration**](cdn-optimization-overview.md#dynamic-site-acceleration)
Sign in to the [Azure portal](https://portal.azure.com) with your Azure account.
Because it takes time for the registration to propagate, the endpoint isn't immediately available for use: - For **Azure CDN Standard from Microsoft** profiles, propagation usually completes in 10 minutes. - For **Azure CDN Standard from Akamai** profiles, propagation usually completes within one minute.
- - For **Azure CDN Standard from Verizon** and **Azure CDN Premium from Verizon** profiles, propagation usually completes within 30 minutes.
+ - For **Azure CDN Standard from Edgio** and **Azure CDN Premium from Edgio** profiles, propagation usually completes within 30 minutes.
If you attempt to use the CDN domain name before the endpoint configuration has propagated to the point-of-presence (POP) servers, you might receive an HTTP 404 response status. If it has been several hours since you created your endpoint and you're still receiving a 404 response status, see [Troubleshooting Azure CDN endpoints that return a 404 status code](cdn-troubleshoot-endpoint.md). > [!NOTE]
-> For *Verizon CDN endpoints*, when an endpoint is **disabled** or **stopped** for any reason, all resources configured through the Verizon supplemental portal will be cleaned up. These configurations can't be restored automatically by restarting the endpoint. You will need to make those configuration changes again.
+> For *Edgio CDN endpoints*, when an endpoint is **disabled** or **stopped** for any reason, all resources configured through the Edgio supplemental portal will be cleaned up. These configurations can't be restored automatically by restarting the endpoint. You will need to make those configuration changes again.
## Clean up resources To delete an endpoint when it's no longer needed, select it and then select **Delete**.
To learn about custom domains, continue to the tutorial for adding a custom doma
> [!div class="nextstepaction"] > [Add a custom domain](cdn-map-content-to-custom-domain.md)--
cdn Cdn Create New Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-create-new-endpoint.md
After you've created a CDN profile, you use it to create an endpoint.
:::image type="content" source="./media/cdn-create-new-endpoint/cdn-endpoint-success.png" alt-text="View added endpoint.":::
- The time it takes for the endpoint to propagate depends on the pricing tier selected when you created the profile. **Standard Akamai** usually completes within one minute, **Standard Microsoft** in 10 minutes, and **Standard Verizon** and **Premium Verizon** in up to 30 minutes.
+ The time it takes for the endpoint to propagate depends on the pricing tier selected when you created the profile. **Standard Akamai** usually completes within one minute, **Standard Microsoft** in 10 minutes, and **Standard Edgio** and **Premium Edgio** in up to 30 minutes.
> [!NOTE]
-> For *Verizon CDN endpoints*, when an endpoint is **disabled** or **stopped** for any reason, all resources configured through the Verizon supplemental portal will be cleaned up. These configurations can't be restored automatically by restarting the endpoint. You will need to make the configuration change again.
+> For *Edgio CDN endpoints*, when an endpoint is **disabled** or **stopped** for any reason, all resources configured through the Edgio supplemental portal will be cleaned up. These configurations can't be restored automatically by restarting the endpoint. You will need to make the configuration change again.
## Clean up resources
cdn Cdn Custom Ssl https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-custom-ssl.md
To enable HTTPS on a custom domain, follow these steps:
2. Choose your profile:
 * **Azure CDN Standard from Microsoft**
 * **Azure CDN Standard from Akamai**
- * **Azure CDN Standard from Verizon**
- * **Azure CDN Premium from Verizon**
+ * **Azure CDN Standard from Edgio**
+ * **Azure CDN Premium from Edgio**
3. In the list of CDN endpoints, select the endpoint containing your custom domain.
To enable HTTPS on a custom domain, follow these steps:
# [Option 2: Enable HTTPS with your own certificate](#tab/option-2-enable-https-with-your-own-certificate)

> [!IMPORTANT]
-> This option is available only with **Azure CDN from Microsoft** and **Azure CDN from Verizon** profiles.
+> This option is available only with **Azure CDN from Microsoft** and **Azure CDN from Edgio** profiles.
>
-You can use your own certificate to enable the HTTPS feature. This process is done through an integration with Azure Key Vault, which allows you to store your certificates securely. Azure CDN uses this secure mechanism to get your certificate and it requires a few extra steps. When you create your TLS/SSL certificate, you must create a complete certificate chain with an allowed certificate authority (CA) that is part of the [Microsoft Trusted CA List](https://ccadb-public.secure.force.com/microsoft/IncludedCACertificateReportForMSFT). If you use a non-allowed CA, your request will be rejected. If a certificate without complete chain is presented, the requests which involve that certificate are not guaranteed to work as expected. For Azure CDN from Verizon, any valid CA will be accepted.
+You can use your own certificate to enable the HTTPS feature. This process is done through an integration with Azure Key Vault, which allows you to store your certificates securely. Azure CDN uses this secure mechanism to get your certificate, and it requires a few extra steps. When you create your TLS/SSL certificate, you must create a complete certificate chain with an allowed certificate authority (CA) that is part of the [Microsoft Trusted CA List](https://ccadb-public.secure.force.com/microsoft/IncludedCACertificateReportForMSFT). If you use a non-allowed CA, your request will be rejected. If a certificate without a complete chain is presented, requests that involve that certificate aren't guaranteed to work as expected. For Azure CDN from Edgio, any valid CA will be accepted.
### Prepare your Azure Key Vault account and certificate
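One common preparation step is importing an existing PFX certificate into Key Vault so that Azure CDN can retrieve it. A minimal sketch with the Az.KeyVault module follows; the vault name, certificate name, file path, and password are placeholders:

```powershell
# Import an existing PFX certificate into Key Vault (names are examples only).
$password = ConvertTo-SecureString -String "<pfx-password>" -AsPlainText -Force

Import-AzKeyVaultCertificate -VaultName "contoso-vault" `
    -Name "cdn-custom-domain-cert" `
    -FilePath "C:\certs\contoso.pfx" `
    -Password $password
```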
If your CNAME record is in the correct format, DigiCert automatically verifies your custom domain name.
Automatic validation typically takes a few hours. If you don't see your domain validated in 24 hours, open a support ticket.

>[!NOTE]
->If you have a Certificate Authority Authorization (CAA) record with your DNS provider, it must include the appropriate CA(s) for authorization. DigiCert is the CA for Microsoft and Verizon profiles. Akamai profile obtains certificates from three CAs: GeoTrust, Let's Encrypt and DigiCert. If a CA receives an order for a certificate for a domain that has a CAA record and that CA is not listed as an authorized issuer, it is prohibited from issuing the certificate to that domain or subdomain. For information about managing CAA records, see [Manage CAA records](https://support.dnsimple.com/articles/manage-caa-record/). For a CAA record tool, see [CAA Record Helper](https://sslmate.com/caa/).
+>If you have a Certificate Authority Authorization (CAA) record with your DNS provider, it must include the appropriate CA(s) for authorization. DigiCert is the CA for Microsoft and Edgio profiles. Akamai profiles obtain certificates from three CAs: GeoTrust, Let's Encrypt, and DigiCert. If a CA receives an order for a certificate for a domain that has a CAA record and that CA is not listed as an authorized issuer, it is prohibited from issuing the certificate to that domain or subdomain. For information about managing CAA records, see [Manage CAA records](https://support.dnsimple.com/articles/manage-caa-record/). For a CAA record tool, see [CAA Record Helper](https://sslmate.com/caa/).
### Custom domain isn't mapped to your CDN endpoint
In this section, you learn how to disable HTTPS for your custom domain.
1. In the [Azure portal](https://portal.azure.com), search for and select **CDN profiles**.
-2. Choose your **Azure CDN Standard from Microsoft**, **Azure CDN Standard from Verizon**, or **Azure CDN Premium from Verizon** profile.
+2. Choose your **Azure CDN Standard from Microsoft**, **Azure CDN Standard from Edgio**, or **Azure CDN Premium from Edgio** profile.
3. In the list of endpoints, pick the endpoint containing your custom domain.
The following table shows the operation progress that occurs when you disable HTTPS:
A dedicated certificate provided by DigiCert is used for your custom domain for:
- * **Azure CDN from Verizon**
+ * **Azure CDN from Edgio**
* **Azure CDN from Microsoft**

2. *Do you use IP-based or SNI TLS/SSL?*
- Both **Azure CDN from Verizon** and **Azure CDN Standard from Microsoft** use SNI TLS/SSL.
+ Both **Azure CDN from Edgio** and **Azure CDN Standard from Microsoft** use SNI TLS/SSL.
3. *What if I don't receive the domain verification email from DigiCert?*
The following table shows the operation progress that occurs when you disable HTTPS:
Certificate Authority Authorization record isn't currently required. However, if you do have one, it must include DigiCert as a valid CA.
-6. *On June 20, 2018, Azure CDN from Verizon started using a dedicated certificate with SNI TLS/SSL by default. What happens to my existing custom domains using Subject Alternative Names (SAN) certificate and IP-based TLS/SSL?*
+6. *On June 20, 2018, Azure CDN from Edgio started using a dedicated certificate with SNI TLS/SSL by default. What happens to my existing custom domains using Subject Alternative Names (SAN) certificate and IP-based TLS/SSL?*
Your existing domains will be gradually migrated to a single certificate in the upcoming months if Microsoft determines that only SNI client requests are made to your application.
The following table shows the operation progress that occurs when you disable HTTPS:
To ensure a newer certificate is deployed to POP infrastructure, upload your new certificate to Azure Key Vault. In your TLS settings on Azure CDN, choose the newest certificate version and select **Save**. Azure CDN then propagates your updated certificate.
- For **Azure CDN from Verizon** profiles, if you use the same Azure Key Vault certificate on several custom domains (e.g. a wildcard certificate), ensure you update all of your custom domains that use that same certificate to the newer certificate version.
+ For **Azure CDN from Edgio** profiles, if you use the same Azure Key Vault certificate on several custom domains (e.g. a wildcard certificate), ensure you update all of your custom domains that use that same certificate to the newer certificate version.
8. *Do I need to re-enable HTTPS after the endpoint restarts?*
cdn Cdn Ddos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-ddos.md
A content delivery network provides DDoS protection by design. In addition to th
Azure CDN from Microsoft is protected by [Azure Basic DDoS](../ddos-protection/ddos-protection-overview.md). It's integrated into the Azure CDN from Microsoft platform by default and at no extra cost. The full scale and capacity of Azure CDN from Microsoft's globally deployed network provides defense against common network layer attacks through always-on traffic monitoring and real-time mitigation. Basic DDoS protection also defends against the most common, frequently occurring Layer 7 DNS Query Floods and Layer 3 and 4 volumetric attacks that target CDN endpoints. This service also has a proven track record in protecting Microsoft's enterprise and consumer services from large-scale attacks.
-## Azure CDN from Verizon
+<a name='azure-cdn-from-verizon'></a>
-Azure CDN from Verizon is protected by Verizon's proprietary DDoS mitigation platform. It's integrated into Azure CDN from Verizon by default and at no extra cost. It provides basic protection against the most common, frequently occurring Layer 7 DNS Query Floods and Layer 3 and 4 volumetric attacks that target CDN endpoints.
+## Azure CDN from Edgio
+
+Azure CDN from Edgio is protected by Edgio's proprietary DDoS mitigation platform. It's integrated into Azure CDN from Edgio by default and at no extra cost. It provides basic protection against the most common, frequently occurring Layer 7 DNS Query Floods and Layer 3 and 4 volumetric attacks that target CDN endpoints.
## Azure CDN from Akamai
Azure CDN from Akamai is protected by Akamai's proprietary DDoS mitigation platform.
## Next steps
-Learn more about [Azure DDoS](../ddos-protection/ddos-protection-overview.md).
+Learn more about [Azure DDoS](../ddos-protection/ddos-protection-overview.md).
cdn Cdn Dynamic Site Acceleration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-dynamic-site-acceleration.md
With the explosion of social media, electronic commerce, and the hyper-personali
Standard content delivery network (CDN) capability includes the ability to cache files closer to end users to speed up delivery of static files. However, with dynamic web applications, caching that content in edge locations isn't possible because the server generates the content in response to user behavior. Speeding up the delivery of such content is more complex than traditional edge caching and requires an end-to-end solution that finely tunes each element along the entire data path from inception to delivery. With Azure CDN dynamic site acceleration (DSA) optimization, the performance of web pages with dynamic content is measurably improved.
-**Azure CDN from Akamai** and **Azure CDN from Verizon** both offer DSA optimization through the **Optimized for** menu during endpoint creation. Dynamic site acceleration from Microsoft is offered via [Azure Front Door Service](../frontdoor/front-door-overview.md).
+**Azure CDN from Akamai** and **Azure CDN from Edgio** both offer DSA optimization through the **Optimized for** menu during endpoint creation. Dynamic site acceleration from Microsoft is offered via [Azure Front Door Service](../frontdoor/front-door-overview.md).
> [!IMPORTANT]
> For **Azure CDN from Akamai** profiles, you are allowed to change the optimization of a CDN endpoint after it has been created.
>
-> For **Azure CDN from Verizon** profiles, you cannot change the optimization of a CDN endpoint after it has been created.
+> For **Azure CDN from Edgio** profiles, you cannot change the optimization of a CDN endpoint after it has been created.
## CDN endpoint configuration to accelerate delivery of dynamic files
Route optimization chooses the most optimal path to the origin so that a site is
The Akamai network uses techniques to collect real-time data and compare various paths through different nodes in the Akamai server, as well as the default BGP route across the open Internet to determine the fastest route between the origin and the CDN edge. These techniques avoid Internet congestion points and long routes.
-Similarly, the Verizon network uses a combination of Anycast DNS, high capacity support PoPs, and health checks, to determine the best gateways to best route data from the client to the origin.
+Similarly, the Edgio network uses a combination of Anycast DNS, high-capacity support PoPs, and health checks to determine the best gateways for routing data from the client to the origin.
As a result, fully dynamic and transactional content is delivered more quickly and more reliably to end users, even when it's uncacheable.
Transmission Control Protocol (TCP) is the standard of the Internet protocol suite.
TCP *slow start* is an algorithm of the TCP protocol that prevents network congestion by limiting the amount of data sent over the network. It starts off with small congestion window sizes between sender and receiver until the maximum is reached or packet loss is detected.
- Both **Azure CDN from Akamai** and **Azure CDN from Verizon** profiles eliminate TCP slow start with the following three steps:
+ Both **Azure CDN from Akamai** and **Azure CDN from Edgio** profiles eliminate TCP slow start with the following three steps:
1. Health and bandwidth monitoring is used to measure the bandwidth of connections between edge PoP servers.
When you're using a CDN, fewer unique machines connect to your origin server directly.
As previously mentioned, several handshake requests are required to establish a TCP connection. Persistent connections, which are implemented by using the `Keep-Alive` HTTP header, reuse existing TCP connections for multiple HTTP requests to save round-trip times and speed up delivery.
-**Azure CDN from Verizon** also sends periodic keep-alive packets over the TCP connection to prevent an open connection from being closed.
+**Azure CDN from Edgio** also sends periodic keep-alive packets over the TCP connection to prevent an open connection from being closed.
#### Tuning TCP packet parameters
With DSA, caching is turned off by default on the CDN, even when the origin includes cache-directive headers.
If you have a website with a mix of static and dynamic assets, it's best to take a hybrid approach to get the best performance.
-For **Azure CDN Standard from Verizon** and **Azure CDN Standard from Akamai** profiles, you can turn on caching for specific DSA endpoints by using [caching rules](cdn-caching-rules.md).
+For **Azure CDN Standard from Edgio** and **Azure CDN Standard from Akamai** profiles, you can turn on caching for specific DSA endpoints by using [caching rules](cdn-caching-rules.md).
To access caching rules:
To access caching rules:
2. Create a global or custom caching rule to turn on caching for your DSA endpoint.
-For **Azure CDN Premium from Verizon** profiles only, you turn on caching for specific DSA endpoints by using the [rules engine](./cdn-verizon-premium-rules-engine.md). Any rules that are created affect only those endpoints of your profile that are optimized for DSA.
+For **Azure CDN Premium from Edgio** profiles only, you turn on caching for specific DSA endpoints by using the [rules engine](./cdn-verizon-premium-rules-engine.md). Any rules that are created affect only those endpoints of your profile that are optimized for DSA.
To access the rules engine:
cdn Cdn Edge Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-edge-performance.md
This dashboard consists of:
### Accessing the edge performance dashboard

1. From the CDN profile page, select the **Manage** button.
- :::image type="content" source="./media/cdn-edge-performance/cdn-manage-btn.png" alt-text="Screenshot of the manage button from an Azure CDN Verizon Premium profile.":::
+ :::image type="content" source="./media/cdn-edge-performance/cdn-manage-btn.png" alt-text="Screenshot of the manage button from an Azure CDN Edgio Premium profile.":::
The CDN management portal opens.

2. Hover over the **Analytics** tab, then hover over the **Edge Performance Analytics** flyout. Select **Dashboard**.
Each report in this module contains a chart and statistics on bandwidth and traffic.
1. From the CDN profile page, select the **Manage** button.
- :::image type="content" source="./media/cdn-edge-performance/cdn-manage-btn.png" alt-text="Screenshot of the manage button from an Azure CDN Verizon Premium profile.":::
+ :::image type="content" source="./media/cdn-edge-performance/cdn-manage-btn.png" alt-text="Screenshot of the manage button from an Azure CDN Edgio Premium profile.":::
The CDN management portal opens.

2. Hover over the **Analytics** tab, then hover over the **Edge Performance Analytics** flyout. Select **HTTP Large Object**.
Each report in this module contains a chart and statistics on bandwidth and traffic.
* [Azure CDN Overview](cdn-overview.md)
* [Real-time stats in Microsoft Azure CDN](cdn-real-time-stats.md)
* [Overriding default HTTP behavior using the rules engine](./cdn-verizon-premium-rules-engine.md)
-* [Advanced HTTP Reports](cdn-advanced-http-reports.md)
+* [Advanced HTTP Reports](cdn-advanced-http-reports.md)
cdn Cdn Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-features.md
Azure Content Delivery Network (CDN) includes four products:
* **Azure CDN Standard from Microsoft**
* **Azure CDN Standard from Akamai**
-* **Azure CDN Standard from Verizon**
-* **Azure CDN Premium from Verizon**.
+* **Azure CDN Standard from Edgio (formerly Verizon)**
+* **Azure CDN Premium from Edgio (formerly Verizon)**.
> [!IMPORTANT]
> Azure CDN from Akamai is scheduled to be retired on October 31, 2023. You can no longer create new Azure CDN from Akamai profiles after June 1, 2023. For more information, see [**Migrate CDN provider**](cdn-change-provider.md) for guidance on migrating to another Azure CDN provider.

The following table compares the features available with each product.
-| **Performance features and optimizations** | **Standard Microsoft** | **Standard Akamai** | **Standard Verizon** | **Premium Verizon** |
+| **Performance features and optimizations** | **Standard Microsoft** | **Standard Akamai** | **Standard Edgio** | **Premium Edgio** |
| | | | |
| [Dynamic site acceleration](./cdn-dynamic-site-acceleration.md) | Offered via [Azure Front Door Service](../frontdoor/front-door-overview.md) | **&#x2713;** | **&#x2713;** | **&#x2713;** |
| &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[Dynamic site acceleration - adaptive image compression](./cdn-dynamic-site-acceleration.md#adaptive-image-compression-azure-cdn-from-akamai-only) | | **&#x2713;** | | |
The following table compares the features available with each product.
| IPv4/IPv6 dual-stack | **&#x2713;** |**&#x2713;** |**&#x2713;** |**&#x2713;** |
| [HTTP/2 support](cdn-http2.md) | **&#x2713;** |**&#x2713;** |**&#x2713;** |**&#x2713;** |
||||
- **Security** | **Standard Microsoft** | **Standard Akamai** | **Standard Verizon** | **Premium Verizon** |
+ **Security** | **Standard Microsoft** | **Standard Akamai** | **Standard Edgio** | **Premium Edgio** |
| HTTPS support with CDN endpoint | **&#x2713;** |**&#x2713;** |**&#x2713;** |**&#x2713;** |
| [Custom domain HTTPS](cdn-custom-ssl.md) | **&#x2713;** | **&#x2713;**, Requires direct CNAME to enable |**&#x2713;** |**&#x2713;** |
| [Custom domain name support](cdn-map-content-to-custom-domain.md) | **&#x2713;** |**&#x2713;** |**&#x2713;** |**&#x2713;** |
The following table compares the features available with each product.
| [Bring your own certificate](cdn-custom-ssl.md?tabs=option-2-enable-https-with-your-own-certificate#tlsssl-certificates) |**&#x2713;** | | **&#x2713;** | **&#x2713;** |
| Supported TLS Versions | TLS 1.2, TLS 1.0/1.1 - [Configurable](/rest/api/cdn/custom-domains/enable-custom-https#usermanagedhttpsparameters) | TLS 1.2 | TLS 1.2 | TLS 1.2 |
||||
-| **Analytics and reporting** | **Standard Microsoft** | **Standard Akamai** | **Standard Verizon** | **Premium Verizon** |
+| **Analytics and reporting** | **Standard Microsoft** | **Standard Akamai** | **Standard Edgio** | **Premium Edgio** |
| [Azure diagnostic logs](cdn-azure-diagnostic-logs.md) | **&#x2713;** | **&#x2713;** |**&#x2713;** |**&#x2713;** |
-| [Core reports from Verizon](cdn-analyze-usage-patterns.md) | | |**&#x2713;** |**&#x2713;** |
-| [Custom reports from Verizon](cdn-verizon-custom-reports.md) | | |**&#x2713;** |**&#x2713;** |
+| [Core reports from Edgio](cdn-analyze-usage-patterns.md) | | |**&#x2713;** |**&#x2713;** |
+| [Custom reports from Edgio](cdn-verizon-custom-reports.md) | | |**&#x2713;** |**&#x2713;** |
| [Advanced HTTP reports](cdn-advanced-http-reports.md) | | | |**&#x2713;** |
| [Real-time stats](cdn-real-time-stats.md) | | | |**&#x2713;** |
| [Edge node performance](cdn-edge-performance.md) | | | |**&#x2713;** |
| [Real-time alerts](cdn-real-time-alerts.md) | | | |**&#x2713;** |
||||
-| **Ease of use** | **Standard Microsoft** | **Standard Akamai** | **Standard Verizon** | **Premium Verizon** |
+| **Ease of use** | **Standard Microsoft** | **Standard Akamai** | **Standard Edgio** | **Premium Edgio** |
| Easy integration with Azure services, such as [Storage](cdn-create-a-storage-account-with-cdn.md), [Web Apps](cdn-add-to-web-app.md), and [Media Services](/azure/media-services/previous/media-services-portal-manage-streaming-endpoints) | **&#x2713;** |**&#x2713;** |**&#x2713;** |**&#x2713;** |
| Management via [REST API](/rest/api/cdn/), [.NET](cdn-app-dev-net.md), [Node.js](cdn-app-dev-node.md), or [PowerShell](cdn-manage-powershell.md) | **&#x2713;** |**&#x2713;** |**&#x2713;** |**&#x2713;** |
| [Compression MIME types](./cdn-improve-performance.md) |Configurable |Configurable |Configurable |Configurable |
The following table compares the features available with each product.
## Migration
-For information about migrating an **Azure CDN Standard from Verizon** profile to **Azure CDN Premium from Verizon**, see [Migrate an Azure CDN profile from Standard Verizon to Premium Verizon](cdn-migrate.md).
+For information about migrating an **Azure CDN Standard from Edgio** profile to **Azure CDN Premium from Edgio**, see [Migrate an Azure CDN profile from Standard Edgio to Premium Edgio](cdn-migrate.md).
> [!NOTE]
-> There is an upgrade path from Standard Verizon to Premium Verizon, there is no conversion mechanism between other products at this time.
+> There is an upgrade path from Standard Edgio to Premium Edgio, but there is no conversion mechanism between other products at this time.
## Next steps
cdn Cdn How Caching Works https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-how-caching-works.md
Two headers can be used to define cache freshness: `Cache-Control` and `Expires`.
## Cache-directive headers

> [!IMPORTANT]
-> By default, an Azure CDN endpoint that is optimized for DSA ignores cache-directive headers and bypasses caching. For **Azure CDN Standard from Verizon** and **Azure CDN Standard from Akamai** profiles, you can adjust how an Azure CDN endpoint treats these headers by using [CDN caching rules](cdn-caching-rules.md) to enable caching. For **Azure CDN Premium from Verizon** profiles only, you use the [rules engine](./cdn-verizon-premium-rules-engine.md) to enable caching.
+> By default, an Azure CDN endpoint that is optimized for DSA ignores cache-directive headers and bypasses caching. For **Azure CDN Standard from Edgio** and **Azure CDN Standard from Akamai** profiles, you can adjust how an Azure CDN endpoint treats these headers by using [CDN caching rules](cdn-caching-rules.md) to enable caching. For **Azure CDN Premium from Edgio** profiles only, you use the [rules engine](./cdn-verizon-premium-rules-engine.md) to enable caching.
Azure CDN supports the following HTTP cache-directive headers, which define cache duration and cache sharing.
Azure CDN supports the following HTTP cache-directive headers, which define cache duration and cache sharing.
- Overrides the `Expires` header, if both it and `Cache-Control` are defined.
- When used in an HTTP request from the client to the CDN POP, `Cache-Control` gets ignored by all Azure CDN profiles, by default.
- When used in an HTTP response from the origin server to the CDN POP:
- - **Azure CDN Standard/Premium from Verizon** and **Azure CDN Standard from Microsoft** support all `Cache-Control` directives.
- - **Azure CDN Standard/Premium from Verizon** and **Azure CDN Standard from Microsoft** honors caching behaviors for Cache-Control directives in [RFC 7234 - Hypertext Transfer Protocol (HTTP/1.1): Caching (ietf.org)](https://tools.ietf.org/html/rfc7234#section-5.2.2.8).
+ - **Azure CDN Standard/Premium from Edgio** and **Azure CDN Standard from Microsoft** support all `Cache-Control` directives.
+ - **Azure CDN Standard/Premium from Edgio** and **Azure CDN Standard from Microsoft** honors caching behaviors for Cache-Control directives in [RFC 7234 - Hypertext Transfer Protocol (HTTP/1.1): Caching (ietf.org)](https://tools.ietf.org/html/rfc7234#section-5.2.2.8).
- **Azure CDN Standard from Akamai** supports only the following `Cache-Control` directives; all others are ignored:
  - `max-age`: A cache can store the content for the number of seconds specified. For example, `Cache-Control: max-age=5`. This directive specifies the maximum amount of time the content is considered to be fresh.
  - `no-cache`: Cache the content, but validate the content every time before delivering it from the cache. Equivalent to `Cache-Control: max-age=0`.
Azure CDN supports the following HTTP cache-directive headers, which define cache duration and cache sharing.
## Validators
-When the cache is stale, HTTP cache validators are used to compare the cached version of a file with the version on the origin server. **Azure CDN Standard/Premium from Verizon** supports both `ETag` and `Last-Modified` validators by default, while **Azure CDN Standard from Microsoft** and **Azure CDN Standard from Akamai** supports only `Last-Modified` by default.
+When the cache is stale, HTTP cache validators are used to compare the cached version of a file with the version on the origin server. **Azure CDN Standard/Premium from Edgio** supports both `ETag` and `Last-Modified` validators by default, while **Azure CDN Standard from Microsoft** and **Azure CDN Standard from Akamai** support only `Last-Modified` by default.
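As a concrete illustration, a client (or the CDN POP) revalidates a stale entry by sending a conditional request. The following sketch assumes PowerShell 7 (for `-SkipHttpErrorCheck`) and a hypothetical endpoint URL:

```powershell
# Fetch a file once, then revalidate it with a conditional request.
$url   = "https://contoso.azureedge.net/styles/site.css"
$first = Invoke-WebRequest -Uri $url
$etag  = [string]$first.Headers['ETag']

$revalidation = Invoke-WebRequest -Uri $url `
    -Headers @{ 'If-None-Match' = $etag } `
    -SkipHttpErrorCheck

# 304 (Not Modified) means the cached copy is still valid;
# 200 (OK) means the server returned an updated resource.
$revalidation.StatusCode
```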
**ETag:**
-- **Azure CDN Standard/Premium from Verizon** supports `ETag` by default, while **Azure CDN Standard from Microsoft** and **Azure CDN Standard from Akamai** don't.
+- **Azure CDN Standard/Premium from Edgio** supports `ETag` by default, while **Azure CDN Standard from Microsoft** and **Azure CDN Standard from Akamai** don't.
- `ETag` defines a string that is unique for every file and version of a file. For example, `ETag: "17f0ddd99ed5bbe4edffdd6496d7131f"`.
- Introduced in HTTP 1.1 and is more current than `Last-Modified`. Useful when the last modified date is difficult to determine.
- Supports both strong validation and weak validation; however, Azure CDN supports only strong validation. For strong validation, the two resource representations must be byte-for-byte identical.
- A cache validates a file that uses `ETag` by sending an `If-None-Match` header with one or more `ETag` validators in the request. For example, `If-None-Match: "17f0ddd99ed5bbe4edffdd6496d7131f"`. If the server's version matches an `ETag` validator on the list, it sends status code 304 (Not Modified) in its response. If the version is different, the server responds with status code 200 (OK) and the updated resource.

**Last-Modified:**
-- For **Azure CDN Standard/Premium from Verizon** only, `Last-Modified` is used if `ETag` isn't part of the HTTP response.
+- For **Azure CDN Standard/Premium from Edgio** only, `Last-Modified` is used if `ETag` isn't part of the HTTP response.
- Specifies the date and time that the origin server has determined the resource was last modified. For example, `Last-Modified: Thu, 19 Oct 2017 09:28:00 GMT`.
- A cache validates a file using `Last-Modified` by sending an `If-Modified-Since` header with a date and time in the request. The origin server compares that date with the `Last-Modified` header of the latest resource. If the resource hasn't been modified since the specified time, the server returns status code 304 (Not Modified) in its response. If the resource has been modified, the server returns status code 200 (OK) and the updated resource.

## Determining which files can be cached
-Not all resources can be cached. The following table shows what resources can be cached, based on the type of HTTP response. Resources delivered with HTTP responses that don't meet all of these conditions can't be cached. For **Azure CDN Premium from Verizon** only, you can use the rules engine to customize some of these conditions.
+Not all resources can be cached. The following table shows what resources can be cached, based on the type of HTTP response. Resources delivered with HTTP responses that don't meet all of these conditions can't be cached. For **Azure CDN Premium from Edgio** only, you can use the rules engine to customize some of these conditions.
-| | Azure CDN from Microsoft | Azure CDN from Verizon | Azure CDN from Akamai |
+| | Azure CDN from Microsoft | Azure CDN from Edgio | Azure CDN from Akamai |
|--|--|||
| **HTTP status codes** | 200, 203, 206, 300, 301, 410, 416 | 200 | 200, 203, 300, 301, 302, 401 |
| **HTTP methods** | GET, HEAD | GET | GET |
For **Azure CDN Standard from Microsoft** caching to work on a resource, the ori
The following table describes the default caching behavior for the Azure CDN products and their optimizations.
-| | Microsoft: General web delivery | Verizon: General web delivery | Verizon: DSA | Akamai: General web delivery | Akamai: DSA | Akamai: Large file download | Akamai: general or VOD media streaming |
+| | Microsoft: General web delivery | Edgio: General web delivery | Edgio: DSA | Akamai: General web delivery | Akamai: DSA | Akamai: Large file download | Akamai: general or VOD media streaming |
||--|-||--||-|--|
| **Honor origin** | Yes | Yes | No | Yes | No | Yes | Yes |
| **CDN cache duration** | Two days |Seven days | None | Seven days | None | One day | One year |
cdn Cdn Http Debug Headers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-http-debug-headers.md
Title: X-EC-Debug HTTP headers for Azure CDN rules engine | Microsoft Docs
-description: The X-EC-Debug debug cache request header provides additional information about the cache policy that is applied to the requested asset. These headers are specific to Verizon.
+description: The X-EC-Debug debug cache request header provides additional information about the cache policy that is applied to the requested asset. These headers are specific to Edgio.
documentationcenter: ''
# X-EC-Debug HTTP headers for Azure CDN rules engine
-The debug cache request header, `X-EC-Debug`, provides additional information about the cache policy that is applied to the requested asset. These headers are specific to **Azure CDN Premium from Verizon** products.
+The debug cache request header, `X-EC-Debug`, provides additional information about the cache policy that is applied to the requested asset. These headers are specific to **Azure CDN Premium from Edgio** products.
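As a rough illustration of how a client requests these diagnostics, the sketch below sends the `X-EC-Debug` request header from PowerShell; the endpoint URL is a placeholder, and the directive names shown (`x-ec-cache`, `x-ec-cache-state`) are assumptions based on Edgio's debug-header documentation:

```powershell
# Ask the POP for debug cache information and print any X-EC-* response headers.
$response = Invoke-WebRequest -Uri "https://contoso.azureedge.net/image.png" `
    -Headers @{ 'X-EC-Debug' = 'x-ec-cache,x-ec-cache-state' }

$response.Headers.GetEnumerator() |
    Where-Object { $_.Key -like 'X-EC-*' } |
    ForEach-Object { "$($_.Key): $($_.Value)" }
```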
## Usage

The response sent from the POP servers to a user includes the `X-EC-Debug` header only when the following conditions are met:
cdn Cdn Improve Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-improve-performance.md
There are two ways to enable file compression:
> Azure CDN configuration changes can take some time to propagate through the network:
> - For **Azure CDN Standard from Microsoft** profiles, propagation usually completes in 10 minutes.
> - For **Azure CDN Standard from Akamai** profiles, propagation usually completes within one minute.
-> - For **Azure CDN Standard from Verizon** and **Azure CDN Premium from Verizon** profiles, propagation usually completes in 10 minutes.
+> - For **Azure CDN Standard from Edgio** and **Azure CDN Premium from Edgio** profiles, propagation usually completes in 10 minutes.
>
> If you're setting up compression for the first time for your CDN endpoint, consider waiting 1-2 hours before you troubleshoot to ensure the compression settings have propagated to the POPs.
The standard and premium CDN tiers provide the same compression functionality, b
### Standard CDN profiles

> [!NOTE]
-> This section applies to **Azure CDN Standard from Microsoft**, **Azure CDN Standard from Verizon**, and **Azure CDN Standard from Akamai** profiles.
+> This section applies to **Azure CDN Standard from Microsoft**, **Azure CDN Standard from Edgio**, and **Azure CDN Standard from Akamai** profiles.
> >
The standard and premium CDN tiers provide the same compression functionality, b
### Premium CDN profiles

> [!NOTE]
-> This section applies only to **Azure CDN Premium from Verizon** profiles.
+> This section applies only to **Azure CDN Premium from Edgio** profiles.
>

1. From the CDN profile page, select **Manage**.
When a request for an asset specifies gzip compression and the request results i
If the origin uses Chunked Transfer Encoding (CTE) to send compressed data to the CDN POP, then response sizes greater than 8 MB aren't supported.
-### Azure CDN from Verizon profiles
+<a name='azure-cdn-from-verizon-profiles'></a>
-For **Azure CDN Standard from Verizon** and **Azure CDN Premium from Verizon** profiles, only eligible files are compressed. To be eligible for compression, a file must:
+### Azure CDN from Edgio profiles
+
+For **Azure CDN Standard from Edgio** and **Azure CDN Premium from Edgio** profiles, only eligible files are compressed. To be eligible for compression, a file must:
- Be larger than 128 bytes
- Be smaller than 3 MB
These profiles support the following compression encodings:
- DEFLATE
- bzip2
-Azure CDN from Verizon doesn't support brotli compression. When the HTTP request has the header `Accept-Encoding: br`, the CDN responds with an uncompressed response.
+Azure CDN from Edgio doesn't support brotli compression. When the HTTP request has the header `Accept-Encoding: br`, the CDN responds with an uncompressed response.
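To see which encoding an endpoint actually returns, you can probe it yourself. This sketch uses .NET's `HttpClient` from PowerShell 7.1 or later (for the synchronous `Send` method), with automatic decompression left off so the `Content-Encoding` response header stays visible; the endpoint URL is a placeholder:

```powershell
# Request gzip explicitly and report the encoding the CDN responds with.
$handler = [System.Net.Http.HttpClientHandler]::new()  # AutomaticDecompression defaults to None
$client  = [System.Net.Http.HttpClient]::new($handler)

$request = [System.Net.Http.HttpRequestMessage]::new('Get', 'https://contoso.azureedge.net/styles/site.css')
$request.Headers.AcceptEncoding.ParseAdd('gzip')

$response = $client.Send($request)
$response.Content.Headers.ContentEncoding  # Empty output means an uncompressed response
```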
### Azure CDN Standard from Akamai profiles
The following tables describe Azure CDN compression behavior for every scenario:
| | | | |
| Compressed |Compressed |Compressed |CDN transcodes between supported formats. <br/>**Azure CDN from Microsoft** doesn't support transcoding between formats and instead fetches data from origin, compresses and caches separately for the format. |
| Compressed |Uncompressed |Compressed |CDN performs a compression. |
-| Compressed |Not cached |Compressed |CDN performs a compression if the origin returns an uncompressed file. <br/>**Azure CDN from Verizon** passes the uncompressed file on the first request and then compresses and caches the file for subsequent requests. <br/>Files with the `Cache-Control: no-cache` header are never compressed. |
+| Compressed |Not cached |Compressed |CDN performs a compression if the origin returns an uncompressed file. <br/>**Azure CDN from Edgio** passes the uncompressed file on the first request and then compresses and caches the file for subsequent requests. <br/>Files with the `Cache-Control: no-cache` header are never compressed. |
| Uncompressed |Compressed |Uncompressed |CDN performs a decompression. <br/>**Azure CDN from Microsoft** doesn't support decompression and instead fetches data from origin and caches separately for uncompressed clients. |
| Uncompressed |Uncompressed |Uncompressed | |
| Uncompressed |Not cached |Uncompressed | |
For endpoints enabled for Media Services CDN streaming, compression is enabled by default.
## See also

* [Troubleshooting CDN file compression](cdn-troubleshoot-compression.md)
cdn Cdn Large File Optimization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-large-file-optimization.md
There are no limits on maximum file size.
### Chunked Transfer Encoding Support

Microsoft CDN supports chunked transfer-encoded responses only up to a maximum content size of 8 MB. For chunked transfer-encoded responses larger than 8 MB, Microsoft CDN caches and serves only the first 8 MB of content.
-## Optimize for delivery of large files with Azure CDN from Verizon
+<a name='optimize-for-delivery-of-large-files-with-azure-cdn-from-verizon'></a>
-**Azure CDN Standard from Verizon** and **Azure CDN Premium from Verizon** endpoints deliver large files without a cap on file size. More features are turned on by default to make delivery of large files faster.
+## Optimize for delivery of large files with Azure CDN from Edgio
+
+**Azure CDN Standard from Edgio** and **Azure CDN Premium from Edgio** endpoints deliver large files without a cap on file size. More features are turned on by default to make delivery of large files faster.
### Complete cache fill
The default complete cache fill feature enables the CDN to pull a file into the edge server's local cache.
Complete cache fill is most useful for large assets. Typically, users don't download them from start to finish. They use progressive download. The default behavior forces the edge server to initiate a background fetch of the asset from the origin server. Afterward, the asset is in the edge server's local cache. After the full object is in the cache, the edge server fulfills byte-range requests to the CDN for the cached object.
-The default behavior can be disabled through the rules engine in **Azure CDN Premium from Verizon**.
+The default behavior can be disabled through the rules engine in **Azure CDN Premium from Edgio**.
### Peer cache fill hot-filing
The default peer cache fill hot-filing feature uses a sophisticated proprietary algorithm.
### Conditions for large file optimization
-Large file optimization features for **Azure CDN Standard from Verizon** and **Azure CDN Premium from Verizon** are turned on by default when you use the general web delivery optimization type. There are no limits on maximum file size.
+Large file optimization features for **Azure CDN Standard from Edgio** and **Azure CDN Premium from Edgio** are turned on by default when you use the general web delivery optimization type. There are no limits on maximum file size.
## Optimize for delivery of large files with Azure CDN Standard from Akamai
Consider the following aspects for this optimization type:
- For chunks cached at the CDN, there are no other requests to the origin until the content expires or it's evicted from the cache.
- Users can make range requests to the CDN, which are treated like any normal file. Optimization applies only if it's a valid file type and the byte range is between 10 MB and 150 GB. If the average file size requested is smaller than 10 MB, use general web delivery instead.
cdn Cdn Log Analysis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-log-analysis.md
For more information, see [Azure CDN HTTP raw logs](monitoring-and-access-log.md).
Core analytics is available for CDN endpoints for all pricing tiers. Azure diagnostics logs allow core analytics to be exported to Azure storage, event hubs, or Azure Monitor logs. Azure Monitor logs offers a solution with graphs that are user-configurable and customizable. For more information about Azure diagnostic logs, see [Azure diagnostic logs](cdn-azure-diagnostic-logs.md).
-## Verizon core reports
+<a name='verizon-core-reports'></a>
-**Azure CDN Standard from Verizon** or **Azure CDN Premium from Verizon** profiles provide core reports. You can view core reports in the Verizon supplemental portal. Verizon core reports are accessible via the **Manage** option from the Azure portal and offers different kinds of graphs and views. For more information, see [Core Reports from Verizon](cdn-analyze-usage-patterns.md).
+## Edgio core reports
-## Verizon custom reports
+**Azure CDN Standard from Edgio** or **Azure CDN Premium from Edgio** profiles provide core reports. You can view core reports in the Edgio supplemental portal. Edgio core reports are accessible via the **Manage** option from the Azure portal and offer different kinds of graphs and views. For more information, see [Core Reports from Edgio](cdn-analyze-usage-patterns.md).
-**Azure CDN Standard from Verizon** or **Azure CDN Premium from Verizon** profiles provide custom reports. You can view custom reports in the Verizon supplemental portal. Verizon custom reports are accessible via the **Manage** option from the Azure portal.
+<a name='verizon-custom-reports'></a>
-The custom reports display the number of hits or data transferred for each edge CNAME. Data gets grouped by HTTP response code or cache status over period of time. For more information, see [Custom Reports from Verizon](cdn-verizon-custom-reports.md).
+## Edgio custom reports
-## Azure CDN Premium from Verizon reports
+**Azure CDN Standard from Edgio** or **Azure CDN Premium from Edgio** profiles provide custom reports. You can view custom reports in the Edgio supplemental portal. Edgio custom reports are accessible via the **Manage** option from the Azure portal.
-With **Azure CDN Premium from Verizon**, you can also access the following reports:
+The custom reports display the number of hits or data transferred for each edge CNAME. Data gets grouped by HTTP response code or cache status over a period of time. For more information, see [Custom Reports from Edgio](cdn-verizon-custom-reports.md).
+
+<a name='azure-cdn-premium-from-verizon-reports'></a>
+
+## Azure CDN Premium from Edgio reports
+
+With **Azure CDN Premium from Edgio**, you can also access the following reports:
* [Advanced HTTP reports](cdn-advanced-http-reports.md)
* [Real-time stats](cdn-real-time-stats.md)
* [Azure CDN edge node performance](cdn-edge-performance.md)
cdn Cdn Manage Expiration Of Blob Content https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-manage-expiration-of-blob-content.md
You can also control cache settings from the Azure portal by setting CDN caching rules.
The preferred method for setting a blob's `Cache-Control` header is to use caching rules in the Azure portal. For more information about CDN caching rules, see [Control Azure CDN caching behavior with caching rules](cdn-caching-rules.md). > [!NOTE]
-> Caching rules are available only for **Azure CDN Standard from Verizon** and **Azure CDN Standard from Akamai** profiles. For **Azure CDN Premium from Verizon** profiles, you must use the [Azure CDN rules engine](./cdn-verizon-premium-rules-engine.md) in the **Manage** portal for similar functionality.
+> Caching rules are available only for **Azure CDN Standard from Edgio** and **Azure CDN Standard from Akamai** profiles. For **Azure CDN Premium from Edgio** profiles, you must use the [Azure CDN rules engine](./cdn-verizon-premium-rules-engine.md) in the **Manage** portal for similar functionality.
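If you manage content with scripts, the blob's `Cache-Control` property can also be set directly. A minimal sketch with the Az.Storage module follows; the storage account, container, and blob names are hypothetical:

```powershell
# Set the Cache-Control header on a blob (all names are examples only).
$context = New-AzStorageContext -StorageAccountName "contosostorage" -UseConnectedAccount
$blob = Get-AzStorageBlob -Container "media" -Blob "banner.jpg" -Context $context

$blob.ICloudBlob.Properties.CacheControl = "public, max-age=604800"  # Cache for seven days
$blob.ICloudBlob.SetProperties()
```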
**To navigate to the CDN caching rules page**:
cdn Cdn Manage Expiration Of Cloud Service Content https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-manage-expiration-of-cloud-service-content.md
You can also control cache settings from the Azure portal by setting [CDN caching rules](cdn-caching-rules.md).
The preferred method for setting a web server's `Cache-Control` header is to use caching rules in the Azure portal. For more information about CDN caching rules, see [Control Azure CDN caching behavior with caching rules](cdn-caching-rules.md). > [!NOTE]
-> Caching rules are available only for **Azure CDN Standard from Verizon** and **Azure CDN Standard from Akamai** profiles. For **Azure CDN Premium from Verizon** profiles, you must use the [Azure CDN rules engine](./cdn-verizon-premium-rules-engine.md) in the **Manage** portal for similar functionality.
+> Caching rules are available only for **Azure CDN Standard from Edgio** and **Azure CDN Standard from Akamai** profiles. For **Azure CDN Premium from Edgio** profiles, you must use the [Azure CDN rules engine](./cdn-verizon-premium-rules-engine.md) in the **Manage** portal for similar functionality.
**To navigate to the CDN caching rules page**:
cdn Cdn Manage Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-manage-powershell.md
Clear-AzCdnEndpointContent -ProfileName CdnPoshDemo -ResourceGroupName CdnDemoRG
## Pre-load some assets

> [!NOTE]
-> Pre-loading is only available on Azure CDN from Verizon profiles.
+> Pre-loading is only available on Azure CDN from Edgio profiles.
`Import-AzCdnEndpointContent` pre-loads assets into the CDN cache.
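A minimal sketch of pre-loading follows, assuming the Az.Cdn module's `-ContentPath` parameter and reusing the demo profile names; the endpoint name is a placeholder:

```powershell
# Pre-load two assets into the CDN cache so the first user request is a cache hit.
Import-AzCdnEndpointContent -ProfileName "CdnPoshDemo" `
    -ResourceGroupName "CdnDemoRG" `
    -EndpointName "cdnposhdemo" `
    -ContentPath @("/images/logo.png", "/styles/site.css")
```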
Remove-AzCdnProfile -ProfileName CdnPoshDemo -ResourceGroupName CdnDemoRG
* Learn how to automate Azure CDN with [.NET](cdn-app-dev-net.md) or [Node.js](cdn-app-dev-node.md).
* To learn about CDN features, see [CDN Overview](cdn-overview.md).
cdn Cdn Map Content To Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-map-content-to-custom-domain.md
After you've registered your custom domain, you can then add it to your CDN endpoint.
It can take some time for the new custom domain settings to propagate to all CDN edge nodes:
 - For **Azure CDN Standard from Microsoft** profiles, propagation usually completes in 10 minutes.
 - For **Azure CDN Standard from Akamai** profiles, propagation usually completes within one minute.
- - For **Azure CDN Standard from Verizon** and **Azure CDN Premium from Verizon** profiles, propagation usually completes in 10 minutes.
+ - For **Azure CDN Standard from Edgio** and **Azure CDN Premium from Edgio** profiles, propagation usually completes in 10 minutes.
# [**PowerShell**](#tab/azure-powershell)
Azure verifies that the CNAME record exists for the custom domain name you entered.
- For **Azure CDN Standard from Microsoft** profiles, propagation usually completes in 10 minutes.
- For **Azure CDN Standard from Akamai** profiles, propagation usually completes within one minute.
-- For **Azure CDN Standard from Verizon** and **Azure CDN Premium from Verizon** profiles, propagation usually completes in 10 minutes.
+- For **Azure CDN Standard from Edgio** and **Azure CDN Premium from Edgio** profiles, propagation usually completes in 10 minutes.
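To confirm the CNAME record yourself before Azure validates it, a quick DNS lookup works. This sketch assumes the Windows `DnsClient` module's `Resolve-DnsName` cmdlet and a hypothetical custom domain:

```powershell
# Check that the custom domain resolves to the CDN endpoint hostname.
Resolve-DnsName -Name "cdn.contoso.com" -Type CNAME |
    Select-Object Name, NameHost
```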
cdn Cdn Media Streaming Optimization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-media-streaming-optimization.md
The general media delivery or video-on-demand media delivery optimization types
Partial cache sharing allows the CDN to serve partially cached content to new requests. For example, if the first request to the CDN results in a cache miss, the request is sent to the origin. Although this incomplete content is loaded into the CDN cache, other requests to the CDN can start getting this data.
-## Media streaming optimizations for Azure CDN from Verizon
+<a name='media-streaming-optimizations-for-azure-cdn-from-verizon'></a>
-**Azure CDN Standard from Verizon** and **Azure CDN Premium from Verizon** endpoints deliver streaming media assets directly by using the general web delivery optimization type. A few features on the CDN directly help delivering media assets by default.
+## Media streaming optimizations for Azure CDN from Edgio
+
+**Azure CDN Standard from Edgio** and **Azure CDN Premium from Edgio** endpoints deliver streaming media assets directly by using the general web delivery optimization type. A few features on the CDN directly help delivering media assets by default.
### Partial cache sharing
cdn Cdn Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-migrate.md
Title: Migrate Azure CDN profile from Verizon Standard to Verizon Premium
-description: Learn about the details of migrating a profile from Verizon Standard to Verizon Premium.
+ Title: Migrate Azure CDN profile from Edgio Standard to Edgio Premium
+description: Learn about the details of migrating a profile from Edgio Standard to Edgio Premium.
-# Migrate an Azure CDN profile from Standard Verizon to Premium Verizon
+# Migrate an Azure CDN profile from Standard Edgio to Premium Edgio
When you create an Azure Content Delivery Network (CDN) profile to manage your endpoints, Azure CDN offers four different products for you to choose from. For information about the different products and their available features, see [Compare Azure CDN product features](cdn-features.md).
-If you've create an **Azure CDN Standard from Verizon** profile and are using it to manage your CDN endpoints, you can upgrade it to an **Azure CDN Premium from Verizon** profile. When you upgrade, your CDN endpoints and all of your data gets preserved.
+If you've created an **Azure CDN Standard from Edgio** profile and are using it to manage your CDN endpoints, you can upgrade it to an **Azure CDN Premium from Edgio** profile. When you upgrade, your CDN endpoints and all of your data are preserved.
> [!IMPORTANT]
-> Once you've upgraded to an **Azure CDN Premium from Verizon** profile, you cannot later convert it back to an **Azure CDN Standard from Verizon** profile.
+> Once you've upgraded to an **Azure CDN Premium from Edgio** profile, you cannot later convert it back to an **Azure CDN Standard from Edgio** profile.
>
-To upgrade an **Azure CDN Standard from Verizon** profile, contact [Microsoft Support](https://azure.microsoft.com/support/options/).
+To upgrade an **Azure CDN Standard from Edgio** profile, contact [Microsoft Support](https://azure.microsoft.com/support/options/).
## Profile comparison
-**Azure CDN Premium from Verizon** profiles have the following key differences from **Azure CDN Standard from Verizon** profiles:
-- For certain Azure CDN features such as [compression](cdn-improve-performance.md), [caching rules](cdn-caching-rules.md), and [geo filtering](cdn-restrict-access-by-country-region.md), you can't use the Azure CDN interface, you must use the Verizon portal via the **Manage** button.
-- API: Unlike with Standard Verizon, you can't use the API to control those features that are accessed from the Premium Verizon portal. However, you can use the API to control other common features, such as creating/deleting an endpoint, purging/load cached assets, and enabling/disable a custom domain.
-- Pricing: Premium Verizon has a different pricing structure for data transfers than Standard Verizon. For more information, see [Content Delivery Network pricing](https://azure.microsoft.com/pricing/details/cdn/).
+**Azure CDN Premium from Edgio** profiles have the following key differences from **Azure CDN Standard from Edgio** profiles:
+- For certain Azure CDN features such as [compression](cdn-improve-performance.md), [caching rules](cdn-caching-rules.md), and [geo filtering](cdn-restrict-access-by-country-region.md), you can't use the Azure CDN interface; you must use the Edgio portal via the **Manage** button.
+- API: Unlike with Standard Edgio, you can't use the API to control those features that are accessed from the Premium Edgio portal. However, you can use the API to control other common features, such as creating/deleting an endpoint, purging/loading cached assets, and enabling/disabling a custom domain.
+- Pricing: Premium Edgio has a different pricing structure for data transfers than Standard Edgio. For more information, see [Content Delivery Network pricing](https://azure.microsoft.com/pricing/details/cdn/).
-**Azure CDN Premium from Verizon** profiles have the following extra features:
+**Azure CDN Premium from Edgio** profiles have the following extra features:
- [Token authentication](cdn-token-auth.md): Allows users to obtain and use a token to fetch secure resources.
- [Rules engine](./cdn-verizon-premium-rules-engine.md): Enables you to customize how HTTP requests are handled.
- Advanced analytics tools:
To upgrade an **Azure CDN Standard from Verizon** profile, contact [Microsoft Support](https://azure.microsoft.com/support/options/).
## Next steps
-To learn more about the rules engine, see [Azure CDN rules engine reference](./cdn-verizon-premium-rules-engine-reference.md).
+To learn more about the rules engine, see [Azure CDN rules engine reference](./cdn-verizon-premium-rules-engine-reference.md).
cdn Cdn Optimization Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-optimization-overview.md
This article provides an overview of various optimization features and when you should use them.
> [!NOTE]
> Dynamic site acceleration from Microsoft is offered via [Azure Front Door Service](../frontdoor/front-door-overview.md).
-**Azure CDN Standard from Verizon** and **Azure CDN Premium from Verizon** profiles support the following optimizations:
+**Azure CDN Standard from Edgio** and **Azure CDN Premium from Edgio** profiles support the following optimizations:
* [General web delivery](#general-web-delivery). This optimization is also used for media streaming and large file download.
When you create a CDN endpoint, select an optimization type that best matches th
**General web delivery** is the default selection. You can change the optimization type at any time only for **Azure CDN Standard from Akamai** endpoints.
-For **Azure CDN Standard from Microsoft**, **Azure CDN Standard from Verizon** and **Azure CDN Premium from Verizon**, you can't.
+For **Azure CDN Standard from Microsoft**, **Azure CDN Standard from Edgio** and **Azure CDN Premium from Edgio**, you can't.
1. In an **Azure CDN Standard from Akamai** profile, select an endpoint.
Media streaming is time-sensitive, because packets that arrive late on the clien
This scenario is common for Azure media service customers. When you use Azure media services, you get a single streaming endpoint that can be used for both live and on-demand streaming. With this scenario, customers don't need to switch to another endpoint when they change from live to on-demand streaming. General media streaming optimization supports this type of scenario.
-For **Azure CDN Standard from Microsoft**, **Azure CDN Standard from Verizon**, and **Azure CDN Premium from Verizon**, use the general web delivery optimization type to deliver general streaming media content.
+For **Azure CDN Standard from Microsoft**, **Azure CDN Standard from Edgio**, and **Azure CDN Premium from Edgio**, use the general web delivery optimization type to deliver general streaming media content.
For more information about media streaming optimization, see [Media streaming optimization](cdn-media-streaming-optimization.md).
For more information about media streaming optimization, see [Media streaming optimization](cdn-media-streaming-optimization.md).
Video-on-demand media streaming optimization improves video-on-demand streaming content. If you use an endpoint for video-on-demand streaming, use this option.
-For **Azure CDN Standard from Microsoft**, **Azure CDN Standard from Verizon**, and **Azure CDN Premium from Verizon** profiles, use the general web delivery optimization type to deliver video-on-demand streaming media content.
+For **Azure CDN Standard from Microsoft**, **Azure CDN Standard from Edgio**, and **Azure CDN Premium from Edgio** profiles, use the general web delivery optimization type to deliver video-on-demand streaming media content.
For more information about media streaming optimization, see [Media streaming optimization](cdn-media-streaming-optimization.md).
For more information about media streaming optimization, see [Media streaming optimization](cdn-media-streaming-optimization.md).
For **Azure CDN Standard from Akamai** profiles, large file downloads are optimized for content larger than 10 MB. If your average file size is smaller than 10 MB, use general web delivery. If your average file sizes are consistently larger than 10 MB, it might be more efficient to create a separate endpoint for large files. For example, firmware or software updates typically are large files. To deliver files larger than 1.8 GB, the large file download optimization is required.
-For **Azure CDN Standard from Microsoft**, **Azure CDN Standard from Verizon**, and **Azure CDN Premium from Verizon** profiles, use the general web delivery optimization type to deliver large file download content. There's no limitation on file download size.
+For **Azure CDN Standard from Microsoft**, **Azure CDN Standard from Edgio**, and **Azure CDN Premium from Edgio** profiles, use the general web delivery optimization type to deliver large file download content. There's no limitation on file download size.
For more information about large file optimization, see [Large file optimization](cdn-large-file-optimization.md).

### Dynamic site acceleration
- Dynamic site acceleration (DSA) is available for **Azure CDN Standard from Akamai**, **Azure CDN Standard from Verizon**, and **Azure CDN Premium from Verizon** profiles. This optimization involves an extra fee to use; for more information, see [Content Delivery Network pricing](https://azure.microsoft.com/pricing/details/cdn/).
+ Dynamic site acceleration (DSA) is available for **Azure CDN Standard from Akamai**, **Azure CDN Standard from Edgio**, and **Azure CDN Premium from Edgio** profiles. This optimization involves an extra fee to use; for more information, see [Content Delivery Network pricing](https://azure.microsoft.com/pricing/details/cdn/).
> [!NOTE]
> Dynamic site acceleration from Microsoft is offered via [Azure Front Door Service](../frontdoor/front-door-overview.md) which is a global [anycast](https://en.wikipedia.org/wiki/Anycast) service leveraging Microsoft's private global network to deliver your app workloads.
DSA includes various techniques that benefit the latency and performance of dynamic content.
You can use this optimization to accelerate a web app that includes numerous responses that aren't cacheable. Examples are search results, checkout transactions, or real-time data. You can continue to use core Azure CDN caching capabilities for static data.
-For more information about dynamic site acceleration, see [Dynamic site acceleration](cdn-dynamic-site-acceleration.md).
+For more information about dynamic site acceleration, see [Dynamic site acceleration](cdn-dynamic-site-acceleration.md).
cdn Cdn Pop Abbreviations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-pop-abbreviations.md
Title: Azure CDN POP locations by abbreviation | Microsoft Docs
-description: This article lists Azure CDN POP locations, sorted by POP abbreviation, for Azure CDN from Verizon.
+description: This article lists Azure CDN POP locations, sorted by POP abbreviation, for Azure CDN from Edgio.
> [!div class="op_single_selector"] > * [POP locations by region](cdn-pop-locations.md) > * [Microsoft POP locations by abbreviation](microsoft-pop-abbreviations.md)
-> * [Verizon POP locations by abbreviation](cdn-pop-abbreviations.md)
+> * [Edgio POP locations by abbreviation](cdn-pop-abbreviations.md)
>
-This article lists POP locations, sorted by POP abbreviation, for **Azure CDN from Verizon**.
+This article lists POP locations, sorted by POP abbreviation, for **Azure CDN from Edgio**.
-## Verizon POP locations
+<a name='verizon-pop-locations'></a>
+
+## Edgio POP locations
Abbreviation | Location | Region | | | |
XIJ | Kuwait | Europe
## Next steps
-* View [Azure CDN from Verizon POP locations by metro](cdn-pop-locations.md#partners).
+* View [Azure CDN from Edgio POP locations by metro](cdn-pop-locations.md#partners).
* Learn how to [create an Azure CDN profile](cdn-create-new-endpoint.md).
cdn Cdn Pop List Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-pop-list-api.md
# Retrieve the current POP IP list for Azure CDN
-## Retrieve the current Verizon POP IP list for Azure CDN
+<a name='retrieve-the-current-verizon-pop-ip-list-for-azure-cdn'></a>
-You can use the REST API to retrieve the set of IPs for Verizon's point of presence (POP) servers. These POP servers make requests to origin servers that are associated with Azure Content Delivery Network (CDN) endpoints on a Verizon profile (**Azure CDN Standard from Verizon** or **Azure CDN Premium from Verizon**). This set of IPs is different from the IPs that a client would see when making requests to the POPs.
+## Retrieve the current Edgio POP IP list for Azure CDN
+
+You can use the REST API to retrieve the set of IPs for Edgio's point of presence (POP) servers. These POP servers make requests to origin servers that are associated with Azure Content Delivery Network (CDN) endpoints on an Edgio profile (**Azure CDN Standard from Edgio** or **Azure CDN Premium from Edgio**). This set of IPs is different from the IPs that a client would see when making requests to the POPs.
For the syntax of the REST API operation for retrieving the POP list, see [Edge Nodes - List](/rest/api/cdn/edge-nodes/list).
Use the AzureFrontDoor.Backend [service tag](../virtual-network/service-tags-ove
## Typical use case
-For security purposes, you can use this IP list to enforce that requests to your origin server are made only from a valid Verizon POP. For example, if someone discovered the hostname or IP address for a CDN endpoint's origin server, one could make requests directly to the origin server, therefore bypassing the scaling and security capabilities provided by Azure CDN. By setting the IPs in the returned list as the only allowed IPs on an origin server, this scenario can be prevented. To ensure that you have the latest POP list, retrieve it at least once a day.
+For security purposes, you can use this IP list to enforce that requests to your origin server are made only from a valid Edgio POP. For example, if someone discovered the hostname or IP address for a CDN endpoint's origin server, they could make requests directly to the origin server, thereby bypassing the scaling and security capabilities provided by Azure CDN. By setting the IPs in the returned list as the only allowed IPs on an origin server, this scenario can be prevented. To ensure that you have the latest POP list, retrieve it at least once a day.
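For reference, here's a minimal sketch of the shape of the response returned by the [Edge Nodes - List](/rest/api/cdn/edge-nodes/list) operation. The field names follow the Microsoft.Cdn REST reference, but treat this as illustrative: the provider name may still read Verizon in API responses, and the regions and addresses below are documentation placeholders, not real POP ranges.

```json
{
  "value": [
    {
      "name": "Standard_Verizon",
      "properties": {
        "ipAddressGroups": [
          {
            "deliveryRegion": "All",
            "ipv4Addresses": [
              { "baseIpAddress": "192.0.2.0", "prefixLength": 24 }
            ],
            "ipv6Addresses": [
              { "baseIpAddress": "2001:db8::", "prefixLength": 48 }
            ]
          }
        ]
      }
    }
  ]
}
```

An origin allowlist built from this response should include every baseIpAddress/prefixLength pair across all address groups, and should be refreshed at least daily as noted above.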
## Next steps
cdn Cdn Pop Locations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-pop-locations.md
# Azure CDN Coverage by Metro > [!div class="op_single_selector"] > * [POP locations by region](cdn-pop-locations.md)
-> * [Verizon POP locations by abbreviation](cdn-pop-abbreviations.md)
+> * [Edgio POP locations by abbreviation](cdn-pop-abbreviations.md)
> * [Microsoft POP locations by abbreviation](microsoft-pop-abbreviations.md) >
This article lists current metros containing point-of-presence (POP) locations,
> POP city locations for **Azure CDN from Akamai** are not individually disclosed. >
-| Region | Verizon | Akamai |
+| Region | Edgio | Akamai |
|--|--|--|
| North America | Guadalajara, Mexico<br />Mexico City, Mexico<br />Monterrey, Mexico<br />Puebla, Mexico<br />Querétaro, Mexico<br />Atlanta, GA, USA<br />Boston, MA, USA<br />Chicago, IL, USA<br />Dallas, TX, USA<br />Denver, CO, USA<br />Detroit, MI, USA<br />Los Angeles, CA, USA<br />Miami, FL, USA<br />New York, NY, USA<br />Philadelphia, PA, USA<br />San Jose, CA, USA<br />Minneapolis, MN, USA<br />Pittsburgh, PA, USA<br />Seattle, WA, USA<br />Ashburn, VA, USA<br />Houston, TX, USA<br />Phoenix, AZ, USA | Canada<br />Mexico<br />USA |
| South America | Buenos Aires, Argentina<br />Rio de Janeiro, Brazil<br />São Paulo, Brazil<br />Valparaíso, Chile<br />Bogota, Colombia<br />Barranquilla, Colombia<br />Medellin, Colombia<br />Quito, Ecuador<br />Lima, Peru | Argentina<br />Brazil<br />Chile<br />Colombia<br />Ecuador<br />Peru<br />Uruguay |
cdn Cdn Purge Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-purge-endpoint.md
This guide walks you through purging assets from all edge nodes of an endpoint.
![Purge button](./media/cdn-purge-endpoint/cdn-purge-button.png) > [!IMPORTANT]
-> Purge requests take approximately 2 minutes with **Azure CDN from Verizon** (standard and premium), and approximately 10 seconds with **Azure CDN from Akamai**. Azure CDN has a limit of 100 concurrent purge requests at any given time at the profile level.
+> Purge requests take approximately 2 minutes with **Azure CDN from Edgio** (standard and premium), and approximately 10 seconds with **Azure CDN from Akamai**. Azure CDN has a limit of 100 concurrent purge requests at any given time at the profile level.
> >
cdn Cdn Query String Premium https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-query-string-premium.md
Title: Control Azure CDN caching behavior with query strings - premium tier
-description: Azure CDN query string caching controls how files are cached when a web request contains a query string. This article describes query string caching in the Azure CDN Premium from Verizon product.
+description: Azure CDN query string caching controls how files are cached when a web request contains a query string. This article describes query string caching in the Azure CDN Premium from Edgio product.
documentationcenter: ''
With Azure Content Delivery Network (CDN), you can control how files are cached for a web request that contains a query string. In a web request with a query string, the query string is that portion of the request that occurs after a question mark (?). A query string can contain one or more key-value pairs, in which the field name and its value are separated by an equals sign (=). Each key-value pair is separated by an ampersand (&). For example, http:\//www.contoso.com/content.mov?field1=value1&field2=value2. If there is more than one key-value pair in a query string of a request, their order does not matter. > [!IMPORTANT]
-> The standard and premium CDN products provide the same query string caching functionality, but the user interface is different. This article describes the interface for **Azure CDN Premium from Verizon**. For query string caching with Azure CDN standard products, see [Control Azure CDN caching behavior with query strings - standard tier](cdn-query-string.md).
+> The standard and premium CDN products provide the same query string caching functionality, but the user interface is different. This article describes the interface for **Azure CDN Premium from Edgio**. For query string caching with Azure CDN standard products, see [Control Azure CDN caching behavior with query strings - standard tier](cdn-query-string.md).
>
Three query string modes are available:
> [!IMPORTANT] > Because it takes time for the registration to propagate through the CDN, cache string settings changes might not be immediately visible. Propagation usually completes in 10 minutes. -
cdn Cdn Query String https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-query-string.md
With Azure Content Delivery Network (CDN), you can control how files are cached for a web request that contains a query string. In a web request with a query string, the query string is that portion of the request that occurs after a question mark (?). A query string can contain one or more key-value pairs, in which the field name and its value are separated by an equals sign (=). Each key-value pair is separated by an ampersand (&). For example, http:\//www.contoso.com/content.mov?field1=value1&field2=value2. If there's more than one key-value pair in a query string of a request, their order doesn't matter. > [!IMPORTANT]
-> The Azure CDN standard and premium products provide the same query string caching functionality, but the user interface is different. This article describes the interface for **Azure CDN Standard from Microsoft**, **Azure CDN Standard from Akamai** and **Azure CDN Standard from Verizon**. For query string caching with **Azure CDN Premium from Verizon**, see [Control Azure CDN caching behavior with query strings - premium tier](cdn-query-string-premium.md).
+> The Azure CDN standard and premium products provide the same query string caching functionality, but the user interface is different. This article describes the interface for **Azure CDN Standard from Microsoft**, **Azure CDN Standard from Akamai** and **Azure CDN Standard from Edgio**. For query string caching with **Azure CDN Premium from Edgio**, see [Control Azure CDN caching behavior with query strings - premium tier](cdn-query-string-premium.md).
Three query string modes are available:
Three query string modes are available:
> Because it takes time for the registration to propagate through Azure CDN, cache string settings changes might not be immediately visible: > - For **Azure CDN Standard from Microsoft** profiles, propagation usually completes in 10 minutes. > - For **Azure CDN Standard from Akamai** profiles, propagation usually completes within one minute.
-> - For **Azure CDN Standard from Verizon** and **Azure CDN Premium from Verizon** profiles, propagation usually completes in 10 minutes.
+> - For **Azure CDN Standard from Edgio** and **Azure CDN Premium from Edgio** profiles, propagation usually completes in 10 minutes.
## Next step - Learn how to [purge cached content](cdn-purge-endpoint.md) from Azure CDN endpoint.+
cdn Cdn Real Time Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-real-time-alerts.md
This document explains real-time alerts in Microsoft Azure CDN. This functionali
![Media Type with HTTP Large Object selected](./media/cdn-real-time-alerts/cdn-http-large.png) > [!IMPORTANT]
- > You must select **HTTP Large Object** as the **Media Type**. The other choices are not used by **Azure CDN from Verizon**. Failure to select **HTTP Large Object** causes your alert to never be triggered.
+ > You must select **HTTP Large Object** as the **Media Type**. The other choices are not used by **Azure CDN from Edgio**. Failure to select **HTTP Large Object** causes your alert to never be triggered.
> > 8. Create an **Expression** to monitor by selecting a **Metric**, **Operator**, and **Trigger value**.
This document explains real-time alerts in Microsoft Azure CDN. This functionali
* Analyze [Real-time stats in Azure CDN](cdn-real-time-stats.md) * Dig deeper with [advanced HTTP reports](cdn-advanced-http-reports.md) * Analyze [usage patterns](cdn-analyze-usage-patterns.md)-
cdn Cdn Restrict Access By Country Region https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-restrict-access-by-country-region.md
With the *geo-filtering* feature, you can create rules on specific paths on your
## Standard profiles
-These instructions are for **Azure CDN Standard from Akamai** and **Azure CDN Standard from Verizon** profiles.
+These instructions are for **Azure CDN Standard from Akamai** and **Azure CDN Standard from Edgio** profiles.
-For **Azure CDN Premium from Verizon** profiles, you must use the **Manage** portal to activate geo-filtering. For more information, see [Azure CDN Premium from Verizon profiles](#azure-cdn-premium-from-verizon-profiles).
+For **Azure CDN Premium from Edgio** profiles, you must use the **Manage** portal to activate geo-filtering. For more information, see [Azure CDN Premium from Edgio profiles](#azure-cdn-premium-from-verizon-profiles).
### Define the directory path To access the geo-filtering feature, select your CDN endpoint within the portal, then select **Geo-filtering** under SETTINGS in the left-hand menu.
After you have finished selecting the countries/regions, select **Save** to acti
To delete a rule, select it from the list on the **Geo-filtering** page, then choose **Delete**.
-## Azure CDN Premium from Verizon profiles
+<a name='azure-cdn-premium-from-verizon-profiles'></a>
-For **Azure CDN Premium from Verizon** profiles, the user interface for creating a geo-filtering rule is different:
+## Azure CDN Premium from Edgio profiles
+
+For **Azure CDN Premium from Edgio** profiles, the user interface for creating a geo-filtering rule is different:
1. From the top menu in your Azure CDN profile, select **Manage**.
-2. From the Verizon portal, select **HTTP Large**, then select **Country Filtering**.
+2. From the Edgio portal, select **HTTP Large**, then select **Country Filtering**.
:::image type="content" source="./media/cdn-filtering/cdn-geo-filtering-premium.png" alt-text="Screenshot shows how to select country filtering in Azure CDN" border="true":::
In the country/region filtering rules table, select the delete icon next to a ru
* Changes to your geo-filtering configuration don't take effect immediately: * For **Azure CDN Standard from Microsoft** profiles, propagation usually completes in 10 minutes. * For **Azure CDN Standard from Akamai** profiles, propagation usually completes within one minute.
- * For **Azure CDN Standard from Verizon** and **Azure CDN Premium from Verizon** profiles, propagation usually completes in 10 minutes.
+ * For **Azure CDN Standard from Edgio** and **Azure CDN Premium from Edgio** profiles, propagation usually completes in 10 minutes.
* This feature doesn't support wildcard characters (for example, *).
In the country/region filtering rules table, select the delete icon next to a ru
* Only one rule can be applied to the same relative path. That is, you can't create multiple country/region filters that point to the same relative path. However, because country/region filters are recursive, a folder can have multiple country/region filters. In other words, a subfolder of a previously configured folder can be assigned a different country/region filter.
-* The geo-filtering feature uses [country/region codes](microsoft-pop-abbreviations.md) codes to define the countries/regions from which a request is allowed or blocked for a secured directory. **Azure CDN from Verizon** and **Azure CDN from Akamai** profiles use ISO 3166-1 alpha-2 country codes to define the countries/regions from which a request are allowed or blocked for a secured directory.
-
+* The geo-filtering feature uses [country/region codes](microsoft-pop-abbreviations.md) to define the countries/regions from which a request is allowed or blocked for a secured directory. **Azure CDN from Edgio** and **Azure CDN from Akamai** profiles use ISO 3166-1 alpha-2 country codes to define the countries/regions from which a request is allowed or blocked for a secured directory.
cdn Cdn Sas Storage Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-sas-storage-support.md
This option is the simplest and uses a single SAS token, which is passed from Az
### Option 2: Using CDN security token authentication with a rewrite rule
-To use Azure CDN security token authentication, you must have an **Azure CDN Premium from Verizon** profile. This option is the most secure and customizable. Client access is based on the security parameters that you set on the security token. Once you've created and set up the security token, it's required on all CDN endpoint URLs. However, because of the URL Rewrite rule, the SAS token isn't required on the CDN endpoint. If the SAS token later becomes invalid, Azure CDN can't revalidate the content from the origin server.
+To use Azure CDN security token authentication, you must have an **Azure CDN Premium from Edgio** profile. This option is the most secure and customizable. Client access is based on the security parameters that you set on the security token. Once you've created and set up the security token, it's required on all CDN endpoint URLs. However, because of the URL Rewrite rule, the SAS token isn't required on the CDN endpoint. If the SAS token later becomes invalid, Azure CDN can't revalidate the content from the origin server.
1. [Create an Azure CDN security token](./cdn-token-auth.md#setting-up-token-authentication) and activate it by using the rules engine for the CDN endpoint and path where your users can access the file.
Because SAS parameters aren't visible to Azure CDN, Azure CDN can't change its d
| | |
| Start | The time that Azure CDN can begin to access the blob file. Due to clock skew (when a clock signal arrives at different times for different components), choose a time 15 minutes earlier if you want the asset to be available immediately. |
| End | The time after which Azure CDN can no longer access the blob file. Previously cached files on Azure CDN are still accessible. To control the file expiry time, either set the appropriate expiry time on the Azure CDN security token or purge the asset. |
-| Allowed IP addresses | Optional. If you're using **Azure CDN from Verizon**, you can set this parameter to the ranges defined in [Azure CDN from Verizon Edge Server IP Ranges](./cdn-pop-list-api.md). If you're using **Azure CDN from Akamai**, you can't set the IP ranges parameter because the IP addresses aren't static.|
+| Allowed IP addresses | Optional. If you're using **Azure CDN from Edgio**, you can set this parameter to the ranges defined in [Azure CDN from Edgio Edge Server IP Ranges](./cdn-pop-list-api.md). If you're using **Azure CDN from Akamai**, you can't set the IP ranges parameter because the IP addresses aren't static.|
| Allowed protocols | The protocol(s) allowed for a request made with the account SAS. The HTTPS setting is recommended.| ## Next steps
cdn Cdn Storage Custom Domain Https https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-storage-custom-domain-https.md
Azure CDN ignores any restrictions added to the SAS token. For example, all SAS
If you create multiple SAS URLs for the same blob endpoint, consider enabling query string caching. Doing so ensures that each URL is treated as a unique entity. For more information, see [Controlling Azure CDN caching behavior with query strings](cdn-query-string.md). ## HTTP-to-HTTPS redirection
-You can elect to redirect HTTP traffic to HTTPS by creating a URL redirect rule with the [Standard rules engine](cdn-standard-rules-engine.md) or the [Verizon Premium rules engine](cdn-verizon-premium-rules-engine.md). Standard Rules engine is available only for Azure CDN from Microsoft profiles, while Verizon premium rules engine is available only from Azure CDN Premium from Verizon profiles.
+You can elect to redirect HTTP traffic to HTTPS by creating a URL redirect rule with the [Standard rules engine](cdn-standard-rules-engine.md) or the [Edgio Premium rules engine](cdn-verizon-premium-rules-engine.md). The Standard rules engine is available only for Azure CDN from Microsoft profiles, while the Edgio Premium rules engine is available only for Azure CDN Premium from Edgio profiles. A declarative sketch of a Standard rules engine redirect rule follows the portal examples below.
![Microsoft redirect rule](./media/cdn-storage-custom-domain-https/cdn-standard-redirect-rule.png) In the above rule, leaving Hostname, Path, Query string, and Fragment results in the incoming values being used in the redirect.
-![Verizon redirect rule](./media/cdn-storage-custom-domain-https/cdn-url-redirect-rule.png)
+![Edgio redirect rule](./media/cdn-storage-custom-domain-https/cdn-url-redirect-rule.png)
In the above rule, *Cdn-endpoint-name* refers to the name that you configured for your CDN endpoint, which you can select from the drop-down list. The value for *origin-path* refers to the path within your origin storage account where your static content resides. If you're hosting all static content in a single container, replace *origin-path* with the name of that container.
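For Azure CDN Standard from Microsoft profiles, the same HTTP-to-HTTPS redirect can also be expressed declaratively in the endpoint's delivery policy when you deploy with templates. The following is a minimal sketch assuming the Microsoft.Cdn delivery rule schema; verify the exact typeName and redirectType values against the current REST reference before use.

```json
{
  "deliveryPolicy": {
    "rules": [
      {
        "name": "httpsRedirect",
        "order": 1,
        "conditions": [
          {
            "name": "RequestScheme",
            "parameters": {
              "typeName": "DeliveryRuleRequestSchemeConditionParameters",
              "operator": "Equal",
              "matchValues": [ "HTTP" ]
            }
          }
        ],
        "actions": [
          {
            "name": "UrlRedirect",
            "parameters": {
              "typeName": "DeliveryRuleUrlRedirectActionParameters",
              "redirectType": "Moved",
              "destinationProtocol": "Https"
            }
          }
        ]
      }
    ]
  }
}
```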
When you access blobs through Azure CDN, you pay [Blob storage prices](https://a
For example, if you have a storage account in the United States that's being accessed using Azure CDN and someone in Europe attempts to access one of the blobs in that storage account via Azure CDN, Azure CDN first checks the POP closest to Europe for that blob. If found, Azure CDN accesses that copy of the blob and uses CDN pricing, because it's being accessed on Azure CDN. If not found, Azure CDN copies the blob to the POP server, which results in egress and transaction charges as specified in the Blob storage pricing, and then accesses the file on the POP server, which results in Azure CDN billing. ## Next steps
-[Tutorial: Set Azure CDN caching rules](cdn-caching-rules-tutorial.md)
+[Tutorial: Set Azure CDN caching rules](cdn-caching-rules-tutorial.md)
cdn Cdn Traffic Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-traffic-manager.md
Using Azure Traffic Manager in this way ensures your web application is always a
This article provides guidance and an example of how to configure failover with profiles from:
-* **Azure CDN Standard from Verizon**
+* **Azure CDN Standard from Edgio**
* **Azure CDN Standard from Akamai** **Azure CDN from Microsoft** is also supported.
This article provides guidance and an example of how to configure failover with
Create two or more Azure CDN profiles and endpoints with different providers. 1. Create two CDN profiles:
- * **Azure CDN Standard from Verizon**
+ * **Azure CDN Standard from Edgio**
* **Azure CDN Standard from Akamai** Create the profiles by following the steps in [Create a new CDN profile](cdn-create-new-endpoint.md#create-a-new-cdn-profile).
After you configure your CDN and Traffic Manager profiles, follow these steps to
`cdnverify.cdndemo101.dustydogpetcare.online CNAME cdnverify.cdndemo101verizon.azureedge.net`
-4. From your Azure CDN profile, select the second CDN endpoint (Verizon) and repeat step 2. Select **Add custom domain**, and enter **cdndemo101.dustydogpetcare.online**.
+4. From your Azure CDN profile, select the second CDN endpoint (Edgio) and repeat step 2. Select **Add custom domain**, and enter **cdndemo101.dustydogpetcare.online**.
After you complete these steps, your multi-CDN service with failover capabilities is configured with Azure Traffic Manager.
cdn Cdn Troubleshoot Compression https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-troubleshoot-compression.md
First, we should do a quick sanity check on the request. You can use your brows
### Verify compression settings (standard CDN profiles) > [!NOTE]
-> This step applies only if your CDN profile is an **Azure CDN Standard from Microsoft**, **Azure CDN Standard from Verizon**, or **Azure CDN Standard from Akamai** profile.
+> This step applies only if your CDN profile is an **Azure CDN Standard from Microsoft**, **Azure CDN Standard from Edgio**, or **Azure CDN Standard from Akamai** profile.
> >
Navigate to your endpoint in the [Azure portal](https://portal.azure.com) and se
### Verify compression settings (Premium CDN profiles) > [!NOTE]
-> This step applies only if your CDN profile is an **Azure CDN Premium from Verizon** profile.
+> This step applies only if your CDN profile is an **Azure CDN Premium from Edgio** profile.
> >
Navigate to your endpoint in the [Azure portal](https://portal.azure.com) and se
![CDN premium compression settings](./media/cdn-troubleshoot-compression/cdn-compression-settings-premium.png)
-### Verify the content is cached (Verizon CDN profiles)
+<a name='verify-the-content-is-cached-verizon-cdn-profiles'></a>
+
+### Verify the content is cached (Edgio CDN profiles)
> [!NOTE]
-> This step applies only if your CDN profile is an **Azure CDN Standard from Verizon** or **Azure CDN Premium from Verizon** profile.
+> This step applies only if your CDN profile is an **Azure CDN Standard from Edgio** or **Azure CDN Premium from Edgio** profile.
> >
Using your browser's developer tools, check the response headers to ensure the f
![CDN response headers](./media/cdn-troubleshoot-compression/cdn-response-headers.png)
-### Verify the file meets the size requirements (Verizon CDN profiles)
+<a name='verify-the-file-meets-the-size-requirements-verizon-cdn-profiles'></a>
+
+### Verify the file meets the size requirements (Edgio CDN profiles)
> [!NOTE]
-> This step applies only if your CDN profile is an **Azure CDN Standard from Verizon** or **Azure CDN Premium from Verizon** profile.
+> This step applies only if your CDN profile is an **Azure CDN Standard from Edgio** or **Azure CDN Premium from Edgio** profile.
> >
The **Via** HTTP header indicates to the web server that the request is being pa
* **IIS 6**: [Set HcNoCompressionForProxies="FALSE" in the IIS Metabase properties](/previous-versions/iis/6.0-sdk/ms525390(v=vs.90)) * **IIS 7 and up**: [Set both **noCompressionForHttp10** and **noCompressionForProxies** to False in the server configuration](https://www.iis.net/configreference/system.webserver/httpcompression)-
cdn Cdn Troubleshoot Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-troubleshoot-endpoint.md
There are several possible causes, including:
> After creating a CDN endpoint, it will not immediately be available for use, as it takes time for the registration to propagate through the CDN: > - For **Azure CDN Standard from Microsoft** profiles, propagation usually completes in ten minutes. > - For **Azure CDN Standard from Akamai** profiles, propagation usually completes within one minute.
-> - For **Azure CDN Standard from Verizon** and **Azure CDN Premium from Verizon** profiles, propagation usually completes within 90 minutes.
+> - For **Azure CDN Standard from Edgio** and **Azure CDN Premium from Edgio** profiles, propagation usually completes within 90 minutes.
> > If you complete the steps in this document and you're still getting 404 responses, consider waiting a few hours to check again before opening a support ticket. >
Lastly, we should verify our **Origin path**. By default this path is blank. Y
In the example endpoint, we wanted all resources on the storage account to be available, so **Origin path** was left blank. Therefore, a request to https:\//cdndocdemo.azureedge.net/publicblob/lorem.txt results in a connection from the endpoint to cdndocdemo.core.windows.net that requests */publicblob/lorem.txt*. Likewise, a request for https:\//cdndocdemo.azureedge.net/donotcache/status.png results in the endpoint requesting */donotcache/status.png* from the origin. But what if you don't want to use the CDN for every path on your origin? Say you only wanted to expose the *public blob* path. If we enter */publicblob* in the **Origin path** field, that is going to cause the endpoint to insert */publicblob* before every request being made to the origin. So the request for https:\//cdndocdemo.azureedge.net/publicblob/lorem.txt now takes the request portion of the URL, */publicblob/lorem.txt*, and append */publicblob* to the beginning. Resulting in a request for */publicblob/publicblob/lorem.txt* from the origin. If that path doesn't resolve to an actual file, the origin returns a 404 status. The correct URL to retrieve lorem.txt in this example would actually be https:\//cdndocdemo.azureedge.net/lorem.txt. We don't include the */publicblob* path at all, because the request portion of the URL is */lorem.txt* and the endpoint adds */publicblob*, resulting in */publicblob/lorem.txt* being the request passed to the origin.-
cdn Cdn Verizon Custom Reports https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-verizon-custom-reports.md
Title: Custom Reports from Verizon | Microsoft Docs
+ Title: Custom Reports from Edgio | Microsoft Docs
description: 'You can view usage patterns for your CDN by using the following reports: Bandwidth, Data Transferred, Hits, Cache Statuses, Cache Hit Ratio, IPV4/IPV6 Data Transferred.' documentationcenter: ''
Last updated 10/11/2017
-# Custom Reports from Verizon
+# Custom Reports from Edgio
[!INCLUDE [cdn-verizon-only](../../includes/cdn-verizon-only.md)]
-By using Verizon Custom Reports via the Manage portal for Verizon profiles, you can define the type of data to be collected for edge CNAMEs reports.
+By using Edgio Custom Reports via the Manage portal for Edgio profiles, you can define the type of data to be collected for edge CNAME reports.
-## Accessing Verizon Custom Reports
+<a name='accessing-verizon-custom-reports'></a>
+
+## Accessing Edgio Custom Reports
1. From the CDN profile blade, click the **Manage** button. ![CDN profile Manage button](./media/cdn-reports/cdn-manage-btn.png)
You can export the data in Excel format by clicking the Excel symbol to the righ
## Considerations Reports can be generated only within the last 18 months.-
cdn Cdn Verizon Http Headers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-verizon-http-headers.md
Title: Verizon-specific HTTP headers for Azure CDN rules engine | Microsoft Docs
-description: This article describes how to use Verizon-specific HTTP headers with Azure CDN rules engine.
+ Title: Edgio-specific HTTP headers for Azure CDN rules engine | Microsoft Docs
+description: This article describes how to use Edgio-specific HTTP headers with Azure CDN rules engine.
-# Verizon-specific HTTP headers for Azure CDN rules engine
+# Edgio-specific HTTP headers for Azure CDN rules engine
-For **Azure CDN Premium from Verizon** products, when an HTTP request is sent to the origin server, the point-of-presence (POP) server can add one or more reserved headers (or proxy special headers) in the client request to the POP. These headers are in addition to the standard forwarding headers received. For information about standard request headers, see [Request fields](https://en.wikipedia.org/wiki/List_of_HTTP_header_fields#Request_fields).
+For **Azure CDN Premium from Edgio** products, when an HTTP request is sent to the origin server, the point-of-presence (POP) server can add one or more reserved headers (or proxy special headers) in the client request to the POP. These headers are in addition to the standard forwarding headers received. For information about standard request headers, see [Request fields](https://en.wikipedia.org/wiki/List_of_HTTP_header_fields#Request_fields).
If you want to prevent one of these reserved headers from being added in the Azure CDN (Content Delivery Network) POP request to the origin server, you must create a rule with the [Proxy Special Headers feature](https://docs.vdms.com/cdn/Content/HRE/F/Proxy-Special-Headers.htm) in the rules engine. In this rule, exclude the header you want to remove from the default list of headers in the headers field. If you've enabled the [Debug Cache Response Headers feature](https://docs.vdms.com/cdn/Content/HRE/F/Debug-Cache-Response-Headers.htm), be sure to add the necessary `X-EC-Debug` headers.
For example, to remove the `Via` header, the headers field of the rule should in
![Proxy Special Headers rule](./media/cdn-http-headers/cdn-proxy-special-header-rule.png)
-The following table describes the headers that you can add to the Verizon CDN POP in the request:
+The following table describes the headers that you can add to the Edgio CDN POP in the request:
Request header | Description | Example |-|--
If the customer origin's HTTP Host Header option is set to blank, then the `Host
A POP server adds or overwrites the `X-Gateway-List` request header when either of the following conditions is met:
- The request points to the ADN platform.
- The request is forwarded to a customer origin server that is protected by the Origin Shield feature.
cdn Cdn Verizon Premium Rules Engine Reference Conditional Expressions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-verizon-premium-rules-engine-reference-conditional-expressions.md
Title: 'Conditional expressions for Azure CDN - Verizon Premium rules engine'
-description: Reference documentation for Azure CDN from Verizon Premium rules engine match conditions and features.
+ Title: 'Conditional expressions for Azure CDN - Edgio Premium rules engine'
+description: Reference documentation for Azure CDN from Edgio Premium rules engine match conditions and features.
-# Azure CDN from Verizon Premium rules engine conditional expressions
+# Azure CDN from Edgio Premium rules engine conditional expressions
-For more information on Verizon Premium rules engine expressions, see [Independent Conditional Expressions](https://docs.vdms.com/cdn/index.html#Whats_New/Whats-New-RE.htm#RuleSetup).
+For more information on Edgio Premium rules engine expressions, see [Independent Conditional Expressions](https://docs.vdms.com/cdn/index.html#Whats_New/Whats-New-RE.htm#RuleSetup).
## Next steps
For more information on Verizon Premium rules engine expressions, see [Independe
- [Rules engine reference](cdn-verizon-premium-rules-engine-reference.md) - [Rules engine match conditions](cdn-verizon-premium-rules-engine-reference-match-conditions.md) - [Rules engine features](cdn-verizon-premium-rules-engine-reference-features.md)-- [Overriding default HTTP behavior using the rules engine](cdn-verizon-premium-rules-engine.md)
+- [Overriding default HTTP behavior using the rules engine](cdn-verizon-premium-rules-engine.md)
cdn Cdn Verizon Premium Rules Engine Reference Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-verizon-premium-rules-engine-reference-features.md
Title: Azure CDN from Verizon Premium rules engine features
-description: Reference documentation for Azure CDN from Verizon Premium rules engine features.
+ Title: Azure CDN from Edgio Premium rules engine features
+description: Reference documentation for Azure CDN from Edgio Premium rules engine features.
-# Azure CDN from Verizon Premium rules engine features
+# Azure CDN from Edgio Premium rules engine features
This article lists detailed descriptions of the available features for Azure Content Delivery Network (CDN) [Rules Engine](cdn-verizon-premium-rules-engine.md). The third part of a rule is the feature. A feature defines the type of action that is applied to the request type that gets identified by a set of match conditions.
-## <a name="top"></a>Azure CDN from Verizon Premium rules engine features reference
+## <a name="top"></a>Azure CDN from Edgio Premium rules engine features reference
The available types of features are:
These features allow a request to be redirected or rewritten to a different URL.
**[Back to the top](#top)**
-For the most recent features, see the [Verizon Rules Engine documentation](https://docs.vdms.com/cdn/index.html#Quick_References/HRE_QR.htm#Actions).
+For the most recent features, see the [Edgio Rules Engine documentation](https://docs.vdms.com/cdn/index.html#Quick_References/HRE_QR.htm#Actions).
## Next steps
cdn Cdn Verizon Premium Rules Engine Reference Match Conditions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-verizon-premium-rules-engine-reference-match-conditions.md
Title: Azure CDN from Verizon Premium rules engine match conditions
-description: Reference documentation for Azure Content Delivery Network from Verizon Premium rules engine match conditions.
+ Title: Azure CDN from Edgio Premium rules engine match conditions
+description: Reference documentation for Azure Content Delivery Network from Edgio Premium rules engine match conditions.
-# Azure CDN from Verizon Premium rules engine match conditions
+# Azure CDN from Edgio Premium rules engine match conditions
-This article lists detailed descriptions of the available match conditions for the Azure Content Delivery Network (CDN) from Verizon Premium [rules engine](cdn-verizon-premium-rules-engine.md).
+This article lists detailed descriptions of the available match conditions for the Azure Content Delivery Network (CDN) from Edgio Premium [rules engine](cdn-verizon-premium-rules-engine.md).
The second part of a rule is the match condition. A match condition identifies specific types of requests for which a set of features is performed.
These match conditions are designed to identify requests based on their properti
**[Back to Top](#top)**
-For the most recent match conditions, see the [Verizon Rules Engine documentation](https://docs.vdms.com/cdn/index.html#Quick_References/HRE_QR.htm#Conditio).
+For the most recent match conditions, see the [Edgio Rules Engine documentation](https://docs.vdms.com/cdn/index.html#Quick_References/HRE_QR.htm#Conditio).
## Next steps
cdn Cdn Verizon Premium Rules Engine Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-verizon-premium-rules-engine-reference.md
Last updated 02/27/2023
-# Azure CDN from Verizon Premium rules engine reference
+# Azure CDN from Edgio Premium rules engine reference
This article lists detailed descriptions of the available match conditions and features for the Azure Content Delivery Network (CDN) [rules engine](cdn-verizon-premium-rules-engine.md).
cdn Cdn Verizon Premium Rules Engine https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-verizon-premium-rules-engine.md
Title: Override HTTP behavior with Azure CDN - Verizon Premium rules engine
-description: The rules engine allows you to customize how HTTP requests are handled by Azure CDN from Verizon Premium, such as blocking the delivery of certain types of content, define a caching policy, and modify HTTP headers.
+ Title: Override HTTP behavior with Azure CDN - Edgio Premium rules engine
+description: The rules engine allows you to customize how HTTP requests are handled by Azure CDN from Edgio Premium, such as blocking the delivery of certain types of content, defining a caching policy, and modifying HTTP headers.
-# Override HTTP behavior using the Azure CDN from Verizon Premium rules engine
+# Override HTTP behavior using the Azure CDN from Edgio Premium rules engine
[!INCLUDE [cdn-premium-feature](../../includes/cdn-premium-feature.md)]
To access the rules engine, you must first select **Manage** from the top of the
Select the **ADN** tab, then select **Rules Engine**.
- ADN is a term used by Verizon to specify DSA content. Any rules you create here are ignored by any endpoints in your profile that are not optimized for DSA.
+ ADN is a term used by Edgio to specify DSA content. Any rules you create here are ignored by any endpoints in your profile that are not optimized for DSA.
:::image type="content" source="./media/cdn-rules-engine/cdn-dsa-rules-engine.png" alt-text="Screenshot of rules engine for DSA.":::
cdn Microsoft Pop Abbreviations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/microsoft-pop-abbreviations.md
> [!div class="op_single_selector"] > * [POP locations by region](cdn-pop-locations.md) > * [Microsoft POP locations by abbreviation](microsoft-pop-abbreviations.md)
-> * [Verizon POP locations by abbreviation](cdn-pop-abbreviations.md)
+> * [Edgio POP locations by abbreviation](cdn-pop-abbreviations.md)
This article lists Microsoft POP locations, sorted by POP abbreviation, for **Azure CDN**.
cdn Onboard Apex Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/onboard-apex-domain.md
After you've registered your custom domain, you can then add it to your CDN endp
It can take some time for the new custom domain settings to propagate to all CDN edge nodes: - For **Azure CDN Standard from Microsoft** profiles, propagation usually completes in 10 minutes. - For **Azure CDN Standard from Akamai** profiles, propagation usually completes within one minute.
- - For **Azure CDN Standard from Verizon** and **Azure CDN Premium from Verizon** profiles, propagation usually completes in 10 minutes.
+ - For **Azure CDN Standard from Edgio** and **Azure CDN Premium from Edgio** profiles, propagation usually completes in 10 minutes.
## Enable HTTPS on your custom domain
cdn Subscription Offerings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/subscription-offerings.md
Azure CDN from Microsoft is the only CDN offering that is available to trial sub
Azure CDN from Akamai is unavailable for Pay-as-you-go subscriptions.
-Bandwidth for Azure CDN from Microsoft and Verizon gets throttled until the subscription is determined to be in good standing and has a sufficient payment history. The process for determining the subscription status and having throttling removed happens automatically after the first payment has been received.
+Bandwidth for Azure CDN from Microsoft and Edgio gets throttled until the subscription is determined to be in good standing and has a sufficient payment history. The process for determining the subscription status and having throttling removed happens automatically after the first payment has been received.
If you have made a payment and throttling hasn't been removed, you can request to do so by [contacting support](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade).
certification Program Requirements Edge Secured Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/certification/program-requirements-edge-secured-core.md
Validation|Device to be validated through toolset to ensure the device supports
::: zone pivot="platform-sphere" ## Azure Sphere platform Support
-The Mediatek MT3620AN must be included in your design. Additional guidance for building secured Azure Sphere applications can be within the [Azure Sphere application notes](https://learn.microsoft.com/azure-sphere/app-notes/app-notes-overview).
+The Mediatek MT3620AN must be included in your design. Additional guidance for building secured Azure Sphere applications can be found in the [Azure Sphere application notes](/azure-sphere/app-notes/app-notes-overview).
## Azure Sphere Hardware/Firmware Requirements
The Mediatek MT3620AN must be included in your design. Additional guidance for b
|Description|The purpose of this requirement is to validate that sensitive data can be encrypted on nonvolatile storage.| |Validation Type|Prevalidated, no additional validation is required| |Validation|Provided by Microsoft|
-|Resources|[Data at rest protection on Azure Sphere](https://learn.microsoft.com/azure-sphere/app-notes/app-notes-overview)|
+|Resources|[Data at rest protection on Azure Sphere](/azure-sphere/app-notes/app-notes-overview)|
</br>
The Mediatek MT3620AN must be included in your design. Additional guidance for b
|Description|The purpose of this requirement is to make sure devices can report security information and events by sending data to a Microsoft telemetry service.| |Validation Type|Prevalidated, no additional validation is required| |Validation|Provided by Microsoft|
-|Resources|[Collect and interpret error data - Azure Sphere](https://learn.microsoft.com/azure-sphere/deployment/interpret-error-data?tabs=cliv2beta)</br>[Configure crash dumps - Azure Sphere](https://learn.microsoft.com/azure-sphere/deployment/configure-crash-dumps)|
+|Resources|[Collect and interpret error data - Azure Sphere](/azure-sphere/deployment/interpret-error-data?tabs=cliv2beta)</br>[Configure crash dumps - Azure Sphere](/azure-sphere/deployment/configure-crash-dumps)|
</br>
The Mediatek MT3620AN must be included in your design. Additional guidance for b
|Description|The purpose of this policy is to ensure that there's a mechanism for collecting and distributing reports of vulnerabilities in the product.| |Validation Type|Prevalidated, no additional validation is required| |Validation|Azure Sphere vulnerabilities are collected by Microsoft through MSRC and are published to customers through the Tech Community Blog, Azure Sphere "What's New" page, and through Mitre's CVE database.|
-|Resources|<ul><li>[Report an issue and submission guidelines](https://www.microsoft.com/msrc/faqs-report-an-issue)</li><li>[What's new - Azure Sphere](https://learn.microsoft.com/azure-sphere/product-overview/whats-new)</li><li>[Azure Sphere CVEs](https://learn.microsoft.com/azure-sphere/deployment/azure-sphere-cves)|</li></ul>
+|Resources|<ul><li>[Report an issue and submission guidelines](https://www.microsoft.com/msrc/faqs-report-an-issue)</li><li>[What's new - Azure Sphere](/azure-sphere/product-overview/whats-new)</li><li>[Azure Sphere CVEs](/azure-sphere/deployment/azure-sphere-cves)</li></ul>|
</br>
chaos-studio Chaos Studio Fault Library https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-fault-library.md
The faults listed in this article are currently available for use. To understand
| Target type | Microsoft-Agent |
| Supported OS types | Windows, Linux. |
| Description | Adds CPU pressure, up to the specified value, on the VM where this fault is injected during the fault action. The artificial CPU pressure is removed at the end of the duration or if the experiment is canceled. On Windows, the **% Processor Utility** performance counter is used at fault start to determine current CPU percentage, which is subtracted from the `pressureLevel` defined in the fault so that **% Processor Utility** hits approximately the `pressureLevel` defined in the fault parameters. |
-| Prerequisites | **Linux**: Running the fault on a Linux VM requires the **stress-ng** utility to be installed. To install it, use the package manager for your Linux distro:<ul><li>APT command to install stress-ng: `sudo apt-get update && sudo apt-get -y install unzip && sudo apt-get -y install stress-ng`</li><li>YUM command to install stress-ng: `sudo dnf -y install https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm && sudo yum -y install stress-ng` |
+| Prerequisites | **Linux**: The **stress-ng** utility needs to be installed. This happens automatically as part of agent installation, using the default package manager, on Debian-based systems (including Ubuntu), Red Hat Enterprise Linux, CentOS, and OpenSUSE. For other distributions, you must install **stress-ng** manually. |
| | **Windows**: None. |
| Urn | urn:csci:microsoft:agent:cpuPressure/1.0 |
| Parameters (key, value) |
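For illustration, here's a minimal action sketch for this fault, mirroring the sample JSON fragments shown for the other agent faults in this article. The pressureLevel value is illustrative, and virtualMachineScaleSetInstances applies only when targeting a virtual machine scale set.

```json
{
  "type": "continuous",
  "name": "urn:csci:microsoft:agent:cpuPressure/1.0",
  "parameters": [
    { "key": "pressureLevel", "value": "95" },
    { "key": "virtualMachineScaleSetInstances", "value": "[0,1,2]" }
  ]
}
```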
Known issues on Linux:
| Target type | Microsoft-Agent |
| Supported OS types | Windows, Linux. |
| Description | Adds physical memory pressure, up to the specified value, on the VM where this fault is injected during the fault action. The artificial physical memory pressure is removed at the end of the duration or if the experiment is canceled. |
-| Prerequisites | **Linux**: Running the fault on a Linux VM requires the **stress-ng** utility to be installed. To install it, use the package manager for your Linux distro:<ul><li>APT command to install stress-ng: `sudo apt-get update && sudo apt-get -y install unzip && sudo apt-get -y install stress-ng`</li><li>YUM command to install stress-ng: `sudo dnf -y install https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm && sudo yum -y install stress-ng` |
+| Prerequisites | **Linux**: The **stress-ng** utility needs to be installed. This happens automatically as part of agent installation, using the default package manager, on Debian-based systems (including Ubuntu), Red Hat Enterprise Linux, CentOS, and OpenSUSE. For other distributions, you must install **stress-ng** manually. |
| | **Windows**: None. |
| Urn | urn:csci:microsoft:agent:physicalMemoryPressure/1.0 |
| Parameters (key, value) | |
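By analogy with the CPU pressure fault, a sketch of a physical memory pressure action might look like the following. The pressureLevel parameter name here is an assumption carried over from the CPU pressure fault; confirm it against the full parameter table before use.

```json
{
  "type": "continuous",
  "name": "urn:csci:microsoft:agent:physicalMemoryPressure/1.0",
  "parameters": [
    { "key": "pressureLevel", "value": "90" }
  ]
}
```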
Currently, the Windows agent doesn't reduce memory pressure when other applicati
| Property | Value |
|-|-|
-| Capability name | DiskIOPressure-1.0 |
+| Capability name | DiskIOPressure-1.1 |
| Target type | Microsoft-Agent |
| Supported OS types | Windows |
-| Description | Uses the [diskspd utility](https://github.com/Microsoft/diskspd/wiki) to add disk pressure to the primary storage of the VM where it's injected during the fault action. This fault has five different modes of execution. The artificial disk pressure is removed at the end of the duration or if the experiment is canceled. |
+| Description | Uses the [diskspd utility](https://github.com/Microsoft/diskspd/wiki) to add disk pressure to a virtual machine. Pressure is added to the primary disk by default, or to the disk specified with the targetTempDirectory parameter. This fault has five different modes of execution. The artificial disk pressure is removed at the end of the duration or if the experiment is canceled. |
| Prerequisites | None. |
-| Urn | urn:csci:microsoft:agent:diskIOPressure/1.0 |
+| Urn | urn:csci:microsoft:agent:diskIOPressure/1.1 |
| Parameters (key, value) | |
| pressureMode | The preset mode of disk pressure to add to the primary storage of the VM. Must be one of the `PressureModes` in the following table. |
+| targetTempDirectory | (Optional) The directory to use for applying disk pressure. For example, "D:/Temp". If the parameter is not included, pressure is added to the primary disk. |
| virtualMachineScaleSetInstances | An array of instance IDs when this fault is applied to a virtual machine scale set. Required for virtual machine scale sets. |

### Pressure modes
Currently, the Windows agent doesn't reduce memory pressure when other applicati
"actions": [ { "type": "continuous",
- "name": "urn:csci:microsoft:agent:diskIOPressure/1.0",
+ "name": "urn:csci:microsoft:agent:diskIOPressure/1.1",
"parameters": [ { "key": "pressureMode", "value": "PremiumStorageP10IOPS" },
+ {
+ "key": "targetTempDirectory",
+ "value": "C:/temp/"
+ },
{ "key": "virtualMachineScaleSetInstances", "value": "[0,1,2]"
Currently, the Windows agent doesn't reduce memory pressure when other applicati
| Property | Value |
|-|-|
-| Capability name | LinuxDiskIOPressure-1.0 |
+| Capability name | LinuxDiskIOPressure-1.1 |
| Target type | Microsoft-Agent |
| Supported OS types | Linux |
-| Description | Uses stress-ng to apply pressure to the disk. One or more worker processes are spawned that perform I/O processes with temporary files. For information on how pressure is applied, see the [stress-ng](https://wiki.ubuntu.com/Kernel/Reference/stress-ng) article. |
-| Prerequisites | Running the fault on a Linux VM requires the **stress-ng** utility to be installed. To install it, use the package manager for your Linux distro:<ul><li>APT command to install stress-ng: `sudo apt-get update && sudo apt-get -y install unzip && sudo apt-get -y install stress-ng`</li><li>YUM command to install stress-ng: `sudo dnf -y install https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm && sudo yum -y install stress-ng` |
-| Urn | urn:csci:microsoft:agent:linuxDiskIOPressure/1.0 |
+| Description | Uses stress-ng to apply pressure to the disk. One or more worker processes are spawned that perform I/O operations on temporary files. Pressure is added to the primary disk by default, or to the disk specified with the targetTempDirectory parameter. For information on how pressure is applied, see the [stress-ng](https://wiki.ubuntu.com/Kernel/Reference/stress-ng) article. |
+| Prerequisites | The **stress-ng** utility needs to be installed. This happens automatically as part of agent installation, using the default package manager, on Debian-based systems (including Ubuntu), Red Hat Enterprise Linux, CentOS, and OpenSUSE. For other distributions, you must install **stress-ng** manually. |
+| Urn | urn:csci:microsoft:agent:linuxDiskIOPressure/1.1 |
| Parameters (key, value) | |
| workerCount | Number of worker processes to run. Setting `workerCount` to 0 generates as many worker processes as there are processors. |
| fileSizePerWorker | Size of the temporary file that a worker performs I/O operations against. Integer plus a unit in bytes (b), kilobytes (k), megabytes (m), or gigabytes (g) (for example, 4 m for 4 megabytes and 256 g for 256 gigabytes). |
| blockSize | Block size to be used for disk I/O operations, capped at 4 megabytes. Integer plus a unit in bytes, kilobytes, or megabytes (for example, 512 k for 512 kilobytes). |
+| targetTempDirectory | (Optional) The directory to use for applying disk pressure. For example, "/tmp/". If the parameter is not included, pressure is added to the primary disk. |
| virtualMachineScaleSetInstances | An array of instance IDs when this fault is applied to a virtual machine scale set. Required for virtual machine scale sets. |

### Sample JSON
Currently, the Windows agent doesn't reduce memory pressure when other applicati
"actions": [ { "type": "continuous",
- "name": "urn:csci:microsoft:agent:linuxDiskIOPressure/1.0",
+ "name": "urn:csci:microsoft:agent:linuxDiskIOPressure/1.1",
"parameters": [ { "key": "workerCount",
Currently, the Windows agent doesn't reduce memory pressure when other applicati
"key": "blockSize", "value": "256k" },
+ {
+ "key": "targetTempDirectory",
+ "value": "/tmp/"
+ },
{ "key": "virtualMachineScaleSetInstances", "value": "[0,1,2]"
Currently, the Windows agent doesn't reduce memory pressure when other applicati
| Target type | Microsoft-Agent |
| Supported OS types | Linux |
| Description | Runs any stress-ng command by passing arguments directly to stress-ng. Useful when one of the predefined faults for stress-ng doesn't meet your needs. |
-| Prerequisites | Running the fault on a Linux VM requires the **stress-ng** utility to be installed. To install it, use the package manager for your Linux distro:<ul><li>APT command to install stress-ng: `sudo apt-get update && sudo apt-get -y install unzip && sudo apt-get -y install stress-ng`</li><li>YUM command to install stress-ng: `sudo dnf -y install https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm && sudo yum -y install stress-ng` |
+| Prerequisites | The **stress-ng** utility needs to be installed. This happens automatically as part of agent installation, using the default package manager, on Debian-based systems (including Ubuntu), Red Hat Enterprise Linux, CentOS, and OpenSUSE. For other distributions, you must install **stress-ng** manually. |
| Urn | urn:csci:microsoft:agent:stressNg/1.0 |
| Parameters (key, value) | |
| stressNgArguments | One or more arguments to pass to the stress-ng process. For information on possible stress-ng arguments, see the [stress-ng](https://wiki.ubuntu.com/Kernel/Reference/stress-ng) article. |
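A minimal action sketch for this fault follows; the stress-ng arguments shown are illustrative only, and any valid stress-ng arguments can be passed through stressNgArguments.

```json
{
  "type": "continuous",
  "name": "urn:csci:microsoft:agent:stressNg/1.0",
  "parameters": [
    { "key": "stressNgArguments", "value": "--cpu 1 --cpu-method matrixprod" }
  ]
}
```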
Currently, the Windows agent doesn't reduce memory pressure when other applicati
| Property | Value |
|-|-|
-| Capability name | NetworkLatency-1.0 |
+| Capability name | NetworkLatency-1.1 |
| Target type | Microsoft-Agent |
| Supported OS types | Windows, Linux. |
-| Description | Increases network latency for a specified port range and network block. |
+| Description | Increases network latency for a specified port range and network block. At least one destinationFilters or inboundDestinationFilters array must be provided. |
| Prerequisites | Agent (for Windows) must run as administrator. If the agent is installed as a VM extension, it runs as administrator by default. |
-| Urn | urn:csci:microsoft:agent:networkLatency/1.0 |
+| Urn | urn:csci:microsoft:agent:networkLatency/1.1 |
| Parameters (key, value) | |
| latencyInMilliseconds | Amount of latency to be applied in milliseconds. |
-| destinationFilters | Delimited JSON array of packet filters defining which outbound packets to target for fault injection. Maximum of 16. |
+| destinationFilters | Delimited JSON array of packet filters defining which outbound packets to target. Maximum of 16.|
+| inboundDestinationFilters | Delimited JSON array of packet filters defining which inbound packets to target. Maximum of 16. |
+| virtualMachineScaleSetInstances | An array of instance IDs when this fault is applied to a virtual machine scale set. Required for virtual machine scale sets. |
+
+The **destinationFilters** and **inboundDestinationFilters** parameters take an array of packet filters with the following properties.
+
+| Property | Value |
+|-|-|
| address | IP address that indicates the start of the IP range. | | subnetMask | Subnet mask for the IP address range. | | portLow | (Optional) Port number of the start of the port range. | | portHigh | (Optional) Port number of the end of the port range. |
-| virtualMachineScaleSetInstances | An array of instance IDs when this fault is applied to a virtual machine scale set. Required for virtual machine scale sets. |
### Sample JSON
Currently, the Windows agent doesn't reduce memory pressure when other applicati
"actions": [ { "type": "continuous",
- "name": "urn:csci:microsoft:agent:networkLatency/1.0",
+ "name": "urn:csci:microsoft:agent:networkLatency/1.1",
"parameters": [ { "key": "destinationFilters", "value": "[ { \"address\": \"23.45.229.97\", \"subnetMask\": \"255.255.255.224\", \"portLow\": \"5000\", \"portHigh\": \"5200\" } ]" },
+ {
+ "key": "inboundDestinationFilters",
+ "value": "[ { \"address\": \"23.45.229.97\", \"subnetMask\": \"255.255.255.224\", \"portLow\": \"5000\", \"portHigh\": \"5200\" } ]"
+ },
{ "key": "latencyInMilliseconds", "value": "100",
Currently, the Windows agent doesn't reduce memory pressure when other applicati
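Reassembled from the fragments above, the complete 1.1 latency action reads as follows (all values are the example values from the diff). The `networkDisconnect/1.1` sample later in this section has the same shape, minus `latencyInMilliseconds`.

```json
{
  "type": "continuous",
  "name": "urn:csci:microsoft:agent:networkLatency/1.1",
  "parameters": [
    {
      "key": "destinationFilters",
      "value": "[ { \"address\": \"23.45.229.97\", \"subnetMask\": \"255.255.255.224\", \"portLow\": \"5000\", \"portHigh\": \"5200\" } ]"
    },
    {
      "key": "inboundDestinationFilters",
      "value": "[ { \"address\": \"23.45.229.97\", \"subnetMask\": \"255.255.255.224\", \"portLow\": \"5000\", \"portHigh\": \"5200\" } ]"
    },
    { "key": "latencyInMilliseconds", "value": "100" },
    { "key": "virtualMachineScaleSetInstances", "value": "[0,1,2]" }
  ]
}
```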
| Property | Value | |-|-|
-| Capability name | NetworkDisconnect-1.0 |
+| Capability name | NetworkDisconnect-1.1 |
| Target type | Microsoft-Agent | | Supported OS types | Windows, Linux. |
-| Description | Blocks outbound network traffic for specified port range and network block. |
+| Description | Blocks outbound network traffic for a specified port range and network block. At least one **destinationFilters** or **inboundDestinationFilters** array must be provided. |
| Prerequisites | Agent (for Windows) must run as administrator. If the agent is installed as a VM extension, it runs as administrator by default. |
-| Urn | urn:csci:microsoft:agent:networkDisconnect/1.0 |
+| Urn | urn:csci:microsoft:agent:networkDisconnect/1.1 |
| Parameters (key, value) | |
-| destinationFilters | Delimited JSON array of packet filters defining which outbound packets to target for fault injection. Maximum of 16. |
+| destinationFilters | Delimited JSON array of packet filters defining which outbound packets to target. Maximum of 16.|
+| inboundDestinationFilters | Delimited JSON array of packet filters defining which inbound packets to target. Maximum of 16. |
+| virtualMachineScaleSetInstances | An array of instance IDs when this fault is applied to a virtual machine scale set. Required for virtual machine scale sets. |
+
+The **destinationFilters** and **inboundDestinationFilters** parameters take an array of packet filters with the following properties.
+
+| Property | Value |
+|-|-|
| address | IP address that indicates the start of the IP range. | | subnetMask | Subnet mask for the IP address range. | | portLow | (Optional) Port number of the start of the port range. | | portHigh | (Optional) Port number of the end of the port range. |
-| virtualMachineScaleSetInstances | An array of instance IDs when this fault is applied to a virtual machine scale set. Required for virtual machine scale sets. |
### Sample JSON
Currently, the Windows agent doesn't reduce memory pressure when other applicati
"actions": [ { "type": "continuous",
- "name": "urn:csci:microsoft:agent:networkDisconnect/1.0",
+ "name": "urn:csci:microsoft:agent:networkDisconnect/1.1",
"parameters": [ { "key": "destinationFilters", "value": "[ { \"address\": \"23.45.229.97\", \"subnetMask\": \"255.255.255.224\", \"portLow\": \"5000\", \"portHigh\": \"5200\" } ]" },
+ {
+ "key": "inboundDestinationFilters",
+ "value": "[ { \"address\": \"23.45.229.97\", \"subnetMask\": \"255.255.255.224\", \"portLow\": \"5000\", \"portHigh\": \"5200\" } ]"
+ },
{ "key": "virtualMachineScaleSetInstances", "value": "[0,1,2]"
Currently, the Windows agent doesn't reduce memory pressure when other applicati
* The agent-based network faults currently only support IPv4 addresses.
-## Azure Resource Manager virtual machine shutdown
+## Virtual Machine shutdown
| Property | Value | |-|-| | Capability name | Shutdown-1.0 |
Currently, the Windows agent doesn't reduce memory pressure when other applicati
} ```
-## Azure Resource Manager virtual machine scale set instance shutdown
+## Virtual Machine Scale Set instance shutdown
This fault has two available versions, Version 1.0 and Version 2.0. The main difference is that Version 2.0 lets you filter by availability zone, shutting down only the instances within a specified zone or zones.
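As a rough sketch of the Version 2.0 zone filtering, an experiment selector restricted to zone 1 might look like the following. The `filter` block and its property names are an assumption based on Chaos Studio's simple selector filters, and the target ID is a placeholder; check the current fault library for the exact schema.

```json
{
  "type": "List",
  "id": "myScaleSetSelector",
  "targets": [
    { "type": "ChaosTarget", "id": "<scale set target resource ID>" }
  ],
  "filter": {
    "type": "Simple",
    "parameters": { "zones": [ "1" ] }
  }
}
```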
cloud-services-extended-support Feature Support Analysis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/feature-support-analysis.md
This article provides a feature analysis of Cloud Services (extended support) an
|SKUs supported|D, Dv2, Dv3, Dav4 series, Ev3, Eav4 series, G series, H series|D series, E series, F series, A series, B series, Intel, AMD; Specialty SKUs (G, H, L, M, N) are not supported|All SKUs| |Full control over VM, NICs, Disks|Limited control over NICs and VM via CS-ES APIs. No support for Disks|Yes|Limited control with virtual machine scale sets VM API| |RBAC Permissions Required|Compute Virtual Machine Scale Sets Write, Compute VM Write, Network|Compute Virtual Machine Scale Sets Write, Compute VM Write, Network|Compute Virtual Machine Scale Sets Write|
-|Accelerated networking|Yes|Yes|Yes|
+|Accelerated networking|No|Yes|Yes|
|Spot instances and pricing|No|Yes, you can have both Spot and Regular priority instances|Yes, instances must either be all Spot or all Regular| |Mix operating systems|Extremely limited Windows support|Yes, Linux and Windows can reside in the same Flexible scale set|No, instances are the same operating system| |Disk Types|No Disk Support|Managed disks only, all storage types|Managed and unmanaged disks, All Storage Types
cloud-services Cloud Services Guestos Msrc Releases https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-guestos-msrc-releases.md
na Previously updated : 8/9/2023 Last updated : 8/21/2023
The following tables show the Microsoft Security Response Center (MSRC) updates
## August 2023 Guest OS
->[!NOTE]
-
->The August Guest OS is currently being rolled out to Cloud Service VMs that are configured for automatic updates. When the rollout is complete, this version will be made available for manual updates through the Azure portal and configuration files. The following patches are included in the August Guest OS. This list is subject to change.
| Product Category | Parent KB Article | Vulnerability Description | Guest OS | Date First Introduced | | | | | | |
-| Rel 23-08 | [5029247] | Latest Cumulative Update(LCU) | 6.61 | Aug 8, 2023 |
-| Rel 23-08 | [5029250] | Latest Cumulative Update(LCU) | 7.29 | Aug 8, 2023 |
-| Rel 23-08 | [5029242] | Latest Cumulative Update(LCU) | 5.85 | Aug 8, 2023 |
-| Rel 23-08 | [5028969] | .NET Framework 3.5 Security and Quality Rollup | 2.141 | Aug 8, 2023 |
-| Rel 23-08 | [5028963] | .NET Framework 4.7.2 Security and Quality Rollup | 2.141 | Aug 8, 2023 |
-| Rel 23-08 | [5028970] | .NET Framework 3.5 Security and Quality Rollup LKG | 4.121 | Aug 8, 2023 |
-| Rel 23-08 | [5028962] | .NET Framework 4.7.2 Cumulative Update LKG | 4.121 | Aug 8, 2023 |
-| Rel 23-08 | [5028967] | .NET Framework 3.5 Security and Quality Rollup LKG | 3.129 | Aug 8, 2023 |
-| Rel 23-08 | [5028961] | .NET Framework 4.7.2 Cumulative Update LKG | 3.129 | Aug 8, 2023 |
-| Rel 23-08 | [5028960] | .NET Framework DotNet | 6.61 | Aug 8, 2023 |
-| Rel 23-08 | [5028956] | .NET Framework 4.8 Security and Quality Rollup LKG | 7.29 | Aug 8, 2023 |
-| Rel 23-08 | [5029296] | Monthly Rollup | 2.141 | Aug 8, 2023 |
-| Rel 23-08 | [5029295] | Monthly Rollup | 3.129 | Aug 8, 2023 |
-| Rel 23-08 | [5029312] | Monthly Rollup | 4.121 | Aug 8, 2023 |
-| Rel 23-08 | [5029369] | Servicing Stack Update | 3.129 | Aug 8, 2023 |
-| Rel 23-08 | [5029368] | Servicing Stack Update LKG | 4.121 | Aug 8, 2023 |
-| Rel 23-08 | [4578013] | OOB Standalone Security Update | 4.121 | Aug 19, 2020 |
-| Rel 23-08 | [5023788] | Servicing Stack Update LKG | 5.85 | Mar 14, 2023 |
-| Rel 23-08 | [5028264] | Servicing Stack Update LKG | 2.141 | Jul 11, 2023 |
-| Rel 23-08 | [4494175] | Microcode | 5.85 | Sep 1, 2020 |
-| Rel 23-08 | [4494174] | Microcode | 6.61 | Sep 1, 2020 |
-| Rel 23-08 | 5029395 | Servicing Stack Update | 7.29 | |
-| Rel 23-08 | 5028316 | Servicing Stack Update | 6.61 | |
+| Rel 23-08 | [5029247] | Latest Cumulative Update(LCU) | [6.61] | Aug 8, 2023 |
+| Rel 23-08 | [5029250] | Latest Cumulative Update(LCU) | [7.30] | Aug 8, 2023 |
+| Rel 23-08 | [5029242] | Latest Cumulative Update(LCU) | [5.85] | Aug 8, 2023 |
+| Rel 23-08 | [5028969] | .NET Framework 3.5 Security and Quality Rollup | [2.141] | Aug 8, 2023 |
+| Rel 23-08 | [5028963] | .NET Framework 4.7.2 Security and Quality Rollup | [2.141] | Aug 8, 2023 |
+| Rel 23-08 | [5028970] | .NET Framework 3.5 Security and Quality Rollup LKG | [4.121] | Aug 8, 2023 |
+| Rel 23-08 | [5028962] | .NET Framework 4.7.2 Cumulative Update LKG | [4.121] | Aug 8, 2023 |
+| Rel 23-08 | [5028967] | .NET Framework 3.5 Security and Quality Rollup LKG | [3.129] | Aug 8, 2023 |
+| Rel 23-08 | [5028961] | .NET Framework 4.7.2 Cumulative Update LKG | [3.129] | Aug 8, 2023 |
+| Rel 23-08 | [5028960] | .NET Framework DotNet | [6.61] | Aug 8, 2023 |
+| Rel 23-08 | [5028956] | .NET Framework 4.8 Security and Quality Rollup LKG | [7.30] | Aug 8, 2023 |
+| Rel 23-08 | [5029296] | Monthly Rollup | [2.141] | Aug 8, 2023 |
+| Rel 23-08 | [5029295] | Monthly Rollup | [3.129] | Aug 8, 2023 |
+| Rel 23-08 | [5029312] | Monthly Rollup | [4.121] | Aug 8, 2023 |
+| Rel 23-08 | [5029369] | Servicing Stack Update | [3.129] | Aug 8, 2023 |
+| Rel 23-08 | [5029368] | Servicing Stack Update LKG | [4.121] | Aug 8, 2023 |
+| Rel 23-08 | [4578013] | OOB Standalone Security Update | [4.121] | Aug 19, 2020 |
+| Rel 23-08 | [5023788] | Servicing Stack Update LKG | [5.85] | Mar 14, 2023 |
+| Rel 23-08 | [5028264] | Servicing Stack Update LKG | [2.141] | Jul 11, 2023 |
+| Rel 23-08 | [4494175] | Microcode | [5.85] | Sep 1, 2020 |
+| Rel 23-08 | [4494174] | Microcode | [6.61] | Sep 1, 2020 |
+| Rel 23-08 | 5029395 | Servicing Stack Update | [7.30] | |
+| Rel 23-08 | 5028316 | Servicing Stack Update | [6.61] | |
[5029247]: https://support.microsoft.com/kb/5029247 [5029250]: https://support.microsoft.com/kb/5029250
The following tables show the Microsoft Security Response Center (MSRC) updates
[4494174]: https://support.microsoft.com/kb/4494174 [5029395]: https://support.microsoft.com/kb/5029395 [5028316]: https://support.microsoft.com/kb/5028316
+[2.141]: ./cloud-services-guestos-update-matrix.md#family-2-releases
+[3.129]: ./cloud-services-guestos-update-matrix.md#family-3-releases
+[4.121]: ./cloud-services-guestos-update-matrix.md#family-4-releases
+[5.85]: ./cloud-services-guestos-update-matrix.md#family-5-releases
+[6.61]: ./cloud-services-guestos-update-matrix.md#family-6-releases
+[7.30]: ./cloud-services-guestos-update-matrix.md#family-7-releases
## July 2023 Guest OS
cloud-services Cloud Services Guestos Update Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-guestos-update-matrix.md
na Previously updated : 07/27/2023 Last updated : 08/31/2023
Unsure about how to update your Guest OS? Check [this][cloud updates] out.
## News updates
+###### **August 21, 2023**
+The August Guest OS has released.
+ ###### **July 27, 2023** The July Guest OS has released.
The September Guest OS has released.
| Configuration string | Release date | Disable date | | | | |
-| WA-GUEST-OS-7.28_202307-01 | July 27, 2023 | Post 7.30 |
-| WA-GUEST-OS-7.27_202306-02 | July 8, 2023 | Post 7.29 |
+| WA-GUEST-OS-7.30_202308-01 | August 21, 2023 | Post 7.32 |
+| WA-GUEST-OS-7.28_202307-01 | July 27, 2023 | Post 7.31 |
+|~~WA-GUEST-OS-7.27_202306-02~~| July 8, 2023 | August 21, 2023 |
|~~WA-GUEST-OS-7.25_202305-01~~| May 19, 2023 | July 27, 2023 | |~~WA-GUEST-OS-7.24_202304-01~~| April 27, 2023 | July 8, 2023 | |~~WA-GUEST-OS-7.23_202303-01~~| March 28, 2023 | May 19, 2023 |
The September Guest OS has released.
| Configuration string | Release date | Disable date | | | | |
+| WA-GUEST-OS-6.61_202308-01 | August 21, 2023 | Post 6.63 |
| WA-GUEST-OS-6.60_202307-01 | July 27, 2023 | Post 6.62 |
-| WA-GUEST-OS-6.59_202306-02 | July 8, 2023 | Post 6.61 |
+|~~WA-GUEST-OS-6.59_202306-02~~| July 8, 2023 | August 21, 2023 |
|~~WA-GUEST-OS-6.57_202305-01~~| May 19, 2023 | July 27, 2023 | |~~WA-GUEST-OS-6.56_202304-01~~| April 27, 2023 | July 8, 2023 | |~~WA-GUEST-OS-6.55_202303-01~~| March 28, 2023 | May 19, 2023 |
The September Guest OS has released.
| Configuration string | Release date | Disable date | | | | |
+| WA-GUEST-OS-5.85_202308-01 | August 21, 2023 | Post 5.87 |
| WA-GUEST-OS-5.84_202307-01 | July 27, 2023 | Post 5.86 |
-| WA-GUEST-OS-5.83_202306-02 | July 8, 2023 | Post 5.85 |
+|~~WA-GUEST-OS-5.83_202306-02~~| July 8, 2023 | August 21, 2023 |
|~~WA-GUEST-OS-5.81_202305-01~~| May 19, 2023 | July 27, 2023 | |~~WA-GUEST-OS-5.80_202304-01~~| April 27, 2023 | July 8, 2023 | |~~WA-GUEST-OS-5.79_202303-01~~| March 28, 2023 | May 19, 2023 |
The September Guest OS has released.
| Configuration string | Release date | Disable date | | | | |
+| WA-GUEST-OS-4.121_202308-01 | August 21, 2023 | Post 4.123 |
| WA-GUEST-OS-4.120_202307-01 | July 27, 2023 | Post 4.122 |
-| WA-GUEST-OS-4.119_202306-02 | July 8, 2023 | Post 4.121 |
+|~~WA-GUEST-OS-4.119_202306-02~~| July 8, 2023 | August 21, 2023 |
|~~WA-GUEST-OS-4.117_202305-01~~| May 19, 2023 | July 27, 2023 | |~~WA-GUEST-OS-4.116_202304-01~~| April 27, 2023 | July 8, 2023 | |~~WA-GUEST-OS-4.115_202303-01~~| March 28, 2023 | May 19, 2023 |
The September Guest OS has released.
| Configuration string | Release date | Disable date | | | | |
+| WA-GUEST-OS-3.129_202308-01 | August 21, 2023 | Post 3.131 |
| WA-GUEST-OS-3.128_202307-01 | July 27, 2023 | Post 3.130 |
-| WA-GUEST-OS-3.127_202306-02 | July 8, 2023 | Post 3.129 |
+|~~WA-GUEST-OS-3.127_202306-02~~| July 8, 2023 | August 21, 2023 |
|~~WA-GUEST-OS-3.125_202305-01~~| May 19, 2023 | July 27, 2023 | |~~WA-GUEST-OS-3.124_202304-02~~| April 27, 2023 | July 8, 2023 | |~~WA-GUEST-OS-3.122_202303-01~~| March 28, 2023 | May 19, 2023 |
The September Guest OS has released.
| Configuration string | Release date | Disable date | | | | |
+| WA-GUEST-OS-2.141_202308-01 | August 21, 2023 | Post 2.143 |
| WA-GUEST-OS-2.140_202307-01 | July 27, 2023 | Post 2.142 |
-| WA-GUEST-OS-2.139_202306-02 | July 8, 2023 | Post 2.141 |
+|~~WA-GUEST-OS-2.139_202306-02~~| July 8, 2023 | August 21, 2023 |
|~~WA-GUEST-OS-2.137_202305-01~~| May 19, 2023 | July 27, 2023 | |~~WA-GUEST-OS-2.136_202304-01~~| April 27, 2023 | July 8, 2023 | |~~WA-GUEST-OS-2.135_202303-01~~| March 28, 2023 | May 19, 2023 |
cloud-services Cloud Services Nodejs Chat App Socketio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-nodejs-chat-app-socketio.md
Title: Node.js application using Socket.io - Azure
-description: Use this tutorial to learn how to host a socket.IO-based chat application on Azure. Socket.IO provides real time communication for a Node.js server and clients.
+description: Socket.IO is now natively supported on Azure. This older tutorial shows how to self-host a Socket.IO-based chat application on Azure. The current recommendation is to let Socket.IO provide real-time communication for a Node.js server and clients, and to let Azure manage scaling client connections.
Previously updated : 02/21/2023 Last updated : 08/31/2023
# Build a Node.js chat application with Socket.IO on an Azure Cloud Service (classic)
+> [!TIP]
+> Socket.IO is now natively supported on Azure. Scaling out a Socket.IO app to handle thousands of connections can be frustrating. Now that Azure natively supports Socket.IO, you can let Azure handle scalability and availability. [Learn more about how you can get any Socket.IO app running on Azure with a few lines of code](https://learn.microsoft.com/azure/azure-web-pubsub/socketio-overview).
+ [!INCLUDE [Cloud Services (classic) deprecation announcement](includes/deprecation-announcement.md)] + Socket.IO provides real time communication between your Node.js server and clients. This tutorial walks you through hosting a socket.IO based chat application on Azure. For more information
cloud-shell Embed Cloud Shell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-shell/embed-cloud-shell.md
Large sized button
The location of these image files is subject to change. We recommend that you download the files for use in your applications.
-## Customize experience
-
-Set a specific shell experience by augmenting your URL.
-
-| Experience | URL |
-| | |
-| Most recently used shell | `https://shell.azure.com` |
-| Bash | `https://shell.azure.com/bash` |
-| PowerShell | `https://shell.azure.com/powershell` |
- ## Next steps - [Bash in Cloud Shell quickstart][07] - [PowerShell in Cloud Shell quickstart][06] <!-- updated link references -->
-[01]: https://shell.azure.com
[06]: quickstart-powershell.md [07]: quickstart.md
cloud-shell Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-shell/features.md
Cloud Shell comes with the following Azure command-line tools preinstalled:
| Tool | Version | Command | | - | -- | |
-| [Azure CLI][08] | 2.45.0 | `az --version` |
-| [Azure PowerShell][06] | 9.4.0 | `Get-Module Az -ListAvailable` |
+| [Azure CLI][08] | 2.51.0 | `az --version` |
+| [Azure PowerShell][06] | 10.2.0 | `Get-Module Az -ListAvailable` |
| [AzCopy][04] | 10.15.0 | `azcopy --version` |
-| [Azure Functions CLI][01] | 4.0.3971 | `func --version` |
+| [Azure Functions CLI][01] | 4.0.5198 | `func --version` |
| [Service Fabric CLI][03] | 11.2.0 | `sfctl --version` | | [Batch Shipyard][09] | 3.9.1 | `shipyard --version` | | [blobxfer][10] | 1.11.0 | `blobxfer --version` |
Cloud Shell comes with the following languages preinstalled:
| Language | Version | Command | | - | - | |
-| .NET Core | [6.0.405][16] | `dotnet --version` |
-| Go | 1.17.13 | `go version` |
-| Java | 11.0.18 | `java --version` |
-| Node.js | 16.18.1 | `node --version` |
-| PowerShell | [7.3.2][07] | `pwsh -Version` |
+| .NET Core | [7.0.400][16] | `dotnet --version` |
+| Go | 1.19.11 | `go version` |
+| Java | 17.0.8 | `java --version` |
+| Node.js | 16.20.1 | `node --version` |
+| PowerShell | [7.3.6][07] | `pwsh -Version` |
| Python | 3.9.14 | `python --version` |
-| Ruby | 3.1.3p185 | `ruby --version` |
+| Ruby | 3.1.4p223 | `ruby --version` |
You can verify the version of the language using the command listed in the table.
You can verify the version of the language using the command listed in the table
[13]: https://docs.cloudfoundry.org/cf-cli/ [14]: https://docs.d2iq.com/dkp/2.3/azure-quick-start [15]: https://docs.docker.com/desktop/
-[16]: https://dotnet.microsoft.com/download/dotnet/6.0
+[16]: https://dotnet.microsoft.com/download/dotnet/7.0
[17]: https://github.com/Azure/CloudShell/issues [18]: https://github.com/microsoft/mssql-scripter/blob/dev/doc/usage_guide.md [19]: https://helm.sh/docs/
cloud-shell Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-shell/troubleshooting.md
ms.contributor: jahelmic
Last updated 05/03/2023 tags: azure-resource-manager+ Title: Azure Cloud Shell troubleshooting # Troubleshooting & Limitations of Azure Cloud Shell
communication-services Azure Communication Services Azure Cognitive Services Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/call-automation/azure-communication-services-azure-cognitive-services-integration.md
Previously updated : 02/15/2023 Last updated : 08/17/2023
# Connect Azure Communication Services with Azure AI services
->[!IMPORTANT]
->Functionality described on this document is currently in private preview. Private preview includes access to SDKs and documentation for testing purposes that are not yet available publicly.
->Apply to become an early adopter by filling out the form for [preview access to Azure Communication Services](https://aka.ms/acs-tap-invite).
Azure Communication Services Call Automation APIs give developers the ability to steer and control Azure Communication Services Telephony, VoIP, or WebRTC calls using real-time event triggers to perform actions based on custom business logic specific to their domain. Within the Call Automation APIs, developers can use simple AI-powered APIs to play personalized greeting messages, recognize conversational voice inputs to gather information on contextual questions (driving a more self-service model with customers), and apply sentiment analysis to improve customer service overall. These content-specific APIs are orchestrated through **Azure Cognitive Services**, with support for customizing AI models, without developers needing to terminate media streams on their services and stream them back to Azure for AI functionality.
BYO Azure AI services can be easily integrated into any application regardless o
### Build applications that can play and recognize speech
-With the ability to, connect your Azure AI services to Azure Communication Services, you can enable custom play functionality, using [Text-to-Speech](../../../../articles/cognitive-services/Speech-Service/text-to-speech.md) and [SSML](../../../../articles/cognitive-services/Speech-Service/speech-synthesis-markup.md) configuration, to play more customized and natural sounding audio to users. Through the Azure AI services connection, you can also use the Speech-To-Text service to incorporate recognition of voice responses that can be converted into actionable tasks through business logic in the application. These functions can be further enhanced through the ability to create custom models within Azure AI services that are bespoke to your domain and region through the ability to choose which languages are spoken and recognized, custom voices and custom models built based on your experience.
+With the ability to connect your Azure AI services to Azure Communication Services, you can enable custom play functionality, using [Text-to-Speech](../../../../articles/cognitive-services/Speech-Service/text-to-speech.md) and [SSML](../../../../articles/cognitive-services/Speech-Service/speech-synthesis-markup.md) configuration, to play more customized and natural-sounding audio to users. Through the Azure AI services connection, you can also use the Speech-to-Text service to incorporate recognition of voice responses that can be converted into actionable tasks through business logic in the application. These functions can be further enhanced through the ability to create custom models within Azure AI services that are bespoke to your domain and region, through the ability to choose which languages are spoken and recognized, custom voices, and custom models built based on your experience.
## Run time flow
-[![Run time flow](./media/run-time-flow.png)](./media/run-time-flow.png#lightbox)
+[![Screenshot of the integration run time flow.](./media/run-time-flow.png)](./media/run-time-flow.png#lightbox)
## Azure portal experience
-You can also configure and bind your Communication Services and Azure AI services through the Azure portal.
+You can configure and bind your Communication Services and Azure AI services through the Azure portal.
-### Add a Managed Identity to the Azure Communication Services Resource
+## Prerequisites
+- An Azure account with an active subscription and access to the Azure portal. For details, see [Create an account for free](https://azure.microsoft.com/free/).
+- An Azure Communication Services resource. See [Create an Azure Communication Services resource](../../quickstarts/create-communication-resource.md?tabs=windows&pivots=platform-azp).
+- An Azure Cognitive Services resource.
-1. Navigate to your Azure Communication Services Resource in the Azure portal.
-2. Select the Identity tab.
-3. Enable system assigned identity. This action begins the creation of the identity; A pop-up notification appears notifying you that the request is being processed.
+### Connecting through the Azure portal
-[![Enable managed identiy](./media/enable-system-identity.png)](./media/enable-system-identity.png#lightbox)
+1. Open your Azure Communication Services resource and click on the Cognitive Services tab.
+2. If system-assigned managed identity isn't enabled, there are two ways to enable it.
-<a name='option-1-add-role-from-azure-cognitive-services-in-the-azure-portal'></a>
+ 2.1. In the Cognitive Services tab, click the "Enable Managed Identity" button.
+
+ [![Screenshot of Enable Managed Identity button.](./media/enabled-identity.png)](./media/enabled-identity.png#lightbox)
-### Option 1: Add role from Azure AI services in the Azure portal
-1. Navigate to your Azure Cognitive Service resource.
-2. Select the "Access control (IAM)" tab.
-3. Click the "+ Add" button.
-4. Select "Add role assignments" from the menu.
+ or
-[![Add role from IAM](./media/add-role.png)](./media/add-role.png#lightbox)
+ 2.2. Navigate to the identity tab.
+
+ 2.3. Enable system-assigned identity. This action begins the creation of the identity; a pop-up notification confirms that the request is being processed.
+ [![Screenshot of enabling managed identity.](./media/enable-system-identity.png)](./media/enable-system-identity.png#lightbox)
-5. Choose the "Cognitive Services User" role to assign, then click "Next".
+ 2.4. Once the identity is enabled, you should see something similar to the following.
+ [![Screenshot of enabled identity.](./media/identity-saved.png)](./media/identity-saved.png#lightbox)
-[![Cognitive Services User](./media/cognitive-service-user.png)](./media/cognitive-service-user.png#lightbox)
+3. When managed identity is enabled, the Cognitive Services tab shows a 'Connect cognitive service' button for connecting the two services.
+[![Screenshot of Connect cognitive services button.](./media/cognitive-services.png)](./media/cognitive-services.png#lightbox)
-6. For the field "Assign access to" choose the "User, group or service principal".
-7. Press "+ Select members" and a side tab opens.
-8. Search for your Azure Communication Services resource name in the text box and click it when it shows up, then click "Select".
+4. Click 'Connect cognitive service', then select the Subscription, Resource Group, and Resource, and click 'Connect' in the context pane that opens.
+ [![Screenshot of Subscription, Resource Group and Resource in pane.](./media/choose-options.png)](./media/choose-options.png#lightbox)
+5. If the connection is successful, a green banner confirms it.
-[![Select ACS resource](./media/select-acs-resource.png)](./media/select-acs-resource.png#lightbox)
+ [![Screenshot of successful connection.](./media/connected.png)](./media/connected.png#lightbox)
-9. Click "Review + assign", this assigns the role to the managed identity.
-
-### Option 2: Add role through Azure Communication Services Identity tab
-
-1. Navigate to your Azure Communication Services resource in the Azure portal.
-2. Select Identity tab.
-3. Click on "Azure role assignments".
-
-[![ACS role assignment](./media/add-role-acs.png)](./media/add-role-acs.png#lightbox)
-
-4. Click the "Add role assignment (Preview)" button, which opens the "Add role assignment (Preview)" tab.
-5. Select the "Resource group" for "Scope".
-6. Select the "Subscription".
-7. Select the "Resource Group" containing the Cognitive Service.
-8. Select the "Role" "Cognitive Services User".
-
-[![ACS role information](./media/acs-roles-cognitive-services.png)](./media/acs-roles-cognitive-services.png#lightbox)
-
-10. Click Save.
-
-Your Communication Service has now been linked to your Azure Cognitive Service resource.
-
-<a name='azure-cognitive-services-regions-supported'></a>
+6. The connected service now appears in the Cognitive Services tab.
+[![Screenshot of connected cognitive service on main page.](./media/new-entry-created.png)](./media/new-entry-created.png#lightbox)
## Azure AI services regions supported
-This integration between Azure Communication Services and Azure AI services is only supported in the following regions at this point in time:
+This integration between Azure Communication Services and Azure AI services is only supported in the following regions:
- westus - westus2 - westus3
This integration between Azure Communication Services and Azure AI services is o
- southcentralus - westcentralus - westeu
+- uksouth
## Next Steps-- Learn about [playing audio](../../concepts/call-automation/play-ai-action.md) to callers using Text-to-Speech.-- Learn about [gathering user input](../../concepts/call-automation/recognize-ai-action.md) with Speech-to-Text.
+- Learn about [playing audio](../../concepts/call-automation/play-action.md) to callers using Text-to-Speech.
+- Learn about [gathering user input](../../concepts/call-automation/recognize-action.md) with Speech-to-Text.
communication-services Call Automation Teams Interop https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/call-automation/call-automation-teams-interop.md
Last updated 02/22/2023 -+ # Deliver expedient customer service by adding Microsoft Teams users in Call Automation workflows Businesses are looking for innovative ways to increase the efficiency of their customer service operations. Azure Communication Services Call Automation provides developers the ability to build programmable customer interactions using real-time event triggers to perform actions based on custom business logic. For example, with support for interoperability with Microsoft Teams, developers can use Call Automation APIs to add subject matter experts (SMEs). These SMEs, who use Microsoft Teams, can be added to an existing customer service call to provide expert advice and help resolve a customer issue.
The dataflow diagram depicts a canonical scenario where a Teams user is added to
The following list presents the set of features that are currently available in the Azure Communication Services Call Automation SDKs for calls with Microsoft Teams users.
-| Feature Area | Capability | .NET | Java |
-| -| -- | | -- |
-| Pre-call scenarios | Place new outbound call to a Microsoft Teams user | ✔️ | ✔️ |
-| | Redirect (forward) a call to a Microsoft Teams user | ✔️ | ✔️ |
-| | Set custom display name for the callee when making a call offer to a Microsoft Teams user | Only on Microsoft Teams desktop client | Only on Microsoft Teams desktop client |
-| Mid-call scenarios | Add one or more endpoints to an existing call with a Microsoft Teams user | ✔️ | ✔️ |
-| | Play Audio from an audio file | ✔️ | ✔️ |
-| | Recognize user input through DTMF | ✔️ | ✔️ |
-| | Remove one or more endpoints from an existing call| ✔️ | ✔️ |
-| | Blind Transfer a 1:1 call to another endpoint | ✔️ | ✔️ |
-| | Hang up a call (remove the call leg) | ✔️ | ✔️ |
-| | Terminate a call (remove all participants and end call)| ✔️ | ✔️ |
-| Query scenarios | Get the call state | ✔️ | ✔️ |
-| | Get a participant in a call | ✔️ | ✔️ |
-| | List all participants in a call | ✔️ | ✔️ |
-| Call Recording* | Start/pause/resume/stop recording | ✔️ | ✔️ |
+| Feature Area | Capability | .NET | Java | Python | JavaScript |
+| - | - | - | - | - | - |
+| Pre-call scenarios | Place new outbound call to a Microsoft Teams user | ✔️ | ✔️ | ✔️ | ✔️ |
+| | Redirect (forward) a call to a Microsoft Teams user | ✔️ | ✔️ | ✔️ | ✔️ |
+| | Set custom display name for the callee when making a call offer to a Microsoft Teams user | Only on Microsoft Teams desktop and web client | Only on Microsoft Teams desktop and web client | Only on Microsoft Teams desktop and web client | Only on Microsoft Teams desktop and web client |
+| Mid-call scenarios | Add one or more endpoints to an existing call with a Microsoft Teams user | ✔️ | ✔️ | ✔️ | ✔️ |
+| | Play Audio from an audio file | ✔️ | ✔️ | ✔️ | ✔️ |
+| | Recognize user input through DTMF | ❌ | ❌ | ❌ | ❌ |
+| | Remove one or more endpoints from an existing call| ✔️ | ✔️ | ✔️ | ✔️ |
+| | Blind Transfer a 1:1 call to another endpoint | ✔️ | ✔️ | ✔️ | ✔️ |
+| | Hang up a call (remove the call leg) | ✔️ | ✔️ | ✔️ | ✔️ |
+| | Terminate a call (remove all participants and end call)| ✔️ | ✔️ | ✔️ | ✔️ |
+| Query scenarios | Get the call state | ✔️ | ✔️ | ✔️ | ✔️ |
+| | Get a participant in a call | ✔️ | ✔️ | ✔️ | ✔️ |
+| | List all participants in a call | ✔️ | ✔️ | ✔️ | ✔️ |
+| Call Recording* | Start/pause/resume/stop recording (call recording notifications in Teams clients are supported for Teams desktop, web, iOS and Android) | ✔️ | ✔️ | ✔️ | ✔️ |
> [!IMPORTANT]
-> Azure Communication Services call recording notifications in Teams clients are not supported. You must obtain consent from and notify the parties of recorded communications in a manner that complies with the laws applicable to each participant. i.e., using the Play API available in Call Automation.
+> During public preview, you can't stop the call recording if it was started after adding the Teams participant.
## Supported clients | Clients | Support | | --| -- | | Microsoft Teams Desktop | ✔️ |
-| Microsoft Teams Web | ❌ |
+| Microsoft Teams Web | ✔️ |
| Microsoft Teams iOS | ❌ | | Microsoft Teams Android | ❌ | | Azure Communications Services signed in with Microsoft 365 Identity | ❌ |
-## Roadmap
+A Teams Phone license is required to use this feature.
-1. Support for Microsoft Teams Web coming soon.
+## Roadmap
1. Support for Azure Communications Services signed in with Microsoft 365 Identity coming soon.
+2. Support for Microsoft Teams iOS and Android clients coming soon.
## Next steps
communication-services Incoming Call Notification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/call-automation/incoming-call-notification.md
# Incoming call concepts
-Azure Communication Services Call Automation provides developers the ability to build applications, which can make and receive calls. Azure Communication Services relies on Event Grid subscriptions to deliver each `IncomingCall` event, so setting up your environment to receive these notifications is critical to your application being able to redirect or answer a call.
+Azure Communication Services Call Automation enables developers to create applications that can make and receive calls. It relies on Event Grid subscriptions to deliver `IncomingCall` events, so configuring your environment to receive these notifications is critical for your application to redirect or answer a call. Understanding the fundamentals of incoming calls is therefore essential to getting the most out of Azure Communication Services Call Automation.
## Calling scenarios
-First, we need to define which scenarios can trigger an `IncomingCall` event. The primary concept to remember is that a call to an Azure Communication Services identity or Public Switched Telephone Network (PSTN) number triggers an `IncomingCall` event. The following are examples of these resources:
+Before setting up your environment, it's important to understand the scenarios that can trigger an `IncomingCall` event. To trigger an `IncomingCall` event, a call must be made to either an Azure Communication Services identity or a Public Switched Telephone Network (PSTN) number associated with your Azure Communication Services resource. The following are examples of these resources:
1. An Azure Communication Services identity 2. A PSTN phone number owned by your Azure Communication Services resource
Given these examples, the following scenarios trigger an `IncomingCall` event se
| Public PSTN | PSTN number owned by your Azure Communication Services resource | Call, Redirect, Add Participant, Transfer > [!NOTE]
-> An important concept to remember is that an Azure Communication Services identity can be a user or application. Although there is no ability to explicitly assign an identity to a user or application in the platform, this can be done by your own application or supporting infrastructure. Please review the [identity concepts guide](../identity-model.md) for more information on this topic.
+> It's important to understand that an Azure Communication Services identity can represent either a user or an application. While the platform does not have a built-in feature to explicitly assign an identity to a user or application, your application or supporting infrastructure can accomplish this. To learn more about this topic, refer to the [identity concepts guide](../identity-model.md).
## Register an Event Grid resource provider
If you haven't previously used Event Grid in your Azure subscription, you might
## Receiving an incoming call notification from Event Grid
-Since Azure Communication Services relies on Event Grid to deliver the `IncomingCall` notification through a subscription, how you choose to handle the notification is up to you. Additionally, since the Call Automation API relies specifically on Webhook callbacks for events, a common Event Grid subscription used would be a 'Webhook'. However, you could choose any one of the available subscription types offered by the service.
+In Azure Communication Services, receiving an `IncomingCall` notification is made possible through an Event Grid subscription. As the receiver of the notification, you choose how to handle it. Since the Call Automation API uses Webhook callbacks for events, it's common to use a 'Webhook' Event Grid subscription. However, the service offers various subscription types, and you can choose the one that best suits your needs.
This architecture has the following benefits:
This architecture has the following benefits:
- PSTN number assignment and routing logic can exist in your application versus being statically configured online. - As identified in the [calling scenarios](#calling-scenarios) section, your application can be notified even when users make calls between each other. You can then combine this scenario together with the [Call Recording APIs](../voice-video-calling/call-recording.md) to meet compliance needs.
-To check out a sample payload for the event and to learn about other calling events published to Event Grid, check out this [guide](../../../event-grid/communication-services-voice-video-events.md#microsoftcommunicationincomingcall).
+For a sample payload of the event and more information on other calling events published to Event Grid, refer to this [guide](../../../event-grid/communication-services-voice-video-events.md#microsoftcommunicationincomingcall).
Here is an example of an Event Grid Webhook subscription where the event type filter is listening only to the `IncomingCall` event. ![Image showing IncomingCall subscription.](./media/subscribe-incoming-call-event-grid.png)
-## Call routing in Call Automation or Event Grid
+## Call routing options with Call Automation and Event Grid
-You can use [advanced filters](../../../event-grid/event-filtering.md) in your Event Grid subscription to subscribe to an `IncomingCall` notification for a specific source/destination phone number or Azure Communication Services identity and sent it to an endpoint such as a Webhook subscription. That endpoint application can then make a decision to **redirect** the call using the Call Automation SDK to another Azure Communication Services identity or to the PSTN.
+In Call Automation and Event Grid, call routing can be tailored to your specific needs. By using [advanced filters](../../../event-grid/event-filtering.md) within your Event Grid subscription, you can subscribe to an `IncomingCall` notification that pertains to a specific source/destination phone number or an Azure Communication Services identity. This notification can then be directed to an endpoint, such as a Webhook subscription. Using the Call Automation SDK, the endpoint application can then make a decision to **redirect** the call to another Azure Communication Services identity or to the PSTN.
> [!NOTE]
-> In many cases you will want to configure filtering in Event Grid due to the scenarios described above generating an `IncomingCall` event so that your application only receives events it should be responding to. For example, if you want to redirect an inbound PSTN call to an ACS endpoint and you don't use a filter, your Event Grid subscription will receive two `IncomingCall` events; one for the PSTN call and one for the ACS user even though you had not intended to receive the second notification. Failure to handle these scenarios using filters or some other mechanism in your application can cause infinite loops and/or other undesired behavior.
+> To ensure that your application receives only the necessary events, it is recommended to configure filtering in Event Grid. This is particularly crucial in scenarios that generate `IncomingCall` events, such as redirecting an inbound PSTN call to an Azure Communication Services endpoint. If a filter isn't used, your Event Grid subscription receives two `IncomingCall` events - one for the PSTN call and one for the Azure Communication Services user - even though you intended to receive only the first notification. Neglecting to handle such scenarios using filters or other mechanisms in your application can result in infinite loops and other undesirable behavior.
Here is an example of an advanced filter on an Event Grid subscription watching for the `data.to.PhoneNumber.Value` string starting with a PSTN phone number of `+18005551212`.
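The article illustrates this filter with a portal screenshot. Expressed as the advanced-filter JSON of an Event Grid subscription, a sketch using the standard `StringBeginsWith` operator would look like this (the event type and key come from the surrounding text; the exact payload depends on how you create the subscription):

```json
{
  "filter": {
    "includedEventTypes": [ "Microsoft.Communication.IncomingCall" ],
    "advancedFilters": [
      {
        "operatorType": "StringBeginsWith",
        "key": "data.to.PhoneNumber.Value",
        "values": [ "+18005551212" ]
      }
    ]
  }
}
```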
Here is an example of an advanced filter on an Event Grid subscription watching
## Number assignment
-Since the `IncomingCall` notification doesn't have a specific destination other than the Event Grid subscription you've created, you're free to associate any particular number to any endpoint in Azure Communication Services. For example, if you acquired a PSTN phone number of `+14255551212` and want to assign it to a user with an identity of `375f0e2f-e8db-4449-9bf7-2054b02e42b4` in your application, you can maintain a mapping of that number to the identity. When an `IncomingCall` notification is sent matching the phone number in the **to** field, invoke the `Redirect` API and supply the identity of the user. In other words, you maintain the number assignment within your application and route or answer calls at runtime.
+When using the `IncomingCall` notification in Azure Communication Services, you have the freedom to associate any particular number with any endpoint. For example, if you obtained a PSTN phone number of `+14255551212` and wish to assign it to a user with an identity of `375f0e2f-e8db-4449-9bf7-2054b02e42b4` in your application, you should maintain a mapping of that number to the identity. When an `IncomingCall` notification is sent that matches the phone number in the **to** field, you can invoke the `Redirect` API and provide the user's identity. In other words, you can manage the number assignment within your application and route or answer calls at runtime.
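A minimal sketch of such a mapping, using the example number and identity from this paragraph (where and how you store it, in configuration, a database, and so on, is an application decision):

```json
{
  "+14255551212": "375f0e2f-e8db-4449-9bf7-2054b02e42b4"
}
```

When the **to** phone number of an `IncomingCall` event matches a key, your application invokes `Redirect` with the mapped identity.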
## Best Practices
-1. Event Grid requires you to prove ownership of your Webhook endpoint before it starts delivering events to that endpoint. This requirement prevents a malicious user from flooding your endpoint with events. If you're facing issues with receiving events, ensure the webhook configured is verified by handling `SubscriptionValidationEvent`. For more information, see this [guide](../../../event-grid/webhook-event-delivery.md).
-2. Upon the receipt of an incoming call event, if your application doesn't respond back with 200Ok to Event Grid in time, Event Grid uses exponential backoff retry to send the again. However, an incoming call only rings for 30 seconds, and acting on a call after that won't work. To avoid retries for expired or stale calls, we recommend setting the retry policy as - Max Event Delivery Attempts to 2 and Event Time to Live to 1 minute. These settings can be found under Additional Features tab of the event subscription. Learn more about retries [here](../../../event-grid/delivery-and-retry.md).
-
-3. We recommend you to enable logging for your Event Grid resource to monitor events that failed to deliver. Navigate to the system topic under Events tab of your Communication resource and enable logging from the Diagnostic settings. Failure logs can be found in 'AegDeliveryFailureLogs' table.
+1. To ensure that Event Grid delivers events to your Webhook endpoint and prevents malicious users from flooding your endpoint with events, you need to prove ownership of your endpoint. To address any issues with receiving events, confirm that the Webhook you configured is verified by handling `SubscriptionValidationEvent`. For more information, refer to this [guide](../../../event-grid/webhook-event-delivery.md).
+2. When an incoming call event is received, if your application fails to respond back with a 200 OK status code to Event Grid within the required time frame, Event Grid uses exponential backoff to retry sending the event. However, an incoming call only rings for 30 seconds, and responding to a call after that time won't be effective. To prevent retries for expired or stale calls, we recommend setting the retry policy to a Max Event Delivery Attempts of 2 and an Event Time to Live of 1 minute (see the sketch after this list). You can find these settings under the Additional Features tab of the event subscription. Learn more about retries [here](../../../event-grid/delivery-and-retry.md).
+3. We recommend enabling logging for your Event Grid resource to monitor events that fail to deliver. To do this, navigate to the system topic under the Events tab of your Communication resource and enable logging from the Diagnostic settings. Failure logs can be found in the 'AegDeliveryFailureLogs' table.
```sql
AegDeliveryFailureLogs
```
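The retry settings recommended in item 2 correspond to the `retryPolicy` of the Event Grid subscription. As a sketch in ARM/REST terms (property names follow the standard Event Grid subscription schema):

```json
{
  "properties": {
    "retryPolicy": {
      "maxDeliveryAttempts": 2,
      "eventTimeToLiveInMinutes": 1
    }
  }
}
```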
communication-services Play Action https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/call-automation/play-action.md
description: Conceptual information about playing audio in call using Call Autom
Previously updated : 09/06/2022 Last updated : 08/11/2023 # Playing audio in call
-The play action provided through the call automation SDK allows you to play audio prompts to participants in the call. This action can be accessed through the server-side implementation of your application. The play action allows you to provide Azure Communication Services access to your pre-recorded audio files with support for authentication.
+The play action provided through the Azure Communication Services Call Automation SDK allows you to play audio prompts to participants in a call. This action can be accessed through the server-side implementation of your application. You can play audio to call participants through one of two methods:
+- Providing Azure Communication Services access to prerecorded audio files in WAV format, with support for authentication
+- Providing regular text that is converted into speech output through the integration with Azure AI services.
+
+You can use the newly announced integration between [Azure Communication Services and Azure AI services](./azure-communication-services-azure-cognitive-services-integration.md) to play personalized responses using Azure [Text-To-Speech](../../../../articles/cognitive-services/Speech-Service/text-to-speech.md). You can use humanlike prebuilt neural voices out of the box or create custom neural voices that are unique to your product or brand. For more information on supported voices, languages, and locales, see [Language and voice support for the Speech service](../../../../articles/cognitive-services/Speech-Service/language-support.md). (Supported in public preview)
> [!NOTE] > Azure Communication Services currently only supports WAV files formatted as mono channel audio recorded at 16KHz. You can create your own audio files using [Speech synthesis with Audio Content Creation tool](../../../ai-services/Speech-Service/how-to-audio-content-creation.md).
-The Play action allows you to provide access to a pre-recorded audio file of WAV format that Azure Communication Services can access with support for authentication.
+## Prebuilt Neural Text to Speech voices
+Microsoft uses deep neural networks to overcome the limits of traditional speech synthesis with regard to stress and intonation in spoken language. Prosody prediction and voice synthesis occur simultaneously, resulting in a more fluid and natural sounding output. You can use these neural voices to make interactions with your chatbots and voice assistants more natural and engaging. There are over 100 prebuilt voices to choose from. Learn more about [Azure Text-to-Speech voices](../../../../articles/cognitive-services/Speech-Service/language-support.md).
## Common use cases
-The play action can be used in many ways, below are some examples of how developers may wish to use the play action in their applications.
+The play action can be used in many ways. Here are some examples of how developers may use the play action in their applications.
### Announcements Your application might want to play some sort of announcement when a participant joins or leaves the call, to notify other users.
In scenarios with IVRs and virtual assistants, you can use your application or b
The play action can also be used to play hold music for callers. This action can be set up in a loop so that the music keeps playing until an agent is available to assist the caller. ### Playing compliance messages
-As part of compliance requirements in various industries, vendors are expected to play legal or compliance messages to callers, for example, "This call will be recorded for quality purposes".
+As part of compliance requirements in various industries, vendors are expected to play legal or compliance messages to callers, for example, "This call is recorded for quality purposes."
+
+## Sample architecture for playing audio in call using Text-To-Speech (Public preview)
+
+![Diagram showing sample architecture for Play with AI.](./media/play-ai.png)
## Sample architecture for playing audio in a call
As part of compliance requirements in various industries, vendors are expected t
## Next Steps - Check out our how-to guide to learn [how-to play custom voice prompts](../../how-tos/call-automation/play-action.md) to users. - Learn about [usage and operational logs](../analytics/logs/call-automation-logs.md) published by call automation.
+- Learn about [gathering customer input](./recognize-action.md).
communication-services Recognize Action https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/call-automation/recognize-action.md
description: Conceptual information about using Recognize action to gather user
Previously updated : 09/16/2022 Last updated : 08/09/2023 # Gathering user input
-With the Recognize action developers will be able to enhance their IVR or contact center applications to gather user input. One of the most common scenarios of recognition is to play a message and request user input. This input is received in the form of DTMF (input via the digits on their calling device) which then allows the application to navigate the user to the next action.
+With the release of the ACS Call Automation Recognize action, developers can now enhance their IVR or contact center applications to recognize user input. One of the most common recognition scenarios is playing a message that prompts the user for a response; the application recognizes the response and then carries out a corresponding action. Input from callers can be received in several ways, including DTMF (user input via the digits on their calling device), speech, or a combination of both DTMF and speech.
+
+**Voice recognition with speech-to-text (Public Preview)**
+
+The [Azure Communication Services integration with Azure AI services](./azure-communication-services-azure-cognitive-services-integration.md) allows you, through the Recognize action, to analyze audio in real time and transcribe spoken words into text. Out of the box, Microsoft uses a Universal Language Model as a base model that is trained with Microsoft-owned data and reflects commonly used spoken language. This model is pretrained with dialects and phonetics representing various common domains. For more information about supported languages, see [Languages and voice support for the Speech service](../../../../articles/cognitive-services/Speech-Service/language-support.md).
+ **DTMF**+ Dual-tone multifrequency (DTMF) recognition is the process of understanding tones/sounds generated by a telephone when a number is pressed. Equipment at the receiving end listening for the specific tone then converts them into commands. These commands generally signal user intent when navigating a menu in an IVR scenario or in some cases can be used to capture important information that the user needs to provide via their phones keypad. **DTMF events and their associated tones**
Dual-tone multifrequency (DTMF) recognition is the process of understanding tone
## Common use cases
-The recognize action can be used for many reasons, below are a few examples of how developers can use the recognize action in their application.
+The recognize action can be used for many reasons. Here are a few examples of how developers can use the recognize action in their application.
### Improve user journey with self-service prompts - **Users can control the call** - By enabling input recognition you allow the caller to navigate your IVR menu and provide information that can be used to resolve their query. - **Gather user information** - By enabling input recognition your application can gather input from the callers. This can be information such as account numbers, credit card information, etc.
+- **Transcribe caller response** - With voice recognition you can collect user input, transcribe the audio to text, and analyze it to carry out a specific business action.
### Interrupt audio prompts **User can exit from an IVR menu and speak to a human agent** - With DTMF interruption your application can allow users to interrupt the flow of the IVR menu and be able to chat to a human agent.
+## Sample architecture for gathering user input in a call with voice recognition
+
+[ ![Diagram showing sample architecture for Recognize AI Action.](./media/recognize-ai-flow.png) ](./media/recognize-ai-flow.png#lightbox)
## Sample architecture for gathering user input in a call
communication-services Enable Closed Captions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/enable-closed-captions.md
# Enable Closed captions with Teams Interoperability - Closed captions are a textual representation of a voice or video conversation that is displayed to users in real-time. Azure Communication Services Closed captions offer developers the ability to allow users to select when they wish to toggle captions on or off. These captions are only available during the call/meeting for the user that has selected to enable captions, Azure Communication Services does **not** store these captions anywhere. Closed captions can be accessed through Azure Communication Services client-side SDKs for Web, Windows, iOS and Android. In this document, we're going to be looking at specifically Teams interoperability scenarios. For example, an Azure Communication Services user joins a Teams meeting and enabling captions or two Microsoft 365 users using Azure Communication Calling SDK to join a call or meeting.
communication-services Teams User Calling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/teams-user-calling.md
Last updated 12/01/2021
+ # Calling capabilities supported for Teams users in Calling SDK
communication-services Phone Number Management For Argentina https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-argentina.md
+
+ Title: Phone Number Management for Argentina
+
+description: Learn about subscription Eligibility and Number Capabilities for PSTN and SMS Numbers in Argentina.
+++++ Last updated : 03/30/2023+++++
+# Phone number management for Argentina
+Use the following tables to find all the relevant information on number availability, eligibility, and restrictions for phone numbers in Argentina.
+
+## Number types and capabilities availability
+
+| Number Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
+| :- | :- | :- | :- | : |
+| Toll-Free | - | - | - | Public Preview\* |
++
+\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
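As an illustration of acquiring one of these numbers programmatically, the sketch below searches for and purchases an Argentine toll-free number with inbound calling using `@azure/communication-phone-numbers`; the connection string is a placeholder and the request shape follows our understanding of the library:

```typescript
import { PhoneNumbersClient } from "@azure/communication-phone-numbers";

const client = new PhoneNumbersClient("<ACS_CONNECTION_STRING>");

async function acquireTollFreeNumber(): Promise<void> {
  // Argentina toll-free numbers currently support inbound calling only (see table above).
  const searchPoller = await client.beginSearchAvailablePhoneNumbers({
    countryCode: "AR",
    phoneNumberType: "tollFree",
    assignmentType: "application",
    capabilities: { calling: "inbound", sms: "none" },
    quantity: 1,
  });
  const search = await searchPoller.pollUntilDone();
  console.log(`Reserved ${search.phoneNumbers.join(", ")} (search ${search.searchId})`);

  // Purchasing consumes the reservation created by the search.
  const purchasePoller = await client.beginPurchasePhoneNumbers(search.searchId);
  await purchasePoller.pollUntilDone();
}
```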
++
+## Subscription eligibility
+
+To acquire a phone number, you need a paid Azure subscription; phone numbers can't be acquired with Azure free credits. Also, for regulatory reasons, phone number availability depends on your Azure subscription billing location.
+
+More details on eligible subscription types are as follows:
+
+| Number Type | Eligible Azure Agreement Type |
+| :- | :-- |
+| Toll-Free and Local (Geographic/National) | Modern Customer Agreement (Field and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement, Pay-As-You-Go |
+
+\** Applications from all other subscription types are reviewed and approved on a case-by-case basis. Reach out to acstns@microsoft.com for assistance with your application.
++
+## Azure subscription billing locations where Argentina phone numbers are available
+| Country/Region |
+| :- |
+|Australia|
+|Canada|
+|France|
+|Germany|
+|Italy|
+|Japan|
+|Spain|
+|United Kingdom|
+|United States|
+
+## Find information about other countries/regions
++
+## Next steps
+
+For more information about Azure Communication Services' telephony options, see the following pages:
+
+- [Learn more about Telephony](../telephony/telephony-concept.md)
+- Get a Telephony capable [phone number](../../quickstarts/telephony/get-phone-number.md)
communication-services Phone Number Management For Australia https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-australia.md
+
+ Title: Phone Number Management for Australia
+
+description: Learn about subscription Eligibility and Number Capabilities for PSTN and SMS Numbers in Australia.
+ Last updated : 03/30/2023
+# Phone number management for Australia
+Use the following tables to find relevant information about number availability, eligibility, and restrictions for phone numbers in Australia.
+
+## Number types and capabilities availability
+
+| Number Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
+| :- | :- | :- | :- | : |
+| Toll-Free |- | - | Public Preview | Public Preview\* |
+| Alphanumeric Sender ID\** | Public Preview | - | - | - |
+
+\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
+
+\** Please refer to [SMS Concepts page](../sms/concepts.md) for supported destinations for this service.
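Because alphanumeric sender ID is available here in public preview, a minimal sketch of sending an SMS with one via `@azure/communication-sms` follows; the sender ID, recipient, and connection string are placeholders:

```typescript
import { SmsClient } from "@azure/communication-sms";

const smsClient = new SmsClient("<ACS_CONNECTION_STRING>");

async function sendWithAlphanumericSenderId(): Promise<void> {
  // `from` carries the alphanumeric sender ID instead of a purchased phone number.
  const results = await smsClient.send({
    from: "CONTOSO",
    to: ["+61412345678"],
    message: "Your appointment is confirmed for tomorrow at 10:00.",
  });

  for (const result of results) {
    console.log(`${result.to}: ${result.successful ? "sent" : `failed (${result.errorMessage})`}`);
  }
}
```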
+
+## Subscription eligibility
+
+To acquire a phone number, you need a paid Azure subscription; phone numbers can't be acquired with Azure free credits. Also, for regulatory reasons, phone number availability depends on your Azure subscription billing location.
+
+More details on eligible subscription types are as follows:
+
+| Number Type | Eligible Azure Agreement Type |
+| :- | :-- |
+| Toll-Free and Local (Geographic/National) | Modern Customer Agreement (Field and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement, Pay-As-You-Go |
+
+\** Applications from all other subscription types are reviewed and approved on a case-by-case basis. Reach out to acstns@microsoft.com for assistance with your application.
++
+## Azure subscription billing locations where Australia phone numbers are available
+| Country/Region |
+| :- |
+| Australia |
+
+## Find information about other countries/regions
++
+## Next steps
+
+For more information about Azure Communication Services' telephony options, see the following pages:
+
+- [Learn more about Telephony](../telephony/telephony-concept.md)
+- Get a Telephony capable [phone number](../../quickstarts/telephony/get-phone-number.md)
communication-services Phone Number Management For Austria https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-austria.md
+
+ Title: Phone Number Management for Austria
+
+description: Learn about subscription Eligibility and Number Capabilities for PSTN and SMS Numbers in Austria.
+ Last updated : 03/30/2023
+# Phone number management for Austria
+Use the following tables to find relevant information about number availability, eligibility, and restrictions for phone numbers in Austria.
+
+## Number types and capabilities availability
+
+| Number Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
+| :- | :- | :- | :- | : |
+| Toll-Free | - | - | General Availability | General Availability\* |
+| Local | - | - | General Availability | General Availability\* |
+| Alphanumeric Sender ID\** | Public Preview | - | - | - |
+
+\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
+
+\** Please refer to [SMS Concepts page](../sms/concepts.md) for supported destinations for this service.
+
+## Subscription eligibility
+
+To acquire a phone number, you need a paid Azure subscription; phone numbers can't be acquired with Azure free credits. Also, for regulatory reasons, phone number availability depends on your Azure subscription billing location.
+
+More details on eligible subscription types are as follows:
+
+| Number Type | Eligible Azure Agreement Type |
+| :- | :-- |
+| Toll-Free and Local (Geographic/National) | Modern Customer Agreement (Field and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement, Pay-As-You-Go |
+
+\** Applications from all other subscription types are reviewed and approved on a case-by-case basis. Reach out to acstns@microsoft.com for assistance with your application.
++
+## Azure subscription billing locations where Austria phone numbers are available
+| Country/Region |
+| :- |
+|Austria|
++
+## Find information about other countries/regions
++
+## Next steps
+
+For more information about Azure Communication Services' telephony options, see the following pages:
+
+- [Learn more about Telephony](../telephony/telephony-concept.md)
+- Get a Telephony capable [phone number](../../quickstarts/telephony/get-phone-number.md)
communication-services Phone Number Management For Belgium https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-belgium.md
+
+ Title: Phone Number Management for Belgium
+
+description: Learn about subscription Eligibility and Number Capabilities for PSTN and SMS Numbers in Belgium.
+ Last updated : 03/30/2023
+# Phone number management for Belgium
+Use the following tables to find relevant information about number availability, eligibility, and restrictions for phone numbers in Belgium.
+
+## Number types and capabilities availability
+
+| Number Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
+| :- | :- | :- | :- | : |
+| Toll-Free | - | - | General Availability | General Availability\* |
+| Local | - | - | General Availability | General Availability\* |
+
+\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
+
+## Subscription eligibility
+
+To acquire a phone number, you need a paid Azure subscription; phone numbers can't be acquired with Azure free credits. Also, for regulatory reasons, phone number availability depends on your Azure subscription billing location.
+
+More details on eligible subscription types are as follows:
+
+| Number Type | Eligible Azure Agreement Type |
+| :- | :-- |
+| Toll-Free and Local (Geographic/National) | Modern Customer Agreement (Field and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement, Pay-As-You-Go |
+
+\** Applications from all other subscription types are reviewed and approved on a case-by-case basis. Reach out to acstns@microsoft.com for assistance with your application.
++
+## Azure subscription billing locations where Belgium phone numbers are available
+| Country/Region |
+| :- |
+|Belgium|
++
+## Find information about other countries/regions
+++
+## Next steps
+
+For more information about Azure Communication Services' telephony options, see the following pages:
+
+- [Learn more about Telephony](../telephony/telephony-concept.md)
+- Get a Telephony capable [phone number](../../quickstarts/telephony/get-phone-number.md)
communication-services Phone Number Management For Brazil https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-brazil.md
+
+ Title: Phone Number Management for Brazil
+
+description: Learn about subscription Eligibility and Number Capabilities for PSTN and SMS Numbers in Brazil.
+ Last updated : 03/30/2023
+# Phone number management for Brazil
+Use the following tables to find relevant information about number availability, eligibility, and restrictions for phone numbers in Brazil.
+
+## Number types and capabilities availability
+
+| Number Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
+| :- | :- | :- | :- | : |
+| Toll-Free | - | - | - | Public Preview\* |
++
+\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
++
+## Subscription eligibility
+
+To acquire a phone number, you need a paid Azure subscription; phone numbers can't be acquired with Azure free credits. Also, for regulatory reasons, phone number availability depends on your Azure subscription billing location.
+
+More details on eligible subscription types are as follows:
+
+| Number Type | Eligible Azure Agreement Type |
+| :- | :-- |
+| Toll-Free and Local (Geographic/National) | Modern Customer Agreement (Field and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement, Pay-As-You-Go |
+
+\** Applications from all other subscription types are reviewed and approved on a case-by-case basis. Reach out to acstns@microsoft.com for assistance with your application.
++
+## Azure subscription billing locations where Brazil phone numbers are available
+| Country/Region |
+| :- |
+|Australia|
+|Canada|
+|France|
+|Germany|
+|Italy|
+|Japan|
+|Spain|
+|United Kingdom|
+|United States|
++
+## Find information about other countries/regions
+++
+## Next steps
+
+For more information about Azure Communication Services' telephony options, see the following pages:
+
+- [Learn more about Telephony](../telephony/telephony-concept.md)
+- Get a Telephony capable [phone number](../../quickstarts/telephony/get-phone-number.md)
communication-services Phone Number Management For Canada https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-canada.md
+
+ Title: Phone Number Management for Canada
+
+description: Learn about subscription Eligibility and Number Capabilities for PSTN and SMS Numbers in Canada.
+ Last updated : 03/30/2023
+# Phone number management for Canada
+Use the following tables to find relevant information about number availability, eligibility, and restrictions for phone numbers in Canada.
+
+## Number types and capabilities availability
+
+| Number Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
+| :- | :- | :- | :- | : |
+| Toll-Free |General Availability | General Availability | General Availability | General Availability\* |
+| Local | - | - | General Availability | General Availability\* |
+| Alphanumeric Sender ID\** | Public Preview | - | - | - |
+
+\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
+
+## Subscription eligibility
+
+To acquire a phone number, you need a paid Azure subscription; phone numbers can't be acquired with Azure free credits. Also, for regulatory reasons, phone number availability depends on your Azure subscription billing location.
+
+More details on eligible subscription types are as follows:
+
+| Number Type | Eligible Azure Agreement Type |
+| :- | :-- |
+| Toll-Free and Local (Geographic) | Modern Customer Agreement (Field and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement*, Pay-As-You-Go |
+| Short-Codes | Modern Customer Agreement (Field Led), Enterprise Agreement**, Pay-As-You-Go |
+| Alphanumeric Sender ID | Modern Customer Agreement (Field Led and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement**, Pay-As-You-Go |
+
+\** Applications from all other subscription types are reviewed and approved on a case-by-case basis. Reach out to acstns@microsoft.com for assistance with your application.
++
+## Azure subscription billing locations where Canada phone numbers are available
+| Country/Region |
+| :- |
+|Canada|
+|Denmark|
+|Ireland|
+|Italy|
+|Puerto Rico|
+|Sweden|
+|United Kingdom|
+|United States|
+
+## Find information about other countries/regions
++
+## Next steps
+
+For more information about Azure Communication Services' telephony options, see the following pages:
+
+- [Learn more about Telephony](../telephony/telephony-concept.md)
+- Get a Telephony capable [phone number](../../quickstarts/telephony/get-phone-number.md)
communication-services Phone Number Management For Chile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-chile.md
+
+ Title: Phone Number Management for Chile
+
+description: Learn about subscription Eligibility and Number Capabilities for PSTN and SMS Numbers in Chile.
+ Last updated : 03/30/2023
+# Phone number management for Chile
+Use the following tables to find relevant information about number availability, eligibility, and restrictions for phone numbers in Chile.
+
+## Number types and capabilities availability
+
+| Number Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
+| :- | :- | :- | :- | : |
+| Toll-Free | - | - | - | Public Preview\* |
++
+\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
++
+## Subscription eligibility
+
+To acquire a phone number, you need a paid Azure subscription; phone numbers can't be acquired with Azure free credits. Also, for regulatory reasons, phone number availability depends on your Azure subscription billing location.
+
+More details on eligible subscription types are as follows:
+
+| Number Type | Eligible Azure Agreement Type |
+| :- | :-- |
+| Toll-Free and Local (Geographic/National) | Modern Customer Agreement (Field and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement, Pay-As-You-Go |
+
+\** Applications from all other subscription types are reviewed and approved on a case-by-case basis. Reach out to acstns@microsoft.com for assistance with your application.
++
+## Azure subscription billing locations where Chile phone numbers are available
+| Country/Region |
+| :- |
+|Australia|
+|Canada|
+|France|
+|Germany|
+|Italy|
+|Japan|
+|Spain|
+|United Kingdom|
+|United States|
++
+## Find information about other countries/regions
++
+## Next steps
+
+For more information about Azure Communication Services' telephony options, see the following pages:
+
+- [Learn more about Telephony](../telephony/telephony-concept.md)
+- Get a Telephony capable [phone number](../../quickstarts/telephony/get-phone-number.md)
communication-services Phone Number Management For China https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-china.md
+
+ Title: Phone Number Management for China
+
+description: Learn about subscription Eligibility and Number Capabilities for PSTN and SMS Numbers in China.
+ Last updated : 03/30/2023
+# Phone number management for China
+Use the following tables to find relevant information about number availability, eligibility, and restrictions for phone numbers in China.
+
+## Number types and capabilities availability
+
+| Number Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
+| :- | :- | :- | :- | : |
+| Toll-Free | - | - | - | Public Preview\* |
++
+\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
++
+## Subscription eligibility
+
+To acquire a phone number, you need a paid Azure subscription; phone numbers can't be acquired with Azure free credits. Also, for regulatory reasons, phone number availability depends on your Azure subscription billing location.
+
+More details on eligible subscription types are as follows:
+
+| Number Type | Eligible Azure Agreement Type |
+| :- | :-- |
+| Toll-Free and Local (Geographic/National) | Modern Customer Agreement (Field and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement, Pay-As-You-Go |
+
+\** Applications from all other subscription types are reviewed and approved on a case-by-case basis. Reach out to acstns@microsoft.com for assistance with your application.
++
+## Azure subscription billing locations where China phone numbers are available
+| Country/Region |
+| :- |
+|Australia|
+|Canada|
+|France|
+|Germany|
+|Italy|
+|Japan|
+|Spain|
+|United Kingdom|
+|United States|
++
+## Find information about other countries/regions
+++
+## Next steps
+
+For more information about Azure Communication Services' telephony options, see the following pages:
+
+- [Learn more about Telephony](../telephony/telephony-concept.md)
+- Get a Telephony capable [phone number](../../quickstarts/telephony/get-phone-number.md)
communication-services Phone Number Management For Colombia https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-colombia.md
+
+ Title: Phone Number Management for Colombia
+
+description: Learn about subscription Eligibility and Number Capabilities for PSTN and SMS Numbers in Colombia.
+ Last updated : 03/30/2023
+# Phone number management for Colombia
+Use the following tables to find relevant information about number availability, eligibility, and restrictions for phone numbers in Colombia.
+
+## Number types and capabilities availability
+
+| Number Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
+| :- | :- | :- | :- | : |
+| Toll-Free | - | - | - | Public Preview\* |
++
+\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
++
+## Subscription eligibility
+
+To acquire a phone number, you need a paid Azure subscription; phone numbers can't be acquired with Azure free credits. Also, for regulatory reasons, phone number availability depends on your Azure subscription billing location.
+
+More details on eligible subscription types are as follows:
+
+| Number Type | Eligible Azure Agreement Type |
+| :- | :-- |
+| Toll-Free and Local (Geographic/National) | Modern Customer Agreement (Field and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement, Pay-As-You-Go |
+
+\** Applications from all other subscription types are reviewed and approved on a case-by-case basis. Reach out to acstns@microsoft.com for assistance with your application.
++
+## Azure subscription billing locations where Colombia phone numbers are available
+| Country/Region |
+| :- |
+|Australia|
+|Canada|
+|France|
+|Germany|
+|Italy|
+|Japan|
+|Spain|
+|United Kingdom|
+|United States|
++
+## Find information about other countries/regions
+++
+## Next steps
+
+For more information about Azure Communication Services' telephony options, see the following pages:
+
+- [Learn more about Telephony](../telephony/telephony-concept.md)
+- Get a Telephony capable [phone number](../../quickstarts/telephony/get-phone-number.md)
communication-services Phone Number Management For Denmark https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-denmark.md
+
+ Title: Phone Number Management for Denmark
+
+description: Learn about subscription Eligibility and Number Capabilities for PSTN and SMS Numbers in Denmark.
+ Last updated : 03/30/2023
+# Phone number management for Denmark
+Use the following tables to find relevant information about number availability, eligibility, and restrictions for phone numbers in Denmark.
+
+## Number types and capabilities availability
+
+| Number Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
+| :- | :- | :- | :- | : |
+| Toll-Free | - | - | General Availability | General Availability\* |
+| Local | - | - | General Availability | General Availability\* |
+| Alphanumeric Sender ID\** | Public Preview | - | - | - |
+
+\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
+\** Please refer to [SMS Concepts page](../sms/concepts.md) for supported destinations for this service.
+
+## Subscription eligibility
+
+To acquire a phone number, you need a paid Azure subscription; phone numbers can't be acquired with Azure free credits. Also, for regulatory reasons, phone number availability depends on your Azure subscription billing location.
+
+More details on eligible subscription types are as follows:
+
+| Number Type | Eligible Azure Agreement Type |
+| :- | :-- |
+| Toll-Free and Local (Geographic/National) | Modern Customer Agreement (Field and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement, Pay-As-You-Go |
+
+\** Applications from all other subscription types are reviewed and approved on a case-by-case basis. Reach out to acstns@microsoft.com for assistance with your application.
++
+## Azure subscription billing locations where Denmark phone numbers are available
+| Country/Region |
+| :- |
+|Denmark|
+|Ireland|
+|Italy|
+|Sweden|
+|United States|
+
+## Find information about other countries/regions
++
+## Next steps
+
+For more information about Azure Communication Services' telephony options, see the following pages:
+
+- [Learn more about Telephony](../telephony/telephony-concept.md)
+- Get a Telephony capable [phone number](../../quickstarts/telephony/get-phone-number.md)
communication-services Phone Number Management For Estonia https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-estonia.md
+
+ Title: Phone Number Management for Estonia
+
+description: Learn about subscription Eligibility and Number Capabilities for PSTN and SMS Numbers in Estonia.
+ Last updated : 03/30/2023
+# Phone number management for Estonia
+Use the following tables to find relevant information about number availability, eligibility, and restrictions for phone numbers in Estonia.
+
+## Number types and capabilities availability
+
+| Number Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
+| :- | :- | :- | :- | : |
+| Alphanumeric Sender ID\* | Public Preview | - | - | - |
+
+\* Please refer to [SMS Concepts page](../sms/concepts.md) for supported destinations for this service.
++
+## Subscription eligibility
+
+To acquire a phone number, you need a paid Azure subscription; phone numbers can't be acquired with Azure free credits. Also, for regulatory reasons, phone number availability depends on your Azure subscription billing location.
+
+More details on eligible subscription types are as follows:
+
+| Number Type | Eligible Azure Agreement Type |
+| :- | :-- |
+| Alphanumeric Sender ID | Modern Customer Agreement (Field Led and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement*, Pay-As-You-Go |
+
+\* Applications from all other subscription types are reviewed and approved on a case-by-case basis. Reach out to acstns@microsoft.com for assistance with your application.
++
+## Azure subscription billing locations where Alphanumeric Sender ID is available
+| Country/Region |
+| :- |
+|Australia|
+|Austria|
+|Denmark|
+|Estonia|
+|France|
+|Germany|
+|Italy|
+|Latvia|
+|Lithuania|
+|Netherlands|
+|Poland|
+|Portugal|
+|Spain|
+|Sweden|
+|Switzerland|
+|United Kingdom|
+|United States|
++
+## Find information about other countries/regions
+++
+## Next steps
+
+For more information about Azure Communication Services' telephony options, see the following pages:
+
+- [Learn more about Telephony](../telephony/telephony-concept.md)
+- Get a Telephony capable [phone number](../../quickstarts/telephony/get-phone-number.md)
communication-services Phone Number Management For Finland https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-finland.md
+
+ Title: Phone Number Management for Finland
+
+description: Learn about subscription Eligibility and Number Capabilities for PSTN and SMS Numbers in Finland.
+ Last updated : 03/30/2023
+# Phone number management for Finland
+Use the following tables to find relevant information about number availability, eligibility, and restrictions for phone numbers in Finland.
+
+## Number types and capabilities availability
+
+| Number Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
+| :- | :- | :- | :- | : |
+| Toll-Free | - | - | - | Public Preview\* |
++
+\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
++
+## Subscription eligibility
+
+To acquire a phone number, you need a paid Azure subscription; phone numbers can't be acquired with Azure free credits. Also, for regulatory reasons, phone number availability depends on your Azure subscription billing location.
+
+More details on eligible subscription types are as follows:
+
+| Number Type | Eligible Azure Agreement Type |
+| :- | :-- |
+| Toll-Free and Local (Geographic/National) | Modern Customer Agreement (Field and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement, Pay-As-You-Go |
+
+\** Applications from all other subscription types are reviewed and approved on a case-by-case basis. Reach out to acstns@microsoft.com for assistance with your application.
++
+## Azure subscription billing locations where Finland phone numbers are available
+| Country/Region |
+| :- |
+|Australia|
+|Canada|
+|France|
+|Germany|
+|Italy|
+|Japan|
+|Spain|
+|United Kingdom|
+|United States|
++
+## Find information about other countries/regions
+++
+## Next steps
+
+For more information about Azure Communication Services' telephony options, see the following pages:
+
+- [Learn more about Telephony](../telephony/telephony-concept.md)
+- Get a Telephony capable [phone number](../../quickstarts/telephony/get-phone-number.md)
communication-services Phone Number Management For France https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-france.md
+
+ Title: Phone Number Management for France
+
+description: Learn about subscription Eligibility and Number Capabilities for PSTN and SMS Numbers in France.
+ Last updated : 03/30/2023
+# Phone number management for France
+Use the following tables to find relevant information about number availability, eligibility, and restrictions for phone numbers in France.
+
+## Number types and capabilities availability
+
+| Number Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
+| :- | :- | :- | :- | : |
+| Toll-Free |- | - | General Availability | General Availability\* |
+| Local | - | - | General Availability | General Availability\* |
+|Alphanumeric Sender ID\**|Public Preview|-|-|-|
+
+\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
+
+\** Please refer to [SMS Concepts page](../sms/concepts.md) for supported destinations for this service.
+
+## Subscription eligibility
+
+To acquire a phone number, you need a paid Azure subscription; phone numbers can't be acquired with Azure free credits. Also, for regulatory reasons, phone number availability depends on your Azure subscription billing location.
+
+More details on eligible subscription types are as follows:
+
+| Number Type | Eligible Azure Agreement Type |
+| :- | :-- |
+| Toll-Free and Local (Geographic/National) | Modern Customer Agreement (Field and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement, Pay-As-You-Go |
+| Alphanumeric Sender ID | Modern Customer Agreement (Field Led and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement**, Pay-As-You-Go |
+
+\** Applications from all other subscription types are reviewed and approved on a case-by-case basis. Reach out to acstns@microsoft.com for assistance with your application.
++
+## Azure subscription billing locations where France phone numbers are available
+| Country/Region |
+| :- |
+|France|
+|Italy|
+|United States|
++
+## Find information about other countries/regions
++
+## Next steps
+
+For more information about Azure Communication Services' telephony options, see the following pages:
+
+- [Learn more about Telephony](../telephony/telephony-concept.md)
+- Get a Telephony capable [phone number](../../quickstarts/telephony/get-phone-number.md)
communication-services Phone Number Management For Germany https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-germany.md
+
+ Title: Phone Number Management for Germany
+
+description: Learn about subscription Eligibility and Number Capabilities for PSTN and SMS Numbers in Germany.
+ Last updated : 03/30/2023
+# Phone number management for Germany
+Use the following tables to find relevant information about number availability, eligibility, and restrictions for phone numbers in Germany.
+
+## Number types and capabilities availability
+
+| Number Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
+| :- | :- | :- | :- | : |
+| Toll-Free |- | - | General Availability | General Availability\* |
+| Local | - | - | General Availability | General Availability\* |
+|Alphanumeric Sender ID\**|Public Preview|-|-|-|
+
+\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
+
+\** Please refer to [SMS Concepts page](../sms/concepts.md) for supported destinations for this service.
+
+## Subscription eligibility
+
+To acquire a phone number, you need a paid Azure subscription; phone numbers can't be acquired with Azure free credits. Also, for regulatory reasons, phone number availability depends on your Azure subscription billing location.
+
+More details on eligible subscription types are as follows:
+
+| Number Type | Eligible Azure Agreement Type |
+| :- | :-- |
+| Toll-Free and Local (Geographic/National) | Modern Customer Agreement (Field and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement, Pay-As-You-Go |
+| Alphanumeric Sender ID | Modern Customer Agreement (Field Led and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement**, Pay-As-You-Go |
+
+\** Applications from all other subscription types are reviewed and approved on a case-by-case basis. Reach out to acstns@microsoft.com for assistance with your application.
++
+## Azure subscription billing locations where Germany phone numbers are available
+| Country/Region |
+| :- |
+|Germany|
++
+## Find information about other countries/regions
+++
+## Next steps
+
+For more information about Azure Communication Services' telephony options, see the following pages:
+
+- [Learn more about Telephony](../telephony/telephony-concept.md)
+- Get a Telephony capable [phone number](../../quickstarts/telephony/get-phone-number.md)
communication-services Phone Number Management For Hong Kong https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-hong-kong.md
+
+ Title: Phone Number Management for Hong Kong
+
+description: Learn about subscription Eligibility and Number Capabilities for PSTN and SMS Numbers in Hong Kong.
+ Last updated : 03/30/2023
+# Phone number management for Hong Kong
+Use the following tables to find relevant information about number availability, eligibility, and restrictions for phone numbers in Hong Kong.
+
+## Number types and capabilities availability
+
+| Number Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
+| :- | :- | :- | :- | : |
+| Toll-Free | - | - | - | Public Preview\* |
++
+\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
++
+## Subscription eligibility
+
+To acquire a phone number, you need a paid Azure subscription; phone numbers can't be acquired with Azure free credits. Also, for regulatory reasons, phone number availability depends on your Azure subscription billing location.
+
+More details on eligible subscription types are as follows:
+
+| Number Type | Eligible Azure Agreement Type |
+| :- | :-- |
+| Toll-Free and Local (Geographic/National) | Modern Customer Agreement (Field and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement, Pay-As-You-Go |
+
+\** Applications from all other subscription types are reviewed and approved on a case-by-case basis. Reach out to acstns@microsoft.com for assistance with your application.
++
+## Azure subscription billing locations where Hong Kong phone numbers are available
+| Country/Region |
+| :- |
+|Australia|
+|Canada|
+|France|
+|Germany|
+|Italy|
+|Japan|
+|Spain|
+|United Kingdom|
+|United States|
++
+## Find information about other countries/regions
++
+## Next steps
+
+For more information about Azure Communication Services' telephony options, see the following pages:
+
+- [Learn more about Telephony](../telephony/telephony-concept.md)
+- Get a Telephony capable [phone number](../../quickstarts/telephony/get-phone-number.md)
communication-services Phone Number Management For Indonesia https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-indonesia.md
+
+ Title: Phone Number Management for Indonesia
+
+description: Learn about subscription Eligibility and Number Capabilities for PSTN and SMS Numbers in Indonesia.
+ Last updated : 03/30/2023
+# Phone number management for Indonesia
+Use the following tables to find relevant information about number availability, eligibility, and restrictions for phone numbers in Indonesia.
+
+## Number types and capabilities availability
+
+| Number Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
+| :- | :- | :- | :- | : |
+| Toll-Free | - | - | - | Public Preview\* |
++
+\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
++
+## Subscription eligibility
+
+To acquire a phone number, you need a paid Azure subscription; phone numbers can't be acquired with Azure free credits. Also, for regulatory reasons, phone number availability depends on your Azure subscription billing location.
+
+More details on eligible subscription types are as follows:
+
+| Number Type | Eligible Azure Agreement Type |
+| :- | :-- |
+| Toll-Free and Local (Geographic/National) | Modern Customer Agreement (Field and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement, Pay-As-You-Go |
+
+\** Applications from all other subscription types are reviewed and approved on a case-by-case basis. Reach out to acstns@microsoft.com for assistance with your application.
++
+## Azure subscription billing locations where Indonesia phone numbers are available
+| Country/Region |
+| :- |
+|Australia|
+|Canada|
+|France|
+|Germany|
+|Italy|
+|Japan|
+|Spain|
+|United Kingdom|
+|United States|
++
+## Find information about other countries/regions
+++
+## Next steps
+
+For more information about Azure Communication Services' telephony options, see the following pages:
+
+- [Learn more about Telephony](../telephony/telephony-concept.md)
+- Get a Telephony capable [phone number](../../quickstarts/telephony/get-phone-number.md)
communication-services Phone Number Management For Ireland https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-ireland.md
+
+ Title: Phone Number Management for Ireland
+
+description: Learn about subscription Eligibility and Number Capabilities for PSTN and SMS Numbers in Ireland.
+ Last updated : 03/30/2023
+# Phone number management for Ireland
+Use the following tables to find relevant information about number availability, eligibility, and restrictions for phone numbers in Ireland.
+
+## Number types and capabilities availability
+
+| Number Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
+| :- | :- | :- | :- | : |
+| Toll-Free |- | - | General Availability | General Availability\* |
+| Local | - | - | General Availability | General Availability\* |
+|Alphanumeric Sender ID\**|Public Preview|-|-|-|
+
+\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
+
+\** Please refer to [SMS Concepts page](../sms/concepts.md) for supported destinations for this service.
+
+## Subscription eligibility
+
+To acquire a phone number, you need a paid Azure subscription; phone numbers can't be acquired with Azure free credits. Also, for regulatory reasons, phone number availability depends on your Azure subscription billing location.
+
+More details on eligible subscription types are as follows:
+
+| Number Type | Eligible Azure Agreement Type |
+| :- | :-- |
+| Toll-Free and Local (Geographic/National) | Modern Customer Agreement (Field and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement, Pay-As-You-Go |
+| Alphanumeric Sender ID | Modern Customer Agreement (Field Led and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement**, Pay-As-You-Go |
+
+\** Applications from all other subscription types are reviewed and approved on a case-by-case basis. Reach out to acstns@microsoft.com for assistance with your application.
++
+## Azure subscription billing locations where Ireland phone numbers are available
+| Country/Region |
+| :- |
+|Canada|
+|Denmark|
+|Ireland|
+|Italy|
+|Puerto Rico|
+|Sweden|
+|United Kingdom|
+|United States|
++
+## Find information about other countries/regions
+++
+## Next steps
+
+For more information about Azure Communication Services' telephony options, see the following pages:
+
+- [Learn more about Telephony](../telephony/telephony-concept.md)
+- Get a Telephony capable [phone number](../../quickstarts/telephony/get-phone-number.md)
communication-services Phone Number Management For Israel https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-israel.md
+
+ Title: Phone Number Management for Israel
+
+description: Learn about subscription Eligibility and Number Capabilities for PSTN and SMS Numbers in Israel.
+ Last updated : 03/30/2023
+# Phone number management for Israel
+Use the following tables to find relevant information about number availability, eligibility, and restrictions for phone numbers in Israel.
+
+## Number types and capabilities availability
+
+| Number Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
+| :- | :- | :- | :- | : |
+| Toll-Free | - | - | - | Public Preview\* |
++
+\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
++
+## Subscription eligibility
+
+To acquire a phone number, you need a paid Azure subscription; phone numbers can't be acquired with Azure free credits. Also, for regulatory reasons, phone number availability depends on your Azure subscription billing location.
+
+More details on eligible subscription types are as follows:
+
+| Number Type | Eligible Azure Agreement Type |
+| :- | :-- |
+| Toll-Free and Local (Geographic/National) | Modern Customer Agreement (Field and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement, Pay-As-You-Go |
+
+\** Applications from all other subscription types are reviewed and approved on a case-by-case basis. Reach out to acstns@microsoft.com for assistance with your application.
++
+## Azure subscription billing locations where Israel phone numbers are available
+| Country/Region |
+| :- |
+|Australia|
+|Canada|
+|France|
+|Germany|
+|Italy|
+|Japan|
+|Spain|
+|United Kingdom|
+|United States|
++
+## Find information about other countries/regions
+++
+## Next steps
+
+For more information about Azure Communication Services' telephony options, see the following pages:
+
+- [Learn more about Telephony](../telephony/telephony-concept.md)
+- Get a Telephony capable [phone number](../../quickstarts/telephony/get-phone-number.md)
communication-services Phone Number Management For Italy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-italy.md
+
+ Title: Phone Number Management for Italy
+
+description: Learn about subscription Eligibility and Number Capabilities for PSTN and SMS Numbers in Italy.
+ Last updated : 03/30/2023
+# Phone number management for Italy
+Use the following tables to find relevant information about number availability, eligibility, and restrictions for phone numbers in Italy.
+
+## Number types and capabilities availability
+
+| Number Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
+| :- | :- | :- | :- | : |
+| Toll-Free*** |- | - | General Availability | General Availability\* |
+| Local*** | - | - | General Availability | General Availability\* |
+|Alphanumeric Sender ID\**|Public Preview|-|-|-|
+
+\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
+
+\** Please refer to [SMS Concepts page](../sms/concepts.md) for supported destinations for this service.
+
+\*** Phone numbers from Italy can only be purchased for own use. Reselling or suballocating to another party is not allowed.
+
+## Subscription eligibility
+
+To acquire a phone number, you need a paid Azure subscription; phone numbers can't be acquired with Azure free credits. Also, for regulatory reasons, phone number availability depends on your Azure subscription billing location.
+
+More details on eligible subscription types are as follows:
+
+| Number Type | Eligible Azure Agreement Type |
+| :- | :-- |
+| Toll-Free and Local (Geographic/National) | Modern Customer Agreement (Field and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement*, Pay-As-You-Go |
+| Alphanumeric Sender ID | Modern Customer Agreement (Field Led and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement**, Pay-As-You-Go |
+
+\* In some countries/regions, number purchases are allowed only for own use. Reselling or suballocating to other parties is not allowed. Because of this, purchases for CSP and LSP customers are not allowed.
+
+\** Applications from all other subscription types are reviewed and approved on a case-by-case basis. Reach out to acstns@microsoft.com for assistance with your application.
++
+## Azure subscription billing locations where Italy phone numbers are available
+| Country/Region |
+| :- |
+|Canada|
+|Denmark|
+|Ireland|
+|Italy|
+|Puerto Rico|
+|Sweden|
+|United Kingdom|
+|United States|
++
+## Find information about other countries/regions
++
+## Next steps
+
+For more information about Azure Communication Services' telephony options, see the following pages:
+
+- [Learn more about Telephony](../telephony/telephony-concept.md)
+- Get a Telephony capable [phone number](../../quickstarts/telephony/get-phone-number.md)
communication-services Phone Number Management For Japan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-japan.md
+
+ Title: Phone Number Management for Japan
+
+description: Learn about subscription Eligibility and Number Capabilities for PSTN and SMS Numbers in Japan.
+ Last updated : 03/30/2023
+# Phone number management for Japan
+Use the following tables to find relevant information about number availability, eligibility, and restrictions for phone numbers in Japan.
+
+## Number types and capabilities availability
+
+| Number Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
+| :- | :- | :- | :- | : |
+| Toll-Free |- | - | Public Preview | Public Preview\* |
+| National | - | - | Public Preview | Public Preview\* |
+
+\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
+
+## Subscription eligibility
+
+To acquire a phone number, you need a paid Azure subscription; phone numbers can't be acquired with Azure free credits. Also, for regulatory reasons, phone number availability depends on your Azure subscription billing location.
+
+More details on eligible subscription types are as follows:
+
+| Number Type | Eligible Azure Agreement Type |
+| :- | :-- |
+| Toll-Free and Local (Geographic/National) | Modern Customer Agreement (Field and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement, Pay-As-You-Go |
+
+\** Applications from all other subscription types are reviewed and approved on a case-by-case basis. Reach out to acstns@microsoft.com for assistance with your application.
++
+## Azure subscription billing locations where Japan phone numbers are available
+| Country/Region |
+| :- |
+|Canada|
+|Denmark|
+|Ireland|
+|Italy|
+|Puerto Rico|
+|Sweden|
+|United Kingdom|
+|United States|
+
+## Find information about other countries/regions
+++
+## Next steps
+
+For more information about Azure Communication Services' telephony options, see the following pages:
+
+- [Learn more about Telephony](../telephony/telephony-concept.md)
+- Get a Telephony capable [phone number](../../quickstarts/telephony/get-phone-number.md)
communication-services Phone Number Management For Latvia https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-latvia.md
+
+ Title: Phone Number Management for Latvia
+
+description: Learn about subscription Eligibility and Number Capabilities for PSTN and SMS Numbers in Latvia.
+ Last updated : 03/30/2023
+# Phone number management for Latvia
+Use the following tables to find relevant information about number availability, eligibility, and restrictions for phone numbers in Latvia.
+
+## Number types and capabilities availability
+
+| Number Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
+| :- | :- | :- | :- | : |
+| Alphanumeric Sender ID\* | Public Preview | - | - | - |
++
+\* Please refer to [SMS Concepts page](../sms/concepts.md) for supported destinations for this service.
++
+## Subscription eligibility
+
+To acquire a phone number, you need a paid Azure subscription; phone numbers can't be acquired with Azure free credits. Also, for regulatory reasons, phone number availability depends on your Azure subscription billing location.
+
+More details on eligible subscription types are as follows:
+
+| Number Type | Eligible Azure Agreement Type |
+| :- | :-- |
+| Alphanumeric Sender ID | Modern Customer Agreement (Field Led and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement*, Pay-As-You-Go |
+
+\* Applications from all other subscription types are reviewed and approved on a case-by-case basis. Reach out to acstns@microsoft.com for assistance with your application.
++
+## Azure subscription billing locations where Alphanumeric Sender ID is available
+| Country/Region |
+| :- |
+|Australia|
+|Austria|
+|Denmark|
+|Estonia|
+|France|
+|Germany|
+|Italy|
+|Latvia|
+|Lithuania|
+|Netherlands|
+|Poland|
+|Portugal|
+|Spain|
+|Sweden|
+|Switzerland|
+|United Kingdom|
+|United States|
++
+## Find information about other countries/regions
+++
+## Next steps
+
+For more information about Azure Communication Services' telephony options, see the following pages:
+
+- [Learn more about Telephony](../telephony/telephony-concept.md)
+- Get a Telephony capable [phone number](../../quickstarts/telephony/get-phone-number.md)
communication-services Phone Number Management For Lithuania https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-lithuania.md
+
+ Title: Phone Number Management for Lithuania
+
+description: Learn about subscription Eligibility and Number Capabilities for PSTN and SMS Numbers in Lithuania.
+ Last updated : 03/30/2023
+# Phone number management for Lithuania
+Use the following tables to find relevant information about number availability, eligibility, and restrictions for phone numbers in Lithuania.
+
+## Number types and capabilities availability
+
+| Number Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
+| :- | :- | :- | :- | : |
+| Alphanumeric Sender ID\* | Public Preview | - | - | - |
++
+\* Please refer to [SMS Concepts page](../sms/concepts.md) for supported destinations for this service.
++
+## Subscription eligibility
+
+To acquire a phone number, you need a paid Azure subscription; phone numbers can't be acquired with Azure free credits. Also, for regulatory reasons, phone number availability depends on your Azure subscription billing location.
+
+More details on eligible subscription types are as follows:
+
+| Number Type | Eligible Azure Agreement Type |
+| :- | :-- |
+| Alphanumeric Sender ID | Modern Customer Agreement (Field Led and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement*, Pay-As-You-Go |
+
+\* Applications from all other subscription types are reviewed and approved on a case-by-case basis. Reach out to acstns@microsoft.com for assistance with your application.
++
+## Azure subscription billing locations where Alphanumeric Sender ID is available
+| Country/Region |
+| :- |
+|Australia|
+|Austria|
+|Denmark|
+|Estonia|
+|France|
+|Germany|
+|Italy|
+|Latvia|
+|Lithuania|
+|Netherlands|
+|Poland|
+|Portugal|
+|Spain|
+|Sweden|
+|Switzerland|
+|United Kingdom|
+|United States|
++
+## Find information about other countries/regions
+++
+## Next steps
+
+For more information about Azure Communication Services' telephony options, see the following pages:
+
+- [Learn more about Telephony](../telephony/telephony-concept.md)
+- Get a Telephony capable [phone number](../../quickstarts/telephony/get-phone-number.md)
communication-services Phone Number Management For Luxembourg https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-luxembourg.md
+
+ Title: Phone Number Management for Luxembourg
+
+description: Learn about subscription Eligibility and Number Capabilities for PSTN and SMS Numbers in Luxembourg.
+ Last updated : 03/30/2023
+# Phone number management for Luxembourg
+Use the following tables to find relevant information about number availability, eligibility, and restrictions for phone numbers in Luxembourg.
+
+## Number types and capabilities availability
+
+| Number Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
+| :- | :- | :- | :- | : |
+| Toll-Free |- | - | General Availability | General Availability\* |
+| Local | - | - | General Availability | General Availability\* |
++
+\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
+
+
+## Subscription eligibility
+
+To acquire a phone number, you need a paid Azure subscription; phone numbers can't be acquired with Azure free credits. Also, for regulatory reasons, phone number availability depends on your Azure subscription billing location.
+
+More details on eligible subscription types are as follows:
+
+| Number Type | Eligible Azure Agreement Type |
+| :- | :-- |
+| Toll-Free and Local (Geographic/National) | Modern Customer Agreement (Field and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement, Pay-As-You-Go |
++
+## Azure subscription billing locations where Luxembourg phone numbers are available
+| Country/Region |
+| :- |
+|Luxembourg|
++
+## Find information about other countries/regions
+++
+## Next steps
+
+For more information about Azure Communication Services' telephony options, see the following pages:
+
+- [Learn more about Telephony](../telephony/telephony-concept.md)
+- Get a Telephony capable [phone number](../../quickstarts/telephony/get-phone-number.md)
communication-services Phone Number Management For Malaysia https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-malaysia.md
+
+ Title: Phone Number Management for Malaysia
+
+description: Learn about subscription Eligibility and Number Capabilities for PSTN and SMS Numbers in Malaysia.
+ Last updated : 03/30/2023
+# Phone number management for Malaysia
+Use the following tables to find relevant information about number availability, eligibility, and restrictions for phone numbers in Malaysia.
+
+## Number types and capabilities availability
+
+| Number Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
+| :- | :- | :- | :- | : |
+| Toll-Free | - | - | - | Public Preview\* |
++
+\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
++
+## Subscription eligibility
+
+To acquire a phone number, you need a paid Azure subscription; phone numbers can't be acquired with Azure free credits. Also, for regulatory reasons, phone number availability depends on your Azure subscription billing location.
+
+More details on eligible subscription types are as follows:
+
+| Number Type | Eligible Azure Agreement Type |
+| :- | :-- |
+| Toll-Free and Local (Geographic/National) | Modern Customer Agreement (Field and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement, Pay-As-You-Go |
+
+\** Applications from all other subscription types are reviewed and approved on a case-by-case basis. Reach out to acstns@microsoft.com for assistance with your application.
++
+## Azure subscription billing locations where Malaysia phone numbers are available
+| Country/Region |
+| :- |
+|Australia|
+|Canada|
+|France|
+|Germany|
+|Italy|
+|Japan|
+|Spain|
+|United Kingdom|
+|United States|
++
+## Find information about other countries/regions
+++
+## Next steps
+
+For more information about Azure Communication Services' telephony options, see the following pages:
+
+- [Learn more about Telephony](../telephony/telephony-concept.md)
+- Get a Telephony capable [phone number](../../quickstarts/telephony/get-phone-number.md)
communication-services Phone Number Management For Mexico https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-mexico.md
+
+ Title: Phone Number Management for Mexico
+
+description: Learn about subscription Eligibility and Number Capabilities for PSTN and SMS Numbers in Mexico.
+ Last updated : 03/30/2023
+# Phone number management for Mexico
+Use the following tables to find relevant information about number availability, eligibility, and restrictions for phone numbers in Mexico.
+
+## Number types and capabilities availability
+
+| Number Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
+| :- | :- | :- | :- | : |
+| Toll-Free | - | - | - | Public Preview\* |
++
+\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
++
+## Subscription eligibility
+
+To acquire a phone number, you need a paid Azure subscription; phone numbers can't be acquired with Azure free credits. Also, for regulatory reasons, phone number availability depends on your Azure subscription billing location.
+
+More details on eligible subscription types are as follows:
+
+| Number Type | Eligible Azure Agreement Type |
+| :- | :-- |
+| Toll-Free and Local (Geographic/National) | Modern Customer Agreement (Field and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement, Pay-As-You-Go |
+
+\** Applications from all other subscription types are reviewed and approved on a case-by-case basis. Reach out to acstns@microsoft.com for assistance with your application.
++
+## Azure subscription billing locations where Mexico phone numbers are available
+| Country/Region |
+| :- |
+|Australia|
+|Canada|
+|France|
+|Germany|
+|Italy|
+|Japan|
+|Spain|
+|United Kingdom|
+|United States|
++
+## Find information about other countries/regions
+++
+## Next steps
+
+For more information about Azure Communication Services' telephony options, see the following pages:
+
+- [Learn more about Telephony](../telephony/telephony-concept.md)
+- Get a telephony-capable [phone number](../../quickstarts/telephony/get-phone-number.md)
communication-services Phone Number Management For Netherlands https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-netherlands.md
+
+ Title: Phone Number Management for Netherlands
+
+description: Learn about subscription Eligibility and Number Capabilities for PSTN and SMS Numbers in Netherlands.
+++++ Last updated : 03/30/2023+++++
+# Phone number management for Netherlands
+Use the following tables to find information about number availability, eligibility, and restrictions for phone numbers in the Netherlands.
+
+## Number types and capabilities availability
+
+| Number Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
+| :- | :- | :- | :- | :- |
+| Toll-Free | - | - | General Availability | General Availability\* |
+| Local | - | - | General Availability | General Availability\* |
+| Alphanumeric Sender ID\** | Public Preview | - | - | - |
+
+\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
+
+\** Please refer to [SMS Concepts page](../sms/concepts.md) for supported destinations for this service.
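+
+For example, a minimal sketch of sending with an alphanumeric sender ID, assuming the `azure-communication-sms` Python package, a connection-string environment variable, and placeholder sender and recipient values:
+
+```python
+import os
+
+from azure.communication.sms import SmsClient
+
+client = SmsClient.from_connection_string(
+    os.environ["COMMUNICATION_SERVICES_CONNECTION_STRING"]
+)
+
+# With an alphanumeric sender ID enabled on the resource, `from_` carries the
+# brand string instead of an E.164 number. "CONTOSO" and the recipient are
+# placeholders for this sketch.
+responses = client.send(
+    from_="CONTOSO",
+    to="+31612345678",
+    message="Your verification code is 123456",
+)
+for response in responses:
+    print(response.to, response.successful, response.message_id)
+```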
+
+## Subscription eligibility
+
+To acquire a phone number, you need a paid Azure subscription. Phone numbers can't be acquired with Azure free credits. Also, for regulatory reasons, phone number availability depends on your Azure subscription billing location.
+
+More details on eligible subscription types are as follows:
+
+| Number Type | Eligible Azure Agreement Type |
+| :- | :-- |
+| Toll-Free and Local (Geographic/National) | Modern Customer Agreement (Field and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement, Pay-As-You-Go |
+| Alphanumeric Sender ID | Modern Customer Agreement (Field Led and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement**, Pay-As-You-Go |
+
+\** Applications from all other subscription types are reviewed and approved on a case-by-case basis. Reach out to acstns@microsoft.com for assistance with your application.
++
+## Azure subscription billing locations where Netherlands phone numbers are available
+| Country/Region |
+| :- |
+|Netherlands|
+|United States|
++
+## Find information about other countries/regions
++
+## Next steps
+
+For more information about Azure Communication Services' telephony options, see the following pages:
+
+- [Learn more about Telephony](../telephony/telephony-concept.md)
+- Get a telephony-capable [phone number](../../quickstarts/telephony/get-phone-number.md)
communication-services Phone Number Management For New Zealand https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-new-zealand.md
+
+ Title: Phone Number Management for New Zealand
+
+description: Learn about subscription Eligibility and Number Capabilities for PSTN and SMS Numbers in New Zealand.
+++++ Last updated : 03/30/2023+++++
+# Phone number management for New Zealand
+Use the following tables to find information about number availability, eligibility, and restrictions for phone numbers in New Zealand.
+
+## Number types and capabilities availability
+
+| Number Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
+| :- | :- | :- | :- | :- |
+| Toll-Free | - | - | - | Public Preview\* |
++
+\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
++
+## Subscription eligibility
+
+To acquire a phone number, you need a paid Azure subscription. Phone numbers can't be acquired with Azure free credits. Also, for regulatory reasons, phone number availability depends on your Azure subscription billing location.
+
+More details on eligible subscription types are as follows:
+
+| Number Type | Eligible Azure Agreement Type |
+| :- | :-- |
+| Toll-Free and Local (Geographic/National) | Modern Customer Agreement (Field and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement, Pay-As-You-Go |
+
+\** Applications from all other subscription types are reviewed and approved on a case-by-case basis. Reach out to acstns@microsoft.com for assistance with your application.
++
+## Azure subscription billing locations where New Zealand phone numbers are available
+| Country/Region |
+| :- |
+|Australia|
+|Canada|
+|France|
+|Germany|
+|Italy|
+|Japan|
+|Spain|
+|United Kingdom|
+|United States|
++
+## Find information about other countries/regions
++
+## Next steps
+
+For more information about Azure Communication Services' telephony options, see the following pages:
+
+- [Learn more about Telephony](../telephony/telephony-concept.md)
+- Get a telephony-capable [phone number](../../quickstarts/telephony/get-phone-number.md)
communication-services Phone Number Management For Norway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-norway.md
+
+ Title: Phone Number Management for Norway
+
+description: Learn about subscription Eligibility and Number Capabilities for PSTN and SMS Numbers in Norway.
+++++ Last updated : 03/30/2023+++++
+# Phone number management for Norway
+Use the following tables to find information about number availability, eligibility, and restrictions for phone numbers in Norway.
+
+## Number types and capabilities availability
+
+| Number Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
+| :- | :- | :- | :- | :- |
+| Toll-Free | - | - | General Availability | General Availability\* |
+| Local | - | - | General Availability | General Availability\* |
+| Alphanumeric Sender ID\** | Public Preview | - | - | - |
+
+\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
+
+\** Please refer to [SMS Concepts page](../sms/concepts.md) for supported destinations for this service.
+
+## Subscription eligibility
+
+To acquire a phone number, you need a paid Azure subscription. Phone numbers can't be acquired with Azure free credits. Also, for regulatory reasons, phone number availability depends on your Azure subscription billing location.
+
+More details on eligible subscription types are as follows:
+
+| Number Type | Eligible Azure Agreement Type |
+| :- | :-- |
+| Toll-Free and Local (Geographic/National) | Modern Customer Agreement (Field and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement, Pay-As-You-Go |
+| Alphanumeric Sender ID | Modern Customer Agreement (Field Led and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement**, Pay-As-You-Go |
+
+\** Applications from all other subscription types are reviewed and approved on a case-by-case basis. Reach out to acstns@microsoft.com for assistance with your application.
++
+## Azure subscription billing locations where Norway phone numbers are available
+| Country/Region |
+| :- |
+|Norway|
+|France|
+|Sweden|
+++
+## Find information about other countries/regions
++
+## Next steps
+
+For more information about Azure Communication Services' telephony options, see the following pages:
+
+- [Learn more about Telephony](../telephony/telephony-concept.md)
+- Get a telephony-capable [phone number](../../quickstarts/telephony/get-phone-number.md)
communication-services Phone Number Management For Philippines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-philippines.md
+
+ Title: Phone Number Management for Philippines
+
+description: Learn about subscription Eligibility and Number Capabilities for PSTN and SMS Numbers in Philippines.
+++++ Last updated : 03/30/2023+++++
+# Phone number management for Philippines
+Use the following tables to find information about number availability, eligibility, and restrictions for phone numbers in the Philippines.
+
+## Number types and capabilities availability
+
+| Number Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
+| :- | :- | :- | :- | :- |
+| Toll-Free | - | - | - | Public Preview\* |
++
+\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
++
+## Subscription eligibility
+
+To acquire a phone number, you need a paid Azure subscription. Phone numbers can't be acquired with Azure free credits. Also, for regulatory reasons, phone number availability depends on your Azure subscription billing location.
+
+More details on eligible subscription types are as follows:
+
+| Number Type | Eligible Azure Agreement Type |
+| :- | :-- |
+| Toll-Free and Local (Geographic/National) | Modern Customer Agreement (Field and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement, Pay-As-You-Go |
+
+\** Applications from all other subscription types are reviewed and approved on a case-by-case basis. Reach out to acstns@microsoft.com for assistance with your application.
++
+## Azure subscription billing locations where Philippines phone numbers are available
+| Country/Region |
+| :- |
+|Australia|
+|Canada|
+|France|
+|Germany|
+|Italy|
+|Japan|
+|Spain|
+|United Kingdom|
+|United States|
++
+## Find information about other countries/regions
++
+## Next steps
+
+For more information about Azure Communication Services' telephony options, see the following pages:
+
+- [Learn more about Telephony](../telephony/telephony-concept.md)
+- Get a telephony-capable [phone number](../../quickstarts/telephony/get-phone-number.md)
communication-services Phone Number Management For Poland https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-poland.md
+
+ Title: Phone Number Management for Poland
+
+description: Learn about subscription Eligibility and Number Capabilities for PSTN and SMS Numbers in Poland.
+++++ Last updated : 03/30/2023+++++
+# Phone number management for Poland
+Use the following tables to find information about number availability, eligibility, and restrictions for phone numbers in Poland.
+
+## Number types and capabilities availability
+
+| Number Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
+| :- | :- | :- | :- | :- |
+| Toll-Free | - | - | - | Public Preview\* |
+| Alphanumeric Sender ID\** | Public Preview | - | - | - |
++
+\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
+
+\** Please refer to [SMS Concepts page](../sms/concepts.md) for supported destinations for this service.
++
+## Subscription eligibility
+
+To acquire a phone number, you need a paid Azure subscription. Phone numbers can't be acquired with Azure free credits. Also, for regulatory reasons, phone number availability depends on your Azure subscription billing location.
+
+More details on eligible subscription types are as follows:
+
+| Number Type | Eligible Azure Agreement Type |
+| :- | :-- |
+| Toll-Free and Local (Geographic/National) | Modern Customer Agreement (Field and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement, Pay-As-You-Go |
+| Alphanumeric Sender ID | Modern Customer Agreement (Field Led and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement**, Pay-As-You-Go |
+
+\** Applications from all other subscription types are reviewed and approved on a case-by-case basis. Reach out to acstns@microsoft.com for assistance with your application.
++
+## Azure subscription billing locations where Poland phone numbers are available
+| Country/Region |
+| :- |
+|Australia|
+|Canada|
+|France|
+|Germany|
+|Italy|
+|Japan|
+|Spain|
+|United Kingdom|
+|United States|
++
+## Find information about other countries/regions
++
+## Next steps
+
+For more information about Azure Communication Services' telephony options, see the following pages:
+
+- [Learn more about Telephony](../telephony/telephony-concept.md)
+- Get a telephony-capable [phone number](../../quickstarts/telephony/get-phone-number.md)
communication-services Phone Number Management For Portugal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-portugal.md
+
+ Title: Phone Number Management for Portugal
+
+description: Learn about subscription Eligibility and Number Capabilities for PSTN and SMS Numbers in Portugal.
+++++ Last updated : 03/30/2023+++++
+# Phone number management for Portugal
+Use the following tables to find information about number availability, eligibility, and restrictions for phone numbers in Portugal.
+
+## Number types and capabilities availability
+
+| Number Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
+| :- | :- | :- | :- | :- |
+| Toll-Free | - | - | General Availability | General Availability\* |
+| Local | - | - | General Availability | General Availability\* |
+| Alphanumeric Sender ID\** | Public Preview | - | - | - |
+
+\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
+
+\** Please refer to [SMS Concepts page](../sms/concepts.md) for supported destinations for this service.
+
+## Subscription eligibility
+
+To acquire a phone number, you need a paid Azure subscription. Phone numbers can't be acquired with Azure free credits. Also, for regulatory reasons, phone number availability depends on your Azure subscription billing location.
+
+More details on eligible subscription types are as follows:
+
+| Number Type | Eligible Azure Agreement Type |
+| :- | :-- |
+| Toll-Free and Local (Geographic/National) | Modern Customer Agreement (Field and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement, Pay-As-You-Go |
+| Alphanumeric Sender ID | Modern Customer Agreement (Field Led and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement**, Pay-As-You-Go |
+
+\** Applications from all other subscription types are reviewed and approved on a case-by-case basis. Reach out to acstns@microsoft.com for assistance with your application.
++
+## Azure subscription billing locations where Portugal phone numbers are available
+| Country/Region |
+| :- |
+|Portugal|
+|United States*|
+
+\* Alphanumeric Sender ID only
++
+## Find information about other countries/regions
+++
+## Next steps
+
+For more information about Azure Communication Services' telephony options, see the following pages:
+
+- [Learn more about Telephony](../telephony/telephony-concept.md)
+- Get a telephony-capable [phone number](../../quickstarts/telephony/get-phone-number.md)
communication-services Phone Number Management For Saudi Arabia https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-saudi-arabia.md
+
+ Title: Phone Number Management for Saudi Arabia
+
+description: Learn about subscription Eligibility and Number Capabilities for PSTN and SMS Numbers in Saudi Arabia.
+++++ Last updated : 03/30/2023+++++
+# Phone number management for Saudi Arabia
+Use the following tables to find information about number availability, eligibility, and restrictions for phone numbers in Saudi Arabia.
+
+## Number types and capabilities availability
+
+| Number Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
+| :- | :- | :- | :- | :- |
+| Toll-Free | - | - | - | Public Preview\* |
++
+\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
++
+## Subscription eligibility
+
+To acquire a phone number, you need a paid Azure subscription. Phone numbers can't be acquired with Azure free credits. Also, for regulatory reasons, phone number availability depends on your Azure subscription billing location.
+
+More details on eligible subscription types are as follows:
+
+| Number Type | Eligible Azure Agreement Type |
+| :- | :-- |
+| Toll-Free and Local (Geographic/National) | Modern Customer Agreement (Field and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement, Pay-As-You-Go |
+
+\** Applications from all other subscription types are reviewed and approved on a case-by-case basis. Reach out to acstns@microsoft.com for assistance with your application.
++
+## Azure subscription billing locations where Saudi Arabia phone numbers are available
+| Country/Region |
+| :- |
+|Australia|
+|Canada|
+|France|
+|Germany|
+|Italy|
+|Japan|
+|Spain|
+|United Kingdom|
+|United States|
++
+## Find information about other countries/regions
++
+## Next steps
+
+For more information about Azure Communication Services' telephony options, see the following pages:
+
+- [Learn more about Telephony](../telephony/telephony-concept.md)
+- Get a telephony-capable [phone number](../../quickstarts/telephony/get-phone-number.md)
communication-services Phone Number Management For Singapore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-singapore.md
+
+ Title: Phone Number Management for Singapore
+
+description: Learn about subscription Eligibility and Number Capabilities for PSTN and SMS Numbers in Singapore.
+++++ Last updated : 03/30/2023+++++
+# Phone number management for Singapore
+Use the following tables to find information about number availability, eligibility, and restrictions for phone numbers in Singapore.
++
+## Number types and capabilities availability
+
+| Number Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
+| :- | :- | :- | :- | :- |
+| Toll-Free | - | - | - | Public Preview\* |
++
+\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
++
+## Subscription eligibility
+
+To acquire a phone number, you need a paid Azure subscription. Phone numbers can't be acquired with Azure free credits. Also, for regulatory reasons, phone number availability depends on your Azure subscription billing location.
+
+More details on eligible subscription types are as follows:
+
+| Number Type | Eligible Azure Agreement Type |
+| :- | :-- |
+| Toll-Free and Local (Geographic/National) | Modern Customer Agreement (Field and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement, Pay-As-You-Go |
+
+\** Applications from all other subscription types are reviewed and approved on a case-by-case basis. Reach out to acstns@microsoft.com for assistance with your application.
++
+## Azure subscription billing locations where Singapore phone numbers are available
+| Country/Region |
+| :- |
+|Australia|
+|Canada|
+|France|
+|Germany|
+|Italy|
+|Japan|
+|Spain|
+|United Kingdom|
+|United States|
++
+## Find information about other countries/regions
++
+## Next steps
+
+For more information about Azure Communication Services' telephony options, see the following pages:
+
+- [Learn more about Telephony](../telephony/telephony-concept.md)
+- Get a telephony-capable [phone number](../../quickstarts/telephony/get-phone-number.md)
communication-services Phone Number Management For Slovakia https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-slovakia.md
+
+ Title: Phone Number Management for Slovakia
+
+description: Learn about subscription Eligibility and Number Capabilities for PSTN and SMS Numbers in Slovakia.
+++++ Last updated : 03/30/2023+++++
+# Phone number management for Slovakia
+Use the following tables to find information about number availability, eligibility, and restrictions for phone numbers in Slovakia.
+
+## Number types and capabilities availability
+
+| Number Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
+| :- | :- | :- | :- | :- |
+| Toll-Free | - | - | General Availability | General Availability\* |
+| Local | - | - | General Availability | General Availability\* |
++
+\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
+
+\** Please refer to [SMS Concepts page](../sms/concepts.md) for supported destinations for this service.
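+
+For example, a minimal sketch of searching for a local (geographic) number with two-way calling, assuming the `azure-communication-phonenumbers` Python package and a connection-string environment variable:
+
+```python
+import os
+
+from azure.communication.phonenumbers import (
+    PhoneNumbersClient,
+    PhoneNumberAssignmentType,
+    PhoneNumberCapabilities,
+    PhoneNumberCapabilityType,
+    PhoneNumberType,
+)
+
+client = PhoneNumbersClient.from_connection_string(
+    os.environ["COMMUNICATION_SERVICES_CONNECTION_STRING"]
+)
+
+# Slovakia local numbers are generally available for making and receiving
+# calls but not for SMS, per the table above.
+capabilities = PhoneNumberCapabilities(
+    calling=PhoneNumberCapabilityType.INBOUND_OUTBOUND,
+    sms=PhoneNumberCapabilityType.NONE,
+)
+
+poller = client.begin_search_available_phone_numbers(
+    "SK",
+    PhoneNumberType.GEOGRAPHIC,
+    PhoneNumberAssignmentType.APPLICATION,
+    capabilities,
+    quantity=1,
+)
+print(poller.result().phone_numbers)
+```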
+
+## Subscription eligibility
+
+To acquire a phone number, you need a paid Azure subscription. Phone numbers can't be acquired with Azure free credits. Also, for regulatory reasons, phone number availability depends on your Azure subscription billing location.
+
+More details on eligible subscription types are as follows:
+
+| Number Type | Eligible Azure Agreement Type |
+| :- | :-- |
+| Toll-Free and Local (Geographic/National) | Modern Customer Agreement (Field and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement, Pay-As-You-Go |
++
+## Azure subscription billing locations where Slovakia phone numbers are available
+| Country/Region |
+| :- |
+|Slovakia|
++
+## Find information about other countries/regions
+++
+## Next steps
+
+For more information about Azure Communication Services' telephony options, see the following pages:
+
+- [Learn more about Telephony](../telephony/telephony-concept.md)
+- Get a telephony-capable [phone number](../../quickstarts/telephony/get-phone-number.md)
communication-services Phone Number Management For South Africa https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-south-africa.md
+
+ Title: Phone Number Management for South Africa
+
+description: Learn about subscription Eligibility and Number Capabilities for PSTN and SMS Numbers in South Africa.
+++++ Last updated : 03/30/2023+++++
+# Phone number management for South Africa
+Use the following tables to find information about number availability, eligibility, and restrictions for phone numbers in South Africa.
+
+## Number types and capabilities availability
+
+| Number Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
+| :- | :- | :- | :- | :- |
+| Toll-Free | - | - | - | Public Preview\* |
++
+\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
++
+## Subscription eligibility
+
+To acquire a phone number, you need a paid Azure subscription. Phone numbers can't be acquired with Azure free credits. Also, for regulatory reasons, phone number availability depends on your Azure subscription billing location.
+
+More details on eligible subscription types are as follows:
+
+| Number Type | Eligible Azure Agreement Type |
+| :- | :-- |
+| Toll-Free and Local (Geographic/National) | Modern Customer Agreement (Field and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement, Pay-As-You-Go |
+
+\** Applications from all other subscription types are reviewed and approved on a case-by-case basis. Reach out to acstns@microsoft.com for assistance with your application.
++
+## Azure subscription billing locations where South Africa phone numbers are available
+| Country/Region |
+| :- |
+|Australia|
+|Canada|
+|France|
+|Germany|
+|Italy|
+|Japan|
+|Spain|
+|United Kingdom|
+|United States|
++
+## Find information about other countries/regions
+++
+## Next steps
+
+For more information about Azure Communication Services' telephony options, see the following pages:
+
+- [Learn more about Telephony](../telephony/telephony-concept.md)
+- Get a telephony-capable [phone number](../../quickstarts/telephony/get-phone-number.md)
communication-services Phone Number Management For South Korea https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-south-korea.md
+
+ Title: Phone Number Management for South Korea
+
+description: Learn about subscription Eligibility and Number Capabilities for PSTN and SMS Numbers in South Korea.
+++++ Last updated : 03/30/2023+++++
+# Phone number management for South Korea
+Use the following tables to find information about number availability, eligibility, and restrictions for phone numbers in South Korea.
+
+## Number types and capabilities availability
+
+| Number Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
+| :- | :- | :- | :- | :- |
+| Toll-Free | - | - | - | Public Preview\* |
++
+\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
++
+## Subscription eligibility
+
+To acquire a phone number, you need a paid Azure subscription. Phone numbers can't be acquired with Azure free credits. Also, for regulatory reasons, phone number availability depends on your Azure subscription billing location.
+
+More details on eligible subscription types are as follows:
+
+| Number Type | Eligible Azure Agreement Type |
+| :- | :-- |
+| Toll-Free and Local (Geographic/National) | Modern Customer Agreement (Field and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement, Pay-As-You-Go |
+
+\** Applications from all other subscription types are reviewed and approved on a case-by-case basis. Reach out to acstns@microsoft.com for assistance with your application.
++
+## Azure subscription billing locations where South Korea phone numbers are available
+| Country/Region |
+| :- |
+|Australia|
+|Canada|
+|France|
+|Germany|
+|Italy|
+|Japan|
+|Spain|
+|United Kingdom|
+|United States|
++
+## Find information about other countries/regions
+++
+## Next steps
+
+For more information about Azure Communication Services' telephony options, see the following pages:
+
+- [Learn more about Telephony](../telephony/telephony-concept.md)
+- Get a telephony-capable [phone number](../../quickstarts/telephony/get-phone-number.md)
communication-services Phone Number Management For Spain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-spain.md
+
+ Title: Phone Number Management for Spain
+
+description: Learn about subscription Eligibility and Number Capabilities for PSTN and SMS Numbers in Spain.
+++++ Last updated : 03/30/2023+++++
+# Phone number management for Spain
+Use the following tables to find information about number availability, eligibility, and restrictions for phone numbers in Spain.
+
+## Number types and capabilities availability
+
+| Number Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
+| :- | :- | :- | :- | :- |
+| Toll-Free | - | - | General Availability | General Availability\* |
+| Local | - | - | General Availability | General Availability\* |
+| Alphanumeric Sender ID\** | Public Preview | - | - | - |
+
+\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
+
+\** Please refer to [SMS Concepts page](../sms/concepts.md) for supported destinations for this service.
+
+## Subscription eligibility
+
+To acquire a phone number, you need a paid Azure subscription. Phone numbers can't be acquired with Azure free credits. Also, for regulatory reasons, phone number availability depends on your Azure subscription billing location.
+
+More details on eligible subscription types are as follows:
+
+| Number Type | Eligible Azure Agreement Type |
+| :- | :-- |
+| Toll-Free and Local (Geographic/National) | Modern Customer Agreement (Field and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement, Pay-As-You-Go |
+| Alphanumeric Sender ID | Modern Customer Agreement (Field Led and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement**, Pay-As-You-Go |
+
+\** Applications from all other subscription types are reviewed and approved on a case-by-case basis. Reach out to acstns@microsoft.com for assistance with your application.
++
+## Azure subscription billing locations where Spain phone numbers are available
+| Country/Region |
+| :- |
+|Spain|
+|United States*|
+
+\* Alphanumeric Sender ID only
+
+## Find information about other countries/regions
++
+## Next steps
+
+For more information about Azure Communication Services' telephony options, see the following pages:
+
+- [Learn more about Telephony](../telephony/telephony-concept.md)
+- Get a telephony-capable [phone number](../../quickstarts/telephony/get-phone-number.md)
communication-services Phone Number Management For Sweden https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-sweden.md
+
+ Title: Phone Number Management for Sweden
+
+description: Learn about subscription Eligibility and Number Capabilities for PSTN and SMS Numbers in Sweden.
+++++ Last updated : 03/30/2023+++++
+# Phone number management for Sweden
+Use the following tables to find information about number availability, eligibility, and restrictions for phone numbers in Sweden.
+
+## Number types and capabilities availability
+
+| Number Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
+| :- | :- | :- | :- | :- |
+| Toll-Free | - | - | General Availability | General Availability\* |
+| Local | - | - | General Availability | General Availability\* |
+| Alphanumeric Sender ID\** | Public Preview | - | - | - |
+
+\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
+
+\** Please refer to [SMS Concepts page](../sms/concepts.md) for supported destinations for this service.
+
+## Subscription eligibility
+
+To acquire a phone number, you need a paid Azure subscription. Phone numbers can't be acquired with Azure free credits. Also, for regulatory reasons, phone number availability depends on your Azure subscription billing location.
+
+More details on eligible subscription types are as follows:
+
+| Number Type | Eligible Azure Agreement Type |
+| :- | :-- |
+| Toll-Free and Local (Geographic/National) | Modern Customer Agreement (Field and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement, Pay-As-You-Go |
+| Alphanumeric Sender ID | Modern Customer Agreement (Field Led and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement**, Pay-As-You-Go |
+
+\** Applications from all other subscription types are reviewed and approved on a case-by-case basis. Reach out to acstns@microsoft.com for assistance with your application.
++
+## Azure subscription billing locations where Sweden phone numbers are available
+| Country/Region |
+| :- |
+|Canada|
+|Denmark|
+|Ireland|
+|Italy|
+|Puerto Rico|
+|Sweden|
+|United Kingdom|
+|United States|
+++
+## Find information about other countries/regions
++
+## Next steps
+
+For more information about Azure Communication Services' telephony options, see the following pages:
+
+- [Learn more about Telephony](../telephony/telephony-concept.md)
+- Get a telephony-capable [phone number](../../quickstarts/telephony/get-phone-number.md)
communication-services Phone Number Management For Switzerland https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-switzerland.md
+
+ Title: Phone Number Management for Switzerland
+
+description: Learn about subscription Eligibility and Number Capabilities for PSTN and SMS Numbers in Switzerland.
+++++ Last updated : 03/30/2023+++++
+# Phone number management for Switzerland
+Use the following tables to find information about number availability, eligibility, and restrictions for phone numbers in Switzerland.
+
+## Number types and capabilities availability
+
+| Number Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
+| :- | :- | :- | :- | :- |
+| Toll-Free | - | - | General Availability | General Availability\* |
+| Local | - | - | General Availability | General Availability\* |
+| Alphanumeric Sender ID\** | Public Preview | - | - | - |
+
+\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
+
+\** Please refer to [SMS Concepts page](../sms/concepts.md) for supported destinations for this service.
+
+## Subscription eligibility
+
+To acquire a phone number, you need a paid Azure subscription. Phone numbers can't be acquired with Azure free credits. Also, for regulatory reasons, phone number availability depends on your Azure subscription billing location.
+
+More details on eligible subscription types are as follows:
+
+| Number Type | Eligible Azure Agreement Type |
+| :- | :-- |
+| Toll-Free and Local (Geographic/National) | Modern Customer Agreement (Field and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement, Pay-As-You-Go |
+| Alphanumeric Sender ID | Modern Customer Agreement (Field Led and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement**, Pay-As-You-Go |
+
+\** Applications from all other subscription types are reviewed and approved on a case-by-case basis. Reach out to acstns@microsoft.com for assistance with your application.
++
+## Azure subscription billing locations where Switzerland phone numbers are available
+| Country/Region |
+| :- |
+|Switzerland|
+|United States*|
+
+\* Alphanumeric Sender ID only
++
+## Find information about other countries/regions
++
+## Next steps
+
+For more information about Azure Communication Services' telephony options, see the following pages:
+
+- [Learn more about Telephony](../telephony/telephony-concept.md)
+- Get a telephony-capable [phone number](../../quickstarts/telephony/get-phone-number.md)
communication-services Phone Number Management For Taiwan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-taiwan.md
+
+ Title: Phone Number Management for Taiwan
+
+description: Learn about subscription Eligibility and Number Capabilities for PSTN and SMS Numbers in Taiwan.
+++++ Last updated : 03/30/2023+++++
+# Phone number management for Taiwan
+Use the following tables to find information about number availability, eligibility, and restrictions for phone numbers in Taiwan.
+
+## Number types and capabilities availability
+
+| Number Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
+| :- | :- | :- | :- | :- |
+| Toll-Free | - | - | - | Public Preview\* |
++
+\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
++
+## Subscription eligibility
+
+To acquire a phone number, you need a paid Azure subscription. Phone numbers can't be acquired with Azure free credits. Also, for regulatory reasons, phone number availability depends on your Azure subscription billing location.
+
+More details on eligible subscription types are as follows:
+
+| Number Type | Eligible Azure Agreement Type |
+| :- | :-- |
+| Toll-Free and Local (Geographic/National) | Modern Customer Agreement (Field and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement, Pay-As-You-Go |
+
+\** Applications from all other subscription types are reviewed and approved on a case-by-case basis. Reach out to acstns@microsoft.com for assistance with your application.
++
+## Azure subscription billing locations where Taiwan phone numbers are available
+| Country/Region |
+| :- |
+|Australia|
+|Canada|
+|France|
+|Germany|
+|Italy|
+|Japan|
+|Spain|
+|United Kingdom|
+|United States|
++
+## Find information about other countries/regions
+++
+## Next steps
+
+For more information about Azure Communication Services' telephony options, see the following pages:
+
+- [Learn more about Telephony](../telephony/telephony-concept.md)
+- Get a telephony-capable [phone number](../../quickstarts/telephony/get-phone-number.md)
communication-services Phone Number Management For Thailand https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-thailand.md
+
+ Title: Phone Number Management for Thailand
+
+description: Learn about subscription Eligibility and Number Capabilities for PSTN and SMS Numbers in Thailand.
+++++ Last updated : 03/30/2023+++++
+# Phone number management for Thailand
+Use the following tables to find information about number availability, eligibility, and restrictions for phone numbers in Thailand.
++
+## Number types and capabilities availability
+
+| Number Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
+| :- | :- | :- | :- | :- |
+| Toll-Free | - | - | - | Public Preview\* |
++
+\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
++
+## Subscription eligibility
+
+To acquire a phone number, you need a paid Azure subscription. Phone numbers can't be acquired with Azure free credits. Also, for regulatory reasons, phone number availability depends on your Azure subscription billing location.
+
+More details on eligible subscription types are as follows:
+
+| Number Type | Eligible Azure Agreement Type |
+| :- | :-- |
+| Toll-Free and Local (Geographic/National) | Modern Customer Agreement (Field and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement, Pay-As-You-Go |
+
+\** Applications from all other subscription types are reviewed and approved on a case-by-case basis. Reach out to acstns@microsoft.com for assistance with your application.
++
+## Azure subscription billing locations where Thailand phone numbers are available
+| Country/Region |
+| :- |
+|Australia|
+|Canada|
+|France|
+|Germany|
+|Italy|
+|Japan|
+|Spain|
+|United Kingdom|
+|United States|
++
+## Find information about other countries/regions
++
+## Next steps
+
+For more information about Azure Communication Services' telephony options, see the following pages:
+
+- [Learn more about Telephony](../telephony/telephony-concept.md)
+- Get a telephony-capable [phone number](../../quickstarts/telephony/get-phone-number.md)
communication-services Phone Number Management For United Arab Emirates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-united-arab-emirates.md
+
+ Title: Phone Number Management for United Arab Emirates
+
+description: Learn about subscription Eligibility and Number Capabilities for PSTN and SMS Numbers in United Arab Emirates.
+++++ Last updated : 03/30/2023+++++
+# Phone number management for United Arab Emirates
+Use the following tables to find information about number availability, eligibility, and restrictions for phone numbers in the United Arab Emirates.
+
+## Number types and capabilities availability
+
+| Number Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
+| :- | :- | :- | :- | :- |
+| Toll-Free | - | - | - | Public Preview\* |
++
+\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
++
+## Subscription eligibility
+
+To acquire a phone number, you need a paid Azure subscription. Phone numbers can't be acquired with Azure free credits. Also, for regulatory reasons, phone number availability depends on your Azure subscription billing location.
+
+More details on eligible subscription types are as follows:
+
+| Number Type | Eligible Azure Agreement Type |
+| :- | :-- |
+| Toll-Free and Local (Geographic/National) | Modern Customer Agreement (Field and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement, Pay-As-You-Go |
+
+\** Applications from all other subscription types are reviewed and approved on a case-by-case basis. Reach out to acstns@microsoft.com for assistance with your application.
++
+## Azure subscription billing locations where United Arab Emirates phone numbers are available
+| Country/Region |
+| :- |
+|Australia|
+|Canada|
+|France|
+|Germany|
+|Italy|
+|Japan|
+|Spain|
+|United Kingdom|
+|United States|
++
+## Find information about other countries/regions
++
+## Next steps
+
+For more information about Azure Communication Services' telephony options, see the following pages:
+
+- [Learn more about Telephony](../telephony/telephony-concept.md)
+- Get a telephony-capable [phone number](../../quickstarts/telephony/get-phone-number.md)
communication-services Phone Number Management For United Kingdom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-united-kingdom.md
+
+ Title: Phone Number Management for United Kingdom
+
+description: Learn about subscription Eligibility and Number Capabilities for PSTN and SMS Numbers in United Kingdom.
+++++ Last updated : 03/30/2023+++++
+# Phone number management for United Kingdom
+Use the following tables to find information about number availability, eligibility, and restrictions for phone numbers in the United Kingdom.
++
+## Number types and capabilities availability
+
+| Number Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
+| :- | :- | :- | :- | :- |
+| Toll-Free | - | - | General Availability | General Availability\* |
+| Local | - | - | General Availability | General Availability\* |
+| Alphanumeric Sender ID\** | Public Preview | - | - | - |
+
+\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
+
+\** Please refer to [SMS Concepts page](../sms/concepts.md) for supported destinations for this service.
+
+## Subscription eligibility
+
+To acquire a phone number, you need a paid Azure subscription. Phone numbers can't be acquired with Azure free credits. Also, for regulatory reasons, phone number availability depends on your Azure subscription billing location.
+
+More details on eligible subscription types are as follows:
+
+| Number Type | Eligible Azure Agreement Type |
+| :- | :-- |
+| Toll-Free and Local (Geographic/National) | Modern Customer Agreement (Field and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement, Pay-As-You-Go |
+| Alphanumeric Sender ID | Modern Customer Agreement (Field Led and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement**, Pay-As-You-Go |
+
+\** Applications from all other subscription types are reviewed and approved on a case-by-case basis. Reach out to acstns@microsoft.com for assistance with your application.
++
+## Azure subscription billing locations where United Kingdom phone numbers are available
+| Country/Region |
+| :- |
+|Canada|
+|Denmark|
+|Ireland|
+|Italy|
+|Puerto Rico|
+|Sweden|
+|United Kingdom|
+|United States|
++
+## Find information about other countries/regions
++
+## Next steps
+
+For more information about Azure Communication Services' telephony options, see the following pages:
+
+- [Learn more about Telephony](../telephony/telephony-concept.md)
+- Get a telephony-capable [phone number](../../quickstarts/telephony/get-phone-number.md)
communication-services Phone Number Management For United States https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-united-states.md
+
+ Title: Phone Number Management for United States
+
+description: Learn about subscription Eligibility and Number Capabilities for PSTN and SMS Numbers in United States.
+++++ Last updated : 03/30/2023+++++
+# Phone number management for United States
+Use the following tables to find information about number availability, eligibility, and restrictions for phone numbers in the United States.
+
+## Number types and capabilities availability
+
+| Number Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
+| :- | :- | :- | :- | :- |
+| Toll-Free | General Availability | General Availability | General Availability | General Availability\* |
+| Local | - | - | General Availability | General Availability\* |
+
+\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
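+
+For example, a minimal sketch of searching for and then purchasing such a number, assuming the `azure-communication-phonenumbers` Python package and a connection-string environment variable:
+
+```python
+import os
+
+from azure.communication.phonenumbers import (
+    PhoneNumbersClient,
+    PhoneNumberAssignmentType,
+    PhoneNumberCapabilities,
+    PhoneNumberCapabilityType,
+    PhoneNumberType,
+)
+
+client = PhoneNumbersClient.from_connection_string(
+    os.environ["COMMUNICATION_SERVICES_CONNECTION_STRING"]
+)
+
+# US toll-free numbers are generally available for two-way SMS and calling,
+# so the search can request both capabilities in both directions.
+capabilities = PhoneNumberCapabilities(
+    calling=PhoneNumberCapabilityType.INBOUND_OUTBOUND,
+    sms=PhoneNumberCapabilityType.INBOUND_OUTBOUND,
+)
+
+search_poller = client.begin_search_available_phone_numbers(
+    "US",
+    PhoneNumberType.TOLL_FREE,
+    PhoneNumberAssignmentType.APPLICATION,
+    capabilities,
+    quantity=1,
+)
+search = search_poller.result()
+
+# Purchasing is a second long-running operation keyed by the search ID.
+client.begin_purchase_phone_numbers(search.search_id).result()
+print("Purchased:", search.phone_numbers)
+```
+
+Purchased numbers are billed to the Azure subscription that owns the Communication Services resource, which is why the eligibility rules below matter.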
+
+## Subscription eligibility
+
+To acquire a phone number, you need a paid Azure subscription. Phone numbers can't be acquired with Azure free credits. Also, for regulatory reasons, phone number availability depends on your Azure subscription billing location.
+
+More details on eligible subscription types are as follows:
+
+| Number Type | Eligible Azure Agreement Type |
+| :- | :-- |
+| Toll-Free and Local (Geographic) | Modern Customer Agreement (Field and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement*, Pay-As-You-Go |
+| Short-Codes | Modern Customer Agreement (Field Led), Enterprise Agreement**, Pay-As-You-Go |
+
+\** Applications from all other subscription types are reviewed and approved on a case-by-case basis. Reach out to acstns@microsoft.com for assistance with your application.
++
+## Azure subscription billing locations where United States phone numbers are available
+| Country/Region |
+| :- |
+|Canada|
+|Denmark|
+|Ireland|
+|Italy|
+|Puerto Rico|
+|Sweden|
+|United Kingdom|
+|United States|
+
+## Find information about other countries/regions
++
+## Next steps
+
+For more information about Azure Communication Services' telephony options, see the following pages:
+
+- [Learn more about Telephony](../telephony/telephony-concept.md)
+- Get a telephony-capable [phone number](../../quickstarts/telephony/get-phone-number.md)
communication-services Sub Eligibility Number Capability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/sub-eligibility-number-capability.md
# Country/regional availability of telephone numbers and subscription eligibility
-Numbers can be purchased on eligible Azure subscriptions and in geographies where Communication Services is legally eligible to provide them.
+Numbers can be purchased on eligible Azure subscriptions and in geographies where Communication Services is legally eligible to provide them. The capabilities and numbers that are available to you depend on the country/region that you're operating within, your use case, and the phone number type that you've selected. These capabilities vary by country/region due to regulatory requirements.
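+
+For example, a minimal sketch of inspecting which capabilities your purchased numbers actually carry, assuming the `azure-communication-phonenumbers` Python package and a connection-string environment variable:
+
+```python
+import os
+
+from azure.communication.phonenumbers import PhoneNumbersClient
+
+client = PhoneNumbersClient.from_connection_string(
+    os.environ["COMMUNICATION_SERVICES_CONNECTION_STRING"]
+)
+
+# Capability combinations vary by country/region and number type, as the
+# country-specific pages linked below describe.
+for number in client.list_purchased_phone_numbers():
+    print(
+        number.phone_number,
+        number.phone_number_type,
+        "sms:", number.capabilities.sms,
+        "calling:", number.capabilities.calling,
+    )
+```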
++
+**Use the drop-down to select the country/region where you're getting numbers. You'll find information about availability, restrictions, and other related details on the country-specific page.**
+> [!div class="op_single_selector"]
+>
+> - [Australia](../numbers/phone-number-management-for-australia.md)
+> - [Austria](../numbers/phone-number-management-for-austria.md)
+> - [Belgium](../numbers/phone-number-management-for-belgium.md)
+> - [Canada](../numbers/phone-number-management-for-canada.md)
+> - [China](../numbers/phone-number-management-for-china.md)
+> - [Denmark](../numbers/phone-number-management-for-denmark.md)
+> - [Estonia](../numbers/phone-number-management-for-estonia.md)
+> - [Finland](../numbers/phone-number-management-for-finland.md)
+> - [France](../numbers/phone-number-management-for-france.md)
+> - [Germany](../numbers/phone-number-management-for-germany.md)
+> - [Hong Kong](../numbers/phone-number-management-for-hong-kong.md)
+> - [Ireland](../numbers/phone-number-management-for-ireland.md)
+> - [Israel](../numbers/phone-number-management-for-israel.md)
+> - [Italy](../numbers/phone-number-management-for-italy.md)
+> - [Latvia](../numbers/phone-number-management-for-latvia.md)
+> - [Lithuania](../numbers/phone-number-management-for-lithuania.md)
+> - [Luxembourg](../numbers/phone-number-management-for-luxembourg.md)
+> - [Malaysia](../numbers/phone-number-management-for-malaysia.md)
+> - [Netherlands](../numbers/phone-number-management-for-netherlands.md)
+> - [New Zealand](../numbers/phone-number-management-for-new-zealand.md)
+> - [Norway](../numbers/phone-number-management-for-norway.md)
+> - [Philippines](../numbers/phone-number-management-for-philippines.md)
+> - [Poland](../numbers/phone-number-management-for-poland.md)
+> - [Portugal](../numbers/phone-number-management-for-portugal.md)
+> - [Saudi Arabia](../numbers/phone-number-management-for-saudi-arabia.md)
+> - [Singapore](../numbers/phone-number-management-for-singapore.md)
+> - [Slovakia](../numbers/phone-number-management-for-slovakia.md)
+> - [South Korea](../numbers/phone-number-management-for-south-korea.md)
+> - [Spain](../numbers/phone-number-management-for-spain.md)
+> - [Sweden](../numbers/phone-number-management-for-sweden.md)
+> - [Switzerland](../numbers/phone-number-management-for-switzerland.md)
+> - [Taiwan](../numbers/phone-number-management-for-taiwan.md)
+> - [Thailand](../numbers/phone-number-management-for-thailand.md)
+> - [United Arab Emirates](../numbers/phone-number-management-for-united-arab-emirates.md)
+> - [United Kingdom](../numbers/phone-number-management-for-united-kingdom.md)
+> - [United States](../numbers/phone-number-management-for-united-states.md)
-## Subscription eligibility
-
-To acquire a phone number, you need to be on a paid Azure subscription. Phone numbers can't be acquired on trial accounts or by Azure free credits.
-
-More details on eligible subscription types are as follows:
-
-| Number Type | Eligible Azure Agreement Type |
-| :- | :-- |
-| Toll-Free and Local (Geographic) | Modern Customer Agreement (Field and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement*, Pay-As-You-Go |
-| Short-Codes | Modern Customer Agreement (Field Led), Enterprise Agreement**, Pay-As-You-Go |
-| Alphanumeric Sender ID | Modern Customer Agreement (Field Led and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement**, Pay-As-You-Go |
-
-\* In some countries/regions, number purchases are only allowed for own use. Reselling or suballocating to other parties is not allowed. Due to this, purchases for CSP and LSP customers are not allowed.
-
-\** Applications from all other subscription types will be reviewed and approved on a case-by-case basis. Create a support ticket or reach out to acstns@microsoft.com for assistance with your application.
-
-## Number capabilities and availability
-
-The capabilities and numbers that are available to you depend on the country/region that you're operating within, your use case, and the phone number type that you've selected. These capabilities vary by country/region due to regulatory requirements.
-
-The following tables summarize current availability:
-
-## Customers with Australia Azure billing addresses
-
-| Number | Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
-| :- | :-- | :- | :- | :- | : |
-| Australia, Germany, Netherlands, United Kingdom, France, Switzerland, Sweden, Italy, Spain, Denmark, Ireland, Portugal, Poland, Austria, Lithuania, Latvia, Estonia | Alphanumeric Sender ID \* | Public Preview | - | - | - |
-
-\* Please refer to [SMS Concepts page](../sms/concepts.md) for supported destinations for this service.
-
-## Customers with Austria Azure billing addresses
-
-| Number | Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
-| :- | :-- | :- | :- | :- | : |
-| Austria | Toll-Free** | - | - | Public Preview | Public Preview\* |
-| Austria | Local** | - | - | Public Preview | Public Preview\* |
-| Austria, Germany, Netherlands, United Kingdom, Australia, France, Switzerland, Sweden, Italy, Spain, Denmark, Ireland, Portugal, Poland, Lithuania, Latvia, Estonia | Alphanumeric Sender ID \*** | Public Preview | - | - | - |
-
-\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
-
-\** Phone numbers in Austria can only be purchased for own use. Reselling or suballocating to another party is not allowed.
-
-\*** Please refer to [SMS Concepts page](../sms/concepts.md) for supported destinations for this service.
-
-## Customers with Belgium Azure billing addresses
-
-| Number | Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
-| :- | :-- | :- | :- | :- | : |
-| Belgium | Toll-Free | - | - | Public Preview | Public Preview\* |
-| Belgium | Local | - | - | Public Preview | Public Preview\* |
-
-\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
-
-## Customers with Canada Azure billing addresses
-
-| Number | Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
-| :- | :-- | :- | :- | :- | : |
-| Canada | Toll-Free | General Availability | General Availability | General Availability | General Availability\* |
-| Canada | Local | - | - | General Availability | General Availability\* |
-| USA & Puerto Rico | Toll-Free | General Availability | General Availability | General Availability | General Availability\* |
-| USA & Puerto Rico | Local | - | - | General Availability | General Availability\* |
-| UK | Toll-Free | - | - | General Availability | General Availability\* |
-| UK | Local | - | - | General Availability | General Availability\* |
-| Germany, Netherlands, United Kingdom, Australia, France, Switzerland, Sweden, Italy, Spain, Denmark, Ireland, Portugal, Poland, Austria, Lithuania, Latvia, Estonia | Alphanumeric Sender ID \** | Public Preview | - | - | - |
-
-\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
-
-\** Please refer to [SMS Concepts page](../sms/concepts.md) for supported destinations for this service.
-
-## Customers with Denmark Azure billing addresses
-
-| Number | Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
-| :- | :-- | :- | :- | :- | : |
-| Denmark | Toll-Free | - | - | Public Preview | Public Preview\* |
-| Denmark | Local | - | - | Public Preview | Public Preview\* |
-| USA & Puerto Rico | Toll-Free | General Availability | General Availability | General Availability | General Availability\* |
-| USA & Puerto Rico | Local | - | - | General Availability | General Availability\* |
-| Canada | Toll-Free | General Availability | General Availability | General Availability | General Availability\* |
-| Canada | Local | - | - | General Availability | General Availability\* |
-| UK | Toll-Free | - | - | General Availability | General Availability\* |
-| UK | Local | - | - | General Availability | General Availability\* |
-| Italy | Toll-Free** | - | - | General Availability | General Availability\* |
-| Italy | Local** | - | - | General Availability | General Availability\* |
-| Sweden | Toll-Free | - | - | General Availability | General Availability\* |
-| Sweden | Local | - | - | General Availability | General Availability\* |
-| Ireland | Toll-Free | - | - | General Availability | General Availability\* |
-| Ireland | Local | - | - | General Availability | General Availability\* |
-| Denmark, Germany, Netherlands, United Kingdom, Australia, France, Switzerland, Sweden, Italy, Spain, Ireland, Portugal, Poland, Austria, Lithuania, Latvia, Estonia | Alphanumeric Sender ID \** | Public Preview | - | - | - |
-
-## Customers with Estonia Azure billing addresses
-
-| Number | Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
-| :- | :-- | :- | :- | :- | : |
-| Estonia, Germany, Netherlands, United Kingdom, Australia, France, Switzerland, Sweden, Italy, Spain, Denmark, Ireland, Portugal, Poland, Austria, Lithuania, Latvia | Alphanumeric Sender ID \* | Public Preview | - | - | - |
-
-\* Please refer to [SMS Concepts page](../sms/concepts.md) for supported destinations for this service.
-
-## Customers with France Azure billing addresses
-
-| Number | Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
-| :- | :-- | :- | :- | :- | : |
-| France | Local** | - | - | Public Preview | Public Preview\* |
-| France | Toll-Free** | - | - | Public Preview | Public Preview\* |
-| Norway | Local** | - | - | Public Preview | Public Preview\* |
-| Norway | Toll-Free | - | - | Public Preview | Public Preview\* |
-| France, Germany, Netherlands, United Kingdom, Australia, Switzerland, Sweden, Italy, Spain, Denmark, Ireland, Portugal, Poland, Austria, Lithuania, Latvia, Estonia | Alphanumeric Sender ID \*** | Public Preview | - | - | - |
-
-\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
-
-\** Phone numbers in France can only be purchased for own use. Reselling or suballocating to another party is not allowed.
-
-\*** Please refer to [SMS Concepts page](../sms/concepts.md) for supported destinations for this service.
-
-## Customers with Germany Azure billing addresses
-
-| Number | Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
-| :- | :-- | :- | :- | :- | : |
-| Germany | Local | - | - | Public Preview | Public Preview\* |
-| Germany | Toll-Free | - | - | Public Preview | Public Preview\* |
-| Germany, Netherlands, United Kingdom, Australia, France, Switzerland, Sweden, Italy, Spain, Denmark, Ireland, Portugal, Poland, Austria, Lithuania, Latvia, Estonia | Alphanumeric Sender ID \** | Public Preview | - | - | - |
-
-\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
-
-\** Alphanumeric sender ID in Netherlands can only be purchased for own use. Reselling or suballocating to another party is not allowed. Please refer to [SMS Concepts page](../sms/concepts.md) for supported destinations for this service.
-
-## Customers with Ireland Azure billing addresses
-
-| Number | Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
-| :- | :-- | :- | :- | :- | : |
-| Ireland | Toll-Free | - | - | General Availability | General Availability\* |
-| Ireland | Local | - | - | General Availability | General Availability\* |
-| USA & Puerto Rico | Toll-Free | General Availability | General Availability | General Availability | General Availability\* |
-| USA & Puerto Rico | Local | - | - | General Availability | General Availability\* |
-| Canada | Toll-Free | General Availability | General Availability | General Availability | General Availability\* |
-| Canada | Local | - | - | General Availability | General Availability\* |
-| UK | Toll-Free | - | - | General Availability | General Availability\* |
-| UK | Local | - | - | General Availability | General Availability\* |
-| Denmark | Toll-Free | - | - | Public Preview | Public Preview\* |
-| Denmark | Local | - | - | Public Preview | Public Preview\* |
-| Italy | Toll-Free** | - | - | General Availability | General Availability\* |
-| Italy | Local** | - | - | General Availability | General Availability\* |
-| Sweden | Toll-Free | - | - | General Availability | General Availability\* |
-| Sweden | Local | - | - | General Availability | General Availability\* |
-| Ireland, Germany, Netherlands, United Kingdom, Australia, France, Switzerland, Sweden, Italy, Spain, Denmark, Portugal, Poland, Austria, Lithuania, Latvia, Estonia | Alphanumeric Sender ID \** | Public Preview | - | - | - |
-
-\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
-
-\** Please refer to [SMS Concepts page](../sms/concepts.md) for supported destinations for this service.
-
-## Customers with Italy Azure billing addresses
-
-| Number | Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
-| : | :-- | : | : | :- | : |
-| Italy | Toll-Free** | - | - | General Availability | General Availability\* |
-| Italy | Local** | - | - | General Availability | General Availability\* |
-| USA & Puerto Rico | Toll-Free | General Availability | General Availability | General Availability | General Availability\* |
-| USA & Puerto Rico | Local | - | - | General Availability | General Availability\* |
-| Canada | Toll-Free | General Availability | General Availability | General Availability | General Availability\* |
-| Canada | Local | - | - | General Availability | General Availability\* |
-| UK | Toll-Free | - | - | General Availability | General Availability\* |
-| UK | Local | - | - | General Availability | General Availability\* |
-| Sweden | Toll-Free | - | - | General Availability | General Availability\* |
-| Sweden | Local | - | - | General Availability | General Availability\* |
-| Ireland | Toll-Free | - | - | General Availability | General Availability\* |
-| Ireland | Local | - | - | General Availability | General Availability\* |
-| Denmark | Toll-Free | - | - | Public Preview | Public Preview\* |
-| Denmark | Local | - | - | Public Preview | Public Preview\* |
-| France | Local** | - | - | Public Preview | Public Preview\* |
-| France | Toll-Free** | - | - | Public Preview | Public Preview\* |
-| Italy, Germany, Netherlands, United Kingdom, Australia, France, Switzerland, Sweden, Spain, Denmark, Ireland, Portugal, Poland, Austria, Lithuania, Latvia, Estonia | Alphanumeric Sender ID \*** | Public Preview | - | - | - |
-
-\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
-
-\** Phone numbers from Italy, France can only be purchased for own use. Reselling or suballocating to another party is not allowed.
-
-\*** Please refer to [SMS Concepts page](../sms/concepts.md) for supported destinations for this service.
-
-## Customers with Latvia Azure billing addresses
-
-| Number | Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
-| :- | :-- | :- | :- | :- | : |
-| Latvia, Germany, Netherlands, United Kingdom, Australia, France, Switzerland, Sweden, Italy, Spain, Denmark, Ireland, Portugal, Poland, Austria, Lithuania, Estonia | Alphanumeric Sender ID \* | Public Preview | - | - | - |
-
-\* Please refer to [SMS Concepts page](../sms/concepts.md) for supported destinations for this service.
-
-## Customers with Lithuania Azure billing addresses
-
-| Number | Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
-| :- | :-- | :- | :- | :- | : |
-| Lithuania, Germany, Netherlands, United Kingdom, Australia, France, Switzerland, Sweden, Italy, Spain, Denmark, Ireland, Portugal, Poland, Austria, Latvia, Estonia | Alphanumeric Sender ID \* | Public Preview | - | - | - |
-
-\* Please refer to [SMS Concepts page](../sms/concepts.md) for supported destinations for this service.
-
-## Customers with Luxembourg Azure billing addresses
-
-| Number | Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
-| :- | :-- | :- | :- | :- | : |
-| Luxembourg | Toll-Free | - | - | Public Preview | Public Preview\* |
-| Luxembourg | Local | - | - | Public Preview | Public Preview\* |
-
-\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
-
-## Customers with Netherlands Azure billing addresses
-
-| Number | Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
-| :- | :-- | :- | :- | :- | : |
-| Netherlands | Toll-Free | - | - | Public Preview | Public Preview\* |
-| Netherlands | Local | - | - | Public Preview | Public Preview\* |
-| USA & Puerto Rico | Toll-Free | General Availability | General Availability | General Availability | General Availability\* |
-| USA & Puerto Rico | Local | - | - | General Availability | General Availability\* |
-| Netherlands, Germany, United Kingdom, Australia, France, Switzerland, Sweden, Italy, Spain, Denmark, Ireland, Portugal, Poland, Austria, Lithuania, Latvia, Estonia | Alphanumeric Sender ID \** | Public Preview | - | - | - |
-
-\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
-
-\** Alphanumeric sender ID in Netherlands can only be purchased for own use. Reselling or suballocating to another party is not allowed. Please refer to [SMS Concepts page](../sms/concepts.md) for supported destinations for this service.
-
-## Customers with Norway Azure billing addresses
-
-| Number | Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
-| :- | :-- | :- | :- | :- | : |
-| Norway | Local** | - | - | Public Preview | Public Preview\* |
-| Norway | Toll-Free | - | - | Public Preview | Public Preview\* |
-
-\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
-
-\** Phone numbers in Norway can only be purchased for own use. Reselling or suballocating to another party is not allowed.
-
-## Customers with Poland Azure billing addresses
-
-| Number | Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
-| :- | :-- | :- | :- | :- | : |
-| Poland, Germany, Netherlands, United Kingdom, Australia, France, Switzerland, Sweden, Italy, Spain, Denmark, Ireland, Portugal, Austria, Lithuania, Latvia, Estonia | Alphanumeric Sender ID \* | Public Preview | - | - | - |
-
-\* Please refer to [SMS Concepts page](../sms/concepts.md) for supported destinations for this service.
-
-## Customers with Portugal Azure billing addresses
-
-| Number | Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
-| :- | :-- | :- | :- | :- | : |
-| Portugal | Toll-Free** | - | - | Public Preview | Public Preview\* |
-| Portugal | Local** | - | - | Public Preview | Public Preview\* |
-| Portugal, Germany, Netherlands, United Kingdom, Australia, France, Switzerland, Sweden, Italy, Spain, Denmark, Ireland, Poland, Austria, Lithuania, Latvia, Estonia | Alphanumeric Sender ID \*** | Public Preview | - | - | - |
-
-\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
-
-\** Phone numbers in Portugal can only be purchased for own use. Reselling or suballocating to another party is not allowed.
-
-\*** Please refer to [SMS Concepts page](../sms/concepts.md) for supported destinations for this service.
-
-## Customers with Slovakia Azure billing addresses
-
-| Number | Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
-| :- | :-- | :- | :- | :- | : |
-| Slovakia | Local | - | - | Public Preview | Public Preview\* |
-| Slovakia | Toll-Free | - | - | Public Preview | Public Preview\* |
-
-\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
-
-## Customers with Spain Azure billing addresses
-
-| Number | Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
-| :- | :-- | :- | :- | :- | : |
-| Spain | Toll-Free | - | - | Public Preview | Public Preview\* |
-| Spain | Local | - | - | Public Preview | Public Preview\* |
-| Spain, Germany, Netherlands, United Kingdom, Australia, France, Switzerland, Sweden, Italy, Denmark, Ireland, Portugal, Poland, Austria, Lithuania, Latvia, Estonia | Alphanumeric Sender ID \** | Public Preview | - | - | - |
-
-\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
-
-\** Please refer to [SMS Concepts page](../sms/concepts.md) for supported destinations for this service.
-
-## Customers with Sweden Azure billing addresses
-
-| Number | Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
-| :- | :-- | :- | :- | :- | : |
-| Sweden | Toll-Free | - | - | General Availability | General Availability\* |
-| Sweden | Local | - | - | General Availability | General Availability\* |
-| Canada | Toll-Free | General Availability | General Availability | General Availability | General Availability\* |
-| Canada | Local | - | - | General Availability | General Availability\* |
-| USA & Puerto Rico | Toll-Free | General Availability | General Availability | General Availability | General Availability\* |
-| USA & Puerto Rico | Local | - | - | General Availability | General Availability\* |
-| Ireland | Toll-Free | - | - | General Availability | General Availability\* |
-| Ireland | Local | - | - | General Availability | General Availability\* |
-| Denmark | Toll-Free | - | - | Public Preview | Public Preview\* |
-| Denmark | Local | - | - | Public Preview | Public Preview\* |
-| Italy | Toll-Free** | - | - | General Availability | General Availability\* |
-| Italy | Local** | - | - | General Availability | General Availability\* |
-| Norway | Local** | - | - | Public Preview | Public Preview\* |
-| Norway | Toll-Free | - | - | Public Preview | Public Preview\* |
-| Sweden, Germany, Netherlands, United Kingdom, Australia, France, Switzerland, Italy, Spain, Denmark, Ireland, Portugal, Poland, Austria, Lithuania, Latvia, Estonia | Alphanumeric Sender ID \** | Public Preview | - | - | - |
-
-\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
-
-\** Please refer to [SMS Concepts page](../sms/concepts.md) for supported destinations for this service.
-
-## Customers with Switzerland Azure billing addresses
-
-| Number | Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
-| :- | :-- | :- | :- | :- | : |
-| Switzerland | Toll-Free | - | - | Public Preview | Public Preview\* |
-| Switzerland | Local | - | - | Public Preview | Public Preview\* |
-| Switzerland, Germany, Netherlands, United Kingdom, Australia, France, Sweden, Italy, Spain, Denmark, Ireland, Portugal, Poland, Austria, Lithuania, Latvia, Estonia | Alphanumeric Sender ID \** | Public Preview | - | - | - |
-
-\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
-
-\** Please refer to [SMS Concepts page](../sms/concepts.md) for supported destinations for this service.
-
-## Customers with United Kingdom Azure billing addresses
-
-| Number | Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
-| :-- | :- | :- | :- | : | : |
-| UK | Toll-Free | - | - | General Availability | General Availability\* |
-| UK | Local | - | - | General Availability | General Availability\* |
-| USA & Puerto Rico | Toll-Free | General Availability | General Availability | General Availability | General Availability\* |
-| USA & Puerto Rico | Local | - | - | General Availability | General Availability\* |
-| Canada | Toll-Free | General Availability | General Availability | General Availability | General Availability\* |
-| Canada | Local | - | - | General Availability | General Availability\* |
-| United Kingdom, Germany, Netherlands, Australia, France, Switzerland, Sweden, Italy, Spain, Denmark, Ireland, Portugal, Poland, Austria, Lithuania, Latvia, Estonia | Alphanumeric Sender ID \** | Public Preview | - | - | - |
-\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
-
-\** Please refer to [SMS Concepts page](../sms/concepts.md) for supported destinations for this service.
-
-## Customers with United States Azure billing addresses
-
-| Number | Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
-| :- | :- | :- | :- | :- | : |
-| USA & Puerto Rico | Toll-Free | General Availability | General Availability | General Availability | General Availability\* |
-| USA & Puerto Rico | Local | - | - | General Availability | General Availability\* |
-| USA | Short-Codes\** | General Availability | General Availability | - | - |
-| UK | Toll-Free | - | - | General Availability | General Availability\* |
| UK | Local | - | - | General Availability | General Availability\* |
-| Canada | Toll-Free | General Availability | General Availability | General Availability | General Availability\* |
-| Canada | Local | - | - | General Availability | General Availability\* |
-| Denmark | Toll-Free | - | - | Public Preview | Public Preview\* |
-| Denmark | Local | - | - | Public Preview | Public Preview\* |
-| Germany, Netherlands, United Kingdom, Australia, France, Switzerland, Sweden, Italy, Spain, Denmark, Ireland, Portugal, Poland, Austria, Lithuania, Latvia, Estonia | Alphanumeric Sender ID\** | Public Preview | - | - | - |
-
-\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
-
-\** Please refer to [SMS Concepts page](../sms/concepts.md) for supported destinations for this service.
## Next steps
-For more information about Azure Communication Services' telephony options please see the following pages:
+For more information about Azure Communication Services' telephony options, see the following pages:
- [Learn more about Telephony](../telephony/telephony-concept.md) - Get a Telephony capable [phone number](../../quickstarts/telephony/get-phone-number.md)
communication-services Pricing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/pricing.md
Alice is a Dynamics 365 contact center agent, who makes an outbound call from Om
**Total cost for the call**: $0.04 + $0.04 = $0.08
+For more information on Omnichannel for Customer Service pricing, see [pricing scenarios for voice calling](/dynamics365/customer-service/voice-channel-pricing-scenarios).
+ ### Pricing example: Group audio call using JS SDK and one PSTN leg
-Alice and Bob are on a VOIP Call. Bob escalated the call to Charlie on Charlie's PSTN number, a US phone number beginning with `+1-425`.
+Alice and Bob are on a VoIP Call. Bob escalated the call to Charlie on Charlie's PSTN number, a US phone number beginning with `+1-425`.
- Alice used the JS SDK to build the app. They spoke for 10 minutes before calling Charlie on the PSTN number. - Once Bob escalated the call to Charlie on his PSTN number, the three of them spoke for another 10 minutes.
Asha calls your US toll-free number (acquired from Communication Services) from
**Cost calculations** - Inbound PSTN leg by Asha to toll-free number acquired from Communication Services x 10 minutes x $0.0220 per minute for receiving the call = $0.22-- One participant on the VOIP leg (David) x 5 minutes x $0.004 per participant leg per minute = $0.02
+- One participant on the VoIP leg (David) x 5 minutes x $0.004 per participant leg per minute = $0.02
Note that the service application that uses Call Automation SDK isn't charged to be part of the call. The additional monthly cost of leasing a US toll-free number isn't included in this calculation.
communication-services Pstn Pricing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/pstn-pricing.md
All prices shown below are in USD.
\* For destination-specific pricing for making outbound calls, refer to details [here](https://github.com/Azure/Communication/blob/master/pricing/communication-services-pstn-rates.csv)
+## Australia telephony offers
+
+### Phone number leasing charges
+|Number type |Monthly fee |
+|--|--|
+|Toll-Free |USD 16.00/mo |
+
+### Usage charges
+|Number type |To make calls* |To receive calls|
+|-||-|
+|Geographic |Starting at USD 0.0240/min |USD 0.0100/min |
+|Toll-free |Starting at USD 0.0240/min |USD 0.1750/min |
+
+\* For destination-specific pricing for making outbound calls, refer to details [here](https://github.com/Azure/Communication/blob/master/pricing/communication-services-pstn-rates.csv)
+
+## China telephony offers
+
+### Phone number leasing charges
+|Number type |Monthly fee |
+|--|--|
+|Toll-Free |USD 54.00/mo |
+
+### Usage charges
+|Number type |To make calls |To receive calls |
+|-||--|
+|Toll-free |N/A |USD 0.3168/min |
+
+## Finland telephony offers
+
+### Phone number leasing charges
+|Number type |Monthly fee |
+|--|--|
+|Toll-Free |USD 40.00/mo |
+
+### Usage charges
+|Number type |To make calls |To receive calls |
+|-||--|
+|Toll-free |N/A |Starting at USD 0.1888/min |
+
+## Hong Kong telephony offers
+
+### Phone number leasing charges
+|Number type |Monthly fee |
+|--|--|
+|Toll-Free |USD 25.00/mo |
+
+### Usage charges
+|Number type |To make calls |To receive calls |
+|-||--|
+|Toll-free |N/A |USD 0.0672/min |
+
+## Israel telephony offers
+
+### Phone number leasing charges
+|Number type |Monthly fee |
+|--|--|
+|Toll-Free |USD 15.00/mo |
+
+### Usage charges
+|Number type |To make calls |To receive calls |
+|-||--|
+|Toll-free |N/A |USD 0.1344/min |
+
+## New Zealand telephony offers
+
+### Phone number leasing charges
+|Number type |Monthly fee |
+|--|--|
+|Toll-Free |USD 40.00/mo |
+
+### Usage charges
+|Number type |To make calls |To receive calls |
+|-||--|
+|Toll-free |N/A |Starting at USD 0.0666/min |
+
+## Poland telephony offers
+
+### Phone number leasing charges
+|Number type |Monthly fee |
+|--|--|
+|Toll-Free |USD 22.00/mo |
+
+### Usage charges
+|Number type |To make calls |To receive calls |
+|-||--|
+|Toll-free |N/A |Starting at USD 0.1125/min |
+
+## Singapore telephony offers
+
+### Phone number leasing charges
+|Number type |Monthly fee |
+|--|--|
+|Toll-Free |USD 22.00/mo |
+
+### Usage charges
+|Number type |To make calls |To receive calls |
+|-||--|
+|Toll-free |N/A |USD 0.0650/min |
+
+## Taiwan telephony offers
+
+### Phone number leasing charges
+|Number type |Monthly fee |
+|--|--|
+|Toll-Free |USD 5.00/mo |
+
+### Usage charges
+|Number type |To make calls |To receive calls |
+|-||--|
+|Toll-free |N/A |USD 0.2718/min |
+
+## Thailand telephony offers
+
+### Phone number leasing charges
+|Number type |Monthly fee |
+|--|--|
+|Toll-Free |USD 25.00/mo |
+
+### Usage charges
+|Number type |To make calls |To receive calls |
+|-||--|
+|Toll-free |N/A |USD 0.2377/min |
+
+## Saudi Arabia telephony offers
+### Phone number leasing charges
+|Number type |Monthly fee |
+|--|--|
+|Toll-Free |USD 48.00/mo |
+### Usage charges
+|Number type |To make calls |To receive calls |
+|-||--|
+|Toll-free |N/A |USD 0.5408/min |
+
+## Malaysia telephony offers
+### Phone number leasing charges
+|Number type |Monthly fee |
+|--|--|
+|Toll-Free |USD 21.00/mo |
+### Usage charges
+|Number type |To make calls |To receive calls |
+|-||--|
+|Toll-free |N/A |USD 0.1587/min |
+## United Arab Emirates telephony offers
+### Phone number leasing charges
+|Number type |Monthly fee |
+|--|--|
+|Toll-Free |USD 10.00/mo |
+### Usage charges
+|Number type |To make calls |To receive calls |
+|-||--|
+|Toll-free |N/A |USD 0.2632/min |
+
+## South Korea telephony offers
+### Phone number leasing charges
+|Number type |Monthly fee |
+|--|--|
+|Toll-Free |USD 23.00/mo |
+### Usage charges
+|Number type |To make calls |To receive calls |
+|-||--|
+|Toll-free |N/A |USD 0.1287/min |
+## Philippines telephony offers
+### Phone number leasing charges
+|Number type |Monthly fee |
+|--|--|
+|Toll-Free |USD 25.00/mo |
+### Usage charges
+|Number type |To make calls |To receive calls |
+|-||--|
+|Toll-free |N/A |Starting at USD 0.3345/min |
*** Note: Pricing for all countries/regions is subject to change as pricing is market-based and depends on third-party suppliers of telephony services. Additionally, pricing may include requisite taxes and fees.
communication-services Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/service-limits.md
When you hit service limitations, you will receive an HTTP status code 429 (Too
- Reduce the frequency of calls. - Avoid immediate retries because all requests accrue against your usage limits.
-You can find more general guidance on how to set up your service architecture to handle throttling and limitations in the [Azure Architecture](/azure/architecture) documentation for [throttling patterns](/azure/architecture/patterns/throttling). Throttling limits can be increased through [a request to Azure Support](../../azure-portal/supportability/how-to-create-azure-support-request.md).
+You can find more general guidance on how to set up your service architecture to handle throttling and limitations in the [Azure Architecture](/azure/architecture) documentation for [throttling patterns](/azure/architecture/patterns/throttling). Throttling limits can be increased through a request to Azure Support.
+
+1. Go to the Azure portal.
+1. Select **Help + support**.
+1. Select **Create a support request**.
+1. In the problem description, choose **Technical** as the **Issue type** and add the details.
+
+You can follow the documentation for [creating a request to Azure Support](../../azure-portal/supportability/how-to-create-azure-support-request.md). A retry-with-backoff sketch follows below.
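
To follow the guidance above of avoiding immediate retries, you can wrap throttled calls in a small backoff helper. The sketch below is illustrative, not an SDK API; the endpoint, attempt count, and delay cap are placeholder values, and it honors a `Retry-After` header when the service returns one.

```typescript
// Illustrative sketch: retry an HTTP call on 429 with exponential backoff.
// The URL, attempt limit, and delay cap are placeholders, not service values.
async function sendWithBackoff(url: string, init: RequestInit, maxAttempts = 5): Promise<Response> {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const response = await fetch(url, init);
    if (response.status !== 429) {
      return response;
    }
    // Prefer the server-provided Retry-After (seconds); otherwise back off
    // exponentially, since immediate retries also accrue against your limits.
    const retryAfterSeconds = Number(response.headers.get("retry-after"));
    const delayMs = retryAfterSeconds > 0
      ? retryAfterSeconds * 1000
      : Math.min(2 ** attempt * 1000, 30_000);
    await new Promise((resolve) => setTimeout(resolve, delayMs));
  }
  throw new Error(`Request still throttled after ${maxAttempts} attempts`);
}
```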
## Acquiring phone numbers Before acquiring a phone number, make sure your subscription meets the [geographic and subscription](./telephony/plan-solution.md) requirements. Otherwise, you can't purchase a phone number. The below limitations apply to purchasing numbers through the [Phone Numbers SDK](./reference.md) and the [Azure portal](https://portal.azure.com/).
The Communication Services Calling SDK supports the following streaming configur
| Limit | Web | Windows/Android/iOS | | - | | -- | | **Maximum # of outgoing local streams that you can send simultaneously** | one video or one screen sharing | one video + one screen sharing |
-| **Maximum # of incoming remote streams that you can render simultaneously** | four videos + one screen sharing | six videos + one screen sharing |
+| **Maximum # of incoming remote streams that you can render simultaneously** | 9 videos + one screen sharing | 9 videos + one screen sharing |
While the Calling SDK will not enforce these limits, your users may experience performance degradation if they're exceeded.
communication-services Sms Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/sms/sms-faq.md
Rate Limits for SMS:
|Send Message|Short Code |Per Number|60|6000*|6000| |Send Message|Alphanumeric Sender ID |Per resource|60|600*|600|
-*If your company has requirements that exceed the rate-limits, email us at phone@microsoft.com and we will enable higher throughput.
+*If your company has requirements that exceed the rate limits, submit [a request to Azure Support](../../../azure-portal/supportability/how-to-create-azure-support-request.md) to enable higher throughput.
## Carrier Fees ### What are the carrier fees for SMS?
communication-services Direct Routing Provisioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/telephony/direct-routing-provisioning.md
SIP OPTIONS (Ping) - Status of SIP OPTIONS messages exchange:
Status - The overall health status of a Trunk: - Unknown - Indicates an unknown health status. - Online - Indicates that SBC connection is healthy. -- Inactive - Indicates inactive connection.
+- Warning - Indicates TLS or Ping is expired.
> [!IMPORTANT] > Before placing or receiving calls, make sure that SBC status is *Online*
communication-services Monitor Direct Routing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/telephony/monitoring-troubleshooting-telephony/monitor-direct-routing.md
Microsoft is working on providing more tools for troubleshooting and monitoring.
## Monitoring availability of Session Border Controllers using Session Initiation Protocol (SIP) OPTIONS messages
-Azure Communication Services direct routing uses SIP OPTIONS sent by the Session Border Controller to monitor SBC health. There are no actions required from the Azure administrator to enable the SIP OPTIONS monitoring.
+Azure Communication Services direct routing uses SIP OPTIONS sent by the Session Border Controller to monitor SBC health. There are no actions required from the Azure administrator to enable the SIP OPTIONS monitoring. The collected information is taken into consideration when routing decisions are made.
+
+For example, if there are several SBCs available to route a call, direct routing considers the SIP OPTIONS information received from each SBC to determine routing.
+
+Here's an example of the configuration:
+
+[![Screenshot showing SIP options configuration example.](../../media/monitoring-troubleshooting-telephony/sip-options-config-routing.png)](../../media/monitoring-troubleshooting-telephony/sip-options-config-routing.png#lightbox)
+
+When an Azure Communication Services SDK app makes a call to number +1 425 \<any seven digits>, direct routing evaluates the route. There are two SBCs in the route: `sbc.contoso.com` and `sbc2.contoso.com`. Both SBCs have equal priority in the route. Before picking an SBC, the routing mechanism evaluates the health of each SBC, based on when the SBC sent the SIP OPTIONS last time.
+
+An SBC is considered healthy if statistics at the moment of sending the call show that the SBC sends OPTIONS every minute.
+
+When a call is made, the following logic applies:
+
+- The SBC was paired at 10:00 AM.
+- The SBC sends OPTIONS at 10:01 AM, 10:02 AM, and so on.
+- At 10:15 AM, a user makes a call, and the routing mechanism selects this SBC.
+
+Direct routing looks back three regular OPTIONS intervals (the regular interval is one minute): if OPTIONS were sent during the last three minutes, the SBC is considered healthy.
+
+If the SBC in the example sent OPTIONS at any point between 10:12 AM and 10:15 AM (the time the call was made), it's considered healthy. If not, the SBC is demoted from the route.
+
+Demotion means that the SBC isn't tried first. For example, we have `sbc.contoso.com` and `sbc2.contoso.com` with equal priority in the same voice route.
+
+If `sbc.contoso.com` doesn't send SIP OPTIONS on a regular interval as previously described, it's demoted. Next, `sbc2.contoso.com` is tried for the call. If `sbc2.contoso.com` can't deliver the call because of error codes 408, 503, or 504, `sbc.contoso.com` (demoted) is tried again before a failure is generated.
+If both `sbc.contoso.com` and `sbc2.contoso.com` don't send OPTIONS, direct routing tries to place a call to both anyway, and then `sbc3.contoso.com` is tried.
+
+If two (or more) SBCs in one route are considered healthy and equal, Fisher-Yates shuffle is applied to distribute the calls between the SBCs.
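
The health check and demotion rules described above can be summarized with a short sketch. This is a conceptual illustration of the documented behavior, not the service's actual implementation; the `Sbc` shape and timestamps are assumptions for the example.

```typescript
// Conceptual illustration of the documented routing behavior (not service code).
interface Sbc {
  fqdn: string;
  lastOptionsAt: number; // epoch milliseconds of the last SIP OPTIONS received
}

// Healthy if OPTIONS arrived within the last three one-minute intervals.
const isHealthy = (sbc: Sbc, now: number): boolean =>
  now - sbc.lastOptionsAt <= 3 * 60_000;

function orderRoute(route: Sbc[], now: number): Sbc[] {
  const healthy = route.filter((sbc) => isHealthy(sbc, now));
  const demoted = route.filter((sbc) => !isHealthy(sbc, now));
  // Equally prioritized, healthy SBCs are shuffled (Fisher-Yates) to distribute calls.
  for (let i = healthy.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [healthy[i], healthy[j]] = [healthy[j], healthy[i]];
  }
  // Demoted SBCs are still attempted, but only after the healthy ones.
  return [...healthy, ...demoted];
}
```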
## Monitor with Azure portal and SBC logs
communication-services Troubleshooting Info https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/troubleshooting-info.md
To help you troubleshoot certain types of issues, you may be asked for any of th
* **Short Code Program Brief ID**: This ID is used to identify a short code program brief application. * **Email message ID**: This ID is used to identify Send Email requests. * **Correlation ID**: This ID is used to identify requests made using Call Automation.
-* **Call logs**: These logs contain detailed information that are used to troubleshoot calling and network issues.
+* **Call logs**: These logs contain detailed information that can be used to troubleshoot calling and network issues.
Also take a look at our [service limits](service-limits.md) documentation for more information on throttling and limitations.
Console.WriteLine($"Email operation id = {emailSendOperation.Id}");
```
+## Accessing Support Files in the Calling SDK
+The Calling SDK provides convenience methods for accessing its log files. To collect them proactively, pair this functionality with your application's support tooling.
+
+- [Log File Access Conceptual Document](../concepts/voice-video-calling/retrieve-support-files.md)
+- [Log File Access Tutorials](../tutorials/log-file-retrieval-tutorial.md)
+ ## Enable and access call logs # [JavaScript](#tab/javascript)
const callClient = new CallClient({ logger });
``` You can use AzureLogger to redirect the logging output from Azure SDKs by overriding the `AzureLogger.log` method:
-You can log to the browser console, a file, buffer, send to our own service, etc... If you are going to send logs over
+You can log to the browser console, a file, a buffer, send to your own service, etc. If you are going to send logs over
the network to your own service, do not send a request per log line because this will affect browser performance. Instead, accumulate log lines and send them in batches. ```javascript // Redirect log output
AzureLogger.log = (...args) => {
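    // A minimal batching sketch (an assumption, not SDK behavior): accumulate
    // log lines and flush them in one request per batch. The endpoint URL and
    // batch size are illustrative placeholders.
    pendingLogs.push(args.map(String).join(" "));
    if (pendingLogs.length >= 100) {
        // navigator.sendBeacon posts asynchronously without blocking the page.
        navigator.sendBeacon("https://example.com/client-logs", JSON.stringify(pendingLogs));
        pendingLogs.length = 0;
    }
};
var pendingLogs = []; // hoisted so the handler above can reference it
```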
# [iOS](#tab/ios)
-When developing for iOS, your logs are stored in `.blog` files. Note that you can't view the logs directly because they're encrypted.
+In an iOS application, logs are stored in `.blog` files. Note that you can't view the logs directly because they're encrypted.
These can be accessed by opening Xcode. Go to Windows > Devices and Simulators > Devices. Select your device. Under Installed Apps, select your application and click on "Download container".
This process gives you a `xcappdata` file. Right-click on this file and select
# [Android](#tab/android)
-When developing for Android, your logs are stored in `.blog` files. Note that you can't view the logs directly because they're encrypted.
+In an Android application, logs are stored in `.blog` files. Note that you can't view the logs directly because they're encrypted.
On Android Studio, navigate to the Device File Explorer by selecting View > Tool Windows > Device File Explorer from both the simulator and the device. The `.blog` file is located within your application's directory, which should look something like `/data/data/[app_name_space:com.contoso.com.acsquickstartapp]/files/acs_sdk.blog`. You can attach this file to your support request.
On Android Studio, navigate to the Device File Explorer by selecting View > Tool
## Enable and access call logs (Windows)
-When developing for Windows, your logs are stored in `.blog` files. Note that you can't view the logs directly because they're encrypted.
+In a Windows application, logs are stored in `.blog` files. Note that you can't view the logs directly because they're encrypted.
These are accessed by looking at where your app is keeping its local data. There are many ways to figure out where a UWP app keeps its local data, the following steps are just one of these ways: 1. Open a Windows Command Prompt (Windows Key + R)
These are accessed by looking at where your app is keeping its local data. There
5. Open the folder with the logs by typing `start ` followed by the path returned by the step 3. For example: `start C:\Users\myuser\AppData\Local\Packages\e84000dd-df04-4bbc-bf22-64b8351a9cd9_k2q8b5fxpmbf6` 6. Please attach all the `*.blog` and `*.etl` files to your Azure support request. + ## Finding Azure Active Directory information * **Getting Directory ID**
The Azure Communication Services SMS SDK uses the following error codes to help
| 9999 | Message failed to deliver due to unknown error/failure| Try resending the message | + ## Related information - Access logs for [voice and video](./analytics/logs/voice-and-video-logs.md), [chat](./analytics/logs/chat-logs.md), [email](./analytics/logs/email-logs.md), [network traversal](./analytics/logs/network-traversal-logs.md), [recording](./analytics/logs/recording-logs.md), [SMS](./analytics/logs/sms-logs.md) and [call automation](./analytics/logs/call-automation-logs.md).
+- Log Filename APIs for Calling SDK
- [Metrics](metrics.md) - [Service limits](service-limits.md)
communication-services Calling Sdk Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/calling-sdk-features.md
The Azure Communication Services Calling SDK supports the following streaming co
| Limit | Web | Windows/Android/iOS | | - | | -- | | **Maximum # of outgoing local streams that can be sent simultaneously** | 1 video and 1 screen sharing | 1 video + 1 screen sharing |
-| **Maximum # of incoming remote streams that can be rendered simultaneously** | 4 videos + 1 screen sharing | 6 videos + 1 screen sharing |
-| **Maximum # of incoming remote streams that can be rendered simultaneousl - Public preview WebSDK or greater [1.14.1](https://github.com/Azure/Communication/blob/master/releasenotes/acs-javascript-calling-library-release-notes.md#1141-beta1-2023-06-01)** | 9 videos + 1 screen sharing |
+| **Maximum # of incoming remote streams that can be rendered simultaneously** | 9 videos + 1 screen sharing WebSDK version [1.16.3](https://github.com/Azure/Communication/blob/master/releasenotes/acs-javascript-calling-library-release-notes.md#1163-stable-2023-08-24) or greater | 6 videos + 1 screen sharing |
-While the Calling SDK don't enforce these limits, your users may experience performance degradation if they're exceeded.
+While the Calling SDK doesn't enforce these limits, your users may experience performance degradation if they're exceeded. Use the [Optimal Video Count](/azure/communication-services/how-tos/calling-sdk/manage-video?pivots=platform-web#remote-video-quality) API to determine how many concurrent incoming video streams your web environment can support, as shown in the sketch below.
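
As a minimal sketch of that approach with the Web SDK, assuming an established `call` object and the Optimal Video Count feature available in supported SDK versions:

```typescript
import { Call, Features } from "@azure/communication-calling";

// Sketch: track how many incoming videos the current environment can render.
function trackOptimalVideoCount(call: Call): void {
  const ovcFeature = call.feature(Features.OptimalVideoCount);
  console.log(`Render up to ${ovcFeature.optimalVideoCount} incoming videos`);
  ovcFeature.on("optimalVideoCountChanged", () => {
    // Re-evaluate which remote video streams to render when the value changes.
    console.log(`Optimal video count is now ${ovcFeature.optimalVideoCount}`);
  });
}
```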
## Calling SDK timeouts
communication-services Closed Captions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/closed-captions.md
Here are main scenarios where Closed Captions are useful:
## Availability
-Closed Captions are supported in Private Preview only in ACS to ACS calls on all platforms.
+Closed Captions are supported in Private Preview only in Azure Communication Services to Azure Communication Services calls on all platforms.
- Android - iOS - Web
communication-services Media Quality Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/media-quality-sdk.md
Title: Azure Communication Services Media Quality metrics-
-description: Provides an overview of the Azure Communication Services media quality statics SDK.
+ Title: Azure Communication Services media quality statistics
+
+description: Get an overview of the Azure Communication Services media quality statistics SDK.
zone_pivot_groups: acs-plat-web-ios-android-windows
-# Media quality statistics
-To help understand media quality in VoIP and Video calls using Azure Communication Services, we have a feature called "Media quality statistics" that you can use to examine the low-level audio, video, and screen-sharing quality metrics for incoming and outgoing call metrics.
+# Media quality statistics
+
+To help you understand media quality in VoIP and video calls that use Azure Communication Services, there's a feature called *media quality statistics*. Use it to examine the low-level audio, video, and screen-sharing quality metrics for incoming and outgoing call metrics.
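
As a quick orientation before the platform-specific sections, here's a minimal Web SDK sketch, assuming the media statistics feature and collector API in recent `@azure/communication-calling` versions; the interval values are illustrative.

```typescript
import { Call, Features } from "@azure/communication-calling";

// Sketch: subscribe to aggregated media quality samples for an active call.
function collectMediaStats(call: Call): void {
  const mediaStatsFeature = call.feature(Features.MediaStats);
  const collector = mediaStatsFeature.createCollector({
    aggregationInterval: 10,      // seconds between reported samples (illustrative)
    dataPointsPerAggregation: 6,  // raw data points folded into each sample
  });
  collector.on("sampleReported", (sample) => {
    // Inspect low-level audio, video, and screen-sharing metrics here.
    console.log("Media stats sample:", sample);
  });
}
```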
+ ::: zone pivot="platform-web" [!INCLUDE [Media Stats for Web](./includes/media-stats/media-stats-web.md)]
communication-services Retrieve Support Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/retrieve-support-files.md
+
+ Title: Retrieving support files from calling SDK applications
+description: Understand how to access log files for the creation of effective support tools.
+Last updated: 07/17/2023
+# Overview of Log File Access
+
+Modern sandboxed applications can sometimes face challenges that hinder user experience. Key among these challenges is the difficulty for an end user to access the application's internal files, such as log files. The Log File Access API offers a solution, helping facilitate access to support files and their eventual export from the device into the support process.
+
+### For Third-party Application Developers:
+Log file access eliminates the process of manually finding logs or requiring a development environment to extract them. Instead, it paves the way for a direct method to hand off crucial information. This not only speeds up the troubleshooting process but also enhances the overall user experience, as issues can be diagnosed and rectified more efficiently.
+
+### From Microsoft's Perspective:
+For Microsoft, the primary aim is to ensure that any issues arising from our platforms are addressed swiftly and effectively. Seamless log handoffs between your support team and ours enable our engineering teams to get a clear picture of the challenge at hand, diagnose it accurately, and set about resolving it.
+
+## Integrating Log Collection in Third-Party Applications
+
+**Developer Considerations:**
+As a developer, it's crucial to understand how and when to capture logs. When issues arise, timely delivery of log files aids in faster diagnostics and resolutions.
+
+1. **Timeliness**: Always prioritize the immediate retrieval of logs. The closer to the time of the incident, the more relevant and insightful the data will be.
+2. **User Interaction**: Determine the most intuitive way for your users to report problems. A seamless user experience can encourage more accurate and timely reporting.
+3. **Support Integration**: Consider how your support teams access these logs. Integration should be straightforward, ensuring efficient troubleshooting.
+4. **Collaboration with Azure**: Ensure easy accessibility for Azure's teams, perhaps through a direct link or a streamlined request mechanism.
+
+By addressing these elements, you can craft a system that not only serves the immediate needs of your users but also sets the stage for effective collaboration with Microsoft's support infrastructure.
+
+## Implementing Log Collection in Your Application
+
+When you incorporate log collection strategies into your application, the responsibility for ensuring the privacy and security of these logs lies with you as the developer. However, here are some suggestions to enhance your implementation process.
+
+### "Report an Issue" Dialog
+
+A simple yet effective method is the "Report an Issue" feature. Think of it as a direct line between the user and support. After a user encounters an issue, a prompt can ask users if they wish to report the problem. If they agree, logs can be automatically attached and sent to the relevant support channels.
+
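For a web app, one way to wire up such a dialog is to capture a diagnostics dump when the user confirms the report. A minimal sketch, assuming the Debug Information feature of the Web Calling SDK; `uploadToSupport` is a hypothetical helper for your own backend.

```typescript
import { CallClient, Features } from "@azure/communication-calling";

// Sketch: gather diagnostics when the user agrees to report an issue.
// `uploadToSupport` is a hypothetical uploader for your own support backend.
async function reportIssue(
  callClient: CallClient,
  uploadToSupport: (payload: unknown) => Promise<void>
): Promise<void> {
  const debugInfo = callClient.feature(Features.DebugInfo);
  const { dump, dumpId } = debugInfo.dumpDebugInfo();
  await uploadToSupport({
    dumpId,
    dump,
    lastCallId: debugInfo.lastCallId, // helps correlate with service-side logs
  });
}
```
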
+### Feedback After the Call
+
+Right after a call might be an opportune time to gather feedback. Using an end-of-call survey can be beneficial. Here, users can provide feedback on the call quality and, if needed, attach logs of any issues faced. This feedback ensures timely and relevant data collection.
+
+### Shake-to-Report Feature
+
+Taking inspiration from Microsoft Teams, consider integrating a shake-to-report feature. The user can shake their device and initiate the process to report an issue. It's a user-friendly method, but remember to inform users about this feature to ensure its effective use.
+
+### Proactive Auto-Detection
+
+For a more advanced approach, consider having the system automatically detect potential call issues. Upon detection, users can be prompted to share logs. It's a proactive measure, ensuring issues are caught early, but it's crucial to strike a balance to avoid unnecessary prompts.
+
+## Choosing the Best Strategy
+
+User consent is paramount. Always inform and ensure users are aware of what they're sharing and why. Each application and its user base are unique. Reflect on past interactions, and consider the resources at hand. These considerations guide you to select the best strategy for your application, ensuring a smooth user experience and efficient troubleshooting.
+
+## Further Reading
+
+- [End of Call Survey Conceptual Document](../voice-video-calling/end-of-call-survey-concept.md)
+- [Troubleshooting Info](../troubleshooting-info.md)
+- [Log Sharing Tutorial](../../tutorials/log-file-retrieval-tutorial.md)
communication-services Simulcast https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/simulcast.md
Title: Azure Communication Services Simulcast
-description: Overview of Simulcast
+description: Overview of Simulcast - how sending multiple video quality renditions helps overall call quality
# Simulcast
+Simulcast is a technique that allows video streaming applications to send multiple versions of the same video content at different resolutions and bitrates. This way, the receiver can choose the most suitable version based on their network conditions and device capabilities.
-Simulcast is supported starting from 1.9.1-beta.1+ release of Azure Communication Services Calling Web SDK. Currently, simulcast on the sender side is supported on following desktop browsers - Chrome and Edge. Simulcast on receiver side is supported on all platforms that Azure Communication Services Calling supports.
-Support for Sender side Simulcast capability from mobile browsers will be added in the future.
+The lack of simulcast support leads to a degraded video experience in calls with three or more participants. If a video receiver with poor network conditions joins the conference, it will impact the quality of video received from the sender without simulcast support for all other participants. This is because the video sender optimizes its video feed against the lowest common denominator. With simulcast, the impact of lowest common denominator is minimized because the video sender produces specialized low fidelity video encoding for a subset of receivers that run on poor networks (or otherwise constrained).
-Simulcast is a technique by which an endpoint encodes the same video feed using different qualities, sends these video feeds of multiple qualities to a selective forwarding unit (SFU) that decides which of the receivers gets which quality.
-The lack of simulcast support leads to a degraded video experience in calls with three or more participants. If a video receiver with poor network conditions joins the conference, it will impact the quality of video received from the sender without simulcast support for all other participants. This is because the video sender will optimize its video feed against the lowest common denominator. With simulcast, the impact of lowest common denominator will be minimized. That is because the video sender will produce specialized low fidelity video encoding for a subset of receivers that run on poor networks (or otherwise constrained).
-## Scenarios where simulcast is useful
-- Users with unknown bandwidth constraints joining. When a new joiner joins the call, its bandwidth conditions are unknown when starting to receive video. It will not be sent high quality content before reliable estimation of its bandwidth is known to prevent overshooting the available bandwidth. In unicast, if everyone was receiving high quality content, then that would cause degradation for every other receiver until the reliable estimate of the bandwidth conditions can be achieved. In simulcast, lower resolution video can be sent to the new joiner until its bandwidth conditions are known while others keep receiving high quality video.
-In a similar way, if one of the receivers is on poor network, video quality of all other receivers on good network will be degraded to accommodate for the receiver on poor network in unicast. But in simulcast, lower resolution/bitrate content can be sent to the receiver on poor network and higher resolution/bitrate content can be sent to receivers on good network.
-- In content sharing, where thumbnails are often used for video content, lower resolution videos are requested from the producers. If in parallel zooming of someone's video is needed, zoomed video will be low quality to prevent others looking at the content not to receive both content and video at high quality thus wasting bandwidth.-- When video is sent to a receiver who has a larger view (like a desktop receiver. On desktop, videos are usually rendered on big views) than another receiver who has a smaller view (like a mobile receiver. Mobile screens are usually small). With simulcast, the quality of the larger view will not be affected by the quality of the smaller view. Sender will send a high resolution to the larger view receiver and a smaller resolution to the smaller view receiver.
+Simulcast is supported on Azure Communication Services SDK for WebJS (1.9.1-beta.1+) and native SDK for Android, iOS, and Windows. Currently, simulcast on the sender side is supported on following desktop browsers - Chrome and Edge. Simulcast on receiver side is supported on all platforms that Azure Communication Services Calling supports. Support for Sender side Simulcast capability from mobile browsers will be added in the future.
-## How it's used/works
-Simulcast is adaptively enabled on-demand to save bandwidth and CPU resources of the publisher.
-Subscribers notify SFU of its maximum resolution preference based on the size of the renderer element.
-SFU tracks the bandwidth conditions and resolution requirements of all current subscribers to the publisher's video and forwards the aggregated parameters of all subscribers to the publisher. Publisher will pick the best set of parameters to give optimal quality to all receivers considering all publisher's and subscribers' constraints.
+## How Simulcast works
+Simulcast is adaptively enabled on-demand to save bandwidth and CPU resources of the publisher. Subscribers notify the SFU of their maximum resolution preference based on the size of the renderer element. The SFU tracks the bandwidth conditions and resolution requirements of all current subscribers to the publisher's video and forwards the aggregated parameters of all subscribers to the publisher. The publisher picks the best set of parameters to give optimal quality to all receivers, considering all publisher and subscriber constraints.
The SFU receives multiple qualities of the content and chooses the quality to forward to the subscriber. There is no transcoding of the content on the SFU. The SFU won't forward a higher resolution than the subscriber requested.

## Limitations
Web endpoints support simulcast only for video content, with a maximum of two distinct qualities.

## Resolutions
In adaptive simulcast, there are no set resolutions for high- and low-quality video streams. An optimal set of either single or multiple streams is chosen. If every subscriber to video requests and is capable of receiving the maximum resolution the publisher can provide, only that maximum resolution is sent. The following resolutions are supported and requested by receivers in web simulcast: 180p, 240p, 360p, 540p, 720p.
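
To make the forwarding rule concrete, here's a conceptual sketch of how a forwarding unit could choose a layer under the constraints above. It's illustrative only; this selection happens inside the service, not in your application.

```typescript
// Conceptual sketch of the forwarding rule (illustrative only).
interface SimulcastLayer {
  height: number;     // e.g. 180, 360, 720
  bitrateBps: number; // encoded bitrate of this quality
}

function pickLayerForSubscriber(
  layers: SimulcastLayer[],     // the qualities received from the publisher
  requestedHeight: number,      // subscriber's maximum resolution preference
  bandwidthBps: number          // subscriber's estimated available bandwidth
): SimulcastLayer | undefined {
  // Highest quality that fits both constraints; never exceed the request.
  return layers
    .filter((l) => l.height <= requestedHeight && l.bitrateBps <= bandwidthBps)
    .sort((a, b) => b.height - a.height)[0];
}
```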
communication-services Control Mid Call Media Actions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/call-automation/control-mid-call-media-actions.md
call_automation_client = CallAutomationClient.from_connection_string((("<Azure C
-- ## Send DTMF
-You can send dtmf tones to an external participant, which may be useful when you're already on a call and need to invite another participant who has an extension number or an IVR menu to navigate.
+You can send DTMF tones to an external participant, which may be useful when you're already on a call and need to invite another participant who has an extension number or an IVR menu to navigate.
>[!NOTE] >This is only supported for external PSTN participants and supports sending a maximum of 18 tones at a time.
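
For example, a minimal sketch with the JavaScript Call Automation SDK, assuming the `sendDtmfTones` media operation available in recent SDK versions; the connection string, call connection ID, and phone number are placeholders.

```typescript
import { CallAutomationClient } from "@azure/communication-call-automation";

// Sketch: send DTMF tones to a PSTN participant on an established call.
async function sendExtension(): Promise<void> {
  const client = new CallAutomationClient("<resource_connection_string>");
  const callMedia = client
    .getCallConnection("<call_connection_id>")
    .getCallMedia();
  // Up to 18 tones per request; the target must be an external PSTN participant.
  await callMedia.sendDtmfTones(
    ["one", "two", "three", "pound"],
    { phoneNumber: "+14255550123" }
  );
}
```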
communication-services Play Action https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/call-automation/play-action.md
This guide will help you get started with playing audio files to participants by
|Status|Code|Subcode|Message| |-|--|--|--| |PlayCompleted|200|0|Action completed successfully.|
+|PlayCanceled|400|8508|Action failed, the operation was canceled.|
|PlayFailed|400|8535|Action failed, file format is invalid.| |PlayFailed|400|8536|Action failed, file could not be downloaded.|
-|PlayCanceled|400|8508|Action failed, the operation was canceled.|
+|PlayFailed|400|8565|Action failed, bad request to Azure AI services. Check input parameters.|
+|PlayFailed|401|8565|Action failed, Azure AI services authentication error.|
+|PlayFailed|403|8565|Action failed, forbidden request to Azure AI services, free subscription used by the request ran out of quota.|
+|PlayFailed|429|8565|Action failed, requests exceeded the number of allowed concurrent requests for the Azure AI services subscription.|
+|PlayFailed|408|8565|Action failed, request to Azure AI services timed out.|
+|PlayFailed|500|9999|Unknown internal server error.|
+|PlayFailed|500|8572|Action failed due to play service shutdown.|
+ ## Clean up resources
communication-services Recognize Action https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/call-automation/recognize-action.md
This guide will help you get started with recognizing DTMF input provided by par
|RecognizeCompleted|200|8514|Action completed as stop tone was detected.| |RecognizeCompleted|400|8508|Action failed, the operation was canceled.| |RecognizeCompleted|400|8532|Action failed, inter-digit silence timeout reached.|
+|RecognizeCanceled|400|8508|Action failed, the operation was canceled.|
|RecognizeFailed|400|8510|Action failed, initial silence timeout reached.| |RecognizeFailed|500|8511|Action failed, encountered failure while trying to play the prompt.| |RecognizeFailed|500|8512|Unknown internal server error.|
-|RecognizeCanceled|400|8508|Action failed, the operation was canceled.|
+| RecognizeFailed | 400 | 8565 | Action failed, bad request to Azure AI services. Check input parameters. |
+| RecognizeFailed | 400 | 8565 | Action failed, bad request to Azure AI services. Unable to process the payload provided, check the play source input. |
+| RecognizeFailed | 401 | 8565 | Action failed, Azure AI services authentication error. |
+| RecognizeFailed | 403 | 8565 | Action failed, forbidden request to Azure AI services, free subscription used by the request ran out of quota. |
+| RecognizeFailed | 429 | 8565 | Action failed, requests exceeded the number of allowed concurrent requests for the Azure AI services subscription. |
+| RecognizeFailed | 408 | 8565 | Action failed, request to Azure AI services timed out. |
## Clean up resources
communication-services Teams Interop Call Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/call-automation/teams-interop-call-automation.md
Title: Azure Communication Services Call Automation how-to for adding Microsoft Teams User into an existing call
-description: ProvIDes a how-to for adding a Microsoft Teams user to a call with Call Automation.
+description: Provides a how-to for adding a Microsoft Teams user to a call with Call Automation.
Last updated 03/28/2023
+zone_pivot_groups: acs-js-csharp-java-python
# Add a Microsoft Teams user to an existing call using Call Automation In this quickstart, we use the Azure Communication Services Call Automation APIs to add, remove and transfer call to a Teams user.
-You need to be part of the Azure Communication Services TAP program. It's likely that you're already part of this program, and if you aren't, sign-up using https://aka.ms/acs-tap-invite. To access the specific Teams Interop functionality for Call Automation, submit your Teams Tenant IDs and Azure Communication Services Resource IDs by filling this form: https://aka.ms/acs-ca-teams-tap. You need to fill the form every time you need a new tenant ID and new resource ID allow-listed.
- ## Prerequisites - An Azure account with an active subscription.-- A Microsoft Teams tenant with administrative privileges.
+- A Microsoft Teams phone license and a Teams tenant with administrative privileges. A Teams phone license is required to use this feature; learn more about Teams licenses [here](https://www.microsoft.com/en-us/microsoft-teams/compare-microsoft-teams-bundle-options). Administrative privileges are required to authorize the Communication Services resource to call Teams users, as explained later in Step 1.
- A deployed [Communication Service resource](../../quickstarts/create-communication-resource.md) and valid connection string found by selecting Keys in left side menu on Azure portal. - [Acquire a PSTN phone number from the Communication Service resource](../../quickstarts/telephony/get-phone-number.md). Note the phone number you acquired to use in this quickstart. - An Azure Event Grid subscription to receive the `IncomingCall` event.
You need to be part of the Azure Communication Services TAP program. It's likely
## Step 1: Authorization for your Azure Communication Services Resource to enable calling to Microsoft Teams users
+To enable calling through Call Automation APIs, a [Microsoft Teams Administrator](/azure/active-directory/roles/permissions-reference#teams-administrator) or [Global Administrator](/azure/active-directory/roles/permissions-reference#global-administrator) must explicitly grant the Communication Services resource(s) access to their tenant to allow calling.
+To enable calling through Call Automation APIs, a [Microsoft Teams Administrator](/azure/active-directory/roles/permissions-reference#teams-administrator) or [Global Administrator](/en-us/azure/active-directory/roles/permissions-reference#global-administrator) must explicitly enable the Communication Services resource(s) access to their tenant to allow calling.
[Set-CsTeamsAcsFederationConfiguration (MicrosoftTeamsPowerShell)](/powershell/module/teams/set-csteamsacsfederationconfiguration)
-Tenant level setting that enables/disables federation between their tenant and specific ACS resources.
+Tenant level setting that enables/disables federation between their tenant and specific Communication Services resources.
[Set-CsExternalAccessPolicy (SkypeForBusiness)](/powershell/module/skype/set-csexternalaccesspolicy)
-User policy that allows the admin to further control which users in their organization can participate in federated communications with ACS users.
+User policy that allows the admin to further control which users in their organization can participate in federated communications with Communication Services users.
## Step 2: Use the Graph API to get Azure AD object ID for Teams users and optionally check their presence
-A Teams user's Azure Active Directory (Azure AD) object ID (OID) is required to add them to or transfer to them from an ACS call. The OID can be retrieved through 1) Office portal, 2) Azure AD portal, 3) Azure AD Connect; or 4) Graph API. The example below uses Graph API.
+A Teams user's Azure Active Directory (Azure AD) object ID (OID) is required to add them to or transfer to them from a Communication Services call. The OID can be retrieved through 1) Office portal, 2) Azure AD portal, 3) Azure AD Connect; or 4) Graph API. The example below uses Graph API.
Consent must be granted by an Azure AD admin before Graph can be used to search for users; learn more in the [Microsoft Graph Security API overview](/graph/security-concept-overview) document. The OID can be retrieved using the list users API to search for users. The following shows a search by display name, but other properties can be searched as well.
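For example, a minimal C# sketch of such a search (assuming you've already acquired a Microsoft Graph access token through your app's Azure AD flow; the display name `Jack` is illustrative):

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Headers;

// Query the Microsoft Graph list users API, filtering by display name.
var http = new HttpClient();
http.DefaultRequestHeaders.Authorization =
    new AuthenticationHeaderValue("Bearer", "<Graph_Access_Token>");
HttpResponseMessage response = await http.GetAsync(
    "https://graph.microsoft.com/v1.0/users?$filter=startswith(displayName,'Jack')");
string json = await response.Content.ReadAsStringAsync();
Console.WriteLine(json); // Each returned user object carries an 'id' property (the OID).
```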
The response is a JSON collection of user objects; each object's `id` property is the Azure AD object ID (OID) used in the following steps.
-## Step 3: Add a Teams user to an existing ACS call controlled by Call Automation APIs
-You need to complete the prerequisite step and have a web service app to control an ACS call. Using the callConnection object, add a participant to the call.
+## Step 3: Add a Teams user to an existing Communication Services call controlled by Call Automation APIs
+You need to complete the prerequisite step and have a web service app to control a Communication Services call. Using the callConnection object, add a participant to the call.
+```csharp
+CallAutomationClient client = new CallAutomationClient("<Connection_String>");
+var answer = await client.AnswerCallAsync(incomingCallContext, new Uri("<Callback_URI>"));
+await answer.Value.CallConnection.AddParticipantAsync(
+    new CallInvite(new MicrosoftTeamsUserIdentifier("<Teams_User_Guid>")) {
+        SourceDisplayName = "Jack (Contoso Tech Support)" });
+```
+```java
+CallAutomationClient client = new CallAutomationClientBuilder().connectionString("<resource_connection_string>").buildClient();
+AnswerCallResult answer = client.answerCall(incomingCallContext, "<Callback_URI>"));
+answer.getCallConnection().addParticipant(
+ new CallInvite(new MicrosoftTeamsUserIdentifier("<Teams_User_Guid>"))
+ .setSourceDisplayName("Jack (Contoso Tech Support)"));
+```
+```typescript
+const client = new CallAutomationClient("<resource_connection_string>");
+const answer = await client.answerCall(incomingCallContext, "<Callback_URI>"));
+answer.callConnection.addParticipant({
+ targetParticipant: { microsoftTeamsUserId: "<Teams_User_Guid>" },
+ sourceDisplayName: "Jack (Contoso Tech Support)"
+});
+```
+```python
+call_automation_client = CallAutomationClient.from_connection_string("<resource_connection_string>")
+answer = call_automation_client.answer_call(incoming_call_context = incoming_call_context, callback_url = "<Callback_URI>")
+call_connection_client = call_automation_client.get_call_connection(answer.call_connection_id)
+call_connection_client.add_participant(target_participant = CallInvite(
+ target = MicrosoftTeamsUserIdentifier(user_id="<USER_ID>"),
+ source_display_name = "Jack (Contoso Tech Support)"))
+```
+On the Microsoft Teams desktop client, Jack's call is sent to the Microsoft Teams user through an incoming call toast notification.

![Screenshot of Microsoft Teams desktop client, Jack's call is sent to the Microsoft Teams user through an incoming call toast notification.](./media/incoming-call-toast-notification-teams-user.png)
-After the Microsoft Teams user accepts the call, the in-call experience for the Microsoft Teams user will have all the participants displayed on the Microsoft Teams roster.
+After the Microsoft Teams user accepts the call, the in-call experience for the Microsoft Teams user has all the participants displayed on the Microsoft Teams roster. Note that your application managing the call through the Call Automation API remains hidden to the Teams user on the call screen.
![Screenshot of Microsoft Teams user accepting the call and entering the in-call experience for the Microsoft Teams user.](./media/active-call-teams-user.png)
-## Step 4: Remove a Teams user from an existing ACS call controlled by Call Automation APIs
+## Step 4: Remove a Teams user from an existing Communication Services call controlled by Call Automation APIs
+```csharp
+await answer.Value.CallConnection.RemoveParticipantAsync(new MicrosoftTeamsUserIdentifier("<Teams_User_Guid>"));
+```
-### Optional feature: Transfer to a Teams user from an existing ACS call controlled by Call Automation APIs
+```java
+answer.getCallConnection().removeParticipant(new MicrosoftTeamsUserIdentifier("<Teams_User_Guid>"));
+```
+```typescript
+answer.callConnection.removeParticipant({ microsoftTeamsUserId: "<Teams_User_Guid>" });
+```
+```python
+call_connection_client.remove_participant(target_participant = MicrosoftTeamsUserIdentifier(user_id="<USER_ID>"))
+```
+### Optional feature: Transfer to a Teams user from an existing Communication Services call controlled by Call Automation APIs
+```csharp
-await answer.Value.CallConnection.TransferCallToParticipantAsync(new CallInvite(new MicrosoftTeamsUserIdentifier('<Teams_User_Guid>')));
+await answer.Value.CallConnection.TransferCallToParticipantAsync(new MicrosoftTeamsUserIdentifier("<Teams_User_Guid>"));
```
+```java
+answer.getCallConnection().transferCallToParticipant(new MicrosoftTeamsUserIdentifier("<Teams_User_Guid>"));
+```
+```typescript
+answer.callConnection.transferCallToParticipant({ microsoftTeamsUserId: "<Teams_User_Guid>" });
+```
+```python
+call_connection_client.transfer_call_to_participant(target_participant = MicrosoftTeamsUserIdentifier(user_id = "<USER_ID>"))
+```
+### How to tell if your tenant isn't enabled for this preview
+
+![Screenshot showing the error during Step 1.](./media/teams-federation-error.png)
If you want to clean up and remove a Communication Services subscription, you ca
## Next steps
+- Learn how to [record your calls](../../quickstarts/voice-video-calling/get-started-call-recording.md).
- Learn more about [Call Automation](../../concepts/call-automation/call-automation.md) and its features.
-- Learn more about capabilities of [Teams Interoperability support with ACS Call Automation](../../concepts/call-automation/call-automation-teams-interop.md)
+- Learn more about capabilities of [Teams Interoperability support with Azure Communication Services Call Automation](../../concepts/call-automation/call-automation-teams-interop.md)
- Learn about [Play action](../../concepts/call-automation/play-Action.md) to play audio in a call.
- Learn how to build a [call workflow](../../quickstarts/call-automation/callflows-for-customer-interactions.md) for a customer support scenario.
communication-services Closed Captions Teams Interop How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/calling-sdk/closed-captions-teams-interop-how-to.md
zone_pivot_groups: acs-plat-web-ios-android-windows
Learn how to allow your users to enable closed captions during a Teams interoperability scenario where your users might be in a meeting between an Azure Communication Services user and a Teams client user, or where your users are using Azure Communication Services calling SDK with their Microsoft 365 identity. - ::: zone pivot="platform-windows" [!INCLUDE [Closed captions with Windows](./includes/closed-captions/closed-captions-teams-interop-windows.md)] ::: zone-end
communication-services Orientation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/ui-library-sdk/orientation.md
+
+ Title: Screen orientation over the UI Library
+
+description: Use Azure Communication Services UI library for Mobile native to set orientation for different library screen.
++++ Last updated : 05/24/2022+
+zone_pivot_groups: acs-plat-ios-android
+
+#Customer intent: As a developer, I want to set the orientation of my pages in my application
++
+# Orientation
+
+Azure Communication Services UI Library enables developers to set the orientation of the UI Library screens. Developers can specify the screen orientation mode for the call setup screen and the in-call screen of the UI Library.
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- A deployed Communication Services resource. [Create a Communication Services resource](../../quickstarts/create-communication-resource.md).
+- A `User Access Token` to enable the call client. For more information, see [how to get a `User Access Token`](../../quickstarts/identity/access-tokens.md).
+- Optional: Complete the quickstart for [getting started with the UI Library composites](../../quickstarts/ui-library/get-started-composites.md)
+++
+## Next steps
+
+- [Learn more about UI Library](../../concepts/ui-library/ui-library-overview.md)
communication-services Manage Teams Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/manage-teams-identity.md
zone_pivot_groups: acs-js-csharp-java-python-+ # Quickstart: Set up and manage access tokens for Teams users
communication-services Get Started Rooms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/rooms/get-started-rooms.md
In this section you learned how to:
You may also want to:
- Learn about [rooms concept](../../concepts/rooms/room-concept.md)
- Learn about [voice and video calling concepts](../../concepts/voice-video-calling/about-call-types.md)
+ - Review Azure Communication Services [samples](../../samples/overview.md)
communication-services Join Rooms Call https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/rooms/join-rooms-call.md
Title: Quickstart - Join a room call
description: In this quickstart, you'll learn how to join a room call using web or native mobile calling SDKs --++ - Previously updated : 07/27/2022+ Last updated : 07/20/2023
-zone_pivot_groups: acs-web-ios-android
+zone_pivot_groups: acs-plat-web-ios-android-windows
# Quickstart: Join a room call

## Prerequisites

- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
- An active Communication Services resource and connection string. [Create a Communication Services resource](../create-communication-resource.md).
- Two or more Communication User Identities. [Create and manage access tokens](../identity/access-tokens.md) or [Quick-create identities for testing](../identity/quick-create-identity.md).
-- A room resource. [Create and manage rooms](get-started-rooms.md)
+- A room you created, with participants added to it. [Create and manage rooms](get-started-rooms.md)

## Obtain user access token

If you have already created users and added them as participants in the room by following the "Set up room participants" section on [this page](./get-started-rooms.md), you can directly use those users to join the room.
az communication identity token issue --scope voip --connection-string "yourConn
For details, see [Use Azure CLI to Create and Manage Access Tokens](../identity/access-tokens.md?pivots=platform-azcli). ---
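If you prefer to create the test users and tokens programmatically, a minimal C# sketch using `CommunicationIdentityClient` (the connection string placeholder is illustrative) looks like this:

```csharp
using System;
using Azure.Communication.Identity;

var identityClient = new CommunicationIdentityClient("<Connection_String>");
var user = await identityClient.CreateUserAsync();
// Issue a VoIP-scoped token for the new user; hand it to the calling SDK on the client.
var token = await identityClient.GetTokenAsync(
    user.Value, scopes: new[] { CommunicationTokenScope.VoIP });
Console.WriteLine(token.Value.Token);
```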
-## Prerequisites
--- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).-- An active Communication Services resource and connection string. [Create a Communication Services resource](../create-communication-resource.md).-- Two or more Communication User Identities. [Create and manage access tokens](../identity/access-tokens.md) or [Quick-create identities for testing](../identity/quick-create-identity.md).-- A room resource. [Create and manage rooms](get-started-rooms.md)
-## Obtain user access token
-You'll need to create a User Access Token for each call participant. [Learn how to create and manage user access tokens](../identity/access-tokens.md). You can also use the Azure CLI and run the command below with your connection string to create a user and an access token.
-```azurecli-interactive
-az communication identity token issue --scope voip --connection-string "yourConnectionString"
-```
-For details, see [Use Azure CLI to Create and Manage Access Tokens](../identity/access-tokens.md?pivots=platform-azcli).
[!INCLUDE [Join a room call from iOS calling SDK](./includes/rooms-quickstart-call-ios.md)]+ ::: zone-end ::: zone pivot="platform-android" -
-## Prerequisites
--- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).-- An active Communication Services resource and connection string. [Create a Communication Services resource](../create-communication-resource.md).-- Two or more Communication User Identities. [Create and manage access tokens](../identity/access-tokens.md) or [Quick-create identities for testing](../identity/quick-create-identity.md).-- A room resource. [Create and manage rooms](get-started-rooms.md)-
-## Obtain user access token
-You'll need to create a User Access Token for each call participant. [Learn how to create and manage user access tokens](../identity/access-tokens.md). You can also use the Azure CLI and run the command below with your connection string to create a user and an access token.
-```azurecli-interactive
-az communication identity token issue --scope voip --connection-string "yourConnectionString"
-```
-For details, see [Use Azure CLI to Create and Manage Access Tokens](../identity/access-tokens.md?pivots=platform-azcli).
::: zone-end ## Next steps
communication-services Get Started Teams Auto Attendant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/voice-video-calling/get-started-teams-auto-attendant.md
# Quickstart: Join your calling app to a Teams Auto Attendant + In this quickstart you are going to learn how to start a call from Azure Communication Services user to Teams Auto Attendant. You are going to achieve it with the following steps: 1. Enable federation of Azure Communication Services resource with Teams Tenant.
communication-services Get Started Teams Call Queue https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/voice-video-calling/get-started-teams-call-queue.md
# Quickstart: Join your calling app to a Teams call queue + In this quickstart you are going to learn how to start a call from Azure Communication Services user to Teams Call Queue. You are going to achieve it with the following steps: 1. Enable federation of Azure Communication Services resource with Teams Tenant.
communication-services Call Automation Ai https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/samples/call-automation-ai.md
+
+ Title: Call Automation AI sample
+
+description: Overview of Call Automation AI hero sample using Azure Communication Services to enable developers to learn how to incorporate AI into their workflows.
++++ Last updated : 08/11/2023++++
+zone_pivot_groups: acs-csharp-java
++
+# Get started with the Azure Communication Services Call Automation OpenAI sample
+
+The Azure Communication Services Call Automation OpenAI sample demonstrates how you can use Call Automation SDK and the recently announced public preview integration with Azure AI services to build intelligent virtual assistants.
++
+In this article, we cover what this sample does and what you need as prerequisites before running it locally on your machine.
+++
communication-services Add Voip Push Notifications Event Grid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/add-voip-push-notifications-event-grid.md
This tutorial explains how to deliver VOIP push notifications to native applicat
## Current Limitations

The current limitations of using the ACS Native Calling SDK are:
- * There's a 24-hour limit after the register push notification API is called when the device token information is saved. After 24 hours, the device endpoint information is deleted. Any incoming calls on those devices will not be delivered to the devices if those devices don't call the register push notification API again.
+ * There's a 24-hour limit after the register push notification API is called when the device token information is saved.
+ After 24 hours, the device endpoint information is deleted. Incoming calls won't be delivered to those devices unless they call the register push notification API again.
* Push notifications can't be delivered using Baidu or any other notification types supported by Azure Notification Hubs but not yet supported in the ACS SDK.

## Setup for listening to the events from Event Grid Notification
-To listen to the `Microsoft.Communication.IncomingCall` event from Event Grid notifications of the Azure Communication Calling resource in Azure.
1. Azure functions with APIs (a minimal sketch follows this list):
   1. Save device endpoint information.
   2. Delete device endpoint information.
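For illustration, here's a minimal sketch of a function that listens for the `Microsoft.Communication.IncomingCall` event and forwards a VOIP push (assuming the in-process Azure Functions model and Azure Notification Hubs; the hub name, connection string, and device tag are hypothetical placeholders):

```csharp
using System.Threading.Tasks;
using Azure.Messaging.EventGrid;
using Microsoft.Azure.NotificationHubs;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.EventGrid;

public static class IncomingCallHandler
{
    [FunctionName("HandleIncomingCall")]
    public static async Task Run([EventGridTrigger] EventGridEvent eventGridEvent)
    {
        // Only react to incoming call notifications from Azure Communication Services.
        if (eventGridEvent.EventType != "Microsoft.Communication.IncomingCall")
        {
            return;
        }

        var hub = NotificationHubClient.CreateClientFromConnectionString(
            "<Notification_Hub_Connection_String>", "<Hub_Name>");

        // Forward the IncomingCall payload as a VOIP push; the device registration
        // is addressed by a tag stored when the device endpoint was saved.
        string payload = eventGridEvent.Data.ToString();
        await hub.SendAppleNativeNotificationAsync(payload, "<Device_Tag>");
    }
}
```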
Here are the steps to deliver the push notification:
6. VOIP push is successfully delivered to the device and `CallAgent.handlePush` API should be called. ## Sample
-Code sample is provided [here](https://github.com/Azure-Samples/communication-services-ios-quickstarts/tree/main/add-calling-push-notifications-event-grid).
+The sample below works for all native platforms (iOS, Android, Windows).
+A code sample is provided [here](https://github.com/Azure-Samples/azure-communication-services-calling-event-grid/tree/main/add-calling-push-notifications-event-grid).
communication-services Events Playbook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/events-playbook.md
Title: Build a custom event management platform with Microsoft Teams, Graph and Azure Communication Services
+ Title: Build a custom event management platform with Microsoft Teams, Microsoft Graph and Azure Communication Services
description: Learn how to use Microsoft Teams, Graph and Azure Communication Services to build a custom event management platform.
-# Build a custom event management platform with Microsoft Teams, Graph and Azure Communication Services
+# Build a custom event management platform with Microsoft Teams, Microsoft Graph and Azure Communication Services
-The goal of this document is to reduce the time it takes for Event Management Platforms to apply the power of Microsoft Teams Webinars through integration with Graph APIs and ACS UI Library. The target audience is developers and decision makers. To achieve the goal, this document provides the following two functions: 1) an aid to help event management platforms quickly decide what level of integration would be right for them, and 2) a step-by-step end-to-end QuickStart to speed up implementation.
+The goal of this document is to reduce the time it takes for Event Management Platforms to apply the power of Microsoft Teams Webinars through integration with Microsoft Graph APIs and Azure Communication Services UI Library. The target audience is developers and decision makers. To achieve the goal, this document provides the following two functions: 1) an aid to help event management platforms quickly decide what level of integration would be right for them, and 2) a step-by-step end-to-end QuickStart to speed up implementation.
## What are virtual events and event management platforms?
-Microsoft empowers event platforms to integrate event capabilities using [Microsoft Teams](/microsoftteams/quick-start-meetings-live-events), [Graph](/graph/api/application-post-onlinemeetings?tabs=http&view=graph-rest-beta&preserve-view=true) and [Azure Communication Services](../overview.md). Virtual Events are a communication modality where event organizers schedule and configure a virtual environment for event presenters and participants to engage with content through voice, video, and chat. Event management platforms enable users to configure events and for attendees to participate in those events, within their platform, applying in-platform capabilities and gamification. Learn more about [Teams Meetings, Webinars and Live Events](/microsoftteams/quick-start-meetings-live-events) that are used throughout this article to enable virtual event scenarios.
+Microsoft empowers event platforms to integrate event capabilities using [Microsoft Teams](/microsoftteams/quick-start-meetings-live-events), [Microsoft Graph](/graph/api/application-post-onlinemeetings?tabs=http&view=graph-rest-beta&preserve-view=true) and [Azure Communication Services](../overview.md). Virtual Events are a communication modality where event organizers schedule and configure a virtual environment for event presenters and participants to engage with content through voice, video, and chat. Event management platforms enable users to configure events and for attendees to participate in those events, within their platform, applying in-platform capabilities and gamification. Learn more about [Teams Meetings, Webinars and Live Events](/microsoftteams/quick-start-meetings-live-events) that are used throughout this article to enable virtual event scenarios.
## What are the building blocks of an event management platform?
Throughout the rest of this tutorial, we will focus on how using Azure Communica
Microsoft Graph enables event management platforms to empower organizers to schedule and manage their events directly through the event management platform. For attendees, event management platforms can build custom registration flows right on their platform that registers the attendee for the event and generates unique credentials for them to join the Teams hosted event. >[!NOTE]
->For each required Graph API has different required scopes, ensure that your application has the correct scopes to access the data.
+>Each required Microsoft Graph API has different required scopes; ensure that your application has the correct scopes to access the data.
### Scheduling registration-enabled events with Microsoft Graph
-1. Authorize application to use Graph APIs on behalf of service account. This authorization is required in order to have the application use credentials to interact with your tenant to schedule events and register attendees.
+1. Authorize the application to use Microsoft Graph APIs on behalf of a service account. This authorization is required so that the application can use the service account's credentials to interact with your tenant to schedule events and register attendees.
 1. Create an account that will own the meetings and is branded appropriately. This is the account that will create the events and receive notifications for them. We recommend not using a personal production account, given the overhead it might incur in the form of reminders.
 2. As part of the application setup, the service account is used to sign in to the solution once. With this permission, the application can retrieve and store an access token on behalf of the service account that will own the meetings. Your application needs to store the tokens generated from the sign-in and place them in a secure location such as a key vault, and it needs to store both the access token and the refresh token (see the sketch after this list). Learn more about [auth tokens](../../active-directory/develop/access-tokens.md) and [refresh tokens](../../active-directory/develop/refresh-tokens.md).
- 3. The application will require "on behalf of" permissions with the [offline scope](../../active-directory/develop/v2-permissions-and-consent.md#offline_access) to act on behalf of the service account for the purpose of creating meetings. Individual Graph APIs require different scopes, learn more in the links detailed below as we introduce the required APIs.
+ 3. The application will require "on behalf of" permissions with the [offline scope](../../active-directory/develop/v2-permissions-and-consent.md#offline_access) to act on behalf of the service account for the purpose of creating meetings. Individual Microsoft Graph APIs require different scopes, learn more in the links detailed below as we introduce the required APIs.
 4. Refresh tokens can be revoked in the event of a breach or account termination.
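As an illustration of the token refresh flow described in steps 2-4, here's a minimal C# sketch that exchanges a stored refresh token for a fresh access token against the standard Azure AD OAuth 2.0 token endpoint (the tenant, client, secret, and stored-token values are placeholder assumptions):

```csharp
using System;
using System.Collections.Generic;
using System.Net.Http;

var http = new HttpClient();
// Exchange the stored refresh token for a new access token (and a new refresh token).
HttpResponseMessage response = await http.PostAsync(
    "https://login.microsoftonline.com/<Tenant_Id>/oauth2/v2.0/token",
    new FormUrlEncodedContent(new Dictionary<string, string>
    {
        ["client_id"] = "<Client_Id>",
        ["client_secret"] = "<Client_Secret>",
        ["grant_type"] = "refresh_token",
        ["refresh_token"] = "<Stored_Refresh_Token>",
        ["scope"] = "https://graph.microsoft.com/.default",
    }));
string tokenJson = await response.Content.ReadAsStringAsync();
// Persist the returned access and refresh tokens back to your key vault.
Console.WriteLine(tokenJson);
```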
communication-services Log File Retrieval Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/log-file-retrieval-tutorial.md
+
+ Title: Log file retrieval
+
+description: Learn how to retrieve Log Files from the Calling SDK for enhanced supportability.
+++++ Last updated : 06/30/2021++++
+zone_pivot_groups: acs-programming-languages-java-swift-csharp
++
+# Log File Access tutorial
+
+In this tutorial, you learn how to access the Log Files stored on the device.
+
+## Prerequisites
+
+- Access to a `CallClient` instance
++++
+## Next steps
+
+To add enhanced log collection capabilities to your app, consider the following points.
+
+1. Explore support features:
+ - "Report an Issue" prompts
+ - End-of-call surveys
+ - Shake-to-report
+ - Proactive autodetection
+2. Always obtain user consent before submitting data.
+3. Customize strategies based on your users.
+
+Refer to the [Conceptual Document](../concepts/voice-video-calling/retrieve-support-files.md) for more in-depth guidance.
+
+## You may also like
+
+- [Retrieve log files Conceptual Document](../concepts/voice-video-calling/retrieve-support-files.md)
+- [End of call Survey](./end-of-call-survey-tutorial.md)
+- [User Facing Diagnostics](../concepts/voice-video-calling/user-facing-diagnostics.md)
communication-services Virtual Visits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/virtual-visits.md
The rest of this tutorial focuses on Microsoft 365 and Azure hybrid solutions. T
## Building a virtual appointment sample In this section, weΓÇÖre going to use a Sample Builder tool to deploy a Microsoft 365 + Azure hybrid virtual appointments application to an Azure subscription. This application is a desktop and mobile friendly browser experience, with code that you can use to explore and for production.
-### Step 1 - Configure bookings
+### Step 1: Configure bookings
This sample takes advantage of the Microsoft 365 Bookings app to power the consumer scheduling experience and create meetings for providers. Thus, the first step is creating a Bookings calendar and getting the Booking page URL from https://outlook.office.com/bookings/calendar.
And then make sure "Add online meeting" is enabled.
![Screenshot of Booking services online meeting configuration experience.](./media/virtual-visits/bookings-services-online-meeting.png)
-### Step 2 ΓÇô Sample Builder
+### Step 2: Sample Builder
Use the Sample Builder to customize the consumer experience. You can reach the Sample Builder using this [link](https://aka.ms/acs-sample-builder) or by navigating to the page within the Azure Communication Services resource in the Azure portal. Step through the Sample Builder wizard: select the Industry template, configure the call experience (Chat or Screen Sharing availability), and change themes and text to match your application style and get valuable feedback through post-call survey options. You can preview your configuration live from the page in both Desktop and Mobile browser form-factors.

[ ![Screenshot of Sample builder start page.](./media/virtual-visits/sample-builder-themes.png)](./media/virtual-visits/sample-builder-themes.png#lightbox)
-### Step 3 - Deploy
+### Step 3: Deploy
At the end of the Sample Builder wizard, you can **Deploy to Azure** or download the code as a zip. The sample builder code is publicly available on [GitHub](https://github.com/Azure-Samples/communication-services-virtual-visits-js). [ ![Screenshot of Sample builder deployment page.](./media/virtual-visits/sample-builder-landing.png)](./media/virtual-visits/sample-builder-landing.png#lightbox)
After walking through the ARM template, you can **Go to resource group**.
![Screenshot of a completed Azure Resource Manager Template.](./media/virtual-visits/azure-complete-deployment.png)
-### Step 4 - Test
+### Step 4: Test
The Sample Builder creates three resources in the selected Azure subscriptions. The **App Service** is the consumer front end, powered by Azure Communication Services. ![Screenshot of produced azure resources in azure portal.](./media/virtual-visits/azure-resources.png)
Opening the App Service's URL and navigating to `https://<YOUR URL>/VISIT` all
![Screenshot of final view of azure app service.](./media/virtual-visits/azure-resource-final.png)
-### Step 5 - Set deployed app URL in Bookings
+### Step 5: Set deployed app URL in Bookings
Enter the application url followed by "/visit" in the "Deployed App URL" field in https://outlook.office.com/bookings/businessinformation.
communication-services Before And After Appointment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/virtual-visits/extend-teams/before-and-after-appointment.md
Title: Extensibility of before and after appointment activities for Microsoft Te
description: Extend Microsoft Teams Virtual appointments before and after appointment activities with Azure Communication Services, Microsoft Graph API, and Power Platform -- Last updated 05/22/2023
Microsoft 365 introduces triggers (examples of triggers are: button is selected,
## Prerequisites The reader of this article is expected to be familiar with: - [Microsoft Teams Virtual appointments](https://www.microsoft.com/microsoft-teams/premium/virtual-appointments) product and provided [user experience](https://guidedtour.microsoft.com/guidedtour/industry-longform/virtual-appointments/1/1) -- [Microsoft Graph Booking API](https://learn.microsoft.com/graph/api/resources/booking-api-overview) to manage [Microsoft Booking](https://www.microsoft.com/microsoft-365/business/scheduling-and-booking-app) via [Microsoft Graph API](https://learn.microsoft.com/graph/overview)-- [Microsoft Graph Online meeting API](https://learn.microsoft.com/graph/api/resources/onlinemeeting) to manage [Microsoft Teams meetings](https://www.microsoft.com/microsoft-teams/online-meetings) via [Microsoft Graph API](https://learn.microsoft.com/graph/overview)
+- [Microsoft Graph Booking API](/graph/api/resources/booking-api-overview) to manage [Microsoft Booking](https://www.microsoft.com/microsoft-365/business/scheduling-and-booking-app) via [Microsoft Graph API](/graph/overview)
+- [Microsoft Graph Online meeting API](/graph/api/resources/onlinemeeting) to manage [Microsoft Teams meetings](https://www.microsoft.com/microsoft-teams/online-meetings) via [Microsoft Graph API](/graph/overview)
## Send SMS, email, and chat message when booking is canceled When a booking is canceled, there are three options to send confirmation of cancellation: SMS, email, and/or chat message. The following example shows how to configure each of the three options in Power Automate.
Second, you must configure every individual communication channel. We start with
The next parallel path is to send the email. After connecting to Azure Communication Services, you need to provide the sender's email. The receiver of the email can be taken from the booking property "Customer Email". Then you can provide the email subject and rich text body.
-The last parallel path sends a chat message to your chat solution powered by Azure Communication Services. After providing a connection to Azure Communication Services, you define the Azure Communication Services user ID that represents your organization (for example, a bot that replaces the value <APPLICATION USER ID> in the previous image). Then you select the scope "Chat" to receive an access token for this identity. Next, you create a new chat thread to send a message to this user. Lastly, you send a chat message in created chat thread about the cancellation of the Virtual appointment.
+The last parallel path sends a chat message to your chat solution powered by Azure Communication Services. After providing a connection to Azure Communication Services, you define the Azure Communication Services user ID that represents your organization (for example, a bot that replaces the value `<APPLICATION USER ID>` in the previous image). Then you select the scope "Chat" to receive an access token for this identity. Next, you create a new chat thread to send a message to this user. Lastly, you send a chat message in the created chat thread about the cancellation of the Virtual appointment.
## Next steps - Learn [what extensibility options you have for Virtual appointments](./overview.md)
communication-services Call https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/virtual-visits/extend-teams/call.md
Azure Communication Services provides three customization options:
## Prerequisites The reader of this article is expected to have an understanding of the following topics:-- [Azure Communication Services](https://learn.microsoft.com/azure/communication-services/) [Chat](https://learn.microsoft.com/azure/communication-services/concepts/chat/concepts), [Calling](https://learn.microsoft.com/azure/communication-services/concepts/voice-video-calling/calling-sdk-features) and [user interface library](https://learn.microsoft.com/azure/communication-services/concepts/ui-library/ui-library-overview)
+- [Azure Communication Services](/azure/communication-services/) [Chat](/azure/communication-services/concepts/chat/concepts), [Calling](/azure/communication-services/concepts/voice-video-calling/calling-sdk-features) and [user interface library](/azure/communication-services/concepts/ui-library/ui-library-overview)
## Customizable ready-to-use user interface composites You can integrate ready-to-use meeting composites provided by the Azure Communication Service user interface library. This composite provides out-of-the-box React components that can be integrated into your Web application. You can find more details [here](https://azure.github.io/communication-ui-library/?path=/docs/use-composite-in-non-react-environment--page) about using this composite with different web frameworks.
-1. First, provide details about the application's user. To do that, create [Azure Communication Call Adapter Arguments](https://learn.microsoft.com/javascript/api/@azure/communication-react/azurecommunicationcalladapterargs) to hold information about user ID, access token, display name, and Teams meeting URL.
+1. First, provide details about the application's user. To do that, create [Azure Communication Call Adapter Arguments](/javascript/api/@azure/communication-react/azurecommunicationcalladapterargs) to hold information about user ID, access token, display name, and Teams meeting URL.
```js const callAdapterArgs = {
const callAdapterArgs = {
endpoint: '<AZURE_COMMUNICATION_SERVICE_ENDPOINT_URL>'; } ```
-2. Create a custom React hook with [useAzureCommunicationCallAdapter](https://learn.microsoft.com/javascript/api/@azure/communication-react/#@azure-communication-react-useazurecommunicationcalladapter) to create a Call Adapter.
+2. Create a custom React hook with [useAzureCommunicationCallAdapter](/javascript/api/@azure/communication-react/#@azure-communication-react-useazurecommunicationcalladapter) to create a Call Adapter.
```js const callAdapter = useAzureCommunicationCallAdapter(callAdapterArgs); ```
-3. Return React component [CallComposite](https://learn.microsoft.com/javascript/api/@azure/communication-react/#@azure-communication-react-callwithchatcomposite) that provides meeting experience.
+3. Return React component [CallComposite](/javascript/api/@azure/communication-react/#@azure-communication-react-callwithchatcomposite) that provides meeting experience.
```js return (
return (
); ```
-You can further [customize the user interface with your own theme for customization and branding](https://azure.github.io/communication-ui-library/?path=/docs/theming--page) or [optimize the layout for desktop or mobile](https://learn.microsoft.com/javascript/api/@azure/communication-react/callwithchatcompositeprops#@azure-communication-react-callwithchatcompositeprops-formfactor). If you would like to customize the layout even further, you may utilize pre-existing user interface components as described in the subsequent section.
+You can further [customize the user interface with your own theme for customization and branding](https://azure.github.io/communication-ui-library/?path=/docs/theming--page) or [optimize the layout for desktop or mobile](/javascript/api/@azure/communication-react/callwithchatcompositeprops#@azure-communication-react-callwithchatcompositeprops-formfactor). If you would like to customize the layout even further, you may utilize pre-existing user interface components as described in the subsequent section.
## Build your own layout with user interface components
For more customization you can add more predefined buttons and, you can change t
## Build your own user interface with software development kits
-Azure Communication Services provides chat and calling SDKs to build Virtual appointment experiences. The experience consists of three main parts, [authentication](https://learn.microsoft.com/azure/communication-services/quickstarts/identity/access-tokens?tabs=windows&pivots=programming-language-csharp), [calling](https://learn.microsoft.com/azure/communication-services/quickstarts/voice-video-calling/get-started-teams-interop?pivots=platform-web) and [chat](https://learn.microsoft.com/azure/communication-services/quickstarts/chat/meeting-interop?pivots=platform-web). We have dedicated QuickStarts and GitHub samples for each but the following code samples show how to enable the experience.
+Azure Communication Services provides chat and calling SDKs to build Virtual appointment experiences. The experience consists of three main parts, [authentication](/azure/communication-services/quickstarts/identity/access-tokens?tabs=windows&pivots=programming-language-csharp), [calling](/azure/communication-services/quickstarts/voice-video-calling/get-started-teams-interop?pivots=platform-web) and [chat](/azure/communication-services/quickstarts/chat/meeting-interop?pivots=platform-web). We have dedicated QuickStarts and GitHub samples for each but the following code samples show how to enable the experience.
The authentication of the user requires creating or selecting an existing Azure Communication Services user and issuing a token. You can use a connection string to create a CommunicationIdentityClient. We encourage you to implement this logic in the backend, as sharing connection strings with clients isn't secure.

```csharp
var client = new CommunicationIdentityClient(connectionString);
var user = (await client.CreateUserAsync()).Value;
var tokenResponse = await client.GetTokenAsync(user, scopes: new[] { CommunicationTokenScope.VoIP });
var token = tokenResponse.Value.Token;
```

Now you have a valid Azure Communication Services user and an access token assigned to this user. You can now integrate the calling experience. This part is implemented on the client side, and for this example, let's assume that the properties are propagated to the client from the backend. The following tutorial shows how to do it.
-First create a [CallClient](https://learn.microsoft.com/javascript/api/azure-communication-services/@azure/communication-calling/callclient) that initiates the SDK and give you access to [CallAgent](https://learn.microsoft.com/javascript/api/azure-communication-services/@azure/communication-calling/callagent) and device manager.
+First, create a [CallClient](/javascript/api/azure-communication-services/@azure/communication-calling/callclient) that initializes the SDK and gives you access to [CallAgent](/javascript/api/azure-communication-services/@azure/communication-calling/callagent) and the device manager.
```js
const callClient = new CallClient();
// Create the CallAgent with the user's token credential issued in the previous step.
const callAgent = await callClient.createCallAgent(tokenCredential);
const meetingLocator = new TeamsMeetingLinkLocator("<TEAMS_MEETING_URL>");
callAgent.join(meetingLocator, new JoinCallOptions());
```
-Those steps allow you to join the Teams meeting. You can then extend those steps with [management of speakers, microphone, camera and individual video streams](https://learn.microsoft.com/azure/communication-services/how-tos/calling-sdk/manage-video?pivots=platform-web). Then, optionally, you can also integrate chat in the Virtual appointment experience.
+Those steps allow you to join the Teams meeting. You can then extend those steps with [management of speakers, microphone, camera and individual video streams](/azure/communication-services/how-tos/calling-sdk/manage-video?pivots=platform-web). Then, optionally, you can also integrate chat in the Virtual appointment experience.
Create a [ChatClient](https://azuresdkdocs.blob.core.windows.net/$web/javascript/azure-communication-chat/1.3.2-beta.1/classes/ChatClient.html) that initializes the SDK and gives you access to notifications and [ChatThreadClient](https://azuresdkdocs.blob.core.windows.net/$web/javascript/azure-communication-chat/1.3.2-beta.1/classes/ChatThreadClient.html).
communication-services Precall https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/virtual-visits/extend-teams/precall.md
A successful Virtual appointment experience requires the device to be prepared f
## Prerequisites The reader of this article is expected to have a solid understanding of the following topics: - [Microsoft Teams Virtual appointments](https://www.microsoft.com/microsoft-teams/premium/virtual-appointments) product and provided [user experience](https://guidedtour.microsoft.com/guidedtour/industry-longform/virtual-appointments/1/1) -- [Microsoft Graph Booking API](https://learn.microsoft.com/graph/api/resources/booking-api-overview?view=graph-rest-1.0) to manage [Microsoft Booking](https://www.microsoft.com/microsoft-365/business/scheduling-and-booking-app) via [Microsoft Graph API](https://learn.microsoft.com/graph/overview?view=graph-rest-1.0)-- [Microsoft Graph Online meeting API](https://learn.microsoft.com/graph/api/resources/onlinemeeting?view=graph-rest-1.0) to manage [Microsoft Teams meetings](https://www.microsoft.com/microsoft-teams/online-meetings) via [Microsoft Graph API](https://learn.microsoft.com/graph/overview?view=graph-rest-1.0)-- [Azure Communication Services](https://learn.microsoft.com/azure/communication-services/) [Chat](https://learn.microsoft.com/azure/communication-services/concepts/chat/concepts), [Calling](https://learn.microsoft.com/azure/communication-services/concepts/voice-video-calling/calling-sdk-features) and [user interface library](https://learn.microsoft.com/azure/communication-services/concepts/ui-library/ui-library-overview)
+- [Microsoft Graph Booking API](/graph/api/resources/booking-api-overview) to manage [Microsoft Booking](https://www.microsoft.com/microsoft-365/business/scheduling-and-booking-app) via [Microsoft Graph API](/graph/overview)
+- [Microsoft Graph Online meeting API](/graph/api/resources/onlinemeeting) to manage [Microsoft Teams meetings](https://www.microsoft.com/microsoft-teams/online-meetings) via [Microsoft Graph API](/graph/overview)
+- [Azure Communication Services](/azure/communication-services/) [Chat](/azure/communication-services/concepts/chat/concepts), [Calling](/azure/communication-services/concepts/voice-video-calling/calling-sdk-features) and [user interface library](/azure/communication-services/concepts/ui-library/ui-library-overview)
## Background validation
-Azure Communication Services provides [precall diagnostic APIs](https://learn.microsoft.com/azure/communication-services/concepts/voice-video-calling/pre-call-diagnostics) for validating device readiness, such as browser compatibility, network, and call quality. The following code snippet runs a 30-second test on the device.
+Azure Communication Services provides [precall diagnostic APIs](/azure/communication-services/concepts/voice-video-calling/pre-call-diagnostics) for validating device readiness, such as browser compatibility, network, and call quality. The following code snippet runs a 30-second test on the device.
+
+Create a CallClient and get the [PreCallDiagnostics](/javascript/api/azure-communication-services/@azure/communication-calling/precalldiagnosticsfeature) feature:
-Create CallClient and get [PreCallDiagnostics](https://learn.microsoft.com/javascript/api/azure-communication-services/@azure/communication-calling/precalldiagnosticsfeature?view=azure-communication-services-js) feature:
```js
const callClient = new CallClient();
const preCallDiagnostics = callClient.feature(Features.PreCallDiagnostics);
// Run the test with the user's token credential; individual results resolve as promises.
const preCallDiagnosticsResult = await preCallDiagnostics.startTest(tokenCredential);
const browserSupport = await preCallDiagnosticsResult.browserSupport;
if (browserSupport) {
  console.log(browserSupport);
}
```
-Additionally, you can validate [MediaStatsCallFeature](https://learn.microsoft.com/javascript/api/azure-communication-services/@azure/communication-calling/mediastatscallfeature?view=azure-communication-services-js), [DeviceCompatibility](https://learn.microsoft.com/javascript/api/azure-communication-services/@azure/communication-calling/devicecompatibility?view=azure-communication-services-js), [DeviceAccess](https://learn.microsoft.com/javascript/api/azure-communication-services/@azure/communication-calling/deviceaccess?view=azure-communication-services-js), [DeviceEnumeration](https://learn.microsoft.com/javascript/api/azure-communication-services/@azure/communication-calling/deviceenumeration?view=azure-communication-services-js), [InCallDiagnostics](https://learn.microsoft.com/javascript/api/azure-communication-services/@azure/communication-calling/incalldiagnostics?view=azure-communication-services-js) . You can also look at the [tutorial that implements pre-call diagnostics with a user interface library](https://learn.microsoft.com/azure/communication-services/tutorials/call-readiness/call-readiness-overview).
+Additionally, you can validate [MediaStatsCallFeature](/javascript/api/azure-communication-services/@azure/communication-calling/mediastatscallfeature), [DeviceCompatibility](/javascript/api/azure-communication-services/@azure/communication-calling/devicecompatibility), [DeviceAccess](/javascript/api/azure-communication-services/@azure/communication-calling/deviceaccess), [DeviceEnumeration](/javascript/api/azure-communication-services/@azure/communication-calling/deviceenumeration), and [InCallDiagnostics](/javascript/api/azure-communication-services/@azure/communication-calling/incalldiagnostics). You can also look at the [tutorial that implements pre-call diagnostics with a user interface library](/azure/communication-services/tutorials/call-readiness/call-readiness-overview).
Azure Communication Services has a ready-to-use tool called [Network Diagnostics](https://azurecommdiagnostics.net/) for developers to ensure that their device and network conditions are optimal for connecting to the service.
communication-services Sample Builder https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/virtual-visits/sample-builder.md
This tutorial focuses on Microsoft 365 and Azure hybrid solutions. These hybrid
## Building a virtual appointment sample In this section, we're going to use a Sample Builder tool to deploy a Microsoft 365 + Azure hybrid virtual appointments application to an Azure subscription. This application is a desktop and mobile-friendly browser experience, with code that you can use to explore and make the final product.
-### Step 1 - Configure bookings
+### Step 1: Configure bookings
This sample uses the Microsoft 365 Bookings app to power the consumer scheduling experience and create meetings for providers. Thus the first step is creating a Bookings calendar and getting the Booking page URL from https://outlook.office.com/bookings/calendar.
And then, make sure "Add online meeting" is enabled.
![Screenshot of Booking services online meeting configuration experience.](../media/virtual-visits/bookings-services-online-meeting.png)
-### Step 2 ΓÇô Sample Builder
+### Step 2: Sample Builder
Use the Sample Builder to customize the consumer experience. You can reach the Sample Builder using this [link](https://aka.ms/acs-sample-builder) or by navigating to the page within the Azure Communication Services resource in the Azure portal. Step through the Sample Builder wizard:
1. Select the Industry template.
1. Configure the call experience (Chat or Screen Sharing availability).
You can preview your configuration live from the page in both Desktop and Mobile
[ ![Screenshot of Sample builder start page.](../media/virtual-visits/sample-builder-themes.png)](../media/virtual-visits/sample-builder-themes.png#lightbox)
-### Step 3 - Deploy
+### Step 3: Deploy
At the end of the Sample Builder wizard, you can **Deploy to Azure** or download the code as a zip. The sample builder code is publicly available on [GitHub](https://github.com/Azure-Samples/communication-services-virtual-visits-js). [ ![Screenshot of Sample builder deployment page.](../media/virtual-visits/sample-builder-landing.png)](../media/virtual-visits/sample-builder-landing.png#lightbox)
After walking through the ARM template, you can **Go to resource group**.
![Screenshot of a completed Azure Resource Manager Template.](../media/virtual-visits/azure-complete-deployment.png)
-### Step 4 - Test
+### Step 4: Test
The Sample Builder creates three resources in the selected Azure subscriptions. The **App Service** is the consumer front end, powered by Azure Communication Services. ![Screenshot of produced azure resources in azure portal.](../media/virtual-visits/azure-resources.png)
Opening the App Service's URL and navigating to `https://<YOUR URL>/VISIT` allow
![Screenshot of final view of azure app service.](../media/virtual-visits/azure-resource-final.png)
-### Step 5 - Set deployed app URL in Bookings
+### Step 5: Set deployed app URL in Bookings
Enter the application URL followed by "/visit" in the "Deployed App URL" field at https://outlook.office.com/bookings/businessinformation.
communication-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/whats-new.md
Previously updated : 06/12/2023 Last updated : 09/01/2023
-# What's new in Azure Communication Services
+# What's new in Azure Communication Services, June through August, 2023
We've created this page to keep you updated on new features, blog posts, and other useful information related to Azure Communication Services. Be sure to check back monthly for all the newest and latest information!
We've created this page to keep you updated on new features, blog posts, and oth
## New features Get detailed information on the latest Azure Communication Services feature launches.
-### Call Automation and Call Recording
+### Trial phone numbers
-Use Azure Communication Services call automation to transform your customer experiences. Azure Communication Services Call Automation provides developers the ability to build server-based, intelligent call workflows, and call recording for voice and Public Switched Telephone Network(PSTN) channels.
+Explore the benefits of Trial Phone Numbers for Azure Communication Services. Enjoy a 30-day free trial period to assess features and make calls for up to 60 minutes, perfect for thorough testing and experimentation. Additionally, Trial Phone Numbers include recipient phone number verification, ensuring that calls are made only to verified numbers, safeguarding against any potential misuse.
-[Read the full blog post](https://techcommunity.microsoft.com/t5/azure-communication-services/build-2023-transforming-customer-experiences-with-automated-ai/ba-p/3827857)
+[Read more on the Azure Communication Services blog](https://techcommunity.microsoft.com/t5/azure-communication-services/azure-communication-services-august-2023-feature-updates/ba-p/3890595)
+
+[Try the trial phone numbers quickstart](https://aka.ms/trial-quickstart)
-[Try out a sample](https://aka.ms/acs-ca-demo)
-[Read the Call Automation conceptual docs](./concepts/call-automation/call-automation.md)
<br> <br>
+
+### Job router
-### Virtual Rooms
-
- Azure Communication Services Virtual Rooms is a new set of APIs that allows developers to control who can join a call, when they meet, and how they collaborate during group meetings. Azure Communication Services Rooms is now Generally Available.
+ Job Router is a robust tool that makes it easy for developers to add advanced routing rules to their business application. As part of Azure Communication Services, Job Router simplifies the routing of customer engagement interactions to the best agent or automated services, ensuring that every interaction is directed to the most appropriate resource.
-[Read more about the Rooms APIs](https://techcommunity.microsoft.com/t5/azure-communication-services/azure-communication-services-virtual-rooms-is-now-generally/ba-p/3845412)
+[Try Job Router](./concepts/router/concepts.md)
+[Try the Job Router Quickstart](./quickstarts/router/get-started-router.md?pivots=programming-language-csharp)
-[Try the virtual rooms quickstart](./quickstarts/rooms/join-rooms-call.md)
-
-[Read the virtual rooms conceptual documentation](./concepts/rooms/room-concept.md)
<br> <br>
-### Calling SDK for Windows
-
-Add first-class calling capabilities to your Windows applications. Use this SDK to natively create rich audio and video experiences in Windows for your customers, tailored to your specific needs and preferences. With the Calling SDK for Windows, you can implement real-time communication features such as voice and video calls, Microsoft Teams meeting integration, screen sharing, and raw media access. Azure Communication Services Calling SDK for Windows is now generally available.
-
-[Read more about the calling SDK for Windows](https://techcommunity.microsoft.com/t5/azure-communication-services/add-first-class-calling-capabilities-to-your-windows/ba-p/3836731)
-
-[Read the calling SDK overview](./concepts/voice-video-calling/calling-sdk-features.md)
-
-[Check out the calling SDK quickstart](./quickstarts/voice-video-calling/get-started-with-video-calling.md)
+### New geographies available for email
+With Email Geo Expansion, you can choose the location where your email communication service is created. This update means that all email domain configuration information and data stored by Azure Communication Services Email at rest is retained in that geography. Secure your data and improve your email communication with Email Geo Expansion.
<br> <br>
Add first-class calling capabilities to your Windows applications. Use this SDK
Go deeper on common scenarios and learn more about how customers are using advanced Azure Communication Services features.
+### Capgemini and Microsoft are transforming customer experiences with intelligent communications
-### Generate and send SMS and email using Azure OpenAI services and Azure Communication Services
+Customer experience strategy leader Capgemini partners with Azure Communication Services to provide intelligent communication capabilities for enterprises.
-Learn how to us openAI and Azure Communication Services to send automated alerts.
+[Read the full blog post](https://techcommunity.microsoft.com/t5/azure-communication-services/capgemini-and-microsoft-are-transforming-customer-experiences/ba-p/3907619)
-[Read the blog post](https://techcommunity.microsoft.com/t5/azure-communication-services/generate-and-send-sms-and-email-using-azure-openai-services-and/ba-p/3836098)
-
-[Read the step-by-step instructions](https://github.com/pnp/powerplatform-samples/blob/main/samples/contoso-school/lab/manual.md)
-
-[Check out the pre-built solution](https://github.com/pnp/powerplatform-samples/tree/main/samples/contoso-school)
-
-<br>
-<br>
<br> <br>
-### Azure Communications Services at Microsoft Build
-
-A recap of all of the Azure Communication Services sessions and discussions at Microsoft Build.
-[Read the full blog post](https://techcommunity.microsoft.com/t5/azure-communication-services/build-communication-apps-for-microsoft-teams-users-with-azure/ba-p/3775688)
-[View the UI Library documentation](https://azure.github.io/communication-ui-library/)
--
-<br>
-<br>
## From the community See examples and get inspired by what's being done in the community of Azure Communication Services users.
Listen to Azure Communication Services PMs Ashwinder Bhatti and Anuj Bhatia talk
<br> <br>
-### Create custom virtual meetings apps with Azure Communication Services and Microsoft Teams
+### Microsoft 365 & Power Platform Development Community call
-Join Microsoft PMs Tomas Chladek and Ben Olson as they discuss how to create virtual meetings applications that use Azure Communication Services and interop seamlessly with Microsoft Teams
+Microsoft 365 & Power Platform Development Community call on August 31, 2023: a recap of news and updates from Microsoft and community projects, followed by community demos on the art of the possible.
-[Watch the video](https://youtu.be/IBCp_-dk_m0)
+[Watch the video](https://www.youtube.com/watch?v=gAqUr9wa2_0)
-[Learn how to set up the Microsoft Teams virtual appointments app](https://learn.microsoft.com/microsoft-365/frontline/virtual-appointments-app)
+[Learn more about Azure OpenAI function calling](../ai-services/openai/how-to/function-calling.md)
[Read more about Microsoft Teams Premium](https://www.microsoft.com/microsoft-teams/premium)
Learn how to convert a lengthy URL into a format that fits the format of SMS, an
<br> <br>
-### Building on the Microsoft Cloud: Audio/video calling from a custom app
-Join Microsoft Senior Cloud Advocate Ayca Bas and Principal Content Engineer Dan Wahlin as they share how Microsoft Cloud services can empower your apps with audio/video communication.
+### View of new features from Q2 2023
-[Watch the video](https://build.microsoft.com/sessions/78b513e3-6e5b-4c4a-a3da-d663219ed674?source=/speakers/2432ad6b-4c45-44ae-b1d6-2c0334e7eb33)
-
-[Read the accompanying tutorial](https://aka.ms/mscloud-acs-teams-tutorial)
-
-[Read the quickstart on how to send an SMS using Azure Communication Services](./quickstarts/sms/send.md)
-<br>
-<br>
-
-<br>
+This summer, we launched a host of new features, including:
+* Job Router
+* Trial Phone Numbers
+* Alphanumeric Sender ID
+* Email Geo Expansion
+* Call automation & recording
+* Direct routing
+* Virtual rooms
+* PSTN Updates
+* and others...
-### View of May's new features
+[View the complete list](https://techcommunity.microsoft.com/t5/azure-communication-services/azure-communication-services-august-2023-feature-updates/ba-p/3890595) of all new features added to Azure Communication Services in August.
-In May, we launched a host of new features, including:
-* Simulcast support on Edge/Chrome desktop
-* Inline image support and other Teams interoperability improvements
-* Skip setup screen for UI Library native
-* Raised hand
-* Power Automate inbound SMS connector
-* and others...
+[View the complete list](https://techcommunity.microsoft.com/t5/azure-communication-services/azure-communication-services-july-2023-feature-updates/ba-p/3869978) of all new features added to Azure Communication Services in July.
-[View the complete list](https://techcommunity.microsoft.com/t5/azure-communication-services/azure-communication-services-may-2023-feature-updates/ba-p/3813869) of all new features added to Azure Communication Services in April.
+[View the complete list](https://techcommunity.microsoft.com/t5/azure-communication-services/azure-communication-services-june-2023-feature-updates/ba-p/3841874) of all new features added to Azure Communication Services in June.
<br> <br>
communications-gateway Connect Operator Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/connect-operator-connect.md
+
+ Title: Connect Azure Communications Gateway to Operator Connect or Teams Phone Mobile
+description: After deploying Azure Communications Gateway, you must configure it to connect to the Operator Connect and Teams Phone Mobile environments.
+ Last updated : 07/07/2023
+ - template-how-to-pattern
+ - has-azure-ad-ps-ref
++
+# Connect to Operator Connect or Teams Phone Mobile
+
+After you have deployed Azure Communications Gateway, you need to connect it to the Microsoft Phone System and to your core network. You also need to onboard to the Operator Connect or Teams Phone Mobile environments.
+
+This article describes how to set up Azure Communications Gateway for Operator Connect and Teams Phone Mobile. When you have finished the steps in this article, you will be ready to [Prepare for live traffic](prepare-for-live-traffic-operator-connect.md) with Operator Connect, Teams Phone Mobile and Azure Communications Gateway.
+
+## Prerequisites
+
+You must have carried out all the steps in [Deploy Azure Communications Gateway](deploy.md).
+
+You must have access to a user account with the Azure Active Directory Global Admin role.
+
+## 1. Add the Project Synergy application to your Azure tenancy
+
+> [!NOTE]
+> This step and the next step ([2. Assign an Admin user to the Project Synergy application](#2-assign-an-admin-user-to-the-project-synergy-application)) set you up as an Operator in the Teams Phone Mobile (TPM) and Operator Connect (OC) environments. If you've already gone through onboarding, go to [3. Find the Object ID and Application ID for your Azure Communication Gateway resource](#3-find-the-object-id-and-application-id-for-your-azure-communication-gateway-resource).
+
+The Operator Connect and Teams Phone Mobile programs require your Azure Active Directory tenant to contain a Microsoft application called Project Synergy. Operator Connect and Teams Phone Mobile inherit permissions and identities from your Azure Active Directory tenant through the Project Synergy application. The Project Synergy application also allows you to configure Operator Connect or Teams Phone Mobile and to assign users and groups to specific roles.
+
+To add the Project Synergy application:
+
+1. Check whether the Azure Active Directory (`AzureAD`) module is installed in PowerShell. Install it if necessary.
+ 1. Open PowerShell.
+ 1. Run the following command and check whether `AzureAD` appears in the output.
+ ```azurepowershell
+ Get-Module -ListAvailable
+ ```
+ 1. If `AzureAD` doesn't appear in the output, install the module:
+ 1. Close your current PowerShell window.
+ 1. Open PowerShell as an admin.
+ 1. Run the following command.
+ ```azurepowershell
+ Install-Module AzureAD
+ ```
+ 1. Close your PowerShell admin window.
+1. Sign in to the [Azure portal](https://ms.portal.azure.com/) as an Azure Active Directory Global Admin.
+1. Select **Azure Active Directory**.
+1. Select **Properties**.
+1. Scroll down to the **Tenant ID** field and make a note of your tenant ID.
+1. Open PowerShell.
+1. Run the following cmdlet, replacing *`<AADTenantID>`* with the tenant ID you noted down in step 5.
+ ```azurepowershell
+ Connect-AzureAD -TenantId "<AADTenantID>"
+ New-AzureADServicePrincipal -AppId eb63d611-525e-4a31-abd7-0cb33f679599 -DisplayName "Operator Connect"
+ ```
+
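If you want to confirm the result, the following optional sketch checks that the Project Synergy service principal now exists in your tenant. It assumes you still have an active `Connect-AzureAD` session from the previous step.

```azurepowershell
# Optional check (sketch): confirm the Project Synergy (Operator Connect)
# service principal now exists in the tenant.
# Assumes an active Connect-AzureAD session.
Get-AzureADServicePrincipal -Filter "AppId eq 'eb63d611-525e-4a31-abd7-0cb33f679599'" |
    Select-Object DisplayName, AppId, ObjectId
```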
+## 2. Assign an Admin user to the Project Synergy application
+
+The user who sets up Azure Communications Gateway needs to have the Admin user role in the Project Synergy application. Assign them this role in the Azure portal.
+
+1. In the Azure portal, navigate to **Enterprise applications** using the left-hand side menu. Alternatively, you can search for it in the search bar; it's under the **Services** subheading.
+1. Set the **Application type** filter to **All applications** using the drop-down menu.
+1. Select **Apply**.
+1. Search for **Project Synergy** using the search bar. The application should appear.
+1. Select your **Project Synergy** application.
+1. Select **Users and groups** from the left-hand side menu.
+1. Select **Add user/group**.
+1. Specify the user you want to use for setting up Azure Communications Gateway and give them the **Admin** role.
+
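If you prefer scripting to the portal, the following hedged sketch makes the same assignment with the AzureAD module. The `Project Synergy` and `Admin` display names and the user principal name `setup-user@contoso.com` are assumptions; adjust them to match your tenant.

```azurepowershell
# Sketch: assign the Admin app role on the Project Synergy application to a user.
# The display names and the UPN below are assumed values, not confirmed by this article.
$sp   = Get-AzureADServicePrincipal -Filter "DisplayName eq 'Project Synergy'"
$role = $sp.AppRoles | Where-Object { $_.DisplayName -eq "Admin" }
$user = Get-AzureADUser -ObjectId "setup-user@contoso.com"
New-AzureADUserAppRoleAssignment -ObjectId $user.ObjectId -PrincipalId $user.ObjectId `
    -ResourceId $sp.ObjectId -Id $role.Id
```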
+## 3. Find the Object ID and Application ID for your Azure Communication Gateway resource
+
+Each Azure Communications Gateway resource automatically receives a [system-assigned managed identity](../active-directory/managed-identities-azure-resources/overview.md), which Azure Communications Gateway uses to connect to the Operator Connect environment. You need to find the Object ID and Application ID of the managed identity, so that you can connect Azure Communications Gateway to the Operator Connect or Teams Phone Mobile environment in [4. Set up application roles for Azure Communications Gateway](#4-set-up-application-roles-for-azure-communications-gateway) and [7. Add the Application ID for Azure Communications Gateway to Operator Connect](#7-add-the-application-id-for-azure-communications-gateway-to-operator-connect).
+
+1. Sign in to the [Azure portal](https://azure.microsoft.com/).
+1. In the search bar at the top of the page, search for your Communications Gateway resource.
+1. Select your Communications Gateway resource.
+1. Select **Identity**.
+1. In **System assigned**, copy the **Object (principal) ID**.
+1. Search for the value of **Object (principal) ID** with the search bar. You should see an enterprise application with that value under the **Azure Active Directory** subheading. You might need to select **Continue searching in Azure Active Directory** to find it.
+1. Make a note of the **Object (principal) ID**.
+1. Select the enterprise application.
+1. Check that the **Object ID** matches the **Object (principal) ID** value that you copied.
+1. Make a note of the **Application ID**.
+
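As an alternative to clicking through the portal, this sketch retrieves the same IDs with the AzureAD module. It assumes the enterprise application's display name matches the resource name; `my-comms-gateway` is a hypothetical Azure Communications Gateway resource name.

```azurepowershell
# Sketch: look up the managed identity's Object ID and Application ID.
# "my-comms-gateway" is a hypothetical resource name; replace it with yours.
$mi = Get-AzureADServicePrincipal -SearchString "my-comms-gateway"
$mi | Select-Object DisplayName, ObjectId, AppId
```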
+## 4. Set up application roles for Azure Communications Gateway
+
+Azure Communications Gateway contains services that need to access the Operator Connect API on your behalf. To enable this access, you must grant specific application roles to the system-assigned managed identity for Azure Communications Gateway under the Project Synergy Enterprise Application. You created the Project Synergy Enterprise Application in [1. Add the Project Synergy application to your Azure tenancy](#1-add-the-project-synergy-application-to-your-azure-tenancy).
+
+> [!IMPORTANT]
+> Granting permissions has two parts: configuring the system-assigned managed identity for Azure Communications Gateway with the appropriate roles (this step) and adding the application ID of the managed identity to the Operator Connect or Teams Phone Mobile environment. You'll add the application ID to the Operator Connect or Teams Phone Mobile environment later, in [7. Add the Application ID for Azure Communications Gateway to Operator Connect](#7-add-the-application-id-for-azure-communications-gateway-to-operator-connect).
+
+Do the following steps in the tenant that contains your Project Synergy application.
+
+1. Check whether the Azure Active Directory (`AzureAD`) module is installed in PowerShell. Install it if necessary.
+ 1. Open PowerShell.
+ 1. Run the following command and check whether `AzureAD` appears in the output.
+ ```azurepowershell
+ Get-Module -ListAvailable
+ ```
+ 1. If `AzureAD` doesn't appear in the output, install the module:
+ 1. Close your current PowerShell window.
+ 1. Open PowerShell as an admin.
+ 1. Run the following command.
+ ```azurepowershell
+ Install-Module AzureAD
+ ```
+ 1. Close your PowerShell admin window.
+1. Sign in to the [Azure portal](https://ms.portal.azure.com/) as an Azure Active Directory Global Admin.
+1. Select **Azure Active Directory**.
+1. Select **Properties**.
+1. Scroll down to the **Tenant ID** field and make a note of your tenant ID.
+1. Open PowerShell.
+1. Run the following cmdlet, replacing *`<AADTenantID>`* with the tenant ID you noted down in step 5.
+ ```azurepowershell
+ Connect-AzureAD -TenantId "<AADTenantID>"
+ ```
+1. Run the following cmdlet, replacing *`<CommunicationsGatewayObjectID>`* with the Object ID you noted down in [3. Find the Object ID and Application ID for your Azure Communication Gateway resource](#3-find-the-object-id-and-application-id-for-your-azure-communication-gateway-resource).
+ ```azurepowershell
+ $commGwayObjectId = "<CommunicationsGatewayObjectID>"
+ ```
+1. Run the following PowerShell commands. These commands add the following roles for Azure Communications Gateway: `TrunkManagement.Read`, `TrunkManagement.Write`, `partnerSettings.Read`, `NumberManagement.Read`, `NumberManagement.Write`, `Data.Read`, `Data.Write`.
+ ```azurepowershell
+ # Get the Service Principal ID for Project Synergy (Operator Connect)
+ $projectSynergyApplicationId = "eb63d611-525e-4a31-abd7-0cb33f679599"
+ $projectSynergyEnterpriseApplication = Get-AzureADServicePrincipal -Filter "AppId eq '$projectSynergyApplicationId'"
+ $projectSynergyObjectId = $projectSynergyEnterpriseApplication.ObjectId
+
+ # Required Operator Connect - Project Synergy Roles
+ $trunkManagementRead = "72129ccd-8886-42db-a63c-2647b61635c1"
+ $trunkManagementWrite = "e907ba07-8ad0-40be-8d72-c18a0b3c156b"
+ $partnerSettingsRead = "d6b0de4a-aab5-4261-be1b-0e1800746fb2"
+ $numberManagementRead = "130ecbe2-d1e6-4bbd-9a8d-9a7a909b876e"
+ $numberManagementWrite = "752b4e79-4b85-4e33-a6ef-5949f0d7d553"
+ $dataRead = "eb63d611-525e-4a31-abd7-0cb33f679599"
+ $dataWrite = "98d32f93-eaa7-4657-b443-090c23e69f27"
+
+ $requiredRoles = $trunkManagementRead, $trunkManagementWrite, $partnerSettingsRead, $numberManagementRead, $numberManagementWrite, $dataRead, $dataWrite
+
+ foreach ($role in $requiredRoles) {
+ # Assign the relevant Role to the managed identity for the Azure Communications Gateway resource
+ New-AzureADServiceAppRoleAssignment -ObjectId $commGwayObjectId -PrincipalId $commGwayObjectId -ResourceId $projectSynergyObjectId -Id $role
+ }
+
+ ```
+
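To double-check the result, the following optional sketch lists the app role assignments that the managed identity now holds on the Project Synergy application, reusing the variables from the commands above. Expect seven entries.

```azurepowershell
# Optional check (sketch): list role assignments on Project Synergy held by the
# Communications Gateway managed identity. Expect seven assignments.
Get-AzureADServiceAppRoleAssignment -ObjectId $projectSynergyObjectId -All $true |
    Where-Object { $_.PrincipalId -eq $commGwayObjectId } |
    Select-Object PrincipalDisplayName, Id
```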
+## 5. Provide additional information to your onboarding team
+
+> [!NOTE]
+> This step is required to set you up as an Operator in the Teams Phone Mobile (TPM) and Operator Connect (OC) environments. Skip this step if you have already onboarded to TPM or OC.
+
+Before your onboarding team can finish onboarding you to the Operator Connect and/or Teams Phone Mobile environments, you need to provide them with some additional information.
+
+1. Wait for your onboarding team to provide you with a form to collect the additional information.
+1. Complete the form and give it to your onboarding team.
+1. Wait for your onboarding team to confirm that the onboarding process is complete.
+
+If you don't already have an onboarding team, contact azcog-enablement@microsoft.com, providing your Azure subscription ID and contact details.
+
+## 6. Test your Operator Connect portal access
+
+> [!IMPORTANT]
+> Before testing your Operator Connect portal access, wait for your onboarding team to confirm that the onboarding process is complete.
+
+Go to the [Operator Connect homepage](https://operatorconnect.microsoft.com/) and check that you're able to sign in.
+
+## 7. Add the Application ID for Azure Communications Gateway to Operator Connect
+
+You must enable the Azure Communications Gateway application within the Operator Connect or Teams Phone Mobile environment. Enabling the application allows Azure Communications Gateway to use the roles that you set up in [4. Set up application roles for Azure Communications Gateway](#4-set-up-application-roles-for-azure-communications-gateway).
+
+To enable the application, add the Application ID of the system-assigned managed identity representing Azure Communications Gateway to your Operator Connect or Teams Phone Mobile environment. You found this ID in [3. Find the Object ID and Application ID for your Azure Communication Gateway resource](#3-find-the-object-id-and-application-id-for-your-azure-communication-gateway-resource).
+
+1. Log into the [Operator Connect portal](https://operatorconnect.microsoft.com/operator/configuration).
+1. Add a new **Application Id**, using the Application ID that you found.
+
+## 8. Register your deployment's domain name in Active Directory
+
+Microsoft Teams only sends traffic to domains that you've confirmed you own. Your Azure Communications Gateway deployment automatically receives an autogenerated fully qualified domain name (FQDN). You need to add this domain name to your Active Directory tenant as a custom domain name, share the details with your onboarding team, and then verify the domain name. This process confirms that you own the domain.
+
+1. Navigate to the **Overview** of your Azure Communications Gateway resource and select **Properties**. Find the field named **Domain**. This name is your deployment's domain name.
+1. Complete the following procedure: [Add your custom domain name to Azure AD](../active-directory/fundamentals/add-custom-domain.md#add-your-custom-domain-name-to-azure-ad).
+1. Share your DNS TXT record information with your onboarding team. Wait for your onboarding team to confirm that the DNS TXT record has been configured correctly.
+1. Complete the following procedure: [Verify your custom domain name](../active-directory/fundamentals/add-custom-domain.md#verify-your-custom-domain-name).
+
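Before you verify the domain, you can optionally confirm that the TXT record is visible in public DNS. This sketch uses `Resolve-DnsName` (available in Windows PowerShell); the domain name is a hypothetical example of an autogenerated deployment FQDN.

```azurepowershell
# Optional check (sketch): confirm the verification TXT record has propagated.
# "abc123.commsgw.example.com" is a hypothetical deployment domain name.
Resolve-DnsName -Name "abc123.commsgw.example.com" -Type TXT
```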
+## Next step
+
+> [!div class="nextstepaction"]
+> [Prepare for live traffic with Operator Connect and Teams Phone Mobile](prepare-for-live-traffic-operator-connect.md)
communications-gateway Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/deploy.md
Title: Deploy Azure Communications Gateway
-description: This article guides you through how to deploy an Azure Communications Gateway.
+description: This article guides you through planning for and deploying an Azure Communications Gateway.
Previously updated : 05/05/2023 Last updated : 09/06/2023

# Deploy Azure Communications Gateway
-This article guides you through creating an Azure Communications Gateway resource in Azure. You must configure this resource before you can deploy Azure Communications Gateway.
+This article guides you through planning for and creating an Azure Communications Gateway resource in Azure.
## Prerequisites
-Carry out the steps detailed in [Prepare to deploy Azure Communications Gateway](prepare-to-deploy.md).
+You must have completed [Prepare to deploy Azure Communications Gateway](prepare-to-deploy.md).
-## 1. Start creating the Azure Communications Gateway resource
-In this step, you'll create the Azure Communications Gateway resource.
-1. Sign in to the [Azure portal](https://azure.microsoft.com/).
-1. In the search bar at the top of the page, search for Communications Gateway and select **Communications Gateways**.
+## 1. Collect basic information for deploying an Azure Communications Gateway
- :::image type="content" source="media/deploy/search.png" alt-text="Screenshot of the Azure portal. It shows the results of a search for Azure Communications Gateway.":::
-
-1. Select the **Create** option.
+ Collect all of the values in the following table for the Azure Communications Gateway resource.
- :::image type="content" source="media/deploy/create.png" alt-text="Screenshot of the Azure portal. Shows the existing Azure Communications Gateway. A Create button allows you to create more Azure Communications Gateways.":::
+|**Value**|**Field name(s) in Azure portal**|
+ |||
+ |The name of the Azure subscription to use to create an Azure Communications Gateway resource. You must use the same subscription for all resources in your Azure Communications Gateway deployment. |**Project details: Subscription**|
+ |The Azure resource group in which to create the Azure Communications Gateway resource. |**Project details: Resource group**|
+ |The name for the deployment. This name can contain alphanumeric characters and `-`. It must be 3-24 characters long. |**Instance details: Name**|
|The management Azure region: the region in which your monitoring and billing data is processed. We recommend that you select a region near or colocated with the two regions for handling call traffic. |**Instance details: Region**|
+ |The voice codecs to use between Azure Communications Gateway and your network. |**Instance details: Supported Codecs**|
+ |The Unified Communications as a Service (UCaaS) service(s) Azure Communications Gateway should support. Choose from Teams Phone Mobile and Operator Connect. |**Instance details: Supported Voice Platforms**|
+ |Whether your Azure Communications Gateway resource should handle emergency calls as standard calls or directly route them to the Emergency Services Routing Proxy (US only). |**Instance details: Emergency call handling**|
+ |The scope at which Azure Communications Gateway's autogenerated domain name label is unique. Communications Gateway resources get assigned an autogenerated domain name label that depends on the name of the resource. You'll need to register the domain name later when you deploy Azure Communications Gateway. Selecting **Tenant** gives a resource with the same name in the same tenant but a different subscription the same label. Selecting **Subscription** gives a resource with the same name in the same subscription but a different resource group the same label. Selecting **Resource Group** gives a resource with the same name in the same resource group the same label. Selecting **No Re-use** means the label doesn't depend on the name, resource group, subscription or tenant. |**Instance details: Auto-generated Domain Name Scope**|
+ |The number used in Teams Phone Mobile to access the Voicemail Interactive Voice Response (IVR) from native dialers.|**Instance details: Teams Voicemail Pilot Number**|
+ |A list of dial strings used for emergency calling.|**Instance details: Emergency Dial Strings**|
+ | How you plan to use Mobile Control Point (MCP) to route Teams Phone Mobile calls to Microsoft Phone System. Choose from **Integrated** (to deploy MCP in Azure Communications Gateway), **On-premises** (to use an existing on-premises MCP) or **None** (if you don't plan to offer Teams Phone Mobile or you'll use another method to route calls). |**Instance details: MCP**|
-1. Use the information you collected in [Collect Azure Communications Gateway resource values](prepare-to-deploy.md#4-collect-basic-information-for-deploying-an-azure-communications-gateway) to fill out the fields in the **Basics** configuration section and then select **Next: Service Regions**.
+## 2. Collect Service Regions configuration values
- :::image type="content" source="media/deploy/basics.png" alt-text="Screenshot of the Create an Azure Communications Gateway portal, showing the Basics section.":::
+Collect all of the values in the following table for both service regions in which you want to deploy Azure Communications Gateway.
-1. Use the information you collected in [Collect Service Regions configuration values](prepare-to-deploy.md#5-collect-service-regions-configuration-values) to fill out the fields in the **Service Regions** section and then select **Next: Tags**.
-1. (Optional) Configure tags for your Azure Communications Gateway resource: enter a **Name** and **Value** for each tag you want to create.
-1. Select **Review + create**.
+ |**Value**|**Field name(s) in Azure portal**|
+ |||
+ |The Azure regions to use for call traffic. |**Service Region One/Two: Region**|
+ |The IPv4 address used by Azure Communications Gateway to contact your network from this region. |**Service Region One/Two: Operator IP address**|
+ |The set of IP addresses/ranges that are permitted as sources for signaling traffic from your network. Provide an IPv4 address range using CIDR notation (for example, 192.0.2.0/24) or an IPv4 address (for example, 192.0.2.0). You can also provide a comma-separated list of IPv4 addresses and/or address ranges.|**Service Region One/Two: Allowed Signaling Source IP Addresses/CIDR Ranges**|
+ |The set of IP addresses/ranges that are permitted as sources for media traffic from your network. Provide an IPv4 address range using CIDR notation (for example, 192.0.2.0/24) or an IPv4 address (for example, 192.0.2.0). You can also provide a comma-separated list of IPv4 addresses and/or address ranges.|**Service Region One/Two: Allowed Media Source IP Address/CIDR Ranges**|
-If you've entered your configuration correctly, you'll see a **Validation Passed** message at the top of your screen. Navigate to the **Review + create** section.
+## 3. Collect Test Lines configuration values
-If you haven't filled in the configuration correctly, you'll see an error message in the configuration section(s) containing the invalid configuration. Correct the invalid configuration by selecting the flagged section(s) and use the information within the error messages to correct invalid configuration before returning to the **Review + create** section.
+Collect all of the values in the following table for all the test lines that you want to configure for Azure Communications Gateway.
+ |**Value**|**Field name(s) in Azure portal**|
+ |||
+ |The name of the test line. |**Name**|
+ |The phone number of the test line, in E.164 format and including the country code. |**Phone Number**|
+ |The purpose of the test line: **Manual** (for manual test calls by you and/or Microsoft staff during integration testing) or **Automated** (for automated validation with Microsoft Teams test suites).|**Testing purpose**|
-## 2. Submit your Azure Communications Gateway configuration
+> [!IMPORTANT]
+> You must configure at least six automated test lines. We recommend nine automated test lines (to allow simultaneous tests).
-Check your configuration and ensure it matches your requirements. If the configuration is correct, select **Create**.
+## 4. Decide if you want tags
-Once your resource has been provisioned, a message appears saying **Your deployment is complete**. Select **Go to resource group**, and then check that your resource group contains the correct Azure Communications Gateway resource.
+Resource naming and tagging are useful for resource management. They enable your organization to locate resources and keep track of those associated with specific teams or workloads, and to track the consumption of cloud resources by business area and team more accurately.
-> [!NOTE]
-> You will not be able to make calls immediately. You need to complete the remaining steps in this guide before your resource is ready to handle traffic.
+If you believe tagging would be useful for your organization, design your naming and tagging conventions following the information in the [Resource naming and tagging decision guide](/azure/cloud-adoption-framework/decision-guides/resource-tagging/).
-
-## 3. Find the Object ID and Application ID for your Azure Communication Gateway resource
+## 5. Start creating an Azure Communications Gateway resource
-Each Azure Communications Gateway resource automatically receives a [system-assigned managed identity](../active-directory/managed-identities-azure-resources/overview.md), which Azure Communications Gateway uses to connect to the Operator Connect environment. You need to find the Object ID and Application ID of the managed identity, so that you can connect Azure Communications Gateway to the Operator Connect or Teams Phone Mobile environment in [4. Set up application roles for Azure Communications Gateway](#4-set-up-application-roles-for-azure-communications-gateway) and [7. Add the Application ID for Azure Communications Gateway to Operator Connect](#7-add-the-application-id-for-azure-communications-gateway-to-operator-connect).
+Use the Azure portal to create an Azure Communications Gateway resource.
1. Sign in to the [Azure portal](https://azure.microsoft.com/).
-1. In the search bar at the top of the page, search for your Communications Gateway resource.
-1. Select your Communications Gateway resource.
-1. Select **Identity**.
-1. In **System assigned**, copy the **Object (principal) ID**.
-1. Search for the value of **Object (principal) ID** with the search bar. You should see an enterprise application with that value under the **Azure Active Directory** subheading. You might need to select **Continue searching in Azure Active Directory** to find it.
-1. Make a note of the **Object (principal) ID**.
-1. Select the enterprise application.
-1. Check that the **Object ID** matches the **Object (principal) ID** value that you copied.
-1. Make a note of the **Application ID**.
-
-## 4. Set up application roles for Azure Communications Gateway
-
-Azure Communications Gateway contains services that need to access the Operator Connect API on your behalf. To enable this access, you must grant specific application roles to the system-assigned managed identity for Azure Communications Gateway under the Project Synergy Enterprise Application. You created the Project Synergy Enterprise Application when you [prepared to deploy Azure Communications Gateway](prepare-to-deploy.md#1-add-the-project-synergy-application-to-your-azure-tenancy).
-
-> [!IMPORTANT]
-> Granting permissions has two parts: configuring the system-assigned managed identity for Azure Communications Gateway with the appropriate roles (this step) and adding the application ID of the managed identity to the Operator Connect or Teams Phone Mobile environment. You'll add the application ID to the Operator Connect or Teams Phone Mobile environment later, in [7. Add the Application ID for Azure Communications Gateway to Operator Connect](#7-add-the-application-id-for-azure-communications-gateway-to-operator-connect).
-
-Do the following steps in the tenant that contains your Project Synergy application.
-
-1. Check whether the Azure Active Directory (`AzureAD`) module is installed in PowerShell. Install it if necessary.
- 1. Open PowerShell.
- 1. Run the following command and check whether `AzureAD` appears in the output.
- ```azurepowershell
- Get-Module -ListAvailable
- ```
- 1. If `AzureAD` doesn't appear in the output, install the module:
- 1. Close your current PowerShell window.
- 1. Open PowerShell as an admin.
- 1. Run the following command.
- ```azurepowershell
- Install-Module AzureAD
- ```
- 1. Close your PowerShell admin window.
-1. Sign in to the [Azure portal](https://ms.portal.azure.com/) as an Azure Active Directory Global Admin.
-1. Select **Azure Active Directory**.
-1. Select **Properties**.
-1. Scroll down to the Tenant ID field. Your tenant ID is in the box. Make a note of your tenant ID.
-1. Open PowerShell.
-1. Run the following cmdlet, replacing *`<AADTenantID>`* with the tenant ID you noted down in step 5.
- ```azurepowershell
- Connect-AzureAD -TenantId "<AADTenantID>"
- ```
-1. Run the following cmdlet, replacing *`<CommunicationsGatewayObjectID>`* with the Object ID you noted down in [3. Find the Object ID and Application ID for your Azure Communication Gateway resource](#3-find-the-object-id-and-application-id-for-your-azure-communication-gateway-resource).
- ```azurepowershell
- $commGwayObjectId = "<CommunicationsGatewayObjectID>"
- ```
-1. Run the following PowerShell commands. These commands add the following roles for Azure Communications Gateway: `TrunkManagement.Read`, `TrunkManagement.Write`, `partnerSettings.Read`, `NumberManagement.Read`, `NumberManagement.Write`, `Data.Read`, `Data.Write`.
- ```azurepowershell
- # Get the Service Principal ID for Project Synergy (Operator Connect)
- $projectSynergyApplicationId = "eb63d611-525e-4a31-abd7-0cb33f679599"
- $projectSynergyEnterpriseApplication = Get-AzureADServicePrincipal -Filter "AppId eq '$projectSynergyApplicationId'"
- $projectSynergyObjectId = $projectSynergyEnterpriseApplication.ObjectId
-
- # Required Operator Connect - Project Synergy Roles
- $trunkManagementRead = "72129ccd-8886-42db-a63c-2647b61635c1"
- $trunkManagementWrite = "e907ba07-8ad0-40be-8d72-c18a0b3c156b"
- $partnerSettingsRead = "d6b0de4a-aab5-4261-be1b-0e1800746fb2"
- $numberManagementRead = "130ecbe2-d1e6-4bbd-9a8d-9a7a909b876e"
- $numberManagementWrite = "752b4e79-4b85-4e33-a6ef-5949f0d7d553"
- $dataRead = "eb63d611-525e-4a31-abd7-0cb33f679599"
- $dataWrite = "98d32f93-eaa7-4657-b443-090c23e69f27"
-
- $requiredRoles = $trunkManagementRead, $trunkManagementWrite, $partnerSettingsRead, $numberManagementRead, $numberManagementWrite, $dataRead, $dataWrite
-
- foreach ($role in $requiredRoles) {
- # Assign the relevant Role to the managed identity for the Azure Communications Gateway resource
- New-AzureADServiceAppRoleAssignment -ObjectId $commGwayObjectId -PrincipalId $commGwayObjectId -ResourceId $projectSynergyObjectId -Id $role
- }
-
- ```
-
-## 5. Provide additional information to your onboarding team
-
-> [!NOTE]
->This step is required to set you up as an Operator in the Teams Phone Mobile (TPM) and Operator Connect (OC) environments. Skip this step if you have already onboarded to TPM or OC.
+1. In the search bar at the top of the page, search for Communications Gateway and select **Communications Gateways**.
-Before your onboarding team can finish onboarding you to the Operator Connect and/or Teams Phone Mobile environments, you need to provide them with some additional information.
+ :::image type="content" source="media/deploy/search.png" alt-text="Screenshot of the Azure portal. It shows the results of a search for Azure Communications Gateway.":::
-1. Wait for your onboarding team to provide you with a form to collect the additional information.
-1. Complete the form and give it to your onboarding team.
-1. Wait for your onboarding team to confirm that the onboarding process is complete.
+1. Select the **Create** option.
-If you don't already have an onboarding team, contact azcog-enablement@microsoft.com, providing your Azure subscription ID and contact details.
+ :::image type="content" source="media/deploy/create.png" alt-text="Screenshot of the Azure portal. Shows the existing Azure Communications Gateway. A Create button allows you to create more Azure Communications Gateways.":::
-## 6. Test your Operator Connect portal access
+1. Use the information you collected in [1. Collect basic information for deploying an Azure Communications Gateway](#1-collect-basic-information-for-deploying-an-azure-communications-gateway) to fill out the fields in the **Basics** configuration section and then select **Next: Service Regions**.
-> [!IMPORTANT]
-> Before testing your Operator Connect portal access, wait for your onboarding team to confirm that the onboarding process is complete.
+ :::image type="content" source="media/deploy/basics.png" alt-text="Screenshot of the Create an Azure Communications Gateway portal, showing the Basics section.":::
-Go to the [Operator Connect homepage](https://operatorconnect.microsoft.com/) and check that you're able to sign in.
+1. Use the information you collected in [2. Collect Service Regions configuration values](#2-collect-service-regions-configuration-values) to fill out the fields in the **Service Regions** section and then select **Next: Tags**.
+1. (Optional) Configure tags for your Azure Communications Gateway resource: enter a **Name** and **Value** for each tag you want to create.
+1. Select **Review + create**.
-## 7. Add the Application ID for Azure Communications Gateway to Operator Connect
+If you've entered your configuration correctly, the Azure portal displays a **Validation Passed** message at the top of your screen. Navigate to the **Review + create** section.
-You must enable the Azure Communications Gateway application within the Operator Connect or Teams Phone Mobile environment. Enabling the application allows Azure Communications Gateway to use the roles that you set up in [4. Set up application roles for Azure Communications Gateway](#4-set-up-application-roles-for-azure-communications-gateway).
+If you haven't filled in the configuration correctly, the Azure portal displays an error symbol for the section(s) with invalid configuration. Select the flagged section(s) and use the information within the error messages to correct the configuration, and then return to the **Review + create** section.
-To enable the application, add the Application ID of the system-assigned managed identity representing Azure Communications Gateway to your Operator Connect or Teams Phone Mobile environment. You found this ID in [3. Find the Object ID and Application ID for your Azure Communication Gateway resource](#3-find-the-object-id-and-application-id-for-your-azure-communication-gateway-resource).
-1. Log into the [Operator Connect portal](https://operatorconnect.microsoft.com/operator/configuration).
-1. Add a new **Application Id**, using the Application ID that you found.
+## 6. Submit your Azure Communications Gateway configuration
-## 8. Register your deployment's domain name in Active Directory
+Check your configuration and ensure it matches your requirements. If the configuration is correct, select **Create**.
-Microsoft Teams only sends traffic to domains that you've confirmed that you own. Your Azure Communications Gateway deployment automatically receives an autogenerated fully qualified domain name (FQDN). You need to add this domain name to your Active Directory tenant as a custom domain name, share the details with your onboarding team and then verify the domain name. This process confirms that you own the domain.
+Once your resource has been provisioned, a message appears saying **Your deployment is complete**. Select **Go to resource group**, and then check that your resource group contains the correct Azure Communications Gateway resource.
-1. Navigate to the **Overview** of your Azure Communications Gateway resource and select **Properties**. Find the field named **Domain**. This name is your deployment's domain name.
-1. Complete the following procedure: [Add your custom domain name to Azure AD](../active-directory/fundamentals/add-custom-domain.md#add-your-custom-domain-name-to-azure-ad).
-1. Share your DNS TXT record information with your onboarding team. Wait for your onboarding team to confirm that the DNS TXT record has been configured correctly.
-1. Complete the following procedure: [Verify your custom domain name](../active-directory/fundamentals/add-custom-domain.md#verify-your-custom-domain-name).
+> [!NOTE]
+> You will not be able to make calls immediately. You need to complete the remaining steps in this guide before your resource is ready to handle traffic.
-## 9. Wait for provisioning to complete
-You now need to wait for your resource to be provisioned and connected to the Microsoft Teams environment. When your resource has been provisioned and connected, your onboarding team will contact you and the Provisioning Status filed on the resource overview will be "Complete". We recommend you check in periodically to see if your resource has been provisioned. This process can take up to two weeks, because updating ACLs in the Azure and Teams environments is done on a periodic basis.
+## 7. Wait for provisioning to complete
+
+Wait for your resource to be provisioned and connected. When your resource is ready, your onboarding team contacts you and the Provisioning Status field on the resource overview changes to "Complete." We recommend that you check in periodically to see if the Provisioning Status field has changed. This step might take up to two weeks.
+
+## 8. Connect Azure Communications Gateway to your networks
+
+When your resource has been provisioned, you can connect Azure Communications Gateway to your networks.
+
+1. Exchange TLS certificate information with your onboarding team.
+ 1. Azure Communications Gateway is preconfigured to support the DigiCert Global Root G2 certificate and the Baltimore CyberTrust Root certificate as root certificate authority (CA) certificates. If the certificate that your network presents to Azure Communications Gateway uses a different root CA certificate, provide your onboarding team with this root CA certificate.
+ 1. The root CA certificate for Azure Communications Gateway's certificate is the DigiCert Global Root G2 certificate. If your network doesn't have this root certificate, download it from https://www.digicert.com/kb/digicert-root-certificates.htm and install it in your network.
+1. Configure your infrastructure to meet the call routing requirements described in [Reliability in Azure Communications Gateway](reliability-communications-gateway.md).
+ * Depending on your network, you might need to configure SBCs, softswitches and access control lists (ACLs).
+ * Your network needs to send SIP traffic to per-region FQDNs for Azure Communications Gateway. To find these FQDNs:
+ 1. Go to the **Overview** page for your Azure Communications Gateway resource.
+ 1. In each **Service Location** section, find the **Hostname** field. You need to validate TLS connections against this hostname to ensure secure connections.
+ * We recommend configuring an SRV lookup for each region, using `_sip._tls.<regional-FQDN-from-portal>`. Replace *`<regional-FQDN-from-portal>`* with the per-region FQDNs that you found in the **Overview** page for your resource (see the lookup sketch after this list).
+1. If your Azure Communications Gateway includes integrated MCP, configure the connection to MCP:
+ 1. Go to the **Overview** page for your Azure Communications Gateway resource.
+ 1. In each **Service Location** section, find the **MCP hostname** field.
+ 1. Configure your test numbers with an iFC of the following form, replacing *`<mcp-hostname>`* with the MCP hostname for the preferred region for that subscriber.
+ ```xml
+ <InitialFilterCriteria>
+ <Priority>0</Priority>
+ <TriggerPoint>
+ <ConditionTypeCNF>0</ConditionTypeCNF>
+ <SPT>
+ <ConditionNegated>0</ConditionNegated>
+ <Group>0</Group>
+ <Method>INVITE</Method>
+ </SPT>
+ <SPT>
+ <ConditionNegated>1</ConditionNegated>
+ <Group>0</Group>
+ <SessionCase>4</SessionCase>
+ </SPT>
+ </TriggerPoint>
+ <ApplicationServer>
+ <ServerName>sip:<mcp-hostname>;transport=tcp;service=mcp</ServerName>
+ <DefaultHandling>0</DefaultHandling>
+ </ApplicationServer>
+ </InitialFilterCriteria>
+ ```
+1. Configure your routers and peering connection to ensure all traffic to Azure Communications Gateway is through Azure Internet Peering for Communications Services (also known as MAPS for Voice) or ExpressRoute Microsoft Peering.
+1. Enable Bidirectional Forwarding Detection (BFD) on your on-premises edge routers to speed up link failure detection.
+ - The interval must be 150 ms (or 300 ms if you can't use 150 ms).
+ - With MAPS, BFD must bring up the BGP peer for each Private Network Interface (PNI).
+1. Meet any other requirements for your communications platform (for example, the *Network Connectivity Specification* for Operator Connect or Teams Phone Mobile). If you don't have access to Operator Connect or Teams Phone Mobile specifications, contact your onboarding team.
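As an illustration of the SRV lookup recommended earlier in this list, the following sketch queries the record for one region. The regional FQDN is a hypothetical value standing in for the hostname copied from your resource's **Overview** page.

```azurepowershell
# Sketch: query the recommended per-region SRV record for Azure Communications Gateway.
# The regional FQDN below is a hypothetical example value.
Resolve-DnsName -Name "_sip._tls.region1.abc123.commsgw.example.com" -Type SRV
```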
## Next steps

-- [Prepare for live traffic with Azure Communications Gateway](prepare-for-live-traffic.md)
+> [!div class="nextstepaction"]
+> [Connect to Operator Connect or Teams Phone Mobile](connect-operator-connect.md)
communications-gateway Emergency Calling Operator Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/emergency-calling-operator-connect.md
+
+ Title: Emergency Calling with Azure Communications Gateway
+description: Understand Azure Communications Gateway's support for emergency calling
+ Last updated : 01/09/2023
+# Emergency calling for Operator Connect and Teams Phone Mobile with Azure Communications Gateway
+
+Azure Communications Gateway supports Operator Connect and Teams Phone Mobile subscribers making emergency calls from Microsoft Teams clients. This article describes how Azure Communications Gateway routes these calls to your network and the key facts you'll need to consider.
+
+## Overview of emergency calling with Azure Communications Gateway
+
+If a subscriber uses a Microsoft Teams client to make an emergency call and the subscriber's number is associated with Azure Communications Gateway, Microsoft Phone System routes the call to Azure Communications Gateway. The call has location information encoded in a PIDF-LO (Presence Information Data Format Location Object) SIP body.
+
+Unless you choose to route emergency calls directly to an Emergency Routing Service Provider (US only), Azure Communications Gateway routes emergency calls to your network with this PIDF-LO location information unaltered. It is your responsibility to ensure that these emergency calls are properly routed to an appropriate Public Safety Answering Point (PSAP). For more information on how Microsoft Teams handles emergency calls, see [the Microsoft Teams documentation on managing emergency calling](/microsoftteams/what-are-emergency-locations-addresses-and-call-routing#considerations-for-operator-connect).
+
+Microsoft Teams always sends location information on SIP INVITEs for emergency calls. This information can come from several sources, all supported by Azure Communications Gateway:
+
+- [Dynamic locations](/microsoftteams/configure-dynamic-emergency-calling), based on the location of the client used to make the call.
+ - Enterprise administrators must add physical locations associated with network connectivity into the Location Information Server (LIS) in Microsoft Teams.
+ - When Microsoft Teams clients make an emergency call, they obtain their physical location based on their network location.
+- Static locations that you assign to numbers.
+ - The Operator Connect API allows you to associate numbers with locations that enterprise administrators have already configured in the Microsoft Teams Admin Center as part of uploading numbers.
+ - Azure Communications Gateway's Number Management Portal also allows you to associate numbers with locations during upload. You can also manage the locations associated with numbers after the numbers have been uploaded.
+- Static locations that your enterprise customers assign. When you upload numbers, you can choose whether enterprise administrators can modify the location information associated with each number.
+
+> [!NOTE]
+> If you are taking responsibility for assigning static locations to numbers, note that enterprise administrators must have created the locations within the Microsoft Teams Admin Center first.
+
+Azure Communications Gateway identifies emergency calls based on the dialing strings configured when you [deploy the Azure Communications Gateway resource](deploy.md). These strings will also be used by Microsoft Teams to identify emergency calls.
+
+## Emergency calling in the United States
+
+Within the United States, Microsoft Teams supports the Emergency Routing Service Providers (ERSPs) listed in the ["911 service providers" section of the list of Session Border Controllers certified for Direct Routing](/microsoftteams/direct-routing-border-controllers). Azure Communications Gateway has been certified to interoperate with these ERSPs.
+
+You must route emergency calls to one of these ERSPs. If your network doesn't support PIDF-LO SIP bodies, Azure Communications Gateway can route emergency calls directly to your chosen ERSP. You must arrange this routing with your onboarding team.
+
+## Emergency calling with Teams Phone Mobile
+
+For Teams Phone Mobile subscribers, Azure Communications Gateway routes emergency calls from Microsoft Teams clients to your network in the same way as other originating calls. The call includes location information in accordance with the [emergency call considerations for Teams Phone Mobile](/microsoftteams/what-are-emergency-locations-addresses-and-call-routing#considerations-for-teams-phone-mobile).
+
+Your network must not route emergency calls from native dialers to Azure Communications Gateway or Microsoft Teams.
+
+## Next steps
+
+- Learn about [the key concepts in Microsoft Teams emergency calling](/microsoftteams/what-are-emergency-locations-addresses-and-call-routing).
+- Learn about [dynamic emergency calling in Microsoft Teams](/microsoftteams/configure-dynamic-emergency-calling).
communications-gateway Emergency Calling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/emergency-calling.md
- Title: Emergency Calling with Azure Communications Gateway
-description: Understand Azure Communications Gateway's support for emergency calling
- Previously updated : 01/09/2023
-# Emergency calling with Azure Communications Gateway
-
-Azure Communications Gateway supports Operator Connect and Teams Phone Mobile subscribers making emergency calls from Microsoft Teams clients. This article describes how Azure Communications Gateway routes these calls to your network and the key facts you'll need to consider.
-
-## Overview of emergency calling with Azure Communications Gateway
-
-If a subscriber uses a Microsoft Teams client to make an emergency call and the subscriber's number is associated with Azure Communications Gateway, Microsoft Phone System routes the call to Azure Communications Gateway. The call has location information encoded in a PIDF-LO (Presence Information Data Format Location Object) SIP body.
-
-Unless you choose to route emergency calls directly to an Emergency Routing Service Provider (US only), Azure Communications Gateway routes emergency calls to your network with this PIDF-LO location information unaltered. It is your responsibility to ensure that these emergency calls are properly routed to an appropriate Public Safety Answering Point (PSAP). For more information on how Microsoft Teams handles emergency calls, see [the Microsoft Teams documentation on managing emergency calling](/microsoftteams/what-are-emergency-locations-addresses-and-call-routing#considerations-for-operator-connect).
-
-Microsoft Teams always sends location information on SIP INVITEs for emergency calls. This information can come from several sources, all supported by Azure Communications Gateway:
-- [Dynamic locations](/microsoftteams/configure-dynamic-emergency-calling), based on the location of the client used to make the call.
- - Enterprise administrators must add physical locations associated with network connectivity into the Location Information Server (LIS) in Microsoft Teams.
- - When Microsoft Teams clients make an emergency call, they obtain their physical location based on their network location.
-- Static locations that you assign to numbers.
- - The Operator Connect API allows you to associate numbers with locations that enterprise administrators have already configured in the Microsoft Teams Admin Center as part of uploading numbers.
- - Azure Communications Gateway's Number Management Portal also allows you to associate numbers with locations during upload. You can also manage the locations associated with numbers after the numbers have been uploaded.
-- Static locations that your enterprise customers assign. When you upload numbers, you can choose whether enterprise administrators can modify the location information associated with each number.
-> [!NOTE]
-> If you are taking responsibility for assigning static locations to numbers, note that enterprise administrators must have created the locations within the Microsoft Teams Admin Center first.
-
-Azure Communications Gateway identifies emergency calls based on the dialing strings configured when you [deploy the Azure Communications Gateway resource](deploy.md). These strings will also be used by Microsoft Teams to identify emergency calls.
-
-## Emergency calling in the United States
-
-Within the United States, Microsoft Teams supports the Emergency Routing Service Providers (ERSPs) listed in the ["911 service providers" section of the list of Session Border Controllers certified for Direct Routing)](/microsoftteams/direct-routing-border-controllers). Azure Communications Gateway has been certified to interoperate with these ERSPs.
-
-You must route emergency calls to one of these ERSPs. If your network doesn't support PIDF-LO SIP bodies, Azure Communications Gateway can route emergency calls directly to your chosen ERSP. You must arrange this routing with your onboarding team.
-
-## Emergency calling with Teams Phone Mobile
-
-For Teams Phone Mobile subscribers, Azure Communications Gateway routes emergency calls from Microsoft Teams clients to your network in the same way as other originating calls. The call includes location information in accordance with the [emergency call considerations for Teams Phone Mobile](/microsoftteams/what-are-emergency-locations-addresses-and-call-routing#considerations-for-teams-phone-mobile).
-
-Your network must not route emergency calls from native dialers to Azure Communications Gateway or Microsoft Teams.
-
-## Next steps
-- Learn about [the key concepts in Microsoft Teams emergency calling](/microsoftteams/what-are-emergency-locations-addresses-and-call-routing).
-- Learn about [dynamic emergency calling in Microsoft Teams](/microsoftteams/configure-dynamic-emergency-calling).
communications-gateway Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/get-started.md
+
+ Title: Getting started with Azure Communications Gateway
+description: Learn how to plan for and deploy Azure Communications Gateway
+ Last updated : 09/01/2023
+#CustomerIntent: As someone setting up Azure Communications Gateway, I want to understand the steps I need to carry out to have live traffic through my deployment.
++
+# Get started with Azure Communications Gateway
+
+Setting up Azure Communications Gateway requires planning your deployment, deploying your Azure Communications Gateway resource, and integrating with Operator Connect or Teams Phone Mobile.
+
+This article summarizes the steps and documentation that you need.
+
+> [!IMPORTANT]
+> You must fully understand the onboarding process for your chosen communications service and any dependencies that it introduces. For advice, ask your onboarding team.
+>
+> Some steps in the deployment and integration process can require days or weeks to complete. For example, you might need to arrange Microsoft Azure Peering Service (MAPS) connectivity before you can deploy, wait for onboarding, or wait for a specific date to launch your service. We recommend that you read through any documentation from your onboarding team and the procedures in [2. Deploy Azure Communications Gateway](#2-deploy-azure-communications-gateway) and [3. Integrate with Operator Connect or Teams Phone Mobile](#3-integrate-with-operator-connect-or-teams-phone-mobile) before you start deploying.
+
+## 1. Learn about and plan for Azure Communications Gateway
+
+Read the following articles to learn about Azure Communications Gateway.
+
+- [Your network and Azure Communications Gateway](role-in-network.md), to learn how Azure Communications Gateway fits into your network.
+- [Onboarding with Included Benefits for Azure Communications Gateway](onboarding.md), to learn about onboarding to Operator Connect or Teams Phone Mobile and the support we can provide.
+- [Reliability in Azure Communications Gateway](reliability-communications-gateway.md), to create a network design that includes Azure Communications Gateway.
+- [Overview of security for Azure Communications Gateway](security.md), to learn about how Azure Communications Gateway keeps customer data and your network secure.
+- [Plan and manage costs for Azure Communications Gateway](plan-and-manage-costs.md), to learn about costs for Azure Communications Gateway.
+- [Azure Communications Gateway limits, quotas and restrictions](limits.md), to learn about the limits and quotas associated with Azure Communications Gateway.
+
+Read the following articles to learn about Operator Connect and Teams Phone Mobile with Azure Communications Gateway.
+
+- [Overview of interoperability of Azure Communications Gateway with Operator Connect and Teams Phone Mobile](interoperability-operator-connect.md)
+- [Mobile Control Point in Azure Communications Gateway for Teams Phone Mobile](mobile-control-point.md)
+- [Emergency calling for Operator Connect and Teams Phone Mobile with Azure Communications Gateway](emergency-calling-operator-connect.md)
+
+As part of your planning, ensure your network can support the connectivity and interoperability requirements in these articles.
+
+Read through the procedures in [2. Deploy Azure Communications Gateway](#2-deploy-azure-communications-gateway) and [3. Integrate with Operator Connect or Teams Phone Mobile](#3-integrate-with-operator-connect-or-teams-phone-mobile) and use those procedures as input into your planning for deployment, testing and going live. You need to work with an onboarding team (from Microsoft or one that you arrange yourself) during these phases, so ensure that you discuss timelines and requirements with this team.
+
+## 2. Deploy Azure Communications Gateway
+
+Use the following procedures to deploy Azure Communications Gateway and connect it to your networks.
+
+1. [Prepare to deploy Azure Communications Gateway](prepare-to-deploy.md) describes the steps you need to take before you can start creating your Azure Communications Gateway resource. You might need to refer to some of the articles listed in [1. Learn about and plan for Azure Communications Gateway](#1-learn-about-and-plan-for-azure-communications-gateway).
+1. [Deploy Azure Communications Gateway](deploy.md) describes how to create your Azure Communications Gateway resource in the Azure portal and connect it to your networks.
+
+## 3. Integrate with Operator Connect or Teams Phone Mobile
+
+Use the following procedures to integrate with Operator Connect and Teams Phone Mobile.
+
+1. [Connect to Operator Connect or Teams Phone Mobile](connect-operator-connect.md) describes how to set up Azure Communications Gateway for Operator Connect and Teams Phone Mobile, including onboarding to the Operator Connect and Teams Phone Mobile environments.
+1. [Prepare for live traffic with Operator Connect, Teams Phone Mobile and Azure Communications Gateway](prepare-for-live-traffic-operator-connect.md) describes how to complete the requirements of the Operator Connect and Teams Phone Mobile programs and launch your service.
+
+## Next steps
+
+- Learn about [your network and Azure Communications Gateway](role-in-network.md).
+- Find out about [the newest enhancements to Azure Communications Gateway](whats-new.md).
communications-gateway Interoperability Operator Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/interoperability-operator-connect.md
+
+ Title: Overview of Operator Connect and Teams Phone Mobile interoperating with Azure Communications Gateway
+description: Understand how Azure Communications Gateway fits into your fixed and mobile networks and into the Operator Connect and Teams Phone Mobile environments
++++ Last updated : 09/01/2023+++
+# Overview of interoperability of Azure Communications Gateway with Operator Connect and Teams Phone Mobile
+
+Azure Communications Gateway can manipulate signaling and media to meet the requirements of your networks and the Operator Connect and Teams Phone Mobile programs. This article provides an overview of the interoperability features that Azure Communications Gateway offers for Operator Connect and Teams Phone Mobile.
+
+> [!IMPORTANT]
+> You must sign an Operator Connect or Teams Phone Mobile agreement with Microsoft to use this service.
+
+## Role and position in the network
+
+Azure Communications Gateway sits at the edge of your fixed line and mobile networks. It connects these networks to the Microsoft Phone System, allowing you to support Operator Connect (for fixed line networks) and Teams Phone Mobile (for mobile networks). The following diagram shows where Azure Communications Gateway sits in your network.
+
+ Architecture diagram showing Azure Communications Gateway connecting to the Microsoft Phone System, a softswitch in a fixed line deployment and a mobile IMS core. Azure Communications Gateway contains certified SBC function and the MCP application server for anchoring mobile calls.
+
+Calls flow from endpoints in your networks through Azure Communications Gateway and the Microsoft Phone System into Microsoft Teams clients.
+
+### Compliance with Certified SBC specifications
+
+Azure Communications Gateway supports the Microsoft specifications for Certified SBCs for Operator Connect and Teams Phone Mobile. For more information about certification and these specifications, see [Session Border Controllers certified for Direct Routing](/microsoftteams/direct-routing-border-controllers) and the Operator Connect or Teams Phone Mobile documentation provided by your Microsoft representative.
+
+### Call control integration for Teams Phone Mobile
+
+[Teams Phone Mobile](/microsoftteams/operator-connect-mobile-plan) allows you to offer Microsoft Teams call services for calls made from the native dialer on mobile handsets, for example presence and call history. These features require anchoring the calls in Microsoft's Intelligent Conversation and Communications Cloud (IC3), part of the Microsoft Phone System.
+
+The Microsoft Phone System relies on information in SIP signaling to determine whether a call is:
+
+- To a Teams Phone Mobile subscriber.
+- From a Teams Phone Mobile subscriber or between two Teams Phone Mobile subscribers.
+
+Your core mobile network must supply this information to Azure Communications Gateway, by using unique trunks or by correctly populating an `X-MS-FMC` header as defined by the Teams Phone Mobile SIP specifications. If you don't have access to these specifications, contact your Microsoft representative or your onboarding team.
+
+Your core mobile network must also be able to anchor and divert calls into the Microsoft Phone System. You can choose from the following options.
+
+- Using Mobile Control Point (MCP) in Azure Communications Gateway. MCP is an IMS Application Server that queries the Teams Phone Mobile Consultation API to determine whether the call involves a Teams Phone Mobile Subscriber. MCP then adds X-MS-FMC headers and updates the signaling to divert the call into the Microsoft Phone System through Azure Communications Gateway. For more information, see [Mobile Control Point in Azure Communications Gateway for Teams Phone Mobile](mobile-control-point.md).
+- Deploying an on-premises version of Mobile Control Point (MCP) from Metaswitch. For more information, see the [Metaswitch description of Mobile Control Point](https://www.metaswitch.com/products/mobile-control-point). This version of MCP isn't included in Azure Communications Gateway.
+- Using other routing capabilities in your core network to detect Teams Phone Mobile subscribers and route INVITEs to or from these subscribers into the Microsoft Phone System through Azure Communications Gateway.
+
+> [!IMPORTANT]
+> If an INVITE has an X-MS-FMC header, the core must not route the call to Microsoft Teams. The call has already been anchored in the Microsoft Phone System.
+
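+As an illustration of this rule, here's a minimal Python sketch (not part of any product API; the header value is a placeholder, because the real values are defined in the Teams Phone Mobile SIP specifications) that encodes the check a core routing function must make:
+
+```python
+def may_divert_to_teams(invite_headers: dict[str, str]) -> bool:
+    """Return True if this INVITE is a candidate for diversion into the
+    Microsoft Phone System through Azure Communications Gateway.
+
+    An X-MS-FMC header means the call is already anchored in the
+    Microsoft Phone System, so it must stay in the core network.
+    """
+    # SIP header names are case-insensitive, so normalize before checking.
+    names = {name.lower() for name in invite_headers}
+    return "x-ms-fmc" not in names
+
+# Already anchored: must not be routed to Microsoft Teams again.
+print(may_divert_to_teams({"X-MS-FMC": "<value-set-by-mcp>"}))  # False
+# Not yet anchored: may be diverted, subject to subscriber eligibility checks.
+print(may_divert_to_teams({"From": "<sip:+15551234567@example.net>"}))  # True
+```
+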
+## SIP signaling
+
+Azure Communications Gateway automatically interworks calls to support the following requirements from Operator Connect and Teams Phone Mobile:
+
+- SIP over TLS
+- X-MS-SBC header (describing the SBC function)
+- Strict rules on a= attribute lines in SDP bodies
+- Strict rules on call transfer handling
+
+You can arrange more interworking function as part of your initial network design or at any time by raising a support request for Azure Communications Gateway. For example, you might need extra interworking configuration for:
+
+- Advanced SIP header or SDP message manipulation
+- Support for reliable provisional messages (100rel)
+- Interworking between early and late media
+- Interworking away from inband DTMF tones
+- Placing the unique tenant ID elsewhere in SIP messages to make it easier for your network to consume, for example in `tgrp` parameters
+
+The Microsoft Phone System requires calling (A-) and called (B-) telephone numbers to be in E.164 format (for example, `+14255550100`). This requirement applies to numbers in both SIP and TEL URIs. We recommend that you configure your network to use the E.164 format for all numbers. If your network can't convert numbers to the E.164 format, contact your onboarding team or raise a support request to discuss your requirements for number conversion.
++
+## RTP and SRTP media
+
+The Microsoft Phone System typically requires SRTP for media. Azure Communications Gateway supports both RTP and SRTP, and can interwork between them. Azure Communications Gateway offers further media manipulation features to allow your networks to interoperate with the Microsoft Phone System.
+
+### Media handling for calls
+
+You must select the codecs that you want to support when you deploy Azure Communications Gateway. If the Microsoft Phone System doesn't support these codecs, Azure Communications Gateway can perform transcoding (converting between codecs) on your behalf.
+
+Operator Connect and Teams Phone Mobile require core networks to support ringback tones (ringing tones) during call transfer. Core networks must also support comfort noise. If your core networks can't meet these requirements, Azure Communications Gateway can inject media into calls.
+
+### Media interworking options
+
+Azure Communications Gateway offers multiple media interworking options. For example, you might need to:
+
+- Change handling of RTCP
+- Control bandwidth allocation
+- Prioritize specific media traffic for Quality of Service
+
+For full details of the media interworking features available in Azure Communications Gateway, raise a support request.
+
+## Number Management Portal for provisioning with Operator Connect APIs
+
+Operator Connect and Teams Phone Mobile require API integration between your IT systems and Microsoft Teams for flow-through provisioning and automation. After your deployment has been certified and launched, you must not use the Operator Connect portal for provisioning. You can use Azure Communications Gateway's Number Management Portal instead. This Azure portal feature enables you to pass the certification process and sell Operator Connect or Teams Phone Mobile services while you carry out a custom API integration project.
+
+The Number Management Portal is available as part of the optional API Bridge feature.
+
+For more information, see [Manage an enterprise with Azure Communications Gateway's Number Management Portal for Operator Connect and Teams Phone Mobile](manage-enterprise-operator-connect.md).
+
+> [!TIP]
+> The Number Management Portal does not allow your enterprise customers to manage Teams Calling. For example, it does not provide self-service portals.
+
+### Providing call duration data to Microsoft Teams
+
+Azure Communications Gateway can use the Operator Connect APIs to upload information about the duration of individual calls (CallDuration information) into the Microsoft Teams environment. This information allows Microsoft Teams clients to display the call duration recorded by your network, instead of the call duration recorded by Microsoft Teams. Providing this information to Microsoft Teams is a requirement of the Operator Connect program that Azure Communications Gateway performs on your behalf.
+
+## Compatibility with monitoring requirements
+
+The Azure Communications Gateway service includes continuous monitoring for potential faults in your deployment. The metrics we monitor cover all metrics that operators must monitor as part of the Operator Connect program and include:
+
+- Call quality
+- Call errors and unusual behavior (for example, call setup failures, short calls, or unusual disconnections)
+- Other errors in Azure Communications Gateway
+
+We'll investigate the potential fault, and determine whether the fault relates to Azure Communications Gateway or the Microsoft Phone System. We might require you to carry out some troubleshooting steps in your networks to help isolate the fault.
+
+## Next steps
+
+- Learn about [monitoring Azure Communications Gateway](monitor-azure-communications-gateway.md).
+- Learn about [requesting changes to Azure Communications Gateway](request-changes.md).
communications-gateway Interoperability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/interoperability.md
- Title: Interoperability of Azure Communications Gateway with Microsoft Teams
-description: Understand how Azure Communications Gateway fits into your existing fixed and mobile networks and into Microsoft Teams
---- Previously updated : 04/26/2023---
-# Interoperability of Azure Communications Gateway with Microsoft Teams
-
-Azure Communications Gateway sits at the edge of your network. This position allows it to manipulate signaling and media to meet the requirements of your networks and the Microsoft Phone System. Azure Communications Gateway includes many interoperability settings by default, and you can arrange further interoperability configuration.
-
-## Role and position in the network
-
-Azure Communications Gateway sits at the edge of your fixed line and mobile networks. It connects these networks to the Microsoft Phone System, allowing you to support Operator Connect (for fixed line networks) and Teams Phone Mobile (for mobile networks). The following diagram shows where Azure Communications Gateway sits in your network.
-
- Architecture diagram showing Azure Communications Gateway connecting to the Microsoft Phone System, a softswitch in a fixed line deployment and a mobile IMS core. Azure Communications Gateway contains certified SBC function and the MCP application server for anchoring mobile calls.
-
-Calls flow from endpoints in your networks through Azure Communications Gateway and the Microsoft Phone System into Microsoft Teams clients.
-
-Azure Communications Gateway provides all the features of a traditional session border controller (SBC). These features include:
--- Signaling interworking features to solve interoperability problems-- Advanced media manipulation and interworking-- Defending against Denial of Service attacks and other malicious traffic-- Ensuring Quality of Service-
-Azure Communications Gateway also offers metrics for monitoring your deployment.
-
-You must provide the networking connection between Azure Communications Gateway and your core networks.
-
-### Compliance with Certified SBC specifications
-
-Azure Communications Gateway supports the Microsoft specifications for Certified SBCs for Operator Connect and Teams Phone Mobile. For more information about certification and these specifications, see [Session Border Controllers certified for Direct Routing](/microsoftteams/direct-routing-border-controllers) and the Operator Connect or Teams Phone Mobile documentation provided by your Microsoft representative.
-
-### Call control integration for Teams Phone Mobile
-
-[Teams Phone Mobile](/microsoftteams/operator-connect-mobile-plan) allows you to offer Microsoft Teams call services for calls made from the native dialer on mobile handsets, for example presence and call history. These features require anchoring the calls in Microsoft's Intelligent Conversation and Communications Cloud (IC3), part of the Microsoft Phone System.
-
-The Microsoft Phone System relies on information in SIP signaling to determine whether a call is:
--- To a Teams Phone Mobile subscriber.-- From a Teams Phone Mobile subscriber or between two Teams Phone Mobile subscribers.-
-Your core mobile network must supply this information to Azure Communications Gateway, by using unique trunks or by correctly populating an `X-MS-FMC` header as defined by the Teams Phone Mobile SIP specifications. If you don't have access to these specifications, contact your Microsoft representative or your onboarding team.
-
-Your core mobile network must also be able to anchor and divert calls into the Microsoft Phone System. You can choose from the following options.
--- Using Mobile Control Point (MCP) in Azure Communications Gateway. MCP is an IMS Application Server that queries the Teams Phone Mobile Consultation API to determine whether the call involves a Teams Phone Mobile Subscriber. MCP then adds X-MS-FMC headers and updates the signaling to divert the call into the Microsoft Phone System through Azure Communications Gateway. For more information, see [Mobile Control Point in Azure Communications Gateway for Teams Phone Mobile](mobile-control-point.md).-- Deploying an on-premises version of Mobile Control Point (MCP) from Metaswitch. For more information, see the [Metaswitch description of Mobile Control Point](https://www.metaswitch.com/products/mobile-control-point). This version of MCP isn't included in Azure Communications Gateway.-- Using other routing capabilities in your core network to detect Teams Phone Mobile subscribers and route INVITEs to or from these subscribers into the Microsoft Phone System through Azure Communications Gateway.-
-> [!IMPORTANT]
-> If an INVITE has an X-MS-FMC header, the core must not route the call to Microsoft Teams. The call has already been anchored in the Microsoft Phone System.
-
-## SIP signaling
-
-Azure Communications Gateway includes SIP trunks to your own network and can interwork between your existing core networks and the requirements of the Microsoft Phone System. For example, Azure Communications Gateway automatically interworks calls to support the following requirements from Operator Connect and Teams Phone Mobile:
--- SIP over TLS-- X-MS-SBC header (describing the SBC function)-- Strict rules on a= attribute lines in SDP bodies-- Strict rules on call transfer handling-
-SIP trunks between your network and Azure Communications Gateway are multi-tenant, meaning that traffic from all your customers share the same trunk. By default, traffic sent from the Azure Communications Gateway contains an X-MSTenantID header. This header identifies the enterprise that is sending the traffic and can be used by your billing systems.
-
-You can arrange more interworking function as part of your initial network design or at any time by raising a support request for Azure Communications Gateway. For example, you might need extra interworking configuration for:
--- Advanced SIP header or SDP message manipulation-- Support for reliable provisional messages (100rel)-- Interworking between early and late media-- Interworking away from inband DTMF tones-- Placing the unique tenant ID elsewhere in SIP messages to make it easier for your network to consume, for example in `tgrp` parameters-
-## RTP and SRTP media
-
-The Microsoft Phone System typically requires SRTP for media. Azure Communications Gateway supports both RTP and SRTP, and can interwork between them. Azure Communications Gateway offers further media manipulation features to allow your networks to interoperate with the Microsoft Phone System.
-
-### Media handling for calls
-
-You must select the codecs that you want to support when you deploy Azure Communications Gateway. If the Microsoft Phone System doesn't support these codecs, Azure Communications Gateway can perform transcoding (converting between codecs) on your behalf.
-
-Operator Connect and Teams Phone Mobile require core networks to support ringback tones (ringing tones) during call transfer. Core networks must also support comfort noise. If your core networks can't meet these requirements, Azure Communications Gateway can inject media into calls.
-
-### Media interworking options
-
-Azure Communications Gateway offers multiple media interworking options. For example, you might need to:
--- Change handling of RTCP-- Control bandwidth allocation-- Prioritize specific media traffic for Quality of Service-
-For full details of the media interworking features available in Azure Communications Gateway, raise a support request.
-
-## Compatibility with monitoring requirements
-
-The Azure Communications Gateway service includes continuous monitoring for potential faults in your deployment. The metrics we monitor cover all metrics that operators must monitor as part of the Operator Connect program and include:
--- Call quality-- Call errors and unusual behavior (for example, call setup failures, short calls, or unusual disconnections)-- Other errors in Azure Communications Gateway-
-We'll investigate the potential fault, and determine whether the fault relates to Azure Communications Gateway or the Microsoft Phone System. We may require you to carry out some troubleshooting steps in your networks to help isolate the fault.
-
-Azure Communications Gateway provides metrics that you can use to monitor the overall health of your Azure Communications Gateway deployment. If you notice any concerning metrics, you can raise an Azure Communications Gateway support ticket.
-
-## Next steps
--- Learn about [monitoring Azure Communications Gateway](monitor-azure-communications-gateway.md).-- Learn about [requesting changes to Azure Communications Gateway](request-changes.md).
communications-gateway Manage Enterprise Operator Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/manage-enterprise-operator-connect.md
+
+ Title: Use Azure Communications Gateway's Number Management Portal to manage an enterprise
+description: Learn how to add and remove enterprises and numbers for Operator Connect and Teams Phone Mobile with Azure Communication Gateway's Number Management Portal.
++++ Last updated : 07/17/2023+++
+# Manage an enterprise with Azure Communications Gateway's Number Management Portal for Operator Connect and Teams Phone Mobile
+
+Azure Communications Gateway's Number Management Portal enables you to manage enterprise customers and their numbers through the Azure portal.
+
+The Operator Connect and Teams Phone Mobile programs don't allow you to use the Operator Connect portal for provisioning after you've launched your service in the Teams Admin Center. The Number Management Portal is a simple alternative that you can use until you've finished integrating with the Operator Connect APIs.
+
+> [!IMPORTANT]
+> You must have selected Azure Communications Gateway's API Bridge option to use the Number Management Portal.
+
+## Prerequisites
+
+Confirm that you have [!INCLUDE [project-synergy-nmp-permissions](includes/communications-gateway-nmp-project-synergy-permissions.md)] permissions for the Project Synergy enterprise application and **Reader** access to the Azure portal for your subscription. If you don't have these permissions, ask your administrator to set them up by following [Set up user roles for Azure Communications Gateway](provision-user-roles.md).
+
+If you're assigning new numbers to an enterprise customer:
+
+* You must know the numbers you need to assign (as E.164 numbers); a quick format check is sketched after the table below. Each number must:
+ * Contain only digits (0-9), with an optional `+` at the start.
+ * Include the country code.
+ * Be up to 19 characters long.
+* You must have completed any internal procedures for assigning numbers.
+* You need to know the following information for each range of numbers.
+
+|Information for each range of numbers |Notes |
+|||
+|Calling profile |One of the Calling Profiles created by Microsoft for you.|
+|Intended usage | Individuals (calling users), applications or conference calls.|
+|Capabilities |Which types of call to allow (for example, inbound calls or outbound calls).|
+|Civic address | A physical location for emergency calls. The enterprise must have configured this address in the Teams Admin Center. Only required for individuals (calling users) and only if you don't allow the enterprise to update the address.|
+|Location | A description of the location for emergency calls. The enterprise must have configured this location in the Teams Admin Center. Only required for individuals (calling users) and only if you don't allow the enterprise to update the address.|
+|Whether the enterprise can update the civic address or location | If you don't allow the enterprise to update the civic address or location, you must specify a civic address or location. You can specify an address or location and also allow the enterprise to update it.|
+|Country | The country for the number. Required only if you're uploading a North American Toll-Free number; otherwise optional.|
+|Ticket number (optional) |The ID of any ticket or other request that you want to associate with this range of numbers. Up to 64 characters. |
+
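+If you're uploading many numbers, you might want to check them against these format rules first. The following Python sketch is an illustration only (the sample numbers are placeholders, and it can't verify that a country code is present); it applies the rules from the prerequisites above and prints the numbers as the comma-separated list that the portal expects.
+
+```python
+import re
+
+# Digits only (0-9), with an optional + at the start.
+E164_PATTERN = re.compile(r"\+?[0-9]+")
+
+def is_acceptable(number: str) -> bool:
+    # Up to 19 characters long, matching the rules in the prerequisites.
+    return len(number) <= 19 and bool(E164_PATTERN.fullmatch(number))
+
+numbers = ["+14255550100", "+14255550101", "+14255550102"]  # placeholders
+
+bad = [n for n in numbers if not is_acceptable(n)]
+if bad:
+    raise ValueError(f"Fix these numbers before uploading: {bad}")
+
+# The Number Management Portal accepts numbers as a comma-separated list.
+print(",".join(numbers))
+```
+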
+## 1. Go to your Communications Gateway resource
+
+1. Sign in to the [Azure portal](https://azure.microsoft.com/).
+1. In the search bar at the top of the page, search for your Communications Gateway resource.
+1. Select your Communications Gateway resource.
+
+## 2. Select an enterprise customer to manage
+
+When an enterprise customer uses the Teams Admin Center to request service, the Operator Connect APIs create a **consent**. This consent represents the relationship between you and the enterprise.
+
+The Number Management Portal allows you to update the status of these consents. Finding the consent for an enterprise is also the easiest way to manage numbers for an enterprise.
+
+1. From the overview page for your Communications Gateway resource, select **Consents** in the sidebar.
+1. Find the enterprise that you want to manage.
+1. If you need to change the status of the relationship, select **Update Relationship Status** from the menu for the enterprise. Set the new status. For example, if you're agreeing to provide service to a customer, set the status to **Agreement signed**. If you set the status to **Consent Declined** or **Contract Terminated**, you must provide a reason.
+
+## 3. Manage numbers for the enterprise
+
+Assigning numbers to an enterprise allows IT administrators at the enterprise to allocate those numbers to their users.
+
+1. Go to the number management page for the enterprise.
+ * If you followed [2. Select an enterprise customer to manage](#2-select-an-enterprise-customer-to-manage), select **Manage numbers** from the menu.
+ * Otherwise, select **Numbers** in the sidebar and search for the enterprise using the enterprise's Azure Active Directory tenant ID.
+1. To add new numbers for an enterprise:
+ 1. Select **Upload numbers**.
+ 1. Fill in the fields based on the information you determined in [Prerequisites](#prerequisites). These settings apply to all the numbers you upload in the **Telephone numbers** section.
+    1. In **Telephone numbers**, upload the numbers as a comma-separated list.
+ 1. Select **Review + upload** and **Upload**. Uploading creates an order for uploading numbers over the Operator Connect API.
+ 1. Wait 30 seconds, then refresh the order status. When the order status is **Complete**, the numbers are available to the enterprise. You might need to refresh more than once.
+1. To remove numbers from an enterprise:
+ 1. Select the numbers.
+ 1. Select **Release numbers**.
+    1. Wait 30 seconds, then refresh the order status. When the order status is **Complete**, the numbers have been removed.
+
+## 4. View civic addresses for an enterprise
+
+You can view civic addresses for an enterprise. The enterprise configures the details of each civic address, so you can't configure these details.
+
+1. Go to the civic address page for the enterprise.
+ * If you followed [2. Select an enterprise customer to manage](#2-select-an-enterprise-customer-to-manage), select **Civic addresses** from the menu.
+ * Otherwise, select **Civic addresses** in the sidebar and search for the enterprise using the enterprise's Azure Active Directory tenant ID.
+1. View the civic addresses. You can see the address, the company name, the description and whether the address was validated when the enterprise configured the address.
+1. Optionally, select an individual address to view additional information provided by the enterprise (for example, the ELIN information).
+
+## Next steps
+
+Learn more about [the metrics you can use to monitor calls](monitoring-azure-communications-gateway-data-reference.md).
communications-gateway Manage Enterprise https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/manage-enterprise.md
- Title: Manage an enterprise in Azure Communications Gateway's Number Management Portal
-description: Learn how to manage enterprises and numbers for Operator Connect and Teams Phone Mobile with Azure Communication Gateway's Number Management Portal.
---- Previously updated : 07/17/2023---
-# Manage an enterprise in Azure Communications Gateway's Number Management Portal
-
-Azure Communications Gateway's Number Management Portal enables you to manage enterprise customers and their numbers through the Azure portal.
-
-The Operator Connect and Teams Phone Mobile programs don't allow you to use the Operator Connect portal for provisioning after you've launched your service in the Teams Admin Center. The Number Management Portal is a simple alternative that you can use until you've finished integrating with the Operator Connect APIs.
-
-> [!IMPORTANT]
-> You must have selected Azure Communications Gateway's API Bridge option to use the Number Management Portal.
-
-## Prerequisites
-
-Confirm that you have [!INCLUDE [project-synergy-nmp-permissions](includes/communications-gateway-nmp-project-synergy-permissions.md)] permissions for the Project Synergy enterprise application and **Reader** access to the Azure portal for your subscription. If you don't have these permissions, ask your administrator to set them up by following [Set up user roles for Azure Communications Gateway](provision-user-roles.md).
-
-If you're assigning new numbers to an enterprise customer:
-
-* You must know the numbers you need to assign (as E.164 numbers). Each number must:
- * Contain only digits (0-9), with an optional `+` at the start.
- * Include the country code.
- * Be up to 19 characters long.
-* You must have completed any internal procedures for assigning numbers.
-* You need to know the following information for each range of numbers.
-
-|Information for each range of numbers |Notes |
-|||
-|Calling profile |One of the Calling Profiles created by Microsoft for you.|
-|Intended usage | Individuals (calling users), applications or conference calls.|
-|Capabilities |Which types of call to allow (for example, inbound calls or outbound calls).|
-|Civic address | A physical location for emergency calls. The enterprise must have configured this address in the Teams Admin Center. Only required for individuals (calling users) and only if you don't allow the enterprise to update the address.|
-|Location | A description of the location for emergency calls. The enterprise must have configured this location in the Teams Admin Center. Only required for individuals (calling users) and only if you don't allow the enterprise to update the address.|
-|Whether the enterprise can update the civic address or location | If you don't allow the enterprise to update the civic address or location, you must specify a civic address or location. You can specify an address or location and also allow the enterprise to update it.|
-|Country | The country for the number. Only required if you're uploading a North American Toll-Free number, otherwise optional.|
-|Ticket number (optional) |The ID of any ticket or other request that you want to associate with this range of numbers. Up to 64 characters. |
-
-## 1. Go to your Communications Gateway resource
-
-1. Sign in to the [Azure portal](https://azure.microsoft.com/).
-1. In the search bar at the top of the page, search for your Communications Gateway resource.
-1. Select your Communications Gateway resource.
-
-## 2. Select an enterprise customer to manage
-
-When an enterprise customer uses the Teams Admin Center to request service, the Operator Connect APIs create a **consent**. This consent represents the relationship between you and the enterprise.
-
-The Number Management Portal allows you to update the status of these consents. Finding the consent for an enterprise is also the easiest way to manage numbers for an enterprise.
-
-1. From the overview page for your Communications Gateway resource, select **Consents** in the sidebar.
-1. Find the enterprise that you want to manage.
-1. If you need to change the status of the relationship, select **Update Relationship Status** from the menu for the enterprise. Set the new status. For example, if you're agreeing to provide service to a customer, set the status to **Agreement signed**. If you set the status to **Consent Declined** or **Contract Terminated**, you must provide a reason.
-
-## 3. Manage numbers for the enterprise
-
-Assigning numbers to an enterprise allows IT administrators at the enterprise to allocate those numbers to their users.
-
-1. Go to the number management page for the enterprise.
- * If you followed [2. Select an enterprise customer to manage](#2-select-an-enterprise-customer-to-manage), select **Manage numbers** from the menu.
- * Otherwise, select **Numbers** in the sidebar and search for the enterprise using the enterprise's Azure Active Directory tenant ID.
-1. To add new numbers for an enterprise:
- 1. Select **Upload numbers**.
- 1. Fill in the fields based on the information you determined in [Prerequisites](#prerequisites). These settings apply to all the numbers you upload in the **Telephone numbers** section.
- 1. In **Telephone numbers**, upload the numbers, as a comma-separated list.
- 1. Select **Review + upload** and **Upload**. Uploading creates an order for uploading numbers over the Operator Connect API.
- 1. Wait 30 seconds, then refresh the order status. When the order status is **Complete**, the numbers are available to the enterprise. You might need to refresh more than once.
-1. To remove numbers from an enterprise:
- 1. Select the numbers.
- 1. Select **Release numbers**.
- 1. 1. Wait 30 seconds, then refresh the order status. When the order status is **Complete**, the numbers have been removed.
-
-## 4. View civic addresses for an enterprise
-
-You can view civic addresses for an enterprise. The enterprise configures the details of each civic address, so you can't configure these details.
-
-1. Go to the civic address page for the enterprise.
- * If you followed [2. Select an enterprise customer to manage](#2-select-an-enterprise-customer-to-manage), select **Civic addresses** from the menu.
- * Otherwise, select **Civic addresses** in the sidebar and search for the enterprise using the enterprise's Azure Active Directory tenant ID.
-1. View the civic addresses. You can see the address, the company name, the description and whether the address was validated when the enterprise configured the address.
-1. Optionally, select an individual address to view additional information provided by the enterprise (for example, the ELIN information).
-
-## Next steps
-
-Learn more about [the metrics you can use to monitor calls](monitoring-azure-communications-gateway-data-reference.md).
communications-gateway Mobile Control Point https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/mobile-control-point.md
For example, you could use the following iFC (replacing *`<mcp-hostname>`* with
</SPT> </TriggerPoint> <ApplicationServer>
- <ServerName>sips:<mcp-hostname>;transport=tcp;service=mcp</ServerName>
+ <ServerName>sip:<mcp-hostname>;transport=tcp;service=mcp</ServerName>
<DefaultHandling>0</DefaultHandling> </ApplicationServer>
- <ProfilePartIndicator>0</ProfilePartIndicator>
</InitialFilterCriteria> ``` ## Next steps -- Learn about [preparing to deploy Integrated Mobile Control Point in Azure Communications Gateway](prepare-to-deploy.md)-- Learn how to [integrate Azure Communications Gateway with Integrated Mobile Control Point with your network](prepare-for-live-traffic.md)
+- Learn about [preparing to deploy Integrated Mobile Control Point in Azure Communications Gateway](prepare-to-deploy.md)
+- Learn how to [integrate Azure Communications Gateway with Integrated Mobile Control Point with your network](prepare-for-live-traffic-operator-connect.md)
communications-gateway Monitor Azure Communications Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/monitor-azure-communications-gateway.md
Previously updated : 01/25/2023 Last updated : 08/23/2023 # Monitoring Azure Communications Gateway
-When you have critical applications and business processes relying on Azure resources, you want to monitor those resources for their availability, performance, and operation. This article describes the monitoring data generated by Azure Communications Gateway and how you can use the features of Azure Monitor to analyze and alert on this data.
-
-This article describes the monitoring data generated by Azure Communications Gateway. Azure Communications Gateway uses [Azure Monitor](../azure-monitor/overview.md). If you're unfamiliar with the features of Azure Monitor common to all Azure services that use it, read [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md).
+When you have critical applications and business processes relying on Azure resources, you want to monitor those resources for their availability, performance, and operation. This article describes how you can use Azure Monitor and Azure Resource Health to monitor your Azure Communications Gateway.
+If you notice any concerning resource health indicators or metrics, you can [raise a support ticket](request-changes.md).
## What is Azure Monitor?
The following sections build on this article by describing the specific data gat
> [!TIP] > To understand costs associated with Azure Monitor, see [Usage and estimated costs](../azure-monitor/usage-estimated-costs.md). -
-## Monitoring data
+## Azure Monitor data for Azure Communications Gateway
Azure Communications Gateway collects metrics. See [Monitoring Azure Communications Gateway data reference](monitoring-azure-communications-gateway-data-reference.md) for detailed information on the metrics created by Azure Communications Gateway. Azure Communications Gateway doesn't collect logs. For clarification on the different types of metrics available in Azure Monitor, see [Monitoring data from Azure resources](../azure-monitor/essentials/monitor-azure-resource.md#monitoring-data-from-azure-resources).
-## Analyzing metrics
+## Analyzing, filtering and splitting metrics in Azure Monitor
You can analyze metrics for Azure Communications Gateway, along with metrics from other Azure services, by opening **Metrics** from the **Azure Monitor** menu. See [Getting started with Azure Metrics Explorer](../azure-monitor/essentials/metrics-getting-started.md) for details on using this tool.
-For a list of the metrics collected, see [Monitoring Azure Communications Gateway data reference](monitoring-azure-communications-gateway-data-reference.md).
-
-## Filtering and splitting
- All Azure Communications Gateway metrics support the **Region** dimension, allowing you to filter any metric by the Service Locations defined in your Azure Communications Gateway resource. You can also split a metric by the **Region** dimension to visualize how different segments of the metric compare with each other.
-For more information of filtering and splitting, see [Advanced features of Azure Monitor](../azure-monitor/essentials/metrics-charts.md).
+For more information on filtering and splitting, see [Advanced features of Azure Monitor](../azure-monitor/essentials/metrics-charts.md).
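+
+For example, the following sketch uses the `azure-monitor-query` and `azure-identity` Python packages to split a metric by the **Region** dimension. The resource ID format and metric name are placeholders: substitute your own values, taking real metric names from [Monitoring Azure Communications Gateway data reference](monitoring-azure-communications-gateway-data-reference.md).
+
+```python
+from azure.identity import DefaultAzureCredential
+from azure.monitor.query import MetricAggregationType, MetricsQueryClient
+
+# Placeholder resource ID: substitute your Azure Communications Gateway resource.
+resource_id = (
+    "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
+    "/providers/Microsoft.VoiceServices/communicationsGateways/<gateway-name>"
+)
+
+client = MetricsQueryClient(DefaultAzureCredential())
+
+# The filter "Region eq '*'" asks Azure Monitor to return a separate time
+# series for each Service Location, splitting the metric by Region.
+response = client.query_resource(
+    resource_id,
+    metric_names=["<metric-name>"],  # take real names from the data reference
+    aggregations=[MetricAggregationType.AVERAGE],
+    filter="Region eq '*'",
+)
+
+for metric in response.metrics:
+    for series in metric.timeseries:
+        print(series.metadata_values, [point.average for point in series.data])
+```
+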
-## Alerts
+## Alerts in Azure Monitor
Azure Monitor alerts proactively notify you when important conditions are found in your monitoring data. They allow you to identify and address issues in your system before your customers notice them. You can set alerts on [metrics](/azure/azure-monitor/alerts/alerts-metric-overview) and the [activity log](/azure/azure-monitor/alerts/activity-log-alerts). Different types of alerts have benefits and drawbacks.
+## What is Azure Resource Health?
+
+Azure Resource Health provides a personalized dashboard of the health of your resources. This dashboard helps you diagnose and get support for service problems that affect your Azure resources. It reports on the current and past health of your resources.
+
+Resource Health reports one of the following statuses for each resource.
+
+- *Available*: there are no known problems with your resource.
+- *Degraded*: a problem with your resource is reducing its performance.
+- *Unavailable*: there's a significant problem with your resource or with the Azure platform. For Azure Communications Gateway, this status usually means that calls can't be handled.
+- *Unknown*: Resource Health hasn't received information about the resource for more than 10 minutes.
+
+For more information, see [Resource Health overview](../service-health/resource-health-overview.md).
+
+## Using Azure Resource Health
+
+To access Resource Health for Azure Communications Gateway:
+
+1. Sign in to the [Azure portal](https://azure.microsoft.com/).
+1. In the search bar at the top of the page, search for your Communications Gateway resource.
+1. Select your Communications Gateway resource.
+1. In the menu in the left pane, select **Resource health**.
+
+You can also [configure Resource Health alerts in the Azure portal](../service-health/resource-health-alert-monitor-guide.md). These alerts can notify you in near real-time when these resources have a change in their health status.
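+
+If you prefer a programmatic check, the following Python sketch reads the current availability status over the ARM REST API. It's an illustration only: the resource ID is a placeholder and the API version shown is an assumption, so check the Resource Health REST reference for the current version. It assumes the `azure-identity` and `requests` packages.
+
+```python
+import requests
+from azure.identity import DefaultAzureCredential
+
+# Placeholder resource ID: substitute your Azure Communications Gateway resource.
+resource_id = (
+    "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
+    "/providers/Microsoft.VoiceServices/communicationsGateways/<gateway-name>"
+)
+
+token = DefaultAzureCredential().get_token("https://management.azure.com/.default")
+url = (
+    f"https://management.azure.com{resource_id}"
+    "/providers/Microsoft.ResourceHealth/availabilityStatuses/current"
+)
+
+response = requests.get(
+    url,
+    params={"api-version": "2020-05-01"},  # assumed version: check the REST reference
+    headers={"Authorization": f"Bearer {token.token}"},
+    timeout=30,
+)
+response.raise_for_status()
+
+# One of Available, Degraded, Unavailable or Unknown, as described above.
+print(response.json()["properties"]["availabilityState"])
+```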
+ ## Next steps - See [Monitoring Azure Communications Gateway data reference](monitoring-azure-communications-gateway-data-reference.md) for a reference of the metrics, logs, and other important values created by Azure Communications Gateway.--- See [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md) for details on monitoring Azure resources.
+- See [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md) for an overview of monitoring Azure resources.
+- See [Resource Health overview](../service-health/resource-health-overview.md) for an overview of Resource Health.
communications-gateway Monitoring Azure Communications Gateway Data Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/monitoring-azure-communications-gateway-data-reference.md
Azure Communications Gateway has the following dimensions associated with its me
| **Region** | The Service Locations defined in your Azure Communications Gateway resource. |
-## See Also
+## Next steps
+ - See [Monitoring Azure Communications Gateway](monitor-azure-communications-gateway.md) for a description of monitoring Azure Communications Gateway. - See [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md) for details on monitoring Azure resources.
communications-gateway Onboarding https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/onboarding.md
To launch Operator Connect and/or Teams Phone Mobile, you need an onboarding partner. Launching requires changes to the Operator Connect or Teams Phone Mobile environments and your onboarding partner manages the integration process and coordinates with Microsoft Teams on your behalf. They can also help you design and set up your network for success.
-We provide a customer success program and onboarding service called Included Benefits for operators deploying Azure Communications Gateway. We work with your team to enable rapid and effective solution design and deployment. The program includes tailored guidance from Azure for Operators engineers, using proven practices and architectural guides.
+We provide a customer success program and onboarding service called _Included Benefits_ for operators deploying Azure Communications Gateway. We work with your team to enable rapid and effective solution design and deployment. The program includes tailored guidance from Azure for Operators engineers, using proven practices and architectural guides.
## Eligibility for Included Benefits and alternatives
Your responsibilities include:
## Next steps - [Review the reliability requirements for Azure Communications Gateway](reliability-communications-gateway.md).-- [Review the interoperability function of Azure Communications Gateway](interoperability.md).-- [Prepare to deploy Azure Communications Gateway](prepare-to-deploy.md).
+- [Learn more about planning an Azure Communications Gateway deployment](get-started.md).
+
communications-gateway Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/overview.md
Previously updated : 04/26/2023 Last updated : 09/06/2023
Azure Communications Gateway enables Microsoft Teams calling through the Operator Connect and Teams Phone Mobile programs for your telecommunications network. Azure Communications Gateway is certified as part of the Operator Connect Accelerator program. It provides Voice and IT integration with Microsoft Teams across both fixed and mobile networks.
-> [!IMPORTANT]
-> You must sign an Operator Connect or Teams Phone Mobile agreement with Microsoft to use this service.
:::image type="complex" source="media/azure-communications-gateway-overview.png" alt-text="Diagram that shows Azure Communications Gateway between Microsoft Phone System and your networks. Your networks can be fixed and/or mobile."::: Diagram that shows how Azure Communications Gateway connects to the Microsoft Phone System and to your fixed and mobile networks. Microsoft Teams clients connect to the Microsoft Phone system. Your fixed network connects to PSTN endpoints. Your mobile network connects to Teams Phone Mobile users. :::image-end:::
-Azure Communications Gateway provides advanced SIP, RTP and HTTP interoperability functions (including Teams Certified SBC function) so that you can integrate with Operator Connect and Teams Phone Mobile quickly, reliably and in a secure manner. As part of Microsoft Azure, the network elements in Azure Communications Gateway are fully managed and include an availability SLA. This full management simplifies network operations integration and accelerates the timeline for adding new network functions into production.
+Azure Communications Gateway provides advanced SIP, RTP and HTTP interoperability functions (including Teams Certified SBC function) so that you can integrate with Operator Connect and Teams Phone Mobile quickly, reliably and in a secure manner.
+
+As part of Microsoft Azure, the network elements in Azure Communications Gateway are fully managed and include an availability SLA. This full management simplifies network operations integration and accelerates the timeline for adding new network functions into production.
## Architecture
Azure Communications Gateway acts as the edge of your network, ensuring complian
To ensure availability, Azure Communications Gateway is deployed into two Azure Regions within a given Geography. It supports both active-active and primary-backup geographic redundancy models to fit with your network design.
-Connectivity between your network and Azure Communications Gateway must meet the Microsoft Teams _Network Connectivity Specification_. Azure Communications Gateway supports Microsoft Azure Peering Service (MAPS) for connectivity to on-premises environments, in line with this specification.
+Connectivity between your network and Azure Communications Gateway must meet the Microsoft Teams _Network Connectivity Specification_. You can achieve this using one of the following connectivity models.
+ The sites in your network must have cross-connects between them. You must also set up your routing so that each site in your deployment can route to both Azure Regions.
Azure Communications Gateway supports the SIP and RTP requirements for Teams Cer
Azure Communications Gateway's voice features include: -- **Optional direct peering to Emergency Routing Service Providers (US only)** - If your network can't transmit Emergency location information in PIDF-LO (Presence Information Data Format Location Object) SIP bodies, Azure Communications Gateway can connect directly to your chosen Teams-certified Emergency Routing Service Provider (ERSP) instead. See [Emergency calling with Azure Communications Gateway](emergency-calling.md). - **Voice interworking** - Azure Communications Gateway can resolve interoperability issues between your network and Microsoft Teams. Its position on the edge of your network reduces disruption to your networks, especially in complex scenarios like Teams Phone Mobile where Teams Phone System is the call control element. Azure Communications Gateway includes powerful interworking features, for example: - 100rel and early media inter-working
Azure Communications Gateway's voice features include:
- Media transcoding - Ringback injection - **Call control integration for Teams Phone Mobile** - Azure Communications Gateway includes an optional IMS Application Server called Mobile Control Point (MCP). MCP ensures calls are only routed to the Microsoft Phone System when a user is eligible for Teams Phone Mobile services. This process minimizes the changes you need in your mobile network to route calls into Microsoft Teams. For more information, see [Mobile Control Point in Azure Communications Gateway for Teams Phone Mobile](mobile-control-point.md).
+- **Optional direct peering to Emergency Routing Service Providers for Operator Connect and Teams Phone Mobile (US only)** - If your network can't transmit Emergency location information in PIDF-LO (Presence Information Data Format Location Object) SIP bodies, Azure Communications Gateway can connect directly to your chosen Teams-certified Emergency Routing Service Provider (ERSP) instead. See [Emergency calling for Operator Connect and Teams Phone Mobile with Azure Communications Gateway](emergency-calling-operator-connect.md).
-## API features
+## Number Management Portal for provisioning for Operator Connect and Teams Phone Mobile
-Azure Communications Gateway includes optional API integration features. These features can help you to speed up your rollout and monetization of Teams Calling support.
+Launching Operator Connect or Teams Phone Mobile requires you to use the Operator Connect APIs to provision subscribers (instead of the Operator Connect Portal). Azure Communications Gateway offers a Number Management Portal integrated into the Azure portal. This portal uses the Operator Connect APIs, allowing you to pass the certification process and sell Operator Connect or Teams Phone Mobile services while you carry out a custom API integration project.
-### Number Management Portal
-
-Operator Connect and Teams Phone Mobile require API integration between your IT systems and Microsoft Teams for flow-through provisioning and automation. After your deployment has been certified and launched, you must not use the Operator Connect portal for provisioning. You can use Azure Communications Gateway's Number Management Portal instead. This Azure portal feature enables you to pass the certification process and sell Operator Connect or Teams Phone Mobile services while you carry out a custom API integration project.
+For more information, see [Number Management Portal for provisioning with Operator Connect APIs](interoperability-operator-connect.md#number-management-portal-for-provisioning-with-operator-connect-apis) and [Manage an enterprise with Azure Communications Gateway's Number Management Portal for Operator Connect and Teams Phone Mobile](manage-enterprise-operator-connect.md).
The Number Management Portal is available as part of the optional API Bridge feature. > [!TIP] > The Number Management Portal does not allow your enterprise customers to manage Teams Calling. For example, it does not provide self-service portals.
-### CallDuration upload
+## API integration
+
+Azure Communications Gateway includes API integration features. These features can help you to speed up your rollout and monetization of Teams Calling support.
-Azure Communications Gateway can use the Operator Connect APIs to upload information about the duration of individual calls into the Microsoft Teams environment. This allows Microsoft Teams clients to display the call duration recorded by your network, instead of the call duration recorded by Microsoft Teams. Providing this information to Microsoft Teams is a requirement of the Operator Connect program that Azure Communications Gateway performs on your behalf.
+These features include:
+- The Number Management Portal for provisioning (part of the optional API Bridge feature), as described in [Number Management Portal for provisioning for Operator Connect and Teams Phone Mobile](#number-management-portal-for-provisioning-for-operator-connect-and-teams-phone-mobile).
+- For Operator Connect and Teams Phone Mobile programs, upload of call duration data to Microsoft Teams. For more information, see [Providing call duration data to Microsoft Teams](interoperability-operator-connect.md#providing-call-duration-data-to-microsoft-teams).
## Next steps -- [Learn how Azure Communications Gateway fits into your network](interoperability.md).-- [Learn about onboarding to Microsoft Teams and Azure Communications Gateway's Included Benefits](onboarding.md).-- [Prepare to deploy Azure Communications Gateway](prepare-to-deploy.md).
+- [Learn how to get started with Azure Communications Gateway](get-started.md).
+- [Learn how Azure Communications Gateway fits into your network](role-in-network.md).
+- [Learn about the latest Azure Communications Gateway features](whats-new.md).
communications-gateway Plan And Manage Costs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/plan-and-manage-costs.md
Previously updated : 12/06/2022 Last updated : 09/06/2023 # Plan and manage costs for Azure Communications Gateway
-This article describes how you plan for and manage costs for Azure Communications Gateway.
+This article describes how you're charged for Azure Communications Gateway and how you can plan for and manage these costs.
After you've started using Azure Communications Gateway, you can use Cost Management features to set budgets and monitor costs. You can also review forecasted costs and identify spending trends to identify areas where you might want to act.
Costs for Azure Communications Gateway are only a portion of the monthly costs i
## Prerequisites
-Cost analysis in Cost Management supports most Azure account types, but not all of them. To view the full list of supported account types, see [Understand Cost Management data](../cost-management-billing/costs/understand-cost-mgt-data.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn). To view cost data, you need at least read access for an Azure account. For information about assigning access to Azure Cost Management data, see [Assign access to data](../cost-management/assign-access-acm-data.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
+Cost analysis in Cost Management supports most Azure account types, but not all of them. To view the full list of supported account types, see [Understand Cost Management data](../cost-management-billing/costs/understand-cost-mgt-data.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn). To view cost data, you need at least read access for an Azure account. For information about assigning access to Microsoft Cost Management data, see [Assign access to data](../cost-management-billing/costs/assign-access-acm-data.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
## Understand the full billing model for Azure Communications Gateway
Azure Communications Gateway runs on Azure infrastructure that accrues costs whe
### How you're charged for Azure Communications Gateway
-When you deploy or use Azure Communications Gateway, you'll be charged for your use of the voice features of the product. The charges are based on the number of users assigned to the platform by a series of SBC User meters. The meters include:
+When you deploy Azure Communications Gateway, you're charged for how you use the voice features of the product. The charges are based on the number of users assigned to the platform by a series of Azure Communications Gateway meters. The meters include:
-- A "service availability" meter that is charged hourly and includes the use of 1000 users for testing and early adoption.-- A series of tiered per-user meters that charge based on the number of users that are assigned to the deployment. The per-user fee is based on the maximum number of users during your billing cycle, excluding the 1000 users included in the service availability fee.
+- A "Fixed Network Service Fee" or a "Mobile Network Service Fee" meter.
+ - This meter is charged hourly and includes the use of 999 users for testing and early adoption.
+ - If your deployment includes fixed networks and mobile networks, you're charged the Mobile Network Service Fee.
+- A series of tiered per-user meters that charge based on the number of users that are assigned to the deployment. These per-user fees are based on the maximum number of users during your billing cycle, excluding the 999 users included in the service availability fee.
-For example, if you have 28,000 users assigned to the deployment each month you'll pay:
+For example, if you have 28,000 users assigned to the deployment each month, you're charged for:
* The service availability fee for each hour in the month
-* 24,000 users in the 1001-25000 tier
+* 24,001 users in the 1000-25000 tier
* 3000 users in the 25001-100000 tier > [!TIP] > If you receive a quote through Microsoft Volume Licensing, pricing may be presented as aggregated so that the values are easily readable (for example showing the per-user meters in groups of 10 or 100 rather than the pricing for individual users). This does not impact the way you will be billed.
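+
+As a worked version of this example, the following Python sketch splits a month's maximum user count across the per-user tiers. The tier boundaries come from this article; the sketch doesn't calculate charges, because per-user rates depend on your pricing agreement.
+
+```python
+INCLUDED_USERS = 999  # covered by the hourly service fee meter
+
+def tier_breakdown(total_users: int) -> dict[str, int]:
+    """Split a month's maximum user count across the per-user meters."""
+    billable = max(total_users - INCLUDED_USERS, 0)
+    tier_1 = min(billable, 25_000 - INCLUDED_USERS)    # the 1000-25000 tier
+    tier_2 = min(billable - tier_1, 100_000 - 25_000)  # the 25001-100000 tier
+    return {"1000-25000": tier_1, "25001-100000": tier_2}
+
+print(tier_breakdown(28_000))  # {'1000-25000': 24001, '25001-100000': 3000}
+```
+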
-If you choose to deploy the Number Management Portal by selecting the API Bridge option, you'll also be charged for the Number Management Portal. Fees work in the same way as the SBC User meters: a service availability meter and a per-user meter. The number of users charged for the Number Management Portal is always the same as the number of users charged on the SBC User meters.
+If you choose to deploy the Number Management Portal by selecting the API Bridge option, you'll also be charged for the Number Management Portal. Fees work in the same way as the other meters: a service fee meter and a per-user meter. The number of users charged for the Number Management Portal is always the same as the number of users charged on the other Azure Communications Gateway meters.
> [!NOTE] > A user is any telephone number that meets all the following criteria.
If your Azure subscription has a spending limit, Azure prevents you from spendin
### Other costs that might accrue with Azure Communications Gateway
-You'll need to pay for Azure networking costs, because these costs aren't included in the Azure Communications Gateway meters.
+You must pay for Azure networking costs, because these costs aren't included in the Azure Communications Gateway meters.
- If you're connecting to the public internet with Microsoft Azure Peering Service (MAPS), you might need to pay a third party for the cross-connect at the exchange location.
-- If you're connecting into Azure as a next hop, you might need to pay vNet peering costs.
+- If you're connecting to the public internet with ExpressRoute Microsoft Peering, you must purchase ExpressRoute circuits with a specified bandwidth and data billing model.
+- If you're connecting into Azure as a next hop, you might need to pay virtual network peering costs.
### Costs if you cancel or change your deployment
-If you cancel Azure Communications Gateway, your final bill or invoice will only include charges on service-availability meters for the part of the billing cycle before you cancel. Per-user meters charge for the entire billing cycle.
+If you cancel Azure Communications Gateway, your final bill or invoice includes charges on service fee meters for the part of the billing cycle before you cancel. Per-user meters charge for the entire billing cycle.
-You'll need to remove any networking resources that you set up for Azure Communications Gateway. For example, if you're connecting into Azure as a next hop, you'll need to remove the vNet peering. Otherwise, you'll still be charged for those networking resources.
+You must remove any networking resources that you set up for Azure Communications Gateway. For example, if you're connecting into Azure as a next hop, you must remove the virtual network peering. Otherwise, you'll still be charged for those networking resources.
-If you have multiple Azure Communications Gateway deployments and you move users between deployments, these users will count towards meters in both deployments. This double counting only applies to the billing cycle in which you move the subscribers; in the next billing cycle, the users will only count towards meters in their new deployment.
+If you have multiple Azure Communications Gateway deployments and you move users between deployments, these users count towards meters in both deployments. This double counting only applies to the billing cycle in which you move the subscribers; in the next billing cycle, the users only count towards meters in their new deployment.
### Using Azure Prepayment with Azure Communications Gateway
You can also [export your cost data](../cost-management-billing/costs/tutorial-e
## Next steps

- View [Azure Communications Gateway pricing](https://azure.microsoft.com/pricing/details/communications-gateway/).
-- Learn [how to optimize your cloud investment with Azure Cost Management](../cost-management-billing/costs/cost-mgt-best-practices.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
+- Learn [how to optimize your cloud investment with Microsoft Cost Management](../cost-management-billing/costs/cost-mgt-best-practices.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
- Learn more about managing costs with [cost analysis](../cost-management-billing/costs/quick-acm-cost-analysis.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
- Learn about how to [prevent unexpected costs](../cost-management-billing/understand/analyze-unexpected-charges.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
- Take the [Cost Management](/training/paths/control-spending-manage-bills?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) guided learning course.
communications-gateway Prepare For Live Traffic Operator Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/prepare-for-live-traffic-operator-connect.md
+
+ Title: Prepare for live traffic with Azure Communications Gateway
+description: After deploying Azure Communications Gateway, you and your onboarding team must carry out further integration work before you can launch your Teams Phone Mobile or Operator Connect service.
+ Last updated : 09/01/2023
+# Prepare for live traffic with Operator Connect, Teams Phone Mobile and Azure Communications Gateway
+
+Before you can launch your Operator Connect or Teams Phone Mobile service, you and your onboarding team must:
+
+- Test your service.
+- Prepare for launch.
+
+In this article, you learn about the steps you and your onboarding team must take.
+
+> [!TIP]
+> In many cases, your onboarding team is from Microsoft, provided through the [Included Benefits](onboarding.md) or through a separate arrangement.
+
+> [!IMPORTANT]
+> Some steps can require days or weeks to complete. For example, you'll need to wait at least seven days for automated testing of your deployment and schedule your launch date at least two weeks in advance. We recommend that you read through these steps in advance to work out a timeline.
+
+## Prerequisites
+
+- You must have [deployed Azure Communications Gateway](deploy.md) using the Microsoft Azure portal and [connected it to Operator Connect or Teams Phone Mobile](connect-operator-connect.md).
+- You must have [chosen some test numbers](deploy.md#prerequisites).
+- You must have a tenant you can use for testing (representing an enterprise customer), and some users in that tenant to whom you can assign the test numbers.
+ - If you do not already have a suitable test tenant, you can use the [Microsoft 365 Developer Program](https://developer.microsoft.com/microsoft-365/dev-program), which provides E5 licenses.
+ - The test users must be licensed for Teams Phone System and in Teams Only mode.
+- You must have access to the following configuration portals.
+
+ |Configuration portal |Required permissions |
+ |||
+ |[Operator Connect portal](https://operatorconnect.microsoft.com/) | `Admin` role or `PartnerSettings.Read` and `NumberManagement.Write` roles (configured on the Project Synergy enterprise application that you set up when [connected to Operator Connect or Teams Phone Mobile](connect-operator-connect.md#1-add-the-project-synergy-application-to-your-azure-tenancy))|
+ |[Teams Admin Center](https://admin.teams.microsoft.com/) for your test tenant |User management|
++
+## Methods
+
+In some parts of this article, the steps you must take depend on whether your deployment includes the Number Management Portal. This article provides instructions for both types of deployment. Choose the appropriate instructions.
+
+## 1. Ask your onboarding team to register your test enterprise tenant
+
+Your onboarding team must register the test enterprise tenant that you chose in [Prerequisites](#prerequisites) with Microsoft Teams.
+
+1. Find your company's "Operator ID" in your [operator configuration in the Operator Connect portal](https://operatorconnect.microsoft.com/operator/configuration).
+1. Provide your onboarding contact with:
+ - Your company's name.
+ - Your company's Operator ID.
+ - The ID of the tenant to use for testing.
+1. Wait for your onboarding team to confirm that your test tenant has been registered.
+
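If you need to look up the ID of the test tenant, here's a minimal sketch (assuming the Az PowerShell module and a sign-in that works in the test tenant; the domain is a placeholder):

```azurepowershell
# Sign in to the test tenant and print its tenant ID.
# "<test-tenant-domain>" is a placeholder for your test tenant's domain.
Connect-AzAccount -Tenant "<test-tenant-domain>"
(Get-AzContext).Tenant.Id
```
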
+## 2. Assign numbers to test users in your tenant
+
+1. Ask your onboarding team for the name of the Calling Profile that you must use for these test numbers. The name typically has the suffix `commsgw`. This Calling Profile has been created for you during the Azure Communications Gateway deployment process.
+1. In your test tenant, request service from your company.
+ 1. Sign in to the [Teams Admin Center](https://admin.teams.microsoft.com/) for your test tenant.
+ 1. Select **Voice** > **Operators**.
+ 1. Select your company in the list of operators, fill in the form and select **Add as my operator**.
+1. In your test tenant, create some test users (if you don't already have suitable users). License the users for Teams Phone System and place them in Teams Only mode.
+1. Configure emergency locations in your test tenant.
+1. Upload numbers in the Number Management Portal (if you chose to deploy it as part of Azure Communications Gateway) or the Operator Connect Operator Portal. Use the Calling Profile that you obtained from your onboarding team.
+
+ # [Number Management Portal](#tab/number-management-portal)
+
+ 1. Sign in to the [Azure portal](https://azure.microsoft.com/).
+ 1. In the search bar at the top of the page, search for your Communications Gateway resource.
+ 1. Select your Communications Gateway resource.
+ 1. On the overview page, select **Consents** in the sidebar.
+ 1. Select your test tenant.
+ 1. From the menu, select **Update Relationship Status**. Set the status to **Agreement signed**.
+ 1. From the menu, select **Manage Numbers**.
+ 1. Select **Upload numbers**.
+ 1. Fill in the fields as required, and then select **Review + upload** and **Upload**.
+
+ # [Operator Portal](#tab/no-number-management-portal)
+
+ 1. Open the Operator Portal.
+ 1. Select **Customer Consents**.
+ 1. Select your test tenant.
+ 1. Select **Update Relationship**. Set the status to **Agreement signed**.
+ 1. Select the link for your test tenant. The link opens **Number Management** > **Manage by Tenant**.
+ 1. Select **Upload Numbers**.
+ 1. Fill in the fields as required, and then select **Submit**.
+
+
+1. In your test tenant, assign these numbers to your test users.
+ 1. Sign in to the Teams Admin Center for your test tenant.
+ 1. Select **Voice** > **Phone numbers**.
+ 1. Select a number, then select **Edit**.
+ 1. Assign the number to a user.
+ 1. Repeat for all your test users.
+
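As an alternative to the Teams Admin Center steps above, you can script the number assignment. The following is a sketch assuming the MicrosoftTeams PowerShell module; the user and number are placeholders:

```azurepowershell
# Illustrative only: assign an uploaded Operator Connect number to a test user.
Connect-MicrosoftTeams
Set-CsPhoneNumberAssignment -Identity "testuser1@<test-tenant-domain>" `
  -PhoneNumber "+12065551234" -PhoneNumberType OperatorConnect
```
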
+## 3. Carry out integration testing and request changes
+
+Network integration includes identifying SIP interoperability requirements and configuring devices to meet these requirements. For example, this process often includes interworking header formats and adapting the signaling and media flows used for call hold and session refresh.
+
+You must test typical call flows for your network. Your onboarding team will provide an example test plan that we recommend you follow. Your test plan should include call flow, failover, and connectivity testing.
+
+- If you decide that you need changes to Azure Communications Gateway, ask your onboarding team. Microsoft will make the changes for you.
+- If you need changes to the configuration of devices in your core network, you must make those changes.
+
+## 4. Run a connectivity test and upload proof
+
+Before you can launch, Microsoft Teams requires proof that your network is properly connected to Microsoft's network.
+
+1. Provide your onboarding team with proof that BFD is enabled. You should have enabled BFD in [8. Connect Azure Communications Gateway to your networks](deploy.md#8-connect-azure-communications-gateway-to-your-networks) when you deployed Azure Communications Gateway. For example, if you have a Cisco router, you can provide configuration similar to the following.
+
+ ```text
+ interface TenGigabitEthernet2/0/0.150
+ description private peering to Azure
+ encapsulation dot1Q 15 second-dot1q 150
+ ip vrf forwarding 15
+ ip address 192.168.15.17 255.255.255.252
+ bfd interval 150 min_rx 150 multiplier 3
+
+ router bgp 65020
+ address-family ipv4 vrf 15
+ network 10.1.15.0 mask 255.255.255.128
+ neighbor 192.168.15.18 remote-as 12076
+ neighbor 192.168.15.18 fall-over bfd
+ neighbor 192.168.15.18 activate
+ neighbor 192.168.15.18 soft-reconfiguration inbound
+ exit-address-family
+ ```
+
+1. Test failover of the connectivity to your network. Your onboarding team will work with you to plan this testing and gather the required evidence.
+1. Work with your onboarding team to validate emergency call handling.
+
+## 5. Get your go-to-market resources approved
+
+Before you can go live, you must get your customer-facing materials approved by Microsoft Teams. Provide the following to your onboarding team for review.
+
+- Press releases and other marketing material
+- Content for your landing page
+- Logo for the Microsoft Teams Operator Directory (200 px by 200 px)
+- Logo for the Microsoft Teams Admin Center (170 px by 90 px)
+
+## 6. Test raising a ticket
+
+You must test that you can raise tickets in the Azure portal to report problems with Azure Communications Gateway. See [Get support or request changes for Azure Communications Gateway](request-changes.md).
+
+## 7. Learn about monitoring Azure Communications Gateway
+
+Your staff can use a selection of key metrics to monitor Azure Communications Gateway. These metrics are available to anyone with the Reader role on the subscription for Azure Communications Gateway. See [Monitoring Azure Communications Gateway](monitor-azure-communications-gateway.md).
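For example, you could pull a metric programmatically. This is a sketch assuming the Az.Monitor module; the resource ID shape and the metric name are placeholders to replace with values from your deployment's Metrics blade:

```azurepowershell
# Illustrative only: query 24 hours of a platform metric for the deployment.
$resourceId = "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.VoiceServices/communicationsGateways/<name>"
Get-AzMetric -ResourceId $resourceId -MetricName "<metric-name>" `
  -StartTime (Get-Date).AddHours(-24) -EndTime (Get-Date)
```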
+
+## 8. Verify API integration
+
+Your onboarding team must provide Microsoft with proof that you have integrated with the Microsoft Teams Operator Connect API for provisioning.
+
+# [Number Management Portal](#tab/number-management-portal)
+
+If you have the Number Management Portal, your onboarding team can obtain proof automatically. You don't need to do anything.
+
+# [Without the Number Management Portal](#tab/no-number-management-portal)
+
+If you don't have the Number Management Portal, you must provide your onboarding team with proof that you have made successful API calls for:
+
+- Partner consent
+- TN Upload to Account
+- Unassign TN
+- Release TN
+++
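If you're scripting these calls, the general shape is a bearer-token REST request. This is a hypothetical sketch (assuming the Az.Accounts module); the resource URI and endpoint are placeholders, so use the values in the Operator Connect API documentation from your onboarding team:

```azurepowershell
# Hypothetical sketch: call a provisioning endpoint with a bearer token.
# Both URIs below are placeholders, not real endpoints.
$token = (Get-AzAccessToken -ResourceUrl "<operator-connect-api-resource-uri>").Token
Invoke-RestMethod -Method Get -Uri "https://<operator-connect-api-host>/<endpoint>" `
  -Headers @{ Authorization = "Bearer $token" }
```
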
+## 9. Arrange synthetic testing
+
+Your onboarding team must arrange synthetic testing of your deployment. This synthetic testing is a series of automated tests lasting at least seven days. It verifies the most important metrics for quality of service and availability.
+
+After launch, synthetic traffic will be sent through your deployment using your test numbers. This traffic is used to continuously check the health of your deployment.
+
+## 10. Schedule launch
+
+Your launch date is the date that you'll appear to enterprises in the Teams Admin Center. Your onboarding team must arrange this date by making a request to Microsoft Teams.
+
+Your service can be launched on specific dates each month. Your onboarding team must submit the request at least two weeks before your preferred launch date.
+
+## Next steps
+
+- Wait for your launch date.
+- Learn about [getting support and requesting changes for Azure Communications Gateway](request-changes.md).
+- Learn about [using the Number Management Portal to manage enterprises](manage-enterprise-operator-connect.md).
+- Learn about [monitoring Azure Communications Gateway](monitor-azure-communications-gateway.md).
communications-gateway Prepare For Live Traffic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/prepare-for-live-traffic.md
- Title: Prepare for live traffic with Azure Communications Gateway
-description: After deploying Azure Communications Gateway, you and your onboarding team must carry out further integration work before you can launch your service.
- Previously updated : 07/18/2023
-# Prepare for live traffic with Azure Communications Gateway
-
-Before you can launch your Operator Connect or Teams Phone Mobile service, you and your onboarding team must:
-
-- Integrate Azure Communications Gateway with your network.
-- Test your service.
-- Prepare for launch.
-
-In this article, you learn about the steps you and your onboarding team must take.
-
-> [!TIP]
-> In many cases, your onboarding team is from Microsoft, provided through the [Included Benefits](onboarding.md) or through a separate arrangement.
-
-## Prerequisites
-- You must have [deployed Azure Communications Gateway](deploy.md) using the Microsoft Azure portal.
-- You must have [chosen some test numbers](prepare-to-deploy.md#prerequisites).
-- You must have a tenant you can use for testing (representing an enterprise customer), and some users in that tenant to whom you can assign the test numbers.
- - If you do not already have a suitable test tenant, you can use the [Microsoft 365 Developer Program](https://developer.microsoft.com/microsoft-365/dev-program), which provides E5 licenses.
- - The test users must be licensed for Teams Phone System and in Teams Only mode.
-- You must have access to the following configuration portals.
-
- |Configuration portal |Required permissions |
- |||
- |[Operator Connect portal](https://operatorconnect.microsoft.com/) | `Admin` role or `PartnerSettings.Read` and `NumberManagement.Write` roles (configured on the Project Synergy enterprise application that you set up when [you prepared to deploy Azure Communications Gateway](prepare-to-deploy.md#1-add-the-project-synergy-application-to-your-azure-tenancy))|
- |[Teams Admin Center](https://admin.teams.microsoft.com/) for your test tenant |User management|
--
-## Methods
-
-In some parts of this article, the steps you must take depend on whether your deployment includes the Number Management Portal. This article provides instructions for both types of deployment. Choose the appropriate instructions.
-
-## 1. Connect Azure Communications Gateway to your networks
-
-1. Exchange TLS certificate information with your onboarding team.
- 1. Azure Communications Gateway is preconfigured to support the DigiCert Global Root G2 certificate and the Baltimore CyberTrust Root certificate as root certificate authority (CA) certificates. If the certificate that your network presents to Azure Communications Gateway uses a different root CA certificate, provide your onboarding team with this root CA certificate.
- 1. The root CA certificate for Azure Communications Gateway's certificate is the DigiCert Global Root G2 certificate. If your network doesn't have this root certificate, download it from https://www.digicert.com/kb/digicert-root-certificates.htm and install it in your network.
-1. Configure your infrastructure to meet the call routing requirements described in [Reliability in Azure Communications Gateway](reliability-communications-gateway.md).
-1. Configure your network devices to send and receive SIP traffic from Azure Communications Gateway.
- * Depending on your network, you might need to configure SBCs, softswitches and access control lists (ACLs).
- * Your network needs to send SIP traffic to per-region FQDNs for Azure Communications Gateway. To find these FQDNs:
- 1. Go to the **Overview** page for your Azure Communications Gateway resource.
- 1. In each **Service Location** section, find the **Hostname** field. You need to validate TLS connections against this hostname to ensure secure connections.
- * We recommend configuring an SRV lookup for each region, using `_sip._tls.<regional-FQDN-from-portal>`. Replace *`<regional-FQDN-from-portal>`* with the per-region FQDNs that you found in the **Overview** page for your resource.
-1. If your Azure Communications Gateway includes integrated MCP, configure the connection to MCP:
- 1. Go to the **Overview** page for your Azure Communications Gateway resource.
- 1. In each **Service Location** section, find the **MCP hostname** field.
- 1. Configure your test numbers with an iFC of the following form, replacing *`<mcp-hostname>`* with the MCP hostname for the preferred region for that subscriber.
- ```xml
- <InitialFilterCriteria>
- <Priority>0</Priority>
- <TriggerPoint>
- <ConditionTypeCNF>0</ConditionTypeCNF>
- <SPT>
- <ConditionNegated>0</ConditionNegated>
- <Group>0</Group>
- <Method>INVITE</Method>
- </SPT>
- <SPT>
- <ConditionNegated>1</ConditionNegated>
- <Group>0</Group>
- <SessionCase>4</SessionCase>
- </SPT>
- </TriggerPoint>
- <ApplicationServer>
- <ServerName>sips:<mcp-hostname>;transport=tcp;service=mcp</ServerName>
- <DefaultHandling>0</DefaultHandling>
- </ApplicationServer>
- <ProfilePartIndicator>0</ProfilePartIndicator>
- </InitialFilterCriteria>
- ```
-1. Configure your routers and peering connection to ensure all traffic to Azure Communications Gateway is through Azure Internet Peering for Communications Services (also known as MAPS for Voice).
-1. Enable Bidirectional Forwarding Detection (BFD) on your on-premises edge routers to speed up link failure detection.
- - The interval must be 150 ms (or 300 ms if you can't use 150 ms).
- - With MAPS, BFD must bring up the BGP peer for each Private Network Interface (PNI).
-1. Meet any other requirements in the _Network Connectivity Specification_ for Operator Connect or Teams Phone Mobile. If you don't have access to this specification, contact your onboarding team.
-
-## 2. Ask your onboarding team to register your test enterprise tenant
-
-Your onboarding team must register the test enterprise tenant that you chose in [Prerequisites](#prerequisites) with Microsoft Teams.
-
-1. Find your company's "Operator ID" in your [operator configuration in the Operator Connect portal](https://operatorconnect.microsoft.com/operator/configuration).
-1. Provide your onboarding contact with:
- - Your company's name.
- - Your company's Operator ID.
- - The ID of the tenant to use for testing.
-1. Wait for your onboarding team to confirm that your test tenant has been registered.
-
-## 3. Assign numbers to test users in your tenant
-
-1. Ask your onboarding team for the name of the Calling Profile that you must use for these test numbers. The name typically has the suffix `commsgw`. This Calling Profile has been created for you during the Azure Communications Gateway deployment process.
-1. In your test tenant, request service from your company.
- 1. Sign in to the [Teams Admin Center](https://admin.teams.microsoft.com/) for your test tenant.
- 1. Select **Voice** > **Operators**.
- 1. Select your company in the list of operators, fill in the form and select **Add as my operator**.
-1. In your test tenant, create some test users (if you don't already have suitable users). License the users for Teams Phone System and place them in Teams Only mode.
-1. Configure emergency locations in your test tenant.
-1. Upload numbers in the Number Management Portal (if you chose to deploy it as part of Azure Communications Gateway) or the Operator Connect Operator Portal. Use the Calling Profile that you obtained from your onboarding team.
-
- # [Number Management Portal](#tab/number-management-portal)
-
- 1. Sign in to the [Azure portal](https://azure.microsoft.com/).
- 1. In the search bar at the top of the page, search for your Communications Gateway resource.
- 1. Select your Communications Gateway resource.
- 1. On the overview page, select **Consents** in the sidebar.
- 1. Select your test tenant.
- 1. From the menu, select **Update Relationship Status**. Set the status to **Agreement signed**.
- 1. From the menu, select **Manage Numbers**.
- 1. Select **Upload numbers**.
- 1. Fill in the fields as required, and then select **Review + upload** and **Upload**.
-
- # [Operator Portal](#tab/no-number-management-portal)
-
- 1. Open the Operator Portal.
- 1. Select **Customer Consents**.
- 1. Select your test tenant.
- 1. Select **Update Relationship**. Set the status to **Agreement signed**.
- 1. Select the link for your test tenant. The link opens **Number Management** > **Manage by Tenant**.
- 1. Select **Upload Numbers**.
- 1. Fill in the fields as required, and then select **Submit**.
-
-
-1. In your test tenant, assign these numbers to your test users.
- 1. Sign in to the Teams Admin Center for your test tenant.
- 1. Select **Voice** > **Phone numbers**.
- 1. Select a number, then select **Edit**.
- 1. Assign the number to a user.
- 1. Repeat for all your test users.
-
-## 4. Carry out integration testing and request changes
-
-Network integration includes identifying SIP interoperability requirements and configuring devices to meet these requirements. For example, this process often includes interworking header formats and/or the signaling & media flows used for call hold and session refresh.
-
-You must test typical call flows for your network. Your onboarding team will provide an example test plan that we recommend you follow. Your test plan should include call flow, failover, and connectivity testing.
-
-- If you decide that you need changes to Azure Communications Gateway, ask your onboarding team. Microsoft will make the changes for you.
-- If you need changes to the configuration of devices in your core network, you must make those changes.
-
-## 5. Run a connectivity test and upload proof
-
-Before you can launch, Microsoft Teams requires proof that your network is properly connected to Microsoft's network.
-
-1. Provide your onboarding team with proof that BFD is enabled. You enabled BFD in [1. Connect Azure Communications Gateway to your networks](#1-connect-azure-communications-gateway-to-your-networks). For example, if you have a Cisco router, you can provide configuration similar to the following.
-
- ```text
- interface TenGigabitEthernet2/0/0.150
- description private peering to Azure
- encapsulation dot1Q 15 second-dot1q 150
- ip vrf forwarding 15
- ip address 192.168.15.17 255.255.255.252
- bfd interval 150 min_rx 150 multiplier 3
-
- router bgp 65020
- address-family ipv4 vrf 15
- network 10.1.15.0 mask 255.255.255.128
- neighbor 192.168.15.18 remote-as 12076
- neighbor 192.168.15.18 fall-over bfd
- neighbor 192.168.15.18 activate
- neighbor 192.168.15.18 soft-reconfiguration inbound
- exit-address-family
- ```
-
-1. Test failover of the MAPS connections to your network. Your onboarding team will work with you to plan this testing and gather the required evidence.
-1. Work with your onboarding team to validate emergency call handling.
-
-## 6. Get your go-to-market resources approved
-
-Before you can go live, you must get your customer-facing materials approved by Microsoft Teams. Provide the following to your onboarding team for review.
-
-- Press releases and other marketing material
-- Content for your landing page
-- Logo for the Microsoft Teams Operator Directory (200 px by 200 px)
-- Logo for the Microsoft Teams Admin Center (170 px by 90 px)
-
-## 7. Test raising a ticket
-
-You must test that you can raise tickets in the Azure portal to report problems with Azure Communications Gateway. See [Get support or request changes for Azure Communications Gateway](request-changes.md).
-
-## 8. Learn about monitoring Azure Communications Gateway
-
-Your staff can use a selection of key metrics to monitor Azure Communications Gateway. These metrics are available to anyone with the Reader role on the subscription for Azure Communications Gateway. See [Monitoring Azure Communications Gateway](monitor-azure-communications-gateway.md).
-
-## 9. Verify API integration
-
-Your onboarding team must provide Microsoft with proof that you have integrated with the Microsoft Teams Operator Connect API for provisioning.
-
-# [Number Management Portal](#tab/number-management-portal)
-
-If you have the Number Management Portal, your onboarding team can obtain proof automatically. You don't need to do anything.
-
-# [Without the Number Management Portal](#tab/no-number-management-portal)
-
-If you don't have the Number Management Portal, you must provide your onboarding team with proof that you have made successful API calls for:
-
-- Partner consent
-- TN Upload to Account
-- Unassign TN
-- Release TN
---
-## 10. Arrange synthetic testing
-
-Your onboarding team must arrange synthetic testing of your deployment. This synthetic testing is a series of automated tests lasting at least seven days. It verifies the most important metrics for quality of service and availability.
-
-After launch, synthetic traffic will be sent through your deployment using your test numbers. This traffic is used to continuously check the health of your deployment.
-
-## 11. Schedule launch
-
-Your launch date is the date that you'll appear to enterprises in the Teams Admin Center. Your onboarding team must arrange this date by making a request to Microsoft Teams.
-
-Your service can be launched on specific dates each month. Your onboarding team must submit the request at least two weeks before your preferred launch date.
-
-## Next steps
-- Wait for your launch date.
-- Learn about [getting support and requesting changes for Azure Communications Gateway](request-changes.md).
-- Learn about [monitoring Azure Communications Gateway](monitor-azure-communications-gateway.md).
-- Learn about [planning and managing costs for Azure Communications Gateway](plan-and-manage-costs.md).
communications-gateway Prepare To Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/prepare-to-deploy.md
The following sections describe the information you need to collect and the deci
## Prerequisites
-You must be a Telecommunications Service Provider who has signed an Operator Connect agreement with Microsoft. For more information, see [Operator Connect](https://cloudpartners.transform.microsoft.com/practices/microsoft-365-for-operators/connect).
-You need an onboarding partner for integrating with Microsoft Phone System. If you're not eligible for onboarding to Microsoft Teams through Azure Communications Gateway's [Included Benefits](onboarding.md) or you haven't arranged alternative onboarding with Microsoft through a separate arrangement, you need to arrange an onboarding partner yourself.
-You must own globally routable numbers that you can use for testing, as follows.
+## 1. Arrange onboarding
-|Type of testing|Numbers required |
-|||
-|Automated validation testing by Microsoft Teams test suites|Minimum: 6. Recommended: 9 (to run tests simultaneously).|
-|Manual test calls made by you and/or Microsoft staff during integration testing |Minimum: 1|
+For Operator Connect and Teams Phone Mobile, you need an onboarding partner for integrating with Microsoft Phone System. If you're not eligible for onboarding to Microsoft Teams through Azure Communications Gateway's [Included Benefits](onboarding.md) or you haven't arranged alternative onboarding with Microsoft through a separate arrangement, you need to arrange an onboarding partner yourself.
-After deployment, the automated validation testing numbers use synthetic traffic to continuously check the health of your deployment.
+## 2. Ensure you have a suitable support plan
We strongly recommend that you have a support plan that includes technical support, such as [Microsoft Unified Support](https://www.microsoft.com/en-us/unifiedsupport/overview) or [Premier Support](https://www.microsoft.com/en-us/unifiedsupport/premier).
-## 1. Add the Project Synergy application to your Azure tenancy
-
-> [!NOTE]
->This step and the next step ([2. Assign an Admin user to the Project Synergy application](#2-assign-an-admin-user-to-the-project-synergy-application)) set you up as an Operator in the Teams Phone Mobile (TPM) and Operator Connect (OC) environments. If you've already gone through onboarding, go to [3. Create a network design](#3-create-a-network-design).
+## 3. Choose the Azure tenant to use
The Operator Connect and Teams Phone Mobile programs require your Azure Active Directory tenant to contain a Microsoft application called Project Synergy. Operator Connect and Teams Phone Mobile inherit permissions and identities from your Azure Active Directory tenant through the Project Synergy application. The Project Synergy application also allows configuration of Operator Connect or Teams Phone Mobile and assigning users and groups to specific roles. We recommend that you use an existing Azure Active Directory tenant for Azure Communications Gateway, because using an existing tenant uses your existing identities for fully integrated authentication. However, if you need to manage identities for Operator Connect separately from the rest of your organization, create a new dedicated tenant first.
-To add the Project Synergy application:
-
-1. Check whether the Azure Active Directory (`AzureAD`) module is installed in PowerShell. Install it if necessary.
- 1. Open PowerShell.
- 1. Run the following command and check whether `AzureAD` appears in the output.
- ```azurepowershell
- Get-Module -ListAvailable
- ```
- 1. If `AzureAD` doesn't appear in the output, install the module:
- 1. Close your current PowerShell window.
- 1. Open PowerShell as an admin.
- 1. Run the following command.
- ```azurepowershell
- Install-Module AzureAD
- ```
- 1. Close your PowerShell admin window.
-1. Sign in to the [Azure portal](https://ms.portal.azure.com/) as an Azure Active Directory Global Admin.
-1. Select **Azure Active Directory**.
-1. Select **Properties**.
-1. Scroll down to the Tenant ID field. Your tenant ID is in the box. Make a note of your tenant ID.
-1. Open PowerShell.
-1. Run the following cmdlet, replacing *`<AADTenantID>`* with the tenant ID you noted down in step 5.
- ```azurepowershell
- Connect-AzureAD -TenantId "<AADTenantID>"
- New-AzureADServicePrincipal -AppId eb63d611-525e-4a31-abd7-0cb33f679599 -DisplayName "Operator Connect"
- ```
-
-## 2. Assign an Admin user to the Project Synergy application
-
-The user who sets up Azure Communications Gateway needs to have the Admin user role in the Project Synergy application.
-
-1. In your Azure portal, navigate to **Enterprise applications** using the left-hand side menu. Alternatively, you can search for it in the search bar; it's under the **Services** subheading.
-1. Set the **Application type** filter to **All applications** using the drop-down menu.
-1. Select **Apply**.
-1. Search for **Project Synergy** using the search bar. The application should appear.
-1. Select your **Project Synergy** application.
-1. Select **Users and groups** from the left hand side menu.
-1. Select **Add user/group**.
-1. Specify the user you want to use for setting up Azure Communications Gateway and give them the **Admin** role.
-
-## 3. Create a network design
-
-Ensure your network is set up as shown in the following diagram and has been configured in accordance with the *Network Connectivity Specification* that you've been issued. You must have two Azure Regions with cross-connect functionality. For more information on the reliability design for Azure Communications Gateway, see [Reliability in Azure Communications Gateway](reliability-communications-gateway.md).
-
-For Teams Phone Mobile, you must decide how your network should determine whether a call involves a Teams Phone Mobile subscriber and therefore route the call to Microsoft Phone System. You can:
-
-- Use Azure Communications Gateway's integrated Mobile Control Point (MCP).
-- Connect to an on-premises version of Mobile Control Point (MCP) from Metaswitch.
-- Use other routing capabilities in your core network.
-
-For more information on these options, see [Call control integration for Teams Phone Mobile](interoperability.md#call-control-integration-for-teams-phone-mobile) and [Mobile Control Point in Azure Communications Gateway](mobile-control-point.md).
-
-To configure MAPS, follow the instructions in [Azure Internet peering for Communications Services walkthrough](../internet-peering/walkthrough-communications-services-partner.md).
 :::image type="content" source="media/azure-communications-gateway-redundancy.png" alt-text="Network diagram of an Azure Communications Gateway that uses MAPS as its peering service between Azure and an operator's network.":::
+## 4. Get access to Azure Communications Gateway for your Azure subscription
-## 4. Collect basic information for deploying an Azure Communications Gateway
-
- Collect all of the values in the following table for the Azure Communications Gateway resource.
+Access to Azure Communications Gateway is restricted. When you've completed the previous steps in this article, contact your onboarding team and ask them to enable your subscription. If you don't already have an onboarding team, contact azcog-enablement@microsoft.com with your Azure subscription ID and contact details.
-|**Value**|**Field name(s) in Azure portal**|
- |||
- |The name of the Azure subscription to use to create an Azure Communications Gateway resource. You must use the same subscription for all resources in your Azure Communications Gateway deployment. |**Project details: Subscription**|
- |The Azure resource group in which to create the Azure Communications Gateway resource. |**Project details: Resource group**|
- |The name for the deployment. This name can contain alphanumeric characters and `-`. It must be 3-24 characters long. |**Instance details: Name**|
- |The management Azure region: the region in which your monitoring and billing data is processed. We recommend that you select a region near or colocated with the two regions for handling call traffic. |**Instance details: Region**
- |The voice codecs to use between Azure Communications Gateway and your network. |**Instance details: Supported Codecs**|
- |The Unified Communications as a Service (UCaaS) platform(s) Azure Communications Gateway should support. These platforms are Teams Phone Mobile and Operator Connect Mobile. |**Instance details: Supported Voice Platforms**|
- |Whether your Azure Communications Gateway resource should handle emergency calls as standard calls or directly route them to the Emergency Services Routing Proxy (US only). |**Instance details: Emergency call handling**|
- |The scope at which Azure Communications Gateway's autogenerated domain name label is unique. Communications Gateway resources get assigned an autogenerated domain name label that depends on the name of the resource. You'll need to register the domain name later when you deploy Azure Communications Gateway. Selecting **Tenant** gives a resource with the same name in the same tenant but a different subscription the same label. Selecting **Subscription** gives a resource with the same name in the same subscription but a different resource group the same label. Selecting **Resource Group** gives a resource with the same name in the same resource group the same label. Selecting **No Re-use** means the label doesn't depend on the name, resource group, subscription or tenant. |**Instance details: Auto-generated Domain Name Scope**|
- |The number used in Teams Phone Mobile to access the Voicemail Interactive Voice Response (IVR) from native dialers.|**Instance details: Teams Voicemail Pilot Number**|
- |A list of dial strings used for emergency calling.|**Instance details: Emergency Dial Strings**|
- | How you plan to use Mobile Control Point (MCP) to route Teams Phone Mobile calls to Microsoft Phone System. Choose from **Integrated** (to deploy MCP in Azure Communications Gateway), **On-premises** (to use an existing on-premises MCP) or **None** (if you don't plan to offer Teams Phone Mobile or you'll use another method to route calls). |**Instance details: MCP**|
+Wait for confirmation that Azure Communications Gateway is enabled before moving on to the next step.
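If you need to look up the Azure subscription ID to include in that request, a quick sketch (assuming the Az PowerShell module) is:

```azurepowershell
# List the subscriptions your account can see, with their IDs.
Connect-AzAccount
Get-AzSubscription | Select-Object Name, Id
```
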
-## 5. Collect Service Regions configuration values
+## 5. Create a network design
-Collect all of the values in the following table for both service regions in which you want to deploy Azure Communications Gateway.
+You must use Microsoft Azure Peering Service (MAPS) or ExpressRoute Microsoft Peering to connect your on-premises network to Azure Communications Gateway.
- |**Value**|**Field name(s) in Azure portal**|
- |||
- |The Azure regions to use for call traffic. |**Service Region One/Two: Region**|
- |The IPv4 address used by Azure Communications Gateway to contact your network from this region. |**Service Region One/Two: Operator IP address**|
- |The set of IP addresses/ranges that are permitted as sources for signaling traffic from your network. Provide an IPv4 address range using CIDR notation (for example, 192.0.2.0/24) or an IPv4 address (for example, 192.0.2.0). You can also provide a comma-separated list of IPv4 addresses and/or address ranges.|**Service Region One/Two: Allowed Signaling Source IP Addresses/CIDR Ranges**|
- |The set of IP addresses/ranges that are permitted as sources for media traffic from your network. Provide an IPv4 address range using CIDR notation (for example, 192.0.2.0/24) or an IPv4 address (for example, 192.0.2.0). You can also provide a comma-separated list of IPv4 addresses and/or address ranges.|**Service Region One/Two: Allowed Media Source IP Address/CIDR Ranges**|
-## 6. Collect Test Lines configuration values
+If you want to use ExpressRoute Microsoft Peering, consult with your onboarding team and ensure that it's available in your region.
-Collect all of the values in the following table for all the test lines that you want to configure for Azure Communications Gateway.
+Ensure your network is set up as shown in the following diagram and has been configured in accordance with the *Network Connectivity Specification* that you've been issued. You must have two Azure Regions with cross-connect functionality. For more information on the reliability design for Azure Communications Gateway, see [Reliability in Azure Communications Gateway](reliability-communications-gateway.md).
- |**Value**|**Field name(s) in Azure portal**|
- |||
- |The name of the test line. |**Name**|
- |The phone number of the test line, in E.164 format and including the country code. |**Phone Number**|
- |The purpose of the test line: **Manual** (for manual test calls by you and/or Microsoft staff during integration testing) or **Automated** (for automated validation with Microsoft Teams test suites).|**Testing purpose**|
-> [!IMPORTANT]
-> You must configure at least six automated test lines. We recommend nine automated test lines (to allow simultaneous tests).
+For Teams Phone Mobile, you must decide how your network should determine whether a call involves a Teams Phone Mobile subscriber and therefore route the call to Microsoft Phone System. You can:
-## 7. Decide if you want tags
+- Use Azure Communications Gateway's integrated Mobile Control Point (MCP).
+- Connect to an on-premises version of Mobile Control Point (MCP) from Metaswitch.
+- Use other routing capabilities in your core network.
-Resource naming and tagging is useful for resource management. It enables your organization to locate and keep track of resources associated with specific teams or workloads and also enables you to more accurately track the consumption of cloud resources by business area and team.
+For more information on these options, see [Call control integration for Teams Phone Mobile](interoperability-operator-connect.md#call-control-integration-for-teams-phone-mobile) and [Mobile Control Point in Azure Communications Gateway](mobile-control-point.md).
-If you believe tagging would be useful for your organization, design your naming and tagging conventions following the information in the [Resource naming and tagging decision guide](/azure/cloud-adoption-framework/decision-guides/resource-tagging/).
+If you plan to route emergency calls through Azure Communications Gateway, read [Emergency calling for Operator Connect and Teams Phone Mobile with Azure Communications Gateway](emergency-calling-operator-connect.md) to learn about your options.
-## 8. Get access to Azure Communications Gateway for your Azure subscription
+## 6. Configure MAPS or ExpressRoute
-Access to Azure Communications Gateway is restricted. When you've completed the previous steps in this article, contact your onboarding team and ask them to enable your subscription. If you don't already have an onboarding team, contact azcog-enablement@microsoft.com with your Azure subscription ID and contact details.
+Connect your network to Azure Communications Gateway:
-Wait for confirmation that Azure Communications Gateway is enabled before moving on to the next step.
+- To configure MAPS, follow the instructions in [Azure Internet peering for Communications Services walkthrough](../internet-peering/walkthrough-communications-services-partner.md).
+- To configure ExpressRoute Microsoft Peering, follow the instructions in [Tutorial: Configure peering for ExpressRoute circuit](../../articles/expressroute/expressroute-howto-routing-portal-resource-manager.md).
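After configuring ExpressRoute, you can confirm that Microsoft peering is provisioned on the circuit. This is a sketch assuming the Az.Network module; the circuit name and resource group are placeholders:

```azurepowershell
# Illustrative only: check the Microsoft peering configuration on an ExpressRoute circuit.
$circuit = Get-AzExpressRouteCircuit -Name "<circuit-name>" -ResourceGroupName "<resource-group>"
Get-AzExpressRouteCircuitPeeringConfig -Name "MicrosoftPeering" -ExpressRouteCircuit $circuit
```
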
-## Next steps
+## Next step
-- [Create an Azure Communications Gateway resource](deploy.md)
+> [!div class="nextstepaction"]
+> [Deploy an Azure Communications Gateway resource and connect it to your networks](deploy.md)
communications-gateway Provision User Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/provision-user-roles.md
You need to use the Azure portal to configure user roles.
- Know who needs access.
- Know the appropriate user role or roles to assign them.
- Are signed in with a user account with a role that can change role assignments for the subscription, such as **Owner** or **User Access Administrator**.
-1. If you're managing access to the Number Management Portal, ensure that you're signed in with a user that can change roles for enterprise applications. For example, you could be a Global Administrator, Cloud Application Administrator or Application Administrator. For more information, see [Assign users and groups to an application](../active-directory/manage-apps/assign-user-or-group-access-portal.md).
+1. If you're managing access to the Number Management Portal, ensure that you're signed in with a user account that can change roles for enterprise applications. For example, you could be a Global Administrator, Cloud Application Administrator or Application Administrator. For more information, see [Assign users and groups to an application](../active-directory/manage-apps/assign-user-or-group-access-portal.md).
### 2.2 Assign a user role
communications-gateway Reliability Communications Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/reliability-communications-gateway.md
Last updated 05/11/2023
-# What is reliability in Azure Communications Gateway?
+# Reliability in Azure Communications Gateway
Azure Communications Gateway ensures your service is reliable by using Azure redundancy mechanisms and SIP-specific retry behavior. Your network must meet specific requirements to ensure service availability.
The reliability design described in this document is implemented by Microsoft an
## Next steps
-> [!div class="nextstepaction"]
-> [Prepare to deploy an Azure Communications Gateway resource](prepare-to-deploy.md)
+- Learn about [how Azure Communications Gateway keeps your network and data secure](security.md)
+- Learn more about [planning an Azure Communications Gateway deployment](get-started.md)
+
communications-gateway Request Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/request-changes.md
Azure provides unlimited support for subscription management, which includes bil
## Prerequisites
-Perform initial troubleshooting to help determine if you should raise an issue with Azure Communications Gateway or a different component. We provide some examples where you should raise an issue with Azure Communications Gateway. Raising issues for the correct component helps resolve your issues faster.
+Perform initial troubleshooting to help determine if you should raise an issue with Azure Communications Gateway or a different component. Raising issues for the correct component helps resolve your issues faster.
Raise an issue with Azure Communications Gateway if you experience an issue with:

- SIP and RTP exchanged by Azure Communications Gateway and your network.
You must have an **Owner**, **Contributor**, or **Support Request Contributor**
1. Concisely describe your problem or the change you need in the **Summary** box.
1. Select an **Issue type** from the drop-down menu.
-1. Select your **Subscription** from the drop-down menu. Choose the subscription where you're noticing the problem or need a change. The support engineer assigned to your case will only be able to access resources in the subscription you specify. If the issue applies to multiple subscriptions, you can mention other subscriptions in your description, or by sending a message later. However, the support engineer will only be able to work on subscriptions to which you have access.
-1. A new **Service** option will appear giving you the option to select either **My services** or **All services**. Select **My services**.
-1. In **Service type** select **Azure Communications Gateway** from the drop-down menu.
-1. A new **Problem type** option will appear. Select the problem type that most accurately describes your issue from the drop-down menu.
+1. Select your **Subscription** from the drop-down menu. Choose the subscription where you're noticing the problem or need a change. The support engineer assigned to your case can only access resources in the subscription you specify. If the issue applies to multiple subscriptions, you can mention other subscriptions in your description, or by sending a message later. However, the support engineer can only work on subscriptions to which you have access.
+1. In the new **Service** option, select **My services**.
+1. Set **Service type** to **Azure Communications Gateway**.
+1. In the new **Problem type** drop-down, select the problem type that most accurately describes your issue.
* Select **API Bridge Issue** if your Number Management Portal is returning errors when you try to gain access or carry out actions.
* Select **Configuration and Setup** if you experience issues during initial provisioning and onboarding, or if you want to change configuration for an existing deployment.
* Select **Monitoring** for issues with metrics and logs.
* Select **Voice Call Issue** if calls aren't connecting, have poor quality, or show unexpected behavior.
* Select **Other issue or question** if your issue or question doesn't apply to any of the other problem types.
-1. A new **Problem subtype** option will appear. Select the problem subtype that most accurately describes your issue from the drop-down menu. If the problem type you selected only has one subtype, the subtype is automatically selected.
+1. From the new **Problem subtype** drop-down menu, select the problem subtype that most accurately describes your issue. If the problem type you selected only has one subtype, the subtype is automatically selected.
1. Select **Next**.

## 3. Assess the recommended solutions
Before creating your request, review the details and diagnostics that you'll sen
## Next steps
-Learn how to [Manage an Azure support request](../azure-portal/supportability/how-to-manage-azure-support-request.md).
+> [!div class="nextstepaction"]
+> [Learn how to manage an Azure support request](../azure-portal/supportability/how-to-manage-azure-support-request.md).
communications-gateway Role In Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/role-in-network.md
+
+ Title: Azure Communications Gateway's role in your network
+description: Azure Communication Gateway sits on the edge of your network. Its interoperability features allow it to adapt to your requirements.
+ Last updated : 07/25/2023
+# Your network and Azure Communications Gateway
+
+Azure Communications Gateway sits at the edge of your network. This position allows it to manipulate signaling and media to meet the requirements of your networks and your chosen communications services. Azure Communications Gateway includes many interoperability settings by default, and you can arrange further interoperability configuration.
+
+> [!TIP]
+> This section provides a brief overview of Azure Communications Gateway's interoperability features. For detailed information about interoperability with Operator Connect and Teams Phone Mobile, see [Interoperability of Azure Communications Gateway with Operator Connect and Teams Phone Mobile for Microsoft Teams](interoperability-operator-connect.md).
+
+## Role and position in the network
+
+Azure Communications Gateway sits at the edge of your fixed line and mobile networks. It connects these networks to one or more communications services. The following diagram shows where Azure Communications Gateway sits in your network.
+
+ Architecture diagram showing Azure Communications Gateway connecting to the Microsoft Phone System, a softswitch in a fixed line deployment and a mobile IMS core. Azure Communications Gateway contains certified SBC function and the MCP application server for anchoring mobile calls.
+
+Azure Communications Gateway provides all the features of a traditional session border controller (SBC). These features include:
+
+- Signaling interworking features to solve interoperability problems
+- Advanced media manipulation and interworking
+- Defending against Denial of Service attacks and other malicious traffic
+- Ensuring Quality of Service
+
+Azure Communications Gateway also offers metrics for monitoring your deployment.
+
+You must provide the networking connection between Azure Communications Gateway and your core networks.
+
+## SIP signaling
+
+Azure Communications Gateway includes SIP trunks to your own network and can interwork between your existing core networks and the requirements of your chosen communications service.
++
+You can arrange more interworking function as part of your initial network design or at any time by raising a support request for Azure Communications Gateway. For example, you might need extra interworking configuration for:
+
+- Advanced SIP header or SDP message manipulation
+- Support for reliable provisional messages (100rel)
+- Interworking between early and late media
+- Interworking away from inband DTMF tones
+
+## RTP and SRTP media
+
+Azure Communications Gateway supports both RTP and SRTP, and can interwork between them. Azure Communications Gateway offers other media manipulation features to allow your networks to interoperate with your chosen communications services. For example, you can use Azure Communications Gateway for:
+
+- Transcoding (converting) between codecs supported by your network and codecs supported by your chosen communications service.
+- Changing how RTCP is handled
+- Controlling bandwidth allocation
+- Prioritizing specific media traffic for Quality of Service
+
+For full details of the media interworking features available in Azure Communications Gateway, raise a support request.
+
+## Next steps
+
+- Learn about [Interoperability for Operator Connect and Teams Phone Mobile](interoperability-operator-connect.md)
+- Learn about [onboarding and Inclusive Benefits](onboarding.md)
+- Learn about [planning an Azure Communications Gateway deployment](get-started.md)
communications-gateway Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/security.md
When encrypting traffic to send to your network, Azure Communications Gateway pr
Azure Communications Gateway uses mutual TLS for SIP, meaning that both the client and the server for the connection verify each other.
-You must manage the certificates that your network presents to Azure Communications Gateway. By default, Azure Communications Gateway supports the DigiCert Global Root G2 certificate and the Baltimore CyberTrust Root certificate as root certificate authority (CA) certificates. If the certificate that your network presents to Azure Communications Gateway uses a different root CA certificate, you must provide this certificate to your onboarding team when you [prepare for live traffic](prepare-for-live-traffic.md#1-connect-azure-communications-gateway-to-your-networks).
+You must manage the certificates that your network presents to Azure Communications Gateway. By default, Azure Communications Gateway supports the DigiCert Global Root G2 certificate and the Baltimore CyberTrust Root certificate as root certificate authority (CA) certificates. If the certificate that your network presents to Azure Communications Gateway uses a different root CA certificate, you must provide this certificate to your onboarding team when you [connect Azure Communications Gateway to your networks](deploy.md#8-connect-azure-communications-gateway-to-your-networks).
-We manage the certificate that Azure Communications Gateway uses to connect to your network and Microsoft Phone System. Azure Communications Gateway's certificate uses the DigiCert Global Root G2 certificate as the root CA certificate. If your network doesn't already support this certificate as a root CA certificate, you must download and install this certificate when you [prepare for live traffic](prepare-for-live-traffic.md#1-connect-azure-communications-gateway-to-your-networks).
+We manage the certificate that Azure Communications Gateway uses to connect to your network and Microsoft Phone System. Azure Communications Gateway's certificate uses the DigiCert Global Root G2 certificate as the root CA certificate. If your network doesn't already support this certificate as a root CA certificate, you must download and install this certificate when you [connect Azure Communications Gateway to your networks](deploy.md#8-connect-azure-communications-gateway-to-your-networks).
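For example, on a Windows host you might download and install the root certificate as follows. This is a sketch, not an official procedure: verify the download URL and the certificate fingerprint against the DigiCert root certificates page before trusting it.

```azurepowershell
# Illustrative only: download the DigiCert Global Root G2 certificate and add it
# to the machine's trusted root store (requires the Windows PKI module and admin rights).
Invoke-WebRequest -Uri "https://cacerts.digicert.com/DigiCertGlobalRootG2.crt" `
  -OutFile "DigiCertGlobalRootG2.crt"
Import-Certificate -FilePath ".\DigiCertGlobalRootG2.crt" -CertStoreLocation Cert:\LocalMachine\Root
```
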
### Cipher suites for SIP and RTP
The following cipher suites are used for encrypting SIP and RTP.
## Next steps - Read the [security baseline for Azure Communications Gateway](/security/benchmark/azure/baselines/azure-communications-gateway-security-baseline?toc=/azure/communications-gateway/toc.json&bc=/azure/communications-gateway/breadcrumb/toc.json)-- Learn about [how Azure Communications Gateway communicates with Microsoft Teams and your network](interoperability.md).
+- Learn about [how Azure Communications Gateway communicates with Microsoft Teams](interoperability-operator-connect.md).
+- Learn about [planning an Azure Communications Gateway deployment](get-started.md)
communications-gateway Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/whats-new.md
Previously updated : 05/05/2023 Last updated : 09/06/2023 # What's new in Azure Communications Gateway? This article covers new features and improvements for Azure Communications Gateway.
+## September 2023
+
+### ExpressRoute Microsoft Peering between Azure and operator networks
+
+From September 2023, you can use ExpressRoute Microsoft Peering to connect operator networks to Azure Communications Gateway as an alternative to Peering Services Voice (also known as MAPS for voice). We recommend that most deployments use MAPS for voice unless there's a specific reason that ExpressRoute Microsoft Peering is preferable. For example, you might have existing ExpressRoute connectivity to your network that you can reuse. For details and examples of when ExpressRoute might be preferable to MAPS, see [Using ExpressRoute for Microsoft PSTN Services](../../articles/expressroute/using-expressroute-for-microsoft-pstn.md).
+ ## May 2023 ### Integrated Mobile Control Point for Teams Phone Mobile integration From May 2023, you can deploy Mobile Control Point (MCP) as part of Azure Communications Gateway. MCP is an IMS Application Server that simplifies interworking with Microsoft Phone System for mobile calls. It ensures calls are only routed to the Microsoft Phone System when a user is eligible for Teams Phone Mobile services. This process minimizes the changes you need in your mobile network to route calls into Microsoft Teams. For more information, see [Mobile Control Point in Azure Communications Gateway for Teams Phone Mobile](mobile-control-point.md).
-You can add MCP when you deploy Azure Communications Gateway or by requesting changes to an existing deployment. For more information, see [Prepare to deploy Azure Communications Gateway](prepare-to-deploy.md) and [Deploy Azure Communications Gateway](deploy.md) or [Get support or request changes to your Azure Communications Gateway](request-changes.md)
+You can add MCP when you deploy Azure Communications Gateway or by requesting changes to an existing deployment. For more information, see [Deploy Azure Communications Gateway](deploy.md) or [Get support or request changes to your Azure Communications Gateway](request-changes.md).
### Authentication with managed identities for Operator Connect APIs
This new authentication model replaces an earlier model that required you to cre
## Next steps - [Learn more about Azure Communications Gateway](overview.md).-- [Prepare to deploy Azure Communications Gateway](prepare-to-deploy.md).
+- [Get started with Azure Communications Gateway](get-started.md).
confidential-computing Concept Skr Attestation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/concept-skr-attestation.md
Previously updated : 2/2/2023 Last updated : 8/22/2023
Make sure to set the value of the `--sku` parameter to **premium**.
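
For example, the following Azure CLI sketch creates a premium-SKU key vault; the vault name, resource group, and region are placeholder values:

```bash
# Minimal sketch, assuming the resource group already exists: create a
# premium-SKU key vault that supports HSM-protected keys for Secure Key
# Release. "myskrvault", "myresourcegroup", and "eastus" are placeholders.
az keyvault create \
  --name myskrvault \
  --resource-group myresourcegroup \
  --location eastus \
  --sku premium
```
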
A Secure Key Release Policy is a JSON-format release policy, as defined [here](/rest/api/keyvault/keys/create-key/create-key?tabs=HTTP#keyreleasepolicy), that specifies a set of claims required, in addition to authorization, to release the key. The claims are MAA-based claims, as referenced [here for SGX](/azure/attestation/attestation-token-examples#sample-jwt-generated-for-sgx-attestation) and here for [AMD SEV-SNP CVM](/azure/attestation/attestation-token-examples#sample-jwt-generated-for-sev-snp-attestation).
-Visit the TEE specific [examples page for more details](skr-policy-examples.md)
+Visit the TEE specific [examples page for more details](skr-policy-examples.md). For more information on the SKR policy grammar, see [Azure Key Vault secure key release policy grammar](../key-vault/keys/policy-grammar.md).
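
As an illustration only, the following sketch writes a hypothetical release policy that requires an MAA attestation claim, and then creates an exportable key bound to that policy. The authority URL, claim name and value, vault name, and key name are all placeholder assumptions; check the policy grammar article for the authoritative schema.

```bash
# Hypothetical sketch: bind an exportable key to a release policy that
# requires an MAA attestation claim. The authority URL, claim name/value,
# vault name, and key name are placeholders.
cat > skr-policy.json <<'EOF'
{
  "version": "1.0.0",
  "anyOf": [
    {
      "authority": "https://sharedeus.eus.attest.azure.net",
      "allOf": [
        {
          "claim": "x-ms-attestation-type",
          "equals": "sevsnpvm"
        }
      ]
    }
  ]
}
EOF

az keyvault key create \
  --vault-name myskrvault \
  --name myskrkey \
  --kty RSA-HSM \
  --exportable true \
  --policy @skr-policy.json
```
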
Before you set an SKR policy, make sure to run your TEE application through the remote attestation flow. Remote attestation isn't covered as part of this tutorial.
No. Not at this time.
[AKV REST API With SKR Details](/rest/api/keyvault/keys/create-key/create-key?tabs=HTTP)
+[Azure Key Vault secure key release policy grammar](../key-vault/keys/policy-grammar.md)
+ [AKV SDKs](../key-vault/general/client-libraries.md)
confidential-computing Harden A Linux Image To Remove Azure Guest Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/harden-a-linux-image-to-remove-azure-guest-agent.md
+
+ Title: Harden a Linux image to remove Azure guest agent
+description: Learn how to use the Azure CLI to harden a Linux image to remove the Azure guest agent.
++
++ Last updated : 8/03/2023++++
+# Harden a Linux image to remove Azure guest agent
+
+**Applies to:** :heavy_check_mark: Linux Images
+
+Azure supports two provisioning agents: [cloud-init](https://github.com/canonical/cloud-init) and the [Azure Linux Agent](https://github.com/Azure/WALinuxAgent) (WALA). These agents form the prerequisites for creating [generalized images](/azure/virtual-machines/generalize#linux) (Azure Compute Gallery or Managed Image). The Azure Linux Agent contains Provisioning Agent code and Extension Handling code in one package.
+
+It's crucial to understand which functionality the VM loses before deciding to remove the Azure Linux Agent. Removing the guest agent removes the functionality enumerated at [Azure Linux Agent](/azure/virtual-machines/extensions/agent-linux?branch=pr-en-us-247336).
+
+This how-to guide shows you the steps to remove the guest agent from a Linux image.
+
+## Prerequisites
+
+- If you don't have an Azure subscription, [create a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+- An Ubuntu image - you can choose one from the [Azure Marketplace](/azure/virtual-machines/linux/cli-ps-findimage).
+
+### Remove Azure Linux Agent and prepare a generalized Linux image
+
+Steps to create an image that removes the Azure Linux Agent are as follows:
+
+1. Download an Ubuntu image.
+
+ [How to download a Linux VHD from Azure](/azure/virtual-machines/linux/download-vhd?tabs=azure-portal)
+
+2. Mount the image.
+
+ Follow the instructions in step 2 of [remove sudo users from the Linux Image](/azure/confidential-computing/harden-the-linux-image-to-remove-sudo-users) to mount the image.
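+
+   For orientation only, the following heavily hedged sketch shows the general shape of mounting a VHD partition over a loop device; the device and mount-point names are placeholders, and the linked instructions remain the authoritative steps:
+   ```bash
+   # Hedged sketch: expose the VHD's partitions through a loop device and
+   # mount the root partition. $imagedevice is a placeholder name reused in
+   # the later steps.
+   sudo losetup --partscan --find --show ubuntu.vhd   # prints, for example, /dev/loop0
+   imagedevice=loop0p1
+   sudo mkdir -p /mnt/dev/$imagedevice
+   sudo mount /dev/$imagedevice /mnt/dev/$imagedevice
+   ```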
+
+3. Remove the Azure Linux Agent.
+
+   Run the following command as root to [remove the Azure Linux Agent](/azure/virtual-machines/linux/disable-provisioning):
+
+ For Ubuntu 18.04+
+    ```bash
+    sudo chroot /mnt/dev/$imagedevice/ apt -y remove walinuxagent
+    ```
++
+> [!NOTE]
+> If you know that you won't reinstall the Azure Linux Agent, you can also [remove the Azure Linux Agent artifacts](/azure/virtual-machines/linux/disable-provisioning#:~:text=Step%202%3A%20(Optional)%20Remove%20the%20Azure%20Linux%20Agent%20artifacts) by running the commands in the following optional step.
++
+4. (Optional) Remove the Azure Linux Agent artifacts.
+
+   If you know that you won't reinstall the Azure Linux Agent, run the following commands. Otherwise, skip this step:
+
+ For Ubuntu 18.04+
+    ```bash
+    sudo chroot /mnt/dev/$imagedevice/ rm -rf /var/lib/waagent
+    sudo chroot /mnt/dev/$imagedevice/ rm -f /etc/waagent.conf
+    sudo chroot /mnt/dev/$imagedevice/ rm -f /var/log/waagent.log
+    ```
+
+5. Create a systemd service to provision the VM.
+
+    Because you're removing the Azure Linux Agent, you must provide a mechanism to report that the VM is ready. Copy the contents of the bash or Python script located [here](/azure/virtual-machines/linux/no-agent?branch=pr-en-us-247336#add-required-code-to-the-vm) to the mounted image, and make the file executable (that is, grant execute permission by using `chmod`):
+    ```bash
+    sudo chmod +x /mnt/dev/$imagedevice/usr/local/azure-provisioning.sh
+    ```
+
+    To run the report-ready mechanism at first boot, create a [systemd service unit](/azure/virtual-machines/linux/no-agent#:~:text=Automating%20running%20the%20code%20at%20first%20boot) in /etc/systemd/system (this example names the unit file azure-provisioning.service; a sketch of such a unit file appears after the following command). Then enable the service:
+    ```bash
+    sudo chroot /mnt/dev/$imagedevice/ systemctl enable azure-provisioning.service
+    ```
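+    The following is a minimal sketch of what the unit file might contain, assuming the provisioning script was copied to /usr/local/azure-provisioning.sh as shown earlier; the linked no-agent article has the authoritative contents:
+    ```bash
+    # Hedged sketch of the unit file, written into the mounted image.
+    sudo tee /mnt/dev/$imagedevice/etc/systemd/system/azure-provisioning.service <<'EOF'
+    [Unit]
+    Description=Azure provisioning (report ready without the Azure Linux Agent)
+    After=network.target
+
+    [Service]
+    Type=oneshot
+    ExecStart=/usr/local/azure-provisioning.sh
+    RemainAfterExit=true
+
+    [Install]
+    WantedBy=multi-user.target
+    EOF
+    ```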
+ Now the image is generalized and can be used to create a VM.
+
+6. Unmount the image.
+    ```bash
+    umount /mnt/dev/$imagedevice
+    ```
+
+    The prepared image no longer includes the Azure Linux Agent.
+
+7. Use the prepared image to deploy a confidential VM.
+
+    Follow the steps, starting from step 4, in [Create a custom image for Azure confidential VM](/azure/confidential-computing/how-to-create-custom-image-confidential-vm) to deploy the agentless confidential VM.
+
+> [!NOTE]
+> If you want to deploy confidential VM scale sets by using the custom image, some features related to autoscaling are restricted. While manual scaling rules continue to work as expected, autoscaling is limited because the custom image is agentless. For more details on the restrictions, see the documentation for the [provisioning agent](/azure/virtual-machines/linux/disable-provisioning). Alternatively, you can go to the **Metrics** tab in the Azure portal to confirm this behavior.
+
+## Next steps
+
+[Create a custom image for Azure confidential VM](/azure/confidential-computing/how-to-create-custom-image-confidential-vm)
confidential-computing How To Leverage Virtual Tpms In Azure Confidential Vms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/how-to-leverage-virtual-tpms-in-azure-confidential-vms.md
Title: How to leverage virtual TPMs in Azure confidential VMs
+ Title: Leverage virtual TPMs in Azure confidential VMs
description: Learn how to use the vTPM benefits after building trust in a confidential VM.
Last updated 08/02/2023 -+
-# How to leverage virtual TPMs in Azure confidential VMs
+# Leverage virtual TPMs in Azure confidential VMs
**Applies to:** :heavy_check_mark: Linux VMs
These steps list out which artifacts you need and how to get them:
The AMD Versioned Chip Endorsement Key (VCEK) is used to sign the AMD SEV-SNP report. The VCEK certificate allows you to verify that the report was signed by a genuine AMD CPU key. There are two ways to retrieve the certificate:
- a. Obtain the VCEK certificate by running the following command ΓÇô it obtains the cert from a well-known IMDS endpoint:
+   a. Obtain the VCEK certificate by running the following command, which obtains the cert from a well-known [Azure Instance Metadata Service](/azure/virtual-machines/instance-metadata-service) (IMDS) endpoint:
   ```bash
   curl -H Metadata:true http://169.254.169.254/metadata/certification > vcek
   cat ./vcek | jq -r '.vcekCert , .certificateChain' > ./vcek.pem
   ```
confidential-computing Overview Azure Products https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/overview-azure-products.md
Azure confidential computing can help you:
> [!VIDEO https://www.youtube.com/embed/ds48uwDaA-w] ## Azure offerings+
+Confidential computing support is expanding from foundational virtual machine, GPU, and container offerings up to data, virtual desktop, and managed HSM services, with many more planned based on customer demand.
++ Verifying that applications are running confidentially forms the very foundation of confidential computing. This verification is multi-pronged and relies on the following suite of Azure offerings: - [Microsoft Azure Attestation](../attestation/overview.md), a remote attestation service for validating the trustworthiness of multiple Trusted Execution Environments (TEEs) and verifying integrity of the binaries running inside the TEEs.
confidential-computing Quick Create Confidential Vm Arm Amd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/quick-create-confidential-vm-arm-amd.md
Last updated 04/12/2023 -+ ms.devlang: azurecli
confidential-computing Quick Create Confidential Vm Portal Amd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/quick-create-confidential-vm-portal-amd.md
Last updated 3/27/2022 -+ # Quickstart: Create confidential VM on AMD in the Azure portal
confidential-ledger Quickstart Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-ledger/quickstart-python.md
Microsoft Azure confidential ledger is a new and highly secure service for manag
## Prerequisites - An Azure subscription - [create one for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).-- [Python 3.6+](/azure/developer/python/configure-local-development-environment)-- [Azure CLI](/cli/azure/install-azure-cli) or [Azure PowerShell](/powershell/azure/install-azure-powershell)
+- Python versions that are [supported by the Azure SDK for Python](https://github.com/Azure/azure-sdk-for-python#prerequisites).
+- [Azure CLI](/cli/azure/install-azure-cli) or [Azure PowerShell](/powershell/azure/install-azure-powershell).
## Set up
confidential-ledger Verify Node Quotes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-ledger/verify-node-quotes.md
+
+ Title: Establish trust on Azure confidential ledger
+description: Learn how to establish trust on Azure confidential ledger by verifying the node quote
++ Last updated : 08/18/2023+++++
+# Establish trust on Azure confidential ledger
+
+An Azure confidential ledger node executes on top of a Trusted Execution Environment (TEE), such as Intel SGX, which guarantees the confidentiality of the data while in process. The trustworthiness of the platform and the binaries running inside it is guaranteed through a remote attestation process. An Azure confidential ledger requires a node to present a quote before joining the network. The quote report data contains the cryptographic hash of the node's identity public key and the MRENCLAVE value. The node is allowed to join the network if the quote is found to be valid and the MRENCLAVE value is one of the allowed values in the auditable governance.
+
+## Prerequisites
+
+- Install [CCF](https://microsoft.github.io/CCF/main/build_apps/install_bin.html) or the [CCF Python package](https://pypi.org/project/ccf/).
+- An Azure confidential ledger instance.
+
+## Verify node quote
+
+The node quote can be downloaded from `https://<ledgername>.confidential-ledger.azure.com` and verified by using the `oeverify` tool, which ships with the [Open Enclave SDK](https://github.com/openenclave/openenclave/blob/master/tools/oeverify/README.md), or by using the `verify_quote.sh` script, which is installed with CCF or with the CCF Python package. For complete details about the script and the supported parameters, see [verify_quote.sh](https://microsoft.github.io/CCF/main/use_apps/verify_quote.html).
+
+```bash
+verify_quote.sh https://<ledgername>.confidential-ledger.azure.com:443
+```
+The script checks if the cryptographic hash of the node's identity public key (DER encoded) matches the SGX report data and that the MRENCLAVE value present in the quote is trusted. A list of trusted MRENCLAVE values in the network can be downloaded from the `https://<ledgername>.confidential-ledger.azure.com/node/code` endpoint. An optional `mrenclave` parameter can be supplied to check if the node is running the trusted code. If supplied, the mrenclave value in the quote must match it exactly.
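+
+For example, the following hedged sketch lists the trusted measurements for a network and then verifies a node quote; `myledger` is a placeholder ledger name:
+```bash
+# Hedged sketch: download the trusted MRENCLAVE values, then verify the node
+# quote. See the linked verify_quote.sh documentation for the exact form of
+# the optional mrenclave parameter.
+curl -s https://myledger.confidential-ledger.azure.com/node/code
+verify_quote.sh https://myledger.confidential-ledger.azure.com:443
+```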
+
+## Next steps
+
+* [Overview of Microsoft Azure confidential ledger](overview.md)
+* [Azure confidential ledger architecture](architecture.md)
connectors Connectors Create Api Mq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-create-api-mq.md
ms.suite: integration Previously updated : 02/13/2023 Last updated : 08/23/2023 tags: connectors
The MQ connector provides a wrapper around a Microsoft MQ client, which includes
* MQ 7.5 * MQ 8.0
-* MQ 9.0, 9.1, and 9.2
+* MQ 9.0, 9.1, 9.2, and 9.3
## Connector technical reference
These steps use the Azure portal, but with the appropriate Azure Logic Apps exte
1. In the [Azure portal](https://portal.azure.com), open your blank logic app workflow in the designer.
-1. On the designer, select **Choose an operation**, if not selected.
+1. [Follow these general steps to add the MQ built-in trigger that you want](../logic-apps/create-workflow-with-trigger-or-action.md?tabs=standard#add-trigger). For more information, see [MQ built-in connector triggers](/azure/logic-apps/connectors/built-in/reference/mq/#triggers).
-1. Under the **Choose an operation** search box, select **Built-in**. In the search box, enter **mq**.
-
-1. From the triggers list, select the [MQ trigger](/azure/logic-apps/connectors/built-in/reference/mq/#triggers) that you want to use.
-
-1. Provide the [information to authenticate your connection](/azure/logic-apps/connectors/built-in/reference/mq/#authentication). When you're done, select **Create**.
+1. Provide the [required information to authenticate your connection](/azure/logic-apps/connectors/built-in/reference/mq/#authentication). When you're done, select **Create**.
1. When the trigger information box appears, provide the required [information for your trigger](/azure/logic-apps/connectors/built-in/reference/mq/#triggers).
The following steps use the Azure portal, but with the appropriate Azure Logic A
1. In the [Azure portal](https://portal.azure.com/), open your logic app workflow in the designer.
-1. In your workflow where you want to add an MQ action, follow one of these steps:
-
- * To add an action under the last step, select **New step**.
-
- * To add an action between steps, move your pointer over the connecting arrow so that the plus sign (**+**) appears. Select the plus sign, and then select **Add an action**.
+1. [Follow these general steps to add the MQ action that you want](../logic-apps/create-workflow-with-trigger-or-action.md?tabs=consumption#add-action). For more information, see [MQ connector actions](/connectors/mq/#actions).
-1. Under the **Choose an operation** search box, select **Enterprise**. In the search box, enter **mq**.
-
-1. From the actions list, select the [MQ action](/connectors/mq/#actions) that you want to use.
-
-1. Provide the [information to authenticate your connection](/connectors/mq/#creating-a-connection). When you're done, select **Create**.
+1. Provide the [required information to authenticate your connection](/connectors/mq/#creating-a-connection). When you're done, select **Create**.
1. When the action information box appears, provide the required [information for your action](/connectors/mq/#actions).
The steps to add and use an MQ action differ based on whether your workflow uses
1. In the [Azure portal](https://portal.azure.com/), open your logic app workflow in the designer.
-1. In your workflow where you want to add an MQ action, follow one of these steps:
-
- * To add an action under the last step, select the plus sign (**+**), and then select **Add an action**.
+1. [Follow these general steps to add the MQ built-in action that you want](../logic-apps/create-workflow-with-trigger-or-action.md?tabs=standard#add-action). For more information, see [MQ built-in connector actions](/azure/logic-apps/connectors/built-in/reference/mq/#actions).
- * To add an action between steps, select the plus sign (**+**) between those steps, and then select **Add an action**.
-
-1. On the **Add an action** pane, under the **Choose an operation** search box, select **Built-in**. In the search box, enter **mq**.
-
-1. From the actions list, select the [MQ action](/azure/logic-apps/connectors/built-in/reference/mq/#actions) that you want to use.
-
-1. Provide the [information to authenticate your connection](/azure/logic-apps/connectors/built-in/reference/mq/#authentication). When you're done, select **Create**.
+1. Provide the [required information to authenticate your connection](/azure/logic-apps/connectors/built-in/reference/mq/#authentication). When you're done, select **Create**.
1. When the action information box appears, provide the required [information for your action](/azure/logic-apps/connectors/built-in/reference/mq/#actions).
The steps to add and use an MQ action differ based on whether your workflow uses
1. In the [Azure portal](https://portal.azure.com/), open your logic app workflow in the designer.
-1. In your workflow where you want to add an MQ action, follow one of these steps:
-
- * To add an action under the last step, select **New step**.
-
- * To add an action between steps, move your mouse over the connecting arrow between those steps, select the plus sign (**+**) that appears between those steps, and then select **Add an action**.
-
-1. Under the **Choose an operation** search box, select **Azure**. In the search box, enter **mq**.
-
-1. From the actions list, select the [MQ action](/connectors/mq/#actions) that you want to use.
+1. [Follow these general steps to add the MQ action that you want](../logic-apps/create-workflow-with-trigger-or-action.md?tabs=standard#add-action). For more information, see [MQ connector actions](/connectors/mq/#actions).
-1. Provide the [information to authenticate your connection](/connectors/mq/#creating-a-connection). When you're done, select **Create**.
+1. Provide the [required information to authenticate your connection](/connectors/mq/#creating-a-connection). When you're done, select **Create**.
1. When the action information box appears, provide the required [information for your action](/connectors/mq/#actions).
connectors Connectors Create Api Office365 Outlook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-create-api-office365-outlook.md
Title: Connect to Office 365 Outlook
-description: Automate tasks and workflows that manage email, contacts, and calendars in Office 365 Outlook by using Azure Logic Apps.
+description: Connect to Office 365 Outlook from workflows in Azure Logic Apps.
ms.suite: integration Previously updated : 08/11/2021 Last updated : 08/23/2023 tags: connectors
-# Connect to Office 365 Outlook using Azure Logic Apps
+# Connect to Office 365 Outlook from Azure Logic Apps
-With [Azure Logic Apps](../logic-apps/logic-apps-overview.md) and the [Office 365 Outlook connector](/connectors/office365connector/), you can create automated tasks and workflows that manage your work or school account by building logic apps. For example, you can automate these tasks:
+
+To automate tasks for your Office 365 Outlook account in workflows using Azure Logic Apps, you can add operations from the [Office 365 Outlook connector](/connectors/office365connector/) to your workflow. For example, your workflow can perform the following tasks:
* Get, send, and reply to email. * Schedule meetings on your calendar. * Add and edit contacts.
-You can use any trigger to start your workflow, for example, when a new email arrives, when a calendar item is updated, or when an event happens in a difference service, such as Salesforce. You can use actions that respond to the trigger event, for example, send an email or create a new calendar event.
+This guide shows how to add an Office 365 Outlook trigger or action to your workflow in Azure Logic Apps.
+
+> [!NOTE]
+>
+> The Office 365 Outlook connector works only with a [work or school account](https://support.microsoft.com/office/what-account-to-use-with-office-and-you-need-one-914e6610-2763-47ac-ab36-602a81068235#bkmk_msavsworkschool), for example, @fabrikam.onmicrosoft.com.
+> If you have an @outlook.com or @hotmail.com account, use the [Outlook.com connector](../connectors/connectors-create-api-outlook.md).
+> To connect to Outlook with a different user account, such as a service account, see [Connect using other accounts](#connect-using-other-accounts).
+
+## Connector technical reference
+
+For information about this connector's operations and any limits, based on the connector's Swagger file, see the [connector's reference page](/connectors/office365/).
## Prerequisites
-* Your Microsoft Office 365 account for Outlook where you sign in with a [work or school account](https://support.microsoft.com/office/what-account-to-use-with-office-and-you-need-one-914e6610-2763-47ac-ab36-602a81068235#bkmk_msavsworkschool).
+* An Azure account and subscription. If you don't have an Azure subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
- You need these credentials so that you can authorize your workflow to access your Outlook account.
+* Your Microsoft Office 365 account for Outlook where you sign in with a [work or school account](https://support.microsoft.com/office/what-account-to-use-with-office-and-you-need-one-914e6610-2763-47ac-ab36-602a81068235#bkmk_msavsworkschool).
> [!NOTE]
- > If you have an @outlook.com or @hotmail.com account, use the [Outlook.com connector](../connectors/connectors-create-api-outlook.md).
- > To connect to Outlook with a different user account, such as a service account, see [Connect using other accounts](#connect-using-other-accounts).
>
- > If you're using [Microsoft Azure operated by 21Vianet](https://portal.azure.cn), Azure Active Directory (Azure AD) authentication
- > works only with an account for Microsoft Office 365 operated by 21Vianet (.cn), not .com accounts.
+ > If you're using [Microsoft Azure operated by 21Vianet](https://portal.azure.cn),
+ > Azure Active Directory (Azure AD) authentication works only with an account for
+ > Microsoft Office 365 operated by 21Vianet (.cn), not .com accounts.
-* An Azure account and subscription. If you don't have an Azure subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+* The logic app workflow from where you want to access your Outlook account. To add an Office 365 Outlook trigger, you have to start with a blank workflow. To add an Office 365 Outlook action, your workflow can start with any trigger.
+
+## Add an Office 365 Outlook trigger
+
+Based on whether you have a Consumption or Standard logic app workflow, follow the corresponding steps:
+
+### [Consumption](#tab/consumption)
-* The logic app where you want to access your Outlook account. To start your workflow with an Office 365 Outlook trigger, you need to have a blank logic app workflow. To add an Office 365 Outlook action to your workflow, your logic app workflow needs to already have a trigger.
+1. In the [Azure portal](https://portal.azure.com), open your blank logic app workflow in the designer.
-## Connector reference
+1. [Follow these general steps to add the Office 365 Outlook trigger](../logic-apps/create-workflow-with-trigger-or-action.md?tabs=consumption#add-trigger) that you want to your workflow.
-For technical details about this connector, such as triggers, actions, and limits, as described by the connector's Swagger file, see the [connector's reference page](/connectors/office365/).
+ This example continues with the trigger named **When an upcoming event is starting soon**. This *polling* trigger regularly checks for any updated calendar event in your email account, based on the specified schedule.
-## Add a trigger
+1. If prompted, sign in to your Office 365 Outlook account, which creates a connection. To connect with a different user account, such as a service account, see [Connect using other accounts](#connect-using-other-accounts).
-A [trigger](../logic-apps/logic-apps-overview.md#logic-app-concepts) is an event that starts the workflow in your logic app. This example logic app uses a "polling" trigger that checks for any updated calendar event in your email account, based on the specified interval and frequency.
+ > [!NOTE]
+ >
+ > Your connection doesn't expire until revoked, even if you change your sign-in credentials.
+ > For more information, see [Configurable token lifetimes in Azure Active Directory](../active-directory/develop/configurable-token-lifetimes.md).
+
+1. In the trigger information box, provide the required information, for example:
+
+ | Parameter | Required | Value | Description |
+ |--|-|-|-|
+   | **Calendar Id** | Yes | **Calendar** | The calendar to check |
+   | **Interval** | Yes | **15** | The number of time units to wait between checks |
+   | **Frequency** | Yes | **Minute** | The unit of time for the recurrence |
+
+ To add other available parameters, such as **Time zone**, open the **Add new parameter** list, and select the parameters that you want.
-1. In the [Azure portal](https://portal.azure.com), open your blank logic app in the visual designer.
+ ![Screenshot shows Azure portal, Consumption workflow, and trigger parameters.](./media/connectors-create-api-office365-outlook/calendar-settings-consumption.png)
-1. In the search box, enter `office 365 outlook` as your filter. This example selects **When an upcoming event is starting soon**.
+1. Save your workflow. On the designer toolbar, select **Save**.
- ![Select trigger to start your logic app](./media/connectors-create-api-office365-outlook/office365-trigger.png)
+### [Standard](#tab/standard)
-1. If you don't have an active connection to your Outlook account, you're prompted to sign in and create that connection. To connect to Outlook with a different user account, such as a service account, see [Connect using other accounts](#connect-using-other-accounts). Otherwise, provide the information for the trigger's properties.
+1. In the [Azure portal](https://portal.azure.com), open your blank logic app workflow in the designer.
+
+1. [Follow these general steps to add the Office 365 Outlook trigger](../logic-apps/create-workflow-with-trigger-or-action.md?tabs=standard#add-trigger) that you want to your workflow.
+
+ This example continues with the trigger named **When an upcoming event is starting soon**. This *polling* trigger regularly checks for any updated calendar event in your email account, based on the specified schedule.
+
+1. If prompted, sign in to your Office 365 Outlook account, which creates a connection. To connect with a different user account, such as a service account, see [Connect using other accounts](#connect-using-other-accounts).
> [!NOTE]
+ >
> Your connection doesn't expire until revoked, even if you change your sign-in credentials. > For more information, see [Configurable token lifetimes in Azure Active Directory](../active-directory/develop/configurable-token-lifetimes.md).
- This example selects the calendar that the trigger checks, for example:
+1. In the trigger information box, provide the required information, for example:
- ![Configure the trigger's properties](./media/connectors-create-api-office365-outlook/select-calendar.png)
+ | Parameter | Required | Value | Description |
+ |--|-|-|-|
+   | **Calendar Id** | Yes | **Calendar** | The calendar to check |
+   | **Interval** | Yes | **15** | The number of time units to wait between checks |
+   | **Frequency** | Yes | **Minute** | The unit of time for the recurrence |
-1. In the trigger, set the **Frequency** and **Interval** values. To add other available trigger properties, such as **Time zone**, select those properties from the **Add new parameter** list.
+ To add other available parameters, such as **Time zone**, open the **Add new parameter** list, and select the parameters that you want.
- For example, if you want the trigger to check the calendar every 15 minutes, set **Frequency** to **Minute**, and set **Interval** to `15`.
+ ![Screenshot shows Azure portal, Standard workflow, and trigger parameters.](./media/connectors-create-api-office365-outlook/calendar-settings-standard.png)
- ![Set frequency and interval for the trigger](./media/connectors-create-api-office365-outlook/calendar-settings.png)
+1. Save your workflow. On the designer toolbar, select **Save**.
-1. On the designer toolbar, select **Save**.
+
-Now add an action that runs after the trigger fires. For example, you can add the Twilio **Send message** action, which sends a text when a calendar event starts in 15 minutes.
+You can now add any other actions that your workflow requires. For example, you can add the Twilio **Send message** action, which sends a text when a calendar event starts in 15 minutes.
-## Add an action
+## Add an Office 365 Outlook action
-An [action](../logic-apps/logic-apps-overview.md#logic-app-concepts) is an operation that's run by the workflow in your logic app. This example logic app creates a new contact in Office 365 Outlook. You can use the output from another trigger or action to create the contact. For example, suppose your logic app uses the Salesforce trigger, **When a record is created**. You can add the Office 365 Outlook **Create contact** action and use the outputs from the trigger to create the new contact.
+Based on whether you have a Consumption or Standard logic app workflow, follow the corresponding steps:
-1. In the [Azure portal](https://portal.azure.com), open your logic app in the visual designer.
+### [Consumption](#tab/consumption)
-1. To add an action as the last step in your workflow, select **New step**.
+1. In the [Azure portal](https://portal.azure.com), open your logic app and workflow in the designer.
- To add an action between steps, move your pointer over the arrow between those steps. Select the plus sign (**+**) that appears, and then select **Add an action**.
+ This example continues with the Office 365 Outlook trigger named **When a new email arrives**.
-1. In the search box, enter `office 365 outlook` as your filter. This example selects **Create contact**.
+1. [Follow these general steps to add the Office 365 Outlook action](../logic-apps/create-workflow-with-trigger-or-action.md?tabs=consumption#add-action) that you want to your workflow.
- ![Select the action to run in your logic app](./media/connectors-create-api-office365-outlook/office365-actions.png)
+ This example continues with the Office 365 Outlook action named **Create contact**. This operation creates a new contact in Office 365 Outlook. You can use the output from a previous operation in the workflow to create the contact.
-1. If you don't have an active connection to your Outlook account, you're prompted to sign in and create that connection. To connect to Outlook with a different user account, such as a service account, see [Connect using other accounts](#connect-using-other-accounts). Otherwise, provide the information for the action's properties.
+1. If prompted, sign in to your Office 365 Outlook account, which creates a connection. To connect with a different user account, such as a service account, see [Connect using other accounts](#connect-using-other-accounts).
> [!NOTE]
+ >
> Your connection doesn't expire until revoked, even if you change your sign-in credentials. > For more information, see [Configurable token lifetimes in Azure Active Directory](../active-directory/develop/configurable-token-lifetimes.md).
- This example selects the contacts folder where the action creates the new contact, for example:
+1. In the action information box, provide the required information, for example:
+
+ | Parameter | Required | Value | Description |
+ |--|-|-|-|
+ | **Folder Id** | Yes | **Contacts** | The folder where the action creates the new contact |
+ | **Given name** | Yes | <*contact-name*> | The name to give the contact |
+ | **Home phones** | Yes | <*home-phone-number*> | The home phone number for the contact |
- ![Configure the action's properties](./media/connectors-create-api-office365-outlook/select-contacts-folder.png)
+ This example selects the **Contacts** folder where the action creates the new contact and uses trigger outputs for the remaining parameter values:
- To add other available action properties, select those properties from the **Add new parameter** list.
+ ![Screenshot shows Azure portal, Consumption workflow, and action parameters.](./media/connectors-create-api-office365-outlook/create-contact-consumption.png)
-1. On the designer toolbar, select **Save**.
+ To add other available parameters, open the **Add new parameter** list, and select the parameters that you want.
+
+1. Save your workflow. On the designer toolbar, select **Save**.
+
+### [Standard](#tab/standard)
+
+1. In the [Azure portal](https://portal.azure.com), open your logic app and workflow in the designer.
+
+ This example continues with the Office 365 Outlook trigger named **When a new email arrives**.
+
+1. [Follow these general steps to add the Office 365 Outlook action](../logic-apps/create-workflow-with-trigger-or-action.md?tabs=standard#add-action) that you want to your workflow.
+
+ This example continues with the Office 365 Outlook action named **Create contact**. This operation creates a new contact in Office 365 Outlook. You can use the output from a previous operation in the workflow to create the contact.
+
+1. If prompted, sign in to your Office 365 Outlook account, which creates a connection. To connect with a different user account, such as a service account, see [Connect using other accounts](#connect-using-other-accounts).
+
+ > [!NOTE]
+ >
+ > Your connection doesn't expire until revoked, even if you change your sign-in credentials.
+ > For more information, see [Configurable token lifetimes in Azure Active Directory](../active-directory/develop/configurable-token-lifetimes.md).
+
+1. In the action information box, provide the required information, for example:
+
+ | Parameter | Required | Value | Description |
+ |--|-|-|-|
+ | **Folder Id** | Yes | **Contacts** | The folder where the action creates the new contact |
+ | **Given name** | Yes | <*contact-name*> | The name to give the contact |
+ | **Home phones** | Yes | <*home-phone-number*> | The home phone number for the contact |
+
+ This example selects the **Contacts** folder where the action creates the new contact and uses trigger outputs for the remaining parameter values:
+
+ ![Screenshot shows Azure portal, Standard workflow, and action parameters.](./media/connectors-create-api-office365-outlook/create-contact-standard.png)
+
+ To add other available parameters, open the **Add new parameter** list, and select the parameters that you want.
+
+1. Save your workflow. On the designer toolbar, select **Save**.
++ <a name="connect-using-other-accounts"></a>
If you try connecting to Outlook by using a different account than the one curre
* Set up the other account with the **Contributor** role in your logic app's resource group.
- 1. On your logic app's resource group menu, select **Access control (IAM)**. Set up the other account with the **Contributor** role.
+ 1. In the Azure portal, open your logic app's resource group.
+
+ 1. On the resource group menu, select **Access control (IAM)**.
+
+ 1. Assign the **Contributor** role to the other account.
For more information, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md).
If you try connecting to Outlook by using a different account than the one curre
## Next steps * [Managed connectors for Azure Logic Apps](managed.md)
-* [Built-in connectors for Azure Logic Apps](built-in.md)
-* [What are connectors in Azure Logic Apps](introduction.md)
+* [Built-in connectors for Azure Logic Apps](built-in.md)
connectors Connectors Create Api Servicebus https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-create-api-servicebus.md
To increase the timeout for sending a message, [add the `ServiceProviders.Servic
<a name="permissions-connection-string"></a>
-## Step 1 - Check access to Service Bus namespace
+## Step 1: Check access to Service Bus namespace
To confirm that your logic app resource has permissions to access your Service Bus namespace, use the following steps:
To confirm that your logic app resource has permissions to access your Service B
![Screenshot showing the Azure portal, Service Bus namespace, and 'Shared access policies' selected.](./media/connectors-create-api-azure-service-bus/azure-service-bus-namespace.png)
-## Step 2 - Get connection authentication requirements
+## Step 2: Get connection authentication requirements
Later, when you add a Service Bus trigger or action for the first time, you're prompted for connection information, including the connection authentication type. Based on your logic app workflow type, Service Bus connector version, and selected authentication type, you'll need the following items:
connectors File System https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/file-system.md
+
+ Title: Connect to on-premises file systems
+description: Connect to file systems on premises from workflows in Azure Logic Apps.
+
+ms.suite: integration
++ Last updated : 08/17/2023++
+# Connect to on-premises file systems from workflows in Azure Logic Apps
++
+This how-to guide shows how to access an on-premises file share from a workflow in Azure Logic Apps by using the File System connector. You can then create automated workflows that run when triggered by events in your file share or in other systems and run actions to manage your files. The connector provides the following capabilities:
+
+- Create, get, append, update, and delete files.
+- List files in folders or root folders.
+- Get file content and metadata.
+
+In this how-to guide, the example scenarios demonstrate the following tasks:
+
+- Trigger a workflow when a file is created or added to a file share, and then send an email.
+- Trigger a workflow when copying a file from a Dropbox account to a file share, and then send an email.
+
+## Limitations and known issues
+
+- The File System connector currently supports only Windows file systems on Windows operating systems.
+- Mapped network drives aren't supported.
+
+## Connector technical reference
+
+The File System connector has different versions, based on [logic app type and host environment](../logic-apps/logic-apps-overview.md#resource-environment-differences).
+
+| Logic app | Environment | Connector version |
+|--|-|-|
+| **Consumption** | Multi-tenant Azure Logic Apps | Managed connector, which appears in the designer under the **Standard** label. For more information, review the following documentation: <br><br>- [File System managed connector reference](/connectors/filesystem/) <br>- [Managed connectors in Azure Logic Apps](../connectors/managed.md) |
+| **Consumption** | Integration service environment (ISE) | Managed connector, which appears in the designer under the **Standard** label, and the ISE version, which has different message limits than the Standard class. For more information, review the following documentation: <br><br>- [File System managed connector reference](/connectors/filesystem/) <br>- [ISE message limits](../logic-apps/logic-apps-limits-and-config.md#message-size-limits) <br>- [Managed connectors in Azure Logic Apps](../connectors/managed.md) |
+| **Standard** | Single-tenant Azure Logic Apps and App Service Environment v3 (Windows plans only) | Managed connector, which appears in the connector gallery under **Runtime** > **Shared**, and the built-in connector, which appears in the connector gallery under **Runtime** > **In-App** and is [service provider-based](../logic-apps/custom-connector-overview.md#service-provider-interface-implementation). The built-in connector differs in the following ways: <br><br>- The built-in connector supports only Standard logic apps that run in an App Service Environment v3 with Windows plans only. <br><br>- The built-in version can connect directly to a file share and access Azure virtual networks by using a connection string without an on-premises data gateway. <br><br>For more information, review the following documentation: <br><br>- [File System managed connector reference](/connectors/filesystem/) <br>- [File System built-in connector reference](/azure/logic-apps/connectors/built-in/reference/filesystem/) <br>- [Built-in connectors in Azure Logic Apps](../connectors/built-in.md) |
+
+## Prerequisites
+
+* An Azure account and subscription. If you don't have an Azure subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+
+* To connect to your file share, different requirements apply, based on your logic app and the hosting environment:
+
+ - Consumption logic app workflows
+
+ - In multi-tenant Azure Logic Apps, you need to meet the following requirements, if you haven't already:
+
+ 1. [Install the on-premises data gateway on a local computer](../logic-apps/logic-apps-gateway-install.md).
+
+        The File System managed connector requires that your gateway installation and file system server exist in the same Windows domain.
+
+ 1. [Create an on-premises data gateway resource in Azure](../logic-apps/logic-apps-gateway-connection.md).
+
+ 1. After you add a File System managed connector trigger or action to your workflow, select the data gateway resource that you previously created so you can connect to your file system.
+
+ - In an ISE, you don't need the on-premises data gateway. Instead, you can use the ISE-versioned File System connector.
+
+ - Standard logic app workflows
+
+ You can use the File System built-in connector or managed connector.
+
+ * To use the File System managed connector, follow the same requirements as a Consumption logic app workflow in multi-tenant Azure Logic Apps.
+
+ * To use the File System built-in connector, your Standard logic app workflow must run in App Service Environment v3, but doesn't require the data gateway resource.
+
+* Access to the computer that has the file system you want to use. For example, if you install the data gateway on the same computer as your file system, you need the account credentials for that computer.
+
+* To follow the example scenario in this how-to guide, you need an email account from a provider that's supported by Azure Logic Apps, such as Office 365 Outlook, Outlook.com, or Gmail. For other providers, [review other supported email connectors](/connectors/connector-reference/connector-reference-logicapps-connectors). This example uses the Office 365 Outlook connector with a work or school account. If you use another email account, the overall steps are the same, but your UI might slightly differ.
+
+ > [!IMPORTANT]
+ >
+ > If you want to use the Gmail connector, only G-Suite business accounts can use this connector without restriction in logic apps.
+ > If you have a Gmail consumer account, you can use this connector with only specific Google-approved services, or you can
+ > [create a Google client app to use for authentication with your Gmail connector](/connectors/gmail/#authentication-and-bring-your-own-application).
+ > For more information, see [Data security and privacy policies for Google connectors in Azure Logic Apps](../connectors/connectors-google-data-security-privacy-policy.md).
+
+* For the example File System action scenario, you need a [Dropbox account](https://www.dropbox.com/), which you can sign up for free.
+
+* The logic app workflow where you want to access your file share. To start your workflow with a File System trigger, you have to start with a blank workflow. To add a File System action, start your workflow with any trigger.
+
+<a name="add-file-system-trigger"></a>
+
+## Add a File System trigger
+
+### [Consumption](#tab/consumption)
+
+1. In the [Azure portal](https://portal.azure.com), open your blank logic app workflow in the designer.
+
+1. In the designer, [follow these general steps to add the **File System** trigger that you want to your workflow](../logic-apps/create-workflow-with-trigger-or-action.md?tabs=consumption#add-trigger).
+
+ For more information, see [File System triggers](/connectors/filesystem/#triggers). This example continues with the trigger named **When a file is created**.
+
+1. In the connection information box, provide the following information as required:
+
+ | Property | Required | Value | Description |
+ |-|-|-|-|
+ | **Connection name** | Yes | <*connection-name*> | The name to use for your connection |
+   | **Root folder** | Yes | <*root-folder-name*> | The root folder for your file system, which is usually the main parent folder and is the folder used for the relative paths with all triggers that work on files. <br><br>For example, if you installed the on-premises data gateway, use the local folder on the computer with the data gateway installation. Or, use the folder for the network share where the computer can access that folder, for example, **`\\PublicShare\MyFileSystem`**. |
+ | **Authentication Type** | No | <*auth-type*> | The type of authentication that your file system server uses, which is **Windows** |
+   | **Username** | Yes | <*domain-and-username*> | The domain and username for the computer where you have your file system. <br><br>For the managed File System connector, use one of the following values with the backslash (**`\`**): <br><br>- **<*domain*>\\<*username*>** <br>- **<*local-computer*>\\<*username*>** <br><br>For example, if your file system folder is on the same computer as the on-premises data gateway installation, you can use **<*local-computer*>\\<*username*>**. <br><br>For the ISE-based File System connector, use the forward slash instead (**`/`**): <br><br>- **<*domain*>/<*username*>** <br>- **<*local-computer*>/<*username*>** |
+ | **Password** | Yes | <*password*> | The password for the computer where you have your file system |
+ | **gateway** | No | - <*Azure-subscription*> <br>- <*gateway-resource-name*> | This section applies only to the managed File System connector: <br><br>- **Subscription**: The Azure subscription associated with the data gateway resource <br>- **Connection Gateway**: The data gateway resource |
+
+ The following example shows the connection information for the File System managed connector trigger:
+
+ ![Screenshot showing Consumption workflow designer and connection information for File System managed connector trigger.](media/connect-file-systems/file-system-connection-consumption.png)
+
+ The following example shows the connection information for the File System ISE-based trigger:
+
+ ![Screenshot showing Consumption workflow designer and connection information for File System ISE-based connector trigger.](media/connect-file-systems/file-system-connection-ise.png)
+
+1. When you're done, select **Create**.
+
+ Azure Logic Apps creates and tests your connection, making sure that the connection works properly. If the connection is set up correctly, the setup options appear for your selected trigger.
+
+1. Continue building your workflow.
+
+ 1. Provide the required information for your trigger.
+
+ For this example, select the folder path on your file system server to check for a newly created file. Specify the number of files to return and how often you want to check.
+
+ ![Screenshot showing Consumption workflow designer and the trigger named When a file is created.](media/connect-file-systems/trigger-file-system-when-file-created-consumption.png)
+
+   1. To test your workflow, add an Outlook action that sends you an email when a file is created on the file system in the specified folder. Enter the email recipients, subject, and body. For testing, you can use your own email address.
+
+ ![Screenshot showing Consumption workflow designer, managed connector trigger named When a file is created, and action named Send an email.](media/connect-file-systems/trigger-file-system-send-email-consumption.png)
+
+ > [!TIP]
+ >
+ > To add outputs from previous steps in the workflow, select inside the trigger's edit boxes.
+ > When the dynamic content list appears, select from the available outputs.
+
+1. When you're done, save your workflow.
+
+1. To test your workflow, upload a file, which triggers the workflow.
+
+If successful, your workflow sends an email about the new file.
+
+### [Standard](#tab/standard)
+
+#### Built-in connector trigger
+
+The following steps apply only to Standard logic app workflows in an App Service Environment v3 with Windows plans only.
+
+1. In the [Azure portal](https://portal.azure.com), open your blank logic app workflow in the designer.
+
+1. In the designer, [follow these general steps to add the **File System** built-in trigger that you want to your workflow](../logic-apps/create-workflow-with-trigger-or-action.md?tabs=standard#add-trigger).
+
+ For more information, see [File System triggers](/azure/logic-apps/connectors/built-in/reference/filesystem/#triggers). This example continues with the trigger named **When a file is added**.
+
+1. In the connection information box, provide the following information as required:
+
+ | Property | Required | Value | Description |
+ |-|-|-|-|
+ | **Connection name** | Yes | <*connection-name*> | The name to use for your connection |
+   | **Root folder** | Yes | <*root-folder-name*> | The root folder for your file system, which is usually the main parent folder and is the folder used for the relative paths with all triggers that work on files. <br><br>For example, if you installed the on-premises data gateway, use the local folder on the computer with the data gateway installation. Or, use the folder for the network share where the computer can access that folder, for example, **`\\PublicShare\MyFileSystem`**. |
+   | **Username** | Yes | <*domain-and-username*> | The domain and username for the computer where you have your file system. Use one of the following values with the backslash (**`\`**): <br><br>- **<*domain*>\\<*username*>** <br>- **<*local-computer*>\\<*username*>** |
+ | **Password** | Yes | <*password*> | The password for the computer where you have your file system |
+
+ The following example shows the connection information for the File System built-in connector trigger:
+
+ ![Screenshot showing Standard workflow designer and connection information for File System built-in connector trigger.](media/connect-file-systems/trigger-file-system-connection-built-in-standard.png)
+
+1. When you're done, select **Create**.
+
+ Azure Logic Apps creates and tests your connection, making sure that the connection works properly. If the connection is set up correctly, the setup options appear for your selected trigger.
+
+1. Continue building your workflow.
+
+ 1. Provide the required information for your trigger.
+
+ For this example, select the folder path on your file system server to check for a newly added file. Specify how often you want to check.
+
+ ![Screenshot showing Standard workflow designer and information for the trigger named When a file is added.](media/connect-file-systems/trigger-when-file-added-built-in-standard.png)
+
+   1. To test your workflow, add an Outlook action that sends you an email when a file is added to the file system in the specified folder. Enter the email recipients, subject, and body. For testing, you can use your own email address.
+
+ ![Screenshot showing Standard workflow designer, managed connector trigger named When a file is added, and action named Send an email.](media/connect-file-systems/trigger-send-email-built-in-standard.png)
+
+ > [!TIP]
+ >
+ > To add outputs from previous steps in the workflow, select inside the trigger's edit boxes.
+ > After the dynamic content list and expression editor options appear, select the dynamic content
+ > list (lightning icon). When the dynamic content list appears, select from the available outputs.
+
+1. When you're done, save your workflow.
+
+1. To test your workflow, upload a file, which triggers the workflow.
+
+If successful, your workflow sends an email about the new file.
+
+#### Managed connector trigger
+
+1. In the [Azure portal](https://portal.azure.com), open your blank logic app workflow in the designer.
+
+1. In the designer, [follow these general steps to add the **File System** managed trigger that you want to your workflow](../logic-apps/create-workflow-with-trigger-or-action.md?tabs=standard#add-trigger).
+
+ For more information, see [File System triggers](/connectors/filesystem/#triggers). This example continues with the trigger named **When a file is created**.
+
+1. In the connection information box, provide the following information as required:
+
+ | Property | Required | Value | Description |
+ |-|-|-|-|
+ | **Connection name** | Yes | <*connection-name*> | The name to use for your connection |
+   | **Root folder** | Yes | <*root-folder-name*> | The root folder for your file system, which is usually the main parent folder and is the folder used for the relative paths with all triggers that work on files. <br><br>For example, if you installed the on-premises data gateway, use the local folder on the computer with the data gateway installation. Or, use the folder for the network share where the computer can access that folder, for example, **`\\PublicShare\MyFileSystem`**. |
+ | **Authentication Type** | No | <*auth-type*> | The type of authentication that your file system server uses, which is **Windows** |
+ | **Username** | Yes | <*domain-and-username*> | The domain and username for the computer where you have your file system. <br><br>For the managed File System connector, use one of the following values with the backslash (**`\`**): <br><br>- **<*domain*>\\<*username*>** <br>- **<*local-computer*>\\<*username*>** <br><br>For example, if your file system folder is on the same computer as the on-premises data gateway installation, you can use **<*local-computer*>\\<*username*>**. <br><br>For the ISE-based File System connector, use the forward slash (**`/`**) instead: <br><br>- **<*domain*>/<*username*>** <br>- **<*local-computer*>/<*username*>** |
+ | **Password** | Yes | <*password*> | The password for the computer where you have your file system |
+ | **gateway** | No | - <*Azure-subscription*> <br>- <*gateway-resource-name*> | This section applies only to the managed File System connector: <br><br>- **Subscription**: The Azure subscription associated with the data gateway resource <br>- **Connection Gateway**: The data gateway resource |
+
+ The following example shows the connection information for the File System managed connector trigger:
+
+ ![Screenshot showing Standard workflow designer and connection information for File System managed connector trigger.](media/connect-file-systems/trigger-file-system-connection-managed-standard.png)
+
+1. When you're done, select **Create**.
+
+ Azure Logic Apps creates and tests your connection, making sure that the connection works properly. If the connection is set up correctly, the setup options appear for your selected trigger.
+
+1. Continue building your workflow.
+
+ 1. Provide the required information for your trigger.
+
+ For this example, select the folder path on your file system server to check for a newly created file. Specify the number of files to return and how often you want to check.
+
+ ![Screenshot showing Standard workflow designer and managed connector trigger named When a file is created.](media/connect-file-systems/trigger-when-file-created-managed-standard.png)
+
+ 1. To test your workflow, add an Outlook action that sends you an email when a file is created on the file system in the specified folder. Enter the email recipients, subject, and body. For testing, you can use your own email address.
+
+ ![Screenshot showing Standard workflow designer, managed connector trigger named When a file is created, and action named Send an email.](media/connect-file-systems/trigger-send-email-managed-standard.png)
+
+ > [!TIP]
+ >
+ > To add outputs from previous steps in the workflow, select inside the trigger's edit boxes.
+ > After the dynamic content list and expression editor options appear, select the dynamic content
+ > list (lightning icon). When the dynamic content list appears, select from the available outputs.
+
+1. When you're done, save your workflow.
+
+1. To test your workflow, upload a file, which triggers the workflow.
+
+If successful, your workflow sends an email about the new file.
+++
+<a name="add-file-system-action"></a>
+
+## Add a File System action
+
+The example logic app workflow starts with the [Dropbox trigger](/connectors/dropbox/#triggers), but you can use any trigger that you want.
+
+### [Consumption](#tab/consumption)
+
+1. In the [Azure portal](https://portal.azure.com), open your logic app workflow in the designer.
+
+1. In the designer, [follow these general steps to add the **File System** action that you want to your workflow](../logic-apps/create-workflow-with-trigger-or-action.md?tabs=consumption#add-action).
+
+ For more information, see [File System actions](/connectors/filesystem/#actions). This example continues with the action named **Create file**.
+
+1. In the connection information box, provide the following information as required:
+
+ | Property | Required | Value | Description |
+ |-|-|-|-|
+ | **Connection name** | Yes | <*connection-name*> | The name to use for your connection |
+ | **Root folder** | Yes | <*root-folder-name*> | The root folder for your file system, which is usually the main parent folder and is the folder used for the relative paths with all triggers that work on files. <br><br>For example, if you installed the on-premises data gateway, use the local folder on the computer with the data gateway installation. Or, use the folder for the network share where the computer can access that folder, for example, **`\\PublicShare\\MyFileSystem`**. |
+ | **Authentication Type** | No | <*auth-type*> | The type of authentication that your file system server uses, which is **Windows** |
+ | **Username** | Yes | <*domain-and-username*> | The domain and username for the computer where you have your file system. <br><br>For the managed File System connector, use one of the following values with the backslash (**`\`**): <br><br>- **<*domain*>\\<*username*>** <br>- **<*local-computer*>\\<*username*>** <br><br>For example, if your file system folder is on the same computer as the on-premises data gateway installation, you can use **<*local-computer*>\\<*username*>**. <br><br>For the ISE-based File System connector, use the forward slash (**`/`**) instead: <br><br>- **<*domain*>/<*username*>** <br>- **<*local-computer*>/<*username*>** |
+ | **Password** | Yes | <*password*> | The password for the computer where you have your file system |
+ | **gateway** | No | - <*Azure-subscription*> <br>- <*gateway-resource-name*> | This section applies only to the managed File System connector: <br><br>- **Subscription**: The Azure subscription associated with the data gateway resource <br>- **Connection Gateway**: The data gateway resource |
+
+ The following example shows the connection information for the File System managed connector action:
+
+ ![Screenshot showing connection information for File System managed connector action.](media/connect-file-systems/file-system-connection-consumption.png)
+
+ The following example shows the connection information for the File System ISE-based connector action:
+
+ ![Screenshot showing connection information for File System ISE-based connector action.](media/connect-file-systems/file-system-connection-ise.png)
+
+1. When you're done, select **Create**.
+
+ Azure Logic Apps creates and tests your connection, making sure that the connection works properly. If the connection is set up correctly, the setup options appear for your selected action.
+
+1. Continue building your workflow.
+
+ 1. Provide the required information for your action.
+
+ For this example, select the folder path on your file system server to use, which is the root folder here. Enter the file name and content, based on the file uploaded to Dropbox.
+
+ ![Screenshot showing Consumption workflow designer and the File System managed connector action named Create file.](media/connect-file-systems/action-file-system-create-file-consumption.png)
+
+ > [!TIP]
+ >
+ > To add outputs from previous steps in the workflow, select inside the action's edit boxes.
+ > When the dynamic content list appears, select from the available outputs.
+
+ 1. To test your workflow, add an Outlook action that sends you an email when the File System action creates a file. Enter the email recipients, subject, and body. For testing, you can use your own email address.
+
+ ![Screenshot showing Consumption workflow designer, managed connector "Create file" action, and "Send an email" action.](media/connect-file-systems/action-file-system-send-email-consumption.png)
+
+1. When you're done, save your workflow.
+
+1. To test your workflow, upload a file, which triggers the workflow.
+
+If successful, your workflow creates a file on your file system server, based on the uploaded file in Dropbox, and sends an email about the created file.
+
+### [Standard](#tab/standard)
+
+#### Built-in connector action
+
+These steps apply only to Standard logic apps in an App Service Environment v3 with Windows plans.
+
+1. In the [Azure portal](https://portal.azure.com), open your logic app workflow in the designer.
+
+1. In the designer, [follow these general steps to add the **File System** action that you want to your workflow](../logic-apps/create-workflow-with-trigger-or-action.md?tabs=standard#add-action).
+
+ For more information, see [File System actions](/azure/logic-apps/connectors/built-in/reference/filesystem/#actions). This example continues with the action named **Create file**.
+
+1. In the connection information box, provide the following information as required:
+
+ | Property | Required | Value | Description |
+ |-|-|-|-|
+ | **Connection name** | Yes | <*connection-name*> | The name to use for your connection |
+ | **Root folder** | Yes | <*root-folder-name*> | The root folder for your file system, which is usually the main parent folder and is the folder used for the relative paths with all triggers that work on files. <br><br>For example, if you installed the on-premises data gateway, use the local folder on the computer with the data gateway installation. Or, use the folder for the network share where the computer can access that folder, for example, **`\\PublicShare\\MyFileSystem`**. |
+ | **Username** | Yes | <*domain-and-username*> | The domain and username for the computer where you have your file system. <br><br>For the managed File System connector, use one of the following values with the backslash (**`\`**): <br><br>- **<*domain*>\\<*username*>** <br>- **<*local-computer*>\\<*username*>** |
+ | **Password** | Yes | <*password*> | The password for the computer where you have your file system |
+
+ The following example shows the connection information for the File System built-in connector action:
+
+ ![Screenshot showing Standard workflow designer and connection information for File System built-in connector action.](media/connect-file-systems/action-file-system-connection-built-in-standard.png)
+
+ Azure Logic Apps creates and tests your connection, making sure that the connection works properly. If the connection is set up correctly, the setup options appear for your selected action.
+
+1. Continue building your workflow.
+
+ 1. Provide the required information for your action. For this example, follow these steps:
+
+ 1. Enter the path and name for the file that you want to create, including the file name extension. Make sure the path is relative to the root folder.
+
+ 1. To specify the content from the file created on Dropbox, from the **Add a parameter** list, select **File content**.
+
+ 1. After the **File content** parameter appears on the action information pane, select inside the parameter's edit box.
+
+ 1. After the dynamic content list and expression editor options appear, select the dynamic content list (lightning icon). From the list that appears, under the **When a file is created** trigger section, select **File Content**.
+
+ When you're done, the **File Content** trigger output appears in the **File content** parameter:
+
+ ![Screenshot showing Standard workflow designer and the File System built-in connector "Create file" action.](media/connect-file-systems/action-file-system-create-file-built-in-standard.png)
+
+ 1. To test your workflow, add an Outlook action that sends you an email when the File System action creates a file. Enter the email recipients, subject, and body. For testing, you can use your own email address.
+
+ ![Screenshot showing Standard workflow designer, built-in connector "Create file" action, and "Send an email" action.](media/connect-file-systems/action-file-system-send-email-built-in-standard.png)
+
+1. When you're done, save your workflow.
+
+1. To test your workflow, upload a file, which triggers the workflow.
+
+If successful, your workflow creates a file on your file system server, based on the uploaded file in Dropbox, and sends an email about the created file.
+
+#### Managed connector action
+
+1. In the [Azure portal](https://portal.azure.com), open your logic app workflow in the designer.
+
+1. In the designer, [follow these general steps to add the **File System** action that you want to your workflow](../logic-apps/create-workflow-with-trigger-or-action.md?tabs=standard#add-action).
+
+ For more information, see [File System actions](/connectors/filesystem/#actions). This example continues with the action named **Create file**.
+
+1. In the connection information box, provide the following information as required:
+
+ | Property | Required | Value | Description |
+ |-|-|-|-|
+ | **Connection name** | Yes | <*connection-name*> | The name to use for your connection |
+ | **Root folder** | Yes | <*root-folder-name*> | The root folder for your file system, which is usually the main parent folder and is the folder used for the relative paths with all triggers that work on files. <br><br>For example, if you installed the on-premises data gateway, use the local folder on the computer with the data gateway installation. Or, use the folder for the network share where the computer can access that folder, for example, **`\\PublicShare\\MyFileSystem`**. |
+ | **Authentication Type** | No | <*auth-type*> | The type of authentication that your file system server uses, which is **Windows** |
+ | **Username** | Yes | <*domain-and-username*> | The domain and username for the computer where you have your file system. <br><br>For the managed File System connector, use one of the following values with the backslash (**`\`**): <br><br>- **<*domain*>\\<*username*>** <br>- **<*local-computer*>\\<*username*>** <br><br>For example, if your file system folder is on the same computer as the on-premises data gateway installation, you can use **<*local-computer*>\\<*username*>**. <br><br>For the ISE-based File System connector, use the forward slash (**`/`**) instead: <br><br>- **<*domain*>/<*username*>** <br>- **<*local-computer*>/<*username*>** |
+ | **Password** | Yes | <*password*> | The password for the computer where you have your file system |
+ | **gateway** | No | - <*Azure-subscription*> <br>- <*gateway-resource-name*> | This section applies only to the managed File System connector: <br><br>- **Subscription**: The Azure subscription associated with the data gateway resource <br>- **Connection Gateway**: The data gateway resource |
+
+ The following example shows the connection information for the File System managed connector action:
+
+ ![Screenshot showing connection information for File System managed connector action.](media/connect-file-systems/action-file-system-connection-managed-standard.png)
+
+ Azure Logic Apps creates and tests your connection, making sure that the connection works properly. If the connection is set up correctly, the setup options appear for your selected action.
+
+1. Continue building your workflow.
+
+ 1. Provide the required information for your action. For this example, follow these steps:
+
+ 1. Enter the path and name for the file that you want to create, including the file name extension. Make sure the path is relative to the root folder.
+
+ 1. To specify the content from the file created on Dropbox, from the **Add a parameter** list, select **File content**.
+
+ 1. After the **File content** parameter appears on the action information pane, select inside the parameter's edit box.
+
+ 1. After the dynamic content list and expression editor options appear, select the dynamic content list (lightning icon). From the list that appears, under the **When a file is created** trigger section, select **File Content**.
+
+ When you're done, the **File Content** trigger output appears in the **File content** parameter:
+
+ ![Screenshot showing Standard workflow designer and the File System managed connector "Create file" action.](media/connect-file-systems/action-file-system-create-file-managed-standard.png)
+
+ 1. To test your workflow, add an Outlook action that sends you an email when the File System action creates a file. Enter the email recipients, subject, and body. For testing, you can use your own email address.
+
+ ![Screenshot showing Standard workflow designer, managed connector "Create file" action, and "Send an email" action.](media/connect-file-systems/action-file-system-send-email-managed-standard.png)
+
+1. When you're done, save your workflow.
+
+1. To test your workflow, upload a file, which triggers the workflow.
+
+If successful, your workflow creates a file on your file system server, based on the uploaded file in Dropbox, and sends an email about the created file.
+++
+## Next steps
+
+* [Managed connectors for Azure Logic Apps](/connectors/connector-reference/connector-reference-logicapps-connectors)
+* [Built-in connectors for Azure Logic Apps](../connectors/built-in.md)
container-apps Application Lifecycle Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/application-lifecycle-management.md
When you deploy a container app, the first revision is automatically created. [M
A container app flows through four phases: deployment, update, deactivation, and shut down.
+> [!NOTE]
+> [Azure Container Apps jobs](jobs.md) don't support revisions. Jobs are deployed and updated directly.
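
To see where an app's revisions stand in this lifecycle, one option is the Azure CLI. The following is a minimal sketch; the app and resource group names are placeholders:

```azurecli
# List a container app's revisions, including their active status and traffic weight.
az containerapp revision list \
  --name my-container-app \
  --resource-group my-rg \
  --output table
```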
+
## Deployment

As a container app is deployed, the first revision is automatically created.
container-apps Azure Arc Enable Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/azure-arc-enable-cluster.md
A [Log Analytics workspace](../azure-monitor/logs/quick-create-workspace.md) pro
    --query customerId \
    --output tsv)
LOG_ANALYTICS_WORKSPACE_ID_ENC=$(printf %s $LOG_ANALYTICS_WORKSPACE_ID | base64 -w0) # Needed for the next step
- lOG_ANALYTICS_KEY=$(az monitor log-analytics workspace get-shared-keys \
+ LOG_ANALYTICS_KEY=$(az monitor log-analytics workspace get-shared-keys \
    --resource-group $GROUP_NAME \
    --workspace-name $WORKSPACE_NAME \
    --query primarySharedKey \
    --output tsv)
- lOG_ANALYTICS_KEY_ENC=$(printf %s $lOG_ANALYTICS_KEY | base64 -w0) # Needed for the next step
+ LOG_ANALYTICS_KEY_ENC=$(printf %s $LOG_ANALYTICS_KEY | base64 -w0) # Needed for the next step
```

# [PowerShell](#tab/azure-powershell)
A [Log Analytics workspace](../azure-monitor/logs/quick-create-workspace.md) pro
    --query customerId `
    --output tsv)
$LOG_ANALYTICS_WORKSPACE_ID_ENC=[Convert]::ToBase64String([System.Text.Encoding]::UTF8.GetBytes($LOG_ANALYTICS_WORKSPACE_ID)) # Needed for the next step
- $lOG_ANALYTICS_KEY=$(az monitor log-analytics workspace get-shared-keys `
+ $LOG_ANALYTICS_KEY=$(az monitor log-analytics workspace get-shared-keys `
    --resource-group $GROUP_NAME `
    --workspace-name $WORKSPACE_NAME `
    --query primarySharedKey `
    --output tsv)
- $lOG_ANALYTICS_KEY_ENC=[Convert]::ToBase64String([System.Text.Encoding]::UTF8.GetBytes($lOG_ANALYTICS_KEY))
+ $LOG_ANALYTICS_KEY_ENC=[Convert]::ToBase64String([System.Text.Encoding]::UTF8.GetBytes($LOG_ANALYTICS_KEY))
```
A [Log Analytics workspace](../azure-monitor/logs/quick-create-workspace.md) pro
--configuration-settings "envoy.annotations.service.beta.kubernetes.io/azure-load-balancer-resource-group=${AKS_CLUSTER_GROUP_NAME}" \ --configuration-settings "logProcessor.appLogs.destination=log-analytics" \ --configuration-protected-settings "logProcessor.appLogs.logAnalyticsConfig.customerId=${LOG_ANALYTICS_WORKSPACE_ID_ENC}" \
- --configuration-protected-settings "logProcessor.appLogs.logAnalyticsConfig.sharedKey=${lOG_ANALYTICS_KEY_ENC}"
+ --configuration-protected-settings "logProcessor.appLogs.logAnalyticsConfig.sharedKey=${LOG_ANALYTICS_KEY_ENC}"
```

# [PowerShell](#tab/azure-powershell)
A [Log Analytics workspace](../azure-monitor/logs/quick-create-workspace.md) pro
--configuration-settings "envoy.annotations.service.beta.kubernetes.io/azure-load-balancer-resource-group=${AKS_CLUSTER_GROUP_NAME}" ` --configuration-settings "logProcessor.appLogs.destination=log-analytics" ` --configuration-protected-settings "logProcessor.appLogs.logAnalyticsConfig.customerId=${LOG_ANALYTICS_WORKSPACE_ID_ENC}" `
- --configuration-protected-settings "logProcessor.appLogs.logAnalyticsConfig.sharedKey=${lOG_ANALYTICS_KEY_ENC}"
+ --configuration-protected-settings "logProcessor.appLogs.logAnalyticsConfig.sharedKey=${LOG_ANALYTICS_KEY_ENC}"
```
container-apps Azure Arc Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/azure-arc-overview.md
Previously updated : 07/07/2023 Last updated : 08/22/2023
Optionally, you can choose to have the extension install [KEDA](https://keda.sh/
The following table describes the role of each revision created for you:
-| Pod | Description | Number of Instances | CPU | Memory |
-|-|-|-|-|-|
-| `<extensionName>-k8se-activator` | Used as part of the scaling pipeline | 2 | 100 millicpu | 500 MB |
-| `<extensionName>-k8se-billing` | Billing record generation - Azure Container Apps on Azure Arc enabled Kubernetes is Free of Charge during preview | 3 | 100 millicpu | 100 MB |
-| `<extensionName>-k8se-containerapp-controller` | The core operator pod that creates resources on the cluster and maintains the state of components. | 2 | 100 millicpu | 1 GB |
-| `<extensionName>-k8se-envoy` | A front-end proxy layer for all data-plane http requests. It routes the inbound traffic to the correct apps. | 3 | 1 Core | 1536 MB |
-| `<extensionName>-k8se-envoy-controller` | Operator, which generates Envoy configuration | 2 | 200 millicpu | 500 MB |
-| `<extensionName>-k8se-event-processor` | An alternative routing destination to help with apps that have scaled to zero while the system gets the first instance available. | 2 | 100 millicpu | 500 MB |
-| `<extensionName>-k8se-http-scaler` | Monitors inbound request volume in order to provide scaling information to [KEDA](https://keda.sh). | 1 | 100 millicpu | 500 MB |
-| `<extensionName>-k8se-keda-cosmosdb-scaler` | Keda Cosmos DB Scaler | 1 | 10 m | 128 MB |
-| `<extensionName>-k8se-keda-metrics-apiserver` | Keda Metrics Server | 1 | 1 Core | 1000 MB |
-| `<extensionName>-k8se-keda-operator` | Manages component updated and service endpoints for Dapr | 1 | 100 millicpu | 500 MB |
-| `<extensionName>-k8se-local-envoy` | A front-end proxy layer for all data-plane tcp requests. It routes the inbound traffic to the correct apps. | 3 | 1 Core | 1536 MB |
-| `<extensionName>-k8se-log-processor` | Gathers logs from apps and other components and sends them to Log Analytics. | 2 | 200 millicpu | 500 MB |
-| `<extensionName>-k8se-mdm` | Metrics and Logs Agent | 2 | 500 millicpu | 500 MB |
-| dapr-metrics | Dapr metrics pod | 1 | 100 millicpu | 500 MB |
-| dapr-operator | Manages component updates and service endpoints for Dapr | 1 | 100 millicpu | 500 MB |
-| dapr-placement-server | Used for Actors only - creates mapping tables that map actor instances to pods | 1 | 100 millicpu | 500 MB |
-| dapr-sentry | Manages mTLS between services and acts as a CA | 2 | 800 millicpu | 200 MB |
+| Pod | Description | Number of Instances | CPU | Memory | Type |
+|-|-|-|-|-|-|
+| `<extensionName>-k8se-activator` | Used as part of the scaling pipeline | 2 | 100 millicpu | 500 MB | ReplicaSet |
+| `<extensionName>-k8se-billing` | Billing record generation - Azure Container Apps on Azure Arc enabled Kubernetes is Free of Charge during preview | 3 | 100 millicpu | 100 MB | ReplicaSet |
+| `<extensionName>-k8se-containerapp-controller` | The core operator pod that creates resources on the cluster and maintains the state of components. | 2 | 100 millicpu | 1 GB | ReplicaSet |
+| `<extensionName>-k8se-envoy` | A front-end proxy layer for all data-plane http requests. It routes the inbound traffic to the correct apps. | 3 | 1 Core | 1536 MB | ReplicaSet |
+| `<extensionName>-k8se-envoy-controller` | Operator, which generates Envoy configuration | 2 | 200 millicpu | 500 MB | ReplicaSet |
+| `<extensionName>-k8se-event-processor` | An alternative routing destination to help with apps that have scaled to zero while the system gets the first instance available. | 2 | 100 millicpu | 500 MB | ReplicaSet |
+| `<extensionName>-k8se-http-scaler` | Monitors inbound request volume in order to provide scaling information to [KEDA](https://keda.sh). | 1 | 100 millicpu | 500 MB | ReplicaSet |
+| `<extensionName>-k8se-keda-cosmosdb-scaler` | Keda Cosmos DB Scaler | 1 | 10 m | 128 MB | ReplicaSet |
+| `<extensionName>-k8se-keda-metrics-apiserver` | Keda Metrics Server | 1 | 1 Core | 1000 MB | ReplicaSet |
+| `<extensionName>-k8se-keda-operator` | Manages component updates and service endpoints for Dapr | 1 | 100 millicpu | 500 MB | ReplicaSet |
+| `<extensionName>-k8se-log-processor` | Gathers logs from apps and other components and sends them to Log Analytics. | 2 | 200 millicpu | 500 MB | DaemonSet |
+| `<extensionName>-k8se-mdm` | Metrics and Logs Agent | 2 | 500 millicpu | 500 MB | ReplicaSet |
+| dapr-metrics | Dapr metrics pod | 1 | 100 millicpu | 500 MB | ReplicaSet |
+| dapr-operator | Manages component updates and service endpoints for Dapr | 1 | 100 millicpu | 500 MB | ReplicaSet |
+| dapr-placement-server | Used for Actors only - creates mapping tables that map actor instances to pods | 1 | 100 millicpu | 500 MB | StatefulSet |
+| dapr-sentry | Manages mTLS between services and acts as a CA | 2 | 800 millicpu | 200 MB | ReplicaSet |
## FAQ for Azure Container Apps on Azure Arc (Preview)
ARM64-based clusters aren't supported at this time.
### Container Apps extension v1.12.8 (June 2023)
+ - Update OSS Fluent Bit to 2.1.2
 - Upgrade of Dapr to 1.10.6
 - Support for container registries exposed on a custom port
 - Enable activate/deactivate revision when a container app is stopped
 - Fix Revisions List not returning init containers
 - Default allow headers added for CORS policy
+### Container Apps extension v1.12.9 (July 2023)
+
+ - Minor updates to EasyAuth sidecar containers
+ - Update of Extension Monitoring Agents
+
+### Container Apps extension v1.17.8 (August 2023)
+
+ - Update EasyAuth to 1.6.16
+ - Update of Dapr to 1.10.8
+ - Update Envoy to 1.25.6
+ - Add volume mount support for Azure Container App jobs
+ - Added IP Restrictions for applications with TCP Ingress type
+ - Added support for Container Apps with multiple exposed ports
+
## Next steps
-[Create a Container Apps connected environment (Preview)](azure-arc-enable-cluster.md)
+[Create a Container Apps connected environment (Preview)](azure-arc-enable-cluster.md)
container-apps Azure Pipelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/azure-pipelines.md
The task supports the following scenarios:
* Build from source code without a Dockerfile and deploy to Container Apps. Supported languages include .NET, Node.js, PHP, Python, and Ruby.
* Deploy an existing container image to Container Apps.
-With the production release this task comes with Azure DevOps and no longer requires explicit installation. For the complete documentation please see [AzureContainerApps@1 - Azure Container Apps Deploy v1 task](https://learn.microsoft.com/azure/devops/pipelines/tasks/reference/azure-container-apps-v1).
+With the production release, this task comes with Azure DevOps and no longer requires explicit installation. For the complete documentation, see [AzureContainerApps@1 - Azure Container Apps Deploy v1 task](/azure/devops/pipelines/tasks/reference/azure-container-apps-v1).
### Usage examples
The task uses the Dockerfile in `appSourcePath` to build the container image. If
#### Deploy an existing container image to Container Apps
-The following snippet shows how to deploy an existing container image to Container Apps. Note, that we're deploying a publicly available image and won't need any registry authentication as a result.
+The following snippet shows how to deploy an existing container image to Container Apps. The task authenticates with the registry using the service connection. If the service connection's identity isn't assigned the `AcrPush` role for the registry, supply the registry's admin credentials using the `acrUsername` and `acrPassword` input parameters.
```yaml
steps:
- task: AzureContainerApps@1
  inputs:
    azureSubscription: 'my-subscription-service-connection'
- imageToDeploy : 'mcr.microsoft.com/azuredocs/containerapps-helloworld:latest'
    containerAppName: 'my-container-app'
    resourceGroup: 'my-container-app-rg'
    imageToDeploy: 'myregistry.azurecr.io/my-container-app:$(Build.BuildId)'
steps:
The Azure Container Apps task needs to authenticate with your Azure Container Registry to push the container image. The container app also needs to authenticate with your Azure Container Registry to pull the container image.
-To push images, the task automatically authenticates with the container registry specified in `acrName` using the service connection provided in `azureSubscription`.
+To push images, the task automatically authenticates with the container registry specified in `acrName` using the service connection provided in `azureSubscription`. If the service connection's identity isn't assigned the `AcrPush` role for the registry, supply the registry's admin credentials using `acrUsername` and `acrPassword`.
To pull images, Azure Container Apps uses either managed identity (recommended) or admin credentials to authenticate with the Azure Container Registry. To use managed identity, the target container app for the task must be [configured to use managed identity](managed-identity-image-pull.md). To authenticate with the registry's admin credentials, set the task's `acrUsername` and `acrPassword` inputs.
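
For illustration, here's a minimal Azure CLI sketch of both options; the registry, app, and resource group names are placeholders and not part of the task itself:

```azurecli
# Recommended: configure the container app to pull images with its system-assigned managed identity.
az containerapp registry set \
  --name my-container-app \
  --resource-group my-container-app-rg \
  --server myregistry.azurecr.io \
  --identity system

# Alternative: read the registry's admin credentials (the registry's admin user must be enabled)
# to supply as the task's acrUsername and acrPassword inputs.
az acr credential show \
  --name myregistry \
  --query "{username:username, password:passwords[0].value}"
```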
container-apps Billing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/billing.md
Previously updated : 06/19/2023 Last updated : 08/23/2023
Billing in Azure Container Apps is based on your [plan type](plans.md).
- Your plan selection determines billing calculations. - Different applications in an environment can use different plans.
-For more information, see [Azure Container Apps Pricing](https://azure.microsoft.com/pricing/details/container-apps/).
+This article describes how to calculate the cost of running your container app. For pricing details in your account's currency, see [Azure Container Apps Pricing](https://azure.microsoft.com/pricing/details/container-apps/).
## Consumption plan
The following resources are free during each calendar month, per subscription:
- The first 360,000 GiB-seconds
- The first 2 million HTTP requests
-This article describes how to calculate the cost of running your container app. For pricing details in your account's currency, see [Azure Container Apps Pricing](https://azure.microsoft.com/pricing/details/container-apps/).
-
+Free usage doesn't appear on your bill. You'll only be charged when your resource usage exceeds the monthly free grants.
> [!NOTE]
> If you use Container Apps with [your own virtual network](networking.md#managed-resources) or your apps utilize other Azure resources, additional charges may apply.

### Resource consumption charges
-Azure Container Apps runs replicas of your application based on the [scaling rules and replica count limits](scale-app.md) you configure for each revision. You're charged for the amount of resources allocated to each replica while it's running.
+Azure Container Apps runs replicas of your application based on the [scaling rules and replica count limits](scale-app.md) you configure for each revision. [Azure Container Apps jobs](jobs.md) run replicas when job executions are triggered. You're charged for the amount of resources allocated to each replica while it's running.
There are 2 meters for resource consumption:
There are 2 meters for resource consumption:
The first 180,000 vCPU-seconds and 360,000 GiB-seconds in each subscription per calendar month are free.
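
As a rough sketch (not an official pricing formula), the following estimates a month of billable active usage for two replicas, each allocated 0.5 vCPU and 1 GiB:

```bash
SECONDS_PER_MONTH=$((30 * 24 * 3600))    # 2,592,000 seconds
VCPU_SECONDS=$((1 * SECONDS_PER_MONTH))  # 2 replicas x 0.5 vCPU = 1 vCPU total
GIB_SECONDS=$((2 * SECONDS_PER_MONTH))   # 2 replicas x 1 GiB = 2 GiB total

# Subtract the monthly free grants before applying the active rates.
echo "Billable vCPU-seconds: $((VCPU_SECONDS - 180000))"  # 2,412,000
echo "Billable GiB-seconds:  $((GIB_SECONDS - 360000))"   # 4,824,000
```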
+#### Container apps
+ The rate you pay for resource consumption depends on the state of your container app's revisions and replicas. By default, replicas are charged at an *active* rate. However, in certain conditions, a replica can enter an *idle* state. While in an *idle* state, resources are billed at a reduced rate.
-#### No replicas are running
+##### No replicas are running
When a revision is scaled to zero replicas, no resource consumption charges are incurred.
-#### Minimum number of replicas are running
+##### Minimum number of replicas are running
-Idle usage charges may apply when a revision is running under a specific set of circumstances. To be eligible for idle charges, a revision must be:
+Idle usage charges may apply when a container app's revision is running under a specific set of circumstances. To be eligible for idle charges, a revision must be:
- Configured with a [minimum replica count](scale-app.md) greater than zero
- Scaled to the minimum replica count
Usage charges are calculated individually for each replica. A replica is conside
When a replica is idle, resource consumption charges are calculated at the reduced idle rates. When a replica isn't idle, the active rates apply.
-#### More than the minimum number of replicas are running
+##### More than the minimum number of replicas are running
When a revision is scaled above the [minimum replica count](scale-app.md), all of its running replicas are charged for resource consumption at the active rate.
+#### Jobs
+
+In the Consumption plan, resources consumed by Azure Container Apps jobs are charged the active rate. Idle charges don't apply to jobs because executions stop consuming resources once the job completes.
+
### Request charges

In addition to resource consumption, Azure Container Apps also charges based on the number of HTTP requests received by your container app. Only requests that come from outside a Container Apps environment are billable.
In addition to resource consumption, Azure Container Apps also charges based on
- The first 2 million requests in each subscription per calendar month are free.
- [Health probe](./health-probes.md) requests aren't billable.
+Request charges don't apply to Azure Container Apps jobs because they don't support ingress.
+
<a id="consumption-dedicated"></a>

## Dedicated plan

You're billed based on workload profile instances, not by individual applications.
-Billing for apps running in the Dedicated plan is based on workload profile instances, not by individual applications. The charges are as follows:
+Billing for apps and jobs running in the Dedicated plan is based on workload profile instances, not by individual applications. The charges are as follows:
| Fixed management costs | Variable costs |
|---|---|
container-apps Blue Green Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/blue-green-deployment.md
After you test and verify the new revision, you can then point production traffi
This article shows you how to implement blue-green deployment in a container app. To run the following examples, you need a container app environment where you can create a new app. > [!NOTE]
-> Refer to [containerapps-blue-green repository](https://github.com/Azure-Samples/containerapps-blue-green) for a complete example of a github workflow that implements blue-green deployment for Container Apps.
+> Refer to [containerapps-blue-green repository](https://github.com/Azure-Samples/containerapps-blue-green) for a complete example of a GitHub workflow that implements blue-green deployment for Container Apps.
## Create a container app with multiple active revisions enabled
container-apps Code To Cloud Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/code-to-cloud-options.md
+
+ Title: Code to cloud options in Azure Container Apps
+description: Learn about the different options to deploy a container app directly from your development environment.
++++ Last updated : 08/30/2023+++
+# Select the right code-to-cloud path for Azure Container Apps
+
+You have several options available as you develop and deploy your apps to Azure Container Apps. As you evaluate your goals and the needs of your team, consider the following questions:
+
+- Do you want to focus more on application changes, or infrastructure configuration?
+- Are you working on a team or as an individual?
+- How fast do you need to see changes reflected in the application or infrastructure?
+- How important is an automated workflow vs. an experimental workflow?
+
+Based on your situation, your answers to these questions affect your preferred development and deployment strategies. Individuals who want to rapidly iterate features have different needs than structured teams deploying to mature production environments.
+
+This article helps you select the most appropriate option for how you develop and deploy your applications to Azure Container Apps.
+
+Depending on your situation, you may want to deploy from a [code editor](#code-editor), through the [Azure portal](#azure-portal), with a hosted [code repository](#code-repository), or via [infrastructure as code](#infrastructure-as-code).
+
+## Code editor
+
+If you spend most of your time editing code and favor rapid iteration of your applications, you may want to use [Visual Studio](https://visualstudio.microsoft.com/) or [Visual Studio Code](https://code.visualstudio.com/). These editors allow you to easily build Dockerfiles and deploy your applications directly to Azure Container Apps.
+
+This approach allows you to experiment with configuration options in the early stages of an application's life.
+
+Once your application works as expected, then you can formalize the build process through your [code repository](#code-repository) to run and deploy your application.
+
+### Resources
+
+- [Deploy to Azure Container Apps using Visual Studio](deploy-visual-studio.md)
+- [Deploy to Azure Container Apps using Visual Studio Code](deploy-visual-studio-code.md)
+
+## Azure portal
+
+The Azure portal's focus is on setting up, changing, and experimenting with your Container Apps environment.
+
+While you can't use the portal to deploy your code, it's ideal for making incremental changes to your configuration and experimenting with your container app's settings.
+
+You can also use the portal with [Azure App Spaces](/azure/app-spaces/overview) to deploy your applications to Container Apps.
+
+### Resources
+
+- [Deploy your first container app using the Azure portal](quickstart-portal.md)
+- [Deploy a web app with Azure App Spaces](/azure/app-spaces/quickstart-deploy-web-app?tabs=app-service)
+
+## Code repository
+
+GitHub and Azure DevOps repositories provide the most structured path to running your code on Azure Container Apps.
+
+As you maintain code in a repository, deployment takes place on the server rather than your local workstation. Remote execution engages safeguards to ensure your application is only updated through trusted channels.
+
+### Resources
+
+- [Deploy to Azure Container Apps with GitHub Actions](github-actions.md)
+- [Deploy to Azure Container Apps from Azure Pipelines](azure-pipelines.md)
+
+## Infrastructure as code
+
+Infrastructure as Code (IaC) allows you to maintain your infrastructure setup and configuration in code. Once in your codebase, you can ensure every deployed container environment is consistent, reproducible, and version-controlled.
+
+In Azure Container Apps, you can use the [Azure CLI](/cli/azure/) or the [Azure Developer CLI](/azure/developer/azure-developer-cli/overview) to configure your applications.
+
+| CLI | Description | Best used with |
+|--|--|--|
+| Azure CLI | The Azure CLI allows you to deploy directly from your local workstation in the form of local code or a container image. You can use PowerShell or Bash to automate application and infrastructure deployment (see the sketch after this table). | Individuals or small teams during initial iteration phases. |
+| Azure Developer CLI (AZD) | AZD is a hybrid solution for handling both the development and operation of your application. When you use AZD, you need to maintain both your application code and infrastructure code in the same repository. The application code requires a Dockerfile for packaging, and the infrastructure code is defined in Bicep. | Applications managed by a single team. |
+
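For instance, a minimal sketch of the Azure CLI path, with placeholder app, resource group, and location values:

```azurecli
# Build from local source and deploy in one step; creates supporting resources if needed.
az containerapp up \
  --name my-container-app \
  --resource-group my-container-app-rg \
  --location eastus \
  --source .
```
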
+### Resources
+
+- **Azure CLI**
+ - [Build and deploy your container app from a repository](quickstart-code-to-cloud.md)
+ - [Deploy your first container app with containerapp up](get-started.md)
+ - [Set up GitHub Actions with Azure CLI](github-actions-cli.md)
+ - [Build and deploy your container app from a repository](tutorial-deploy-first-app-cli.md)
+
+- **Azure Developer CLI (AZD)**
+ - [Get started using Azure Developer CLI](/azure/developer/azure-developer-cli/get-started?tabs=localinstall&pivots=programming-language-nodejs)
+
+## Next steps
+
+- [Deploy to Azure Container Apps using Visual Studio](deploy-visual-studio.md)
+- [Deploy to Azure Container Apps using Visual Studio Code](deploy-visual-studio-code.md)
container-apps Compare Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/compare-options.md
There's no perfect solution for every use case and every team. The following exp
## Container option comparisons ### Azure Container Apps
-Azure Container Apps enables you to build serverless microservices based on containers. Distinctive features of Container Apps include:
+Azure Container Apps enables you to build serverless microservices and jobs based on containers. Distinctive features of Container Apps include:
* Optimized for running general purpose containers, especially for applications that span many microservices deployed in containers. * Powered by Kubernetes and open-source technologies like [Dapr](https://dapr.io/), [KEDA](https://keda.sh/), and [envoy](https://www.envoyproxy.io/). * Supports Kubernetes-style apps and microservices with features like [service discovery](connect-apps.md) and [traffic splitting](revisions.md). * Enables event-driven application architectures by supporting scale based on traffic and pulling from [event sources like queues](scale-app.md), including [scale to zero](scale-app.md).
-* Support of long running processes and can run [background tasks](background-processing.md).
+* Supports running on-demand, scheduled, and event-driven [jobs](jobs.md), as shown in the sketch after this list.
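
As a minimal sketch of this capability, the following Azure CLI command creates a scheduled job; the names, image, and schedule are placeholders:

```azurecli
# Run a container every 6 hours; retry a failed execution once.
az containerapp job create \
  --name my-scheduled-job \
  --resource-group my-rg \
  --environment my-environment \
  --trigger-type Schedule \
  --cron-expression "0 */6 * * *" \
  --image mcr.microsoft.com/k8se/quickstart-jobs:latest \
  --cpu 0.25 --memory 0.5Gi \
  --replica-timeout 300 \
  --replica-retry-limit 1
```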
Azure Container Apps doesn't provide direct access to the underlying Kubernetes APIs. If you require access to the Kubernetes APIs and control plane, you should use [Azure Kubernetes Service](../aks/intro-kubernetes.md). However, if you would like to build Kubernetes-style applications and don't require direct access to all the native Kubernetes APIs and cluster management, Container Apps provides a fully managed experience based on best-practices. For these reasons, many teams may prefer to start building container microservices with Azure Container Apps.
container-apps Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/containers.md
Previously updated : 5/4/2023 Last updated : 08/29/2023
Azure Container Apps manages the details of Kubernetes and container orchestrati
Azure Container Apps supports: -- Any Linux-based x86-64 (`linux/amd64`) container image
+- Any Linux-based x86-64 (`linux/amd64`) container image with no required base image
- Containers from any public or private container registry
+- [Sidecar](#sidecar-containers) and [init](#init-containers) containers
-Features include:
+Features also include:
-- There's no required base container image.
- Changes to the `template` configuration section trigger a new [container app revision](application-lifecycle-management.md).
- If a container crashes, it automatically restarts.
+Jobs features include:
+
+- Job executions use the `template` configuration section to define the container image and other settings when each execution starts.
+- If a container exits with a non-zero exit code, the job execution is marked as failed. You can configure a job to retry failed executions.
+ ## Configuration The following code is an example of the `containers` array in the [`properties.template`](azure-resource-manager-api-spec.md#propertiestemplate) section of a container app resource template. The excerpt shows the available configuration options when setting up a container.
The following code is an example of the `containers` array in the [`properties.t
| Setting | Description | Remarks | ||||
-| `image` | The container image name for your container app. | This value takes the form of `repository/image-name:tag`. |
+| `image` | The container image name for your container app. | This value takes the form of `repository/<IMAGE_NAME>:<TAG>`. |
| `name` | Friendly name of the container. | Used for reporting and identification. |
| `command` | The container's startup command. | Equivalent to Docker's [entrypoint](https://docs.docker.com/engine/reference/builder/) field. |
| `args` | Startup command arguments. | Entries in the array are joined together to create a parameter list to pass to the startup command. |
| `env` | An array of key/value pairs that define environment variables. | Use `secretRef` instead of the `value` field to refer to a secret. |
-| `resources.cpu` | The number of CPUs allocated to the container. | With the Consumption plan, values must adhere to the following rules:<br><br>• greater than zero<br>• less than or equal to 2<br>• can be any decimal number (with a max of two decimal places)<br><br> For example, `1.25` is valid, but `1.555` is invalid.<br> The default is 0.25 CPU per container.<br><br>When you use the Consumption workload profile in the Consumption + Dedicated plan structure, the same rules apply, except CPU must be less than or equal to 4.<br><br>When you use a Dedicated workload profile in the Consumption + Dedicated plan structure, the maximum CPU must be less than or equal to the number of cores available in the profile. |
-| `resources.memory` | The amount of RAM allocated to the container. | With the Consumption plan, values must adhere to the following rules:<br><br>• greater than zero<br>• less than or equal to `4Gi`<br>• can be any decimal number (with a max of two decimal places)<br><br>For example, `1.25Gi` is valid, but `1.555Gi` is invalid.<br>The default is `0.5Gi` per container.<br><br>When you use the Consumption workload profile in the Consumption + Dedicated plan structure, the same rules apply except memory must be less than or equal to `8Gi`.<br><br>When you use a dedicated workload profile in the Consumption + Dedicated plan structure, the maximum memory must be less than or equal to the amount of memory available in the profile. |
+| `resources.cpu` | The number of CPUs allocated to the container. | With the [Consumption plan](plans.md), values must adhere to the following rules:<br><br>• greater than zero<br>• less than or equal to 2<br>• can be any decimal number (with a max of two decimal places)<br><br> For example, `1.25` is valid, but `1.555` is invalid.<br> The default is 0.25 CPUs per container.<br><br>When you use the Consumption workload profile on the Dedicated plan, the same rules apply, except CPUs must be less than or equal to 4.<br><br>When you use the [Dedicated plan](plans.md), the maximum CPUs must be less than or equal to the number of cores available in the profile where the container app is running. |
+| `resources.memory` | The amount of RAM allocated to the container. | With the [Consumption plan](plans.md), values must adhere to the following rules:<br><br>• greater than zero<br>• less than or equal to `4Gi`<br>• can be any decimal number (with a max of two decimal places)<br><br>For example, `1.25Gi` is valid, but `1.555Gi` is invalid.<br>The default is `0.5Gi` per container.<br><br>When you use the Consumption workload on the [Dedicated plan](plans.md), the same rules apply except memory must be less than or equal to `8Gi`.<br><br>When you use the Dedicated plan, the maximum memory must be less than or equal to the amount of memory available in the profile where the container app is running. |
| `volumeMounts` | An array of volume mount definitions. | You can define a temporary volume or multiple permanent storage volumes for your container. For more information about storage volumes, see [Use storage mounts in Azure Container Apps](storage-mounts.md). |
| `probes` | An array of health probes enabled in the container. | This feature is based on Kubernetes health probes. For more information about probe settings, see [Health probes in Azure Container Apps](health-probes.md). |

<a id="allocations"></a>
-In the Consumption plan and the Consumption workload profile in the [Consumption + Dedicated plan structure](plans.md#consumption-dedicated), the total CPU and memory allocations requested for all the containers in a container app must add up to one of the following combinations.
+When you use either the Consumption plan or a Consumption workload on the Dedicated plan, the total CPU and memory allocations requested for all the containers in a container app must add up to one of the following combinations.
| vCPUs (cores) | Memory | Consumption plan | Consumption workload profile |
|---|---|---|---|
In the Consumption plan and the Consumption workload profile in the [Consumption
| `4.0` | `8.0Gi` | | ✔ |

- The total of the CPU requests in all of your containers must match one of the values in the *vCPUs* column.
+
- The total of the memory requests in all your containers must match the memory value in the memory column in the same row of the CPU column.
-When you use a Dedicated workload profile in the Consumption + Dedicated plan structure, the total CPU and memory allocations requested for all the containers in a container app must be less than or equal to the cores and memory available in the profile.
+When you use the Consumption profile on the Dedicated plan, the total CPU and memory allocations requested for all the containers in a container app must be less than or equal to the cores and memory available in the profile.
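
For illustration, a minimal sketch that requests one of the valid combinations above (`0.5` vCPU with `1.0Gi`); the names and image are placeholders:

```azurecli
az containerapp create \
  --name my-container-app \
  --resource-group my-rg \
  --environment my-environment \
  --image mcr.microsoft.com/k8se/quickstart:latest \
  --cpu 0.5 \
  --memory 1.0Gi
```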
## Multiple containers
-In advanced scenarios, you can run multiple containers in a single container app. The containers share hard disk and network resources and experience the same [application lifecycle](./application-lifecycle-management.md). There are two ways to run multiple containers in a container app: [sidecar containers](#sidecar-containers) and [init containers](#init-containers).
+In advanced scenarios, you can run multiple containers in a single container app. Use this pattern only in specific instances where your containers are tightly coupled.
+
+For most microservice scenarios, the best practice is to deploy each service as a separate container app.
+
+Containers in the same container app share hard disk and network resources and experience the same [application lifecycle](./application-lifecycle-management.md).
+
+There are two ways to run multiple containers in a container app: [sidecar containers](#sidecar-containers) and [init containers](#init-containers).
### Sidecar containers
-You can define multiple containers in a single container app to implement the [sidecar pattern](/azure/architecture/patterns/sidecar). Examples of sidecar containers include:
+You can define multiple containers in a single container app to implement the [sidecar pattern](/azure/architecture/patterns/sidecar).
+
+Examples of sidecar containers include:
- An agent that reads logs from the primary app container on a [shared volume](storage-mounts.md?pivots=aca-cli#temporary-storage) and forwards them to a logging service.
+
- A background process that refreshes a cache used by the primary app container in a shared volume.
-> [!NOTE]
-> Running multiple containers in a single container app is an advanced use case. You should use this pattern only in specific instances in which your containers are tightly coupled. In most situations where you want to run multiple containers, such as when implementing a microservice architecture, deploy each service as a separate container app.
+These scenarios are examples, and don't represent the only ways you can implement a sidecar.
To run multiple containers in a container app, add more than one container in the `containers` array of the container app template.

### <a name="init-containers"></a>Init containers
-You can define one or more [init containers](https://kubernetes.io/docs/concepts/workloads/pods/init-containers/) in a container app. Init containers run before the primary app container and can be used to perform initialization tasks such as downloading data or preparing the environment.
+You can define one or more [init containers](https://kubernetes.io/docs/concepts/workloads/pods/init-containers/) in a container app. Init containers run before the primary app container and are used to perform initialization tasks such as downloading data or preparing the environment.
-Init containers are defined in the `initContainers` array of the container app template. The containers run in the order they are defined in the array and must complete successfully before the primary app container starts.
+Init containers are defined in the `initContainers` array of the container app template. The containers run in the order they're defined in the array and must complete successfully before the primary app container starts.
+
+> [!NOTE]
+> Init containers support [image pulls using managed identities](#managed-identity-with-azure-container-registry), but processes running in init containers don't have access to managed identities.
## Container registries
To use a container registry, you define the required fields in `registries` arra
}
```
-With the registry information added, the saved credentials can be used to pull a container image from the private registry when your app is deployed.
+Saved credentials are used to pull a container image from the private registry as your app is deployed.
The following example shows how to configure Azure Container Registry credentials in a container app.
The following example shows how to configure Azure Container Registry credential
```

> [!NOTE]
-> Docker Hub [limits](https://docs.docker.com/docker-hub/download-rate-limit/) the number of Docker image downloads. When the limit is reached, containers in your app will fail to start. You're recommended to use a registry with sufficient limits, such as [Azure Container Registry](../container-registry/container-registry-intro.md).
+> Docker Hub [limits](https://docs.docker.com/docker-hub/download-rate-limit/) the number of Docker image downloads. When the limit is reached, containers in your app will fail to start. Use a registry with sufficient limits, such as [Azure Container Registry](../container-registry/container-registry-intro.md) to avoid this problem.
### Managed identity with Azure Container Registry You can use an Azure managed identity to authenticate with Azure Container Registry instead of using a username and password. For more information, see [Managed identities in Azure Container Apps](managed-identity.md).
-When assigning a managed identity to a registry, use the managed identity resource ID for a user-assigned identity, or "system" for the system-assigned identity.
+When assigning a managed identity to a registry, use the managed identity resource ID for a user-assigned identity, or `system` for the system-assigned identity.
```json
{
container-apps Dapr Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/dapr-overview.md
Previously updated : 05/15/2023 Last updated : 08/28/2023 # Dapr integration with Azure Container Apps
This guide provides insight into core Dapr concepts and details regarding the Da
| Dapr API | Description |
|--|--|
-| [**Service-to-service invocation**][dapr-serviceinvo] | Discover services and perform reliable, direct service-to-service calls with automatic mTLS authentication and encryption. |
+| [**Service-to-service invocation**][dapr-serviceinvo] | Discover services and perform reliable, direct service-to-service calls with automatic mTLS authentication and encryption. [See known limitations for Dapr service invocation in Azure Container Apps.](#unsupported-dapr-capabilities) |
| [**State management**][dapr-statemgmt] | Provides state management capabilities for transactions and CRUD operations. |
| [**Pub/sub**][dapr-pubsub] | Allows publisher and subscriber container apps to intercommunicate via an intermediary message broker. |
| [**Bindings**][dapr-bindings] | Trigger your applications based on events. |
| [**Actors**][dapr-actors] | Dapr actors are message-driven, single-threaded, units of work designed to quickly scale. For example, in burst-heavy workload situations. |
| [**Observability**](./observability.md) | Send tracing information to an Application Insights backend. |
| [**Secrets**][dapr-secrets] | Access secrets from your application code or reference secure values in your Dapr components. |
+| [**Configuration**][dapr-config] | Retrieve and subscribe to application configuration items for supported configuration stores. |
+ > [!NOTE] > The above table covers stable Dapr APIs. To learn more about using alpha APIs and features, [see the Dapr FAQ][dapr-faq].
This resource defines a Dapr component called `dapr-pubsub` via ARM.
- **Custom configuration for Dapr Observability**: Instrument your environment with Application Insights to visualize distributed tracing.
- **Dapr Configuration spec**: Any capabilities that require use of the Dapr configuration spec.
+- **Invoking non-Dapr services from Dapr as if they were Dapr-enabled**: Dapr's Service Invocation with Azure Container Apps is supported only between Dapr-enabled services.
- **Declarative pub/sub subscriptions**
- **Any Dapr sidecar annotations not listed above**
- **Alpha APIs and components**: Azure Container Apps doesn't guarantee the availability of Dapr alpha APIs and features. For more information, refer to the [Dapr FAQ][dapr-faq].
This resource defines a Dapr component called `dapr-pubsub` via ARM.
### Known limitations

- **Actor reminders**: Require a minReplicas of 1+ to ensure reminders are always active and fire correctly.
+- **Jobs**: Dapr isn't supported for jobs.
## Next Steps
Now that you've learned about Dapr and some of the challenges it solves:
[dapr-bindings]: https://docs.dapr.io/developing-applications/building-blocks/bindings/bindings-overview/ [dapr-actors]: https://docs.dapr.io/developing-applications/building-blocks/actors/actors-overview/ [dapr-secrets]: https://docs.dapr.io/developing-applications/building-blocks/secrets/secrets-overview/
+[dapr-config]: https://docs.dapr.io/developing-applications/building-blocks/configuration/
[dapr-cncf]: https://www.cncf.io/projects/dapr/ [dapr-args]: https://docs.dapr.io/reference/arguments-annotations-overview/ [dapr-component]: https://docs.dapr.io/concepts/components-concept/
container-apps Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/disaster-recovery.md
- Title: Reliability in Azure Container Apps
-description: Learn how to plan for and recover from disaster recovery scenarios in Azure Container Apps
------ Previously updated : 08/10/2023--
-# Reliability in Azure Container Apps
-
-Azure Container Apps uses [availability zones](../availability-zones/az-overview.md#availability-zones) in regions where they're available to provide high-availability protection for your applications and data from data center failures.
-
-Availability zones are unique physical locations within an Azure region. Each zone is made up of one or more data centers equipped with independent power, cooling, and networking. To ensure resiliency, there's a minimum of three separate zones in all enabled regions. You can build high availability into your application architecture by co-locating your compute, storage, networking, and data resources within a zone and replicating in other zones.
-
-By enabling Container Apps' zone redundancy feature, replicas are automatically distributed across the zones in the region. Traffic is load balanced among the replicas. If a zone outage occurs, traffic will automatically be routed to the replicas in the remaining zones.
-
-> [!NOTE]
-> There is no extra charge for enabling zone redundancy, but it only provides benefits when you have 2 or more replicas, with 3 or more being ideal since most regions that support zone redundancy have 3 zones.
-
-In the unlikely event of a full region outage, you have the option of using one of two strategies:
--- **Manual recovery**: Manually deploy to a new region, or wait for the region to recover, and then manually redeploy all environments and apps.--- **Resilient recovery**: First, deploy your container apps in advance to multiple regions. Next, use Azure Front Door or Azure Traffic Manager to handle incoming requests, pointing traffic to your primary region. Then, should an outage occur, you can redirect traffic away from the affected region. For more information, see [Cross-region replication in Azure](../availability-zones/cross-region-replication-azure.md).-
-> [!NOTE]
-> Regardless of which strategy you choose, make sure your deployment configuration files are in source control so you can easily redeploy if necessary.
-
-Additionally, the following resources can help you create your own disaster recovery plan:
--- [Failure and disaster recovery for Azure applications](/azure/architecture/reliability/disaster-recovery)-- [Azure resiliency technical guidance](/azure/architecture/checklist/resiliency-per-service)-
-## Set up zone redundancy in your Container Apps environment
-
-To take advantage of availability zones, you must enable zone redundancy when you create the Container Apps environment. The environment must include a virtual network (VNET) with an available subnet. To ensure proper distribution of replicas, you should configure your app's minimum and maximum replica count with values that are divisible by three. The minimum replica count should be at least three.
-
-### Enable zone redundancy via the Azure portal
-
-To create a container app in an environment with zone redundancy enabled using the Azure portal:
-
-1. Navigate to the Azure portal.
-1. Search for **Container Apps** in the top search box.
-1. Select **Container Apps**.
-1. Select **Create New** in the *Container Apps Environment* field to open the *Create Container Apps Environment* panel.
-1. Enter the environment name.
-1. Select **Enabled** for the *Zone redundancy* field.
-
-Zone redundancy requires a virtual network (VNET) with an infrastructure subnet. You can choose an existing VNET or create a new one. When creating a new VNET, you can accept the values provided for you or customize the settings.
-
-1. Select the **Networking** tab.
-1. To assign a custom VNET name, select **Create New** in the *Virtual Network* field.
-1. To assign a custom infrastructure subnet name, select **Create New** in the *Infrastructure subnet* field.
-1. You can select **Internal** or **External** for the *Virtual IP*.
-1. Select **Create**.
--
-### Enable zone redundancy with the Azure CLI
-
-Create a VNET and infrastructure subnet to include with the Container Apps environment.
-
-When using these commands, replace the `<PLACEHOLDERS>` with your values.
-
->[!NOTE]
-> The subnet associated with a Container App Environment requires a CIDR prefix of `/23` or larger.
-
-# [Azure CLI](#tab/azure-cli)
-
-```azurecli-interactive
-az network vnet create \
- --resource-group <RESOURCE_GROUP_NAME> \
- --name <VNET_NAME> \
- --location <LOCATION> \
- --address-prefix 10.0.0.0/16
-```
-
-```azurecli-interactive
-az network vnet subnet create \
- --resource-group <RESOURCE_GROUP_NAME> \
- --vnet-name <VNET_NAME> \
- --name infrastructure \
- --address-prefixes 10.0.0.0/21
-```
-
-# [Azure PowerShell](#tab/azure-powershell)
--
-```azurepowershell-interactive
-$SubnetArgs = @{
- Name = 'infrastructure-subnet'
- AddressPrefix = '10.0.0.0/21'
-}
-$subnet = New-AzVirtualNetworkSubnetConfig @SubnetArgs
-```
-
-```azurepowershell-interactive
-$VnetArgs = @{
- Name = <VNetName>
- Location = <Location>
- ResourceGroupName = <ResourceGroupName>
- AddressPrefix = '10.0.0.0/16'
- Subnet = $subnet
-}
-$vnet = New-AzVirtualNetwork @VnetArgs
-```
---
-Next, query for the infrastructure subnet ID.
-
-# [Azure CLI](#tab/azure-cli)
-
-```azurecli-interactive
-INFRASTRUCTURE_SUBNET=`az network vnet subnet show --resource-group <RESOURCE_GROUP_NAME> --vnet-name <VNET_NAME> --name infrastructure --query "id" -o tsv | tr -d '[:space:]'`
-```
-
-# [Azure PowerShell](#tab/azure-powershell)
-
-```azurepowershell-interactive
-$InfrastructureSubnet=(Get-AzVirtualNetworkSubnetConfig -Name $SubnetArgs.Name -VirtualNetwork $vnet).Id
-```
---
-Finally, create the environment with the `--zone-redundant` parameter. The location must be the same location used when creating the VNET.
-
-# [Azure CLI](#tab/azure-cli)
-
-```azurecli-interactive
-az containerapp env create \
- --name <CONTAINER_APP_ENV_NAME> \
- --resource-group <RESOURCE_GROUP_NAME> \
- --location "<LOCATION>" \
- --infrastructure-subnet-resource-id $INFRASTRUCTURE_SUBNET \
- --zone-redundant
-```
-
-# [Azure PowerShell](#tab/azure-powershell)
-
-A Log Analytics workspace is required for the Container Apps environment. The following commands create a Log Analytics workspace and save the workspace ID and primary shared key to environment variables.
-
-```azurepowershell-interactive
-$WorkspaceArgs = @{
- Name = 'myworkspace'
- ResourceGroupName = <ResourceGroupName>
- Location = <Location>
- PublicNetworkAccessForIngestion = 'Enabled'
- PublicNetworkAccessForQuery = 'Enabled'
-}
-New-AzOperationalInsightsWorkspace @WorkspaceArgs
-$WorkspaceId = (Get-AzOperationalInsightsWorkspace -ResourceGroupName <ResourceGroupName> -Name $WorkspaceArgs.Name).CustomerId
-$WorkspaceSharedKey = (Get-AzOperationalInsightsWorkspaceSharedKey -ResourceGroupName <ResourceGroupName> -Name $WorkspaceArgs.Name).PrimarySharedKey
-```
-
-To create the environment, run the following command:
-
-```azurepowershell-interactive
-$EnvArgs = @{
- EnvName = <EnvironmentName>
- ResourceGroupName = <ResourceGroupName>
- Location = <Location>
- AppLogConfigurationDestination = "log-analytics"
- LogAnalyticConfigurationCustomerId = $WorkspaceId
- LogAnalyticConfigurationSharedKey = $WorkspaceSharedKey
- VnetConfigurationInfrastructureSubnetId = $InfrastructureSubnet
- VnetConfigurationInternal = $true
-}
-New-AzContainerAppManagedEnv @EnvArgs
-```
--
container-apps Environment Custom Dns Suffix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/environment-custom-dns-suffix.md
By default, an Azure Container Apps environment provides a DNS suffix in the format `<UNIQUE_IDENTIFIER>.<REGION_NAME>.azurecontainerapps.io`. Each container app in the environment generates a domain name based on this DNS suffix. You can configure a custom DNS suffix for your environment. > [!NOTE]
+>
> To configure a custom domain for individual container apps, see [Custom domain names and certificates in Azure Container Apps](custom-domains-certificates.md).
+>
+> If you configure a custom DNS suffix for your environment, traffic to FQDNs that use this suffix will resolve to the environment. FQDNs that use this suffix outside the environment will be unreachable from the environment.
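As a rough sketch of how a custom DNS suffix is applied from the CLI, the following assumes the `--dns-suffix`, `--certificate-file`, and `--certificate-password` parameters of `az containerapp env update`; the environment name, suffix, and certificate values are placeholders:

```azurecli
# Sketch: apply a custom DNS suffix and its wildcard certificate to an environment.
# Assumes the --dns-suffix and certificate parameters of `az containerapp env update`.
az containerapp env update \
  --name my-environment \
  --resource-group my-resource-group \
  --dns-suffix apps.contoso.com \
  --certificate-file ./wildcard-apps-contoso.pfx \
  --certificate-password <CERTIFICATE_PASSWORD>
```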
## Add a custom DNS suffix and certificate
container-apps Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/environment.md
Previously updated : 03/13/2023 Last updated : 08/29/2023 # Azure Container Apps environments
-A Container Apps environment is a secure boundary around groups of container apps that share the same virtual network and write logs to the same logging destination.
+A Container Apps environment is a secure boundary around one or more container apps and jobs. The Container Apps runtime manages each environment by handling OS upgrades, scale operations, failover procedures, and resource balancing.
-Container Apps environments are fully managed where Azure handles OS upgrades, scale operations, failover procedures, and resource balancing.
+Environments include the following features:
+
+| Feature | Description |
+|||
+| Type | There are [two different types](#types) of Container Apps environments: Workload profiles environments and Consumption only environments. Workload profiles environments support both the Consumption and Dedicated [plans](plans.md) whereas Consumption only environments support only the Consumption [plan](plans.md). |
+| Virtual network | A virtual network supports each environment, which enforces the environment's secure boundaries. As you create an environment, a virtual network that has [limited network capabilities](networking.md) is created for you, or you can provide your own. Adding an [existing virtual network](vnet-custom.md) gives you fine-grained control over your network. |
+| Multiple container apps | When multiple container apps are in the same environment, they share the same virtual network and write logs to the same logging destination. |
+| Multi-service integration | You can add [Azure Functions](https://aka.ms/functionsonaca) and [Azure Spring Apps](https://aka.ms/asaonaca) to your Azure Container Apps environment. |
:::image type="content" source="media/environments/azure-container-apps-environments.png" alt-text="Azure Container Apps environments.":::
-Reasons to deploy container apps to the same environment include situations when you need to:
+Depending on your needs, you may want to use one or more Container Apps environments. Use the following criteria to help you decide if you should use a single or multiple environments.
+
+### Single environment
+
+Use a single environment when you want to:
- Manage related services
- Deploy different applications to the same virtual network
- Instrument Dapr applications that communicate via the Dapr service invocation API
-- Have applications to share the same Dapr configuration
-- Have applications share the same log analytics workspace
+- Have applications share the same Dapr configuration
+- Have applications share the same log destination
-Also, you may provide an [existing virtual network](vnet-custom.md) when you create an environment.
+### Multiple environments
-Reasons to deploy container apps to different environments include situations when you want to ensure:
+Use more than one environment when you want two or more applications to:
-- Two applications never share the same compute resources
-- Two Dapr applications can't communicate via the Dapr service invocation API
+- Never share the same compute resources
+- Not communicate via the Dapr service invocation API
+- Be isolated due to team or environment usage (for example, test vs. production)
-You can add [**Azure Functions**](https://aka.ms/functionsonaca) and [**Azure Spring Apps**](https://aka.ms/asaonaca) to your Azure Container Apps environment.
+## Types
+
+| Type | Description | Plan | Billing considerations |
+|--|--|--|--|
+| Workload profile | Run serverless apps with support for scale-to-zero and pay only for resources your apps use with the consumption profile. You can also run apps with customized hardware and increased cost predictability using dedicated workload profiles. | Consumption and Dedicated | You can choose to run apps under either or both plans using separate workload profiles. The Dedicated plan has a fixed cost for the entire environment regardless of how many workload profiles you're using. |
+| Consumption only | Run serverless apps with support for scale-to-zero and pay only for resources your apps use. | Consumption only | Billed only for individual container apps and their resource usage. There's no cost associated with the Container Apps environment. |
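As a rough CLI sketch of the difference between the two types, creating each environment might look like the following; this assumes the preview `--enable-workload-profiles` flag of `az containerapp env create`, and the names and location are placeholders:

```azurecli
# Sketch: create a workload profiles environment
# (assumes the preview --enable-workload-profiles flag).
az containerapp env create \
  --name my-wp-environment \
  --resource-group my-resource-group \
  --location eastus \
  --enable-workload-profiles

# Sketch: create a Consumption only environment
# (the default when the workload profiles flag is omitted).
az containerapp env create \
  --name my-consumption-environment \
  --resource-group my-resource-group \
  --location eastus
```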
## Logs
Settings relevant to the Azure Container Apps environment API resource.
| `properties.appLogsConfiguration` | Used for configuring the Log Analytics workspace where logs for all apps in the environment are published. | | `properties.containerAppsConfiguration.daprAIInstrumentationKey` | App Insights instrumentation key provided to Dapr for tracing |
-## Billing
-
-Azure Container Apps has two different pricing structures.
-
-- If you're using the Consumption only plan, or only the Consumption workload profile in the Consumption + Dedicated plan structure, then billing is relevant only to individual container apps and their resource usage. There's no cost associated with the Container Apps environment.
-- If you're using any Dedicated workload profiles in the Consumption + Dedicated plan structure, there's a fixed cost for the Dedicated plan management. This cost is for the entire environment regardless of how many Dedicated workload profiles you're using.
-
## Next steps

> [!div class="nextstepaction"]
container-apps Firewall Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/firewall-integration.md
Previously updated : 03/29/2023 Last updated : 08/29/2023
Network Security Groups (NSGs) needed to configure virtual networks closely rese
You can lock down a network via NSGs with more restrictive rules than the default NSG rules to control all inbound and outbound traffic for the Container Apps environment at the subscription level.
-In the workload profiles environment, user-defined routes (UDRs) and securing outbound traffic with a firewall are supported. Learn more in the [networking concepts document](./networking.md#user-defined-routes-udrpreview).
+In the workload profiles environment, user-defined routes (UDRs) and securing outbound traffic with a firewall are supported. Learn more in the [networking concepts document](./networking.md#user-defined-routes-udr).
In the Consumption only environment, custom user-defined routes (UDRs) and ExpressRoutes aren't supported.
container-apps Hardware https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/hardware.md
+
+ Title: Hardware reference in Azure Container Apps
+description: Learn about hardware specifications in Container Apps
+++++ Last updated : 08/30/2023++
+# Azure Container Apps hardware reference
+
+Workload profiles in Azure Container Apps run on specialized hardware with specific restrictions. Use the following information to help you select the workload profile most appropriate for your application.
+
+## Image size limit
++
+For more information on differences in hardware selection, see the [workload profiles overview](workload-profiles-overview.md).
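To compare the hardware available before choosing a profile, you can query what a region supports. The following is a sketch that assumes the `az containerapp env workload-profile list-supported` command; the region is a placeholder:

```azurecli
# Sketch: list the workload profiles (and their CPU/memory sizes) supported
# in a region. Assumes the `workload-profile list-supported` subcommand.
az containerapp env workload-profile list-supported \
  --location eastus \
  --output table
```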
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Quotas](quotas.md)
container-apps Health Probes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/health-probes.md
Previously updated : 10/28/2022 Last updated : 08/29/2023 # Health probes in Azure Container Apps
-Health probes in Azure Container Apps are based on [Kubernetes health probes](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/). You can set up probes using either TCP or HTTP(S) exclusively.
+Azure Container Apps Health probes are based on [Kubernetes health probes](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/). These probes allow the Container Apps runtime to regularly inspect the status of your container apps.
-Container Apps support the following probes:
+You can set up probes using either TCP or HTTP(S) exclusively.
-- **Liveness**: Reports the overall health of your replica.
-- **Readiness**: Signals that a replica is ready to accept traffic.
-- **Startup**: Delay reporting on a liveness or readiness state for slower apps with a startup probe.
+Container Apps supports the following probes:
+| Probe | Description |
+|||
+| Startup | Checks if your application has successfully started. This check is separate from the liveness probe and executes during the initial startup phase of your application. |
+| Liveness | Checks if your application is still running and responsive. |
+| Readiness | Checks to see if a replica is ready to handle incoming requests. |
-For a full listing of the specification supported in Azure Container Apps, refer to [Azure REST API specs](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/app/resource-manager/Microsoft.App/stable/2022-03-01/CommonDefinitions.json#L119-L236).
-
-> [!NOTE]
-> TCP startup probes are not supported for Consumption workload profiles in the [Consumption + Dedicated plan structure](./plans.md#consumption-dedicated).
+For a full list of the probe specification supported in Azure Container Apps, refer to [Azure REST API specs](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/app/resource-manager/Microsoft.App/stable/2022-03-01/CommonDefinitions.json#L119-L236).
## HTTP probes
-HTTP probes allow you to implement custom logic to check the status of application dependencies before reporting a healthy status. Configure your health probe endpoints to respond with an HTTP status code greater than or equal to `200` and less than `400` to indicate success. Any other response code outside this range indicates a failure.
+HTTP probes allow you to implement custom logic to check the status of application dependencies before reporting a healthy status.
+
+Configure your health probe endpoints to respond with an HTTP status code greater than or equal to `200` and less than `400` to indicate success. Any other response code outside this range indicates a failure.
The following example demonstrates how to implement a liveness endpoint in JavaScript.
app.get('/liveness', (req, res) => {
## TCP probes
-TCP probes wait for a connection to be established with the server to indicate success. A probe failure is registered if no connection is made.
+TCP probes wait to establish a connection with the server to indicate success. The probe fails if it can't establish a connection to your application.
## Restrictions

- You can only add one of each probe type per container.
- `exec` probes aren't supported.
- Port values must be integers; named ports aren't supported.
-- gRPC is not supported.
+- gRPC isn't supported.
## Examples
containers:
-The optional `failureThreshold` setting defines the number of attempts Container Apps tries if the probe if execution fails. Attempts that exceed the `failureThreshold` amount cause different results for each probe.
+The optional `failureThreshold` setting defines the number of attempts Container Apps tries to execute the probe if execution fails. Attempts that exceed the `failureThreshold` amount cause different results for each probe type.
## Default configuration
If ingress is enabled, the following default probes are automatically added to t
| Probe type | Default values | | -- | -- |
-| Startup | Protocol: TCP<br>Port: ingress target port<br>Timeout: 1 second<br>Period: 1 second<br>Initial delay: 1 second<br>Success threshold: 1<br>Failure threshold: `timeoutSeconds` |
-| Readiness | Protocol: TCP<br>Port: ingress target port<br>Timeout: 5 seconds<br>Period: 5 seconds<br>Initial delay: 3 seconds<br>Success threshold: 1<br>Failure threshold: `timeoutSeconds / 5` |
+| Startup | Protocol: TCP<br>Port: ingress target port<br>Timeout: 3 seconds<br>Period: 1 second<br>Initial delay: 1 second<br>Success threshold: 1<br>Failure threshold: 240 |
+| Readiness | Protocol: TCP<br>Port: ingress target port<br>Timeout: 5 seconds<br>Period: 5 seconds<br>Initial delay: 3 seconds<br>Success threshold: 1<br>Failure threshold: 48 |
| Liveness | Protocol: TCP<br>Port: ingress target port |
-If your app takes an extended amount of time to start, which is very common in Java, you often need to customize the probes so your container won't crash.
+If your app takes an extended amount of time to start (which is common in Java), you often need to customize the probes so your container doesn't crash.
The following example demonstrates how to configure the liveness and readiness probes in order to extend the startup times.
container-apps Ingress How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/ingress-how-to.md
You can configure ingress for your container app using the Azure CLI, an ARM template, or the Azure portal.
::: zone pivot="azure-cli"
-# [Azure CLI](#tab/azure-cli)
- This `az containerapp ingress enable` command enables ingress for your container app. You must specify the target port, and you can optionally set the exposed port if your transport type is `tcp`. ```azurecli
az containerapp ingress enable \
::: zone pivot="azure-portal"
-# [Portal](#tab/portal)
- Enable ingress for your container app by using the portal. You can enable ingress when you create your container app, or you can enable ingress for an existing container app.
You can enable ingress when you create your container app, or you can enable ing
You can configure ingress when you create your container app by using the Azure portal.
-
1. Set **Ingress** to **Enabled**.
1. Configure the ingress settings for your container app.
1. Select **Limited to Container Apps Environment** for internal ingress or **Accepting traffic from anywhere** for external ingress.
The **Ingress** settings page for your container app also allows you to configur
::: zone pivot="azure-resource-manager"
-# [ARM template](#tab/arm-template)
- Enable ingress for your container app by using the `ingress` configuration property. Set the `external` property to `true`, and set your `transport` and `targetPort` properties. -`external` property can be set to *true* for external or *false* for internal ingress. - Set the `transport` to `auto` to detect HTTP/1 or HTTP/2, `http` for HTTP/1, `http2` for HTTP/2, or `tcp` for TCP.
Enable ingress for your container app by using the `ingress` configuration prope
} ``` -- ::: zone-end ::: zone pivot="azure-cli" ## Disable ingress
-# [Azure CLI](#tab/azure-cli)
- Disable ingress for your container app by using the `az containerapp ingress` command. ```azurecli
az containerapp ingress disable \
::: zone pivot="azure-portal"
-# [Portal](#tab/portal)
- You can disable ingress for your container app using the portal. 1. Select **Ingress** from the **Settings** menu of the container app page.
You can disable ingress for your container app using the portal.
::: zone pivot="azure-resource-manager"
-# [ARM template](#tab/arm-template)
- Disable ingress for your container app by omitting the `ingress` configuration property from `properties.configuration` entirely. -+
+## <a name="use-additional-tcp-ports"></a>Use additional TCP ports (preview)
+
+You can expose additional TCP ports from your application. To learn more, see the [ingress concept article](ingress-overview.md#additional-tcp-ports).
+++
+Adding additional TCP ports can be done through the CLI by referencing a YAML file with your TCP port configurations.
+
+```azurecli
+az containerapp create \
+ --name <app-name> \
+ --resource-group <resource-group> \
+ --yaml <your-yaml-file>
+```
+
+The following is an example YAML file you can reference in the above CLI command. The configuration for the additional TCP ports is under `additionalPortMappings`.
+
+```yml
+location: northcentralus
+name: multiport-example
+properties:
+ configuration:
+ activeRevisionsMode: Single
+ ingress:
+ additionalPortMappings:
+ - exposedPort: 21025
+ external: false
+ targetPort: 1025
+ allowInsecure: false
+ external: true
+ targetPort: 1080
+ traffic:
+ - latestRevision: true
+ weight: 100
+ transport: http
+ managedEnvironmentId: <env id>
+ template:
+ containers:
+ - image: maildev/maildev
+ name: maildev
+ resources:
+ cpu: 0.25
+ memory: 0.5Gi
+ scale:
+ maxReplicas: 1
+ minReplicas: 1
+ workloadProfileName: Consumption
+type: Microsoft.App/containerApps
+```
+++
+This feature isn't supported in the Azure portal.
+++
+The following ARM template provides an example of how you can add additional ports to your container apps. Add each additional port under `additionalPortMappings` in the `ingress` section of the container app's `properties.configuration`:
+
+```json
+{
+ ...
+ "properties": {
+ ...
+ "configuration": {
+ "ingress": {
+ ...
+ "additionalPortMappings": [
+ {
+ "external": false
+ "targetPort": 80
+ "exposedPort": 12000
+ }
+ ]
+ }
+ }
+ ...
+}
+```
::: zone-end
container-apps Ingress Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/ingress-overview.md
HTTP ingress adds headers to pass metadata about the client request to your cont
| `X-Forwarded-Proto` | Protocol used by the client to connect with the Container Apps service. | `http` or `https` | | `X-Forwarded-For` | The IP address of the client that sent the request. | | | `X-Forwarded-Host` | The host name the client used to connect with the Container Apps service. | |
-| `X-Forwarded-Client-Cert` | The client certificate if `clientCertificateMode` is set. | Semicolon seperated list of Hash, Cert, and Chain. For example: `Hash=....;Cert="...";Chain="...";` |
+| `X-Forwarded-Client-Cert` | The client certificate if `clientCertificateMode` is set. | Semicolon separated list of Hash, Cert, and Chain. For example: `Hash=....;Cert="...";Chain="...";` |
### <a name="tcp"></a>TCP
With TCP ingress enabled, your container app:
- Is accessible to other container apps in the same environment via its name (defined by the `name` property in the Container Apps resource) and exposed port number.
- Is accessible externally via its fully qualified domain name (FQDN) and exposed port number if the ingress is set to "external".
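For example, enabling external TCP ingress from the CLI might look like the following sketch; the app name, resource group, and Redis-style port are placeholders:

```azurecli
# Sketch: expose a TCP app externally on port 6379.
# The app's FQDN plus the exposed port becomes the client-facing address.
az containerapp ingress enable \
  --name my-tcp-app \
  --resource-group my-resource-group \
  --type external \
  --transport tcp \
  --target-port 6379 \
  --exposed-port 6379
```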
+## <a name="additional-tcp-ports"></a>Additional TCP ports (preview)
+
+In addition to the main HTTP/TCP port for your container apps, you may expose additional TCP ports to enable applications that accept TCP connections on multiple ports. This feature is in preview.
+
+The following apply to additional TCP ports:
+- Additional TCP ports can only be external if the app itself is set as external and the container app is using a custom VNet.
+- Any externally exposed additional TCP ports must be unique across the entire Container Apps environment. This includes all external additional TCP ports, external main TCP ports, and ports 80/443 used by built-in HTTP ingress. If the additional ports are internal, the same port can be shared by multiple apps.
+- If an exposed port isn't provided, it defaults to the target port.
+- Each target port must be unique, and the same target port can't be exposed on different exposed ports.
+- There's a maximum of 5 additional ports per app. If more ports are required, open a support request.
+- Only the main ingress port supports built-in HTTP features such as CORS and session affinity. When running HTTP on top of additional TCP ports, these built-in features aren't supported.
+
+Visit the [how to article on ingress](ingress-how-to.md#use-additional-tcp-ports) for more information on how to enable additional ports for your container apps.
+
## Domain names

You can access your app in the following ways:

-- The default fully-qualified domain name (FQDN): Each app in a Container Apps environment is automatically assigned an FQDN based on the environment's DNS suffix. To customize an environment's DNS suffix, see [Custom environment DNS Suffix](environment-custom-dns-suffix.md).
+- The default fully qualified domain name (FQDN): Each app in a Container Apps environment is automatically assigned an FQDN based on the environment's DNS suffix. To customize an environment's DNS suffix, see [Custom environment DNS Suffix](environment-custom-dns-suffix.md).
- A custom domain name: You can configure a custom DNS domain for your Container Apps environment. For more information, see [Custom domain names and certificates](./custom-domains-certificates.md). - The app name: You can use the app name for communication between apps in the same environment.
container-apps Jobs Get Started Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/jobs-get-started-cli.md
Title: Create a job with Azure Container Apps (preview)
+ Title: Create a job with Azure Container Apps
description: Learn to create an on-demand or scheduled job in Azure Container Apps Previously updated : 05/08/2023 Last updated : 08/17/2023 zone_pivot_groups: container-apps-job-types
-# Create a job with Azure Container Apps (preview)
+# Create a job with Azure Container Apps
Azure Container Apps [jobs](jobs.md) allow you to run containerized tasks that execute for a finite duration and exit. You can trigger a job manually, schedule their execution, or trigger their execution based on events.
container-apps Jobs Get Started Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/jobs-get-started-portal.md
+
+ Title: Create a job with Azure Container Apps using the Azure portal
+description: Learn to create an on-demand or scheduled job in Azure Container Apps using the Azure portal
+++++ Last updated : 08/21/2023+++
+# Create a job with Azure Container Apps using the Azure portal
+
+Azure Container Apps [jobs](jobs.md) allow you to run containerized tasks that execute for a finite duration and exit. You can trigger a job manually, schedule their execution, or trigger their execution based on events.
+
+Jobs are best suited for tasks such as data processing, machine learning, or any scenario that requires on-demand processing.
+
+In this quickstart, you create a scheduled job. To learn how to create an event-driven job, see [Deploy an event-driven job with Azure Container Apps](tutorial-event-driven-jobs.md).
+
+## Prerequisites
+
+An Azure account with an active subscription is required. If you don't already have one, you can [create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). Also, make sure the `Microsoft.App` resource provider is registered on your subscription.
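If the provider isn't registered yet, a one-time registration from the Azure CLI looks like this sketch:

```azurecli
# One-time registration of the Microsoft.App resource provider.
az provider register --namespace Microsoft.App

# Verify the registration state (should eventually report "Registered").
az provider show --namespace Microsoft.App --query registrationState
```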
+
+## Setup
+
+Begin by signing in to the [Azure portal](https://portal.azure.com).
+
+## Create a container app
+
+To create your Container Apps job, start at the Azure portal home page.
+
+1. Search for **Container App Jobs** in the top search bar.
+1. Select **Container App Jobs** in the search results.
+1. Select the **Create** button.
+
+### Basics tab
+
+In the *Basics* tab, do the following actions.
+
+1. Enter the following values in the *Project details* section.
+
+ | Setting | Action |
+ |||
+ | Subscription | Select your Azure subscription. |
+ | Resource group | Select **Create new** and enter **jobs-quickstart**. |
+ | Container job name | Enter **my-job**. |
+
+#### Create an environment
+
+Next, create an environment for your container app.
+
+1. Select the appropriate region.
+
+ | Setting | Value |
+ |--|--|
+ | Region | Select **Central US**. |
+
+1. In the *Create Container Apps environment* field, select the **Create new** link.
+1. In the *Create Container Apps Environment* page, on the *Basics* tab, enter the following values:
+
+ | Setting | Value |
+ |--|--|
+ | Environment name | Enter **my-environment**. |
+ | Type | Enter **Workload Profile**. |
+ | Zone redundancy | Select **Disabled** |
+
+1. Select the **Create** button at the bottom of the *Create Container App Environment* page.
+
+### Deploy the job
+
+1. In *Job details*, select **Scheduled** for the *Trigger type*.
+
+ In the *Cron expression* field, enter `*/1 * * * *`.
+
+ This expression starts the job every minute.
+
+1. Select the **Next: Container** button at the bottom of the page.
+
+1. In the *Container* tab, enter the following values:
+
+ | Setting | Value |
+ |--|--|
+ | Name | Enter **main-container**. |
+ | Image source | Select **Docker Hub or other registries**. |
+ | Image type | Select **Public**. |
+ | Registry login server | Enter **mcr.microsoft.com**. |
+ | Image and tag | Enter **k8se/quickstart-jobs:latest**. |
+ | Workload profile | Select **Consumption**. |
+ | CPU and memory | Select **0.25** and **0.5Gi**. |
+
+1. Select the **Review and create** button at the bottom of the page.
+
+ As the settings in the job are verified, if no errors are found, the *Create* button is enabled.
+
+ Any errors appear on a tab marked with a red dot. If you encounter errors, navigate to the appropriate tab and you'll find fields containing errors highlighted in red. Once all errors are fixed, select **Review and create** again.
+
+1. Select **Create**.
+
+ A page with the message *Deployment is in progress* is displayed. Once the deployment is successfully completed, you'll see the message: *Your deployment is complete*.
+
+### Verify deployment
+
+1. Select **Go to resource** to view your new Container Apps job.
+
+2. Select the **Execution history** tab.
+
+ The *Execution history* tab displays the status of each job execution. Select the **Refresh** button to update the list. Wait up to a minute for the scheduled job execution to start. Its status changes from *Pending* to *Running* to *Succeeded*.
+
+1. Select **View logs**.
+
+ The logs show the output of the job execution. It may take a few minutes for the logs to appear.
+
+## Clean up resources
+
+If you're not going to continue to use this application, you can delete the Azure Container Apps instance and all the associated services by removing the resource group.
+
+1. Select the **jobs-quickstart** resource group from the *Overview* section.
+1. Select the **Delete resource group** button at the top of the resource group *Overview*.
+1. Enter the resource group name **jobs-quickstart** in the *Are you sure you want to delete "jobs-quickstart"* confirmation dialog.
+1. Select **Delete**.
+ The process to delete the resource group may take a few minutes to complete.
+
+> [!TIP]
+> Having issues? Let us know on GitHub by opening an issue in the [Azure Container Apps repo](https://github.com/microsoft/azure-container-apps).
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Container Apps jobs](jobs.md)
container-apps Jobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/jobs.md
Title: Jobs in Azure Container Apps (preview)
-description: Learn about jobs in Azure Container Apps (preview)
+ Title: Jobs in Azure Container Apps
+description: Learn about jobs in Azure Container Apps
Previously updated : 05/08/2023 Last updated : 08/17/2023
-# Jobs in Azure Container Apps (preview)
+# Jobs in Azure Container Apps
Azure Container Apps jobs enable you to run containerized tasks that execute for a finite duration and exit. You can use jobs to perform tasks such as data processing, machine learning, or any scenario where on-demand processing is required.
Apps are services that run continuously. If a container in an app fails, it's re
Jobs are tasks that start, run for a finite duration, and exit when finished. Each execution of a job typically performs a single unit of work. Job executions start manually, on a schedule, or in response to events. Examples of jobs include batch processes that run on demand and scheduled tasks.
+### Example scenarios
+
+The following table compares common scenarios for apps and jobs:
+
+| Scenario | App or job | Notes |
+||||
+| An HTTP server that serves web content and API requests | App | Configure an [HTTP scale rule](scale-app.md#http). |
+| A process that generates financial reports nightly | Job | Use the [*Schedule* job type](#scheduled-jobs) and configure a cron expression. |
+| A continuously running service that processes messages from an Azure Service Bus queue | App | Configure a [custom scale rule](scale-app.md#custom). |
+| A job that processes a single message or a small batch of messages from an Azure queue and exits | Job | Use the *Event* job type and [configure a custom scale rule](tutorial-event-driven-jobs.md) to trigger job executions. |
+| A background task that's triggered on-demand and exits when finished | Job | Use the *Manual* job type and [start executions](#start-a-job-execution-on-demand) manually or programmatically using an API. |
+| A self-hosted GitHub Actions runner or Azure Pipelines agent | Job | Use the *Event* job type and configure a [GitHub Actions](tutorial-ci-cd-runners-jobs.md?pivots=container-apps-jobs-self-hosted-ci-cd-github-actions) or [Azure Pipelines](tutorial-ci-cd-runners-jobs.md?pivots=container-apps-jobs-self-hosted-ci-cd-azure-pipelines) scale rule. |
+| An Azure Functions app | App | [Deploy Azure Functions to Container Apps](../azure-functions/functions-container-apps-hosting.md). |
+| An event-driven app using the Azure WebJobs SDK | App | [Configure a scale rule](scale-app.md#custom) for each event source. |
## Job trigger types

A job's trigger type determines how the job is started. The following trigger types are available:
To create a manual job using the Azure CLI, use the `az containerapp job create`
az containerapp job create \ --name "my-job" --resource-group "my-resource-group" --environment "my-environment" \ --trigger-type "Manual" \
- --replica-timeout 1800 --replica-retry-limit 1 --replica-completion-count 1 --parallelism 1 \
+ --replica-timeout 1800 --replica-retry-limit 0 --replica-completion-count 1 --parallelism 1 \
--image "mcr.microsoft.com/k8se/quickstart-jobs:latest" \ --cpu "0.25" --memory "0.5Gi" ```
The following example Azure Resource Manager template creates a manual job named
"parallelism": 1, "replicaCompletionCount": 1 },
- "replicaRetryLimit": 1,
+ "replicaRetryLimit": 0,
"replicaTimeout": 1800, "triggerType": "Manual" },
The following example Azure Resource Manager template creates a manual job named
} ```
+# [Azure portal](#tab/azure-portal)
+
+To create a manual job using the Azure portal, search for *Container App Jobs* in the Azure portal and select *Create*. Specify *Manual* as the trigger type.
+
+Enter the following values in the *Containers* tab to use a sample container image.
+
+| Setting | Value |
+|||
+| Name | *main* |
+| Image source | *Docker Hub or other registries* |
+| Image type | *Public* |
+| Registry login server | *mcr.microsoft.com* |
+| Image and tag | *k8se/quickstart-jobs:latest* |
+| CPU and memory | *0.25 CPU cores, 0.5 Gi memory*, or higher |
+
-The `mcr.microsoft.com/k8se/quickstart-jobs:latest` image is a sample container image that runs a job that waits a few seconds, prints a message to the console, and then exits.
+The `mcr.microsoft.com/k8se/quickstart-jobs:latest` image is a public sample container image that runs a job that waits a few seconds, prints a message to the console, and then exits. To authenticate and use a private container image, see [Containers](containers.md#container-registries).
The above command only creates the job. To start a job execution, see [Start a job execution on demand](#start-a-job-execution-on-demand).
Container Apps jobs use cron expressions to define schedules. It supports the standard cron expression format.
| Expression | Description | |||
+| `*/5 * * * *` | Runs every 5 minutes. |
| `0 */2 * * *` | Runs every two hours. | | `0 0 * * *` | Runs every day at midnight. | | `0 0 * * 0` | Runs every Sunday at midnight. |
To create a scheduled job using the Azure CLI, use the `az containerapp job crea
az containerapp job create \ --name "my-job" --resource-group "my-resource-group" --environment "my-environment" \ --trigger-type "Schedule" \
- --replica-timeout 1800 --replica-retry-limit 1 --replica-completion-count 1 --parallelism 1 \
+ --replica-timeout 1800 --replica-retry-limit 0 --replica-completion-count 1 --parallelism 1 \
--image "mcr.microsoft.com/k8se/quickstart-jobs:latest" \ --cpu "0.25" --memory "0.5Gi" \
- --cron-expression "0 0 * * *"
+ --cron-expression "*/1 * * * *"
``` # [Azure Resource Manager](#tab/azure-resource-manager)
The following example Azure Resource Manager template creates a manual job named
"properties": { "configuration": { "scheduleTriggerConfig": {
- "cronExpression": "0 0 * * *",
+ "cronExpression": "*/1 * * * *",
"parallelism": 1, "replicaCompletionCount": 1 },
- "replicaRetryLimit": 1,
+ "replicaRetryLimit": 0,
"replicaTimeout": 1800, "triggerType": "Schedule" },
The following example Azure Resource Manager template creates a manual job named
} ```
+# [Azure portal](#tab/azure-portal)
+
+To create a scheduled job using the Azure portal, search for *Container App Jobs* in the Azure portal and select *Create*. Specify *Schedule* as the trigger type and define the schedule with a cron expression, such as `*/1 * * * *` to run every minute.
+
+Enter the following values in the *Containers* tab to use a sample container image.
+
+| Setting | Value |
+|||
+| Name | *main* |
+| Image source | *Docker Hub or other registries* |
+| Image type | *Public* |
+| Registry login server | *mcr.microsoft.com* |
+| Image and tag | *k8se/quickstart-jobs:latest* |
+| CPU and memory | *0.25 CPU cores, 0.5 Gi memory*, or higher |
+
-The `mcr.microsoft.com/k8se/quickstart-jobs:latest` image is a sample container image that runs a job that waits a few seconds, prints a message to the console, and then exits.
+The `mcr.microsoft.com/k8se/quickstart-jobs:latest` image is a public sample container image that runs a job that waits a few seconds, prints a message to the console, and then exits. To authenticate and use a private container image, see [Containers](containers.md#container-registries).
-The cron expression `0 0 * * *` runs the job every day at midnight UTC.
+The cron expression `*/1 * * * *` runs the job every minute.
### Event-driven jobs

Event-driven jobs are triggered by events from supported [custom scalers](scale-app.md#custom). Examples of event-driven jobs include:

- A job that runs when a new message is added to a queue such as Azure Service Bus, Kafka, or RabbitMQ.
-- A self-hosted GitHub Actions runner or Azure DevOps agent that runs when a new job is queued in a workflow or pipeline.
+- A self-hosted [GitHub Actions runner](tutorial-ci-cd-runners-jobs.md?pivots=container-apps-jobs-self-hosted-ci-cd-github-actions) or [Azure DevOps agent](tutorial-ci-cd-runners-jobs.md?pivots=container-apps-jobs-self-hosted-ci-cd-azure-pipelines) that runs when a new job is queued in a workflow or pipeline.
Container apps and event-driven jobs use [KEDA](https://keda.sh/) scalers. They both evaluate scaling rules on a polling interval to measure the volume of events for an event source, but the way they use the results is different.
To create an event-driven job using the Azure CLI, use the `az containerapp job
az containerapp job create \ --name "my-job" --resource-group "my-resource-group" --environment "my-environment" \ --trigger-type "Event" \
- --replica-timeout 1800 --replica-retry-limit 1 --replica-completion-count 1 --parallelism 1 \
+ --replica-timeout 1800 --replica-retry-limit 0 --replica-completion-count 1 --parallelism 1 \
--image "docker.io/myuser/my-event-driven-job:latest" \ --cpu "0.25" --memory "0.5Gi" \ --min-executions "0" \
az containerapp job create \
--secrets "connection-string-secret=<QUEUE_CONNECTION_STRING>" ```
+The example configures an Azure Storage queue scale rule.
+ # [Azure Resource Manager](#tab/azure-resource-manager) The following example Azure Resource Manager template creates an event-driven job named `my-job` in a resource group named `my-resource-group` and a Container Apps environment named `my-environment`:
The following example Azure Resource Manager template creates an event-driven jo
], } },
- "replicaRetryLimit": 1,
+ "replicaRetryLimit": 0,
"replicaTimeout": 1800, "triggerType": "Event", "secrets": [
The following example Azure Resource Manager template creates an event-driven jo
} ```
+The example configures an Azure Storage queue scale rule.
+
+# [Azure portal](#tab/azure-portal)
+
+To create an event-driven job using the Azure portal, search for *Container App Jobs* in the Azure portal and select *Create*. Specify *Event* as the trigger type and configure the scaling rule.
+
-The example configures an Azure Storage queue scale rule. For a complete tutorial, see [Deploy an event-driven job](tutorial-event-driven-jobs.md).
+For a complete tutorial, see [Deploy an event-driven job](tutorial-event-driven-jobs.md).
## Start a job execution on demand
Replace `<SUBSCRIPTION_ID>` with your subscription ID.
To authenticate the request, replace `<TOKEN>` in the `Authorization` header with a valid bearer token. For more information, see [Azure REST API reference](/rest/api/azure).
+# [Azure portal](#tab/azure-portal)
+
+Starting a job execution using the Azure portal isn't supported.
+
-When you start a job execution, you can choose to override the job's configuration. For example, you can override an environment variable or the startup command to pass specific data to the job.
+When you start a job execution, you can choose to override the job's configuration. For example, you can override an environment variable or the startup command to run the same job with different inputs. The overridden configuration is only used for the current execution and doesn't change the job's configuration.
# [Azure CLI](#tab/azure-cli)
-Azure CLI doesn't support overriding a job's configuration when starting a job execution.
+To override the job's configuration while starting an execution, use the `az containerapp job start` command and pass a YAML file containing the template to use for the execution. The following example starts an execution of a job named `my-job` in a resource group named `my-resource-group`.
+
+Retrieve the job's current configuration with the `az containerapp job show` command and save the template to a file named `my-job-template.yaml`:
+
+```azurecli
+az containerapp job show --name "my-job" --resource-group "my-resource-group" --query "properties.template" --output yaml > my-job-template.yaml
+```
+
+Edit the `my-job-template.yaml` file to override the job's configuration. For example, to override the environment variables, modify the `env` section:
+
+```yaml
+containers:
+- name: print-hello
+ image: ubuntu
+ resources:
+ cpu: 1
+ memory: 2Gi
+ env:
+ - name: MY_NAME
+ value: Azure Container Apps jobs
+ args:
+ - /bin/bash
+ - -c
+ - echo "Hello, $MY_NAME!"
+```
+
+Start the job using the template:
+
+```azurecli
+az containerapp job start --name "my-job" --resource-group "my-resource-group" \
+ --yaml my-job-template.yaml
+```
# [Azure Resource Manager](#tab/azure-resource-manager)
Authorization: Bearer <TOKEN>
Replace `<SUBSCRIPTION_ID>` with your subscription ID and `<TOKEN>` in the `Authorization` header with a valid bearer token. For more information, see [Azure REST API reference](/rest/api/azure).
+# [Azure portal](#tab/azure-portal)
+
+Starting a job execution using the Azure portal isn't supported.
+ ## Get job execution history
Replace `<SUBSCRIPTION_ID>` with your subscription ID.
To authenticate the request, add an `Authorization` header with a valid bearer token. For more information, see [Azure REST API reference](/rest/api/azure).
+# [Azure portal](#tab/azure-portal)
+
+To view the status of job executions using the Azure portal, search for *Container App Jobs* in the Azure portal and select the job. The *Execution history* tab displays the status of recent executions.
+
-The execution history for scheduled & event-based jobs is limited to the most recent `100` successful and failed job executions.
+The execution history for scheduled and event-based jobs is limited to the most recent 100 successful and failed job executions.
To list all executions of a job or to get detailed output from a job, query the logs provider configured for your Container Apps environment.
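For the recent executions that the platform retains, a quick CLI check is also possible. The following sketch assumes the `az containerapp job execution list` command; the job and resource group names are placeholders:

```azurecli
# Sketch: list recent executions of a job with their status and timestamps.
az containerapp job execution list \
  --name my-job \
  --resource-group my-resource-group \
  --output table
```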
The following table includes the job settings that you can configure:
| Setting | Azure Resource Manager property | CLI parameter| Description | ||||| | Job type | `triggerType` | `--trigger-type` | The type of job. (`Manual`, `Schedule`, or `Event`) |
-| Parallelism | `parallelism` | `--parallelism` | The number of replicas to run per execution. |
-| Replica completion count | `replicaCompletionCount` | `--replica-completion-count` | The number of replicas to complete successfully for the execution to succeed. |
+| Parallelism | `parallelism` | `--parallelism` | The number of replicas to run per execution. For most jobs, set the value to `1`. |
+| Replica completion count | `replicaCompletionCount` | `--replica-completion-count` | The number of replicas to complete successfully for the execution to succeed. For most jobs, set the value to `1`. |
| Replica timeout | `replicaTimeout` | `--replica-timeout` | The maximum time in seconds to wait for a replica to complete. |
-| Replica retry limit | `replicaRetryLimit` | `--replica-retry-limit` | The maximum number of times to retry a failed replica. |
+| Replica retry limit | `replicaRetryLimit` | `--replica-retry-limit` | The maximum number of times to retry a failed replica. To fail a replica without retrying, set the value to `0`. |
### Example
The following example Azure Resource Manager template creates a job with advance
} ```
+# [Azure portal](#tab/azure-portal)
+
+To configure advanced settings using the Azure portal, search for *Container App Jobs* in the Azure portal and select *Create*. Select *Configuration* to configure the settings.
+
-## Jobs preview restrictions
+## Jobs restrictions
-The following features are not supported:
+The following features aren't supported:
-- Volume mounts
-- Init containers
- Dapr
-- Azure Key Vault references in secrets
- Ingress and related features such as custom domains and SSL certificates

## Next steps
container-apps Manage Secrets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/manage-secrets.md
Previously updated : 05/10/2023 Last updated : 03/23/2023
Secrets are defined at the application level in the `resources.properties.config
{ "name": "queue-connection-string", "keyVaultUrl": "<KEY-VAULT-SECRET-URI>",
- "identity": "System"
+ "identity": "system"
}], } } } ```
-Here, a connection string to a queue storage account is declared in the `secrets` array. Its value is automatically retrieved from Key Vault using the specified identity. To use a user managed identity, replace `System` with the identity's resource ID.
+Here, a connection string to a queue storage account is declared in the `secrets` array. Its value is automatically retrieved from Key Vault using the specified identity. To use a user managed identity, replace `system` with the identity's resource ID.
Replace `<KEY-VAULT-SECRET-URI>` with the URI of your secret in Key Vault.
az containerapp create \
--secrets "queue-connection-string=keyvaultref:<KEY_VAULT_SECRET_URI>,identityref:<USER_ASSIGNED_IDENTITY_ID>" ```
-Here, a connection string to a queue storage account is declared in the `--secrets` parameter. Replace `<KEY_VAULT_SECRET_URI>` with the URI of your secret in Key Vault. Replace `<USER_ASSIGNED_IDENTITY_ID>` with the resource ID of the user assigned identity. For system assigned identity, use `System` instead of the resource ID.
+Here, a connection string to a queue storage account is declared in the `--secrets` parameter. Replace `<KEY_VAULT_SECRET_URI>` with the URI of your secret in Key Vault. Replace `<USER_ASSIGNED_IDENTITY_ID>` with the resource ID of the user assigned identity. For system assigned identity, use `system` instead of the resource ID.
> [!NOTE] > The user assigned identity must have access to read the secret in Key Vault. System assigned identity can't be used with the create command because it's not available until after the container app is created.
Secrets Key Vault references aren't supported in PowerShell.
> [!NOTE]
-> If you're using [UDR With Azure Firewall](./networking.md#user-defined-routes-udrpreview), you will need to add the `AzureKeyVault` service tag and the *login.microsoft.com* FQDN to the allow list for your firewall. Refer to [configuring UDR with Azure Firewall](./networking.md#configuring-udr-with-azure-firewallpreview) to decide which additional service tags you need.
+> If you're using [UDR With Azure Firewall](networking.md#user-defined-routes-udr), you will need to add the `AzureKeyVault` service tag and the *login.microsoft.com* FQDN to the allow list for your firewall. Refer to [configuring UDR with Azure Firewall](networking.md#configuring-udr-with-azure-firewall) to decide which additional service tags you need.
#### Key Vault secret URI and secret rotation
container-apps Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/networking.md
Title: Networking environment in Azure Container Apps
+ Title: Networking in Azure Container Apps environment
description: Learn how to configure virtual networks in Azure Container Apps. Previously updated : 03/29/2023 Last updated : 08/29/2023
-# Networking environment in Azure Container Apps
+# Networking in Azure Container Apps environment
-Azure Container Apps run in the context of an [environment](environment.md), which is supported by a virtual network (VNet). By default, your Container App environment is created with a VNet that is automatically generated for you. Generated VNets are inaccessible to you as they're created in Microsoft's tenant. This VNet is publicly accessible over the internet, can only reach internet accessible endpoints, and supports a limited subset of networking capabilities such as ingress IP restrictions and container app level ingress controls.
+Azure Container Apps run in the context of an [environment](environment.md), with its own virtual network (VNet).
-Use the Custom VNet configuration to provide your own VNet if you need more Azure networking features such as:
+By default, your Container App environment is created with a VNet that is automatically generated for you. For fine-grained control over your network, you can provide an existing VNet when you create an environment. Once you create an environment with either a generated or existing VNet, the network type can't be changed.
+
+Generated VNets take on the following characteristics.
+
+They are:
+
+- inaccessible to you as they're created in Microsoft's tenant
+- publicly accessible over the internet
+- only able to reach internet accessible endpoints
+
+Further, they only support a limited subset of networking capabilities such as ingress IP restrictions and container app level ingress controls.
+
+Use an existing VNet if you need more Azure networking features such as:
- Integration with Application Gateway
- Network Security Groups
-- Communicating with resources behind private endpoints in your virtual network
+- Communication with resources behind private endpoints in your virtual network
-The features available depend on your environment selection.
+The available VNet features depend on your environment selection.
-## Environment Selection
+## Environment selection
-There are two environments in Container Apps: the Consumption only environment supports only the [Consumption plan (GA)](./plans.md) and the workload profiles environment that supports both the [Consumption + Dedicated plan structure (preview)](./plans.md). The two environments share many of the same networking characteristics. However, there are some key differences.
+Container Apps has two different [environment types](environment.md#types), which share many of the same networking characteristics with some key differences.
-| Environment Type | Description |
-|--|-|
-| Workload profiles environment (preview) | Supports user defined routes (UDR) and egress through NAT Gateway. The minimum required subnet size is /27. <br /> <br /> As workload profiles are currently in preview, the number of supported regions is limited. To learn more, visit the [workload profiles overview](./workload-profiles-overview.md).|
-| Consumption only environment | Doesn't support user defined routes (UDR) and egress through NAT Gateway. The minimum required subnet size is /23. |
+| Environment type | Description | Supported plan types |
+||||
+| Workload profiles | Supports user defined routes (UDR) and egress through NAT Gateway. The minimum required subnet size is `/27`. | Consumption, Dedicated |
+| Consumption only | Doesn't support user defined routes (UDR) and egress through NAT Gateway. The minimum required subnet size is `/23`. | Consumption |
-## Accessibility Levels
+## Accessibility levels
-In Container Apps, you can configure whether your container app allows public ingress or only ingress from within your VNet at the environment level.
+You can configure whether your container app allows public ingress or ingress only from within your VNet at the environment level.
| Accessibility level | Description |
-||-|
-| [External](vnet-custom.md) | Container Apps environments deployed as external resources are available for public requests. External environments are deployed with a virtual IP on an external, public facing IP address. |
-| [Internal](vnet-custom-internal.md) | When set to internal, the environment has no public endpoint. Internal environments are deployed with a virtual IP (VIP) mapped to an internal IP address. The internal endpoint is an Azure internal load balancer (ILB) and IP addresses are issued from the custom VNet's list of private IP addresses. |
+|||
+| [External](vnet-custom.md) | Allows your container app to accept public requests. External environments are deployed with a virtual IP on an external, public facing IP address. |
+| [Internal](vnet-custom-internal.md) | Internal environments have no public endpoints and are deployed with a virtual IP (VIP) mapped to an internal IP address. The internal endpoint is an Azure internal load balancer (ILB) and IP addresses are issued from the custom VNet's list of private IP addresses. |
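For example, a minimal sketch (hypothetical names and subnet ID) of creating an internal environment with the Azure CLI; the `--internal-only` flag restricts ingress to the VNet:

```azurecli
# Sketch only; hypothetical names. Creates an environment with no public endpoint.
az containerapp env create \
  --name my-internal-environment \
  --resource-group my-resource-group \
  --location eastus \
  --infrastructure-subnet-resource-id "/subscriptions/<SUBSCRIPTION_ID>/resourceGroups/my-resource-group/providers/Microsoft.Network/virtualNetworks/my-vnet/subnets/infrastructure-subnet" \
  --internal-only true
```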
## Custom VNet configuration
As you create a custom VNet, keep in mind the following situations:
- If you want your container app to restrict all outside access, create an [internal Container Apps environment](vnet-custom-internal.md).
-- When you provide your own VNet, you need to provide a subnet that is dedicated to the Container App environment you deploy. This subnet isn't available to other services.
+- If you use your own VNet, you need to provide a subnet that is dedicated exclusively to the Container App environment you deploy. This subnet isn't available to other services.
-- Network addresses are assigned from a subnet range you define as the environment is created.
+- Network addresses are assigned from a subnet range you define as the environment is created.
- You can define the subnet range used by the Container Apps environment.
+
- You can restrict inbound requests to the environment exclusively to the VNet by deploying the environment as [internal](vnet-custom-internal.md).

> [!NOTE]
-> When you provide your own virtual network, additional [managed resources](networking.md#managed-resources) are created, which incur billing.
+> When you provide your own virtual network, additional [managed resources](networking.md#managed-resources) are created. These resources incur costs at their associated rates.
-As you begin to design the network around your container app, refer to [Plan virtual networks](../virtual-network/virtual-network-vnet-plan-design-arm.md) for important concerns surrounding running virtual networks on Azure.
+As you begin to design the network around your container app, refer to [Plan virtual networks](../virtual-network/virtual-network-vnet-plan-design-arm.md).
:::image type="content" source="media/networking/azure-container-apps-virtual-network.png" alt-text="Diagram of how Azure Container Apps environments use an existing VNet, or you can provide your own.":::

> [!NOTE]
-> Moving VNets among different resource groups or subscriptions is not supported if the VNet is in use by a Container Apps environment.
+> Moving VNets among different resource groups or subscriptions is not allowed if the VNet is in use by a Container Apps environment.
## HTTP edge proxy behavior
-Azure Container Apps uses [Envoy proxy](https://www.envoyproxy.io/) as an edge HTTP proxy. TLS is terminated on the edge and requests are routed based on their traffic splitting rules and routes traffic to the correct application.
+Azure Container Apps uses the [Envoy proxy](https://www.envoyproxy.io/) as an edge HTTP proxy. Transport Layer Security (TLS) is terminated at the edge, and requests are routed to the correct application based on their traffic splitting rules.
-HTTP applications scale based on the number of HTTP requests and connections. Envoy routes internal traffic inside clusters. Downstream connections support HTTP1.1 and HTTP2 and Envoy automatically detects and upgrades the connection should the client connection be upgraded. Upstream connection is defined by setting the `transport` property on the [ingress](azure-resource-manager-api-spec.md#propertiesconfiguration) object.
+HTTP applications scale based on the number of HTTP requests and connections. Envoy routes internal traffic inside clusters.
+
+Downstream connections support HTTP1.1 and HTTP2 and Envoy automatically detects and upgrades connections if the client connection requires an upgrade.
+
+Upstream connections are defined by setting the `transport` property on the [ingress](azure-resource-manager-api-spec.md#propertiesconfiguration) object.
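For instance, a sketch (hypothetical app and resource group names) that forces HTTP/2 for upstream connections on an existing app:

```azurecli
# Sketch only; hypothetical names. Sets the upstream transport to HTTP/2.
az containerapp ingress update \
  --name my-container-app \
  --resource-group my-resource-group \
  --transport http2
```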
### Ingress configuration

Under the [ingress](azure-resource-manager-api-spec.md#propertiesconfiguration) section, you can configure the following settings:

-- **Accessibility level**: You can set your container app as externally or internally accessible in the environment. An environment variable `CONTAINER_APP_ENV_DNS_SUFFIX` is used to automatically resolve the FQDN suffix for your environment. When communicating between Container Apps within the same environment, you may also use the app name. For more information on how to access your apps, see [ingress](./ingress-overview.md#domain-names).
+- **Accessibility level**: You can set your container app as externally or internally accessible in the environment. An environment variable `CONTAINER_APP_ENV_DNS_SUFFIX` is used to automatically resolve the fully qualified domain name (FQDN) suffix for your environment. When communicating between container apps within the same environment, you may also use the app name. For more information on how to access your apps, see [Ingress in Azure Container Apps](./ingress-overview.md#domain-names).
- **Traffic split rules**: You can define traffic splitting rules between different revisions of your application. For more information, see [Traffic splitting](traffic-splitting.md).
-For more information about ingress configuration, see [Ingress in Azure Container Apps](ingress-overview.md).
-
-### Scenarios
-
-For more information about scenarios, see [Ingress in Azure Container Apps](ingress-overview.md).
+For more information about different networking scenarios, see [Ingress in Azure Container Apps](ingress-overview.md).
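As an illustration of traffic splitting, a sketch (hypothetical app and revision names) that routes 80% of requests to one revision and 20% to another:

```azurecli
# Sketch only; hypothetical names. Splits traffic 80/20 between two revisions.
az containerapp ingress traffic set \
  --name my-container-app \
  --resource-group my-resource-group \
  --revision-weight my-container-app--rev1=80 my-container-app--rev2=20
```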
## Portal dependencies

For every app in Azure Container Apps, there are two URLs.
-Container Apps generates the first URL, which is used to access your app. See the *Application Url* in the *Overview* window of your container app in the Azure portal for the fully qualified domain name (FQDN) of your container app.
+The Container Apps runtime initially generates a fully qualified domain name (FQDN) used to access your app. See the *Application Url* in the *Overview* window of your container app in the Azure portal for the FQDN of your container app.
-The second URL grants access to the log streaming service and the console. If necessary, you may need to add `https://azurecontainerapps.dev/` to the allowlist of your firewall or proxy.
+A second URL is also generated for you. This location grants access to the log streaming service and the console. If necessary, you may need to add `https://azurecontainerapps.dev/` to the allowlist of your firewall or proxy.
## Ports and IP addresses

The following ports are exposed for inbound connections.
-| Use | Port(s) |
+| Protocol | Port(s) |
|--|--|
| HTTP/HTTPS | 80, 443 |
IP addresses are broken down into the following types:
| Type | Description |
|--|--|
-| Public inbound IP address | Used for app traffic in an external deployment, and management traffic in both internal and external deployments. |
-| Outbound public IP | Used as the "from" IP for outbound connections that leave the virtual network. These connections aren't routed down a VPN. Outbound IPs aren't guaranteed and may change over time. Using a NAT gateway or other proxy for outbound traffic from a Container App environment is only supported on the workload profile environment. |
-| Internal load balancer IP address | This address only exists in an internal deployment. |
+| Public inbound IP address | Used for application traffic in an external deployment, and management traffic in both internal and external deployments. |
+| Outbound public IP | Used as the "from" IP for outbound connections that leave the virtual network. These connections aren't routed down a VPN. Outbound IPs may change over time. Using a NAT gateway or other proxy for outbound traffic from a Container Apps environment is only supported in a [workload profiles environment](workload-profiles-overview.md). |
+| Internal load balancer IP address | This address only exists in an [internal environment](#accessibility-levels). |
## Subnet
-Virtual network integration depends on a dedicated subnet. How IP addresses are allocated in a subnet and what subnet sizes are supported depends on which plan you're using in Azure Container Apps. Selecting an appropriately sized subnet for the scale of your Container Apps is important as subnet sizes can't be modified post creation in Azure.
-
-- Consumption only environment:
- - /23 is the minimum subnet size required for virtual network integration.
- - Container Apps reserves a minimum of 60 IPs for infrastructure in your VNet, and the amount may increase up to 256 addresses as your container environment scales.
- - As your app scales, a new IP address is allocated for each new replica.
+Virtual network integration depends on a dedicated subnet. How IP addresses are allocated in a subnet and what subnet sizes are supported depends on which [plan](plans.md) you're using in Azure Container Apps.
-- Workload profiles environment:
- - /27 is the minimum subnet size required for virtual network integration.
- - The subnet you're integrating your container app with must be delegated to `Microsoft.App/environments`.
- - 11 IP addresses are automatically reserved for integration with the subnet. When your apps are running on workload profiles, the number of IP addresses required for infrastructure integration doesn't vary based on the scale of your container apps.
- - More IP addresses are allocated depending on your Container App's workload profile:
- - When you're using Consumption workload profiles for your container app, IP address assignment behaves the same as when running on the Consumption only environment. As your app scales, a new IP address is allocated for each new replica.
- - When you're using the Dedicated workload profile for your container app, each node has 1 IP address assigned.
+Select your subnet size carefully. Subnet sizes can't be modified after you create a Container Apps environment.
As a Container Apps environment is created, you provide resource IDs for a single subnet. If you're using the CLI, the parameter to define the subnet resource ID is `infrastructure-subnet-resource-id`. The subnet hosts infrastructure components and user app containers.
-In addition, if you're using the Azure CLI with the Consumption only environment and the [platformReservedCidr](vnet-custom-internal.md#networking-parameters) range is defined, both subnets must not overlap with the IP range defined in `platformReservedCidr`.
+If you're using the Azure CLI with a Consumption only environment and the [platformReservedCidr](vnet-custom-internal.md#networking-parameters) range is defined, both subnets must not overlap with the IP range defined in `platformReservedCidr`.
+
+Different environment types have different subnet requirements:
+
+### Workload profiles environment
+
+- `/27` is the minimum subnet size required for virtual network integration.
+
+- Your subnet must be delegated to `Microsoft.App/environments`.
+
+- Container Apps automatically reserves 11 IP addresses for integration with the subnet. When your apps are running in a workload profiles environment, the number of IP addresses required for infrastructure integration doesn't vary based on the scale demands of the environment. Additional IP addresses are allocated according to the following rules, depending on the type of workload profile you're using:
+
+ - When you're using the [Dedicated workload profile](workload-profiles-overview.md#profile-types) for your container app, each node has one IP address assigned.
+
+ - When you're using the [Consumption workload profile](workload-profiles-overview.md#profile-types), the IP address assignment behaves the same as when running on the [Consumption only environment](environment.md#types). As your app scales, a new IP address is allocated for each new replica.
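To illustrate the delegation requirement above, a sketch (hypothetical names and address space) that creates a `/27` subnet delegated to Container Apps environments:

```azurecli
# Sketch only; hypothetical names. A /27 subnet delegated to Container Apps.
az network vnet subnet create \
  --resource-group my-resource-group \
  --vnet-name my-vnet \
  --name infrastructure-subnet \
  --address-prefixes 10.0.0.0/27 \
  --delegations Microsoft.App/environments
```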
+
+### Consumption only environment
+
+- `/23` is the minimum subnet size required for virtual network integration.
+
+- The Container Apps runtime reserves a minimum of 60 IPs for infrastructure in your VNet. The reserved amount may increase up to 256 addresses as apps in your environment scale.
+
+- As your apps scale, a new IP address is allocated for each new replica.
-### Subnet Address Range Restrictions
+### Subnet address range restrictions
-Subnet address ranges can't overlap with the following ranges reserved by AKS:
+Subnet address ranges can't overlap with the following ranges reserved by Azure Kubernetes Service:
- 169.254.0.0/16
- 172.30.0.0/16
- 172.31.0.0/16
- 192.0.2.0/24
-In addition, Container Apps on the workload profiles environment reserve the following addresses:
+In addition, a workload profiles environment reserves the following addresses:
- 100.100.0.0/17
- 100.100.128.0/19
In addition, Container Apps on the workload profiles environment reserve the fol
## Routes
-User Defined Routes (UDR) and controlled egress through NAT Gateway are supported in the workload profiles environment, which is in preview. In the Consumption only environment, these features aren't supported.
+<a name="udr"></a>
+
+### User defined routes (UDR)
-### User defined routes (UDR) - preview
+User Defined Routes (UDR) and controlled egress through NAT Gateway are supported in the workload profiles environment. In the Consumption only environment, these features aren't supported.
> [!NOTE]
-> When using UDR with Azure Firewall in Azure Container Apps, you will need to add certain FQDN's and service tags to the allowlist for the firewall. To learn more, see [configuring UDR with Azure Firewall](./networking.md#configuring-udr-with-azure-firewallpreview).
+> When using UDR with Azure Firewall in Azure Container Apps, you need to add certain FQDNs and service tags to the allowlist for the firewall. To learn more, see [configuring UDR with Azure Firewall](./networking.md#configuring-udr-with-azure-firewall).
-You can use UDR on the workload profiles architecture to restrict outbound traffic from your container app through Azure Firewall or other network appliances. Configuring UDR is done outside of the Container Apps environment scope. UDR isn't supported for external environments.
+- You can use UDR with workload profiles environments to restrict outbound traffic from your container app through Azure Firewall or other network appliances.
+
+- Configuring UDR is done outside of the Container Apps environment scope.
+
+- UDR isn't supported for external environments.
:::image type="content" source="media/networking/udr-architecture.png" alt-text="Diagram of how UDR is implemented for Container Apps.":::

Azure creates a default route table for your virtual networks upon creation. By implementing a user-defined route table, you can control how traffic is routed within your virtual network. For example, you can create a UDR that routes all traffic to the firewall.
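As a sketch of that pattern (hypothetical names; the firewall's private IP is assumed to be 10.0.1.4), you could create a route table with a default route to the firewall and attach it to the environment's subnet:

```azurecli
# Sketch only; hypothetical names and firewall IP.
az network route-table create \
  --resource-group my-resource-group \
  --name my-route-table

# Send all outbound traffic to the firewall's private IP.
az network route-table route create \
  --resource-group my-resource-group \
  --route-table-name my-route-table \
  --name default-to-firewall \
  --address-prefix 0.0.0.0/0 \
  --next-hop-type VirtualAppliance \
  --next-hop-ip-address 10.0.1.4

# Associate the route table with the environment's infrastructure subnet.
az network vnet subnet update \
  --resource-group my-resource-group \
  --vnet-name my-vnet \
  --name infrastructure-subnet \
  --route-table my-route-table
```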
-#### Configuring UDR with Azure Firewall - preview:
+#### Configuring UDR with Azure Firewall
-UDR is only supported on the workload profiles environment. The following application and network rules must be added to the allowlist for your firewall depending on which resources you're using.
+User defined routes are only supported in a workload profiles environment. The following application and network rules must be added to the allowlist for your firewall depending on which resources you're using.
-> [!Note]
+> [!NOTE]
> For a guide on how to setup UDR with Container Apps to restrict outbound traffic with Azure Firewall, visit the [how to for Container Apps and Azure Firewall](./user-defined-routes.md).
-##### Azure Firewall - Application Rules
+##### Application rules
Application rules allow or deny traffic based on the application layer. The following outbound firewall application rules are required based on scenario.
-| Scenarios | FQDNs | Description |
+| Scenarios | FQDNs | Description |
|--|--|--|
-| All scenarios | *mcr.microsoft.com*, **.data.mcr.microsoft.com* | These FQDNs for Microsoft Container Registry (MCR) are used by Azure Container Apps and either these application rules or the network rules for MCR must be added to the allowlist when using Azure Container Apps with Azure Firewall. |
-| Azure Container Registry (ACR) | *Your-ACR-address*, **.blob.windows.net* | These FQDNs are required when using Azure Container Apps with ACR and Azure Firewall. |
-| Azure Key Vault | *Your-Azure-Key-Vault-address*, *login.microsoft.com* | These FQDNs are required in addition to the service tag required for the network rule for Azure Key Vault. |
-| Docker Hub Registry | *hub.docker.com*, *registry-1.docker.io*, *production.cloudflare.docker.com* | If you're using [Docker Hub registry](https://docs.docker.com/desktop/allow-list/) and want to access it through the firewall, you need to add these FQDNs to the firewall. |
+| All scenarios | `mcr.microsoft.com`, `*.data.mcr.microsoft.com` | These FQDNs for Microsoft Container Registry (MCR) are used by Azure Container Apps and either these application rules or the network rules for MCR must be added to the allowlist when using Azure Container Apps with Azure Firewall. |
+| Azure Container Registry (ACR) | *Your-ACR-address*, `*.blob.core.windows.net`, `login.microsoft.com` | These FQDNs are required when using Azure Container Apps with ACR and Azure Firewall. |
+| Azure Key Vault | *Your-Azure-Key-Vault-address*, `login.microsoft.com` | These FQDNs are required in addition to the service tag required for the network rule for Azure Key Vault. |
+| Managed Identity | `*.identity.azure.net`, `login.microsoftonline.com`, `*.login.microsoftonline.com`, `*.login.microsoft.com` | These FQDNs are required when using managed identity with Azure Firewall in Azure Container Apps. |
+| Docker Hub Registry | `hub.docker.com`, `registry-1.docker.io`, `production.cloudflare.docker.com` | If you're using [Docker Hub registry](https://docs.docker.com/desktop/allow-list/) and want to access it through the firewall, you need to add these FQDNs to the firewall. |
-##### Azure Firewall - Network Rules
+##### Network rules
Network rules allow or deny traffic based on the network and transport layer. The following outbound firewall network rules are required based on scenario.
-| Scenarios | Service Tag | Description |
+| Scenarios | Service Tag | Description |
|--|--|--|
-| All scenarios | *MicrosoftContainerRegistry*, *AzureFrontDoorFirstParty* | These Service Tags for Microsoft Container Registry (MCR) are used by Azure Container Apps and either these network rules or the application rules for MCR must be added to the allowlist when using Azure Container Apps with Azure Firewall. |
-| Azure Container Registry (ACR) | *AzureContainerRegistry* | When using ACR with Azure Container Apps, you'll need to configure these application rules used by Azure Container Registry. |
-| Azure Key Vault | *AzureKeyVault*, *AzureActiveDirectory* | These service tags are required in addition to the FQDN for the application rule for Azure Key Vault. |
+| All scenarios | `MicrosoftContainerRegistry`, `AzureFrontDoorFirstParty` | These Service Tags for Microsoft Container Registry (MCR) are used by Azure Container Apps and either these network rules or the application rules for MCR must be added to the allowlist when using Azure Container Apps with Azure Firewall. |
+| Azure Container Registry (ACR) | `AzureContainerRegistry`, `AzureActiveDirectory` | When using ACR with Azure Container Apps, you need to configure these network rules used by Azure Container Registry. |
+| Azure Key Vault | `AzureKeyVault`, `AzureActiveDirectory` | These service tags are required in addition to the FQDN for the application rule for Azure Key Vault. |
+| Managed Identity | `AzureActiveDirectory` | When using Managed Identity with Azure Container Apps, you need to configure these network rules used by Managed Identity. |
-> [!Note]
+> [!NOTE]
> For Azure resources you are using with Azure Firewall not listed in this article, please refer to the [service tags documentation](../virtual-network/service-tags-overview.md#available-service-tags).
-### NAT gateway integration - preview
+<a name="nat"></a>
+
+### NAT gateway integration
+
+You can use NAT Gateway to simplify outbound connectivity for your outbound internet traffic in your virtual network in a workload profiles environment.
-You can use NAT Gateway to simplify outbound connectivity for your outbound internet traffic in your virtual network on the workload profiles environment. NAT Gateway is used to provide a static public IP address, so when you configure NAT Gateway on your Container Apps subnet, all outbound traffic from your container app is routed through the NAT Gateway's static public IP address.
+When you configure a NAT Gateway on your subnet, the NAT Gateway provides a static public IP address for your environment. All outbound traffic from your container app is routed through the NAT Gateway's static public IP address.
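A sketch of the setup (hypothetical names): create a standard public IP and NAT gateway, then associate the gateway with the environment's subnet:

```azurecli
# Sketch only; hypothetical names.
az network public-ip create \
  --resource-group my-resource-group \
  --name my-nat-ip \
  --sku Standard

az network nat gateway create \
  --resource-group my-resource-group \
  --name my-nat-gateway \
  --public-ip-addresses my-nat-ip

# Route the subnet's outbound traffic through the NAT gateway.
az network vnet subnet update \
  --resource-group my-resource-group \
  --vnet-name my-vnet \
  --name infrastructure-subnet \
  --nat-gateway my-nat-gateway
```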
-### Lock down your Container App environment
+### Environment security
:::image type="content" source="media/networking/locked-down-network.png" alt-text="Diagram of how to fully lock down your network for Container Apps.":::
-With the workload profiles environment (preview), you can fully secure your ingress/egress networking traffic. To do so, you should use the following features:
-- Create your internal container app environment on the workload profiles environment. For steps, see [here](./workload-profiles-manage-cli.md).
-- Integrate your Container Apps with an Application Gateway. For steps, see [here](./waf-app-gateway.md).
-- Configure UDR to route all traffic through Azure Firewall. For steps, see [here](./user-defined-routes.md).
+You can fully secure your ingress and egress networking traffic in a workload profiles environment by taking the following actions:
+
+- Create your internal container app environment in a workload profiles environment. For steps, refer to [Manage workload profiles with the Azure CLI](./workload-profiles-manage-cli.md#create).
+
+- Integrate your Container Apps with an [Application Gateway](./waf-app-gateway.md).
-## <a name="mtls"></a> Environment level network encryption - preview
+- Configure UDR to route all traffic through [Azure Firewall](./user-defined-routes.md).
-Azure Container Apps supports environment level network encryption using mutual transport layer security (mTLS). When end-to-end encryption is required, mTLS will encrypt data transmitted between applications within an environment. Applications within a Container Apps environment are automatically authenticated. However, Container Apps currently does not support authorization for access control between applications using the built-in mTLS.
+## <a name="mtls"></a> Environment level network encryption (preview)
-When your apps are communicating with a client outside of the environment, two-way authentication with mTLS is supported, to learn more see [configure client certificates](client-certificate-authorization.md).
+Azure Container Apps supports environment level network encryption using mutual transport layer security (mTLS). When end-to-end encryption is required, mTLS encrypts data transmitted between applications within an environment.
+
+Applications within a Container Apps environment are automatically authenticated. However, the Container Apps runtime doesn't support authorization for access control between applications using the built-in mTLS.
+
+When your apps are communicating with a client outside of the environment, two-way authentication with mTLS is supported. To learn more, see [configure client certificates](client-certificate-authorization.md).
> [!NOTE]
> Enabling mTLS for your applications may increase response latency and reduce maximum throughput in high-load scenarios.
When your apps are communicating with a client outside of the environment, two-w
You can enable mTLS using the following commands.

On create:
+
```azurecli
az containerapp env create \
    --name <environment-name> \
az containerapp env create \
```

For an existing container app:
+
```azurecli
az containerapp env update \
    --name <environment-name> \
You can enable mTLS in the ARM template for Container Apps environments using th
    ...
}
```

+
## DNS

-- **Custom DNS**: If your VNet uses a custom DNS server instead of the default Azure-provided DNS server, configure your DNS server to forward unresolved DNS queries to `168.63.129.16`. [Azure recursive resolvers](../virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances.md#name-resolution-that-uses-your-own-dns-server) uses this IP address to resolve requests. When configuring your NSG or Firewall, don't block the `168.63.129.16` address, otherwise, your Container Apps environment won't function.
+- **Custom DNS**: If your VNet uses a custom DNS server instead of the default Azure-provided DNS server, configure your DNS server to forward unresolved DNS queries to `168.63.129.16`. [Azure recursive resolvers](../virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances.md#name-resolution-that-uses-your-own-dns-server) uses this IP address to resolve requests. When configuring your NSG or firewall, don't block the `168.63.129.16` address, otherwise, your Container Apps environment won't function correctly.
-- **VNet-scope ingress**: If you plan to use VNet-scope [ingress](ingress-overview.md) in an internal Container Apps environment, configure your domains in one of the following ways:
+- **VNet-scope ingress**: If you plan to use VNet-scope [ingress](ingress-overview.md) in an internal environment, configure your domains in one of the following ways:
   1. **Non-custom domains**: If you don't plan to use custom domains, create a private DNS zone that resolves the Container Apps environment's default domain to the static IP address of the Container Apps environment. You can use [Azure Private DNS](../dns/private-dns-overview.md) or your own DNS server. If you use Azure Private DNS, create a Private DNS Zone named as the Container App environment's default domain (`<UNIQUE_IDENTIFIER>.<REGION_NAME>.azurecontainerapps.io`), with an `A` record. The A record contains the name `*<DNS Suffix>` and the static IP address of the Container Apps environment.
+ 1. **Non-custom domains**: If you don't plan to use a custom domain, create a private DNS zone that resolves the Container Apps environment's default domain to the static IP address of the Container Apps environment. You can use [Azure Private DNS](../dns/private-dns-overview.md) or your own DNS server. If you use Azure Private DNS, create a private DNS Zone named as the Container App environment's default domain (`<UNIQUE_IDENTIFIER>.<REGION_NAME>.azurecontainerapps.io`), with an `A` record. The `A` record contains the name `*<DNS Suffix>` and the static IP address of the Container Apps environment.
- 1. **Custom domains**: If you plan to use custom domains, use a publicly resolvable domain to [add a custom domain and certificate](./custom-domains-certificates.md#add-a-custom-domain-and-certificate) to the container app. Additionally, create a private DNS zone that resolves the apex domain to the static IP address of the Container Apps environment. You can use [Azure Private DNS](../dns/private-dns-overview.md) or your own DNS server. If you use Azure Private DNS, create a Private DNS Zone named as the apex domain, with an `A` record that points to the static IP address of the Container Apps environment.
+ 1. **Custom domains**: If you plan to use custom domains and are using an external Container Apps environment, use a publicly resolvable domain to [add a custom domain and certificate](./custom-domains-certificates.md#add-a-custom-domain-and-certificate) to the container app. If you are using an internal Container Apps environment, there is no validation for the DNS binding, as the cluster can only be accessed from within the virtual network. Additionally, create a private DNS zone that resolves the apex domain to the static IP address of the Container Apps environment. You can use [Azure Private DNS](../dns/private-dns-overview.md) or your own DNS server. If you use Azure Private DNS, create a Private DNS Zone named as the apex domain, with an `A` record that points to the static IP address of the Container Apps environment.
-The static IP address of the Container Apps environment can be found in the Azure portal in **Custom DNS suffix** of the container app page or using the Azure CLI `az containerapp env list` command.
+The static IP address of the Container Apps environment is available in the Azure portal in **Custom DNS suffix** of the container app page or using the Azure CLI `az containerapp env list` command.
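For the non-custom domain case, a sketch (hypothetical names; the zone name and static IP come from your environment) of creating the private DNS zone with Azure Private DNS:

```azurecli
# Sketch only; hypothetical names. Look up the environment's static IP first.
az containerapp env show \
  --name my-environment \
  --resource-group my-resource-group \
  --query properties.staticIp

# Create a private DNS zone named after the environment's default domain.
az network private-dns zone create \
  --resource-group my-resource-group \
  --name <UNIQUE_IDENTIFIER>.<REGION_NAME>.azurecontainerapps.io

# Link the zone to the VNet so resources in it can resolve the name.
az network private-dns link vnet create \
  --resource-group my-resource-group \
  --zone-name <UNIQUE_IDENTIFIER>.<REGION_NAME>.azurecontainerapps.io \
  --name my-dns-link \
  --virtual-network my-vnet \
  --registration-enabled false

# Add a wildcard A record pointing at the environment's static IP.
az network private-dns record-set a add-record \
  --resource-group my-resource-group \
  --zone-name <UNIQUE_IDENTIFIER>.<REGION_NAME>.azurecontainerapps.io \
  --record-set-name "*" \
  --ipv4-address <ENVIRONMENT_STATIC_IP>
```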
## Managed resources
-When you deploy an internal or an external environment into your own network, a new resource group is created in the Azure subscription where your environment is hosted. This resource group contains infrastructure components managed by the Azure Container Apps platform, and it shouldn't be modified.
+When you deploy an internal or an external environment into your own network, a new resource group is created in the Azure subscription where your environment is hosted. This resource group contains infrastructure components managed by the Azure Container Apps platform. Don't modify the services in this group or the resource group itself.
+
+### Workload profiles environment
+
+The name of the resource group created in the Azure subscription where your environment is hosted is prefixed with `ME_` by default, and the resource group name *can* be customized as you create your container app environment.
+
+For external environments, the resource group contains a public IP address used specifically for inbound connectivity to your external environment and a load balancer. For internal environments, the resource group only contains a [Load Balancer](https://azure.microsoft.com/pricing/details/load-balancer/).
+
+In addition to the standard [Azure Container Apps billing](./billing.md), you're billed for:
+
+- One standard static [public IP](https://azure.microsoft.com/pricing/details/ip-addresses/) for egress if using an internal or external environment, plus one standard static [public IP](https://azure.microsoft.com/pricing/details/ip-addresses/) for ingress if using an external environment. If you need more public IPs for egress due to SNAT issues, [open a support ticket to request an override](https://azure.microsoft.com/support/create-ticket/).
+
+- One standard [load balancer](https://azure.microsoft.com/pricing/details/load-balancer/).
-#### Consumption only environment
-The name of the resource group created in the Azure subscription where your environment is hosted is prefixed with `MC_` by default, and the resource group name *can't* be customized during container app creation. The resource group contains Public IP addresses used specifically for outbound connectivity from your environment and a load balancer.
+- The cost of data processed (in GBs) includes both ingress and egress for management operations.
-In addition to the [Azure Container Apps billing](./billing.md), you're billed for:
+### Consumption only environment
-- One standard static [public IP](https://azure.microsoft.com/pricing/details/ip-addresses/) for egress. If you need more IPs for egress due to SNAT issues, [open a support ticket to request an override](https://azure.microsoft.com/support/create-ticket/).
+The name of the resource group created in the Azure subscription where your environment is hosted is prefixed with `MC_` by default, and the resource group name *can't* be customized when you create a container app. The resource group contains public IP addresses used specifically for outbound connectivity from your environment and a load balancer.
-- Two standard [Load Balancers](https://azure.microsoft.com/pricing/details/load-balancer/) if using an internal environment, or one standard [Load Balancer](https://azure.microsoft.com/pricing/details/load-balancer/) if using an external environment. Each load balancer has fewer than six rules. The cost of data processed (GB) includes both ingress and egress for management operations.
+In addition to the standard [Azure Container Apps billing](./billing.md), you're billed for:
-#### Workload profiles environment
-The name of the resource group created in the Azure subscription where your environment is hosted is prefixed with `ME_` by default, and the resource group name *can* be customized during container app environment creation. For external environments, the resource group contains a public IP address used specifically for inbound connectivity to your external environment and a load balancer. For internal environments, the resource group only contains a Load Balancer.
+- One standard static [public IP](https://azure.microsoft.com/pricing/details/ip-addresses/) for egress. If you need more IPs for egress due to Source Network Address Translation (SNAT) issues, [open a support ticket to request an override](https://azure.microsoft.com/support/create-ticket/).
-In addition to the [Azure Container Apps billing](./billing.md), you're billed for:
-- One standard static [public IP](https://azure.microsoft.com/pricing/details/ip-addresses/) for ingress in external environments and one standard [Load Balancer](https://azure.microsoft.com/pricing/details/load-balancer/).
-- The cost of data processed (GB) includes both ingress and egress for management operations.
+- Two standard [load balancers](https://azure.microsoft.com/pricing/details/load-balancer/) if using an internal environment, or one standard [load balancer](https://azure.microsoft.com/pricing/details/load-balancer/) if using an external environment. Each load balancer has fewer than six rules. The cost of data processed (in GBs) includes both ingress and egress for management operations.
## Next steps
container-apps Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/overview.md
Azure Container Apps is a fully managed environment that enables you to run microservices and containerized applications on a serverless platform. Common uses of Azure Container Apps include:

- Deploying API endpoints
-- Hosting background processing applications
+- Hosting background processing jobs
- Handling event-driven processing
- Running microservices
With Azure Container Apps, you can:
- [**Build microservices with Dapr**](microservices.md) and [access its rich set of APIs](./dapr-overview.md).
+- [**Run jobs**](jobs.md) on-demand, on a schedule, or based on events.
+
- Add [**Azure Functions**](https://aka.ms/functionsonaca) and [**Azure Spring Apps**](https://aka.ms/asaonaca) to your Azure Container Apps environment.

- [**Use specialized hardware**](plans.md) for access to increased compute resources.
container-apps Plans https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/plans.md
Previously updated : 03/28/2023 Last updated : 08/29/2023
Azure Container Apps features two different plan types.
-| Plan type | Description | In Preview |
-|--|--|--|
-| [Consumption](#consumption-plan) | Serverless environment with support for scale-to-zero and pay only for resources your apps use. | No |
-| [Consumption + Dedicated plan structures (preview)](#consumption-dedicated) | Fully managed environment with support for scale-to-zero and pay only for resources your apps use. Optionally, run apps with customized hardware and increased cost predictability using Dedicated workload profiles. | Yes |
-
-## Consumption plan
-
-The Consumption plan features a serverless architecture that allows your applications to scale in and out on demand. Applications can scale to zero, and you only pay for running apps.
-
-Use the Consumption plan when you don't have specific hardware requirements for your container app.
+| Plan type | Description |
+|--|--|
+| [Dedicated](#dedicated) | Fully managed environment with support for scale-to-zero and pay only for resources your apps use. Optionally, run apps with customized hardware and increased cost predictability using Dedicated [workload profiles environment](environment.md#types). |
+| [Consumption](#consumption) | Serverless environment with support for scale-to-zero and pay only for resources your apps use. |
<a id="consumption-dedicated"></a>
-## Consumption + Dedicated plan structure (preview)
-
-The Consumption + Dedicated plan structure consists of a serverless plan that allows your applications to scale in and out on demand. Applications can scale to zero, and you only pay for running apps. It also consists of a fully managed plan you can optionally use that provides dedicated, customized hardware to run your apps on.
+## Dedicated
-You can select from general purpose and memory optimized [workflow profiles](workload-profiles-overview.md) that provide larger amounts of CPU and memory. You pay per node, versus per app, and workload profile can scale in and out as demand changes.
+The Dedicated plan consists of a series of workload profiles that range from the default consumption profile to profiles that feature dedicated hardware customized for specialized compute needs.
-Use the Consumption + Dedicated plan structure when you need any of the following in a single environment:
+You can select from general purpose and memory optimized [workload profiles](workload-profiles-overview.md) that provide larger amounts of CPU and memory. You pay per instance of the workload profile, versus per app, and workload profiles can scale in and out as demand rises and falls.
-- **Consumption usage**: Use of the Consumption plan to run apps that need to scale to zero that don't have specific hardware requirements.
+Use the Dedicated plan when you need any of the following in a single environment:
-- **Secure outbound traffic**: You can create environments with no public inbound access, and customize the outbound network path from environments to use firewalls or other network appliances.
+- **Secure outbound traffic**: You can assign a single outbound network path to systems protected by firewalls or other network appliances.
-Use the Dedicated plan within the Consumption + Dedicated plan structure when you need any of the following features:
+- **Environment isolation**: Dedicated workload profiles provide access to dedicated hardware with a single tenant guarantee.
-- **Environment isolation**: Use of the Dedicated workload profiles provides apps with dedicated hardware with a single tenant guarantee.
+- **Customized compute**: Select from many types and sizes of workload profiles based on your app's requirements. You can deploy many apps to each workload profile. Each workload profile can scale independently as more apps are added or removed or as apps scale their replicas up or down.
-- **Customized compute**: Select from many types and sizes of Dedicated workload profiles based on your apps requirements. You can deploy many apps to each workload profile. Each workload profile can scale independently as more apps are added or removed or as apps scale their replicas up or down.
-- **Cost control**: Traditional serverless compute options optimize for scale in response to events and may not provide cost control options. With Dedicated workload profiles, you can set minimum and maximum scaling to help you better control costs.
+- **Cost control**: Traditional serverless compute options optimize for scale in response to events and may not provide cost control options. Dedicated workload profiles let you set minimum and maximum scaling to help you better control costs.
- The Consumption + Dedicated plan structure can be more cost effective when you're running higher scale deployments with steady throughput.
+ The Dedicated plan can be more cost effective when you're running higher scale deployments with steady throughput.
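For example, a sketch (hypothetical names; available profile types and sizes vary by region) that adds a dedicated profile with explicit node bounds to an existing environment:

```azurecli
# Sketch only; hypothetical names. Adds a D4 profile that scales between 1 and 3 nodes.
az containerapp env workload-profile add \
  --name my-environment \
  --resource-group my-resource-group \
  --workload-profile-name my-d4-profile \
  --workload-profile-type D4 \
  --min-nodes 1 \
  --max-nodes 3
```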
> [!NOTE]
> When configuring your cluster with a user defined route for egress, you must explicitly send egress traffic to a network virtual appliance such as Azure Firewall.
+## Consumption
+
+The Consumption plan features a serverless architecture that allows your applications to scale in and out on demand. Applications can scale to zero, and you only pay for running apps.
+
+Use the Consumption plan when you don't have specific hardware requirements for your container app.
+
## Next steps

Deploy an app with:

- [Consumption plan](quickstart-portal.md)
-- [Consumption + Dedicated plan structure](workload-profiles-manage-cli.md)
+- [Dedicated plan](workload-profiles-manage-cli.md)
container-apps Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/policy-reference.md
Title: Built-in policy definitions for Azure Container Apps
description: Lists Azure Policy built-in policy definitions for Azure Container Apps. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2023 Last updated : 08/30/2023
container-apps Quotas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/quotas.md
The *Is Configurable* column in the following tables denotes a feature maximum m
| Feature | Scope | Default | Is Configurable | Remarks |
|--|--|--|--|--|
| Environments | Region | Up to 15 | Yes | Limit up to 15 environments per subscription, per region. |
-| Environments | Global | Up to 20 | Yes | Limit up to 20 environments per subscription accross all regions |
+| Environments | Global | Up to 20 | Yes | Limit up to 20 environments per subscription across all regions |
| Container Apps | Environment | Unlimited | n/a | |
| Revisions | Container app | 100 | No | |
| Replicas | Revision | 300 | Yes | |
The *Is Configurable* column in the following tables denotes a feature maximum m
| Cores | Replica | 2 | No | Maximum number of cores available to a revision replica. |
| Cores | Environment | 100 | Yes | Maximum number of cores an environment can accommodate. Calculated by the sum of cores requested by each active replica of all revisions in an environment. |
-## Consumption + Dedicated plan structure
+## Workload profiles environments
### Consumption workload profile

| Feature | Scope | Default | Is Configurable | Remarks |
|--|--|--|--|--|
| Cores | Replica | 4 | No | Maximum number of cores available to a revision replica. |
-| Cores | Environment | 100 | Yes | Maximum number of cores the Consumption workload profile in a Consumption + Dedicated plan structure environment can accommodate. Calculated by the sum of cores requested by each active replica of all revisions in an environment. |
+| Cores | Environment | 100 | Yes | Maximum number of cores the Consumption workload profile in a Dedicated plan environment can accommodate. Calculated by the sum of cores requested by each active replica of all revisions in an environment. |
### Dedicated workload profiles

| Feature | Scope | Default | Is Configurable | Remarks |
|--|--|--|--|--|
| Cores | Replica | Up to maximum cores a workload profile supports | No | Maximum number of cores available to a revision replica. |
-| Cores | Environment | 100 | Yes | Maximum number of cores all Dedicated workload profiles in a Consumption + Dedicated plan structure environment can accommodate. Calculated by the sum of cores available in each node of all workload profile in a Consumption + Dedicated plan structure environment. |
+| Cores | Environment | 100 | Yes | Maximum number of cores all Dedicated workload profiles in a Dedicated plan environment can accommodate. Calculated by the sum of cores available in each node of all workload profiles in a Dedicated plan environment. |
For more information regarding quotas, see the [Quotas roadmap](https://github.com/microsoft/azure-container-apps/issues/503) in the Azure Container Apps GitHub repository.
container-apps Revisions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/revisions.md
Azure Container Apps implements container app versioning by creating revisions.
:::image type="content" source="media/revisions/azure-container-apps-revisions.png" alt-text="Azure Container Apps: Containers":::
+> [!NOTE]
+> [Azure Container Apps jobs](jobs.md) don't have revisions. Each job execution uses the latest configuration of the job.
+
## Use cases

Container Apps revisions help you manage the release of updates to your container app by creating a new revision each time you make a *revision-scope* change to your app. You can control which revisions are active, and the external traffic that is routed to each active revision.
container-apps Start https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/start.md
+
+ Title: Get started with Azure Container Apps
+description: First steps in working with Azure Container Apps
++++ Last updated : 08/30/2023+++
+# Get started with Azure Container Apps
+
+Azure Container Apps is a fully managed environment that enables you to run containerized applications and microservices on a serverless platform. Container Apps simplifies the process of deploying, running, and scaling applications packaged as containers in the Azure ecosystem.
+
+Get started with Azure Container Apps by exploring the following resources.
+
+## Resources
+
+| Action | Resource |
+|||
+| Deploy your first container app | • [From an existing image](quickstart-portal.md)<br>• [From code on your machine](deploy-visual-studio-code.md) |
+| Define scale rules | • [Scale an app](scale-app.md) |
+| Set up ingress | • [Set up ingress](ingress-how-to.md) |
+| Add a custom domain | • [With a free certificate](custom-domains-managed-certificates.md)<br>• [With an existing certificate](custom-domains-certificates.md) |
+| Run tasks for a finite duration | • [Create a job](jobs-get-started-cli.md) |
+| Review best practices | • [Custom virtual network](vnet-custom.md)<br>• [Enable authentication](authentication.md)<br>• [Manage revisions](revisions-manage.md) |
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Deploy your first container app](get-started.md)
container-apps Tutorial Ci Cd Runners Jobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/tutorial-ci-cd-runners-jobs.md
In this tutorial, you learn how to run Azure Pipelines agents as an [event-drive
- **Azure DevOps organization**: If you don't have a DevOps organization with an active subscription, you [can create one for free](https://azure.microsoft.com/services/devops/).

::: zone-end
-Refer to [jobs preview limitations](jobs.md#jobs-preview-restrictions) for a list of limitations.
+Refer to [jobs restrictions](jobs.md#jobs-restrictions) for a list of limitations.
## Setup
az containerapp job create -n "$JOB_NAME" -g "$RESOURCE_GROUP" --environment "$E
# [PowerShell](#tab/powershell)

```powershell
-az containerapp job create -n "$JOB_NAME" -g "$RESOURCE_GROUP" --environment "$ENVIRONMENT" \
- --trigger-type Event \
- --replica-timeout 1800 \
- --replica-retry-limit 1 \
- --replica-completion-count 1 \
- --parallelism 1 \
- --image "$CONTAINER_REGISTRY_NAME.azurecr.io/$CONTAINER_IMAGE_NAME" \
- --min-executions 0 \
- --max-executions 10 \
- --polling-interval 30 \
- --scale-rule-name "azure-pipelines" \
- --scale-rule-type "azure-pipelines" \
- --scale-rule-metadata "poolName=container-apps" "targetPipelinesQueueLength=1" \
- --scale-rule-auth "personalAccessToken=personal-access-token" "organizationURL=organization-url" \
- --cpu "2.0" \
- --memory "4Gi" \
- --secrets "personal-access-token=$AZP_TOKEN" "organization-url=$ORGANIZATION_URL" \
- --env-vars "AZP_TOKEN=secretref:personal-access-token" "AZP_URL=secretref:organization-url" "AZP_POOL=$AZP_POOL" \
+az containerapp job create -n "$JOB_NAME" -g "$RESOURCE_GROUP" --environment "$ENVIRONMENT" `
+ --trigger-type Event `
+ --replica-timeout 1800 `
+ --replica-retry-limit 1 `
+ --replica-completion-count 1 `
+ --parallelism 1 `
+ --image "$CONTAINER_REGISTRY_NAME.azurecr.io/$CONTAINER_IMAGE_NAME" `
+ --min-executions 0 `
+ --max-executions 10 `
+ --polling-interval 30 `
+ --scale-rule-name "azure-pipelines" `
+ --scale-rule-type "azure-pipelines" `
+ --scale-rule-metadata "poolName=container-apps" "targetPipelinesQueueLength=1" `
+ --scale-rule-auth "personalAccessToken=personal-access-token" "organizationURL=organization-url" `
+ --cpu "2.0" `
+ --memory "4Gi" `
+ --secrets "personal-access-token=$AZP_TOKEN" "organization-url=$ORGANIZATION_URL" `
+ --env-vars "AZP_TOKEN=secretref:personal-access-token" "AZP_URL=secretref:organization-url" "AZP_POOL=$AZP_POOL" `
    --registry-server "$CONTAINER_REGISTRY_NAME.azurecr.io"
```
container-apps Tutorial Scaling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/tutorial-scaling.md
+
+ Title: 'Tutorial: Scale an Azure Container Apps application'
+description: Scale an Azure Container Apps application using the Azure CLI.
++++ Last updated : 08/02/2023++
+ms.devlang: azurecli
++
+# Tutorial: Scale a container app
+
+Azure Container Apps manages automatic horizontal scaling through a set of declarative scaling rules. As a container app scales out, new instances of the container app are created on-demand. These instances are known as replicas.
+
+In this tutorial, you add an HTTP scale rule to your container app and observe how your application scales.
+
+## Prerequisites
+
+| Requirement | Instructions |
+|--|--|
+| Azure account | If you don't have an Azure account, you can [create one for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). <br><br>You need the *Contributor* permission on the Azure subscription to proceed. Refer to [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md?tabs=current) for details. |
+| GitHub Account | Get one for [free](https://github.com/join). |
+| Azure CLI | Install the [Azure CLI](/cli/azure/install-azure-cli). |
+
+## Setup
+
+Run the following command and follow the prompts to sign in to Azure from the CLI and complete the authentication process.
+
+# [Bash](#tab/bash)
+
+```azurecli
+az login
+```
+
+# [Azure PowerShell](#tab/azure-powershell)
+
+```azurepowershell
+az login
+```
+++
+Ensure you're running the latest version of the CLI via the `az upgrade` command.
+
+# [Bash](#tab/bash)
+
+```azurecli
+az upgrade
+```
+
+# [Azure PowerShell](#tab/azure-powershell)
+
+```azurepowershell
+az upgrade
+```
+++
+Install or update the Azure Container Apps extension for the CLI.
+
+# [Bash](#tab/bash)
+
+```azurecli
+az extension add --name containerapp --upgrade
+```
+
+# [Azure PowerShell](#tab/azure-powershell)
++
+```azurepowershell
+az extension add --name containerapp --upgrade
+```
+++
+Register the `Microsoft.App` and `Microsoft.OperationalInsights` namespaces if you haven't already registered them in your Azure subscription.
+
+# [Bash](#tab/bash)
+
+```azurecli
+az provider register --namespace Microsoft.App
+```
+
+```azurecli
+az provider register --namespace Microsoft.OperationalInsights
+```
+
+# [Azure PowerShell](#tab/azure-powershell)
+
+```azurepowershell
+az provider register --namespace Microsoft.App
+```
+
+```azurepowershell
+az provider register --namespace Microsoft.OperationalInsights
+```
+++
+## Create and deploy the container app
+
+Create and deploy your container app with the `containerapp up` command. This command creates a:
+
+- Resource group
+- Container Apps environment
+- Log Analytics workspace
+
+If any of these resources already exist, the command uses the existing resources rather than creating new ones.
+
+Lastly, the command creates and deploys the container app using a public container image.
+
+# [Bash](#tab/bash)
+
+```azurecli
+az containerapp up \
+ --name my-container-app \
+ --resource-group my-container-apps \
+ --location centralus \
+ --environment 'my-container-apps' \
+ --image mcr.microsoft.com/azuredocs/containerapps-helloworld:latest \
+ --target-port 80 \
+ --ingress external \
+  --query properties.configuration.ingress.fqdn
+```
+
+# [Azure PowerShell](#tab/azure-powershell)
+
+```powershell
+az containerapp up `
+ --name my-container-app `
+ --resource-group my-container-apps `
+ --location centralus `
+ --environment my-container-apps `
+ --image mcr.microsoft.com/azuredocs/containerapps-helloworld:latest `
+ --target-port 80 `
+ --ingress external `
+  --query properties.configuration.ingress.fqdn
+```
+++
+> [!NOTE]
+> Make sure the value for the `--image` parameter is in lower case.
+
+By setting `--ingress` to `external`, you make the container app available to public requests.
+
+The `up` command returns the fully qualified domain name (FQDN) for the container app. Copy this FQDN to a text file. You'll use it in the [Send requests](#send-requests) section. Your FQDN looks like the following example:
+
+```text
+https://my-container-app.icydune-96848328.centralus.azurecontainerapps.io
+```
+
+## Add scale rule
+
+Add an HTTP scale rule to your container app by running the `az containerapp update` command.
+
+# [Bash](#tab/bash)
+
+```azurecli
+az containerapp update \
+ --name my-container-app \
+ --resource-group my-container-apps \
+ --scale-rule-name my-http-scale-rule \
+ --scale-rule-http-concurrency 1
+```
+
+# [Azure PowerShell](#tab/azure-powershell)
+
+```powershell
+az containerapp update `
+ --name my-container-app `
+ --resource-group my-container-apps `
+ --scale-rule-name my-http-scale-rule `
+ --scale-rule-http-concurrency 1
+```
+++
+This command adds an HTTP scale rule to your container app with the name `my-http-scale-rule` and a concurrency setting of `1`. If your app receives more than one concurrent HTTP request, the runtime creates replicas of your app to handle the requests.
+
+The `update` command returns the new configuration as a JSON response so you can verify that your request succeeded.
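+
+To double-check that the rule landed in the app's configuration, one option (a sketch using `az containerapp show` to query the scale section of the app template) is:
+
+```azurecli
+az containerapp show \
+  --name my-container-app \
+  --resource-group my-container-apps \
+  --query properties.template.scale
+```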
+
+## Start log output
+
+You can observe the effects of your application scaling by viewing the logs generated by the Container Apps runtime. Use the `az containerapp logs show` command to start listening for log entries.
+
+# [Bash](#tab/bash)
+
+```azurecli
+az containerapp logs show \
+ --name my-container-app \
+ --resource-group my-container-apps \
+ --type=system \
+ --follow=true
+```
+
+# [Azure PowerShell](#tab/azure-powershell)
+
+```powershell
+az containerapp logs show `
+ --name my-container-app `
+ --resource-group my-container-apps `
+ --type=system `
+ --follow=true
+```
+++
+The `show` command returns entries from the system logs for your container app in real time. You can expect a response like the following example:
+
+```json
+{
+ "TimeStamp":"2023-08-01T16:49:03.02752",
+ "Log":"Connecting to the container 'my-container-app'..."
+}
+{
+ "TimeStamp":"2023-08-01T16:49:03.04437",
+ "Log":"Successfully Connected to container:
+ 'my-container-app' [Revision: 'my-container-app--9uj51l6',
+ Replica: 'my-container-app--9uj51l6-5f96557ffb-5khg9']"
+}
+{
+ "TimeStamp":"2023-08-01T16:47:31.9480811+00:00",
+ "Log":"Microsoft.Hosting.Lifetime[14]"
+}
+{
+ "TimeStamp":"2023-08-01T16:47:31.9481264+00:00",
+ "Log":"Now listening on: http://[::]:3500"
+}
+{
+ "TimeStamp":"2023-08-01T16:47:31.9490917+00:00",
+ "Log":"Microsoft.Hosting.Lifetime[0]"
+}
+{
+ "TimeStamp":"2023-08-01T16:47:31.9491036+00:00",
+ "Log":"Application started. Press Ctrl+C to shut down."
+}
+{
+ "TimeStamp":"2023-08-01T16:47:31.949723+00:00",
+ "Log":"Microsoft.Hosting.Lifetime[0]"
+}
+{
+ "TimeStamp":"2023-08-01T16:47:31.9497292+00:00",
+ "Log":"Hosting environment: Production"
+}
+{
+ "TimeStamp":"2023-08-01T16:47:31.9497325+00:00",
+ "Log":"Microsoft.Hosting.Lifetime[0]"
+}
+{
+ "TimeStamp":"2023-08-01T16:47:31.9497367+00:00",
+ "Log":"Content root path: /app/"
+}
+```
+
+For more information, see [az containerapp logs](/cli/azure/containerapp/logs).
+
+## Send requests
+
+# [Bash](#tab/bash)
+
+Open a new bash shell. Run the following command, replacing `<YOUR_CONTAINER_APP_FQDN>` with the fully qualified domain name for your container app that you saved from the [Create and deploy the container app](#create-and-deploy-the-container-app) section.
+
+```bash
+seq 1 50 | xargs -Iname -P10 curl "<YOUR_CONTAINER_APP_FQDN>"
+```
+
+This command sends 50 requests to your container app in concurrent batches of 10 requests each.
+
+| Command or argument | Description |
+|||
+| `seq 1 50` | Generates a sequence of numbers from 1 to 50. |
+| `|` | The pipe operator sends the sequence to the `xargs` command. |
+| `xargs` | Runs `curl` for each value in the sequence. |
+| `-Iname` | Defines `name` as a placeholder for each value from `seq`. Because the placeholder isn't used in the `curl` command, the values are discarded rather than appended to it. |
+| `curl` | Calls the given URL. |
+| `-P10` | Instructs `xargs` to run up to 10 processes at a time. |
+
+For more information, see the documentation for:
+- [seq](https://www.man7.org/linux/man-pages/man1/seq.1.html)
+- [xargs](https://www.man7.org/linux/man-pages/man1/xargs.1.html)
+- [curl](https://www.man7.org/linux/man-pages/man1/curl.1.html)
+
+# [Azure PowerShell](#tab/azure-powershell)
+
+Open a new command prompt and enter PowerShell. Run the following commands, replacing `<YOUR_CONTAINER_APP_FQDN>` with the fully qualified domain name for your container app that you saved from the [Create and deploy the container app](#create-and-deploy-the-container-app) section.
+
+```powershell
+$url="<YOUR_CONTAINER_APP_FQDN>"
+$Runspace = [runspacefactory]::CreateRunspacePool(1,10)
+$Runspace.Open()
+1..50 | % {
+ $ps = [powershell]::Create()
+ $ps.RunspacePool = $Runspace
+ [void]$ps.AddCommand("Invoke-WebRequest").AddParameter("UseBasicParsing",$true).AddParameter("Uri",$url)
+ [void]$ps.BeginInvoke()
+}
+```
+
+These commands send 50 requests to your container app in asynchronous batches of 10 requests each.
+
+| Command or argument | Description |
+|||
+| `[runspacefactory]::CreateRunspacePool(1,10)` | Creates a `RunspacePool` that allows up to 10 runspaces to run concurrently. |
+| `1..50 \| % { }` | Runs the code enclosed in the curly braces 50 times. |
+| `$ps = [powershell]::Create()` | Creates a new PowerShell instance. |
+| `$ps.RunspacePool = $Runspace` | Tells the PowerShell instance to run in the `RunspacePool`. |
+| `[void]$ps.AddCommand("Invoke-WebRequest")` | Adds the `Invoke-WebRequest` command to the PowerShell instance. |
+| `.AddParameter("UseBasicParsing", $true)` | Uses basic parsing of the response, which avoids the Internet Explorer dependency. |
+| `.AddParameter("Uri", $url)` | Sets the request URI to your container app's FQDN. |
+| `[void]$ps.BeginInvoke()` | Tells the PowerShell instance to run asynchronously. |
+
+For more information, see [Beginning Use of PowerShell Runspaces](https://devblogs.microsoft.com/scripting/beginning-use-of-powershell-runspaces-part-3/)
+++
+In the first shell, where you ran the `az containerapp logs show` command, the output now contains one or more log entries like the following.
+
+```json
+{
+ "TimeStamp":"2023-08-01 18:09:52 +0000 UTC",
+ "Type":"Normal",
+ "ContainerAppName":"my-container-app",
+ "RevisionName":"my-container-app--9uj51l6",
+ "ReplicaName":"my-container-app--9uj51l6-5f96557ffb-f795d",
+ "Msg":"Replica 'my-container-app--9uj51l6-5f96557ffb-f795d' has been scheduled to run on a node.",
+ "Reason":"AssigningReplica",
+ "EventSource":"ContainerAppController",
+ "Count":0
+}
+```
+
+## View scaling in Azure portal (optional)
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. In the **Search** bar at the top, enter **my-container-app**.
+1. In the search results, under *Resources*, select **my-container-app**.
+1. In the navigation bar at the left, expand **Application** and select **Scale and replicas**.
+1. In the *Scale and Replicas* page, select **Replicas**.
+1. Your container app now has more than one replica running.
++
+You may need to select **Refresh** to see the new replicas.
+
+1. In the navigation bar at the left, expand **Monitoring** and select **Metrics**.
+1. In the *Metrics* page, set **Metric** to **Requests**.
+1. Select **Apply splitting**.
+1. Expand the **Values** drop-down and check **Replica**.
+1. Select the blue checkmark icon to finish editing the splitting.
+1. The graph shows the requests received by your container app, split by replica.
+
+ :::image type="content" source="media/scale-app/azure-container-apps-scale-replicas-metrics-1.png" alt-text="Container app metrics graph, showing requests split by replica.":::
+
+1. By default, the graph scale is set to the last 24 hours, with a time granularity of 15 minutes. Select the scale and change it to the last 30 minutes, with a time granularity of one minute. Select the **Apply** button.
+1. Select and drag on the graph to highlight the recent increase in requests received by your container app.
++
+The following screenshot shows a zoomed view of how the requests received by your container app are divided among replicas.
++
+## Clean up resources
+
+If you're not going to continue to use this application, run the following command to delete the resource group along with all the resources created in this tutorial.
+
+>[!CAUTION]
+> The following command deletes the specified resource group and all resources contained within it. If resources outside the scope of this tutorial exist in the specified resource group, they will also be deleted.
+
+```azurecli
+az group delete --name my-container-apps
+```
+
+> [!TIP]
+> Having issues? Let us know on GitHub by opening an issue in the [Azure Container Apps repo](https://github.com/microsoft/azure-container-apps).
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Set scaling rules in Azure Container Apps](scale-app.md)
container-apps User Defined Routes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/user-defined-routes.md
Previously updated : 03/29/2023 Last updated : 08/29/2023
-# Control outbound traffic with user defined routes (preview)
+# Control outbound traffic with user defined routes
->[!Note]
-> This feature is in preview and is only supported for the workload profiles environment. User defined routes only work with an internal Azure Container Apps environment.
+> [!NOTE]
+> This feature is only supported for the workload profiles environment type. User defined routes only work with an internal Azure Container Apps environment.
This article shows you how to use user defined routes (UDR) with [Azure Firewall](../firewall/overview.md) to lock down outbound traffic from your Container Apps to back-end Azure resources or other network resources.
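
At a high level, the user defined route you create sends all outbound traffic (`0.0.0.0/0`) to the firewall's private IP address as a virtual appliance next hop. A rough sketch with the Azure CLI follows; the resource names are placeholders, and `<FIREWALL_PRIVATE_IP>` is assumed to be your firewall's private IP address:

```azurecli
# Create a route table for the Container Apps subnet
az network route-table create \
  --resource-group my-container-apps \
  --name my-route-table

# Send all outbound traffic to the firewall as a virtual appliance
az network route-table route create \
  --resource-group my-container-apps \
  --route-table-name my-route-table \
  --name default-to-firewall \
  --address-prefix 0.0.0.0/0 \
  --next-hop-type VirtualAppliance \
  --next-hop-ip-address <FIREWALL_PRIVATE_IP>

# Associate the route table with the subnet that hosts the environment
az network vnet subnet update \
  --resource-group my-container-apps \
  --vnet-name my-vnet \
  --name infrastructure-subnet \
  --route-table my-route-table
```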
Your virtual networks in Azure have default route tables in place when you creat
## Configure firewall policies > [!NOTE]
-> When using UDR with Azure Firewall in Azure Container Apps, you will need to add certain FQDN's and service tags to the allowlist for the firewall. Please refer to [configuring UDR with Azure Firewall](./networking.md#configuring-udr-with-azure-firewallpreview) to determine which service tags you need.
+> When using UDR with Azure Firewall in Azure Container Apps, you need to add certain FQDNs and service tags to the allowlist for the firewall. Refer to [configuring UDR with Azure Firewall](./networking.md#configuring-udr-with-azure-firewall) to determine which service tags you need.
Now, all outbound traffic from your container app is routed to the firewall. Currently, the firewall still allows all outbound traffic through. In order to manage what outbound traffic is allowed or denied, you need to configure firewall policies.
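
For example, an application rule that allows the Microsoft Container Registry FQDNs might look like the following sketch. The policy, collection, and rule names here are placeholders, and the `azure-firewall` CLI extension is assumed to be installed:

```azurecli
az network firewall policy rule-collection-group collection add-filter-collection \
  --resource-group my-container-apps \
  --policy-name my-firewall-policy \
  --rule-collection-group-name DefaultApplicationRuleCollectionGroup \
  --name allow-container-apps-dependencies \
  --collection-priority 100 \
  --action Allow \
  --rule-name allow-mcr \
  --rule-type ApplicationRule \
  --protocols Https=443 \
  --source-addresses "*" \
  --target-fqdns mcr.microsoft.com "*.data.mcr.microsoft.com"
```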
container-apps Vnet Custom Internal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/vnet-custom-internal.md
Previously updated : 08/31/2022 Last updated : 08/29/2023 zone_pivot_groups: azure-cli-or-portal
The following example shows you how to create a Container Apps environment in an
[!INCLUDE [container-apps-create-portal-steps.md](../../includes/container-apps-create-portal-steps.md)] > [!NOTE]
-> You can use an existing virtual network, but a dedicated subnet with a CIDR range of `/23` or larger is required for use with Container Apps when using the Consumption only environment. When using the Workload Profiles environment, a `/27` or larger is required. To learn more about subnet sizing, see the [networking environment overview](./networking.md#subnet).
+> You can use an existing virtual network, but a dedicated subnet with a CIDR range of `/23` or larger is required for use with Container Apps when using the Consumption only environment. When using the workload profiles environment, a `/27` or larger is required. To learn more about subnet sizing, see the [networking environment overview](./networking.md#subnet).
7. Select the **Networking** tab to create a VNET. 8. Select **Yes** next to *Use your own virtual network*.
The following table describes the parameters used in for `containerapp env creat
||| | `name` | Name of the Container Apps environment. | | `resource-group` | Name of the resource group. |
-| `logs-workspace-id` | (Optional) The ID of an existing the Log Analytics workspace. If omitted, a workspace will be created for you. |
+| `logs-workspace-id` | (Optional) The ID of an existing Log Analytics workspace. If omitted, a workspace is created for you. |
| `logs-workspace-key` | The Log Analytics client secret. Required if using an existing workspace. | | `location` | The Azure location where the environment is to deploy. | | `infrastructure-subnet-resource-id` | Resource ID of a subnet for infrastructure components and user application containers. |
You must either provide values for all three of these properties, or none of the
## Clean up resources
-If you're not going to continue to use this application, you can delete the Azure Container Apps instance and all the associated services by removing the **my-container-apps** resource group. Deleting this resource group will also delete the resource group automatically created by the Container Apps service containing the custom network components.
+If you're not going to continue to use this application, you can delete the Azure Container Apps instance and all the associated services by removing the **my-container-apps** resource group. Deleting this resource group removes the resource group automatically created by the Container Apps service containing the custom network components.
::: zone pivot="azure-cli"
container-apps Vnet Custom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/vnet-custom.md
The following example shows you how to create a Container Apps environment in an
[!INCLUDE [container-apps-create-portal-steps.md](../../includes/container-apps-create-portal-steps.md)] > [!NOTE]
-> You can use an existing virtual network, but a dedicated subnet with a CIDR range of `/23` or larger is required for use with Container Apps when using the Consumption only Architecture. When using the Workload Profiles Architecture, a `/27` or larger is required. To learn more about subnet sizing, see the [networking architecture overview](./networking.md#subnet).
+> You can use an existing virtual network, but a dedicated subnet with a CIDR range of `/23` or larger is required for use with Container Apps when using the Consumption only Architecture. When using a workload profiles environment, a `/27` or larger is required. To learn more about subnet sizing, see the [networking architecture overview](./networking.md#subnet).
7. Select the **Networking** tab to create a VNET. 8. Select **Yes** next to *Use your own virtual network*.
container-apps Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/whats-new.md
+
+ Title: What's new in Azure Container Apps
+
+description: Learn more about what's new in Azure Container Apps.
++++ Last updated : 08/30/2023
+# Customer Intent: As an Azure Container Apps user, I'd like to know about new and improved features in Azure Container Apps.
++
+# What's new in Azure Container Apps
+
+This article lists significant updates and new features available in Azure Container Apps.
+
+## Dapr
+
+Learn about new and updated Dapr features available in Azure Container Apps.
+
+### August 2023
+
+| Feature | Documentation | Description |
+| - | - | -- |
+| [Stable Configuration API](https://docs.dapr.io/developing-applications/building-blocks/configuration/) | [Dapr integration with Azure Container Apps](./dapr-overview.md) | Dapr's Configuration API is now stable and supported in Azure Container Apps. |
+
+### June 2023
+
+| Feature | Documentation | Description |
+| - | - | -- |
+| [Multi-app Run improved](https://docs.dapr.io/developing-applications/local-development/multi-app-dapr-run) | [Multi-app Run logs](https://docs.dapr.io/developing-applications/local-development/multi-app-dapr-run/multi-app-overview/#logs) | Use `dapr run -f .` to run multiple Dapr apps and see the app logs written to the console _and_ a local log file. |
+
+### May 2023
+
+| Feature | Documentation | Description |
+| - | - | -- |
+| [Easy component creation](./dapr-component-connection.md) | [Connect to Azure services via Dapr components in the Azure portal](./dapr-component-connection.md) | This feature makes it easier to configure and secure dependent Azure services to be used with Dapr APIs in the portal using the Service Connector feature. |
++
+## Next steps
+
+[Learn more about Dapr in Azure Container Apps.](./dapr-overview.md)
container-apps Workload Profiles Manage Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/workload-profiles-manage-cli.md
Title: Create a Consumption + Dedicated workload profiles environment (preview)
-description: Learn to create an environment with a specialized hardware profile.
+ Title: Create a workload profiles environment with the Azure CLI
+description: Learn to create an environment with a specialized hardware profile using the Azure CLI.
Previously updated : 04/11/2023 Last updated : 08/29/2023 zone_pivot_groups: container-apps-vnet-types
-# Manage workload profiles in a Consumption + Dedicated workload profiles plan structure (preview)
+# Manage workload profiles with the Azure CLI
-Learn to manage a Container Apps environment with workload profile support.
-
-## Supported regions
-
-The following regions support workload profiles during preview:
--- North Central US-- North Europe-- West Europe-- East US
+Learn to manage a workload profiles environment using the Azure CLI.
<a id="create"></a>
The following regions support workload profiles during preview:
::: zone pivot="aca-vnet-managed"
-Azure Container Apps run in an environment, which uses a virtual network (VNet). By default, your Container App environment is created with a managed VNet that is automatically generated for you. Generated VNets are inaccessible to you as they're created in Microsoft's tenant.
+By default, your Container Apps environment is created with a managed VNet that is automatically generated for you. Generated VNets are inaccessible to you as they're created in Microsoft's tenant.
-Create a container apps environment with a [custom VNet](./workload-profiles-manage-cli.md?pivots=aca-vnet-custom) if you need any of the following features:
+Alternatively, you can create an environment with a [custom VNet](./workload-profiles-manage-cli.md?pivots=aca-vnet-custom) if you need any of the following features:
- [User defined routes](user-defined-routes.md) - Integration with Application Gateway
When you create an environment with a custom VNet, you have full control over th
::: zone-end
-Use the following commands to create an environment with workload profile support.
+Use the following commands to create a workload profiles environment.
::: zone pivot="aca-vnet-custom"
Use the following commands to create an environment with workload profile suppor
--query "id" ```
- Copy the ID value and paste into the next command.
+ Copy the **ID** value and paste into the next command.
- The `Microsoft.App/environments` delegation is required to give the Container Apps runtime the needed control over your VNet to run workload profiles in the Container Apps environment.
+ The `Microsoft.App/environments` delegation is required to give the Container Apps runtime the required control over your VNet to run workload profiles in the Container Apps environment.
- You can specify as small as a `/27` CIDR (32 IPs-8 reserved) for the subnet. Some things to consider if you're going to specify a `/27` CIDR:
+ You can specify a CIDR as small as `/27` (32 IPs, 8 reserved) for the subnet. If you specify a `/27` CIDR, consider the following items:
- - There are 11 IP addresses reserved for Container Apps infrastructure. Therefore, a `/27` CIDR has a maximum of 21 IP available addresses.
+ - There are 11 IP addresses reserved for Container Apps infrastructure. Therefore, a `/27` CIDR has a maximum of 21 available IP addresses.
- - IP addresses are allocated differently between Consumption and Dedicated profiles:
+ - IP addresses are allocated differently between Consumption only and Dedicated plans:
- | Consumption | Consumption + Dedicated |
+ | Consumption only | Dedicated |
|||
- | Every replica requires one IP. Users can't have apps with more than 21 replicas across all apps. Zero downtime deployment requires double the IPs since the old revision is running until the new revision is successfully deployed. | Every instance (VM node) requires a single IP. You can have up to 21 instances across all workload profiles, and hundreds or more replicas running on these workload profiles. |
+ | Every replica requires one IP. Users can't have apps with more than 21 replicas across all apps. Zero downtime deployment requires double the IPs since the old revision is running until the new revision is successfully deployed. | Every instance (VM node) requires a single IP. You can have up to 21 instances across all workload profiles, and hundreds or more replicas running on these workload profiles. |
::: zone-end
-1. Create *Consumption + Dedicated* environment with workload profile support
+1. Create a *workload profiles* environment
::: zone pivot="aca-vnet-custom" >[!Note]
- > In Container Apps, you can configure whether your Container Apps will allow public ingress or only ingress from within your VNet at the environment level. In order to restrict ingress to just your VNet, you need to set the `--internal-only` flag.
+ > You can configure whether your container app allows public ingress or only ingress from within your VNet at the environment level. In order to restrict ingress to just your VNet, set the `--internal-only` flag.
# [External environment](#tab/external-env)
Use the following commands to create an environment with workload profile suppor
This command can take up to 10 minutes to complete.
-1. Check status of environment. Here, you're looking to see if the environment is created successfully.
+1. Check the status of your environment. The following command reports if the environment is created successfully.
```bash az containerapp env show \
Use the following commands to create an environment with workload profile suppor
- This command deploys the application to the built-in Consumption workload profile. If you want to create an app in a dedicated workload profile, you first need to [add the profile to the environment](#add-profiles).
+ This command deploys the application to the built-in Consumption workload profile. If you want to create an app in a Dedicated profile, you first need to [add the profile to the environment](#add-profiles).
This command creates the new application in the environment using a specific workload profile.
az containerapp env workload-profile set \
--max-nodes <MAX_NODES> ```
-The value you select for the `<WORKLOAD_PROFILE_NAME>` placeholder is the workload profile "friendly name".
+The value you select for the `<WORKLOAD_PROFILE_NAME>` placeholder is the workload profile *friendly name*.
Using friendly names allows you to add multiple profiles of the same type to an environment. The friendly name is what you use as you deploy and maintain a container app in a workload profile.
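
Put together, a complete invocation might look like the following sketch. The environment and profile names are placeholders, and `D4` stands in for one of the Dedicated profile types returned by `list-supported`:

```azurecli
az containerapp env workload-profile set \
  --resource-group my-container-apps \
  --name my-environment \
  --workload-profile-name my-d4-profile \
  --workload-profile-type D4 \
  --min-nodes 1 \
  --max-nodes 3
```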
The following commands allow you to list available profiles in your region and o
Use the `list-supported` command to list the supported workload profiles for your region.
-The following Azure CLI command displays the results in a table
+The following Azure CLI command displays the results in a table.
```azurecli az containerapp env workload-profile list-supported \
container-apps Workload Profiles Manage Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/workload-profiles-manage-portal.md
Title: Create a Consumption + Dedicated workload profiles environment (preview) in the Azure portal
+ Title: Create a workload profiles environment in the Azure portal
description: Learn to create an environment with a specialized hardware profile in the Azure portal. Previously updated : 04/11/2023 Last updated : 08/29/2023
-# Manage workload profiles in a Consumption + Dedicated workload profiles plan structure (preview) in the Azure portal
+# Manage workload profiles in the Azure portal
-Learn to manage Container Apps environments with [workload profile](./workload-profiles-overview.md) support.
+Learn to manage a [workload profiles](./workload-profiles-overview.md) environment in the Azure portal.
## Create a container app in a workload profile
Learn to manage Container Apps environments with [workload profile](./workload-p
1. Configure the new environment.
- :::image type="content" source="media/workload-profiles/azure-container-apps-dedicated-environment.png" alt-text="Screenshot of create an Azure Container Apps Consumption + Dedicated plan environment window.":::
+ :::image type="content" source="media/workload-profiles/azure-container-apps-workload-profiles-environment.png" alt-text="Screenshot of create an Azure Container Apps workload profiles environment window.":::
Enter the following values to create your environment. | Property | Value | | | | | Environment name | Enter an environment name. |
- | Plan | Select **(Preview) Consumption and Dedicated workload profiles** |
+ | Environment type| Select **Workload profiles** |
Select the new **Workload profiles** tab at the top of this section.
From this window, you can:
- Adjust the minimum and maximum number of instances available to a profile - Add new profiles-- Delete existing profiles (except for the Consumption profile)
+- Delete existing profiles (except for the consumption profile)
## Delete a profile
container-apps Workload Profiles Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/workload-profiles-overview.md
Title: Workload profiles in Consumption + Dedicated plan structure environments in Azure Container Apps
+ Title: Workload profiles in Azure Container Apps
description: Learn how to select a workload profile for your container app Previously updated : 03/30/2023 Last updated : 08/10/2023
-# Workload profiles in Consumption + Dedicated plan structure environments in Azure Container Apps (preview)
+# Workload profiles in Azure Container Apps
-Under the [Consumption + Dedicated plan structure](./plans.md#consumption-dedicated), you can use different workload profiles in your environment. Workload profiles determine the amount of compute and memory resources available to container apps deployed in an environment.
+A workload profile determines the amount of compute and memory resources available to the container apps deployed in an environment.
Profiles are configured to fit the different needs of your applications. | Profile type | Description | Potential use | |--|--|--|
-| Consumption | Automatically added to any new environment. | Apps that don't require specific hardware requirements |
-| Dedicated General purpose | Balance of memory and compute resources | Apps needing larger amounts of CPU and/or memory |
-| Dedicated Memory optimized | Increased memory resources | Apps needing large in-memory data, in-memory machine learning models, or other high memory requirements |
+| Consumption | Automatically added to any new environment. | Apps that don't require specific hardware requirements |
+| Dedicated (General purpose) | Balance of memory and compute resources | Apps that require larger amounts of CPU and/or memory |
+| Dedicated (Memory optimized) | Increased memory resources | Apps that need access to large in-memory data, in-memory machine learning models, or other high memory requirements |
-A Consumption workload profile is automatically added to all Consumption + Dedicated plan structure environment you create. You can optionally add dedicated workload profiles of any type or size as you create an environment or after it's created.
+The Consumption workload profile is the default profile added to every workload profiles [environment](environment.md) type. You can add Dedicated workload profiles to your environment when you create it or afterward.
For each Dedicated workload profile in your environment, you can: - Select the type and size - Deploy multiple apps into the profile-- Use autoscaling to add and remove nodes based on the needs of the apps-- Limit scaling of the profile to for better cost control and predicatibilty
+- Use autoscaling to add and remove instances based on the needs of the apps
+- Limit scaling of the profile to better control costs
-You can configure each of your apps to run on any of the workload profiles defined in your Container Apps environment. This configuration is ideal for deploying a microservice solution where each app can run on the appropriate compute infrastructure.
+You can configure each of your apps to run on any of the workload profiles defined in your Container Apps environment. This configuration is ideal for deploying microservices where each app can run on the appropriate compute infrastructure.
## Profile types
-There are different types and sizes of workload profiles available by region. By default each Consumption + Dedicated plan structure includes a Consumption profile, but you can also add any of the following profiles:
+There are different types and sizes of workload profiles available by region. By default, each Dedicated plan includes a consumption profile, but you can also add any of the following profiles:
| Display name | Name | Cores | MemoryGiB | Category | Allocation | |||||||
There are different types and sizes of workload profiles available by region. By
Select a workload profile and use the *Name* field when you run `az containerapp env workload-profile set` for the `--workload-profile-type` option.
+In addition to different core and memory sizes, workload profiles also have varying image size limits available. To learn more about the image size limits for your container apps, see [hardware reference](hardware.md#image-size-limit).
+ The availability of different workload profiles varies by region. ## Resource consumption
-You can constrain the memory and CPU usage of each app inside a workload profile, and you can run multiple apps inside a single instance of a workload profile. However, the total amount of resources available to a container app is less than what's allocated to a profile. The difference between allocated and available resources is what's reserved for the Azure Container Apps runtime.
+You can constrain the memory and CPU usage of each app inside a workload profile, and you can run multiple apps inside a single instance of a workload profile. However, the total amount of resources available to a container app is less than what's allocated to a profile. The difference between allocated and available resources is the amount reserved by the Container Apps runtime.
## Scaling
-When demand for new apps or more replicas of an existing app exceeds the profile's current resources, profile instances may be added. Inversely, if the number of apps or replicas goes down, profile instances may be removed. You have control over the constraints on the minimum and maximum number of profile instances. Azure calculates [billing](billing.md#consumption-dedicated) largely based on the number of running profile instances.
+When demand for new apps or more replicas of an existing app exceeds the profile's current resources, profile instances may be added.
+
+At the same time, if the number of required replicas goes down, profile instances may be removed. You have control over the constraints on the minimum and maximum number of profile instances.
+
+Azure calculates [billing](billing.md#consumption-dedicated) largely based on the number of running profile instances.
## Networking
-When using workload profiles in the Consumption + Dedicated plan structure, additional networking features to fully secure your ingress/egress networking traffic such as user defined routes are available. To learn more about what networking features are supported, see [networking concepts](./networking.md), and for steps on how to secure your network with Container Apps, see the [lock down your Container App environment section](./networking.md#lock-down-your-container-app-environment).
+When you use the workload profile environment, extra networking features that fully secure your ingress and egress networking traffic (such as user defined routes) are available. To learn more about what networking features are supported, see [Networking in Azure Container Apps environment](./networking.md). For steps on how to secure your network with Container Apps, see the [lock down your Container App environment section](networking.md#environment-security).
## Next steps
container-instances Container Instances Application Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-application-gateway.md
az network application-gateway create \
--public-ip-address myAGPublicIPAddress \ --vnet-name myVNet \ --subnet myAGSubnet \
- --servers "$ACI_IP"
+ --servers "$ACI_IP" \
--priority 100 ```
container-instances Container Instances Gpu https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-gpu.md
To run certain compute-intensive workloads on Azure Container Instances, deploy
This article shows how to add GPU resources when you deploy a container group by using a [YAML file](container-instances-multi-container-yaml.md) or [Resource Manager template](container-instances-multi-container-group.md). You can also specify GPU resources when you deploy a container instance using the Azure portal. > [!IMPORTANT]
-> K80 and P100 GPU SKUs are retiring by August 31st, 2023. This is due to the retirement of the underlying VMs used: [NC Series](https://learn.microsoft.com/azure/virtual-machines/nc-series-retirement) and [NCv2 Series](https://learn.microsoft.com/azure/virtual-machines/ncv2-series-retirement) Although V100 SKUs will be available, it is receommended to use Azure Kubernetes Service instead. GPU resources are not fully supported and should not be used for production workloads. Use the following resources to migrate to AKS today: [How to Migrate to AKS](https://learn.microsoft.com/azure/aks/aks-migration).
+> K80 and P100 GPU SKUs are retiring by August 31st, 2023. This is due to the retirement of the underlying VMs used: [NC Series](../virtual-machines/nc-series-retirement.md) and [NCv2 Series](../virtual-machines/ncv2-series-retirement.md). Although V100 SKUs will remain available, we recommend using Azure Kubernetes Service instead. GPU resources are not fully supported and should not be used for production workloads. Use the following resources to migrate to AKS today: [How to Migrate to AKS](../aks/aks-migration.md).
> [!IMPORTANT] > This feature is currently in preview, and some [limitations apply](#preview-limitations). Previews are made available to you on the condition that you agree to the [supplemental terms of use][terms-of-use]. Some aspects of this feature may change prior to general availability (GA).
container-instances Container Instances Reference Yaml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-reference-yaml.md
The following tables describe the values you need to set in the schema.
| name | string | No | Name of the header. | | value | string | No | Value of the header. |
+> [!IMPORTANT]
+> K80 and P100 GPU SKUs are retiring by August 31st, 2023. This is due to the retirement of the underlying VMs used: [NC Series](../virtual-machines/nc-series-retirement.md) and [NCv2 Series](../virtual-machines/ncv2-series-retirement.md). Although V100 SKUs will remain available, we recommend using Azure Kubernetes Service instead. GPU resources are not fully supported and should not be used for production workloads. Use the following resources to migrate to AKS today: [How to Migrate to AKS](../aks/aks-migration.md).
+ ### GpuResource object | Name | Type | Required | Value | | - | - | - | - | | count | integer | Yes | The count of the GPU resource. |
-| sku | enum | Yes | The SKU of the GPU resource. - K80, P100, V100 |
+| sku | enum | Yes | The SKU of the GPU resource. - V100 |
## Next steps
container-instances Container Instances Resource And Quota Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-resource-and-quota-limits.md
The following limits are default limits that can't be increased through a quot
| Resource | Actual Limit | | | : | | Standard sku container groups per region per subscription | 100 |
-| Standard sku cores (CPUs) per region per subscription | 100 |
-| Standard sku cores (CPUs) for K80 GPU per region per subscription | 0 |
+| Standard sku cores (CPUs) per region per subscription | 100 |
| Standard sku cores (CPUs) for V100 GPU per region per subscription | 0 | | Container group creates per hour |300<sup>1</sup> | | Container group creates per 5 minutes | 100<sup>1</sup> | | Container group deletes per hour | 300<sup>1</sup> | | Container group deletes per 5 minutes | 100<sup>1</sup> |
-## Standard Core Resources
+## Standard Container Resources
### Linux Container Groups
The following resources are available in all Azure Regions supported by Azure Co
| :-: | :--: | :-: | | 4 | 16 | 20 | Y |
-## GPU Resources (Preview)
+## Spot Container Resources (Preview)
+
+The following maximum resources are available to a container group deployed using [Spot Containers](container-instances-spot-containers-overview.md) (preview).
+
+> [!NOTE]
+> Spot Containers are only available in the following regions at this time: East US 2, West Europe, and West US.
+
+| Max CPU | Max Memory (GB) | VNET Max CPU | VNET Max Memory (GB) | Storage (GB) |
+| :-: | :-: | :-: | :-: | :-: |
+| 4 | 16 | N/A | N/A | 50 |
+
+## Confidential Container Resources (Preview)
+
+The following maximum resources are available to a container group deployed using [Confidential Containers](container-instances-confidential-overview.md) (preview).
+
+> [!NOTE]
+> Confidential Containers are only available in the following regions at this time: East US, North Europe, West Europe, and West US.
+
+| Max CPU | Max Memory (GB) | VNET Max CPU | VNET Max Memory (GB) | Storage (GB) |
+| :-: | :-: | :-: | :-: | :-: |
+| 4 | 16 | 4 | 16 | 50 |
+
+## GPU Container Resources (Preview)
> [!IMPORTANT] > K80 and P100 GPU SKUs are retiring by August 31st, 2023. This is due to the retirement of the underlying VMs used: [NC Series](../virtual-machines/nc-series-retirement.md) and [NCv2 Series](../virtual-machines/ncv2-series-retirement.md). Although V100 SKUs will remain available, we recommend using Azure Kubernetes Service instead. GPU resources are not fully supported and should not be used for production workloads. Use the following resources to migrate to AKS today: [How to Migrate to AKS](../aks/aks-migration.md).
The following maximum resources are available to a container group deployed with
| V100 | 1 | 6 | 112 | 50 | | V100 | 2 | 12 | 224 | 50 | | V100 | 4 | 24 | 448 | 50 |
-<!
-| K80 | 1 | 6 | 56 | 50 |
-| K80 | 2 | 12 | 112 | 50 |
-| K80 | 4 | 24 | 224 | 50 |
-| P100, V100 | 1 | 6 | 112 | 50 |
-| P100, V100 | 2 | 12 | 224 | 50 |
-| P100, V100 | 4 | 24 | 448 | 50 |
->
## Next steps
container-instances Container Instances Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-vnet.md
Examples in this article are formatted for the Bash shell. If you prefer another
## Deploy to new virtual network > [!NOTE]
-> If you are using port 29 to have only 3 IP addresses, we recommend always to go one range above or below. For example, use port 28 so you can have at least 1 or more IP buffer per container group. By doing this, you can avoid containers in stuck, not able start or not able to stop states.
+> If you use a subnet IP range of /29, which leaves only 3 usable IP addresses, we recommend always going one range larger (never smaller). For example, use a /28 subnet IP range so you have at least one spare IP address per container group. This helps you avoid containers getting stuck in states where they can't start, restart, or even stop.
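+
+For example, a dedicated /28 subnet can be created with a sketch like the following; the resource group, VNet, and subnet names are placeholders:
+
+```azurecli
+az network vnet subnet create \
+  --resource-group myResourceGroup \
+  --vnet-name aci-vnet \
+  --name aci-subnet \
+  --address-prefixes 10.0.0.0/28
+```
+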
To deploy to a new virtual network and have Azure create the network resources for you automatically, specify the following when you execute [az container create][az-container-create]:
container-instances Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/policy-reference.md
Previously updated : 08/08/2023 Last updated : 08/30/2023 # Azure Policy built-in definitions for Azure Container Instances
container-registry Container Registry Tutorial Sign Build Push https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-tutorial-sign-build-push.md
Otherwise create an x509 self-signed certificate storing it in AKV for remote si
notation verify $IMAGE ``` Upon successful verification of the image using the trust policy, the sha256 digest of the verified image is returned in a successful output message.+
+## Next steps
+
+See [Ratify on Azure: Allow only signed images to be deployed on AKS with Notation and Ratify](https://github.com/deislabs/ratify/blob/main/docs/quickstarts/ratify-on-azure.md).
container-registry Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/policy-reference.md
Title: Built-in policy definitions for Azure Container Registry
description: Lists Azure Policy built-in policy definitions for Azure Container Registry. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2023 Last updated : 08/30/2023
container-registry Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/security-controls-policy.md
description: Lists Azure Policy Regulatory Compliance controls available for Azu
Previously updated : 08/03/2023 Last updated : 08/25/2023
cosmos-db Secondary Indexing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/secondary-indexing.md
Title: Indexing in Azure Cosmos DB for Apache Cassandra account
-description: Learn how secondary indexing works in Azure Azure Cosmos DB for Apache Cassandra account.
+description: Learn how secondary indexing works in Azure Cosmos DB for Apache Cassandra account.
cosmos-db Cmk Troubleshooting Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cmk-troubleshooting-guide.md
A troubleshooting solution, for example, would be to create a new identity with
After updating the account's default identity, you need to wait upwards to one hour for the account to stop being in revoke state. If the issue isn't resolved after more than two hours, contact customer service.
-## Customer Managed Key does not exist
+## Azure Key Vault Resource not found
### Reason for error?
-You see this error when the customer managed key isn't found on the specified Azure Key Vault.
+You see this error when the Azure Key Vault or the specified key isn't found.
### Troubleshooting
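
One quick check, sketched here with the Azure CLI (the vault and key names are assumptions for illustration), is to confirm that the key configured on the account actually resolves:

```azurecli
az keyvault key show \
  --vault-name <key-vault-name> \
  --name <key-name>
```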
cosmos-db Data Residency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/data-residency.md
In Azure Cosmos DB, you must explicitly configure the cross-region data replicat
**Periodic mode Backups**: By default, periodic mode account backups will be stored in geo-redundant storage. For periodic backup modes, you can configure data redundancy at the account level. There are three redundancy options for the backup storage. They are local redundancy, zone redundancy, or geo redundancy. For more information, see [periodic backup/restore](periodic-backup-restore-introduction.md).
+## Residency requirements for analytical store
+
+Analytical store data is resident by default because it's stored in either locally redundant or zone-redundant storage. To learn more, see the [analytical store](analytical-store-introduction.md) article.
++ ## Use Azure Policy to enforce the residency requirements If you have data residency requirements that require you to keep all your data in a single Azure region, you can enforce zone-redundant or locally redundant backups for your account by using an Azure Policy. You can also enforce a policy that the Azure Cosmos DB accounts are not geo-replicated to other regions.
cosmos-db How To Setup Customer Managed Keys Existing Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-to-setup-customer-managed-keys-existing-accounts.md
+
+ Title: Configure customer-managed keys on existing accounts
+
+description: Store customer-managed keys in Azure Key Vault to use for encryption in your existing Azure Cosmos DB account with access control.
+++ Last updated : 08/17/2023++
+ms.devlang: azurecli
++
+# Configure customer-managed keys for your existing Azure Cosmos DB account with Azure Key Vault (Preview)
++
+Enabling a second layer of encryption for data at rest using [Customer Managed Keys](./how-to-setup-customer-managed-keys.md) while creating a new Azure Cosmos DB account has been generally available for some time. As a natural next step, you can now enable CMK on existing Azure Cosmos DB accounts.
+
+This feature eliminates the need for data migration to a new account to enable CMK. It helps improve customers' security and compliance posture.
+
+> [!NOTE]
+> Currently, enabling customer-managed keys on existing Azure Cosmos DB accounts is in preview. This preview is provided without a service-level agreement. Certain features of this preview may not be supported or may have constrained capabilities. For more information, see [supplemental terms of use for Microsoft Azure previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+Enabling CMK kicks off a background, asynchronous process to encrypt all the existing data in the account, while new incoming data are encrypted before persisting. There's no need to wait for the asynchronous operation to succeed. The enablement process consumes unused/spare RUs so that it doesn't affect your read/write workloads. You can refer to this [link](./how-to-setup-customer-managed-keys.md?tabs=azure-powershell#how-do-customer-managed-keys-influence-capacity-planning) for capacity planning once your account is encrypted.
+
+## Get started by enabling CMK on your existing accounts
+
+### Prerequisites
+
+All the prerequisite steps needed while configuring customer-managed keys for new accounts are applicable to enabling CMK on your existing account. Refer to the steps [here](./how-to-setup-customer-managed-keys.md?tabs=azure-portal#prerequisites).
+
+### Steps to enable CMK on your existing account
+
+To enable CMK on an existing account, update the account with an ARM template that sets a Key Vault key identifier in the `keyVaultKeyUri` property, just as you would when enabling CMK on a new account. You can do this by issuing a PATCH call with the following payload:
+
+```json
+ {
+ "properties": {
+ "keyVaultKeyUri": "<key-vault-key-uri>"
+ }
+ }
+```
+
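+As a sketch of issuing that PATCH directly with `az rest` (the subscription, resource group, and account names are placeholders, and the API version is an assumption):
+
+```azurecli
+az rest --method patch \
+  --url "https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.DocumentDB/databaseAccounts/<account-name>?api-version=2023-04-15" \
+  --body '{"properties": {"keyVaultKeyUri": "<key-vault-key-uri>"}}'
+```
+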
+Alternatively, the following Azure CLI command enables CMK and waits for the data encryption to complete:
+
+```azurecli
+ az cosmosdb update --name "testaccount" --resource-group "testrg" --key-uri "https://keyvaultname.vault.azure.net/keys/key1"
+```
+
+### Steps to enable CMK on your existing Azure Cosmos DB account with PITR or analytical store enabled
+
+To enable CMK on an existing account that has continuous backup and point-in-time restore enabled, you need to follow some extra steps. Complete steps 1 through 5, and then follow the instructions to enable CMK on an existing account.
+
+> [!NOTE]
+> System-assigned identity and continuous backup mode is currently under Public Preview and may change in the future. Currently, only user-assigned managed identity is supported for enabling CMK on continuous backup accounts.
+++
+1. Configure a managed identity for your Azure Cosmos DB account. See [Configure managed identities with Azure AD for your Azure Cosmos DB account](./how-to-setup-managed-identity.md)
+
+1. Update the Azure Cosmos DB account to set its default identity to the managed identity you added in the previous step
+
+ **For system-assigned managed identity:**
+
+ ```azurecli
+ az cosmosdb update --resource-group $resourceGroupName --name $accountName --default-identity "SystemAssignedIdentity=subscriptions/00000000-0000-0000-0000-00000000/resourcegroups/MyRG/providers/Microsoft.ManagedIdentity/systemAssignedIdentities/MyID"
+ ```
+
+ **For user-assigned managed identity:**
+
+ ```azurecli
+ az cosmosdb update -n $sourceAccountName -g $resourceGroupName --default-identity "UserAssignedIdentity=subscriptions/00000000-0000-0000-0000-00000000/resourcegroups/MyRG/providers/Microsoft.ManagedIdentity/userAssignedIdentities/MyID"
+ ```
+
+1. Configure your Azure Key Vault instance as described in the documentation [here](./how-to-setup-customer-managed-keys.md?tabs=azure-cli#configure-your-azure-key-vault-instance)
+
+1. Add an [access policy](./how-to-setup-customer-managed-keys.md?tabs=azure-cli#using-a-managed-identity-in-the-azure-key-vault-access-policy) in the key vault for the default identity that you set in the previous step (a minimal sketch of this command follows these steps)
+
+1. Update the Azure Cosmos DB account to set the Key Vault URI. This update triggers enabling CMK on the account.
+
+ ```azurecli
+ az cosmosdb update --name $accountName --resource-group $resourceGroupName --key-uri $keyVaultKeyURI
+ ```
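+
+The access policy from step 4 can be granted with a minimal sketch like the following, assuming the vault uses access policies (rather than Azure RBAC) and that `<principal-id>` is the object ID of the identity you set as the default:
+
+```azurecli
+az keyvault set-policy \
+  --name <key-vault-name> \
+  --object-id <principal-id> \
+  --key-permissions get unwrapKey wrapKey
+```
+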
+## Known limitations
+
+- Enabling CMK is available only at the Azure Cosmos DB account level, not at the collection level.
+- We don't support enabling CMK on existing Azure Cosmos DB for Apache Cassandra accounts.
+- We don't support enabling CMK on existing accounts that are enabled for materialized views or full fidelity change feed (FFCF).
+- Before enabling CMK, ensure the account has no documents with IDs larger than 990 bytes. Otherwise, you get an error because the maximum supported ID size after encryption is 1024 bytes.
+- During encryption of existing data, [control plane](./audit-control-plane-logs.md) actions such as "add region" are blocked. These actions are unblocked and can be used right after the encryption completes.
+
+## Monitor the progress of the resulting encryption
+
+Enabling CMK on an existing account is an asynchronous operation that kicks off a background task that encrypts all existing data. As such, the REST API request to enable CMK provides an "Azure-AsyncOperation" URL in its response. Polling this URL with GET requests returns the status of the overall operation, which eventually reports `Succeeded`. This mechanism is fully described in [this article](../azure-resource-manager/management/async-operations.md).
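+
+Sketched with the Azure CLI, polling looks like the following; the placeholder is whatever URL the `Azure-AsyncOperation` response header returned:
+
+```azurecli
+az rest --method get --url "<azure-asyncoperation-url>"
+```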
+
+You can continue to use the Azure Cosmos DB account and write data without waiting for the asynchronous operation to finish. The CLI command for enabling CMK waits for the data encryption to complete.
+
+If you have further questions, reach out to Microsoft Support.
+
+## FAQs
+
+**What are the factors on which the encryption time depends?**
+
+Enabling CMK is an asynchronous operation and depends on sufficient unused RUs being available. We suggest enabling CMK during off-peak hours and, if applicable, increasing RUs beforehand to speed up the encryption. The encryption time is also a direct function of data size.
+
+**Do we need to brace ourselves for downtime?**
+
+Enabling CMK kicks off a background, asynchronous process to encrypt all the data. There's no need to wait for the asynchronous operation to succeed. The Azure Cosmos DB account is available for reads and writes and there's no need for a downtime.
+
+**Can you bump up the RUs once CMK has been triggered?**
+
+We suggest increasing the RUs before you trigger CMK. Once CMK is triggered, some control plane operations are blocked until the encryption completes. This block may prevent you from increasing the RUs after CMK is triggered.
+
+**Is there a way to reverse the encryption or disable encryption after triggering CMK?**
+
+Once the data encryption process using CMK is triggered, it can't be reverted.
+
+**Will enabling encryption using CMK on existing account have an impact on data size and read/writes?**
+
+As you would expect, by enabling CMK there's a slight increase in data size and RUs to accommodate extra encryption/decryption processing.
+
+**Should you back up the data before enabling CMK?**
+
+Enabling CMK doesn't pose any threat of data loss. In general, we suggest you back up the data regularly.
+
+**Are old backups taken as a part of periodic backup encrypted?**
+
+No. Old periodic backups aren't encrypted. Backups generated after CMK is enabled are encrypted.
+
+**What is the behavior on existing accounts that are enabled for continuous backup (PITR)?**
+
+When CMK is turned on, encryption is turned on for continuous backups as well. All restores going forward are encrypted.
+
+**What is the behavior if CMK is enabled on a PITR-enabled account and we restore the account to a time when CMK was disabled?**
+
+In this case, CMK is explicitly enabled on the restored target account for the following reasons:
+- Once CMK is enabled on the account, there's no option to disable CMK.
+- This behavior is in line with the current design for restoring a CMK-enabled account with periodic backup.
+
+**What happens when the user revokes the key while CMK migration is in progress?**
+
+The state of the key is checked when CMK encryption is triggered. If the key in Azure Key Vault is in good standing, the encryption starts and the process completes without further checks. Even if the key is later revoked, or the Azure Key Vault is deleted or unavailable, the encryption process still succeeds.
+
+**Can we enable CMK encryption on our existing production account?**
+
+Yes. Since the capability is currently in preview, we recommend testing all scenarios on nonproduction accounts first. Once you're comfortable, you can consider enabling it on production accounts.
+
+## Next steps
+
+* Learn more about [data encryption in Azure Cosmos DB](database-encryption-at-rest.md).
+* You can choose to add a second layer of encryption with your own keys, to learn more, see the [customer-managed keys](how-to-setup-cmk.md) article.
+* For an overview of Azure Cosmos DB security and the latest improvements, see [Azure Cosmos DB database security](database-security.md).
+* For more information about Microsoft certifications, see the [Azure Trust Center](https://azure.microsoft.com/support/trust-center/).
cosmos-db How To Setup Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-to-setup-customer-managed-keys.md
Data stored in your Azure Cosmos DB account is automatically and seamlessly encr
You must store customer-managed keys in [Azure Key Vault](../key-vault/general/overview.md) and provide a key for each Azure Cosmos DB account that is enabled with customer-managed keys. This key is used to encrypt all the data stored in that account. > [!NOTE]
-> Currently, customer-managed keys are available only for new Azure Cosmos DB accounts. You should configure them during account creation.
+> Currently, customer-managed keys are available only for new Azure Cosmos DB accounts. You should configure them during account creation. Enabling customer-managed keys on your existing accounts is available in preview. For more details, see [Configure customer-managed keys on existing accounts](how-to-setup-customer-managed-keys-existing-accounts.md)
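+
+For reference, supplying the key at account creation is a single flag on `az cosmosdb create`; a minimal sketch with placeholder names:
+
+```azurecli
+az cosmosdb create \
+  --name <account-name> \
+  --resource-group <resource-group> \
+  --key-uri "https://<key-vault-name>.vault.azure.net/keys/<key-name>"
+```
+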
> [!WARNING] > The following field names are reserved on Cassandra API tables in accounts using Customer-managed Keys:
Here, create a new key using Azure Key Vault and retrieve the unique identifier.
:::image type="content" source="media/how-to-setup-customer-managed-keys/new-customer-managed-key.png" lightbox="media/how-to-setup-customer-managed-keys/new-customer-managed-key.png" alt-text="Screenshot of the dialog to create a new key.":::
- > [!TIP]
- > Alternatively, you can use the Azure CLI to generate a key with:
- >
- > ```azurecli
- > az keyvault key create \
- > --vault-name <name-of-key-vault> \
- > --name <name-of-key>
- > ```
- >
- > For more information on managing a key vault with the Azure CLI, see [manage Azure Key Vault with the Azure CLI](../key-vault/general/manage-with-cli2.md).
+ > [!TIP]
+ > Alternatively, you can use the Azure CLI to generate a key with:
+ >
+ > ```azurecli
+ > az keyvault key create \
+ > --vault-name <name-of-key-vault> \
+ > --name <name-of-key>
+ > ```
+ >
+ > For more information on managing a key vault with the Azure CLI, see [manage Azure Key Vault with the Azure CLI](../key-vault/general/manage-with-cli2.md).
1. After the key is created, select the newly created key and then its current version.
cosmos-db Index Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/index-policy.md
Azure Cosmos DB supports two indexing modes:
- **None**: Indexing is disabled on the container. This mode is commonly used when a container is used as a pure key-value store without the need for secondary indexes. It can also be used to improve the performance of bulk operations. After the bulk operations are complete, the index mode can be set to Consistent and then monitored using the [IndexTransformationProgress](how-to-manage-indexing-policy.md#dotnet-sdk) until complete. > [!NOTE]
-> Azure Cosmos DB also supports a Lazy indexing mode. Lazy indexing performs updates to the index at a much lower priority level when the engine is not doing any other work. This can result in **inconsistent or incomplete** query results. If you plan to query an Azure Cosmos DB container, you should not select lazy indexing. New containers cannot select lazy indexing. You can request an exemption by contacting cosmosdblazyindexing@microsoft.com (except if you are using an Azure Cosmos DB account in [serverless](serverless.md) mode which doesn't support lazy indexing).
+> Azure Cosmos DB also supports a Lazy indexing mode. Lazy indexing performs updates to the index at a much lower priority level when the engine is not doing any other work. This can result in **inconsistent or incomplete** query results. If you plan to query an Azure Cosmos DB container, you should not select lazy indexing. New containers cannot select lazy indexing. You can request an exemption by contacting cosmosdbindexing@microsoft.com (except if you are using an Azure Cosmos DB account in [serverless](serverless.md) mode which doesn't support lazy indexing).
By default, indexing policy is set to `automatic`. It's achieved by setting the `automatic` property in the indexing policy to `true`. Setting this property to `true` allows Azure Cosmos DB to automatically index items as they're written.
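A minimal sketch of such an automatic, consistent indexing policy (illustrative; the default policy has this general shape):

```json
{
  "indexingMode": "consistent",
  "automatic": true,
  "includedPaths": [
    { "path": "/*" }
  ],
  "excludedPaths": []
}
```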
When removing indexed paths, you should group all your changes into one indexing
When you drop an indexed path, the query engine will immediately stop using it, and will do a full scan instead. > [!NOTE]
-> Where possible, you should always try to group multiple indexing changes into one single indexing policy modification
+> Where possible, you should always try to group multiple index removals into a single indexing policy modification.
+
+> [!IMPORTANT]
+> Removing an index takes effect immediately, whereas adding a new index takes some time because it requires an indexing transformation. When replacing one index with another (for example, replacing a single property index with a composite index), make sure to add the new index first and then wait for the index transformation to complete **before** you remove the previous index from the indexing policy. Otherwise, this negatively affects your ability to query the previous index and may break any active workloads that reference the previous index.
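+
+As an illustrative sketch of that add-then-wait-then-remove order (the property names are hypothetical), a policy mid-transition keeps the old included path while the new composite index builds, and `/name/?` is removed only after the transformation completes:
+
+```json
+{
+  "indexingMode": "consistent",
+  "automatic": true,
+  "includedPaths": [
+    { "path": "/name/?" },
+    { "path": "/*" }
+  ],
+  "compositeIndexes": [
+    [
+      { "path": "/name", "order": "ascending" },
+      { "path": "/age", "order": "descending" }
+    ]
+  ]
+}
+```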
## Indexing policies and TTL
cosmos-db Intra Account Container Copy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/intra-account-container-copy.md
To get started with intra-account offline container copy for NoSQL and Cassandra
### API for MongoDB
-To get started with intra-account offline container copy for Azure Cosmos DB for MongoDB accounts, register for the **Intra-account offline container copy (MongoDB)** preview feature flag in [Preview Features](access-previews.md) in the Azure portal. Once the registration is complete, the preview is effective for all API for MongoDB accounts in the subscription.
+To get started with intra-account offline container copy for Azure Cosmos DB for MongoDB accounts, register for the **Intra-account offline collection copy (MongoDB)** preview feature flag in [Preview Features](access-previews.md) in the Azure portal. Once the registration is complete, the preview is effective for all API for MongoDB accounts in the subscription.
<a name="how-to-do-container-copy"></a>
cosmos-db Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/introduction.md
Today's applications are required to be highly responsive and always online. To
Azure Cosmos DB is a fully managed NoSQL and relational database for modern app development. Azure Cosmos DB offers single-digit millisecond response times, automatic and instant scalability, along with guaranteed speed at any scale. Business continuity is assured with [SLA-backed](https://azure.microsoft.com/support/legal/sla/cosmos-db) availability and enterprise-grade security.
+Use Retrieval Augmented Generation (RAG) to bring the most semantically relevant data to enrich your AI-powered applications built with Azure OpenAI models like GPT-3.5 and GPT-4. For more information, see [Retrieval Augmented Generation (RAG) with Azure Cosmos DB](rag-data-openai.md).
+
App development is faster and more productive thanks to:

- Turnkey multi region data distribution anywhere in the world
- Open source APIs
- SDKs for popular languages
+- Retrieval Augmented Generation (RAG) that brings your data to Azure OpenAI to enrich your AI-powered applications
As a fully managed service, Azure Cosmos DB takes database administration off your hands with automatic management, updates and patching. It also handles capacity management with cost-effective serverless and automatic scaling options that respond to application needs to match capacity with demand.
End-to-end database management, with serverless and automatic scaling matching y
## Solutions that benefit from Azure Cosmos DB
-[Web, mobile, gaming, and IoT application](use-cases.md) that handle massive amounts of data, reads, and writes at a [global scale](distribute-data-globally.md) with near-real response times for various data will benefit from Azure Cosmos DB. Azure Cosmos DB's [guaranteed high availability](https://azure.microsoft.com/support/legal/sl#web-and-mobile-applications).
+[Web, mobile, gaming, and IoT applications](use-cases.md) that handle massive amounts of data, reads, and writes at a [global scale](distribute-data-globally.md) with near-real response times benefit from Azure Cosmos DB. Azure Cosmos DB's [guaranteed high availability](https://azure.microsoft.com/support/legal/sl#web-and-mobile-applications).
## Next steps
cosmos-db Compatibility https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/vcore/compatibility.md
Title: Compatibility and feature support description: Review Azure Cosmos DB for MongoDB vCore supported features and syntax including; commands, query support, datatypes, aggregation, and operators.---+++ - Previously updated : 04/11/2023 Last updated : 08/28/2023 # MongoDB compatibility and feature support with Azure Cosmos DB for MongoDB vCore + Azure Cosmos DB is Microsoft's fully managed NoSQL and relational database, offering [multiple database APIs](../../choose-api.md). You can communicate with Azure Cosmos DB for MongoDB using the MongoDB drivers, SDKs and tools you're already familiar with. Azure Cosmos DB for MongoDB enables the use of existing client drivers by adhering to the MongoDB wire protocol. By using the Azure Cosmos DB for MongoDB, you can enjoy the benefits of the MongoDB you're used to, with all of the enterprise capabilities that Azure Cosmos DB provides.
cosmos-db Connect Using Robomongo https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/vcore/connect-using-robomongo.md
- Title: Use Studio 3T to connect to Azure Cosmos DB for MongoDB vCore
-description: Learn how to connect to Azure Cosmos DB for MongoDB vCore using Studio 3T
--- Previously updated : 07/07/2023---
-# Use Studio 3T to connect to Azure Cosmos DB for MongoDB vCore
-
-Studio 3T (also known as Robomongo or Robo 3T) is a professional GUI that offers IDE & client tools for MongoDB. It's a great tool to speed up MongoDB development with a friendly user interface. In order to connect to your Azure Cosmos DB for MongoDB vCore cluster using Studio 3T, you must:
-
-* Download and install [Studio 3T](https://robomongo.org/)
-* Have your Azure Cosmos DB for MongoDB vCore [connection string](quickstart-portal.md#get-cluster-credentials) information
-
-## Connect using Studio 3T
-
-To add your Azure Cosmos DB cluster to the Studio 3T connection manager, perform the following steps:
-
-1. Retrieve the connection information for your Azure Cosmos DB for MongoDB vCore using the instructions [here](quickstart-portal.md#get-cluster-credentials).
-
- :::image type="content" source="./media/connect-using-robomongo/connection-string.png" alt-text="Screenshot of the connection string page.":::
-2. Run the **Studio 3T** application.
-
-3. Click the connection button under **File** to manage your connections. Then, click **New Connection** in the **Connection Manager** window, which will open up another window where you can paste the connection credentials.
-
-4. In the connection credentials window, choose the first option and paste your connection string. Click **Next** to move forward.
-
- :::image type="content" source="./media/connect-using-robomongo/new-connection.png" alt-text="Screenshot of the Studio 3T connection credentials window.":::
-5. Choose a **Connection name** and double check your connection credentials.
-
- :::image type="content" source="./media/connect-using-robomongo/connection-configuration.png" alt-text="Screenshot of the Studio 3T connection details window.":::
-6. On the **SSL** tab, check **Use SSL protocol to connect**.
-
- :::image type="content" source="./media/connect-using-robomongo/connection-ssl.png" alt-text="Screenshot of the Studio 3T new connection SSL Tab.":::
-7. Finally, click **Test Connection** in the bottom left to verify that you are able to connect, then click **Save**.
-
-## Next steps
--- Learn [how to use Bicep templates](quickstart-bicep.md) to deploy your Azure Cosmos DB for MongoDB vCore cluster.-- Learn [how to connect your Nodejs web application](tutorial-nodejs-web-app.md) to a MongoDB vCore cluster.-- Check the [migration options](migration-options.md) to lift and shift your MongoDB workloads to Azure Cosmos DB for MongoDB vCore.
cosmos-db High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/vcore/high-availability.md
Title: High availability and replication description: Review replication and high availability concepts in the context of Azure Cosmos DB for MongoDB vCore.+++ --- Previously updated : 02/07/2023 Last updated : 08/28/2023 # High availability in Azure Cosmos DB for MongoDB vCore + High availability (HA) avoids database downtime by maintaining standby replicas of every node in a cluster. If a node goes down, Azure Cosmos DB for MongoDB vCore switches incoming connections from the failed node to its standby replica. ## How it works
cosmos-db How To Connect Studio 3T https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/vcore/how-to-connect-studio-3t.md
+
+ Title: Use Studio 3T to connect
+
+description: Connect to an Azure Cosmos DB for MongoDB vCore account using the Studio 3T community tool to query data.
++++++ Last updated : 08/28/2023
+# CustomerIntent: As a database owner, I want to use Studio 3T so that I can connect to and query my collections.
++
+# Use Studio 3T to connect to Azure Cosmos DB for MongoDB vCore
++
+Studio 3T (also known as Robomongo or Robo 3T) is a professional GUI that offers IDE & client tools for MongoDB. It's a popular community tool to speed up MongoDB development with a straightforward user interface.
+
+## Prerequisites
+
+- An existing Azure Cosmos DB for MongoDB vCore cluster.
+ - If you don't have an Azure subscription, [create an account for free](https://azure.microsoft.com/free).
+ - If you have an existing Azure subscription, [create a new Azure Cosmos DB for MongoDB vCore cluster](quickstart-portal.md).
+- [Studio 3T](https://robomongo.org/) community tool
+
+## Connect using Studio 3T
+
+To add your Azure Cosmos DB cluster to the Studio 3T connection manager, perform the following steps:
+
+1. Retrieve the connection information for your Azure Cosmos DB for MongoDB vCore using the instructions [here](quickstart-portal.md#get-cluster-credentials).
+
+ :::image type="content" source="./media/connect-using-robomongo/connection-string.png" alt-text="Screenshot of the connection string page.":::
+
+1. Run the **Studio 3T** application.
+
+1. Select the connection button under **File** to manage your connections. Then, select **New Connection** in the **Connection Manager** window, which opens another window where you can paste the connection credentials.
+
+1. In the connection credentials window, choose the first option and paste your connection string. Select **Next** to move forward.
+
+ :::image type="content" source="./media/connect-using-robomongo/new-connection.png" alt-text="Screenshot of the Studio 3T connection credentials window.":::
+
+1. Choose a **Connection name** and double check your connection credentials.
+
+ :::image type="content" source="./media/connect-using-robomongo/connection-configuration.png" alt-text="Screenshot of the Studio 3T connection details window.":::
+
+1. On the **SSL** tab, check **Use SSL protocol to connect**.
+
+ :::image type="content" source="./media/connect-using-robomongo/connection-ssl.png" alt-text="Screenshot of the Studio 3T new connection TLS/SSL Tab.":::
+
+1. Finally, select **Test Connection** in the bottom left to verify that you're able to connect, then select **Save**.
+
+## Next step
+
+> [!div class="nextstepaction"]
+> [Migration options](migration-options.md)
cosmos-db How To Create Text Index https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/vcore/how-to-create-text-index.md
+
+ Title: Search and query with text indexes
+
+description: Configure and use text indexes to perform and fine tune text searches in Azure Cosmos DB for MongoDB vCore.
++++++ Last updated : 08/28/2023
+# CustomerIntent: As a database query developer, I want to create a text index so that I can perform full-text searches.
++
+# Search and query with text indexes in Azure Cosmos DB for MongoDB vCore
++
+One of the key features that Azure Cosmos DB for MongoDB vCore provides is text indexing, which allows for efficient searching and querying of text-based data. The service implements **version 2** text indexes. Version 2 supports case sensitivity but not diacritic sensitivity.
+
+Text indexes in Azure Cosmos DB for MongoDB are special data structures that optimize text-based queries, making them faster and more efficient. They're designed to handle textual content like documents, articles, comments, or any other text-heavy data. Text indexes use techniques such as tokenization, stemming, and stop words to create an index that improves the performance of text-based searches.
+
+## Prerequisites
+
+- An existing Azure Cosmos DB for MongoDB vCore cluster.
+ - If you don't have an Azure subscription, [create an account for free](https://azure.microsoft.com/free).
+ - If you have an existing Azure subscription, [create a new Azure Cosmos DB for MongoDB vCore cluster](quickstart-portal.md).
+
+## Define a text index
+
+For simplicity, let us consider an example of a blog application with the following setup:
+
+- **Database name**: `cosmicworks`
+- **Collection name**: `products`
+
+This example application stores articles as documents with the following structure:
+
+```json
+{
+ "_id": ObjectId("617a34e7a867530bff1b2346"),
+ "title": "Azure Cosmos DB - A Game Changer",
+ "content": "Azure Cosmos DB is a globally distributed, multi-model database service.",
+ "author": "John Doe",
+ "category": "Technology",
+ "published": true
+}
+```
+
+1. Use the `createIndex` method with the `text` option to create a text index on the `title` field.
+
+ ```javascript
+ use cosmicworks;
+
+   db.products.createIndex({ title: "text" })
+ ```
+
+ > [!NOTE]
+   > While you can define only one text index per collection, that single index can include multiple fields, enabling text searches across different fields in your documents.
+
+1. Optionally, create an index to support search on both the `title` and `content` fields.
+
+ ```javascript
+   db.products.createIndex({ title: "text", content: "text" })
+ ```
+
+## Configure text index options
+
+Text indexes in Azure Cosmos DB for MongoDB come with several options to customize their behavior. For example, you can specify the language for text analysis, set weights to prioritize certain fields, and configure case-insensitive searches. Here's an example of creating a text index with options:
+
+1. Create an index to support search on both the `title` and `content` fields with English language support. Also, assign higher weights to the `title` field to prioritize it in search results.
+
+ ```javascript
+ db.products.createIndex(
+   { title: "text", content: "text" },
+   { default_language: "english", weights: { title: 10, content: 5 }, caseSensitive: false }
+ )
+ ```
+
+### Weights in text indexes
+
+When creating a text index, you can assign different weights to individual fields in the index. These weights represent the importance or relevance of each field in the search. When executing a text search query, Azure Cosmos DB for MongoDB vCore calculates a score for each document based on the search terms and the assigned field weights. The score represents the relevance of the document to the search query.
+
+1. Create an index to support search on both the `title` and `content` fields. Assign a weight of 2 to the `title` field and a weight of 1 to the `content` field.
+
+ ```javascript
+ db.products.createIndex(
+   { title: "text", content: "text" },
+   { weights: { title: 2, content: 1 } }
+ )
+ ```
+
+ > [!NOTE]
+   > When a client performs a text search query with the term `Cosmos DB`, the score for each document in the collection is calculated based on the presence and frequency of the term in both the `title` and `content` fields, with higher importance given to the `title` field because of its higher weight.
+
+## Perform a text search using a text index
+
+Once the text index is created, you can perform text searches using the `$text` operator in your queries. The `$text` operator takes a search string and matches it against the text index to find relevant documents.
+
+1. Perform a text search for the phrase `Cosmos DB`.
+
+ ```javascript
+ db.products.find(
+ { $text: { $search: "Cosmos DB" } }
+ )
+ ```
+
+1. Optionally, use the `$meta` projection operator along with the `textScore` field in a query to see the computed relevance score of each matching document.
+
+ ```javascript
+ db.products.find(
+ { $text: { $search: "Cosmos DB" } },
+ { score: { $meta: "textScore" } }
+ )
+ ```
+
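+1. Optionally, sort the results by relevance. This minimal sketch assumes the standard MongoDB `$meta` sort syntax is supported; it projects the computed `textScore` and orders the results by it.
+
+   ```javascript
+   db.products.find(
+     { $text: { $search: "Cosmos DB" } },
+     { score: { $meta: "textScore" } }
+   ).sort({ score: { $meta: "textScore" } })
+   ```
+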
+## Drop a text index
+
+To drop a text index in MongoDB, you can use the `dropIndex()` method on the collection and specify the index key or name for the text index you want to remove.
+
+1. Drop a text index by explicitly specifying the key.
+
+ ```javascript
+   db.products.dropIndex({ title: "text" })
+ ```
+
+1. Optionally, drop a text index by specifying the autogenerated unique name.
+
+ ```javascript
+ db.products.dropIndex("title_text")
+ ```
+
+## Text index limitations
+
+- Only one text index can be defined on a collection.
+- Text indexes support simple text searches and don't provide advanced search capabilities like regular expression searches.
+- The `hint()` method isn't supported in combination with a query that uses a `$text` expression.
+- Sort operations can't use the ordering of the text index in MongoDB.
+- Text indexes can be relatively large, consuming significant storage space compared to other index types.
+
+## Next step
+
+> [!div class="nextstepaction"]
+> [Build a Node.js web application](tutorial-nodejs-web-app.md)
cosmos-db How To Migrate Native Tools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/vcore/how-to-migrate-native-tools.md
+
+ Title: Migrate MongoDB using MongoDB native tools
+
+description: Use MongoDB native tools to migrate small datasets from existing MongoDB instances to Azure Cosmos DB for MongoDB vCore offline.
++++++ Last updated : 08/28/2023
+# CustomerIntent: As a database owner, I want to use the native tools in MongoDB Core so that I can migrate an existing dataset to Azure Cosmos DB for MongoDB vCore.
++
+# Migrate MongoDB to Azure Cosmos DB for MongoDB vCore offline using MongoDB native tools
++
+In this tutorial, you use MongoDB native tools to perform an offline (one-time) migration of a database from an on-premises or cloud instance of MongoDB to Azure Cosmos DB for MongoDB vCore. The MongoDB native tools are a set of binaries that facilitate data manipulation on an existing MongoDB instance. This article focuses on migrating data out of a MongoDB instance using *mongoexport/mongoimport* or *mongodump/mongorestore*. Because the native tools connect to MongoDB using connection strings, you can run the tools anywhere. The native tools can be the simplest solution for small datasets where total migration time isn't a concern.
+
+## Prerequisites
+
+- An existing Azure Cosmos DB for MongoDB vCore cluster.
+ - If you don't have an Azure subscription, [create an account for free](https://azure.microsoft.com/free).
+ - If you have an existing Azure subscription, [create a new Azure Cosmos DB for MongoDB vCore cluster](quickstart-portal.md).
+- [MongoDB native tools](https://www.mongodb.com/try/download/database-tools) installed on your machine.
+
+## Prepare
+
+Before starting the migration, make sure you've prepared your Azure Cosmos DB for MongoDB vCore account and your existing MongoDB instance.
+
+- MongoDB instance (source)
+  - Complete the [premigration assessment](../pre-migration-steps.md#pre-migration-assessment) to identify any incompatibilities and warnings between your source instance and target account.
+ - Ensure that your MongoDB native tools match the same version as the existing (source) MongoDB instance.
+ - If your MongoDB instance has a different version than Azure Cosmos DB for MongoDB vCore, then install both MongoDB native tool versions and use the appropriate tool version for MongoDB and Azure Cosmos DB for MongoDB vCore, respectively.
+ - Add a user with `readWrite` permissions, unless one already exists. You eventually use this credential with the *mongoexport* and *mongodump* tools.
+- Azure Cosmos DB for MongoDB vCore (target)
+ - Gather the Azure Cosmos DB for MongoDB vCore [account's credentials](./quickstart-portal.md#get-cluster-credentials).
+ - [Configure Firewall Settings](./security.md#network-security-options) on Azure Cosmos DB for MongoDB vCore.
+
+> [!TIP]
+> We recommend running these tools within the same network as the MongoDB instance to avoid further firewall issues.
+
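+To verify the version match called out in the preparation steps, you can print the installed tool versions; a quick sketch (both tools support a `--version` flag):
+
+```bash
+mongodump --version
+mongoexport --version
+```
+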
+## Choose the proper MongoDB native tool
+
+There are some high-level considerations when choosing the right MongoDB native tool for your offline migration: use *mongoexport/mongoimport* to move a subset of your data as human-readable JSON or CSV, and *mongodump/mongorestore* to move an entire database as compact BSON.
+
+## Perform the migration
+
+Migrate a collection from the source MongoDB instance to the target Azure Cosmos DB for MongoDB vCore account using your preferred native tool. For more information on selecting a tool, see [native MongoDB tools](migration-options.md#native-mongodb-tools-offline).
+
+> [!TIP]
+> If you simply have a small JSON file that you want to import into Azure Cosmos DB for MongoDB vCore, the *mongoimport* tool is a quick solution for ingesting the data.
+
+### [mongoexport/mongoimport](#tab/export-import)
+
+1. To export the data from the source MongoDB instance, open a terminal and use the ``--host``, ``--username``, and ``--password`` arguments to connect to and export JSON records.
+
+ ```bash
+ mongoexport \
+ --host <hostname><:port> \
+ --username <username> \
+ --password <password> \
+ --db <database-name> \
+ --collection <collection-name> \
+ --out <filename>.json
+ ```
+
+1. Optionally, export a subset of the MongoDB data by adding a ``--query`` argument. This argument ensures that the tool only exports documents that match the filter.
+
+ ```bash
+ mongoexport \
+ --host <hostname><:port> \
+ --username <username> \
+ --password <password> \
+ --db <database-name> \
+ --collection <collection-name> \
+ --query '{ "quantity": { "$gte": 15 } }' \
+ --out <filename>.json
+ ```
+
+1. Import the previously exported file into the target Azure Cosmos DB for MongoDB vCore account.
+
+ ```bash
+ mongoimport \
+ --file <filename>.json \
+ --type json \
+ --writeConcern="{w:0}" \
+ --db <database-name> \
+ --collection <collection-name> \
+ --ssl \
+ <target-connection-string>
+ ```
+
+1. Monitor the terminal output from *mongoimport*. The output prints lines of text to the terminal with updates on the import operation's status.
+
+### [mongodump/mongorestore](#tab/dump-restore)
+
+1. To create a data dump of all data in your MongoDB instance, open a terminal and use the ``--host``, ``--username``, and ``--password`` arguments to dump the data as native BSON.
+
+ ```bash
+ mongodump \
+ --host <hostname><:port> \
+ --username <username> \
+ --password <password> \
+ --out <dump-directory>
+ ```
+
+1. Optionally, you can specify the ``--db`` and ``--collection`` arguments to narrow the scope of the data you wish to dump:
+
+ ```bash
+ mongodump \
+ --host <hostname><:port> \
+ --username <username> \
+ --password <password> \
+ --db <database-name> \
+ --out <dump-directory>
+ ```
+
+ ```bash
+ mongodump \
+ --host <hostname><:port> \
+ --username <username> \
+ --password <password> \
+ --db <database-name> \
+ --collection <collection-name> \
+ --out <dump-directory>
+ ```
+
+1. Observe that the tool created a directory containing the dumped native BSON data. The files and folders are organized into a resource hierarchy based on the database and collection names. Each database is a folder and each collection is a `.bson` file.
+
+1. Restore the contents of any specific collection into an Azure Cosmos DB for MongoDB vCore account by specifying the collection's specific BSON file. The filename is constructed using this syntax: `<dump-directory>/<database-name>/<collection-name>.bson`.
+
+ ```bash
+ mongorestore \
+ --writeConcern="{w:0}" \
+ --db <database-name> \
+ --collection <collection-name> \
+ --ssl \
+ <dump-directory>/<database-name>/<collection-name>.bson
+ ```
+
+1. Monitor the terminal output from *mongorestore*. The output prints lines of text to the terminal with updates on the restore operation's status.
+++
+## Next step
+
+> [!div class="nextstepaction"]
+> [Build a Node.js web application](tutorial-nodejs-web-app.md)
cosmos-db How To Restore Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/vcore/how-to-restore-cluster.md
Title: Restore a cluster backup description: Restore an Azure Cosmos DB for MongoDB vCore cluster from a point in time encrypted backup snapshot.+++ --- Previously updated : 03/07/2023 Last updated : 08/28/2023 # Restore a cluster in Azure Cosmos DB for MongoDB vCore
cosmos-db How To Scale Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/vcore/how-to-scale-cluster.md
Title: Scale or configure a cluster description: Scale an Azure Cosmos DB for MongoDB vCore cluster by changing the tier and disk size or change the configuration by enabling high availability.+++ --- Previously updated : 03/07/2023 Last updated : 08/28/2023 # Scaling and configuring Your Azure Cosmos DB for MongoDB vCore cluster
cosmos-db How To Text Indexes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/vcore/how-to-text-indexes.md
- Title: Text indexes in Azure Cosmos DB for MongoDB vCore-
-description: How to configure and use text indexes in Azure Cosmos DB for MongoDB vCore
------- Previously updated : 07/26/2023--
-# Text indexes in Azure Cosmos DB for MongoDB vCore
--
-One of the key features that Azure Cosmos DB for MongoDB vCore provides is text indexing, which allows for efficient searching and querying of text-based data. The service implements version 2 text indexes which support case sensitivity but not diacritic sensitivity. In this article, we will explore the usage of text indexes in Azure Cosmos DB for MongoDB, along with practical examples and syntax to help you leverage this feature effectively.
-
-## What are Text Indexes?
-
-Text indexes in Azure Cosmos DB for MongoDB are special data structures that optimize text-based queries, making them faster and more efficient. They are designed to handle textual content like documents, articles, comments, or any other text-heavy data. Text indexes use techniques such as tokenization, stemming, and stop words to create an index that improves the performance of text-based searches.
-
-## Defining a Text Index
-
-For simplicity let us consider an example of a blog application that stores articles with the following document structure:
-
-```json
-{
- "_id": ObjectId("617a34e7a867530bff1b2346"),
- "title": "Azure Cosmos DB - A Game Changer",
- "content": "Azure Cosmos DB is a globally distributed, multi-model database service.",
- "author": "John Doe",
- "category": "Technology",
- "published": true
-}
-```
-
-To create a text index in Azure Cosmos DB for MongoDB, you can use the "createIndex" method with the "text" option. Here's an example of how to create a text index for a "title" field in a collection named "articles":
-
-```
-db.articles.createIndex({ Title: "text" })
-```
-
-While we can define only one text index per collection, Azure Cosmos DB for MongoDB allows you to create text indexes on multiple fields to enable you to perform text searches across different fields in your documents.
-
-For example, if we want to perform search on both the "title" and "content" fields, then the text index can be defined as:
-
-```
-db.articles.createIndex({ Title: "text", content: "text" })
-```
-
-## Text Index Options
-
-Text indexes in Azure Cosmos DB for MongoDB come with several options to customize their behavior. For example, you can specify the language for text analysis, set weights to prioritize certain fields, and configure case-insensitive searches. Here's an example of creating a text index with options:
-
-```
-db.articles.createIndex(
- { content: "text", Title: "text" },
- { default_language: "english", weights: { Title: 10, content: 5 }, caseSensitive: false }
-)
-```
-In this example, we have defined a text index on both the "content" and "title" fields with English language support. We have also assigned higher weights to the "title" field to prioritize it in search results.
-
-## Significance of weights in text indexes
-
-When creating a text index, you have the option to assign different weights to individual fields in the index. These weights represent the importance or relevance of each field in the search.
-
-When executing a text search query, Cosmos DB will calculate a score for each document based on the search terms and the assigned weights of the indexed fields. The score represents the relevance of the document to the search query.
--
-```
-db.articles.createIndex(
- { Title: "text", content: "text" },
- { weights: { Title: 2, content: 1 } }
-)
-```
-
-For example, let's say we have a text index with two fields: "title" and "content." We assign a weight of 2 to the "title" field and a weight of 1 to the "content" field. When a user performs a text search query with the term "Cosmos DB," the score for each document in the collection will be calculated based on the presence and frequency of the term in both the "title" and "content" fields, with higher importance given to the "title" field due to its higher weight.
-
-To look at the score of documents in the query result, you can use the $meta projection operator along with the textScore field in your query projection.
--
-```
-db.articles.find(
- { $text: { $search: "Cosmos DB" } },
- { score: { $meta: "textScore" } }
-)
-```
-
-## Performing a Text Search
-
-Once the text index is created, you can perform text searches using the "text" operator in your queries. The text operator takes a search string and matches it against the text index to find relevant documents. Here's an example of a text search query:
-
-```
-db.articles.find({ $text: { $search: "Azure Cosmos DB" } })
-```
-
-This query will return all documents in the "articles" collection that contain the terms "Azure" and "Cosmos DB" in any order.
-
-## Limitations
-
-* Only one text index can be defined on a collection.
-* Text indexes support simple text searches and do not provide advanced search capabilities like regular expression searches.
-* Hint() is not supported in combination with a query using $text expression.
-* Sort operations cannot leverage the ordering of the text index in MongoDB.
-* Text indexes can be relatively large, consuming significant storage space compared to other index types.
---
-## Dropping a text index
-
-To drop a text index in MongoDB, you can use the dropIndex() method on the collection and specify the index key or name for the text index you want to remove.
-
-```
-db.articles.dropIndex({ Title: "text" })
-```
-or
-```
-db.articles.dropIndex("title_text")
-```
cosmos-db How To Transactions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/vcore/how-to-transactions.md
+
+ Title: Group multiple operations in transactions
+
+description: Support atomicity, consistency, isolation, and durability with transactions in Azure Cosmos DB for MongoDB vCore.
++++++ Last updated : 08/28/2023
+# CustomerIntent: As a developer, I want to use transactions so that I can group multiple operations together.
++
+# Group multiple operations in transactions in Azure Cosmos DB for MongoDB vCore
++
+It's common to want to group multiple operations into a single transaction that either commits or rolls back together. Database transactions typically implement four key **ACID** properties. ACID stands for:
+
+- **Atomicity**: Transactions complete entirely or not at all.
+- **Consistency**: Databases transition from one consistent state to another.
+- **Isolation**: Individual transactions are shielded from simultaneous ones.
+- **Durability**: Finished transactions are permanent, ensuring data remains consistent, even during system failures.
+
+The ACID principles in database management ensure transactions are processed reliably. Azure Cosmos DB for MongoDB vCore implements these principles, enabling you to create transactions for multiple operations.
+
+## Prerequisites
+
+- An existing Azure Cosmos DB for MongoDB vCore cluster.
+ - If you don't have an Azure subscription, [create an account for free](https://azure.microsoft.com/free).
+ - If you have an existing Azure subscription, [create a new Azure Cosmos DB for MongoDB vCore cluster](quickstart-portal.md).
+
+## Create a transaction
+
+Create a new transaction using the appropriate methods from the developer language of your choice. These methods typically include some wrapping mechanism to group multiple operations together and a method to commit the transaction.
+
+### [JavaScript](#tab/javascript)
+
+> [!NOTE]
+> The samples in this section assume you have a collection variable named `collection`.
+
+1. Use `startSession()` to create a client session for the transaction operation.
+
+ ```javascript
+ const transactionSession = client.startSession();
+ ```
+
+1. Create a transaction using `withTransaction()` and place all relevant transaction operations within the callback.
+
+ ```javascript
+ await transactionSession.withTransaction(async () => {
+   await collection.insertOne({ name: "Coolarn shirt", price: 38.00 }, { session: transactionSession });
+   await collection.insertOne({ name: "Coolarn shirt button", price: 1.50 }, { session: transactionSession });
+ });
+ ```
+
+1. Commit the transaction using `commitTransaction()`.
+
+ ```javascript
+ transactionSession.commitTransaction();
+ ```
+
+1. Use `endSession()` to end the transaction session.
+
+ ```javascript
+ transactionSession.endSession();
+ ```
+
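+Putting the steps together, a minimal end-to-end sketch might look like the following; the connection string, database, and collection names are assumed for illustration:
+
+```javascript
+const { MongoClient } = require('mongodb');
+
+async function run() {
+  const client = new MongoClient('<connection-string>');
+  await client.connect();
+  const collection = client.db('cosmicworks').collection('products');
+  const transactionSession = client.startSession();
+  try {
+    // withTransaction starts the transaction, retries it on transient
+    // errors, and commits it when the callback resolves.
+    await transactionSession.withTransaction(async () => {
+      await collection.insertOne({ name: 'Coolarn shirt', price: 38.00 }, { session: transactionSession });
+      await collection.insertOne({ name: 'Coolarn shirt button', price: 1.50 }, { session: transactionSession });
+    });
+  } finally {
+    await transactionSession.endSession();
+    await client.close();
+  }
+}
+
+run().catch(console.error);
+```
+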
+### [Java](#tab/java)
+
+> [!NOTE]
+> The samples in this section assume you have a collection variable named `collection`.
+
+1. Use `startSession()` to create a client session for the transaction operation within a `try` block.
+
+ ```java
+ try (ClientSession session = client.startSession()) {
+ }
+ ```
+
+1. Create a transaction using `startTransaction()`.
+
+ ```java
+ session.startTransaction();
+ ```
+
+1. Include all relevant transaction operations.
+
+ ```java
+ collection.insertOne(session, new Document().append("name", "Coolarn shirt").append("price", 38.00));
+ collection.insertOne(session, new Document().append("name", "Coolarn shirt button").append("price", 1.50));
+ ```
+
+1. Commit the transaction using `commitTransaction()`.
+
+ ```java
+   session.commitTransaction();
+ ```
+
+### [Python](#tab/python)
+
+> [!NOTE]
+> The samples in this section assume you have a collection variable named `coll`.
+
+1. Use `start_session()` to create a client session for the transaction operation.
+
+ ```python
+ with client.start_session() as ts:
+ ```
+
+1. Within the session block, create a transaction using `start_transaction()`.
+
+ ```python
+ ts.start_transaction()
+ ```
+
+1. Include all relevant transaction operations.
+
+ ```python
+ coll.insert_one({ 'name': 'Coolarn shirt', 'price': 38.00 }, session=ts)
+ coll.insert_one({ 'name': 'Coolarn shirt button', 'price': 1.50 }, session=ts)
+ ```
+
+1. Commit the transaction using `commit_transaction()`.
+
+ ```python
+ ts.commit_transaction()
+ ```
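+
+Putting the steps together, a minimal end-to-end sketch might look like this; the connection string, database, and collection names are assumed for illustration:
+
+```python
+from pymongo import MongoClient
+
+client = MongoClient('<connection-string>')
+coll = client['cosmicworks']['products']
+
+with client.start_session() as ts:
+    ts.start_transaction()
+    coll.insert_one({'name': 'Coolarn shirt', 'price': 38.00}, session=ts)
+    coll.insert_one({'name': 'Coolarn shirt button', 'price': 1.50}, session=ts)
+    ts.commit_transaction()
+```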
+++
+## Roll back a transaction
+
+Occasionally, you may need to undo a transaction before it's committed.
+
+### [JavaScript](#tab/javascript)
+
+1. Using an existing transaction session, abort the transaction with `abortTransaction()`.
+
+ ```javascript
+ transactionSession.abortTransaction();
+ ```
+
+1. End the transaction session.
+
+ ```javascript
+ transactionSession.endSession();
+ ```
+
+### [Java](#tab/java)
+
+1. Using an existing transaction session, abort the transaction with `abortTransaction()`.
+
+ ```java
+   session.abortTransaction();
+ ```
+
+### [Python](#tab/python)
+
+1. Using an existing transaction session, abort the transaction with `abort_transaction()`.
+
+   ```python
+ ts.abort_transaction()
+ ```
+++
+## Next step
+
+> [!div class="nextstepaction"]
+> [Build a Node.js web application](tutorial-nodejs-web-app.md)
cosmos-db Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/vcore/introduction.md
Title: Introduction/Overview description: Learn about Azure Cosmos DB for MongoDB vCore, a fully managed MongoDB-compatible database for building modern applications with a familiar architecture.+++ --- Previously updated : 03/07/2023 Last updated : 08/28/2023 # What is Azure Cosmos DB for MongoDB vCore? (Preview)
cosmos-db Migration Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/vcore/migration-options.md
Title: Migrate data from MongoDB
+ Title: Options to migrate data from MongoDB
-description: Learn about the various options to migrate your data from other MongoDB sources to Azure Cosmos DB for MongoDB vCore.
---
+description: Review various options to migrate your data from other MongoDB sources to Azure Cosmos DB for MongoDB vCore.
+++ Previously updated : 03/09/2023 Last updated : 08/28/2023
+# Options to migrate data from MongoDB to Azure Cosmos DB for MongoDB vCore
-# Options for migrating data to Azure Cosmos DB for MongoDB vCore
-
-## Pre-migration assessment
-
+This article describes the various options to lift and shift your MongoDB workloads to the Azure Cosmos DB for MongoDB vCore-based offering.
+## Premigration assessment
Assessment involves finding out whether you're using the [features and syntax that are supported](./compatibility.md). The aim of this stage is to create a list of incompatibilities and warnings, if any. After you have the assessment results, you can try to address the findings during the rest of the migration planning. The [Azure Cosmos DB Migration for MongoDB extension](/sql/azure-data-studio/extensions/database-migration-for-mongo-extension) in Azure Data Studio helps you assess a MongoDB workload for migrating to Azure Cosmos DB for MongoDB. You can use this extension to run an end-to-end assessment on your workload and find out the actions that you may need to take to seamlessly migrate your workloads to Azure Cosmos DB. During the assessment of a MongoDB endpoint, the extension reports all the discovered resources. -
-> [!NOTE]
+> [!TIP]
> We recommend that you review [the supported features and syntax](./compatibility.md) in detail and perform a proof of concept prior to the actual migration.
+## Native MongoDB tools (Offline)
-This document describes the various options to lift and shift your MongoDB workloads to Azure Cosmos DB for MongoDB vCore-based offering.
+You can use native MongoDB tools such as *mongodump/mongorestore* and *mongoexport/mongoimport* to migrate datasets offline (without replicating live changes) to the Azure Cosmos DB for MongoDB vCore-based offering.
-## Native MongoDB tools (Offline)
+| Scenario | MongoDB native tool |
+| | |
+| Move subset of database data (JSON/CSV-based) | *mongoexport/mongoimport* |
+| Move whole database (BSON-based) | *mongodump/mongorestore* |
-- You can use the native MongoDB tools such as *mongodump/mongorestore*, *mongoexport/mongoimport* to migrate datasets offline (without replicating live changes) to Azure Cosmos DB for MongoDB vCore-based offering.-- *mongodump/mongorestore* works well for migrating your entire MongoDB database. The compact BSON format makes more efficient use of network resources as the data is inserted into Azure Cosmos DB.
+- *mongoexport/mongoimport* is the best pair of migration tools for migrating a subset of your MongoDB database.
+ - *mongoexport* exports your existing data to a human-readable JSON or CSV file. *mongoexport* takes an argument specifying the subset of your existing data to export.
+  - *mongoimport* opens a JSON or CSV file and inserts the content into the target database instance (Azure Cosmos DB for MongoDB vCore in this case).
+  - JSON and CSV aren't compact formats; you may incur excess network charges as *mongoimport* sends data to Azure Cosmos DB for MongoDB vCore.
+- *mongodump/mongorestore* is the best pair of migration tools for migrating your entire MongoDB database. The compact BSON format makes more efficient use of network resources as the data is inserted into Azure Cosmos DB for MongoDB vCore.
- *mongodump* exports your existing data as a BSON file.
- - *mongorestore* imports your BSON file dump into Azure Cosmos DB.
-- *mongoexport/mongoimport* works well for migrating a subset of your MongoDB database.
- - *mongoexport* exports your existing data to a human-readable JSON or CSV file.
- - *mongoexport* takes an argument specifying the subset of your existing data to export.
- - *mongoimport* opens a JSON or CSV file and inserts the content into the target database instance (Azure Cosmos DB in this case.).
- - Since JSON and CSV aren't compact formats, you may incur excess network charges as *mongoimport* sends data to Azure Cosmos DB.
-- Here's how you can [migrate data to Azure Cosmos DB for MongoDB vCore using the native MongoDB tools](../tutorial-mongotools-cosmos-db.md#perform-the-migration).
+ - *mongorestore* imports your BSON file dump into Azure Cosmos DB for MongoDB vCore.
+
+> [!NOTE]
+> The MongoDB native tools can move data only as fast as the host hardware allows.
## Data migration using Azure Databricks (Offline/Online)
This document describes the various options to lift and shift your MongoDB workl
## Next steps -- Migrate data to Azure Cosmos DB for MongoDB vCore [using native MongoDB tools](../tutorial-mongotools-cosmos-db.md).
+- Migrate data to Azure Cosmos DB for MongoDB vCore [using native MongoDB tools](how-to-migrate-native-tools.md).
- Migrate data to Azure Cosmos DB for MongoDB vCore [using Azure Databricks](../migrate-databricks.md).
cosmos-db Quickstart Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/vcore/quickstart-bicep.md
Title: |
Quickstart: Deploy a cluster by using a Bicep template description: In this quickstart, create a new Azure Cosmos DB for MongoDB vCore cluster to store databases, collections, and documents by using a Bicep template.+++ --- Previously updated : 03/07/2023 Last updated : 08/28/2023
When you're done with your Azure Cosmos DB for MongoDB vCore cluster, you can de
-## Next steps
+## Next step
In this guide, you learned how to create an Azure Cosmos DB for MongoDB vCore cluster. You can now migrate data to your cluster.
cosmos-db Quickstart Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/vcore/quickstart-portal.md
Title: |
Quickstart: Create a new cluster description: In this quickstart, create a new Azure Cosmos DB for MongoDB vCore cluster to store databases, collections, and documents by using the Azure portal.+++ --- Previously updated : 03/07/2023 Last updated : 08/28/2023 # Quickstart: Create an Azure Cosmos DB for MongoDB vCore cluster by using the Azure portal
When you're done with Azure Cosmos DB for MongoDB vCore cluster, you can delete
:::image type="content" source="media/quickstart-portal/delete-resource-group-dialog.png" alt-text="Screenshot of the delete resource group confirmation dialog with the name of the group filled out.":::
-## Next steps
+## Next step
In this guide, you learned how to create an Azure Cosmos DB for MongoDB vCore cluster. You can now migrate data to your cluster.
cosmos-db Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/vcore/security.md
Title: Security options and features description: Review security options and built-in security mechanisms for Azure Cosmos DB for MongoDB vCore accounts.--- Previously updated : 02/07/2023++++ Last updated : 08/28/2023 # Security in Azure Cosmos DB for MongoDB vCore + This page outlines the multiple layers of security available to protect the data in your cluster. ## In transit
cosmos-db Transactions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/vcore/transactions.md
- Title: ACID Transactions in Azure Cosmos DB for MongoDB vCore
-description: Delve deep into the importance and functionality of ACID transactions in Azure Cosmos DB's MongoDB vCore.
--- Previously updated : 08/08/2023---
-# ACID Transactions in Azure Cosmos DB for MongoDB vCore
--
-ACID stands for Atomicity, Consistency, Isolation, and Durability. These principles in database management ensure transactions are processed reliably:
--- **Atomicity**: Transactions complete entirely or not at all.-- **Consistency**: Databases transition from one consistent state to another.-- **Isolation**: Individual transactions are shielded from simultaneous ones.-- **Durability**: Finished transactions are permanent, ensuring data remains consistent, even during system failures.-
-Azure Cosmos DB for MongoDB vCore builds off these principles, enabling developers to harness the advantages of ACID properties while benefiting from the innate flexibility and performance of Cosmos DB. This native feature is pivotal for a range of applications, from basic ones to comprehensive enterprise-grade solutions, especially when it comes to preserving transactional data integrity across distributed sharded clusters.
-
-## Why Use Azure Cosmos DB for MongoDB vCore?
-- **Native Vector Search**: Power your AI apps directly within Azure Cosmos DB, leveraging native high-dimensional data search and bypassing pricier external solutions.-- **Fully-Managed Azure Service**: Rely on a unified, dedicated support team for seamless database operations.-- **Effortless Azure Integrations**: Easily connect with a wide range of Azure services without the typical maintenance hassles.-
-## Next Steps
--- Begin your journey with ACID transactions in Azure Cosmos DB for MongoDB vCore by accessing our [guide and tutorials](quickstart-portal.md).-- Explore further capabilities and benefits of Azure Cosmos DB for MongoDB vCore in our [documentation](introduction.md).-
cosmos-db Troubleshoot Common Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/vcore/troubleshoot-common-issues.md
+
+ Title: Troubleshoot common errors in Azure Cosmos DB for MongoDB vCore
+description: This doc discusses the ways to troubleshoot common issues encountered in Azure Cosmos DB for MongoDB vCore.
+++ Last updated : 08/11/2023++++
+# Troubleshoot common issues in Azure Cosmos DB for MongoDB vCore
+
+This guide helps you resolve issues you might encounter when using Azure Cosmos DB for MongoDB vCore. It provides solutions for connectivity problems, error scenarios, and optimization challenges, offering practical insights to improve your experience.
+
+> [!NOTE]
+> These solutions are general guidelines and may require specific configurations based on individual situations. Always refer to official documentation and support resources for the most accurate and up-to-date information.
+
+## Common errors and solutions
+
+### Unable to connect to Azure Cosmos DB for MongoDB vCore - timeout error
+This issue might occur when the cluster doesn't have the correct firewall rules enabled. If you're trying to access the cluster from a non-Azure IP range, you need to add extra firewall rules. Refer to [Security options and features - Azure Cosmos DB for MongoDB vCore](./security.md#network-security-options) for detailed steps. Firewall rules can be configured in the portal's Networking setting for the cluster. Options include adding a known IP address/range or enabling public IP access.
+++
+### Unable to connect with a DNSClient.DnsResponseException error
+#### Debug connectivity issues
+Windows users: <br>
+*psping* doesn't work against the cluster. Use *nslookup* to confirm that the cluster is reachable and discoverable; if name resolution succeeds, network issues are unlikely.
+
+Unix users: <br>
+For socket or network-related exceptions, potential network connectivity issues might be hindering the application from establishing a connection with the Azure Cosmos DB for MongoDB endpoint.
+
+To check connectivity, run the following command:
+```bash
+nc -v <accountName>.documents.azure.com 10250
+```
+If the TCP connection to port 10250/10255 fails, an environment firewall may be blocking the Azure Cosmos DB connection. Submit a support ticket by using the link at the bottom of this page.
+++
+#### Verify your connection string
+Use only the connection string provided in the portal; avoid variations. In particular, connection strings that use the `mongodb+srv://` protocol or the `c.` prefix aren't recommended. If the issue persists, submit a support ticket and share the application or client-side driver logs so the team can debug the connectivity issue.
++
+## Next steps
+If you've completed all the troubleshooting steps and still haven't found a solution, consider submitting a [support ticket](https://azure.microsoft.com/support/create-ticket/).
+
cosmos-db Tutorial Nodejs Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/vcore/tutorial-nodejs-web-app.md
Title: |
Tutorial: Build a Node.js web application description: In this tutorial, create a Node.js web application that connects to an Azure Cosmos DB for MongoDB vCore cluster and manages documents within a collection.+++ --- Previously updated : 03/09/2023 Last updated : 08/28/2023
+# CustomerIntent: As a developer, I want to connect to Azure Cosmos DB for MongoDB vCore from my Node.js application, so I can build MERN stack applications.
# Tutorial: Connect a Node.js web app with Azure Cosmos DB for MongoDB vCore
-The [MERN (MongoDB, Express, React.js, Node.js) stack](https://www.mongodb.com/mern-stack) is a popular collection of technologies used to build many modern web applications. With Azure Cosmos DB for MongoDB vCore, you can build a new web application or migrate an existing application using MongoDB drivers that you're already familiar with. In this tutorial, you:
+
+In this tutorial, you build a Node.js web application that connects to Azure Cosmos DB for MongoDB vCore. The MongoDB, Express, React.js, Node.js (MERN) stack is a popular collection of technologies used to build many modern web applications. With Azure Cosmos DB for MongoDB vCore, you can build a new web application or migrate an existing application using MongoDB drivers that you're already familiar with. In this tutorial, you:
> [!div class="checklist"]
->
> - Set up your environment > - Test the MERN application with a local MongoDB container > - Test the MERN application with the Azure Cosmos DB for MongoDB vCore cluster > - Deploy the MERN application to Azure App Service
->
## Prerequisites To complete this tutorial, you need the following resources: - An existing Azure Cosmos DB for MongoDB vCore cluster.
- - If you don't have an Azure subscription, [create an account for free](https://azure.microsoft.com/free).
- - If you have an existing Azure subscription, [create a new Azure Cosmos DB for MongoDB vCore cluster](quickstart-portal.md?tabs=azure-cli).
-- A [GitHub account](https://github.com/join).
- - GitHub comes with free Codespaces hours for all users. For more information, see [GitHub Codespaces free utilization](https://github.com/features/codespaces#pricing).
+- A GitHub account.
+ - GitHub comes with free Codespaces hours for all users.
-## 1 - Configure dev environment
+## Configure development environment
-A [development container](https://containers.dev/) environment is available with all dependencies required to complete every exercise in this project. You can run the development container in GitHub Codespaces or locally using Visual Studio Code.
+A development container environment is available with all dependencies required to complete every exercise in this project. You can run the development container in GitHub Codespaces or locally using Visual Studio Code.
### [GitHub Codespaces](#tab/github-codespaces)
-[GitHub Codespaces](https://docs.github.com/codespaces) runs a development container managed by GitHub with [Visual Studio Code for the Web](https://code.visualstudio.com/docs/editor/vscode-web) as the user interface. For the most straightforward development environment, use GitHub Codespaces so that you have the correct developer tools and dependencies preinstalled to complete this training module.
+GitHub Codespaces runs a development container managed by GitHub with Visual Studio Code for the Web as the user interface. For the most straightforward development environment, use GitHub Codespaces so that you have the correct developer tools and dependencies preinstalled to complete this training module.
> [!IMPORTANT]
-> All GitHub accounts can use Codespaces for up to 60 hours free each month with 2 core instances. For more information, see [GitHub Codespaces monthly included storage and core hours](https://docs.github.com/billing/managing-billing-for-github-codespaces/about-billing-for-github-codespaces#monthly-included-storage-and-core-hours-for-personal-accounts).
+> All GitHub accounts can use Codespaces for up to 60 hours free each month with 2 core instances.
-1. Start the process to create a new GitHub Codespace on the `main` branch of the [`azure-samples/msdocs-azure-cosmos-db-mongodb-mern-web-app`](https://github.com/azure-samples/msdocs-azure-cosmos-db-mongodb-mern-web-app) GitHub repository.
+1. Start the process to create a new GitHub Codespace on the `main` branch of the `azure-samples/msdocs-azure-cosmos-db-mongodb-mern-web-app` GitHub repository.
- > [!div class="nextstepaction"]
- > [Open this project in GitHub Codespaces](https://github.com/codespaces/new?hide_repo_select=true&ref=main&repo=611024069)
+ [![Open in GitHub Codespaces](https://github.com/codespaces/badge.svg)](https://codespaces.new/Azure-Samples/msdocs-azure-cosmos-db-mongodb-mern-web-app?quickstart=1)
1. On the **Create codespace** page, review the codespace configuration settings and then select **Create new codespace**
A [development container](https://containers.dev/) environment is available with
### [Visual Studio Code](#tab/visual-studio-code)
-The [Dev Containers extension](https://marketplace.visualstudio.com/items?itemName=ms-vscode-remote.remote-containers) for Visual Studio Code requires [Docker](https://docs.docker.com/) to be installed on your local machine. The extension hosts the development container locally using the Docker host with the correct developer tools and dependencies preinstalled to complete this training module.
+The **Dev Containers extension** for Visual Studio Code requires **Docker** to be installed on your local machine. The extension hosts the development container locally using the Docker host with the correct developer tools and dependencies preinstalled to complete this training module.
1. Open **Visual Studio Code** in the context of an empty directory.
-1. Ensure that you have the [Dev Containers extension](https://marketplace.visualstudio.com/items?itemName=ms-vscode-remote.remote-containers) installed in Visual Studio Code.
+1. Ensure that you have the **Dev Containers extension** installed in Visual Studio Code.
1. Open a new terminal in the editor.
The [Dev Containers extension](https://marketplace.visualstudio.com/items?itemNa
> > :::image type="content" source="media/tutorial-nodejs-web-app/open-terminal-option.png" lightbox="media/tutorial-nodejs-web-app/open-terminal-option.png" alt-text="Screenshot of the menu option to open a new terminal.":::
-1. Clone the [`azure-samples/msdocs-azure-cosmos-db-mongodb-mern-web-app`](https://github.com/azure-samples/msdocs-azure-cosmos-db-mongodb-mern-web-app) GitHub repository into the current directory.
+1. Clone the `azure-samples/msdocs-azure-cosmos-db-mongodb-mern-web-app` GitHub repository into the current directory.
   ```bash
   git clone https://github.com/azure-samples/msdocs-azure-cosmos-db-mongodb-mern-web-app.git .
   ```
The [Dev Containers extension](https://marketplace.visualstudio.com/items?itemNa
-## 2 - Test the MERN application's API with the MongoDB container
+## Test the MERN application's API with the MongoDB container
Start by running the sample application's API with the local MongoDB container to validate that the application works.
Start by running the sample application's API with the local MongoDB container t
1. Close the terminal.
-## 3 - Test the MERN application with the Azure Cosmos DB for MongoDB vCore cluster
+## Test the MERN application with the Azure Cosmos DB for MongoDB vCore cluster
Now, let's validate that the application works seamlessly with Azure Cosmos DB for MongoDB vCore. For this task, populate the pre-existing cluster with seed data using the MongoDB shell and then update the API's connection string.
-1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Sign in to the Azure portal (<https://portal.azure.com>).
1. Navigate to the existing Azure Cosmos DB for MongoDB vCore cluster page.
Now, let's validate that the application works seamlessly with Azure Cosmos DB f
1. Close the extra browser tab/window. Then, close the terminal.
-## 4 - Deploy the MERN application to Azure App Service
+## Deploy the MERN application to Azure App Service
Deploy the service and client to Azure App Service to prove that the application works end-to-end. Use secrets in the web apps to store environment variables with credentials and API endpoints.
Deploy the service and client to Azure App Service to prove that the application
clientAppName="client-app-$suffix" ```
-1. If you haven't already, sign in to the Azure CLI using the [`az login --use-device-code`](/cli/azure/reference-index#az-login) command.
+1. If you haven't already, sign in to the Azure CLI using the `az login --use-device-code` command.
1. Change the current working directory to the **server/** path.
Deploy the service and client to Azure App Service to prove that the application
cd server ```
-1. Create a new web app for the server component of the MERN application with [`az webapp up`](/cli/azure/webapp#az-webapp-up).
+1. Create a new web app for the server component of the MERN application with `az webapp up`.
```shell az webapp up \
Deploy the service and client to Azure App Service to prove that the application
--runtime "NODE|18-lts" ```
-1. Create a new connection string setting for the server web app named `CONNECTION_STRING` with [`az webapp config connection-string set`](/cli/azure/webapp/config/connection-string#az-webapp-config-connection-string-set). Use the same value for the connection string you used with the MongoDB shell and **.env** file earlier in this tutorial.
+1. Create a new connection string setting for the server web app named `CONNECTION_STRING` with `az webapp config connection-string set`. Use the same value for the connection string you used with the MongoDB shell and **.env** file earlier in this tutorial.
```shell az webapp config connection-string set \
Deploy the service and client to Azure App Service to prove that the application
--settings "CONNECTION_STRING=<mongodb-connection-string>" ```
-1. Get the URI for the server web app with [`az webapp show`](/cli/azure/webapp#az-webapp-show) and store it in a shell variable name d **serverUri**.
+1. Get the URI for the server web app with `az webapp show` and store it in a shell variable named **serverUri**.
```azurecli serverUri=$(az webapp show \
Deploy the service and client to Azure App Service to prove that the application
--output tsv) ```
-1. Use the [`open-cli`](https://www.npmjs.com/package/open-cli) package and command from NuGet with `npx` to open a browser window using the URI for the server web app. Validate that the server app is returning your JSON array data from the MongoDB vCore cluster.
+1. Use the `open-cli` package and command from npm with `npx` to open a browser window using the URI for the server web app. Validate that the server app is returning your JSON array data from the MongoDB vCore cluster.
   ```shell
   npx open-cli "https://$serverUri/products" --yes
   ```
Deploy the service and client to Azure App Service to prove that the application
cd ../client ```
-1. Create a new web app for the client component of the MERN application with [`az webapp up`](/cli/azure/webapp#az-webapp-up).
+1. Create a new web app for the client component of the MERN application with `az webapp up`.
```shell az webapp up \
Deploy the service and client to Azure App Service to prove that the application
--runtime "NODE|18-lts" ```
-1. Create a new app setting for the client web app named `REACT_APP_API_ENDPOINT` with [`az webapp config appsettings set`](/cli/azure/webapp/config/appsettings#az-webapp-config-appsettings-set). Use the server API endpoint stored in the **serverUri** shell variable.
+1. Create a new app setting for the client web app named `REACT_APP_API_ENDPOINT` with `az webapp config appsettings set`. Use the server API endpoint stored in the **serverUri** shell variable.
```shell az webapp config appsettings set \
Deploy the service and client to Azure App Service to prove that the application
--settings "REACT_APP_API_ENDPOINT=https://$serverUri" ```
-1. Get the URI for the client web app with [`az webapp show`](/cli/azure/webapp#az-webapp-show) and store it in a shell variable name d **clientUri**.
+1. Get the URI for the client web app with `az webapp show` and store it in a shell variable named **clientUri**.
```azurecli clientUri=$(az webapp show \
Deploy the service and client to Azure App Service to prove that the application
--output tsv) ```
-1. Use the [`open-cli`](https://www.npmjs.com/package/open-cli) package and command from NuGet with `npx` to open a browser window using the URI for the client web app. Validate that the client app is rendering data from the server app's API.
+1. Use the `open-cli` package and command from npm with `npx` to open a browser window using the URI for the client web app. Validate that the client app is rendering data from the server app's API.
   ```shell
   npx open-cli "https://$clientUri" --yes
   ```
Deploy the service and client to Azure App Service to prove that the application
When you're working in your own subscription, at the end of a project, it's a good idea to remove the resources that you no longer need. Resources left running can cost you money. You can delete resources individually or delete the resource group to delete the entire set of resources.
-1. To delete the entire resource group, use [`az group delete`](/cli/azure/group#az-group-delete).
+1. To delete the entire resource group, use `az group delete`.
```azurecli az group delete \
When you're working in your own subscription, at the end of a project, it's a go
--yes ```
-1. Validate that the resource group is deleted using [`az group list`](/cli/azure/group#az-group-list).
+1. Validate that the resource group is deleted using `az group list`.
   ```azurecli
   az group list
   ```
You may also wish to clean up your development environment or return it to its t
Deleting the GitHub Codespaces environment ensures that you can maximize the free per-core-hours entitlement for your account.
-> [!IMPORTANT]
-> For more information about your GitHub account's entitlements, see [GitHub Codespaces monthly included storage and core hours](https://docs.github.com/billing/managing-billing-for-github-codespaces/about-billing-for-github-codespaces#monthly-included-storage-and-core-hours-for-personal-accounts).
- 1. Sign into the GitHub Codespaces dashboard (<https://github.com/codespaces>).
-1. Locate your currently running codespaces sourced from the [`azure-samples/msdocs-azure-cosmos-db-mongodb-mern-web-app`](https://github.com/azure-samples/msdocs-azure-cosmos-db-mongodb-mern-web-app) GitHub repository.
+1. Locate your currently running codespaces sourced from the `azure-samples/msdocs-azure-cosmos-db-mongodb-mern-web-app` GitHub repository.
:::image type="content" source="media/tutorial-nodejs-web-app/codespace-dashboard.png" alt-text="Screenshot of all the running codespaces including their status and templates.":::
You aren't necessarily required to clean up your local environment, but you can
-## Next steps
+## Next step
Now that you have built your first application for the MongoDB vCore cluster, learn how to migrate your data to Azure Cosmos DB.
cosmos-db Vector Search Ai https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/vcore/vector-search-ai.md
+
+ Title: Build AI apps with vector search
+
+description: Enhance AI-powered applications with Retrieval Augmented Generation (RAG) by using Azure Cosmos DB for MongoDB vCore vector search.
++++++ Last updated : 08/28/2023++
+# Build AI apps with Azure Cosmos DB for MongoDB vCore vector search
++
+Language models available in Azure OpenAI Service can elevate the capabilities of your AI-driven applications. To fully unleash the potential of language models, you must give them access to timely and relevant data from your application's data store. You can accomplish this process, known as Retrieval Augmented Generation (RAG), by using Azure Cosmos DB.
+
+This article delves into the core concepts of RAG. It provides links to tutorials and sample code that exemplify RAG strategies by using vector search in Azure Cosmos DB for MongoDB vCore.
+
+RAG elevates AI-powered applications by incorporating external knowledge and data into model inputs. With vector search in Azure Cosmos DB for MongoDB vCore, this process becomes seamless. You can use it to integrate the most pertinent information into your AI models with minimal effort.
+
+By using [embeddings](../../../ai-services/openai/tutorials/embeddings.md) and vector search, you can provide your AI applications with the context that they need to excel. Through the provided tutorials and code samples, you can become proficient in using RAG to create smarter and more context-aware AI solutions.
+
+## What is Retrieval Augmented Generation?
+
+RAG uses external knowledge and models to efficiently manage custom data or domain-specific expertise. This process involves extracting information from an external data source and integrating it into the model's input through prompt engineering. A robust approach is essential to identify the most pertinent data from the external source within the [token limitations of a request](../../../ai-services/openai/quotas-limits.md).
+
+RAG addresses these limitations by using embeddings, which convert data into vectors. Embeddings capture the semantic essence of the text and enable context comprehension beyond simple keywords.
+
+## What is vector search?
+
+[Vector search](./vector-search.md) is an approach that enables the discovery of analogous items based on shared data characteristics, rather than requiring precise matches within a property field.
+
+This method is invaluable in applications like text similarity searches, image association, recommendation systems, and anomaly detection. Its functionality revolves around the use of vector representations (sequences of numerical values) that are generated from your data via machine learning models or embeddings APIs. Examples of such APIs include [Azure OpenAI embeddings](/azure/ai-services/openai/how-to/embeddings) or [Hugging Face on Azure](https://azure.microsoft.com/solutions/hugging-face-on-azure/).
+
+The technique measures the distance between your query vector and the data vectors. The data vectors that are closest to your query vector are identified as the most semantically similar.
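+
+To make the similarity metric concrete, here's a minimal sketch of cosine similarity over toy three-dimensional vectors (real embeddings have hundreds or thousands of dimensions):
+
+```python
+import math
+
+def cosine_similarity(a: list[float], b: list[float]) -> float:
+    # Higher values mean the vectors (and the data they represent) are more alike.
+    dot = sum(x * y for x, y in zip(a, b))
+    norm_a = math.sqrt(sum(x * x for x in a))
+    norm_b = math.sqrt(sum(y * y for y in b))
+    return dot / (norm_a * norm_b)
+
+print(cosine_similarity([1.0, 0.0, 1.0], [0.9, 0.1, 0.8]))  # near 1.0 -> similar
+print(cosine_similarity([1.0, 0.0, 1.0], [0.0, 1.0, 0.0]))  # 0.0 -> dissimilar
+```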
+
+## How does vector search work in Azure Cosmos DB for MongoDB vCore?
+
+You can truly harness the power of RAG through the native vector search capability in Azure Cosmos DB for MongoDB vCore. This native search fuses AI-focused applications with stored data in Azure Cosmos DB.
+
+Vector search optimally stores, indexes, and searches high-dimensional vector data directly within Azure Cosmos DB for MongoDB vCore, alongside other application data. This capability eliminates the need to migrate data to costlier alternatives for vector search functionality.
+
+## Code samples and tutorials
+
+- [.NET retail chatbot demo](https://github.com/AzureCosmosDB/VectorSearchAiAssistant/tree/mongovcorev2): Learn how to use .NET to build a chatbot that demonstrates the potential of RAG in a retail context.
+- [.NET tutorial - recipe chatbot](https://github.com/microsoft/AzureDataRetrievalAugmentedGenerationSamples/tree/main/C%23/CosmosDB-MongoDBvCore): Walk through creating a recipe chatbot by using .NET, to showcase the application of RAG in a culinary scenario.
+- [Python notebook tutorial - Azure product chatbot](https://github.com/microsoft/AzureDataRetrievalAugmentedGenerationSamples/tree/main/Python/CosmosDB-MongoDB-vCore): Learn how to construct an Azure product chatbot that highlights the benefits of RAG.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Vector search](vector-search.md)
cosmos-db Vector Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/vcore/vector-search.md
description: Use vector indexing and search to integrate AI-based applications in Azure Cosmos DB for MongoDB vCore. -+ - Previously updated : 05/10/2023 Last updated : 08/28/2023 # Use vector search on embeddings in Azure Cosmos DB for MongoDB vCore
To create a vector index, use the following `createIndexes` template:
| `index_name` | string | Unique name of the index. |
| `path_to_property` | string | Path to the property that contains the vector. This path can be a top-level property or a dot notation path to the property. If a dot notation path is used, then all the nonleaf elements can't be arrays. |
| `kind` | string | Type of vector index to create. Currently, `vector-ivf` is the only supported index option. |
-| `numLists` | integer | This integer is the number of clusters that the inverted file (IVF) index uses to group the vector data. We recommend that `numLists` is set to `rowCount()/1000` for up to 1 million rows and to `sqrt(rowCount)` for more than 1 million rows. |
+| `numLists` | integer | This integer is the number of clusters that the inverted file (IVF) index uses to group the vector data. We recommend that `numLists` is set to `rowCount()/1000` for up to 1 million rows and to `sqrt(rowCount)` for more than 1 million rows. Using a `numLists` value of `1` is akin to performing brute-force search. |
| `similarity` | string | Similarity metric to use with the IVF index. Possible options are `COS` (cosine distance), `L2` (Euclidean distance), and `IP` (inner product). |
| `dimensions` | integer | Number of dimensions for vector similarity. The maximum number of supported dimensions is `2000`. |
+> [!IMPORTANT]
+> Setting the _numLists_ parameter correctly is important for achieving good accuracy and performance. We recommend that `numLists` is set to `rowCount()/1000` for up to 1 million rows and to `sqrt(rowCount)` for more than 1 million rows.
+>
+> As the number of items in your database grows, you should tune _numLists_ to be larger in order to achieve good latency performance for vector search.
+>
+> If you're experimenting with a new scenario or creating a small demo, you can start with `numLists` set to `1` to perform a brute-force search across all vectors. This should provide you with the most accurate results from the vector search. After your initial setup, tune the `numLists` parameter by using the preceding guidance.
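+
+As a quick illustration of that sizing guidance (a sketch, not part of any SDK), the following Python helper computes a starting `numLists` value from a row count:
+
+```python
+import math
+
+def recommended_num_lists(row_count: int) -> int:
+    # Guidance above: rowCount/1000 for up to 1 million rows, sqrt(rowCount) beyond that.
+    if row_count <= 1_000_000:
+        return max(1, row_count // 1000)
+    return round(math.sqrt(row_count))
+
+print(recommended_num_lists(50_000))     # 50
+print(recommended_num_lists(4_000_000))  # 2000
+```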
+
+## Examples
+
+The following examples show you how to index vectors, add documents that have vector properties, perform a vector search, and retrieve the index configuration.
db.runCommand({
}, cosmosSearchOptions: { kind: 'vector-ivf',
- numLists: 100,
+ numLists: 3,
similarity: 'COS', dimensions: 3 }
In this example, `vectorIndex` is returned with all the `cosmosSearch` parameter
name: 'vectorSearchIndex', cosmosSearch: { kind: 'vector-ivf',
- numLists: 100,
+ numLists: 3,
similarity: 'COS', dimensions: 3 },
In this example, `vectorIndex` is returned with all the `cosmosSearch` parameter
This guide demonstrates how to create a vector index, add documents that have vector data, perform a similarity search, and retrieve the index definition. By using vector search, you can efficiently store, index, and query high-dimensional vector data directly in Azure Cosmos DB for MongoDB vCore. Vector search enables you to unlock the full potential of your data via vector embeddings, and it empowers you to build more accurate, efficient, and powerful applications. > [!div class="nextstepaction"]
-> [Introduction to Azure Cosmos DB for MongoDB vCore](introduction.md)
+> [Build AI apps with Azure Cosmos DB for MongoDB vCore vector search](vector-search-ai.md)
cosmos-db Monitor Logs Basic Queries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/monitor-logs-basic-queries.md
Previously updated : 08/02/2023 Last updated : 08/22/2023
Here's a list of common troubleshooting queries.
Find operations that have a duration greater than 3 milliseconds.
-#### [Resource-specific](#tab/resource-specific)
+#### [Azure Diagnostics](#tab/azure-diagnostics)
```Kusto AzureDiagnostics
AzureDiagnostics
| summarize count() by clientIpAddress_s, TimeGenerated ```
-#### [Azure Diagnostics](#tab/azure-diagnostics)
+#### [Resource-specific](#tab/resource-specific)
```kusto CDBDataPlaneRequests
CDBDataPlaneRequests
Find user agents associated with each operation.
-#### [Resource-specific](#tab/resource-specific)
+#### [Azure Diagnostics](#tab/azure-diagnostics)
```Kusto AzureDiagnostics
AzureDiagnostics
| summarize count() by OperationName, userAgent_s ```
-#### [Azure Diagnostics](#tab/azure-diagnostics)
+#### [Resource-specific](#tab/resource-specific)
```kusto CDBDataPlaneRequests
CDBDataPlaneRequests
Find operations that ran for a long time by binning their runtime into five-second intervals.
-#### [Resource-specific](#tab/resource-specific)
+#### [Azure Diagnostics](#tab/azure-diagnostics)
```Kusto AzureDiagnostics
AzureDiagnostics
| render timechart ```
-#### [Azure Diagnostics](#tab/azure-diagnostics)
+#### [Resource-specific](#tab/resource-specific)
```kusto CDBDataPlaneRequests
CDBDataPlaneRequests
Measure skew by getting common statistics for physical partitions.
-#### [Resource-specific](#tab/resource-specific)
+#### [Azure Diagnostics](#tab/azure-diagnostics)
```Kusto AzureDiagnostics
AzureDiagnostics
| project SubscriptionId, regionName_s, databaseName_s, collectionName_s, partitionKey_s, sizeKb_d, ResourceId ```
-#### [Azure Diagnostics](#tab/azure-diagnostics)
+#### [Resource-specific](#tab/resource-specific)
```kusto CDBPartitionKeyStatistics
CDBPartitionKeyStatistics
Measure the request charge (in RUs) for the largest queries.
-#### [Resource-specific](#tab/resource-specific)
+#### [Azure Diagnostics](#tab/azure-diagnostics)
```Kusto AzureDiagnostics
AzureDiagnostics
| limit 100 ```
-#### [Azure Diagnostics](#tab/azure-diagnostics)
+#### [Resource-specific](#tab/resource-specific)
```kusto CDBDataPlaneRequests
CDBQueryRuntimeStatistics
Sort operations by the amount of RU/s they're using.
-#### [Resource-specific](#tab/resource-specific)
+#### [Azure Diagnostics](#tab/azure-diagnostics)
```kusto AzureDiagnostics
AzureDiagnostics
| summarize max(responseLength_s), max(requestLength_s), max(requestCharge_s), count = count() by OperationName, requestResourceType_s, userAgent_s, collectionRid_s, bin(TimeGenerated, 1h) ```
-#### [Azure Diagnostics](#tab/azure-diagnostics)
+#### [Resource-specific](#tab/resource-specific)
```kusto CDBDataPlaneRequests
CDBDataPlaneRequests
Find queries that consume more RU/s than a baseline amount.
-#### [Resource-specific](#tab/resource-specific)
+#### [Azure Diagnostics](#tab/azure-diagnostics)
This query joins with data from ``DataPlaneRequests`` and ``QueryRunTimeStatistics``.
AzureDiagnostics
| limit 100 ```
-#### [Azure Diagnostics](#tab/azure-diagnostics)
+#### [Resource-specific](#tab/resource-specific)
```kusto CDBDataPlaneRequests
CDBQueryRuntimeStatistics
Get statistics in both request charge and duration for a specific query.
-#### [Resource-specific](#tab/resource-specific)
+#### [Azure Diagnostics](#tab/azure-diagnostics)
```kusto AzureDiagnostics
AzureDiagnostics
| project databasename_s, collectionname_s, OperationName1 , querytext_s,requestCharge_s1, duration_s1, bin(TimeGenerated, 1min) ```
-#### [Azure Diagnostics](#tab/azure-diagnostics)
+#### [Resource-specific](#tab/resource-specific)
```kusto CDBQueryRuntimeStatistics
CDBDataPlaneRequests
Group operations by the resource distribution.
-#### [Resource-specific](#tab/resource-specific)
+#### [Azure Diagnostics](#tab/azure-diagnostics)
```kusto AzureDiagnostics
AzureDiagnostics
| summarize count = count() by OperationName, requestResourceType_s, bin(TimeGenerated, 1h) ```
-#### [Azure Diagnostics](#tab/azure-diagnostics)
+#### [Resource-specific](#tab/resource-specific)
```kusto CDBDataPlaneRequests
CDBDataPlaneRequests
Get the maximum throughput for a physical partition.
-#### [Resource-specific](#tab/resource-specific)
+#### [Azure Diagnostics](#tab/azure-diagnostics)
```kusto AzureDiagnostics
AzureDiagnostics
| summarize max(requestCharge_s) by bin(TimeGenerated, 1h), partitionId_g ```
-#### [Azure Diagnostics](#tab/azure-diagnostics)
+#### [Resource-specific](#tab/resource-specific)
```kusto CDBDataPlaneRequests
CDBDataPlaneRequests
Measure RU/s consumption on a per-second basis per partition key.
-#### [Resource-specific](#tab/resource-specific)
+#### [Azure Diagnostics](#tab/azure-diagnostics)
```kusto AzureDiagnostics
AzureDiagnostics
| order by TimeGenerated asc ```
-#### [Azure Diagnostics](#tab/azure-diagnostics)
+#### [Resource-specific](#tab/resource-specific)
```kusto CDBPartitionKeyRUConsumption
CDBPartitionKeyRUConsumption
Measure request charge per partition key.
-#### [Resource-specific](#tab/resource-specific)
+#### [Azure Diagnostics](#tab/azure-diagnostics)
```kusto AzureDiagnostics
AzureDiagnostics
| where parse_json(partitionKey_s)[0] == "2" ```
-#### [Azure Diagnostics](#tab/azure-diagnostics)
+#### [Resource-specific](#tab/resource-specific)
```kusto CDBPartitionKeyRUConsumption
CDBPartitionKeyRUConsumption
Sort partition keys based on request unit consumption within a time window.
-#### [Resource-specific](#tab/resource-specific)
+#### [Azure Diagnostics](#tab/azure-diagnostics)
```kusto AzureDiagnostics
AzureDiagnostics
| order by total desc ```
-#### [Azure Diagnostics](#tab/azure-diagnostics)
+#### [Resource-specific](#tab/resource-specific)
```kusto CDBPartitionKeyRUConsumption
CDBPartitionKeyRUConsumption
Find logs for partition keys filtered by the size of storage per partition key.
-#### [Resource-specific](#tab/resource-specific)
+#### [Azure Diagnostics](#tab/azure-diagnostics)
```kusto AzureDiagnostics
AzureDiagnostics
| where todouble(sizeKb_d) > 800000 ```
-#### [Azure Diagnostics](#tab/azure-diagnostics)
+#### [Resource-specific](#tab/resource-specific)
```kusto CDBPartitionKeyStatistics
CDBPartitionKeyStatistics
Measure performance for operation latency, RU/s usage, and response length.
-#### [Resource-specific](#tab/resource-specific)
+#### [Azure Diagnostics](#tab/azure-diagnostics)
```kusto AzureDiagnostics
AzureDiagnostics
| summarize percentile(todouble(responseLength_s), 50), percentile(todouble(responseLength_s), 99), max(responseLength_s), percentile(todouble(requestCharge_s), 50), percentile(todouble(requestCharge_s), 99), max(requestCharge_s), percentile(todouble(duration_s), 50), percentile(todouble(duration_s), 99), max(duration_s), count() by OperationName, requestResourceType_s, userAgent_s, collectionRid_s, bin(TimeGenerated, 1h) ```
-#### [Azure Diagnostics](#tab/azure-diagnostics)
+#### [Resource-specific](#tab/resource-specific)
```kusto CDBDataPlaneRequests
Get control plane logs using ``ControlPlaneRequests``.
> [!TIP] > Remember to switch on the flag described in [Disable key-based metadata write access](audit-control-plane-logs.md#disable-key-based-metadata-write-access), and execute the operations by using Azure PowerShell, the Azure CLI, or Azure Resource Manager.
-#### [Resource-specific](#tab/resource-specific)
+#### [Azure Diagnostics](#tab/azure-diagnostics)
```kusto AzureDiagnostics
AzureDiagnostics
| summarize by OperationName ```
-#### [Azure Diagnostics](#tab/azure-diagnostics)
+#### [Resource-specific](#tab/resource-specific)
```kusto CDBControlPlaneRequests
cosmos-db Certificate Based Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/certificate-based-authentication.md
Last updated 06/11/2019 -+ # Certificate-based authentication for an Azure AD identity to access keys from an Azure Cosmos DB account
cosmos-db How To Delete By Partition Key https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/how-to-delete-by-partition-key.md
For certain scenarios, the effects of a delete by partition key operation isn't
- Aggregate queries that use the index - for example, COUNT queries - that are issued during an ongoing delete by partition key operation may contain the results of the documents to be deleted. This may occur until the delete operation is fully complete. - Queries issued against the [analytical store](../analytical-store-introduction.md) during an ongoing delete by partition key operation may contain the results of the documents to be deleted. This may occur until the delete operation is fully complete.-- [Continuous backup (point in time restore)](../continuous-backup-restore-introduction.md) - a restore that is triggered during an ongoing delete by partition key operation may contain the results of the documents to be deleted in the restored collection. It isn't recommended to use this preview feature if you have a scenario that requires continuous backup.
+- [Continuous backup (point in time restore)](../continuous-backup-restore-introduction.md) - a restore that is triggered during an ongoing delete by partition key operation may contain the results of the documents to be deleted in the restored collection. It isn't recommended to use this preview feature if you have a scenario that requires continuous backup.
+
+### Limitations
+- [Hierarchical partition keys](../hierarchical-partition-keys.md): deleting by a partial hierarchical partition key isn't supported; items can be deleted only by specifying the complete partition key, encompassing all levels. For example, consider a partition key that consists of three hierarchical levels: country, state, and city. The delete by partition key functionality can be employed by specifying the complete partition key (country/state/city). Attempting to delete by using intermediate partition keys, such as country/state or solely country, results in an error.
## How to give feedback or report an issue/bug * Email cosmosPkDeleteFeedbk@microsoft.com with questions or feedback.
cosmos-db How To Dotnet Read Item https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/how-to-dotnet-read-item.md
The following example point reads a single item asynchronously and returns a des
:::code language="csharp" source="~/cosmos-db-nosql-dotnet-samples/275-read-item/Program.cs" id="read_item" :::
-The [``Database.ReadItemAsync<>``](/dotnet/api/microsoft.azure.cosmos.container.readitemasync) method reads and item and returns an object of type [``ItemResponse<>``](/dotnet/api/microsoft.azure.cosmos.itemresponse-1). The **ItemResponse<>** type inherits from the [``Response<>``](/dotnet/api/microsoft.azure.cosmos.response-1) type, which contains an implicit conversion operator to convert the object into the generic type. To learn more about implicit operators, see [user-defined conversion operators](/dotnet/csharp/language-reference/operators/user-defined-conversion-operators).
+The [``Database.ReadItemAsync<>``](/dotnet/api/microsoft.azure.cosmos.container.readitemasync) method reads an item and returns an object of type [``ItemResponse<>``](/dotnet/api/microsoft.azure.cosmos.itemresponse-1). The **ItemResponse<>** type inherits from the [``Response<>``](/dotnet/api/microsoft.azure.cosmos.response-1) type, which contains an implicit conversion operator to convert the object into the generic type. To learn more about implicit operators, see [user-defined conversion operators](/dotnet/csharp/language-reference/operators/user-defined-conversion-operators).
Alternatively, you can return the **ItemResponse<>** generic type and explicitly get the resource. The more general **ItemResponse<>** type also contains useful metadata about the underlying API operation. In this example, metadata about the request unit charge for this operation is gathered using the **RequestCharge** property.
cosmos-db How To Geospatial Index Query https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/how-to-geospatial-index-query.md
Last updated 08/01/2023-+ # Index and query GeoJSON location data in Azure Cosmos DB for NoSQL
cosmos-db How To Manage Indexing Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/how-to-manage-indexing-policy.md
An [indexing policy update](../index-policy.md#modifying-the-indexing-policy) tr
> [!NOTE] > When you update indexing policy, writes to Azure Cosmos DB are uninterrupted. Learn more about [indexing transformations](../index-policy.md#modifying-the-indexing-policy)
+
+> [!IMPORTANT]
+> Removing an index takes effect immediately, whereas adding a new index takes some time because it requires an indexing transformation. When replacing one index with another (for example, replacing a single property index with a composite index), make sure to add the new index first and then wait for the index transformation to complete **before** you remove the previous index from the indexing policy. Otherwise, this negatively affects your ability to query the previous index and may break any active workloads that reference the previous index.
+ ### Use the Azure portal
cosmos-db Modeling Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/modeling-data.md
What to do?
## Takeaways
-The biggest takeaways from this article are to understand that data modeling in a schema-free world is as important as ever.
+The biggest takeaway from this article is to understand that data modeling in a world that's schema-free is as important as ever.
Just as there's no single way to represent a piece of data on a screen, there's no single way to model your data. You need to understand your application and how it will produce, consume, and process the data. Then, by applying some of the guidelines presented here you can set about creating a model that addresses the immediate needs of your application. When your applications need to change, you can use the flexibility of a schema-free database to embrace that change and evolve your data model easily.
cosmos-db Performance Tips Query Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/performance-tips-query-sdk.md
Last updated 06/20/2023 ms.devlang: csharp, java-+ zone_pivot_groups: programming-languages-set-cosmos
To learn more about performance using the Java SDK:
* [Performance tips for Azure Cosmos DB Java V4 SDK](performance-tips-java-sdk-v4.md) ::: zone-end+
+## Reduce Query Plan calls
+
+To execute a query, a query plan needs to be built. This generally requires a network request to the Azure Cosmos DB gateway, which adds to the latency of the query operation. You can remove this request and reduce the latency of single partition query operations. For single partition queries, specify the partition key value for the item and pass it as the [partition_key](/python/api/azure-cosmos/azure.cosmos.containerproxy#azure-cosmos-containerproxy-query-items) argument:
+
+```python
+items = container.query_items(
+ query="SELECT * FROM r where r.city = 'Seattle'",
+ partition_key="Washington"
+ )
+```
+
+## Tune the page size
+
+When you issue a SQL query, the results are returned in a segmented fashion if the result set is too large. The [max_item_count](/python/api/azure-cosmos/azure.cosmos.containerproxy#azure-cosmos-containerproxy-query-items) parameter allows you to set the maximum number of items to be returned in the enumeration operation.
+
+```python
+items = container.query_items(
+ query="SELECT * FROM r where r.city = 'Seattle'",
+ partition_key="Washington",
+ max_item_count=1000
+ )
+```
+
+## Next steps
+
+To learn more about using the Python SDK for API for NoSQL:
+
+* [Azure Cosmos DB Python SDK for API for NoSQL](sdk-python.md)
+* [Quickstart: Azure Cosmos DB for NoSQL client library for Python](quickstart-python.md)
++
+## Reduce Query Plan calls
+
+To execute a query, a query plan needs to be built. This generally requires a network request to the Azure Cosmos DB gateway, which adds to the latency of the query operation. You can remove this request and reduce the latency of single partition query operations. Scoping a query to a single partition can be accomplished in two ways.
+
+The first way is to use a parameterized query expression and specify the partition key in the query statement. The query is programmatically composed to `SELECT * FROM products p WHERE p.categoryId = 'Bikes, Touring Bikes'`:
+
+```javascript
+// find all items with same categoryId (partitionKey)
+const querySpec = {
+ query: "select * from products p where p.categoryId=@categoryId",
+ parameters: [
+ {
+ name: "@categoryId",
+ value: "Bikes, Touring Bikes"
+ }
+ ]
+};
+
+// Get items
+const { resources } = await container.items.query(querySpec).fetchAll();
+
+for (const item of resources) {
+ console.log(`${item.id}: ${item.name}, ${item.sku}`);
+}
+```
+
+Or specify [partitionKey](/javascript/api/@azure/cosmos/feedoptions#@azure-cosmos-feedoptions-partitionkey) in `FeedOptions` and pass it as argument:
+
+```javascript
+const querySpec = {
+ query: "select * from products p"
+};
+
+const { resources } = await container.items.query(querySpec, { partitionKey: "Bikes, Touring Bikes" }).fetchAll();
+
+for (const item of resources) {
+ console.log(`${item.id}: ${item.name}, ${item.sku}`);
+}
+```
+
+## Tune the page size
+
+When you issue a SQL query, the results are returned in a segmented fashion if the result set is too large. The [maxItemCount](/javascript/api/@azure/cosmos/feedoptions#@azure-cosmos-feedoptions-maxitemcount) option allows you to set the maximum number of items to be returned in the enumeration operation.
+
+```javascript
+const querySpec = {
+ query: "select * from products p where p.categoryId=@categoryId",
+ parameters: [
+ {
+ name: "@categoryId",
+ value: items[2].categoryId
+ }
+ ]
+};
+
+const { resources } = await container.items.query(querySpec, { maxItemCount: 1000 }).fetchAll();
+
+for (const item of resources) {
+ console.log(`${item.id}: ${item.name}, ${item.sku}`);
+}
+```
+
+## Next steps
+
+To learn more about using the Node.js SDK for API for NoSQL:
+
+* [Azure Cosmos DB Node.js SDK for API for NoSQL](sdk-nodejs.md)
+* [Quickstart - Azure Cosmos DB for NoSQL client library for Node.js](quickstart-nodejs.md)
+
cosmos-db Sdk Connection Modes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/sdk-connection-modes.md
The following table shows a summary of the connectivity modes available for vari
|Connection mode |Supported protocol |Supported SDKs |API/Service port |
|||||
-|Gateway | HTTPS | All SDKs | SQL (443), MongoDB (10250, 10255, 10256), Table (443), Cassandra (10350), Graph (443) <br> The port 10250 maps to a default Azure Cosmos DB for MongoDB instance without geo-replication. Whereas the ports 10255 and 10256 map to the instance that has geo-replication. |
+|Gateway | HTTPS | All SDKs | SQL (443), MongoDB (10255), Table (443), Cassandra (10350), Graph (443) <br> |
|Direct | TCP (Encrypted via TLS) | .NET SDK Java SDK | When using public/service endpoints: ports in the 10000 through 20000 range<br>When using private endpoints: ports in the 0 through 65535 range | ## <a id="direct-mode"></a> Direct mode connection architecture
cosmos-db Sdk Java V4 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/sdk-java-v4.md
Release history is maintained in the azure-sdk-for-java repo, for detailed list
## Recommended version
-It's strongly recommended to use version 4.48.1 and above.
+It's strongly recommended to use version 4.48.2 and above.
## FAQ [!INCLUDE [cosmos-db-sdk-faq](../includes/cosmos-db-sdk-faq.md)]
cosmos-db Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/policy-reference.md
Title: Built-in policy definitions for Azure Cosmos DB description: Lists Azure Policy built-in policy definitions for Azure Cosmos DB. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2023 Last updated : 08/30/2023
cosmos-db Howto Table Size https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/howto-table-size.md
provides helper functions to query this information.
<td>citus_table_size(relation_name)</td> <td><ul> <li><p>citus_relation_size plus:</p>
-<blockquote>
+ <ul> <li>size of <a href="https://www.postgresql.org/docs/current/static/storage-fsm.html">free space map</a></li> <li>size of <a href="https://www.postgresql.org/docs/current/static/storage-vm.html">visibility map</a></li> </ul>
-</blockquote></li>
+</li>
</ul></td> </tr> <tr class="odd"> <td>citus_total_relation_size(relation_name)</td> <td><ul> <li><p>citus_table_size plus:</p>
-<blockquote>
+ <ul> <li>size of indices</li> </ul>
-</blockquote></li>
+</li>
</ul></td> </tr> </tbody>
cosmos-db Product Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/product-updates.md
Previously updated : 08/09/2023 Last updated : 08/24/2023 # Product updates for Azure Cosmos DB for PostgreSQL
Updates that donΓÇÖt directly affect the internals of a cluster are rolled out g
Updates that change cluster internals, such as installing a [new minor PostgreSQL version](https://www.postgresql.org/developer/roadmap/), are delivered to existing clusters as part of the next [scheduled maintenance](concepts-maintenance.md) event. Such updates are available immediately to newly created clusters. ### August 2023
+* General availability: [The latest minor PostgreSQL version updates](reference-versions.md#postgresql-versions) (11.21, 12.16, 13.12, 14.9, and 15.4) are now available in all supported regions.
+* General availability: [PgBouncer](http://www.pgbouncer.org/) version 1.20 is now supported for all [PostgreSQL versions](reference-versions.md#postgresql-versions) in all [supported regions](./resources-regions.md)
+ * See [Connection pooling and managed PgBouncer in Azure Cosmos DB for PostgreSQL](./concepts-connection-pool.md).
* General availability: Citus 12 is now available in [all supported regions](./resources-regions.md) with PostgreSQL 14 and PostgreSQL 15. * Check [what's new in Citus 12](https://www.citusdata.com/updates/v12-0/). * See [Postgres and Citus version in-place upgrade](./concepts-upgrade.md).
cosmos-db Reference Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/reference-extensions.md
Previously updated : 08/09/2023 Last updated : 08/24/2023 # PostgreSQL extensions in Azure Cosmos DB for PostgreSQL
The versions of each extension installed in a cluster sometimes differ based on
> | [lo](https://www.postgresql.org/docs/current/lo.html) | Large Object maintenance. | 1.1 | 1.1 | 1.1 | 1.1 | 1.1 | > | [ltree](https://www.postgresql.org/docs/current/static/ltree.html) | Provides a data type for hierarchical tree-like structures. | 1.1 | 1.1 | 1.2 | 1.2 | 1.2 | > | [seg](https://www.postgresql.org/docs/current/seg.html) | Data type for representing line segments or floating-point intervals. | 1.3 | 1.3 | 1.3 | 1.4 | 1.4 |
-> | [tdigest](https://github.com/tvondra/tdigest) | Data type for on-line accumulation of rank-based statistics such as quantiles and trimmed means. | 1.2.0 | 1.2.0 | 1.2.0 | 1.4.0 | 1.4.0 |
-> | [topn](https://github.com/citusdata/postgresql-topn/) | Type for top-n JSONB. | 2.4.0 | 2.4.0 | 2.4.0 | 2.4.0 | 2.5.0 |
+> | [tdigest](https://github.com/tvondra/tdigest) | Data type for on-line accumulation of rank-based statistics such as quantiles and trimmed means. | 1.4.0 | 1.4.0 | 1.4.0 | 1.4.0 | 1.4.0 |
+> | [topn](https://github.com/citusdata/postgresql-topn/) | Type for top-n JSONB. | 2.5.0 | 2.5.0 | 2.5.0 | 2.5.0 | 2.5.0 |
### Full-text search extensions
The versions of each extension installed in a cluster sometimes differ based on
> | [intagg](https://www.postgresql.org/docs/current/intagg.html) | Integer aggregator and enumerator (obsolete). | 1.1 | 1.1 | 1.1 | 1.1 | 1.1 | > | [intarray](https://www.postgresql.org/docs/current/static/intarray.html) | Provides functions and operators for manipulating null-free arrays of integers. | 1.2 | 1.2 | 1.3 | 1.5 | 1.5 | > | [moddatetime](https://www.postgresql.org/docs/current/contrib-spi.html#id-1.11.7.45.9) | Functions for tracking last modification time. | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
-> | [pg\_partman](https://pgxn.org/dist/pg_partman/doc/pg_partman.html) | Manages partitioned tables by time or ID. | 4.6.0 | 4.6.0 | 4.6.0 | 4.6.2 | 4.7.0 |
+> | [pg\_partman](https://pgxn.org/dist/pg_partman/doc/pg_partman.html) | Manages partitioned tables by time or ID. | 4.7.3 | 4.7.3 | 4.7.3 | 4.7.3 | 4.7.3 |
> | [pg\_surgery](https://www.postgresql.org/docs/current/pgsurgery.html) | Functions to perform surgery on a damaged relation. | | | | 1.0 | 1.0 | > | [pg\_trgm](https://www.postgresql.org/docs/current/static/pgtrgm.html) | Provides functions and operators for determining the similarity of alphanumeric text based on trigram matching. | 1.4 | 1.4 | 1.5 | 1.6 | 1.6 | > | [pgcrypto](https://www.postgresql.org/docs/current/static/pgcrypto.html) | Provides cryptographic functions. | 1.3 | 1.3 | 1.3 | 1.3 | 1.3 |
The versions of each extension installed in a cluster sometimes differ based on
> | [dblink](https://www.postgresql.org/docs/current/dblink.html) | A module that supports connections to other PostgreSQL databases from within a database session. See the "dblink and postgres_fdw" section for information about this extension. | 1.2 | 1.2 | 1.2 | 1.2 | 1.2 | > | [old\_snapshot](https://www.postgresql.org/docs/current/oldsnapshot.html) | Allows inspection of the server state that is used to implement old_snapshot_threshold. | | | | 1.0 | 1.0 | > | [pageinspect](https://www.postgresql.org/docs/current/pageinspect.html) | Inspect the contents of database pages at a low level. | 1.7 | 1.7 | 1.8 | 1.9 | 1.10 |
-> | [pg\_azure\_storage](howto-ingest-azure-blob-storage.md) | Azure integration for PostgreSQL. | | | 1.0 | 1.0 | 1.0 |
+> | [pg\_azure\_storage](howto-ingest-azure-blob-storage.md) | Azure integration for PostgreSQL. | | | 1.2 | 1.2 | 1.2 |
> | [pg\_buffercache](https://www.postgresql.org/docs/current/static/pgbuffercache.html) | Provides a means for examining what's happening in the shared buffer cache in real time. | 1.3 | 1.3 | 1.3 | 1.3 | 1.3 |
-> | [pg\_cron](https://github.com/citusdata/pg_cron) | Job scheduler for PostgreSQL. | 1.4 | 1.4 | 1.4 | 1.4 | 1.4 |
+> | [pg\_cron](https://github.com/citusdata/pg_cron) | Job scheduler for PostgreSQL. | 1.5 | 1.5 | 1.5 | 1.5 | 1.5 |
> | [pg\_freespacemap](https://www.postgresql.org/docs/current/pgfreespacemap.html) | Examine the free space map (FSM). | 1.2 | 1.2 | 1.2 | 1.2 | 1.2 | > | [pg\_prewarm](https://www.postgresql.org/docs/current/static/pgprewarm.html) | Provides a way to load relation data into the buffer cache. | 1.2 | 1.2 | 1.2 | 1.2 | 1.2 | > | [pg\_stat\_statements](https://www.postgresql.org/docs/current/static/pgstatstatements.html) | Provides a means for tracking execution statistics of all SQL statements executed by a server. See the "pg_stat_statements" section for information about this extension. | 1.6 | 1.7 | 1.8 | 1.9 | 1.10 |
The versions of each extension installed in a cluster sometimes differ based on
> [!div class="mx-tableFixed"] > | **Extension** | **Description** | **PG 11** | **PG 12** | **PG 13** | **PG 14** | **PG 15** | > ||||||
-> [pgvector](https://github.com/pgvector/pgvector#installation-notes) | Open-source vector similarity search for Postgres | 0.4.0 | 0.4.0 | 0.4.0 | 0.4.0 | 0.4.0 |
+> [pgvector](https://github.com/pgvector/pgvector#installation-notes) | Open-source vector similarity search for Postgres | 0.4.4 | 0.4.4 | 0.4.4 | 0.4.4 | 0.4.4 |
### PostGIS extensions > [!div class="mx-tableFixed"] > | **Extension** | **Description** | **PG 11** | **PG 12** | **PG 13** | **PG 14** | **PG 15** | > ||||||
-> | [PostGIS](https://www.postgis.net/) | Spatial and geographic objects for PostgreSQL. | 2.5.5 | 3.0.5 | 3.0.5 | 3.1.5 | 3.3.1 |
-> | address\_standardizer | Used to parse an address into constituent elements. Used to support geocoding address normalization step. | 2.5.5 | 3.0.5 | 3.0.5 | 3.1.5 | 3.3.1 |
-> | postgis\_sfcgal | PostGIS SFCGAL functions. | 2.5.5 | 3.0.5 | 3.0.5 | 3.1.5 | 3.3.1 |
-> | postgis\_topology | PostGIS topology spatial types and functions. | 2.5.5 | 3.0.5 | 3.0.5 | 3.1.5 | 3.3.1 |
+> | [PostGIS](https://www.postgis.net/) | Spatial and geographic objects for PostgreSQL. | 3.3.4 | 3.4.0 | 3.4.0 | 3.4.0 | 3.4.0 |
+> | address\_standardizer | Used to parse an address into constituent elements. Used to support geocoding address normalization step. | 3.3.4 | 3.4.0 | 3.4.0 | 3.4.0 | 3.4.0 |
+> | postgis\_sfcgal | PostGIS SFCGAL functions. | 3.3.4 | 3.4.0 | 3.4.0 | 3.4.0 | 3.4.0 |
+> | postgis\_topology | PostGIS topology spatial types and functions. | 3.3.4 | 3.4.0 | 3.4.0 | 3.4.0 | 3.4.0 |
## pg_stat_statements
There's a tradeoff between the query execution information pg_stat_statements pr
You can use dblink and postgres\_fdw to connect from one PostgreSQL server to another, or to another database in the same server. The receiving server needs to allow connections from the sending server through its firewall. To use
-these extensions to connect between Azure Cosmos DB for PostgreSQL servers or
-clusters, set **Allow Azure services and resources to access this cluster (or
+these extensions to connect between Azure Cosmos DB for PostgreSQL clusters with [public access](./concepts-firewall-rules.md), set **Allow Azure services and resources to access this cluster (or
server)** to ON. You also need to turn this setting ON if you want to use the extensions to loop back to the same server. The **Allow Azure services and resources to access this cluster** setting can be found in the Azure portal
cosmos-db Reference Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/reference-versions.md
Previously updated : 08/09/2023 Last updated : 08/24/2023 # Supported database versions in Azure Cosmos DB for PostgreSQL
versions](https://www.postgresql.org/docs/release/):
### PostgreSQL version 15
-The current minor release is 15.3. Refer to the [PostgreSQL
-documentation](https://www.postgresql.org/docs/release/15.3/) to
+The current minor release is 15.4. Refer to the [PostgreSQL
+documentation](https://www.postgresql.org/docs/release/15.4/) to
learn more about improvements and fixes in this minor release. ### PostgreSQL version 14
-The current minor release is 14.8. Refer to the [PostgreSQL
-documentation](https://www.postgresql.org/docs/release/14.8/) to
+The current minor release is 14.9. Refer to the [PostgreSQL
+documentation](https://www.postgresql.org/docs/release/14.9/) to
learn more about improvements and fixes in this minor release. ### PostgreSQL version 13
-The current minor release is 13.11. Refer to the [PostgreSQL
-documentation](https://www.postgresql.org/docs/release/13.11/) to
+The current minor release is 13.12. Refer to the [PostgreSQL
+documentation](https://www.postgresql.org/docs/release/13.12/) to
learn more about improvements and fixes in this minor release. ### PostgreSQL version 12
-The current minor release is 12.15. Refer to the [PostgreSQL
-documentation](https://www.postgresql.org/docs/release/12.15/) to
+The current minor release is 12.16. Refer to the [PostgreSQL
+documentation](https://www.postgresql.org/docs/release/12.16/) to
learn more about improvements and fixes in this minor release. ### PostgreSQL version 11
-The current minor release is 11.20. Refer to the [PostgreSQL
-documentation](https://www.postgresql.org/docs/release/11.20/) to
+The current minor release is 11.21. Refer to the [PostgreSQL
+documentation](https://www.postgresql.org/docs/release/11.21/) to
learn more about improvements and fixes in this minor release. ### PostgreSQL version 10 and older
cosmos-db Rag Data Openai https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/rag-data-openai.md
+
+ Title: Use data with Azure OpenAI
+
+description: Use Retrieval Augmented Generation (RAG) and vector search to ground your Azure OpenAI models with data stored in Azure Cosmos DB.
++++ Last updated : 08/16/2023++
+# Use Azure Cosmos DB data with Azure OpenAI
++
+The Large Language Models (LLMs) in Azure OpenAI are incredibly powerful tools that can take your AI-powered applications to the next level. The utility of LLMs can increase significantly when the models have access to the right data, at the right time, from your application's data store. This process is known as Retrieval Augmented Generation (RAG), and there are many ways to do it today with Azure Cosmos DB.
+
+In this article, we review key concepts for RAG and then provide links to tutorials and sample code that demonstrate some of the most powerful RAG patterns using *vector search* to bring the most semantically relevant data to your LLMs. These tutorials can help you become comfortable with using your Azure Cosmos DB data in Azure OpenAI models.
+
+To jump right into tutorials and sample code for RAG patterns with Azure Cosmos DB, use the following links:
+
+| | Description |
+| | |
+| **[Azure Cosmos DB for NoSQL with Azure Cognitive Search](#azure-cosmos-db-for-nosql-and-azure-cognitive-search)**. | Augment your Azure Cosmos DB data with semantic and vector search capabilities of Azure Cognitive Search. |
+| **[Azure Cosmos DB for MongoDB vCore](#azure-cosmos-db-for-mongodb-vcore)**. | Featuring native support for vector search, store your application data and vector embeddings together in a single MongoDB-compatible service. |
+| **[Azure Cosmos DB for PostgreSQL](#azure-cosmos-db-for-postgresql)**. | Offering native support for vector search, you can store your data and vectors together in a scalable PostgreSQL offering. |
+
+## Key concepts
+
+This section includes key concepts that are critical to implementing RAG with Azure Cosmos DB and Azure OpenAI.
+
+### Retrieval Augmented Generation (RAG)
+
+RAG involves the process of retrieving supplementary data to provide the LLM with the ability to use this data when it generates responses. When presented with a user's question or prompt, RAG aims to select the most pertinent and current domain-specific knowledge from external sources, such as articles or documents. This retrieved information serves as a valuable reference for the model when generating its response. For example, a simple RAG pattern using Azure Cosmos DB for NoSQL could be:
+
+1. Insert data into an Azure Cosmos DB for NoSQL database and collection.
+2. Create embeddings from a data property using an Azure OpenAI Embeddings model.
+3. Link the Azure Cosmos DB for NoSQL data to Azure Cognitive Search (for vector indexing and search).
+4. Create a vector index over the embeddings properties.
+5. Create a function to perform vector similarity search based on a user prompt.
+6. Perform question answering over the data using an Azure OpenAI Completions model, as sketched in the example that follows.
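+
+The following minimal sketch shows how steps 2, 5, and 6 fit together. It assumes the `openai` Python package (v1.x); the deployment names are placeholders, and `vector_search` is a hypothetical helper that queries your vector index (for example, Azure Cognitive Search) and returns matching text snippets:
+
+```python
+from openai import AzureOpenAI  # assumes the openai v1.x Python package
+
+client = AzureOpenAI(
+    api_key="<api-key>",
+    api_version="2023-05-15",
+    azure_endpoint="https://<resource>.openai.azure.com",
+)
+
+def answer(question: str, vector_search) -> str:
+    # 1. Embed the user's question (deployment name is a placeholder).
+    embedding = client.embeddings.create(
+        model="text-embedding-ada-002", input=question
+    ).data[0].embedding
+
+    # 2. Retrieve the most relevant documents; vector_search is a hypothetical
+    #    helper backed by your vector index.
+    context = "\n".join(vector_search(embedding, top=3))
+
+    # 3. Ground the completion with the retrieved context via the system prompt.
+    chat = client.chat.completions.create(
+        model="gpt-35-turbo",  # placeholder deployment name
+        messages=[
+            {"role": "system", "content": f"Answer using only this context:\n{context}"},
+            {"role": "user", "content": question},
+        ],
+    )
+    return chat.choices[0].message.content
+```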
+
+The RAG pattern, with prompt engineering, serves the purpose of enhancing response quality by offering more contextual information to the model. RAG enables the model to apply a broader knowledge base by incorporating relevant external sources into the generation process, resulting in more comprehensive and informed responses. For more information on "grounding" LLMs, see [grounding LLMs - Microsoft Community Hub](https://techcommunity.microsoft.com/t5/fasttrack-for-azure/grounding-llms/ba-p/3843857).
+
+### Prompts and prompt engineering
+
+A prompt refers to a specific text or information that can serve as an instruction to an LLM, or as contextual data that the LLM can build upon. A prompt can take various forms, such as a question, a statement, or even a code snippet. Prompts can serve as:
+
+- **Instructions**: provide directives to the LLM.
+- **Primary content**: gives information to the LLM for processing.
+- **Examples**: help condition the model to a particular task or process.
+- **Cues**: direct the LLM's output in the right direction.
+- **Supporting content**: represents supplemental information the LLM can use to generate output.
+
+The process of creating good prompts for a scenario is called *prompt engineering*. For more information about prompts and best practices for prompt engineering, see [Azure OpenAI Service - Azure OpenAI | Microsoft Learn](../ai-services/openai/concepts/prompt-engineering.md).
+
+### Tokens
+
+Tokens are small chunks of text generated by splitting the input text into smaller segments. These segments can either be words or groups of characters, varying in length from a single character to an entire word. For instance, the word `hamburger` would be divided into tokens such as `ham`, `bur`, and `ger`, while a short and common word like `pear` would be considered a single token.
+
+In Azure OpenAI, input text provided to the API is turned into tokens (tokenized). The number of tokens processed in each API request depends on factors such as the length of the input, output, and request parameters. The quantity of tokens being processed also impacts the response time and throughput of the models. There are limits to the number of tokens each model can take in a single request/response from Azure OpenAI. [Learn more about Azure OpenAI Service quotas and limits](../ai-services/openai/quotas-limits.md).
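+
+To see tokenization in action, here's a short sketch that assumes the open-source `tiktoken` package (`pip install tiktoken`); the exact token split depends on the encoding a model uses:
+
+```python
+import tiktoken
+
+encoding = tiktoken.get_encoding("cl100k_base")
+tokens = encoding.encode("hamburger")
+# Print the token count and the text piece each token maps back to.
+print(len(tokens), [encoding.decode([t]) for t in tokens])
+```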
+
+### Vectors
+
+Vectors are ordered arrays of numbers (typically floats) that can represent information about some data. For example, an image can be represented as a vector of pixel values, or a string of text can be represented as a vector of ASCII values. The process for turning data into a vector is called *vectorization*.
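+
+For example, the "string as a vector of ASCII values" idea is a one-line toy vectorization (an illustration only, not an embedding):
+
+```python
+text = "pear"
+vector = [ord(ch) for ch in text]  # one ASCII/Unicode code point per character
+print(vector)  # [112, 101, 97, 114]
+```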
+
+### Embeddings
+
+Embeddings are vectors that represent important features of data. Embeddings are often learned by using a deep learning model, and machine learning and AI models utilize them as features. Embeddings can also capture semantic similarity between similar concepts. For example, in generating an embedding for the words `person` and `human`, we would expect their embeddings (vector representations) to be similar in value because the words are also semantically similar.
+
+Azure OpenAI features models for creating embeddings from text data. The service breaks text out into tokens and generates embeddings using models pretrained by OpenAI. [Learn more about creating embeddings with Azure OpenAI](../ai-services/openai//concepts/understand-embeddings.md).
+
+### Vector search
+
+Vector search refers to the process of finding all vectors in a dataset that are semantically similar to a specific query vector. For example, given a query vector for the word `human`, if I search the entire dictionary for semantically similar words, I would expect to find the word `person` as a close match. This closeness, or distance, is measured using a similarity metric such as cosine similarity. The more similar the vectors are, the smaller the distance between them.
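+
+To make that concrete, here's a toy brute-force nearest-neighbor search; the two-dimensional vectors are made up for illustration, since real embeddings have far more dimensions:
+
+```python
+import math
+
+def cosine(a: list[float], b: list[float]) -> float:
+    dot = sum(x * y for x, y in zip(a, b))
+    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))
+
+# Made-up 2-D "embeddings"; semantically similar words point in similar directions.
+corpus = {"person": [0.9, 0.1], "banana": [0.1, 0.95], "human": [0.88, 0.12]}
+query = corpus["human"]
+
+best = max((w for w in corpus if w != "human"), key=lambda w: cosine(query, corpus[w]))
+print(best)  # person
+```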
+
+Consider a scenario where you have a query over millions of documents and you want to find the most similar document in your data. You can create embeddings for your data and the query using Azure OpenAI. Then, you can perform a vector search to find the most similar documents from your dataset. Performing a vector search across a few examples is trivial, but performing the same search across thousands or millions of data points becomes challenging. There are also trade-offs between exhaustive search and approximate nearest neighbor (ANN) search methods, including latency, throughput, accuracy, and cost, all of which depend on the requirements of your application.
+
+Adding Azure Cosmos DB vector search capabilities to Azure OpenAI Service enables you to store long-term memory and chat history to improve your Large Language Model (LLM) solution. Vector search allows you to efficiently query back the most relevant context to personalize Azure OpenAI prompts in a token-efficient manner. Storing vector embeddings alongside the data in an integrated solution minimizes the need to manage data synchronization and accelerates your time-to-market for AI app development.
+
+## Using Azure Cosmos DB data with Azure OpenAI
+
+The RAG pattern harnesses external knowledge and models to effectively handle custom data or domain-specific knowledge. It involves extracting pertinent information from an external data source and integrating it into the model request through prompt engineering.
+
+A robust mechanism is necessary to identify the most relevant data from the external source that can be passed to the model, given the limited number of tokens allowed per request. This limitation is where embeddings play a crucial role. By converting the data in our database into embeddings and storing them as vectors for future use, we gain the advantage of capturing the semantic meaning of the text, going beyond mere keywords to comprehend the context.
+
+Prior to sending a request to Azure OpenAI, the user input/query/request is also transformed into an embedding, and vector search techniques are employed to locate the most similar embeddings within the database. This technique enables the identification of the most relevant data records in the database. These retrieved records are then supplied as input to the model request using prompt engineering.
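+
+The following is a minimal, in-memory sketch of the retrieval step described above. The documents, embeddings, and prompt template are made-up placeholders; in practice, embeddings come from an embedding model and are stored in a vector database or index:
+
+```python
+# In-memory sketch of the RAG retrieval step: embed the query (stubbed here),
+# find the most similar document by cosine similarity, and splice it into the prompt.
+import numpy as np
+
+# Placeholder documents with made-up embedding vectors.
+documents = {
+    "doc1": ("Azure Cosmos DB is a globally distributed database.", np.array([0.9, 0.1])),
+    "doc2": ("Redis is an in-memory data store.", np.array([0.2, 0.8])),
+}
+
+def cosine_similarity(a, b):
+    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
+
+query_text = "Tell me about Cosmos DB"
+query_embedding = np.array([0.85, 0.15])  # placeholder: embed the query in practice
+
+# Retrieve the most similar document and supply it as context in the prompt.
+best_id = max(documents, key=lambda d: cosine_similarity(query_embedding, documents[d][1]))
+context = documents[best_id][0]
+prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query_text}"
+print(prompt)
+```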
+
+## Azure Cosmos DB for NoSQL and Azure Cognitive Search
+
+Implement RAG patterns with Azure Cosmos DB for NoSQL and Azure Cognitive Search. This approach enables powerful integration of your data residing in Azure Cosmos DB for NoSQL into your AI-oriented applications. Azure Cognitive Search empowers you to efficiently index and query high-dimensional vector data stored in Azure Cosmos DB for NoSQL.
+
+### Code samples
+
+- [.NET retail chatbot demo](https://github.com/AzureCosmosDB/VectorSearchAiAssistant/tree/cognitive-search-vector-v2)
+- [.NET samples - Hackathon project](https://github.com/AzureCosmosDB/OpenAIHackathon)
+- [.NET tutorial - recipe chatbot](https://github.com/microsoft/AzureDataRetrievalAugmentedGenerationSamples/tree/main/C%23/CosmosDB-NoSQL_CognitiveSearch)
+- [.NET tutorial - recipe chatbot w/ Semantic Kernel](https://github.com/microsoft/AzureDataRetrievalAugmentedGenerationSamples/tree/main/C%23/CosmosDB-NoSQL_CognitiveSearch_SemanticKernel)
+- [Python notebook tutorial - Azure product chatbot](https://github.com/microsoft/AzureDataRetrievalAugmentedGenerationSamples/tree/main/Python/CosmosDB-NoSQL_CognitiveSearch)
+
+## Azure Cosmos DB for MongoDB vCore
+
+RAG can be applied using the native vector search feature in Azure Cosmos DB for MongoDB vCore, facilitating a smooth merger of your AI-centric applications with your stored data in Azure Cosmos DB. The use of vector search offers an efficient way to store, index, and search high-dimensional vector data directly within Azure Cosmos DB for MongoDB vCore alongside other application data. This approach removes the necessity of migrating your data to costlier alternatives for vector search.
+
+### Code samples
+
+- [.NET retail chatbot demo](https://github.com/AzureCosmosDB/VectorSearchAiAssistant/tree/mongovcorev2)
+- [.NET tutorial - recipe chatbot](https://github.com/microsoft/AzureDataRetrievalAugmentedGenerationSamples/tree/main/C%23/CosmosDB-MongoDBvCore)
+- [Python notebook tutorial - Azure product chatbot](https://github.com/microsoft/AzureDataRetrievalAugmentedGenerationSamples/tree/main/Python/CosmosDB-MongoDB-vCore)
+
+## Azure Cosmos DB for PostgreSQL
+
+You can employ RAG by utilizing native vector search within Azure Cosmos DB for PostgreSQL. This strategy provides a seamless integration of your AI-driven applications, including the ones developed using Azure OpenAI embeddings, with your data housed in Azure Cosmos DB. By taking advantage of vector search, you can effectively store, index, and execute queries on high-dimensional vector data directly within Azure Cosmos DB for PostgreSQL along with the rest of your data.
+
+### Code samples
+
+- Python: [Python notebook tutorial - food review chatbot](https://github.com/microsoft/AzureDataRetrievalAugmentedGenerationSamples/tree/main/Python/CosmosDB-PostgreSQL_CognitiveSearch)
+
+## Next steps
+
+- [Vector search with Azure Cognitive Search](../search/vector-search-overview.md)
+- [Vector search with Azure Cosmos DB for MongoDB vCore](mongodb/vcore/vector-search.md)
+- [Vector search with Azure Cosmos DB for PostgreSQL](postgresql/howto-use-pgvector.md)
cosmos-db Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Cosmos DB description: Lists Azure Policy Regulatory Compliance controls available for Azure Cosmos DB. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/03/2023 Last updated : 08/25/2023
cost-management-billing Get Started Partners https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/get-started-partners.md
To view costs for a subscription, open **Cost Management** in the customer's Azu
Cost analysis, budgets, and alerts are available for the subscription and resource group Azure RBAC scopes at pay-as-you-go rate-based costs.
-Amortized views and actual costs for reserved instances in the Azure RBAC scopes show zero charges. Purchase costs for entitlements such as Reserved instances and Marketplace fees are only shown in billing scopes in the partner's tenant where the purchases were made.
+Amortized views and actual costs for reserved instances in the Azure RBAC scopes show zero charges. Purchase costs for entitlements such as reserved instances, savings plan purchases, and Marketplace fees are only shown in billing scopes in the partner's tenant where the purchases were made.
The retail rates used to compute costs shown in the view are the same prices shown in the Azure Pricing Calculator for all customers. Costs shown don't include any discounts or credits that the partner may have like Partner Earned Credits, Tier Discounts, and Global Service discounts.
cost-management-billing Pricing Calculator https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/pricing-calculator.md
+
+ Title: Estimate costs with the Azure pricing calculator
+description: This article explains how to use the Azure pricing calculator to turn anticipated usage into an estimated cost, which makes it easier to plan and budget for your Azure usage.
++ Last updated : 08/23/2023++++++
+# Estimate costs with the Azure pricing calculator
+
+The Azure pricing calculator helps you turn anticipated usage into an estimated cost, which makes it easier to plan and budget for your Azure usage. Whether you're a small business owner or an enterprise-level organization, the web-based tool helps you make informed decisions about your cloud spending. When you log in, the calculator also provides a cost estimate for your Azure consumption with your negotiated or discounted prices. This article explains how to use the Azure pricing calculator.
+
+>[!NOTE]
+> Prices shown in this article are examples to help you understand how the calculator works. They are not actual prices.
+
+## Access the Azure pricing calculator
+
+There are two ways to navigate to the calculator:
+
+- Go to [https://azure.microsoft.com/pricing/calculator/](https://azure.microsoft.com/pricing/calculator/)
+
+-Or-
+
+- Go to the [Azure website](https://azure.microsoft.com/) and select the pricing calculator link under **Pricing** in the navigation menu.
+
+## Understand the Azure pricing calculator
+
+Let's look at the three main sections of the pricing calculator page:
+
+**The product picker** - It shows all Azure services that the calculator can estimate costs for. In this section, there's a search box, Azure service categories, and product cards.
++
+There are other tabs next to the **Products** tab that we cover later. There's also a **Log in** link to authenticate for various functions and features that we cover later.
+
+**Estimate and product configuration** - The pricing calculator helps you build _estimates_, which are collections of Azure products, similar to a shopping cart.
+
+Until you add products to your estimate, it appears blank. Here's an example.
++
+When you add a product to your estimate, the following sections get added to your estimate:
+
+- The estimation tools are at the top of the estimate.
+- The product configuration is under the estimation tools.
+
+**Estimation summary** - The estimation summary is shown below the product configuration.
++
+As you continue to add more services to your estimate, more product configuration sections get added, one per service.
+
+Below your estimate are some links for next steps. There's also a feedback link to help improve the Azure pricing calculator experience.
++
+## Build an estimate
+
+Since it's your first time, you start with an empty estimate.
+
+1. Use the product picker to find a product. You can browse the catalog or search for the Azure service name.
+2. Select a product tile to add it to the estimate. It adds the product with a default configuration.
+3. The top of the configuration shows high-level filters like region, product type, tiers, and so on. Use the filters to narrow your product selection. The configurations offered change to reflect the features offered by the selected subproduct.
+4. Update the default configurations to show your expected monthly consumption. Estimates automatically update for the new configuration. For example, a virtual machine configuration defaults to run for one month (730 hours). Changing the configuration to 200 hours automatically updates the estimate.
+5. Some products offer special pricing plans, like reserved instances or savings plans. You can choose these options, if available, to lower your costs.
+6. Depending on the selected product or pricing plan, the estimate is split into upfront and monthly costs.
+ - Upfront costs are incurred before the product is consumed.
+ - Monthly costs are incurred after the product is consumed.
+7. Although optional, we recommend that you give the configuration a unique name. Finding a particular configuration in a large estimate is easier when it has a unique name.
+8. Repeat the steps to add more products to an estimate.
+9. Finally, don't forget to add a support plan. Choose from Basic, Developer, Standard, or Professional Direct. For more information, see [Compare support plans](https://azure.microsoft.com/support/plans/).
+
+Here's an example of a virtual machine configuration:
++
+## Use advanced calculator features
+
+Here's an example with detailed descriptions of all the elements and options in an estimate.
++
+| Item number | Name | Description |
+| | | |
+| 1 | Your estimate | Creates multiple estimates to build what-if scenarios. Or, segregates an estimate for different teams or applications. |
+| 2 | Expand all | Expands all configurations to view the details of each product configuration. |
+| 3 | Collapse all | Collapses all configurations to view a high-level view of the products in the estimate. |
+| 4 | Rearrange the services | Rearranges products in the estimate to form a group. For example, group by product type or application. |
+| 5 | Delete all | Deletes all products in the estimate to start with an empty estimate. The action can't be undone. |
+| 6 | More info | Select to learn more about the product, pricing, or browse product documentation. |
+| 7 | Clone | Clones the current product with its configuration to quickly create a similar product. |
+| 8 | Delete | Deletes the selected product and its configuration. The action can't be undone. |
+| 9 | Export | Exports the current estimate to an Excel file. You can share the file with others. |
+| 10 | Save | Saves your progress with the estimate. |
+| 11 | Save as | Saves a copy of the estimate under a new name. |
+| 12 | Share | Creates a unique link for the estimate. You can share the link with others. However, only you can make changes to the estimate. |
+| 13 | Currency | Changes the estimated costs to another currency. |
+
+## Understand calculator data
+
+This section provides more information about where the pricing comes from, how calculations work, and alternative sources for the prices in the calculator.
+
+The per-unit pricing information displayed in the Azure Pricing Calculator originates from data provided by the [Azure Retail Prices API](/rest/api/cost-management/retail-prices/azure-retail-prices).
+
+The Azure Pricing Calculator considers various factors to provide a cost estimate. Here's how it works:
+
+- **Product Configuration -** The calculator pulls the per-unit pricing for each product from the Azure Retail Pricing API based on the different product parameters selected by the user such as: region, size, operating system, tier, and other specific features.
+- **Consumption Estimation -** The calculator then uses the usage quantities that you input, such as hours and units, to estimate consumption and calculate estimated costs.
+- **Pricing plans -** You can select from different pricing plans and savings options for each product. They include pay-as-you-go, one- or three-year reserved instances, and savings plans for discounted rates. Selecting a different pricing plan results in different pricing.
+
+If you need to access pricing information programmatically or require more detailed pricing data, you can use the Azure Retail Pricing API. The API provides comprehensive retail price information for all Azure services across different regions and currencies. For more information, see [Azure Retail Prices API](/rest/api/cost-management/retail-prices/azure-retail-prices).
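+
+As a minimal sketch, you can query the Azure Retail Prices API with Python's `requests` package. The filter below is only an example; see the API documentation for the full set of filterable fields:
+
+```python
+# Query the Azure Retail Prices API (no authentication required).
+# The $filter value is an example; adjust serviceName and armRegionName as needed.
+import requests
+
+url = "https://prices.azure.com/api/retail/prices"
+params = {
+    "$filter": "serviceName eq 'Virtual Machines' and armRegionName eq 'westus'"
+}
+response = requests.get(url, params=params)
+response.raise_for_status()
+
+# Print a few per-unit retail prices from the first page of results.
+for item in response.json()["Items"][:5]:
+    print(item["productName"], item["skuName"], item["retailPrice"], item["unitOfMeasure"])
+```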
+
+## View an estimate with your agreement prices
+
+The calculator helps you understand the retail costs of your Azure services, but it can also show any negotiated rates specific to your Azure Billing Account. Showing your negotiated prices helps you to get a more accurate representation of your expected costs.
+
+At the bottom of your calculator estimate, notice the list item titled **Licensing Program**.
++
+After you log in (not the sign-in at the top of the page, which takes you to the Azure portal), select the **Licensing Program** list item to see the following options:
+
+- Microsoft Customer Agreement (MCA)
+- Enterprise Agreement (EA)
+- New Commerce Cloud Solution Provider (CSP)
+- Microsoft Online Service Agreement (MOSA)
+
+If you have negotiated pricing associated with an MCA Billing Account:
+
+1. Select the **Microsoft Customer Agreement (MCA)** option in the licensing program.
+2. Select **None selected (change)**.
+ :::image type="content" source="./media/pricing-calculator/none-selected-change.png" alt-text="Screenshot showing the None selected (change) option." lightbox="./media/pricing-calculator/none-selected-change.png" :::
+3. Select a billing account and select **Apply**.
+ :::image type="content" source="./media/pricing-calculator/choose-billing-account.png" alt-text="Screenshot showing the Choose Billing Account area." lightbox="./media/pricing-calculator/choose-billing-account.png" :::
+4. Next, select a billing profile and select **Apply**.
+ :::image type="content" source="./media/pricing-calculator/choose-billing-profile.png" alt-text="Screenshot showing the Choose Billing Profile area." lightbox="./media/pricing-calculator/choose-billing-profile.png" :::
+
+Your calculator estimate updates with your MCA price sheet information.
+
+If you have negotiated pricing associated with an EA billing account:
+
+1. Select the **Enterprise Agreement (EA)** option in the licensing program list.
+ :::image type="content" source="./media/pricing-calculator/select-program-offer-enterprise-agreement.png" alt-text="Screenshot showing the Enterprise Agreement (EA) list item." lightbox="./media/pricing-calculator/select-program-offer-enterprise-agreement.png" :::
+2. In the Choose Agreement area, select your enrollment or your billing account ID and then select **Apply**.
+ :::image type="content" source="./media/pricing-calculator/select-choose-agreement-enterprise-agreement.png" alt-text="Screenshot showing the Choose Agreement area." lightbox="./media/pricing-calculator/select-choose-agreement-enterprise-agreement.png" :::
+
+Your calculator estimate refreshes with your EA price sheet information.
+
+If you want to change your selected enrollment, select the **Selected agreement** link to the right of the licensing program list item. Here's an example.
++
+If you're a Cloud Solution Provider (CSP) partner who has transitioned to the new commerce experience, you can view your estimate by selecting the Microsoft Customer Agreement (MCA) option in the licensing program.
+
+>[!NOTE]
+> Partner Earned Credit (PEC) estimation isn't available in the calculator, so you need to manually apply your anticipated PEC to the monthly estimate.
+
+If you don't have access to log in to the calculator to see negotiated prices, contact your administrator or Azure Account Manager.
+
+## Help us improve the calculator
+
+If you want to provide feedback about the Pricing Calculator, there's a link at the bottom of the page. We welcome your feedback.
++
+## Next steps
+
+- Estimate prices with the [Azure Pricing Calculator](https://azure.microsoft.com/pricing/calculator/).
+- Learn more about the [Azure Retail Prices API](/rest/api/cost-management/retail-prices/azure-retail-prices).
cost-management-billing Quick Acm Cost Analysis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/quick-acm-cost-analysis.md
Depending on the view and scope you're using, you may also see cost insights bel
:::image type="content" source="./media/quick-acm-cost-analysis/see-insights.png" alt-text="Screenshot showing insights." lightbox="./media/quick-acm-cost-analysis/see-insights.png" :::
-Lastly, use the table to find your top cost contributors and expand each row to understand how costs are broken down to the next level. Examples include resources with their product meters and services with a breakdown of products.
+Lastly, use the table to identify and review your top cost contributors and drill in for more details.
:::image type="content" source="./media/quick-acm-cost-analysis/table-show-cost-contributors.png" alt-text="Screenshot showing a table view of subscription costs with their nested resources." lightbox="./media/quick-acm-cost-analysis/table-show-cost-contributors.png" ::: This view is where you spend most of your time in Cost analysis. To explore further:
-1. Open other smart views to get different perspectives on your cost.
-2. If you want to drill into data further, you might need to [Change scope](understand-work-scopes.md#switch-between-scopes-in-cost-management) to a lower level. For example, you can't view the Subscriptions smart view if your current scope is a subscription.
-3. Open a custom view and apply other filters or group the data to explore.
+1. Expand rows to take a quick peek and see how costs are broken down to the next level. Examples include resources with their product meters and services with a breakdown of products.
+2. Select the name to drill down and see the next level details in a full view. From there, you can drill down again and again, to get down to the finest level of detail, based on what you're interested in. Examples include selecting a subscription, then a resource group, and then a resource to view the specific product meters for that resource.
+3. Select the shortcut menu (⋯) to see related costs. Examples include filtering the list of resource groups to a subscription or filtering resources to a specific location or tag.
+4. Select the shortcut menu (⋯) to open the management screen for that resource, resource group, or subscription. From this screen, you can stop or delete resources to avoid future charges.
+5. Open other smart views to get different perspectives on your costs.
+6. Open a customizable view and apply other filters or group the data to explore further.
> [!NOTE] > If you want to visualize and monitor daily trends within the period, enable the [chart preview feature](enable-preview-features-cost-management-labs.md#chartsfeature) in Cost Management Labs, available from the **Try preview** command.
cost-management-billing Reservation Utilization Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/reservation-utilization-alerts.md
Title: Reservation utilization alerts
description: This article helps you set up and use reservation utilization alerts. Previously updated : 08/10/2023 Last updated : 08/30/2023
The reservation utilization alert is used to monitor the utilization of most cat
## Supported scopes and required permissions
-You can create a reservation utilization alert rule at any of the following scopes provided you have adequate permissions. For example, if you are an enterprise admin within an enterprise agreement, the alert rule should be created at the enrollment scope. It's important to note that this alert will monitor all reservations available within the enrollment, regardless of their benefit scope such as single resource group, single subscription, management group, or shared.
+You can create a reservation utilization alert rule at any of the following scopes provided you have adequate permissions. For example, if you're an enterprise admin within an enterprise agreement, the alert rule should be created at the enrollment scope. It's important to note that this alert monitors all reservations available within the enrollment, regardless of their benefit scope. Scopes include single resource group, single subscription, management group, or shared.
| Supported agreement | Alert rule scope | Required role | Supported actions | | | | | |
The following table explains the fields in the alert rule form.
| Utilization percentage | Mandatory | When any of the reservations have a utilization that is less than the target percentage, then the alert notification is sent. | Utilization is less than 95% | | Time grain | Mandatory | Choose the time over which reservation utilization value should be averaged. For example, if you choose Last 7-days, then the alert rule evaluates the last 7-day average reservation utilization of all reservations. **Note**: Last day reservation utilization is subject to change because the usage data refreshes. So, Cost Management relies on the last 7-day or 30-day averaged utilization, which is more accurate. | Last 7-days, Last 30-days| | Start on | Mandatory | The start date for the alert rule. | Current or any future date |
-| Sent | Mandatory | Choose the rate at which consecutive alert notifications are sent. For example, assume that you chose the weekly option. If you receive your first alert notification on 2 May, then the next possible notification is sent a week later, which is 9 May. | Daily ΓÇô If you want everyday notification.<br><br>Weekly ΓÇô If you want the notifications to be a week apart.<br><br> Monthly ΓÇô If you want the notifications to be a month apart.|
+| Sent | Mandatory | Choose the rate at which consecutive alert notifications are sent. For example, assume that you chose the weekly option. If you receive your first alert notification on 2 May, then the next possible notification is sent a week later, which is 9 May. **Note**: The `Sent` and `Time grain` fields in the alert rule form are independent of each other. You can set them according to your needs. | Daily – If you want a notification every day.<br><br>Weekly – If you want the notifications to be a week apart.<br><br> Monthly – If you want the notifications to be a month apart.|
| Until | Mandatory | The end date for the alert rule. | The end date can be anywhere from one day to three years from the current date or the start date, whichever comes first. For example, if you create an alert on 3 March 2023, the end date can be any date from 4 March 2023, to 3 March 2026. | | Recipients | Mandatory | You can enter up to 20 email IDs including distribution lists as alert recipients. | admin@contoso.com | | Language | Mandatory | The language to be used in the alert email body | Any language supported by the Azure portal |
cost-management-billing Understand Cost Mgt Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/understand-cost-mgt-data.md
The following tables show data that's included or isn't in Cost Management. All
| **Included** | **Not included** | | | |
-| Azure service usage⁵ | Support charges - For more information, see [Invoice terms explained](../understand/understand-invoice.md). |
-| Marketplace offering usage⁶ | Taxes - For more information, see [Invoice terms explained](../understand/understand-invoice.md). |
-| Marketplace purchases⁶ | Credits - For more information, see [Invoice terms explained](../understand/understand-invoice.md). |
-| Reservation purchases⁷ | |
+| Azure service usage (including deleted resources)⁵ | Unbilled services (for example, free tier resources) |
+| Marketplace offering usage⁶ | Support charges - For more information, see [Invoice terms explained](../understand/understand-invoice.md). |
+| Marketplace purchases⁶ | Taxes - For more information, see [Invoice terms explained](../understand/understand-invoice.md). |
+| Reservation purchases⁷ | Credits - For more information, see [Invoice terms explained](../understand/understand-invoice.md). |
| Amortization of reservation purchases⁷ | | | New Commerce non-Azure products (Microsoft 365 and Dynamics 365) ⁸ | |
_⁷ Reservation purchases are only available for Enterprise Agreement (EA) and
_⁸ Only available for specific offers._
+Cost Management data includes the usage and purchases of services and resources at the time they were consumed. Because cost data is historical, it includes resources, resource groups, and subscriptions that have since been stopped, deleted, or canceled, so it may not match what you see in other tools, like Azure Resource Manager or Azure Resource Graph, which only show the resources currently deployed in your subscriptions. Not all resources emit usage, so some may not be represented in the cost data. Similarly, some resources aren't tracked by Azure Resource Manager and may not be represented in subscription resources.
+ ## How tags are used in cost and usage data Cost Management receives tags as part of each usage record submitted by the individual services. The following constraints apply to these tags:
cost-management-billing Ea Direct Portal Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/ea-direct-portal-get-started.md
tags: billing
Previously updated : 12/16/2022 Last updated : 09/06/2023
This article helps direct and indirect Azure Enterprise Agreement (Azure EA) cus
- Cost analysis in the Azure portal > [!NOTE]
-> We recommend that both direct and indirect EA Azure customers use Cost Management + Billing in the Azure portal to manage their enrollment and billing instead of using the EA portal. For more information about enrollment management in the Azure portal, see [Get started with EA billing in the Azure portal](ea-direct-portal-get-started.md).
+> The Azure Enterprise portal (EA portal) is being deprecated. We recommend that EA customers and partners use Cost Management + Billing in the Azure portal to manage their enrollment and billing instead of using the EA portal. For more information about enrollment management in the Azure portal or Azure Government portal, see [Get started with EA billing in the Azure portal](ea-direct-portal-get-started.md).
>
-> As of February 20, 2023 indirect EA customers wonΓÇÖt be able to manage their billing account in the EA portal. Instead, they must use the Azure portal.
->
-> This change doesnΓÇÖt affect Azure Government EA enrollments. They continue using the EA portal to manage their enrollment.
+> - The EA portal is retiring on November 15, 2023, for EA enrollments in the Azure commercial cloud.
+> - Starting November 15, 2023, indirect EA customers and partners won't be able to manage their Azure Government EA enrollment in the EA portal. Instead, they must use the Azure Government portal.
+> - The Azure Government portal is accessed only with Azure Government credentials. For more information, see [Access your EA billing account in the Azure Government portal](../../azure-government/documentation-government-how-to-access-enterprise-agreement-billing-account.md).
We have several videos that walk you through getting started with the Azure portal for Enterprise Agreements. Check out the series at [Enterprise Customer Billing Experience in the Azure portal](https://www.youtube.com/playlist?list=PLeZrVF6SXmsoHSnAgrDDzL0W5j8KevFIm).
You view and track your Microsoft Azure Consumption Commitment (MACC) in the Azu
## Now that you're familiar with the basics, here are some more links to help you get onboarded
-[Azure EA pricing](./ea-pricing-overview.md) provides details on how usage is calculated and goes over charges for various Azure services in the Enterprise Agreement where the calculations are more complex.
+[Azure EA pricing](./ea-pricing-overview.md) provides details about how usage is calculated. It also explains how charges are calculated for various Azure services in the Enterprise Agreement, where the calculations are more complex.
If you'd like to know about how Azure reservations for VM reserved instances can help you save money with your enterprise enrollment, see [Azure EA VM reserved instances](ea-portal-vm-reservations.md).
cost-management-billing Link Partner Id https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/link-partner-id.md
Various partner programs have differing rules for the RBAC roles. Contact your P
For more information, see: - [Azure Partner Admin Link](https://www.microsoftpartnercommunity.com/atvwr79957/attachments/atvwr79957/Webinars/53/1/Azure%20Partner%20Admin%20Link%20FAQ.pdf)-- [Get recognized with Partner Admin Link](https://www.microsoftpartnercommunity.com/t5/Microsoft-Partner-Network/Get-recognized-with-Partner-Admin-Link/td-p/8389/page/2)
+- [Get recognized with Partner Admin Link](https://techcommunity.microsoft.com/t5/microsoft-partner-community/ct-p/PartnerCommunity)
**Who can link the partner ID?**
cost-management-billing Mca Enterprise Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/mca-enterprise-operations.md
The following changes apply to enterprise administrators on an Enterprise Agreem
- A billing profile is created for your enrollment. You'll use the billing profile to manage billing for your organization, like your Enterprise Agreement enrollment. To learn more about billing profiles, [understand billing profiles](../understand/mca-overview.md#billing-profiles). - An invoice section is created for each department in your Enterprise Agreement enrollment. You'll use the invoice sections to manage your departments. You can create new invoice sections to set up additional departments. To learn more about invoice sections, see [understand invoice sections](../understand/mca-overview.md#invoice-sections).-- You'll use the Azure subscription creator role on invoice sections to give others permission to create Azure subscription, like the accounts that were created in Enterprise Agreement enrollment.
+- You'll use the Azure subscription creator role on invoice sections to give others permission to create Azure subscriptions, like the accounts that were created in Enterprise Agreement enrollment.
- You'll use the [Azure portal](https://portal.azure.com) to manage billing for your organization, instead of the Azure EA portal. You are given the following roles on the new billing account:
An invoice section is created for each department you had in your Enterprise Agr
The accounts that were created in your Enterprise Agreement enrollment are not supported in the new billing account. The account's subscriptions belong to the respective invoice section for their department. Account owners can create and manage subscriptions for their invoice sections.
-To view aggregate cost for subscriptions that belonged to an account, you must set a cost center for each subscription. Then you can use the Azure usage and charges csv file to filter the subscriptions by the cost center.
+To view the aggregate cost for subscriptions that belonged to an account, you must set a cost center for each subscription. Then you can use the Azure usage and charges .csv file to filter the subscriptions by the cost center.
### Download usage and charges csv, price sheet, and tax documents
-A monthly invoice is generated for each billing profile in your billing account. For each invoice, you can download Azure usage and charges csv file, price sheet, and tax document (if applicable). You can also download Azure usage and charges csv file for the current month's charges.
+A monthly invoice is generated for each billing profile in your billing account. For each invoice, you can download the Azure usage and charges .csv file, price sheet, and tax document (if applicable). You can also download the Azure usage and charges .csv file for the current month's charges.
-To learn how to download Azure usage and charges csv file, see [download usage for your Microsoft Customer Agreement](../understand/download-azure-daily-usage.md).
+To learn how to download the Azure usage and charges .csv file, see [download usage for your Microsoft Customer Agreement](../understand/download-azure-daily-usage.md).
-To learn how to download price sheet, see [download pricing for your Microsoft Customer Agreement](ea-pricing.md).
+To learn how to download the price sheet, see [download pricing for your Microsoft Customer Agreement](ea-pricing.md).
To learn how to download tax documents, see [view the tax documents for your Microsoft Customer Agreement](../understand/mca-download-tax-document.md#view-and-download-tax-documents).
Create an invoice section to organize your costs based on your needs, like depar
### Create a new account
-Assign users the Azure subscription creator role on invoice sections to give them permission to create Azure subscription, like the accounts that are created in Enterprise Agreement enrollment. For more information on assigning roles, see [Manage billing roles in the Azure portal](understand-mca-roles.md#manage-billing-roles-in-the-azure-portal)
+Assign users the Azure subscription creator role on invoice sections to give them permission to create Azure subscriptions, like the accounts that are created in Enterprise Agreement enrollment. For more information on assigning roles, see [Manage billing roles in the Azure portal](understand-mca-roles.md#manage-billing-roles-in-the-azure-portal).
## Changes for department administrators The following changes apply to department administrators on an Enterprise Agreement that got renewed to a Microsoft Customer Agreement. - An invoice section is created for each department in your Enterprise Agreement enrollment. You'll use the invoice section(s) to manage your department(s). To learn more about invoice sections, see [understand invoice sections](../understand/mca-overview.md#invoice-sections).-- You'll use the Azure subscription creator role on the invoice section to give others permission to create Azure subscription, like the accounts that are created in Enterprise Agreement enrollment.
+- You'll use the Azure subscription creator role on the invoice section to give others permission to create Azure subscriptions, like the accounts that are created in Enterprise Agreement enrollment.
- You'll use the Azure portal to manage billing for your organization, instead of the Azure EA portal. You are given the following roles on the new billing account:
-**Invoice section owner** - You are assigned the invoice section owner role on the invoice section that is created for the department(s) you had in Enterprise Agreement. The role lets you view and track charges, and control who can create Azure subscriptions and buy other products for the invoice section.
+**Invoice section owner** - You are assigned the invoice section owner role on the invoice section that is created for the department(s) you had in the Enterprise Agreement. The role lets you view and track charges and control who can create Azure subscriptions and buy other products for the invoice section.
### View charges for your department
To learn how to provide, access to your invoice section, see [manage billing rol
### Create a new account in your department
-Assign users the Azure subscription creator role on invoice section that's created for your department. For more information on assigning roles, see [Manage billing roles in the Azure portal](understand-mca-roles.md#manage-billing-roles-in-the-azure-portal)
+Assign users the Azure subscription creator role on the invoice section that's created for your department. For more information on assigning roles, see [Manage billing roles in the Azure portal](understand-mca-roles.md#manage-billing-roles-in-the-azure-portal).
### View charges for accounts in your departments The accounts that were created in your Enterprise Agreement enrollment are not supported in the new billing account. The account's subscriptions belong to the respective invoice section for their department. Account owners can create and manage subscriptions for their invoice sections.
-To view aggregate cost for subscriptions that belonged to an account in your department, you must set a cost center for each subscription. Then you can use the Azure usage and charges file to filter the subscriptions by the cost center.
+To view the aggregate cost for subscriptions that belonged to an account in your department, you must set a cost center for each subscription. Then you can use the Azure usage and charges file to filter the subscriptions by the cost center.
## Changes for account owners
Account owners on the Enterprise Agreement get permission to create Azure subscr
To create additional Azure subscriptions, you are given the following role on the new billing account.
-**Azure subscription creator** - You are assigned the Azure subscription creator role on the invoice section that is created for your department in Enterprise Agreement. If your account doesn't belong to a department, you get Azure subscription creator role on a section named Default invoice section. The role lets you create Azure subscriptions for the invoice section.
+**Azure subscription creator** - You are assigned the Azure subscription creator role on the invoice section that is created for your department in the Enterprise Agreement. If your account doesn't belong to a department, you get Azure subscription creator role on a section named Default invoice section. The role lets you create Azure subscriptions for the invoice section.
### Create an Azure subscription
You can create Azure subscriptions for your invoice section in the Azure portal.
### View charges for your account
-To view charges for subscriptions that belonged to an account, go to the [subscriptions page](https://portal.azure.com/#blade/Microsoft_Azure_Billing/SubscriptionsBlade) in the Azure portal. The subscriptions page displays charges for all your subscription.
+To view charges for subscriptions that belonged to an account, go to the [subscriptions page](https://portal.azure.com/#blade/Microsoft_Azure_Billing/SubscriptionsBlade) in the Azure portal. The subscriptions page displays charges for all your subscriptions.
### View charges for a subscription You can view charges for a subscription either on the [subscriptions page](https://portal.azure.com/#blade/Microsoft_Azure_Billing/SubscriptionsBlade) or the Azure cost analysis. For more information on Azure cost analysis, see [explore and analyze costs with Cost analysis](../costs/quick-acm-cost-analysis.md). > [!NOTE]
-> To understand what could change upon migration from Enterprise Agreement to Microsoft Customer Agreement, refer [this](https://learn.microsoft.com/azure/cost-management-billing/costs/migrate-cost-management-api) document.
+> To understand what could change upon migration from Enterprise Agreement to Microsoft Customer Agreement, refer to the [Migrate Cost Management API](../costs/migrate-cost-management-api.md) article.
## Need help? Contact support
cost-management-billing Exchange And Refund Azure Reservations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/exchange-and-refund-azure-reservations.md
You can exchange your reservation from the [Azure portal](https://portal.azure.c
1. Review and complete the transaction. [![Example image showing the VM product to purchase with an exchange, completing the return](./media/exchange-and-refund-azure-reservations/exchange-refund-confirm-exchange.png)](./media/exchange-and-refund-azure-reservations/exchange-refund-confirm-exchange.png#lightbox)
-To refund a reservation, go to **Reservation Details** and select **Refund**.
+To refund a reservation, go into the reservation that you want to cancel and select **Return**.
## Exchange multiple reservations
cost-management-billing Limited Time Central Sweden https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/limited-time-central-sweden.md
+
+ Title: Save on select Linux VMs in Sweden Central for a limited time
+description: Learn about how to save up to 50% on select Linux VMs in Sweden Central for a limited time.
+++++ Last updated : 08/28/2023++++
+# Save on select Linux VMs in Sweden Central for a limited time
+
+Save up to 50 percent compared to pay-as-you-go pricing when you purchase one-year [Azure Reserved Virtual Machine (VM) Instances](../../virtual-machines/prepay-reserved-vm-instances.md?toc=/azure/cost-management-billing/reservations/toc.json&source=azlto3) for select Linux VMs in Sweden Central for a limited time. This offer is available from September 1, 2023 through February 29, 2024.
+
+## Purchase the limited time offer
+
+To take advantage of this limited-time offer, [purchase](https://aka.ms/azure/pricing/SwedenCentral/Purchase1) a one-year term for Azure Reserved Virtual Machine Instances for qualified VM instances in the Sweden Central region.
+
+## Charge back limited time offer costs
+
+Enterprise Agreement and Microsoft Customer Agreement billing readers can view amortized cost data for reservations. They can use the cost data to charge back the monetary value for a subscription, resource group, resource, or a tag to their partners. In amortized data, the effective price is the prorated hourly reservation cost. The cost is the total cost of reservation usage by the resource on that day. Users with an individual subscription can get the amortized cost data from their usage file. For more information, see [Charge back Azure Reservation costs](charge-back-usage.md).
+
+## Terms and conditions of the limited time offer
+
+These terms and conditions (hereinafter referred to as "terms") govern the limited time offer ("offer") provided by Microsoft to customers purchasing a one-year Azure Reserved VM Instance in Sweden Central between September 1, 2023 (12 AM Pacific Standard Time) and February 29, 2024 (11:59 PM Pacific Standard Time), for any of the following VM series:
+
+- Dadsv5
+- Dasv5
+- Ddsv5
+- Ddv5
+- Dldsv5
+- Dlsv5
+- Dsv5
+- Dv5
+- Eadsv5
+- Easv5
+- Ebdsv5
+- Ebsv5
+- Edsv5
+- Edv5
+- Esv5
+- Ev5
+
+The offer provides a discount of up to 50% compared to pay-as-you-go pricing. The savings don't include operating system costs. Actual savings may vary based on instance type or usage.
+
+**Eligibility** - The Offer is open to individuals who meet the following criteria:
+
+- To buy a reservation, you must have the owner role or reservation purchaser role on an Azure subscription that's of one of the following types:
+ - Enterprise (MS-AZR-0017P or MS-AZR-0148P)
+ - Pay-As-You-Go (MS-AZR-0003P or MS-AZR-0023P)
+ - Microsoft Customer Agreement
+- Cloud solution providers can use the Azure portal or [Partner Center](/partner-center/azure-reservations?source=azlto1) to purchase Azure Reservations. You can't purchase a reservation if you have a custom role that mimics owner role or reservation purchaser role on an Azure subscription; you must use the built-in owner or built-in reservation purchaser role.
+- For more information about who can purchase a reservation, see [Buy an Azure reservation](prepare-buy-reservation.md?source=azlto2).
+
+**Offer details** - Upon successful purchase and payment for the one-year Azure Reserved VM Instance in Sweden Central for one or more of the qualified VMs during the specified period, the discount applies automatically to the number of running virtual machines in Sweden Central that match the reservation scope and attributes. You don't need to assign a reservation to a virtual machine to get the discounts. A reserved instance purchase covers only the compute part of your VM usage. For more information about how to pay and save with an Azure Reserved VM Instance, see [Prepay for Azure virtual machines to save money](../../virtual-machines/prepay-reserved-vm-instances.md?toc=/azure/cost-management-billing/reservations/toc.json&source=azlto3).
+
+- Additional taxes may apply.
+- Payment will be processed using the payment method on file for the selected subscriptions.
+- Estimated savings are calculated based on your current on-demand rate.
+
+**Qualifying purchase** - To be eligible for the 50% discount, customers must make a purchase of the one-year Azure Reserved Virtual Machine Instances for one of the following qualified VMs in Sweden Central between September 1, 2023, and February 29, 2024.
+
+- Dadsv5
+- Dasv5
+- Ddsv5
+- Ddv5
+- Dldsv5
+- Dlsv5
+- Dsv5
+- Dv5
+- Eadsv5
+- Easv5
+- Ebdsv5
+- Ebsv5
+- Edsv5
+- Edv5
+- Esv5
+- Ev5
+
+Instance size flexibility is available for these VMs. For more information about Instance Size Flexibility, see [Virtual machine size flexibility](../../virtual-machines/reserved-vm-instance-size-flexibility.md?source=azlto7).
+
+**Discount limitations**
+
+- The discount automatically applies to the number of running virtual machines in Sweden Central that match the reservation scope and attributes.
+- The discount applies for one year after the date of purchase.
+- The discount only applies to resources associated with subscriptions purchased through Enterprise, Cloud Solution Provider (CSP), Microsoft Customer Agreement and individual plans with pay-as-you-go rates.
+- A reservation discount is "use-it-or-lose-it." So, if you don't have matching resources for any hour, then you lose a reservation quantity for that hour. You can't carry forward unused reserved hours.
+- When you deallocate, delete, or scale the number of VMs, the reservation discount automatically applies to another matching resource in the specified scope. If no matching resources are found in the specified scope, then the reserved hours are lost.
+- Stopped VMs are billed and continue to use reservation hours. Deallocate or delete VM resources or scale-in other VMs to use your available reservation hours with other workloads.
+- For more information about how Azure Reserved VM Instance discounts are applied, see [Understand Azure Reserved VM Instances discount](../manage/understand-vm-reservation-charges.md?source=azlto4).
+
+**Exchanges and refunds** - The offer follows standard exchange and refund policies for reservations. For more information about exchanges and refunds, see [Self-service exchanges and refunds for Azure Reservations](exchange-and-refund-azure-reservations.md?source=azlto6).
+
+**Renewals**
+
+- The renewal price **will not be** the limited time offer price, but the price available at time of renewal.
+- For more information about renewals, see [Automatically renew Azure reservations](reservation-renew.md?source=azlto5).
+
+**Termination or modification** - Microsoft reserves the right to modify, suspend, or terminate the offer at any time without prior notice.
+
+If you purchased the one-year Azure Reserved Virtual Machine Instances for the qualified VMs in Sweden Central between September 1, 2023, and February 29, 2024, you'll continue to get the discount throughout the one-year term, even if the offer is canceled.
+
+By participating in the offer, customers agree to be bound by these terms and the decisions of Microsoft. Microsoft reserves the right to disqualify any customer who violates these terms or engages in any fraudulent or harmful activities related to the offer.
+
+## Next steps
+
+- [Understand Azure Reserved VM Instances discount](../manage/understand-vm-reservation-charges.md?source=azlto4)
+- [Purchase Azure Reserved VM instances in the Azure portal](https://aka.ms/azure/pricing/SwedenCentral/Purchase1)
cost-management-billing Limited Time Us West https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/limited-time-us-west.md
+
+ Title: Save on select VMs in US West for a limited time
+description: Learn about how to save up to 50% on select VMs in US West for a limited time.
+++++ Last updated : 08/28/2023++++
+# Save on select VMs in US West for a limited time
+
+Save up to 50 percent compared to pay-as-you-go pricing when you purchase one-year [Azure Reserved Virtual Machine (VM) Instances](../../virtual-machines/prepay-reserved-vm-instances.md?toc=/azure/cost-management-billing/reservations/toc.json&source=azlto3) for `Dv3s` VMs in US West for a limited time. This offer is available from September 1, 2023 through November 30, 2023.
+
+## Purchase the limited time offer
+
+To take advantage of this limited-time offer, [purchase](https://aka.ms/azure/pricing/USWest/Purchase) a one-year term for Azure Reserved Virtual Machine Instances for qualified `Dv3s` instances in the US West region.
+
+## Charge back limited time offer costs
+
+Enterprise Agreement and Microsoft Customer Agreement billing readers can view amortized cost data for reservations. They can use the cost data to charge back the monetary value for a subscription, resource group, resource, or a tag to their partners. In amortized data, the effective price is the prorated hourly reservation cost. The cost is the total cost of reservation usage by the resource on that day. Users with an individual subscription can get the amortized cost data from their usage file. For more information, see [Charge back Azure Reservation costs](charge-back-usage.md).
+
+## Terms and conditions of the limited time offer
+
+These terms and conditions (hereinafter referred to as "terms") govern the limited time offer ("offer") provided by Microsoft to customers purchasing a one-year Azure Reserved VM Instance in US West between September 1, 2023 (12 AM Pacific Standard Time) and November 30, 2023 (11:59 PM Pacific Standard Time), for any of the following VM series:
+
+- D2v3
+- D4v3
+- D8v3
+- D16v3
+- D32v3
+- D48v3
+- D64v3
+
+The offer provides a discount of up to 50% compared to pay-as-you-go pricing. The savings don't include operating system costs. Actual savings may vary based on instance type or usage.
+
+**Eligibility** - The Offer is open to individuals who meet the following criteria:
+
+- To buy a reservation, you must have the owner role or reservation purchaser role on an Azure subscription that's one of the following types:
+ - Enterprise (MS-AZR-0017P or MS-AZR-0148P)
+ - Pay-As-You-Go (MS-AZR-0003P or MS-AZR-0023P)
+ - Microsoft Customer Agreement
+- Cloud solution providers can use the Azure portal or [Partner Center](/partner-center/azure-reservations?source=azlto1) to purchase Azure Reservations. You won't be able to purchase a reservation if you have a custom role that mimics owner role or reservation purchaser role on an Azure subscription. You must use the built-in owner or built-in reservation purchaser role.
+- For more information about who can purchase a reservation, see [Buy an Azure reservation](prepare-buy-reservation.md?source=azlto2).
+
+**Offer details** - Upon successful purchase and payment for the one-year Azure Reserved VM Instance in US West for one or more of the qualified VMs during the specified period, the discount applies automatically to the number of running virtual machines in US West that match the reservation scope and attributes. You don't need to assign a reservation to a virtual machine to get the discounts. A reserved instance purchase covers only the compute part of your VM usage. For more information about how to pay and save with an Azure Reserved VM Instance, see [Prepay for Azure virtual machines to save money](../../virtual-machines/prepay-reserved-vm-instances.md?toc=/azure/cost-management-billing/reservations/toc.json&source=azlto3).
+
+- Additional taxes may apply.
+- Payment will be processed using the payment method on file for the selected subscriptions.
+- Estimated savings are calculated based on your current on-demand rate.
+
+**Qualifying purchase** - To be eligible for the 50% discount, customers must make a purchase of the one-year Azure Reserved Virtual Machine Instances. The purchase must be for one of the following qualified VMs in US West between September 1, 2023, and November 30, 2023:
+
+- D2v3
+- D4v3
+- D8v3
+- D16v3
+- D32v3
+- D48v3
+- D64v3
+
+Instance size flexibility is available for these VMs. For more information about Instance Size Flexibility, see [Virtual machine size flexibility](../../virtual-machines/reserved-vm-instance-size-flexibility.md?source=azlto7).
+
+**Discount limitations**
+
+- The discount automatically applies to the number of running virtual machines in US West that match the reservation scope and attributes.
+- The discount applies for one year after the date of purchase.
+- The discount only applies to resources associated with subscriptions purchased through Enterprise, Cloud Solution Provider (CSP), Microsoft Customer Agreement and individual plans with pay-as-you-go rates.
+- A reservation discount is "use-it-or-lose-it." So, if you don't have matching resources for any hour, then you lose a reservation quantity for that hour. You can't carry forward unused reserved hours.
+- When you deallocate, delete, or scale the number of VMs, the reservation discount automatically applies to another matching resource in the specified scope. If no matching resources are found in the specified scope, then the reserved hours are lost.
+- Stopped VMs are billed and continue to use reservation hours. Deallocate or delete VM resources or scale-in other VMs to use your available reservation hours with other workloads.
+- For more information about how Azure Reserved VM Instance discounts are applied, see [Understand Azure Reserved VM Instances discount](../manage/understand-vm-reservation-charges.md?source=azlto4).
+
+**Exchanges and refunds** - The offer follows standard exchange and refund policies for reservations. For more information about exchanges and refunds, see [Self-service exchanges and refunds for Azure Reservations](exchange-and-refund-azure-reservations.md?source=azlto6).
+
+**Renewals**
+
+- The renewal price **will not be** the limited time offer price, but the price available at time of renewal.
+- For more information about renewals, see [Automatically renew Azure reservations](reservation-renew.md?source=azlto5).
+
+**Termination or modification** - Microsoft reserves the right to modify, suspend, or terminate the offer at any time without prior notice.
+
+If you have purchased the one-year Azure Reserved Virtual Machine Instances for the qualified VMs in US West between September 1, 2023, and November 30, 2023, you'll continue to get the discount throughout the one-year term, even if the offer is canceled.
+
+By participating in the offer, customers agree to be bound by these terms and the decisions of Microsoft. Microsoft reserves the right to disqualify any customer who violates these terms or engages in any fraudulent or harmful activities related to the offer.
+
+## Next steps
+
+- [Understand Azure Reserved VM Instances discount](../manage/understand-vm-reservation-charges.md?source=azlto4)
+- [Purchase Azure Reserved VM instances in the Azure portal](https://aka.ms/azure/pricing/USWest/Purchase)
cost-management-billing Prepare Buy Reservation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/prepare-buy-reservation.md
Previously updated : 07/20/2023 Last updated : 08/21/2023
Azure Reservations help you save money by committing to one-year or three-years
## Who can buy a reservation
-To buy a reservation, you must have owner role or reservation purchaser role on an Azure subscription that's of type Enterprise (MS-AZR-0017P or MS-AZR-0148P) or Pay-As-You-Go (MS-AZR-0003P or MS-AZR-0023P) or Microsoft Customer Agreement. Cloud solution providers can use the Azure portal or [Partner Center](/partner-center/azure-reservations) to purchase Azure Reservations. You can't buy a reservation if you have a custom role that mimics owner role or reservation purchaser role on an Azure subscription. You must use the built-in Owner or built-in Reservation Purchaser role.
+To buy a reservation, you must have owner role or reservation purchaser role on an Azure subscription that's of type Enterprise (MS-AZR-0017P or MS-AZR-0148P) or Pay-As-You-Go (MS-AZR-0003P or MS-AZR-0023P) or Microsoft Customer Agreement.
+
+Cloud solution providers can use the Azure portal or [Partner Center](/partner-center/azure-reservations) to purchase Azure Reservations. CSP partners can buy reservations on behalf of their customers in Partner Center when authorized to do so. For more information, see [Buy Microsoft Azure reservations on behalf of your customers](/partner-center/azure-reservations-buying). Alternatively, once the partner grants permission to an end customer who has the reservation purchaser role, the customer can purchase reservations in the Azure portal.
+
+You can't buy a reservation if you have a custom role that mimics owner role or reservation purchaser role on an Azure subscription. You must use the built-in Owner or built-in Reservation Purchaser role.
Enterprise Agreement (EA) customers can limit purchases to EA admins by disabling the **Add Reserved Instances** option in the EA Portal. Direct EA customers can now disable the Reserved Instance setting in the [Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_GTM/ModernBillingMenuBlade/BillingAccounts). Navigate to the Policies menu to change settings.
cost-management-billing Understand Azure Cache For Redis Reservation Charges https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/understand-azure-cache-for-redis-reservation-charges.md
Title: Understand how the reservation discount is applied to Azure Cache for Redis | Microsoft Docs description: Learn how reservation discount is applied to Azure Cache for Redis instances.--+++ Last updated 12/06/2022-+ + # How the reservation discount is applied to Azure Cache for Redis After you buy an Azure Cache for Redis reserved capacity, the reservation discount is automatically applied to cache instances that match the attributes and quantity of the reservation. A reservation covers only the compute costs of your Azure Cache for Redis. You're charged for storage and networking at the normal rates. Reserved capacity is only available for [premium tier](../../azure-cache-for-redis/quickstart-create-redis.md) caches.
cost-management-billing Understand Azure Data Explorer Reservation Charges https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/understand-azure-data-explorer-reservation-charges.md
If you have questions or need help, [create a support request](https://go.micros
To learn more about Azure reservations, see the following articles:
-* [Prepay for Azure Data Explorer compute resources with Azure Azure Data Explorer reserved capacity](/azure/data-explorer/pricing-reserved-capacity)
+* [Prepay for Azure Data Explorer compute resources with Azure Data Explorer reserved capacity](/azure/data-explorer/pricing-reserved-capacity)
* [What are reservations for Azure?](save-compute-costs-reservations.md) * [Manage Azure reservations](manage-reserved-vm-instance.md) * [Understand reservation usage for your pay-as-you-go subscription](understand-reserved-instance-usage.md)
cost-management-billing View Purchase Refunds https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/view-purchase-refunds.md
ms.reviewer: nitinarora
Previously updated : 07/28/2023 Last updated : 08/21/2023
Enterprise Agreement and Microsoft Customer Agreement billing readers can view a
## View reservation transactions in the Azure portal
-An Enterprise enrollment or Microsoft Customer Agreement billing administrator can view reservation transactions in Cost Management and Billing.
+A Microsoft Customer Agreement billing administrator can view reservation transactions in Cost Management and Billing. For EA enrollments, EA Admins, Indirect Admins, and Partner Admins can view reservation transactions in Cost Management and Billing.
To view the corresponding refunds for reservation transactions, select a **Timespan** that includes the purchase refund dates. You might have to select **Custom** under the **Timespan** list option. 1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Search for **Cost Management + Billing**.
-1. Select **Reservation transactions**.
+1. Search for **Cost Management + Billing** and select it.
+1. Select a billing scope.
+1. Select **Reservation transactions**.
+ The Reservation transactions left menu item only appears if you have a billing scope selected.
1. To filter the results, select **Timespan**, **Type**, or **Description**. 1. Select **Apply**.
cost-management-billing Scope Savings Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/savings-plan/scope-savings-plan.md
Previously updated : 02/03/2023 Last updated : 08/28/2023 # Savings plan scopes
You have the following options to scope a savings plan, depending on your needs:
- **Single resource group scope** - Applies the savings plan benefit to the eligible resources in the selected resource group only. - **Single subscription scope** - Applies the savings plan benefit to the eligible resources in the selected subscription. - **Shared scope** - Applies the savings plan benefit to eligible resources within subscriptions that are in the billing context. If a subscription was moved to a different billing context, the benefit will no longer be applied to this subscription and will continue to apply to other subscriptions in the billing context.
- - For Enterprise Agreement customers, the billing context is the enrollment.
+ - For Enterprise Agreement customers, the billing context is the enrollment. The savings plan shared scope would include multiple Active Directory tenants in an enrollment.
- For Microsoft Customer Agreement customers, the billing scope is the billing profile. - **Management group** - Applies the savings plan benefit to eligible resources in the list of subscriptions that are a part of both the management group and billing scope. To buy a savings plan for a management group, you must have at least read permission on the management group and be a savings plan owner on the billing subscription.
cost-management-billing Mca Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/understand/mca-overview.md
Previously updated : 04/26/2023 Last updated : 08/29/2023
Roles on the billing account have the highest level of permissions. By default,
Use a billing profile to manage your invoice and payment methods. A monthly invoice is generated at the beginning of the month for each billing profile in your account. The invoice contains respective charges for all Azure subscriptions and other purchases from the previous month.
-A billing profile is automatically created for your billing account. It contains one invoice section by default. You may create additional sections to easily track and organize costs based on your needs whether is it per project, department, or development environment. You'll see these sections on the billing profile's invoice reflecting the usage of each subscription and purchases you've assigned to it.
+A billing profile is automatically created for your billing account. It contains one invoice section by default. You may create more sections to easily track and organize costs based on your needs, whether it's per project, department, or development environment. The sections are shown on the billing profile's invoice, reflecting the usage of each subscription and the purchases you've assigned to it.
Roles on the billing profiles have permissions to view and manage invoices and payment methods. Assign these roles to users who pay invoices like members of the accounting team in your organization. For more information, see [billing profile roles and tasks](../manage/understand-mca-roles.md#billing-profile-roles-and-tasks). + ### Each billing profile gets a monthly invoice A monthly invoice is generated at the beginning of the month for each billing profile. The invoice contains all charges from the previous month.
-You can view the invoice, download documents and change setting to get future invoices by email, in the Azure portal. For more information, see [download invoices for a Microsoft Customer Agreement](../manage/download-azure-invoice-daily-usage-date.md#download-invoices-for-a-microsoft-customer-agreement).
+In the Azure portal, you can view the invoice, download documents, and change the setting to get future invoices by email. For more information, see [download invoices for a Microsoft Customer Agreement](../manage/download-azure-invoice-daily-usage-date.md#download-invoices-for-a-microsoft-customer-agreement).
+
+If an invoice becomes overdue, past-due email notifications are only sent to users with role assignments on the overdue billing profile. Ensure that users who should receive overdue notifications have one of the following roles:
+
+- Billing profile owner
+- Billing profile contributor
+- Invoice manager
### Invoice payment methods
Apply policies to control Azure Marketplace and Reservation purchases using a bi
### Azure plans determine pricing and service level agreement for subscriptions
-Azure plans determine the pricing and service level agreements for Azure subscriptions. They are automatically enabled when you create a billing profile. All invoice sections that are associated with the billing profile can use these plans. Users with access to the invoice section use the plans to create Azure subscriptions. The following Azure plans are supported in billing accounts for Microsoft Customer Agreement:
+Azure plans determine the pricing and service level agreements for Azure subscriptions. They're automatically enabled when you create a billing profile. All invoice sections that are associated with the billing profile can use these plans. Users with access to the invoice section use the plans to create Azure subscriptions. The following Azure plans are supported in billing accounts for Microsoft Customer Agreement:
| Plan | Definition | ||-|
cost-management-billing Pay Bill https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/understand/pay-bill.md
tags: billing, past due, pay now, bill, invoice, pay
Previously updated : 03/13/2023 Last updated : 07/21/2023
This article applies to customers with a Microsoft Customer Agreement (MCA) and
There are two ways to pay for your bill for Azure. You can pay with the default payment method of your billing profile or you can make a one-time payment with **Pay now**.
-If you signed up for Azure through a Microsoft representative, then your default payment method will always be set to *wire transfer*. Automatic credit card payment isn't an option if you signed up for Azure through a Microsoft representative. Instead, you can [pay with a credit card for individual invoices](#pay-now-in-the-azure-portal).
+If you signed up for Azure through a Microsoft representative, then your default payment method is always set to *wire transfer*. Automatic credit card payment isn't an option if you signed up for Azure through a Microsoft representative. Instead, you can [pay with a credit card for individual invoices](#pay-now-in-the-azure-portal).
[!INCLUDE [Pay by check](../../../includes/cost-management-pay-check.md)]
Here's a table summarizing payment methods for different agreement types
**The Reserve Bank of India has issued new directives.**
-On 1 October 2021, automatic payments in India may block some credit card transactions, especially transactions exceeding 5,000 INR. Because of this situation, you may need to make payments manually in the Azure portal. This directive won't affect the total amount you'll be charged for your Azure usage.
+As of October 2021, automatic payments in India may block some credit card transactions, especially transactions exceeding 5,000 INR. Because of this situation, you may need to make payments manually in the Azure portal. This directive doesn't affect the total amount you're charged for your Azure usage.
-On 8 June 2022, The Reserve Bank of India (RBI) increased the limit of e-mandates on cards for recurring payments from INR 5,000 to INR 15,000.
+As of June 2022, The Reserve Bank of India (RBI) increased the limit of e-mandates on cards for recurring payments from INR 5,000 to INR 15,000.
[Learn more about the Reserve Bank of India directive; Processing of e-mandate on cards for recurring transactions](https://www.rbi.org.in/Scripts/NotificationUser.aspx?Id=11668&Mode=0)
-On 30 September 2022, Microsoft and other online merchants will no longer be storing credit card information. To comply with this regulation Microsoft will be removing all stored card details from Microsoft Azure. To avoid service interruption, you'll need to add and verify your payment method to make a payment in the Azure portal for all invoices.
+As of September 2022, Microsoft and other online merchants no longer store credit card information. To comply with this regulation, Microsoft removed all stored card details from Microsoft Azure. To avoid service interruption, you need to add and verify your payment method to make a payment in the Azure portal for all invoices.
[Learn about the Reserve Bank of India directive; Restriction on storage of actual card data](https://rbidocs.rbi.org.in/rdocs/notification/PDFs/DPSSC09B09841EF3746A0A7DC4783AC90C8F3.PDF)
After you submit the payment, allow time for the payment to appear in the Azure
Refunds are treated as a regular charge. They're refunded to your bank account.
+## Partial payment for Azure global in China
+
+Partial payment is available for Azure global pay-as-you-go customers in China only. If you accrue usage higher than your credit card limit, you can use the following self-serve process to split the invoice amount across multiple credit cards using partial payment.
+
+>[!NOTE]
+> To avoid service interruption, pay the full invoice amount by the due date found on the invoice.
+
+To make a partial payment, use the following steps.
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+2. Search for **Cost Management + Billing**.
+3. In the left menu, select **Invoices** under **Billing**.
+4. If any of your eligible invoices are due or past due, a blue **Pay now** link for the invoice is available. Select **Pay now**.
+5. In the Pay now window, select or tap **Select a payment method** to choose an existing credit card or add a new one.
+6. After you select a payment method, select **Pay now**.
+7. If the payment fails, the partial payment feature appears in the Pay now experience. The minimum partial payment amount is 10,000 CNY. Enter an amount of at least 10,000 CNY.
+8. After you enter the amount, select the **Select a payment method** option to choose an existing credit card or add a new one. It's the card the first partial payment is applied to.
+9. After you select a payment method, select **Pay now**.
+10. Repeat steps 8 and 9 until you fully pay the invoice amount.
+ ## Pay by default payment method The default payment method of your billing profile can either be a credit card, debit card, or wire transfer.
If the default payment method for your billing profile is a credit or debit card
If your automatic credit or debit card charge gets declined for any reason, you can make a one-time payment with a credit or debit card in the Azure portal using **Pay now**.
-If you have a Microsoft Online Services Program (pay-as-you-go) account and you have a bill due, you'll see the **Pay now** banner on your subscription property page.
+If you have a Microsoft Online Services Program (pay-as-you-go) account and you have a bill due, the **Pay now** banner appears on your subscription property page.
If you want to learn how to change your default payment method to wire transfer, see [How to pay by invoice](../manage/pay-by-invoice.md).
If your default payment method is wire transfer, check your invoice for payment
> - [Bulgaria](/legal/pay/bulgaria) > - [Cameroon](/legal/pay/cameroon) > - [Canada](/legal/pay/canada)
-> - [Cape Verde](/legal/pay/cape-verde)
+> - [Cabo Verde](/legal/pay/cape-verde)
> - [Cayman Islands](/legal/pay/cayman-islands) > - [Chile](/legal/pay/chile) > - [China (PRC)](/legal/pay/china-prc)
If your default payment method is wire transfer, check your invoice for payment
> - [Lithuania](/legal/pay/lithuania) > - [Luxembourg](/legal/pay/luxembourg) > - [Macao Special Administrative Region](/legal/pay/macao)
-> - [Macedonia, Former Yugoslav Republic of](/legal/pay/macedonia)
> - [Malaysia](/legal/pay/malaysia) > - [Malta](/legal/pay/malta) > - [Mauritius](/legal/pay/mauritius)
If your default payment method is wire transfer, check your invoice for payment
> - [New Zealand](/legal/pay/new-zealand) > - [Nicaragua](/legal/pay/nicaragua) > - [Nigeria](/legal/pay/nigeria)
+> - [North Macedonia, Republic of](/legal/pay/macedonia)
> - [Norway](/legal/pay/norway) > - [Oman](/legal/pay/oman) > - [Pakistan](/legal/pay/pakistan)
To pay invoices in the Azure portal, you must have the correct [MCA permissions]
1. Sign in to the [Azure portal](https://portal.azure.com). 1. Search on **Cost Management + Billing**. 1. In the left menu, select **Invoices** under **Billing**.
-1. If any of your eligible invoices are due or past due, you'll see a blue **Pay now** link for that invoice. Select **Pay now**.
+1. If any of your eligible invoices are due or past due, a blue **Pay now** link appears for the invoice. Select **Pay now**.
1. In the Pay now window, select or tap **Select a payment method** to choose an existing credit card or add a new one. 1. After you select a payment method, select **Pay now**.
data-factory Compute Linked Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/compute-linked-services.md
You can create **Azure Databricks linked service** to register Databricks worksp
| newClusterNumOfWorker| Number of worker nodes that this cluster should have. A cluster has one Spark Driver and num_workers Executors for a total of num_workers + 1 Spark nodes. A string formatted Int32, like "1" means numOfWorker is 1 or "1:10" means autoscale from 1 as min and 10 as max. | No | | newClusterNodeType | This field encodes, through a single value, the resources available to each of the Spark nodes in this cluster. For example, the Spark nodes can be provisioned and optimized for memory or compute intensive workloads. This field is required for new cluster | No | | newClusterSparkConf | a set of optional, user-specified Spark configuration key-value pairs. Users can also pass in a string of extra JVM options to the driver and the executors via spark.driver.extraJavaOptions and spark.executor.extraJavaOptions respectively. | No |
-| newClusterInitScripts| a set of optional, user-defined initialization scripts for the new cluster. Specifying the DBFS path to the init scripts. | No |
+| newClusterInitScripts| a set of optional, user-defined initialization scripts for the new cluster. You can specify the init scripts in workspace files (recommended) or via the DBFS path (legacy). | No |
## Azure SQL Database linked service
data-factory Concepts Change Data Capture Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concepts-change-data-capture-resource.md
Previously updated : 08/08/2023 Last updated : 08/18/2023 # Change data capture resource overview
The new Change Data Capture resource in ADF allows for full fidelity change data
* JSON * ORC * Parquet
+* Azure Synapse Analytics
## Known limitations * Currently, when creating source/target mappings, each source and target is only allowed to be used once.
The new Change Data Capture resource in ADF allows for full fidelity change data
For more information on known limitations and troubleshooting assistance, please reference [this troubleshooting guide](change-data-capture-troubleshoot.md).
+## Azure Synapse Analytics as Target
+When using Azure Synapse Analytics as the target, the **Staging Settings** option is available on the main table canvas. Enabling staging is mandatory when you select Azure Synapse Analytics as the target. It significantly enhances write performance by using performant bulk loading capabilities such as the COPY INTO command. **Staging Settings** can be configured in two ways: using **Factory settings** or opting for **Custom settings**. **Factory settings** apply at the factory level. If these settings aren't configured the first time, you're directed to the global staging setting section for configuration. Once set, all CDC top-level resources adopt this configuration. **Custom settings** are scoped only to the CDC resource they're configured for and override the **Factory settings**.
+
+> [!NOTE]
+> As we utilize the COPY INTO command to transfer data from the staging location to Azure Synapse Analytics, it is advisable to ensure that all required permissions are pre-configured within Azure Synapse Analytics.
++ > [!NOTE] > We always use the last published configuration when starting a CDC. For running CDCs, while your data is being processed, you will be billed 4 v-cores of General Purpose Data Flows. ## Next steps - [Learn how to set up a change data capture resource](how-to-change-data-capture-resource.md).
+- [Learn how to set up a change data capture resource with schema evolution](how-to-change-data-capture-resource-with-schema-evolution.md).
data-factory Connector Office 365 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-office-365.md
# Copy and transform data from Microsoft 365 (Office 365) into Azure using Azure Data Factory or Synapse Analytics [!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)]
-Azure Data Factory and Synapse Analytics pipelines integrate with [Microsoft Graph data connect](/graph/data-connect-concept-overview), allowing you to bring the rich organizational data in your Microsoft 365 (Office 365) tenant into Azure in a scalable way and build analytics applications and extract insights based on these valuable data assets. Integration with Privileged Access Management provides secured access control for the valuable curated data in Microsoft 365 (Office 365). Please refer to [this link](/graph/data-connect-concept-overview) for an overview on Microsoft Graph data connect and refer to [this link](/graph/data-connect-policies#licensing) for licensing information.
+Azure Data Factory and Synapse Analytics pipelines integrate with [Microsoft Graph data connect](/graph/data-connect-concept-overview), allowing you to bring the rich organizational data in your Microsoft 365 (Office 365) tenant into Azure in a scalable way and build analytics applications and extract insights based on these valuable data assets. Integration with Privileged Access Management provides secured access control for the valuable curated data in Microsoft 365 (Office 365). Please refer to [this link](/graph/data-connect-concept-overview) for an overview of Microsoft Graph data connect.
This article outlines how to use the Copy Activity to copy data and Data Flow to transform data from Microsoft 365 (Office 365). For an introduction to copy data, read the [copy activity overview](copy-activity-overview.md). For an introduction to transforming data, read [mapping data flow overview](concepts-data-flow-overview.md).
data-factory Connector Shopify https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-shopify.md
Last updated 01/18/2023
-# Copy data from Shopify using Azure Data Factoryor Synapse Analytics (Preview)
+# Copy data from Shopify using Azure Data Factory or Synapse Analytics (Preview)
[!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)] This article outlines how to use the Copy Activity in an Azure Data Factory or Synapse Analytics pipeline to copy data from Shopify. It builds on the [copy activity overview](copy-activity-overview.md) article that presents a general overview of copy activity.
data-factory Control Flow Set Variable Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/control-flow-set-variable-activity.md
To use a Set Variable activity in a pipeline, complete the following steps:
## Setting a pipeline return value with UI
-We have expanded Set Variable activity to include a special system variable, named _Pipeline Return Value_. This allows communication from the child pipeline to the calling pipeline, in the following scenario.
+We have expanded Set Variable activity to include a special system variable, named _Pipeline Return Value_, allowing communication from the child pipeline to the calling pipeline, in the following scenario.
You don't need to define the variable, before using it. For more information, see [Pipeline Return Value](tutorial-pipeline-return-value.md)
value | String literal or expression object value that the variable is assigned
## Incrementing a variable
-A common scenario involving variable is to use a variable as an iterator within an **Until** or **ForEach** activity. In a **Set variable** activity, you can't reference the variable being set in the `value` field. To work around this limitation, set a temporary variable and then create a second **Set variable** activity. The second **Set variable** activity sets the value of the iterator to the temporary variable.
+A common scenario involving variable is to use a variable as an iterator within an **Until** or **ForEach** activity. In a **Set variable** activity, you can't reference the variable being set in the `value` field, that is, no self-referencing. To work around this limitation, set a temporary variable and then create a second **Set variable** activity. The second **Set variable** activity sets the value of the iterator to the temporary variable. Here's an example of this pattern:
-Below is an example of this pattern:
+* First you define two variables: one for the iterator, and one for temporary storage.
++
+* Then you use two activities to increment the values.
:::image type="content" source="media/control-flow-set-variable-activity/increment-variable.png" alt-text="Screenshot shows increment variable."::: ``` json {
- "name": "pipeline3",
+ "name": "pipeline1",
"properties": { "activities": [ {
- "name": "Set I",
+ "name": "Increment J",
"type": "SetVariable",
- "dependsOn": [
- {
- "activity": "Increment J",
- "dependencyConditions": [
- "Succeeded"
- ]
- }
- ],
+ "dependsOn": [],
+ "policy": {
+ "secureOutput": false,
+ "secureInput": false
+ },
"userProperties": [], "typeProperties": {
- "variableName": "i",
+ "variableName": "temp_j",
"value": {
- "value": "@variables('j')",
+ "value": "@add(variables('counter_i'),1)",
"type": "Expression" } } }, {
- "name": "Increment J",
+ "name": "Set I",
"type": "SetVariable",
- "dependsOn": [],
+ "dependsOn": [
+ {
+ "activity": "Increment J",
+ "dependencyConditions": [
+ "Succeeded"
+ ]
+ }
+ ],
+ "policy": {
+ "secureOutput": false,
+ "secureInput": false
+ },
"userProperties": [], "typeProperties": {
- "variableName": "j",
+ "variableName": "counter_i",
"value": {
- "value": "@string(add(int(variables('i')), 1))",
+ "value": "@variables('temp_j')",
"type": "Expression" } } } ], "variables": {
- "i": {
- "type": "String",
- "defaultValue": "0"
+ "counter_i": {
+ "type": "Integer",
+ "defaultValue": 0
},
- "j": {
- "type": "String",
- "defaultValue": "0"
+ "temp_j": {
+ "type": "Integer",
+ "defaultValue": 0
} }, "annotations": []
Below is an example of this pattern:
} ```
-Variables are currently scoped at the pipeline level. This means that they're not thread safe and can cause unexpected and undesired behavior if they're accessed from within a parallel iteration activity such as a ForEach loop, especially when the value is also being modified within that foreach activity.
+Variables are scoped at the pipeline level. This means that they're not thread safe and can cause unexpected and undesired behavior if they're accessed from within a parallel iteration activity such as a ForEach loop, especially when the value is also being modified within that foreach activity.
## Next steps
data-factory Data Factory Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-factory-private-link.md
If you want to restrict access for Data Factory resources in your subscriptions
You're unable to access each PaaS resource when both sides are exposed to Private Link and a private endpoint. This issue is a known limitation of Private Link and private endpoints.
-For example, A is using a private link to access the portal of data factory A in virtual network A. When data factory A doesn't block public access, B can access the portal of data factory A in virtual network B via public. But when customer B creates a private endpoint against data factory B in virtual network B, then customer B can't access data factory A via public in virtual network B anymore.
+For example, customer A is using a private link to access the portal of data factory A in virtual network A. When data factory A doesn't block public access, customer B can access the portal of data factory A in virtual network B via public. But when customer B creates a private endpoint against data factory B in virtual network B, then customer B can't access data factory A via public in virtual network B anymore.
## Next steps
data-factory Deploy Linked Arm Templates With Vsts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/deploy-linked-arm-templates-with-vsts.md
Title: Deploy linked ARM templates with VSTS
-description: Learn how to deploy linked ARM templates with Visual Studio Team Services (VSTS).
+description: Learn how to deploy linked ARM templates with Azure DevOps Services (formerly Visual Studio Team Services, or VSTS).
[!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)]
-This article describes how to deploy linked Azure Resource Manager (ARM) templates with Visual Studio Team Services (VSTS).
+This article describes how to deploy linked Azure Resource Manager (ARM) templates with Azure DevOps Services (formerly Visual Studio Team Services, or VSTS).
## Overview
data-factory Enable Aad Authentication Azure Ssis Ir https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/enable-aad-authentication-azure-ssis-ir.md
ms.devlang: powershell
-+ Last updated 07/17/2023
data-factory Enable Azure Key Vault For Managed Airflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/enable-azure-key-vault-for-managed-airflow.md
+
+ Title: Enable Azure Key Vault for Airflow
+
+description: This article explains how to enable Azure Key Vault as the secret backend for a Managed Airflow instance.
++++ Last updated : 08/29/2023++
+# Enable Azure Key Vault for Managed Airflow
++
+> [!NOTE]
+> Managed Airflow for Azure Data Factory relies on the open source Apache Airflow application. Documentation and more tutorials for Airflow can be found on the Apache Airflow [Documentation](https://airflow.apache.org/docs/) or [Community](https://airflow.apache.org/community/) pages.
+
+Apache Airflow provides a range of backends for storing sensitive information like variables and connections, including Azure Key Vault. This guide shows you how to configure Azure Key Vault as the secret backend for Apache Airflow, enabling you to store and manage your sensitive information in a secure and centralized manner.
+
+## Prerequisites
+
+- **Azure subscription** - If you don't have an Azure subscription, create a [free Azure account](https://azure.microsoft.com/free/) before you begin.
+- **Azure storage account** - If you don't have a storage account, see [Create an Azure storage account](/azure/storage/common/storage-account-create?tabs=azure-portal) for steps to create one. Ensure the storage account allows access only from selected networks.
+- **Azure Data Factory pipeline** - If you don't already have a data factory pipeline, you can follow any of the tutorials to create one, or create one with a single selection in [Get started and try out your first data factory pipeline](quickstart-get-started.md).
+- **Azure Key Vault** - You can follow [this tutorial to create a new Azure Key Vault](/azure/key-vault/general/quick-create-portal) if you don't have one.
+- **Service Principal** - You'll need to [create a new service principal](/azure/active-directory/develop/howto-create-service-principal-portal) or use an existing one and grant it permission to access Azure Key Vault (example - grant the **key-vault-contributor role** to the SPN for the key vault, so the SPN can manage it). Additionally, you'll need to get the service principal **Client ID** and **Client Secret** (API Key) to add them as environment variables, as described later in this article.
+
+## Permissions
+
+Assign your SPN the following roles in your key vault from the [Built-in roles](/azure/role-based-access-control/built-in-roles).
+
+- Key Vault Contributor
+- Key Vault Secrets User
+
+## Enable the Azure Key Vault backend for a Managed Airflow instance
+
+Follow these steps to enable the Azure Key Vault as the secret backend for your Managed Airflow instance.
+
+1. Navigate to the [Managed Airflow instance's integrated runtime (IR) environment](how-does-managed-airflow-work.md).
+1. Install the [**apache-airflow-providers-microsoft-azure**](https://airflow.apache.org/docs/apache-airflow-providers-microsoft-azure/stable/index.html) package for the **Airflow requirements** during your initial Airflow environment setup.
+
+ :::image type="content" source="media/enable-azure-key-vault-for-managed-airflow/airflow-environment-setup.png" alt-text="Screenshot showing the Airflow Environment Setup window highlighting the Airflow requirements." lightbox="media/enable-azure-key-vault-for-managed-airflow/airflow-environment-setup.png":::
+
+1. Add the following settings for the **Airflow configuration overrides** in integrated runtime properties:
+
+ - **AIRFLOW__SECRETS__BACKEND**: "airflow.providers.microsoft.azure.secrets.key_vault.AzureKeyVaultBackend"
+ - **AIRFLOW__SECRETS__BACKEND_KWARGS**: "{"connections_prefix": "airflow-connections", "variables_prefix": "airflow-variables", "vault_url": **\<your keyvault uri\>**}"
+
+ :::image type="content" source="media/enable-azure-key-vault-for-managed-airflow/airflow-configuration-overrides.png" alt-text="Screenshot showing the configuration of the Airflow configuration overrides setting in the Airflow environment setup." lightbox="media/enable-azure-key-vault-for-managed-airflow/airflow-configuration-overrides.png":::
+
+1. Add the following for the **Environment variables** configuration in the Airflow integrated runtime properties:
+
+ - **AZURE_CLIENT_ID** = \<Client ID of SPN\>
+ - **AZURE_TENANT_ID** = \<Tenant Id\>
+ - **AZURE_CLIENT_SECRET** = \<Client Secret of SPN\>
+
+ :::image type="content" source="media/enable-azure-key-vault-for-managed-airflow/environment-variables.png" alt-text="Screenshot showing the Environment variables section of the Airflow integrated runtime properties." lightbox="media/enable-azure-key-vault-for-managed-airflow/environment-variables.png":::
+
+1. Then you can use variables and connections, and they're automatically stored in Azure Key Vault. The names of the connections and variables need to follow the prefixes defined earlier in AIRFLOW__SECRETS__BACKEND_KWARGS. For more information, see [Azure Key Vault as secret backend](https://airflow.apache.org/docs/apache-airflow-providers-microsoft-azure/stable/secrets-backends/azure-key-vault.html).
+
+## Sample DAG using Azure Key Vault as the backend
+
+1. Create a new Python file **adf.py** with the following contents:
+
+ ```python
+ from datetime import datetime, timedelta
+ from airflow.operators.python import PythonOperator
+ from airflow.models import Variable
+ from airflow import DAG
+ import logging
+
+ def retrieve_variable_from_akv():
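+ # With the Key Vault backend enabled, this lookup resolves to the secret
+ # "airflow-variables-sample-variable" (variables_prefix + "-" + variable name).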
+ variable_value = Variable.get("sample-variable")
+ logger = logging.getLogger(__name__)
+ logger.info(variable_value)
+
+ with DAG(
+ "tutorial",
+ default_args={
+ "depends_on_past": False,
+ "email": ["airflow@example.com"],
+ "email_on_failure": False,
+ "email_on_retry": False,
+ "retries": 1,
+ "retry_delay": timedelta(minutes=5),
+ },
+ description="A simple tutorial DAG",
+ schedule_interval=timedelta(days=1),
+ start_date=datetime(2021, 1, 1),
+ catchup=False,
+ tags=["example"],
+ ) as dag:
+
+ get_variable_task = PythonOperator(
+ task_id="get_variable",
+ python_callable=retrieve_variable_from_akv,
+ )
+
+ get_variable_task
+ ```
+
+1. Store your variables and connections in Azure Key Vault. For more information, see [Store credentials in Azure Key Vault](store-credentials-in-key-vault.md).
+
+ :::image type="content" source="media/enable-azure-key-vault-for-managed-airflow/secrets-configuration.png" alt-text="Screenshot showing the configuration of secrets in Azure Key Vault." lightbox="media/enable-azure-key-vault-for-managed-airflow/secrets-configuration.png":::
+
+## Next steps
+
+- [Run an existing pipeline with Managed Airflow](tutorial-run-existing-pipeline-with-airflow.md)
+- [Managed Airflow pricing](airflow-pricing.md)
+- [How to change the password for Managed Airflow environments](password-change-airflow.md)
data-factory Format Delimited Text https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/format-delimited-text.md
Previously updated : 11/27/2022 Last updated : 09/05/2023
For a full list of sections and properties available for defining datasets, see
| escapeChar | The single character to escape quotes inside a quoted value.<br>The default value is **backslash `\`**. <br>When `escapeChar` is defined as empty string, the `quoteChar` must be set as empty string as well, in which case make sure all column values don't contain delimiters. | No | | firstRowAsHeader | Specifies whether to treat/make the first row as a header line with names of columns.<br>Allowed values are **true** and **false** (default).<br>When first row as header is false, note UI data preview and lookup activity output auto generate column names as Prop_{n} (starting from 0), copy activity requires [explicit mapping](copy-activity-schema-and-type-mapping.md#explicit-mapping) from source to sink and locates columns by ordinal (starting from 1), and mapping data flow lists and locates columns with name as Column_{n} (starting from 1). | No | | nullValue | Specifies the string representation of null value. <br>The default value is **empty string**. | No |
-| encodingName | The encoding type used to read/write test files. <br>Allowed values are as follows: "UTF-8","UTF-8 without BOM", "UTF-16", "UTF-16BE", "UTF-32", "UTF-32BE", "US-ASCII", "UTF-7", "BIG5", "EUC-JP", "EUC-KR", "GB2312", "GB18030", "JOHAB", "SHIFT-JIS", "CP875", "CP866", "IBM00858", "IBM037", "IBM273", "IBM437", "IBM500", "IBM737", "IBM775", "IBM850", "IBM852", "IBM855", "IBM857", "IBM860", "IBM861", "IBM863", "IBM864", "IBM865", "IBM869", "IBM870", "IBM01140", "IBM01141", "IBM01142", "IBM01143", "IBM01144", "IBM01145", "IBM01146", "IBM01147", "IBM01148", "IBM01149", "ISO-2022-JP", "ISO-2022-KR", "ISO-8859-1", "ISO-8859-2", "ISO-8859-3", "ISO-8859-4", "ISO-8859-5", "ISO-8859-6", "ISO-8859-7", "ISO-8859-8", "ISO-8859-9", "ISO-8859-13", "ISO-8859-15", "WINDOWS-874", "WINDOWS-1250", "WINDOWS-1251", "WINDOWS-1252", "WINDOWS-1253", "WINDOWS-1254", "WINDOWS-1255", "WINDOWS-1256", "WINDOWS-1257", "WINDOWS-1258".<br>Note mapping data flow doesn't support UTF-7 encoding. | No |
+| encodingName | The encoding type used to read/write text files. <br>Allowed values are as follows: "UTF-8","UTF-8 without BOM", "UTF-16", "UTF-16BE", "UTF-32", "UTF-32BE", "US-ASCII", "UTF-7", "BIG5", "EUC-JP", "EUC-KR", "GB2312", "GB18030", "JOHAB", "SHIFT-JIS", "CP875", "CP866", "IBM00858", "IBM037", "IBM273", "IBM437", "IBM500", "IBM737", "IBM775", "IBM850", "IBM852", "IBM855", "IBM857", "IBM860", "IBM861", "IBM863", "IBM864", "IBM865", "IBM869", "IBM870", "IBM01140", "IBM01141", "IBM01142", "IBM01143", "IBM01144", "IBM01145", "IBM01146", "IBM01147", "IBM01148", "IBM01149", "ISO-2022-JP", "ISO-2022-KR", "ISO-8859-1", "ISO-8859-2", "ISO-8859-3", "ISO-8859-4", "ISO-8859-5", "ISO-8859-6", "ISO-8859-7", "ISO-8859-8", "ISO-8859-9", "ISO-8859-13", "ISO-8859-15", "WINDOWS-874", "WINDOWS-1250", "WINDOWS-1251", "WINDOWS-1252", "WINDOWS-1253", "WINDOWS-1254", "WINDOWS-1255", "WINDOWS-1256", "WINDOWS-1257", "WINDOWS-1258".<br>Note mapping data flow doesn't support UTF-7 encoding.<br>Note mapping data flow doesn't support UTF-8 encoding with Byte Order Mark (BOM). | No |
| compressionCodec | The compression codec used to read/write text files. <br>Allowed values are **bzip2**, **gzip**, **deflate**, **ZipDeflate**, **TarGzip**, **Tar**, **snappy**, or **lz4**. Default is not compressed. <br>**Note** currently Copy activity doesn't support "snappy" & "lz4", and mapping data flow doesn't support "ZipDeflate", "TarGzip" and "Tar". <br>**Note** when using copy activity to decompress **ZipDeflate**/**TarGzip**/**Tar** file(s) and write to file-based sink data store, by default files are extracted to the folder:`<path specified in dataset>/<folder named as source compressed file>/`, use `preserveZipFileNameAsFolder`/`preserveCompressionFileNameAsFolder` on [copy activity source](#delimited-text-as-source) to control whether to preserve the name of the compressed file(s) as folder structure. | No | | compressionLevel | The compression ratio. <br>Allowed values are **Optimal** or **Fastest**.<br>- **Fastest:** The compression operation should complete as quickly as possible, even if the resulting file is not optimally compressed.<br>- **Optimal**: The compression operation should be optimally compressed, even if the operation takes a longer time to complete. For more information, see [Compression Level](/dotnet/api/system.io.compression.compressionlevel) topic. | No |
data-factory Kubernetes Secret Pull Image From Private Container Registry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/kubernetes-secret-pull-image-from-private-container-registry.md
+
+ Title: Add a Kubernetes secret to pull an image from a private container registry
+
+description: This article explains how to add a Kubernetes secret to pull a custom image from a private container registry with Managed Airflow in Azure Data Factory.
++++ Last updated : 08/30/2023++
+# Add a Kubernetes secret to access a private container registry
++
+> [!NOTE]
+> Managed Airflow for Azure Data Factory relies on the open source Apache Airflow application. Documentation and more tutorials for Airflow can be found on the Apache Airflow [Documentation](https://airflow.apache.org/docs/) or [Community](https://airflow.apache.org/community/) pages.
+
+This article explains how to add a Kubernetes secret to pull a custom image from a private container registry with Managed Airflow in Azure Data Factory.
+
+## Prerequisites
+
+- **Azure subscription** - If you don't have an Azure subscription, create a [free Azure account](https://azure.microsoft.com/free/) before you begin.
+- **Azure storage account** - If you don't have a storage account, see [Create an Azure storage account](/azure/storage/common/storage-account-create?tabs=azure-portal) for steps to create one. Ensure the storage account allows access only from selected networks.
+- **Azure Data Factory pipeline** - If you don't already have a data factory pipeline, you can follow any of the tutorials to create one, or create one with a single selection in [Get started and try out your first data factory pipeline](quickstart-get-started.md).
+- **Azure Container Registry** - Configure an [Azure Container Registry](/azure/container-registry/container-registry-get-started-portal?tabs=azure-cli) with the custom Docker image you want to use in the DAG. For more information on push and pull container images, see [Push & pull container image - Azure Container Registry](/azure/container-registry/container-registry-get-started-docker-cli?tabs=azure-cli).
+
+### Step 1: Create a new Managed Airflow environment
+
+Open the Azure Data Factory Studio and select the **Manage** tab from the left toolbar, then select **Apache Airflow** under **Workflow Orchestration Manager**. Finally, select **+ New** to create a new Managed Airflow environment.
++
+### Step 2: Add a Kubernetes secret
+
+In the Airflow environment setup window, scroll to the bottom and expand the **Advanced** section, then select **+ New** under **Kubernetes secrets**.
++
+### Step 3: Configure authentication
+
+Provide the required field **Secret name**, select **Private registry auth** for the **Secret type**, and enter the other required fields. The **Registry server URL** should be the URL of your private container registry, for example, ```<registry_name>.azurecr.io```.
++
+Once you provide the required fields, select **Apply** to add the secret.
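+
+After the secret is added, DAGs that launch pods can reference it when pulling your private image. The following is a minimal sketch, assuming the apache-airflow-providers-cncf-kubernetes package is available in your environment; the image path is a placeholder, and `acr-auth-secret` stands in for the **Secret name** you configured in step 3.
+
+```python
+from datetime import datetime
+
+from airflow import DAG
+from airflow.providers.cncf.kubernetes.operators.kubernetes_pod import KubernetesPodOperator
+from kubernetes.client import models as k8s
+
+with DAG(
+    dag_id="private_registry_example",
+    start_date=datetime(2023, 1, 1),
+    schedule_interval=None,
+    catchup=False,
+) as dag:
+    # image_pull_secrets names the Kubernetes secret created above so the cluster
+    # can authenticate to the private registry before pulling the image.
+    run_custom_image = KubernetesPodOperator(
+        task_id="run_custom_image",
+        name="run-custom-image",
+        image="<registry_name>.azurecr.io/<repository>:<tag>",  # placeholder image path
+        image_pull_secrets=[k8s.V1LocalObjectReference(name="acr-auth-secret")],  # assumed secret name
+    )
+```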
+
+## Next steps
+
+- [Run an existing pipeline with Managed Airflow](tutorial-run-existing-pipeline-with-airflow.md)
+- [Managed Airflow pricing](airflow-pricing.md)
+- [How to change the password for Managed Airflow environments](password-change-airflow.md)
data-factory Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/policy-reference.md
Previously updated : 08/08/2023 Last updated : 08/30/2023 # Azure Policy built-in definitions for Data Factory
data-factory Quickstart Create Data Factory Dot Net https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/quickstart-create-data-factory-dot-net.md
Next, create a C# .NET console application in Visual Studio:
``` > [!NOTE] > For Sovereign clouds, you must use the appropriate cloud-specific endpoints for ActiveDirectoryAuthority and ResourceManagerUrl (BaseUri).
-> For example, in US Azure Gov you would use authority of https://login.microsoftonline.us instead of https://login.microsoftonline.com, and use https://management.usgovcloudapi.net instead of https://management.azure.com/, and then create the data factory management client.
+> For example, in US Azure Gov you would use authority of `https://login.microsoftonline.us` instead of `https://login.microsoftonline.com`, and use `https://management.usgovcloudapi.net` instead of `https://management.azure.com/`, and then create the data factory management client.
> You can use PowerShell to easily get the endpoint Urls for various clouds by executing ΓÇ£Get-AzEnvironment | Format-ListΓÇ¥, which will return a list of endpoints for each cloud environment. 3. Add the following code to the **Main** method that creates an instance of **DataFactoryManagementClient** class. You use this object to create a data factory, a linked service, datasets, and a pipeline. You also use this object to monitor the pipeline run details.
data-factory Rest Apis For Airflow Integrated Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/rest-apis-for-airflow-integrated-runtime.md
+
+ Title: REST APIs for the Managed Airflow integrated runtime
+description: This article documents the REST APIs for the Managed Airflow integrated runtime.
++++ Last updated : 08/09/2023++
+# REST APIs for the Managed Airflow integrated runtime
++
+> [!NOTE]
+> Managed Airflow for Azure Data Factory relies on the open source Apache Airflow application. Documentation and more tutorials for Airflow can be found on the Apache Airflow [Documentation](https://airflow.apache.org/docs/) or [Community](https://airflow.apache.org/community/) pages.
+
+This article documents the REST APIs for the Managed Airflow integrated runtime.
+
+## Create a new environment
+
+- **Method**: PUT
+- **URL**: ```https://management.azure.com/subscriptions/<subscriptionid>/resourcegroups/<resourceGroupName>/providers/Microsoft.DataFactory/factories/<datafactoryName>/integrationruntimes/<airflowEnvName>?api-version=2018-06-01```
+- **URI parameters**:
+
+ |Name |In |Required |Type |Description |
+ ||||||
+ |Subscription Id | path | True | string | Subscription identifier |
+ |ResourceGroup Name | path | True | string | Resource group name (Regex pattern: ```^[-\w\._\(\)]+$```) |
+ |dataFactoryName | path | True | string | Name of the Azure Data Factory (Regex pattern: ```^[A-Za-z0-9]+(?:-[A-Za-z0-9]+)*$```) |
+ |airflowEnvName | path | True | string | Name of the Managed Airflow environment |
+ |Api-version | query | True | string | The API version |
+
+- **Request body (Airflow configuration)**:
+
+ |Name |Type |Description |
+ ||||
+ |name |string |Name of the Airflow environment |
+ |properties |propertyType |Configuration properties for the environment |
+
+- **Properties type**:
+
+ |Name |Type |Description |
+ ||||
+ |Type |string |The resource type (**Airflow** in this scenario) |
+ |typeProperties |typeProperty |Airflow |
+
+- **Type property**
+
+ |Name |Type |Description |
+ ||||
+ |computeProperties |computeProperty |Configuration of the compute type used for the environment. |
+ |airflowProperties |airflowProperty |Configuration of the Airflow properties for the environment. |
+
+- **Compute property**
+
+ |Name |Type |Description |
+ ||||
+ |location |string |The Airflow integrated runtime location defaults to the data factory region. To create an integrated runtime in a different region, create a new data factory in the required region. |
+ | computeSize | string |The size of the compute node you want your Airflow environment to run on. Example: "Large", "Small". Three nodes are allocated initially. |
+ | extraNodes | integer |Each extra node adds 3 more workers. |
+
+- **Airflow property**
+
+ |Name |Type |Description |
+ ||||
+ |airflowVersion | string | Current version of Airflow (Example: 2.4.3) |
+ |airflowRequirements | Array\<string\> | Python libraries you wish to use. Example: ["flask-bcrypt==0.7.1"]. Can be a comma delimited list. |
+ |airflowEnvironmentVariables | Object (Key/Value pair) | Environment variables you wish to use. Example: { "SAMPLE_ENV_NAME": "test" } |
+ |gitSyncProperties | gitSyncProperty | Git configuration properties |
+ |enableAADIntegration | boolean | Allows Azure AD sign-in to Airflow |
+ |userName | string or null | Username for Basic Authentication |
+ |password | string or null | Password for Basic Authentication |
+
+- **Git sync property**
+
+ |Name |Type |Description |
+ ||||
+ |gitServiceType | string | The Git service your desired repo is located in. Values: GitHub, ADO (Azure DevOps), GitLab, or BitBucket |
+ |gitCredentialType | string | Type of Git credential. Values: PAT (for Personal Access Token), None |
+ |repo | string | Repository link |
+ |branch | string | Branch to use in the repository |
+ |username | string | GitHub username |
+ |Credential | string | Value of the Personal Access Token |
+
+- **Responses**
+
+ |Name |Status code |Type |Description |
+ ||||-|
+ |Accepted | 200 | [Factory](/rest/api/datafactory/factories/get?tabs=HTTP#factory) | OK |
+ |Unauthorized | 401 | [Cloud Error](/rest/api/datafactory/factories/get?tabs=HTTP#clouderror) | Array with additional error details |
+
+## Import DAGs
+
+- **Method**: POST
+- **URL**: ```https://management.azure.com/subscriptions/<subscriptionid>/resourcegroups/<resourceGroupName>/providers/Microsoft.DataFactory/factories/<dataFactoryName>/airflow/sync?api-version=2018-06-01```
+- **Request body**:
+
+ |Name |Type |Description |
+ ||||
+ |IntegrationRuntimeName | string | Airflow environment name |
+ |LinkedServiceName | string | Azure Blob Storage account name where DAGs to be imported are located |
+ |StorageFolderPath | string | Path to the folder in blob storage with the DAGs |
+ |Overwrite | boolean | Overwrite the existing DAGs (Default=True) |
+ |CopyFolderStructure | boolean | Controls whether the folder structure will be copied or not |
+ |AddRequirementsFromFile | boolean | Add requirements from the DAG files |
+
+- **Responses**
+
+ |Name |Status code |Type |Description |
+ ||||-|
+ |Accepted | 200 | [Factory](/rest/api/datafactory/factories/get?tabs=HTTP#factory) | OK |
+ |Unauthorized | 401 | [Cloud Error](/rest/api/datafactory/factories/get?tabs=HTTP#clouderror) | Array with additional error details |
+
+## Examples
+
+### Create a new environment using REST APIs
+
+Sample request:
+
+```rest
+HTTP
+PUT https://management.azure.com/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourcegroups/your-rg/providers/Microsoft.DataFactory/factories/your-df/integrationruntimes/sample-2?api-version=2018-06-01
+```
+
+Sample Body:
+
+```rest
+{
+ "name": "sample-2",
+ "properties": {
+ "type": "Airflow",
+ "typeProperties": {
+ "computeProperties": {
+ "location": "East US",
+ "computeSize": "Large",
+ "extraNodes": 0
+ },
+ "airflowProperties": {
+ "airflowVersion": "2.4.3",
+ "airflowEnvironmentVariables": {
+ "AIRFLOW__TEST__TEST": "test"
+ },
+ "airflowRequirements": [
+ "apache-airflow-providers-microsoft-azure"
+ ],
+ "enableAADIntegration": true,
+ "userName": null,
+ "password": null,
+ "airflowEntityReferences": []
+ }
+ }
+ }
+}
+```
+
+Sample Response:
+
+```rest
+Status code: 200 OK
+```
+
+Response Body:
+
+```rest
+{
+ "id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/your-rg/providers/Microsoft.DataFactory/factories/your-df/integrationruntimes/sample-2",
+ "name": "sample-2",
+ "type": "Microsoft.DataFactory/factories/integrationruntimes",
+ "properties": {
+ "type": "Airflow",
+ "typeProperties": {
+ "computeProperties": {
+ "location": "East US",
+ "computeSize": "Large",
+ "extraNodes": 0
+ },
+ "airflowProperties": {
+ "airflowVersion": "2.4.3",
+ "pythonVersion": "3.8",
+ "airflowEnvironmentVariables": {
+ "AIRFLOW__TEST__TEST": "test"
+ },
+ "airflowWebUrl": "https://e57f7409041692.eastus.airflow.svc.datafactory.azure.com/login/",
+ "airflowRequirements": [
+ "apache-airflow-providers-microsoft-azure"
+ ],
+ "airflowEntityReferences": [],
+ "packageProviderPath": "plugins",
+ "enableAADIntegration": true,
+ "enableTriggerers": false
+ }
+ },
+ "state": "Initial"
+ },
+ "etag": "3402279e-0000-0100-0000-64ecb1cb0000"
+}
+```
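+
+For scripting scenarios, the same request can be issued from Python. This is a minimal sketch, assuming the `azure-identity` and `requests` packages; the angle-bracketed values are placeholders for your own identifiers.
+
+```python
+import requests
+from azure.identity import DefaultAzureCredential
+
+# Placeholders: substitute your subscription, resource group, factory, and environment names.
+url = (
+    "https://management.azure.com/subscriptions/<subscriptionId>"
+    "/resourcegroups/<resourceGroupName>/providers/Microsoft.DataFactory"
+    "/factories/<dataFactoryName>/integrationruntimes/<airflowEnvName>"
+    "?api-version=2018-06-01"
+)
+
+body = {
+    "name": "<airflowEnvName>",
+    "properties": {
+        "type": "Airflow",
+        "typeProperties": {
+            "computeProperties": {"location": "East US", "computeSize": "Large", "extraNodes": 0},
+            "airflowProperties": {
+                "airflowVersion": "2.4.3",
+                "airflowRequirements": ["apache-airflow-providers-microsoft-azure"],
+                "enableAADIntegration": True,
+            },
+        },
+    },
+}
+
+# Acquire an Azure Resource Manager token for the management plane.
+token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
+
+response = requests.put(url, json=body, headers={"Authorization": f"Bearer {token}"})
+print(response.status_code)
+print(response.json())
+```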
+### Import DAGs
+
+Sample Request:
+
+```rest
+HTTP
+
+POST https://management.azure.com/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourcegroups/your-rg/providers/Microsoft.DataFactory/factories/your-df/airflow/sync?api-version=2018-06-01
+```
+
+Sample Body:
+
+```rest
+{
+ "IntegrationRuntimeName": "sample-2",
+ "LinkedServiceName": "AzureBlobStorage1",
+ "StorageFolderPath": "your-container/airflow",
+ "CopyFolderStructure": true,
+ "Overwrite": true,
+ "AddRequirementsFromFile": true
+}
+```
+
+Sample Response:
+
+```rest
+Status Code: 202
+```
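+
+The sync call can be scripted the same way. A minimal Python sketch under the same assumptions as the earlier example (`azure-identity` and `requests` installed; angle-bracketed values are placeholders):
+
+```python
+import requests
+from azure.identity import DefaultAzureCredential
+
+url = (
+    "https://management.azure.com/subscriptions/<subscriptionId>"
+    "/resourcegroups/<resourceGroupName>/providers/Microsoft.DataFactory"
+    "/factories/<dataFactoryName>/airflow/sync?api-version=2018-06-01"
+)
+
+# Request body mirrors the sample above.
+body = {
+    "IntegrationRuntimeName": "sample-2",
+    "LinkedServiceName": "AzureBlobStorage1",
+    "StorageFolderPath": "your-container/airflow",
+    "CopyFolderStructure": True,
+    "Overwrite": True,
+    "AddRequirementsFromFile": True,
+}
+
+token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
+response = requests.post(url, json=body, headers={"Authorization": f"Bearer {token}"})
+print(response.status_code)  # a successful submission returns 202 Accepted
+```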
data-factory Self Hosted Integration Runtime Troubleshoot Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/self-hosted-integration-runtime-troubleshoot-guide.md
For Azure Data Factory v1 customers:
> [!NOTE]
> Proxy considerations:
> * Check to see whether the proxy server needs to be put on the Safe Recipients list. If so, make sure [these domains](./data-movement-security-considerations.md#firewall-requirements-for-on-premisesprivate-network) are on the Safe Recipients list.
- > * Check to see whether SSL/TLS certificate "wu2.frontend.clouddatahub.net/" is trusted on the proxy server.
+ > * Check to see whether SSL/TLS certificate `wu2.frontend.clouddatahub.net/` is trusted on the proxy server.
> * If you're using Active Directory authentication on the proxy, change the service account to the user account that can access the proxy as "Integration Runtime Service."

### Error message: Self-hosted integration runtime node/logical self-hosted IR is in Inactive/ "Running (Limited)" state
data-factory Solution Template Extract Data From Pdf https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/solution-template-extract-data-from-pdf.md
Last updated 08/10/2023
[!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)]
-This article describes a solution template that you can use to extract data from a PDF source using Azure Data Factory and Form Recognizer.
+This article describes a solution template that you can use to extract data from a PDF source using Azure Data Factory and Azure AI Document Intelligence.
## About this solution template
-This template analyzes data from a PDF URL source using two Azure Form Recognizer calls. Then, it transforms the output to readable tables in a dataflow and outputs the data to a storage sink.
+This template analyzes data from a PDF URL source using two Azure AI Document Intelligence calls. Then, it transforms the output to readable tables in a dataflow and outputs the data to a storage sink.
This template contains two activities:
-- **Web Activity** to call Azure Form Recognizer's layout model API
+- **Web Activity** to call Azure AI Document Intelligence's layout model API
- **Data flow** to transform extracted data from PDF

This template defines 4 parameters:
-- *FormRecognizerURL* is the Form recognizer URL ("https://{endpoint}/formrecognizer/v2.1/layout/analyze"). Replace {endpoint} with the endpoint that you obtained with your Form Recognizer subscription. You need to replace the default value with your own URL.
-- *FormRecognizerKey* is the Form Recognizer subscription key. You need to replace the default value with your own subscription key.
+- *FormRecognizerURL* is the Azure AI Document Intelligence URL ("https://{endpoint}/formrecognizer/v2.1/layout/analyze"). Replace {endpoint} with the endpoint that you obtained with your Azure AI Document Intelligence subscription. You need to replace the default value with your own URL.
+- *FormRecognizerKey* is the Azure AI Document Intelligence subscription key. You need to replace the default value with your own subscription key.
- *PDF_SourceURL* is the URL of your PDF source. You need to replace the default value with your own URL.
- *outputFolder* is the name of the folder path where you want your files to be in your destination store. You need to replace the default value with your own folder path.

## Prerequisites
-* Azure Form Recognizer Resource Endpoint URL and Key (create a new resource [here](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer))
+* Azure AI Document Intelligence Resource Endpoint URL and Key (create a new resource [here](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer))
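Before wiring the endpoint and key into the template parameters, you can sanity-check them with a quick call to the same v2.1 layout API that the template's Web Activity uses. The following is a minimal Python sketch; the endpoint, key, and PDF URL are placeholders.

```python
# Hedged sketch of the v2.1 layout analyze call made by the template's Web Activity.
# Endpoint, key, and PDF URL are placeholders; replace them with your own values.
import time
import requests

endpoint = "https://<your-resource>.cognitiveservices.azure.com"
key = "<your-subscription-key>"

resp = requests.post(
    f"{endpoint}/formrecognizer/v2.1/layout/analyze",
    headers={"Ocp-Apim-Subscription-Key": key, "Content-Type": "application/json"},
    json={"source": "https://example.com/sample.pdf"},
)
resp.raise_for_status()
result_url = resp.headers["Operation-Location"]  # poll this URL for the result

while True:
    result = requests.get(result_url, headers={"Ocp-Apim-Subscription-Key": key}).json()
    if result["status"] in ("succeeded", "failed"):
        break
    time.sleep(1)
print(result["status"])
```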
## How to use this solution template
-1. Go to template **Extract data from PDF**. Create a **New** connection to your Form Recognizer resource or choose an existing connection.
+1. Go to template **Extract data from PDF**. Create a **New** connection to your Azure AI Document Intelligence resource or choose an existing connection.
- :::image type="content" source="media/solution-template-extract-data-from-pdf/extract-data-from-pdf-1.png" alt-text="Screenshot of how to create a new connection or select an existing connection from a drop down menu to Form Recognizer in template set up.":::
+ :::image type="content" source="media/solution-template-extract-data-from-pdf/extract-data-from-pdf-1.png" alt-text="Screenshot of how to create a new connection or select an existing connection from a drop down menu to Azure AI Document Intelligence in template set up.":::
- In your connection to Form Recognizer, make sure to add a **Linked service Parameter**. You will need to use this parameter as your dynamic **Base URL**.
+ In your connection to Azure AI Document Intelligence, make sure to add a **Linked service Parameter**. You will need to use this parameter as your dynamic **Base URL**.
- :::image type="content" source="media/solution-template-extract-data-from-pdf/extract-data-from-pdf-9.png" alt-text="Screenshot of where to add your Form Recognizer linked service parameter.":::
+ :::image type="content" source="media/solution-template-extract-data-from-pdf/extract-data-from-pdf-9.png" alt-text="Screenshot of where to add your Azure AI Document Intelligence linked service parameter.":::
:::image type="content" source="media/solution-template-extract-data-from-pdf/extract-data-from-pdf-8.png" alt-text="Screenshot of the linked service base URL that references the linked service parameter.":::
This template defines 4 parameters:
## Next steps

- [What's New in Azure Data Factory](whats-new.md)
- [Introduction to Azure Data Factory](introduction.md)
-
data-factory Tutorial Managed Virtual Network On Premise Sql Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-managed-virtual-network-on-premise-sql-server.md
the page.
## Creating Forwarding Rule to Endpoint

1. Login and copy script [ip_fwd.sh](https://github.com/sajitsasi/az-ip-fwd/blob/main/ip_fwd.sh) to your backend server VMs.
-2. Run the script on with the following options:<br/>
- **sudo ./ip_fwd.sh -i eth0 -f 1433 -a <FQDN/IP> -b 1433**<br/>
- <FQDN/IP> is your target SQL Server IP.<br/>
+
+2. Run the script with the following options:
+
+ ```bash
+ sudo ./ip_fwd.sh -i eth0 -f 1433 -a <FQDN/IP> -b 1433
+ ```
+   Set the placeholder `<FQDN/IP>` to your target SQL Server IP.
- > [!Note]
+ > [!NOTE]
> FQDN doesn't work for on-premises SQL Server unless you add a record in Azure DNS zone.
-3. Run below command and check the iptables in your backend server VMs. You can see one record in your iptables with your target IP.<br/>
- **sudo iptables -t nat -v -L PREROUTING -n --line-number**
+4. Run the following command and check the iptables in your backend server VMs. You can see one record in your iptables with your target IP.<br/>
+
+ ```bash
+   sudo iptables -t nat -v -L PREROUTING -n --line-number
+ ```
- :::image type="content" source="./media/tutorial-managed-virtual-network/command-record-1.png" alt-text="Screenshot that shows the command record.":::
+ :::image type="content" source="./media/tutorial-managed-virtual-network/command-record-1.png" alt-text="Screenshot that shows the command record.":::
- >[!Note]
+ > [!NOTE]
> If you have more than one SQL Server or data source, you need to define multiple load balancer rules and IP table records with different ports. Otherwise, there will be conflicts. For example,<br/> >
- >| |Port in load balancer rule|Backend port in load balance rule|Command run in backend server VM|
- >|||--||
- >|**SQL Server 1**|1433 |1433 |sudo ./ip_fwd.sh -i eth0 -f 1433 -a <FQDN/IP> -b 1433|
- >|**SQL Server 2**|1434 |1434 |sudo ./ip_fwd.sh -i eth0 -f 1434 -a <FQDN/IP> -b 1433|
-
+ > | |Port in load balancer rule|Backend port in load balance rule|Command run in backend server VM|
+ > |||--||
+ > |**SQL Server 1**|1433 |1433 |sudo ./ip_fwd.sh -i eth0 -f 1433 -a <FQDN/IP> -b 1433|
+ > |**SQL Server 2**|1434 |1434 |sudo ./ip_fwd.sh -i eth0 -f 1434 -a <FQDN/IP> -b 1433|
+
+ > [!NOTE]
+   > The configuration within the virtual machine (VM) isn't persistent. Each time the VM restarts, you need to reconfigure it.
+
## Create a Private Endpoint to Private Link Service

1. Select All services in the left-hand menu, select All resources, and then select your
data factory from the resources list.
4. Select + **New** under **Managed private endpoints**.
5. Select the **Private Link Service** tile from the list and select **Continue**.
6. Enter the name of private endpoint and select **myPrivateLinkService** in private link service list.
-7. Add <FQDN>,<port> of your target on-premises SQL Server. By default, port is 1433.
+7. Add the `<FQDN>,<port>` of your target on-premises SQL Server. The default port is 1433.
:::image type="content" source="./media/tutorial-managed-virtual-network/private-endpoint-6.png" alt-text="Screenshot that shows the private endpoint settings.":::
-> [!Note]
-> When deploying your SQL Server on a virtual machine within a virtual network, it is essential to enhance your FQDN by appending **privatelink**. Otherwise, it will be conflicted with other records in the DNS setting. For example, you can simply modify the SQL Server's FQDN from **sqlserver.westus.cloudapp.azure.net** to **sqlserver.privatelink.westus.cloudapp.azure.net**.
+ > [!NOTE]
+ > When deploying your SQL Server on a virtual machine within a virtual network, it's essential to modify your FQDN by appending **privatelink**. Otherwise, it conflicts with other records in the DNS settings. For example, you can simply modify the SQL Server's FQDN from **sqlserver.westus.cloudapp.azure.net** to **sqlserver.privatelink.westus.cloudapp.azure.net**.
8. Create private endpoint.
data factory from the resources list.
:::image type="content" source="./media/tutorial-managed-virtual-network/linked-service-3.png" alt-text="Screenshot that shows the SQL server linked service creation page.":::
- > [!Note]
+ > [!NOTE]
> If you have more than one SQL Server and need to define multiple load balancer rules and IP table records with different ports, make sure you explicitly add the port name after the FQDN when you edit Linked Service. The NAT VM will handle the port translation. If it's not explicitly specified, the connection will always time out.

## Troubleshooting
data-factory Data Factory Api Change Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-api-change-log.md
The following classes have been renamed. The new names were the original names o
* **List** pipeline API returns only the summary of a pipeline instead of full details. For instance, activities in a pipeline summary only contain name and type.

### Feature additions
-* The [SqlDWSink](/dotnet/api/microsoft.azure.management.datafactories.models.sqldwsink) class supports two new properties, **SliceIdentifierColumnName** and **SqlWriterCleanupScript**, to support idempotent copy to Azure Azure Synapse Analytics. See the [Azure Synapse Analytics](data-factory-azure-sql-data-warehouse-connector.md) article for details about these properties.
+* The [SqlDWSink](/dotnet/api/microsoft.azure.management.datafactories.models.sqldwsink) class supports two new properties, **SliceIdentifierColumnName** and **SqlWriterCleanupScript**, to support idempotent copy to Azure Synapse Analytics. See the [Azure Synapse Analytics](data-factory-azure-sql-data-warehouse-connector.md) article for details about these properties.
* We now support running stored procedure against Azure SQL Database and Azure Synapse Analytics sources as part of the Copy Activity. The [SqlSource](/dotnet/api/microsoft.azure.management.datafactories.models.sqlsource) and [SqlDWSource](/dotnet/api/microsoft.azure.management.datafactories.models.sqldwsource) classes have the following properties: **SqlReaderStoredProcedureName** and **StoredProcedureParameters**. See the [Azure SQL Database](data-factory-azure-sql-connector.md#sqlsource) and [Azure Synapse Analytics](data-factory-azure-sql-data-warehouse-connector.md#sqldwsource) articles on Azure.com for details about these properties.
data-lake-analytics Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/policy-reference.md
Title: Built-in policy definitions for Azure Data Lake Analytics description: Lists Azure Policy built-in policy definitions for Azure Data Lake Analytics. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2023 Last updated : 08/30/2023
data-lake-analytics Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Data Lake Analytics description: Lists Azure Policy Regulatory Compliance controls available for Azure Data Lake Analytics. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/03/2023 Last updated : 08/25/2023
data-lake-analytics Understand Spark Code Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/understand-spark-code-concepts.md
This section provides high-level guidance on transforming U-SQL Scripts to Apach
- It starts with a [comparison of the two language's processing paradigms](#understand-the-u-sql-and-spark-language-and-processing-paradigms) - Provides tips on how to:
- - [Transform scripts](#transform-u-sql-scripts) including U-SQL's [rowset expressions](#transform-u-sql-rowset-expressions-and-sql-based-scalar-expressions)
- - [.NET code](#transform-net-code)
- - [Data types](#transform-typed-values)
- - [Catalog objects](#transform-u-sql-catalog-objects).
+ - [Transform scripts](#transform-u-sql-scripts) including U-SQL's [rowset expressions](#transform-u-sql-rowset-expressions-and-sql-based-scalar-expressions)
+ - [.NET code](#transform-net-code)
+ - [Data types](#transform-typed-values)
+ - [Catalog objects](#transform-u-sql-catalog-objects).
## Understand the U-SQL and Spark language and processing paradigms
Spark programs are similar in that you would use Spark connectors to read the da
U-SQL's expression language is C# and it offers various ways to scale out custom .NET code with user-defined functions, user-defined operators and user-defined aggregators.
-Azure Synapse and Azure HDInsight Spark both now natively support executing .NET code with .NET for Apache Spark. This means that you can potentially reuse some or all of your [.NET user-defined functions with Spark](#transform-user-defined-scalar-net-functions-and-user-defined-aggregators). Note though that U-SQL uses the .NET Framework while .NET for Apache Spark is based on .NET Core 3.1 or later.
+Azure Synapse and Azure HDInsight Spark both now natively support executing .NET code with .NET for Apache Spark. This means that you can potentially reuse some or all of your [.NET user-defined functions with Spark](#transform-user-defined-scalar-net-functions-and-user-defined-aggregators). Note though that U-SQL uses the .NET Framework while .NET for Apache Spark is based on .NET Core 3.1 or later.
[U-SQL user-defined operators (UDOs)](#transform-user-defined-operators-udos) are using the U-SQL UDO model to provide scaled-out execution of the operator's code. Thus, UDOs will have to be rewritten into user-defined functions to fit into the Spark execution model. .NET for Apache Spark currently doesn't support user-defined aggregators. Thus, [U-SQL user-defined aggregators](#transform-user-defined-scalar-net-functions-and-user-defined-aggregators) will have to be translated into Spark user-defined aggregators written in Scala.
-If you don't want to take advantage of the .NET for Apache Spark capabilities, you'll have to rewrite your expressions into an equivalent Spark, Scala, Java, or Python expression, function, aggregator or connector.
+If you don't want to take advantage of the .NET for Apache Spark capabilities, you'll have to rewrite your expressions into an equivalent Spark, Scala, Java, or Python expression, function, aggregator or connector.
In any case, if you have a large amount of .NET logic in your U-SQL scripts, please contact us through your Microsoft Account representative for further guidance.
The other types of U-SQL UDOs will need to be rewritten using user-defined funct
### Transform U-SQL's optional libraries
-U-SQL provides a set of optional and demo libraries that offer [Python](data-lake-analytics-u-sql-python-extensions.md), [R](data-lake-analytics-u-sql-r-extensions.md), [JSON, XML, AVRO support](https://github.com/Azure/usql/tree/master/Examples/DataFormats), and some [cognitive services capabilities](data-lake-analytics-u-sql-cognitive.md).
+U-SQL provides a set of optional and demo libraries that offer [Python](data-lake-analytics-u-sql-python-extensions.md), [R](data-lake-analytics-u-sql-r-extensions.md), [JSON, XML, AVRO support](https://github.com/Azure/usql/tree/master/Examples/DataFormats), and some [Azure AI services capabilities](data-lake-analytics-u-sql-cognitive.md).
Spark offers its own Python and R integration, pySpark and SparkR respectively, and provides connectors to read and write JSON, XML, and AVRO.
-If you need to transform a script referencing the cognitive services libraries, we recommend contacting us via your Microsoft Account representative.
+If you need to transform a script referencing the Azure AI services libraries, we recommend contacting us via your Microsoft Account representative.
## Transform typed values
For more information, see:
In Spark, types per default allow NULL values while in U-SQL, you explicitly mark scalar, non-object as nullable. While Spark allows you to define a column as not nullable, it will not enforce the constraint and [may lead to wrong result](https://medium.com/@weshoffman/apache-spark-parquet-and-troublesome-nulls-28712b06f836).
-In Spark, NULL indicates that the value is unknown. A Spark NULL value is different from any value, including itself. Comparisons between two Spark NULL values, or between a NULL value and any other value, return unknown because the value of each NULL is unknown.
+In Spark, NULL indicates that the value is unknown. A Spark NULL value is different from any value, including itself. Comparisons between two Spark NULL values, or between a NULL value and any other value, return unknown because the value of each NULL is unknown.
-This behavior is different from U-SQL, which follows C# semantics where `null` is different from any value but equal to itself.
+This behavior is different from U-SQL, which follows C# semantics where `null` is different from any value but equal to itself.
Thus a SparkSQL `SELECT` statement that uses `WHERE column_name = NULL` returns zero rows even if there are NULL values in `column_name`, while in U-SQL, it would return the rows where `column_name` is set to `null`. Similarly, A Spark `SELECT` statement that uses `WHERE column_name != NULL` returns zero rows even if there are non-null values in `column_name`, while in U-SQL, it would return the rows that have non-null. Thus, if you want the U-SQL null-check semantics, you should use [isnull](https://spark.apache.org/docs/2.3.0/api/sql/https://docsupdatetracker.net/index.html#isnull) and [isnotnull](https://spark.apache.org/docs/2.3.0/api/sql/https://docsupdatetracker.net/index.html#isnotnull) respectively (or their DSL equivalent).
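A small pySpark illustration of the difference (a sketch; the table and column names are made up):

```python
# Sketch illustrating Spark's NULL comparison semantics vs. U-SQL-style null checks.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("null-semantics").getOrCreate()
df = spark.createDataFrame([(1, "a"), (2, None)], ["id", "column_name"])
df.createOrReplaceTempView("t")

# Comparisons with NULL evaluate to unknown, so both of these return zero rows:
spark.sql("SELECT * FROM t WHERE column_name = NULL").show()
spark.sql("SELECT * FROM t WHERE column_name != NULL").show()

# U-SQL-style null checks need isnull/isnotnull:
spark.sql("SELECT * FROM t WHERE isnull(column_name)").show()     # row with id=2
spark.sql("SELECT * FROM t WHERE isnotnull(column_name)").show()  # row with id=1

# DSL equivalents:
df.filter(df.column_name.isNull()).show()
df.filter(df.column_name.isNotNull()).show()
```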
Most of the settable system variables have no direct equivalent in Spark. Some o
### U-SQL hints
-U-SQL offers several syntactic ways to provide hints to the query optimizer and execution engine:
+U-SQL offers several syntactic ways to provide hints to the query optimizer and execution engine:
- Setting a U-SQL system variable
- an `OPTION` clause associated with the rowset expression to provide a data or plan hint
Spark's cost-based query optimizer has its own capabilities to provide hints and
## Next steps

- [Understand Spark data formats for U-SQL developers](understand-spark-data-formats.md)
-- [.NET for Apache Spark](/dotnet/spark/what-is-apache-spark-dotnet)
+- [.NET for Apache Spark](/previous-versions/dotnet/spark/what-is-apache-spark-dotnet)
- [Upgrade your big data analytics solutions from Azure Data Lake Storage Gen1 to Azure Data Lake Storage Gen2](../storage/blobs/data-lake-storage-migrate-gen1-to-gen2.md)
- [Transform data using Spark activity in Azure Data Factory](../data-factory/transform-data-using-spark.md)
- [Transform data using Hadoop Hive activity in Azure Data Factory](../data-factory/transform-data-using-hadoop-hive.md)
data-lake-analytics Understand Spark Data Formats https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/understand-spark-data-formats.md
After this transformation, you copy the data as outlined in the chapter [Move da
- [Understand Spark code concepts for U-SQL developers](understand-spark-code-concepts.md)
- [Upgrade your big data analytics solutions from Azure Data Lake Storage Gen1 to Azure Data Lake Storage Gen2](../storage/blobs/data-lake-storage-migrate-gen1-to-gen2.md)
-- [.NET for Apache Spark](/dotnet/spark/what-is-apache-spark-dotnet)
+- [.NET for Apache Spark](/previous-versions/dotnet/spark/what-is-apache-spark-dotnet)
- [Transform data using Spark activity in Azure Data Factory](../data-factory/transform-data-using-spark.md)
- [Transform data using Hadoop Hive activity in Azure Data Factory](../data-factory/transform-data-using-hadoop-hive.md)
-- [What is Apache Spark in Azure HDInsight](../hdinsight/spark/apache-spark-overview.md)
+- [What is Apache Spark in Azure HDInsight](../hdinsight/spark/apache-spark-overview.md)
data-lake-analytics Understand Spark For Usql Developers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/understand-spark-for-usql-developers.md
It includes the steps you can take, and several alternatives.
- [Understand Spark data formats for U-SQL developers](understand-spark-data-formats.md)
- [Understand Spark code concepts for U-SQL developers](understand-spark-code-concepts.md)
- [Upgrade your big data analytics solutions from Azure Data Lake Storage Gen1 to Azure Data Lake Storage Gen2](../storage/blobs/data-lake-storage-migrate-gen1-to-gen2.md)
-- [.NET for Apache Spark](/dotnet/spark/what-is-apache-spark-dotnet)
+- [.NET for Apache Spark](/previous-versions/dotnet/spark/what-is-apache-spark-dotnet)
- [Transform data using Hadoop Hive activity in Azure Data Factory](../data-factory/transform-data-using-hadoop-hive.md)
- [Transform data using Spark activity in Azure Data Factory](../data-factory/transform-data-using-spark.md)
-- [What is Apache Spark in Azure HDInsight](../hdinsight/spark/apache-spark-overview.md)
+- [What is Apache Spark in Azure HDInsight](../hdinsight/spark/apache-spark-overview.md)
data-lake-store Data Lake Store Secure Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-store/data-lake-store-secure-data.md
description: Learn how to secure data in Azure Data Lake Storage Gen1 using grou
+ Last updated 03/26/2018 - # Securing data stored in Azure Data Lake Storage Gen1 Securing data in Azure Data Lake Storage Gen1 is a three-step approach. Both Azure role-based access control (Azure RBAC) and access control lists (ACLs) must be set to fully enable access to data for users and security groups.
data-lake-store Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-store/policy-reference.md
Title: Built-in policy definitions for Azure Data Lake Storage Gen1 description: Lists Azure Policy built-in policy definitions for Azure Data Lake Storage Gen1. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2023 Last updated : 08/30/2023
data-lake-store Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-store/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Data Lake Storage Gen1 description: Lists Azure Policy Regulatory Compliance controls available for Azure Data Lake Storage Gen1. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/03/2023 Last updated : 08/25/2023
data-manager-for-agri Concepts Farm Operations Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-manager-for-agri/concepts-farm-operations-data.md
+
+ Title: Working with Farm Activities data in Azure Data Manager for Agriculture
+description: Learn how to integrate with Farm Activities data providers and ingest data into ADMA
++++ Last updated : 08/14/2023++
+# Working with Farm Activities data in Azure Data Manager for Agriculture
+Farm Activities data is one of the most important ground truth datasets in precision agriculture. These machine-generated reports preserve the record of what exactly happened, where, and when. That record is used both to improve in-field practice and in downstream value chain analytics.
+
+Data Manager for Agriculture supports both:
+* summary data - entered as properties in the operation data item directly
+* precision data - (for example, a .shp, .dat, .isoxml file) uploaded as an attachment file and reference linked to the operation data item.
+
+New operation data can be pushed into the service via the APIs for operation and attachment creation. Or, if the desired source is in the supported list of OEM connectors, data can be synced automatically from providers like Climate FieldView with a farm operation ingestion job.
+* Azure Data Manager for Agriculture supports a range of Farm Activities data that can be found [here](/rest/api/data-manager-for-agri/#farm-activities)
+
+## Integration with farm equipment manufacturers
+Azure Data Manager for Agriculture fetches the associated Farm Activities data (planting, application, tillage & harvest) from the data provider (Ex: Climate FieldView) by creating a Farm Activities data ingestion job. See [here](./how-to-ingest-and-egress-farm-operations-data.md) for more details.
+
+## Next steps
+
+* Test our APIs [here](/rest/api/data-manager-for-agri).
data-manager-for-agri Concepts Hierarchy Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-manager-for-agri/concepts-hierarchy-model.md
Previously updated : 02/14/2023 Last updated : 08/22/2023
To generate actionable insights, data related to growers, farms, and fields should be organized in a well-defined manner. Firms operating in the agriculture industry often perform longitudinal studies and need high quality data to generate insights. Data Manager for Agriculture organizes agronomic data in the below manner.

## Understanding farm hierarchy
To generate actionable insights data related to growers, farms, and fields shoul
### Farm

* Farms are logical entities. A farm is a collection of fields.
-* Farms don't have any geometry associated with them. Farm entity helps you organize your growing operations. For example Contoso Inc is the Party that has farms in Oregon and Idaho.
+* Farms don't have any geometry associated with them. Farm entity helps you organize your growing operations. For example, Contoso Inc is the Party that has farms in Oregon and Idaho.
### Field
-* Fields denote a stable boundary that is in general agnostic to seasons and other temporal constructs. For example, field could be the boundary denoted in government records.
+* Fields denote a stable geometry that is in general agnostic to seasons and other temporal constructs. For example, field could be the geometry denoted in government records.
* Fields are multi-polygon. For example, a road might divide the farm in two or more parts.
-* Fields are multi-boundary.
### Seasonal field
-* This is the most important construct in the farming world. A seasonal fields definition includes the following things
- * Boundary
+* Seasonal field is the most important construct in the farming world. A seasonal field's definition includes the following things:
+  * Geometry
* Season * Crop * A seasonal field is associated with a field or a farm
To generate actionable insights data related to growers, farms, and fields shoul
* A seasonal field is associated with one season. If a farmer cultivates across multiple seasons, they have to create one seasonal field per season.
* It's multi-polygon. Same crop can be planted in different areas within the farm.
-
-### Boundary
-* Boundary represents the geometry of a field or a seasonal field.
-* It's represented as a multi-polygon GeoJSON consisting of vertices (lat/long).
-
### Season
-* Season represents the temporal aspect of farming. It is a function of local agronomic practices, procedures and weather.
+* Season represents the temporal aspect of farming. It's a function of local agronomic practices, procedures and weather.
### Crop

* Crop entity provides the phenotypic details of the planted crop.
data-manager-for-agri Concepts Ingest Satellite Imagery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-manager-for-agri/concepts-ingest-satellite-imagery.md
Satellite imagery makes up a foundational pillar of agriculture data. To support
* Read the Sinergise Sentinel Hub terms of service and privacy policy: https://www.sentinel-hub.com/tos/
* Have your providerClientId and providerClientSecret ready
-## Ingesting boundary-clipped imagery
+## Ingesting geometry-clipped imagery
Using satellite data in Data Manager for Agriculture involves the following steps:

:::image type="content" source="./media/satellite-flow.png" alt-text="Diagram showing satellite data ingestion flow.":::
data-manager-for-agri Concepts Ingest Sensor Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-manager-for-agri/concepts-ingest-sensor-data.md
In addition to the above approach, IOT devices (sensors/nodes/gateway) can direc
## Sensor topology
-The following diagram depicts the topology of a sensor in Azure Data Manager for Agriculture. Each boundary under a party has a set of devices placed within it. A device can be either be a node or a gateway and each device has a set of sensors associated with it. Sensors send the recordings via gateway to the cloud. Sensors are tagged with GPS coordinates helping in creating a geospatial time series for all measured data.
+The following diagram depicts the topology of a sensor in Azure Data Manager for Agriculture. Each geometry under a party has a set of devices placed within it. A device can either be a node or a gateway, and each device has a set of sensors associated with it. Sensors send the recordings via gateway to the cloud. Sensors are tagged with GPS coordinates, helping in creating a geospatial time series for all measured data.
:::image type="content" source="./media/sensor-topology-new.png" alt-text="Screenshot showing sensor topology.":::
data-manager-for-agri Concepts Isv Solutions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-manager-for-agri/concepts-isv-solutions.md
The solution framework is built on top of Data Manager for Agriculture that prov
The following are some examples of use cases showing how an ISV partner could use the solution framework to build an industry-specific solution.
-* Yield Prediction Model: An ISV partner can build a yield model using historical data for a specific boundary and track periodic progress. The ISV can then enable forecast of estimated yield for the upcoming season.
+* Yield Prediction Model: An ISV partner can build a yield model using historical data for a specific geometry and track periodic progress. The ISV can then enable forecast of estimated yield for the upcoming season.
* Carbon Emission Model: An ISV partner can estimate the amount of carbon emitted from the field based upon the imagery and sensor data for a particular farm.
* Crop Identification: Use imagery data to identify crops growing in an area of interest.
data-manager-for-agri How To Ingest And Egress Farm Operations Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-manager-for-agri/how-to-ingest-and-egress-farm-operations-data.md
+
+ Title: Working with Farm Activities and in-field activity data in Azure Data Manager for Agriculture
+description: Learn how to manage Farm Activities data with manual and auto sync data ingestion jobs
++++ Last updated : 09/04/2023++
+# Working with Farm Activities and activity data in Azure Data Manager for Agriculture
+
+Users can create a farm operations data ingestion job to **pull associated Farm Activities data** from a specified data provider into your Azure Data Manager for Agriculture instance, associated with a specific party. The job handles any required auth refresh and, by default, detects and syncs any changes daily. In some cases, the job also **pulls farm and field** information associated with the given account into the party.
+
+> [!NOTE]
+>
+>Before creating a Farm Activities job, you must successfully [**integrate with the Farm Activities data provider oAuth flow**](./how-to-integrate-with-farm-ops-data-provider.md).
+>
+
+## Create FarmOperations Job
+
+Create a farm operations job with an ID of your choice to ingest Farm Activities data. This job ID is used to monitor the status of the job using the GET Farm Operations job API.
+
+API documentation: [FarmOperations_CreateDataIngestionJob](/rest/api/data-manager-for-agri/farm-operations)
+
+> [!NOTE]
+>`shapeType` and `shapeResolution` are provider specific attributes. If they aren't applicable to your provider, set the value to "None".
+
+Based on the `startYear` and `operations` list provided, Azure Data Manager for Agriculture fetches the data from the start year to the current date.
+
+Along with specific data (geometry), the Farm Activities data provider also gives us the DAT file for the activity performed on your farm or field. The DAT file, shape file, and so on contain a geometry that reflects where the activity was performed.
+
+Job status and details can be retrieved with: [FarmOperations_GetDataIngestionJobDetails](/rest/api/data-manager-for-agri/farm-operations)
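As a rough illustration of the create-then-poll flow (a sketch only; the data-plane host format, api-version, and body field names below are assumptions, so follow the linked API reference for the authoritative contract):

```python
# Rough illustration only: host format, api-version, and body fields are assumptions;
# see the linked API reference for the authoritative request contract.
import requests
from azure.identity import DefaultAzureCredential

token = DefaultAzureCredential().get_token("https://farmbeats.azure.net/.default").token
headers = {"Authorization": f"Bearer {token}", "Content-Type": "application/json"}
endpoint = "https://<your-adma-instance>.farmbeats.azure.net"  # assumed host format
job_id = "my-farm-operations-job-1"
params = {"api-version": "2023-07-01-preview"}  # assumed version

body = {
    "partyId": "party-123",
    "authProviderId": "FIELDVIEW",   # assumed field name
    "startYear": 2021,
    "operations": ["AllOperations"],
    "shapeType": "None",             # provider specific; "None" if not applicable
    "shapeResolution": "None",
}
create = requests.put(f"{endpoint}/farm-operations/ingest-data/{job_id}",
                      params=params, headers=headers, json=body)
print(create.status_code)

# Poll the same job ID to monitor status (FarmOperations_GetDataIngestionJobDetails).
status = requests.get(f"{endpoint}/farm-operations/ingest-data/{job_id}",
                      params=params, headers=headers).json()
print(status.get("status"), status.get("message"))
```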
++
+## Finding and retrieving Farm Activities data
+
+Now that the data is ingested into Azure Data Manager for Agriculture, it can be queried or listed with the following methods:
+
+### Method 1: search Farm Activities data using geometry intersect
+To account for the high degree of change found in field definitions, Azure Data Manager for Agriculture supports a search by intersect feature that allows you to organize data by space and time across parties, without needing to first know the farm/field hierarchy or association. If you have the partyId, you can use it in the input, and it gives you the list of farm activity data items for the specified party.
+
+[API Documentation](/rest/api/data-manager-for-agri/#farm-operations)
+
+You can also use an ID like `plantingId` to fetch the above data in the same API. If you remove the ID, you can see any other data that intersects with the same geometry across parties; that is, it shows data for the same geometry across different parties.
+
+### Method 2: List data by type
+
+Retrieved data is sorted by activity type under the party. These can be listed, with standard filters applied. Individual data items may be retrieved to view the properties and metadata, including the `sourceActivityId`, `providerFieldId` and `Geometry`.
+
+[API Documentation](/rest/api/data-manager-for-agri/#farm-operations)
+
+## List and Download Attachments
+
+The message attribute in the response of the `FarmOperations_GetDataIngestionJobDetails` API shows how much data was processed and how many attachments were created. To check the attachments associated with the partyId, use the attachments API. The response gives you all the attachments created under the partyId.
+
+API documentation: [Attachments](/rest/api/data-manager-for-agri/attachments)
+
+## Next steps
+
+* Understand our APIs [here](/rest/api/data-manager-for-agri).
data-manager-for-agri How To Integrate With Farm Ops Data Provider https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-manager-for-agri/how-to-integrate-with-farm-ops-data-provider.md
+
+ Title: How to integrate with Farm Activities data provider
+description: Learn how to integrate with Farm Activities data provider
++++ Last updated : 08/14/2023+++
+# Integrate with Farm Activities Data Provider
+Azure Data Manager for Agriculture supports connectors to conveniently sync your end-users' data from a range of farm machinery data sources. The setup involves **configuring the oAuth flow as a prerequisite for integrating with any Farm Activities data provider**, along with a per-account, transparent consent step that handles initial and incremental data sync to keep the ADMA data estate up to date.
+
+> [!NOTE]
+>
+> Steps 1 to 3 are part of the one-time-per-provider initial configuration. Once integrated, you will be able to enable all your end users to use the existing oAuth workflow and call the config API (Step 4) per user (PartyID) to retrieve the access token.
+
+## Provider setup
The example flow here uses Climate FieldView.
+### Step 1: App Creation
+
+If your application isn't already registered with Climate FieldView, go to the [FieldView portal](https://dev.fieldview.com/join-us/) and submit the form. Once FieldView processes your request, they send your `client_id` and `client_secret`, which you use once per ADMA instance for FieldView.
+
+### Step 2: Provider Configuration
+
+Use the `oAuthProvider` API to create or update the oAuth provider (Ex: FIELDVIEW) with appropriate credentials of the newly created App.
+
+API documentation: [oAuthProviders - Create Or Update](/rest/api/data-manager-for-agri/dataplane-version2023-07-01-preview/o-auth-providers/create-or-update)
++
+**Optional Step:** Once the operation is done, you can run the [oAuthProviders_Get](/rest/api/data-manager-for-agri/dataplane-version2023-07-01-preview/o-auth-providers/get) to verify whether the application is registered.
+Now, all the parties created in your Azure Data Manager for Agriculture instance can use FieldView as a provider to fetch Farm Activities data.
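As a hedged sketch of Step 2 (the host format, api-version, content type, and body field names are assumptions; consult the linked oAuthProviders reference for the exact contract):

```python
# Hedged sketch: host format, api-version, and body field names are assumptions;
# consult the linked oAuthProviders API reference for the exact contract.
import requests
from azure.identity import DefaultAzureCredential

token = DefaultAzureCredential().get_token("https://farmbeats.azure.net/.default").token
endpoint = "https://<your-adma-instance>.farmbeats.azure.net"  # assumed host format

resp = requests.patch(
    f"{endpoint}/oauth/providers/FIELDVIEW",
    params={"api-version": "2023-07-01-preview"},  # assumed version
    headers={"Authorization": f"Bearer {token}",
             "Content-Type": "application/merge-patch+json"},  # assumed content type
    json={
        "appId": "<client_id from FieldView>",        # assumed field names
        "appSecret": "<client_secret from FieldView>",
    },
)
print(resp.status_code, resp.json())
```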
+
+### Step 3: Endpoint Configuration
+
+**User redirect endpoint**: This endpoint is where you want your users to be redirected once the oAuth flow is completed. You generate this endpoint and provide it to ADMA as `userRedirectLink` in the oauth/tokens/:connect API.
+**Register the oAuth callback endpoint with your App on Climate FieldView portal.**
+## End-user account setup
+### Step 4: Party (End-user) Integration
+
+When a party (end-user) lands on your webpage where the user action is expected (Ex: a Connect to FieldView button), make a call to the `oauth/tokens/:connect` API to get back the oAuth provider's (Ex: Climate FieldView) sign-in URI and start the end-user oAuth flow.
+
+API documentation: [oAuthTokens - Get OAuth Connection Link](/rest/api/data-manager-for-agri/dataplane-version2023-07-01-preview/o-auth-tokens/get-o-auth-connection-link)
+
+Once the `oauth/tokens/:connect` API successfully returns the `oauthAuthorizationLink`, the **end-user clicks on this link to complete the oAuth flow** (Ex: For Climate FieldView, the user is served a FieldView access consent and sign-in page). Once the sign-in is completed, ADMA redirects the user to the endpoint provided by the customer (`userRedirectLink`) with the following query parameters in the URL:
+
+1. **status** (success/failure)
+2. **state** (optional string to uniquely identify the user at customer end)
+3. **message** (optional string)
+4. **errorCode** (optional string sent for Failure/error) in the parameters.
+
+> [!NOTE]
+>
+> If the API returns 404, then it implies the oAuth flow failed and ADMA could not acquire the access token.
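For illustration, a hedged Python sketch of the Step 4 call (the host format, api-version, and body field names are assumptions; see the linked oAuthTokens reference for the exact contract):

```python
# Illustrative sketch: host format, api-version, and field names are assumptions;
# see the linked oAuthTokens API reference for the exact contract.
import requests
from azure.identity import DefaultAzureCredential

token = DefaultAzureCredential().get_token("https://farmbeats.azure.net/.default").token
endpoint = "https://<your-adma-instance>.farmbeats.azure.net"  # assumed host format

resp = requests.post(
    f"{endpoint}/oauth/tokens/:connect",
    params={"api-version": "2023-07-01-preview"},  # assumed version
    headers={"Authorization": f"Bearer {token}", "Content-Type": "application/json"},
    json={
        "partyId": "party-123",
        "oAuthProviderId": "FIELDVIEW",
        "userRedirectLink": "https://contoso.com/oauth-done",  # your redirect endpoint
    },
)
oauth_authorization_link = resp.text  # the sign-in URI the end user opens
print(oauth_authorization_link)
```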
+
+### Step 5: Check Access Token Info (Optional)
+
+This step is optional; it only confirms whether the required valid access token has been acquired for a given user or list of users. You can do this by making a call to the `oauth/tokens` API and **checking for the entry `isValid: true` in the response body**.
+
+API documentation: [oAuthTokens - List](/rest/api/data-manager-for-agri/dataplane-version2023-07-01-preview/o-auth-tokens/list)
+
+**This step marks the successful completion of the oAuth flow for a user**. Now, the user is all-set to trigger a new [FarmOperationsDataJob](./how-to-ingest-and-egress-farm-operations-data.md) to start pulling the Farm Activities data from Climate FieldView.
data-manager-for-agri How To Set Up Audit Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-manager-for-agri/how-to-set-up-audit-logs.md
After you create a Data Manager for Agriculture resource instance, you can monitor how and when your resources are accessed, and by whom. You can also debug reasons for failure for data-plane requests. To do this, you need to enable logging for Azure Data Manager for Agriculture. You can then save log information at a destination such as a storage account, event hub or a log analytics workspace, that you provide.
-This article provides you with the steps to setup logging for Azure Data Manager for Agriculture.
+This article provides you with the steps to set up logging for Azure Data Manager for Agriculture.
## Enable collection of logs
The `categories` field for Data Manager for Agriculture can have values that are
### Categories table

| category| Description |
| | |
-|FarmManagementLogs| Logs for CRUD operations for party, Farm, Field, Boundary, Seasonal Field, Crop, CropVariety, Season, Attachment, prescription maps, prescriptions, management zones, zones, plant tissue analysis and nutrient analyses.
+|FarmManagementLogs| Logs for CRUD operations for party, Farm, Field, Seasonal Field, Crop, CropVariety, Season, Attachment, prescription maps, prescriptions, management zones, zones, plant tissue analysis and nutrient analyses.
|FarmOperationsLogs|Logs for CRUD operations for FarmOperations data ingestion job, ApplicationData, PlantingData, HarvestingData, TillageData
|SatelliteLogs| Logs for create and get operations for Satellite data ingestion job
|WeatherLogs|Logs for create, delete and get operations for weather data ingestion job
All the `categories` of resource logs are mapped as a table in log analytics. To
### List of tables in log analytics and their mapping to categories in resource logs

| Table name in log analytics| Categories in resource logs |Description |
| | |
-|AgriFoodFarmManagementLogs|FarmManagementLogs| Logs for CRUD operations for party, Farm, Field, Boundary, Seasonal Field, Crop, CropVariety, Season, Attachment, prescription maps, prescriptions, management zones, zones, plant tissue analysis and nutrient analyses.
+|AgriFoodFarmManagementLogs|FarmManagementLogs| Logs for CRUD operations for party, Farm, Field, Seasonal Field, Crop, CropVariety, Season, Attachment, prescription maps, prescriptions, management zones, zones, plant tissue analysis and nutrient analyses.
|AgriFoodFarmOperationsLogs|FarmOperationsLogs| Logs for CRUD operations for FarmOperations data ingestion job, ApplicationData, PlantingData, HarvestingData, TillageData.
|AgriFoodSatelliteLogs|SatelliteLogs| Logs for create and get operations for satellite data ingestion job.
|AgriFoodWeatherLogs|WeatherLogs|Logs for create, delete and get operations for weather data ingestion job.
All the `categories` of resource logs are mapped as a table in log analytics. To
|**partyId**| ID of the party associated with the operation. |
|**Properties** | Available only in `AgriFoodJobProcessesLogs` table, it contains: `farmOperationEntityId` (ID of the entity that failed to be created by the farmOperation job), `farmOperationEntityType` (Type of the entity that failed to be created, can be ApplicationData, PeriodicJob, etc.), `errorCode` (Code for failure of the job at Data Manager for Agriculture end), `errorMessage` (Description of failure at the Data Manager for Agriculture end), `internalErrorCode` (Code of failure of the job provided by the provider), `internalErrorMessage` (Description of the failure provided by the provider), `providerId` (ID of the provider such as JOHN-DEERE). |
-Each of these tables can be queried by creating a log analytics workspace. Reference for query language is [here](https://learn.microsoft.com/azure/data-explorer/kql-quick-reference).
+Each of these tables can be queried by creating a log analytics workspace. Reference for query language is [here](/azure/data-explorer/kql-quick-reference).
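For example, a hedged sketch of querying one of these tables from Python with the `azure-monitor-query` package (the workspace ID and the sample query are placeholders):

```python
# Hedged sketch: query the AgriFoodFarmManagementLogs table from Python.
# The workspace ID is a placeholder; install the azure-monitor-query package first.
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

client = LogsQueryClient(DefaultAzureCredential())
response = client.query_workspace(
    workspace_id="<your-log-analytics-workspace-id>",
    query="AgriFoodFarmManagementLogs | take 10",
    timespan=timedelta(days=1),
)
for table in response.tables:
    for row in table.rows:
        print(row)
```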
### List of sample queries in the log analytics workspace

| Query name | Description |
data-manager-for-agri How To Set Up Isv Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-manager-for-agri/how-to-set-up-isv-solution.md
Previously updated : 02/14/2023 Last updated : 09/01/2023
Follow these guidelines to install and use an ISV solution.
## Install an ISV solution

1. Once you've installed an instance of Azure Data Manager for Agriculture from the Azure portal, navigate to the Settings -> Solutions tab on the left-hand side in your instance.
-2. You'll be able to view the list of available Solutions. Select the solution of your choice and click on Add button against it.
-3. You'll be navigated to Azure Marketplace page for the Solution.
-4. Click on Contact Me CTA and the ISV partner will contact you to help with next steps of installation.
-5. To edit an installed Solution, click on the edit icon against the Solution in Solutions page. You'll be redirected to Azure Marketplace from where you can contact the ISV partner by clicking on Contact Me.
-6. To delete an installed Solution, click on the delete icon against the Solution in Solutions page and you'll be redirected to Azure Marketplace from where you can Contact the ISV partner by clicking on Contact Me.
+2. Click on **Add** to view the list of Solutions available for installation. Select the solution of your choice and click on the **Add** button against it.
+> [!NOTE]
+>
+>If a Solution has only private plans, you see a **Contact Us** button, which takes you to the Marketplace page for the Solution.
+>
+3. You are navigated to the plan selection pane where you give the following inputs:
+   1. Click on the **Authorize** button to give consent to the Solution Provider to create the app needed for Solution installation.
+   2. Enter the Object ID. See [here](#identify-object-id-of-the-solution) to find your Object ID.
+   3. Select the plan of your choice.
+4. Select the Terms and Conditions checkbox and click on Add.
+5. The Solution is deployed on your ADMA instance and role assignment will be handled from the backend.
+
+## Edit an installed Solution
+
+ To edit an installed Solution, click on the edit icon against the Solution in the Solutions page. You are redirected to the plan selection pane to modify your plan. If it's a solution with private plans, you are redirected to Azure Marketplace, from where you can contact the ISV partner by clicking on **Contact Me**.
+
+## Delete an installed Solution
+
+ To delete an installed Solution, click on the delete icon against the Solution in the Solutions page and confirm on the popup. If it's a Solution with private plans, you'll be redirected to Azure Marketplace, from where you can contact the ISV partner by clicking on Contact Me.
## Use an ISV solution
A high level view of how you can create a new request and get responses from the
* If there's any error when this request is submitted, then you may have to verify the configuration and parameters. If you're unable to resolve the issue, contact us at madma@microsoft.com
5. Now you make a call to Data Manager using the Job ID to get the final response.
   * If the request processing is completed by the ISV Solution, you get the insight response back.
- * If the request processing is still in progress, you'll get the message ΓÇ£Processing in progressΓÇ¥
+ * If the request processing is still in progress, you get the message "Processing in progress".
+
+Once all the request/responses are successfully processed, the status of the request is closed. This final output of the request is stored in Data Manager for Agriculture. You must ensure that you're submitting requests within the predefined thresholds.
-Once all the request/responses are successfully processed, the status of the request is closed. This final output of the request will be stored in Data Manager for Agriculture. You must ensure that you're submitting requests within the pre-defined thresholds.
+## Identify Object ID of the Solution
+
+1. Navigate to Enterprise Applications page in Azure portal.
+2. Use the **Application ID** mentioned in the Solution Plan selection pane and filter the Applications.
+3. Copy the **Object ID**
## Next steps
data-manager-for-agri How To Set Up Sensors Customer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-manager-for-agri/how-to-set-up-sensors-customer.md
API Endpoint: PATCH /sensor-partners/{sensorPartnerId}/integrations/{integration
This step marks the completion of the sensor partner onboarding from a customer perspective. Partners get all the required information to call your API endpoints to create the Sensor model, Device model, Sensors & Devices. The partners are now able to push sensor events using the connection string generated for each sensor ID.
-The final step is to start consuming sensor events. Before consuming the events, you need to create a mapping of every sensor ID to a specific Party ID & Boundary ID.
+The final step is to start consuming sensor events. Before consuming the events, you need to create a mapping of every sensor ID to a specific Party ID and resource (Field, Seasonal Field).
## Step 6: Create sensor mapping
-Use the `SensorMappings` collection, call into the `SensorMappings_CreateOrUpdate` API to create mapping for each of sensor. Mapping is nothing but associating a sensor ID with a specific PartyID and BoundaryID. PartyID and BoundaryID are already present in the Data Manager for Agriculture system. This association ensures that as a platform you get to build data science models around a common boundary and party dimension. Every data source (satellite, weather, farm operations) is tied to a party & boundary. As you establish this mapping object on a per sensor level you power all the agronomic use cases to benefit from sensor data.
+Using the `SensorMappings` collection, call the `SensorMappings_CreateOrUpdate` API to create a mapping for each sensor. A mapping is nothing but an association of a sensor ID with a specific PartyID and a resource (field, seasonal field, etc.). PartyID and resources are already present in the Data Manager for Agriculture system. This association ensures that as a platform you get to build data science models around a common geometry of the resource and party dimension. Every data source (satellite, weather, farm operations) is tied to a party & resource. As you establish this mapping object at a per-sensor level, you enable all the agronomic use cases to benefit from sensor data.
API Endpoint: PATCH /sensor-mappings/{sensorMappingId}
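A hedged sketch of that call (the host format, api-version, and body field names are assumptions; check the sensor partner integration API reference for the exact contract):

```python
# Hedged sketch: host format, api-version, and body field names are assumptions;
# check the sensor partner integration API reference for the exact contract.
import requests

endpoint = "https://<your-adma-instance>.farmbeats.azure.net"  # assumed host format
headers = {"Authorization": "Bearer <access-token>",
           "Content-Type": "application/merge-patch+json"}  # assumed content type

resp = requests.patch(
    f"{endpoint}/sensor-mappings/soil-probe-7-mapping",
    params={"api-version": "2023-07-01-preview"},  # assumed version
    headers=headers,
    json={
        "sensorId": "soil-probe-7",
        "partyId": "party-123",
        "sensorPartnerId": "partner-abc",  # assumed field name
        # Resource association replaces the earlier boundary association:
        "resourceId": "field-456",         # assumed field name
        "resourceType": "Field",           # assumed field name
    },
)
print(resp.status_code, resp.json())
```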
data-manager-for-agri How To Use Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-manager-for-agri/how-to-use-events.md
# Azure Data Manager for Agriculture Preview as Event Grid source
-This article provides the properties and schema for Azure Data Manager for Agriculture events. For an introduction to event schemas, see [Azure Event Grid](https://learn.microsoft.com/azure/event-grid/event-schema) event schema.
+This article provides the properties and schema for Azure Data Manager for Agriculture events. For an introduction to event schemas, see [Azure Event Grid](/azure/event-grid/event-schema) event schema.
## Prerequisites
Here are example scenarios for consuming events in our service:
2. If there are modifications to data-plane resources such as party, fields, farms and other similar elements, you can react to changes and you can trigger workflows.

## Filtering events
-You can filter Data Manager for Agriculture <a href="https://docs.microsoft.com/cli/azure/eventgrid/event-subscription" target="_blank"> events </a> by event type, subject, or fields in the data object. Filters in Event Grid match the beginning or end of the subject so that events that match can go to the subscriber.
+You can filter Data Manager for Agriculture <a href="/cli/azure/eventgrid/event-subscription" target="_blank"> events </a> by event type, subject, or fields in the data object. Filters in Event Grid match the beginning or end of the subject so that events that match can go to the subscriber.
For instance, for the PartyChanged event, to receive notifications for changes for a particular party with ID Party1234, you may use the subject filter "EndsWith" as shown:
Subjects in an event schema provide 'starts with' and 'exact match' filters as w
Similarly, to filter the same event for a group of party IDs, use the Advanced filter on partyId field in the event data object. In a single subscription, you may add five advanced filters with a limit of 25 values for each key filtered.
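As a sketch of both filters using the `azure-mgmt-eventgrid` package (the scope, webhook endpoint, event type name, and party IDs are placeholders):

```python
# Sketch: create an event subscription with a subject suffix filter plus an
# advanced filter on data.partyId. Scope, endpoint, and IDs are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.eventgrid import EventGridManagementClient
from azure.mgmt.eventgrid.models import (
    EventSubscription,
    EventSubscriptionFilter,
    StringInAdvancedFilter,
    WebHookEventSubscriptionDestination,
)

client = EventGridManagementClient(DefaultAzureCredential(), "<subscription-id>")
scope = ("/subscriptions/<subscription-id>/resourceGroups/<rg>"
         "/providers/Microsoft.AgFoodPlatform/farmBeats/<instance-name>")  # assumed scope

subscription = EventSubscription(
    destination=WebHookEventSubscriptionDestination(endpoint_url="https://contoso.com/events"),
    filter=EventSubscriptionFilter(
        included_event_types=["Microsoft.AgFoodPlatform.PartyChangedV2"],  # assumed exact name
        subject_ends_with="Party1234",
        advanced_filters=[StringInAdvancedFilter(key="data.partyId",
                                                 values=["party-1", "party-2"])],
    ),
)
client.event_subscriptions.begin_create_or_update(scope, "party-events", subscription).result()
```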
-To learn more about how to apply filters, see <a href = "https://docs.microsoft.com/azure/event-grid/how-to-filter-events" target = "_blank"> filter events for Event Grid. </a>
+To learn more about how to apply filters, see <a href = "/azure/event-grid/how-to-filter-events" target = "_blank"> filter events for Event Grid. </a>
## Subscribing to events

You can subscribe to Data Manager for Agriculture events by using the Azure portal or the Azure Resource Manager client. Each of these provides the user with a set of functionalities. Refer to the following resources to know more about each method.
-<a href = "https://docs.microsoft.com/azure/event-grid/subscribe-through-portal#:~:text=Create%20event%20subscriptions%201%20Select%20All%20services.%202,event%20types%20option%20checked.%20...%20More%20items..." target = "_blank"> Subscribe to events using portal </a>
+<a href = "/azure/event-grid/subscribe-through-portal" target = "_blank"> Subscribe to events using portal </a>
-<a href = "https://docs.microsoft.com/azure/event-grid/sdk-overview" target = "_blank"> Subscribe to events using the ARM template client </a>
+<a href = "/azure/event-grid/sdk-overview" target = "_blank"> Subscribe to events using the ARM template client </a>
## Practices for consuming events
Applications that handle Data Manager for Agriculture events should follow a few
* Check that the eventType is one you're prepared to process, and don't assume that all events you receive are the types you expect.
* As messages can arrive out of order, use the modifiedTime and etag fields to understand the order of events for any particular object.
-* Data Manager for Agriculture events guarantees at-least-once delivery to subscribers, which ensures that all messages are outputted. However due to retries or availability of subscriptions, duplicate messages may occasionally occur. To learn more about message delivery and retry, see <a href = "https://docs.microsoft.com/azure/event-grid/delivery-and-retry" target = "_blank">Event Grid message delivery and retry </a>
+* Data Manager for Agriculture events guarantee at-least-once delivery to subscribers, which ensures that all messages are delivered. However, due to retries or the availability of subscriptions, duplicate messages may occasionally occur. To learn more about message delivery and retry, see <a href = "/azure/event-grid/delivery-and-retry" target = "_blank">Event Grid message delivery and retry </a>
* Ignore fields you don't understand. This practice will help keep you resilient to new features that might be added in the future.
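A small sketch of a handler that applies these practices (the event shape follows the data objects described below; the names are illustrative):

```python
# Sketch of a consumer applying the practices above: check eventType, order by
# modifiedDateTime per object, and tolerate duplicates and unknown fields.
KNOWN_TYPES = {"Microsoft.AgFoodPlatform.FarmChangedV2",
               "Microsoft.AgFoodPlatform.FieldChangedV2"}
last_seen = {}  # object id -> latest modifiedDateTime processed

def handle(event: dict) -> None:
    if event.get("eventType") not in KNOWN_TYPES:
        return  # skip types we aren't prepared to process
    data = event.get("data", {})
    obj_id, modified = data.get("id"), data.get("modifiedDateTime", "")
    # Drop out-of-order or duplicate deliveries (at-least-once semantics).
    if obj_id in last_seen and modified <= last_seen[obj_id]:
        return
    last_seen[obj_id] = modified
    # ... apply the change; ignore fields you don't understand ...
```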
Applications that handle Data Manager for Agriculture events should follow a few
|Microsoft.AgFoodPlatform.FarmChangedV2| Published when a farm is created /updated/deleted in an Azure Data Manager for Agriculture resource
|Microsoft.AgFoodPlatform.FieldChangedV2|Published when a Field is created /updated/deleted in an Azure Data Manager for Agriculture resource
|Microsoft.AgFoodPlatform.SeasonalFieldChangedV2|Published when a Seasonal Field is created /updated/deleted in an Azure Data Manager for Agriculture resource
-|Microsoft.AgFoodPlatform.BoundaryChangedV2|Published when a farm is created /updated/deleted in an Azure Data Manager for Agriculture resource
|Microsoft.AgFoodPlatform.CropChanged|Published when a Crop is created /updated/deleted in an Azure Data Manager for Agriculture resource
|Microsoft.AgFoodPlatform.CropProductChanged|Published when a Crop Product is created /updated/deleted in an Azure Data Manager for Agriculture resource
|Microsoft.AgFoodPlatform.SeasonChanged|Published when a Season is created /updated/deleted in an Azure Data Manager for Agriculture resource
For sensor mapping events, the data object contains the following properties:

Property| Type| Description
|:--| :-| :-|
sensorId| string| ID associated with the sensor.
partyId| string| ID associated with the party.
-boundaryId| string| ID associated with the boundary.
sensorPartnerId| string| ID associated with the sensorPartner.
| ID | string| Unique ID of resource.
actionType| string| Indicates the change that triggered publishing of the event. Applicable values are created, updated, deleted.
eTag| string| Implements optimistic concurrency.
description| string| Textual description of the resource.
name| string| Name to identify resource.
-Boundary events have the following data object:
-
-|Property |Type |Description |
-|:|:|:|
-| ID | string | User defined ID of boundary |
-|actionType | string | Indicates the change that triggered publishing of the event. Applicable values are created, updated, deleted. |
-|modifiedDateTime | string | Indicates the time at which the event was last modified. |
-|createdDateTime | string | Indicates the time at which the resource was created. |
-|status | string | Contains the user defined status of the object. |
-|eTag | string | Implements optimistic concurrency. |
-|partyId | string | ID of the party it belongs to. |
-|parentId | string | ID of the parent boundary belongs. |
-|parentType | string | Type of the parent boundary belongs to. Applicable values are Field, SeasonalField, Zone, Prescription, PlantTissueAnalysis, ApplicationData, PlantingData, TillageData, HarvestData etc. |
-|description | string | Textual description of the resource. |
-|properties | string | It contains user defined key-value pair. |
-
Seasonal field events have the following data object:

Property| Type| Description
|:--| :-| :-|
| ID | string| User defined ID of the seasonal field.
farmId| string| User defined ID of the farm that seasonal field is associated with.
-partyId| string| Id of the party it belongs to.
+partyId| string| ID of the party it belongs to.
seasonId| string| User defined ID of the season that seasonal field is associated with.
fieldId| string| User defined ID of the field that seasonal field is associated with.
name| string| User defined name of the seasonal field.
Insight events have the following data object:
Property| Type| Description
|:--| :-| :-|
modelId| string| ID of the associated model.|
-resourceId| string| User-defined ID of the resource such as farm, field, boundary etc.|
-resourceType| string | Name of the resource type. Applicable values are Party, Farm, Field, SeasonalField, Boundary etc.|
+resourceId| string| User-defined ID of the resource such as farm, field etc.|
+resourceType| string | Name of the resource type. Applicable values are Party, Farm, Field, SeasonalField etc.|
partyId| string| ID of the party it belongs to.|
modelVersion| string| Version of the associated model.|
| ID | string| User defined ID of the resource.|
InsightAttachment events have the following data object:
Property| Type| Description
|:--| :-| :-|
modelId| string| ID of the associated model.
-resourceId| string| User-defined ID of the resource such as farm, field, boundary etc.
+resourceId| string| User-defined ID of the resource such as farm, field etc.
resourceType| string | Name of the resource type.
partyId| string| ID of the party it belongs to.
insightId| string| ID associated with the insight resource.
Property| Type| Description
|:--| :-| :-|
| ID | string| User defined ID of the field.
farmId| string| User defined ID of the farm that field is associated with.
-partyId| string| Id of the party it belongs to.
+partyId| string| ID of the party it belongs to.
name| string| User defined name of the field.
actionType| string| Indicates the change that triggered publishing of the event. Applicable values are created, updated, deleted.
properties| Object| It contains user defined key-value pairs.
AttachmentChanged event has the following data object
Property| Type| Description
|:--| :-| :-|
-resourceId| string| User-defined ID of the resource such as farm, field, boundary etc.
+resourceId| string| User-defined ID of the resource such as farm, field etc.
resourceType| string | Name of the resource type.
partyId| string| ID of the party it belongs to.
| ID | string| User defined ID of the resource.
PrescriptionChanged event has the following data object
|Property | Type| Description|
|:--| :-| :-|
prescriptionMapId|string| User-defined ID of the associated prescription map.
-partyId| string|Id of the party it belongs to.
+partyId| string|ID of the party it belongs to.
| ID | string| User-defined ID of the prescription.
actionType| string| Indicates the change triggered during publishing of the event. Applicable values are Created, Updated, Deleted.
status| string| Contains the user-defined status of the prescription.
NutrientAnalysisChanged event has the following data object:
|:--| :-| :-|
parentId| string| ID of the parent the nutrient analysis belongs to.
parentType| string| Type of the parent the nutrient analysis belongs to. Applicable value(s) are PlantTissueAnalysis.
-partyId| string|Id of the party it belongs to.
+partyId| string|ID of the party it belongs to.
| ID | string| User-defined ID of the nutrient analysis.
actionType| string| Indicates the change that is triggered during publishing of the event. Applicable values are Created, Updated, Deleted.
properties| object| It contains user-defined key-value pairs.
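As an illustration of the data objects described above, here's the shape of a NutrientAnalysisChanged payload assembled from the property table; every value is a made-up placeholder, so treat this as a sketch rather than a captured event.

````python
# Illustrative NutrientAnalysisChanged data object; all values are placeholders.
nutrient_analysis_changed_data = {
    "parentId": "tissue-analysis-1",      # ID of the parent it belongs to
    "parentType": "PlantTissueAnalysis",  # the only applicable parent type
    "partyId": "contoso-party",
    "id": "nutrient-analysis-1",          # user-defined ID
    "actionType": "Created",              # Created, Updated, or Deleted
    "properties": {"lab": "contoso-labs"},  # user-defined key-value pairs
}
````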
data-manager-for-agri How To Use Nutrient Apis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-manager-for-agri/how-to-use-nutrient-apis.md
Here's how we have modeled tissue analysis in Azure Data Manager for Agriculture
* Step 1: Create a **plant tissue analysis** resource for every sample you get tested (a rough sketch follows the note below).
* Step 2: For each nutrient that is being tested, create a nutrient analysis resource with the plant tissue analysis created in step 1 as its parent.
* Step 3: Upload the analysis report from the lab (for example: pdf, xlsx files) as an attachment and associate it with the 'plant tissue analysis' resource created in step 1.
-* Step 4: If you have location (longitude, latitude) data, then create a point boundary with 'plant tissue analysis' as parent created in step 1.
+* Step 4: If you have location (longitude, latitude) data, then create a point geometry with 'plant tissue analysis' as parent created in step 1.
> [!Note]
-> One plant tissue analysis resource is created per sample. One point boundary can be associated with it.
+> One plant tissue analysis resource is created per sample. One point geometry can be associated with it.
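A rough sketch of steps 1 and 2 follows; the host name, endpoint paths, API version, and token are placeholders, so confirm the exact routes and authentication in the Azure Data Manager for Agriculture REST reference before relying on them.

````python
import requests

# Placeholder host and credentials; not the verified API surface.
BASE = "https://<your-resource>.farmbeats.azure.net"
HEADERS = {"Authorization": "Bearer <access-token>"}
party, tissue_id = "contoso-party", "tissue-analysis-1"

# Step 1: one plant tissue analysis resource per lab sample.
requests.put(f"{BASE}/parties/{party}/plant-tissue-analyses/{tissue_id}",
             headers=HEADERS, json={"name": "Corn leaf sample"})

# Step 2: one nutrient analysis per tested nutrient, parented to step 1.
requests.put(f"{BASE}/parties/{party}/nutrient-analyses/nitrogen-1",
             headers=HEADERS,
             json={"parentId": tissue_id, "parentType": "PlantTissueAnalysis"})

# Steps 3 and 4 follow the same pattern: upload the lab report as an
# attachment, and create a point geometry if you have coordinates, both
# parented to the plant tissue analysis from step 1.
````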
## Next steps
data-manager-for-agri How To Write Weather Extension https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-manager-for-agri/how-to-write-weather-extension.md
Hence the extension needs to provide a [**HandleBars template**](https://handleb
This section is dedicated to the functionalities/capabilities built by Data Manager for Agriculture. In the case of the weather extension, centroid calculation is one such functionality.
-When users don't provide the latitude/longitude coordinates, Data Manager for Agriculture will be using the primary boundary of the field (ID passed by user) to compute the centroid. The computed centroid coordinates will be passed as the latitude and longitude to the extension (data provider). Hence for Data Manager for Agriculture to be able to understand the usage of location coordinates the functional parameters section is used.
+When users don't provide the latitude/longitude coordinates, Data Manager for Agriculture uses the primary geometry of the field (whose ID is passed by the user) to compute the centroid. The computed centroid coordinates are passed as the latitude and longitude to the extension (data provider). The functional parameters section is what lets Data Manager for Agriculture understand how the location coordinates are used.
For Data Manager for Agriculture to understand the usage of latitude and longitude in the `apiName` input parameters, the extension is expected to provide the `name` of the key used for collecting location information, followed by a **Handlebars template** indicating how the latitude and longitude values need to be passed.
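As a sketch of that contract (illustrative only, not the documented manifest schema), the extension might declare the key name and a Handlebars template like this:

````python
# Hypothetical functional-parameter declaration; the key name "location" and
# the template shape are illustrative, not the documented schema.
location_parameter = {
    "name": "location",
    # {{latitude}} and {{longitude}} are filled in by Data Manager for
    # Agriculture with the computed field centroid when the caller didn't
    # pass coordinates explicitly.
    "value": "{{latitude}},{{longitude}}",
}
````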
data-manager-for-agri Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-manager-for-agri/release-notes.md
Previously updated : 06/25/2023 Last updated : 08/23/2023
Azure Data Manager for Agriculture Preview is updated on an ongoing basis. To st
[!INCLUDE [public-preview-notice.md](includes/public-preview-notice.md)]
+## July 2023
+
+### Weather API update:
+We deprecated the old weather APIs starting with API version 2023-07-01. They have been replaced with new, simple yet powerful, provider-agnostic weather APIs. See the API documentation [here](/rest/api/data-manager-for-agri/#weather).
+
+### New farm operations connector:
+We've added support for Climate FieldView as a built-in data source. You can now auto-sync planting, application, and harvest activity files from FieldView accounts directly into Azure Data Manager for Agriculture. Learn more about this [here](concepts-farm-operations-data.md).
+
+### Common Data Model now with geo-spatial support:
+We've updated our data model to improve flexibility. The boundary object has been deprecated in favor of a geometry property that is now supported in nearly all data objects. This change brings consistency to how space is handled across hierarchy, activity and observation themes. It allows for more flexible integration when ingesting data from a provider with strict hierarchy requirements. You can now sync data that may not perfectly align with an existing hierarchy definition and resolve the conflicts with spatial overlap queries. Learn more [here](concepts-hierarchy-model.md).
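A minimal sketch of what this looks like in practice: a data object carrying standard GeoJSON in its geometry property instead of referencing a separate boundary object. The IDs and coordinates are placeholders.

````python
# Illustrative data object with an inline GeoJSON geometry; all values are
# placeholders.
field_with_geometry = {
    "id": "field1",
    "partyId": "contoso-party",
    "geometry": {
        "type": "Polygon",
        # One linear ring of [longitude, latitude] pairs; the ring is closed,
        # so the first and last positions are identical.
        "coordinates": [[
            [-97.50, 35.20], [-97.50, 35.30], [-97.40, 35.30],
            [-97.40, 35.20], [-97.50, 35.20],
        ]],
    },
}
````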
+
## June 2023

### Use your license keys via key vault
data-manager-for-agri Sample Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-manager-for-agri/sample-events.md
The event samples given on this page represent an event notification.
}
````
- 6. **Event type: Microsoft.AgFoodPlatform.BoundaryChangedV2**
-
-````json
- {
- "data": {
- "parentType": "Field",
- "partyId": "amparty",
- "actionType": "Created",
- "modifiedDateTime": "2022-11-01T10:48:14Z",
- "eTag": "af005dfc-0000-0700-0000-6360f96e0000",
- "id": "amb",
- "name": "string",
- "description": "string",
- "createdDateTime": "2022-11-01T10:48:14Z"
- },
- "id": "v2-25fd01cf-72d4-401d-92ee-146de348e815",
- "topic": "/subscriptions/{SUBSCRIPTION-ID}/resourceGroups/{RESOURCE-GROUP-NAME}/providers/Microsoft.AgFoodPlatform/farmBeats/{YOUR-RESOURCE-NAME}",
- "subject": "/parties/amparty/boundaries/amb",
- "eventType": "Microsoft.AgFoodPlatform.BoundaryChangedV2",
- "dataVersion": "1.0",
- "metadataVersion": "1",
- "eventTime": "2022-11-01T10:48:14.2385557Z"
- }
- ````
-
7. **Event type: Microsoft.AgFoodPlatform.SeasonChanged**

````json
{
The event samples given on this page represent an event notification.
{
"data": {
"partyId": "contoso-partyId",
- "message": "Created job 'sat-ingestion-job-1' to fetch satellite data for boundary 'contoso-boundary' from startDate '08/07/2022' to endDate '10/07/2022' (both inclusive).",
+ "message": "Created job 'sat-ingestion-job-1' to fetch satellite data for resource 'contoso-field' from startDate '08/07/2022' to endDate '10/07/2022' (both inclusive).",
"status": "Running",
"lastActionDateTime": "2022-11-07T09:35:23.3141004Z",
"isCancellationRequested": false,
The event samples given on this page represent an event notification.
{
"data": {
"partyId": "party1",
- "message": "Created job 'job-biomass-13sdqwd' to calculate biomass values for boundary 'boundary1' from plantingStartDate '05/03/2020' to inferenceEndDate '10/11/2020' (both inclusive).",
+ "message": "Created job 'job-biomass-13sdqwd' to calculate biomass values for resource 'field1' from plantingStartDate '05/03/2020' to inferenceEndDate '10/11/2020' (both inclusive).",
"status": "Waiting",
"lastActionDateTime": "0001-01-01T00:00:00Z",
"isCancellationRequested": false,
The event samples given on this page represent an event notification.
{
"data": {
"partyId": "party",
- "message": "Created job 'job-soilmoisture-sf332q' to calculate soil moisture values for boundary 'boundary' from inferenceStartDate '05/01/2022' to inferenceEndDate '05/20/2022' (both inclusive).",
+ "message": "Created job 'job-soilmoisture-sf332q' to calculate soil moisture values for resource 'field1' from inferenceStartDate '05/01/2022' to inferenceEndDate '05/20/2022' (both inclusive).",
"status": "Waiting",
"lastActionDateTime": "0001-01-01T00:00:00Z",
"isCancellationRequested": false,
The event samples given on this page represent an event notification.
{
"data": {
"modelId": "Microsoft.SoilMoisture",
- "resourceType": "Boundary",
- "resourceId": "boundary",
+ "resourceType": "Field",
+ "resourceId": "fieldId",
"modelVersion": "1.0",
"partyId": "party",
"actionType": "Updated",
The event samples given on this page represent an event notification.
"data": {
"insightId": "f5c2071c-c7ce-05f3-be4d-952a26f2490a",
"modelId": "Microsoft.SoilMoisture",
- "resourceType": "Boundary",
- "resourceId": "boundary",
+ "resourceType": "Field",
+ "resourceId": "fieldId",
"partyId": "party",
"actionType": "Updated",
"modifiedDateTime": "2022-11-03T18:21:26Z",
The event samples given on this page represent an event notification.
"data": {
"sensorId": "sensor",
"partyId": "ContosopartyId",
- "boundaryId": "ContosoBoundary",
"sensorPartnerId": "sensorpartner",
"actionType": "Created",
"status": "string",
databox-online Azure Stack Edge Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-alerts.md
This article describes how to view alerts and interpret alert severity for event
## Overview
-The Alerts blade for an Azure Stack Edge device lets you review Azure Stack Edge device-related alerts in real-time. From this blade, you can centrally monitor the health issues of your Azure Stack Edge devices and the overall Microsoft Azure Azure Stack Edge solution.
+The Alerts blade for an Azure Stack Edge device lets you review Azure Stack Edge device-related alerts in real-time. From this blade, you can centrally monitor the health issues of your Azure Stack Edge devices and the overall Microsoft Azure Stack Edge solution.
The initial display is a high-level summary of alerts at each severity level. You can drill down to see individual alerts at each severity level.
databox-online Azure Stack Edge Gpu 2304 Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-2304-release-notes.md
+
+Title: Azure Stack Edge 2304 release notes
+description: Describes critical open issues and resolutions for the Azure Stack Edge running 2304 release.
+
+Last updated : 08/21/2023
+
+# Azure Stack Edge 2304 release notes
++
+The following release notes identify the critical open issues and the resolved issues for the 2304 release for your Azure Stack Edge devices. Features and issues that correspond to a specific model of Azure Stack Edge are called out wherever applicable.
+
+The release notes are continuously updated, and as critical issues requiring a workaround are discovered, they're added. Before you deploy your device, carefully review the information contained in the release notes.
+
+This article applies to the **Azure Stack Edge 2304** release, which maps to software version **2.2.2257.1193**.
+
+## Supported update paths
+
+This software can be applied to your device if you're running **Azure Stack Edge 2207 or later** (2.2.2026.5318).
+
+You can update to the latest version using the following update paths:
+
+| Current version | Update to | Then apply |
+| --| --| --|
+|2205 and earlier |2207 |2304 |
+|2207 and later |2304 |
+
+## What's new
+
+The 2304 release has the following new features and enhancements:
+
+- **Fix for the Arc connectivity issue** - In the 2303 release, there was an issue with the Arc agent where it couldn't connect to the Azure Stack Edge Kubernetes cluster. Owing to this issue, you weren't able to manage the Kubernetes cluster via Arc.
+
+ The 2304 release fixes the connectivity issue. To manage your Azure Stack Edge Kubernetes cluster via Arc, update to this release.
+- Starting March 2023, Azure Stack Edge devices are required to be on the 2301 release or later to create a Kubernetes cluster. In preparation for this requirement, it is highly recommended that you update to the latest version as soon as possible.
+- You can deploy Azure Kubernetes service (AKS) on an Azure Stack Edge cluster. This feature is supported only for SAP and PMEC customers. For more information, see [Deploy AKS on Azure Stack Edge](azure-stack-edge-deploy-aks-on-azure-stack-edge.md).
+
+## Issues fixed in this release
+
+| No. | Feature | Issue |
+| | | |
+|**1.**|Fix for the Arc connectivity issue |In the 2303 release, there was an issue with the Arc agent where it couldn't connect to the Azure Stack Edge Kubernetes cluster. Owing to this issue, you weren't able to manage the Kubernetes cluster via Arc. <BR> The 2304 release fixes the connectivity issue. To manage your Azure Stack Edge Kubernetes cluster via Arc, update to this release. |
+
+<!--## Known issues in this release
+
+| No. | Feature | Issue | Workaround/comments |
+| | | | |
+|**1.**|Need known issues in 2303 |-->
+
+## Known issues from previous releases
+
+The following table provides a summary of known issues carried over from the previous releases.
+
+| No. | Feature | Issue | Workaround/comments |
+| | | | |
+| **1.** |Azure Stack Edge Pro + Azure SQL | Creating SQL database requires Administrator access. |Do the following steps instead of Steps 1-2 in [Create-the-sql-database](../iot-edge/tutorial-store-data-sql-server.md#create-the-sql-database). <br> 1. In the local UI of your device, enable compute interface. Select **Compute > Port # > Enable for compute > Apply.**<br> 2. Download `sqlcmd` on your client machine from [SQL command utility](/sql/tools/sqlcmd-utility). <br> 3. Connect to your compute interface IP address (the port that was enabled), adding a ",1401" to the end of the address.<br> 4. Final command will look like this: sqlcmd -S {Interface IP},1401 -U SA -P "Strong!Passw0rd". After this, steps 3-4 from the current documentation should be identical. |
+| **2.** |Refresh| Incremental changes to blobs restored via **Refresh** are NOT supported |For Blob endpoints, partial updates of blobs after a Refresh, may result in the updates not getting uploaded to the cloud. For example, sequence of actions such as:<br> 1. Create blob in cloud. Or delete a previously uploaded blob from the device.<br> 2. Refresh blob from the cloud into the appliance using the refresh functionality.<br> 3. Update only a portion of the blob using Azure SDK REST APIs. These actions can result in the updated sections of the blob to not get updated in the cloud. <br>**Workaround**: Use tools such as robocopy, or regular file copy through Explorer or command line, to replace entire blobs.|
+|**3.**|Throttling|During throttling, if new writes to the device aren't allowed, writes by the NFS client fail with a "Permission Denied" error.| The error will show as below:<br>`hcsuser@ubuntu-vm:~/nfstest$ mkdir test`<br>mkdir: can't create directory 'test': Permission denied|
+|**4.**|Blob Storage ingestion|When using AzCopy version 10 for Blob storage ingestion, run AzCopy with the following argument: `Azcopy <other arguments> --cap-mbps 2000`| If these limits aren't provided for AzCopy, it could potentially send a large number of requests to the device, resulting in issues with the service.|
+|**5.**|Tiered storage accounts|The following apply when using tiered storage accounts:<br> - Only block blobs are supported. Page blobs aren't supported.<br> - There's no snapshot or copy API support.<br> - Hadoop workload ingestion through `distcp` isn't supported as it uses the copy operation heavily.||
+|**6.**|NFS share connection|If multiple processes are copying to the same share, and the `nolock` attribute isn't used, you may see errors during the copy.|The `nolock` attribute must be passed to the mount command to copy files to the NFS share. For example: `C:\Users\aseuser mount -o anon \\10.1.1.211\mnt\vms Z:`.|
+|**7.**|Kubernetes cluster|When applying an update on your device that is running a Kubernetes cluster, the Kubernetes virtual machines will restart and reboot. In this instance, only pods that are deployed with replicas specified are automatically restored after an update. |If you have created individual pods outside a replication controller without specifying a replica set, these pods won't be restored automatically after the device update. You'll need to restore these pods.<br>A replica set replaces pods that are deleted or terminated for any reason, such as node failure or disruptive node upgrade. For this reason, we recommend that you use a replica set even if your application requires only a single pod.|
+|**8.**|Kubernetes cluster|Kubernetes on Azure Stack Edge Pro is supported only with Helm v3 or later. For more information, go to [Frequently asked questions: Removal of Tiller](https://v3.helm.sh/docs/faq/).|
+|**9.**|Kubernetes |Port 31000 is reserved for Kubernetes Dashboard. Port 31001 is reserved for Edge container registry. Similarly, in the default configuration, the IP addresses 172.28.0.1 and 172.28.0.10, are reserved for Kubernetes service and Core DNS service respectively.|Don't use reserved IPs.|
+|**10.**|Kubernetes |Kubernetes doesn't currently allow multi-protocol LoadBalancer services. For example, a DNS service that would have to listen on both TCP and UDP. |To work around this limitation of Kubernetes with MetalLB, two services (one for TCP, one for UDP) can be created on the same pod selector. These services use the same sharing key and spec.loadBalancerIP to share the same IP address. IPs can also be shared if you have more services than available IP addresses. <br> For more information, see [IP address sharing](https://metallb.universe.tf/usage/#ip-address-sharing).|
+|**11.**|Kubernetes cluster|Existing Azure IoT Edge marketplace modules may require modifications to run on IoT Edge on Azure Stack Edge device.|For more information, see [Run existing IoT Edge modules from Azure Stack Edge Pro FPGA devices on Azure Stack Edge Pro GPU device](azure-stack-edge-gpu-modify-fpga-modules-gpu.md).|
+|**12.**|Kubernetes |File-based bind mounts aren't supported with Azure IoT Edge on Kubernetes on Azure Stack Edge device.|IoT Edge uses a translation layer to translate `ContainerCreate` options to Kubernetes constructs. Creating `Binds` maps to `hostpath` directory and thus file-based bind mounts can't be bound to paths in IoT Edge containers. If possible, map the parent directory.|
+|**13.**|Kubernetes |If you bring your own certificates for IoT Edge and add those certificates on your Azure Stack Edge device after the compute is configured on the device, the new certificates aren't picked up.|To work around this problem, you should upload the certificates before you configure compute on the device. If the compute is already configured, [Connect to the PowerShell interface of the device and run IoT Edge commands](azure-stack-edge-gpu-connect-powershell-interface.md#use-iotedge-commands). Restart `iotedged` and `edgehub` pods.|
+|**14.**|Certificates |In certain instances, certificate state in the local UI may take several seconds to update. |The following scenarios in the local UI may be affected. <br> - **Status** column in **Certificates** page. <br> - **Security** tile in **Get started** page. <br> - **Configuration** tile in **Overview** page.<br> |
+|**15.**|Certificates|Alerts related to signing chain certificates aren't removed from the portal even after uploading new signing chain certificates.| |
+|**16.**|Web proxy |NTLM authentication-based web proxy isn't supported. ||
+|**17.**|Internet Explorer|If enhanced security features are enabled, you may not be able to access local web UI pages. | Disable enhanced security, and restart your browser.|
+|**18.**|Kubernetes |Kubernetes doesn't support ":" in environment variable names that are used by .NET applications. This is also required for Event Grid IoT Edge module to function on Azure Stack Edge device and other applications. For more information, see [ASP.NET core documentation](/aspnet/core/fundamentals/configuration/?tabs=basicconfiguration#environment-variables).|Replace ":" with a double underscore. For more information, see [Kubernetes issue](https://github.com/kubernetes/kubernetes/issues/53201)|
+|**19.** |Azure Arc + Kubernetes cluster |By default, when resource `yamls` are deleted from the Git repository, the corresponding resources aren't deleted from the Kubernetes cluster. |To allow the deletion of resources when they're deleted from the git repository, set `--sync-garbage-collection` in Arc OperatorParams. For more information, see [Delete a configuration](../azure-arc/kubernetes/tutorial-use-gitops-connected-cluster.md#additional-parameters). |
+|**20.**|NFS |Applications that use NFS share mounts on your device to write data should use Exclusive write. That ensures the writes are written to the disk.| |
+|**21.**|Compute configuration |Compute configuration fails in network configurations where gateways or switches or routers respond to Address Resolution Protocol (ARP) requests for systems that don't exist on the network.| |
+|**22.**|Compute and Kubernetes |If Kubernetes is set up first on your device, it claims all the available GPUs. Hence, it isn't possible to create Azure Resource Manager VMs using GPUs after setting up the Kubernetes. |If your device has 2 GPUs, then you can create one VM that uses the GPU and then configure Kubernetes. In this case, Kubernetes will use the remaining available one GPU. |
+|**23.**|Custom script VM extension |There's a known issue in the Windows VMs that were created in an earlier release and the device was updated to 2103. <br> If you add a custom script extension on these VMs, the Windows VM Guest Agent (Version 2.7.41491.901 only) gets stuck in the update causing the extension deployment to time out. | To work around this issue: <br> 1. Connect to the Windows VM using remote desktop protocol (RDP). <br> 2. Make sure that the `waappagent.exe` is running on the machine: `Get-Process WaAppAgent`. <br> 3. If the `waappagent.exe` isn't running, restart the `rdagent` service: `Get-Service RdAgent` \| `Restart-Service`. Wait for 5 minutes.<br> 4. While the `waappagent.exe` is running, kill the `WindowsAzureGuest.exe` process. <br> 5. After you kill the process, the process starts running again with the newer version. <br> 6. Verify that the Windows VM Guest Agent version is 2.7.41491.971 using this command: `Get-Process WindowsAzureGuestAgent` \| `fl ProductVersion`.<br> 7. [Set up custom script extension on Windows VM](azure-stack-edge-gpu-deploy-virtual-machine-custom-script-extension.md). |
+|**24.**|Multi-Process Service (MPS) |When the device software and the Kubernetes cluster are updated, the MPS setting isn't retained for the workloads. |[Re-enable MPS](azure-stack-edge-gpu-connect-powershell-interface.md#connect-to-the-powershell-interface) and redeploy the workloads that were using MPS. |
+|**25.**|Wi-Fi |Wi-Fi doesn't work on Azure Stack Edge Pro 2 in this release. |
+|**26.**|Azure IoT Edge |The managed Azure IoT Edge solution on Azure Stack Edge is running on an older, obsolete IoT Edge runtime that is at end of life. For more information, see [IoT Edge v1.1 EoL: What does that mean for me?](https://techcommunity.microsoft.com/t5/internet-of-things-blog/iot-edge-v1-1-eol-what-does-that-mean-for-me/ba-p/3662137). Although the solution does not stop working past end of life, there are no plans to update it. |To run the latest version of Azure IoT Edge [LTSs](../iot-edge/version-history.md#version-history) with the latest updates and features on their Azure Stack Edge, we **recommend** that you deploy a [customer self-managed IoT Edge solution](azure-stack-edge-gpu-deploy-iot-edge-linux-vm.md) that runs on a Linux VM. For more information, see [Move workloads from managed IoT Edge on Azure Stack Edge to an IoT Edge solution on a Linux VM](azure-stack-edge-move-to-self-service-iot-edge.md). |
+|**27.**|AKS on Azure Stack Edge |When you update your AKS on Azure Stack Edge deployment from a previous preview version to 2303 release, there is an additional nodepool rollout. |The update may take longer. |
+|**28.**|Azure portal |When the Arc deployment fails in this release, you will see a generic *NO PARAM* error code, as all the errors are not propagated in the portal. |There is no workaround for this behavior in this release. |
+|**29.**|AKS on Azure Stack Edge |In this release, you can't modify the virtual networks once the AKS cluster is deployed on your Azure Stack Edge cluster.| To modify the virtual network, you will need to delete the AKS cluster, then modify virtual networks, and then recreate AKS cluster on your Azure Stack Edge. |
+|**30.**|AKS on Azure Stack Edge |In this release, attaching the PVC takes a long time. As a result, some pods that use persistent volumes (PVs) come up slowly after the host reboots. |A workaround is to restart the nodepool VM by connecting via the Windows PowerShell interface of the device. |
+
+## Next steps
+
+- [Update your device](azure-stack-edge-gpu-install-update.md)
databox-online Azure Stack Edge Gpu Deploy Configure Network Compute Web Proxy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy.md
Previously updated : 01/25/2023 Last updated : 07/19/2023 zone_pivot_groups: azure-stack-edge-device-deployment # Customer intent: As an IT admin, I need to understand how to connect and activate Azure Stack Edge Pro so I can use it to transfer data to Azure.
Follow these steps to add or delete virtual switches and virtual networks.
1. Provide a name for your virtual switch.
1. Choose the network interface on which the virtual switch should be created.
- 1. If deploying 5G workloads, set **Supports accelerated networking** to **Yes**.
- 1. Select **Apply**. You can see that the specified virtual switch is created.
+ 1. Select **Apply**. You can see that the specified virtual switch is created.
+
+ You can create virtual machines from the Azure portal using any of the virtual networks you have created.
![Screenshot of "Advanced networking" page with virtual switch added and enabled for compute in local UI for one node.](./media/azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy/configure-compute-network-3.png)
After the cluster is formed and configured, you can now create new virtual switc
1. Provide a name for your virtual switch.
1. Choose the network interface on which the virtual switch should be created.
- 1. If deploying 5G workloads, set **Supports accelerated networking** to **Yes**.
- 1. Select **Apply**.
+ 1. Select **Apply**.
+
+ You can create virtual machines from the Azure portal using any of the virtual networks you have created.
1. The configuration will take a couple minutes to apply and once the virtual switch is created, the list of virtual switches updates to reflect the newly created switch. You can see that the specified virtual switch is created and enabled for compute.
databox-online Azure Stack Edge Gpu Deploy Set Up Device Update Time https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-deploy-set-up-device-update-time.md
Previously updated : 05/31/2022 Last updated : 07/19/2023 # Customer intent: As an IT admin, I need to understand how to connect and activate Azure Stack Edge Pro so I can use it to transfer data to Azure.
Follow these steps to configure device related settings:
1. Enter a **Name** for your device. The name must contain from 1 to 13 characters and can have letters, numbers, and hyphens.
-1. Provide a **DNS domain** for your device. This domain is used to set up the device as a file server.
+1. Provide a **DNS domain** for your device using all lowercase characters. This domain is used to set up the device as a file server.
1. To validate and apply the configured device settings, select **Apply**.
databox-online Azure Stack Edge Gpu Disconnected Scenario https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-disconnected-scenario.md
Before you disconnect your Azure Stack Edge device from the network that allows
For Kubernetes deployment guidance, see [Choose the deployment type](azure-stack-edge-gpu-kubernetes-workload-management.md#choose-the-deployment-type). For IoT Edge deployment guidance, see [Run a compute workload with IoT Edge module on Azure Stack Edge](azure-stack-edge-gpu-deploy-compute-module-simple.md).

> [!NOTE]
- > Some workloads running in VMs, Kerberos, and IoT Edge may require connectivity to Azure. For example, some cognitive services require connectivity for billing.
+ > Some workloads running in VMs, Kerberos, and IoT Edge may require connectivity to Azure. For example, some Azure AI services require connectivity for billing.
## Key differences for disconnected use
databox-online Azure Stack Edge Gpu Install Update https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-install-update.md
Previously updated : 03/30/2023 Last updated : 08/21/2023

# Update your Azure Stack Edge Pro GPU
The procedure described in this article was performed using a different version
## About latest updates
-The current update is Update 2303. This update installs two updates, the device update followed by Kubernetes updates.
+The current update is Update 2304. This update installs two updates, the device update followed by Kubernetes updates.
The associated versions for this update are:
-- Device software version: Azure Stack Edge 2303 (2.2.2257.1113)
-- Device Kubernetes version: Azure Stack Kubernetes Edge 2303 (2.2.2257.1113)
+- Device software version: Azure Stack Edge 2304 (2.2.2257.1193)
+- Device Kubernetes version: Azure Stack Kubernetes Edge 2304 (2.2.2257.1193)
- Kubernetes server version: v1.24.6
- IoT Edge version: 0.1.0-beta15
-- Azure Arc version: 1.8.14
+- Azure Arc version: 1.10.6
- GPU driver version: 515.65.01 - CUDA version: 11.7
-For information on what's new in this update, go to [Release notes](azure-stack-edge-gpu-2303-release-notes.md).
+For information on what's new in this update, go to [Release notes](azure-stack-edge-gpu-2304-release-notes.md).
-**To apply 2303 update, your device must be running version 2207 or later.**
+**To apply 2304 update, your device must be running version 2207 or later.**
- If you are not running the minimum required version, you'll see this error: *Update package cannot be installed as its dependencies are not met.*
-- You can update to 2207 from 2106 or later, and then install 2303.
+- You can update to 2207 from 2106 or later, and then install 2304.
### Update Azure Kubernetes service on Azure Stack Edge

> [!IMPORTANT]
> Use the following procedure only if you are an SAP or a PMEC customer.
-If you have Azure Kubernetes service deployed and your Azure Stack Edge device and Kubernetes versions are either 2207 or 2209, you must update in multiple steps to apply 2303.
+If you have Azure Kubernetes service deployed and your Azure Stack Edge device and Kubernetes versions are either 2207 or 2209, you must update in multiple steps to apply 2304.
-Use the following steps to update your Azure Stack Edge version and Kubernetes version to 2303:
+Use the following steps to update your Azure Stack Edge version and Kubernetes version to 2304:
-1. Update your device version to 2303.
+1. Update your device version to 2304.
1. Update your Kubernetes version to 2210.
-1. Update your Kubernetes version to 2303.
+1. Update your Kubernetes version to 2304.
-If you are running 2210, you can update both your device version and Kubernetes version directly to 2303.
+If you are running 2210, you can update both your device version and Kubernetes version directly to 2304.
-In Azure portal, the process will require two clicks, the first update gets your device version to 2303 and your Kubernetes version to 2210, and the second update gets your Kubernetes version upgraded to 2303.
+In the Azure portal, the process requires two updates: the first gets your device version to 2304 and your Kubernetes version to 2210, and the second upgrades your Kubernetes version to 2304.
-From the local UI, you will have to run each update separately: update the device version to 2303, then update Kubernetes version to 2210, and then update Kubernetes version to 2303.
+From the local UI, you will have to run each update separately: update the device version to 2304, then update Kubernetes version to 2210, and then update Kubernetes version to 2304.
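As a rough illustration only, the documented sequence can be encoded as a small lookup; this toy helper isn't an official tool and assumes bare release strings.

````python
# Toy encoding of the documented update sequence for AKS-enabled devices.
def update_steps(current_release: str) -> list[str]:
    if current_release in ("2207", "2209"):
        # Device first, then Kubernetes in two hops.
        return ["device -> 2304", "kubernetes -> 2210", "kubernetes -> 2304"]
    if current_release == "2210":
        return ["device -> 2304", "kubernetes -> 2304"]
    return ["update to 2207 first, then rerun"]

print(update_steps("2209"))  # ['device -> 2304', 'kubernetes -> 2210', 'kubernetes -> 2304']
````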
### Updates for a single-node vs two-node
databox-online Azure Stack Edge Move To Self Service Iot Edge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-move-to-self-service-iot-edge.md
The high-level workflow is as follows:
1. Optional: If you have leaf IoT devices communicating with IoT Edge on Kubernetes, this step documents how to make changes to communicate with the IoT Edge on a VM.
-## Step 1. Create an IoT Edge device on Linux using symmetric keys
+## Step 1: Create an IoT Edge device on Linux using symmetric keys
Create and provision an IoT Edge device on Linux using symmetric keys. For detailed steps, see [Create and provision an IoT Edge device on Linux using symmetric keys](../iot-edge/how-to-provision-single-device-linux-symmetric.md).
-## Step 2. Install and provision an IoT Edge on a Linux VM
+## Step 2: Install and provision an IoT Edge on a Linux VM
Follow the steps at [Deploy IoT Edge on an Ubuntu VM on Azure Stack Edge](azure-stack-edge-gpu-deploy-iot-edge-linux-vm.md). For other supported Linux distributions, see [Linux containers](../iot-edge/support.md).
-## Step 3. Deploy Azure IoT Edge modules from the Azure portal
+## Step 3: Deploy Azure IoT Edge modules from the Azure portal
Deploy Azure IoT modules to the new IoT Edge. For detailed steps, see [Deploy Azure IoT Edge modules from the Azure portal](../iot-edge/how-to-deploy-modules-portal.md). With the latest IoT Edge version, you can deploy your IoT Edge modules at scale. For more information, see [Deploy IoT Edge modules at scale using the Azure portal](../iot-edge/how-to-deploy-at-scale.md).
-## Step 4. Remove Azure IoT Edge modules
+## Step 4: Remove Azure IoT Edge modules
Once your modules are successfully running on the new IoT Edge instance running on a VM, you can delete the whole IoT Edge device associated with that IoT Edge instance. From IoT Hub on the Azure portal, delete the IoT Edge device connected to the IoT Edge, as shown below. ![Screenshot showing delete IoT Edge device from IoT Edge instance in Azure portal UI.](media/azure-stack-edge-move-to-self-service-iot-edge/azure-stack-edge-delete-iot-edge-device.png)
-## Step 5. Optional: Remove the IoT Edge service
+## Step 5: Optional: Remove the IoT Edge service
If you aren't using the Kubernetes cluster on Azure Stack Edge, use the following steps to [remove the IoT Edge service](azure-stack-edge-gpu-manage-compute.md#remove-iot-edge-service). This action will remove modules running on the IoT Edge device, the IoT Edge runtime, and the Kubernetes cluster that hosts the IoT Edge runtime.
From the Azure Stack Edge resource on Azure portal, under the Azure IoT Edge ser
> [!IMPORTANT]
> Once the Kubernetes cluster is removed, there is no way to recover information from the Kubernetes cluster, whether it's IoT Edge-related or not.
-## Step 6. Optional: Configure an IoT Edge device as a transparent gateway
+## Step 6: Optional: Configure an IoT Edge device as a transparent gateway
If your IoT Edge device on Azure Stack Edge was configured as a gateway for downstream IoT devices, you must configure the IoT Edge running on the Linux VM as a transparent gateway. For more information, see [Configure and IoT Edge device as a transparent gateway](../iot-edge/how-to-create-transparent-gateway.md).
databox-online Azure Stack Edge Pro 2 Deploy Configure Network Compute Web Proxy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-pro-2-deploy-configure-network-compute-web-proxy.md
Previously updated : 04/27/2023 Last updated : 07/19/2023 zone_pivot_groups: azure-stack-edge-device-deployment # Customer intent: As an IT admin, I need to understand how to connect and activate Azure Stack Edge Pro so I can use it to transfer data to Azure.
After the cluster is formed and configured, you'll now create new virtual switch
1. Provide a name for your virtual switch.
1. Choose the network interface on which the virtual switch should be created.
- 1. If deploying 5G workloads, set **Supports accelerated networking** to **Yes**.
1. Select the intent to associate with this network interface as **compute**. Alternatively, the switch can be used for management traffic as well. You can't configure storage intent as storage traffic was already configured based on the network topology that you selected earlier.
- > [!TIP]
- > Use *CTRL + Click* to select more than one intent for your virtual switch.
+ You can create virtual machines from the Azure portal using any of the virtual networks you have created.
+
+ > [!TIP]
+ > Use *CTRL + Click* to select more than one intent for your virtual switch.
1. Assign **Kubernetes node IPs**. These static IP addresses are for the Kubernetes VMs.
databox-online Azure Stack Edge Technical Specifications Power Cords Regional https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-technical-specifications-power-cords-regional.md
Use the following table to find the correct cord specifications for your region:
|China|250|10|RVV300/500 3X0.75|GB 2099.1|C13|2000|
|Colombia|125|10|SVE 18/3|NEMA 5-15P|C13|1830|
|Costa Rica|125|10|SVE 18/3|NEMA 5-15P|C13|1830|
-|Côte D'Ivoire (Ivory Coast)|250|10|H05Z1Z1 3x0.75|CEE 7|C13|1830|
+|Côte D'Ivoire|250|10|H05Z1Z1 3x0.75|CEE 7|C13|1830|
|Croatia|250|10|H05Z1Z1 3x0.75|CEE 7|C13|1830|
|Cyprus|250|5|H05VV-F 3x0.75|BS1363 SS145/A|C13|1800|
|Czech Republic|250|10|H05Z1Z1 3x0.75|CEE 7|C13|1830|
Use the following table to find the correct cord specifications for your region:
|Lithuania|250|10|H05Z1Z1 3x0.75|CEE 7|C13|1830|
|Luxembourg|250|10|H05Z1Z1 3x0.75|CEE 7|C13|1830|
|Macao Special Administrative Region|250|5|H05VV-F 3x0.75|BS 1363 / SS145/A|C13|1800|
-|Macedonia|250|10|H05Z1Z1 3x0.75|CEE 7|C13|1830|
|Malaysia|250|5|H05VV-F 3x0.75|BS 1363 / SS145/A|C13|1800|
|Malta|250|5|H05VV-F 3x0.75|BS 1363 / SS145/A|C13|1800|
|Mauritius|250|5|H05VV-F 3x0.75|BS 1363 / SS145/A|C13|1800|
Use the following table to find the correct cord specifications for your region:
|New Zealand|250|10|H05VV-F 3x1.00|AS/NZS 3112|C13|2438|
|Nicaragua|125|10|SVE 18/3|NEMA 5-15P|C13|1830|
|Nigeria|250|5|H05VV-F 3x0.75|BS 1363 / SS145/A|C13|1800|
+|North Macedonia|250|10|H05Z1Z1 3x0.75|CEE 7|C13|1830|
|Norway|250|10|H05Z1Z1 3x0.75|CEE 7|C13|1830|
|Oman|250|5|H05VV-F 3x0.75|BS 1363 / SS145/A|C13|1800|
|Pakistan|250|5|H05VV-F 3x0.75|BS 1363 / SS145/A|C13|1800|
databox-online Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/policy-reference.md
Title: Built-in policy definitions for Azure Stack Edge description: Lists Azure Policy built-in policy definitions for Azure Stack Edge. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2023 Last updated : 08/30/2023
databox Data Box Deploy Copy Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-deploy-copy-data.md
The following table shows the UNC path to the shares on your Data Box and Azure
| Azure Block blobs | <li>UNC path to shares: `\\<DeviceIPAddress>\<storageaccountname_BlockBlob>\<ContainerName>\files\a.txt`</li><li>Azure Storage URL: `https://<storageaccountname>.blob.core.windows.net/<ContainerName>/files/a.txt`</li> |
| Azure Page blobs | <li>UNC path to shares: `\\<DeviceIPAddress>\<storageaccountname_PageBlob>\<ContainerName>\files\a.txt`</li><li>Azure Storage URL: `https://<storageaccountname>.blob.core.windows.net/<ContainerName>/files/a.txt`</li> |
| Azure Files |<li>UNC path to shares: `\\<DeviceIPAddress>\<storageaccountname_AzFile>\<ShareName>\files\a.txt`</li><li>Azure Storage URL: `https://<storageaccountname>.file.core.windows.net/<ShareName>/files/a.txt`</li> |
-| Azure Block blobs (Archive) | <li>UNC path to shares: `\\<DeviceIPAddress>\<storageaccountname_BlockBlobArchive>\<ContainerName>\files\a.txt`</li><li>Azure Storage URL: `https://<storageaccountname>.blob.core.windows.net/<ContainerName>/files/a.txt`</li> |
+| Azure Block blobs (Archive) | <li>UNC path to shares: `\\<DeviceIPAddress>\<storageaccountname_BlockBlob_Archive>\<ContainerName>\files\a.txt`</li><li>Azure Storage URL: `https://<storageaccountname>.blob.core.windows.net/<ContainerName>/files/a.txt`</li> |
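A small sketch of the mapping in the table above: the same block blob addressed through the device share and at its eventual Azure Storage URL. All names are placeholders.

````python
# Placeholder device IP, storage account, container, and file name.
device_ip = "10.126.76.172"
account, container, blob_name = "mystorageacct", "mycontainer", "files/a.txt"

unc_path = "\\\\{}\\{}_BlockBlob\\{}\\{}".format(
    device_ip, account, container, blob_name.replace("/", "\\"))
storage_url = f"https://{account}.blob.core.windows.net/{container}/{blob_name}"

print(unc_path)     # \\10.126.76.172\mystorageacct_BlockBlob\mycontainer\files\a.txt
print(storage_url)  # https://mystorageacct.blob.core.windows.net/mycontainer/files/a.txt
````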
If using a Windows Server host computer, follow these steps to connect to the Data Box.
databox Data Box Deploy Export Ordered https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-deploy-export-ordered.md
Perform the following steps in the Azure portal to order a device.
|Setting |Value | |||
- |Transfer type | Select **Export to Azure**. |
+ |Transfer type | Select **Export from Azure**. |
|Subscription | Select an EA, CSP, or Azure sponsorship subscription for Data Box service. <br> The subscription is linked to your billing account. |
|Resource group | Select an existing resource group. <br> A resource group is a logical container for the resources that can be managed or deployed together. |
|Source Azure region | Select the Azure region where your data currently is. |
databox Data Box System Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-system-requirements.md
Last updated 10/21/2022
+
# Azure Data Box system requirements

This article describes important system requirements for your Microsoft Azure Data Box and for clients that connect to the Data Box. We recommend you review the information carefully before you deploy your Data Box and then refer to it when you need to during deployment and operation.
The software requirements include supported operating systems, file transfer pro
[!INCLUDE [data-box-supported-file-systems-clients](../../includes/data-box-supported-file-systems-clients.md)]
-> [!IMPORTANT]
-> Connection to Data Box shares is not supported via REST for export orders.
-
+> [!IMPORTANT]
+> Connection to Data Box shares is not supported via REST for export orders.
+> Transporting data from on-premises NFS clients into Data Box using NFSv4 is supported. However, to copy data from Data Box to Azure, Data Box supports only REST-based transport. Azure file shares with NFSv4.1 don't support REST for data access/transfer.
### Supported storage accounts

> [!Note]
The following table lists the ports that need to be opened in your firewall to a
## Next steps

* [Deploy your Azure Data Box](data-box-deploy-ordered.md)
+
databox Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/policy-reference.md
Title: Built-in policy definitions for Azure Data Box description: Lists Azure Policy built-in policy definitions for Azure Data Box. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2023 Last updated : 08/30/2023
databox Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Data Box description: Lists Azure Policy Regulatory Compliance controls available for Azure Data Box. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/03/2023 Last updated : 08/25/2023
ddos-protection Ddos Protection Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/ddos-protection-overview.md
Previously updated : 01/17/2023 Last updated : 08/28/2023
Distributed denial of service (DDoS) attacks are some of the largest availabilit
Azure DDoS Protection, combined with application design best practices, provides enhanced DDoS mitigation features to defend against DDoS attacks. It's automatically tuned to help protect your specific Azure resources in a virtual network. Protection is simple to enable on any new or existing virtual network, and it requires no application or resource changes. Azure DDoS Protection protects at layer 3 and layer 4 network layers. For web applications protection at layer 7, you need to add protection at the application layer using a WAF offering. For more information, see [Application DDoS protection](../web-application-firewall/shared/application-ddos-protection.md).
-## Tiers
+## Azure DDoS Protection: Tiers
### DDoS Network Protection
Azure DDoS Network Protection, combined with application design best practices,
DDoS IP Protection is a pay-per-protected IP model. DDoS IP Protection contains the same core engineering features as DDoS Network Protection, but will differ in the following value-added
-For more information about the tiers, see [Tier comparison](ddos-protection-sku-comparison.md).
-## Key benefits
+For more information about the tiers, see [DDoS Protection tier comparison](ddos-protection-sku-comparison.md).
+## Azure DDoS Protection: Key Features
-### Always-on traffic monitoring
+- **Always-on traffic monitoring:**
Your application traffic patterns are monitored 24 hours a day, 7 days a week, looking for indicators of DDoS attacks. Azure DDoS Protection instantly and automatically mitigates the attack, once it's detected.
-### Adaptive real time tuning
+- **Adaptive real time tuning:**
Intelligent traffic profiling learns your application's traffic over time, and selects and updates the profile that is the most suitable for your service. The profile adjusts as traffic changes over time.
-### DDoS Protection telemetry, monitoring, and alerting
+- **DDoS Protection analytics, metrics, and alerting:**
Azure DDoS Protection applies three auto-tuned mitigation policies (TCP SYN, TCP, and UDP) for each public IP of the protected resource, in the virtual network that has DDoS enabled. The policy thresholds are auto-configured via machine learning-based network traffic profiling. DDoS mitigation occurs for an IP address under attack only when the policy threshold is exceeded.
+ - **Attack analytics:**
+Get detailed reports in five-minute increments during an attack, and a complete summary after the attack ends. Stream mitigation flow logs to [Microsoft Sentinel](../sentinel/data-connectors/azure-ddos-protection.md) or an offline security information and event management (SIEM) system for near real-time monitoring during an attack. See [View and configure DDoS diagnostic logging](diagnostic-logging.md) to learn more.
+
+ - **Attack metrics:**
+ Summarized metrics from each attack are accessible through Azure Monitor. See [View and configure DDoS protection telemetry](telemetry.md) to learn more.
+
+ - **Attack alerting:**
+ Alerts can be configured at the start and stop of an attack, and over the attack's duration, using built-in attack metrics. Alerts integrate into your operational software like Microsoft Azure Monitor logs, Splunk, Azure Storage, Email, and the Azure portal. See [View and configure DDoS protection alerts](alerts.md) to learn more.
-### Azure DDoS Rapid Response
+- **Azure DDoS Rapid Response:**
During an active attack, Azure DDoS Protection customers have access to the DDoS Rapid Response (DRR) team, who can help with attack investigation during an attack and post-attack analysis. For more information, see [Azure DDoS Rapid Response](ddos-rapid-response.md).
-### Native platform integration
+- **Native platform integration:**
Natively integrated into Azure. Includes configuration through the Azure portal. Azure DDoS Protection understands your resources and resource configuration.
-### Turnkey protection
+- **Turnkey protection:**
Simplified configuration immediately protects all resources on a virtual network as soon as DDoS Network Protection is enabled. No intervention or user definition is required. Similarly, simplified configuration immediately protects a public IP resource when DDoS IP Protection is enabled for it.
-### Multi-Layered protection
+- **Multi-Layered protection:**
When deployed with a web application firewall (WAF), Azure DDoS Protection protects both at the network layer (Layer 3 and 4, offered by Azure DDoS Protection) and at the application layer (Layer 7, offered by a WAF). WAF offerings include Azure [Application Gateway WAF SKU](../web-application-firewall/ag/ag-overview.md?toc=/azure/virtual-network/toc.json) and third-party web application firewall offerings available in the [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps?page=1&search=web%20application%20firewall).
-### Extensive mitigation scale
+- **Extensive mitigation scale:**
All L3/L4 attack vectors can be mitigated, with global capacity, to protect against the largest known DDoS attacks.
-### Attack analytics
-Get detailed reports in five-minute increments during an attack, and a complete summary after the attack ends. Stream mitigation flow logs to [Microsoft Sentinel](../sentinel/data-connectors/azure-ddos-protection.md) or an offline security information and event management (SIEM) system for near real-time monitoring during an attack. See [View and configure DDoS diagnostic logging](diagnostic-logging.md) to learn more.
-
-### Attack metrics
- Summarized metrics from each attack are accessible through Azure Monitor. See [View and configure DDoS protection telemetry](telemetry.md) to learn more.
-
-### Attack alerting
- Alerts can be configured at the start and stop of an attack, and over the attack's duration, using built-in attack metrics. Alerts integrate into your operational software like Microsoft Azure Monitor logs, Splunk, Azure Storage, Email, and the Azure portal. See [View and configure DDoS protection alerts
-](alerts.md) to learn more.
-
-### Cost guarantee
+- **Cost guarantee:**
Receive data-transfer and application scale-out service credit for resource costs incurred as a result of documented DDoS attacks.
-## Architecture
+## Azure DDoS Protection: Architecture
Azure DDoS Protection is designed for [services that are deployed in a virtual network](../virtual-network/virtual-network-for-azure-services.md). For other services, the default infrastructure-level DDoS protection applies, which defends against common network-layer attacks. To learn more about supported architectures, see [DDoS Protection reference architectures](./ddos-protection-reference-architectures.md).
For DDoS IP Protection, there's no need to create a DDoS protection plan. Custom
To learn about Azure DDoS Protection pricing, see [Azure DDoS Protection pricing](https://azure.microsoft.com/pricing/details/ddos-protection/).
+## Best Practices for DDoS Protection
+Maximize the effectiveness of your DDoS protection strategy by following these best practices:
+
+- Design your applications and infrastructure with redundancy and resilience in mind.
+- Implement a multi-layered security approach, including network, application, and data protection.
+- Prepare an incident response plan to ensure a coordinated response to DDoS attacks.
+
+To learn more about best practices, see [Fundamental best practices](./fundamental-best-practices.md).
+
## DDoS Protection FAQ

For frequently asked questions, see the [DDoS Protection FAQ](ddos-faq.yml).
ddos-protection Ddos Protection Partner Onboarding https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/ddos-protection-partner-onboarding.md
The following steps are required for partners to configure integration with Azur
> [!NOTE]
> Only 1 DDoS Protection Plan needs to be created for a given tenant.
2. Deploy a service with a public endpoint in your (partner) subscriptions, such as load balancers, firewalls, and web application firewalls.
-3. Enable Azure DDoS Protection on the virtual network of the service that has public endpoints using DDoS Protection Plan created in the first step. For step-by-step instructions, see [Enable DDoS Protection plan](manage-ddos-protection.md#enable-ddos-protection-for-an-existing-virtual-network)
+3. Enable Azure DDoS Protection on the virtual network of the service that has public endpoints, using the DDoS Protection Plan created in the first step. For step-by-step instructions, see [Enable DDoS Protection plan](manage-ddos-protection.md#enable-for-an-existing-virtual-network).
> [!IMPORTANT]
> After Azure DDoS Protection is enabled on a virtual network, all public IPs within that virtual network are automatically protected. The origin of these public IPs can be either within Azure (client subscription) or outside of Azure.

4. Optionally, integrate Azure DDoS Protection telemetry and attack analytics in your application-specific customer-facing dashboard. For more information about using telemetry, see [View and configure DDoS protection telemetry](telemetry.md).
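If you script this onboarding, the equivalent Azure CLI calls look roughly like the following sketch. The resource group, plan, and virtual network names (`MyPartnerRg`, `MyDdosProtectionPlan`, `MyVnet`) are placeholders, not values from this article:

```azurecli
# Step 1: create the single DDoS protection plan for the tenant.
az network ddos-protection create \
  --resource-group MyPartnerRg \
  --name MyDdosProtectionPlan

# Step 3: enable the plan on the virtual network that hosts the public endpoints.
az network vnet update \
  --resource-group MyPartnerRg \
  --name MyVnet \
  --ddos-protection true \
  --ddos-protection-plan MyDdosProtectionPlan
```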
ddos-protection Manage Ddos Protection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/manage-ddos-protection.md
Previously updated : 06/22/2023 Last updated : 09/05/2023
Get started with Azure DDoS Network Protection by using the Azure portal.
A DDoS protection plan defines a set of virtual networks that have DDoS Network Protection enabled, across subscriptions. You can configure one DDoS protection plan for your organization and link virtual networks from multiple subscriptions under a single Azure AD tenant to the same plan.
-In this quickstart, you'll create a DDoS protection plan and link it to a virtual network.
+In this quickstart, you create a DDoS protection plan and link it to a virtual network.
:::image type="content" source="./media/manage-ddos-protection/ddos-network-protection-diagram-simple.png" alt-text="Diagram of DDoS Network Protection.":::
In this quickstart, you'll create a DDoS protection plan and link it to a virtua
[!INCLUDE [DDoS-Protection-region-requirement.md](../../includes/DDoS-Protection-region-requirement.md)]

## Enable DDoS protection for a virtual network
-### Enable DDoS protection for a new virtual network
+### Enable for a new virtual network
1. Select **Create a resource** in the upper left corner of the Azure portal.
1. Select **Networking**, and then select **Virtual network**.
-1. Enter or select the following values.
+1. Enter or select the following values, and then select **Next**.
| Setting | Value |
| --- | --- |
In this quickstart, you'll create a DDoS protection plan and link it to a virtua
| Name | Enter **MyVnet**. |
| Region | Enter **East US**. |
-1. Select **Next: IP Addresses** and enter the following values.
+1. In the *Security* pane, select **Enable** on the **Azure DDoS Network Protection** radio button.
+1. Select **MyDdosProtectionPlan** from the **DDoS protection plan** pane. The plan you select can be in the same or a different subscription than the virtual network, but both subscriptions must be associated with the same Azure Active Directory tenant.
+1. Select **Next**. In the IP address pane, select **Add IPv4 address space** and enter the following values. Then select **Add**.
| Setting | Value |
| --- | --- |
In this quickstart, you'll create a DDoS protection plan and link it to a virtua
| Subnet name | Under **Subnet name**, select the **Add subnet** link and enter **mySubnet**. |
| Subnet address range | Enter **10.1.0.0/24**. |
-1. Select **Add**.
-1. Select **Next: Security**.
-1. Select **Enable** on the **DDoS Network Protection** radio.
-1. Select **MyDdosProtectionPlan** from the **DDoS protection plan** pane. The plan you select can be in the same, or different subscription than the virtual network, but both subscriptions must be associated to the same Azure Active Directory tenant.
1. Select **Review + create**, and then select **Create**.
+ :::image type="content" source="./media/manage-ddos-protection/ddos-create-virtual-network.gif" alt-text="Gif of creating a virtual network with Azure DDoS Protection.":::
+ [!INCLUDE [DDoS-Protection-virtual-network-relocate-note.md](../../includes/DDoS-Protection-virtual-network-relocate-note.md)]
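As a scripted alternative to the portal flow above, the same virtual network can be created with DDoS Network Protection enabled in one call. A minimal Azure CLI sketch, reusing the names from this quickstart; the `MyResourceGroup` resource group and `10.1.0.0/16` address space aren't spelled out above, so treat them as placeholders:

```azurecli
# Create MyVnet in East US with DDoS Network Protection enabled at creation time.
az network vnet create \
  --resource-group MyResourceGroup \
  --name MyVnet \
  --location eastus \
  --address-prefixes 10.1.0.0/16 \
  --subnet-name mySubnet \
  --subnet-prefixes 10.1.0.0/24 \
  --ddos-protection true \
  --ddos-protection-plan MyDdosProtectionPlan
```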
-### Enable DDoS protection for an existing virtual network
+### Enable for an existing virtual network
1. Create a DDoS protection plan by completing the steps in [Create a DDoS protection plan](#create-a-ddos-protection-plan), if you don't have an existing DDoS protection plan.
1. Enter the name of the virtual network that you want to enable DDoS Network Protection for in the **Search resources, services, and docs box** at the top of the Azure portal. When the name of the virtual network appears in the search results, select it.
1. Select **DDoS protection**, under **Settings**.
-1. Select **Enable**. Under **DDoS protection plan**, select an existing DDoS protection plan, or the plan you created in step 1, and then click **Save**. The plan you select can be in the same, or different subscription than the virtual network, but both subscriptions must be associated to the same Azure Active Directory tenant.
+1. Select **Enable**. Under **DDoS protection plan**, select an existing DDoS protection plan, or the plan you created in step 1, and then select **Save**. The plan you select can be in the same or a different subscription than the virtual network, but both subscriptions must be associated with the same Azure Active Directory tenant.
+
+ :::image type="content" source="./media/manage-ddos-protection/ddos-update-virtual-network.gif" alt-text="Gif of enabling DDoS Protection for a virtual network.":::
+
+### Add virtual networks to an existing DDoS protection plan
You can also enable the DDoS protection plan for an existing virtual network from the DDoS Protection plan, not from the virtual network.
+
1. Search for "DDoS protection plans" in the **Search resources, services, and docs box** at the top of the Azure portal. When **DDoS protection plans** appears in the search results, select it.
1. Select the desired DDoS protection plan you want to enable for your virtual network.
1. Select **Protected resources** under **Settings**.
-1. Click **+Add** and select the right subscription, resource group and the virtual network name. Click **Add** again.
+1. Select **+Add**, and then select the right subscription, resource group, and virtual network name. Select **Add** again.
+ :::image type="content" source="./media/manage-ddos-protection/ddos-add-to-virtual-network.gif" alt-text="Gif of adding a virtual network with Azure DDoS Protection.":::
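The same association can also be scripted from the plan side. A hedged Azure CLI sketch; replace the subscription ID and names with your own, and note that depending on your CLI version `--vnets` may overwrite the plan's existing virtual network list rather than append to it:

```azurecli
# Attach an existing virtual network to the DDoS protection plan.
az network ddos-protection update \
  --resource-group MyResourceGroup \
  --name MyDdosProtectionPlan \
  --vnets "/subscriptions/<subscription-id>/resourceGroups/MyResourceGroup/providers/Microsoft.Network/virtualNetworks/MyVnet"
```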
## Configure an Azure DDoS Protection Plan using Azure Firewall Manager (preview)

Azure Firewall Manager is a platform to manage and protect your network resources at scale. You can associate your virtual networks with a DDoS protection plan within Azure Firewall Manager. This functionality is currently available in public preview. See [Configure an Azure DDoS Protection Plan using Azure Firewall Manager](../firewall-manager/configure-ddos.md).
Azure Firewall Manager is a platform to manage and protect your network resource
## Enable DDoS protection for all virtual networks
-This [built-in policy](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F94de2ad3-e0c1-4caf-ad78-5d47bbc83d3d) will detect any virtual networks in a defined scope that don't have DDoS Network Protection enabled. This policy will then optionally create a remediation task that will create the association to protect the Virtual Network. See [Azure Policy built-in definitions for Azure DDoS Network Protection](policy-reference.md) for full list of built-in policies.
-
-### Disable for a virtual network:
-
-To disable DDoS protection for a virtual network proceed with the following steps.
-
-1. Enter the name of the virtual network you want to disable DDoS Network Protection for in the **Search resources, services, and docs box** at the top of the portal. When the name of the virtual network appears in the search results, select it.
-1. Under **DDoS Network Protection**, select **Disable**.
+This [built-in policy](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F94de2ad3-e0c1-4caf-ad78-5d47bbc83d3d) detects any virtual networks in a defined scope that don't have DDoS Network Protection enabled. The policy can then optionally create a remediation task that creates the association to protect the virtual network. See [Azure Policy built-in definitions for Azure DDoS Network Protection](policy-reference.md) for the full list of built-in policies.
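To assign that built-in definition from the command line, a sketch along these lines should work. The assignment name and scope are placeholders; the GUID is the definition ID from the link above, and the managed identity and location are only needed if you want the optional remediation task:

```azurecli
az policy assignment create \
  --name ddos-vnet-protection \
  --scope "/subscriptions/<subscription-id>" \
  --policy 94de2ad3-e0c1-4caf-ad78-5d47bbc83d3d \
  --location eastus \
  --mi-system-assigned
```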
## Validate and test
Under **Protected resources**, you can view your protected virtual networks and
:::image type="content" source="./media/manage-ddos-protection/ddos-protected-resources.png" alt-text="Screenshot showing protected resources.":::
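You can also confirm the association from the command line. `az network ddos-protection show` returns the plan's properties, including the virtual networks it protects (names below are placeholders):

```azurecli
az network ddos-protection show \
  --resource-group MyResourceGroup \
  --name MyDdosProtectionPlan \
  --query virtualNetworks
```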
+### Disable for a virtual network
+
+To disable DDoS protection for a virtual network, follow these steps.
+
+1. Enter the name of the virtual network you want to disable DDoS Network Protection for in the **Search resources, services, and docs box** at the top of the portal. When the name of the virtual network appears in the search results, select it.
+1. Under **DDoS Network Protection**, select **Disable**.
+
+ :::image type="content" source="./media/manage-ddos-protection/ddos-disable-in-virtual-network.gif" alt-text="Gif of disabling DDoS Protection within virtual network.":::
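The equivalent Azure CLI call for this step is a single update on the virtual network; a minimal sketch with placeholder names:

```azurecli
# Turn DDoS Network Protection off for the virtual network.
az network vnet update \
  --resource-group MyResourceGroup \
  --name MyVnet \
  --ddos-protection false
```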
+
## Clean up resources

You can keep your resources for the next tutorial. If no longer needed, delete the _MyResourceGroup_ resource group. When you delete the resource group, you also delete the DDoS protection plan and all its related resources. If you don't intend to use this DDoS protection plan, you should remove resources to avoid unnecessary charges.
You can keep your resources for the next tutorial. If no longer needed, delete t
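From the command line, deleting the resource group and everything in it is one call; `--yes` skips the confirmation prompt:

```azurecli
az group delete --name MyResourceGroup --yes
```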
## Next steps
-To learn how to view and configure telemetry for your DDoS protection plan, continue to the tutorials.
+To learn how to configure metric alerts through Azure Monitor, continue to the tutorials.
> [!div class="nextstepaction"]
-> [View and configure DDoS protection telemetry](telemetry.md)
+> [Configure Azure DDoS Protection metric alerts through portal](alerts.md)
ddos-protection Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/policy-reference.md
Previously updated : 08/08/2023 Last updated : 08/30/2023
dedicated-hsm Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dedicated-hsm/troubleshoot.md
Only when fully finished with an HSM can it be deprovisioned and then Microsoft
**DO NOT DELETE the resource group of your Dedicated HSM directly. Doing so doesn't delete the HSM resource; it places the HSM into an orphaned state, and you continue to be billed for it. If you didn't follow the correct procedures and end up in this situation, contact Microsoft Support.**
-**Step 1** Zeorize the HSM. The Azure resource for an HSM cannot be deleted unless the HSM is in a "zeroized" state. Hence, all key material must have been deleted prior to trying to delete it as a resource. The quickest way to zeroize is to get the HSM admin password wrong 3 times (note: this refers to the HSM admin and not appliance level admin). Use command ΓÇÿhsm loginΓÇÖ and enter wrong password three times. The Luna shell does have a hsm -factoryreset command that zeroizes the HSM but it can only be executed via console on the serial port and customers do not have access to this.
+**Step 1:** Zeroize the HSM. The Azure resource for an HSM cannot be deleted unless the HSM is in a "zeroized" state. Hence, all key material must have been deleted prior to trying to delete it as a resource. The quickest way to zeroize is to get the HSM admin password wrong three times (note: this refers to the HSM admin and not the appliance-level admin). Use the command 'hsm login' and enter the wrong password three times. The Luna shell does have an hsm -factoryreset command that zeroizes the HSM, but it can only be executed via console on the serial port, and customers do not have access to this.
-**Step 2** Once HSM is zeroized, you can use either of the following commands to initiate the Delete Dedicated HSM resource
+**Step 2:** Once the HSM is zeroized, you can use either of the following commands to delete the Dedicated HSM resource:
> **Azure CLI**: az dedicated-hsm delete --resource-group \<RG name\> --name \<HSM name\> <br />
> **Azure PowerShell**: Remove-AzDedicatedHsm -Name \<HSM name\> -ResourceGroupName \<RG name\>
-**Step 3** Once step 2 is successful, you can delete the resource group to delete the other resources associated with the dedicated HSM by using either Azure CLI or Azure PowerShell.
+**Step 3:** Once **Step 2** is successful, you can delete the resource group to delete the other resources associated with the dedicated HSM by using either Azure CLI or Azure PowerShell.
> **Azure CLI**: az group delete --name \<RG name\> <br />
> **Azure PowerShell**: Remove-AzResourceGroup -Name \<RG name\>
defender-for-cloud Adaptive Application Controls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/adaptive-application-controls.md
To edit the rules for a group of machines:
![Add a custom rule.](./media/adaptive-application/adaptive-application-add-custom-rule.png)

1. If you're defining a known safe path, change the **Rule type** to 'Path' and enter a single path. You can include wildcards in the path. The following screens show some examples of how to use wildcards.
-
+ :::image type="content" source="media/adaptive-application/wildcard-examples.png" alt-text="Screenshot that shows examples of using wildcards." lightbox="media/adaptive-application/wildcard-examples.png":::
-
+
> [!TIP]
> Some scenarios for which wildcards in a path might be useful:
>
On this page, you learned how to use adaptive application control in Microsoft D
- [Understanding just-in-time (JIT) VM access](just-in-time-access-overview.md)
- [Securing your Azure Kubernetes clusters](defender-for-kubernetes-introduction.md)
-- View common question about [Adaptive application controls](faq-defender-for-servers.yml)
+- View common questions about [Adaptive application controls](faq-defender-for-servers.yml)
defender-for-cloud Advanced Configurations For Malware Scanning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/advanced-configurations-for-malware-scanning.md
Title: Microsoft Defender for Storage - advanced configurations for malware scanning description: Learn about the advanced configurations of Microsoft Defender for Storage malware scanning Previously updated : 08/08/2023 Last updated : 08/21/2023
-# Advanced configurations for malware scanning
+# Advanced configurations for malware scanning
Malware Scanning can be configured to send scanning results to the following:
This configuration can be performed using REST API as well:
Request URL:
-```
+```rest
PUT https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroup}/providers/Microsoft.Storage/storageAccounts/{accountName}/providers/Microsoft.Security/DefenderForStorageSettings/current/providers/Microsoft.Insights/diagnosticSettings/service?api-version=2021-05-01-preview
```
+
Request Body:
-```
+```rest
{ "properties": { "workspaceId": "/subscriptions/{subscriptionId}/resourcegroups/{resourceGroup}/providers/microsoft.operationalinsights/workspaces/{workspaceName}",
For each storage account enabled with malware scanning, you can configure to sen
1. To configure the Event Grid custom topic destination, go to the relevant storage account, open the **Microsoft Defender for Cloud** tab, and select the settings to configure.

> [!NOTE]
-> When you set an Event Grid custom topic, you should set **Override Defender for Storage subscription-level settingsΓÇ¥ to **On** to make sure it overrides the subscription-level settings.
+> When you set an Event Grid custom topic, you should set **Override Defender for Storage subscription-level settings** to **On** to make sure it overrides the subscription-level settings.
:::image type="content" source="media/azure-defender-storage-configure/event-grid-settings.png" alt-text="Screenshot that shows where to enable an Event Grid destination for scan logs." lightbox="media/azure-defender-storage-configure/event-grid-settings.png":::
This configuration can be performed using REST API as well:
Request URL:
-```
+```rest
PUT https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroup}/providers/Microsoft.Storage/storageAccounts/{accountName}/providers/Microsoft.Security/DefenderForStorageSettings/current?api-version=2022-12-01-preview
```

Request Body:
-```
+```rest
{ "properties": { "isEnabled": true,
Request Body:
  }
}
```
+
## Override Defender for Storage subscription-level settings

The subscription-level settings inherit Defender for Storage settings on each storage account in the subscription. Use Override Defender for Storage subscription-level settings to configure settings for individual storage accounts different from those configured on the subscription level.
Overriding the settings of the subscriptions is usually used for the following
- Configure custom settings for Malware Scanning.
- Disable Microsoft Defender for Storage on specific storage accounts.
-> [!NOTE]
+> [!NOTE]
> We recommend that you enable Defender for Storage on the entire subscription to protect all existing and future storage accounts in it. However, there are some cases where you would want to exclude specific storage accounts from Defender protection. If you've decided to exclude, follow the steps below to use the override setting and then disable the relevant storage account. If you are using Defender for Storage (classic), you can also [exclude storage accounts](defender-for-storage-classic-enable.md).

### Azure portal
To configure the settings of individual storage accounts different from those co
1. To adjust the monthly threshold for malware scanning in your storage accounts, you can modify the parameter called **Set limit of GB scanned per month** to your desired value. This parameter determines the maximum amount of data that can be scanned for malware each month, specifically for each storage account. If you wish to allow unlimited scanning, you can uncheck this parameter. By default, the limit is set at 5,000 GB.
-
1. To disable Defender for Storage on this storage account, set the status of Microsoft Defender for Storage to **Off**.

   :::image type="content" source="media/azure-defender-storage-configure/defender-for-storage-settings.png" alt-text="Screenshot that shows where to turn off Defender for Storage in the Azure portal." lightbox="media/azure-defender-storage-configure/defender-for-storage-settings.png":::
Create a PUT request with this endpoint. Replace the subscriptionId, resourceGro
Request URL:
-```
+```rest
PUT https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Storage/storageAccounts/{accountName}/providers/Microsoft.Security/DefenderForStorageSettings/current?api-version=2022-12-01-preview
```

Request Body:
-```
+```rest
{ "properties": { "isEnabled": true,
Request Body:
1. To disable Defender for Storage on this storage account, use the following request body:
- ```
+ ```rest
{ "properties": { "isEnabled": false,
Make sure you add the parameter `overrideSubscriptionLevelSettings` and its valu
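If you'd rather not hand-craft authentication for these PUT calls, `az rest` signs the request with your current Azure CLI credentials. A sketch, assuming the request body shown above is saved to a local file named `defender-settings.json` (a placeholder name):

```azurecli
az rest --method put \
  --url "https://management.azure.com/subscriptions/<subscriptionId>/resourceGroups/<resourceGroup>/providers/Microsoft.Storage/storageAccounts/<accountName>/providers/Microsoft.Security/DefenderForStorageSettings/current?api-version=2022-12-01-preview" \
  --body @defender-settings.json
```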
## Next steps
-Learn more about [malware scanning settings](defender-for-storage-malware-scan.md).
+Learn more about [malware scanning settings](defender-for-storage-malware-scan.md).
defender-for-cloud Agentless Container Registry Vulnerability Assessment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/agentless-container-registry-vulnerability-assessment.md
In every subscription where this capability is enabled, all images stored in ACR
Container vulnerability assessment powered by MDVM (Microsoft Defender Vulnerability Management) has the following capabilities:

-- **Scanning OS packages** - container vulnerability assessment has the ability to scan vulnerabilities in packages installed by the OS package manager in Linux. See the [full list of the supported OS and their versions](support-matrix-defender-for-containers.md#registries-and-imagespowered-by-mdvm).
-- **Language specific packages** – support for language specific packages and files, and their dependencies installed or copied without the OS package manager. See the [complete list of supported languages](support-matrix-defender-for-containers.md#registries-and-imagespowered-by-mdvm).
+- **Scanning OS packages** - container vulnerability assessment can scan vulnerabilities in packages installed by the OS package manager in Linux. See the [full list of the supported OS and their versions](support-matrix-defender-for-containers.md#registries-and-images-for-azurepowered-by-mdvm).
+- **Language specific packages** – support for language specific packages and files, and their dependencies installed or copied without the OS package manager. See the [complete list of supported languages](support-matrix-defender-for-containers.md#registries-and-images-for-azurepowered-by-mdvm).
- **Image scanning in Azure Private Link** - Azure container vulnerability assessment provides the ability to scan images in container registries that are accessible via Azure Private Links. This capability requires access to trusted services and authentication with the registry. Learn how to [allow access by trusted services](/azure/container-registry/allow-access-trusted-services).
- **Exploitability information** - Each vulnerability report is searched through exploitability databases to assist our customers with determining actual risk associated with each reported vulnerability.
- **Reporting** - Container Vulnerability Assessment for Azure powered by Microsoft Defender Vulnerability Management (MDVM) provides vulnerability reports using the following recommendations:
Container vulnerability assessment powered by MDVM (Microsoft Defender Vulnerabi
| [Running container images should have vulnerability findings resolved (powered by Microsoft Defender Vulnerability Management)](https://ms.portal.azure.com/#view/Microsoft_Azure_Security_CloudNativeCompute/ContainersRuntimeRecommendationDetailsBlade/assessmentKey/c609cf0f-71ab-41e9-a3c6-9a1f7fe1b8d5)  | Container image vulnerability assessment scans your registry for commonly known vulnerabilities (CVEs) and provides a detailed vulnerability report for each image. This recommendation provides visibility to vulnerable images currently running in your Kubernetes clusters. Remediating vulnerabilities in container images that are currently running is key to improving your security posture, significantly reducing the attack surface for your containerized workloads. | c609cf0f-71ab-41e9-a3c6-9a1f7fe1b8d5 |

- **Query vulnerability information via the Azure Resource Graph** - Ability to query vulnerability information via the [Azure Resource Graph](/azure/governance/resource-graph/overview#how-resource-graph-complements-azure-resource-manager). Learn how to [query recommendations via ARG](review-security-recommendations.md#review-recommendation-data-in-azure-resource-graph-arg); a query sketch follows this list.
-- **Query vulnerability information via sub-assessment API** - You can get scan results via REST API. See the [subassessment list](/rest/api/defenderforcloud/sub-assessments/get?tabs=HTTP).
+- **Query vulnerability information via subassessment API** - You can get scan results via [REST API](subassessment-rest-api.md).
- **Support for exemptions** - Learn how to [create exemption rules for a management group, resource group, or subscription](disable-vulnerability-findings-containers.md).
- **Support for disabling vulnerabilities** - Learn how to [disable vulnerabilities on images](disable-vulnerability-findings-containers.md).
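As referenced in the Azure Resource Graph item above, here's a hedged query sketch. It filters subassessments by the `c609cf0f-71ab-41e9-a3c6-9a1f7fe1b8d5` assessment key shown in the table; `az graph query` requires the resource-graph CLI extension:

```azurecli
az graph query -q "
securityresources
| where type == 'microsoft.security/assessments/subassessments'
| extend assessmentKey = extract('.*assessments/(.+?)/.*', 1, id)
| where assessmentKey == 'c609cf0f-71ab-41e9-a3c6-9a1f7fe1b8d5'
| project id, name, status = properties.status, displayName = properties.displayName
"
```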
Container vulnerability assessment powered by MDVM (Microsoft Defender Vulnerabi
The triggers for an image scan are:

-- **One-time triggering** – each image pushed or imported to a container registry is scanned shortly after being pushed or imported to a registry. In most cases, the scan is completed within a few minutes, but sometimes it may take up to an hour.
+- **One-time triggering**:
+ - Each image pushed or imported to a container registry is scanned shortly after the push or import. In most cases, the scan is completed within a few minutes, but sometimes it may take up to an hour.
+ - [Preview] Each image pulled from a registry is scanned within 24 hours of the pull.
> [!NOTE]
- > While Container vulnerability assessment powered by MDVM is generally available for Defender CSPM, scan-on-push is currently in public preview.
+ > While Container vulnerability assessment powered by MDVM is generally available for Defender CSPM, scan-on-push and scan-on-pull are currently in public preview.
- **Continuous rescan triggering** – Continuous rescan is required to ensure images that have been previously scanned for vulnerabilities are rescanned to update their vulnerability reports in case a new vulnerability is published.
- **Re-scan** is performed once a day for:
- - images pushed in the last 90 days.
+ - images pushed in the last 90 days.
+ - [Preview] images pulled in the last 30 days.
- images currently running on the Kubernetes clusters monitored by Defender for Cloud (either via [agentless discovery and visibility for Kubernetes](how-to-enable-agentless-containers.md) or the [Defender agent](tutorial-enable-containers-azure.md#deploy-the-defender-agent-in-azure)).
+ > [!NOTE]
+ > While Container vulnerability assessment powered by MDVM is generally available for Defender CSPM, scanning images pulled in the last 30 days is currently in public preview.
+
## How does image scanning work?
A detailed description of the scan process follows:
- For customers using either [agentless discovery and visibility within Kubernetes components](concept-agentless-containers.md) or [inventory collected via the Defender agent running on AKS nodes](defender-for-containers-enable.md#deploy-the-defender-agent), Defender for Cloud also creates a [recommendation](https://ms.portal.azure.com/#view/Microsoft_Azure_Security_CloudNativeCompute/ContainersRuntimeRecommendationDetailsBlade/assessmentKey/c609cf0f-71ab-41e9-a3c6-9a1f7fe1b8d5) for remediating vulnerabilities for vulnerable images running on an AKS cluster.

> [!NOTE]
-> For Defender for Container Registries (deprecated), images are scanned once on push, and rescanned only once a week.
+> For Defender for Container Registries (deprecated), images are scanned once on push, on pull, and rescanned only once a week.
## If I remove an image from my registry, how long before vulnerabilities reports on that image would be removed?
defender-for-cloud Alerts Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/alerts-overview.md
This article describes security alerts and notifications in Microsoft Defender for Cloud.

## What are security alerts?
+
Security alerts are the notifications generated by Defender for Cloud's workload protection plans when threats are identified in your Azure, hybrid, or multicloud environments.

- Security alerts are triggered by advanced detections available when you enable [Defender plans](defender-for-cloud-introduction.md#protect-cloud-workloads) for specific resource types.
- Each alert provides details of affected resources, issues, and remediation steps.
- Defender for Cloud classifies alerts and prioritizes them by severity.
-- Alerts are displayed in the portal for 90 days, even if the resource related to the alert was deleted during that time. This is because the alert might indicate a potential breach to your organization that needs to be further investigated.
+- Alerts are displayed in the portal for 90 days, even if the resource related to the alert was deleted during that time. This is because the alert might indicate a potential breach to your organization that needs to be further investigated.
- Alerts can be exported to CSV format.
- Alerts can also be streamed directly to a Security Information and Event Management (SIEM) such as Microsoft Sentinel, Security Orchestration Automated Response (SOAR), or IT Service Management (ITSM) solution.
- Defender for Cloud leverages the [MITRE Attack Matrix](https://attack.mitre.org/matrices/enterprise/) to associate alerts with their perceived intent, helping formalize security domain knowledge.
Alerts have a severity level assigned to help prioritize how to attend to each a
- The specific trigger
- The confidence level that there was malicious intent behind the activity that led to the alert
-
| Severity | Recommended response |
|-|-|
| **High** | There is a high probability that your resource is compromised. You should look into it right away. Defender for Cloud has high confidence in both the malicious intent and in the findings used to issue the alert. For example, an alert that detects the execution of a known malicious tool such as Mimikatz, a common tool used for credential theft. |
As the breadth of threat coverage grows, so does the need to detect even the slig
In the cloud, attacks can occur across different tenants, so Defender for Cloud can combine AI algorithms to analyze attack sequences that are reported on each Azure subscription. This technique identifies the attack sequences as prevalent alert patterns, instead of patterns that are just incidentally associated with each other.
-During an investigation of an incident, analysts often need extra context to reach a verdict about the nature of the threat and how to mitigate it. For example, even when a network anomaly is detected, without understanding what else is happening on the network or with regard to the targeted resource, it's difficult to understand what actions to take next. To help, a security incident can include artifacts, related events, and information. The additional information available for security incidents varies, depending on the type of threat detected and the configuration of your environment.
-
+During an investigation of an incident, analysts often need extra context to reach a verdict about the nature of the threat and how to mitigate it. For example, even when a network anomaly is detected, without understanding what else is happening on the network or with regard to the targeted resource, it's difficult to understand what actions to take next. To help, a security incident can include artifacts, related events, and information. The additional information available for security incidents varies, depending on the type of threat detected and the configuration of your environment.
### Correlating alerts into incidents
Defender for Cloud correlates alerts and contextual signals into incidents.
<a name="detect-threats"> </a>
-## How does Defender for Cloud detect threats?
+## How does Defender for Cloud detect threats?
To detect real threats and reduce false positives, Defender for Cloud monitors resources, collects, and analyzes data for threats, often correlating data from multiple sources.
You have a range of options for viewing your alerts outside of Defender for Clou
- **Download CSV report** on the alerts dashboard provides a one-time export to CSV.
- **Continuous export** from Environment settings allows you to configure streams of security alerts and recommendations to Log Analytics workspaces and Event Hubs. [Learn more](continuous-export.md).
-- **Microsoft Sentinel connector** streams security alerts from Microsoft Defender for Cloud into Microsoft Sentinel. [Learn more ](../sentinel/connect-azure-security-center.md).
+- **Microsoft Sentinel connector** streams security alerts from Microsoft Defender for Cloud into Microsoft Sentinel. [Learn more](../sentinel/connect-azure-security-center.md).
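Alerts can also be retrieved programmatically; for a quick look from the Azure CLI, a minimal sketch:

```azurecli
# List Defender for Cloud security alerts for the current subscription.
az security alert list --output table
```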
Learn about [streaming alerts to a SIEM, SOAR, or IT Service Management solution](export-to-siem.md) and how to [continuously export data](continuous-export.md).
-
## Next steps

In this article, you learned about the different types of alerts available in Defender for Cloud. For more information, see:
defender-for-cloud Alerts Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/alerts-reference.md
Microsoft Defender for Servers Plan 2 provides unique detections and alerts, in
| **Suspicious failure installing GPU extension in your subscription (Preview)**<br>(VM_GPUExtensionSuspiciousFailure) | Suspicious intent of installing a GPU extension on unsupported VMs. This extension should be installed on virtual machines equipped with a graphic processor, and in this case the virtual machines are not equipped with such. These failures can be seen when malicious adversaries execute multiple installations of such extension for crypto-mining purposes. | Impact | Medium | | **Suspicious installation of a GPU extension was detected on your virtual machine (Preview)**<br>(VM_GPUDriverExtensionUnusualExecution) | Suspicious installation of a GPU extension was detected on your virtual machine by analyzing the Azure Resource Manager operations in your subscription. Attackers may use the GPU driver extension to install GPU drivers on your virtual machine via the Azure Resource Manager to perform cryptojacking. This activity is deemed suspicious as the principal's behavior departs from its usual patterns. | Impact | Low | | **Run Command with a suspicious script was detected on your virtual machine (Preview)**<br>(VM_RunCommandSuspiciousScript) | A Run Command with a suspicious script was detected on your virtual machine by analyzing the Azure Resource Manager operations in your subscription. Attackers may use Run Command to execute malicious code with high privileges on your virtual machine via the Azure Resource Manager. The script is deemed suspicious as certain parts were identified as being potentially malicious. | Execution | High |
-| **Suspicious unauthorized Run Command usage was detected on your virtual machine (Preview)**<br>(VM_RunCommandSuspiciousFailure) | Suspicious unauthorized usage of Run Command has failed and was detected on your virtual machine by analyzing the Azure Resource Manager operations in your subscription. Attackers may attempt to use Run Command to execute malicious code with high privileges on your virtual machines via the Azure Resource Manager. This activity is deemed suspicious as it hasn't been commonly seen before. | Execution | Medium |
+| **Suspicious unauthorized Run Command usage was detected on your virtual machine (Preview)**<br>(VM_RunCommandSuspiciousFailure) | Suspicious unauthorized usage of Run Command has failed and was detected on your virtual machine by analyzing the Azure Resource Manager operations in your subscription. Attackers may attempt to use Run Command to execute malicious code with high privileges on your virtual machines via the Azure Resource Manager. This activity is deemed suspicious as it hasn't been commonly seen before. | Execution | Medium |
| **Suspicious Run Command usage was detected on your virtual machine (Preview)**<br>(VM_RunCommandSuspiciousUsage) | Suspicious usage of Run Command was detected on your virtual machine by analyzing the Azure Resource Manager operations in your subscription. Attackers may use Run Command to execute malicious code with high privileges on your virtual machines via the Azure Resource Manager. This activity is deemed suspicious as it hasn't been commonly seen before. | Execution | Low | | **Suspicious usage of multiple monitoring or data collection extensions was detected on your virtual machines (Preview)**<br>(VM_SuspiciousMultiExtensionUsage) | Suspicious usage of multiple monitoring or data collection extensions was detected on your virtual machines by analyzing the Azure Resource Manager operations in your subscription. Attackers may abuse such extensions for data collection, network traffic monitoring, and more, in your subscription. This usage is deemed suspicious as it hasn't been commonly seen before. | Reconnaissance | Medium | | **Suspicious installation of disk encryption extensions was detected on your virtual machines (Preview)**<br>(VM_DiskEncryptionSuspiciousUsage) | Suspicious installation of disk encryption extensions was detected on your virtual machines by analyzing the Azure Resource Manager operations in your subscription. Attackers may abuse the disk encryption extension to deploy full disk encryptions on your virtual machines via the Azure Resource Manager in an attempt to perform ransomware activity. This activity is deemed suspicious as it hasn't been commonly seen before and due to the high number of extension installations. | Impact | Medium | | **Suspicious usage of VMAccess extension was detected on your virtual machines (Preview)**<br>(VM_VMAccessSuspiciousUsage) | Suspicious usage of VMAccess extension was detected on your virtual machines. Attackers may abuse the VMAccess extension to gain access and compromise your virtual machines with high privileges by resetting access or managing administrative users. This activity is deemed suspicious as the principal's behavior departs from its usual patterns, and due to the high number of the extension installations. | Persistence | Medium |
-| **Desired State Configuration (DSC) extension with a suspicious script was detected on your virtual machine (Preview)**<br>(VM_DSCExtensionSuspiciousScript) | Desired State Configuration (DSC) extension with a suspicious script was detected on your virtual machine by analyzing the Azure Resource Manager operations in your subscription. Attackers may use the Desired State Configuration (DSC) extension to deploy malicious configurations, such as persistence mechanisms, malicious scripts, and more, with high privileges, on your virtual machines. The script is deemed suspicious as certain parts were identified as being potentially malicious. | Execution | High |
+| **Desired State Configuration (DSC) extension with a suspicious script was detected on your virtual machine (Preview)**<br>(VM_DSCExtensionSuspiciousScript) | Desired State Configuration (DSC) extension with a suspicious script was detected on your virtual machine by analyzing the Azure Resource Manager operations in your subscription. Attackers may use the Desired State Configuration (DSC) extension to deploy malicious configurations, such as persistence mechanisms, malicious scripts, and more, with high privileges, on your virtual machines. The script is deemed suspicious as certain parts were identified as being potentially malicious. | Execution | High |
| **Suspicious usage of a Desired State Configuration (DSC) extension was detected on your virtual machines (Preview)**<br>(VM_DSCExtensionSuspiciousUsage) | Suspicious usage of a Desired State Configuration (DSC) extension was detected on your virtual machines by analyzing the Azure Resource Manager operations in your subscription. Attackers may use the Desired State Configuration (DSC) extension to deploy malicious configurations, such as persistence mechanisms, malicious scripts, and more, with high privileges, on your virtual machines. This activity is deemed suspicious as the principal's behavior departs from its usual patterns, and due to the high number of the extension installations. | Impact | Low | | **Custom script extension with a suspicious script was detected on your virtual machine (Preview)**<br>(VM_CustomScriptExtensionSuspiciousCmd) | Custom script extension with a suspicious script was detected on your virtual machine by analyzing the Azure Resource Manager operations in your subscription. Attackers may use Custom script extension to execute malicious code with high privileges on your virtual machine via the Azure Resource Manager. The script is deemed suspicious as certain parts were identified as being potentially malicious. | Execution | High | | **Suspicious failed execution of custom script extension in your virtual machine**<br>(VM_CustomScriptExtensionSuspiciousFailure) | Suspicious failure of a custom script extension was detected in your virtual machine by analyzing the Azure Resource Manager operations in your subscription. Such failures may be associated with malicious scripts run by this extension. | Execution | Medium | | **Unusual deletion of custom script extension in your virtual machine**<br>(VM_CustomScriptExtensionUnusualDeletion) | Unusual deletion of a custom script extension was detected in your virtual machine by analyzing the Azure Resource Manager operations in your subscription. Attackers may use custom script extensions to execute malicious code on your virtual machines via the Azure Resource Manager. | Execution | Medium | | **Unusual execution of custom script extension in your virtual machine**<br>(VM_CustomScriptExtensionUnusualExecution) | Unusual execution of a custom script extension was detected in your virtual machine by analyzing the Azure Resource Manager operations in your subscription. Attackers may use custom script extensions to execute malicious code on your virtual machines via the Azure Resource Manager. | Execution | Medium | | **Custom script extension with suspicious entry-point in your virtual machine**<br>(VM_CustomScriptExtensionSuspiciousEntryPoint) | Custom script extension with a suspicious entry-point was detected in your virtual machine by analyzing the Azure Resource Manager operations in your subscription. The entry-point refers to a suspicious GitHub repository. Attackers may use custom script extensions to execute malicious code on your virtual machines via the Azure Resource Manager. | Execution | Medium |
-| **Custom script extension with suspicious payload in your virtual machine**<br>(VM_CustomScriptExtensionSuspiciousPayload) | Custom script extension with a payload from a suspicious GitHub repository was detected in your virtual machine by analyzing the Azure Resource Manager operations in your subscription. Attackers may use custom script extensions to execute malicious code on your virtual machines via the Azure Resource Manager. | Execution | Medium |
-
+| **Custom script extension with suspicious payload in your virtual machine**<br>(VM_CustomScriptExtensionSuspiciousPayload) | Custom script extension with a payload from a suspicious GitHub repository was detected in your virtual machine by analyzing the Azure Resource Manager operations in your subscription. Attackers may use custom script extensions to execute malicious code on your virtual machines via the Azure Resource Manager. | Execution | Medium |
+
## <a name="alerts-azureappserv"></a>Alerts for Azure App Service

[Further details and notes](defender-for-app-service-introduction.md)
Microsoft Defender for Containers provides security alerts on the cluster level
[Further details and notes](defender-for-resource-manager-introduction.md)
-| Alert (alert type) | Description | MITRE tactics<br>([Learn more](#intentions)) | Severity |
-|-||:--:|-|
-| **Azure Resource Manager operation from suspicious IP address**<br>(ARM_OperationFromSuspiciousIP) | Microsoft Defender for Resource Manager detected an operation from an IP address that has been marked as suspicious in threat intelligence feeds. | Execution | Medium |
-| **Azure Resource Manager operation from suspicious proxy IP address**<br>(ARM_OperationFromSuspiciousProxyIP) | Microsoft Defender for Resource Manager detected a resource management operation from an IP address that is associated with proxy services, such as TOR. While this behavior can be legitimate, it's often seen in malicious activities, when threat actors try to hide their source IP. | Defense Evasion | Medium |
-| **MicroBurst exploitation toolkit used to enumerate resources in your subscriptions**<br>(ARM_MicroBurst.AzDomainInfo) | A PowerShell script was run in your subscription and performed suspicious pattern of executing an information gathering operations to discover resources, permissions, and network structures. Threat actors use automated scripts, like MicroBurst, to gather information for malicious activities. This was detected by analyzing Azure Resource Manager operations in your subscription. This operation might indicate that an identity in your organization was breached, and that the threat actor is trying to compromise your environment for malicious intentions. | - | Low |
-| **MicroBurst exploitation toolkit used to enumerate resources in your subscriptions**<br>(ARM_MicroBurst.AzureDomainInfo) | A PowerShell script was run in your subscription and performed suspicious pattern of executing an information gathering operations to discover resources, permissions, and network structures. Threat actors use automated scripts, like MicroBurst, to gather information for malicious activities. This was detected by analyzing Azure Resource Manager operations in your subscription. This operation might indicate that an identity in your organization was breached, and that the threat actor is trying to compromise your environment for malicious intentions. | - | Low |
-| **MicroBurst exploitation toolkit used to execute code on your virtual machine**<br>(ARM_MicroBurst.AzVMBulkCMD) | A PowerShell script was run in your subscription and performed a suspicious pattern of executing code on a VM or a list of VMs. Threat actors use automated scripts, like MicroBurst, to run a script on a VM for malicious activities. This was detected by analyzing Azure Resource Manager operations in your subscription. This operation might indicate that an identity in your organization was breached, and that the threat actor is trying to compromise your environment for malicious intentions. | Execution | High |
-| **MicroBurst exploitation toolkit used to execute code on your virtual machine**<br>(RM_MicroBurst.AzureRmVMBulkCMD) | MicroBurst's exploitation toolkit was used to execute code on your virtual machines. This was detected by analyzing Azure Resource Manager operations in your subscription. | - | High |
-| **MicroBurst exploitation toolkit used to extract keys from your Azure key vaults**<br>(ARM_MicroBurst.AzKeyVaultKeysREST) | A PowerShell script was run in your subscription and performed a suspicious pattern of extracting keys from an Azure Key Vault(s). Threat actors use automated scripts, like MicroBurst, to list keys and use them to access sensitive data or perform lateral movement. This was detected by analyzing Azure Resource Manager operations in your subscription. This operation might indicate that an identity in your organization was breached, and that the threat actor is trying to compromise your environment for malicious intentions. | - | High |
-| **MicroBurst exploitation toolkit used to extract keys to your storage accounts**<br>(ARM_MicroBurst.AZStorageKeysREST) | A PowerShell script was run in your subscription and performed a suspicious pattern of extracting keys to Storage Account(s). Threat actors use automated scripts, like MicroBurst, to list keys and use them to access sensitive data in your Storage Account(s). This was detected by analyzing Azure Resource Manager operations in your subscription. This operation might indicate that an identity in your organization was breached, and that the threat actor is trying to compromise your environment for malicious intentions. | Collection | High |
-| **MicroBurst exploitation toolkit used to extract secrets from your Azure key vaults**<br>(ARM_MicroBurst.AzKeyVaultSecretsREST) | A PowerShell script was run in your subscription and performed a suspicious pattern of extracting secrets from an Azure Key Vault(s). Threat actors use automated scripts, like MicroBurst, to list secrets and use them to access sensitive data or perform lateral movement. This was detected by analyzing Azure Resource Manager operations in your subscription. This operation might indicate that an identity in your organization was breached, and that the threat actor is trying to compromise your environment for malicious intentions. | - | High |
-| **PowerZure exploitation toolkit used to elevate access from Azure AD to Azure**<br>(ARM_PowerZure.AzureElevatedPrivileges) | PowerZure exploitation toolkit was used to elevate access from AzureAD to Azure. This was detected by analyzing Azure Resource Manager operations in your tenant. | - | High |
-| **PowerZure exploitation toolkit used to enumerate resources**<br>(ARM_PowerZure.GetAzureTargets) | PowerZure exploitation toolkit was used to enumerate resources on behalf of a legitimate user account in your organization. This was detected by analyzing Azure Resource Manager operations in your subscription. | Collection | High |
-| **PowerZure exploitation toolkit used to enumerate storage containers, shares, and tables**<br>(ARM_PowerZure.ShowStorageContent) | PowerZure exploitation toolkit was used to enumerate storage shares, tables, and containers. This was detected by analyzing Azure Resource Manager operations in your subscription. | - | High |
-| **PowerZure exploitation toolkit used to execute a Runbook in your subscription**<br>(ARM_PowerZure.StartRunbook) | PowerZure exploitation toolkit was used to execute a Runbook. This was detected by analyzing Azure Resource Manager operations in your subscription. | - | High |
-| **PowerZure exploitation toolkit used to extract Runbooks content**<br>(ARM_PowerZure.AzureRunbookContent) | PowerZure exploitation toolkit was used to extract Runbook content. This was detected by analyzing Azure Resource Manager operations in your subscription. | Collection | High |
-| **PREVIEW - Azurite toolkit run detected**<br>(ARM_Azurite) | A known cloud-environment reconnaissance toolkit run has been detected in your environment. The tool [Azurite](https://github.com/mwrlabs/Azurite) can be used by an attacker (or penetration tester) to map your subscriptions' resources and identify insecure configurations. | Collection | High |
-| **PREVIEW - Suspicious creation of compute resources detected**<br>(ARM_SuspiciousComputeCreation) | Microsoft Defender for Resource Manager identified a suspicious creation of compute resources in your subscription utilizing Virtual Machines/Azure Scale Set. The identified operations are designed to allow administrators to efficiently manage their environments by deploying new resources when needed. While this activity may be legitimate, a threat actor might utilize such operations to conduct crypto mining.<br> The activity is deemed suspicious as the compute resources scale is higher than previously observed in the subscription. <br> This can indicate that the principal is compromised and is being used with malicious intent. | Impact | Medium |
-| **PREVIEW - Suspicious key vault recovery detected**<br>(Arm_Suspicious_Vault_Recovering) | Microsoft Defender for Resource Manager detected a suspicious recovery operation for a soft-deleted key vault resource.<br> The user recovering the resource is different from the user that deleted it. This is highly suspicious because the user rarely invokes such an operation. In addition, the user logged on without multi-factor authentication (MFA).<br> This might indicate that the user is compromised and is attempting to discover secrets and keys to gain access to sensitive resources, or to perform lateral movement across your network. | Lateral movement | Medium/high |
-| **PREVIEW - Suspicious management session using an inactive account detected**<br>(ARM_UnusedAccountPersistence) | Subscription activity logs analysis has detected suspicious behavior. A principal not in use for a long period of time is now performing actions that can secure persistence for an attacker. | Persistence | Medium |
-| **PREVIEW - Suspicious invocation of a high-risk 'Credential Access' operation by a service principal detected**<br>(ARM_AnomalousServiceOperation.CredentialAccess) | Microsoft Defender for Resource Manager identified a suspicious invocation of a high-risk operation in your subscription which might indicate an attempt to access credentials. The identified operations are designed to allow administrators to efficiently manage their environments. While this activity may be legitimate, a threat actor might utilize such operations to access restricted credentials and compromise resources in your environment. This can indicate that the service principal is compromised and is being used with malicious intent. | Credential access | Medium |
-| **PREVIEW - Suspicious invocation of a high-risk 'Data Collection' operation by a service principal detected**<br>(ARM_AnomalousServiceOperation.Collection) | Microsoft Defender for Resource Manager identified a suspicious invocation of a high-risk operation in your subscription which might indicate an attempt to collect data. The identified operations are designed to allow administrators to efficiently manage their environments. While this activity may be legitimate, a threat actor might utilize such operations to collect sensitive data on resources in your environment. This can indicate that the service principal is compromised and is being used with malicious intent. | Collection | Medium |
-| **PREVIEW - Suspicious invocation of a high-risk 'Defense Evasion' operation by a service principal detected**<br>(ARM_AnomalousServiceOperation.DefenseEvasion) | Microsoft Defender for Resource Manager identified a suspicious invocation of a high-risk operation in your subscription which might indicate an attempt to evade defenses. The identified operations are designed to allow administrators to efficiently manage the security posture of their environments. While this activity may be legitimate, a threat actor might utilize such operations to avoid being detected while compromising resources in your environment. This can indicate that the service principal is compromised and is being used with malicious intent. | Defense Evasion | Medium |
-| **PREVIEW - Suspicious invocation of a high-risk 'Execution' operation by a service principal detected**<br>(ARM_AnomalousServiceOperation.Execution) | Microsoft Defender for Resource Manager identified a suspicious invocation of a high-risk operation on a machine in your subscription which might indicate an attempt to execute code. The identified operations are designed to allow administrators to efficiently manage their environments. While this activity may be legitimate, a threat actor might utilize such operations to access restricted credentials and compromise resources in your environment. This can indicate that the service principal is compromised and is being used with malicious intent. | Defense Execution | Medium |
-| **PREVIEW - Suspicious invocation of a high-risk 'Impact' operation by a service principal detected**<br>(ARM_AnomalousServiceOperation.Impact) | Microsoft Defender for Resource Manager identified a suspicious invocation of a high-risk operation in your subscription which might indicate an attempted configuration change. The identified operations are designed to allow administrators to efficiently manage their environments. While this activity may be legitimate, a threat actor might utilize such operations to access restricted credentials and compromise resources in your environment. This can indicate that the service principal is compromised and is being used with malicious intent. | Impact | Medium |
-| **PREVIEW - Suspicious invocation of a high-risk 'Initial Access' operation by a service principal detected**<br>(ARM_AnomalousServiceOperation.InitialAccess) | Microsoft Defender for Resource Manager identified a suspicious invocation of a high-risk operation in your subscription which might indicate an attempt to access restricted resources. The identified operations are designed to allow administrators to efficiently access their environments. While this activity may be legitimate, a threat actor might utilize such operations to gain initial access to restricted resources in your environment. This can indicate that the service principal is compromised and is being used with malicious intent. | Initial access | Medium |
-| **PREVIEW - Suspicious invocation of a high-risk 'Lateral Movement Access' operation by a service principal detected**<br>(ARM_AnomalousServiceOperation.LateralMovement) | Microsoft Defender for Resource Manager identified a suspicious invocation of a high-risk operation in your subscription which might indicate an attempt to perform lateral movement. The identified operations are designed to allow administrators to efficiently manage their environments. While this activity may be legitimate, a threat actor might utilize such operations to compromise more resources in your environment. This can indicate that the service principal is compromised and is being used with malicious intent. | Lateral movement | Medium |
-| **PREVIEW - Suspicious invocation of a high-risk 'persistence' operation by a service principal detected**<br>(ARM_AnomalousServiceOperation.Persistence) | Microsoft Defender for Resource Manager identified a suspicious invocation of a high-risk operation in your subscription which might indicate an attempt to establish persistence. The identified operations are designed to allow administrators to efficiently manage their environments. While this activity may be legitimate, a threat actor might utilize such operations to establish persistence in your environment. This can indicate that the service principal is compromised and is being used with malicious intent. | Persistence | Medium |
-| **PREVIEW - Suspicious invocation of a high-risk 'Privilege Escalation' operation by a service principal detected**<br>(ARM_AnomalousServiceOperation.PrivilegeEscalation) | Microsoft Defender for Resource Manager identified a suspicious invocation of a high-risk operation in your subscription which might indicate an attempt to escalate privileges. The identified operations are designed to allow administrators to efficiently manage their environments. While this activity may be legitimate, a threat actor might utilize such operations to escalate privileges while compromising resources in your environment. This can indicate that the service principal is compromised and is being used with malicious intent. | Privilege escalation | Medium |
-| **PREVIEW - Suspicious management session using an inactive account detected**<br>(ARM_UnusedAccountPersistence) | Subscription activity logs analysis has detected suspicious behavior. A principal not in use for a long period of time is now performing actions that can secure persistence for an attacker. | Persistence | Medium |
-| **PREVIEW - Suspicious management session using PowerShell detected**<br>(ARM_UnusedAppPowershellPersistence) | Subscription activity logs analysis has detected suspicious behavior. A principal that doesn't regularly use PowerShell to manage the subscription environment is now using PowerShell, and performing actions that can secure persistence for an attacker. | Persistence | Medium |
-| **PREVIEW - Suspicious management session using Azure portal detected**<br>(ARM_UnusedAppIbizaPersistence) | Analysis of your subscription activity logs has detected a suspicious behavior. A principal that doesn't regularly use the Azure portal (Ibiza) to manage the subscription environment (hasn't used Azure portal to manage for the last 45 days, or a subscription that it is actively managing), is now using the Azure portal and performing actions that can secure persistence for an attacker. | Persistence | Medium |
-| **Privileged custom role created for your subscription in a suspicious way (Preview)**<br>(ARM_PrivilegedRoleDefinitionCreation) | Microsoft Defender for Resource Manager detected a suspicious creation of privileged custom role definition in your subscription. This operation might have been performed by a legitimate user in your organization. Alternatively, it might indicate that an account in your organization was breached, and that the threat actor is trying to create a privileged role to use in the future to evade detection. | Privilege Escalation, Defense Evasion | Low |
-| **Suspicious Azure role assignment detected (Preview)**<br>(ARM_AnomalousRBACRoleAssignment) | Microsoft Defender for Resource Manager identified a suspicious Azure role assignment / performed using PIM (Privileged Identity Management) in your tenant which might indicate that an account in your organization was compromised. The identified operations are designed to allow administrators to grant principals access to Azure resources. While this activity may be legitimate, a threat actor might utilize role assignment to escalate their permissions allowing them to advance their attack. |Lateral Movement, Defense Evasion|Low (PIM) / High|
-| **Suspicious invocation of a high-risk 'Credential Access' operation detected (Preview)**<br>(ARM_AnomalousOperation.CredentialAccess) | Microsoft Defender for Resource Manager identified a suspicious invocation of a high-risk operation in your subscription which might indicate an attempt to access credentials. The identified operations are designed to allow administrators to efficiently access their environments. While this activity may be legitimate, a threat actor might utilize such operations to access restricted credentials and compromise resources in your environment. This can indicate that the account is compromised and is being used with malicious intent. | Credential Access | Medium |
-| **Suspicious invocation of a high-risk 'Data Collection' operation detected (Preview)**<br>(ARM_AnomalousOperation.Collection) | Microsoft Defender for Resource Manager identified a suspicious invocation of a high-risk operation in your subscription which might indicate an attempt to collect data. The identified operations are designed to allow administrators to efficiently manage their environments. While this activity may be legitimate, a threat actor might utilize such operations to collect sensitive data on resources in your environment. This can indicate that the account is compromised and is being used with malicious intent. | Collection | Medium |
-| **Suspicious invocation of a high-risk 'Defense Evasion' operation detected (Preview)**<br>(ARM_AnomalousOperation.DefenseEvasion) | Microsoft Defender for Resource Manager identified a suspicious invocation of a high-risk operation in your subscription which might indicate an attempt to evade defenses. The identified operations are designed to allow administrators to efficiently manage the security posture of their environments. While this activity may be legitimate, a threat actor might utilize such operations to avoid being detected while compromising resources in your environment. This can indicate that the account is compromised and is being used with malicious intent. | Defense Evasion | Medium |
-| **Suspicious invocation of a high-risk 'Execution' operation detected (Preview)**<br>(ARM_AnomalousOperation.Execution) | Microsoft Defender for Resource Manager identified a suspicious invocation of a high-risk operation on a machine in your subscription which might indicate an attempt to execute code. The identified operations are designed to allow administrators to efficiently manage their environments. While this activity may be legitimate, a threat actor might utilize such operations to access restricted credentials and compromise resources in your environment. This can indicate that the account is compromised and is being used with malicious intent. | Execution | Medium |
-| **Suspicious invocation of a high-risk 'Impact' operation detected (Preview)**<br>(ARM_AnomalousOperation.Impact) | Microsoft Defender for Resource Manager identified a suspicious invocation of a high-risk operation in your subscription which might indicate an attempted configuration change. The identified operations are designed to allow administrators to efficiently manage their environments. While this activity may be legitimate, a threat actor might utilize such operations to access restricted credentials and compromise resources in your environment. This can indicate that the account is compromised and is being used with malicious intent. | Impact | Medium |
-| **Suspicious invocation of a high-risk 'Initial Access' operation detected (Preview)**<br>(ARM_AnomalousOperation.InitialAccess) | Microsoft Defender for Resource Manager identified a suspicious invocation of a high-risk operation in your subscription which might indicate an attempt to access restricted resources. The identified operations are designed to allow administrators to efficiently access their environments. While this activity may be legitimate, a threat actor might utilize such operations to gain initial access to restricted resources in your environment. This can indicate that the account is compromised and is being used with malicious intent. | Initial Access | Medium |
-| **Suspicious invocation of a high-risk 'Lateral Movement' operation detected (Preview)**<br>(ARM_AnomalousOperation.LateralMovement) | Microsoft Defender for Resource Manager identified a suspicious invocation of a high-risk operation in your subscription which might indicate an attempt to perform lateral movement. The identified operations are designed to allow administrators to efficiently manage their environments. While this activity may be legitimate, a threat actor might utilize such operations to compromise more resources in your environment. This can indicate that the account is compromised and is being used with malicious intent. | Lateral Movement | Medium |
-| **Suspicious invocation of a high-risk 'Persistence' operation detected (Preview)**<br>(ARM_AnomalousOperation.Persistence) | Microsoft Defender for Resource Manager identified a suspicious invocation of a high-risk operation in your subscription which might indicate an attempt to establish persistence. The identified operations are designed to allow administrators to efficiently manage their environments. While this activity may be legitimate, a threat actor might utilize such operations to establish persistence in your environment. This can indicate that the account is compromised and is being used with malicious intent. | Persistence | Medium |
-| **Suspicious invocation of a high-risk 'Privilege Escalation' operation detected (Preview)**<br>(ARM_AnomalousOperation.PrivilegeEscalation) | Microsoft Defender for Resource Manager identified a suspicious invocation of a high-risk operation in your subscription which might indicate an attempt to escalate privileges. The identified operations are designed to allow administrators to efficiently manage their environments. While this activity may be legitimate, a threat actor might utilize such operations to escalate privileges while compromising resources in your environment. This can indicate that the account is compromised and is being used with malicious intent. | Privilege Escalation | Medium |
-| **Usage of MicroBurst exploitation toolkit to run an arbitrary code or exfiltrate Azure Automation account credentials**<br>(ARM_MicroBurst.RunCodeOnBehalf) | A PowerShell script was run in your subscription and performed a suspicious pattern of executing an arbitrary code or exfiltrate Azure Automation account credentials. Threat actors use automated scripts, like MicroBurst, to run arbitrary code for malicious activities. This was detected by analyzing Azure Resource Manager operations in your subscription. This operation might indicate that an identity in your organization was breached, and that the threat actor is trying to compromise your environment for malicious intentions. | Persistence, Credential Access | High |
-| **Usage of NetSPI techniques to maintain persistence in your Azure environment**<br>(ARM_NetSPI.MaintainPersistence) | Usage of NetSPI persistence technique to create a webhook backdoor and maintain persistence in your Azure environment. This was detected by analyzing Azure Resource Manager operations in your subscription. | - | High |
-| **Usage of PowerZure exploitation toolkit to run an arbitrary code or exfiltrate Azure Automation account credentials**<br>(ARM_PowerZure.RunCodeOnBehalf) | PowerZure exploitation toolkit detected attempting to run code or exfiltrate Azure Automation account credentials. This was detected by analyzing Azure Resource Manager operations in your subscription. | - | High |
-| **Usage of PowerZure function to maintain persistence in your Azure environment**<br>(ARM_PowerZure.MaintainPersistence) | PowerZure exploitation toolkit detected creating a webhook backdoor to maintain persistence in your Azure environment. This was detected by analyzing Azure Resource Manager operations in your subscription. | - | High |
-| **Suspicious classic role assignment detected (Preview)**<br>(ARM_AnomalousClassicRoleAssignment) | Microsoft Defender for Resource Manager identified a suspicious classic role assignment in your tenant which might indicate that an account in your organization was compromised. The identified operations are designed to provide backward compatibility with classic roles that are no longer commonly used. While this activity may be legitimate, a threat actor might utilize such assignment to grant permissions to another user account under their control. |  Lateral Movement, Defense Evasion | High |
+| Alert (alert type) || Description | MITRE tactics<br>([Learn more](#intentions)) | Severity |
+|-| -- ||:--:|-|
+| **Azure Resource Manager operation from suspicious IP address**<br>(ARM_OperationFromSuspiciousIP) || Microsoft Defender for Resource Manager detected an operation from an IP address that has been marked as suspicious in threat intelligence feeds. | Execution | Medium |
+| **Azure Resource Manager operation from suspicious proxy IP address**<br>(ARM_OperationFromSuspiciousProxyIP) || Microsoft Defender for Resource Manager detected a resource management operation from an IP address that is associated with proxy services, such as TOR. While this behavior can be legitimate, it's often seen in malicious activities, when threat actors try to hide their source IP. | Defense Evasion | Medium |
+| **MicroBurst exploitation toolkit used to enumerate resources in your subscriptions**<br>(ARM_MicroBurst.AzDomainInfo) || A PowerShell script was run in your subscription and performed a suspicious pattern of executing information gathering operations to discover resources, permissions, and network structures. Threat actors use automated scripts, like MicroBurst, to gather information for malicious activities. This was detected by analyzing Azure Resource Manager operations in your subscription. This operation might indicate that an identity in your organization was breached, and that the threat actor is trying to compromise your environment for malicious intentions. | - | Low |
+| **MicroBurst exploitation toolkit used to enumerate resources in your subscriptions**<br>(ARM_MicroBurst.AzureDomainInfo) || A PowerShell script was run in your subscription and performed a suspicious pattern of executing information gathering operations to discover resources, permissions, and network structures. Threat actors use automated scripts, like MicroBurst, to gather information for malicious activities. This was detected by analyzing Azure Resource Manager operations in your subscription. This operation might indicate that an identity in your organization was breached, and that the threat actor is trying to compromise your environment for malicious intentions. | - | Low |
+| **MicroBurst exploitation toolkit used to execute code on your virtual machine**<br>(ARM_MicroBurst.AzVMBulkCMD) || A PowerShell script was run in your subscription and performed a suspicious pattern of executing code on a VM or a list of VMs. Threat actors use automated scripts, like MicroBurst, to run a script on a VM for malicious activities. This was detected by analyzing Azure Resource Manager operations in your subscription. This operation might indicate that an identity in your organization was breached, and that the threat actor is trying to compromise your environment for malicious intentions. | Execution | High |
+| **MicroBurst exploitation toolkit used to execute code on your virtual machine**<br>(RM_MicroBurst.AzureRmVMBulkCMD) || MicroBurst's exploitation toolkit was used to execute code on your virtual machines. This was detected by analyzing Azure Resource Manager operations in your subscription. | - | High |
+| **MicroBurst exploitation toolkit used to extract keys from your Azure key vaults**<br>(ARM_MicroBurst.AzKeyVaultKeysREST) || A PowerShell script was run in your subscription and performed a suspicious pattern of extracting keys from Azure Key Vault(s). Threat actors use automated scripts, like MicroBurst, to list keys and use them to access sensitive data or perform lateral movement. This was detected by analyzing Azure Resource Manager operations in your subscription. This operation might indicate that an identity in your organization was breached, and that the threat actor is trying to compromise your environment for malicious intentions. | - | High |
+| **MicroBurst exploitation toolkit used to extract keys to your storage accounts**<br>(ARM_MicroBurst.AZStorageKeysREST) || A PowerShell script was run in your subscription and performed a suspicious pattern of extracting keys to Storage Account(s). Threat actors use automated scripts, like MicroBurst, to list keys and use them to access sensitive data in your Storage Account(s). This was detected by analyzing Azure Resource Manager operations in your subscription. This operation might indicate that an identity in your organization was breached, and that the threat actor is trying to compromise your environment for malicious intentions. | Collection | High |
+| **MicroBurst exploitation toolkit used to extract secrets from your Azure key vaults**<br>(ARM_MicroBurst.AzKeyVaultSecretsREST) || A PowerShell script was run in your subscription and performed a suspicious pattern of extracting secrets from Azure Key Vault(s). Threat actors use automated scripts, like MicroBurst, to list secrets and use them to access sensitive data or perform lateral movement. This was detected by analyzing Azure Resource Manager operations in your subscription. This operation might indicate that an identity in your organization was breached, and that the threat actor is trying to compromise your environment for malicious intentions. | - | High |
+| **PowerZure exploitation toolkit used to elevate access from Azure AD to Azure**<br>(ARM_PowerZure.AzureElevatedPrivileges) || PowerZure exploitation toolkit was used to elevate access from Azure AD to Azure. This was detected by analyzing Azure Resource Manager operations in your tenant. | - | High |
+| **PowerZure exploitation toolkit used to enumerate resources**<br>(ARM_PowerZure.GetAzureTargets) || PowerZure exploitation toolkit was used to enumerate resources on behalf of a legitimate user account in your organization. This was detected by analyzing Azure Resource Manager operations in your subscription. | Collection | High |
+| **PowerZure exploitation toolkit used to enumerate storage containers, shares, and tables**<br>(ARM_PowerZure.ShowStorageContent) || PowerZure exploitation toolkit was used to enumerate storage shares, tables, and containers. This was detected by analyzing Azure Resource Manager operations in your subscription. | - | High |
+| **PowerZure exploitation toolkit used to execute a Runbook in your subscription**<br>(ARM_PowerZure.StartRunbook) || PowerZure exploitation toolkit was used to execute a Runbook. This was detected by analyzing Azure Resource Manager operations in your subscription. | - | High |
+| **PowerZure exploitation toolkit used to extract Runbooks content**<br>(ARM_PowerZure.AzureRunbookContent) || PowerZure exploitation toolkit was used to extract Runbook content. This was detected by analyzing Azure Resource Manager operations in your subscription. | Collection | High |
+| **PREVIEW - Azurite toolkit run detected**<br>(ARM_Azurite) || A known cloud-environment reconnaissance toolkit run has been detected in your environment. The tool [Azurite](https://github.com/mwrlabs/Azurite) can be used by an attacker (or penetration tester) to map your subscriptions' resources and identify insecure configurations. | Collection | High |
+| **PREVIEW - Suspicious creation of compute resources detected**<br>(ARM_SuspiciousComputeCreation) || Microsoft Defender for Resource Manager identified a suspicious creation of compute resources in your subscription utilizing Virtual Machines/Azure Scale Set. The identified operations are designed to allow administrators to efficiently manage their environments by deploying new resources when needed. While this activity may be legitimate, a threat actor might utilize such operations to conduct crypto mining.<br> The activity is deemed suspicious as the compute resources scale is higher than previously observed in the subscription. <br> This can indicate that the principal is compromised and is being used with malicious intent. | Impact | Medium |
+| **PREVIEW - Suspicious key vault recovery detected**<br>(Arm_Suspicious_Vault_Recovering) || Microsoft Defender for Resource Manager detected a suspicious recovery operation for a soft-deleted key vault resource.<br> The user recovering the resource is different from the user that deleted it. This is highly suspicious because the user rarely invokes such an operation. In addition, the user logged on without multi-factor authentication (MFA).<br> This might indicate that the user is compromised and is attempting to discover secrets and keys to gain access to sensitive resources, or to perform lateral movement across your network. | Lateral movement | Medium/High |
+| **PREVIEW - Suspicious invocation of a high-risk 'Credential Access' operation by a service principal detected**<br>(ARM_AnomalousServiceOperation.CredentialAccess) || Microsoft Defender for Resource Manager identified a suspicious invocation of a high-risk operation in your subscription which might indicate an attempt to access credentials. The identified operations are designed to allow administrators to efficiently manage their environments. While this activity may be legitimate, a threat actor might utilize such operations to access restricted credentials and compromise resources in your environment. This can indicate that the service principal is compromised and is being used with malicious intent. | Credential Access | Medium |
+| **PREVIEW - Suspicious invocation of a high-risk 'Data Collection' operation by a service principal detected**<br>(ARM_AnomalousServiceOperation.Collection) || Microsoft Defender for Resource Manager identified a suspicious invocation of a high-risk operation in your subscription which might indicate an attempt to collect data. The identified operations are designed to allow administrators to efficiently manage their environments. While this activity may be legitimate, a threat actor might utilize such operations to collect sensitive data on resources in your environment. This can indicate that the service principal is compromised and is being used with malicious intent. | Collection | Medium |
+| **PREVIEW - Suspicious invocation of a high-risk 'Defense Evasion' operation by a service principal detected**<br>(ARM_AnomalousServiceOperation.DefenseEvasion) || Microsoft Defender for Resource Manager identified a suspicious invocation of a high-risk operation in your subscription which might indicate an attempt to evade defenses. The identified operations are designed to allow administrators to efficiently manage the security posture of their environments. While this activity may be legitimate, a threat actor might utilize such operations to avoid being detected while compromising resources in your environment. This can indicate that the service principal is compromised and is being used with malicious intent. | Defense Evasion | Medium |
+| **PREVIEW - Suspicious invocation of a high-risk 'Execution' operation by a service principal detected**<br>(ARM_AnomalousServiceOperation.Execution) || Microsoft Defender for Resource Manager identified a suspicious invocation of a high-risk operation on a machine in your subscription which might indicate an attempt to execute code. The identified operations are designed to allow administrators to efficiently manage their environments. While this activity may be legitimate, a threat actor might utilize such operations to access restricted credentials and compromise resources in your environment. This can indicate that the service principal is compromised and is being used with malicious intent. | Execution | Medium |
+| **PREVIEW - Suspicious invocation of a high-risk 'Impact' operation by a service principal detected**<br>(ARM_AnomalousServiceOperation.Impact) || Microsoft Defender for Resource Manager identified a suspicious invocation of a high-risk operation in your subscription which might indicate an attempted configuration change. The identified operations are designed to allow administrators to efficiently manage their environments. While this activity may be legitimate, a threat actor might utilize such operations to access restricted credentials and compromise resources in your environment. This can indicate that the service principal is compromised and is being used with malicious intent. | Impact | Medium |
+| **PREVIEW - Suspicious invocation of a high-risk 'Initial Access' operation by a service principal detected**<br>(ARM_AnomalousServiceOperation.InitialAccess) || Microsoft Defender for Resource Manager identified a suspicious invocation of a high-risk operation in your subscription which might indicate an attempt to access restricted resources. The identified operations are designed to allow administrators to efficiently access their environments. While this activity may be legitimate, a threat actor might utilize such operations to gain initial access to restricted resources in your environment. This can indicate that the service principal is compromised and is being used with malicious intent. | Initial Access | Medium |
+| **PREVIEW - Suspicious invocation of a high-risk 'Lateral Movement' operation by a service principal detected**<br>(ARM_AnomalousServiceOperation.LateralMovement) || Microsoft Defender for Resource Manager identified a suspicious invocation of a high-risk operation in your subscription which might indicate an attempt to perform lateral movement. The identified operations are designed to allow administrators to efficiently manage their environments. While this activity may be legitimate, a threat actor might utilize such operations to compromise more resources in your environment. This can indicate that the service principal is compromised and is being used with malicious intent. | Lateral Movement | Medium |
+| **PREVIEW - Suspicious invocation of a high-risk 'Persistence' operation by a service principal detected**<br>(ARM_AnomalousServiceOperation.Persistence) || Microsoft Defender for Resource Manager identified a suspicious invocation of a high-risk operation in your subscription which might indicate an attempt to establish persistence. The identified operations are designed to allow administrators to efficiently manage their environments. While this activity may be legitimate, a threat actor might utilize such operations to establish persistence in your environment. This can indicate that the service principal is compromised and is being used with malicious intent. | Persistence | Medium |
+| **PREVIEW - Suspicious invocation of a high-risk 'Privilege Escalation' operation by a service principal detected**<br>(ARM_AnomalousServiceOperation.PrivilegeEscalation) || Microsoft Defender for Resource Manager identified a suspicious invocation of a high-risk operation in your subscription which might indicate an attempt to escalate privileges. The identified operations are designed to allow administrators to efficiently manage their environments. While this activity may be legitimate, a threat actor might utilize such operations to escalate privileges while compromising resources in your environment. This can indicate that the service principal is compromised and is being used with malicious intent. | Privilege Escalation | Medium |
+| **PREVIEW - Suspicious management session using an inactive account detected**<br>(ARM_UnusedAccountPersistence) || Subscription activity logs analysis has detected suspicious behavior. A principal not in use for a long period of time is now performing actions that can secure persistence for an attacker. | Persistence | Medium |
+| **PREVIEW - Suspicious management session using PowerShell detected**<br>(ARM_UnusedAppPowershellPersistence) || Subscription activity logs analysis has detected suspicious behavior. A principal that doesn't regularly use PowerShell to manage the subscription environment is now using PowerShell, and performing actions that can secure persistence for an attacker. | Persistence | Medium |
+| **PREVIEW - Suspicious management session using Azure portal detected**<br>(ARM_UnusedAppIbizaPersistence) || Analysis of your subscription activity logs has detected suspicious behavior. A principal that doesn't regularly use the Azure portal (Ibiza) to manage the subscription environment (hasn't used the Azure portal to manage for the last 45 days, or a subscription that it is actively managing), is now using the Azure portal and performing actions that can secure persistence for an attacker. | Persistence | Medium |
+| **Privileged custom role created for your subscription in a suspicious way (Preview)**<br>(ARM_PrivilegedRoleDefinitionCreation) || Microsoft Defender for Resource Manager detected a suspicious creation of privileged custom role definition in your subscription. This operation might have been performed by a legitimate user in your organization. Alternatively, it might indicate that an account in your organization was breached, and that the threat actor is trying to create a privileged role to use in the future to evade detection. | Privilege Escalation, Defense Evasion | Low |
+| **Suspicious Azure role assignment detected (Preview)**<br>(ARM_AnomalousRBACRoleAssignment) || Microsoft Defender for Resource Manager identified a suspicious Azure role assignment, or one performed using PIM (Privileged Identity Management), in your tenant, which might indicate that an account in your organization was compromised. The identified operations are designed to allow administrators to grant principals access to Azure resources. While this activity may be legitimate, a threat actor might utilize role assignment to escalate their permissions allowing them to advance their attack. | Lateral Movement, Defense Evasion | Low (PIM) / High |
+| **Suspicious invocation of a high-risk 'Credential Access' operation detected (Preview)**<br>(ARM_AnomalousOperation.CredentialAccess) || Microsoft Defender for Resource Manager identified a suspicious invocation of a high-risk operation in your subscription which might indicate an attempt to access credentials. The identified operations are designed to allow administrators to efficiently access their environments. While this activity may be legitimate, a threat actor might utilize such operations to access restricted credentials and compromise resources in your environment. This can indicate that the account is compromised and is being used with malicious intent. | Credential Access | Medium |
+| **Suspicious invocation of a high-risk 'Data Collection' operation detected (Preview)**<br>(ARM_AnomalousOperation.Collection) || Microsoft Defender for Resource Manager identified a suspicious invocation of a high-risk operation in your subscription which might indicate an attempt to collect data. The identified operations are designed to allow administrators to efficiently manage their environments. While this activity may be legitimate, a threat actor might utilize such operations to collect sensitive data on resources in your environment. This can indicate that the account is compromised and is being used with malicious intent. | Collection | Medium |
+| **Suspicious invocation of a high-risk 'Defense Evasion' operation detected (Preview)**<br>(ARM_AnomalousOperation.DefenseEvasion) || Microsoft Defender for Resource Manager identified a suspicious invocation of a high-risk operation in your subscription which might indicate an attempt to evade defenses. The identified operations are designed to allow administrators to efficiently manage the security posture of their environments. While this activity may be legitimate, a threat actor might utilize such operations to avoid being detected while compromising resources in your environment. This can indicate that the account is compromised and is being used with malicious intent. | Defense Evasion | Medium |
+| **Suspicious invocation of a high-risk 'Execution' operation detected (Preview)**<br>(ARM_AnomalousOperation.Execution) || Microsoft Defender for Resource Manager identified a suspicious invocation of a high-risk operation on a machine in your subscription which might indicate an attempt to execute code. The identified operations are designed to allow administrators to efficiently manage their environments. While this activity may be legitimate, a threat actor might utilize such operations to access restricted credentials and compromise resources in your environment. This can indicate that the account is compromised and is being used with malicious intent. | Execution | Medium |
+| **Suspicious invocation of a high-risk 'Impact' operation detected (Preview)**<br>(ARM_AnomalousOperation.Impact) || Microsoft Defender for Resource Manager identified a suspicious invocation of a high-risk operation in your subscription which might indicate an attempted configuration change. The identified operations are designed to allow administrators to efficiently manage their environments. While this activity may be legitimate, a threat actor might utilize such operations to access restricted credentials and compromise resources in your environment. This can indicate that the account is compromised and is being used with malicious intent. | Impact | Medium |
+| **Suspicious invocation of a high-risk 'Initial Access' operation detected (Preview)**<br>(ARM_AnomalousOperation.InitialAccess) || Microsoft Defender for Resource Manager identified a suspicious invocation of a high-risk operation in your subscription which might indicate an attempt to access restricted resources. The identified operations are designed to allow administrators to efficiently access their environments. While this activity may be legitimate, a threat actor might utilize such operations to gain initial access to restricted resources in your environment. This can indicate that the account is compromised and is being used with malicious intent. | Initial Access | Medium |
+| **Suspicious invocation of a high-risk 'Lateral Movement' operation detected (Preview)**<br>(ARM_AnomalousOperation.LateralMovement) || Microsoft Defender for Resource Manager identified a suspicious invocation of a high-risk operation in your subscription which might indicate an attempt to perform lateral movement. The identified operations are designed to allow administrators to efficiently manage their environments. While this activity may be legitimate, a threat actor might utilize such operations to compromise more resources in your environment. This can indicate that the account is compromised and is being used with malicious intent. | Lateral Movement | Medium |
+| **Suspicious elevate access operation (Preview)**<br>(ARM_AnomalousElevateAccess) || Microsoft Defender for Resource Manager identified a suspicious "Elevate Access" operation. The activity is deemed suspicious, as this principal rarely invokes such operations. While this activity may be legitimate, a threat actor might utilize an "Elevate Access" operation to perform privilege escalation for a compromised user. | Privilege Escalation | Medium |
+| **Suspicious invocation of a high-risk 'Persistence' operation detected (Preview)**<br>(ARM_AnomalousOperation.Persistence) || Microsoft Defender for Resource Manager identified a suspicious invocation of a high-risk operation in your subscription which might indicate an attempt to establish persistence. The identified operations are designed to allow administrators to efficiently manage their environments. While this activity may be legitimate, a threat actor might utilize such operations to establish persistence in your environment. This can indicate that the account is compromised and is being used with malicious intent. | Persistence | Medium |
+| **Suspicious invocation of a high-risk 'Privilege Escalation' operation detected (Preview)**<br>(ARM_AnomalousOperation.PrivilegeEscalation) || Microsoft Defender for Resource Manager identified a suspicious invocation of a high-risk operation in your subscription which might indicate an attempt to escalate privileges. The identified operations are designed to allow administrators to efficiently manage their environments. While this activity may be legitimate, a threat actor might utilize such operations to escalate privileges while compromising resources in your environment. This can indicate that the account is compromised and is being used with malicious intent. | Privilege Escalation | Medium |
+| **Usage of MicroBurst exploitation toolkit to run an arbitrary code or exfiltrate Azure Automation account credentials**<br>(ARM_MicroBurst.RunCodeOnBehalf) || A PowerShell script was run in your subscription and performed a suspicious pattern of executing arbitrary code or exfiltrating Azure Automation account credentials. Threat actors use automated scripts, like MicroBurst, to run arbitrary code for malicious activities. This was detected by analyzing Azure Resource Manager operations in your subscription. This operation might indicate that an identity in your organization was breached, and that the threat actor is trying to compromise your environment for malicious intentions. | Persistence, Credential Access | High |
+| **Usage of NetSPI techniques to maintain persistence in your Azure environment**<br>(ARM_NetSPI.MaintainPersistence) || Usage of NetSPI persistence technique to create a webhook backdoor and maintain persistence in your Azure environment. This was detected by analyzing Azure Resource Manager operations in your subscription. | - | High |
+| **Usage of PowerZure exploitation toolkit to run an arbitrary code or exfiltrate Azure Automation account credentials**<br>(ARM_PowerZure.RunCodeOnBehalf) || PowerZure exploitation toolkit detected attempting to run code or exfiltrate Azure Automation account credentials. This was detected by analyzing Azure Resource Manager operations in your subscription. | - | High |
+| **Usage of PowerZure function to maintain persistence in your Azure environment**<br>(ARM_PowerZure.MaintainPersistence) || PowerZure exploitation toolkit detected creating a webhook backdoor to maintain persistence in your Azure environment. This was detected by analyzing Azure Resource Manager operations in your subscription. | - | High |
+| **Suspicious classic role assignment detected (Preview)**<br>(ARM_AnomalousClassicRoleAssignment) || Microsoft Defender for Resource Manager identified a suspicious classic role assignment in your tenant which might indicate that an account in your organization was compromised. The identified operations are designed to provide backward compatibility with classic roles that are no longer commonly used. While this activity may be legitimate, a threat actor might utilize such assignment to grant permissions to another user account under their control. | Lateral Movement, Defense Evasion | High |
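If these alerts are continuously exported to a Log Analytics workspace, they can be hunted by their `ARM_*` alert type. The following is a minimal sketch, assuming the `azure-identity` and `azure-monitor-query` Python packages and a placeholder workspace ID; the column names come from the standard `SecurityAlert` schema.

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

client = LogsQueryClient(DefaultAzureCredential())

# List Defender for Resource Manager alerts (alert types prefixed ARM_)
# raised in the last seven days, most recent first.
query = """
SecurityAlert
| where AlertType startswith "ARM_"
| project TimeGenerated, AlertType, AlertName, AlertSeverity, Tactics
| order by TimeGenerated desc
"""

response = client.query_workspace(
    workspace_id="<workspace-id>",  # placeholder
    query=query,
    timespan=timedelta(days=7),
)

for table in response.tables:
    for row in table.rows:
        print(row)
```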
## <a name="alerts-azurestorage"></a>Alerts for Azure Storage
Microsoft Defender for Containers provides security alerts on the cluster level
|**Suspicious external access to an Azure storage account with overly permissive SAS token (Preview)**<br>Storage.Blob_AccountSas.InternalSasUsedExternally| The alert indicates that someone with an external (public) IP address accessed the storage account using an overly permissive SAS token with a long expiration date. This type of access is considered suspicious because the SAS token is typically only used in internal networks (from private IP addresses). <br>The activity may indicate that a SAS token has been leaked by a malicious actor or leaked unintentionally from a legitimate source. <br>Even if the access is legitimate, using a high-permission SAS token with a long expiration date goes against security best practices and poses a potential security risk. <br>Applies to: Azure Blob (Standard general-purpose v2, Azure Data Lake Storage Gen2 or premium block blobs) storage accounts with the new Defender for Storage plan. | Exfiltration / Resource Development / Impact | Medium |
|**Suspicious external operation to an Azure storage account with overly permissive SAS token (Preview)**<br>Storage.Blob_AccountSas.UnusualOperationFromExternalIp| The alert indicates that someone with an external (public) IP address accessed the storage account using an overly permissive SAS token with a long expiration date. The access is considered suspicious because operations invoked outside your network (not from private IP addresses) with this SAS token are typically used for a specific set of Read/Write/Delete operations, but other operations occurred, which makes this access suspicious. <br>This activity may indicate that a SAS token has been leaked by a malicious actor or leaked unintentionally from a legitimate source. <br>Even if the access is legitimate, using a high-permission SAS token with a long expiration date goes against security best practices and poses a potential security risk. <br>Applies to: Azure Blob (Standard general-purpose v2, Azure Data Lake Storage Gen2 or premium block blobs) storage accounts with the new Defender for Storage plan. | Exfiltration / Resource Development / Impact | Medium |
|**Unusual SAS token was used to access an Azure storage account from a public IP address (Preview)**<br>Storage.Blob_AccountSas.UnusualExternalAccess| The alert indicates that someone with an external (public) IP address has accessed the storage account using an account SAS token. The access is highly unusual and considered suspicious, as access to the storage account using SAS tokens typically comes only from internal (private) IP addresses. <br>It's possible that a SAS token was leaked or generated by a malicious actor either from within your organization or externally to gain access to this storage account. <br>Applies to: Azure Blob (Standard general-purpose v2, Azure Data Lake Storage Gen2 or premium block blobs) storage accounts with the new Defender for Storage plan. | Exfiltration / Resource Development / Impact | Low |
-|**Malicious file uploaded to storage account (Preview)**<br>Storage.Blob_AM.MalwareFound| The alert indicates that a malicious blob was uploaded to a storage account. This security alert is generated by the Malware Scanning feature in Defender for Storage. <br>Potential causes may include an intentional upload of malware by a threat actor or an unintentional upload of a malicious file by a legitimate user. <br>Applies to: Azure Blob (Standard general-purpose v2, Azure Data Lake Storage Gen2 or premium block blobs) storage accounts with the new Defender for Storage plan with the Malware Scanning feature enabled. | LateralMovement | High |
+|**Malicious file uploaded to storage account**<br>Storage.Blob_AM.MalwareFound| The alert indicates that a malicious blob was uploaded to a storage account. This security alert is generated by the Malware Scanning feature in Defender for Storage. <br>Potential causes may include an intentional upload of malware by a threat actor or an unintentional upload of a malicious file by a legitimate user. <br>Applies to: Azure Blob (Standard general-purpose v2, Azure Data Lake Storage Gen2 or premium block blobs) storage accounts with the new Defender for Storage plan with the Malware Scanning feature enabled. | LateralMovement | High |
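Several of the SAS alerts in this table flag long-lived, overly permissive tokens as the underlying risk. As a contrast, here's a minimal sketch of the safer pattern, using the `azure-storage-blob` Python package, that issues a read-only SAS scoped to one blob and expiring after an hour; the account, container, blob, and key values are placeholders.

```python
from datetime import datetime, timedelta, timezone

from azure.storage.blob import BlobSasPermissions, generate_blob_sas

# Placeholders; in practice, load the key from a secret store, not source code.
ACCOUNT_NAME = "<storage-account>"
ACCOUNT_KEY = "<account-key>"

# Scope the token to a single blob, grant read only, and keep it short-lived,
# rather than an account-level token with broad permissions and a distant expiry.
sas_token = generate_blob_sas(
    account_name=ACCOUNT_NAME,
    container_name="<container>",
    blob_name="<blob>",
    account_key=ACCOUNT_KEY,
    permission=BlobSasPermissions(read=True),
    expiry=datetime.now(timezone.utc) + timedelta(hours=1),
)

blob_url = f"https://{ACCOUNT_NAME}.blob.core.windows.net/<container>/<blob>?{sas_token}"
print(blob_url)
```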
## <a name="alerts-azurecosmos"></a>Alerts for Azure Cosmos DB
Microsoft Defender for Containers provides security alerts on the cluster level
| **Access from a suspicious IP**<br>(CosmosDB_SuspiciousIp) | This Azure Cosmos DB account was successfully accessed from an IP address that was identified as a threat by Microsoft Threat Intelligence. | Initial Access | Medium |
| **Access from an unusual location**<br>(CosmosDB_GeoAnomaly) | This Azure Cosmos DB account was accessed from a location considered unfamiliar, based on the usual access pattern. <br><br> Either a threat actor has gained access to the account, or a legitimate user has connected from a new or unusual geographic location. | Initial Access | Low |
| **Unusual volume of data extracted**<br>(CosmosDB_DataExfiltrationAnomaly) | An unusually large volume of data has been extracted from this Azure Cosmos DB account. This might indicate that a threat actor exfiltrated data. | Exfiltration | Medium |
-| **Extraction of Azure Cosmos DB accounts keys via a potentially malicious script**<br>(CosmosDB_SuspiciousListKeys.MaliciousScript) | A PowerShell script was run in your subscription and performed a suspicious pattern of key-listing operations to get the keys of Azure Cosmos DB accounts in your subscription. Threat actors use automated scripts, like Microburst, to list keys and find Azure Cosmos DB accounts they can access. <br><br> This operation might indicate that an identity in your organization was breached, and that the threat actor is trying to compromise Azure Cosmos DB accounts in your environment for malicious intentions. <br><br> Alternatively, a malicious insider could be trying to access sensitive data and perform lateral movement. | Collection | High |
+| **Extraction of Azure Cosmos DB accounts keys via a potentially malicious script**<br>(CosmosDB_SuspiciousListKeys.MaliciousScript) | A PowerShell script was run in your subscription and performed a suspicious pattern of key-listing operations to get the keys of Azure Cosmos DB accounts in your subscription. Threat actors use automated scripts, like Microburst, to list keys and find Azure Cosmos DB accounts they can access. <br><br> This operation might indicate that an identity in your organization was breached, and that the threat actor is trying to compromise Azure Cosmos DB accounts in your environment for malicious intentions. <br><br> Alternatively, a malicious insider could be trying to access sensitive data and perform lateral movement. | Collection | Medium |
| **Suspicious extraction of Azure Cosmos DB account keys**<br>(AzureCosmosDB_SuspiciousListKeys.SuspiciousPrincipal) | A suspicious source extracted Azure Cosmos DB account access keys from your subscription. If this source is not a legitimate source, this may be a high impact issue. The access key that was extracted provides full control over the associated databases and the data stored within. See the details of each specific alert to understand why the source was flagged as suspicious. | Credential Access | High |
| **SQL injection: potential data exfiltration**<br>(CosmosDB_SqlInjection.DataExfiltration) | A suspicious SQL statement was used to query a container in this Azure Cosmos DB account. <br><br> The injected statement might have succeeded in exfiltrating data that the threat actor isn't authorized to access. <br><br> Due to the structure and capabilities of Azure Cosmos DB queries, many known SQL injection attacks on Azure Cosmos DB accounts can't work. However, the variation used in this attack may work and threat actors can exfiltrate data. | Exfiltration | Medium |
| **SQL injection: fuzzing attempt**<br>(CosmosDB_SqlInjection.FailedFuzzingAttempt) | A suspicious SQL statement was used to query a container in this Azure Cosmos DB account. <br><br> Like other well-known SQL injection attacks, this attack won't succeed in compromising the Azure Cosmos DB account. <br><br> Nevertheless, it's an indication that a threat actor is trying to attack the resources in this account, and your application may be compromised. <br><br> Some SQL injection attacks can succeed and be used to exfiltrate data. This means that if the attacker continues performing SQL injection attempts, they may be able to compromise your Azure Cosmos DB account and exfiltrate data. <br><br> You can prevent this threat by using parameterized queries. | Pre-attack | Low |
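The fuzzing alert above notes that parameterized queries prevent SQL injection in Azure Cosmos DB. A minimal sketch with the `azure-cosmos` Python package (the endpoint, key, and resource names are placeholders) that passes user input as a bound parameter instead of concatenating it into the query string:

```python
from azure.cosmos import CosmosClient

# Placeholders; use your account endpoint and a key or Azure AD credential.
client = CosmosClient("<account-endpoint>", credential="<account-key>")
container = client.get_database_client("<database>").get_container_client("<container>")

user_input = "electronics"  # untrusted value; never spliced into the query text

# The @category parameter is bound server-side, so injected SQL in user_input
# is treated as data, not as part of the statement.
items = container.query_items(
    query="SELECT * FROM c WHERE c.category = @category",
    parameters=[{"name": "@category", "value": user_input}],
    enable_cross_partition_query=True,
)

for item in items:
    print(item["id"])
```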
VM_VbScriptHttpObjectAllocation| VBScript HTTP object allocation detected | High
- [Security alerts in Microsoft Defender for Cloud](alerts-overview.md)
- [Manage and respond to security alerts in Microsoft Defender for Cloud](managing-and-responding-alerts.md)
- [Continuously export Defender for Cloud data](continuous-export.md)
defender-for-cloud Apply Security Baseline https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/apply-security-baseline.md
Last updated 06/27/2023
# Review hardening recommendations
+> [!NOTE]
+> As the Log Analytics agent (also known as MMA) is set to retire in [August 2024](https://azure.microsoft.com/updates/were-retiring-the-log-analytics-agent-in-azure-monitor-on-31-august-2024/), all Defender for Servers features that currently depend on it, including those described on this page, will be available through either [Microsoft Defender for Endpoint integration](integration-defender-for-endpoint.md) or [agentless scanning](concept-agentless-data-collection.md) before the retirement date. For more information about the roadmap for each of the features that currently rely on the Log Analytics agent, see [this announcement](upcoming-changes.md#defender-for-cloud-plan-and-strategy-for-the-log-analytics-agent-deprecation).
+
To reduce a machine's attack surface and avoid known risks, it's important to configure the operating system (OS) as securely as possible.
-The Microsoft cloud security benchmark has guidance for OS hardening which has led to security baseline documents for [Windows](../governance/policy/samples/guest-configuration-baseline-windows.md) and [Linux](../governance/policy/samples/guest-configuration-baseline-linux.md).
+The Microsoft cloud security benchmark has guidance for OS hardening, which has led to security baseline documents for [Windows](../governance/policy/samples/guest-configuration-baseline-windows.md) and [Linux](../governance/policy/samples/guest-configuration-baseline-linux.md).
Use the security recommendations described in this article to assess the machines in your environment and:
defender-for-cloud Asset Inventory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/asset-inventory.md
For more information on related tools, see the following pages:
- [Azure Resource Graph (ARG)](../governance/resource-graph/index.yml)
- [Kusto Query Language (KQL)](/azure/data-explorer/kusto/query/)
-- View common question about [asset inventory](faq-defender-for-servers.yml)
+- View common questions about [asset inventory](faq-defender-for-servers.yml)
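Because asset inventory is built on Azure Resource Graph, the same view can be reproduced programmatically. A minimal sketch, assuming the `azure-mgmt-resourcegraph` and `azure-identity` Python packages and a placeholder subscription ID, that counts resources by type:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.resourcegraph import ResourceGraphClient
from azure.mgmt.resourcegraph.models import QueryRequest

client = ResourceGraphClient(DefaultAzureCredential())

# The same KQL you could run in the Azure Resource Graph Explorer blade.
request = QueryRequest(
    subscriptions=["<subscription-id>"],  # placeholder
    query="Resources | summarize count() by type | order by count_ desc",
)

result = client.resources(request)
for row in result.data:
    print(row)
```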
defender-for-cloud Attack Path Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/attack-path-reference.md
Title: Reference list of attack paths and cloud security graph components
description: This article lists Microsoft Defender for Cloud's list of attack paths based on resource.
Previously updated : 04/13/2023
Last updated : 09/05/2023

# Reference list of attack paths and cloud security graph components
Prerequisite: For a list of prerequisites, see the [Availability table](how-to-m
| Internet exposed VM has high severity vulnerabilities and has insecure secret that is used to authenticate to a SQL server | An Azure virtual machine is reachable from the internet, has high severity vulnerabilities and has plaintext SSH private key that can authenticate to an SQL server |
| VM has high severity vulnerabilities and has insecure secret that is used to authenticate to a SQL server | An Azure virtual machine has high severity vulnerabilities and has plaintext SSH private key that can authenticate to an SQL server |
| VM has high severity vulnerabilities and has insecure plaintext secret that is used to authenticate to storage account | An Azure virtual machine has high severity vulnerabilities and has plaintext SSH private key that can authenticate to an Azure storage account |
-| Internet expsed VM has high severity vulnerabilities and has insecure secret that is used to authenticate to storage account | An Azure virtual machine is reachable from the internet, has high severity vulnerabilities and has secret that can authenticate to an Azure storage account |
+| Internet exposed VM has high severity vulnerabilities and has insecure secret that is used to authenticate to storage account | An Azure virtual machine is reachable from the internet, has high severity vulnerabilities and has secret that can authenticate to an Azure storage account |
### AWS EC2 instances
Prerequisite: [Enable agentless scanning](enable-vulnerability-assessment-agentl
| EC2 instance has high severity vulnerabilities and has insecure plaintext secret that is used to authenticate to an RDS resource | An AWS EC2 instance has high severity vulnerabilities and has plaintext SSH private key that can authenticate to an AWS RDS resource |
| Internet exposed AWS EC2 instance has high severity vulnerabilities and has insecure secret that has permission to S3 bucket via an IAM policy, or via a bucket policy, or via both an IAM policy and a bucket policy. | An AWS EC2 instance is reachable from the internet, has high severity vulnerabilities and has insecure secret that has permissions to S3 bucket via an IAM policy, a bucket policy or both |
### GCP VM instances
+
+| Attack path display name | Attack path description |
+|--|--|
+| Internet exposed VM instance has high severity vulnerabilities | GCP VM instance '[VMInstanceName]' is reachable from the internet and has high severity vulnerabilities [Remote Code Execution]. |
+| Internet exposed VM instance with high severity vulnerabilities has read permissions to a data store | GCP VM instance '[VMInstanceName]' is reachable from the internet, has high severity vulnerabilities [Remote Code Execution] and has read permissions to a data store. |
+| Internet exposed VM instance with high severity vulnerabilities has read permissions to a data store with sensitive data | GCP VM instance '[VMInstanceName]' is reachable from the internet, has high severity vulnerabilities allowing remote code execution on the machine, and is assigned a service account with read permission to GCP Storage bucket '[BucketName]' containing sensitive data. |
+| Internet exposed VM instance has high severity vulnerabilities and high permission to a project | GCP VM instance '[VMInstanceName]' is reachable from the internet, has high severity vulnerabilities [Remote Code Execution] and has '[Permissions]' permission to project '[ProjectName]'. |
+| Internet exposed VM instance with high severity vulnerabilities has read permissions to a Secret Manager | GCP VM instance '[VMInstanceName]' is reachable from the internet, has high severity vulnerabilities [Remote Code Execution] and has read permissions through IAM policy to GCP Secret Manager's secret '[SecretName]'. |
+| Internet exposed VM instance has high severity vulnerabilities and a hosted database installed | GCP VM instance '[VMInstanceName]' with a hosted [DatabaseType] database is reachable from the internet and has high severity vulnerabilities. |
+| Internet exposed VM with high severity vulnerabilities has plaintext SSH private key | GCP VM instance '[MachineName]' is reachable from the internet, has high severity vulnerabilities [Remote Code Execution] and has plaintext SSH private key [SSHPrivateKey]. |
+| VM instance with high severity vulnerabilities has read permissions to a data store | GCP VM instance '[VMInstanceName]' has high severity vulnerabilities [Remote Code Execution] and has read permissions to a data store. |
+| VM instance with high severity vulnerabilities has read permissions to a data store with sensitive data | GCP VM instance '[VMInstanceName]' has high severity vulnerabilities [Remote Code Execution] and has read permissions to GCP Storage bucket '[BucketName]' containing sensitive data. |
+| VM instance has high severity vulnerabilities and high permission to a project | GCP VM instance '[VMInstanceName]' has high severity vulnerabilities [Remote Code Execution] and has '[Permissions]' permission to project '[ProjectName]'. |
+| VM instance with high severity vulnerabilities has read permissions to a Secret Manager | GCP VM instance '[VMInstanceName]' has high severity vulnerabilities [Remote Code Execution] and has read permissions through IAM policy to GCP Secret Manager's secret '[SecretName]'. |
+| VM instance with high severity vulnerabilities has plaintext SSH private key | GCP VM instance '[MachineName]' has high severity vulnerabilities [Remote Code Execution] and has plaintext SSH private key [SSHPrivateKey]. |
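The attack paths in these tables can also be retrieved as data rather than read in the portal. A minimal sketch using the `azure-mgmt-resourcegraph` Python package, assuming the `microsoft.security/attackpaths` Resource Graph type that Defender CSPM exposes for attack path data; the subscription ID and the `displayName` property access are placeholders/assumptions:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.resourcegraph import ResourceGraphClient
from azure.mgmt.resourcegraph.models import QueryRequest

client = ResourceGraphClient(DefaultAzureCredential())

# Attack paths surfaced by Defender CSPM are queryable as securityresources.
request = QueryRequest(
    subscriptions=["<subscription-id>"],  # placeholder
    query="""
securityresources
| where type == "microsoft.security/attackpaths"
| project displayName = tostring(properties.displayName), id
""",
)

for row in client.resources(request).data:
    print(row["displayName"])  # assumes the documented displayName property
```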
+
### Azure data

| Attack path display name | Attack path description |
|--|--|
Prerequisite: [Enable agentless scanning](enable-vulnerability-assessment-agentl
| Internet exposed SQL on VM has a user account with commonly used username and known vulnerabilities (Preview) | SQL on VM is reachable from the internet, has a local user account with a commonly used username (which is prone to brute force attacks), and has known vulnerabilities (CVEs). <br/> Prerequisite: [Enable Microsoft Defender for SQL servers on machines](defender-for-sql-usage.md) |
| SQL on VM has a user account with commonly used username and allows code execution on the VM (Preview) | SQL on VM has a local user account with a commonly used username (which is prone to brute force attacks), and has vulnerabilities allowing code execution and lateral movement to the underlying VM. <br/> Prerequisite: [Enable Microsoft Defender for SQL servers on machines](defender-for-sql-usage.md)|
| SQL on VM has a user account with commonly used username and known vulnerabilities (Preview) | SQL on VM has a local user account with a commonly used username (which is prone to brute force attacks), and has known vulnerabilities (CVEs). <br/> Prerequisite: [Enable Microsoft Defender for SQL servers on machines](defender-for-sql-usage.md)|
-| Managed database with excessive internet exposure allows basic (local user/password) authentication (Preview) | Database can be accessed through the internet from any public IP and allows authentication using username and password (basic authentication mechanism) which exposes the DB to brute force attacks. |
+| Managed database with excessive internet exposure allows basic (local user/password) authentication (Preview) | The database can be accessed through the internet from any public IP and allows authentication using username and password (basic authentication mechanism) which exposes the DB to brute force attacks. |
+| Managed database with excessive internet exposure and sensitive data allows basic (local user/password) authentication (Preview) | The database can be accessed through the internet from any public IP and allows authentication using username and password (basic authentication mechanism) which exposes a DB with sensitive data to brute force attacks. |
+| Internet exposed managed database with sensitive data allows basic (local user/password) authentication (Preview) | The database can be accessed through the internet from specific IPs or IP ranges and allows authentication using username and password (basic authentication mechanism) which exposes a DB with sensitive data to brute force attacks. |
| Internet exposed VM has high severity vulnerabilities and a hosted database installed (Preview) | An attacker with network access to the DB machine can exploit the vulnerabilities and gain remote code execution.|
| Private Azure blob storage container replicates data to internet exposed and publicly accessible Azure blob storage container | An internal Azure storage container replicates its data to another Azure storage container that is reachable from the internet and allows public access, which puts this data at risk. |
| Internet exposed Azure Blob Storage container with sensitive data is publicly accessible | A blob storage account container with sensitive data is reachable from the internet and allows public read access without authorization required. <br/> Prerequisite: [Enable data-aware security for storage accounts in Defender CSPM](data-security-posture-enable.md).|
Prerequisite: [Enable agentless scanning](enable-vulnerability-assessment-agentl
|Internet exposed SQL on EC2 instance has a user account with commonly used username and allows code execution on the underlying compute (Preview) | SQL on EC2 instance is reachable from the internet, has a local user account with a commonly used username (which is prone to brute force attacks), and has vulnerabilities allowing code execution and lateral movement to the underlying compute. <br/> Prerequisite: [Enable Microsoft Defender for SQL servers on machines](defender-for-sql-usage.md). |
|Internet exposed SQL on EC2 instance has a user account with commonly used username and known vulnerabilities (Preview) | SQL on EC2 instance is reachable from the internet, has a local user account with a commonly used username (which is prone to brute force attacks), and has known vulnerabilities (CVEs). <br/> Prerequisite: [Enable Microsoft Defender for SQL servers on machines](defender-for-sql-usage.md) |
|SQL on EC2 instance has a user account with commonly used username and allows code execution on the underlying compute (Preview) | SQL on EC2 instance has a local user account with commonly used username (which is prone to brute force attacks), and has vulnerabilities allowing code execution and lateral movement to the underlying compute. <br/> Prerequisite: [Enable Microsoft Defender for SQL servers on machines](defender-for-sql-usage.md) |
-| SQL on EC2 instance has a user account with commonly used username and known vulnerabilities (Preview) |SQL on EC2 instance [EC2Name] has a local user account with commonly used username (which is prone to brute force attacks), and has known vulnerabilities (CVEs). <br/> Prerequisite: [Enable Microsoft Defender for SQL servers on machines](defender-for-sql-usage.md) |
-|Managed database with excessive internet exposure allows basic (local user/password) authentication (Preview) | Database can be accessed through the internet from any public IP and allows authentication using username and password (basic authentication mechanism) which exposes the DB to brute force attacks. |
+| SQL on EC2 instance has a user account with commonly used username and known vulnerabilities (Preview) |SQL on EC2 instance [EC2Name] has a local user account with commonly used username (which is prone to brute force attacks), and has known vulnerabilities (CVEs). <br/> Prerequisite: [Enable Microsoft Defender for SQL servers on machines](defender-for-sql-usage.md) |
+| Managed database with excessive internet exposure allows basic (local user/password) authentication (Preview) | The database can be accessed through the internet from any public IP and allows authentication using username and password (basic authentication mechanism) which exposes the DB to brute force attacks. |
+| Managed database with excessive internet exposure and sensitive data allows basic (local user/password) authentication (Preview) | The database can be accessed through the internet from any public IP and allows authentication using username and password (basic authentication mechanism) which exposes a DB with sensitive data to brute force attacks.|
+|Internet exposed managed database with sensitive data allows basic (local user/password) authentication (Preview) | The database can be accessed through the internet from specific IPs or IP ranges and allows authentication using username and password (basic authentication mechanism) which exposes a DB with sensitive data to brute force attacks. |
|Internet exposed EC2 instance has high severity vulnerabilities and a hosted database installed (Preview) | An attacker with network access to the DB machine can exploit the vulnerabilities and gain remote code execution.|
| Private AWS S3 bucket replicates data to internet exposed and publicly accessible AWS S3 bucket | An internal AWS S3 bucket replicates its data to another S3 bucket which is reachable from the internet and allows public access, which puts this data at risk. |
| RDS snapshot is publicly available to all AWS accounts (Preview) | A snapshot of an RDS instance or cluster is publicly accessible by all AWS accounts. |
-| Internet exposed managed database allows basic (local user/password) authentication (Preview) | A database can be accessed through the internet and allows user/password authentication only which exposes the DB to brute force attacks. |
| Internet exposed SQL on EC2 instance has a user account with commonly used username and allows code execution on the underlying compute (Preview) | SQL on EC2 instance is reachable from the internet, has a local user account with commonly used username (which is prone to brute force attacks), and has vulnerabilities allowing code execution and lateral movement to the underlying compute |
| Internet exposed SQL on EC2 instance has a user account with commonly used username and known vulnerabilities (Preview) | SQL on EC2 instance is reachable from the internet, has a local user account with commonly used username (which is prone to brute force attacks), and has known vulnerabilities (CVEs) |
| SQL on EC2 instance has a user account with commonly used username and allows code execution on the underlying compute (Preview) | SQL on EC2 instance has a local user account with commonly used username (which is prone to brute force attacks), and has vulnerabilities allowing code execution and lateral movement to the underlying compute |
Prerequisite: [Enable agentless scanning](enable-vulnerability-assessment-agentl
| Private AWS S3 bucket with sensitive data replicates data to internet exposed and publicly accessible AWS S3 bucket | Private AWS S3 bucket with sensitive data is replicating data to internet exposed and publicly accessible AWS S3 bucket|
| RDS snapshot is publicly available to all AWS accounts (Preview) | RDS snapshot is publicly available to all AWS accounts |
+### GCP data
+
+| Attack path display name | Attack path description |
+|--|--|
+| GCP Storage Bucket with sensitive data is publicly accessible | GCP Storage Bucket [BucketName] with sensitive data allows public read access without authorization required. |
+
### Azure containers

Prerequisite: [Enable Defender for Containers](defender-for-containers-enable.md), and install the relevant agents in order to view attack paths that are related to containers. This also gives you the ability to [query](how-to-manage-cloud-security-explorer.md#build-a-query-with-the-cloud-security-explorer) container data plane workloads in the security explorer.
This section lists all of the cloud security graph components (connections and i
| Insight | Description | Supported entities |
|--|--|--|
-| Exposed to the internet | Indicates that a resource is exposed to the internet. Supports port filtering. [Learn more](concept-data-security-posture-prepare.md#exposed-to-the-internetallows-public-access) | Azure virtual machine, AWS EC2, Azure storage account, Azure SQL server, Azure Cosmos DB, AWS S3, Kubernetes pod, Azure SQL Managed Instance, Azure MySQL Single Server, Azure MySQL Flexible Server, Azure PostgreSQL Single Server, Azure PostgreSQL Flexible Server, Azure MariaDB Single Server, Synapse Workspace, RDS Instance |
-| Allows basic authentication (Preview) | Indicates that a resource allows basic (local user/password or key-based) authentication | Azure SQL Server, RDS Instance |
-| Contains sensitive data <br/> <br/> Prerequisite: [Enable data-aware security for storage accounts in Defender CSPM](data-security-posture-enable.md), or [leverage Microsoft Purview Data Catalog to protect sensitive data](information-protection.md). | Indicates that a resource contains sensitive data. | Azure Storage Account, Azure Storage Account Container, AWS S3 bucket, Azure SQL Server, Azure SQL Database, Azure Data Lake Storage Gen2, Azure Database for PostgreSQL, Azure Database for MySQL, Azure Synapse Analytics, Azure Cosmos DB accounts |
+| Exposed to the internet | Indicates that a resource is exposed to the internet. Supports port filtering. [Learn more](concept-data-security-posture-prepare.md#exposed-to-the-internetallows-public-access) | Azure virtual machine, AWS EC2, Azure storage account, Azure SQL server, Azure Cosmos DB, AWS S3, Kubernetes pod, Azure SQL Managed Instance, Azure MySQL Single Server, Azure MySQL Flexible Server, Azure PostgreSQL Single Server, Azure PostgreSQL Flexible Server, Azure MariaDB Single Server, Synapse Workspace, RDS Instance, GCP VM instance, GCP SQL admin instance |
+| Allows basic authentication (Preview) | Indicates that a resource allows basic (local user/password or key-based) authentication | Azure SQL Server, RDS Instance, Azure MariaDB Single Server, Azure MySQL Single Server, Azure MySQL Flexible Server, Synapse Workspace, Azure PostgreSQL Single Server, Azure SQL Managed Instance |
+| Contains sensitive data <br/> <br/> Prerequisite: [Enable data-aware security for storage accounts in Defender CSPM](data-security-posture-enable.md), or [leverage Microsoft Purview Data Catalog to protect sensitive data](information-protection.md). | Indicates that a resource contains sensitive data. | MDC Sensitive data discovery:<br /><br />Azure Storage Account, Azure Storage Account Container, AWS S3 bucket, Azure SQL Server (preview), Azure SQL Database (preview), RDS Instance (preview), RDS Instance Database (preview), RDS Cluster (preview)<br /><br />Purview Sensitive data discovery (preview):<br /><br />Azure Storage Account, Azure Storage Account Container, AWS S3 bucket, Azure SQL Server, Azure SQL Database, Azure Data Lake Storage Gen2, Azure Database for PostgreSQL, Azure Database for MySQL, Azure Synapse Analytics, Azure Cosmos DB accounts, GCP cloud storage bucket |
| Moves data to (Preview) | Indicates that a resource transfers its data to another resource | Storage account container, AWS S3, AWS RDS instance, AWS RDS cluster |
| Gets data from (Preview) | Indicates that a resource gets its data from another resource | Storage account container, AWS S3, AWS RDS instance, AWS RDS cluster |
-| Has tags | Lists the resource tags of the cloud resource | All Azure and AWS resources |
+| Has tags | Lists the resource tags of the cloud resource | All Azure, AWS, and GCP resources |
| Installed software | Lists all software installed on the machine. This insight is applicable only for VMs that have threat and vulnerability management integration with Defender for Cloud enabled and are connected to Defender for Cloud. | Azure virtual machine, AWS EC2 |
-| Allows public access | Indicates that a public read access is allowed to the resource with no authorization required. [Learn more](concept-data-security-posture-prepare.md#exposed-to-the-internetallows-public-access) | Azure storage account, AWS S3 bucket, GitHub repository |
+| Allows public access | Indicates that a public read access is allowed to the resource with no authorization required. [Learn more](concept-data-security-posture-prepare.md#exposed-to-the-internetallows-public-access) | Azure storage account, AWS S3 bucket, GitHub repository, GCP cloud storage bucket |
| Doesn't have MFA enabled | Indicates that the user account does not have a multi-factor authentication solution enabled | Azure AD User account, IAM user |
| Is external user | Indicates that the user account is outside the organization's domain | Azure AD User account |
| Is managed | Indicates that an identity is managed by the cloud provider | Azure Managed Identity |
This section lists all of the cloud security graph components (connections and i
| DEASM findings | Microsoft Defender External Attack Surface Management (DEASM) internet scanning findings | Public IP |
| Privileged container | Indicates that a Kubernetes container runs in a privileged mode | Kubernetes container |
| Uses host network | Indicates that a Kubernetes pod uses the network namespace of its host machine | Kubernetes pod |
-| Has high severity vulnerabilities | Indicates that a resource has high severity vulnerabilities | Azure VM, AWS EC2, Container image |
-| Vulnerable to remote code execution | Indicates that a resource has vulnerabilities allowing remote code execution | Azure VM, AWS EC2, Container image |
+| Has high severity vulnerabilities | Indicates that a resource has high severity vulnerabilities | Azure VM, AWS EC2, Container image, GCP VM instance |
+| Vulnerable to remote code execution | Indicates that a resource has vulnerabilities allowing remote code execution | Azure VM, AWS EC2, Container image, GCP VM instance |
| Public IP metadata | Lists the metadata of a public IP | Public IP |
| Identity metadata | Lists the metadata of an identity | Azure AD Identity |
This section lists all of the cloud security graph components (connections and i
|--|--|--|--|
| Can authenticate as | Indicates that an Azure resource can authenticate to an identity and use its privileges | Azure VM, Azure VMSS, Azure Storage Account, Azure App Services, SQL Servers | Azure AD managed identity |
| Has permission to | Indicates that an identity has permissions to a resource or a group of resources | Azure AD user account, Managed Identity, IAM user, EC2 instance | All Azure & AWS resources|
-| Contains | Indicates that the source entity contains the target entity | Azure subscription, Azure resource group, AWS account, Kubernetes namespace, Kubernetes pod, Kubernetes cluster, GitHub owner, Azure DevOps project, Azure DevOps organization, Azure SQL server | All Azure & AWS resources, All Kubernetes entities, All DevOps entities, Azure SQL database |
-| Routes traffic to | Indicates that the source entity can route network traffic to the target entity | Public IP, Load Balancer, VNET, Subnet, VPC, Internet Gateway, Kubernetes service, Kubernetes pod| Azure VM, Azure VMSS, AWS EC2, Subnet, Load Balancer, Internet gateway, Kubernetes pod, Kubernetes service |
+| Contains | Indicates that the source entity contains the target entity | Azure subscription, Azure resource group, AWS account, Kubernetes namespace, Kubernetes pod, Kubernetes cluster, GitHub owner, Azure DevOps project, Azure DevOps organization, Azure SQL server, RDS Cluster, RDS Instance, GCP project, GCP Folder, GCP Organization | All Azure, AWS, and GCP resources, All Kubernetes entities, All DevOps entities, Azure SQL database, RDS Instance, RDS Instance Database |
+| Routes traffic to | Indicates that the source entity can route network traffic to the target entity | Public IP, Load Balancer, VNET, Subnet, VPC, Internet Gateway, Kubernetes service, Kubernetes pod| Azure VM, Azure VMSS, AWS EC2, Subnet, Load Balancer, Internet gateway, Kubernetes pod, Kubernetes service, GCP VM instance, GCP instance group |
| Is running | Indicates that the source entity is running the target entity as a process | Azure VM, EC2, Kubernetes container | SQL, Arc-Enabled SQL, Hosted MongoDB, Hosted MySQL, Hosted Oracle, Hosted PostgreSQL, Hosted SQL Server, Container image, Kubernetes pod |
| Member of | Indicates that the source identity is a member of the target identities group | Azure AD group, Azure AD user | Azure AD group |
| Maintains | Indicates that the source Kubernetes entity manages the life cycle of the target Kubernetes entity | Kubernetes workload controller, Kubernetes replica set, Kubernetes stateful set, Kubernetes daemon set, Kubernetes jobs, Kubernetes cron job | Kubernetes pod |
defender-for-cloud Auto Deploy Azure Monitoring Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/auto-deploy-azure-monitoring-agent.md
To make sure that your server resources are secure, Microsoft Defender for Cloud uses agents installed on your servers to send information about your servers to Microsoft Defender for Cloud for analysis. You can quietly deploy the Azure Monitor Agent on your servers when you enable Defender for Servers.
+> [!NOTE]
+> As part of the Defender for Cloud updated strategy, Azure Monitor Agent will no longer be required for the Defender for Servers offering. However, it will still be required for Defender for SQL Server on machines. As a result, the autoprovisioning process for both agents will be adjusted accordingly. For more information about this change, see [this announcement](upcoming-changes.md#defender-for-cloud-plan-and-strategy-for-the-log-analytics-agent-deprecation).
+
In this article, we're going to show you how to deploy the agent so that you can protect your servers.

## Availability
Before you deploy AMA with Defender for Cloud, you must have the following prere
- Make sure your multicloud and on-premises machines have Azure Arc installed.
  - AWS and GCP machines
- - [Onboard your AWS connector](quickstart-onboard-aws.md) and auto provision Azure Arc.
- - [Onboard your GCP connector](quickstart-onboard-gcp.md) and auto provision Azure Arc.
+ - [Onboard your AWS connector](quickstart-onboard-aws.md) and autoprovision Azure Arc.
+ - [Onboard your GCP connector](quickstart-onboard-gcp.md) and autoprovision Azure Arc.
  - On-premises machines
    - [Install Azure Arc](../azure-arc/servers/learn/quick-enable-hybrid-vm.md).
- Make sure the Defender plans that you want the Azure Monitor Agent to support are enabled:
defender-for-cloud Auto Deploy Vulnerability Assessment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/auto-deploy-vulnerability-assessment.md
To assess your machines for vulnerabilities, you can use one of the following so
Learn more in [View and remediate findings from vulnerability assessment solutions on your machines](remediate-vulnerability-findings-vm.md).
-
## Next steps
+
> [!div class="nextstepaction"]
> [Remediate the discovered vulnerabilities](remediate-vulnerability-findings-vm.md)
defender-for-cloud Azure Devops Extension https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/azure-devops-extension.md
Microsoft Security DevOps uses the following open-source tools:
| [Terrascan](https://github.com/accurics/terrascan) | Terraform (HCL2), Kubernetes (JSON/YAML), Helm v3, Kustomize, Dockerfiles, Cloud Formation | [Apache License 2.0](https://github.com/accurics/terrascan/blob/master/LICENSE) |
| [Trivy](https://github.com/aquasecurity/trivy) | container images, file systems, git repositories | [Apache License 2.0](https://github.com/aquasecurity/trivy/blob/main/LICENSE) |
-## Prerequisites
+## Prerequisites
-- Admin privileges to the Azure DevOps organization are required to install the extension.
+- Admin privileges to the Azure DevOps organization are required to install the extension.
If you don't have access to install the extension, you must request access from your Azure DevOps organization's administrator during the installation process.
If you don't have access to install the extension, you must request access from
:::image type="content" source="media/msdo-azure-devops-extension/repo-git.png" alt-text="Screenshot that shows you where to navigate to, to select Azure repo git.":::
-1. Select the relevant repository.
+1. Select the relevant repository.
:::image type="content" source="media/msdo-azure-devops-extension/repository.png" alt-text="Screenshot showing where to select your repository.":::
-5. Select **Starter pipeline**.
+1. Select **Starter pipeline**.
:::image type="content" source="media/msdo-azure-devops-extension/starter-piepline.png" alt-text="Screenshot showing where to select starter pipeline.":::
-1. Paste the following YAML into the pipeline:
+1. Paste the following YAML into the pipeline:
```yml
# Starter pipeline
# Reconstructed sketch: the trigger and pool values below are the common
# starter-pipeline defaults and are assumptions; the displayName is attested
# by this page, and the task name is the extension's published task -
# verify both against your installed extension version.
trigger:
- main

pool:
  vmImage: 'windows-latest'

steps:
- task: MicrosoftSecurityDevOps@1
  displayName: 'Microsoft Security DevOps'
```
-9. To commit the pipeline, select **Save and run**.
+1. To commit the pipeline, select **Save and run**.
The pipeline will run for a few minutes and save the results.
defender-for-cloud Concept Agentless Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/concept-agentless-containers.md
All of these capabilities are available as part of the [Defender CSPM](concept-c
## Agentless discovery and visibility within Kubernetes components
-Agentless discovery for Kubernetes provides API-based discovery of information about Kubernetes cluster architecture, workload objects, and setup.
+Agentless discovery for Kubernetes provides API-based discovery of information about Kubernetes cluster architecture, workload objects, and setup. For more information, see [Agentless discovery for Kubernetes](defender-for-containers-introduction.md#agentless-discovery-for-kubernetes).
-### How does agentless discovery for Kubernetes work?
-
-The discovery process is based on snapshots taken at intervals:
-
-When you enable the agentless discovery for Kubernetes extension, the following process occurs:
-
-- **Create**: Defender for Cloud creates an identity in customer environments called CloudPosture/securityOperator/DefenderCSPMSecurityOperator.
-- **Assign**: Defender for Cloud assigns a built-in role called **Kubernetes Agentless Operator** to that identity on subscription scope. The role contains the following permissions:
-  - AKS read (Microsoft.ContainerService/managedClusters/read)
-  - AKS Trusted Access with the following permissions:
-    - Microsoft.ContainerService/managedClusters/trustedAccessRoleBindings/write
-    - Microsoft.ContainerService/managedClusters/trustedAccessRoleBindings/read
-    - Microsoft.ContainerService/managedClusters/trustedAccessRoleBindings/delete
-
-  Learn more about [AKS Trusted Access](/azure/aks/trusted-access-feature).
-
-- **Discover**: Using the system assigned identity, Defender for Cloud performs a discovery of the AKS clusters in your environment using API calls to the API server of AKS.
-- **Bind**: Upon discovery of an AKS cluster, Defender for Cloud performs an AKS bind operation between the created identity and the Kubernetes role "Microsoft.Security/pricings/microsoft-defender-operator". The role is visible via API and gives Defender for Cloud data plane read permission inside the cluster.
-
### What's the refresh interval?

Agentless information in Defender CSPM is updated through a snapshot mechanism. It can take up to **24 hours** to see results in attack paths and the cloud security explorer.
defender-for-cloud Concept Agentless Data Collection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/concept-agentless-data-collection.md
Previously updated : 06/29/2023 Last updated : 08/15/2023
Agentless scanning for VMs provides vulnerability assessment and software invent
|Release state:| GA |
|Pricing:|Requires either [Defender Cloud Security Posture Management (CSPM)](concept-cloud-security-posture-management.md) or [Microsoft Defender for Servers Plan 2](plan-defender-for-servers-select-plan.md#plan-features)|
| Supported use cases:| :::image type="icon" source="./media/icons/yes-icon.png"::: Vulnerability assessment (powered by Defender Vulnerability Management)<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Software inventory (powered by Defender Vulnerability Management)<br />:::image type="icon" source="./media/icons/yes-icon.png"::: Secret scanning (Preview) |
-| Clouds: | :::image type="icon" source="./media/icons/yes-icon.png"::: Azure Commercial clouds<br> :::image type="icon" source="./media/icons/no-icon.png"::: Azure Government<br>:::image type="icon" source="./media/icons/no-icon.png"::: Microsoft Azure operated by 21Vianet<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Connected AWS accounts<br>:::image type="icon" source="./media/icons/no-icon.png"::: Connected GCP accounts |
+| Clouds: | :::image type="icon" source="./media/icons/yes-icon.png"::: Azure Commercial clouds<br> :::image type="icon" source="./media/icons/no-icon.png"::: Azure Government<br>:::image type="icon" source="./media/icons/no-icon.png"::: Microsoft Azure operated by 21Vianet<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Connected AWS accounts<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Connected GCP projects |
| Operating systems: | :::image type="icon" source="./media/icons/yes-icon.png"::: Windows<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Linux |
-| Instance and disk types: | **Azure**<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Standard VMs<br>:::image type="icon" source="./media/icons/no-icon.png"::: Unmanaged disks<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Virtual machine scale set - Flex<br>:::image type="icon" source="./media/icons/no-icon.png"::: Virtual machine scale set - Uniform<br><br>**AWS**<br>:::image type="icon" source="./media/icons/yes-icon.png"::: EC2<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Auto Scale instances<br>:::image type="icon" source="./media/icons/no-icon.png"::: Instances with a ProductCode (Paid AMIs) |
-| Encryption: | **Azure**<br>:::image type="icon" source="./medi) with platform-managed keys (PMK)<br>:::image type="icon" source="./media/icons/no-icon.png"::: Encrypted – other scenarios using platform-managed keys (PMK)<br>:::image type="icon" source="./media/icons/no-icon.png"::: Encrypted – customer-managed keys (CMK)<br><br>**AWS**<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Unencrypted<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Encrypted - PMK<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Encrypted - CMK |
+| Instance and disk types: | **Azure**<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Standard VMs<br>:::image type="icon" source="./media/icons/no-icon.png"::: Unmanaged disks<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Virtual machine scale set - Flex<br>:::image type="icon" source="./media/icons/no-icon.png"::: Virtual machine scale set - Uniform<br><br>**AWS**<br>:::image type="icon" source="./media/icons/yes-icon.png"::: EC2<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Auto Scale instances<br>:::image type="icon" source="./media/icons/no-icon.png"::: Instances with a ProductCode (Paid AMIs)<br><br>**GCP**<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Compute instances<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Instance groups (managed and unmanaged) |
+| Encryption: | **Azure**<br>:::image type="icon" source="./medi) with platform-managed keys (PMK)<br>:::image type="icon" source="./media/icons/no-icon.png"::: Encrypted – other scenarios using platform-managed keys (PMK)<br>:::image type="icon" source="./media/icons/no-icon.png"::: Encrypted – customer-managed keys (CMK)<br><br>**AWS**<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Unencrypted<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Encrypted - PMK<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Encrypted - CMK<br><br>**GCP**<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Google-managed encryption key<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Customer-managed encryption key (CMEK)<br>:::image type="icon" source="./media/icons/no-icon.png"::: Customer-supplied encryption key (CSEK) |
## How agentless scanning for VMs works
defender-for-cloud Concept Attack Path https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/concept-attack-path.md
Last updated 05/07/2023
# Identify and analyze risks across your environment
-<iframe src="https://aka.ms/docs/player?id=36a5c440-00e6-4bd8-be1f-a27fbd007119" width="1080" height="530" allowFullScreen="true" frameBorder="0"></iframe>
+> [!VIDEO https://aka.ms/docs/player?id=36a5c440-00e6-4bd8-be1f-a27fbd007119]
One of the biggest challenges that security teams face today is the number of security issues they face on a daily basis. There are numerous security issues that need to be resolved and never enough resources to address them all.
defender-for-cloud Concept Aws Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/concept-aws-connector.md
The architecture of the authentication process across clouds is as follows:
1. Microsoft Defender for Cloud CSPM service acquires an Azure AD token with a validity lifetime of 1 hour, signed by Azure AD using the RS256 algorithm.
-1. The Azure AD token is exchanged with AWS short living credentials and Defender for Cloud's CPSM service assumes the CSPM IAM role (assumed with web identity).
+1. The Azure AD token is exchanged for AWS short-lived credentials and Defender for Cloud's CSPM service assumes the CSPM IAM role (assumed with web identity).
1. Since the principal of the role is a federated identity as defined in a trust relationship policy, the AWS identity provider validates the Azure AD token against Azure AD through a process that includes:
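For orientation, the sketch below shows roughly what a trust relationship policy with a federated principal looks like, expressed as CloudFormation YAML. It's a hypothetical illustration, not the connector's real template: the tenant ID, audience value, and role name are placeholders.

```yml
# Hypothetical sketch - tenant ID, audience, and role name are placeholders.
Resources:
  DefenderCspmRole:
    Type: AWS::IAM::Role
    Properties:
      RoleName: DefenderForCloud-Cspm                  # illustrative name
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            # Federated principal: the Azure AD OIDC identity provider
            Principal:
              Federated: !Sub 'arn:aws:iam::${AWS::AccountId}:oidc-provider/sts.windows.net/11111111-2222-3333-4444-555555555555/'
            Action: 'sts:AssumeRoleWithWebIdentity'
            Condition:
              StringEquals:
                # The incoming Azure AD token must carry the expected audience
                'sts.windows.net/11111111-2222-3333-4444-555555555555/:aud': 'placeholder-audience'
```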
defender-for-cloud Concept Cloud Security Posture Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/concept-cloud-security-posture-management.md
Title: Overview of Cloud Security Posture Management (CSPM)
description: Learn more about the new Defender CSPM plan and the other enhanced security features that can be enabled for your multicloud environment through the Defender Cloud Security Posture Management (CSPM) plan. Previously updated : 06/20/2023 Last updated : 08/10/2023
# Cloud Security Posture Management (CSPM)
Microsoft Defender CSPM protects across all your multicloud workloads, but billi
>
> - The Microsoft Defender CSPM plan protects across multicloud workloads. With Defender CSPM generally available (GA), the plan will remain free until billing starts on August 1, 2023. Billing will apply for Servers, Database, and Storage resources. Billable workloads will be VMs, Storage accounts, OSS DBs, SQL PaaS, & SQL servers on machines.
>
-> - This price includes free vulnerability assessments for 20 unique images per charged resource, whereby the count will be based on the previous month's consumption. Every subsequent scan will be charged at $0.29 per image digest. The majority of customers are not expected to incur any additional image scan charges. For subscription that are both under the Defender CSPM and Defender for Containers plans, free vulnerability assessment will be calculated based on free image scans provided via the Defender for Containers plan, as specified [in the Microsoft Defender for Cloud pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/).
+> - This price includes free vulnerability assessments for 20 unique images per charged resource, whereby the count will be based on the previous month's consumption. Every subsequent scan will be charged at $0.29 per image digest. The majority of customers are not expected to incur any additional image scan charges. For subscriptions that are both under the Defender CSPM and Defender for Containers plans, free vulnerability assessment will be calculated based on free image scans provided via the Defender for Containers plan, as specified [in the Microsoft Defender for Cloud pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/).
## Plan availability
The following table summarizes each plan and its cloud availability.
| [Data exporting](export-to-siem.md) | :::image type="icon" source="./media/icons/yes-icon.png"::: | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS, GCP, on-premises |
| [Workflow automation](workflow-automation.md) | :::image type="icon" source="./media/icons/yes-icon.png"::: | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS, GCP, on-premises |
| Tools for remediation | :::image type="icon" source="./media/icons/yes-icon.png"::: | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS, GCP, on-premises |
-| Microsoft Cloud Security Benchmark | :::image type="icon" source="./media/icons/yes-icon.png"::: | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS |
+| Microsoft Cloud Security Benchmark | :::image type="icon" source="./media/icons/yes-icon.png"::: | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS, GCP |
| [Governance](governance-rules.md) | - | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS, GCP, on-premises |
| [Regulatory compliance](concept-regulatory-compliance.md) | - | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS, GCP, on-premises |
-| [Cloud security explorer](how-to-manage-cloud-security-explorer.md) | - | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS |
-| [Attack path analysis](how-to-manage-attack-path.md) | - | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS |
-| [Agentless scanning for machines](concept-agentless-data-collection.md) | - | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS |
+| [Cloud security explorer](how-to-manage-cloud-security-explorer.md) | - | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS, GCP |
+| [Attack path analysis](how-to-manage-attack-path.md) | - | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS, GCP |
+| [Agentless scanning for machines](concept-agentless-data-collection.md) | - | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS, GCP |
| [Agentless discovery for Kubernetes](concept-agentless-containers.md) | - | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure |
| [Container registries vulnerability assessment](concept-agentless-containers.md), including registry scanning | - | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure |
-| [Data aware security posture](concept-data-security-posture.md) | - | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS |
-| EASM insights in network exposure | - | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS |
+| [Data aware security posture](concept-data-security-posture.md) | - | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS, GCP |
+| EASM insights in network exposure | - | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS, GCP |
> [!NOTE]
> If you have enabled Defender for DevOps, you will only gain cloud security graph and attack path analysis for the artifacts that arrive through those connectors.
defender-for-cloud Concept Data Security Posture Prepare https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/concept-data-security-posture-prepare.md
Previously updated : 03/23/2023 Last updated : 09/05/2023
Sensitive data discovery is available in the Defender CSPM and Defender for Stor
- If you have existing plans running, the extension is available, but turned off by default.
- Existing plan status shows as "Partial" rather than "Full" if one or more extensions aren't turned on.
- The feature is turned on at the subscription level.
-
+- If sensitive data discovery is turned on, but Defender CSPM is not enabled, only storage resources will be scanned.
## What's supported
The table summarizes support for data-aware posture management.
|**Support** | **Details**|
| | |
-|What Azure data resources can I discover? | [Block blob](../storage/blobs/storage-blobs-introduction.md) storage accounts in Azure Storage v1/v2<br/><br/> Azure Data Lake Storage Gen2<br/><br/>Storage accounts behind private networks are supported.<br/><br/> Storage accounts encrypted with a customer-managed server-side key are supported.<br/><br/> Accounts aren't supported if any of these settings are enabled: [Public network access is disabled](../storage/common/storage-network-security.md#change-the-default-network-access-rule); Storage account is defined as [Azure DNS Zone](https://techcommunity.microsoft.com/t5/azure-storage-blog/public-preview-create-additional-5000-azure-storage-accounts/ba-p/3465466); The storage account endpoint has a [custom domain mapped to it](../storage/blobs/storage-custom-domain-name.md).|
-|What AWS data resources can I discover? | AWS S3 buckets<br/><br/> Defender for Cloud can discover KMS-encrypted data, but not data encrypted with a customer-managed key.|
-|What permissions do I need for discovery? | Storage account: Subscription Owner<br/> **or**<br/> Microsoft.Authorization/roleAssignments/* (read, write, delete) **and** Microsoft.Security/pricings/* (read, write, delete) **and** Microsoft.Security/pricings/SecurityOperators (read, write)<br/><br/> Amazon S3 buckets: AWS account permission to run Cloud Formation (to create a role).|
+|What Azure data resources can I discover? | **Object storage:**<br /><br />[Block blob](../storage/blobs/storage-blobs-introduction.md) storage accounts in Azure Storage v1/v2<br/><br/> Azure Data Lake Storage Gen2<br/><br/>Storage accounts behind private networks are supported.<br/><br/> Storage accounts encrypted with a customer-managed server-side key are supported.<br/><br/> Accounts aren't supported if any of these settings are enabled: [Public network access is disabled](../storage/common/storage-network-security.md#change-the-default-network-access-rule); Storage account is defined as [Azure DNS Zone](https://techcommunity.microsoft.com/t5/azure-storage-blog/public-preview-create-additional-5000-azure-storage-accounts/ba-p/3465466); The storage account endpoint has a [custom domain mapped to it](../storage/blobs/storage-custom-domain-name.md).<br /><br /><br />**Databases**<br /><br />Azure SQL Databases (Public preview) |
+|What AWS data resources can I discover? | **Object storage:**<br /><br />AWS S3 buckets<br/><br/> Defender for Cloud can discover KMS-encrypted data, but not data encrypted with a customer-managed key.<br /><br />**Databases**<br /><br />Any flavor of RDS instances (Public preview) |
+|What GCP data resources can I discover? | GCP storage buckets<br/> Standard Class<br/> Geo: region, dual region, multi region |
+|What permissions do I need for discovery? | Storage account: Subscription Owner<br/> **or**<br/> `Microsoft.Authorization/roleAssignments/*` (read, write, delete) **and** `Microsoft.Security/pricings/*` (read, write, delete) **and** `Microsoft.Security/pricings/SecurityOperators` (read, write)<br/><br/> Amazon S3 buckets and RDS instances: AWS account permission to run Cloud Formation (to create a role). <br/><br/>GCP storage buckets: Google account permission to run script (to create a role). |
|What file types are supported for sensitive data discovery? | Supported file types (you can't select a subset) - .doc, .docm, .docx, .dot, .gz, .odp, .ods, .odt, .pdf, .pot, .pps, .ppsx, .ppt, .pptm, .pptx, .xlc, .xls, .xlsb, .xlsm, .xlsx, .xlt, .csv, .json, .psv, .ssv, .tsv, .txt., xml, .parquet, .avro, .orc.|
-|What Azure regions are supported? | You can discover Azure storage accounts in:<br/><br/> Australia Central; Australia Central 2; Australia East; Australia Southeast; Brazil South; Canada Central; Canada East; Central India; Central US; East Asia; East US; East US 2; France Central; Germany West Central; Japan East; Japan West: Jio India West: North Central US; North Europe; Norway East; South Africa North: South Central US; South India; Sweden Central; Switzerland North; UAE North; UK South; UK West: West Central US; West Europe; West US, West US3.<br/><br/> Discovery is done locally in the region.|
-|What AWS regions are supported? | Asia Pacific (Mumbai); Asia Pacific (Singapore); Asia Pacific (Sydney); Asia Pacific (Tokyo); Canada (Central); Europe (Frankfurt); Europe (Ireland); Europe (London); Europe (Paris); South America (São Paulo); US East (Ohio); US East (N. Virginia); US West (N. California): US West (Oregon).<br/><br/> Discovery is done locally in the region.|
-|Do I need to install an agent? | No, discovery is agentless.|
-|What's the cost? | The feature is included with the Defender CSPM and Defender for Storage plans, and doesn't include other costs except for the respective plan costs.|
+|What Azure regions are supported? | You can discover Azure storage accounts in:<br/><br/> Australia Central; Australia Central 2; Australia East; Australia Southeast; Brazil South; Canada Central; Canada East; Central India; Central US; East Asia; East US; East US 2; France Central; Germany West Central; Japan East; Japan West; Jio India West; North Central US; North Europe; Norway East; South Africa North; South Central US; South India; Sweden Central; Switzerland North; UAE North; UK South; UK West; West Central US; West Europe; West US; West US 3.<br/><br/> You can discover Azure SQL Databases in any region where Defender CSPM and Azure SQL Databases are supported. |
+|What AWS regions are supported? | S3:<br /><br />Asia Pacific (Mumbai); Asia Pacific (Singapore); Asia Pacific (Sydney); Asia Pacific (Tokyo); Canada (Montreal); Europe (Frankfurt); Europe (Ireland); Europe (London); Europe (Paris); Europe (Stockholm); South America (São Paulo); US East (Ohio); US East (N. Virginia); US West (N. California); US West (Oregon).<br/><br/><br />RDS:<br /><br />Africa (Cape Town); Asia Pacific (Hong Kong); Asia Pacific (Hyderabad); Asia Pacific (Melbourne); Asia Pacific (Mumbai); Asia Pacific (Osaka); Asia Pacific (Seoul); Asia Pacific (Singapore); Asia Pacific (Sydney); Asia Pacific (Tokyo); Canada (Central); Europe (Frankfurt); Europe (Ireland); Europe (London); Europe (Paris); Europe (Stockholm); Europe (Zurich); Middle East (UAE); South America (São Paulo); US East (Ohio); US East (N. Virginia); US West (N. California); US West (Oregon).<br /><br /> Discovery is done locally within the region. |
+|What GCP regions are supported? | europe-west1, us-east1, us-west1, us-central1, us-east4, asia-south1, northamerica-northeast1|
+|Do I need to install an agent? | No, discovery requires no agent installation. |
+|What's the cost? | The feature is included with the Defender CSPM and Defender for Storage plans, and doesn't incur additional costs except for the respective plan costs. |
|What permissions do I need to view/edit data sensitivity settings? | You need one of these Azure Active Directory roles: Global Administrator, Compliance Administrator, Compliance Data Administrator, Security Administrator, Security Operator.|
+| What permissions do I need to perform onboarding? | You need one of these Azure Active Directory roles: Security Admin, Contributor, Owner on the subscription level (where the GCP projects reside). For consuming the security findings: Security Reader, Security Admin, Reader, Contributor, Owner on the subscription level (where the GCP projects reside). |
## Configuring data sensitivity settings

The main steps for configuring data sensitivity settings include:
+
- [Import custom sensitive info types/labels from Microsoft Purview compliance portal](data-sensitivity-settings.md#import-custom-sensitive-info-typeslabels)
- [Customize sensitive data categories/types](data-sensitivity-settings.md#customize-sensitive-data-categoriestypes)
- [Set the threshold for sensitivity labels](data-sensitivity-settings.md#set-the-threshold-for-sensitive-data-labels)
The main steps for configuring data sensitivity setting include:
Defender for Cloud starts discovering data immediately after enabling a plan, or after turning on the feature in plans that are already running.
+For object storage:
- It takes up to 24 hours to see the results for a first-time discovery.
- After files are updated in the discovered resources, data is refreshed within eight days.
- A new Azure storage account that's added to an already discovered subscription is discovered within 24 hours or less.
-- A new AWS S3 bucket that's added to an already discovered AWS account is discovered within 48 hours or less.
+- A new AWS S3 bucket or GCP storage bucket that's added to an already discovered AWS account or Google account is discovered within 48 hours or less.
+
+For databases:
+
+- Databases are scanned on a weekly basis.
+- For newly enabled subscriptions, results will appear within 24 hours.
### Discovering AWS S3 buckets
In order to protect AWS resources in Defender for Cloud, you set up an AWS conne
- To connect AWS accounts, you need Administrator permissions on the account.
- The role allows these permissions: S3 read only; KMS decrypt.
+### Discovering AWS RDS instances
+
+To protect AWS resources in Defender for Cloud, set up an AWS connector using a CloudFormation template to onboard the AWS account.
+
+- To discover AWS RDS instances, Defender for Cloud updates the CloudFormation template.
+- The CloudFormation template creates a new role in AWS IAM, to allow permission for the Defender for Cloud scanner to take the last available automated snapshot of your instance and bring it online in an isolated scanning environment within the same AWS region.
+- To connect AWS accounts, you need Administrator permissions on the account.
+- Automated snapshots need to be enabled on the relevant RDS Instances/Clusters.
+- The role allows these permissions (review the CloudFormation template for exact definitions; a simplified sketch follows this list):
+ - List all RDS DBs/clusters
+ - Copy all DB/cluster snapshots
+ - Delete/update DB/cluster snapshot with prefix *defenderfordatabases*
+ - List all KMS keys
+ - Use all KMS keys only for RDS on source account
+ - Full control on all KMS keys with tag prefix *DefenderForDatabases*
+ - Create alias for KMS keys
+
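To make the scope of these permissions concrete, here's a minimal, hypothetical CloudFormation sketch of how such a scanner role could be declared. The role name, trusted principal, external ID, and resource patterns are illustrative assumptions, and the KMS statements are omitted for brevity; the template generated during onboarding remains the authoritative definition.

```yml
# Hypothetical sketch only - the onboarding-generated template is authoritative.
# The principal account, external ID, and names below are placeholders.
Resources:
  DefenderForDatabasesScannerRole:
    Type: AWS::IAM::Role
    Properties:
      RoleName: DefenderForDatabasesScanner            # illustrative name
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Principal:
              AWS: 'arn:aws:iam::111122223333:root'    # placeholder scanner account
            Action: 'sts:AssumeRole'
            Condition:
              StringEquals:
                'sts:ExternalId': 'placeholder-external-id'
      Policies:
        - PolicyName: RdsSnapshotDiscovery
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              # List RDS DBs/clusters and copy their snapshots
              - Effect: Allow
                Action:
                  - 'rds:DescribeDBInstances'
                  - 'rds:DescribeDBClusters'
                  - 'rds:DescribeDBSnapshots'
                  - 'rds:DescribeDBClusterSnapshots'
                  - 'rds:CopyDBSnapshot'
                  - 'rds:CopyDBClusterSnapshot'
                Resource: '*'
              # Manage only snapshots the scanner itself created
              - Effect: Allow
                Action:
                  - 'rds:DeleteDBSnapshot'
                  - 'rds:ModifyDBSnapshotAttribute'
                Resource: 'arn:aws:rds:*:*:snapshot:defenderfordatabases*'
```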
+### Discovering GCP storage buckets
+
+In order to protect GCP resources in Defender for Cloud, you can set up a Google connector using a script template to onboard the GCP account.
+
+- To discover GCP storage buckets, Defender for Cloud updates the script template.
+- The script template creates a new role in the Google account to allow permission for the Defender for Cloud scanner to access data in the GCP storage buckets.
+- To connect Google accounts, you need Administrator permissions on the account.
+
## Exposed to the internet/allows public access

Defender CSPM attack paths and cloud security graph insights include information about storage resources that are exposed to the internet and allow public access. The following table provides more details.
-**State** | **Azure storage accounts** | **AWS S3 Buckets**
- | |
-**Exposed to the internet** | An Azure storage account is considered exposed to the internet if either of these settings enabled:<br/><br/> Storage_account_name > **Networking** > **Public network access** > **Enabled from all networks**<br/><br/> or<br/><br/> Storage_account_name > **Networking** > **Public network access** > **Enable from selected virtual networks and IP addresses**. | An AWS S3 bucket is considered exposed to the internet if the AWS account/AWS S3 bucket policies don't have a condition set for IP addresses.
-**Allows public access** | An Azure storage account container is considered as allowing public access if these settings are enabled on the storage account:<br/><br/> Storage_account_name > **Configuration** > **Allow blob public access** > **Enabled**.<br/><br/>and **either** of these settings:<br/><br/> Storage_account_name > **Containers** > container_name > **Public access level** set to **Blob (anonymous read access for blobs only)**<br/><br/> Or, storage_account_name > **Containers** > container_name > **Public access level** set to **Container (anonymous read access for containers and blobs)**. | An AWS S3 bucket is considered to allow public access if both the AWS account and the AWS S3 bucket have **Block all public access** set to **Off**, and **either** of these settings is set:<br/><br/> In the policy, **RestrictPublicBuckets** isn't enabled, and the **Principal** setting is set to * and **Effect** is set to **Allow**.<br/><br/> Or, in the access control list, **IgnorePublicAcl** isn't enabled, and permission is allowed for **Everyone**, or for **Authenticated users**.
+**State** | **Azure storage accounts** | **AWS S3 Buckets** | **GCP Storage Buckets** |
+ | | |
+**Exposed to the internet** | An Azure storage account is considered exposed to the internet if either of these settings enabled:<br/><br/> Storage_account_name > **Networking** > **Public network access** > **Enabled from all networks**<br/><br/> or<br/><br/> Storage_account_name > **Networking** > **Public network access** > **Enable from selected virtual networks and IP addresses**. | An AWS S3 bucket is considered exposed to the internet if the AWS account/AWS S3 bucket policies don't have a condition set for IP addresses. | All GCP storage buckets are exposed to the internet by default. |
+**Allows public access** | An Azure storage account container is considered as allowing public access if these settings are enabled on the storage account:<br/><br/> Storage_account_name > **Configuration** > **Allow blob public access** > **Enabled**.<br/><br/>and **either** of these settings:<br/><br/> Storage_account_name > **Containers** > container_name > **Public access level** set to **Blob (anonymous read access for blobs only)**<br/><br/> Or, storage_account_name > **Containers** > container_name > **Public access level** set to **Container (anonymous read access for containers and blobs)**. | An AWS S3 bucket is considered to allow public access if both the AWS account and the AWS S3 bucket have **Block all public access** set to **Off**, and **either** of these settings is set:<br/><br/> In the policy, **RestrictPublicBuckets** isn't enabled, and the **Principal** setting is set to * and **Effect** is set to **Allow**.<br/><br/> Or, in the access control list, **IgnorePublicAcl** isn't enabled, and permission is allowed for **Everyone**, or for **Authenticated users**. | A GCP storage bucket is considered to allow public access if it has an IAM (Identity and Access Management) role that meets these criteria: <br/><br/> The role is granted to the principal **allUsers** or **allAuthenticatedUsers**. <br/><br/>The role has at least one storage permission that *isn't* **storage.buckets.create** or **storage.buckets.list**. Public access in GCP is called "Public to internet". |
+Database resources do not allow public access but can still be exposed to the internet.
-## Next steps
+Internet exposure insights are available for the following resources:
-[Enable](data-security-posture-enable.md) data-aware security posture.
+Azure:
+
+- Azure SQL server
+- Azure Cosmos DB
+- Azure SQL Managed Instance
+- Azure MySQL Single Server
+- Azure MySQL Flexible Server
+- Azure PostgreSQL Single Server
+- Azure PostgreSQL Flexible Server
+- Azure MariaDB Single Server
+- Synapse Workspace
+AWS:
+- RDS instance
+
+> [!NOTE]
+>
+> - Exposure rules that include 0.0.0.0/0 are considered "excessively exposed", meaning that they can be accessed from any public IP.
+> - Azure resources with the exposure rule "0.0.0.0" are accessible from any resource in Azure (regardless of tenant or subscription).
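To make the exposure rules concrete, here's a minimal, hypothetical CloudFormation sketch of an S3 bucket policy that conditions access on a source IP range. A bucket restricted this way has an IP condition and so isn't classified as exposed to the internet, whereas a rule admitting 0.0.0.0/0 would count as excessively exposed. The bucket name and CIDR are placeholders.

```yml
# Hypothetical sketch - bucket name and CIDR below are placeholders.
Resources:
  RestrictedBucketPolicy:
    Type: AWS::S3::BucketPolicy
    Properties:
      Bucket: example-internal-bucket                  # placeholder bucket
      PolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Sid: DenyAccessOutsideCorporateRange
            Effect: Deny
            Principal: '*'
            Action: 's3:*'
            Resource:
              - 'arn:aws:s3:::example-internal-bucket'
              - 'arn:aws:s3:::example-internal-bucket/*'
            Condition:
              # Requests from outside this CIDR are denied, so the bucket has
              # an IP condition and isn't treated as "exposed to the internet"
              NotIpAddress:
                'aws:SourceIp': '203.0.113.0/24'       # placeholder range
```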
+
+## Next steps
+
+[Enable](data-security-posture-enable.md) data-aware security posture.
defender-for-cloud Concept Data Security Posture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/concept-data-security-posture.md
Previously updated : 06/27/2023 Last updated : 09/05/2023
# About data-aware security posture

As digital transformation accelerates, organizations move data to the cloud at an exponential rate using multiple data stores such as object stores and managed/hosted databases. The dynamic and complex nature of the cloud has increased data threat surfaces and risks. This causes challenges for security teams around data visibility and protecting the cloud data estate.
-Data-aware security in Microsoft Defender for Cloud helps you to reduce data risk, and respond to data breaches. Using data-aware security posture you can:
+Data-aware security in Microsoft Defender for Cloud helps you to reduce risk to data, and respond to data breaches. Using data-aware security posture you can:
- Automatically discover sensitive data resources across multiple clouds.
- Evaluate data sensitivity, data exposure, and how data flows across the organization.
Data-aware security posture automatically and continuously discovers managed and
## Smart sampling
-Defender for Cloud uses smart sampling to discover a selected number of files in your cloud datastores. Smart sampling results discover evidence of sensitive data issues, while saving on discovery costs and time.
+Defender for Cloud uses smart sampling to discover a selected number of assets in your cloud data stores. Smart sampling results discover evidence of sensitive data issues, while saving on discovery costs and time.
## Data security in Defender CSPM
You can discover risk of data breaches by attack paths of internet-exposed VMs t
Cloud Security Explorer helps you identify security risks in your cloud environment by running graph-based queries on Cloud Security Graph (Defender for Cloud's context engine). You can prioritize your security team's concerns, while taking your organization's specific context and conventions into account.
-You can leverage Cloud Security Explorer query templates, or build your own queries, to find insights about misconfigured data resources that are publicly accessible and contain sensitive data, across multicloud environments. You can run queries to examine security issues, and to get environment context into your asset inventory, exposure to internet, access controls, data flows, and more. Review [cloud graph insights](attack-path-reference.md#cloud-security-graph-components-list).
+You can leverage Cloud Security Explorer query templates, or build your own queries, to find insights about misconfigured data resources that are publicly accessible and contain sensitive data, across multicloud environments. You can run queries to examine security issues, and to get environment context into your asset inventory, exposure to the internet, access controls, data flows, and more. Review [cloud graph insights](attack-path-reference.md#cloud-security-graph-components-list).
## Data security in Defender for Storage
Defender for Storage monitors Azure storage accounts with advanced threat detect
When early suspicious signs are detected, Defender for Storage generates security alerts, allowing security teams to quickly respond and mitigate.
-By applying sensitivity information types and Microsoft Purview sensitivity labels on storage resources, you can easily prioritize the alerts and recommendations that focus on sensitive data.
+By applying sensitivity information types and Microsoft Purview sensitivity labels on storage resources, you can easily prioritize the alerts and recommendations that focus on sensitive data.
[Learn more about sensitive data discovery](defender-for-storage-data-sensitivity.md) in Defender for Storage.
Data sensitivity settings define what's considered sensitive data in your organi
When discovering resources for data sensitivity, results are based on these settings.
-When you enable data-aware security capabilities with the sensitive data discovery component in the Defender CSPM or Defender for Storage plans, Defender for Cloud uses algorithms to identify storage resources that appear to contain sensitive data. Resources are labeled in accordance with data sensitivity settings.
+When you enable data-aware security capabilities with the sensitive data discovery component in the Defender CSPM or Defender for Storage plans, Defender for Cloud uses algorithms to identify data resources that appear to contain sensitive data. Resources are labeled in accordance with data sensitivity settings.
Changes in sensitivity settings take effect the next time that resources are discovered. - ## Next steps
-[Prepare and review requirements](concept-data-security-posture-prepare.md) for data-aware security posture management.
+- [Prepare and review requirements](concept-data-security-posture-prepare.md) for data-aware security posture management.
+
+- [Understanding data aware security posture - Defender for Cloud in the Field video](episode-thirty-one.md)
defender-for-cloud Data Security Posture Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/data-security-posture-enable.md
Previously updated : 04/13/2023 Last updated : 09/05/2023
Follow these steps to enable data-aware security posture. Don't forget to review
If Defender CSPM is already on, select **Settings** in the Monitoring coverage column of the Defender CSPM plan and make sure that the **Sensitive data discovery** component is set to **On** status.
+1. Once sensitive data discovery is turned **On** in Defender CSPM, support for additional resource types is added automatically as the range of supported resource types expands.
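To confirm the plan and extension state from the command line, you can query the Defender CSPM pricing configuration; this is a sketch, and the exact output shape may vary by CLI and API version.

```azurecli
# Show the Defender CSPM plan, including its extensions (such as sensitive data discovery).
az security pricing show --name CloudPosture --query "{tier:pricingTier, extensions:extensions}"
```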
+ ## Enable in Defender CSPM (AWS) ### Before you start -- Don't forget to: [review the requirements](concept-data-security-posture-prepare.md#discovering-aws-s3-buckets) for AWS discovery, and [required permissions](concept-data-security-posture-prepare.md#whats-supported).
+- Don't forget to [review the requirements](concept-data-security-posture-prepare.md#discovery) for AWS discovery, and the [required permissions](concept-data-security-posture-prepare.md#whats-supported).
- Check that there's no policy that blocks the connection to your Amazon S3 buckets.
+- For RDS instances: cross-account KMS encryption is supported, but additional policies on KMS access may prevent access.
### Enable for AWS resources
+#### S3 buckets and RDS instances
1. Enable data security posture as described above. 1. Proceed with the instructions to download the CloudFormation template and to run it in AWS.
-Automatic discovery of S3 buckets in the AWS account starts automatically. The Defender for Cloud scanner runs in your AWS account and connects to your S3 buckets.
+Automatic discovery of S3 buckets in the AWS account starts automatically.
+
+For S3 buckets, the Defender for Cloud scanner runs in your AWS account and connects to your S3 buckets.
+
+For RDS instances, discovery will be triggered once **Sensitive Data Discovery** is turned on. The scanner will take the latest automated snapshot for an instance, create a manual snapshot within the source account, and copy it to an isolated Microsoft-owned environment within the same region.
+
+The snapshot is used to create a live instance that is spun up, scanned and then immediately destroyed (together with the copied snapshot).
+
+Only scan findings are reported by the scanning platform.
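To see which snapshot the scanner would start from, you can list an instance's automated snapshots; a sketch assuming the AWS CLI is configured and the instance name is a placeholder.

```bash
# Show the most recent automated snapshot for an RDS instance (placeholder name).
aws rds describe-db-snapshots \
  --db-instance-identifier my-rds-instance \
  --snapshot-type automated \
  --query "sort_by(DBSnapshots, &SnapshotCreateTime)[-1].{Id:DBSnapshotIdentifier, Created:SnapshotCreateTime}"
```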
+
-### Check for blocking policies
+### Check for S3 blocking policies
If the enable process didn't work because of a blocked policy, check the following: - Make sure that the S3 bucket policy doesn't block the connection. In the AWS S3 bucket, select the **Permissions** tab > Bucket policy. Check the policy details to make sure the Microsoft Defender for Cloud scanner service running in the Microsoft account in AWS isn't blocked.-- Make sure that there's no SCP policy that blocks the connection to the S3 bucket. For
-example, your SCP policy might block read API calls to the AWS Region where your S3
-bucket is hosted.
-- Check that these required API calls are allowed by your SCP policy: AssumeRole,
-GetBucketLocation, GetObject, ListBucket, GetBucketPublicAccessBlock
-- Check that your SCP policy allows calls to the us-east-1 AWS Region, which is the default
-region for API calls.
+- Make sure that there's no SCP policy that blocks the connection to the S3 bucket. For example, your SCP policy might block read API calls to the AWS Region where your S3 bucket is hosted.
+- Check that these required API calls are allowed by your SCP policy: AssumeRole, GetBucketLocation, GetObject, ListBucket, GetBucketPublicAccessBlock.
+- Check that your SCP policy allows calls to the us-east-1 AWS Region, which is the default region for API calls.
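These checks can also be made from the command line; a minimal sketch, assuming the AWS CLI is configured and `my-bucket` is a placeholder name.

```bash
# Review the bucket policy for Deny statements that could block the scanner.
aws s3api get-bucket-policy --bucket my-bucket

# Check the bucket-level public access block settings.
aws s3api get-public-access-block --bucket my-bucket

# Confirm the bucket's region (SCPs sometimes restrict API calls by region).
aws s3api get-bucket-location --bucket my-bucket
```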
## Enable data-aware monitoring in Defender for Storage
-Sensitive data threat detection is enabled by default when the sensitive data discovery component is enabled in the Defender for Storage plan. [Learn more](defender-for-storage-data-sensitivity.md)
+Sensitive data threat detection is enabled by default when the sensitive data discovery component is enabled in the Defender for Storage plan. [Learn more](defender-for-storage-data-sensitivity.md).
+Only Azure Storage resources will be scanned if the Defender CSPM plan is turned off.
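To check which Defender for Storage plan is active on a subscription, a sketch follows; the output fields may vary by API version, and `DefenderForStorageV2` is the subplan value the new plan is expected to report.

```azurecli
# Show the Defender for Storage plan; subPlan indicates the new plan (for example, DefenderForStorageV2).
az security pricing show --name StorageAccounts --query "{tier:pricingTier, subPlan:subPlan}"
```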
## Next steps
-[Review the security risks in your data](data-security-review-risks.md)
+[Review the security risks in your data](data-security-review-risks.md)
defender-for-cloud Data Security Review Risks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/data-security-review-risks.md
Previously updated : 03/14/2023 Last updated : 09/05/2023 # Explore risks to sensitive data
After you [discover resources with sensitive data](data-security-posture-enable.
View predefined attack paths to discover data breach risks, and get remediation recommendations, as follows: - 1. In Defender for Cloud, open **Recommendations** > **Attack paths**. 1. In **Risk category filter**, select **Data exposure** or **Sensitive data exposure** to filter the data-related attack paths.
View predefined attack paths to discover data breach risks, and get remediation
Other examples of attack paths for sensitive data include: - "Internet exposed Azure Storage container with sensitive data is publicly accessible"
+- "Managed database with excessive internet exposure and sensitive data allows basic (local user/password) authentication"
- "VM has high severity vulnerabilities and read permission to a data store with sensitive data" - "Internet exposed AWS S3 Bucket with sensitive data is publicly accessible" - "Private AWS S3 bucket that replicates data to the internet is exposed and publicly accessible"
+- "RDS snapshot is publicly available to all AWS accounts"
[Review](attack-path-reference.md) a full list of attack paths. - ## Explore risks with Cloud Security Explorer Explore data risks and exposure in cloud security graph insights using a query template, or by defining a manual query.
When you open a predefined query it's populated automatically and can be tweaked
When sensitive data discovery is enabled in the Defender for Storage plan, you can prioritize and focus on alerts the alerts that affect resources with sensitive data. [Learn more](defender-for-storage-data-sensitivity.md) about monitoring data security alerts in Defender for Storage.
+For PaaS databases and S3 Buckets, findings are reported to Azure Resource Graph (ARG) allowing you to filter and sort by sensitivity labels and sensitive info types in Defender for Cloud Inventory, Alert and Recommendation blades.
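Because findings land in Azure Resource Graph, they can also be queried outside the portal; a generic sketch using the `resource-graph` CLI extension follows — the table and type shown are real, but the exact fields carrying sensitivity metadata are assumptions to adapt to your environment.

```azurecli
# Sample the security sub-assessments reported to Azure Resource Graph.
az graph query -q "securityresources | where type == 'microsoft.security/assessments/subassessments' | take 10"
```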
+
+## Export findings
+
+It's common for the security administrator, who reviews sensitive data findings in attack paths or the security explorer, to lack direct access to the data stores. Therefore, they'll need to share the findings with the data owners, who can then conduct further investigation.
+
+For that purpose, use the **Export** option within the **Contains sensitive data** insight.
++
+The CSV file produced will include:
+
+- **Sample name** – depending on the resource type, this can be a database column, file name, or container name.
+- **Sensitivity label** – the highest ranking label found on this resource (same value for all rows).
+- **Contained in** – sample full path (file path or column full name).
+- **Sensitive info types** – discovered info types per sample. If more than one info type was detected, a new row will be added for each info type. This is to allow an easier filtering experience.
+
+> [!NOTE]
+> **Download CSV report** in the Cloud Security Explorer page will export all insights retrieved by the query in raw format (JSON).
+ ## Next steps - Learn more about [attack paths](concept-attack-path.md).
defender-for-cloud Data Sensitivity Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/data-sensitivity-settings.md
description: Learn how to customize data sensitivity settings in Defender for Cl
Previously updated : 03/22/2023 Last updated : 09/05/2023 # Customize data sensitivity settings
Data sensitivity settings are used to identify and focus on managing the critica
- The sensitive info types and sensitivity labels that come from Microsoft Purview compliance portal and which you can select in Defender for Cloud. By default Defender for Cloud uses the [built-in sensitive information types](/microsoft-365/compliance/sensitive-information-type-learn-about) provided by Microsoft Purview compliance portal. Some of the info types and labels are enabled by default, and you can modify them as needed. - You can optionally allow the import of custom sensitive info types and allow the import of [sensitivity labels](/microsoft-365/compliance/sensitivity-labels) that you've defined in Microsoft Purview.-- If you import labels, you can set sensitivity thresholds that determine the minimum threshold sensitivity level for a label to be marked as sensitive in Defender for Cloud.
+- If you import labels, you can set sensitivity thresholds that determine the minimum threshold sensitivity level for a label to be marked as sensitive in Defender for Cloud.
This configuration helps you focus on your critical sensitive resources and improve the accuracy of the sensitivity insights.
Import as follows (Import only once):
1. In the consent notice message, select **Turn on** and then select **Yes** to share your custom info types and sensitivity labels with Defender for Cloud. > [!NOTE]
+>
> - Imported labels appear in Defender for Cloud in the order rank that's set in Microsoft Purview. > - The two sensitivity labels that are set to highest priority in Microsoft Purview are turned on by default in Defender for Cloud. - ## Customize sensitive data categories/types To customize data sensitivity settings that appear in Defender for Cloud, review the [prerequisites](concept-data-security-posture-prepare.md#configuring-data-sensitivity-settings), and then do the following.
To customize data sensitivity settings that appear in Defender for Cloud, review
## Set the threshold for sensitive data labels
- You can set a threshold to determine the minimum sensitivity level for a label to be marked as sensitive in Defender for Cloud.
+ You can set a threshold to determine the minimum sensitivity level for a label to be marked as sensitive in Defender for Cloud.
If you're using Microsoft Purview sensitivity labels, make sure that: -- the label scope is set to "Items"; under which you should configure [auto labeling for files and emails](/microsoft-365/compliance/apply-sensitivity-label-automatically#how-to-configure-auto-labeling-for-office-apps)-- labels must be [published](/microsoft-365/compliance/create-sensitivity-labels#publish-sensitivity-labels-by-creating-a-label-policy) with a label policy that is in effect.
+- the label scope is set to "Items"; under which you should configure [auto labeling for files and emails](/microsoft-365/compliance/apply-sensitivity-label-automatically#how-to-configure-auto-labeling-for-office-apps)
+- labels must be [published](/microsoft-365/compliance/create-sensitivity-labels#publish-sensitivity-labels-by-creating-a-label-policy) with a label policy that is in effect.
1. Sign in to the [Azure portal](https://portal.azure.com). 1. Navigate to **Microsoft Defender for Cloud** > **Environment settings**.
If you're using Microsoft Purview sensitivity labels, make sure that:
:::image type="content" source="./media/concept-data-security-posture/sensitivity-threshold.png" alt-text="Screenshot of the data sensitivity page, showing the sensitivity label threshold."::: > [!NOTE]
+>
> - When you turn on the threshold, you select a label with the lowest setting that should be considered sensitive in your organization. > - Any resources with this minimum label or higher are presumed to contain sensitive data. > - For example, if you select **Confidential** as minimum, then **Highly Confidential** is also considered sensitive. **General**, **Public**, and **Non-Business** aren't. > - You can't select a sub label in the threshold. However, you can see the sublabel as the affected label on resources in attack path/Cloud Security Explorer, if the parent label is part of the threshold (part of the sensitive labels selected).
+> - The same settings will apply to any supported resource (object storage and databases).
## Next steps
defender-for-cloud Defender For Cloud Glossary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-cloud-glossary.md
Advanced Persistent Threats See the [video: Understanding APTs](/events/teched-2
### **Arc-enabled Kubernetes**
-Azure Arc-enabled Kubernetes allows you to attach and configure Kubernetes clusters running anywhere. You can connect your clusters running on other public cloud providers or clusters running on your on-premises data center. See [What is Azure Arc-enabled Logic Apps? (Preview)](../logic-apps/azure-arc-enabled-logic-apps-overview.md).
+Azure Arc-enabled Kubernetes allows you to attach and configure Kubernetes clusters running anywhere. You can connect your clusters running on other public cloud providers or clusters running on your on-premises data center. See [What is Azure Arc-enabled Kubernetes](/azure/azure-arc/kubernetes/overview).
### **ARG**
To make sure that your server resources are secure, Microsoft Defender for Cloud
### Azure Policy for Kubernetes
-A pod that extends the open-source [Gatekeeper v3](https://github.com/open-policy-agent/gatekeeper) and registers as a web hook to Kubernetes admission control making it possible to apply at-scale enforcements, and safeguards on your clusters in a centralized, consistent manner. For more information, see [Protect your Kubernetes workloads](kubernetes-workload-protections.md) and [Understand Azure Policy for Kubernetes clusters](/azure/governance/policy/concepts/policy-for-kubernetes).
+A pod that extends the open-source [Gatekeeper v3](https://github.com/open-policy-agent/gatekeeper) and registers as a web hook to Kubernetes admission control making it possible to apply at-scale enforcements, and safeguards on your clusters in a centralized, consistent manner. It's deployed as an AKS add-on in AKS clusters and as an Arc extension in Arc enabled Kubernetes clusters. For more information, see [Protect your Kubernetes workloads](kubernetes-workload-protections.md) and [Understand Azure Policy for Kubernetes clusters](/azure/governance/policy/concepts/policy-for-kubernetes).
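A quick way to confirm the add-on is running on a cluster is to look for its pods; a sketch assuming kubectl access to the cluster — the namespaces match current documentation, but adjust if your distribution differs.

```bash
# The Azure Policy pod runs in kube-system; the Gatekeeper component runs in gatekeeper-system.
kubectl get pods -n kube-system | grep azure-policy
kubectl get pods -n gatekeeper-system
```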
## B
defender-for-cloud Defender For Containers Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-containers-architecture.md
Previously updated : 06/19/2022 Last updated : 09/06/2023 + # Defender for Containers architecture Defender for Containers is designed differently for each Kubernetes environment whether they're running in:
To learn more about implementation details such as supported operating systems,
### Architecture diagram of Defender for Cloud and AKS clusters<a name="jit-asc"></a>
-When Defender for Cloud protects a cluster hosted in Azure Kubernetes Service, the collection of audit log data is agentless and frictionless. These are the required components:
--- **Defender agent**: The DaemonSet that is deployed on each node, collects signals from hosts using [eBPF technology](https://ebpf.io/), and provides runtime protection. The agent is registered with a Log Analytics workspace, and used as a data pipeline. However, the audit log data isn't stored in the Log Analytics workspace. It is deployed under AKS Security profile in AKS clusters and as an Arc extension in Arc enabled Kubernetes clusters. The Defender agent is deployed as an AKS Security profile.
+When Defender for Cloud protects a cluster hosted in Azure Kubernetes Service, the collection of audit log data is agentless and collected automatically through Azure infrastructure with no additional cost or configuration considerations. These are the required components in order to receive the full protection offered by Microsoft Defender for Containers:
-- **Azure Policy for Kubernetes**: A pod that extends the open-source [Gatekeeper v3](https://github.com/open-policy-agent/gatekeeper) and registers as a web hook to Kubernetes admission control making it possible to apply at-scale enforcements, and safeguards on your clusters in a centralized, consistent manner. For more information, see [Protect your Kubernetes workloads](kubernetes-workload-protections.md) and [Understand Azure Policy for Kubernetes clusters](/azure/governance/policy/concepts/policy-for-kubernetes). The Azure Policy for Kubernetes pod is deployed as an AKS add-on.
+- **Defender agent**: The DaemonSet that is deployed on each node, collects signals from hosts using [eBPF technology](https://ebpf.io/), and provides runtime protection. The agent is registered with a Log Analytics workspace, and used as a data pipeline. However, the audit log data isn't stored in the Log Analytics workspace. The Defender agent is deployed as an AKS Security profile.
+- **Azure Policy for Kubernetes**: A pod that extends the open-source [Gatekeeper v3](https://github.com/open-policy-agent/gatekeeper) and registers as a web hook to Kubernetes admission control making it possible to apply at-scale enforcements, and safeguards on your clusters in a centralized, consistent manner. The Azure Policy for Kubernetes pod is deployed as an AKS add-on. For more information, see [Protect your Kubernetes workloads](kubernetes-workload-protections.md) and [Understand Azure Policy for Kubernetes clusters](/azure/governance/policy/concepts/policy-for-kubernetes).
:::image type="content" source="./media/defender-for-containers/architecture-aks-cluster.png" alt-text="Diagram of high-level architecture of the interaction between Microsoft Defender for Containers, Azure Kubernetes Service, and Azure Policy." lightbox="./media/defender-for-containers/architecture-aks-cluster.png":::
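To confirm that the Defender profile is enabled on an AKS cluster, a sketch follows; the resource group and cluster names are placeholders, and the output shape may vary by API version.

```azurecli
# Show the Defender settings in the cluster's security profile.
az aks show --resource-group <resource-group> --name <cluster-name> \
  --query securityProfile.defender
```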
When Defender for Cloud protects a cluster hosted in Azure Kubernetes Service, t
### Architecture diagram of Defender for Cloud and Arc-enabled Kubernetes clusters
-For all clusters hosted outside of Azure, [Azure Arc-enabled Kubernetes](../azure-arc/kubernetes/overview.md) is required to connect the clusters to Azure and provide Azure services such as Defender for Containers.
+These components are required in order to receive the full protection offered by Microsoft Defender for Containers:
-When a non-Azure container is connected to Azure with Arc, the [Arc extension](../azure-arc/kubernetes/extensions.md) collects Kubernetes audit logs data from all control plane nodes in the cluster. The extension sends the log data to the Microsoft Defender for Cloud backend in the cloud for further analysis. The extension is registered with a Log Analytics workspace used as a data pipeline, but the audit log data isn't stored in the Log Analytics workspace.
+- **[Azure Arc-enabled Kubernetes](/azure/azure-arc/kubernetes/overview)** - An agent based solution that connects your clusters to Azure. Azure then is capable of providing services such as Defender, and Policy as [Arc extensions](/azure/azure-arc/kubernetes/extensions). For more information, see [Azure Arc-enabled Kubernetes](../azure-arc/kubernetes/overview.md). The following two components are the required Arc extensions.
-Workload configuration information is collected by Azure Policy for Kubernetes. As explained in [this Azure Policy for Kubernetes page](../governance/policy/concepts/policy-for-kubernetes.md), the policy extends the open-source [Gatekeeper v3](https://github.com/open-policy-agent/gatekeeper) admission controller webhook for [Open Policy Agent](https://www.openpolicyagent.org/). Kubernetes admission controllers are plugins that enforce how your clusters are used. The add-on registers as a web hook to Kubernetes admission control and makes it possible to apply at-scale enforcements and safeguards on your clusters in a centralized, consistent manner.
+- **Defender agent**: The DaemonSet that is deployed on each node, collects host signals using [eBPF technology](https://ebpf.io/) and Kubernetes audit logs to provide runtime protection. The agent is registered with a Log Analytics workspace, and used as a data pipeline. However, the audit log data isn't stored in the Log Analytics workspace. The Defender agent is deployed as an Arc-enabled Kubernetes extension.
+
+- **Azure Policy for Kubernetes**: A pod that extends the open-source [Gatekeeper v3](https://github.com/open-policy-agent/gatekeeper) and registers as a web hook to Kubernetes admission control making it possible to apply at-scale enforcements, and safeguards on your clusters in a centralized, consistent manner. The Azure Policy for Kubernetes pod is deployed as an Arc-enabled Kubernetes extension. For more information, see [Protect your Kubernetes workloads](/azure/defender-for-cloud/kubernetes-workload-protections) and [Understand Azure Policy for Kubernetes clusters](/azure/governance/policy/concepts/policy-for-kubernetes).
> [!NOTE] > Defender for Containers support for Arc-enabled Kubernetes clusters is a preview feature.
Workload configuration information is collected by Azure Policy for Kubernetes.
### Architecture diagram of Defender for Cloud and EKS clusters
-These components are required in order to receive the full protection offered by Microsoft Defender for Containers:
+When Defender for Cloud protects a cluster hosted in Elastic Kubernetes Service, the collection of audit log data is agentless. These are the required components in order to receive the full protection offered by Microsoft Defender for Containers:
- **[Kubernetes audit logs](https://kubernetes.io/docs/tasks/debug-application-cluster/audit/)** – [AWS account’s CloudWatch](https://aws.amazon.com/cloudwatch/) enables, and collects audit log data through an agentless collector, and sends the collected information to the Microsoft Defender for Cloud backend for further analysis.--- **[Azure Arc-enabled Kubernetes](../azure-arc/kubernetes/overview.md)** - An agent based solution that connects your EKS clusters to Azure. Azure then is capable of providing services such as Defender, and Policy as [Arc extensions](../azure-arc/kubernetes/extensions.md).--- **Defender agent**: The DaemonSet that is deployed on each node, collects signals from hosts using [eBPF technology](https://ebpf.io/), and provides runtime protection. The agent is registered with a Log Analytics workspace, and used as a data pipeline. However, the audit log data isn't stored in the Log Analytics workspace. It is deployed under AKS Security profile in AKS clusters and as an Arc extension in Arc enabled Kubernetes clusters. The Defender agent is deployed as an Arc-enabled Kubernetes extension. The Azure Policy for Kubernetes pod is deployed as an Arc-enabled Kubernetes extension.--- **Azure Policy for Kubernetes**: A pod that extends the open-source [Gatekeeper v3](https://github.com/open-policy-agent/gatekeeper) and registers as a web hook to Kubernetes admission control making it possible to apply at-scale enforcements, and safeguards on your clusters in a centralized, consistent manner. For more information, see [Protect your Kubernetes workloads](kubernetes-workload-protections.md) and [Understand Azure Policy for Kubernetes clusters](/azure/governance/policy/concepts/policy-for-kubernetes).
+- **[Azure Arc-enabled Kubernetes](../azure-arc/kubernetes/overview.md)** - An agent based solution that connects your EKS clusters to Azure. Azure then is capable of providing services such as Defender, and Policy as [Arc extensions](../azure-arc/kubernetes/extensions.md). For more information, see [Azure Arc-enabled Kubernetes](../azure-arc/kubernetes/overview.md). The following two components are the required Arc extensions.
+- **Defender agent**: The DaemonSet that is deployed on each node, collects signals from hosts using [eBPF technology](https://ebpf.io/), and provides runtime protection. The agent is registered with a Log Analytics workspace, and used as a data pipeline. However, the audit log data isn't stored in the Log Analytics workspace. The Defender agent is deployed as an Arc-enabled Kubernetes extension.
+- **Azure Policy for Kubernetes**: A pod that extends the open-source [Gatekeeper v3](https://github.com/open-policy-agent/gatekeeper) and registers as a web hook to Kubernetes admission control making it possible to apply at-scale enforcements, and safeguards on your clusters in a centralized, consistent manner. The Azure Policy for Kubernetes pod is deployed as an Arc-enabled Kubernetes extension. For more information, see [Protect your Kubernetes workloads](kubernetes-workload-protections.md) and [Understand Azure Policy for Kubernetes clusters](/azure/governance/policy/concepts/policy-for-kubernetes).
> [!NOTE] > Defender for Containers support for AWS EKS clusters is a preview feature.
These components are required in order to receive the full protection offered by
### Architecture diagram of Defender for Cloud and GKE clusters<a name="jit-asc"></a>
-These components are required in order to receive the full protection offered by Microsoft Defender for Containers:
+When Defender for Cloud protects a cluster hosted in Google Kubernetes Engine, the collection of audit log data is agentless. These are the required components in order to receive the full protection offered by Microsoft Defender for Containers:
- **[Kubernetes audit logs](https://kubernetes.io/docs/tasks/debug-application-cluster/audit/)** – [GCP Cloud Logging](https://cloud.google.com/logging/) enables, and collects audit log data through an agentless collector, and sends the collected information to the Microsoft Defender for Cloud backend for further analysis. -- **[Azure Arc-enabled Kubernetes](../azure-arc/kubernetes/overview.md)** - An agent based solution that connects your GKE clusters to Azure. Azure then is capable of providing services such as Defender, and Policy as [Arc extensions](../azure-arc/kubernetes/extensions.md).--- **Defender agent**: The DaemonSet that is deployed on each node, collects signals from hosts using [eBPF technology](https://ebpf.io/), and provides runtime protection. The agent is registered with a Log Analytics workspace, and used as a data pipeline. However, the audit log data isn't stored in the Log Analytics workspace. It is deployed under AKS Security profile in AKS clusters and as an Arc extension in Arc enabled Kubernetes clusters. The Defender agent is deployed as an Arc-enabled Kubernetes extension. The Azure Policy for Kubernetes pod is deployed as an Arc-enabled Kubernetes extension.--- **Azure Policy for Kubernetes**: A pod that extends the open-source [Gatekeeper v3](https://github.com/open-policy-agent/gatekeeper) and registers as a web hook to Kubernetes admission control making it possible to apply at-scale enforcements, and safeguards on your clusters in a centralized, consistent manner. For more information, see [Protect your Kubernetes workloads](kubernetes-workload-protections.md) and [Understand Azure Policy for Kubernetes clusters](/azure/governance/policy/concepts/policy-for-kubernetes).
+- **[Azure Arc-enabled Kubernetes](../azure-arc/kubernetes/overview.md)** - An agent based solution that connects your GKE clusters to Azure. Azure then is capable of providing services such as Defender, and Policy as [Arc extensions](../azure-arc/kubernetes/extensions.md). For more information, see [Azure Arc-enabled Kubernetes](../azure-arc/kubernetes/overview.md). The following two components are the required Arc extensions.
+- **Defender agent**: The DaemonSet that is deployed on each node, collects signals from hosts using [eBPF technology](https://ebpf.io/), and provides runtime protection. The agent is registered with a Log Analytics workspace, and used as a data pipeline. However, the audit log data isn't stored in the Log Analytics workspace. The Defender agent is deployed as an Arc-enabled Kubernetes extension.
+- **Azure Policy for Kubernetes**: A pod that extends the open-source [Gatekeeper v3](https://github.com/open-policy-agent/gatekeeper) and registers as a web hook to Kubernetes admission control making it possible to apply at-scale enforcements, and safeguards on your clusters in a centralized, consistent manner. The Azure Policy for Kubernetes pod is deployed as an Arc-enabled Kubernetes extension. For more information, see [Protect your Kubernetes workloads](kubernetes-workload-protections.md) and [Understand Azure Policy for Kubernetes clusters](/azure/governance/policy/concepts/policy-for-kubernetes).
> [!NOTE] > Defender for Containers support for GCP GKE clusters is a preview feature.
These components are required in order to receive the full protection offered by
+## How does agentless discovery for Kubernetes work?
+
+The discovery process is based on snapshots taken at intervals:
++
+When you enable the agentless discovery for Kubernetes extension, the following process occurs:
+
+- **Create**:
+ - If the extension is enabled from Defender CSPM, Defender for Cloud creates an identity in customer environments called `CloudPosture/securityOperator/DefenderCSPMSecurityOperator`.
+ - If the extension is enabled from Defender for Containers, Defender for Cloud creates an identity in customer environments called `CloudPosture/securityOperator/DefenderForContainersSecurityOperator`.
+- **Assign**: Defender for Cloud assigns a built-in role called **Kubernetes Agentless Operator** to that identity on subscription scope. The role contains the following permissions:
+
+ - AKS read (Microsoft.ContainerService/managedClusters/read)
+ - AKS Trusted Access with the following permissions:
+ - Microsoft.ContainerService/managedClusters/trustedAccessRoleBindings/write
+ - Microsoft.ContainerService/managedClusters/trustedAccessRoleBindings/read
+ - Microsoft.ContainerService/managedClusters/trustedAccessRoleBindings/delete
+
+ Learn more about [AKS Trusted Access](/azure/aks/trusted-access-feature).
+
+- **Discover**: Using the system-assigned identity, Defender for Cloud performs a discovery of the AKS clusters in your environment using API calls to the API server of AKS.
+- **Bind**: Upon discovery of an AKS cluster, Defender for Cloud performs an AKS bind operation between the created identity and the Kubernetes role "Microsoft.Security/pricings/microsoft-defender-operator". The role is visible via API and gives Defender for Cloud data plane read permission inside the cluster.
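After enablement, these artifacts can be spot-checked from the command line; a sketch with placeholder names, assuming the Azure CLI — the role name comes from the list above, and the subscription, resource group, and cluster values are yours to fill in.

```azurecli
# Verify the role assignment created for the agentless discovery identity.
az role assignment list \
  --scope /subscriptions/<subscription-id> \
  --role "Kubernetes Agentless Operator" \
  --output table

# List the Trusted Access role bindings on a discovered AKS cluster.
az aks trustedaccess rolebinding list \
  --resource-group <resource-group> \
  --cluster-name <cluster-name>
```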
+ ## Next steps In this overview, you learned about the architecture of container security in Microsoft Defender for Cloud. To enable the plan, see:
defender-for-cloud Defender For Containers Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-containers-introduction.md
Previously updated : 07/25/2023 Last updated : 09/06/2023 # Overview of Microsoft Defender for Containers Microsoft Defender for Containers is the cloud-native solution to improve, monitor, and maintain the security of your clusters, containers, and their applications.
-Defender for Containers assists you with the three core aspects of container security:
+Defender for Containers assists you with four core aspects of container security:
- [**Environment hardening**](#hardening) - Defender for Containers protects your Kubernetes clusters whether they're running on Azure Kubernetes Service, Kubernetes on-premises/IaaS, or Amazon EKS. Defender for Containers continuously assesses clusters to provide visibility into misconfigurations and guidelines to help mitigate identified threats.
Defender for Containers assists you with the three core aspects of container sec
- [**Run-time threat protection for nodes and clusters**](#run-time-protection-for-kubernetes-nodes-and-clusters) - Threat protection for clusters and nodes generates security alerts for suspicious activities.
+- [**Agentless discovery for Kubernetes**](#agentless-discovery-for-kubernetes) - Provides tools that give you visibility into your data plane components, generates security insights based on your Kubernetes and environment configuration, and lets you hunt for risks.
+ You can learn more by watching this video from the Defender for Cloud in the Field video series: [Microsoft Defender for Containers](episode-three.md). ## Microsoft Defender for Containers plan availability
Defender for Containers also includes host-level threat detection with over 60 K
Defender for Cloud monitors the attack surface of multicloud Kubernetes deployments based on the [MITRE ATT&CK® matrix for Containers](https://www.microsoft.com/security/blog/2021/04/29/center-for-threat-informed-defense-teams-up-with-microsoft-partners-to-build-the-attck-for-containers-matrix/), a framework developed by the [Center for Threat-Informed Defense](https://mitre-engenuity.org/cybersecurity/center-for-threat-informed-defense/) in close partnership with Microsoft.
-## Learn More
+## Agentless discovery for Kubernetes
+
+Defender for Containers uses the [cloud security graph](concept-attack-path.md#what-is-cloud-security-graph) to collect information about your Kubernetes clusters in an agentless manner. This data can be queried via [Cloud Security Explorer](concept-attack-path.md#what-is-cloud-security-explorer) and used for:
+
+1. Kubernetes inventory: gain visibility into your Kubernetes clusters data plane components such as nodes, pods, and cron jobs.
+
+1. Security insights: predefined security situations relevant to Kubernetes components, such as "exposed to the internet". For more information, see [Security insights](attack-path-reference.md#insights).
+
+1. Risk hunting: querying various risk cases, correlating predefined or custom security scenarios across fine-grained Kubernetes properties as well as Defender for Containers security insights.
++
+## Learn more
Learn more about Defender for Containers in the following blogs:
defender-for-cloud Defender For Containers Vulnerability Assessment Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-containers-vulnerability-assessment-azure.md
Title: Vulnerability assessment for Azure powered by Qualys
description: Learn how to use Defender for Containers to scan images in your Azure Container Registry to find vulnerabilities. Previously updated : 07/30/2023 Last updated : 09/06/2023
In every subscription where this capability is enabled, all images stored in ACR
Container vulnerability assessment powered by Qualys has the following capabilities: -- **Scanning OS packages** - container vulnerability assessment can scan vulnerabilities in packages installed by the OS package manager in Linux. See the [full list of the supported OS and their versions](support-matrix-defender-for-containers.md#registries-and-images-support-for-akspowered-by-qualys).
+- **Scanning OS packages** - container vulnerability assessment can scan vulnerabilities in packages installed by the OS package manager in Linux. See the [full list of the supported OS and their versions](support-matrix-defender-for-containers.md#registries-and-images-support-for-azurepowered-by-qualys).
-- **Language specific packages** ΓÇô support for language specific packages and files, and their dependencies installed or copied without the OS package manager. See the [full list of supported languages](support-matrix-defender-for-containers.md#registries-and-images-support-for-akspowered-by-qualys).
+- **Language specific packages** ΓÇô support for language specific packages and files, and their dependencies installed or copied without the OS package manager. See the [full list of supported languages](support-matrix-defender-for-containers.md#registries-and-images-support-for-azurepowered-by-qualys).
- **Image scanning in Azure Private Link** - Azure container vulnerability assessment provides the ability to scan images in container registries that are accessible via Azure Private Links. This capability requires access to trusted services and authentication with the registry. Learn how to [allow access by trusted services](/azure/container-registry/allow-access-trusted-services).
When a finding matches the criteria you've defined in your disable rules, it doe
- Disable findings with severity below medium - Disable findings that are nonpatchable - Disable findings with CVSS score below 6.5-- Disable findings with specific text in the security check or category (for example, ΓÇ£RedHatΓÇ¥, ΓÇ£CentOS Security Update for sudoΓÇ¥)
+- Disable findings with specific text in the security check or category (for example: "RedHat" or "CentOS Security Update for sudo")
> [!IMPORTANT] > To create a rule, you need permissions to edit a policy in Azure Policy.
To create a rule:
Defender for Cloud gives its customers the ability to prioritize the remediation of vulnerabilities in images that are currently being used within their environment using the [Running container images should have vulnerability findings resolved-(powered by Qualys)](https://portal.azure.com/#view/Microsoft_Azure_Security_CloudNativeCompute/KubernetesRuntimeVisibilityRecommendationDetailsBlade/assessmentKey/41503391-efa5-47ee-9282-4eff6131462c/showSecurityCenterCommandBar~/false) recommendation.
-To provide findings for the recommendation, Defender for Cloud collects the inventory of your running containers that are collected by the Defender agent installed on your AKS clusters. Defender for Cloud correlates that inventory with the vulnerability assessment scan of images that are stored in ACR. The recommendation shows your running containers with the vulnerabilities associated with the images that are used by each container and provides vulnerability reports and remediation steps.
+To provide the findings for the recommendation, Defender for Cloud uses the inventory of your running containers that is collected by the [agentless discovery for Kubernetes](defender-for-containers-introduction.md#agentless-discovery-for-kubernetes) or the [Defender agent](tutorial-enable-containers-azure.md#deploy-the-defender-agent-in-azure). Defender for Cloud correlates that inventory with the vulnerability assessment scan of images that are stored in ACR. The recommendation shows your running containers with the vulnerabilities associated with the images that are used by each container and provides vulnerability reports and remediation steps.
+
+While the Defender agent provides pod inventory every hour, the [agentless discovery for Kubernetes](defender-for-containers-introduction.md#agentless-discovery-for-kubernetes) provides an update every six hours. If both extensions are enabled, the newest information is used.
:::image type="content" source="media/defender-for-containers-vulnerability-assessment-azure/view-running-containers-vulnerability.png" alt-text="Screenshot of recommendations showing your running containers with the vulnerabilities associated with the images used by each container." lightbox="media/defender-for-containers-vulnerability-assessment-azure/view-running-containers-vulnerability.png":::
defender-for-cloud Defender For Dns Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-dns-alerts.md
When you receive a security alert about suspicious and anomalous activities identified in DNS transactions, we recommend you investigate and respond to the alert as described below. Even if you're familiar with the application or user that triggered the alert, it's important to verify the situation surrounding every alert.
-## Step 1. Contact
+## Step 1: Contact
1. Contact the resource owner to determine whether the behavior was expected or intentional. 1. If the activity is expected, dismiss the alert. 1. If the activity is unexpected, treat the resource as potentially compromised and mitigate as described in the next step.
-## Step 2. Immediate mitigation
+## Step 2: Immediate mitigation
1. Isolate the resource from the network to prevent lateral movement. 1. Run a full antimalware scan on the resource, following any resulting remediation advice.
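One way to isolate an Azure VM quickly is to add a high-priority deny rule to its network security group; a sketch with placeholder names follows — adapt the direction and scope to your investigation.

```azurecli
# Block all outbound traffic from the subnet/NIC protected by this NSG.
az network nsg rule create \
  --resource-group <resource-group> \
  --nsg-name <vm-nsg> \
  --name quarantine-deny-all-outbound \
  --priority 100 \
  --direction Outbound \
  --access Deny \
  --protocol "*" \
  --source-address-prefixes "*" \
  --source-port-ranges "*" \
  --destination-address-prefixes "*" \
  --destination-port-ranges "*"
```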
defender-for-cloud Defender For Dns Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-dns-introduction.md
Microsoft Defender for DNS doesn't use any agents.
In this article, you learned about Microsoft Defender for DNS.
-To protect your DNS layer, enable Microsoft Defender for DNS for each of your subscriptions as described in [Enable enhanced protections](enable-enhanced-security.md).
-
-> [!div class="nextstepaction"]
-> [Enable enhanced protections](enable-enhanced-security.md)
- For related material, see the following article: -- Security alerts might be generated by Defender for Cloud or received from other security products. To export all of these alerts to Microsoft Sentinel, any third-party SIEM, or any other external tool, follow the instructions in [Exporting alerts to a SIEM](continuous-export.md).
+Security alerts might be generated by Defender for Cloud or received from other security products. To export all of these alerts to Microsoft Sentinel, any third-party SIEM, or any other external tool, follow the instructions in [Exporting alerts to a SIEM](continuous-export.md).
+
defender-for-cloud Defender For Key Vault Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-key-vault-introduction.md
Depending on the *type* of access that occurred, some fields might not be availa
> [!TIP] > Azure virtual machines are assigned Microsoft IPs. This means that an alert might contain a Microsoft IP even though it relates to activity performed from outside of Microsoft. So even if an alert has a Microsoft IP, you should still investigate as described on this page.
-### Step 1. Identify the source
+### Step 1: Identify the source
1. Verify whether the traffic originated from within your Azure tenant. If the key vault firewall is enabled, it's likely that you've provided access to the user or application that triggered this alert. 1. If you can't verify the source of the traffic, continue to [Step 2. Respond accordingly](#step-2-respond-accordingly).
Depending on the *type* of access that occurred, some fields might not be availa
> Microsoft Defender for Key Vault is designed to help identify suspicious activity caused by stolen credentials. **Don't** dismiss the alert simply because you recognize the user or application. Contact the owner of the application or the user and verify the activity was legitimate. You can create a suppression rule to eliminate noise if necessary. Learn more in [Suppress security alerts](alerts-suppression-rules.md).
-### Step 2. Respond accordingly
+### Step 2: Respond accordingly
If you don't recognize the user or application, or if you think the access shouldn't have been authorized: - If the traffic came from an unrecognized IP Address:
If you don't recognize the user or application, or if you think the access shoul
1. Contact your administrator. 1. Determine whether there's a need to reduce or revoke Azure Active Directory permissions.
-### Step 3. Measure the impact
+### Step 3: Measure the impact
When the event has been mitigated, investigate the secrets in your key vault that were affected: 1. Open the **Security** page on your Azure key vault and view the triggered alert. 1. Select the specific alert that was triggered and review the list of the secrets that were accessed and the timestamp. 1. Optionally, if you have key vault diagnostic logs enabled, review the previous operations for the corresponding caller IP, user principal, or object ID.
-### Step 4. Take action
+### Step 4: Take action
When you've compiled your list of the secrets, keys, and certificates that were accessed by the suspicious user or application, you should rotate those objects immediately. 1. Affected secrets should be disabled or deleted from your key vault.
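Disabling and rotating a compromised secret can be scripted; a sketch with placeholder names follows — your rotation process for keys and certificates will differ.

```azurecli
# Disable the compromised secret so it can no longer be read.
az keyvault secret set-attributes --vault-name <vault-name> --name <secret-name> --enabled false

# Create a new version of the secret with a rotated value.
az keyvault secret set --vault-name <vault-name> --name <secret-name> --value "<new-value>"
```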
defender-for-cloud Defender For Resource Manager Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-resource-manager-usage.md
# Respond to Microsoft Defender for Resource Manager alerts
-When you receive an alert from Microsoft Defender for Resource Manager, we recommend you investigate and respond to the alert as described below. Microsoft Defender for Resource Manager protects all connected resources, so even if you're familiar with the application or user that triggered the alert, it's important to verify the situation surrounding every alert.
+When you receive an alert from Microsoft Defender for Resource Manager, we recommend you investigate and respond to the alert as described below. Defender for Resource Manager protects all connected resources, so even if you're familiar with the application or user that triggered the alert, it's important to verify the situation surrounding every alert.
-## Step 1. Contact
+## Step 1: Contact
1. Contact the resource owner to determine whether the behavior was expected or intentional. 1. If the activity is expected, dismiss the alert. 1. If the activity is unexpected, treat the related user accounts, subscriptions, and virtual machines as compromised and mitigate as described in the following step.
-## Step 2. Investigate alerts from Microsoft Defender for Resource Manager
+## Step 2: Investigate alerts from Microsoft Defender for Resource Manager
-Security alerts from Microsoft Defender for Resource Manager are based on threats detected by monitoring Azure Resource Manager operations. Defender for Cloud uses internal log sources of Azure Resource Manager as well as Azure Activity log, a platform log in Azure that provides insight into subscription-level events.
+Security alerts from Defender for Resource Manager are based on threats detected by monitoring Azure Resource Manager operations. Defender for Cloud uses internal log sources of Azure Resource Manager as well as Azure Activity log, a platform log in Azure that provides insight into subscription-level events.
-Microsoft Defender for Resource Manager provides visibility into activity that comes from third party service providers that have delegated access as part of the resource manager alerts. For example, `Azure Resource Manager operation from suspicious proxy IP address - delegated access`.
+Defender for Resource Manager provides visibility into activity that comes from third party service providers that have delegated access as part of the resource manager alerts. For example, `Azure Resource Manager operation from suspicious proxy IP address - delegated access`.
`Delegated access` refers to access with [Azure Lighthouse](/azure/lighthouse/overview) or with [Delegated administration privileges](/partner-center/dap-faq).
Alerts that show `Delegated access` also include a customized description and re
Learn more about [Azure Activity log](../azure-monitor/essentials/activity-log.md).
-To investigate security alerts from Microsoft Defender for Resource
+To investigate security alerts from Defender for Resource
1. Open Azure Activity log.
To investigate security alerts from Microsoft Defender for Resource
> [!TIP] > For a better, richer investigation experience, stream your Azure activity logs to Microsoft Sentinel as described in [Connect data from Azure Activity log](../sentinel/data-connectors/azure-activity.md).
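If you prefer the command line to the portal, recent Activity log entries for the caller flagged in the alert can be pulled directly; the caller address below is a placeholder.

```azurecli
# List the last seven days of Activity log entries for a specific caller.
az monitor activity-log list --offset 7d --caller "user@contoso.com" --output table
```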
-## Step 3. Immediate mitigation
+## Step 3: Immediate mitigation
1. Remediate compromised user accounts: - If theyΓÇÖre unfamiliar, delete them as they may have been created by a threat actor
To investigate security alerts from Microsoft Defender for Resource
## Next steps
-This page explained the process of responding to an alert from Microsoft Defender for Resource Manager. For related information see the following pages:
+This page explained the process of responding to an alert from Defender for Resource Manager. For related information, see the following pages:
- [Overview of Microsoft Defender for Resource Manager](defender-for-resource-manager-introduction.md) - [Suppress security alerts](alerts-suppression-rules.md)
defender-for-cloud Defender For Sql Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-sql-usage.md
Previously updated : 06/29/2023 Last updated : 09/04/2023 # Enable Microsoft Defender for SQL servers on machines
Learn more about [vulnerability assessment for Azure SQL servers on machines](de
|-|:-| |Release state:|General availability (GA)| |Pricing:|**Microsoft Defender for SQL servers on machines** is billed as shown on the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/)|
-|Protected SQL versions:|SQL Server version: 2012 R2, 2014, 2016, 2017, 2019, 2022 <br>- [SQL on Azure virtual machines](/azure/azure-sql/virtual-machines/windows/sql-server-on-azure-vm-iaas-what-is-overview)<br>- [SQL Server on Azure Arc-enabled servers](/sql/sql-server/azure-arc/overview)<br>- On-premises SQL servers on Windows machines without Azure Arc<br>|
+|Protected SQL versions:|SQL Server version: 2012, 2014, 2016, 2017, 2019, 2022 <br>- [SQL on Azure virtual machines](/azure/azure-sql/virtual-machines/windows/sql-server-on-azure-vm-iaas-what-is-overview)<br>- [SQL Server on Azure Arc-enabled servers](/sql/sql-server/azure-arc/overview)<br>- On-premises SQL servers on Windows machines without Azure Arc<br>|
|Clouds:|:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Azure Government<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Microsoft Azure operated by 21Vianet **(Advanced Threat Protection Only)**| ## Set up Microsoft Defender for SQL servers on machines
defender-for-cloud Defender For Storage Classic Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-storage-classic-migrate.md
The classic plan will be deprecated in the future, and the deprecation will be a
## Migration scenarios
-Migrating from the classic Defender for Storage plan to the new Defender for Storage plan is a straightforward process, and there are several ways to do it. You'll need to proactively [enable the new plan](../storage/common/azure-defender-storage-configure.md) to access its enhanced capabilities and pricing.
+Migrating from the classic Defender for Storage plan to the new Defender for Storage plan is a straightforward process, and there are several ways to do it. You'll need to proactively [enable the new plan](/azure/defender-for-cloud/tutorial-enable-storage-plan) to access its enhanced capabilities and pricing.
>[!NOTE] > To enable the new plan, make sure to disable the old Defender for Storage policies. Look for and disable policies named "Configure Azure Defender for Storage to be enabled", "Azure Defender for Storage should be enabled", or "Configure Microsoft Defender for Storage to be enabled (per-storage account plan)". ### Migrating from the classic Defender for Storage plan enabled with per-transaction pricing
-If the classic Defender for Storage plan is enabled with per-transaction pricing, you can switch to the new plan at either the subscription or resource level. You can also [exclude specific storage accounts](../storage/common/azure-defender-storage-configure.md#override-defender-for-storage-subscription-level-settings) from protected subscriptions.
+If the classic Defender for Storage plan is enabled with per-transaction pricing, you can switch to the new plan at either the subscription or resource level. You can also [exclude specific storage accounts](/azure/defender-for-cloud/advanced-configurations-for-malware-scanning) from protected subscriptions.
Storage accounts that were previously excluded from protected subscriptions in the per-transaction plan will not remain excluded when you switch to the new plan. However, the exclusion tags will remain on the resource and can be removed. In most cases, storage accounts that were previously excluded from protected subscriptions will benefit the most from the new pricing plan. ### Migrating from the classic Defender for Storage plan enabled with per-storage account pricing
-If the classic Defender for Storage plan is enabled with per-storage account pricing, you can switch to the new plan at either the subscription or resource level. The pricing plan remains the same in the new Defender for Storage, except for extra charges for malware scanning, which are charged per GB scanned (free during preview).
+If the classic Defender for Storage plan is enabled with per-storage account pricing, you can switch to the new plan at either the subscription or resource level. The new Defender for Storage plan has the same pricing plan, with the exception of malware scanning, which may incur extra charges and is billed per GB scanned.
- You can also [exclude specific storage accounts](../storage/common/azure-defender-storage-configure.md#) from protected subscriptions.
+You can learn more about Defender for Storage's pricing model on the [Defender for Cloud pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/?v=17.23h).
+
+ You can also [exclude specific storage accounts](/azure/defender-for-cloud/advanced-configurations-for-malware-scanning) from protected subscriptions.
## Identify active Microsoft Defender for Storage pricing plans on your subscriptions
To help you better understand the differences between the classic plan and the n
| Category | New Defender for Storage plan | Classic (per-transaction plan) | Classic (per-storage account plan) | | | | | |
-| Pricing structure | Cost is based on the number of storage accounts you protect\*. Add-on costs for GB scanned for malware, if enabled (free during preview) | Cost is based on the number of transactions processed | Cost is based on the number of storage accounts you protect* |
+| Pricing structure | Cost is based on the number of storage accounts you protect\*. Add-on costs for GB scanned for malware, if enabled| Cost is based on the number of transactions processed | Cost is based on the number of storage accounts you protect* |
| Enablement options | Subscription and resource level | Subscription and resource level | Subscription only |
| Exclusion of storage accounts from protected subscriptions | Yes | Yes | No |
| Activity monitoring (security alerts) | Yes | Yes | Yes |
To help you better understand the differences between the classic plan and the n
The new plan offers a more comprehensive feature set designed to better protect your data. It also provides a more predictable pricing plan compared to the classic plan. We recommend you migrate to the new plan to take full advantage of its benefits.
-Learn more about how to [enable and configure Defender for Storage](../storage/common/azure-defender-storage-configure.md).
+Learn more about how to [enable and configure Defender for Storage](/azure/defender-for-cloud/tutorial-enable-storage-plan).
## Next steps
In this article, you learned about migrating to the new Microsoft Defender for S
> [!div class="nextstepaction"]
> [Enable Defender for Storage](enable-enhanced-security.md)
defender-for-cloud Defender For Storage Configure Malware Scan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-storage-configure-malware-scan.md
With Malware Scanning, you can build your automation response using the followin
- Blob index tags

> [!TIP]
-> We recommend you try the Ninja training instructions, a hands-on lab with detailed step-by-step instructions on how to try out and test malware scanning end-to-end with setting up responses to scanning results. This is part of the 'labs' project that helps customers get ramped up with Microsoft Defender for Cloud and provide hands-on practical experience with its capabilities.
+> We invite you to explore the Malware Scanning feature in Defender for Storage through our hands-on lab. Follow the [Ninja training](https://github.com/Azure/Microsoft-Defender-for-Cloud/blob/main/Labs/Modules/Module%2019%20-%20Defender%20for%20Storage.md) instructions for a detailed, step-by-step guide on how to set up and test Malware Scanning end-to-end, including configuring responses to scanning results. This is part of the 'labs' project that helps customers get ramped up with Microsoft Defender for Cloud and provides hands-on practical experience with its capabilities.
Here are some response options that you can use to automate your response:
The event message is a JSON object that contains key-value pairs that provide de
Here's an example of an event message:
-```
+
+```json
{ "id": "52d00da0-8f1a-4c3c-aa2c-24831967356b", "subject": "storageAccounts/<storage_account_name>/containers/app-logs-storage/blobs/EICAR - simulating malware.txt",
In this article, you learned about Microsoft Defender for Storage.
> [!div class="nextstepaction"]
> [Enable Defender for Storage](enable-enhanced-security.md)
defender-for-cloud Defender For Storage Infrastructure As Code Enablement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-storage-infrastructure-as-code-enablement.md
-
# Enable and configure with Infrastructure as Code templates

We recommend that you enable Defender for Storage on the subscription level. Doing so ensures all storage accounts in the subscription will be protected, including future ones.
We recommend that you enable Defender for Storage on the subscription level. Doi
## [Enable on a subscription](#tab/enable-subscription/)
+### Terraform template
+
+To enable and configure Microsoft Defender for Storage at the subscription level using Terraform, you can use the following code snippet:
+
+```terraform
+resource "azurerm_security_center_subscription_pricing" "DefenderForStorage" {
+ tier = "Standard"
+ resource_type = "StorageAccounts"
+ subplan = "DefenderForStorageV2"
+
+ extension {
+ name = "OnUploadMalwareScanning"
+ additional_extension_properties = {
+ CapGBPerMonthPerStorageAccount = "5000"
+ }
+ }
+
+ extension {
+ name = "SensitiveDataDiscovery"
+ }
+}
+```
+
+**Modifying the monthly cap for malware scanning**
+
+To modify the monthly cap for malware scanning per storage account, adjust the `CapGBPerMonthPerStorageAccount` parameter to your preferred value. This parameter sets a cap on the maximum data that can be scanned for malware each month per storage account. If you want to permit unlimited scanning, assign the value "-1". The default limit is set at 5,000 GB.
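As an illustration, here's a minimal sketch of the same resource with the cap removed (all values mirror the template above; only the cap differs):

```terraform
# Sketch: subscription-level plan with no monthly malware scanning cap.
resource "azurerm_security_center_subscription_pricing" "DefenderForStorage" {
  tier          = "Standard"
  resource_type = "StorageAccounts"
  subplan       = "DefenderForStorageV2"

  extension {
    name = "OnUploadMalwareScanning"
    additional_extension_properties = {
      CapGBPerMonthPerStorageAccount = "-1" # "-1" permits unlimited scanning
    }
  }

  extension {
    name = "SensitiveDataDiscovery"
  }
}
```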
+
+**Disabling features**
+
+If you want to turn off the on-upload malware scanning or sensitive data threat detection features, you can remove the corresponding extension block from the Terraform code.
+
+**Disabling the entire Defender for Storage plan**
+
+To disable the entire Defender for Storage plan, set the `tier` property value to **"Free"** and remove the `subplan` and `extension` properties.
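For example, a hedged sketch of the disabled state (derived from the enablement template above):

```terraform
# Sketch: Defender for Storage disabled at the subscription level.
# The subplan and extension blocks from the enablement template are removed.
resource "azurerm_security_center_subscription_pricing" "DefenderForStorage" {
  tier          = "Free"
  resource_type = "StorageAccounts"
}
```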
+
+Learn more about the `azurerm_security_center_subscription_pricing` resource by referring to the [azurerm_security_center_subscription_pricing documentation](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/security_center_subscription_pricing). Additionally, you can find comprehensive details on the Terraform provider for Azure in the [Terraform AzureRM Provider documentation](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs).
+
### Bicep template

To enable and configure Microsoft Defender for Storage at the subscription level using [Bicep](/azure/azure-resource-manager/bicep/overview?tabs=bicep), make sure your [target scope is set to subscription](/azure/azure-resource-manager/bicep/deploy-to-subscription?tabs=azure-cli#scope-to-subscription), and add the following to your Bicep template:
resource StorageAccounts 'Microsoft.Security/pricings@2023-01-01' = {
}
```
+**Modifying the monthly cap for malware scanning**
+ To modify the monthly cap for malware scanning per storage account, adjust the `CapGBPerMonthPerStorageAccount` parameter to your preferred value. This parameter sets a cap on the maximum data that can be scanned for malware each month per storage account. If you want to permit unlimited scanning, assign the value -1. The default limit is set at 5,000 GB.
-If you want to turn off the On-upload malware scanning or Sensitive data threat detection features, you can change the `isEnabled` value to **False** under Sensitive data discovery.
+**Disabling features**
-To disable the entire Defender for Storage plan, set the `pricingTier` property value to **Free** and remove the subPlan and extensions properties.
+If you want to turn off the On-upload malware scanning or Sensitive data threat detection features, you can change the `isEnabled` value to **False** under sensitive data discovery.
+
+**Disabling the entire Defender for Storage plan**
+
+To disable the entire Defender for Storage plan, set the `pricingTier` property value to **Free** and remove the `subPlan` and `extensions` properties.
Learn more about the [Bicep template in the Microsoft security/pricings documentation](/azure/templates/microsoft.security/pricings?pivots=deployment-language-bicep&source=docs).
To enable and configure Microsoft Defender for Storage at the subscription level
}
```
+**Modifying the monthly cap for malware scanning**
+ To modify the monthly threshold for malware scanning in your storage accounts, simply adjust the `CapGBPerMonthPerStorageAccount` parameter to your preferred value. This parameter sets a cap on the maximum data that can be scanned for malware each month, per storage account. If you want to permit unlimited scanning, assign the value -1. The default limit is set at 5,000 GB.
-If you want to turn off the on-upload malware scanning or Sensitive data threat detection features, you can change the `isEnabled` value to **False** under Sensitive data discovery.
+**Disabling features**
-To disable the entire Defender plan, set the `pricingTier` property value to **Free** and remove the subPlan and extensions properties.
+If you want to turn off the on-upload malware scanning or sensitive data threat detection features, you can change the `isEnabled` value to **False** under sensitive data discovery.
+
+**Disabling the entire Defender for Storage plan**
+
+To disable the entire Defender plan, set the `pricingTier` property value to **Free** and remove the `subPlan` and `extensions` properties.
Learn more about the ARM template in the Microsoft.Security/Pricings documentation.

## [Enable on a storage account](#tab/enable-storage-account/)
+### Terraform template - storage account
+
+To enable and configure Microsoft Defender for Storage at the storage account level using Terraform, import the [AzAPI provider](https://registry.terraform.io/providers/Azure/azapi/latest/docs) and use the following code snippet:
+
+```terraform
+resource "azurerm_storage_account" "example" { ... }
+
+resource "azapi_resource_action" "enable_defender_for_Storage" {
+ type = "Microsoft.Security/defenderForStorageSettings@2022-12-01-preview"
+ resource_id = "${azurerm_storage_account.example.id}/providers/Microsoft.Security/defenderForStorageSettings/current"
+ method = "PUT"
+
+ body = jsonencode({
+ properties = {
+ isEnabled = true
+ malwareScanning = {
+ onUpload = {
+ isEnabled = true
+ capGBPerMonth = 5000
+ }
+ }
+ sensitiveDataDiscovery = {
+ isEnabled = true
+ }
+ overrideSubscriptionLevelSettings = true
+ }
+ })
+}
+```
+
+> [!NOTE]
+> The `azapi_resource_action` used here is an action that is specific to the configuration of Microsoft Defender for Storage. It's different from the typical resource declarations in Terraform, and it's used to perform specific actions on the resource, such as enabling or disabling features.
+
+**Modifying the monthly cap for malware scanning**
+
+To modify the monthly threshold for malware scanning in your storage accounts, simply adjust the `capGBPerMonth` parameter to your preferred value. This parameter sets a cap on the maximum data that can be scanned for malware each month, per storage account. If you want to permit unlimited scanning, assign the value "-1". The default limit is set at 5,000 GB.
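For instance, a sketch of the same action body with the cap lifted (only the `capGBPerMonth` value differs from the snippet above; the resource label is an illustrative name):

```terraform
# Sketch: storage-account-level settings with an uncapped monthly scan volume.
# Assumes the azurerm_storage_account.example resource shown earlier.
resource "azapi_resource_action" "enable_defender_for_storage_uncapped" {
  type        = "Microsoft.Security/defenderForStorageSettings@2022-12-01-preview"
  resource_id = "${azurerm_storage_account.example.id}/providers/Microsoft.Security/defenderForStorageSettings/current"
  method      = "PUT"

  body = jsonencode({
    properties = {
      isEnabled = true
      malwareScanning = {
        onUpload = {
          isEnabled     = true
          capGBPerMonth = -1 # -1 permits unlimited scanning
        }
      }
      sensitiveDataDiscovery = {
        isEnabled = true
      }
      overrideSubscriptionLevelSettings = true
    }
  })
}
```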
+
+**Disabling features**
+
+If you want to turn off the on-upload malware scanning or sensitive data threat detection features, you can change the `isEnabled` value to **False** under the `malwareScanning` or `sensitiveDataDiscovery` properties sections.
+
+**Disabling the entire Defender for Storage plan**
+
+To disable the entire Defender for Storage plan for the storage account, you can use the following code snippet:
+
+```terraform
+resource "azurerm_storage_account" "example" { ... }
+
+resource "azapi_resource_action" "disable_defender_for_Storage" {
+ type = "Microsoft.Security/defenderForStorageSettings@2022-12-01-preview"
+ resource_id = "${azurerm_storage_account.example.id}/providers/Microsoft.Security/defenderForStorageSettings/current"
+ method = "PUT"
+
+ body = jsonencode({
+ properties = {
+ isEnabled = true
+ overrideSubscriptionLevelSettings = false
+ }
+ })
+}
+```
+
+You can change the value of `overrideSubscriptionLevelSettings` to **true** to disable the Defender for Storage plan for the storage account under subscriptions with Defender for Storage enabled at the subscription level. If you want to keep some features enabled, you can modify the properties accordingly.
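As an illustrative sketch of that variant (the resource label is hypothetical; the body shape follows the snippets above):

```terraform
# Sketch: explicitly turn the plan off for this account, overriding the
# subscription-level setting. Assumes azurerm_storage_account.example exists.
resource "azapi_resource_action" "disable_defender_for_storage_override" {
  type        = "Microsoft.Security/defenderForStorageSettings@2022-12-01-preview"
  resource_id = "${azurerm_storage_account.example.id}/providers/Microsoft.Security/defenderForStorageSettings/current"
  method      = "PUT"

  body = jsonencode({
    properties = {
      isEnabled                         = false
      overrideSubscriptionLevelSettings = true
    }
  })
}
```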
+Learn more about the [Microsoft.Security/defenderForStorageSettings](/rest/api/defenderforcloud/defender-for-storage/create) API documentation for further customization and control over your storage account's security settings. Additionally, you can find comprehensive details on the Terraform provider for Azure in the [Terraform AzureRM Provider documentation](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs).
+
### Bicep template - storage account

To enable and configure Microsoft Defender for Storage at the storage account level using Bicep, add the following to your Bicep template:
resource defenderForStorageSettings 'Microsoft.Security/DefenderForStorageSettin
}
```
-To modify the monthly threshold for malware scanning in your storage accounts, simply adjust the capGBPerMonth parameter to your preferred value. This parameter sets a cap on the maximum data that can be scanned for malware each month, per storage account. If you want to permit unlimited scanning, assign the value -1. The default limit is set at 5,000 GB.
+**Modifying the monthly cap for malware scanning**
+
+To modify the monthly threshold for malware scanning in your storage accounts, simply adjust the `capGBPerMonth` parameter to your preferred value. This parameter sets a cap on the maximum data that can be scanned for malware each month, per storage account. If you want to permit unlimited scanning, assign the value -1. The default limit is set at 5,000 GB.
-If you want to turn off the On-upload malware scanning or Sensitive data threat detection features, you can change the `isEnabled` value to **false** under the `malwareScanning` or `sensitiveDataDiscovery` properties sections.
+**Disabling features**
-To disable the entire Defender plan for the storage account, set the `isEnabled` property value to **false** and remove the `malwareScanning` and `sensitiveDataDiscovery` sections from the properties.
+If you want to turn off the On-upload malware scanning or sensitive data threat detection features, you can change the `isEnabled` value to **false** under the `malwareScanning` or `sensitiveDataDiscovery` properties sections.
+
+**Disabling the entire Defender for Storage plan**
+
+To disable the entire Defender plan for the storage account, set the `isEnabled` property value to **false** and remove the `malwareScanning` and `sensitiveDataDiscovery` sections from the properties.
Learn more about the [Microsoft.Security/DefenderForStorageSettings API](/rest/api/defenderforcloud/defender-for-storage/create) documentation.

> [!TIP]
> Malware Scanning can be configured to send scanning results to the following: <br> **Event Grid custom topic** - for near-real time automatic response based on every scanning result. Learn more how to [configure malware scanning to send scanning events to an Event Grid custom topic](/azure/storage/common/azure-defender-storage-configure?toc=%2Fazure%2Fdefender-for-cloud%2Ftoc.json&tabs=enable-storage-account#setting-up-event-grid-for-malware-scanning). <br> **Log Analytics workspace** - for storing every scan result in a centralized log repository for compliance and audit. Learn more how to [configure malware scanning to send scanning results to a Log Analytics workspace](/azure/storage/common/azure-defender-storage-configure?toc=%2Fazure%2Fdefender-for-cloud%2Ftoc.json&tabs=enable-storage-account#setting-up-logging-for-malware-scanning).
-Learn more on how to set up response for malware scanning results.
+Learn more on how to [set up response for malware scanning results](/azure/defender-for-cloud/defender-for-storage-configure-malware-scan).
### ARM template - storage account
To enable and configure Microsoft Defender for Storage at the storage account le
}
```
-To modify the monthly threshold for malware scanning in your storage accounts, simply adjust the capGBPerMonth parameter to your preferred value. This parameter sets a cap on the maximum data that can be scanned for malware each month, per storage account. If you want to permit unlimited scanning, assign the value -1. The default limit is set at 5,000 GB.
+**Modifying the monthly cap for malware scanning**
+
+To modify the monthly threshold for malware scanning in your storage accounts, simply adjust the `capGBPerMonth` parameter to your preferred value. This parameter sets a cap on the maximum data that can be scanned for malware each month, per storage account. If you want to permit unlimited scanning, assign the value "-1". The default limit is set at 5,000 GB.
-If you want to turn off the On-upload malware scanning or Sensitive data threat detection features, you can change the isEnabled value to false under the malwareScanning or sensitiveDataDiscovery properties sections.
+**Disabling features**
-To disable the entire Defender plan for the storage account, set the isEnabled property value to false and remove the malwareScanning and sensitiveDataDiscovery sections from the properties.
+If you want to turn off the on-upload malware scanning or sensitive data threat detection features, you can change the `isEnabled` value to **false** under the `malwareScanning` or `sensitiveDataDiscovery` properties sections.
+
+**Disabling the entire Defender for Storage plan**
+
+To disable the entire Defender plan for the storage account, set the `isEnabled` property value to **false** and remove the `malwareScanning` and `sensitiveDataDiscovery` sections from the properties.
## Next steps
-Learn more about the [Microsoft.Security/DefenderForStorageSettings](/rest/api/defenderforcloud/defender-for-storage/create) API documentation.
+Learn more about the [Microsoft.Security/DefenderForStorageSettings](/rest/api/defenderforcloud/defender-for-storage/create) API documentation.
+
defender-for-cloud Defender For Storage Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-storage-introduction.md
Title: Microsoft Defender for Storage - the benefits and features- description: Learn about the benefits and features of Microsoft Defender for Storage. Previously updated : 06/15/2023 Last updated : 08/21/2023
Defender for Storage includes:
- Activity Monitoring
- Sensitive data threat detection (preview feature, new plan only)
-- Malware Scanning (preview feature, new plan only)
+- Malware Scanning (new plan only)
:::image type="content" source="media/defender-for-storage-introduction/defender-for-storage-overview.gif" alt-text="Animated diagram showing how Defender for Storage protects against common threats to data.":::

## Getting started
-With a simple agentless setup at scale, you can [enable Defender for Storage](../storage/common/azure-defender-storage-configure.md) at the subscription or resource levels through the portal or programmatically. When enabled at the subscription level, all existing and newly created storage accounts under that subscription will be automatically protected. You can also exclude specific storage accounts from protected subscriptions.
+With a simple agentless setup at scale, you can [enable Defender for Storage](tutorial-enable-storage-plan.md) at the subscription or resource levels through the portal or programmatically. When enabled at the subscription level, all existing and newly created storage accounts under that subscription will be automatically protected. You can also exclude specific storage accounts from protected subscriptions.
> [!NOTE]
> If you already have the Defender for Storage (classic) enabled and want to access the new security features and pricing, you'll need to [migrate to the new pricing plan](defender-for-storage-classic-migrate.md).
With a simple agentless setup at scale, you can [enable Defender for Storage](..
|Aspect|Details|
|-|:-|
|Release state:|General Availability (GA)|
-|Feature availability:|- Activity monitoring (security alerts) - General Availability (GA)<br>- Malware Scanning – Preview, **General Availability (GA) on September 1, 2023** <br>- Sensitive data threat detection (Sensitive Data Discovery) – Preview|
-|Pricing:|- Defender for Storage: $10/storage accounts/month\*<br>- Malware Scanning (add-on): Free during public preview\*\* <br><br>Above pricing applies to commercial clouds. Visit the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/) to learn more.<br><br>\* Storage accounts that exceed 73 million monthly transactions will be charged $0.1492 for every 1 million transactions that exceed the threshold.<br>\*\* Malware Scanning is offered for free during the public preview but will **start being billed on September 1, 2023, at $0.15/GB (USD) of data ingested.** Customers are encouraged to use the "Monthly capping" feature to define the cap on GB scanned per month per storage account and control costs using this feature.|
+|Feature availability:|- Activity monitoring (security alerts) – General Availability (GA)<br>- Malware Scanning – General Availability (GA)<br>- Sensitive data threat detection (Sensitive Data Discovery) – Preview|
+|Pricing:|- Defender for Storage: $10/storage accounts/month\*<br>- Malware Scanning (add-on): $0.15/GB (USD) of data ingested\*\* <br><br>Above pricing applies to commercial clouds. Visit the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/) to learn more.<br><br>\* Storage accounts that exceed 73 million monthly transactions will be charged $0.1492 for every 1 million transactions that exceed the threshold.<br>\*\* Billing begins on September 3, 2023. To limit expenses, use the `Monthly capping` feature to set a cap on the amount of GB scanned per month, per storage account to help you control your costs. |
| Supported storage types:|[Blob Storage](https://azure.microsoft.com/products/storage/blobs/) (Standard/Premium StorageV2, including Data Lake Gen2): Activity monitoring, Malware Scanning, Sensitive Data Discovery<br>Azure Files (over REST API and SMB): Activity monitoring |
|Required roles and permissions:|For Malware Scanning and sensitive data threat detection at subscription and storage account levels, you need Owner roles (subscription owner/storage account owner) or specific roles with corresponding data actions. To enable Activity Monitoring, you need 'Security Admin' permissions. Read more about the required permissions.|
|Clouds:|:::image type="icon" source="../defender-for-cloud/media/icons/yes-icon.png"::: Commercial clouds\*<br>:::image type="icon" source="../defender-for-cloud/media/icons/no-icon.png"::: Azure Government (only activity monitoring support on the [classic plan](/azure/defender-for-cloud/defender-for-storage-classic))<br>:::image type="icon" source="../defender-for-cloud/media/icons/no-icon.png"::: Microsoft Azure operated by 21Vianet<br>:::image type="icon" source="../defender-for-cloud/media/icons/no-icon.png"::: Connected AWS accounts|
Defender for Storage continuously analyzes data and control plane logs from prot
### Malware Scanning (powered by Microsoft Defender Antivirus)

> [!NOTE]
-> Malware Scanning is offered for free during public preview. **Billing will begin when generally available (GA) on September 1, 2023 and priced at $0.15 (USD)/GB of data scanned.** You are encouraged to use the "Monthly capping" feature to define the cap on GB scanned per storage account per month and control costs.
+> **Billing for Malware Scanning begins on September 3, 2023.** To limit expenses, use the `Monthly capping` feature to set a cap on the amount of GB scanned per month, per storage account to help you control your costs.
Malware Scanning in Defender for Storage helps protect storage accounts from malicious content by performing a full malware scan on uploaded content in near real time, applying Microsoft Defender Antivirus capabilities. It's designed to help fulfill security and compliance requirements to handle untrusted content. Every file type is scanned, and scan results are returned for every file. The Malware Scanning capability is an agentless SaaS solution that allows simple setup at scale, with zero maintenance, and supports automating response at scale. This is a configurable feature in the new Defender for Storage plan that is priced per GB scanned.
In summary, Malware Scanning, which is only available on the new plan for Blob s
In this article, you learned about Microsoft Defender for Storage.

-- [Enable Defender for Storage](enable-enhanced-security.md)
+- [Enable Defender for Storage](tutorial-enable-storage-plan.md)
- Check out [common questions](faq-defender-for-storage.yml) about Defender for Storage.
defender-for-cloud Defender For Storage Malware Scan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-storage-malware-scan.md
Title: Malware scanning in Microsoft Defender for Storage description: Learn about the benefits and features of malware scanning in Microsoft Defender for Storage. Previously updated : 08/15/2023 Last updated : 08/23/2023
Some common use-cases and scenarios for malware scanning in Defender for Storage
- **Machine learning training data:** the quality and security of the training data are critical for effective machine learning models. It's important to ensure these data sets are clean and safe, especially if they include user-generated content or data from external sources.
+ :::image type="content" source="media/defender-for-storage-malware-scan/malware-scan-tax-app-demo.gif" alt-text="animated GIF showing user-generated-content and data from external sources." lightbox="media/defender-for-storage-malware-scan/malware-scan-tax-app-demo.gif":::
+
## Prerequisites

To enable and configure Malware Scanning, you must have Owner roles (such as Subscription Owner or Storage Account Owner) or specific roles with the necessary data actions. Learn more about the [required permissions](support-matrix-defender-for-storage.md).
-You can [enable and configure Malware Scanning at scale](/azure/storage/common/azure-defender-storage-configure?toc=%2Fazure%2Fdefender-for-cloud%2Ftoc.json&tabs=enable-subscription) for your subscriptions while maintaining granular control over configuring the feature for individual storage accounts. There are several ways to enable and configure Malware Scanning: [Azure built-in policy](/azure/storage/common/azure-defender-storage-configure?toc=%2Fazure%2Fdefender-for-cloud%2Ftoc.json&tabs=enable-subscription#enable-and-configure-at-scale-with-an-azure-built-in-policy) (recommended method), programmatically using Infrastructure as Code templates, including [Bicep](/azure/storage/common/azure-defender-storage-configure?toc=%2Fazure%2Fdefender-for-cloud%2Ftoc.json&tabs=enable-subscription#bicep-template) and [ARM template](/azure/storage/common/azure-defender-storage-configure?toc=%2Fazure%2Fdefender-for-cloud%2Ftoc.json&tabs=enable-subscription#arm-template), using the [Azure portal](/azure/storage/common/azure-defender-storage-configure?toc=%2Fazure%2Fdefender-for-cloud%2Ftoc.json&tabs=enable-subscription#azure-portal), or directly with [REST API](/azure/storage/common/azure-defender-storage-configure?toc=%2Fazure%2Fdefender-for-cloud%2Ftoc.json&tabs=enable-subscription#enable-and-configure-with-rest-api).
-
-To enable and configure Malware Scanning, you must have Owner roles (such as Subscription Owner or Storage Account Owner) or specific roles with the necessary data actions. Learn more about the [required permissions](support-matrix-defender-for-storage.md).
+You can [enable and configure Malware Scanning at scale](tutorial-enable-storage-plan.md) for your subscriptions while maintaining granular control over configuring the feature for individual storage accounts. There are several ways to enable and configure Malware Scanning: [Azure built-in policy](defender-for-storage-policy-enablement.md) (the recommended method), programmatically using Infrastructure as Code templates, including [Terraform](defender-for-storage-infrastructure-as-code-enablement.md?tabs=enable-subscription#terraform-template), [Bicep](defender-for-storage-infrastructure-as-code-enablement.md?tabs=enable-subscription&branch=pr-en-us-248836#bicep-template), and [ARM](defender-for-storage-infrastructure-as-code-enablement.md?tabs=enable-subscription#azure-resource-manager-template) templates, using the [Azure portal](defender-for-storage-azure-portal-enablement.md?tabs=enable-subscription), or directly with the [REST API](defender-for-storage-rest-api-enablement.md?tabs=enable-subscription).
-## How does malware scanning work?
+## How does malware scanning work
### On-upload malware scanning
Learn how to configure Malware Scanning so that [every scan result is sent autom
You may want to log your scan results for compliance evidence or investigating scan results. By setting up a Log Analytics Workspace destination, you can store every scan result in a centralized log repository that is easy to query. You can view the results by navigating to the Log Analytics destination workspace and looking for the `StorageMalwareScanningResults` table.
-Learn more about [setting up Log Analytics results](../azure-monitor/logs/quick-create-workspace.md).
+Learn more about [setting up logging for malware scanning](advanced-configurations-for-malware-scanning.md#setting-up-logging-for-malware-scanning).
> [!TIP]
-> We recommend you try a hands-on lab to try out Malware Scanning in Defender for Storage: the [Ninja](https://github.com/Azure/Microsoft-Defender-for-Cloud/blob/main/Labs/Modules/Module%2019%20-%20Defender%20for%20Storage.md) training instructions for detailed step-by-step instructions on how to test Malware Scanning end-to-end with setting up responses to scanning results. This is part of the 'labs' project that helps customers get ramped up with Microsoft Defender for Cloud and provide hands-on practical experience with its capabilities.
+> We invite you to explore the Malware Scanning feature in Defender for Storage through our hands-on lab. Follow the [Ninja training](https://github.com/Azure/Microsoft-Defender-for-Cloud/blob/main/Labs/Modules/Module%2019%20-%20Defender%20for%20Storage.md) instructions for a detailed, step-by-step guide on how to set up and test Malware Scanning end-to-end, including configuring responses to scanning results. This is part of the 'labs' project that helps customers get ramped up with Microsoft Defender for Cloud and provides hands-on practical experience with its capabilities.
## Cost control

Malware scanning is billed per GB scanned. To provide cost predictability, Malware Scanning supports setting a cap on the amount of GB scanned in a single month per storage account.
-The "capping" mechanism is designed to set a monthly scanning limit, measured in gigabytes (GB), for each storage account, serving as an effective costs control. If a predefined scanning limit is established for a storage account in a single calendar month, the scanning operation would automatically halt once this threshold is reached (with up to 20-GB deviation) and files won't be scanned for malware.
+The "capping" mechanism is designed to set a monthly scanning limit, measured in gigabytes (GB), for each storage account, serving as an effective cost control. If a predefined scanning limit is established for a storage account in a single calendar month, the scanning operation would automatically halt once this threshold is reached (with up to a 20-GB deviation), and files wouldn't be scanned for malware. Updating the cap typically takes up to an hour to take effect.
By default, a limit of 5 TB (5,000 GB) is established if no specific capping mechanism is defined.

> [!TIP]
> You can set the capping mechanism on either individual storage accounts or across an entire subscription (every storage account on the subscription will be allocated the limit defined on the subscription level).
-Follow [these steps](/azure/storage/common/azure-defender-storage-configure?toc=%2Fazure%2Fdefender-for-cloud%2Ftoc.json&tabs=enable-subscription#azure-portal) to configure the capping mechanism.
+Follow [these steps](tutorial-enable-storage-plan.md#set-up-and-configure-microsoft-defender-for-storage) to configure the capping mechanism.
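For readers who configure the plan through Infrastructure as Code, here's a rough Terraform sketch of a subscription-level cap. The resource shape mirrors the enablement templates elsewhere in these docs; the 1,000-GB figure and the resource label are illustrative assumptions:

```terraform
# Sketch: cap on-upload malware scanning at 1,000 GB per month, per storage
# account, for every account in the subscription (illustrative value).
resource "azurerm_security_center_subscription_pricing" "storage" {
  tier          = "Standard"
  resource_type = "StorageAccounts"
  subplan       = "DefenderForStorageV2"

  extension {
    name = "OnUploadMalwareScanning"
    additional_extension_properties = {
      CapGBPerMonthPerStorageAccount = "1000"
    }
  }
}
```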
## Handling possible false positives
Despite the scanning process, access to uploaded data remains unaffected, and th
## Next steps

Learn more on how to [set up response for malware scanning](defender-for-storage-configure-malware-scan.md#setting-up-response-to-malware-scanning) results.
defender-for-cloud Defender For Storage Rest Api Enablement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-storage-rest-api-enablement.md
We recommend that you enable Defender for Storage on the subscription level. Doi
To enable and configure Microsoft Defender for Storage at the subscription level using REST API, create a PUT request with this endpoint (replace the `subscriptionId` in the endpoint URL with your own Azure subscription ID):
-**PUT** https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.Security/pricings/StorageAccounts?api-version=2023-01-01
+```http
+PUT
+https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.Security/pricings/StorageAccounts?api-version=2023-01-01
+```
And add the following request body:

```
Learn more on how to [set up response for malware scanning](defender-for-storage
## Next steps

-- Learn how to [enable and Configure the Defender for Storage plan at scale with an Azure built-in policy](defender-for-storage-policy-enablement.md).
+- Learn how to [enable and configure the Defender for Storage plan at scale with an Azure built-in policy](defender-for-storage-policy-enablement.md).
defender-for-cloud Defender For Storage Test https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-storage-test.md
# Testing the Defender for Storage data security features
-After you [enable Microsoft Defender for Storage](../storage/common/azure-defender-storage-configure.md), you can test the service and run a proof of concept to familiarize yourself with its features and validate the advanced security capabilities effectively protect your storage accounts by generating real security alerts. This guide will walk you through testing various aspects of the security coverage offered by Defender for Storage.
+After you [enable Microsoft Defender for Storage](tutorial-enable-storage-plan.md), you can test the service and run a proof of concept to familiarize yourself with its features and validate that the advanced security capabilities effectively protect your storage accounts by generating real security alerts. This guide will walk you through testing various aspects of the security coverage offered by Defender for Storage.
There are three main components to test:
To simulate a malware upload using an EICAR test file, follow these steps:
1. Review the security alert:
- 1. Locate the alert titled **Malicious file uploaded to storage account (Preview)**.
-
+ 1. Locate the alert titled **Malicious file uploaded to storage account**.
1. Select the alert's **View full details** button to see all the related details.

1. Learn more about Defender for Storage security alerts in the [reference table for all security alerts in Microsoft Defender for Cloud](alerts-reference.md#alerts-azurestorage).
To test the sensitive data threat detection feature by uploading test data that
:::image type="content" source="media/defender-for-storage-test/testing-sensitivity-2.png" alt-text="Screenshot showing how to test a file in Malware Scanning for Social Security Number information.":::
- 1. Save the file with the updated information.
-
- 1. Upload the file you created to the **test-container** in the storage account.
+ 1. Save and upload the file to the **test-container** in the storage account.
:::image type="content" source="media/defender-for-storage-test/testing-sensitivity-3.png" alt-text="Screenshot showing how to upload a file in Malware Scanning to test for Social Security Number information.":::
To test the sensitive data threat detection feature by uploading test data that
1. Enable Defender for Storage on the storage account with the Sensitivity Data Discovery feature enabled.
- Allow 1-2 hours for the Sensitive Data Discovery engine to scan the storage account. Be aware that the process may take up to 24 hours to complete.
+ Sensitive data discovery scans for sensitive information within the first 24 hours when enabled at the storage account level or when a new storage account is created under a subscription protected by this feature at the subscription level. Following this initial scan, the service will scan for sensitive information every 7 days from the time of enablement.
+ > [!NOTE]
+ > If you enable the feature and then add sensitive data on the days after enablement, the next scan for that newly added data will occur within the next 7-day scanning cycle, depending on the day of the week the data was added.
1. Change access level:

   1. Return to the Containers blade.
Learn more about:
- [Threat response](defender-for-storage-threats-alerts.md)
- [Customizing data sensitivity settings](defender-for-storage-data-sensitivity.md)
-- [Threat detection and alerts](defender-for-storage-threats-alerts.md)
+- [Threat detection and alerts](defender-for-storage-threats-alerts.md)
defender-for-cloud Enable Agentless Scanning Vms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/enable-agentless-scanning-vms.md
Previously updated : 06/29/2023 Last updated : 08/15/2023

# Enable agentless scanning for VMs
If you have Defender for Servers P2 already enabled and agentless scanning is tu
After you enable agentless scanning, software inventory and vulnerability information are updated automatically in Defender for Cloud.
+## Enable agentless scanning in GCP
+
+1. From Defender for Cloud's menu, select **Environment settings**.
+1. Select the relevant project or organization.
+1. For either the Defender Cloud Security Posture Management (CSPM) or Defender for Servers P2 plan, select **Settings**.
+
+ :::image type="content" source="media/enable-agentless-scanning-vms/gcp-select-plan.png" alt-text="Screenshot that shows where to select the plan for GCP projects." lightbox="media/enable-agentless-scanning-vms/gcp-select-plan.png":::
+
+1. In the settings pane, turn on **Agentless scanning**.
+
+ :::image type="content" source="media/enable-agentless-scanning-vms/gcp-select-agentless.png" alt-text="Screenshot that shows where to select agentless scanning." lightbox="media/enable-agentless-scanning-vms/gcp-select-agentless.png":::
+
+1. Select **Save and Next: Configure Access**.
+1. Copy the onboarding script.
+1. Run the onboarding script in the GCP organization/project scope (GCP portal or gcloud CLI).
+1. Select **Next: Review and generate**.
+1. Select **Update**.
+
## Exclude machines from scanning

Agentless scanning applies to all of the eligible machines in the subscription. To prevent specific machines from being scanned, you can exclude machines from agentless scanning based on your pre-existing environment tags. When Defender for Cloud performs the continuous discovery for machines, excluded machines are skipped.
defender-for-cloud Enable Pull Request Annotations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/enable-pull-request-annotations.md
Once you've completed these steps, you can select the build pipeline you created
1. (Optional) Select a category from the drop-down menu.

> [!NOTE]
- > Only secret scan results and Infrastructure-as-Code misconfigurations for ARM/Bicep templates are currently supported.
+ > Only secret scan results and Infrastructure-as-Code misconfigurations (ARM, Bicep, Terraform, CloudFormation, Dockerfiles, Helm Charts, and more) are currently supported.
1. (Optional) Select a severity level from the drop-down menu.
defender-for-cloud Enable Vulnerability Assessment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/enable-vulnerability-assessment.md
A notification message pops up in the top right corner that will verify that the
## How to enable runtime coverage

-- For Defender for CSPM, use agentless discovery for Kubernetes. For more information, see [Onboard agentless container posture in Defender CSPM](how-to-enable-agentless-containers.md).
-- For Defender for Containers, use the Defender agent. For more information, see [Deploy the Defender agent in Azure](tutorial-enable-containers-azure.md#deploy-the-defender-agent-in-azure).
+- For Defender CSPM, use agentless discovery for Kubernetes. For more information, see [Onboard agentless container posture in Defender CSPM](how-to-enable-agentless-containers.md).
+- For Defender for Containers, use agentless discovery for Kubernetes or use the Defender agent. For more information, see [Enable the plan](defender-for-containers-enable.md).
- For Defender for Container Registries, there is no runtime coverage.

## Next steps
defender-for-cloud Endpoint Protection Recommendations Technical https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/endpoint-protection-recommendations-technical.md
Last updated 06/15/2023
# Endpoint protection assessment and recommendations in Microsoft Defender for Cloud
+> [!NOTE]
+> As the Log Analytics agent (also known as MMA) is set to retire in [August 2024](https://azure.microsoft.com/updates/were-retiring-the-log-analytics-agent-in-azure-monitor-on-31-august-2024/), all Defender for Servers features that currently depend on it, including those described on this page, will be available through either [Microsoft Defender for Endpoint integration](integration-defender-for-endpoint.md) or [agentless scanning](concept-agentless-data-collection.md), before the retirement date. For more information about the roadmap for each of the features that currently rely on the Log Analytics agent, see [this announcement](upcoming-changes.md#defender-for-cloud-plan-and-strategy-for-the-log-analytics-agent-deprecation).
+ Microsoft Defender for Cloud provides health assessments of [supported](supported-machines-endpoint-solutions-clouds-servers.md#endpoint-supported) versions of Endpoint protection solutions. This article explains the scenarios that lead Defender for Cloud to generate the following two recommendations: - [Endpoint protection should be installed on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/4fb67663-9ab9-475d-b026-8c544cced439)
Microsoft Defender for Cloud provides health assessments of [supported](supporte
- Defender for Cloud recommends **Endpoint protection health issues should be resolved on your machines** when [Get-MpComputerStatus](/powershell/module/defender/get-mpcomputerstatus) runs and any of the following occurs:
- * Any of the following properties are false:
+ - Any of the following properties are false:
- **AMServiceEnabled**
- **AntispywareEnabled**
Microsoft Defender for Cloud provides health assessments of [supported](supporte
- **IoavProtectionEnabled**
- **OnAccessProtectionEnabled**
- * If one or both of the following properties are 7 or more:
+ - If one or both of the following properties are 7 or more:
- **AntispywareSignatureAge**
- **AntivirusSignatureAge**

## Microsoft System Center endpoint protection
-* Defender for Cloud recommends **Endpoint protection should be installed on your machines** when importing **SCEPMpModule ("$env:ProgramFiles\Microsoft Security Client\MpProvider\MpProvider.psd1")** and running **Get-MProtComputerStatus** results in **AMServiceEnabled = false**.
+- Defender for Cloud recommends **Endpoint protection should be installed on your machines** when importing **SCEPMpModule ("$env:ProgramFiles\Microsoft Security Client\MpProvider\MpProvider.psd1")** and running **Get-MProtComputerStatus** results in **AMServiceEnabled = false**.
-* Defender for Cloud recommends **Endpoint protection health issues should be resolved on your machines** when **Get-MprotComputerStatus** runs and any of the following occurs:
+- Defender for Cloud recommends **Endpoint protection health issues should be resolved on your machines** when **Get-MprotComputerStatus** runs and any of the following occurs:
- * At least one of the following properties is false:
+ - At least one of the following properties is false:
- **AMServiceEnabled**
- **AntispywareEnabled**
Microsoft Defender for Cloud provides health assessments of [supported](supporte
- **IoavProtectionEnabled**
- **OnAccessProtectionEnabled**
- * If one or both of the following Signature Updates are greater or equal to 7:
+ - If one or both of the following Signature Updates are greater or equal to 7:
- * **AntispywareSignatureAge**
- * **AntivirusSignatureAge**
+ - **AntispywareSignatureAge**
+ - **AntivirusSignatureAge**
## Trend Micro
-* Defender for Cloud recommends **Endpoint protection should be installed on your machines** when any of the following checks aren't met:
- - **HKLM:\SOFTWARE\TrendMicro\Deep Security Agent** exists
- - **HKLM:\SOFTWARE\TrendMicro\Deep Security Agent\InstallationFolder** exists
- - The **dsa_query.cmd** file is found in the Installation Folder
- - Running **dsa_query.cmd** results with **Component.AM.mode: on - Trend Micro Deep Security Agent detected**
+- Defender for Cloud recommends **Endpoint protection should be installed on your machines** when any of the following checks aren't met:
+ - **HKLM:\SOFTWARE\TrendMicro\Deep Security Agent** exists
+ - **HKLM:\SOFTWARE\TrendMicro\Deep Security Agent\InstallationFolder** exists
+ - The **dsa_query.cmd** file is found in the Installation Folder
+ - Running **dsa_query.cmd** results with **Component.AM.mode: on - Trend Micro Deep Security Agent detected**
## Symantec endpoint protection
+
Defender for Cloud recommends **Endpoint protection should be installed on your machines** when any of the following checks aren't met:

- **HKLM:\Software\Symantec\Symantec Endpoint Protection\CurrentVersion\PRODUCTNAME = "Symantec Endpoint Protection"**
Defender for Cloud recommends **Endpoint protection health issues should be reso
- Check Real-Time Protection status: **HKLM:\Software\Wow6432Node\Symantec\Symantec Endpoint Protection\AV\Storages\Filesystem\RealTimeScan\OnOff == 1**
- Check Signature Update status: **HKLM\Software\Symantec\Symantec Endpoint Protection\CurrentVersion\public-opstate\LatestVirusDefsDate <= 7 days**
- Check Full Scan status: **HKLM:\Software\Symantec\Symantec Endpoint Protection\CurrentVersion\public-opstate\LastSuccessfulScanDateTime <= 7 days**
-- Find signature version number Path to signature version for Symantec 12: **Registry Paths+ "CurrentVersion\SharedDefs" -Value "SRTSP"**
+- Find signature version number Path to signature version for Symantec 12: **Registry Paths+ "CurrentVersion\SharedDefs" -Value "SRTSP"**
- Path to signature version for Symantec 14: **Registry Paths+ "CurrentVersion\SharedDefs\SDSDefs" -Value "SRTSP"**

Registry Paths:
+
- **"HKLM:\Software\Symantec\Symantec Endpoint Protection" + $Path;**
- **"HKLM:\Software\Wow6432Node\Symantec\Symantec Endpoint Protection" + $Path**
Defender for Cloud recommends **Endpoint protection health issues should be reso
- Find Signature date: **HKLM:\Software\McAfee\AVSolution\DS\DS -Value "szContentCreationDate" >= 7 days**
- Find Scan date: **HKLM:\Software\McAfee\Endpoint\AV\ODS -Value "LastFullScanOdsRunTime" >= 7 days**
-## McAfee Endpoint Security for Linux Threat Prevention
+## McAfee Endpoint Security for Linux Threat Prevention
Defender for Cloud recommends **Endpoint protection should be installed on your machines** when any of the following checks aren't met:
Defender for Cloud recommends **Endpoint protection health issues should be reso
- **"/opt/McAfee/ens/tp/bin/mfetpcli --listtask"** returns **DAT and engine Update time** and both of them <= 7 days - **"/opt/McAfee/ens/tp/bin/mfetpcli --getoasconfig --summary"** returns **On Access Scan** status
-## Sophos Antivirus for Linux
+## Sophos Antivirus for Linux
Defender for Cloud recommends **Endpoint protection should be installed on your machines** when any of the following checks aren't met:
+
- File **/opt/sophos-av/bin/savdstatus** exists or search for customized location **"readlink $(which savscan)"**
- **"/opt/sophos-av/bin/savdstatus --version"** returns Sophos name = **Sophos Anti-Virus and Sophos version >= 9**

Defender for Cloud recommends **Endpoint protection health issues should be resolved on your machines** when any of the following checks aren't met:
+
- **"/opt/sophos-av/bin/savlog --maxage=7 | grep -i "Scheduled scan .\* completed" | tail -1"**, returns a value
- **"/opt/sophos-av/bin/savlog --maxage=7 | grep "scan finished"** | tail -1", returns a value
-- **"/opt/sophos-av/bin/savdstatus --lastupdate"** returns lastUpdate, which should be <= 7 days
-- **"/opt/sophos-av/bin/savdstatus -v"** is equal to **"On-access scanning is running"**
+- **"/opt/sophos-av/bin/savdstatus --lastupdate"** returns lastUpdate, which should be <= 7 days
+- **"/opt/sophos-av/bin/savdstatus -v"** is equal to **"On-access scanning is running"**
- **"/opt/sophos-av/bin/savconfig get LiveProtection"** returns enabled ## Troubleshoot and support
defender-for-cloud Episode Thirty Five https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-thirty-five.md
Last updated 08/08/2023
# Security alert correlation
-**Episode description**: In this episode of Defender for Cloud in the Field, Daniel Davrayev joins Yuri Diogenes to talk about security alert correlation capability in Defender for Cloud. Daniel talks about the importance of have a built-in capability to correlate alerts in Defender for Cloud, how this saves time for SOC analysts to investigate alert and respond to potential threats. Daniel also explains how data correlation works and demonstrate how this correlation appears in Defender for Cloud dashboard as a security incident.
+**Episode description**: In this episode of Defender for Cloud in the Field, Daniel Davrayev joins Yuri Diogenes to talk about the security alert correlation capability in Defender for Cloud. Daniel talks about the importance of having a built-in capability to correlate alerts in Defender for Cloud and how this capability saves time for SOC analysts as they investigate alerts and respond to potential threats. Daniel also explains how data correlation works and demonstrates how this correlation appears in the Defender for Cloud dashboard as a security incident.
> [!VIDEO https://aka.ms/docs/player?id=6573561d-70a6-4b4c-ad16-9efe747c9a61]
Last updated 08/08/2023
## Next steps

> [!div class="nextstepaction"]
-> [New AWS Connector in Microsoft Defender for Cloud](episode-one.md)
+> [Defender CSPM support for GCP and more updates](episode-thirty-six.md)
defender-for-cloud Episode Thirty Seven https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-thirty-seven.md
+
+ Title: Capabilities to counter identity-based supply chain attacks | Defender for Cloud in the Field
+description: Learn about Defender for Cloud's capability to counter identity-based supply chain attacks.
+ Last updated : 08/29/2023
+# Capabilities to counter identity-based supply chain attacks
+
+**Episode description**: In this episode of Defender for Cloud in the Field, Security Researcher Hagai Kestenberg joins Yuri Diogenes to talk about Defender for Cloud capabilities to counter identity-based supply chain attacks. Hagai explains the different types of supply chain attacks and focuses on the risks of identity-based supply chain attacks. Hagai makes recommendations to mitigate this type of attack and explains the new capability in Defender for Resource Manager that can be used to identify this type of attack. Hagai also demonstrates the new alert generated by Defender for Resource Manager when this type of attack is identified.
+
+> [!VIDEO https://aka.ms/docs/player?id=d69fb652-46a7-4f8c-8632-8cf2cbc3685a]
+
+- [01:41](/shows/mdc-in-the-field/counter-identity-based-supply-chain-attacks#time=01m41s) - Intro
+- [04:04](/shows/mdc-in-the-field/counter-identity-based-supply-chain-attacks#time=04m04s) - Understanding identity-based supply chain attacks
+- [06:50](/shows/mdc-in-the-field/counter-identity-based-supply-chain-attacks#time=06m50s) - Identity-based supply chain attacks sample scenario
+- [08:26](/shows/mdc-in-the-field/counter-identity-based-supply-chain-attacks#time=08m26s) - Best practices to prevent identity-based supply chain attacks
+- [10:29](/shows/mdc-in-the-field/counter-identity-based-supply-chain-attacks#time=10m29s) - Demonstration
+
+## Recommended resources
+
+- [Learn more](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/announcing-microsoft-defender-for-cloud-capabilities-to-counter/ba-p/3876012)
+- Subscribe to [Microsoft Security on YouTube](https://www.youtube.com/playlist?list=PL3ZTgFEc7LysiX4PfHhdJPR7S8mGO14YS)
+- Learn more about [Microsoft Security](https://msft.it/6002T9HQY)
+
+- Follow us on social media:
+
+ - [LinkedIn](https://www.linkedin.com/showcase/microsoft-security/)
+ - [Twitter](https://twitter.com/msftsecurity)
+
+- Join our [Tech Community](https://aka.ms/SecurityTechCommunity)
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [New AWS Connector in Microsoft Defender for Cloud](episode-one.md)
defender-for-cloud Episode Thirty Six https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-thirty-six.md
+
+ Title: Defender CSPM support for GCP and more updates | Defender for Cloud in the Field
+description: Learn about Defender CSPM's support for GCP and more updates for Defender for Cloud.
+ Last updated : 08/29/2023
+# Defender CSPM support for GCP and more updates
+
+**Episode description**: In this episode of Defender for Cloud in the Field, Amit Biton joins Yuri Diogenes to talk about the new Defender CSPM support for GCP. Amit talks about the recent investments in multicloud and the alignment with Microsoft's CNAPP strategy. Amit covers the capabilities that were released in Defender CSPM to cover GCP, including the new Microsoft Cloud Security Benchmark for GCP. Amit also demonstrates the use of Attack Path and Cloud Security Explorer in a multicloud environment.
+
+> [!VIDEO https://aka.ms/docs/player?id=673a8d91-3b0e-4bfb-986c-888ae7532320]
+
+- [01:23](/shows/mdc-in-the-field/support-gcp#time=01m23s) - Overview of the new announcements for multicloud
+- [05:09](/shows/mdc-in-the-field/support-gcp#time=05m09s) - Microsoft CNAPP strategy
+- [08:55](/shows/mdc-in-the-field/support-gcp#time=08m55s) - Agentless capability
+- [12:54](/shows/mdc-in-the-field/support-gcp#time=12m54s) - Demonstration
+
+## Recommended resources
+
+- [Learn more](/azure/defender-for-cloud/concept-cloud-security-posture-management)
+- Subscribe to [Microsoft Security on YouTube](https://www.youtube.com/playlist?list=PL3ZTgFEc7LysiX4PfHhdJPR7S8mGO14YS)
+- Learn more about [Microsoft Security](https://msft.it/6002T9HQY)
+
+- Follow us on social media:
+
+ - [LinkedIn](https://www.linkedin.com/showcase/microsoft-security/)
+ - [Twitter](https://twitter.com/msftsecurity)
+
+- Join our [Tech Community](https://aka.ms/SecurityTechCommunity)
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Capabilities to counter identity-based supply chain attacks](episode-thirty-seven.md)
defender-for-cloud Export To Siem https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/export-to-siem.md
Before you set up the Azure services for exporting alerts, make sure you have:
- if it **has the SecurityCenterFree solution**, you'll need a minimum of read permissions for the workspace solution: `Microsoft.OperationsManagement/solutions/read` - if it **doesn't have the SecurityCenterFree solution**, you'll need write permissions for the workspace solution: `Microsoft.OperationsManagement/solutions/action` -->
-### Step 1. Set up the Azure services
+### Step 1: Set up the Azure services
You can set up your Azure environment to support continuous export using either:
You can set up your Azure environment to support continuous export using either:
For more detailed instructions, see [Prepare Azure resources for exporting to Splunk and QRadar](export-to-splunk-or-qradar.md).
-### Step 2. Connect the event hub to your preferred solution using the built-in connectors
+### Step 2: Connect the event hub to your preferred solution using the built-in connectors
Each SIEM platform has a tool to enable it to receive alerts from Azure Event Hubs. Install the tool for your platform to start receiving alerts.
defender-for-cloud Export To Splunk Or Qradar https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/export-to-splunk-or-qradar.md
In order to stream Microsoft Defender for Cloud security alerts to IBM QRadar an
To configure the Azure resources for QRadar and Splunk in the Azure portal:
-## Step 1. Create an Event Hubs namespace and event hub with send permissions
+## Step 1: Create an Event Hubs namespace and event hub with send permissions
1. In the [Event Hubs service](../event-hubs/event-hubs-create.md), create an Event Hubs namespace: 1. Select **Create**.
To configure the Azure resources for QRadar and Splunk in the Azure portal:
1. Select **Create** to create the policy.

   :::image type="content" source="media/export-to-siem/create-shared-access-policy.png" alt-text="Screenshot of creating a shared policy in Microsoft Event Hubs." lightbox="media/export-to-siem/create-shared-access-policy.png":::
-## Step 2. **For streaming to QRadar SIEM** - Create a Listen policy
+## Step 2: **For streaming to QRadar SIEM** - Create a Listen policy
1. Select **Add**, enter a unique policy name, and select **Listen**.
1. Select **Create** to create the policy.
To configure the Azure resources for QRadar and Splunk in the Azure portal:
:::image type="content" source="media/export-to-siem/create-shared-listen-policy.png" alt-text="Screenshot of creating a listen policy in Microsoft Event Hubs." lightbox="media/export-to-siem/create-shared-listen-policy.png":::
-## Step 3. Create a consumer group, then copy and save the name to use in the SIEM platform
+## Step 3: Create a consumer group, then copy and save the name to use in the SIEM platform
1. In the Entities section of the Event Hubs event hub menu, select **Event Hubs** and select the event hub you created.
To configure the Azure resources for QRadar and Splunk in the Azure portal:
1. Select **Consumer group**.
-## Step 4. Enable continuous export for the scope of the alerts
+## Step 4: Enable continuous export for the scope of the alerts
1. In the Azure search box, search for "policy" and go to **Policy**.
1. In the Policy menu, select **Definitions**.
To configure the Azure resources for QRadar and Splunk in the Azure portal:
1. Select **Review and Create** and **Create** to finish the process of defining the continuous export to Event Hubs.

   - Notice that when you activate a continuous export policy on the tenant (root management group level), it automatically streams your alerts on any **new** subscription created under this tenant.
-## Step 5. **For streaming alerts to QRadar SIEM** - Create a storage account
+## Step 5: **For streaming alerts to QRadar SIEM** - Create a storage account
1. Go to the Azure portal, select **Create a resource**, and select **Storage account**. If that option isn't shown, search for "storage account".
1. Select **Create**.
To configure the Azure resources for QRadar and Splunk in the Azure portal:
:::image type="content" source="media/export-to-siem/copy-storage-account-key.png" alt-text="Screenshot of copying storage account key." lightbox="media/export-to-siem/copy-storage-account-key.png":::
-## Step 6. **For streaming alerts to Splunk SIEM** - Create an Azure AD application
+## Step 6: **For streaming alerts to Splunk SIEM** - Create an Azure AD application
1. In the menu search box, search for "Azure Active Directory" and go to Azure Active Directory.
1. Go to the Azure portal, select **Create a resource**, and select **Azure Active Directory**. If that option isn't shown, search for "active directory".
To configure the Azure resources for QRadar and Splunk in the Azure portal:
1. After the secret is created, copy the Secret ID and save it for later use together with the Application ID and Directory (tenant) ID.
-## Step 7. **For streaming alerts to Splunk SIEM** - Allow Azure AD to read from the event hub
+## Step 7: **For streaming alerts to Splunk SIEM** - Allow Azure AD to read from the event hub
1. Go to the Event Hubs namespace you created.
1. In the menu, go to **Access control**.
defender-for-cloud File Integrity Monitoring Enable Ama https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/file-integrity-monitoring-enable-ama.md
Last updated 11/14/2022
To provide [File Integrity Monitoring (FIM)](file-integrity-monitoring-overview.md), the Azure Monitor Agent (AMA) collects data from machines according to [data collection rules](../azure-monitor/essentials/data-collection-rule-overview.md). When the current state of your system files is compared with the state during the previous scan, FIM notifies you about suspicious modifications.
+> [!NOTE]
+> As part of the updated Defender for Cloud strategy, the Azure Monitor Agent will no longer be required to receive all the capabilities of Defender for Servers. All features that currently rely on the Azure Monitor Agent, including those described on this page, will be available through [Microsoft Defender for Endpoint integration](integration-defender-for-endpoint.md) or [agentless scanning](concept-agentless-data-collection.md) by August 2024. To access the full capabilities of Defender for SQL servers on machines, the Azure Monitor Agent (also known as AMA) is required. For more information about the feature road map, see [this announcement](upcoming-changes.md#defender-for-cloud-plan-and-strategy-for-the-log-analytics-agent-deprecation).
+
File Integrity Monitoring with the Azure Monitor Agent offers:

- **Compatibility with the unified monitoring agent** - Compatible with the [Azure Monitor Agent](../azure-monitor/agents/agents-overview.md), which enhances security and reliability, and facilitates a multi-homing experience for storing data.
In this article you'll learn how to:
- - [Enable File Integrity Monitoring with AMA](#enable-file-integrity-monitoring-with-ama)
- - [Edit the list of tracked files and registry keys](#edit-the-list-of-tracked-files-and-registry-keys)
- - [Exclude machines from File Integrity Monitoring](#exclude-machines-from-file-integrity-monitoring)
+- [Enable File Integrity Monitoring with AMA](#enable-file-integrity-monitoring-with-ama)
+- [Edit the list of tracked files and registry keys](#edit-the-list-of-tracked-files-and-registry-keys)
+- [Exclude machines from File Integrity Monitoring](#exclude-machines-from-file-integrity-monitoring)
## Availability
To track changes to your files on machines with AMA:
To enable File Integrity Monitoring (FIM), use the FIM recommendation to select machines to monitor:
- 1. From Defender for Cloud's sidebar, open the **Recommendations** page.
- 1. Select the recommendation [File integrity monitoring should be enabled on machines](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/9b7d740f-c271-4bfd-88fb-515680c33440). Learn more about [Defender for Cloud recommendations](review-security-recommendations.md).
- 1. Select the machines that you want to use File Integrity Monitoring on, select **Fix**, and select **Fix X resources**.
+1. From Defender for Cloud's sidebar, open the **Recommendations** page.
+1. Select the recommendation [File integrity monitoring should be enabled on machines](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/9b7d740f-c271-4bfd-88fb-515680c33440). Learn more about [Defender for Cloud recommendations](review-security-recommendations.md).
+1. Select the machines that you want to use File Integrity Monitoring on, select **Fix**, and select **Fix X resources**.
+
+ The recommendation fix:
+
+ - Installs the `ChangeTracking-Windows` or `ChangeTracking-Linux` extension on the machines.
+ - Generates a data collection rule (DCR) for the subscription, named `Microsoft-ChangeTracking-[subscriptionId]-default-dcr`, that defines what files and registries should be monitored based on default settings. The fix attaches the DCR to all machines in the subscription that have AMA installed and FIM enabled.
+ - Creates a new Log Analytics workspace with the naming convention `defaultWorkspace-[subscriptionId]-fim` and with the default workspace settings.
- The recommendation fix:
+ You can update the DCR and Log Analytics workspace settings later.
- - Installs the `ChangeTracking-Windows` or `ChangeTracking-Linux` extension on the machines.
- - Generates a data collection rule (DCR) for the subscription, named `Microsoft-ChangeTracking-[subscriptionId]-default-dcr`, that defines what files and registries should be monitored based on default settings. The fix attaches the DCR to all machines in the subscription that have AMA installed and FIM enabled.
- - Creates a new Log Analytics workspace with the naming convention `defaultWorkspace-[subscriptionId]-fim` and with the default workspace settings.
-
- You can update the DCR and Log Analytics workspace settings later.
-
- 1. From Defender for Cloud's sidebar, go to **Workload protections** > **File integrity monitoring**, and select the banner to show the results for machines with Azure Monitor Agent.
+1. From Defender for Cloud's sidebar, go to **Workload protections** > **File integrity monitoring**, and select the banner to show the results for machines with Azure Monitor Agent.
:::image type="content" source="media/file-integrity-monitoring-enable-ama/file-integrity-monitoring-azure-monitoring-agent-banner.png" alt-text="Screenshot of banner in File integrity monitoring to show the results for machines with Azure Monitor Agent.":::
To edit the list of tracked files and registries:
1. Select the DCR that you want to update for a subscription. Each file in the list of Windows registry keys, Windows files, and Linux files contains a definition for a file or registry key, including name, path, and other options. You can also set **Enabled** to **False** to untrack the file or registry key without removing the definition.
-
+ Learn more about [system file and registry key definitions](../automation/change-tracking/manage-change-tracking.md#track-files).
-
+
1. Select a file, and then add or edit the file or registry key definition.
1. Select **Add** to save the changes.
Every machine in the subscription that is attached to the DCR is monitored. You
To exclude a machine from File Integrity Monitoring:
-1. In the list of monitored machines in the FIM results, select the menu (**...**) for the machine
+1. In the list of monitored machines in the FIM results, select the menu (**...**) for the machine
1. Select **Detach data collection rule**. :::image type="content" source="media/file-integrity-monitoring-enable-ama/file-integrity-monitoring-azure-monitoring-agent-detach-rule.png" alt-text="Screenshot of the option to detach a machine from a data collection rule and exclude the machines from File Integrity Monitoring." lightbox="media/file-integrity-monitoring-enable-ama/file-integrity-monitoring-azure-monitoring-agent-detach-rule.png":::
defender-for-cloud File Integrity Monitoring Enable Log Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/file-integrity-monitoring-enable-log-analytics.md
Last updated 11/14/2022
To provide [File Integrity Monitoring (FIM)](file-integrity-monitoring-overview.md), the Log Analytics agent uploads data to the Log Analytics workspace. By comparing the current state of these items with the state during the previous scan, FIM notifies you if suspicious modifications have been made.
+> [!NOTE]
+> As the Log Analytics agent (also known as MMA) is set to retire in [August 2024](https://azure.microsoft.com/updates/were-retiring-the-log-analytics-agent-in-azure-monitor-on-31-august-2024/), all Defender for Servers features that currently depend on it, including those described on this page, will be available through either [Microsoft Defender for Endpoint integration](integration-defender-for-endpoint.md) or [agentless scanning](concept-agentless-data-collection.md) before the retirement date. For more information about the roadmap for each of the features that currently rely on the Log Analytics agent, see [this announcement](upcoming-changes.md#defender-for-cloud-plan-and-strategy-for-the-log-analytics-agent-deprecation).
+
In this article, you'll learn how to:

- [Enable File Integrity Monitoring with the Log Analytics agent](#enable-file-integrity-monitoring-with-the-log-analytics-agent)
To disable FIM:
## Monitor workspaces, entities, and files
-### Audit monitored workspaces
+### Audit monitored workspaces
The **File integrity monitoring** dashboard displays for workspaces where FIM is enabled. The FIM dashboard opens after you enable FIM on a workspace or when you select a workspace in the **file integrity monitoring** window that already has FIM enabled.
The **Changes** tab (shown below) lists all changes for the workspace during the
### Edit monitored entities
-1. From the **File Integrity Monitoring dashboard** for a workspace, select **Settings** from the toolbar.
+1. From the **File Integrity Monitoring dashboard** for a workspace, select **Settings** from the toolbar.
:::image type="content" source="./media/file-integrity-monitoring-overview/file-integrity-monitoring-dashboard-settings.png" alt-text="Screenshot of accessing the file integrity monitoring settings for a workspace." lightbox="./media/file-integrity-monitoring-overview/file-integrity-monitoring-dashboard-settings.png":::
The **Changes** tab (shown below) lists all changes for the workspace during the
### Add a new entity to monitor
-1. From the **File Integrity Monitoring dashboard** for a workspace, select **Settings** from the toolbar.
+1. From the **File Integrity Monitoring dashboard** for a workspace, select **Settings** from the toolbar.
The **Workspace Configuration** opens.

1. On the **Workspace Configuration**:
- 1. Select the tab for the type of entity that you want to add: Windows registry, Windows files, Linux Files, file content, or Windows services.
- 1. Select **Add**.
+ 1. Select the tab for the type of entity that you want to add: Windows registry, Windows files, Linux Files, file content, or Windows services.
+ 1. Select **Add**.
In this example, we selected **Linux Files**.
The **Changes** tab (shown below) lists all changes for the workspace during the
### Folder and path monitoring using wildcards

Use wildcards to simplify tracking across directories. The following rules apply when you configure folder monitoring using wildcards:
-- Wildcards are required for tracking multiple files.
-- Wildcards can only be used in the last segment of a path, such as C:\folder\file or /etc/*.conf
-- If an environment variable includes a path that isn't valid, validation will succeed but the path will fail when inventory runs.
-- When setting the path, avoid general paths such as c:\*.* which will result in too many folders being traversed.
+
+- Wildcards are required for tracking multiple files.
+- Wildcards can only be used in the last segment of a path, such as `C:\folder\file` or `/etc/*.conf`.
+- If an environment variable includes a path that isn't valid, validation succeeds but the path fails when inventory runs.
+- When setting the path, avoid general paths such as `c:\*.*`, which results in too many folders being traversed.
## Compare baselines using File Integrity Monitoring
To configure FIM to monitor registry baselines:
1. In the **Add Windows Registry for Change Tracking** window, select the **Windows Registry Key** text box.
1. Enter the following registry key:
- ```
+ ```reg
    HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Netlogon\Parameters
    ```
To configure FIM to monitor registry baselines:
### Track changes to Windows files

1. In the **Add Windows File for Change Tracking** window, in the **Enter path** text box, enter the folder that contains the files that you want to track.
-In the example in the following figure,
-**Contoso Web App** resides in the D:\ drive within the **ContosWebApp** folder structure.
+In the example in the following figure, **Contoso Web App** resides in the D:\ drive within the **ContosWebApp** folder structure.
+
1. Create a custom Windows file entry by providing a name for the setting class, enabling recursion, and specifying the top folder with a wildcard (*) suffix.

   :::image type="content" source="./media/file-integrity-monitoring-enable-log-analytics/baselines-add-file.png" alt-text="Screenshot of enabling FIM on a file.":::
In the example in the following figure,
File Integrity Monitoring data resides within the Azure Log Analytics/ConfigurationChange table set.
- 1. Set a time range to retrieve a summary of changes by resource.
+1. Set a time range to retrieve a summary of changes by resource.
In the following example, we're retrieving all changes in the last 14 days in the categories of registry and files:
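A minimal sketch of such a query, assuming the standard `ConfigurationChange` schema in the Log Analytics workspace:

```kusto
// Summarize registry and file changes per computer over the last 14 days.
ConfigurationChange
| where TimeGenerated > ago(14d)
| where ConfigChangeType in ("Registry", "Files")
| summarize count() by Computer, ConfigChangeType
```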
File Integrity Monitoring data resides within the Azure Log Analytics/Configurat
1. To view details of the registry changes:
- 1. Remove **Files** from the **where** clause.
+ 1. Remove **Files** from the **where** clause.
1. Remove the summarization line and replace it with an ordering clause:
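A hedged sketch of the resulting query, again assuming the standard `ConfigurationChange` schema:

```kusto
// List individual registry changes, most recent first.
ConfigurationChange
| where TimeGenerated > ago(14d)
| where ConfigChangeType == "Registry"
| order by TimeGenerated desc
```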
defender-for-cloud How To Manage Attack Path https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/how-to-manage-attack-path.md
Title: Identify and remediate attack paths
description: Learn how to manage your attack path analysis and build queries to locate vulnerabilities in your multicloud environment. Previously updated : 07/10/2023 Last updated : 08/10/2023 # Identify and remediate attack paths
You can check out the full list of [Attack path names and descriptions](attack-p
| Aspect | Details |
|--|--|
-| Release state | GA (General Availability) |
+| Release state | GA (General Availability) for Azure, AWS <br> Preview for GCP |
| Prerequisites | - [Enable agentless scanning](enable-vulnerability-assessment-agentless.md), or [Enable Defender for Server P1 (which includes MDVM)](defender-for-servers-introduction.md) or [Defender for Server P2 (which includes MDVM and Qualys)](defender-for-servers-introduction.md). <br> - [Enable Defender CSPM](enable-enhanced-security.md) <br> - Enable agentless container posture extension in Defender CSPM, or [Enable Defender for Containers](defender-for-containers-enable.md), and install the relevant agents in order to view attack paths that are related to containers. This also gives you the ability to [query](how-to-manage-cloud-security-explorer.md#build-a-query-with-the-cloud-security-explorer) containers data plane workloads in security explorer. | | Required plans | - Defender Cloud Security Posture Management (CSPM) enabled | | Required roles and permissions: | - **Security Reader** <br> - **Security Admin** <br> - **Reader** <br> - **Contributor** <br> - **Owner** |
-| Clouds: | :::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds (Azure, AWS) <br>:::image type="icon" source="./media/icons/no-icon.png"::: Commercial clouds (GCP) <br>:::image type="icon" source="./media/icons/no-icon.png"::: National (Azure Government, Microsoft Azure operated by 21Vianet) |
+| Clouds: | :::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds (Azure, AWS, GCP) <br>:::image type="icon" source="./media/icons/no-icon.png"::: National (Azure Government, Azure China 21Vianet) |
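For programmatic access to the attack paths this article describes, a hedged Azure Resource Graph sketch (assuming attack paths are exposed under the `securityresources` table as `microsoft.security/attackpaths`; the shape of `properties` may vary, so inspect the raw payload first):

```kusto
// List attack paths discovered by Defender CSPM across onboarded environments.
securityresources
| where type =~ "microsoft.security/attackpaths"
| project id, subscriptionId, properties
```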
## Features of the attack path overview page
defender-for-cloud How To Manage Cloud Security Explorer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/how-to-manage-cloud-security-explorer.md
Title: Build queries with cloud security explorer
description: Learn how to build queries in cloud security explorer to find vulnerabilities that exist on your multicloud environment. Previously updated : 08/10/2023 Last updated : 08/16/2023 # Build queries with cloud security explorer
Defender for Cloud's contextual security capabilities assists security teams in
Use the cloud security explorer, to proactively identify security risks in your cloud environment by running graph-based queries on the cloud security graph, which is Defender for Cloud's context engine. You can prioritize your security team's concerns, while taking your organization's specific context and conventions into account.
-With the cloud security explorer, you can query all of your security issues and environment context such as assets inventory, exposure to internet, permissions, and lateral movement between resources and across multiple clouds (Azure and AWS).
+With the cloud security explorer, you can query all of your security issues and environment context such as asset inventory, exposure to the internet, permissions, and lateral movement between resources and across multiple clouds (Azure, AWS, and GCP).
Learn more about [the cloud security graph, attack path analysis, and the cloud security explorer](concept-attack-path.md).
Learn more about [the cloud security graph, attack path analysis, and the cloud
| Release state | GA (General Availability) |
| Required plans | - Defender Cloud Security Posture Management (CSPM) enabled<br>- Defender for Servers P2 customers can use the explorer UI to query for keys and secrets, but must have Defender CSPM enabled to get the full value of the Explorer. |
| Required roles and permissions: | - **Security Reader** <br> - **Security Admin** <br> - **Reader** <br> - **Contributor** <br> - **Owner** |
-| Clouds: | :::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds (Azure, AWS) <br>:::image type="icon" source="./media/icons/no-icon.png"::: Commercial clouds (GCP) <br>:::image type="icon" source="./media/icons/no-icon.png"::: National (Azure Government, Microsoft Azure operated by 21Vianet) |
+| Clouds: | :::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds (Azure, AWS) <br>:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds - GCP (Preview) <br>:::image type="icon" source="./media/icons/no-icon.png"::: National (Azure Government, Microsoft Azure operated by 21Vianet) |
## Prerequisites
defender-for-cloud Incidents Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/incidents-reference.md
Learn how to [manage security incidents](incidents.md#managing-security-incident
| **Security incident detected suspicious Kubernetes cluster activity (Preview)** | This incident indicates that suspicious activity has been detected on your Kubernetes cluster following suspicious user activity. Multiple alerts from different Defender for Cloud plans have been triggered on the same cluster, which increases the fidelity of malicious activity in your environment. The suspicious activity on your Kubernetes cluster might indicate that a threat actor has gained unauthorized access to your environment and is attempting to compromise it. | High |
| **Security incident detected suspicious storage activity (Preview)** | Scenario 1: This incident indicates that suspicious storage activity has been detected following suspicious user or service principal activity. Multiple alerts from different Defender for Cloud plans have been triggered on the same resource, which increases the fidelity of malicious activity in your environment. Suspicious account activity might indicate that a threat actor gained unauthorized access to your environment, and the succeeding suspicious storage activity may suggest they are attempting to access potentially sensitive data. <br><br> Scenario 2: This incident indicates that suspicious storage activity has been detected following suspicious user or service principal activity. Multiple alerts from different Defender for Cloud plans have been triggered from the same IP address, which increases the fidelity of malicious activity in your environment. Suspicious account activity might indicate that a threat actor gained unauthorized access to your environment, and the succeeding suspicious storage activity may suggest they are attempting to access potentially sensitive data. | High |
| **Security incident detected suspicious Azure toolkit activity (Preview)** | This incident indicates that suspicious activity has been detected following the potential usage of an Azure toolkit. Multiple alerts from different Defender for Cloud plans have been triggered on the same user or service principal, which increases the fidelity of malicious activity in your environment. The usage of an Azure toolkit can indicate that an attacker has gained unauthorized access to your environment and is attempting to compromise it. | High |
+|**Security incident detected suspicious DNS activity (Preview)** | Scenario 1: This incident indicates that suspicious DNS activity has been detected. Multiple alerts from different Defender for Cloud plans have been triggered on the same resource, which increases the fidelity of malicious activity in your environment. Suspicious DNS activity might indicate that a threat actor gained unauthorized access to your environment and is attempting to compromise it. <br><br> Scenario 2: This incident indicates that suspicious DNS activity has been detected. Multiple alerts from different Defender for Cloud plans have been triggered from the same IP address, which increases the fidelity of malicious activity in your environment. Suspicious DNS activity might indicate that a threat actor gained unauthorized access to your environment and is attempting to compromise it. |High|
+|**Security incident detected suspicious SQL activity (Preview)** | Scenario 1: This incident indicates that suspicious SQL activity has been detected. Multiple alerts from different Defender for Cloud plans have been triggered from the same IP address, which increases the fidelity of malicious activity in your environment. Suspicious SQL activity might indicate that a threat actor is targeting your SQL server and is attempting to compromise it. <br><br> Scenario 2: This incident indicates that suspicious SQL activity has been detected. Multiple alerts from different Defender for Cloud plans have been triggered on the same resource, which increases the fidelity of malicious activity in your environment. Suspicious SQL activity might indicate that a threat actor is targeting your SQL server and is attempting to compromise it. |High|
| **Security incident detected suspicious app service activity (Preview)** | Scenario 1: This incident indicates that suspicious activity has been detected in your app service environment. Multiple alerts from different Defender for Cloud plans have been triggered on the same resource, which increases the fidelity of malicious activity in your environment. Suspicious app service activity might indicate that a threat actor is targeting your application and may be attempting to compromise it. <br><br> Scenario 2: This incident indicates that suspicious activity has been detected in your app service environment. Multiple alerts from different Defender for Cloud plans have been triggered from the same IP address, which increases the fidelity of malicious activity in your environment. Suspicious app service activity might indicate that a threat actor is targeting your application and may be attempting to compromise it. | High |
| **Security incident detected compromised machine** | This incident indicates suspicious activity on one or more of your virtual machines. Multiple alerts from different Defender for Cloud plans have been triggered in chronological order on the same resource, following the MITRE ATT&CK framework. This might indicate a threat actor has gained unauthorized access to your environment and successfully compromised this machine. | Medium/High |
| **Security incident detected compromised machine with botnet communication** | This incident indicates suspicious botnet activity on your virtual machine. Multiple alerts from different Defender for Cloud plans have been triggered in chronological order on the same resource, following the MITRE ATT&CK framework. This might indicate a threat actor has gained unauthorized access to your environment and is attempting to compromise it. | Medium/High |
Learn how to [manage security incidents](incidents.md#managing-security-incident
| **Security incident detected compromised machines** | This incident indicates suspicious activity on one or more of your virtual machines. Multiple alerts from different Defender for Cloud plans have been triggered in chronological order on the same resources, following the MITRE ATT&CK framework. This might indicate a threat actor has gained unauthorized access to your environment and successfully compromised these machines. | Medium/High |
| **Security incident detected compromised machines with malicious outgoing activity** | This incident indicates suspicious outgoing activity from your virtual machines. Multiple alerts from different Defender for Cloud plans have been triggered in chronological order on the same resources, following the MITRE ATT&CK framework. This might indicate a threat actor has gained unauthorized access to your environment and is attempting to compromise it. | Medium/High |
| **Security incident detected on multiple machines** | This incident indicates suspicious activity on one or more of your virtual machines. Multiple alerts from different Defender for Cloud plans have been triggered in chronological order on the same resource, following the MITRE ATT&CK framework. This might indicate a threat actor has gained unauthorized access to your environment and is attempting to compromise it. | Medium/High |
-| **Security incident with shared process detected** | Scenario 1: This incident indicates suspicious activity on your virtual machine. Multiple alerts from different Defender for Cloud plans have been triggered sharing the same process. This might indicate a threat actor has gained unauthorized access to your environment and is attempting to compromise it. <br><br> Scenario 2: This incident indicates suspicious activity on your virtual machines. Multiple alerts from different Defender for Cloud plans have been triggered sharing the same process. This might indicate a threat actor has gained unauthorized access to your environment and is attempting to compromise it. | Medium/High |
+| **Security incident with shared process detected** | Scenario 1: This incident indicates suspicious activity on your virtual machine. Multiple alerts from different Defender for Cloud plans have been triggered sharing the same process. This might indicate a threat actor has gained unauthorized access to your environment and is attempting to compromise it. <br><br> Scenario 2: This incident indicates suspicious activity on your virtual machines. Multiple alerts from different Defender for Cloud plans have been triggered sharing the same process. This might indicate a threat actor has gained unauthorized access to your environment and is attempting to compromise it. | Medium/High |
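These incidents aggregate alerts from multiple Defender for Cloud plans. To enumerate the underlying alerts programmatically, a hedged Azure Resource Graph sketch (assuming alerts are exposed under the `securityresources` table; the shape and casing of `properties` may vary, so inspect the raw payload first):

```kusto
// Enumerate Defender for Cloud security alerts that incidents are correlated from.
securityresources
| where type =~ "microsoft.security/locations/alerts"
| project id, subscriptionId, properties
```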
## Next steps

[Manage security incidents in Microsoft Defender for Cloud](incidents.md)
defender-for-cloud Just In Time Access Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/just-in-time-access-usage.md
description: Learn how just-in-time VM access (JIT) in Microsoft Defender for Cl
Previously updated : 06/29/2023 Last updated : 08/27/2023 # Enable just-in-time access on VMs
You can use Microsoft Defender for Cloud's just-in-time (JIT) access to protect
Learn more about [how JIT works](just-in-time-access-overview.md) and the [permissions required to configure and use JIT](#prerequisites).
-In this article, you learn you how to include JIT in your security program, including how to:
+In this article, you learn how to include JIT in your security program, including how to:
- Enable JIT on your VMs from the Azure portal or programmatically - Request access to a VM that has JIT enabled from the Azure portal or programmatically
In this article, you learn you how to include JIT in your security program, incl
## Prerequisites -- JIT Requires [Microsoft Defender for Servers Plan 2](plan-defender-for-servers-select-plan.md#plan-features) to be enabled on the subscription.
+- JIT requires [Microsoft Defender for Servers Plan 2](plan-defender-for-servers-select-plan.md#plan-features) to be enabled on the subscription.
- **Reader** and **SecurityReader** roles can both view the JIT status and parameters. -- If you want to create custom roles that can work with JIT, you need the details from the following table:
+- If you want to create custom roles that work with JIT, you need the details from the following table:
| To enable a user to: | Permissions to set|
| | |
In this article, you learn you how to include JIT in your security program, incl
> [!TIP] > To create a least-privileged role for users that need to request JIT access to a VM, and perform no other JIT operations, use the [Set-JitLeastPrivilegedRole script](https://github.com/Azure/Azure-Security-Center/tree/main/Powershell%20scripts/JIT%20Scripts/JIT%20Custom%20Role) from the Defender for Cloud GitHub community pages. +
+> [!NOTE]
+> In order to successfully create a custom JIT policy, the policy name, together with the targeted VM name, must not exceed a total of 56 characters.
+ ## Work with JIT VM access using Microsoft Defender for Cloud
-You can use Defender for Cloud or you can programmatically enable JIT VM access with your own custom options, or you can enable JIT with default, hard-coded parameters from Azure Virtual machines.
+You can enable JIT VM access with your own custom options from Defender for Cloud or programmatically, or you can enable JIT with default, hard-coded parameters from the Azure virtual machines pages.
**Just-in-time VM access** shows your VMs grouped into:
You can use Defender for Cloud or you can programmatically enable JIT VM access
### Enable JIT on your VMs from Microsoft Defender for Cloud

From Defender for Cloud, you can enable and configure the JIT VM access.

1. Open the **Workload protections** and, in the advanced protections, select **Just-in-time VM access**.
-1. In the **Not configured** virtual machines, mark the VMs to protect with JIT and select **Enable JIT on VMs**.
+1. In the **Not configured** virtual machines tab, mark the VMs to protect with JIT and select **Enable JIT on VMs**.
The JIT VM access page opens listing the ports that Defender for Cloud recommends protecting:

- 22 - SSH
To edit the existing JIT rules for a VM:
1. Open the **Workload protections** and, in the advanced protections, select **Just-in-time VM access**.
-1. In the **Configured** virtual machines, right-click on a VM and select edit.
+1. In the **Configured** virtual machines tab, right-click on a VM and select **Edit**.
+1. In the **JIT VM access configuration**, you can either edit the list of ports or select **Add** to add a new custom port.
When a VM has a JIT enabled, you have to request access to connect to it. You ca
1. From the **Just-in-time VM access** page, select the **Configured** tab.
-1. Select the VMs you want to access.
+1. Select the VMs you want to access:
- The icon in the **Connection Details** column indicates whether JIT is enabled on the network security group or firewall. If it's enabled on both, only the firewall icon appears.
When a VM has a JIT enabled, you have to request access to connect to it. You ca
1. Select **Open ports**.
-> [!NOTE]
-> If a user who is requesting access is behind a proxy, you can enter the IP address range of the proxy.
+ > [!NOTE]
+ > If a user who is requesting access is behind a proxy, you can enter the IP address range of the proxy.
## Other ways to work with JIT VM access
You can enable JIT on a VM from the Azure virtual machines pages of the Azure po
1. From Defender for Cloud's menu, select **Just-in-time VM access**.
- 1. From the **Configured** tab, right-click on the VM to which you want to add a port, and select edit.
+ 1. From the **Configured** tab, right-click on the VM to which you want to add a port, and select **Edit**.
![Editing a JIT VM access configuration in Microsoft Defender for Cloud.](./media/just-in-time-access-usage/jit-policy-edit-security-center.png)
defender-for-cloud Kubernetes Workload Protections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/kubernetes-workload-protections.md
Previously updated : 07/11/2023 Last updated : 09/04/2023 # Protect your Kubernetes data plane hardening
You can enable the Azure Policy for Kubernetes by one of two ways:
- Enable for all current and future clusters using plan/connector settings
  - [Enabling for Azure subscriptions or on-premises](#enabling-for-azure-subscriptions-or-on-premises)
  - [Enabling for GCP projects](#enabling-for-gcp-projects)
-- [Enable for existing clusters using recommendations (specific clusters or all clusters)](#manually-deploy-the-add-on-to-clusters-using-recommendations-on-specific-clusters).
+- [Deploy Azure Policy for Kubernetes on existing clusters](#deploy-azure-policy-for-kubernetes-on-existing-clusters)
-### Enable for all current and future clusters using plan/connector settings
+### Enable Azure Policy for Kubernetes for all current and future clusters using plan/connector settings
> [!NOTE] > When you enable this setting, the Azure Policy for Kubernetes pods are installed on the cluster. Doing so allocates a small amount of CPU and memory for the pods to use. This allocation might reach maximum capacity, but it doesn't affect the rest of the CPU and memory on the resource.
You can enable the Azure Policy for Kubernetes by one of two ways:
#### Enabling for Azure subscriptions or on-premises
-When you enable Microsoft Defender for Containers, the "Azure Policy for Kubernetes" setting is enabled by default for the Azure Kubernetes Service, and for Azure Arc-enabled Kubernetes clusters in the relevant subscription. If you disable the setting on initial configuration you can enable it afterwards manually.
+When you enable Microsoft Defender for Containers, the "Azure Policy for Kubernetes" setting is enabled by default for the Azure Kubernetes Service, and for Azure Arc-enabled Kubernetes clusters in the relevant subscription. If you disable the setting on initial configuration, you can enable it afterwards manually.
If you disabled the "Azure Policy for Kubernetes" settings under the containers plan, you can follow the below steps to enable it across all clusters in your subscription:
If you disabled the "Azure Policy for Kubernetes" settings under the containers
#### Enabling for GCP projects
-When you enable Microsoft Defender for Containers on a GCP connector, the "Azure Policy Extension for Azure Arc" setting is enabled by default for the Google Kubernetes Engine in the relevant project. If you disable the setting on initial configuration you can enable it afterwards manually.
+When you enable Microsoft Defender for Containers on a GCP connector, the "Azure Policy Extension for Azure Arc" setting is enabled by default for the Google Kubernetes Engine in the relevant project. If you disable the setting on initial configuration, you can enable it afterwards manually.
If you disabled the "Azure Policy Extension for Azure Arc" settings under the GCP connector, you can follow the below steps to [enable it on your GCP connector](defender-for-containers-enable.md?tabs=aks-deploy-portal%2Ck8s-deploy-asc%2Ck8s-verify-asc%2Ck8s-remove-arc%2Caks-removeprofile-api&pivots=defender-for-container-gke&preserve-view=true#protect-google-kubernetes-engine-gke-clusters).
-### Manually deploy the add-on to clusters using recommendations on specific clusters
+### Deploy Azure Policy for Kubernetes on existing clusters
-You can manually configure the Kubernetes data plane hardening add-on, or extension on specific cluster through the Recommendations page using the following recommendations:
-
-- **Azure Recommendations** - `"Azure Policy add-on for Kubernetes should be installed and enabled on your clusters"`, or `"Azure policy extension for Kubernetes should be installed and enabled on your clusters"`.
-- **GCP Recommendation** - `"GKE clusters should have Microsoft Defender's extension for Azure Arc installed"`.
-- **AWS Recommendation** - `"EKS clusters should have Microsoft Defender's extension for Azure Arc installed"`.
-
-Once enabled, the hardening recommendation becomes available (some of the recommendations require another configuration to work).
+You can manually configure Azure Policy for Kubernetes on existing Kubernetes clusters through the Recommendations page. Once enabled, the hardening recommendations become available (some of the recommendations require additional configuration to work).
> [!NOTE]
-> For AWS it isn't possible to do onboarding at scale using the connector, but it can be installed on all clusters or specific clusters using the recommendation ["EKS clusters should have Microsoft Defender's extension for Azure Arc installed"](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/38307993-84fb-4636-8ce7-3a64466bb5cc).
+> For AWS it isn't possible to do onboarding at scale using the connector, but it can be installed on all existing clusters or on specific clusters using the recommendation [Azure Arc-enabled Kubernetes clusters should have the Azure policy extension for Kubernetes extension installed](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/0642d770-b189-42ef-a2ce-9dcc3ec6c169/subscriptionIds~/%5B%22212f9889-769e-45ae-ab43-6da33674bd26%22%2C%2204cd6fff-ef34-415e-b907-3c90df65c0e5%22%5D/showSecurityCenterCommandBar~/false/assessmentOwners~/null).
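To find clusters that are unhealthy for this recommendation programmatically, a hedged Azure Resource Graph sketch (the assessment key comes from the portal link above):

```kusto
// Clusters that still need the Azure Policy extension for Kubernetes installed.
securityresources
| where type == "microsoft.security/assessments"
| where name == "0642d770-b189-42ef-a2ce-9dcc3ec6c169"
| where properties.status.code == "Unhealthy"
| project resourceId = tostring(properties.resourceDetails.Id)
```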
-**To deploy the add-on to specified clusters**:
+**To deploy Azure Policy for Kubernetes to specified clusters**:
1. From the recommendations page, search for the relevant recommendation:
- - **Azure** - `Azure Kubernetes Service clusters should have the Azure Policy add-on for Kubernetes installed` or `Azure policy extension for Kubernetes should be installed and enabled on your clusters`
- - **AWS** - `EKS clusters should have Microsoft Defender's extension for Azure Arc installed`
- - **GCP** - `GKE clusters should have Microsoft Defender's extension for Azure Arc installed`
- :::image type="content" source="./media/kubernetes-workload-protections/azure-kubernetes-service-clusters-recommendation.png" alt-text="Screenshot showing the Azure Kubernetes service clusters recommendation." lightbox="media/kubernetes-workload-protections/azure-kubernetes-service-clusters-recommendation.png":::
+ - **Azure -** `"Azure Kubernetes Service clusters should have the Azure Policy add-on for Kubernetes installed"`
+ - **GCP** - `"GKE clusters should have the Azure Policy extension"`.
+ - **AWS and On-premises** - `"Azure Arc-enabled Kubernetes clusters should have the Azure policy extension for Kubernetes extension installed"`.
+ :::image type="content" source="./media/kubernetes-workload-protections/azure-kubernetes-service-clusters-recommendation.png" alt-text="Screenshot showing the Azure Kubernetes service clusters recommendation." lightbox="media/kubernetes-workload-protections/azure-kubernetes-service-clusters-recommendation.png":::
- > [!TIP]
- > The recommendation is included in five different security controls and it doesn't matter which one you select in the next step.
+ > [!TIP]
+ > The recommendation is included in different security controls, and it doesn't matter which one you select in the next step.
1. From any of the security controls, select the recommendation to see the resources on which you can install the add-on.
Once enabled, the hardening recommendation becomes available (some of the recomm
## View and configure the bundle of recommendations
-Approximately 30 minutes after the add-on installation completes, Defender for Cloud shows the clustersΓÇÖ health status for the following recommendations, each in the relevant security control as shown:
+Approximately 30 minutes after the Azure Policy for Kubernetes installation completes, Defender for Cloud shows the clusters' health status for the following recommendations, each in the relevant security control as shown:
> [!NOTE]
-> If you're installing the add-on/extension for the first time, these recommendations will appear as new additions in the list of recommendations.
+> If you're installing the Azure Policy for Kubernetes for the first time, these recommendations will appear as new additions in the list of recommendations.
> [!TIP] > Some recommendations have parameters that must be customized via Azure Policy to use them effectively. For example, to benefit from the recommendation **Container images should be deployed only from trusted registries**, you'll have to define your trusted registries. If you don't enter the necessary parameters for the recommendations that require configuration, your workloads will be shown as unhealthy.
defender-for-cloud Management Groups Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/management-groups-roles.md
You can add subscriptions to the management group that you created.
Once the Azure roles have been assigned to the users, the tenant administrator should remove itself from the user access administrator role.
-1. Sign in to the [Azure portal](https://portal.azure.com) or the [Azure Active Directory admin center](https://aad.portal.azure.com).
+1. Sign in to the [Azure portal](https://portal.azure.com).
2. In the navigation list, select **Azure Active Directory** and then select **Properties**.
defender-for-cloud Multi Factor Authentication Enforcement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/multi-factor-authentication-enforcement.md
Title: Security recommendations for multi-factor authentication description: Learn how to enforce multi-factor authentication for your Azure subscriptions using Microsoft Defender for Cloud Previously updated : 06/28/2023 Last updated : 08/22/2023
-# Manage multi-factor authentication (MFA) enforcement on your subscriptions
+# Manage multi-factor authentication (MFA) on your subscriptions
-If you're using passwords, only to authenticate your users, you're leaving an attack vector open. Users often use weak passwords or reuse them for multiple services. With [MFA](https://www.microsoft.com/security/business/identity/mfa) enabled, your accounts are more secure, and users can still authenticate to almost any application with single sign-on (SSO).
+If you're using passwords only to authenticate your users, you're leaving an attack vector open. Users often use weak passwords or reuse them for multiple services. With [MFA](https://www.microsoft.com/security/business/identity/mfa) enabled, your accounts are more secure, and users can still authenticate to almost any application with single sign-on (SSO).
-There are multiple ways to enable MFA for your Azure Active Directory (AD) users based on the licenses that your organization owns. This page provides the details for each in the context of Microsoft Defender for Cloud.
+There are multiple ways to enable MFA for your Azure Active Directory (Azure AD) users based on the licenses that your organization owns. This page provides the details for each in the context of Microsoft Defender for Cloud.
## MFA and Microsoft Defender for Cloud

Defender for Cloud places a high value on MFA. The security control that contributes the most to your secure score is **Enable MFA**.
-The recommendations in the Enable MFA control ensure you're meeting the recommended practices for users of your subscriptions:
+The following recommendations in the Enable MFA control ensure you're meeting the recommended practices for users of your subscriptions:
- Accounts with owner permissions on Azure resources should be MFA enabled
- Accounts with write permissions on Azure resources should be MFA enabled
- Accounts with read permissions on Azure resources should be MFA enabled
-There are three ways to enable MFA and be compliant with the two recommendations in Defender for Cloud: security defaults, per-user assignment, conditional access (CA) policy.
+
+There are three ways to enable MFA and be compliant with these recommendations in Defender for Cloud: security defaults, per-user assignment, and conditional access (CA) policy.
### Free option - security defaults
Customers with Microsoft 365 can use **Per-user assignment**. In this scenario,
### MFA for Azure AD Premium customers
-For an improved user experience, upgrade to Azure AD Premium P1 or P2 for **conditional access (CA) policy** options. To configure a CA policy, you'll need [Azure Active Directory (AD) tenant permissions](../active-directory/roles/permissions-reference.md).
+For an improved user experience, upgrade to Azure AD Premium P1 or P2 for **conditional access (CA) policy** options. To configure a CA policy, you need [Azure Active Directory (Azure AD) tenant permissions](../active-directory/roles/permissions-reference.md).
Your CA policy must:
Learn more in the [Azure Conditional Access documentation](../active-directory/c
## Identify accounts without multi-factor authentication (MFA) enabled
-You can view the list of user accounts without MFA enabled from either the Defender for Cloud recommendations details page, or using Azure Resource Graph.
+You can view the list of user accounts without MFA enabled from either the Defender for Cloud recommendations details page, or by using the Azure Resource Graph.
### View the accounts without MFA enabled in the Azure portal
To see which accounts don't have MFA enabled, use the following Azure Resource G
1. Open **Azure Resource Graph Explorer**.
- :::image type="content" source="./media/multi-factor-authentication-enforcement/opening-resource-graph-explorer.png" alt-text="Launching Azure Resource Graph Explorer** recommendation page" :::
+ :::image type="content" source="./media/multi-factor-authentication-enforcement/opening-resource-graph-explorer.png" alt-text="Screenshot showing how to launch the Azure Resource Graph Explorer recommendation page." lightbox="media/multi-factor-authentication-enforcement/opening-resource-graph-explorer.png":::
1. Enter the following query and select **Run query**.
- ```kusto
+ ```
securityresources
- | where type == "microsoft.security/assessments"
- | where properties.displayName contains "Accounts with owner permissions on Azure resources should be MFA enabled"
- | where properties.status.code == "Unhealthy"
+ | where type =~ "microsoft.security/assessments/subassessments"
+ | where id has "assessments/dabc9bc4-b8a8-45bd-9a5a-43000df8aa1c" or id has "assessments/c0cb17b2-0607-48a7-b0e0-903ed22de39b" or id has "assessments/6240402e-f77c-46fa-9060-a7ce53997754"
+ | parse id with start "/assessments/" assessmentId "/subassessments/" userObjectId
+ | summarize make_list(userObjectId) by strcat(tostring(properties.displayName), " (", assessmentId, ")")
+ | project ["Recommendation Name"] = Column1 , ["Account ObjectIDs"] = list_userObjectId
    ```

1. The `additionalData` property reveals the list of account object IDs for accounts that don't have MFA enforced.

   > [!NOTE]
- > The accounts are shown as object IDs rather than account names to protect the privacy of the account holders.
+ > The 'Account ObjectIDs' column contains the list of account object IDs for accounts that don't have MFA enforced per recommendation.
+
+ > [!TIP]
+ > Alternatively, you can use the Defender for Cloud REST API method [Assessments - Get](/rest/api/defenderforcloud/assessments/get).
+
+## Limitations
+
+- The Conditional Access feature to enforce MFA on external users/tenants isn't supported yet.
+- A Conditional Access policy applied to Azure AD roles (such as all global admins, external users, external domain, and so on) isn't supported yet.
+- External MFA solutions such as Okta, Ping, Duo, and others aren't supported within the identity MFA recommendations.
-> [!TIP]
-> Alternatively, you can use the Defender for Cloud REST API method [Assessments - Get](/rest/api/defenderforcloud/assessments/get).
## Next steps
-To learn more about recommendations that apply to other Azure resource types, see the following article:
+To learn more about recommendations that apply to other Azure resource types, see the following articles:
- [Protecting your network in Microsoft Defender for Cloud](protect-network-resources.md) - Check out [common questions](faq-general.yml) about MFA.
defender-for-cloud Plan Defender For Servers Agents https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/plan-defender-for-servers-agents.md
To plan for Azure Arc deployment:
1. Azure Arc installs the Connected Machine agent to connect to and manage machines that are hosted outside of Azure. Review the following information: - The [agent components and data collected from machines](../azure-arc/servers/agent-overview.md#agent-resources).
- - [Network and internet access ](../azure-arc/servers/network-requirements.md) for the agent.
+ - [Network and internet access](../azure-arc/servers/network-requirements.md) for the agent.
- [Connection options](../azure-arc/servers/deployment-options.md) for the agent. ## Log Analytics agent and Azure Monitor agent
+> [!NOTE]
+> As the Log Analytics agent is set to retire in August 2024 and as part of the Defender for Cloud [updated strategy](upcoming-changes.md#defender-for-cloud-plan-and-strategy-for-the-log-analytics-agent-deprecation), all **Defender for Servers** features and capabilities will be provided either through [Microsoft Defender for Endpoint integration](integration-defender-for-endpoint.md) or [agentless scanning](concept-agentless-data-collection.md), without dependency on either the Log Analytics agent (MMA) or the Azure Monitor agent (AMA). As a result, the shared autoprovisioning process for both agents will be adjusted accordingly. For more information about this change, see [this announcement](upcoming-changes.md#defender-for-cloud-plan-and-strategy-for-the-log-analytics-agent-deprecation).
Defender for Cloud uses the Log Analytics agent and the Azure Monitor agent to collect information from compute resources. Then, it sends the data to a Log Analytics workspace for more analysis. Review the [differences and recommendations for both agents](../azure-monitor/agents/agents-overview.md). The following table describes the agents that are used in Defender for Servers:

Feature | Log Analytics agent | Azure Monitor agent
--- | --- | ---
-Foundational CSPM recommendations (free) that depend on the agent: [OS baseline recommendation](apply-security-baseline.md) (Azure VMs) | :::image type="icon" source="./medi) is used.
-Foundational CSPM: [System updates recommendations](recommendations-reference.md#compute-recommendations) (Azure VMs) | :::image type="icon" source="./media/icons/yes-icon.png" alt-text="Icon that shows it's supported by the Log Analytics agent"::: | Not yet available.
-Foundational CSPM: [Antimalware/endpoint protection recommendations](endpoint-protection-recommendations-technical.md) (Azure VMs) | :::image type="icon" source="./media/icons/yes-icon.png" alt-text="Icon that shows it's supported by the Log Analytics agent."::: | :::image type="icon" source="./media/icons/yes-icon.png" alt-text="Icon that shows it's supported by the Azure Monitor agent.":::
-Attack detection at the OS level and network layer, including fileless attack detection<br/><br/> Plan 1 relies on Defender for Endpoint capabilities for attack detection. | :::image type="icon" source="./media/icons/yes-icon.png" alt-text="Icon that shows it's supported by the Log Analytics agent. Plan 1 relies on Defender for Endpoint.":::<br/><br/> Plan 2| :::image type="icon" source="./media/icons/yes-icon.png" alt-text="Icon that shows it's supported by the Azure Monitor agent. Plan 1 relies on Defender for Endpoint.":::<br/><br/> Plan 2
-File integrity monitoring (Plan 2 only) | :::image type="icon" source="./media/icons/yes-icon.png" alt-text="Icon that shows it's supported by the Log Analytics agent."::: | :::image type="icon" source="./media/icons/yes-icon.png" alt-text="Icon that shows it's supported by the Azure Monitor agent.":::
-[Adaptive application controls](adaptive-application-controls.md) (Plan 2 only) | :::image type="icon" source="./media/icons/yes-icon.png" alt-text="Icon that shows it's supported by the Log Analytics agent."::: | :::image type="icon" source="./media/icons/yes-icon.png" alt-text="Icon that shows it's supported by the Azure Monitor agent.":::
+Foundational CSPM recommendations (free) that depend on the agent: [OS baseline recommendation](apply-security-baseline.md) (Azure VMs) | :::image type="icon" source="./medi) is used.
+Foundational CSPM: [System updates recommendations](recommendations-reference.md#compute-recommendations) (Azure VMs) | :::image type="icon" source="./media/icons/yes-icon.png" ::: | Not yet available.
+Foundational CSPM: [Antimalware/endpoint protection recommendations](endpoint-protection-recommendations-technical.md) (Azure VMs) | :::image type="icon" source="./media/icons/yes-icon.png" ::: | :::image type="icon" source="./media/icons/yes-icon.png" :::
+Attack detection at the OS level and network layer, including fileless attack detection<br/><br/> Plan 1 relies on Defender for Endpoint capabilities for attack detection. | :::image type="icon" source="./media/icons/yes-icon.png" :::<br/><br/> Plan 2| :::image type="icon" source="./media/icons/yes-icon.png" :::<br/><br/> Plan 2
+File integrity monitoring (Plan 2 only) | :::image type="icon" source="./media/icons/yes-icon.png" ::: | :::image type="icon" source="./media/icons/yes-icon.png" :::
+[Adaptive application controls](adaptive-application-controls.md) (Plan 2 only) | :::image type="icon" source="./media/icons/yes-icon.png" ::: | :::image type="icon" source="./media/icons/yes-icon.png" :::
## Qualys extension
defender-for-cloud Plan Defender For Servers Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/plan-defender-for-servers-scale.md
You can use a policy definition to enable Defender for Servers at scale:
- To get the built-in *Configure Azure Defender for Servers to be enabled* policy definition, in the Azure portal for your deployment, go to **Azure Policy** > **Policy Definitions**.
- :::image type="content" source="media/plan-defender-for-servers-scale/select-policy-definition.png" alt-text="Screenshot that shows the Configure Azure Defender for Servers to be enabled policy definition." lightbox="media/plan-defender-for-servers-scale/select-policy-definition.png":::
+ :::image type="content" source="media/plan-defender-for-servers-scale/select-policy-definition.png" alt-text="Screenshot that shows the Configure Azure Defender for Servers to be enabled policy definition." lightbox="media/plan-defender-for-servers-scale/select-policy-definition.png":::
- Alternatively, you can use a [custom policy](https://github.com/Azure/Microsoft-Defender-for-Cloud/tree/main/Policy/Enable%20Defender%20for%20Servers%20plans) to enable Defender for Servers and select the plan at the same time.
- You can enable only one Defender for Servers plan on each subscription. You can't enable both Defender for Servers Plan 1 and Plan 2 on the same subscription.
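For readers who prefer to script this step, the following is a minimal sketch of assigning the built-in definition with the Azure SDK for Python. It assumes the `azure-identity` and `azure-mgmt-resource` packages, looks the definition up by display name rather than hard-coding its ID, and uses an assignment name of our own choosing.

```python
# Minimal sketch: assign the built-in "Configure Azure Defender for Servers
# to be enabled" policy definition at subscription scope.
# Assumes: pip install azure-identity azure-mgmt-resource
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource.policy import PolicyClient

subscription_id = "<subscription-id>"  # placeholder
scope = f"/subscriptions/{subscription_id}"
client = PolicyClient(DefaultAzureCredential(), subscription_id)

# Look up the built-in definition by display name instead of hard-coding a GUID.
definition = next(
    d for d in client.policy_definitions.list_built_in()
    if d.display_name == "Configure Azure Defender for Servers to be enabled"
)

assignment = client.policy_assignments.create(
    scope,
    "enable-defender-for-servers",  # assignment name is our choice
    {"policy_definition_id": definition.id},
)
print(assignment.id)
```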
defender-for-cloud Plan Multicloud Security Determine Multicloud Dependencies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/plan-multicloud-security-determine-multicloud-dependencies.md
Defender for Cloud provides Cloud Security Posture Management (CSPM) features fo
## CWPP
+> [!NOTE]
+> As the Log Analytics agent is set to retire in August 2024 and as part of the Defender for Cloud [updated strategy](upcoming-changes.md#defender-for-cloud-plan-and-strategy-for-the-log-analytics-agent-deprecation), all **Defender for Servers** features and capabilities will be provided either through Microsoft Defender for Endpoint integration or agentless scanning, without dependency on either the Log Analytics agent (MMA) or Azure Monitor agent (AMA). For more information about this change, see [this announcement](upcoming-changes.md#defender-for-cloud-plan-and-strategy-for-the-log-analytics-agent-deprecation).
+ In Defender for Cloud, you enable specific plans to get Cloud Workload Protection Platform (CWPP) features. Plans to protect multicloud resources include:
+
+ - [Defender for Servers](./defender-for-servers-introduction.md): Protect AWS/GCP Windows and Linux machines.
+ - [Defender for Containers](./defender-for-containers-introduction.md): Help secure your Kubernetes clusters with security recommendations and hardening, vulnerability assessments, and runtime protection.
+ - [Defender for SQL](./defender-for-sql-usage.md): Protect SQL databases running in AWS and GCP.
-### What agent do I need?
+### What extension do I need?
-The following table summarizes agent requirements for CWPP.
+The following table summarizes extension requirements for CWPP.
-| Agent |Defender for Servers|Defender for Containers|Defender fo SQL on Machines|
+| Extension |Defender for Servers|Defender for Containers|Defender for SQL on Machines|
|:-:|:-:|:-:|:-:|
|Azure Arc Agent | ✔ | ✔ | ✔ |
-|Microsoft Defender for Endpoint extension |✔|
-|Vulnerability assessment| ✔| |
+|Microsoft Defender for Endpoint extension |✔|||
+|Vulnerability assessment| ✔| ||
+|Agentless Disk Scanning| ✔ | ✔ ||
|Log Analytics or Azure Monitor Agent (preview) extension|✔| |✔|
|Defender agent| | ✔| |
|Azure Policy for Kubernetes | | ✔| |
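As a quick way to verify which of these extensions actually landed on a connected machine, here's a minimal sketch using the `azure-mgmt-hybridcompute` package; the resource group and machine names are placeholders.

```python
# Minimal sketch: list extensions deployed to an Azure Arc machine, e.g. to
# confirm the Defender for Cloud components from the table above are present.
# Assumes: pip install azure-identity azure-mgmt-hybridcompute
from azure.identity import DefaultAzureCredential
from azure.mgmt.hybridcompute import HybridComputeManagementClient

client = HybridComputeManagementClient(DefaultAzureCredential(), "<subscription-id>")

for ext in client.machine_extensions.list("<resource-group>", "<arc-machine-name>"):
    # Attribute layout varies a little across SDK versions; the name is stable.
    print(ext.name)
```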
The following components and requirements are needed to receive full protection
- **Azure Arc agent**: AWS and GCP machines connect to Azure through the Azure Arc agent.
  - The Azure Arc agent is needed to read security information on the host level and allow Defender for Cloud to deploy the agents/extensions required for complete protection.
-To auto-provision the Azure Arc agent, the OS configuration agent on [GCP VM instances](./quickstart-onboard-gcp.md?pivots=env-settings) and the AWS Systems Manager (SSM) agent for [AWS EC2 instances](./quickstart-onboard-aws.md?pivots=env-settings) must be configured. [Learn more](../azure-arc/servers/agent-overview.md) about the agent.
+ To autoprovision the Azure Arc agent, the OS configuration agent on [GCP VM instances](./quickstart-onboard-gcp.md?pivots=env-settings) and the AWS Systems Manager (SSM) agent for [AWS EC2 instances](./quickstart-onboard-aws.md?pivots=env-settings) must be configured. [Learn more](../azure-arc/servers/agent-overview.md) about the agent.
- **Defender for Endpoint capabilities**: The [Microsoft Defender for Endpoint](./integration-defender-for-endpoint.md?tabs=linux) agent provides comprehensive endpoint detection and response (EDR) capabilities.
- **Vulnerability assessment**: Using either the integrated [Qualys vulnerability scanner](./deploy-vulnerability-assessment-vm.md), or the [Microsoft Defender Vulnerability Management](/microsoft-365/security/defender-vulnerability-management/defender-vulnerability-management) solution.
- **Log Analytics agent/[Azure Monitor Agent](../azure-monitor/agents/agents-overview.md) (AMA) (in preview)**: Collects security-related configuration information and event logs from machines.

#### Check networking requirements
-Machines must meet [network requirements](../azure-arc/servers/network-requirements.md?tabs=azure-cloud) before onboarding the agents. Auto-provisioning is enabled by default.
+Machines must meet [network requirements](../azure-arc/servers/network-requirements.md?tabs=azure-cloud) before onboarding the agents. Autoprovisioning is enabled by default.
### Defender for Containers
To receive the full benefits of Defender for SQL on your multicloud workload, yo
- **Azure Arc agent**: AWS and GCP machines connect to Azure through the Azure Arc agent.
  - The Azure Arc agent is needed to read security information on the host level and allow Defender for Cloud to deploy the agents/extensions required for complete protection.
- - To auto-provision the Azure Arc agent, the OS configuration agent on [GCP VM instances](./quickstart-onboard-gcp.md?pivots=env-settings) and the AWS Systems Manager (SSM) agent for [AWS EC2 instances](./quickstart-onboard-aws.md?pivots=env-settings) must be configured. [Learn more](../azure-arc/servers/agent-overview.md) about the agent.
+ - To autoprovision the Azure Arc agent, the OS configuration agent on [GCP VM instances](./quickstart-onboard-gcp.md?pivots=env-settings) and the AWS Systems Manager (SSM) agent for [AWS EC2 instances](./quickstart-onboard-aws.md?pivots=env-settings) must be configured. [Learn more](../azure-arc/servers/agent-overview.md) about the agent.
- **Log Analytics agent/[Azure Monitor Agent](../azure-monitor/agents/agents-overview.md) (AMA) (in preview)**: Collects security-related configuration information and event logs from machines.
- **Automatic SQL server discovery and registration**: Supports automatic discovery and registration of SQL servers.
defender-for-cloud Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/policy-reference.md
Title: Built-in policy definitions description: Lists Azure Policy built-in policy definitions for Microsoft Defender for Cloud. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2023 Last updated : 08/30/2023
defender-for-cloud Quickstart Onboard Aws https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-aws.md
In this section of the wizard, you select the Defender for Cloud plans that you
1. By default, the **Containers** plan is set to **On**. This setting is necessary to have Defender for Containers protect your AWS EKS clusters. Ensure that you've fulfilled the [network requirements](./defender-for-containers-enable.md?pivots=defender-for-container-eks&source=docs&tabs=aks-deploy-portal%2ck8s-deploy-asc%2ck8s-verify-asc%2ck8s-remove-arc%2caks-removeprofile-api#network-requirements) for the Defender for Containers plan.

   > [!NOTE]
- > Azure Arc-enabled Kubernetes, the Azure Arc extension for Microsoft Defender, and the Azure Arc extension for Azure Policy should be installed. Use the dedicated Defender for Cloud recommendations to deploy the extensions (and Azure Arc, if necessary), as explained in [Protect Amazon Elastic Kubernetes Service clusters](defender-for-containers-enable.md?tabs=defender-for-container-eks).
+ > Azure Arc-enabled Kubernetes, the Azure Arc extensions for Defender agent, and Azure Policy for Kubernetes should be installed. Use the dedicated Defender for Cloud recommendations to deploy the extensions (and Azure Arc, if necessary), as explained in [Protect Amazon Elastic Kubernetes Service clusters](defender-for-containers-enable.md?tabs=defender-for-container-eks).
Optionally, select **Configure** to edit the configuration as required. If you choose to turn off this configuration, the **Threat detection (control plane)** feature is also disabled. [Learn more about feature availability](supported-machines-endpoint-solutions-clouds-containers.md).
defender-for-cloud Quickstart Onboard Github https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-github.md
To complete this quickstart, you need:
| Regions: | Australia East, Central US, West Europe |
| Clouds: | :::image type="icon" source="media/quickstart-onboard-github/check-yes.png" border="false"::: Commercial <br> :::image type="icon" source="media/quickstart-onboard-github/x-no.png" border="false"::: National (Azure Government, Microsoft Azure operated by 21Vianet) |
-During the preview, the maximum number of GitHub repositories that you can onboard to Microsoft Defender for Cloud is 2,000. If you try to connect more than 2,000 GitHub repositories, only the first 2,000 repositories, sorted alphabetically, will be onboarded.
-
-If your organization is interested in onboarding more than 2,000 GitHub repositories, please complete [this survey](https://aka.ms/dfd-forms/onboarding).
## Connect your GitHub account
defender-for-cloud Regulatory Compliance Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/regulatory-compliance-dashboard.md
When you enable Defender for Cloud on an Azure subscription, the [Microsoft clou
The regulatory compliance dashboard shows the status of all the assessments within your environment for your chosen standards and regulations. As you act on the recommendations and reduce risk factors in your environment, your compliance posture improves.
+> [!TIP]
+> Compliance data from Defender for Cloud now seamlessly integrates with [Microsoft Purview Compliance Manager](/microsoft-365/compliance/compliance-manager), allowing you to centrally assess and manage compliance across your organization's entire digital estate. When you add any standard to your compliance dashboard (including compliance standards monitoring other clouds like AWS and GCP), the resource-level compliance data is automatically surfaced in Compliance Manager for the same standard. Compliance Manager thus provides improvement actions and status across your cloud infrastructure and all other digital assets in this central tool. For more information, see [Multicloud support in Microsoft Purview Compliance Manager](/microsoft-365/compliance/compliance-manager-multicloud).
In this tutorial, you'll learn how to:

> [!div class="checklist"]
-> * Evaluate your regulatory compliance using the regulatory compliance dashboard
-> * Check Microsoft's compliance offerings (currently in preview) for Azure, Dynamics 365 and Power Platform products
-> * Improve your compliance posture by taking action on recommendations
-> * Download PDF/CSV reports as well as certification reports of your compliance status
-> * Setup alerts on changes to your compliance status
-> * Export your compliance data as a continuous stream and as weekly snapshots
+>
+> - Evaluate your regulatory compliance using the regulatory compliance dashboard
+> - Check Microsoft's compliance offerings (currently in preview) for Azure, Dynamics 365 and Power Platform products
+> - Improve your compliance posture by taking action on recommendations
+> - Download PDF/CSV reports as well as certification reports of your compliance status
+> - Set up alerts on changes to your compliance status
+> - Export your compliance data as a continuous stream and as weekly snapshots
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/) before you begin.
Use the regulatory compliance dashboard to help focus your attention on the gaps
:::image type="content" source="./media/regulatory-compliance-dashboard/compliance-drilldown.png" alt-text="Screenshot that shows the exploration of the details of compliance with a specific standard." lightbox="media/regulatory-compliance-dashboard/compliance-drilldown.png":::
- The following list has a numbered item that matches each location in the image above, and describes what is in the image:
-- Select a compliance standard to see a list of all controls for that standard. (1)
+ The following list has a numbered item that matches each location in the image above, and describes what is in the image:
+
+- Select a compliance standard to see a list of all controls for that standard. (1)
- View the subscription(s) that the compliance standard is applied on. (2)
- Select a Control to see more details. Expand the control to view the assessments associated with the selected control. Select an assessment to view the list of resources associated and the actions to remediate compliance concerns. (3)
-- Select Control details to view Overview, Your Actions and Microsoft Actions tabs. (4)
+- Select Control details to view Overview, Your Actions and Microsoft Actions tabs. (4)
- In the Your Actions tab, you can see the automated and manual assessments associated with the control. (5)
- Automated assessments show the number of failed resources and resource types, and link you directly to the remediation experience to address those recommendations. (6)
- The manual assessments can be manually attested, and evidence can be linked to demonstrate compliance. (7)
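The same drill-down (standard, then controls, then assessments) can be read programmatically. The following is a minimal sketch against the `azure-mgmt-security` package, assuming a recent package version whose client takes only a credential and subscription ID.

```python
# Minimal sketch: walk standards -> controls, mirroring the dashboard drill-down.
# Assumes: pip install azure-identity azure-mgmt-security (recent version)
from azure.identity import DefaultAzureCredential
from azure.mgmt.security import SecurityCenter

client = SecurityCenter(DefaultAzureCredential(), "<subscription-id>")

for standard in client.regulatory_compliance_standards.list():
    print(standard.name, standard.state)
    for control in client.regulatory_compliance_controls.list(standard.name):
        print("  ", control.name, control.state)
```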
The regulatory compliance has both automated and manual assessments that may nee
1. Select a compliance control to expand it.
-1. Select any of the failing assessments that appear in the dashboard to view the details for that recommendation. Each recommendation includes a set of remediation steps to resolve the issue.
+1. Select any of the failing assessments that appear in the dashboard to view the details for that recommendation. Each recommendation includes a set of remediation steps to resolve the issue.
1. Select a particular resource to view more details and resolve the recommendation for that resource. <br>For example, in the **Azure CIS 1.1.0** standard, select the recommendation **Disk encryption should be applied on virtual machines**.
The regulatory compliance has both automated and manual assessments that may nee
For more information about how to apply recommendations, see [Implementing security recommendations in Microsoft Defender for Cloud](review-security-recommendations.md).
-1. After you take action to resolve recommendations, you'll see the result in the compliance dashboard report because your compliance score improves.
+1. After you take action to resolve recommendations, you'll see the result in the compliance dashboard report because your compliance score improves.
> [!NOTE]
> Assessments run approximately every 12 hours, so you will see the impact on your compliance data only after the next run of the relevant assessment.
The regulatory compliance has automated and manual assessments that may need to
:::image type="content" source="media/release-notes/audit-reports-list-regulatory-compliance-dashboard-ga.png" alt-text="Filtering the list of available Azure Audit reports using tabs and filters.":::
- For example, from the PCI tab you can download a ZIP file containing a digitally signed certificate demonstrating Microsoft Azure, Dynamics 365, and Other Online Services' compliance with ISO22301 framework, together with the necessary collateral to interpret and present the certificate.
+ For example, from the PCI tab you can download a ZIP file containing a digitally signed certificate demonstrating Microsoft Azure, Dynamics 365, and Other Online Services' compliance with ISO22301 framework, together with the necessary collateral to interpret and present the certificate.
> [!NOTE]
> When you download one of these certification reports, you'll be shown the following privacy notice:
- >
+ >
> _By downloading this file, you are giving consent to Microsoft to store the current user and the selected subscriptions at the time of download. This data is used in order to notify you in case of changes or updates to the downloaded audit report. This data is used by Microsoft and the audit firms that produce the certification/reports only when notification is required._

### Check compliance offerings status
-Transparency provided by the compliance offerings (currently in preview) , allows you to view the certification status for each of the services provided by Microsoft prior to adding your product to the Azure platform.
+Transparency provided by the compliance offerings (currently in preview) allows you to view the certification status for each of the services provided by Microsoft before you add your product to the Azure platform.
**To check the compliance offerings status**:
Use continuous export to send data to Azure Event Hubs or a Log Analytics workspace:
:::image type="content" source="media/regulatory-compliance-dashboard/export-compliance-data-snapshot.png" alt-text="Continuously export a weekly snapshot of regulatory compliance data." lightbox="media/regulatory-compliance-dashboard/export-compliance-data-snapshot.png":::

> [!TIP]
-> You can also manually export reports about a single point in time directly from the regulatory compliance dashboard. Generate these **PDF/CSV reports** or **Azure and Dynamics certification reports** using the **Download report** or **Audit reports** toolbar options. See [Assess your regulatory compliance](#assess-your-regulatory-compliance)
+> You can also manually export reports about a single point in time directly from the regulatory compliance dashboard. Generate these **PDF/CSV reports** or **Azure and Dynamics certification reports** using the **Download report** or **Audit reports** toolbar options. See [Assess your regulatory compliance](#assess-your-regulatory-compliance).
## Run workflow automations when there are changes to your compliance
For example, you might want Defender for Cloud to email a specific user when a c
In this tutorial, you learned about using Defender for Cloud's regulatory compliance dashboard to:

> [!div class="checklist"]
-> * View and monitor your compliance posture regarding the standards and regulations that are important to you.
-> * Improve your compliance status by resolving relevant recommendations and watching the compliance score improve.
+>
+> - View and monitor your compliance posture regarding the standards and regulations that are important to you.
+> - Improve your compliance status by resolving relevant recommendations and watching the compliance score improve.
The regulatory compliance dashboard can greatly simplify the compliance process, and significantly cut the time required for gathering compliance evidence for your Azure, hybrid, and multicloud environment.
defender-for-cloud Release Notes Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes-archive.md
description: A description of what's new and changed in Microsoft Defender for C
Previously updated : 06/07/2023 Last updated : 09/06/2023

# Archive for what's new in Defender for Cloud?
This page provides you with information about:
- Bug fixes
- Deprecated functionality
+## March 2023
+
+Updates in March include:
+
+- [A new Defender for Storage plan is available, including near-real time malware scanning and sensitive data threat detection](#a-new-defender-for-storage-plan-is-available-including-near-real-time-malware-scanning-and-sensitive-data-threat-detection)
+- [Data-aware security posture (preview)](#data-aware-security-posture-preview)
+- [Improved experience for managing the default Azure security policies](#improved-experience-for-managing-the-default-azure-security-policies)
+- [Defender CSPM (Cloud Security Posture Management) is now Generally Available (GA)](#defender-cspm-cloud-security-posture-management-is-now-generally-available-ga)
+- [Option to create custom recommendations and security standards in Microsoft Defender for Cloud](#option-to-create-custom-recommendations-and-security-standards-in-microsoft-defender-for-cloud)
+- [Microsoft cloud security benchmark (MCSB) version 1.0 is now Generally Available (GA)](#microsoft-cloud-security-benchmark-mcsb-version-10-is-now-generally-available-ga)
+- [Some regulatory compliance standards are now available in government clouds](#some-regulatory-compliance-standards-are-now-available-in-government-clouds)
+- [New preview recommendation for Azure SQL Servers](#new-preview-recommendation-for-azure-sql-servers)
+- [New alert in Defender for Key Vault](#new-alert-in-defender-for-key-vault)
+
+### A new Defender for Storage plan is available, including near-real time malware scanning and sensitive data threat detection
+
+Cloud storage plays a key role in the organization and stores large volumes of valuable and sensitive data. Today we're announcing a new Defender for Storage plan. If you're using the previous plan (now renamed "Defender for Storage (classic)"), you'll need to proactively [migrate to the new plan](defender-for-storage-classic-migrate.md) in order to use the new features and benefits.
+
+The new plan includes advanced security capabilities to help protect against malicious file uploads, sensitive data exfiltration, and data corruption. It also provides a more predictable and flexible pricing structure for better control over coverage and costs.
+
+The new plan has new capabilities now in public preview:
+
+- Detecting sensitive data exposure and exfiltration events
+
+- Near real-time blob on-upload malware scanning across all file types
+
+- Detecting entities with no identities using SAS tokens
+
+These capabilities enhance the existing Activity Monitoring capability, based on control and data plane log analysis and behavioral modeling to identify early signs of breach.
+
+All these capabilities are available in a new predictable and flexible pricing plan that provides granular control over data protection at both the subscription and resource levels.
+
+Learn more at [Overview of Microsoft Defender for Storage](defender-for-storage-introduction.md).
+
+### Data-aware security posture (preview)
+
+Microsoft Defender for Cloud helps security teams to be more productive at reducing risks and responding to data breaches in the cloud. It allows them to cut through the noise with data context and prioritize the most critical security risks, preventing a costly data breach.
+
+- Automatically discover data resources across cloud estate and evaluate their accessibility, data sensitivity and configured data flows.
+- Continuously uncover risks of data breaches to sensitive data resources, exposure, or attack paths that could lead to a data resource through a lateral movement technique.
+- Detect suspicious activities that may indicate an ongoing threat to sensitive data resources.
+
+[Learn more](concept-data-security-posture.md) about data-aware security posture.
+
+### Improved experience for managing the default Azure security policies
+
+We introduce an improved Azure security policy management experience for built-in recommendations that simplifies the way Defender for Cloud customers fine-tune their security requirements. The new experience includes the following new capabilities:
+
+- A simple interface allows better performance and fewer selections when managing default security policies within Defender for Cloud, including enabling/disabling, denying, setting parameters, and managing exemptions.
+- A single view of all built-in security recommendations offered by the Microsoft cloud security benchmark (formerly the Azure security benchmark). Recommendations are organized into logical groups, making it easier to understand the types of resources covered, and the relationship between parameters and recommendations.
+- New features such as filters and search have been added.
+
+Learn how to [manage security policies](tutorial-security-policy.md).
+
+Read the [Microsoft Defender for Cloud blog](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/improved-experience-for-managing-the-default-azure-security/ba-p/3776522).
+
+### Defender CSPM (Cloud Security Posture Management) is now Generally Available (GA)
+
+We're announcing that Defender CSPM is now Generally Available (GA). Defender CSPM offers all of the services available under the Foundational CSPM capabilities and adds the following benefits:
+
+- **Attack path analysis and ARG API** - Attack path analysis uses a graph-based algorithm that scans the cloud security graph to expose attack paths and suggests recommendations for how best to remediate issues that break the attack path and prevent successful breach. You can also consume attack paths programmatically by querying the Azure Resource Graph (ARG) API, as sketched after this list. Learn how to use [attack path analysis](how-to-manage-attack-path.md).
+- **Cloud Security explorer** - Use the Cloud Security Explorer to run graph-based queries on the cloud security graph, to proactively identify security risks in your multicloud environments. Learn more about [cloud security explorer](concept-attack-path.md#what-is-cloud-security-explorer).
+
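As a sketch of the programmatic path mentioned above, the query below reads attack paths through Azure Resource Graph with the `azure-mgmt-resourcegraph` package. The `securityresources` table and the `microsoft.security/attackpaths` type reflect our reading of how ARG exposes this data; treat them as assumptions to verify.

```python
# Minimal sketch: query attack paths via Azure Resource Graph.
# Assumes: pip install azure-identity azure-mgmt-resourcegraph
from azure.identity import DefaultAzureCredential
from azure.mgmt.resourcegraph import ResourceGraphClient
from azure.mgmt.resourcegraph.models import QueryRequest, QueryRequestOptions

client = ResourceGraphClient(DefaultAzureCredential())

request = QueryRequest(
    subscriptions=["<subscription-id>"],
    # Table/type names are our assumption of the ARG schema for attack paths.
    query="securityresources | where type == 'microsoft.security/attackpaths'",
    options=QueryRequestOptions(result_format="objectArray"),
)
for row in client.resources(request).data:
    print(row["id"])
```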
+Learn more about [Defender CSPM](overview-page.md).
+
+### Option to create custom recommendations and security standards in Microsoft Defender for Cloud
+
+Microsoft Defender for Cloud provides the option of creating custom recommendations and standards for AWS and GCP using KQL queries. You can use a query editor to build and test queries over your data.
+This feature is part of the Defender CSPM (Cloud Security Posture Management) plan. Learn how to [create custom recommendations and standards](create-custom-recommendations.md).
+
+### Microsoft cloud security benchmark (MCSB) version 1.0 is now Generally Available (GA)
+
+Microsoft Defender for Cloud is announcing that the Microsoft cloud security benchmark (MCSB) version 1.0 is now Generally Available (GA).
+
+MCSB version 1.0 replaces the Azure Security Benchmark (ASB) version 3 as Microsoft Defender for Cloud's default security policy for identifying security vulnerabilities in your cloud environments according to common security frameworks and best practices. MCSB version 1.0 appears as the default compliance standard in the compliance dashboard and is enabled by default for all Defender for Cloud customers.
+
+You can also learn [How Microsoft cloud security benchmark (MCSB) helps you succeed in your cloud security journey](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/announcing-microsoft-cloud-security-benchmark-v1-general/ba-p/3763013).
+
+Learn more about [MCSB](https://aka.ms/mcsb).
+
+### Some regulatory compliance standards are now available in government clouds
+
+We're announcing that the following regulatory standards are being updated with the latest version and are available for customers in Azure Government and Microsoft Azure operated by 21Vianet.
+
+**Azure Government**:
+
+- [PCI DSS v4](/azure/compliance/offerings/offering-pci-dss)
+- [SOC 2 Type 2](/azure/compliance/offerings/offering-soc-2)
+- [ISO 27001:2013](/azure/compliance/offerings/offering-iso-27001)
+
+**Microsoft Azure operated by 21Vianet**:
+
+- [SOC 2 Type 2](/azure/compliance/offerings/offering-soc-2)
+- [ISO 27001:2013](/azure/compliance/offerings/offering-iso-27001)
+
+Learn how to [Customize the set of standards in your regulatory compliance dashboard](update-regulatory-compliance-packages.md).
+
+### New preview recommendation for Azure SQL Servers
+
+We've added a new recommendation for Azure SQL Servers, `Azure SQL Server authentication mode should be Azure Active Directory Only (Preview)`.
+
+The recommendation is based on the existing policy [`Azure SQL Database should have Azure Active Directory Only Authentication enabled`](https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2fabda6d70-9778-44e7-84a8-06713e6db027).
+
+This recommendation disables local authentication methods and allows only Azure Active Directory Authentication, which improves security by ensuring that Azure SQL Databases can exclusively be accessed by Azure Active Directory identities.
+
+Learn how to [create servers with Azure AD-only authentication enabled in Azure SQL](/azure/azure-sql/database/authentication-azure-ad-only-authentication-create-server).
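To remediate this recommendation in code rather than the portal, here's a minimal sketch with the `azure-mgmt-sql` package; the server and resource group names are placeholders, and `Default` is the fixed name the API uses for this setting.

```python
# Minimal sketch: enforce Azure AD-only authentication on an Azure SQL server.
# Assumes: pip install azure-identity azure-mgmt-sql
from azure.identity import DefaultAzureCredential
from azure.mgmt.sql import SqlManagementClient

client = SqlManagementClient(DefaultAzureCredential(), "<subscription-id>")

poller = client.server_azure_ad_only_authentications.begin_create_or_update(
    "<resource-group>",
    "<sql-server-name>",
    "Default",  # the API's fixed authentication name
    {"azure_ad_only_authentication": True},
)
print(poller.result().azure_ad_only_authentication)
```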
+
+### New alert in Defender for Key Vault
+
+Defender for Key Vault has the following new alert:
+
+| Alert (alert type) | Description | MITRE tactics | Severity |
+|||:-:||
+| **Denied access from a suspicious IP to a key vault**<br>(KV_SuspiciousIPAccessDenied) | An unsuccessful key vault access has been attempted by an IP that has been identified by Microsoft Threat Intelligence as a suspicious IP address. Though this attempt was unsuccessful, it indicates that your infrastructure might have been compromised. We recommend further investigations. | Credential Access | Low |
+
+You can see a list of all of the [alerts available for Key Vault](alerts-reference.md).
+
+## February 2023
+
+Updates in February include:
+
+- [Enhanced Cloud Security Explorer](#enhanced-cloud-security-explorer)
+- [Defender for Containers' vulnerability scans of running Linux images now GA](#defender-for-containers-vulnerability-scans-of-running-linux-images-now-ga)
+- [Announcing support for the AWS CIS 1.5.0 compliance standard](#announcing-support-for-the-aws-cis-150-compliance-standard)
+- [Microsoft Defender for DevOps (preview) is now available in other regions](#microsoft-defender-for-devops-preview-is-now-available-in-other-regions)
+- [The built-in policy [Preview]: Private endpoint should be configured for Key Vault has been deprecated](#the-built-in-policy-preview-private-endpoint-should-be-configured-for-key-vault-has-been-deprecated)
+
+### Enhanced Cloud Security Explorer
+
+An improved version of the cloud security explorer includes a refreshed user experience that dramatically reduces query friction, adds the ability to run multicloud and multi-resource queries, and embeds documentation for each query option.
+
+The Cloud Security Explorer now allows you to run cloud-abstract queries across resources. You can use either the prebuilt query templates or use the custom search to apply filters to build your query. Learn [how to manage Cloud Security Explorer](how-to-manage-cloud-security-explorer.md).
+
+### Defender for Containers' vulnerability scans of running Linux images now GA
+
+Defender for Containers detects vulnerabilities in running containers. Both Windows and Linux containers are supported.
+
+In August 2022, this capability was [released in preview](release-notes-archive.md) for Windows and Linux. It's now released for general availability (GA) for Linux.
+
+When vulnerabilities are detected, Defender for Cloud generates the following security recommendation listing the scan's findings: [Running container images should have vulnerability findings resolved](https://portal.azure.com/#view/Microsoft_Azure_Security_CloudNativeCompute/KubernetesRuntimeVisibilityRecommendationDetailsBlade/assessmentKey/41503391-efa5-47ee-9282-4eff6131462c/showSecurityCenterCommandBar~/false).
+
+Learn more about [viewing vulnerabilities for running images](defender-for-containers-vulnerability-assessment-azure.md).
+
+### Announcing support for the AWS CIS 1.5.0 compliance standard
+
+Defender for Cloud now supports the CIS Amazon Web Services Foundations v1.5.0 compliance standard. The standard can be [added to your Regulatory Compliance dashboard](update-regulatory-compliance-packages.md#add-a-regulatory-standard-to-your-dashboard), and builds on MDC's existing offerings for multicloud recommendations and standards.
+
+This new standard includes both existing and new recommendations that extend Defender for Cloud's coverage to new AWS services and resources.
+
+Learn how to [Manage AWS assessments and standards](how-to-manage-aws-assessments-standards.md).
+
+### Microsoft Defender for DevOps (preview) is now available in other regions
+
+Microsoft Defender for DevOps has expanded its preview and is now available in the West Europe and East Australia regions, when you onboard your Azure DevOps and GitHub resources.
+
+Learn more about [Microsoft Defender for DevOps](defender-for-devops-introduction.md).
+
+### The built-in policy [Preview]: Private endpoint should be configured for Key Vault has been deprecated
+
+The built-in policy [`[Preview]: Private endpoint should be configured for Key Vault`](https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5f0bc445-3935-4915-9981-011aa2b46147) has been deprecated and has been replaced with the [`[Preview]: Azure Key Vaults should use private link`](https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa6abeaec-4d90-4a02-805f-6b26c4d3fbe9) policy.
+
+Learn more about [integrating Azure Key Vault with Azure Policy](../key-vault/general/azure-policy.md#network-access).
+
+## January 2023
+
+Updates in January include:
+
+- [The Endpoint protection (Microsoft Defender for Endpoint) component is now accessed in the Settings and monitoring page](#the-endpoint-protection-microsoft-defender-for-endpoint-component-is-now-accessed-in-the-settings-and-monitoring-page)
+- [New version of the recommendation to find missing system updates (Preview)](#new-version-of-the-recommendation-to-find-missing-system-updates-preview)
+- [Cleanup of deleted Azure Arc machines in connected AWS and GCP accounts](#cleanup-of-deleted-azure-arc-machines-in-connected-aws-and-gcp-accounts)
+- [Allow continuous export to Event Hubs behind a firewall](#allow-continuous-export-to-event-hubs-behind-a-firewall)
+- [The name of the Secure score control Protect your applications with Azure advanced networking solutions has been changed](#the-name-of-the-secure-score-control-protect-your-applications-with-azure-advanced-networking-solutions-has-been-changed)
+- [The policy Vulnerability Assessment settings for SQL server should contain an email address to receive scan reports has been deprecated](#the-policy-vulnerability-assessment-settings-for-sql-server-should-contain-an-email-address-to-receive-scan-reports-has-been-deprecated)
+- [Recommendation to enable diagnostic logs for Virtual Machine Scale Sets has been deprecated](#recommendation-to-enable-diagnostic-logs-for-virtual-machine-scale-sets-has-been-deprecated)
+
+### The Endpoint protection (Microsoft Defender for Endpoint) component is now accessed in the Settings and monitoring page
+
+To access Endpoint protection, navigate to **Environment settings** > **Defender plans** > **Settings and monitoring**. From here you can set Endpoint protection to **On**. You can also see all of the other components that are managed.
+
+Learn more about [enabling Microsoft Defender for Endpoint](integration-defender-for-endpoint.md) on your servers with Defender for Servers.
+
+### New version of the recommendation to find missing system updates (Preview)
+
+You no longer need an agent on your Azure VMs and Azure Arc machines to make sure the machines have all of the latest security or critical system updates.
+
+The new system updates recommendation, `System updates should be installed on your machines (powered by Azure Update Manager)` in the `Apply system updates` control, is based on the [Update management center (preview)](../update-center/overview.md). The recommendation relies on a native agent embedded in every Azure VM and Azure Arc machine instead of an installed agent. The Quick Fix in the new recommendation leads you to a one-time installation of the missing updates in the Update management center portal.
+
+To use the new recommendation, you need to:
+
+- Connect your non-Azure machines to Arc
+- Turn on the [periodic assessment property](../update-center/assessment-options.md#periodic-assessment). You can use the Quick Fix in the new recommendation, `Machines should be configured to periodically check for missing system updates` to fix the recommendation.
+
+The existing "System updates should be installed on your machines" recommendation, which relies on the Log Analytics agent, is still available under the same control.
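For completeness, turning on periodic assessment can also be scripted. A minimal sketch for a Linux Azure VM with the `azure-mgmt-compute` package follows; the patch-setting names come from the VM patch API and should be checked against your SDK version.

```python
# Minimal sketch: enable periodic assessment of missing updates on a Linux VM.
# Assumes: pip install azure-identity azure-mgmt-compute
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

client = ComputeManagementClient(DefaultAzureCredential(), "<subscription-id>")

poller = client.virtual_machines.begin_update(
    "<resource-group>",
    "<vm-name>",
    {
        "os_profile": {
            "linux_configuration": {
                # "AutomaticByPlatform" turns on periodic assessment.
                "patch_settings": {"assessment_mode": "AutomaticByPlatform"}
            }
        }
    },
)
print(poller.result().name)
```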
+
+### Cleanup of deleted Azure Arc machines in connected AWS and GCP accounts
+
+A machine connected to an AWS or GCP account that is covered by Defender for Servers or Defender for SQL on machines is represented in Defender for Cloud as an Azure Arc machine. Until now, that machine wasn't deleted from the inventory when it was deleted from the AWS or GCP account, leading to unnecessary Azure Arc resources left in Defender for Cloud that represent deleted machines.
+
+Defender for Cloud will now automatically delete Azure Arc machines when those machines are deleted in a connected AWS or GCP account.
+
+### Allow continuous export to Event Hubs behind a firewall
+
+You can now enable continuous export of alerts and recommendations, as a trusted service, to event hubs that are protected by an Azure firewall.
+
+You can enable continuous export as the alerts or recommendations are generated. You can also define a schedule to send periodic snapshots of all of the new data.
+
+Learn how to enable [continuous export to an event hub behind an Azure firewall](continuous-export.md#continuously-export-to-an-event-hub-behind-a-firewall).
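A continuous-export automation like this can also be created with the `azure-mgmt-security` package. The sketch below is an assumption-laden outline: the model names follow that SDK, a recent package version is assumed, and the region, event hub resource ID, and connection string are placeholders.

```python
# Minimal sketch: continuous export of alerts to an event hub via a
# Defender for Cloud automation resource.
# Assumes: pip install azure-identity azure-mgmt-security (recent version)
from azure.identity import DefaultAzureCredential
from azure.mgmt.security import SecurityCenter
from azure.mgmt.security.models import (
    Automation, AutomationScope, AutomationSource, AutomationActionEventHub,
)

client = SecurityCenter(DefaultAzureCredential(), "<subscription-id>")

automation = Automation(
    location="westeurope",  # placeholder region
    is_enabled=True,
    scopes=[AutomationScope(scope_path="/subscriptions/<subscription-id>")],
    sources=[AutomationSource(event_source="Alerts")],
    actions=[AutomationActionEventHub(
        event_hub_resource_id="<event-hub-resource-id>",
        connection_string="<event-hub-connection-string>",
    )],
)

result = client.automations.create_or_update(
    "<resource-group>", "ExportAlertsToEventHub", automation
)
print(result.id)
```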
+
+### The name of the Secure score control Protect your applications with Azure advanced networking solutions has been changed
+
+The secure score control, `Protect your applications with Azure advanced networking solutions` has been changed to `Protect applications against DDoS attacks`.
+
+The updated name is reflected on Azure Resource Graph (ARG), Secure Score Controls API and the `Download CSV report`.
+
+### The policy Vulnerability Assessment settings for SQL server should contain an email address to receive scan reports has been deprecated
+
+The policy [`Vulnerability Assessment settings for SQL server should contain an email address to receive scan reports`](https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F057d6cfe-9c4f-4a6d-bc60-14420ea1f1a9) has been deprecated.
+
+The Defender for SQL vulnerability assessment email report is still available and existing email configurations haven't changed.
+
+### Recommendation to enable diagnostic logs for Virtual Machine Scale Sets has been deprecated
+
+The recommendation `Diagnostic logs in Virtual Machine Scale Sets should be enabled` has been deprecated.
+
+The related [policy definition](https://portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7c1b1214-f927-48bf-8882-84f0af6588b1) has also been deprecated from any standards displayed in the regulatory compliance dashboard.
+
+| Recommendation | Description | Severity |
+|--|--|--|
+| Diagnostic logs in Virtual Machine Scale Sets should be enabled | Enable logs and retain them for up to a year, enabling you to recreate activity trails for investigation purposes when a security incident occurs or your network is compromised. | Low |
## December 2022

Updates in December include:
defender-for-cloud Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes.md
Title: Release notes description: This page is updated frequently with the latest updates in Defender for Cloud. Previously updated : 08/07/2023 Last updated : 09/06/2023

# What's new in Microsoft Defender for Cloud?
To learn about *planned* changes that are coming soon to Defender for Cloud, see
If you're looking for items older than six months, you can find them in the [Archive for What's new in Microsoft Defender for Cloud](release-notes-archive.md).
+## September 2023
+
+|Date |Update |
+|-|-|
+| September 6 | [Preview release: Containers vulnerability assessment powered by Microsoft Defender Vulnerability Management now supports scan on pull](#preview-release-containers-vulnerability-assessment-powered-by-microsoft-defender-vulnerability-management-now-supports-scan-on-pull)|
+| September 6 | [Updated naming format of Center for Internet Security (CIS) standards in regulatory compliance](#updated-naming-format-of-center-for-internet-security-cis-standards-in-regulatory-compliance) |
+| September 5 | [Sensitive data discovery for PaaS databases (Preview)](#sensitive-data-discovery-for-paas-databases-preview) |
+| September 1 | [General Availability (GA): malware scanning in Defender for Storage](#general-availability-ga-malware-scanning-in-defender-for-storage)|
+
+### Preview release: Containers vulnerability assessment powered by Microsoft Defender Vulnerability Management now supports scan on pull
+
+September 6, 2023
+
+Containers vulnerability assessment powered by Microsoft Defender Vulnerability Management (MDVM) now supports an additional trigger for scanning images pulled from an ACR. The new trigger provides coverage for active images, on top of the existing triggers that scan images pushed to an ACR in the last 90 days and images currently running in AKS.
+
+This new trigger will start rolling out today, and is expected to be available to all customers by September 13.
+
+For more information, see [Container Vulnerability Assessment powered by MDVM](agentless-container-registry-vulnerability-assessment.md).
+
+### Updated naming format of Center for Internet Security (CIS) standards in regulatory compliance
+
+September 6, 2023
+
+The naming format of CIS (Center for Internet Security) foundations benchmarks in the compliance dashboard has changed from `[Cloud] CIS [version number]` to `CIS [Cloud] Foundations v[version number]`. Refer to the following table:
+
+| Current Name | New Name |
+|--|--|
+| Azure CIS 1.1.0 | CIS Azure Foundations v1.1.0 |
+| Azure CIS 1.3.0 | CIS Azure Foundations v1.3.0 |
+| Azure CIS 1.4.0 | CIS Azure Foundations v1.4.0 |
+| AWS CIS 1.2.0 | CIS AWS Foundations v1.2.0 |
+| AWS CIS 1.5.0 | CIS AWS Foundations v1.5.0 |
+| GCP CIS 1.1.0 | CIS GCP Foundations v1.1.0 |
+| GCP CIS 1.2.0 | CIS GCP Foundations v1.2.0 |
+
+Learn how to [improve your regulatory compliance](regulatory-compliance-dashboard.md).
+
+### Sensitive data discovery for PaaS databases (Preview)
+
+September 5, 2023
+
+Data-aware security posture capabilities for frictionless sensitive data discovery for PaaS databases (Azure SQL Databases and Amazon RDS instances of any type) are now in public preview. This public preview allows you to create a map of your critical data wherever it resides, and identify the type of data found in those databases.
+
+Sensitive data discovery for Azure and AWS databases adds to the shared taxonomy and configuration that's already publicly available for cloud object storage resources (Azure Blob Storage, AWS S3 buckets, and GCP storage buckets), and provides a single configuration and enablement experience.
+
+Databases are scanned on a weekly basis. If you enable `sensitive data discovery`, discovery will run within 24 hours. The results can be viewed in the [Cloud Security Explorer](how-to-manage-cloud-security-explorer.md) or by reviewing the new [attack paths](how-to-manage-attack-path.md) for managed databases with sensitive data.
+
+Data-aware security posture for databases is available through the [Defender CSPM plan](tutorial-enable-cspm-plan.md), and is automatically enabled on subscriptions where the `sensitive data discovery` option is enabled.
+
+You can learn more about data aware security posture in the following articles:
+
+- [Support and prerequisites for data-aware security posture](concept-data-security-posture-prepare.md)
+- [Enable data-aware security posture](data-security-posture-enable.md)
+- [Explore risks to sensitive data](data-security-review-risks.md)
+- [Azure data attack paths](attack-path-reference.md#azure-data)
+- [AWS data attack paths](attack-path-reference.md#aws-data)
+
+### General Availability (GA): malware scanning in Defender for Storage
+
+September 1, 2023
+
+Malware scanning is now generally available (GA) as an add-on to Defender for Storage. Malware scanning in Defender for Storage helps protect your storage accounts from malicious content by performing a full malware scan on uploaded content in near real time, using Microsoft Defender Antivirus capabilities. It's designed to help fulfill security and compliance requirements for handling untrusted content. The malware scanning capability is an agentless SaaS solution that allows setup at scale, and supports automating response at scale.
+
+Learn more about [malware scanning in Defender for Storage](defender-for-storage-malware-scan.md).
+
+Malware scanning is priced according to your data usage and budget. Billing begins on September 3, 2023. Visit the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/) for more information.
+
+If you're using the previous plan (now renamed "Microsoft Defender for Storage (classic)"), you'll need to proactively [migrate to the new plan](defender-for-storage-classic-migrate.md) in order to enable malware scanning.
+
+Read the [Microsoft Defender for Cloud announcement blog post](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/malware-scanning-for-cloud-storage-ga-pre-announcement-prevent/ba-p/3884470).
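If you enable the new plan programmatically, the pricing API is one route. Here's a minimal sketch with `azure-mgmt-security`; the `DefenderForStorageV2` sub-plan name is our assumption from the migration guidance, and per-feature toggles such as malware scanning may require a newer API surface than this sketch shows.

```python
# Minimal sketch: enable the new Defender for Storage plan at subscription level.
# Assumes: pip install azure-identity azure-mgmt-security (recent version)
from azure.identity import DefaultAzureCredential
from azure.mgmt.security import SecurityCenter

client = SecurityCenter(DefaultAzureCredential(), "<subscription-id>")

pricing = client.pricings.update(
    "StorageAccounts",
    # Sub-plan name is an assumption based on the migration guidance.
    {"pricing_tier": "Standard", "sub_plan": "DefenderForStorageV2"},
)
print(pricing.pricing_tier, pricing.sub_plan)
```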
## August 2023

Updates in August include:

|Date |Update |
|-|-|
+| August 30 | [Defender for Containers: Agentless discovery for Kubernetes](#defender-for-containers-agentless-discovery-for-kubernetes) |
+| August 22 | [Recommendation release: Microsoft Defender for Storage should be enabled with malware scanning and sensitive data threat detection](#recommendation-release-microsoft-defender-for-storage-should-be-enabled-with-malware-scanning-and-sensitive-data-threat-detection) |
+| August 17 | [Extended properties in Defender for Cloud security alerts are masked from activity logs](#extended-properties-in-defender-for-cloud-security-alerts-are-masked-from-activity-logs) |
+| August 15 | [Preview release of GCP support in Defender CSPM](#preview-release-of-gcp-support-in-defender-cspm)|
| August 7 | [New security alerts in Defender for Servers Plan 2: Detecting potential attacks abusing Azure virtual machine extensions](#new-security-alerts-in-defender-for-servers-plan-2-detecting-potential-attacks-abusing-azure-virtual-machine-extensions) |
+| August 1 | [Business model and pricing updates for Defender for Cloud plans](#business-model-and-pricing-updates-for-defender-for-cloud-plans) |
+
+### Defender for Containers: Agentless discovery for Kubernetes
+
+August 30, 2023
+
+We're excited to introduce agentless discovery for Kubernetes in Defender for Containers. This release marks a significant step forward in container security, empowering you with advanced insights and comprehensive inventory capabilities for Kubernetes environments. The new container offering is powered by the Defender for Cloud contextual security graph. Here's what you can expect from this latest update:
+
+- Agentless Kubernetes discovery
+- Comprehensive inventory capabilities
+- Kubernetes-specific security insights
+- Enhanced risk hunting with Cloud Security Explorer
+
+Agentless discovery for Kubernetes is now available to all Defender for Containers customers. You can start using these advanced capabilities today. We encourage you to update your subscriptions to have the full set of extensions enabled, and benefit from the latest additions and features. Visit the **Environment and settings** pane of your Defender for Containers subscription to enable the extension.
+
+> [!NOTE]
+> Enabling the latest additions won't incur new costs to active Defender for Containers customers.
+
+For more information, see [Agentless discovery for Kubernetes](defender-for-containers-introduction.md#agentless-discovery-for-kubernetes).
+
+### Recommendation release: Microsoft Defender for Storage should be enabled with malware scanning and sensitive data threat detection
+
+August 22, 2023
+
+A new recommendation in Defender for Storage has been released. This recommendation ensures that Defender for Storage is enabled at the subscription level with malware scanning and sensitive data threat detection capabilities.
+
+| Recommendation | Description |
+|--|--|
+| Microsoft Defender for Storage should be enabled with malware scanning and sensitive data threat detection | Microsoft Defender for Storage detects potential threats to your storage accounts. It helps prevent the three major impacts on your data and workload: malicious file uploads, sensitive data exfiltration, and data corruption. The new Defender for Storage plan includes malware scanning and sensitive data threat detection. This plan also provides a predictable pricing structure (per storage account) for control over coverage and costs. With a simple agentless setup at scale, when enabled at the subscription level, all existing and newly created storage accounts under that subscription will be automatically protected. You can also exclude specific storage accounts from protected subscriptions.|
+
+This new recommendation replaces the current recommendation `Microsoft Defender for Storage should be enabled` (assessment key 1be22853-8ed1-4005-9907-ddad64cb1417). However, the older recommendation will still be available in Azure Government clouds.
+
+Learn more about [Microsoft Defender for Storage](defender-for-storage-introduction.md).
+
+### Extended properties in Defender for Cloud security alerts are masked from activity logs
+
+August 17, 2023
+
+We recently changed the way security alerts and activity logs are integrated. To better protect sensitive customer information, we no longer include this information in activity logs. Instead, we mask it with asterisks. However, this information is still available through the alerts API, continuous export, and the Defender for Cloud portal.
+
+Customers who rely on activity logs to export alerts to their SIEM solutions should consider using a different solution, as it isn't the recommended method for exporting Defender for Cloud security alerts.
+
+For instructions on how to export Defender for Cloud security alerts to SIEM, SOAR and other third party applications, see [Stream alerts to a SIEM, SOAR, or IT Service Management solution](export-to-siem.md).
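Since extended properties remain visible through the alerts API, here's a minimal sketch of reading them with `azure-mgmt-security` (a recent package version is assumed):

```python
# Minimal sketch: read full alert details, including extended properties that
# are now masked in activity logs.
# Assumes: pip install azure-identity azure-mgmt-security (recent version)
from azure.identity import DefaultAzureCredential
from azure.mgmt.security import SecurityCenter

client = SecurityCenter(DefaultAzureCredential(), "<subscription-id>")

for alert in client.alerts.list():
    print(alert.alert_display_name, alert.severity)
    for key, value in (alert.extended_properties or {}).items():
        print("   ", key, "=", value)
```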
+
+### Preview release of GCP support in Defender CSPM
+
+August 15, 2023
+
+We're announcing the preview release of the Defender CSPM contextual cloud security graph and attack path analysis with support for GCP resources. You can leverage the power of Defender CSPM for comprehensive visibility and intelligent cloud security across GCP resources.
+
+ Key features of our GCP support include:
+
+- **Attack path analysis** - Understand the potential routes attackers might take.
+- **Cloud security explorer** - Proactively identify security risks by running graph-based queries on the security graph.
+- **Agentless scanning** - Scan servers and identify secrets and vulnerabilities without installing an agent.
+- **Data-aware security posture** - Discover and remediate risks to sensitive data in Google Cloud Storage buckets.
+
+Learn more about [Defender CSPM plan options](concept-cloud-security-posture-management.md#defender-cspm-plan-options).
### New security alerts in Defender for Servers Plan 2: Detecting potential attacks abusing Azure virtual machine extensions
Azure virtual machine extensions are small applications that run post-deployment
- Resetting credentials and creating administrative users
- Encrypting disks
-Here is a table of the new alerts.
+Here's a table of the new alerts.
|Alert (alert type)|Description|MITRE tactics|Severity|
|-|-|-|-|
-| **Suspicious failure installing GPU extension in your subscription (Preview)**<br>(VM_GPUExtensionSuspiciousFailure) | Suspicious intent of installing a GPU extension on unsupported VMs. This extension should be installed on virtual machines equipped with a graphic processor, and in this case the virtual machines are not equipped with such. These failures can be seen when malicious adversaries execute multiple installations of such extension for crypto-mining purposes. | Impact | Medium |
-| **Suspicious installation of a GPU extension was detected on your virtual machine (Preview)**<br>(VM_GPUDriverExtensionUnusualExecution)<br>*This alert was [released in July, 2023](#new-security-alert-in-defender-for-servers-plan-2-detecting-potential-attacks-leveraging-azure-vm-gpu-driver-extensions).* | Suspicious installation of a GPU extension was detected on your virtual machine by analyzing the Azure Resource Manager operations in your subscription. Attackers may use the GPU driver extension to install GPU drivers on your virtual machine via the Azure Resource Manager to perform cryptojacking. This activity is deemed suspicious as the principal's behavior departs from its usual patterns. | Impact | Low |
+| **Suspicious failure installing GPU extension in your subscription (Preview)**<br>(VM_GPUExtensionSuspiciousFailure) | Suspicious intent of installing a GPU extension on unsupported VMs. This extension should be installed on virtual machines equipped with a graphic processor, and in this case the virtual machines aren't equipped with such. These failures can be seen when malicious adversaries execute multiple installations of such extension for crypto-mining purposes. | Impact | Medium |
+| **Suspicious installation of a GPU extension was detected on your virtual machine (Preview)**<br>(VM_GPUDriverExtensionUnusualExecution)<br>*This alert was [released in July 2023](#new-security-alert-in-defender-for-servers-plan-2-detecting-potential-attacks-leveraging-azure-vm-gpu-driver-extensions).* | Suspicious installation of a GPU extension was detected on your virtual machine by analyzing the Azure Resource Manager operations in your subscription. Attackers may use the GPU driver extension to install GPU drivers on your virtual machine via the Azure Resource Manager to perform cryptojacking. This activity is deemed suspicious as the principal's behavior departs from its usual patterns. | Impact | Low |
| **Run Command with a suspicious script was detected on your virtual machine (Preview)**<br>(VM_RunCommandSuspiciousScript) | A Run Command with a suspicious script was detected on your virtual machine by analyzing the Azure Resource Manager operations in your subscription. Attackers may use Run Command to execute malicious code with high privileges on your virtual machine via the Azure Resource Manager. The script is deemed suspicious as certain parts were identified as being potentially malicious. | Execution | High |
-| **Suspicious unauthorized Run Command usage was detected on your virtual machine (Preview)**<br>(VM_RunCommandSuspiciousFailure) | Suspicious unauthorized usage of Run Command has failed and was detected on your virtual machine by analyzing the Azure Resource Manager operations in your subscription. Attackers may attempt to use Run Command to execute malicious code with high privileges on your virtual machines via the Azure Resource Manager. This activity is deemed suspicious as it hasn't been commonly seen before. | Execution | Medium |
+| **Suspicious unauthorized Run Command usage was detected on your virtual machine (Preview)**<br>(VM_RunCommandSuspiciousFailure) | Suspicious unauthorized usage of Run Command has failed and was detected on your virtual machine by analyzing the Azure Resource Manager operations in your subscription. Attackers may attempt to use Run Command to execute malicious code with high privileges on your virtual machines via the Azure Resource Manager. This activity is deemed suspicious as it hasn't been commonly seen before. | Execution | Medium |
| **Suspicious Run Command usage was detected on your virtual machine (Preview)**<br>(VM_RunCommandSuspiciousUsage) | Suspicious usage of Run Command was detected on your virtual machine by analyzing the Azure Resource Manager operations in your subscription. Attackers may use Run Command to execute malicious code with high privileges on your virtual machines via the Azure Resource Manager. This activity is deemed suspicious as it hasn't been commonly seen before. | Execution | Low |
| **Suspicious usage of multiple monitoring or data collection extensions was detected on your virtual machines (Preview)**<br>(VM_SuspiciousMultiExtensionUsage) | Suspicious usage of multiple monitoring or data collection extensions was detected on your virtual machines by analyzing the Azure Resource Manager operations in your subscription. Attackers may abuse such extensions for data collection, network traffic monitoring, and more, in your subscription. This usage is deemed suspicious as it hasn't been commonly seen before. | Reconnaissance | Medium |
| **Suspicious installation of disk encryption extensions was detected on your virtual machines (Preview)**<br>(VM_DiskEncryptionSuspiciousUsage) | Suspicious installation of disk encryption extensions was detected on your virtual machines by analyzing the Azure Resource Manager operations in your subscription. Attackers may abuse the disk encryption extension to deploy full disk encryptions on your virtual machines via the Azure Resource Manager in an attempt to perform ransomware activity. This activity is deemed suspicious as it hasn't been commonly seen before and due to the high number of extension installations. | Impact | Medium |
| **Suspicious usage of VMAccess extension was detected on your virtual machines (Preview)**<br>(VM_VMAccessSuspiciousUsage) | Suspicious usage of VMAccess extension was detected on your virtual machines. Attackers may abuse the VMAccess extension to gain access and compromise your virtual machines with high privileges by resetting access or managing administrative users. This activity is deemed suspicious as the principal's behavior departs from its usual patterns, and due to the high number of the extension installations. | Persistence | Medium |
-| **Desired State Configuration (DSC) extension with a suspicious script was detected on your virtual machine (Preview)**<br>(VM_DSCExtensionSuspiciousScript) | Desired State Configuration (DSC) extension with a suspicious script was detected on your virtual machine by analyzing the Azure Resource Manager operations in your subscription. Attackers may use the Desired State Configuration (DSC) extension to deploy malicious configurations, such as persistence mechanisms, malicious scripts, and more, with high privileges, on your virtual machines. The script is deemed suspicious as certain parts were identified as being potentially malicious. | Execution | High |
+| **Desired State Configuration (DSC) extension with a suspicious script was detected on your virtual machine (Preview)**<br>(VM_DSCExtensionSuspiciousScript) | Desired State Configuration (DSC) extension with a suspicious script was detected on your virtual machine by analyzing the Azure Resource Manager operations in your subscription. Attackers may use the Desired State Configuration (DSC) extension to deploy malicious configurations, such as persistence mechanisms, malicious scripts, and more, with high privileges, on your virtual machines. The script is deemed suspicious as certain parts were identified as being potentially malicious. | Execution | High |
| **Suspicious usage of a Desired State Configuration (DSC) extension was detected on your virtual machines (Preview)**<br>(VM_DSCExtensionSuspiciousUsage) | Suspicious usage of a Desired State Configuration (DSC) extension was detected on your virtual machines by analyzing the Azure Resource Manager operations in your subscription. Attackers may use the Desired State Configuration (DSC) extension to deploy malicious configurations, such as persistence mechanisms, malicious scripts, and more, with high privileges, on your virtual machines. This activity is deemed suspicious as the principal's behavior departs from its usual patterns, and due to the high number of the extension installations. | Impact | Low |
| **Custom script extension with a suspicious script was detected on your virtual machine (Preview)**<br>(VM_CustomScriptExtensionSuspiciousCmd)<br>*(This alert already exists and has been improved with more enhanced logic and detection methods.)* | Custom script extension with a suspicious script was detected on your virtual machine by analyzing the Azure Resource Manager operations in your subscription. Attackers may use Custom script extension to execute malicious code with high privileges on your virtual machine via the Azure Resource Manager. The script is deemed suspicious as certain parts were identified as being potentially malicious. | Execution | High |
- See the [extension-based alerts in Defender for Servers](alerts-reference.md#alerts-for-azure-vm-extensions).
+ See the [extension-based alerts in Defender for Servers](alerts-reference.md#alerts-for-azure-vm-extensions).
For a complete list of alerts, see the [reference table for all security alerts in Microsoft Defender for Cloud](alerts-reference.md).
+### Business model and pricing updates for Defender for Cloud plans
+
+August 1, 2023
+
+Microsoft Defender for Cloud has three plans that offer service layer protection:
+
+- Defender for Key Vault
+- Defender for Resource Manager
+- Defender for DNS
+
+These plans have transitioned to a new business model with different pricing and packaging to address customer feedback regarding spending predictability and to simplify the overall cost structure.
+
+**Business model and pricing changes summary**:
+
+Existing customers of Defender for Key Vault, Defender for Resource Manager, and Defender for DNS keep their current business model and pricing unless they actively choose to switch to the new business model and price.
+
+- **Defender for Resource Manager**: This plan has a fixed price per subscription per month. Customers can switch to the new business model by selecting the Defender for Resource Manager new per-subscription model.
+- **Defender for Key Vault**: This plan has a fixed price per vault, per month with no overage charge. Customers can switch to the new business model by selecting the Defender for Key Vault new per-vault model.
+- **Defender for DNS**: Defender for Servers Plan 2 customers gain access to the Defender for DNS capabilities as part of Defender for Servers Plan 2 at no extra cost. Customers that have both Defender for Servers Plan 2 and Defender for DNS are no longer charged for Defender for DNS. Defender for DNS is no longer available as a standalone plan.
+
+Learn more about the pricing for these plans on the [Defender for Cloud pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/?v=17.23h).
+
## July 2023

Updates in July include:

| July 31 | [Preview release of containers Vulnerability Assessment powered by Microsoft Defender Vulnerability Management (MDVM) in Defender for Containers and Defender for Container Registries](#preview-release-of-containers-vulnerability-assessment-powered-by-microsoft-defender-vulnerability-management-mdvm-in-defender-for-containers-and-defender-for-container-registries) |
| July 30 | [Agentless container posture in Defender CSPM is now Generally Available](#agentless-container-posture-in-defender-cspm-is-now-generally-available) |
| July 20 | [Management of automatic updates to Defender for Endpoint for Linux](#management-of-automatic-updates-to-defender-for-endpoint-for-linux) |
-| July 18 | [Agentless secret scanning for virtual machines in Defender for servers P2 & DCSPM](#agentless-secret-scanning-for-virtual-machines-in-defender-for-servers-p2--dcspm) |
+| July 18 | [Agentless secret scanning for virtual machines in Defender for servers P2 & Defender CSPM](#agentless-secret-scanning-for-virtual-machines-in-defender-for-servers-p2--defender-cspm) |
| July 12 | [New Security alert in Defender for Servers plan 2: Detecting Potential Attacks leveraging Azure VM GPU driver extensions](#new-security-alert-in-defender-for-servers-plan-2-detecting-potential-attacks-leveraging-azure-vm-gpu-driver-extensions) |
| July 9 | [Support for disabling specific vulnerability findings](#support-for-disabling-specific-vulnerability-findings) |
| July 1 | [Data Aware Security Posture is now Generally Available](#data-aware-security-posture-is-now-generally-available) |
By default, Defender for Cloud attempts to update your Defender for Endpoint for
Learn how to [manage automatic updates configuration for Linux](integration-defender-for-endpoint.md#manage-automatic-updates-configuration-for-linux).
-### Agentless secret scanning for virtual machines in Defender for servers P2 & DCSPM
+### Agentless secret scanning for virtual machines in Defender for servers P2 & Defender CSPM
July 18, 2023
-Secret scanning is now available as part of the agentless scanning in Defender for Servers P2 and DCSPM. This capability helps to detect unmanaged and insecure secrets saved on virtual machines, both in Azure or AWS resources, that can be used to move laterally in the network. If secrets are detected, Defender for Cloud can help to prioritize and take actionable remediation steps to minimize the risk of lateral movement, all without affecting your machine's performance.
+Secret scanning is now available as part of agentless scanning in Defender for Servers P2 and Defender CSPM. This capability helps to detect unmanaged and insecure secrets saved on virtual machines, in both Azure and AWS resources, that can be used to move laterally in the network. If secrets are detected, Defender for Cloud can help your team prioritize and take actionable remediation steps to minimize the risk of lateral movement, all without affecting your machine's performance.
For more information about how to protect your secrets with secret scanning, see [Manage secrets with agentless secret scanning](secret-scanning.md).
We have added four new Azure Active Directory authentication-related recommendat
### Two recommendations related to missing Operating System (OS) updates were released to GA
-The recommendations `System updates should be installed on your machines (powered by Update management center)` and `Machines should be configured to periodically check for missing system updates` have been released for General Availability.
+The recommendations `System updates should be installed on your machines (powered by Azure Update Manager)` and `Machines should be configured to periodically check for missing system updates` have been released for General Availability.
To use the new recommendation, you need to:
After completing these steps, you can remove the old recommendation `System upda
The two versions of the recommendations: - [`System updates should be installed on your machines`](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/SystemUpdatesRecommendationDetailsWithRulesBlade/assessmentKey/4ab6e3c5-74dd-8b35-9ab9-f61b30875b27/subscriptionIds~/%5B%220cd6095b-b140-41ec-ad1d-32f2f7493386%22%2C%220ee78edb-a0ad-456c-a0a2-901bf542c102%22%2C%2284ca48fe-c942-42e5-b492-d56681d058fa%22%2C%22b2a328a7-ffff-4c09-b643-a4758cf170bc%22%2C%22eef8b6d5-94da-4b36-9327-a662f2674efb%22%2C%228d5565a3-dec1-4ee2-86d6-8aabb315eec4%22%2C%22e0fd569c-e34a-4249-8c24-e8d723c7f054%22%2C%22dad45786-32e5-4ef3-b90e-8e0838fbadb6%22%2C%22a5f9f0d3-a937-4de5-8cf3-387fce51e80c%22%2C%220368444d-756e-4ca6-9ecd-e964248c227a%22%2C%22e686ef8c-d35d-4e9b-92f8-caaaa7948c0a%22%2C%222145a411-d149-4010-84d4-40fe8a55db44%22%2C%22212f9889-769e-45ae-ab43-6da33674bd26%22%2C%22615f5f56-4ba9-45cf-b644-0c09d7d325c8%22%2C%22487bb485-b5b0-471e-9c0d-10717612f869%22%2C%22cb9eb375-570a-4e75-b83a-77dd942bee9f%22%2C%224bbecc02-f2c3-402a-8e01-1dfb1ffef499%22%2C%22432a7068-99ae-4975-ad38-d96b71172cdf%22%2C%22c0620f27-ac38-468c-a26b-264009fe7c41%22%2C%22a1920ebd-59b7-4f19-af9f-5e80599e88e4%22%2C%22b43a6159-1bea-4fa2-9407-e875fdc0ff55%22%2C%22d07c0080-170c-4c24-861d-9c817742986a%22%2C%22ae71ef11-a03f-4b4f-a0e6-ef144727c711%22%2C%2255a24be0-d9c3-4ecd-86b6-566c7aac2512%22%2C%227afc2d66-d5b4-4e84-970b-a782e3e4cc46%22%2C%2252a442a2-31e9-42f9-8e3e-4b27dbf82673%22%2C%228c4b5b03-3b24-4ed0-91f5-a703cd91b412%22%2C%22e01de573-132a-42ac-9ee2-f9dea9dd2717%22%2C%22b5c0b80f-5932-4d47-ae25-cd617dac90ce%22%2C%22e4e06275-58d1-4081-8f1b-be12462eb701%22%2C%229b4236fe-df75-4289-bf00-40628ed41fd9%22%2C%2221d8f407-c4c4-452e-87a4-e609bfb86248%22%2C%227d411d23-59e5-4e2e-8566-4f59de4544f2%22%2C%22b74d5345-100f-408a-a7ca-47abb52ba60d%22%2C%22f30787b9-82a8-4e74-bb0f-f12d64ecc496%22%2C%22482e1993-01d4-4b16-bff4-1866929176a1%22%2C%2226596251-f2f3-4e31-8a1b-f0754e32ad73%22%2C%224628298e-882d-4f12-abf4-a9f9654960bb%22%2C%224115b323-4aac-47f4-bb13-22af265ed58b%22%2C%22911e3904-5112-4232-a7ee-0d1811363c28%22%2C%22cd0fa82d-b6b6-4361-b002-050c32f71353%22%2C%22dd4c2dac-db51-4cd0-b734-684c6cc360c1%22%2C%22d2c9544f-4329-4642-b73d-020e7fef844f%22%2C%22bac420ed-c6fc-4a05-8ac1-8c0c52da1d6e%22%2C%2250ff7bc0-cd15-49d5-abb2-e975184c2f65%22%2C%223cd95ff9-ac62-4b5c-8240-0cd046687ea0%22%2C%2213723929-6644-4060-a50a-cc38ebc5e8b1%22%2C%2209fa8e83-d677-474f-8f73-2a954a0b0ea4%22%2C%22ca38bc19-cf50-48e2-bbe6-8c35b40212d8%22%2C%22bf163a87-8506-4eb3-8d14-c2dc95908830%22%2C%221278a874-89fc-418c-b6b9-ac763b000415%22%2C%223b2fda06-3ef6-454a-9dd5-994a548243e9%22%2C%226560575d-fa06-4e7d-95fb-f962e74efd7a%22%2C%22c3547baf-332f-4d8f-96bd-0659b39c7a59%22%2C%222f96ae42-240b-4228-bafa-26d8b7b03bf3%22%2C%2229de2cfc-f00a-43bb-bdc8-3108795bd282%22%2C%22a1ffc958-d2c7-493e-9f1e-125a0477f536%22%2C%2254b875cc-a81a-4914-8bfd-1a36bc7ddf4d%22%2C%22407ff5d7-0113-4c5c-8534-f5cfb09298f5%22%2C%22365a62ee-6166-4d37-a936-03585106dd50%22%2C%226d17b59e-06c4-4203-89d2-de793ebf5452%22%2C%229372b318-ed3a-4504-95a6-941201300f78%22%2C%223c1bb38c-82e3-4f8d-a115-a7110ba70d05%22%2C%22c6dcd830-359f-44d0-b4d4-c1ba95e86f48%22%2C%2209e8ad18-7bdb-43b8-80c4-43ee53460e0b%22%2C%22dcbdac96-1896-478d-89fc-c95ed43f4596%22%2C%22d23422cf-c0f2-4edc-a306-6e32b181a341%22%2C%228c2c7b23-848d-40fe-b817-690d79ad9dfd%22%2C%221163fbbe-27e7-4b0f-8466-195fe5417043%22%2C%223905431d-c062-4c17-8fd9-c51f89f334c4%22%2C%227ea26ded-0260-4e78-9336-285d4d9e33d2%22%2C%225ccdbd03-f1b1-4b59-a609-300685e17ce3%22%2C%22bcdc6eb0-74cd-40b6-b3a9-584b33cea7
b6%22%2C%22d557e825-27b1-4819-8af5-dc2429af91c9%22%2C%222bb50811-92b6-43a1-9d80-745962d9c759%22%2C%22409111bf-3097-421c-ad68-a44e716edf58%22%2C%2249e3f635-484a-43d1-b953-b29e1871ba88%22%2C%22b77ec8a9-04ed-48d2-a87a-e5887b978ba6%22%2C%22075423e9-7d33-4166-8bdf-3920b04e3735%22%2C%22ef143bbb-6a7e-4a3f-b64f-2f23330e0116%22%2C%2224afc59a-f969-4f83-95c9-3b70f52d833d%22%2C%22a8783cc5-1171-4c34-924f-6f71a20b21ec%22%2C%220079a9bb-e218-496a-9880-d27ad6192f52%22%2C%226f53185c-ea09-4fc3-9075-318dec805303%22%2C%22588845a8-a4a7-4ab1-83a1-1388452e8c0c%22%2C%22b68b2f37-1d37-4c2f-80f6-c23de402792e%22%2C%22eec2de82-6ab2-4a84-ae5f-57e9a10bf661%22%2C%22227531a4-d775-435b-a878-963ed8d0d18f%22%2C%228cff5d56-95fb-4a74-ab9d-079edb45313e%22%2C%22e72e5254-f265-4e95-9bd2-9ee8e7329051%22%2C%228ae1955e-f748-4273-a507-10159ba940f9%22%2C%22f6869ac6-2a40-404f-acd3-d07461be771a%22%2C%2285b3dbca-5974-4067-9669-67a141095a76%22%2C%228168a4f2-74d6-4663-9951-8e3a454937b7%22%2C%229ec1d932-0f3f-486c-acc6-e7d78b358f9b%22%2C%2279f57c16-00fe-48da-87d4-5192e86cd047%22%2C%22bac044cf-49e1-4843-8dda-1ce9662606c8%22%2C%22009d0e9f-a42a-470e-b315-82496a88cf0f%22%2C%2268f3658f-0090-4277-a500-f02227aaee97%22%5D/showSecurityCenterCommandBar~/false/assessmentOwners~/null)-- [`System updates should be installed on your machines (powered by Update management center)`](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/SystemUpdatesV2RecommendationDetailsBlade/assessmentKey/e1145ab1-eb4f-43d8-911b-36ddf771d13f/subscriptionIds~/%5B%220cd6095b-b140-41ec-ad1d-32f2f7493386%22%2C%220ee78edb-a0ad-456c-a0a2-901bf542c102%22%2C%2284ca48fe-c942-42e5-b492-d56681d058fa%22%2C%22b2a328a7-ffff-4c09-b643-a4758cf170bc%22%2C%22eef8b6d5-94da-4b36-9327-a662f2674efb%22%2C%228d5565a3-dec1-4ee2-86d6-8aabb315eec4%22%2C%22e0fd569c-e34a-4249-8c24-e8d723c7f054%22%2C%22dad45786-32e5-4ef3-b90e-8e0838fbadb6%22%2C%22a5f9f0d3-a937-4de5-8cf3-387fce51e80c%22%2C%220368444d-756e-4ca6-9ecd-e964248c227a%22%2C%22e686ef8c-d35d-4e9b-92f8-caaaa7948c0a%22%2C%222145a411-d149-4010-84d4-40fe8a55db44%22%2C%22212f9889-769e-45ae-ab43-6da33674bd26%22%2C%22615f5f56-4ba9-45cf-b644-0c09d7d325c8%22%2C%22487bb485-b5b0-471e-9c0d-10717612f869%22%2C%22cb9eb375-570a-4e75-b83a-77dd942bee9f%22%2C%224bbecc02-f2c3-402a-8e01-1dfb1ffef499%22%2C%22432a7068-99ae-4975-ad38-d96b71172cdf%22%2C%22c0620f27-ac38-468c-a26b-264009fe7c41%22%2C%22a1920ebd-59b7-4f19-af9f-5e80599e88e4%22%2C%22b43a6159-1bea-4fa2-9407-e875fdc0ff55%22%2C%22d07c0080-170c-4c24-861d-9c817742986a%22%2C%22ae71ef11-a03f-4b4f-a0e6-ef144727c711%22%2C%2255a24be0-d9c3-4ecd-86b6-566c7aac2512%22%2C%227afc2d66-d5b4-4e84-970b-a782e3e4cc46%22%2C%2252a442a2-31e9-42f9-8e3e-4b27dbf82673%22%2C%228c4b5b03-3b24-4ed0-91f5-a703cd91b412%22%2C%22e01de573-132a-42ac-9ee2-f9dea9dd2717%22%2C%22b5c0b80f-5932-4d47-ae25-cd617dac90ce%22%2C%22e4e06275-58d1-4081-8f1b-be12462eb701%22%2C%229b4236fe-df75-4289-bf00-40628ed41fd9%22%2C%2221d8f407-c4c4-452e-87a4-e609bfb86248%22%2C%227d411d23-59e5-4e2e-8566-4f59de4544f2%22%2C%22b74d5345-100f-408a-a7ca-47abb52ba60d%22%2C%22f30787b9-82a8-4e74-bb0f-f12d64ecc496%22%2C%22482e1993-01d4-4b16-bff4-1866929176a1%22%2C%2226596251-f2f3-4e31-8a1b-f0754e32ad73%22%2C%224628298e-882d-4f12-abf4-a9f9654960bb%22%2C%224115b323-4aac-47f4-bb13-22af265ed58b%22%2C%22911e3904-5112-4232-a7ee-0d1811363c28%22%2C%22cd0fa82d-b6b6-4361-b002-050c32f71353%22%2C%22dd4c2dac-db51-4cd0-b734-684c6cc360c1%22%2C%22d2c9544f-4329-4642-b73d-020e7fef844f%22%2C%22bac420ed-c6fc-4a05-8ac1-8c0c52da1d6e%22%2C%2250ff7bc0-cd15-49d5-abb2-e975184c2f65%22%2C%223cd95ff9-ac62-4b5c-8
240-0cd046687ea0%22%2C%2213723929-6644-4060-a50a-cc38ebc5e8b1%22%2C%2209fa8e83-d677-474f-8f73-2a954a0b0ea4%22%2C%22ca38bc19-cf50-48e2-bbe6-8c35b40212d8%22%2C%22bf163a87-8506-4eb3-8d14-c2dc95908830%22%2C%221278a874-89fc-418c-b6b9-ac763b000415%22%2C%223b2fda06-3ef6-454a-9dd5-994a548243e9%22%2C%226560575d-fa06-4e7d-95fb-f962e74efd7a%22%2C%22c3547baf-332f-4d8f-96bd-0659b39c7a59%22%2C%222f96ae42-240b-4228-bafa-26d8b7b03bf3%22%2C%2229de2cfc-f00a-43bb-bdc8-3108795bd282%22%2C%22a1ffc958-d2c7-493e-9f1e-125a0477f536%22%2C%2254b875cc-a81a-4914-8bfd-1a36bc7ddf4d%22%2C%22407ff5d7-0113-4c5c-8534-f5cfb09298f5%22%2C%22365a62ee-6166-4d37-a936-03585106dd50%22%2C%226d17b59e-06c4-4203-89d2-de793ebf5452%22%2C%229372b318-ed3a-4504-95a6-941201300f78%22%2C%223c1bb38c-82e3-4f8d-a115-a7110ba70d05%22%2C%22c6dcd830-359f-44d0-b4d4-c1ba95e86f48%22%2C%2209e8ad18-7bdb-43b8-80c4-43ee53460e0b%22%2C%22dcbdac96-1896-478d-89fc-c95ed43f4596%22%2C%22d23422cf-c0f2-4edc-a306-6e32b181a341%22%2C%228c2c7b23-848d-40fe-b817-690d79ad9dfd%22%2C%221163fbbe-27e7-4b0f-8466-195fe5417043%22%2C%223905431d-c062-4c17-8fd9-c51f89f334c4%22%2C%227ea26ded-0260-4e78-9336-285d4d9e33d2%22%2C%225ccdbd03-f1b1-4b59-a609-300685e17ce3%22%2C%22bcdc6eb0-74cd-40b6-b3a9-584b33cea7b6%22%2C%22d557e825-27b1-4819-8af5-dc2429af91c9%22%2C%222bb50811-92b6-43a1-9d80-745962d9c759%22%2C%22409111bf-3097-421c-ad68-a44e716edf58%22%2C%2249e3f635-484a-43d1-b953-b29e1871ba88%22%2C%22b77ec8a9-04ed-48d2-a87a-e5887b978ba6%22%2C%22075423e9-7d33-4166-8bdf-3920b04e3735%22%2C%22ef143bbb-6a7e-4a3f-b64f-2f23330e0116%22%2C%2224afc59a-f969-4f83-95c9-3b70f52d833d%22%2C%22a8783cc5-1171-4c34-924f-6f71a20b21ec%22%2C%220079a9bb-e218-496a-9880-d27ad6192f52%22%2C%226f53185c-ea09-4fc3-9075-318dec805303%22%2C%22588845a8-a4a7-4ab1-83a1-1388452e8c0c%22%2C%22b68b2f37-1d37-4c2f-80f6-c23de402792e%22%2C%22eec2de82-6ab2-4a84-ae5f-57e9a10bf661%22%2C%22227531a4-d775-435b-a878-963ed8d0d18f%22%2C%228cff5d56-95fb-4a74-ab9d-079edb45313e%22%2C%22e72e5254-f265-4e95-9bd2-9ee8e7329051%22%2C%228ae1955e-f748-4273-a507-10159ba940f9%22%2C%22f6869ac6-2a40-404f-acd3-d07461be771a%22%2C%2285b3dbca-5974-4067-9669-67a141095a76%22%2C%228168a4f2-74d6-4663-9951-8e3a454937b7%22%2C%229ec1d932-0f3f-486c-acc6-e7d78b358f9b%22%2C%2279f57c16-00fe-48da-87d4-5192e86cd047%22%2C%22bac044cf-49e1-4843-8dda-1ce9662606c8%22%2C%22009d0e9f-a42a-470e-b315-82496a88cf0f%22%2C%2268f3658f-0090-4277-a500-f02227aaee97%22%5D/showSecurityCenterCommandBar~/false/assessmentOwners~/null)
+- [`System updates should be installed on your machines (powered by Azure Update Manager)`](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/SystemUpdatesV2RecommendationDetailsBlade/assessmentKey/e1145ab1-eb4f-43d8-911b-36ddf771d13f/subscriptionIds~/%5B%220cd6095b-b140-41ec-ad1d-32f2f7493386%22%2C%220ee78edb-a0ad-456c-a0a2-901bf542c102%22%2C%2284ca48fe-c942-42e5-b492-d56681d058fa%22%2C%22b2a328a7-ffff-4c09-b643-a4758cf170bc%22%2C%22eef8b6d5-94da-4b36-9327-a662f2674efb%22%2C%228d5565a3-dec1-4ee2-86d6-8aabb315eec4%22%2C%22e0fd569c-e34a-4249-8c24-e8d723c7f054%22%2C%22dad45786-32e5-4ef3-b90e-8e0838fbadb6%22%2C%22a5f9f0d3-a937-4de5-8cf3-387fce51e80c%22%2C%220368444d-756e-4ca6-9ecd-e964248c227a%22%2C%22e686ef8c-d35d-4e9b-92f8-caaaa7948c0a%22%2C%222145a411-d149-4010-84d4-40fe8a55db44%22%2C%22212f9889-769e-45ae-ab43-6da33674bd26%22%2C%22615f5f56-4ba9-45cf-b644-0c09d7d325c8%22%2C%22487bb485-b5b0-471e-9c0d-10717612f869%22%2C%22cb9eb375-570a-4e75-b83a-77dd942bee9f%22%2C%224bbecc02-f2c3-402a-8e01-1dfb1ffef499%22%2C%22432a7068-99ae-4975-ad38-d96b71172cdf%22%2C%22c0620f27-ac38-468c-a26b-264009fe7c41%22%2C%22a1920ebd-59b7-4f19-af9f-5e80599e88e4%22%2C%22b43a6159-1bea-4fa2-9407-e875fdc0ff55%22%2C%22d07c0080-170c-4c24-861d-9c817742986a%22%2C%22ae71ef11-a03f-4b4f-a0e6-ef144727c711%22%2C%2255a24be0-d9c3-4ecd-86b6-566c7aac2512%22%2C%227afc2d66-d5b4-4e84-970b-a782e3e4cc46%22%2C%2252a442a2-31e9-42f9-8e3e-4b27dbf82673%22%2C%228c4b5b03-3b24-4ed0-91f5-a703cd91b412%22%2C%22e01de573-132a-42ac-9ee2-f9dea9dd2717%22%2C%22b5c0b80f-5932-4d47-ae25-cd617dac90ce%22%2C%22e4e06275-58d1-4081-8f1b-be12462eb701%22%2C%229b4236fe-df75-4289-bf00-40628ed41fd9%22%2C%2221d8f407-c4c4-452e-87a4-e609bfb86248%22%2C%227d411d23-59e5-4e2e-8566-4f59de4544f2%22%2C%22b74d5345-100f-408a-a7ca-47abb52ba60d%22%2C%22f30787b9-82a8-4e74-bb0f-f12d64ecc496%22%2C%22482e1993-01d4-4b16-bff4-1866929176a1%22%2C%2226596251-f2f3-4e31-8a1b-f0754e32ad73%22%2C%224628298e-882d-4f12-abf4-a9f9654960bb%22%2C%224115b323-4aac-47f4-bb13-22af265ed58b%22%2C%22911e3904-5112-4232-a7ee-0d1811363c28%22%2C%22cd0fa82d-b6b6-4361-b002-050c32f71353%22%2C%22dd4c2dac-db51-4cd0-b734-684c6cc360c1%22%2C%22d2c9544f-4329-4642-b73d-020e7fef844f%22%2C%22bac420ed-c6fc-4a05-8ac1-8c0c52da1d6e%22%2C%2250ff7bc0-cd15-49d5-abb2-e975184c2f65%22%2C%223cd95ff9-ac62-4b5c-8240-0cd046687ea0%22%2C%2213723929-6644-4060-a50a-cc38ebc5e8b1%22%2C%2209fa8e83-d677-474f-8f73-2a954a0b0ea4%22%2C%22ca38bc19-cf50-48e2-bbe6-8c35b40212d8%22%2C%22bf163a87-8506-4eb3-8d14-c2dc95908830%22%2C%221278a874-89fc-418c-b6b9-ac763b000415%22%2C%223b2fda06-3ef6-454a-9dd5-994a548243e9%22%2C%226560575d-fa06-4e7d-95fb-f962e74efd7a%22%2C%22c3547baf-332f-4d8f-96bd-0659b39c7a59%22%2C%222f96ae42-240b-4228-bafa-26d8b7b03bf3%22%2C%2229de2cfc-f00a-43bb-bdc8-3108795bd282%22%2C%22a1ffc958-d2c7-493e-9f1e-125a0477f536%22%2C%2254b875cc-a81a-4914-8bfd-1a36bc7ddf4d%22%2C%22407ff5d7-0113-4c5c-8534-f5cfb09298f5%22%2C%22365a62ee-6166-4d37-a936-03585106dd50%22%2C%226d17b59e-06c4-4203-89d2-de793ebf5452%22%2C%229372b318-ed3a-4504-95a6-941201300f78%22%2C%223c1bb38c-82e3-4f8d-a115-a7110ba70d05%22%2C%22c6dcd830-359f-44d0-b4d4-c1ba95e86f48%22%2C%2209e8ad18-7bdb-43b8-80c4-43ee53460e0b%22%2C%22dcbdac96-1896-478d-89fc-c95ed43f4596%22%2C%22d23422cf-c0f2-4edc-a306-6e32b181a341%22%2C%228c2c7b23-848d-40fe-b817-690d79ad9dfd%22%2C%221163fbbe-27e7-4b0f-8466-195fe5417043%22%2C%223905431d-c062-4c17-8fd9-c51f89f334c4%22%2C%227ea26ded-0260-4e78-9336-285d4d9e33d2%22%2C%225ccdbd03-f1b1-4b59-a609-300685e17ce3%22%2C%22bcdc6eb0-74cd-40b6-b3a9-584b33cea7b6%22%2C%22d5
57e825-27b1-4819-8af5-dc2429af91c9%22%2C%222bb50811-92b6-43a1-9d80-745962d9c759%22%2C%22409111bf-3097-421c-ad68-a44e716edf58%22%2C%2249e3f635-484a-43d1-b953-b29e1871ba88%22%2C%22b77ec8a9-04ed-48d2-a87a-e5887b978ba6%22%2C%22075423e9-7d33-4166-8bdf-3920b04e3735%22%2C%22ef143bbb-6a7e-4a3f-b64f-2f23330e0116%22%2C%2224afc59a-f969-4f83-95c9-3b70f52d833d%22%2C%22a8783cc5-1171-4c34-924f-6f71a20b21ec%22%2C%220079a9bb-e218-496a-9880-d27ad6192f52%22%2C%226f53185c-ea09-4fc3-9075-318dec805303%22%2C%22588845a8-a4a7-4ab1-83a1-1388452e8c0c%22%2C%22b68b2f37-1d37-4c2f-80f6-c23de402792e%22%2C%22eec2de82-6ab2-4a84-ae5f-57e9a10bf661%22%2C%22227531a4-d775-435b-a878-963ed8d0d18f%22%2C%228cff5d56-95fb-4a74-ab9d-079edb45313e%22%2C%22e72e5254-f265-4e95-9bd2-9ee8e7329051%22%2C%228ae1955e-f748-4273-a507-10159ba940f9%22%2C%22f6869ac6-2a40-404f-acd3-d07461be771a%22%2C%2285b3dbca-5974-4067-9669-67a141095a76%22%2C%228168a4f2-74d6-4663-9951-8e3a454937b7%22%2C%229ec1d932-0f3f-486c-acc6-e7d78b358f9b%22%2C%2279f57c16-00fe-48da-87d4-5192e86cd047%22%2C%22bac044cf-49e1-4843-8dda-1ce9662606c8%22%2C%22009d0e9f-a42a-470e-b315-82496a88cf0f%22%2C%2268f3658f-0090-4277-a500-f02227aaee97%22%5D/showSecurityCenterCommandBar~/false/assessmentOwners~/null)
will both be available until the [Log Analytics agent is deprecated on August 31, 2024](https://azure.microsoft.com/updates/were-retiring-the-log-analytics-agent-in-azure-monitor-on-31-august-2024/), which is when the older version (`System updates should be installed on your machines`) of the recommendation will be deprecated as well. Both recommendations return the same results and are available under the same control `Apply system updates`.
-The new recommendation `System updates should be installed on your machines (powered by Update management center)`, has a remediation flow available through the Fix button, which can be used to remediate any results through the Update Management Center (Preview). This remediation process is still in Preview.
+The new recommendation `System updates should be installed on your machines (powered by Azure Update Manager)` has a remediation flow available through the Fix button, which can be used to remediate any results through the Update Management Center (Preview). This remediation process is still in Preview.
-The new recommendation `System updates should be installed on your machines (powered by Update management center)`, isn't expected to affect your Secure Score, as it has the same results as the old recommendation `System updates should be installed on your machines`.
+The new recommendation `System updates should be installed on your machines (powered by Azure Update Manager)` isn't expected to affect your Secure Score, as it has the same results as the old recommendation `System updates should be installed on your machines`.
The prerequisite recommendation ([Enable the periodic assessment property](../update-center/assessment-options.md#periodic-assessment)) has a negative effect on your Secure Score. You can remediate the negative effect with the available [Fix button](implement-security-recommendations.md).
Defender for APIs helps you to gain visibility into business-critical APIs. You
Learn more about [Defender for APIs](defender-for-apis-introduction.md).
-## March 2023
-
-Updates in March include:
-- [A new Defender for Storage plan is available, including near-real time malware scanning and sensitive data threat detection](#a-new-defender-for-storage-plan-is-available-including-near-real-time-malware-scanning-and-sensitive-data-threat-detection)
-- [Data-aware security posture (preview)](#data-aware-security-posture-preview)
-- [Improved experience for managing the default Azure security policies](#improved-experience-for-managing-the-default-azure-security-policies)
-- [Defender CSPM (Cloud Security Posture Management) is now Generally Available (GA)](#defender-cspm-cloud-security-posture-management-is-now-generally-available-ga)
-- [Option to create custom recommendations and security standards in Microsoft Defender for Cloud](#option-to-create-custom-recommendations-and-security-standards-in-microsoft-defender-for-cloud)
-- [Microsoft cloud security benchmark (MCSB) version 1.0 is now Generally Available (GA)](#microsoft-cloud-security-benchmark-mcsb-version-10-is-now-generally-available-ga)
-- [Some regulatory compliance standards are now available in government clouds](#some-regulatory-compliance-standards-are-now-available-in-government-clouds)
-- [New preview recommendation for Azure SQL Servers](#new-preview-recommendation-for-azure-sql-servers)
-- [New alert in Defender for Key Vault](#new-alert-in-defender-for-key-vault)
-
-### A new Defender for Storage plan is available, including near-real time malware scanning and sensitive data threat detection
-
-Cloud storage plays a key role in the organization and stores large volumes of valuable and sensitive data. Today we're announcing a new Defender for Storage plan. If you're using the previous plan (now renamed to "Defender for Storage (classic)"), you'll need to proactively [migrate to the new plan](defender-for-storage-classic-migrate.md) in order to use the new features and benefits.
-
-The new plan includes advanced security capabilities to help protect against malicious file uploads, sensitive data exfiltration, and data corruption. It also provides a more predictable and flexible pricing structure for better control over coverage and costs.
-
-The new plan has new capabilities now in public preview:
-- Detecting sensitive data exposure and exfiltration events
-
-- Near real-time blob on-upload malware scanning across all file types
-
-- Detecting entities with no identities using SAS tokens
-
-These capabilities enhance the existing Activity Monitoring capability, based on control and data plane log analysis and behavioral modeling to identify early signs of breach.
-
-All these capabilities are available in a new predictable and flexible pricing plan that provides granular control over data protection at both the subscription and resource levels.
-
-Learn more at [Overview of Microsoft Defender for Storage](defender-for-storage-introduction.md).
-
-### Data-aware security posture (preview)
-
-Microsoft Defender for Cloud helps security teams to be more productive at reducing risks and responding to data breaches in the cloud. It allows them to cut through the noise with data context and prioritize the most critical security risks, preventing a costly data breach.
-- Automatically discover data resources across cloud estate and evaluate their accessibility, data sensitivity and configured data flows.
-- Continuously uncover risks to data breaches of sensitive data resources, exposure or attack paths that could lead to a data resource using a lateral movement technique.
-- Detect suspicious activities that may indicate an ongoing threat to sensitive data resources.
-
-[Learn more](concept-data-security-posture.md) about data-aware security posture.
-
-### Improved experience for managing the default Azure security policies
-
-We introduce an improved Azure security policy management experience for built-in recommendations that simplifies the way Defender for Cloud customers fine tune their security requirements. The new experience includes the following new capabilities:
-- A simple interface allows better performance and fewer selections when managing default security policies within Defender for Cloud, including enabling/disabling, denying, setting parameters, and managing exemptions.
-- A single view of all built-in security recommendations offered by the Microsoft cloud security benchmark (formerly the Azure security benchmark). Recommendations are organized into logical groups, making it easier to understand the types of resources covered, and the relationship between parameters and recommendations.
-- New features such as filters and search have been added.
-
-Learn how to [manage security policies](tutorial-security-policy.md).
-
-Read the [Microsoft Defender for Cloud blog](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/improved-experience-for-managing-the-default-azure-security/ba-p/3776522).
-
-### Defender CSPM (Cloud Security Posture Management) is now Generally Available (GA)
-
-We're announcing that Defender CSPM is now Generally Available (GA). Defender CSPM offers all of the services available under the Foundational CSPM capabilities and adds the following benefits:
-- **Attack path analysis and ARG API** - Attack path analysis uses a graph-based algorithm that scans the cloud security graph to expose attack paths and suggests recommendations for how best to remediate issues that break the attack path and prevent successful breach. You can also consume attack paths programmatically by querying the Azure Resource Graph (ARG) API. Learn how to use [attack path analysis](how-to-manage-attack-path.md).
-- **Cloud Security explorer** - Use the Cloud Security Explorer to run graph-based queries on the cloud security graph, to proactively identify security risks in your multicloud environments. Learn more about [cloud security explorer](concept-attack-path.md#what-is-cloud-security-explorer).
-
-Learn more about [Defender CSPM](overview-page.md).
-
-### Option to create custom recommendations and security standards in Microsoft Defender for Cloud
-
-Microsoft Defender for Cloud provides the option of creating custom recommendations and standards for AWS and GCP using KQL queries. You can use a query editor to build and test queries over your data.
-This feature is part of the Defender CSPM (Cloud Security Posture Management) plan. Learn how to [create custom recommendations and standards](create-custom-recommendations.md).
-
-### Microsoft cloud security benchmark (MCSB) version 1.0 is now Generally Available (GA)
-
-Microsoft Defender for Cloud is announcing that the Microsoft cloud security benchmark (MCSB) version 1.0 is now Generally Available (GA).
-
-MCSB version 1.0 replaces the Azure Security Benchmark (ASB) version 3 as Microsoft Defender for Cloud's default security policy for identifying security vulnerabilities in your cloud environments according to common security frameworks and best practices. MCSB version 1.0 appears as the default compliance standard in the compliance dashboard and is enabled by default for all Defender for Cloud customers.
-
-You can also learn [How Microsoft cloud security benchmark (MCSB) helps you succeed in your cloud security journey](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/announcing-microsoft-cloud-security-benchmark-v1-general/ba-p/3763013).
-
-Learn more about [MCSB](https://aka.ms/mcsb).
-
-### Some regulatory compliance standards are now available in government clouds
-
-We're announcing that the following regulatory standards are being updated with the latest version and are available for customers in Azure Government and Microsoft Azure operated by 21Vianet.
-
-**Azure Government**:
-- [PCI DSS v4](/azure/compliance/offerings/offering-pci-dss)
-- [SOC 2 Type 2](/azure/compliance/offerings/offering-soc-2)
-- [ISO 27001:2013](/azure/compliance/offerings/offering-iso-27001)
-
-**Microsoft Azure operated by 21Vianet**:
-- [SOC 2 Type 2](/azure/compliance/offerings/offering-soc-2)
-- [ISO 27001:2013](/azure/compliance/offerings/offering-iso-27001)
-
-Learn how to [Customize the set of standards in your regulatory compliance dashboard](update-regulatory-compliance-packages.md).
-
-### New preview recommendation for Azure SQL Servers
-
-We've added a new recommendation for Azure SQL Servers, `Azure SQL Server authentication mode should be Azure Active Directory Only (Preview)`.
-
-The recommendation is based on the existing policy [`Azure SQL Database should have Azure Active Directory Only Authentication enabled`](https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2fabda6d70-9778-44e7-84a8-06713e6db027)
-
-This recommendation disables local authentication methods and allows only Azure Active Directory Authentication, which improves security by ensuring that Azure SQL Databases can exclusively be accessed by Azure Active Directory identities.
-
-Learn how to [create servers with Azure AD-only authentication enabled in Azure SQL](/azure/azure-sql/database/authentication-azure-ad-only-authentication-create-server).
-
-### New alert in Defender for Key Vault
-
-Defender for Key Vault has the following new alert:
-
-| Alert (alert type) | Description | MITRE tactics | Severity |
-|--|--|:-:|--|
-| **Denied access from a suspicious IP to a key vault**<br>(KV_SuspiciousIPAccessDenied) | An unsuccessful key vault access has been attempted by an IP that has been identified by Microsoft Threat Intelligence as a suspicious IP address. Though this attempt was unsuccessful, it indicates that your infrastructure might have been compromised. We recommend further investigations. | Credential Access | Low |
-
-You can see a list of all of the [alerts available for Key Vault](alerts-reference.md).
-
-## February 2023
-
-Updates in February include:
-- [Enhanced Cloud Security Explorer](#enhanced-cloud-security-explorer)
-- [Defender for Containers' vulnerability scans of running Linux images now GA](#defender-for-containers-vulnerability-scans-of-running-linux-images-now-ga)
-- [Announcing support for the AWS CIS 1.5.0 compliance standard](#announcing-support-for-the-aws-cis-150-compliance-standard)
-- [Microsoft Defender for DevOps (preview) is now available in other regions](#microsoft-defender-for-devops-preview-is-now-available-in-other-regions)
-- [The built-in policy [Preview]: Private endpoint should be configured for Key Vault has been deprecated](#the-built-in-policy-preview-private-endpoint-should-be-configured-for-key-vault-has-been-deprecated)
-
-### Enhanced Cloud Security Explorer
-
-An improved version of the cloud security explorer includes a refreshed user experience that dramatically reduces query friction, adds the ability to run multicloud and multi-resource queries, and embeds documentation for each query option.
-
-The Cloud Security Explorer now allows you to run cloud-abstract queries across resources. You can use either the prebuilt query templates or use the custom search to apply filters to build your query. Learn [how to manage Cloud Security Explorer](how-to-manage-cloud-security-explorer.md).
-
-### Defender for Containers' vulnerability scans of running Linux images now GA
-
-Defender for Containers detects vulnerabilities in running containers. Both Windows and Linux containers are supported.
-
-In August 2022, this capability was [released in preview](release-notes-archive.md) for Windows and Linux. It's now released for general availability (GA) for Linux.
-
-When vulnerabilities are detected, Defender for Cloud generates the following security recommendation listing the scan's findings: [Running container images should have vulnerability findings resolved](https://portal.azure.com/#view/Microsoft_Azure_Security_CloudNativeCompute/KubernetesRuntimeVisibilityRecommendationDetailsBlade/assessmentKey/41503391-efa5-47ee-9282-4eff6131462c/showSecurityCenterCommandBar~/false).
-
-Learn more about [viewing vulnerabilities for running images](defender-for-containers-vulnerability-assessment-azure.md).
-
-### Announcing support for the AWS CIS 1.5.0 compliance standard
-
-Defender for Cloud now supports the CIS Amazon Web Services Foundations v1.5.0 compliance standard. The standard can be [added to your Regulatory Compliance dashboard](update-regulatory-compliance-packages.md#add-a-regulatory-standard-to-your-dashboard), and builds on MDC's existing offerings for multicloud recommendations and standards.
-
-This new standard includes both existing and new recommendations that extend Defender for Cloud's coverage to new AWS services and resources.
-
-Learn how to [Manage AWS assessments and standards](how-to-manage-aws-assessments-standards.md).
-
-### Microsoft Defender for DevOps (preview) is now available in other regions
-
-Microsoft Defender for DevOps has expanded its preview and is now available in the West Europe and East Australia regions, when you onboard your Azure DevOps and GitHub resources.
-
-Learn more about [Microsoft Defender for DevOps](defender-for-devops-introduction.md).
-
-### The built-in policy [Preview]: Private endpoint should be configured for Key Vault has been deprecated
-
-The built-in policy [`[Preview]: Private endpoint should be configured for Key Vault`](https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5f0bc445-3935-4915-9981-011aa2b46147) has been deprecated and has been replaced with the [`[Preview]: Azure Key Vaults should use private link`](https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa6abeaec-4d90-4a02-805f-6b26c4d3fbe9) policy.
-
-Learn more about [integrating Azure Key Vault with Azure Policy](../key-vault/general/azure-policy.md#network-access).
-
-## January 2023
-
-Updates in January include:
-- [The Endpoint protection (Microsoft Defender for Endpoint) component is now accessed in the Settings and monitoring page](#the-endpoint-protection-microsoft-defender-for-endpoint-component-is-now-accessed-in-the-settings-and-monitoring-page)
-- [New version of the recommendation to find missing system updates (Preview)](#new-version-of-the-recommendation-to-find-missing-system-updates-preview)
-- [Cleanup of deleted Azure Arc machines in connected AWS and GCP accounts](#cleanup-of-deleted-azure-arc-machines-in-connected-aws-and-gcp-accounts)
-- [Allow continuous export to Event Hubs behind a firewall](#allow-continuous-export-to-event-hubs-behind-a-firewall)
-- [The name of the Secure score control Protect your applications with Azure advanced networking solutions has been changed](#the-name-of-the-secure-score-control-protect-your-applications-with-azure-advanced-networking-solutions-has-been-changed)
-- [The policy Vulnerability Assessment settings for SQL server should contain an email address to receive scan reports has been deprecated](#the-policy-vulnerability-assessment-settings-for-sql-server-should-contain-an-email-address-to-receive-scan-reports-has-been-deprecated)
-- [Recommendation to enable diagnostic logs for Virtual Machine Scale Sets has been deprecated](#recommendation-to-enable-diagnostic-logs-for-virtual-machine-scale-sets-has-been-deprecated)
-
-### The Endpoint protection (Microsoft Defender for Endpoint) component is now accessed in the Settings and monitoring page
-
-To access Endpoint protection, navigate to **Environment settings** > **Defender plans** > **Settings and monitoring**. From here you can set Endpoint protection to **On**. You can also see all of the other components that are managed.
-
-Learn more about [enabling Microsoft Defender for Endpoint](integration-defender-for-endpoint.md) on your servers with Defender for Servers.
-
-### New version of the recommendation to find missing system updates (Preview)
-
-You no longer need an agent on your Azure VMs and Azure Arc machines to make sure the machines have all of the latest security or critical system updates.
-
-The new system updates recommendation, `System updates should be installed on your machines (powered by Update management center)` in the `Apply system updates` control, is based on the [Update management center (preview)](../update-center/overview.md). The recommendation relies on a native agent embedded in every Azure VM and Azure Arc machine instead of an installed agent. The Quick Fix in the new recommendation leads you to a one-time installation of the missing updates in the Update management center portal.
-
-To use the new recommendation, you need to:
-- Connect your non-Azure machines to Arc
-- Turn on the [periodic assessment property](../update-center/assessment-options.md#periodic-assessment). You can use the Quick Fix in the new recommendation, `Machines should be configured to periodically check for missing system updates` to fix the recommendation.
-
-The existing "System updates should be installed on your machines" recommendation, which relies on the Log Analytics agent, is still available under the same control.
-
-### Cleanup of deleted Azure Arc machines in connected AWS and GCP accounts
-
-A machine connected to an AWS or GCP account that is covered by Defender for Servers or Defender for SQL on machines is represented in Defender for Cloud as an Azure Arc machine. Until now, that machine wasn't deleted from the inventory when the machine was deleted from the AWS or GCP account, leaving unnecessary Azure Arc resources in Defender for Cloud that represent deleted machines.
-
-Defender for Cloud will now automatically delete Azure Arc machines when those machines are deleted in a connected AWS or GCP account.
-
-### Allow continuous export to Event Hubs behind a firewall
-
-You can now enable the continuous export of alerts and recommendations, as a trusted service to Event Hubs that are protected by an Azure firewall.
-
-You can enable continuous export as the alerts or recommendations are generated. You can also define a schedule to send periodic snapshots of all of the new data.
-
-Learn how to enable [continuous export to an event hub behind an Azure firewall](continuous-export.md#continuously-export-to-an-event-hub-behind-a-firewall).
-
-### The name of the Secure score control Protect your applications with Azure advanced networking solutions has been changed
-
-The secure score control, `Protect your applications with Azure advanced networking solutions` has been changed to `Protect applications against DDoS attacks`.
-
-The updated name is reflected on Azure Resource Graph (ARG), Secure Score Controls API and the `Download CSV report`.
-
-### The policy Vulnerability Assessment settings for SQL server should contain an email address to receive scan reports has been deprecated
-
-The policy [`Vulnerability Assessment settings for SQL server should contain an email address to receive scan reports`](https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F057d6cfe-9c4f-4a6d-bc60-14420ea1f1a9) has been deprecated.
-
-The Defender for SQL vulnerability assessment email report is still available and existing email configurations haven't changed.
-
-### Recommendation to enable diagnostic logs for Virtual Machine Scale Sets has been deprecated
-
-The recommendation `Diagnostic logs in Virtual Machine Scale Sets should be enabled` has been deprecated.
-
-The related [policy definition](https://portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7c1b1214-f927-48bf-8882-84f0af6588b1) has also been deprecated from any standards displayed in the regulatory compliance dashboard.
-
-| Recommendation | Description | Severity |
-|--|--|--|
-| Diagnostic logs in Virtual Machine Scale Sets should be enabled | Enable logs and retain them for up to a year, enabling you to recreate activity trails for investigation purposes when a security incident occurs or your network is compromised. | Low |
-
## Next steps

For past changes to Defender for Cloud, see [Archive for what's new in Defender for Cloud?](release-notes-archive.md).
defender-for-cloud Sql Azure Vulnerability Assessment Rules Changelog https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/sql-azure-vulnerability-assessment-rules-changelog.md
This article details the changes made to the SQL vulnerability assessment service rules. Rules that are updated, removed, or added will be outlined below. For an updated list of SQL vulnerability assessment rules, see [SQL vulnerability assessment rules](sql-azure-vulnerability-assessment-rules.md).
+## September 2023
+
+|Rule ID |Rule Title |Change details |
+|--|--|--|
+|VA1018 |Latest updates should be installed |Logic change |
+
## July 2023

|Rule ID |Rule Title |Change details |
|--|--|--|
defender-for-cloud Sql Azure Vulnerability Assessment Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/sql-azure-vulnerability-assessment-rules.md
SQL vulnerability assessment rules have five categories, which are in the follow
|Rule ID |Rule Title |Rule Severity |Rule Description |Platform |
|--|--|--|--|--|
-| VA1018 |Latest updates should be installed |High |Microsoft periodically releases Cumulative Updates (CUs) for each version of SQL Server. This rule checks whether the latest CU has been installed for the particular version of SQL Server being used, by passing in a string for execution. This rule checks that all users (except dbo) do not have permission to execute the xp_cmdshell extended stored procedure. |<nobr>SQL Server 2005<nobr/><br/><br/><nobr>SQL Server 2008<nobr/><br/><br/><nobr>SQL Server 2008<nobr/><br/><br/><nobr>SQL Server 2012<nobr/><br/><br/><nobr>SQL Server 2014<nobr/><br/><br/><nobr>SQL Server 2016<nobr/><br/><br/>SQL Server 2017<br/>|
+| VA1018 |Latest updates should be installed |High |Microsoft periodically releases Cumulative Updates (CUs) for each version of SQL Server. This rule checks whether the latest CU has been installed for the particular version of SQL Server being used, by passing in a string for execution. |<nobr>SQL Server 2005<nobr/><br/><br/><nobr>SQL Server 2008<nobr/><br/><br/><nobr>SQL Server 2008 R2<nobr/><br/><br/><nobr>SQL Server 2012<nobr/><br/><br/><nobr>SQL Server 2014<nobr/><br/><br/><nobr>SQL Server 2016<nobr/><br/><br/>SQL Server 2017<nobr/><br/><br/>SQL Server 2019<nobr/><br/><br/>SQL Server 2022<br/>|
|VA2128 |Vulnerability assessment is not supported for SQL Server versions lower than SQL Server 2012 |High |To run a vulnerability assessment scan on your SQL Server, the server needs to be upgraded to SQL Server 2012 or higher; SQL Server 2008 R2 and below are no longer supported by Microsoft. For more information, [see](/azure/azure-sql/virtual-machines/windows/sql-server-extend-end-of-support) |<nobr>SQL Server 2012+<nobr/><br/><br/>SQL Managed Instance<br/><br/>SQL Database<br/><br/>Azure Synapse |

## Surface Area Reduction
SQL vulnerability assessment rules have five categories, which are in the follow
## Next steps

- [Vulnerability assessment](sql-azure-vulnerability-assessment-overview.md)
-- [SQL vulnerability assessment rules changelog](sql-azure-vulnerability-assessment-rules-changelog.md)
+- [SQL vulnerability assessment rules changelog](sql-azure-vulnerability-assessment-rules-changelog.md)
defender-for-cloud Subassessment Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/subassessment-rest-api.md
+Title: Container vulnerability assessments powered by Microsoft Defender Vulnerability Management subassessments
+description: Learn about container vulnerability assessments powered by Microsoft Defender Vulnerability Management subassessments
+Last updated: 08/16/2023
+# Container vulnerability assessments powered by Microsoft Defender Vulnerability Management subassessments
+
+API Version: 2019-01-01-preview
+
+Get security subassessments on all your scanned resources inside a scope.
+
+## Overview
+
+You can access vulnerability assessment results programmatically for both registry and runtime recommendations using the subassessments REST API.
+
+For more information on how to get started with our REST API, see [Azure REST API reference](/rest/api/azure/). The following sections provide the specifics for container vulnerability assessment results powered by Microsoft Defender Vulnerability Management.
+
+## HTTP Requests
+
+### Get
+
+#### GET
+
+`https://management.azure.com/{scope}/providers/Microsoft.Security/assessments/{assessmentName}/subAssessments/{subAssessmentName}?api-version=2019-01-01-preview`
+
+#### URI Parameters
+
+| Name | In | Required | Type | Description |
+| -- | -- | -- | -- | -- |
+| assessmentName | path | True | string | The Assessment Key - Unique key for the assessment type |
+| scope | path | True | string | Scope of the query. Can be subscription (/subscriptions/0b06d9ea-afe6-4779-bd59-30e5c2d9d13f) or management group (/providers/Microsoft.Management/managementGroups/mgName). |
+| subAssessmentName | path | True | string | The Sub-Assessment Key - Unique key for the subassessment type |
+| api-version | query | True | string | API version for the operation |
+
+#### Responses
+
+| Name | Type | Description |
+| -- | -- | -- |
+| 200 OK | [SecuritySubAssessment](/rest/api/defenderforcloud/sub-assessments/get#securitysubassessment) | OK |
+| Other Status Codes | [CloudError](/rest/api/defenderforcloud/sub-assessments/get#clouderror) | Error response describing why the operation failed. |
+
+### List
+
+#### GET
+
+`https://management.azure.com/{scope}/providers/Microsoft.Security/assessments/{assessmentName}/subAssessments?api-version=2019-01-01-preview`
+
+#### URI parameters
+
+| **Name** | **In** | **Required** | **Type** | **Description** |
+| -- | -- | -- | -- | -- |
+| **assessmentName** | path | True | string | The Assessment Key - Unique key for the assessment type |
+| **scope** | path | True | string | Scope of the query. The scope for AzureContainerVulnerability is the registry itself. |
+| **api-version** | query | True | string | API version for the operation |
+
+#### Responses
+
+| Name | Type | Description |
+| -- | -- | -- |
+| 200 OK | [SecuritySubAssessmentList](/rest/api/defenderforcloud/sub-assessments/list#securitysubassessmentlist) | OK |
+| Other Status Codes | [CloudError](/rest/api/defenderforcloud/sub-assessments/list#clouderror) | Error response describing why the operation failed. |
+
+## Security
+
+### azure_auth
+
+Azure Active Directory OAuth2 Flow
+
+Type: oauth2
+Flow: implicit
+Authorization URL: `https://login.microsoftonline.com/common/oauth2/authorize`
+
+Scopes
+
+| Name | Description |
+| -- | -- |
+| user_impersonation | impersonate your user account |
+
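For orientation, here's a minimal Python sketch of an authenticated List call. It assumes the `azure-identity` and `requests` packages are installed; the subscription ID, resource group, registry name, and assessment key below are placeholders, not real values:

```python
# Minimal sketch: list security subassessments for a container registry scope.
# Requires: pip install azure-identity requests
# The scope segments and assessment key below are placeholders.
import requests
from azure.identity import DefaultAzureCredential

# Acquire an Azure AD token for the Azure Resource Manager audience.
credential = DefaultAzureCredential()
token = credential.get_token("https://management.azure.com/.default")

scope = (
    "/subscriptions/<subscription-id>"
    "/resourceGroups/<resource-group>"
    "/providers/Microsoft.ContainerRegistry/registries/<registry-name>"
)
assessment_name = "<assessment-key>"  # unique key for the assessment type

url = (
    f"https://management.azure.com{scope}"
    f"/providers/Microsoft.Security/assessments/{assessment_name}"
    "/subAssessments?api-version=2019-01-01-preview"
)

response = requests.get(
    url,
    headers={"Authorization": f"Bearer {token.token}"},
    timeout=30,
)
response.raise_for_status()

# Each entry in "value" is one subassessment (for example, one CVE finding).
for sub in response.json().get("value", []):
    print(sub["name"], sub["properties"]["displayName"])
```
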
+### Example
+
+### HTTP
+
+#### GET
+
+`https://management.azure.com/subscriptions/6ebb89c4-0e91-4f62-888f-c9518e662293/resourceGroups/myResourceGroup/providers/Microsoft.ContainerRegistry/registries/myRegistry/providers/Microsoft.Security/assessments/cf02effd-8e33-4b84-a012-1e61cf1a5638/subAssessments?api-version=2019-01-01-preview`
+
+#### Sample Response
+
+```json
+{
+ "value": [
+ {
+ "type": "Microsoft.Security/assessments/subAssessments",
+ "id": "/subscriptions/3905431d-c062-4c17-8fd9-c51f89f334c4/resourceGroups/PytorchEnterprise/providers/Microsoft.ContainerRegistry/registries/ptebic/providers/Microsoft.Security/assessments/c0b7cfc6-3172-465a-b378-53c7ff2cc0d5/subassessments/3f069764-2777-3731-9698-c87f23569a1d",
+ "name": "3f069764-2777-3731-9698-c87f23569a1d",
+ "properties": {
+ "id": "CVE-2021-39537",
+ "displayName": "CVE-2021-39537",
+ "status": {
+ "code": "NotApplicable",
+ "severity": "High",
+ "cause": "Exempt",
+ "description": "Disabled parent assessment"
+ },
+ "remediation": "Create new image with updated package libncursesw5 with version 6.2-0ubuntu2.1 or higher.",
+ "description": "This vulnerability affects the following vendors: Gnu, Apple, Red_Hat, Ubuntu, Debian, Suse, Amazon, Microsoft, Alpine. To view more details about this vulnerability please visit the vendor website.",
+ "timeGenerated": "2023-08-08T08:14:13.742742Z",
+ "resourceDetails": {
+ "source": "Azure",
+ "id": "/repositories/public/azureml/aifx/stable-ubuntu2004-cu116-py39-torch1121/images/sha256:7f107db187ff32acfbc47eaa262b44d13d725f14dd08669a726a81fba87a12d6"
+ },
+ "additionalData": {
+ "assessedResourceType": "AzureContainerRegistryVulnerability",
+ "artifactDetails": {
+ "repositoryName": "public/azureml/aifx/stable-ubuntu2004-cu116-py39-torch1121",
+ "registryHost": "ptebic.azurecr.io",
+ "digest": "sha256:7f107db187ff32acfbc47eaa262b44d13d725f14dd08669a726a81fba87a12d6",
+ "tags": [
+ "biweekly.202305.2"
+ ],
+ "artifactType": "ContainerImage",
+ "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
+ "lastPushedToRegistryUTC": "2023-05-15T16:00:40.2938142Z"
+ },
+ "softwareDetails": {
+ "osDetails": {
+ "osPlatform": "linux",
+ "osVersion": "ubuntu_linux_20.04"
+ },
+ "packageName": "libncursesw5",
+ "category": "OS",
+ "fixReference": {
+ "id": "USN-6099-1",
+ "url": "https://ubuntu.com/security/notices/USN-6099-1",
+ "description": "USN-6099-1: ncurses vulnerabilities 2023 May 23",
+ "releaseDate": "2023-05-23T00:00:00+00:00"
+ },
+ "vendor": "ubuntu",
+ "version": "6.2-0ubuntu2",
+ "evidence": [
+ "dpkg-query -f '${Package}:${Source}:\\n' -W | grep -e ^libncursesw5:.* -e .*:libncursesw5: | cut -f 1 -d ':' | xargs dpkg-query -s",
+ "dpkg-query -f '${Package}:${Source}:\\n' -W | grep -e ^libncursesw5:.* -e .*:libncursesw5: | cut -f 1 -d ':' | xargs dpkg-query -s"
+ ],
+ "language": "",
+ "fixedVersion": "6.2-0ubuntu2.1",
+ "fixStatus": "FixAvailable"
+ },
+ "vulnerabilityDetails": {
+ "cveId": "CVE-2021-39537",
+ "references": [
+ {
+ "title": "CVE-2021-39537",
+ "link": "https://nvd.nist.gov/vuln/detail/CVE-2021-39537"
+ }
+ ],
+ "cvss": {
+ "2.0": null,
+ "3.0": {
+ "base": 7.8,
+ "cvssVectorString": "CVSS:3.0/AV:L/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H/E:P/RL:U/RC:R"
+ }
+ },
+ "workarounds": [],
+ "publishedDate": "2020-08-04T00:00:00",
+ "lastModifiedDate": "2023-07-07T00:00:00",
+ "severity": "High",
+ "cpe": {
+ "uri": "cpe:2.3:a:ubuntu:libncursesw5:*:*:*:*:*:ubuntu_linux_20.04:*:*",
+ "part": "Applications",
+ "vendor": "ubuntu",
+ "product": "libncursesw5",
+ "version": "*",
+ "update": "*",
+ "edition": "*",
+ "language": "*",
+ "softwareEdition": "*",
+ "targetSoftware": "ubuntu_linux_20.04",
+ "targetHardware": "*",
+ "other": "*"
+ },
+ "weaknesses": {
+ "cwe": [
+ {
+ "id": "CWE-787"
+ }
+ ]
+ },
+ "exploitabilityAssessment": {
+ "exploitStepsVerified": false,
+ "exploitStepsPublished": false,
+ "isInExploitKit": false,
+ "types": [],
+ "exploitUris": []
+ }
+ },
+ "cvssV30Score": 7.8
+ }
+ }
+ }
+ ]
+}
+```
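To show how the fields in this payload relate, a short hedged sketch that summarizes each sub-assessment, assuming the sample response above has been saved locally as `response.json` (a hypothetical file name):

```python
# Sketch: pull triage-relevant fields out of a saved List response.
import json

with open("response.json") as f:  # the sample payload shown above
    payload = json.load(f)

for sub in payload["value"]:
    props = sub["properties"]
    details = props["additionalData"]["vulnerabilityDetails"]
    software = props["additionalData"]["softwareDetails"]
    print(
        f"{props['id']}: severity={details['severity']}, "
        f"package={software['packageName']}, "
        f"fixedVersion={software.get('fixedVersion', 'n/a')}, "
        f"status={props['status']['code']}"
    )
```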
+
+## Definitions
+
+| Name | Description |
+| | |
+| AzureResourceDetails | Details of the Azure resource that was assessed |
+| CloudError | Common error response for all Azure Resource Manager APIs to return error details for failed operations. (This definition also follows the OData error response format.) |
+| CloudErrorBody | The error detail |
+| AzureContainerVulnerability | More context fields for container registry vulnerability assessment |
+| CVE | CVE Details |
+| CVSS | CVSS Details |
+| ErrorAdditionalInfo | The resource management error additional info. |
+| SecuritySubAssessment | Security subassessment on a resource |
+| SecuritySubAssessmentList | List of security subassessments |
+| ArtifactDetails | Details for the affected container image |
+| SoftwareDetails | Details for the affected software package |
+| FixReference | Details on the fix, if available |
+| OS Details | Details about the operating system |
+| VulnerabilityDetails | Details on the detected vulnerability |
+| CPE | Common Platform Enumeration |
+| Cwe | Common weakness enumeration |
+| VulnerabilityReference | Reference links to vulnerability |
+| ExploitabilityAssessment | Reference links to an example exploit |
+
+### AzureContainerRegistryVulnerability (MDVM)
+
+Additional context fields for Azure container registry vulnerability assessment
+
+| **Name** | **Type** | **Description** |
+| -- | -- | -- |
+| assessedResourceType | string: AzureContainerRegistryVulnerability | Subassessment resource type |
+| cvssV30Score | Numeric | CVSS V3 Score |
+| vulnerabilityDetails | VulnerabilityDetails | |
+| artifactDetails | ArtifactDetails | |
+| softwareDetails | SoftwareDetails | |
+
+### ArtifactDetails
+
+Context details for the affected container image
+
+| **Name** | **Type** | **Description** |
+| -- | -- | |
+| repositoryName | String | Repository name |
+| registryHost | String | Registry host |
+| lastPushedToRegistryUTC | Timestamp | UTC timestamp of the most recent push to the registry |
+| artifactType | String: ContainerImage | |
+| mediaType | String | Layer media type |
+| digest | String | Digest of the vulnerable image |
+| tags | String[] | Tags of the vulnerable image |
+
+### SoftwareDetails
+
+Details for the affected software package
+
+| **Name** | **Type** | **Description** |
+| | | |
+| fixedVersion | String | Fixed Version |
+| category | String | Vulnerability category – OS or Language |
+| osDetails | OsDetails | |
+| language | String | Language of the affected package (for example, Python, .NET); can be empty |
+| version | String | |
+| vendor | String | |
+| packageName | String | |
+| fixStatus | String | Unknown, FixAvailable, NoFixAvailable, Scheduled, WontFix |
+| evidence | String[] | Evidence for the package |
+| fixReference | FixReference | |
+
+### FixReference
+
+Details on the fix, if available
+
+| **Name** | **Type** | **Description** |
+| -- | | |
+| id | String | Fix ID |
+| description | String | Fix description |
+| releaseDate | Timestamp | Fix timestamp |
+| url | String | URL to fix notification |
+
+### OS Details
+
+Details about the operating system
+
+| **Name** | **Type** | **Description** |
+| - | -- | -- |
+| osPlatform | String | For example: Linux, Windows |
+| osName | String | For example: Ubuntu |
+| osVersion | String | |
+
+### VulnerabilityDetails
+
+Details on the detected vulnerability
+
+| **Name** | **Type** | **Description** |
+| | -- | - |
+| severity | String | The sub-assessment severity level |
+| lastModifiedDate | Timestamp | |
+| publishedDate | Timestamp | Published date |
+| exploitabilityAssessment | ExploitabilityAssessment | |
+| cvss | Dictionary<string, CVSS> | Dictionary from CVSS version to CVSS details object |
+| workarounds | Workaround[] | Published workarounds for the vulnerability |
+| references | VulnerabilityReference[] | |
+| weaknesses | Weakness[] | |
+| cveId | String | CVE ID |
+| cpe | CPE | |
+
+### CPE (Common Platform Enumeration)
+
+| **Name** | **Type** | **Description** |
+| | -- | |
+| language | String | Language tag |
+| softwareEdition | String | |
+| version | String | Package version |
+| targetSoftware | String | Target Software |
+| vendor | String | Vendor |
+| product | String | Product |
+| edition | String | |
+| update | String | |
+| other | String | |
+| part | String | One of: Applications, Hardware, OperatingSystems |
+| uri | String | CPE 2.3 formatted URI |
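As a worked example of the `uri` row, a naive sketch that maps the colon-separated CPE 2.3 components onto the names in this table (a simplification under assumptions: real CPE strings may contain escaped colons, which this ignores):

```python
# Sketch: split a CPE 2.3 formatted string into the components listed above.
CPE23_FIELDS = [
    "part", "vendor", "product", "version", "update", "edition",
    "language", "softwareEdition", "targetSoftware", "targetHardware", "other",
]

def parse_cpe23(uri: str) -> dict:
    prefix, spec_version, *values = uri.split(":")
    if prefix != "cpe" or spec_version != "2.3":
        raise ValueError("not a CPE 2.3 formatted string")
    return dict(zip(CPE23_FIELDS, values))

# The CPE value from the sample response earlier in this section:
print(parse_cpe23("cpe:2.3:a:ubuntu:libncursesw5:*:*:*:*:*:ubuntu_linux_20.04:*:*"))
```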
+
+### Weakness
+
+| **Name** | **Type** | **Description** |
+| -- | -- | |
+| cwe | Cwe[] | |
+
+### Cwe (Common weakness enumeration)
+
+CWE details
+
+| **Name** | **Type** | **Description** |
+| -- | -- | |
+| id | String | CWE ID |
+
+### VulnerabilityReference
+
+Reference links to vulnerability
+
+| **Name** | **Type** | **Description** |
+| -- | -- | - |
+| link | String | Reference URL |
+| title | String | Reference title |
+
+### ExploitabilityAssessment
+
+Reference links to an example exploit
+
+| **Name** | **Type** | **Description** |
+| | -- | |
+| exploitUris | String[] | |
+| exploitStepsPublished | Boolean | Whether the exploit steps have been published |
+| exploitStepsVerified | Boolean | Whether the exploit steps have been verified |
+| isInExploitKit | Boolean | Whether the exploit is part of an exploit kit |
+| types | String[] | Exploit types, for example: NotAvailable, Dos, Local, Remote, WebApps, PrivilegeEscalation |
+
+### AzureResourceDetails
+
+Details of the Azure resource that was assessed
+
+| **Name** | **Type** | **Description** |
+| -- | -- | |
+| id | string | Azure resource ID of the assessed resource |
+| source | string: Azure | The platform where the assessed resource resides |
+
+### CloudError
+
+Common error response for all Azure Resource Manager APIs to return error details for failed operations. (This response also follows the OData error response format.)
+
+| **Name** | **Type** | **Description** |
+| -- | | -- |
+| error.additionalInfo | [ErrorAdditionalInfo](/rest/api/defenderforcloud/sub-assessments/list#erroradditionalinfo)[] | The error additional info. |
+| error.code | string | The error code. |
+| error.details | [CloudErrorBody](/rest/api/defenderforcloud/sub-assessments/list?tabs=HTTP#clouderrorbody)[] | The error details. |
+| error.message | string | The error message. |
+| error.target | string | The error target. |
+
+### CloudErrorBody
+
+The error detail.
+
+| **Name** | **Type** | **Description** |
+| -- | | -- |
+| additionalInfo | [ErrorAdditionalInfo](/rest/api/defenderforcloud/sub-assessments/list#erroradditionalinfo)[] | The error additional info. |
+| code | string | The error code. |
+| details | [CloudErrorBody](/rest/api/defenderforcloud/sub-assessments/list#clouderrorbody)[] | The error details. |
+| message | string | The error message. |
+| target | string | The error target. |
+
+### ErrorAdditionalInfo
+
+The resource management error additional info.
+
+| **Name** | **Type** | **Description** |
+| -- | -- | - |
+| info | object | The additional info. |
+| type | string | The additional info type. |
+
+### SecuritySubAssessment
+
+Security subassessment on a resource
+
+| **Name** | **Type** | **Description** |
+| -- | | |
+| id | string | Resource ID |
+| name | string | Resource name |
+| properties.additionalData | AdditionalData: AzureContainerRegistryVulnerability | Details of the subassessment |
+| properties.category | string | Category of the subassessment |
+| properties.description | string | Human readable description of the assessment status |
+| properties.displayName | string | User friendly display name of the subassessment |
+| properties.id | string | Vulnerability ID |
+| properties.impact | string | Description of the impact of this subassessment |
+| properties.remediation | string | Information on how to remediate this subassessment |
+| properties.resourceDetails | ResourceDetails: [AzureResourceDetails](/rest/api/defenderforcloud/sub-assessments/list#azureresourcedetails) | Details of the resource that was assessed |
+| properties.status | [SubAssessmentStatus](/rest/api/defenderforcloud/sub-assessments/list#subassessmentstatus) | Status of the subassessment |
+| properties.timeGenerated | string | The date and time the subassessment was generated |
+| type | string | Resource type |
+
+### SecuritySubAssessmentList
+
+List of security subassessments
+
+| **Name** | **Type** | **Description** |
+| -- | | - |
+| nextLink | string | The URI to fetch the next page. |
+| value | [SecuritySubAssessment](/rest/api/defenderforcloud/sub-assessments/list?tabs=HTTP#securitysubassessment)[] | Security subassessment on a resource |
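Since `nextLink` signals another page, a hedged paging sketch (same assumed packages as the earlier example) would follow it until it disappears:

```python
# Sketch: iterate all pages of a SecuritySubAssessmentList response.
import requests
from azure.identity import DefaultAzureCredential

def iter_sub_assessments(first_url: str):
    """Yield every sub-assessment, following nextLink between pages."""
    token = DefaultAzureCredential().get_token("https://management.azure.com/.default")
    headers = {"Authorization": f"Bearer {token.token}"}
    url = first_url  # include api-version in the query string
    while url:
        page = requests.get(url, headers=headers).json()
        yield from page.get("value", [])
        url = page.get("nextLink")  # absent on the final page
```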
defender-for-cloud Support Matrix Cloud Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/support-matrix-cloud-environment.md
In the support table, **NA** indicates that the feature isn't available.
[Defender CSPM](concept-cloud-security-posture-management.md)| GA | NA | NA
[Defender for APIs](defender-for-apis-introduction.md). [Review support preview regions](defender-for-apis-prepare.md#cloud-and-region-support). | Preview | NA | NA
[Defender for App Service](defender-for-app-service-introduction.md) | GA | NA | NA
-[Defender for Azure Cosmos DB](concept-defender-for-cosmos.md) | Preview | NA | NA
+[Defender for Azure Cosmos DB](concept-defender-for-cosmos.md) | GA | NA | NA
[Defender for Azure SQL database servers](defender-for-sql-introduction.md) | GA | GA | GA<br/><br/>A subset of alerts/vulnerability assessments is available.<br/>Behavioral threat protection isn't available.
[Defender for Containers](defender-for-containers-introduction.md)<br/>[Review detailed feature support](support-matrix-defender-for-containers.md) | GA | GA | GA
[Defender for DevOps](defender-for-devops-introduction.md) | Preview | NA | NA
defender-for-cloud Support Matrix Defender For Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/support-matrix-defender-for-containers.md
description: Review support requirements for the Defender for Containers plan in
Previously updated : 06/07/2023 Last updated : 09/06/2023
This article summarizes support information for the [Defender for Containers pla
| Feature | Supported Resources | Linux release state | Windows release state | Agentless/Agent-based | Pricing Tier | Azure clouds availability |
|--|--|--|--|--|--|--|
+| [Agentless discovery for Kubernetes](defender-for-containers-introduction.md#agentless-discovery-for-kubernetes) | ACR, AKS | GA | GA | Agentless | Defender for Containers or Defender CSPM | Azure commercial clouds |
| Compliance-Docker CIS | VM, Virtual Machine Scale Set | GA | - | Log Analytics agent | Defender for Servers Plan 2 | Commercial clouds<br><br> National clouds: Azure Government, Microsoft Azure operated by 21Vianet |
-| [Vulnerability assessment](defender-for-containers-vulnerability-assessment-azure.md) (powered by Qualys) - registry scan [OS packages](#registries-and-images-support-for-akspowered-by-qualys) | ACR, Private ACR | GA | Preview | Agentless | Defender for Containers | Commercial clouds<br><br> National clouds: Azure Government, Azure operated by 21Vianet |
-| [Vulnerability assessment](defender-for-containers-vulnerability-assessment-azure.md) (powered by Qualys) -registry scan [language packages](#registries-and-images-support-for-akspowered-by-qualys) | ACR, Private ACR | Preview | - | Agentless | Defender for Containers | Commercial clouds<br><br> National clouds: Azure Government, Azure operated by 21Vianet |
+| [Vulnerability assessment](defender-for-containers-vulnerability-assessment-azure.md) (powered by Qualys) - registry scan [OS packages](#registries-and-images-support-for-azurepowered-by-qualys) | ACR, Private ACR | GA | Preview | Agentless | Defender for Containers | Commercial clouds<br><br> National clouds: Azure Government, Azure operated by 21Vianet |
+| [Vulnerability assessment](defender-for-containers-vulnerability-assessment-azure.md) (powered by Qualys) -registry scan [language packages](#registries-and-images-support-for-azurepowered-by-qualys) | ACR, Private ACR | Preview | - | Agentless | Defender for Containers | Commercial clouds<br><br> National clouds: Azure Government, Azure operated by 21Vianet |
| [Vulnerability assessment (powered by Qualys) - running images](defender-for-containers-vulnerability-assessment-azure.md#view-vulnerabilities-for-images-running-on-your-aks-clusters) | AKS | GA | Preview | Defender agent | Defender for Containers | Commercial clouds |
| [Vulnerability assessment](agentless-container-registry-vulnerability-assessment.md) powered by MDVM - registry scan | ACR, Private ACR | Preview | | Agentless | Defender for Containers | Commercial clouds |
| [Vulnerability assessment](agentless-container-registry-vulnerability-assessment.md) powered by MDVM - running images | AKS | Preview | | Defender agent | Defender for Containers | Commercial clouds |
This article summarizes support information for the [Defender for Containers pla
| Discovery/provisioning-Defender agent auto provisioning | AKS | GA | - | Agentless | Defender for Containers | Commercial clouds<br><br> National clouds: Azure Government, Azure operated by 21Vianet |
| Discovery/provisioning-Azure Policy for Kubernetes auto provisioning | AKS | GA | - | Agentless | Free | Commercial clouds<br><br> National clouds: Azure Government, Azure operated by 21Vianet |
-### Registries and images support for AKS - powered by Qualys
+### Registries and images support for Azure - powered by Qualys
| Aspect | Details |
|--|--|
This article summarizes support information for the [Defender for Containers pla
| OS Packages | **Supported** <br> • Alpine Linux 3.12-3.16 <br> • Red Hat Enterprise Linux 6, 7, 8 <br> • CentOS 6, 7 <br> • Oracle Linux 6, 7, 8 <br> • Amazon Linux 1, 2 <br> • openSUSE Leap 42, 15 <br> • SUSE Enterprise Linux 11, 12, 15 <br> • Debian GNU/Linux wheezy, jessie, stretch, buster, bullseye <br> • Ubuntu 10.10-22.04 <br> • FreeBSD 11.1-13.1 <br> • Fedora 32, 33, 34, 35|
| Language specific packages (Preview) <br><br> (**Only supported for Linux images**) | **Supported** <br> • Python <br> • Node.js <br> • .NET <br> • JAVA <br> • Go |
-### Registries and images - powered by MDVM
+### Registries and images for Azure - powered by MDVM
[!INCLUDE [Registries and images support powered by MDVM](./includes/registries-images-mdvm.md)]
-### Kubernetes distributions and configurations
+### Kubernetes distributions and configurations - Azure
| Aspect | Details |
|--|--|
-| Kubernetes distributions and configurations | **Supported**<br> • Any Cloud Native Computing Foundation (CNCF) certified Kubernetes clusters<br>• [Azure Kubernetes Service (AKS)](../aks/intro-kubernetes.md) with [Kubernetes RBAC](../aks/concepts-identity.md#kubernetes-rbac) <br> • [Amazon Elastic Kubernetes Service (EKS)](https://aws.amazon.com/eks/)<br> • [Google Kubernetes Engine (GKE) Standard](https://cloud.google.com/kubernetes-engine/) <br><br> **Supported via Arc enabled Kubernetes** <sup>[1](#footnote1)</sup> <sup>[2](#footnote2)</sup><br>• [Azure Kubernetes Service hybrid](/azure/aks/hybrid/aks-hybrid-options-overview)<br> • [Kubernetes](https://kubernetes.io/docs/home/)<br> • [AKS Engine](https://github.com/Azure/aks-engine)<br> • [Azure Red Hat OpenShift](https://azure.microsoft.com/services/openshift/)<br> • [Red Hat OpenShift](https://www.openshift.com/learn/topics/kubernetes/) (version 4.6 or newer)<br> • [VMware Tanzu Kubernetes Grid](https://tanzu.vmware.com/kubernetes-grid)<br> • [Rancher Kubernetes Engine](https://rancher.com/docs/rke/latest/en/)<br> |
+| Kubernetes distributions and configurations | **Supported**<br> • [Azure Kubernetes Service (AKS)](../aks/intro-kubernetes.md) with [Kubernetes RBAC](../aks/concepts-identity.md#kubernetes-rbac) <br><br> **Supported via Arc enabled Kubernetes** <sup>[1](#footnote1)</sup> <sup>[2](#footnote2)</sup><br> • [Azure Kubernetes Service hybrid](/azure/aks/hybrid/aks-hybrid-options-overview)<br> • [Kubernetes](https://kubernetes.io/docs/home/)<br> • [AKS Engine](https://github.com/Azure/aks-engine)<br> • [Azure Red Hat OpenShift](https://azure.microsoft.com/services/openshift/)|
-<sup><a name="footnote1"></a>1</sup> Any Cloud Native Computing Foundation (CNCF) certified Kubernetes clusters should be supported, but only the specified clusters have been tested.
+<sup><a name="footnote1"></a>1</sup> Any Cloud Native Computing Foundation (CNCF) certified Kubernetes clusters should be supported, but only the specified clusters have been tested on Azure.
-<sup><a name="footnote2"></a>2</sup> To get [Microsoft Defender for Containers](../defender-for-cloud/defender-for-containers-introduction.md) protection for your environments, you'll need to onboard [Azure Arc-enabled Kubernetes](../azure-arc/kubernetes/overview.md) and enable Defender for Containers as an Arc extension.
+<sup><a name="footnote2"></a>2</sup> To get [Microsoft Defender for Containers](../defender-for-cloud/defender-for-containers-introduction.md) protection for your environments, you need to onboard [Azure Arc-enabled Kubernetes](../azure-arc/kubernetes/overview.md) and enable Defender for Containers as an Arc extension.
> [!NOTE] > For additional requirements for Kubernetes workload protection, see [existing limitations](../governance/policy/concepts/policy-for-kubernetes.md#limitations).
Learn how to [use Azure Private Link to connect networks to Azure Monitor](../az
| Discovery and provisioning | Auto provisioning of Defender agent | - | - | - | - | - |
| Discovery and provisioning | Auto provisioning of Azure Policy for Kubernetes | - | - | - | - | - |
-### Images support-EKS
+### Images support - AWS
| Aspect | Details |
|--|--|
| Registries and images | **Unsupported** <br>• Images that have at least one layer over 2 GB<br> • Public repositories and manifest lists <br>• Images in the AWS management account aren't scanned so that we don't create resources in the management account. |
-### Kubernetes distributions/configurations support-EKS
+### Kubernetes distributions/configurations support - AWS
| Aspect | Details |
|--|--|
-| Kubernetes distributions and configurations | **Supported**<br> • Any Cloud Native Computing Foundation (CNCF) certified Kubernetes clusters<br>• [Azure Kubernetes Service (AKS)](../aks/intro-kubernetes.md) with [Kubernetes RBAC](../aks/concepts-identity.md#kubernetes-rbac) <br> • [Amazon Elastic Kubernetes Service (EKS)](https://aws.amazon.com/eks/)<br> • [Google Kubernetes Engine (GKE) Standard](https://cloud.google.com/kubernetes-engine/) <br><br> **Supported via Arc enabled Kubernetes** <sup>[1](#footnote1)</sup> <sup>[2](#footnote2)</sup><br>• [Azure Kubernetes Service hybrid](/azure/aks/hybrid/aks-hybrid-options-overview)<br> • [Kubernetes](https://kubernetes.io/docs/home/)<br> • [AKS Engine](https://github.com/Azure/aks-engine)<br> • [Azure Red Hat OpenShift](https://azure.microsoft.com/services/openshift/)<br> • [Red Hat OpenShift](https://www.openshift.com/learn/topics/kubernetes/) (version 4.6 or newer)<br> • [VMware Tanzu Kubernetes Grid](https://tanzu.vmware.com/kubernetes-grid)<br> • [Rancher Kubernetes Engine](https://rancher.com/docs/rke/latest/en/)<br> |
+| Kubernetes distributions and configurations | **Supported**<br>• [Amazon Elastic Kubernetes Service (EKS)](https://aws.amazon.com/eks/)<br><br> **Supported via Arc enabled Kubernetes** <sup>[1](#footnote1)</sup> <sup>[2](#footnote2)</sup><br> • [Kubernetes](https://kubernetes.io/docs/home/)|
<sup><a name="footnote1"></a>1</sup> Any Cloud Native Computing Foundation (CNCF) certified Kubernetes clusters should be supported, but only the specified clusters have been tested.
-<sup><a name="footnote2"></a>2</sup> To get [Microsoft Defender for Containers](../defender-for-cloud/defender-for-containers-introduction.md) protection for your environments, you'll need to onboard [Azure Arc-enabled Kubernetes](../azure-arc/kubernetes/overview.md) and enable Defender for Containers as an Arc extension.
+<sup><a name="footnote2"></a>2</sup> To get [Microsoft Defender for Containers](../defender-for-cloud/defender-for-containers-introduction.md) protection for your environments, you need to onboard [Azure Arc-enabled Kubernetes](../azure-arc/kubernetes/overview.md) and enable Defender for Containers as an Arc extension.
> [!NOTE]
-> For additional requirements for Kuberenetes workload protection, see [existing limitations](../governance/policy/concepts/policy-for-kubernetes.md#limitations).
+> For additional requirements for Kubernetes workload protection, see [existing limitations](../governance/policy/concepts/policy-for-kubernetes.md#limitations).
### Private link restrictions
Outbound proxy without authentication and outbound proxy with basic authenticati
| Discovery and provisioning | Auto provisioning of Defender agent | GKE | Preview | - | Agentless | Defender for Containers |
| Discovery and provisioning | Auto provisioning of Azure Policy for Kubernetes | GKE | Preview | - | Agentless | Defender for Containers |
-### Kubernetes distributions/configurations support-GKE
+### Kubernetes distributions/configurations support - GCP
| Aspect | Details |
|--|--|
-| Kubernetes distributions and configurations | **Supported**<br> • Any Cloud Native Computing Foundation (CNCF) certified Kubernetes clusters<br>• [Azure Kubernetes Service (AKS)](../aks/intro-kubernetes.md) with [Kubernetes RBAC](../aks/concepts-identity.md#kubernetes-rbac) <br> • [Amazon Elastic Kubernetes Service (EKS)](https://aws.amazon.com/eks/)<br> • [Google Kubernetes Engine (GKE) Standard](https://cloud.google.com/kubernetes-engine/) <br><br> **Supported via Arc enabled Kubernetes** <sup>[1](#footnote1)</sup> <sup>[2](#footnote2)</sup><br>• [Azure Kubernetes Service hybrid](/azure/aks/hybrid/aks-hybrid-options-overview)<br> • [Kubernetes](https://kubernetes.io/docs/home/)<br> • [AKS Engine](https://github.com/Azure/aks-engine)<br> • [Azure Red Hat OpenShift](https://azure.microsoft.com/services/openshift/)<br> • [Red Hat OpenShift](https://www.openshift.com/learn/topics/kubernetes/) (version 4.6 or newer)<br> • [VMware Tanzu Kubernetes Grid](https://tanzu.vmware.com/kubernetes-grid)<br> • [Rancher Kubernetes Engine](https://rancher.com/docs/rke/latest/en/)<br><br />**Unsupported**<br /> • Private network clusters<br /> • GKE autopilot<br /> • GKE AuthorizedNetworksConfig |
+| Kubernetes distributions and configurations | **Supported**<br> • [Google Kubernetes Engine (GKE) Standard](https://cloud.google.com/kubernetes-engine/) <br><br> **Supported via Arc enabled Kubernetes** <sup>[1](#footnote1)</sup> <sup>[2](#footnote2)</sup><br> • [Kubernetes](https://kubernetes.io/docs/home/)<br><br />**Unsupported**<br /> • Private network clusters<br /> • GKE autopilot<br /> • GKE AuthorizedNetworksConfig |
<sup><a name="footnote1"></a>1</sup> Any Cloud Native Computing Foundation (CNCF) certified Kubernetes clusters should be supported, but only the specified clusters have been tested.
-<sup><a name="footnote2"></a>2</sup> To get [Microsoft Defender for Containers](../defender-for-cloud/defender-for-containers-introduction.md) protection for your environments, you'll need to onboard [Azure Arc-enabled Kubernetes](../azure-arc/kubernetes/overview.md) and enable Defender for Containers as an Arc extension.
+<sup><a name="footnote2"></a>2</sup> To get [Microsoft Defender for Containers](../defender-for-cloud/defender-for-containers-introduction.md) protection for your environments, you need to onboard [Azure Arc-enabled Kubernetes](../azure-arc/kubernetes/overview.md) and enable Defender for Containers as an Arc extension.
> [!NOTE]
-> For additional requirements for Kuberenetes workload protection, see [existing limitations](../governance/policy/concepts/policy-for-kubernetes.md#limitations).
+> For additional requirements for Kubernetes workload protection, see [existing limitations](../governance/policy/concepts/policy-for-kubernetes.md#limitations).
### Private link restrictions
Learn how to [use Azure Private Link to connect networks to Azure Monitor](../az
Outbound proxy without authentication and outbound proxy with basic authentication are supported. Outbound proxy that expects trusted certificates is currently not supported.
-## On-premises Arc-enabled machines
+## On-premises, Arc-enabled Kubernetes clusters
| Domain | Feature | Supported Resources | Linux release state | Windows release state | Agentless/Agent-based | Pricing tier |
|--|--| -- | -- | -- | -- | --|
| Compliance | Docker CIS | Arc enabled VMs | Preview | - | Log Analytics agent | Defender for Servers Plan 2 |
-| Vulnerability Assessment | Registry scan - [OS packages](#registries-and-images-support--on-premises) | ACR, Private ACR | GA | Preview | Agentless | Defender for Containers |
-| Vulnerability Assessment | Registry scan - [language specific packages](#registries-and-images-support--on-premises) | ACR, Private ACR | Preview | - | Agentless | Defender for Containers |
+| Vulnerability Assessment | Registry scan - [OS packages](#registries-and-images-supporton-premises) | ACR, Private ACR | GA | Preview | Agentless | Defender for Containers |
+| Vulnerability Assessment | Registry scan - [language specific packages](#registries-and-images-supporton-premises) | ACR, Private ACR | Preview | - | Agentless | Defender for Containers |
| Vulnerability Assessment | View vulnerabilities for running images | - | - | - | - | - |
| Hardening | Control plane recommendations | - | - | - | - | - |
| Hardening | Kubernetes data plane recommendations | Arc enabled K8s clusters | Preview | - | Azure Policy for Kubernetes | Defender for Containers |
| Runtime protection| Threat detection (control plane)| Arc enabled K8s clusters | Preview | Preview | Defender agent | Defender for Containers |
-| Runtime protection for [supported OS](#registries-and-images-support--on-premises) | Threat detection (workload)| Arc enabled K8s clusters | Preview | - | Defender agent | Defender for Containers |
+| Runtime protection for [supported OS](#registries-and-images-supporton-premises) | Threat detection (workload)| Arc enabled K8s clusters | Preview | - | Defender agent | Defender for Containers |
| Discovery and provisioning | Discovery of unprotected clusters | Arc enabled K8s clusters | Preview | - | Agentless | Free |
| Discovery and provisioning | Collection of control plane threat data | Arc enabled K8s clusters | Preview | Preview | Defender agent | Defender for Containers |
| Discovery and provisioning | Auto provisioning of Defender agent | Arc enabled K8s clusters | Preview | Preview | Agentless | Defender for Containers |
| Discovery and provisioning | Auto provisioning of Azure Policy for Kubernetes | Arc enabled K8s clusters | Preview | - | Agentless | Defender for Containers |
-### Registries and images support -on-premises
+### Registries and images support - on-premises
| Aspect | Details |
|--|--|
Outbound proxy without authentication and outbound proxy with basic authenticati
| Aspect | Details |
|--|--|
-| Kubernetes distributions and configurations | **Supported**<br> • Any Cloud Native Computing Foundation (CNCF) certified Kubernetes clusters<br>• [Azure Kubernetes Service (AKS)](../aks/intro-kubernetes.md) with [Kubernetes RBAC](../aks/concepts-identity.md#kubernetes-rbac) <br> • [Amazon Elastic Kubernetes Service (EKS)](https://aws.amazon.com/eks/)<br> • [Google Kubernetes Engine (GKE) Standard](https://cloud.google.com/kubernetes-engine/) <br><br> **Supported via Arc enabled Kubernetes** <sup>[1](#footnote1)</sup> <sup>[2](#footnote2)</sup><br>• [Azure Kubernetes Service hybrid](/azure/aks/hybrid/aks-hybrid-options-overview)<br> • [Kubernetes](https://kubernetes.io/docs/home/)<br> • [AKS Engine](https://github.com/Azure/aks-engine)<br> • [Azure Red Hat OpenShift](https://azure.microsoft.com/services/openshift/)<br> • [Red Hat OpenShift](https://www.openshift.com/learn/topics/kubernetes/) (version 4.6 or newer)<br> • [VMware Tanzu Kubernetes Grid](https://tanzu.vmware.com/kubernetes-grid)<br> • [Rancher Kubernetes Engine](https://rancher.com/docs/rke/latest/en/)<br> |
+| Kubernetes distributions and configurations | **Supported via Arc enabled Kubernetes** <sup>[1](#footnote1)</sup> <sup>[2](#footnote2)</sup><br>• [Azure Kubernetes Service hybrid](/azure/aks/hybrid/aks-hybrid-options-overview)<br> • [Kubernetes](https://kubernetes.io/docs/home/)<br> • [AKS Engine](https://github.com/Azure/aks-engine)<br> • [Azure Red Hat OpenShift](https://azure.microsoft.com/services/openshift/)<br> • [Red Hat OpenShift](https://www.openshift.com/learn/topics/kubernetes/) (version 4.6 or newer)<br> • [VMware Tanzu Kubernetes Grid](https://tanzu.vmware.com/kubernetes-grid)<br> • [Rancher Kubernetes Engine](https://rancher.com/docs/rke/latest/en/)<br> |
<sup><a name="footnote1"></a>1</sup> Any Cloud Native Computing Foundation (CNCF) certified Kubernetes clusters should be supported, but only the specified clusters have been tested.
-<sup><a name="footnote2"></a>2</sup> To get [Microsoft Defender for Containers](../defender-for-cloud/defender-for-containers-introduction.md) protection for your environments, you'll need to onboard [Azure Arc-enabled Kubernetes](../azure-arc/kubernetes/overview.md) and enable Defender for Containers as an Arc extension.
+<sup><a name="footnote2"></a>2</sup> To get [Microsoft Defender for Containers](../defender-for-cloud/defender-for-containers-introduction.md) protection for your environments, you need to onboard [Azure Arc-enabled Kubernetes](../azure-arc/kubernetes/overview.md) and enable Defender for Containers as an Arc extension.
> [!NOTE]
-> For additional requirements for Kuberenetes workload protection, see [existing limitations](../governance/policy/concepts/policy-for-kubernetes.md#limitations).
+> For additional requirements for Kubernetes workload protection, see [existing limitations](../governance/policy/concepts/policy-for-kubernetes.md#limitations).
#### Supported host operating systems
Defender for Containers relies on the **Defender agent** for several features. T
- Ubuntu 20.04
- Ubuntu 22.04
-Ensure your Kubernetes node is running on one of the verified supported operating systems. Clusters with different host operating systems, will only get partial coverage.
+Ensure your Kubernetes node is running on one of the verified supported operating systems. Clusters with different host operating systems only get partial coverage.
#### Network restrictions
Outbound proxy without authentication and outbound proxy with basic authenticati
- Learn how [Defender for Cloud collects data using the Log Analytics Agent](monitoring-components.md).
- Learn how [Defender for Cloud manages and safeguards data](data-security.md).
- Review the [platforms that support Defender for Cloud](security-center-os-coverage.md).
defender-for-cloud Support Matrix Defender For Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/support-matrix-defender-for-storage.md
description: Learn about the permissions required to enable Defender for Storage
Previously updated : 08/14/2023 Last updated : 08/21/2023 # Required permissions for enabling Defender for Storage and its features
-This article lists the permissions required to [enable Defender for Storage](../storage/common/azure-defender-storage-configure.md) and its features.
+This article lists the permissions required to [enable Defender for Storage](tutorial-enable-storage-plan.md) and its features.
Microsoft Defender for Storage is an Azure-native layer of security intelligence that detects potential threats to your storage accounts. It helps prevent the three major impacts on your data and workload: malicious file uploads, sensitive data exfiltration, and data corruption.
defender-for-cloud Troubleshooting Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/troubleshooting-guide.md
Common connector issues:
- Standards should be assigned on the security connector. To check, go to the **Environment settings** in the Defender for Cloud left menu, select the connector, and select **Settings**. There should be standards assigned. You can select the three dots to check if you have permissions to assign standards.
- Connector resource should be present in Azure Resource Graph (ARG). Use the following ARG query to check (a scripted version appears after this list): `resources | where ['type'] =~ "microsoft.security/securityconnectors"`
- Make sure that sending Kubernetes audit logs is enabled on the AWS or GCP connector so that you can get [threat detection alerts for the control plane](alerts-reference.md#alerts-k8scluster).
-- Make sure that Azure Arc and the Azure Policy Arc extension were installed successfully.
-- Make sure that agents are installed to your Elastic Kubernetes Service (EKS) and Google Kubernetes Engine (GKE) clusters. You can verify and install the agent with the following Defender for Cloud recommendations:
- - **Azure Arc-enabled Kubernetes clusters should have the Azure Policy extension installed**
- - **GKE clusters should have the Azure Policy extension installed**
+- Make sure that the Defender agent and the Azure Policy for Kubernetes Arc extensions were installed successfully on your Elastic Kubernetes Service (EKS) and Google Kubernetes Engine (GKE) clusters. You can verify and install them with the following Defender for Cloud recommendations:
  - **EKS clusters should have Microsoft Defender's extension for Azure Arc installed**
  - **GKE clusters should have Microsoft Defender's extension for Azure Arc installed**
+ - **Azure Arc-enabled Kubernetes clusters should have the Azure Policy extension installed**
+ - **GKE clusters should have the Azure Policy extension installed**
- If you're experiencing issues with deleting the AWS or GCP connector, check if you have a lock (in this case there might be an error in the Azure Activity log, hinting at the presence of a lock).
- Check that workloads exist in the AWS account or GCP project.
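If you'd rather run that connector check from a script, here's a hedged sketch using the `azure-mgmt-resourcegraph` package; the package choice is an assumption of this example, not something the article prescribes:

```python
# Sketch: confirm security connector resources exist via Azure Resource Graph.
from azure.identity import DefaultAzureCredential
from azure.mgmt.resourcegraph import ResourceGraphClient
from azure.mgmt.resourcegraph.models import QueryRequest

client = ResourceGraphClient(DefaultAzureCredential())
result = client.resources(QueryRequest(
    subscriptions=["<subscription-id>"],  # placeholder
    query='resources | where [\'type\'] =~ "microsoft.security/securityconnectors"',
))
print(f"{result.total_records} connector resource(s) found")
for row in result.data:  # default objectArray format: one dict per resource
    print(row["name"], row["id"])
```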
You should [check which account](https://app.vssps.visualstudio.com/profile/view
:::image type="content" source="./media/troubleshooting-guide/authorize-select-tenant.png" alt-text="Screenshot of the Azure DevOps profile page that is used to select an account.":::
-The first time you authorize the Microsoft Security application, you are given the ability to select an account. However, each time you login after that, the page defaults to the logged in account without giving you the chance to select an account.
+The first time you authorize the Microsoft Security application, you are given the ability to select an account. However, each time you log in after that, the page defaults to the logged-in account without giving you the chance to select an account.
**To change the default account**:
defender-for-cloud Tutorial Enable Container Aws https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/tutorial-enable-container-aws.md
Last updated 06/29/2023
-# Protect your Amazon Web Service (AWS) accounts containers with Defender for Containers
+# Protect your Amazon Web Service (AWS) containers with Defender for Containers
Defender for Containers in Microsoft Defender for Cloud is the cloud-native solution that is used to secure your containers so you can improve, monitor, and maintain the security of your clusters, containers, and their applications.
To protect your EKS clusters, you need to enable the Containers plan on the rele
> [!NOTE] > To enable or disable individual Defender for Containers capabilities, either globally or for specific resources, see [How to enable Microsoft Defender for Containers components](defender-for-containers-enable.md).
-## Deploy the Defender agent in Azure
+## Deploy the Defender agent in EKS clusters
Azure Arc-enabled Kubernetes, the Defender agent, and Azure Policy for Kubernetes should be installed and running on your EKS clusters. There's a dedicated Defender for Cloud recommendation that can be used to install these extensions (and Azure Arc if necessary):
defender-for-cloud Tutorial Enable Container Gcp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/tutorial-enable-container-gcp.md
Last updated 06/29/2023
-# Protect your Google Cloud Platform (GCP) project containers with Defender for Containers
+# Protect your Google Cloud Platform (GCP) containers with Defender for Containers
Defender for Containers in Microsoft Defender for Cloud is the cloud-native solution that is used to secure your containers so you can improve, monitor, and maintain the security of your clusters, containers, and their applications.
There are two dedicated Defender for Cloud recommendations you can use to instal
1. In the Defender for Cloud menu, select **Recommendations**.
-1. From Defender for Cloud's **Recommendations** page, search for one of the recommendations by name.
+1. From Defender for Cloud's **Recommendations** page, search for each of the preceding recommendations by name.
:::image type="content" source="media/tutorial-enable-containers-gcp/recommendation-search.png" alt-text="Screenshot showing how to search for the recommendation." lightbox="media/tutorial-enable-containers-gcp/recommendation-search-expanded.png":::
defender-for-cloud Tutorial Enable Containers Arc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/tutorial-enable-containers-arc.md
If you would prefer to [assign a custom workspace](defender-for-containers-enabl
> [!NOTE] > To enable or disable individual Defender for Containers capabilities, either globally or for specific resources, see [How to enable Microsoft Defender for Containers components](defender-for-containers-enable.md).
-## Deploy the Defender agent on Arc-enabled Kubernetes clusters that were onboarded to an Azure subscription
+## Deploy the Defender agent on Arc-enabled Kubernetes clusters
You can enable the Defender for Containers plan and deploy all of the relevant components in different ways. We walk you through the steps to accomplish this using the Azure portal. Learn how to [deploy the Defender agent](/azure/defender-for-cloud/defender-for-containers-enable?pivots=defender-for-container-arc&tabs=aks-deploy-portal%2Ck8s-deploy-asc%2Ck8s-verify-asc%2Ck8s-remove-arc%2Caks-removeprofile-api#deploy-the-defender-agent) with the REST API, the Azure CLI, or a Resource Manager template.
You can enable the Defender for Containers plan and deploy all of the relevant c
1. Navigate to the Recommendations page.
-1. Search for and select the `Azure Arc-enabled Kubernetes clusters should have Defender for Cloud's extension installed` recommendation.
+1. Search for and select the `Azure Arc-enabled Kubernetes clusters should have the Defender extension installed` recommendation.
:::image type="content" source="media/tutorial-enable-containers-azure/extension-recommendation.png" alt-text="Microsoft Defender for Cloud's recommendation for deploying the Defender agent for Azure Arc-enabled Kubernetes clusters." lightbox="media/tutorial-enable-containers-azure/extension-recommendation.png":::
defender-for-cloud Tutorial Enable Cspm Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/tutorial-enable-cspm-plan.md
Title: Protect your resources with Defender CSPM plan on your subscription description: Learn how to enable Defender CSPM on your Azure subscription for Microsoft Defender for Cloud. Previously updated : 06/27/2023 Last updated : 09/05/2023 # Protect your resources with Defender CSPM
You have the ability to enable the **Defender CSPM** plan, which offers extra pr
For availability and to learn more about the features offered by each plan, see the [Defender CSPM plan options](concept-cloud-security-posture-management.md#defender-cspm-plan-options).
-You can learn more about Defender for CSPM's pricing on [the pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/).
+You can learn more about Defender CSPM's pricing on [the pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/).
## Prerequisites
You can learn more about Defender for CSPM's pricing on [the pricing page](https
- In order to gain access to all of the features available from the CSPM plan, the plan must be enabled by the **Subscription Owner**.
-## Enable the Defender for CSPM plan
+## Enable the Defender CSPM plan
When you enable Defender for Cloud, you automatically receive the protections offered by the Foundational CSPM capabilities. In order to gain access to the other features provided by Defender CSPM, you need to enable the Defender CSPM plan on your subscription.
-**To enable the Defender for CSPM plan on your subscription**:
+**To enable the Defender CSPM plan on your subscription**:
1. Sign in to the [Azure portal](https://portal.azure.com).
When you enable Defender for Cloud, you automatically receive the protections of
1. Select **Save**.
-## Configure monitoring coverage
+## Enable the components of the Defender CSPM plan
-Once the Defender CSPM plan is enabled on your subscription, you have the ability to disable the agentless scanner or add exclusion tags to your subscription.
+Once the Defender CSPM plan is enabled on your subscription, you can enable its individual components:
-**To configure monitoring coverage**:
+- **Agentless scanning for machines**: Scans your machines for installed software and vulnerabilities without relying on agents or impacting machine performance. You can disable the agentless scanner or add exclusion tags to your subscription.
+
+- **Agentless discovery for Kubernetes**: API-based discovery of information about Kubernetes cluster architecture, workload objects, and setup. Required for Kubernetes inventory, identity and network exposure detection, and risk hunting as part of the cloud security explorer. This extension is also required for attack path analysis (Defender CSPM only).
+
+- **Container registries vulnerability assessments**: Provides vulnerability management for images stored in your container registries.
+
+- **Sensitive data discovery**: Automatically discovers managed cloud data resources containing sensitive data at scale. The feature is agentless, uses smart sampling to scan your data, and integrates with Microsoft Purview sensitive information types and labels.
+
+**To enable the components of the Defender CSPM plan**:
1. On the Defender plans page, select **Settings**. :::image type="content" source="media/tutorial-enable-cspm-plan/cspm-settings.png" alt-text="Screenshot of the Defender plans page that shows where to select the settings option." lightbox="media/tutorial-enable-cspm-plan/cspm-settings.png":::
-1. On the Settings and monitoring page, select **Edit configuration**.
+1. Select **On** for each component to enable it.
+
+1. (Optional) For agentless scanning for machines, select **Edit configuration**.
:::image type="content" source="media/tutorial-enable-cspm-plan/cspm-configuration.png" alt-text="Screenshot that shows where to select edit configuration." lightbox="media/tutorial-enable-cspm-plan/cspm-configuration.png":::
-1. Enter a tag name and tag value for any machines to be excluded from scans.
+ 1. Enter a tag name and tag value for any machines to be excluded from scans.
+
+ 1. Select **Apply**.
-1. Select **Apply**.
+1. Select **Continue**.
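For script-based enablement of the plan and its components, a hedged sketch against the Microsoft.Security pricings REST API; the api-version and extension names below are assumptions taken from the current REST reference, so verify them there before relying on this:

```python
# Sketch: enable Defender CSPM plus its components on a subscription.
import requests
from azure.identity import DefaultAzureCredential

subscription_id = "<subscription-id>"  # placeholder
url = (
    f"https://management.azure.com/subscriptions/{subscription_id}"
    "/providers/Microsoft.Security/pricings/CloudPosture"
)
token = DefaultAzureCredential().get_token("https://management.azure.com/.default")
body = {
    "properties": {
        "pricingTier": "Standard",
        "extensions": [  # assumed component names; confirm in the REST docs
            {"name": "AgentlessVmScanning", "isEnabled": "True"},
            {"name": "AgentlessDiscoveryForKubernetes", "isEnabled": "True"},
            {"name": "ContainerRegistriesVulnerabilityAssessments", "isEnabled": "True"},
            {"name": "SensitiveDataDiscovery", "isEnabled": "True"},
        ],
    }
}
resp = requests.put(
    url,
    params={"api-version": "2023-01-01"},
    headers={"Authorization": f"Bearer {token.token}"},
    json=body,
)
resp.raise_for_status()
```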
## Next steps
defender-for-cloud Tutorial Enable Databases Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/tutorial-enable-databases-plan.md
Database protection includes:
- [Microsoft Defender for open-source relational databases](defender-for-databases-introduction.md)
- [Microsoft Defender for Azure Cosmos DB](concept-defender-for-cosmos.md)
-Defender for Databases protects four database protection plans at their own cost. You can learn more about Defender for Clouds pricing on [the pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/).
+These four database protection plans are priced separately. Learn more about Defender for Cloud's pricing on [the pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/).
## Prerequisites
Defender for Databases protects four database protection plans at their own cost
When you enable database protection, you enable all four of the Defender plans and protect all of the supported databases on your subscription.
-**To enable Defender for App Service on your subscription**:
+**To enable Defender for Databases on your subscription**:
1. Sign in to the [Azure portal](https://portal.azure.com).
These plans protect all of the supported databases in your subscription.
- [Microsoft Defender for SQL servers on machines](defender-for-sql-usage.md)
- [Overview of Microsoft Defender for open-source relational databases](defender-for-databases-introduction.md)
- [Overview of Microsoft Defender for Azure Cosmos DB](concept-defender-for-cosmos.md)
defender-for-cloud Tutorial Enable Storage Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/tutorial-enable-storage-plan.md
Title: Protect your storage accounts with the Microsoft Defender for Storage plan description: Learn how to enable the Defender for Storage on your Azure subscription for Microsoft Defender for Cloud. Previously updated : 08/01/2023 Last updated : 08/21/2023 # Deploy Microsoft Defender for Storage
Defender for Storage in Microsoft Defender for Cloud is an Azure-native layer of
| Aspect | Details |
|||
|Release state: | General Availability (GA) |
-| Feature availability: | - Activity monitoring (security alerts) - General availability (GA)<br>- Malware Scanning - Preview, General Availability (GA) on September 1, 2023<br>- Sensitive data threat detection (Sensitive Data Discovery) – Preview<br>- Malware Scanning (add-on) - free during public preview**<br><br> Above pricing applies to commercial clouds. Visit the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud) to learn more. |
+| Feature availability: |- Activity monitoring (security alerts) – General Availability (GA)<br>- Malware Scanning – General Availability (GA)<br>- Sensitive data threat detection (Sensitive Data Discovery) – Preview<br><br>Visit the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud) to learn more. |
|Required roles and permissions: | For Malware Scanning and sensitive data threat detection at subscription and storage account levels, you need Owner roles (subscription owner/storage account owner) or specific roles with corresponding data actions. To enable Activity Monitoring, you need 'Security Admin' permissions. Read more about the required permissions. |
| Clouds: | :::image type="icon" source="./media/icons/yes-icon.png"::: Azure Commercial clouds*<br> :::image type="icon" source="./media/icons/no-icon.png"::: Azure Government (only activity monitoring support on the classic plan)<br>:::image type="icon" source="./media/icons/no-icon.png"::: Azure China 21Vianet<br>:::image type="icon" source="./media/icons/no-icon.png"::: Connected AWS accounts |
To enable and configure Microsoft Defender for Storage and ensure maximum protec
> [!TIP] > The Malware Scanning feature has advanced configurations to help security teams support different workflows and requirements. -- [Override subscription-level settings to configure specific storage accounts](/azure/storage/common/azure-defender-storage-configure?toc=%2Fazure%2Fdefender-for-cloud%2Ftoc.json&tabs=enable-subscription#override-defender-for-storage-subscription-level-settings) with custom configurations that differ from the settings configured at the subscription level.
+- [Override subscription-level settings to configure specific storage accounts](advanced-configurations-for-malware-scanning.md#override-defender-for-storage-subscription-level-settings) with custom configurations that differ from the settings configured at the subscription level.
-There are several ways to enable and configure Defender for Storage: [Azure built-in policy](/azure/storage/common/azure-defender-storage-configure?toc=%2Fazure%2Fdefender-for-cloud%2Ftoc.json&tabs=enable-subscription#enable-and-configure-at-scale-with-an-azure-built-in-policy) (recommended method), programmatically using Infrastructure as Code templates, including [Bicep](/azure/storage/common/azure-defender-storage-configure?toc=%2Fazure%2Fdefender-for-cloud%2Ftoc.json&tabs=enable-subscription#bicep-template) and [ARM template](/azure/storage/common/azure-defender-storage-configure?toc=%2Fazure%2Fdefender-for-cloud%2Ftoc.json&tabs=enable-subscription#arm-template), using the [Azure portal](/azure/storage/common/azure-defender-storage-configure?toc=%2Fazure%2Fdefender-for-cloud%2Ftoc.json&tabs=enable-subscription#azure-portal), or directly with [REST API](/azure/storage/common/azure-defender-storage-configure?toc=%2Fazure%2Fdefender-for-cloud%2Ftoc.json&tabs=enable-subscription#enable-and-configure-with-rest-api).
+There are several ways to enable and configure Defender for Storage: using the [Azure built-in policy](defender-for-storage-policy-enablement.md) (the recommended method), programmatically using Infrastructure as Code templates, including [Terraform](defender-for-storage-infrastructure-as-code-enablement.md?tabs=enable-subscription#terraform-template), [Bicep](defender-for-storage-infrastructure-as-code-enablement.md?tabs=enable-subscription#bicep-template), and [ARM](defender-for-storage-infrastructure-as-code-enablement.md?tabs=enable-subscription#azure-resource-manager-template) templates, using the [Azure portal](defender-for-storage-azure-portal-enablement.md?tabs=enable-subscription), or directly with the [REST API](defender-for-storage-rest-api-enablement.md?tabs=enable-subscription).
Enabling Defender for Storage via a policy is recommended because it facilitates enablement at scale and ensures that a consistent security policy is applied across all existing and future storage accounts within the defined scope (such as entire management groups). This keeps the storage accounts protected with Defender for Storage according to the organization's defined configuration.
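As one concrete route among those options, a hedged sketch of per-account enablement through the `defenderForStorageSettings` REST API; the api-version, property names, and cap value below are assumptions based on the linked REST API enablement article, not text from this one:

```python
# Sketch: enable Defender for Storage (with malware scanning and sensitive
# data threat detection) on a single storage account via REST.
import requests
from azure.identity import DefaultAzureCredential

account_id = (
    "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
    "/providers/Microsoft.Storage/storageAccounts/<account>"
)  # placeholder resource ID
url = (
    f"https://management.azure.com{account_id}"
    "/providers/Microsoft.Security/defenderForStorageSettings/current"
)
token = DefaultAzureCredential().get_token("https://management.azure.com/.default")
body = {
    "properties": {
        "isEnabled": True,
        "malwareScanning": {"onUpload": {"isEnabled": True, "capGBPerMonth": 5000}},
        "sensitiveDataDiscovery": {"isEnabled": True},
        "overrideSubscriptionLevelSettings": False,
    }
}
resp = requests.put(
    url,
    params={"api-version": "2022-12-01-preview"},
    headers={"Authorization": f"Bearer {token.token}"},
    json=body,
)
resp.raise_for_status()
```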
Enabling Defender for Storage via a policy is recommended because it facilitates
## Next steps

- Learn how to [enable and configure the Defender for Storage plan at scale with an Azure built-in policy](defender-for-storage-policy-enablement.md).
defender-for-cloud Upcoming Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/upcoming-changes.md
Title: Important upcoming changes description: Upcoming changes to Microsoft Defender for Cloud that you might need to be aware of and for which you might need to plan Previously updated : 08/14/2023 Last updated : 09/04/2023 # Important upcoming changes to Microsoft Defender for Cloud > [!IMPORTANT] > The information on this page relates to pre-release products or features, which may be substantially modified before they are commercially released, if ever. Microsoft makes no commitments or warranties, express or implied, with respect to the information provided here.+ [Defender for Servers](#defender-for-servers) On this page, you can learn about changes that are planned for Defender for Cloud. It describes planned modifications to the product that might affect things like your secure score or workflows.
If you're looking for the latest release notes, you can find them in the [What's
|--|--|
| [Replacing the "Key Vaults should have purge protection enabled" recommendation with combined recommendation "Key Vaults should have deletion protection enabled"](#replacing-the-key-vaults-should-have-purge-protection-enabled-recommendation-with-combined-recommendation-key-vaults-should-have-deletion-protection-enabled) | June 2023|
| [Changes to the Defender for DevOps recommendations environment source and resource ID](#changes-to-the-defender-for-devops-recommendations-environment-source-and-resource-id) | August 2023 |
-| [DevOps Resource Deduplication for Defender for DevOps](#devops-resource-deduplication-for-defender-for-devops) | August 2023 |
-| [Business model and pricing updates for Defender for Cloud plans](#business-model-and-pricing-updates-for-defender-for-cloud-plans) | August 2023 |
-| [Update naming format of Azure Center for Internet Security standards in regulatory compliance](#update-naming-format-of-azure-center-for-internet-security-standards-in-regulatory-compliance) | August 2023 |
| [Preview alerts for DNS servers to be deprecated](#preview-alerts-for-dns-servers-to-be-deprecated) | August 2023 |
| [Deprecate and replace recommendations App Service Client Certificates](#deprecate-and-replace-recommendations-app-service-client-certificates) | August 2023 |
| [Classic connectors for multicloud will be retired](#classic-connectors-for-multicloud-will-be-retired) | September 2023 |
+| [Replacing secret scanning recommendation results in Defender for DevOps from CredScan with GitHub Advanced Security for Azure DevOps powered secret scanning](#replacing-secret-scanning-recommendation-results-in-defender-for-devops-from-credscan-with-github-advanced-security-for-azure-devops-powered-secret-scanning) | September 2023 |
| [Change to the Log Analytics daily cap](#change-to-the-log-analytics-daily-cap) | September 2023 |
+| [Deprecating and replacing "Microsoft Defender for Storage plan should be enabled" recommendation](#deprecating-and-replacing-microsoft-defender-for-storage-plan-should-be-enabled-recommendation) | September 2023|
+| [DevOps Resource Deduplication for Defender for DevOps](#devops-resource-deduplication-for-defender-for-devops) | September 2023 |
| [Defender for Cloud plan and strategy for the Log Analytics agent deprecation](#defender-for-cloud-plan-and-strategy-for-the-log-analytics-agent-deprecation) | August 2024 |
+### Replacing secret scanning recommendation results in Defender for DevOps from CredScan with GitHub Advanced Security for Azure DevOps powered secret scanning
+
+**Estimated date for change: September 2023**
+
+Currently, the recommendations for secret scanning in Azure DevOps repositories by Defender for DevOps are based on the results of CredScan, which is manually run using the Microsoft Security DevOps Extension. However, this mechanism of running secret scanning is being deprecated in September 2023. Instead, you can see secret scanning results generated by GitHub Advanced Security for Azure DevOps (GHAzDO).
+
+As GHAzDO enters Public Preview, we're working towards unifying the secret scanning experience across both GitHub Advanced Security and GHAzDO. This unification enables you to receive detections across all branches and git history, plus secret leak protection via push protection for your repositories. This can all be done with a single button press, without requiring any pipeline runs.
+
+For more information about GHAzDO Secret Scanning, see [Set up secret scanning](/azure/devops/repos/security/configure-github-advanced-security-features#set-up-secret-scanning).
+ ### Classic connectors for multicloud will be retired **Estimated date for change: September 15, 2023**
The Azure Log Analytics agent, also known as the Microsoft Monitoring Agent (MMA
The following table explains how each capability will be provided after the Log Analytics agent retirement:
-| **Feature** | **Support** | **Alternative** |
+| **Feature** | **Deprecation plan** | **Alternative** |
| | | | | Defender for Endpoint/Defender for Cloud integration for down level machines (Windows Server 2012 R2, 2016) | Defender for Endpoint integration that uses the legacy Defender for Endpoint sensor and the Log Analytics agent (for Windows Server 2016 and Windows Server 2012 R2 machines) won't be supported after August 2024. | Enable the GA [unified agent](/microsoft-365/security/defender-endpoint/configure-server-endpoints#new-windows-server-2012-r2-and-2016-functionality-in-the-modern-unified-solution) integration to maintain support for machines, and receive the full extended feature set. For more information, see [Enable the Microsoft Defender for Endpoint integration](integration-defender-for-endpoint.md#windows). | | OS-level threat detection (agent-based) | OS-level threat detection based on the Log Analytics agent won't be available after August 2024. A full list of deprecated detections will be provided soon. | OS-level detections are provided by Defender for Endpoint integration and are already GA. |
-| Adaptive application controls | The [current GA version](adaptive-application-controls.md) based on the Log Analytics agent will be deprecated in August 2024, along with the preview version based on the Azure monitoring agent. | The next generation of this feature is currently under evaluation, further information will be provided soon. |
-| Endpoint protection discovery recommendations | The current [GA and preview recommendations](endpoint-protection-recommendations-technical.md) to install endpoint protection and fix health issues in the detected solutions will be deprecated in August 2024. | A new agentless version will be provided for discovery and configuration gaps by April 2024. As part of this upgrade, this feature will be provided as a component of Defender for Servers plan 2 and Defender for CSPM, and wonΓÇÖt cover on-premises or Arc-connected machines. |
-| Missing OS patches (system updates) | Recommendations to apply system updates based on the Log Analytics agent wonΓÇÖt be available after August 2024. | [New recommendations](release-notes.md#two-recommendations-related-to-missing-operating-system-os-updates-were-released-to-ga), based on integration with Update Management Center, are already in GA, with no agent dependencies. |
+| Adaptive application controls | The [current GA version](adaptive-application-controls.md) based on the Log Analytics agent will be deprecated in August 2024, along with the preview version based on the Azure Monitor agent. | The Adaptive application controls feature as it exists today will be discontinued, and new capabilities in the application control space (on top of what Defender for Endpoint and Windows Defender Application Control offer today) will be considered as part of the future Defender for Servers roadmap. |
+| Endpoint protection discovery recommendations | The current [GA recommendations](endpoint-protection-recommendations-technical.md) to install endpoint protection and fix health issues in the detected solutions will be deprecated in August 2024. The preview recommendations available today over the Azure Monitor agent (AMA) will be deprecated when the alternative is provided over the agentless disk scanning capability. | A new agentless version will be provided for discovery and configuration gaps by April 2024. As part of this upgrade, this feature will be provided as a component of Defender for Servers plan 2 and Defender CSPM, and won't cover on-premises or Arc-connected machines. |
+| Missing OS patches (system updates) | Recommendations to apply system updates based on the Log Analytics agent won't be available after August 2024. The preview version available today over the Guest Configuration agent will be deprecated when the alternative is provided over Microsoft Defender Vulnerability Management (MDVM) premium capabilities. Support of this feature for Docker Hub and VMSS will be deprecated in August 2024 and will be considered as part of the future Defender for Servers roadmap. | [New recommendations](release-notes.md#two-recommendations-related-to-missing-operating-system-os-updates-were-released-to-ga), based on integration with Update Management Center, are already in GA, with no agent dependencies. |
| OS misconfigurations (Azure Security Benchmark recommendations) | The [current GA version](apply-security-baseline.md) based on the Log Analytics agent won't be available after August 2024. The current preview version that uses the Guest Configuration agent will be deprecated as the Microsoft Defender Vulnerability Management integration becomes available. | A new version, based on integration with Premium Microsoft Defender Vulnerability Management, will be available early in 2024, as part of Defender for Servers plan 2. |
-| File integrity monitoring | The [current GA version](file-integrity-monitoring-enable-log-analytics.md) based on the Log Analytics agent won't be available after August 2024. | A new version of this feature, either agent-based or agentless, will be available by April 2024. |
+| File integrity monitoring | The [current GA version](file-integrity-monitoring-enable-log-analytics.md) based on the Log Analytics agent won't be available after August 2024. The FIM [Public Preview version](file-integrity-monitoring-enable-ama.md) based on the Azure Monitor Agent (AMA) will be deprecated when the alternative is provided over Defender for Endpoint. | A new version of this feature will be provided based on Microsoft Defender for Endpoint integration by April 2024. |
| The [500-MB benefit](faq-defender-for-servers.yml#is-the-500-mb-of-free-data-ingestion-allowance-applied-per-workspace-or-per-machine-) for data ingestion | The [500-MB benefit](faq-defender-for-servers.yml#is-the-500-mb-of-free-data-ingestion-allowance-applied-per-workspace-or-per-machine-) for data ingestion over the defined tables will remain supported via the AMA agent for the machines under subscriptions covered by Defender for Servers P2. Every machine is eligible for the benefit only once, even if both Log Analytics agent and Azure Monitor agent are installed on it. | | ##### Log analytics and Azure Monitoring agents autoprovisioning experience
Customers that rely on the `resourceID` to query DevOps recommendation data will
Queries will need to be updated to include both the old and new `resourceID` to show both, for example, total over time.
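For example, if you query assessment data through Azure Resource Graph, the query can match on both identifiers. The following is a minimal sketch using the Azure CLI `resource-graph` extension; the assessment key and resource IDs are placeholders, not actual Defender for DevOps values:

```
# Minimal sketch: query security assessments through Azure Resource Graph and
# match on both the old and new resource IDs. Requires the "resource-graph"
# Azure CLI extension; the key and IDs below are placeholders.
az graph query -q "
securityresources
| where type == 'microsoft.security/assessments'
| where name == '<assessment-key>'
| where id contains '<old-resource-id>' or id contains '<new-resource-id>'
| project id, status = properties.status.code
"
```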
-Additionally, customers that have created custom queries using the DevOps workbook will need to update the assessment keys for the impacted DevOps security recommendations.
-
-The recommendations page's experience will have minimal impact and deprecated assessments may continue to show for a maximum of 14 days if new scan results aren't submitted.
-
-### DevOps Resource Deduplication for Defender for DevOps
-
-**Estimated date for change: August 2023**
-
-To improve the Defender for DevOps user experience and enable further integration with Defender for Cloud's rich set of capabilities, Defender for DevOps will no longer support duplicate instances of a DevOps organization to be onboarded to an Azure tenant.
-
-If you don't have an instance of a DevOps organization onboarded more than once to your organization, no further action is required. If you do have more than one instance of a DevOps organization onboarded to your tenant, the subscription owner will be notified and will need to delete the DevOps Connector(s) they don't want to keep by navigating to Defender for Cloud Environment Settings.
-
-Customers will have until July 31, 2023 to resolve this issue. After this date, only the most recent DevOps Connector created where an instance of the DevOps organization exists will remain onboarded to Defender for DevOps. For example, if Organization Contoso exists in both connectorA and connectorB, and connectorB was created after connectorA, then connectorA will be removed from Defender for DevOps.
-
-### Business model and pricing updates for Defender for Cloud plans
-
-**Estimated date for change: August 2023**
-
-Microsoft Defender for Cloud has three plans that offer service layer protection:
--- Defender for Key Vault--- Defender for Azure Resource Manager--- Defender for DNS-
-These plans are transitioning to a new business model with different pricing and packaging to address customer feedback regarding spending predictability and simplifying the overall cost structure.
-
-**Business model and pricing changes summary**:
-
-Existing customers of Defender for Key-Vault, Defender for Azure Resource Manager, and Defender for DNS will keep their current business model and pricing unless they actively choose to switch to the new business model and price.
+Additionally, customers that have created custom queries using the DevOps workbook will need to update the assessment keys for the impacted DevOps security recommendations. The template DevOps workbook is planned to be updated to reflect the new recommendations, although during the actual migration, customers may experience some errors with the workbook.
-- **Defender for Azure Resource Manager**: This plan will have a fixed price per subscription per month. Customers will have the option to switch to the new business model by selecting the Defender for Azure Resource Manager new per-subscription model.-
-Existing customers of Defender for Key-Vault, Defender for Azure Resource Manager, and Defender for DNS will keep their current business model and pricing unless they actively choose to switch to the new business model and price.
--- **Defender for Azure Resource Manager**: This plan will have a fixed price per subscription per month. Customers will have the option to switch to the new business model by selecting the Defender for Azure Resource Manager new per-subscription model.--- **Defender for Key Vault**: This plan will have a fixed price per vault at per month with no overage charge. Customers will have the option to switch to the new business model by selecting the Defender for Key Vault new per-vault model--- **Defender for DNS**: Defender for Servers Plan 2 customers will gain access to Defender for DNS value as part of Defender for Servers Plan 2 at no extra cost. Customers that have both Defender for Server Plan 2 and Defender for DNS will no longer be charged for Defender for DNS. Defender for DNS will no longer be available as a standalone plan.-
-For more information on all of these plans, check out the [Defender for Cloud pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/?v=17.23h)
-
-### Update naming format of Azure Center for Internet Security standards in regulatory compliance
-
-**Estimated date for change: August 2023**
-
-The naming format of Azure CIS (Center for Internet Security) foundations benchmarks in the compliance dashboard is set for change from `[Cloud] CIS [version number]` to `CIS [Cloud] Foundations v[version number]`. Refer to the following table:
-
-| Current Name | New Name |
-|--|--|
-| Azure CIS 1.1.0 | CIS Azure Foundations v1.1.0 |
-| Azure CIS 1.3.0 | CIS Azure Foundations v1.3.0 |
-| Azure CIS 1.4.0 | CIS Azure Foundations v1.4.0 |
-| AWS CIS 1.2.0 | CIS AWS Foundations v1.2.0 |
-| AWS CIS 1.5.0 | CIS AWS Foundations v1.5.0 |
-| GCP CIS 1.1.0 | CIS GCP Foundations v1.1.0 |
-| GCP CIS 1.2.0 | CIS GCP Foundations v1.2.0 |
-
-Learn how to [improve your regulatory compliance](regulatory-compliance-dashboard.md).
+The experience on the recommendations page will be affected, and customers will need to query under "All recommendations" to view the new DevOps recommendations. For Azure DevOps, deprecated assessments may continue to show for a maximum of 14 days if new pipelines aren't run. Refer to [Defender for DevOps common questions](/azure/defender-for-cloud/faq-defender-for-devops#why-don-t-i-see-recommendations-for-findings-) for details.
### Preview alerts for DNS servers to be deprecated
At that time, all billable data types will be capped if the daily cap is met. Th
Learn more about [workspaces with Microsoft Defender for Cloud](../azure-monitor/logs/daily-cap.md#workspaces-with-microsoft-defender-for-cloud).
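If you manage workspace settings from the command line, the daily cap can be reviewed or adjusted with the Azure CLI. A minimal sketch with placeholder names; this only sets the cap and doesn't change the new capping behavior itself:

```
# Minimal sketch: set a Log Analytics daily cap of 5 GB.
# The resource group and workspace names are placeholders.
az monitor log-analytics workspace update \
  --resource-group <resource-group> \
  --workspace-name <workspace-name> \
  --quota 5
```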
+### Deprecating and replacing "Microsoft Defender for Storage plan should be enabled" recommendation
+
+**Estimated date for change: September 2023**
+
+The recommendation `Microsoft Defender for Storage plan should be enabled` will be deprecated on public clouds and will remain available on Azure Government cloud. This recommendation will be replaced by a new recommendation: `Microsoft Defender for Storage plan should be enabled with Malware Scanning and Sensitive Data Threat Detection`. This recommendation ensures that Defender for Storage is enabled at the subscription level with malware scanning and sensitive data threat detection capabilities.
+
+| Policy Name | Description | Policy Effect | Version |
+|--|--|--|--|
+| [Microsoft Defender for Storage should be enabled](https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f640d2586-54d2-465f-877f-9ffc1d2109f4) | Microsoft Defender for Storage detects potential threats to your storage accounts. It helps prevent the three major impacts on your data and workload: malicious file uploads, sensitive data exfiltration, and data corruption. The new Defender for Storage plan includes malware scanning and sensitive data threat detection. This plan also provides a predictable pricing structure (per storage account) for control over coverage and costs. | Audit, disabled | 1.0.0 |
+
+Learn more about [Microsoft Defender for Storage](defender-for-storage-introduction.md).
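The plan is enabled at the subscription level. As a rough sketch, you can check or set plan enablement with the Azure CLI; configuring the malware scanning and sensitive data threat detection extensions may still require the portal, policy, or the REST API:

```
# Minimal sketch: enable the Defender for Storage plan on the current
# subscription, then confirm its pricing tier.
az security pricing create --name StorageAccounts --tier standard
az security pricing show --name StorageAccounts
```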
+
+### DevOps Resource Deduplication for Defender for DevOps
+
+**Estimated date for change: September 2023**
+
+To improve the Defender for DevOps user experience and enable further integration with Defender for Cloud's rich set of capabilities, Defender for DevOps will no longer support duplicate instances of a DevOps organization to be onboarded to an Azure tenant.
+
+If you don't have an instance of a DevOps organization onboarded more than once to your organization, no further action is required. If you do have more than one instance of a DevOps organization onboarded to your tenant, the subscription owner will be notified and will need to delete the DevOps Connector(s) they don't want to keep by navigating to Defender for Cloud Environment Settings.
+
+Customers will have until September 30, 2023 to resolve this issue. After this date, only the most recent DevOps Connector created where an instance of the DevOps organization exists will remain onboarded to Defender for DevOps. For example, if Organization Contoso exists in both connectorA and connectorB, and connectorB was created after connectorA, then connectorA will be removed from Defender for DevOps.
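To find duplicates ahead of the cutoff, one option is to list the DevOps connector resources in each subscription and compare them. This is a sketch only; the resource type name below is an assumption based on the preview naming and may differ in your environment:

```
# Minimal sketch: list Defender for DevOps connectors in the current
# subscription so duplicates can be identified and removed.
# The resource type is an assumption (preview naming) and may differ.
az resource list \
  --resource-type "Microsoft.SecurityDevOps/azureDevOpsConnectors" \
  --output table
```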
+ ## Next steps For all recent changes to Defender for Cloud, see [What's new in Microsoft Defender for Cloud?](release-notes.md).
defender-for-cloud View And Remediate Vulnerabilities For Images Running On Aks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/view-and-remediate-vulnerabilities-for-images-running-on-aks.md
description: Learn how to view and remediate runtime threat findings
Previously updated : 07/11/2023 Last updated : 09/06/2023 # View and remediate vulnerabilities for images running on your AKS clusters Defender for Cloud gives its customers the ability to prioritize the remediation of vulnerabilities in images that are currently being used within their environment using the [Running container images should have vulnerability findings resolved](https://portal.azure.com/#view/Microsoft_Azure_Security_CloudNativeCompute/KubernetesRuntimeVisibilityRecommendationDetailsBlade/assessmentKey/41503391-efa5-47ee-9282-4eff6131462ce) recommendation.
-To provide findings for the recommendation, Defender CSPM uses [agentless container registry vulnerability assessment](agentless-container-registry-vulnerability-assessment.md) or the [Defender agent](tutorial-enable-containers-azure.md#deploy-the-defender-agent-in-azure) to create a full inventory of your Kubernetes clusters and their workloads and correlates that inventory with the vulnerability reports created for your registry images. The recommendation shows your running containers with the vulnerabilities associated with the images that are used by each container and remediation steps.
+To provide findings for the recommendation, Defender for Cloud uses [agentless discovery for Kubernetes](defender-for-containers-introduction.md#agentless-discovery-for-kubernetes) or the [Defender agent](tutorial-enable-containers-azure.md#deploy-the-defender-agent-in-azure) to create a full inventory of your Kubernetes clusters and their workloads and correlates that inventory with the vulnerability reports created for your registry images. The recommendation shows your running containers with the vulnerabilities associated with the images that are used by each container and remediation steps.
Defender for Cloud presents the findings and related information as recommendations, including related information such as remediation steps and relevant CVEs. You can view the identified vulnerabilities for one or more subscriptions, or for a specific resource.
defender-for-cloud Workflow Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/workflow-automation.md
This article describes the workflow automation feature of Microsoft Defender for
1. Select **(+) Add**.
- :::image type="content" source="media/workflow-automation/logic-apps-create-new.png" alt-text="Screenshot of the screen to create a logic app." lightbox="media/workflow-automation/logic-apps-create-new.png":::
+ :::image type="content" source="media/workflow-automation/logic-apps-create-new.png" alt-text="Screenshot of where to create a logic app." lightbox="media/workflow-automation/logic-apps-create-new.png":::
1. Fill out all required fields and select **Review + Create**.
defender-for-iot Cli Ot Sensor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/cli-ot-sensor.md
Use the following commands to restore data on your OT network sensor using the m
|User |Command |Full command syntax | |||| |**support** | `system restore` | No attributes |
-|**cyberx**, or **support** with [root access](references-work-with-defender-for-iot-cli-commands.md#access-the-system-root-as-a-support-user) | ` cyberx-xsense-system-restore` | No attributes |
+|**cyberx**, or **support** with [root access](references-work-with-defender-for-iot-cli-commands.md#access-the-system-root-as-a-support-user) | ` cyberx-xsense-system-restore` | `-f` `<filename>` |
For example, for the *support* user:
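Based on the table above, the commands might look like the following; the backup file name is a placeholder:

```
# As the support user (no attributes):
system restore

# As the cyberx user, or the support user with root access, restoring a
# specific backup file (placeholder name):
cyberx-xsense-system-restore -f <backup-file-name>
```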
defender-for-iot Concept Supported Protocols https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/concept-supported-protocols.md
Horizon provides:
:::image type="content" source="media/concept-supported-protocols/sdk-horizon.png" alt-text="Infographic that describes features provided by the Horizon SDK." border="false":::
-### Collaborate with the Horizon community
-
-Join our community to help lead the way towards digital transformation and industry-wide collaboration for protocol support!
-
-The Horizon ICS community shares knowledge between domain experts in critical infrastructures, building management, production lines, transportation systems, and leading industries. For example, our community shares tutorials, discussion forums, instructor-led training, educational white papers, and more.
-
-To join the Horizon community, email us at: [horizon-community@microsoft.com](mailto:horizon-community@microsoft.com)
-- ## Next steps For more information:
defender-for-iot How To Create Data Mining Queries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-create-data-mining-queries.md
Title: Create data mining queries and reports in Defender for IoT description: Learn how to create granular reports about network devices. Previously updated : 12/05/2022 Last updated : 08/28/2023
The following out-of-the-box reports are listed in the **Recommended** area, rea
| **Excluded CVEs** | Lists all detected devices that have CVEs that were manually excluded from the **CVEs** report. | | **Active Devices (Last 24 Hours)** | Lists all detected devices that have had active traffic within the last 24 hours. | | **Remote Access** | Lists all detected devices that communicate through remote session protocols. |
-| **CVEs** | Lists all detected devices with known vulnerabilities, along with CVSSv2 risk scores. <br> <br> Select **Edit** to delete and exclude specific CVEs from the report. <br><br> **Tip**: Delete CVEs to exclude them from the list to have your attack vector reports to reflect your network more accurately. |
+| **CVEs** | Lists all detected devices with known vulnerabilities, along with CVSS risk scores. <br> <br> Select **Edit** to delete and exclude specific CVEs from the report. <br><br> **Tip**: Delete CVEs to exclude them from the list so that your attack vector reports reflect your network more accurately. |
| **Nonactive Devices (Last 7 Days)** | Lists all detected devices that haven't communicated for the past seven days. | Select a report to view today's data. Use the :::image type="icon" source="media/how-to-generate-reports/refresh-icon.png" border="false"::: **Refresh**, :::image type="icon" source="media/how-to-generate-reports/expand-all-icon.png" border="false"::: **Expand all**, and :::image type="icon" source="media/how-to-generate-reports/collapse-all-icon.png" border="false"::: **Collapse all** options to update and change your report views.
defender-for-iot How To Investigate All Enterprise Sensor Detections In A Device Inventory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-investigate-all-enterprise-sensor-detections-in-a-device-inventory.md
To export device inventory data, select the **Import/Export file** :::image type
Save the exported file locally. > [!NOTE]
-> In the exported file, date values are based on the region settings for the machine you're using to access the OT sensor. We recommend exporting data only from a machine with the same region settings as the sensor that detected your data. For more information, see [Synchronize time zones on an OT sensor](how-to-manage-individual-sensors.md#synchronize-time-zones-on-an-ot-sensor).
+> The date format on the on-premises management console is always set to DD/MM/YYYY.
+> We recommend that you use the same date format on any machine where you'll be opening exported inventory files to ensure that dates in the exported inventory files are shown correctly.
> ## Add to and enhance device inventory data
defender-for-iot How To Work With The Sensor Device Map https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-work-with-the-sensor-device-map.md
The following table lists available responses for each notification, and when we
| **No subnets configured** | No subnets are currently configured in your network. <br /><br /> We recommend configuring subnets for the ability to differentiate between OT and IT devices on the map. | - **Open Subnet Configuration** and [configure subnets](how-to-manage-individual-sensors.md#update-the-ot-sensor-network-configuration). <br />- **Dismiss**: Remove the notification. |**Dismiss** | | **Operating system changes** | One or more new operating systems have been associated with the device. | - Select the name of the new OS that you want to associate with the device.<br /> - **Dismiss**: Remove the notification. | Set with new operating system only if not already configured manually. <br><br>If the operating system has already been configured: **Dismiss**. | | **New subnets** | New subnets were discovered. |- **Learn**: Automatically add the subnet.<br />- **Open Subnet Configuration**: Add all missing subnet information.<br />- **Dismiss**: <br />Remove the notification. |**Dismiss** |
-| **Device type changes** | A new device type has been associated with the device. | - **Set as {…}**: Associate the new type with the device.<br />- **Dismiss**: Remove the notification. |No automatic handling|
## View a device map for a specific zone
defender-for-iot How To Work With Threat Intelligence Packages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-work-with-threat-intelligence-packages.md
Title: Maintain threat intelligence packages on OT network sensors - Microsoft Defender for IoT description: Learn how to maintain threat intelligence packages on OT network sensors. Previously updated : 02/09/2023 Last updated : 08/28/2023
Microsoft Defender for IoT regularly delivers threat intelligence package update
Threat intelligence packages contain signatures, such as malware signatures, CVEs, and other security content.
+CVE scores shown are aligned with the [National Vulnerability Database (NVD)](https://nvd.nist.gov/vuln-metrics/cvss), and CVSS v3 scores are shown where relevant. If no relevant CVSS v3 score is available, the CVSS v2 score is shown instead.
+ > [!TIP] > We recommend ensuring that your OT network sensors always have the latest threat intelligence package installed so that you always have the full context of a threat before an environment is affected, and increased relevancy, accuracy, and actionable recommendations. >
defender-for-iot Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/release-notes.md
Title: OT monitoring software versions - Microsoft Defender for IoT description: This article lists Microsoft Defender for IoT on-premises OT monitoring software versions, including release and support dates and highlights for new features. Previously updated : 07/03/2023 Last updated : 08/09/2023 # OT monitoring software versions
This version includes the following updates and enhancements:
- [UI enhancements for downloading PCAP files from the sensor](how-to-view-alerts.md#access-alert-pcap-data) - [*cyberx* and *cyberx_host* users aren't enabled by default](roles-on-premises.md#default-privileged-on-premises-users)
+> [!NOTE]
+> Due to internal improvements to the OT sensor's device inventory, column edits made to your device inventory aren't retained after updating to version 23.1.2. If you'd previously edited the columns shown in your device inventory, you'll need to make those same edits again after updating your sensor.
+>
+ ## Versions 22.3.x ### 22.3.10
defender-for-iot Resources Manage Proprietary Protocols https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/resources-manage-proprietary-protocols.md
If your devices use proprietary protocols not supported [out-of-the-box](concept
- A tool that exponentially expands OT visibility and control, without the need to upgrade to new versions. - The security of allowing proprietary development without divulging sensitive information.
-For more information, contact [ms-horizon-support@microsoft.com](mailto:ms-horizon-support@microsoft.com).
- ## Prerequisites Before performing the steps described in this article, make sure that you have:
defender-for-iot Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/whats-new.md
Title: What's new in Microsoft Defender for IoT description: This article describes features available in Microsoft Defender for IoT, across both OT and Enterprise IoT networks, and both on-premises and in the Azure portal. Previously updated : 08/09/2023 Last updated : 08/28/2023
Features released earlier than nine months ago are described in the [What's new
> Noted features listed below are in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include other legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. >
+## August 2023
+
+|Service area |Updates |
+|||
+| **OT networks** | [Defender for IoT's CVEs align to CVSS v3](#defender-for-iots-cves-align-to-cvss-v3) |
+
+### Defender for IoT's CVEs align to CVSS v3
+
+CVE scores shown in the OT sensor and on the Azure portal are aligned with the [National Vulnerability Database (NVD)](https://nvd.nist.gov/vuln-metrics/cvss), and starting with Defender for IoT's August threat intelligence update, CVSS v3 scores are shown where relevant. If no relevant CVSS v3 score is available, the CVSS v2 score is shown instead.
+
+View CVE data from the Azure portal on a device details page's **Vulnerabilities** tab in Defender for IoT, with the resources available in the Microsoft Sentinel solution, or in a data mining query on your OT sensor. For more information, see:
+
+- [Maintain threat intelligence packages on OT network sensors](how-to-work-with-threat-intelligence-packages.md)
+- [View full device details](how-to-manage-device-inventory-for-organizations.md#view-full-device-details)
+- [Tutorial: Investigate and detect threats for IoT devices with Microsoft Sentinel](iot-advanced-threat-monitoring.md)
+- [Create data mining queries](how-to-create-data-mining-queries.md)
+ ## July 2023 |Service area |Updates |
deployment-environments Concept Environments Reliability Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/concept-environments-reliability-availability.md
+
+ Title: Reliability and availability in Azure Deployment Environments
+description: Learn how Azure Deployment Environments supports disaster recovery. Understand reliability and availability within a single region and across regions.
++++ Last updated : 08/25/2023+++
+# Reliability in Azure Deployment Environments
+
+This article describes reliability support in Azure Deployment Environments, and covers intra-region resiliency with availability zones and inter-region resiliency with disaster recovery. For a more detailed overview of reliability in Azure, see [Azure reliability](/azure/well-architected/resiliency/overview).
+
+## Availability zone support
+
+Azure availability zones consist of at least three physically separate groups of datacenters within each Azure region. Datacenters within each zone are equipped with independent power, cooling, and networking infrastructure. Availability zones are designed to ensure high availability: if a local zone fails, services fail over to the other availability zones to provide continuity in service with minimal interruption. Failures can range from software and hardware failures to events such as earthquakes, floods, and fires. Tolerance to failures is achieved with redundancy and logical isolation of Azure services.
+
+Availability zone support for all resources in Azure Deployment Environments is enabled automatically. There's no action for you to take.
+
+Regions supported:
+- West US 2
+- South Central US
+- UK South
+- West Europe
+- East US
+- Australia East
+- East US 2
+- North Europe
+- West US 3
+- Japan East
+- East Asia
+- Central India
+- Korea Central
+- Canada Central
+
+For more detailed information on availability zones in Azure, see [Regions and availability zones](/azure/reliability/availability-zones-overview).
+
+## Disaster recovery: cross-region failover
+
+Azure provides protection from regional or large geography disasters by making use of another region if there's a region-wide disaster.
+
+You can replicate the following Deployment Environments resources in an alternate region to prevent data loss if a cross-region failover occurs:
+
+- Dev center
+- Project
+- Catalog
+- Catalog items
+- Dev center environment type
+- Project environment type
+- Environments
+++
+For more information on Azure disaster recovery architecture, see [Azure to Azure disaster recovery architecture](/azure/site-recovery/azure-to-azure-architecture).
+
+## Related content
+
+- To learn more about how Azure supports reliability, see [Azure reliability](/azure/reliability).
+- To learn more about Deployment Environments resources, see [Azure Deployment Environments key concepts](./concept-environments-key-concepts.md).
+- To get started with Deployment Environments, see [Quickstart: Create and configure the Azure Deployment Environments dev center](./quickstart-create-and-configure-devcenter.md).
deployment-environments Tutorial Deploy Environments In Cicd Github https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/tutorial-deploy-environments-in-cicd-github.md
You use a workflow that features three branches: main, dev, and test.
This workflow is a small example for the purposes of this tutorial. Real world workflows may be more complex.
+Before beginning this tutorial, you can familiarize yourself with Deployment Environments resources and concepts by reviewing [Key concepts for Azure Deployment Environments](/azure/deployment-environments/concept-environments-key-concepts).
+ In this tutorial, you learn how to: > [!div class="checklist"]
dev-box How To Configure Azure Compute Gallery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-configure-azure-compute-gallery.md
The image version must meet the following requirements:
- Generalized VM image. - You must create the image using these three sysprep options: `/generalize /oobe /mode:vm`. </br> For more information, see: [Sysprep Command-Line Options](/windows-hardware/manufacture/desktop/sysprep-command-line-options?view=windows-11#modevm&preserve-view=true).
- - To speed up the Dev Box creation time, you can disable the reserved storage state feature in the image by using the following command: `DISM.exe /Online /Set-ReservedStorageState /State:Disabled`. </br>
- For more information, see: [DISM Storage reserve command-line options](/windows-hardware/manufacture/desktop/dism-storage-reserve?view=windows-11#set-reservedstoragestate&preserve-view=true).
+ - To speed up the Dev Box creation time:
+ - Disable the reserved storage state feature in the image by using the following command: `DISM.exe /Online /Set-ReservedStorageState /State:Disabled`. </br>
+ For more information, see: [DISM Storage reserve command-line options](/windows-hardware/manufacture/desktop/dism-storage-reserve?view=windows-11#set-reservedstoragestate&preserve-view=true).
+ - Run `defrag` and `chkdsk` during image creation, and wait for them to finish. Then disable the `chkdsk` and `defrag` scheduled tasks. (A command sketch follows this list.)
- Single-session virtual machine (VM) images. (Multiple-session VM images aren't supported.) - No recovery partition. - For information about how to remove a recovery partition, see the [Windows Server command: delete partition](/windows-server/administration/windows-commands/delete-partition).
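For reference, the image-preparation commands called out above might be scripted as follows. This is a sketch; the scheduled task paths are assumptions based on default Windows task names and may differ on your image:

```
REM Minimal sketch of the image-preparation steps described above.
REM Run inside the image before generalizing it.

REM Disable the reserved storage state feature.
DISM.exe /Online /Set-ReservedStorageState /State:Disabled

REM Run defrag and chkdsk, and wait for them to finish.
defrag C: /U /V
chkdsk C: /scan

REM Disable the chkdsk and defrag scheduled tasks (task paths are assumed
REM defaults and may differ).
schtasks /Change /TN "\Microsoft\Windows\Defrag\ScheduledDefrag" /Disable
schtasks /Change /TN "\Microsoft\Windows\Chkdsk\ProactiveScan" /Disable

REM Generalize the image with the required sysprep options.
C:\Windows\System32\Sysprep\sysprep.exe /generalize /oobe /mode:vm
```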
devtest-labs Add Artifact Repository https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/add-artifact-repository.md
description: Learn how to add a private artifact repository to your lab to store
Previously updated : 01/11/2022 Last updated : 09/30/2023
devtest-labs Add Artifact Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/add-artifact-vm.md
description: Learn how to add an artifact to a virtual machine in a lab in Azure
Previously updated : 06/30/2023 Last updated : 09/30/2023
devtest-labs Configure Lab Remote Desktop Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/configure-lab-remote-desktop-gateway.md
description: Learn how to configure a remote desktop gateway in Azure DevTest La
Previously updated : 05/30/2023 Last updated : 09/30/2023
devtest-labs Connect Virtual Machine Through Browser https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/connect-virtual-machine-through-browser.md
description: Learn how to connect to lab virtual machines (VMs) through a browse
Previously updated : 06/14/2023 Last updated : 09/30/2023
devtest-labs Create Lab Windows Vm Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/create-lab-windows-vm-bicep.md
Previously updated : 03/22/2022 Last updated : 09/30/2023 # Quickstart: Use Bicep to create a lab in DevTest Labs
devtest-labs Create Lab Windows Vm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/create-lab-windows-vm-template.md
Previously updated : 01/03/2022 Last updated : 09/30/2023 # Quickstart: Use an ARM template to create a lab in DevTest Labs
devtest-labs Deliver Proof Concept https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/deliver-proof-concept.md
description: Use a proof of concept or pilot deployment to investigate incorpora
Previously updated : 03/22/2022 Last updated : 09/30/2023
devtest-labs Deploy Nested Template Environments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/deploy-nested-template-environments.md
Previously updated : 01/26/2022 Last updated : 09/30/2023 # Deploy DevTest Labs environments by using nested templates
devtest-labs Devtest Lab Add Claimable Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-add-claimable-vm.md
description: Learn how to use the Azure portal to add a claimable virtual machin
Previously updated : 12/21/2022 Last updated : 09/30/2023
devtest-labs Devtest Lab Add Devtest User https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-add-devtest-user.md
description: Learn about the Azure DevTest Labs Owner, Contributor, and DevTest
Previously updated : 01/26/2022 Last updated : 09/30/2023
devtest-labs Devtest Lab Add Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-add-vm.md
description: Learn how to use the Azure portal to add a virtual machine (VM) to
Previously updated : 05/22/2023 Last updated : 09/30/2023
devtest-labs Devtest Lab Artifact Author https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-artifact-author.md
description: Learn how to create and use artifacts to deploy and set up applicat
Previously updated : 01/11/2022 Last updated : 09/30/2023
devtest-labs Devtest Lab Attach Detach Data Disk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-attach-detach-data-disk.md
description: Learn how to attach or detach a data disk for a lab virtual machine
Previously updated : 04/24/2023 Last updated : 09/30/2023
devtest-labs Devtest Lab Auto Shutdown https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-auto-shutdown.md
description: Learn how to set auto shutdown schedules and policies for Azure Dev
Previously updated : 04/24/2023 Last updated : 09/30/2023
devtest-labs Devtest Lab Auto Startup Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-auto-startup-vm.md
description: Learn how to configure auto-start settings for VMs in a lab. This s
Previously updated : 04/24/2023 Last updated : 09/30/2023
devtest-labs Devtest Lab Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-concepts.md
description: Learn definitions of some basic DevTest Labs concepts related to la
Previously updated : 06/14/2023 Last updated : 09/30/2023
devtest-labs Devtest Lab Configure Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-configure-vnet.md
description: Learn how to configure an existing virtual network and subnet to us
Previously updated : 02/15/2022 Last updated : 09/30/2023
devtest-labs Devtest Lab Create Custom Image From Vhd Using Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-create-custom-image-from-vhd-using-powershell.md
description: Automate creation of a custom image in Azure DevTest Labs from a VH
Previously updated : 12/28/2022 Last updated : 09/30/2023
devtest-labs Devtest Lab Create Custom Image From Vm Using Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-create-custom-image-from-vm-using-portal.md
description: Learn how to create a custom image from a provisioned virtual machi
Previously updated : 02/15/2022 Last updated : 09/30/2023
devtest-labs Devtest Lab Create Environment From Arm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-create-environment-from-arm.md
description: Learn how to create multi-VM, platform-as-a-service (PaaS) environm
Previously updated : 12/21/2022 Last updated : 09/30/2023
devtest-labs Devtest Lab Create Lab https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-create-lab.md
description: Learn how to quickly create a lab in Azure DevTest Labs by using th
Previously updated : 05/22/2023 Last updated : 09/30/2023
devtest-labs Devtest Lab Create Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-create-template.md
description: Use a VHD file to create an Azure DevTest Labs virtual machine cust
Previously updated : 01/04/2022 Last updated : 09/30/2023
devtest-labs Devtest Lab Delete Lab Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-delete-lab-vm.md
description: Learn how to delete a virtual machine from a lab or delete a lab in
Previously updated : 03/14/2022 Last updated : 09/30/2023
devtest-labs Devtest Lab Dev Ops https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-dev-ops.md
description: Learn how to use Azure DevTest Labs with continuous integration (CI
Previously updated : 12/28/2022 Last updated : 09/30/2023
devtest-labs Devtest Lab Guidance Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-guidance-get-started.md
description: This article describes primary Azure DevTest Labs scenarios, and ho
Previously updated : 05/12/2023 Last updated : 09/30/2023
devtest-labs Devtest Lab Mandatory Artifacts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-mandatory-artifacts.md
description: Learn how to specify mandatory artifacts to install at creation of
Previously updated : 01/12/2022 Last updated : 09/30/2023
devtest-labs Devtest Lab Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-overview.md
description: Learn how DevTest Labs makes it easy to create, manage, and monitor
Previously updated : 04/20/2023 Last updated : 09/30/2023
devtest-labs Devtest Lab Reference Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-reference-architecture.md
description: See a reference architecture and considerations for Azure DevTest L
Previously updated : 03/14/2022 Last updated : 09/30/2023
devtest-labs Devtest Lab Resize Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-resize-vm.md
Previously updated : 06/30/2023 Last updated : 09/30/2023 # Resize a lab VM in Azure DevTest Labs
devtest-labs Devtest Lab Set Lab Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-set-lab-policy.md
description: Learn how to define lab policies such as VM sizes, maximum VMs per
Previously updated : 02/14/2022 Last updated : 09/30/2023
devtest-labs Devtest Lab Troubleshoot Apply Artifacts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-troubleshoot-apply-artifacts.md
description: Troubleshoot issues with applying artifacts on an Azure DevTest Lab
Previously updated : 06/15/2023 Last updated : 09/30/2023
devtest-labs Devtest Lab Upload Vhd Using Azcopy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-upload-vhd-using-azcopy.md
description: Walk through the steps to use the AzCopy command-line utility to up
Previously updated : 12/22/2022 Last updated : 09/30/2023
devtest-labs Devtest Lab Upload Vhd Using Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-upload-vhd-using-powershell.md
description: Walk through the steps to use PowerShell to upload a VHD file to a
Previously updated : 12/22/2022 Last updated : 09/30/2023
devtest-labs Devtest Lab Upload Vhd Using Storage Explorer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-upload-vhd-using-storage-explorer.md
description: Walk through the steps to upload a VHD file to a DevTest Labs lab s
Previously updated : 12/23/2022 Last updated : 09/30/2023
devtest-labs Devtest Lab Use Arm And Powershell For Lab Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-use-arm-and-powershell-for-lab-resources.md
Previously updated : 01/11/2022 Last updated : 09/30/2023 # Azure Resource Manager (ARM) templates in Azure DevTest Labs
devtest-labs Devtest Lab Use Resource Manager Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-use-resource-manager-template.md
description: Learn how to view, edit, save, and store ARM virtual machine (VM) t
Previously updated : 06/09/2023 Last updated : 09/30/2023
devtest-labs Devtest Lab Vm Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-vm-powershell.md
description: Learn how to use Azure PowerShell to create and manage virtual mach
Previously updated : 03/17/2022 Last updated : 09/30/2023
devtest-labs Enable Browser Connection Lab Virtual Machines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/enable-browser-connection-lab-virtual-machines.md
description: Integrate Azure Bastion with DevTest Labs to enable accessing lab v
Previously updated : 12/20/2022 Last updated : 09/30/2023
devtest-labs Encrypt Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/encrypt-storage.md
description: Learn about DevTest Labs storage accounts, encryption, customer-man
Previously updated : 03/15/2022 Last updated : 09/30/2023
devtest-labs How To Move Labs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/how-to-move-labs.md
Previously updated : 03/03/2022 Last updated : 09/30/2023
-# Move DevTest Labs to another region
+# Move DevTest Labs and schedules to another region
-To move a lab, create a copy of an existing lab in another region.
+You can move DevTest Labs and their associated schedules to another region. To move a lab, you create a copy of an existing lab in another region. After you've moved your lab and you have a virtual machine (VM) in the target region, you can move your lab schedules.
In this article, you learn how to: > [!div class="checklist"]
In this article, you learn how to:
> - Deploy the template to create the new lab in the target region. > - Configure the new lab. > - Move data to the new lab.
+> - Move schedules to the new lab.
> - Delete the resources in the source region. ## Prerequisites
In this article, you learn how to:
- the Stored Secrets - PAT tokens of the private Artifact Repos to move the private repos together with the lab.
-## Prepare to move
+- When moving a lab schedule, ensure a Compute VM exists in the target region.
-To get started, export and modify a Resource Manager template.
+## Move a lab
+
+The following section describes how to create and customize an ARM template to move a lab from one region to another.
+
+You can move a schedule without moving a lab if you have a VM in the target region. If you want to move a schedule without moving a lab, see [Move a schedule](#move-a-schedule).
+
+### Prepare to move a lab
-### Prepare your Virtual Network
+When you move a lab, there are some steps you must take to prepare for the move. You need to:
+
+- Prepare the virtual network
+- Export an ARM template of the lab
+- Modify the template
+- Deploy the template to move the lab
+- Configure the new lab
+- Swap the OS disks of the Compute VMs under the new VMs
+- Clean up the original lab
+
+#### Prepare the Virtual Network
+
+To get started, prepare the virtual network for the lab in the target region.
1. Sign in to the [Azure portal](https://portal.azure.com). 1. If you don't have [Resource Group](../azure-resource-manager/management/manage-resource-groups-portal.md#create-resource-groups) under the target region, create one now.
-1. Move your current Virtual Network to the new region and resource group using the steps included in the article, "[Move an Azure virtual network to another region](../virtual-network/move-across-regions-vnet-portal.md)".
+1. Move the current Virtual Network to the new region and resource group using the steps included in the article, "[Move an Azure virtual network to another region](../virtual-network/move-across-regions-vnet-portal.md)".
Alternately, you can create a new virtual network, if you don't have to keep the original one.
-### Export an ARM template of your lab.
+#### Export an ARM template of the lab
-Next, you export a JSON template contains settings that describe your lab.
+Next, you export a JSON template that contains the settings that describe the lab.
To export a template by using Azure portal:
To export a template by using Azure portal:
This zip file contains the .json files that comprise the template and scripts to deploy the template. It contains all the resources under your lab listed in ARM template format, except for the Shared Image Gallery resources.
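If you prefer to script the export, the Azure CLI offers a comparable path at resource-group scope. A minimal sketch; unlike the portal's lab-scoped export, this command exports everything in the resource group:

```
# Minimal sketch: export an ARM template for the resource group that
# contains the lab. The resource group name is a placeholder.
az group export --name <source-resource-group> > template.json
```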
-### Modify the template
+#### Modify the template
In order for the ARM template to deploy correctly in the new region, you must change a few parts of the template.
To update the template by using Azure portal:
1. In **Search the Marketplace**, type **template deployment**, and then press **ENTER**. 1. Select **Template deployment**.-
- ![Azure Resource Manager templates library](../storage/common/media/storage-account-move/azure-resource-manager-template-library.png)
+
+ :::image type="content" source="./media/how-to-move-labs/azure-resource-manager-template-library.png" alt-text="Screenshot that shows the Azure Marketplace with template deployment selected.":::
1. Select **Create**.
To update the template by using Azure portal:
} ```
- 1. Find the `"type": "microsoft.devtestlab/labs/virtualnetworks"` resource. If you created a new virtual network earlier in these steps, you must add the actual subnet name in `/subnets/[SUBNET_NAME]`. If you chose to move the Vnet to a new region, you should skip this step.
+ 1. Find the `"type": "microsoft.devtestlab/labs/virtualnetworks"` resource. If you created a new virtual network earlier in these steps, you must add the actual subnet name in `/subnets/[SUBNET_NAME]`. If you chose to move the virtual network to a new region, you should skip this step.
1. Find the `"type": "microsoft.devtestlab/labs/virtualmachines"` resource.
To update the template by using Azure portal:
1. In the editor, save the template.
-## Deploy to move
+### Deploy the template to move the lab
Deploy the template to create a new lab in the target region.
Deploy the template to create a new lab in the target region.
1. Select the bell icon (notifications) at the top of the screen to see the deployment status. You'll see **Deployment in progress**. Wait until the deployment is completed.
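Deployment can also be scripted. A minimal sketch, assuming you saved the modified template and a matching parameters file locally:

```
# Minimal sketch: deploy the modified template to the target resource group.
az deployment group create \
  --resource-group <target-resource-group> \
  --template-file template.json \
  --parameters @parameters.json
```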
-### Configure the new lab
+#### Configure the new lab
While most lab resources are replicated in the new region by the ARM template, a few settings still need to be configured manually. 1. Add the compute gallery back to the lab if the original lab had any. 1. Add the "Virtual machines per user", "Virtual machines per lab", and "Allowed Virtual machine sizes" policies back to the moved lab.
-### Swap the OS disks of the Compute VMs under the new VMs.
+#### Swap the OS disks of the Compute VMs under the new VMs
Note the VMs under the new Lab have the same specs as the ones under the old Lab. The only difference is their OS Disks.
Note the VMs under the new Lab have the same specs as the ones under the old Lab
1. Swap the OS disk of the Compute VM under the new lab with the new disk. To learn how, see the article, "[Change the OS disk used by an Azure VM using PowerShell](../virtual-machines/windows/os-disk-swap.md)".
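The linked article uses PowerShell. If you work with the Azure CLI instead, the swap might look like the following sketch; the VM must be deallocated first, and all names are placeholders:

```
# Minimal sketch: swap the OS disk of a Compute VM by using the Azure CLI.
az vm deallocate --resource-group <target-resource-group> --name <vm-name>
az vm update \
  --resource-group <target-resource-group> \
  --name <vm-name> \
  --os-disk <new-os-disk-name-or-id>
az vm start --resource-group <target-resource-group> --name <vm-name>
```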
+## Move a schedule
+
+There are two ways to move a schedule:
+
+ - Manually recreate the schedules on the moved VMs. This process can be time-consuming and error-prone. This approach is most useful when you have only a few schedules and VMs.
+ - Export and redeploy the schedules by using ARM templates.
+
+Use the following steps to export and redeploy your schedule in another Azure region by using an ARM template:
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+2. Go to the source resource group that held your VMs.
+
+3. On the **Resource Group Overview** page, under **Resources**, select **Show hidden types**.
+
+4. Select all resources with the type **microsoft.devtestlab/schedules**.
+
+5. Select **Export template**.
+
+ :::image type="content" source="./media/how-to-move-labs/move-compute-schedule.png" alt-text="Screenshot that shows the hidden resources in a resource group, with schedules selected.":::
+
+6. On the **Export resource group template** page, select **Deploy**.
+
+7. On the **Custom deployment** page, select **Edit template**.
+
+8. In the template code, change all instances of `"location": "<old location>"` to `"location": "<new location>"` and then select **Save**.
+
+9. On the **Custom deployment** page, enter values that match the target VM:
+
+ |Name|Value|
+ |-|-|
+ |**Subscription**|Select an Azure subscription.|
+ |**Resource group**|Select the resource group name. |
+ |**Region**|Select a location for the lab schedule. For example, **Central US**. |
+ |**Schedule Name**|Must be a globally unique name. |
+ |**VirtualMachine_xxx_externalId**|Must be the target VM. |
+
+ :::image type="content" source="./media/how-to-move-labs/move-schedule-custom-deployment.png" alt-text="Screenshot that shows the custom deployment page, with new location values for the relevant settings.":::
+
+ >[!IMPORTANT]
+ >Each schedule must have a globally unique name; you will need to change the schedule name for the new location.
+
+10. Select **Review and create** to create the deployment.
+
+11. When the deployment is complete, verify that the new schedule is configured correctly on the new VM.
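One way to verify is to list the schedule resources in the target resource group. A minimal sketch with placeholder names:

```
# Minimal sketch: confirm the redeployed schedules exist in the target region.
az resource list \
  --resource-group <target-resource-group> \
  --resource-type "microsoft.devtestlab/schedules" \
  --output table
```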
+ ## Discard or clean up
-After the deployment, if you want to start over, you can delete the target lab, and repeat the steps described in the [Prepare](#prepare-to-move) and [Move](#deploy-to-move) sections of this article.
+After the deployment, if you want to start over, you can delete the target lab, and repeat the steps described in the [Prepare](#prepare-to-move-a-lab) and [Move](#deploy-the-template-to-move-the-lab) sections of this article.
To commit the changes and complete the move, you must delete the original lab.
To remove a lab by using the Azure portal:
1. Select **Delete**, and confirm.
+You can also choose to clean up the original schedules if they're no longer used. Go to the original schedule resource group (where you exported templates from in step 5 above) and delete the schedule resource.
+ ## Next steps In this article, you moved DevTest Labs from one region to another and cleaned up the source resources. To learn more about moving resources between regions and disaster recovery in Azure, refer to: - [Move resources to a new resource group or subscription](../azure-resource-manager/management/move-resource-group-and-subscription.md)-- [Move Azure VMs to another region](../site-recovery/azure-to-azure-tutorial-migrate.md)-- [Move Microsoft.DevtestLab/schedules to another region](./how-to-move-schedule-to-new-region.md)
+- [Move Azure VMs to another region](../site-recovery/azure-to-azure-tutorial-migrate.md)
devtest-labs How To Move Schedule To New Region https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/how-to-move-schedule-to-new-region.md
- Title: Move a schedule to another region
-description: This article explains how to move a top level schedule to another Azure region.
--- Previously updated : 05/09/2022--
-# Move a schedule to another region
-
-In this article, you'll learn how to move a schedule by using an Azure Resource Manager (ARM) template.
-
-DevTest Labs supports two types of schedules.
-
-- Schedules apply only to compute virtual machines (VMs): schedules are stored as microsoft.devtestlab/schedules resources, and often referred to as top level schedules, or simply schedules.
-
-- Lab schedules apply only to DevTest Labs (DTL) VMs: lab schedules. They are stored as microsoft.devtestlab/labs/schedules resources. This type of schedule is not covered in this article.
-
-In this article, you'll learn how to:
-> [!div class="checklist"]
->
-> - Export an ARM template that contains your schedules.
-> - Modify the template by adding or updating the target region and other parameters.
-> - Delete the resources in the source region.
-
-## Prerequisites
-
-- Ensure that the services and features that your account uses are supported in the target region.
-- For preview features, ensure that your subscription is allowlisted for the target region.
-- Ensure a Compute VM exists in the target region.
-
-## Move an existing schedule
-There are two ways to move a schedule:
-
-- Manually recreate the schedules on the moved VMs. This process can be time consuming and error prone. This approach is most useful when you have a few schedules and VMs.
-- Export and redeploy the schedules by using ARM templates.
-
-Use the following steps to export and redeploy your schedule in another Azure region by using an ARM template:
-
-1. Sign in to the [Azure portal](https://portal.azure.com).
-
-2. Go to the source resource group that held your VMs.
-
-3. On the **Resource Group Overview** page, under **Resources**, select **Show hidden types**.
-
-4. Select all resources with the type **microsoft.devtestlab/schedules**.
-
-5. Select **Export template**.
-
- :::image type="content" source="./media/how-to-move-schedule-to-new-region/move-compute-schedule.png" alt-text="Screenshot that shows the hidden resources in a resource group, with schedules selected.":::
-
-6. On the **Export resource group template** page, select **Deploy**.
-
-7. On the **Custom deployment** page, select **Edit template**.
-
-8. In the template code, change all instances of `"location": "<old location>"` to `"location": "<new location>"` and then select **Save**.
-
-9. On the **Custom deployment** page, enter values that match the target VM:
-
- |Name|Value|
- |-|-|
- |**Subscription**|Select an Azure subscription.|
- |**Resource group**|Select the resource group name. |
- |**Region**|Select a location for the lab schedule. For example, **Central US**. |
- |**Schedule Name**|Must be a globally unique name. |
- |**VirtualMachine_xxx_externalId**|Must be the target VM. |
-
- :::image type="content" source="./media/how-to-move-schedule-to-new-region/move-schedule-custom-deployment.png" alt-text="Screenshot that shows the custom deployment page, with new location values for the relevant settings.":::
-
- >[!IMPORTANT]
- >Each schedule must have a globally unique name; you will need to change the schedule name for the new location.
-
-10. Select **Review and create** to create the deployment.
-
-11. When the deployment is complete, verify that the new schedule is configured correctly on the new VM.
-
-## Discard or clean up
-
-Now you can choose to clean up the original schedules if they're no longer used. Go to the original schedule resource group (where you exported templates from in step 5 above) and delete the schedule resource.
-
-## Next steps
-
-In this article, you moved a schedule from one region to another and cleaned up the source resources. To learn more about moving resources between regions and disaster recovery in Azure, refer to:
-
-- [Move a DevTest Labs to another region](./how-to-move-labs.md).
-- [Move resources to a new resource group or subscription](../azure-resource-manager/management/move-resource-group-and-subscription.md).
-- [Move Azure VMs to another region](../site-recovery/azure-to-azure-tutorial-migrate.md).
devtest-labs Network Isolation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/network-isolation.md
Previously updated : 06/30/2023 Last updated : 09/30/2023 # Network isolation in Azure DevTest Labs
devtest-labs Samples Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/samples-cli.md
Previously updated : 02/02/2022 Last updated : 09/30/2023 # Azure CLI Samples for Azure DevTest Labs
devtest-labs Samples Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/samples-powershell.md
Previously updated : 02/02/2022 Last updated : 09/30/2023 # Azure PowerShell samples for Azure Lab Services
devtest-labs Start Machines Use Automation Runbooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/start-machines-use-automation-runbooks.md
description: Learn how to start virtual machines in a specific order by using Az
Previously updated : 03/17/2022 Last updated : 09/30/2023
devtest-labs Test App Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/test-app-azure.md
description: Learn how to publish an app from Visual Studio to an Azure file sha
Previously updated : 12/22/2022 Last updated : 09/30/2023
devtest-labs Troubleshoot Vm Deployment Failures https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/troubleshoot-vm-deployment-failures.md
description: Learn how to troubleshoot virtual machine (VM) deployment failures
Previously updated : 02/27/2023 Last updated : 09/30/2023
devtest-labs Tutorial Create Custom Lab https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/tutorial-create-custom-lab.md
description: Use the Azure portal to create a lab, create a virtual machine in t
Previously updated : 05/22/2023 Last updated : 09/30/2023
devtest-labs Tutorial Use Custom Lab https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/tutorial-use-custom-lab.md
description: Learn how to access a lab in Azure DevTest Labs, and claim, connect
Previously updated : 05/22/2023 Last updated : 09/30/2023
devtest-labs Use Command Line Start Stop Virtual Machines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/use-command-line-start-stop-virtual-machines.md
description: Use Azure PowerShell or Azure CLI command lines and scripts to star
Previously updated : 04/24/2023 Last updated : 09/30/2023 ms.devlang: azurecli
devtest-labs Use Paas Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/use-paas-services.md
description: Learn about platform-as-a-service (PaaS) environments in Azure DevT
Previously updated : 03/22/2022 Last updated : 09/30/2023
digital-twins Concepts Data History https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/concepts-data-history.md
Once graph updates are historized to Azure Data Explorer, you can run joint quer
For more of an introduction to data history, including a quick demo, watch the following IoT show video:
-<iframe src="https://aka.ms/docs/player?id=2f9a9af4-1556-44ea-ab5f-afcfd6eb9c15" width="1080" height="530"></iframe>
+> [!VIDEO https://aka.ms/docs/player?id=2f9a9af4-1556-44ea-ab5f-afcfd6eb9c15]
## Resources and data flow
digital-twins How To Use Power Platform Logic Apps Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-use-power-platform-logic-apps-connector.md
The connector is a wrapper around the Azure Digital Twins [data plane APIs](conc
For an introduction to the connector, including a quick demo, watch the following IoT show video:
-<iframe src="https://aka.ms/docs/player?id=d6c200c2-f622-4254-b61f-d5db613bbd11" width="1080" height="530"></iframe>
+> [!VIDEO https://aka.ms/docs/player?id=d6c200c2-f622-4254-b61f-d5db613bbd11]
You can also complete a basic walkthrough in the blog post [Simplify building automated workflows and apps powered by Azure Digital Twins](https://techcommunity.microsoft.com/t5/internet-of-things-blog/simplify-building-automated-workflows-and-apps-powered-by-azure/ba-p/3763051). For more information about the connector, including a complete list of the connector's actions and their parameters, see the [Azure Digital Twins connector reference documentation](/connectors/azuredigitaltwins).
dms Concepts Migrate Azure Mysql Replicate Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/concepts-migrate-azure-mysql-replicate-changes.md
You must run an offline migration scenario with "Enable Transactional Consistenc
While running the replicate changes migration, when your target is almost caught up with the source server, stop all incoming transactions to the source database and wait until all pending transactions have been applied to the target database. To confirm that the target database is up-to-date on the source server, run the query 'SHOW MASTER STATUS;', then compare that position to the last committed binlog event (displayed under Migration Progress). When the two positions are the same, the target has caught up with all changes, and you can start the cutover.

+
### How Replicate Changes works

The current implementation is based on streaming binlog changes from the source server and applying them to the target server. Like Data-in replication, this is easier to set up and doesn't require a physical connection between the source and the target servers. The server can send Binlog as a stream that contains binary data as documented [here](https://dev.mysql.com/doc/dev/mysql-server/latest/page_protocol_replication.html). The client can specify the initial log position to start the stream with. The log file name describes the log position, the position within that file, and optionally GTID (Global Transaction ID) if gtid mode is enabled at the source.
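
To illustrate the cutover check described earlier, here's a minimal sketch; the statement is standard MySQL, and the output values are illustrative:

```sql
-- Run on the source server after stopping incoming transactions.
SHOW MASTER STATUS;

-- Example output (values are illustrative):
-- +------------------+----------+--------------+------------------+-------------------+
-- | File             | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
-- +------------------+----------+--------------+------------------+-------------------+
-- | mysql-bin.000042 |     1337 |              |                  |                   |
-- +------------------+----------+--------------+------------------+-------------------+

-- Compare File and Position to the last committed binlog event shown under
-- Migration Progress; when the two match, you can start the cutover.
```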
-The data changes are sent as "row" events, which contain data for individual rows (prior and/or after the change depending on the operation type, which is insert, delete, or update). The row events are then applied in their raw format using a BINLOG statement: [MySQL 8.0 Reference Manual :: 13.7.8.1 BINLOG Statement](https://dev.mysql.com/doc/refman/8.0/en/binlog.html).
+The data changes are sent as "row" events, which contain data for individual rows (prior and/or after the change depending on the operation type, which is insert, delete, or update). The row events are then applied in their raw format using a BINLOG statement: [MySQL 8.0 Reference Manual :: 13.7.8.1 BINLOG Statement](https://dev.mysql.com/doc/refman/8.0/en/binlog.html). However, for a DMS migration to a 5.7 server, DMS doesn't apply changes as BINLOG statements (because DMS doesn't have the necessary privileges to do so) and instead translates the row events into INSERT, UPDATE, or DELETE statements.
## Prerequisites
To complete the replicate changes migration successfully, ensure that the follow
- When performing a replicate changes migration, the name of the database on the target server must be the same as the name on the source server. - Support is limited to the ROW binlog format. - DDL changes replication is supported only when you have selected the option for migrating entire server on DMS UI.
+- Renaming databases or tables is not supported when replicating changes.
## Next steps
dms Dms Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/dms-overview.md
Title: What is Azure Database Migration Service? description: Overview of Azure Database Migration Service, which provides seamless migrations from many database sources to Azure Data platforms.--++ Last updated 02/08/2023
Azure Database Migration Service is a fully managed service designed to enable seamless migrations from multiple database sources to Azure data platforms with minimal downtime (online migrations).
-With Azure Database Migration Service currently we offer two options:
+With Azure Database Migration Service, we currently offer two versions:
-1. [Azure SQL migration extension for Azure Data Studio](./migration-using-azure-data-studio.md)
+1. Database Migration Service - via [Azure SQL migration extension for Azure Data Studio](./migration-using-azure-data-studio.md), [Azure portal](https://portal.azure.com/#create/Microsoft.AzureDMS), PowerShell and Azure CLI.
1. Database Migration Service (classic) - via Azure portal, PowerShell and Azure CLI.
-**Azure SQL Migration extension for Azure Data Studio** is powered by the latest version of Database Migration Service and provides more features. Currently, it supports SQL Database modernization to Azure. For improved functionality and supportability, consider migrating to Azure SQL Database by using the Azure SQL migration extension for Azure Data Studio.
+**Database Migration Service** powers the "Azure SQL Migration" extension for Azure Data Studio and provides more features. You can also access DMS through the Azure portal, PowerShell, and the Azure CLI. Currently, it supports SQL Database modernization to Azure. For improved functionality and supportability, consider migrating to Azure SQL Database by using DMS.
**Database Migration Service (classic)** via Azure portal, PowerShell and Azure CLI is an older version of the Azure Database Migration Service. It offers database modernization to Azure and support scenarios like – SQL Server, PostgreSQL, MySQL, and MongoDB.
With Azure Database Migration Service currently we offer two options:
## Compare versions
-In 2021, a newer version of the Azure Database Migration Service was released as an extension for Azure Data Studio, which improved the functionality, user experience and supportability of the migration service. Consider using the [Azure SQL migration extension for Azure Data Studio](./migration-using-azure-data-studio.md) whenever possible.
+A newer version of the Azure Database Migration Service is available as an extension for Azure Data Studio and can also be accessed from the Azure portal. It improves the functionality, user experience, and supportability of the migration service. Consider using the [Azure SQL migration extension for Azure Data Studio](./migration-using-azure-data-studio.md) or the DMS experience in the Azure portal whenever possible.
The following table compares the functionality of the versions of the Database Migration Service:
-|Feature |DMS (classic) |Azure SQL extension for Azure Data Studio |Notes|
-|||||
-|Assessment | No | Yes | Assess compatibility of the source. |
-|SKU recommendation | No | Yes | SKU recommendations for the target based on the assessment of the source. |
-|Azure SQL Database - Offline migration | Yes | Yes | Migrate to Azure SQL Database offline. |
-|Azure SQL Managed Instance - Online migration | Yes |Yes | Migrate to Azure SQL Managed Instance online with minimal downtime. |
-|Azure SQL Managed Instance - Offline migration | Yes |Yes | Migrate to Azure SQL Managed Instance offline. |
-|SQL Server on Azure SQL VM - Online migration | No | Yes |Migrate to SQL Server on Azure VMs online with minimal downtime.|
-|SQL Server on Azure SQL VM - Offline migration | Yes |Yes | Migrate to SQL Server on Azure VMs offline. |
-|Migrate logins|Yes | Yes | Migrate logins from your source to your target.|
-|Migrate schemas| Yes | No | Migrate schemas from your source to your target. |
-|Azure portal support |Yes | Yes | Monitor your migration by using the Azure portal. |
-|Integration with Azure Data Studio | No | Yes | Migration support integrated with Azure Data Studio. |
-|Regional availability|Yes |Yes | More regions are available with the extension. |
-|Improved user experience| No | Yes | The extension is faster, more secure, and easier to troubleshoot. |
-|Automation| Yes | Yes |The extension supports PowerShell and Azure CLI. |
-|Private endpoints| No | Yes| Connect to your source and target using private endpoints.
-|TDE support|No | Yes |Migrate databases encrypted with TDE. |
+|Feature |DMS (classic) |DMS - via Azure SQL extension for ADS|DMS - via Azure portal |Notes|
+||||||
+|Assessment | No | Yes | No | Assess compatibility of the source. |
+|SKU recommendation | No | Yes | No | SKU recommendations for the target based on the assessment of the source. |
+|Azure SQL Database - Offline migration | Yes | Yes | Yes | Migrate to Azure SQL Database offline. |
+|Azure SQL Managed Instance - Online migration | Yes |Yes | Yes | Migrate to Azure SQL Managed Instance online with minimal downtime. |
+|Azure SQL Managed Instance - Offline migration | Yes |Yes | Yes | Migrate to Azure SQL Managed Instance offline. |
+|SQL Server on Azure SQL VM - Online migration | No | Yes | Yes |Migrate to SQL Server on Azure VMs online with minimal downtime.|
+|SQL Server on Azure SQL VM - Offline migration | Yes |Yes | Yes | Migrate to SQL Server on Azure VMs offline. |
+|Migrate logins|Yes | Yes | No | Migrate logins from your source to your target.|
+|Migrate schemas| Yes | No | No | Migrate schemas from your source to your target. |
+|Azure portal support |Yes | Partial | Yes | Create and monitor your migration by using the Azure portal. |
+|Integration with Azure Data Studio | No | Yes | No | Migration support integrated with Azure Data Studio. |
+|Regional availability|Yes |Yes | Yes | More regions are available with the extension. |
+|Improved user experience| No | Yes | Yes | The DMS is faster, more secure, and easier to troubleshoot. |
+|Automation| Yes | Yes | Yes |The DMS supports PowerShell and Azure CLI. |
+|Private endpoints| No | Yes| Yes | Connect to your source and target using private endpoints. |
+|TDE support|No | Yes | No |Migrate databases encrypted with TDE. |
## Migrate databases to Azure with familiar tools
For up-to-date info about the regional availability of Azure Database Migration
* [Services and tools available for data migration scenarios](./dms-tools-matrix.md) * [Migrate databases with Azure SQL Migration extension for Azure Data Studio](./migration-using-azure-data-studio.md) * [FAQ about using Azure Database Migration Service](./faq.yml)+
dms Dms Tools Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/dms-tools-matrix.md
Title: Azure Database Migration Service tools matrix description: Learn about the services and tools available to migrate databases and to support various phases of the migration process.--- Previously updated : 03/21/2020+++ Last updated : 08/23/2023
The following tables identify the services and tools you can use to plan for dat
| Source | Target | Schema | Data<br/>(Offline) | Data<br/>(Online) | | | | | | |
-| SQL Server | Azure SQL DB | [SQL Database Projects extension](/sql/azure-data-studio/extensions/sql-database-project-extension)<br/>[DMA](/sql/dm)<br/>[DMA](/sql/dma/dma-overview)<br/>[Cloudamize*](https://www.cloudamize.com/) | [Cloudamize*](https://www.cloudamize.com/)<br/>[Attunity*](https://www.attunity.com/products/replicate/)<br/>[Striim*](https://www.striim.com/partners/striim-and-microsoft-azure/) |
-| SQL Server | Azure SQL MI | [Azure SQL Migration extension](./migration-using-azure-data-studio.md)<br/>[Cloudamize*](https://www.cloudamize.com/) | [Azure SQL Migration extension](./migration-using-azure-data-studio.md)<br/>[Cloudamize*](https://www.cloudamize.com/) | [Azure SQL Migration extension](./migration-using-azure-data-studio.md)<br/>[Cloudamize*](https://www.cloudamize.com/)<br/>[Attunity*](https://www.attunity.com/products/replicate/)<br/>[Striim*](https://www.striim.com/partners/striim-and-microsoft-azure/) |
-| SQL Server | Azure SQL VM | [Azure SQL Migration extension](./migration-using-azure-data-studio.md)<br/>[DMA](/sql/dm)<br/>[Cloudamize*](https://www.cloudamize.com/)<br/>[Attunity*](https://www.attunity.com/products/replicate/)<br/>[Striim*](https://www.striim.com/partners/striim-and-microsoft-azure/) |
+| SQL Server | Azure SQL DB | [SQL Database Projects extension](/sql/azure-data-studio/extensions/sql-database-project-extension)<br/>[DMA](/sql/dm)<br/>[DMA](/sql/dma/dma-overview)<br/>[Cloudamize*](https://www.cloudamize.com/) | [Cloudamize*](https://www.cloudamize.com/)<br/>[Qlik*](https://www.qlik.com/us/products/qlik-replicate)<br/>[Striim*](https://www.striim.com/partners/striim-and-microsoft-azure/) |
+| SQL Server | Azure SQL MI | [Azure SQL Migration extension](./migration-using-azure-data-studio.md)<br/>[Cloudamize*](https://www.cloudamize.com/) | [Azure SQL Migration extension](./migration-using-azure-data-studio.md)<br/>[Cloudamize*](https://www.cloudamize.com/) | [Azure SQL Migration extension](./migration-using-azure-data-studio.md)<br/>[Cloudamize*](https://www.cloudamize.com/)<br/>[Qlik*](https://www.qlik.com/us/products/qlik-replicate)<br/>[Striim*](https://www.striim.com/partners/striim-and-microsoft-azure/) |
+| SQL Server | Azure SQL VM | [Azure SQL Migration extension](./migration-using-azure-data-studio.md)<br/>[DMA](/sql/dm)<br/>[Cloudamize*](https://www.cloudamize.com/)<br/>[Qlik*](https://www.qlik.com/us/products/qlik-replicate)<br/>[Striim*](https://www.striim.com/partners/striim-and-microsoft-azure/) |
| SQL Server | Azure Synapse Analytics | | | |
-| Amazon RDS for SQL Server| Azure SQL DB | [SQL Database Projects extension](/sql/azure-data-studio/extensions/sql-database-project-extension)<br/>[DMA](/sql/dm)<br/>[DMA](/sql/dma/dma-overview)| [Attunity*](https://www.attunity.com/products/replicate/)<br/>[Striim*](https://www.striim.com/partners/striim-and-microsoft-azure/) |
-| Amazon RDS for SQL | Azure SQL MI |[Azure SQL Migration extension](./migration-using-azure-data-studio.md) | [Azure SQL Migration extension](./migration-using-azure-data-studio.md) | [Azure SQL Migration extension](./migration-using-azure-data-studio.md)<br/>[Attunity*](https://www.attunity.com/products/replicate/)<br/>[Striim*](https://www.striim.com/partners/striim-and-microsoft-azure/) |
-| Amazon RDS for SQL Server | Azure SQL VM | [Azure SQL Migration extension](./migration-using-azure-data-studio.md)<br/>[DMA](/sql/dm)<br/>[Attunity*](https://www.attunity.com/products/replicate/)<br/>[Striim*](https://www.striim.com/partners/striim-and-microsoft-azure/) |
-| Oracle | Azure SQL DB, MI, VM | [SSMA](/sql/ssma/sql-server-migration-assistant)<br/>[SharePlex*](https://www.quest.com/products/shareplex/)<br/>[Ispirer*](https://www.ispirer.com/blog/migration-to-the-microsoft-technology-stack) | [SSMA](/sql/ssma/sql-server-migration-assistant)<br/>[SharePlex*](https://www.quest.com/products/shareplex/)<br/>[Ispirer*](https://www.ispirer.com/blog/migration-to-the-microsoft-technology-stack) | [SharePlex*](https://www.quest.com/products/shareplex/)<br/>[Attunity*](https://www.attunity.com/products/replicate/)<br/>[Striim*](https://www.striim.com/partners/striim-and-microsoft-azure/) |
-| Oracle | Azure Synapse Analytics | [SSMA](/sql/ssma/sql-server-migration-assistant)<br/>[Ispirer*](https://www.ispirer.com/blog/migration-to-the-microsoft-technology-stack) | [SSMA](/sql/ssma/sql-server-migration-assistant)<br/>[Ispirer*](https://www.ispirer.com/blog/migration-to-the-microsoft-technology-stack) | [SharePlex*](https://www.quest.com/products/shareplex/)<br/>[Attunity*](https://www.attunity.com/products/replicate/)<br/>[Striim*](https://www.striim.com/partners/striim-and-microsoft-azure/) |
+| Amazon RDS for SQL Server| Azure SQL DB | [SQL Database Projects extension](/sql/azure-data-studio/extensions/sql-database-project-extension)<br/>[DMA](/sql/dm)<br/>[DMA](/sql/dma/dma-overview)| [Qlik*](https://www.qlik.com/us/products/qlik-replicate)<br/>[Striim*](https://www.striim.com/partners/striim-and-microsoft-azure/) |
+| Amazon RDS for SQL | Azure SQL MI |[Azure SQL Migration extension](./migration-using-azure-data-studio.md) | [Azure SQL Migration extension](./migration-using-azure-data-studio.md) | [Azure SQL Migration extension](./migration-using-azure-data-studio.md)<br/>[Qlik*](https://www.qlik.com/us/products/qlik-replicate)<br/>[Striim*](https://www.striim.com/partners/striim-and-microsoft-azure/) |
+| Amazon RDS for SQL Server | Azure SQL VM | [Azure SQL Migration extension](./migration-using-azure-data-studio.md)<br/>[DMA](/sql/dm)<br/>[Qlik*](https://www.qlik.com/us/products/qlik-replicate)<br/>[Striim*](https://www.striim.com/partners/striim-and-microsoft-azure/) |
+| Oracle | Azure SQL DB, MI, VM | [SSMA](/sql/ssma/sql-server-migration-assistant)<br/>[SharePlex*](https://www.quest.com/products/shareplex/)<br/>[Ispirer*](https://www.ispirer.com/blog/migration-to-the-microsoft-technology-stack) | [SSMA](/sql/ssma/sql-server-migration-assistant)<br/>[SharePlex*](https://www.quest.com/products/shareplex/)<br/>[Ispirer*](https://www.ispirer.com/blog/migration-to-the-microsoft-technology-stack) | [SharePlex*](https://www.quest.com/products/shareplex/)<br/>[Qlik*](https://www.qlik.com/us/products/qlik-replicate)<br/>[Striim*](https://www.striim.com/partners/striim-and-microsoft-azure/) |
+| Oracle | Azure Synapse Analytics | [SSMA](/sql/ssma/sql-server-migration-assistant)<br/>[Ispirer*](https://www.ispirer.com/blog/migration-to-the-microsoft-technology-stack) | [SSMA](/sql/ssma/sql-server-migration-assistant)<br/>[Ispirer*](https://www.ispirer.com/blog/migration-to-the-microsoft-technology-stack) | [SharePlex*](https://www.quest.com/products/shareplex/)<br/>[Qlik*](https://www.qlik.com/us/products/qlik-replicate)<br/>[Striim*](https://www.striim.com/partners/striim-and-microsoft-azure/) |
| Oracle | Azure DB for PostgreSQL -<br/>Single server | [Ora2Pg*](http://ora2pg.darold.net/start.html)<br/>[Ispirer*](https://www.ispirer.com/blog/migration-to-the-microsoft-technology-stack) | [Ora2Pg*](http://ora2pg.darold.net/start.html)<br/>[Ispirer*](https://www.ispirer.com/blog/migration-to-the-microsoft-technology-stack) | <br/>[Striim*](https://www.striim.com/partners/striim-and-microsoft-azure/) | | Oracle | Azure DB for PostgreSQL -<br/>Flexible server | [Ora2Pg*](http://ora2pg.darold.net/start.html)<br/>[Ispirer*](https://www.ispirer.com/blog/migration-to-the-microsoft-technology-stack) | [Ora2Pg*](http://ora2pg.darold.net/start.html)<br/>[Ispirer*](https://www.ispirer.com/blog/migration-to-the-microsoft-technology-stack) | <br/>[Striim*](https://www.striim.com/partners/striim-and-microsoft-azure/) | | MongoDB | Azure Cosmos DB | [DMS](https://azure.microsoft.com/services/database-migration/)<br/>[Cloudamize*](https://www.cloudamize.com/)<br/>[Imanis Data*](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/talena-inc.talena-solution-template?tab=Overview) | [DMS](https://azure.microsoft.com/services/database-migration/)<br/>[Cloudamize*](https://www.cloudamize.com/)<br/>[Imanis Data*](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/talena-inc.talena-solution-template?tab=Overview) | [DMS](https://azure.microsoft.com/services/database-migration/)<br/>[Cloudamize*](https://www.cloudamize.com/)<br/>[Imanis Data*](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/talena-inc.talena-solution-template?tab=Overview)<br/>[Striim*](https://www.striim.com/partners/striim-and-microsoft-azure/) | | Cassandra | Azure Cosmos DB | [Imanis Data*](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/talena-inc.talena-solution-template?tab=Overview) | [Imanis Data*](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/talena-inc.talena-solution-template?tab=Overview) | [Imanis Data*](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/talena-inc.talena-solution-template?tab=Overview) |
-| MySQL | Azure SQL DB, MI, VM | [SSMA](/sql/ssma/sql-server-migration-assistant)<br/>[Ispirer*](https://www.ispirer.com/blog/migration-to-the-microsoft-technology-stack) | [SSMA](/sql/ssma/sql-server-migration-assistant)<br/>[Ispirer*](https://www.ispirer.com/blog/migration-to-the-microsoft-technology-stack) | [Attunity*](https://www.attunity.com/products/replicate/)<br/>[Striim*](https://www.striim.com/partners/striim-and-microsoft-azure/) |
-| MySQL | Azure DB for MySQL | [MySQL dump*](https://dev.mysql.com/doc/refman/5.7/en/mysqldump.html) | [DMS](https://azure.microsoft.com/services/database-migration/) | [MyDumper/MyLoader*](https://centminmod.com/mydumper.html) with [data-in replication](../mysql/concepts-data-in-replication.md)<br/>[Attunity*](https://www.attunity.com/products/replicate/)<br/>[Striim*](https://www.striim.com/partners/striim-and-microsoft-azure/) |
-| Amazon RDS for MySQL | Azure DB for MySQL | [MySQL dump*](https://dev.mysql.com/doc/refman/5.7/en/mysqldump.html) | [DMS](https://azure.microsoft.com/services/database-migration/) | [MyDumper/MyLoader*](https://centminmod.com/mydumper.html) with [data-in replication](../mysql/concepts-data-in-replication.md)<br/>[Attunity*](https://www.attunity.com/products/replicate/)<br/>[Striim*](https://www.striim.com/partners/striim-and-microsoft-azure/) |
-| PostgreSQL | Azure DB for PostgreSQL -<br/>Single server | [PG dump*](https://www.postgresql.org/docs/current/static/app-pgdump.html) | [PG dump*](https://www.postgresql.org/docs/current/static/app-pgdump.html) | [DMS](https://azure.microsoft.com/services/database-migration/)<br/>[Attunity*](https://www.attunity.com/products/replicate/)<br/>[Striim*](https://www.striim.com/partners/striim-and-microsoft-azure/) |
-| Amazon RDS for PostgreSQL | Azure DB for PostgreSQL -<br/>Single server | [PG dump*](https://www.postgresql.org/docs/current/static/app-pgdump.html) | [PG dump*](https://www.postgresql.org/docs/current/static/app-pgdump.html) | [DMS](https://azure.microsoft.com/services/database-migration/)<br/>[Attunity*](https://www.attunity.com/products/replicate/)<br/>[Striim*](https://www.striim.com/partners/striim-and-microsoft-azure/) |
-| DB2 | Azure SQL DB, MI, VM | [SSMA](/sql/ssma/sql-server-migration-assistant)<br/>[Ispirer*](https://www.ispirer.com/blog/migration-to-the-microsoft-technology-stack) | [SSMA](/sql/ssma/sql-server-migration-assistant)<br/>[Ispirer*](https://www.ispirer.com/blog/migration-to-the-microsoft-technology-stack) | [Attunity*](https://www.attunity.com/products/replicate/)<br/>[Striim*](https://www.striim.com/partners/striim-and-microsoft-azure/) |
+| MySQL | Azure SQL DB, MI, VM | [SSMA](/sql/ssma/sql-server-migration-assistant)<br/>[Ispirer*](https://www.ispirer.com/blog/migration-to-the-microsoft-technology-stack) | [SSMA](/sql/ssma/sql-server-migration-assistant)<br/>[Ispirer*](https://www.ispirer.com/blog/migration-to-the-microsoft-technology-stack) | [Qlik*](https://www.qlik.com/us/products/qlik-replicate)<br/>[Striim*](https://www.striim.com/partners/striim-and-microsoft-azure/) |
+| MySQL | Azure DB for MySQL | [MySQL dump*](https://dev.mysql.com/doc/refman/5.7/en/mysqldump.html) | [DMS](https://azure.microsoft.com/services/database-migration/) | [MyDumper/MyLoader*](https://centminmod.com/mydumper.html) with [data-in replication](../mysql/concepts-data-in-replication.md)<br/>[Qlik*](https://www.qlik.com/us/products/qlik-replicate)<br/>[Striim*](https://www.striim.com/partners/striim-and-microsoft-azure/) |
+| Amazon RDS for MySQL | Azure DB for MySQL | [MySQL dump*](https://dev.mysql.com/doc/refman/5.7/en/mysqldump.html) | [DMS](https://azure.microsoft.com/services/database-migration/) | [MyDumper/MyLoader*](https://centminmod.com/mydumper.html) with [data-in replication](../mysql/concepts-data-in-replication.md)<br/>[Qlik*](https://www.qlik.com/us/products/qlik-replicate)<br/>[Striim*](https://www.striim.com/partners/striim-and-microsoft-azure/) |
+| PostgreSQL | Azure DB for PostgreSQL -<br/>Single server | [PG dump*](https://www.postgresql.org/docs/current/static/app-pgdump.html) | [PG dump*](https://www.postgresql.org/docs/current/static/app-pgdump.html) | [DMS](https://azure.microsoft.com/services/database-migration/)<br/>[Qlik*](https://www.qlik.com/us/products/qlik-replicate)<br/>[Striim*](https://www.striim.com/partners/striim-and-microsoft-azure/) |
+| Amazon RDS for PostgreSQL | Azure DB for PostgreSQL -<br/>Single server | [PG dump*](https://www.postgresql.org/docs/current/static/app-pgdump.html) | [PG dump*](https://www.postgresql.org/docs/current/static/app-pgdump.html) | [DMS](https://azure.microsoft.com/services/database-migration/)<br/>[Qlik*](https://www.qlik.com/us/products/qlik-replicate)<br/>[Striim*](https://www.striim.com/partners/striim-and-microsoft-azure/) |
+| DB2 | Azure SQL DB, MI, VM | [SSMA](/sql/ssma/sql-server-migration-assistant)<br/>[Ispirer*](https://www.ispirer.com/blog/migration-to-the-microsoft-technology-stack) | [SSMA](/sql/ssma/sql-server-migration-assistant)<br/>[Ispirer*](https://www.ispirer.com/blog/migration-to-the-microsoft-technology-stack) | [Qlik*](https://www.qlik.com/us/products/qlik-replicate)<br/>[Striim*](https://www.striim.com/partners/striim-and-microsoft-azure/) |
| Access | Azure SQL DB, MI, VM | [SSMA](/sql/ssma/sql-server-migration-assistant) | [SSMA](/sql/ssma/sql-server-migration-assistant) | [SSMA](/sql/ssma/sql-server-migration-assistant) |
-| Sybase - SAP ASE | Azure SQL DB, MI, VM | [SSMA](/sql/ssma/sql-server-migration-assistant)<br/>[Ispirer*](https://www.ispirer.com/blog/migration-to-the-microsoft-technology-stack) | [SSMA](/sql/ssma/sql-server-migration-assistant)<br/>[Ispirer*](https://www.ispirer.com/blog/migration-to-the-microsoft-technology-stack) | [Attunity*](https://www.attunity.com/products/replicate/)<br/>[Striim*](https://www.striim.com/partners/striim-and-microsoft-azure/) |
+| Sybase - SAP ASE | Azure SQL DB, MI, VM | [SSMA](/sql/ssma/sql-server-migration-assistant)<br/>[Ispirer*](https://www.ispirer.com/blog/migration-to-the-microsoft-technology-stack) | [SSMA](/sql/ssma/sql-server-migration-assistant)<br/>[Ispirer*](https://www.ispirer.com/blog/migration-to-the-microsoft-technology-stack) | [Qlik*](https://www.qlik.com/us/products/qlik-replicate)<br/>[Striim*](https://www.striim.com/partners/striim-and-microsoft-azure/) |
| Sybase - SAP IQ | Azure SQL DB, MI, VM | [Ispirer*](https://www.ispirer.com/blog/migration-to-the-microsoft-technology-stack) | [Ispirer*](https://www.ispirer.com/blog/migration-to-the-microsoft-technology-stack) | | | | | | | |
dms Known Issues Azure Mysql Fs Online https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/known-issues-azure-mysql-fs-online.md
Known issues associated with migrations to Azure Database for MySQL are described in the following sections.
+## Schema Migration Issue for v8.0 MySQL Flexible Server target
+
+- **Error**: A migration to a MySQL Flexible Server with engine version 8.0.30 or higher can fail when the feature to generate invisible primary keys for InnoDB tables is enabled (see [MySQL :: MySQL 8.0 Reference Manual :: 13.1.20.11 Generated Invisible Primary Keys](https://dev.mysql.com/doc/refman/8.0/en/create-table-gipks.html)). The failure may occur when migrating table schema from the source to the target, when applying changes during the replication phase of online migrations, when retrying a migration, or when migrating to a target where the schema has been migrated manually.
+
+ **Potential error message**:
+ - "Unknown error."
+ - "Failed to generate invisible primary key. Auto-increment column already exists."
+ - "The column 'my_row_id' in the target table 'table name' in database 'database' does not exist on the source table."
+
+ **Limitation**: Migration to a MySQL Flexible Server instance where `sql_generate_invisible_primary_key` is enabled isn't supported by DMS.
+
+ **Workaround**: Set the server parameter `sql_generate_invisible_primary_key` to OFF for the target MySQL Flexible Server (see the CLI sketch below). You can find the server parameter in the **Server parameters** blade, under the **All** tab, for the target flexible server. Additionally, drop the target database and restart the DMS migration so that there are no mismatched schemas.
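+
+ If you prefer the command line, here's a hedged Azure CLI sketch of the same change; the resource group and server names are placeholders:
+
+ ```azurecli
+ # Turn off generated invisible primary keys on the target flexible server.
+ az mysql flexible-server parameter set \
+   --resource-group <resource-group> \
+   --server-name <target-server-name> \
+   --name sql_generate_invisible_primary_key \
+   --value OFF
+ ```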
+
## Incompatible SQL Mode

One or more incompatible SQL modes can cause many different errors. Below is an example error along with server modes that should be looked at if this error occurs.
One or more incompatible SQL modes can cause many different errors. Below is an
- **Error**: An error occurred as referencing table cannot be found.
- **Potential error message**: The pipeline was unable to create the schema of object '{object}' for activity '{activity}' using strategy MySqlSchemaMigrationViewUsingTableStrategy because of a query execution.
+ **Potential error message**: The pipeline was unable to create the schema of object '{object}' for activity '{activity}' using strategy MySqlSchemaMigrationViewUsingTableStrategy because of a query execution.
- **Limitation**: The error can occur when the view is referring to a table that has been deleted or renamed, or when the view was created with incorrect or incomplete information.
+ **Limitation**: The error can occur when the view is referring to a table that has been deleted or renamed, or when the view was created with incorrect or incomplete information. This error can happen if a subset of tables is migrated, but the tables they depend on aren't.
- **Workaround**: We recommend migrating views manually.
+ **Workaround**: We recommend migrating views manually. Check if all tables referenced in foreign keys and CREATE VIEW statements are selected for migration.
## All pooled connections broken - **Error**: All connections on the source server were broken.
- **Limitation**: The error occurs when all the connections that are acquired at the start of initial load are lost due to server restart, network issues, heavy traffic on the source server or other transient problems. This error isn't recoverable.
+ **Limitation**: The error occurs when all the connections that are acquired at the start of initial load are lost due to server restart, network issues, heavy traffic on the source server or other transient problems. This error isn't recoverable. Additionally, this error occurs if an attempt to migrate a server is made during the maintenance window.
 **Workaround**: The migration must be restarted, and we recommend increasing the performance of the source server. Scripts that kill long-running connections can also cause this error; prevent these scripts from running during the migration.
dms Known Issues Troubleshooting Dms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/known-issues-troubleshooting-dms.md
Last updated 02/20/2020 -
- - seo-lt-2019
- - ignite-2022
+ # Troubleshoot common Azure Database Migration Service issues and errors
dns Dns Zones Records https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-zones-records.md
Previously updated : 09/27/2022 Last updated : 09/06/2023 # Overview of DNS zones and records
-This article explains the key concepts of domains, DNS zones, DNS records, and record sets. You'll learn how it's supported in Azure DNS.
+This article explains the key concepts of domains, DNS zones, DNS records, and record sets. You learn how they're supported in Azure DNS.
## Domain names
To create a wildcard record set, use the record set name '\*'. You can also use
### CAA records CAA records allow domain owners to specify which Certificate Authorities (CAs) are authorized to issue certificates for their domain. This record allows CAs to avoid mis-issuing certificates in some circumstances. CAA records have three properties:
-* **Flags**: This field is an integer between 0 and 255, used to represent the critical flag that has special meaning per the [RFC](https://tools.ietf.org/html/rfc6844#section-3)
+* **Flags**: This field is an integer between 0 and 255, used to represent the critical flag that has special meaning per [RFC6844](https://tools.ietf.org/html/rfc6844#section-3)
* **Tag**: an ASCII string that can be one of the following:
    * **issue**: if you want to specify CAs that are permitted to issue certs (all types)
    * **issuewild**: if you want to specify CAs that are permitted to issue certs (wildcard certs only)
These constraints arise from the DNS standards and aren't limitations of Azure D
The NS record set at the zone apex (name '\@') gets created automatically with each DNS zone and gets deleted automatically when the zone gets deleted. It can't be deleted separately.
-This record set contains the names of the Azure DNS name servers assigned to the zone. You can add more name servers to this NS record set, to support cohosting domains with more than one DNS provider. You can also modify the TTL and metadata for this record set. However, removing or modifying the pre-populated Azure DNS name servers isn't allowed.
+This record set contains the names of the Azure DNS name servers assigned to the zone. You can add more name servers to this NS record set, to support cohosting domains with more than one DNS provider. You can also modify the TTL and metadata for this record set. However, removing or modifying the prepopulated Azure DNS name servers isn't allowed.
This restriction only applies to the NS record set at the zone apex. Other NS record sets in your zone (as used to delegate child zones) can be created, modified, and deleted without constraint.

### SOA records
-A SOA record set gets created automatically at the apex of each zone (name = '\@'), and gets deleted automatically when the zone gets deleted. SOA records cannot be created or deleted separately.
+A SOA record set gets created automatically at the apex of each zone (name = '\@'), and gets deleted automatically when the zone gets deleted. SOA records can't be created or deleted separately.
-You can modify all properties of the SOA record except for the `host` property. This property gets pre-configured to refer to the primary name server name provided by Azure DNS.
+You can modify all properties of the SOA record except for the `host` property. This property gets preconfigured to refer to the primary name server name provided by Azure DNS.
The zone serial number in the SOA record isn't updated automatically when changes are made to the records in the zone. It can be updated manually by editing the SOA record, if necessary.
The zone serial number in the SOA record isn't updated automatically when change
TXT records are used to map domain names to arbitrary text strings. They're used in multiple applications, in particular related to email configuration, such as the [Sender Policy Framework (SPF)](https://en.wikipedia.org/wiki/Sender_Policy_Framework) and [DomainKeys Identified Mail (DKIM)](https://en.wikipedia.org/wiki/DomainKeys_Identified_Mail).
-The DNS standards permit a single TXT record to contain multiple strings, each of which may be up to 255 characters in length. Where multiple strings are used, they are concatenated by clients and treated as a single string.
+The DNS standards permit a single TXT record to contain multiple strings, each of which may be up to 255 characters in length. Where multiple strings are used, they're concatenated by clients and treated as a single string.
When calling the Azure DNS REST API, you need to specify each TXT string separately. When you use the Azure portal, PowerShell, or CLI interfaces, you should specify a single string per record. This string is automatically divided into 255-character segments if necessary.
-The multiple strings in a DNS record shouldn't be confused with the multiple TXT records in a TXT record set. A TXT record set can contain multiple records, *each of which* can contain multiple strings. Azure DNS supports a total string length of up to 1024 characters in each TXT record set (across all records combined).
+The multiple strings in a DNS record shouldn't be confused with the multiple TXT records in a TXT record set. A TXT record set can contain multiple records, *each of which* can contain multiple strings. Azure DNS supports a total string length of up to 4096 characters`*` in each TXT record set (across all records combined).
+
+`*` 4096-character support is currently only available in the Azure public cloud. National clouds are limited to 1024 characters until the 4096-character support rollout is complete.
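+
+As an illustrative sketch (the zone, record set, and string value are placeholders), you can add a TXT string with the Azure CLI; strings longer than 255 characters are divided into segments automatically, as described above:
+
+```azurecli
+# Add one TXT string to the 'www' record set in the contoso.com zone.
+az network dns record-set txt add-record \
+  --resource-group <resource-group> \
+  --zone-name contoso.com \
+  --record-set-name www \
+  --value "v=spf1 include:spf.protection.outlook.com -all"
+```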
## Tags and metadata
Azure DNS supports using Azure Resource Manager tags on DNS zone resources. It
### Metadata
-As an alternative to record set tags, Azure DNS supports annotating record sets using *metadata*. Similar to tags, metadata enables you to associate name-value pairs with each record set. This feature can be useful, for example to record the purpose of each record set. Unlike tags, metadata cannot be used to provide a filtered view of your Azure bill and cannot be specified in an Azure Resource Manager policy.
+As an alternative to record set tags, Azure DNS supports annotating record sets using *metadata*. Similar to tags, metadata enables you to associate name-value pairs with each record set. This feature can be useful, for example to record the purpose of each record set. Unlike tags, metadata can't be used to provide a filtered view of your Azure bill and can't be specified in an Azure Resource Manager policy.
## Etags
-Suppose two people or two processes try to modify a DNS record at the same time. Which one wins? And does the winner know that they've overwritten changes created by someone else?
+Suppose two people or two processes try to modify a DNS record at the same time. Which one wins? And does the winner know that they have overwritten changes created by someone else?
Azure DNS uses Etags to handle concurrent changes to the same resource safely. Etags are separate from [Azure Resource Manager 'Tags'](#tags). Each DNS resource (zone or record set) has an Etag associated with it. Whenever a resource is retrieved, its Etag is also retrieved. When updating a resource, you can choose to pass back the Etag so Azure DNS can verify the Etag on the server matches. Since each update to a resource results in the Etag being regenerated, an Etag mismatch indicates a concurrent change has occurred. Etags can also be used when creating a new resource to ensure the resource doesn't already exist.
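
To make the mechanism concrete, here's a hedged sketch of an update that passes the Etag back as a precondition through the Azure REST API; the subscription ID, resource group, api-version, and record values are placeholders. If the Etag no longer matches the one on the server, the service rejects the update, so the caller knows a concurrent change occurred:

```http
PUT https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Network/dnsZones/contoso.com/A/www?api-version=<api-version>
If-Match: <etag-from-the-last-GET>
Content-Type: application/json

{
  "properties": {
    "TTL": 3600,
    "ARecords": [ { "ipv4Address": "203.0.113.10" } ]
  }
}
```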
energy-data-services Concepts Manifest Ingestion https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/concepts-manifest-ingestion.md
Once the validations are successful, the system processes the content into stora
OSDU&trade; is a trademark of The Open Group.

## Next steps
-Advance to the manifest ingestion tutorial and learn how to perform a manifest-based file ingestion
-> [!div class="nextstepaction"]
-> [Tutorial: Sample steps to perform a manifest-based file ingestion](tutorial-manifest-ingestion.md)
+- [Tutorial: Sample steps to perform a manifest-based file ingestion](tutorial-manifest-ingestion.md)
+- [OSDU Operator Data Loading Quick Start Guide](https://community.opengroup.org/groups/osdu/platform/data-flow/data-loading/-/wikis/home#osdu-operator-data-loading-quick-start-guide)
energy-data-services How To Manage Data Security And Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/how-to-manage-data-security-and-encryption.md
In addition to TLS, when you interact with Azure Data Manager for Energy, all tr
### Prerequisites
-**Step 1- Configure the key vault**
+**Step 1: Configure the key vault**
1. You can use a new or existing key vault to store customer-managed keys. To learn more about Azure Key Vault, see [Azure Key Vault Overview](../key-vault/general/overview.md) and [What is Azure Key Vault](../key-vault/general/basic-concepts.md)?

2. Using customer-managed keys with Azure Data Manager for Energy requires that both soft delete and purge protection be enabled for the key vault. Soft delete is enabled by default when you create a new key vault and cannot be disabled. You can enable purge protection either when you create the key vault or after it is created.
In addition to TLS, when you interact with Azure Data Manager for Energy, all tr
2. Under **Settings**, choose **Properties**.

3. In the **purge protection** section, choose **Enable purge protection**.
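
If you prefer the command line, a minimal Azure CLI sketch of the same change follows; the vault name is a placeholder:

```azurecli
# Enable purge protection on an existing key vault. Once enabled, it can't be disabled.
az keyvault update --name <vault-name> --enable-purge-protection true
```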
-**Step 2 - Add a key**
+**Step 2: Add a key**
1. Next, add a key to the key vault.

2. To learn how to add a key with the Azure portal, see [Quickstart: Set and retrieve a key from Azure Key Vault using the Azure portal](../key-vault/keys/quick-create-portal.md).

3. We recommend an RSA key size of 3072. For more information, see [Configure customer-managed keys for your Azure Cosmos DB account | Microsoft Learn](../cosmos-db/how-to-setup-customer-managed-keys.md#generate-a-key-in-azure-key-vault).
-**Step 3 - Choose a managed identity to authorize access to the key vault**
+**Step 3: Choose a managed identity to authorize access to the key vault**
1. When you enable customer-managed keys for an existing Azure Data Manager for Energy instance, you must specify a managed identity that will be used to authorize access to the key vault that contains the key. The managed identity must have permissions to access the key in the key vault.

2. You can create a [user-assigned managed identity](../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md#create-a-user-assigned-managed-identity).
energy-data-services Quickstart Create Microsoft Energy Data Services Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/quickstart-create-microsoft-energy-data-services-instance.md
Client Secret | Sometimes called an application password, a client secret is a s
1. In the search page, select *Create* on the card titled "Azure Data Manager for Energy".
-1. A new window appears. Complete the *Basics* tab by choosing the *subscription*, *resource group*, *tier*, and the *region* in which you want to create your instance of Azure Data Manager for Energy. Enter the *App ID* that you created during the prerequisite steps. The default tier is currently the *Standard* tier. You can use the drop down to change your tier selection.
+1. A new window appears. Complete the *Basics* tab by choosing the *subscription*, *resource group*, *tier*, and the *region* in which you want to create your instance of Azure Data Manager for Energy. Enter the *App ID* that you created during the prerequisite steps. The default tier is currently the *Standard* tier. You can use the drop-down to change your tier selection. To learn more about tiers, see [Azure Data Manager for Energy tier details](https://learn.microsoft.com/azure/energy-data-services/concepts-tier-details).
+ [![Screenshot of the basic details page after you select Create for Azure Data Manager for Energy. This page allows you to enter both instance and data partition details.](media/quickstart-create-microsoft-energy-data-services-instance/input-basic-details-sku.png)](media/quickstart-create-microsoft-energy-data-services-instance/input-basic-details-sku.png#lightbox)
-
+ Some naming conventions to guide you at this step:
Client Secret | Sometimes called an application password, a client secret is a s
> [!NOTE]
> Azure Data Manager for Energy instance and data partition names, once created, cannot be changed later.

+
1. Move to the next tab, *Networking*, and configure as needed. Learn more about [setting up a Private Endpoint in Azure Data Manager for Energy](../energy-data-services/how-to-set-up-private-links.md).

   [![Screenshot of the networking tab on the create workflow. This tab shows that customers can disable private access to their Azure Data Manager for Energy.](media/quickstart-create-microsoft-energy-data-services-instance/networking-tab.png)](media/quickstart-create-microsoft-energy-data-services-instance/networking-tab.png#lightbox)
energy-data-services Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/release-notes.md
This page is updated with the details about the upcoming release approximately a
<hr width = 100%>
+## August 2023
+
+### General Availability Fixed Pricing for Azure Data Manager for Energy
+The General Availability pricing changes for Azure Data Manager for Energy take effect starting September 2023. You can visit the [Product Pricing Page](https://azure.microsoft.com/pricing/details/energy-data-services/) to learn more.
+
+
## June 2023

### Service Level Agreement (SLA) for Azure Data Manager for Energy
energy-data-services Resources Partner Solutions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/resources-partner-solutions.md
This article highlights Microsoft partners with software solutions officially su
| Interica | Interica OneView&trade; harnesses the power of application connectors to extract rich metadata from live projects discovered across the organization. IOV scans automatically discover content and extract detailed metadata at the subelement level. Quickly and easily discover data across multiple file systems and data silos and determine which projects contain selected data objects to inform business decisions. Live data discovery enables businesses to see a holistic view of subsurface project landscapes for improved time to decisions, more efficient data search, and effective storage management. | [Accelerate Azure Data Manager for Energy adoption with Interica OneView&trade;](https://www.petrosys.com.au/interica-oneview-connecting-to-microsoft-data-services/) [Interica OneView&trade;](https://www.petrosys.com.au/assets/Interica_OneView_Accelerate_MEDS_Azure_adoption.pdf) [Interica OneView&trade; connecting to Microsoft Data Services](https://youtu.be/uPEOo3H01w4)|
| Katalyst | Katalyst Data Management&reg; provides the only integrated, end-to-end subsurface data management solution for the oil and gas industry. Over 160 employees operate in North America, Europe, and Asia-Pacific, dedicated to enabling digital transformation and optimizing the value of geotechnical information for exploration, production, and M&A activity. |[Katalyst Data Management solution](https://www.katalystdm.com/seismic-news/katalyst-announces-sub-surface-data-management-solution-powered-by-microsoft-energy-data-services/) |
| RoQC | RoQC Data Management AS is a Software, Advisory, and Consultancy company specializing in Subsurface Data Management. RoQC's LogQA provides powerful native, machine learning-based QA and cleanup tools for log data once the data has been migrated to Microsoft Azure Data Manager for Energy, an enterprise-grade OSDU&trade; Data Platform on the Microsoft Cloud.| [RoQC and Microsoft simplify cloud migration with Microsoft Energy Data Services](https://azure.microsoft.com/blog/roqc-and-microsoft-simplify-cloud-migration-with-microsoft-energy-data-services/)|
+| SLB | SLB is the largest provider of digital solutions and technologies to the global energy industry. With deep expertise in the business needs of exploration and production (E&P) companies, SLB works in close partnership with its customers to enable performance, create industry-changing technologies, and improve sustainability throughout the global energy transition. Ensuring progress for people and the planet on the journey to net zero and beyond drives us. | [Schlumberger Launches Enterprise Data Solution](https://www.slb.com/news-and-insights/newsroom/press-release/2022/pr-2022-09-21-slb-enterprise-data-solution)|
| Wipro | Wipro offers services and accelerators that use the WINS (Wipro INgestion Service) framework, which speeds up the time-to-market and allows for seamless execution of domain workflows with data stored in Microsoft Azure Data Manager for Energy with minimal effort. | [Wipro and Microsoft partner on services and accelerators for the new Microsoft Energy Data Services](https://azure.microsoft.com/blog/wipro-and-microsoft-partner-on-services-and-accelerators-for-the-new-microsoft-energy-data-services/)| ## Next steps
energy-data-services Tutorial Manifest Ingestion https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/tutorial-manifest-ingestion.md
Before beginning this tutorial, the following prerequisites must be completed:
- **Search for Reference data** - Call Search service to retrieve the Reference metadata records

## Next steps
-Advance to the next tutorial to learn about sdutil
-> [!div class="nextstepaction"]
-> [Tutorial: Seismic store sdutil](tutorial-seismic-ddms-sdutil.md)
-
+- [Tutorial: Seismic store sdutil](tutorial-seismic-ddms-sdutil.md)
+- [OSDU Operator Data Loading Quick Start Guide](https://community.opengroup.org/groups/osdu/platform/data-flow/data-loading/-/wikis/home#osdu-operator-data-loading-quick-start-guide)
event-grid Authenticate With Active Directory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/authenticate-with-active-directory.md
Title: Authenticate Event Grid publishing clients using Azure Active Directory
description: This article describes how to authenticate Azure Event Grid publishing client using Azure Active Directory. Previously updated : 01/05/2022 Last updated : 08/17/2023 # Authentication and authorization with Azure Active Directory
With RBAC privileges taken care of, you can now [build your client application t
Use [Event Grid's data plane SDK](https://devblogs.microsoft.com/azure-sdk/event-grid-ga/) to publish events to Event Grid. Event Grid's SDKs support all authentication methods, including Azure AD authentication.
+Here's the sample code that publishes events to Event Grid using the .NET SDK. You can get the topic endpoint on the **Overview** page for your Event Grid topic in the Azure portal. It's in the format: `https://<TOPIC-NAME>.<REGION>-1.eventgrid.azure.net/api/events`.
+
+```csharp
+using Azure.Identity;
+using Azure.Messaging.EventGrid;
+
+// Authenticate with the app's managed identity and create the publisher client.
+ManagedIdentityCredential managedIdentityCredential = new ManagedIdentityCredential();
+EventGridPublisherClient client = new EventGridPublisherClient(new Uri("<TOPIC ENDPOINT>"), managedIdentityCredential);
+
+EventGridEvent egEvent = new EventGridEvent(
+ "ExampleEventSubject",
+ "Example.EventType",
+ "1.0",
+ "This is the event data");
+
+// Send the event
+await client.SendEventAsync(egEvent);
+```
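+
+When you run the sample locally, where a managed identity isn't available, you can swap in a different token credential. Here's a minimal variation, assuming the `Azure.Identity` package; it isn't part of the original sample.
+
+```csharp
+using Azure.Identity;
+using Azure.Messaging.EventGrid;
+
+// DefaultAzureCredential chains several sources (environment variables,
+// managed identity, Azure CLI, Visual Studio), so the same code works
+// both locally and when deployed to Azure.
+EventGridPublisherClient client = new EventGridPublisherClient(
+    new Uri("<TOPIC ENDPOINT>"),
+    new DefaultAzureCredential());
+```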
+ ### Prerequisites Following are the prerequisites to authenticate to Event Grid.
event-grid Choose Right Tier https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/choose-right-tier.md
+
+ Title: Choose the right Event Grid tier for your solution
+description: Describes how to choose the right tier based on the resource features and use cases.
+ Last updated : 08/25/2023++
+# Choose the right Event Grid tier for your solution
+
+Azure Event Grid has two tiers with different capabilities. This article describes both tiers to help you choose the right one for your solution.
+
+## Event Grid standard tier
+
+Event Grid standard tier enables pub-sub using MQTT broker functionality and pull delivery of messages through the Event Grid namespace.
+
+Use this tier:
+
+- If you want to publish and consume MQTT messages.
+- If you want to build applications with flexible consumption patterns, for example, pull delivery.
+- If you want to go beyond 5 MB/s in ingress and egress throughput, up to 20 MB/s (ingress) and 40 MB/s (egress).
+
+For more information, see quotas and limits for [namespaces](quotas-limits.md#namespace-resource-limits).
+
+## Event Grid basic tier
+
+Event Grid basic tier enables push delivery using Event Grid custom topics, Event Grid system topics, Event domains and Event Grid partner topics.
+
+Use this tier:
+
+- If you want to build a solution to trigger actions based on custom application events, Azure system events, or partner events.
+- If you want to publish events to thousands of topics at the same time.
+- If you want to go up to 5 MB/s in ingress and egress throughput.
+
+For more information, see quotas and limits for [custom topics, system topics and partner topics](quotas-limits.md#custom-topic-system-topic-and-partner-topic-resource-limits) and [domains](quotas-limits.md#domain-resource-limits).
+
+## Basic and standard tiers
+
+The standard tier of Event Grid is focused on providing support for higher ingress and egress rates, support for IoT solutions that require bidirectional communication capabilities, and support for pull delivery by multiple consumers. The basic tier is focused on providing push delivery support to trigger actions based on events. For a detailed breakdown of which quotas and limits are included in each Event Grid resource, see [quotas and limits](quotas-limits.md).
+
+| Feature | Standard | Basic |
+||-|-|
+| Throughput | High, up to 20 MB/s (ingress) and 40 MB/s (egress) | Low, up to 5 MB/s (ingress and egress) |
+| MQTT v5 and v3.1.1 | Yes | |
+| Pull delivery | Yes | |
+| Publish and subscribe to custom events | Yes | |
+| Push delivery to Event Hubs | | Yes |
+| Push delivery to Azure services (Functions, Webhooks, Service Bus queues and topics, relay hybrid connections, and storage queues) | | Yes |
+| Subscribe to Azure system events | | Yes |
+| Subscribe to partner events | | Yes |
+| Domain scope subscriptions | | Yes |
+
+## Next steps
+
+- [Azure Event Grid overview](overview.md)
+- [Azure Event Grid pricing](https://azure.microsoft.com/pricing/details/event-grid/)
+- [Azure Event Grid quotas and limits](../azure-resource-manager/management/azure-subscription-service-limits.md)
+- [MQTT overview](mqtt-overview.md)
+- [Pull delivery overview](pull-delivery-overview.md)
+- [Push delivery overview](push-delivery-overview.md)
event-grid Consume Private Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/consume-private-endpoints.md
Title: Deliver events using private link service description: This article describes how to work around the limitation of not able to deliver events using private link service. Previously updated : 03/01/2023 Last updated : 08/16/2023 # Deliver events using private link service
To deliver events to Storage queues using managed identity, follow these steps:
1. [Add the identity to the **Storage Queue Data Message Sender**](../storage/blobs/assign-azure-role-data-access.md) role on Azure Storage queue. 1. [Configure the event subscription](managed-service-identity.md#create-event-subscriptions-that-use-an-identity) that uses a Storage queue as an endpoint to use the system-assigned or user-assigned managed identity.
-> [!NOTE]
-> - If there's no firewall or virtual network rules configured for the Azure Storage account, you can use both user-assigned and system-assigned identities to deliver events to the Azure Storage account.
-> - If a firewall or virtual network rule is configured for the Azure Storage account, you can use only the system-assigned managed identity if **Allow Azure services on the trusted service list to access the storage account** is also enabled on the storage account. You can't use user-assigned managed identity whether this option is enabled or not.
+## Firewall and virtual network rules
+If there's no firewall or virtual network rules configured for the destination Storage account, Event Hubs namespace, or Service Bus namespace, you can use both user-assigned and system-assigned identities to deliver events.
+
+If a firewall or virtual network rule is configured for the destination Storage account, Event Hubs namespace, or Service Bus namespace, you can use only the system-assigned managed identity, and only if **Allow Azure services on the trusted service list to access the storage account** is also enabled on the destination. You can't use a user-assigned managed identity, whether or not this option is enabled.
## Next steps For more information about delivering events using a managed identity, see [Event delivery using a managed identity](managed-service-identity.md).
event-grid Create View Manage System Topics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/create-view-manage-system-topics.md
This article shows you how to create and manage system topics using the Azure po
## Create a system topic You can create a system topic for an Azure resource (Storage account, Event Hubs namespace, etc.) in two ways: -- Using the **Events** page of a resource, for example, Storage Account or Event Hubs Namespace. When you use the **Events** page in the Azure portal to create an event subscription for an event raised by an Azure source (for example: Azure Storage account), the portal creates a system topic for the Azure resource and then creates a subscription for the system topic. You specify the name of the system topic if you're creating an event subscription on the Azure resource for the first time. From the second time onwards, the system topic name is displayed for you in the read-only mode. See [Quickstart: Route Blob storage events to web endpoint with the Azure portal](blob-event-quickstart-portal.md#subscribe-to-the-blob-storage) for detailed steps.-- Using the **Event Grid System Topics** page. The following steps are for creating a system topic using the **Event Grid System Topics** page.
+- Using the **Events** page of a resource, for example, Storage Account or Event Hubs Namespace. Event Grid automatically creates a system topic for you in this case.
+
+ When you use the **Events** page in the Azure portal to create an event subscription for an event raised by an Azure source (for example: Azure Storage account), the portal creates a system topic for the Azure resource and then creates a subscription for the system topic. You specify the name of the system topic if you're creating an event subscription on the Azure resource for the first time. From the second time onwards, the system topic name is displayed for you in the read-only mode. See [Quickstart: Route Blob storage events to web endpoint with the Azure portal](blob-event-quickstart-portal.md#subscribe-to-the-blob-storage) for detailed steps.
+- Using the **Event Grid System Topics** page. You create a system topic manually in this case by using the following steps.
1. Sign in to [Azure portal](https://portal.azure.com). 2. In the search box at the top, type **Event Grid System Topics**, and then press **ENTER**.
- ![Search for system topics](./media/create-view-manage-system-topics/search-system-topics.png)
-3. On the **Event Grid System Topics** page, select **+ Add** on the toolbar.
+ :::image type="content" source="./media/create-view-manage-system-topics/search-system-topics.png" alt-text="Screenshot that shows the Azure portal with Event Grid System Topics in the search box.":::
+3. On the **Event Grid System Topics** page, select **+ Create** on the toolbar.
- ![Add system topic - toolbar button](./media/create-view-manage-system-topics/add-system-topic-menu.png)
+ :::image type="content" source="./media/create-view-manage-system-topics/add-system-topic-menu.png" alt-text="Screenshot that shows in the Event Grid System Topics page with the Create button selected.":::
4. On the **Create Event Grid System Topic** page, do the following steps: 1. Select the **topic type**. In the following example, **Storage Accounts** option is selected. 2. Select the **Azure subscription** that has your storage account resource.
event-grid Delivery Properties https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/delivery-properties.md
To set headers with a fixed value, provide the name of the header and its value
:::image type="content" source="./media/delivery-properties/static-header-property.png" alt-text="Delivery properties - static":::
-You may want check **Is secret?** when providing sensitive data. Sensitive data won't be displayed on the Azure portal.
+You might want to check **Is secret?** when you're providing sensitive data. The visibility of sensitive data on the Azure portal depends on the user's RBAC permission.
## Setting dynamic header values You can set the value of a header based on a property in an incoming event. Use JsonPath syntax to refer to an incoming event's property value to be used as the value for a header in outgoing requests. For example, to set the value of a header named **Channel** using the value of the incoming event property **system** in the event data, configure your event subscription in the following way:
event-grid Event Schema Event Grid Namespace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/event-schema-event-grid-namespace.md
The data object contains the following properties:
| `namespaceName` | string | Name of the Event Grid namespace where the MQTT client was connected or disconnected. | | `clientAuthenticationName` | string | Unique identifier for the MQTT client that the client presents to the service for authentication. This case-sensitive string can be up to 128 characters long, and supports UTF-8 characters.| | `clientSessionName` | string | Unique identifier for the MQTT client's session. This case-sensitive string can be up to 128 characters long, and supports UTF-8 characters.|
-| `sequenceNumber` | string | A number that helps indicate order of MQTT client session connected or disconnected events. Latest event will have a sequence number that is higher than the previous event. |
+| `sequenceNumber` | long | A number that indicates the order of MQTT client session connected or disconnected events. The latest event has a higher sequence number than the previous event. |
| `disconnectionReason` | string | Reason for the disconnection of the MQTT client's session. The value could be one of the values in the disconnection reasons table. | | `createdOn` | string | The time the client resource is created based on the provider's UTC time. | | `updatedOn` | string | The time the client resource is last updated based on the provider's UTC time. If the client resource was never updated, this value is identical to the value of the 'createdOn' property |
event-grid Handler Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/handler-functions.md
Title: Use a function in Azure as an event handler for Azure Event Grid events description: Describes how you can use functions created in and hosted by Azure Functions as event handlers for Event Grid events. Previously updated : 05/23/2022 Last updated : 08/31/2023 # Use a function as an event handler for Event Grid events
We recommend that you use the first approach (Event Grid trigger) as it has the
- Event Grid automatically adjusts the rate at which events are delivered to a function triggered by an Event Grid event based on the perceived rate at which the function can process events. This rate match feature averts delivery errors that stem from the inability of a function to process events as the functionΓÇÖs event processing rate can vary over time. To improve efficiency at high throughput, enable batching on the event subscription. For more information, see [Enable batching](#enable-batching). > [!NOTE]
-> - When you add an event subscription using an Azure function, Event Grid fetches the access key for the target function using Event Grid service principal's credentials. The permissions are granted to Event Grid when you register the Event Grid resource provider in their Azure subscription.
+> - When you use an Event Grid trigger to add an event subscription for an Azure function, Event Grid fetches the access key for the target function using the Event Grid service principal's credentials. The permissions are granted to Event Grid when you register the Event Grid resource provider in your Azure subscription.
> - If you protect your Azure function with an **Azure Active Directory** application, you'll have to take the generic webhook approach using the HTTP trigger. Use the Azure function endpoint as a webhook URL when adding the subscription. -- ## Tutorials |Title |Description |
event-grid Mqtt Client Authorization Use Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/mqtt-client-authorization-use-rbac.md
- Title: RBAC authorization for clients with Azure AD identity
-description: Describes RBAC roles to authorize clients with Azure AD identity to publish or subscribe MQTT messages
- Previously updated : 8/11/2023----
-# Authorizing access to publish or subscribe to MQTT messages in Event Grid namespace
-You can use Azure role-based access control (Azure RBAC) to enable MQTT clients, with Azure Active Directory identity, to publish or subscribe access to specific topic spaces.
-
-## Prerequisites
-- You need an Event Grid namespace with MQTT enabled. [Learn about creating Event Grid namespace](/azure/event-grid/create-view-manage-namespaces#create-a-namespace)-- Review the process to [create a custom role](/azure/role-based-access-control/custom-roles-portal)-
-## Operation types
-You can use following two data actions to provide publish or subscribe permissions to clients with Azure AD identities on specific topic spaces.
-
-**Topic spaces publish** data action
-Microsoft.EventGrid/topicSpaces/publish/action
-
-**Topic spaces subscribe** data action
-Microsoft.EventGrid/topicSpaces/subscribe/action
-
-> [!NOTE]
-> Currently, we recommend using custom roles with the actions provided.
-
-## Custom roles
-
-You can create custom roles using the publish and subscribe actions.
-
-The following are sample role definitions that allow you to publish and subscribe to MQTT messages. These custom roles give permissions at topic space scope. You can also create roles to provide permissions at subscription, resource group scope.
-
-**EventGridMQTTPublisherRole.json**: MQTT messages publish operation.
-
-```json
-{
- "roleName": "Event Grid namespace MQTT publisher",
- "description": "Event Grid namespace MQTT message publisher role",
- "assignableScopes": [
- "/subscriptions/<subscription ID>/resourceGroups/<resource group name>/Microsoft.EventGrid/namespaces/<namespace name>/topicSpaces/<topicspace name>"
- ],
- "permissions": [
- {
- "actions": [],
- "notActions": [],
- "dataActions": [
- "Microsoft.EventGrid/topicSpaces/publish/action"
- ],
- "notDataActions": []
- }
- ]
-}
-```
-
-**EventGridMQTTSubscriberRole.json**: MQTT messages subscribe operation.
-
-```json
-{
- "roleName": "Event Grid namespace MQTT subscriber",
- "description": "Event Grid namespace MQTT message subscriber role",
- "assignableScopes": [
- "/subscriptions/<subscription ID>/resourceGroups/<resource group name>/Microsoft.EventGrid/namespaces/<namespace name>/topicSpaces/<topicspace name>"
- ]
- "permissions": [
- {
- "actions": [],
- "notActions": [],
- "dataActions": [
- "Microsoft.EventGrid/topicSpaces/subscribe/action"
- ],
- "notDataActions": []
- }
- ]
-}
-```
-
-## Create custom roles in Event Grid namespace
-1. Navigate to topic spaces page in Event Grid namespace
-1. Select the topic space for which the custom RBAC role needs to be created
-1. Navigate to the Access control (IAM) page within the topic space
-1. In the Roles tab, right select any of the roles to clone a new custom role. Provide the custom role name.
-1. Switch the Baseline permissions to **Start from scratch**
-1. On the Permissions tab, select **Add permissions**
-1. In the selection page, find and select Microsoft Event Grid
- :::image type="content" source="./media/mqtt-rbac-authorization-aad-clients/event-grid-custom-role-permissions.png" lightbox="./media/mqtt-rbac-authorization-aad-clients/event-grid-custom-role-permissions.png" alt-text="Screenshot showing the Microsoft Event Grid option to find the permissions.":::
-1. Navigate to Data Actions
-1. Select **Topic spaces publish** data action and select **Add**
- :::image type="content" source="./media/mqtt-rbac-authorization-aad-clients/event-grid-custom-role-permissions-data-actions.png" lightbox="./media/mqtt-rbac-authorization-aad-clients/event-grid-custom-role-permissions-data-actions.png" alt-text="Screenshot showing the data action selection.":::
-1. Select Next to see the topic space in the Assignable scopes tab. You can add other assignable scopes if needed.
-1. Select **Create** in Review + create tab to create the custom role.
-1. Once the custom role is created, you can assign the role to an identity to provide the publish permission on the topic space. You can learn how to assign roles [here](/azure/role-based-access-control/role-assignments-portal).
-
-> [!NOTE]
-> You can follow similar steps to create and assign a custom Event Grid MQTT subscriber permission to a topic space.
-
-## Next steps
-See [Publish and subscribe to MQTT message using Event Grid](mqtt-publish-and-subscribe-portal.md)
event-grid Mqtt Client Azure Ad Token And Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/mqtt-client-azure-ad-token-and-rbac.md
+
+ Title: JWT authentication and RBAC authorization for clients with Azure AD identity
+description: Describes JWT authentication and RBAC roles to authorize clients with Azure AD identity to publish or subscribe MQTT messages
+ Last updated : 8/11/2023++++
+# Authenticating and authorizing access to publish or subscribe to MQTT messages
+You can authenticate MQTT clients with Azure AD JWT to connect to an Event Grid namespace. You can use Azure role-based access control (Azure RBAC) to grant MQTT clients with Azure Active Directory identities publish or subscribe access to specific topic spaces.
+
+> [!IMPORTANT]
+> This feature is supported only when using MQTT v5.
+
+## Prerequisites
+- You need an Event Grid namespace with MQTT enabled. Learn about [creating an Event Grid namespace](/azure/event-grid/create-view-manage-namespaces#create-a-namespace)
+- Review the process to [create a custom role](/azure/role-based-access-control/custom-roles-portal)
++
+## Authentication using Azure AD JWT
+You can use the MQTT v5 CONNECT packet to provide the Azure AD JWT token to authenticate your client, and you can use the MQTT v5 AUTH packet to refresh the token.
+
+In the CONNECT packet, you can provide the required values in the following fields:
+
+|Field | Value |
+|||
+|Authentication Method | OAUTH2-JWT |
+|Authentication Data | JWT token |
+
+In the AUTH packet, you can provide the required values in the following fields:
+
+|Field | Value |
+|||
+| Authentication Method | OAUTH2-JWT |
+| Authentication Data | JWT token |
+| Authentication Reason Code | 25 |
+
+An Authentication Reason Code value of 25 signifies reauthentication.
+
+> [!NOTE]
+> Audience: the "aud" claim must be set to "https://eventgrid.azure.net/".
+
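+As an illustration, here's a minimal C# sketch of such a CONNECT using the open-source MQTTnet library (v4) together with `Azure.Identity` to acquire the JWT. The hostname placeholder, client ID, and the `WithAuthentication` builder call are assumptions based on the library's surface and your namespace configuration; verify them against the MQTTnet version you use.
+
+```csharp
+using System.Text;
+using Azure.Core;
+using Azure.Identity;
+using MQTTnet;
+using MQTTnet.Client;
+using MQTTnet.Formatter;
+
+// Acquire an Azure AD token whose audience matches the required "aud" claim.
+var credential = new DefaultAzureCredential();
+AccessToken token = await credential.GetTokenAsync(
+    new TokenRequestContext(new[] { "https://eventgrid.azure.net/.default" }));
+
+var mqttClient = new MqttFactory().CreateMqttClient();
+
+// MQTT v5 CONNECT with Authentication Method = OAUTH2-JWT and the JWT as
+// Authentication Data, per the table above.
+var options = new MqttClientOptionsBuilder()
+    .WithTcpServer("<NAMESPACE-MQTT-HOSTNAME>", 8883) // MQTT hostname from the namespace Overview page
+    .WithProtocolVersion(MqttProtocolVersion.V500)
+    .WithClientId("client1-sessionID") // placeholder session name
+    .WithTls()
+    .WithAuthentication("OAUTH2-JWT", Encoding.UTF8.GetBytes(token.Token))
+    .Build();
+
+await mqttClient.ConnectAsync(options, CancellationToken.None);
+```
+
+To refresh the token before it expires, the client sends an MQTT v5 AUTH packet carrying the same `OAUTH2-JWT` method, the new token, and reason code 25, as described in the table above.
+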
+## Authorization to grant access permissions
+A client using Azure AD based JWT authentication needs to be authorized to communicate with the Event Grid namespace. You can create custom roles to enable the client to communicate with Event Grid instances in your resource group, and then assign the roles to the client. You can use the following two data actions to grant clients with Azure AD identities publish or subscribe permissions on specific topic spaces.
+
+**Topic spaces publish** data action: `Microsoft.EventGrid/topicSpaces/publish/action`
+
+**Topic spaces subscribe** data action: `Microsoft.EventGrid/topicSpaces/subscribe/action`
+
+> [!NOTE]
+> Currently, we recommend using custom roles with the actions provided.
+
+### Custom roles
+
+You can create custom roles using the publish and subscribe actions.
+
+The following are sample role definitions that allow you to publish and subscribe to MQTT messages. These custom roles give permissions at the topic space scope. You can also create roles to provide permissions at the subscription or resource group scope.
+
+**EventGridMQTTPublisherRole.json**: MQTT messages publish operation.
+
+```json
+{
+ "roleName": "Event Grid namespace MQTT publisher",
+ "description": "Event Grid namespace MQTT message publisher role",
+ "assignableScopes": [
+ "/subscriptions/<subscription ID>/resourceGroups/<resource group name>/Microsoft.EventGrid/namespaces/<namespace name>/topicSpaces/<topicspace name>"
+ ],
+ "permissions": [
+ {
+ "actions": [],
+ "notActions": [],
+ "dataActions": [
+ "Microsoft.EventGrid/topicSpaces/publish/action"
+ ],
+ "notDataActions": []
+ }
+ ]
+}
+```
+
+**EventGridMQTTSubscriberRole.json**: MQTT messages subscribe operation.
+
+```json
+{
+ "roleName": "Event Grid namespace MQTT subscriber",
+ "description": "Event Grid namespace MQTT message subscriber role",
+ "assignableScopes": [
+ "/subscriptions/<subscription ID>/resourceGroups/<resource group name>/Microsoft.EventGrid/namespaces/<namespace name>/topicSpaces/<topicspace name>"
+ ],
+ "permissions": [
+ {
+ "actions": [],
+ "notActions": [],
+ "dataActions": [
+ "Microsoft.EventGrid/topicSpaces/subscribe/action"
+ ],
+ "notDataActions": []
+ }
+ ]
+}
+```
+
+## Create custom roles
+1. Navigate to topic spaces page in your Event Grid namespace
+1. Select the topic space for which the custom RBAC role needs to be created
+1. Navigate to the Access control (IAM) page within the topic space
+1. In the Roles tab, right-click any of the roles to clone it into a new custom role. Provide the custom role name.
+1. Switch the Baseline permissions to **Start from scratch**
+1. On the Permissions tab, select **Add permissions**
+1. In the selection page, find and select Microsoft Event Grid
+ :::image type="content" source="./media/mqtt-client-azure-ad-token-and-rbac/event-grid-custom-role-permissions.png" lightbox="./media/mqtt-client-azure-ad-token-and-rbac/event-grid-custom-role-permissions.png" alt-text="Screenshot showing the Microsoft Event Grid option to find the permissions.":::
+1. Navigate to Data Actions
+1. Select **Topic spaces publish** data action and select **Add**
+ :::image type="content" source="./media/mqtt-client-azure-ad-token-and-rbac/event-grid-custom-role-permissions-data-actions.png" lightbox="./media/mqtt-client-azure-ad-token-and-rbac/event-grid-custom-role-permissions-data-actions.png" alt-text="Screenshot showing the data action selection.":::
+1. Select Next to see the topic space in the Assignable scopes tab. You can add other assignable scopes if needed.
+1. Select **Create** in Review + create tab to create the custom role.
+1. Once the custom role is created, you can assign the role to an identity to provide the publish permission on the topic space. You can learn how to assign roles [here](/azure/role-based-access-control/role-assignments-portal).
+
+## Assign the custom role to your Azure AD identity
+1. In the Azure portal, navigate to your Event Grid namespace
+1. Navigate to the topic space to which you want to authorize access.
+1. Go to the Access control (IAM) page of the topic space
+1. Select the **Role assignments** tab to view the role assignments at this scope.
+1. Select **+ Add**, and then select **Add role assignment**.
+1. On the Role tab, select the role that you created in the previous step.
+1. On the Members tab, select User, group, or service principal to assign the selected role to one or more service principals (applications).
+ - Users and groups are supported only when the user or group belongs to fewer than 200 groups.
+1. Select **Select members**.
+1. Find and select the users, groups, or service principals.
+1. Select **Review + assign** on the Review + assign tab.
+
+> [!NOTE]
+> You can follow similar steps to create and assign a custom Event Grid MQTT subscriber permission to a topic space.
+
+## Next steps
+- See [Publish and subscribe to MQTT message using Event Grid](mqtt-publish-and-subscribe-portal.md)
+- To learn more about how Managed Identities work, you can refer to [How managed identities for Azure resources work with Azure virtual machines - Microsoft Entra](/azure/active-directory/managed-identities-azure-resources/how-managed-identities-work-vm)
+- To learn more about how to obtain tokens from Azure AD, you can refer to [obtaining Azure AD tokens](/azure/active-directory/develop/v2-oauth2-client-creds-grant-flow#get-a-token)
+- To learn more about Azure Identity client library, you can refer to [using Azure Identity client library](/azure/active-directory/managed-identities-azure-resources/how-to-use-vm-token#get-a-token-using-the-azure-identity-client-library)
+- To learn more about implementing an interface for credentials that can provide a token, you can refer to [TokenCredential Interface](/java/api/com.azure.core.credential.tokencredential)
+- To learn more about how to authenticate using Azure Identity, you can refer to [examples](https://github.com/Azure/azure-sdk-for-java/wiki/Azure-Identity-Examples)
event-grid Mqtt Client Life Cycle Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/mqtt-client-life-cycle-events.md
# MQTT Clients Life Cycle Events Client Life Cycle events allow applications to react to events about the client connection status or the client resource operations. It allows you to:-- Keep track of your client's connection status. For example, you can build an application that queries the connection status of each client before running a specific operation.-- React with a mitigation action for client disconnections. For example, you can build an application that updates a database, creates a ticket, and delivers an email notification every time a client is disconnected for mitigating action.-- Track the namespace that your clients are attached to during automated failovers.
+- Monitor your clients' connection status. For example, you can build an application that analyzes clients' connections to optimize behavior.
+- React with a mitigation action for client disconnections. For example, you can build an application that initiates an auto-mitigation flow or creates a support ticket every time a client is disconnected.
+- Track the namespace that your clients are attached to. For example, confirm that your clients are connected to the right namespace after you initiate a failover.
[!INCLUDE [mqtt-preview-note](./includes/mqtt-preview-note.md)]
az eventgrid system-topic create --resource-group <Resource Group > --name <Syst
``` ## Behavior:-- There's no latency guarantee for the client lifecycle events.
+- There's no latency guarantee for the client lifecycle events. The client connection status events indicate the last reported state of the client session's connection, not the real-time connection status.
- Duplicate client life cycle events may be published. - The client life cycle events' timestamp indicates when the service detected the events, which may differ from the actual time of the event. - The order of client life cycle events isn't guaranteed, events may arrive out of order. However, the sequence number on the connection status events can be used to determine the original order of the events.
az eventgrid system-topic create --resource-group <Resource Group > --name <Syst
- Example 1: if a client gets created, then updated twice within 3 seconds, EG will emit only one MQTTClientCreatedOrUpdated event with the final values for the metadata of the client. - Example 2: if a client gets created, then deleted within 5 seconds, EG will emit only MQTTClientDeleted event.
+### Order connection status events:
+The sequence number on the MQTTClientSessionConnected and MQTTClientSessionDisconnected events can be used to determine the last reported state of the client session's connection as the sequence number is incremented with every new event. The sequence number for the MQTTClientSessionDisconnected always matches the sequence number of the MQTTClientSessionConnected event for the same connection. For example, the list of events and sequence numbers below is a sample of events in the right order for the same client:
+- MQTTClientSessionConnected > "sequenceNumber": 1
+- MQTTClientSessionDisconnected > "sequenceNumber": 1
+- MQTTClientSessionConnected > "sequenceNumber": 2
+- MQTTClientSessionDisconnected > "sequenceNumber": 2
+
+Here's sample logic to order the events; a C# sketch follows the list. For each client:
+- Store the sequence number and the connection status from the first event.
+- For every new MQTTClientSessionConnected event:
+ - if the new sequence number is greater than the previous one, update the sequence number and the connection status to match the new event.
+- For every new MQTTClientSessionDisconnected event:
+ - if the new sequence number is equal to or greater than the previous one, update the sequence number and the connection status to match the new event.
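+
+Here's a minimal C# sketch of that ordering logic, assuming a hypothetical local projection of the life cycle events into a small record type; it isn't part of the Event Grid SDK.
+
+```csharp
+using System.Collections.Generic;
+
+// Hypothetical local projection of the connection status events above.
+record ClientSessionEvent(string ClientName, long SequenceNumber, bool Connected);
+
+class ConnectionStateTracker
+{
+    // Last reported (sequence number, connected) state per client.
+    private readonly Dictionary<string, (long Seq, bool Connected)> _state = new();
+
+    public void Apply(ClientSessionEvent e)
+    {
+        if (!_state.TryGetValue(e.ClientName, out var current))
+        {
+            _state[e.ClientName] = (e.SequenceNumber, e.Connected);
+            return;
+        }
+
+        // Connected events win only with a strictly greater sequence number;
+        // disconnected events win with an equal or greater one, because a
+        // disconnect shares the sequence number of its matching connect.
+        bool isNewer = e.Connected
+            ? e.SequenceNumber > current.Seq
+            : e.SequenceNumber >= current.Seq;
+
+        if (isNewer)
+        {
+            _state[e.ClientName] = (e.SequenceNumber, e.Connected);
+        }
+    }
+}
+```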
## Next steps - To learn more about system topics, go to [System topics in Azure Event Grid](system-topics.md)
event-grid Mqtt Publish And Subscribe Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/mqtt-publish-and-subscribe-portal.md
step certificate fingerprint client1-authnID.pem
1. On the Review + create tab of the Create namespace page, select **Create**. > [!NOTE]
- > To keep the QuickStart simple, you'll be using only the Basics page to create a namespace. For detailed steps about configuring network, security, and other settings on other pages of the wizard, see Create a Namespace.
+ > To keep the QuickStart simple, you'll be using only the Basics page to create a namespace. For detailed steps about configuring network, security, and other settings on other pages of the wizard, see Create a Namespace.
+ 1. After the deployment succeeds, select **Go to resource** to navigate to the Event Grid Namespace Overview page for your namespace. 1. In the Overview page, you see that the MQTT is in Disabled state. To enable MQTT, select the **Disabled** link, it will redirect you to Configuration page. 1. On Configuration page, select the Enable MQTT option, and Apply the settings.
step certificate fingerprint client1-authnID.pem
:::image type="content" source="./media/mqtt-publish-and-subscribe-portal/mqtt-client1-metadata.png" alt-text="Screenshot of client 1 configuration."::: 6. Select **Create** to create the client.
-7. Repeat the above steps to create another client called ΓÇ£client2ΓÇ¥.
+7. Repeat the above steps to create another client called "client2".
:::image type="content" source="./media/mqtt-publish-and-subscribe-portal/mqtt-client2-metadata.png" alt-text="Screenshot of client 2 configuration.":::
step certificate fingerprint client1-authnID.pem
:::image type="content" source="./media/mqtt-publish-and-subscribe-portal/create-permission-binding-1.png" alt-text="Screenshot showing creation of first permission binding."::: 4. Select **Create** to create the permission binding. 5. Create one more permission binding by selecting **+ Permission binding** on the toolbar.
-6. Provide a name and give $all client group Subscriber access to the Topicspace1 as shown.
+6. Provide a name and give $all client group Subscriber access to the "Topicspace1" as shown.
:::image type="content" source="./media/mqtt-publish-and-subscribe-portal/create-permission-binding-2.png" alt-text="Screenshot showing creation of second permission binding."::: 7. Select **Create** to create the permission binding.
step certificate fingerprint client1-authnID.pem
:::image type="content" source="./media/mqtt-publish-and-subscribe-portal/mqttx-app-client1-configuration-2.png" alt-text="Screenshot showing client 1 configuration part 2 on MQTTX app."::: 1. Select Connect to connect the client to the Event Grid MQTT service.
-1. Repeat the above steps to connect the second client ΓÇ£client2ΓÇ¥, with corresponding authentication information as shown.
+1. Repeat the above steps to connect the second client "client2", with corresponding authentication information as shown.
:::image type="content" source="./media/mqtt-publish-and-subscribe-portal/mqttx-app-client2-configuration-1.png" alt-text="Screenshot showing client 2 configuration part 1 on MQTTX app.":::
event-grid Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/overview.md
You can configure **private links** to connect to Azure Event Grid to **publish
## How much does Event Grid cost?
-Azure Event Grid uses a pay-per-event pricing model. You only pay for what you use. For the push-style delivery that is generally available, the first 100,000 operations per month are free. Examples of operations include event publication, event delivery, delivery attempts, event filter evaluations that refer to event data properties (sometimes referred to as Advanced Filters), and events sent to a dead letter location. For details, see the [pricing page](https://azure.microsoft.com/pricing/details/event-grid/).
-
-Event Grid operations involving Namespaces and its resources, including MQTT and pull HTTP delivery operations, are in public preview and are available at no charge today.
+Azure Event Grid offers two tiers and uses a pay-per-use pricing model. For details on pricing, see [Azure Event Grid pricing](https://azure.microsoft.com/pricing/details/event-grid/). To learn more about the capabilities for each tier, see [Choose the right Event Grid tier](choose-right-tier.md).
## Next steps
Event Grid operations involving Namespaces and its resources, including MQTT and
### Data distribution using pull or push delivery -- [Pull delivery overview](pull-delivery-overview.md).-- [Push delivery overview](push-delivery-overview.md).
+- [Pull delivery overview](pull-delivery-overview.md)
+- [Push delivery overview](push-delivery-overview.md)
- [Concepts](concepts.md)-- Quickstart: [Publish and subscribe to app events using namespace topics](publish-events-using-namespace-topics.md).
+- Quickstart: [Publish and subscribe to app events using namespace topics](publish-events-using-namespace-topics.md)
event-grid Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/policy-reference.md
Title: Built-in policy definitions for Azure Event Grid description: Lists Azure Policy built-in policy definitions for Azure Event Grid. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2023 Last updated : 08/30/2023
event-grid Powershell Webhook Secure Delivery Azure Ad App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/scripts/powershell-webhook-secure-delivery-azure-ad-app.md
Title: Azure PowerShell - Secure WebHook delivery with Azure AD Application in Azure Event Grid description: Describes how to deliver events to HTTPS endpoints protected by Azure AD Application using Azure Event Grid ms.devlang: powershell-+ Last updated 10/14/2021
event-grid Powershell Webhook Secure Delivery Azure Ad User https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/scripts/powershell-webhook-secure-delivery-azure-ad-user.md
Title: Azure PowerShell - Secure WebHook delivery with Azure AD User in Azure Event Grid description: Describes how to deliver events to HTTPS endpoints protected by Azure AD User using Azure Event Grid ms.devlang: powershell-+ Last updated 09/29/2021
event-grid Secure Webhook Delivery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/secure-webhook-delivery.md
Title: Secure WebHook delivery with Azure AD in Azure Event Grid description: Describes how to deliver events to HTTPS endpoints protected by Azure Active Directory using Azure Event Grid + Last updated 10/12/2022
event-grid Security Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/security-authentication.md
Title: Authenticate event delivery to event handlers (Azure Event Grid) description: This article describes different ways of authenticating delivery to event handlers in Azure Event Grid. Previously updated : 06/22/2022 Last updated : 08/31/2023 # Authenticate event delivery to event handlers (Azure Event Grid)
This article provides information on authenticating event delivery to event hand
## Overview Azure Event Grid uses different authentication methods to deliver events to event handlers.
-| Authentication method | Supported handlers | Description |
+| Authentication method | Supported event handlers | Description |
|--|--|--|
-Access key | <p>Event Hubs</p><p>Service Bus</p><p>Storage Queues</p><p>Relay Hybrid Connections</p><p>Azure Functions</p><p>Storage Blobs (Deadletter)&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</p> | Access keys are fetched using Event Grid service principal's credentials. The permissions are granted to Event Grid when you register the Event Grid resource provider in their Azure subscription. |
-Managed System Identity <br/>&<br/> Role-based access control | <p>Event Hubs</p><p>Service Bus</p><p>Storage Queues</p><p>Storage  Blobs (Deadletter)</p></li></ul> | Enable managed system identity for the topic and add it to the appropriate role on the destination. For details, see [Use system-assigned identities for event delivery](#use-system-assigned-identities-for-event-delivery). |
-|Bearer token authentication with Azure AD protected webhook | Webhook | See the [Authenticate event delivery to webhook endpoints](#authenticate-event-delivery-to-webhook-endpoints) section for details.. |
+Access key | - Event Hubs<br/>- Service Bus<br/>- Storage Queues<br/>- Relay Hybrid Connections<br/>- Azure Functions<br/>- Storage Blobs (Deadletter) | Access keys are fetched using Event Grid service principal's credentials. The permissions are granted to Event Grid when you register the Event Grid resource provider in their Azure subscription. |
+Managed System Identity <br/>&<br/> Role-based access control | - Event Hubs<br/>- Service Bus<br/>- Storage Queues<br/>- Storage Blobs (Deadletter) | Enable managed system identity for the topic and add it to the appropriate role on the destination. For details, see [Use system-assigned identities for event delivery](#use-system-assigned-identities-for-event-delivery). |
+|Bearer token authentication with Microsoft Entra ID protected webhook | Webhook | See the [Authenticate event delivery to webhook endpoints](#authenticate-event-delivery-to-webhook-endpoints) section for details. |
Client secret as a query parameter | Webhook | See the [Using client secret as a query parameter](#using-client-secret-as-a-query-parameter) section for details. | > [!NOTE]
-> If you protect your Azure function with an Azure Active Directory app, you'll have to take the generic webhook approach using the HTTP trigger. Use the Azure function endpoint as a webhook URL when adding the subscription.
+> If you protect your Azure function with a Microsoft Entra ID app, you'll have to take the generic webhook approach using the HTTP trigger. Use the Azure function endpoint as a webhook URL when adding the subscription.
## Use system-assigned identities for event delivery You can enable a system-assigned managed identity for a topic or domain and use the identity to forward events to supported destinations such as Service Bus queues and topics, event hubs, and storage accounts.
For detailed step-by-step instructions, see [Event delivery with a managed ident
The following sections describe how to authenticate event delivery to webhook endpoints. Use a validation handshake mechanism irrespective of the method you use. See [Webhook event delivery](webhook-event-delivery.md) for details.
-### Using Azure Active Directory (Azure AD)
-You can secure the webhook endpoint that's used to receive events from Event Grid by using Azure AD. You'll need to create an Azure AD application, create a role and service principal in your application authorizing Event Grid, and configure the event subscription to use the Azure AD application. Learn how to [Configure Azure Active Directory with Event Grid](secure-webhook-delivery.md).
+### Using Microsoft Entra ID
+You can secure the webhook endpoint that's used to receive events from Event Grid by using Microsoft Entra ID. You need to create a Microsoft Entra ID application, create a role and a service principal in your application authorizing Event Grid, and configure the event subscription to use the Microsoft Entra ID application. Learn how to [Configure Microsoft Entra ID with Event Grid](secure-webhook-delivery.md).
### Using client secret as a query parameter You can also secure your webhook endpoint by adding query parameters to the webhook destination URL specified as part of creating an Event Subscription. Set one of the query parameters to be a client secret such as an [access token](https://en.wikipedia.org/wiki/Access_token) or a shared secret. Event Grid service includes all the query parameters in every event delivery request to the webhook. The webhook service can retrieve and validate the secret. If the client secret is updated, event subscription also needs to be updated. To avoid delivery failures during this secret rotation, make the webhook accept both old and new secrets for a limited duration before updating the event subscription with the new secret.
-As query parameters could contain client secrets, they are handled with extra care. They are stored as encrypted and are not accessible to service operators. They are not logged as part of the service logs/traces. When retrieving the Event Subscription properties, destination query parameters aren't returned by default. For example: [--include-full-endpoint-url](/cli/azure/eventgrid/event-subscription#az-eventgrid-event-subscription-show) parameter is to be used in Azure [CLI](/cli/azure).
+As query parameters could contain client secrets, they're handled with extra care. They're stored as encrypted and aren't accessible to service operators. They aren't logged as part of the service logs/traces. When retrieving the Event Subscription properties, destination query parameters aren't returned by default. For example: [--include-full-endpoint-url](/cli/azure/eventgrid/event-subscription#az-eventgrid-event-subscription-show) parameter is to be used in Azure [CLI](/cli/azure).
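+
+As an illustration, here's a minimal sketch of the webhook side validating such a secret, written as an ASP.NET Core minimal API. The `code` query-parameter name and the route are hypothetical, and accepting two secrets implements the rotation guidance above.
+
+```csharp
+using System.Linq;
+
+var app = WebApplication.Create(args);
+
+// "code" is a hypothetical query-parameter name carrying the shared secret.
+// Accept both the current and the previous secret during rotation.
+string[] acceptedSecrets = { "<CURRENT-SECRET>", "<PREVIOUS-SECRET>" };
+
+app.MapPost("/api/events", (HttpRequest request) =>
+{
+    string? secret = request.Query["code"];
+    return acceptedSecrets.Contains(secret)
+        ? Results.Ok()
+        : Results.Unauthorized();
+});
+
+app.Run();
+```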
For more information on delivering events to webhooks, see [Webhook event delivery](webhook-event-delivery.md)
event-grid Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Event Grid description: Lists Azure Policy Regulatory Compliance controls available for Azure Event Grid. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/03/2023 Last updated : 08/25/2023
event-hubs Event Hubs Dedicated Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-dedicated-overview.md
Approximately, one capacity unit (CU) in a legacy cluster provides *ingress capa
With Legacy cluster, you can purchase up to 20 CUs. > [!Note]
-> Event Hubs Dedicated clusters require at least 8 Capacity Units(CUs) to enable availability zones. Clusters with self-serve scaling does not support availability zones yet. Availability zone support is only available in [Azure regions with availability zones](../availability-zones/az-overview.md#azure-regions-with-availability-zones).
+> Legacy Event Hubs Dedicated clusters require at least 8 Capacity Units (CUs) to enable availability zones. Availability zone support is only available in [Azure regions with availability zones](../availability-zones/az-overview.md#azure-regions-with-availability-zones).
> [!IMPORTANT] > Migrating an existing Legacy cluster to a Self-Serve Cluster isn't currently supported. For more information, see [migrating a Legacy cluster to a Self-Serve Scalable cluster](#can-i-migrate-my-standard-or-premium-namespaces-to-a-dedicated-tier-cluster).
event-hubs Event Hubs Dotnet Standard Getstarted Send https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-dotnet-standard-getstarted-send.md
This section shows you how to create a .NET Core console application to send eve
// Use the producer client to send the batch of events to the event hub await producerClient.SendAsync(eventBatch); Console.WriteLine($"A batch of {numOfEvents} events has been published.");
+ Console.ReadLine();
} finally {
This section shows you how to create a .NET Core console application to send eve
{ // if it is too large for the batch throw new Exception($"Event {i} is too large for the batch and cannot be sent.");
} }
This section shows you how to create a .NET Core console application to send eve
// Use the producer client to send the batch of events to the event hub await producerClient.SendAsync(eventBatch); Console.WriteLine($"A batch of {numOfEvents} events has been published.");
+ Console.ReadLine();
} finally {
Replace the contents of **Program.cs** with the following code:
{ // Write the body of the event to the console window Console.WriteLine("\tReceived event: {0}", Encoding.UTF8.GetString(eventArgs.Data.Body.ToArray()));
+ Console.ReadLine();
return Task.CompletedTask; }
Replace the contents of **Program.cs** with the following code:
// Write details about the error to the console window Console.WriteLine($"\tPartition '{eventArgs.PartitionId}': an unhandled exception was encountered. This was not expected to happen."); Console.WriteLine(eventArgs.Exception.Message);
+ Console.ReadLine();
return Task.CompletedTask; } ```
Replace the contents of **Program.cs** with the following code:
{ // Write the body of the event to the console window Console.WriteLine("\tReceived event: {0}", Encoding.UTF8.GetString(eventArgs.Data.Body.ToArray()));
+ Console.ReadLine();
return Task.CompletedTask; }
Replace the contents of **Program.cs** with the following code:
// Write details about the error to the console window Console.WriteLine($"\tPartition '{eventArgs.PartitionId}': an unhandled exception was encountered. This was not expected to happen."); Console.WriteLine(eventArgs.Exception.Message);
+ Console.ReadLine();
return Task.CompletedTask; } ```
event-hubs Event Hubs Ip Filtering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-ip-filtering.md
Use the following Azure PowerShell commands to add, list, remove, update, and de
- [`New-AzEventHubIPRuleConfig`](/powershell/module/az.eventhub/new-azeventhubipruleconfig) and [`Set-AzEventHubNetworkRuleSet`](/powershell/module/az.eventhub/set-azeventhubnetworkruleset) together to add an IP firewall rule - [`Remove-AzEventHubIPRule`](/powershell/module/az.eventhub/remove-azeventhubiprule) to remove an IP firewall rule. - ## Default action and public network access ### REST API
From API version **2021-06-01-preview onwards**, the default value of the `defau
The API version **2021-06-01-preview onwards** also introduces a new property named `publicNetworkAccess`. If it's set to `Disabled`, operations are restricted to private links only. If it's set to `Enabled`, operations are allowed over the public internet.
-For more information about these properties, see [Create or Update Network Rule Set](/rest/api/eventhub/preview/namespaces-network-rule-set/create-or-update-network-rule-set) and [Create or Update Private Endpoint Connections](/rest/api/eventhub/preview/private-endpoint-connections/create-or-update).
+For more information about these properties, see [Create or Update Network Rule Set](/rest/api/eventhub/controlplane-preview/namespaces-network-rule-set/create-or-update-network-rule-set) and [Create or Update Private Endpoint Connections](/rest/api/eventhub/controlplane-preview/private-endpoint-connections/create-or-update).
> [!NOTE] > None of the above settings bypass validation of claims via SAS or Azure AD authentication. The authentication check always runs after the service validates the network checks that are configured by `defaultAction`, `publicNetworkAccess`, `privateEndpointConnections` settings.
Azure portal always uses the latest API version to get and set properties. If y
++ ## Next steps For constraining access to Event Hubs to Azure virtual networks, see the following link:
For constraining access to Event Hubs to Azure virtual networks, see the followi
<!-- Links --> [express-route]: ../expressroute/expressroute-faqs.md#supported-services+ [lnk-deploy]: ../azure-resource-manager/templates/deploy-powershell.md+ [lnk-vnet]: event-hubs-service-endpoints.md++
event-hubs Event Hubs Kafka Connect Debezium https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-kafka-connect-debezium.md
Last updated 10/18/2021
This tutorial walks you through how to set up a change data capture based system on Azure using [Event Hubs](./event-hubs-about.md?WT.mc_id=devto-blog-abhishgu) (for Kafka), [Azure DB for PostgreSQL](../postgresql/overview.md) and Debezium. It will use the [Debezium PostgreSQL connector](https://debezium.io/documentation/reference/1.2/connectors/postgresql.html) to stream database modifications from PostgreSQL to Kafka topics in Event Hubs > [!NOTE]
-> This article contains references to the term *whitelist*, a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.
+> This article contains references to a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.
In this tutorial, you take the following steps:
event-hubs Event Hubs Kafka Mirror Maker Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-kafka-mirror-maker-tutorial.md
This tutorial shows how to mirror a Kafka broker into an Azure Event Hubs using
> This sample is available on [GitHub](https://github.com/Azure/azure-event-hubs-for-kafka/tree/master/tutorials/mirror-maker) > [!NOTE]
-> This article contains references to the term *whitelist*, a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.
+> This article contains references to a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.
In this tutorial, you learn how to: > [!div class="checklist"]
event-hubs Event Hubs Service Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-service-endpoints.md
From API version **2021-06-01-preview onwards**, the default value of the `defau
The API version **2021-06-01-preview onwards** also introduces a new property named `publicNetworkAccess`. If it's set to `Disabled`, operations are restricted to private links only. If it's set to `Enabled`, operations are allowed over the public internet.
-For more information about these properties, see [Create or Update Network Rule Set](/rest/api/eventhub/preview/namespaces-network-rule-set/create-or-update-network-rule-set) and [Create or Update Private Endpoint Connections](/rest/api/eventhub/preview/private-endpoint-connections/create-or-update).
+For more information about these properties, see [Create or Update Network Rule Set](/rest/api/eventhub/controlplane-preview/namespaces-network-rule-set/create-or-update-network-rule-set) and [Create or Update Private Endpoint Connections](/rest/api/eventhub/controlplane-preview/private-endpoint-connections/create-or-update).
> [!NOTE] > None of the above settings bypass validation of claims via SAS or Azure AD authentication. The authentication check always runs after the service validates the network checks that are configured by `defaultAction`, `publicNetworkAccess`, `privateEndpointConnections` settings.
event-hubs Monitor Event Hubs Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/monitor-event-hubs-reference.md
Azure Event Hubs supports the following dimensions for metrics in Azure Monitor.
|Dimension name|Description| | - | -- |
-|Entity Name| Name of the event hub. With the 'Incoming Requests' metric, the Entity Name dimension will see a value of '-NamespaceOnlyMetric-' in addition to all your Event Hubs. This represents request which were made at the namespace level. Examples include a request to list all Event Hubs under the namespace or requests to entities which failed authentication or authorization.|
+|Entity Name| Name of the event hub. With the 'Incoming Requests' metric, the Entity Name dimension has a value of '-NamespaceOnlyMetric-' in addition to all your event hubs. It represents the requests that were made at the namespace level. Examples include a request to list all event hubs in the namespace or requests to entities that failed authentication or authorization.|
## Resource logs [!INCLUDE [event-hubs-diagnostic-log-schema](./includes/event-hubs-diagnostic-log-schema.md)]
Here's an example of a runtime audit log entry:
Application metrics logs capture the aggregated information on certain metrics related to data plane operations. The captured information includes the following runtime metrics. > [!NOTE]
-> Application metrics logs are available only in **premium** and **dedicated** tiers. Application Metric logs for following metrics- **IncomingBytes**. **IncomingMessages** ,**OutgoingBytes** ,**OutgoingMessages** are only generated if you have already created [Application Groups](resource-governance-overview.md#application-groups),in your environment. Application Groups should have the same security context - AAD ID or SAS key, which is being used to send/receive data to Azure Event Hubs.
+> Application metrics logs are available only in **premium** and **dedicated** tiers.
Name | Description - | -
Name | Description
`IncomingBytes` | Details of Publisher throughput sent to Event Hubs. `OutgoingMessages` | Details of the number of messages consumed from Event Hubs. `OutgoingBytes` | Details of Consumer throughput from Event Hubs.
+`OffsetCommit` | Number of offset commit calls made to the event hub.
+`OffsetFetch` | Number of offset fetch calls made to the event hub.
+ ## Azure Monitor Logs tables
event-hubs Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/policy-reference.md
Title: Built-in policy definitions for Azure Event Hubs description: Lists Azure Policy built-in policy definitions for Azure Event Hubs. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2023 Last updated : 08/30/2023
event-hubs Private Link Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/private-link-service.md
There are four provisioning states:
2. In the search bar, type in **event hubs**. 3. Select the **namespace** that you want to manage. 4. Select the **Networking** tab.
-5. Go to the appropriate section below based on the operation you want to: approve, reject, or remove.
+5. Go to the appropriate section that follows, based on the operation you want to perform: approve, reject, or remove.
### Approve a private endpoint connection 1. If there are any connections that are pending, you see a connection listed with **Pending** in the provisioning state.
Aliases: <event-hubs-namespace-name>.servicebus.windows.net
## Limitations and design considerations
-**Pricing**: For pricing information, see [Azure Private Link pricing](https://azure.microsoft.com/pricing/details/private-link/).
-
-**Limitations**: This feature is available in all Azure public regions.
-
-**Maximum number of private endpoints per Event Hubs namespace**: 120.
+- For pricing information, see [Azure Private Link pricing](https://azure.microsoft.com/pricing/details/private-link/).
+- This feature is available in all Azure public regions.
+- Maximum number of private endpoints per Event Hubs namespace: 120.
+- The traffic is blocked at the application layer, not at the TCP layer. Therefore, you see TCP connections or `nslookup` operations succeeding against the public endpoint even though public access is disabled.
For more information, see [Azure Private Link service: Limitations](../private-link/private-link-service-overview.md#limitations).
event-hubs Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Event Hubs description: Lists Azure Policy Regulatory Compliance controls available for Azure Event Hubs. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/03/2023 Last updated : 08/25/2023
expressroute About Fastpath https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/about-fastpath.md
FastPath will honor UDRs configured on the GatewaySubnet and send traffic direct
> * FastPath UDR connectivity is not supported for Azure Dedicated Host workloads. > * FastPath UDR connectivity is not supported for IPv6 workloads.
+To enroll in the public preview, send an email to **exrpm@microsoft.com** with the following information:
+- Azure subscription ID
+- Virtual Network(s) Azure Resource ID(s)
+- ExpressRoute Circuit(s) Azure Resource ID(s)
+- ExpressRoute Connection(s) Azure Resource ID(s)
+- Number of Virtual Network peering connections
+- Number of UDRs configured in the hub Virtual Network
+ ### Private Link Connectivity for 10Gbps ExpressRoute Direct Private Link traffic sent over ExpressRoute FastPath will bypass the ExpressRoute virtual network gateway in the data path.
expressroute Expressroute Circuit Peerings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-circuit-peerings.md
Previously updated : 09/19/2022 Last updated : 09/06/2023
You can connect more than one virtual network to the private peering domain. Rev
[!INCLUDE [expressroute-office365-include](../../includes/expressroute-office365-include.md)]
-Connectivity to Microsoft online services (Microsoft 365 and Azure PaaS services) occurs through Microsoft peering. We enable bi-directional connectivity between your WAN and Microsoft cloud services through the Microsoft peering routing domain. You must connect to Microsoft cloud services only over public IP addresses that are owned by you or your connectivity provider and you must adhere to all the defined rules. For more information, see the [ExpressRoute prerequisites](expressroute-prerequisites.md) page.
+Connectivity to Microsoft online services (Microsoft 365, Azure PaaS services, and Microsoft PSTN services) occurs through Microsoft peering. We enable bi-directional connectivity between your WAN and Microsoft cloud services through the Microsoft peering routing domain. You must connect to Microsoft cloud services only over public IP addresses that are owned by you or your connectivity provider, and you must adhere to all the defined rules. For more information, see the [ExpressRoute prerequisites](expressroute-prerequisites.md) page.
For more information on services supported, costs, and configuration details, see the [FAQ page](expressroute-faqs.md). For information on the list of connectivity providers offering Microsoft peering support, see the [ExpressRoute locations](expressroute-locations.md) page.
expressroute Expressroute Faqs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-faqs.md
If your ExpressRoute circuit is enabled for Azure Microsoft peering, you can acc
* Power BI - Available via an Azure Regional Community, see [here](/power-bi/service-admin-where-is-my-tenant-located) for how to find out the region of your Power BI tenant. * Azure Active Directory * [Azure DevOps](https://blogs.msdn.microsoft.com/devops/2018/10/23/expressroute-for-azure-devops/) (Azure Global Services community)
+* [Microsoft PSTN services](./using-expressroute-for-microsoft-pstn.md)
* Azure Public IP addresses for IaaS (Virtual Machines, Virtual Network Gateways, Load Balancers, etc.) * Most of the other Azure services are also supported. Check directly with the service that you want to use to verify support.
ExpressRoute Local is a SKU of ExpressRoute circuit, in addition to the Standard
ExpressRoute Local may not be available for an ExpressRoute Location. For peering location and supported Azure local region, see [locations and connectivity providers](expressroute-locations-providers.md#partners).
- > [!NOTE]
- > The restriction of Azure regions in the same metro doesn't apply for ExpressRoute Local in Virtual WAN.
- ### What are the benefits of ExpressRoute Local? While you need to pay egress data transfer for your Standard or Premium ExpressRoute circuit, you don't pay egress data transfer separately for your ExpressRoute Local circuit. In other words, the price of ExpressRoute Local includes data transfer fees. ExpressRoute Local is an economical solution if you have a massive amount of data to transfer and want to send your data over a private connection to an ExpressRoute peering location near your desired Azure regions.
expressroute Expressroute Howto Add Gateway Portal Resource Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-howto-add-gateway-portal-resource-manager.md
Title: 'Tutorial: Configure a virtual network gateway for ExpressRoute using Azure portal'
-description: This tutorial walks you through adding a virtual network gateway to a VNet for ExpressRoute using the Azure portal.
+description: This tutorial walks you through adding a virtual network gateway to a virtual network for ExpressRoute using the Azure portal.
Previously updated : 07/21/2022 Last updated : 08/31/2023
> * [Classic - PowerShell](expressroute-howto-add-gateway-classic.md) >
-This tutorial walks you through the steps to add and remove a virtual network gateway for a pre-existing virtual network (VNet). The steps for this configuration apply to VNets that were created using the Resource Manager deployment model for an ExpressRoute configuration. For more information about virtual network gateways and gateway configuration settings for ExpressRoute, see [About virtual network gateways for ExpressRoute](expressroute-about-virtual-network-gateways.md).
+This tutorial walks you through the steps to add and remove a virtual network gateway for a pre-existing virtual network. The steps for this configuration apply to VNets that were created using the Resource Manager deployment model for an ExpressRoute configuration. For more information about virtual network gateways and gateway configuration settings for ExpressRoute, see [About virtual network gateways for ExpressRoute](expressroute-about-virtual-network-gateways.md).
In this tutorial, you learn how to: > [!div class="checklist"]
The steps for this tutorial use the values in the following configuration refere
## Create the gateway subnet 1. In the [portal](https://portal.azure.com), navigate to the Resource Manager virtual network for which you want to create a virtual network gateway.
-1. In the **Settings** section of your VNet, select **Subnets** to expand the Subnet settings.
+1. In the **Settings** section of your virtual network, select **Subnets** to expand the Subnet settings.
1. Select **+ Gateway subnet** to add a gateway subnet. :::image type="content" source="./media/expressroute-howto-add-gateway-portal-resource-manager/add-gateway-subnet.png" alt-text="Screenshot that shows the button to add the gateway subnet.":::
The steps for this tutorial use the values in the following configuration refere
| Setting | Value | | --| -- | | Subscription | Verify that the correct subscription is selected. |
- | Resource Group | The resource group will automatically be chosen once you select the virtual network. |
- | Name | Name your gateway. This isn't the same as naming a gateway subnet. It's the name of the gateway resource you're creating.|
- | Region | Change the **Region** field to point to the location where your virtual network is located. If the region isn't pointing to the location where your virtual network is, the virtual network won't appear in the **Virtual network** dropdown. |
+ | Resource Group | The resource group is automatically selected once you choose the virtual network. |
+ | Name | Name your gateway. This name isn't the same as naming a gateway subnet. It's the name of the gateway resource you're creating.|
+ | Region | Change the **Region** field to point to the location where your virtual network is located. If the region isn't pointing to the location where your virtual network is, the virtual network doesn't appear in the **Virtual network** dropdown. |
| Gateway type | Select **ExpressRoute**| | SKU | Select the gateway SKU from the dropdown. | | Virtual network | Select *TestVNet*. |
If you no longer need the ExpressRoute gateway, locate the gateway in the virtua
In this tutorial, you learned how to create a virtual network gateway. For more information about virtual network gateways, see: [ExpressRoute virtual network gateways](expressroute-about-virtual-network-gateways.md).
-To learn how to link your VNet to an ExpressRoute circuit, advance to the next tutorial.
+To learn how to link your virtual network to an ExpressRoute circuit, advance to the next tutorial.
> [!div class="nextstepaction"] > [Link a Virtual Network to an ExpressRoute circuit](expressroute-howto-linkvnet-portal-resource-manager.md)
expressroute Expressroute Howto Add Gateway Resource Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-howto-add-gateway-resource-manager.md
The steps for this task use a VNet based on the values in the following configur
## Add a gateway
+> [!IMPORTANT]
+> If you plan to use IPv6-based private peering over ExpressRoute, make sure to select an AZ SKU (ErGw1AZ, ErGw2AZ, ErGw3AZ) for **-GatewaySku**, or use a non-AZ SKU (Standard, HighPerformance, UltraPerformance) for **-GatewaySku** with a Standard-SKU static public IP.
+>
+ 1. To connect with Azure, run `Connect-AzAccount`. 1. Declare your variables for this tutorial. Be sure to edit the sample to reflect the settings that you want to use.
The steps for this task use a VNet based on the values in the following configur
```azurepowershell-interactive New-AzVirtualNetworkGateway -Name $GWName -ResourceGroupName $RG -Location $Location -IpConfigurations $ipconf -GatewayType Expressroute -GatewaySku Standard ```
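For reference, here's a minimal sketch of the variable declarations the command above assumes; every name and value is illustrative, and the public IP allocation method may differ for newer gateway SKUs.

```azurepowershell-interactive
# Illustrative values -- replace with your own settings
$RG       = 'TestRG'
$Location = 'East US'
$GWName   = 'ERGateway'

# Build the gateway IP configuration from an existing virtual network and a new public IP
$vnet   = Get-AzVirtualNetwork -Name 'TestVNet' -ResourceGroupName $RG
$subnet = Get-AzVirtualNetworkSubnetConfig -Name 'GatewaySubnet' -VirtualNetwork $vnet
$pip    = New-AzPublicIpAddress -Name 'ERGatewayIP' -ResourceGroupName $RG -Location $Location -AllocationMethod Dynamic
$ipconf = New-AzVirtualNetworkGatewayIpConfig -Name 'ipconfig1' -SubnetId $subnet.Id -PublicIpAddressId $pip.Id
```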
-> [!IMPORTANT]
-> If you plan to use IPv6-based private peering over ExpressRoute, make sure to select an AZ SKU (ErGw1AZ, ErGw2AZ, ErGw3AZ) for **-GatewaySku** or use Non-AZ SKU (Standard, HighPerformance, UltraPerformance) for -GatewaySKU with Standard and Static Public IP.
->
->
## Verify the gateway was created Use the following commands to verify that the gateway has been created:
expressroute Expressroute Howto Circuit Portal Resource Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-howto-circuit-portal-resource-manager.md
description: In this quickstart, you learn how to create, provision, verify, upd
Previously updated : 07/18/2022 Last updated : 08/31/2023
From a browser, sign in to the [Azure portal](https://portal.azure.com) and sign
| | | | Port type | Select if you're connecting to a service provider or directly into Microsoft's global network at a peering location. | | Create new or import from classic | Select if you're creating a new circuit or if you're migrating a classic circuit to Azure Resource Manager. |
- | Provider | Select the internet service provider who you'll be requesting your service from. |
+ | Provider | Select the internet service provider you're requesting your service from. |
| Peering Location | Select the physical location where you're peering with Microsoft. | | SKU | Select the SKU for the ExpressRoute circuit. You can specify **Local** to get the local SKU, **Standard** to get the standard SKU or **Premium** for the premium add-on. You can change between Local, Standard and Premium. | | Billing model | Select the billing type for egress data charge. You can specify **Metered** for a metered data plan and **Unlimited** for an unlimited data plan. You can change the billing type from **Metered** to **Unlimited**. |
You can view all the circuits that you created by searching for **ExpressRoute c
:::image type="content" source="./media/expressroute-howto-circuit-portal-resource-manager/expressroute-circuit-menu.png" alt-text="Screenshot of ExpressRoute circuit menu.":::
-All Expressroute circuits created in the subscription will appear here.
+All ExpressRoute circuits created in the subscription appear here.
:::image type="content" source="./media/expressroute-howto-circuit-portal-resource-manager/expressroute-circuit-list.png" alt-text="Screenshot of ExpressRoute circuit list."::: **View the properties**
-You can view the properties of the circuit by selecting it. On the Overview page for your circuit, you'll find the **Service Key**. Provide the service key to the service provider to complete the provisioning process. The service key is unique to your circuit.
+You can view the properties of the circuit by selecting it. On the Overview page for your circuit, you can find the **Service Key**. Provide the service key to the service provider to complete the provisioning process. The service key is unique to your circuit.
:::image type="content" source="./media/expressroute-howto-circuit-portal-resource-manager/expressroute-circuit-overview.png" alt-text="Screenshot of ExpressRoute properties."::: ### Send the service key to your connectivity provider for provisioning
-On this page, **Provider status** gives you the current state of provisioning on the service-provider side. **Circuit status** provides you the state on the Microsoft side. For more information about circuit provisioning states, see the [Workflows](expressroute-workflows.md#expressroute-circuit-provisioning-states) article.
+On this page, **Provider status** gives you the current state of provisioning on the service-provider side. **Circuit status** provides you with the state on the Microsoft side. For more information about circuit provisioning states, see the [Workflows](expressroute-workflows.md#expressroute-circuit-provisioning-states) article.
When you create a new ExpressRoute circuit, the circuit is in the following state:
expressroute Expressroute Howto Linkvnet Arm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-howto-linkvnet-arm.md
Title: 'Tutorial: Link a VNet to an ExpressRoute circuit - Azure PowerShell'
+ Title: 'Tutorial: Link a virtual network to an ExpressRoute circuit - Azure PowerShell'
description: This tutorial provides an overview of how to link virtual networks (VNets) to ExpressRoute circuits by using the Resource Manager deployment model and Azure PowerShell. Previously updated : 07/18/2022 Last updated : 08/31/2023
In this tutorial, you learn how to:
* Follow the instructions to [create an ExpressRoute circuit](expressroute-howto-circuit-arm.md) and have the circuit enabled by your connectivity provider. * Ensure that you have Azure private peering configured for your circuit. See the [configure routing](expressroute-howto-routing-arm.md) article for routing instructions. * Ensure that Azure private peering gets configured and establishes BGP peering between your network and Microsoft for end-to-end connectivity.
- * Ensure that you have a virtual network and a virtual network gateway created and fully provisioned. Follow the instructions to [create a virtual network gateway for ExpressRoute](expressroute-howto-add-gateway-resource-manager.md). A virtual network gateway for ExpressRoute uses the GatewayType 'ExpressRoute', not VPN.
+ * Ensure that you have a virtual network and a virtual network gateway created and fully provisioned. Follow the instructions to [create a virtual network gateway for ExpressRoute](expressroute-howto-add-gateway-resource-manager.md). A virtual network gateway for ExpressRoute uses the GatewayType `ExpressRoute`, not VPN.
* You can link up to 10 virtual networks to a standard ExpressRoute circuit. All virtual networks must be in the same geopolitical region when using a standard ExpressRoute circuit.
-* A single VNet can be linked to up to 16 ExpressRoute circuits. Use the steps in this article to create a new connection object for each ExpressRoute circuit you're connecting to. The ExpressRoute circuits can be in the same subscription, different subscriptions, or a mix of both.
+* A single virtual network can be linked to up to 16 ExpressRoute circuits. Use the steps in this article to create a new connection object for each ExpressRoute circuit you're connecting to. The ExpressRoute circuits can be in the same subscription, different subscriptions, or a mix of both.
-* If you enable the ExpressRoute premium add-on, you can link virtual networks outside of the geopolitical region of the ExpressRoute circuit. The premium add-on will also allow you to connect more than 10 virtual networks to your ExpressRoute circuit depending on the bandwidth chosen. Check the [FAQ](expressroute-faqs.md) for more details on the premium add-on.
+* If you enable the ExpressRoute premium add-on, you can link virtual networks outside of the geopolitical region of the ExpressRoute circuit. The premium add-on allows you to connect more than 10 virtual networks to your ExpressRoute circuit depending on the bandwidth chosen. Check the [FAQ](expressroute-faqs.md) for more details on the premium add-on.
-* In order to create the connection from the ExpressRoute circuit to the target ExpressRoute virtual network gateway, the number of address spaces advertised from the local or peered virtual networks needs to be equal to or less than **200**. Once the connection has been successfully created, you can add additional address spaces, up to 1,000, to the local or peered virtual networks.
+* In order to create the connection from the ExpressRoute circuit to the target ExpressRoute virtual network gateway, the number of address spaces advertised from the local or peered virtual networks needs to be equal to or less than **200**. Once the connection has been successfully created, you can add more address spaces, up to 1,000, to the local or peered virtual networks.
* Review guidance for [connectivity between virtual networks over ExpressRoute](virtual-network-connectivity-guidance.md).
$circuit = Get-AzExpressRouteCircuit -Name "MyCircuit" -ResourceGroupName "MyRG"
$auth1 = Get-AzExpressRouteCircuitAuthorization -ExpressRouteCircuit $circuit -Name "MyAuthorization1" ```
-The response to the previous commands will contain the authorization key and status:
+The response to the previous commands contains the authorization key and status:
```azurepowershell Name : MyAuthorization1
You can update certain properties of a virtual network connection.
**To update the connection weight**
-Your virtual network can be connected to multiple ExpressRoute circuits. You may receive the same prefix from more than one ExpressRoute circuit. To choose which connection to send traffic destined for this prefix, you can change *RoutingWeight* of a connection. Traffic will be sent on the connection with the highest *RoutingWeight*.
+Your virtual network can be connected to multiple ExpressRoute circuits. You may receive the same prefix from more than one ExpressRoute circuit. To choose which connection sends traffic destined for this prefix, change the *RoutingWeight* of a connection. Traffic is sent on the connection with the highest *RoutingWeight*.
```azurepowershell-interactive $connection = Get-AzVirtualNetworkGatewayConnection -Name "MyVirtualNetworkConnection" -ResourceGroupName "MyRG"
$connection = Get-AzVirtualNetworkGatewayConnection -Name "MyConnection" -Resour
$connection.ExpressRouteGatewayBypass = $True Set-AzVirtualNetworkGatewayConnection -VirtualNetworkGatewayConnection $connection ```
-### FastPath and Private Link for 100 Gbps ExpressRoute Direct
+### FastPath and Private Link for 100-Gbps ExpressRoute Direct
-With FastPath and Private Link, Private Link traffic sent over ExpressRoute bypasses the ExpressRoute virtual network gateway in the data path. This is Generally Available for connections associated to 100 Gb ExpressRoute Direct circuits. To enable this, follow the below guidance:
+With FastPath and Private Link, Private Link traffic sent over ExpressRoute bypasses the ExpressRoute virtual network gateway in the data path. This capability is generally available for connections associated with 100-Gbps ExpressRoute Direct circuits. To enable it, follow this guidance:
1. Send an email to **ERFastPathPL@microsoft.com**, providing the following information: * Azure Subscription ID
-* Virtual Network (VNet) Resource ID
+* Virtual Network Resource ID
* Azure Region where the Private Endpoint/Private Link service is deployed 2. Once you receive a confirmation from Step 1, run the following Azure PowerShell command in the target Azure subscription.
Register-AzProviderFeature -FeatureName ExpressRoutePrivateEndpointGatewayBypass
With FastPath and virtual network peering, you can enable ExpressRoute connectivity directly to VMs in a local or peered virtual network, bypassing the ExpressRoute virtual network gateway in the data path.
-With FastPath and UDR, you can configure a UDR on the GatewaySubnet to direct ExpressRoute traffic to an Azure Firewall or third party NVA. FastPath will honor the UDR and send traffic directly to the target Azure Firewall or NVA, bypassing the ExpressRoute virtual network gateway in the data path.
+With FastPath and UDR, you can configure a UDR on the GatewaySubnet to direct ExpressRoute traffic to an Azure Firewall or third-party NVA. FastPath honors the UDR and sends traffic directly to the target Azure Firewall or NVA, bypassing the ExpressRoute virtual network gateway in the data path.
To enroll in the preview, send an email to **exrpm@microsoft.com**, providing the following information: * Azure Subscription ID
-* Virtual Network (VNet) Resource ID
+* Virtual Network Resource ID
* ExpressRoute Circuit Resource ID
+* ExpressRoute Connection(s) Resource ID(s)
+* Number of Private Endpoints deployed to the local/Hub virtual network.
+* Resource ID of any User-Defined-Routes (UDRs) configured in the local/Hub virtual network.
**FastPath support for virtual network peering and UDRs is only available for ExpressRoute Direct connections**.
-### FastPath and Private Link for 10 Gbps ExpressRoute Direct
+### FastPath and Private Link for 10-Gbps ExpressRoute Direct
-With FastPath and Private Link, Private Link traffic sent over ExpressRoute bypasses the ExpressRoute virtual network gateway in the data path. This preview supports connections associated to 10 Gbps ExpressRoute Direct circuits. This preview doesn't support ExpressRoute circuits managed by an ExpressRoute partner.
+With FastPath and Private Link, Private Link traffic sent over ExpressRoute bypasses the ExpressRoute virtual network gateway in the data path. This preview supports connections associated to 10-Gbps ExpressRoute Direct circuits. This preview doesn't support ExpressRoute circuits managed by an ExpressRoute partner.
To enroll in this preview, run the following Azure PowerShell command in the target Azure subscription:
Register-AzProviderFeature -FeatureName ExpressRoutePrivateEndpointGatewayBypass
> [!NOTE] > Any connections configured for FastPath in the target subscription will be enrolled in the selected preview. We do not advise enabling these previews in production subscriptions. > If you already have FastPath configured and want to enroll in the preview feature, you need to do the following:
-> 1. Enroll in one of the FastPath preview features with the Azure PowerShell commands above.
+> 1. Enroll in one of the FastPath preview features with the preceding Azure PowerShell commands.
> 1. Disable and then re-enable FastPath on the target connection. > 1. To switch between preview features, register the subscription with the target preview PowerShell command, and then disable and re-enable FastPath on the connection. >
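Before reconfiguring FastPath, you can confirm that the registration completed. This check is a hedged sketch that assumes the feature sits under the Microsoft.Network provider namespace:

```azurepowershell-interactive
# RegistrationState should read 'Registered' before you disable and re-enable FastPath
Get-AzProviderFeature -FeatureName ExpressRoutePrivateEndpointGatewayBypass -ProviderNamespace Microsoft.Network
```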
Remove-AzVirtualNetworkGatewayConnection "MyConnection" -ResourceGroupName "MyRG
In this tutorial, you learned how to connect a virtual network to a circuit in the same subscription and in a different subscription. For more information about ExpressRoute gateways, see: [ExpressRoute virtual network gateways](expressroute-about-virtual-network-gateways.md).
-To learn how to configure route filters for Microsoft peering using PowerShell, advance to the next tutorial.
+To learn how to configure route filters for Microsoft peering using PowerShell, advance to the next tutorial.
> [!div class="nextstepaction"] > [Configure route filters for Microsoft peering](how-to-routefilter-powershell.md)
expressroute Expressroute Howto Linkvnet Portal Resource Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-howto-linkvnet-portal-resource-manager.md
Title: 'Tutorial: Link a VNet to an ExpressRoute circuit - Azure portal'
+ Title: 'Tutorial: Link a virtual network to an ExpressRoute circuit - Azure portal'
description: This tutorial shows you how to create a connection to link a virtual network to an Azure ExpressRoute circuit using the Azure portal. Previously updated : 07/18/2022 Last updated : 08/31/2023 + # Tutorial: Connect a virtual network to an ExpressRoute circuit using the Azure portal > [!div class="op_single_selector"]
> * [PowerShell (classic)](expressroute-howto-linkvnet-classic.md) >
-This tutorial helps you create a connection to link a virtual network (VNet) to an Azure ExpressRoute circuit using the Azure portal. The virtual networks that you connect to your Azure ExpressRoute circuit can either be in the same subscription or part of another subscription.
+This tutorial helps you create a connection to link a virtual network to an Azure ExpressRoute circuit using the Azure portal. The virtual networks that you connect to your Azure ExpressRoute circuit can either be in the same subscription or part of another subscription.
In this tutorial, you learn how to: > [!div class="checklist"]
In this tutorial, you learn how to:
* Follow the instructions to [create an ExpressRoute circuit](expressroute-howto-circuit-portal-resource-manager.md) and have the circuit enabled by your connectivity provider. * Ensure that you have Azure private peering configured for your circuit. See the [Create and modify peering for an ExpressRoute circuit](expressroute-howto-routing-portal-resource-manager.md) article for peering and routing instructions. * Ensure that Azure private peering gets configured and establishes BGP peering between your network and Microsoft for end-to-end connectivity.
- * Ensure that you have a virtual network and a virtual network gateway created and fully provisioned. Follow the instructions to [create a virtual network gateway for ExpressRoute](expressroute-howto-add-gateway-resource-manager.md). A virtual network gateway for ExpressRoute uses the GatewayType 'ExpressRoute', not VPN.
+ * Ensure that you have a virtual network and a virtual network gateway created and fully provisioned. Follow the instructions to [create a virtual network gateway for ExpressRoute](expressroute-howto-add-gateway-resource-manager.md). A virtual network gateway for ExpressRoute uses the GatewayType `ExpressRoute`, not VPN.
* You can link up to 10 virtual networks to a standard ExpressRoute circuit. All virtual networks must be in the same geopolitical region when using a standard ExpressRoute circuit.
-* A single VNet can be linked to up to 16 ExpressRoute circuits. Use the following process to create a new connection object for each ExpressRoute circuit you're connecting to. The ExpressRoute circuits can be in the same subscription, different subscriptions, or a mix of both.
+* A single virtual network can be linked to up to 16 ExpressRoute circuits. Use the following process to create a new connection object for each ExpressRoute circuit you're connecting to. The ExpressRoute circuits can be in the same subscription, different subscriptions, or a mix of both.
-* If you enable the ExpressRoute premium add-on, you can link virtual networks outside of the geopolitical region of the ExpressRoute circuit. The premium add-on will also allow you to connect more than 10 virtual networks to your ExpressRoute circuit depending on the bandwidth chosen. Check the [FAQ](expressroute-faqs.md) for more details on the premium add-on.
+* If you enable the ExpressRoute premium add-on, you can link virtual networks outside of the geopolitical region of the ExpressRoute circuit. The premium add-on also allows you to connect more than 10 virtual networks to your ExpressRoute circuit depending on the bandwidth chosen. Check the [FAQ](expressroute-faqs.md) for more details on the premium add-on.
-* In order to create the connection from the ExpressRoute circuit to the target ExpressRoute virtual network gateway, the number of address spaces advertised from the local or peered virtual networks needs to be equal to or less than **200**. Once the connection has been successfully created, you can add additional address spaces, up to 1,000, to the local or peered virtual networks.
+* In order to create the connection from the ExpressRoute circuit to the target ExpressRoute virtual network gateway, the number of address spaces advertised from the local or peered virtual networks needs to be equal to or less than **200**. Once the connection has been successfully created, you can add more address spaces, up to 1,000, to the local or peered virtual networks.
* Review guidance for [connectivity between virtual networks over ExpressRoute](virtual-network-connectivity-guidance.md).
-## Connect a VNet to a circuit - same subscription
+## Connect a virtual network to a circuit - same subscription
> [!NOTE] > BGP configuration information will not appear if the layer 3 provider configured your peerings. If your circuit is in a provisioned state, you should be able to create connections.
In this tutorial, you learn how to:
:::image type="content" source="./media/expressroute-howto-linkvnet-portal-resource-manager/connection-object.png" alt-text="Connection object screenshot":::
-## Connect a VNet to a circuit - different subscription
+## Connect a virtual network to a circuit - different subscription
You can share an ExpressRoute circuit across multiple subscriptions. The following figure shows a simple schematic of how sharing works for ExpressRoute circuits across multiple subscriptions.
FastPath support for virtual network peering is now in Public preview. Enrollmen
> [!NOTE] > Any connections configured for FastPath in the target subscription will be enrolled in this preview. We do not advise enabling this preview in production subscriptions. > If you already have FastPath configured and want to enroll in the preview feature, you need to do the following:
-> 1. Enroll in the FastPath preview feature with the Azure PowerShell command above.
+> 1. Enroll in the FastPath preview feature with the preceding Azure PowerShell command.
> 1. Disable and then re-enable FastPath on the target connection. ## Clean up resources
-You can delete a connection and unlink your VNet to an ExpressRoute circuit by selecting the **Delete** icon on the page for your connection.
+You can delete a connection and unlink your virtual network from an ExpressRoute circuit by selecting the **Delete** icon on the page for your connection.
:::image type="content" source="./media/expressroute-howto-linkvnet-portal-resource-manager/delete-connection.png" alt-text="Delete connection":::
You can delete a connection and unlink your VNet to an ExpressRoute circuit by s
In this tutorial, you learned how to connect a virtual network to a circuit in the same subscription and in a different subscription. For more information about ExpressRoute gateways, see: [ExpressRoute virtual network gateways](expressroute-about-virtual-network-gateways.md).
-To learn how to configure route filters for Microsoft peering using the Azure portal, advance to the next tutorial.
+To learn how to configure route filters for Microsoft peering using the Azure portal, advance to the next tutorial.
> [!div class="nextstepaction"] > [Configure route filters for Microsoft peering](how-to-routefilter-portal.md)
expressroute Expressroute Howto Routing Portal Resource Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-howto-routing-portal-resource-manager.md
Previously updated : 07/13/2022 Last updated : 08/31/2023
This section helps you create, get, update, and delete the Microsoft peering con
2. Configure Microsoft peering for the circuit. Make sure that you have the following information before you continue.
- * A pair of subnets owned by you and registered in an RIR/IRR. One subnet will be used for the primary link, while the other will be used for the secondary link. From each of these subnets, you'll assign the first usable IP address to your router as Microsoft uses the second usable IP for its router. You have three options for this pair of subnets:
+ * A pair of subnets owned by you and registered in an RIR/IRR. One subnet is used for the primary link, while the other is used for the secondary link. From each of these subnets, you assign the first usable IP address to your router as Microsoft uses the second usable IP for its router. For example, with a primary subnet of 203.0.113.0/30, you assign 203.0.113.1 to your router and Microsoft uses 203.0.113.2. You have three options for this pair of subnets:
* IPv4: Two /30 subnets. These must be valid public IPv4 prefixes. * IPv6: Two /126 subnets. These must be valid public IPv6 prefixes. * Both: Two /30 subnets and two /126 subnets.
This section helps you create, get, update, and delete the Azure private peering
2. Configure Azure private peering for the circuit. Make sure that you have the following items before you continue with the next steps:
- * A pair of subnets that aren't part of any address space reserved for virtual networks. One subnet will be used for the primary link, while the other will be used for the secondary link. From each of these subnets, you'll assign the first usable IP address to your router as Microsoft uses the second usable IP for its router. You have three options for this pair of subnets:
+ * A pair of subnets that aren't part of any address space reserved for virtual networks. One subnet is used for the primary link, while the other is used for the secondary link. From each of these subnets, you assign the first usable IP address to your router as Microsoft uses the second usable IP for its router. You have three options for this pair of subnets:
* IPv4: Two /30 subnets. * IPv6: Two /126 subnets. * Both: Two /30 subnets and two /126 subnets.
expressroute Expressroute Howto Set Global Reach Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-howto-set-global-reach-cli.md
az account set --subscription <your subscription ID>
You can enable ExpressRoute Global Reach between any two ExpressRoute circuits. The circuits must be in supported countries/regions and must have been created at different peering locations. If your subscription owns both circuits, you may select either circuit to run the configuration. However, if the two circuits are in different Azure subscriptions, you must create an authorization key from one of the circuits. Using the authorization key generated from the first circuit, you can enable Global Reach on the second circuit.
+> [!NOTE]
+> ExpressRoute Global Reach configurations can only be seen from the configured circuit.
+ ## Enable connectivity between your on-premises networks When running the command to enable connectivity, note the following requirements for parameter values:
expressroute Expressroute Howto Set Global Reach Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-howto-set-global-reach-portal.md
Before you start configuration, confirm the following criteria:
* If your subscription owns both circuits, you can choose either circuit to run the configuration in the following sections. * If the two circuits are in different Azure subscriptions, you need authorization from one Azure subscription. Then you pass in the authorization key when you run the configuration command in the other Azure subscription.
+> [!NOTE]
+> ExpressRoute Global Reach configurations can only be seen from the configured circuit.
+ ## Enable connectivity Enable connectivity between your on-premises networks. There are separate sets of instructions for circuits that are in the same Azure subscription and circuits that are in different subscriptions.
expressroute Expressroute Howto Set Global Reach https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-howto-set-global-reach.md
Before you start configuration, confirm the following information:
* If your subscription owns both circuits, you can choose either circuit to run the configuration in the following sections. * If the two circuits are in different Azure subscriptions, you need authorization from one Azure subscription. Then you pass in the authorization key when you run the configuration command in the other Azure subscription.
+> [!NOTE]
+> ExpressRoute Global Reach configurations can only be seen from the configured circuit.
+ ## Enable connectivity Enable connectivity between your on-premises networks. There are separate sets of instructions for circuits that are in the same Azure subscription and circuits that are in different subscriptions.
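As a hedged sketch of the same-subscription case, the configuration generally follows this shape; the circuit names, resource groups, and the /29 address prefix are illustrative, and Global Reach requires a /29 IPv4 subnet that doesn't overlap with your other networks.

```azurepowershell-interactive
# Connect the private peerings of two circuits in the same subscription (illustrative names)
$ckt1 = Get-AzExpressRouteCircuit -Name "Circuit1" -ResourceGroupName "RG1"
$ckt2 = Get-AzExpressRouteCircuit -Name "Circuit2" -ResourceGroupName "RG2"
Add-AzExpressRouteCircuitConnectionConfig -Name 'GlobalReachConnection' -ExpressRouteCircuit $ckt1 -PeerExpressRouteCircuitPeering $ckt2.Peerings[0].Id -AddressPrefix '10.0.0.0/29'
Set-AzExpressRouteCircuit -ExpressRouteCircuit $ckt1
```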
expressroute Expressroute Locations Providers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-locations-providers.md
Previously updated : 07/12/2023 Last updated : 08/28/2023
The following table shows connectivity locations and the service providers for e
| **Toronto** | [Cologix TOR1](https://www.cologix.com/data-centers/toronto/tor1/) | 1 | Canada Central | Supported | AT&T NetBond<br/>Bell Canada<br/>CenturyLink Cloud Connect<br/>Cologix<br/>Equinix<br/>IX Reach Megaport<br/>Orange<br/>Telus<br/>Verizon<br/>Zayo | | **Toronto2** | [Allied REIT](https://www.alliedreit.com/property/905-king-st-w/) | 1 | Canada Central | Supported | Fibrenoire | | **Vancouver** | [Cologix VAN1](https://www.cologix.com/data-centers/vancouver/van1/) | 1 | n/a | Supported | Bell Canada<br/>Cologix<br/>Megaport<br/>Telus<br/>Zayo |
-| **Warsaw** | [Equinix WA1](https://www.equinix.com/data-centers/europe-colocation/poland-colocation/warsaw-data-centers/wa1) | 1 | Poland Central | Supported | Equinix |
+| **Warsaw** | [Equinix WA1](https://www.equinix.com/data-centers/europe-colocation/poland-colocation/warsaw-data-centers/wa1) | 1 | Poland Central | Supported | Equinix<br/>Orange Poland |
| **Washington DC** | [Equinix DC2](https://www.equinix.com/locations/americas-colocation/united-states-colocation/washington-dc-data-centers/dc2/)<br/>[Equinix DC6](https://www.equinix.com/data-centers/americas-colocation/united-states-colocation/washington-dc-data-centers/dc6) | 1 | East US<br/>East US 2 | Supported | Aryaka Networks<br/>AT&T NetBond<br/>British Telecom<br/>CenturyLink Cloud Connect<br/>Cologix<br/>Colt<br/>Comcast<br/>Coresite<br/>Cox Business Cloud Port<br/>Crown Castle<br/>Equinix<br/>Internet2<br/>InterCloud<br/>Iron Mountain<br/>IX Reach<br/>Level 3 Communications<br/>Lightpath<br/>Megaport<br/>Neutrona Networks<br/>NTT Communications<br/>Orange<br/>PacketFabric<br/>SES<br/>Sprint<br/>Tata Communications<br/>Telia Carrier<br/>Verizon<br/>Zayo | | **Washington DC2** | [Coresite VA2](https://www.coresite.com/data-center/va2-reston-va) | 1 | East US<br/>East US 2 | n/a | CenturyLink Cloud Connect<br/>Coresite<br/>Intelsat<br/>Megaport<br/>Viasat<br/>Zayo | | **Zurich** | [Interxion ZUR2](https://www.interxion.com/Locations/zurich/) | 1 | Switzerland North | Supported | Colt<br/>Equinix<br/>Intercloud<br/>Interxion<br/>Megaport<br/>Swisscom<br/>Zayo |
expressroute Expressroute Locations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-locations.md
Previously updated : 08/07/2023 Last updated : 08/28/2023
The following table shows locations by service provider. If you want to view ava
| **[NTT SmartConnect](https://cloud.nttsmc.com/cxc/azure.html)** | Supported | Supported | Osaka | | **[Ooredoo Cloud Connect](https://www.ooredoo.com.kw/portal/en/b2bOffConnAzureExpressRoute)** | Supported | Supported | Doha<br/>Doha2<br/>London2<br/>Marseille | | **[Optus](https://www.optus.com.au/enterprise/networking/network-connectivity/express-link/)** | Supported | Supported | Melbourne<br/>Sydney |
-| **[Orange](https://www.orange-business.com/en/products/business-vpn-galerie)** | Supported | Supported | Amsterdam<br/>Amsterdam2<br/>Chicago<br/>Dallas<br/>Dubai2<br/>Dublin2 Frankfurt<br/>Hong Kong<br/>Johannesburg<br/>London<br/>London2<br/>Mumbai2<br/>Melbourne<br/>Paris<br/>Sao Paulo<br/>Silicon Valley<br/>Singapore<br/>Sydney<br/>Tokyo<br/>Toronto<br/>Washington DC |
+| **[Orange](https://www.orange-business.com/en/products/business-vpn-galerie)** | Supported | Supported | Amsterdam<br/>Amsterdam2<br/>Chicago<br/>Dallas<br/>Dubai2<br/>Dublin2<br/>Frankfurt<br/>Hong Kong<br/>Johannesburg<br/>London<br/>London2<br/>Mumbai2<br/>Melbourne<br/>Paris<br/>Sao Paulo<br/>Silicon Valley<br/>Singapore<br/>Sydney<br/>Tokyo<br/>Toronto<br/>Washington DC |
+| **Orange Poland** | Supported | Supported | Warsaw |
| **[Orixcom](https://www.orixcom.com/solutions/azure-expressroute)** | Supported | Supported | Dubai2 | | **[PacketFabric](https://www.packetfabric.com/cloud-connectivity/microsoft-azure)** | Supported | Supported | Amsterdam<br/>Chicago<br/>Dallas<br/>Denver<br/>Las Vegas<br/>London<br/>Los Angeles2<br/>Miami<br/>New York<br/>Seattle<br/>Silicon Valley<br/>Toronto<br/>Washington DC | | **[PCCW Global Limited](https://consoleconnect.com/clouds/#azureRegions)** | Supported | Supported | Chicago<br/>Hong Kong<br/>Hong Kong2<br/>London<br/>Singapore<br/>Singapore2<br/>Tokyo2 |
expressroute Expressroute Monitoring Metrics Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-monitoring-metrics-alerts.md
Last updated 02/08/2023 + # ExpressRoute monitoring, metrics, and alerts This article helps you understand ExpressRoute monitoring, metrics, and alerts using Azure Monitor. Azure Monitor is a one-stop shop for all metrics, alerting, and diagnostic logs across all of Azure.
Metrics explorer supports SUM, MAX, MIN, AVG and COUNT as [aggregation types](..
| Metric | Category | Unit | Aggregation Type | Description | Dimensions | Exportable via Diagnostic Settings? | | | | | | | | | | [BitsInPerSecond](#connectionbandwidth) | Traffic | BitsPerSecond | Average | Bits ingressing Azure per second through ExpressRoute gateway | ConnectionName | Yes |
-| [BitsOutPerSecond](#connectionbandwidth) | Traffic | BitsPerSecond | Average | Bits egressing Azure per second through ExpressRoute gateway | ConnectionName | Yes |
-| DroppedInBitsPerSecond | Traffic | BitsPerSecond | Average | Ingress bits of data dropped per second | ConnectionName | Yes |
-| DroppedOutBitsPerSecond | Traffic | BitPerSecond | Average | Egress bits of data dropped per second | ConnectionName | Yes |
+| [BitsOutPerSecond](#connectionbandwidth) | Traffic | BitsPerSecond | Average | Bits egressing Azure per second through ExpressRoute gateway | ConnectionName | Yes |
### ExpressRoute Direct
expressroute Expressroute Routing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-routing.md
Previously updated : 06/12/2023 Last updated : 09/06/2023
To enable connectivity to other Azure services and infrastructure services, you
## <a name="bgp"></a>Support for BGP communities
-This section provides an overview of how BGP communities get used with ExpressRoute. Microsoft advertises routes in the private, Microsoft and public (deprecated) peering paths with routes tagged with appropriate community values. The rationale for doing so and the details on community values are describe as followed. Microsoft, however doesn't honor any community values tagged to routes advertised to Microsoft.
+This section provides an overview of how BGP communities get used with ExpressRoute. Microsoft advertises routes in the private, Microsoft, and public (deprecated) peering paths, with routes tagged with appropriate community values. The rationale for doing so and the details on community values are described as follows. Microsoft, however, doesn't honor any community values tagged to routes advertised to Microsoft.
For private peering, if you [configure a custom BGP community value](./how-to-configure-custom-bgp-communities.md) on your Azure virtual networks, you'll see this custom value and a regional BGP community value on the Azure routes advertised to your on-premises over ExpressRoute.
In addition to the BGP tag for each region, Microsoft also tags prefixes based o
| Azure Global Services (1) | 12076:5050 | | Azure Active Directory |12076:5060 | | Azure Resource Manager |12076:5070 |
-| Other Office 365 Online services** | 12076:5100 |
+| Other Office 365 Online services (2) | 12076:5100 |
| Microsoft Defender for Identity | 12076:5220 |
+| Microsoft PSTN services (5) | 12076:5250 |
(1) Azure Global Services includes only Azure DevOps at this time.
-(2) Authorization required from Microsoft, refer [Configure route filters for Microsoft Peering](how-to-routefilter-portal.md)
+(2) Authorization required from Microsoft. See [Configure route filters for Microsoft Peering](how-to-routefilter-portal.md).
(3) This community also publishes the needed routes for Microsoft Teams services. (4) CRM Online supports Dynamics v8.2 and below. For higher versions, select the regional community for your Dynamics deployments.
+(5) Use of Microsoft Peering with PSTN services is restricted to specific use cases. See [Using ExpressRoute for Microsoft PSTN services](using-expressroute-for-microsoft-pstn.md).
+ > [!NOTE] > Microsoft does not honor any BGP community values that you set on the routes advertised to Microsoft. >
expressroute Expressroute Troubleshooting Expressroute Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-troubleshooting-expressroute-overview.md
Previously updated : 06/15/2023 Last updated : 08/23/2023
At line:1 char:1
The Address Resolution Protocol (ARP) table provides a mapping of the IP address and MAC address for a particular peering. The ARP table for an ExpressRoute circuit peering provides the following information for each interface (primary and secondary): * Mapping of the IP address for the on-premises router interface to the MAC address
-* Mapping of the IP address for the ExpressRoute router interface to the MAC address
+* Mapping of the IP address for the ExpressRoute router interface to the MAC address (optional)
* Age of the mapping ARP tables can help validate layer 2 configuration and troubleshoot basic layer 2 connectivity issues.
+>[!NOTE]
+> Depending on the hardware platform, the ARP results may vary and only display the *On-premises* interface.
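A hedged sketch of pulling the ARP table with Azure PowerShell follows; the circuit and resource group names are illustrative.

```azurepowershell-interactive
# Fetch the ARP table for private peering on the primary device path (names are illustrative)
Get-AzExpressRouteCircuitARPTable -ResourceGroupName "MyRG" -ExpressRouteCircuitName "MyCircuit" `
  -PeeringType AzurePrivatePeering -DevicePath Primary
```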
+ To learn how to view the ARP table of an ExpressRoute peering and how to use the information to troubleshoot layer 2 connectivity issues, see [Getting ARP tables in the Resource Manager deployment model][ARP]. ## Validate BGP and routes on the MSEE
When your results are ready, you have two sets of them for the primary and secon
* **If you're testing PsPing from on-premises to Azure, received results show matches, but sent results show no matches**: This result indicates that traffic is coming in to Azure but isn't returning to on-premises. Check for return-path routing issues. For example, are you advertising the appropriate prefixes to Azure? Is a user-defined route (UDR) overriding prefixes? * **If you're testing PsPing from Azure to on-premises, sent results show matches, but received results show no matches**: This result indicates that traffic is coming in to on-premises but isn't returning to Azure. Work with your provider to find out why traffic isn't being routed to Azure via your ExpressRoute circuit. * **One MSEE shows no matches, but the other shows good matches**: This result indicates that one MSEE isn't receiving or passing any traffic. It might be offline (for example, BGP/ARP is down).
+ * You can run additional testing to confirm the unhealthy path by advertising a unique /32 on-premises route over the BGP session on this path.
+ * Run "Test your private peering connectivity" using the unique /32 advertised as the on-premise destination address and reveiw the results to confirm the path health.
Your test results for each MSEE device look like the following example:
expressroute Expressroute Troubleshooting Network Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-troubleshooting-network-performance.md
Previously updated : 12/22/2022 Last updated : 08/25/2022
There are three basic steps to use this toolkit for Performance testing.
If the performance test isn't giving you expected results, figuring out why should be a progressive step-by-step process. Given the number of components in the path, a systematic approach will provide a faster path to resolution than jumping around. By troubleshooting systematically, you can prevent doing unnecessary testing multiple times. >[!NOTE]
->The scenario here is a performance issue, not a connectivity issue. The steps would be different if traffic wasn't passing at all.
+>The scenario here is a performance issue, not a connectivity issue. To isolate a connectivity problem, see the [Verifying ExpressRoute connectivity](expressroute-troubleshooting-expressroute-overview.md) article.
First, challenge your assumptions. Is your expectation reasonable? For instance, if you have a 1-Gbps ExpressRoute circuit and 100 ms of latency, it's not reasonable to expect the full 1 Gbps of traffic, given the performance characteristics of TCP over high-latency links. See the [References section](#references) for more on performance assumptions.
Test setup:
\* The latency to Brazil is a good example where the straight-line distance significantly differs from the fiber run distance. The expected latency would be in the neighborhood of 160 ms, but is actually 189 ms. The difference in latency would seem to indicate a network issue somewhere. But the reality is the fiber line doesn't go to Brazil in a straight line. So you should expect an extra 1,000 km or so of travel to get to Brazil from Seattle. >[!NOTE]
->While these numbers should be taken into consideration, they were tested using AzureCT which is based in IPERF in Windows via PowerShell. In this scenario, IPERF does not honor default Windows TCP options for Scaling Factor and uses a much lower Shift Count for the TCP Window size. The numbers represented here were performed using default IPERF values and are for general reference only. By tuning IPERF commands with `-w` switch and a big TCP Window size, better throughput can be obtained over long distances, showing significantly better throughput figures. Also, to ensure an ExpressRoute is using the full bandwidth, it's ideal to run the IPERF in multi-threaded option from multiple machines simultaneously to ensure computing capacity is able to reach maximum link performance and is not limited by processing capacity of a single VM.
+>While these numbers should be taken into consideration, they were tested using AzureCT, which is based on IPERF in Windows via PowerShell. In this scenario, IPERF does not honor default Windows TCP options for Scaling Factor and uses a much lower Shift Count for the TCP Window size. The numbers represented here were performed using default IPERF values and are for general reference only. By tuning IPERF commands with the `-w` switch and a big TCP Window size, better throughput can be obtained over long distances, showing significantly better throughput figures. Also, to ensure an ExpressRoute circuit is using the full bandwidth, it's ideal to run IPERF in multi-threaded mode from multiple machines simultaneously, so that computing capacity can reach maximum link performance and is not limited by the processing capacity of a single VM. To get the best IPERF results on Windows, use `Set-NetTCPSetting -AutoTuningLevelLocal Experimental`. Check your organizational policies before making any changes.
## Next steps
expressroute How To Routefilter Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/how-to-routefilter-portal.md
Previously updated : 07/20/2022 Last updated : 08/31/2023
expressroute Reset Circuit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/reset-circuit.md
-
Title: 'Reset a failed circuit - ExpressRoute: PowerShell: Azure | Microsoft Docs'+
+ Title: 'Reset a failed circuit - ExpressRoute | Microsoft Docs'
description: This article helps you reset an ExpressRoute circuit that is in a failed state. + Last updated 06/30/2023
When an operation on an ExpressRoute circuit doesn't complete successfully, the circuit may go into a 'failed' state. This article helps you reset a failed Azure ExpressRoute circuit.
+## Azure portal
+
+1. Sign in to the [Azure portal](https://portal.azure.com) with your Azure account.
+
+1. Search for **ExpressRoute circuits** in the search box at the top of the portal.
+
+1. Select the ExpressRoute circuit that you want to reset.
-## Reset a circuit
+1. Select **Refresh** from the top menu.
+
+ :::image type="content" source="./media/reset-circuit/refresh-circuit.png" alt-text="Screenshot of refresh button for an ExpressRoute circuit.":::
+
+## Azure PowerShell
+ 1. Install the latest version of the Azure Resource Manager PowerShell cmdlets. For more information, see [Install and configure Azure PowerShell](/powershell/azure/install-azure-powershell).
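A minimal sketch of the reset itself, assuming illustrative circuit and resource group names: reading the circuit and writing it back re-applies the configuration.

```azurepowershell-interactive
# Re-apply the circuit configuration to move it out of the 'failed' state (names are illustrative)
$ckt = Get-AzExpressRouteCircuit -Name "MyCircuit" -ResourceGroupName "MyRG"
Set-AzExpressRouteCircuit -ExpressRouteCircuit $ckt
```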
expressroute Using Expressroute For Microsoft Pstn https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/using-expressroute-for-microsoft-pstn.md
+
+ Title: 'Using ExpressRoute for Microsoft PSTN Services'
+description: ExpressRoute circuits can be used for Microsoft PSTN services, including Operator Connect, Azure Communications Gateway, and Azure Communication Services Direct Routing.
++++ Last updated : 09/06/2023++++
+# Using ExpressRoute for routing traffic to Microsoft PSTN services
+
+An ExpressRoute circuit provides private connectivity to the Microsoft backbone network. ExpressRoute *Microsoft Peering* connects your on-premises networks with public Microsoft services. You can use Microsoft Peering to provide your voice infrastructure endpoints outside Azure with network connectivity to Microsoft PSTN services (in addition to other Microsoft services).
+
+> [!TIP]
+> ExpressRoute circuits can also be used for *Private Peering*, which allows you to connect to private endpoints of your IaaS deployment in Azure regions.
+
+For more information about ExpressRoute, see the [Introduction to ExpressRoute][ExR-Intro] article.
+
+You can use ExpressRoute Microsoft Peering to connect to the following Microsoft PSTN services:
+
+* Operator Connect (including Calling, Conferencing, and Teams Phone Mobile offers)
+* Azure Communications Gateway
+* Azure Communication Services Direct Routing
+
+> [!IMPORTANT]
+> Operator Connect SIP Trunks do not support encryption when using ExpressRoute Microsoft Peering connectivity.
+
+In this article, you'll learn why you might consider using ExpressRoute to connect to these Microsoft PSTN services.
+
+## When to use ExpressRoute Microsoft Peering for Microsoft PSTN services
+
+In certain scenarios, using ExpressRoute Microsoft Peering provides better quality for voice calling than using the internet for your traffic. Microsoft owns one of the largest global networks, and the Microsoft network is optimized to achieve the core objective of offering the best network performance. The Microsoft network uses "cold potato" routing, meaning traffic enters and exits as close as possible to client devices/customer networks to reduce network hops and provide optimal quality of service for voice traffic. The Microsoft network is designed with redundancy and is highly available. For more information about architecture optimization, see [How Microsoft builds its fast and reliable global network][MGN].
+
+### For enterprises managing your own PSTN connectivity
+
+If your PSTN traffic is concentrated in multiple global locations and each location has its own ExpressRoute connection, ExpressRoute Microsoft Peering could be suited to you. This architecture is common for users of Direct Routing who have deployed their own SBCs in sites with ExpressRoute connectivity.
+
+### For Communications Services Providers
+
+We recommend that Communications Services Providers use Peering Service Voice interconnect (sometimes also called MAPSV or MAPS Voice) to connect their networks to the Microsoft network. To configure Peering Service Voice interconnection, follow [Internet peering for Peering Service Voice walkthrough](../internet-peering/walkthrough-communications-services-partner.md).
+
+In some cases, using ExpressRoute Microsoft Peering might be preferable as it allows you to:
+
+* Reuse existing ExpressRoute connectivity to your network instead of creating new Peering Service Voice connectivity.
+* Avoid port scarcity at peering locations.
+* Segregate your voice traffic on smaller circuits than the minimum 10-Gbps connections supported by Peering Service Voice interconnects.
+* Make use of 802.1Q tagging.
+
+Operator Connect providers must ensure the architecture used for network connectivity is compliant with the latest Microsoft Teams *Network Connectivity Specification*. This specification is made available to Operator Connect providers during onboarding.
+
+## Configuring Microsoft Peering for use with Microsoft PSTN services
+
+Multiple Microsoft services (including Microsoft PSTN services, Microsoft 365 services and some Azure PaaS offerings) can be connected via Microsoft Peering. With the use of a *Route Filter*, you can select which service prefixes you want Microsoft to advertise over Microsoft Peering to your on-premises network. To configure a suitable Route Filter for Microsoft PSTN services, follow [Configure route filters for Microsoft Peering][ExRRF], setting *Azure SIP Trunking* as an allowed service community.
+
+All Microsoft PSTN services supported for Microsoft Peering use the 52.120.0.0/15 subnet. The Azure SIP Trunking service community refers to this subnet.
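+
+As a hedged sketch, you can also build such a route filter with Azure PowerShell. The filter and rule names here are placeholders, and the community value should come from `Get-AzBgpServiceCommunity` rather than be hard-coded; the exact service name match may differ in your environment:
+
+```powershell
+# Find the BGP community for the Azure SIP Trunking service. Inspect the
+# full list if the name filter below doesn't match in your environment.
+$community = Get-AzBgpServiceCommunity | Where-Object Name -like "*sip*"
+
+# Create a route filter that allows only that service community.
+$rule = New-AzRouteFilterRuleConfig -Name "Allow-SIP-Trunking" -Access Allow `
+    -RouteFilterRuleType Community `
+    -CommunityList $community.BgpCommunities[0].CommunityValue
+New-AzRouteFilter -Name "PSTN-RouteFilter" -ResourceGroupName "MyResourceGroup" `
+    -Location "westus2" -Rule $rule
+```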
+
+> [!NOTE]
+> You must connect to Microsoft cloud services only over public IP addresses that are owned by you or your connectivity provider and you must adhere to all the defined rules. For more information, see the [ExpressRoute prerequisites](./expressroute-prerequisites.md) page.
+
+## Next steps
+
+* [Create ExpressRoute Microsoft Peering][CreatePeering]
+
+<!--Link References-->
+[ExR-Intro]: ./expressroute-introduction.md
+[CreatePeering]: ./expressroute-howto-routing-portal-resource-manager.md
+[MGN]: https://azure.microsoft.com/blog/how-microsoft-builds-its-fast-and-reliable-global-network/
+[ExRRF]: ./how-to-routefilter-portal.md
external-attack-surface-management Understanding Asset Details https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/external-attack-surface-management/understanding-asset-details.md
The following fields are included in the table in the **Values** section on the
Many organizations opt to obfuscate their registry information. Sometimes contact email addresses end in *@anonymised.email*. This placeholder is used instead of a real contact address. Many fields are optional during registration configuration, so any field with an empty value wasn't included by the registrant. ++
+### Change history
+
+The "Change history" tab displays a list of modifications that have been applied to an asset over time. This information helps you track these changes over time and better understand the lifecycle of the asset. This tab displays a variety of changes, including but not limited to asset states, labels and external IDs. For each change, we list the user who implemented the change and a timestamp.
+
+[ ![Screenshot that shows the Change history tab.](media/change-history-1.png) ](media/change-history-1.png#lightbox)
+++ ## Next steps - [Understand dashboards](understanding-dashboards.md)
external-attack-surface-management Understanding Inventory Assets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/external-attack-surface-management/understanding-inventory-assets.md
All assets are labeled as one of the following states:
These asset states are uniquely processed and monitored to ensure that customers have clear visibility into the most critical assets by default. For instance, "Approved Inventory" assets are always represented in dashboard charts and are scanned daily to ensure data recency. All other kinds of assets are not included in dashboard charts by default; however, users can adjust their inventory filters to view assets in different states as needed. Similarly, "Candidate" assets are only scanned during the discovery process; it's important to review these assets and change their state to "Approved Inventory" if they are owned by your organization. +
+## Tracking inventory changes
+
+Your attack surface is constantly changing, which is why Defender EASM continuously analyzes and updates your inventory to ensure accuracy. Assets are frequently added and removed from inventory, so it's important to track these changes to understand your attack surface and identify key trends. The inventory changes dashboard provides an overview of these changes, displaying the "added" and "removed" counts for each asset type. You can filter the dashboard by two date ranges: either the last 7 or 30 days. For a more granular view of these inventory changes, refer to the "Changes by date" section.
++
+[ ![Screenshot of Inventory Changes screen.](media/inventory-changes-1.png)](media/inventory-changes-1.png#lightbox)
++++ ## Next steps -- [Deploying the EASM Azure resource](deploying-the-defender-easm-azure-resource.md)
+- [Modifying inventory assets](labeling-inventory-assets.md)
- [Understanding asset details](understanding-asset-details.md) - [Using and managing discovery](using-and-managing-discovery.md)
firewall-manager Quick Firewall Policy Terraform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall-manager/quick-firewall-policy-terraform.md
+
+ Title: 'Quickstart: Create an Azure Firewall and a firewall policy - Terraform'
+description: In this quickstart, you deploy an Azure Firewall and a firewall policy using Terraform.
+++ Last updated : 09/05/2023++++
+content_well_notifications:
+ - AI-Contribution
++
+# Quickstart: Create an Azure Firewall and a firewall policy - Terraform
+
+In this quickstart, you use Terraform to create an Azure Firewall and a firewall policy. The firewall policy has an application rule that allows connections to `www.microsoft.com` and a rule that allows connections to Windows Update using the **WindowsUpdate** FQDN tag. A network rule allows UDP connections to a time server at 13.86.101.172.
+
+Also, IP Groups are used in the rules to define the **Source** IP addresses.
+
+[Hashicorp Terraform](https://www.terraform.io/) is an open-source IaC (Infrastructure-as-Code) tool for provisioning and managing cloud infrastructure. It codifies infrastructure in configuration files that describe the desired state for your topology. Terraform enables the management of any infrastructure - such as public clouds, private clouds, and SaaS services - by using [Terraform providers](https://www.terraform.io/language/providers).
+
+For information about Azure Firewall Manager, see [What is Azure Firewall Manager?](overview.md).
+
+For information about Azure Firewall, see [What is Azure Firewall?](../firewall/overview.md).
+
+For information about IP Groups, see [IP Groups in Azure Firewall](../firewall/ip-groups.md).
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
++
+- [Install and configure Terraform](/azure/developer/terraform/quickstart-configure)
+
+## Review and implement the Terraform code
+
+> [!NOTE]
+> The sample code for this article is located in the [Azure Terraform GitHub repo](https://github.com/Azure/terraform/tree/master/quickstart/101-azfw-with-fwpolicy). You can view the log file containing the [test results from current and previous versions of Terraform](https://github.com/Azure/terraform/tree/master/quickstart/101-azfw-with-fwpolicy/TestRecord.md).
+>
+> See more [articles and sample code showing how to use Terraform to manage Azure resources](/azure/terraform)
+
+Multiple Azure resources are defined in the Terraform code. The following resources are defined in the `main.tf` file:
+
+- [azurerm_resource_group](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/resource_group)
+- [azurerm_virtual_network](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/virtual_network)
+- [azurerm_subnet](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/subnet)
+- [azurerm_ip_group](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/ip_group)
+- [azurerm_public_ip](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/public_ip)
+- [azurerm_firewall_policy](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/firewall_policy)
+- [azurerm_firewall_policy_rule_collection_group](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/firewall_policy_rule_collection_group)
+- [azurerm_firewall](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/firewall)
+
+1. Create a directory in which to test the sample Terraform code and make it the current directory.
+
+1. Create a file named `provider.tf` and insert the following code:
+
+ :::code language="Terraform" source="~/terraform_samples/quickstart/101-azfw-with-fwpolicy/provider.tf":::
+
+1. Create a file named `main.tf` and insert the following code:
+
+ :::code language="Terraform" source="~/terraform_samples/quickstart/101-azfw-with-fwpolicy/main.tf":::
+
+1. Create a file named `variables.tf` and insert the following code:
+
+ :::code language="Terraform" source="~/terraform_samples/quickstart/101-azfw-with-fwpolicy/variables.tf":::
+
+1. Create a file named `outputs.tf` and insert the following code:
+
+ :::code language="Terraform" source="~/terraform_samples/quickstart/101-azfw-with-fwpolicy/outputs.tf":::
+
+## Initialize Terraform
++
+## Create a Terraform execution plan
++
+## Apply a Terraform execution plan
++
+## Clean up resources
++
+## Troubleshoot Terraform on Azure
+
+[Troubleshoot common problems when using Terraform on Azure](/azure/developer/terraform/troubleshoot)
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Azure Firewall Manager policy overview](policy-overview.md)
firewall-manager Quick Secure Virtual Hub Terraform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall-manager/quick-secure-virtual-hub-terraform.md
+
+ Title: 'Quickstart: Secure virtual hub using Azure Firewall Manager - Terraform'
+description: In this quickstart, you learn how to secure your virtual hub using Azure Firewall Manager and Terraform.
+++ Last updated : 09/05/2023++++
+content_well_notifications:
+ - AI-Contribution
++
+# Quickstart: Secure your virtual hub using Azure Firewall Manager - Terraform
+
+In this quickstart, you use Terraform to secure your virtual hub using Azure Firewall Manager. The deployed firewall has an application rule that allows connections to `www.microsoft.com`. Two Windows Server 2019 virtual machines are deployed to test the firewall. One jump server is used to connect to the workload server. From the workload server, you can only connect to `www.microsoft.com`.
+
+For more information about Azure Firewall Manager, see [What is Azure Firewall Manager?](overview.md).
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
++
+- [Install and configure Terraform](/azure/developer/terraform/quickstart-configure)
+
+## Review and implement the Terraform code
+
+> [!NOTE]
+> The sample code for this article is located in the [Azure Terraform GitHub repo](https://github.com/Azure/terraform/tree/master/quickstart/201-azfw-with-secure-hub). You can view the log file containing the [test results from current and previous versions of Terraform](https://github.com/Azure/terraform/tree/master/quickstart/201-azfw-with-secure-hub/TestRecord.md).
+>
+> See more [articles and sample code showing how to use Terraform to manage Azure resources](/azure/terraform)
+
+Multiple Azure resources are defined in the Terraform code. The following resources are defined in the `main.tf` file:
+
+- [azurerm_resource_group](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/resource_group)
+- [azurerm_virtual_wan](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/virtual_wan)
+- [azurerm_virtual_hub](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/virtual_hub)
+- [azurerm_virtual_hub_connection](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/virtual_hub_connection)
+- [azurerm_public_ip](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/public_ip)
+- [azurerm_firewall_policy](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/firewall_policy)
+- [azurerm_firewall_policy_rule_collection_group](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/firewall_policy_rule_collection_group)
+- [azurerm_virtual_network](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/virtual_network)
+- [azurerm_subnet](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/subnet)
+- [azurerm_network_interface](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/network_interface)
+- [azurerm_network_security_group](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/network_security_group)
+- [azurerm_network_interface_security_group_association](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/network_interface_security_group_association)
+- [azurerm_windows_virtual_machine](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/windows_virtual_machine)
+- [azurerm_route_table](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/route_table)
+- [azurerm_subnet_route_table_association](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/subnet_route_table_association)
+- [azurerm_virtual_hub_route_table](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/virtual_hub_route_table)
+
+1. Create a directory in which to test the sample Terraform code and make it the current directory.
+
+1. Create a file named `provider.tf` and insert the following code:
+
+ :::code language="Terraform" source="~/terraform_samples/quickstart/201-azfw-with-secure-hub/provider.tf":::
+
+1. Create a file named `main.tf` and insert the following code:
+
+ :::code language="Terraform" source="~/terraform_samples/quickstart/201-azfw-with-secure-hub/main.tf":::
+
+1. Create a file named `variables.tf` and insert the following code:
+
+ :::code language="Terraform" source="~/terraform_samples/quickstart/201-azfw-with-secure-hub/variables.tf":::
+
+1. Create a file named `outputs.tf` and insert the following code:
+
+ :::code language="Terraform" source="~/terraform_samples/quickstart/201-azfw-with-secure-hub/outputs.tf":::
+
+## Initialize Terraform
++
+## Create a Terraform execution plan
++
+## Apply a Terraform execution plan
++
+## Clean up resources
++
+## Troubleshoot Terraform on Azure
+
+[Troubleshoot common problems when using Terraform on Azure](/azure/developer/terraform/troubleshoot)
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Learn about security partner providers](trusted-security-partners.md)
firewall-manager Secure Cloud Network Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall-manager/secure-cloud-network-powershell.md
Start with the first step, to configure your virtual network connections to prop
$VnetRoutingConfig = $Spoke1Connection.RoutingConfiguration # We take $Spoke1Connection as baseline for the future vnet config, all vnets will have an identical config $NoneRT = Get-AzVhubRouteTable -ResourceGroupName $RG -HubName $HubName -Name "noneRouteTable" $NewPropRT = @{}
-$NewPropRT.Add('Id', $NoneRT.id)
+$NewPropRT.Add('Id', $NoneRT.Id)
$PropRTList = @() $PropRTList += $NewPropRT $VnetRoutingConfig.PropagatedRouteTables.Ids = $PropRTList
$SSHRule = New-AzFirewallPolicyNetworkRule -Name PermitSSH -Protocol TCP `
-SourceAddress "10.0.0.0/8" -DestinationAddress "10.0.0.0/8" -DestinationPort 22 $NetCollection = New-AzFirewallPolicyFilterRuleCollection -Name "Management" -Priority 100 -ActionType Allow -Rule $SSHRule $NetGroup = New-AzFirewallPolicyRuleCollectionGroup -Name "Management" -Priority 200 -RuleCollection $NetCollection -FirewallPolicyObject $FWPolicy
-# Add Application Rul
+# Add Application Rule
$ifconfigRule = New-AzFirewallPolicyApplicationRule -Name PermitIfconfig -SourceAddress "10.0.0.0/8" -TargetFqdn "ifconfig.co" -Protocol "http:80","https:443" $AppCollection = New-AzFirewallPolicyFilterRuleCollection -Name "TargetURLs" -Priority 300 -ActionType Allow -Rule $ifconfigRule $NetGroup = New-AzFirewallPolicyRuleCollectionGroup -Name "TargetURLs" -Priority 300 -RuleCollection $AppCollection -FirewallPolicyObject $FWPolicy
firewall Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/features.md
Previously updated : 06/06/2022 Last updated : 08/30/2023
For more information about Availability Zones, see [Regions and Availability Zon
## Unrestricted cloud scalability
-Azure Firewall can scale out as much as you need to accommodate changing network traffic flows, so you don't need to budget for your peak traffic.
+Azure Firewall can scale out as much as you need to accommodate changing network traffic flows, so you don't need to budget for your peak traffic.
## Application FQDN filtering rules
You can configure Azure Firewall to route all Internet-bound traffic to a design
## Web categories
-Web categories lets administrators allow or deny user access to web site categories such as gambling websites, social media websites, and others. Web categories are included in Azure Firewall Standard, but it's more fine-tuned in Azure Firewall Premium. As opposed to the Web categories capability in the Standard SKU that matches the category based on an FQDN, the Premium SKU matches the category according to the entire URL for both HTTP and HTTPS traffic. For more information about Azure Firewall Premium, see [Azure Firewall Premium features](premium-features.md).
+Web categories let administrators allow or deny user access to web site categories such as gambling websites, social media websites, and others. Web categories are included in Azure Firewall Standard, but it's more fine-tuned in Azure Firewall Premium. As opposed to the Web categories capability in the Standard SKU that matches the category based on an FQDN, the Premium SKU matches the category according to the entire URL for both HTTP and HTTPS traffic. For more information about Azure Firewall Premium, see [Azure Firewall Premium features](premium-features.md).
For example, if Azure Firewall intercepts an HTTPS request for `www.google.com/news`, the following categorization is expected: -- Firewall Standard – only the FQDN part will be examined, so `www.google.com` will be categorized as *Search Engine*.
+- Firewall Standard – only the FQDN part is examined, so `www.google.com` is categorized as *Search Engine*.
-- Firewall Premium – the complete URL will be examined, so `www.google.com/news` will be categorized as *News*.
+- Firewall Premium – the complete URL is examined, so `www.google.com/news` is categorized as *News*.
The categories are organized based on severity under **Liability**, **High-Bandwidth**, **Business Use**, **Productivity Loss**, **General Surfing**, and **Uncategorized**. ### Category exceptions
-You can create exceptions to your web category rules. Create a separate allow or deny rule collection with a higher priority within the rule collection group. For example, you can configure a rule collection that allows `www.linkedin.com` with priority 100, with a rule collection that denies **Social networking** with priority 200. This creates the exception for the pre-defined **Social networking** web category.
+You can create exceptions to your web category rules. Create a separate allow or deny rule collection with a higher priority within the rule collection group. For example, you can configure a rule collection that allows `www.linkedin.com` with priority 100, with a rule collection that denies **Social networking** with priority 200. This creates the exception for the predefined **Social networking** web category.
firewall Firewall Preview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/firewall-preview.md
With the Azure Firewall Resource Health check, you can now diagnose and get supp
Starting in August 2023, this preview will be automatically enabled on all firewalls and no action will be required to enable this functionality. For more information, see [Resource Health overview](../service-health/resource-health-overview.md).
+### Top flows (preview) and Flow trace logs (preview)
+
+- The Top flows log shows the top connections that contribute to the highest throughput through the firewall.
+- Flow trace logs show the full journey of a packet in the TCP handshake.
+
+For more information, see [Enable Top flows (preview) and Flow trace logs (preview) in Azure Firewall](enable-top-ten-and-flow-trace.md).
+
+### Auto-learn SNAT routes (preview)
+
+You can configure Azure Firewall to auto-learn both registered and private ranges every 30 minutes. For information, see [Azure Firewall SNAT private IP address ranges](snat-private-range.md#auto-learn-snat-routes-preview).
+
+### Embedded Firewall Workbooks (preview)
+
+Azure Firewall predefined workbooks are two clicks away and fully available from the **Monitoring** section in the Azure Firewall portal UI.
+
+For more information, see [Azure Firewall: New Monitoring and Logging Updates](https://techcommunity.microsoft.com/t5/azure-network-security-blog/azure-firewall-new-monitoring-and-logging-updates/ba-p/3897897#:~:text=Embedded%20Firewall%20Workbooks%20are%20now%20in%20public%20preview)
+ ## Next steps To learn more about Azure Firewall, see [What is Azure Firewall?](overview.md).
firewall Forced Tunneling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/forced-tunneling.md
Title: Azure Firewall forced tunneling
-description: You can configure forced tunneling to route Internet-bound traffic to an additional firewall or network virtual appliance for further processing.
+description: You can configure forced tunneling to route Internet-bound traffic to another firewall or network virtual appliance for further processing.
Previously updated : 06/02/2022 Last updated : 08/30/2023
When you configure a new Azure Firewall, you can route all Internet-bound traffic to a designated next hop instead of going directly to the Internet. For example, you may have a default route advertised via BGP or using User Defined Route (UDR) to force traffic to an on-premises edge firewall or other network virtual appliance (NVA) to process network traffic before it's passed to the Internet. To support this configuration, you must create Azure Firewall with Forced Tunnel configuration enabled. This is a mandatory requirement to avoid service disruption. If this is a pre-existing firewall, you must recreate the firewall in Forced Tunnel mode to support this configuration. For more information, see the [Azure Firewall FAQ](firewall-faq.yml#how-can-i-stop-and-start-azure-firewall) about stopping and restarting a firewall in Forced Tunnel mode.
-Some customers prefer not to expose a public IP address directly to the Internet. In this case, you can deploy Azure Firewall in Forced Tunneling mode without a public IP address. This configuration creates a management interface with a public IP address that is used by Azure Firewall for its operations. The public IP address is used exclusively by the Azure platform and can't be used for any other purpose.The tenant data path network can be configured without a public IP address, and Internet traffic can be forced tunneled to another Firewall or completely blocked.
+Some customers prefer not to expose a public IP address directly to the Internet. In this case, you can deploy Azure Firewall in Forced Tunneling mode without a public IP address. This configuration creates a management interface with a public IP address that is used by Azure Firewall for its operations. The public IP address is used exclusively by the Azure platform and can't be used for any other purpose. The tenant data path network can be configured without a public IP address, and Internet traffic can be forced tunneled to another Firewall or completely blocked.
Azure Firewall provides automatic SNAT for all outbound traffic to public IP addresses. Azure Firewall doesn't SNAT when the destination IP address is a private IP address range per IANA RFC 1918. This logic works perfectly when you egress directly to the Internet. However, with forced tunneling enabled, Internet-bound traffic is SNATed to one of the firewall private IP addresses in the AzureFirewallSubnet. This hides the source address from your on-premises firewall. You can configure Azure Firewall to not SNAT regardless of the destination IP address by adding *0.0.0.0/0* as your private IP address range. With this configuration, Azure Firewall can never egress directly to the Internet. For more information, see [Azure Firewall SNAT private IP address ranges](snat-private-range.md).
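
As a minimal sketch of that setting, assuming placeholder firewall and resource group names:

```powershell
# Placeholder names. Adding 0.0.0.0/0 to the private range list makes the
# firewall treat every destination as private, so no traffic is SNATed.
$azFw = Get-AzFirewall -Name "FW-Name" -ResourceGroupName "RG-Name"
$azFw.PrivateRange = @("0.0.0.0/0")
Set-AzFirewall -AzureFirewall $azFw
```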
Azure Firewall provides automatic SNAT for all outbound traffic to public IP add
## Forced tunneling configuration
-You can configure Forced Tunneling during Firewall creation by enabling Forced Tunnel mode as shown below. To support forced tunneling, Service Management traffic is separated from customer traffic. An additional dedicated subnet named **AzureFirewallManagementSubnet** (minimum subnet size /26) is required with its own associated public IP address. This public IP address is for management traffic. It is used exclusively by the Azure platform and can't be used for any other purpose.
+You can configure Forced Tunneling during Firewall creation by enabling Forced Tunnel mode as shown in the following screenshot. To support forced tunneling, Service Management traffic is separated from customer traffic. Another dedicated subnet named **AzureFirewallManagementSubnet** (minimum subnet size /26) is required with its own associated public IP address. This public IP address is for management traffic. It's used exclusively by the Azure platform and can't be used for any other purpose.
In Forced Tunneling mode, the Azure Firewall service incorporates the Management subnet (AzureFirewallManagementSubnet) for its *operational* purposes. By default, the service associates a system-provided route table to the Management subnet. The only route allowed on this subnet is a default route to the Internet and *Propagate gateway* routes must be disabled. Avoid associating customer route tables to the Management subnet when you create the firewall.
If you enable forced tunneling, Internet-bound traffic is SNATed to one of the f
If your organization uses a public IP address range for private networks, Azure Firewall SNATs the traffic to one of the firewall private IP addresses in AzureFirewallSubnet. However, you can configure Azure Firewall to **not** SNAT your public IP address range. For more information, see [Azure Firewall SNAT private IP address ranges](snat-private-range.md).
-Once you configure Azure Firewall to support forced tunneling, you can't undo the configuration. If you remove all other IP configurations on your firewall, the management IP configuration is removed as well and the firewall is deallocated. The public IP address assigned to the management IP configuration can't be removed, but you can assign a different public IP address.
+Once you configure Azure Firewall to support forced tunneling, you can't undo the configuration. If you remove all other IP configurations on your firewall, the management IP configuration is removed as well, and the firewall is deallocated. The public IP address assigned to the management IP configuration can't be removed, but you can assign a different public IP address.
## Next steps
firewall Integrate With Nat Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/integrate-with-nat-gateway.md
az network vnet subnet update --name AzureFirewallSubnet --vnet-name nat-vnet --
## Next steps
+- For more information, see [Scale Azure Firewall SNAT ports with NAT Gateway for large workloads](https://azure.microsoft.com/blog/scale-azure-firewall-snat-ports-with-nat-gateway-for-large-workloads/).
- [Design virtual networks with NAT gateway](../virtual-network/nat-gateway/nat-gateway-resource.md) - [Integrate NAT gateway with Azure Firewall in a hub and spoke network](../virtual-network/nat-gateway/tutorial-hub-spoke-nat-firewall.md)
firewall Logs And Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/logs-and-metrics.md
The following metrics are available for Azure Firewall:
- Status: Possible values are *Healthy*, *Degraded*, *Unhealthy*. - Reason: Indicates the reason for the corresponding status of the firewall.
- If SNAT ports are used > 95%, they are considered exhausted and the health is 50% with status=**Degraded** and reason=**SNAT port**. The firewall keeps processing traffic and existing connections are not affected. However, new connections may not be established intermittently.
+ If SNAT ports are used > 95%, they're considered exhausted and the health is 50% with status=**Degraded** and reason=**SNAT port**. The firewall keeps processing traffic and existing connections aren't affected. However, new connections may not be established intermittently.
If SNAT ports are used < 95%, then firewall is considered healthy and health is shown as 100%.
The following metrics are available for Azure Firewall:
- There may be various reasons that can cause high latency in Azure Firewall. For example, high CPU utilization, high throughput, or a possible networking issue.
- This metric does not measure end-to-end latency of a given network path. In other words, this latency health probe does not measure how much latency Azure Firewall adds.
+ This metric doesn't measure end-to-end latency of a given network path. In other words, this latency health probe doesn't measure how much latency Azure Firewall adds.
- - When the latency metric is not functioning as expected, a value of 0 appears in the metrics dashboard.
- - As a reference, the average expected latency for a firewall is approximately 1 m/s. This may vary depending on deployment size and environment.
+ - When the latency metric isn't functioning as expected, a value of 0 appears in the metrics dashboard.
+ - As a reference, the average expected latency for a firewall is approximately 1 ms. This may vary depending on deployment size and environment.
+ - The latency probe is based on Microsoft's Ping Mesh technology. So, intermittent spikes in the latency metric are to be expected. These spikes are normal and don't signal an issue with the Azure Firewall. They're part of the standard host networking setup that supports the system.
+
+ As a result, if you experience consistent high latency that lasts longer than typical spikes, consider filing a Support ticket for assistance.
## Next steps
firewall Policy Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/policy-analytics.md
Policy analytics starts monitoring the flows in the DNAT, Network, and Applicati
## Next steps
+- To learn more about Policy Analytics, see [Optimize performance and strengthen security with Policy Analytics for Azure Firewall](https://azure.microsoft.com/blog/optimize-performance-and-strengthen-security-with-policy-analytics-for-azure-firewall/).
- To learn more about Azure Firewall logs and metrics, see [Azure Firewall logs and metrics](logs-and-metrics.md). - To learn more about Azure Firewall structured logs, see [Azure Firewall structured logs](firewall-structured-logs.md).
firewall Premium Deploy Certificates Enterprise Ca https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/premium-deploy-certificates-enterprise-ca.md
To use an Enterprise CA to generate a certificate to use with Azure Firewall Pre
## Next steps
-[Azure Firewall Premium in the Azure portal](premium-portal.md)
+- [Azure Firewall Premium in the Azure portal](premium-portal.md)
+- [Building a POC for TLS inspection in Azure Firewall](https://techcommunity.microsoft.com/t5/azure-network-security-blog/building-a-poc-for-tls-inspection-in-azure-firewall/ba-p/3676723)
+
firewall Premium Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/premium-deploy.md
Let's create an application rule to allow access to sports web sites.
## Next steps
+- [Building a POC for TLS inspection in Azure Firewall](https://techcommunity.microsoft.com/t5/azure-network-security-blog/building-a-poc-for-tls-inspection-in-azure-firewall/ba-p/3676723)
- [Azure Firewall Premium in the Azure portal](premium-portal.md)
firewall Premium Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/premium-features.md
Azure Firewall Premium provides signature-based IDPS to allow rapid detection of
The Azure Firewall signatures/rulesets include: - An emphasis on fingerprinting actual malware, Command and Control, exploit kits, and in the wild malicious activity missed by traditional prevention methods.-- Over 58,000 rules in over 50 categories.
+- Over 67,000 rules in over 50 categories.
- The categories include malware command and control, phishing, trojans, botnets, informational events, exploits, vulnerabilities, SCADA network protocols, exploit kit activity, and more. - 20 to 40+ new rules are released each day. - Low false positive rating by using state-of-the-art malware detection techniques such as global sensor network feedback loop.
firewall Rule Processing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/rule-processing.md
Previously updated : 04/15/2022 Last updated : 08/30/2023
As a result, there's no need to create an explicit deny rule from VNet-B to VNet
## Next steps -- Learn how to [deploy and configure an Azure Firewall](tutorial-firewall-deploy-portal.md).
+- [Learn more about Azure Firewall NAT behaviors](https://techcommunity.microsoft.com/t5/azure-network-security-blog/azure-firewall-nat-behaviors/ba-p/3825834)
+- [Learn how to deploy and configure an Azure Firewall](tutorial-firewall-deploy-portal.md)
- [Learn more about Azure network security](../networking/security/index.yml)
firewall Service Tags https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/service-tags.md
Previously updated : 01/28/2022 Last updated : 08/31/2023
firewall Snat Private Range https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/snat-private-range.md
You can use the Azure portal to specify private IP address ranges for the firewa
## Auto-learn SNAT routes (preview)
-You can configure Azure Firewall to auto-learn both registered and private ranges every 30 minutes. These learned address ranges are considered to be internal to the network and hence traffic to destinations in the learned ranges aren't SNATed. Configure auto-learn SNAT ranges requires Azure Route Server to be deployed in the same VNet as the Azure Firewall. The Firewall must be associated with the Azure Route Server and configured to auto-learn SNAT ranges in the Azure Firewall Policy. You can currently use JSON, Azure PowerShell, or the Azure portal to configure auto-learn SNAT routes.
+You can configure Azure Firewall to auto-learn both registered and private ranges every 30 minutes. These learned address ranges are considered to be internal to the network, so traffic to destinations in the learned ranges isn't SNATed. Configuring auto-learn SNAT ranges requires Azure Route Server to be deployed in the same VNet as the Azure Firewall. The firewall must be associated with the Azure Route Server and configured to auto-learn SNAT ranges in the Azure Firewall Policy. You can currently use an ARM template, Azure PowerShell, or the Azure portal to configure auto-learn SNAT routes.
-### Configure using JSON
+### Configure using an ARM template
You can use the following JSON to configure auto-learn. Azure Firewall must be associated with an Azure Route Server.
Use the following JSON to associate an Azure Route Server:
You can use the portal to associate a Route Server with Azure Firewall to configure auto-learn SNAT routes (preview).
-1. Select your resource group, and then select your firewall.
-2. Select **Overview**.
-3. Add a Route Server.
+Use the portal to complete the following tasks:
-Review learned routes:
-
-1. Select your resource group, and then select your firewall.
-2. Select **Learned SNAT IP Prefixes (preview)** in the **Settings** column.
+- Add a subnet named **RouteServerSubnet** to your existing firewall VNet. The size of the subnet should be at least /27.
+- Deploy a Route Server into the existing firewall VNet. For information about Azure Route Server, see [Quickstart: Create and configure Route Server using the Azure portal](../route-server/quickstart-configure-route-server-portal.md).
+- Add the route server on the firewall **Learned SNAT IP Prefixes (preview)** page.
+ :::image type="content" source="media/snat-private-range/add-route-server.png" alt-text="Screenshot showing firewall add a route server." lightbox="media/snat-private-range/add-route-server.png":::
+- Modify your firewall policy to enable **Auto-learn IP prefixes (preview)** in the **Private IP ranges (SNAT)** section.
+ :::image type="content" source="media/snat-private-range/auto-learn.png" alt-text="Screenshot showing firewall policy Private IP ranges (SNAT) settings." lightbox="media/snat-private-range/auto-learn.png":::
+- You can see the learned routes on the **Learned SNAT IP Prefixes (preview)** page.
## Next steps
firewall Tutorial Firewall Dnat https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/tutorial-firewall-dnat.md
Previously updated : 06/06/2022 Last updated : 08/31/2023 #Customer intent: As an administrator, I want to deploy and configure Azure Firewall DNAT so that I can control inbound Internet access to resources located in a subnet.
You can configure Azure Firewall Destination Network Address Translation (DNAT) to translate and filter inbound Internet traffic to your subnets. When you configure DNAT, the NAT rule collection action is set to **Dnat**. Each rule in the NAT rule collection can then be used to translate your firewall public IP address and port to a private IP address and port. DNAT rules implicitly add a corresponding network rule to allow the translated traffic. For security reasons, the recommended approach is to add a specific Internet source to allow DNAT access to the network and avoid using wildcards. To learn more about Azure Firewall rule processing logic, see [Azure Firewall rule processing logic](rule-processing.md).
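
As a rough PowerShell sketch of such a classic DNAT rule (the source range, firewall public IP, and translated address are hypothetical placeholders):

```powershell
# Hypothetical values: a specific Internet source range, the firewall's
# public IP, and the private IP of the destination VM.
$natRule = New-AzFirewallNatRule -Name "rdp-nat" -Protocol "TCP" `
    -SourceAddress "203.0.113.0/24" -DestinationAddress "198.51.100.10" `
    -DestinationPort "3389" -TranslatedAddress "192.168.1.4" -TranslatedPort "3389"

# NAT rule collections implicitly use the Dnat action.
$natCollection = New-AzFirewallNatRuleCollection -Name "rdp-nat-coll" -Priority 200 -Rule $natRule

# Apply the collection to the firewall created later in this article.
$azFw = Get-AzFirewall -Name "FW-DNAT-test" -ResourceGroupName "RG-DNAT-Test"
$azFw.AddNatRuleCollection($natCollection)
Set-AzFirewall -AzureFirewall $azFw
```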
-In this article, you learn how to:
-
-> [!div class="checklist"]
-> * Set up a test network environment
-> * Deploy a firewall
-> * Create a default route
-> * Configure a DNAT rule
-> * Test the firewall
- > [!NOTE] > This article uses classic Firewall rules to manage the firewall. The preferred method is to use [Firewall Policy](../firewall-manager/policy-overview.md). To complete this procedure using Firewall Policy, see [Tutorial: Filter inbound Internet traffic with Azure Firewall policy DNAT using the Azure portal](tutorial-firewall-dnat-policy.md)
First, create the VNets and then peer them.
7. For **Resource group**, select **RG-DNAT-Test**. 1. For **Name**, type **VN-Hub**. 1. For **Region**, select the same region that you used before.
-1. Select **Next: IP Addresses**.
+1. Select **Next**.
+1. On the **Security** tab, select **Next**.
1. For **IPv4 Address space**, accept the default **10.0.0.0/16**.
-1. Under **Subnet name**, select **default**.
-1. Edit the **Subnet name** and type **AzureFirewallSubnet**.
+1. Under **Subnets**, select **default**.
+1. For **Subnet template**, select **Azure Firewall**.
The firewall will be in this subnet, and the subnet name **must** be AzureFirewallSubnet. > [!NOTE] > The size of the AzureFirewallSubnet subnet is /26. For more information about the subnet size, see [Azure Firewall FAQ](firewall-faq.yml#why-does-azure-firewall-need-a--26-subnet-size).
-10. For **Subnet address range**, type **10.0.1.0/26**.
11. Select **Save**. 1. Select **Review + create**. 1. Select **Create**.
First, create the VNets and then peer them.
1. For **Resource group**, select **RG-DNAT-Test**. 1. For **Name**, type **VN-Spoke**. 1. For **Region**, select the same region that you used before.
-1. Select **Next: IP Addresses**.
+1. Select **Next**.
+1. On the **Security** tab, select **Next**.
1. For **IPv4 Address space**, edit the default and type **192.168.0.0/16**.
-1. Select **Add subnet**.
-1. For the **Subnet name** type **SN-Workload**.
-10. For **Subnet address range**, type **192.168.1.0/24**.
-11. Select **Add**.
+1. Under **Subnets**, select **default**.
+1. For the subnet **Name** type **SN-Workload**.
+1. For **Starting address**, type **192.168.1.0**.
+1. For **Subnet size**, select **/24**.
+1. Select **Save**.
1. Select **Review + create**. 1. Select **Create**.
Now peer the two VNets.
Create a workload virtual machine, and place it in the **SN-Workload** subnet. 1. From the Azure portal menu, select **Create a resource**.
-2. Under **Popular**, select **Windows Server 2019 Datacenter**.
+2. Under **Popular Marketplace products**, select **Windows Server 2019 Datacenter**.
**Basics**
Create a workload virtual machine, and place it in the **SN-Workload** subnet.
**Management**
+1. Select **Next: Monitoring**.
+
+**Monitoring**
+ 1. For **Boot diagnostics**, select **Disable**. 1. Select **Review + Create**. **Review + Create**
-Review the summary, and then select **Create**. This will take a few minutes to complete.
+Review the summary, and then select **Create**. This takes a few minutes to complete.
-After deployment finishes, note the private IP address for the virtual machine. It will be used later when you configure the firewall. Select the virtual machine name, and under **Settings**, select **Networking** to find the private IP address.
+After deployment finishes, note the private IP address for the virtual machine. It is used later when you configure the firewall. Select the virtual machine name. Select **Overview**, and under **Networking** note the private IP address.
[!INCLUDE [ephemeral-ip-note.md](../../includes/ephemeral-ip-note.md)]
After deployment finishes, note the private IP address for the virtual machine.
|Resource group |Select **RG-DNAT-Test** | |Name |**FW-DNAT-test**| |Region |Select the same location that you used previously|
- |Firewall tier|**Standard**|
+ |Firewall SKU|**Standard**|
|Firewall management|**Use Firewall rules (classic) to manage this firewall**| |Choose a virtual network |**Use existing**: VN-Hub| |Public IP address |**Add new**, Name: **fw-pip**.|
After deployment finishes, note the private IP address for the virtual machine.
5. Accept the other defaults, and then select **Review + create**. 6. Review the summary, and then select **Create** to create the firewall.
- This will take a few minutes to deploy.
+ This takes a few minutes to deploy.
7. After deployment completes, go to the **RG-DNAT-Test** resource group, and select the **FW-DNAT-test** firewall. 8. Note the firewall's private and public IP addresses. You'll use them later when you create the default route and NAT rule.
For the **SN-Workload** subnet, you configure the outbound default route to go t
> [!IMPORTANT] > You do not need to configure an explicit route back to the firewall at the destination subnet. Azure Firewall is a stateful service and handles the packets and sessions automatically. If you create this route, you'll create an asymmetrical routing environment that interrupts the stateful session logic and results in dropped packets and connections.
-1. From the Azure portal home page, select **All services**.
-2. Under **Networking**, select **Route tables**.
+1. From the Azure portal home page, select **Create a resource**.
+2. Search for **Route table** and select it.
3. Select **Create**. 5. For **Subscription**, select your subscription. 1. For **Resource group**, select **RG-DNAT-Test**.
For the **SN-Workload** subnet, you configure the outbound default route to go t
1. Select **OK**. 1. Select **Routes**, and then select **Add**. 1. For **Route name**, type **FW-DG**.
-1. For **Address prefix destination**, select **IP Addresses**.
+1. For **Destination type**, select **IP Addresses**.
1. For **Destination IP addresses/CIDR ranges**, type **0.0.0.0/0**. 1. For **Next hop type**, select **Virtual appliance**.
For the **SN-Workload** subnet, you configure the outbound default route to go t
1. For **Destination ports**, type **3389**. 1. For **Translated Address** type the private IP address for the Srv-Workload virtual machine. 1. For **Translated port**, type **3389**.
-1. Select **Add**. This will take a few minutes to complete.
+1. Select **Add**.
+
+This takes a few minutes to complete.
## Test the firewall
frontdoor Apex Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/apex-domain.md
To add a root or apex domain to your Azure Front Door profile, see [Onboard a ro
## DNS CNAME flattening
-The DNS protocol prevents the assignment of CNAME records at the zone apex. For example, if your domain is `contoso.com`, you can create a CNAME record for `myappliation.contoso.com`, but you can't create a CNAME record for `contoso.com` itself.
+The DNS protocol prevents the assignment of CNAME records at the zone apex. For example, if your domain is `contoso.com`, you can create a CNAME record for `myapplication.contoso.com`, but you can't create a CNAME record for `contoso.com` itself.
Azure Front Door doesn't expose the frontend public IP address associated with your Azure Front Door endpoint. So, you can't map an apex domain to an Azure Front Door IP address.
frontdoor Front Door Traffic Acceleration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/front-door-traffic-acceleration.md
Previously updated : 05/05/2022 Last updated : 08/31/2023 zone_pivot_groups: front-door-tiers
Front Door optimizes the traffic path from the end user to the backend server. T
## Select the Front Door edge location for the request (Anycast)
-Globally, [Front Door has over 150 edge locations](edge-locations-by-region.md), or points of presence (PoPs), located in many countries and regions. Every Front Door PoP can serve traffic for any request.
+Globally, [Front Door has over 150 edge locations](edge-locations-by-region.md), or points of presence (PoPs), located in many countries/regions. Every Front Door PoP can serve traffic for any request.
Traffic routed to the Azure Front Door edge locations uses [Anycast](https://en.wikipedia.org/wiki/Anycast) for both DNS (Domain Name System) and HTTP (Hypertext Transfer Protocol) traffic. Anycast allows for user requests to reach the closest edge location in the fewest network hops. This architecture offers better round-trip times for end users by maximizing the benefits of [Split TCP](#connect-to-the-front-door-edge-location-split-tcp).
Front Door's architecture ensures that requests from your end users always reach
Split TCP enables the client's TCP connection to terminate inside a Front Door edge location close to the user. A separate TCP connection is established to the origin, and this separate connection might have a large round-trip time (RTT).
-The diagram below illustrates how three users, in different geographical locations, connect to a Front Door edge location close to their location. Front Door then maintains the longer-lived connection to the origin in Europe:
+The following diagram illustrates how three users, in different geographical locations, connect to a Front Door edge location close to their location. Front Door then maintains the longer-lived connection to the origin in Europe:
![Diagram illustrating how Front Door uses a short TCP connection to the closest Front Door edge location to the user, and a longer TCP connection to the origin.](media/front-door-traffic-acceleration/split-tcp-standard-premium.png)
Establishing a TCP connection requires 3-5 roundtrips from the client to the ser
Split TCP enables the client's TCP connection to terminate inside a Front Door edge location close to the user. A separate TCP connection is established to the backend, and this separate connection might have a large round-trip time (RTT).
-The diagram below illustrates how three users, in different geographical locations, connect to a Front Door edge location close to their location. Front Door then maintains the longer-lived connection to the backend in Europe:
+The following diagram illustrates how three users, in different geographical locations, connect to a Front Door edge location close to their location. Front Door then maintains the longer-lived connection to the backend in Europe:
![Diagram illustrating how Front Door uses a short TCP connection to the closest Front Door edge location to the user, and a longer TCP connection to the backend.](media/front-door-traffic-acceleration/split-tcp-classic.png)
frontdoor Front Door Wildcard Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/front-door-wildcard-domain.md
You can add as many single-level subdomains of the wildcard domain in front-end
- Defining a different route for a subdomain than the rest of the domains (from the wildcard domain). -- Having a different WAF policy for a specific subdomain. For example, `*.contoso.com` allows adding `foo.contoso.com` without having to again prove domain ownership. But it doesn't allow `foo.bar.contoso.com` because it isn't a single level subdomain of `*.contoso.com`. To add `foo.bar.contoso.com` without extra domain ownership validation, `*.bar.contosonews.com` needs to be added.
+- Having a different WAF policy for a specific subdomain. For example, `*.contoso.com` allows adding `foo.contoso.com` without having to again prove domain ownership. But it doesn't allow `foo.bar.contoso.com` because it isn't a single level subdomain of `*.contoso.com`. To add `foo.bar.contoso.com` without extra domain ownership validation, `*.bar.contoso.com` needs to be added.
You can add wildcard domains and their subdomains with certain limitations:
frontdoor Integrate Storage Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/integrate-storage-account.md
+
+ Title: Integrate an Azure Storage account with Azure Front Door
+
+description: This article shows you how to use Azure Front Door to deliver high-bandwidth content by caching blobs from Azure Storage.
++++ Last updated : 08/22/2023++++
+# Integrate an Azure Storage account with Azure Front Door
+
+Azure Front Door can be used to deliver high-bandwidth content by caching blobs from Azure Storage. In this article, you create an Azure Storage account and then enable Front Door to cache and accelerate contents from Azure Storage.
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio).
+
+## Sign in to the Azure portal
+
+Sign in to the [Azure portal](https://portal.azure.com) with your Azure account.
+
+## Create a storage account
+
+A storage account gives access to the Azure Storage services. The storage account represents the highest level of the namespace for accessing each of the Azure Storage service components: Azure Blob, Queue, and Table storage. For more information, see [Introduction to Microsoft Azure Storage](../storage/common/storage-introduction.md).
+
+1. In the Azure portal, select **+ Create a resource** in the upper left corner. The **Create a resource** pane appears.
+
+1. On the **Create a resource** page, search for **Storage account** and select **Storage account** from the list. Then select **Create**.
+
+ :::image type="content" source="./media/integrate-storage-account/create-new-storage-account.png" alt-text="Screenshot of create a storage account.":::
+
+1. On the **Create a storage account** page, enter or select the following information for the new storage account:
+
+ | Setting | Value |
+ | | |
+ | Resource group | Select **Create new** and enter the name **AFDResourceGroup**. You may also select an existing resource group. |
    | Storage account name | Enter a name for the account using 3-24 lowercase letters and numbers only. The name must be unique across Azure, and becomes the host name in the URL that's used to address blob, queue, or table resources for the subscription. To address a container resource in Blob storage, use a URI in the following format: http://*&lt;storageaccountname&gt;*.blob.core.windows.net/*&lt;container-name&gt;*. |
+ | Region | Select an Azure region closest to you from the drop-down list. |
+
+ Leave all other settings as default. Select the **Review + create** tab, and then select **Create**.
+
+1. The creation of the storage account may take a few minutes to complete. Once creation is complete, select **Go to resource** to go to the new storage account resource.
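+
+If you prefer scripting, a minimal PowerShell sketch of the same step follows; the names and region are placeholders, and storage account names must be globally unique:
+
+```powershell
+# Placeholder resource group, account name, and region.
+New-AzResourceGroup -Name "AFDResourceGroup" -Location "westus2"
+
+New-AzStorageAccount -ResourceGroupName "AFDResourceGroup" `
+    -Name "afdstorage$(Get-Random -Maximum 99999)" `
+    -Location "westus2" -SkuName "Standard_LRS" -Kind "StorageV2"
+```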
+
+## Enable Azure Front Door CDN for the storage account
+
+1. From the storage account resource, select **Front Door and CDN** from under **Security + networking** on the left side menu pane.
+
+ :::image type="content" source="./media/integrate-storage-account/storage-endpoint-configuration.png" alt-text="Screenshot of create an AFD endpoint.":::
+
+1. In the **New endpoint** section, enter the following information:
+
+ | Setting | Value |
+ | -- | -- |
+ | Service type | Select **Azure Front Door**. |
+ | Create new/use existing profile | You can create a new Front Door profile or select an existing one. |
+ | Profile name | Enter a name for the Front Door profile. If you selected **Use existing**, choose from the list of available Front Door profiles. |
+ | Endpoint name | Enter your endpoint hostname, such as *contoso1234*. This name is used to access your cached resources at the URL _&lt;endpoint-name + hash value&gt;_.z01.azurefd.net. |
+ | Origin hostname | By default, a new Front Door endpoint uses the hostname of your storage account as the origin server. |
+ | Pricing tier | Select **Standard** if you want to do content delivery or select **Premium** if you want to do content delivery and use security features. |
+ | Caching | *Optional* - Toggle on if you want to [enable caching](front-door-caching.md) for your static content. Choose an appropriate query string behavior. Enable compression if required.|
+    | WAF | *Optional* - Toggle on if you want to protect your endpoint from common vulnerabilities, malicious actors, and bots with [Web Application Firewall](web-application-firewall.md). You can use an existing policy from the WAF policy dropdown or create a new one. |
+    | Private link | *Optional* - Toggle on if you want to keep your storage account private, that is, not exposed to the public internet. Select the region that's the same as or closest to your storage account's region. Set the target subresource to **blob**. |
+
+ :::image type="content" source="./media/integrate-storage-account/security-settings.png" alt-text="Screenshot of the caching, WAF and private link settings for an endpoint.":::
+
+ > [!NOTE]
+    > * With the Standard tier, you can only use custom rules with WAF. To deploy managed rules and bot protection, choose the Premium tier. For a detailed comparison, see [Azure Front Door tier comparison](./standard-premium/tier-comparison.md).
+    > * The Private Link feature is available **only** with the Premium tier.
+
+1. Select **Create** to create the new endpoint. After the endpoint is created, it appears in the endpoint list.
+
+ :::image type="content" source="./media/integrate-storage-account/endpoint-created.png" alt-text="Screenshot of new Front Door endpoint created from Storage account.":::
+
+## Extra features
+
+From the storage account **Front Door and CDN** page, select the endpoint from the list to open the Front Door endpoint configuration page. You can enable more Front Door features, such as the [rules engine](front-door-rules-engine.md), and configure how traffic gets [load balanced](routing-methods.md).
+
+For best practices, refer to [Use Azure Front Door with Azure Storage blobs](scenario-storage-blobs.md).
+
+## Enable SAS
+
+If you want to grant limited access to private storage containers, you can use the Shared Access Signature (SAS) feature of your Azure Storage account. A SAS is a URI that grants restricted access rights to your Azure Storage resources without exposing your account key.
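+
+As a hedged illustration, the following Azure CLI sketch issues a read-only user delegation SAS for a container. The account and container names are placeholders, and the expiry timestamp is an example value.
+
+```bash
+# Generate a read-only SAS token for a private container, valid until the expiry time (UTC).
+az storage container generate-sas \
+    --account-name <storage-account-name> \
+    --name <container-name> \
+    --permissions r \
+    --expiry 2024-12-31T23:59Z \
+    --auth-mode login \
+    --as-user \
+    --https-only \
+    --output tsv
+```
+
+Append the returned token as a query string when you request the blob through your Front Door endpoint.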
+
+## Access CDN content
+
+To access cached content with Azure Front Door, use the Front Door URL provided in the portal. The address for a cached blob has the following format:
+
+http://<*endpoint-name-with-hash-value*\>.z01.azurefd.net/<*myPublicContainer*\>/<*BlobName*\>
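+
+For example, assuming a hypothetical endpoint, container, and blob name, you can verify delivery with `curl`:
+
+```bash
+# Request a blob through the Front Door endpoint and inspect the response headers.
+curl -I "https://contoso1234-abcdef.z01.azurefd.net/myPublicContainer/myBlob.jpg"
+```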
+
+> [!NOTE]
+> After you enable Azure Front Door access to a storage account, all publicly available objects are eligible for Front Door POP (Point-of-presence) caching. If you modify an object that is currently cached in Front Door, the new content won't be available through Azure Front Door until Front Door refreshes its content after the time-to-live period for the cached content expires.
+
+## Add a custom domain
+
+When you use Azure Front Door for content delivery, a custom domain is necessary if you would like your own domain name to be visible in your end-user requests. Having a visible domain name can be convenient for your customers and useful for branding purposes.
+
+From the storage account **Front Door and CDN** page, select **View custom domains** for the Front Door endpoint. On the domains page, you can add a new custom domain to access your storage account. For more information, see [Configure a custom domain with Azure Front Door](./standard-premium/how-to-add-custom-domain.md).
+
+## Purge cached content from Front Door
+
+If you no longer want to cache an object in Azure Front Door, you can purge the cached content.
+
+From the storage account **Front Door and CDN** page, select the Front Door endpoint from the list to open the Front Door endpoint configuration page. Select the **Purge cache** option at the top of the page, and then select the endpoint, domain, and path to purge.
+
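+If you use the Azure CLI, a rough equivalent is shown below; the profile and endpoint names are placeholders.
+
+```bash
+# Purge all cached content under the given path from the Front Door endpoint.
+az afd endpoint purge \
+    --resource-group AFDResourceGroup \
+    --profile-name <front-door-profile> \
+    --endpoint-name <endpoint-name> \
+    --content-paths '/*'
+```
+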
+> [!NOTE]
+> An object that's already cached in Azure Front Door remains cached until the time-to-live period for the object expires or until the endpoint is purged.
+
+## Clean up resources
+
+In the preceding steps, you created an Azure Front Door profile and an endpoint in a resource group. However, if you don't expect to use these resources in the future, you can delete them by deleting the resource group to avoid any charges.
+
+1. From the left-hand menu in the Azure portal, select **Resource groups** and then select **AFDResourceGroup**.
+
+2. On the **Resource group** page, select **Delete resource group**, enter *AFDResourceGroup* in the text box, then select **Delete**.
+
+    This action deletes the resource group, profile, and endpoint that you created in this article.
+
+3. To delete your storage account, select it from the dashboard, then select **Delete** from the top menu.
+
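+To perform the same cleanup with the Azure CLI, a minimal sketch (assuming the resource group name used in this article):
+
+```bash
+# Delete the resource group and everything in it, without waiting for completion.
+az group delete --name AFDResourceGroup --yes --no-wait
+```
+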
+## Next steps
+
+* Learn how to use [Azure Front Door with Azure Storage blobs](scenario-storage-blobs.md)
+* Learn how to [enable Azure Front Door Private Link with Azure Blob Storage](standard-premium/how-to-enable-private-link-storage-account.md)
+* Learn how to [enable Azure Front Door Private Link with Storage Static Website](how-to-enable-private-link-storage-static-website.md)
++
frontdoor Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/standard-premium/faq.md
- Title: 'Azure Front Door: Frequently asked questions'
-description: This page provides answers to frequently asked questions about Azure Front Door Standard/Premium.
----- Previously updated : 05/18/2021---
-# Frequently asked questions for Azure Front Door Standard/Premium (Preview)
-
-This article answers common questions about Azure Front Door features and functionality.
-
-## General
-
-### What is Azure Front Door?
-
-Azure Front Door is a fast, reliable, and secure modern cloud CDN with intelligent threat protection. It provides static and dynamic content acceleration, global load balancing, and enhanced security for your global hyper-scale applications, APIs, websites, and cloud services.
-
-### What features does Azure Front Door support?
-
-Azure Front Door supports:
-
-* Both static content and dynamic application acceleration
-* TLS/SSL offloading and end to end TLS
-* Web Application Firewall
-* Cookie-based session affinity
-* URL path-based routing
-* Free certificates and multiple domain management
-
-For a full list of supported features, see [Overview of Azure Front Door](overview.md).
-
-### What is the difference between Azure Front Door and Azure Application Gateway?
-
-While both Front Door and Application Gateway are layer 7 (HTTP/HTTPS) load balancers, the primary difference is that Front Door is a global service, whereas Application Gateway is a regional service. While Front Door can load balance between your different scale units/clusters/stamp units across regions, Application Gateway allows you to load balance between your VMs/containers within a scale unit.
-
-### When should we deploy an Application Gateway behind Front Door?
-
-The key scenarios why one should use Application Gateway behind Front Door are:
-
-* Front Door can do path-based load balancing only at the global level, but if you want to load balance traffic even further within your virtual network (VNET), you should use Application Gateway.
-* Because Front Door doesn't work at a VM/container level, it can't do Connection Draining. However, Application Gateway allows you to do Connection Draining.
-* With an Application Gateway behind Front Door, you can achieve 100% TLS/SSL offload and route only HTTP requests within your virtual network (VNET).
-* Front Door and Application Gateway both support session affinity. Front Door can direct ensuing traffic from a user session to the same cluster or backend in a given region. Application Gateway can affinitize the traffic to the same server within the cluster.
-
-### Can we deploy Azure Load Balancer behind Front Door?
-
-Azure Front Door needs a public VIP or a publicly available DNS name to route the traffic to. Deploying an Azure Load Balancer behind Front Door is a common use case.
-
-### What protocols does Azure Front Door support?
-
-Azure Front Door supports HTTP, HTTPS and HTTP/2.
-
-### How does Azure Front Door support HTTP/2?
-
-HTTP/2 protocol support is available to clients connecting to Azure Front Door only. The communication to backends in the backend pool is over HTTP/1.1. HTTP/2 support is enabled by default.
-
-### What resources are supported today as part of an origin group?
-
-Origin groups can be composed of two types of origins:
-
-- Public origins include storage accounts, App Service apps, Kubernetes instances, or any other custom hostname that has public connectivity. These origins must be defined either via a public IP address or a publicly resolvable DNS hostname. Members of origin groups can be deployed across availability zones, regions, or even outside of Azure as long as they have public connectivity. Public origins are supported for Azure Front Door Standard and Premium tiers.
-- [Private Link origins](../private-link.md) are available when you use Azure Front Door (Premium).
-
-### What regions is the service available in?
-
-Azure Front Door is a global service and isn't tied to any specific Azure region. The only location you need to specify while creating a Front Door is for the resource group, which specifies where the metadata for the resource group is stored. The Front Door resource itself is created as a global resource, and the configuration is deployed globally to all edge locations.
-
-### Where are the edge locations for Azure Front Door?
-
-For the complete list of Azure Front Door edge locations, see [Azure Front Door edge locations](edge-locations.md).
-
-### Is Azure Front Door a dedicated deployment for my application or is it shared across customers?
-
-Azure Front Door is a globally distributed multi-tenant service. The infrastructure for Front Door is shared across all its customers. By creating a Front Door profile, you're defining the specific configuration required for your application. Changes made to your Front Door don't affect other Front Door configurations.
-
-### Is HTTP->HTTPS redirection supported?
-
-Yes. In fact, Azure Front Door supports host, path, query string redirection, and part of URL redirection. Learn more about [URL redirection](../front-door-url-redirect.md).
-
-### How do I lock down the access to my backend to only Azure Front Door?
-
-The best way to lock down your application to accept traffic only from your specific Front Door instance is to publish your application via Private Endpoint. Network traffic between Front Door and the application traverses over the VNet and a Private Link on the Microsoft backbone network, eliminating exposure from the public internet.
-
-Learn more about the [securing origin for Front Door with Private Link](../private-link.md).
-
-Alternatively, to lock down your application to accept traffic only from your specific Front Door, set up IP ACLs for your backend, and then restrict the traffic to your backend to the specific value of the header 'X-Azure-FDID' sent by Front Door. These steps are detailed as follows:
-
-* Configure IP ACLing for your backends to accept traffic from Azure Front Door's backend IP address space and Azure's infrastructure services only. Refer to the IP details below for ACLing your backend:
-
- * Refer *AzureFrontDoor.Backend* section in [Azure IP Ranges and Service Tags](https://www.microsoft.com/download/details.aspx?id=56519) for Front Door's backend IP address range. You can also use the service tag *AzureFrontDoor.Backend* in your [network security groups](../../virtual-network/network-security-groups-overview.md#security-rules).
- * Azure's [basic infrastructure services](../../virtual-network/network-security-groups-overview.md#azure-platform-considerations) through virtualized host IP addresses: `168.63.129.16` and `169.254.169.254`.
-
- > [!WARNING]
-    > Front Door's backend IP space may change later; however, we'll ensure that, before that happens, the changes are integrated with [Azure IP Ranges and Service Tags](https://www.microsoft.com/download/details.aspx?id=56519). We recommend that you subscribe to [Azure IP Ranges and Service Tags](https://www.microsoft.com/download/details.aspx?id=56519) for any changes or updates.
-
-* Do a GET operation on your Front Door with the API version `2020-01-01` or higher. In the API call, look for `frontdoorID` field. Filter on the incoming header '**X-Azure-FDID**' sent by Front Door to your backend with the value of the field `frontdoorID`. You can also find `Front Door ID` value under the Overview section from Front Door portal page.
-
-* Apply rule filtering in your backend web server to restrict traffic based on the resulting 'X-Azure-FDID' header value.
-
- Here's an example for [Microsoft Internet Information Services (IIS)](https://www.iis.net/):
-
- ``` xml
- <?xml version="1.0" encoding="UTF-8"?>
- <configuration>
- <system.webServer>
- <rewrite>
- <rules>
- <rule name="Filter_X-Azure-FDID" patternSyntax="Wildcard" stopProcessing="true">
- <match url="*" />
- <conditions>
- <add input="{HTTP_X_AZURE_FDID}" pattern="xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" negate="true" />
- </conditions>
- <action type="AbortRequest" />
- </rule>
- </rules>
- </rewrite>
- </system.webServer>
- </configuration>
- ```
-
-* Azure Front Door also supports the *AzureFrontDoor.Frontend* service tag, which provides the list of IP addresses that clients use when connecting to Front Door. You can use the *AzureFrontDoor.Frontend* service tag when you're controlling the outbound traffic that should be allowed to connect to services deployed behind Azure Front Door. Azure Front Door also supports an additional service tag, *AzureFrontDoor.FirstParty*, to integrate internally with other Azure services. See [available service tags](../../virtual-network/service-tags-overview.md#available-service-tags) for more details on Azure Front Door service tags use cases.
-
-### Can the anycast IP change over the lifetime of my Front Door?
-
-The frontend anycast IP for your Front Door should typically not change and may remain static for the lifetime of the Front Door. However, there's **no guarantee** that it will. Don't take any direct dependency on the IP.
-
-### Does Azure Front Door support static or dedicated IPs?
-
-No, Azure Front Door currently doesn't support static or dedicated frontend anycast IPs.
-
-### Does Azure Front Door support x-forwarded-for headers?
-
-Yes, Azure Front Door supports the X-Forwarded-For, X-Forwarded-Host, and X-Forwarded-Proto headers. For X-Forwarded-For, if the header is already present, Front Door appends the client socket IP to it. Otherwise, it adds the header with the client socket IP as the value. For X-Forwarded-Host and X-Forwarded-Proto, the value is overridden.
-
-### How long does it take to deploy an Azure Front Door? Does my Front Door still work when being updated?
-
-Most Rules Engine configuration updates complete in under 20 minutes. You can expect the rule to take effect as soon as the update is completed.
-
- > [!Note]
- > Most custom TLS/SSL certificate updates take from several minutes to an hour to be deployed globally.
-
-Any updates to routes or backend pools are seamless and will cause zero downtime (if the new configuration is correct). Certificate updates won't cause any outage, unless you're switching from 'Azure Front Door Managed' to 'Use your own cert' or the other way around.
--
-## Configuration
-
-### Can Azure Front Door load balance or route traffic within a virtual network?
-
-Azure Front Door (Standard) requires a public IP or a publicly resolvable DNS name to route traffic. Azure Front Door can't route directly to resources in a virtual network. You can use an Application Gateway or an Azure Load Balancer with a public IP to solve this problem.
-
-Azure Front Door (Premium) supports routing traffic to [Private Link origins](../private-link.md).
-
-### What are the various timeouts and limits for Azure Front Door?
-
-Learn about all the documented [timeouts and limits for Azure Front Door](../../azure-resource-manager/management/azure-subscription-service-limits.md#azure-front-door-classic-limits).
-
-### How long does it take for a rule to take effect after being added to the Front Door Rules Engine?
-
-Most rules engine configuration updates complete in under 20 minutes. You can expect the rule to take effect as soon as the update is completed.
-
-### Can I configure Azure CDN behind my Front Door profile or Front Door behind my Azure CDN?
-
-Azure Front Door and Azure CDN can't be configured together because both services use the same Azure edge sites when responding to requests.
-
-## Performance
-
-### How does Azure Front Door support high availability and scalability?
-
-Azure Front Door is a globally distributed multi-tenant platform with a huge amount of capacity to cater to your application's scalability needs. Delivered from the edge of Microsoft's global network, Front Door provides global load-balancing capability that allows you to fail over your entire application or even individual microservices across regions or different clouds.
-
-## TLS configuration
-
-### What TLS versions are supported by Azure Front Door?
-
-All Front Door profiles created after September 2019 use TLS 1.2 as the default minimum.
-
-Front Door supports TLS versions 1.0, 1.1 and 1.2. TLS 1.3 isn't yet supported. Refer to [Azure Front Door end-to-end TLS](../concept-end-to-end-tls.md) for more details.
-
-### Why is HTTPS traffic to my backend failing?
-
-Whether for health probes or for forwarding requests, there are two common reasons why HTTPS connections to your backend might fail:
-
-* **Certificate subject name mismatch**: For HTTPS connections, Front Door expects that your backend presents a certificate from a valid CA with subject name(s) matching the backend hostname. For example, if your backend hostname is set to `myapp-centralus.contosonews.net` and the certificate that your backend presents during the TLS handshake doesn't have `myapp-centralus.contosonews.net` or `*myapp-centralus*.contosonews.net` in the subject name, Front Door refuses the connection, which results in an error.
-  * **Solution**: It isn't recommended from a compliance standpoint, but you can work around this error by disabling the certificate subject name check for your Front Door. You can find this option under Settings in the Azure portal and under BackendPoolsSettings in the API.
-* **Backend hosting certificate from invalid CA**: Only certificates from [valid Certificate Authorities](https://ccadb-public.secure.force.com/microsoft/IncludedCACertificateReportForMSFT) can be used at the backend with Front Door. Certificates from internal CAs or self-signed certificates aren't allowed.
-
-### Can I use client/mutual authentication with Azure Front Door?
-
-No. Although Azure Front Door supports TLS 1.2, which introduced client/mutual authentication in [RFC 5246](https://tools.ietf.org/html/rfc5246), currently, Azure Front Door doesn't support client/mutual authentication.
-
-## Diagnostics and logging
-
-### What types of metrics and logs are available with Azure Front Door?
-
-For information on logs and other diagnostic capabilities, see Monitoring metrics and logs for Front Door.
-
-### What is the retention policy on the diagnostics logs?
-
-Diagnostic logs flow to the customer's storage account, and customers can set the retention policy based on their preference. Diagnostic logs can also be sent to an Event Hub or Azure Monitor logs. For more information, see [Azure Front Door Logging](how-to-logs.md).
-
-### How do I get audit logs for Azure Front Door?
-
-Audit logs are available for Azure Front Door. In the portal, select **Activity Log** in the menu page of your Front Door to access the audit log.
-
-### Can I set alerts with Azure Front Door?
-
-Yes, Azure Front Door does support alerts. Alerts are configured on metrics.
-
-## Billing
-
-### Will I be billed for the Azure Front Door resources that are disabled?
-
-Azure Front Door resources, like Front Door profiles, are not billed if disabled.
-
-## Next steps
-
-Learn how to [create a Front Door Standard/Premium](create-front-door-portal.md).
frontdoor How To Add Security Headers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/standard-premium/how-to-add-security-headers.md
Previously updated : 02/18/2021 Last updated : 08/31/2023
frontdoor How To Enable Private Link Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/standard-premium/how-to-enable-private-link-web-app.md
Previously updated : 06/09/2022 Last updated : 08/31/2023 # Connect Azure Front Door Premium to an App Service origin with Private Link
-This article will guide you through how to configure Azure Front Door Premium tier to connect to your App service privately using the Azure Private Link service.
+This article guides you through how to configure Azure Front Door Premium tier to connect to your App service privately using the Azure Private Link service.
## Prerequisites
Sign in to the [Azure portal](https://portal.azure.com).
## Enable Private Link to an App Service in Azure Front Door Premium
-In this section, you'll map the Private Link service to a private endpoint created in Azure Front Door's private network.
+In this section, you map the Private Link service to a private endpoint created in Azure Front Door's private network.
1. Within your Azure Front Door Premium profile, under *Settings*, select **Origin groups**.
In this section, you'll map the Private Link service to a private endpoint creat
:::image type="content" source="../media/how-to-enable-private-link-app-service/private-endpoint-app-service.png" alt-text="Screenshot of enabling private link to a Web App.":::
-1. The table below has information of what values to select in the respective fields while enabling private link with Azure Front Door. Select or enter the following settings to configure the App service you want Azure Front Door Premium to connect with privately.
+1. The following table describes the values to select in each field when you enable Private Link with Azure Front Door. Select or enter the following settings to configure the App Service that you want Azure Front Door Premium to connect to privately.
| Setting | Value |
| - | -- |
In this section, you'll map the Private Link service to a private endpoint creat
| Priority | Different origins can have different priorities to provide primary, secondary, and backup origins. |
| Weight | 1000 (default). Assign weights to your different origins when you want to distribute traffic. |
| Region | Select the region that is the same as or closest to your origin. |
- | Target sub resource | The type of sub-resource for the resource selected above that your private endpoint will be able to access. You can select *site*. |
+ | Target sub resource | The type of subresource for the resource selected previously that your private endpoint can access. You can select *site*. |
| Request message | Custom message to see while approving the Private Endpoint. |

1. Select **Add** to save your configuration. Then select **Update** to save the origin group settings.
In this section, you'll map the Private Link service to a private endpoint creat
:::image type="content" source="../media/how-to-enable-private-link-app-service/private-endpoint-pending-approval.png" alt-text="Screenshot of pending private endpoint request.":::
-1. Once approved, it should look like the screenshot below. It will take a few minutes for the connection to fully establish. You can now access your app service from Azure Front Door Premium. Direct access to the App Service from the public internet gets disabled after private endpoint gets enabled.
+1. Once approved, it should look like the following screenshot. It takes a few minutes for the connection to fully establish. You can now access your app service from Azure Front Door Premium. Direct access to the App Service from the public internet gets disabled after private endpoint gets enabled.
:::image type="content" source="../media/how-to-enable-private-link-app-service/private-endpoint-approved.png" alt-text="Screenshot of approved endpoint request.":::
frontdoor Troubleshoot Performance Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/troubleshoot-performance-issues.md
+
+ Title: Troubleshoot general performance problems with Azure Front Door
+
+description: In this article, investigate, diagnose, and resolve potential latency or bandwidth problems associated with Azure Front Door and site performance.
+++++ Last updated : 08/30/2023+
+#Customer intent: As an Azure Front Door administrator, I want to diagnose latency and bandwidth problems so that I can determine whether the client, the origin, or the Front Door service is the cause.
++
+# Troubleshoot general performance problems with Azure Front Door
+
+Performance problems can originate in several potential areas: the Azure Front Door service, the origin, the requesting client, or the path between any of these hops. This troubleshooting guide helps you identify which hop along the data path is most likely the root of a problem, and how to resolve the problem.
+
+## Check for known problems
+
+Before you start, check for any known problems on:
+
+- The [Azure Front Door platform](https://azure.status.microsoft/status).
+- Internet service providers (ISPs) in the path.
+- The requesting client's ability to connect and retrieve data.
+
+## Scenario 1: Investigate the origin
+
+If one of the origin servers is slow, then the first request for an object via Azure Front Door is slow. Further, if the content isn't cached at the Azure Front Door point of presence (POP), requests are forwarded to the origin. Serving from the origin negates the benefit of the POP's proximity and local delivery to the requesting client and instead relies on the origin's performance.
+
+### Scenario 1: Environment information needed
+
+- Azure Front Door endpoint name
+ - Endpoint host name
+ - Endpoint custom domain (if applicable)
+ - Origin host name
+- Full URL of the affected file
+
+### Scenario 1: Troubleshooting steps
+
+1. Check the response headers from the affected request.
+
+    To check response headers, use the following `curl` examples in Bash. You can also use your browser's developer tools by selecting the F12 key. Select the **Network** tab, select the relevant file to be investigated, and then select the **Headers** tab. If the file is missing, reload the page with the developer tools open.
+
+ The initial response should have an `x-cache` header with a `TCP_MISS` value. The Azure Front Door POP forwards requests with this value to the origin. The origin sends the return traffic on that same path to the requesting client.
+
+ Here's an example that shows `TCP_MISS`:
+
+ ```bash
+ $ curl -I "https://S*******.z01.azurefd.net/media/EteSQSGXMAYVUN_?format=jpg&name=large"
+ HTTP/2 200
+ cache-control: max-age=604800, must-revalidate
+ content-length: 248381
+ content-type: image/jpeg
+ last-modified: Fri, 05 Feb 2021 15:34:05 GMT
+ accept-ranges: bytes
+ age: 0
+ server: ECS (sjc/4E76)
+    x-xcachep2c-originurl: https://p****.com:443/media/EteSQSGXMAYVUN_?format=jpg&name=large
+ x-xcachep2c-originip: 72.21.91.70
+ access-control-allow-origin: *
+ access-control-expose-headers: Content-Length
+ strict-transport-security: max-age=631138519
+ surrogate-key: media media/bucket/9 media/1357714621109579782
+ x-cache: TCP_MISS
+ x-connection-hash: 8c9ea346f78166a032b347a42d8cc561
+ x-content-type-options: nosniff
+ x-response-time: 26
+ x-tw-cdn: VZ
+ x-azure-ref-originshield: 0MlAkYAAAAACtEkUH8vEbTIFoZe4xuRLOU0pDRURHRTA1MDgAZDM0ZjBhNGUtMjc4
+ x-azure-ref: 0MlAkYAAAAACayEVNiWaKRI61MXUgRe97REFMRURHRTEwMTQAZDM0ZjBhNGUtMjc4
+ date: Wed, 10 Feb 2021 21:29:22 GMT
+ ```
+
+ Here's an example that shows `TCP_HIT`:
+
+ ```bash
+ $ curl -I "https://S*******.z01.azurefd.net/media/EteSQSGXMAYVUN_?format=jpg&name=large"
+ HTTP/2 200
+ cache-control: max-age=604800, must-revalidate
+ content-length: 248381
+ content-type: image/jpeg
+ last-modified: Fri, 05 Feb 2021 15:34:05 GMT
+ accept-ranges: bytes
+ age: 0
+ server: ECS (sjc/4E76)
+ x-xcachep2c-originurl: https://p****.com:443/media/EteSQSGXMAYVUN_?format=jpg&name=large
+ x-xcachep2c-originip: 72.21.91.70
+ access-control-allow-origin: *
+ access-control-expose-headers: Content-Length
+ strict-transport-security: max-age=631138519
+ surrogate-key: media media/bucket/9 media/1357714621109579782
+ x-cache: TCP_HIT
+ x-connection-hash: 8c9ea346f78166a032b347a42d8cc561
+ x-content-type-options: nosniff
+ x-response-time: 26
+ x-tw-cdn: VZ
+ x-azure-ref-originshield: 0MlAkYAAAAACtEkUH8vEbTIFoZe4xuRLOU0pDRURHRTA1MDgAZDM0ZjBhNGUtMjc4Mi00OWVhLWIzNTYtN2MzYj
+ x-azure-ref: 0NVAkYAAAAABHk4Fx0cOtQrp6cHFRf0ocREFMRURHRTEwMDUAZDM0ZjBhNGUtMjc4Mi00OWVhLWIzNTYtN2MzYj
+ date: Wed, 10 Feb 2021 21:29:25 GMT
+ ```
+
+1. Continue to request against the endpoint until the `x-cache` header has a `TCP_HIT` value. (A scripted version of this check is sketched after this list.)
+1. If the performance problem is resolved, the problem was based on the origin's speed and not the performance of Azure Front Door. The owner needs to address the Azure Front Door cache settings or the origin to improve performance.
+
+ If the problem persists, the source might be the client that's requesting the content or the Azure Front Door service. Move to Scenario 2 to identify the source.
+
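+As a minimal sketch of the repeated check in the preceding list, assuming the placeholder endpoint URL from the earlier examples, you can loop the header check in Bash:
+
+```bash
+# Re-request the object a few times and watch the x-cache header
+# change from TCP_MISS to TCP_HIT once the POP caches it.
+URL="https://S*******.z01.azurefd.net/media/EteSQSGXMAYVUN_?format=jpg&name=large"
+for i in {1..5}; do
+  curl -sI "$URL" | grep -i '^x-cache'
+  sleep 2
+done
+```
+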
+## Scenario 2: A single client or location (for example, an ISP) is slow
+
+A single client or location can be slow if there's a bad network route between the requesting client and the Azure Front Door POP. You should rule out any bad route because it affects the distance to the POP, which removes the Azure Front Door POP's proximity benefit.
+
+High latency or low bandwidth could be the result of an ISP problem, if you're using a virtual private network (VPN) or are part of a dispersed corporate network. A corporate network can run all traffic through a central, remote point.
+
+### Scenario 2: Environment information needed
+
+- Azure Front Door endpoint name
+ - Endpoint host name
+ - Endpoint custom domain (if applicable)
+ - Origin host name
+- Full URL of the affected file
+- Requesting client information
+ - Requesting client IP
+ - Requesting client location
+ - Requesting client path to the Azure environment (usually identified with [tracert](/windows-server/administration/windows-commands/tracert), [pathping](/windows-server/administration/windows-commands/pathping), or a similar tool)
+
+### Scenario 2: Troubleshooting steps
+
+1. To check the path to the POP, use [pathping](/windows-server/administration/windows-commands/pathping) or a similar tool for 500 packets to check the network route.
+
+ Pathping has a maximum of 250 queries. To test to 500, run the following query twice:
+
+ ```Console
+ pathping /q 250 <Full URL of Affected File>
+ ```
+
+1. Determine if the traffic is taking a path that would add time or travel to a distant region.
+
+ Look for IP, city, or region codes that don't take a reasonable route based on your geography (for example, traffic in Europe is being routed to the United States) or that have an excessive number of hops.
+1. To rule out requesting client settings, test from a different requesting client in the same region.
+1. If you identify additional hops or remote regions, the problem is with the client accessing the Azure Front Door POP and not with the Azure Front Door service itself. The connectivity or VPN provider needs to address hops between endpoints.
+
+ If you don't identify additional hops or remote regions *and* the content is being served from the cache (`x-cache: TCP_HIT`), the problem is with the Azure Front Door service. You might need to create a support request. Include a reference to this troubleshooting article and the steps that you took.
+
+> [!NOTE]
+> When the content is being served from the origin (`x-cache: TCP_MISS`), see [Scenario 1](troubleshoot-performance-issues.md#scenario-1-investigate-the-origin) earlier in this article.
+
+## Scenario 3: A website loads slowly
+
+In some scenarios, there's no problem with a single file but the performance of a whole (Azure Front Door proxied) webpage is unsatisfactory. A webpage performance tool shows poor site performance compared to a webpage outside Azure Front Door.
+
+A webpage often consists of many files. A website benefits from Azure Front Door only if Azure Front Door is serving each file on a webpage. You must configure Azure Front Door to maximize the benefit.
+
+Consider the following example:
+
+- Origin: `origin.contoso.com`
+- Azure Front Door custom domain: `contoso.com`
+- Page that you're trying to load: `https://contoso.com`
+
+When the page loads, the initial file at the "/" directory calls other files, which build the page. These files are images, JavaScript, text files, and more. If those files aren't called via the Azure Front Door host name (`contoso.com`), the page is not using Azure Front Door. So, if one of the files that the website requests is `http://www.images.fabrikam.com/businessimage.jpg`, the file is not benefiting from the use of Azure Front Door. Instead, the browser on the requesting client is requesting the file directly from the `images.fabrikam.com` server.
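+
+One rough way to spot-check which hosts a page pulls files from (a hedged sketch; it only catches absolute URLs that appear in the raw HTML):
+
+```bash
+# List the unique origins referenced by the page's HTML.
+curl -s "https://contoso.com" | grep -oE 'https?://[^/" ]+' | sort -u
+```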
++
+### Scenario 3: Environment information needed
+
+- Azure Front Door endpoint name
+ - Endpoint host name
+ - Endpoint custom domain (if applicable)
+ - Origin host name
+ - Geographical location of the origin
+- Full URL of the affected webpage
+- Tool and metric that are measuring performance
+
+### Scenario 3: Troubleshooting
+
+1. Review the metric that shows the slower performance.
+
+ > [!IMPORTANT]
+ > Microsoft can't discern what's being measured by tools that it doesn't own.
+1. Open the Azure Front Door webpage in a browser, and then open the developer tools by selecting the F12 key.
+
+    You can use the developer tools in your browser to determine the source of the files being served. To view the *request URL* in the developer tools, select the **Network** tab, select the file that you're investigating, and then select **General**. If the file is missing, reload the page with the developer tools open.
+1. Note the source, or the request URL, of the files.
+1. Identify which files are using the Azure Front Door host name and which files aren't.
+
+ In the preceding example, an image hosted in Azure Front Door would be `https://www.contoso.com/productimage1.jpg`. An image not hosted in Azure Front Door would be `http://www.images.fabrikam.com/businessimage.jpg`.
+1. Test the performance of the file that Azure Front Door is serving, its origin, and (if applicable) the testing webpage.
+
+ If the origin or testing webpage is served from a geographical region closer to the tool that's testing performance, you might need to use a tool or requesting client in another region to examine the Azure Front Door POP's proximity benefit.
+
+ > [!IMPORTANT]
+    > Any files served from outside the Azure Front Door host name won't benefit from Azure Front Door. You might need to redesign the webpage so that all of its files are served through the Front Door host name.
+
+ If files are meant to be cached, be sure to test files that have the response header `x-cache: TCP_HIT`.
+
+1. Take action based on the collected data:
+
+    - If the collected data shows that files are being served from servers outside the Azure Front Door host name, Azure Front Door is working as expected.
+
+ Slowly loading websites might require a change in webpage design. For assistance in optimizing your website to use Azure Front Door, connect with your website design team or with [Microsoft solution providers](https://www.microsoft.com/solution-providers/home).
+
+ > [!NOTE]
+ > The problem of slowly loading websites could take time to review, based on the complexity of a website's design and its file-calling instructions.
+
+ - If the collected data shows that the files' loading performance is better at Azure Front Door compared to the origin or test site, Azure Front Door is working as expected. The source of the problem might be individual client requests. In that case, see [Scenario 1](troubleshoot-performance-issues.md#scenario-1-investigate-the-origin) earlier in this article.
+
+ - If the collected data shows that performance is *not* better at Azure Front Door, you likely need to file a support request for further investigation. Include a reference to this troubleshooting article and the steps that you took.
global-secure-access How To Create Remote Network Custom Ike Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/global-secure-access/how-to-create-remote-network-custom-ike-policy.md
An IPsec tunnel is a bidirectional communication channel. This article provides the steps to set up the policy side of the communication channel by using the Microsoft Graph API. The other side of the communication is configured on your customer premises equipment.
-For more information about creating a remote network and the custom IKE policy, see [Create a remote network](how-to-create-remote-networks.md#create-a-remote-network-with-the-microsoft-entra-admin-center) and [Remote network configurations](reference-remote-network-configurations.md).
+For more information about creating a remote network and the custom IKE policy, see [Create a remote network](how-to-create-remote-networks.md#create-a-remote-network) and [Remote network configurations](reference-remote-network-configurations.md).
## Prerequisites
global-secure-access How To Create Remote Networks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/global-secure-access/how-to-create-remote-networks.md
Previously updated : 07/27/2023 Last updated : 08/30/2023 -+ # How to create a remote network
Before you can set up remote networks, you need to onboard your tenant informati
You MUST complete the email step before selecting the checkbox.
-## Create a remote network with the Microsoft Entra admin center
+## Create a remote network
+
+You can create a remote network in the Microsoft Entra admin center or through the Microsoft Graph API.
+
+# [Microsoft Entra admin center](#tab/microsoft-entra-admin-center)
Remote networks are configured on three tabs. You must complete each tab in order. After completing the tab either select the next tab from the top of the page, or select the **Next** button at the bottom of the page.
The first step is to provide the name and location of your remote network. Compl
- **Region**

1. Select the **Next** button.
- ![Screenshot of the General tab of the create device link process.](media/how-to-create-remote-networks/create-basics-tab.png)
+ ![Screenshot of the basics tab of the create device link process.](media/how-to-create-remote-networks/create-basics-tab.png)
### Connectivity
You can assign the remote network to a traffic forwarding profile when you creat
The final tab in the process is to review all of the settings that you provided. Review the details provided here and select the **Create remote network** button.
-## Create remote networks using the Microsoft Graph API
+# [Microsoft Graph API](#tab/microsoft-graph-api)
Global Secure Access remote networks can be viewed and managed using Microsoft Graph on the `/beta` endpoint. Creating a remote network and assigning a traffic forwarding profile are separate API calls.
-### Create a remote network
1. Sign in to [Graph Explorer](https://aka.ms/ge).
1. Select POST as the HTTP method.
1. Select BETA as the API version.
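
As a rough, hedged illustration of the request these steps build (the body fields here are assumptions; check the Microsoft Graph beta reference for the exact schema), an equivalent call with `curl` looks like:

```bash
# Create a remote network via the Microsoft Graph beta endpoint.
# $TOKEN must hold a bearer token with the NetworkAccess.ReadWrite.All permission.
curl -X POST "https://graph.microsoft.com/beta/networkAccess/connectivity/remoteNetworks" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{ "name": "Company_Branch_1", "region": "eastUS" }'
```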
Associating a traffic forwarding profile to your remote network using the Micros
``` 1. Select **Run query** to update the remote network.++
+## Verify your remote network configurations
+
+There are a few things to consider and verify when creating remote networks. You may need to double-check some settings.
+
+- **Verify IKE crypto profile**: The crypto profile (IKE phase 1 and phase 2 algorithms) set for a device link should match what has been set on the CPE. If you chose the **default IKE policy**, ensure that your CPE is set up with the crypto profile specified in the [Remote network configurations](reference-remote-network-configurations.md) reference article.
+
+- **Verify pre-shared key**: Compare the pre-shared key (PSK) you specified when creating the device link in Microsoft Global Secure Access with the PSK you specified on your CPE. This detail is added on the **Security** tab during the **Add a link** process. For more information, see [How to manage remote network device links](how-to-manage-remote-network-device-links.md#add-a-device-link-using-the-microsoft-entra-admin-center).
+
+- **Verify local and peer BGP IP addresses**: The public IP addresses and BGP addresses specified while creating a device link in Microsoft Global Secure Access should match what you specified when configuring the CPE.
+ - The local and peer BGP addresses are reversed between the CPE and what is entered in Global Secure Access.
+ - **CPE**: Local BGP IP address = IP1, Peer BGP IP address = IP2
+ - **Global Secure Access**: Local BGP IP address = IP2, Peer BGP IP address = IP1
+ - Choose an IP address for Global Secure Access that doesn't overlap with your on-premises network.
+ - The same rule applies to ASNs.
+
+- **Verify ASN**: Global Secure Access uses BGP to advertise routes between two autonomous systems: your network and Microsoft's. These autonomous systems should have different ASNs.
+ - When creating a remote network in the Microsoft Entra admin center, use your network's ASN.
+ - When configuring your CPE, use Microsoft's ASN. Go to **Global Secure Access** > **Devices** > **Remote Networks**. Select **Links** and confirm the value in the **Link ASN** column.
+
+- **Verify your public IP address**: In a test environment or lab setup, the public IP address of your CPE may change unexpectedly. This change can cause the IKE negotiation to fail even though everything remains the same.
+ - If you encounter this scenario, complete the following steps:
+ - Update the public IP address in the crypto profile of your CPE.
+ - Go to the **Global Secure Access** > **Devices** > **Remote Networks**.
+ - Select the appropriate remote network, delete the old tunnel, and recreate a new tunnel with the updated public IP address.
+
+- **Port forwarding**: In some situations, the ISP router can also be a network address translation (NAT) device. A NAT converts the private IP addresses of home devices to a single public, internet-routable address.
+ - Generally, a NAT device changes both the IP address and the port. This port changing is the root of the problem.
+ - For IPsec tunnels to work, Global Secure Access uses port 500. This port is where IKE negotiation happens.
+ - If the ISP router changes this port to something else, Global Secure Access can't identify this traffic and negotiation fails.
+ - As a result, phase 1 of IKE negotiation fails and the tunnel isn't established.
+ - To remediate this failure, complete the port forwarding on your device, which tells the ISP router to not change the port and forward it as-is.
[!INCLUDE [Public preview important note](./includes/public-preview-important-note.md)]
global-secure-access How To Simulate Remote Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/global-secure-access/how-to-simulate-remote-network.md
+
+ Title: Extend remote network connectivity to Azure virtual networks
+description: Configure Azure resources to simulate remote network connectivity to Microsoft's Security Edge Solutions, Microsoft Entra Internet Access and Microsoft Entra Private Access.
++ Last updated : 08/28/2023+++++
+# Create a remote network using Azure virtual networks
+
+Organizations may want to extend the capabilities of Microsoft Entra Internet Access to entire networks, not just the individual devices on which they can [install the Global Secure Access Client](how-to-install-windows-client.md). This article shows how to extend these capabilities to an Azure virtual network hosted in the cloud. Similar principles may be applied to a customer's on-premises network equipment.
++
+## Prerequisites
+
+In order to complete the following steps, you must have these prerequisites in place.
+
+- An Azure subscription and permission to create resources in the [Azure portal](https://portal.azure.com).
+ - A basic understanding of [site-to-site VPN connections](/azure/vpn-gateway/tutorial-site-to-site-portal).
+- A Microsoft Entra ID tenant with the [Global Secure Access Administrator](/azure/active-directory/roles/permissions-reference#global-secure-access-administrator) role assigned.
+- Completed the [remote network onboarding steps](how-to-create-remote-networks.md#onboard-your-tenant-for-remote-networks).
+
+## Infrastructure creation
+
+Building this functionality out in Azure helps organizations understand how Microsoft Entra Internet Access works in a broader implementation. The resources we create in Azure correspond to on-premises concepts in the following ways:
+
+- The **[virtual network](#virtual-network)** corresponds to your on-premises IP address space.
+- The **[virtual network gateway](#virtual-network-gateway)** corresponds to an on-premises virtual private network (VPN) router. This device is sometimes referred to as customer premises equipment (CPE).
+- The **[local network gateway](#local-network-gateway)** corresponds to the Microsoft side of the connection where traffic would flow to from your on-premises VPN router. The information provided by Microsoft as part of the [remote network onboarding steps](how-to-create-remote-networks.md#onboard-your-tenant-for-remote-networks) is used here.
+- The **[connection](#create-site-to-site-vpn-connection)** links the two network gateways and contains the settings required to establish and maintain connectivity.
+- The **[virtual machine](#virtual-machine)** corresponds to client devices on your on-premises network.
+
+In this document, we use the following default values. Feel free to configure these settings according to your own requirements.
+
+**Subscription:** Visual Studio Enterprise
+**Resource group name:** Network_Simulation
+**Region:** East US
+
+### Resource group
+
+Create a resource group to contain all of the necessary resources.
+
+1. Sign in to the [Azure portal](https://portal.azure.com) with permission to create resources.
+1. Select **Create a resource**.
+1. Search for **Resource group** and choose **Create** > **Resource group**.
+1. Select your **Subscription**, **Region**, and provide a name for your **Resource group**.
+1. Select **Review + create**.
+1. Confirm your details, then select **Create**.
+
+> [!TIP]
+> If you're using this article for testing Microsoft Entra Internet Access, you may clean up all related Azure resources by deleting the resource group you create after you're done.
+
+### Virtual network
+
+Next we need to create a virtual network inside of our resource group, then add a gateway subnet that we'll use in a future step.
+
+1. From the Azure portal, select **Create a resource**.
+1. Select **Networking** > **Virtual Network**.
+1. Select the **Resource group** created previously.
+1. Provide your network with a **Name**.
+1. Leave the default values for the other fields.
+1. Select **Review + create**.
+1. Select **Create**.
+
+When the virtual network is created, select **Go to resource** or browse to it inside of the resource group and complete the following steps:
+
+1. Select **Subnets**.
+1. Select **+ Gateway subnet**.
+1. Leave the defaults and select **Save**.
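+
+If you prefer scripting, a rough Azure CLI equivalent follows; the network name and address ranges are assumptions, not values from this article.
+
+```bash
+# Create the virtual network and the gateway subnet the VPN gateway needs.
+az network vnet create \
+    --resource-group Network_Simulation \
+    --name Simulated-Branch-VNet \
+    --address-prefixes 10.10.0.0/16 \
+    --subnet-name default \
+    --subnet-prefixes 10.10.0.0/24
+az network vnet subnet create \
+    --resource-group Network_Simulation \
+    --vnet-name Simulated-Branch-VNet \
+    --name GatewaySubnet \
+    --address-prefixes 10.10.255.0/27
+```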
+
+### Virtual network gateway
+
+Next we need to create a virtual network gateway inside of our resource group.
+
+1. From the Azure portal, select **Create a resource**.
+1. Select **Networking** > **Virtual network gateway**.
+1. Provide your virtual network gateway with a **Name**.
+1. Select the appropriate region.
+1. Select the **Virtual network** created in the previous section.
+1. Create a **Public IP address** and **SECOND PUBLIC IP ADDRESS** and provide them with descriptive names.
+ 1. Set their **Availability zone** to **Zone-redundant**.
+1. Set **Configure BGP** to **Enabled**
+ 1. Set the **Autonomous system number (ASN)** to an appropriate value.
+ 1. Don't use any reserved ASN numbers or the ASN provided as part of [onboarding to Microsoft Entra Internet Access](how-to-create-remote-networks.md#onboard-your-tenant-for-remote-networks). For more information, see the article [Global Secure Access remote network configurations](reference-remote-network-configurations.md#valid-autonomous-system-number-asn).
+1. Leave all other settings their defaults or blank.
+1. Select **Review + create**, confirm your settings.
+1. Select **Create**.
+ 1. You can continue to the following sections while the gateway is created.
++
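+A hedged CLI sketch of the same gateway, simplified to a single public IP address and a non-zone-redundant SKU; the ASN is an example value:
+
+```bash
+# Create a public IP and a route-based VPN gateway with BGP enabled.
+# Gateway creation can take 30 minutes or more.
+az network public-ip create \
+    --resource-group Network_Simulation \
+    --name Branch-GW-IP \
+    --sku Standard
+az network vnet-gateway create \
+    --resource-group Network_Simulation \
+    --name Branch-VNet-GW \
+    --vnet Simulated-Branch-VNet \
+    --gateway-type Vpn \
+    --vpn-type RouteBased \
+    --sku VpnGw1 \
+    --public-ip-address Branch-GW-IP \
+    --asn 65010 \
+    --no-wait
+```
+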
+### Local network gateway
+
+You need to create two local network gateways: one for your primary endpoint and one for your secondary endpoint.
+
+You use the BGP IP addresses, Public IP addresses, and ASN values provided by Microsoft when you [onboard to Microsoft Entra Internet Access](how-to-create-remote-networks.md#onboard-your-tenant-for-remote-networks) in this section.
+
+1. From the Azure portal, select **Create a resource**.
+1. Select **Networking** > **Local network gateway**.
+1. Select the **Resource group** created previously.
+1. Select the appropriate region.
+1. Provide your local network gateway with a **Name**.
+1. For **Endpoint**, select **IP address**, then provide the IP address provided in the Microsoft Entra admin center.
+1. Select **Next: Advanced**.
+1. Set **Configure BGP** to **Yes**
+ 1. Set the **Autonomous system number (ASN)** to the appropriate value provided in the Microsoft Entra admin center.
+ 1. Set the **BGP peer IP address** to the appropriate value provided in the Microsoft Entra admin center.
+1. Select **Review + create**, confirm your settings.
+1. Select **Create**.
++
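+A rough CLI equivalent for one of the two local network gateways; the IP, ASN, and BGP peer values are placeholders for the values Microsoft provides during onboarding:
+
+```bash
+# Represent Microsoft's side of the tunnel as a local network gateway.
+az network local-gateway create \
+    --resource-group Network_Simulation \
+    --name Microsoft-Primary-LNG \
+    --gateway-ip-address <microsoft-public-ip> \
+    --asn <microsoft-asn> \
+    --bgp-peering-address <microsoft-bgp-ip>
+```
+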
+### Virtual machine
+
+1. From the Azure portal, select **Create a resource**.
+1. Select **Virtual machine**.
+1. Select the **Resource group** created previously.
+1. Provide a **Virtual machine name**.
+1. Select the image you want to use. For this example, we choose **Windows 11 Pro, version 22H2 - x64 Gen2**.
+1. Select **Run with Azure Spot discount** for this test.
+1. Provide a **Username** and **Password** for your VM.
+1. Move to the **Networking** tab.
+ 1. Select the **Virtual network** created previously.
+ 1. Keep the other networking defaults.
+1. Move to the **Management** tab
+ 1. Check the box **Login with Azure AD**
+ 1. Keep the other management defaults.
+1. Select **Review + create**, confirm your settings.
+1. Select **Create**.
+
+You may choose to lock down remote access in the network security group to only a specific network or IP address.
+
+### Create Site-to-site VPN connection
+
+You create two connections: one for your primary gateway and one for your secondary gateway.
+
+1. From the Azure portal, select **Create a resource**.
+1. Select **Networking** > **Connection**.
+1. Select the **Resource group** created previously.
+1. Under **Connection type**, select **Site-to-site (IPsec)**.
+1. Provide a **Name** for the connection, and select the appropriate **Region**.
+1. Move to the **Settings** tab.
+ 1. Select your **Virtual network gateway** and **Local network gateway** created previously.
+ 1. Create a **Shared key (PSK)** that you'll use in a future step.
+ 1. Check the box for **Enable BGP**.
+ 1. Keep the other default settings.
+1. Select **Review + create**, confirm your settings.
+1. Select **Create**.
++
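+A hedged CLI sketch for one of the two connections; the gateway names match the earlier sketches and the shared key is a placeholder you define:
+
+```bash
+# Create the site-to-site connection with BGP enabled.
+az network vpn-connection create \
+    --resource-group Network_Simulation \
+    --name Primary-S2S-Connection \
+    --vnet-gateway1 Branch-VNet-GW \
+    --local-gateway2 Microsoft-Primary-LNG \
+    --shared-key <pre-shared-key> \
+    --enable-bgp
+```
+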
+## Enable remote connectivity in Microsoft Entra
+
+### Create a remote network
+
+You need the public IP addresses of your virtual network gateway. You can find these IP addresses by browsing to the Configuration page of your virtual and local network gateways. You complete the **Add a link** steps twice to create links for your primary and secondary connections.
++
+1. Sign in to the **[Microsoft Entra admin center](https://entra.microsoft.com)** as a [Global Secure Access Administrator](../active-directory/roles/permissions-reference.md#global-secure-access-administrator).
+1. Browse to **Global Secure Access Preview** > **Remote network** > **Create remote network**.
+1. Provide a **Name** for your network, select an appropriate **Region**, then select **Next: Connectivity**.
+1. On the **Connectivity** tab, select **Add a link**.
+ 1. On the **General** tab:
+ 1. Provide a **Link name** and set **Device type** to **Other**.
+ 1. Set the **IP address** to the primary IP address of your virtual network gateway.
+ 1. Set the **Local BGP address** to the primary private BGP IP address of your local network gateway.
+ 1. Set the **Peer BGP address** to the BGP IP address of your virtual network gateway.
+ 1. Set the **Link ASN** to the ASN of your virtual network gateway.
+ 1. Leave **Redundancy** set to **No redundancy**.
+ 1. Set **Bandwidth capacity (Mbps)** to the appropriate setting.
+ 1. Select Next to continue to the **Details** tab.
+ 1. On the **Details** tab:
+ 1. Leave the defaults selected unless you made a different selection previously.
+ 1. Select Next to continue to the **Security** tab.
+ 1. On the **Security** tab:
+ 1. Enter the **Pre-shared key (PSK)** set in the [previous section when creating the site to site connection](#create-site-to-site-vpn-connection).
+ 1. Select **Add link**.
+ 1. Select **Next: Traffic profiles**.
+1. On the **Traffic profiles** tab:
+ 1. Check the box for the **Microsoft 365 traffic profile**.
+ 1. Select **Next: Review + create**.
+1. Confirm your settings and select **Create remote network**.
+
+For more information about remote networks, see the article [How to create a remote network](how-to-create-remote-networks.md)
+
+## Verify connectivity
+
+After you create the remote networks in the previous steps, it may take a few minutes for the connection to be established. From the Azure portal, we can validate that the VPN tunnel is connected and that BGP peering is successful.
+
+1. In the Azure portal, browse to the **virtual network gateway** created earlier and select **Connections**.
+1. Each of the connections should show a **Status** of **Connected** once the configuration is applied and successful.
+1. Browsing to **BGP peers** under the **Monitoring** section allows you to confirm that BGP peering is successful. Look for the peer addresses provided by Microsoft. Once configuration is applied and successful, the **Status** should show **Connected**.
++
+You can also use the virtual machine you created to validate that traffic is flowing to Microsoft 365 locations like SharePoint Online. Browsing to resources in SharePoint or Exchange Online should result in traffic on your virtual network gateway. This traffic can be seen by browsing to [Metrics on the virtual network gateway](/azure/vpn-gateway/monitor-vpn-gateway#analyzing-metrics) or by [Configuring packet capture for VPN gateways](/azure/vpn-gateway/packet-capture).
+
+## Next steps
+
+- [Tutorial: Create a site-to-site VPN connection in the Azure portal](/azure/vpn-gateway/tutorial-site-to-site-portal)
global-secure-access Reference Remote Network Configurations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/global-secure-access/reference-remote-network-configurations.md
You can use any values *except* for the following reserved ASNs:
- Azure reserved ASNs: 12076, 65517, 65518, 65519, 65520, 8076, 8075
- IANA reserved ASNs: 23456, >= 64496 && <= 64511, >= 65535 && <= 65551, 4294967295
-- 65486
+- 65476
### Valid enums
governance Attestation Structure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/attestation-structure.md
Attestations are used by Azure Policy to set compliance states of resources or scopes targeted by [manual policies](effects.md#manual). They also allow users to provide additional metadata or link to evidence which accompanies the attested compliance state. > [!NOTE]
-> Attestations are available only through the [Azure Resource Manager (ARM) API](/rest/api/policy/attestations).
+> Attestations can be created and managed only through Azure Policy [Azure Resource Manager (ARM) API](/rest/api/policy/attestations), [PowerShell](/powershell/module/az.policyinsights) or [Azure CLI](/cli/azure/policy/attestation).
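+
+As a hedged illustration, an attestation can be created from the Azure CLI; the attestation name and assignment ID are placeholders:
+
+```bash
+# Attest that the resources targeted by a manual policy assignment are compliant.
+az policy attestation create \
+    --attestation-name <attestation-name> \
+    --policy-assignment <policy-assignment-id> \
+    --compliance-state Compliant
+```
+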
## Best practices
governance Compliance States https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/compliance-states.md
An applicable resource has a compliance state of exempt for a policy assignment
> [!NOTE]
> _Exempt_ is different from _excluded_. For more details, see [scope](./scope.md).
-### Unknown (preview)
+### Unknown
Unknown is the default compliance state for definitions with `manual` effect, unless the default has been explicitly set to compliant or non-compliant. This state indicates that an [attestation](./attestation-structure.md) of compliance is warranted. This compliance state only occurs for policy assignments with `manual` effect.
+ ### Protected (preview)
+
+ Protected state signifies that the resource is covered under an assignment with a [denyAction](./effects.md#denyaction-preview) effect.
+ ### Not registered

This compliance state is visible in the portal when the Azure Policy Resource Provider hasn't been registered, or when the account logged in doesn't have permission to read compliance data.
So how is the aggregate compliance state determined if multiple resources or pol
1. Compliant
1. Error
1. Conflicting
+1. Protected (preview)
1. Exempted
1. Unknown (preview)
governance Definition Structure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/definition-structure.md
Title: Details of the policy definition structure description: Describes how policy definitions are used to establish conventions for Azure resources in your organization. Previously updated : 08/29/2022 Last updated : 08/15/2023
+
# Azure Policy definition structure

Azure Policy establishes conventions for resources. Policy definitions describe resource compliance
always stay the same, however their values change based on the individual fillin
Parameters work the same way when building policies. By including parameters in a policy definition, you can reuse that policy for different scenarios by using different values.
-> [!NOTE]
-> Parameters may be added to an existing and assigned definition. The new parameter must include the
-> **defaultValue** property. This prevents existing assignments of the policy or initiative from
-> indirectly being made invalid.
+Parameters may be added to an existing and assigned definition. The new parameter must include the
+**defaultValue** property. This prevents existing assignments of the policy or initiative from
+indirectly being made invalid.
-> [!NOTE]
-> Parameters can't be removed from a policy definition that's been assigned.
+Parameters can't be removed from a policy definition because there may be an assignment that sets the parameter value, and that reference would break. Instead of removing the parameter, you can classify it as deprecated in the parameter metadata.
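
As a minimal sketch (hypothetical definition and parameter names), adding a new parameter with a **defaultValue** to an assigned definition, and marking an older parameter as deprecated in its metadata, could look like this:

```azurecli-interactive
# The new parameter carries a defaultValue so existing assignments stay valid;
# the old parameter is marked deprecated in its metadata instead of being removed.
az policy definition update \
  --name 'allowed-locations-example' \
  --params '{
    "listOfAllowedLocations": {
      "type": "Array",
      "defaultValue": [ "eastus" ],
      "metadata": { "displayName": "Allowed locations" }
    },
    "legacyLocation": {
      "type": "String",
      "defaultValue": "eastus",
      "metadata": { "displayName": "Legacy location", "deprecated": true }
    }
  }'
```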
### Parameter properties
governance Effects https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/effects.md
These effects are currently supported in a policy definition:
## Interchanging effects
-Sometimes multiple effects can be valid for a given policy definition. Parameters are often used to specify allowed effect values so that a single definition can be more versatile. However, it's important to note that not all effects are interchangeable. Resource properties and logic in the policy rule can determine whether a certain effect is considered valid to the policy definition. For example, policy definitions with effect **AuditIfNotExists** require other details in the policy rule that aren't required for policies with effect **Audit**. The effects also behave differently. **Audit** policies will assess a resource's compliance based on its own properties, while **AuditIfNotExists** policies will assess a resource's compliance based on a child or extension resource's properties.
+Sometimes multiple effects can be valid for a given policy definition. Parameters are often used to specify allowed effect values so that a single definition can be more versatile. However, it's important to note that not all effects are interchangeable. Resource properties and logic in the policy rule can determine whether a certain effect is considered valid to the policy definition. For example, policy definitions with effect **AuditIfNotExists** require other details in the policy rule that aren't required for policies with effect **Audit**. The effects also behave differently. **Audit** policies assess a resource's compliance based on its own properties, while **AuditIfNotExists** policies assess a resource's compliance based on a child or extension resource's properties.
The following list provides some general guidance around interchangeable effects:

- **Audit**, **Deny**, and either **Modify** or **Append** are often interchangeable.
related resources to match.
- When the condition values for **if.field.type** and **then.details.type** match, then **Name** becomes _required_ and must be `[field('name')]`, or `[field('fullName')]` for a child resource. However, an [audit](#audit) effect should be considered instead.
+
+> [!NOTE]
+>
+> **Type** and **Name** segments can be combined to generically retrieve nested resources.
+>
+> To retrieve a specific resource, you can use `"type": "Microsoft.ExampleProvider/exampleParentType/exampleNestedType"` and `"name": "parentResourceName/nestedResourceName"`.
+>
+> To retrieve a collection of nested resources, a wildcard character `?` can be provided in place of the last name segment. For example, `"type": "Microsoft.ExampleProvider/exampleParentType/exampleNestedType"` and `"name": "parentResourceName/?"`. This can be combined with field functions to access resources related to the evaluated resource, such as `"name": "[concat(field('name'), '/?')]"`.
+
- **ResourceGroupName** (optional)
  - Allows the matching of the related resource to come from a different resource group.
  - Doesn't apply if **type** is a resource that would be underneath the **if** condition resource.
location of the Constraint template to use in Kubernetes to limit the allowed co
When a request call with an applicable action name and targeted scope is submitted, `denyAction` prevents the request from succeeding. The request is returned as a `403 (Forbidden)`. In the portal, the Forbidden can be viewed as a status on the deployment that was prevented by the policy assignment.
-`Microsoft.Authorization/policyAssignments`, `Microsoft.Authorization/denyAssignments`, `Microsoft.Blueprint/blueprintAssignments`, `Microsoft.Resources/deploymentStacks`, and `Microsoft.Authorization/locks` are all exempt from DenyAction enforcement to prevent lockout scenarios.
-
-> [!NOTE]
-> Under preview, assignments with `denyAction` effect will show a `Not Started` compliance state.
+`Microsoft.Authorization/policyAssignments`, `Microsoft.Authorization/denyAssignments`, `Microsoft.Blueprint/blueprintAssignments`, `Microsoft.Resources/deploymentStacks`, `Microsoft.Resources/subscriptions`, and `Microsoft.Authorization/locks` are all exempt from DenyAction enforcement to prevent lockout scenarios.
#### Subscription deletion
-Policy won't block removal of resources that happens during a subscription deletion.
+Policy doesn't block removal of resources that happens during a subscription deletion.
#### Resource group deletion
-Policy will evaluate resources that support location and tags against `DenyAction` policies during a resource group deletion. Only policies that have the `cascadeBehaviors` set to `deny` in the policy rule will block a resource group deletion. Policy won't block removal of resources that don't support location and tags nor any policy with `mode:all`.
+Policy evaluates resources that support location and tags against `DenyAction` policies during a resource group deletion. Only policies that have the `cascadeBehaviors` set to `deny` in the policy rule block a resource group deletion. Policy doesn't block removal of resources that don't support location and tags nor any policy with `mode:all`.
#### Cascade deletion
-Cascade deletion occurs when deleting of a parent resource is implicitly deletes all its child resources. Policy won't block removal of child resources when a delete action targets the parent resources. For example, `Microsoft.Insights/diagnosticSettings` is a child resource of `Microsoft.Storage/storageaccounts`. If a `denyAction` policy targets `Microsoft.Insights/diagnosticSettings`, a delete call to the diagnostic setting (child) will fail, but a delete to the storage account (parent) will implicitly delete the diagnostic setting (child).
+Cascade deletion occurs when deleting a parent resource implicitly deletes all its child resources. Policy doesn't block removal of child resources when a delete action targets the parent resource. For example, `Microsoft.Insights/diagnosticSettings` is a child resource of `Microsoft.Storage/storageaccounts`. If a `denyAction` policy targets `Microsoft.Insights/diagnosticSettings`, a delete call to the diagnostic setting (child) fails, but a delete to the storage account (parent) implicitly deletes the diagnostic setting (child).
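
As a minimal sketch (hypothetical definition name, mirroring the diagnostic settings example above), a `denyAction` policy can be created from the CLI; `cascadeBehaviors` would be added under `details` to also block resource group deletion for resources that support location and tags:

```azurecli-interactive
# denyAction blocks the named action (currently only 'delete') on matching resources.
az policy definition create \
  --name 'denyaction-diagnostic-settings-delete' \
  --display-name 'Deny deletion of diagnostic settings' \
  --mode All \
  --rules '{
    "if": {
      "field": "type",
      "equals": "Microsoft.Insights/diagnosticSettings"
    },
    "then": {
      "effect": "denyAction",
      "details": {
        "actionNames": [ "delete" ]
      }
    }
  }'
```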
[!INCLUDE [policy-denyAction](../../../../includes/azure-policy-deny-action.md)]
related resources to match and the template deployment to execute.
resource instead of all resources of the specified type.
- When the condition values for **if.field.type** and **then.details.type** match, then **Name** becomes _required_ and must be `[field('name')]`, or `[field('fullName')]` for a child resource.
+
+> [!NOTE]
+>
+> **Type** and **Name** segments can be combined to generically retrieve nested resources.
+>
+> To retrieve a specific resource, you can use `"type": "Microsoft.ExampleProvider/exampleParentType/exampleNestedType"` and `"name": "parentResourceName/nestedResourceName"`.
+>
+> To retrieve a collection of nested resources, a wildcard character `?` can be provided in place of the last name segment. For example, `"type": "Microsoft.ExampleProvider/exampleParentType/exampleNestedType"` and `"name": "parentResourceName/?"`. This can be combined with field functions to access resources related to the evaluated resource, such as `"name": "[concat(field('name'), '/?')]"`.
+
- **ResourceGroupName** (optional)
  - Allows the matching of the related resource to come from a different resource group.
  - Doesn't apply if **type** is a resource that would be underneath the **if** condition resource.
logs, and the policy effect don't occur. For more information, see
## Manual
-The new `manual` effect enables you to self-attest the compliance of resources or scopes. Unlike other policy definitions that actively scan for evaluation, the Manual effect allows for manual changes to the compliance state. To change the compliance of a resource or scope targeted by a manual policy, you'll need to create an [attestation](attestation-structure.md). The [best practice](attestation-structure.md#best-practices) is to design manual policies that target the scope that defines the boundary of resources whose compliance need attesting.
+The new `manual` effect enables you to self-attest the compliance of resources or scopes. Unlike other policy definitions that actively scan for evaluation, the Manual effect allows for manual changes to the compliance state. To change the compliance of a resource or scope targeted by a manual policy, you need to create an [attestation](attestation-structure.md). The [best practice](attestation-structure.md#best-practices) is to design manual policies that target the scope that defines the boundary of resources whose compliance needs attesting.
> [!NOTE]
> Support for manual policy is available through various Microsoft Defender
governance Policy Applicability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/policy-applicability.md
Applicability is determined by several factors:
- **Conditions** in the `if` block of the [policy rule](../concepts/definition-structure.md#policy-rule).
- **Mode** of the policy definition.
- **Excluded scopes** specified in the assignment.
+- **Resource selectors** specified in the assignment.
- **Exemptions** of resources or resource hierarchies.

Condition(s) in the `if` block of the policy rule are evaluated for applicability in slightly different ways based on the effect.
Following are special cases to the previously described applicability logic:
|Any invalid aliases in the `if` conditions |The policy isn't applicable |
|When the `if` conditions consist of only `kind` conditions |The policy is applicable to all resources |
|When the `if` conditions consist of only `name` conditions |The policy is applicable to all resources |
-|When the `if` conditions consist of only `type` and `kind` or `type` and `name` conditions |Only type conditions are considered when deciding applicability |
+|When the `if` conditions consist of only `type` and `kind` conditions |Only `type` conditions are considered when deciding applicability |
+|When the `if` conditions consist of only `type` and `name` conditions |Only `type` conditions are considered when deciding applicability |
+|When the `if` conditions consist of `type`, `kind`, and other conditions |Both `type` and `kind` conditions are considered when deciding applicability |
+|When the `if` conditions consist of `type`, `name`, and other conditions |Both `type` and `name` conditions are considered when deciding applicability |
|When any conditions (including deployment parameters) include a `location` condition |Won't be applicable to subscriptions |

### AuditIfNotExists and DeployIfNotExists policy effects
governance Policy For Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/policy-for-kubernetes.md
Title: Learn Azure Policy for Kubernetes description: Learn how Azure Policy uses Rego and Open Policy Agent to manage clusters running Kubernetes in Azure or on-premises. Previously updated : 06/17/2022 Last updated : 08/29/2023
Azure Policy for Kubernetes supports the following cluster environments:
- [Azure Arc enabled Kubernetes](../../../azure-arc/kubernetes/overview.md)

> [!IMPORTANT]
-> The Azure Policy Add-on Helm model and the add-on for AKS Engine have been _deprecated_. Instructions can be found below for [removal of those add-ons](#remove-the-add-on).
+> The Azure Policy Add-on Helm model and the add-on for AKS Engine have been _deprecated_. Follow the instructions to [remove the add-ons](#remove-the-add-on).
## Overview
To enable and use Azure Policy with your Kubernetes cluster, take the following
The following general limitations apply to the Azure Policy Add-on for Kubernetes clusters:
-- Azure Policy Add-on for Kubernetes is supported on [supported Kubernetes versions in Azure Kubernetes Service (AKS)](../../../aks/supported-kubernetes-versions.md).
+- Azure Policy Add-on for Kubernetes is supported on the [supported Kubernetes versions in Azure Kubernetes Service (AKS)](../../../aks/supported-kubernetes-versions.md).
- Azure Policy Add-on for Kubernetes can only be deployed to Linux node pools.
-- Maximum number of pods supported by the Azure Policy Add-on: **10,000**
+- Maximum number of pods supported by the Azure Policy Add-on per cluster: **10,000**
- Maximum number of Non-compliant records per policy per cluster: **500**
- Maximum number of Non-compliant records per subscription: **1 million**
- Installations of Gatekeeper outside of the Azure Policy Add-on aren't supported. Uninstall any
The following are general recommendations for using the Azure Policy Add-on:
policy assignments increases in the cluster, which requires audit and enforcement operations.
- For fewer than 500 pods in a single cluster with a max of 20 constraints: two vCPUs and 350 MB
- memory per component.
+ of memory per component.
- For more than 500 pods in a single cluster with a max of 40 constraints: three vCPUs and 600 MB
- memory per component.
+ of memory per component.
- Windows pods [don't support security contexts](https://kubernetes.io/docs/concepts/security/pod-security-standards/#what-profiles-should-i-apply-to-my-windows-pods).
The following recommendation applies only to AKS and the Azure Policy Add-on:
## Install Azure Policy Add-on for AKS
-Before installing the Azure Policy Add-on or enabling any of the service features, your subscription
-must enable the **Microsoft.PolicyInsights** resource providers.
+Before you install the Azure Policy Add-on or enable any of the service features, your subscription
+must enable the `Microsoft.PolicyInsights` resource provider.
1. You need the Azure CLI version 2.12.0 or later installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see
must enable the **Microsoft.PolicyInsights** resource providers.
- Azure portal:
- Register the **Microsoft.PolicyInsights** resource providers. For steps, see
+ Register the `Microsoft.PolicyInsights` resource provider. For steps, see
[Resource providers and types](../../../azure-resource-manager/management/resource-providers-and-types.md#register-resource-provider).

- Azure CLI:
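  As a minimal sketch, register the resource provider from the CLI:

  ```azurecli-interactive
  # Register the Azure Policy resource provider on the active subscription.
  az provider register --namespace 'Microsoft.PolicyInsights'
  ```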
must enable the **Microsoft.PolicyInsights** resource providers.
1. If limited preview policy definitions were installed, remove the add-on with the **Disable** button on your AKS cluster under the **Policies** page.
-1. The AKS cluster must be a [supported AKS cluster version](https://learn.microsoft.com/azure/aks/supported-kubernetes-versions?tabs=azure-cli). Use the following script to validate your AKS
+1. The AKS cluster must be a [supported Kubernetes version in Azure Kubernetes Service (AKS)](../../../aks/supported-kubernetes-versions.md). Use the following script to validate your AKS
cluster version:

```azurecli-interactive
must enable the **Microsoft.PolicyInsights** resource providers.
1. Install version _2.12.0_ or higher of the Azure CLI. For more information, see [Install the Azure CLI](../../../azure-resource-manager/management/resource-providers-and-types.md#azure-cli).
-Once the above prerequisite steps are completed, install the Azure Policy Add-on in the AKS cluster
+After the prerequisites are completed, install the Azure Policy Add-on in the AKS cluster
you want to manage.

- Azure portal
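- Azure CLI

  As a minimal sketch (assumed cluster and resource group names), the add-on can also be enabled from the CLI:

  ```azurecli-interactive
  # Enable the Azure Policy add-on on an existing AKS cluster.
  az aks enable-addons \
    --addons azure-policy \
    --name MyAKSCluster \
    --resource-group MyResourceGroup
  ```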
similar to the following output:
```output
{
- "config": null,
- "enabled": true,
- "identity": null
+ "config": null,
+ "enabled": true,
+ "identity": null
}
```

## <a name="install-azure-policy-extension-for-azure-arc-enabled-kubernetes"></a>Install Azure Policy Extension for Azure Arc enabled Kubernetes
For an overview of the extensions platform, see [Azure Arc cluster extensions](.
### Prerequisites
-> Note: If you have already deployed Azure Policy for Kubernetes on an Azure Arc cluster using Helm directly without extensions, follow the instructions listed to [delete the Helm chart](#remove-the-add-on-from-azure-arc-enabled-kubernetes). Once the deletion is done, you can then proceed.
+If you have already deployed Azure Policy for Kubernetes on an Azure Arc cluster using Helm directly without extensions, follow the instructions to [delete the Helm chart](#remove-the-add-on-from-azure-arc-enabled-kubernetes). After the deletion is done, you can then proceed.
+ 1. Ensure your Kubernetes cluster is a supported distribution.
- > Note: Azure Policy for Arc extension is supported on [the following Kubernetes distributions](../../../azure-arc/kubernetes/validation-program.md).
+ > [!NOTE]
+ > Azure Policy for Arc extension is supported on [the following Kubernetes distributions](../../../azure-arc/kubernetes/validation-program.md).
1. Ensure you have met all the common prerequisites for Kubernetes extensions listed [here](../../../azure-arc/kubernetes/extensions.md), including [connecting your cluster to Azure Arc](../../../azure-arc/kubernetes/quickstart-connect-cluster.md?tabs=azure-cli).
- > Note: Azure Policy extension is supported for Arc enabled Kubernetes clusters [in these regions](https://azure.microsoft.com/global-infrastructure/services/?products=azure-arc).
+ > [!NOTE]
+ > Azure Policy extension is supported for Arc enabled Kubernetes clusters [in these regions](https://azure.microsoft.com/global-infrastructure/services/?products=azure-arc).
+ 1. Open ports for the Azure Policy extension. The Azure Policy extension uses these domains and ports to fetch policy definitions and assignments and report compliance of the cluster back to Azure Policy.
For an overview of the extensions platform, see [Azure Arc cluster extensions](.
|`login.windows.net` |`443` |
|`dc.services.visualstudio.com` |`443` |
-1. Before installing the Azure Policy extension or enabling any of the service features, your subscription must enable the **Microsoft.PolicyInsights** resource providers.
- > Note: To enable the resource provider, follow the steps in
- [Resource providers and types](../../../azure-resource-manager/management/resource-providers-and-types.md#azure-portal)
- or run either the Azure CLI or Azure PowerShell command:
+1. Before you install the Azure Policy extension or enable any of the service features, your subscription must enable the `Microsoft.PolicyInsights` resource provider.
+
+ > [!NOTE]
+ > To enable the resource provider, follow the steps in [Resource providers and types](../../../azure-resource-manager/management/resource-providers-and-types.md#azure-portal)
+ or run either the Azure CLI or Azure PowerShell command.
- Azure CLI

  ```azurecli-interactive
For an overview of the extensions platform, see [Azure Arc cluster extensions](.
### Create Azure Policy extension
+> [!NOTE]
> Note the following for Azure Policy extension creation:
>
> - Auto-upgrade is enabled by default, which updates the Azure Policy extension minor version if any new changes are deployed.
> - Any proxy variables passed as parameters to `connectedk8s` are propagated to the Azure Policy extension to support outbound proxy.
->
+
To create an extension instance for your Arc enabled cluster, run the following command, substituting `<>` with your values:

```azurecli-interactive
you created it with.
1. Give the policy assignment a **Name** and **Description** that you can use to identify it easily.
-1. Set the [Policy enforcement](./assignment-structure.md#enforcement-mode) to one of the values
- below.
+1. Set the [Policy enforcement](./assignment-structure.md#enforcement-mode) to one of the following values (a CLI sketch follows this list):
- **Enabled** - Enforce the policy on the cluster. Kubernetes admission requests with violations are denied.
- **Disabled** - Don't enforce the policy on the cluster. Kubernetes admission requests with
- violations aren't denied. Compliance assessment results are still available. When rolling out
+ violations aren't denied. Compliance assessment results are still available. When you roll out
new policy definitions to running clusters, the _Disabled_ option is helpful for testing the policy definition as admission requests with violations aren't denied.
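
As the sketch referenced in the list above (placeholder values; substitute your own assignment name, policy definition, and scope), enforcement mode can also be set when assigning from the CLI, where `DoNotEnforce` corresponds to **Disabled**:

```azurecli-interactive
# Assign a policy without enforcing it; compliance is still assessed.
az policy assignment create \
  --name '<assignment-name>' \
  --policy '<policy-definition-name-or-id>' \
  --scope '<scope>' \
  --enforcement-mode DoNotEnforce
```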
you created it with.
1. Select **Review + create**.

Alternatively, use the [Assign a policy - Portal](../assign-policy-portal.md) quickstart to find and
-assign a Kubernetes policy. Search for a Kubernetes policy definition instead of the sample 'audit
-vms'.
+assign a Kubernetes policy. Search for a Kubernetes policy definition instead of the sample _audit
+vms_.
> [!IMPORTANT]
> Built-in policy definitions are available for Kubernetes clusters in category **Kubernetes**. For
field of the failed constraint. For details on _Non-compliant_ resources, see
Some other considerations:

-- If the cluster subscription is registered with Microsoft Defender for Cloud, then Microsoft Defender for Cloud
- Kubernetes policies are applied on the cluster automatically.
+- If the cluster subscription is registered with Microsoft Defender for Cloud, then Microsoft Defender for Cloud Kubernetes policies are applied on the cluster automatically.
- When a deny policy is applied on a cluster with existing Kubernetes resources, any pre-existing
- resource that is not compliant with the new policy continues to run. When the non-compliant
+ resource that isn't compliant with the new policy continues to run. When the non-compliant
resource gets rescheduled on a different node, the Gatekeeper blocks the resource creation.
-- When a cluster has a deny policy that validates resources, the user will not see a rejection
+- When a cluster has a deny policy that validates resources, the user doesn't get a rejection
message when creating a deployment. For example, consider a Kubernetes deployment that contains
- replicasets and pods. When a user executes `kubectl describe deployment $MY_DEPLOYMENT`, it does
- not return a rejection message as part of events. However,
+ replicasets and pods. When a user executes `kubectl describe deployment $MY_DEPLOYMENT`, it doesn't return a rejection message as part of events. However,
`kubectl describe replicasets.apps $MY_DEPLOYMENT` returns the events associated with rejection.

> [!NOTE]
Two policy definitions reference the same `template.yaml` file stored at differe
such as the Azure Policy template store (`store.policy.core.windows.net`) and GitHub. When policy definitions and their constraint templates are assigned but aren't already installed on
-the cluster and are in conflict, they are reported as a conflict and won't be installed into the
+the cluster and are in conflict, they're reported as a conflict and aren't installed into the
cluster until the conflict is resolved. Likewise, any existing policy definitions and their
-constraint templates that are already on the cluster that conflict with newly assigned policy
-definitions continue to function normally. If an existing assignment is updated and there is a
+constraint templates that are already on the cluster that conflict with newly assigned policy
+definitions continue to function normally. If an existing assignment is updated and there's a
failure to sync the constraint template, the cluster is also marked as a conflict. For all conflict messages, see [AKS Resource Provider mode compliance reasons](../how-to/determine-non-compliance.md#aks-resource-provider-mode-compliance-reasons)
constraints on the cluster, it annotates both with Azure Policy information like
assignment ID and the policy definition ID. To configure your client to view the add-on related artifacts, use the following steps:
-1. Setup `kubeconfig` for the cluster.
+1. Set up `kubeconfig` for the cluster.
For an Azure Kubernetes Service cluster, use the following Azure CLI:
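   A minimal sketch (assumed names); `az aks get-credentials` merges the cluster credentials into your `kubeconfig`:

   ```azurecli-interactive
   # Fetch credentials and configure kubectl for the AKS cluster.
   az aks get-credentials --resource-group MyResourceGroup --name MyAKSCluster
   ```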
For more information about troubleshooting the Add-on for Kubernetes, see the
[Kubernetes section](../troubleshoot/general.md#add-on-for-kubernetes-general-errors) of the Azure Policy troubleshooting article.
-For Azure Policy extension for Arc extension related issues, please see:
+For Azure Policy extension for Arc extension related issues, go to:
- [Azure Arc enabled Kubernetes troubleshooting](../../../azure-arc/kubernetes/troubleshooting.md)
-For Azure Policy related issues, please see:
+For Azure Policy related issues, go to:
- [Inspect Azure Policy logs](#logging) - [General troubleshooting for Azure Policy on Kubernetes](../troubleshoot/general.md#add-on-for-kubernetes-general-errors)
governance Remediation Structure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/remediation-structure.md
this property is a _string_. The value must match the value in the initiative de
Use **resource count** to determine how many non-compliant resources to remediate in a given remediation task. The default value is 500, with the maximum number being 50,000. **Parallel deployments** determines how many of those resources to remediate at the same time. The allowed range is between 1 and 30, with the default value being 10.

> [!NOTE]
-> Parallel deployments are the number of deployments within a singular remediation task with a maxmimum of 30. 100 remediation tasks can be ran simultaneously in the tenant.
+> Parallel deployments are the number of deployments within a single remediation task, with a maximum of 30. There can be a maximum of 100 remediation tasks running in parallel for a single policy definition or policy reference within an initiative.
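
As a minimal sketch (assumed names; substitute your own assignment), both settings can be supplied when creating a remediation task from the CLI:

```azurecli-interactive
# Remediate up to 500 non-compliant resources, 10 at a time.
az policy remediation create \
  --name 'myRemediationTask' \
  --resource-group MyResourceGroup \
  --policy-assignment '<policy-assignment-name-or-id>' \
  --resource-count 500 \
  --parallel-deployments 10
```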
## Failure threshold
governance Australia Ism https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/australia-ism.md
Title: Regulatory Compliance details for Australian Government ISM PROTECTED description: Details of the Australian Government ISM PROTECTED Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/03/2023 Last updated : 08/25/2023
governance Azure Security Benchmark https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/azure-security-benchmark.md
Title: Regulatory Compliance details for Microsoft cloud security benchmark description: Details of the Microsoft cloud security benchmark Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/03/2023 Last updated : 08/25/2023
initiative definition.
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|||||
+|[\[Preview\]: API endpoints in Azure API Management should be authenticated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F8ac833bd-f505-48d5-887e-c993a1d3eea0) |API endpoints published within Azure API Management should enforce authentication to help minimize security risk. Authentication mechanisms are sometimes implemented incorrectly or are missing. This allows attackers to exploit implementation flaws and to access data. Learn More about the OWASP API Threat for Broken User Authentication here: [https://learn.microsoft.com/azure/api-management/mitigate-owasp-api-threats#broken-user-authentication](../../../api-management/mitigate-owasp-api-threats.md#broken-user-authentication) |AuditIfNotExists, Disabled |[1.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_APIMApiEndpointsShouldbeAuthenticated_AuditIfNotExists.json) |
|[API Management calls to API backends should be authenticated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc15dcc82-b93c-4dcb-9332-fbf121685b54) |Calls from API Management to backends should use some form of authentication, whether via certificates or credentials. Does not apply to Service Fabric backends. |Audit, Disabled, Deny |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/API%20Management/ApiManagement_BackendAuth_AuditDeny.json) |
|[API Management calls to API backends should not bypass certificate thumbprint or name validation](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F92bb331d-ac71-416a-8c91-02f2cb734ce4) |To improve the API security, API Management should validate the backend server certificate for all API calls. Enable SSL certificate thumbprint and name validation. |Audit, Disabled, Deny |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/API%20Management/ApiManagement_BackendCertificateChecks_AuditDeny.json) |
initiative definition.
|[Accounts with owner permissions on Azure resources should be MFA enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe3e008c3-56b9-4133-8fd7-d3347377402a) |Multi-Factor Authentication (MFA) should be enabled for all subscription accounts with owner permissions to prevent a breach of accounts or resources. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableMFAForAccountsWithOwnerPermissions_Audit.json) |
|[Accounts with read permissions on Azure resources should be MFA enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F81b3ccb4-e6e8-4e4a-8d05-5df25cd29fd4) |Multi-Factor Authentication (MFA) should be enabled for all subscription accounts with read privileges to prevent a breach of accounts or resources. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableMFAForAccountsWithReadPermissions_Audit.json) |
|[Accounts with write permissions on Azure resources should be MFA enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F931e118d-50a1-4457-a5e4-78550e086c52) |Multi-Factor Authentication (MFA) should be enabled for all subscription accounts with write privileges to prevent a breach of accounts or resources. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableMFAForAccountsWithWritePermissions_Audit.json) |
-|[Authentication to Linux machines should require SSH keys](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F630c64f9-8b6b-4c64-b511-6544ceff6fd6) |Although SSH itself provides an encrypted connection, using passwords with SSH still leaves the VM vulnerable to brute-force attacks. The most secure option for authenticating to an Azure Linux virtual machine over SSH is with a public-private key pair, also known as SSH keys. Learn more: [https://docs.microsoft.com/azure/virtual-machines/linux/create-ssh-keys-detailed](../../../virtual-machines/linux/create-ssh-keys-detailed.md). |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_LinuxNoPasswordForSSH_AINE.json) |
+|[Authentication to Linux machines should require SSH keys](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F630c64f9-8b6b-4c64-b511-6544ceff6fd6) |Although SSH itself provides an encrypted connection, using passwords with SSH still leaves the VM vulnerable to brute-force attacks. The most secure option for authenticating to an Azure Linux virtual machine over SSH is with a public-private key pair, also known as SSH keys. Learn more: [https://docs.microsoft.com/azure/virtual-machines/linux/create-ssh-keys-detailed](../../../virtual-machines/linux/create-ssh-keys-detailed.md). |AuditIfNotExists, Disabled |[3.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_LinuxNoPasswordForSSH_AINE.json) |
### Restrict the exposure of credential and secrets
initiative definition.
|||||
|[API Management minimum API version should be set to 2019-12-01 or higher](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F549814b6-3212-4203-bdc8-1548d342fb67) |To prevent service secrets from being shared with read-only users, the minimum API version should be set to 2019-12-01 or higher. |Audit, Deny, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/API%20Management/ApiManagement_MinimumApiVersion_AuditDeny.json) |
|[API Management secret named values should be stored in Azure Key Vault](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff1cc7827-022c-473e-836e-5a51cae0b249) |Named values are a collection of name and value pairs in each API Management service. Secret values can be stored either as encrypted text in API Management (custom secrets) or by referencing secrets in Azure Key Vault. To improve security of API Management and secrets, reference secret named values from Azure Key Vault. Azure Key Vault supports granular access management and secret rotation policies. |Audit, Disabled, Deny |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/API%20Management/ApiManagement_NamedValueSecretsInKV_AuditDeny.json) |
-|[Machines should have secret findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3ac7c827-eea2-4bde-acc7-9568cd320efa) |Audits virtual machines to detect whether they contain secret findings from the secret scanning solutions on your virtual machines. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ServerSecretAssessment_Audit.json) |
+|[Machines should have secret findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3ac7c827-eea2-4bde-acc7-9568cd320efa) |Audits virtual machines to detect whether they contain secret findings from the secret scanning solutions on your virtual machines. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ServerSecretAssessment_Audit.json) |
## Privileged Access
initiative definition.
|||||
|[API Management subscriptions should not be scoped to all APIs](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3aa03346-d8c5-4994-a5bc-7652c2a2aef1) |API Management subscriptions should be scoped to a product or an individual API instead of all APIs, which could result in an excessive data exposure. |Audit, Disabled, Deny |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/API%20Management/ApiManagement_AllApiSubscription_AuditDeny.json) |
|[Audit usage of custom RBAC roles](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa451c1ef-c6ca-483d-87ed-f49761e3ffb5) |Audit built-in roles such as 'Owner, Contributer, Reader' instead of custom RBAC roles, which are error prone. Using custom roles is treated as an exception and requires a rigorous review and threat modeling |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/General/Subscription_AuditCustomRBACRoles_Audit.json) |
-|[Role-Based Access Control (RBAC) should be used on Kubernetes Services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac4a19c2-fa67-49b4-8ae5-0b2e78c49457) |To provide granular filtering on the actions that users can perform, use Role-Based Access Control (RBAC) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies. |Audit, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableRBAC_KubernetesService_Audit.json) |
+|[Azure Role-Based Access Control (RBAC) should be used on Kubernetes Services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac4a19c2-fa67-49b4-8ae5-0b2e78c49457) |To provide granular filtering on the actions that users can perform, use Azure Role-Based Access Control (RBAC) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies. |Audit, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableRBAC_KubernetesService_Audit.json) |
## Data Protection
initiative definition.
|[Azure Defender for SQL servers on machines should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6581d072-105e-4418-827f-bd446d56421b) |Azure Defender for SQL provides functionality for surfacing and mitigating potential database vulnerabilities, detecting anomalous activities that could indicate threats to SQL databases, and discovering and classifying sensitive data. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedDataSecurityOnSqlServerVirtualMachines_Audit.json) |
|[Azure Defender for SQL should be enabled for unprotected SQL Managed Instances](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fabfb7388-5bf4-4ad7-ba99-2cd2f41cebb9) |Audit each SQL Managed Instance without advanced data security. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlManagedInstance_AdvancedDataSecurity_Audit.json) |
|[Microsoft Defender for Storage (Classic) should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F308fbb08-4ab8-4e67-9b29-592e93fb94fa) |Microsoft Defender for Storage (Classic) provides detections of unusual and potentially harmful attempts to access or exploit storage accounts. |AuditIfNotExists, Disabled |[1.0.4](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnStorageAccounts_Audit.json) |
+|[Microsoft Defender for Storage should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F640d2586-54d2-465f-877f-9ffc1d2109f4) |Microsoft Defender for Storage detects potential threats to your storage accounts. It helps prevent the three major impacts on your data and workload: malicious file uploads, sensitive data exfiltration, and data corruption. The new Defender for Storage plan includes Malware Scanning and Sensitive Data Threat Detection. This plan also provides a predictable pricing structure (per storage account) for control over coverage and costs. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/MDC_Microsoft_Defender_For_Storage_Full_Audit.json) |
### Encrypt sensitive data in transit
initiative definition.
|[Storage accounts should be migrated to new Azure Resource Manager resources](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F37e0d2fe-28a5-43d6-a273-67d37d1f5606) |Use new Azure Resource Manager for your storage accounts to provide security enhancements such as: stronger access control (RBAC), better auditing, Azure Resource Manager based deployment and governance, access to managed identities, access to key vault for secrets, Azure AD-based authentication and support for tags and resource groups for easier security management |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/Classic_AuditForClassicStorages_Audit.json) |
|[Virtual machines should be migrated to new Azure Resource Manager resources](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1d84d5fb-01f6-4d12-ba4f-4a26081d403d) |Use new Azure Resource Manager for your virtual machines to provide security enhancements such as: stronger access control (RBAC), better auditing, Azure Resource Manager based deployment and governance, access to managed identities, access to key vault for secrets, Azure AD-based authentication and support for tags and resource groups for easier security management |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Compute/ClassicCompute_Audit.json) |
+### Ensure security of asset lifecycle management
+
+**ID**: Microsoft cloud security benchmark AM-3
+**Ownership**: Shared
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[\[Preview\]: API endpoints that are unused should be disabled and removed from the Azure API Management service](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc8acafaf-3d23-44d1-9624-978ef0f8652c) |As a security best practice, API endpoints that haven't received traffic for 30 days are considered unused and should be removed from the Azure API Management service. Keeping unused API endpoints may pose a security risk to your organization. These may be APIs that should have been deprecated from the Azure API Management service but may have been accidentally left active. Such APIs typically do not receive the most up to date security coverage. |AuditIfNotExists, Disabled |[1.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_APIMUnusedApiEndpointsShouldbeRemoved_AuditIfNotExists.json) |
+
### Use only approved applications in virtual machine

**ID**: Microsoft cloud security benchmark AM-5
initiative definition.
|[Microsoft Defender for Containers should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1c988dd6-ade4-430f-a608-2a3e5b0a6d38) |Microsoft Defender for Containers provides hardening, vulnerability assessment and run-time protections for your Azure, hybrid, and multi-cloud Kubernetes environments. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnContainers_Audit.json) |
|[Microsoft Defender for SQL status should be protected for Arc-enabled SQL Servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F938c4981-c2c9-4168-9cd6-972b8675f906) |Microsoft Defender for SQL provides functionality for surfacing and mitigating potential database vulnerabilities, detecting anomalous activities that could indicate threats to SQL databases, discovering and classifying sensitive data. Once enabled, the protection status indicates that the resource is actively monitored. Even when Defender is enabled, multiple configuration settings should be validated on the agent, machine, workspace and SQL server to ensure active protection. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ProtectDefenderForSQLOnArc_Audit.json) |
|[Microsoft Defender for Storage (Classic) should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F308fbb08-4ab8-4e67-9b29-592e93fb94fa) |Microsoft Defender for Storage (Classic) provides detections of unusual and potentially harmful attempts to access or exploit storage accounts. |AuditIfNotExists, Disabled |[1.0.4](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnStorageAccounts_Audit.json) |
+|[Microsoft Defender for Storage should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F640d2586-54d2-465f-877f-9ffc1d2109f4) |Microsoft Defender for Storage detects potential threats to your storage accounts. It helps prevent the three major impacts on your data and workload: malicious file uploads, sensitive data exfiltration, and data corruption. The new Defender for Storage plan includes Malware Scanning and Sensitive Data Threat Detection. This plan also provides a predictable pricing structure (per storage account) for control over coverage and costs. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/MDC_Microsoft_Defender_For_Storage_Full_Audit.json) |
|[Windows Defender Exploit Guard should be enabled on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbed48b13-6647-468e-aa2f-1af1d3f4dd40) |Windows Defender Exploit Guard uses the Azure Policy Guest Configuration agent. Exploit Guard has four components that are designed to lock down devices against a wide variety of attack vectors and block behaviors commonly used in malware attacks while enabling enterprises to balance their security risk and productivity requirements (Windows only). |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_WindowsDefenderExploitGuard_AINE.json) |

### Enable threat detection for identity and access management
initiative definition.
|[Microsoft Defender for Containers should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1c988dd6-ade4-430f-a608-2a3e5b0a6d38) |Microsoft Defender for Containers provides hardening, vulnerability assessment and run-time protections for your Azure, hybrid, and multi-cloud Kubernetes environments. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnContainers_Audit.json) |
|[Microsoft Defender for SQL status should be protected for Arc-enabled SQL Servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F938c4981-c2c9-4168-9cd6-972b8675f906) |Microsoft Defender for SQL provides functionality for surfacing and mitigating potential database vulnerabilities, detecting anomalous activities that could indicate threats to SQL databases, discovering and classifying sensitive data. Once enabled, the protection status indicates that the resource is actively monitored. Even when Defender is enabled, multiple configuration settings should be validated on the agent, machine, workspace and SQL server to ensure active protection. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ProtectDefenderForSQLOnArc_Audit.json) |
|[Microsoft Defender for Storage (Classic) should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F308fbb08-4ab8-4e67-9b29-592e93fb94fa) |Microsoft Defender for Storage (Classic) provides detections of unusual and potentially harmful attempts to access or exploit storage accounts. |AuditIfNotExists, Disabled |[1.0.4](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnStorageAccounts_Audit.json) |
+|[Microsoft Defender for Storage should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F640d2586-54d2-465f-877f-9ffc1d2109f4) |Microsoft Defender for Storage detects potential threats to your storage accounts. It helps prevent the three major impacts on your data and workload: malicious file uploads, sensitive data exfiltration, and data corruption. The new Defender for Storage plan includes Malware Scanning and Sensitive Data Threat Detection. This plan also provides a predictable pricing structure (per storage account) for control over coverage and costs. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/MDC_Microsoft_Defender_For_Storage_Full_Audit.json) |
|[Windows Defender Exploit Guard should be enabled on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbed48b13-6647-468e-aa2f-1af1d3f4dd40) |Windows Defender Exploit Guard uses the Azure Policy Guest Configuration agent. Exploit Guard has four components that are designed to lock down devices against a wide variety of attack vectors and block behaviors commonly used in malware attacks while enabling enterprises to balance their security risk and productivity requirements (Windows only). |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_WindowsDefenderExploitGuard_AINE.json) |

### Enable logging for security investigation
initiative definition.
|[Microsoft Defender for Containers should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1c988dd6-ade4-430f-a608-2a3e5b0a6d38) |Microsoft Defender for Containers provides hardening, vulnerability assessment and run-time protections for your Azure, hybrid, and multi-cloud Kubernetes environments. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnContainers_Audit.json) |
|[Microsoft Defender for SQL status should be protected for Arc-enabled SQL Servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F938c4981-c2c9-4168-9cd6-972b8675f906) |Microsoft Defender for SQL provides functionality for surfacing and mitigating potential database vulnerabilities, detecting anomalous activities that could indicate threats to SQL databases, discovering and classifying sensitive data. Once enabled, the protection status indicates that the resource is actively monitored. Even when Defender is enabled, multiple configuration settings should be validated on the agent, machine, workspace and SQL server to ensure active protection. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ProtectDefenderForSQLOnArc_Audit.json) |
|[Microsoft Defender for Storage (Classic) should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F308fbb08-4ab8-4e67-9b29-592e93fb94fa) |Microsoft Defender for Storage (Classic) provides detections of unusual and potentially harmful attempts to access or exploit storage accounts. |AuditIfNotExists, Disabled |[1.0.4](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnStorageAccounts_Audit.json) |
+|[Microsoft Defender for Storage should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F640d2586-54d2-465f-877f-9ffc1d2109f4) |Microsoft Defender for Storage detects potential threats to your storage accounts. It helps prevent the three major impacts on your data and workload: malicious file uploads, sensitive data exfiltration, and data corruption. The new Defender for Storage plan includes Malware Scanning and Sensitive Data Threat Detection. This plan also provides a predictable pricing structure (per storage account) for control over coverage and costs. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/MDC_Microsoft_Defender_For_Storage_Full_Audit.json) |
### Detection and analysis - investigate an incident
|[Microsoft Defender for Containers should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1c988dd6-ade4-430f-a608-2a3e5b0a6d38) |Microsoft Defender for Containers provides hardening, vulnerability assessment and run-time protections for your Azure, hybrid, and multi-cloud Kubernetes environments. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnContainers_Audit.json) |
|[Microsoft Defender for SQL status should be protected for Arc-enabled SQL Servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F938c4981-c2c9-4168-9cd6-972b8675f906) |Microsoft Defender for SQL provides functionality for surfacing and mitigating potential database vulnerabilities, detecting anomalous activities that could indicate threats to SQL databases, discovering and classifying sensitive data. Once enabled, the protection status indicates that the resource is actively monitored. Even when Defender is enabled, multiple configuration settings should be validated on the agent, machine, workspace and SQL server to ensure active protection. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ProtectDefenderForSQLOnArc_Audit.json) |
|[Microsoft Defender for Storage (Classic) should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F308fbb08-4ab8-4e67-9b29-592e93fb94fa) |Microsoft Defender for Storage (Classic) provides detections of unusual and potentially harmful attempts to access or exploit storage accounts. |AuditIfNotExists, Disabled |[1.0.4](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnStorageAccounts_Audit.json) |
+|[Microsoft Defender for Storage should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F640d2586-54d2-465f-877f-9ffc1d2109f4) |Microsoft Defender for Storage detects potential threats to your storage accounts. It helps prevent the three major impacts on your data and workload: malicious file uploads, sensitive data exfiltration, and data corruption. The new Defender for Storage plan includes Malware Scanning and Sensitive Data Threat Detection. This plan also provides a predictable pricing structure (per storage account) for control over coverage and costs. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/MDC_Microsoft_Defender_For_Storage_Full_Audit.json) |
## Posture and Vulnerability Management
|[App Service apps should have remote debugging turned off](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fcb510bfd-1cba-4d9f-a230-cb0976f4bb71) |Remote debugging requires inbound ports to be opened on an App Service app. Remote debugging should be turned off. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_DisableRemoteDebugging_WebApp_Audit.json) |
|[App Service apps should not have CORS configured to allow every resource to access your apps](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5744710e-cc2f-4ee8-8809-3b11e89f4bc9) |Cross-Origin Resource Sharing (CORS) should not allow all domains to access your app. Allow only required domains to interact with your app. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_RestrictCORSAccess_WebApp_Audit.json) |
|[Azure Arc enabled Kubernetes clusters should have the Azure Policy extension installed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6b2122c1-8120-4ff5-801b-17625a355590) |The Azure Policy extension for Azure Arc provides at-scale enforcements and safeguards on your Arc enabled Kubernetes clusters in a centralized, consistent manner. Learn more at [https://aka.ms/akspolicydoc](https://aka.ms/akspolicydoc). |AuditIfNotExists, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/ArcPolicyExtension_Audit.json) |
-|[Azure Machine Learning compute instances should be recreated to get the latest software updates](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff110a506-2dcb-422e-bcea-d533fc8c35e2) |Ensure Azure Machine Learning compute instances run on the latest available operating system. Security is improved and vulnerabilities reduced by running with the latest security patches. For more information, visit [https://aka.ms/azureml-ci-updates/](https://aka.ms/azureml-ci-updates/). |[parameters('effects')] |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Machine%20Learning/MachineLearningServices_ComputeInstanceUpdates_Audit.json) |
+|[Azure Machine Learning compute instances should be recreated to get the latest software updates](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff110a506-2dcb-422e-bcea-d533fc8c35e2) |Ensure Azure Machine Learning compute instances run on the latest available operating system. Security is improved and vulnerabilities reduced by running with the latest security patches. For more information, visit [https://aka.ms/azureml-ci-updates/](https://aka.ms/azureml-ci-updates/). |[parameters('effects')] |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Machine%20Learning/MachineLearningServices_ComputeInstanceUpdates_Audit.json) |
|[Azure Policy Add-on for Kubernetes service (AKS) should be installed and enabled on your clusters](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0a15ec92-a229-4763-bb14-0ea34a568f8d) |Azure Policy Add-on for Kubernetes service (AKS) extends Gatekeeper v3, an admission controller webhook for Open Policy Agent (OPA), to apply at-scale enforcements and safeguards on your clusters in a centralized, consistent manner. |Audit, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/AKS_AzurePolicyAddOn_Audit.json) |
|[Function apps should have 'Client Certificates (Incoming client certificates)' enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Feaebaea7-8013-4ceb-9d14-7eb32271373c) |Client certificates allow for the app to request a certificate for incoming requests. Only clients with valid certificates will be able to reach the app. |Audit, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_FunctionApp_Audit_ClientCert.json) |
|[Function apps should have remote debugging turned off](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0e60b895-3786-45da-8377-9c6b4b6ac5f9) |Remote debugging requires inbound ports to be opened on Function apps. Remote debugging should be turned off. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_DisableRemoteDebugging_FunctionApp_Audit.json) |
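Note the Effect(s) column for the Machine Learning row above: it shows the literal expression `[parameters('effects')]` rather than a fixed list, because that definition takes its effect from a parameter supplied at assignment time. A sketch of that shape, as an illustrative Python dict only (not the actual contents of `MachineLearningServices_ComputeInstanceUpdates_Audit.json`):

```python
# Illustrative shape of a definition with a parameterized effect; the resource
# type and parameter name here are examples, not the real file's contents.
policy_definition = {
    "properties": {
        "displayName": "Example: effect chosen at assignment time",
        "mode": "Indexed",
        "parameters": {
            "effects": {
                "type": "String",
                "allowedValues": ["Audit", "Disabled"],
                "defaultValue": "Audit",
            }
        },
        "policyRule": {
            "if": {"field": "type",
                   "equals": "Microsoft.MachineLearningServices/workspaces/computes"},
            # The assignment supplies the value, so a docs table can only show
            # the expression itself in the Effect(s) column:
            "then": {"effect": "[parameters('effects')]"},
        },
    }
}
```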
|[\[Preview\]: Secure Boot should be enabled on supported Windows virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F97566dd7-78ae-4997-8b36-1c7bfe0d8121) |Enable Secure Boot on supported Windows virtual machines to mitigate against malicious and unauthorized changes to the boot chain. Once enabled, only trusted bootloaders, kernel and kernel drivers will be allowed to run. This assessment applies to Trusted Launch and Confidential Windows virtual machines. |Audit, Disabled |[4.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableWindowsSB_Audit.json) |
|[\[Preview\]: vTPM should be enabled on supported virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1c30f9cd-b84c-49cc-aa2c-9288447cc3b3) |Enable virtual TPM device on supported virtual machines to facilitate Measured Boot and other OS security features that require a TPM. Once enabled, vTPM can be used to attest boot integrity. This assessment only applies to trusted launch enabled virtual machines. |Audit, Disabled |[2.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableVTPM_Audit.json) |
|[Guest Configuration extension should be installed on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fae89ebca-1c92-4898-ac2c-9f63decb045c) |To ensure secure configurations of in-guest settings of your machine, install the Guest Configuration extension. In-guest settings that the extension monitors include the configuration of the operating system, application configuration or presence, and environment settings. Once installed, in-guest policies will be available such as 'Windows Exploit guard should be enabled'. Learn more at [https://aka.ms/gcpol](https://aka.ms/gcpol). |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_GCExtOnVm.json) |
-|[Linux machines should meet requirements for the Azure compute security baseline](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffc9b3da7-8347-4380-8e70-0a0361d8dedd) |Requires that prerequisites are deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). Machines are non-compliant if the machine is not configured correctly for one of the recommendations in the Azure compute security baseline. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_AzureLinuxBaseline_AINE.json) |
+|[Linux machines should meet requirements for the Azure compute security baseline](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffc9b3da7-8347-4380-8e70-0a0361d8dedd) |Requires that prerequisites are deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). Machines are non-compliant if the machine is not configured correctly for one of the recommendations in the Azure compute security baseline. |AuditIfNotExists, Disabled |[2.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_AzureLinuxBaseline_AINE.json) |
|[Virtual machines' Guest Configuration extension should be deployed with system-assigned managed identity](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd26f7642-7545-4e18-9b75-8c9bbdee3a9a) |The Guest Configuration extension requires a system assigned managed identity. Azure virtual machines in the scope of this policy will be non-compliant when they have the Guest Configuration extension installed but do not have a system assigned managed identity. Learn more at [https://aka.ms/gcpol](https://aka.ms/gcpol) |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_GCExtOnVmWithNoSAMI.json) |
|[Windows machines should meet requirements of the Azure compute security baseline](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F72650e9f-97bc-4b2a-ab5f-9781a9fcecbc) |Requires that prerequisites are deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). Machines are non-compliant if the machine is not configured correctly for one of the recommendations in the Azure compute security baseline. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_AzureWindowsBaseline_AINE.json) |
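These AuditIfNotExists baselines only mark machines as non-compliant; to see which resources were flagged, you can query compliance records. A hedged sketch using the publicly documented Policy Insights `policyStates` REST endpoint; the token and subscription ID are placeholders you must supply (for example via `az account get-access-token`):

```python
import json
import urllib.request
from urllib.parse import quote

TOKEN = "<bearer-token>"        # placeholder
SUB_ID = "<subscription-id>"    # placeholder

# POST .../policyStates/latest/queryResults returns per-resource compliance
# records; filter down to the non-compliant ones.
url = (f"https://management.azure.com/subscriptions/{SUB_ID}"
       "/providers/Microsoft.PolicyInsights/policyStates/latest/queryResults"
       "?api-version=2019-10-01&$filter=" + quote("complianceState eq 'NonCompliant'"))
req = urllib.request.Request(url, method="POST", data=b"",
                             headers={"Authorization": f"Bearer {TOKEN}"})
with urllib.request.urlopen(req) as resp:
    for state in json.load(resp).get("value", []):
        print(state.get("resourceId"), "->", state.get("policyDefinitionName"))
```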
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|||||
|[A vulnerability assessment solution should be enabled on your virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F501541f7-f7e7-4cd6-868c-4190fdad3ac9) |Audits virtual machines to detect whether they are running a supported vulnerability assessment solution. A core component of every cyber risk and security program is the identification and analysis of vulnerabilities. Azure Security Center's standard pricing tier includes vulnerability scanning for your virtual machines at no extra cost. Additionally, Security Center can automatically deploy this tool for you. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ServerVulnerabilityAssessment_Audit.json) |
-|[Machines should have secret findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3ac7c827-eea2-4bde-acc7-9568cd320efa) |Audits virtual machines to detect whether they contain secret findings from the secret scanning solutions on your virtual machines. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ServerSecretAssessment_Audit.json) |
+|[Machines should have secret findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3ac7c827-eea2-4bde-acc7-9568cd320efa) |Audits virtual machines to detect whether they contain secret findings from the secret scanning solutions on your virtual machines. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ServerSecretAssessment_Audit.json) |
|[Vulnerability assessment should be enabled on SQL Managed Instance](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1b7aa243-30e4-4c9e-bca8-d0d3022b634a) |Audit each SQL Managed Instance which doesn't have recurring vulnerability assessment scans enabled. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnManagedInstance_Audit.json) |
|[Vulnerability assessment should be enabled on your SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fef2a8f2a-b3d9-49cd-a8a8-9a3aaaf647d9) |Audit Azure SQL servers which do not have vulnerability assessment properly configured. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnServer_Audit.json) |
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|||||
-|[\[Preview\]: Machines should be configured to periodically check for missing system updates](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbd876905-5b84-4f73-ab2d-2e7a7c4568d9) |To ensure periodic assessments for missing system updates are triggered automatically every 24 hours, the AssessmentMode property should be set to 'AutomaticByPlatform'. Learn more about AssessmentMode property for Windows: [https://aka.ms/computevm-windowspatchassessmentmode](https://aka.ms/computevm-windowspatchassessmentmode), for Linux: [https://aka.ms/computevm-linuxpatchassessmentmode](https://aka.ms/computevm-linuxpatchassessmentmode). |Audit, Deny, Disabled |[3.1.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Update%20Management%20Center/AzUpdateMgmtCenter_AutoAssessmentMode_Audit.json) |
+|[\[Preview\]: Machines should be configured to periodically check for missing system updates](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbd876905-5b84-4f73-ab2d-2e7a7c4568d9) |To ensure periodic assessments for missing system updates are triggered automatically every 24 hours, the AssessmentMode property should be set to 'AutomaticByPlatform'. Learn more about AssessmentMode property for Windows: [https://aka.ms/computevm-windowspatchassessmentmode](https://aka.ms/computevm-windowspatchassessmentmode), for Linux: [https://aka.ms/computevm-linuxpatchassessmentmode](https://aka.ms/computevm-linuxpatchassessmentmode). |Audit, Deny, Disabled |[3.3.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Update%20Management%20Center/AzUpdateMgmtCenter_AutoAssessmentMode_Audit.json) |
|[\[Preview\]: System updates should be installed on your machines (powered by Update Center)](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff85bf3e0-d513-442e-89c3-1784ad63382b) |Your machines are missing system, security, and critical updates. Software updates often include critical patches to security holes. Such holes are frequently exploited in malware attacks so it's vital to keep your software updated. To install all outstanding patches and secure your machines, follow the remediation steps. |AuditIfNotExists, Disabled |[1.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_MissingSystemUpdatesV2_Audit.json) |
|[Container registry images should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5f0f936f-2f01-4bf5-b6be-d423792fa562) |Container image vulnerability assessment scans your registry for security vulnerabilities and exposes detailed findings for each image. Resolving the vulnerabilities can greatly improve your containers' security posture and protect them from attacks. |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ContainerRegistryVulnerabilityAssessment_Audit.json) |
|[Container registry images should have vulnerability findings resolved (powered by Microsoft Defender Vulnerability Management)](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F090c7b07-b4ed-4561-ad20-e9075f3ccaff) |Container image vulnerability assessment scans your registry for commonly known vulnerabilities (CVEs) and provides a detailed vulnerability report for each image. Resolving vulnerabilities can greatly improve your security posture, ensuring images are safe to use prior to deployment. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/MDC_AzureContainerRegistryVulnerabilityAssessment_Audit.json) |
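The assessment-mode policy above audits a VM property rather than an extension. A sketch of setting that property directly via REST, assuming the `patchSettings.assessmentMode` path and api-version from the Azure Update Manager documentation (use `linuxConfiguration.patchSettings` for Linux; the token and resource ID are placeholders):

```python
import json
import urllib.request

TOKEN = "<bearer-token>"   # placeholder
VM_ID = ("/subscriptions/<subscription-id>/resourceGroups/<rg>"
         "/providers/Microsoft.Compute/virtualMachines/<vm-name>")  # placeholder

# PATCH the VM so missing-update assessments run automatically every 24 hours,
# which is the state the policy's AssessmentMode check looks for.
body = {"properties": {"osProfile": {"windowsConfiguration": {
    "patchSettings": {"assessmentMode": "AutomaticByPlatform"}}}}}
req = urllib.request.Request(
    f"https://management.azure.com{VM_ID}?api-version=2023-03-01",
    method="PATCH", data=json.dumps(body).encode(),
    headers={"Authorization": f"Bearer {TOKEN}",
             "Content-Type": "application/json"})
urllib.request.urlopen(req)
```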
governance Azure Security Benchmarkv1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/azure-security-benchmarkv1.md
Title: Regulatory Compliance details for Azure Security Benchmark v1 description: Details of the Azure Security Benchmark v1 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/03/2023 Last updated : 08/25/2023
|[\[Preview\]: All Internet traffic should be routed via your deployed Azure Firewall](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffc5e4038-4584-4632-8c85-c0448d374b2c) |Azure Security Center has identified that some of your subnets aren't protected with a next generation firewall. Protect your subnets from potential threats by restricting access to them with Azure Firewall or a supported next generation firewall |AuditIfNotExists, Disabled |[3.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/ASC_All_Internet_traffic_should_be_routed_via_Azure_Firewall.json) |
|[\[Preview\]: Container Registry should use a virtual network service endpoint](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc4857be7-912a-4c75-87e6-e30292bcdf78) |This policy audits any Container Registry not configured to use a virtual network service endpoint. |Audit, Disabled |[1.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/VirtualNetworkServiceEndpoint_ContainerRegistry_Audit.json) |
|[Adaptive network hardening recommendations should be applied on internet facing virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F08e6af2d-db70-460a-bfe9-d5bd474ba9d6) |Azure Security Center analyzes the traffic patterns of Internet facing virtual machines and provides Network Security Group rule recommendations that reduce the potential attack surface |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_AdaptiveNetworkHardenings_Audit.json) |
-|[App Service apps should use a virtual network service endpoint](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2d21331d-a4c2-4def-a9ad-ee4e1e023beb) |Use virtual network service endpoints to restrict access to your app from selected subnets from an Azure virtual network. To learn more about App Service service endpoints, visit [https://aka.ms/appservice-vnet-service-endpoint](https://aka.ms/appservice-vnet-service-endpoint). |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/VirtualNetworkServiceEndpoint_AppService_AuditIfNotExists.json) |
+|[App Service apps should use a virtual network service endpoint](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2d21331d-a4c2-4def-a9ad-ee4e1e023beb) |Use virtual network service endpoints to restrict access to your app from selected subnets from an Azure virtual network. To learn more about App Service service endpoints, visit [https://aka.ms/appservice-vnet-service-endpoint](https://aka.ms/appservice-vnet-service-endpoint). |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/VirtualNetworkServiceEndpoint_AppService_AuditIfNotExists.json) |
|[Authorized IP ranges should be defined on Kubernetes Services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0e246bcf-5f6f-4f87-bc6f-775d4712c7ea) |Restrict access to the Kubernetes Service Management API by granting API access only to IP addresses in specific ranges. It is recommended to limit access to authorized IP ranges to ensure that only applications from allowed networks can access the cluster. |Audit, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableIpRanges_KubernetesService_Audit.json) |
|[Cosmos DB should use a virtual network service endpoint](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe0a2b1a3-f7f9-4569-807f-2a9edebdf4d9) |This policy audits any Cosmos DB not configured to use a virtual network service endpoint. |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/VirtualNetworkServiceEndpoint_CosmosDB_Audit.json) |
|[Event Hub should use a virtual network service endpoint](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd63edb4a-c612-454d-b47d-191a724fcbf0) |This policy audits any Event Hub not configured to use a virtual network service endpoint. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/VirtualNetworkServiceEndpoint_EventHub_AuditIfNotExists.json) |
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|||||
|[Audit usage of custom RBAC roles](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa451c1ef-c6ca-483d-87ed-f49761e3ffb5) |Audit built-in roles such as 'Owner, Contributor, Reader' instead of custom RBAC roles, which are error prone. Using custom roles is treated as an exception and requires a rigorous review and threat modeling |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/General/Subscription_AuditCustomRBACRoles_Audit.json) |
-|[Role-Based Access Control (RBAC) should be used on Kubernetes Services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac4a19c2-fa67-49b4-8ae5-0b2e78c49457) |To provide granular filtering on the actions that users can perform, use Role-Based Access Control (RBAC) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies. |Audit, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableRBAC_KubernetesService_Audit.json) |
+|[Azure Role-Based Access Control (RBAC) should be used on Kubernetes Services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac4a19c2-fa67-49b4-8ae5-0b2e78c49457) |To provide granular filtering on the actions that users can perform, use Azure Role-Based Access Control (RBAC) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies. |Audit, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableRBAC_KubernetesService_Audit.json) |
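The diff above renames the definition and bumps its version; the linked `ASC_EnableRBAC_KubernetesService_Audit.json` holds the authoritative rule. For orientation only, an assumed sketch of what an AKS RBAC audit of this kind typically tests, namely the `enableRBAC` flag on managed clusters:

```python
# Assumed shape of an AKS RBAC audit rule, for illustration only; it is not a
# copy of the real definition file.
policy_rule = {
    "if": {
        "allOf": [
            {"field": "type",
             "equals": "Microsoft.ContainerService/managedClusters"},
            # Flag clusters where Kubernetes RBAC is not enabled.
            {"field": "Microsoft.ContainerService/managedClusters/enableRBAC",
             "notEquals": True},
        ]
    },
    "then": {"effect": "[parameters('effect')]"},
}
```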
### Encrypt sensitive information at rest
governance Built In Initiatives https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/built-in-initiatives.md
Title: List of built-in policy initiatives description: List built-in policy initiatives for Azure Policy. Categories include Regulatory Compliance, Guest Configuration, and more. Previously updated : 08/08/2023 Last updated : 08/30/2023
governance Built In Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/built-in-policies.md
Title: List of built-in policy definitions description: List built-in policy definitions for Azure Policy. Categories include Tags, Regulatory Compliance, Key Vault, Kubernetes, Guest Configuration, and more. Previously updated : 08/08/2023 Last updated : 08/30/2023
governance Canada Federal Pbmm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/canada-federal-pbmm.md
Title: Regulatory Compliance details for Canada Federal PBMM description: Details of the Canada Federal PBMM Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/03/2023 Last updated : 08/25/2023
governance Cis Azure 1 1 0 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/cis-azure-1-1-0.md
Title: Regulatory Compliance details for CIS Microsoft Azure Foundations Benchmark 1.1.0 description: Details of the CIS Microsoft Azure Foundations Benchmark 1.1.0 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/03/2023 Last updated : 08/25/2023
|||||
|[Authorize access to security functions and information](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faeed863a-0f56-429f-945d-8bb66bd06841) |CMA_0022 - Authorize access to security functions and information |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0022.json) |
|[Authorize and manage access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F50e9324a-7410-0539-0662-2c1e775538b7) |CMA_0023 - Authorize and manage access |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0023.json) |
+|[Azure Role-Based Access Control (RBAC) should be used on Kubernetes Services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac4a19c2-fa67-49b4-8ae5-0b2e78c49457) |To provide granular filtering on the actions that users can perform, use Azure Role-Based Access Control (RBAC) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies. |Audit, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableRBAC_KubernetesService_Audit.json) |
|[Enforce logical access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F10c4210b-3ec9-9603-050d-77e4d26c7ebb) |CMA_0245 - Enforce logical access |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0245.json) |
|[Enforce mandatory and discretionary access control policies](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb1666a13-8f67-9c47-155e-69e027ff6823) |CMA_0246 - Enforce mandatory and discretionary access control policies |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0246.json) |
|[Require approval for account creation](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fde770ba6-50dd-a316-2932-e0d972eaa734) |CMA_0431 - Require approval for account creation |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0431.json) |
|[Review user groups and applications with access to sensitive data](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Feb1c944e-0e94-647b-9b7e-fdb8d2af0838) |CMA_0481 - Review user groups and applications with access to sensitive data |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0481.json) |
-|[Role-Based Access Control (RBAC) should be used on Kubernetes Services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac4a19c2-fa67-49b4-8ae5-0b2e78c49457) |To provide granular filtering on the actions that users can perform, use Role-Based Access Control (RBAC) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies. |Audit, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableRBAC_KubernetesService_Audit.json) |
## 9 AppService
governance Cis Azure 1 3 0 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/cis-azure-1-3-0.md
Title: Regulatory Compliance details for CIS Microsoft Azure Foundations Benchmark 1.3.0 description: Details of the CIS Microsoft Azure Foundations Benchmark 1.3.0 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/03/2023 Last updated : 08/25/2023
|||||
|[Authorize access to security functions and information](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faeed863a-0f56-429f-945d-8bb66bd06841) |CMA_0022 - Authorize access to security functions and information |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0022.json) |
|[Authorize and manage access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F50e9324a-7410-0539-0662-2c1e775538b7) |CMA_0023 - Authorize and manage access |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0023.json) |
+|[Azure Role-Based Access Control (RBAC) should be used on Kubernetes Services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac4a19c2-fa67-49b4-8ae5-0b2e78c49457) |To provide granular filtering on the actions that users can perform, use Azure Role-Based Access Control (RBAC) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies. |Audit, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableRBAC_KubernetesService_Audit.json) |
|[Enforce logical access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F10c4210b-3ec9-9603-050d-77e4d26c7ebb) |CMA_0245 - Enforce logical access |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0245.json) |
|[Enforce mandatory and discretionary access control policies](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb1666a13-8f67-9c47-155e-69e027ff6823) |CMA_0246 - Enforce mandatory and discretionary access control policies |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0246.json) |
|[Require approval for account creation](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fde770ba6-50dd-a316-2932-e0d972eaa734) |CMA_0431 - Require approval for account creation |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0431.json) |
|[Review user groups and applications with access to sensitive data](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Feb1c944e-0e94-647b-9b7e-fdb8d2af0838) |CMA_0481 - Review user groups and applications with access to sensitive data |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0481.json) |
-|[Role-Based Access Control (RBAC) should be used on Kubernetes Services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac4a19c2-fa67-49b4-8ae5-0b2e78c49457) |To provide granular filtering on the actions that users can perform, use Role-Based Access Control (RBAC) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies. |Audit, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableRBAC_KubernetesService_Audit.json) |
## 9 AppService
governance Cis Azure 1 4 0 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/cis-azure-1-4-0.md
Title: Regulatory Compliance details for CIS Microsoft Azure Foundations Benchmark 1.4.0 description: Details of the CIS Microsoft Azure Foundations Benchmark 1.4.0 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/03/2023 Last updated : 08/25/2023
|||||
|[Authorize access to security functions and information](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faeed863a-0f56-429f-945d-8bb66bd06841) |CMA_0022 - Authorize access to security functions and information |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0022.json) |
|[Authorize and manage access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F50e9324a-7410-0539-0662-2c1e775538b7) |CMA_0023 - Authorize and manage access |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0023.json) |
+|[Azure Role-Based Access Control (RBAC) should be used on Kubernetes Services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac4a19c2-fa67-49b4-8ae5-0b2e78c49457) |To provide granular filtering on the actions that users can perform, use Azure Role-Based Access Control (RBAC) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies. |Audit, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableRBAC_KubernetesService_Audit.json) |
|[Enforce logical access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F10c4210b-3ec9-9603-050d-77e4d26c7ebb) |CMA_0245 - Enforce logical access |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0245.json) |
|[Enforce mandatory and discretionary access control policies](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb1666a13-8f67-9c47-155e-69e027ff6823) |CMA_0246 - Enforce mandatory and discretionary access control policies |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0246.json) |
|[Require approval for account creation](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fde770ba6-50dd-a316-2932-e0d972eaa734) |CMA_0431 - Require approval for account creation |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0431.json) |
|[Review user groups and applications with access to sensitive data](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Feb1c944e-0e94-647b-9b7e-fdb8d2af0838) |CMA_0481 - Review user groups and applications with access to sensitive data |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0481.json) |
-|[Role-Based Access Control (RBAC) should be used on Kubernetes Services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac4a19c2-fa67-49b4-8ae5-0b2e78c49457) |To provide granular filtering on the actions that users can perform, use Role-Based Access Control (RBAC) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies. |Audit, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableRBAC_KubernetesService_Audit.json) |
## 9 AppService
governance Cmmc L3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/cmmc-l3.md
Title: Regulatory Compliance details for CMMC Level 3 description: Details of the CMMC Level 3 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/03/2023 Last updated : 08/25/2023
|[App Service apps should not have CORS configured to allow every resource to access your apps](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5744710e-cc2f-4ee8-8809-3b11e89f4bc9) |Cross-Origin Resource Sharing (CORS) should not allow all domains to access your app. Allow only required domains to interact with your app. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_RestrictCORSAccess_WebApp_Audit.json) |
|[Audit Linux machines that allow remote connections from accounts without passwords](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fea53dbee-c6c9-4f0e-9f9e-de0039b78023) |Requires that prerequisites are deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). Machines are non-compliant if they allow remote connections from accounts without passwords. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_LinuxPassword110_AINE.json) |
|[Azure Key Vault should have firewall enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F55615ac9-af46-4a59-874e-391cc3dfb490) |Enable the key vault firewall so that the key vault is not accessible by default to any public IPs. Optionally, you can configure specific IP ranges to limit access to those networks. Learn more at: [https://docs.microsoft.com/azure/key-vault/general/network-security](../../../key-vault/general/network-security.md) |Audit, Deny, Disabled |[3.2.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/AzureKeyVaultFirewallEnabled_Audit.json) |
+|[Azure Role-Based Access Control (RBAC) should be used on Kubernetes Services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac4a19c2-fa67-49b4-8ae5-0b2e78c49457) |To provide granular filtering on the actions that users can perform, use Azure Role-Based Access Control (RBAC) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies. |Audit, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableRBAC_KubernetesService_Audit.json) |
|[Blocked accounts with owner permissions on Azure resources should be removed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0cfea604-3201-4e14-88fc-fae4c427a6c5) |Deprecated accounts with owner permissions should be removed from your subscription. Deprecated accounts are accounts that have been blocked from signing in. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveBlockedAccountsWithOwnerPermissions_Audit.json) |
|[Blocked accounts with read and write permissions on Azure resources should be removed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F8d7e1fde-fe26-4b5f-8108-f8e432cbc2be) |Deprecated accounts should be removed from your subscriptions. Deprecated accounts are accounts that have been blocked from signing in. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveBlockedAccountsWithReadWritePermissions_Audit.json) |
|[Cognitive Services accounts should disable public network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0725b4dd-7e76-479c-a735-68e7ee23d5ca) |To improve the security of Cognitive Services accounts, ensure that it isn't exposed to the public internet and can only be accessed from a private endpoint. Disable the public network access property as described in [https://go.microsoft.com/fwlink/?linkid=2129800](https://go.microsoft.com/fwlink/?linkid=2129800). This option disables access from any public address space outside the Azure IP range, and denies all logins that match IP or virtual network-based firewall rules. This reduces data leakage risks. |Audit, Deny, Disabled |[3.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisablePublicNetworkAccess_Audit.json) |
|[Public network access should be disabled for MySQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd9844e8a-1437-4aeb-a32c-0c992f056095) |Disable the public network access property to improve security and ensure your Azure Database for MySQL can only be accessed from a private endpoint. This configuration strictly disables access from any public address space outside of Azure IP range, and denies all logins that match IP or virtual network-based firewall rules. |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/MySQL_DisablePublicNetworkAccess_Audit.json) |
|[Public network access should be disabled for PostgreSQL flexible servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5e1de0e3-42cb-4ebc-a86d-61d0c619ca48) |Disabling the public network access property improves security by ensuring your Azure Database for PostgreSQL flexible servers can only be accessed from a private endpoint. This configuration strictly disables access from any public address space outside of Azure IP range and denies all logins that match IP or virtual network-based firewall rules. |Audit, Deny, Disabled |[3.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/PostgreSQL_FlexibleServers_DisablePublicNetworkAccess_Audit.json) |
|[Public network access should be disabled for PostgreSQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb52376f7-9612-48a1-81cd-1ffe4b61032c) |Disable the public network access property to improve security and ensure your Azure Database for PostgreSQL can only be accessed from a private endpoint. This configuration disables access from any public address space outside of Azure IP range, and denies all logins that match IP or virtual network-based firewall rules. |Audit, Deny, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/PostgreSQL_DisablePublicNetworkAccess_Audit.json) |
-|[Role-Based Access Control (RBAC) should be used on Kubernetes Services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac4a19c2-fa67-49b4-8ae5-0b2e78c49457) |To provide granular filtering on the actions that users can perform, use Role-Based Access Control (RBAC) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies. |Audit, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableRBAC_KubernetesService_Audit.json) |
|[Storage accounts should allow access from trusted Microsoft services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc9d007d0-c057-4772-b18c-01e546713bcd) |Some Microsoft services that interact with storage accounts operate from networks that can't be granted access through network rules. To help this type of service work as intended, allow the set of trusted Microsoft services to bypass the network rules. These services will then use strong authentication to access the storage account. |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/StorageAccess_TrustedMicrosoftServices_Audit.json) |
|[Storage accounts should restrict network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F34c877ad-507e-4c82-993e-3452a6e0ad3c) |Network access to storage accounts should be restricted. Configure network rules so only applications from allowed networks can access the storage account. To allow connections from specific internet or on-premises clients, access can be granted to traffic from specific Azure virtual networks or to public internet IP address ranges |Audit, Deny, Disabled |[1.1.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/Storage_NetworkAcls_Audit.json) |
|[Windows machines should meet requirements for 'Security Options - Network Access'](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3ff60f98-7fa4-410a-9f7f-0b00f5afdbdd) |Windows machines should have the specified Group Policy settings in the category 'Security Options - Network Access' for including access for anonymous users, local accounts, and remote access to the registry. This policy requires that the Guest Configuration prerequisites have been deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_SecurityOptionsNetworkAccess_AINE.json) |
|[App Service apps should only be accessible over HTTPS](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa4af4a39-4135-47fb-b175-47fbdf85311d) |Use of HTTPS ensures server/service authentication and protects data in transit from network layer eavesdropping attacks. |Audit, Disabled, Deny |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppServiceWebapp_AuditHTTP_Audit.json) |
|[Audit Linux machines that allow remote connections from accounts without passwords](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fea53dbee-c6c9-4f0e-9f9e-de0039b78023) |Requires that prerequisites are deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). Machines are non-compliant if they allow remote connections from accounts without passwords. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_LinuxPassword110_AINE.json) |
|[Azure Key Vault should have firewall enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F55615ac9-af46-4a59-874e-391cc3dfb490) |Enable the key vault firewall so that the key vault is not accessible by default to any public IPs. Optionally, you can configure specific IP ranges to limit access to those networks. Learn more at: [https://docs.microsoft.com/azure/key-vault/general/network-security](../../../key-vault/general/network-security.md) |Audit, Deny, Disabled |[3.2.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/AzureKeyVaultFirewallEnabled_Audit.json) |
+|[Azure Role-Based Access Control (RBAC) should be used on Kubernetes Services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac4a19c2-fa67-49b4-8ae5-0b2e78c49457) |To provide granular filtering on the actions that users can perform, use Azure Role-Based Access Control (RBAC) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies. |Audit, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableRBAC_KubernetesService_Audit.json) |
|[Cognitive Services accounts should disable public network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0725b4dd-7e76-479c-a735-68e7ee23d5ca) |To improve the security of Cognitive Services accounts, ensure that they aren't exposed to the public internet and can only be accessed from a private endpoint. Disable the public network access property as described in [https://go.microsoft.com/fwlink/?linkid=2129800](https://go.microsoft.com/fwlink/?linkid=2129800). This option disables access from any public address space outside the Azure IP range, and denies all logins that match IP or virtual network-based firewall rules. This reduces data leakage risks. |Audit, Deny, Disabled |[3.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisablePublicNetworkAccess_Audit.json) |
|[Cognitive Services accounts should restrict network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F037eea7a-bd0a-46c5-9a66-03aea78705d3) |Network access to Cognitive Services accounts should be restricted. Configure network rules so only applications from allowed networks can access the Cognitive Services account. To allow connections from specific internet or on-premises clients, access can be granted to traffic from specific Azure virtual networks or to public internet IP address ranges. |Audit, Deny, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_NetworkAcls_Audit.json) |
|[Container registries should not allow unrestricted network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd0793b48-0edc-4296-a390-4c75d1bdfd71) |Azure container registries by default accept connections over the internet from hosts on any network. To protect your registries from potential threats, allow access from only specific private endpoints, public IP addresses or address ranges. If your registry doesn't have network rules configured, it will appear in the unhealthy resources. Learn more about Container Registry network rules here: [https://aka.ms/acr/privatelink](https://aka.ms/acr/privatelink), [https://aka.ms/acr/portal/public-network](https://aka.ms/acr/portal/public-network) and [https://aka.ms/acr/vnet](https://aka.ms/acr/vnet). |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Container%20Registry/ACR_NetworkRulesExist_AuditDeny.json) |
This built-in initiative is deployed as part of the
|[Public network access should be disabled for MySQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd9844e8a-1437-4aeb-a32c-0c992f056095) |Disable the public network access property to improve security and ensure your Azure Database for MySQL can only be accessed from a private endpoint. This configuration strictly disables access from any public address space outside of Azure IP range, and denies all logins that match IP or virtual network-based firewall rules. |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/MySQL_DisablePublicNetworkAccess_Audit.json) |
|[Public network access should be disabled for PostgreSQL flexible servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5e1de0e3-42cb-4ebc-a86d-61d0c619ca48) |Disabling the public network access property improves security by ensuring your Azure Database for PostgreSQL flexible servers can only be accessed from a private endpoint. This configuration strictly disables access from any public address space outside of Azure IP range and denies all logins that match IP or virtual network-based firewall rules. |Audit, Deny, Disabled |[3.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/PostgreSQL_FlexibleServers_DisablePublicNetworkAccess_Audit.json) |
|[Public network access should be disabled for PostgreSQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb52376f7-9612-48a1-81cd-1ffe4b61032c) |Disable the public network access property to improve security and ensure your Azure Database for PostgreSQL can only be accessed from a private endpoint. This configuration disables access from any public address space outside of Azure IP range, and denies all logins that match IP or virtual network-based firewall rules. |Audit, Deny, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/PostgreSQL_DisablePublicNetworkAccess_Audit.json) |
-|[Role-Based Access Control (RBAC) should be used on Kubernetes Services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac4a19c2-fa67-49b4-8ae5-0b2e78c49457) |To provide granular filtering on the actions that users can perform, use Role-Based Access Control (RBAC) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies. |Audit, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableRBAC_KubernetesService_Audit.json) |
|[Secure transfer to storage accounts should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F404c3081-a854-4457-ae30-26a93ef643f9) |Audit requirement of Secure transfer in your storage account. Secure transfer is an option that forces your storage account to accept requests only from secure connections (HTTPS). Use of HTTPS ensures authentication between the server and the service and protects data in transit from network layer attacks such as man-in-the-middle, eavesdropping, and session-hijacking |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/Storage_AuditForHTTPSEnabled_Audit.json) |
|[Storage accounts should allow access from trusted Microsoft services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc9d007d0-c057-4772-b18c-01e546713bcd) |Some Microsoft services that interact with storage accounts operate from networks that can't be granted access through network rules. To help this type of service work as intended, allow the set of trusted Microsoft services to bypass the network rules. These services will then use strong authentication to access the storage account. |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/StorageAccess_TrustedMicrosoftServices_Audit.json) |
|[Storage accounts should restrict network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F34c877ad-507e-4c82-993e-3452a6e0ad3c) |Network access to storage accounts should be restricted. Configure network rules so only applications from allowed networks can access the storage account. To allow connections from specific internet or on-premises clients, access can be granted to traffic from specific Azure virtual networks or to public internet IP address ranges |Audit, Deny, Disabled |[1.1.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/Storage_NetworkAcls_Audit.json) |
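The "Audit, Deny, Disabled" column on rows like the secure-transfer policy reflects a single parameterized rule: the definition's effect is a parameter restricted to those values. A hedged sketch of that shape using the documented `supportsHttpsTrafficOnly` storage property; illustrative, not the verbatim built-in:

```python
import json

# Shape sketch: one rule, effect supplied as a parameter restricted to the
# values shown in the table's Effect(s) column.
policy_rule = {
    "if": {
        "allOf": [
            {"field": "type", "equals": "Microsoft.Storage/storageAccounts"},
            {"field": "Microsoft.Storage/storageAccounts/supportsHttpsTrafficOnly",
             "notEquals": "true"},
        ]
    },
    "then": {"effect": "[parameters('effect')]"},
}
parameters = {
    "effect": {
        "type": "String",
        "allowedValues": ["Audit", "Deny", "Disabled"],
        "defaultValue": "Audit",
    }
}
print(json.dumps({"policyRule": policy_rule, "parameters": parameters}, indent=2))
```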
This built-in initiative is deployed as part of the
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|---|---|---|---|
+|[Azure Role-Based Access Control (RBAC) should be used on Kubernetes Services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac4a19c2-fa67-49b4-8ae5-0b2e78c49457) |To provide granular filtering on the actions that users can perform, use Azure Role-Based Access Control (RBAC) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies. |Audit, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableRBAC_KubernetesService_Audit.json) |
|[Guest accounts with read permissions on Azure resources should be removed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe9ac8f8e-ce22-4355-8f04-99b911d6be52) |External accounts with read privileges should be removed from your subscription in order to prevent unmonitored access. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveGuestAccountsWithReadPermissions_Audit.json) |
|[Guest accounts with write permissions on Azure resources should be removed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F94e1c2ac-cbbe-4cac-a2b5-389c812dee87) |External accounts with write privileges should be removed from your subscription in order to prevent unmonitored access. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveGuestAccountsWithWritePermissions_Audit.json) |
|[Management ports of virtual machines should be protected with just-in-time network access control](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb0f33259-77d7-4c9e-aac6-3aabcfae693c) |Possible network Just In Time (JIT) access will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_JITNetworkAccess_Audit.json) |
-|[Role-Based Access Control (RBAC) should be used on Kubernetes Services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac4a19c2-fa67-49b4-8ae5-0b2e78c49457) |To provide granular filtering on the actions that users can perform, use Role-Based Access Control (RBAC) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies. |Audit, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableRBAC_KubernetesService_Audit.json) |
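Any of the definitions in these tables can be assigned by ID. As a sketch, assigning the Kubernetes RBAC definition from the rows above (`ac4a19c2-...`) at subscription scope through the documented ARM `policyAssignments` PUT; `SUBSCRIPTION_ID`, `ARM_TOKEN`, and the assignment name `aks-rbac-audit` are placeholders you would supply:

```python
import os

import requests

subscription_id = os.environ["SUBSCRIPTION_ID"]  # placeholder
token = os.environ["ARM_TOKEN"]  # e.g. from `az account get-access-token`

definition_id = ("/providers/Microsoft.Authorization/policyDefinitions/"
                 "ac4a19c2-fa67-49b4-8ae5-0b2e78c49457")
scope = f"/subscriptions/{subscription_id}"
url = (f"https://management.azure.com{scope}/providers/Microsoft.Authorization"
       "/policyAssignments/aks-rbac-audit?api-version=2022-06-01")

body = {
    "properties": {
        "displayName": "Azure RBAC should be used on Kubernetes Services",
        "policyDefinitionId": definition_id,
        # "Audit" and "Disabled" are the allowed effects listed in the table row.
        "parameters": {"effect": {"value": "Audit"}},
    }
}

resp = requests.put(url, json=body, headers={"Authorization": f"Bearer {token}"})
resp.raise_for_status()
print(resp.json()["id"])
```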
### Use non-privileged accounts or roles when accessing nonsecurity functions.
This built-in initiative is deployed as part of the
|[\[Preview\]: All Internet traffic should be routed via your deployed Azure Firewall](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffc5e4038-4584-4632-8c85-c0448d374b2c) |Azure Security Center has identified that some of your subnets aren't protected with a next generation firewall. Protect your subnets from potential threats by restricting access to them with Azure Firewall or a supported next generation firewall |AuditIfNotExists, Disabled |[3.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/ASC_All_Internet_traffic_should_be_routed_via_Azure_Firewall.json) |
|[\[Preview\]: Storage account public access should be disallowed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4fa4b6c0-31ca-4c0d-b10d-24b96f62a751) |Anonymous public read access to containers and blobs in Azure Storage is a convenient way to share data but might present security risks. To prevent data breaches caused by undesired anonymous access, Microsoft recommends preventing public access to a storage account unless your scenario requires it. |audit, Audit, deny, Deny, disabled, Disabled |[3.1.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/ASC_Storage_DisallowPublicBlobAccess_Audit.json) |
|[Adaptive network hardening recommendations should be applied on internet facing virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F08e6af2d-db70-460a-bfe9-d5bd474ba9d6) |Azure Security Center analyzes the traffic patterns of Internet facing virtual machines and provides Network Security Group rule recommendations that reduce the potential attack surface |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_AdaptiveNetworkHardenings_Audit.json) |
+|[Azure Role-Based Access Control (RBAC) should be used on Kubernetes Services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac4a19c2-fa67-49b4-8ae5-0b2e78c49457) |To provide granular filtering on the actions that users can perform, use Azure Role-Based Access Control (RBAC) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies. |Audit, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableRBAC_KubernetesService_Audit.json) |
|[Cognitive Services accounts should disable public network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0725b4dd-7e76-479c-a735-68e7ee23d5ca) |To improve the security of Cognitive Services accounts, ensure that they aren't exposed to the public internet and can only be accessed from a private endpoint. Disable the public network access property as described in [https://go.microsoft.com/fwlink/?linkid=2129800](https://go.microsoft.com/fwlink/?linkid=2129800). This option disables access from any public address space outside the Azure IP range, and denies all logins that match IP or virtual network-based firewall rules. This reduces data leakage risks. |Audit, Deny, Disabled |[3.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisablePublicNetworkAccess_Audit.json) |
|[Cognitive Services accounts should restrict network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F037eea7a-bd0a-46c5-9a66-03aea78705d3) |Network access to Cognitive Services accounts should be restricted. Configure network rules so only applications from allowed networks can access the Cognitive Services account. To allow connections from specific internet or on-premises clients, access can be granted to traffic from specific Azure virtual networks or to public internet IP address ranges. |Audit, Deny, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_NetworkAcls_Audit.json) |
|[Container registries should not allow unrestricted network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd0793b48-0edc-4296-a390-4c75d1bdfd71) |Azure container registries by default accept connections over the internet from hosts on any network. To protect your registries from potential threats, allow access from only specific private endpoints, public IP addresses or address ranges. If your registry doesn't have network rules configured, it will appear in the unhealthy resources. Learn more about Container Registry network rules here: [https://aka.ms/acr/privatelink](https://aka.ms/acr/privatelink), [https://aka.ms/acr/portal/public-network](https://aka.ms/acr/portal/public-network) and [https://aka.ms/acr/vnet](https://aka.ms/acr/vnet). |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Container%20Registry/ACR_NetworkRulesExist_AuditDeny.json) |
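The preview storage public access row earlier in this table keys off the account's `allowBlobPublicAccess` property (the ARM schema name); `False` disallows anonymous blob and container access account-wide. A tiny illustrative check:

```python
# Sketch only: mirrors the property the preview storage policy inspects.
def public_blob_access_disallowed(account: dict) -> bool:
    return account.get("properties", {}).get("allowBlobPublicAccess") is False

account = {"properties": {"allowBlobPublicAccess": False}}
print(public_blob_access_disallowed(account))  # True -> compliant
```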
This built-in initiative is deployed as part of the
|[Public network access should be disabled for MySQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd9844e8a-1437-4aeb-a32c-0c992f056095) |Disable the public network access property to improve security and ensure your Azure Database for MySQL can only be accessed from a private endpoint. This configuration strictly disables access from any public address space outside of Azure IP range, and denies all logins that match IP or virtual network-based firewall rules. |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/MySQL_DisablePublicNetworkAccess_Audit.json) |
|[Public network access should be disabled for PostgreSQL flexible servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5e1de0e3-42cb-4ebc-a86d-61d0c619ca48) |Disabling the public network access property improves security by ensuring your Azure Database for PostgreSQL flexible servers can only be accessed from a private endpoint. This configuration strictly disables access from any public address space outside of Azure IP range and denies all logins that match IP or virtual network-based firewall rules. |Audit, Deny, Disabled |[3.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/PostgreSQL_FlexibleServers_DisablePublicNetworkAccess_Audit.json) |
|[Public network access should be disabled for PostgreSQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb52376f7-9612-48a1-81cd-1ffe4b61032c) |Disable the public network access property to improve security and ensure your Azure Database for PostgreSQL can only be accessed from a private endpoint. This configuration disables access from any public address space outside of Azure IP range, and denies all logins that match IP or virtual network-based firewall rules. |Audit, Deny, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/PostgreSQL_DisablePublicNetworkAccess_Audit.json) |
-|[Role-Based Access Control (RBAC) should be used on Kubernetes Services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac4a19c2-fa67-49b4-8ae5-0b2e78c49457) |To provide granular filtering on the actions that users can perform, use Role-Based Access Control (RBAC) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies. |Audit, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableRBAC_KubernetesService_Audit.json) |
|[Storage accounts should restrict network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F34c877ad-507e-4c82-993e-3452a6e0ad3c) |Network access to storage accounts should be restricted. Configure network rules so only applications from allowed networks can access the storage account. To allow connections from specific internet or on-premises clients, access can be granted to traffic from specific Azure virtual networks or to public internet IP address ranges |Audit, Deny, Disabled |[1.1.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/Storage_NetworkAcls_Audit.json) |
|[Windows machines should meet requirements for 'Security Options - Network Access'](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3ff60f98-7fa4-410a-9f7f-0b00f5afdbdd) |Windows machines should have the specified Group Policy settings in the category 'Security Options - Network Access' for including access for anonymous users, local accounts, and remote access to the registry. This policy requires that the Guest Configuration prerequisites have been deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_SecurityOptionsNetworkAccess_AINE.json) |
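The three database rows at the top of this table all inspect the server's `publicNetworkAccess` property. A sketch of that condition, with the property path taken from the ARM resource shape (the built-ins' alias spelling may differ):

```python
# Sketch only: flag a database server whose publicNetworkAccess is still Enabled.
def public_network_access_disabled(server: dict) -> bool:
    return server.get("properties", {}).get("publicNetworkAccess") == "Disabled"

mysql_server = {
    "type": "Microsoft.DBforMySQL/servers",
    "properties": {"publicNetworkAccess": "Enabled"},
}
print(public_network_access_disabled(mysql_server))  # False -> non-compliant
```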
This built-in initiative is deployed as part of the
|---|---|---|---|
|[Adaptive application controls for defining safe applications should be enabled on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F47a6b606-51aa-4496-8bb7-64b11cf66adc) |Enable application controls to define the list of known-safe applications running on your machines, and alert you when other applications run. This helps harden your machines against malware. To simplify the process of configuring and maintaining your rules, Security Center uses machine learning to analyze the applications running on each machine and suggest the list of known-safe applications. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_AdaptiveApplicationControls_Audit.json) |
|[An activity log alert should exist for specific Policy operations](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc5447c04-a4d7-4ba8-a263-c9ee321a6858) |This policy audits specific Policy operations with no activity log alerts configured. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/ActivityLog_PolicyOperations_Audit.json) |
-|[Linux machines should meet requirements for the Azure compute security baseline](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffc9b3da7-8347-4380-8e70-0a0361d8dedd) |Requires that prerequisites are deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). Machines are non-compliant if the machine is not configured correctly for one of the recommendations in the Azure compute security baseline. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_AzureLinuxBaseline_AINE.json) |
+|[Linux machines should meet requirements for the Azure compute security baseline](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffc9b3da7-8347-4380-8e70-0a0361d8dedd) |Requires that prerequisites are deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). Machines are non-compliant if the machine is not configured correctly for one of the recommendations in the Azure compute security baseline. |AuditIfNotExists, Disabled |[2.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_AzureLinuxBaseline_AINE.json) |
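Guest Configuration policies such as the Linux baseline above use the `AuditIfNotExists` pattern: the machine is flagged unless a related `guestConfigurationAssignments` resource reports `Compliant`. A hedged sketch of that rule shape; illustrative, not the verbatim 2.1.0 definition:

```python
import json

# Shape sketch of the AuditIfNotExists rule: audit the VM unless a related
# guestConfigurationAssignments resource reports Compliant.
policy_rule = {
    "if": {"field": "type", "equals": "Microsoft.Compute/virtualMachines"},
    "then": {
        "effect": "auditIfNotExists",
        "details": {
            "type": "Microsoft.GuestConfiguration/guestConfigurationAssignments",
            "existenceCondition": {
                "field": ("Microsoft.GuestConfiguration"
                          "/guestConfigurationAssignments/complianceStatus"),
                "equals": "Compliant",
            },
        },
    },
}
print(json.dumps(policy_rule, indent=2))
```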
### Employ the principle of least functionality by configuring organizational systems to provide only essential capabilities.
This built-in initiative is deployed as part of the
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|---|---|---|---|
-|[Role-Based Access Control (RBAC) should be used on Kubernetes Services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac4a19c2-fa67-49b4-8ae5-0b2e78c49457) |To provide granular filtering on the actions that users can perform, use Role-Based Access Control (RBAC) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies. |Audit, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableRBAC_KubernetesService_Audit.json) |
+|[Azure Role-Based Access Control (RBAC) should be used on Kubernetes Services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac4a19c2-fa67-49b4-8ae5-0b2e78c49457) |To provide granular filtering on the actions that users can perform, use Azure Role-Based Access Control (RBAC) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies. |Audit, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableRBAC_KubernetesService_Audit.json) |
|[Windows machines should meet requirements for 'System Audit Policies - Privilege Use'](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F87845465-c458-45f3-af66-dcd62176f397) |Windows machines should have the specified Group Policy settings in the category 'System Audit Policies - Privilege Use' for auditing nonsensitive and other privilege use. This policy requires that the Guest Configuration prerequisites have been deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_SystemAuditPoliciesPrivilegeUse_AINE.json) |
### Control and monitor user-installed software.
governance Fedramp High https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/fedramp-high.md
Title: Regulatory Compliance details for FedRAMP High description: Details of the FedRAMP High Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment.
Previously updated : 08/03/2023 Last updated : 08/25/2023
initiative definition.
|[An Azure Active Directory administrator should be provisioned for SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1f314764-cb73-4fc9-b863-8eca98ac36e9) |Audit provisioning of an Azure Active Directory administrator for your SQL server to enable Azure AD authentication. Azure AD authentication enables simplified permission management and centralized identity management of database users and other Microsoft services. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SQL_DB_AuditServerADAdmins_Audit.json) |
|[App Service apps should use managed identity](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2b9ad585-36bc-4615-b300-fd4435808332) |Use a managed identity for enhanced authentication security. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_UseManagedIdentity_WebApp_Audit.json) |
|[Audit Linux machines that have accounts without passwords](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff6ec09a3-78bf-4f8f-99dc-6c77182d0f99) |Requires that prerequisites are deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). Machines are non-compliant if Linux machines have accounts without passwords. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_LinuxPassword232_AINE.json) |
-|[Authentication to Linux machines should require SSH keys](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F630c64f9-8b6b-4c64-b511-6544ceff6fd6) |Although SSH itself provides an encrypted connection, using passwords with SSH still leaves the VM vulnerable to brute-force attacks. The most secure option for authenticating to an Azure Linux virtual machine over SSH is with a public-private key pair, also known as SSH keys. Learn more: [https://docs.microsoft.com/azure/virtual-machines/linux/create-ssh-keys-detailed](../../../virtual-machines/linux/create-ssh-keys-detailed.md). |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_LinuxNoPasswordForSSH_AINE.json) |
+|[Authentication to Linux machines should require SSH keys](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F630c64f9-8b6b-4c64-b511-6544ceff6fd6) |Although SSH itself provides an encrypted connection, using passwords with SSH still leaves the VM vulnerable to brute-force attacks. The most secure option for authenticating to an Azure Linux virtual machine over SSH is with a public-private key pair, also known as SSH keys. Learn more: [https://docs.microsoft.com/azure/virtual-machines/linux/create-ssh-keys-detailed](../../../virtual-machines/linux/create-ssh-keys-detailed.md). |AuditIfNotExists, Disabled |[3.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_LinuxNoPasswordForSSH_AINE.json) |
|[Authorize access to security functions and information](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faeed863a-0f56-429f-945d-8bb66bd06841) |CMA_0022 - Authorize access to security functions and information |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0022.json) |
|[Authorize and manage access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F50e9324a-7410-0539-0662-2c1e775538b7) |CMA_0023 - Authorize and manage access |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0023.json) |
|[Cognitive Services accounts should have local authentication methods disabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F71ef260a-8f18-47b7-abcb-62d0673d94dc) |Disabling local authentication methods improves security by ensuring that Cognitive Services accounts require Azure Active Directory identities exclusively for authentication. Learn more at: [https://aka.ms/cs/auth](https://aka.ms/cs/auth). |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisableLocalAuth_Audit.json) |
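Compliance results for any definition in these tables can be pulled from the documented Policy Insights `policyStates` endpoint. A sketch filtering on the Cognitive Services local-authentication definition from the row above; `SUBSCRIPTION_ID` and `ARM_TOKEN` are placeholders:

```python
import os

import requests

scope = f"/subscriptions/{os.environ['SUBSCRIPTION_ID']}"  # placeholder
url = (f"https://management.azure.com{scope}/providers/Microsoft.PolicyInsights"
       "/policyStates/latest/queryResults")
definition_id = ("/providers/microsoft.authorization/policydefinitions/"
                 "71ef260a-8f18-47b7-abcb-62d0673d94dc")

resp = requests.post(
    url,
    params={
        "api-version": "2019-10-01",
        "$filter": f"policyDefinitionId eq '{definition_id}'",
    },
    headers={"Authorization": f"Bearer {os.environ['ARM_TOKEN']}"},
)
resp.raise_for_status()
for record in resp.json().get("value", []):
    print(record["complianceState"], record["resourceId"])
```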
Policy And Procedures
|[Kubernetes cluster services should listen only on allowed ports](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F233a2a17-77ca-4fb1-9b6b-69223d272a44) |Restrict services to listen only on allowed ports to secure access to the Kubernetes cluster. This policy is generally available for Kubernetes Service (AKS), and preview for Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](https://aka.ms/kubepolicydoc). |audit, Audit, deny, Deny, disabled, Disabled |[8.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/ServiceAllowedPorts.json) |
|[Kubernetes cluster should not allow privileged containers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F95edb821-ddaf-4404-9732-666045e056b4) |Do not allow privileged containers creation in a Kubernetes cluster. This recommendation is part of CIS 5.2.1 which is intended to improve the security of your Kubernetes environments. This policy is generally available for Kubernetes Service (AKS), and preview for Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](https://aka.ms/kubepolicydoc). |audit, Audit, deny, Deny, disabled, Disabled |[9.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/ContainerNoPrivilege.json) |
|[Kubernetes clusters should not allow container privilege escalation](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1c6e92c9-99f0-4e55-9cf2-0c234dc48f99) |Do not allow containers to run with privilege escalation to root in a Kubernetes cluster. This recommendation is part of CIS 5.2.5 which is intended to improve the security of your Kubernetes environments. This policy is generally available for Kubernetes Service (AKS), and preview for Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](https://aka.ms/kubepolicydoc). |audit, Audit, deny, Deny, disabled, Disabled |[7.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/ContainerNoPrivilegeEscalation.json) |
-|[Linux machines should meet requirements for the Azure compute security baseline](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffc9b3da7-8347-4380-8e70-0a0361d8dedd) |Requires that prerequisites are deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). Machines are non-compliant if the machine is not configured correctly for one of the recommendations in the Azure compute security baseline. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_AzureLinuxBaseline_AINE.json) |
+|[Linux machines should meet requirements for the Azure compute security baseline](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffc9b3da7-8347-4380-8e70-0a0361d8dedd) |Requires that prerequisites are deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). Machines are non-compliant if the machine is not configured correctly for one of the recommendations in the Azure compute security baseline. |AuditIfNotExists, Disabled |[2.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_AzureLinuxBaseline_AINE.json) |
|[Remediate information system flaws](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbe38a620-000b-21cf-3cb3-ea151b704c3b) |CMA_0427 - Remediate information system flaws |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0427.json) |
|[Windows machines should meet requirements of the Azure compute security baseline](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F72650e9f-97bc-4b2a-ab5f-9781a9fcecbc) |Requires that prerequisites are deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). Machines are non-compliant if the machine is not configured correctly for one of the recommendations in the Azure compute security baseline. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_AzureWindowsBaseline_AINE.json) |
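The Kubernetes rows at the start of this table are enforced at admission time through Gatekeeper constraints; the privileged-container check (CIS 5.2.1) boils down to inspecting each container's `securityContext`. An equivalent standalone test against a pod manifest:

```python
# Standalone equivalent of the privileged-container check; the live policy is a
# Gatekeeper constraint evaluated at admission, not this Python.
def has_privileged_container(pod: dict) -> bool:
    spec = pod.get("spec", {})
    containers = spec.get("containers", []) + spec.get("initContainers", [])
    return any(c.get("securityContext", {}).get("privileged", False)
               for c in containers)

pod = {"spec": {"containers": [
    {"name": "app", "securityContext": {"privileged": True}}]}}
print(has_privileged_container(pod))  # True -> would be audited or denied
```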
Policy And Procedures
|[Add system-assigned managed identity to enable Guest Configuration assignments on VMs with a user-assigned identity](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F497dff13-db2a-4c0f-8603-28fa3b331ab6) |This policy adds a system-assigned managed identity to virtual machines hosted in Azure that are supported by Guest Configuration and have at least one user-assigned identity but do not have a system-assigned managed identity. A system-assigned managed identity is a prerequisite for all Guest Configuration assignments and must be added to machines before using any Guest Configuration policy definitions. For more information on Guest Configuration, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). |modify |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_AddSystemIdentityWhenUser_Prerequisite.json) |
|[Audit Linux machines that do not have the passwd file permissions set to 0644](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe6955644-301c-44b5-a4c4-528577de6861) |Requires that prerequisites are deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). Machines are non-compliant if Linux machines do not have the passwd file permissions set to 0644. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_LinuxPassword121_AINE.json) |
|[Audit Windows machines that do not store passwords using reversible encryption](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fda0f98fe-a24b-4ad5-af69-bd0400233661) |Requires that prerequisites are deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). Machines are non-compliant if Windows machines do not store passwords using reversible encryption. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_WindowsPasswordEncryption_AINE.json) |
-|[Authentication to Linux machines should require SSH keys](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F630c64f9-8b6b-4c64-b511-6544ceff6fd6) |Although SSH itself provides an encrypted connection, using passwords with SSH still leaves the VM vulnerable to brute-force attacks. The most secure option for authenticating to an Azure Linux virtual machine over SSH is with a public-private key pair, also known as SSH keys. Learn more: [https://docs.microsoft.com/azure/virtual-machines/linux/create-ssh-keys-detailed](../../../virtual-machines/linux/create-ssh-keys-detailed.md). |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_LinuxNoPasswordForSSH_AINE.json) |
+|[Authentication to Linux machines should require SSH keys](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F630c64f9-8b6b-4c64-b511-6544ceff6fd6) |Although SSH itself provides an encrypted connection, using passwords with SSH still leaves the VM vulnerable to brute-force attacks. The most secure option for authenticating to an Azure Linux virtual machine over SSH is with a public-private key pair, also known as SSH keys. Learn more: [https://docs.microsoft.com/azure/virtual-machines/linux/create-ssh-keys-detailed](../../../virtual-machines/linux/create-ssh-keys-detailed.md). |AuditIfNotExists, Disabled |[3.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_LinuxNoPasswordForSSH_AINE.json) |
|[Deploy the Linux Guest Configuration extension to enable Guest Configuration assignments on Linux VMs](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F331e8ea8-378a-410f-a2e5-ae22f38bb0da) |This policy deploys the Linux Guest Configuration extension to Linux virtual machines hosted in Azure that are supported by Guest Configuration. The Linux Guest Configuration extension is a prerequisite for all Linux Guest Configuration assignments and must be deployed to machines before using any Linux Guest Configuration policy definition. For more information on Guest Configuration, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). |deployIfNotExists |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_DeployExtensionLinux_Prerequisite.json) |
|[Deploy the Windows Guest Configuration extension to enable Guest Configuration assignments on Windows VMs](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F385f5831-96d4-41db-9a3c-cd3af78aaae6) |This policy deploys the Windows Guest Configuration extension to Windows virtual machines hosted in Azure that are supported by Guest Configuration. The Windows Guest Configuration extension is a prerequisite for all Windows Guest Configuration assignments and must be deployed to machines before using any Windows Guest Configuration policy definition. For more information on Guest Configuration, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). |deployIfNotExists |[1.2.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_DeployExtensionWindows_Prerequisite.json) |
|[Establish authenticator types and processes](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F921ae4c1-507f-5ddb-8a58-cfa9b5fd96f0) |CMA_0267 - Establish authenticator types and processes |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0267.json) |
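The two `deployIfNotExists` extension policies above follow a common shape: match supported VMs, look for the extension as a related resource, and deploy it when absent. A compact, hedged sketch with the deployment template elided; names are illustrative, not the verbatim definitions:

```python
import json

# Shape sketch: match VMs, check for the extension as a related resource,
# deploy when missing. Template and role IDs elided ("...").
policy_rule = {
    "if": {"allOf": [
        {"field": "type", "equals": "Microsoft.Compute/virtualMachines"},
        {"field": "Microsoft.Compute/imagePublisher", "exists": "true"},  # illustrative guard
    ]},
    "then": {
        "effect": "deployIfNotExists",
        "details": {
            "type": "Microsoft.Compute/virtualMachines/extensions",
            "roleDefinitionIds": ["..."],  # role the remediation identity needs
            "existenceCondition": {
                "field": "Microsoft.Compute/virtualMachines/extensions/publisher",
                "equals": "Microsoft.GuestConfiguration",
            },
            "deployment": {"properties": {"mode": "incremental",
                                          "template": {"...": "..."}}},
        },
    },
}
print(json.dumps(policy_rule, indent=2))
```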
governance Fedramp Moderate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/fedramp-moderate.md
Title: Regulatory Compliance details for FedRAMP Moderate description: Details of the FedRAMP Moderate Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment.
Previously updated : 08/03/2023 Last updated : 08/25/2023
initiative definition.
|[An Azure Active Directory administrator should be provisioned for SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1f314764-cb73-4fc9-b863-8eca98ac36e9) |Audit provisioning of an Azure Active Directory administrator for your SQL server to enable Azure AD authentication. Azure AD authentication enables simplified permission management and centralized identity management of database users and other Microsoft services. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SQL_DB_AuditServerADAdmins_Audit.json) |
|[App Service apps should use managed identity](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2b9ad585-36bc-4615-b300-fd4435808332) |Use a managed identity for enhanced authentication security. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_UseManagedIdentity_WebApp_Audit.json) |
|[Audit Linux machines that have accounts without passwords](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff6ec09a3-78bf-4f8f-99dc-6c77182d0f99) |Requires that prerequisites are deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). Machines are non-compliant if Linux machines have accounts without passwords. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_LinuxPassword232_AINE.json) |
-|[Authentication to Linux machines should require SSH keys](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F630c64f9-8b6b-4c64-b511-6544ceff6fd6) |Although SSH itself provides an encrypted connection, using passwords with SSH still leaves the VM vulnerable to brute-force attacks. The most secure option for authenticating to an Azure Linux virtual machine over SSH is with a public-private key pair, also known as SSH keys. Learn more: [https://docs.microsoft.com/azure/virtual-machines/linux/create-ssh-keys-detailed](../../../virtual-machines/linux/create-ssh-keys-detailed.md). |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_LinuxNoPasswordForSSH_AINE.json) |
+|[Authentication to Linux machines should require SSH keys](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F630c64f9-8b6b-4c64-b511-6544ceff6fd6) |Although SSH itself provides an encrypted connection, using passwords with SSH still leaves the VM vulnerable to brute-force attacks. The most secure option for authenticating to an Azure Linux virtual machine over SSH is with a public-private key pair, also known as SSH keys. Learn more: [https://docs.microsoft.com/azure/virtual-machines/linux/create-ssh-keys-detailed](../../../virtual-machines/linux/create-ssh-keys-detailed.md). |AuditIfNotExists, Disabled |[3.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_LinuxNoPasswordForSSH_AINE.json) |
|[Authorize access to security functions and information](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faeed863a-0f56-429f-945d-8bb66bd06841) |CMA_0022 - Authorize access to security functions and information |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0022.json) |
|[Authorize and manage access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F50e9324a-7410-0539-0662-2c1e775538b7) |CMA_0023 - Authorize and manage access |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0023.json) |
|[Cognitive Services accounts should have local authentication methods disabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F71ef260a-8f18-47b7-abcb-62d0673d94dc) |Disabling local authentication methods improves security by ensuring that Cognitive Services accounts require Azure Active Directory identities exclusively for authentication. Learn more at: [https://aka.ms/cs/auth](https://aka.ms/cs/auth). |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisableLocalAuth_Audit.json) |
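The SSH-keys policy in the diff above effectively requires that password authentication be disabled on the Linux VM; in the ARM model that is `osProfile.linuxConfiguration.disablePasswordAuthentication`. A sketch, with the property path per the ARM schema (the built-in's exact rule may differ):

```python
# Sketch only: a Linux VM satisfies "require SSH keys" when password logins
# are turned off in its OS profile.
def ssh_keys_required(vm: dict) -> bool:
    linux = (vm.get("properties", {})
               .get("osProfile", {})
               .get("linuxConfiguration", {}))
    return linux.get("disablePasswordAuthentication") is True

vm = {"properties": {"osProfile": {"linuxConfiguration":
                                   {"disablePasswordAuthentication": True}}}}
print(ssh_keys_required(vm))  # True -> SSH key authentication enforced
```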
initiative definition.
|[Kubernetes cluster services should listen only on allowed ports](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F233a2a17-77ca-4fb1-9b6b-69223d272a44) |Restrict services to listen only on allowed ports to secure access to the Kubernetes cluster. This policy is generally available for Kubernetes Service (AKS), and preview for Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](https://aka.ms/kubepolicydoc). |audit, Audit, deny, Deny, disabled, Disabled |[8.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/ServiceAllowedPorts.json) |
|[Kubernetes cluster should not allow privileged containers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F95edb821-ddaf-4404-9732-666045e056b4) |Do not allow privileged containers creation in a Kubernetes cluster. This recommendation is part of CIS 5.2.1 which is intended to improve the security of your Kubernetes environments. This policy is generally available for Kubernetes Service (AKS), and preview for Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](https://aka.ms/kubepolicydoc). |audit, Audit, deny, Deny, disabled, Disabled |[9.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/ContainerNoPrivilege.json) |
|[Kubernetes clusters should not allow container privilege escalation](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1c6e92c9-99f0-4e55-9cf2-0c234dc48f99) |Do not allow containers to run with privilege escalation to root in a Kubernetes cluster. This recommendation is part of CIS 5.2.5 which is intended to improve the security of your Kubernetes environments. This policy is generally available for Kubernetes Service (AKS), and preview for Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](https://aka.ms/kubepolicydoc). |audit, Audit, deny, Deny, disabled, Disabled |[7.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/ContainerNoPrivilegeEscalation.json) |
-|[Linux machines should meet requirements for the Azure compute security baseline](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffc9b3da7-8347-4380-8e70-0a0361d8dedd) |Requires that prerequisites are deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). Machines are non-compliant if the machine is not configured correctly for one of the recommendations in the Azure compute security baseline. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_AzureLinuxBaseline_AINE.json) |
+|[Linux machines should meet requirements for the Azure compute security baseline](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffc9b3da7-8347-4380-8e70-0a0361d8dedd) |Requires that prerequisites are deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). Machines are non-compliant if the machine is not configured correctly for one of the recommendations in the Azure compute security baseline. |AuditIfNotExists, Disabled |[2.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_AzureLinuxBaseline_AINE.json) |
|[Remediate information system flaws](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbe38a620-000b-21cf-3cb3-ea151b704c3b) |CMA_0427 - Remediate information system flaws |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0427.json) |
|[Windows machines should meet requirements of the Azure compute security baseline](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F72650e9f-97bc-4b2a-ab5f-9781a9fcecbc) |Requires that prerequisites are deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). Machines are non-compliant if the machine is not configured correctly for one of the recommendations in the Azure compute security baseline. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_AzureWindowsBaseline_AINE.json) |
initiative definition.
|[Add system-assigned managed identity to enable Guest Configuration assignments on VMs with a user-assigned identity](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F497dff13-db2a-4c0f-8603-28fa3b331ab6) |This policy adds a system-assigned managed identity to virtual machines hosted in Azure that are supported by Guest Configuration and have at least one user-assigned identity but do not have a system-assigned managed identity. A system-assigned managed identity is a prerequisite for all Guest Configuration assignments and must be added to machines before using any Guest Configuration policy definitions. For more information on Guest Configuration, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). |modify |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_AddSystemIdentityWhenUser_Prerequisite.json) |
|[Audit Linux machines that do not have the passwd file permissions set to 0644](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe6955644-301c-44b5-a4c4-528577de6861) |Requires that prerequisites are deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). Machines are non-compliant if Linux machines do not have the passwd file permissions set to 0644. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_LinuxPassword121_AINE.json) |
|[Audit Windows machines that do not store passwords using reversible encryption](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fda0f98fe-a24b-4ad5-af69-bd0400233661) |Requires that prerequisites are deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). Machines are non-compliant if Windows machines do not store passwords using reversible encryption. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_WindowsPasswordEncryption_AINE.json) |
-|[Authentication to Linux machines should require SSH keys](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F630c64f9-8b6b-4c64-b511-6544ceff6fd6) |Although SSH itself provides an encrypted connection, using passwords with SSH still leaves the VM vulnerable to brute-force attacks. The most secure option for authenticating to an Azure Linux virtual machine over SSH is with a public-private key pair, also known as SSH keys. Learn more: [https://docs.microsoft.com/azure/virtual-machines/linux/create-ssh-keys-detailed](../../../virtual-machines/linux/create-ssh-keys-detailed.md). |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_LinuxNoPasswordForSSH_AINE.json) |
+|[Authentication to Linux machines should require SSH keys](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F630c64f9-8b6b-4c64-b511-6544ceff6fd6) |Although SSH itself provides an encrypted connection, using passwords with SSH still leaves the VM vulnerable to brute-force attacks. The most secure option for authenticating to an Azure Linux virtual machine over SSH is with a public-private key pair, also known as SSH keys. Learn more: [https://docs.microsoft.com/azure/virtual-machines/linux/create-ssh-keys-detailed](../../../virtual-machines/linux/create-ssh-keys-detailed.md). |AuditIfNotExists, Disabled |[3.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_LinuxNoPasswordForSSH_AINE.json) |
|[Deploy the Linux Guest Configuration extension to enable Guest Configuration assignments on Linux VMs](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F331e8ea8-378a-410f-a2e5-ae22f38bb0da) |This policy deploys the Linux Guest Configuration extension to Linux virtual machines hosted in Azure that are supported by Guest Configuration. The Linux Guest Configuration extension is a prerequisite for all Linux Guest Configuration assignments and must be deployed to machines before using any Linux Guest Configuration policy definition. For more information on Guest Configuration, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). |deployIfNotExists |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_DeployExtensionLinux_Prerequisite.json) | |[Deploy the Windows Guest Configuration extension to enable Guest Configuration assignments on Windows VMs](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F385f5831-96d4-41db-9a3c-cd3af78aaae6) |This policy deploys the Windows Guest Configuration extension to Windows virtual machines hosted in Azure that are supported by Guest Configuration. The Windows Guest Configuration extension is a prerequisite for all Windows Guest Configuration assignments and must be deployed to machines before using any Windows Guest Configuration policy definition. For more information on Guest Configuration, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). |deployIfNotExists |[1.2.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_DeployExtensionWindows_Prerequisite.json) | |[Establish authenticator types and processes](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F921ae4c1-507f-5ddb-8a58-cfa9b5fd96f0) |CMA_0267 - Establish authenticator types and processes |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0267.json) |
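The two Guest Configuration prerequisites above (a system-assigned managed identity and the Guest Configuration extension) can also be satisfied imperatively. A minimal Azure CLI sketch, assuming a hypothetical resource group `rg-demo` and Linux VM `vm-demo`; verify the extension publisher and name against [https://aka.ms/gcpol](https://aka.ms/gcpol) before relying on them:

```bash
# Prerequisite 1: add a system-assigned managed identity to the VM.
az vm identity assign --resource-group rg-demo --name vm-demo

# Prerequisite 2: deploy the Guest Configuration extension (Linux shown).
az vm extension set \
  --resource-group rg-demo \
  --vm-name vm-demo \
  --publisher Microsoft.GuestConfiguration \
  --name ConfigurationForLinux

# On the machine itself, the passwd rule can be spot-checked with:
stat -c '%a' /etc/passwd   # expect 644
```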
governance Gov Azure Security Benchmark https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-azure-security-benchmark.md
Title: Regulatory Compliance details for Microsoft cloud security benchmark (Azure Government) description: Details of the Microsoft cloud security benchmark (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/03/2023 Last updated : 08/25/2023
initiative definition.
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | ||||| |[Audit usage of custom RBAC roles](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa451c1ef-c6ca-483d-87ed-f49761e3ffb5) |Audit built-in roles such as 'Owner, Contributor, Reader' instead of custom RBAC roles, which are error prone. Using custom roles is treated as an exception and requires a rigorous review and threat modeling |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/General/Subscription_AuditCustomRBACRoles_Audit.json) |
-|[Role-Based Access Control (RBAC) should be used on Kubernetes Services](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac4a19c2-fa67-49b4-8ae5-0b2e78c49457) |To provide granular filtering on the actions that users can perform, use Role-Based Access Control (RBAC) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies. |Audit, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Security%20Center/ASC_EnableRBAC_KubernetesService_Audit.json) |
+|[Azure Role-Based Access Control (RBAC) should be used on Kubernetes Services](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac4a19c2-fa67-49b4-8ae5-0b2e78c49457) |To provide granular filtering on the actions that users can perform, use Azure Role-Based Access Control (RBAC) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies. |Audit, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Security%20Center/ASC_EnableRBAC_KubernetesService_Audit.json) |
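As a quick spot check outside of Azure Policy, the RBAC setting audited above is visible on the managed cluster resource. A hedged Azure CLI sketch; the cluster and resource-group names are hypothetical:

```bash
# Report whether Kubernetes RBAC is enabled on an existing cluster.
az aks show --resource-group rg-demo --name aks-demo \
  --query enableRbac --output tsv

# RBAC is on by default at creation; Azure RBAC for Kubernetes
# authorization additionally requires AAD integration.
az aks create --resource-group rg-demo --name aks-demo \
  --enable-aad --enable-azure-rbac
```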
## Data Protection
initiative definition.
|[App Service apps should have 'Client Certificates (Incoming client certificates)' enabled](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5bb220d9-2698-4ee4-8404-b9c30c9df609) |Client certificates allow for the app to request a certificate for incoming requests. Only clients that have a valid certificate will be able to reach the app. |Audit, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_Webapp_Audit_ClientCert.json) | |[App Service apps should have remote debugging turned off](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fcb510bfd-1cba-4d9f-a230-cb0976f4bb71) |Remote debugging requires inbound ports to be opened on an App Service app. Remote debugging should be turned off. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_DisableRemoteDebugging_WebApp_Audit.json) | |[App Service apps should not have CORS configured to allow every resource to access your apps](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5744710e-cc2f-4ee8-8809-3b11e89f4bc9) |Cross-Origin Resource Sharing (CORS) should not allow all domains to access your app. Allow only required domains to interact with your app. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_RestrictCORSAccess_WebApp_Audit.json) |
-|[Azure Machine Learning compute instances should be recreated to get the latest software updates](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff110a506-2dcb-422e-bcea-d533fc8c35e2) |Ensure Azure Machine Learning compute instances run on the latest available operating system. Security is improved and vulnerabilities reduced by running with the latest security patches. For more information, visit [https://aka.ms/azureml-ci-updates/](https://aka.ms/azureml-ci-updates/). |[parameters('effects')] |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Machine%20Learning/MachineLearningServices_ComputeInstanceUpdates_Audit.json) |
+|[Azure Machine Learning compute instances should be recreated to get the latest software updates](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff110a506-2dcb-422e-bcea-d533fc8c35e2) |Ensure Azure Machine Learning compute instances run on the latest available operating system. Security is improved and vulnerabilities reduced by running with the latest security patches. For more information, visit [https://aka.ms/azureml-ci-updates/](https://aka.ms/azureml-ci-updates/). |[parameters('effects')] |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Machine%20Learning/MachineLearningServices_ComputeInstanceUpdates_Audit.json) |
|[Azure Policy Add-on for Kubernetes service (AKS) should be installed and enabled on your clusters](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0a15ec92-a229-4763-bb14-0ea34a568f8d) |Azure Policy Add-on for Kubernetes service (AKS) extends Gatekeeper v3, an admission controller webhook for Open Policy Agent (OPA), to apply at-scale enforcements and safeguards on your clusters in a centralized, consistent manner. |Audit, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/AKS_AzurePolicyAddOn_Audit.json) | |[Function apps should have 'Client Certificates (Incoming client certificates)' enabled](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Feaebaea7-8013-4ceb-9d14-7eb32271373c) |Client certificates allow for the app to request a certificate for incoming requests. Only clients with valid certificates will be able to reach the app. |Audit, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_FunctionApp_Audit_ClientCert.json) | |[Function apps should have remote debugging turned off](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0e60b895-3786-45da-8377-9c6b4b6ac5f9) |Remote debugging requires inbound ports to be opened on Function apps. Remote debugging should be turned off. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_DisableRemoteDebugging_FunctionApp_Audit.json) |
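The App Service rows above (remote debugging, client certificates, CORS) map to app-level settings that can be remediated directly. A minimal sketch with hypothetical resource names; the `az webapp cors remove` step only applies if a wildcard origin was previously configured:

```bash
# Turn off remote debugging.
az webapp config set --resource-group rg-demo --name app-demo \
  --remote-debugging-enabled false

# Require incoming client certificates.
az webapp update --resource-group rg-demo --name app-demo \
  --set clientCertEnabled=true

# Drop a wildcard CORS origin if one was configured.
az webapp cors remove --resource-group rg-demo --name app-demo \
  --allowed-origins "*"
```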
governance Gov Cis Azure 1 1 0 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-cis-azure-1-1-0.md
Title: Regulatory Compliance details for CIS Microsoft Azure Foundations Benchmark 1.1.0 (Azure Government) description: Details of the CIS Microsoft Azure Foundations Benchmark 1.1.0 (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/03/2023 Last updated : 08/25/2023
This built-in initiative is deployed as part of the
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | |||||
-|[Role-Based Access Control (RBAC) should be used on Kubernetes Services](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac4a19c2-fa67-49b4-8ae5-0b2e78c49457) |To provide granular filtering on the actions that users can perform, use Role-Based Access Control (RBAC) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies. |Audit, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Security%20Center/ASC_EnableRBAC_KubernetesService_Audit.json) |
+|[Azure Role-Based Access Control (RBAC) should be used on Kubernetes Services](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac4a19c2-fa67-49b4-8ae5-0b2e78c49457) |To provide granular filtering on the actions that users can perform, use Azure Role-Based Access Control (RBAC) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies. |Audit, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Security%20Center/ASC_EnableRBAC_KubernetesService_Audit.json) |
## 9 AppService
governance Gov Cis Azure 1 3 0 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-cis-azure-1-3-0.md
Title: Regulatory Compliance details for CIS Microsoft Azure Foundations Benchmark 1.3.0 (Azure Government) description: Details of the CIS Microsoft Azure Foundations Benchmark 1.3.0 (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/03/2023 Last updated : 08/25/2023
initiative definition.
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | |||||
-|[Role-Based Access Control (RBAC) should be used on Kubernetes Services](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac4a19c2-fa67-49b4-8ae5-0b2e78c49457) |To provide granular filtering on the actions that users can perform, use Role-Based Access Control (RBAC) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies. |Audit, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Security%20Center/ASC_EnableRBAC_KubernetesService_Audit.json) |
+|[Azure Role-Based Access Control (RBAC) should be used on Kubernetes Services](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac4a19c2-fa67-49b4-8ae5-0b2e78c49457) |To provide granular filtering on the actions that users can perform, use Azure Role-Based Access Control (RBAC) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies. |Audit, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Security%20Center/ASC_EnableRBAC_KubernetesService_Audit.json) |
## 9 AppService
governance Gov Cmmc L3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-cmmc-l3.md
Title: Regulatory Compliance details for CMMC Level 3 (Azure Government) description: Details of the CMMC Level 3 (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/03/2023 Last updated : 08/25/2023
This built-in initiative is deployed as part of the
||||| |[App Service apps should have remote debugging turned off](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fcb510bfd-1cba-4d9f-a230-cb0976f4bb71) |Remote debugging requires inbound ports to be opened on an App Service app. Remote debugging should be turned off. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_DisableRemoteDebugging_WebApp_Audit.json) | |[App Service apps should not have CORS configured to allow every resource to access your apps](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5744710e-cc2f-4ee8-8809-3b11e89f4bc9) |Cross-Origin Resource Sharing (CORS) should not allow all domains to access your app. Allow only required domains to interact with your app. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_RestrictCORSAccess_WebApp_Audit.json) |
+|[Azure Role-Based Access Control (RBAC) should be used on Kubernetes Services](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac4a19c2-fa67-49b4-8ae5-0b2e78c49457) |To provide granular filtering on the actions that users can perform, use Azure Role-Based Access Control (RBAC) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies. |Audit, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Security%20Center/ASC_EnableRBAC_KubernetesService_Audit.json) |
|[Blocked accounts with owner permissions on Azure resources should be removed](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0cfea604-3201-4e14-88fc-fae4c427a6c5) |Deprecated accounts with owner permissions should be removed from your subscription. Deprecated accounts are accounts that have been blocked from signing in. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Security%20Center/ASC_RemoveBlockedAccountsWithOwnerPermissions_Audit.json) | |[Blocked accounts with read and write permissions on Azure resources should be removed](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F8d7e1fde-fe26-4b5f-8108-f8e432cbc2be) |Deprecated accounts should be removed from your subscriptions. Deprecated accounts are accounts that have been blocked from signing in. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Security%20Center/ASC_RemoveBlockedAccountsWithReadWritePermissions_Audit.json) | |[Cognitive Services accounts should disable public network access](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0725b4dd-7e76-479c-a735-68e7ee23d5ca) |To improve the security of Cognitive Services accounts, ensure that it isn't exposed to the public internet and can only be accessed from a private endpoint. Disable the public network access property as described in [https://go.microsoft.com/fwlink/?linkid=2129800](https://go.microsoft.com/fwlink/?linkid=2129800). This option disables access from any public address space outside the Azure IP range, and denies all logins that match IP or virtual network-based firewall rules. This reduces data leakage risks. |Audit, Deny, Disabled |[3.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisablePublicNetworkAccess_Audit.json) |
This built-in initiative is deployed as part of the
|[Guest accounts with write permissions on Azure resources should be removed](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F94e1c2ac-cbbe-4cac-a2b5-389c812dee87) |External accounts with write privileges should be removed from your subscription in order to prevent unmonitored access. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Security%20Center/ASC_RemoveGuestAccountsWithWritePermissions_Audit.json) | |[Management ports of virtual machines should be protected with just-in-time network access control](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb0f33259-77d7-4c9e-aac6-3aabcfae693c) |Possible network Just In Time (JIT) access will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Security%20Center/ASC_JITNetworkAccess_Audit.json) | |[Public network access on Azure SQL Database should be disabled](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1b8ca024-1d5c-4dec-8995-b1a932b41780) |Disabling the public network access property improves security by ensuring your Azure SQL Database can only be accessed from a private endpoint. This configuration denies all logins that match IP or virtual network based firewall rules. |Audit, Deny, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServer_PublicNetworkAccess_Audit.json) |
-|[Role-Based Access Control (RBAC) should be used on Kubernetes Services](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac4a19c2-fa67-49b4-8ae5-0b2e78c49457) |To provide granular filtering on the actions that users can perform, use Role-Based Access Control (RBAC) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies. |Audit, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Security%20Center/ASC_EnableRBAC_KubernetesService_Audit.json) |
|[Storage accounts should allow access from trusted Microsoft services](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc9d007d0-c057-4772-b18c-01e546713bcd) |Some Microsoft services that interact with storage accounts operate from networks that can't be granted access through network rules. To help this type of service work as intended, allow the set of trusted Microsoft services to bypass the network rules. These services will then use strong authentication to access the storage account. |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/StorageAccess_TrustedMicrosoftServices_Audit.json) | |[Storage accounts should restrict network access](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F34c877ad-507e-4c82-993e-3452a6e0ad3c) |Network access to storage accounts should be restricted. Configure network rules so only applications from allowed networks can access the storage account. To allow connections from specific internet or on-premises clients, access can be granted to traffic from specific Azure virtual networks or to public internet IP address ranges |Audit, Deny, Disabled |[1.1.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/Storage_NetworkAcls_Audit.json) |
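Several of the network rows above have one-line CLI remediations. A sketch under the assumption of hypothetical resource names (`rg-demo`, `storagedemo`, `sqlsrv-demo`):

```bash
# Deny storage traffic by default, but let trusted Microsoft services bypass.
az storage account update --resource-group rg-demo --name storagedemo \
  --default-action Deny --bypass AzureServices

# Re-admit only the networks you actually need (documentation IP range shown).
az storage account network-rule add --resource-group rg-demo \
  --account-name storagedemo --ip-address 203.0.113.0/24

# Disable public network access on an Azure SQL logical server.
az sql server update --resource-group rg-demo --name sqlsrv-demo \
  --enable-public-network false
```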
This built-in initiative is deployed as part of the
||||| |[App Service apps should not have CORS configured to allow every resource to access your apps](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5744710e-cc2f-4ee8-8809-3b11e89f4bc9) |Cross-Origin Resource Sharing (CORS) should not allow all domains to access your app. Allow only required domains to interact with your app. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_RestrictCORSAccess_WebApp_Audit.json) | |[App Service apps should only be accessible over HTTPS](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa4af4a39-4135-47fb-b175-47fbdf85311d) |Use of HTTPS ensures server/service authentication and protects data in transit from network layer eavesdropping attacks. |Audit, Disabled, Deny |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppServiceWebapp_AuditHTTP_Audit.json) |
+|[Azure Role-Based Access Control (RBAC) should be used on Kubernetes Services](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac4a19c2-fa67-49b4-8ae5-0b2e78c49457) |To provide granular filtering on the actions that users can perform, use Azure Role-Based Access Control (RBAC) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies. |Audit, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Security%20Center/ASC_EnableRBAC_KubernetesService_Audit.json) |
|[Cognitive Services accounts should disable public network access](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0725b4dd-7e76-479c-a735-68e7ee23d5ca) |To improve the security of Cognitive Services accounts, ensure that it isn't exposed to the public internet and can only be accessed from a private endpoint. Disable the public network access property as described in [https://go.microsoft.com/fwlink/?linkid=2129800](https://go.microsoft.com/fwlink/?linkid=2129800). This option disables access from any public address space outside the Azure IP range, and denies all logins that match IP or virtual network-based firewall rules. This reduces data leakage risks. |Audit, Deny, Disabled |[3.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisablePublicNetworkAccess_Audit.json) | |[Cognitive Services accounts should restrict network access](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F037eea7a-bd0a-46c5-9a66-03aea78705d3) |Network access to Cognitive Services accounts should be restricted. Configure network rules so only applications from allowed networks can access the Cognitive Services account. To allow connections from specific internet or on-premises clients, access can be granted to traffic from specific Azure virtual networks or to public internet IP address ranges. |Audit, Deny, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_NetworkAcls_Audit.json) | |[Container registries should not allow unrestricted network access](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd0793b48-0edc-4296-a390-4c75d1bdfd71) |Azure container registries by default accept connections over the internet from hosts on any network. To protect your registries from potential threats, allow access from only specific private endpoints, public IP addresses or address ranges. If your registry doesn't have network rules configured, it will appear in the unhealthy resources. Learn more about Container Registry network rules here: [https://aka.ms/acr/privatelink,](https://aka.ms/acr/privatelink,) [https://aka.ms/acr/portal/public-network](https://aka.ms/acr/portal/public-network) and [https://aka.ms/acr/vnet](https://aka.ms/acr/vnet). |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Container%20Registry/ACR_NetworkRulesExist_AuditDeny.json) |
This built-in initiative is deployed as part of the
|[Management ports of virtual machines should be protected with just-in-time network access control](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb0f33259-77d7-4c9e-aac6-3aabcfae693c) |Possible network Just In Time (JIT) access will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Security%20Center/ASC_JITNetworkAccess_Audit.json) | |[Only secure connections to your Azure Cache for Redis should be enabled](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F22bee202-a82f-4305-9a2a-6d7f44d4dedb) |Audit enabling of only connections via SSL to Azure Cache for Redis. Use of secure connections ensures authentication between the server and the service and protects data in transit from network layer attacks such as man-in-the-middle, eavesdropping, and session-hijacking |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cache/RedisCache_AuditSSLPort_Audit.json) | |[Public network access on Azure SQL Database should be disabled](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1b8ca024-1d5c-4dec-8995-b1a932b41780) |Disabling the public network access property improves security by ensuring your Azure SQL Database can only be accessed from a private endpoint. This configuration denies all logins that match IP or virtual network based firewall rules. |Audit, Deny, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServer_PublicNetworkAccess_Audit.json) |
-|[Role-Based Access Control (RBAC) should be used on Kubernetes Services](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac4a19c2-fa67-49b4-8ae5-0b2e78c49457) |To provide granular filtering on the actions that users can perform, use Role-Based Access Control (RBAC) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies. |Audit, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Security%20Center/ASC_EnableRBAC_KubernetesService_Audit.json) |
|[Secure transfer to storage accounts should be enabled](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F404c3081-a854-4457-ae30-26a93ef643f9) |Audit requirement of Secure transfer in your storage account. Secure transfer is an option that forces your storage account to accept requests only from secure connections (HTTPS). Use of HTTPS ensures authentication between the server and the service and protects data in transit from network layer attacks such as man-in-the-middle, eavesdropping, and session-hijacking |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/Storage_AuditForHTTPSEnabled_Audit.json) | |[Storage accounts should allow access from trusted Microsoft services](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc9d007d0-c057-4772-b18c-01e546713bcd) |Some Microsoft services that interact with storage accounts operate from networks that can't be granted access through network rules. To help this type of service work as intended, allow the set of trusted Microsoft services to bypass the network rules. These services will then use strong authentication to access the storage account. |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/StorageAccess_TrustedMicrosoftServices_Audit.json) | |[Storage accounts should restrict network access](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F34c877ad-507e-4c82-993e-3452a6e0ad3c) |Network access to storage accounts should be restricted. Configure network rules so only applications from allowed networks can access the storage account. To allow connections from specific internet or on-premises clients, access can be granted to traffic from specific Azure virtual networks or to public internet IP address ranges |Audit, Deny, Disabled |[1.1.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/Storage_NetworkAcls_Audit.json) |
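The secure-transfer row above corresponds to the storage account's HTTPS-only flag, and the Redis row to its non-SSL port setting. A hedged sketch with hypothetical names:

```bash
# Enforce secure transfer (HTTPS-only) on the storage account.
az storage account update --resource-group rg-demo --name storagedemo \
  --https-only true

# Ensure the Redis non-SSL port stays disabled.
az redis update --resource-group rg-demo --name redis-demo \
  --set enableNonSslPort=false
```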
This built-in initiative is deployed as part of the
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | |||||
+|[Azure Role-Based Access Control (RBAC) should be used on Kubernetes Services](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac4a19c2-fa67-49b4-8ae5-0b2e78c49457) |To provide granular filtering on the actions that users can perform, use Azure Role-Based Access Control (RBAC) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies. |Audit, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Security%20Center/ASC_EnableRBAC_KubernetesService_Audit.json) |
|[Guest accounts with read permissions on Azure resources should be removed](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe9ac8f8e-ce22-4355-8f04-99b911d6be52) |External accounts with read privileges should be removed from your subscription in order to prevent unmonitored access. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Security%20Center/ASC_RemoveGuestAccountsWithReadPermissions_Audit.json) | |[Guest accounts with write permissions on Azure resources should be removed](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F94e1c2ac-cbbe-4cac-a2b5-389c812dee87) |External accounts with write privileges should be removed from your subscription in order to prevent unmonitored access. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Security%20Center/ASC_RemoveGuestAccountsWithWritePermissions_Audit.json) | |[Management ports of virtual machines should be protected with just-in-time network access control](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb0f33259-77d7-4c9e-aac6-3aabcfae693c) |Possible network Just In Time (JIT) access will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Security%20Center/ASC_JITNetworkAccess_Audit.json) |
-|[Role-Based Access Control (RBAC) should be used on Kubernetes Services](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac4a19c2-fa67-49b4-8ae5-0b2e78c49457) |To provide granular filtering on the actions that users can perform, use Role-Based Access Control (RBAC) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies. |Audit, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Security%20Center/ASC_EnableRBAC_KubernetesService_Audit.json) |
### Monitor and control remote access sessions.
This built-in initiative is deployed as part of the
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | |||||
+|[Azure Role-Based Access Control (RBAC) should be used on Kubernetes Services](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac4a19c2-fa67-49b4-8ae5-0b2e78c49457) |To provide granular filtering on the actions that users can perform, use Azure Role-Based Access Control (RBAC) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies. |Audit, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Security%20Center/ASC_EnableRBAC_KubernetesService_Audit.json) |
|[Cognitive Services accounts should disable public network access](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0725b4dd-7e76-479c-a735-68e7ee23d5ca) |To improve the security of Cognitive Services accounts, ensure that it isn't exposed to the public internet and can only be accessed from a private endpoint. Disable the public network access property as described in [https://go.microsoft.com/fwlink/?linkid=2129800](https://go.microsoft.com/fwlink/?linkid=2129800). This option disables access from any public address space outside the Azure IP range, and denies all logins that match IP or virtual network-based firewall rules. This reduces data leakage risks. |Audit, Deny, Disabled |[3.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisablePublicNetworkAccess_Audit.json) | |[Cognitive Services accounts should restrict network access](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F037eea7a-bd0a-46c5-9a66-03aea78705d3) |Network access to Cognitive Services accounts should be restricted. Configure network rules so only applications from allowed networks can access the Cognitive Services account. To allow connections from specific internet or on-premises clients, access can be granted to traffic from specific Azure virtual networks or to public internet IP address ranges. |Audit, Deny, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_NetworkAcls_Audit.json) | |[Container registries should not allow unrestricted network access](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd0793b48-0edc-4296-a390-4c75d1bdfd71) |Azure container registries by default accept connections over the internet from hosts on any network. To protect your registries from potential threats, allow access from only specific private endpoints, public IP addresses or address ranges. If your registry doesn't have network rules configured, it will appear in the unhealthy resources. Learn more about Container Registry network rules here: [https://aka.ms/acr/privatelink,](https://aka.ms/acr/privatelink,) [https://aka.ms/acr/portal/public-network](https://aka.ms/acr/portal/public-network) and [https://aka.ms/acr/vnet](https://aka.ms/acr/vnet). |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Container%20Registry/ACR_NetworkRulesExist_AuditDeny.json) | |[Function apps should not have CORS configured to allow every resource to access your apps](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0820b7b9-23aa-4725-a1ce-ae4558f718e5) |Cross-Origin Resource Sharing (CORS) should not allow all domains to access your Function app. Allow only required domains to interact with your Function app. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_RestrictCORSAccess_FuntionApp_Audit.json) | |[Internet-facing virtual machines should be protected with network security groups](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff6de0be7-9a8a-4b8a-b349-43cf02d22f7c) |Protect your virtual machines from potential threats by restricting access to them with network security groups (NSG). Learn more about controlling traffic with NSGs at [https://aka.ms/nsg-doc](https://aka.ms/nsg-doc) |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Security%20Center/ASC_NetworkSecurityGroupsOnInternetFacingVirtualMachines_Audit.json) | |[Public network access on Azure SQL Database should be disabled](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1b8ca024-1d5c-4dec-8995-b1a932b41780) |Disabling the public network access property improves security by ensuring your Azure SQL Database can only be accessed from a private endpoint. This configuration denies all logins that match IP or virtual network based firewall rules. |Audit, Deny, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServer_PublicNetworkAccess_Audit.json) |
-|[Role-Based Access Control (RBAC) should be used on Kubernetes Services](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac4a19c2-fa67-49b4-8ae5-0b2e78c49457) |To provide granular filtering on the actions that users can perform, use Role-Based Access Control (RBAC) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies. |Audit, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Security%20Center/ASC_EnableRBAC_KubernetesService_Audit.json) |
|[Storage accounts should restrict network access](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F34c877ad-507e-4c82-993e-3452a6e0ad3c) |Network access to storage accounts should be restricted. Configure network rules so only applications from allowed networks can access the storage account. To allow connections from specific internet or on-premises clients, access can be granted to traffic from specific Azure virtual networks or to public internet IP address ranges |Audit, Deny, Disabled |[1.1.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/Storage_NetworkAcls_Audit.json) |
### Separate the duties of individuals to reduce the risk of malevolent activity without collusion.
This built-in initiative is deployed as part of the
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | |||||
-|[Role-Based Access Control (RBAC) should be used on Kubernetes Services](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac4a19c2-fa67-49b4-8ae5-0b2e78c49457) |To provide granular filtering on the actions that users can perform, use Role-Based Access Control (RBAC) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies. |Audit, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Security%20Center/ASC_EnableRBAC_KubernetesService_Audit.json) |
+|[Azure Role-Based Access Control (RBAC) should be used on Kubernetes Services](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac4a19c2-fa67-49b4-8ae5-0b2e78c49457) |To provide granular filtering on the actions that users can perform, use Azure Role-Based Access Control (RBAC) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies. |Audit, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Security%20Center/ASC_EnableRBAC_KubernetesService_Audit.json) |
### Control and monitor user-installed software.
governance Gov Fedramp High https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-fedramp-high.md
Title: Regulatory Compliance details for FedRAMP High (Azure Government) description: Details of the FedRAMP High (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/03/2023 Last updated : 08/25/2023
governance Gov Fedramp Moderate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-fedramp-moderate.md
Title: Regulatory Compliance details for FedRAMP Moderate (Azure Government) description: Details of the FedRAMP Moderate (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/03/2023 Last updated : 08/25/2023
governance Gov Irs 1075 Sept2016 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-irs-1075-sept2016.md
Title: Regulatory Compliance details for IRS 1075 September 2016 (Azure Government) description: Details of the IRS 1075 September 2016 (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/03/2023 Last updated : 08/25/2023
governance Gov Iso 27001 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-iso-27001.md
Title: Regulatory Compliance details for ISO 27001:2013 (Azure Government) description: Details of the ISO 27001:2013 (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/03/2023 Last updated : 08/25/2023
governance Gov Nist Sp 800 53 R5 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-nist-sp-800-53-r5.md
Title: Regulatory Compliance details for NIST SP 800-53 Rev. 5 (Azure Government) description: Details of the NIST SP 800-53 Rev. 5 (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/03/2023 Last updated : 08/25/2023
initiative definition.
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | |||||
-|[Role-Based Access Control (RBAC) should be used on Kubernetes Services](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac4a19c2-fa67-49b4-8ae5-0b2e78c49457) |To provide granular filtering on the actions that users can perform, use Role-Based Access Control (RBAC) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies. |Audit, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Security%20Center/ASC_EnableRBAC_KubernetesService_Audit.json) |
+|[Azure Role-Based Access Control (RBAC) should be used on Kubernetes Services](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac4a19c2-fa67-49b4-8ae5-0b2e78c49457) |To provide granular filtering on the actions that users can perform, use Azure Role-Based Access Control (RBAC) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies. |Audit, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Security%20Center/ASC_EnableRBAC_KubernetesService_Audit.json) |
### Information Flow Enforcement
governance Guest Configuration Baseline Docker https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/guest-configuration-baseline-docker.md
For more information, see [Understand the guest configuration feature of Azure P
|Ensure the default ulimit is configured appropriately<br /><sub>(2.07)</sub> |Description: If the ulimits aren't set properly, the desired resource control might not be achieved and might even make the system unusable. |Run the Docker daemon and pass `--default-ulimit` as an argument with the respective ulimits as appropriate in your environment. Alternatively, you can set a specific resource limitation on each container separately by using the `--ulimit` argument with the respective ulimits as appropriate in your environment. | |Enable user namespace support<br /><sub>(2.08)</sub> |Description: The Linux kernel user namespace support in Docker daemon provides additional security for the Docker host system. It allows a container to have a unique range of user and group IDs which are outside the traditional user and group range utilized by the host system. For example, the root user will have expected administrative privilege inside the container but can effectively be mapped to an unprivileged UID on the host system. |Consult the Docker documentation for the various ways in which this can be configured depending upon your requirements. Your steps might also vary by platform. For example, on Red Hat, sub-UID and sub-GID mapping creation does not work automatically and you might have to create your own mapping. However, the high-level steps are as below: **Step 1:** Ensure that the files `/etc/subuid` and `/etc/subgid` exist.```touch /etc/subuid /etc/subgid```**Step 2:** Start the docker daemon with the `--userns-remap` flag ```dockerd --userns-remap=default``` | |Ensure base device size isn't changed until needed<br /><sub>(2.10)</sub> |Description: Increasing the base device size allows all future images and containers to be of the new base device size; this may cause a denial of service if the file system ends up over-allocated or full. |Remove the `--storage-opt dm.basesize` flag from the dockerd start command until you need it. |
-|Ensure that authorization for Docker client commands is enabled<br /><sub>(2.11)</sub> |Description: Docker's out-of-the-box authorization model is all or nothing. Any user with permission to access the Docker daemon can run any Docker client command. The same is true for callers using Docker's remote API to contact the daemon. If you require greater access control, you can create authorization plugins and add them to your Docker daemon configuration. Using an authorization plugin, a Docker administrator can configure granular access policies for managing access to Docker daemon. Third party integrations of Docker may implement their own authorization models to require authorization with the Docker daemon outside of docker's native authorization plugin (i.e. Kubernetes, Cloud Foundry, OpenShift). |**Step 1**: Install/Create an authorization plugin. **Step 2**: Configure the authorization policy as desired. **Step 3**: Start the docker daemon as below: ```dockerd --authorization-plugin=``` |
-|Ensure centralized and remote logging is configured<br /><sub>(2.12)</sub> |Description: Centralized and remote logging ensures that all important log records are safe despite catastrophic events. Docker now supports various such logging drivers. Use the one that suits your environment the best. |**Step 1**: Set up the desired log driver by following its documentation. **Step 2**: Start the docker daemon with that logging driver. For example, ```dockerd --log-driver=syslog --log-opt syslog-address=tcp://192.xxx.xxx.xxx``` |
+|Ensure that authorization for Docker client commands is enabled<br /><sub>(2.11)</sub> |Description: Docker's out-of-the-box authorization model is all or nothing. Any user with permission to access the Docker daemon can run any Docker client command. The same is true for callers using Docker's remote API to contact the daemon. If you require greater access control, you can create authorization plugins and add them to your Docker daemon configuration. Using an authorization plugin, a Docker administrator can configure granular access policies for managing access to Docker daemon. Third party integrations of Docker may implement their own authorization models to require authorization with the Docker daemon outside of docker's native authorization plugin (i.e. Kubernetes, Cloud Foundry, OpenShift). |**Step 1:** Install/Create an authorization plugin. **Step 2:** Configure the authorization policy as desired. **Step 3:** Start the docker daemon as below: ```dockerd --authorization-plugin=``` |
+|Ensure centralized and remote logging is configured<br /><sub>(2.12)</sub> |Description: Centralized and remote logging ensures that all important log records are safe despite catastrophic events. Docker now supports various such logging drivers. Use the one that suits your environment the best. |**Step 1:** Set up the desired log driver by following its documentation. **Step 2:** Start the docker daemon with that logging driver. For example, ```dockerd --log-driver=syslog --log-opt syslog-address=tcp://192.xxx.xxx.xxx``` |
|Ensure live restore is enabled<br /><sub>(2.14)</sub> |Description: Availability is one pillar of the security triad. Setting the `--live-restore` flag in the docker daemon ensures that container execution isn't interrupted when the docker daemon isn't available. This also means that it's now easier to update and patch the docker daemon without execution downtime. |Run the Docker daemon and pass `--live-restore` as an argument. For example: ```dockerd --live-restore``` | |Ensure userland proxy is disabled<br /><sub>(2.15)</sub> |Description: Docker engine provides two mechanisms for forwarding ports from the host to containers: hairpin NAT and a userland proxy. In most circumstances, the hairpin NAT mode is preferred as it improves performance and makes use of native Linux iptables functionality instead of an additional component. Where hairpin NAT is available, the userland proxy should be disabled on startup to reduce the attack surface of the installation. |Run the Docker daemon as below: ```dockerd --userland-proxy=false``` | |Ensure experimental features are avoided in production<br /><sub>(2.17)</sub> |Description: Experimental is now a runtime docker daemon flag instead of a separate build. Passing `--experimental` as a runtime flag to the docker daemon activates experimental features. Experimental is now considered a stable release, but it includes a couple of features that might not have been tested and whose API stability isn't guaranteed. |Don't pass `--experimental` as a runtime parameter to the docker daemon. | |Ensure containers are restricted from acquiring new privileges.<br /><sub>(2.18)</sub> |Description: A process can set the `no_new_priv` bit in the kernel. It persists across fork, clone and execve. The `no_new_priv` bit ensures that the process or its child processes don't gain any additional privileges via suid or sgid bits. This way numerous dangerous operations become a lot less dangerous because there's no possibility of subverting privileged binaries. Setting this at the daemon level ensures that by default all new containers are restricted from acquiring new privileges. |Run the Docker daemon as below: ```dockerd --no-new-privileges``` |
-|Ensure that docker.service file ownership is set to root:root.<br /><sub>(3.01)</sub> |Description: `docker.service` file contains sensitive parameters that may alter the behavior of Docker daemon. Hence, it should be owned and group-owned by `root` to maintain the integrity of the file. |**Step 1**: Find out the file location: ```systemctl show -p FragmentPath docker.service``` **Step 2**: If the file does not exist, this recommendation isn't applicable. If the file exists, execute the below command with the correct file path to set the ownership and group ownership for the file to `root`. For example, ```chown root:root /usr/lib/systemd/system/docker.service``` |
-|Ensure that docker .service file permissions are set to 644 or more restrictive<br /><sub>(3.02)</sub> |Description: `docker.service` file contains sensitive parameters that may alter the behavior of Docker daemon. Hence, it shouldn't be writable by any other user other than `root` to maintain the integrity of the file. |**Step 1**: Find out the file location: ```systemctl show -p FragmentPath docker.service``` **Step 2**: If the file does not exist, this recommendation isn't applicable. If the file exists, execute the below command with the correct file path to set the file permissions to `644`. For example, ```chmod 644 /usr/lib/systemd/system/docker.service``` |
-|Ensure that docker.socket file ownership is set to root:root.<br /><sub>(3.03)</sub> |Description: `docker.socket` file contains sensitive parameters that may alter the behavior of Docker remote API. Hence, it should be owned and group-owned by `root` to maintain the integrity of the file. |**Step 1**: Find out the file location: ```systemctl show -p FragmentPath docker.socket``` **Step 2**: If the file does not exist, this recommendation isn't applicable. If the file exists, execute the below command with the correct file path to set the ownership and group ownership for the file to `root`. For example, ```chown root:root /usr/lib/systemd/system/docker.socket``` |
-|Ensure that docker.socket file permissions are set to `644` or more restrictive<br /><sub>(3.04)</sub> |Description: `docker.socket` file contains sensitive parameters that may alter the behavior of Docker daemon. Hence, it shouldn't be writable by any other user other than `root` to maintain the integrity of the file. |**Step 1**: Find out the file location: ```systemctl show -p FragmentPath docker.socket``` **Step 2**: If the file does not exist, this recommendation isn't applicable. If the file exists, execute the below command with the correct file path to set the file permissions to `644`. For example, ```chmod 644 /usr/lib/systemd/system/docker.service``` |
+|Ensure that docker.service file ownership is set to root:root.<br /><sub>(3.01)</sub> |Description: `docker.service` file contains sensitive parameters that may alter the behavior of Docker daemon. Hence, it should be owned and group-owned by `root` to maintain the integrity of the file. |**Step 1:** Find out the file location: ```systemctl show -p FragmentPath docker.service``` **Step 2:** If the file does not exist, this recommendation isn't applicable. If the file exists, execute the below command with the correct file path to set the ownership and group ownership for the file to `root`. For example, ```chown root:root /usr/lib/systemd/system/docker.service``` |
+|Ensure that docker.service file permissions are set to 644 or more restrictive<br /><sub>(3.02)</sub> |Description: `docker.service` file contains sensitive parameters that may alter the behavior of Docker daemon. Hence, it shouldn't be writable by any user other than `root` to maintain the integrity of the file. |**Step 1:** Find out the file location: ```systemctl show -p FragmentPath docker.service``` **Step 2:** If the file does not exist, this recommendation isn't applicable. If the file exists, execute the below command with the correct file path to set the file permissions to `644`. For example, ```chmod 644 /usr/lib/systemd/system/docker.service``` |
+|Ensure that docker.socket file ownership is set to root:root.<br /><sub>(3.03)</sub> |Description: `docker.socket` file contains sensitive parameters that may alter the behavior of Docker remote API. Hence, it should be owned and group-owned by `root` to maintain the integrity of the file. |**Step 1:** Find out the file location: ```systemctl show -p FragmentPath docker.socket``` **Step 2:** If the file does not exist, this recommendation isn't applicable. If the file exists, execute the below command with the correct file path to set the ownership and group ownership for the file to `root`. For example, ```chown root:root /usr/lib/systemd/system/docker.socket``` |
+|Ensure that docker.socket file permissions are set to `644` or more restrictive<br /><sub>(3.04)</sub> |Description: `docker.socket` file contains sensitive parameters that may alter the behavior of Docker daemon. Hence, it shouldn't be writable by any user other than `root` to maintain the integrity of the file. |**Step 1:** Find out the file location: ```systemctl show -p FragmentPath docker.socket``` **Step 2:** If the file does not exist, this recommendation isn't applicable. If the file exists, execute the below command with the correct file path to set the file permissions to `644`. For example, ```chmod 644 /usr/lib/systemd/system/docker.socket``` |
|Ensure that /etc/docker directory ownership is set to `root:root`.<br /><sub>(3.05)</sub> |Description: The /etc/docker directory contains certificates and keys in addition to various sensitive files. Hence, it should be owned and group-owned by `root` to maintain the integrity of the directory. | ```chown root:root /etc/docker``` This would set the ownership and group-ownership for the directory to `root`. | |Ensure that /etc/docker directory permissions are set to `755` or more restrictive<br /><sub>(3.06)</sub> |Description: The /etc/docker directory contains certificates and keys in addition to various sensitive files. Hence, it should only be writable by `root` to maintain the integrity of the directory. | ```chmod 755 /etc/docker``` This would set the permissions for the directory to `755`. | |Ensure that registry certificate file ownership is set to root:root<br /><sub>(3.07)</sub> |Description: The /etc/docker/certs.d/ directory contains Docker registry certificates. These certificate files must be owned and group-owned by `root` to maintain the integrity of the certificates. | ```chown root:root /etc/docker/certs.d/<registry-name>/*``` This would set the ownership and group-ownership for the registry certificate files to `root`. |
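Taken together, recommendations 2.11 through 2.18 above amount to a handful of daemon flags. As a minimal sketch (not a prescriptive configuration: the `authz-broker` plugin name and the syslog endpoint are illustrative placeholders), the same settings can be persisted in `/etc/docker/daemon.json` instead of being passed on the `dockerd` command line:

```sh
# Sketch: persist the hardening flags from the table above in daemon.json.
# The authorization plugin name and syslog address are placeholders; merge
# these keys into any existing daemon.json rather than overwriting it.
cat <<'EOF' | sudo tee /etc/docker/daemon.json
{
  "authorization-plugins": ["authz-broker"],
  "log-driver": "syslog",
  "log-opts": { "syslog-address": "tcp://192.0.2.10:514" },
  "live-restore": true,
  "userland-proxy": false,
  "no-new-privileges": true
}
EOF
sudo systemctl restart docker
```

Similarly, the unit-file checks in 3.01 through 3.04 can be spot-checked in one pass; a sketch assuming GNU coreutils:

```sh
# Sketch: report owner, group, and mode for the Docker unit files (3.01-3.04).
for unit in docker.service docker.socket; do
  f=$(systemctl show -p FragmentPath "$unit" | cut -d= -f2)
  [ -n "$f" ] && stat -c '%n %U:%G %a' "$f"
done
```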
governance Hipaa Hitrust 9 2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/hipaa-hitrust-9-2.md
Title: Regulatory Compliance details for HIPAA HITRUST 9.2 description: Details of the HIPAA HITRUST 9.2 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/03/2023 Last updated : 08/25/2023
This built-in initiative is deployed as part of the
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | |||||
-|[Role-Based Access Control (RBAC) should be used on Kubernetes Services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac4a19c2-fa67-49b4-8ae5-0b2e78c49457) |To provide granular filtering on the actions that users can perform, use Role-Based Access Control (RBAC) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies. |Audit, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableRBAC_KubernetesService_Audit.json) |
+|[Azure Role-Based Access Control (RBAC) should be used on Kubernetes Services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac4a19c2-fa67-49b4-8ae5-0b2e78c49457) |To provide granular filtering on the actions that users can perform, use Azure Role-Based Access Control (RBAC) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies. |Audit, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableRBAC_KubernetesService_Audit.json) |
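For context, a definition like the renamed one above takes effect only once it's assigned at some scope. A minimal Azure CLI sketch, assuming a hypothetical resource group `my-aks-rg` and assignment name (the GUID is the definition ID from the portal link above):

```sh
# Sketch: assign the Kubernetes RBAC audit definition to a resource group.
# The resource group, assignment name, and subscription ID are placeholders.
az policy assignment create \
  --name audit-aks-rbac \
  --policy ac4a19c2-fa67-49b4-8ae5-0b2e78c49457 \
  --scope "/subscriptions/<subscription-id>/resourceGroups/my-aks-rg"
```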
### Contractors are provided with minimal system and physical access only after the organization assesses the contractor's ability to comply with its security requirements and the contractor agrees to comply.
This built-in initiative is deployed as part of the
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | ||||| |[\[Preview\]: Container Registry should use a virtual network service endpoint](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc4857be7-912a-4c75-87e6-e30292bcdf78) |This policy audits any Container Registry not configured to use a virtual network service endpoint. |Audit, Disabled |[1.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/VirtualNetworkServiceEndpoint_ContainerRegistry_Audit.json) |
-|[App Service apps should use a virtual network service endpoint](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2d21331d-a4c2-4def-a9ad-ee4e1e023beb) |Use virtual network service endpoints to restrict access to your app from selected subnets from an Azure virtual network. To learn more about App Service service endpoints, visit [https://aka.ms/appservice-vnet-service-endpoint](https://aka.ms/appservice-vnet-service-endpoint). |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/VirtualNetworkServiceEndpoint_AppService_AuditIfNotExists.json) |
+|[App Service apps should use a virtual network service endpoint](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2d21331d-a4c2-4def-a9ad-ee4e1e023beb) |Use virtual network service endpoints to restrict access to your app from selected subnets from an Azure virtual network. To learn more about App Service service endpoints, visit [https://aka.ms/appservice-vnet-service-endpoint](https://aka.ms/appservice-vnet-service-endpoint). |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/VirtualNetworkServiceEndpoint_AppService_AuditIfNotExists.json) |
|[Cosmos DB should use a virtual network service endpoint](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe0a2b1a3-f7f9-4569-807f-2a9edebdf4d9) |This policy audits any Cosmos DB not configured to use a virtual network service endpoint. |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/VirtualNetworkServiceEndpoint_CosmosDB_Audit.json) | |[Event Hub should use a virtual network service endpoint](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd63edb4a-c612-454d-b47d-191a724fcbf0) |This policy audits any Event Hub not configured to use a virtual network service endpoint. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/VirtualNetworkServiceEndpoint_EventHub_AuditIfNotExists.json) | |[Gateway subnets should not be configured with a network security group](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F35f9c03a-cc27-418e-9c0c-539ff999d010) |This policy denies if a gateway subnet is configured with a network security group. Assigning a network security group to a gateway subnet will cause the gateway to stop functioning. |deny |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/NetworkSecurityGroupOnGatewaySubnet_Deny.json) |
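Several of the definitions above audit for virtual network service endpoints. Enabling one on a subnet is a single CLI call; a sketch with placeholder names, using the `Microsoft.Web` service endpoint that the App Service definition checks for:

```sh
# Sketch: enable the Microsoft.Web service endpoint on an existing subnet.
# Resource group, VNet, and subnet names are illustrative placeholders.
az network vnet subnet update \
  --resource-group my-rg \
  --vnet-name my-vnet \
  --name my-subnet \
  --service-endpoints Microsoft.Web
```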
This built-in initiative is deployed as part of the
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | ||||| |[\[Preview\]: Container Registry should use a virtual network service endpoint](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc4857be7-912a-4c75-87e6-e30292bcdf78) |This policy audits any Container Registry not configured to use a virtual network service endpoint. |Audit, Disabled |[1.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/VirtualNetworkServiceEndpoint_ContainerRegistry_Audit.json) |
-|[App Service apps should use a virtual network service endpoint](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2d21331d-a4c2-4def-a9ad-ee4e1e023beb) |Use virtual network service endpoints to restrict access to your app from selected subnets from an Azure virtual network. To learn more about App Service service endpoints, visit [https://aka.ms/appservice-vnet-service-endpoint](https://aka.ms/appservice-vnet-service-endpoint). |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/VirtualNetworkServiceEndpoint_AppService_AuditIfNotExists.json) |
+|[App Service apps should use a virtual network service endpoint](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2d21331d-a4c2-4def-a9ad-ee4e1e023beb) |Use virtual network service endpoints to restrict access to your app from selected subnets from an Azure virtual network. To learn more about App Service service endpoints, visit [https://aka.ms/appservice-vnet-service-endpoint](https://aka.ms/appservice-vnet-service-endpoint). |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/VirtualNetworkServiceEndpoint_AppService_AuditIfNotExists.json) |
|[Cosmos DB should use a virtual network service endpoint](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe0a2b1a3-f7f9-4569-807f-2a9edebdf4d9) |This policy audits any Cosmos DB not configured to use a virtual network service endpoint. |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/VirtualNetworkServiceEndpoint_CosmosDB_Audit.json) | |[Event Hub should use a virtual network service endpoint](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd63edb4a-c612-454d-b47d-191a724fcbf0) |This policy audits any Event Hub not configured to use a virtual network service endpoint. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/VirtualNetworkServiceEndpoint_EventHub_AuditIfNotExists.json) | |[Gateway subnets should not be configured with a network security group](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F35f9c03a-cc27-418e-9c0c-539ff999d010) |This policy denies if a gateway subnet is configured with a network security group. Assigning a network security group to a gateway subnet will cause the gateway to stop functioning. |deny |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/NetworkSecurityGroupOnGatewaySubnet_Deny.json) |
This built-in initiative is deployed as part of the
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | |||||
-|[App Service apps should use a virtual network service endpoint](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2d21331d-a4c2-4def-a9ad-ee4e1e023beb) |Use virtual network service endpoints to restrict access to your app from selected subnets from an Azure virtual network. To learn more about App Service service endpoints, visit [https://aka.ms/appservice-vnet-service-endpoint](https://aka.ms/appservice-vnet-service-endpoint). |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/VirtualNetworkServiceEndpoint_AppService_AuditIfNotExists.json) |
+|[App Service apps should use a virtual network service endpoint](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2d21331d-a4c2-4def-a9ad-ee4e1e023beb) |Use virtual network service endpoints to restrict access to your app from selected subnets from an Azure virtual network. To learn more about App Service service endpoints, visit [https://aka.ms/appservice-vnet-service-endpoint](https://aka.ms/appservice-vnet-service-endpoint). |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/VirtualNetworkServiceEndpoint_AppService_AuditIfNotExists.json) |
|[Document and implement wireless access guidelines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F04b3e7f6-4841-888d-4799-cda19a0084f6) |CMA_0190 - Document and implement wireless access guidelines |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0190.json) | |[Document wireless access security controls](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F8f835d6a-4d13-9a9c-37dc-176cebd37fda) |CMA_C1695 - Document wireless access security controls |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1695.json) | |[Identify and authenticate network devices](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fae5345d5-8dab-086a-7290-db43a3272198) |CMA_0296 - Identify and authenticate network devices |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0296.json) |
This built-in initiative is deployed as part of the
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | ||||| |[\[Preview\]: Container Registry should use a virtual network service endpoint](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc4857be7-912a-4c75-87e6-e30292bcdf78) |This policy audits any Container Registry not configured to use a virtual network service endpoint. |Audit, Disabled |[1.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/VirtualNetworkServiceEndpoint_ContainerRegistry_Audit.json) |
-|[App Service apps should use a virtual network service endpoint](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2d21331d-a4c2-4def-a9ad-ee4e1e023beb) |Use virtual network service endpoints to restrict access to your app from selected subnets from an Azure virtual network. To learn more about App Service service endpoints, visit [https://aka.ms/appservice-vnet-service-endpoint](https://aka.ms/appservice-vnet-service-endpoint). |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/VirtualNetworkServiceEndpoint_AppService_AuditIfNotExists.json) |
+|[App Service apps should use a virtual network service endpoint](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2d21331d-a4c2-4def-a9ad-ee4e1e023beb) |Use virtual network service endpoints to restrict access to your app from selected subnets from an Azure virtual network. To learn more about App Service service endpoints, visit [https://aka.ms/appservice-vnet-service-endpoint](https://aka.ms/appservice-vnet-service-endpoint). |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/VirtualNetworkServiceEndpoint_AppService_AuditIfNotExists.json) |
|[Authorize access to security functions and information](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faeed863a-0f56-429f-945d-8bb66bd06841) |CMA_0022 - Authorize access to security functions and information |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0022.json) | |[Authorize and manage access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F50e9324a-7410-0539-0662-2c1e775538b7) |CMA_0023 - Authorize and manage access |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0023.json) | |[Cosmos DB should use a virtual network service endpoint](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe0a2b1a3-f7f9-4569-807f-2a9edebdf4d9) |This policy audits any Cosmos DB not configured to use a virtual network service endpoint. |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/VirtualNetworkServiceEndpoint_CosmosDB_Audit.json) |
This built-in initiative is deployed as part of the
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | |||||
+|[Azure Role-Based Access Control (RBAC) should be used on Kubernetes Services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac4a19c2-fa67-49b4-8ae5-0b2e78c49457) |To provide granular filtering on the actions that users can perform, use Azure Role-Based Access Control (RBAC) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies. |Audit, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableRBAC_KubernetesService_Audit.json) |
|[Require approval for account creation](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fde770ba6-50dd-a316-2932-e0d972eaa734) |CMA_0431 - Require approval for account creation |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0431.json) |
-|[Role-Based Access Control (RBAC) should be used on Kubernetes Services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac4a19c2-fa67-49b4-8ae5-0b2e78c49457) |To provide granular filtering on the actions that users can perform, use Role-Based Access Control (RBAC) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies. |Audit, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableRBAC_KubernetesService_Audit.json) |
### 1166.01e1System.12-01.e 01.02 Authorized Access to Information Systems
This built-in initiative is deployed as part of the
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | |||||
+|[Azure Role-Based Access Control (RBAC) should be used on Kubernetes Services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac4a19c2-fa67-49b4-8ae5-0b2e78c49457) |To provide granular filtering on the actions that users can perform, use Azure Role-Based Access Control (RBAC) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies. |Audit, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableRBAC_KubernetesService_Audit.json) |
|[Define access authorizations to support separation of duties](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F341bc9f1-7489-07d9-4ec6-971573e1546a) |CMA_0116 - Define access authorizations to support separation of duties |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0116.json) | |[Document separation of duties](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe6f7b584-877a-0d69-77d4-ab8b923a9650) |CMA_0204 - Document separation of duties |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0204.json) |
-|[Role-Based Access Control (RBAC) should be used on Kubernetes Services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac4a19c2-fa67-49b4-8ae5-0b2e78c49457) |To provide granular filtering on the actions that users can perform, use Role-Based Access Control (RBAC) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies. |Audit, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableRBAC_KubernetesService_Audit.json) |
|[Separate duties of individuals](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F60ee1260-97f0-61bb-8155-5d8b75743655) |CMA_0492 - Separate duties of individuals |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0492.json) | ### 1230.09c2Organizational.1-09.c 09.01 Documented Operating Procedures
governance Index https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/index.md
Azure:
- [Australian Government ISM PROTECTED](./australia-ism.md) - [Azure Security Benchmark](./azure-security-benchmark.md) - [Canada Federal PBMM](./canada-federal-pbmm.md)-- [CIS Microsoft Azure Foundations Benchmark v1.3.0](./cis-azure-1-3-0.md)-- [CIS Microsoft Azure Foundations Benchmark v1.1.0](./cis-azure-1-1-0.md)
+- [CIS Microsoft Azure Foundations Benchmark 1.1.0](./cis-azure-1-1-0.md)
+- [CIS Microsoft Azure Foundations Benchmark 1.3.0](./cis-azure-1-3-0.md)
+- [CIS Microsoft Azure Foundations Benchmark 1.4.0](./cis-azure-1-4-0.md)
- [CMMC Level 3](./cmmc-l3.md) - [FedRAMP Moderate](./fedramp-moderate.md) - [FedRAMP High](./fedramp-high.md)
governance Irs 1075 Sept2016 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/irs-1075-sept2016.md
Title: Regulatory Compliance details for IRS 1075 September 2016 description: Details of the IRS 1075 September 2016 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/03/2023 Last updated : 08/25/2023
governance Iso 27001 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/iso-27001.md
Title: Regulatory Compliance details for ISO 27001:2013 description: Details of the ISO 27001:2013 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/03/2023 Last updated : 08/25/2023
governance New Zealand Ism https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/new-zealand-ism.md
Title: Regulatory Compliance details for New Zealand ISM Restricted description: Details of the New Zealand ISM Restricted Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/03/2023 Last updated : 08/25/2023
This built-in initiative is deployed as part of the
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | |||||
-|[Authentication to Linux machines should require SSH keys](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F630c64f9-8b6b-4c64-b511-6544ceff6fd6) |Although SSH itself provides an encrypted connection, using passwords with SSH still leaves the VM vulnerable to brute-force attacks. The most secure option for authenticating to an Azure Linux virtual machine over SSH is with a public-private key pair, also known as SSH keys. Learn more: [https://docs.microsoft.com/azure/virtual-machines/linux/create-ssh-keys-detailed](../../../virtual-machines/linux/create-ssh-keys-detailed.md). |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_LinuxNoPasswordForSSH_AINE.json) |
+|[Authentication to Linux machines should require SSH keys](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F630c64f9-8b6b-4c64-b511-6544ceff6fd6) |Although SSH itself provides an encrypted connection, using passwords with SSH still leaves the VM vulnerable to brute-force attacks. The most secure option for authenticating to an Azure Linux virtual machine over SSH is with a public-private key pair, also known as SSH keys. Learn more: [https://docs.microsoft.com/azure/virtual-machines/linux/create-ssh-keys-detailed](../../../virtual-machines/linux/create-ssh-keys-detailed.md). |AuditIfNotExists, Disabled |[3.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_LinuxNoPasswordForSSH_AINE.json) |
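The definition above flags Linux VMs whose SSH daemon still accepts passwords. A hedged sketch of the key-based setup it audits for, with all key paths and VM names as illustrative placeholders:

```sh
# Sketch: create an SSH key pair and a Linux VM that authenticates with it.
# Key path, resource group, VM name, and image alias are placeholders.
ssh-keygen -t ed25519 -f ~/.ssh/azure_vm_key -N ''
az vm create \
  --resource-group my-rg \
  --name my-linux-vm \
  --image Ubuntu2204 \
  --admin-username azureuser \
  --ssh-key-values ~/.ssh/azure_vm_key.pub
```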
### 17.9.25 Contents of KMPs
governance Nist Sp 800 53 R5 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/nist-sp-800-53-r5.md
Title: Regulatory Compliance details for NIST SP 800-53 Rev. 5 description: Details of the NIST SP 800-53 Rev. 5 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/03/2023 Last updated : 08/25/2023
initiative definition.
|[An Azure Active Directory administrator should be provisioned for SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1f314764-cb73-4fc9-b863-8eca98ac36e9) |Audit provisioning of an Azure Active Directory administrator for your SQL server to enable Azure AD authentication. Azure AD authentication enables simplified permission management and centralized identity management of database users and other Microsoft services |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SQL_DB_AuditServerADAdmins_Audit.json) | |[App Service apps should use managed identity](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2b9ad585-36bc-4615-b300-fd4435808332) |Use a managed identity for enhanced authentication security |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_UseManagedIdentity_WebApp_Audit.json) | |[Audit Linux machines that have accounts without passwords](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff6ec09a3-78bf-4f8f-99dc-6c77182d0f99) |Requires that prerequisites are deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). Machines are non-compliant if Linux machines that have accounts without passwords |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_LinuxPassword232_AINE.json) |
-|[Authentication to Linux machines should require SSH keys](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F630c64f9-8b6b-4c64-b511-6544ceff6fd6) |Although SSH itself provides an encrypted connection, using passwords with SSH still leaves the VM vulnerable to brute-force attacks. The most secure option for authenticating to an Azure Linux virtual machine over SSH is with a public-private key pair, also known as SSH keys. Learn more: [https://docs.microsoft.com/azure/virtual-machines/linux/create-ssh-keys-detailed](../../../virtual-machines/linux/create-ssh-keys-detailed.md). |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_LinuxNoPasswordForSSH_AINE.json) |
+|[Authentication to Linux machines should require SSH keys](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F630c64f9-8b6b-4c64-b511-6544ceff6fd6) |Although SSH itself provides an encrypted connection, using passwords with SSH still leaves the VM vulnerable to brute-force attacks. The most secure option for authenticating to an Azure Linux virtual machine over SSH is with a public-private key pair, also known as SSH keys. Learn more: [https://docs.microsoft.com/azure/virtual-machines/linux/create-ssh-keys-detailed](../../../virtual-machines/linux/create-ssh-keys-detailed.md). |AuditIfNotExists, Disabled |[3.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_LinuxNoPasswordForSSH_AINE.json) |
|[Authorize access to security functions and information](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faeed863a-0f56-429f-945d-8bb66bd06841) |CMA_0022 - Authorize access to security functions and information |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0022.json) | |[Authorize and manage access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F50e9324a-7410-0539-0662-2c1e775538b7) |CMA_0023 - Authorize and manage access |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0023.json) | |[Cognitive Services accounts should have local authentication methods disabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F71ef260a-8f18-47b7-abcb-62d0673d94dc) |Disabling local authentication methods improves security by ensuring that Cognitive Services accounts require Azure Active Directory identities exclusively for authentication. Learn more at: [https://aka.ms/cs/auth](https://aka.ms/cs/auth). |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisableLocalAuth_Audit.json) |
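Among the definitions above, the App Service managed-identity audit has a one-line remediation. A sketch with placeholder names, enabling a system-assigned identity on an existing app:

```sh
# Sketch: enable a system-assigned managed identity on an App Service app.
# Resource group and app names are illustrative placeholders.
az webapp identity assign --resource-group my-rg --name my-webapp
```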
initiative definition.
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | |||||
-|[Role-Based Access Control (RBAC) should be used on Kubernetes Services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac4a19c2-fa67-49b4-8ae5-0b2e78c49457) |To provide granular filtering on the actions that users can perform, use Role-Based Access Control (RBAC) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies. |Audit, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableRBAC_KubernetesService_Audit.json) |
+|[Azure Role-Based Access Control (RBAC) should be used on Kubernetes Services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac4a19c2-fa67-49b4-8ae5-0b2e78c49457) |To provide granular filtering on the actions that users can perform, use Azure Role-Based Access Control (RBAC) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies. |Audit, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableRBAC_KubernetesService_Audit.json) |
### Information Flow Enforcement
initiative definition.
|[Kubernetes cluster services should listen only on allowed ports](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F233a2a17-77ca-4fb1-9b6b-69223d272a44) |Restrict services to listen only on allowed ports to secure access to the Kubernetes cluster. This policy is generally available for Kubernetes Service (AKS), and preview for Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](https://aka.ms/kubepolicydoc). |audit, Audit, deny, Deny, disabled, Disabled |[8.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/ServiceAllowedPorts.json) | |[Kubernetes cluster should not allow privileged containers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F95edb821-ddaf-4404-9732-666045e056b4) |Do not allow privileged containers creation in a Kubernetes cluster. This recommendation is part of CIS 5.2.1 which is intended to improve the security of your Kubernetes environments. This policy is generally available for Kubernetes Service (AKS), and preview for Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](https://aka.ms/kubepolicydoc). |audit, Audit, deny, Deny, disabled, Disabled |[9.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/ContainerNoPrivilege.json) | |[Kubernetes clusters should not allow container privilege escalation](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1c6e92c9-99f0-4e55-9cf2-0c234dc48f99) |Do not allow containers to run with privilege escalation to root in a Kubernetes cluster. This recommendation is part of CIS 5.2.5 which is intended to improve the security of your Kubernetes environments. This policy is generally available for Kubernetes Service (AKS), and preview for Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](https://aka.ms/kubepolicydoc). |audit, Audit, deny, Deny, disabled, Disabled |[7.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/ContainerNoPrivilegeEscalation.json) |
-|[Linux machines should meet requirements for the Azure compute security baseline](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffc9b3da7-8347-4380-8e70-0a0361d8dedd) |Requires that prerequisites are deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). Machines are non-compliant if the machine is not configured correctly for one of the recommendations in the Azure compute security baseline. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_AzureLinuxBaseline_AINE.json) |
+|[Linux machines should meet requirements for the Azure compute security baseline](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffc9b3da7-8347-4380-8e70-0a0361d8dedd) |Requires that prerequisites are deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). Machines are non-compliant if the machine is not configured correctly for one of the recommendations in the Azure compute security baseline. |AuditIfNotExists, Disabled |[2.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_AzureLinuxBaseline_AINE.json) |
|[Remediate information system flaws](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbe38a620-000b-21cf-3cb3-ea151b704c3b) |CMA_0427 - Remediate information system flaws |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0427.json) | |[Windows machines should meet requirements of the Azure compute security baseline](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F72650e9f-97bc-4b2a-ab5f-9781a9fcecbc) |Requires that prerequisites are deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). Machines are non-compliant if the machine is not configured correctly for one of the recommendations in the Azure compute security baseline. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_AzureWindowsBaseline_AINE.json) |
initiative definition.
|[Add system-assigned managed identity to enable Guest Configuration assignments on VMs with a user-assigned identity](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F497dff13-db2a-4c0f-8603-28fa3b331ab6) |This policy adds a system-assigned managed identity to virtual machines hosted in Azure that are supported by Guest Configuration and have at least one user-assigned identity but do not have a system-assigned managed identity. A system-assigned managed identity is a prerequisite for all Guest Configuration assignments and must be added to machines before using any Guest Configuration policy definitions. For more information on Guest Configuration, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). |modify |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_AddSystemIdentityWhenUser_Prerequisite.json) | |[Audit Linux machines that do not have the passwd file permissions set to 0644](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe6955644-301c-44b5-a4c4-528577de6861) |Requires that prerequisites are deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). Machines are non-compliant if Linux machines that do not have the passwd file permissions set to 0644 |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_LinuxPassword121_AINE.json) | |[Audit Windows machines that do not store passwords using reversible encryption](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fda0f98fe-a24b-4ad5-af69-bd0400233661) |Requires that prerequisites are deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). Machines are non-compliant if Windows machines that do not store passwords using reversible encryption |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_WindowsPasswordEncryption_AINE.json) |
-|[Authentication to Linux machines should require SSH keys](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F630c64f9-8b6b-4c64-b511-6544ceff6fd6) |Although SSH itself provides an encrypted connection, using passwords with SSH still leaves the VM vulnerable to brute-force attacks. The most secure option for authenticating to an Azure Linux virtual machine over SSH is with a public-private key pair, also known as SSH keys. Learn more: [https://docs.microsoft.com/azure/virtual-machines/linux/create-ssh-keys-detailed](../../../virtual-machines/linux/create-ssh-keys-detailed.md). |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_LinuxNoPasswordForSSH_AINE.json) |
+|[Authentication to Linux machines should require SSH keys](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F630c64f9-8b6b-4c64-b511-6544ceff6fd6) |Although SSH itself provides an encrypted connection, using passwords with SSH still leaves the VM vulnerable to brute-force attacks. The most secure option for authenticating to an Azure Linux virtual machine over SSH is with a public-private key pair, also known as SSH keys. Learn more: [https://docs.microsoft.com/azure/virtual-machines/linux/create-ssh-keys-detailed](../../../virtual-machines/linux/create-ssh-keys-detailed.md). |AuditIfNotExists, Disabled |[3.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_LinuxNoPasswordForSSH_AINE.json) |
|[Deploy the Linux Guest Configuration extension to enable Guest Configuration assignments on Linux VMs](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F331e8ea8-378a-410f-a2e5-ae22f38bb0da) |This policy deploys the Linux Guest Configuration extension to Linux virtual machines hosted in Azure that are supported by Guest Configuration. The Linux Guest Configuration extension is a prerequisite for all Linux Guest Configuration assignments and must be deployed to machines before using any Linux Guest Configuration policy definition. For more information on Guest Configuration, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). |deployIfNotExists |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_DeployExtensionLinux_Prerequisite.json) | |[Deploy the Windows Guest Configuration extension to enable Guest Configuration assignments on Windows VMs](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F385f5831-96d4-41db-9a3c-cd3af78aaae6) |This policy deploys the Windows Guest Configuration extension to Windows virtual machines hosted in Azure that are supported by Guest Configuration. The Windows Guest Configuration extension is a prerequisite for all Windows Guest Configuration assignments and must be deployed to machines before using any Windows Guest Configuration policy definition. For more information on Guest Configuration, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). |deployIfNotExists |[1.2.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_DeployExtensionWindows_Prerequisite.json) | |[Establish authenticator types and processes](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F921ae4c1-507f-5ddb-8a58-cfa9b5fd96f0) |CMA_0267 - Establish authenticator types and processes |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0267.json) |
governance Nz Ism Restricted 3 5 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/nz-ism-restricted-3-5.md
Title: Regulatory Compliance details for NZ ISM Restricted v3.5 description: Details of the NZ ISM Restricted v3.5 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/03/2023 Last updated : 08/25/2023
initiative definition.
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | |||||
-|[Authentication to Linux machines should require SSH keys](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F630c64f9-8b6b-4c64-b511-6544ceff6fd6) |Although SSH itself provides an encrypted connection, using passwords with SSH still leaves the VM vulnerable to brute-force attacks. The most secure option for authenticating to an Azure Linux virtual machine over SSH is with a public-private key pair, also known as SSH keys. Learn more: [https://docs.microsoft.com/azure/virtual-machines/linux/create-ssh-keys-detailed](../../../virtual-machines/linux/create-ssh-keys-detailed.md). |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_LinuxNoPasswordForSSH_AINE.json) |
+|[Authentication to Linux machines should require SSH keys](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F630c64f9-8b6b-4c64-b511-6544ceff6fd6) |Although SSH itself provides an encrypted connection, using passwords with SSH still leaves the VM vulnerable to brute-force attacks. The most secure option for authenticating to an Azure Linux virtual machine over SSH is with a public-private key pair, also known as SSH keys. Learn more: [https://docs.microsoft.com/azure/virtual-machines/linux/create-ssh-keys-detailed](../../../virtual-machines/linux/create-ssh-keys-detailed.md). |AuditIfNotExists, Disabled |[3.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_LinuxNoPasswordForSSH_AINE.json) |
### 17.9.25 Contents of KMPs
initiative definition.
|[Azure Event Grid topics should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4b90e17e-8448-49db-875e-bd83fb6f804f) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your Event Grid topic instead of the entire service, you'll also be protected against data leakage risks. Learn more at: [https://aka.ms/privateendpoints](https://aka.ms/privateendpoints). |Audit, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Event%20Grid/Topics_PrivateEndpoint_Audit.json) | |[Azure Machine Learning workspaces should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F45e05259-1eb5-4f70-9574-baf73e9d219b) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Azure Machine Learning workspaces, data leakage risks are reduced. Learn more about private links at: [https://docs.microsoft.com/azure/machine-learning/how-to-configure-private-link](../../../machine-learning/how-to-configure-private-link.md). |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Machine%20Learning/Workspace_PrivateEndpoint_Audit_V2.json) | |[Azure Policy Add-on for Kubernetes service (AKS) should be installed and enabled on your clusters](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0a15ec92-a229-4763-bb14-0ea34a568f8d) |Azure Policy Add-on for Kubernetes service (AKS) extends Gatekeeper v3, an admission controller webhook for Open Policy Agent (OPA), to apply at-scale enforcements and safeguards on your clusters in a centralized, consistent manner. |Audit, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/AKS_AzurePolicyAddOn_Audit.json) |
+|[Azure Role-Based Access Control (RBAC) should be used on Kubernetes Services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac4a19c2-fa67-49b4-8ae5-0b2e78c49457) |To provide granular filtering on the actions that users can perform, use Azure Role-Based Access Control (RBAC) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies. |Audit, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableRBAC_KubernetesService_Audit.json) |
|[Azure Spring Cloud should use network injection](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faf35e2a4-ef96-44e7-a9ae-853dd97032c4) |Azure Spring Cloud instances should use virtual network injection for the following purposes: 1. Isolate Azure Spring Cloud from Internet. 2. Enable Azure Spring Cloud to interact with systems in either on premises data centers or Azure service in other virtual networks. 3. Empower customers to control inbound and outbound network communications for Azure Spring Cloud. |Audit, Disabled, Deny |[1.2.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Platform/Spring_VNETEnabled_Audit.json) | |[Container registries should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe8eef0a8-67cf-4eb4-9386-14b0e78733d4) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The private link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your container registries instead of the entire service, you'll also be protected against data leakage risks. Learn more at: [https://aka.ms/acr/private-link](https://aka.ms/acr/private-link). |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Container%20Registry/ACR_PrivateEndpointEnabled_Audit.json) | |[Private endpoint connections on Azure SQL Database should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7698e800-9299-47a6-b3b6-5a0fee576eed) |Private endpoint connections enforce secure communication by enabling private connectivity to Azure SQL Database. |Audit, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServer_PrivateEndpoint_Audit.json) |
initiative definition.
|[Private endpoint should be enabled for MariaDB servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0a1302fb-a631-4106-9753-f3d494733990) |Private endpoint connections enforce secure communication by enabling private connectivity to Azure Database for MariaDB. Configure a private endpoint connection to enable access to traffic coming only from known networks and prevent access from all other IP addresses, including within Azure. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/MariaDB_EnablePrivateEndPoint_Audit.json) |
|[Private endpoint should be enabled for MySQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7595c971-233d-4bcf-bd18-596129188c49) |Private endpoint connections enforce secure communication by enabling private connectivity to Azure Database for MySQL. Configure a private endpoint connection to enable access to traffic coming only from known networks and prevent access from all other IP addresses, including within Azure. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/MySQL_EnablePrivateEndPoint_Audit.json) |
|[Private endpoint should be enabled for PostgreSQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0564d078-92f5-4f97-8398-b9f58a51f70b) |Private endpoint connections enforce secure communication by enabling private connectivity to Azure Database for PostgreSQL. Configure a private endpoint connection to enable access to traffic coming only from known networks and prevent access from all other IP addresses, including within Azure. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/PostgreSQL_EnablePrivateEndPoint_Audit.json) |
-|[Role-Based Access Control (RBAC) should be used on Kubernetes Services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac4a19c2-fa67-49b4-8ae5-0b2e78c49457) |To provide granular filtering on the actions that users can perform, use Role-Based Access Control (RBAC) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies. |Audit, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableRBAC_KubernetesService_Audit.json) |
|[Storage accounts should restrict network access using virtual network rules](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2a1a9cdf-e04d-429a-8416-3bfb72a1b26f) |Protect your storage accounts from potential threats using virtual network rules as a preferred method instead of IP-based filtering. Disabling IP-based filtering prevents public IPs from accessing your storage accounts. |Audit, Deny, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/StorageAccountOnlyVnetRulesEnabled_Audit.json) |
|[Storage accounts should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6edd7eda-6dd8-40f7-810d-67160c639cd9) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your storage account, data leakage risks are reduced. Learn more about private links at - [https://aka.ms/azureprivatelinkoverview](https://aka.ms/azureprivatelinkoverview) |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/StorageAccountPrivateEndpointEnabled_Audit.json) |
|[VM Image Builder templates should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2154edb9-244f-4741-9970-660785bccdaa) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your VM Image Builder building resources, data leakage risks are reduced. Learn more about private links at: [https://docs.microsoft.com/azure/virtual-machines/linux/image-builder-networking#deploy-using-an-existing-vnet](../../../virtual-machines/linux/image-builder-networking.md#deploy-using-an-existing-vnet). |Audit, Disabled, Deny |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/VM%20Image%20Builder/PrivateLinkEnabled_Audit.json) |
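Each row above maps to a built-in policy definition that can be assigned at a management group, subscription, or resource group scope. As a minimal sketch (not part of the mapped initiative), the following Python snippet assigns one of the listed definitions, "Storage accounts should use private link", at subscription scope through the ARM REST API; the bearer-token acquisition, the `api-version`, the assignment name, and the `effect` parameter name are assumptions for illustration.

```python
# Minimal sketch: assign a built-in Azure Policy definition at subscription scope
# via the ARM REST API. Token acquisition, api-version, and names are assumptions.
import requests

subscription_id = "00000000-0000-0000-0000-000000000000"  # placeholder
token = "<bearer token from Azure AD>"                    # assumed acquired elsewhere

# GUID taken from the "Storage accounts should use private link" row above.
definition_id = (
    "/providers/Microsoft.Authorization/policyDefinitions/"
    "6edd7eda-6dd8-40f7-810d-67160c639cd9"
)

scope = f"/subscriptions/{subscription_id}"
url = (
    f"https://management.azure.com{scope}"
    "/providers/Microsoft.Authorization/policyAssignments/audit-storage-private-link"
    "?api-version=2022-06-01"  # assumed current api-version for policy assignments
)

body = {
    "properties": {
        "displayName": "Audit storage accounts without private link",
        "policyDefinitionId": definition_id,
        # The Effect(s) column lists the allowed values for this definition.
        "parameters": {"effect": {"value": "AuditIfNotExists"}},
    }
}

resp = requests.put(url, json=body, headers={"Authorization": f"Bearer {token}"})
resp.raise_for_status()
print(resp.json()["id"])  # resource ID of the new assignment
```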
governance Pci Dss 3 2 1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/pci-dss-3-2-1.md
Title: Regulatory Compliance details for PCI DSS 3.2.1
description: Details of the PCI DSS 3.2.1 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment.
Previously updated : 08/03/2023 Last updated : 08/25/2023
governance Pci Dss 4 0 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/pci-dss-4-0.md
Title: Regulatory Compliance details for PCI DSS v4.0
description: Details of the PCI DSS v4.0 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment.
Previously updated : 08/03/2023 Last updated : 08/25/2023
governance Rbi Itf Banks 2016 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/rbi-itf-banks-2016.md
Title: Regulatory Compliance details for Reserve Bank of India IT Framework for Banks v2016
description: Details of the Reserve Bank of India IT Framework for Banks v2016 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment.
Previously updated : 08/03/2023 Last updated : 08/25/2023
initiative definition.
|[Accounts with owner permissions on Azure resources should be MFA enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe3e008c3-56b9-4133-8fd7-d3347377402a) |Multi-Factor Authentication (MFA) should be enabled for all subscription accounts with owner permissions to prevent a breach of accounts or resources. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableMFAForAccountsWithOwnerPermissions_Audit.json) |
|[Accounts with read permissions on Azure resources should be MFA enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F81b3ccb4-e6e8-4e4a-8d05-5df25cd29fd4) |Multi-Factor Authentication (MFA) should be enabled for all subscription accounts with read privileges to prevent a breach of accounts or resources. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableMFAForAccountsWithReadPermissions_Audit.json) |
|[Accounts with write permissions on Azure resources should be MFA enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F931e118d-50a1-4457-a5e4-78550e086c52) |Multi-Factor Authentication (MFA) should be enabled for all subscription accounts with write privileges to prevent a breach of accounts or resources. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableMFAForAccountsWithWritePermissions_Audit.json) |
-|[Authentication to Linux machines should require SSH keys](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F630c64f9-8b6b-4c64-b511-6544ceff6fd6) |Although SSH itself provides an encrypted connection, using passwords with SSH still leaves the VM vulnerable to brute-force attacks. The most secure option for authenticating to an Azure Linux virtual machine over SSH is with a public-private key pair, also known as SSH keys. Learn more: [https://docs.microsoft.com/azure/virtual-machines/linux/create-ssh-keys-detailed](../../../virtual-machines/linux/create-ssh-keys-detailed.md). |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_LinuxNoPasswordForSSH_AINE.json) |
+|[Authentication to Linux machines should require SSH keys](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F630c64f9-8b6b-4c64-b511-6544ceff6fd6) |Although SSH itself provides an encrypted connection, using passwords with SSH still leaves the VM vulnerable to brute-force attacks. The most secure option for authenticating to an Azure Linux virtual machine over SSH is with a public-private key pair, also known as SSH keys. Learn more: [https://docs.microsoft.com/azure/virtual-machines/linux/create-ssh-keys-detailed](../../../virtual-machines/linux/create-ssh-keys-detailed.md). |AuditIfNotExists, Disabled |[3.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_LinuxNoPasswordForSSH_AINE.json) |
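The guest-configuration policy above flags Linux VMs that still allow password logins over SSH. As a rough local illustration of the condition being audited (not the policy's actual implementation), this sketch checks whether sshd is configured to refuse password authentication; the config path is the common default and an assumption here.

```python
# Rough illustration of what the SSH-key policy audits: password authentication
# left enabled in sshd. Not the policy's actual audit logic.
from pathlib import Path

def password_auth_disabled(sshd_config: str = "/etc/ssh/sshd_config") -> bool:
    """Return True if sshd explicitly disables password authentication."""
    for line in Path(sshd_config).read_text().splitlines():
        stripped = line.strip()
        # sshd uses the first occurrence of a keyword; commented lines start with '#'.
        if stripped.lower().startswith("passwordauthentication"):
            return stripped.split()[1].lower() == "no"
    return False  # OpenSSH defaults to 'yes' when the directive is absent

if __name__ == "__main__":
    print("compliant" if password_auth_disabled() else "non-compliant")
```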
### Authentication Framework For Customers-9.3
initiative definition.
|[Accounts with owner permissions on Azure resources should be MFA enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe3e008c3-56b9-4133-8fd7-d3347377402a) |Multi-Factor Authentication (MFA) should be enabled for all subscription accounts with owner permissions to prevent a breach of accounts or resources. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableMFAForAccountsWithOwnerPermissions_Audit.json) |
|[Accounts with read permissions on Azure resources should be MFA enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F81b3ccb4-e6e8-4e4a-8d05-5df25cd29fd4) |Multi-Factor Authentication (MFA) should be enabled for all subscription accounts with read privileges to prevent a breach of accounts or resources. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableMFAForAccountsWithReadPermissions_Audit.json) |
|[Accounts with write permissions on Azure resources should be MFA enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F931e118d-50a1-4457-a5e4-78550e086c52) |Multi-Factor Authentication (MFA) should be enabled for all subscription accounts with write privileges to prevent a breach of accounts or resources. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableMFAForAccountsWithWritePermissions_Audit.json) |
-|[Authentication to Linux machines should require SSH keys](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F630c64f9-8b6b-4c64-b511-6544ceff6fd6) |Although SSH itself provides an encrypted connection, using passwords with SSH still leaves the VM vulnerable to brute-force attacks. The most secure option for authenticating to an Azure Linux virtual machine over SSH is with a public-private key pair, also known as SSH keys. Learn more: [https://docs.microsoft.com/azure/virtual-machines/linux/create-ssh-keys-detailed](../../../virtual-machines/linux/create-ssh-keys-detailed.md). |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_LinuxNoPasswordForSSH_AINE.json) |
+|[Authentication to Linux machines should require SSH keys](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F630c64f9-8b6b-4c64-b511-6544ceff6fd6) |Although SSH itself provides an encrypted connection, using passwords with SSH still leaves the VM vulnerable to brute-force attacks. The most secure option for authenticating to an Azure Linux virtual machine over SSH is with a public-private key pair, also known as SSH keys. Learn more: [https://docs.microsoft.com/azure/virtual-machines/linux/create-ssh-keys-detailed](../../../virtual-machines/linux/create-ssh-keys-detailed.md). |AuditIfNotExists, Disabled |[3.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_LinuxNoPasswordForSSH_AINE.json) |
|[Guest accounts with owner permissions on Azure resources should be removed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F339353f6-2387-4a45-abe4-7f529d121046) |External accounts with owner permissions should be removed from your subscription in order to prevent unmonitored access. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveGuestAccountsWithOwnerPermissions_Audit.json) |
|[Guest accounts with read permissions on Azure resources should be removed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe9ac8f8e-ce22-4355-8f04-99b911d6be52) |External accounts with read privileges should be removed from your subscription in order to prevent unmonitored access. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveGuestAccountsWithReadPermissions_Audit.json) |
|[Guest accounts with write permissions on Azure resources should be removed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F94e1c2ac-cbbe-4cac-a2b5-389c812dee87) |External accounts with write privileges should be removed from your subscription in order to prevent unmonitored access. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveGuestAccountsWithWritePermissions_Audit.json) |
initiative definition.
|[\[Preview\]: Storage account public access should be disallowed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4fa4b6c0-31ca-4c0d-b10d-24b96f62a751) |Anonymous public read access to containers and blobs in Azure Storage is a convenient way to share data but might present security risks. To prevent data breaches caused by undesired anonymous access, Microsoft recommends preventing public access to a storage account unless your scenario requires it. |audit, Audit, deny, Deny, disabled, Disabled |[3.1.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/ASC_Storage_DisallowPublicBlobAccess_Audit.json) |
|[API Management services should use a virtual network](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fef619a2c-cc4d-4d03-b2ba-8c94a834d85b) |Azure Virtual Network deployment provides enhanced security and isolation, and allows you to place your API Management service in a non-internet-routable network that you control access to. These networks can then be connected to your on-premises networks using various VPN technologies, which enables access to your backend services within the network and/or on-premises. The developer portal and API gateway can be configured to be accessible either from the internet or only within the virtual network. |Audit, Deny, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/API%20Management/ApiManagement_VNETEnabled_Audit.json) |
|[App Configuration should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fca610c1d-041c-4332-9d88-7ed3094967c7) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The private link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your app configuration instances instead of the entire service, you'll also be protected against data leakage risks. Learn more at: [https://aka.ms/appconfig/private-endpoint](https://aka.ms/appconfig/private-endpoint). |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Configuration/PrivateLink_Audit.json) |
-|[Authentication to Linux machines should require SSH keys](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F630c64f9-8b6b-4c64-b511-6544ceff6fd6) |Although SSH itself provides an encrypted connection, using passwords with SSH still leaves the VM vulnerable to brute-force attacks. The most secure option for authenticating to an Azure Linux virtual machine over SSH is with a public-private key pair, also known as SSH keys. Learn more: [https://docs.microsoft.com/azure/virtual-machines/linux/create-ssh-keys-detailed](../../../virtual-machines/linux/create-ssh-keys-detailed.md). |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_LinuxNoPasswordForSSH_AINE.json) |
+|[Authentication to Linux machines should require SSH keys](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F630c64f9-8b6b-4c64-b511-6544ceff6fd6) |Although SSH itself provides an encrypted connection, using passwords with SSH still leaves the VM vulnerable to brute-force attacks. The most secure option for authenticating to an Azure Linux virtual machine over SSH is with a public-private key pair, also known as SSH keys. Learn more: [https://docs.microsoft.com/azure/virtual-machines/linux/create-ssh-keys-detailed](../../../virtual-machines/linux/create-ssh-keys-detailed.md). |AuditIfNotExists, Disabled |[3.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_LinuxNoPasswordForSSH_AINE.json) |
|[Authorized IP ranges should be defined on Kubernetes Services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0e246bcf-5f6f-4f87-bc6f-775d4712c7ea) |Restrict access to the Kubernetes Service Management API by granting API access only to IP addresses in specific ranges. It is recommended to limit access to authorized IP ranges to ensure that only applications from allowed networks can access the cluster. |Audit, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableIpRanges_KubernetesService_Audit.json) |
|[Azure Event Grid domains should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9830b652-8523-49cc-b1b3-e17dce1127ca) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your Event Grid domain instead of the entire service, you'll also be protected against data leakage risks. Learn more at: [https://aka.ms/privateendpoints](https://aka.ms/privateendpoints). |Audit, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Event%20Grid/Domains_PrivateEndpoint_Audit.json) |
|[Azure Event Grid topics should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4b90e17e-8448-49db-875e-bd83fb6f804f) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your Event Grid topic instead of the entire service, you'll also be protected against data leakage risks. Learn more at: [https://aka.ms/privateendpoints](https://aka.ms/privateendpoints). |Audit, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Event%20Grid/Topics_PrivateEndpoint_Audit.json) |
initiative definition.
|[Accounts with read permissions on Azure resources should be MFA enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F81b3ccb4-e6e8-4e4a-8d05-5df25cd29fd4) |Multi-Factor Authentication (MFA) should be enabled for all subscription accounts with read privileges to prevent a breach of accounts or resources. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableMFAForAccountsWithReadPermissions_Audit.json) |
|[Accounts with write permissions on Azure resources should be MFA enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F931e118d-50a1-4457-a5e4-78550e086c52) |Multi-Factor Authentication (MFA) should be enabled for all subscription accounts with write privileges to prevent a breach of accounts or resources. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableMFAForAccountsWithWritePermissions_Audit.json) |
|[Audit usage of custom RBAC roles](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa451c1ef-c6ca-483d-87ed-f49761e3ffb5) |Audit built-in roles such as 'Owner, Contributor, Reader' instead of custom RBAC roles, which are error-prone. Using custom roles is treated as an exception and requires a rigorous review and threat modeling |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/General/Subscription_AuditCustomRBACRoles_Audit.json) |
+|[Azure Role-Based Access Control (RBAC) should be used on Kubernetes Services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac4a19c2-fa67-49b4-8ae5-0b2e78c49457) |To provide granular filtering on the actions that users can perform, use Azure Role-Based Access Control (RBAC) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies. |Audit, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableRBAC_KubernetesService_Audit.json) |
|[Blocked accounts with owner permissions on Azure resources should be removed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0cfea604-3201-4e14-88fc-fae4c427a6c5) |Deprecated accounts with owner permissions should be removed from your subscription. Deprecated accounts are accounts that have been blocked from signing in. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveBlockedAccountsWithOwnerPermissions_Audit.json) |
|[Blocked accounts with read and write permissions on Azure resources should be removed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F8d7e1fde-fe26-4b5f-8108-f8e432cbc2be) |Deprecated accounts should be removed from your subscriptions. Deprecated accounts are accounts that have been blocked from signing in. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveBlockedAccountsWithReadWritePermissions_Audit.json) |
|[Guest accounts with owner permissions on Azure resources should be removed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F339353f6-2387-4a45-abe4-7f529d121046) |External accounts with owner permissions should be removed from your subscription in order to prevent unmonitored access. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveGuestAccountsWithOwnerPermissions_Audit.json) |
|[Guest accounts with read permissions on Azure resources should be removed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe9ac8f8e-ce22-4355-8f04-99b911d6be52) |External accounts with read privileges should be removed from your subscription in order to prevent unmonitored access. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveGuestAccountsWithReadPermissions_Audit.json) |
|[Guest accounts with write permissions on Azure resources should be removed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F94e1c2ac-cbbe-4cac-a2b5-389c812dee87) |External accounts with write privileges should be removed from your subscription in order to prevent unmonitored access. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveGuestAccountsWithWritePermissions_Audit.json) |
-|[Role-Based Access Control (RBAC) should be used on Kubernetes Services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac4a19c2-fa67-49b4-8ae5-0b2e78c49457) |To provide granular filtering on the actions that users can perform, use Role-Based Access Control (RBAC) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies. |Audit, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableRBAC_KubernetesService_Audit.json) |
### User Access Control / Management-8.2
initiative definition.
|[A maximum of 3 owners should be designated for your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4f11b553-d42e-4e3a-89be-32ca364cad4c) |It is recommended to designate up to 3 subscription owners in order to reduce the potential for breach by a compromised owner. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_DesignateLessThanXOwners_Audit.json) |
|[An Azure Active Directory administrator should be provisioned for SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1f314764-cb73-4fc9-b863-8eca98ac36e9) |Audit provisioning of an Azure Active Directory administrator for your SQL server to enable Azure AD authentication. Azure AD authentication enables simplified permission management and centralized identity management of database users and other Microsoft services |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SQL_DB_AuditServerADAdmins_Audit.json) |
|[Audit usage of custom RBAC roles](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa451c1ef-c6ca-483d-87ed-f49761e3ffb5) |Audit built-in roles such as 'Owner, Contributor, Reader' instead of custom RBAC roles, which are error-prone. Using custom roles is treated as an exception and requires a rigorous review and threat modeling |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/General/Subscription_AuditCustomRBACRoles_Audit.json) |
+|[Azure Role-Based Access Control (RBAC) should be used on Kubernetes Services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac4a19c2-fa67-49b4-8ae5-0b2e78c49457) |To provide granular filtering on the actions that users can perform, use Azure Role-Based Access Control (RBAC) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies. |Audit, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableRBAC_KubernetesService_Audit.json) |
|[Blocked accounts with owner permissions on Azure resources should be removed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0cfea604-3201-4e14-88fc-fae4c427a6c5) |Deprecated accounts with owner permissions should be removed from your subscription. Deprecated accounts are accounts that have been blocked from signing in. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveBlockedAccountsWithOwnerPermissions_Audit.json) |
|[Blocked accounts with read and write permissions on Azure resources should be removed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F8d7e1fde-fe26-4b5f-8108-f8e432cbc2be) |Deprecated accounts should be removed from your subscriptions. Deprecated accounts are accounts that have been blocked from signing in. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveBlockedAccountsWithReadWritePermissions_Audit.json) |
|[Guest accounts with owner permissions on Azure resources should be removed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F339353f6-2387-4a45-abe4-7f529d121046) |External accounts with owner permissions should be removed from your subscription in order to prevent unmonitored access. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveGuestAccountsWithOwnerPermissions_Audit.json) |
|[Guest accounts with read permissions on Azure resources should be removed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe9ac8f8e-ce22-4355-8f04-99b911d6be52) |External accounts with read privileges should be removed from your subscription in order to prevent unmonitored access. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveGuestAccountsWithReadPermissions_Audit.json) |
|[Guest accounts with write permissions on Azure resources should be removed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F94e1c2ac-cbbe-4cac-a2b5-389c812dee87) |External accounts with write privileges should be removed from your subscription in order to prevent unmonitored access. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveGuestAccountsWithWritePermissions_Audit.json) |
|[Management ports of virtual machines should be protected with just-in-time network access control](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb0f33259-77d7-4c9e-aac6-3aabcfae693c) |Possible network Just In Time (JIT) access will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_JITNetworkAccess_Audit.json) |
-|[Role-Based Access Control (RBAC) should be used on Kubernetes Services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac4a19c2-fa67-49b4-8ae5-0b2e78c49457) |To provide granular filtering on the actions that users can perform, use Role-Based Access Control (RBAC) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies. |Audit, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableRBAC_KubernetesService_Audit.json) |
|[Service Fabric clusters should only use Azure Active Directory for client authentication](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb54ed75b-3e1a-44ac-a333-05ba39b99ff0) |Audit usage of client authentication only via Azure Active Directory in Service Fabric |Audit, Deny, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Service%20Fabric/ServiceFabric_AuditADAuth_Audit.json) |
|[There should be more than one owner assigned to your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F09024ccc-0c5f-475e-9457-b7c0d9ed487b) |It is recommended to designate more than one subscription owner in order to have administrator access redundancy. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_DesignateMoreThanOneOwner_Audit.json) |
initiative definition.
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|---|---|---|---|
|[Audit usage of custom RBAC roles](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa451c1ef-c6ca-483d-87ed-f49761e3ffb5) |Audit built-in roles such as 'Owner, Contributor, Reader' instead of custom RBAC roles, which are error-prone. Using custom roles is treated as an exception and requires a rigorous review and threat modeling |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/General/Subscription_AuditCustomRBACRoles_Audit.json) |
-|[Role-Based Access Control (RBAC) should be used on Kubernetes Services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac4a19c2-fa67-49b4-8ae5-0b2e78c49457) |To provide granular filtering on the actions that users can perform, use Role-Based Access Control (RBAC) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies. |Audit, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableRBAC_KubernetesService_Audit.json) |
+|[Azure Role-Based Access Control (RBAC) should be used on Kubernetes Services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac4a19c2-fa67-49b4-8ae5-0b2e78c49457) |To provide granular filtering on the actions that users can perform, use Azure Role-Based Access Control (RBAC) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies. |Audit, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableRBAC_KubernetesService_Audit.json) |
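The renamed policy above audits AKS clusters that don't have Kubernetes RBAC enabled. As a hedged sketch of an equivalent spot check, assuming the `azure-identity` and `azure-mgmt-containerservice` packages and placeholder resource names, one could read the cluster's RBAC flag directly:

```python
# Sketch: spot-check whether an AKS cluster has Kubernetes RBAC enabled,
# mirroring the condition the audit policy evaluates. Package and attribute
# names follow the Azure SDK for Python; resource names are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.containerservice import ContainerServiceClient

subscription_id = "00000000-0000-0000-0000-000000000000"  # placeholder

client = ContainerServiceClient(DefaultAzureCredential(), subscription_id)
cluster = client.managed_clusters.get("my-resource-group", "my-aks-cluster")

# enable_rbac is False (or None) on clusters the policy would flag.
print("RBAC enabled" if cluster.enable_rbac else "RBAC disabled (would be flagged)")
```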
## Vulnerability Assessment And Penetration Test And Red Team Exercises
initiative definition.
|[\[Preview\]: Network traffic data collection agent should be installed on Windows virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2f2ee1de-44aa-4762-b6bd-0893fc3f306d) |Security Center uses the Microsoft Dependency agent to collect network traffic data from your Azure virtual machines to enable advanced network protection features such as traffic visualization on the network map, network hardening recommendations and specific network threats. |AuditIfNotExists, Disabled |[1.0.2-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/ASC_Dependency_Agent_Audit_Windows.json) |
|[App Service apps should have resource logs enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F91a78b24-f231-4a8a-8da9-02c35b2b6510) |Audit enabling of resource logs on the app. This enables you to recreate activity trails for investigation purposes if a security incident occurs or your network is compromised. |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_ResourceLoggingMonitoring_Audit.json) |
|[Guest Configuration extension should be installed on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fae89ebca-1c92-4898-ac2c-9f63decb045c) |To ensure secure configurations of in-guest settings of your machine, install the Guest Configuration extension. In-guest settings that the extension monitors include the configuration of the operating system, application configuration or presence, and environment settings. Once installed, in-guest policies will be available such as 'Windows Exploit guard should be enabled'. Learn more at [https://aka.ms/gcpol](https://aka.ms/gcpol). |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_GCExtOnVm.json) |
-|[Linux machines should meet requirements for the Azure compute security baseline](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffc9b3da7-8347-4380-8e70-0a0361d8dedd) |Requires that prerequisites are deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). Machines are non-compliant if the machine is not configured correctly for one of the recommendations in the Azure compute security baseline. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_AzureLinuxBaseline_AINE.json) |
+|[Linux machines should meet requirements for the Azure compute security baseline](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffc9b3da7-8347-4380-8e70-0a0361d8dedd) |Requires that prerequisites are deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). Machines are non-compliant if the machine is not configured correctly for one of the recommendations in the Azure compute security baseline. |AuditIfNotExists, Disabled |[2.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_AzureLinuxBaseline_AINE.json) |
|[Resource logs in Azure Data Lake Store should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F057ef27e-665e-4328-8ea3-04b3122bd9fb) |Audit enabling of resource logs. This enables you to recreate activity trails to use for investigation purposes when a security incident occurs or when your network is compromised |AuditIfNotExists, Disabled |[5.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Data%20Lake/DataLakeStore_AuditDiagnosticLog_Audit.json) |
|[Resource logs in Azure Stream Analytics should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff9be5368-9bf5-4b84-9e0a-7850da98bb46) |Audit enabling of resource logs. This enables you to recreate activity trails to use for investigation purposes when a security incident occurs or when your network is compromised |AuditIfNotExists, Disabled |[5.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Stream%20Analytics/StreamAnalytics_AuditDiagnosticLog_Audit.json) |
|[Resource logs in Data Lake Analytics should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc95c74d9-38fe-4f0d-af86-0c7d626a315c) |Audit enabling of resource logs. This enables you to recreate activity trails to use for investigation purposes when a security incident occurs or when your network is compromised |AuditIfNotExists, Disabled |[5.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Data%20Lake/DataLakeAnalytics_AuditDiagnosticLog_Audit.json) |
initiative definition.
|[Function apps should not have CORS configured to allow every resource to access your apps](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0820b7b9-23aa-4725-a1ce-ae4558f718e5) |Cross-Origin Resource Sharing (CORS) should not allow all domains to access your Function app. Allow only required domains to interact with your Function app. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_RestrictCORSAccess_FuntionApp_Audit.json) |
|[Function apps should use the latest TLS version](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff9d614c5-c173-4d56-95a7-b4437057d193) |Periodically, newer versions of TLS are released to address security flaws, add functionality, and improve speed. Upgrade to the latest TLS version for Function apps to take advantage of security fixes, if any, and/or new functionalities of the latest version. |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_RequireLatestTls_FunctionApp_Audit.json) |
|[Guest Configuration extension should be installed on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fae89ebca-1c92-4898-ac2c-9f63decb045c) |To ensure secure configurations of in-guest settings of your machine, install the Guest Configuration extension. In-guest settings that the extension monitors include the configuration of the operating system, application configuration or presence, and environment settings. Once installed, in-guest policies will be available such as 'Windows Exploit guard should be enabled'. Learn more at [https://aka.ms/gcpol](https://aka.ms/gcpol). |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_GCExtOnVm.json) |
-|[Linux machines should meet requirements for the Azure compute security baseline](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffc9b3da7-8347-4380-8e70-0a0361d8dedd) |Requires that prerequisites are deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). Machines are non-compliant if the machine is not configured correctly for one of the recommendations in the Azure compute security baseline. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_AzureLinuxBaseline_AINE.json) |
+|[Linux machines should meet requirements for the Azure compute security baseline](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffc9b3da7-8347-4380-8e70-0a0361d8dedd) |Requires that prerequisites are deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). Machines are non-compliant if the machine is not configured correctly for one of the recommendations in the Azure compute security baseline. |AuditIfNotExists, Disabled |[2.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_AzureLinuxBaseline_AINE.json) |
|[Monitor missing Endpoint Protection in Azure Security Center](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faf6cd1bd-1635-48cb-bde7-5b15693900b9) |Servers without an installed Endpoint Protection agent will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_MissingEndpointProtection_Audit.json) |
|[Storage accounts should be migrated to new Azure Resource Manager resources](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F37e0d2fe-28a5-43d6-a273-67d37d1f5606) |Use new Azure Resource Manager for your storage accounts to provide security enhancements such as: stronger access control (RBAC), better auditing, Azure Resource Manager based deployment and governance, access to managed identities, access to key vault for secrets, Azure AD-based authentication and support for tags and resource groups for easier security management |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/Classic_AuditForClassicStorages_Audit.json) |
|[Virtual machines should be migrated to new Azure Resource Manager resources](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1d84d5fb-01f6-4d12-ba4f-4a26081d403d) |Use new Azure Resource Manager for your virtual machines to provide security enhancements such as: stronger access control (RBAC), better auditing, Azure Resource Manager based deployment and governance, access to managed identities, access to key vault for secrets, Azure AD-based authentication and support for tags and resource groups for easier security management |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Compute/ClassicCompute_Audit.json) |
initiative definition.
|[App Service apps should only be accessible over HTTPS](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa4af4a39-4135-47fb-b175-47fbdf85311d) |Use of HTTPS ensures server/service authentication and protects data in transit from network layer eavesdropping attacks. |Audit, Disabled, Deny |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppServiceWebapp_AuditHTTP_Audit.json) |
|[App Service apps should require FTPS only](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4d24b6d4-5e53-4a4f-a7f4-618fa573ee4b) |Enable FTPS enforcement for enhanced security. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_AuditFTPS_WebApp_Audit.json) |
|[App Service apps should use the latest TLS version](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff0e6e85b-9b9f-4a4b-b67b-f730d42f1b0b) |Periodically, newer versions of TLS are released to address security flaws, add functionality, and improve speed. Upgrade to the latest TLS version for App Service apps to take advantage of security fixes, if any, and/or new functionalities of the latest version. |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_RequireLatestTls_WebApp_Audit.json) |
-|[Authentication to Linux machines should require SSH keys](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F630c64f9-8b6b-4c64-b511-6544ceff6fd6) |Although SSH itself provides an encrypted connection, using passwords with SSH still leaves the VM vulnerable to brute-force attacks. The most secure option for authenticating to an Azure Linux virtual machine over SSH is with a public-private key pair, also known as SSH keys. Learn more: [https://docs.microsoft.com/azure/virtual-machines/linux/create-ssh-keys-detailed](../../../virtual-machines/linux/create-ssh-keys-detailed.md). |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_LinuxNoPasswordForSSH_AINE.json) |
+|[Authentication to Linux machines should require SSH keys](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F630c64f9-8b6b-4c64-b511-6544ceff6fd6) |Although SSH itself provides an encrypted connection, using passwords with SSH still leaves the VM vulnerable to brute-force attacks. The most secure option for authenticating to an Azure Linux virtual machine over SSH is with a public-private key pair, also known as SSH keys. Learn more: [https://docs.microsoft.com/azure/virtual-machines/linux/create-ssh-keys-detailed](../../../virtual-machines/linux/create-ssh-keys-detailed.md). |AuditIfNotExists, Disabled |[3.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_LinuxNoPasswordForSSH_AINE.json) |
|[Automation account variables should be encrypted](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3657f5a0-770e-44a3-b44e-9431ba1e9735) |It is important to enable encryption of Automation account variable assets when storing sensitive data |Audit, Deny, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Automation/Automation_AuditUnencryptedVars_Audit.json) |
|[Azure Cosmos DB accounts should use customer-managed keys to encrypt data at rest](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1f905d99-2ab7-462c-a6b0-f709acca6c8f) |Use customer-managed keys to manage the encryption at rest of your Azure Cosmos DB. By default, the data is encrypted at rest with service-managed keys, but customer-managed keys are commonly required to meet regulatory compliance standards. Customer-managed keys enable the data to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. Learn more at [https://aka.ms/cosmosdb-cmk](https://aka.ms/cosmosdb-cmk). |audit, Audit, deny, Deny, disabled, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cosmos%20DB/Cosmos_CMK_Deny.json) |
|[Azure Defender for Azure SQL Database servers should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7fe3b40f-802b-4cdd-8bd4-fd799c948cc2) |Azure Defender for SQL provides functionality for surfacing and mitigating potential database vulnerabilities, detecting anomalous activities that could indicate threats to SQL databases, and discovering and classifying sensitive data. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedDataSecurityOnSqlServers_Audit.json) |
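Once initiatives like these are assigned, per-resource compliance can be pulled from Azure Policy Insights. A minimal sketch (bearer-token acquisition and the `api-version` are assumptions for illustration) that lists the latest non-compliant policy states for a subscription:

```python
# Sketch: query latest non-compliant policy states via Azure Policy Insights.
# Token acquisition and api-version are assumptions; the endpoint shape follows
# the Microsoft.PolicyInsights policyStates queryResults REST operation.
import requests

subscription_id = "00000000-0000-0000-0000-000000000000"  # placeholder
token = "<bearer token from Azure AD>"                    # assumed acquired elsewhere

url = (
    f"https://management.azure.com/subscriptions/{subscription_id}"
    "/providers/Microsoft.PolicyInsights/policyStates/latest/queryResults"
)
params = {
    "api-version": "2019-10-01",                    # assumed api-version
    "$filter": "complianceState eq 'NonCompliant'",  # OData filter on state
}

resp = requests.post(url, params=params, headers={"Authorization": f"Bearer {token}"})
resp.raise_for_status()
for state in resp.json().get("value", []):
    print(state["policyDefinitionName"], "->", state["resourceId"])
```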
governance Rbi Itf Nbfc 2017 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/rbi-itf-nbfc-2017.md
Title: Regulatory Compliance details for Reserve Bank of India - IT Framework for NBFC
description: Details of the Reserve Bank of India - IT Framework for NBFC Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment.
Previously updated : 08/03/2023 Last updated : 08/25/2023
initiative definition.
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|---|---|---|---|
|[Audit usage of custom RBAC roles](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa451c1ef-c6ca-483d-87ed-f49761e3ffb5) |Audit built-in roles such as 'Owner, Contributor, Reader' instead of custom RBAC roles, which are error-prone. Using custom roles is treated as an exception and requires a rigorous review and threat modeling |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/General/Subscription_AuditCustomRBACRoles_Audit.json) |
+|[Azure Role-Based Access Control (RBAC) should be used on Kubernetes Services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac4a19c2-fa67-49b4-8ae5-0b2e78c49457) |To provide granular filtering on the actions that users can perform, use Azure Role-Based Access Control (RBAC) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies. |Audit, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableRBAC_KubernetesService_Audit.json) |
|[Blocked accounts with owner permissions on Azure resources should be removed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0cfea604-3201-4e14-88fc-fae4c427a6c5) |Deprecated accounts with owner permissions should be removed from your subscription. Deprecated accounts are accounts that have been blocked from signing in. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveBlockedAccountsWithOwnerPermissions_Audit.json) |
|[Blocked accounts with read and write permissions on Azure resources should be removed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F8d7e1fde-fe26-4b5f-8108-f8e432cbc2be) |Deprecated accounts should be removed from your subscriptions. Deprecated accounts are accounts that have been blocked from signing in. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveBlockedAccountsWithReadWritePermissions_Audit.json) |
|[Guest accounts with owner permissions on Azure resources should be removed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F339353f6-2387-4a45-abe4-7f529d121046) |External accounts with owner permissions should be removed from your subscription in order to prevent unmonitored access. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveGuestAccountsWithOwnerPermissions_Audit.json) |
|[Guest accounts with read permissions on Azure resources should be removed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe9ac8f8e-ce22-4355-8f04-99b911d6be52) |External accounts with read privileges should be removed from your subscription in order to prevent unmonitored access. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveGuestAccountsWithReadPermissions_Audit.json) |
|[Guest accounts with write permissions on Azure resources should be removed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F94e1c2ac-cbbe-4cac-a2b5-389c812dee87) |External accounts with write privileges should be removed from your subscription in order to prevent unmonitored access. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveGuestAccountsWithWritePermissions_Audit.json) |
-|[Role-Based Access Control (RBAC) should be used on Kubernetes Services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac4a19c2-fa67-49b4-8ae5-0b2e78c49457) |To provide granular filtering on the actions that users can perform, use Role-Based Access Control (RBAC) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies. |Audit, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableRBAC_KubernetesService_Audit.json) |
### Segregation of Functions-3.1
initiative definition.
|[Accounts with owner permissions on Azure resources should be MFA enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe3e008c3-56b9-4133-8fd7-d3347377402a) |Multi-Factor Authentication (MFA) should be enabled for all subscription accounts with owner permissions to prevent a breach of accounts or resources. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableMFAForAccountsWithOwnerPermissions_Audit.json) |
|[Accounts with read permissions on Azure resources should be MFA enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F81b3ccb4-e6e8-4e4a-8d05-5df25cd29fd4) |Multi-Factor Authentication (MFA) should be enabled for all subscription accounts with read privileges to prevent a breach of accounts or resources. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableMFAForAccountsWithReadPermissions_Audit.json) |
|[Accounts with write permissions on Azure resources should be MFA enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F931e118d-50a1-4457-a5e4-78550e086c52) |Multi-Factor Authentication (MFA) should be enabled for all subscription accounts with write privileges to prevent a breach of accounts or resources. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableMFAForAccountsWithWritePermissions_Audit.json) |
+|[Azure Role-Based Access Control (RBAC) should be used on Kubernetes Services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac4a19c2-fa67-49b4-8ae5-0b2e78c49457) |To provide granular filtering on the actions that users can perform, use Azure Role-Based Access Control (RBAC) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies. |Audit, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableRBAC_KubernetesService_Audit.json) |
|[Azure subscriptions should have a log profile for Activity Log](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7796937f-307b-4598-941c-67d3a05ebfe7) |This policy ensures if a log profile is enabled for exporting activity logs. It audits if there is no log profile created to export the logs either to a storage account or to an event hub. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/Logprofile_activityLogs_Audit.json) |
|[Blocked accounts with owner permissions on Azure resources should be removed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0cfea604-3201-4e14-88fc-fae4c427a6c5) |Deprecated accounts with owner permissions should be removed from your subscription. Deprecated accounts are accounts that have been blocked from signing in. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveBlockedAccountsWithOwnerPermissions_Audit.json) |
|[Blocked accounts with read and write permissions on Azure resources should be removed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F8d7e1fde-fe26-4b5f-8108-f8e432cbc2be) |Deprecated accounts should be removed from your subscriptions. Deprecated accounts are accounts that have been blocked from signing in. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveBlockedAccountsWithReadWritePermissions_Audit.json) |
initiative definition.
|[Guest accounts with read permissions on Azure resources should be removed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe9ac8f8e-ce22-4355-8f04-99b911d6be52) |External accounts with read privileges should be removed from your subscription in order to prevent unmonitored access. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveGuestAccountsWithReadPermissions_Audit.json) |
|[Guest accounts with write permissions on Azure resources should be removed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F94e1c2ac-cbbe-4cac-a2b5-389c812dee87) |External accounts with write privileges should be removed from your subscription in order to prevent unmonitored access. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveGuestAccountsWithWritePermissions_Audit.json) |
|[Management ports of virtual machines should be protected with just-in-time network access control](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb0f33259-77d7-4c9e-aac6-3aabcfae693c) |Possible network Just In Time (JIT) access will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_JITNetworkAccess_Audit.json) |
-|[Role-Based Access Control (RBAC) should be used on Kubernetes Services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac4a19c2-fa67-49b4-8ae5-0b2e78c49457) |To provide granular filtering on the actions that users can perform, use Role-Based Access Control (RBAC) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies. |Audit, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableRBAC_KubernetesService_Audit.json) |
|[Subscriptions should have a contact email address for security issues](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4f4f78b8-e367-4b10-a341-d9a4ad5cf1c7) |To ensure the relevant people in your organization are notified when there is a potential security breach in one of your subscriptions, set a security contact to receive email notifications from Security Center. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_Security_contact_email.json) |
|[There should be more than one owner assigned to your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F09024ccc-0c5f-475e-9457-b7c0d9ed487b) |It is recommended to designate more than one subscription owner in order to have administrator access redundancy. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_DesignateMoreThanOneOwner_Audit.json) |
governance Resource Graph Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/resource-graph-samples.md
Title: Azure Resource Graph sample queries for Azure Policy description: Sample Azure Resource Graph queries for Azure Policy showing use of resource types and tables to access Azure Policy related resources and properties. Previously updated : 07/07/2022 Last updated : 08/31/2023
for Azure Policy. For a complete list of Azure Resource Graph samples, see
[!INCLUDE [azure-resource-graph-samples-cat-policy](../../../../includes/resource-graph/samples/bycat/azure-policy.md)]
+## Azure Policy exemptions
+
+
## Azure Policy Guest Configuration

[!INCLUDE [azure-resource-graph-samples-cat-policy-gc](../../../../includes/resource-graph/samples/bycat/azure-policy-guest-configuration.md)]
governance Rmit Malaysia https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/rmit-malaysia.md
Title: Regulatory Compliance details for RMIT Malaysia description: Details of the RMIT Malaysia Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/03/2023 Last updated : 08/25/2023
initiative definition.
|[An Azure Active Directory administrator should be provisioned for SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1f314764-cb73-4fc9-b863-8eca98ac36e9) |Audit provisioning of an Azure Active Directory administrator for your SQL server to enable Azure AD authentication. Azure AD authentication enables simplified permission management and centralized identity management of database users and other Microsoft services |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SQL_DB_AuditServerADAdmins_Audit.json) |
|[App Configuration should disable public network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3d9f5e4c-9947-4579-9539-2a7695fbc187) |Disabling public network access improves security by ensuring that the resource isn't exposed on the public internet. You can limit exposure of your resources by creating private endpoints instead. Learn more at: [https://aka.ms/appconfig/private-endpoint](https://aka.ms/appconfig/private-endpoint). |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Configuration/PrivateLink_PublicNetworkAccess_Audit.json) |
|[App Service apps should have authentication enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F95bccee9-a7f8-4bec-9ee9-62c3473701fc) |Azure App Service Authentication is a feature that can prevent anonymous HTTP requests from reaching the web app, or authenticate those that have tokens before they reach the web app. |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_Authentication_WebApp_Audit.json) |
+|[Azure Role-Based Access Control (RBAC) should be used on Kubernetes Services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac4a19c2-fa67-49b4-8ae5-0b2e78c49457) |To provide granular filtering on the actions that users can perform, use Azure Role-Based Access Control (RBAC) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies. |Audit, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableRBAC_KubernetesService_Audit.json) |
|[Blocked accounts with owner permissions on Azure resources should be removed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0cfea604-3201-4e14-88fc-fae4c427a6c5) |Deprecated accounts with owner permissions should be removed from your subscription. Deprecated accounts are accounts that have been blocked from signing in. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveBlockedAccountsWithOwnerPermissions_Audit.json) |
|[Blocked accounts with read and write permissions on Azure resources should be removed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F8d7e1fde-fe26-4b5f-8108-f8e432cbc2be) |Deprecated accounts should be removed from your subscriptions. Deprecated accounts are accounts that have been blocked from signing in. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveBlockedAccountsWithReadWritePermissions_Audit.json) |
|[Function apps should have authentication enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc75248c1-ea1d-4a9c-8fc9-29a6aabd5da8) |Azure App Service Authentication is a feature that can prevent anonymous HTTP requests from reaching the Function app, or authenticate those that have tokens before they reach the Function app. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_Authentication_functionapp_Audit.json) |
initiative definition.
|[Guest accounts with write permissions on Azure resources should be removed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F94e1c2ac-cbbe-4cac-a2b5-389c812dee87) |External accounts with write privileges should be removed from your subscription in order to prevent unmonitored access. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveGuestAccountsWithWritePermissions_Audit.json) |
|[Guest Configuration extension should be installed on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fae89ebca-1c92-4898-ac2c-9f63decb045c) |To ensure secure configurations of in-guest settings of your machine, install the Guest Configuration extension. In-guest settings that the extension monitors include the configuration of the operating system, application configuration or presence, and environment settings. Once installed, in-guest policies will be available such as 'Windows Exploit guard should be enabled'. Learn more at [https://aka.ms/gcpol](https://aka.ms/gcpol). |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_GCExtOnVm.json) |
|[Management ports of virtual machines should be protected with just-in-time network access control](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb0f33259-77d7-4c9e-aac6-3aabcfae693c) |Possible network Just In Time (JIT) access will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_JITNetworkAccess_Audit.json) |
-|[Role-Based Access Control (RBAC) should be used on Kubernetes Services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac4a19c2-fa67-49b4-8ae5-0b2e78c49457) |To provide granular filtering on the actions that users can perform, use Role-Based Access Control (RBAC) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies. |Audit, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableRBAC_KubernetesService_Audit.json) |
|[Virtual machines' Guest Configuration extension should be deployed with system-assigned managed identity](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd26f7642-7545-4e18-9b75-8c9bbdee3a9a) |The Guest Configuration extension requires a system assigned managed identity. Azure virtual machines in the scope of this policy will be non-compliant when they have the Guest Configuration extension installed but do not have a system assigned managed identity. Learn more at [https://aka.ms/gcpol](https://aka.ms/gcpol) |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_GCExtOnVmWithNoSAMI.json) |

### Access Control - 10.55
initiative definition.
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|---|---|---|---|
|[Audit usage of custom RBAC roles](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa451c1ef-c6ca-483d-87ed-f49761e3ffb5) |Audit built-in roles such as 'Owner, Contributer, Reader' instead of custom RBAC roles, which are error prone. Using custom roles is treated as an exception and requires a rigorous review and threat modeling |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/General/Subscription_AuditCustomRBACRoles_Audit.json) |
-|[Role-Based Access Control (RBAC) should be used on Kubernetes Services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac4a19c2-fa67-49b4-8ae5-0b2e78c49457) |To provide granular filtering on the actions that users can perform, use Role-Based Access Control (RBAC) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies. |Audit, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableRBAC_KubernetesService_Audit.json) |
+|[Azure Role-Based Access Control (RBAC) should be used on Kubernetes Services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac4a19c2-fa67-49b4-8ae5-0b2e78c49457) |To provide granular filtering on the actions that users can perform, use Azure Role-Based Access Control (RBAC) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies. |Audit, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableRBAC_KubernetesService_Audit.json) |
### Access Control - 10.61
initiative definition.
|---|---|---|---|
|[Accounts with owner permissions on Azure resources should be MFA enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe3e008c3-56b9-4133-8fd7-d3347377402a) |Multi-Factor Authentication (MFA) should be enabled for all subscription accounts with owner permissions to prevent a breach of accounts or resources. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableMFAForAccountsWithOwnerPermissions_Audit.json) |
|[Accounts with read permissions on Azure resources should be MFA enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F81b3ccb4-e6e8-4e4a-8d05-5df25cd29fd4) |Multi-Factor Authentication (MFA) should be enabled for all subscription accounts with read privileges to prevent a breach of accounts or resources. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableMFAForAccountsWithReadPermissions_Audit.json) |
+|[Azure Role-Based Access Control (RBAC) should be used on Kubernetes Services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac4a19c2-fa67-49b4-8ae5-0b2e78c49457) |To provide granular filtering on the actions that users can perform, use Azure Role-Based Access Control (RBAC) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies. |Audit, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableRBAC_KubernetesService_Audit.json) |
|[Blocked accounts with owner permissions on Azure resources should be removed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0cfea604-3201-4e14-88fc-fae4c427a6c5) |Deprecated accounts with owner permissions should be removed from your subscription. Deprecated accounts are accounts that have been blocked from signing in. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveBlockedAccountsWithOwnerPermissions_Audit.json) |
|[Blocked accounts with read and write permissions on Azure resources should be removed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F8d7e1fde-fe26-4b5f-8108-f8e432cbc2be) |Deprecated accounts should be removed from your subscriptions. Deprecated accounts are accounts that have been blocked from signing in. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveBlockedAccountsWithReadWritePermissions_Audit.json) |
|[Guest Configuration extension should be installed on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fae89ebca-1c92-4898-ac2c-9f63decb045c) |To ensure secure configurations of in-guest settings of your machine, install the Guest Configuration extension. In-guest settings that the extension monitors include the configuration of the operating system, application configuration or presence, and environment settings. Once installed, in-guest policies will be available such as 'Windows Exploit guard should be enabled'. Learn more at [https://aka.ms/gcpol](https://aka.ms/gcpol). |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_GCExtOnVm.json) |
|[Management ports of virtual machines should be protected with just-in-time network access control](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb0f33259-77d7-4c9e-aac6-3aabcfae693c) |Possible network Just In Time (JIT) access will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_JITNetworkAccess_Audit.json) |
-|[Role-Based Access Control (RBAC) should be used on Kubernetes Services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac4a19c2-fa67-49b4-8ae5-0b2e78c49457) |To provide granular filtering on the actions that users can perform, use Role-Based Access Control (RBAC) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies. |Audit, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableRBAC_KubernetesService_Audit.json) |
|[Virtual machines' Guest Configuration extension should be deployed with system-assigned managed identity](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd26f7642-7545-4e18-9b75-8c9bbdee3a9a) |The Guest Configuration extension requires a system assigned managed identity. Azure virtual machines in the scope of this policy will be non-compliant when they have the Guest Configuration extension installed but do not have a system assigned managed identity. Learn more at [https://aka.ms/gcpol](https://aka.ms/gcpol) |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_GCExtOnVmWithNoSAMI.json) |

### Access Control - 10.62
initiative definition.
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|---|---|---|---|
|[Audit usage of custom RBAC roles](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa451c1ef-c6ca-483d-87ed-f49761e3ffb5) |Audit built-in roles such as 'Owner, Contributer, Reader' instead of custom RBAC roles, which are error prone. Using custom roles is treated as an exception and requires a rigorous review and threat modeling |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/General/Subscription_AuditCustomRBACRoles_Audit.json) |
-|[Role-Based Access Control (RBAC) should be used on Kubernetes Services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac4a19c2-fa67-49b4-8ae5-0b2e78c49457) |To provide granular filtering on the actions that users can perform, use Role-Based Access Control (RBAC) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies. |Audit, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableRBAC_KubernetesService_Audit.json) |
+|[Azure Role-Based Access Control (RBAC) should be used on Kubernetes Services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac4a19c2-fa67-49b4-8ae5-0b2e78c49457) |To provide granular filtering on the actions that users can perform, use Azure Role-Based Access Control (RBAC) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies. |Audit, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableRBAC_KubernetesService_Audit.json) |
## Patch and End-of-Life System Management
governance Ukofficial Uknhs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/ukofficial-uknhs.md
Title: Regulatory Compliance details for UK OFFICIAL and UK NHS description: Details of the UK OFFICIAL and UK NHS Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/03/2023 Last updated : 08/25/2023
governance Guidance For Throttled Requests https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/concepts/guidance-for-throttled-requests.md
In every query response, Azure Resource Graph adds two throttling headers:
- `x-ms-user-quota-resets-after` (hh:mm:ss): The time duration until a user's quota consumption is reset.
-When a security principal has access to more than 5,000 subscriptions within the tenant or
+When a security principal has access to more than 10,000 subscriptions within the tenant or
management group [query scope](./query-language.md#query-scope), the response is limited to the
-first 5,000 subscriptions and the `x-ms-tenant-subscription-limit-hit` header is returned as `true`.
+first 10,000 subscriptions and the `x-ms-tenant-subscription-limit-hit` header is returned as `true`.
To illustrate how the headers work, let's look at a query response that has the header and values of `x-ms-user-quota-remaining: 10` and `x-ms-user-quota-resets-after: 00:00:03`.
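
As an illustrative sketch (not part of the article's guidance), the headers are visible when you call the Resource Graph REST API directly; the endpoint and `api-version` below are assumptions based on the generally available API, and the query is arbitrary:

```azurecli-interactive
# Dump only the response headers and filter for the throttling values.
token=$(az account get-access-token --query accessToken --output tsv)
curl --silent --dump-header - --output /dev/null \
  --request POST "https://management.azure.com/providers/Microsoft.ResourceGraph/resources?api-version=2021-03-01" \
  --header "Authorization: Bearer $token" \
  --header "Content-Type: application/json" \
  --data '{"query": "Resources | summarize count()"}' \
  | grep -i 'x-ms-user-quota'
```

If `x-ms-user-quota-remaining` reaches `0`, pause for the duration reported in `x-ms-user-quota-resets-after` before sending the next query.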
governance Query Language https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/concepts/query-language.md
resources.
The list of subscriptions or management groups to query can be manually defined to change the scope of the results. For example, the REST API `managementGroups` property takes the management group ID, which is different from the name of the management group. When `managementGroups` is specified,
-resources from the first 5,000 subscriptions in or under the specified management group hierarchy
+resources from the first 10,000 subscriptions in or under the specified management group hierarchy
are included. `managementGroups` can't be used at the same time as `subscriptions`. Example: Query all resources within the hierarchy of the management group named `My Management
governance Get Resource Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/how-to/get-resource-changes.md
Title: Get resource changes
+ Title: Get resource configuration changes
description: Get resource configuration changes at scale Previously updated : 06/16/2022 Last updated : 08/17/2023
Resources change through the course of daily use, reconfiguration, and even redeployment. Most change is by design, but sometimes it isn't. You can:

-- Find when changes were detected on an Azure Resource Manager property
-- View property change details
-- Query changes at scale across your subscriptions, management group, or tenant
+- Find when changes were detected on an Azure Resource Manager property.
+- View property change details.
+- Query changes at scale across your subscriptions, management group, or tenant.
This article shows how to query resource configuration changes through Resource Graph.

-## Prerequisites
-To enable Azure PowerShell to query Azure Resource Graph, [add the module](../first-query-powershell.md#add-the-resource-graph-module).
Each change resource has the following properties:
| `targetResourceId` | The resourceID of the resource on which the change occurred. |
| `targetResourceType` | The resource type of the resource on which the change occurred. |
-| `changeType` | Describes the type of change detected for the entire change record. Values are: Create, Update, and Delete. The **changes** property dictionary is only included when `changeType` is _Update_. For the delete case, the change resource will still be maintained as an extension of the deleted resource for 14 days, even if the entire Resource group has been deleted. The change resource won't block deletions or impact any existing delete behavior. |
+| `changeType` | Describes the type of change detected for the entire change record. Values are: Create, Update, and Delete. The **changes** property dictionary is only included when `changeType` is _Update_. For the delete case, the change resource is maintained as an extension of the deleted resource for 14 days, even if the entire resource group was deleted. The change resource doesn't block deletions or affect any existing delete behavior. |
| `changes` | Dictionary of the resource properties (with property name as the key) that were updated as part of the change: |
| `propertyChangeType` | This property is deprecated and can be derived as follows: an empty `previousValue` indicates Insert, an empty `newValue` indicates Remove, and when both are present, it's Update. |
| `previousValue` | The value of the resource property in the previous snapshot. Value is empty when `changeType` is _Insert_. |
-| `newValue` | The value of the resource property in the new snapshot. This property will be empty (absent) when `changeType` is _Remove_. |
-| `changeCategory` | This property was optional and has been deprecated, this field will no longer be available|
+| `newValue` | The value of the resource property in the new snapshot. This property is empty (absent) when `changeType` is _Remove_. |
+| `changeCategory` | This property was optional and has been deprecated; this field is no longer available. |
| `changeAttributes` | Array of metadata related to the change: |
| `changesCount` | The number of properties changed as part of this change record. |
-| `correlationId` | Contains the ID for tracking related events. Each deployment has a correlation ID, and all actions in a single template will share the same correlation ID. |
+| `correlationId` | Contains the ID for tracking related events. Each deployment has a correlation ID, and all actions in a single template share the same correlation ID. |
| `timestamp` | The datetime of when the change was detected. |
| `previousResourceSnapshotId` | Contains the ID of the resource snapshot that was used as the previous state of the resource. |
| `newResourceSnapshotId` | Contains the ID of the resource snapshot that was used as the new state of the resource. |
-| `isTruncated` | When the number of property changes reaches beyond a certain number they're truncated and this property becomes present. |
+| `isTruncated` | When the number of property changes reaches beyond a certain number, they're truncated and this property becomes present. |
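
To make these properties concrete, the following is a hedged sketch of a Resource Graph query that projects several of them (it assumes the Azure CLI `graph` extension used elsewhere in these samples; the property paths mirror the table above):

```azurecli-interactive
az graph query -q "resourcechanges | extend changeTime = todatetime(properties.changeAttributes.timestamp), changeType = tostring(properties.changeType), targetResourceId = tostring(properties.targetResourceId) | order by changeTime desc | limit 5 | project changeTime, changeType, targetResourceId"
```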
## Get change events using Resource Graph
resourcechanges
### Best practices

-- Query for change events during a specific window of time and evaluate the change details. This query works best during incident management to understand _potentially_ related changes.
+- Query for change events during a specific window of time and evaluate the change details. This query works best during incident management to understand _potentially_ related changes.
- Keep a Configuration Management Database (CMDB) up to date. Instead of refreshing all resources and their full property sets on a scheduled frequency, only get what changed.
- Understand what other properties may have been changed when a resource changed compliance state. Evaluation of these extra properties can provide insights into other properties that may need to be managed via an Azure Policy definition.
-- The order of query commands is important. In this example, the `order by` must come before the `limit` command. This command order first orders the query results by the change time and then limits them to ensure that you get the five most recent results.
-- Resource configuration changes supports changes to resource types from the [Resources table](../reference/supported-tables-resources.md#resources), `resourcecontainers` and `healthresources` table in Resource Graph. Changes are queryable for 14 days. For longer retention, you can [integrate your Resource Graph query with Azure Logic Apps](../tutorials/logic-app-calling-arg.md) and export query results to any of the Azure data stores (such as [Log Analytics](../../../azure-monitor/logs/log-analytics-overview.md) for your desired retention.
+- The order of query commands is important. In this example, the `order by` must come before the `limit` command. This command orders the query results by the change time and then limits them to ensure that you get the five most recent results.
+- Resource configuration changes support changes to resource types from the Resource Graph tables [resources](../reference/supported-tables-resources.md#resources), [resourcecontainers](../reference/supported-tables-resources.md#resourcecontainers), and [healthresources](../reference/supported-tables-resources.md#healthresources). Changes are queryable for 14 days. For longer retention, you can [integrate your Resource Graph query with Azure Logic Apps](../tutorials/logic-app-calling-arg.md) and export query results to any of the Azure data stores like [Log Analytics](../../../azure-monitor/logs/log-analytics-overview.md) for your desired retention.
## Next steps
governance Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/overview.md
Title: Overview of Azure Resource Graph description: Understand how the Azure Resource Graph service enables complex querying of resources at scale across subscriptions and tenants. Previously updated : 06/15/2022 Last updated : 08/15/2023
provide the following abilities:
- Query resources with complex filtering, grouping, and sorting by resource properties. - Explore resources iteratively based on governance requirements. - Assess the impact of applying policies in a vast cloud environment.-- [Query changes made to resource properties](./how-to/get-resource-changes.md)
- (preview).
+- [Query changes made to resource properties](./how-to/get-resource-changes.md).
-In this documentation, you'll go over each feature in detail.
+In this documentation, you review each feature in detail.
> [!NOTE]
> Azure Resource Graph powers Azure portal's search bar, the new browse **All resources** experience,
With Azure Resource Graph, you can:
- Access the properties returned by resource providers without needing to make individual calls to each resource provider.
-- View the last seven days of resource configuration changes to see what properties changed and
- when. (preview)
+- View the last 14 days of resource configuration changes to see which properties changed and
+ when.
> [!NOTE]
> As a _preview_ feature, some `type` objects have additional non-Resource Manager properties
First, for details on operations and functions that can be used with Azure Resou
## Permissions in Azure Resource Graph
-To use Resource Graph, you must have appropriate rights in [Azure role-based access
-control (Azure RBAC)](../../role-based-access-control/overview.md) with at least read access to the
-resources you want to query. Without at least `read` permissions to the Azure object or object
-group, results won't be returned.
+To use Resource Graph, you must have appropriate rights in [Azure role-based access control (Azure
+RBAC)](../../role-based-access-control/overview.md) with at least `read` access to the resources you
+want to query. No results are returned if you don't have at least `read` permissions to the Azure
+object or object group.
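
For example, granting the built-in Reader role at subscription scope is enough for queries to return that subscription's resources. The following is a sketch; the principal ID and subscription ID are placeholders:

```azurecli-interactive
# Grant read access so the principal's Resource Graph queries return results
# for this subscription.
az role assignment create --assignee "<principalObjectId>" --role "Reader" --scope "/subscriptions/<subscriptionId>"
```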
> [!NOTE]
> Resource Graph uses the subscriptions available to a principal during login. To see resources of a
> new subscription added during an active session, the principal must refresh the context. This
> action happens automatically when logging out and back in.
-Azure CLI and Azure PowerShell use subscriptions that the user has access to. When using REST API
-directly, the subscription list is provided by the user. If the user has access to any of the
+Azure CLI and Azure PowerShell use subscriptions that the user has access to. When you use a REST
+API, the subscription list is provided by the user. If the user has access to any of the
subscriptions in the list, the query results are returned for the subscriptions the user has access
-to. This behavior is the same as when calling
-[Resource Groups - List](/rest/api/resources/resourcegroups/list) \- you get resource groups you've
-access to without any indication that the result may be partial. If there are no subscriptions in
-the subscription list that the user has appropriate rights to, the response is a _403_ (Forbidden).
+to. This behavior is the same as when calling [Resource Groups - List](/rest/api/resources/resourcegroups/list)
+because you get resource groups that you can access, without any indication that the result may be
+partial. If there are no subscriptions in the subscription list that the user has appropriate rights
+to, the response is a _403_ (Forbidden).
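
As an illustrative sketch (not from the article), the REST request supplies the subscription list in the body, and only subscriptions the caller can read contribute results. The `api-version` shown is an assumption based on the generally available API, and the subscription IDs are placeholders:

```azurecli-interactive
az rest --method post \
  --url "https://management.azure.com/providers/Microsoft.ResourceGraph/resources?api-version=2021-03-01" \
  --body '{"query": "Resources | summarize count()", "subscriptions": ["<subscriptionId1>", "<subscriptionId2>"]}'
```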
> [!NOTE]
> In the **preview** REST API version `2020-04-01-preview`, the subscription list may be omitted.
governance Supported Tables Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/reference/supported-tables-resources.md
For sample queries for this table, see [Resource Graph sample queries for resour
- microsoft.scvmm/virtualnetworks
- microsoft.ScVmm/vmmServers (SCVMM management servers)
- microsoft.Search/searchServices (Search services)
+- microsoft.security/apicollections
+- microsoft.security/apicollections/apiendpoints
- microsoft.security/assignments
- microsoft.security/automations
- microsoft.security/customassessmentautomations
governance Samples By Category https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/samples/samples-by-category.md
Title: List of sample Azure Resource Graph queries by category description: List sample queries for Azure Resource-Graph. Categories include Tags, Azure Advisor, Key Vault, Kubernetes, Guest Configuration, and more. Previously updated : 07/07/2022 Last updated : 09/01/2023
Otherwise, use <kbd>Ctrl</kbd>-<kbd>F</kbd> to use your browser's search feature
[!INCLUDE [azure-resource-graph-samples-cat-azure-policy](../../../../includes/resource-graph/samples/bycat/azure-policy.md)]
+
## Azure Policy guest configuration

[!INCLUDE [azure-resource-graph-samples-cat-azure-policy-guest-configuration](../../../../includes/resource-graph/samples/bycat/azure-policy-guest-configuration.md)]

## Azure RBAC
+
[!INCLUDE [authorization-resources-role-assignments-key-properties](../../includes/resource-graph/query/authorization-resources-role-assignments-key-properties.md)]

[!INCLUDE [authorization-resources-role-definitions-key-properties](../../includes/resource-graph/query/authorization-resources-role-definitions-key-properties.md)]
Otherwise, use <kbd>Ctrl</kbd>-<kbd>F</kbd> to use your browser's search feature
[!INCLUDE [authorization-resources-role-definitions-permissions-list](../../includes/resource-graph/query/authorization-resources-role-definitions-permissions-list.md)]

## Azure Service Health
governance Samples By Table https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/samples/samples-by-table.md
Title: List of sample Azure Resource Graph queries by table description: List sample queries for Azure Resource-Graph. Tables include Resources, ResourceContainers, PolicyResources, and more. Previously updated : 02/14/2023 Last updated : 09/01/2023
details, see [Resource Graph tables](../concepts/query-language.md#resource-grap
## AuthorizationResources
+
[!INCLUDE [authorization-resources-role-assignments-key-properties](../../includes/resource-graph/query/authorization-resources-role-assignments-key-properties.md)]

[!INCLUDE [authorization-resources-role-definitions-key-properties](../../includes/resource-graph/query/authorization-resources-role-definitions-key-properties.md)]
details, see [Resource Graph tables](../concepts/query-language.md#resource-grap
[!INCLUDE [authorization-resources-role-definitions-permissions-list](../../includes/resource-graph/query/authorization-resources-role-definitions-permissions-list.md)]

## ExtendedLocationResources
details, see [Resource Graph tables](../concepts/query-language.md#resource-grap
[!INCLUDE [azure-resource-graph-samples-table-policyresources](../../../../includes/resource-graph/samples/bytable/policyresources.md)]
+
## ResourceContainers

[!INCLUDE [azure-resource-graph-samples-table-resourcecontainers](../../../../includes/resource-graph/samples/bytable/resourcecontainers.md)]
governance Starter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/samples/starter.md
Title: Starter query samples
description: Use Azure Resource Graph to run some starter queries, including counting resources, ordering resources, or by a specific tag. Previously updated : 07/19/2022 Last updated : 08/31/2023
The first step to understanding queries with Azure Resource Graph is a basic und
[KQL tutorial](/azure/kusto/query/tutorial) to understand how to compose requests for the resources you're looking for.
-We'll walk through the following starter queries:
+This article uses the following starter queries:
- [Count Azure resources](#count-resources)
- [Count Key Vault resources](#count-keyvaults)
Resources
# [Azure CLI](#tab/azure-cli)
-```azurecli
+By default, Azure CLI queries all accessible subscriptions, but you can specify the `--subscriptions` parameter to query specific subscriptions.
+
+```azurecli-interactive
az graph query -q "Resources | summarize count()" ```
+This example uses a variable for the subscription ID.
+
+```azurecli-interactive
+subid=$(az account show --query id --output tsv)
+az graph query -q "Resources | summarize count()" --subscriptions $subid
+```
+
+You can also query by management group or tenant scope. Replace `<managementGroupId>` and `<tenantId>` with your values.
+
+```azurecli-interactive
+az graph query -q "Resources | summarize count()" --management-groups '<managementGroupId>'
+```
+
+```azurecli-interactive
+az graph query -q "Resources | summarize count()" --management-groups '<tenantId>'
+```
+
+You can also use a variable for the tenant ID.
+
+```azurecli-interactive
+tenantid=$(az account show --query tenantId --output tsv)
+az graph query -q "Resources | summarize count()" --management-groups $tenantid
+```
+
# [Azure PowerShell](#tab/azure-powershell)
+By default, Azure PowerShell gets results for all subscriptions in your tenant.
```azurepowershell-interactive
Search-AzGraph -Query "Resources | summarize count()"
```
+This example uses a variable to query a specific subscription ID.
+
+```azurepowershell-interactive
+$subid = (Get-AzContext).Subscription.Id
+Search-AzGraph -Query "authorizationresources | summarize count()" -Subscription $subid
+```
+
+You can query by management group or tenant scope. Replace `<managementGroupId>` with your value. The `UseTenantScope` parameter doesn't require a value.
+
+```azurepowershell-interactive
+Search-AzGraph -Query "Resources | summarize count()" -ManagementGroup '<managementGroupId>'
+```
+
+```azurepowershell-interactive
+Search-AzGraph -Query "Resources | summarize count()" -UseTenantScope
+```
+
# [Portal](#tab/azure-portal)

:::image type="icon" source="../media/resource-graph-small.png"::: Try this query in Azure Resource Graph Explorer:
Resources
# [Azure CLI](#tab/azure-cli)
-```azurecli
+```azurecli-interactive
az graph query -q "Resources | where type =~ 'microsoft.keyvault/vaults' | count" ```
Resources
# [Azure CLI](#tab/azure-cli)
-```azurecli
+```azurecli-interactive
az graph query -q "Resources | project name, type, location | order by name asc" ```
Resources
# [Azure CLI](#tab/azure-cli)
-```azurecli
+```azurecli-interactive
az graph query -q "Resources | project name, location, type| where type =~ 'Microsoft.Compute/virtualMachines' | order by name desc" ```
Search-AzGraph -Query "Resources | project name, location, type| where type =~ '
## <a name="show-sorted"></a>Show first five virtual machines by name and their OS type
-This query will use `top` to only retrieve five matching records that are ordered by name. The type
+This query uses `top` to only retrieve five matching records that are ordered by name. The type
of the Azure resource is `Microsoft.Compute/virtualMachines`. `project` tells Azure Resource Graph which properties to include.
Resources
# [Azure CLI](#tab/azure-cli)
-```azurecli
+```azurecli-interactive
az graph query -q "Resources | where type =~ 'Microsoft.Compute/virtualMachines' | project name, properties.storageProfile.osDisk.osType | top 5 by name desc" ```
Resources
# [Azure CLI](#tab/azure-cli)
-```azurecli
+```azurecli-interactive
az graph query -q "Resources | where type =~ 'Microsoft.Compute/virtualMachines' | summarize count() by tostring(properties.storageProfile.osDisk.osType)" ```
Resources
# [Azure CLI](#tab/azure-cli)
-```azurecli
+```azurecli-interactive
az graph query -q "Resources | where type =~ 'Microsoft.Compute/virtualMachines' | extend os = properties.storageProfile.osDisk.osType | summarize count() by tostring(os)" ```
Search-AzGraph -Query "Resources | where type =~ 'Microsoft.Compute/virtualMachi
## <a name="show-storage"></a>Show resources that contain storage
-Instead of explicitly defining the type to match, this example query will find any Azure resource
+Instead of explicitly defining the type to match, this example query finds any Azure resource
that `contains` the word **storage**.

```kusto
Resources
# [Azure CLI](#tab/azure-cli)
-```azurecli
+```azurecli-interactive
az graph query -q "Resources | where type contains 'storage' | distinct type" ```
Resources
# [Azure CLI](#tab/azure-cli)
-```azurecli
+```azurecli-interactive
az graph query -q "Resources | where type == 'microsoft.network/virtualnetworks' | extend subnets = properties.subnets | mv-expand subnets | project name, subnets.name, subnets.properties.addressPrefix, location, resourceGroup, subscriptionId" ```
Resources
# [Azure CLI](#tab/azure-cli)
-```azurecli
+```azurecli-interactive
az graph query -q "Resources | where type contains 'publicIPAddresses' and isnotempty(properties.ipAddress) | project properties.ipAddress | limit 100" ```
Resources
# [Azure CLI](#tab/azure-cli)
-```azurecli
+```azurecli-interactive
az graph query -q "Resources | where type contains 'publicIPAddresses' and isnotempty(properties.ipAddress) | summarize count () by subscriptionId" ```
Resources
# [Azure CLI](#tab/azure-cli)
-```azurecli
+```azurecli-interactive
az graph query -q "Resources | where tags.environment=~'internal' | project name" ```
Resources
# [Azure CLI](#tab/azure-cli)
-```azurecli
+```azurecli-interactive
az graph query -q "Resources | where tags.environment=~'internal' | project name, tags" ```
Resources
# [Azure CLI](#tab/azure-cli)
-```azurecli
+```azurecli-interactive
az graph query -q "Resources | where type =~ 'Microsoft.Storage/storageAccounts' | where tags['tag with a space']=='Custom value'" ```
ResourceContainers
# [Azure CLI](#tab/azure-cli)
-```azurecli
+```azurecli-interactive
az graph query -q "ResourceContainers | where isnotempty(tags) | project tags | mvexpand tags | extend tagKey = tostring(bag_keys(tags)[0]) | extend tagValue = tostring(tags[tagKey]) | union (resources | where notempty(tags) | project tags | mvexpand tags | extend tagKey = tostring(bag_keys(tags)[0]) | extend tagValue = tostring(tags[tagKey]) ) | distinct tagKey, tagValue | where tagKey !startswith "hidden-"" ```
Resources
# [Azure CLI](#tab/azure-cli)
-```azurecli
+```azurecli-interactive
az graph query -q "Resources | where type =~ 'microsoft.network/networksecuritygroups' and isnull(properties.networkInterfaces) and isnull(properties.subnets) | project name, resourceGroup | sort by name asc" ```
hdinsight Apache Domain Joined Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/domain-joined/apache-domain-joined-architecture.md
Title: Azure HDInsight architecture with Enterprise Security Package
description: Learn how to plan Azure HDInsight security with Enterprise Security Package. -+ Last updated 05/11/2023
hdinsight Hdinsight Security Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/domain-joined/hdinsight-security-overview.md
HDInsight supports data encryption at rest with both platform managed and [custo
### Compliance
-Azure compliance offerings are based on various types of assurances, including formal certifications. Also, attestations, validations, and authorizations. Assessments produced by independent third-party auditing firms. Contractual amendments, self-assessments, and customer guidance documents produced by Microsoft. For HDInsight compliance information, see the [Microsoft Trust Center](https://www.microsoft.com/trust-center) and the [Overview of Microsoft Azure compliance](https://gallery.technet.microsoft.com/Overview-of-Azure-c1be3942).
+Azure compliance offerings are based on various types of assurances, including formal certifications, attestations, validations, and authorizations; assessments produced by independent third-party auditing firms; and contractual amendments, self-assessments, and customer guidance documents produced by Microsoft. For HDInsight compliance information, see the [Microsoft Trust Center](https://www.microsoft.com/trust-center).
## Shared responsibility model
hdinsight Apache Ambari Troubleshoot Metricservice Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/apache-ambari-troubleshoot-metricservice-issues.md
Previously updated : 07/21/2022 Last updated : 08/21/2023 # Apache Ambari Metrics Collector issues in Azure HDInsight
java.lang.OutOfMemoryError: Java heap space
2021-04-13 05:57:37,546 INFO [timeline] timeline.HadoopTimelineMetricsSink: No live collector to send metrics to. Metrics to be sent will be discarded. This message will be skipped for the next 20 times. ```
-2. Get the Apache Ambari Metrics Collector pid and check GC performance
+2. Get the Apache Ambari Metrics Collector `pid` and check GC performance
``` ps -fu ams | grep 'org.apache.ambari.metrics.AMSApplicationServer'
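If the collector is throwing `OutOfMemoryError`, a quick GC snapshot can confirm heap pressure. A minimal sketch, assuming the JDK's `jstat` tool is on the PATH and `<pid>` is the value returned by the `ps` command above:

```bash
# Print GC utilization for the AMS collector every 2 seconds, 10 samples;
# run as the same user (ams) that owns the process.
sudo -u ams jstat -gcutil <pid> 2000 10
```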
hdinsight Apache Hadoop Deep Dive Advanced Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/apache-hadoop-deep-dive-advanced-analytics.md
description: Learn how advanced analytics uses algorithms to process big data in
Previously updated : 07/19/2022 Last updated : 08/22/2023 # Deep dive - advanced analytics
hdinsight Apache Hadoop Use Sqoop Mac Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/apache-hadoop-use-sqoop-mac-linux.md
description: Learn how to use Apache Sqoop to import and export between Apache H
Previously updated : 07/18/2022 Last updated : 08/21/2023 # Use Apache Sqoop to import and export data between Apache Hadoop on HDInsight and Azure SQL Database
hdinsight Connect Install Beeline https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/connect-install-beeline.md
or for private endpoint:
beeline -u 'jdbc:hive2://clustername-int.azurehdinsight.net:443/;ssl=true;transportMode=http;httpPath=/hive2' -n admin -p 'password' ```
-Private endpoints point to a basic load balancer, which can only be accessed from the VNETs peered in the same region. See [constraints on global VNet peering and load balancers](../../virtual-network/virtual-networks-faq.md#what-are-the-constraints-related-to-global-vnet-peering-and-load-balancers) for more info. You can use the `curl` command with `-v` option to troubleshoot any connectivity problems with public or private endpoints before using beeline.
+Private endpoints point to a basic load balancer, which can only be accessed from the VNets peered in the same region. See [constraints on global VNet peering and load balancers](../../virtual-network/virtual-networks-faq.md#what-are-the-constraints-related-to-global-virtual-network-peering-and-load-balancers) for more information. You can use the `curl` command with the `-v` option to troubleshoot any connectivity problems with public or private endpoints before using Beeline.
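For example, a hypothetical connectivity check against the Hive endpoint (the cluster name and password are placeholders; the exact HTTP response depends on the endpoint, but any response proves reachability):

```bash
# -v prints the TLS handshake and HTTP exchange, which shows where TLS or
# routing fails before you attempt a Beeline connection.
curl -v -u admin:'password' 'https://clustername.azurehdinsight.net/hive2'
```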
### Use Beeline with Apache Spark
or for private endpoint:
beeline -u 'jdbc:hive2://clustername-int.azurehdinsight.net:443/;ssl=true;transportMode=http;httpPath=/sparkhive2' -n admin -p 'password' ```
-Private endpoints point to a basic load balancer, which can only be accessed from the VNETs peered in the same region. See [constraints on global VNet peering and load balancers](../../virtual-network/virtual-networks-faq.md#what-are-the-constraints-related-to-global-vnet-peering-and-load-balancers) for more info. You can use the `curl` command with `-v` option to troubleshoot any connectivity problems with public or private endpoints before using beeline.
+Private endpoints point to a basic load balancer, which can only be accessed from the VNets peered in the same region. See [constraints on global VNet peering and load balancers](../../virtual-network/virtual-networks-faq.md#what-are-the-constraints-related-to-global-virtual-network-peering-and-load-balancers) for more information. You can use the `curl` command with the `-v` option to troubleshoot any connectivity problems with public or private endpoints before using Beeline.
#### From cluster head node or inside Azure Virtual Network with Apache Spark
hdinsight Apache Hbase Accelerated Writes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hbase/apache-hbase-accelerated-writes.md
Title: Azure HDInsight Accelerated Writes for Apache HBase
description: Gives an overview of the Azure HDInsight Accelerated Writes feature, which uses premium managed disks to improve performance of the Apache HBase Write Ahead Log. Previously updated : 07/18/2022 Last updated : 08/21/2023 # Azure HDInsight Accelerated Writes for Apache HBase
This article provides background on the **Accelerated Writes** feature for Apach
## Overview of HBase architecture
-In HBase, a **row** consists of one or more **columns** and is identified by a **row key**. Multiple rows make up a **table**. Columns contain **cells**, which are timestamped versions of the value in that column. Columns are grouped into **column families**, and all columns in a column-family are stored together in storage files called **HFiles**.
+In HBase, a **row** consists of one or more **columns** and is identified by a **row key**. Multiple rows make up a **table**. Columns contain **cells**, which are timestamped versions of the value in that column. Columns are grouped into **column families**, and all columns in a column-family are stored together in storage files called `HFiles`.
**Regions** in HBase are used to balance the data processing load. HBase first stores the rows of a table in a single region. The rows are spread across multiple regions as the amount of data in the table increases. **Region Servers** can handle requests for multiple regions. ## Write Ahead Log for Apache HBase
-HBase first writes data updates to a type of commit log called a Write Ahead Log (WAL). After the update is stored in the WAL, it's written to the in-memory **MemStore**. When the data in memory reaches its maximum capacity, it's written to disk as an **HFile**.
+HBase first writes data updates to a type of commit log called a Write Ahead Log (WAL). After the update is stored in the WAL, it's written to the in-memory **MemStore**. When the data in memory reaches its maximum capacity, it's written to disk as an `HFile`.
-If a **RegionServer** crashes or becomes unavailable before the MemStore is flushed, the Write Ahead Log can be used to replay updates. Without the WAL, if a **RegionServer** crashes before flushing updates to an **HFile**, all of those updates are lost.
+If a **RegionServer** crashes or becomes unavailable before the MemStore is flushed, the Write Ahead Log can be used to replay updates. Without the WAL, if a **RegionServer** crashes before flushing updates to an `HFile`, all of those updates are lost.
## Accelerated Writes feature in Azure HDInsight for Apache HBase
Follow similar steps when scaling down your cluster: flush your tables and disab
Following these steps will ensure a successful scale-down and avoid the possibility of a namenode going into safe mode due to under-replicated or temporary files.
-If your namenode does go into safemode after a scale down, use hdfs commands to re-replicate the under-replicated blocks and get hdfs out of safe mode. This re-replication will allow you to restart HBase successfully.
+If your namenode does go into safe mode after a scale-down, use HDFS commands to re-replicate the under-replicated blocks and get HDFS out of safe mode. This re-replication allows you to restart HBase successfully.
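A minimal sketch of that recovery sequence, assuming you run it as a user with HDFS administrator rights:

```bash
# Find files with under-replicated blocks, check the current safe mode state,
# then leave safe mode once replication has caught up.
hdfs fsck / | grep -i 'under replicated'
hdfs dfsadmin -safemode get
hdfs dfsadmin -safemode leave
```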
## Next steps
hdinsight Apache Hbase Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hbase/apache-hbase-replication.md
The following list shows you some general usage cases and their parameter settin
> 1. Copy the head node, worker node, and ZooKeeper node host and IP mappings from the /etc/hosts file of the destination (sink) cluster. > 1. Add the copied entries to the source cluster's /etc/hosts file. These entries should be added to head nodes, worker nodes, and ZooKeeper nodes.
-**Step: 1**
+**Step 1:**
Create keytab file for the user using `ktutil`. `$ ktutil` 1. `addent -password -p admin@ABC.EXAMPLE.COM -k 1 -e RC4-HMAC`
Create keytab file for the user using `ktutil`.
> [!NOTE] > Make sure the keytab file is stored in `/etc/security.keytabs/` folder in the `<username>.keytab` format.
-**Step 2**
+**Step 2:**
Run script action with `-ku` option 1. Provide `-ku <username>` on ESP clusters.
hdinsight Hdinsight 5X Component Versioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-5x-component-versioning.md
Title: Open-source components and versions - Azure HDInsight 5.x
description: Learn about the open-source components and versions in Azure HDInsight 5.x. Previously updated : 07/27/2023 Last updated : 08/29/2023 # HDInsight 5.x component versions
Azure HDInsight supports the following Apache Spark versions.
|--|--|--|--|--|--| |2.4|July 8, 2019|End of life announced (EOLA)| February 10, 2023| August 10, 2023|February 10, 2024| |3.1|March 11, 2022|General availability |-|-|-|
-|3.3|To be announced for preview|-|-|-|-|
+|3.3|Available for preview|-|-|-|-|
### Guide for migrating from Apache Spark 2.4 to Spark 3.x
hdinsight Hdinsight Hadoop Create Linux Clusters Arm Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-hadoop-create-linux-clusters-arm-templates.md
description: Learn how to create clusters for HDInsight by using Resource Manage
Previously updated : 07/31/2023 Last updated : 08/22/2023 # Create Apache Hadoop clusters in HDInsight by using Resource Manager templates
hdinsight Hdinsight Hadoop Use Data Lake Storage Gen2 Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-hadoop-use-data-lake-storage-gen2-azure-cli.md
Previously updated : 07/21/2022 Last updated : 08/21/2023 # Create a cluster with Data Lake Storage Gen2 using Azure CLI
hdinsight Hdinsight Hadoop Use Data Lake Storage Gen2 Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-hadoop-use-data-lake-storage-gen2-portal.md
Previously updated : 07/21/2022 Last updated : 08/22/2023 # Create a cluster with Data Lake Storage Gen2 using the Azure portal
hdinsight Hdinsight Log Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-log-management.md
Last updated 12/07/2022
# Manage logs for an HDInsight cluster
-An HDInsight cluster produces variois log files. For example, Apache Hadoop and related services, such as Apache Spark, produce detailed job execution logs. Log file management is part of maintaining a healthy HDInsight cluster. There can also be regulatory requirements for log archiving. Due to the number and size of log files, optimizing log storage and archiving helps with service cost management.
+An HDInsight cluster produces various log files. For example, Apache Hadoop and related services, such as Apache Spark, produce detailed job execution logs. Log file management is part of maintaining a healthy HDInsight cluster. There can also be regulatory requirements for log archiving. Due to the number and size of log files, optimizing log storage and archiving helps with service cost management.
Managing HDInsight cluster logs includes retaining information about all aspects of the cluster environment. This information includes all associated Azure Service logs, cluster configuration, job execution information, any error states, and other data as needed.
For certain log files, you can use a lower-priced log file archiving approach. F
:::image type="content" source="./media/hdinsight-log-management/hdi-export-log-files.png" alt-text="Azure portal export activity log preview":::
-Alternatively, you can script log archiving with PowerShell. For an example PowerShell script, see [Archive Azure Automation logs to Azure Blob Storage](https://gallery.technet.microsoft.com/scriptcenter/Archive-Azure-Automation-898a1aa8).
+Alternatively, you can script log archiving with PowerShell.
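As an illustration, a comparable sketch with the Azure CLI instead of PowerShell (the storage account, containers, and pattern are hypothetical placeholders):

```azurecli-interactive
# Copy a month of log blobs from the cluster's container to an archive
# container; pair this with a lifecycle policy on the destination.
az storage blob copy start-batch \
    --account-name mystorageaccount \
    --source-container cluster-logs \
    --destination-container archived-logs \
    --pattern "app-logs/2023-08/*"
```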
### Accessing Azure Storage metrics
hdinsight Hdinsight Plan Virtual Network Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-plan-virtual-network-deployment.md
Using an Azure Virtual Network enables the following scenarios:
* Directly accessing Apache Hadoop services that aren't available publicly over the internet. For example, Apache Kafka APIs or the Apache HBase Java API. > [!IMPORTANT]
-> Creating an HDInsight cluster in a VNET will create several networking resources, such as NICs and load balancers. Do **not** delete these networking resources, as they are needed for your cluster to function correctly with the VNET.
->
-> After Feb 28, 2019, the networking resources (such as NICs, LBs, etc) for NEW HDInsight clusters created in a VNET will be provisioned in the same HDInsight cluster resource group. Previously, these resources were provisioned in the VNET resource group. There is no change to the current running clusters and those clusters created without a VNET.
+> Creating an HDInsight cluster in a VNET will create several networking resources, such as NICs and load balancers. Do **not** delete or modify these networking resources, as they are needed for your cluster to function correctly with the VNET.
## Planning
To connect to Apache Ambari and other web pages through the virtual network, use
## Load balancing
-When you create an HDInsight cluster, a load balancer is created as well. The type of this load balancer is at the [basic SKU level](../load-balancer/skus.md), which has certain constraints. One of these constraints is that if you have two virtual networks in different regions, you cannot connect to basic load balancers. See [virtual networks FAQ: constraints on global vnet peering](../virtual-network/virtual-networks-faq.md#what-are-the-constraints-related-to-global-vnet-peering-and-load-balancers), for more information.
+When you create an HDInsight cluster, a load balancer is created as well. The type of this load balancer is at the [basic SKU level](../load-balancer/skus.md), which has certain constraints. One of these constraints is that if you have two virtual networks in different regions, you cannot connect to basic load balancers. See [virtual networks FAQ: constraints on global vnet peering](../virtual-network/virtual-networks-faq.md#what-are-the-constraints-related-to-global-virtual-network-peering-and-load-balancers) for more information.
+
+Another constraint is that the HDInsight load balancers should not be deleted or modified. **Any changes to the load balancer rules will get overwritten during certain maintenance events such as certificate renewals.** If the load balancers are modified in a way that affects cluster functionality, you may need to recreate the cluster.
## Next steps
hdinsight Hdinsight Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-release-notes.md
For workload specific versions, see
If you have any more questions, contact [Azure Support](https://ms.portal.azure.com/#view/Microsoft_Azure_Support/HelpAndSupportBlade/~/overview).
-You can always ask us about HDInsight on [Azure HDInsight - Microsoft Q&A](https://learn.microsoft.com/answers/tags/168/azure-hdinsight)
+You can always ask us about HDInsight on [Azure HDInsight - Microsoft Q&A](/answers/tags/168/azure-hdinsight)
You're welcome to add proposals, ideas, and other topics and vote for them at [HDInsight Community (azure.com)](https://feedback.azure.com/d365community/search/?q=HDInsight), and follow us for more updates on [Twitter](https://twitter.com/AzureHDInsight)
hdinsight Hdinsight Upgrade Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-upgrade-cluster.md
As mentioned above, Microsoft recommends that HDInsight clusters be regularly mi
* **Third-party software**. Customers have the ability to install third-party software on their HDInsight clusters; however, we'll recommend recreating the cluster if it breaks the existing functionality. * **Multiple workloads on the same cluster**. In HDInsight 4.0, the Hive Warehouse Connector needs separate clusters for Spark and Interactive Query workloads. [Follow these steps to set up both clusters in Azure HDInsight](interactive-query/apache-hive-warehouse-connector.md). Similarly, integrating [Spark with HBASE](hdinsight-using-spark-query-hbase.md) requires two different clusters. * **Custom Ambari DB password changed**. The Ambari DB password is set during cluster creation and there's no current mechanism to update it. If a customer deploys the cluster with a [custom Ambari DB](hdinsight-custom-ambari-db.md), they have the ability to change the DB password on the SQL DB; however, there's no way to update this password for a running HDInsight cluster.-
+ * **Modifying HDInsight Load Balancers**. The HDInsight load balancers that are automatically deployed for Ambari and SSH access **should not** be modified or deleted. If you modify the HDInsight load balancer(s) in a way that breaks the cluster functionality, you will be advised to redeploy the cluster.
+
## Next steps * [Learn how to create Linux-based HDInsight clusters](hdinsight-hadoop-provision-linux-clusters.md)
hdinsight Apache Hive Migrate Workloads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/interactive-query/apache-hive-migrate-workloads.md
The following table compares Hive table types and ACID operations before an upgr
### HDInsight 3.x and HDInsight 4.x Table type comparison
-|**HDInsight 3.x**| | | |**HDInsight 4.x**| |
+|**HDInsight 3.x**| - | - | - |**HDInsight 4.x**| - |
|-|-|-|-|-|-| |**Table Type** |**ACID v1** |**Format** |**Owner (user) of Hive Table File** |**Table Type**|**ACID v2**| |External |No |Native or non-native| Hive or non-Hive |External |No|
Hive has changed table creation in the following ways
Find a table with the problematic table reference, such as `math.students` appearing in a CREATE TABLE statement, and enclose the database name and the table name in backticks.
- `CREATE TABLE `math`.`students` (name VARCHAR(64), age INT, gpa DECIMAL(3,2));`
+
+ ```sql
+ CREATE TABLE `math`.`students` (name VARCHAR(64), age INT, gpa DECIMAL(3,2));
+ ```
* CASTING TIMESTAMPS Results of applications that cast numerics to timestamps differ from Hive 2 to Hive 3. Apache Hive changed the behavior of CAST to comply with the SQL Standard, which doesn't associate a time zone with the TIMESTAMP type.
hdinsight Apache Hive Warehouse Connector Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/interactive-query/apache-hive-warehouse-connector-operations.md
Previously updated : 07/22/2022 Last updated : 08/21/2023 # Apache Spark operations supported by Hive Warehouse Connector in Azure HDInsight
hdinsight Apache Hive Warehouse Connector Zeppelin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/interactive-query/apache-hive-warehouse-connector-zeppelin.md
Last updated 07/18/2022
HDInsight Spark clusters include Apache Zeppelin notebooks with different interpreters. In this article, we'll focus only on the Livy interpreter to access Hive tables from Spark using Hive Warehouse Connector. > [!NOTE]
-> This article contains references to the term *whitelist*, a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.
+> This article contains references to a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.
## Prerequisite
hdinsight Apache Kafka Mirror Maker 2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/kafka/apache-kafka-mirror-maker-2.md
This architecture features two clusters in different resource groups and virtual
vi /etc/kafka/conf/connect-mirror-maker.properties ``` > [!NOTE]
- > This article contains references to the term *blacklist*, a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.
+ > This article contains references to a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.
1. Property file looks like this. ```
hdinsight Apache Kafka Mirroring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/kafka/apache-kafka-mirroring.md
Configure IP advertising to enable a client to connect by using broker IP addres
## Start MirrorMaker > [!NOTE]
-> This article contains references to the term *whitelist*, a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.
+> This article contains references to a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.
1. From the SSH connection to the secondary cluster, use the following command to start the MirrorMaker process:
hdinsight Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/policy-reference.md
Title: Built-in policy definitions for Azure HDInsight description: Lists Azure Policy built-in policy definitions for Azure HDInsight. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2023 Last updated : 08/30/2023
hdinsight Apache Spark Job Debugging https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/apache-spark-job-debugging.md
description: Use YARN UI, Spark UI, and Spark History server to track and debug
Previously updated : 07/31/2023 Last updated : 08/22/2023 # Debug Apache Spark jobs running on Azure HDInsight
healthcare-apis Authentication Authorization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/authentication-authorization.md
Title: Azure Health Data Services Authentication and Authorization
+ Title: Authentication and authorization
description: This article provides an overview of the authentication and authorization of Azure Health Data Services.
Last updated 06/06/2022
-# Authentication and Authorization for Azure Health Data Services
+# Authentication and authorization for Azure Health Data Services
## Authentication Azure Health Data Services is a collection of secured managed services using [Azure Active Directory (Azure AD)](../active-directory/index.yml), a global identity provider that supports [OAuth 2.0](https://oauth.net/2/).
-For the Azure Health Data Services to access Azure resources, such as storage accounts and event hubs, you must **enable the system managed identity**, and **grant proper permissions** to the managed identity. For more information, see [Azure managed identities](../active-directory/managed-identities-azure-resources/overview.md).
+For Azure Health Data Services to access Azure resources, such as storage accounts and event hubs, you must **enable the system managed identity**, and **grant proper permissions** to the managed identity. For more information, see [Azure managed identities](../active-directory/managed-identities-azure-resources/overview.md).
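As an illustration, a generic sketch with the Azure CLI; the resource IDs are placeholders, and the exact identity command for your service may differ from this generic form:

```azurecli-interactive
# Turn on the system-assigned managed identity for the service, then grant
# that identity data-sender rights on an event hub.
az resource update --ids <fhir-service-resource-id> --set identity.type=SystemAssigned
az role assignment create \
    --assignee <managed-identity-principal-id> \
    --role "Azure Event Hubs Data Sender" \
    --scope <event-hub-resource-id>
```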
-Azure Health Data Services doesn't support other identity providers. However, customers can use their own identity provider to secure applications, and enable them to interact with the Health Data Services by managing client applications and user data access controls.
+Azure Health Data Services doesn't support other identity providers. However, you can use your own identity provider to secure applications, and enable them to interact with the Health Data Services by managing client applications and user data access controls.
The client applications are registered in the Azure AD and can be used to access the Azure Health Data Services. User data access controls are done in the applications or services that implement business logic.
After being granted with proper application roles, the authenticated users and c
There are two common ways to obtain an access token, outlined in detail by the Azure AD documentation: [authorization code flow](../active-directory/develop/v2-oauth2-auth-code-flow.md) and [client credentials flow](../active-directory/develop/v2-oauth2-client-creds-grant-flow.md).
-For obtaining an access token for the Azure Health Data Services, these are the steps using **authorization code flow**:
+Here's how an access token for Azure Health Data Services is obtained using **authorization code flow**:
1. **The client sends a request to the Azure AD authorization endpoint.** Azure AD redirects the client to a sign-in page where the user authenticates using appropriate credentials (for example: username and password, or a two-factor authentication). **Upon successful authentication, an authorization code is returned to the client.** Azure AD only allows this authorization code to be returned to a registered reply URL configured in the client application registration.
-2. **The client application exchanges the authorization code for an access token at the Azure AD token endpoint.** When requesting a token, the client application may have to provide a client secret (which you can add during application registration).
+2. **The client application exchanges the authorization code for an access token at the Azure AD token endpoint.** When the client application requests a token, the application may have to provide a client secret (which you can add during application registration).
-3. **The client makes a request to the Azure Health Data Services**, for example, a `GET` request to search all patients in the FHIR service. When making the request, it **includes the access token in an `HTTP` request header**, for example, **`Authorization: Bearer xxx`**.
+3. **The client makes a request to the Azure Health Data Services**, for example, a `GET` request to search all patients in the FHIR service. The request **includes the access token in an `HTTP` request header**, for example, **`Authorization: Bearer xxx`**.
4. **Azure Health Data Services validates that the token contains appropriate claims (properties in the token).** If it's valid, it completes the request and returns data to the client. In the **client credentials flow**, permissions are granted directly to the application itself. When the application presents a token to a resource, the resource enforces that the application itself has authorization to perform an action since there's no user involved in the authentication. Therefore, it's different from the **authorization code flow** in the following ways: -- The user or the client doesn't need to log in interactively
+- The user or the client doesn't need to sign in interactively.
- The authorization code isn't required. - The access token is obtained directly through application permissions.
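For example, a client credentials token request is a single POST to the Azure AD v2.0 token endpoint. A minimal sketch with `curl`, where the tenant, client, secret, and service host are placeholders:

```bash
# Request an app-only access token; the .default scope asks for the
# application permissions granted to this client on the FHIR resource.
curl -X POST "https://login.microsoftonline.com/<tenant-id>/oauth2/v2.0/token" \
  --data-urlencode "grant_type=client_credentials" \
  --data-urlencode "client_id=<client-id>" \
  --data-urlencode "client_secret=<client-secret>" \
  --data-urlencode "scope=https://<your-service>.fhir.azurehealthcareapis.com/.default"
```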
In the **client credentials flow**, permissions are granted directly to the appl
The access token is a signed, [Base64](https://en.wikipedia.org/wiki/Base64) encoded collection of properties (claims) that convey information about the client's identity, roles, and privileges granted to the user or client.
-Azure Health Data Services typically expect a [JSON Web Token](https://en.wikipedia.org/wiki/JSON_Web_Token). It consists of three parts:
+Azure Health Data Services typically expects a [JSON Web Token](https://en.wikipedia.org/wiki/JSON_Web_Token). It consists of three parts:
* Header * Payload (the claims)
-* Signature, as shown in the image below. For more information, see [Azure access tokens](../active-directory/develop/configurable-token-lifetimes.md).
+* Signature, as shown in the image. For more information, see [Azure access tokens](../active-directory/develop/configurable-token-lifetimes.md).
[ ![JSON web token signature.](media/azure-access-token.png) ](media/azure-access-token.png#lightbox)
-You can use online tools such as [https://jwt.ms](https://jwt.ms/) to view the token content. For example, you can view the claims details.
+Use online tools such as [https://jwt.ms](https://jwt.ms/) to view the token content. For example, you can view the claims details.
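Alternatively, you can decode the payload locally. A minimal Bash sketch, assuming `jq` is installed and `$TOKEN` holds the access token:

```bash
# Take the payload (second dot-separated segment), convert base64url to
# base64, restore the stripped padding, then decode and pretty-print.
p=$(printf '%s' "$TOKEN" | cut -d '.' -f2 | tr '_-' '/+')
while [ $(( ${#p} % 4 )) -ne 0 ]; do p="${p}="; done
printf '%s' "$p" | base64 -d | jq .
```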
|**Claim type** |**Value** |**Notes** | |||-| |aud |https://xxx.fhir.azurehealthcareapis.com|Identifies the intended recipient of the token. In `id_tokens`, the audience is your app's Application ID, assigned to your app in the Azure portal. Your app should validate this value and reject the token if the value doesn't match.|
-|iss |https://sts.windows.net/{tenantid}/|Identifies the security token service (STS) that constructs and returns the token, and the Azure AD tenant in which the user was authenticated. If the token was issued by the v2.0 endpoint, the URI will end in `/v2.0`. The GUID that indicates that the user is a consumer user from a Microsoft account is `9188040d-6c67-4c5b-b112-36a304b66dad`. Your app should use the GUID portion of the claim to restrict the set of tenants that can sign in to the app, if it's applicable.|
+|iss |https://sts.windows.net/{tenantid}/|Identifies the security token service (STS) that constructs and returns the token, and the Azure AD tenant in which the user was authenticated. If the token was issued by the v2.0 endpoint, the URI ends in `/v2.0`. The GUID that indicates that the user is a consumer user from a Microsoft account is `9188040d-6c67-4c5b-b112-36a304b66dad`. Your app should use the GUID portion of the claim to restrict the set of tenants that can sign in to the app, if it's applicable.|
|iat |(time stamp) |"Issued At" indicates when the authentication for this token occurred.| |nbf |(time stamp) |The "nbf" (not before) claim identifies the time before which the JWT MUST NOT be accepted for processing.|
-|exp |(time stamp) |The "exp" (expiration time) claim identifies the expiration time on or after which the JWT MUST NOT be accepted for processing. It's important to note that a resource may reject the token before this time as well, if for example, a change in authentication is required, or a token revocation has been detected.|
+|exp |(time stamp) |The "exp" (expiration time) claim identifies the expiration time on or after which the JWT MUST NOT be accepted for processing. Note that a resource may reject the token before this time, for example if a change in authentication is required, or a token revocation has been detected.|
|aio |E2ZgYxxx |An internal claim used by Azure AD to record data for token reuse. Should be ignored.|
-|appid |e97e1b8c-xxx |This is the application ID of the client using the token. The application can act as itself or on behalf of a user. The application ID typically represents an application object, but it can also represent a service principal object in Azure AD.|
-|appidacr |1 |Indicates how the client was authenticated. For a public client, the value is "0". If client ID and client secret are used, the value is "1". If a client certificate was used for authentication, the value is "2".|
+|appid |e97e1b8c-xxx |The application ID of the client using the token. The application can act as itself or on behalf of a user. The application ID typically represents an application object, but it can also represent a service principal object in Azure AD.|
+|appidacr |1 |Indicates how the client was authenticated. For a public client, the value is 0. If client ID and client secret are used, the value is 1. If a client certificate was used for authentication, the value is 2.|
|idp |https://sts.windows.net/{tenantid}/|Records the identity provider that authenticated the subject of the token. This value is identical to the value of the Issuer claim unless the user account isn't in the same tenant as the issuer - guests, for instance. If the claim isn't present, it means that the value of iss can be used instead. For personal accounts being used in an organizational context (for instance, a personal account invited to an Azure AD tenant), the idp claim may be 'live.com' or an STS URI containing the Microsoft account tenant 9188040d-6c67-4c5b-b112-36a304b66dad.|
-|oid |For example, tenantid |This is the immutable identifier for an object in the Microsoft identity system, in this case, a user account. This ID uniquely identifies the user across applications - two different applications signing in the same user will receive the same value in the oid claim. The Microsoft Graph will return this ID as the ID property for a given user account. Because the oid allows multiple apps to correlate users, the profile scope is required to receive this claim. Note: If a single user exists in multiple tenants, the user will contain a different object ID in each tenant - theyΓÇÖre considered different accounts, even though the user logs into each account with the same credentials.|
+|oid |For example, tenantid |The immutable identifier for an object in the Microsoft identity system, in this case, a user account. This ID uniquely identifies the user across applications - two different applications signing in the same user receive the same value in the oid claim. The Microsoft Graph returns this ID as the ID property for a given user account. Because the oid allows multiple apps to correlate users, the profile scope is required to receive this claim. Note: If a single user exists in multiple tenants, the user has a different object ID in each tenant - they're considered different accounts, even though the user logs into each account with the same credentials.|
|rh |0.ARoxxx |An internal claim used by Azure to revalidate tokens. It should be ignored.|
-|sub |For example, tenantid |The principal about which the token asserts information, such as the user of an app. This value is immutable and canΓÇÖt be reassigned or reused. The subject is a pairwise identifier - itΓÇÖs unique to a particular application ID. Therefore, if a single user signs into two different apps using two different client IDs, those apps will receive two different values for the subject claim. This may or may not be desired depending on your architecture and privacy requirements.|
+|sub |For example, tenantid |The principal about which the token asserts information, such as the user of an app. This value is immutable and can't be reassigned or reused. The subject is a pairwise identifier - it's unique to a particular application ID. Therefore, if a single user signs into two different apps using two different client IDs, those apps receive two different values for the subject claim. You may or may not desire this result depending on your architecture and privacy requirements.|
|tid |For example, tenantid |A GUID that represents the Azure AD tenant that the user is from. For work and school accounts, the GUID is the immutable tenant ID of the organization that the user belongs to. For personal accounts, the value is 9188040d-6c67-4c5b-b112-36a304b66dad. The profile scope is required in order to receive this claim. |uti |bY5glsxxx |An internal claim used by Azure to revalidate tokens. It should be ignored.| |ver |1 |Indicates the version of the token.|
healthcare-apis Configure Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/configure-private-link.md
Title: Private link for Azure API for FHIR description: This article describes how to set up a private endpoint for Azure API for FHIR services -+ Last updated 06/03/2022-+ # Configure private link
healthcare-apis Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/disaster-recovery.md
Title: Disaster recovery for Azure API for FHIR description: In this article, you'll learn how to enable disaster recovery features for Azure API for FHIR.-+ Last updated 06/03/2022-+ # Disaster recovery for Azure API for FHIR
-Azure API for FHIR is a fully managed service, based on Fast Healthcare Interoperability Resources (FHIR®). To meet business and compliance requirements you can use the disaster recovery (DR) feature for Azure API for FHIR.
+Azure API for FHIR is a fully managed service based on Fast Healthcare Interoperability Resources (FHIR®). To meet business and compliance requirements, you can use the disaster recovery (DR) feature for Azure API for FHIR.
The DR feature provides a Recovery Point Objective (RPO) of 15 minutes and a Recovery Time Objective (RTO) of 60 minutes.
Consider the following steps for DR test.
* Prepare a test environment with test data. It's recommended that you use a service instance with small amounts of data to reduce the time to replicate the data.
-* Create a support ticket and provide your Azure subscription and the service name for the Azure API for FHIR for your test environment.
+* Create a support ticket and provide your Azure subscription, preferred Azure region for the failover, and the service name for the Azure API for FHIR for your test environment.
* Come up with a test plan, as you would with any DR test.
-* The Microsoft support team enables the DR feature and confirms that the failover has taken place.
+* The Microsoft support team enables the DR feature and confirms that the customer's preferred failover region has been added.
* Conduct your DR test and record the testing results, which should include any data loss and network latency issues.
healthcare-apis Find Identity Object Ids https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/find-identity-object-ids.md
+ Last updated 06/03/2022
healthcare-apis Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/overview.md
Previously updated : 09/30/2022 Last updated : 09/01/2023
-# Azure API for FHIR Overview
+# What is Azure API for FHIR?
> [!Note]
-> Azure Health Data services is the evolved version of Azure API for FHIR enabling customers to manage FHIR, DICOM, and MedTech services with integrations into other Azure Services. To learn about Azure Health Data Services [click here](https://azure.microsoft.com/products/health-data-services/).
+> Azure Health Data Services is the evolved version of Azure API for FHIR, enabling customers to manage FHIR, DICOM, and MedTech services with integrations into other Azure services.
Azure API for FHIR enables rapid exchange of data through Fast Healthcare Interoperability Resources (FHIR®) APIs, backed by a managed Platform-as-a Service (PaaS) offering in the cloud. It makes it easier for anyone working with health data to ingest, manage, and persist Protected Health Information [PHI](https://www.hhs.gov/answers/hipaa/what-is-phi/index.html) in the cloud:
Azure API for FHIR allows you to create and deploy a FHIR service in just minute
The FHIR API and compliant data store enable you to securely connect and interact with any system that utilizes FHIR APIs. Microsoft takes on the operations, maintenance, updates, and compliance requirements in the PaaS offering, so you can free up your own operational and development resources.
-The following video presents an overview of Azure API for FHIR:
+This video presents an overview of Azure API for FHIR.
>[!VIDEO https://www.youtube.com/embed/5vS7Iq9vpXE]
-## Leveraging the power of your data with FHIR
+## Leverage the power of your data with FHIR
The healthcare industry is rapidly transforming health data to the emerging standard of [FHIR&reg;](https://hl7.org/fhir). FHIR enables a robust, extensible data model with standardized semantics and data exchange that enables all systems using FHIR to work together. Transforming your data to FHIR allows you to quickly connect existing data sources such as the electronic health record systems or research databases. FHIR also enables the rapid exchange of data in modern implementations of mobile and web development. Most importantly, FHIR can simplify data ingestion and accelerate development with analytics and machine learning tools.
You could invest resources building and running your own FHIR service, but with
Using the Azure API for FHIR enables you to connect with any system that leverages FHIR APIs for read, write, search, and other functions. It can be used as a powerful tool to consolidate, normalize, and apply machine learning with clinical data from electronic health records, clinician and patient dashboards, remote monitoring programs, or with databases outside of your system that have FHIR APIs.
-### Control Data Access at Scale
+### Control data access at scale
You control your data. Role-based access control (RBAC) enables you to manage how your data is stored and accessed. Providing increased security and reducing administrative workload, you determine who has access to the datasets you create, based on role definitions you create for your environment.
FHIR capabilities from Microsoft are available in two configurations:
For use cases that require extending or customizing the FHIR server, or require access to the underlying services, such as the database, without going through the FHIR APIs, developers should choose the open-source FHIR Server for Azure. For implementation of a turnkey, production-ready FHIR API and backend service where persisted data should only be accessed through the FHIR API, developers should choose the Azure API for FHIR.
-## Next Steps
+## Next steps
To start working with Azure API for FHIR, follow the 5-minute quickstart to deploy the Azure API for FHIR.
healthcare-apis Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/policy-reference.md
Title: Built-in policy definitions for Azure API for FHIR description: Lists Azure Policy built-in policy definitions for Azure API for FHIR. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2023 Last updated : 08/30/2023
healthcare-apis Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure API for FHIR description: Lists Azure Policy Regulatory Compliance controls available for Azure API for FHIR. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/03/2023 Last updated : 08/25/2023
healthcare-apis Smart On Fhir https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/smart-on-fhir.md
Below tutorials describe steps to enable SMART on FHIR applications with FHIR Se
- After registering the application, make note of the applicationId for the client application. - Ensure you have access to the Azure subscription of the FHIR service, to create resources and add role assignments.
-## SMART on FHIR using AHDS Samples OSS
+## SMART on FHIR using AHDS Samples OSS (SMART on FHIR (Enhanced))
### Step 1: Set up FHIR SMART user role Follow the steps listed under section [Manage Users: Assign Users to Role](https://learn.microsoft.com/azure/active-directory/fundamentals/active-directory-users-assign-role-azure-portal). Any user added to the "FHIR SMART User" role is able to access the FHIR service if their requests comply with the SMART on FHIR Implementation Guide, such as the request having an access token that includes a fhirUser claim and a clinical scopes claim. The access granted to users in this role is then limited by the resources associated with their fhirUser compartment and the restrictions in the clinical scopes.
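As a sketch, an equivalent assignment can be made from the Azure CLI; the assignee and the FHIR service resource ID are placeholders:

```azurecli-interactive
# Assign the built-in "FHIR SMART User" role to a user on the FHIR service.
az role assignment create \
    --assignee "clinician@contoso.com" \
    --role "FHIR SMART User" \
    --scope <fhir-service-resource-id>
```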
Follow the steps listed under section [Manage Users: Assign Users to Role](https
<summary> Click to expand! </summary> > [!NOTE]
-> This is another option to using "SMART on FHIR using AHDS Samples OSS" mentioned above. SMART on FHIR Proxy option only enables EHR launch sequence.
+> This is an alternative to SMART on FHIR (Enhanced) mentioned above. The SMART on FHIR Proxy option only enables the EHR launch sequence.
### Step 1: Set admin consent for your client application To use SMART on FHIR, you must first authenticate and authorize the app. The first time you use SMART on FHIR, you also must get administrative consent to let the app access your FHIR resources.
healthcare-apis Use Custom Headers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/use-custom-headers.md
Last updated 06/03/2022 -
-# Add data to audit logs by using custom HTTP headers in Azure API for FHIR
-
-In the Azure Fast Healthcare Interoperability Resources (FHIR) API, a user might want to include additional information in the logs, which comes from the calling system.
-
-For example, when the user of the API is authenticated by an external system, that system forwards the call to the FHIR API. At the FHIR API layer, the information about the original user has been lost, because the call was forwarded. It might be necessary to log and retain this user information for auditing or management purposes. The calling system can provide user identity, caller location, or other necessary information in the HTTP headers, which will be carried along as the call is forwarded.
-
-You can see this data flow in the following diagram:
--
-You can use custom headers to capture several types of information. For example:
-
-* Identity or authorization information
-* Origin of the caller
-* Originating organization
-* Client system details (electronic health record, patient portal)
-
-> [!IMPORTANT]
-> Be aware that the information sent in custom headers is stored in a Microsoft internal logging system for 30 days after being available in Azure Log Monitoring. We recommend encrypting any information before adding it to custom headers. You should not pass any PHI information through customer headers.
-
-You must use the following naming convention for your HTTP headers: X-MS-AZUREFHIR-AUDIT-\<name>.
-
-These HTTP headers are included in a property bag that is added to the log. For example:
-
-* X-MS-AZUREFHIR-AUDIT-USERID: 1234
-* X-MS-AZUREFHIR-AUDIT-USERLOCATION: XXXX
-* X-MS-AZUREFHIR-AUDIT-XYZ: 1234
-
-This information is then serialized to JSON when it's added to the properties column in the log. For example:
-
-```json
-{ "X-MS-AZUREFHIR-AUDIT-USERID" : "1234",
-"X-MS-AZUREFHIR-AUDIT-USERLOCATION" : "XXXX",
-"X-MS-AZUREFHIR-AUDIT-XYZ" : "1234" }
-```
-As with any HTTP header, the same header name can be repeated with different values. For example:
-
-* X-MS-AZUREFHIR-AUDIT-USERLOCATION: HospitalA
-* X-MS-AZUREFHIR-AUDIT-USERLOCATION: Emergency
-
-When added to the log, the values are combined with a comma delimited list. For example:
-
-{ "X-MS-AZUREFHIR-AUDIT-USERLOCATION" : "HospitalA, Emergency" }
-
-You can add a maximum of 10 unique headers (repetitions of the same header with different values are only counted as one). The total maximum length of the value for any one header is 2048 characters.
-
-If you're using the Firefly C# client API library, the code looks something like this:
+ # Add custom HTTP headers to audit logs in FHIR service
-```C#
-FhirClient client;
-client = new FhirClient(serverUrl);
-client.OnBeforeRequest += (object sender, BeforeRequestEventArgs e) =>
-{
- // Add custom headers to be added to the logs
- e.RawRequest.Headers.Add("X-MS-AZUREFHIR-AUDIT-UserLocation", "HospitalA");
-};
-client.Get("Patient");
-```
+
## Next steps In this article, you learned how to add data to audit logs by using custom headers in the Azure API for FHIR. For information about Azure API for FHIR configuration settings, see
healthcare-apis Dicom Cast Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/dicom-cast-overview.md
- Title: DICOMcast overview - Azure Health Data Services
-description: In this article, you'll learn the concepts of DICOMcast.
---- Previously updated : 06/03/2022---
-# DICOMcast overview
-
-> [!NOTE]
-> On **July 31, 2023** DICOMcast will be retired. DICOMcast will continue to be available as an open-source component that can be self-hosted. For more information about deploying the DICOMcast service, see the [migration guidance](https://aka.ms/dicomcast-migration).
-
-DICOMcast offers customers the ability to synchronize the data from a DICOM service to a [FHIR service](../../healthcare-apis/fhir/overview.md), which allows healthcare organizations to integrate clinical and imaging data. DICOMcast expands the use cases for health data by supporting both a streamlined view of longitudinal patient data and the ability to effectively create cohorts for medical studies, analytics, and machine learning.
-
-## Architecture
-
-[ ![Architecture diagram of DICOMcast](media/dicom-cast-architecture.png) ](media/dicom-cast-architecture.png#lightbox)
--
-1. **Poll for batch of changes**: DICOMcast polls for any changes via the [Change Feed](dicom-change-feed-overview.md), which captures any changes that occur in your Medical Imaging Server for DICOM.
-1. **Fetch corresponding FHIR resources, if any**: If any DICOM service changes and correspond to FHIR resources, DICOMcast will fetch the related FHIR resources. DICOMcast synchronizes DICOM tags to the FHIR resource types *Patient* and *ImagingStudy*.
-1. **Merge FHIR resources and 'PUT' as a bundle in a transaction**: The FHIR resources corresponding to the DICOMcast captured changes will be merged. The FHIR resources will be 'PUT' as a bundle in a transaction into your FHIR service.
-1. **Persist state and process next batch**: DICOMcast will then persist the current state to prepare for next batch of changes.
-
-The current implementation of DICOMcast:
--- Supports a single-threaded process that reads from the DICOM change feed and writes to a FHIR service.-- Is hosted by Azure Container Instance in our sample template, but can be run elsewhere.-- Synchronizes DICOM tags to *Patient* and *ImagingStudy* FHIR resource types*.-- Is configurated to ignore invalid tags when syncing data from the change feed to FHIR resource types.
- - If `EnforceValidationOfTagValues` is enabled, then the change feed entry won't be written to the FHIR service unless every tag that's mapped is valid. For more information, see the [Mappings](#mappings) section below.
- - If `EnforceValidationOfTagValues` is disabled (default), and if a value is invalid, but it's not required to be mapped, then that particular tag won't be mapped. The rest of the change feed entry will be mapped to FHIR resources. If a required tag is invalid, then the change feed entry won't be written to the FHIR service. For more information about the required tags, see [Patient](#patient) and [Imaging Study](#imagingstudy)
-- Logs errors to Azure Table Storage.
- - Errors occur when processing change feed entries that are persisted in Azure Table storage that are in different tables.
- - `InvalidDicomTagExceptionTable`: Stores information about tags with invalid values. Entries here don't necessarily mean that the entire change feed entry wasn't stored in FHIR service, but that the particular value had a validation issue.
- - `DicomFailToStoreExceptionTable`: Stores information about change feed entries that weren't stored to FHIR service due to an issue with the change feed entry (such as invalid required tag). All entries in this table weren't stored to FHIR service.
- - `FhirFailToStoreExceptionTable`: Stores information about change feed entries that weren't stored to FHIR service due to an issue with the FHIR service (such as conflicting resource already exists). All entries in this table weren't stored to FHIR service.
- - `TransientRetryExceptionTable`: Stores information about change feed entries that faced a transient error (such as FHIR service too busy) and are being retried. Entries in this table note how many times they've been retried, but it doesn't necessarily mean that they eventually failed or succeeded to store to FHIR service.
- - `TransientFailureExceptionTable`: Stores information about change feed entries that had a transient error, and went through the retry policy and still failed to store to FHIR service. All entries in this table failed to store to FHIR service.
-
-## Mappings
-
-The current implementation of DICOMcast has the following mappings:
-
-### Patient
-
-| Property | Tag ID | Tag Name | Required Tag?| Note |
-| :- | :-- | :- | :-- | :-- |
-| Patient.identifier.where(system = '') | (0010,0020) | PatientID | Yes | For now, the system will be empty string. We'll add support later for allowing the system to be specified. |
-| Patient.name.where(use = 'usual') | (0010,0010) | PatientName | No | PatientName will be split into components and added as HumanName to the Patient resource. |
-| Patient.gender | (0010,0040) | PatientSex | No | |
-| Patient.birthDate | (0010,0030) | PatientBirthDate | No | PatientBirthDate only contains the date. This implementation assumes that the FHIR and DICOM services have data from the same time zone. |
-
-### Endpoint
-
-| Property | Tag ID | Tag Name | Note |
-| :- | :-- | :- | : |
-| Endpoint.status ||| The value 'active' will be used when creating the endpoint. |
-| Endpoint.connectionType ||| The system 'http://terminology.hl7.org/CodeSystem/endpoint-connection-type' and value 'dicom-wado-rs' will be used when creating the endpoint. |
-| Endpoint.address ||| The root URL to the DICOMWeb service will be used when creating the endpoint. The rule is described in 'http://hl7.org/fhir/imagingstudy.html#endpoint'. |
-
-### ImagingStudy
-
-| Property | Tag ID | Tag Name | Required | Note |
-| :- | :-- | :- | : | : |
-| ImagingStudy.identifier.where(system = 'urn:dicom:uid') | (0020,000D) | StudyInstanceUID | Yes | The value will have prefix of `urn:oid:`. |
-| ImagingStudy.status | | | No | The value 'available' will be used when creating ImagingStudy. |
-| ImagingStudy.modality | (0008,0060) | Modality | No | |
-| ImagingStudy.subject | | | No | It will be linked to the [Patient](#mappings). |
-| ImagingStudy.started | (0008,0020), (0008,0030), (0008,0201) | StudyDate, StudyTime, TimezoneOffsetFromUTC | No | Refer to the section for details about how the [timestamp](#timestamp) is constructed. |
-| ImagingStudy.endpoint | | | | It will be linked to the [Endpoint](#endpoint). |
-| ImagingStudy.note | (0008,1030) | StudyDescription | No | |
-| ImagingStudy.series.uid | (0020,000E) | SeriesInstanceUID | Yes | |
-| ImagingStudy.series.number | (0020,0011) | SeriesNumber | No | |
-| ImagingStudy.series.modality | (0008,0060) | Modality | Yes | |
-| ImagingStudy.series.description | (0008,103E) | SeriesDescription | No | |
-| ImagingStudy.series.started | (0008,0021), (0008,0031), (0008,0201) | SeriesDate, SeriesTime, TimezoneOffsetFromUTC | No | Refer to the section for details about how the [timestamp](#timestamp) is constructed. |
-| ImagingStudy.series.instance.uid | (0008,0018) | SOPInstanceUID | Yes | |
-| ImagingStudy.series.instance.sopClass | (0008,0016) | SOPClassUID | Yes | |
-| ImagingStudy.series.instance.number | (0020,0013) | InstanceNumber | No| |
-| ImagingStudy.identifier.where(type.coding.system='http://terminology.hl7.org/CodeSystem/v2-0203' and type.coding.code='ACSN')) | (0008,0050) | Accession Number | No | Refer to http://hl7.org/fhir/imagingstudy.html#notes. |
-
-### Timestamp
-
-DICOM has different date time VR types. Some tags (like Study and Series) have the date, time, and UTC offset stored separately. This means that the date might be partial. This code attempts to translate this into a partial date syntax allowed by the FHIR service.
-
-## Summary
-
-In this concept, we reviewed the architecture and mappings of DICOMcast. This feature is available as an open-source component that can be self-hosted. For more information about deploying the DICOMcast service, see the [deployment instructions](https://github.com/microsoft/dicom-server/blob/main/docs/quickstarts/deploy-dicom-cast.md).
-
-> [!IMPORTANT]
-> Ensure that you include the **resource IDs** of your DICOM service and FHIR service when you submit a support ticket.
-
-
-## Next steps
-
-To get started using the DICOM service, see
-
->[!div class="nextstepaction"]
->[Deploy DICOM service to Azure](deploy-dicom-services-in-azure.md)
-
->[!div class="nextstepaction"]
->[Using DICOMweb&trade;Standard APIs with DICOM service](dicomweb-standard-apis-with-dicom-services.md)
-
-FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Dicom Services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/dicom-services-overview.md
Previously updated : 07/11/2022 Last updated : 09/01/2023
-# Overview of the DICOM service
-
-This article describes the concepts of DICOM and the DICOM service.
-
-## DICOM
+# What is the DICOM service?
DICOM (Digital Imaging and Communications in Medicine) is the international standard to transmit, store, retrieve, print, process, and display medical imaging information, and is the primary medical imaging standard accepted across healthcare.
-## DICOM service
The DICOM service is a managed service within [Azure Health Data Services](../healthcare-apis-overview.md) that ingests and persists DICOM objects at multiple thousands of images per second. It facilitates communication and transmission of imaging data with any DICOMweb&trade;-enabled systems or applications via DICOMweb Standard APIs like [Store (STOW-RS)](dicom-services-conformance-statement-v2.md#store-stow-rs), [Search (QIDO-RS)](dicom-services-conformance-statement-v2.md#search-qido-rs), and [Retrieve (WADO-RS)](dicom-services-conformance-statement-v2.md#retrieve-wado-rs). It's backed by a managed platform-as-a-service (PaaS) offering in the cloud with complete [PHI](https://www.hhs.gov/answers/hipaa/what-is-phi/index.html) compliance, so you can upload PHI data to the DICOM service and exchange it through secure networks.

- **PHI Compliant**: Protect your PHI with unparalleled security intelligence. Your data is isolated to a unique database per API instance and protected with multi-region failover. The DICOM service implements a layered, in-depth defense and advanced threat protection for your data.
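To make the DICOMweb Standard APIs concrete, here's a minimal sketch of a QIDO-RS study search using Python's `requests`. The service URL, token acquisition, and the `PatientID` filter are placeholder assumptions; check the conformance statement for the exact endpoints and headers your service exposes.

```python
import requests

# Hypothetical DICOM service URL and OAuth token; substitute your own.
base_url = "https://<workspace>-<dicom-service>.dicom.azurehealthcareapis.com/v2"
token = "<access-token>"  # acquired for the DICOM service audience

# QIDO-RS: search for studies, filtering on PatientID (0010,0020).
resp = requests.get(
    f"{base_url}/studies",
    params={"PatientID": "12345", "limit": 10},
    headers={
        "Authorization": f"Bearer {token}",
        "Accept": "application/dicom+json",
    },
)
resp.raise_for_status()
for study in resp.json():
    # (0020,000D) is StudyInstanceUID in the DICOM JSON model.
    print(study["0020000D"]["Value"][0])
```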
healthcare-apis References For Dicom Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/references-for-dicom-service.md
This article describes our open-source projects on GitHub that provide source co
* [Azure DICOM service with OHIF viewer](https://github.com/microsoft/dicom-ohif): The [OHIF viewer](https://ohif.org/) is an open-source, non-diagnostic DICOM viewer that uses DICOMweb APIs to find and render DICOM images. This project provides the guidance and sample templates for deploying the OHIF viewer and configuring it to integrate with the DICOM service. ### Medical imaging network demo environment
-* [Medical Imaging Network Demo Environment](https://github.com/Azure-Samples/azure-health-data-services-samples/tree/main/samples/dicom-demo-env#readme): This hands-on lab / demo highlights how an organization with existing on-prem radiology infrastructure can take the first steps to intelligently moving their data to the cloud, without disruptions to the current workflow.
+* [Medical Imaging Network Demo Environment](https://github.com/Azure-Samples/azure-health-data-services-samples/tree/main/samples/dicom-demo-env#readme): This hands-on lab / demo highlights how an organization with existing on-premises radiology infrastructure can take the first steps to intelligently moving their data to the cloud, without disruptions to the current workflow.
## Next steps
For more information about using the DICOM service, see
For more information about DICOM cast, see >[!div class="nextstepaction"]
->[DICOM cast overview](dicom-cast-overview.md)
+>[DICOM cast overview](https://github.com/microsoft/dicom-server/blob/main/docs/concepts/dicom-cast.md)
FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Events Consume Logic Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/events/events-consume-logic-apps.md
Follow these steps to create a Logic App workflow to consume FHIR events:
## Prerequisites
-Before you begin this tutorial, you need to have deployed a FHIR service and enabled events. For more information about deploying events, see [Deploy Events in the Azure portal](events-deploy-portal.md).
+Before you begin this tutorial, you need to have deployed a FHIR service and enabled events. For more information about deploying events, see [Deploy events using the Azure portal](events-deploy-portal.md).
## Creating a Logic App
You now need to fill out the details of your Logic App. Specify information for
:::image type="content" source="media/events-logic-apps/events-logic-tabs.png" alt-text="Screenshot of the five tabs for specifying your Logic App." lightbox="media/events-logic-apps/events-logic-tabs.png"::: -- Tab 1 - Basics-- Tab 2 - Hosting-- Tab 3 - Monitoring-- Tab 4 - Tags-- Tab 5 - Review + Create
+- Tab 1 - **Basics**
+- Tab 2 - **Hosting**
+- Tab 3 - **Monitoring**
+- Tab 4 - **Tags**
+- Tab 5 - **Review + Create**
### Basics - Tab 1
Enabling your plan makes it zone redundant.
### Hosting - Tab 2
-Continue specifying your Logic App by clicking "Next: Hosting".
+Continue specifying your Logic App by selecting **Next: Hosting**.
#### Storage
Choose the type of storage you want to use and the storage account. You can use
### Monitoring - Tab 3
-Continue specifying your Logic App by clicking "Next: Monitoring".
+Continue specifying your Logic App by selecting **Next: Monitoring**.
#### Monitoring with Application Insights
Enable Azure Monitor Application Insights to automatically monitor your applicat
### Tags - Tab 4
-Continue specifying your Logic App by clicking **Next: Tags**.
+Continue specifying your Logic App by selecting **Next: Tags**.
#### Use tags to categorize resources
This example doesn't use tagging.
### Review + create - Tab 5
-Finish specifying your Logic App by clicking **Next: Review + create**.
+Finish specifying your Logic App by selecting **Next: Review + create**.
#### Review your Logic App
If there are no errors, you'll finally see a notification telling you that your
#### Your Logic App dashboard
-Azure creates a dashboard when your Logic App is complete. The dashboard shows you the status of your app. You can return to your dashboard by clicking Overview in the Logic App menu. Here's a Logic App dashboard:
+Azure creates a dashboard when your Logic App is complete. The dashboard shows you the status of your app. You can return to your dashboard by selecting **Overview** in the Logic App menu. Here's a Logic App dashboard:
:::image type="content" source="media/events-logic-apps/events-logic-overview.png" alt-text="Screenshot of your Logic Apps overview dashboard." lightbox="media/events-logic-apps/events-logic-overview.png":::
To set up a new workflow, fill in these details:
Specify a new name for your workflow. Indicate whether you want the workflow to be stateful or stateless. Stateful is for business processes and stateless is for processing IoT events.
-When you've specified the details, select "Create" to begin designing your workflow.
+When you've specified the details, select **Create** to begin designing your workflow.
### Designing the workflow

In your new workflow, select the name of the enabled workflow.
-You can write code to design a workflow for your application, but for this tutorial, choose the Designer option on the Developer menu.
+You can write code to design a workflow for your application, but for this tutorial, choose the **Designer** option on the **Developer** menu.
-Next, select "Choose an operation" to display the "Add a Trigger" blade on the right. Then search for "Azure Event Grid" and select the "Azure" tab below. The Event Grid isn't a Logic App Built-in.
+Next, select **Choose an operation** to display the **Add a Trigger** blade on the right. Then search for "Azure Event Grid" and select the **Azure** tab below. The Event Grid isn't a Logic App Built-in.
:::image type="content" source="media/events-logic-apps/events-logic-grid.png" alt-text="Screenshot of the search results for Azure Event Grid." lightbox="media/events-logic-apps/events-logic-grid.png":::
-When you see the "Azure Event Grid" icon, select on it to display the Triggers and Actions available from Event Grid. For more information about Event Grid, see [What is Azure Event Grid?](./../../event-grid/overview.md).
+When you see the "Azure Event Grid" icon, select it to display the **Triggers and Actions** available from Event Grid. For more information about Event Grid, see [What is Azure Event Grid?](./../../event-grid/overview.md).
-Select "When a resource event occurs" to set up a trigger for the Azure Event Grid.
+Select **When a resource event occurs** to set up a trigger for the Azure Event Grid.
To tell Event Grid how to respond to the trigger, you must specify parameters and add actions.
Fill in the details for subscription, resource type, and resource name. Then you
- Resource deleted
- Resource updated
-For more information about event types, see [What FHIR resource events does Events support?](events-faqs.md#what-fhir-resource-events-does-events-support).
+For more information about supported event types, see [Frequently asked questions about events](events-faqs.md).
### Adding an HTTP action
-Once you've specified the trigger events, you must add more details. Select the "+" below the "When a resource event occurs" button.
+Once you've specified the trigger events, you must add more details. Select the **+** below the **When a resource event occurs** button.
-You need to add a specific action. Select "Choose an operation" to continue. Then, for the operation, search for "HTTP" and select on "Built-in" to select an HTTP operation. The HTTP action will allow you to query the FHIR service.
+You need to add a specific action. Select **Choose an operation** to continue. Then, for the operation, search for "HTTP" and select **Built-in** to select an HTTP operation. The HTTP action will allow you to query the FHIR service.
The options in this example are:
The options in this example are:
At this point, you need to give the FHIR Reader access to your app, so it can verify that the event details are correct. Follow these steps to give it access:
-1. The first step is to go back to your Logic App and select the Identity menu item.
+1. The first step is to go back to your Logic App and select the **Identity** menu item.
+2. In the **System assigned** tab, make sure the **Status** is **On**.
+2. In the System assigned tab, make sure the **Status** is "On".
-3. Select on Azure role assignments. Select "Add role assignment".
+3. Select **Azure role assignments**, and then select **Add role assignment**.
4. Specify the following options:
At this point, you need to give the FHIR Reader access to your app, so it can ve
- Subscription = your subscription
- Role = FHIR Data Reader.
-When you've specified the first four steps, add the role assignment by Managed identity, using Subscription, Managed identity (Logic App Standard), and select your Logic App by clicking the name and then clicking the Select button. Finally, select "Review + assign" to assign the role.
+When you've specified the first four steps, add the role assignment by managed identity: choose your subscription and **Managed identity (Logic App Standard)**, select your Logic App by its name, and then choose the **Select** button. Finally, select **Review + assign** to assign the role.
### Add a condition
-After you have given FHIR Reader access to your app, go back to the Logic App workflow Designer. Then add a condition to determine whether the event is one you want to process. Select the "+" below HTTP to "Choose an operation". On the right, search for the word "condition". Select on "Built-in" to display the Control icon. Next select Actions and choose Condition.
+After you have given FHIR Reader access to your app, go back to the Logic App workflow Designer. Then add a condition to determine whether the event is one you want to process. Select the **+** below HTTP to **Choose an operation**. On the right, search for the word "condition". Select **Built-in** to display the Control icon. Next, select **Actions** and choose **Condition**.
When the condition is ready, you can specify what actions happen if the condition is true or false.

### Choosing a condition criteria
-In order to specify whether you want to take action for the specific event, begin specifying the criteria by clicking on **Condition** in the workflow. A set of condition choices are then displayed.
+To specify whether you want to take action for the specific event, begin specifying the criteria by selecting **Condition** in the workflow. A set of condition choices is then displayed.
Under the **And** box, add these two conditions:
The expression for getting the resourceType is `body('HTTP')?['resourceType']`.
#### Event Type
-You can select Event Type from the Dynamic Content.
+You can select **Event Type** from the Dynamic Content.
Here's an example of the Condition criteria:
When you've entered the condition criteria, save your workflow.
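To make the intent of the two conditions concrete, here's a hedged Python sketch of equivalent subscriber logic: read the resource that the event points to (the HTTP action) and then apply the resourceType and event-type checks (the condition). The FHIR URL, token, and the `Patient` filter are illustrative assumptions.

```python
import requests

FHIR_URL = "https://<workspace>-<fhir-service>.fhir.azurehealthcareapis.com"
TOKEN = "<access-token>"  # token for an identity with FHIR Data Reader access

def handle_event(event: dict) -> None:
    # Mirror of the HTTP action: read the resource the event points at.
    resp = requests.get(
        f"{FHIR_URL}/{event['data']['resourceType']}/{event['data']['resourceFhirId']}",
        headers={"Authorization": f"Bearer {TOKEN}"},
    )
    resp.raise_for_status()
    resource = resp.json()

    # Mirror of the two Condition checks: resource type and event type.
    if (resource.get("resourceType") == "Patient"
            and event.get("eventType") == "Microsoft.HealthcareApis.FhirResourceCreated"):
        print(f"New patient created: {resource['id']}")
    else:
        print("Event ignored by the condition.")
```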
#### Workflow dashboard
-To check the status of your workflow, select Overview in the workflow menu. Here's a dashboard for a workflow:
+To check the status of your workflow, select **Overview** in the workflow menu. Here's a dashboard for a workflow:
:::image type="content" source="media/events-logic-apps/events-logic-dashboard.png" alt-text="Screenshot of the Logic App workflow dashboard." lightbox="media/events-logic-apps/events-logic-dashboard.png":::
You can do the following operations from your workflow dashboard:
### Condition testing
-Save your workflow by clicking the "Save" button.
+Save your workflow by selecting the **Save** button.
To test your new workflow, do the following steps:
In this tutorial, you learned how to use Logic Apps to process FHIR events.
To learn about Events, see > [!div class="nextstepaction"]
-> [What are Events?](events-overview.md)
+> [What are events?](events-overview.md)
To learn about the Events frequently asked questions (FAQs), see > [!div class="nextstepaction"]
-> [Frequently asked questions about Events](events-faqs.md)
+> [Frequently asked questions about events](events-faqs.md)
FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
healthcare-apis Events Deploy Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/events/events-deploy-portal.md
Title: Deploy Events using the Azure portal - Azure Health Data Services
-description: Learn how to deploy the Events feature using the Azure portal.
+ Title: Deploy events using the Azure portal - Azure Health Data Services
+description: Learn how to deploy the events feature using the Azure portal.
Last updated 06/23/2022
-# Quickstart: Deploy Events using the Azure portal
+# Quickstart: Deploy events using the Azure portal
> [!NOTE] > [Fast Healthcare Interoperability Resources (FHIR&#174;)](https://www.hl7.org/fhir/) is an open healthcare specification.
-In this quickstart, learn how to deploy the Azure Health Data Services Events feature in the Azure portal to send FHIR and DICOM event messages.
+In this quickstart, learn how to deploy the events feature in the Azure portal to send FHIR and DICOM event messages.
## Prerequisites
-It's important that you have the following prerequisites completed before you begin the steps of deploying the Events feature in Azure Health Data Services.
+It's important that you have the following prerequisites completed before you begin the steps of deploying the events feature.
* [An active Azure account](https://azure.microsoft.com/free/search/?OCID=AID2100131_SEM_c4b0772dc7df1f075552174a854fd4bc:G:s&ef_id=c4b0772dc7df1f075552174a854fd4bc:G:s&msclkid=c4b0772dc7df1f075552174a854fd4bc) * [Microsoft Azure Event Hubs namespace and an event hub deployed in the Azure portal](../../event-hubs/event-hubs-create.md)
It's important that you have the following prerequisites completed before you be
* [FHIR service deployed in the workspace](../fhir/fhir-portal-quickstart.md) or [DICOM service deployed in the workspace](../dicom/deploy-dicom-services-in-azure.md) > [!IMPORTANT]
-> You will also need to make sure that the Microsoft.EventGrid resource provider has been successfully registered with your Azure subscription to deploy the Events feature. For more information, see [Azure resource providers and types - Register resource provider](../../azure-resource-manager/management/resource-providers-and-types.md#register-resource-provider).
+> You will also need to make sure that the Microsoft.EventGrid resource provider has been successfully registered with your Azure subscription to deploy the events feature. For more information, see [Azure resource providers and types - Register resource provider](../../azure-resource-manager/management/resource-providers-and-types.md#register-resource-provider).
> [!NOTE]
-> For the purposes of this quickstart, we'll be using a basic Events set up and an event hub as the endpoint for Events messages. To learn how to deploy Azure Event Hubs, see [Quickstart: Create an event hub using Azure portal](../../event-hubs/event-hubs-create.md).
+> For the purposes of this quickstart, we'll be using a basic events setup and an event hub as the endpoint for events messages. To learn how to deploy Azure Event Hubs, see [Quickstart: Create an event hub using Azure portal](../../event-hubs/event-hubs-create.md).
-## Deploy Events
+## Deploy events
-1. Browse to the workspace that contains the FHIR or DICOM service you want to send Events messages from and select the **Events** button on the left hand side of the portal.
+1. Browse to the workspace that contains the FHIR or DICOM service you want to send events messages from and select the **Events** button on the left hand side of the portal.
:::image type="content" source="media/events-deploy-in-portal/events-workspace-select.png" alt-text="Screenshot of workspace and select Events button." lightbox="media/events-deploy-in-portal/events-workspace-select.png":::
It's important that you have the following prerequisites completed before you be
3. In the **Create Event Subscription** box, enter the following subscription information.
- * **Name**: Provide a name for your Events subscription.
- * **System Topic Name**: Provide a name for your System Topic.
+ * **Name**: Provide a name for your events subscription.
+ * **System Topic Name**: Provide a name for your system topic.
> [!NOTE]
- > The first time you set up the Events feature, you will be required to enter a new **System Topic Name**. Once the system topic for the workspace is created, the **System Topic Name** will be used for any additional Events subscriptions that you create within the workspace.
+ > The first time you set up the events feature, you will be required to enter a new **System Topic Name**. Once the system topic for the workspace is created, the **System Topic Name** will be used for any additional events subscriptions that you create within the workspace.
* **Event types**: Type of FHIR or DICOM events to send messages for (for example: create, updated, and deleted).
- * **Endpoint Details**: Endpoint to send Events messages to (for example: an Azure Event Hubs namespace + an event hub).
+ * **Endpoint Details**: Endpoint to send events messages to (for example: an Azure Event Hubs namespace + an event hub).
>[!NOTE] > For the purposes of this quickstart, we'll use the **Event Schema** and the **Managed Identity Type** settings at their default values.
It's important that you have the following prerequisites completed before you be
## Next steps
-In this quickstart, you learned how to deploy Events using the Azure portal.
+In this quickstart, you learned how to deploy events using the Azure portal.
-To learn how to enable the Events metrics, see
+To learn how to enable the events metrics, see
> [!div class="nextstepaction"]
-> [How to use Events metrics](events-use-metrics.md)
+> [How to use events metrics](events-use-metrics.md)
To learn how to export Event Grid system diagnostic logs and metrics, see > [!div class="nextstepaction"]
-> [How to enable diagnostic settings for Events](events-enable-diagnostic-settings.md)
+> [How to enable diagnostic settings for events](events-enable-diagnostic-settings.md)
FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
healthcare-apis Events Disable Delete Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/events/events-disable-delete-workspace.md
Title: How to disable the Events feature and delete Azure Health Data Services workspaces - Azure Health Data Services
-description: Learn how to disable the Events feature and delete Azure Health Data Services workspaces.
+ Title: How to disable events and delete Azure Health Data Services workspaces - Azure Health Data Services
+description: Learn how to disable events and delete Azure Health Data Services workspaces.
Last updated 07/11/2023
-# How to disable the Events feature and delete Azure Health Data Services workspaces
+# How to disable events and delete Azure Health Data Services workspaces
> [!NOTE] > [Fast Healthcare Interoperability Resources (FHIR&#174;)](https://www.hl7.org/fhir/) is an open healthcare specification.
-In this article, learn how to disable the Events feature and delete Azure Health Data Services workspaces.
+In this article, learn how to disable events and delete Azure Health Data Services workspaces.
-## Disable Events
+## Disable events
-To disable Events from sending event messages for a single **Event Subscription**, the **Event Subscription** must be deleted.
+To disable events from sending event messages for a single **Event Subscription**, the **Event Subscription** must be deleted.
1. Select the **Event Subscription** to be deleted. In this example, we select an Event Subscription named **fhir-events**.

:::image type="content" source="media/disable-delete-workspaces/events-select-subscription.png" alt-text="Screenshot of Events subscriptions and select event subscription to be deleted." lightbox="media/disable-delete-workspaces/events-select-subscription.png":::
-2. Select **Delete** and confirm the Event Subscription deletion.
+2. Select **Delete** and confirm the **Event Subscription** deletion.
:::image type="content" source="media/disable-delete-workspaces/events-select-subscription-delete.png" alt-text="Screenshot of events subscriptions and select delete and confirm the event subscription to be deleted." lightbox="media/disable-delete-workspaces/events-select-subscription-delete.png":::
-3. To completely disable Events, delete all **Event Subscriptions** so that no **Event Subscriptions** remain.
+3. To completely disable events, delete all **Event Subscriptions** so that no **Event Subscriptions** remain.
:::image type="content" source="media/disable-delete-workspaces/events-disable-no-subscriptions.png" alt-text="Screenshot of Events subscriptions and delete all event subscriptions to disable events." lightbox="media/disable-delete-workspaces/events-disable-no-subscriptions.png":::
To avoid errors and successfully delete workspaces, follow these steps and in th
## Next steps
-In this article, you learned how to disable the Events feature and delete workspaces.
+In this article, you learned how to disable events and delete Azure Health Data Services workspaces.
-To learn about how to troubleshoot Events, see
+To learn about how to troubleshoot events, see
> [!div class="nextstepaction"]
-> [Troubleshoot Events](events-troubleshooting-guide.md)
+> [Troubleshoot events](events-troubleshooting-guide.md)
FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
healthcare-apis Events Enable Diagnostic Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/events/events-enable-diagnostic-settings.md
Title: Enable Events diagnostic settings for diagnostic logs and metrics export - Azure Health Data Services
-description: Learn how to enable Events diagnostic settings for diagnostic logs and metrics exporting.
+ Title: Enable events diagnostic settings for diagnostic logs and metrics export - Azure Health Data Services
+description: Learn how to enable events diagnostic settings for diagnostic logs and metrics exporting.
Last updated 06/23/2022
-# How to enable diagnostic settings for Events
+# How to enable diagnostic settings for events
> [!NOTE] > [Fast Healthcare Interoperability Resources (FHIR&#174;)](https://www.hl7.org/fhir/) is an open healthcare specification.
-In this article, learn how to enable the Events diagnostic settings for Azure Event Grid system topics.
+In this article, learn how to enable the events diagnostic settings for Azure Event Grid system topics.
## Resources
In this article, learn how to enable the Events diagnostic settings for Azure Ev
|More information about how to work with diagnostics logs.|[Azure Resource Log documentation](../../azure-monitor/essentials/platform-logs-overview.md)| > [!NOTE]
-> It might take up to 15 minutes for the first Events diagnostic logs and metrics to display in the destination of your choice.
+> It might take up to 15 minutes for the first events diagnostic logs and metrics to display in the destination of your choice.
## Next steps
-In this article, you learned how to enable diagnostic settings for Events.
+In this article, you learned how to enable diagnostic settings for events.
-To learn how to use Events metrics using the Azure portal, see
+To learn how to use events metrics using the Azure portal, see
> [!div class="nextstepaction"]
-> [How to use Events metrics](events-use-metrics.md)
+> [How to use events metrics](events-use-metrics.md)
FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
healthcare-apis Events Faqs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/events/events-faqs.md
Title: Frequently asked questions about Events - Azure Health Data Services
-description: Learn about the frequently asked questions about Events.
+ Title: Frequently asked questions about events - Azure Health Data Services
+description: Learn about the frequently asked questions about events.
Last updated 07/11/2023
-# Frequently asked questions about Events
+# Frequently asked questions about events
> [!NOTE] > [Fast Healthcare Interoperability Resources (FHIR&#174;)](https://www.hl7.org/fhir/) is an open healthcare specification.

## Events: The basics
-## Can I use Events with a different FHIR/DICOM service other than the Azure Health Data Services FHIR/DICOM service?
+## Can I use events with a different FHIR/DICOM service other than the Azure Health Data Services FHIR/DICOM service?
-No. The Azure Health Data Services Events feature only currently supports the Azure Health Data Services FHIR and DICOM services.
+No. The Azure Health Data Services events feature only currently supports the Azure Health Data Services FHIR and DICOM services.
-## What FHIR resource events does Events support?
+## What FHIR resource changes does events support?
Events are generated from the following FHIR service types:
Events are generated from the following FHIR service types:
For more information about the FHIR service delete types, see [FHIR REST API capabilities for Azure Health Data Services FHIR service](../../healthcare-apis/fhir/fhir-rest-api-capabilities.md).
-## Does Events support FHIR bundles?
+## Does events support FHIR bundles?
-Yes. The Events feature is designed to emit notifications of data changes at the FHIR resource level.
+Yes. The events feature is designed to emit notifications of data changes at the FHIR resource level.
Events support these [FHIR bundle types](http://hl7.org/fhir/R4/valueset-bundle-type.html) in the following ways:
Events support these [FHIR bundle types](http://hl7.org/fhir/R4/valueset-bundle-
> [!NOTE] > Events are not sent in the sequence of the data operations in the FHIR bundle.
-## What DICOM image events does Events support?
+## What DICOM image changes does events support?
Events are generated from the following DICOM service types:
Events are generated from the following DICOM service types:
* **DicomImageUpdated** - The event emitted after a DICOM image gets updated successfully.
-## What is the payload of an Events message?
+## What is the payload of an events message?
-For a detailed description of the Events message structure and both required and nonrequired elements, see [Events troubleshooting guide](events-troubleshooting-guide.md).
+For a detailed description of the events message structure and both required and nonrequired elements, see [Events message structures](events-message-structure.md).
-## What is the throughput for the Events messages?
+## What is the throughput for the events messages?
The throughput of the FHIR or DICOM service and the Event Grid govern the throughput of FHIR and DICOM events. When a request made to the FHIR service is successful, it returns a 2xx HTTP status code and generates a FHIR resource change or DICOM image change event. The current limitation is 5,000 events/second per workspace for all FHIR or DICOM service instances in the workspace.
-## How am I charged for using Events?
+## How am I charged for using events?
-There are no extra charges for using [Azure Health Data Services Events](https://azure.microsoft.com/pricing/details/health-data-services/). However, applicable charges for the [Event Grid](https://azure.microsoft.com/pricing/details/event-grid/) are assessed against your Azure subscription.
+There are no extra charges for using [Azure Health Data Services events](https://azure.microsoft.com/pricing/details/health-data-services/). However, applicable charges for the [Event Grid](https://azure.microsoft.com/pricing/details/event-grid/) are assessed against your Azure subscription.
## How do I subscribe to multiple FHIR and/or DICOM services in the same workspace separately?
Yes. We recommend that you use different subscribers for each individual FHIR or
Yes. Event Grid supports customers' Health Insurance Portability and Accountability Act (HIPAA) and Health Information Trust Alliance (HITRUST) obligations. For more information, see [Microsoft Azure Compliance Offerings](https://azure.microsoft.com/resources/microsoft-azure-compliance-offerings/).
-## What is the expected time to receive an Events message?
+## What is the expected time to receive an events message?
On average, you should receive your event message within one second after a successful HTTP request. 99.99% of the event messages should be delivered within five seconds unless the limitation of either the FHIR service, DICOM service, or [Event Grid](../../event-grid/quotas-limits.md) has been met.
-## Is it possible to receive duplicate Events messages?
+## Is it possible to receive duplicate events messages?
-Yes. The Event Grid guarantees at least one Events message delivery with its push mode. There may be chances that the event delivery request returns with a transient failure status code for random reasons. In this situation, the Event Grid considers that as a delivery failure and resends the Events message. For more information, see [Azure Event Grid delivery and retry](../../event-grid/delivery-and-retry.md).
+Yes. The Event Grid guarantees at-least-once delivery of events messages with its push mode. The event delivery request might return a transient failure status code; in that situation, the Event Grid considers it a delivery failure and resends the events message. For more information, see [Azure Event Grid delivery and retry](../../event-grid/delivery-and-retry.md).
Generally, we recommend that developers ensure idempotency for the event subscriber. The event ID, or the combination of all fields in the `data` property of the message content, is unique for each event. The developer can rely on them to deduplicate.
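As a minimal sketch of that deduplication advice, a subscriber can track processed event IDs and skip redelivered messages. The sample event below only suggests the general shape of an events message, and in production you'd persist the seen IDs in durable storage rather than in memory.

```python
processed_ids: set[str] = set()

def process_once(event: dict) -> None:
    """Idempotent handler: deduplicate on the unique event id."""
    event_id = event["id"]
    if event_id in processed_ids:
        return  # redelivered by Event Grid; already handled
    processed_ids.add(event_id)
    # ... actual work goes here ...

# Illustrative shape only; see the events message structures article for
# real payloads.
event = {
    "id": "e4c7f556-d72c-e7f7-1069-1e82ac76ab41",
    "eventType": "Microsoft.HealthcareApis.FhirResourceCreated",
    "data": {"resourceType": "Patient", "resourceFhirId": "abc123"},
}
process_once(event)
process_once(event)  # second (duplicate) delivery is ignored
```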
Generally, we recommend that developers ensure idempotency for the event subscri
[FAQs about the Azure Health Data Services](../healthcare-apis-faqs.md)
-[FAQs about Azure Health Data Services FHIR service](../fhir/fhir-faq.md)
- [FAQs about Azure Health Data Services DICOM service](../dicom/dicom-services-faqs.yml)
+[FAQs about Azure Health Data Services FHIR service](../fhir/fhir-faq.md)
+ [FAQs about Azure Health Data Services MedTech service](../iot/iot-connector-faqs.md)

FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
healthcare-apis Events Message Structure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/events/events-message-structure.md
Title: Events message structure - Azure Health Data Services
-description: Learn about the Events message structures and required values.
+description: Learn about the events message structures and required values.
# Events message structures
-In this article, learn about the Events message structures, required and nonrequired elements, and see samples of Events message payloads.
+In this article, learn about the events message structures, required and nonrequired elements, and see samples of events message payloads.
> [!IMPORTANT]
-> Events currently supports only the following operations:
+> Events currently supports the following operations:
>
> * **FhirResourceCreated** - The event emitted after a FHIR resource gets created successfully.
>
In this article, learn about the Events message structures, required and nonrequ
## Next steps
-In this article, you learned about the Events message structures.
+In this article, you learned about the events message structures.
-To learn how to deploy Events using the Azure portal, see
+To learn how to deploy events using the Azure portal, see
> [!div class="nextstepaction"]
-> [Deploy Events using the Azure portal](events-deploy-portal.md)
+> [Deploy events using the Azure portal](events-deploy-portal.md)
FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
healthcare-apis Events Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/events/events-overview.md
Title: What are Events? - Azure Health Data Services
-description: Learn about Events, its features, integrations, and next steps.
+ Title: What are events? - Azure Health Data Services
+description: Learn about events, its features, integrations, and next steps.
Previously updated : 07/11/2023 Last updated : 09/01/2023
-# What are Events?
+# What are events?
> [!NOTE] > [Fast Healthcare Interoperability Resources (FHIR&#174;)](https://www.hl7.org/fhir/) is an open healthcare specification.
-Events are a notification and subscription feature in the Azure Health Data Services. Events enable customers to utilize and enhance the analysis and workflows of structured and unstructured data like vitals and clinical or progress notes, operations data, health data, and medical imaging data.
+Events are a subscription and notification feature in the Azure Health Data Services. Events enable customers to utilize and enhance the analysis and workflows of structured and unstructured data like vitals and clinical or progress notes, operations data, health data, and medical imaging data.
-When FHIR resource changes or Digital Imaging and Communications in Medicine (DICOM) image changes are successfully written to the Azure Health Data Services, the Events feature sends notification messages to Events subscribers. These event notification occurrences can be sent to multiple endpoints to trigger automation ranging from starting workflows to sending email and text messages to support the changes occurring from the health data it originated from. The Events feature integrates with the [Azure Event Grid service](../../event-grid/overview.md) and creates a system topic for the Azure Health Data Services workspace.
+When FHIR resource changes or Digital Imaging and Communications in Medicine (DICOM) image changes are successfully written to the Azure Health Data Services, the events feature sends notification messages to events subscribers. These event notifications can be sent to multiple endpoints to trigger automation, ranging from starting workflows to sending email and text messages, in support of the changes occurring in the originating health data. The events feature integrates with the [Azure Event Grid service](../../event-grid/overview.md) and creates a system topic for the Azure Health Data Services workspace.
> [!IMPORTANT]
-> FHIR resource and DICOM image change data is only written and event messages are sent when the Events feature is turned on. The Event feature doesn't send messages on past resource changes or when the feature is turned off.
+> FHIR resource and DICOM image change data is only written and event messages are sent when the events feature is turned on. The events feature doesn't send messages on past resource changes or when the feature is turned off.
> [!TIP]
> For more information about the features, configurations, and use cases of the Azure Event Grid service, see [Azure Event Grid](../../event-grid/overview.md).

> [!IMPORTANT]
> Events currently supports the following operations:
Events are designed to support growth and changes in healthcare technology needs
## Configurable
-Choose the FHIR and DICOM event types that you want to receive messages about. Use the advanced features like filters, dead-lettering, and retry policies to tune Events message delivery options.
+Choose the FHIR and DICOM event types that you want to receive messages about. Use the advanced features like filters, dead-lettering, and retry policies to tune events message delivery options.
> [!NOTE]
> The advanced features come as part of the Event Grid service.

## Extensible
-Use Events to send FHIR resource and DICOM image change messages to services like [Azure Event Hubs](../../event-hubs/event-hubs-about.md) or [Azure Functions](../../azure-functions/functions-overview.md) to trigger downstream automated workflows to enhance items such as operational data, data analysis, and visibility to the incoming data capturing near real time.
+Use events to send FHIR resource and DICOM image change messages to services like [Azure Event Hubs](../../event-hubs/event-hubs-about.md) or [Azure Functions](../../azure-functions/functions-overview.md) to trigger downstream automated workflows to enhance items such as operational data, data analysis, and visibility to the incoming data capturing near real time.
## Secure
-Built on a platform that supports protected health information compliance with privacy, safety, and security in mind, the Events messages don't transmit sensitive data as part of the message payload.
+Events are built on a platform that supports protected health information compliance with privacy, safety, and security in mind.
-Use [Azure Managed identities](../../active-directory/managed-identities-azure-resources/overview.md) to provide secure access from your Event Grid system topic to the Events message receiving endpoints of your choice.
+Use [Azure Managed identities](../../active-directory/managed-identities-azure-resources/overview.md) to provide secure access from your Event Grid system topic to the events message receiving endpoints of your choice.
## Next steps
-To learn about deploying Events using the Azure portal, see
+To learn about deploying events using the Azure portal, see
> [!div class="nextstepaction"]
-> [Deploy Events using the Azure portal](./events-deploy-portal.md)
+> [Deploy events using the Azure portal](events-deploy-portal.md)
-To learn about the frequently asks questions (FAQs) about Events, see
-
-> [!div class="nextstepaction"]
-> [Frequently asked questions about Events](./events-faqs.md)
+To learn about troubleshooting events, see
-To learn about troubleshooting Events, see
+> [!div class="nextstepaction"]
+> [Troubleshoot events](events-troubleshooting-guide.md)
+To learn about the frequently asked questions (FAQs) about events, see
+
> [!div class="nextstepaction"]
-> [Troubleshoot Events](./events-troubleshooting-guide.md)
+> [Frequently asked questions about events](events-faqs.md)
FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
healthcare-apis Events Troubleshooting Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/events/events-troubleshooting-guide.md
Title: Troubleshoot Events - Azure Health Data Services
-description: Learn how to troubleshoot Events.
+ Title: Troubleshoot events - Azure Health Data Services
+description: Learn how to troubleshoot events.
Last updated 07/12/2023
-# Troubleshoot Events
+# Troubleshoot events
> [!NOTE] > [Fast Healthcare Interoperability Resources (FHIR&#174;)](https://www.hl7.org/fhir/) is an open healthcare specification.
-This article provides resources for troubleshooting Events.
+This article provides resources to troubleshoot events.
> [!IMPORTANT]
-> FHIR resource and DICOM image change data is only written and event messages are sent when the Events feature is turned on. The Event feature doesn't send messages on past FHIR resource or DICOM image changes or when the feature is turned off.
+> FHIR resource and DICOM image change data is only written and event messages are sent when the events feature is turned on. The events feature doesn't send messages on past FHIR resource or DICOM image changes or when the feature is turned off.
## Resources for troubleshooting

> [!IMPORTANT]
-> Events currently supports only the following operations:
+> Events currently supports the following operations:
>
> * **FhirResourceCreated** - The event emitted after a FHIR resource gets created successfully.
>
This article provides resources for troubleshooting Events.
### Events message structures
-Use this resource to learn about the Events message structures, required and nonrequired elements, and see sample Events messages:
-* [Events message structures](./events-message-structure.md)
+Use this resource to learn about the events message structures, required and nonrequired elements, and see sample events messages:
+* [Events message structures](events-message-structure.md)
### How to's
-Use this resource to learn how to deploy Events in the Azure portal:
-* [Deploy Events using the Azure portal](./events-deploy-portal.md)
+Use this resource to learn how to deploy events in the Azure portal:
+* [Deploy events using the Azure portal](events-deploy-portal.md)
> [!IMPORTANT] > The Event Subscription requires access to whichever endpoint you chose to send Events messages to. For more information, see [Enable managed identity for a system topic](../../event-grid/enable-identity-system-topics.md).
-Use this resource to learn how to use Events metrics:
-* [How to use Events metrics](./events-display-metrics.md)
+Use this resource to learn how to use events metrics:
+* [How to use events metrics](events-display-metrics.md)
-Use this resource to learn how to enable diagnostic settings for Events:
-* [How to enable diagnostic settings for Events](./events-export-logs-metrics.md)
+Use this resource to learn how to enable diagnostic settings for events:
+* [How to enable diagnostic settings for events](events-export-logs-metrics.md)
## Contact support

If you have a technical question about events or if you have a support-related issue, see [Create a support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/overview) and complete the required fields under the **Problem description** tab. For more information about Azure support options, see [Azure support plans](https://azure.microsoft.com/support/options/#support-plans).

## Next steps
-In this article, you were provided resources for troubleshooting Events.
+In this article, you were provided resources for troubleshooting events.
-To learn about the frequently asked questions (FAQs) about Events, see
+To learn about the frequently asked questions (FAQs) about events, see
> [!div class="nextstepaction"]
-> [Frequently asked questions about Events](events-faqs.md)
+> [Frequently asked questions about events](events-faqs.md)
FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
healthcare-apis Events Use Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/events/events-use-metrics.md
Title: Use Events metrics - Azure Health Data Services
-description: Learn how use Events metrics.
+ Title: Use events metrics - Azure Health Data Services
+description: Learn how to use events metrics.
Last updated 07/11/2023
-# How to use Events metrics
+# How to use events metrics
> [!NOTE] > [Fast Healthcare Interoperability Resources (FHIR&#174;)](https://www.hl7.org/fhir/) is an open healthcare specification.
-In this article, learn how to use Events metrics using the Azure portal.
+In this article, learn how to use events metrics using the Azure portal.
> [!TIP] > To learn more about Azure Monitor and metrics, see [Azure Monitor Metrics overview](../../azure-monitor/essentials/data-platform-metrics.md). > [!NOTE]
-> For the purposes of this article, an [Azure Event Hubs](../../event-hubs/event-hubs-about.md) was used as the Events message endpoint.
+> For the purposes of this article, an [Azure Event Hubs](../../event-hubs/event-hubs-about.md) was used as the events message endpoint.
## Use metrics
In this article, learn how to use Events metrics using the Azure portal.
:::image type="content" source="media\events-display-metrics\events-metrics-subscription.png" alt-text="Screenshot of select the metrics button." lightbox="media\events-display-metrics\events-metrics-subscription.png":::
-4. From this page, notice that the Event Hubs received the incoming message presented in the previous Events subscription metrics pages.
+4. From this page, notice that the Event Hubs received the incoming message presented in the previous Events Subscription metrics pages.
:::image type="content" source="media\events-display-metrics\events-metrics-event-hub.png" alt-text="Screenshot of displaying event hubs metrics." lightbox="media\events-display-metrics\events-metrics-event-hub.png"::: ## Next steps
-In this tutorial, you learned how to use Events metrics using the Azure portal.
+In this tutorial, you learned how to use events metrics using the Azure portal.
-To learn how to export Events Azure Event Grid system diagnostic logs and metrics, see
+To learn how to enable events diagnostic settings, see
> [!div class="nextstepaction"]
-> [Enable diagnostic settings for Events](events-enable-diagnostic-settings.md)
+> [Enable diagnostic settings for events](events-enable-diagnostic-settings.md)
FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
healthcare-apis Configure Import Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/configure-import-data.md
The FHIR service supports $import operation that allows you to import data into
* Incremental mode is optimized to load data into the FHIR server periodically and doesn't block writes via the API. It also allows you to load lastUpdated and versionId from resource Meta (if present in the resource JSON).
-> [!IMPORTANT]
-> Incremental mode capability is currently in preview and offered free of charge. With General Availability, use of Incremental import will incur charges.
-> Preview APIs and SDKs are provided without a service-level agreement. We recommend that you don't use them for production workloads. Some features might not be supported, or they might have constrained capabilities.
-> For more information, review [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-
In this document, we go over the three steps used in configuring import settings on the FHIR service:

1. Enable managed identity on the FHIR service.
To specify the Azure Storage account in JSON view, you need to use [REST API](/r
The following steps walk through setting configurations for initial and incremental import mode. Choose the right import mode for your use case.
-### Step 3.1: Set import configuration for Initial import mode.
+### Step 3a: Set import configuration for Initial import mode.
Make the following changes to the JSON:

1. Set enabled in importConfiguration to **true**.
2. Update the integrationDataStore with the target storage account name.
Do following changes to JSON:
After you've completed this final step, you're ready to perform **Initial mode** import using $import.
-### Step 3.2: Set import configuration for Incremental import mode.
+### Step 3b: Set import configuration for Incremental import mode.
Make the following changes to the JSON:

1. Set enabled in importConfiguration to **true**.
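For orientation, here's a hedged sketch of updating the importConfiguration through the Azure Resource Manager REST API with Python. The property names mirror the steps above; the API version and the initialImportMode flag value are assumptions to verify against your service before use.

```python
import requests

# Hypothetical ARM identifiers; substitute your own values.
SUB, RG, WS, FHIR = "<subscription-id>", "<resource-group>", "<workspace>", "<fhir-service>"
ARM_TOKEN = "<arm-access-token>"  # token for https://management.azure.com
API_VERSION = "2023-02-28"        # assumed; check the current API version

# Incremental-mode import settings per the steps above.
body = {
    "properties": {
        "importConfiguration": {
            "enabled": True,
            "initialImportMode": False,   # False => incremental mode
            "integrationDataStore": "<target-storage-account-name>",
        }
    }
}

url = (f"https://management.azure.com/subscriptions/{SUB}/resourceGroups/{RG}"
       f"/providers/Microsoft.HealthcareApis/workspaces/{WS}/fhirservices/{FHIR}")
resp = requests.patch(url, json=body,
                      params={"api-version": API_VERSION},
                      headers={"Authorization": f"Bearer {ARM_TOKEN}"})
resp.raise_for_status()
```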
healthcare-apis Configure Settings Convert Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/configure-settings-convert-data.md
Previously updated : 08/03/2022 Last updated : 08/28/2023
To access and use the default templates for your conversion requests, ensure tha
> > The default templates are provided only to help you get started with your data conversion workflow. These default templates are not intended for production and might change when Microsoft releases updates for the FHIR service. To have consistent data conversion behavior across different versions of the FHIR service, you must do the following: >
-> 1. Host your own copy of the templates in an Azure Container Registry instance.
+> 1. Host your own copy of the templates in an [Azure Container Registry](../../container-registry/container-registry-intro.md) (ACR) instance.
> 2. Register the templates to the FHIR service. > 3. Use your registered templates in your API calls. > 4. Verify that the conversion behavior meets your requirements.
You can use the [FHIR Converter Visual Studio Code extension](https://marketplac
> [!NOTE] > The FHIR Converter extension for Visual Studio Code is available for HL7v2, C-CDA, and JSON Liquid templates. FHIR STU3 to FHIR R4 Liquid templates are currently not supported.
-The provided default templates can be used as a base starting point if needed, on top of which your customizations can be added. When making updates to the templates, consider following these guidelines to avoid unintended conversion results. The template should be authored in a way such that it yields a valid structure for a FHIR Bundle resource.
+The provided default templates can be used as a starting point if needed, on top of which your customizations can be added. When making updates to the templates, consider following these guidelines to avoid unintended conversion results. The template should be authored so that it yields a valid structure for a FHIR bundle resource.
For instance, the Liquid templates should have a format such as the following code:
For instance, the Liquid templates should have a format such as the following co
} ```
-The overall template follows the structure and expectations for a FHIR Bundle resource, with the FHIR Bundle JSON being at the root of the file. If you choose to add custom fields to the template that aren't part of the FHIR specification for a bundle resource, the conversion request could still succeed. However, the converted result could potentially have unexpected output and wouldn't yield a valid FHIR Bundle resource that can be persisted in the FHIR service as is.
+The overall template follows the structure and expectations for a FHIR bundle resource, with the FHIR bundle JSON being at the root of the file. If you choose to add custom fields to the template that aren't part of the FHIR specification for a bundle resource, the conversion request could still succeed. However, the converted result could potentially have unexpected output and wouldn't yield a valid FHIR bundle resource that can be persisted in the FHIR service as is.
For example, consider the following code:
For example, consider the following code:
} ```
-In the example code, two example custom fields `customfield_message` and `customfield_data` that aren't FHIR properties per the specification and the FHIR Bundle resource seem to be nested under `customfield_data` (that is, the FHIR Bundle JSON isn't at the root of the file). This template doesn't align with the expected structure around a FHIR Bundle resource. As a result, the conversion request might succeed using the provided template. However, the returned converted result could potentially have unexpected output (due to certain post conversion processing steps being skipped). It wouldn't be considered a valid FHIR Bundle (since it's nested and has non FHIR specification properties) and attempting to persist the result in your FHIR service fails.
+In the example code, there are two custom fields, `customfield_message` and `customfield_data`, that aren't FHIR properties per the specification, and the FHIR bundle resource appears nested under `customfield_data` (that is, the FHIR bundle JSON isn't at the root of the file). This template doesn't align with the expected structure of a FHIR bundle resource. As a result, the conversion request might succeed using the provided template. However, the returned converted result could potentially have unexpected output (due to certain post-conversion processing steps being skipped). It wouldn't be considered a valid FHIR bundle (since it's nested and has non-FHIR specification properties), and attempting to persist the result in your FHIR service fails.
## Host your own templates
-We recommend that you host your own copy of templates in an Azure Container Registry (ACR) instance. Hosting your own templates and using them for `$convert-data` operations involves the following six steps:
+It's recommended that you host your own copy of templates in an [Azure Container Registry](../../container-registry/container-registry-intro.md) (ACR) instance. ACR can be used to host your custom templates and supports versioning.
+
+Hosting your own templates and using them for `$convert-data` operations involves the following seven steps:
1. [Create an Azure Container Registry instance](#step-1-create-an-azure-container-registry-instance) 2. [Push the templates to your Azure Container Registry instance](#step-2-push-the-templates-to-your-azure-container-registry-instance)
-3. [Enable Azure Managed Identity in your FHIR service instance](#step-3-enable-azure-managed-identity-in-your-fhir-service-instance)
+3. [Enable Azure Managed identity in your FHIR service instance](#step-3-enable-azure-managed-identity-in-your-fhir-service-instance)
4. [Provide Azure Container Registry access to the FHIR service managed identity](#step-4-provide-azure-container-registry-access-to-the-fhir-service-managed-identity) 5. [Register the Azure Container Registry server in the FHIR service](#step-5-register-the-azure-container-registry-server-in-the-fhir-service) 6. [Configure the Azure Container Registry firewall for secure access](#step-6-configure-the-azure-container-registry-firewall-for-secure-access)
+7. [Verify the $convert-data operation](#step-7-verify-the-convert-data-operation)
### Step 1: Create an Azure Container Registry instance
-Read the [Introduction to container registries in Azure](../../container-registry/container-registry-intro.md) and follow the instructions for creating your own Azure Container Registry instance. We recommend that you place your Azure Container Registry instance in the same resource group as your FHIR service.
+Read the [Introduction to container registries in Azure](../../container-registry/container-registry-intro.md) and follow the instructions for creating your own ACR instance. We recommend that you place your ACR instance in the same resource group as your FHIR service.
### Step 2: Push the templates to your Azure Container Registry instance
-After you create an Azure Container Registry instance, you can use the **FHIR Converter: Push Templates** command in the [FHIR Converter extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-health-fhir-converter) to push your custom templates to your Azure Container Registry instance. Alternatively, you can use the [Template Management CLI tool](https://github.com/microsoft/FHIR-Converter/blob/main/docs/TemplateManagementCLI.md) for this purpose.
+After you create an ACR instance, you can use the **FHIR Converter: Push Templates** command in the [FHIR Converter extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-health-fhir-converter) to push your custom templates to your ACR instance. Alternatively, you can use the [Template Management CLI tool](https://github.com/microsoft/FHIR-Converter/blob/main/docs/TemplateManagementCLI.md) for this purpose.
-To maintain different versions of custom templates in your ACR, you may push the image containing your custom templates into your ACR instance with different image tags.
+To maintain different versions of custom templates in your Azure Container Registry, you may push the image containing your custom templates into your ACR instance with different image tags.
* For more information about ACR registries, repositories, and artifacts, see [About registries, repositories, and artifacts](../../container-registry/container-registry-concepts.md).
* For more information about image tag best practices, see [Recommendations for tagging and versioning container images](../../container-registry/container-registry-image-tag-version.md).

To reference specific template versions in the API, be sure to use the exact image name and tag that contains the versioned template to be used. For the API parameter `templateCollectionReference`, use the appropriate **image name + tag** (for example: `<RegistryServer>/<imageName>:<imageTag>`).
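As a hedged sketch of how such a versioned, tag-based reference might appear as one parameter inside the `$convert-data` request body (the registry server `myacr.azurecr.io`, the image name, and the tag are hypothetical):

```json
{
  "name": "templateCollectionReference",
  "valueString": "myacr.azurecr.io/hl7v2customtemplates:v1.0"
}
```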
-### Step 3: Enable Azure Managed Identity in your FHIR service instance
+### Step 3: Enable Azure Managed identity in your FHIR service instance
1. Go to your instance of the FHIR service in the Azure portal, and then select the **Identity** option.
-2. Change the status to **On** to enable Managed Identity in the FHIR service.
+2. Change the **Status** to **On** and select **Save** to enable the system-assigned managed identity in the FHIR service.
- ![Screenshot of the FHIR pane for enabling the managed identity feature.](media/convert-data/fhir-mi-enabled.png#lightbox)
### Step 4: Provide Azure Container Registry access to the FHIR service managed identity
To reference specific template versions in the API, be sure to use the exact ima
2. Select **Add** > **Add role assignment**. If the **Add role assignment** option is unavailable, ask your Azure administrator to grant you the permissions for performing this task.
- ![Screenshot of the "Access control" pane and the "Add role assignment" menu.](../../../includes/role-based-access-control/media/add-role-assignment-menu-generic.png)
-
- :::image type="content" source="../../../includes/role-based-access-control/media/add-role-assignment-menu-generic.png" alt-text="Screenshot of the 'Access control' pane and the 'Add role assignment' menu.":::
+ :::image type="content" source="../../../includes/role-based-access-control/media/add-role-assignment-menu-generic.png" alt-text="Screenshot of the Access control pane and the 'Add role assignment' menu.":::
3. On the **Role** pane, select the [AcrPull](../../role-based-access-control/built-in-roles.md#acrpull) role.
- [![Screenshot showing the "Add role assignment" pane.](../../../includes/role-based-access-control/media/add-role-assignment-page.png)](../../../includes/role-based-access-control/media/add-role-assignment-page.png#lightbox)
+ :::image type="content" source="../../../includes/role-based-access-control/media/add-role-assignment-page.png" alt-text="Screenshot showing the Add role assignment pane." lightbox="../../../includes/role-based-access-control/media/add-role-assignment-page.png":::
4. On the **Members** tab, select **Managed identity**, and then select **Select members**.
For more information about assigning roles in the Azure portal, see [Azure built
### Step 5: Register the Azure Container Registry server in the FHIR service
-You can register the Azure Container Registry server by using the Azure portal.
+You can register the ACR server by using the Azure portal.
To use the Azure portal:
To use the Azure portal:
3. Select **Add** and then, in the dropdown list, select your registry server.
4. Select **Save**.
- ![Screenshot of the Artifacts screen for registering an Azure Container Registry with a FHIR service.](media/convert-data/fhir-acr-add-registry.png#lightbox)
+ :::image type="content" source="media/convert-data/configure-settings-convert-data/fhir-acr-add-registry.png" alt-text="Screenshot of the Artifacts screen for registering an Azure Container Registry with a FHIR service." lightbox="media/convert-data/configure-settings-convert-data/fhir-acr-add-registry.png":::
-You can register up to 20 Azure Container Registry servers in the FHIR service.
+You can register up to 20 ACR servers in the FHIR service.
> [!NOTE]
> It might take a few minutes for the registration to take effect.

### Step 6: Configure the Azure Container Registry firewall for secure access
-1. In the Azure portal, on the left pane, select **Networking** for the Azure Container Registry instance.
-
- ![Screenshot of the Networking screen for configuring an Azure Container Registry firewall.](media/convert-data/networking-container-registry.png#lightbox)
-
-2. On the **Public access** tab, select **Selected networks**.
-
-3. In the **Firewall** section, specify the IP address in the **Address range** box.
+There are several methods for securing your ACR instance by using its built-in firewall and network features, depending on your particular use case:
-Add IP ranges to allow access from the Internet or your on-premises networks.
+* [Connect privately to an Azure container registry using Azure Private Link](../../container-registry/container-registry-private-link.md)
+* [Configure public IP network rules](../../container-registry/container-registry-access-selected-networks.md)
+* [Azure Container Registry mitigating data exfiltration with dedicated data endpoints](../../container-registry/container-registry-dedicated-data-endpoints.md)
+* [Restrict access to a container registry using a service endpoint in an Azure virtual network](../../container-registry/container-registry-vnet.md)
+* [Allow trusted services to securely access a network-restricted container registry](../../container-registry/allow-access-trusted-services.md)
+* [Configure rules to access an Azure container registry behind a firewall](../../container-registry/container-registry-firewall-access-rules.md)
+* [Azure IP Ranges and Service Tags - Public Cloud](https://www.microsoft.com/download/details.aspx?id=56519)
-The following table lists the IP addresses for the Azure regions where the FHIR service is available:
-
-| Azure region | Public IP address |
-|:|:|
-| Australia East | 20.53.47.210 |
-| Brazil South | 191.238.72.227 |
-| Canada Central | 20.48.197.161 |
-| Central India | 20.192.47.66 |
-| East US | 20.62.134.242, 20.62.134.244, 20.62.134.245 |
-| East US 2 | 20.62.60.115, 20.62.60.116, 20.62.60.117 |
-| France Central | 51.138.211.19 |
-| Germany North | 51.116.60.240 |
-| Germany West Central | 20.52.88.224 |
-| Japan East | 20.191.167.146 |
-| Japan West | 20.189.228.225 |
-| Korea Central | 20.194.75.193 |
-| North Central US | 52.162.111.130, 20.51.0.209 |
-| North Europe | 52.146.137.179 |
-| Qatar Central | 20.21.36.225 |
-| South Africa North | 102.133.220.199 |
-| South Central US | 20.65.134.83 |
-| Southeast Asia | 20.195.67.208 |
-| Sweden Central | 51.12.28.100 |
-| Switzerland North | 51.107.247.97 |
-| UK South | 51.143.213.211 |
-| UK West | 51.140.210.86 |
-| West Central US | 13.71.199.119 |
-| West Europe | 20.61.103.243, 20.61.103.244 |
-| West US 2 | 20.51.13.80, 20.51.13.84, 20.51.13.85 |
-| West US 3 | 20.150.245.165 |
-
-You can also completely disable public access to your Azure Container Registry instance while still allowing access from your FHIR service. To do so:
-
-1. In the Azure portal container registry, select **Networking**.
-2. Select the **Public access** tab, select **Disabled**, and then select **Allow trusted Microsoft services to access this container registry**.
-
-![Screenshot of the "Networking" pane for disabling public network access to an Azure Container Registry instance.](media/convert-data/configure-private-network-container-registry.png#lightbox)
+> [!NOTE]
+> The FHIR service has been registered as a trusted Microsoft service with Azure Container Registry.
-### Verify the $convert-data operation
+### Step 7: Verify the $convert-data operation
Make a call to the `$convert-data` operation by specifying your template reference in the `templateCollectionReference` parameter: `<RegistryServer>/<imageName>@<imageDigest>`
-You should receive a `Bundle` response that contains the health data converted into the FHIR format.
+You should receive a `bundle` response that contains the health data converted into the FHIR format.
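+
+As a hedged, minimal sketch of such a verification call (the registry server, image name, digest placeholder, and HL7v2 message below are hypothetical), the request might look like the following:
+
+```http
+POST {{fhirurl}}/$convert-data
+Content-Type: application/json
+
+{
+  "resourceType": "Parameters",
+  "parameter": [
+    { "name": "inputData", "valueString": "MSH|^~\\&|SIMHOSP|SFAC|RAPP|RFAC|20200508131015||ADT^A01|517|T|2.3|||AL||44|ASCII" },
+    { "name": "inputDataType", "valueString": "Hl7v2" },
+    { "name": "templateCollectionReference", "valueString": "myacr.azurecr.io/hl7v2customtemplates@sha256:<imageDigest>" },
+    { "name": "rootTemplate", "valueString": "ADT_A01" }
+  ]
+}
+```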
## Next steps
-In this article, you've learned how to configure settings for `$convert-data` for converting health data into FHIR by using the FHIR service in Azure Health Data Services.
+In this article, you've learned how to configure the settings for `$convert-data` to begin converting various health data formats into the FHIR format.
For an overview of `$convert-data`, see
healthcare-apis Convert Data With Azure Data Factory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/convert-data-with-azure-data-factory.md
+
+ Title: Transform HL7v2 data to FHIR R4 with the $convert-data operation and Azure Data Factory - Azure Health Data Services
+description: Learn how to transform HL7v2 data to FHIR R4 with the $convert-data operation and Azure Data Factory
++++ Last updated : 09/05/2023+++
+# Transform HL7v2 data to FHIR R4 with the $convert-data operation and Azure Data Factory
+
+> [!NOTE]
+> [Fast Healthcare Interoperability Resources (FHIR&#174;)](https://www.hl7.org/fhir/) is an open healthcare specification.
+
+In this article, we detail how to use [Azure Data Factory (ADF)](../../data-factory/introduction.md) with the `$convert-data` operation to transform [HL7v2](https://www.hl7.org/implement/standards/product_brief.cfm?product_id=185) data to [FHIR R4](https://www.hl7.org/fhir/R4/). The transformed results are then persisted within an [Azure storage account](../../storage/common/storage-account-overview.md) with [Azure Data Lake Storage (ADLS) Gen2](../../storage/blobs/data-lake-storage-introduction.md) capabilities.
+
+## Prerequisites
+
+Before getting started, ensure you have taken the following steps:
+
+1. Deploy an instance of the [FHIR service](fhir-portal-quickstart.md). The FHIR service is used to invoke the [`$convert-data`](overview-of-convert-data.md) operation.
+2. By default, the ADF pipeline in this scenario uses the [predefined templates provided by Microsoft](configure-settings-convert-data.md#default-templates) for conversion. If your use case requires customized templates, set up your [Azure Container Registry instance to host your own templates](configure-settings-convert-data.md#host-your-own-templates) to be used for the conversion operation.
+3. Create storage account(s) with [Azure Data Lake Storage Gen2 (ADLS Gen2) capabilities](../../storage/blobs/create-data-lake-storage-account.md) by enabling a hierarchical namespace, and container(s) to store the data to read from and write to.
+
+ > [!NOTE]
+ > You can create and use either one or separate ADLS Gen2 accounts and containers to:
+ > * Store the HL7v2 data to be transformed (for example: the source account and container the pipeline will read the data to be transformed from).
+ > * Store the transformed FHIR R4 bundles (for example: the destination account and container the pipeline will write the transformed result to).
+ > * Store the errors encountered during the transformation (for example: the destination account and container the pipeline will write execution errors to).
+
+4. Create an instance of [ADF](../../data-factory/quickstart-create-data-factory.md), which serves as a business logic orchestrator. Ensure that a [system-assigned managed identity](../../data-factory/data-factory-service-identity.md) has been enabled.
+5. Add the following [Azure role-based access control (Azure RBAC)](../../role-based-access-control/overview.md) assignments to the ADF system-assigned managed identity:
+ * **FHIR Data Converter** role to [grant permission to the FHIR service](../../healthcare-apis/configure-azure-rbac.md#assign-roles-for-the-fhir-service).
+ * **Storage Blob Data Contributor** role to [grant permission to the ADLS Gen2 account](../../storage/blobs/assign-azure-role-data-access.md?tabs=portal).
+
+## Configure an Azure Data Factory pipeline
+
+In this example, an ADF [pipeline](../../data-factory/concepts-pipelines-activities.md?tabs=data-factory) is used to transform HL7v2 data and persist the transformed FHIR R4 bundle as a JSON file within the configured destination ADLS Gen2 account and container.
+
+1. From the Azure portal, open your Azure Data Factory instance and select **Launch Studio** to begin.
+
+ :::image type="content" source="media/convert-data/convert-data-with-azure-data-factory/select-launch-studio.png" alt-text="Screenshot of Azure Data Factory." lightbox="media/convert-data/convert-data-with-azure-data-factory/select-launch-studio.png":::
+
+## Create a pipeline
+
+Azure Data Factory pipelines are a collection of activities that perform a task. This section details the creation of a pipeline that performs the task of transforming HL7v2 data to FHIR R4 bundles. Pipelines can be executed ad hoc or on a regular basis by using defined triggers.
+
+1. Select **Author** from the navigation menu. In the **Factory Resources** pane, select the **+** to add a new resource. Select **Pipeline** and then **Template gallery** from the menu.
+
+   :::image type="content" source="media/convert-data/convert-data-with-azure-data-factory/open-template-gallery.png" alt-text="Screenshot of selecting the Template gallery from the Factory Resources pane." lightbox="media/convert-data/convert-data-with-azure-data-factory/open-template-gallery.png":::
+
+2. In the Template gallery, search for **HL7v2**. Select the **Transform HL7v2 health data to FHIR R4 format and write to ADLS Gen2** tile and then select **Continue**.
+
+ :::image type="content" source="media/convert-data/convert-data-with-azure-data-factory/search-for-template.png" alt-text="Screenshot of the search for the Transform HL7v2 health data to FHIR R4 format and write to ADLS Gen2 template." lightbox="media/convert-data/convert-data-with-azure-data-factory/search-for-template.png":::
+
+3. Select **Use this template** to create the new pipeline.
+
+ :::image type="content" source="media/convert-data/convert-data-with-azure-data-factory/use-this-template.png" alt-text="Screenshot of the Transform HL7v2 health data to FHIR R4 format and write to ADLS Gen2 template preview." lightbox="media/convert-data/convert-data-with-azure-data-factory/use-this-template.png":::
+
+ ADF imports the template, which is composed of an end-to-end main pipeline and a set of individual pipelines/activities. The main end-to-end pipeline for this scenario is named **Transform HL7v2 health data to FHIR R4 format and write to ADLS Gen2** and can be accessed by selecting **Pipelines**. The main pipeline invokes the other individual pipelines/activities under the subcategories of **Extract**, **Load**, and **Transform**.
+
+ :::image type="content" source="media/convert-data/convert-data-with-azure-data-factory/overview-pipeline-template.png" alt-text="Screenshot of the Transform HL7v2 health data to FHIR R4 format and write to ADLS Gen2 Azure Data Factory template." lightbox="media/convert-data/convert-data-with-azure-data-factory/overview-pipeline-template.png":::
+
+ If needed, you can make any modifications to the pipelines/activities to fit your scenario (for example: if you don't intend to persist the results in a destination ADLS Gen2 storage account, you can modify the pipeline to remove the **Write converted result to ADLS Gen2** pipeline altogether).
+
+4. Select the **Parameters** tab and provide values based on your configuration/setup. Some of the values are based on the resources set up as part of the [prerequisites](#prerequisites).
+
+ :::image type="content" source="media/convert-data/convert-data-with-azure-data-factory/input-pipeline-parameters.png" alt-text="Screenshot of the pipeline parameters options." lightbox="media/convert-data/convert-data-with-azure-data-factory/input-pipeline-parameters.png":::
+
+   * **fhirService** - Provide the URL of the FHIR service to target for the `$convert-data` operation. For example: `https://**myservice-fhir**.fhir.azurehealthcareapis.com/`.
+   * **acrServer** - Provide the name of the ACR server to pull the Liquid templates to use for conversion. By default, this option is set to `microsofthealth`, which contains the predefined template collection published by Microsoft. To use your own template collection, replace this value with your ACR instance that hosts your templates and is registered to your FHIR service.
+   * **templateReference** - Provide the reference to the image within the ACR that contains the Liquid templates to use for conversion. By default, this option is set to `hl7v2templates:default` to pull the latest published Liquid templates for HL7v2 conversion by Microsoft. To use your own template collection, replace this value with the reference to the image within your ACR that hosts your templates and is registered to your FHIR service.
+   * **inputStorageAccount** - The primary endpoint of the ADLS Gen2 storage account containing the input HL7v2 data to transform. For example: `https://**mystorage**.blob.core.windows.net`.
+   * **inputStorageFolder** - The container and folder path within the configured **inputStorageAccount**. For example: `**mycontainer**/**myHL7v2folder**`.
+
+ > [!NOTE]
+ > This can be a static folder path or can be left blank here and dynamically configured when setting up storage account triggers for this pipeline execution (refer to the section titled [Executing a pipeline](#executing-a-pipeline)).
+
+   * **inputStorageFile** - The name of the file within the configured **inputStorageAccount** and **inputStorageFolder** that contains the HL7v2 data to transform. For example: `**myHL7v2file**.hl7`.
+
+ > [!NOTE]
+   > This can be a static file name or can be left blank here and dynamically configured when setting up storage account triggers for this pipeline execution (refer to the section titled [Executing a pipeline](#executing-a-pipeline)).
+
+ * **outputStorageAccount** ΓÇô The primary endpoint of the ADLS Gen2 storage account to store the transformed FHIR bundle. For example: `https://**mystorage**.blob.core.windows.net`.
+   * **outputStorageFolder** - The container and folder path within the configured **outputStorageAccount** to which the transformed FHIR bundle JSON files are written.
+ * **rootTemplate** ΓÇô The root template to use while transforming the provided HL7v2 data. For example: ADT_A01, ADT_A02, ADT_A03, ADT_A04, ADT_A05, ADT_A08, ADT_A11, ADT_A13, ADT_A14, ADT_A15, ADT_A16, ADT_A25, ADT_A26, ADT_A27, ADT_A28, ADT_A29, ADT_A31, ADT_A47, ADT_A60, OML_O21, ORU_R01, ORM_O01, VXU_V04, SIU_S12, SIU_S13, SIU_S14, SIU_S15, SIU_S16, SIU_S17, SIU_S26, MDM_T01, MDM_T02.
+
+ > [!NOTE]
+   > This can be a static value or can be left blank here and dynamically configured when setting up storage account triggers for this pipeline execution (refer to the section titled [Executing a pipeline](#executing-a-pipeline)).
+
+   * **errorStorageFolder** - The container and folder path within the configured **outputStorageAccount** to which the errors encountered during execution are written. For example: `**mycontainer**/**myerrorfolder**`.
+
+5. You can configure more pipeline settings under the **Settings** tab based on your requirements.
+
+ :::image type="content" source="media/convert-data/convert-data-with-azure-data-factory/settings-tab-overview.png" alt-text="Screenshot of the Settings option." lightbox="media/convert-data/convert-data-with-azure-data-factory/settings-tab-overview.png":::
+
+6. You can also optionally debug your pipeline to verify the setup. Select **Debug**.
+
+ :::image type="content" source="media/convert-data/convert-data-with-azure-data-factory/debug-pipeline.png" alt-text="Screenshot of the Azure Data Factory debugging option." lightbox="media/convert-data/convert-data-with-azure-data-factory/debug-pipeline.png":::
+
+7. Verify your pipeline run parameters and select **OK**.
+
+ :::image type="content" source="media/convert-data/convert-data-with-azure-data-factory/verify-pipeline-parameters.png" alt-text="Screenshot of the Azure Data Factory verify pipeline parameters." lightbox="media/convert-data/convert-data-with-azure-data-factory/verify-pipeline-parameters.png":::
+
+8. You can monitor the debug execution of the pipeline under the **Output** tab.
+
+ :::image type="content" source="media/convert-data/convert-data-with-azure-data-factory/output-pipeline-status.png" alt-text="Screenshot of the pipeline output status." lightbox="media/convert-data/convert-data-with-azure-data-factory/output-pipeline-status.png":::
+
+9. Once you're satisfied with your pipeline setup, select **Publish all**.
+
+ :::image type="content" source="media/convert-data/convert-data-with-azure-data-factory/select-publish-all.png" alt-text="Screenshot of the Azure Data Factory Publish all option." lightbox="media/convert-data/convert-data-with-azure-data-factory/select-publish-all.png":::
+
+10. Select **Publish** to save your pipeline within your own ADF instance.
+
+ :::image type="content" source="media/convert-data/convert-data-with-azure-data-factory/select-publish.png" alt-text="Screenshot of the Azure Data Factory Publish option." lightbox="media/convert-data/convert-data-with-azure-data-factory/select-publish.png":::
+
+## Executing a pipeline
+
+You can execute (or run) a pipeline either manually or by using a trigger. There are different types of triggers that can be created to help automate your pipeline execution. For example:
+
+* **Manual trigger**
+* **Schedule trigger**
+* **Tumbling window trigger**
+* **Event-based trigger**
+
+For more information on the different trigger types and how to configure them, see [Pipeline execution and triggers in Azure Data Factory or Azure Synapse Analytics](../../data-factory/concepts-pipeline-execution-triggers.md).
+
+By setting triggers, you can simulate batch transformation of HL7v2 data. The pipeline executes automatically based on the configured trigger parameters without requiring individual invocation of the `$convert-data` operation for each input message.
+
+> [!IMPORTANT]
+> In a scenario with batch processing of HL7v2 messages, this template does not take sequencing into account, so post-processing is needed if sequencing is a requirement.
+
+## Create a new storage event trigger
+
+In the following example, a storage event trigger is used. The storage event trigger automatically triggers the pipeline whenever a new HL7v2 data blob file to be processed is uploaded to the ADLS Gen2 storage account.
+
+To configure the pipeline to automatically run whenever a new HL7v2 blob file in the source ADLS Gen2 storage account is available to transform, follow these steps:
+
+1. Select **Author** from the navigation menu. Select the pipeline configured in the previous section and select **Add trigger** and **New/Edit** from the menu bar.
+
+ :::image type="content" source="media/convert-data/convert-data-with-azure-data-factory/select-add-trigger.png" alt-text="Screenshot of the Azure Data Factory Add trigger and New/Edit options." lightbox="media/convert-data/convert-data-with-azure-data-factory/select-add-trigger.png":::
+
+2. In the **Add triggers** panel, select the **Choose trigger** dropdown and then select **New**.
+3. Enter a **Name** and **Description** for the trigger.
+4. Select **Storage events** as the **Type**.
+5. Configure the storage account details containing the source HL7v2 data to transform (for example: ADLS Gen2 storage account name, container name, blob path, etc.) to reference for the trigger.
+6. Select **Blob created** as the **Event**.
+
+ :::image type="content" source="media/convert-data/convert-data-with-azure-data-factory/create-new-storage-event-trigger.png" alt-text="Screenshot of creating a new storage event trigger." lightbox="media/convert-data/convert-data-with-azure-data-factory/create-new-storage-event-trigger.png":::
+
+7. Select **Continue** to see the **Data preview** for the configured settings.
+8. Select **Continue** again at **Data preview** to continue configuring the trigger run parameters.
+
+## Configure trigger run parameters
+
+Triggers not only define when to run a pipeline, they also include [parameters](../../data-factory/how-to-use-trigger-parameterization.md) that are passed to the pipeline execution. You can configure pipeline parameters dynamically using the trigger run parameters.
+
+The storage event trigger captures the folder path and file name of the blob into the properties `@triggerBody().folderPath` and `@triggerBody().fileName`. To use the values of these properties in a pipeline, you must map the properties to pipeline parameters. After mapping the properties to parameters, you can access the values captured by the trigger through the `@pipeline().parameters.parameterName` expression throughout the pipeline. For more information, see [Reference trigger metadata in pipeline runs](../../data-factory/how-to-use-trigger-parameterization.md).
+
+For the **Transform HL7v2 health data to FHIR R4 format and write to ADLS Gen2** template, the storage event trigger properties can be used to configure certain pipeline parameters.
+
+> [!NOTE]
+> If no value is supplied during configuration, then the previously configured default value will be used for each parameter.
+
+1. In the **New trigger** pane, within the **Trigger Run Parameters** options, use the following values:
+   * For **inputStorageFolder**, use `@triggerBody().folderPath`. This setting provides the runtime value for the parameter based on the folder path associated with the triggering event (for example: the folder path of the new HL7v2 blob created or updated in the storage account configured in the trigger).
+   * For **inputStorageFile**, use `@triggerBody().fileName`. This setting provides the runtime value for the parameter based on the file associated with the triggering event (for example: the file name of the new HL7v2 blob created or updated in the storage account configured in the trigger).
+   * For **rootTemplate**, specify the name of the template to be used for the pipeline executions associated with this trigger (for example: `ADT_A01`).
+
+2. Select **OK** to create the new trigger. Be sure to select **Publish** on the menu bar to begin your trigger running on the defined schedule.
+
+ :::image type="content" source="media/convert-data/convert-data-with-azure-data-factory/trigger-run-parameters.png" alt-text="Screenshot of Azure Data Factory trigger parameters." lightbox="media/convert-data/convert-data-with-azure-data-factory/trigger-run-parameters.png":::
+
+After the trigger is published, it can be triggered manually using the **Trigger now** option. If the start time was set for a value in the past, the pipeline starts immediately.
+
+## Monitoring pipeline runs
+
+Trigger runs and their associated pipeline runs can be viewed in the **Monitor** tab. Here, users can browse when each pipeline ran, how long it took to execute, and potentially debug any problems that arose.
++
+## Pipeline execution results
+
+### Transformed FHIR R4 results
+
+Successful pipeline executions result in the transformed FHIR R4 bundles as JSON files in the configured destination ADLS Gen2 storage account and container.
++
+### Errors
+
+Errors encountered during conversion, as part of the pipeline execution, result in error details captured as a JSON file in the configured error destination ADLS Gen2 storage account and container. For information on how to troubleshoot `$convert-data`, see [Troubleshoot $convert-data](troubleshoot-convert-data.md).
++
+## Next steps
+
+In this article, you learned how to use Azure Data Factory templates to create a pipeline to transform HL7v2 data to FHIR R4, persisting the results within an Azure Data Lake Storage Gen2 account. You also learned how to configure a trigger to automate the pipeline execution based on incoming HL7v2 data to be transformed.
+
+For an overview of `$convert-data`, see
+
+> [!div class="nextstepaction"]
+> [Overview of $convert-data](overview-of-convert-data.md)
+
+To learn how to configure settings for `$convert-data` using the Azure portal, see
+
+> [!div class="nextstepaction"]
+> [Configure settings for $convert-data using the Azure portal](configure-settings-convert-data.md)
+
+To learn how to troubleshoot `$convert-data`, see
+
+> [!div class="nextstepaction"]
+> [Troubleshoot $convert-data](troubleshoot-convert-data.md)
+
+FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
healthcare-apis Fhir Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/fhir-faq.md
For more information, see [Supported FHIR features](fhir-features-supported.md).
FHIR service is our implementation of the FHIR specification that sits in the Azure Health Data Services, which allows you to have a FHIR service and a DICOM service within a single workspace. Azure API for FHIR was our initial GA product and is still available as a stand-alone product. The main feature differences are:
* FHIR service has a limit of 4 TB, and Azure API for FHIR supports more than 4 TB.
-* FHIR service support [transaction bundles](https://www.hl7.org/fhir/http.html#transaction).
+* FHIR service supports additional capabilities, such as:
+  * [Transaction bundles](https://www.hl7.org/fhir/http.html#transaction)
+  * [Incremental import](configure-import-data.md)
+  * [Autoscaling](fhir-service-autoscale.md), which is enabled by default
* Azure API for FHIR has more platform features (such as customer-managed keys and cross-region DR) that aren't yet available in the FHIR service in Azure Health Data Services.

### What's the difference between the FHIR service in Azure Health Data Services and the open-source FHIR server?
SMART (Substitutable Medical Applications and Reusable Technology) on FHIR is a
### Does the FHIR service support SMART on FHIR?
-We have a basic SMART on FHIR proxy as part of the managed service. If this doesnΓÇÖt meet your needs, you can use the open-source FHIR proxy for more advanced SMART scenarios.
+Yes, the SMART on FHIR capability is supported by using the [AHDS samples](https://aka.ms/azure-health-data-services-smart-on-fhir-sample); this is referred to as SMART on FHIR (Enhanced). SMART on FHIR (Enhanced) can be considered to meet the requirements of the [SMART on FHIR Implementation Guide (v 1.0.0)](https://hl7.org/fhir/smart-app-launch/1.0.0/) and the [§170.315(g)(10) Standardized API for patient and population services criterion](https://www.healthit.gov/test-method/standardized-api-patient-and-population-services#ccg). For more information, see the [SMART on FHIR (Enhanced) documentation](smart-on-fhir.md).
+ ### Can I create a custom FHIR resource?
There are two basic Delete types supported within the FHIR service. These are [D
### Can I perform health checks on FHIR service?
-To perform health check on FHIR service , enter `{{fhirurl}}/health/check` in the GET request. You should be able to see Status of FHIR service. HTTP Status code response with 200 and OverallStatus as "Healthy" in response, means your health check is succesful.
-In case of errors, you will recieve error response with HTTP status code 404 (Not Found) or status code 500 (Internal Server Error), and detailed information in response body in some scenarios.
+To perform a health check on the FHIR service, enter `{{fhirurl}}/health/check` in a GET request. You should be able to see the status of the FHIR service. An HTTP response with status code 200 and OverallStatus of "Healthy" means your health check is successful.
+In case of errors, you'll receive an error response with HTTP status code 404 (Not Found) or status code 500 (Internal Server Error), and detailed information in the response body in some scenarios.
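+
+For example, a minimal sketch of a health check call and a healthy response (the exact response body shape may vary):
+
+```http
+GET {{fhirurl}}/health/check
+
+HTTP/1.1 200 OK
+{ "overallStatus": "Healthy" }
+```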
## Next steps
healthcare-apis Fhir Service Diagnostic Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/fhir-service-diagnostic-logs.md
Having access to diagnostic logs is essential for monitoring a service and provi
## Next steps
-For an overview of FHIR service, see
+To learn about setting custom headers on diagnostic logs, see
>[!div class="nextstepaction"]
->[What is FHIR service?](overview.md)
+>[Setting custom headers for logs](use-custom-headers-diagnosticlog.md)
(FHIR&#174;) is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Frequently Asked Questions Convert Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/frequently-asked-questions-convert-data.md
Previously updated : 08/03/2023 Last updated : 08/28/2023
You can use the `$convert-data` endpoint as a component within an ETL (extract,
However, the `$convert-data` operation itself isn't an ETL pipeline.
-## How can I persist the data into the FHIR service?
+## Where can I find an example of an ETL pipeline that I can reference?
+
+There's an example published in the [Azure Data Factory Template Gallery](../../data-factory/solution-templates-introduction.md#template-gallery) named **Transform HL7v2 health data to FHIR R4 format and write to ADLS Gen2**. This template transforms HL7v2 messages read from an Azure Data Lake Storage (ADLS) Gen2 or an Azure Blob Storage account into the FHIR R4 format. It then persists the transformed FHIR bundle JSON file into an ADLS Gen2 or a Blob Storage account. Once you're in the Azure Data Factory Template Gallery, you can search for the template.
++
+> [!IMPORTANT]
+> The purpose of this template is to help you get started with an ETL pipeline. Any steps in this pipeline can be removed, added, edited, or customized to fit your needs.
+>
+> In a scenario with batch processing of HL7v2 messages, this template does not take sequencing into account. Post-processing will be needed if sequencing is a requirement.
+
+## How can I persist the converted data into the FHIR service using Postman?
You can use the FHIR service's APIs to persist the converted data into the FHIR service by using `POST {{fhirUrl}}/{{FHIR resource type}}` with the request body containing the FHIR resource to be persisted in JSON format.
-* For more information about using Postman with the FHIR service, see [Access the Azure Health Data Services FHIR service using Postman](use-postman.md).
+For more information about using Postman with the FHIR service, see [Access the Azure Health Data Services FHIR service using Postman](use-postman.md).
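+
+As a hedged, minimal sketch (the Patient resource below is a hypothetical placeholder), persisting a single converted resource might look like the following:
+
+```http
+POST {{fhirUrl}}/Patient
+Content-Type: application/json
+
+{
+  "resourceType": "Patient",
+  "name": [ { "family": "Doe", "given": [ "Jane" ] } ]
+}
+```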
## Is there a difference in the experience of the $convert-data endpoint in Azure API for FHIR versus in the Azure Health Data Services?
The experience and core `$convert-data` operation functionality is similar for b
## The conversion succeeded, does this mean I have a valid FHIR bundle?
-The outcome of FHIR conversion is a FHIR Bundle as a batch.
-* The FHIR Bundle should align with the expectations of the FHIR R4 specification - [Bundle - FHIR v4.0.1](http://hl7.org/fhir/R4/Bundle.html).
+The outcome of FHIR conversion is a FHIR bundle as a batch.
+* The FHIR bundle should align with the expectations of the FHIR R4 specification - [Bundle - FHIR v4.0.1](http://hl7.org/fhir/R4/Bundle.html).
* If you're trying to validate against a specific profile, you need to do some post processing by utilizing the FHIR [`$validate`](validation-against-profiles.md) operation.

## Can I customize a default Liquid template?
healthcare-apis Import Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/import-data.md
# Import Operation

Import operation enables loading Fast Healthcare Interoperability Resources (FHIR&#174;) data to the FHIR server at high throughput using the $import operation. Import supports both initial and incremental data load into FHIR server.
-> [!IMPORTANT]
-> Incremental import mode is currently in public preview and offered free of charge. With General Availability, use of Incremental import will incur charges.
-> Preview APIs and SDKs are provided without a service-level agreement. We recommend that you don't use them for production workloads. Some features might not be supported, or they might have constrained capabilities.
->
-> For more information, review [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-
## Using $import operation

> [!NOTE]
healthcare-apis Overview Of Convert Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/overview-of-convert-data.md
The `$convert-data` operation is integrated into the FHIR service as a REST API
`POST {{fhirurl}}/$convert-data`
-The health data for conversion is delivered to the FHIR service in the body of the `$convert-data` request. If the request is successful, the FHIR service returns a [FHIR Bundle](https://www.hl7.org/fhir/R4/bundle.html) response with the data converted to FHIR R4.
+The health data for conversion is delivered to the FHIR service in the body of the `$convert-data` request. If the request is successful, the FHIR service returns a [FHIR bundle](https://www.hl7.org/fhir/R4/bundle.html) response with the data converted to FHIR R4.
### Parameters
A `$convert-data` operation call packages the health data for conversion inside
} ```
-The outcome of FHIR conversion is a FHIR Bundle as a batch.
-* The FHIR Bundle should align with the expectations of the FHIR R4 specification - [Bundle - FHIR v4.0.1](http://hl7.org/fhir/R4/Bundle.html).
+The outcome of FHIR conversion is a FHIR bundle as a batch.
+* The FHIR bundle should align with the expectations of the FHIR R4 specification - [Bundle - FHIR v4.0.1](http://hl7.org/fhir/R4/Bundle.html).
* If you're trying to validate against a specific profile, you need to do some post processing by utilizing the FHIR [`$validate`](validation-against-profiles.md) operation.

## Next steps
healthcare-apis Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/overview.md
Previously updated : 09/20/2022 Last updated : 09/01/2023
-# What is the FHIR service in Azure Health Data Services?
+# What is the FHIR service?
The FHIR service in Azure Health Data Services enables rapid exchange of health data using the Fast Healthcare Interoperability Resources (FHIR®) data standard. As part of a managed Platform-as-a-Service (PaaS), the FHIR service makes it easy for anyone working with health data to securely store and exchange Protected Health Information ([PHI](https://www.hhs.gov/answers/hipaa/what-is-phi/index.html)) in the cloud.
FHIR servers are essential for interoperability of health data. The FHIR service
- **Healthcare Ecosystems:** While EHRs exist as the primary 'source of truth' in many clinical settings, it isn't uncommon for providers to have multiple databases that aren't connected to one another (often because the data is stored in different formats). Utilizing the FHIR service as a conversion layer between these systems allows organizations to standardize data in the FHIR format. Ingesting and persisting in FHIR enables health data querying and exchange across multiple disparate systems.
-- **Research:** Health researchers have embraced the FHIR standard as it gives the community a shared data model and removes barriers to assembling large datasets for machine learning and analytics. With the FHIR service's data conversion and PHI de-identification capabilities, researchers can prepare HIPAA-compliant data for secondary use before sending the data to Azure machine learning and analytics pipelines. The FHIR service's audit logging and alert mechanisms also play an important role in research workflows.
+- **Research:** Health researchers have embraced the FHIR standard as it gives the community a shared data model and removes barriers to assembling large datasets for machine learning and analytics. With the FHIR service's data conversion and PHI de-identification capabilities, researchers can prepare HIPAA-compliant data for secondary use before sending the data to Azure Machine Learning and analytics pipelines. The FHIR service's audit logging and alert mechanisms also play an important role in research workflows.
## FHIR platforms from Microsoft
healthcare-apis Selectable Search Parameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/selectable-search-parameters.md
-# Selectable search parameter capability
+# Selectable search parameter capability
Searching for resources is fundamental to FHIR. Each resource in FHIR carries information as a set of elements, and search parameters work to query the information in these elements. When the FHIR service in Azure Health Data Services is provisioned, inbuilt search parameters are enabled by default. During the ingestion of data in the FHIR service, specific properties from FHIR resources are extracted and indexed with these search parameters to perform efficient searches. The selectable search parameter functionality allows you to enable or disable inbuilt search parameters. This functionality helps you store more resources in the allocated storage space and improve performance by enabling only the search parameters you need.
healthcare-apis Smart On Fhir https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/smart-on-fhir.md
Below tutorials provide steps to enable SMART on FHIR applications with FHIR Ser
- After registering the application, make note of the applicationId for the client application.
- Ensure you have access to the Azure subscription of the FHIR service, to create resources and add role assignments.
-## SMART on FHIR Enhanced using Azure Health Data Services Samples
+## SMART on FHIR using Azure Health Data Services Samples (SMART on FHIR (Enhanced))
-### Step 1 : Set up FHIR SMART user role
+### Step 1: Set up FHIR SMART user role
Follow the steps listed under section [Manage Users: Assign Users to Role](https://learn.microsoft.com/azure/active-directory/fundamentals/active-directory-users-assign-role-azure-portal). Any user added to this role will be able to access the FHIR service if their requests comply with the SMART on FHIR Implementation Guide, such as requests having an access token that includes a fhirUser claim and a clinical scopes claim. The access granted to the users in this role will then be limited by the resources associated with their fhirUser compartment and the restrictions in the clinical scopes.
-### Step 2 : FHIR server integration with samples
+### Step 2: FHIR server integration with samples
For integration with Azure Health Data Services samples, you would need to follow the steps in samples open source solution.
-**[Click on the link](https://aka.ms/azure-health-data-services-smart-on-fhir-sample)** to navigate to Azure Health Data Service Samples OSS. This steps listed in the document will enable integration of FHIR server with other Azure Services (such as APIM, Azure functions and more).
+**[Click on the link](https://aka.ms/azure-health-data-services-smart-on-fhir-sample)** to navigate to the Azure Health Data Services Samples OSS. The steps listed in the document will enable integration of the FHIR server with other Azure services (such as APIM, Azure Functions, and more).
> [!NOTE]
> Samples are open-source code, and you should review the information and licensing terms on GitHub before using them. They are not part of the Azure Health Data Services and are not supported by Microsoft Support. These samples can be used to demonstrate how Azure Health Data Services and other open-source tools can be used together to demonstrate [§170.315(g)(10) Standardized API for patient and population services criterion](https://www.healthit.gov/test-method/standardized-api-patient-and-population-services#ccg) compliance, using Azure Active Directory as the identity provider workflow.
For integration with Azure Health Data Services samples, you would need to follo
<summary> Click to expand! </summary>

> [!NOTE]
-> This is another option to using "SMART on FHIR Enhanced using AHDS Samples" mentioned above. We suggest you to adopt SMART on FHIR enhanced. SMART on FHIR Proxy option is legacy option.
-> SMART on FHIR enhanced version provides added capabilities than SMART on FHIR proxy. SMART on FHIR enhanced capability can be considered to meet requirements with [SMART on FHIR Implementation Guide (v 1.0.0)](https://hl7.org/fhir/smart-app-launch/1.0.0/) and [§170.315(g)(10) Standardized API for patient and population services criterion](https://www.healthit.gov/test-method/standardized-api-patient-and-population-services#ccg).
+> This is an alternative option to SMART on FHIR (Enhanced) using AHDS samples mentioned above. We suggest you adopt SMART on FHIR (Enhanced); the SMART on FHIR proxy option is a legacy option.
+> SMART on FHIR (Enhanced) provides added capabilities compared to the SMART on FHIR proxy. SMART on FHIR (Enhanced) can be considered to meet requirements with the [SMART on FHIR Implementation Guide (v 1.0.0)](https://hl7.org/fhir/smart-app-launch/1.0.0/) and the [§170.315(g)(10) Standardized API for patient and population services criterion](https://www.healthit.gov/test-method/standardized-api-patient-and-population-services#ccg).
-### Step 1 : Set admin consent for your client application
+### Step 1: Set admin consent for your client application
To use SMART on FHIR, you must first authenticate and authorize the app. The first time you use SMART on FHIR, you also must get administrative consent to let the app access your FHIR resources.
Add the reply URL to the public client application that you created earlier for
-### Step 3 : Get a test patient
+### Step 3: Get a test patient
To test the FHIR service and the SMART on FHIR proxy, you'll need to have at least one patient in the database. If you've not interacted with the API yet, and you don't have data in the database, see [Access the FHIR service using Postman](./../fhir/use-postman.md) to load a patient. Make a note of the ID of a specific patient.
-### Step 4 : Download the SMART on FHIR app launcher
+### Step 4: Download the SMART on FHIR app launcher
The open-source [FHIR Server for Azure repository](https://github.com/Microsoft/fhir-server) includes a simple SMART on FHIR app launcher and a sample SMART on FHIR app. In this tutorial, use this SMART on FHIR launcher locally to test the setup.
Use this command to run the application:
dotnet run
```
-### Step 5 : Test the SMART on FHIR proxy
+### Step 5: Test the SMART on FHIR proxy
After you start the SMART on FHIR app launcher, you can point your browser to `https://localhost:5001`, where you should see the following screen:
healthcare-apis Troubleshoot Convert Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/troubleshoot-convert-data.md
Previously updated : 08/03/2023 Last updated : 08/28/2023
healthcare-apis Use Custom Headers Diagnosticlog https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/use-custom-headers-diagnosticlog.md
+
+ Title: Add data to audit logs by using custom headers - FHIR service
+description: This article describes how to add data to audit logs by using custom HTTP headers in FHIR service.
++++++ Last updated : 06/03/2022+
+
+# Add data to audit logs by using custom HTTP headers in FHIR service
+
+
+## Next steps
+
+In this article, you learned how to add data to audit logs by using custom headers in the FHIR service. For information about FHIR service, see
+
+>[!div class="nextstepaction"]
+>[FHIR Overview](overview.md)
+
+FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
+
healthcare-apis Use Postman https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/use-postman.md
In this article, you learned how to access the FHIR service in Azure Health Data
>[What is FHIR service?](overview.md)
-For a starter collection of sample Postman queries, please see our [samples repo](https://github.com/Azure-Samples/azure-health-data-services-samples/tree/main/samples/sample-postman-queries) on Github.
+For a starter collection of sample Postman queries, please see our [samples repo](https://github.com/Azure-Samples/azure-health-data-services-samples/tree/main/samples/sample-postman-queries) on GitHub.
FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Validation Against Profiles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/validation-against-profiles.md
Per specification, Mode can be specified with `$validate`, such as create and up
- `update`: Checks that the profile is an update against the nominated existing resource (that is, no changes are made to the immutable fields).

There are different ways provided for you to validate resources:
-- Validate an existing resource with validate operation.
-- Validate a new resource with validate operation.
-- Validate on resource CREATE/ UPDATE using header.
+- Option 1: Validate an existing resource with the validate operation.
+- Option 2: Validate a new resource with the validate operation.
+- Option 3: Validate on resource CREATE/UPDATE using a header.
+
+On successful validation of an existing or new resource with the validate operation, the resource isn't persisted into the FHIR service. Use Option 3 (validate on resource CREATE/UPDATE using a header) to persist a successfully validated resource to the FHIR service.
The FHIR service will always return an `OperationOutcome` as the validation results for the $validate operation. The FHIR service does a two-step validation once a resource is passed into the $validate endpoint: the first step is a basic validation to ensure the resource can be parsed. During resource parsing, individual errors need to be fixed before proceeding to the next step. Once the resource is successfully parsed, full validation is conducted as the second step.

> [!NOTE]
> Any value sets that are to be used for validation must be uploaded to the FHIR server. This includes any value sets which are part of the FHIR specification, as well as any value sets defined in implementation guides. Only fully expanded value sets which contain a full list of all codes are supported. Any ValueSet definitions which reference external sources are not supported.
-## Validating an existing resource
+## Option 1: Validating an existing resource
To validate an existing resource, use `$validate` in a `GET` request:
If you'd like to specify a profile as a parameter, you can specify the canonical
`GET https://myworkspace-myfhirserver.fhir.azurehealthcareapis.com/Observation/12345678/$validate?profile=http://hl7.org/fhir/StructureDefinition/heartrate`
-## Validating a new resource
+## Option 2: Validating a new resource
If you'd like to validate a new resource that you're uploading to the server, you can do a `POST` request:
For example:
This request will first validate the resource. The new resource you're specifying in the request will be created after validation. The server will always return an `OperationOutcome` as the result.
-## Validate on resource CREATE/UPDATE using header
+## Option 3: Validate on resource CREATE/UPDATE using header
You can choose when you'd like to validate your resource, such as on resource `CREATE` or `UPDATE`. By default, the FHIR service is configured to opt out of validation on resource `Create/Update`. This capability allows validation on `Create/Update` by using the `x-ms-profile-validation` header. Set `x-ms-profile-validation` to `true` to enable validation.
You can choose when you'd like to validate your resource, such as on resource `C
} } ```
+To enable strict validation, use the `Prefer: handling=strict` header. When you set this header, validation warnings are reported as errors.
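+
+For example, a minimal sketch of a create request that opts into validation with strict handling (the empty Patient body is a hypothetical placeholder):
+
+```http
+POST {{fhirurl}}/Patient
+Content-Type: application/json
+x-ms-profile-validation: true
+Prefer: handling=strict
+
+{
+  "resourceType": "Patient"
+}
+```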
## Next steps
healthcare-apis Healthcare Apis Configure Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/healthcare-apis-configure-private-link.md
Ensure the region for the new private endpoint is the same as the region for you
[![Screen image of the Azure portal Basics Tab.](media/private-link/private-link-basics.png)](media/private-link/private-link-basics.png#lightbox)
-For the resource type, search and select **Microsoft.HealthcareApis/services** from the drop-down list. For the resource, select the workspace in the resource group. The target subresource, **healthcareworkspace**, is automatically populated.
+For the resource type, search and select **Microsoft.HealthcareApis/workspaces** from the drop-down list. For the resource, select the workspace in the resource group. The target subresource, **healthcareworkspace**, is automatically populated.
[![Screen image of the Azure portal Resource tab.](media/private-link/private-link-resource.png)](media/private-link/private-link-resource.png#lightbox)
healthcare-apis Healthcare Apis Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/healthcare-apis-overview.md
Title: What is Azure Health Data Services? description: This article is an overview of Azure Health Data Services. -+ Previously updated : 06/03/2022 Last updated : 09/01/2023 # What is Azure Health Data Services?
-Azure Health Data Services is a set of managed API services based on open standards and frameworks that enable workflows to improve healthcare and offer scalable and secure healthcare solutions. Using a set of managed API services and frameworks that's dedicated to the healthcare industry is important and beneficial because health data collected from patients and healthcare consumers can be fragmented from across multiple systems, device types, and data formats. Gaining insights from health data is one of the biggest barriers to sustaining population and personal health and overall wellness understanding. Bringing disparate systems, workflows, and health data together is more important today. A unified and aligned approach to health data access, standardization, and trend capturing would enable the discovery of operational and clinical insights. We can streamline the process of connecting new device applications and enable new research projects. Using Azure Health Data Services as a scalable and secure healthcare solution can enable workflows to improve healthcare through insights discovered by bringing Protected Health Information (PHI) datasets together and connecting them end-to-end with tools for machine learning, analytics, and AI.
+Azure Health Data Services is a cloud-based solution that helps you collect, store, and analyze health data from different sources and formats. It supports various healthcare standards, such as FHIR and DICOM, and converts data from legacy or proprietary device formats to FHIR.
-Azure Health Data Services provides the following benefits:
-* Empower new workloads to leverage PHI by enabling the data to be collected and accessed in one place, in a consistent way.
-* Discover new insight by bringing disparate PHI together and connecting it end-to-end with tools for machine learning, analytics, and AI.
-* Build on a trusted cloud with confidence in how Protected Health Information is managed, stored, and made available.
-The new Microsoft Azure Health Data Services will, in addition to Fast Healthcare Interoperability Resources (FHIR&#174;), support other healthcare industry data standards, like DICOM, extending healthcare data interoperability. The business model and infrastructure platform have been redesigned to accommodate the expansion and introduction of different and future healthcare data standards. Customers can use health data of different types across healthcare standards under the same compliance umbrella. Tools have been built into the managed service that allow customers to transform data from legacy or device proprietary formats, to FHIR. Some of these tools have been previously developed and open-sourced; others will be net new.
+Azure Health Data Services enables you to:
-Azure Health Data Services enables you to:
-* Quickly connect disparate health data sources and formats such as structured, imaging, and device data and normalize it to be persisted in the cloud.
-* Transform and ingest data into FHIR. For example, you can transform health data from legacy formats, such as HL7v2 or CDA, or from high frequency IoT data in device proprietary formats to FHIR.
-* Connect your data stored in Azure Health Data Services with services across the Azure ecosystem, like Synapse, and products across Microsoft, like Teams, to derive new insights through analytics and machine learning and to enable new workflows as well as connection to SMART on FHIR applications.
-* Manage advanced workloads with enterprise features that offer reliability, scalability, and security to ensure that your data is protected, meets privacy and compliance certifications required for the healthcare industry.
+- Connect health data from different systems, devices, and types in one place.
+- Discover new insights from health data using machine learning, analytics, and AI tools.
+- Build on a trusted cloud that protects your health data and meets privacy and compliance requirements.
+Designed to meet your current health data needs and built to adapt to future developments, Azure Health Data Services is a powerful and flexible solution that is always evolving. Microsoft engineers are continuously improving and updating the platform to support new and emerging health data standards so you don't have to worry about changing your data formats or systems in the future.
-## What are the key differences between Azure Health Data Services and Azure API for FHIR?
+## Differences between Azure Health Data Services and Azure API for FHIR
-**Linked services**
+Azure Health Data Services and Azure API for FHIR are two different offerings from Microsoft that enable healthcare organizations to manage and exchange health data in a secure, scalable way.
-Azure Health Data Services supports multiple health data standards for the exchange of structured data. A single collection of Azure Health Data Services enables you to deploy multiple instances of different service types (FHIR, DICOM, and MedTech) that seamlessly work with one another. Services deployed within a workspace also share a compliance boundary and common configuration settings. The product scales automatically to meet the varying demands of your workloads, so you spend less time managing infrastructure and more time generating insights from health data.
-
-**DICOM service**
-
-Azure Health Data Services includes support for the DICOM service. DICOM enables the secure exchange of image data and its associated metadata. DICOM is the international standard to transmit, store, retrieve, print, process, and display medical imaging information, and is the primary medical imaging standard accepted across healthcare. For more information about the DICOM service, see [Overview of DICOM](./dicom/dicom-services-overview.md).
-
-**MedTech service**
-
-Azure Health Data Services includes support for the MedTech service. The MedTech service enables you to ingest high-frequency IoT device data into the FHIR Service in a scalable, secure, and compliant manner. For more information about MedTech, see [Overview of MedTech](../healthcare-apis/iot/overview.md).
-
-**FHIR service**
-
-Azure Health Data Services includes support for FHIR service. The FHIR service enables rapid exchange of health data using the Fast Healthcare Interoperability Resources (FHIR®) data standard. For more information about FHIR, see [Overview of FHIR](../healthcare-apis/fhir/overview.md).
-
-For the secure exchange of FHIR data, Azure Health Data Services offers a few incremental capabilities that aren't available in Azure API for FHIR.
+- **Azure API for FHIR** is a single service that provides a managed platform for exchanging health data using the FHIR standard. **Azure Health Data Services** is a set of managed API services based on open standards and frameworks that enable workflows to improve healthcare and offer scalable and secure healthcare solutions.
+- **Azure API for FHIR** only supports the FHIR standard, which mainly covers structured data. **Azure Health Data Services** supports other healthcare industry data standards besides FHIR, such as DICOM, which allows it to work with different types of data, such as imaging and device data.
+- **Azure API for FHIR** requires you to create a separate resource for each FHIR service instance, which limits the interoperability and integration of health data. **Azure Health Data Services** allows you to deploy a FHIR service and a DICOM service in the same workspace, which enables you to connect and analyze health data from different sources and formats.
+- **Azure API for FHIR** doesn't have some platform features that are available in **Azure Health Data Services**, such as private link, customer-managed keys, and logging. These features enhance the security and compliance of your health data.
-* **Support for transactions**: In Azure Health Data Services, the FHIR service supports transaction bundles. For more information about transaction bundles, visit [HL7.org](https://www.hl7.org/) and refer to batch/transaction interactions.
-* [Chained Search Improvements](./././fhir/overview-of-search.md#chained--reverse-chained-searching): Chained Search & Reserve Chained Search are no longer limited by 100 items per sub query.
-* The `$convert-data` operation can now transform JSON objects to FHIR R4.
-* Events: Trigger new workflows when resources are created, updated, or deleted in a FHIR service.
-
+In addition, Azure Health Data Services has a business model and infrastructure platform that accommodates the expansion and introduction of different and future healthcare data standards.
## Next steps
-To start working with the Azure Health Data Services, follow the 5-minute quick start to deploying a workspace.
+To work with Azure Health Data Services, first you need to create an [Azure workspace](workspace-overview.md).
-> [!div class="nextstepaction"]
-> [Deploy workspace in the Azure portal](healthcare-apis-quickstart.md)
+Follow the steps in this quickstart guide:
> [!div class="nextstepaction"]
-> [Workspace overview](workspace-overview.md)
+> [Create a workspace](healthcare-apis-quickstart.md)
+
+FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
healthcare-apis Overview Of Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/overview-of-samples.md
Title: The MedTech service scenario-based mappings samples - Azure Health Data Services
+ Title: MedTech service scenario-based mappings samples - Azure Health Data Services
description: Learn about the MedTech service scenario-based mappings samples.
> [Fast Healthcare Interoperability Resources (FHIR&#174;)](https://www.hl7.org/fhir/) is an open healthcare specification.
-The [MedTech service](overview.md) scenario-based [samples](https://github.com/Azure-Samples/azure-health-data-services-samples/tree/main/samples/medtech-service-mappings) provide conforming and valid [device](overview-of-device-mapping.md) and [FHIR destination](overview-of-fhir-destination-mapping.md) mappings and test device messages. Theses samples can be used to help with the authoring and troubleshooting of your own MedTech service mappings.
+The [MedTech service](overview.md) scenario-based [samples](https://github.com/Azure-Samples/azure-health-data-and-ai-samples/tree/main/samples/medtech-service-mappings) provide conforming and valid [device](overview-of-device-mapping.md) and [FHIR destination](overview-of-fhir-destination-mapping.md) mappings and test device messages. These samples can be used to help with the authoring and troubleshooting of your own MedTech service mappings.
## Sample resources
Each MedTech service scenario-based sample contains the following resources:
## CalculatedContent
-[Conversions using functions](https://github.com/Azure-Samples/azure-health-data-services-samples/tree/main/samples/medtech-service-mappings/calculatedcontent/conversions-using-functions)
+[Conversions using functions](https://github.com/Azure-Samples/azure-health-data-and-ai-samples/tree/main/samples/medtech-service-mappings/calculatedcontent/conversions-using-functions)
## IotJsonPathContent
-[Single device message into multiple resources](https://github.com/Azure-Samples/azure-health-data-services-samples/tree/main/samples/medtech-service-mappings/iotjsonpathcontent/single-device-message-into-multiple-resources)
+[Single device message into multiple resources](https://github.com/Azure-Samples/azure-health-data-and-ai-samples/tree/main/samples/medtech-service-mappings/iotjsonpathcontent/single-device-message-into-multiple-resources)
## Next steps
healthcare-apis Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/overview.md
Previously updated : 07/28/2023 Last updated : 09/01/2023
> [!NOTE] > [Fast Healthcare Interoperability Resources (FHIR&#174;)](https://www.hl7.org/fhir/) is an open healthcare specification.
-This article provides an overview of the MedTech service. The MedTech service is a Platform as a Service (PaaS) within the Azure Health Data Services. The MedTech service enables you to ingest device data, transform it into a unified FHIR format, and store it in an enterprise-scale, secure, and compliant cloud environment. 
+The MedTech service is a Platform as a Service (PaaS) within Azure Health Data Services. The MedTech service enables you to ingest device data, transform it into a unified FHIR format, and store it in an enterprise-scale, secure, and compliant cloud environment.
The MedTech service is built to help customers that are dealing with the challenge of gaining relevant insights from device data coming in from multiple and diverse sources. No matter the device or structure, the MedTech service normalizes that device data into a common format, allowing the end user to then easily capture trends, run analytics, and build Artificial Intelligence (AI) models. In the enterprise healthcare setting, the MedTech service is used in the context of remote patient monitoring, virtual health, and clinical trials.
healthcare-apis Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/release-notes.md
Azure Health Data Services is a set of managed API services based on open standards and frameworks for the healthcare industry. They enable you to build scalable and secure healthcare solutions by bringing protected health information (PHI) datasets together and connecting them end-to-end with tools for machine learning, analytics, and AI. This document provides details about the features and enhancements made to Azure Health Data Services including the different service types (FHIR service, DICOM service, and MedTech service) that seamlessly work with one another.
+## August 2023
+#### Azure Health Data Services
+
+#### FHIR service
+**$convert-data documentation updates**
+
+Customers can now find detailed documentation on the $convert-data operation, allowing them to have an easier self-service experience. This includes an [overview](./fhir/overview-of-convert-data.md), how to [configure settings](./fhir/configure-settings-convert-data.md), a ready-made [pipeline template](./fhir/convert-data-with-azure-data-factory.md) in Azure Data Factory, [troubleshooting](./fhir/troubleshoot-convert-data.md) tips, and an [FAQ](./fhir/frequently-asked-questions-convert-data.md).
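For context, a conversion request posts a FHIR `Parameters` resource to the service's `$convert-data` endpoint. The following is a minimal sketch, assuming an HL7v2 ADT message, the default Microsoft template collection, and the `ADT_A01` root template (all placeholder values):

```json
{
    "resourceType": "Parameters",
    "parameter": [
        { "name": "inputData", "valueString": "MSH|^~\\&|SIMHOSP|SFAC|RAPP|RFAC|20200508131015||ADT^A01|517|T|2.3|||AL||44|ASCII" },
        { "name": "inputDataType", "valueString": "Hl7v2" },
        { "name": "templateCollectionReference", "valueString": "microsofthealth/fhirconverter:default" },
        { "name": "rootTemplate", "valueString": "ADT_A01" }
    ]
}
```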
+
## July 2023

#### Azure Health Data Services
Patient and Group level exports on interruption would restart from the beginning
The DICOM service API v2 is now Generally Available (GA) and introduces [several changes and new features](dicom/dicom-service-v2-api-changes.md). Most notable is the change to validation of DICOM attributes during store (STOW) operations - beginning with v2, the request fails only if **required attributes** fail validation. See the [DICOM Conformance Statement v2](dicom/dicom-services-conformance-statement-v2.md) for full details.
-
## June 2023

#### Azure Health Data Services
healthcare-apis Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Health Data Services FHIR service description: Lists Azure Policy Regulatory Compliance controls available. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/03/2023 Last updated : 08/25/2023
industrial-iot Tutorial Publisher Deploy Opc Publisher Standalone https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/industrial-iot/tutorial-publisher-deploy-opc-publisher-standalone.md
A typical set of IoT Edge Module Container Create Options for OPC Publisher is:
{ "Hostname": "opcpublisher", "Cmd": [
- "--pf=./pn.json",
+ "--pf=/appdata/pn.json",
"--aa" ], "HostConfig": {
industry Get Sensor Data From Sensor Partner https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/industry/agriculture/get-sensor-data-from-sensor-partner.md
Title: Get sensor data from the partners
description: This article describes how to get sensor data from partners. + Last updated 11/04/2019
industry Ingest Historical Telemetry Data In Azure Farmbeats https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/industry/agriculture/ingest-historical-telemetry-data-in-azure-farmbeats.md
Last updated 11/04/2019 -+ # Ingest historical telemetry data
internet-peering Peering Service Partner Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/internet-peering/peering-service-partner-overview.md
+
+ Title: Azure Peering Service partner overview
+description: Learn how to become an Azure Peering Service partner.
+Last updated : 08/18/2023
+# Azure Peering Service partner overview
+
+This article helps you understand how to become an Azure Peering Service partner. It also describes the different types of Peering Service connections and the monitoring platform. For more information about Azure Peering Service, see [Azure Peering Service overview](../peering-service/about.md).
+
+## Peering Service partner requirements
+
+To become a Peering Service partner, you must meet the following technical requirements:
+
+- The Peer MUST provide its own Autonomous System Number (ASN), which MUST be public.
+- The Peer MUST have redundant Interconnect (PNI) at each interconnect location to ensure local redundancy.
+- The Peer MUST supply and advertise their own publicly routable IPv4 address space used by Peer's endpoints.
+- The Peer MUST supply detail of what class of traffic and endpoints are housed in each advertised subnet.
+- The Peer MUST NOT terminate the peering on a device running a stateful firewall.
+- The Peer CANNOT have two local connections configured on the same router, as diversity is required.
+- The Peer CANNOT apply rate limiting to their connection.
+- The Peer CANNOT configure a local redundant connection as a backup connection. Backup connections must be in a different location than primary connections.
+- We recommend creating Peering Service peerings in multiple locations to achieve geo-redundancy.
+- Primary, backup, and redundant sessions all must have the same bandwidth.
+- All infrastructure prefixes are registered in the Azure portal and advertised with community string 8075:8007.
+- Microsoft configures all the interconnect links as LAG (link bundles) by default, so the Peer MUST support LACP (Link Aggregation Control Protocol) on the interconnect links.
+
+If you meet all of the listed requirements and want to become a Peering Service partner, you must sign an agreement. Contact peeringservice@microsoft.com to get started.
+
+## Types of Peering Service connections
+
+To become a Peering Service partner, you must request a direct peering interconnect with Microsoft. Direct peering interconnects come in three types, depending on your use case.
+
+- **AS8075** - A direct peering interconnect enabled for Peering Service made for Internet Service providers (ISPs).
+- **AS8075 (with Voice)** - A direct peering interconnect enabled for Peering Service made for Internet Service providers (ISPs). This type is optimized for communications services (messaging, conferencing, etc.), and allows you to integrate your communications services infrastructure (SBC, SIP gateways, and other infrastructure devices) with Azure Communication Services and Microsoft Teams.
+- **AS8075 (with exchange route server)** - A direct peering interconnect enabled for Peering Service and made for Internet Exchange providers (IXPs) who require a route server.
+
+### Monitoring platform
+
+Service monitoring is offered to analyze user traffic and routing. Metrics are available in the Azure portal to track the performance and availability of your Peering Service connection. For more information, see [Peering Service monitoring platform](../peering-service/about.md#monitoring-platform).
+
+In addition, Peering Service partners can see received routes reported in the Azure portal.
++
+## Next steps
+
+- To establish a Direct interconnect for Peering Service, see [Internet peering for Peering Service walkthrough](walkthrough-peering-service-all.md).
+- To establish a Direct interconnect for Peering Service Voice, see [Internet peering for Peering Service Voice walkthrough](walkthrough-communications-services-partner.md).
+- To establish a Direct interconnect for Communications Exchange with Route Server, see [Internet peering for MAPS Exchange with Route Server walkthrough](walkthrough-exchange-route-server-partner.md).
internet-peering Walkthrough Communications Services Partner https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/internet-peering/walkthrough-communications-services-partner.md
Title: Internet peering for Peering Service Voice Services walkthrough
+ Title: Internet peering for Peering Service Voice walkthrough
description: Learn about Internet peering for Peering Service Voice Services, its requirements, the steps to establish direct interconnect, and how to register and activate a prefix.
Last updated 08/09/2023
-# Internet peering for Peering Service Voice Services walkthrough
+# Internet peering for Peering Service Voice walkthrough
-In this article, you learn steps to establish a Peering Service interconnect between a voice services provider and Microsoft.
+In this article, you learn how to establish a Peering Service interconnect between a voice services provider and Microsoft.
**Voice Services Providers** are the organizations that offer communication services (messaging, conferencing, and other communications services) and want to integrate their communications services infrastructure (SBC, SIP gateways, and other infrastructure devices) with Azure Communication Services and Microsoft Teams.
To establish direct interconnect for Peering Service Voice Services, follow thes
- The Peer MUST provide its own Autonomous System Number (ASN), which MUST be public.
- The Peer MUST have redundant Interconnect (PNI) at each interconnect location to ensure local redundancy.
-- The Peer MUST supply and advertise their own publicly routable IPv4 address space used by Peer's endpoints (for example, SBC).
-- The Peer MUST supply detail of what class of traffic and endpoints are housed in each advertised subnet.
+- The Peer MUST supply and advertise their own publicly routable IPv4 address space used by Peer's endpoints (for example, SBC).
- The Peer MUST run BGP over Bidirectional Forwarding Detection (BFD) to facilitate subsecond route convergence.
- The Peer MUST NOT terminate the peering on a device running a stateful firewall.
- The Peer CANNOT have two local connections configured on the same router, as diversity is required.
internet-peering Walkthrough Exchange Route Server Partner https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/internet-peering/walkthrough-exchange-route-server-partner.md
To establish a Peering Service Exchange with Route Server peering, follow these
- The Peer MUST provide its own Autonomous System Number (ASN), which MUST be public.
- The Peer MUST have redundant Interconnect (PNI) at each interconnect location to ensure local redundancy.
-- The Peer MUST supply and advertise their own publicly routable IPv4 address space used by Peer's endpoints (for example, SBC).
-- The Peer MUST supply detail of what class of traffic and endpoints are housed in each advertised subnet.
+- The Peer MUST supply and advertise their own publicly routable IPv4 address space used by Peer's endpoints (for example, SBC).
- The Peer MUST NOT terminate the peering on a device running a stateful firewall.
- The Peer CANNOT have two local connections configured on the same router, as diversity is required.
- The Peer CANNOT apply rate limiting to their connection.
internet-peering Walkthrough Peering Service All https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/internet-peering/walkthrough-peering-service-all.md
To establish direct interconnect for Peering Service, follow these requirements:
- The Peer MUST provide its own Autonomous System Number (ASN), which MUST be public.
- The Peer MUST have redundant Interconnect (PNI) at each interconnect location to ensure local redundancy.
-- The Peer MUST supply and advertise their own publicly routable IPv4 address space used by Peer's endpoints.
-- The Peer MUST supply detail of what class of traffic and endpoints are housed in each advertised subnet.
+- The Peer MUST supply and advertise their own publicly routable IPv4 address space used by Peer's endpoints.
- The Peer MUST NOT terminate the peering on a device running a stateful firewall.
- The Peer CANNOT have two local connections configured on the same router, as diversity is required.
- The Peer CANNOT apply rate limiting to their connection.
To establish direct interconnect for Peering Service, follow these requirements:
## Establish Direct Interconnect for Peering Service
+Ensure that you sign a Microsoft Azure Peering Service agreement before proceeding. For more information, see [Azure Peering Service partner overview requirements](./peering-service-partner-overview.md#peering-service-partner-requirements).
+
To establish a Peering Service interconnect with Microsoft, follow these steps:

### 1. Associate your public ASN with your Azure subscription
iot-central Howto Export To Azure Data Explorer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-export-to-azure-data-explorer.md
To create the Azure Data Explorer destination in IoT Central on the **Data expor
:::image type="content" source="media/howto-export-data/export-destination-managed.png" alt-text="Screenshot of Azure Data Explorer export destination that uses a managed identity.":::
+If you don't see data arriving in your destination service, see [Troubleshoot issues with data exports from your Azure IoT Central application](troubleshoot-data-export.md).
+ [!INCLUDE [iot-central-data-export-setup](../../../includes/iot-central-data-export-setup.md)]
iot-central Howto Export To Blob Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-export-to-blob-storage.md
To create the Blob Storage destination in IoT Central on the **Data export** pag
1. Select **Save**.
+If you don't see data arriving in your destination service, see [Troubleshoot issues with data exports from your Azure IoT Central application](troubleshoot-data-export.md).
+ [!INCLUDE [iot-central-data-export-setup](../../../includes/iot-central-data-export-setup.md)]
iot-central Howto Export To Event Hubs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-export-to-event-hubs.md
Event Hubs destinations let you configure the connection with a *connection stri
# [Connection string](#tab/connection-string) -- If you don't have an existing Event Hubs namespace to export to, run the following script in the Azure Cloud Shell bash environment. The script creates a resource group, Event Hubs namespace, and event hub. It then prints the connection string to use when you configure the data export in IoT Central: ```azurecli-interactive
To create the Event Hubs destination in IoT Central on the **Data export** page:
1. Select **Save**.
+If you don't see data arriving in your destination service, see [Troubleshoot issues with data exports from your Azure IoT Central application](troubleshoot-data-export.md).
+ [!INCLUDE [iot-central-data-export-setup](../../../includes/iot-central-data-export-setup.md)]
iot-central Howto Export To Service Bus https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-export-to-service-bus.md
To create the Service Bus destination in IoT Central on the **Data export** page
1. Select **Save**.
+If you don't see data arriving in your destination service, see [Troubleshoot issues with data exports from your Azure IoT Central application](troubleshoot-data-export.md).
+ [!INCLUDE [iot-central-data-export-setup](../../../includes/iot-central-data-export-setup.md)]
iot-central Howto Manage Devices Individually https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-devices-individually.md
The following video walks you through monitoring device connectivity status:
### Device status values
-When a device connects to your IoT Central application, its device status changes as follows:
+Every device has a single status value in the UI. The device status can be one of:
-1. The device status is first **Registered**. This status means the device is created in IoT Central, and has a device ID. A device is registered when:
- - A new real device is added on the **Devices** page.
- - A set of devices is added using **Import** on the **Devices** page.
+- The device status is first **Registered**. This status means the device is created in IoT Central and has a device ID. A device is registered when:
-1. The device status changes to **Provisioned** when the device that connected to your IoT Central application with valid credentials completes the provisioning step. In this step, the device uses DPS. The *Device ID* that was used to register the device. Either a SAS key or X.509 certificatTo find these values: to automatically retrieve a connection string from the IoT Hub used by your IoT Central application. The device can now connect to IoT Central and start sending data.
+ - A new real device is added on the **Devices** page.
+ - A set of devices is added using **Import** on the **Devices** page.
-1. An operator can block a device. When a device is blocked, it can't send data to your IoT Central application. Blocked devices have a status of **Blocked**. An operator must reset the device before it can resume sending data. When an operator unblocks a device the status returns to its previous value, **Registered** or **Provisioned**.
+- The device status changes to **Provisioned** when a registered device completes the provisioning step by using DPS. To complete the provisioning process, the device needs the *Device ID* that was used to register the device, either a SAS key or X.509 certificate, and the *ID scope*. After provisioning, the device can connect to your IoT Central application and start sending data.
-1. If the device status is **Waiting for Approval**, it means the **Auto approve** option is disabled. An operator must explicitly approve a device before it starts sending data. Devices not registered manually on the **Devices** page, but connected with valid credentials have the device status **Waiting for Approval**. Operators can approve these devices from the **Devices** page using the **Approve** button.
+- Blocked devices have a status of **Blocked**. An operator can block and unblock devices. When a device is blocked, it can't send data to your IoT Central application. An operator must unblock the device before it can resume sending data. When an operator unblocks a device, the status returns to its previous value, **Registered** or **Provisioned**.
-1. If the device status is **Unassigned**, it means the device connecting to IoT Central isn't assigned to a device template. This situation typically happens in the following scenarios:
+- If the device status is **Waiting for Approval**, it means the **Auto approve** option is disabled on the **Device connection groups** page. An operator must explicitly approve a device before it can be provisioned and send data. Devices that aren't registered manually on the **Devices** page but connect with valid credentials have the **Waiting for Approval** device status. Operators can approve these devices from the **Devices** page using the **Approve** button.
- - A set of devices is added using **Import** on the **Devices** page without specifying the device template.
- - A device was registered manually on the **Devices** page without specifying the device template. The device then connected with valid credentials.
+The following table shows how the status value for a device in the UI maps to the values used by the REST API to interact with devices:
- An operator can assign a device to a device template from the **Devices** page using the **Migrate** button.
+| UI Device status | Notes | REST API Get |
+| - | -- | |
+| Waiting for approval | The auto-approve option is disabled in the device connection group and the device was not added through the UI. <br/> A user must manually approve the device through the UI before it can be used. | `Provisioned: false` <br/> `Enabled: false` |
+| Registered | A device has been approved either automatically or manually. | `Provisioned: false` <br/> `Enabled: true` |
+| Provisioned | The device has been provisioned and can connect to your IoT Central application. | `Provisioned: true` <br/> `Enabled: true` |
+| Blocked | The device isn't allowed to connect to your IoT Central application. You can block a device that's in any of the other states. | `Provisioned:` depends on the previous status (`Waiting for approval`, `Registered`, or `Provisioned`) <br/> `Enabled: false` |
+
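To see the underlying REST values for a device, you can query it. For example, the following Azure CLI sketch (the app ID and device ID are placeholders) returns the device record, whose `provisioned` and `enabled` fields correspond to the statuses in the preceding table:

```azurecli
az iot central device show \
    --app-id {your application ID} \
    --device-id {your device ID}
```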
+A device can also have a status of **Unassigned**. This status isn't shown in the **Device status** field in the UI; instead, it's shown in the **Device template** field. However, you can filter the device list for devices with the **Unassigned** status. If the device status is **Unassigned**, the device connecting to IoT Central isn't assigned to a device template. This situation typically happens in the following scenarios:
+
+- A set of devices is added using **Import** on the **Devices** page without specifying the device template.
+- A device was registered manually on the **Devices** page without specifying the device template. The device then connected with valid credentials.
+
+An operator can assign a device to a device template from the **Devices** page by using the **Migrate** button.
### Device connection status
iot-central Howto Manage Devices With Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-devices-with-rest-api.md
The response to this request looks like the following example:
} ```
+The following table shows how the status value for a device in the UI maps to the values used by the REST API to interact with devices:
+
+| UI Device status | Notes | REST API Get |
+| - | -- | |
+| Waiting for approval | The auto-approve option is disabled in the device connection group and the device was not added through the UI. <br/> A user must manually approve the device through the UI before it can be used. | `Provisioned: false` <br/> `Enabled: false` |
+| Registered | A device has been approved either automatically or manually. | `Provisioned: false` <br/> `Enabled: true` |
+| Provisioned | The device has been provisioned and can connect to your IoT Central application. | `Provisioned: true` <br/> `Enabled: true` |
+| Blocked | The device isn't allowed to connect to your IoT Central application. You can block a device that's in any of the other states. | `Provisioned:` depends on the previous status (`Waiting for approval`, `Registered`, or `Provisioned`) <br/> `Enabled: false` |
+ ### Get device credentials Use the following request to retrieve credentials of a device from your application:
iot-central Howto Use Commands https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-use-commands.md
The following screenshot shows how the successful command response displays in t
:::image type="content" source="media/howto-use-commands/simple-command-ui.png" alt-text="Screenshot showing how to view command payload for a standard command." lightbox="media/howto-use-commands/simple-command-ui.png":::
+> [!NOTE]
+> For standard commands, there's a timeout of 30 seconds. If a device doesn't respond within 30 seconds, IoT Central assumes that the command failed. This timeout period isn't configurable.
+ ## Long-running commands In a long-running command, a device doesn't immediately complete the command. Instead, the device acknowledges receipt of the command and then later confirms that the command completed. This approach lets a device complete a long-running operation without keeping the connection to IoT Central open.
iot-central Troubleshoot Connection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/troubleshoot-connection.md
https://aka.ms/iotcentral-docs-dps-SAS",
| Unapproved | The device isn't approved. | Device isn't approved to connect to the IoT Central application. Approve the device in IoT Central and retry. To learn more, see [Device status values](howto-manage-devices-individually.md#device-status-values) | | Unassigned | The device isn't assigned to a device template. | Assign the device to a device template so that IoT Central knows how to parse the data. |
-Learn more about [Device status values](howto-manage-devices-individually.md#device-status-values).
+Learn more about [Device status values in the UI](howto-manage-devices-individually.md#device-status-values) and [Device status values in the REST API](howto-manage-devices-with-rest-api.md#get-a-device).
### Error codes
iot-central Troubleshoot Data Export https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/troubleshoot-data-export.md
You're using a managed identity to authorize the connection to an export destina
Before you configure or enable the export destination, make sure that you complete the following steps:

-- Enable the managed identity for the application.
-- Configure the permissions for the managed identity.
+- Enable the managed identity for the IoT Central application. To verify that the managed identity is enabled, go to the **Identity** page for your application in the Azure portal or use the following CLI command:
+
+ ```azurecli
+ az iot central app identity show --name {your app name} --resource-group {your resource group name}
+ ```
+
+- Configure the permissions for the managed identity. To view the assigned permissions, select **Azure role assignments** on the **Identity** page for your app in the Azure portal or use the `az role assignment list` CLI command. The required permissions are:
+
+ | Destination | Permission |
+ |-||
+ | Azure Blob storage | Storage Blob Data Contributor |
+ | Azure Service Bus | Azure Service Bus Data Sender |
+ | Azure Event Hubs | Azure Event Hubs Data Sender |
+ | Azure Data Explorer | Admin |
+
+ If the permissions weren't set correctly before you created the destination in your IoT Central application, try removing the destination and then adding it again. An example role assignment command follows this list.
+ - Configure any virtual networks, private endpoints, and firewall policies. To learn more, see [Export data](howto-export-data.md?tabs=managed-identity).
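For example, a hedged sketch of granting the Blob storage role from the table to the app's system-assigned identity (the principal ID and scope values are placeholders):

```azurecli
az role assignment create \
    --assignee {principal ID of the managed identity} \
    --role "Storage Blob Data Contributor" \
    --scope {resource ID of the destination storage account}
```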
iot-develop Quickstart Devkit Stm B L475e Iot Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/quickstart-devkit-stm-b-l475e-iot-hub.md
Title: Connect an STMicroelectronics B-L475E-IOT01A to Azure IoT Hub quickstart
-description: Use Azure RTOS embedded software to connect an STMicroelectronics B-L475E-IOT01A device to Azure IoT Hub and send telemetry.
+ Title: Quickstart - Connect an STMicroelectronics B-L475E-IOT01A to Azure IoT Hub
+description: A quickstart that uses Azure RTOS embedded software to connect an STMicroelectronics B-L475E-IOT01A device to Azure IoT Hub and send telemetry.
ms.devlang: c Last updated 06/27/2023
+# CustomerIntent: As an embedded device developer, I want to use Azure RTOS to connect my device to Azure IoT Hub, so that I can learn about device connectivity and development.
# Quickstart: Connect an STMicroelectronics B-L475E-IOT01A Discovery kit to IoT Hub
For debugging the application, see [Debugging with Visual Studio Code](https://g
[!INCLUDE [iot-develop-cleanup-resources](../../includes/iot-develop-cleanup-resources.md)]
-## Next steps
+## Next step
In this quickstart, you built a custom image that contains Azure RTOS sample code, and then flashed the image to the STM DevKit device. You connected the STM DevKit to Azure, and carried out tasks such as viewing telemetry and calling a method on the device.
iot-dps Quick Create Simulated Device X509 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/quick-create-simulated-device-x509.md
In this section, you prepare a development environment that's used to build the
4. In your Windows command prompt, run the following commands to clone the latest release of the [Azure IoT Device SDK for C](https://github.com/Azure/azure-iot-sdk-c) GitHub repository. Replace `<release-tag>` with the tag you copied in the previous step, for example: `lts_01_2023`.
- ```cmd
- git clone -b <release-tag> https://github.com/Azure/azure-iot-sdk-c.git
- cd azure-iot-sdk-c
- git submodule update --init
- ```
+ ```cmd
+ git clone -b <release-tag> https://github.com/Azure/azure-iot-sdk-c.git
+ cd azure-iot-sdk-c
+ git submodule update --init
+ ```
- This operation could take several minutes to complete.
+ This operation could take several minutes to complete.
5. When the operation is complete, run the following commands from the `azure-iot-sdk-c` directory:
- ```cmd
- mkdir cmake
- cd cmake
- ```
+ ```cmd
+ mkdir cmake
+ cd cmake
+ ```
6. The code sample uses an X.509 certificate to provide attestation via X.509 authentication. Run the following command to build a version of the SDK specific to your development platform that includes the device provisioning client. A Visual Studio solution for the simulated device is generated in the `cmake` directory.
- When specifying the path used with `-Dhsm_custom_lib` in the following command, make sure to use the absolute path to the library in the `cmake` directory you previously created. The path shown assumes that you cloned the C SDK in the root directory of the C drive. If you used another directory, adjust the path accordingly.
+ When specifying the path used with `-Dhsm_custom_lib` in the following command, make sure to use the absolute path to the library in the `cmake` directory you previously created. The path shown assumes that you cloned the C SDK in the root directory of the C drive. If you used another directory, adjust the path accordingly.
- ```cmd
- cmake -Duse_prov_client:BOOL=ON -Dhsm_custom_lib=c:/azure-iot-sdk-c/cmake/provisioning_client/samples/custom_hsm_example/Debug/custom_hsm_example.lib ..
- ```
+ **Windows:**
- >[!TIP]
- >If `cmake` doesn't find your C++ compiler, you may get build errors while running the above command. If that happens, try running the command in the [Visual Studio command prompt](/dotnet/framework/tools/developer-command-prompt-for-vs).
+ ```cmd
+ cmake -Duse_prov_client:BOOL=ON -Dhsm_custom_lib=c:/azure-iot-sdk-c/cmake/provisioning_client/samples/custom_hsm_example/Debug/custom_hsm_example.lib ..
+ ```
+
+ **Linux:**
+
+ ```bash
+ cmake -Duse_prov_client:BOOL=ON -Dhsm_custom_lib=/home/<USER>/azure-iot-sdk-c/cmake/provisioning_client/samples/custom_hsm_example/custom_hsm_example.a ..
+ ```
+
+ >[!TIP]
+ >If `cmake` doesn't find your C++ compiler, you may get build errors while running the above command. If that happens, try running the command in the [Visual Studio command prompt](/dotnet/framework/tools/developer-command-prompt-for-vs).
7. When the build succeeds, the last few output lines look similar to the following output:
- ```cmd
- cmake -Duse_prov_client:BOOL=ON -Dhsm_custom_lib=c:/azure-iot-sdk-c/cmake/provisioning_client/samples/custom_hsm_example/Debug/custom_hsm_example.lib ..
- -- Building for: Visual Studio 17 2022
- -- Selecting Windows SDK version 10.0.19041.0 to target Windows 10.0.22000.
- -- The C compiler identification is MSVC 19.32.31329.0
- -- The CXX compiler identification is MSVC 19.32.31329.0
+ ```output
+ -- Building for: Visual Studio 17 2022
+ -- Selecting Windows SDK version 10.0.19041.0 to target Windows 10.0.22000.
+ -- The C compiler identification is MSVC 19.32.31329.0
+ -- The CXX compiler identification is MSVC 19.32.31329.0
- ...
+ ...
- -- Configuring done
- -- Generating done
- -- Build files have been written to: C:/azure-iot-sdk-c/cmake
- ```
+ -- Configuring done
+ -- Generating done
+ -- Build files have been written to: C:/azure-iot-sdk-c/cmake
+ ```
::: zone-end
iot-edge Debug Module Vs Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/debug-module-vs-code.md
Select **Start Debugging** or select **F5**. Select the process to attach to. In
The Docker and Moby engines support SSH connections to containers allowing you to debug in Visual Studio Code connected to a remote device. You need to meet the following prerequisites before you can use this feature.
+Remote SSH debugging prerequisites may be different depending on the language you're using. The following sections describe the setup for .NET. For information on other languages, see [Remote Development using SSH](https://code.visualstudio.com/docs/remote/ssh) for an overview. Details about how to configure remote debugging are included in the debugging sections for each language in the Visual Studio Code documentation.
+ ### Configure Docker SSH tunneling 1. Follow the steps in [Docker SSH tunneling](https://code.visualstudio.com/docs/containers/ssh#_set-up-ssh-tunneling) to configure SSH tunneling on your development computer. SSH tunneling requires public/private key pair authentication and a Docker context defining the remote device endpoint.
iot-edge How To Manage Device Certificates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-manage-device-certificates.md
Title: Manage IoT Edge certificates+ description: How to install and manage certificates on an Azure IoT Edge device to prepare for production deployment.
All IoT Edge devices use certificates to create secure connections between the runtime and any modules running on the device. IoT Edge devices functioning as gateways use these same certificates to connect to their downstream devices, too. > [!NOTE]
-> The term *root CA* used throughout this article refers to the topmost authority's certificate in the certificate chain for your IoT solution. You do not need to use the certificate root of a syndicated certificate authority, or the root of your organization's certificate authority. In many cases, it's actually an intermediate CA certificate.
+> The term *root CA* used throughout this article refers to the topmost authority's certificate in the certificate chain for your IoT solution. You don't need to use the certificate root of a syndicated certificate authority, or the root of your organization's certificate authority. Often, it's actually an intermediate CA certificate.
## Prerequisites
Edge Daemon issues module server and identity certificates for use by Edge modul
### Renewal
-Server certificates may be issued off the Edge CA certificate or through a DPS-configured CA. Regardless of the issuance method, these certificates must be renewed by the module.
+Server certificates may be issued from the Edge CA certificate. Regardless of the issuance method, these certificates must be renewed by the module. If you develop a custom module, you must implement the renewal logic in your module.
+
+The *edgeHub* module supports a certificate renewal feature. You can configure the *edgeHub* module server certificate renewal using the following environment variables:
+
+* **ServerCertificateRenewAfterInMs**: Sets the duration in milliseconds when the *edgeHub* server certificate is renewed irrespective of certificate expiry time.
+* **MaxCheckCertExpiryInMs**: Sets the duration in milliseconds when the *edgeHub* service checks the *edgeHub* server certificate expiration. If the variable is set, the check happens irrespective of certificate expiry time.
+
+For more information about the environment variables, see [EdgeHub and EdgeAgent environment variables](https://github.com/Azure/iotedge/blob/main/doc/EnvironmentVariables.md).
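As an illustration, a minimal sketch of how these variables might be set in the `edgeHub` section of a deployment manifest (the image tag and millisecond values are placeholders, not recommendations):

```json
"edgeHub": {
    "type": "docker",
    "settings": {
        "image": "mcr.microsoft.com/azureiotedge-hub:1.4"
    },
    "env": {
        "ServerCertificateRenewAfterInMs": { "value": "7200000" },
        "MaxCheckCertExpiryInMs": { "value": "3600000" }
    }
}
```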
## Changes in 1.2 and later
iot-edge Iot Edge Certs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/iot-edge-certs.md
Title: Understand how IoT Edge uses certificates for security+ description: How Azure IoT Edge uses certificate to validate devices, modules, and downstream devices enabling secure connections between them.
iot-edge Tutorial Configure Est Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/tutorial-configure-est-server.md
Using Device Provisioning Service allows you to automatically issue and renew ce
1. In the [Azure portal](https://portal.azure.com), navigate to your instance of IoT Hub Device Provisioning Service. 1. Under **Settings**, select **Manage enrollments**.
-1. Select **Add enrollment group** then complete the following steps to configure the enrollment:
+1. Select **Add enrollment group** then complete the following steps to configure the enrollment.
+1. On the **Registration + provisioning** tab, choose the following settings:
- :::image type="content" source="./media/tutorial-configure-est-server/dps-add-enrollment.png" alt-text="A screenshot adding DPS enrollment group using the Azure portal.":::
+ :::image type="content" source="./media/tutorial-configure-est-server/device-provisioning-service-add-enrollment-latest.png" alt-text="A screenshot adding DPS enrollment group using the Azure portal.":::
|Setting | Value | |--||
- |Group name | Provide a friendly name for this group enrollment |
- |Attestation Type | Select **Certificate** |
- |IoT Edge device | Select **True** |
- |Certificate Type | Select **CA Certificate** |
+ |Attestation mechanism| Select **X.509 certificates uploaded to this Device Provisioning Service instance** |
|Primary certificate | Choose your certificate from the dropdown list |
+ |Group name | Provide a friendly name for this group enrollment |
+ |Provisioning status | Select **Enable this enrollment** checkbox |
+
+1. On the **IoT hubs** tab, choose your IoT Hub from the list.
+1. On the **Device settings** tab, select the **Enable IoT Edge on provisioned devices** checkbox.
The other settings aren't relevant to the tutorial. You can accept the default settings.
-1. Select **Save**.
+1. Select **Review + create**.
Now that an enrollment exists for the device, the IoT Edge runtime can automatically manage device certificates for the linked IoT Hub.
iot-hub-device-update Create Update Group https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/create-update-group.md
An Azure CLI environment:
To create a device group, the first step is to add a tag to the target set of devices in IoT Hub. Tags can only be successfully added to your device after it has been connected to Device Update.
-Device Update tags use the following format:
+Device Update tags use the format in the following example:
```json
+"etag": "",
+"deviceId": "",
+"deviceEtag": "",
+"version": <version>,
"tags": { "ADUGroup": "<CustomTagValue>" }
iot-hub-device-update Create Update https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/create-update.md
The [az iot du init v5](/cli/azure/iot/du/update/init#az-iot-du-update-init-v5)
* `--update-provider`, `--update-name`, and `--update-version`: These three parameters define the **updateId** object that is a unique identifier for each update. * `--compat`: The **compatibility** object is a set of name-value pairs that describe the properties of a device that this update is compatible with. * The same exact set of compatibility properties can't be used with more than one provider and name combination.
-* `--step`: The update handler on the device (for example, `microsoft/script:1`, `microsoft/swupdate:1`, or `microsoft/apt:1`) and its associated properties for this update.
+* `--step`: The update **handler** on the device (for example, `microsoft/script:1`, `microsoft/swupdate:1`, or `microsoft/apt:1`) and its associated **properties** for this update.
* `--file`: The paths to your update file or files. For more information about these parameters, see [Import schema and API information](import-schema.md).
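Putting the parameters together, a hedged sketch of an init command (the provider, name, version, compatibility values, handler, and file path are all placeholders):

```azurecli
az iot du update init v5 \
    --update-provider Contoso \
    --update-name Toaster \
    --update-version 1.0 \
    --compat manufacturer=Contoso model=Toaster \
    --step handler=microsoft/script:1 \
    --file path=./install.sh
```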
For handler properties, you may need to escape certain characters in your JSON.
The `init` command supports advanced scenarios, including the [related files feature](related-files.md) that allows you to define the relationship between different update files. For more examples and a complete list of optional parameters, see [az iot du init v5](/cli/azure/iot/du/update/init#az-iot-du-update-init-v5).
-Once you've created your import manifest and saved it as a JSON file, you're ready to [import your update](import-update.md).
+Once you've created your import manifest and saved it as a JSON file, you're ready to [import your update](import-update.md). If you plan to use the Azure portal UI for importing, be sure to name your import manifest in the following format: "\<manifestname\>.importmanifest.json".
## Create an advanced Device Update import manifest for a proxy update
iot-hub Iot Hub Devguide Quotas Throttling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-quotas-throttling.md
Previously updated : 02/09/2023 Last updated : 08/15/2023
IoT Hub enforces other operational limits:
| Operation | Limit | | | -- |
-| Devices | The total number of devices plus modules that can be registered to a single IoT hub is capped at 1,000,000. The only way to increase this limit is to contact [Microsoft Support](https://azure.microsoft.com/support/options/).|
+| Devices | The total number of devices plus modules that can be registered to a single IoT hub is capped at 1,000,000. |
| File uploads | 10 concurrent file uploads per device. | | Jobs<sup>1</sup> | Maximum concurrent jobs are 1 (for Free and S1), 5 (for S2), and 10 (for S3). However, the max concurrent [device import/export jobs](iot-hub-bulk-identity-mgmt.md) is 1 for all tiers. <br/>Job history is retained up to 30 days. | | Additional endpoints | Basic and standard SKU hubs may have 10 additional endpoints. Free SKU hubs may have one additional endpoint. |
iot-hub Iot Hub Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-managed-identity.md
In IoT Hub, managed identities can be used for egress connectivity from IoT Hub
## System-assigned managed identity
-### Add and remove a system-assigned managed identity in Azure portal
+### Enable or disable system-assigned managed identity in Azure portal
-1. Sign in to the Azure portal and navigate to your desired IoT hub.
-2. Navigate to **Identity** in your IoT Hub portal
-3. Under **System-assigned** tab, select **On** and click **Save**.
-4. To remove system-assigned managed identity from an IoT hub, select **Off** and click **Save**.
+1. Sign in to the Azure portal and navigate to your IoT hub.
+2. Select **Identity** from the **Security settings** section of the navigation menu.
+3. Select the **System-assigned** tab.
+4. Set the system-assigned managed identity **Status** to **On** or **Off**, then select **Save**.
- :::image type="content" source="./media/iot-hub-managed-identity/system-assigned.png" alt-text="Screenshot showing where to turn on system-assigned managed identity for an I O T hub.":::
+ >[!NOTE]
+ >You can't turn off system-assigned managed identity while it's in use. Make sure that no custom endpoints are using system-assigned managed identity authentication before disabling the feature.
+
+ :::image type="content" source="./media/iot-hub-managed-identity/system-assigned.png" alt-text="Screenshot showing where to turn on system-assigned managed identity for an IoT hub.":::
### Enable system-assigned managed identity at hub creation time using ARM template
iot-hub Module Twins Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/module-twins-cli.md
This article shows you how to create an Azure CLI session in which you:
* Make sure that port 8883 is open in your firewall. The samples in this article use MQTT protocol, which communicates over port 8883. This port can be blocked in some corporate and educational network environments. For more information and ways to work around this issue, see [Connecting to IoT Hub (MQTT)](../iot/iot-mqtt-connect-to-iot-hub.md#connecting-to-iot-hub).
+## Module authentication
+
+You can use symmetric keys or X.509 certificates to authenticate module identities. For X.509 certificate authentication, the module's certificate *must* have its common name (CN) formatted like `CN=<deviceid>/<moduleid>`. For example:
+
+```bash
+openssl req -new -key d1m1.key.pem -out d1m1.csr -subj "/CN=device01\/module01"
+```
+ ## Prepare the Cloud Shell If you want to use the Azure Cloud Shell, you must first launch and configure it. If you use the CLI locally, skip to the [Prepare a CLI session](#prepare-a-cli-session) section.
iot-hub Module Twins Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/module-twins-dotnet.md
At the end of this article, you have two .NET console apps:
* An IoT hub. Create one with the [CLI](iot-hub-create-using-cli.md) or the [Azure portal](iot-hub-create-through-portal.md).
+## Module authentication
+
+You can use symmetric keys or X.509 certificates to authenticate module identities. For X.509 certificate authentication, the module's certificate *must* have its common name (CN) formatted like `CN=<deviceid>/<moduleid>`. For example:
+
+```bash
+openssl req -new -key d1m1.key.pem -out d1m1.csr -subj "/CN=device01\/module01"
+```
+ ## Get the IoT hub connection string [!INCLUDE [iot-hub-howto-module-twin-shared-access-policy-text](../../includes/iot-hub-howto-module-twin-shared-access-policy-text.md)]
iot-hub Module Twins Node https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/module-twins-node.md
At the end of this article, you have two Node.js apps:
* Node.js version 10.0.x or later. [Prepare your development environment](https://github.com/Azure/azure-iot-sdk-node/tree/main/doc/node-devbox-setup.md) describes how to install Node.js for this article on either Windows or Linux.
+## Module authentication
+
+You can use symmetric keys or X.509 certificates to authenticate module identities. For X.509 certificate authentication, the module's certificate *must* have its common name (CN) formatted like `CN=<deviceid>/<moduleid>`. For example:
+
+```bash
+openssl req -new -key d1m1.key.pem -out d1m1.csr -subj "/CN=device01\/module01"
+```
+ ## Get the IoT hub connection string [!INCLUDE [iot-hub-howto-module-twin-shared-access-policy-text](../../includes/iot-hub-howto-module-twin-shared-access-policy-text.md)]
iot-hub Module Twins Portal Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/module-twins-portal-dotnet.md
In this article, you will learn how to:
* A registered device. Register one in the [Azure portal](iot-hub-create-through-portal.md#register-a-new-device-in-the-iot-hub).
+## Module authentication
+
+You can use symmetric keys or X.509 certificates to authenticate module identities. For X.509 certificate authentication, the module's certificate *must* have its common name (CN) formatted like `CN=<deviceid>/<moduleid>`. For example:
+
+```bash
+openssl req -new -key d1m1.key.pem -out d1m1.csr -subj "/CN=device01\/module01"
+```
+ ## Create a module identity in the portal Within one device identity, you can create up to 20 module identities. To add an identity, follow these steps:
iot-hub Module Twins Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/module-twins-python.md
At the end of this article, you have three Python apps:
* [Python version 3.7 or later](https://www.python.org/downloads/) is recommended. Make sure to use the 32-bit or 64-bit installation as required by your setup. When prompted during the installation, make sure to add Python to your platform-specific environment variable.
+## Module authentication
+
+You can use symmetric keys or X.509 certificates to authenticate module identities. For X.509 certificate authentication, the module's certificate *must* have its common name (CN) formatted like `CN=<deviceid>/<moduleid>`. For example:
+
+```bash
+openssl req -new -key d1m1.key.pem -out d1m1.csr -subj "/CN=device01\/module01"
+```
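To sanity-check a module identity from the command line before you run the Python apps, you can view its module twin. A sketch using the Azure CLI `azure-iot` extension, with placeholder names:

```azurecli-interactive
# Display the module twin document for an existing module
az iot hub module-twin show \
    --hub-name MyIotHub --device-id device01 --module-id module01
```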
+ ## Get the IoT hub connection string In this article, you create a back-end service that adds a device in the identity registry and then adds a module to that device. This service requires the **registry write** permission (which also includes **registry read**). You also create a service that adds desired properties to the module twin for the newly created module. This service needs the **service connect** permission. Although there are default shared access policies that grant these permissions individually, in this section, you create a custom shared access policy that contains both of these permissions.
iot-hub Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/policy-reference.md
Title: Built-in policy definitions for Azure IoT Hub description: Lists Azure Policy built-in policy definitions for Azure IoT Hub. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2023 Last updated : 08/30/2023
iot-hub Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure IoT Hub description: Lists Azure Policy Regulatory Compliance controls available for Azure IoT Hub. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/03/2023 Last updated : 08/25/2023
iot Iot Overview Solution Extensibility https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot/iot-overview-solution-extensibility.md
IoT Hub and IoT Central both let you [route device telemetry to different endpoi
In addition to device telemetry, both IoT Hub and IoT Central can send property update and device connection status messages to other endpoints. Routing these messages enables you to build integrations with other services that need device status information: -- [IoT Hub routing](../iot-hub/iot-hub-devguide-messages-d2c.md) can send device telemetry, property change events, device connectivity events, and device lifecycle events to destinations such [Azure Event Hubs](../event-hubs/event-hubs-about.md), [Azure Blob Storage](../storage/blobs/storage-blobs-overview.md), and [Cosmos DB](../cosmos-db/introduction.md).-- [IoT Hub routing](../iot-hub/iot-hub-devguide-messages-d2c.md) can send device telemetry, property change events, device connectivity events, and device lifecycle events to destinations such [Azure Event Hubs](../event-hubs/event-hubs-about.md), [Azure Blob Storage](../storage/blobs/storage-blobs-overview.md), and [Cosmos DB](../cosmos-db/introduction.md).
+- [IoT Hub routing](../iot-hub/iot-hub-devguide-messages-d2c.md) can send device telemetry, property change events, device connectivity events, and device lifecycle events to destinations such as [Azure Event Hubs](../event-hubs/event-hubs-about.md), [Azure Blob Storage](../storage/blobs/storage-blobs-overview.md), and [Cosmos DB](../cosmos-db/introduction.md).
- [IoT Hub Event Grid integration](../iot-hub/iot-hub-event-grid.md) uses Azure Event Grid to distribute IoT Hub events such as device connectivity, device lifecycle, and telemetry events to other Azure services. - [IoT Central rules](../iot-central/core/howto-configure-rules.md) can send device telemetry and property values to webhooks, [Microsoft Power Automate](/power-automate/getting-started/), and [Azure Logic Apps](/azure/logic-apps/logic-apps-overview/). - [IoT Central data export](../iot-central/core/howto-export-data.md) can send device telemetry, property change events, device connectivity events, and device lifecycle events to destinations such as [Azure Blob Storage](../storage/blobs/storage-blobs-overview.md), [Azure Data Explorer](/azure/data-explorer/data-explorer-overview/), [Azure Event Hubs](../event-hubs/event-hubs-about.md), and webhooks.
key-vault Certificate Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/certificates/certificate-scenarios.md
Certificates are composed of three interrelated resources linked together as a K
## Creating your first Key Vault certificate Before a certificate can be created in a Key Vault (KV), prerequisite steps 1 and 2 must be successfully accomplished and a key vault must exist for this user / organization.
-**Step 1** - Certificate Authority (CA) Providers
+**Step 1:** Certificate Authority (CA) Providers
- Onboarding as the IT admin, PKI admin, or anyone managing accounts with CAs for a given company (for example, Contoso) is a prerequisite to using Key Vault certificates. The following CAs are the current partnered providers with Key Vault. Learn more [here](./create-certificate.md#partnered-ca-providers). - DigiCert - Key Vault offers OV TLS/SSL certificates with DigiCert. - GlobalSign - Key Vault offers OV TLS/SSL certificates with GlobalSign.
-**Step 2** - An account admin for a CA provider creates credentials to be used by Key Vault to enroll, renew, and use TLS/SSL certificates via Key Vault.
+**Step 2:** An account admin for a CA provider creates credentials to be used by Key Vault to enroll, renew, and use TLS/SSL certificates via Key Vault.
-**Step 3** - A Contoso admin, along with a Contoso employee (Key Vault user) who owns certificates, depending on the CA, can get a certificate from the admin or directly from the account with the CA.
+**Step 3a:** A Contoso admin, along with a Contoso employee (Key Vault user) who owns certificates, depending on the CA, can get a certificate from the admin or directly from the account with the CA.
- Begin an add credential operation to a key vault by [setting a certificate issuer](/rest/api/keyvault/certificates/set-certificate-issuer/set-certificate-issuer) resource. A certificate issuer is an entity represented in Azure Key Vault (KV) as a CertificateIssuer resource. It is used to provide information about the source of a KV certificate; issuer name, provider, credentials, and other administrative details. - Ex. MyDigiCertIssuer
Certificates are composed of three interrelated resources linked together as a K
For more information on creating accounts with CA Providers, see the related post on the [Key Vault blog](/archive/blogs/kv/manage-certificates-via-azure-key-vault).
-**Step 3.1** - Set up [certificate contacts](/rest/api/keyvault/certificates/set-certificate-contacts/set-certificate-contacts) for notifications. This is the contact for the Key Vault user. Key Vault does not enforce this step.
+**Step 3b:** Set up [certificate contacts](/rest/api/keyvault/certificates/set-certificate-contacts/set-certificate-contacts) for notifications. This is the contact for the Key Vault user. Key Vault does not enforce this step.
-Note - This process, through step 3.1, is a onetime operation.
+Note - This process, through **Step 3b**, is a one-time operation.
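As an illustration of Steps 2 through 3b, the following Azure CLI sketch registers a CA issuer and a notification contact in a vault. The vault name, issuer name, and contact are hypothetical placeholders; the account ID and password come from your account with the CA provider:

```azurecli-interactive
# Steps 2-3a: register the CA account credentials as a certificate issuer
az keyvault certificate issuer create --vault-name ContosoVault \
    --issuer-name MyDigiCertIssuer --provider DigiCert \
    --account-id <ca-account-id> --password <ca-api-key>

# Step 3b: set up a certificate contact for notifications
az keyvault certificate contact add --vault-name ContosoVault \
    --email admin@contoso.com
```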
## Creating a certificate with a CA partnered with Key Vault ![Create a certificate with a Key Vault partnered certificate authority](../media/certificate-authority-2.png)
-**Step 4** - The following descriptions correspond to the green numbered steps in the preceding diagram.
+**Step 4:** The following descriptions correspond to the green numbered steps in the preceding diagram.
(1) - In the diagram above, your application creates a certificate, which internally begins by creating a key in your key vault. (2) - Key Vault sends a TLS/SSL certificate request to the CA. (3) - Your application polls your Key Vault, in a loop and wait process, for certificate completion. The certificate creation is complete when Key Vault receives the CA's response with the x509 certificate.
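The following Azure CLI sketch mirrors the diagram: it requests a certificate from the partnered CA through Key Vault and then polls for completion. The vault, certificate, and issuer names are hypothetical placeholders:

```azurecli-interactive
# Start from the default policy and point it at the partnered issuer
az keyvault certificate get-default-policy > policy.json
# (edit policy.json so issuerParameters.name is "MyDigiCertIssuer")

# Steps (1)-(2): create the certificate; Key Vault sends the request to the CA
az keyvault certificate create --vault-name ContosoVault \
    --name ContosoWebCert --policy @policy.json

# Step (3): poll the pending operation until the CA's response is merged
az keyvault certificate pending show --vault-name ContosoVault --name ContosoWebCert
```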
key-vault Assign Access Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/assign-access-policy.md
description: How to use the Azure CLI to assign a Key Vault access policy to a s
tags: azure-resource-manager-+
key-vault Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/best-practices.md
Azure Key Vault safeguards encryption keys and secrets like certificates, connec
## Use separate key vaults
-Our recommendation is to use a vault per application per environment (development, pre-production, and production), per region. This helps you not share secrets across environments and regions. It will also reduce the threat in case of a breach.
+Our recommendation is to use a vault per application per environment (development, pre-production, and production), per region. Granular isolation helps you avoid sharing secrets across applications, environments, and regions, and it also reduces the threat if there is a breach.
### Why we recommend separate key vaults
Key vaults define security boundaries for stored secrets. Grouping secrets into
Encryption keys and secrets like certificates, connection strings, and passwords are sensitive and business critical. You need to secure access to your key vaults by allowing only authorized applications and users. [Azure Key Vault security features](security-features.md) provides an overview of the Key Vault access model. It explains authentication and authorization. It also describes how to secure access to your key vaults.
-Suggestions for controlling access to your vault are as follows:
+Recommendations for controlling access to your vault are as follows:
- Lock down access to your subscription, resource group, and key vaults using role-based access control (RBAC).-- Create access policies for every vault.-- Use the principle of least privilege access to grant access.-- Turn on firewall and [virtual network service endpoints](overview-vnet-service-endpoints.md).
+ - Assign RBAC roles at Key Vault scope for applications, services, and workloads requiring persistent access to Key Vault, as shown in the example after this list
+ - Assign just-in-time eligible RBAC roles for operators, administrators, and other user accounts requiring privileged access to Key Vault by using [Privileged Identity Management (PIM)](../../active-directory/privileged-identity-management/pim-configure.md)
+ - Require at least one approver
+ - Enforce multi-factor authentication
+- Restrict network access with [Private Link](private-link-service.md), [firewall and virtual networks](network-security.md)
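For example, a persistent, least-privilege assignment for an application that only reads secrets might look like the following sketch; the vault name and assignee are placeholders:

```azurecli-interactive
# Resolve the vault's resource ID to use as the assignment scope
vaultId=$(az keyvault show --name ContosoVault --query id --output tsv)

# Grant the application read access to secrets only, scoped to this vault
az role assignment create --role "Key Vault Secrets User" \
    --assignee <application-object-id> --scope $vaultId
```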
## Turn on data protection for your vault
For more information, see [Azure Key Vault soft-delete overview](soft-delete-ove
## Backup
-Purge protection prevents malicious and accidental deletion of vault objects for up to 90 days. In scenarios when purge protection is not a possible option, we recommend backup vault objects, which can't be recreated from other sources like encryption keys generated within the vault.
+Purge protection prevents malicious and accidental deletion of vault objects for up to 90 days. In scenarios where purge protection is not a possible option, we recommend backing up vault objects that can't be recreated from other sources, such as encryption keys generated within the vault.
For more information about backup, see [Azure Key Vault backup and restore](backup.md).
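For example, you can back up an individual key to a protected file with the Azure CLI; the vault and key names below are placeholders, and the backup can only be restored into a vault in the same Azure subscription and geography:

```azurecli-interactive
# Download a protected backup blob for a key that can't be recreated elsewhere
az keyvault key backup --vault-name ContosoVault \
    --name ContosoEncryptionKey --file ContosoEncryptionKey.keybackup

# Restore it later, within the same subscription and geography
az keyvault key restore --vault-name ContosoVault --file ContosoEncryptionKey.keybackup
```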
A multitenant solution is built on an architecture where components are used to
## Frequently Asked Questions: ### Can I use Key Vault role-based access control (RBAC) permission model object-scope assignments to provide isolation for application teams within Key Vault?
-No. RBAC permission model allows to assign access to individual objects in Key Vault to user or application, but any administrative operations like network access control, monitoring, and objects management require vault level permissions which will then expose secure information to operators across application teams.
+No. The RBAC permission model allows you to assign access to individual objects in Key Vault to a user or application, but only for read operations. Any administrative operations like network access control, monitoring, and object management require vault-level permissions. Having one Key Vault per application provides secure isolation for operators across application teams.
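For reference, an object-scoped assignment looks like the following sketch (all IDs and names are placeholders); it grants data-plane access to that one object only:

```azurecli-interactive
# Assign read access to a single secret rather than the whole vault
az role assignment create --role "Key Vault Secrets User" \
    --assignee <user-or-application-id> \
    --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.KeyVault/vaults/ContosoVault/secrets/DbPassword"
```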
## Next steps
key-vault Rbac Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/rbac-guide.md
> Key Vault resource provider supports two resource types: **vaults** and **managed HSMs**. Access control described in this article only applies to **vaults**. To learn more about access control for managed HSM, see [Managed HSM access control](../managed-hsm/access-control.md). > [!NOTE]
-> Azure App Service certificate configuration through Azure Portal does not support Key Vault RBAC permission model. You can use Azure PowerShell, Azure CLI, ARM template deployments with **Key Vault Secrets User** and **Key Vault Reader** role assignments for 'Microsoft Azure App Service' global identity.
+> Azure App Service certificate configuration through the Azure portal does not support the Key Vault RBAC permission model, but you can use Azure PowerShell, the Azure CLI, or ARM template deployments. App Service certificate management requires the **Key Vault Secrets User** and **Key Vault Reader** role assignments for the App Service global identity, for example, 'Microsoft Azure App Service' in the public cloud.
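A sketch of the required role assignments with the Azure CLI follows. The vault name is a placeholder, and `abfa0a7c-a6b6-4736-8310-5855508787cd` is the app ID commonly associated with the 'Microsoft Azure App Service' identity in the public cloud; verify the service principal in your own tenant and cloud before assigning:

```azurecli-interactive
vaultId=$(az keyvault show --name ContosoVault --query id --output tsv)

# App Service needs to read the certificate (secret) material and vault metadata
az role assignment create --role "Key Vault Secrets User" \
    --assignee abfa0a7c-a6b6-4736-8310-5855508787cd --scope $vaultId
az role assignment create --role "Key Vault Reader" \
    --assignee abfa0a7c-a6b6-4736-8310-5855508787cd --scope $vaultId
```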
Azure role-based access control (Azure RBAC) is an authorization system built on [Azure Resource Manager](../../azure-resource-manager/management/overview.md) that provides fine-grained access management of Azure resources.
key-vault Hsm Protected Keys Ncipher https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/keys/hsm-protected-keys-ncipher.md
See the following table for a list of prerequisites for bring your own key (BYOK
| A subscription to Azure |To create an Azure Key Vault, you need an Azure subscription: [Sign up for free trial](https://azure.microsoft.com/pricing/free-trial/) | | The Azure Key Vault Premium service tier to support HSM-protected keys |For more information about the service tiers and capabilities for Azure Key Vault, see the [Azure Key Vault Pricing](https://azure.microsoft.com/pricing/details/key-vault/) website. | | nCipher nShield HSMs, smartcards, and support software |You must have access to a nCipher Hardware Security Module and basic operational knowledge of nCipher nShield HSMs. See [nCipher nShield Hardware Security Module](https://www.arrow.com/ecs-media/8441/33982ncipher_nshield_family_brochure.pdf) for the list of compatible models, or to purchase an HSM if you do not have one. |
-| The following hardware and software:<ol><li>An offline x64 workstation with a minimum Windows operation system of Windows 7 and nCipher nShield software that is at least version 11.50.<br/><br/>If this workstation runs Windows 7, you must [install Microsoft .NET Framework 4.5](https://download.microsoft.com/download/b/a/4/ba4a7e71-2906-4b2d-a0e1-80cf16844f5f/dotnetfx45_full_x86_x64.exe).</li><li>A workstation that is connected to the Internet and has a minimum Windows operating system of Windows 7 and [Azure PowerShell](/powershell/azure/) **minimum version 1.1.0** installed.</li><li>A USB drive or other portable storage device that has at least 16-MB free space.</li></ol> |For security reasons, we recommend that the first workstation is not connected to a network. However, this recommendation is not programmatically enforced.<br/><br/>In the instructions that follow, this workstation is referred to as the disconnected workstation.</p></blockquote><br/>In addition, if your tenant key is for a production network, we recommend that you use a second, separate workstation to download the toolset, and upload the tenant key. But for testing purposes, you can use the same workstation as the first one.<br/><br/>In the instructions that follow, this second workstation is referred to as the Internet-connected workstation.</p></blockquote><br/> |
+| The following hardware and software:<ol><li>An offline x64 workstation with a minimum Windows operating system of Windows 7 and nCipher nShield software that is at least version 11.50.<br/><br/>If this workstation runs Windows 7, you must [install Microsoft .NET Framework 4.5](https://download.microsoft.com/download/b/a/4/ba4a7e71-2906-4b2d-a0e1-80cf16844f5f/dotnetfx45_full_x86_x64.exe).</li><li>A workstation that is connected to the Internet and has a minimum Windows operating system of Windows 7 and [Azure PowerShell](/powershell/azure/) **minimum version 1.1.0** installed.</li><li>A USB drive or other portable storage device that has at least 16-MB free space.</li></ol> |For security reasons, we recommend that the first workstation is not connected to a network. However, this recommendation is not programmatically enforced.<br/><br/>In the instructions that follow, this workstation is referred to as the disconnected workstation.<br/>In addition, if your tenant key is for a production network, we recommend that you use a second, separate workstation to download the toolset, and upload the tenant key. But for testing purposes, you can use the same workstation as the first one.<br/><br/>In the instructions that follow, this second workstation is referred to as the Internet-connected workstation.<br/> |
## Generate and transfer your key to Azure Key Vault HSM
This program creates a **Security World** file at %NFAST_KMDATA%\local\world, wh
> If your HSM does not support the newer cypher suite DLf3072s256mRijndael, you can replace `--cipher-suite= DLf3072s256mRijndael` with `--cipher-suite=DLf1024s160mRijndael`. > > Security world created with new-world.exe that ships with nCipher software version 12.50 is not compatible with this BYOK procedure. There are two options available:
-> 1) Downgrade nCipher software version to 12.40.2 to create a new security world.
-> 2) Contact nCipher support and request them to provide a hotfix for 12.50 software version, which allows you to use 12.40.2 version of new-world.exe that is compatible with this BYOK procedure.
+> 1. Downgrade nCipher software version to 12.40.2 to create a new security world.
+> 2. Contact nCipher support and request them to provide a hotfix for 12.50 software version, which allows you to use 12.40.2 version of new-world.exe that is compatible with this BYOK procedure.
Then:
key-vault Azure Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/managed-hsm/azure-policy.md
Title: Integrate Azure Managed HSM with Azure Policy
description: Learn how to integrate Azure Managed HSM with Azure Policy Previously updated : 03/31/2021 Last updated : 08/23/2023
-# Integrate Azure Managed HSM with Azure Policy (preview)
+# Integrate Azure Managed HSM with Azure Policy
[Azure Policy](../../governance/policy/index.yml) is a governance tool that gives users the ability to audit and manage their Azure environment at scale. Azure Policy provides the ability to place guardrails on Azure resources to ensure they're compliant with assigned policy rules. It allows users to perform audit, real-time enforcement, and remediation of their Azure environment. The results of audits performed by policy will be available to users in a compliance dashboard where they'll be able to see a drill-down of which resources and components are compliant and which aren't. For more information, see the [Overview of the Azure Policy service](../../governance/policy/overview.md).
Using RSA keys with smaller key sizes is not a secure design practice. You may b
## Enabling and managing a Managed HSM policy through the Azure CLI
-### Register preview feature in your subscription
-
-In the subscription that customer owns, run the following Azure CLI command line as Contributor or Owner role of the subscription,
-
-```azurecli-interactive
-az feature register --namespace Microsoft.KeyVault --name MHSMGovernance
-```
-
-If there is an existing HSM pool in this subscription, update will be carried to these pools. Full enablement of the policy may take up to 30 mins. See [Set up preview features in Azure subscription](../../azure-resource-manager/management/preview-features.md?tabs=azure-cli).
- ### Giving permission to scan daily To check the compliance of the pool's inventory keys, the customer must assign the "Managed HSM Crypto Auditor" role to "Azure Key Vault Managed HSM Key Governance Service" (App ID: a1b76039-a76c-499f-a2dd-846b4cc32627) so it can access the keys' metadata. Without this permission, inventory keys are not reported in the Azure Policy compliance report; only new, updated, imported, and rotated keys are checked for compliance. To do so, a user who has the "Managed HSM Administrator" role on the Managed HSM needs to run the following Azure CLI commands:
To check the compliance of the pool's inventory keys, the customer must assign t
On Windows: ```azurecli-interactive
-az ad sp show --id a1b76039-a76c-499f-a2dd-846b4cc32627 --query objectId
+az ad sp show --id a1b76039-a76c-499f-a2dd-846b4cc32627 --query id
``` Copy the `id` printed, paste it in the following command:
az keyvault role assignment create --scope / --role "Managed HSM Crypto Auditor"
On Linux or Windows Subsystem for Linux: ```azurecli-interactive
-spId=$(az ad sp show --id a1b76039-a76c-499f-a2dd-846b4cc32627 --query objectId|cut -d "\"" -f2)
+spId=$(az ad sp show --id a1b76039-a76c-499f-a2dd-846b4cc32627 --query id|cut -d "\"" -f2)
echo $spId az keyvault role assignment create --scope / --role "Managed HSM Crypto Auditor" --assignee-object-id $spId --hsm-name <hsm name> ```
az keyvault role assignment create --scope / --role "Managed HSM Crypto Auditor"
Policy assignments have concrete values defined for policy definitions' parameters. In the [Azure portal](https://portal.azure.com/?Microsoft_Azure_ManagedHSM_assettypeoptions=%7B%22ManagedHSM%22:%7B%22options%22:%22%22%7D%7D&Microsoft_Azure_ManagedHSM=true&feature.canmodifyextensions=true}), go to "Policy", filter on the "Key Vault" category, and find these four key governance policy definitions. Select one, then select the "Assign" button on top. Fill in each field. If the policy assignment is for request denials, use a clear name for the policy because, when a request is denied, the policy assignment's name will appear in the error. Select Next, uncheck "Only show parameters that need input or review", and enter values for parameters of the policy definition. Skip the "Remediation", and create the assignment. The service will need up to 30 minutes to enforce "Deny" assignments. -- [Preview]: Azure Key Vault Managed HSM keys should have an expiration date-- [Preview]: Azure Key Vault Managed HSM keys using RSA cryptography should have a specified minimum key size-- [Preview]: Azure Key Vault Managed HSM Keys should have more than the specified number of days before expiration-- [Preview]: Azure Key Vault Managed HSM keys using elliptic curve cryptography should have the specified curve names
+- Azure Key Vault Managed HSM keys should have an expiration date
+- Azure Key Vault Managed HSM keys using RSA cryptography should have a specified minimum key size
+- Azure Key Vault Managed HSM Keys should have more than the specified number of days before expiration
+- Azure Key Vault Managed HSM keys using elliptic curve cryptography should have the specified curve names
You can also do this operation using the Azure CLI. See [Create a policy assignment to identify non-compliant resources with Azure CLI](../../governance/policy/assign-policy-azurecli.md).
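As a sketch, the following commands look up one of these definitions by display name and assign it at resource group scope. The assignment name and scope are placeholders; adjust the display name to match what you see in the portal:

```azurecli-interactive
# Find the built-in definition by its display name
policyName=$(az policy definition list \
    --query "[?displayName=='Azure Key Vault Managed HSM keys should have an expiration date'].name" \
    --output tsv)

# Assign it to a resource group that contains your Managed HSM
az policy assignment create --name require-mhsm-key-expiry \
    --policy $policyName \
    --scope /subscriptions/<subscription-id>/resourceGroups/<resource-group>
```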
key-vault Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/policy-reference.md
Title: Built-in policy definitions for Key Vault description: Lists Azure Policy built-in policy definitions for Key Vault. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2023 Last updated : 08/30/2023
key-vault Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Key Vault description: Lists Azure Policy Regulatory Compliance controls available for Azure Key Vault. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/03/2023 Last updated : 08/25/2023
lab-services Administrator Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/administrator-guide.md
Title: Administrator guide | Microsoft Docs
+ Title: Administrator guide
description: This guide helps administrators who create and manage lab plans by using Azure Lab Services. Previously updated : 07/04/2022 Last updated : 08/28/2023
lab-services Capacity Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/capacity-limits.md
Title: Capacity limits in Azure Lab Services
+ Title: Capacity limits
description: Learn about VM capacity limits in Azure Lab Services. Previously updated : 07/04/2022 Last updated : 08/28/2023 # Capacity limits in Azure Lab Services
-Azure Lab Services has default capacity limits on Azure subscriptions that adhere to Azure Compute quota limitations and to mitigate fraud. All Azure subscriptions will have an initial capacity limit, which can vary based on subscription type, number of standard compute cores, and GPU cores available inside Azure Lab Services. It restricts how many virtual machines you can create inside your lab before you need to request for a limit increase.
+Azure Lab Services applies default capacity limits to Azure subscriptions to adhere to Azure Compute quota limitations and to mitigate fraud. All Azure subscriptions have an initial capacity limit, which can vary based on subscription type, the number of standard compute cores, and the GPU cores available inside Azure Lab Services. The capacity limit restricts how many virtual machines you can create inside your lab before you need to request a limit increase.
-If you're close to or have reached your subscription's core limit, you'll see messages from Azure Lab Services. Actions that are affected by core limits include:
+If you're close to, or have reached, your subscription's core limit, you see warning messages from Azure Lab Services in the portal. The core limits affect the following actions:
- Create a lab - Publish a lab - Increase lab capacity
-These actions may be disabled if there no more cores that can be enabled for your subscription.
+These actions may be disabled if there are no more cores available for your subscription.
:::image type="content" source="./media/capacity-limits/warning-message.png" alt-text="Screenshot of core limit warning in Azure Lab Services.":::
Before you set up a large number of VMs across your labs, we recommend that you
Azure Lab Services enables you to create labs in different Azure regions. The default limit for the total number of regions you can use for creating labs varies by offer category type. For example, the default for Pay-As-You-Go subscriptions is two regions.
-If you have reached the Azure regions limit for your subscription, you can only create labs in regions that you're already using. When you create a new lab in another region, the lab creation will fail with an error message.
+If you have reached the Azure regions limit for your subscription, you can only create labs in regions that you're already using. When you create a new lab in another region, the lab creation fails with an error message.
To overcome the Azure region restriction, you have the following options:
You can contact Azure support and create a support ticket to lift the region res
## Best practices for requesting a limit increase [!INCLUDE [lab-services-request-capacity-best-practices](includes/lab-services-request-capacity-best-practices.md)]
-## Next steps
-
-See the following articles:
+## Related content
- As an admin, see [VM sizing](administrator-guide.md#vm-sizing). - As an admin, see [Request a capacity increase](./how-to-request-capacity-increase.md)
lab-services How To Configure Firewall Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-configure-firewall-settings.md
Title: Firewall settings for Azure Lab Services description: Learn how to determine the public IP address of VMs in a lab created using a lab plan so information can be added to firewall rules. ms.lab- Previously updated : 08/01/2022 Last updated : 08/28/2023
-# Firewall settings for Azure Lab Services
+# Determine firewall settings for Azure Lab Services
+This article covers how to find the specific public IP address used by a lab in Azure Lab Services. You can use this IP address to configure your firewall settings and specify inbound and outbound rules that enable lab users to connect to their lab virtual machines.
-> [!NOTE]
-> If using a version of Azure Lab Services prior to the [August 2022 Update](lab-services-whats-new.md), see [Firewall settings for labs when using lab accounts](how-to-configure-firewall-settings-1.md).
+Each organization or school configures their own network in a way that best fits their needs. Sometimes that includes setting firewall rules that block Remote Desktop Protocol (RDP) or Secure Shell (SSH) connections to machines outside their own network. Because Azure Lab Services runs in the public cloud, some extra configuration may be needed to allow lab users to access their VM when connecting from the local network.
-Each organization or school will configure their own network in a way that best fits their needs. Sometimes that includes setting firewall rules that block Remote Desktop Protocol (RDP) or Secure Shell (SSH) connections to machines outside their own network. Because Azure Lab Services runs in the public cloud, some extra configuration maybe needed to allow students to access their VM when connecting from the campus network.
+Each lab uses a single public IP address and multiple ports. All VMs, both the template VM and lab VMs, use this public IP address. The public IP address doesn't change for a lab. Each VM is assigned a different port number. The port ranges for SSH connections are 4980-4989 and 5000-6999. The port ranges for RDP connections are 4990-4999 and 7000-8999. The combination of public IP address and port number is used to connect lab creators and lab users to the correct VM.
-Each lab uses single public IP address and multiple ports. All VMs, both the template VM and student VMs, will use this public IP address. The public IP address won't change for the life of lab. Each VM will have a different port number. The port ranges for SSH connections are 4980-4989 and 5000-6999. The port ranges for RDP connections are 4990-4999 and 7000-8999. The combination of public IP address and port number is used to connect educators and students to the correct VM. This article will cover how to find the specific public IP address used by a lab. That information can be used to update inbound and outbound firewall rules so students can access their VMs.
+If you're using a version of Azure Lab Services prior to the [August 2022 Update](lab-services-whats-new.md), see [Firewall settings for labs when using lab accounts](how-to-configure-firewall-settings-1.md).
>[!IMPORTANT]
->Each lab will have a different public IP address.
+>Each lab has a different public IP address.
> [!NOTE]
-> If your school needs to perform content filtering, such as for compliance with the [Children's Internet Protection Act (CIPA)](https://www.fcc.gov/consumers/guides/childrens-internet-protection-act), you will need to use 3rd party software. For more information, read guidance on [content filtering with Lab Services](./administrator-guide.md#content-filtering).
+> If your organization needs to perform content filtering, such as for compliance with the [Children's Internet Protection Act (CIPA)](https://www.fcc.gov/consumers/guides/childrens-internet-protection-act), you need to use third-party software. For more information, read guidance on [content filtering with Lab Services](./administrator-guide.md#content-filtering).
## Find public IP for a lab
-If using a customizable lab, then we can get the public ip anytime after the lab is created. If using a non-customizable lab, the lab must be published and have capacity of at least 1 to be able to get the public IP for the lab.
+If you're using a customizable lab, you can get the public IP address anytime after the lab is created. If you're using a non-customizable lab, the lab must be published and have a capacity of at least 1 before you can get the public IP address for the lab.
-We're going to use the Az.LabServices PowerShell module to get the public IP address for a lab. For more examples using Az.LabServices PowerShell module and how to use it, see [Quickstart: Create a lab plan using PowerShell and the Azure modules](how-to-create-lab-plan-powershell.md) and [Quickstart: Create a lab using PowerShell and the Azure module](how-to-create-lab-powershell.md). For more information about cmdlets available in the Az.LabServices PowerShell module, see [Az.LabServices reference](/powershell/module/az.labservices/)
+You can use the `Az.LabServices` PowerShell module to get the public IP address for a lab.
```powershell $ResourceGroupName = "MyResourceGroup"
if ($LabPublicIP){
} ```
+For more examples of using the `Az.LabServices` PowerShell module, see [Quickstart: Create a lab plan using PowerShell and the Azure modules](how-to-create-lab-plan-powershell.md) and [Quickstart: Create a lab using PowerShell and the Azure module](how-to-create-lab-powershell.md). For more information about cmdlets available in the Az.LabServices PowerShell module, see [Az.LabServices reference](/powershell/module/az.labservices/).
+ ## Conclusion
-Now we know the public IP address for the lab. Inbound and outbound rules can be created for the organization's firewall for the public IP address and the port ranges 4980-4989, 5000-6999, and 7000-8999. Once the rules are updated, students can access their VMs without the network firewall blocking access.
+You can now determine the public IP address for a lab. You can create inbound and outbound rules for the organization's firewall for the public IP address and the port ranges 4980-4989, 5000-6999, and 7000-8999. Once the rules are updated, lab users can then access their VMs without the network firewall blocking access.
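For example, if your outbound filtering happens to be implemented with an Azure network security group, an allow rule for the lab's address and port ranges might look like the following sketch; the resource group, NSG name, rule name, and IP address are placeholders:

```azurecli-interactive
az network nsg rule create --resource-group MyResourceGroup \
    --nsg-name CampusNsg --name AllowLabServicesRdpSsh \
    --priority 300 --direction Outbound --access Allow --protocol Tcp \
    --destination-address-prefixes <lab-public-IP> \
    --destination-port-ranges 4980-4989 4990-4999 5000-6999 7000-8999
```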
-## Next steps
+## Related content
-- As an admin, [enable labs to connect your vnet](how-to-connect-vnet-injection.md).-- As an educator, work with your admin to [create a lab with a shared resource](how-to-create-a-lab-with-shared-resource.md).-- As an educator, [publish your lab](how-to-create-manage-template.md#publish-the-template-vm).
+- As an admin, [enable labs to connect to your virtual network](how-to-connect-vnet-injection.md).
+- As a lab creator, work with your admin to [create a lab with a shared resource](how-to-create-a-lab-with-shared-resource.md).
+- As a lab creator, [publish your lab](how-to-create-manage-template.md#publish-the-template-vm).
lab-services How To Create Lab Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-create-lab-bicep.md
In this article, you learn how to create a lab using a Bicep file. For a detail
[!INCLUDE [About Bicep](../../includes/resource-manager-quickstart-bicep-introduction.md)] + ## Prerequisites [!INCLUDE [Azure subscription](./includes/lab-services-prerequisite-subscription.md)] [!INCLUDE [Create and manage labs](./includes/lab-services-prerequisite-create-lab.md)] [!INCLUDE [Existing lab plan](./includes/lab-services-prerequisite-lab-plan.md)]
-## Review the Bicep file
+## Review the code
+
+# [Bicep](#tab/bicep)
The Bicep file used in this article is from [Azure Quickstart Templates](/samples/azure/azure-quickstart-templates/lab/). :::code language="bicep" source="~/quickstart-templates/quickstarts/microsoft.labservices/lab/main.bicep":::
-## Deploy the Bicep file
+# [ARM](#tab/arm)
+
+The template used in this article is from [Azure Quickstart Templates](/samples/azure/azure-quickstart-templates/lab/).
++
+One Azure resource is defined in the template:
+
+- **[Microsoft.LabServices/labs](/azure/templates/microsoft.labservices/labs)**: resource type description.
+
+More Azure Lab Services template samples can be found in [Azure Quickstart Templates](/samples/browse/?expanded=azure&products=azure-resource-manager&terms=lab%20services). For more information about how to create a lab without a lab plan using automation, see [Create Azure LabServices lab template](/samples/azure/azure-quickstart-templates/lab/).
+++
+## Deploy the resources
+
+# [Bicep](#tab/bicep)
1. Save the Bicep file as **main.bicep** to your local computer. 1. Deploy the Bicep file using either Azure CLI or Azure PowerShell.
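With the Azure CLI, the deployment step might look like the following sketch, assuming a resource group named `exampleRG` and the parameter names used by the quickstart template (`adminUser`, `adminPassword`, and `labPlanId`):

```azurecli-interactive
az deployment group create --resource-group exampleRG \
    --template-file main.bicep \
    --parameters adminUser=labadmin adminPassword=<secure-password> \
        labPlanId=<lab-plan-resource-id>
```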
The Bicep file used in this article is from [Azure Quickstart Templates](/sample
When the deployment finishes, you should see a message indicating the deployment succeeded.
+# [ARM](#tab/arm)
+
+1. Select the following link to sign in to Azure and open a template. The template creates a lab.
+
+ :::image type="content" source="../media/template-deployments/deploy-to-azure.svg" alt-text="Screenshot of the Deploy to Azure button to deploy resources with a template." link="https://portal.azure.com/#create/Microsoft.Template/uri/https%3a%2f%2fraw.githubusercontent.com%2fAzure%2fazure-quickstart-templates%2fmaster%2fquickstarts%2fmicrosoft.labservices%2flab-using-lab-plan%2fazuredeploy.json":::
+
+2. Optionally, change the name of the lab.
+3. Select the **resource group** of the lab plan you're going to use.
+4. Enter the required values for the template:
+
+ 1. **adminUser**. The name of the user that will be added as an administrator for the lab VM.
+ 2. **adminPassword**. The password for the administrator user for the lab VM.
+ 3. **labPlanId**. The resource ID for the lab plan to be used. The **Id** is listed in the **Properties** page of the lab plan resource in Azure.
+
+ :::image type="content" source="./media/how-to-create-lab-template/lab-plan-properties-id.png" alt-text="Screenshot of properties page for lab plan in Azure Lab Services with ID property highlighted.":::
+
+5. Select **Review + create**.
+6. Select **Create**.
+
+The Azure portal is used here to deploy the template. You can also use Azure PowerShell, Azure CLI, or the REST API. To learn other deployment methods, see [Deploy resources with ARM templates](../azure-resource-manager/templates/deploy-powershell.md).
+++ ## Review deployed resources Use the Azure portal, Azure CLI, or Azure PowerShell to list the deployed resources in the resource group.
+To use Azure PowerShell, first verify the Az.LabServices module is installed. Then use the **Get-AzLabServicesLab** cmdlet.
+ # [CLI](#tab/CLI) ```azurecli-interactive
Remove-AzResourceGroup -Name exampleRG
## Next steps
-In this article, you deployed a simple virtual machine using a Bicep file. To learn more about Azure virtual machines, continue to the tutorial for Linux VMs.
+In this article, you deployed a lab using a Bicep file or ARM template. To continue setting up the lab, advance to the next article to configure the template VM.
> [!div class="nextstepaction"] > [Configure a template VM](how-to-create-manage-template.md)
lab-services How To Create Lab Plan Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-create-lab-plan-bicep.md
Title: Create a lab plan using Bicep
+ Title: Create a lab plan using Bicep or ARM
-description: Learn how to create an Azure Lab Services lab plan by using Bicep.
+description: Learn how to create an Azure Lab Services lab plan by using Bicep or ARM templates.
Previously updated : 05/23/2022- Last updated : 08/28/2023+
-# Create a lab plan in Azure Lab Services using a Bicep file
+# Create a lab plan using a Bicep file or ARM template
-In this article, you learn how to create a lab plan using a Bicep file. For a detailed overview of Azure Lab Services, see [An introduction to Azure Lab Services](lab-services-overview.md).
+In this article, you learn how to create a lab plan using a Bicep file or Azure Resource Manager (ARM) template. For a detailed overview of Azure Lab Services, see [An introduction to Azure Lab Services](lab-services-overview.md).
[!INCLUDE [About Bicep](../../includes/resource-manager-quickstart-bicep-introduction.md)] + ## Prerequisites [!INCLUDE [Azure subscription](./includes/lab-services-prerequisite-subscription.md)]
In this article, you learn how to create a lab plan using a Bicep file. For a d
## Review the Bicep file
+# [Bicep](#tab/bicep)
+ The Bicep file used in this article is from [Azure Quickstart Templates](/samples/azure/azure-quickstart-templates/lab-plan/). :::code language="bicep" source="~/quickstart-templates/quickstarts/microsoft.labservices/lab-plan/main.bicep":::
-## Deploy the Bicep file
+# [ARM](#tab/arm)
+
+The template used in this article is from [Azure Quickstart Templates](/samples/azure/azure-quickstart-templates/lab-plan/).
++
+One Azure resource is defined in the template:
+
+- **[Microsoft.LabServices/labplans](/azure/templates/microsoft.labservices/labplans)**: The lab plan serves as a collection of configurations and settings that apply to the labs created from it.
+
+More Azure Lab Services template samples can be found in [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/?resourceType=Microsoft.Labservices&pageNumber=1&sort=Popular).
+++
+## Deploy the resources
+
+# [Bicep](#tab/bicep)
1. Save the Bicep file as **main.bicep** to your local computer. 1. Deploy the Bicep file using either Azure CLI or Azure PowerShell.
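With the Azure CLI, a minimal deployment sketch (the resource group name is a placeholder, and the template's default parameter values are used) is:

```azurecli-interactive
az deployment group create --resource-group exampleRG \
    --template-file main.bicep
```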
The Bicep file used in this article is from [Azure Quickstart Templates](/sample
When the deployment finishes, you should see a message indicating the deployment succeeded.
+# [ARM](#tab/arm)
+
+1. Select the following link to sign in to Azure and open a template. The template creates a lab plan.
+
+ :::image type="content" source="../media/template-deployments/deploy-to-azure.svg" alt-text="Screenshot of the Deploy to Azure button to deploy resources with a template." link="https://portal.azure.com/#create/Microsoft.Template/uri/https%3a%2f%2fraw.githubusercontent.com%2fAzure%2fazure-quickstart-templates%2fmaster%2fquickstarts%2fmicrosoft.labservices%2flab-plan%2fazuredeploy.json":::
+
+1. Optionally, change the name of the lab plan.
+1. Select the **Resource group**.
+1. Select **Review + create**.
+1. Select **Create**.
+
+The Azure portal is used here to deploy the template. You can also use Azure PowerShell, Azure CLI, or the REST API. To learn other deployment methods, see [Deploy resources with ARM templates](../azure-resource-manager/templates/deploy-powershell.md).
+++ ## Review deployed resources Use the Azure portal, Azure CLI, or Azure PowerShell to list the deployed resources in the resource group.
+To use Azure PowerShell, first verify the `Az.LabServices` module is installed. Then use the `Get-AzLabServicesLabPlan` cmdlet.
+ # [CLI](#tab/CLI) ```azurecli-interactive
Remove-AzResourceGroup -Name exampleRG
-## Next steps
+## Next step
-In this article, you deployed a simple virtual machine using a Bicep file. To learn more about Azure virtual machines, continue to the tutorial for Linux VMs.
+In this article, you deployed a lab plan using a Bicep file or ARM template. To learn more about working with labs, advance to the next article.
> [!div class="nextstepaction"] > [Managing Labs](how-to-manage-labs.md)
lab-services How To Create Lab Plan Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-create-lab-plan-template.md
- Title: Create a lab plan by using Azure Resource Manager template (ARM template)-
-description: Learn how to create an Azure Lab Services lab plan by using Azure Resource Manager template (ARM template).
--- Previously updated : 06/04/2022--
-# Create a lab plan in Azure Lab Services using an ARM template
-
-In this article, you learn how to use an Azure Resource Manager (ARM) template to create a lab plan. Lab plans are used when creating labs for Azure Lab Services. For an overview of Azure Lab Services, see [An introduction to Azure Lab Services](lab-services-overview.md).
--
-If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal.
--
-## Prerequisites
--
-## Review the template
-
-The template used in this article is from [Azure Quickstart Templates](/samples/azure/azure-quickstart-templates/lab-plan/).
--
-One Azure resource is defined in the template:
--- **[Microsoft.LabServices/labplans](/azure/templates/microsoft.labservices/labplans)**: The lab plan serves as a collection of configurations and settings that apply to the labs created from it.-
-More Azure Lab Services template samples can be found in [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/?resourceType=Microsoft.Labservices&pageNumber=1&sort=Popular).
-
-## Deploy the template
-
-1. Select the following link to sign in to Azure and open a template. The template creates a lab plan.
-
- :::image type="content" source="../media/template-deployments/deploy-to-azure.svg" alt-text="Screenshot of the Deploy to Azure button to deploy resources with a template." link="https://portal.azure.com/#create/Microsoft.Template/uri/https%3a%2f%2fraw.githubusercontent.com%2fAzure%2fazure-quickstart-templates%2fmaster%2fquickstarts%2fmicrosoft.labservices%2flab-plan%2fazuredeploy.json":::
-
-1. Optionally, change the name of the lab plan.
-1. Select the **Resource group**.
-1. Select **Review + create**.
-1. Select **Create**.
-
-The Azure portal is used here to deploy the template. You can also use Azure PowerShell, Azure CLI, or the REST API. To learn other deployment methods, see [Deploy resources with ARM templates](../azure-resource-manager/templates/deploy-powershell.md).
-
-## Review deployed resources
-
-You can either use the Azure portal to check the lab plan, or use the Azure PowerShell script to list the lab plan created.
-
-To use Azure PowerShell, first verify the Az.LabServices module is installed. Then use the **Get-AzLabServicesLabPlan** cmdlet.
-
-```azurepowershell-interactive
-Import-Module Az.LabServices
-
-$labplanName = Read-Host -Prompt "Enter your lab plan name"
-Get-AzLabServicesLabPlan -Name $labplanName
-
-Write-Host "Press [ENTER] to continue..."
-```
-
-## Clean up resources
-
-When no longer needed, [delete the resource group](../azure-resource-manager/management/delete-resource-group.md?tabs=azure-portal#delete-resource-group
-), which deletes the lab plan.
-
-```azurepowershell-interactive
-$resourceGroupName = Read-Host -Prompt "Enter the Resource Group name"
-Remove-AzResourceGroup -Name $resourceGroupName
-
-Write-Host "Press [ENTER] to continue..."
-```
-
-## Next steps
-
-For a step-by-step tutorial that guides you through the process of creating a lab, see:
-
-> [!div class="nextstepaction"]
-> [Create a lab using an ARM template](how-to-create-lab-template.md)
lab-services How To Create Lab Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-create-lab-template.md
- Title: Create a lab by using Azure Resource Manager template (ARM template)-
-description: Learn how to create an Azure Lab Services lab by using Azure Resource Manager template (ARM template).
--- Previously updated : 05/10/2022--
-# Create a lab in Azure Lab Services using an ARM template
-
-In this article, you learn how to use an Azure Resource Manager (ARM) template to create a lab. You learn how to create a lab with Windows 11 Pro image. Once a lab is created, an educator [configures the template](how-to-create-manage-template.md), [adds lab users](how-to-manage-lab-users.md), and [publishes the lab](tutorial-setup-lab.md#publish-lab). For an overview of Azure Lab Services, see [An introduction to Azure Lab Services](lab-services-overview.md).
--
-If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal.
--
-## Prerequisites
--
-## Review the template
-
-The template used in this article is from [Azure Quickstart Templates](/samples/azure/azure-quickstart-templates/lab/).
--
-One Azure resource is defined in the template:
--- **[Microsoft.LabServices/labs](/azure/templates/microsoft.labservices/labs)**: resource type description.-
-More Azure Lab Services template samples can be found in [Azure Quickstart Templates](/samples/browse/?expanded=azure&products=azure-resource-manager&terms=lab%20services). For more information how to create a lab without a lab plan using automation, see [Create Azure LabServices lab template](/samples/azure/azure-quickstart-templates/lab/).
-
-## Deploy the template
-
-1. Select the following link to sign in to Azure and open a template. The template creates a lab.
-
- :::image type="content" source="../media/template-deployments/deploy-to-azure.svg" alt-text="Screenshot of the Deploy to Azure button to deploy resources with a template." link="https://portal.azure.com/#create/Microsoft.Template/uri/https%3a%2f%2fraw.githubusercontent.com%2fAzure%2fazure-quickstart-templates%2fmaster%2fquickstarts%2fmicrosoft.labservices%2flab-using-lab-plan%2fazuredeploy.json":::
-
-2. Optionally, change the name of the lab.
-3. Select the **resource group** the lab plan you're going to use.
-4. Enter the required values for the template:
-
- 1. **adminUser**. The name of the user that will be added as an administrator for the lab VM.
- 2. **adminPassword**. The password for the administrator user for the lab VM.
- 3. **labPlanId**. The resource ID for the lab plan to be used. The **Id** is listed in the **Properties** page of the lab plan resource in Azure.
-
- :::image type="content" source="./media/how-to-create-lab-template/lab-plan-properties-id.png" alt-text="Screenshot of properties page for lab plan in Azure Lab Services with I D property highlighted.":::
-
-5. Select **Review + create**.
-6. Select **Create**.
-
-The Azure portal is used here to deploy the template. You can also use Azure PowerShell, Azure CLI, or the REST API. To learn other deployment methods, see [Deploy resources with ARM templates](../azure-resource-manager/templates/deploy-powershell.md).
-
-## Review deployed resources
-
-You can either use the Azure portal to check the lab, or use the Azure PowerShell script to list the lab resource created.
-
-To use Azure PowerShell, first verify the Az.LabServices module is installed. Then use the **Get-AzLabServicesLab** cmdlet.
-
-```azurepowershell-interactive
-Import-Module Az.LabServices
-
-$lab = Read-Host -Prompt "Enter your lab name"
-Get-AzLabServicesLab -Name $lab
-
-Write-Host "Press [ENTER] to continue..."
-```
-
-To verify educators can use the lab, navigate to the Azure Lab Services website: [https://labs.azure.com](https://labs.azure.com). For more information about managing labs, see [View all labs](./how-to-manage-labs.md).
-
-## Clean up resources
-
-When no longer needed, [delete the resource group](../azure-resource-manager/management/delete-resource-group.md?tabs=azure-portal#delete-resource-group
-), which deletes the lab and other resources in the same group.
-
-```azurepowershell-interactive
-$resourceGroupName = Read-Host -Prompt "Enter the Resource Group name"
-Remove-AzResourceGroup -Name $resourceGroupName
-
-Write-Host "Press [ENTER] to continue..."
-```
-
-Alternately, an educator may delete a lab from the Azure Lab Services website: [https://labs.azure.com](https://labs.azure.com). For more information about deleting labs, see [Delete a lab](how-to-manage-labs.md#delete-a-lab).
-
-## Next steps
-
-For a step-by-step tutorial that guides you through the process of creating a template, see:
-
-> [!div class="nextstepaction"]
-> [Tutorial: Create and deploy your first ARM template](../azure-resource-manager/templates/template-tutorial-create-first-template.md)
lab-services How To Create Manage Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-create-manage-template.md
Title: Manage a template of a lab in Azure Lab Services | Microsoft Docs
-description: Learn how to create and manage a lab template in Azure Lab Services.
+ Title: Manage a lab template
+description: Learn how to create and manage a lab template in Azure Lab Services. You can use a template to customize the base VM image for lab VMs.
Previously updated : 07/04/2022 Last updated : 08/28/2023
-# Create and manage a template in Azure Lab Services
+# Create and manage a lab template in Azure Lab Services
-A template in a lab is a base VM image from which all users' virtual machines are created. Modify the template VM so that it's configured with exactly what you want to provide to the lab users. You can provide a name and description of the template that the lab users see. Then, you publish the template to make instances of the template VM available to your lab users. When you publish a template, Azure Lab Services creates VMs in the lab using the template. The number of VMs created during publish equals lab capacity. If using [Teams integration](lab-services-within-teams-overview.md), or [Canvas integration](lab-services-within-canvas-overview.md), the number of VMs created during publish equals the number of users in the lab. All virtual machines have the same configuration as the template.
+A lab template is a base VM image from which all lab users' virtual machines are created. You can use a template to customize the base VM image for lab VMs. For example, you might install extra software components, such as Visual Studio, or configure the operating system to disable the web server process. In this article, you learn how to create and manage a lab template.
-When you create a lab, the template VM is created but it's not started. You can start it, connect to it, and install any pre-requisite software for the lab, and then publish it. When you publish the template VM, it's automatically shut down for you if you haven't done so. This article describes how to manage a template VM of a lab.
+When you [publish a lab](./tutorial-setup-lab.md#publish-lab), Azure Lab Services creates the lab VMs based on the template VM image. If you modify the template VM at a later stage and republish it, all lab VMs are updated to match the new template. When you republish a template VM, Azure Lab Services reimages the lab VMs and removes all changes and data on the VM.
+
+When you create a lab, the template VM is created but it's not started. You can start it, connect to it, and install any prerequisite software for the lab, and then publish it. When you publish the template VM, it's automatically shut down for you if you haven't done so.
+
+The number of VMs created during publish equals lab capacity. If you're using [Teams integration](lab-services-within-teams-overview.md), or [Canvas integration](lab-services-within-canvas-overview.md), the number of VMs created during publish equals the number of users in the lab.
> [!NOTE] > Template VMs incur cost when running, so ensure that the template VM is shutdown when you aren't using it. ## Set or update template title and description
-Use the following steps to set title and description for the lab. Educators and students will see the title and description on the tiles of the [My Virtual Machines](instructor-access-virtual-machines.md) page.
+Lab creators and lab users can see the title and description on the tiles of the [My Virtual Machines](instructor-access-virtual-machines.md) page.
+
+Use the following steps to set title and description for the lab:
+
+1. On the **Template** page, enter the new **title** for the lab.
-1. On the **Template** page, enter the new **title** for the lab.
2. Enter the new **description** for the template. When you move the focus out of the text box, it's automatically saved.
- ![Template name and description](./media/how-to-create-manage-template/template-name-description.png)
+ :::image type="content" source="./media/how-to-create-manage-template/template-name-description.png" alt-text="Screenshot that shows the Template page in the Lab Services portal, allowing users to edit the template title and description.":::
## Update a template VM
-Use the following steps to update a template VM.
+Use the following steps to update a template VM:
1. On the **Template** page for the lab, select **Start template** on the toolbar.
-1. Wait until the template VM is started, and then select **Connect to template** on the toolbar to connect to the template VM. Depending on the setting for the lab, you'll connect using Remote Desktop Protocol (RDP) or Secure Shell (SSH).
-1. Once you connect to the template and make changes, it will no longer have the same setup as the virtual machines last published to your users. Template changes won't be reflected on your students' existing virtual machines until after you publish again.
- ![Connect to the template VM](./media/how-to-create-manage-template/connect-template-vm.png)
+1. Wait until the template VM is started, and then select **Connect to template** on the toolbar to connect to the template VM.
+
+ Depending on the setting for the lab, you connect using Remote Desktop Protocol (RDP) or Secure Shell (SSH).
+
+ :::image type="content" source="./media/how-to-create-manage-template/connect-template-vm.png" alt-text="Screenshot that shows the Template page in the Lab Service portal, highlighting the Connect to template button.":::
1. Install any software that's required for students to do the lab (for example, Visual Studio, Azure Storage Explorer, etc.).
+
1. Disconnect (close your remote desktop session) from the template VM.
+
1. **Stop** the template VM by selecting **Stop template**.
-1. Follow steps in the next section to **Publish** the updated template VM.
+
+> [!NOTE]
+> Template changes are not available on lab users' existing virtual machines until after you publish the lab template again. Follow steps in the next section to publish the updated template VM.
## Publish the template VM In this step, you publish the template VM. When you publish the template VM, Azure Lab Services creates VMs in the lab by using the template. All virtual machines have the same configuration as the template.
+> [!CAUTION]
+> When you republish a template VM, Azure Lab Services reimages the lab VMs and removes all changes and data on the VM.
+ 1. On the **Template** page, select **Publish** on the toolbar.
- ![Publish template button](./media/how-to-create-manage-template/template-page-publish-button.png)
+ Publishing is a permanent action and can't be undone.
+
+1. On the **Publish template** page, enter the number of virtual machines you want to create in the lab, and then select **Publish**.
- > [!WARNING]
- > Publishing is a permanent action. It can't be undone.
+ :::image type="content" source="./media/how-to-create-manage-template/publish-template-number-vms.png" alt-text="Screenshot that shows the Publish template window, allowing you to specify the lab capacity (number of lab VMs in the lab).":::
-2. On the **Publish template** page, enter the number of virtual machines you want to create in the lab, and then select **Publish**.
+ You can track the publishing status on the template. If you're using [lab plans](lab-services-whats-new.md), publishing can take up to 20 minutes.
- ![Publish template - number of VMs](./media/how-to-create-manage-template/publish-template-number-vms.png)
-3. You see the **status of publishing** the template on page. If using [Azure Lab Services August 2022 Update](lab-services-whats-new.md), publishing can take up to 20 minutes.
+1. Wait until the publishing is complete and then switch to the **Virtual machine pool** page by selecting **Virtual machines** on the left menu or by selecting the **Virtual machines** tile.
- ![Publish template - progress](./media/how-to-create-manage-template/publish-template-progress.png)
-4. Wait until the publishing is complete and then switch to the **Virtual machines pool** page by selecting **Virtual machines** on the left menu or by selecting **Virtual machines** tile. Confirm that you see virtual machines that are in **Unassigned** state. These VMs aren't assigned to students yet. They should be in **Stopped** state. You can start a student VM, connect to the VM, stop the VM, and delete the VM on this page. You can start them in this page or let your students start the VMs.
+ Confirm that you see virtual machines that are marked **Unassigned**, which indicates the lab VMs aren't assigned to lab users yet. The lab VMs should be in the **Stopped** state. You can start a lab VM, connect to the VM, stop the VM, and delete the VM on this page.
- ![Virtual machines in stopped state](./media/how-to-create-manage-template/virtual-machines-stopped.png)
+ :::image type="content" source="./media/how-to-create-manage-template/virtual-machines-stopped.png" alt-text="Screenshot that shows the Virtual machine pool page in the Lab Services portal, showing the list of unassigned lab VMs.":::
## Known issues
-When you create a new lab from an exported lab VM image, you're unable to login with the credentials you used for creating the lab. Follow these steps to [troubleshoot the login problem](./troubleshoot-access-lab-vm.md#unable-to-login-with-the-credentials-you-used-for-creating-the-lab).
-
-## Next steps
+When you create a new lab from an exported lab VM image, you're unable to sign in with the credentials you used for creating the lab. Follow these steps to [troubleshoot the sign-in problem](./troubleshoot-access-lab-vm.md#unable-to-login-with-the-credentials-you-used-for-creating-the-lab).
-See the following articles:
+## Related content
- [As an admin, create and manage lab plans](how-to-manage-lab-plans.md) - [As a lab owner, create and manage labs](how-to-manage-labs.md)
lab-services How To Prepare Windows Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-prepare-windows-template.md
Title: Prepare Windows lab template
description: Prepare a Windows-based lab template in Azure Lab Services. Configure commonly used software and OS settings, such as Windows Update, OneDrive, and Microsoft 365. +
Install other apps commonly used for teaching through the Windows Store app. Sug
## Next steps -- Learn how to manage cost by [controlling Windows shutdown behavior](how-to-windows-shutdown.md)
+- Learn how to manage cost by [controlling Windows shutdown behavior](how-to-windows-shutdown.md)
lab-services How To Use Restrict Allowed Virtual Machine Sku Sizes Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-use-restrict-allowed-virtual-machine-sku-sizes-policy.md
Title: How to restrict the virtual machine sizes allowed for labs
+ Title: Restrict allowed lab VM sizes
description: Learn how to use the Lab Services should restrict allowed virtual machine SKU sizes Azure Policy to restrict educators to specified virtual machine sizes for their labs. Previously updated : 08/23/2022 Last updated : 08/28/2023
-# How to restrict the virtual machine sizes allowed for labs
+# Restrict allowed virtual machine sizes for labs
-In this how to, you'll learn how to use the *Lab Services should restrict allowed virtual machine SKU sizes* Azure policy to control the SKUs available to educators when they're creating labs. In this example, you'll see how a lab administrator can allow only non-GPU SKUs, so educators can create only non-GPU SKU labs.
+In this article, you learn how to restrict the list of allowed lab virtual machine sizes for creating new labs by using an Azure policy. As a platform administrator, you can use policies to lay out guardrails for teams to manage their own resources. [Azure Policy](/azure/governance/policy/) helps audit and govern resource state.
[!INCLUDE [lab plans only note](./includes/lab-services-new-update-focused-article.md)] ## Configure the policy
-1. In the [Azure portal](https://portal.azure.com), go to your subscription.
+1. Sign in to the [Azure portal](https://portal.azure.com), and then go to your subscription.
1. From the left menu, under **Settings**, select **Policies**.
-1. Under **Authoring**, select **Assignments**.
+1. Under **Compliance**, select **Assign Policy**.
-1. Select **Assign Policy**.
:::image type="content" source="./media/how-to-use-restrict-allowed-virtual-machine-sku-sizes-policy/assign-policy.png" alt-text="Screenshot showing the Policy Compliance dashboard with Assign policy highlighted."::: 1. Select the **Scope** which you would like to assign the policy to, and then select **Select**.
- You can also select a resource group if you need the policy to apply more granularly.
+
+ Select the subscription to apply the policy to all resources. You can also select a resource group if you need the policy to apply more granularly.
+ :::image type="content" source="./media/how-to-use-restrict-allowed-virtual-machine-sku-sizes-policy/assign-policy-basics-scope.png" alt-text="Screenshot showing the Scope pane with subscription highlighted.":::
-1. Select Policy Definition. In Available definitions, search for *Lab Services*, select **Lab Services should restrict allowed virtual machine SKU sizes** and then select **Select**.
+1. Select **Policy definition**.
+
+1. In **Available Definitions**, search for *Lab Services*, select **Lab Services should restrict allowed virtual machine SKU sizes**, and then select **Add**.
+ :::image type="content" source="./media/how-to-use-restrict-allowed-virtual-machine-sku-sizes-policy/assign-policy-basics-definitions.png" alt-text="Screenshot showing the Available definitions pane with Lab Services should restrict allowed virtual machine SKU sizes highlighted. ":::
-1. On the Basics tab, select **Next**.
+1. On the **Basics** tab, select **Next**.
-1. On the Parameters tab, clear **Only show parameters that need input or review** to show all parameters.
- :::image type="content" source="./media/how-to-use-restrict-allowed-virtual-machine-sku-sizes-policy/assign-policy-parameters.png" alt-text="Screenshot showing the Parameters tab with Only show parameters that need input or review highlighted. ":::
+1. On the **Parameters** tab, clear **Only show parameters that need input or review** to show all parameters.
-1. The **Allowed SKU names** parameter shows the SKUs allowed when the policy is applied. By default all the available SKUs are allowed. You must clear the check boxes for any SKU that you don't wish to allow educators to use to create labs. In this example, only the following non-GPU SKUs are allowed:
- - CLASSIC_FSV2_2_4GB_128_S_SSD
- - CLASSIC_FSV2_4_8GB_128_S_SSD
- - CLASSIC_FSV2_8_16GB_128_S_SSD
- - CLASSIC_DSV4_4_16GB_128_P_SSD
- - CLASSIC_DSV4_8_32GB_128_P_SSD
+ :::image type="content" source="./media/how-to-use-restrict-allowed-virtual-machine-sku-sizes-policy/assign-policy-parameters.png" alt-text="Screenshot showing the Parameters tab with Only show parameters that need input or review highlighted. ":::
- :::image type="content" source="./media/how-to-use-restrict-allowed-virtual-machine-sku-sizes-policy/assign-policy-parameters-vms.png" alt-text="Screenshot showing the Allowed SKUs.":::
+1. In **Allowed SKU names**, clear the check boxes for any SKU that you don't want to allow for creating labs.
- Use the table below to determine which SKU names to apply.
+ By default, all available SKUs are allowed. Use the following table to determine which SKU names you want to allow.
|SKU Name|VM Size|VM Size Details| |--|--|--|
In this how to, you'll learn how to use the *Lab Services should restrict allowe
|CLASSIC_NVV4_8_28GB_128_S_SSD| Small GPU (Visualization) |8vCPUs, 28 GB RAM, 128 GB, Standard SSD |CLASSIC_NVV3_12_112GB_128_S_SSD| Medium GPU (Visualization) |12vCPUs, 112 GB RAM, 128 GB, Standard SSD
-1. In **Effect**, select **Deny**. Selecting deny will prevent a lab from being created if an educator tries to use a GPU SKU.
+1. In **Effect**, select **Deny** to prevent a lab from being created when a VM SKU isn't allowed.
+ :::image type="content" source="./media/how-to-use-restrict-allowed-virtual-machine-sku-sizes-policy/assign-policy-parameters-effect.png" alt-text="Screenshot showing the effect list.":::
-1. Select **Next**.
-
-1. On the Remediation tab, select **Next**.
- :::image type="content" source="./media/how-to-use-restrict-allowed-virtual-machine-sku-sizes-policy/assign-policy-remediation.png" alt-text="Screenshot showing the Remediation tab with Next highlighted.":::
-
-1. On the Non-compliance tab, in **Non-compliance messages**, enter a non-compliance message of your choice like ΓÇ£Selected SKU is not allowedΓÇ¥, and then select **Next**.
- :::image type="content" source="./media/how-to-use-restrict-allowed-virtual-machine-sku-sizes-policy/assign-policy-message.png" alt-text="Screenshot showing the Non-compliance tab with an example non-compliance message.":::
+1. Optionally, on the **Non-compliance messages** tab, enter a noncompliance message.
-1. On the Review + Create tab, select **Create** to create the policy assignment.
- :::image type="content" source="./media/how-to-use-restrict-allowed-virtual-machine-sku-sizes-policy/assign-policy-review-create.png" alt-text="Screenshot showing the Review and Create tab.":::
+ :::image type="content" source="./media/how-to-use-restrict-allowed-virtual-machine-sku-sizes-policy/assign-policy-message.png" alt-text="Screenshot showing the Non-compliance tab with an example noncompliance message.":::
-You've created a policy assignment for *Lab Services should restrict allowed virtual machine SKU sizes* and allowed only the use of non-GPU SKUs for labs. Attempting to create a lab with any other SKU will fail.
+1. On the **Review + Create** tab, select **Create** to create the policy assignment.
+
+You've created a policy assignment to allow only specific virtual machine sizes for creating labs. If a lab creator attempts to create a lab with any other SKU, the creation fails.
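If you prefer to script the assignment, the sketch below creates an equivalent assignment with Azure PowerShell (Az.Resources module). The parameter names passed in `-PolicyParameterObject` are assumptions for illustration; inspect the built-in definition with `Get-AzPolicyDefinition` to confirm the exact names before running it.

```powershell
# Sketch only: assign the built-in policy by script; parameter names are assumptions.
# Note: the property path can vary by Az.Resources version
# (.Properties.DisplayName in older versions, .DisplayName in newer ones).
$definition = Get-AzPolicyDefinition | Where-Object {
    $_.Properties.DisplayName -eq 'Lab Services should restrict allowed virtual machine SKU sizes'
}

New-AzPolicyAssignment `
    -Name 'restrict-lab-vm-sizes' `
    -Scope '/subscriptions/<subscription-id>' `
    -PolicyDefinition $definition `
    -PolicyParameterObject @{
        # Hypothetical parameter names; check the definition for the real ones.
        allowedSKUs = @('CLASSIC_FSV2_2_4GB_128_S_SSD', 'CLASSIC_FSV2_4_8GB_128_S_SSD')
        effectType  = 'Deny'
    }
```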
> [!NOTE] > New policy assignments can take up to 30 minutes to take effect. ## Exclude resources
-When applying a built-in policy, you can choose to exclude certain resources, with the exception of lab plans. For example, if the scope of your policy assignment is a subscription, you can exclude resources in a specified resource group. Exclusions are configured using the Exclusions property on the Basics tab when creating a policy definition.
+When applying a built-in policy, you can choose to exclude certain resources, except for lab plans. For example, if the scope of your policy assignment is a subscription, you can exclude resources in a specified resource group.
+
+You can configure exclusions when creating a policy assignment by specifying the **Exclusions** property on the **Basics** tab.
:::image type="content" source="./media/how-to-use-restrict-allowed-virtual-machine-sku-sizes-policy/assign-policy-basics-exclusions.png" alt-text="Screenshot showing the Basics tab with Exclusions highlighted."::: ## Exclude a lab plan
-Lab plans cannot be excluded using the Exclusions property on the Basics tab. To exclude a lab plan from a policy assignment, you first need to get the lab plan resource ID, and then use it to specify the lab pan you want to exclude on the Parameters tab.
+You can exclude a lab plan from a policy assignment by specifying the lab plan ID in the assignment's parameters.
+
+1. To get the lab plan ID:
+
+ 1. In the [Azure portal](https://portal.azure.com), select your lab plan.
+ 1. Under **Settings**, select **Properties**, and then copy the **Id**.
+
+ :::image type="content" source="./media/how-to-use-restrict-allowed-virtual-machine-sku-sizes-policy/resource-id.png" alt-text="Screenshot showing the lab plan properties with Id highlighted.":::
+
+1. To exclude the lab plan from the policy assignment:
-### Locate and copy lab plan resource ID
-Use the following steps to locate and copy the resource ID so that you can paste it into the exclusion configuration.
-1. In the [Azure portal](https://portal.azure.com), go to the lab plan you want to exclude.
+ 1. Create a new policy assignment.
+ 1. On the **Parameters** tab, clear **Only show parameters that need input or review**.
+ 1. For **Lab Plan Id to exclude**, enter the lab plan ID you copied earlier.
-1. Under Settings, select Properties, and then copy the **Resource ID**.
- :::image type="content" source="./media/how-to-use-restrict-allowed-virtual-machine-sku-sizes-policy/resource-id.png" alt-text="Screenshot showing the lab plan properties with resource ID highlighted.":::
+ :::image type="content" source="./media/how-to-use-restrict-allowed-virtual-machine-sku-sizes-policy/assign-policy-exclude-lab-plan-id.png" alt-text="Screenshot showing the Parameter tab with Lab Plan ID to exclude highlighted.":::
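You can also look up the lab plan ID with Azure PowerShell instead of copying it from the portal. A minimal sketch, assuming the Az.LabServices module is installed and using placeholder resource names:

```powershell
# Sketch: get the lab plan resource ID (names are placeholders).
$labPlan = Get-AzLabServicesLabPlan -Name 'myLabPlan' -ResourceGroupName 'myResourceGroup'
$labPlan.Id   # Paste this value into 'Lab Plan Id to exclude'
```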
-### Enter the lab plan to exclude in the policy
-Now you have a lab plan resource ID, you can use it to exclude the lab plan as you assign the policy.
-1. On the Parameters tab, clear **Only show parameters that need input or review**.
-1. For **Lab Plan ID to exclude**, enter the lab plan resource ID you copied earlier.
- :::image type="content" source="./media/how-to-use-restrict-allowed-virtual-machine-sku-sizes-policy/assign-policy-exclude-lab-plan-id.png" alt-text="Screenshot showing the Parameter tab with Lab Plan ID to exclude highlighted.":::
+## Related content
+- [Use Azure Policy to audit and manage Azure Lab Services](./azure-polices-for-lab-services.md)
-## Next steps
-See the following articles:
-- [What's new with Azure Policy for Lab Services?](azure-polices-for-lab-services.md)-- [Built-in Policies](../governance/policy/samples/built-in-policies.md#lab-services)-- [What is Azure policy?](../governance/policy/overview.md)
+- [What is Azure Policy?](/azure/governance/policy/overview)
lab-services Lab Account Owner Support Information https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/lab-account-owner-support-information.md
Title: Set up support information (lab account owner) description: Describes how a lab account owner can set support contact information. Lab creators and lab users can view and use it to get help. Previously updated : 04/25/2022 Last updated : 08/28/2023
The support information includes:
1. Select **Save** on the toolbar. :::image type="content" source="./media/lab-account-owner-support-information/lab-account-internal-support-page.png" alt-text="Screenshot of the Internal support page.":::-
-## Next steps
-
-See the following articles:
--- [View contact information (lab creator)](lab-creator-support-information.md)-- [View contact information (lab user)](lab-user-support-information.md)
lab-services Lab Creator Support Information https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/lab-creator-support-information.md
- Title: View support information (lab creator)
-description: This article explains how lab creators can view support information that they can use to get help.
Previously updated : 04/25/2022---
-# View support information (lab creator in Azure Lab Services)
-
-This article explains how you (as a lab creator) can view the following support information:
--- URL-- Email-- Phone-- Additional instructions-
-You can use this information to get help when you run into any technical issues while creating a lab in a lab plan.
-
-## View support information
-
-1. Sign in to Azure Lab Services web portal: [https://labs.azure.com](https://labs.azure.com).
-2. Select question mark (**?**) at the top-right corner of the page.
-3. Confirm that you see the links to the **view support website**, **email support**, and **support phone number**.
-
- :::image type="content" source="./media/lab-creator-support-information/support-information.png" alt-text="Screenshot that shows the links to the support information.":::
-
-## Next steps
-
-See the following article to learn about how a lab user views the support contact information:
--- [View contact information (lab user)](lab-user-support-information.md)
lab-services Lab Services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/lab-services-whats-new.md
Title: What's New in Azure Lab Services | Microsoft Docs
+ Title: What's New in Azure Lab Services
description: Learn what's new in the Azure Lab Services August 2022 Updates. Previously updated : 07/04/2022 Last updated : 08/28/2023
lab-services Lab User Support Information https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/lab-user-support-information.md
- Title: View support information (lab user)
-description: This article explains how a lab user or educator can view support information that he/she can use to get help.
Previously updated : 06/26/2020---
-# View support information (lab user in Azure Lab Services)
-This article explains how you (as a lab user) can view the following support information:
--- URL-- Email-- Phone-- Additional instructions-
-You can use this information to get help when you run into any technical issues while using a lab in a lab account.
-
-
-## View support information
-1. Sign in to Lab Services web portal: [https://labs.azure.com](https://labs.azure.com).
-2. Select the **lab or virtual machine** for which you need help, and select **?** at the top-right corner of the page.
-3. Confirm that you see links to the **view support website**, **email support**, and **support phone number**.
-
- ![View support information](./media/lab-user-support-information/support-information.png)
-4. You can view support contact information for another lab by switching to that lab in the drop-down list.
-
- ![Switch to another lab](./media/lab-user-support-information/switch-another-lab.png)
-5. Now, you see the support contact information for the other lab.
-
- ![Other lab's support information](./media/lab-user-support-information/second-lab-support-information.png)
-
-## Next steps
-See the following article to learn about how a lab user views the support contact information:
--- [How a lab account owner can set support contact information](lab-account-owner-support-information.md)-- [How a lab creator can view support contact information](lab-creator-support-information.md)
lab-services Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/policy-reference.md
Title: Built-in policy definitions for Lab Services description: Lists Azure Policy built-in policy definitions for Azure Lab Services. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2023 Last updated : 08/30/2023
lab-services Tutorial Create Lab With Advanced Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/tutorial-create-lab-with-advanced-networking.md
Title: Use advanced networking in Azure Lab Services | Microsoft Docs
description: Create an Azure Lab Services lab plan with advanced networking. Create two labs and verify they share same virtual network when published. Previously updated : 07/27/2022 Last updated : 08/28/2023
[!INCLUDE [update focused article](includes/lab-services-new-update-focused-article.md)]
-Azure Lab Services provides a feature called advanced networking. Advanced networking enables you to control the network for labs created using lab plans. You can use advanced networking to implement various scenarios including [connecting to licensing servers](how-to-create-a-lab-with-shared-resource.md), using [hub-spoke model for Azure Networking](/azure/architecture/reference-architectures/hybrid-networking/), or lab to lab communication. Learn more about the [supported networking scenarios in Azure Lab Services](./concept-lab-services-supported-networking-scenarios.md).
+Azure Lab Services advanced networking enables you to control the network for labs created using lab plans. You can use advanced networking to implement various scenarios including [connecting to licensing servers](how-to-create-a-lab-with-shared-resource.md), using a [hub-spoke model for Azure Networking](/azure/architecture/reference-architectures/hybrid-networking/), or lab-to-lab communication. In this tutorial, you set up lab-to-lab communication for a web development class.
-Let's focus on the lab to lab communication scenario. For our example, we'll create labs for a web development class. Each student will need access to both a server VM and a client VM. The server and client VMs must be able to communicate with each other. We'll test communication by configuring Internet Control Message Protocol (ICMP) for each VM and allowing the VMs to ping each other.
+After you complete this tutorial, you'll have two labs whose lab virtual machines can communicate with each other: a server VM and a client VM.
:::image type="content" source="media/tutorial-create-lab-with-advanced-networking/architecture-two-labs-with-advanced-networking.png" alt-text="Architecture diagram showing two labs that use the same subnet of a virtual network.":::
+Learn more about the [supported networking scenarios in Azure Lab Services](./concept-lab-services-supported-networking-scenarios.md).
+ In this tutorial, you learn how to: > [!div class="checklist"]
In this tutorial, you learn how to:
## Prerequisites [!INCLUDE [Azure subscription](./includes/lab-services-prerequisite-subscription.md)]+ [!INCLUDE [Azure manage resources](./includes/lab-services-prerequisite-manage-resources.md)] ## Create a resource group [!INCLUDE [resource group definition](../../includes/resource-group.md)]
-The following steps show how to use the Azure portal to [create a resource group](../azure-resource-manager/management/manage-resource-groups-portal.md). For simplicity, we'll put all resources for this tutorial in the same resource group.
+The following steps show how to use the Azure portal to [create a resource group](../azure-resource-manager/management/manage-resource-groups-portal.md). For simplicity, you create all resources for this tutorial in the same resource group.
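If you script your environment instead, creating the resource group is a one-liner in Azure PowerShell; the location below is a placeholder, so pick the region you plan to use for the labs:

```powershell
# Sketch: create the tutorial's resource group (replace the location as needed).
New-AzResourceGroup -Name 'MyResourceGroup' -Location 'eastus'
```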
1. Sign in to the [Azure portal](https://portal.azure.com). 1. Select **Resource groups**.
The following steps show how to use the Azure portal to create a virtual network
1. Select **Next: IP Addresses**. :::image type="content" source="media/tutorial-create-lab-with-advanced-networking/create-virtual-network-basics-page.png" alt-text="Screenshot of Basics tab of Create virtual network page in the Azure portal.":::
-1. On the **IP Addresses** tab, create a subnet that will be used by the labs.
+1. On the **IP Addresses** tab, create a subnet for the labs to use.
1. Select **+ Add subnet** 1. For **Subnet name**, enter **labservices-subnet**.
- 1. For **Subnet address range**, enter range in CIDR notation. For example, 10.0.1.0/24 will have enough IP addresses for 251 lab VMs. (Five IP addresses are reserved by Azure for every subnet.) To create a subnet with more available IP addresses for VMs, use a different CIDR prefix length. For example, 10.0.0.0/20 would have room for over 4000 IP addresses for lab VMs. For more information about adding subnets, see [Add a subnet](../virtual-network/virtual-network-manage-subnet.md).
+ 1. For **Subnet address range**, enter a range in CIDR notation. For example, 10.0.1.0/24 has enough IP addresses for 251 lab VMs. (Azure reserves five IP addresses for every subnet.) To create a subnet with more available IP addresses for VMs, use a different CIDR prefix length. For example, 10.0.0.0/20 would have room for over 4,000 IP addresses for lab VMs. For more information about adding subnets, see [Add a subnet](../virtual-network/virtual-network-manage-subnet.md).
1. Select **OK**. 1. Select **Review + Create**.
+ :::image type="content" source="media/tutorial-create-lab-with-advanced-networking/create-virtual-network-ip-addresses-page.png" alt-text="Screenshot of IP addresses tab of the Create virtual network page in the Azure portal.":::
1. Once validation passes, select **Create**. ## Delegate subnet to Azure Lab Services
-In this section, we'll configure the subnet to be used with Azure Lab Services. To tell Azure Lab Services that a subnet may be used, the subnet must be [delegated to the service](../virtual-network/manage-subnet-delegation.md).
+Next, you configure the subnet to be used with Azure Lab Services. To use a subnet with Azure Lab Services, the subnet must be [delegated to the service](../virtual-network/manage-subnet-delegation.md).
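The portal steps follow. If you manage the network by script instead, here's a minimal Azure PowerShell sketch of the same delegation, assuming the virtual network and subnet names used in this tutorial:

```powershell
# Sketch: delegate the subnet to Azure Lab Services (Az.Network module).
$vnet   = Get-AzVirtualNetwork -Name 'MyVirtualNetwork' -ResourceGroupName 'MyResourceGroup'
$subnet = Get-AzVirtualNetworkSubnetConfig -Name 'labservices-subnet' -VirtualNetwork $vnet

# Microsoft.LabServices/labplans is the delegation target for lab plans.
$subnet = Add-AzDelegation -Name 'labservices-delegation' `
    -ServiceName 'Microsoft.LabServices/labplans' -Subnet $subnet

# Persist the change to the virtual network.
Set-AzVirtualNetwork -VirtualNetwork $vnet
```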
1. Open the **MyVirtualNetwork** resource. 1. Select the **Subnets** item on the left menu.
In this section, we'll configure the subnet to be used with Azure Lab Services.
[!INCLUDE [nsg intro](../../includes/virtual-networks-create-nsg-intro-include.md)]
-An NSG is required when using advanced networking in Azure Lab Services. In this section, we'll create the NSG. In the following section, we'll add some inbound rules needed to access lab VMs.
+An NSG is required when using advanced networking in Azure Lab Services.
To create an NSG, complete the following steps:
To create an NSG, complete the following steps:
## Update the network security group inbound rules
-To ensure that students can RDP to the lab VMs, we need to create an **Allow** security rule. When using Linux, we need to adapt the rule for SSH. Let's create a rule that allows both RDP and SSH traffic. We'll use the subnet range defined in the previous section.
+To ensure that lab users can use remote desktop to connect to the lab VMs, you need to create a security rule to allow this type of traffic. When you use Linux, you need to adapt the rule for SSH.
+
+To create a rule that allows both RDP and SSH traffic for the subnet you created previously:
1. Open **MyNsg**. 1. Select **Inbound security rules** on the left menu.
To ensure that students can RDP to the lab VMs, we need to create an **Allow** s
1. Select **Add**.
 :::image type="content" source="media/tutorial-create-lab-with-advanced-networking/nsg-add-inbound-rule.png" alt-text="Screenshot of Add inbound rule window for Network security group.":::
+
1. Wait for the rule to be created.
-1. Select **Refresh** on the menu bar. Our new rule will now show in the list of rules.
+1. Select **Refresh** on the menu bar. The new rule now shows in the list of rules.
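If you prefer Azure PowerShell over the portal, a sketch of the same inbound rule follows. The rule name and priority are illustrative, and the source prefix should mirror whatever you chose in the portal:

```powershell
# Sketch: allow RDP (3389) and SSH (22) into the lab subnet (Az.Network module).
$nsg = Get-AzNetworkSecurityGroup -Name 'MyNsg' -ResourceGroupName 'MyResourceGroup'

Add-AzNetworkSecurityRuleConfig -NetworkSecurityGroup $nsg `
    -Name 'AllowRdpSshInbound' `
    -Direction Inbound -Access Allow -Priority 1000 -Protocol Tcp `
    -SourceAddressPrefix '*' -SourcePortRange '*' `
    -DestinationAddressPrefix '10.0.1.0/24' -DestinationPortRange @('22', '3389')

# Persist the rule on the network security group.
Set-AzNetworkSecurityGroup -NetworkSecurityGroup $nsg
```

The destination prefix matches the example subnet range (10.0.1.0/24) from earlier in this tutorial; adjust it if you used a different range.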
## Associate network security group to virtual network
-We now have an NSG with an inbound security rule to allow lab VMs to connect to the virtual network. Let's associate the NSG with the virtual network we created earlier.
+You now have an NSG with an inbound security rule to allow lab VMs to connect to the virtual network.
+
+To associate the NSG with the virtual network you created earlier:
1. Open **MyVirtualNetwork**. 1. Select **Subnets** on the left menu.
We now have an NSG with an inbound security rule to allow lab VMs to connect to
:::image type="content" source="media/tutorial-create-lab-with-advanced-networking/associate-nsg-with-subnet.png" alt-text="Screenshot of Associate subnet page in the Azure portal."::: > [!WARNING]
-> Connecting the network security group to the subnet is a **required step**. Students will not be able to connect to their VMs if there is no network security group associated with the subnet.
+> Connecting the network security group to the subnet is a **required step**. Lab users can't connect to their lab VMs if no network security group is associated with the subnet.
## Create a lab plan using advanced networking
-Now that we have the network created and configured, we can create the lab plan.
+Now that the virtual network is created and configured, you can create the lab plan:
1. Select **Create a resource** in the upper left-hand corner of the Azure portal. 1. Search for **lab plan**.
Now that we have the network created and configured, we can create the lab plan.
## Create two labs
-Next, let's create two labs that are using advanced networking. These labs will use the **labservices-subnet** we associated with Azure Lab Services. Any lab VMs created using **MyLabPlan** will be able to communicate with each other. Communication can be restricted by using NSGs, firewalls, etc.
+Next, create two labs that use advanced networking. These labs use the **labservices-subnet** that's associated with Azure Lab Services. Any lab VMs created using **MyLabPlan** can communicate with each other. Communication can be restricted by using NSGs, firewalls, and more.
-To create a lab, see the following steps. We'll run the steps twice. Once to create the lab with the server VMs and once to create the lab with the client VMs.
+Perform the following steps to create both labs. Repeat these steps for the lab with the server VMs and for the lab with the client VMs.
-1. Navigate to Lab Services web site: [https://labs.azure.com](https://labs.azure.com).
+1. Navigate to the Azure Lab Services website: [https://labs.azure.com](https://labs.azure.com).
1. Select **Sign in** and enter your credentials. Azure Lab Services supports organizational accounts and Microsoft accounts. 1. Select **MyResourceGroup** from the dropdown on the menu bar. 1. Select **New lab**. :::image type="content" source="./media/tutorial-create-lab-with-advanced-networking/new-lab-button.png" alt-text="Screenshot of Azure Lab Services portal. New lab button is highlighted."::: 1. In the **New Lab** window, do the following actions:
- 1. Specify a **name**. The name should be easily identifiable. We'll use **MyServerLab** for the lab with the server VMs and **MyClientLab** for the lab with the client VMs. For more information about naming restrictions, see [Microsoft.LabServices resource name rules](../azure-resource-manager/management/resource-name-rules.md#microsoftlabservices).
- 1. Choose a **virtual machine image**. For simplicity we'll use **Windows 11 Pro**, but you can choose another available image if you want. For more information about enabling virtual machine images, see [Specify Marketplace images available to lab creators](specify-marketplace-images.md).
+ 1. Specify a **name**. The name should be easily identifiable. Use **MyServerLab** for the lab with the server VMs and **MyClientLab** for the lab with the client VMs. For more information about naming restrictions, see [Microsoft.LabServices resource name rules](../azure-resource-manager/management/resource-name-rules.md#microsoftlabservices).
+ 1. Choose a **virtual machine image**. For this tutorial, use **Windows 11 Pro**, but you can choose another available image if you want. For more information about enabling virtual machine images, see [Specify Marketplace images available to lab creators](specify-marketplace-images.md).
1. For **size**, select **Medium**.
- 1. **Region** will only have one region. When a lab uses advanced networking, the lab must be in the same region as the associated subnet.
+ 1. **Region** shows only one region. When a lab uses advanced networking, the lab must be in the same region as the associated subnet.
1. Select **Next**. :::image type="content" source="./media/tutorial-create-lab-with-advanced-networking/new-lab-window.png" alt-text="Screenshot of the New lab window for Azure Lab Services.":::
-1. On the **Virtual machine credentials** page, specify default administrator credentials for all VMs in the lab. Specify the **name** and **password** for the administrator. By default all the student VMs will have the same password as the one specified here. Select **Next**.
+1. On the **Virtual machine credentials** page, specify default administrator credentials for all VMs in the lab. Specify the **name** and **password** for the administrator. By default all the lab VMs have the same password as the one specified here. Select **Next**.
:::image type="content" source="./media/tutorial-create-lab-with-advanced-networking/new-lab-credentials.png" alt-text="Screenshot that shows the Virtual machine credentials window when creating a new Azure Lab Services lab.":::
To create a lab, see the following steps. We'll run the steps twice. Once to c
## Enable ICMP on the lab templates
-Once the labs have been created, we'll enable ICMP (ping). Using ping is a simple example to show the template and lab VMs from different labs may communicate with each other. First, we'll enable ICMP on the template VMs for both labs. Enabling ICMP on the template VM will also enable it on the lab VMs. Once the labs are published, the lab VMs will be able to ping each other.
+Once the labs are created, enable ICMP (ping) for testing communication between the lab VMs. First, enable ICMP on the template VMs for both labs. Enabling ICMP on the template VM also enables it on the lab VMs. Once the labs are published, the lab VMs are able to ping each other.
To enable ICMP, complete the following steps for each template VM in each lab.
To enable ICMP, complete the following steps for each template VM in each lab.
:::image type="content" source="./media/tutorial-create-lab-with-advanced-networking/lab-connect-to-template.png" alt-text="Screenshot of Azure Lab Services template page. The Connect to template menu button is highlighted.":::
-Now that were logged on to the template VM, let's modify the firewall rules on the VM to allow ICMP. Since we're using Windows 11, we can use PowerShell and the [Enable-NetFilewallRule](/powershell/module/netsecurity/enable-netfirewallrule) cmdlet. To open a PowerShell window:
+When you're logged in to the template VM, modify the firewall rules on the VM to allow ICMP. Because you're using Windows 11, you can use PowerShell and the [Enable-NetFirewallRule](/powershell/module/netsecurity/enable-netfirewallrule) cmdlet, as shown in the example after these steps. To open a PowerShell window:
1. Select the Start button. 1. Type "PowerShell"
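For reference, enabling the built-in ICMPv4 echo request rule is a single cmdlet call. The display name below is the standard built-in Windows rule; verify it exists on your image before relying on it:

```powershell
# Sketch: enable inbound ping (ICMPv4 echo request) on the template VM.
# Confirm the rule name on your image first:
#   Get-NetFirewallRule -DisplayName '*Echo Request*'
Enable-NetFirewallRule -DisplayName 'File and Printer Sharing (Echo Request - ICMPv4-In)'
```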
In this step, you publish the lab. When you publish the template VM, Azure Lab S
## Test communication between lab VMs
-In this section we'll wrap up by showing that the two student virtual machines in different labs are able to communicate with each other.
+In this section, confirm that the two lab virtual machines in different labs are able to communicate with each other.
-First, let's start and connect to a lab VM from each lab. Complete the following steps for each lab.
+First, start and connect to a lab VM from each lab. Complete the following steps for each lab.
1. Open the lab in the [Azure Lab Services website](https://labs.azure.com). 1. Select **Virtual machine pool** on the left menu. 1. Select a single VM listed in the virtual machine pool.
-1. Take note of the **Private IP Address** for the VM. We'll need the private IP addresses of both the server lab and client lab VMs later.
+1. Take note of the **Private IP Address** for the VM. You need the private IP addresses of both the server lab and client lab VMs later.
1. Select the **State** slider to change the state from **Stopped** to **Starting**. > [!NOTE]
- > When an educator turns on a student VM, quota for the student isn't affected. Quota for a user specifies the number of lab hours available to a student outside of the scheduled class time. For more information on quotas, see [Set quotas for users](how-to-manage-lab-users.md?#set-quotas-for-users).
+ > When a lab educator starts a lab VM, quota for the lab user isn't affected. Quota for a user specifies the number of lab hours available to a lab user outside of the scheduled class time. For more information on quotas, see [Set quotas for users](how-to-manage-lab-users.md?#set-quotas-for-users).
1. Once the **State** is **Running**, select the connect icon for the running VM. Open the download RDP file to connect to the VM. For more information about connection experiences on different operating systems, see [Connect to a lab VM](connect-virtual-machine.md). :::image type="content" source="media/tutorial-create-lab-with-advanced-networking/virtual-machine-pool-running-vm.png" alt-text="Screen shot of virtual machine pool page for Azure Lab Services lab.":::
-Now we can use the ping utility to test cross-lab communication. From the lab VM in the server lab, open a command prompt. Use `ping {ip-address}`. The `{ip-address}` is the **Private IP Address** of the client VM, that we noted previously. Test can also be done from the VM from the client lab to the lab VM in the server lab.
+Now, use the ping utility to test cross-lab communication. From the lab VM in the server lab, open a command prompt. Use `ping {ip-address}`. The `{ip-address}` is the **Private IP Address** of the client VM that you noted previously. You can also run this test from the lab VM in the client lab to the lab VM in the server lab.
:::image type="content" source="medi.png" alt-text="Screen shot command window with the ping command executed.":::
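If you'd rather stay in PowerShell than use the classic ping utility, `Test-Connection` performs the same check; the address below is a placeholder for the private IP you noted earlier:

```powershell
# Sketch: ping the other lab's VM by its private IP (placeholder address).
Test-Connection -ComputerName 10.0.1.5 -Count 4
```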
If you're not going to continue to use this application, delete the virtual netw
## Next steps >[!div class="nextstepaction"]
->[Add students to the labs](how-to-manage-lab-users.md)
+>[Add lab users to the labs](how-to-manage-lab-users.md)
lighthouse Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lighthouse/samples/policy-reference.md
Title: Built-in policy definitions for Azure Lighthouse description: Lists Azure Policy built-in policy definitions for Azure Lighthouse. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2023 Last updated : 08/30/2023
load-balancer Load Balancer Outbound Connections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-outbound-connections.md
Ports are used to generate unique identifiers used to maintain distinct flows. T
If a port is used for inbound connections, it has a **listener** for inbound connection requests on that port. That port can't be used for outbound connections. To establish an outbound connection, an **ephemeral port** is used to provide the destination with a port on which to communicate and maintain a distinct traffic flow. When these ephemeral ports are used for SNAT, they're called **SNAT ports**.
-By definition, every IP address has 65,535 ports. Each port can either be used for inbound or outbound connections for TCP (Transmission Control Protocol) and UDP (User Datagram Protocol). When a public IP address is added as a frontend IP to a load balancer, 64,000 ports are eligible for SNAT. While all public IPs that are added as frontend IPs can be allocated, frontend IPs are consumed one at a time. For example, if two backend instances are allocated 64,000 ports each, with access to two frontend IPs, both backend instances consume ports from the first frontend IP until all 64,000 ports have been exhausted.
+By definition, every IP address has 65,535 ports. Each port can either be used for inbound or outbound connections for TCP (Transmission Control Protocol) and UDP (User Datagram Protocol). When a public IP address is added as a frontend IP to a load balancer, 64,000 ports are eligible for SNAT.
Each port used in a load balancing or inbound NAT rule consumes a range of eight ports from the 64,000 available SNAT ports. This usage reduces the number of ports eligible for SNAT, if the same frontend IP is used for outbound connectivity. If load-balancing or inbound NAT rules consumed ports are in the same block of eight ports consumed by another rule, the rules don't require extra ports.
If using SNAT without outbound rules via a public load balancer, SNAT ports are
## <a name="preallocatedports"></a> Default port allocation table
-The following <a name="snatporttable"></a>table shows the SNAT port preallocations for backend pool sizes:
+When load balancing rules are set to use default port allocation, or outbound rules are configured with "Use the default number of outbound ports", SNAT ports are allocated based on the backend pool size. Backend instances receive the number of ports defined in the following table, per frontend IP, up to a maximum of 1,024 ports.
-| Pool size (VM instances) | Default SNAT ports per IP configuration |
+For example, with 100 VMs in a backend pool and one frontend IP, each VM receives 512 ports. If you add a second frontend IP, each VM receives an additional 512 ports, for a total of 1,024 ports per VM. As a result, adding a third frontend IP doesn't increase the number of allocated SNAT ports beyond 1,024.
+
+As a rule of thumb, when default port allocation is used, the number of SNAT ports per backend instance can be computed as: min(default SNAT ports for the pool size × number of frontend IPs associated with the pool, 1024).
+
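To make the rule of thumb concrete, here's a small PowerShell sketch that computes the default allocation. The tiers encode the documented default allocation table (the table excerpt that follows shows the first and last tiers):

```powershell
# Sketch: default SNAT ports per backend instance, per the default allocation table.
function Get-DefaultSnatPorts {
    param([int]$PoolSize, [int]$FrontendIpCount = 1)

    $perIp = switch ($PoolSize) {
        { $_ -le 50 }  { 1024; break }
        { $_ -le 100 } { 512;  break }
        { $_ -le 200 } { 256;  break }
        { $_ -le 400 } { 128;  break }
        { $_ -le 800 } { 64;   break }
        default        { 32 }
    }

    # Allocation scales with frontend IPs but is capped at 1,024 ports.
    [Math]::Min($perIp * $FrontendIpCount, 1024)
}

Get-DefaultSnatPorts -PoolSize 100 -FrontendIpCount 1   # 512
Get-DefaultSnatPorts -PoolSize 100 -FrontendIpCount 2   # 1024
Get-DefaultSnatPorts -PoolSize 100 -FrontendIpCount 3   # still 1024
```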
+The following <a name="snatporttable"></a>table shows the SNAT port preallocations for a single frontend IP, depending on the backend pool size:
+
+| Pool size (VM instances) | Default SNAT ports |
| --- | --- |
| 1-50 | 1,024 |
| 51-100 | 512 |
The following <a name="snatporttable"></a>table shows the SNAT port preallocatio
| 401-800 | 64 |
| 801-1,000 | 32 |
+
## Port exhaustion
Every connection to the same destination IP and destination port uses a SNAT port. This connection maintains a distinct **traffic flow** from the backend instance or **client** to a **server**. This process gives the server a distinct port on which to address traffic. Without this process, the client machine is unaware of which flow a packet is part of.
load-balancer Load Balancer Test Frontend Reachability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-test-frontend-reachability.md
Based on the current health probe state of your backend instances, you receive d
## Usage considerations - ICMP pings can't be disabled and are allowed by default on Standard Public Load Balancers.
+- ICMP pings with packet sizes larger than 64 bytes are dropped, leading to timeouts.
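For example, the Windows ping utility's `-l` flag sets the ICMP payload size, so you can probe the limit directly; replace the placeholder with your frontend IP (on Linux, the equivalent flag is `-s`):

```powershell
# Probe the size limit: a small payload succeeds; a large one times out.
ping -l 32 <frontend-ip>
ping -l 1024 <frontend-ip>
```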
> [!NOTE] > ICMP ping requests are not sent to the backend instances; they are handled by the Load Balancer.
load-balancer Quickstart Load Balancer Standard Internal Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/quickstart-load-balancer-standard-internal-cli.md
Create a virtual network by using [az network vnet create](/cli/azure/network/vn
In this example, you create an Azure Bastion host. The Azure Bastion host is used later in this article to securely manage the virtual machines and test the load balancer deployment. > [!IMPORTANT]- > [!INCLUDE [Pricing](../../includes/bastion-pricing.md)]
- >
- ### Create a bastion public IP address Use [az network public-ip create](/cli/azure/network/public-ip#az-network-public-ip-create) to create a public IP address for the Azure Bastion host.
load-balancer Quickstart Load Balancer Standard Internal Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/quickstart-load-balancer-standard-internal-portal.md
Previously updated : 07/18/2022 Last updated : 08/17/2023 -+ #Customer intent: I want to create a internal load balancer so that I can load balance internal traffic to VMs. # Quickstart: Create an internal load balancer to load balance VMs using the Azure portal
-Get started with Azure Load Balancer by using the Azure portal to create an internal load balancer for a backend pool with two virtual machines. Additional resources include Azure Bastion, NAT Gateway, a virtual network, and the required subnets.
+Get started with Azure Load Balancer by using the Azure portal to create an internal load balancer for a backend pool with two virtual machines. Other resources include Azure Bastion, NAT Gateway, a virtual network, and the required subnets.
:::image type="content" source="media/quickstart-load-balancer-standard-internal-portal/internal-load-balancer-resources.png" alt-text="Diagram of resources deployed for internal load balancer.":::
+> [!NOTE]
+> In this example, you create a NAT gateway to provide outbound internet access. The outbound rules tab in the configuration is bypassed and isn't needed with the NAT gateway. For more information on Azure NAT gateway, see [What is Azure Virtual Network NAT?](../virtual-network/nat-gateway/nat-overview.md)
+> For more information about outbound connections in Azure, see [Source Network Address Translation (SNAT) for outbound connections](../load-balancer/load-balancer-outbound-connections.md)
## Prerequisites - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
Get started with Azure Load Balancer by using the Azure portal to create an inte
Sign in to the [Azure portal](https://portal.azure.com).
+## Create NAT gateway
+
+All outbound internet traffic traverses the NAT gateway to the internet. Use the following steps to create a NAT gateway for the virtual network in this quickstart.
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. In the search box at the top of the portal, enter **NAT gateway**. Select **NAT gateways** in the search results.
+
+1. Select **+ Create**.
+
+1. In the **Basics** tab of **Create network address translation (NAT) gateway** enter or select the following information:
+
+ | Setting | Value |
+ | - | -- |
+ | **Project details** | |
+ | Subscription | Select your subscription. |
+ | Resource group | Select **Create new**. </br> Enter **CreateIntLBQS-rg** in Name. </br> Select **OK**. |
+ | **Instance details** | |
+ | NAT gateway name | Enter **myNATgateway**. |
+ | Region | Select **East US**. |
+ | Availability zone | Select **None**. |
+ | Idle timeout (minutes) | Enter **15**. |
+
+1. Select the **Outbound IP** tab or select the **Next: Outbound IP** button at the bottom of the page.
+
+1. Select **Create a new public IP address** under **Public IP addresses**.
+
+1. Enter **myNATgatewayIP** in **Name** in **Add a public IP address**.
+
+1. Select **OK**.
+
+1. Select the blue **Review + create** button at the bottom of the page, or select the **Review + create** tab.
+
+1. Select **Create**.
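For reference, here's a sketch of the same NAT gateway deployment with Azure PowerShell (Az.Network module), reusing the names from this quickstart:

```powershell
# Sketch: create the quickstart's public IP and NAT gateway by script.
$pip = New-AzPublicIpAddress -Name 'myNATgatewayIP' -ResourceGroupName 'CreateIntLBQS-rg' `
    -Location 'eastus' -Sku Standard -AllocationMethod Static

New-AzNatGateway -Name 'myNATgateway' -ResourceGroupName 'CreateIntLBQS-rg' `
    -Location 'eastus' -Sku Standard -IdleTimeoutInMinutes 15 -PublicIpAddress $pip
```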
+ ## Create the virtual network When you create an internal load balancer, a virtual network is configured as the network for the load balancer.
A private IP address in the virtual network is configured as the frontend for th
An Azure Bastion host is created to securely manage the virtual machines and install IIS.
-In this section, you'll create a virtual network, subnet, and Azure Bastion host.
+In this section, you create a virtual network, subnet, and Azure Bastion host.
1. In the search box at the top of the portal, enter **Virtual network**. Select **Virtual Networks** in the search results.
-2. In **Virtual networks**, select **+ Create**.
+1. In **Virtual networks**, select **+ Create**.
-3. In **Create virtual network**, enter or select this information in the **Basics** tab:
+1. In **Create virtual network**, enter or select this information in the **Basics** tab:
| **Setting** | **Value** | ||--| | **Project Details** | | | Subscription | Select your Azure subscription |
- | Resource Group | Select **Create new**. </br> In **Name** enter **CreateIntLBQS-rg**. </br> Select **OK**. |
+ | Resource Group | Select **CreateIntLBQS-rg**. |
| **Instance details** | | | Name | Enter **myVNet** |
- | Region | Select **West US 3** |
+ | Region | Select **East US** |
-4. Select the **IP Addresses** tab or select the **Next: IP Addresses** button at the bottom of the page.
+1. Select the **IP Addresses** tab or select the **Next** button at the bottom of the page.
-5. In the **IP Addresses** tab, enter this information:
+1. In the **IP Addresses** tab, enter this information:
| Setting | Value | |--|-| | IPv4 address space | Enter **10.1.0.0/16** |
-6. Under **Subnet name**, select the word **default**.
+1. Under **Subnets**, select the word **default**.
-7. In **Edit subnet**, enter this information:
+1. In **Edit subnet**, enter this information:
| Setting | Value | |--|-| | Subnet name | Enter **myBackendSubnet** | | Subnet address range | Enter **10.1.0.0/24** |
+ | **Security** | |
+ | NAT Gateway | Select **myNATgateway**. |
-8. Select **Save**.
+1. Select **Add**.
-9. Select the **Security** tab.
+1. Select the **Security** tab.
-10. Under **BastionHost**, select **Enable**. Enter this information:
+1. Under **BastionHost**, select **Enable**. Enter this information:
| Setting | Value | |--|-| | Bastion name | Enter **myBastionHost** |
- | AzureBastionSubnet address space | Enter **10.1.1.0/27** |
- | Public IP Address | Select **Create new**. </br> For **Name**, enter **myBastionIP**. </br> Select **OK**. |
+ | AzureBastionSubnet address space | Enter **10.1.1.0/26** |
+ | Public IP Address | Select **Create new**. </br> Enter **myBastionIP** in Name. </br> Select **OK**. |
> [!IMPORTANT]- > [!INCLUDE [Pricing](../../includes/bastion-pricing.md)]
- >
-
-11. Select the **Review + create** tab or select the **Review + create** button.
+1. Select the **Review + create** tab or select the **Review + create** button.
-12. Select **Create**.
+1. Select **Create**.
> [!NOTE]
In this section, you'll create a virtual network, subnet, and Azure Bastion host
In this section, you create a load balancer that load balances virtual machines.
-During the creation of the load balancer, you'll configure:
+During the creation of the load balancer, you configure:
-* Frontend IP address
-* Backend pool
-* Inbound load-balancing rules
+- Frontend IP address
+- Backend pool
+- Inbound load-balancing rules
1. In the search box at the top of the portal, enter **Load balancer**. Select **Load balancers** in the search results.
-2. In the **Load balancer** page, select **Create**.
+1. In the **Load balancer** page, select **Create**.
-3. In the **Basics** tab of the **Create load balancer** page, enter, or select the following information:
+1. In the **Basics** tab of the **Create load balancer** page, enter, or select the following information:
- | Setting | Value |
- | | |
+ | Setting | Value |
+ | | |
| **Project details** | | | Subscription | Select your subscription. | | Resource group | Select **CreateIntLBQS-rg**. | | **Instance details** | |
- | Name | Enter **myLoadBalancer** |
- | Region | Select **West US 3**. |
+ | Name | Enter **myLoadBalancer** |
+ | Region | Select **East US**. |
| SKU | Leave the default **Standard**. |
- | Type | Select **Internal**. |
+ | Type | Select **Internal**. |
| Tier | Leave the default of **Regional**. | - :::image type="content" source="./media/quickstart-load-balancer-standard-internal-portal/create-standard-internal-load-balancer.png" alt-text="Screenshot of create standard load balancer basics tab." border="true":::
-4. Select **Next: Frontend IP configuration** at the bottom of the page.
-
-5. In **Frontend IP configuration**, select **+ Add a frontend IP configuration**.
-
-6. Enter **myFrontend** in **Name**.
-
-7. Select **myBackendSubnet** in **Subnet**.
-
-8. Select **Dynamic** for **Assignment**.
-
-9. Select **Zone-redundant** in **Availability zone**.
-
-10. Select **Add**.
-
-11. Select **Next: Backend pools** at the bottom of the page.
-
-12. In the **Backend pools** tab, select **+ Add a backend pool**.
-
-13. Enter **myBackendPool** for **Name** in **Add backend pool**.
+1. Select **Next: Frontend IP configuration** at the bottom of the page.
-14. Select **NIC** or **IP Address** for **Backend Pool Configuration**.
+1. In **Frontend IP configuration**, select **+ Add a frontend IP configuration**, then enter or select the following information:
-15. Select **IPv4** or **IPv6** for **IP version**.
-
-16. Select **Add**.
-
-17. Select the **Next: Inbound rules** button at the bottom of the page.
-
-18. In **Load balancing rule** in the **Inbound rules** tab, select **+ Add a load balancing rule**.
-
-19. In **Add load balancing rule**, enter or select the following information:
+ | Setting | Value |
+ | - | -- |
+ | Name | Enter **myFrontend** |
+ | Private IP address version | Select **IPv4** or **IPv6** depending on your requirements. |
| Setting | Value | | - | -- |
+ | Name | Enter **myFrontend** |
+ | Virtual network | Select **myVNet** |
+ | Subnet | Select **myBackendSubnet** |
+ | Assignment | Select **Dynamic** |
+ | Availability zone | Select **Zone-redundant** |
+
+1. Select **Add**.
+1. Select **Next: Backend pools** at the bottom of the page.
+1. In the **Backend pools** tab, select **+ Add a backend pool**.
+1. Enter **myBackendPool** for **Name** in **Add backend pool**.
+1. Select **IP Address** for **Backend Pool Configuration**.
+1. Select **Save**.
+1. Select the **Next: Inbound rules** button at the bottom of the page.
+1. In **Load balancing rule** in the **Inbound rules** tab, select **+ Add a load balancing rule**.
+1. In **Add load balancing rule**, enter or select the following information:
+
+ | **Setting** | **Value** |
+ | -- | |
| Name | Enter **myHTTPRule** | | IP Version | Select **IPv4** or **IPv6** depending on your requirements. | | Frontend IP address | Select **myFrontend**. |
During the creation of the load balancer, you'll configure:
| Health probe | Select **Create new**. </br> In **Name**, enter **myHealthProbe**. </br> Select **TCP** in **Protocol**. </br> Leave the rest of the defaults, and select **OK**. | | Session persistence | Select **None**. | | Idle timeout (minutes) | Enter or select **15**. |
- | TCP reset | Select **Enabled**. |
- | Floating IP | Select **Disabled**. |
-
-20. Select **Add**.
-
-21. Select the blue **Review + create** button at the bottom of the page.
-
-22. Select **Create**.
-
- > [!NOTE]
- > In this example you'll create a NAT gateway to provide outbound Internet access. The outbound rules tab in the configuration is bypassed and isn't needed with the NAT gateway. For more information on Azure NAT gateway, see [What is Azure Virtual Network NAT?](../virtual-network/nat-gateway/nat-overview.md)
- > For more information about outbound connections in Azure, see [Source Network Address Translation (SNAT) for outbound connections](../load-balancer/load-balancer-outbound-connections.md)
-
-## Create NAT gateway
-
-In this section, you'll create a NAT gateway for outbound internet access for resources in the virtual network.
-
-1. In the search box at the top of the portal, enter **NAT gateway**. Select **NAT gateways** in the search results.
-
-2. In **NAT gateways**, select **+ Create**.
+ | Enable TCP reset | Select the checkbox. |
+ | Enable Floating IP | Leave the default of unselected. |
-3. In **Create network address translation (NAT) gateway**, enter or select the following information:
+1. Select **Save**.
- | Setting | Value |
- | - | -- |
- | **Project details** | |
- | Subscription | Select your subscription. |
- | Resource group | Select **CreateIntLBQS-rg**. |
- | **Instance details** | |
- | NAT gateway name | Enter **myNATgateway**. |
- | Region | Select **West US 3**. |
- | Availability zone | Select **None**. |
- | Idle timeout (minutes) | Enter **15**. |
-
-4. Select the **Outbound IP** tab or select the **Next: Outbound IP** button at the bottom of the page.
+1. Select the blue **Review + create** button at the bottom of the page.
-5. In **Outbound IP**, select **Create a new public IP address** next to **Public IP addresses**.
-
-6. Enter **myNATgatewayIP** in **Name** in **Add a public IP address**.
-
-7. Select **OK**.
-
-8. Select the **Subnet** tab or select the **Next: Subnet** button at the bottom of the page.
-
-9. In **Virtual network**, select **myVNet**.
-
-10. Select **myBackendSubnet** under **Subnet name**.
-
-11. Select the blue **Review + create** button at the bottom of the page, or select the **Review + create** tab.
-
-12. Select **Create**.
+1. Select **Create**.
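
For readers who script their deployments, the following is a minimal Azure PowerShell sketch of the probe and rule configured in the steps above. The load balancer name is an assumption based on this quickstart's naming, and the frontend and backend pool are assumed to already exist.

```azurepowershell-interactive
# Sketch only: add the TCP health probe and the HTTP load balancing rule,
# then write the updated configuration back to the load balancer.
$lb = Get-AzLoadBalancer -Name 'myLoadBalancer' -ResourceGroupName 'CreateIntLBQS-rg'

# TCP health probe on port 80, matching the portal defaults above.
$lb = $lb | Add-AzLoadBalancerProbeConfig -Name 'myHealthProbe' -Protocol Tcp -Port 80 -IntervalInSeconds 15 -ProbeCount 2

# HTTP rule with a 15-minute idle timeout and TCP reset enabled.
$lb = $lb | Add-AzLoadBalancerRuleConfig -Name 'myHTTPRule' -Protocol Tcp `
    -FrontendPort 80 -BackendPort 80 -IdleTimeoutInMinutes 15 -EnableTcpReset `
    -FrontendIpConfiguration $lb.FrontendIpConfigurations[0] `
    -BackendAddressPool $lb.BackendAddressPools[0] `
    -Probe ($lb.Probes | Where-Object Name -eq 'myHealthProbe')

# No change takes effect until the updated object is committed.
$lb | Set-AzLoadBalancer
```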
## Create virtual machines
-In this section, you'll create two VMs (**myVM1** and **myVM2**) in two different zones (**Zone 1** and **Zone 2**).
+In this section, you create two VMs (**myVM1** and **myVM2**) in two different zones (**Zone 1** and **Zone 2**).
These VMs are added to the backend pool of the load balancer that was created earlier.
These VMs are added to the backend pool of the load balancer that was created ea
| Resource Group | Select **CreateIntLBQS-rg** |
| **Instance details** |  |
| Virtual machine name | Enter **myVM1** |
- | Region | Select **(US) West US 3** |
+ | Region | Select **(US) East US** |
| Availability Options | Select **Availability zones** |
| Availability zone | Select **1** |
| Security type | Select **Standard**. |
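
As a hedged aside for script-based setups, a zonal VM like **myVM1** could be created with Azure PowerShell roughly as follows; the size and the networking defaults are illustrative assumptions, not steps from this quickstart.

```azurepowershell-interactive
# Sketch only: create myVM1 in availability zone 1 of the region used above.
# The VM size is an illustrative assumption; New-AzVM fills in image and NIC defaults.
$cred = Get-Credential   # administrator account for the VM

New-AzVM -ResourceGroupName 'CreateIntLBQS-rg' -Name 'myVM1' -Location 'eastus' `
    -VirtualNetworkName 'myVNet' -SubnetName 'myBackendSubnet' `
    -Zone 1 -Size 'Standard_DS1_v2' -Credential $cred
```

Repeating the command with **myVM2** and `-Zone 2` creates the second VM in the other zone.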
These VMs are added to the backend pool of the load balancer that was created ea
## Create test virtual machine
-In this section, you'll create a VM named **myTestVM**. This VM will be used to test the load balancer configuration.
+In this section, you create a VM named **myTestVM**. This VM is used to test the load balancer configuration.
1. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines** in the search results.
In this section, you'll create a VM named **myTestVM**. This VM will be used to
| Resource Group | Select **CreateIntLBQS-rg** |
| **Instance details** |  |
| Virtual machine name | Enter **myTestVM** |
- | Region | Select **(US) West US 3** |
+ | Region | Select **(US) East US** |
| Availability Options | Select **No infrastructure redundancy required** |
| Security type | Select **Standard**. |
- | Image | Select **Windows Server 2019 Datacenter - Gen2** |
+ | Image | Select **Windows Server 2022 Datacenter - x64 Gen2** |
| Azure Spot instance | Leave the default of unselected. |
| Size | Choose VM size or take default setting |
| **Administrator account** |  |
In this section, you'll create a VM named **myTestVM**. This VM will be used to
## Test the load balancer
-In this section, you'll test the load balancer by connecting to the **myTestVM** and verifying the webpage.
+In this section, you test the load balancer by connecting to the **myTestVM** and verifying the webpage.
1. In the search box at the top of the portal, enter **Load balancer**. Select **Load balancers** in the search results.
In this section, you'll test the load balancer by connecting to the **myTestVM**
7. Enter the username and password entered during VM creation.
-8. Open **Internet Explorer** on **myTestVM**.
+8. Open **Microsoft Edge** on **myTestVM**.
9. Enter the IP address from the previous step into the address bar of the browser. The custom page displaying one of the backend server names is displayed on the browser. In this example, it's **10.1.0.4**.

    :::image type="content" source="./media/quickstart-load-balancer-standard-internal-portal/load-balancer-test.png" alt-text="Screenshot shows a browser window displaying the customized page, as expected." border="true":::
-To see the load balancer distribute traffic across both VMs, you can force-refresh your web browser from the client machine.
+1. To see the load balancer distribute traffic across both VMs, navigate to the VM shown in the browser message, and stop the VM.
+1. Refresh the browser window. The page should still display the customized page. The load balancer is now only sending traffic to the remaining VM.
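
If you want to confirm which frontend IP address to browse to, a short Azure PowerShell query (load balancer name assumed from this quickstart) reads it back:

```azurepowershell-interactive
# Look up the internal load balancer's private frontend IP (assumed names).
$lb = Get-AzLoadBalancer -Name 'myLoadBalancer' -ResourceGroupName 'CreateIntLBQS-rg'
$lb.FrontendIpConfigurations[0].PrivateIpAddress
```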
## Clean up resources
When no longer needed, delete the resource group, load balancer, and all related
In this quickstart, you:
-* Created an internal Azure Load Balancer
+- Created an internal Azure Load Balancer
-* Attached 2 VMs to the load balancer
+- Attached 2 VMs to the load balancer
-* Configured the load balancer traffic rule, health probe, and then tested the load balancer
+- Configured the load balancer traffic rule, health probe, and then tested the load balancer
To learn more about Azure Load Balancer, continue to:

> [!div class="nextstepaction"]
load-balancer Quickstart Load Balancer Standard Internal Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/quickstart-load-balancer-standard-internal-powershell.md
$gwpublicip = New-AzPublicIpAddress @gwpublicip
* Use [New-AzVirtualNetworkSubnetConfig](/powershell/module/az.network/new-azvirtualnetworksubnetconfig) to associate the NAT gateway to the subnet of the virtual network
- > [!IMPORTANT]
-
- > [!INCLUDE [Pricing](../../includes/bastion-pricing.md)]
-
- >
+> [!IMPORTANT]
+> [!INCLUDE [Pricing](../../includes/bastion-pricing.md)]
```azurepowershell-interactive
load-balancer Quickstart Load Balancer Standard Public Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/quickstart-load-balancer-standard-public-cli.md
Create a network security group rule using [az network nsg rule create](/cli/azu
In this section, you'll create the resources for Azure Bastion. Azure Bastion is used to securely manage the virtual machines in the backend pool of the load balancer.

> [!IMPORTANT]
-
> [!INCLUDE [Pricing](../../includes/bastion-pricing.md)]
->
-
### Create a public IP address

Use [az network public-ip create](/cli/azure/network/public-ip#az-network-public-ip-create) to create a public ip address for the bastion host. The public IP is used by the bastion host for secure access to the virtual machine resources.
load-balancer Quickstart Load Balancer Standard Public Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/quickstart-load-balancer-standard-public-powershell.md
Use a NAT gateway to provide outbound internet access to resources in the backen
* Use [New-AzVirtualNetworkSubnetConfig](/powershell/module/az.network/new-azvirtualnetworksubnetconfig) to associate the NAT gateway to the subnet of the virtual network

> [!IMPORTANT]
-
> [!INCLUDE [Pricing](../../includes/bastion-pricing.md)]
->
-
```azurepowershell-interactive
## Create public IP address for NAT gateway ##
$ip = @{
```
load-balancer Troubleshoot Outbound Connection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/troubleshoot-outbound-connection.md
- Title: Troubleshoot common outbound connectivity issues with Azure Load Balancer
-description: In this article, learn to troubleshoot for common problems with outbound connectivity with Azure Load Balancer. This includes most common issues of SNAT exhaustion and connection timeouts.
+ Title: Troubleshoot Azure Load Balancer outbound connectivity issues
+description: Learn troubleshooting guidance for outbound connections in Azure Load Balancer. This includes issues of SNAT exhaustion and connection timeouts.
Previously updated : 05/22/2023 Last updated : 08/24/2023
-# Troubleshoot common outbound connectivity issues with Azure Load Balancer
+# Troubleshoot Azure Load Balancer outbound connectivity issues
-This article provides troubleshooting guidance for common problems that can occur with outbound connections from an Azure Load Balancer. Most problems with outbound connectivity that customers experience is due to source network address translation (SNAT) port exhaustion and connection timeouts leading to dropped packets.
+Learn troubleshooting guidance for outbound connections in Azure Load Balancer. This includes understanding source network address translation (SNAT) and its impact on connections, using individual public IPs on VMs, and designing applications for connection efficiency to avoid SNAT port exhaustion. Most problems with outbound connectivity that customers experience are due to SNAT port exhaustion and connection timeouts leading to dropped packets.
To learn more about SNAT ports, see [Source Network Address Translation for outbound connections](load-balancer-outbound-connections.md).
Follow [Standard load balancer diagnostics with metrics, alerts, and resource he
It's important to optimize your Azure deployments for outbound connectivity. Optimization can prevent or alleviate issues with outbound connectivity.
-### Use a NAT gateway for outbound connectivity to the Internet
+### Deploy NAT gateway for outbound Internet connectivity
Azure NAT Gateway is a highly resilient and scalable Azure service that provides outbound connectivity to the internet from your virtual network. A NAT gateway's unique method of consuming SNAT ports helps resolve common SNAT exhaustion and connection issues. For more information about Azure NAT Gateway, see [What is Azure NAT Gateway?](../virtual-network/nat-gateway/nat-overview.md).
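
As a rough sketch of that recommendation, the following Azure PowerShell creates a NAT gateway and attaches it to a subnet; every resource name and the region are placeholders, not values from this article.

```azurepowershell-interactive
# Placeholder names throughout; adjust to your environment.
$pip = New-AzPublicIpAddress -Name 'myNATgatewayIP' -ResourceGroupName 'myResourceGroup' `
    -Location 'eastus' -Sku Standard -AllocationMethod Static

$natGw = New-AzNatGateway -Name 'myNATgateway' -ResourceGroupName 'myResourceGroup' `
    -Location 'eastus' -Sku Standard -IdleTimeoutInMinutes 10 -PublicIpAddress $pip

# Attach the NAT gateway to the subnet so outbound flows from the subnet use it.
$vnet   = Get-AzVirtualNetwork -Name 'myVNet' -ResourceGroupName 'myResourceGroup'
$subnet = Get-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name 'myBackendSubnet'
Set-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name 'myBackendSubnet' `
    -AddressPrefix $subnet.AddressPrefix -NatGateway $natGw | Set-AzVirtualNetwork
```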
To learn more about default outbound access and default port allocation, see [So
To increase the number of available SNAT ports per VM, configure outbound rules with manual port allocation on your load balancer. For example, if you know you have a maximum of 10 VMs in your backend pool, you can allocate up to 6,400 SNAT ports per VM rather than the default 1,024. If you need more SNAT ports, you can add multiple frontend IP addresses for outbound connections to multiply the number of SNAT ports available. Make sure you understand why you're exhausting SNAT ports before adding more frontend IP addresses.
-For detailed guidance, see [Design your applications to use connections efficiently](#design-your-applications-to-use-connections-efficiently) later in this article. To add more IP addresses for outbound connections, create a frontend IP configuration for each new IP. When outbound rules are configured, you're able to select multiple frontend IP configurations for a backend pool. It's recommended to use different IP addresses for inbound and outbound connectivity. Different IP addresses isolate traffic for improved monitoring and troubleshooting.
+For detailed guidance, see [Design your applications to use connections efficiently](#design-connection-efficient-applications) later in this article. To add more IP addresses for outbound connections, create a frontend IP configuration for each new IP. When outbound rules are configured, you're able to select multiple frontend IP configurations for a backend pool. It's recommended to use different IP addresses for inbound and outbound connectivity. Different IP addresses isolate traffic for improved monitoring and troubleshooting.
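
To make the manual allocation concrete, here is a hedged Azure PowerShell sketch of an outbound rule granting 6,400 SNAT ports per backend instance, matching the 10-VM example above; the resource names are placeholders.

```azurepowershell-interactive
# Sketch: outbound rule with manual SNAT port allocation (placeholder names).
$lb = Get-AzLoadBalancer -Name 'myLoadBalancer' -ResourceGroupName 'myResourceGroup'

$lb | Add-AzLoadBalancerOutboundRuleConfig -Name 'myOutboundRule' -Protocol All `
    -AllocatedOutboundPort 6400 -IdleTimeoutInMinutes 15 `
    -FrontendIpConfiguration $lb.FrontendIpConfigurations[0] `
    -BackendAddressPool $lb.BackendAddressPools[0] |
    Set-AzLoadBalancer
```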
### Configure an individual public IP on VM
We highly recommend considering utilizing NAT gateway instead, as assigning indi
> >Private Link is the recommended option over service endpoints for private access to Azure hosted services. For more information on the difference between Private Link and service endpoints, see [Compare Private Endpoints and Service Endpoints](../virtual-network/vnet-integration-for-azure-services.md#compare-private-endpoints-and-service-endpoints).
-## Design your applications to use connections efficiently
+## Design connection-efficient applications
When you design your applications, ensure they use connections efficiently. Connection efficiency can reduce or eliminate SNAT port exhaustion in your deployed applications.
load-testing Concept Azure Load Testing Vnet Injection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/concept-azure-load-testing-vnet-injection.md
Previously updated : 08/03/2022 Last updated : 08/22/2023

# Scenarios for deploying Azure Load Testing in a virtual network
-In this article, you'll learn about the scenarios for deploying Azure Load Testing in a virtual network (VNET). This deployment is sometimes called VNET injection.
+In this article, you learn about the scenarios for deploying Azure Load Testing in a virtual network (VNET). This deployment is sometimes called VNET injection.
This functionality enables the following usage scenarios:
load-testing How To Configure Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-configure-customer-managed-keys.md
Azure Load Testing uses the customer-managed key to encrypt the following data i
- Once customer-managed key encryption is enabled on a resource, it can't be disabled.
+- If the customer-managed key is stored in an Azure Key Vault behind a firewall, public access should be enabled on the firewall to allow Azure Load Testing to access the key.
+
## Configure your Azure key vault

To use customer-managed encryption keys with Azure Load Testing, you need to store the key in Azure Key Vault. You can use an existing or create a new key vault. The load testing resource and key vault may be in different regions or subscriptions in the same tenant.
load-testing How To High Scale Load https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-high-scale-load.md
- Title: Configure Azure Load Testing for high-scale load tests
+ Title: Configure high-scale load tests
-description: Learn how to configure Azure Load Testing to run high-scale load tests by simulating large amounts of virtual users.
+description: Learn how to configure test engine instances in Azure Load Testing to run high-scale load tests. Monitor engine health metrics to find an optimal configuration for your load test.
Previously updated : 07/18/2022 Last updated : 08/22/2023

# Configure Azure Load Testing for high-scale load
-In this article, learn how to set up a load test for high-scale load with Azure Load Testing.
-
-Configure multiple test engine instances to scale out the number of virtual users for your load test and simulate a high number of requests per second. To achieve an optimal load distribution, you can monitor the test instance health metrics in the Azure Load Testing dashboard.
+In this article, you learn how to configure your load test for high-scale with Azure Load Testing. Configure multiple test engine instances to scale out the number of virtual users for your load test and simulate a high number of requests per second. To achieve an optimal load distribution, you can monitor the test instance health metrics in the Azure Load Testing dashboard.
## Prerequisites

- An Azure account with an active subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.

-- An existing Azure Load Testing resource. To create an Azure Load Testing resource, see the quickstart [Create and run a load test](./quickstart-create-and-run-load-test.md).
+- An existing Azure load testing resource. To create an Azure load testing resource, see the quickstart [Create and run a load test](./quickstart-create-and-run-load-test.md).
## Determine requests per second
To achieve a target number of requests per second, configure the total number of
## Test engine instances and virtual users
-In the Apache JMeter script, you can specify the number of parallel threads. Each thread represents a virtual user that accesses the application endpoint in parallel. We recommend that you keep the number of threads in a script below a maximum of 250.
+In the Apache JMeter script, you can specify the number of parallel threads. Each thread represents a virtual user that accesses the application endpoint. We recommend that you keep the number of threads in a script below a maximum of 250.
-In Azure Load Testing, *test engine* instances are responsible for running the Apache JMeter script. You can configure the number of instances for a load test. All test engine instances run in parallel.
+In Azure Load Testing, *test engine* instances are responsible for running the Apache JMeter script. All test engine instances run in parallel. You can configure the number of instances for a load test.
The total number of virtual users for a load test is then: VUs = (# threads) * (# test engine instances).
For example, to simulate 1,000 virtual users, set the number of threads in the A
The location of the Azure Load Testing resource determines the location of the test engine instances. All test engine instances within a Load Testing resource are hosted in the same Azure region.
-## Configure your test plan
+## Configure test engine instances
+
+You can specify the number of test engine instances for each test. Your test script runs in parallel across each of these instances to simulate load to your application.
+
+To configure the number of instances for a test:
-In this section, you configure the scaling settings of your load test.
+# [Azure portal](#tab/portal)
1. Sign in to the [Azure portal](https://portal.azure.com) by using the credentials for your Azure subscription.
In this section, you configure the scaling settings of your load test.
1. Select **Apply** to modify the test and use the new configuration when you rerun it.
+# [Azure Pipelines / GitHub Actions](#tab/pipelines+github)
+
+For CI/CD workflows, you configure the number of engine instances in the [YAML test configuration file](./reference-test-config-yaml.md). You store the load test configuration file alongside the JMeter test script file in the source control repository.
+
+1. Open the YAML test configuration file for your load test in your editor of choice.
+
+1. Configure the number of test engine instances in the `engineInstances` setting.
+
+ The following example configures a load test that runs across 10 parallel test engine instances.
+
+ ```yaml
+ version: v0.1
+ testId: SampleTestCICD
+ displayName: Sample test from CI/CD
+ testPlan: SampleTest.jmx
+ description: Load test website home page
+ engineInstances: 10
+ ```
+
+1. Save the YAML configuration file, and commit the changes to source control.
+
+---
+
## Monitor engine instance metrics
-To make sure that the test engine instances themselves aren't a performance bottleneck, you can monitor resource metrics of the test engine instance. A high resource usage for a test instance might negatively influence the results of the load test.
+To make sure that the test engine instances themselves aren't a performance bottleneck, you can monitor resource metrics of the test engine instance. A high resource usage for a test instance might negatively influence the results of the load test.
Azure Load Testing reports four resource metrics for each instance:
To view the engine resource metrics:
### Troubleshoot unhealthy engine instances
-If one or multiple instances show a high resource usage, it could impact the test results. To resolve the issue, try one or more of the following steps:
+If one or multiple instances show a high resource usage, it could affect the test results. To resolve the issue, try one or more of the following steps:
- Reduce the number of threads (virtual users) per test engine. To achieve a target number of virtual users, you might increase the number of engine instances for the load test.
- Ensure that your script is effective, with no redundant code.
-- If the engine health status is unknown, re-run the test.
+- If the engine health status is unknown, rerun the test.
## Next steps
load-testing How To Test Private Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-test-private-endpoint.md
Last updated 05/12/2023
-
# Test private endpoints by deploying Azure Load Testing in an Azure virtual network
When you start the load test, Azure Load Testing service injects the following A
These resources are ephemeral and exist only during the load test run. If you restrict access to your virtual network, you need to [configure your virtual network](#configure-virtual-network) to enable communication between Azure Load Testing and the injected VMs.
-> [!NOTE]
-> Virtual network support for Azure Load Testing is available in the following Azure regions: Australia East, East Asia, East US, East US 2, North Europe, South Central US, Sweden Central, UK South, West Europe, West US 2 and West US 3.
->
-
## Prerequisites

- Your Azure account has the [Network Contributor](/azure/role-based-access-control/built-in-roles#network-contributor) role, or a parent of this role, on the virtual network. See [Check access for a user to Azure resources](/azure/role-based-access-control/check-access) to verify your permissions.
Follow these steps to [update the subnet settings](/azure/virtual-network/virtua
### Starting the load test fails with `Management Lock is enabled on Resource Group of VNET (ALTVNET015)`

If there is a lock on the resource group that contains the virtual network, the service can't inject the test engine virtual machines in your virtual network. Remove the management lock before running the load test. Learn how to [configure locks in the Azure portal](/azure/azure-resource-manager/management/lock-resources?tabs=json#configure-locks).
-
+
+### Starting the load test fails with `Insufficient public IP address quota in VNET subscription (ALTVNET016)`
+
+When you start the load test, Azure Load Testing injects the following Azure resources in the virtual network that contains the application endpoint:
+
+- The test engine virtual machines. These VMs invoke your application endpoint during the load test.
+- A public IP address.
+- A network security group (NSG).
+- An Azure Load Balancer.
+
+Ensure that you have quota for at least one public IP address available in your subscription to use in the load test.
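
One way to check that quota, sketched with Azure PowerShell (the region is a placeholder; use the region of your virtual network):

```azurepowershell-interactive
# Compare current public IP usage against the subscription limit.
Get-AzNetworkUsage -Location 'eastus' |
    Where-Object { $_.Name.Value -like '*PublicIPAddresses*' } |
    Select-Object @{ n = 'Used'; e = { $_.CurrentValue } }, Limit
```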
+
+### Starting the load test fails with `Subnet with name "AzureFirewallSubnet" cannot be used for load testing (ALTVNET017)`
+
+The subnet *AzureFirewallSubnet* is reserved and you can't use it for Azure Load Testing. Select another subnet for your load test.
+
## Next steps

- Learn more about the [scenarios for deploying Azure Load Testing in a virtual network](./concept-azure-load-testing-vnet-injection.md).
logic-apps Add Artifacts Integration Service Environment Ise https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/add-artifacts-integration-service-environment-ise.md
ms.suite: integration
Previously updated : 11/04/2022 Last updated : 08/23/2023

# Add resources to your integration service environment (ISE) in Azure Logic Apps
After you create an [integration service environment (ISE)](../logic-apps/connec
* An Azure account and subscription. If you don't have an Azure subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-* The ISE that you created to run your logic apps. If you don't have an ISE, [create an ISE first](../logic-apps/connect-virtual-network-vnet-isolated-environment.md).
+* The ISE that you created to run your Consumption logic app workflows.
* To create, add, or update resources that are deployed to an ISE, you need to be assigned the Owner or Contributor role on that ISE, or have permissions inherited through the Azure subscription or Azure resource group associated with the ISE. People who don't have owner, contributor, or inherited permissions can be assigned the Integration Service Environment Contributor role or Integration Service Environment Developer role. For more information, see [What is Azure role-based access control (Azure RBAC)](../role-based-access-control/overview.md)?
logic-apps Connect Virtual Network Vnet Isolated Environment Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/connect-virtual-network-vnet-isolated-environment-overview.md
ms.suite: integration
Previously updated : 11/04/2022 Last updated : 08/23/2023

# Access to Azure virtual networks from Azure Logic Apps using an integration service environment (ISE)
Last updated 11/04/2022
> On August 31, 2024, the ISE resource will retire, due to its dependency on Azure Cloud Services (classic),
> which retires at the same time. Before the retirement date, export any logic apps from your ISE to Standard
> logic apps so that you can avoid service disruption. Standard logic app workflows run in single-tenant Azure
-> Logic Apps and provide the same capabilities plus more.
->
-> Starting November 1, 2022, you can no longer create new ISE resources. However, ISE resources existing
-> before this date are supported through August 31, 2024. For more information, see the following resources:
->
-> - [ISE Retirement - what you need to know](https://techcommunity.microsoft.com/t5/integrations-on-azure-blog/ise-retirement-what-you-need-to-know/ba-p/3645220)
-> - [Single-tenant versus multi-tenant and integration service environment for Azure Logic Apps](single-tenant-overview-compare.md)
-> - [Azure Logic Apps pricing](https://azure.microsoft.com/pricing/details/logic-apps/)
-> - [Export ISE workflows to a Standard logic app](export-from-ise-to-standard-logic-app.md)
-> - [Integration Services Environment will be retired on 31 August 2024 - transition to Logic Apps Standard](https://azure.microsoft.com/updates/integration-services-environment-will-be-retired-on-31-august-2024-transition-to-logic-apps-standard/)
-> - [Cloud Services (classic) deployment model is retiring on 31 August 2024](https://azure.microsoft.com/updates/cloud-services-retirement-announcement/)
+> Logic Apps and provide the same capabilities plus more. For example, Standard workflows support using private
+> endpoints for inbound traffic so that your workflows can communicate privately and securely with virtual
+> networks. Standard workflows also support virtual network integration for outbound traffic. For more information,
+> review [Secure traffic between virtual networks and single-tenant Azure Logic Apps using private endpoints](secure-single-tenant-workflow-virtual-network-private-endpoint.md).
-Sometimes, your logic app workflows need access to protected resources, such as virtual machines (VMs) and other systems or services, that are inside or connected to an Azure virtual network. To directly access these resources from workflows that usually run in multi-tenant Azure Logic Apps, you can create and run your logic apps in an *integration service environment* (ISE) instead. An ISE is actually an instance of Azure Logic Apps that runs separately on dedicated resources, apart from the global multi-tenant Azure environment, and doesn't [store, process, or replicate data outside the region where you deploy the ISE](https://azure.microsoft.com/global-infrastructure/data-residency#select-geography).
+Since November 1, 2022, the capability to create new ISE resources is no longer available, which also means that the capability to set up your own encryption keys, known as "Bring Your Own Key" (BYOK), during ISE creation using the Logic Apps REST API is also no longer available. However, ISE resources existing before this date are supported through August 31, 2024.
-For example, some Azure virtual networks use private endpoints ([Azure Private Link](../private-link/private-link-overview.md)) for providing access to Azure PaaS services, such as Azure Storage, Azure Cosmos DB, or Azure SQL Database, partner services, or customer services that are hosted on Azure. If your logic app workflows require access to virtual networks that use private endpoints, you have these options:
+For more information, see the following resources:
-* If you want to develop workflows using the **Logic App (Consumption)** resource type, and your workflows need to use private endpoints, you *must* create, deploy, and run your logic apps in an ISE. For more information, review [Connect to Azure virtual networks from Azure Logic Apps using an integration service environment (ISE)](../logic-apps/connect-virtual-network-vnet-isolated-environment.md).
+- [ISE Retirement - what you need to know](https://techcommunity.microsoft.com/t5/integrations-on-azure-blog/ise-retirement-what-you-need-to-know/ba-p/3645220)
+- [Single-tenant versus multi-tenant and integration service environment for Azure Logic Apps](single-tenant-overview-compare.md)
+- [Azure Logic Apps pricing](https://azure.microsoft.com/pricing/details/logic-apps/)
+- [Export ISE workflows to a Standard logic app](export-from-ise-to-standard-logic-app.md)
+- [Integration Services Environment will be retired on 31 August 2024 - transition to Logic Apps Standard](https://azure.microsoft.com/updates/integration-services-environment-will-be-retired-on-31-august-2024-transition-to-logic-apps-standard/)
+- [Cloud Services (classic) deployment model is retiring on 31 August 2024](https://azure.microsoft.com/updates/cloud-services-retirement-announcement/)
-* If you want to develop workflows using the **Logic App (Standard)** resource type, and your workflows need to use private endpoints, you don't need an ISE. Instead, your workflows can communicate privately and securely with virtual networks by using private endpoints for inbound traffic and virtual network integration for outbound traffic. For more information, review [Secure traffic between virtual networks and single-tenant Azure Logic Apps using private endpoints](secure-single-tenant-workflow-virtual-network-private-endpoint.md).
+This overview provides more information about [how an ISE works with a virtual network](#how-ise-works), the [benefits to using an ISE](#benefits), the [differences between the dedicated and multi-tenant Logic Apps service](#difference), and how you can directly access resources that are inside or connected to your Azure virtual network.
-For more information, review the [differences between multi-tenant Azure Logic Apps and integration service environments](logic-apps-overview.md#resource-environment-differences).
+<a name="how-ise-works"></a>
## How an ISE works with a virtual network
-When you create an ISE, you select the Azure virtual network where you want Azure to *inject* or deploy your ISE. When you create logic apps and integration accounts that need access to this virtual network, you can select your ISE as the host location for those logic apps and integration accounts. Inside the ISE, logic apps run on dedicated resources separately from others in the multi-tenant Azure Logic Apps environment. Data in an ISE stays in the [same region where you create and deploy that ISE](https://azure.microsoft.com/global-infrastructure/data-residency/).
-
-![Select integration service environment](./media/connect-virtual-network-vnet-isolated-environment-overview/select-logic-app-integration-service-environment.png)
-
-For more control over the encryption keys used by Azure Storage, you can set up, use, and manage your own key by using [Azure Key Vault](../key-vault/general/overview.md). This capability is also known as "Bring Your Own Key" (BYOK), and your key is called a "customer-managed key". For more information, review [Set up customer-managed keys to encrypt data at rest for integration service environments (ISEs) in Azure Logic Apps](../logic-apps/customer-managed-keys-integration-service-environment.md).
+At ISE creation, you select the Azure virtual network where you want Azure to *inject* or deploy your ISE. When you create logic apps and integration accounts that need access to this virtual network, you can select your ISE as the host location for those logic apps and integration accounts. Inside the ISE, logic apps run on dedicated resources separately from others in the multi-tenant Azure Logic Apps environment. Data in an ISE stays in the [same region where you create and deploy that ISE](https://azure.microsoft.com/global-infrastructure/data-residency/).
-This overview provides more information about [why you'd want to use an ISE](#benefits), the [differences between the dedicated and multi-tenant Logic Apps service](#difference), and how you can directly access resources that are inside or connected your Azure virtual network.
+![Screenshot shows Azure portal with integration service environment selected.](./media/connect-virtual-network-vnet-isolated-environment-overview/select-logic-app-integration-service-environment.png)
<a name="benefits"></a>

## Why use an ISE
-Running logic apps in your own separate dedicated instance helps reduce the impact that other Azure tenants might have on your apps' performance, also known as the ["noisy neighbors" effect](https://en.wikipedia.org/wiki/Cloud_computing_issues#Performance_interference_and_noisy_neighbors). An ISE also provides these benefits:
+Running logic app workflows in your own separate dedicated instance helps reduce the impact that other Azure tenants might have on your apps' performance, also known as the ["noisy neighbors" effect](https://en.wikipedia.org/wiki/Cloud_computing_issues#Performance_interference_and_noisy_neighbors). An ISE also provides these benefits:
* Direct access to resources that are inside or connected to your virtual network
When you create and run logic apps in an ISE, you get the same user experiences
## Access to on-premises systems
-Logic apps that run inside an ISE can directly access on-premises systems and data sources that are inside or connected to an Azure virtual network by using these items:<p>
+Logic app workflows that run inside an ISE can directly access on-premises systems and data sources that are inside or connected to an Azure virtual network by using these items:<p>
* The HTTP trigger or action, which displays the **CORE** label
Logic apps that run inside an ISE can directly access on-premises systems and da
To access on-premises systems and data sources that don't have ISE connectors, are outside your virtual network, or aren't connected to your virtual network, you still have to use the on-premises data gateway. Logic apps within an ISE can continue using connectors that don't have the **CORE** or **ISE** label. Those connectors run in the multi-tenant Logic Apps service, rather than in your ISE.
+<a name="data-at-rest"></a>
+
+## Encrypted data at rest
+
+By default, Azure Storage uses Microsoft-managed keys to encrypt your data. Azure Logic Apps relies on Azure Storage to store and automatically [encrypt data at rest](../storage/common/storage-service-encryption.md). This encryption protects your data and helps you meet your organizational security and compliance commitments. For more information about how Azure Storage encryption works, see [Azure Storage encryption for data at rest](../storage/common/storage-service-encryption.md) and [Azure Data Encryption-at-Rest](../security/fundamentals/encryption-atrest.md).
+
+For more control over the encryption keys used by Azure Storage, ISE supports using and managing your own key through [Azure Key Vault](../key-vault/general/overview.md). This capability is also known as "Bring Your Own Key" (BYOK), and your key is called a "customer-managed key". However, this capability is available *only when you create your ISE*, not afterwards. You can't disable this key after your ISE is created. Currently, no support exists for rotating a customer-managed key for an ISE.
+
+* Customer-managed key support for an ISE is available only in the following regions:
+
+ * Azure: West US 2, East US, and South Central US.
+
+ * Azure Government: Arizona, Virginia, and Texas.
+
+* The key vault that stores your customer-managed key must exist in the same Azure region as your ISE.
+
+* To support customer-managed keys, your ISE requires that you enable either the [system-assigned or user-assigned managed identity](../active-directory/managed-identities-azure-resources/overview.md#managed-identity-types). This identity lets your ISE authenticate access to secured resources, such as virtual machines and other systems or services, that are in or connected to an Azure virtual network. That way, you don't have to sign in with your credentials.
+
+* You must give your key vault access to your ISE's managed identity (see the sketch after this list), but the timing depends on which managed identity you use.
+
+ * **System-assigned managed identity**: Within *30 minutes after* you send the HTTPS PUT request that creates your ISE. Otherwise, ISE creation fails, and you get a permissions error.
+
+ * **User-assigned managed identity**: Before you send the HTTPS PUT request that creates your ISE
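
As a minimal Azure PowerShell sketch of granting that access, assuming the vault uses the access policy permission model; the vault name and the identity's principal (object) ID are placeholders you must supply:

```azurepowershell-interactive
# Sketch: let the ISE's managed identity use the customer-managed key.
# '<ise-managed-identity-principal-id>' is a placeholder for the identity's object ID.
Set-AzKeyVaultAccessPolicy -VaultName 'myKeyVault' `
    -ObjectId '<ise-managed-identity-principal-id>' `
    -PermissionsToKeys get, unwrapKey, wrapKey
```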
+
<a name="ise-level"></a>

## ISE SKUs
When you create your ISE, you can select the Developer SKU or Premium SKU. This
> [!IMPORTANT]
> This SKU has no service-level agreement (SLA), scale up capability,
- > or redundancy during recycling, which means that you might experience delays or downtime. Backend updates might intermittently interrupt service.
+ > or redundancy during recycling, which means that you might experience
+ > delays or downtime. Backend updates might intermittently interrupt service.
For capacity and limits information, see [ISE limits in Azure Logic Apps](logic-apps-limits-and-config.md#integration-service-environment-ise). To learn how billing works for ISEs, see the [Logic Apps pricing model](../logic-apps/logic-apps-pricing.md#ise-pricing).
When you create your ISE, you can select the Developer SKU or Premium SKU. This
## ISE endpoint access
-When you create your ISE, you can choose to use either internal or external access endpoints. Your selection determines whether request or webhook triggers on logic apps in your ISE can receive calls from outside your virtual network. These endpoints also affect the way that you can access the inputs and outputs from your logic apps' runs history.
+During ISE creation, you can choose to use either internal or external access endpoints. Your selection determines whether request or webhook triggers on logic apps in your ISE can receive calls from outside your virtual network. These endpoints also affect the way that you can access the inputs and outputs from your logic apps' runs history.
> [!IMPORTANT]
> You can select the access endpoint only during ISE creation and can't change this option later.
-* **Internal**: Private endpoints permit calls to logic apps in your ISE where you can view and access inputs and outputs from logic apps' runs history *only from inside your virtual network*.
+* **Internal**: Private endpoints permit calls to logic apps in your ISE where you can view and access inputs and outputs from logic app workflow run history *only from inside your virtual network*.
> [!IMPORTANT]
> If you need to use these webhook-based triggers, and the service is outside your virtual network and
When you create your ISE, you can choose to use either internal or external acce
> * SAP (multi-tenant version)
>
> Also, make sure that you have network connectivity between the private endpoints and the computer from
- > where you want to access the run history. Otherwise, when you try to view your logic app's run history,
+ > where you want to access the run history. Otherwise, when you try to view your workflow's run history,
> you get an error that says "Unexpected error. Failed to fetch".
>
- > ![Azure Storage action error resulting from inability to send traffic through firewall](./media/connect-virtual-network-vnet-isolated-environment-overview/integration-service-environment-error.png)
+ > ![Screenshot shows Azure portal and Azure Storage action error resulting from inability to send traffic through firewall.](./media/connect-virtual-network-vnet-isolated-environment-overview/integration-service-environment-error.png)
> > For example, your client computer can exist inside the ISE's virtual network or inside a virtual network that's connected to the ISE's virtual network through peering or a virtual private network.
-* **External**: Public endpoints permit calls to logic apps in your ISE where you can view and access inputs and outputs from logic apps' runs history *from outside your virtual network*. If you use network security groups (NSGs), make sure they're set up with inbound rules to allow access to the run history's inputs and outputs. For more information, see [Enable access for ISE](../logic-apps/connect-virtual-network-vnet-isolated-environment.md#enable-access).
+* **External**: Public endpoints permit calls to logic app workflows in your ISE where you can view and access inputs and outputs from logic apps' runs history *from outside your virtual network*. If you use network security groups (NSGs), make sure they're set up with inbound rules to allow access to the run history's inputs and outputs.
To determine whether your ISE uses an internal or external access endpoint, on your ISE's menu, under **Settings**, select **Properties**, and find the **Access endpoint** property:
-![Find ISE access endpoint](./media/connect-virtual-network-vnet-isolated-environment-overview/find-ise-access-endpoint.png)
+![Screenshot shows Azure portal, ISE menu, with the options selected for Settings, Properties, and Access endpoint.](./media/connect-virtual-network-vnet-isolated-environment-overview/find-ise-access-endpoint.png)
<a name="pricing-model"></a>

## Pricing model
-Logic apps, built-in triggers, built-in actions, and connectors that run in your ISE use a fixed pricing plan that differs from the consumption-based pricing plan. For more information, see [Logic Apps pricing model](../logic-apps/logic-apps-pricing.md#ise-pricing). For pricing rates, see [Logic Apps pricing](https://azure.microsoft.com/pricing/details/logic-apps/).
+Logic apps, built-in triggers, built-in actions, and connectors that run in your ISE use a fixed pricing plan that differs from the Consumption pricing plan. For more information, see [Azure Logic Apps pricing model](../logic-apps/logic-apps-pricing.md#ise-pricing). For pricing rates, see [Azure Logic Apps pricing](https://azure.microsoft.com/pricing/details/logic-apps/).
<a name="create-integration-account-environment"></a>

## Integration accounts with ISE
-You can use integration accounts with logic apps inside an integration service environment (ISE). However, those integration accounts must use the *same ISE* as the linked logic apps. Logic apps in an ISE can reference only those integration accounts that are in the same ISE. When you create an integration account, you can select your ISE as the location for your integration account. To learn how pricing and billing work for integration accounts with an ISE, see the [Logic Apps pricing model](../logic-apps/logic-apps-pricing.md#ise-pricing). For pricing rates, see [Logic Apps pricing](https://azure.microsoft.com/pricing/details/logic-apps/). For limits information, see [Integration account limits](../logic-apps/logic-apps-limits-and-config.md#integration-account-limits).
+You can use integration accounts with logic apps inside an integration service environment (ISE). However, those integration accounts must use the *same ISE* as the linked logic apps. Logic apps in an ISE can reference only those integration accounts that are in the same ISE. When you create an integration account, you can select your ISE as the location for your integration account. To learn how pricing and billing work for integration accounts with an ISE, see the [Azure Logic Apps pricing model](../logic-apps/logic-apps-pricing.md#ise-pricing). For pricing rates, see [Azure Logic Apps pricing](https://azure.microsoft.com/pricing/details/logic-apps/). For limits information, see [Integration account limits](../logic-apps/logic-apps-limits-and-config.md#integration-account-limits).
## Next steps
-* [Connect to Azure virtual networks from Azure Logic Apps](../logic-apps/connect-virtual-network-vnet-isolated-environment.md)
-* Learn more about [Azure Virtual Network](../virtual-network/virtual-networks-overview.md)
-* Learn about [virtual network integration for Azure services](../virtual-network/virtual-network-for-azure-services.md)
+* [Export ISE workflows to a Standard logic app](export-from-ise-to-standard-logic-app.md)
logic-apps Connect Virtual Network Vnet Isolated Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/connect-virtual-network-vnet-isolated-environment.md
- Title: Connect to Azure virtual networks with an ISE
-description: Create an integration service environment (ISE) to access Azure virtual networks (VNETs) from Azure Logic Apps.
--- Previously updated : 11/04/2022--
-# Connect to Azure virtual networks from Azure Logic Apps using an integration service environment (ISE)
-
-> [!IMPORTANT]
->
-> On August 31, 2024, the ISE resource will retire, due to its dependency on Azure Cloud Services (classic),
-> which retires at the same time. Before the retirement date, export any logic apps from your ISE to Standard
-> logic apps so that you can avoid service disruption. Standard logic app workflows run in single-tenant Azure
-> Logic Apps and provide the same capabilities plus more.
->
-> Starting November 1, 2022, you can no longer create new ISE resources. However, ISE resources existing
-> before this date are supported through August 31, 2024. For more information, see the following resources:
->
-> - [ISE Retirement - what you need to know](https://techcommunity.microsoft.com/t5/integrations-on-azure-blog/ise-retirement-what-you-need-to-know/ba-p/3645220)
-> - [Single-tenant versus multi-tenant and integration service environment for Azure Logic Apps](single-tenant-overview-compare.md)
-> - [Azure Logic Apps pricing](https://azure.microsoft.com/pricing/details/logic-apps/)
-> - [Export ISE workflows to a Standard logic app](export-from-ise-to-standard-logic-app.md)
-> - [Integration Services Environment will be retired on 31 August 2024 - transition to Logic Apps Standard](https://azure.microsoft.com/updates/integration-services-environment-will-be-retired-on-31-august-2024-transition-to-logic-apps-standard/)
-> - [Cloud Services (classic) deployment model is retiring on 31 August 2024](https://azure.microsoft.com/updates/cloud-services-retirement-announcement/)
-
-For scenarios where Consumption logic app resources and integration accounts need access to an [Azure virtual network](../virtual-network/virtual-networks-overview.md), create an [*integration service environment* (ISE)](connect-virtual-network-vnet-isolated-environment-overview.md). An ISE is an environment that uses dedicated storage and other resources that are kept separate from the "global" multi-tenant Azure Logic Apps. This separation also reduces any impact that other Azure tenants might have on your apps' performance. An ISE also provides you with your own static IP addresses. These IP addresses are separate from the static IP addresses that are shared by the logic apps in the public, multi-tenant service.
-
-When you create an ISE, Azure *injects* that ISE into your Azure virtual network, which then deploys Azure Logic Apps into your virtual network. When you create a logic app or integration account, select your ISE as their location. Your logic app or integration account can then directly access resources, such as virtual machines (VMs), servers, systems, and services, in your virtual network.
-
-![Select integration service environment](./media/connect-virtual-network-vnet-isolated-environment/select-logic-app-integration-service-environment.png)
-
-> [!IMPORTANT]
-> For logic apps and integration accounts to work together in an ISE, both must use the *same ISE* as their location.
-
-An ISE has increased limits on:
-
-* Run duration
-* Storage retention
-* Throughput
-* HTTP request and response timeouts
-* Message sizes
-* Custom connector requests
-
-For more information, review [Limits and configuration for Azure Logic Apps](logic-apps-limits-and-config.md). To learn more about ISEs, review [Access to Azure Virtual Network resources from Azure Logic Apps](connect-virtual-network-vnet-isolated-environment-overview.md).
-
-This article shows you how to complete these tasks by using the Azure portal:
-
-* Enable access for your ISE.
-* Create your ISE.
-* Add extra capacity to your ISE.
-
-You can also create an ISE by using the [sample Azure Resource Manager quickstart template](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.logic/integration-service-environment) or by using the Azure Logic Apps REST API, including setting up customer-managed keys:
-
-* [Create an integration service environment (ISE) by using the Azure Logic Apps REST API](create-integration-service-environment-rest-api.md)
-* [Set up customer-managed keys to encrypt data at rest for ISEs](customer-managed-keys-integration-service-environment.md)
-
-## Prerequisites
-
-* An Azure account and subscription. If you don't have an Azure subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-
- > [!IMPORTANT]
- > Logic app workflows, built-in triggers, built-in actions, and connectors that run in your ISE use a pricing plan
- > that differs from the Consumption pricing plan. To learn how pricing and billing work for ISEs, review the
- > [Azure Apps pricing model](logic-apps-pricing.md#ise-pricing).
- > For pricing rates, review [Azure Apps pricing](logic-apps-pricing.md).
-
-* An [Azure virtual network](../virtual-network/virtual-networks-overview.md) that has four *empty* subnets, which are required for creating and deploying resources in your ISE and are used by these internal and hidden components:
-
- * Azure Logic Apps Compute
- * Internal App Service Environment (connectors)
- * Internal API Management (connectors)
- * Internal Redis for caching and performance
-
- You can create the subnets in advance or when you create your ISE so that you can create the subnets at the same time. However, before you create your subnets, make sure that you review the [subnet requirements](#create-subnet).
-
- * The Developer ISE SKU uses three subnets, but you still have to create four subnets. The fourth subnet doesn't incur any extra cost.
-
- * Make sure that your virtual network [enables access for your ISE](#enable-access) so that your ISE can work correctly and stay accessible.
-
- * If you use a [network virtual appliance (NVA)](../virtual-network/virtual-networks-udr-overview.md#user-defined), make sure that you don't enable TLS/SSL termination or change the outbound TLS/SSL traffic. Also, make sure that you don't enable inspection for traffic that originates from your ISE's subnet. For more information, review [Virtual network traffic routing](../virtual-network/virtual-networks-udr-overview.md).
-
- * If you want to use custom Domain Name System (DNS) servers for your Azure virtual network, [set up those servers by following these steps](../virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances.md) before you deploy your ISE to your virtual network. For more information about managing DNS server settings, review [Create, change, or delete a virtual network](../virtual-network/manage-virtual-network.md#change-dns-servers).
-
- > [!NOTE]
- > If you change your DNS server or DNS server settings, you must restart your ISE so that the ISE can pick up those changes. For more information, review [Restart your ISE](ise-manage-integration-service-environment.md#restart-ISE).
-
-<a name="enable-access"></a>
-
-## Enable access for ISE
-
-When you use an ISE with an Azure virtual network, a common setup problem is having one or more blocked ports. The connectors that you use for creating connections between your ISE and destination systems might also have their own port requirements. For example, if you communicate with an FTP system by using the FTP connector, the port that you use on your FTP system needs to be available, for example, port 21 for sending commands.
-
-To make sure that your ISE is accessible and that the logic apps in that ISE can communicate across each subnet in your virtual network, [open the ports described in this table for each subnet](#network-ports-for-ise). If any required ports are unavailable, your ISE won't work correctly.
-
-* If you have multiple ISE instances that need access to other endpoints that have IP restrictions, deploy an [Azure Firewall](../firewall/overview.md) or a [network virtual appliance](../virtual-network/virtual-networks-overview.md#filter-network-traffic) into your virtual network and route outbound traffic through that firewall or network virtual appliance. You can then [set up a single, outbound, public, static, and predictable IP address](connect-virtual-network-vnet-set-up-single-ip-address.md) that all the ISE instances in your virtual network can use to communicate with destination systems. That way, you don't have to set up extra firewall openings at those destination systems for each ISE.
-
- > [!NOTE]
- > You can use this approach for a single ISE when your scenario requires limiting the
- > number of IP addresses that need access. Consider whether the extra costs for
- > the firewall or virtual network appliance make sense for your scenario. Learn more about
- > [Azure Firewall pricing](https://azure.microsoft.com/pricing/details/azure-firewall/).
-
-* If you created a new Azure virtual network and subnets without any constraints, you don't need to set up [network security groups (NSGs)](../virtual-network/network-security-groups-overview.md#network-security-groups) in your virtual network to control traffic across subnets.
-
-* For an existing virtual network, you can *optionally* set up [network security groups (NSGs)](../virtual-network/network-security-groups-overview.md#network-security-groups) to [filter network traffic across subnets](../virtual-network/tutorial-filter-network-traffic.md). If you want to go this route, or if you're already using NSGs, make sure that you [open the ports described in this table](#network-ports-for-ise) for those NSGs.
-
- When you set up [NSG security rules](../virtual-network/network-security-groups-overview.md#security-rules), you need to use *both* the **TCP** and **UDP** protocols, or you can select **Any** instead so you don't have to create separate rules for each protocol. NSG security rules describe the ports that you must open for the IP addresses that need access to those ports. Make sure that any firewalls, routers, or other items that exist between these endpoints also keep those ports accessible to those IP addresses.
-
-* If you set up forced tunneling through your firewall to redirect Internet-bound traffic, review the [forced tunneling requirements](#forced-tunneling).
-
-<a name="network-ports-for-ise"></a>
-
-### Network ports used by your ISE
-
-This table describes the ports that your ISE requires to be accessible and the purpose for those ports. To help reduce complexity when you set up security rules, the table uses [service tags](../virtual-network/service-tags-overview.md) that represent groups of IP address prefixes for a specific Azure service. Where noted, *internal ISE* and *external ISE* refer to the [access endpoint that's selected during ISE creation](connect-virtual-network-vnet-isolated-environment.md#create-environment). For more information, review [Endpoint access](connect-virtual-network-vnet-isolated-environment-overview.md#endpoint-access).
-
-> [!IMPORTANT]
->
-> For all rules, make sure that you set source ports to `*` because source ports are ephemeral.
-
-#### Inbound security rules
-
-| Source ports | Destination ports | Source service tag or IP addresses | Destination service tag or IP addresses | Purpose | Notes |
-|--|-||--||-|
-| * | * | Address space for the virtual network with ISE subnets | Address space for the virtual network with ISE subnets | Intersubnet communication within virtual network. | Required for traffic to flow *between* the subnets in your virtual network. <br><br>**Important**: For traffic to flow between the *components* in each subnet, make sure that you open all the ports within each subnet. |
-| * | 443 | Internal ISE: <br>**VirtualNetwork** <br><br>External ISE: **Internet** or see **Notes** | **VirtualNetwork** | - Communication to your logic app <br><br>- Runs history for your logic app | Rather than use the **Internet** service tag, you can specify the source IP address for these items: <br><br>- The computer or service that calls any request triggers or webhooks in your logic app <br><br>- The computer or service from where you want to access logic app runs history <br><br>**Important**: Closing or blocking this port prevents calls to logic apps that have request triggers or webhooks. You're also prevented from accessing inputs and outputs for each step in runs history. However, you're not prevented from accessing logic app runs history. |
-| * | 454 | **LogicAppsManagement** |**VirtualNetwork** | Azure Logic Apps designer - dynamic properties| Requests come from the Azure Logic Apps access endpoint's [inbound IP addresses](logic-apps-limits-and-config.md#inbound) for that region. <br><br>**Important**: If you're working with Azure Government cloud, the **LogicAppsManagement** service tag won't work. Instead, you have to provide the Azure Logic Apps [inbound IP addresses](logic-apps-limits-and-config.md#azure-government-inbound) for Azure Government. |
-| * | 454 | **LogicApps** | **VirtualNetwork** | Network health check | Requests come from the Azure Logic Apps access endpoint's [inbound IP addresses](logic-apps-limits-and-config.md#inbound) and [outbound IP addresses](logic-apps-limits-and-config.md#outbound) for that region. <br><br>**Important**: If you're working with Azure Government cloud, the **LogicApps** service tag won't work. Instead, you have to provide both the Azure Logic Apps [inbound IP addresses](logic-apps-limits-and-config.md#azure-government-inbound) and [outbound IP addresses](logic-apps-limits-and-config.md#azure-government-outbound) for Azure Government. |
-| * | 454 | **AzureConnectors** | **VirtualNetwork** | Connector deployment | Required to deploy and update connectors. Closing or blocking this port causes ISE deployments to fail and prevents connector updates and fixes. <br><br>**Important**: If you're working with Azure Government cloud, the **AzureConnectors** service tag won't work. Instead, you have to provide the [managed connector outbound IP addresses](logic-apps-limits-and-config.md#azure-government-outbound) for Azure Government. |
-| * | 454, 455 | **AppServiceManagement** | **VirtualNetwork** | App Service Management dependency ||
-| * | Internal ISE: 454 <br><br>External ISE: 443 | **AzureTrafficManager** | **VirtualNetwork** | Communication from Azure Traffic Manager ||
-| * | 3443 | **APIManagement** | **VirtualNetwork** | Connector policy deployment <br><br>API Management - management endpoint | For connector policy deployment, port access is required to deploy and update connectors. Closing or blocking this port causes ISE deployments to fail and prevents connector updates and fixes. |
-| * | 6379 - 6383, plus see **Notes** | **VirtualNetwork** | **VirtualNetwork** | Access Azure Cache for Redis Instances between Role Instances | For ISE to work with Azure Cache for Redis, you must open these [outbound and inbound ports described by the Azure Cache for Redis FAQ](../azure-cache-for-redis/cache-how-to-premium-vnet.md#outbound-port-requirements). |
-
-#### Outbound security rules
-
-| Source ports | Destination ports | Source service tag or IP addresses | Destination service tag or IP addresses | Purpose | Notes |
-|---|---|---|---|---|---|
-| * | * | Address space for the virtual network with ISE subnets | Address space for the virtual network with ISE subnets | Intersubnet communication within virtual network | Required for traffic to flow *between* the subnets in your virtual network. <br><br>**Important**: For traffic to flow between the *components* in each subnet, make sure that you open all the ports within each subnet. |
-| * | 443, 80 | **VirtualNetwork** | Internet | Communication from your logic app | This rule is required for Secure Socket Layer (SSL) certificate verification. This check is for various internal and external sites, which is the reason that the Internet is required as the destination. |
-| * | Varies based on destination | **VirtualNetwork** | Varies based on destination | Communication from your logic app | Destination ports vary based on the endpoints for the external services with which your logic app needs to communicate. <br><br>For example, the destination port is port 25 for an SMTP service, port 22 for an SFTP service, and so on. |
-| * | 80, 443 | **VirtualNetwork** | **AzureActiveDirectory** | Azure Active Directory ||
-| * | 80, 443, 445 | **VirtualNetwork** | **Storage** | Azure Storage dependency ||
-| * | 443 | **VirtualNetwork** | **AppService** | Connection management ||
-| * | 443 | **VirtualNetwork** | **AzureMonitor** | Publish diagnostic logs & metrics ||
-| * | 1433 | **VirtualNetwork** | **SQL** | Azure SQL dependency ||
-| * | 1886 | **VirtualNetwork** | **AzureMonitor** | Azure Resource Health | Required for publishing health status to Resource Health. |
-| * | 5672 | **VirtualNetwork** | **EventHub** | Dependency from Log to Event Hubs policy and monitoring agent ||
-| * | 6379 - 6383, plus see **Notes** | **VirtualNetwork** | **VirtualNetwork** | Access Azure Cache for Redis Instances between Role Instances | For ISE to work with Azure Cache for Redis, you must open these [outbound and inbound ports described by the Azure Cache for Redis FAQ](../azure-cache-for-redis/cache-how-to-premium-vnet.md#outbound-port-requirements). |
-| * | 53 | **VirtualNetwork** | IP addresses for any custom Domain Name System (DNS) servers on your virtual network | DNS name resolution | Required only when you use custom DNS servers on your virtual network |
-
-In addition, you need to add outbound rules for [App Service Environment (ASE)](../app-service/environment/intro.md):
-
-* If you use Azure Firewall, you need to set up your firewall with the App Service Environment (ASE) [fully qualified domain name (FQDN) tag](../firewall/fqdn-tags.md#current-fqdn-tags), which permits outbound access to ASE platform traffic.
-
-* If you use a firewall appliance other than Azure Firewall, you need to set up your firewall with *all* the rules listed in the [firewall integration dependencies](../app-service/environment/firewall-integration.md#dependencies) that are required for App Service Environment.
-
-<a name="forced-tunneling"></a>
-
-#### Forced tunneling requirements
-
-If you set up or use [forced tunneling](../firewall/forced-tunneling.md) through your firewall, you have to permit extra external dependencies for your ISE. Forced tunneling lets you redirect Internet-bound traffic to a designated next hop, such as your virtual private network (VPN) or to a virtual appliance, rather than to the Internet so that you can inspect and audit outbound network traffic.
-
-If you don't permit access for these dependencies, your ISE deployment fails and your deployed ISE stops working.
-
-* User-defined routes
-
- To prevent asymmetric routing, you must define a route for each and every IP address that's listed below with **Internet** as the next hop.
-
- * [Azure Logic Apps inbound and outbound addresses for the ISE region](logic-apps-limits-and-config.md#firewall-configuration-ip-addresses-and-service-tags)
- * [Azure IP addresses for connectors in the ISE region, available in this download file](https://www.microsoft.com/download/details.aspx?id=56519)
- * [App Service Environment management addresses](../app-service/environment/management-addresses.md)
- * [Azure Traffic Manager management addresses](https://azuretrafficmanagerdata.blob.core.windows.net/probes/azure/probe-ip-ranges.json)
- * [Azure API Management Control Plane IP addresses](../api-management/virtual-network-reference.md#control-plane-ip-addresses)
-
-* Service endpoints
-
- You need to enable service endpoints for Azure SQL, Storage, Service Bus, KeyVault, and Event Hubs because you can't send traffic through a firewall to these services. A sketch of enabling these endpoints appears after this list.
-
-* Other inbound and outbound dependencies
-
- Your firewall *must* allow the following inbound and outbound dependencies:
-
- * [Azure App Service Dependencies](../app-service/environment/firewall-integration.md#deploying-your-ase-behind-a-firewall)
- * [Azure Cache Service Dependencies](../azure-cache-for-redis/cache-how-to-premium-vnet.md#what-are-some-common-misconfiguration-issues-with-azure-cache-for-redis-and-virtual-networks)
- * [Azure API Management Dependencies](../api-management/virtual-network-reference.md)
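-
-For illustration, the following minimal Python sketch enables those service endpoints on an ISE subnet through the Azure Resource Manager REST API. All names and the `api-version` are placeholder assumptions:
-
-```python
-# Hypothetical sketch: turn on the service endpoints that forced tunneling
-# requires by updating the subnet's properties.
-import requests
-
-url = (
-    "https://management.azure.com/subscriptions/{subscription-ID}"
-    "/resourceGroups/{resource-group}/providers/Microsoft.Network"
-    "/virtualNetworks/{virtual-network-name}/subnets/{subnet-name}"
-    "?api-version=2023-05-01"  # assumed API version
-)
-
-subnet = {
-    "properties": {
-        "addressPrefix": "10.0.0.0/27",  # keep your subnet's existing prefix
-        "serviceEndpoints": [
-            {"service": "Microsoft.Sql"},
-            {"service": "Microsoft.Storage"},
-            {"service": "Microsoft.ServiceBus"},
-            {"service": "Microsoft.KeyVault"},
-            {"service": "Microsoft.EventHub"},
-        ],
-    }
-}
-
-response = requests.put(url, json=subnet, headers={"Authorization": "Bearer {token}"})
-response.raise_for_status()
-```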
-
-<a name="create-environment"></a>
-
-## Create your ISE
-
-1. In the [Azure portal](https://portal.azure.com), in the main Azure search box, enter `integration service environments` as your filter, and select **Integration Service Environments**.
-
- ![Find and select "Integration Service Environments"](./media/connect-virtual-network-vnet-isolated-environment/find-integration-service-environment.png)
-
-1. On the **Integration Service Environments** pane, select **Add**.
-
- ![Select "Add" to create integration service environment](./media/connect-virtual-network-vnet-isolated-environment/add-integration-service-environment.png)
-
-1. Provide these details for your environment, and then select **Review + create**, for example:
-
- ![Provide environment details](./media/connect-virtual-network-vnet-isolated-environment/integration-service-environment-details.png)
-
- | Property | Required | Value | Description |
- |-|-|-|-|
- | **Subscription** | Yes | <*Azure-subscription-name*> | The Azure subscription to use for your environment |
- | **Resource group** | Yes | <*Azure-resource-group-name*> | A new or existing Azure resource group where you want to create your environment |
- | **Integration service environment name** | Yes | <*environment-name*> | Your ISE name, which can contain only letters, numbers, hyphens (`-`), underscores (`_`), and periods (`.`). |
- | **Location** | Yes | <*Azure-datacenter-region*> | The Azure datacenter region where to deploy your environment |
- | **SKU** | Yes | **Premium** or **Developer (No SLA)** | The ISE SKU to create and use. For differences between these SKUs, review [ISE SKUs](connect-virtual-network-vnet-isolated-environment-overview.md#ise-level). <p><p>**Important**: This option is available only at ISE creation and can't be changed later. |
- | **Additional capacity** | Premium: <br>Yes <p><p>Developer: <br>Not applicable | Premium: <br>0 to 10 <p><p>Developer: <br>Not applicable | The number of extra processing units to use for this ISE resource. To add capacity after creation, review [Add ISE capacity](ise-manage-integration-service-environment.md#add-capacity). |
- | **Access endpoint** | Yes | **Internal** or **External** | The type of access endpoints to use for your ISE. These endpoints determine whether request or webhook triggers on logic apps in your ISE can receive calls from outside your virtual network. <p><p>For example, if you want to use the following webhook-based triggers, make sure that you select **External**: <p><p>- Azure DevOps <br>- Azure Event Grid <br>- Common Data Service <br>- Office 365 <br>- SAP (ISE version) <p><p>Your selection also affects the way that you can view and access inputs and outputs in your logic app runs history. For more information, review [ISE endpoint access](connect-virtual-network-vnet-isolated-environment-overview.md#endpoint-access). <p><p>**Important**: You can select the access endpoint only during ISE creation and can't change this option later. |
- | **Virtual network** | Yes | <*Azure-virtual-network-name*> | The Azure virtual network where you want to inject your environment so logic apps in that environment can access your virtual network. If you don't have a network, [create an Azure virtual network first](../virtual-network/quick-create-portal.md). <p><p>**Important**: You can *only* perform this injection when you create your ISE. |
 | **Subnets** | Yes | <*subnet-resource-list*> | Regardless of whether you use ISE Premium or Developer, your virtual network requires four *empty* subnets for creating and deploying resources in your ISE. These subnets are used by internal Azure Logic Apps components, such as connectors and caching for performance. <p>**Important**: Make sure that you [review the subnet requirements before continuing with these steps to create your subnets](#create-subnet). |
-
- <a name="create-subnet"></a>
-
- **Create subnets**
-
- Whether you plan to use ISE Premium or Developer, make sure that your virtual network has four *empty* subnets. These subnets are used for creating and deploying resources in your ISE and are used by internal Azure Logic Apps components, such as connectors and caching for performance. You *can't* change these subnet addresses after you create your environment. If you create and deploy your ISE through the Azure portal, make sure that you don't delegate these subnets to any Azure services. However, if you create and deploy your ISE through the REST API, Azure PowerShell, or an Azure Resource Manager template, you need to [delegate](../virtual-network/manage-subnet-delegation.md) one empty subnet to `Microsoft.integrationServiceEnvironment`. For more information, review [Add a subnet delegation](../virtual-network/manage-subnet-delegation.md).
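-
- For example, this minimal Python fragment sketches the delegation object that such a subnet definition would carry when you deploy through the REST API or a Resource Manager template. The delegation name is hypothetical; the service name is the value this article gives above:
-
- ```python
- # Hypothetical sketch: the "delegations" fragment for one empty ISE subnet.
- ise_subnet_properties = {
-     "addressPrefix": "10.0.0.0/27",
-     "delegations": [
-         {
-             "name": "ise-delegation",  # any name you choose
-             "properties": {
-                 # Service name as stated in this article
-                 "serviceName": "Microsoft.integrationServiceEnvironment"
-             },
-         }
-     ],
- }
- ```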
-
- Each subnet needs to meet these requirements:
-
- * Uses a name that starts with either an alphabetic character or an underscore (no numbers), and doesn't use these characters: `<`, `>`, `%`, `&`, `\\`, `?`, `/`.
-
- * Uses the [Classless Inter-Domain Routing (CIDR) format](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing).
-
- > [!IMPORTANT]
- >
- > Don't use the following IP address spaces for your virtual network or subnets because they aren't resolvable by Azure Logic Apps:<p>
- >
- > * 0.0.0.0/8
- > * 100.64.0.0/10
- > * 127.0.0.0/8
- > * 168.63.129.16/32
- > * 169.254.169.254/32
-
- * Uses a `/27` in the address space because each subnet requires 32 addresses. For example, `10.0.0.0/27` has 32 addresses because 2<sup>(32-27)</sup> is 2<sup>5</sup> or 32. More addresses won't provide extra benefits. To learn more about calculating addresses, review [IPv4 CIDR blocks](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing#IPv4_CIDR_blocks).
-
- * If you use [ExpressRoute](../expressroute/expressroute-introduction.md), you have to [create a route table](../virtual-network/manage-route-table.md) that has the following route and link that table with each subnet that's used by your ISE:
-
- **Name**: <*route-name*><br>
- **Address prefix**: 0.0.0.0/0<br>
- **Next hop**: Internet
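-
- As a minimal Python sketch (names and `api-version` are assumptions), you can create that route through the Azure Resource Manager REST API and then associate the route table with each ISE subnet:
-
- ```python
- # Hypothetical sketch: add the 0.0.0.0/0 -> Internet route to a route table.
- import requests
-
- url = (
-     "https://management.azure.com/subscriptions/{subscription-ID}"
-     "/resourceGroups/{resource-group}/providers/Microsoft.Network"
-     "/routeTables/{route-table-name}/routes/{route-name}"
-     "?api-version=2023-05-01"  # assumed API version
- )
-
- route = {
-     "properties": {
-         "addressPrefix": "0.0.0.0/0",
-         "nextHopType": "Internet",
-     }
- }
-
- response = requests.put(url, json=route, headers={"Authorization": "Bearer {token}"})
- response.raise_for_status()
- ```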
-
- 1. Under the **Subnets** list, select **Manage subnet configuration**.
-
- ![Manage subnet configuration](./media/connect-virtual-network-vnet-isolated-environment/manage-subnet-configuration.png)
-
- 1. On the **Subnets** pane, select **Subnet**.
-
- ![Add four empty subnets](./media/connect-virtual-network-vnet-isolated-environment/add-empty-subnets.png)
-
- 1. On the **Add subnet** pane, provide this information.
-
- * **Name**: The name for your subnet
- * **Address range (CIDR block)**: Your subnet's range in your virtual network and in CIDR format
-
- ![Add subnet details](./media/connect-virtual-network-vnet-isolated-environment/provide-subnet-details.png)
-
- 1. When you're done, select **OK**.
-
- 1. Repeat these steps for three more subnets.
-
- > [!NOTE]
- > If the subnets you try to create aren't valid, the Azure portal shows a message,
- > but doesn't block your progress.
-
- For more information about creating subnets, review [Add a virtual network subnet](../virtual-network/virtual-network-manage-subnet.md).
-
-1. After Azure successfully validates your ISE information, select **Create**, for example:
-
- ![After successful validation, select "Create"](./media/connect-virtual-network-vnet-isolated-environment/ise-validation-success.png)
-
 Azure starts deploying your environment, which usually takes up to two hours to finish. Occasionally, deployment might take up to four hours. To check deployment status, on your Azure toolbar, select the notifications icon, which opens the notifications pane.
-
- ![Check deployment status](./media/connect-virtual-network-vnet-isolated-environment/environment-deployment-status.png)
-
- If deployment finishes successfully, Azure shows this notification:
-
- ![Deployment succeeded](./media/connect-virtual-network-vnet-isolated-environment/deployment-success-message.png)
-
- Otherwise, follow the Azure portal instructions for troubleshooting deployment.
-
- > [!NOTE]
- > If deployment fails or you delete your ISE, Azure might take up to an hour,
- > or possibly longer in rare cases, before releasing your subnets. So, you might
- > have to wait before you can reuse those subnets in another ISE.
- >
- > If you delete your virtual network, Azure generally takes up to two hours
-> before releasing your subnets, but this operation might take longer.
- > When deleting virtual networks, make sure that no resources are still connected.
- > For more information, review [Delete virtual network](../virtual-network/manage-virtual-network.md#delete-a-virtual-network).
-
-1. To view your environment, select **Go to resource** if Azure doesn't automatically go to your environment after deployment finishes.
-
-1. For an ISE that has *external* endpoint access, you need to create a network security group (NSG), if you don't have one already. You need to add an inbound security rule to the NSG to allow traffic from managed connector outbound IP addresses. To set up this rule, follow these steps:
-
- 1. On your ISE menu, under **Settings**, select **Properties**.
-
- 1. Under **Connector outgoing IP addresses**, copy the public IP address ranges, which also appear in this article, [Limits and configuration - Outbound IP addresses](logic-apps-limits-and-config.md#outbound).
-
- 1. Create a network security group, if you don't have one already.
-
- 1. Based on the following information, add an inbound security rule for the public outbound IP addresses that you copied. For more information, review [Tutorial: Filter network traffic with a network security group using the Azure portal](../virtual-network/tutorial-filter-network-traffic.md#create-a-network-security-group).
-
- | Purpose | Source service tag or IP addresses | Source ports | Destination service tag or IP addresses | Destination ports | Notes |
 |---|---|---|---|---|---|
- | Permit traffic from connector outbound IP addresses | <*connector-public-outbound-IP-addresses*> | * | Address space for the virtual network with ISE subnets | * | |
-
-1. To check the network health for your ISE, review [Manage your integration service environment](ise-manage-integration-service-environment.md#check-network-health).
-
- > [!CAUTION]
- > If your ISE's network becomes unhealthy, the internal App Service Environment (ASE) that's used by your ISE can also become unhealthy.
- > If the ASE is unhealthy for more than seven days, the ASE is suspended. To resolve this state, check your virtual network setup.
- > Resolve any problems that you find, and then restart your ISE. Otherwise, after 90 days, the suspended ASE is deleted, and your
- > ISE becomes unusable. So, make sure that you keep your ISE healthy to permit the necessary traffic.
- >
- > For more information, review these topics:
- >
- > * [Azure App Service diagnostics overview](../app-service/overview-diagnostics.md)
- > * [Message logging for Azure App Service Environment](../app-service/environment/using-an-ase.md#logging)
-
-1. To start creating logic apps and other artifacts in your ISE, review [Add resources to integration service environments](add-artifacts-integration-service-environment-ise.md).
-
- > [!IMPORTANT]
- > After you create your ISE, managed ISE connectors become available for you to use, but they don't automatically appear
- > in the connector picker on the Logic App Designer. Before you can use these ISE connectors, you have to manually
- > [add and deploy these connectors to your ISE](add-artifacts-integration-service-environment-ise.md#add-ise-connectors-environment)
- > so that they appear in the Logic App Designer.
-
-## Next steps
-
-* [Add resources to integration service environments](add-artifacts-integration-service-environment-ise.md)
-* [Manage integration service environments](ise-manage-integration-service-environment.md#check-network-health)
-* Learn more about [Azure Virtual Network](../virtual-network/virtual-networks-overview.md)
-* Learn about [virtual network integration for Azure services](../virtual-network/virtual-network-for-azure-services.md)
logic-apps Create Integration Service Environment Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/create-integration-service-environment-rest-api.md
- Title: Create integration service environments (ISEs) with Logic Apps REST API
-description: Create an integration service environment (ISE) to access Azure virtual networks (VNETs) using the Azure Logic Apps REST API.
-Previously updated : 11/04/2022
-# Create an integration service environment (ISE) by using the Logic Apps REST API
-
-> [!IMPORTANT]
->
-> On August 31, 2024, the ISE resource will retire, due to its dependency on Azure Cloud Services (classic),
-> which retires at the same time. Before the retirement date, export any logic apps from your ISE to Standard
-> logic apps so that you can avoid service disruption. Standard logic app workflows run in single-tenant Azure
-> Logic Apps and provide the same capabilities plus more.
->
-> Starting November 1, 2022, you can no longer create new ISE resources. However, ISE resources existing
-> before this date are supported through August 31, 2024. For more information, see the following resources:
->
-> - [ISE Retirement - what you need to know](https://techcommunity.microsoft.com/t5/integrations-on-azure-blog/ise-retirement-what-you-need-to-know/ba-p/3645220)
-> - [Single-tenant versus multi-tenant and integration service environment for Azure Logic Apps](single-tenant-overview-compare.md)
-> - [Azure Logic Apps pricing](https://azure.microsoft.com/pricing/details/logic-apps/)
-> - [Export ISE workflows to a Standard logic app](export-from-ise-to-standard-logic-app.md)
-> - [Integration Services Environment will be retired on 31 August 2024 - transition to Logic Apps Standard](https://azure.microsoft.com/updates/integration-services-environment-will-be-retired-on-31-august-2024-transition-to-logic-apps-standard/)
-> - [Cloud Services (classic) deployment model is retiring on 31 August 2024](https://azure.microsoft.com/updates/cloud-services-retirement-announcement/)
-
-For scenarios where your logic apps and integration accounts need access to an [Azure virtual network](../virtual-network/virtual-networks-overview.md), you can create an [*integration service environment* (ISE)](../logic-apps/connect-virtual-network-vnet-isolated-environment-overview.md) by using the Logic Apps REST API. To learn more about ISEs, see [Access to Azure Virtual Network resources from Azure Logic Apps](connect-virtual-network-vnet-isolated-environment-overview.md).
-
-This article shows you how to create an ISE by using the Logic Apps REST API in general. Optionally, you can also enable a [system-assigned or user-assigned managed identity](../active-directory/managed-identities-azure-resources/overview.md#managed-identity-types) on your ISE, but only by using the Logic Apps REST API at this time. This identity lets your ISE authenticate access to secured resources, such as virtual machines and other systems or services, that are in or connected to an Azure virtual network. That way, you don't have to sign in with your credentials.
-
-For more information about other ways to create an ISE, see these articles:
-
-* [Create an ISE by using the Azure portal](../logic-apps/connect-virtual-network-vnet-isolated-environment.md)
-* [Create an ISE by using the sample Azure Resource Manager quickstart template](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.logic/integration-service-environment)
-* [Create an ISE that supports using customer-managed keys for encrypting data at rest](customer-managed-keys-integration-service-environment.md)
-
-## Prerequisites
-
-* The same [prerequisites](../logic-apps/connect-virtual-network-vnet-isolated-environment.md#prerequisites) and [access requirements](../logic-apps/connect-virtual-network-vnet-isolated-environment.md#enable-access) as when you create an ISE in the Azure portal
-
-* Any additional resources that you want to use with your ISE so that you can include their information in the ISE definition, for example:
-
- * To enable self-signed certificate support, you need to include information about that certificate in the ISE definition.
-
- * To enable the user-assigned managed identity, you need to create that identity in advance and include the `objectId`, `principalId` and `clientId` properties and their values in the ISE definition. For more information, see [Create a user-assigned managed identity in the Azure portal](../active-directory/managed-identities-azure-resources/how-to-manage-ua-identity-portal.md#create-a-user-assigned-managed-identity).
-
-* A tool that you can use to create your ISE by calling the Logic Apps REST API with an HTTPS PUT request. For example, you can use [Postman](https://www.getpostman.com/downloads/), or you can build a logic app that performs this task.
-
-## Create the ISE
-
-To create your ISE by calling the Logic Apps REST API, make this HTTPS PUT request:
-
-`PUT https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Logic/integrationServiceEnvironments/{integrationServiceEnvironmentName}?api-version=2019-05-01`
-
-> [!IMPORTANT]
-> The Logic Apps REST API 2019-05-01 version requires that you make your own HTTPS PUT request for ISE connectors.
-
-Deployment usually takes up to two hours to finish. Occasionally, deployment might take up to four hours. To check deployment status, in the [Azure portal](https://portal.azure.com), on your Azure toolbar, select the notifications icon, which opens the notifications pane.
-
-> [!NOTE]
-> If deployment fails or you delete your ISE, Azure might take up to an hour before releasing your subnets.
-> This delay means you might have to wait before reusing those subnets in another ISE.
->
-> If you delete your virtual network, Azure generally takes up to two hours
-> before releasing your subnets, but this operation might take longer.
-> When deleting virtual networks, make sure that no resources are still connected.
-> See [Delete virtual network](../virtual-network/manage-virtual-network.md#delete-a-virtual-network).
-
-## Request header
-
-In the request header, include these properties:
-
-* `Content-type`: Set this property value to `application/json`.
-
-* `Authorization`: Set this property value to the bearer token for the customer who has access to the Azure subscription or resource group that you want to use.
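-
-As a minimal sketch, the following Python code (one alternative to Postman) sends the `PUT` request with both headers. The placeholder values are assumptions, and `ise_definition` stands for the resource definition described in the [Request body](#request-body) section:
-
-```python
-# Hypothetical sketch: call the Logic Apps REST API to create an ISE.
-import requests
-
-url = (
-    "https://management.azure.com/subscriptions/{subscriptionId}"
-    "/resourceGroups/{resourceGroupName}/providers/Microsoft.Logic"
-    "/integrationServiceEnvironments/{integrationServiceEnvironmentName}"
-    "?api-version=2019-05-01"
-)
-
-ise_definition = {}  # the request body that the next section describes
-
-response = requests.put(
-    url,
-    json=ise_definition,  # Requests sets "Content-type: application/json"
-    headers={"Authorization": "Bearer {bearer-token}"},
-)
-response.raise_for_status()
-print(response.status_code)  # Azure continues provisioning after the call returns
-```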
-
-<a name="request-body"></a>
-
-## Request body
-
-In the request body, provide the resource definition to use for creating your ISE, including information for additional capabilities that you want to enable on your ISE, for example:
-
-* To create an ISE that permits using a self-signed certificate and a certificate issued by an Enterprise Certificate Authority that's installed at the `TrustedRoot` location, include the `certificates` object inside the ISE definition's `properties` section, as this article later describes.
-
-* To create an ISE that uses a system-assigned or user-assigned managed identity, include the `identity` object with the managed identity type and other required information in the ISE definition, as this article later describes.
-
-* To create an ISE that uses customer-managed keys and Azure Key Vault to encrypt data at rest, include the [information that enables customer-managed key support](customer-managed-keys-integration-service-environment.md). You can set up customer-managed keys *only at creation*, not afterwards.
-
-### Request body syntax
-
-Here is the request body syntax, which describes the properties to use when you create your ISE:
-
-```json
-{
- "id": "/subscriptions/{Azure-subscription-ID}/resourceGroups/{Azure-resource-group}/providers/Microsoft.Logic/integrationServiceEnvironments/{ISE-name}",
- "name": "{ISE-name}",
- "type": "Microsoft.Logic/integrationServiceEnvironments",
- "location": "{Azure-region}",
- "sku": {
- "name": "Premium",
- "capacity": 1
- },
- // Include the `identity` object to enable the system-assigned identity or user-assigned identity
- "identity": {
- "type": <"SystemAssigned" | "UserAssigned">,
- // When type is "UserAssigned", include the following "userAssignedIdentities" object:
- "userAssignedIdentities": {
- "/subscriptions/{Azure-subscription-ID}/resourceGroups/{Azure-resource-group}/providers/Microsoft.ManagedIdentity/userAssignedIdentities/{user-assigned-managed-identity-object-ID}": {
- "principalId": "{principal-ID}",
- "clientId": "{client-ID}"
- }
- }
- },
- "properties": {
- "networkConfiguration": {
- "accessEndpoint": {
- // Your ISE can use the "External" or "Internal" endpoint. This example uses "External".
- "type": "External"
- },
- "subnets": [
- {
- "id": "/subscriptions/{Azure-subscription-ID}/resourceGroups/{Azure-resource-group}/providers/Microsoft.Network/virtualNetworks/{virtual-network-name}/subnets/{subnet-1}",
- },
- {
- "id": "/subscriptions/{Azure-subscription-ID}/resourceGroups/{Azure-resource-group}/providers/Microsoft.Network/virtualNetworks/{virtual-network-name}/subnets/{subnet-2}",
- },
- {
- "id": "/subscriptions/{Azure-subscription-ID}/resourceGroups/{Azure-resource-group}/providers/Microsoft.Network/virtualNetworks/{virtual-network-name}/subnets/{subnet-3}",
- },
- {
- "id": "/subscriptions/{Azure-subscription-ID}/resourceGroups/{Azure-resource-group}/providers/Microsoft.Network/virtualNetworks/{virtual-network-name}/subnets/{subnet-4}",
- }
- ]
- },
- // Include `certificates` object to enable self-signed certificate and the certificate issued by Enterprise Certificate Authority
- "certificates": {
- "testCertificate": {
- "publicCertificate": "{base64-encoded-certificate}",
- "kind": "TrustedRoot"
- }
- }
- }
-}
-```
-
-### Request body example
-
-This example request body shows the sample values:
-
-```json
-{
- "id": "/subscriptions/********************/resourceGroups/Fabrikam-RG/providers/Microsoft.Logic/integrationServiceEnvironments/Fabrikam-ISE",
- "name": "Fabrikam-ISE",
- "type": "Microsoft.Logic/integrationServiceEnvironments",
- "location": "WestUS2",
- "identity": {
- "type": "UserAssigned",
- "userAssignedIdentities": {
- "/subscriptions/********************/resourceGroups/Fabrikam-RG/providers/Microsoft.ManagedIdentity/userAssignedIdentities/*********************************": {
- "principalId": "*********************************",
- "clientId": "*********************************"
- }
- }
- },
- "sku": {
- "name": "Premium",
- "capacity": 1
- },
- "properties": {
- "networkConfiguration": {
- "accessEndpoint": {
- // Your ISE can use the "External" or "Internal" endpoint. This example uses "External".
- "type": "External"
- },
- "subnets": [
- {
- "id": "/subscriptions/********************/resourceGroups/Fabrikam-RG/providers/Microsoft.Network/virtualNetworks/Fabrikam-VNET/subnets/subnet-1",
- },
- {
- "id": "/subscriptions/********************/resourceGroups/Fabrikam-RG/providers/Microsoft.Network/virtualNetworks/Fabrikam-VNET/subnets/subnet-2",
- },
- {
- "id": "/subscriptions/********************/resourceGroups/Fabrikam-RG/providers/Microsoft.Network/virtualNetworks/Fabrikam-VNET/subnets/subnet-3",
- },
- {
- "id": "/subscriptions/********************/resourceGroups/Fabrikam-RG/providers/Microsoft.Network/virtualNetworks/Fabrikam-VNET/subnets/subnet-4",
- }
- ]
- },
- "certificates": {
- "testCertificate": {
- "publicCertificate": "LS0tLS1CRUdJTiBDRV...",
- "kind": "TrustedRoot"
- }
- }
- }
-}
-```
-
-## Add custom root certificates
-
-You often use an ISE to connect to custom services on your virtual network or on premises. These custom services are often protected by a certificate that's issued by a custom root certificate authority, such as an Enterprise Certificate Authority, or by a self-signed certificate. For more information about using self-signed certificates, see [Secure access and data - Access for outbound calls to other services and systems](../logic-apps/logic-apps-securing-a-logic-app.md#secure-outbound-requests). For your ISE to successfully connect to these services through Transport Layer Security (TLS), your ISE needs access to these root certificates.
-
-#### Considerations for adding custom root certificates
-
-Before you update your ISE with a custom trusted root certificate, review these considerations:
-
-* Make sure that you upload the root certificate *and* all the intermediate certificates. The maximum number of certificates is 20.
-
-* The subject name on the certificate must match the host name for the target endpoint that you want to call from Azure Logic Apps.
-
-* Uploading root certificates is a replacement operation where the latest upload overwrites previous uploads. For example, if you send a request that uploads one certificate, and then send another request to upload another certificate, your ISE uses only the second certificate. If you need to use both certificates, add them together in the same request.
-
-* Uploading root certificates is an asynchronous operation that might take some time. To check the status or result, you can send a `GET` request by using the same URI. The response message has a `provisioningState` field that returns the `InProgress` value when the upload operation is still working. When the `provisioningState` value is `Succeeded`, the upload operation is complete. A sketch of this polling loop follows these considerations.
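-
-For illustration, here's a minimal Python sketch that polls the upload status until the status leaves `InProgress` (placeholder values are assumptions):
-
-```python
-# Hypothetical sketch: poll the ISE until the certificate upload completes.
-import time
-
-import requests
-
-url = (
-    "https://management.azure.com/subscriptions/{subscriptionId}"
-    "/resourceGroups/{resourceGroupName}/providers/Microsoft.Logic"
-    "/integrationServiceEnvironments/{integrationServiceEnvironmentName}"
-    "?api-version=2019-05-01"
-)
-headers = {"Authorization": "Bearer {bearer-token}"}
-
-state = "InProgress"
-while state == "InProgress":
-    time.sleep(30)  # the upload operation might take some time
-    state = requests.get(url, headers=headers).json()["properties"]["provisioningState"]
-
-print(state)  # "Succeeded" means the upload operation is complete
-```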
-
-#### Request syntax
-
-To update your ISE with a custom trusted root certificate, send the following HTTPS PATCH request to the [Azure Resource Manager URL, which differs based on your Azure environment](../azure-resource-manager/management/control-plane-and-data-plane.md#control-plane), for example:
-
-| Environment | Azure Resource Manager URL |
-|-|-|
-| Azure global (multi-tenant) | `PATCH https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Logic/integrationServiceEnvironments/{integrationServiceEnvironmentName}?api-version=2019-05-01` |
-| Azure Government | `PATCH https://management.usgovcloudapi.net/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Logic/integrationServiceEnvironments/{integrationServiceEnvironmentName}?api-version=2019-05-01` |
-| Microsoft Azure operated by 21Vianet | `PATCH https://management.chinacloudapi.cn/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Logic/integrationServiceEnvironments/{integrationServiceEnvironmentName}?api-version=2019-05-01` |
-
-#### Request body syntax for adding custom root certificates
-
-Here is the request body syntax, which describes the properties to use when you add root certificates:
-
-```json
-{
- "id": "/subscriptions/{Azure-subscription-ID}/resourceGroups/{Azure-resource-group}/providers/Microsoft.Logic/integrationServiceEnvironments/{ISE-name}",
- "name": "{ISE-name}",
- "type": "Microsoft.Logic/integrationServiceEnvironments",
- "location": "{Azure-region}",
- "properties": {
- "certificates": {
- "testCertificate1": {
- "publicCertificate": "{base64-encoded-certificate}",
- "kind": "TrustedRoot"
- },
- "testCertificate2": {
- "publicCertificate": "{base64-encoded-certificate}",
- "kind": "TrustedRoot"
- }
- }
- }
-}
-```
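-
-As a minimal Python sketch, the following code base64-encodes two local certificate files (hypothetical file names) and sends both in a single `PATCH` request, because each upload replaces the previous one:
-
-```python
-# Hypothetical sketch: upload a root certificate plus an intermediate
-# certificate to an existing ISE in one PATCH request.
-import base64
-
-import requests
-
-def encode_certificate(path: str) -> str:
-    with open(path, "rb") as cert_file:
-        return base64.b64encode(cert_file.read()).decode()
-
-url = (
-    "https://management.azure.com/subscriptions/{subscriptionId}"
-    "/resourceGroups/{resourceGroupName}/providers/Microsoft.Logic"
-    "/integrationServiceEnvironments/{integrationServiceEnvironmentName}"
-    "?api-version=2019-05-01"
-)
-
-body = {
-    "id": "{ISE-resource-ID}",
-    "name": "{ISE-name}",
-    "type": "Microsoft.Logic/integrationServiceEnvironments",
-    "location": "{Azure-region}",
-    "properties": {
-        "certificates": {
-            "rootCertificate": {
-                "publicCertificate": encode_certificate("root.cer"),
-                "kind": "TrustedRoot",
-            },
-            "intermediateCertificate": {
-                "publicCertificate": encode_certificate("intermediate.cer"),
-                "kind": "TrustedRoot",
-            },
-        }
-    },
-}
-
-response = requests.patch(url, json=body, headers={"Authorization": "Bearer {bearer-token}"})
-response.raise_for_status()
-```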
-
-## Next steps
-
-* [Add resources to integration service environments](../logic-apps/add-artifacts-integration-service-environment-ise.md)
-* [Manage integration service environments](../logic-apps/ise-manage-integration-service-environment.md#check-network-health)
logic-apps Customer Managed Keys Integration Service Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/customer-managed-keys-integration-service-environment.md
- Title: Set up customer-managed keys to encrypt data at rest in ISEs
-description: Create and manage your own encryption keys to secure data at rest for integration service environments (ISEs) in Azure Logic Apps.
-Previously updated : 11/04/2022
-# Set up customer-managed keys to encrypt data at rest for integration service environments (ISEs) in Azure Logic Apps
-
-> [!IMPORTANT]
->
-> On August 31, 2024, the ISE resource will retire, due to its dependency on Azure Cloud Services (classic),
-> which retires at the same time. Before the retirement date, export any logic apps from your ISE to Standard
-> logic apps so that you can avoid service disruption. Standard logic app workflows run in single-tenant Azure
-> Logic Apps and provide the same capabilities plus more.
->
-> Starting November 1, 2022, you can no longer create new ISE resources. However, ISE resources existing
-> before this date are supported through August 31, 2024. For more information, see the following resources:
->
-> - [ISE Retirement - what you need to know](https://techcommunity.microsoft.com/t5/integrations-on-azure-blog/ise-retirement-what-you-need-to-know/ba-p/3645220)
-> - [Single-tenant versus multi-tenant and integration service environment for Azure Logic Apps](single-tenant-overview-compare.md)
-> - [Azure Logic Apps pricing](https://azure.microsoft.com/pricing/details/logic-apps/)
-> - [Export ISE workflows to a Standard logic app](export-from-ise-to-standard-logic-app.md)
-> - [Integration Services Environment will be retired on 31 August 2024 - transition to Logic Apps Standard](https://azure.microsoft.com/updates/integration-services-environment-will-be-retired-on-31-august-2024-transition-to-logic-apps-standard/)
-> - [Cloud Services (classic) deployment model is retiring on 31 August 2024](https://azure.microsoft.com/updates/cloud-services-retirement-announcement/)
-
-Azure Logic Apps relies on Azure Storage to store and automatically [encrypt data at rest](../storage/common/storage-service-encryption.md). This encryption protects your data and helps you meet your organizational security and compliance commitments. By default, Azure Storage uses Microsoft-managed keys to encrypt your data. For more information about how Azure Storage encryption works, see [Azure Storage encryption for data at rest](../storage/common/storage-service-encryption.md) and [Azure Data Encryption-at-Rest](../security/fundamentals/encryption-atrest.md).
-
-When you create an [integration service environment (ISE)](../logic-apps/connect-virtual-network-vnet-isolated-environment-overview.md) for hosting your logic apps, and you want more control over the encryption keys used by Azure Storage, you can set up, use, and manage your own key by using [Azure Key Vault](../key-vault/general/overview.md). This capability is known as "Bring Your Own Key" (BYOK), and your key is called a "customer-managed key". With this capability, Azure Storage automatically enables [double encryption or *infrastructure encryption* using platform-managed keys](../security/fundamentals/double-encryption.md) for your key. To learn more, see [Doubly encrypt data with infrastructure encryption](../storage/common/storage-service-encryption.md#doubly-encrypt-data-with-infrastructure-encryption).
-
-This topic shows how to set up and specify your own encryption key to use when you create your ISE by using the Logic Apps REST API. For the general steps to create an ISE through Logic Apps REST API, see [Create an integration service environment (ISE) by using the Logic Apps REST API](../logic-apps/create-integration-service-environment-rest-api.md).
-
-## Considerations
-
-* At this time, customer-managed key support for an ISE is available only in the following regions:
-
- * Azure: West US 2, East US, and South Central US.
-
- * Azure Government: Arizona, Virginia, and Texas.
-
-* You can specify a customer-managed key *only when you create your ISE*, not afterwards. You can't disable this key after your ISE is created. Currently, no support exists for rotating a customer-managed key for an ISE.
-
-* The key vault that stores your customer-managed key must exist in the same Azure region as your ISE.
-
-* To support customer-managed keys, your ISE requires that you enable either the [system-assigned or user-assigned managed identity](../active-directory/managed-identities-azure-resources/overview.md#managed-identity-types). This identity lets your ISE authenticate access to secured resources, such as virtual machines and other systems or services, that are in or connected to an Azure virtual network. That way, you don't have to sign in with your credentials.
-
-* Currently, to create an ISE that supports customer-managed keys and has either managed identity type enabled, you have to call the Logic Apps REST API by using an HTTPS PUT request.
-
-* You must [give key vault access to your ISE's managed identity](#identity-access-to-key-vault), but the timing depends on which managed identity that you use.
-
- * **System-assigned managed identity**: Within *30 minutes after* you send the HTTPS PUT request that creates your ISE, you must [give key vault access to your ISE's managed identity](#identity-access-to-key-vault). Otherwise, ISE creation fails, and you get a permissions error.
-
- * **User-assigned managed identity**: Before you send the HTTPS PUT request that creates your ISE, [give key vault access to your ISE's managed identity](#identity-access-to-key-vault).
-
-## Prerequisites
-
-* The same [prerequisites](../logic-apps/connect-virtual-network-vnet-isolated-environment.md#prerequisites) and [requirements to enable access for your ISE](../logic-apps/connect-virtual-network-vnet-isolated-environment.md#enable-access) as when you create an ISE in the Azure portal
-
-* An Azure key vault that has the **Soft Delete** and **Do Not Purge** properties enabled
-
- For more information about enabling these properties, see [Azure Key Vault soft-delete overview](../key-vault/general/soft-delete-overview.md) and [Configure customer-managed keys with Azure Key Vault](../storage/common/customer-managed-keys-configure-key-vault.md). If you're new to [Azure Key Vault](../key-vault/general/overview.md), learn how to create a key vault using [Azure portal](../key-vault/general/quick-create-portal.md), [Azure CLI](../key-vault/general/quick-create-cli.md), or [Azure PowerShell](../key-vault/general/quick-create-powershell.md).
-
-* In your key vault, a key that's created with these property values:
-
- | Property | Value |
- |-|-|
- | **Key Type** | RSA |
- | **RSA Key Size** | 2048 |
- | **Enabled** | Yes |
-
- ![Create your customer-managed encryption key](./media/customer-managed-keys-integration-service-environment/create-customer-managed-key-for-encryption.png)
-
 For more information, see [Configure customer-managed keys with Azure Key Vault](../storage/common/customer-managed-keys-configure-key-vault.md) or the Azure PowerShell command, [Add-AzKeyVaultKey](/powershell/module/az.keyvault/add-azkeyvaultkey). A Python sketch for creating this key appears after these prerequisites.
-
-* A tool that you can use to create your ISE by calling the Logic Apps REST API with an HTTPS PUT request. For example, you can use [Postman](https://www.getpostman.com/downloads/), or you can build a logic app that performs this task.
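-
-As a minimal sketch, you can also create the key with the Azure SDK for Python (`azure-identity` and `azure-keyvault-keys` packages) as an alternative to the PowerShell command mentioned earlier. The vault and key names are placeholders:
-
-```python
-# Hypothetical sketch: create an enabled 2048-bit RSA key in your key vault.
-from azure.identity import DefaultAzureCredential
-from azure.keyvault.keys import KeyClient
-
-client = KeyClient(
-    vault_url="https://{key-vault-name}.vault.azure.net",
-    credential=DefaultAzureCredential(),
-)
-
-key = client.create_rsa_key("ise-encryption-key", size=2048, enabled=True)
-print(key.id)  # reference this key's name and version in your ISE definition
-```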
-
-<a name="enable-support-key-managed-identity"></a>
-
-## Create ISE with key vault and managed identity support
-
-To create your ISE by calling the Logic Apps REST API, make this HTTPS PUT request:
-
-`PUT https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Logic/integrationServiceEnvironments/{integrationServiceEnvironmentName}?api-version=2019-05-01`
-
-> [!IMPORTANT]
-> The Logic Apps REST API 2019-05-01 version requires that you make your own HTTPS PUT request for ISE connectors.
-
-Deployment usually takes up to two hours to finish. Occasionally, deployment might take up to four hours. To check deployment status, in the [Azure portal](https://portal.azure.com), on your Azure toolbar, select the notifications icon, which opens the notifications pane.
-
-> [!NOTE]
-> If deployment fails or you delete your ISE, Azure might take up to an hour
-> before releasing your subnets. This delay means you might have to wait
-> before reusing those subnets in another ISE.
->
-> If you delete your virtual network, Azure generally takes up to two hours
-> before releasing your subnets, but this operation might take longer.
-> When deleting virtual networks, make sure that no resources are still connected.
-> See [Delete virtual network](../virtual-network/manage-virtual-network.md#delete-a-virtual-network).
-
-### Request header
-
-In the request header, include these properties:
-
-* `Content-type`: Set this property value to `application/json`.
-
-* `Authorization`: Set this property value to the bearer token for the customer who has access to the Azure subscription or resource group that you want to use.
-
-### Request body
-
-In the request body, enable support for these additional items by providing their information in your ISE definition:
-
-* The managed identity that your ISE uses to access your key vault
-* Your key vault and the customer-managed key that you want to use
-
-#### Request body syntax
-
-Here is the request body syntax, which describes the properties to use when you create your ISE:
-
-```json
-{
- "id": "/subscriptions/{Azure-subscription-ID}/resourceGroups/{Azure-resource-group}/providers/Microsoft.Logic/integrationServiceEnvironments/{ISE-name}",
- "name": "{ISE-name}",
- "type": "Microsoft.Logic/integrationServiceEnvironments",
- "location": "{Azure-region}",
- "sku": {
- "name": "Premium",
- "capacity": 1
- },
- "identity": {
- "type": <"SystemAssigned" | "UserAssigned">,
- // When type is "UserAssigned", include the following "userAssignedIdentities" object:
- "userAssignedIdentities": {
- "/subscriptions/{Azure-subscription-ID}/resourceGroups/{Azure-resource-group}/providers/Microsoft.ManagedIdentity/userAssignedIdentities/{user-assigned-managed-identity-object-ID}": {
- "principalId": "{principal-ID}",
- "clientId": "{client-ID}"
- }
- }
- },
- "properties": {
- "networkConfiguration": {
- "accessEndpoint": {
- // Your ISE can use the "External" or "Internal" endpoint. This example uses "External".
- "type": "External"
- },
- "subnets": [
- {
- "id": "/subscriptions/{Azure-subscription-ID}/resourceGroups/{Azure-resource-group}/providers/Microsoft.Network/virtualNetworks/{virtual-network-name}/subnets/{subnet-1}",
- },
- {
- "id": "/subscriptions/{Azure-subscription-ID}/resourceGroups/{Azure-resource-group}/providers/Microsoft.Network/virtualNetworks/{virtual-network-name}/subnets/{subnet-2}",
- },
- {
- "id": "/subscriptions/{Azure-subscription-ID}/resourceGroups/{Azure-resource-group}/providers/Microsoft.Network/virtualNetworks/{virtual-network-name}/subnets/{subnet-3}",
- },
- {
- "id": "/subscriptions/{Azure-subscription-ID}/resourceGroups/{Azure-resource-group}/providers/Microsoft.Network/virtualNetworks/{virtual-network-name}/subnets/{subnet-4}",
- }
- ]
- },
- "encryptionConfiguration": {
- "encryptionKeyReference": {
- "keyVault": {
- "id": "subscriptions/{Azure-subscription-ID}/resourceGroups/{Azure-resource-group}/providers/Microsoft.KeyVault/vaults/{key-vault-name}",
- },
- "keyName": "{customer-managed-key-name}",
- "keyVersion": "{key-version-number}"
- }
- }
- }
-}
-```
-
-#### Request body example
-
-This example request body shows the sample values:
-
-```json
-{
- "id": "/subscriptions/********************/resourceGroups/Fabrikam-RG/providers/Microsoft.Logic/integrationServiceEnvironments/Fabrikam-ISE",
- "name": "Fabrikam-ISE",
- "type": "Microsoft.Logic/integrationServiceEnvironments",
- "location": "WestUS2",
- "identity": {
- "type": "UserAssigned",
- "userAssignedIdentities": {
- "/subscriptions/********************/resourceGroups/Fabrikam-RG/providers/Microsoft.ManagedIdentity/userAssignedIdentities/*********************************": {
- "principalId": "*********************************",
- "clientId": "*********************************"
- }
- }
- },
- "sku": {
- "name": "Premium",
- "capacity": 1
- },
- "properties": {
- "networkConfiguration": {
- "accessEndpoint": {
- // Your ISE can use the "External" or "Internal" endpoint. This example uses "External".
- "type": "External"
- },
- "subnets": [
- {
- "id": "/subscriptions/********************/resourceGroups/Fabrikam-RG/providers/Microsoft.Network/virtualNetworks/Fabrikam-VNET/subnets/subnet-1",
- },
- {
- "id": "/subscriptions/********************/resourceGroups/Fabrikam-RG/providers/Microsoft.Network/virtualNetworks/Fabrikam-VNET/subnets/subnet-2",
- },
- {
- "id": "/subscriptions/********************/resourceGroups/Fabrikam-RG/providers/Microsoft.Network/virtualNetworks/Fabrikam-VNET/subnets/subnet-3",
- },
- {
- "id": "/subscriptions/********************/resourceGroups/Fabrikam-RG/providers/Microsoft.Network/virtualNetworks/Fabrikam-VNET/subnets/subnet-4",
- }
- ]
- },
- "encryptionConfiguration": {
- "encryptionKeyReference": {
- "keyVault": {
- "id": "subscriptions/********************/resourceGroups/Fabrikam-RG/providers/Microsoft.KeyVault/vaults/FabrikamKeyVault",
- },
- "keyName": "Fabrikam-Encryption-Key",
- "keyVersion": "********************"
- }
- }
- }
-}
-```
-
-<a name="identity-access-to-key-vault"></a>
-
-## Grant access to your key vault
-
-Although the timing differs based on the managed identity that you use, you must give your ISE's managed identity access to your key vault.
-
-* **System-assigned managed identity**: Within *30 minutes after* you send the HTTPS PUT request that creates your ISE, you must add an access policy to your key vault for your ISE's system-assigned managed identity. Otherwise, creation for your ISE fails, and you get a permissions error.
-
-* **User-assigned managed identity**: Before you send the HTTPS PUT request that creates your ISE, add an access policy to your key vault for your ISE's user-assigned managed identity.
-
-For this task, you can use either the Azure PowerShell [Set-AzKeyVaultAccessPolicy](/powershell/module/az.keyvault/set-azkeyvaultaccesspolicy) command, or you can follow these steps in the Azure portal:
-
-1. In the [Azure portal](https://portal.azure.com), open your Azure key vault.
-
-1. On your key vault menu, select **Access policies** > **Add Access Policy**, for example:
-
- ![Add access policy for system-assigned managed identity](./media/customer-managed-keys-integration-service-environment/add-ise-access-policy-key-vault.png)
-
-1. After the **Add access policy** pane opens, follow these steps:
-
- 1. Select these options:
-
- | Setting | Values |
 |---|---|
- | **Configure from template (optional) list** | Key Management |
- | **Key permissions** | - **Key Management Operations**: Get, List <p><p>- **Cryptographic Operations**: Unwrap Key, Wrap Key |
-
- ![Select "Key Management" > "Key permissions"](./media/customer-managed-keys-integration-service-environment/select-key-permissions.png)
-
- 1. For **Select principal**, select **None selected**. After the **Principal** pane opens, in the search box, find and select your ISE. When you're done, choose **Select** > **Add**.
-
- ![Select your ISE to use as the principal](./media/customer-managed-keys-integration-service-environment/select-service-principal-ise.png)
-
- 1. When you're finished with the **Access policies** pane, select **Save**.
-
-For more information, see [How to authenticate to Key Vault](../key-vault/general/authentication.md) and [Assign a Key Vault access policy](../key-vault/general/assign-access-policy-portal.md).
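-
-As a minimal Python sketch, you can also add this access policy through the Azure Resource Manager REST API. The `api-version` and placeholder IDs are assumptions:
-
-```python
-# Hypothetical sketch: grant the ISE's managed identity the key permissions
-# listed above (Get, List, Unwrap Key, Wrap Key).
-import requests
-
-url = (
-    "https://management.azure.com/subscriptions/{subscriptionId}"
-    "/resourceGroups/{resourceGroupName}/providers/Microsoft.KeyVault"
-    "/vaults/{key-vault-name}/accessPolicies/add?api-version=2022-07-01"
-)
-
-policy = {
-    "properties": {
-        "accessPolicies": [
-            {
-                "tenantId": "{tenant-ID}",
-                "objectId": "{ISE-managed-identity-principal-ID}",
-                "permissions": {"keys": ["get", "list", "unwrapKey", "wrapKey"]},
-            }
-        ]
-    }
-}
-
-response = requests.put(url, json=policy, headers={"Authorization": "Bearer {bearer-token}"})
-response.raise_for_status()
-```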
-
-## Next steps
-
-* Learn more about [Azure Key Vault](../key-vault/general/overview.md)
logic-apps Edit App Settings Host Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/edit-app-settings-host-settings.md
The following settings work only for workflows that start with a recurrence-base
| Setting | Default value | Description |
|---------|---------------|-------------|
-| `Runtime.Backend.FlowRunTimeout` | `90.00:00:00` <br>(90 days) | Sets the amount of time a workflow can continue running before forcing a timeout. <br><br>**Important**: Make sure this value is less than or equal to the value for the app setting named `Workflows.RuntimeConfiguration.RetentionInDays`. Otherwise, run histories can get deleted before the associated jobs are complete. The minimum value for this setting is 7 days. |
+| `Runtime.Backend.FlowRunTimeout` | `90.00:00:00` <br>(90 days) | Sets the amount of time a workflow can continue running before forcing a timeout. The minimum value for this setting is 7 days. <br><br>**Important**: Make sure this value is less than or equal to the value for the app setting named `Workflows.RuntimeConfiguration.RetentionInDays`. Otherwise, run histories can get deleted before the associated jobs are complete. |
| `Runtime.FlowMaintenanceJob.RetentionCooldownInterval` | `7.00:00:00` <br>(7 days) | Sets the amount of time in days as the interval between when to check for and delete run history that you no longer want to keep. |

<a name="run-actions"></a>
logic-apps Create Integration Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/enterprise-integration/create-integration-account.md
+
+ Title: Create and manage integration accounts
+description: Create and manage integration accounts for building B2B enterprise integration workflows in Azure Logic Apps with the Enterprise Integration Pack.
++
+ms.suite: integration
+Last updated : 08/29/2023
+# Create and manage integration accounts for B2B workflows in Azure Logic Apps with the Enterprise Integration Pack
++
+Before you can build business-to-business (B2B) and enterprise integration workflows using Azure Logic Apps, you need to create an *integration account* resource. This account is a scalable cloud-based container in Azure that simplifies how you store and manage B2B artifacts that you define and use in your workflows for B2B scenarios, for example:
+
+* [Trading partners](../logic-apps-enterprise-integration-partners.md)
+* [Agreements](../logic-apps-enterprise-integration-agreements.md)
+* [Maps](../logic-apps-enterprise-integration-maps.md)
+* [Schemas](../logic-apps-enterprise-integration-schemas.md)
+* [Certificates](../logic-apps-enterprise-integration-certificates.md)
+
+You also need an integration account to electronically exchange B2B messages with other organizations. When other organizations use protocols and message formats that differ from your organization's, you have to convert these formats so that your organization's systems can process those messages. With Azure Logic Apps, you can build workflows that support the following industry-standard protocols:
+
+* [AS2](../logic-apps-enterprise-integration-as2.md)
+* [EDIFACT](../logic-apps-enterprise-integration-edifact.md)
+* [RosettaNet](../logic-apps-enterprise-integration-rosettanet.md)
+* [X12](../logic-apps-enterprise-integration-x12.md)
+
+This guide shows how to complete the following tasks:
+
+* Create an integration account.
+* Set up storage access for a Premium integration account.
+* Link your integration account to a logic app resource.
+* Change the pricing tier for your integration account.
+* Unlink your integration account from a logic app resource.
+* Move an integration account to another Azure resource group or subscription.
+* Delete an integration account.
+
+If you're new to creating B2B enterprise integration workflows in Azure Logic Apps, see [B2B enterprise integration workflows with Azure Logic Apps and Enterprise Integration Pack](../logic-apps-enterprise-integration-overview.md).
+
+## Prerequisites
+
+* An Azure account and subscription. If you don't have an Azure subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). Make sure that you use the same Azure subscription for both your integration account and logic app resource.
+
+* Whether you're working on a Consumption or Standard logic app workflow, your logic app resource must already exist before you can link your integration account.
+
+ * For Consumption logic app resources, this link is required before you can use the artifacts from your integration account with your workflow. Although you can create your artifacts without this link, the link is required when you're ready to use these artifacts.
+
+ * For Standard logic app resources, this link is optional, based on your scenario:
+
+ * If you have an integration account with the artifacts that you need or want to use, you can link the integration account to each Standard logic app resource where you want to use the artifacts.
+
+ * Some Azure-hosted integration account connectors, such as **AS2**, **EDIFACT**, and **X12**, let you create a connection to your integration account. If you're just using these connectors, you don't need the link.
+
+ * The built-in connectors named **Liquid** and **Flat File** let you select maps and schemas that you previously uploaded to your logic app resource or to a linked integration account.
+
+ If you don't have or need an integration account, you can use the upload option. Otherwise, you can use the linking option, which also means you don't have to upload maps and schemas to each logic app resource. Either way, you can use these artifacts across all child workflows within the *same logic app resource*.
+
+* Basic knowledge about how to create logic app workflows. For more information, see the following documentation:
+
+ * [Quickstart: Create an example Consumption logic app workflow in multi-tenant Azure Logic Apps](../quickstart-create-example-consumption-workflow.md)
+
+ * [Create an example Standard logic app workflow in single-tenant Azure Logic Apps](../create-single-tenant-workflows-azure-portal.md)
+
+## Create integration account
+
+Integration accounts are available in different tiers that [vary in pricing](https://azure.microsoft.com/pricing/details/logic-apps/). Based on the tier you choose, creating an integration account might incur costs. For more information, see [Azure Logic Apps pricing and billing models](../logic-apps-pricing.md#integration-accounts) and [Azure Logic Apps pricing](https://azure.microsoft.com/pricing/details/logic-apps/).
+
+Based on your requirements and scenarios, determine the appropriate integration account tier to create. Your integration account uses an automatically created and enabled system-assigned managed identity to authenticate access.
+
+The following table describes the available tiers:
+
+| Tier | Description |
+||-|
+| **Premium** (preview) | **Note:** This capability is in preview and is subject to the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). <br><br>For scenarios with the following criteria: <br><br>- Store and use unlimited artifacts, such as partners, agreements, schemas, maps, certificates, and so on. <br><br>- Bring and use your own storage, which contains the relevant runtime states for specific B2B actions and EDI standards. For example, these states include the MIC number for AS2 actions and the control numbers for X12 actions, if configured on your agreements. <br><br>To access this storage, your integration account uses its system-assigned managed identity, which is automatically created and enabled for your integration account. <br><br>You can also apply more governance and policies to data, such as customer-managed ("Bring Your Own") keys for data encryption. To store these keys, you'll need a key vault. <br><br>- Set up and use a key vault to store private certificates or customer-managed keys. To access these keys, your Premium integration account uses its system-assigned managed identity, not an Azure Logic Apps shared service principal. <br><br>Pricing follows [Standard integration account pricing](https://azure.microsoft.com/pricing/details/logic-apps/). <br><br>**Note**: During preview, your Azure bill uses the same meter name and ID as a Standard integration account, but changes when the Premium level becomes generally available. <br><br>**Limitations and known issues**: <br><br>- Currently doesn't support virtual networks. <br><br>- If you use a key vault to store private certificates, your integration account's managed identity might not work. For now, use the linked logic app's managed identity instead. <br><br>- Currently doesn't support the [Azure CLI for Azure Logic Apps](/cli/azure/service-page/logic%20apps). |
+| **Standard** | For scenarios where you have more complex B2B relationships and increased numbers of entities that you must manage. <br><br>Supported by the Azure Logic Apps SLA. |
+| **Basic** | For scenarios where you want only message handling or to act as a small business partner that has a trading partner relationship with a larger business entity. <br><br>Supported by the Azure Logic Apps SLA. |
+| **Free** | For exploratory scenarios, not production scenarios. This tier has limits on region availability, throughput, and usage. For example, the Free tier is available only for public regions in Azure, such as West US or Southeast Asia, but not for [Microsoft Azure operated by 21Vianet](/azure/chin). <br><br>**Note**: Not supported by the Azure Logic Apps SLA. |
+
+For this task, you can use the Azure portal, [Azure CLI](/cli/azure/resource#az-resource-create), or [Azure PowerShell](/powershell/module/Az.LogicApp/New-AzIntegrationAccount).
+
+> [!IMPORTANT]
+>
+> For you to successfully link and use your integration account with your logic app,
+> make sure that both resources exist in the *same* Azure subscription and Azure region.
+
+### [Portal](#tab/azure-portal)
+
+1. In the [Azure portal](https://portal.azure.com) search box, enter **integration accounts**, and select **Integration accounts**.
+
+1. Under **Integration accounts**, select **Create**.
+
+1. On the **Create an integration account** pane, provide the following information about your integration account:
+
+ | Property | Required | Value | Description |
+ |-|-|-|-|
+ | **Subscription** | Yes | <*Azure-subscription-name*> | The name for your Azure subscription |
+ | **Resource group** | Yes | <*Azure-resource-group-name*> | The name for the [Azure resource group](../../azure-resource-manager/management/overview.md) to use for organizing related resources. For this example, create a new resource group named **FabrikamIntegration-RG**. |
+ | **Integration account name** | Yes | <*integration-account-name*> | Your integration account's name, which can contain only letters, numbers, hyphens (`-`), underscores (`_`), parentheses (`()`), and periods (`.`). This example uses **Fabrikam-Integration**. |
+ | **Pricing Tier** | Yes | <*pricing-level*> | The pricing tier for the integration account, which you can change later. For this example, select **Free**. For more information, review the following documentation: <br><br>- [Logic Apps pricing model](../logic-apps-pricing.md#integration-accounts) <br>- [Logic Apps limits and configuration](../logic-apps-limits-and-config.md#integration-account-limits) <br>- [Logic Apps pricing](https://azure.microsoft.com/pricing/details/logic-apps/) |
+ | **Storage account** | Available only for the Premium (preview) integration account | None | The name for an existing [Azure storage account](../../storage/common/storage-account-create.md). For the example in this guide, this option doesn't apply. |
+ | **Region** | Yes | <*Azure-region*> | The Azure region where to store your integration account metadata. Either select the same location as your logic app resource, or create your logic apps in the same location as your integration account. For this example, use **West US**. <br><br>To use your integration account with an [integration service environment (ISE)](../connect-virtual-network-vnet-isolated-environment-overview.md), select **Associate with integration service environment**, and then select your ISE as the location. To create an integration account from inside an ISE, see [Create integration accounts from inside an ISE](../add-artifacts-integration-service-environment-ise.md#create-integration-account-environment). <br><br>**Note**: The ISE resource will retire on August 31, 2024, due to its dependency on Azure Cloud Services (classic), which retires at the same time. Currently in preview, the capability is available for you to [export a Standard integration account for an ISE to a Premium integration account](../ise-manage-integration-service-environment.md#export-integration-account). |
+ | **Enable log analytics** | No | Unselected | For this example, don't select this option. |
+
+1. When you're done, select **Review + create**.
+
+ After deployment completes, Azure opens your integration account.
+
+1. If you created a Premium integration account, make sure to [set up access to the associated Azure storage account](#set-up-access-storage-account).
+
+### [Azure CLI](#tab/azure-cli)
++
+1. To add the [az logic integration-account](/cli/azure/logic/integration-account) extension, use the [az extension add](/cli/azure/extension#az-extension-add) command:
+
+ ```azurecli
+ az extension add --name logic
+ ```
+
+1. To create a resource group or use an existing resource group, run the [az group create](/cli/azure/group#az-group-create) command:
+
+ ```azurecli
+ az group create --name myresourcegroup --location westus
+ ```
+
+ To list the integration accounts for a resource group, use the [az logic integration-account list](/cli/azure/logic/integration-account#az-logic-integration-account-list) command:
+
+ ```azurecli
+ az logic integration-account list --resource-group myresourcegroup
+ ```
+
+1. To create an integration account, run the [az logic integration-account create](/cli/azure/logic/integration-account#az-logic-integration-account-create) command:
+
+ ```azurecli
+ az logic integration-account create --resource-group myresourcegroup \
+ --name integration_account_01 --location westus --sku name=Standard
+ ```
+
+ Your integration account name can contain only letters, numbers, hyphens (-), underscores (_), parentheses (()), and periods (.).
+
+ To view a specific integration account, use the [az logic integration-account show](/cli/azure/logic/integration-account#az-logic-integration-account-show) command:
+
+ ```azurecli
+ az logic integration-account show --name integration_account_01 --resource-group myresourcegroup
+ ```
+
+ You can change your SKU, or pricing tier, by using the [az logic integration-account update](/cli/azure/logic/integration-account#az-logic-integration-account-update) command:
+
+ ```azurecli
+ az logic integration-account update --sku name=Basic --name integration_account_01 \
+ --resource-group myresourcegroup
+ ```
+
+ For more information about pricing, see these resources:
+
+ * [Azure Logic Apps pricing model](../logic-apps-pricing.md#integration-accounts)
+ * [Azure Logic Apps limits and configuration](../logic-apps-limits-and-config.md#integration-account-limits)
+ * [Azure Logic Apps pricing](https://azure.microsoft.com/pricing/details/logic-apps/)
+
+To import an integration account by using a JSON file, use the [az logic integration-account import](/cli/azure/logic/integration-account#az-logic-integration-account-import) command:
+
+```azurecli
+az logic integration-account import --name integration_account_01 \
+ --resource-group myresourcegroup --input-path integration.json
+```
+++
+<a name="set-up-access-storage-account"></a>
+
+## Set up storage access for Premium integration account
+
+To read artifacts and write any state information, your Premium integration account needs access to the selected and associated Azure storage account. Your integration account uses its automatically created and enabled system-assigned managed identity to authenticate access.
+
+1. In the [Azure portal](https://portal.azure.com), open your Premium integration account.
+
+1. On the integration account menu, under **Settings**, select **Identity**.
+
+1. On the **System assigned** tab, which shows the enabled system-assigned managed identity, under **Permissions**, select **Azure role assignments**.
+
+1. On the **Azure role assignments** toolbar, select **Add role assignment (preview)**, provide the following information, select **Save**, and then repeat for each required role:
+
+ | Parameter | Value | Description |
+ |--|-|-|
+ | **Scope** | **Storage** | For more information, see [Understand scope for Azure RBAC](../../role-based-access-control/scope-overview.md). |
+ | **Subscription** | <*Azure-subscription*> | The Azure subscription for the resource to access. |
+ | **Resource** | <*Azure-storage-account-name*> | The name for the Azure storage account to access. <br><br>**Note**: If you get an error that you don't have permissions to add role assignments at this scope, you need to get those permissions. For more information, see [Azure AD built-in roles](../../active-directory/roles/permissions-reference.md). |
+ | **Role** | - **Storage Account Contributor** <br><br>- **Storage Blob Data Contributor** <br><br>- **Storage Table Data Contributor** | The roles that your Premium integration account requires to access your storage account. |
+
+ For more information, see [Assign Azure role to system-assigned managed identity](../../role-based-access-control/role-assignments-portal-managed-identity.md). If you prefer scripting these role assignments, see the Azure CLI sketch after these steps.
+
+1. Next, link your integration account to your logic app resource.
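+
+The following Azure CLI sketch shows one scripted way to make these role assignments. The resource names are placeholders, and the principal-ID lookup is an assumption to verify for your environment:
+
+```azurecli
+# Get the integration account's system-assigned identity (principal) ID.
+principalId=$(az resource show --resource-group myresourcegroup \
+    --resource-type Microsoft.Logic/integrationAccounts \
+    --name integration_account_01 --query identity.principalId --output tsv)
+
+# Get the storage account's resource ID to use as the assignment scope.
+storageId=$(az storage account show --resource-group myresourcegroup \
+    --name mystorageaccount --query id --output tsv)
+
+# Assign each role that the Premium integration account requires on the storage account.
+for role in "Storage Account Contributor" "Storage Blob Data Contributor" "Storage Table Data Contributor"; do
+    az role assignment create --assignee "$principalId" --role "$role" --scope "$storageId"
+done
+```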
+
+<a name="link-account"></a>
+
+## Link to logic app
+
+For you to successfully link your integration account to your logic app resource, make sure that both resources use the *same* Azure subscription and Azure region.
+
+### [Consumption](#tab/consumption)
+
+This section describes how to complete this task using the Azure portal. If you use Visual Studio and your logic app is in an [Azure Resource Group project](../../azure-resource-manager/templates/create-visual-studio-deployment-project.md), you can [link your logic app to an integration account by using Visual Studio](../manage-logic-apps-with-visual-studio.md#link-integration-account).
+
+1. In the [Azure portal](https://portal.azure.com), open your logic app resource.
+
+1. On your logic app's navigation menu, under **Settings**, select **Workflow settings**. Under **Integration account**, open the **Select an Integration account** list, and select the integration account you want.
+
+ ![Screenshot shows Azure portal, integration account menu with open page named Workflow settings, and opened list named Select an Integration account.](./media/create-integration-account/select-integration-account.png)
+
+1. To finish linking, select **Save**.
+
+ ![Screenshot shows page named Workflow settings, and selected Save option.](./media/create-integration-account/save-link.png)
+
+ After your integration account is successfully linked, Azure shows a confirmation message.
+
+ ![Screenshot shows Azure confirmation message.](./media/create-integration-account/link-confirmation.png)
+
+Now your logic app workflow can use the artifacts in your integration account plus the B2B connectors, such as XML validation and flat file encoding or decoding.
+
+### [Standard](#tab/standard)
+
+#### Find your integration account's callback URL
+
+Before you can link your integration account to a Standard logic app resource, you need to have your integration account's **callback URL**.
+
+1. In the [Azure portal](https://portal.azure.com) search box, enter **integration accounts**, and then select **Integration accounts**.
+
+1. From the **Integration accounts** list, select your integration account.
+
+1. On your selected integration account's navigation menu, under **Settings**, select **Callback URL**.
+
+1. Find the **Generated Callback URL** property value, copy the value, and save the URL to use later for linking.
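+
+If you prefer scripting, you might retrieve the callback URL through the Azure REST API instead. The following `az rest` sketch is a hedged example: the api-version, request body, and placeholder values are assumptions to verify against the Microsoft.Logic REST reference:
+
+```azurecli
+# Call the listCallbackUrl operation on the integration account.
+az rest --method post \
+    --url "https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Logic/integrationAccounts/<integration-account-name>/listCallbackUrl?api-version=2019-05-01" \
+    --body '{"keyType": "Primary"}'
+```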
+
+#### Link integration account to Standard logic app
+
+##### Azure portal
+
+1. In the [Azure portal](https://portal.azure.com), open your Standard logic app resource.
+
+1. On your logic app's navigation menu, under **Settings**, select **Environment variables**.
+
+1. On the **Environment variables** page, check whether the app setting named **WORKFLOW_INTEGRATION_ACCOUNT_CALLBACK_URL** exists.
+
+1. If the app setting doesn't exist, at the end of the settings list, add a new app setting by entering the following:
+
+ | Property | Value |
+ |-|-|
+ | **Name** | **WORKFLOW_INTEGRATION_ACCOUNT_CALLBACK_URL** |
+ | **Value** | <*integration-account-callback-URL*> |
+
+1. When you're done, select **Apply**.
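+
+As an alternative to the portal, you might add this app setting from the command line. Because a Standard logic app runs on the Azure Functions platform, the following `az functionapp` sketch should apply, but treat the command and placeholder names as assumptions to verify for your environment:
+
+```azurecli
+# Add or update the callback URL app setting on the Standard logic app resource.
+az functionapp config appsettings set --resource-group myresourcegroup \
+    --name mystandardlogicapp \
+    --settings "WORKFLOW_INTEGRATION_ACCOUNT_CALLBACK_URL=<integration-account-callback-URL>"
+```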
+
+##### Visual Studio Code
+
+1. From your Standard logic app project in Visual Studio Code, open the **local.settings.json** file.
+
+1. In the `Values` object, add an app setting that has the following properties and values, including the previously saved callback URL:
+
+ | Property | Value |
+ |-|-|
+ | **Name** | **WORKFLOW_INTEGRATION_ACCOUNT_CALLBACK_URL** |
+ | **Value** | <*integration-account-callback-URL*> |
+
+ This example shows how a sample app setting might appear:
+
+ ```json
+ {
+ "IsEncrypted": false,
+ "Values": {
+ "AzureWebJobStorage": "UseDevelopmentStorage=true",
+ "FUNCTIONS_WORKER_RUNTIME": "node",
+ "WORKFLOW_INTEGRATION_ACCOUNT_CALLBACK_URL": "https://prod-03.westus.logic.azure.com:443/integrationAccounts/...."
+ }
+ }
+ ```
+
+1. When you're done, save your changes.
+++
+<a name="change-pricing-tier"></a>
+
+## Change pricing tier
+
+To increase the [limits](../logic-apps-limits-and-config.md#integration-account-limits) for an integration account, you can [upgrade to a higher pricing tier](#upgrade-pricing-tier), if available. For example, you can upgrade from the Free tier to the Basic tier, Standard tier, or Premium tier. You can also [downgrade to a lower tier](#downgrade-pricing-tier), if available. For more pricing information, review the following documentation:
+
+* [Azure Logic Apps pricing](https://azure.microsoft.com/pricing/details/logic-apps/)
+* [Azure Logic Apps pricing model](../logic-apps-pricing.md#integration-accounts)
+
+<a name="upgrade-pricing-tier"></a>
+
+### Upgrade pricing tier
+
+To make this change, you can use either the Azure portal or the Azure CLI.
+
+#### [Portal](#tab/azure-portal)
+
+1. In the [Azure portal](https://portal.azure.com) search box, enter **integration accounts**, and select **Integration accounts**.
+
+ Azure shows all the integration accounts in your Azure subscriptions.
+
+1. Under **Integration accounts**, select the integration account that you want to move. On your integration account resource menu, select **Overview**.
+
+ ![Screenshot shows Azure portal with integration account menu and selected Overview option.](./media/create-integration-account/integration-account-overview.png)
+
+1. On the **Overview** page, select **Upgrade Pricing Tier**, which lists any available higher tiers. When you select a tier, the change immediately takes effect.
+
+ ![Screenshot shows integration account, Overview page, and selected option to Upgrade Pricing Tier.](media/create-integration-account/upgrade-pricing-tier.png)
+
+<a name="upgrade-tier-azure-cli"></a>
+
+#### [Azure CLI](#tab/azure-cli)
+
+1. If you haven't done so already, [install the Azure CLI prerequisites](/cli/azure/get-started-with-azure-cli).
+
+1. In the Azure portal, open the [Azure Cloud Shell](../../cloud-shell/overview.md) environment.
+
+ ![Screenshot shows Azure portal toolbar with selected Cloud Shell option.](./media/create-integration-account/open-azure-cloud-shell-window.png)
+
+1. At the command prompt, enter the [**az resource** command](/cli/azure/resource#az-resource-update), and set `skuName` to the higher tier that you want.
+
+ ```azurecli
+ az resource update --resource-group {ResourceGroupName} --resource-type Microsoft.Logic/integrationAccounts --name {IntegrationAccountName} --subscription {AzureSubscriptionID} --set sku.name={SkuName}
+ ```
+
+ For example, if you have the Basic tier, you can set `skuName` to `Standard`:
+
+ ```azurecli
+ az resource update --resource-group FabrikamIntegration-RG --resource-type Microsoft.Logic/integrationAccounts --name Fabrikam-Integration --subscription XXXXXXXXXXXXXXXXX --set sku.name=Standard
+ ```
+++
+<a name="downgrade-pricing-tier"></a>
+
+### Downgrade pricing tier
+
+To make this change, use the [Azure CLI](/cli/azure/get-started-with-azure-cli).
+
+1. If you haven't done so already, [install the Azure CLI prerequisites](/cli/azure/get-started-with-azure-cli).
+
+1. In the Azure portal, open the [Azure Cloud Shell](../../cloud-shell/overview.md) environment.
+
+ ![Screenshot shows Azure portal toolbar with selected Cloud Shell.](./media/create-integration-account/open-azure-cloud-shell-window.png)
+
+1. At the command prompt, enter the [**az resource** command](/cli/azure/resource#az-resource-update) and set `skuName` to the lower tier that you want.
+
+ ```azurecli
+ az resource update --resource-group <resourceGroupName> --resource-type Microsoft.Logic/integrationAccounts --name <integrationAccountName> --subscription <AzureSubscriptionID> --set sku.name=<skuName>
+ ```
+
+ For example, if you have the Standard tier, you can set `skuName` to `Basic`:
+
+ ```azurecli
+ az resource update --resource-group FabrikamIntegration-RG --resource-type Microsoft.Logic/integrationAccounts --name Fabrikam-Integration --subscription XXXXXXXXXXXXXXXXX --set sku.name=Basic
+ ```
+
+## Unlink from logic app
+
+### [Consumption](#tab/consumption)
+
+If you want to link your logic app to another integration account, or no longer use an integration account with your logic app, delete the link by using Azure Resource Explorer.
+
+1. Open your browser window, and go to [Azure Resource Explorer (https://resources.azure.com)](https://resources.azure.com). Sign in with the same Azure account credentials that you use for the Azure portal.
+
+ ![Screenshot shows a web browser with Azure Resource Explorer.](./media/create-integration-account/resource-explorer.png)
+
+1. In the search box, enter your logic app's name to find and open your logic app.
+
+ ![Screenshot shows explorer search box, which contains your logic app name.](./media/create-integration-account/resource-explorer-find-logic-app.png)
+
+1. On the explorer title bar, select **Read/Write**.
+
+ ![Screenshot shows title bar with selected option for Read/Write.](./media/create-integration-account/resource-explorer-select-read-write.png)
+
+1. On the **Data** tab, select **Edit**.
+
+ ![Screenshot shows Data tab with selected option for Edit.](./media/create-integration-account/resource-explorer-select-edit.png)
+
+1. In the editor, find the **integrationAccount** object, which has the following format, and delete the object:
+
+ ```json
+ {
+ // <other-attributes>
+ "integrationAccount": {
+ "name": "<integration-account-name>",
+ "id": "<integration-account-resource-ID>",
+ "type": "Microsoft.Logic/integrationAccounts"
+ },
+ }
+ ```
+
+ For example:
+
+ ![Screenshot shows how to find the object named integrationAccount.](./media/create-integration-account/resource-explorer-delete-integration-account.png)
+
+1. On the **Data** tab, select **Put** to save your changes.
+
+ ![Screenshot shows Data tab with Put selected.](./media/create-integration-account/resource-explorer-save-changes.png)
+
+1. In the Azure portal, open your logic app. On your logic app menu, under **Workflow settings**, confirm that the **Integration account** property now appears empty.
+
+ ![Screenshot shows Azure portal, logic app menu, and selected Workflow settings.](./media/create-integration-account/unlinked-account.png)
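+
+As a scripted alternative to Resource Explorer, you might remove the link by using the Azure CLI. In the following sketch, the property path and resource names are assumptions to verify for your environment:
+
+```azurecli
+# Remove the integrationAccount property from the Consumption workflow resource.
+az resource update --resource-group myresourcegroup \
+    --resource-type Microsoft.Logic/workflows --name mylogicapp \
+    --remove properties.integrationAccount
+```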
+
+### [Standard](#tab/standard)
+
+#### Azure portal
+
+1. In the [Azure portal](https://portal.azure.com), open your Standard logic app resource.
+
+1. On your logic app menu, under **Settings**, select **Environment variables**.
+
+1. On the **Environment variables** page, find the app setting named **WORKFLOW_INTEGRATION_ACCOUNT_CALLBACK_URL**.
+
+1. Clear the app setting name and its value.
+
+1. When you're done, select **Apply**.
+
+#### Visual Studio Code
+
+1. From your Standard logic app project in Visual Studio Code, open the **local.settings.json** file.
+
+1. In the `Values` object, find and delete the app setting that has the following properties and values:
+
+ | Property | Value |
+ |-|-|
+ | **Name** | **WORKFLOW_INTEGRATION_ACCOUNT_CALLBACK_URL** |
+ | **Value** | <*integration-account-callback-URL*> |
+
+1. When you're done, save your changes.
+++
+## Move integration account
+
+You can move your integration account to another Azure resource group or Azure subscription. When you move resources, Azure creates new resource IDs, so make sure that you use the new IDs instead and update any scripts or tools associated with the moved resources. If you want to change the subscription, you must also specify an existing or new resource group.
+
+For this task, you can use either the Azure portal by following the steps in this section or the [Azure CLI](/cli/azure/resource#az-resource-move).
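+
+For example, the following Azure CLI sketch moves an integration account to another resource group in the same subscription. The resource and group names are placeholders:
+
+```azurecli
+# Get the integration account's full resource ID.
+accountId=$(az resource show --resource-group myresourcegroup \
+    --resource-type Microsoft.Logic/integrationAccounts \
+    --name integration_account_01 --query id --output tsv)
+
+# Move the integration account to the destination resource group.
+az resource move --ids "$accountId" --destination-group mydestinationgroup
+```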
+
+1. In the [Azure portal](https://portal.azure.com) search box, enter **integration accounts**, and select **Integration accounts**.
+
+ Azure shows all the integration accounts in your Azure subscriptions.
+
+1. Under **Integration accounts**, select the integration account that you want to move. On your integration account menu, select **Overview**.
+
+1. On the **Overview** page, next to either **Resource group** or **Subscription name**, select **change**.
+
+ ![Screenshot shows Azure portal, integration account, Overview page, and selected change option, which is next to Resource group or Subscription name.](./media/create-integration-account/change-resource-group-subscription.png)
+
+1. Select any related resources that you also want to move.
+
+1. Based on your selection, follow these steps to change the resource group or subscription:
+
+ * Resource group: From the **Resource group** list, select the destination resource group. Or, to create a different resource group, select **Create a new resource group**.
+
+ * Subscription: From the **Subscription** list, select the destination subscription. From the **Resource group** list, select the destination resource group. Or, to create a different resource group, select **Create a new resource group**.
+
+1. To acknowledge your understanding that any scripts or tools associated with the moved resources won't work until you update them with the new resource IDs, select the confirmation box, and then select **OK**.
+
+1. After you finish, make sure that you update all scripts with the new resource IDs for your moved resources.
+
+## Delete integration account
+
+For this task, you can use either the Azure portal by following the steps in this section, [Azure CLI](/cli/azure/resource#az-resource-delete), or [Azure PowerShell](/powershell/module/az.logicapp/remove-azintegrationaccount).
+
+### [Portal](#tab/azure-portal)
+
+1. In the [Azure portal](https://portal.azure.com) search box, enter **integration accounts**, and select **Integration accounts**.
+
+ Azure shows all the integration accounts in your Azure subscriptions.
+
+1. Under **Integration accounts**, select the integration account that you want to delete. On your integration account menu, select **Overview**.
+
+ ![Screenshot shows Azure portal with integration accounts list and integration account menu with Overview selected.](./media/create-integration-account/integration-account-overview.png)
+
+1. On the **Overview** page, select **Delete**.
+
+ ![Screenshot shows Overview page with Delete selected.](./media/create-integration-account/delete-integration-account.png)
+
+1. To confirm that you want to delete your integration account, select **Yes**.
+
+ ![Screenshot shows confirmation box with Yes selected.](./media/create-integration-account/confirm-delete.png)
+
+<a name="delete-account-azure-cli"></a>
+
+### [Azure CLI](#tab/azure-cli)
+
+You can delete an integration account by using the [az logic integration-account delete](/cli/azure/logic/integration-account#az-logic-integration-account-delete) command:
+
+```azurecli
+az logic integration-account delete --name integration_account_01 --resource-group myresourcegroup
+```
+++
+## Next steps
+
+* [Create trading partners in your integration account](../logic-apps-enterprise-integration-partners.md)
+* [Create agreements between partners in your integration account](../logic-apps-enterprise-integration-agreements.md)
logic-apps Ise Manage Integration Service Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/ise-manage-integration-service-environment.md
ms.suite: integration Previously updated : 11/04/2022 Last updated : 08/29/2023 # Manage your integration service environment (ISE) in Azure Logic Apps
You can view and manage the logic apps that are in your ISE.
> [!NOTE] > If you delete and recreate a child logic app, you must resave the parent logic app. The recreated child app will have different metadata.
-> If you don't resave the parent logic app after recreating its child, your calls to the child logic app will fail with an error of "unauthorized." This behavior applies to parent-child logic apps, for example, those that use artifacts in integration accounts or call Azure functions.
+> If you don't resave the parent logic app after recreating its child, your calls to the child logic app will fail with an error of "unauthorized".
+> This behavior applies to parent-child logic apps, for example, those that use artifacts in integration accounts or call Azure functions.
<a name="find-api-connections"></a>
You can view and manage the custom connectors that you deployed to your ISE.
1. To remove integration accounts from your ISE when no longer needed, select those integration accounts, and then select **Delete**.
+<a name="export-integration-account"></a>
+
+## Export integration account (preview)
+
+For a Standard integration account created from inside an ISE, you can export that integration account to an existing Premium integration account. The export process has two steps: export the artifacts, and then export the agreement states. Artifacts include partners, agreements, certificates, schemas, and maps. However, the export process currently doesn't support assemblies and RosettaNet PIPs.
+
+Your integration account also stores the runtime states for specific B2B actions and EDI standards, such as the MIC number for AS2 actions and the control numbers for X12 actions. If you configured your agreements to update these states every time that a transaction is processed and to use these states for message reconciliation and duplicate detection, make sure that you also export these states. You can export either all agreement states or one agreement state at a time.
+
+> [!IMPORTANT]
+>
+> Make sure to choose a time window when your source integration account doesn't have any activity in your agreements to avoid state inconsistencies.
+
+### Prerequisites
+
+If you don't have a Premium integration account, [create a Premium integration account](./enterprise-integration/create-integration-account.md).
+
+### Export artifacts
+
+This process copies artifacts from the source to the destination.
+
+1. In the [Azure portal](https://portal.azure.com), open your Standard integration account.
+
+1. On the integration account menu, under **Settings**, select **Export**.
+
+ > [!NOTE]
+ >
+ > If the **Export** option doesn't appear, make sure that you selected a Standard integration account that was created from inside an ISE.
+
+1. On the **Export** page toolbar, select **Export Artifacts**.
+
+1. Open the **Target integration account** list, which contains all the Premium accounts in your Azure subscription, select the Premium integration account that you want, and then select **OK**.
+
+ The **Export** page now shows the export status for your artifacts.
+
+1. To confirm the exported artifacts, open your destination Premium integration account.
+
+### Export agreement state (optional)
+
+1. On the **Export** page toolbar, select **Export Agreement State**.
+
+1. On the **Export Agreement State** pane, open the **Target integration account** list, and select the Premium integration account that you want.
+
+1. To export all agreement states, don't select any agreement from the **Agreement** list. To export an individual agreement state, select an agreement from the **Agreement** list.
+
+1. When you're done, select **OK**.
+
+ The **Export** page now shows the export status for your agreement states.
+ <a name="add-capacity"></a> ## Add ISE capacity
If you change your DNS server or DNS server settings, you have to restart your I
1. On the ISE menu, select **Overview**. On the **Overview** toolbar, select **Restart**.
- ![Restart integration service environment](./media/connect-virtual-network-vnet-isolated-environment/restart-integration-service-environment.png)
- <a name="delete-ise"></a> ## Delete ISE
logic-apps Logic Apps Add Run Inline Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-add-run-inline-code.md
ms.suite: integration
Last updated 08/07/2023-+ # Run code snippets in workflows with Inline Code operations in Azure Logic Apps
logic-apps Logic Apps Control Flow Conditional Statement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-control-flow-conditional-statement.md
Title: Add conditions to workflows
-description: Create conditions that control actions in workflows in Azure Logic Apps.
+description: Create conditions that control workflow actions in Azure Logic Apps.
ms.suite: integration
Last updated 08/08/2023
[!INCLUDE [logic-apps-sku-consumption-standard](../../includes/logic-apps-sku-consumption-standard.md)]
-To specify a condition that returns either true or false and have your workflow run either one action path or another based on the result, add the **Control** action named **Condition** to your workflow. You can nest conditions inside each other.
+When you want to set up a condition that returns true or false and have the result determine whether your workflow runs one path of actions or another, add the **Control** action named **Condition** to your workflow. You can also nest conditions inside each other.
For example, suppose you have a workflow that sends too many emails when new items appear on a website's RSS feed. You can add the **Condition** action to send email only when the new item includes a specific word. > [!NOTE] >
-> To specify more than two paths from which your workflow can choose or if the condition criteria isn't restricted
-> to only true or false, use a [*switch statement* instead](logic-apps-control-flow-switch-statement.md).
+> If you want to specify more than two paths from which your workflow can choose
+> or condition criteria that's not restricted to only true or false, use a
+> [*switch action* instead](logic-apps-control-flow-switch-statement.md).
-This how-to guide shows how to add a condition to your workflow and use the result to help your workflow choose from two action paths.
+This guide shows how to add a condition to your workflow and use the result to help your workflow choose between two action paths.
## Prerequisites
This how-to guide shows how to add a condition to your workflow and use the resu
### [Consumption](#tab/consumption)
-1. In the [Azure portal](https://portal.azure.com), open your Consumption logic app workflow in the designer.
+1. In the [Azure portal](https://portal.azure.com), open your logic app workflow in the designer.
1. [Follow these general steps to add the **Condition** action to your workflow](create-workflow-with-trigger-or-action.md?tabs=consumption#add-action).
-1. In the **Condition** action, follow these steps build your condition:
+1. In the **Condition** action, follow these steps to build your condition:
- 1. In the left **Choose a value** box, specify the first value or field that you want to compare.
+ 1. In the left-side box named **Choose a value**, enter the first value or field that you want to compare.
- When you select inside the left box, the dynamic content list opens so that you can select outputs from previous steps in your workflow.
+ When you select inside the **Choose a value** box, the dynamic content list opens automatically. From this list, you can select outputs from previous steps in your workflow.
This example selects the RSS trigger output named **Feed summary**. ![Screenshot shows Azure portal, Consumption workflow designer. RSS trigger, and Condition action with criteria construction.](./media/logic-apps-control-flow-conditional-statement/edit-condition-consumption.png)
- 1. From the middle list, select the operation to perform.
+ 1. Open the middle list, and select the operation to perform.
This example selects **contains**.
- 1. In the right **Choose a value** box, specify the value or field that you want to compare with the first.
+ 1. In the right-side box named **Choose a value**, enter the value or field that you want to compare with the first.
This example specifies the following string: **Microsoft**
- The following example shows the complete condition:
+ The complete condition now looks like the following example:
![Screenshot shows the Consumption workflow and the complete condition criteria.](./media/logic-apps-control-flow-conditional-statement/complete-condition-consumption.png)
- - To add another row to your condition, open the **Add** menu, and select **Add row**.
+ - To add another row to your condition, from the **Add** menu, select **Add row**.
- - To add a group with subconditions, open the **Add** menu, and select **Add group**.
+ - To add a group with subconditions, from the **Add** menu, select **Add group**.
- To group existing rows, select the checkboxes for those rows, select the ellipses (...) button for any row, and then select **Make group**.
-1. In the **True** and **False** action paths, add the actions to run based on whether the condition is true or false, respectively, for example:
+1. In the **True** and **False** action paths, add the actions that you want to run, based on whether the condition is true or false, respectively. For example:
![Screenshot shows the Consumption workflow and the condition with true and false paths.](./media/logic-apps-control-flow-conditional-statement/condition-true-false-path-consumption.png)
This how-to guide shows how to add a condition to your workflow and use the resu
### [Standard](#tab/standard)
-1. In the [Azure portal](https://portal.azure.com), open your Standard logic app workflow in the designer.
+1. In the [Azure portal](https://portal.azure.com), open your logic app workflow in the designer.
1. [Follow these general steps to add the **Condition** action to your workflow](create-workflow-with-trigger-or-action.md?tabs=standard#add-action).
-1. On the designer, select the **Condition** action to open the information pane and follow these steps build your condition:
+1. On the designer, select the **Condition** action to open the information pane. Follow these steps to build your condition:
+
+ 1. In the left-side box named **Choose a value**, enter the first value or field that you want to compare.
+
+ After you select inside the **Choose a value** box, the options to open the dynamic content list (lightning icon) or expression editor (formula icon) appear.
+
+ 1. Select the lightning icon to open the dynamic content list.
- 1. In the left **Choose a value** box, specify the first value or field that you want to compare. When you select inside the left box, select the lightning button that appears to open the dynamic content list so that you can select outputs from previous steps in your workflow.
+ From this list, you can select outputs from previous steps in your workflow.
![Screenshot shows Azure portal, Standard workflow designer, RSS trigger, and Condition action with information pane open, and dynamic content button selected.](./media/logic-apps-control-flow-conditional-statement/open-dynamic-content-standard.png)
This how-to guide shows how to add a condition to your workflow and use the resu
This example selects **contains**.
- 1. In the right **Choose a value** box, specify the value or field that you want to compare with the first.
+ 1. In the right-side box named **Choose a value**, enter the value or field that you want to compare with the first.
This example specifies the following string: **Microsoft**
This how-to guide shows how to add a condition to your workflow and use the resu
![Screenshot shows the Standard workflow and the complete condition criteria.](./media/logic-apps-control-flow-conditional-statement/complete-condition-standard.png)
- - To add another row to your condition, open the **New item** menu, and select **Add Row**.
+ - To add another row to your condition, from the **New item** menu, select **Add Row**.
- - To add a group with subconditions, open the **New item** menu, and select **Add Group**.
+ - To add a group with subconditions, from the **New item** menu, select **Add Group**.
- To group existing rows, select the checkboxes for those rows, select the ellipses (...) button for any row, and then select **Make Group**.
-1. In the **True** and **False** action paths, add the actions to run based on whether the condition is true or false, respectively, for example:
+1. In the **True** and **False** action paths, add the actions to run, based on whether the condition is true or false, respectively. For example:
![Screenshot shows the Standard workflow and the condition with true and false paths.](./media/logic-apps-control-flow-conditional-statement/condition-true-false-path-standard.png)
This workflow now sends mail only when the new items in the RSS feed meet your c
## JSON definition
-The following shows the high-level code definition behind the **Condition** action, but for the full definition, see [If action - Schema reference guide for trigger and action types in Azure Logic Apps](logic-apps-workflow-actions-triggers.md#if-action).
+The following code shows the high-level JSON definition for the **Condition** action. For the full definition, see [If action - Schema reference guide for trigger and action types in Azure Logic Apps](logic-apps-workflow-actions-triggers.md#if-action).
``` json "actions": {
The following shows the high-level code definition behind the **Condition** acti
* [Run steps based on different values (switch actions)](logic-apps-control-flow-switch-statement.md) * [Run and repeat steps (loops)](logic-apps-control-flow-loops.md) * [Run or merge parallel steps (branches)](logic-apps-control-flow-branches.md)
-* [Run steps based on grouped action status (scopes)](logic-apps-control-flow-run-steps-group-scopes.md)
+* [Run steps based on grouped action status (scopes)](logic-apps-control-flow-run-steps-group-scopes.md)
logic-apps Logic Apps Enterprise Integration As2 Mdn Acknowledgment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-enterprise-integration-as2-mdn-acknowledgment.md
Title: AS2 MDN acknowledgments
description: Learn about Message Disposition Notification (MDN) acknowledgments for AS2 messages in Azure Logic Apps. ms.suite: integration-- Previously updated : 08/23/2022 Last updated : 08/15/2023 # MDN acknowledgments for AS2 messages in Azure Logic Apps
logic-apps Logic Apps Enterprise Integration As2 Message Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-enterprise-integration-as2-message-settings.md
Previously updated : 08/23/2022 Last updated : 08/15/2023 # Reference for AS2 message settings in agreements for Azure Logic Apps
Last updated 08/23/2022
This reference describes the properties that you can set in an AS2 agreement for specifying how to handle messages between [trading partners](logic-apps-enterprise-integration-partners.md). Set up these properties based on your agreement with the partner that exchanges messages with you.
-<a name="AS2-incoming-messages"></a>
+<a name="as2-inbound-messages"></a>
## AS2 Receive settings
-![Select "Receive Settings"](./media/logic-apps-enterprise-integration-as2-message-settings/receive-settings.png)
+![Screenshot shows Azure portal and AS2 agreement settings for inbound messages.](./media/logic-apps-enterprise-integration-as2-message-settings/receive-settings.png)
| Property | Required | Description | |-|-|-| | **Override message properties** | No | Overrides the properties on incoming messages with your property settings. |
-| **Message should be signed** | No | Specifies whether all incoming messages must be digitally signed. If you require signing, from the **Certificate** list, select an existing guest partner public certificate for validating the signature on the messages. If you don't have a certificate, learn more about [adding certificates](../logic-apps/logic-apps-enterprise-integration-certificates.md). |
-| **Message should be encrypted** | No | Specifies whether all incoming messages must be encrypted. Non-encrypted messages are rejected. If you require encryption, from the **Certificate** list, select an existing host partner private certificate for decrypting incoming messages. If you don't have a certificate, learn more about [adding certificates](../logic-apps/logic-apps-enterprise-integration-certificates.md). |
+| **Message should be signed** | No | Specifies whether all incoming messages must be digitally signed. If you require signing, from the **Certificate** list, select an existing guest partner public certificate for validating the signature on the messages. If you don't have a certificate, learn more about [adding certificates](logic-apps-enterprise-integration-certificates.md). |
+| **Message should be encrypted** | No | Specifies whether all incoming messages must be encrypted. Non-encrypted messages are rejected. If you require encryption, from the **Certificate** list, select an existing host partner private certificate for decrypting incoming messages. If you don't have a certificate, learn more about [adding certificates](logic-apps-enterprise-integration-certificates.md). |
| **Message should be compressed** | No | Specifies whether all incoming messages must be compressed. Non-compressed messages are rejected. | | **Disallow Message ID duplicates** | No | Specifies whether to allow messages with duplicate IDs. If you disallow duplicate IDs, select the number of days between checks. You can also choose whether to suspend duplicates. | | **MDN Text** | No | Specifies the default message disposition notification (MDN) that you want sent to the message sender. |
-| **Send MDN** | No | Specifies whether to send synchronous MDNs for received messages. |
+| **Send MDN** | No | Specifies whether to send synchronous MDNs for received messages. |
| **Send signed MDN** | No | Specifies whether to send signed MDNs for received messages. If you require signing, from the **MIC Algorithm** list, select the algorithm to use for signing messages. | | **Send asynchronous MDN** | No | Specifies whether to send MDNs asynchronously. If you select asynchronous MDNs, in the **URL** box, specify the URL for where to send the MDNs. |
-||||
-<a name="AS2-outgoing-messages"></a>
+<a name="as2-outbound-messages"></a>
## AS2 Send settings
-![Select "Send Settings"](./media/logic-apps-enterprise-integration-as2-message-settings/send-settings.png)
+![Screenshot shows Azure portal and AS2 agreement settings for outbound messages.](./media/logic-apps-enterprise-integration-as2-message-settings/send-settings.png)
| Property | Required | Description | |-|-|-|
-| **Enable message signing** | No | Specifies whether all outgoing messages must be digitally signed. If you require signing, select these values: <p>- From the **Signing Algorithm** list, select the algorithm to use for signing messages. <br>- From the **Certificate** list, select an existing host partner private certificate for signing messages. If you don't have a certificate, learn more about [adding certificates](../logic-apps/logic-apps-enterprise-integration-certificates.md). |
-| **Enable message encryption** | No | Specifies whether all outgoing messages must be encrypted. If you require encryption, select these values: <p>- From the **Encryption Algorithm** list, select the guest partner public certificate algorithm to use for encrypting messages. <br>- From the **Certificate** list, select an existing guest partner public certificate for encrypting outgoing messages. If you don't have a certificate, learn more about [adding certificates](../logic-apps/logic-apps-enterprise-integration-certificates.md). |
+| **Enable message signing** | No | Specifies whether all outgoing messages must be digitally signed. If you require signing, select these values: <br><br>- From the **Signing Algorithm** list, select the algorithm to use for signing messages. <br>- From the **Certificate** list, select an existing host partner private certificate for signing messages. If you don't have a certificate, learn more about [adding certificates](logic-apps-enterprise-integration-certificates.md). |
+| **Enable message encryption** | No | Specifies whether all outgoing messages must be encrypted. If you require encryption, select these values: <br><br>- From the **Encryption Algorithm** list, select the guest partner public certificate algorithm to use for encrypting messages. <br>- From the **Certificate** list, select an existing guest partner public certificate for encrypting outgoing messages. If you don't have a certificate, learn more about [adding certificates](logic-apps-enterprise-integration-certificates.md). |
| **Enable message compression** | No | Specifies whether all outgoing messages must be compressed. | | **Unfold HTTP headers** | No | Puts the HTTP `content-type` header onto a single line. | | **Transmit file name in MIME header** | No | Specifies whether to include the file name in the MIME header. |
This reference describes the properties that you can set in an AS2 agreement for
| **Request asynchronous MDN** | No | Specifies whether to receive MDNs asynchronously. If you select asynchronous MDNs, in the **URL** box, specify the URL for where to send the MDNs. | | **Enable NRR** | No | Specifies whether to require non-repudiation receipt (NRR). This communication attribute provides evidence that the data was received as addressed. | | **SHA2 Algorithm format** | No | Specifies the MIC algorithm format to use for signing in the headers for the outgoing AS2 messages or MDN |
-||||
## Next steps
-[Exchange AS2 messages](../logic-apps/logic-apps-enterprise-integration-as2.md)
+[Exchange AS2 messages](logic-apps-enterprise-integration-as2.md)
logic-apps Logic Apps Enterprise Integration As2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-enterprise-integration-as2.md
Previously updated : 10/20/2022 Last updated : 08/15/2023 # Exchange AS2 messages using workflows in Azure Logic Apps
To send and receive AS2 messages in workflows that you create using Azure Logic
Except for tracking capabilities, the **AS2 (v2)** connector provides the same capabilities as the original **AS2** connector, runs natively with the Azure Logic Apps runtime, and offers significant performance improvements in message size, throughput, and latency. Unlike the original **AS2** connector, the **AS2 (v2)** connector doesn't require that you create a connection to your integration account. Instead, as described in the prerequisites, make sure that you link your integration account to the logic app resource where you plan to use the connector.
-This article shows how to add the AS2 encoding and decoding actions to an existing logic app workflow. The **AS2 (v2)** connector doesn't include any triggers, so you can use any trigger to start your workflow. The examples in this article use the [Request](../connectors/connectors-native-reqres.md) trigger.
+This how-to guide shows how to add the AS2 encoding and decoding actions to an existing logic app workflow. The **AS2 (v2)** connector doesn't include any triggers, so you can use any trigger to start your workflow. The examples in this guide use the [Request trigger](../connectors/connectors-native-reqres.md).
## Connector technical reference
The **AS2 (v2)** connector has no triggers. The following table describes the ac
* An [integration account resource](logic-apps-enterprise-integration-create-integration-account.md) to define and store artifacts for use in enterprise integration and B2B workflows.
- > [!IMPORTANT]
- >
- > To work together, both your integration account and logic app resource must exist in the same Azure subscription and Azure region.
+ * Both your integration account and logic app resource must exist in the same Azure subscription and Azure region.
-* At least two [trading partners](logic-apps-enterprise-integration-partners.md) in your integration account. The definitions for both partners must use the same *business identity* qualifier, which is **AS2Identity** for this scenario.
+ * Defines at least two [trading partners](logic-apps-enterprise-integration-partners.md) that participate in the AS2 operation used in your workflow. The definitions for both partners must use the same *business identity* qualifier, which is **AS2Identity** for this scenario.
-* An [AS2 agreement](logic-apps-enterprise-integration-agreements.md) in your integration account between the trading partners that participate in your workflow. Each agreement requires a host partner and a guest partner. The content in the messages between you and the other partner must match the agreement type.
+ * Defines an [AS2 agreement](logic-apps-enterprise-integration-agreements.md) between the trading partners that participate in your workflow. Each agreement requires a host partner and a guest partner. The content in the messages between you and the other partner must match the agreement type. For information about agreement settings to use when receiving and sending messages, see [AS2 message settings](logic-apps-enterprise-integration-as2-message-settings.md).
* Based on whether you're working on a Consumption or Standard logic app workflow, your logic app resource might require a link to your integration account:
Select the tab for either Consumption or Standard logic app workflows:
1. In the [Azure portal](https://portal.azure.com), open your logic app resource and workflow in the designer.
-1. On the designer, under the trigger or action where you want to add the **AS2 (v2)** action, select **New step**.
+1. In the designer, [follow these general steps to add the **AS2 (v2)** action named **AS2 Encode** to your workflow](create-workflow-with-trigger-or-action.md?tabs=consumption#add-action).
-1. Under the **Choose an operation** search box, select **Standard**. In the search box, enter **as2**.
-
-1. From the actions list, select the action named **AS2 Encode**.
-
- ![Screenshot showing the Azure portal, designer for Consumption workflow, and "AS2 Encode" action selected.](./media/logic-apps-enterprise-integration-as2/select-as2-v2-encode-consumption.png)
-
-1. In the action information box, provide the following information.
+1. In the action information box, provide the following information:
| Property | Required | Description | |-|-|-|
Select the tab for either Consumption or Standard logic app workflows:
1. In the [Azure portal](https://portal.azure.com), open your logic app resource and workflow in the designer.
-1. On the designer, under the trigger or action where you want to add the **AS2** action, select **New step**.
-
-1. Under the **Choose an operation** search box, select **Standard**. In the search box, enter **as2 encode**.
-
-1. From the actions list, select the action named **Encode to AS2 message**.
-
- ![Screenshot showing the Azure portal, designer for Consumption workflow, and "Encode to AS2 message" action selected.](./media/logic-apps-enterprise-integration-as2/select-encode-as2-consumption.png)
+1. In the designer, [follow these general steps to add the **AS2** action named **Encode to AS2 message** to your workflow](create-workflow-with-trigger-or-action.md?tabs=consumption#add-action).
1. When prompted to create a connection to your integration account, provide the following information:
Select the tab for either Consumption or Standard logic app workflows:
1. In the [Azure portal](https://portal.azure.com), open your logic app resource and workflow in the designer.
-1. On the designer, under the trigger or action where you want to add the **AS2 (v2)** action, select **Insert a new step** (plus sign), and then select **Add an action**.
-
-1. Under the **Choose an operation** search box, select **Built-in**. In the search box, enter **as2 encode**.
-
-1. From the actions list, select the action named **AS2 Encode**.
-
- ![Screenshot showing the Azure portal, designer for Standard workflow, and "AS2 Encode" action selected.](./media/logic-apps-enterprise-integration-as2/select-as2-v2-encode-built-in-standard.png)
+1. In the designer, [follow these general steps to add the **AS2 (v2)** action named **AS2 Encode** to your workflow](create-workflow-with-trigger-or-action.md?tabs=standard#add-action).
1. In the action information pane, provide the following information:
Select the tab for either Consumption or Standard logic app workflows:
1. In the [Azure portal](https://portal.azure.com), open your logic app resource and workflow in the designer.
-1. On the designer, under the trigger or action where you want to add the **AS2** action, select **Insert a new step** (plus sign), and then select **Add an action**.
-
-1. Under the **Choose an operation** search box, select **Azure**. In the search box, enter **as2 encode**.
-
-1. From the actions list, select the action named **Encode to AS2 message**.
-
- ![Screenshot showing the Azure portal, workflow designer for Standard, and "Encode to AS2 message" action selected.](./media/logic-apps-enterprise-integration-as2/select-encode-as2-message-managed-standard.png)
+1. In the designer, [follow these general steps to add the **AS2** action named **Encode to AS2 message** to your workflow](create-workflow-with-trigger-or-action.md?tabs=standard#add-action).
1. When prompted to create a connection to your integration account, provide the following information:
Select the tab for either Consumption or Standard logic app workflows:
1. In the [Azure portal](https://portal.azure.com), open your logic app resource and workflow in the designer.
-1. On the designer, under the trigger or action where you want to add the **AS2 (v2)** action, select **New step**.
-
-1. Under the **Choose an operation** search box, select **Standard**. In the search box, enter **as2**.
-
-1. From the actions list, select the action named **AS2 Decode**.
-
- ![Screenshot showing the Azure portal, designer for Consumption workflow, and "AS2 Decode" action selected.](media/logic-apps-enterprise-integration-as2/select-as2-v2-decode-consumption.png)
+1. In the designer, [follow these general steps to add the **AS2 (v2)** action named **AS2 Decode** to your workflow](create-workflow-with-trigger-or-action.md?tabs=consumption#add-action).
1. In the action information box, provide the following information:
Select the tab for either Consumption or Standard logic app workflows:
1. In the [Azure portal](https://portal.azure.com), open your logic app resource and workflow in the designer.
-1. On the designer, under the trigger or action where you want to add the **AS2** action, select **New step**.
-
-1. Under the **Choose an operation** search box, select **Standard**. In the search box, enter **as2 decode**.
-
-1. From the actions list, select the action named **Decode AS2 message**.
-
- ![Screenshot showing the Azure portal, designer for Consumption workflow, and "Decode AS2 message" action selected.](./media/logic-apps-enterprise-integration-as2/select-decode-as2-consumption.png)
+1. In the designer, [follow these general steps to add the **AS2** action named **Decode AS2 message** to your workflow](create-workflow-with-trigger-or-action.md?tabs=consumption#add-action).
1. When prompted to create a connection to your integration account, provide the following information:
Select the tab for either Consumption or Standard logic app workflows:
1. In the [Azure portal](https://portal.azure.com), open your logic app resource and workflow in the designer.
-1. On the designer, under the trigger or action where you want to add the AS2 action, select **Insert a new step** (plus sign), and then select **Add an action**.
-
-1. Under the **Choose an operation** search box, select **Built-in**. In the search box, enter **as2 decode**.
-
-1. From the actions list, select the action named **AS2 Decode**.
-
- ![Screenshot showing the Azure portal, designer for Standard workflow, and "AS2 Decode" action selected.](./media/logic-apps-enterprise-integration-as2/select-as2-v2-decode-built-in-standard.png)
+1. In the designer, [follow these general steps to add the **AS2 (v2)** action named **AS2 Decode** to your workflow](create-workflow-with-trigger-or-action.md?tabs=standard#add-action).
1. In the action information pane, provide the following information:
Select the tab for either Consumption or Standard logic app workflows:
1. In the [Azure portal](https://portal.azure.com), open your logic app resource and workflow in the designer.
-1. On the designer, under the trigger or action where you want to add the AS2 action, select **Insert a new step** (plus sign), and then select **Add an action**.
-
-1. Under the **Choose an operation** search box, select **Azure**. In the search box, enter **as2 decode**.
-
-1. From the actions list, select the action named **Decode AS2 message**.
-
- ![Screenshot showing the Azure portal, designer for Standard workflow, and "Decode AS2 message" operation selected.](./media/logic-apps-enterprise-integration-as2/select-decode-as2-message-managed-standard.png)
+1. In the designer, [follow these general steps to add the **AS2** action named **Decode AS2 message** to your workflow](create-workflow-with-trigger-or-action.md?tabs=standard#add-action).
1. When prompted to create a connection to your integration account, provide the following information:
logic-apps Logic Apps Enterprise Integration Create Integration Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-enterprise-integration-create-integration-account.md
- Title: Create and manage integration accounts
-description: Create and manage integration accounts for building B2B enterprise integration workflows in Azure Logic Apps with the Enterprise Integration Pack.
---- Previously updated : 08/23/2022--
-# Create and manage integration accounts for B2B workflows in Azure Logic Apps with the Enterprise Integration Pack
--
-Before you can build business-to-business (B2B) and enterprise integration workflows using Azure Logic Apps, you need to create an *integration account* resource. This account is a scalable cloud-based container in Azure that simplifies how you store and manage B2B artifacts that you define and use in your workflows for B2B scenarios, for example:
-
-* [Trading partners](logic-apps-enterprise-integration-partners.md)
-* [Agreements](logic-apps-enterprise-integration-agreements.md)
-* [Maps](logic-apps-enterprise-integration-maps.md)
-* [Schemas](logic-apps-enterprise-integration-schemas.md)
-* [Certificates](logic-apps-enterprise-integration-certificates.md)
-
-You also need an integration account to electronically exchange B2B messages with other organizations. When other organizations use protocols and message formats different from your organization, you have to convert these formats so your organization's system can process those messages. With Azure Logic Apps, you can build workflows that support the following industry-standard protocols:
-
-* [AS2](logic-apps-enterprise-integration-as2.md)
-* [EDIFACT](logic-apps-enterprise-integration-edifact.md)
-* [RosettaNet](logic-apps-enterprise-integration-rosettanet.md)
-* [X12](logic-apps-enterprise-integration-x12.md)
-
-This article shows how to complete the following tasks:
-
-* Create an integration account.
-* Link your integration account to a logic app resource.
-* Change the pricing tier for your integration account.
-* Unlink your integration account from a logic app resource.
-* Move an integration account to another Azure resource group or subscription.
-* Delete an integration account.
-
-> [!NOTE]
->
-> If you use an [integration service environment (ISE)](connect-virtual-network-vnet-isolated-environment-overview.md),
-> and you need to create an integration account to use with that ISE, review
-> [Create integration accounts in an ISE](add-artifacts-integration-service-environment-ise.md#create-integration-account-environment).
-
-If you're new to creating B2B enterprise integration workflows in Azure Logic Apps, review the following documentation:
-
-* [What is Azure Logic Apps](logic-apps-overview.md)
-* [B2B enterprise integration workflows with Azure Logic Apps and Enterprise Integration Pack](logic-apps-enterprise-integration-overview.md)
-
-## Prerequisites
-
-* An Azure account and subscription. If you don't have an Azure subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). Make sure that you use the same Azure subscription for both your integration account and logic app resource.
-
-* Whether you're working on a Consumption or Standard logic app workflow, your logic app resource must already exist before you can link your integration account.
-
- * For Consumption logic app resources, this link is required before you can use the artifacts from your integration account with your workflow. Although you can create your artifacts without this link, the link is required when you're ready to use these artifacts.
-
- * For Standard logic app resources, this link is optional, based on your scenario:
-
- * If you have an integration account with the artifacts that you need or want to use, you can link the integration account to each Standard logic app resource where you want to use the artifacts.
-
- * Some Azure-hosted integration account connectors, such as **AS2**, **EDIFACT**, and **X12**, let you create a connection to your integration account. If you're just using these connectors, you don't need the link.
-
- * The built-in connectors named **Liquid** and **Flat File** let you select maps and schemas that you previously uploaded to your logic app resource or to a linked integration account.
-
- If you don't have or need an integration account, you can use the upload option. Otherwise, you can use the linking option, which also means you don't have to upload maps and schemas to each logic app resource. Either way, you can use these artifacts across all child workflows within the *same logic app resource*.
-
-* Basic knowledge about how to create logic app workflows. For more information, see the following documentation:
-
- * [Quickstart: Create an example Consumption logic app workflow in multi-tenant Azure Logic Apps](quickstart-create-example-consumption-workflow.md)
-
- * [Create an example Standard logic app workflow in single-tenant Azure Logic Apps](create-single-tenant-workflows-azure-portal.md)
-
-## Create integration account
-
-Integration accounts are available in different tiers that [vary in pricing](https://azure.microsoft.com/pricing/details/logic-apps/). Based on the tier you choose, creating an integration account might incur costs. For more information, review [Azure Logic Apps pricing and billing models](logic-apps-pricing.md#integration-accounts) and [Azure Logic Apps pricing](https://azure.microsoft.com/pricing/details/logic-apps/).
-
-Based on your requirements and scenarios, determine the appropriate integration account tier to create. The following table describes the available tiers:
-
-| Tier | Description |
-||-|
-| **Basic** | For scenarios where you want only message handling or to act as a small business partner that has a trading partner relationship with a larger business entity. <br><br>Supported by the Azure Logic Apps SLA. |
-| **Standard** | For scenarios where you have more complex B2B relationships and increased numbers of entities that you must manage. <br><br>Supported by the Azure Logic Apps SLA. |
-| **Free** | For exploratory scenarios, not production scenarios. This tier has limits on region availability, throughput, and usage. For example, the Free tier is available only in public Azure regions, such as West US or Southeast Asia, but not in [Microsoft Azure operated by 21Vianet](/azure/chin). <br><br>**Note**: Not supported by the Azure Logic Apps SLA. |
-|||
-
-For this task, you can use the Azure portal, [Azure CLI](/cli/azure/resource#az-resource-create), or [Azure PowerShell](/powershell/module/Az.LogicApp/New-AzIntegrationAccount).
-
-> [!IMPORTANT]
->
-> For you to successfully link and use your integration account with your logic app,
-> make sure that both resources exist in the *same* Azure subscription and Azure region.
-
-### [Portal](#tab/azure-portal)
-
-1. In the [Azure portal](https://portal.azure.com), sign in with your Azure account credentials.
-
-1. In the Azure portal search box, enter **integration accounts**, and select **Integration accounts**.
-
-1. Under **Integration accounts**, select **Create**.
-
-1. On the **Create an integration account** pane, provide the following information about your integration account:
-
- | Property | Required | Value | Description |
- |-|-|-|-|
- | **Subscription** | Yes | <*Azure-subscription-name*> | The name for your Azure subscription |
- | **Resource group** | Yes | <*Azure-resource-group-name*> | The name for the [Azure resource group](../azure-resource-manager/management/overview.md) to use for organizing related resources. For this example, create a new resource group named **FabrikamIntegration-RG**. |
- | **Integration account name** | Yes | <*integration-account-name*> | Your integration account's name, which can contain only letters, numbers, hyphens (`-`), underscores (`_`), parentheses (`()`), and periods (`.`). This example uses **Fabrikam-Integration**. |
- | **Region** | Yes | <*Azure-region*> | The Azure region where to store your integration account metadata. Either select the same location as your logic app resource, or create your logic apps in the same location as your integration account. For this example, use **West US**. <br><br>**Note**: To create an integration account inside an [integration service environment (ISE)](connect-virtual-network-vnet-isolated-environment-overview.md), select **Associate with integration service environment** and select your ISE as the location. For more information, see [Create integration accounts in an ISE](add-artifacts-integration-service-environment-ise.md#create-integration-account-environment). |
- | **Pricing Tier** | Yes | <*pricing-level*> | The pricing tier for the integration account, which you can change later. For this example, select **Free**. For more information, review the following documentation: <br><br>- [Logic Apps pricing model](logic-apps-pricing.md#integration-accounts) <br>- [Logic Apps limits and configuration](logic-apps-limits-and-config.md#integration-account-limits) <br>- [Logic Apps pricing](https://azure.microsoft.com/pricing/details/logic-apps/) |
- | **Enable log analytics** | No | Unselected | For this example, don't select this option. |
- |||||
-
-1. When you're done, select **Review + create**.
-
- After deployment completes, Azure opens your integration account.
-
-### [Azure CLI](#tab/azure-cli)
--
-1. To add the [az logic integration-account](/cli/azure/logic/integration-account) extension, use the [az extension add](/cli/azure/extension#az-extension-add) command:
-
- ```azurecli
- az extension add --name logic
- ```
-
-1. To create a resource group or use an existing resource group, run the [az group create](/cli/azure/group#az-group-create) command:
-
- ```azurecli
- az group create --name myresourcegroup --location westus
- ```
-
- To list the integration accounts for a resource group, use the [az logic integration-account list](/cli/azure/logic/integration-account#az-logic-integration-account-list) command:
-
- ```azurecli
- az logic integration-account list --resource-group myresourcegroup
- ```
-
-1. To create an integration account, run the [az logic integration-account create](/cli/azure/logic/integration-account#az-logic-integration-account-create) command:
-
- ```azurecli
- az logic integration-account create --resource-group myresourcegroup \
- --name integration_account_01 --location westus --sku name=Standard
- ```
-
- Your integration account name can contain only letters, numbers, hyphens (-), underscores (_), parentheses (()), and periods (.).
-
- To view a specific integration account, use the [az logic integration-account show](/cli/azure/logic/integration-account#az-logic-integration-account-show) command:
-
- ```azurecli
- az logic integration-account show --name integration_account_01 --resource-group myresourcegroup
- ```
-
- You can change your SKU, or pricing tier, by using the [az logic integration-account update](/cli/azure/logic/integration-account#az-logic-integration-account-update) command:
-
- ```azurecli
- az logic integration-account update --sku name=Basic --name integration_account_01 \
- --resource-group myresourcegroup
- ```
-
- For more information about pricing, see these resources:
-
- * [Azure Logic Apps pricing model](logic-apps-pricing.md#integration-accounts)
- * [Azure Logic Apps limits and configuration](logic-apps-limits-and-config.md#integration-account-limits)
- * [Azure Logic Apps pricing](https://azure.microsoft.com/pricing/details/logic-apps/)
-
-To import an integration account by using a JSON file, use the [az logic integration-account import](/cli/azure/logic/integration-account#az-logic-integration-account-import) command:
-
-```azurecli
-az logic integration-account import --name integration_account_01 \
- --resource-group myresourcegroup --input-path integration.json
-```
---
-<a name="link-account"></a>
-
-## Link to logic app
-
-For you to successfully link your integration account to your logic app resource, make sure that both resources use the *same* Azure subscription and Azure region.
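For example, before you link, you can confirm that both resources report the same region. The following Azure CLI commands are a quick sketch that reuses this article's example names; **Fabrikam-Logic-App** is a hypothetical Consumption logic app name, so substitute your own resource names:

```azurecli
# Check the integration account's region.
az resource show --resource-group FabrikamIntegration-RG \
    --resource-type Microsoft.Logic/integrationAccounts \
    --name Fabrikam-Integration --query location

# Check the Consumption logic app's region. Both commands should return the same value.
az resource show --resource-group FabrikamIntegration-RG \
    --resource-type Microsoft.Logic/workflows \
    --name Fabrikam-Logic-App --query location
```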
-
-### [Consumption](#tab/consumption)
-
-This section describes how to complete this task using the Azure portal. If you use Visual Studio and your logic app is in an [Azure Resource Group project](../azure-resource-manager/templates/create-visual-studio-deployment-project.md), you can [link your logic app to an integration account by using Visual Studio](manage-logic-apps-with-visual-studio.md#link-integration-account).
-
-1. In the [Azure portal](https://portal.azure.com), open your logic app resource.
-
-1. On your logic app's navigation menu, under **Settings**, select **Workflow settings**. Under **Integration account**, open the **Select an Integration account** list, and select the integration account you want.
-
- ![Screenshot that shows the Azure portal with integration account menu with "Workflow settings" pane open and "Select an Integration account" list open.](./media/logic-apps-enterprise-integration-create-integration-account/select-integration-account.png)
-
-1. To finish linking, select **Save**.
-
- ![Screenshot that shows "Workflow settings" pane and "Save" selected.](./media/logic-apps-enterprise-integration-create-integration-account/save-link.png)
-
- After your integration account is successfully linked, Azure shows a confirmation message.
-
- ![Screenshot that shows Azure confirmation message.](./media/logic-apps-enterprise-integration-create-integration-account/link-confirmation.png)
-
-Now your logic app workflow can use the artifacts in your integration account plus the B2B connectors, such as XML validation and flat file encoding or decoding.
-
-### [Standard](#tab/standard)
-
-#### Find your integration account's callback URL
-
-Before you can link your integration account to a Standard logic app resource, you need to have your integration account's **callback URL**.
-
-1. In the [Azure portal](https://portal.azure.com), sign in with your Azure account credentials.
-
-1. In the Azure portal search box, find and select your integration account. To browse existing accounts, enter **integration accounts**, and then select **Integration accounts**.
-
-1. From the **Integration accounts** list, select your integration account.
-
-1. On your selected integration account's navigation menu, under **Settings**, select **Callback URL**.
-
-1. Find the **Generated Callback URL** property value, copy the value, and save the URL to use later for linking.
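If you prefer scripting this step, the same value is available through the Azure Resource Manager REST API. The following **az rest** command is only a sketch. It assumes that the integration account's **listCallbackUrl** operation accepts an empty body at the **2019-05-01** API version, so verify the API version before you rely on it:

```azurecli
# Request the integration account's callback URL (illustrative; verify the API version).
az rest --method post \
    --uri "https://management.azure.com/subscriptions/<subscription-ID>/resourceGroups/<resource-group>/providers/Microsoft.Logic/integrationAccounts/<integration-account-name>/listCallbackUrl?api-version=2019-05-01" \
    --body "{}"
```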
-
-#### Link integration account to Standard logic app
-
-##### Azure portal
-
-1. In the [Azure portal](https://portal.azure.com), open your Standard logic app resource.
-
-1. On your logic app's navigation menu, under **Settings**, select **Configuration**.
-
-1. On the **Configuration** pane, check whether the app setting named **WORKFLOW_INTEGRATION_ACCOUNT_CALLBACK_URL** exists.
-
-1. If the app setting doesn't exist, under the **Configuration** pane toolbar, select **New application setting**.
-
-1. Provide the following values for the app setting:
-
- | Property | Value |
- |-|-|
- | **Name** | **WORKFLOW_INTEGRATION_ACCOUNT_CALLBACK_URL** |
- | **Value** | <*integration-account-callback-URL*> |
-
-1. When you're done, select **OK**. When you return to the **Configuration** pane, make sure to save your changes. On the **Configuration** pane toolbar, select **Save**.
-
-##### Visual Studio Code
-
-1. From your Standard logic app project in Visual Studio Code, open the **local.settings.json** file.
-
-1. In the `Values` object, add an app setting that has the following properties and values, including the previously saved callback URL:
-
- | Property | Value |
- |-|-|
- | **Name** | **WORKFLOW_INTEGRATION_ACCOUNT_CALLBACK_URL** |
- | **Value** | <*integration-account-callback-URL*> |
-
- This example shows how a sample app setting might appear:
-
- ```json
- {
- "IsEncrypted": false,
- "Values": {
- "AzureWebJobStorage": "UseDevelopmentStorage=true",
- "FUNCTIONS_WORKER_RUNTIME": "node",
- "WORKFLOW_INTEGRATION_ACCOUNT_CALLBACK_URL": "https://prod-03.westus.logic.azure.com:443/integrationAccounts/...."
- }
- }
- ```
-
-1. When you're done, save your changes.
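As an alternative to the portal or Visual Studio Code, you can set the same app setting from the command line. The following sketch assumes that your Azure CLI version includes the **az logicapp** command group; **MyStandardLogicApp** is a hypothetical resource name:

```azurecli
# Add or update the callback URL app setting on the Standard logic app resource.
az logicapp config appsettings set \
    --name MyStandardLogicApp \
    --resource-group FabrikamIntegration-RG \
    --settings "WORKFLOW_INTEGRATION_ACCOUNT_CALLBACK_URL=<integration-account-callback-URL>"
```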
---
-<a name="change-pricing-tier"></a>
-
-## Change pricing tier
-
-To increase the [limits](logic-apps-limits-and-config.md#integration-account-limits) for an integration account, you can [upgrade to a higher pricing tier](#upgrade-pricing-tier), if available. For example, you can upgrade from the Free tier to the Basic tier or Standard tier. You can also [downgrade to a lower tier](#downgrade-pricing-tier), if available. For more pricing information, review the following documentation:
-
-* [Azure Logic Apps pricing](https://azure.microsoft.com/pricing/details/logic-apps/)
-* [Azure Logic Apps pricing model](logic-apps-pricing.md#integration-accounts)
-
-<a name="upgrade-pricing-tier"></a>
-
-### Upgrade pricing tier
-
-To make this change, you can use either the Azure portal or the Azure CLI.
-
-#### [Portal](#tab/azure-portal)
-
-1. Sign in to the [Azure portal](https://portal.azure.com) with your Azure account credentials.
-
-1. In the main Azure search box, enter `integration accounts`, and select **Integration accounts**.
-
- Azure shows all the integration accounts in your Azure subscriptions.
-
-1. Under **Integration accounts**, select the integration account that you want to move. On your integration account menu, select **Overview**.
-
- ![Screenshot that shows Azure portal with integration account menu and "Overview" selected.](./media/logic-apps-enterprise-integration-create-integration-account/integration-account-overview.png)
-
-1. On the Overview pane, select **Upgrade Pricing Tier**, which lists any available higher tiers. When you select a tier, the change immediately takes effect.
-
- ![Screenshot that shows integration account "Overview" pane with "Upgrade Pricing Tier" selected.](media/logic-apps-enterprise-integration-create-integration-account/upgrade-pricing-tier.png)
-
-<a name="upgrade-tier-azure-cli"></a>
-
-#### [Azure CLI](#tab/azure-cli)
-
-1. If you haven't done so already, [install the Azure CLI prerequisites](/cli/azure/get-started-with-azure-cli).
-
-1. In the Azure portal, open the [Azure Cloud Shell](../cloud-shell/overview.md) environment.
-
- ![Screenshot that shows the Azure portal toolbar with "Cloud Shell" selected.](./media/logic-apps-enterprise-integration-create-integration-account/open-azure-cloud-shell-window.png)
-
-1. At the command prompt, enter the [**az resource** command](/cli/azure/resource#az-resource-update), and set `skuName` to the higher tier that you want.
-
- ```azurecli
- az resource update --resource-group {ResourceGroupName} --resource-type Microsoft.Logic/integrationAccounts --name {IntegrationAccountName} --subscription {AzureSubscriptionID} --set sku.name={SkuName}
- ```
-
- For example, if you have the Basic tier, you can set `skuName` to `Standard`:
-
- ```azurecli
- az resource update --resource-group FabrikamIntegration-RG --resource-type Microsoft.Logic/integrationAccounts --name Fabrikam-Integration --subscription XXXXXXXXXXXXXXXXX --set sku.name=Standard
- ```
---
-<a name="downgrade-pricing-tier"></a>
-
-### Downgrade pricing tier
-
-To make this change, use the [Azure CLI](/cli/azure/get-started-with-azure-cli).
-
-1. If you haven't done so already, [install the Azure CLI prerequisites](/cli/azure/get-started-with-azure-cli).
-
-1. In the Azure portal, open the [Azure Cloud Shell](../cloud-shell/overview.md) environment.
-
- ![Screenshot that shows the Azure portal toolbar with "Cloud Shell" selected.](./media/logic-apps-enterprise-integration-create-integration-account/open-azure-cloud-shell-window.png)
-
-1. At the command prompt, enter the [**az resource** command](/cli/azure/resource#az-resource-update) and set `skuName` to the lower tier that you want.
-
- ```azurecli
- az resource update --resource-group <resourceGroupName> --resource-type Microsoft.Logic/integrationAccounts --name <integrationAccountName> --subscription <AzureSubscriptionID> --set sku.name=<skuName>
- ```
-
- For example, if you have the Standard tier, you can set `skuName` to `Basic`:
-
- ```azurecli
- az resource update --resource-group FabrikamIntegration-RG --resource-type Microsoft.Logic/integrationAccounts --name Fabrikam-Integration --subscription XXXXXXXXXXXXXXXXX --set sku.name=Basic
- ```
-
-## Unlink from logic app
-
-### [Consumption](#tab/consumption)
-
-If you want to link your logic app to another integration account, or no longer use an integration account with your logic app, delete the link by using Azure Resource Explorer.
-
-1. Open your browser window, and go to [Azure Resource Explorer (https://resources.azure.com)](https://resources.azure.com). Sign in with the same Azure account credentials.
-
- ![Screenshot that shows a web browser with Azure Resource Explorer.](./media/logic-apps-enterprise-integration-create-integration-account/resource-explorer.png)
-
-1. In the search box, enter your logic app's name to find and open your logic app.
-
- ![Screenshot that shows the explorer search box, which contains your logic app name.](./media/logic-apps-enterprise-integration-create-integration-account/resource-explorer-find-logic-app.png)
-
-1. On the explorer title bar, select **Read/Write**.
-
- ![Screenshot that shows the title bar with "Read/Write" selected.](./media/logic-apps-enterprise-integration-create-integration-account/resource-explorer-select-read-write.png)
-
-1. On the **Data** tab, select **Edit**.
-
- ![Screenshot that shows the "Data" tab with "Edit" selected.](./media/logic-apps-enterprise-integration-create-integration-account/resource-explorer-select-edit.png)
-
-1. In the editor, find the `integrationAccount` object, and delete that property, which has this format:
-
- ```json
- {
- // <other-attributes>
- "integrationAccount": {
- "name": "<integration-account-name>",
- "id": "<integration-account-resource-ID>",
- "type": "Microsoft.Logic/integrationAccounts"
- },
- }
- ```
-
- For example:
-
- ![Screenshot that shows how to find the "integrationAccount" object.](./media/logic-apps-enterprise-integration-create-integration-account/resource-explorer-delete-integration-account.png)
-
-1. On the **Data** tab, select **Put** to save your changes.
-
- ![Screenshot that shows the "Data" tab with "Put" selected.](./media/logic-apps-enterprise-integration-create-integration-account/resource-explorer-save-changes.png)
-
-1. In the Azure portal, open your logic app. On your logic app menu, under **Workflow settings**, check that the **Integration account** property now appears empty.
-
- ![Screenshot that shows the Azure portal with the logic app menu and "Workflow settings" selected.](./media/logic-apps-enterprise-integration-create-integration-account/unlinked-account.png)
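If you prefer the Azure CLI over Azure Resource Explorer, the generic resource update command might remove the same property. The following commands are an untested sketch that assumes the **--remove** argument can target the **properties.integrationAccount** path shown earlier; **Fabrikam-Logic-App** is a hypothetical logic app name:

```azurecli
# Remove the integrationAccount property to unlink the integration account.
az resource update --resource-group FabrikamIntegration-RG \
    --resource-type Microsoft.Logic/workflows \
    --name Fabrikam-Logic-App \
    --remove properties.integrationAccount
```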
-
-### [Standard](#tab/standard)
-
-#### Azure portal
-
-1. In the [Azure portal](https://portal.azure.com), open your Standard logic app resource.
-
-1. On your logic app's navigation menu, under **Settings**, select **Configuration**.
-
-1. On the **Configuration** pane, find the app setting named **WORKFLOW_INTEGRATION_ACCOUNT_CALLBACK_URL**.
-
-1. In the **Delete** column, select **Delete** (trash can icon).
-
-1. On the **Configuration** pane toolbar, select **Save**.
-
-#### Visual Studio Code
-
-1. From your Standard logic app project in Visual Studio Code, open the **local.settings.json** file.
-
-1. In the `Values` object, find and delete the app setting that has the following properties and values:
-
- | Property | Value |
- |-|-|
- | **Name** | **WORKFLOW_INTEGRATION_ACCOUNT_CALLBACK_URL** |
- | **Value** | <*integration-account-callback-URL*> |
-
-1. When you're done, save your changes.
---
-## Move integration account
-
-You can move your integration account to another Azure resource group or Azure subscription. When you move resources, Azure creates new resource IDs, so make sure that you use the new IDs instead and update any scripts or tools associated with the moved resources. If you want to change the subscription, you must also specify an existing or new resource group.
-
-For this task, you can use either the Azure portal by following the steps in this section or the [Azure CLI](/cli/azure/resource#az-resource-move).
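For example, the following Azure CLI sketch moves this article's example integration account to a different resource group in the same subscription; **Destination-RG** is a hypothetical destination resource group:

```azurecli
# Move the integration account to another resource group.
# Add --destination-subscription-id to move across subscriptions.
az resource move \
    --destination-group Destination-RG \
    --ids "/subscriptions/<subscription-ID>/resourceGroups/FabrikamIntegration-RG/providers/Microsoft.Logic/integrationAccounts/Fabrikam-Integration"
```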
-
-1. Sign in to the [Azure portal](https://portal.azure.com) with your Azure account credentials.
-
-1. In the main Azure search box, enter `integration accounts`, and select **Integration accounts**.
-
- Azure shows all the integration accounts in your Azure subscriptions.
-
-1. Under **Integration accounts**, select the integration account that you want to move. On your integration account menu, select **Overview**.
-
-1. On the Overview pane, next to either **Resource group** or **Subscription name**, select **change**.
-
- ![Screenshot that shows the Azure portal and the "Overview" pane with "change" selected next to "Resource group" or "Subscription name".](./media/logic-apps-enterprise-integration-create-integration-account/change-resource-group-subscription.png)
-
-1. Select any related resources that you also want to move.
-
-1. Based on your selection, follow these steps to change the resource group or subscription:
-
- * Resource group: From the **Resource group** list, select the destination resource group. Or, to create a different resource group, select **Create a new resource group**.
-
- * Subscription: From the **Subscription** list, select the destination subscription. From the **Resource group** list, select the destination resource group. Or, to create a different resource group, select **Create a new resource group**.
-
-1. To acknowledge your understanding that any scripts or tools associated with the moved resources won't work until you update them with the new resource IDs, select the confirmation box, and then select **OK**.
-
-1. After you finish, make sure that you update all scripts with the new resource IDs for your moved resources.
-
-## Delete integration account
-
-For this task, you can use either the Azure portal by following the steps in this section, [Azure CLI](/cli/azure/resource#az-resource-delete), or [Azure PowerShell](/powershell/module/az.logicapp/remove-azintegrationaccount).
-
-### [Portal](#tab/azure-portal)
-
-1. Sign in to the [Azure portal](https://portal.azure.com) with your Azure account credentials.
-
-1. In the main Azure search box, enter `integration accounts`, and select **Integration accounts**.
-
- Azure shows all the integration accounts in your Azure subscriptions.
-
-1. Under **Integration accounts**, select the integration account that you want to delete. On your integration account menu, select **Overview**.
-
- ![Screenshot that shows Azure portal with "Integration accounts" list and integration account menu with "Overview" selected.](./media/logic-apps-enterprise-integration-create-integration-account/integration-account-overview.png)
-
-1. On the Overview pane, select **Delete**.
-
- ![Screenshot that shows "Overview" pane with "Delete" selected.](./media/logic-apps-enterprise-integration-create-integration-account/delete-integration-account.png)
-
-1. To confirm that you want to delete your integration account, select **Yes**.
-
- ![Screenshot that shows confirmation box and "Yes" selected.](./media/logic-apps-enterprise-integration-create-integration-account/confirm-delete.png)
-
-<a name="delete-account-azure-cli"></a>
-
-#### [Azure CLI](#tab/azure-cli)
-
-You can delete an integration account by using the [az logic integration-account delete](/cli/azure/logic/integration-account#az-logic-integration-account-delete) command:
-
-```azurecli
-az logic integration-account delete --name integration_account_01 --resource-group myresourcegroup
-```
---
-## Next steps
-
-* [Create trading partners in your integration account](logic-apps-enterprise-integration-partners.md)
-* [Create agreements between partners in your integration account](logic-apps-enterprise-integration-agreements.md)
logic-apps Logic Apps Enterprise Integration Edifact Contrl Acknowledgment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-enterprise-integration-edifact-contrl-acknowledgment.md
Previously updated : 08/20/2022 Last updated : 08/15/2023 # CONTRL acknowledgments and error codes for EDIFACT messages in Azure Logic Apps
logic-apps Logic Apps Enterprise Integration Edifact Message Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-enterprise-integration-edifact-message-settings.md
Previously updated : 08/20/2022 Last updated : 08/15/2023 # Reference for EDIFACT message settings in agreements for Azure Logic Apps
Last updated 08/20/2022
This reference describes the properties that you can set in an EDIFACT agreement for specifying how to handle messages between [trading partners](logic-apps-enterprise-integration-partners.md). Set up these properties based on your agreement with the partner that exchanges messages with you.
-<a name="EDIFACT-inbound-messages"></a>
+<a name="edifact-inbound-messages"></a>
-## EDIFACT Receive Settings
+## EDIFACT Receive settings
-![Screenshot showing Azure portal, EDIFACT agreement settings for inbound messages.](./media/logic-apps-enterprise-integration-edifact-message-settings/edifact-receive-settings.png)
+![Screenshot showing Azure portal and EDIFACT agreement settings for inbound messages.](./media/logic-apps-enterprise-integration-edifact-message-settings/edifact-receive-settings.png)
### Identifiers
This reference describes the properties that you can set in an EDIFACT agreement
| Property | Description |
|-|-|
| **UNB6.1 (Recipient Reference Password)** | An alphanumeric value that is 1-14 characters. |
| **UNB6.2 (Recipient Reference Qualifier)** | An alphanumeric value that is 1-2 characters. |
-|||
### Acknowledgments

| Property | Description |
|-|-|
| **Receipt of Message (CONTRL)** | Return a technical (CONTRL) acknowledgment to the interchange sender, based on the agreement's Send Settings. |
-| **Acknowledgement (CONTRL)** | Return a functional (CONTRL) acknowledgment to the interchange sender, based on the agreement's Send settings. |
-|||
+| **Acknowledgment (CONTRL)** | Return a functional (CONTRL) acknowledgment to the interchange sender, based on the agreement's Send settings. |
<a name="receive-settings-schemas"></a>
This reference describes the properties that you can set in an EDIFACT agreement
| **UNH2.5 (Associated Assigned Code)** | The assigned code that is alphanumeric and is 1-6 characters. |
| **UNG2.1 (App Sender ID)** | Enter an alphanumeric value with a minimum of one character and a maximum of 35 characters. |
| **UNG2.2 (App Sender Code Qualifier)** | Enter an alphanumeric value, with a maximum of four characters. |
-| **Schema** | The previously uploaded schema that you want to use in from either resource type: <p>- Consumption: An integration account linked to your logic app. <br>- Standard: Your logic app resource |
-|||
+| **Schema** | The previously uploaded schema that you want to use, from either resource type: <br><br>- Consumption: An integration account linked to your logic app. <br>- Standard: Your logic app resource |
### Control Numbers
This reference describes the properties that you can set in an EDIFACT agreement
| **Check for duplicate UNB5 every (days)** | If you chose to disallow duplicate interchange control numbers, you can specify the number of days between running the check. |
| **Disallow Group control number duplicates** | Block interchanges that have duplicate group control numbers (UNG5). |
| **Disallow Transaction set control number duplicates** | Block interchanges that have duplicate transaction set control numbers (UNH1). |
-| **EDIFACT Acknowledgement Control Number** | Assign the transaction set reference numbers to use in an acknowledgment by entering a value for the prefix, a range of reference numbers, and a suffix. |
-|||
+| **EDIFACT Acknowledgment Control Number** | Assign the transaction set reference numbers to use in an acknowledgment by entering a value for the prefix, a range of reference numbers, and a suffix. |
### Validation
After you finish setting up a validation row, the next row automatically appears
| **Extended Validation** | If the data type isn't EDI, validation runs on the data element requirement and allowed repetition, enumerations, and data element length validation (min and max). |
| **Allow Leading/Trailing Zeroes** | Keep any extra leading or trailing zero and space characters. Don't remove these characters. |
| **Trim Leading/Trailing Zeroes** | Remove the leading or trailing zero and space characters. |
-| **Trailing Separator Policy** | Generate trailing separators. <p> - **Not Allowed**: Prohibit trailing delimiters and separators in the received interchange. If the interchange has trailing delimiters and separators, the interchange is declared not valid. <p>- **Optional**: Accept interchanges with or without trailing delimiters and separators. <p>- **Mandatory**: The received interchange must have trailing delimiters and separators. |
-|||
+| **Trailing Separator Policy** | Generate trailing separators. <br><br>- **Not Allowed**: Prohibit trailing delimiters and separators in the received interchange. If the interchange has trailing delimiters and separators, the interchange is declared not valid. <br><br>- **Optional**: Accept interchanges with or without trailing delimiters and separators. <br><br>- **Mandatory**: The received interchange must have trailing delimiters and separators. |
### Internal Settings
After you finish setting up a validation row, the next row automatically appears
| **Split Interchange as transaction sets - suspend interchange on error** | Parse each transaction set in an interchange into a separate XML document by applying the appropriate envelope. Suspend the entire interchange when one or more transaction sets in the interchange fail validation. |
| **Preserve Interchange - suspend transaction sets on error** | Keep the interchange intact, create an XML document for the entire batched interchange. Suspend only the transaction sets that fail validation, while continuing to process all other transaction sets. |
| **Preserve Interchange - suspend interchange on error** | Keep the interchange intact, create an XML document for the entire batched interchange. Suspend the entire interchange when one or more transaction sets in the interchange fail validation. |
-|||
-<a name="EDIFACT-outbound-messages"></a>
+<a name="edifact-outbound-messages"></a>
-## EDIFACT Send Settings
+## EDIFACT Send settings
-![Screenshot showing Azure portal, EDIFACT agreement settings for outbound messages.](./media/logic-apps-enterprise-integration-edifact-message-settings/edifact-send-settings.png)
+![Screenshot showing Azure portal and EDIFACT agreement settings for outbound messages.](./media/logic-apps-enterprise-integration-edifact-message-settings/edifact-send-settings.png)
### Identifiers
After you finish setting up a validation row, the next row automatically appears
| **UNB6.1 (Recipient Reference Password)** | An alphanumeric value that is 1-14 characters. |
| **UNB6.2 (Recipient Reference Qualifier)** | An alphanumeric value that is 1-2 characters. |
| **UNB7 (Application Reference ID)** | An alphanumeric value that is 1-14 characters. |
-|||
### Acknowledgment

| Property | Description |
|-|-|
| **Receipt of Message (CONTRL)** | The host partner that sends the message requests a technical (CONTRL) acknowledgment from the guest partner. |
-| **Acknowledgement (CONTRL)** | The host partner that sends the message expects requests a functional (CONTRL) acknowledgment from the guest partner. |
+| **Acknowledgment (CONTRL)** | The host partner that sends the message requests a functional (CONTRL) acknowledgment from the guest partner. |
| **Generate SG1/SG4 loop for accepted transaction sets** | If you chose to request a functional acknowledgment, this setting forces the generation of SG1/SG4 loops in the functional acknowledgments for accepted transaction sets. |
-|||
### Schemas
After you finish setting up a validation row, the next row automatically appears
| **UNH2.1 (Type)** | The transaction set type. |
| **UNH2.2 (Version)** | The message version number. |
| **UNH2.3 (Release)** | The message release number. |
-| **Schema** | The previously uploaded schema that you want to use in from either resource type: <p>- Consumption: An integration account linked to your logic app. <br>- Standard: Your logic app resource |
-|||
+| **Schema** | The previously uploaded schema that you want to use, from either resource type: <br><br>- Consumption: An integration account linked to your logic app. <br>- Standard: Your logic app resource |
### Envelopes
After you finish setting up an envelope row, the next row automatically appears.
| **UNB10 (Communication Agreement)** | An alphanumeric value that is 1-40 characters. |
| **UNB11 (Test Indicator)** | Indicate that the generated interchange is test data. |
| **Apply UNA Segment (Service String Advice)** | Generate a UNA segment for the interchange to send. |
-| **Apply UNG Segments (Function Group Header)** | Create grouping segments in the functional group header for messages sent to the guest partner. The following values are used to create the UNG segments: <p>- **Schema**: The previously uploaded schema that you want to use in from either resource type: <p>- Consumption: An integration account linked to your logic app. <p>- Standard: Your logic app resource <p>- **UNG1**: An alphanumeric value that is 1-6 characters. <p>- **UNG2.1**: An alphanumeric value that is 1-35 characters. <p>- **UNG2.2**: An alphanumeric value that is 1-4 characters. <p>- **UNG3.1**: An alphanumeric value that is 1-35 characters. <p>- **UNG3.2**: An alphanumeric value that is 1-4 characters. <p>- **UNG6**: An alphanumeric value that is 1-3 characters. <p>- **UNG7.1**: An alphanumeric value that is 1-3 characters. <p>- **UNG7.2**: An alphanumeric value that is 1-3 characters. <p>- **UNG7.3**: An alphanumeric value that is 1-6 characters. <p>- **UNG8**: An alphanumeric value that is 1-14 characters. |
-|||
+| **Apply UNG Segments (Function Group Header)** | Create grouping segments in the functional group header for messages sent to the guest partner. The following values are used to create the UNG segments: <br><br>- **Schema**: The previously uploaded schema that you want to use, from either resource type: <br><br>- Consumption: An integration account linked to your logic app. <br><br>- Standard: Your logic app resource <br><br>- **UNG1**: An alphanumeric value that is 1-6 characters. <br><br>- **UNG2.1**: An alphanumeric value that is 1-35 characters. <br><br>- **UNG2.2**: An alphanumeric value that is 1-4 characters. <br><br>- **UNG3.1**: An alphanumeric value that is 1-35 characters. <br><br>- **UNG3.2**: An alphanumeric value that is 1-4 characters. <br><br>- **UNG6**: An alphanumeric value that is 1-3 characters. <br><br>- **UNG7.1**: An alphanumeric value that is 1-3 characters. <br><br>- **UNG7.2**: An alphanumeric value that is 1-3 characters. <br><br>- **UNG7.3**: An alphanumeric value that is 1-6 characters. <br><br>- **UNG8**: An alphanumeric value that is 1-14 characters. |
### Character Sets and Separators
Other than the character set, you can specify a different set of delimiters to u
| Property | Description |
|-|-|
| **UNB1.1 (System Identifier)** | The EDIFACT character set to apply to the outbound interchange. |
-| **Schema** | The previously uploaded schema that you want to use in from either resource type: <p>- Consumption: An integration account linked to your logic app. <p>- Standard: Your logic app resource <p>For the selected schema, select the separators set that you want to use, based on the following separator descriptions. After you finish setting up a schema row, the next row automatically appears. |
+| **Schema** | The previously uploaded schema that you want to use, from either resource type: <br><br>- Consumption: An integration account linked to your logic app. <br><br>- Standard: Your logic app resource <br><br>For the selected schema, select the separators set that you want to use, based on the following separator descriptions. After you finish setting up a schema row, the next row automatically appears. |
| **Input Type** | The input type for the message. |
| **Component Separator** | A single character to use for separating composite data elements. |
| **Data Element Separator** | A single character to use for separating simple data elements within composite data elements. |
Other than the character set, you can specify a different set of delimiters to u
| **UNA5 (Repetition Separator)** | A value to use for the repetition separator that separates segments that repeat within a transaction set. |
| **Segment Terminator** | A single character that indicates the end in an EDI segment. |
| **Suffix** | The character to use with the segment identifier. If you designate a suffix, the segment terminator data element can be empty. If the segment terminator is left empty, you have to designate a suffix. |
-|||
### Control Numbers
Other than the character set, you can specify a different set of delimiters to u
| **UNB5 (Interchange Control Number)** | A prefix, a range of values to use as the interchange control number, and a suffix. These values are used to generate an outbound interchange. The control number is required, but the prefix and suffix are optional. The control number is incremented for each new message, while the prefix and suffix stay the same. |
| **UNG5 (Group Control Number)** | A prefix, a range of values to use as the interchange control number, and a suffix. These values are used to generate the group control number. The control number is required, but the prefix and suffix are optional. The control number is incremented for each new message until the maximum value is reached, while the prefix and suffix stay the same. |
| **UNH1 (Message Header Reference Number)** | A prefix, a range of values for the interchange control number, and a suffix. These values are used to generate the message header reference number. The reference number is required, but the prefix and suffix are optional. The reference number is incremented for each new message, while the prefix and suffix stay the same. |
-|||
### Validation
After you finish setting up a validation row, the next row automatically appears
| **Extended Validation** | If the data type isn't EDI, run validation on the data element requirement and allowed repetition, enumerations, and data element length validation (min/max). |
| **Allow Leading/Trailing Zeroes** | Keep any extra leading or trailing zero and space characters. Don't remove these characters. |
| **Trim Leading/Trailing Zeroes** | Remove leading or trailing zero characters. |
-| **Trailing Separator Policy** | Generate trailing separators. <p>- **Not Allowed**: Prohibit trailing delimiters and separators in the sent interchange. If the interchange has trailing delimiters and separators, the interchange is declared not valid. <p>- **Optional**: Send interchanges with or without trailing delimiters and separators. <p>- **Mandatory**: The sent interchange must have trailing delimiters and separators. |
-|||
+| **Trailing Separator Policy** | Generate trailing separators. <br><br>- **Not Allowed**: Prohibit trailing delimiters and separators in the sent interchange. If the interchange has trailing delimiters and separators, the interchange is declared not valid. <br><br>- **Optional**: Send interchanges with or without trailing delimiters and separators. <br><br>- **Mandatory**: The sent interchange must have trailing delimiters and separators. |
## Next steps
-[Exchange EDIFACT messages](../logic-apps/logic-apps-enterprise-integration-edifact.md)
+[Exchange EDIFACT messages](logic-apps-enterprise-integration-edifact.md)
logic-apps Logic Apps Enterprise Integration Edifact https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-enterprise-integration-edifact.md
Previously updated : 09/29/2021 Last updated : 08/15/2023 # Exchange EDIFACT messages using workflows in Azure Logic Apps
-To send and receive EDIFACT messages in workflows that you create using Azure Logic Apps, use the **EDIFACT** connector, which provides triggers and actions that support and manage EDIFACT communication.
+To send and receive EDIFACT messages in workflows that you create using Azure Logic Apps, use the **EDIFACT** connector, which provides operations that support and manage EDIFACT communication.
-This article shows how to add the EDIFACT encoding and decoding actions to an existing logic app workflow. Although you can use any trigger to start your workflow, the examples use the [Request](../connectors/connectors-native-reqres.md) trigger. For more information about the **EDIFACT** connector's triggers, actions, and limits version, review the [connector's reference page](/connectors/edifact/) as documented by the connector's Swagger file.
+This how-to guide shows how to add the EDIFACT encoding and decoding actions to an existing logic app workflow. The **EDIFACT** connector doesn't include any triggers, so you can use any trigger to start your workflow. The examples in this guide use the [Request trigger](../connectors/connectors-native-reqres.md).
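For reference, a minimal Request trigger in the underlying workflow definition resembles the following sketch. The trigger name **manual** is the designer's default name, and the empty schema accepts any JSON payload:

```json
"triggers": {
    "manual": {
        "type": "Request",
        "kind": "Http",
        "inputs": {
            "schema": {}
        }
    }
}
```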
-![Overview screenshot showing the "Decode EDIFACT message" operation with the message decoding properties.](./media/logic-apps-enterprise-integration-edifact/overview-edifact-message-consumption.png)
+## Connector technical reference
-## EDIFACT encoding and decoding
+The **EDIFACT** connector has one version across workflows in [multi-tenant Azure Logic Apps, single-tenant Azure Logic Apps, and the integration service environment (ISE)](logic-apps-overview.md#resource-environment-differences). For technical information about the **EDIFACT** connector, see the following documentation:
-The following sections describe the tasks that you can complete using the EDIFACT encoding and decoding actions.
+* [Connector reference page](/connectors/edifact/), which describes the triggers, actions, and limits as documented by the connector's Swagger file
+
+* [B2B protocol limits for message sizes](logic-apps-limits-and-config.md#b2b-protocol-limits)
+
+ For example, in an [integration service environment (ISE)](connect-virtual-network-vnet-isolated-environment-overview.md), this connector's ISE version uses the [B2B message limits for ISE](logic-apps-limits-and-config.md#b2b-protocol-limits).
+
+The following sections provide more information about the tasks that you can complete using the EDIFACT encoding and decoding actions.
### Encode to EDIFACT message action
+This action performs the following tasks:
+
* Resolve the agreement by matching the sender qualifier and identifier along with the receiver qualifier and identifier.

* Serialize the Electronic Data Interchange (EDI), which converts XML-encoded messages into EDI transaction sets in the interchange.
The following sections describe the tasks that you can complete using the EDIFAC
### Decode EDIFACT message action
+This action performs the following tasks:
+
* Validate the envelope against the trading partner agreement.

* Resolve the agreement by matching the sender qualifier and identifier along with the receiver qualifier and identifier.
The following sections describe the tasks that you can complete using the EDIFAC
* A functional acknowledgment that acknowledges the acceptance or rejection for the received interchange or group.
-## Connector reference
-
-For technical information about the **EDIFACT** connector, review the [connector's reference page](/connectors/edifact/), which describes the triggers, actions, and limits as documented by the connector's Swagger file. Also, review the [B2B protocol limits for message sizes](logic-apps-limits-and-config.md#b2b-protocol-limits) for workflows running in [multi-tenant Azure Logic Apps, single-tenant Azure Logic Apps, or the integration service environment (ISE)](logic-apps-overview.md#resource-environment-differences). For example, in an [integration service environment (ISE)](connect-virtual-network-vnet-isolated-environment-overview.md), this connector's ISE version uses the [B2B message limits for ISE](logic-apps-limits-and-config.md#b2b-protocol-limits).
## Prerequisites

* An Azure account and subscription. If you don't have a subscription yet, [sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).

* An [integration account resource](logic-apps-enterprise-integration-create-integration-account.md) where you define and store artifacts, such as trading partners, agreements, certificates, and so on, for use in your enterprise integration and B2B workflows. This resource has to meet the following requirements:
- * Is associated with the same Azure subscription as your logic app resource.
-
- * Exists in the same location or Azure region as your logic app resource.
+ * Both your integration account and logic app resource must exist in the same Azure subscription and Azure region.
- * When you use the [**Logic App (Consumption)** resource type](logic-apps-overview.md#resource-environment-differences) and the **EDIFACT** operations, your logic app resource doesn't need a link to your integration account. However, you still need this account to store artifacts, such as partners, agreements, and certificates, along with using the EDIFACT, [X12](logic-apps-enterprise-integration-x12.md), or [AS2](logic-apps-enterprise-integration-as2.md) operations. Your integration account still has to meet other requirements, such as using the same Azure subscription and existing in the same location as your logic app resource.
+ * Defines at least two [trading partners](logic-apps-enterprise-integration-partners.md) that participate in the **EDIFACT** operation used in your workflow. The definitions for both partners must use the same *business identity* qualifier, which is **ZZZ - Mutually Defined** for this scenario.
- * When you use the [**Logic App (Standard)** resource type](logic-apps-overview.md#resource-environment-differences) and the **EDIFACT** operations, your workflow requires a connection to your integration account that you create directly from your workflow when you add the AS2 operation.
+ * Defines an [EDIFACT agreement](logic-apps-enterprise-integration-agreements.md) between the trading partners that participate in your workflow. Each agreement requires a host partner and a guest partner. The content in the messages between you and the other partner must match the agreement type. For information about agreement settings to use when receiving and sending messages, see [EDIFACT message settings](logic-apps-enterprise-integration-edifact-message-settings.md).
-* At least two [trading partners](logic-apps-enterprise-integration-partners.md) in your integration account. The definitions for both partners must use the same *business identity* qualifier, which is **ZZZ - Mutually Defined** for this scenario.
+ > [!IMPORTANT]
+ >
+ > The EDIFACT connector supports only UTF-8 characters. If your output contains
+ > unexpected characters, check that your EDIFACT messages use the UTF-8 character set.
-* An [EDIFACT agreement](logic-apps-enterprise-integration-agreements.md) in your integration account between the trading partners that participate in your workflow. Each agreement requires a host partner and a guest partner. The content in the messages between you and the other partner must match the agreement type.
+* Based on whether you're working on a Consumption or Standard logic app workflow, your logic app resource might require a link to your integration account:
- > [!IMPORTANT]
- > The EDIFACT connector supports only UTF-8 characters. If your output contains
- > unexpected characters, check that your EDIFACT messages use the UTF-8 character set.
+ | Logic app workflow | Link required? |
+ |--|-|
+ | Consumption | Connection to integration account required, but no link required. You can create the connection when you add the **EDIFACT** operation to your workflow. |
+ | Standard | Connection to integration account required, but no link required. You can create the connection when you add the **EDIFACT** operation to your workflow. |
* The logic app resource and workflow where you want to use the EDIFACT operations.
For technical information about the **EDIFACT** connector, review the [connector
1. In the [Azure portal](https://portal.azure.com), open your logic app resource and workflow in the designer.
-1. On the designer, under the trigger or action where you want to add the EDIFACT action, select **New step**.
-
-1. Under the **Choose an operation** search box, select **All**. In the search box, enter `edifact encode`. For this example, select the action named **Encode to EDIFACT message by agreement name**.
-
- ![Screenshot showing the Azure portal, workflow designer, and "Encode to EDIFACT message by agreement name" action selected.](./media/logic-apps-enterprise-integration-edifact/select-encode-edifact-message-consumption.png)
+1. In the designer, [follow these general steps to add the **EDIFACT** action named **Encode to EDIFACT message by agreement name** to your workflow](create-workflow-with-trigger-or-action.md?tabs=consumption#add-action).
> [!NOTE]
- > You can choose to select the **Encode to EDIFACT message by identities** action instead, but you later have to
- > provide different values, such as the **Sender identifier** and **Receiver identifier** that's specified by
- > your EDIFACT agreement. You also have to specify the **XML message to encode**, which can be the output from
- > the trigger or a preceding action.
+ >
+ > If you want to use the **Encode to EDIFACT message by identities** action instead,
+ > you later have to provide different values, such as the **Sender identifier**
+ > and **Receiver identifier** that are specified by your EDIFACT agreement.
+ > You also have to specify the **XML message to encode**, which can be the output
+ > from the trigger or a preceding action.
-1. When prompted to create a connection to your integration account, provide the following information:
+1. When prompted, provide the following connection information for your integration account:
| Property | Required | Description |
|-|-|-|
| **Connection name** | Yes | A name for the connection |
| **Integration account** | Yes | From the list of available integration accounts, select the account to use. |
- ||||
For example:
For technical information about the **EDIFACT** connector, review the [connector
1. When you're done, select **Create**.
-1. After the EDIFACT operation appears on the designer, provide information for the following properties specific to this operation:
+1. In the EDIFACT action information box, provide the following property values:
| Property | Required | Description |
|-|-|-|
| **Name of EDIFACT agreement** | Yes | The EDIFACT agreement to use. |
| **XML message to encode** | Yes | The XML message to encode, which can be the output from the trigger or a preceding action. |
| Other parameters | No | This operation includes the following other parameters: <p>- **Data element separator** <br>- **Release indicator** <br>- **Component separator** <br>- **Repetition separator** <br>- **Segment terminator** <br>- **Segment terminator suffix** <br>- **Decimal indicator** <p>For more information, review [EDIFACT message settings](logic-apps-enterprise-integration-edifact-message-settings.md). |
- ||||
For example, the XML message payload can be the **Body** content output from the Request trigger:
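In code view, the resulting encode action might look similar to the following sketch. This is only an illustration, not the connector's literal contract: the `path` value, the body property names (`content`, `agreementName`), and the agreement name `Contoso-EDIFACT-Agreement` are assumptions to verify against your own workflow's code view.

```json
"Encode_to_EDIFACT_message_by_agreement_name": {
    "type": "ApiConnection",
    "inputs": {
        "host": {
            "connection": {
                "name": "@parameters('$connections')['edifact']['connectionId']"
            }
        },
        "method": "post",
        "path": "/encode",
        "body": {
            "content": "@triggerBody()",
            "agreementName": "Contoso-EDIFACT-Agreement"
        }
    },
    "runAfter": {}
}
```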
For technical information about the **EDIFACT** connector, review the [connector
1. In the [Azure portal](https://portal.azure.com), open your logic app resource and workflow in the designer.
-1. On the designer, under the trigger or action where you want to add the EDIFACT action, select **Insert a new step** (plus sign), and then select **Add an action**.
-
-1. Under the **Choose an operation** search box, select **Azure**. In the search box, enter `edifact encode`. Select the action named **Encode to EDIFACT message by agreement name**.
-
- ![Screenshot showing the Azure portal, workflow designer, and "Encode to EDIFACT message by agreement name" operation selected.](./media/logic-apps-enterprise-integration-edifact/select-encode-edifact-message-standard.png)
+1. In the designer, [follow these general steps to add the **EDIFACT** action named **Encode to EDIFACT message by agreement name** to your workflow](create-workflow-with-trigger-or-action.md?tabs=standard#add-action).
> [!NOTE]
- > You can choose to select the **Encode to EDIFACT message by identities** action instead, but you later have to
- > provide different values, such as the **Sender identifier** and **Receiver identifier** that's specified by
- > your EDIFACT agreement. You also have to specify the **XML message to encode**, which can be the output from
- > the trigger or a preceding action.
+ >
+ > If you want to use the **Encode to EDIFACT message by identities** action instead,
+ > you later have to provide different values, such as the **Sender identifier**
+ > and **Receiver identifier** that are specified by your EDIFACT agreement.
+ > You also have to specify the **XML message to encode**, which can be the output
+ > from the trigger or a preceding action.
-1. When prompted to create a connection to your integration account, provide the following information:
+1. When prompted, provide the following connection information for your integration account:
| Property | Required | Description |
|-|-|-|
| **Connection name** | Yes | A name for the connection |
| **Integration account** | Yes | From the list of available integration accounts, select the account to use. |
- ||||
For example:
For technical information about the **EDIFACT** connector, review the [connector
1. When you're done, select **Create**.
-1. After the EDIFACT details pane appears on the designer, provide information for the following properties:
+1. In the EDIFACT action information box, provide the following property values:
| Property | Required | Description |
|-|-|-|
| **Name of EDIFACT agreement** | Yes | The EDIFACT agreement to use. |
| **XML message to encode** | Yes | The XML message to encode, which can be the output from the trigger or a preceding action. |
| Other parameters | No | This operation includes the following other parameters: <p>- **Data element separator** <br>- **Release indicator** <br>- **Component separator** <br>- **Repetition separator** <br>- **Segment terminator** <br>- **Segment terminator suffix** <br>- **Decimal indicator** <p>For more information, review [EDIFACT message settings](logic-apps-enterprise-integration-edifact-message-settings.md). |
- ||||
For example, the message payload is the **Body** content output from the Request trigger:
For technical information about the **EDIFACT** connector, review the [connector
1. In the [Azure portal](https://portal.azure.com), open your logic app resource and workflow in the designer.
-1. On the designer, under the trigger or action where you want to add the EDIFACT action, select **New step**.
+1. In the designer, [follow these general steps to add the **EDIFACT** action named **Decode EDIFACT message** to your workflow](create-workflow-with-trigger-or-action.md?tabs=consumption#add-action).
-1. Under the **Choose an operation** search box, select **All**. In the search box, enter `edifact encode`. Select the action named **Decode EDIFACT message**.
-
-1. When prompted to create a connection to your integration account, provide the following information:
+1. When prompted, provide the following connection information for your integration account:
| Property | Required | Description |
|-|-|-|
| **Connection name** | Yes | A name for the connection |
| **Integration account** | Yes | From the list of available integration accounts, select the account to use. |
- ||||
For example:
For technical information about the **EDIFACT** connector, review the [connector
1. When you're done, select **Create**.
-1. After the EDIFACT operation appears on the designer, provide information for the following properties specific to this operation:
+1. In the EDIFACT action information box, provide the following property values:
| Property | Required | Description |
|-|-|-|
| **EDIFACT flat file message to decode** | Yes | The EDIFACT flat file message to decode, which can be the output from the trigger or a preceding action. |
| Other parameters | No | This operation includes the following other parameters: <p>- **Component separator** <br>- **Data element separator** <br>- **Release indicator** <br>- **Repetition separator** <br>- **Segment terminator** <br>- **Segment terminator suffix** <br>- **Decimal indicator** <br>- **Payload character set** <br>- **Preserve Interchange** <br>- **Suspend Interchange On Error** <p>For more information, review [EDIFACT message settings](logic-apps-enterprise-integration-edifact-message-settings.md). |
- ||||
For example, the XML message payload to decode can be the **Body** content output from the Request trigger:
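In code view, the decode action follows the same pattern as the encode sketch earlier; again, the `path` value and body property names are assumptions to check against your own workflow's code view rather than the connector's literal contract.

```json
"Decode_EDIFACT_message": {
    "type": "ApiConnection",
    "inputs": {
        "host": {
            "connection": {
                "name": "@parameters('$connections')['edifact']['connectionId']"
            }
        },
        "method": "post",
        "path": "/decode",
        "body": {
            "content": "@triggerBody()"
        }
    },
    "runAfter": {}
}
```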
For technical information about the **EDIFACT** connector, review the [connector
1. In the [Azure portal](https://portal.azure.com), open your logic app resource and workflow in the designer.
-1. On the designer, under the trigger or action where you want to add the EDIFACT action, select **Insert a new step** (plus sign), and then select **Add an action**.
-
-1. Under the **Choose an operation** search box, select **Azure**. In the search box, enter `edifact encode`. Select the action named **Decode EDIFACT message**.
-
- ![Screenshot showing the Azure portal, workflow designer, and "Decode EDIFACT message" operation selected.](./media/logic-apps-enterprise-integration-edifact/select-decode-edifact-message-standard.png)
+1. In the designer, [follow these general steps to add the **EDIFACT** action named **Decode EDIFACT message** to your workflow](create-workflow-with-trigger-or-action.md?tabs=standard#add-action).
-1. When prompted to create a connection to your integration account, provide the following information:
+1. When prompted, provide the following connection information for your integration account:
| Property | Required | Description |
|-|-|-|
| **Connection name** | Yes | A name for the connection |
| **Integration account** | Yes | From the list of available integration accounts, select the account to use. |
- ||||
For example:
For technical information about the **EDIFACT** connector, review the [connector
1. When you're done, select **Create**.
-1. After the EDIFACT details pane appears on the designer, provide information for the following properties:
+1. In the EDIFACT action information box, provide the following property values:
| Property | Required | Description |
|-|-|-|
| **EDIFACT flat file message to decode** | Yes | The EDIFACT flat file message to decode, which can be the output from the trigger or a preceding action. |
| Other parameters | No | This operation includes the following other parameters: <p>- **Component separator** <br>- **Data element separator** <br>- **Release indicator** <br>- **Repetition separator** <br>- **Segment terminator** <br>- **Segment terminator suffix** <br>- **Decimal indicator** <br>- **Payload character set** <br>- **Preserve Interchange** <br>- **Suspend Interchange On Error** <p>For more information, review [EDIFACT message settings](logic-apps-enterprise-integration-edifact-message-settings.md). |
- ||||
For example, the message payload is the **Body** content output from the Request trigger:
For technical information about the **EDIFACT** connector, review the [connector
## Handle UNH2.5 segments in EDIFACT documents
-In an EDIFACT document, the [UNH2.5 segment](logic-apps-enterprise-integration-edifact-message-settings.md#receive-settings-schemas) is used for used for schema lookup. For example, in this sample EDIFACT message, the UNH field is `EAN008`:
+In an EDIFACT document, the [UNH2.5 segment](logic-apps-enterprise-integration-edifact-message-settings.md#receive-settings-schemas) is used for schema lookup. For example, in this sample EDIFACT message, the UNH field is `EAN008`:
`UNH+SSDD1+ORDERS:D:03B:UN:EAN008`
To handle an EDIFACT document or process an EDIFACT message that has a UN2.5 seg
For example, suppose the schema root name for the sample UNH field is `EFACT_D03B_ORDERS_EAN008`. For each `D03B_ORDERS` that has a different UNH2.5 segment, you have to deploy an individual schema.
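As a sketch of how such a schema reference might appear if you edit the agreement as JSON: the property names below mirror the EDIFACT agreement's schema reference shape (`messageId`, `messageVersion`, `messageRelease`, `associationAssignedCode`, `schemaName`) and are assumptions to verify against your own agreement definition, where `associationAssignedCode` carries the UNH2.5 value:

```json
"schemaReferences": [
    {
        "messageId": "ORDERS",
        "messageVersion": "D",
        "messageRelease": "03B",
        "associationAssignedCode": "EAN008",
        "schemaName": "EFACT_D03B_ORDERS_EAN008"
    }
]
```

Each distinct UNH2.5 value would then get its own `schemaReferences` entry and its own deployed schema.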
-1. In the [Azure portal](https://portal.azure.com), add the schema to your integration account resource or logic app resource, which is based on whether you're working with the **Logic App (Consumption)** or **Logic App (Standard)** resource type respectively.
+1. In the [Azure portal](https://portal.azure.com), add the schema to your integration account resource or logic app resource, based on whether you have a Consumption or Standard logic app workflow, respectively.
1. Whether you're using the EDIFACT decoding or encoding action, upload your schema and set up the schema settings in your EDIFACT agreement's **Receive Settings** or **Send Settings** sections respectively.
logic-apps Logic Apps Enterprise Integration Liquid Transform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-enterprise-integration-liquid-transform.md
For more information, review the following documentation:
<a name="create-template"></a>
-## Step 1 - Create the template
+## Step 1: Create the template
Before you can perform a Liquid transformation in your logic app workflow, you must first create a Liquid template that defines the mapping that you want.
Before you can perform a Liquid transformation in your logic app workflow, you m
<a name="upload-template"></a>
-## Step 2 - Upload Liquid template
+## Step 2: Upload Liquid template
After you create your Liquid template, you now have to upload the template based on the following scenario:
After you create your Liquid template, you now have to upload the template based
After your map file finishes uploading, the map appears in the **Maps** list. On your integration account's **Overview** page, under **Artifacts**, your uploaded map also appears.
-## Step 3 - Add the Liquid transformation action
+## Step 3: Add the Liquid transformation action
The following steps show how to add a Liquid transformation action for Consumption and Standard logic app workflows.
logic-apps Logic Apps Enterprise Integration X12 997 Acknowledgment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-enterprise-integration-x12-997-acknowledgment.md
Previously updated : 08/20/2022 Last updated : 08/15/2023 # 997 functional acknowledgments and error codes for X12 messages in Azure Logic Apps
The optional AK3 segment reports errors in a data segment and identifies the loc
|-|-|
| AK301 | Mandatory, identifies the segment in error with the X12 segment ID, for example, NM1. |
| AK302 | Mandatory, identifies the segment count of the segment in error. The ST segment is `1`, and each segment increments the segment count by one. |
-| AK303 | Mandatory, identifies a bounded loop, which is a loop surrounded by an Loop Start (LS) segment and a Loop End (LE) segment. AK303 contains the values of the LS and LE segments that bound the segment in error. |
+| AK303 | Mandatory, identifies a bounded loop, which is a loop surrounded by a Loop Start (LS) segment and a Loop End (LE) segment. AK303 contains the values of the LS and LE segments that bound the segment in error. |
| AK304 | Optional, specifies the code for the error in the data segment. Although AK304 is optional, the element is required when an error exists for the identified segment. For AK304 error codes, review [997 ACK error codes - Data Segment Note](#997-ack-error-codes). | |||
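For example, a hypothetical acknowledgment segment such as `AK3*NM1*7**8~` would read as follows: AK301 is `NM1` (the segment in error), AK302 is `7` (the segment count), AK303 is empty because the segment isn't inside a bounded loop, and AK304 is `8`, the error code that indicates the segment has data element errors. The segment shown here is an illustration, not output copied from a real acknowledgment.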
logic-apps Logic Apps Enterprise Integration X12 Decode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-enterprise-integration-x12-decode.md
- Title: Decode X12 messages
-description: Validate EDI and generate acknowledgements with X12 message decoder in Azure Logic Apps with Enterprise Integration Pack.
----- Previously updated : 01/27/2017--
-# Decode X12 messages in Azure Logic Apps with Enterprise Integration Pack
-
-With the Decode X12 message connector, you can validate the envelope against a trading partner agreement, validate EDI and partner-specific properties, split interchanges into transactions sets or preserve entire interchanges, and generate acknowledgments for processed transactions.
-To use this connector, you must add the connector to an existing trigger in your logic app.
-
-## Before you start
-
-Here's the items you need:
-
-* An Azure account; you can create a [free account](https://azure.microsoft.com/free)
-* An [integration account](logic-apps-enterprise-integration-create-integration-account.md)
-that's already defined and associated with your Azure subscription.
-You must have an integration account to use the Decode X12 message connector.
-* At least two [partners](logic-apps-enterprise-integration-partners.md)
-that are already defined in your integration account
-* An [X12 agreement](logic-apps-enterprise-integration-x12.md)
-that's already defined in your integration account
-
-## Decode X12 messages
-
-1. Create a logic app workflow. For more information, see the following documentation:
-
- * [Create an example Consumption logic app workflow in multi-tenant Azure Logic Apps](quickstart-create-example-consumption-workflow.md)
-
- * [Create an example Standard logic app workflow in single-tenant Azure Logic Apps](create-single-tenant-workflows-azure-portal.md)
-
-2. The Decode X12 message connector doesn't have triggers,
-so you must add a trigger for starting your logic app, like a Request trigger.
-In the Logic App Designer, add a trigger, and then add an action to your logic app.
-
-3. In the search box, enter "x12" for your filter.
-Select **X12 - Decode X12 message**.
-
- ![Search for "x12"](media/logic-apps-enterprise-integration-x12-decode/x12decodeimage1.png)
-
-3. If you didn't previously create any connections to your integration account,
-you're prompted to create that connection now. Name your connection,
-and select the integration account that you want to connect.
-
- ![Provide integration account connection details](media/logic-apps-enterprise-integration-x12-decode/x12decodeimage4.png)
-
- Properties with an asterisk are required.
-
- | Property | Details |
- | | |
- | Connection Name * |Enter any name for your connection. |
- | Integration Account * |Enter a name for your integration account. Make sure that your integration account and logic app are in the same Azure location. |
-
-5. When you're done, your connection details should look similar to this example.
-To finish creating your connection, choose **Create**.
-
- ![integration account connection details](media/logic-apps-enterprise-integration-x12-decode/x12decodeimage5.png)
-
-6. After your connection is created, as shown in this example,
-select the X12 flat file message to decode.
-
- ![integration account connection created](media/logic-apps-enterprise-integration-x12-decode/x12decodeimage6.png)
-
- For example:
-
- ![Select X12 flat file message for decoding](media/logic-apps-enterprise-integration-x12-decode/x12decodeimage7.png)
-
- > [!NOTE]
- > The actual message content or payload for the message array, good or bad,
- > is base64 encoded. So, you must specify an expression that processes this content.
- > Here is an example that processes the content as XML that you can
- > enter in code view
- > or by using expression builder in the designer.
- > ``` json
- > "content": "@xml(base64ToBinary(item()?['Payload']))"
- > ```
- > ![Content example](media/logic-apps-enterprise-integration-x12-decode/content-example.png)
- >
--
-## X12 Decode details
-
-The X12 Decode connector performs these tasks:
-
-* Validates the envelope against trading partner agreement
-* Validates EDI and partner-specific properties
- * EDI structural validation, and extended schema validation
- * Validation of the structure of the interchange envelope.
- * Schema validation of the envelope against the control schema.
- * Schema validation of the transaction-set data elements against the message schema.
- * EDI validation performed on transaction-set data elements
-* Verifies that the interchange, group, and transaction set control numbers are not duplicates
- * Checks the interchange control number against previously received interchanges.
- * Checks the group control number against other group control numbers in the interchange.
- * Checks the transaction set control number against other transaction set control numbers in that group.
-* Splits the interchange into transaction sets, or preserves the entire interchange:
- * Split Interchange as transaction sets - suspend transaction sets on error:
- Splits interchange into transaction sets and parses each transaction set.
- The X12 Decode action outputs only those transaction sets
- that fail validation to `badMessages`, and outputs the remaining transactions sets to `goodMessages`.
- * Split Interchange as transaction sets - suspend interchange on error:
- Splits interchange into transaction sets and parses each transaction set.
- If one or more transaction sets in the interchange fail validation,
- the X12 Decode action outputs all the transaction sets in that interchange to `badMessages`.
- * Preserve Interchange - suspend transaction sets on error:
- Preserve the interchange and process the entire batched interchange.
- The X12 Decode action outputs only those transaction sets that fail validation to `badMessages`,
- and outputs the remaining transactions sets to `goodMessages`.
- * Preserve Interchange - suspend interchange on error:
- Preserve the interchange and process the entire batched interchange.
- If one or more transaction sets in the interchange fail validation,
- the X12 Decode action outputs all the transaction sets in that interchange to `badMessages`.
-* Generates a Technical and/or Functional acknowledgment (if configured).
- * A Technical Acknowledgment generates as a result of header validation. The technical acknowledgment reports the status of the processing of an interchange header and trailer by the address receiver.
- * A Functional Acknowledgment generates as a result of body validation. The functional acknowledgment reports each error encountered while processing the received document
-
-## View the swagger
-See the [swagger details](/connectors/x12/).
-
-## Next steps
-[Learn more about the Enterprise Integration Pack](../logic-apps/logic-apps-enterprise-integration-overview.md "Learn about Enterprise Integration Pack")
-
logic-apps Logic Apps Enterprise Integration X12 Encode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-enterprise-integration-x12-encode.md
- Title: Encode X12 messages
-description: Validate EDI and convert XML-encoded messages with X12 message encoder in Azure Logic Apps with Enterprise Integration Pack.
----- Previously updated : 01/27/2017--
-# Encode X12 messages in Azure Logic Apps with Enterprise Integration Pack
-
-With the Encode X12 message connector, you can validate EDI and partner-specific properties,
-convert XML-encoded messages into EDI transaction sets in the interchange,
-and request a Technical Acknowledgement, Functional Acknowledgment, or both.
-To use this connector, you must add the connector to an existing trigger in your logic app.
-
-## Before you start
-
-Here's the items you need:
-
-* An Azure account; you can create a [free account](https://azure.microsoft.com/free)
-* An [integration account](logic-apps-enterprise-integration-create-integration-account.md)
-that's already defined and associated with your Azure subscription.
-You must have an integration account to use the Encode X12 message connector.
-* At least two [partners](logic-apps-enterprise-integration-partners.md)
-that are already defined in your integration account
-* An [X12 agreement](logic-apps-enterprise-integration-x12.md)
-that's already defined in your integration account
-
-## Encode X12 messages
-
-1. Create a logic app workflow. For more information, see the following documentation:
-
- * [Create an example Consumption logic app workflow in multi-tenant Azure Logic Apps](quickstart-create-example-consumption-workflow.md)
-
- * [Create an example Standard logic app workflow in single-tenant Azure Logic Apps](create-single-tenant-workflows-azure-portal.md)
-
-2. The Encode X12 message connector doesn't have triggers,
-so you must add a trigger for starting your logic app, like a Request trigger.
-In the Logic App Designer, add a trigger, and then add an action to your logic app.
-
-3. In the search box, enter "x12" for your filter.
-Select either **X12 - Encode to X12 message by agreement name**
-or **X12 - Encode to X12 message by identities**.
-
- ![Search for "x12"](./media/logic-apps-enterprise-integration-x12-encode/x12decodeimage1.png)
-
-3. If you didn't previously create any connections to your integration account,
-you're prompted to create that connection now. Name your connection,
-and select the integration account that you want to connect.
-
- ![integration account connection](./media/logic-apps-enterprise-integration-x12-encode/x12encodeimage1.png)
-
- Properties with an asterisk are required.
-
- | Property | Details |
- | | |
- | Connection Name * |Enter any name for your connection. |
- | Integration Account * |Enter a name for your integration account. Make sure that your integration account and logic app are in the same Azure location. |
-
-5. When you're done, your connection details should look similar to this example.
-To finish creating your connection, choose **Create**.
-
- ![integration account connection created](./media/logic-apps-enterprise-integration-x12-encode/x12encodeimage2.png)
-
- Your connection is now created.
-
- ![integration account connection details](./media/logic-apps-enterprise-integration-x12-encode/x12encodeimage3.png)
-
-#### Encode X12 messages by agreement name
-
-If you chose to encode X12 messages by agreement name,
-open the **Name of X12 agreement** list,
-enter or select your existing X12 agreement. Enter the XML message to encode.
-
-![Enter X12 agreement name and XML message to encode](./media/logic-apps-enterprise-integration-x12-encode/x12encodeimage4.png)
-
-#### Encode X12 messages by identities
-
-If you choose to encode X12 messages by identities, enter the sender identifier,
-sender qualifier, receiver identifier, and receiver qualifier as
-configured in your X12 agreement. Select the XML message to encode.
-
-![Provide identities for sender and receiver, select XML message to encode](./media/logic-apps-enterprise-integration-x12-encode/x12encodeimage5.png)
-
-## X12 Encode details
-
-The X12 Encode connector performs these tasks:
-
-* Agreement resolution by matching sender and receiver context properties.
-* Serializes the EDI interchange, converting XML-encoded messages into EDI transaction sets in the interchange.
-* Applies transaction set header and trailer segments
-* Generates an interchange control number, a group control number, and a transaction set control number for each outgoing interchange
-* Replaces separators in the payload data
-* Validates EDI and partner-specific properties
- * Schema validation of the transaction-set data elements against the message Schema
- * EDI validation performed on transaction-set data elements.
- * Extended validation performed on transaction-set data elements
-* Requests a Technical and/or Functional acknowledgment (if configured).
- * A Technical Acknowledgment generates as a result of header validation. The technical acknowledgment reports the status of the processing of an interchange header and trailer by the address receiver
- * A Functional Acknowledgment generates as a result of body validation. The functional acknowledgment reports each error encountered while processing the received document
-
-## View the swagger
-See the [swagger details](/connectors/x12/).
-
-## Next steps
-[Learn more about the Enterprise Integration Pack](logic-apps-enterprise-integration-overview.md "Learn about Enterprise Integration Pack")
-
logic-apps Logic Apps Enterprise Integration X12 Message Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-enterprise-integration-x12-message-settings.md
+
+ Title: X12 message settings
+description: Reference guide for X12 message settings in agreements for Azure Logic Apps with Enterprise Integration Pack.
+
+ms.suite: integration
++++ Last updated : 08/15/2023++
+# Reference for X12 message settings in agreements for Azure Logic Apps
++
+This reference describes the properties that you can set in an X12 agreement for specifying how to handle messages between [trading partners](logic-apps-enterprise-integration-partners.md). Set up these properties based on your agreement with the partner that exchanges messages with you.
+
+<a name="x12-inbound-messages"></a>
+
+## X12 Receive Settings
+
+![Screenshot showing Azure portal and X12 agreement settings for inbound messages.](./media/logic-apps-enterprise-integration-x12-message-settings/x12-receive-settings.png)
+
+<a name="inbound-identifiers"></a>
+
+### Identifiers
+
+| Property | Description |
+|-|-|
+| **ISA1 (Authorization Qualifier)** | The Authorization Qualifier value that you want to use. The default value is **00 - No Authorization Information Present**. <br><br>**Note**: If you select other values, specify a value for the **ISA2** property. |
+| **ISA2** | The Authorization Information value to use when the **ISA1** property is not **00 - No Authorization Information Present**. This property value must have a minimum of one alphanumeric character and a maximum of 10. |
+| **ISA3 (Security Qualifier)** | The Security Qualifier value that you want to use. The default value is **00 - No Security Information Present**. <br><br>**Note**: If you select other values, specify a value for the **ISA4** property. |
+| **ISA4** | The Security Information value to use when the **ISA3** property is not **00 - No Security Information Present**. This property value must have a minimum of one alphanumeric character and a maximum of 10. |
+
+<a name="inbound-acknowledgment"></a>
+
+### Acknowledgment
+
+| Property | Description |
+|-|-|
+| **TA1 Expected** | Return a technical acknowledgment (TA1) to the interchange sender. |
+| **FA Expected** | Return a functional acknowledgment (FA) to the interchange sender. <br><br>For the **FA Version** property, based on the schema version, select the 997 or 999 acknowledgments. <br><br>To enable generation of AK2 loops in functional acknowledgments for accepted transaction sets, select **Include AK2 / IK2 Loop**. |
+
+<a name="inbound-schemas"></a>
+
+### Schemas
+
+For this section, select a [schema](logic-apps-enterprise-integration-schemas.md) from your [integration account](logic-apps-enterprise-integration-create-integration-account.md) for each transaction type (ST01) and Sender Application (GS02). The EDI Receive Pipeline disassembles the incoming message by matching the values and schema that you set in this section with the values for ST01 and GS02 in the incoming message and with the schema of the incoming message. After you complete each row, a new empty row automatically appears.
+
+| Property | Description |
+|-|-|
+| **Version** | The X12 version for the schema |
+| **Transaction Type (ST01)** | The transaction type |
+| **Sender Application (GS02)** | The sender application |
+| **Schema** | The schema file that you want to use |
+
+<a name="inbound-envelopes"></a>
+
+### Envelopes
+
+| Property | Description |
+|-|-|
+| **ISA11 Usage** | The separator to use in a transaction set: <br><br>- **Standard Identifier**: Use a period (.) for decimal notation, rather than the decimal notation of the incoming document in the EDI Receive Pipeline. <br><br>- **Repetition Separator**: Specify the separator for repeated occurrences of a simple data element or a repeated data structure. For example, usually the caret (^) is used as the repetition separator. For HIPAA schemas, you can only use the caret. |
+
+<a name="inbound-control-numbers"></a>
+
+### Control Numbers
+
+| Property | Description |
+|-|-|
+| **Disallow Interchange control number duplicates** | Block duplicate interchanges. Check the interchange control number (ISA13) against previously received interchange control numbers. If a match is detected, the EDI Receive Pipeline doesn't process the interchange. <br><br>To specify the number of days to perform the check, enter a value for the **Check for duplicate ISA13 every (days)** property. |
+| **Disallow Group control number duplicates** | Block interchanges that have duplicate group control numbers. |
+| **Disallow Transaction set control number duplicates** | Block interchanges that have duplicate transaction set control numbers. |
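+
+As a sketch only: if you manage the agreement as JSON, these duplicate checks surface as validation settings similar to the following. Treat the property names as assumptions to verify against your own agreement definition.
+
+```json
+"validationSettings": {
+    "checkDuplicateInterchangeControlNumber": true,
+    "interchangeControlNumberValidityDays": 30,
+    "checkDuplicateGroupControlNumber": true,
+    "checkDuplicateTransactionSetControlNumber": true
+}
+```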
+
+<a name="inbound-validations"></a>
+
+### Validations
+
+The **Default** row shows the validation rules that are used for an EDI message type. If you want to define different rules, select each box where you want the rule set to **true**. After you complete each row, a new empty row automatically appears.
+
+| Property | Description |
+|-|-|
+| **Message Type** | The EDI message type |
+| **EDI Validation** | Perform EDI validation on data types as defined by the schema's EDI properties, length restrictions, empty data elements, and trailing separators. |
+| **Extended Validation** | If the data type isn't EDI, validation is on the data element requirement and allowed repetition, enumerations, and data element length validation (min or max). |
+| **Allow Leading/Trailing Zeroes** | Keep any additional leading or trailing zero and space characters. Don't remove these characters. |
+| **Trim Leading/Trailing Zeroes** | Remove any leading or trailing zero and space characters. |
+| **Trailing Separator Policy** | Generate trailing separators. <br><br>- **Not Allowed**: Prohibit trailing delimiters and separators in the inbound interchange. If the interchange has trailing delimiters and separators, the interchange is declared not valid. <br><br>- **Optional**: Accept interchanges with or without trailing delimiters and separators. <br><br>- **Mandatory**: The inbound interchange must have trailing delimiters and separators. |
+
+<a name="inbound-internal-settings"></a>
+
+### Internal Settings
+
+| Property | Description |
+|-|-|
+| **Convert implied decimal format Nn to a base 10 numeric value** | Convert an EDI number that is specified with the format "Nn" into a base-10 numeric value. For example, with the format N2, the value 12345 converts to 123.45 because "n" is the number of implied decimal places. |
+| **Create empty XML tags if trailing separators are allowed** | Have the interchange sender include empty XML tags for trailing separators. |
+| **Split Interchange as transaction sets - suspend transaction sets on error** | Parse each transaction set that's in an interchange into a separate XML document by applying the appropriate envelope to the transaction set. Suspend only the transactions where the validation fails. |
+| **Split Interchange as transaction sets - suspend interchange on error** | Parse each transaction set that's in an interchange into a separate XML document by applying the appropriate envelope. Suspend the entire interchange when one or more transaction sets in the interchange fail validation. |
+| **Preserve Interchange - suspend transaction sets on error** | Leave the interchange intact and create an XML document for the entire batched interchange. Suspend only the transaction sets that fail validation, but continue to process all other transaction sets. |
+| **Preserve Interchange - suspend interchange on error** | Leave the interchange intact and create an XML document for the entire batched interchange. Suspend the entire interchange when one or more transaction sets in the interchange fail validation. |
+
+<a name="x12-outbound-settings"></a>
+
+## X12 Send settings
+
+![Screenshot showing Azure portal and X12 agreement settings for outbound messages.](./media/logic-apps-enterprise-integration-x12-message-settings/x12-send-settings.png)
+
+<a name="outbound-identifiers"></a>
+
+### Identifiers
+
+| Property | Description |
+|-|-|
+| **ISA1 (Authorization Qualifier)** | The Authorization Qualifier value that you want to use. The default value is **00 - No Authorization Information Present**. <br><br>**Note**: If you select other values, specify a value for the **ISA2** property. |
+| **ISA2** | The Authorization Information value to use when the **ISA1** property is not **00 - No Authorization Information Present**. This property value must have a minimum of one alphanumeric character and a maximum of 10. |
+| **ISA3 (Security Qualifier)** | The Security Qualifier value that you want to use. The default value is **00 - No Security Information Present**. <br><br>**Note**: If you select other values, specify a value for the **ISA4** property. |
+| **ISA4** | The Security Information value to use when the **ISA3** property is not **00 - No Security Information Present**. This property value must have a minimum of one alphanumeric character and a maximum of 10. |
+
+<a name="outbound-acknowledgment"></a>
+
+### Acknowledgment
+
+| Property | Description |
+|-|-|
+| **TA1 Expected** | Return a technical acknowledgment (TA1) to the interchange sender. <br><br>This setting specifies that the host partner, who is sending the message, requests an acknowledgment from the guest partner in the agreement. These acknowledgments are expected by the host partner based on the agreement's Receive Settings. |
+| **FA Expected** | Return a functional acknowledgment (FA) to the interchange sender. For the **FA Version** property, based on the schema version, select the 997 or 999 acknowledgments. <br><br>This setting specifies that the host partner, who is sending the message, requests an acknowledgment from the guest partner in the agreement. These acknowledgments are expected by the host partner based on the agreement's Receive Settings. |
+
+<a name="outbound-schemas"></a>
+
+### Schemas
+
+For this section, select a [schema](../logic-apps/logic-apps-enterprise-integration-schemas.md) from your [integration account](./logic-apps-enterprise-integration-create-integration-account.md) for each transaction type (ST01). After you complete each row, a new empty row automatically appears.
+
+| Property | Description |
+|-|-|
+| **Version** | The X12 version for the schema |
+| **Transaction Type (ST01)** | The transaction type for the schema |
+| **Schema** | The schema file that you want to use. If you select the schema first, the version and transaction type are automatically set. |
+
+<a name="outbound-envelopes"></a>
+
+### Envelopes
+
+| Property | Description |
+|-|-|
+| **ISA11 Usage** | The separator to use in a transaction set: <br><br>- **Standard Identifier**: Use a period (.) for decimal notation, rather than the decimal notation of the outbound document in the EDI Send Pipeline. <br><br>- **Repetition Separator**: Specify the separator for repeated occurrences of a simple data element or a repeated data structure. For example, usually the caret (^) is used as the repetition separator. For HIPAA schemas, you can only use the caret. |
+
+<a name="outbound-control-version-number"></a>
+
+#### Control Version Number
+
+For this section, select a [schema](../logic-apps/logic-apps-enterprise-integration-schemas.md) from your [integration account](./logic-apps-enterprise-integration-create-integration-account.md) for each interchange. After you complete each row, a new empty row automatically appears.
+
+| Property | Description |
+|-|-|
+| **Control Version Number (ISA12)** | The version of the X12 standard |
+| **Usage Indicator (ISA15)** | The context of an interchange, which is either **Test** data, **Information** data, or **Production** data |
+| **Schema** | The schema to use for generating the GS and ST segments for an X12-encoded interchange that's sent to the EDI Send Pipeline. |
+| **GS1** | Optional, select the functional code. |
+| **GS2** | Optional, specify the application sender. |
+| **GS3** | Optional, specify the application receiver. |
+| **GS4** | Optional, select **CCYYMMDD** or **YYMMDD**. |
+| **GS5** | Optional, select **HHMM**, **HHMMSS**, or **HHMMSSdd**. |
+| **GS7** | Optional, select a value for the responsible agency. |
+| **GS8** | Optional, specify the schema document version. |
+
+<a name="outbound-control-numbers"></a>
+
+### Control Numbers
+
+| Property | Description |
+|-|-|
+| **Interchange Control Number (ISA13)** | The range of values for the interchange control number, which can have a minimum value of 1 and a maximum value of 999999999 |
+| **Group Control Number (GS06)** | The range of values for the group control number, which can have a minimum value of 1 and a maximum value of 999999999 |
+| **Transaction Set Control Number (ST02)** | The range of values for the transaction set control number, which can have a minimum value of 1 and a maximum value of 999999999 <br><br>- **Prefix**: Optional, an alphanumeric value <br>- **Suffix**: Optional, an alphanumeric value |
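+
+For example, if you edit the agreement as JSON, these ranges might appear as envelope settings like the following sketch; the property names are assumptions to verify against your own agreement definition:
+
+```json
+"envelopeSettings": {
+    "interchangeControlNumberLowerBound": 1,
+    "interchangeControlNumberUpperBound": 999999999,
+    "groupControlNumberLowerBound": 1,
+    "groupControlNumberUpperBound": 999999999,
+    "transactionSetControlNumberLowerBound": 1,
+    "transactionSetControlNumberUpperBound": 999999999,
+    "transactionSetControlNumberPrefix": "",
+    "transactionSetControlNumberSuffix": ""
+}
+```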
+
+<a name="outbound-character-sets-separators"></a>
+
+### Character Sets and Separators
+
+The **Default** row shows the character set that's used as delimiters for a message schema. If you don't want to use the **Default** character set, you can enter a different set of delimiters for each message type. After you complete each row, a new empty row automatically appears.
+
+> [!TIP]
+>
+> To provide special character values, edit the agreement as JSON and provide the ASCII value for the special character.
+
+| Property | Description |
+|-|-|
+| **Character Set to be used** | The X12 character set, which is either **Basic**, **Extended**, or **UTF8**. |
+| **Schema** | The schema that you want to use. After you select the schema, select the character set that you want to use, based on the separator descriptions below. |
+| **Input Type** | The input type for the character set |
+| **Component Separator** | A single character that separates composite data elements |
+| **Data Element Separator** | A single character that separates simple data elements within composite data |
+| **Replacement Character Separator** | A replacement character that replaces all separator characters in the payload data when generating the outbound X12 message |
+| **Segment Terminator** | A single character that indicates the end of an EDI segment |
+| **Suffix** | The character to use with the segment identifier. If you specify a suffix, the segment terminator data element can be empty. If the segment terminator is left empty, you must designate a suffix. |
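+
+Following the earlier tip about editing the agreement as JSON, the separators in this table might translate to framing settings like the following sketch, where each separator is given as its ASCII decimal value: 58 for `:`, 42 for `*`, 126 for `~`, and 36 for `$` as the replacement character. Treat the property names as assumptions to verify against your own agreement definition.
+
+```json
+"framingSettings": {
+    "characterSet": "Basic",
+    "componentSeparator": 58,
+    "dataElementSeparator": 42,
+    "segmentTerminator": 126,
+    "segmentTerminatorSuffix": "None",
+    "replaceSeparatorsInPayload": false,
+    "replaceCharacter": 36
+}
+```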
+
+<a name="outbound-validation"></a>
+
+### Validation
+
+The **Default** row shows the validation rules that are used for an EDI message type. If you want to define different rules, select each box where you want the rule set to **true**. After you complete each row, a new empty row automatically appears.
+
+| Property | Description |
+|-|-|
+| **Message Type** | The EDI message type |
+| **EDI Validation** | Perform EDI validation on data types as defined by the schema's EDI properties, length restrictions, empty data elements, and trailing separators. |
+| **Extended Validation** | If the data type isn't EDI, validation is on the data element requirement and allowed repetition, enumerations, and data element length validation (min or max). |
+| **Allow Leading/Trailing Zeroes** | Keep any additional leading or trailing zero and space characters. Don't remove these characters. |
+| **Trim Leading/Trailing Zeroes** | Remove any leading or trailing zero and space characters. |
+| **Trailing Separator Policy** | Generate trailing separators. <br><br>- **Not Allowed**: Prohibit trailing delimiters and separators in the outbound interchange. If the interchange has trailing delimiters and separators, the interchange is declared not valid. <br><br>- **Optional**: Send interchanges with or without trailing delimiters and separators. <br><br>- **Mandatory**: The outbound interchange must have trailing delimiters and separators. |
+
+<a name="hipaa-schemas"></a>
+
+## HIPAA schemas and message types
+
+When you work with HIPAA schemas and the 277 or 837 message types, you need to perform a few extra steps. The [document version numbers (GS8)](#outbound-control-version-number) for these message types have more than nine characters, for example, "005010X222A1". Also, some document version numbers map to variant message types. If you don't reference the correct message type in your schema and in your agreement, you get this error message:
+
+`"The message has an unknown document type and did not resolve to any of the existing schemas configured in the agreement."`
+
+This table lists the affected message types, any variants, and the document version numbers that map to those message types:
+
+| Message type or variant | Description | Document version number (GS8) |
+|-|--|-|
+| 277 | Health Care Information Status Notification | 005010X212 |
+| 837_I | Health Care Claim Institutional | 004010X096A1 <br>005010X223A1 <br>005010X223A2 |
+| 837_D | Health Care Claim Dental | 004010X097A1 <br>005010X224A1 <br>005010X224A2 |
+| 837_P | Health Care Claim Professional | 004010X098A1 <br>005010X222 <br>005010X222A1 |
+
+You also need to disable EDI validation when you use these document version numbers because their longer length otherwise causes an error that the character length is invalid.
+
+To specify these document version numbers and message types, follow these steps:
+
+1. In your HIPAA schema, replace the current message type with the variant message type for the document version number that you want to use.
+
+ For example, suppose you want to use document version number `005010X222A1` with the `837` message type. In your schema, replace each `"X12_00501_837"` value with the `"X12_00501_837_P"` value instead.
+
+ To update your schema, follow these steps:
+
+ 1. In the Azure portal, go to your integration account. Find and download your schema. Replace the message type and rename the schema file, and upload your revised schema to your integration account. For more information, see [Edit a schema](logic-apps-enterprise-integration-schemas.md#edit-schema).
+
+ 1. In your agreement's message settings, select the revised schema.
+
+1. In your agreement's `schemaReferences` object, add another entry that specifies the variant message type that matches your document version number.
+
+ For example, suppose you want to use document version number `005010X222A1` for the `837` message type. Your agreement has a `schemaReferences` section with these properties and values:
+
+ ```json
+ "schemaReferences": [
+ {
+ "messageId": "837",
+ "schemaVersion": "00501",
+ "schemaName": "X12_00501_837"
+ }
+ ]
+ ```
+
+ In this `schemaReferences` section, add another entry that has these values:
+
+ * `"messageId": "837_P"`
+ * `"schemaVersion": "00501"`
+ * `"schemaName": "X12_00501_837_P"`
+
+ When you're done, your `schemaReferences` section looks like this:
+
+ ```json
+ "schemaReferences": [
+ {
+ "messageId": "837",
+ "schemaVersion": "00501",
+ "schemaName": "X12_00501_837"
+ },
+ {
+ "messageId": "837_P",
+ "schemaVersion": "00501",
+ "schemaName": "X12_00501_837_P"
+ }
+ ]
+ ```
+
+1. In your agreement's message settings, disable EDI validation by clearing the **EDI Validation** checkbox either for each message type or for all message types if you're using the **Default** values.
+
+ ![Screenshot shows X12 agreement settings to disable validation for all message types or each message type.](./media/logic-apps-enterprise-integration-x12-message-settings/x12-disable-validation.png)
+
+## Next steps
+
+[Exchange X12 messages](logic-apps-enterprise-integration-x12.md)
logic-apps Logic Apps Enterprise Integration X12 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-enterprise-integration-x12.md
Title: Exchange X12 messages for B2B integration
-description: Send, receive, and process X12 messages when building B2B enterprise integration solutions with Azure Logic Apps and the Enterprise Integration Pack.
+ Title: Exchange X12 messages in B2B workflows
+description: Exchange X12 messages between partners by creating workflows with Azure Logic Apps and Enterprise Integration Pack.
ms.suite: integration -+ Previously updated : 08/20/2022 Last updated : 08/15/2023
-# Exchange X12 messages for B2B enterprise integration using Azure Logic Apps and Enterprise Integration Pack
+# Exchange X12 messages using workflows in Azure Logic Apps
[!INCLUDE [logic-apps-sku-consumption-standard](../../includes/logic-apps-sku-consumption-standard.md)]
-In Azure Logic Apps, you can create workflows that work with X12 messages by using **X12** operations. These operations include triggers and actions that you can use in your workflow to handle X12 communication. You can add X12 triggers and actions in the same way as any other trigger and action in a workflow, but you need to meet extra prerequisites before you can use X12 operations.
+To send and receive X12 messages in workflows that you create using Azure Logic Apps, use the **X12** connector, which provides operations that support and manage X12 communication.
-This article describes the requirements and settings for using X12 triggers and actions in your workflow. If you're looking for EDIFACT messages instead, review [Exchange EDIFACT messages](logic-apps-enterprise-integration-edifact.md). If you're new to logic apps, see [What is Azure Logic Apps](logic-apps-overview.md) and the following documentation:
+This how-to guide shows how to add the X12 encoding and decoding actions to an existing logic app workflow. The **X12** connector doesn't include any triggers, so you can use any trigger to start your workflow. The examples in this guide use the [Request trigger](../connectors/connectors-native-reqres.md).
-* [Create an example Consumption logic app workflow in multi-tenant Azure Logic Apps](quickstart-create-example-consumption-workflow.md)
+## Connector technical reference
-* [Create an example Standard logic app workflow in single-tenant Azure Logic Apps](create-single-tenant-workflows-azure-portal.md)
+The **X12** connector has one version across workflows in [multi-tenant Azure Logic Apps, single-tenant Azure Logic Apps, and the integration service environment (ISE)](logic-apps-overview.md#resource-environment-differences). For technical information about the **X12** connector, see the following documentation:
+
+* [Connector reference page](/connectors/x12/), which describes the triggers, actions, and limits as documented by the connector's Swagger file
+
+* [B2B protocol limits for message sizes](logic-apps-limits-and-config.md#b2b-protocol-limits)
+
+ For example, in an [integration service environment (ISE)](connect-virtual-network-vnet-isolated-environment-overview.md), this connector's ISE version uses the [B2B message limits for ISE](logic-apps-limits-and-config.md#b2b-protocol-limits).
## Prerequisites

* An Azure account and subscription. If you don't have an Azure subscription yet, [sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-* A logic app resource and workflow where you want to use an X12 trigger or action. To use an X12 trigger, you need a blank workflow. To use an X12 action, you need a workflow that has an existing trigger.
+* An [integration account resource](logic-apps-enterprise-integration-create-integration-account.md) where you define and store artifacts, such as trading partners, agreements, certificates, and so on, for use in your enterprise integration and B2B workflows. This resource has to meet the following requirements:
-* An [integration account](logic-apps-enterprise-integration-create-integration-account.md) that's linked to your logic app resource. Both your logic app and integration account have to use the same Azure subscription and exist in the same Azure region or location.
+ * Both your integration account and logic app resource must exist in the same Azure subscription and Azure region.
- Your integration account also need to include the following B2B artifacts:
+ * Defines at least two [trading partners](logic-apps-enterprise-integration-partners.md) that participate in the **X12** operation used in your workflow. The definitions for both partners must use the same X12 business identity qualifier.
- * At least two [trading partners](logic-apps-enterprise-integration-partners.md) that use the X12 identity qualifier.
-
- * An X12 [agreement](logic-apps-enterprise-integration-agreements.md) defined between your trading partners. For information about settings to use when receiving and sending messages, review [Receive Settings](#receive-settings) and [Send Settings](#send-settings).
+ * Defines an [X12 agreement](logic-apps-enterprise-integration-agreements.md) between the trading partners that participate in your workflow. Each agreement requires a host partner and a guest partner. The content in the messages between you and the other partner must match the agreement type. For information about agreement settings to use when receiving and sending messages, see [X12 message settings](logic-apps-enterprise-integration-x12-message-settings.md).
> [!IMPORTANT]
+ >
> If you're working with Health Insurance Portability and Accountability Act (HIPAA) schemas, you have to add a
- > `schemaReferences` section to your agreement. For more information, review [HIPAA schemas and message types](#hipaa-schemas).
+ > `schemaReferences` section to your agreement. For more information, see [HIPAA schemas and message types](logic-apps-enterprise-integration-x12-message-settings.md#hipaa-schemas).
- * The [schemas](logic-apps-enterprise-integration-schemas.md) to use for XML validation.
+ * Defines the [schemas](logic-apps-enterprise-integration-schemas.md) to use for XML validation.
> [!IMPORTANT]
- > If you're working with Health Insurance Portability and Accountability Act (HIPAA) schemas, make sure to review [HIPAA schemas and message types](#hipaa-schemas).
-
-## Connector reference
-
-For more technical information about this connector, such as triggers, actions, and limits as described by the connector's Swagger file, see the [connector's reference page](/connectors/x12/).
-
-> [!NOTE]
-> For logic apps in an [integration service environment (ISE)](../logic-apps/connect-virtual-network-vnet-isolated-environment-overview.md),
-> this connector's ISE-labeled version uses the [B2B message limits for ISE](../logic-apps/logic-apps-limits-and-config.md#b2b-protocol-limits).
-
-<a name="receive-settings"></a>
-
-## Receive Settings
-
-After you set the properties in your trading partner agreement, you can configure how this agreement identifies and handles inbound messages that you receive from your partner through this agreement.
-
-1. Under **Add**, select **Receive Settings**.
-
-1. Based on the agreement with the partner that exchanges messages with you, set the properties in the **Receive Settings** pane, which is organized into the following sections:
-
- * [Identifiers](#inbound-identifiers)
- * [Acknowledgement](#inbound-acknowledgement)
- * [Schemas](#inbound-schemas)
- * [Envelopes](#inbound-envelopes)
- * [Control Numbers](#inbound-control-numbers)
- * [Validations](#inbound-validations)
- * [Internal Settings](#inbound-internal-settings)
-
-1. When you're done, make sure to save your settings by selecting **OK**.
-
-<a name="inbound-identifiers"></a>
-
-### Receive Settings - Identifiers
-
-![Identifier properties for inbound messages](./media/logic-apps-enterprise-integration-x12/x12-receive-settings-identifiers.png)
-
-| Property | Description |
-|-|-|
-| **ISA1 (Authorization Qualifier)** | The Authorization Qualifier value that you want to use. The default value is **00 - No Authorization Information Present**. <p>**Note**: If you select other values, specify a value for the **ISA2** property. |
-| **ISA2** | The Authorization Information value to use when the **ISA1** property is not **00 - No Authorization Information Present**. This property value must have a minimum of one alphanumeric character and a maximum of 10. |
-| **ISA3 (Security Qualifier)** | The Security Qualifier value that you want to use. The default value is **00 - No Security Information Present**. <p>**Note**: If you select other values, specify a value for the **ISA4** property. |
-| **ISA4** | The Security Information value to use when the **ISA3** property is not **00 - No Security Information Present**. This property value must have a minimum of one alphanumeric character and a maximum of 10. |
-|||
+ >
+ > If you're working with Health Insurance Portability and Accountability Act (HIPAA) schemas, make sure to review [HIPAA schemas and message types](logic-apps-enterprise-integration-x12-message-settings.md#hipaa-schemas).
-<a name="inbound-acknowledgement"></a>
+* Based on whether you're working with a Consumption or Standard logic app workflow, your logic app resource might require a link to your integration account:
-### Receive Settings - Acknowledgement
+ | Logic app workflow | Link required? |
+ |--|-|
+ | Consumption | Connection to integration account required, but no link required. You can create the connection when you add the **X12** operation to your workflow. |
+ | Standard | Connection to integration account required, but no link required. You can create the connection when you add the **X12** operation to your workflow. |
-![Acknowledgement for inbound messages](./media/logic-apps-enterprise-integration-x12/x12-receive-settings-acknowledgement.png)
+* The logic app resource and workflow where you want to use the X12 operations.
-| Property | Description |
-|-|-|
-| **TA1 Expected** | Return a technical acknowledgment (TA1) to the interchange sender. |
-| **FA Expected** | Return a functional acknowledgment (FA) to the interchange sender. <p>For the **FA Version** property, based on the schema version, select the 997 or 999 acknowledgments. <p>To enable generation of AK2 loops in functional acknowledgments for accepted transaction sets, select **Include AK2 / IK2 Loop**. |
+ For more information, see the following documentation:
-<a name="inbound-schemas"></a>
+ * [Create an example Consumption logic app workflow in multi-tenant Azure Logic Apps](quickstart-create-example-consumption-workflow.md)
-### Receive Settings - Schemas
+ * [Create an example Standard logic app workflow in single-tenant Azure Logic Apps](create-single-tenant-workflows-azure-portal.md)
-![Schemas for inbound messages](./media/logic-apps-enterprise-integration-x12/x12-receive-settings-schemas.png)
+<a name="encode"></a>
-For this section, select a [schema](../logic-apps/logic-apps-enterprise-integration-schemas.md) from your [integration account](./logic-apps-enterprise-integration-create-integration-account.md) for each transaction type (ST01) and Sender Application (GS02). The EDI Receive Pipeline disassembles the incoming message by matching the values and schema that you set in this section with the values for ST01 and GS02 in the incoming message and with the schema of the incoming message. After you complete each row, a new empty row automatically appears.
+## Encode X12 messages
-| Property | Description |
-|-|-|
-| **Version** | The X12 version for the schema |
-| **Transaction Type (ST01)** | The transaction type |
-| **Sender Application (GS02)** | The sender application |
-| **Schema** | The schema file that you want to use |
-|||
+The **Encode to X12 message** operation performs the following tasks:
-<a name="inbound-envelopes"></a>
+* Resolves the agreement by matching sender and receiver context properties.
+* Serializes the EDI interchange and converts XML-encoded messages into EDI transaction sets in the interchange.
+* Applies transaction set header and trailer segments.
+* Generates an interchange control number, a group control number, and a transaction set control number for each outgoing interchange.
+* Replaces separators in the payload data.
+* Validates EDI and partner-specific properties.
+ * Schema validation of transaction-set data elements against the message schema.
+ * EDI validation on transaction-set data elements.
+ * Extended validation on transaction-set data elements.
+* Requests a Technical and Functional Acknowledgment, if configured.
+ * Generates a Technical Acknowledgment as a result of header validation. The technical acknowledgment reports the status of the interchange header and trailer processing by the addressed receiver.
+ * Generates a Functional Acknowledgment as a result of body validation. The functional acknowledgment reports each error encountered while processing the received document.
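+
+For illustration only, the following sketch shows roughly how an encode action can appear in a Consumption workflow's code view. The designer generates the actual definition when you add the operation, so treat the `path`, `queries`, and parameter names here as assumptions rather than the connector's documented contract. The `@parameters('$connections')` connection reference and the `@triggerBody()` expression follow standard workflow definition syntax:
+
+```json
+"Encode_to_X12_message_by_agreement_name": {
+    "type": "ApiConnection",
+    "inputs": {
+        "host": {
+            "connection": {
+                "name": "@parameters('$connections')['x12']['connectionId']"
+            }
+        },
+        "method": "post",
+        "path": "/encode",
+        "queries": {
+            "agreementName": "Contoso-X12-Agreement"
+        },
+        "body": "@triggerBody()"
+    },
+    "runAfter": {}
+}
+```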
-### Receive Settings - Envelopes
+### [Consumption](#tab/consumption)
-![Separators to use in transaction sets for inbound messages](./media/logic-apps-enterprise-integration-x12/x12-receive-settings-envelopes.png)
+1. In the [Azure portal](https://portal.azure.com), open your logic app resource and workflow in the designer.
-| Property | Description |
-|-|-|
-| **ISA11 Usage** | The separator to use in a transaction set: <p>- **Standard Identifier**: Use a period (.) for decimal notation, rather than the decimal notation of the incoming document in the EDI Receive Pipeline. <p>- **Repetition Separator**: Specify the separator for repeated occurrences of a simple data element or a repeated data structure. For example, usually the carat (^) is used as the repetition separator. For HIPAA schemas, you can only use the carat. |
-|||
+1. In the designer, [follow these general steps to add the **X12** action named **Encode to X12 message by agreement name** to your workflow](create-workflow-with-trigger-or-action.md?tabs=consumption#add-action).
-<a name="inbound-control-numbers"></a>
+ > [!NOTE]
+ >
+ > If you want to use the **Encode to X12 message by identities** action instead,
+ > you later have to provide different values, such as the **Sender identifier**
+ > and **Receiver identifier** that are specified by your X12 agreement.
+ > You also have to specify the **XML message to encode**, which can be the output
+ > from the trigger or a preceding action.
-### Receive Settings - Control Numbers
+1. When prompted, provide the following connection information for your integration account:
-![Handling control number duplicates for inbound messages](./media/logic-apps-enterprise-integration-x12/x12-receive-settings-control-numbers.png)
+ | Property | Required | Description |
+ |-|-|-|
+ | **Connection name** | Yes | A name for the connection |
+ | **Integration Account** | Yes | From the list of available integration accounts, select the account to use. |
-| Property | Description |
-|-|-|
-| **Disallow Interchange control number duplicates** | Block duplicate interchanges. Check the interchange control number (ISA13) for the received interchange control number. If a match is detected, the EDI Receive Pipeline doesn't process the interchange. <p><p>To specify the number of days to perform the check, enter a value for the **Check for duplicate ISA13 every (days)** property. |
-| **Disallow Group control number duplicates** | Block interchanges that have duplicate group control numbers. |
-| **Disallow Transaction set control number duplicates** | Block interchanges that have duplicate transaction set control numbers. |
-|||
+ For example:
-<a name="inbound-validations"></a>
+ ![Screenshot showing Consumption workflow and connection information for action named Encode to X12 message by agreement name.](./media/logic-apps-enterprise-integration-x12/create-x12-encode-connection-consumption.png)
-### Receive Settings - Validations
+1. When you're done, select **Create**.
-![Validations for inbound messages](./media/logic-apps-enterprise-integration-x12/x12-receive-settings-validations.png)
+1. In the X12 action information box, provide the following property values:
-The **Default** row shows the validation rules that are used for an EDI message type. If you want to define different rules, select each box where you want the rule set to **true**. After you complete each row, a new empty row automatically appears.
+ | Property | Required | Description |
+ |-|-|-|
+ | **Name of X12 agreement** | Yes | The X12 agreement to use. |
+ | **XML message to encode** | Yes | The XML message to encode. |
+ | Other parameters | No | This operation includes the following other parameters: <br><br>- **Data element separator** <br>- **Component separator** <br>- **Replacement character** <br>- **Segment terminator** <br>- **Segment terminator suffix** <br>- **Control Version Number** <br>- **Application Sender Identifier/Code GS02** <br>- **Application Receiver Identifier/Code GS03** <br><br>For more information, review [X12 message settings](logic-apps-enterprise-integration-x12-message-settings.md). |
-| Property | Description |
-|-|-|
-| **Message Type** | The EDI message type |
-| **EDI Validation** | Perform EDI validation on data types as defined by the schema's EDI properties, length restrictions, empty data elements, and trailing separators. |
-| **Extended Validation** | If the data type isn't EDI, validation is on the data element requirement and allowed repetition, enumerations, and data element length validation (min or max). |
-| **Allow Leading/Trailing Zeroes** | Keep any additional leading or trailing zero and space characters. Don't remove these characters. |
-| **Trim Leading/Trailing Zeroes** | Remove any leading or trailing zero and space characters. |
-| **Trailing Separator Policy** | Generate trailing separators. <p>- **Not Allowed**: Prohibit trailing delimiters and separators in the inbound interchange. If the interchange has trailing delimiters and separators, the interchange is declared not valid. <p>- **Optional**: Accept interchanges with or without trailing delimiters and separators. <p>- **Mandatory**: The inbound interchange must have trailing delimiters and separators. |
-|||
+ For example, you can use the **Body** content output from the Request trigger as the XML message payload:
-<a name="inbound-internal-settings"></a>
+ ![Screenshot showing Consumption workflow, action named Encode to X12 message by agreement name, and action properties.](./media/logic-apps-enterprise-integration-x12/encode-x12-message-agreement-consumption.png)
-### Receive Settings - Internal Settings
+### [Standard](#tab/standard)
-![Internal settings for inbound messages](./media/logic-apps-enterprise-integration-x12/x12-receive-settings-internal-settings.png)
+1. In the [Azure portal](https://portal.azure.com), open your logic app resource and workflow in the designer.
-| Property | Description |
-|-|-|
-| **Convert implied decimal format Nn to a base 10 numeric value** | Convert an EDI number that is specified with the format "Nn" into a base-10 numeric value. |
-| **Create empty XML tags if trailing separators are allowed** | Have the interchange sender include empty XML tags for trailing separators. |
-| **Split Interchange as transaction sets - suspend transaction sets on error** | Parse each transaction set that's in an interchange into a separate XML document by applying the appropriate envelope to the transaction set. Suspend only the transactions where the validation fails. |
-| **Split Interchange as transaction sets - suspend interchange on error** | Parse each transaction set that's in an interchange into a separate XML document by applying the appropriate envelope. Suspend the entire interchange when one or more transaction sets in the interchange fail validation. |
-| **Preserve Interchange - suspend transaction sets on error** | Leave the interchange intact and create an XML document for the entire batched interchange. Suspend only the transaction sets that fail validation, but continue to process all other transaction sets. |
-| **Preserve Interchange - suspend interchange on error** |Leaves the interchange intact, creates an XML document for the entire batched interchange. Suspends the entire interchange when one or more transaction sets in the interchange fail validation. |
-|||
+1. In the designer, [follow these general steps to add the **X12** action named **Encode to X12 message by agreement name** to your workflow](create-workflow-with-trigger-or-action.md?tabs=standard#add-action).
-<a name="send-settings"></a>
+ > [!NOTE]
+ >
+ > If you want to use the **Encode to X12 message by identities** action instead,
+ > you later have to provide different values, such as the **Sender identifier**
+ > and **Receiver identifier** that are specified by your X12 agreement.
+ > You also have to specify the **XML message to encode**, which can be the output
+ > from the trigger or a preceding action.
-## Send Settings
+1. When prompted, provide the following connection information for your integration account:
-After you set the agreement properties, you can configure how this agreement identifies and handles outbound messages that you send to your partner through this agreement.
+ | Property | Required | Description |
+ |-|-|-|
+ | **Connection Name** | Yes | A name for the connection |
+ | **Integration Account ID** | Yes | The resource ID for your integration account, which has the following format: <br><br>**`/subscriptions/<Azure-subscription-ID>/resourceGroups/<resource-group-name>/providers/Microsoft.Logic/integrationAccounts/<integration-account-name>`** <br><br>For example: <br>`/subscriptions/XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX/resourceGroups/integrationAccount-RG/providers/Microsoft.Logic/integrationAccounts/myIntegrationAccount` <br><br>To find this resource ID, follow these steps: <br><br>1. In the Azure portal, open your integration account. <br>2. On the integration account menu, select **Overview**. <br>3. On the **Overview** page, select **JSON View**. <br>4. From the **Resource ID** property, copy the value. |
+ | **Integration Account SAS URL** | Yes | The request endpoint URL that uses shared access signature (SAS) authentication to provide access to your integration account. This callback URL has the following format: <br><br>**`https://<request-endpoint-URI>?sp=<permissions>&sv=<SAS-version>&sig=<signature>`** <br><br>For example: <br>`https://prod-04.west-us.logic-azure.com:443/integrationAccounts/XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX?api-version=2015-08-1-preview&sp=XXXXXXXXX&sv=1.0&sig=ZZZZZZZZZZZZZZZZZZZZZZZZZZZ` <br><br>To find this URL, follow these steps: <br><br>1. In the Azure portal, open your integration account. <br>2. On the integration account menu, under **Settings**, select **Callback URL**. <br>3. From the **Generated Callback URL** property, copy the value. |
+ | **Size of Control Number Block** | No | The block size of control numbers to reserve from an agreement for high throughput scenarios |
-1. Under **Add**, select **Send Settings**.
+ For example:
-1. Configure these properties based on your agreement with the partner that exchanges messages with you. For property descriptions, see the tables in this section.
+ ![Screenshot showing Standard workflow and connection information for action named Encode to X12 message by agreement name.](./media/logic-apps-enterprise-integration-x12/create-x12-encode-connection-standard.png)
- The **Send Settings** are organized into these sections:
+1. When you're done, select **Create**.
- * [Identifiers](#outbound-identifiers)
- * [Acknowledgement](#outbound-acknowledgement)
- * [Schemas](#outbound-schemas)
- * [Envelopes](#outbound-envelopes)
- * [Control Version Number](#outbound-control-version-number)
- * [Control Numbers](#outbound-control-numbers)
- * [Character Sets and Separators](#outbound-character-sets-separators)
- * [Validation](#outbound-validation)
+1. In the X12 action information box, provide the following property values:
-1. When you're done, make sure to save your settings by selecting **OK**.
+ | Property | Required | Description |
+ |-|-|-|
+ | **Name Of X12 Agreement** | Yes | The X12 agreement to use. |
+ | **XML Message To Encode** | Yes | The XML message to encode. |
+ | **Advanced parameters** | No | This operation includes the following other parameters: <br><br>- **Data element separator** <br>- **Component separator** <br>- **Replacement character** <br>- **Segment terminator** <br>- **Segment terminator suffix** <br>- **Control Version Number** <br>- **Application Sender Identifier/Code GS02** <br>- **Application Receiver Identifier/Code GS03** <br><br>For more information, review [X12 message settings](logic-apps-enterprise-integration-x12-message-settings.md). |
-<a name="outbound-identifiers"></a>
+ For example, you can use the **Body** content output from the Request trigger as the XML message payload:
-### Send Settings - Identifiers
+ ![Screenshot showing Standard workflow, action named Encode to X12 message by agreement name, and action properties.](./media/logic-apps-enterprise-integration-x12/encode-x12-message-agreement-standard.png)
-![Identifier properties for outbound messages](./media/logic-apps-enterprise-integration-x12/x12-send-settings-identifiers.png)
-
-| Property | Description |
-|-|-|
-| **ISA1 (Authorization Qualifier)** | The Authorization Qualifier value that you want to use. The default value is **00 - No Authorization Information Present**. <p>**Note**: If you select other values, specify a value for the **ISA2** property. |
-| **ISA2** | The Authorization Information value to use when the **ISA1** property is not **00 - No Authorization Information Present**. This property value must have a minimum of one alphanumeric character and a maximum of 10. |
-| **ISA3 (Security Qualifier)** | The Security Qualifier value that you want to use. The default value is **00 - No Security Information Present**. <p>**Note**: If you select other values, specify a value for the **ISA4** property. |
-| **ISA4** | The Security Information value to use when the **ISA3** property is not **00 - No Security Information Present**. This property value must have a minimum of one alphanumeric character and a maximum of 10. |
-|||
-
-<a name="outbound-acknowledgement"></a>
-
-### Send Settings - Acknowledgement
-
-![Acknowledgement properties for outbound messages](./media/logic-apps-enterprise-integration-x12/x12-send-settings-acknowledgement.png)
-
-| Property | Description |
-|-|-|
-| **TA1 Expected** | Return a technical acknowledgment (TA1) to the interchange sender. <p>This setting specifies that the host partner, who is sending the message, requests an acknowledgment from the guest partner in the agreement. These acknowledgments are expected by the host partner based on the agreement's Receive Settings. |
-| **FA Expected** | Return a functional acknowledgment (FA) to the interchange sender. For the **FA Version** property, based on the schema version, select the 997 or 999 acknowledgements. <p>This setting specifies that the host partner, who is sending the message, requests an acknowledgement from the guest partner in the agreement. These acknowledgments are expected by the host partner based on the agreement's Receive Settings. |
-|||
-
-<a name="outbound-schemas"></a>
-
-### Send Settings - Schemas
-
-![Schemas for outbound messages](./media/logic-apps-enterprise-integration-x12/x12-send-settings-schemas.png)
-
-For this section, select a [schema](../logic-apps/logic-apps-enterprise-integration-schemas.md) from your [integration account](./logic-apps-enterprise-integration-create-integration-account.md) for each transaction type (ST01). After you complete each row, a new empty row automatically appears.
-
-| Property | Description |
-|-|-|
-| **Version** | The X12 version for the schema |
-| **Transaction Type (ST01)** | The transaction type for the schema |
-| **Schema** | The schema file that you want to use. If you select the schema first, the version and transaction type are automatically set. |
-|||
-
-<a name="outbound-envelopes"></a>
-
-### Send Settings - Envelopes
-
-![Separators in a transaction set to use for outbound messages](./media/logic-apps-enterprise-integration-x12/x12-send-settings-envelopes.png)
-
-| Property | Description |
-|-|-|
-| **ISA11 Usage** | The separator to use in a transaction set: <p>- **Standard Identifier**: Use a period (.) for decimal notation, rather than the decimal notation of the outbound document in the EDI Send Pipeline. <p>- **Repetition Separator**: Specify the separator for repeated occurrences of a simple data element or a repeated data structure. For example, usually the carat (^) is used as the repetition separator. For HIPAA schemas, you can only use the carat. |
-|||
-
-<a name="outbound-control-version-number"></a>
-
-### Send Settings - Control Version Number
+
-![Control version number for outbound messages](./media/logic-apps-enterprise-integration-x12/x12-send-settings-control-version-number.png)
+<a name="decode"></a>
-For this section, select a [schema](../logic-apps/logic-apps-enterprise-integration-schemas.md) from your [integration account](./logic-apps-enterprise-integration-create-integration-account.md) for each interchange. After you complete each row, a new empty row automatically appears.
+## Decode X12 messages
-| Property | Description |
-|-|-|
-| **Control Version Number (ISA12)** | The version of the X12 standard |
-| **Usage Indicator (ISA15)** | The context of an interchange, which is either **Test** data, **Information** data, or **Production** data |
-| **Schema** | The schema to use for generating the GS and ST segments for an X12-encoded interchange that's sent to the EDI Send Pipeline. |
-| **GS1** | Optional, select the functional code. |
-| **GS2** | Optional, specify the application sender. |
-| **GS3** | Optional, specify the application receiver. |
-| **GS4** | Optional, select **CCYYMMDD** or **YYMMDD**. |
-| **GS5** | Optional, select **HHMM**, **HHMMSS**, or **HHMMSSdd**. |
-| **GS7** | Optional, select a value for the responsible agency. |
-| **GS8** | Optional, specify the schema document version. |
-|||
+The **Decode X12 message** operation performs the following tasks:
-<a name="outbound-control-numbers"></a>
+* Validates the envelope against the trading partner agreement.
-### Send Settings - Control Numbers
+* Validates EDI and partner-specific properties.
-![Control numbers for outbound messages](./media/logic-apps-enterprise-integration-x12/x12-send-settings-control-numbers.png)
+ * EDI structural validation and extended schema validation
+ * Interchange envelope structural validation
+ * Schema validation of the envelope against the control schema
+ * Schema validation of the transaction set data elements against the message schema
+ * EDI validation on transaction-set data elements
-| Property | Description |
-|-|-|
-| **Interchange Control Number (ISA13)** | The range of values for the interchange control number, which can have a minimum of value 1 and a maximum value of 999999999 |
-| **Group Control Number (GS06)** | The range of values for the group control number, which can have a minimum value of 1 and a maximum value of 999999999 |
-| **Transaction Set Control Number (ST02)** | The range of values for the transaction set control number, which can have a minimum value of 1 and a maximum value of 999999999 <p>- **Prefix**: Optional, an alphanumeric value <br>- **Suffix**: Optional, an alphanumeric value |
-|||
+* Verifies that the interchange, group, and transaction set control numbers aren't duplicates.
-<a name="outbound-character-sets-separators"></a>
+ * Checks the interchange control number against previously received interchanges.
+ * Checks the group control number against other group control numbers in the interchange.
+ * Checks the transaction set control number against other transaction set control numbers in that group.
-### Send Settings - Character Sets and Separators
+* Splits an interchange into transaction sets, or preserves the entire interchange:
-![Delimiters for message types in outbound messages](./media/logic-apps-enterprise-integration-x12/x12-send-settings-character-sets-separators.png)
+ * Split the interchange into transaction sets and suspend transaction sets on error: Parse each transaction set. The X12 decode action outputs only the transaction sets that fail validation to `badMessages`, and outputs the remaining transaction sets to `goodMessages`.
-The **Default** row shows the character set that's used as delimiters for a message schema. If you don't want to use the **Default** character set, you can enter a different set of delimiters for each message type. After you complete each row, a new empty row automatically appears.
+ * Split the interchange into transaction sets and suspend the interchange on error: Parse each transaction set. If one or more transaction sets in the interchange fail validation, the X12 decode action outputs all the transaction sets in that interchange to `badMessages`.
-> [!TIP]
-> To provide special character values, edit the agreement as JSON and provide the ASCII value for the special character.
+ * Preserve the interchange and suspend transaction sets on error: Preserve the interchange and process the entire batched interchange. The X12 decode action outputs only the transaction sets that fail validation to `badMessages`, and outputs the remaining transaction sets to `goodMessages`.
-| Property | Description |
-|-|-|
-| **Character Set to be used** | The X12 character set, which is either **Basic**, **Extended**, or **UTF8**. |
-| **Schema** | The schema that you want to use. After you select the schema, select the character set that you want to use, based on the separator descriptions below. |
-| **Input Type** | The input type for the character set |
-| **Component Separator** | A single character that separates composite data elements |
-| **Data Element Separator** | A single character that separates simple data elements within composite data |
-| **replacement Character Separator** | A replacement character that replaces all separator characters in the payload data when generating the outbound X12 message |
-| **Segment Terminator** | A single character that indicates the end of an EDI segment |
-| **Suffix** | The character to use with the segment identifier. If you specify a suffix, the segment terminator data element can be empty. If the segment terminator is left empty, you must designate a suffix. |
-|||
+ * Preserve the interchange and suspend the interchange on error: Preserve the interchange and process the entire batched interchange. If one or more transaction sets in the interchange fail validation, the X12 decode action outputs all the transaction sets in that interchange to `badMessages`.
-<a name="outbound-validation"></a>
+* Generates a Technical and Functional Acknowledgment, if configured.
-### Send Settings - Validation
+ * Generates a Technical Acknowledgment as a result of header validation. The technical acknowledgment reports the status of the interchange header and trailer processing by the addressed receiver.
+ * Generates a Functional Acknowledgment as a result of body validation. The functional acknowledgment reports each error encountered while processing the received document.
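+
+Because the transaction set content in the decoded output arrays is base64 encoded, a common follow-up pattern is to iterate the results and convert each item back to XML with the `xml()` and `base64ToBinary()` functions. The following sketch assumes that the decode action is named `Decode_X12_message` and that its output exposes the `goodMessages` array described earlier; the exact output shape is an assumption for illustration:
+
+```json
+"For_each_good_message": {
+    "type": "Foreach",
+    "foreach": "@body('Decode_X12_message')?['goodMessages']",
+    "actions": {
+        "Get_transaction_set_XML": {
+            "type": "Compose",
+            "inputs": "@xml(base64ToBinary(item()?['Body']))",
+            "runAfter": {}
+        }
+    },
+    "runAfter": {}
+}
+```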
-![Validation properties for outbound messages](./media/logic-apps-enterprise-integration-x12/x12-send-settings-validation.png)
+### [Consumption](#tab/consumption)
-The **Default** row shows the validation rules that are used for an EDI message type. If you want to define different rules, select each box where you want the rule set to **true**. After you complete each row, a new empty row automatically appears.
+1. In the [Azure portal](https://portal.azure.com), open your logic app resource and workflow in the designer.
-| Property | Description |
-|-|-|
-| **Message Type** | The EDI message type |
-| **EDI Validation** | Perform EDI validation on data types as defined by the schema's EDI properties, length restrictions, empty data elements, and trailing separators. |
-| **Extended Validation** | If the data type isn't EDI, validation is on the data element requirement and allowed repetition, enumerations, and data element length validation (min or max). |
-| **Allow Leading/Trailing Zeroes** | Keep any additional leading or trailing zero and space characters. Don't remove these characters. |
-| **Trim Leading/Trailing Zeroes** | Remove any leading or trailing zero and space characters. |
-| **Trailing Separator Policy** | Generate trailing separators. <p>- **Not Allowed**: Prohibit trailing delimiters and separators in the outbound interchange. If the interchange has trailing delimiters and separators, the interchange is declared not valid. <p>- **Optional**: Send interchanges with or without trailing delimiters and separators. <p>- **Mandatory**: The outbound interchange must have trailing delimiters and separators. |
-|||
+1. In the designer, [follow these general steps to add the **X12** action named **Decode X12 message** to your workflow](create-workflow-with-trigger-or-action.md?tabs=consumption#add-action).
-<a name="hipaa-schemas"></a>
+1. When prompted, provide the following connection information for your integration account:
-## HIPAA schemas and message types
+ | Property | Required | Description |
+ |-|-|-|
+ | **Connection name** | Yes | A name for the connection |
+ | **Integration Account** | Yes | From the list of available integration accounts, select the account to use. |
-When you work with HIPAA schemas and the 277 or 837 message types, you need to perform a few extra steps. The [document version numbers (GS8)](#outbound-control-version-number) for these message types have more than nine characters, for example, "005010X222A1". Also, some document version numbers map to variant message types. If you don't reference the correct message type in your schema and in your agreement, you get this error message:
+ For example:
-`"The message has an unknown document type and did not resolve to any of the existing schemas configured in the agreement."`
+ ![Screenshot showing Consumption workflow and connection information for action named Decode X12 message.](./media/logic-apps-enterprise-integration-x12/create-x12-decode-connection-consumption.png)
-This table lists the affected message types, any variants, and the document version numbers that map to those message types:
+1. When you're done, select **Create**.
-| Message type or variant | Description | Document version number (GS8) |
-|-|--|-|
-| 277 | Health Care Information Status Notification | 005010X212 |
-| 837_I | Health Care Claim Institutional | 004010X096A1 <br>005010X223A1 <br>005010X223A2 |
-| 837_D | Health Care Claim Dental | 004010X097A1 <br>005010X224A1 <br>005010X224A2 |
-| 837_P | Health Care Claim Professional | 004010X098A1 <br>005010X222 <br>005010X222A1 |
-|||
+1. In the X12 action information box, provide the following property values:
-You also need to disable EDI validation when you use these document version numbers because they result in an error that the character length is invalid.
+ | Property | Required | Description |
+ |-|-|-|
+ | **X12 flat file message to decode** | Yes | The X12 message in flat file format to decode <br><br>**Note**: The XML message payload or content for the message array, good or bad, is base64 encoded. So, you must use an expression that processes this content. For example, the following expression processes the message content as XML: <br><br>**`xml(base64ToBinary(item()?['Body']))`** |
+ | Other parameters | No | This operation includes the following other parameters: <br><br>- **Preserve Interchange** <br>- **Suspend Interchange on Error** <br><br>For more information, review [X12 message settings](logic-apps-enterprise-integration-x12-message-settings.md). |
-To specify these document version numbers and message types, follow these steps:
+ For example, you can use the **Body** content output from the Request trigger as the XML message payload, but you must first preprocess this content using an expression:
-1. In your HIPAA schema, replace the current message type with the variant message type for the document version number that you want to use.
+ ![Screenshot showing Consumption workflow, action named Decode X12 message, and action properties.](./media/logic-apps-enterprise-integration-x12/decode-x12-message-consumption.png)
- For example, suppose you want to use document version number `005010X222A1` with the `837` message type. In your schema, replace each `"X12_00501_837"` value with the `"X12_00501_837_P"` value instead.
+### [Standard](#tab/standard)
- To update your schema, follow these steps:
+1. In the [Azure portal](https://portal.azure.com), open your logic app resource and workflow in the designer.
- 1. In the Azure portal, go to your integration account. Find and download your schema. Replace the message type and rename the schema file, and upload your revised schema to your integration account. For more information, see [Edit a schema](logic-apps-enterprise-integration-schemas.md#edit-schema).
+1. In the designer, [follow these general steps to add the **X12** action named **Decode X12 message** to your workflow](create-workflow-with-trigger-or-action.md?tabs=standard#add-action).
- 1. In your agreement's message settings, select the revised schema.
+1. When prompted, provide the following connection information for your integration account:
-1. In your agreement's `schemaReferences` object, add another entry that specifies the variant message type that matches your document version number.
+ | Property | Required | Description |
+ |-|-|-|
+ | **Connection Name** | Yes | A name for the connection |
+ | **Integration Account ID** | Yes | The resource ID for your integration account, which has the following format: <br><br>**`/subscriptions/<Azure-subscription-ID>/resourceGroups/<resource-group-name>/providers/Microsoft.Logic/integrationAccounts/<integration-account-name>`** <br><br>For example: <br>`/subscriptions/XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX/resourceGroups/integrationAccount-RG/providers/Microsoft.Logic/integrationAccounts/myIntegrationAccount` <br><br>To find this resource ID, follow these steps: <br><br>1. In the Azure portal, open your integration account. <br>2. On the integration account menu, select **Overview**. <br>3. On the **Overview** page, select **JSON View**. <br>4. From the **Resource ID** property, copy the value. |
+ | **Integration Account SAS URL** | Yes | The request endpoint URL that uses shared access signature (SAS) authentication to provide access to your integration account. This callback URL has the following format: <br><br>**`https://<request-endpoint-URI>?sp=<permissions>&sv=<SAS-version>&sig=<signature>`** <br><br>For example: <br>`https://prod-04.west-us.logic-azure.com:443/integrationAccounts/XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX?api-version=2015-08-1-preview&sp=XXXXXXXXX&sv=1.0&sig=ZZZZZZZZZZZZZZZZZZZZZZZZZZZ` <br><br>To find this URL, follow these steps: <br><br>1. In the Azure portal, open your integration account. <br>2. On the integration account menu, under **Settings**, select **Callback URL**. <br>3. From the **Generated Callback URL** property, copy the value. |
+ | **Size of Control Number Block** | No | The block size of control numbers to reserve from an agreement for high throughput scenarios |
- For example, suppose you want to use document version number `005010X222A1` for the `837` message type. Your agreement has a `schemaReferences` section with these properties and values:
+ For example:
- ```json
- "schemaReferences": [
- {
- "messageId": "837",
- "schemaVersion": "00501",
- "schemaName": "X12_00501_837"
- }
- ]
- ```
+ ![Screenshot showing Standard workflow and connection information for action named Decode X12 message.](./media/logic-apps-enterprise-integration-x12/create-x12-decode-connection-standard.png)
- In this `schemaReferences` section, add another entry that has these values:
+1. When you're done, select **Create**.
- * `"messageId": "837_P"`
- * `"schemaVersion": "00501"`
- * `"schemaName": "X12_00501_837_P"`
+1. In the X12 action information box, provide the following property values:
- When you're done, your `schemaReferences` section looks like this:
+ | Property | Required | Description |
+ |-|-|-|
+ | **X12 Flat File Message To Decode** | Yes | The X12 message in flat file format to decode <br><br>**Note**: The XML message payload or content for the message array, good or bad, is base64 encoded. So, you must use an expression that processes this content. For example, the following expression processes the message content as XML: <br><br>**`xml(base64ToBinary(item()?['Body']))`** |
+ | **Advanced parameters** | No | This operation includes the following other parameters: <br><br>- **Preserve Interchange** <br>- **Suspend Interchange on Error** <br><br>For more information, review [X12 message settings](logic-apps-enterprise-integration-x12-message-settings.md). |
- ```json
- "schemaReferences": [
- {
- "messageId": "837",
- "schemaVersion": "00501",
- "schemaName": "X12_00501_837"
- },
- {
- "messageId": "837_P",
- "schemaVersion": "00501",
- "schemaName": "X12_00501_837_P"
- }
- ]
- ```
+ For example, you can use the **Body** content output from the Request trigger as the XML message payload, but you must first preprocess this content using an expression:
-1. In your agreement's message settings, disable EDI validation by clearing the **EDI Validation** checkbox either for each message type or for all message types if you're using the **Default** values.
+ ![Screenshot showing Standard workflow, action named Decode X12 message, and action properties.](./media/logic-apps-enterprise-integration-x12/decode-x12-message-standard.png)
- ![Disable validation for all message types or each message type](./media/logic-apps-enterprise-integration-x12/x12-disable-validation.png)
+
## Next steps

* [X12 TA1 technical acknowledgments and error codes](logic-apps-enterprise-integration-x12-ta1-acknowledgment.md)
* [X12 997 functional acknowledgments and error codes](logic-apps-enterprise-integration-x12-997-acknowledgment.md)
-* [Managed connectors for Azure Logic Apps](../connectors/managed.md)
-* [Built-in connectors for Azure Logic Apps](../connectors/built-in.md)
+* [X12 message settings](logic-apps-enterprise-integration-x12-message-settings.md)
logic-apps Logic Apps Limits And Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-limits-and-config.md
ms.suite: integration Previously updated : 07/24/2023 Last updated : 08/29/2023

# Limits and configuration reference for Azure Logic Apps
The following tables list the values for a single workflow definition:
| `parameters` - Maximum number of items | 50 parameters ||
| `outputs` - Maximum number of items | 10 outputs ||
| `trackedProperties` - Maximum size | 8,000 characters ||
-||||
<a name="run-duration-retention-limits"></a>
The following table lists the values for a single workflow run:
| Name | Multi-tenant | Single-tenant | Integration service environment | Notes |
|---|---|---|---|---|
-| Run history retention in storage | 90 days | 90 days <br>(Default) | 366 days | The amount of time to keep a workflow's run history in storage after a run starts. <p><p>**Note**: If the workflow's run duration exceeds the retention limit, this run is removed from the run history in storage. If a run isn't immediately removed after reaching the retention limit, the run is removed within 7 days. <p><p>Whether a run completes or times out, run history retention is always calculated by using the run's start time and the current limit specified in the workflow setting, [**Run history retention in days**](#change-retention). No matter the previous limit, the current limit is always used for calculating retention. <p><p>For more information, review [Change duration and run history retention in storage](#change-retention). |
-| Run duration | 90 days | - Stateful workflow: 90 days <br>(Default) <p><p>- Stateless workflow: 5 min <br>(Default) | 366 days | The amount of time that a workflow can continue running before forcing a timeout. <p><p>The run duration is calculated by using a run's start time and the limit that's specified in the workflow setting, [**Run history retention in days**](#change-duration) at that start time. <p>**Important**: Make sure the run duration value is always less than or equal to the run history retention in storage value. Otherwise, run histories might be deleted before the associated jobs are complete. <p><p>For more information, review [Change run duration and history retention in storage](#change-duration). |
-| Recurrence interval | - Min: 1 sec <p><p>- Max: 500 days | - Min: 1 sec <p><p>- Max: 500 days | - Min: 1 sec <p><p>- Max: 500 days ||
-||||||
+| Run history retention in storage | 90 days | 90 days <br>(Default) | 366 days | The amount of time to keep a workflow's run history in storage after a run starts. <br><br>**Note**: If the workflow's run duration exceeds the retention limit, this run is removed from the run history in storage. If a run isn't immediately removed after reaching the retention limit, the run is removed within 7 days. <br><br>Whether a run completes or times out, run history retention is always calculated by using the run's start time and the current limit specified in the workflow setting, [**Run history retention in days**](#change-retention). No matter the previous limit, the current limit is always used for calculating retention. <br><br>For more information, review [Change duration and run history retention in storage](#change-retention). |
+| Run duration | 90 days | - Stateful workflow: 90 days <br>(Default) <br><br>- Stateless workflow: 5 min <br>(Default) | 366 days | The amount of time that a workflow can continue running before forcing a timeout. The run duration is calculated by using a run's start time and the limit that's specified in the workflow setting, [**Run history retention in days**](#change-duration) at that start time. <br><br>**Important**: Make sure the run duration value is always less than or equal to the run history retention in storage value. Otherwise, run histories might be deleted before the associated jobs are complete. <br><br>For more information, review [Change run duration and history retention in storage](#change-duration). |
+| Recurrence interval | - Min: 1 sec <br><br>- Max: 500 days | - Min: 1 sec <br><br>- Max: 500 days | - Min: 1 sec <br><br>- Max: 500 days ||
<a name="change-duration"></a> <a name="change-retention"></a>
The following table lists the values for a **For each** loop:
| Name | Multi-tenant | Single-tenant | Integration service environment | Notes |
|---|---|---|---|---|
-| Array items | 100,000 items | - Stateful workflow: 100,000 items <br>(Default) <p><p>- Stateless workflow: 100 items <br>(Default) | 100,000 items | The number of array items that a **For each** loop can process. <p><p>To filter larger arrays, you can use the [query action](logic-apps-perform-data-operations.md#filter-array-action). <p><p>To change the default limit in the single-tenant service, review [Edit host and app settings for logic apps in single-tenant Azure Logic Apps](edit-app-settings-host-settings.md). |
-| Concurrent iterations | Concurrency off: 20 <p><p>Concurrency on: <p>- Default: 20 <br>- Min: 1 <br>- Max: 50 | Concurrency off: 20 <br>(Default) <p><p>Concurrency on: <p><p>- Default: 20 <br>- Min: 1 <br>- Max: 50 | Concurrency off: 20 <p><p>Concurrency on: <p>- Default: 20 <br>- Min: 1 <br>- Max: 50 | The number of **For each** loop iterations that can run at the same time, or in parallel. <p><p>To change this value in the multi-tenant service, see [Change **For each** concurrency limit](../logic-apps/logic-apps-workflow-actions-triggers.md#change-for-each-concurrency) or [Run **For each** loops sequentially](../logic-apps/logic-apps-workflow-actions-triggers.md#sequential-for-each). <p><p>To change the default limit in the single-tenant service, review [Edit host and app settings for logic apps in single-tenant Azure Logic Apps](edit-app-settings-host-settings.md). |
-||||||
+| Array items | 100,000 items | - Stateful workflow: 100,000 items <br>(Default) <br><br>- Stateless workflow: 100 items <br>(Default) | 100,000 items | The number of array items that a **For each** loop can process. <br><br>To filter larger arrays, you can use the [query action](logic-apps-perform-data-operations.md#filter-array-action). <br><br>To change the default limit in the single-tenant service, review [Edit host and app settings for logic apps in single-tenant Azure Logic Apps](edit-app-settings-host-settings.md). |
+| Concurrent iterations | Concurrency off: 20 <br><br>Concurrency on: <br><br>- Default: 20 <br>- Min: 1 <br>- Max: 50 | Concurrency off: 20 <br>(Default) <br><br>Concurrency on: <br><br>- Default: 20 <br>- Min: 1 <br>- Max: 50 | Concurrency off: 20 <br><br>Concurrency on: <br><br>- Default: 20 <br>- Min: 1 <br>- Max: 50 | The number of **For each** loop iterations that can run at the same time, or in parallel. <br><br>To change this value in the multi-tenant service, see [Change **For each** concurrency limit](../logic-apps/logic-apps-workflow-actions-triggers.md#change-for-each-concurrency) or [Run **For each** loops sequentially](../logic-apps/logic-apps-workflow-actions-triggers.md#sequential-for-each). <br><br>To change the default limit in the single-tenant service, review [Edit host and app settings for logic apps in single-tenant Azure Logic Apps](edit-app-settings-host-settings.md). |
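+
+In code view, the **For each** concurrency values in the preceding table map to the loop's `runtimeConfiguration` object. The following minimal sketch raises the parallel iterations to the maximum of 50; the `foreach` expression and the inner action are illustrative placeholders:
+
+```json
+"For_each": {
+    "type": "Foreach",
+    "foreach": "@triggerBody()?['items']",
+    "runtimeConfiguration": {
+        "concurrency": {
+            "repetitions": 50
+        }
+    },
+    "actions": {
+        "Process_item": {
+            "type": "Compose",
+            "inputs": "@item()",
+            "runAfter": {}
+        }
+    },
+    "runAfter": {}
+}
+```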
<a name="until-loop"></a>
The following table lists the values for an **Until** loop:
| Name | Multi-tenant | Single-tenant | Integration service environment | Notes |
|---|---|---|---|---|
-| Iterations | - Default: 60 <br>- Min: 1 <br>- Max: 5,000 | Stateful workflow: <p><p>- Default: 60 <br>- Min: 1 <br>- Max: 5,000 <p><p>Stateless workflow: <p><p>- Default: 60 <br>- Min: 1 <br>- Max: 100 | - Default: 60 <br>- Min: 1 <br>- Max: 5,000 | The number of cycles that an **Until** loop can have during a workflow run. <p><p>To change this value in the multi-tenant service, in the **Until** loop shape, select **Change limits**, and specify the value for the **Count** property. <p><p>To change the default value in the single-tenant service, review [Edit host and app settings for logic apps in single-tenant Azure Logic Apps](edit-app-settings-host-settings.md). |
-| Timeout | Default: PT1H (1 hour) | Stateful workflow: PT1H (1 hour) <p><p>Stateless workflow: PT5M (5 min) | Default: PT1H (1 hour) | The amount of time that the **Until** loop can run before exiting and is specified in [ISO 8601 format](https://en.wikipedia.org/wiki/ISO_8601). The timeout value is evaluated for each loop cycle. If any action in the loop takes longer than the timeout limit, the current cycle doesn't stop. However, the next cycle doesn't start because the limit condition isn't met. <p><p>To change this value in the multi-tenant service, in the **Until** loop shape, select **Change limits**, and specify the value for the **Timeout** property. <p><p>To change the default value in the single-tenant service, review [Edit host and app settings for logic apps in single-tenant Azure Logic Apps](edit-app-settings-host-settings.md). |
-||||||
+| Iterations | - Default: 60 <br>- Min: 1 <br>- Max: 5,000 | Stateful workflow: <br><br>- Default: 60 <br>- Min: 1 <br>- Max: 5,000 <br><br>Stateless workflow: <br><br>- Default: 60 <br>- Min: 1 <br>- Max: 100 | - Default: 60 <br>- Min: 1 <br>- Max: 5,000 | The number of cycles that an **Until** loop can have during a workflow run. <br><br>To change this value in the multi-tenant service, in the **Until** loop shape, select **Change limits**, and specify the value for the **Count** property. <br><br>To change the default value in the single-tenant service, review [Edit host and app settings for logic apps in single-tenant Azure Logic Apps](edit-app-settings-host-settings.md). |
+| Timeout | Default: PT1H (1 hour) | Stateful workflow: PT1H (1 hour) <br><br>Stateless workflow: PT5M (5 min) | Default: PT1H (1 hour) | The amount of time that the **Until** loop can run before exiting and is specified in [ISO 8601 format](https://en.wikipedia.org/wiki/ISO_8601). The timeout value is evaluated for each loop cycle. If any action in the loop takes longer than the timeout limit, the current cycle doesn't stop. However, the next cycle doesn't start because the limit condition isn't met. <br><br>To change this value in the multi-tenant service, in the **Until** loop shape, select **Change limits**, and specify the value for the **Timeout** property. <br><br>To change the default value in the single-tenant service, review [Edit host and app settings for logic apps in single-tenant Azure Logic Apps](edit-app-settings-host-settings.md). |
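+
+In code view, the **Until** iteration and timeout values in the preceding table correspond to the loop's `limit` object. The following minimal sketch uses the default values; the condition expression and the inner action are illustrative placeholders:
+
+```json
+"Until": {
+    "type": "Until",
+    "expression": "@equals(variables('status'), 'done')",
+    "limit": {
+        "count": 60,
+        "timeout": "PT1H"
+    },
+    "actions": {
+        "Check_status": {
+            "type": "Compose",
+            "inputs": "@variables('status')",
+            "runAfter": {}
+        }
+    },
+    "runAfter": {}
+}
+```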
<a name="concurrency-debatching"></a>
The following table lists the values for an **Until** loop:
| Name | Multi-tenant | Single-tenant | Integration service environment | Notes |
|---|---|---|---|---|
-| Trigger - concurrent runs | Concurrency off: Unlimited <p><p>Concurrency on (irreversible): <p><p>- Default: 25 <br>- Min: 1 <br>- Max: 100 | Concurrency off: Unlimited <p><p>Concurrency on (irreversible): <p><p>- Default: 100 <br>- Min: 1 <br>- Max: 100 | Concurrency off: Unlimited <p><p>Concurrency on (irreversible): <p><p>- Default: 25 <br>- Min: 1 <br>- Max: 100 | The number of concurrent runs that a trigger can start at the same time, or in parallel. <p><p>**Note**: When concurrency is turned on, the **SplitOn** limit is reduced to 100 items for [debatching arrays](../logic-apps/logic-apps-workflow-actions-triggers.md#split-on-debatch). <p><p>To change this value in the multi-tenant service, see [Change trigger concurrency limit](../logic-apps/logic-apps-workflow-actions-triggers.md#change-trigger-concurrency) or [Trigger instances sequentially](../logic-apps/logic-apps-workflow-actions-triggers.md#sequential-trigger). <p><p>To change the default value in the single-tenant service, review [Edit host and app settings for logic apps in single-tenant Azure Logic Apps](edit-app-settings-host-settings.md). |
-| Maximum waiting runs | Concurrency off: <p><p>- Min: 1 run <p>- Max: 50 runs <p><p>Concurrency on: <p><p>- Min: 10 runs plus the number of concurrent runs <p>- Max: 100 runs | Concurrency off: <p><p>- Min: 1 run <br>(Default) <p>- Max: 50 runs <br>(Default) <p><p>Concurrency on: <p><p>- Min: 10 runs plus the number of concurrent runs <p>- Max: 200 runs <br>(Default) | Concurrency off: <p><p>- Min: 1 run <p>- Max: 50 runs <p><p>Concurrency on: <p><p>- Min: 10 runs plus the number of concurrent runs <p>- Max: 100 runs | The number of workflow instances that can wait to run when your current workflow instance is already running the maximum concurrent instances. <p><p>To change this value in the multi-tenant service, see [Change waiting runs limit](../logic-apps/logic-apps-workflow-actions-triggers.md#change-waiting-runs). <p><p>To change the default value in the single-tenant service, review [Edit host and app settings for logic apps in single-tenant Azure Logic Apps](edit-app-settings-host-settings.md). |
-| **SplitOn** items | Concurrency off: 100,000 items <p><p>Concurrency on: 100 items | Concurrency off: 100,000 items <p><p>Concurrency on: 100 items | Concurrency off: 100,000 items <br>(Default) <p><p>Concurrency on: 100 items <br>(Default) | For triggers that return an array, you can specify an expression that uses a **SplitOn** property that [splits or debatches array items into multiple workflow instances](../logic-apps/logic-apps-workflow-actions-triggers.md#split-on-debatch) for processing, rather than use a **For each** loop. This expression references the array to use for creating and running a workflow instance for each array item. <p><p>**Note**: When concurrency is turned on, the **SplitOn** limit is reduced to 100 items. |
-||||||
+| Trigger - concurrent runs | Concurrency off: Unlimited <br><br>Concurrency on (irreversible): <br><br>- Default: 25 <br>- Min: 1 <br>- Max: 100 | Concurrency off: Unlimited <br><br>Concurrency on (irreversible): <br><br>- Default: 100 <br>- Min: 1 <br>- Max: 100 | Concurrency off: Unlimited <br><br>Concurrency on (irreversible): <br><br>- Default: 25 <br>- Min: 1 <br>- Max: 100 | The number of concurrent runs that a trigger can start at the same time, or in parallel. <br><br>**Note**: When concurrency is turned on, the **SplitOn** limit is reduced to 100 items for [debatching arrays](../logic-apps/logic-apps-workflow-actions-triggers.md#split-on-debatch). <br><br>To change this value in the multi-tenant service, see [Change trigger concurrency limit](../logic-apps/logic-apps-workflow-actions-triggers.md#change-trigger-concurrency) or [Trigger instances sequentially](../logic-apps/logic-apps-workflow-actions-triggers.md#sequential-trigger). <br><br>To change the default value in the single-tenant service, review [Edit host and app settings for logic apps in single-tenant Azure Logic Apps](edit-app-settings-host-settings.md). |
+| Maximum waiting runs | Concurrency off: <br><br>- Min: 1 run <br><br>- Max: 50 runs <br><br>Concurrency on: <br><br>- Min: 10 runs plus the number of concurrent runs <br><br>- Max: 100 runs | Concurrency off: <br><br>- Min: 1 run <br>(Default) <br><br>- Max: 50 runs <br>(Default) <br><br>Concurrency on: <br><br>- Min: 10 runs plus the number of concurrent runs <br><br>- Max: 200 runs <br>(Default) | Concurrency off: <br><br>- Min: 1 run <br><br>- Max: 50 runs <br><br>Concurrency on: <br><br>- Min: 10 runs plus the number of concurrent runs <br><br>- Max: 100 runs | The number of workflow instances that can wait to run when your current workflow instance is already running the maximum concurrent instances. <br><br>To change this value in the multi-tenant service, see [Change waiting runs limit](../logic-apps/logic-apps-workflow-actions-triggers.md#change-waiting-runs). <br><br>To change the default value in the single-tenant service, review [Edit host and app settings for logic apps in single-tenant Azure Logic Apps](edit-app-settings-host-settings.md). |
+| **SplitOn** items | Concurrency off: 100,000 items <br><br>Concurrency on: 100 items | Concurrency off: 100,000 items <br><br>Concurrency on: 100 items | Concurrency off: 100,000 items <br>(Default) <br><br>Concurrency on: 100 items <br>(Default) | For triggers that return an array, you can specify an expression that uses a **SplitOn** property that [splits or debatches array items into multiple workflow instances](../logic-apps/logic-apps-workflow-actions-triggers.md#split-on-debatch) for processing, rather than use a **For each** loop. This expression references the array to use for creating and running a workflow instance for each array item. <br><br>**Note**: When concurrency is turned on, the **SplitOn** limit is reduced to 100 items. |
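+
+For context, debatching is declared with the `splitOn` property in the trigger definition and applies only to triggers that return an array. The following sketch assumes a Request trigger whose body returns the array in a `value` property, which is a common but not universal payload shape:
+
+```json
+"triggers": {
+    "When_items_arrive": {
+        "type": "Request",
+        "kind": "Http",
+        "splitOn": "@triggerBody()?['value']",
+        "inputs": {}
+    }
+}
+```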
<a name="throughput-limits"></a>
The following table lists the values for a single workflow definition:
|---|---|---|---|
| Action - Executions per 5-minute rolling interval | Default: 100,000 executions <br>- High throughput mode: 300,000 executions | None | In the multi-tenant service, you can raise the default value to the maximum value for your workflow. For more information, see [Run in high throughput mode](#run-high-throughput-mode), which is in preview. Or, you can [distribute the workload across more than one workflow](handle-throttling-problems-429-errors.md#logic-app-throttling) as necessary. |
| Action - Concurrent outbound calls | ~2,500 calls | None | You can reduce the number of concurrent requests or reduce the duration as necessary. |
-| Managed connector throttling | Throttling limit varies based on connector | Throttling limit varies based on connector | For multi-tenant, review [each managed connector's technical reference page](/connectors/connector-reference/connector-reference-logicapps-connectors). <p><p>For more information about handling connector throttling, review [Handle throttling problems ("429 - Too many requests" errors)](handle-throttling-problems-429-errors.md#connector-throttling). |
+| Managed connector throttling | Throttling limit varies based on connector | Throttling limit varies based on connector | For multi-tenant, review [each managed connector's technical reference page](/connectors/connector-reference/connector-reference-logicapps-connectors). <br><br>For more information about handling connector throttling, review [Handle throttling problems ("429 - Too many requests" errors)](handle-throttling-problems-429-errors.md#connector-throttling). |
| Runtime endpoint - Concurrent inbound calls | ~1,000 calls | None | You can reduce the number of concurrent requests or reduce the duration as necessary. |
| Runtime endpoint - Read calls per 5 min | 60,000 read calls | None | This limit applies to calls that get the raw inputs and outputs from a workflow's run history. You can distribute the workload across more than one workflow as necessary. |
| Runtime endpoint - Invoke calls per 5 min | 45,000 invoke calls | None | You can distribute workload across more than one workflow as necessary. |
| Content throughput per 5 min | 600 MB | None | You can distribute workload across more than one workflow as necessary. |
-|||||
<a name="run-high-throughput-mode"></a>
For more information about your logic app resource definition, review [Overview:
| Base unit execution limit | System-throttled when infrastructure capacity reaches 80% | Provides ~4,000 action executions per minute, which is ~160 million action executions per month |
| Scale unit execution limit | System-throttled when infrastructure capacity reaches 80% | Each scale unit can provide ~2,000 more action executions per minute, which is ~80 million more action executions per month |
| Maximum scale units that you can add | 10 scale units | |
- ||||
<a name="gateway-limits"></a>
The following table lists the retry policy limits for a trigger or action, based
| Name | Consumption limit | Standard limit | Notes |
||-|-|-|
| Retry attempts | - Default: 4 attempts <br> - Max: 90 attempts | - Default: 4 attempts | To change the default limit in Consumption logic app workflows, use the [retry policy parameter](logic-apps-exception-handling.md#retry-policies). To change the default limit in Standard logic app workflows, review [Edit host and app settings for logic apps in single-tenant Azure Logic Apps](edit-app-settings-host-settings.md). |
-| Retry interval | None | Default: 7 sec | To change the default limit in Consumption logic app workflows, use the [retry policy parameter](logic-apps-exception-handling.md#retry-policies). <p><p>To change the default limit in Standard logic app workflows, review [Edit host and app settings for logic apps in single-tenant Azure Logic Apps](edit-app-settings-host-settings.md). |
-|||||
+| Retry interval | None | Default: 7 sec | To change the default limit in Consumption logic app workflows, use the [retry policy parameter](logic-apps-exception-handling.md#retry-policies). <br><br>To change the default limit in Standard logic app workflows, review [Edit host and app settings for logic apps in single-tenant Azure Logic Apps](edit-app-settings-host-settings.md). |
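As a hedged example of changing the retry behavior in a Consumption workflow, the following sketch sets the [retry policy parameter](logic-apps-exception-handling.md#retry-policies) on a single HTTP action; the action name, URI, and counts are placeholders:

```json
"HTTP_Call_service": {
   "type": "Http",
   "inputs": {
      "method": "GET",
      "uri": "https://example.com/api/status",
      "retryPolicy": {
         "type": "fixed",
         "count": 4,
         "interval": "PT7S"
      }
   },
   "runAfter": {}
}
```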
<a name="variables-action-limits"></a>
The following table lists the values for a single workflow definition:
| Name | Multi-tenant | Single-tenant | Integration service environment | Notes |
||--|||-|
| Variables per workflow | 250 variables | 250 variables <br>(Default) | 250 variables ||
-| Variable - Maximum content size | 104,857,600 characters | Stateful workflow: 104,857,600 characters <br>(Default) <p><p>Stateless workflow: 1,024 characters <br>(Default) | 104,857,600 characters | To change the default value in the single-tenant service, review [Edit host and app settings for logic apps in single-tenant Azure Logic Apps](edit-app-settings-host-settings.md). |
-| Variable (Array type) - Maximum number of array items | 100,000 items | 100,000 items <br>(Default) | Premium SKU: 100,000 items <p><p>Developer SKU: 5,000 items | To change the default value in the single-tenant service, review [Edit host and app settings for logic apps in single-tenant Azure Logic Apps](edit-app-settings-host-settings.md). |
-||||||
+| Variable - Maximum content size | 104,857,600 characters | Stateful workflow: 104,857,600 characters <br>(Default) <br><br>Stateless workflow: 1,024 characters <br>(Default) | 104,857,600 characters | To change the default value in the single-tenant service, review [Edit host and app settings for logic apps in single-tenant Azure Logic Apps](edit-app-settings-host-settings.md). |
+| Variable (Array type) - Maximum number of array items | 100,000 items | 100,000 items <br>(Default) | Premium SKU: 100,000 items <br><br>Developer SKU: 5,000 items | To change the default value in the single-tenant service, review [Edit host and app settings for logic apps in single-tenant Azure Logic Apps](edit-app-settings-host-settings.md). |
<a name="http-limits"></a>
By default, the HTTP action and APIConnection actions follow the [standard async
| Name | Multi-tenant | Single-tenant | Integration service environment | Notes |
||--|||-|
-| Outbound request | 120 sec <br>(2 min) | 235 sec <br>(3.9 min) <br>(Default) | 240 sec <br>(4 min) | Examples of outbound requests include calls made by the HTTP trigger or action. <p><p>**Tip**: For longer running operations, use an [asynchronous polling pattern](../logic-apps/logic-apps-create-api-app.md#async-pattern) or an ["Until" loop](../logic-apps/logic-apps-workflow-actions-triggers.md#until-action). To work around timeout limits when you call another workflow that has a [callable endpoint](logic-apps-http-endpoint.md), you can use the built-in Azure Logic Apps action instead, which you can find in the designer's operation picker under **Built-in**. <p><p>To change the default limit in the single-tenant service, review [Edit host and app settings for logic apps in single-tenant Azure Logic Apps](edit-app-settings-host-settings.md). |
-| Inbound request | 120 sec <br>(2 min) | 235 sec <br>(3.9 min) <br>(Default) | 240 sec <br>(4 min) | Examples of inbound requests include calls received by the Request trigger, HTTP Webhook trigger, and HTTP Webhook action. <p><p>**Note**: For the original caller to get the response, all steps in the response must finish within the limit unless you call another nested workflow. For more information, see [Call, trigger, or nest logic apps](../logic-apps/logic-apps-http-endpoint.md). <p><p>To change the default limit in the single-tenant service, review [Edit host and app settings for logic apps in single-tenant Azure Logic Apps](edit-app-settings-host-settings.md). |
-||||||
+| Outbound request | 120 sec <br>(2 min) | 235 sec <br>(3.9 min) <br>(Default) | 240 sec <br>(4 min) | Examples of outbound requests include calls made by the HTTP trigger or action. <br><br>**Tip**: For longer running operations, use an [asynchronous polling pattern](../logic-apps/logic-apps-create-api-app.md#async-pattern) or an ["Until" loop](../logic-apps/logic-apps-workflow-actions-triggers.md#until-action). To work around timeout limits when you call another workflow that has a [callable endpoint](logic-apps-http-endpoint.md), you can use the built-in Azure Logic Apps action instead, which you can find in the designer's operation picker under **Built-in**. <br><br>To change the default limit in the single-tenant service, review [Edit host and app settings for logic apps in single-tenant Azure Logic Apps](edit-app-settings-host-settings.md). |
+| Inbound request | 120 sec <br>(2 min) | 235 sec <br>(3.9 min) <br>(Default) | 240 sec <br>(4 min) | Examples of inbound requests include calls received by the Request trigger, HTTP Webhook trigger, and HTTP Webhook action. <br><br>**Note**: For the original caller to get the response, all steps in the response must finish within the limit unless you call another nested workflow. For more information, see [Call, trigger, or nest logic apps](../logic-apps/logic-apps-http-endpoint.md). <br><br>To change the default limit in the single-tenant service, review [Edit host and app settings for logic apps in single-tenant Azure Logic Apps](edit-app-settings-host-settings.md). |
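To illustrate the "Until" loop workaround for long-running operations, the following minimal sketch polls a status endpoint until the operation reports completion; the action names, URI, and status value are placeholders, not from this article:

```json
"Until_operation_completes": {
   "type": "Until",
   "expression": "@equals(body('Check_status')?['status'], 'Succeeded')",
   "limit": {
      "count": 60,
      "timeout": "PT1H"
   },
   "actions": {
      "Check_status": {
         "type": "Http",
         "inputs": {
            "method": "GET",
            "uri": "https://example.com/api/operations/<operation-ID>"
         },
         "runAfter": {}
      },
      "Delay": {
         "type": "Wait",
         "inputs": {
            "interval": {
               "count": 30,
               "unit": "Second"
            }
         },
         "runAfter": {
            "Check_status": [ "Succeeded" ]
         }
      }
   },
   "runAfter": {}
}
```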
<a name="content-storage-size-limits"></a>
By default, the HTTP action and APIConnection actions follow the [standard async
| Name | Chunking enabled | Multi-tenant | Single-tenant | Integration service environment | Notes |
|||--|-||-|
| Content download - Maximum number of requests | Yes | 1,000 requests | 1,000 requests <br>(Default) | 1,000 requests ||
-| Message size | No | 100 MB | 100 MB | 200 MB | To work around this limit, see [Handle large messages with chunking](../logic-apps/logic-apps-handle-large-messages.md). However, some connectors and APIs don't support chunking or even the default limit. <p><p>- Connectors such as AS2, X12, and EDIFACT have their own [B2B message limits](#b2b-protocol-limits). <p>- ISE connectors use the ISE limit, not the non-ISE connector limits. <p><p>To change the default value in the single-tenant service, review [Edit host and app settings for logic apps in single-tenant Azure Logic Apps](edit-app-settings-host-settings.md). |
-| Message size per action | Yes | 1 GB | 1,073,741,824 bytes <br>(1 GB) <br>(Default) | 5 GB | This limit applies to actions that either natively support chunking or let you enable chunking in their runtime configuration. <p><p>If you're using an ISE, the Azure Logic Apps engine supports this limit, but connectors have their own chunking limits up to the engine limit, for example, see the [Azure Blob Storage connector's API reference](/connectors/azureblob/). For more information about chunking, see [Handle large messages with chunking](../logic-apps/logic-apps-handle-large-messages.md). <p><p>To change the default value in the single-tenant service, review [Edit host and app settings for logic apps in single-tenant Azure Logic Apps](edit-app-settings-host-settings.md). |
-| Content chunk size per action | Yes | Varies per connector | 52,428,800 bytes (52 MB) <br>(Default) | Varies per connector | This limit applies to actions that either natively support chunking or let you enable chunking in their runtime configuration. <p><p>To change the default value in the single-tenant service, review [Edit host and app settings for logic apps in single-tenant Azure Logic Apps](edit-app-settings-host-settings.md). |
-|||||||
+| Message size | No | 100 MB | 100 MB | 200 MB | To work around this limit, see [Handle large messages with chunking](../logic-apps/logic-apps-handle-large-messages.md). However, some connectors and APIs don't support chunking or even the default limit. <br><br>- Connectors such as AS2, X12, and EDIFACT have their own [B2B message limits](#b2b-protocol-limits). <br><br>- ISE connectors use the ISE limit, not the non-ISE connector limits. <br><br>To change the default value in the single-tenant service, review [Edit host and app settings for logic apps in single-tenant Azure Logic Apps](edit-app-settings-host-settings.md). |
+| Message size per action | Yes | 1 GB | 1,073,741,824 bytes <br>(1 GB) <br>(Default) | 5 GB | This limit applies to actions that either natively support chunking or let you enable chunking in their runtime configuration. <br><br>If you're using an ISE, the Azure Logic Apps engine supports this limit, but connectors have their own chunking limits up to the engine limit, for example, see the [Azure Blob Storage connector's API reference](/connectors/azureblob/). For more information about chunking, see [Handle large messages with chunking](../logic-apps/logic-apps-handle-large-messages.md). <br><br>To change the default value in the single-tenant service, review [Edit host and app settings for logic apps in single-tenant Azure Logic Apps](edit-app-settings-host-settings.md). |
+| Content chunk size per action | Yes | Varies per connector | 52,428,800 bytes (52 MB) <br>(Default) | Varies per connector | This limit applies to actions that either natively support chunking or let you enable chunking in their runtime configuration. <br><br>To change the default value in the single-tenant service, review [Edit host and app settings for logic apps in single-tenant Azure Logic Apps](edit-app-settings-host-settings.md). |
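For actions that let you enable chunking through their runtime configuration, the setting appears in code view roughly as in this sketch; the action name, URI, and body reference are placeholders:

```json
"HTTP_Upload_file": {
   "type": "Http",
   "inputs": {
      "method": "PUT",
      "uri": "https://example.com/api/upload",
      "body": "@body('Get_file_content')"
   },
   "runtimeConfiguration": {
      "contentTransfer": {
         "transferMode": "Chunked"
      }
   },
   "runAfter": {}
}
```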
### Character limits
By default, the HTTP action and APIConnection actions follow the [standard async
||-|-|
| Expression evaluation limit | 131,072 characters | The `@concat()`, `@base64()`, `@string()` expressions can't be longer than this limit. |
| Request URL character limit | 16,384 characters | |
-||||
<a name="authentication-limits"></a>
The following table lists the values for a workflow that starts with a Request t
| Azure AD authorization policies | 5 policies | |
| Claims per authorization policy | 10 claims | |
| Claim value - Maximum number of characters | 150 characters |
-||||
<a name="switch-action-limits"></a>
The following table lists the values for a single workflow definition:
| Name | Limit | Notes |
| - | -- | -- |
| Maximum number of cases per action | 25 ||
-||||
<a name="inline-code-action-limits"></a>
The following table lists the values for a single workflow definition:
||--|||-|
| Maximum number of code characters | 1,024 characters | 100,000 characters | 1,024 characters | To use the higher limit, create a **Logic App (Standard)** resource, which runs in single-tenant Azure Logic Apps, either [by using the Azure portal](create-single-tenant-workflows-azure-portal.md) or [by using Visual Studio Code and the **Azure Logic Apps (Standard)** extension](create-single-tenant-workflows-visual-studio-code.md). |
| Maximum duration for running code | 5 sec | 15 sec | 5 sec | To use the higher limit, create a **Logic App (Standard)** resource, which runs in single-tenant Azure Logic Apps, either [by using the Azure portal](create-single-tenant-workflows-azure-portal.md) or [by using Visual Studio Code and the **Azure Logic Apps (Standard)** extension](create-single-tenant-workflows-visual-studio-code.md). |
-||||||
<a name="custom-connector-limits"></a>
The following table lists the values for custom connectors:
| APIs per service | SOAP-based: 50 | Not applicable | SOAP-based: 50 ||
| Parameters per API | SOAP-based: 50 | Not applicable | SOAP-based: 50 ||
| Requests per minute for a custom connector | 500 requests per minute per connection | Based on your implementation | 2,000 requests per minute per *custom connector* ||
-| Connection timeout | 2 min | Idle connection: <br>4 min <p><p>Active connection: <br>10 min | 2 min ||
-||||||
+| Connection timeout | 2 min | Idle connection: <br>4 min <br><br>Active connection: <br>10 min | 2 min ||
For more information, review the following documentation:
For more information, review the following documentation:
| Name | Limit |
||-|
-| Managed identities per logic app resource | - Consumption: Either the system-assigned identity *or* only one user-assigned identity <p>- Standard: The system-assigned identity *and* any number of user-assigned identities <p>**Note**: By default, a **Logic App (Standard)** resource has the system-assigned managed identity automatically enabled to authenticate connections at runtime. This identity differs from the authentication credentials or connection string that you use when you create a connection. If you disable this identity, connections won't work at runtime. To view this setting, on your logic app's menu, under **Settings**, select **Identity**. |
+| Managed identities per logic app resource | - Consumption: Either the system-assigned identity *or* only one user-assigned identity <br><br>- Standard: The system-assigned identity *and* any number of user-assigned identities <br><br>**Note**: By default, a **Logic App (Standard)** resource has the system-assigned managed identity automatically enabled to authenticate connections at runtime. This identity differs from the authentication credentials or connection string that you use when you create a connection. If you disable this identity, connections won't work at runtime. To view this setting, on your logic app's menu, under **Settings**, select **Identity**. |
| Number of logic apps that have a managed identity in an Azure subscription per region | - Consumption: 1,000 logic apps <br>- Standard: Per [Azure App Service limits, if any](../azure-resource-manager/management/azure-subscription-service-limits.md#app-service-limits) |
-|||
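In a resource definition or ARM template, these identities appear in the standard `identity` object. The following fragment is a sketch for a Standard logic app with both identity kinds enabled; the subscription ID, resource group, and identity name are placeholders:

```json
"identity": {
   "type": "SystemAssigned, UserAssigned",
   "userAssignedIdentities": {
      "/subscriptions/<Azure-subscription-ID>/resourceGroups/<resource-group>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<identity-name>": {}
   }
}
```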
<a name="integration-account-limits"></a>
Each Azure subscription has these integration account limits:
| ISE SKU | Integration account limits |
||-|
| **Premium** | 20 total accounts, including one Standard account at no extra cost. With this SKU, you can have only [Standard](../logic-apps/logic-apps-pricing.md#integration-accounts) accounts. No Free or Basic accounts are permitted. |
- | **Developer** | 20 total accounts, including one [Free](../logic-apps/logic-apps-pricing.md#integration-accounts) account (limited to 1). With this SKU, you can have either combination: <p>- A Free account and up to 19 [Standard](../logic-apps/logic-apps-pricing.md#integration-accounts) accounts. <br>- No Free account and up to 20 Standard accounts. <p>No Basic or more Free accounts are permitted. <p><p>**Important**: Use the [Developer SKU](../logic-apps/connect-virtual-network-vnet-isolated-environment-overview.md#ise-level) for experimenting, development, and testing, but not for production or performance testing. |
- |||
+ | **Developer** | 20 total accounts, including one [Free](../logic-apps/logic-apps-pricing.md#integration-accounts) account (limited to 1). With this SKU, you can have either combination: <br><br>- A Free account and up to 19 [Standard](../logic-apps/logic-apps-pricing.md#integration-accounts) accounts. <br>- No Free account and up to 20 Standard accounts. <br><br>No Basic or more Free accounts are permitted. <br><br>**Important**: Use the [Developer SKU](../logic-apps/connect-virtual-network-vnet-isolated-environment-overview.md#ise-level) for experimenting, development, and testing, but not for production or performance testing. |
To learn how pricing and billing work for ISEs, see the [Logic Apps pricing model](../logic-apps/logic-apps-pricing.md#ise-pricing). For pricing rates, see [Logic Apps pricing](https://azure.microsoft.com/pricing/details/logic-apps/).
The following tables list the values for the number of artifacts limited to each
> Use the Free tier only for exploratory scenarios, not production scenarios.
> This tier restricts throughput and usage, and has no service-level agreement (SLA).
-| Artifact | Free | Basic | Standard |
-|-||-|-|
-| EDI trading agreements | 10 | 1 | 1,000 |
-| EDI trading partners | 25 | 2 | 1,000 |
-| Maps | 25 | 500 | 1,000 |
-| Schemas | 25 | 500 | 1,000 |
-| Assemblies | 10 | 25 | 1,000 |
-| Certificates | 25 | 2 | 1,000 |
-| Batch configurations | 5 | 1 | 50 |
-| RosettaNet partner interface process (PIP) | 10 | 1 | 500 |
+| Artifact | Free | Basic | Standard | Premium (preview) |
+|-||-|-|-|
+| EDI trading agreements | 10 | 1 | 1,000 | Unlimited |
+| EDI trading partners | 25 | 2 | 1,000 | Unlimited |
+| Maps | 25 | 500 | 1,000 | Unlimited |
+| Schemas | 25 | 500 | 1,000 | Unlimited |
+| Assemblies | 10 | 25 | 1,000 | Unlimited, but currently unsupported for export from an ISE. |
+| Certificates | 25 | 2 | 1,000 | Unlimited |
+| Batch configurations | 5 | 1 | 50 | Unlimited |
+| RosettaNet partner interface process (PIP) | 10 | 1 | 500 | Unlimited, but currently unsupported for export from an ISE. |
<a name="artifact-capacity-limits"></a>
The following tables list the values for the number of artifacts limited to each
| Artifact | Limit | Notes |
| -- | -- | -- |
| Assembly | 8 MB | To upload files larger than 2 MB, use an [Azure storage account and blob container](../logic-apps/logic-apps-enterprise-integration-schemas.md). |
-| Map (XSLT file) | 8 MB | To upload files larger than 2 MB, use the [Azure Logic Apps REST API - Maps](/rest/api/logic/maps/createorupdate). <p><p>**Note**: The amount of data or records that a map can successfully process is based on the message size and action timeout limits in Azure Logic Apps. For example, if you use an HTTP action, based on [HTTP message size and timeout limits](#http-limits), a map can process data up to the HTTP message size limit if the operation completes within the HTTP timeout limit. |
+| Map (XSLT file) | 8 MB | To upload files larger than 2 MB, use the [Azure Logic Apps REST API - Maps](/rest/api/logic/maps/createorupdate). <br><br>**Note**: The amount of data or records that a map can successfully process is based on the message size and action timeout limits in Azure Logic Apps. For example, if you use an HTTP action, based on [HTTP message size and timeout limits](#http-limits), a map can process data up to the HTTP message size limit if the operation completes within the HTTP timeout limit. |
| Schema | 8 MB | To upload files larger than 2 MB, use an [Azure storage account and blob container](../logic-apps/logic-apps-enterprise-integration-schemas.md). |
-||||
<a name="integration-account-throughput-limits"></a>
The following tables list the values for the number of artifacts limited to each
| Invoke calls per 5 min | 3,000 | 30,000 | 45,000 | You can distribute the workload across more than one account as necessary. |
| Tracking calls per 5 min | 3,000 | 30,000 | 45,000 | You can distribute the workload across more than one account as necessary. |
| Blocking concurrent calls | ~1,000 | ~1,000 | ~1,000 | Same for all SKUs. You can reduce the number of concurrent requests or reduce the duration as necessary. |
-||||
<a name="b2b-protocol-limits"></a>
The following table lists the message size limits that apply to B2B protocols:
| AS2 | v2 - 100 MB<br>v1 - 25 MB | Unavailable | v2 - 200 MB <br>v1 - 25 MB | Applies to decode and encode |
| X12 | 50 MB | Unavailable | 50 MB | Applies to decode and encode |
| EDIFACT | 50 MB | Unavailable | 50 MB | Applies to decode and encode |
-||||
<a name="configuration"></a> <a name="firewall-ip-configuration"></a>
Before you set up your firewall with IP addresses, review these considerations:
* [Firewall permissions for single tenant logic apps - Azure portal](create-single-tenant-workflows-azure-portal.md#firewall-setup)
* [Firewall permissions for single tenant logic apps - Visual Studio Code](create-single-tenant-workflows-visual-studio-code.md#firewall-setup)
-* For Consumption logic app workflows that run in an [integration service environment (ISE)](connect-virtual-network-vnet-isolated-environment-overview.md), make sure that you [open these ports too](../logic-apps/connect-virtual-network-vnet-isolated-environment.md#network-ports-for-ise).
- * If your logic apps have problems accessing Azure storage accounts that use [firewalls and firewall rules](../storage/common/storage-network-security.md), you have [various other options to enable access](../connectors/connectors-create-api-azureblobstorage.md#access-storage-accounts-behind-firewalls). For example, logic apps can't directly access storage accounts that use firewall rules and exist in the same region. However, if you permit the [outbound IP addresses for managed connectors in your region](/connectors/common/outbound-ip-addresses), your logic apps can access storage accounts that are in a different region except when you use the Azure Table Storage or Azure Queue Storage connectors. To access your Table Storage or Queue Storage, you can use the HTTP trigger and actions instead. For other options, see [Access storage accounts behind firewalls](../connectors/connectors-create-api-azureblobstorage.md#access-storage-accounts-behind-firewalls).
logic-apps Logic Apps Securing A Logic App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-securing-a-logic-app.md
In a Standard logic app workflow that starts with the Request trigger (but not a
* An inbound call to the request endpoint can use only one authorization scheme, either Azure AD OAuth or [Shared Access Signature (SAS)](#sas). Although using one scheme doesn't disable the other scheme, using both schemes at the same time causes an error because Azure Logic Apps doesn't know which scheme to choose.
- To enable Azure AD OAuth so that this option is the only way to call the request endpoint, use the following steps:
-
- 1. To enable the capability to check the OAuth access token, [follow the steps to include 'Authorization' header in the Request or HTTP webhook trigger outputs](#include-auth-header).
-
- > [!NOTE]
- >
- > This step makes the `Authorization` header visible in the workflow's run history
- > and in the trigger's outputs.
-
- 1. In the [Azure portal](https://portal.azure.com), open your Consumption logic app workflow in the designer.
-
- 1. On the trigger, in the upper right corner, select the ellipses (**...**) button, and then select **Settings**.
-
- 1. Under **Trigger Conditions**, select **Add**. In the trigger condition box, enter the following expression, and select **Done**.
-
- `@startsWith(triggerOutputs()?['headers']?['Authorization'], 'Bearer')`
-
- > [!NOTE]
- > If you call the trigger endpoint without the correct authorization,
- > the run history just shows the trigger as `Skipped` without any
- > message that the trigger condition has failed.
-
-* Only [Bearer-type](../active-directory/develop/active-directory-v2-protocols.md#tokens) authorization schemes are supported for Azure AD OAuth access tokens, which means that the `Authorization` header for the access token must specify the `Bearer` type.
+* Azure Logic Apps supports either the [bearer type](../active-directory/develop/active-directory-v2-protocols.md#tokens) or the [proof-of-possession type (Consumption logic app only)](/entra/msal/dotnet/advanced/proof-of-possession-tokens) authorization scheme for Azure AD OAuth access tokens. However, the `Authorization` header for the access token must specify either the `Bearer` type or the `PoP` type. For more information about how to get and use a PoP token, see [Get a Proof-of-Possession (PoP) token](#get-pop).
* Your logic app resource is limited to a maximum number of authorization policies. Each authorization policy also has a maximum number of [claims](../active-directory/develop/developer-glossary.md#claim). For more information, review [Limits and configuration for Azure Logic Apps](../logic-apps/logic-apps-limits-and-config.md#authentication-limits).
In a Standard logic app workflow that starts with the Request trigger (but not a
}
```
+#### Enable Azure AD OAuth as the only option to call a request endpoint
+
+1. Set up your Request or HTTP webhook trigger with the capability to check the OAuth access token by [following the steps to include the 'Authorization' header in the Request or HTTP webhook trigger outputs](#include-auth-header).
+
+ > [!NOTE]
+ >
+ > This step makes the `Authorization` header visible in the
+ > workflow's run history and in the trigger's outputs.
+
+1. In the [Azure portal](https://portal.azure.com), open your Consumption logic app workflow in the designer.
+
+1. On the trigger, in the upper right corner, select the ellipses (**...**) button, and then select **Settings**.
+
+1. Under **Trigger Conditions**, select **Add**. In the trigger condition box, enter either of the following expressions, based on the token type you want to use, and select **Done**.
+
+ `@startsWith(triggerOutputs()?['headers']?['Authorization'], 'Bearer')`
+
+ -or-
+
+ `@startsWith(triggerOutputs()?['headers']?['Authorization'], 'PoP')`
+
+If you call the trigger endpoint without the correct authorization, the run history just shows the trigger as `Skipped` without any message that the trigger condition has failed.
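In code view, a trigger condition appears in the trigger's `conditions` array. The following minimal sketch shows the bearer variant on a Request trigger; the trigger name and empty schema are placeholders:

```json
"triggers": {
   "manual": {
      "type": "Request",
      "kind": "Http",
      "conditions": [
         {
            "expression": "@startsWith(triggerOutputs()?['headers']?['Authorization'], 'Bearer')"
         }
      ],
      "inputs": {
         "schema": {}
      }
   }
}
```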
+
+<a name="get-pop"></a>
+
+#### Get a Proof-of-Possession (PoP) token
+
+The Microsoft Authentication Library (MSAL) provides PoP tokens for you to use. If the logic app workflow that you want to call requires a PoP token, you can get one through MSAL. The following samples show how to acquire PoP tokens:
+
+* [A .NET Core daemon console application calling a protected Web API with its own identity](https://github.com/Azure-Samples/active-directory-dotnetcore-daemon-v2/tree/master/2-Call-OwnApi)
+
+* [SignedHttpRequest aka PoP (Proof of Possession)](https://github.com/AzureAD/azure-activedirectory-identitymodel-extensions-for-dotnet/wiki/SignedHttpRequest-aka-PoP-(Proof-of-Possession))
+
+To use the PoP token with your Consumption logic app, follow the steps in the next section to [set up Azure AD OAuth](#enable-azure-ad-inbound).
+
<a name="enable-azure-ad-inbound"></a>

#### Enable Azure AD OAuth for your Consumption logic app resource
In the [Azure portal](https://portal.azure.com), add one or more authorization p
1. On the logic app menu, under **Settings**, select **Authorization**. After the Authorization pane opens, select **Add policy**.
- ![Select "Authorization" > "Add policy"](./media/logic-apps-securing-a-logic-app/add-azure-active-directory-authorization-policies.png)
+ ![Screenshot that shows Azure portal, Consumption logic app menu, Authorization page, and selected button to add policy.](./media/logic-apps-securing-a-logic-app/add-azure-active-directory-authorization-policies.png)
1. Provide information about the authorization policy by specifying the [claim types](../active-directory/develop/developer-glossary.md#claim) and values that your logic app expects in the access token presented by each inbound call to the Request trigger:
- ![Provide information for authorization policy](./media/logic-apps-securing-a-logic-app/set-up-authorization-policy.png)
+ ![Screenshot that shows Azure portal, Consumption logic app Authorization page, and information for authorization policy.](./media/logic-apps-securing-a-logic-app/set-up-authorization-policy.png)
| Property | Required | Type | Description |
|-|-||-|
| **Policy name** | Yes | String | The name that you want to use for the authorization policy |
- | **Claims** | Yes | String | The claim types and values that your workflow accepts from inbound calls. Here are the available claim types: <br><br>- **Issuer** <br>- **Audience** <br>- **Subject** <br>- **JWT ID** (JSON Web Token identifier) <br><br>Requirements: <br><br>- At a minimum, the **Claims** list must include the **Issuer** claim, which has a value that starts with `https://sts.windows.net/` or `https://login.microsoftonline.com/` as the Azure AD issuer ID. <br>- Each claim must be a single string value, not an array of values. For example, you can have a claim with **Role** as the type and **Developer** as the value. You can't have a claim that has **Role** as the type and the values set to **Developer** and **Program Manager**. <br>- The claim value is limited to a [maximum number of characters](logic-apps-limits-and-config.md#authentication-limits). <br><br>For more information about these claim types, review [Claims in Azure AD security tokens](../active-directory/develop/security-tokens.md#json-web-tokens-and-claims). You can also specify your own claim type and value. |
+ | **Policy type** | Yes | String | Either **AAD** for bearer type tokens or **AADPOP** for Proof-of-Possession type tokens. |
+ | **Claims** | Yes | String | A key-value pair that specifies the claim type and value that the workflow's Request trigger expects in the access token presented by each inbound call to the trigger. You can add any standard claim you want by selecting **Add standard claim**. To add a claim that's specific to a PoP token, select **Add custom claim**. <br><br>Available standard claim types: <br><br>- **Issuer** <br>- **Audience** <br>- **Subject** <br>- **JWT ID** (JSON Web Token identifier) <br><br>Requirements: <br><br>- At a minimum, the **Claims** list must include the **Issuer** claim, which has a value that starts with `https://sts.windows.net/` or `https://login.microsoftonline.com/` as the Azure AD issuer ID. <br><br>- Each claim must be a single string value, not an array of values. For example, you can have a claim with **Role** as the type and **Developer** as the value. You can't have a claim that has **Role** as the type and the values set to **Developer** and **Program Manager**. <br><br>- The claim value is limited to a [maximum number of characters](logic-apps-limits-and-config.md#authentication-limits). <br><br>For more information about these claim types, review [Claims in Azure AD security tokens](../active-directory/develop/security-tokens.md#json-web-tokens-and-claims). You can also specify your own claim type and value. |
+
+ The following example shows the information for a PoP token:
+
+ ![Screenshot that shows Azure portal, Consumption logic app Authorization page, and information for a proof-of-possession policy.](./media/logic-apps-securing-a-logic-app/pop-policy-example.png)
1. To add another claim, select from these options:
In the [Azure portal](https://portal.azure.com), add one or more authorization p
1. To include the `Authorization` header from the access token in the request-based trigger outputs, review [Include 'Authorization' header in request and HTTP webhook trigger outputs](#include-auth-header).
-Workflow properties such as policies don't appear in your logic app's code view in the Azure portal. To access your policies programmatically, call the following API through Azure Resource
+Workflow properties such as policies don't appear in your workflow's code view in the Azure portal. To access your policies programmatically, call the following API through Azure Resource
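Although the policies don't show up in the workflow's code view, in a resource definition or ARM template they live under the workflow's `accessControl` section. The following fragment is a hedged sketch based on the `openAuthenticationPolicies` shape; the policy name, tenant ID, and audience are placeholders:

```json
"properties": {
   "accessControl": {
      "triggers": {
         "openAuthenticationPolicies": {
            "policies": {
               "AuthPolicy1": {
                  "type": "AAD",
                  "claims": [
                     {
                        "name": "iss",
                        "value": "https://sts.windows.net/<tenant-ID>/"
                     },
                     {
                        "name": "aud",
                        "value": "<audience-value>"
                     }
                  ]
               }
            }
         }
      }
   }
}
```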
<a name="define-authorization-policy-template"></a>
This list includes information about TLS/SSL self-signed certificates:
If you want to use client certificate or Azure Active Directory Open Authentication (Azure AD OAuth) with the "Certificate" credential type instead, you still have to complete a few extra steps for this authentication type. Otherwise, the call fails. For more information, review [Client certificate or Azure Active Directory Open Authentication (Azure AD OAuth) with the "Certificate" credential type for single-tenant Azure Logic Apps](../connectors/connectors-native-http.md#client-certificate-authentication).
-* For logic app workflows in an [integration service environment (ISE)](../logic-apps/connect-virtual-network-vnet-isolated-environment-overview.md), the HTTP connector permits self-signed certificates for TLS/SSL handshakes. However, you must first [enable self-signed certificate support](../logic-apps/create-integration-service-environment-rest-api.md#request-body) for an existing ISE or new ISE by using the Azure Logic Apps REST API, and install the public certificate at the `TrustedRoot` location.
- Here are more ways that you can help secure endpoints that handle calls sent from your logic app workflows: * [Add authentication to outbound requests](#add-authentication-outbound).
You can use Azure Logic Apps in [Azure Government](../azure-government/documenta
* [Virtual machine isolation in Azure](../virtual-machines/isolation.md) * [Deploy dedicated Azure services into virtual networks](../virtual-network/virtual-network-for-azure-services.md)
-* Based on whether you have Consumption or Standard logic apps, you have these options:
+* Based on whether you have Consumption or Standard logic app workflows, you have these options:
- * For Standard logic apps, you can privately and securely communicate between logic app workflows and an Azure virtual network by setting up private endpoints for inbound traffic and use virtual network integration for outbound traffic. For more information, review [Secure traffic between virtual networks and single-tenant Azure Logic Apps using private endpoints](secure-single-tenant-workflow-virtual-network-private-endpoint.md).
+ * Standard logic app workflows can privately and securely communicate with an Azure virtual network through private endpoints that you set up for inbound traffic and virtual network integration for outbound traffic. For more information, review [Secure traffic between virtual networks and single-tenant Azure Logic Apps using private endpoints](secure-single-tenant-workflow-virtual-network-private-endpoint.md).
- * For Consumption logic apps, you can create and deploy those logic apps in an [integration service environment (ISE)](connect-virtual-network-vnet-isolated-environment-overview.md). That way, your logic apps run on dedicated resources and can access resources protected by an Azure virtual network. For more control over the encryption keys used by Azure Storage, you can set up, use, and manage your own key by using [Azure Key Vault](../key-vault/general/overview.md). This capability is also known as "Bring Your Own Key" (BYOK), and your key is called a "customer-managed key". For more information, review [Set up customer-managed keys to encrypt data at rest for integration service environments (ISEs) in Azure Logic Apps](../logic-apps/customer-managed-keys-integration-service-environment.md).
+ * Consumption logic app workflows can run in an [integration service environment (ISE)](connect-virtual-network-vnet-isolated-environment-overview.md) where they can use dedicated resources and access resources protected by an Azure virtual network. However, the ISE resource retires on August 31, 2024, due to its dependency on Azure Cloud Services (classic), which retires at the same time.
- > [!IMPORTANT]
- > Some Azure virtual networks use private endpoints ([Azure Private Link](../private-link/private-link-overview.md))
- > for providing access to Azure PaaS services, such as Azure Storage, Azure Cosmos DB, or Azure SQL Database,
- > partner services, or customer services that are hosted on Azure.
- >
- > If your workflows need access to virtual networks that use private endpoints, and you want to develop those workflows
- > using the **Logic App (Consumption)** resource type, you *must create and run your logic apps in an ISE*. However,
- > if you want to develop those workflows using the **Logic App (Standard)** resource type, *you don't need an ISE*.
- > Instead, your workflows can communicate privately and securely with virtual networks by using private endpoints
- > for inbound traffic and virtual network integration for outbound traffic. For more information, review
+ > [!IMPORTANT]
+ > Some Azure virtual networks use private endpoints ([Azure Private Link](../private-link/private-link-overview.md))
+ > for providing access to Azure PaaS services, such as Azure Storage, Azure Cosmos DB, or Azure SQL Database,
+ > partner services, or customer services that are hosted on Azure.
+ >
+ > If you want to create Consumption logic app workflows that need access to virtual networks with private endpoints,
+ > you *must create and run your Consumption workflows in an ISE*. Or, you can create Standard workflows instead,
+ > which don't need an ISE. Instead, your workflows can communicate privately and securely with virtual networks
+ > by using private endpoints for inbound traffic and virtual network integration for outbound traffic. For more information, see
> [Secure traffic between virtual networks and single-tenant Azure Logic Apps using private endpoints](secure-single-tenant-workflow-virtual-network-private-endpoint.md). For more information about isolation, review the following documentation:
logic-apps Logic Apps Using File Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-using-file-connector.md
- Title: Connect to on-premises file systems
-description: Connect to file systems on premises from workflows in Azure Logic Apps using the File System connector.
-
-Previously updated : 11/08/2022
-
-# Connect to on-premises file systems from workflows in Azure Logic Apps
--
-This how-to guide shows how to access an on-premises file share from a workflow in Azure Logic Apps by using the File System connector. You can then create automated workflows that run when triggered by events in your file share or in other systems and run actions to manage your files. The connector provides the following capabilities:
-
-- Create, get, append, update, and delete files.
-- List files in folders or root folders.
-- Get file content and metadata.
-
-In this article, the example scenarios demonstrate the following tasks:
-
-- Trigger a workflow when a file is created or added to a file share, and then send an email.
-- Trigger a workflow when copying a file from a Dropbox account to a file share, and then send an email.
-
-## Connector technical reference
-
-The File System connector has different versions, based on [logic app type and host environment](../logic-apps/logic-apps-overview.md#resource-environment-differences).
-
-| Logic app | Environment | Connector version |
-|--|-|-|
-| **Consumption** | Multi-tenant Azure Logic Apps | Managed connector, which appears in the designer under the **Standard** label. For more information, review the following documentation: <br><br>- [File System managed connector reference](/connectors/filesystem/) <br>- [Managed connectors in Azure Logic Apps](../connectors/managed.md) |
-| **Consumption** | Integration service environment (ISE) | Managed connector, which appears in the designer under the **Standard** label, and the ISE version, which has different message limits than the Standard class. For more information, review the following documentation: <br><br>- [File System managed connector reference](/connectors/filesystem/) <br>- [ISE message limits](../logic-apps/logic-apps-limits-and-config.md#message-size-limits) <br>- [Managed connectors in Azure Logic Apps](../connectors/managed.md) |
-| **Standard** | Single-tenant Azure Logic Apps and App Service Environment v3 (Windows plans only) | Managed connector, which appears in the connector gallery under **Runtime** > **Shared**, and the built-in connector, which appears in the connector gallery under **Runtime** > **In-App** and is [service provider-based](../logic-apps/custom-connector-overview.md#service-provider-interface-implementation). The built-in connector differs in the following ways: <br><br>- The built-in connector supports only Standard logic apps that run in an App Service Environment v3 with Windows plans only. <br><br>- The built-in version can connect directly to a file share and access Azure virtual networks by using a connection string without an on-premises data gateway. <br><br>For more information, review the following documentation: <br><br>- [File System managed connector reference](/connectors/filesystem/) <br>- [File System built-in connector reference](/azure/logic-apps/connectors/built-in/reference/filesystem/) <br>- [Built-in connectors in Azure Logic Apps](../connectors/built-in.md) |
-
-## General limitations
-
-- The File System connector currently supports only Windows file systems on Windows operating systems.
-- Mapped network drives aren't supported.
-
-## Prerequisites
-
-* An Azure account and subscription. If you don't have an Azure subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-
-* To connect to your file share, different requirements apply, based on your logic app and the hosting environment:
-
- - Consumption logic app workflows
-
- - In multi-tenant Azure Logic Apps, you need to meet the following requirements, if you haven't already:
-
- 1. [Install the on-premises data gateway on a local computer](logic-apps-gateway-install.md).
-
- The File System managed connector requires that your gateway installation and file system server must exist in the same Windows domain.
-
- 1. [Create an on-premises data gateway resource in Azure](logic-apps-gateway-connection.md).
-
- 1. After you add a File System managed connector trigger or action to your workflow, select the data gateway resource that you previously created so you can connect to your file system.
-
- - In an ISE, you don't need the on-premises data gateway. Instead, you can use the ISE-versioned File System connector.
-
- - Standard logic app workflows
-
- You can use the File System built-in connector or managed connector.
-
- * To use the File System managed connector, follow the same requirements as a Consumption logic app workflow in multi-tenant Azure Logic Apps.
-
- * To use the File System built-in connector, your Standard logic app workflow must run in App Service Environment v3, but doesn't require the data gateway resource.
-
-* Access to the computer that has the file system you want to use. For example, if you install the data gateway on the same computer as your file system, you need the account credentials for that computer.
-
-* To follow the example scenario in this how-to guide, you need an email account from a provider that's supported by Azure Logic Apps, such as Office 365 Outlook, Outlook.com, or Gmail. For other providers, [review other supported email connectors](/connectors/connector-reference/connector-reference-logicapps-connectors). This example uses the Office 365 Outlook connector with a work or school account. If you use another email account, the overall steps are the same, but your UI might slightly differ.
-
- > [!IMPORTANT]
- > If you want to use the Gmail connector, only G-Suite business accounts can use this connector without restriction in logic apps.
- > If you have a Gmail consumer account, you can use this connector with only specific Google-approved services, or you can
- > [create a Google client app to use for authentication with your Gmail connector](/connectors/gmail/#authentication-and-bring-your-own-application).
- > For more information, see [Data security and privacy policies for Google connectors in Azure Logic Apps](../connectors/connectors-google-data-security-privacy-policy.md).
-
-* For the example File System action scenario, you need a [Dropbox account](https://www.dropbox.com/), which you can sign up for free.
-
-* The logic app workflow where you want to access your file share. To start your workflow with a File System trigger, you have to start with a blank workflow. To add a File System action, start your workflow with any trigger.
-
-<a name="add-file-system-trigger"></a>
-
-## Add a File System trigger
-
-### [Consumption](#tab/consumption)
-
-1. In the [Azure portal](https://portal.azure.com), open your blank logic app workflow in the designer.
-
-1. On the designer, under the search box, select **Standard**. In the search box, enter **file system**.
-
-1. From the triggers list, select the [File System trigger](/connectors/filesystem/#triggers) that you want. This example continues with the trigger named **When a file is created**.
-
- ![Screenshot showing Azure portal, designer for Consumption logic app workflow, search box with "file system", and File System trigger selected.](media/logic-apps-using-file-connector/select-file-system-trigger-consumption.png)
-
-1. In the connection information box, provide the following information as required:
-
- | Property | Required | Value | Description |
- |-|-|-|-|
- | **Connection name** | Yes | <*connection-name*> | The name to use for your connection |
- | **Root folder** | Yes | <*root-folder-name*> | The root folder for your file system, which is usually the main parent folder and is the folder used for the relative paths with all triggers that work on files. <br><br>For example, if you installed the on-premises data gateway, use the local folder on the computer with the data gateway installation. Or, use the folder for the network share where the computer can access that folder, for example, **`\\PublicShare\\MyFileSystem`**. |
- | **Authentication Type** | No | <*auth-type*> | The type of authentication that your file system server uses, which is **Windows** |
-   | **Username** | Yes | <*domain-and-username*> | The domain and username for the computer where you have your file system. <br><br>For the managed File System connector, use one of the following values with the backslash (**`\`**): <br><br>- **<*domain*>\\<*username*>** <br>- **<*local-computer*>\\<*username*>** <br><br>For example, if your file system folder is on the same computer as the on-premises data gateway installation, you can use **<*local-computer*>\\<*username*>**. <br><br>For the ISE-based File System connector, use the forward slash instead (**`/`**): <br><br>- **<*domain*>/<*username*>** <br>- **<*local-computer*>/<*username*>** |
- | **Password** | Yes | <*password*> | The password for the computer where you have your file system |
- | **gateway** | No | - <*Azure-subscription*> <br>- <*gateway-resource-name*> | This section applies only to the managed File System connector: <br><br>- **Subscription**: The Azure subscription associated with the data gateway resource <br>- **Connection Gateway**: The data gateway resource |
-
- The following example shows the connection information for the File System managed connector trigger:
-
- ![Screenshot showing Consumption workflow designer and connection information for File System managed connector trigger.](media/logic-apps-using-file-connector/file-system-connection-consumption.png)
-
- The following example shows the connection information for the File System ISE-based trigger:
-
- ![Screenshot showing Consumption workflow designer and connection information for File System ISE-based connector trigger.](media/logic-apps-using-file-connector/file-system-connection-ise.png)
-
-1. When you're done, select **Create**.
-
- Azure Logic Apps creates and tests your connection, making sure that the connection works properly. If the connection is set up correctly, the setup options appear for your selected trigger.
-
-1. Continue building your workflow.
-
- 1. Provide the required information for your trigger.
-
- For this example, select the folder path on your file system server to check for a newly created file. Specify the number of files to return and how often you want to check.
-
- ![Screenshot showing Consumption workflow designer and the "When a file is created" trigger.](media/logic-apps-using-file-connector/trigger-file-system-when-file-created-consumption.png)
-
-   1. To test your workflow, add an Outlook action that sends you an email when a file is created on the file system in the specified folder. Enter the email recipients, subject, and body. For testing, you can use your own email address.
-
- ![Screenshot showing Consumption workflow designer, managed connector "When a file is created" trigger, and "Send an email" action.](media/logic-apps-using-file-connector/trigger-file-system-send-email-consumption.png)
-
- > [!TIP]
- >
- > To add outputs from previous steps in the workflow, click inside the trigger's edit boxes.
- > When the dynamic content list appears, select from the available outputs.
-
-1. Save your logic app. Test your workflow by uploading a file and triggering the workflow.
-
- If successful, your workflow sends an email about the new file.
-
-### [Standard](#tab/standard)
-
-#### Built-in connector trigger
-
-These steps apply only to Standard logic apps in an App Service Environment v3 with Windows plans only.
-
-1. In the [Azure portal](https://portal.azure.com), open your blank logic app workflow in the designer.
-
-1. On the designer, under the search box, select **Built-in**. In the search box, enter **file system**.
-
-1. From the triggers list, select the [File System trigger](/azure/logic-apps/connectors/built-in/reference/filesystem/#triggers) that you want. This example continues with the trigger named **When a file is added**.
-
- ![Screenshot showing Azure portal, designer for Standard logic app workflow, search box with "file system", and "When a file is added" selected.](media/logic-apps-using-file-connector/select-file-system-trigger-built-in-standard.png)
-
-1. In the connection information box, provide the following information as required:
-
- | Property | Required | Value | Description |
- |-|-|-|-|
- | **Connection name** | Yes | <*connection-name*> | The name to use for your connection |
- | **Root folder** | Yes | <*root-folder-name*> | The root folder for your file system, which is usually the main parent folder and is the folder used for the relative paths with all triggers that work on files. <br><br>For example, if you installed the on-premises data gateway, use the local folder on the computer with the data gateway installation. Or, use the folder for the network share where the computer can access that folder, for example, **`\\PublicShare\\MyFileSystem`**. |
-   | **Username** | Yes | <*domain-and-username*> | The domain and username for the computer where you have your file system. <br><br>Use one of the following values with the backslash (**`\`**): <br><br>- **<*domain*>\\<*username*>** <br>- **<*local-computer*>\\<*username*>** |
- | **Password** | Yes | <*password*> | The password for the computer where you have your file system |
-
- The following example shows the connection information for the File System built-in connector trigger:
-
- ![Screenshot showing Standard workflow designer and connection information for File System built-in connector trigger.](media/logic-apps-using-file-connector/trigger-file-system-connection-built-in-standard.png)
-
-1. When you're done, select **Create**.
-
- Azure Logic Apps creates and tests your connection, making sure that the connection works properly. If the connection is set up correctly, the setup options appear for your selected trigger.
-
-1. Continue building your workflow.
-
- 1. Provide the required information for your trigger.
-
- For this example, select the folder path on your file system server to check for a newly added file. Specify how often you want to check.
-
- ![Screenshot showing Standard workflow designer and "When a file is added" trigger information.](media/logic-apps-using-file-connector/trigger-when-file-added-built-in-standard.png)
-
-   1. To test your workflow, add an Outlook action that sends you an email when a file is added to the file system in the specified folder. Enter the email recipients, subject, and body. For testing, you can use your own email address.
-
- ![Screenshot showing Standard workflow designer, managed connector "When a file is added" trigger, and "Send an email" action.](media/logic-apps-using-file-connector/trigger-send-email-built-in-standard.png)
-
- > [!TIP]
- >
- > To add outputs from previous steps in the workflow, click inside the trigger's edit boxes.
- > When the dynamic content list appears, select from the available outputs.
-
-1. Save your logic app. Test your workflow by uploading a file and triggering the workflow.
-
- If successful, your workflow sends an email about the new file.
-
-#### Managed connector trigger
-
-1. In the [Azure portal](https://portal.azure.com), open your blank logic app workflow in the designer.
-
-1. On the designer, under the search box, select **Azure**. In the search box, enter **file system**.
-
-1. From the triggers list, select the [File System trigger](/connectors/filesystem/#triggers) that you want. This example continues with the trigger named **When a file is created**.
-
- ![Screenshot showing Azure portal, designer for Standard logic app workflow, search box with "file system", and the "When a file is created" trigger selected.](media/logic-apps-using-file-connector/select-file-system-trigger-managed-standard.png)
-
-1. In the connection information box, provide the following information as required:
-
- | Property | Required | Value | Description |
- |-|-|-|-|
- | **Connection name** | Yes | <*connection-name*> | The name to use for your connection |
- | **Root folder** | Yes | <*root-folder-name*> | The root folder for your file system, which is usually the main parent folder and is the folder used for the relative paths with all triggers that work on files. <br><br>For example, if you installed the on-premises data gateway, use the local folder on the computer with the data gateway installation. Or, use the folder for the network share where the computer can access that folder, for example, **`\\PublicShare\\MyFileSystem`**. |
- | **Authentication Type** | No | <*auth-type*> | The type of authentication that your file system server uses, which is **Windows** |
-   | **Username** | Yes | <*domain-and-username*> | The domain and username for the computer where you have your file system. <br><br>For the managed File System connector, use one of the following values with the backslash (**`\`**): <br><br>- **<*domain*>\\<*username*>** <br>- **<*local-computer*>\\<*username*>** <br><br>For example, if your file system folder is on the same computer as the on-premises data gateway installation, you can use **<*local-computer*>\\<*username*>**. <br><br>For the ISE-based File System connector, use the forward slash instead (**`/`**): <br><br>- **<*domain*>/<*username*>** <br>- **<*local-computer*>/<*username*>** |
- | **Password** | Yes | <*password*> | The password for the computer where you have your file system |
- | **gateway** | No | - <*Azure-subscription*> <br>- <*gateway-resource-name*> | This section applies only to the managed File System connector: <br><br>- **Subscription**: The Azure subscription associated with the data gateway resource <br>- **Connection Gateway**: The data gateway resource |
-
- The following example shows the connection information for the File System managed connector trigger:
-
- ![Screenshot showing Standard workflow designer and connection information for File System managed connector trigger.](media/logic-apps-using-file-connector/trigger-file-system-connection-managed-standard.png)
-
-1. When you're done, select **Create**.
-
- Azure Logic Apps creates and tests your connection, making sure that the connection works properly. If the connection is set up correctly, the setup options appear for your selected trigger.
-
-1. Continue building your workflow.
-
- 1. Provide the required information for your trigger.
-
- For this example, select the folder path on your file system server to check for a newly created file. Specify the number of files to return and how often you want to check.
-
- ![Screenshot showing Standard workflow designer and "When a file is created" trigger information.](media/logic-apps-using-file-connector/trigger-when-file-created-managed-standard.png)
-
- 1. To test your workflow, add an Outlook action that sends you an email when a file is created on the file system in the specified folder. Enter the email recipients, subject, and body. For testing, you can use your own email address.
-
- ![Screenshot showing Standard workflow designer, managed connector "When a file is created" trigger, and "Send an email" action.](media/logic-apps-using-file-connector/trigger-send-email-managed-standard.png)
-
- > [!TIP]
- >
- > To add outputs from previous steps in the workflow, click inside the trigger's edit boxes.
- > When the dynamic content list appears, select from the available outputs.
-
-1. Save your logic app. Test your workflow by uploading a file and triggering the workflow.
-
- If successful, your workflow sends an email about the new file.
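If you prefer to script this test rather than copy a file by hand, a minimal Python sketch like the following drops a file into the watched folder to fire the trigger. The share path and file name are placeholder assumptions; substitute the root folder that you configured for the connection.

```python
# Hypothetical test helper: create a file in the watched share so the
# "When a file is created" trigger fires. The UNC path and file name are
# placeholders, not values from this article.
from pathlib import Path

watched_folder = Path(r"\\PublicShare\MyFileSystem")
test_file = watched_folder / "trigger-test.txt"
test_file.write_text("Sample content for testing the File System trigger.")
print(f"Created {test_file}")
```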
---
-<a name="add-file-system-action"></a>
-
-## Add a File System action
-
-The example logic app workflow starts with the [Dropbox trigger](/connectors/dropbox/#triggers), but you can use any trigger that you want.
-
-### [Consumption](#tab/consumption)
-
-1. In the [Azure portal](https://portal.azure.com), open your logic app workflow in the designer.
-
-1. Find and select the [File System action](/connectors/filesystem/#actions) that you want to use. This example continues with the action named **Create file**.
-
- 1. Under the trigger or action where you want to add the File System action, select **New step**.
-
- Or, to add an action between existing steps, move your pointer over the connecting arrow. Select the plus sign (**+**) that appears, and then select **Add an action**.
-
-1. Under the **Choose an operation** search box, select **Standard**. In the search box, enter **file system**.
-
-1. From the actions list, select the File System action named **Create file**.
-
- ![Screenshot showing Azure portal, designer for Consumption logic app workflow, search box with "file system", and "Create file" action selected.](media/logic-apps-using-file-connector/select-file-system-action-consumption.png)
-
-1. In the connection information box, provide the following information as required:
-
- | Property | Required | Value | Description |
- |-|-|-|-|
- | **Connection name** | Yes | <*connection-name*> | The name to use for your connection |
- | **Root folder** | Yes | <*root-folder-name*> | The root folder for your file system, which is usually the main parent folder and is the folder used for the relative paths with all triggers that work on files. <br><br>For example, if you installed the on-premises data gateway, use the local folder on the computer with the data gateway installation. Or, use the folder for the network share where the computer can access that folder, for example, **`\\PublicShare\MyFileSystem`**. |
- | **Authentication Type** | No | <*auth-type*> | The type of authentication that your file system server uses, which is **Windows** |
- | **Username** | Yes | <*domain-and-username*> | The domain and username for the computer where you have your file system. <br><br>For the managed File System connector, use one of the following values with the backslash (**`\`**): <br><br>- **<*domain*>\\<*username*>** <br>- **<*local-computer*>\\<*username*>** <br><br>For example, if your file system folder is on the same computer as the on-premises data gateway installation, you can use **<*local-computer*>\\<*username*>**. <br><br>For the ISE-based File System connector, use the forward slash instead (**`/`**): <br><br>- **<*domain*>/<*username*>** <br>- **<*local-computer*>/<*username*>** |
- | **Password** | Yes | <*password*> | The password for the computer where you have your file system |
- | **gateway** | No | - <*Azure-subscription*> <br>- <*gateway-resource-name*> | This section applies only to the managed File System connector: <br><br>- **Subscription**: The Azure subscription associated with the data gateway resource <br>- **Connection Gateway**: The data gateway resource |
-
- The following example shows the connection information for the File System managed connector action:
-
- ![Screenshot showing connection information for File System managed connector action.](media/logic-apps-using-file-connector/file-system-connection-consumption.png)
-
- The following example shows the connection information for the File System ISE-based connector action:
-
- ![Screenshot showing connection information for File System ISE-based connector action.](media/logic-apps-using-file-connector/file-system-connection-ise.png)
-
-1. When you're done, select **Create**.
-
- Azure Logic Apps creates and tests your connection, making sure that the connection works properly. If the connection is set up correctly, the setup options appear for your selected action.
-
-1. Continue building your workflow.
-
- 1. Provide the required information for your action.
-
- For this example, select the folder path on your file system server to use, which is the root folder here. Enter the file name and content, based on the file uploaded to Dropbox.
-
- ![Screenshot showing Consumption workflow designer and the File System managed connector "Create file" action.](media/logic-apps-using-file-connector/action-file-system-create-file-consumption.png)
-
- > [!TIP]
- >
- > To add outputs from previous steps in the workflow, click inside the action's edit boxes.
- > When the dynamic content list appears, select from the available outputs.
-
- 1. To test your workflow, add an Outlook action that sends you an email when the File System action creates a file. Enter the email recipients, subject, and body. For testing, you can use your own email address.
-
- ![Screenshot showing Consumption workflow designer, managed connector "Create file" action, and "Send an email" action.](media/logic-apps-using-file-connector/action-file-system-send-email-consumption.png)
-
-1. Save your logic app. Test your workflow by uploading a file to Dropbox.
-
- If successful, your workflow creates a file on your file system server, based on the uploaded file in Dropbox, and sends an email about the created file.
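To spot-check the result without waiting for the email, a minimal Python sketch like the following can confirm that the action wrote the file. The share path and file name are placeholder assumptions that depend on your root folder and the name you configured in the action.

```python
# Hypothetical verification: check that the "Create file" action wrote the
# expected file to the share. The UNC path and file name are placeholders.
from pathlib import Path

root_folder = Path(r"\\PublicShare\MyFileSystem")
created_file = root_folder / "uploaded-from-dropbox.txt"

if created_file.exists():
    print(created_file.read_text())
else:
    print("File not created yet - check the workflow run history.")
```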
-
-### [Standard](#tab/standard)
-
-#### Built-in connector action
-
-These steps apply only to Standard logic apps in an App Service Environment v3 with Windows plans.
-
-1. In the [Azure portal](https://portal.azure.com), open your logic app workflow in the designer.
-
-1. Find and select the [File System action](/azure/logic-apps/connectors/built-in/reference/filesystem/#actions) that you want to use. This example continues with the action named **Create file**.
-
- 1. Under the trigger or action where you want to add the action, select the plus sign (**+**), and then select **Add an action**.
-
- Or, to add an action between existing steps, select the plus sign (**+**) on the connecting arrow, and then select **Add an action**.
-
- 1. Under the **Choose an operation** search box, select **Built-in**. In the search box, enter **file system**.
-
- 1. From the actions list, select the File System action named **Create file**.
-
- ![Screenshot showing Azure portal, designer for Standard logic app workflow, search box with "file system", and built-in connector "Create file" action selected.](media/logic-apps-using-file-connector/select-file-system-action-built-in-standard.png)
-
-1. In the connection information box, provide the following information as required:
-
- | Property | Required | Value | Description |
- |-|-|-|-|
- | **Connection name** | Yes | <*connection-name*> | The name to use for your connection |
- | **Root folder** | Yes | <*root-folder-name*> | The root folder for your file system, which is usually the main parent folder and is the folder used for the relative paths with all triggers that work on files. <br><br>For example, if you installed the on-premises data gateway, use the local folder on the computer with the data gateway installation. Or, use the folder for the network share where the computer can access that folder, for example, **`\\PublicShare\MyFileSystem`**. |
- | **Username** | Yes | <*domain-and-username*> | The domain and username for the computer where you have your file system. <br><br>For the built-in File System connector, use one of the following values with the backslash (**`\`**): <br><br>- **<*domain*>\\<*username*>** <br>- **<*local-computer*>\\<*username*>** |
- | **Password** | Yes | <*password*> | The password for the computer where you have your file system |
-
- The following example shows the connection information for the File System built-in connector action:
-
- ![Screenshot showing Standard workflow designer and connection information for File System built-in connector action.](media/logic-apps-using-file-connector/action-file-system-connection-built-in-standard.png)
-
- Azure Logic Apps creates and tests your connection, making sure that the connection works properly. If the connection is set up correctly, the setup options appear for your selected action.
-
-1. Continue building your workflow.
-
- 1. Provide the required information for your action. For this example, follow these steps:
-
- 1. Enter the path and name for the file that you want to create, including the file name extension. Make sure the path is relative to the root folder.
-
- 1. To specify the content from the file created on Dropbox, from the **Add a parameter** list, select **File content**.
-
- 1. Click inside the **File Content** parameter box. From the dynamic content list that appears, in the **When a file is created** section, select **File Content**.
-
- ![Screenshot showing Standard workflow designer and the File System built-in connector "Create file" action.](media/logic-apps-using-file-connector/action-file-system-create-file-built-in-standard.png)
-
- 1. To test your workflow, add an Outlook action that sends you an email when the File System action creates a file. Enter the email recipients, subject, and body. For testing, you can use your own email address.
-
- ![Screenshot showing Standard workflow designer, built-in connector "Create file" action, and "Send an email" action.](media/logic-apps-using-file-connector/action-file-system-send-email-built-in-standard.png)
-
-1. Save your logic app. Test your workflow by uploading a file to Dropbox.
-
- If successful, your workflow creates a file on your file system server, based on the uploaded file in Dropbox, and sends an email about the created file.
-
-#### Managed connector action
-
-1. In the [Azure portal](https://portal.azure.com), open your logic app workflow in the designer.
-
-1. Find and select the [File System action](/connectors/filesystem/#actions) that you want to use. This example continues with the action named **Create file**.
-
- 1. Under the trigger or action where you want to add the action, select the plus sign (**+**), and then select **Add an action**.
-
- Or, to add an action between existing steps, select the plus sign (**+**) on the connecting arrow, and then select **Add an action**.
-
- 1. Under the **Choose an operation** search box, select **Azure**. In the search box, enter **file system**.
-
- 1. From the actions list, select the File System action named **Create file**.
-
- ![Screenshot showing Azure portal, designer for Standard logic app workflow, search box with "file system", and managed connector "Create file" action selected.](media/logic-apps-using-file-connector/select-file-system-action-managed-standard.png)
-
-1. In the connection information box, provide the following information as required:
-
- | Property | Required | Value | Description |
- |-|-|-|-|
- | **Connection name** | Yes | <*connection-name*> | The name to use for your connection |
- | **Root folder** | Yes | <*root-folder-name*> | The root folder for your file system, which is usually the main parent folder and is the folder used for the relative paths with all triggers that work on files. <br><br>For example, if you installed the on-premises data gateway, use the local folder on the computer with the data gateway installation. Or, use the folder for the network share where the computer can access that folder, for example, **`\\PublicShare\MyFileSystem`**. |
- | **Authentication Type** | No | <*auth-type*> | The type of authentication that your file system server uses, which is **Windows** |
- | **Username** | Yes | <*domain-and-username*> | The domain and username for the computer where you have your file system. <br><br>For the managed File System connector, use one of the following values with the backslash (**`\`**): <br><br>- **<*domain*>\\<*username*>** <br>- **<*local-computer*>\\<*username*>** <br><br>For example, if your file system folder is on the same computer as the on-premises data gateway installation, you can use **<*local-computer*>\\<*username*>**. <br><br>For the ISE-based File System connector, use the forward slash instead (**`/`**): <br><br>- **<*domain*>/<*username*>** <br>- **<*local-computer*>/<*username*>** |
- | **Password** | Yes | <*password*> | The password for the computer where you have your file system |
- | **gateway** | No | - <*Azure-subscription*> <br>- <*gateway-resource-name*> | This section applies only to the managed File System connector: <br><br>- **Subscription**: The Azure subscription associated with the data gateway resource <br>- **Connection Gateway**: The data gateway resource |
-
- The following example shows the connection information for the File System managed connector action:
-
- ![Screenshot showing connection information for File System managed connector action.](media/logic-apps-using-file-connector/action-file-system-connection-managed-standard.png)
-
- Azure Logic Apps creates and tests your connection, making sure that the connection works properly. If the connection is set up correctly, the setup options appear for your selected action.
-
-1. Continue building your workflow.
-
- 1. Provide the required information for your action. For this example, follow these steps:
-
- 1. Enter the path and name for the file that you want to create, including the file name extension. Make sure the path is relative to the root folder.
-
- 1. To specify the content from the file created on Dropbox, from the **Add a parameter** list, select **File content**.
-
- 1. Click inside the **File Content** parameter box. From the dynamic content list that appears, in the **When a file is created** section, select **File Content**.
-
- ![Screenshot showing Standard workflow designer and the File System managed connector "Create file" action.](media/logic-apps-using-file-connector/action-file-system-create-file-managed-standard.png)
-
- 1. To test your workflow, add an Outlook action that sends you an email when the File System action creates a file. Enter the email recipients, subject, and body. For testing, you can use your own email address.
-
- ![Screenshot showing Standard workflow designer, managed connector "Create file" action, and "Send an email" action.](media/logic-apps-using-file-connector/action-file-system-send-email-managed-standard.png)
-
-1. Save your logic app. Test your workflow by uploading a file to Dropbox.
-
- If successful, your workflow creates a file on your file system server, based on the uploaded file in Dropbox, and sends an email about the created file.
---
-## Next steps
-
-* [Managed connectors for Azure Logic Apps](/connectors/connector-reference/connector-reference-logicapps-connectors)
-* [Built-in connectors for Azure Logic Apps](../connectors/built-in.md)
logic-apps Logic Apps Using Sap Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-using-sap-connector.md
ms.suite: integration Previously updated : 08/07/2023 Last updated : 08/18/2023 tags: connectors
tags: connectors
This multipart how-to guide shows how to access your SAP server from a workflow in Azure Logic Apps using the SAP connector. You can use the SAP connector's operations to create automated workflows that run when triggered by events in your SAP server or in other systems and run actions to manage resources on your SAP server.
-Both Standard and Consumption logic app workflows offer the SAP *managed* connector that's hosted and run in multi-tenant Azure. Standard workflows also offer the SAP *built-in* connector that's hosted and run in single-tenant Azure Logic Apps, but this connector is currently in preview and subject to the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). If you create and host a Consumption workflow in an integration service environment (ISE), you can also use the SAP connector's ISE-native version. For more information, see [Connector technical reference](#connector-technical-reference).
+Both Standard and Consumption logic app workflows offer the SAP *managed* connector that's hosted and run in multi-tenant Azure. Standard workflows also offer the SAP *built-in* connector that's hosted and run in single-tenant Azure Logic Apps. If you create and host a Consumption workflow in an integration service environment (ISE), you can also use the SAP connector's ISE-native version. For more information, see [Connector technical reference](#connector-technical-reference).
## SAP compatibility
The SAP connector has different versions, based on [logic app type and host envi
|--|-|-| | **Consumption** | Multi-tenant Azure Logic Apps | Managed connector, which appears in the designer under the **Enterprise** label. For more information, review the following documentation: <br><br>- [SAP managed connector reference](/connectors/sap/) <br>- [Managed connectors in Azure Logic Apps](../connectors/managed.md) | | **Consumption** | Integration service environment (ISE) | Managed connector, which appears in the designer under the **Enterprise** label, and the ISE-native version, which appears in the designer with the **ISE** label and has different message limits than the managed connector. <br><br>**Note**: Make sure to use the ISE-native version, not the managed version. <br><br>For more information, review the following documentation: <br><br>- [SAP managed connector reference](/connectors/sap/) <br>- [ISE message limits](../logic-apps/logic-apps-limits-and-config.md#message-size-limits) <br>- [Managed connectors in Azure Logic Apps](../connectors/managed.md) |
-| **Standard** | Single-tenant Azure Logic Apps and App Service Environment v3 (Windows plans only) | Managed connector, which appears in the connector gallery under **Runtime** > **Shared**, and the built-in connector (preview), which appears in the connector gallery under **Runtime** > **In-App** and is [service provider-based](../logic-apps/custom-connector-overview.md#service-provider-interface-implementation). The built-in connector can directly access Azure virtual networks with a connection string without an on-premises data gateway. For more information, review the following documentation: <br><br>- [SAP managed connector reference](/connectors/sap/) <br>- [SAP built-in connector reference](/azure/logic-apps/connectors/built-in/reference/sap/) <br><br>- [Managed connectors in Azure Logic Apps](../connectors/managed.md) <br>- [Built-in connectors in Azure Logic Apps](../connectors/built-in.md) |
+| **Standard** | Single-tenant Azure Logic Apps and App Service Environment v3 (Windows plans only) | Managed connector, which appears in the connector gallery under **Runtime** > **Shared**, and the built-in connector, which appears in the connector gallery under **Runtime** > **In-App** and is [service provider-based](../logic-apps/custom-connector-overview.md#service-provider-interface-implementation). The built-in connector can directly access Azure virtual networks with a connection string without an on-premises data gateway. For more information, review the following documentation: <br><br>- [SAP managed connector reference](/connectors/sap/) <br>- [SAP built-in connector reference](/azure/logic-apps/connectors/built-in/reference/sap/) <br><br>- [Managed connectors in Azure Logic Apps](../connectors/managed.md) <br>- [Built-in connectors in Azure Logic Apps](../connectors/built-in.md) |
## Connector differences
The SAP built-in connector significantly differs from the SAP managed connector
The SAP built-in connector doesn't use the shared or global connector infrastructure, so its timeout is longer (five minutes) than for the SAP managed connector (two minutes) and the SAP ISE-versioned connector (four minutes). Long-running requests work without you having to implement the [long-running webhook-based request action pattern](logic-apps-scenario-function-sb-trigger.md).
-* By default, the preview SAP built-in connector operations are *stateless*. However, you can [enable stateful mode (affinity) for these operations](../connectors/enable-stateful-affinity-built-in-connectors.md).
+* By default, the SAP built-in connector operations are *stateless*. However, you can [enable stateful mode (affinity) for these operations](../connectors/enable-stateful-affinity-built-in-connectors.md).
In stateful mode, the SAP built-in connector supports high availability and horizontal scale-out configurations. By comparison, the SAP managed connector restricts the on-premises data gateway to a single instance for triggers and to clusters in failover mode only for actions. For more information, see [SAP managed connector - Known issues and limitations](#known-issues-limitations).
Along with simple string and number inputs, the SAP connector accepts the follow
1. In the action named **\[BAPI] Call method in SAP**, disable the auto-commit feature. 1. Call the action named **\[BAPI] Commit transaction** instead.
-### SAP built-in connector
-
-The preview SAP built-in connector trigger named **Register SAP RFC server for trigger** is available in the Azure portal, but the trigger currently can't receive calls from SAP when deployed in Azure. To fire the trigger, you can run the workflow locally in Visual Studio Code. For Visual Studio Code setup requirements and more information, see [Create a Standard logic app workflow in single-tenant Azure Logic Apps using Visual Studio Code](create-single-tenant-workflows-visual-studio-code.md). You must also set up the following environment variables on the computer where you install Visual Studio Code:
-
- ## Prerequisites * An Azure account and subscription. If you don't have an Azure subscription yet, [sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
The preview SAP built-in connector trigger named **Register SAP RFC server for t
> When you use a Premium-level ISE, use the ISE-native SAP connector, not the SAP managed connector, > which doesn't natively run in an ISE. For more information, review the [ISE prerequisites](#ise-prerequisites).
-* By default, the preview SAP built-in connector operations are *stateless*. To run these operations in stateful mode, see [Enable stateful mode for stateless built-in connectors](../connectors/enable-stateful-affinity-built-in-connectors.md).
+* By default, the SAP built-in connector operations are *stateless*. To run these operations in stateful mode, see [Enable stateful mode for stateless built-in connectors](../connectors/enable-stateful-affinity-built-in-connectors.md).
* To use either the SAP managed connector trigger named **When a message is received from SAP** or the SAP built-in trigger named **Register SAP RFC server for trigger**, complete the following tasks:
The preview SAP built-in connector trigger named **Register SAP RFC server for t
> In Standard workflows, the SAP built-in trigger named **Register SAP RFC server for trigger** uses the Azure > Functions trigger instead, and shows only the actual callbacks from SAP.
+ * For the SAP built-in connector trigger named **Register SAP RFC server for trigger**, you have to enable virtual network integration and private ports by following the steps in [Enabling Service Bus and SAP built-in connectors for stateful Logic Apps in Standard](https://techcommunity.microsoft.com/t5/integrations-on-azure-blog/enabling-service-bus-and-sap-built-in-connectors-for-stateful/ba-p/3820381). You can also run the workflow in Visual Studio Code to fire the trigger locally. For Visual Studio Code setup requirements and more information, see [Create a Standard logic app workflow in single-tenant Azure Logic Apps using Visual Studio Code](create-single-tenant-workflows-visual-studio-code.md). You must also set up the following environment variables on the computer where you install Visual Studio Code:
+
+ - **WEBSITE_PRIVATE_IP**: Set this environment variable value to **127.0.0.1** as the localhost address.
+ - **WEBSITE_PRIVATE_PORTS**: Set this environment variable value to two free and usable ports on your local computer, separating the values with a comma (**,**), for example, **8080,8088**.
+ * The message content to send to your SAP server, such as a sample IDoc file. This content must be in XML format and include the namespace of the [SAP action](/connectors/sap/#actions) that you want to use. You can [send IDocs with a flat file schema by wrapping them in an XML envelope](sap-create-example-scenario-workflows.md#send-flat-file-idocs). <a name="network-prerequisites"></a>
For a Consumption workflow in multi-tenant Azure Logic Apps, the SAP managed con
<a name="single-tenant-prerequisites"></a>
-For a Standard workflow in single-tenant Azure Logic Apps, use the preview SAP *built-in* connector to directly access resources that are protected by an Azure virtual network. You can also use other built-in connectors that let workflows directly access on-premises resources without having to use the on-premises data gateway.
+For a Standard workflow in single-tenant Azure Logic Apps, use the SAP *built-in* connector to directly access resources that are protected by an Azure virtual network. You can also use other built-in connectors that let workflows directly access on-premises resources without having to use the on-premises data gateway. For additional requirements regarding the SAP built-in connector trigger named **Register SAP RFC server for trigger**, see [Prerequisites](#prerequisites).
1. To use the SAP connector, you need to download the following files and have them ready to upload to your Standard logic app resource. For more information, see [SAP NCo client library prerequisites](#sap-client-library-prerequisites):
For a Standard workflow in single-tenant Azure Logic Apps, use the preview SAP *
1. In the **net472** folder, upload the assembly files larger than 4 MB.
-#### SAP trigger requirements
-
-The preview SAP built-in connector trigger named **Register SAP RFC server for trigger** is available in the Azure portal, but the trigger currently can't receive calls from SAP when deployed in Azure. To fire the trigger, you can run the workflow locally in Visual Studio Code. For Visual Studio Code setup requirements and more information, see [Create a Standard logic app workflow in single-tenant Azure Logic Apps using Visual Studio Code](create-single-tenant-workflows-visual-studio-code.md). You must also set up the following environment variables on the computer where you install Visual Studio Code:
-
- ### [ISE](#tab/ise) <a name="ise-prerequisites"></a>
logic-apps Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/policy-reference.md
Title: Built-in policy definitions for Azure Logic Apps description: Lists Azure Policy built-in policy definitions for Azure Logic Apps. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2023 Last updated : 08/30/2023 ms.suite: integration
logic-apps Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Logic Apps description: Lists Azure Policy Regulatory Compliance controls available for Azure Logic Apps. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/03/2023 Last updated : 08/25/2023
logic-apps Single Tenant Overview Compare https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/single-tenant-overview-compare.md
ms.suite: integration Previously updated : 07/14/2023 Last updated : 08/30/2023
For the **Standard** logic app workflow, these capabilities have changed, or the
* **Trigger history and run history**: For a **Standard** logic app, trigger history and run history in the Azure portal appears at the workflow level, not the logic app resource level. For more information, review [Create single-tenant based workflows using the Azure portal](create-single-tenant-workflows-azure-portal.md).
+* **Backup and restore for workflow run history**: **Standard** logic apps currently don't support backup and restore for workflow run history.
+ * **Zoom control**: The zoom control is currently unavailable on the designer. * **Deployment targets**: You can't deploy a **Standard** logic app resource to an [integration service environment (ISE)](connect-virtual-network-vnet-isolated-environment-overview.md) nor to Azure deployment slots.
machine-learning Concept Automated Ml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-automated-ml.md
See the [AutoML package](/python/api/azure-ai-ml/azure.ai.ml.automl) for changin
With Azure Machine Learning, you can use automated ML to build a Python model and have it converted to the ONNX format. Once the models are in the ONNX format, they can be run on a variety of platforms and devices. Learn more about [accelerating ML models with ONNX](concept-onnx.md).
-See how to convert to ONNX format [in this Jupyter notebook example](https://github.com/Azure/azureml-examples/tree/main/v1/python-sdk/tutorials/automl-with-azureml/classification-bank-marketing-all-features). Learn which [algorithms are supported in ONNX](how-to-configure-auto-train.md#supported-algorithms).
+See how to convert to ONNX format [in this Jupyter notebook example](https://github.com/Azure/azureml-examples/tree/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/classification-bank-marketing-all-features). Learn which [algorithms are supported in ONNX](how-to-configure-auto-train.md#supported-algorithms).
The ONNX runtime also supports C#, so you can use the model built automatically in your C# apps without any need for recoding or any of the network latencies that REST endpoints introduce. Learn more about [using an AutoML ONNX model in a .NET application with ML.NET](./how-to-use-automl-onnx-model-dotnet.md) and [inferencing ONNX models with the ONNX runtime C# API](https://onnxruntime.ai/docs/api/csharp-api.html).
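As a minimal sketch of that portability, the following Python example scores with an exported ONNX model by using the `onnxruntime` package. The model file name, input name, and feature values are placeholder assumptions; the real input names, shapes, and dtypes depend on your trained model.

```python
# Minimal sketch: score with a model that AutoML converted to ONNX format.
# "model.onnx" and the sample feature row are hypothetical placeholders.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("model.onnx")
input_name = session.get_inputs()[0].name  # first declared model input

sample = np.array([[0.5, 1.2, 3.4]], dtype=np.float32)  # one feature row
outputs = session.run(None, {input_name: sample})       # None = all outputs
print(outputs[0])
```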
machine-learning Concept Automl Forecasting Calendar Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-automl-forecasting-calendar-features.md
Previously updated : 12/15/2022 Last updated : 08/15/2023 # Calendar features for time series forecasting in AutoML
This article focuses on the calendar-based features that AutoML creates to incre
As a part of feature engineering, AutoML transforms datetime type columns provided in the training data into new columns of calendar-based features. These features can help regression models learn seasonal patterns at several cadences. AutoML can always create calendar features from the time index of the time series since this is a required column in the training data. Calendar features are also made from other columns with datetime type, if any are present. See the [how AutoML uses your data](./concept-automl-forecasting-methods.md#how-automl-uses-your-data) guide for more information on data requirements.
-AutoML considers two categories of calendar features: standard features that are based entirely on date and time values and holiday features which are specific to a country or region of the world. We'll go over these features in the remainder of the article.
+AutoML considers two categories of calendar features: standard features, which are based entirely on date and time values, and holiday features, which are specific to a country or region of the world. We go over these features in the remainder of the article.
## Standard calendar features
The following table shows the full set of AutoML's standard calendar features alo
| | -- | -- | |`year`|Numeric feature representing the calendar year |2011| |`year_iso`|Represents ISO year as defined in ISO 8601. ISO years start on the first week of year that has a Thursday. For example, if January 1 is a Friday, the ISO year begins on January 4. ISO years may differ from calendar years.|2010|
-|`half`| Feature indicating whether the date is in the first or second half of the year. It is 1 if the date is prior to July 1 and 2 otherwise.
+|`half`| Feature indicating whether the date is in the first or second half of the year. It's 1 if the date is prior to July 1 and 2 otherwise. |1|
|`quarter`|Numeric feature representing the quarter of the given date. It takes values 1, 2, 3, or 4 representing first, second, third, fourth quarter of calendar year.|1| |`month`|Numeric feature representing the calendar month. It takes values 1 through 12.|1| |`month_lbl`|String feature representing the name of month.|'January'| |`day`|Numeric feature representing the day of the month. It takes values from 1 through 31.|1| |`hour`|Numeric feature representing the hour of the day. It takes values 0 through 23.|0| |`minute`|Numeric feature representing the minute within the hour. It takes values 0 through 59.|25|
-|`second`|Numeric feature representing the second of the given datetime. In the case where only date format is provided, then it is assumed as 0. It takes values 0 through 59.|30|
-|`am_pm`|Numeric feature indicating whether the time is in the morning or evening. It is 0 for times before 12PM and 1 for times after 12PM. |0|
+|`second`|Numeric feature representing the second of the given datetime. In the case where only date format is provided, then it's assumed as 0. It takes values 0 through 59.|30|
+|`am_pm`|Numeric feature indicating whether the time is in the morning or evening. It's 0 for times before 12PM and 1 for times after 12PM. |0|
|`am_pm_lbl`|String feature indicating whether the time is in the morning or evening.|'am'| |`hour12`|Numeric feature representing the hour of the day on a 12 hour clock. It takes values 0 through 12 for first half of the day and 1 through 11 for second half.|0| |`wday`|Numeric feature representing the day of the week. It takes values 0 through 6, where 0 corresponds to Monday. |5|
Other datetime column | A reduced set consisting of `Year`, `Month`, `Day`,
## Holiday features
-AutoML can optionally create features representing holidays from a specific country or region. These features are configured in AutoML using the `country_or_region_for_holidays` parameter which accepts an [ISO country code](https://en.wikipedia.org/wiki/List_of_ISO_3166_country_codes).
+AutoML can optionally create features representing holidays from a specific country or region. These features are configured in AutoML using the `country_or_region_for_holidays` parameter, which accepts an [ISO country code](https://en.wikipedia.org/wiki/List_of_ISO_3166_country_codes).
> [!NOTE] > Holiday features can only be made for time series with daily frequency.
forecasting_job.set_forecast_settings(
country_or_region_for_holidays='US' ) ```
-The generated holiday features look like the following:
+The generated holiday features look like the following output:
<a name='output'><img src='./media/concept-automl-forecasting-calendar-features/sample_dataset_holiday_feature_generated.png' alt='sample_data_output' width=75%></img></a>
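For context, a minimal end-to-end sketch of configuring holiday features with the Azure Machine Learning Python SDK v2 might look like the following. The MLTable path, column names, and forecast horizon are placeholder assumptions for your own training setup.

```python
# Minimal sketch (azure-ai-ml v2 SDK): configure US holiday features on an
# AutoML forecasting job. Data path, columns, and horizon are placeholders.
from azure.ai.ml import automl, Input
from azure.ai.ml.constants import AssetTypes

training_data = Input(type=AssetTypes.MLTABLE, path="./training-mltable-folder")

forecasting_job = automl.forecasting(
    training_data=training_data,
    target_column_name="sales",
    primary_metric="normalized_root_mean_squared_error",
)
forecasting_job.set_forecast_settings(
    time_column_name="date",
    forecast_horizon=14,
    country_or_region_for_holidays="US",  # ISO country code
)
```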
machine-learning Concept Automl Forecasting Methods https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-automl-forecasting-methods.md
Each Series in Own Group (1:1) | All Series in Single Group (N:1)
-| -- Naive, Seasonal Naive, Average, Seasonal Average, Exponential Smoothing, ARIMA, ARIMAX, Prophet | Linear SGD, LARS LASSO, Elastic Net, K Nearest Neighbors, Decision Tree, Random Forest, Extremely Randomized Trees, Gradient Boosted Trees, LightGBM, XGBoost, TCNForecaster
-More general model groupings are possible via AutoML's Many-Models solution; see our [Many Models- Automated ML notebook](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/forecasting-many-models/auto-ml-forecasting-many-models.ipynb) and [Hierarchical time series- Automated ML notebook](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/forecasting-hierarchical-timeseries/auto-ml-forecasting-hierarchical-timeseries.ipynb).
+More general model groupings are possible via AutoML's Many-Models solution; see our [Many Models- Automated ML notebook](https://github.com/Azure/azureml-examples/blob/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/forecasting-many-models/auto-ml-forecasting-many-models.ipynb) and [Hierarchical time series- Automated ML notebook](https://github.com/Azure/azureml-examples/blob/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/forecasting-hierarchical-timeseries/auto-ml-forecasting-hierarchical-timeseries.ipynb).
## Next steps
machine-learning Concept Endpoints Online https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-endpoints-online.md
-+ Last updated 04/01/2023 #Customer intent: As an MLOps administrator, I want to understand what a managed endpoint is and why I need it.
Visual Studio Code enables you to interactively debug endpoints.
Optionally, you can secure communication with a managed online endpoint by using private endpoints.
-You can configure security for inbound scoring requests and outbound communications with the workspace and other services separately. Inbound communications use the private endpoint of the Azure Machine Learning workspace. Outbound communications use private endpoints created per deployment.
+You can configure security for inbound scoring requests and outbound communications with the workspace and other services separately. Inbound communications use the private endpoint of the Azure Machine Learning workspace. Outbound communications use private endpoints created for the workspace's managed virtual network (preview).
-For more information, see [Secure online endpoints](how-to-secure-online-endpoint.md).
+For more information, see [Network isolation with managed online endpoints](concept-secure-online-endpoint.md).
## Managed online endpoints vs Kubernetes online endpoints
The following table highlights the key differences between managed online endpoi
| **Cluster sizing (scaling)** | [Managed manual and autoscale](how-to-autoscale-endpoints.md), supporting additional nodes provisioning | [Manual and autoscale](how-to-kubernetes-inference-routing-azureml-fe.md#autoscaling), supporting scaling the number of replicas within fixed cluster boundaries | | **Compute type** | Managed by the service | Customer-managed Kubernetes cluster (Kubernetes) | | **Managed identity** | [Supported](how-to-access-resources-from-endpoints-managed-identities.md) | Supported |
-| **Virtual Network (VNET)** | [Supported via managed network isolation](how-to-secure-online-endpoint.md) | User responsibility |
+| **Virtual Network** | [Supported via managed network isolation](concept-secure-online-endpoint.md) | User responsibility |
| **Out-of-box monitoring & logging** | [Azure Monitor and Log Analytics powered](how-to-monitor-online-endpoints.md) (includes key metrics and log tables for endpoints and deployments) | User responsibility | | **Logging with Application Insights (legacy)** | Supported | Supported | | **View costs** | [Detailed to endpoint / deployment level](how-to-view-online-endpoints-costs.md) | Cluster level |
machine-learning Concept Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-endpoints.md
The following table shows a summary of the different features available to onlin
| Swagger support | Yes | No | | Authentication | Key and token | Azure AD | | Private network support | Yes | Yes |
-| Managed network isolation<sup>1</sup> | Yes | No |
+| Managed network isolation | Yes | No |
| Customer-managed keys | Yes | No | | Cost basis | None | None |
-<sup>1</sup> [*Managed network isolation*](how-to-secure-online-endpoint.md) allows you to manage the networking configuration of the endpoint independently of the configuration of the Azure Machine Learning workspace.
- #### Deployments The following table shows a summary of the different features available to online and batch endpoints at the deployment level. These concepts apply to each deployment under the endpoint.
machine-learning Concept Enterprise Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-enterprise-security.md
Last updated 08/26/2022
# Enterprise security and governance for Azure Machine Learning
-In this article, you'll learn about security and governance features available for Azure Machine Learning. These features are useful for administrators, DevOps, and MLOps who want to create a secure configuration that is compliant with your companies policies. With Azure Machine Learning and the Azure platform, you can:
+In this article, you learn about security and governance features available for Azure Machine Learning. These features are useful for administrators, DevOps, and MLOps who want to create a secure configuration that is compliant with your companies policies. With Azure Machine Learning and the Azure platform, you can:
* Restrict access to resources and operations by user account or groups * Restrict incoming and outgoing network communications
Each workspace has an associated system-assigned [managed identity](../active-di
| Azure Container Registry | Contributor | | Resource group that contains the workspace | Contributor |
-The system-assigned managed identity is used for internal service-to-service authentication between Azure Machine Learning and other Azure resources. The identity token is not accessible to users and cannot be used by them to gain access to these resources. Users can only access the resources through [Azure Machine Learning control and data plane APIs](how-to-assign-roles.md), if they have sufficient RBAC permissions.
+The system-assigned managed identity is used for internal service-to-service authentication between Azure Machine Learning and other Azure resources. The identity token isn't accessible to users and they can't use it to gain access to these resources. Users can only access the resources through [Azure Machine Learning control and data plane APIs](how-to-assign-roles.md), if they have sufficient RBAC permissions.
We don't recommend that admins revoke the access of the managed identity to the resources mentioned in the preceding table. You can restore access by using the [resync keys operation](how-to-change-storage-access-key.md).
For more information, see the following articles:
## Network security and isolation
-To restrict network access to Azure Machine Learning resources, you can use [Azure Virtual Network (VNet)](../virtual-network/virtual-networks-overview.md) and [Azure Machine Learning managed virtual network (preview)](how-to-managed-network.md). Using a virtual network reduces the attack surface for your solution, as well as the chances of data exfiltration.
+To restrict network access to Azure Machine Learning resources, you can use an [Azure Machine Learning managed virtual network](how-to-managed-network.md) (preview) or [Azure Virtual Network (VNet)](../virtual-network/virtual-networks-overview.md). Using a virtual network reduces the attack surface for your solution, and the chances of data exfiltration.
You don't have to pick one or the other. For example, you can use a managed virtual network to secure managed compute resources and an Azure Virtual Network for your unmanaged resources or to secure client access to the workspace.
-* __Azure Machine Learning managed virtual network__ (preview) provides a fully managed solution that enables network isolation for your workspace and managed compute resources. You can use private endpoints to secure communication with other Azure services, and can restrict outbound communications.
+* __Azure Machine Learning managed virtual network__ (preview) provides a fully managed solution that enables network isolation for your workspace and managed compute resources. You can use private endpoints to secure communication with other Azure services, and can restrict outbound communications. The following managed compute resources are secured with a managed network:
- [!INCLUDE [machine-learning-preview-generic-disclaimer](includes/machine-learning-preview-generic-disclaimer.md)]
+ * Serverless compute (including Spark serverless)
+ * Compute cluster
+ * Compute instance
+ * Managed online endpoints
+ * Batch endpoints
- For more information, see [Azure Machine Learning managed virtual network (preview)](how-to-managed-network.md).
+ For more information, see [Azure Machine Learning managed virtual network](how-to-managed-network.md) (preview).
* __Azure Virtual Networks__ provides a more customizable virtual network offering. However, you're responsible for configuration and management. You may need to use network security groups, user-defined routing, or a firewall to restrict outbound communication.
You don't have to pick one or the other. For example, you can use a managed virt
## Data encryption
-Azure Machine Learning uses a variety of compute resources and data stores on the Azure platform. To learn more about how each of these supports data encryption at rest and in transit, see [Data encryption with Azure Machine Learning](concept-data-encryption.md).
+Azure Machine Learning uses various compute resources and data stores on the Azure platform. To learn more about how each of these resources supports data encryption at rest and in transit, see [Data encryption with Azure Machine Learning](concept-data-encryption.md).
## Data exfiltration prevention
machine-learning Concept Secure Network Traffic Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-secure-network-traffic-flow.md
description: Learn how network traffic flows between components when your Azure
-+
If you use Visual Studio Code on a compute instance, you must allow other outbou
:::moniker range="azureml-api-2" ## Scenario: Use online endpoints
-__Inbound__ communication with the scoring URL of the online endpoint can be secured using the `public_network_access` flag on the endpoint. Setting the flag to `disabled` restricts the online endpoint to receiving traffic only from the virtual network. For secure inbound communications, the Azure Machine Learning workspace's private endpoint is used.
+Security for inbound and outbound communication are configured separately for managed online endpoints.
-__Outbound__ communication from a deployment can be secured on a per-deployment basis by using the `egress_public_network_access` flag. Outbound communication in this case is from the deployment to Azure Container Registry, storage blob, and workspace. Setting the flag to `true` will restrict communication with these resources to the virtual network.
+#### Inbound communication
-> [!NOTE]
-> For secure outbound communication, a private endpoint is created for each deployment where `egress_public_network_access` is set to `disabled`.
+__Inbound__ communication with the scoring URL of the online endpoint can be secured using the `public_network_access` flag on the endpoint. Setting the flag to `disabled` ensures that the online endpoint receives traffic only from a client's virtual network through the Azure Machine Learning workspace's private endpoint.
+
+The `public_network_access` flag of the Azure Machine Learning workspace also governs the visibility of the online endpoint. If this flag is `disabled`, then the scoring endpoints can only be accessed from virtual networks that contain a private endpoint for the workspace. If it is `enabled`, then the scoring endpoint can be accessed from the virtual network and public networks.
-Visibility of the endpoint is also governed by the `public_network_access` flag of the Azure Machine Learning workspace. If this flag is `disabled`, then the scoring endpoints can only be accessed from virtual networks that contain a private endpoint for the workspace. If it is `enabled`, then the scoring endpoint can be accessed from the virtual network and public networks.
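As a minimal sketch with the Azure Machine Learning Python SDK v2, assuming placeholder subscription, workspace, and endpoint names, you could restrict an existing endpoint to private inbound traffic like this:

```python
# Minimal sketch: restrict inbound scoring traffic so the endpoint is
# reachable only through the workspace's private endpoint. All names
# are hypothetical placeholders.
from azure.ai.ml import MLClient
from azure.identity import DefaultAzureCredential

ml_client = MLClient(
    DefaultAzureCredential(),
    subscription_id="<subscription-id>",
    resource_group_name="<resource-group>",
    workspace_name="<workspace-name>",
)

endpoint = ml_client.online_endpoints.get("my-online-endpoint")
endpoint.public_network_access = "disabled"  # or "enabled" for public access
ml_client.online_endpoints.begin_create_or_update(endpoint).result()
```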
+#### Outbound communication
-### Supported configurations
+__Outbound__ communication from a deployment can be secured at the workspace level by enabling managed virtual network isolation for your Azure Machine Learning workspace (preview). Enabling this setting causes Azure Machine Learning to create a managed virtual network for the workspace. Any deployments in the workspace's managed virtual network can use the virtual network's private endpoints for outbound communication.
+
+The [legacy network isolation method for securing outbound communication](concept-secure-online-endpoint.md#secure-outbound-access-with-legacy-network-isolation-method) worked by disabling a deployment's `egress_public_network_access` flag. We strongly recommend that you secure outbound communication for deployments by using a [workspace managed virtual network](concept-secure-online-endpoint.md) instead. Unlike the legacy approach, the `egress_public_network_access` flag for the deployment no longer applies when you use a workspace managed virtual network with your deployment (preview). Instead, outbound communication will be controlled by the rules set for the workspace's managed virtual network.
-| Configuration | Inbound </br> (Endpoint property) | Outbound </br> (Deployment property) | Supported? |
-| -- | -- | | |
-| secure inbound with secure outbound | `public_network_access` is disabled | `egress_public_network_access` is disabled | Yes |
-| secure inbound with public outbound | `public_network_access` is disabled | `egress_public_network_access` is enabled | Yes |
-| public inbound with secure outbound | `public_network_access` is enabled | `egress_public_network_access` is disabled | Yes |
-| public inbound with public outbound | `public_network_access` is enabled | `egress_public_network_access` is enabled | Yes |
:::moniker-end+ ## Scenario: Use Azure Kubernetes Service For information on the outbound configuration required for Azure Kubernetes Service, see the connectivity requirements section of [How to secure inference](how-to-secure-inferencing-vnet.md).
machine-learning Concept Secure Online Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-secure-online-endpoint.md
+
+ Title: Network isolation with managed online endpoints
+
+description: Learn how private endpoints provide network isolation for Azure Machine Learning managed online endpoints.
+++++++
+reviewer: msakande
+ Last updated : 08/15/2023++
+# Network isolation with managed online endpoints
++
+When deploying a machine learning model to a managed online endpoint, you can secure communication with the online endpoint by using [private endpoints](../private-link/private-endpoint-overview.md). In this article, you'll learn how a private endpoint can be used to secure inbound communication to a managed online endpoint. You'll also learn how a workspace managed virtual network can be used to provide secure communication between deployments and resources.
++
+You can secure inbound scoring requests from clients to an _online endpoint_ and secure outbound communications between a _deployment_, the Azure resources it uses, and private resources. Security for inbound and outbound communication are configured separately. For more information on endpoints and deployments, see [What are endpoints and deployments](concept-endpoints-online.md).
+
+The following architecture diagram shows how communications flow through private endpoints to the managed online endpoint. Incoming scoring requests from a client's virtual network flow through the workspace's private endpoint to the managed online endpoint. Outbound communications from deployments to services are handled through private endpoints from the workspace's managed virtual network to those service instances.
++
+> [!NOTE]
+> This article focuses on network isolation using the workspace's managed virtual network. For a description of the legacy method for network isolation, in which Azure Machine Learning creates a managed virtual network for each deployment in an endpoint, see the [Appendix](#appendix).
+
+## Limitations
++
+## Secure inbound scoring requests
+
+Secure inbound communication from a client to a managed online endpoint is possible by using a [private endpoint for the Azure Machine Learning workspace](./how-to-configure-private-link.md). This private endpoint on the client's virtual network communicates with the workspace of the managed online endpoint and is the means by which the managed online endpoint can receive incoming scoring requests from the client.
+
+To secure scoring requests to the online endpoint, so that a client can access it only through the workspace's private endpoint, set the `public_network_access` flag for the endpoint to `disabled`. After you've created the endpoint, you can update this setting to enable public network access if desired.
+
+Set the endpoint's `public_network_access` flag to `disabled`:
+
+# [Azure CLI](#tab/cli)
+
+```azurecli
+az ml online-endpoint create -f endpoint.yml --set public_network_access=disabled
+```
+
+# [Python](#tab/python)
+
+```python
+from azure.ai.ml.entities import ManagedOnlineEndpoint
+
+endpoint = ManagedOnlineEndpoint(name='my-online-endpoint',
+ description='this is a sample online endpoint',
+ tags={'foo': 'bar'},
+ auth_mode="key",
+ public_network_access="disabled"
+ # public_network_access="enabled"
+)
+```
+
+# [Studio](#tab/azure-studio)
+
+1. Go to the [Azure Machine Learning studio](https://ml.azure.com).
+1. Select the **Workspaces** page from the left navigation bar.
+1. Open a workspace by selecting its name.
+1. Select the **Endpoints** page from the left navigation bar.
+1. Select **+ Create** to open the **Create deployment** setup wizard.
+1. Disable the **Public network access** flag at the **Create endpoint** step.
+
+ :::image type="content" source="media/how-to-secure-online-endpoint/endpoint-disable-public-network-access.png" alt-text="A screenshot of how to disable public network access for an endpoint." lightbox="media/how-to-secure-online-endpoint/endpoint-disable-public-network-access.png":::
+++
+When `public_network_access` is `disabled`, inbound scoring requests are received using the workspace's private endpoint, and the endpoint can't be reached from public networks.
+
+Alternatively, if you set `public_network_access` to `enabled`, the endpoint can receive inbound scoring requests from the internet.
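As a usage sketch, a client that reaches the workspace through its private endpoint scores the endpoint as usual. The endpoint name and request file below are placeholder assumptions.

```python
# Minimal sketch: invoke the secured endpoint from a client with line of
# sight to the workspace's private endpoint. Names are placeholders.
from azure.ai.ml import MLClient
from azure.identity import DefaultAzureCredential

ml_client = MLClient(
    DefaultAzureCredential(),
    subscription_id="<subscription-id>",
    resource_group_name="<resource-group>",
    workspace_name="<workspace-name>",
)

response = ml_client.online_endpoints.invoke(
    endpoint_name="my-online-endpoint",
    request_file="sample-request.json",
)
print(response)
```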
+
+## Secure outbound access with workspace managed virtual network
+
+To secure outbound communication from a deployment to services, you need to enable managed virtual network isolation for your Azure Machine Learning workspace so that Azure Machine Learning can create a managed virtual network for the workspace.
+All managed online endpoints in the workspace (and managed compute resources for the workspace, such as compute clusters and compute instances) automatically use this workspace managed virtual network, and the deployments under the endpoints share the managed virtual network's private endpoints for communication with the workspace's resources.
+
+When you secure your workspace with a managed virtual network, the `egress_public_network_access` flag for managed online deployments no longer applies. Avoid setting this flag when creating the managed online deployment.
+
+For outbound communication with a workspace managed virtual network, Azure Machine Learning:
+
+- Creates private endpoints for the managed virtual network to use for communication with Azure resources that are used by the workspace, such as Azure Storage, Azure Key Vault, and Azure Container Registry.
+- Allows deployments to access the Microsoft Container Registry (MCR), which can be useful when you want to use curated environments or MLflow no-code deployment.
+- Allows users to configure private endpoint outbound rules to private resources and configure outbound rules (service tag or FQDN) for public resources. For more information on how to manage outbound rules, see [Manage outbound rules](how-to-managed-network.md#manage-outbound-rules).
+
+Furthermore, you can configure two isolation modes for outbound traffic from the workspace managed virtual network, namely:
+
+- **Allow internet outbound**, to allow all internet outbound traffic from the managed virtual network.
+- **Allow only approved outbound**, to control outbound traffic using private endpoints, FQDN outbound rules, and service tag outbound rules.
+
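+A minimal sketch of enabling the managed network with the **allow only approved outbound** isolation mode, using the Python SDK v2 (assumes `ml_client` exists; the workspace name is a placeholder and the isolation mode is passed as a string literal):
+
+```python
+from azure.ai.ml.entities import Workspace, ManagedNetwork
+
+# Sketch: turn on managed virtual network isolation for the workspace.
+ws = Workspace(
+    name="my-workspace",
+    managed_network=ManagedNetwork(isolation_mode="allow_only_approved_outbound"),
+)
+ml_client.workspaces.begin_update(ws).result()
+```
+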
+For example, if your workspace's managed virtual network contains two deployments under a managed online endpoint, both deployments can use the workspace's private endpoints to communicate with:
+
+- The Azure Machine Learning workspace
+- The Azure Storage blob that is associated with the workspace
+- The Azure Container Registry for the workspace
+- The Azure Key Vault
+- (Optional) additional private resources that support private endpoints.
+
+To learn more about configurations for the workspace managed virtual network, see [Managed virtual network architecture](how-to-managed-network.md#managed-virtual-network-architecture).
+
+## Scenarios for network isolation configuration
+
+Suppose a managed online endpoint has a deployment that uses an AI model, and you want to use an app to send scoring requests to the endpoint. You can decide what network isolation configuration to use for the managed online endpoint as follows:
+
+**For inbound communication**:
+
+If the app is publicly available on the internet, then you need to **enable** `public_network_access` for the endpoint so that it can receive inbound scoring requests from the app.
+
+However, say the app is private, such as an internal app within your organization. In this scenario, you want the AI model to be used only within your organization rather than expose it to the internet. Therefore, you need to **disable** the endpoint's `public_network_access` so that it can receive inbound scoring requests only through its workspace's private endpoint.
+
+**For outbound communication (deployment)**:
+
+Suppose your deployment needs to access private Azure resources (such as the Azure Storage blob, ACR, and Azure Key Vault), or it's unacceptable for the deployment to access the internet. In this case, you need to **enable** the _workspace's managed virtual network_ with the **allow only approved outbound** isolation mode. This isolation mode allows outbound communication from the deployment to approved destinations only, thereby protecting against data exfiltration. Furthermore, you can add outbound rules for the workspace, to allow access to more private or public resources. For more information, see [Configure a managed virtual network to allow only approved outbound](how-to-managed-network.md#configure-a-managed-virtual-network-to-allow-only-approved-outbound).
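+
+As an illustrative sketch, you can add an FQDN outbound rule with the Python SDK v2 (assumes `ml_client` exists and the workspace's managed network is already enabled; the rule name and destination are placeholders):
+
+```python
+from azure.ai.ml.entities import FqdnDestination
+
+# Allow deployments to reach a specific public host through the managed network.
+ws = ml_client.workspaces.get()
+ws.managed_network.outbound_rules = [
+    FqdnDestination(name="allow-pypi", destination="pypi.org")
+]
+ml_client.workspaces.begin_update(ws).result()
+```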
+
+However, if you want your deployment to access the internet, you can use the workspace's managed virtual network with the **allow internet outbound** isolation mode. Apart from accessing the internet, you can use the private endpoints of the managed virtual network to access the private Azure resources that you need.
+
+Finally, if your deployment doesn't need to access private Azure resources and you don't need to control access to the internet, then you don't need to use a workspace managed virtual network.
+
+## Appendix
+
+### Secure outbound access with legacy network isolation method
+
+For managed online endpoints, you can also secure outbound communication between deployments and resources by using an Azure Machine Learning managed virtual network for each deployment in the endpoint. This outbound communication is likewise secured by using private endpoints to those service instances.
+
+> [!NOTE]
+> We strongly recommend that you use the approach described in [Secure outbound access with workspace managed virtual network](#secure-outbound-access-with-workspace-managed-virtual-network) instead of this legacy method.
+
+To restrict communication between a deployment and external resources, including the Azure resources it uses, you should ensure that:
+
+- The deployment's `egress_public_network_access` flag is `disabled`. This flag ensures that the download of the model, code, and images needed by the deployment is secured with a private endpoint. Once you've created the deployment, you can't update (enable or disable) the `egress_public_network_access` flag. Attempting to change the flag while updating the deployment fails with an error.
+
+- The workspace has a private link that allows access to Azure resources via a private endpoint.
+
+- The workspace has a `public_network_access` flag that can be enabled or disabled. If you plan to use a managed online deployment that uses __public outbound__, you must also [configure the workspace to allow public access](how-to-configure-private-link.md#enable-public-access). This is because outbound communication from the online deployment is to the _workspace API_. When the deployment is configured to use __public outbound__, the workspace must be able to accept that public communication (allow public access).
+
+When you have multiple deployments, and you configure the `egress_public_network_access` to `disabled` for each deployment in a managed online endpoint, each deployment has its own independent Azure Machine Learning managed virtual network. For each virtual network, Azure Machine Learning creates three private endpoints for communication to the following resources:
+
+- The Azure Machine Learning workspace
+- The Azure Storage blob that is associated with the workspace
+- The Azure Container Registry for the workspace
+
+For example, if you set the `egress_public_network_access` flag to `disabled` for two deployments of a managed online endpoint, a total of six private endpoints are created. Each deployment would use three private endpoints to communicate with the workspace, blob, and container registry.
+
+> [!IMPORTANT]
+> Azure Machine Learning does not support peering between a deployment's managed virtual network and your client's virtual network. For secure access to resources needed by the deployment, we use private endpoints to communicate with the resources.
+
+The following diagram shows incoming scoring requests from a client's virtual network flowing through the workspace's private endpoint to the managed online endpoint. The diagram also shows two online deployments, each in its own Azure Machine Learning managed virtual network. Each deployment's virtual network has three private endpoints for outbound communication with the Azure Machine Learning workspace, the Azure Storage blob associated with the workspace, and the Azure Container Registry for the workspace.
++
+To disable `egress_public_network_access` and create the private endpoints:
+
+# [Azure CLI](#tab/cli)
+
+```azurecli
+az ml online-deployment create -f deployment.yml --set egress_public_network_access=disabled
+```
+
+# [Python](#tab/python)
+
+```python
+from azure.ai.ml.entities import ManagedOnlineDeployment, CodeConfiguration
+
+# `ml_client`, `model`, and `env` are assumed to be defined earlier.
+blue_deployment = ManagedOnlineDeployment(
+    name='blue',
+    endpoint_name='my-online-endpoint',
+    model=model,
+    code_configuration=CodeConfiguration(
+        code_local_path='./model-1/onlinescoring/',
+        scoring_script='score.py',
+    ),
+    environment=env,
+    instance_type='Standard_DS2_v2',
+    instance_count=1,
+    egress_public_network_access="disabled"
+    # egress_public_network_access="enabled"
+)
+
+ml_client.begin_create_or_update(blue_deployment)
+```
+
+# [Studio](#tab/azure-studio)
+
+1. Follow the steps in the **Create deployment** setup wizard to the **Deployment** step.
+1. Disable the **Egress public network access** flag.
+
+ :::image type="content" source="media/how-to-secure-online-endpoint/deployment-disable-egress-public-network-access.png" alt-text="A screenshot of how to disable the egress public network access for a deployment." lightbox="media/how-to-secure-online-endpoint/deployment-disable-egress-public-network-access.png":::
+++
+To confirm the creation of the private endpoints, first identify the storage account and container registry associated with the workspace (see [Download a configuration file](how-to-manage-workspace.md#download-a-configuration-file)). Then find each resource in the Azure portal and check the `Private endpoint connections` tab under the `Networking` menu.
+
+> [!IMPORTANT]
+> - As mentioned earlier, outbound communication from a managed online endpoint deployment is to the _workspace API_. When the deployment is configured to use __public outbound__ (in other words, the `egress_public_network_access` flag for the deployment is set to `enabled`), the workspace must be able to accept that public communication (the `public_network_access` flag for the workspace set to `enabled`).
+> - When online deployments are created with `egress_public_network_access` flag set to `disabled`, they will have access to the secured resources (workspace, blob, and container registry) only. For instance, if the deployment uses model assets uploaded to other storage accounts, the model download will fail. Ensure model assets are on the storage account associated with the workspace.
+> - When `egress_public_network_access` is set to `disabled`, the deployment can access only the workspace-associated resources secured in the virtual network. Conversely, when `egress_public_network_access` is set to `enabled`, the deployment can access only resources with public access, which means it can't access the resources secured in the virtual network.
+
+The following table lists the supported configurations when configuring inbound and outbound communications for an online endpoint:
+
+| Configuration | Inbound </br> (Endpoint property) | Outbound </br> (Deployment property) | Supported? |
+| -- | -- | | |
+| secure inbound with secure outbound | `public_network_access` is disabled | `egress_public_network_access` is disabled | Yes |
+| secure inbound with public outbound | `public_network_access` is disabled | `egress_public_network_access` is enabled</br>The workspace must also allow public access as the deployment outbound is to the workspace API. | Yes |
+| public inbound with secure outbound | `public_network_access` is enabled | `egress_public_network_access` is disabled | Yes |
+| public inbound with public outbound | `public_network_access` is enabled | `egress_public_network_access` is enabled</br>The workspace must also allow public access as the deployment outbound is to the workspace API. | Yes |
+
+## Next steps
+
+- [Workspace managed network isolation](how-to-managed-network.md)
+- [How to secure managed online endpoints with network isolation](how-to-secure-online-endpoint.md)
machine-learning Dsvm Common Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/data-science-virtual-machine/dsvm-common-identity.md
Azure AD DS makes it simple to manage your identities by providing a fully manag
1. In the Azure portal, add the user to Active Directory:
- 1. Sign in to the [Azure Active Directory admin center](https://aad.portal.azure.com) by using an account that's a global admin for the directory.
+ 1. Sign in to the [Azure portal](https://portal.azure.com) as a Global Administrator.
- 1. Select **Azure Active Directory** and then **Users and groups**.
+ 1. Browse to **Azure Active Directory** > **Users** > **All users**.
- 1. In **Users and groups**, select **All users**, and then select **New user**.
+ 1. Select **New user**.
The **User** pane opens:
machine-learning How To Access Azureml Behind Firewall https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-access-azureml-behind-firewall.md
monikerRange: 'azureml-api-2 || azureml-api-1'
Azure Machine Learning requires access to servers and services on the public internet. When implementing network isolation, you need to understand what access is required and how to enable it. > [!NOTE]
-> The information in this article applies to Azure Machine Learning workspace configured with a private endpoint.
+> The information in this article applies to Azure Machine Learning workspace configured to use an _Azure Virtual Network_. When using a _managed virtual network_, the required inbound and outbound configuration for the workspace is automatically applied. For more information, see [Azure Machine Learning managed virtual network](how-to-managed-network.md).
## Common terms and information
__Outbound traffic__
__To allow installation of Python packages for training and deployment__, allow __outbound__ traffic to the following host names:
-> [!NOTE]
-> This is not a complete list of the hosts required for all Python resources on the internet, only the most commonly used. For example, if you need access to a GitHub repository or other host, you must identify and add the required hosts for that scenario.
-
-| __Host name__ | __Purpose__ |
-| - | - |
-| `anaconda.com`<br>`*.anaconda.com` | Used to install default packages. |
-| `*.anaconda.org` | Used to get repo data. |
-| `pypi.org` | Used to list dependencies from the default index, if any, and the index isn't overwritten by user settings. If the index is overwritten, you must also allow `*.pythonhosted.org`. |
-| `*pytorch.org` | Used by some examples based on PyTorch. |
-| `*.tensorflow.org` | Used by some examples based on Tensorflow. |
## Scenario: Install RStudio on compute instance
machine-learning How To Access Data Interactive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-access-data-interactive.md
subscription = '<subscription_id>'
resource_group = '<resource_group>' workspace = '<workspace>' datastore_name = '<datastore>'
-path_on_datastore '<path>'
+path_on_datastore = '<path>'
# long-form Datastore uri format:
-uri = f'azureml://subscriptions/{subscription}/resourcegroups/{resource_group}/workspaces/{workspace}/datastores/{datastore_name}/paths/{path_on_datastore}'.
+uri = f'azureml://subscriptions/{subscription}/resourcegroups/{resource_group}/workspaces/{workspace}/datastores/{datastore_name}/paths/{path_on_datastore}'
``` These Datastore URIs are a known implementation of [Filesystem spec](https://filesystem-spec.readthedocs.io/en/latest/index.html) (`fsspec`): A unified pythonic interface to local, remote and embedded file systems and bytes storage.
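
Because these URIs are `fsspec`-compatible, libraries built on `fsspec` can consume them directly. A minimal sketch, assuming the `azureml-fsspec` package is installed and `uri` is the long-form URI constructed above:

```python
# Read a CSV straight from the datastore URI; fsspec resolves the
# azureml:// scheme to the underlying storage.
import pandas as pd

df = pd.read_csv(uri)
print(df.head())
```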
machine-learning How To Auto Train Image Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-auto-train-image-models.md
validation_data:
# [Python SDK](#tab/python)
- [!INCLUDE [sdk v2](includes/machine-learning-sdk-v2.md)]
You can create data inputs from training and validation MLTable from your local directory or cloud storage with the following code:
In individual trials, you directly control the model architecture and hyperparam
#### Supported model architectures
-The following table summarizes the supported models for each computer vision task.
+The following table summarizes the supported legacy models for each computer vision task. Using only these legacy models triggers runs using the legacy runtime (where each individual run or trial is submitted as a command job). See the following section for HuggingFace and MMDetection support.
Task | model architectures | String literal syntax<br> ***`default_model`\**** denoted with \* |-|-
Image classification<br> (multi-class and multi-label)| **MobileNet**: Light-wei
Object detection | **YOLOv5**: One stage object detection model <br> **Faster RCNN ResNet FPN**: Two stage object detection models <br> **RetinaNet ResNet FPN**: address class imbalance with Focal Loss <br> <br>*Note: Refer to [`model_size` hyperparameter](reference-automl-images-hyperparameters.md#model-specific-hyperparameters) for YOLOv5 model sizes.*| ***`yolov5`\**** <br> `fasterrcnn_resnet18_fpn` <br> `fasterrcnn_resnet34_fpn` <br> `fasterrcnn_resnet50_fpn` <br> `fasterrcnn_resnet101_fpn` <br> `fasterrcnn_resnet152_fpn` <br> `retinanet_resnet50_fpn` Instance segmentation | **MaskRCNN ResNet FPN**| `maskrcnn_resnet18_fpn` <br> `maskrcnn_resnet34_fpn` <br> ***`maskrcnn_resnet50_fpn`\**** <br> `maskrcnn_resnet101_fpn` <br> `maskrcnn_resnet152_fpn`
+#### Supported model architectures - HuggingFace and MMDetection (preview)
+
+With the new backend that runs on [Azure Machine Learning pipelines](concept-ml-pipelines.md), you can additionally use any image classification model from the [HuggingFace Hub](https://huggingface.co/models?pipeline_tag=image-classification&library=transformers) which is part of the transformers library (such as microsoft/beit-base-patch16-224), as well as any object detection or instance segmentation model from the [MMDetection Version 2.28.2 Model Zoo](https://mmdetection.readthedocs.io/en/v2.28.2/model_zoo.html) (such as atss_r50_fpn_1x_coco).
+
+In addition to supporting any model from HuggingFace Transformers and MMDetection 2.28.2, we also offer a list of curated models from these libraries in the azureml-staging registry. These curated models have been tested thoroughly and use default hyperparameters selected from extensive benchmarking to ensure effective training. The following table summarizes these curated models.
+
+Task | model architectures | String literal syntax
+|-|-|-
+Image classification<br> (multi-class and multi-label)| **BEiT** <br> **ViT** <br> **DeiT** <br> **SwinV2** | [`microsoft/beit-base-patch16-224-pt22k-ft22k`](https://ml.azure.com/registries/azureml/models/microsoft-beit-base-patch16-224-pt22k-ft22k/version/5)<br> [`google/vit-base-patch16-224`](https://ml.azure.com/registries/azureml/models/google-vit-base-patch16-224/version/5)<br> [`facebook/deit-base-patch16-224`](https://ml.azure.com/registries/azureml/models/facebook-deit-base-patch16-224/version/5)<br> [`microsoft/swinv2-base-patch4-window12-192-22k`](https://ml.azure.com/registries/azureml/models/microsoft-swinv2-base-patch4-window12-192-22k/version/5)
+Object Detection | **Sparse R-CNN** <br> **Deformable DETR** <br> **VFNet** <br> **YOLOF** <br> **Swin** | [`sparse_rcnn_r50_fpn_300_proposals_crop_mstrain_480-800_3x_coco`](https://ml.azure.com/registries/azureml/models/sparse_rcnn_r50_fpn_300_proposals_crop_mstrain_480-800_3x_coco/version/3)<br> [`sparse_rcnn_r101_fpn_300_proposals_crop_mstrain_480-800_3x_coco`](https://ml.azure.com/registries/azureml/models/sparse_rcnn_r101_fpn_300_proposals_crop_mstrain_480-800_3x_coco/version/3) <br> [`deformable_detr_twostage_refine_r50_16x2_50e_coco`](https://ml.azure.com/registries/azureml/models/deformable_detr_twostage_refine_r50_16x2_50e_coco/version/3) <br> [`vfnet_r50_fpn_mdconv_c3-c5_mstrain_2x_coco`](https://ml.azure.com/registries/azureml/models/vfnet_r50_fpn_mdconv_c3-c5_mstrain_2x_coco/version/3) <br> [`vfnet_x101_64x4d_fpn_mdconv_c3-c5_mstrain_2x_coco`](https://ml.azure.com/registries/azureml/models/vfnet_x101_64x4d_fpn_mdconv_c3-c5_mstrain_2x_coco/version/3) <br> [`yolof_r50_c5_8x8_1x_coco`](https://ml.azure.com/registries/azureml/models/yolof_r50_c5_8x8_1x_coco/version/3)
+Instance Segmentation | **Swin** | [`mask_rcnn_swin-t-p4-w7_fpn_1x_coco`](https://ml.azure.com/registries/azureml/models/mask_rcnn_swin-t-p4-w7_fpn_1x_coco/version/3)
+
+We constantly update the list of curated models. You can get the most up-to-date list of the curated models for a given task using the Python SDK:
+```python
+from azure.identity import DefaultAzureCredential
+from azure.ai.ml import MLClient
+
+credential = DefaultAzureCredential()
+ml_client = MLClient(credential, registry_name="azureml-staging")
+
+models = ml_client.models.list()
+classification_models = []
+for model in models:
+    model = ml_client.models.get(model.name, label="latest")
+    if model.tags['task'] == 'image-classification':  # choose an image task
+        classification_models.append(model.name)
+
+classification_models
+```
+Output:
+```
+['google-vit-base-patch16-224',
+ 'microsoft-swinv2-base-patch4-window12-192-22k',
+ 'facebook-deit-base-patch16-224',
+ 'microsoft-beit-base-patch16-224-pt22k-ft22k']
+```
+Using any HuggingFace or MMDetection model triggers runs using pipeline components. If both legacy and HuggingFace/MMDetection models are used, all runs/trials are triggered using components.
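+
+For instance, a minimal sketch of selecting one of these models with the Python SDK v2 (assumes an AutoML image classification job, `image_job`, has already been configured as described in this article):
+
+```python
+# Pick a curated HuggingFace model; this routes the run through the
+# pipeline-component backend described above.
+image_job.set_training_parameters(model_name="microsoft/beit-base-patch16-224-pt22k-ft22k")
+```
+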
+ In addition to controlling the model architecture, you can also tune hyperparameters used for model training. While many of the hyperparameters exposed are model-agnostic, there are instances where hyperparameters are task-specific or model-specific. [Learn more about the available hyperparameters for these instances](reference-automl-images-hyperparameters.md).
machine-learning How To Auto Train Nlp Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-auto-train-nlp-models.md
You can seamlessly integrate with the [Azure Machine Learning data labeling](how
To install the SDK you can either, * Create a compute instance, which automatically installs the SDK and is pre-configured for ML workflows. See [Create an Azure Machine Learning compute instance](how-to-create-compute-instance.md) for more information.
- * [Install the `automl` package yourself](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/README.md#setup-using-a-local-conda-environment), which includes the [default installation](/python/api/overview/azure/ml/install#default-install) of the SDK.
+ * [Install the `automl` package yourself](https://github.com/Azure/azureml-examples/blob/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/README.md#setup-using-a-local-conda-environment), which includes the [default installation](/python/api/overview/azure/ml/install#default-install) of the SDK.
[!INCLUDE [automl-sdk-version](includes/machine-learning-automl-sdk-version.md)]
AutoML NLP allows you to provide a list of models and combinations of hyperparam
All the pre-trained text DNN models currently available in AutoML NLP for fine-tuning are listed below:
-* bert_base_cased
-* bert_large_uncased
-* bert_base_multilingual_cased
-* bert_base_german_cased
-* bert_large_cased
-* distilbert_base_cased
-* distilbert_base_uncased
-* roberta_base
-* roberta_large
-* distilroberta_base
-* xlm_roberta_base
-* xlm_roberta_large
-* xlnet_base_cased
-* xlnet_large_cased
+* bert-base-cased
+* bert-large-uncased
+* bert-base-multilingual-cased
+* bert-base-german-cased
+* bert-large-cased
+* distilbert-base-cased
+* distilbert-base-uncased
+* roberta-base
+* roberta-large
+* distilroberta-base
+* xlm-roberta-base
+* xlm-roberta-large
+* xlnet-base-cased
+* xlnet-large-cased
Note that the large models are larger than their base counterparts. They are typically more performant, but they take up more GPU memory and time for training. As such, their SKU requirements are more stringent: we recommend running on ND-series VMs for the best results.
+## Supported model algorithms - HuggingFace (preview)
+
+With the new backend that runs on [Azure Machine Learning pipelines](concept-ml-pipelines.md), you can additionally use any [text classification](https://huggingface.co/models?pipeline_tag=text-classification&library=transformers) or [token classification](https://huggingface.co/models?pipeline_tag=token-classification&sort=trending) model from the HuggingFace Hub that's part of the transformers library (such as microsoft/deberta-large-mnli). You can also find a curated list of models in the [Azure Machine Learning model registry](concept-foundation-models.md?view=azureml-api-2&preserve-view=true) that have been validated with the pipeline components.
+
+Using any HuggingFace model triggers runs using pipeline components. If both legacy and HuggingFace models are used, all runs/trials are triggered using components.
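+
+For example, a minimal sketch with the Python SDK v2 (assumes an AutoML text classification job, `text_classification_job`, has already been configured):
+
+```python
+# Fine-tune a HuggingFace hub model instead of one of the legacy names above.
+text_classification_job.set_training_parameters(model_name="microsoft/deberta-large-mnli")
+```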
+ ## Supported hyperparameters The following table describes the hyperparameters that AutoML NLP supports.
machine-learning How To Automl Forecasting Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-automl-forecasting-faq.md
You can start by reading the [Set up AutoML to train a time-series forecasting m
- [Bike share example](https://github.com/Azure/azureml-examples/blob/main/sdk/python/jobs/automl-standalone-jobs/automl-forecasting-task-bike-share/auto-ml-forecasting-bike-share.ipynb) - [Forecasting using deep learning](https://github.com/Azure/azureml-examples/blob/main/sdk/python/jobs/automl-standalone-jobs/automl-forecasting-github-dau/auto-ml-forecasting-github-dau.ipynb)-- [Many Models solution](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/forecasting-many-models/auto-ml-forecasting-many-models.ipynb)-- [Forecasting recipes](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/forecasting-recipes-univariate/auto-ml-forecasting-univariate-recipe-experiment-settings.ipynb)-- [Advanced forecasting scenarios](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/forecasting-forecast-function/auto-ml-forecasting-function.ipynb)
+- [Many Models solution](https://github.com/Azure/azureml-examples/blob/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/forecasting-many-models/auto-ml-forecasting-many-models.ipynb)
+- [Forecasting recipes](https://github.com/Azure/azureml-examples/blob/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/forecasting-recipes-univariate/auto-ml-forecasting-univariate-recipe-experiment-settings.ipynb)
+- [Advanced forecasting scenarios](https://github.com/Azure/azureml-examples/blob/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/forecasting-forecast-function/auto-ml-forecasting-function.ipynb)
## Why is AutoML slow on my data?
To choose between them, note that NRMSE penalizes outliers in the training data
## How can I improve the accuracy of my model? - Ensure that you're configuring AutoML the best way for your data. For more information, see the [What modeling configuration should I use?](#what-modeling-configuration-should-i-use) answer.-- Check out the [forecasting recipes notebook](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/forecasting-recipes-univariate/auto-ml-forecasting-univariate-recipe-experiment-settings.ipynb) for step-by-step guides on how to build and improve forecast models. -- Evaluate the model by using back tests over several forecasting cycles. This procedure gives a more robust estimate of forecasting error and gives you a baseline to measure improvements against. For an example, see the [back-testing notebook](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/forecasting-backtest-single-model/auto-ml-forecasting-backtest-single-model.ipynb).
+- Check out the [forecasting recipes notebook](https://github.com/Azure/azureml-examples/blob/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/forecasting-recipes-univariate/auto-ml-forecasting-univariate-recipe-experiment-settings.ipynb) for step-by-step guides on how to build and improve forecast models.
+- Evaluate the model by using back tests over several forecasting cycles. This procedure gives a more robust estimate of forecasting error and gives you a baseline to measure improvements against. For an example, see the [back-testing notebook](https://github.com/Azure/azureml-examples/blob/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/forecasting-backtest-single-model/auto-ml-forecasting-backtest-single-model.ipynb).
- If the data is noisy, consider aggregating it to a coarser frequency to increase the signal-to-noise ratio. For more information, see [Frequency and target data aggregation](./how-to-auto-train-forecast.md#frequency--target-data-aggregation). - Add new features that can help predict the target. Subject matter expertise can help greatly when you're selecting training data. - Compare validation and test metric values, and determine if the selected model is underfitting or overfitting the data. This knowledge can guide you to a better training configuration. For example, you might determine that you need to use more cross-validation folds in response to overfitting.
AutoML supports the following advanced prediction scenarios:
- Forecasting beyond the forecast horizon - Forecasting when there's a gap in time between training and forecasting periods
-For examples and details, see the [notebook for advanced forecasting scenarios](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/forecasting-forecast-function/auto-ml-forecasting-function.ipynb).
+For examples and details, see the [notebook for advanced forecasting scenarios](https://github.com/Azure/azureml-examples/blob/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/forecasting-forecast-function/auto-ml-forecasting-function.ipynb).
## How do I view metrics from forecasting training jobs?
machine-learning How To Create Attach Compute Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-attach-compute-cluster.md
In this article, learn how to:
Azure Machine Learning compute cluster is a managed-compute infrastructure that allows you to easily create a single or multi-node compute. The compute cluster is a resource that can be shared with other users in your workspace. The compute scales up automatically when a job is submitted, and can be put in an Azure Virtual Network. Compute cluster supports **no public IP** deployment as well in virtual network. The compute executes in a containerized environment and packages your model dependencies in a [Docker container](https://www.docker.com/why-docker).
-Compute clusters can run jobs securely in a [virtual network environment](how-to-secure-training-vnet.md), without requiring enterprises to open up SSH ports. The job executes in a containerized environment and packages your model dependencies in a Docker container.
+Compute clusters can run jobs securely in either a [managed virtual network](how-to-managed-network.md) or an [Azure virtual network](how-to-secure-training-vnet.md), without requiring enterprises to open up SSH ports. The job executes in a containerized environment and packages your model dependencies in a Docker container.
## Limitations
Compute clusters can run jobs securely in a [virtual network environment](how-to
* Azure Machine Learning Compute has default limits, such as the number of cores that can be allocated. For more information, see [Manage and request quotas for Azure resources](how-to-manage-quotas.md).
-* Azure allows you to place _locks_ on resources, so that they can't be deleted or are read only. __Do not apply resource locks to the resource group that contains your workspace__. Applying a lock to the resource group that contains your workspace will prevent scaling operations for Azure Machine Learning compute clusters. For more information on locking resources, see [Lock resources to prevent unexpected changes](../azure-resource-manager/management/lock-resources.md).
+* Azure allows you to place _locks_ on resources, so that they can't be deleted or are read only. __Do not apply resource locks to the resource group that contains your workspace__. Applying a lock to the resource group that contains your workspace prevents scaling operations for Azure Machine Learning compute clusters. For more information on locking resources, see [Lock resources to prevent unexpected changes](../azure-resource-manager/management/lock-resources.md).
## Create
Create a single- or multi- node compute cluster for your training, batch inferen
|Field |Description | |||
- | Location | The Azure region where the compute cluster will be created. By default, this is the same location as the workspace. If you don't have sufficient quota in the default region, switch to a different region for more options.</br>When using a different region than your workspace or datastores, you may see increased network latency and data transfer costs. The latency and costs can occur when creating the cluster, and when running jobs on it. |
+ | Location | The Azure region where the compute cluster is created. By default, this is the same location as the workspace. If you don't have sufficient quota in the default region, switch to a different region for more options. </br>When using a different region than your workspace or datastores, you may see increased network latency and data transfer costs. The latency and costs can occur when creating the cluster, and when running jobs on it. |
|Virtual machine type | Choose CPU or GPU. This type can't be changed after creation | |Virtual machine priority | Choose **Dedicated** or **Low priority**. Low priority virtual machines are cheaper but don't guarantee the compute nodes. Your job may be preempted. |Virtual machine size | Supported virtual machine sizes might be restricted in your region. Check the [availability list](https://azure.microsoft.com/global-infrastructure/services/?products=virtual-machines) |
Create a single- or multi- node compute cluster for your training, batch inferen
|Field |Description | |||
- |Compute name | * Name is required and must be between 3 to 24 characters long.<br><br> * Valid characters are upper and lower case letters, digits, and the **-** character.<br><br> * Name must start with a letter<br><br> * Name needs to be unique across all existing computes within an Azure region. You'll see an alert if the name you choose isn't unique<br><br> * If **-** character is used, then it needs to be followed by at least one letter later in the name |
- |Minimum number of nodes | Minimum number of nodes that you want to provision. If you want a dedicated number of nodes, set that count here. Save money by setting the minimum to 0, so you won't pay for any nodes when the cluster is idle. |
- |Maximum number of nodes | Maximum number of nodes that you want to provision. The compute will autoscale to a maximum of this node count when a job is submitted. |
+ |Compute name | * Name is required and must be between 3 to 24 characters long.<br><br> * Valid characters are upper and lower case letters, digits, and the **-** character.<br><br> * Name must start with a letter<br><br> * Name needs to be unique across all existing computes within an Azure region. You see an alert if the name you choose isn't unique<br><br> * If **-** character is used, then it needs to be followed by at least one letter later in the name |
+ |Minimum number of nodes | Minimum number of nodes that you want to provision. If you want a dedicated number of nodes, set that count here. Save money by setting the minimum to 0, so you don't pay for any nodes when the cluster is idle. |
+ |Maximum number of nodes | Maximum number of nodes that you want to provision. The compute automatically scales to a maximum of this node count when a job is submitted. |
| Idle seconds before scale down | Idle time before scaling the cluster down to the minimum node count. | | Enable SSH access | Use the same instructions as [Enable SSH access](#enable-ssh-access) for a compute instance (above). |
- |Advanced settings | Optional. Configure a virtual network. Specify the **Resource group**, **Virtual network**, and **Subnet** to create the compute instance inside an Azure Virtual Network (vnet). For more information, see these [network requirements](./how-to-secure-training-vnet.md) for vnet. Also attach [managed identities](#set-up-managed-identity) to grant access to resources.
+ |Advanced settings | Optional. Configure network settings.<br><br> * If you use an *Azure Virtual Network*, specify the **Resource group**, **Virtual network**, and **Subnet** to create the compute cluster inside the network. For more information, see [network requirements](./how-to-secure-training-vnet.md).<br><br> * If you use an *Azure Machine Learning managed network*, the compute cluster is automatically in the managed network. For more information, see [managed computes with a managed network](how-to-managed-network-compute.md).<br><br> * **No public IP** configures whether the compute cluster has a public IP address when in a network.<br><br> * Assign a [managed identity](#set-up-managed-identity) to grant access to resources.
1. Select __Create__.
SSH access is disabled by default. SSH access can't be changed after creation.
## Lower your compute cluster cost with low priority VMs
-You may also choose to use [low-priority VMs](how-to-manage-optimize-cost.md#low-pri-vm) to run some or all of your workloads. These VMs don't have guaranteed availability and may be preempted while in use. You'll have to restart a preempted job.
+You may also choose to use [low-priority VMs](how-to-manage-optimize-cost.md#low-pri-vm) to run some or all of your workloads. These VMs don't have guaranteed availability and may be preempted while in use. You have to restart a preempted job.
-Using Azure Low Priority Virtual Machines allows you to take advantage of Azure's unused capacity at a significant cost savings. At any point in time when Azure needs the capacity back, the Azure infrastructure will evict Azure Low Priority Virtual Machines. Therefore, Azure Low Priority Virtual Machines are great for workloads that can handle interruptions. The amount of available capacity can vary based on size, region, time of day, and more. When deploying Azure Low Priority Virtual Machines, Azure will allocate the VMs if there's capacity available, but there's no SLA for these VMs. An Azure Low Priority Virtual Machine offers no high availability guarantees. At any point in time when Azure needs the capacity back, the Azure infrastructure will evict Azure Low Priority Virtual Machines
+Using Azure Low Priority Virtual Machines allows you to take advantage of Azure's unused capacity at a significant cost savings. At any point in time when Azure needs the capacity back, the Azure infrastructure evicts Azure Low Priority Virtual Machines. Therefore, Azure Low Priority Virtual Machines are great for workloads that can handle interruptions. The amount of available capacity can vary based on size, region, time of day, and more. When deploying Azure Low Priority Virtual Machines, Azure allocates the VMs if there's capacity available, but there's no SLA for these VMs. An Azure Low Priority Virtual Machine offers no high availability guarantees.
Use any of these ways to specify a low-priority VM:
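
For instance, a minimal sketch with the Python SDK v2 (assumes `ml_client` exists; the cluster name and sizes are placeholders):

```python
from azure.ai.ml.entities import AmlCompute

# The low-priority tier trades guaranteed availability for lower cost.
cluster = AmlCompute(
    name="low-pri-cluster",
    size="STANDARD_DS3_v2",
    min_instances=0,
    max_instances=4,
    tier="low_priority",
)
ml_client.begin_create_or_update(cluster).result()
```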
There's a chance that some users who created their Azure Machine Learning worksp
### Stuck at resizing
-If your Azure Machine Learning compute cluster appears stuck at resizing (0 -> 0) for the node state, this may be caused by Azure resource locks.
+If your Azure Machine Learning compute cluster appears stuck at resizing (0 -> 0) for the node state, Azure resource locks may be the cause.
[!INCLUDE [resource locks](includes/machine-learning-resource-lock.md)]
machine-learning How To Create Compute Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-compute-instance.md
You can also [use a setup script](how-to-customize-compute-instance.md) to creat
Compute instances can run jobs securely in a [virtual network environment](how-to-secure-training-vnet.md), without requiring enterprises to open up SSH ports. The job executes in a containerized environment and packages your model dependencies in a Docker container. > [!NOTE]
-> This article shows CLI v2 in the sections below. If you are still using CLI v1, see [Create an Azure Machine Learning compute cluster CLI v1)](v1/how-to-create-manage-compute-instance.md?view=azureml-api-1&preserve-view=true).
+> This article uses CLI v2 in some examples. If you're still using CLI v1, see [Create an Azure Machine Learning compute instance (CLI v1)](v1/how-to-create-manage-compute-instance.md?view=azureml-api-1&preserve-view=true).
## Prerequisites * An Azure Machine Learning workspace. For more information, see [Create an Azure Machine Learning workspace](how-to-manage-workspace.md). In the storage account, the "Allow storage account key access" option must be enabled for compute instance creation to be successful.
-Choose the tab for the environment you are using for additional prerequisites.
+Choose the tab for the environment you're using for other prerequisites.
# [Python SDK](#tab/python)
Choose the tab for the environment you are using for additional prerequisites.
# [Studio](#tab/azure-studio)
-* No additional prerequisites.
+* No extra prerequisites.
# [Studio (preview)](#tab/azure-studio-preview)
Choose the tab for the environment you are using for additional prerequisites.
Creating a compute instance is a one time process for your workspace. You can reuse the compute as a development workstation or as a compute target for training. You can have multiple compute instances attached to your workspace.
-The dedicated cores per region per VM family quota and total regional quota, which applies to compute instance creation, is unified and shared with Azure Machine Learning training compute cluster quota. Stopping the compute instance doesn't release quota to ensure you'll be able to restart the compute instance. It isn't possible to change the virtual machine size of compute instance once it's created.
+The dedicated cores per region per VM family quota and total regional quota, which applies to compute instance creation, is unified and shared with the Azure Machine Learning training compute cluster quota. Stopping the compute instance doesn't release quota, which ensures you're able to restart the compute instance. It isn't possible to change the virtual machine size of a compute instance after it's created.
The fastest way to create a compute instance is to follow the [Create resources you need to get started](quickstart-create-resources.md).
Where the file *create-instance.yml* is:
|Field |Description | |||
- |Compute name | - Name is required and must be between 3 to 24 characters long.<br/> - Valid characters are upper and lower case letters, digits, and the **-** character.<br/> - Name must start with a letter<br/> - Name needs to be unique across all existing computes within an Azure region. You'll see an alert if the name you choose isn't unique<br/> - If **-** character is used, then it needs to be followed by at least one letter later in the name |
+ |Compute name | - Name is required and must be between 3 to 24 characters long.<br/> - Valid characters are upper and lower case letters, digits, and the **-** character.<br/> - Name must start with a letter<br/> - Name needs to be unique across all existing computes within an Azure region. You see an alert if the name you choose isn't unique<br/> - If **-** character is used, then it needs to be followed by at least one letter later in the name |
|Virtual machine type | Choose CPU or GPU. This type can't be changed after creation | |Virtual machine size | Supported virtual machine sizes might be restricted in your region. Check the [availability list](https://azure.microsoft.com/global-infrastructure/services/?products=virtual-machines) |
Where the file *create-instance.yml* is:
1. <a name="advanced-settings"></a> Select **Next: Advanced Settings** if you want to: * Enable idle shutdown. Configure a compute instance to automatically shut down if it's inactive. For more information, see [enable idle shutdown](#configure-idle-shutdown).
- * Add schedule. Schedule times for the compute instance to automatically start and/or shut down. See [schedule details](#schedule-automatic-start-and-stop) below.
- * Enable SSH access. Follow the [detailed SSH access instructions](#enable-ssh-access) below.
- * Enable virtual network. Specify the **Resource group**, **Virtual network**, and **Subnet** to create the compute instance inside an Azure Virtual Network (vnet). You can also select __No public IP__ to prevent the creation of a public IP address, which requires a private link workspace. You must also satisfy these [network requirements](./how-to-secure-training-vnet.md) for virtual network setup.
+ * Add schedule. Schedule times for the compute instance to automatically start and/or shut down. For more information, see [schedule details](#schedule-automatic-start-and-stop).
+ * Enable SSH access. Follow the information in the [detailed SSH access instructions](#enable-ssh-access) section.
+ * Enable virtual network:
+
+ * If you're using an __Azure Virtual Network__, specify the **Resource group**, **Virtual network**, and **Subnet** to create the compute instance inside an Azure Virtual Network. You can also select __No public IP__ to prevent the creation of a public IP address, which requires a private link workspace. You must also satisfy these [network requirements](./how-to-secure-training-vnet.md) for virtual network setup.
+
+ * If you're using an Azure Machine Learning __managed virtual network__, the compute instance is created inside the managed virtual network. You can also select __No public IP__ to prevent the creation of a public IP address. For more information, see [managed compute with a managed network](./how-to-managed-network-compute.md).
+ * Assign the compute instance to another user. For more about assigning to other users, see [Create on behalf of](#create-on-behalf-of). * Provision with a setup script - for more information about how to create and use a setup script, see [Customize the compute instance with a script](how-to-customize-compute-instance.md).
Where the file *create-instance.yml* is:
|Field |Description | |||
- |Compute name | - Name is required and must be between 3 to 24 characters long.<br/> - Valid characters are upper and lower case letters, digits, and the **-** character.<br/> - Name must start with a letter<br/> - Name needs to be unique across all existing computes within an Azure region. You'll see an alert if the name you choose isn't unique<br/> - If **-** character is used, then it needs to be followed by at least one letter later in the name |
+ |Compute name | - Name is required and must be between 3 to 24 characters long.<br/> - Valid characters are upper and lower case letters, digits, and the **-** character.<br/> - Name must start with a letter<br/> - Name needs to be unique across all existing computes within an Azure region. You see an alert if the name you choose isn't unique<br/> - If **-** character is used, then it needs to be followed by at least one letter later in the name |
|Virtual machine type | Choose CPU or GPU. This type can't be changed after creation | |Virtual machine size | Supported virtual machine sizes might be restricted in your region. Check the [availability list](https://azure.microsoft.com/global-infrastructure/services/?products=virtual-machines) | 1. Select **Review + Create** unless you want to configure advanced settings for the compute instance.
-1. Select **Next** to go to **Scheduling** if you want to to schedule the compute to start or stop on a recurring basis. See [enable idle shutdown](#configure-idle-shutdown) & [add schedule](#schedule-automatic-start-and-stop) below.
+1. Select **Next** to go to **Scheduling** if you want to schedule the compute to start or stop on a recurring basis. See [enable idle shutdown](#configure-idle-shutdown) & [add schedule](#schedule-automatic-start-and-stop) sections.
1. <a name="security-settings"></a>Select **Security** if you want to configure security settings such as SSH, virtual network, root access, and managed identity for your compute instance. Use this section to: * Assign the computer to another user. For more about assigning to other users, see [Create on behalf of](#create-on-behalf-of) * Assign a managed identity. See [Assign managed identity](#assign-managed-identity). * Enable SSH access. Follow the [detailed SSH access instructions](#enable-ssh-access).
- * Enable virtual network. Specify the **Resource group**, **Virtual network**, and **Subnet** to create the compute instance inside an Azure Virtual Network (vnet). You can also select __No public IP__ to prevent the creation of a public IP address, which requires a private link workspace. You must also satisfy these [network requirements](./how-to-secure-training-vnet.md) for virtual network setup.
+ * Enable virtual network:
+
+ * If you're using an __Azure Virtual Network__, specify the **Resource group**, **Virtual network**, and **Subnet** to create the compute instance inside an Azure Virtual Network. You can also select __No public IP__ to prevent the creation of a public IP address, which requires a private link workspace. You must also satisfy these [network requirements](./how-to-secure-training-vnet.md) for virtual network setup.
+
+ * If you're using an Azure Machine Learning __managed virtual network__, the compute instance is created inside the managed virtual network. You can also select __No public IP__ to prevent the creation of a public IP address. For more information, see [managed compute with a managed network](./how-to-managed-network-compute.md).
* Allow root access. (preview) 1. Select **Applications** if you want to add custom applications to use on your compute instance, such as RStudio or Posit Workbench. See [Add custom applications such as RStudio or Posit Workbench](#add-custom-applications-such-as-rstudio-or-posit-workbench).
A compute instance is considered inactive if the below conditions are met:
* No active Jupyter terminal sessions * No active Azure Machine Learning runs or experiments * No SSH connections
-* No VS code connections; you must close your VS Code connection for your compute instance to be considered inactive. Sessions are auto-terminated if VS code detects no activity for 3 hours.
+* No VS Code connections; you must close your VS Code connection for your compute instance to be considered inactive. Sessions are autoterminated if VS Code detects no activity for 3 hours.
* No custom applications are running on the compute
-A compute instance will not be considered idle if any custom application is running. There are also some basic bounds around inactivity time periods; compute instance must be inactive for a minimum of 15 mins and a maximum of three days.
+A compute instance won't be considered idle if any custom application is running. There are also some basic bounds around inactivity time periods; compute instance must be inactive for a minimum of 15 mins and a maximum of three days.
-Also, if a compute instance has already been idle for a certain amount of time, if idle shutdown settings are updated to an amount of time shorter than the current idle duration, the idle time clock will be reset to 0. For example, if the compute instance has already been idle for 20 minutes, and the shutdown settings are updated to 15 minutes, the idle time clock will be reset to 0.
+Also, if a compute instance has already been idle for a certain amount of time, if idle shutdown settings are updated to an amount of time shorter than the current idle duration, the idle time clock is reset to 0. For example, if the compute instance has already been idle for 20 minutes, and the shutdown settings are updated to 15 minutes, the idle time clock is reset to 0.
The setting can be configured during compute instance creation or for existing compute instances via the following interfaces:
When creating a new compute instance, add the `idle_time_before_shutdown_minutes
ComputeInstance(name=ci_basic_name, size="STANDARD_DS3_v2", idle_time_before_shutdown_minutes="30") ```
-You cannot change the idle time of an existing compute instance with the Python SDK.
+You can't change the idle time of an existing compute instance with the Python SDK.
# [Azure CLI](#tab/azure-cli)
When creating a new compute instance, add `idle_time_before_shutdown_minutes` to
idle_time_before_shutdown_minutes: 30 ```
-You cannot change the idle time of an existing compute instance with the CLI.
+You can't change the idle time of an existing compute instance with the CLI.
# [Studio](#tab/azure-studio)
You can also change the idle time using:
## Schedule automatic start and stop
-Define multiple schedules for auto-shutdown and auto-start. For instance, create a schedule to start at 9 AM and stop at 6 PM from Monday-Thursday, and a second schedule to start at 9 AM and stop at 4 PM for Friday. You can create a total of four schedules per compute instance.
+Define multiple schedules for autoshutdown and autostart. For instance, create a schedule to start at 9 AM and stop at 6 PM from Monday-Thursday, and a second schedule to start at 9 AM and stop at 4 PM for Friday. You can create a total of four schedules per compute instance.
Schedules can also be defined for [create on behalf of](#create-on-behalf-of) compute instances. You can create a schedule that creates the compute instance in a stopped state. Stopped compute instances are useful when you create a compute instance on behalf of another user.
-Prior to a scheduled shutdown, users will see a notification alerting them that the Compute Instance is about to shut down. At that point, the user can choose to dismiss the upcoming shutdown event, if for example they are in the middle of using their Compute Instance.
+Prior to a scheduled shutdown, users see a notification alerting them that the compute instance is about to shut down. At that point, the user can choose to dismiss the upcoming shutdown event, for example, if they're in the middle of using their compute instance.
## Create a schedule
Then use either cron or LogicApps expressions to define the schedule that starts
} ```
-* Action can have value of "Start" or "Stop".
+* Action can have value of `Start` or `Stop`.
* For trigger type of `Recurrence` use the same syntax as logic app, with this [recurrence schema](../logic-apps/logic-apps-workflow-actions-triggers.md#recurrence-trigger). * For trigger type of `cron`, use standard cron syntax:
identity:
# [Studio](#tab/azure-studio)
-You can create compute instance with managed identity from Azure Machine Learning Studio:
+You can create compute instance with managed identity from Azure Machine Learning studio:
1. Fill out the form to [create a new compute instance](?tabs=azure-studio#create). 1. Select **Next: Advanced Settings**.
You can create compute instance with managed identity from Azure Machine Learnin
# [Studio (preview)](#tab/azure-studio-preview)
-You can create compute instance with managed identity from Azure Machine Learning Studio:
+You can create compute instance with managed identity from Azure Machine Learning studio:
1. Fill out the form to [create a new compute instance](?tabs=azure-studio-preview#create). 1. Select **Security**.
SSH access is disabled by default. SSH access can't be enabled or disabled afte
### Set up an SSH key later
-Although SSH cannot be enabled or disabled after creation, you do have the option to set up an SSH key later on an SSH-enabled compute instance. This allows you to set up the SSH key post-creation. To do this, select to enable SSH on your compute instance, and select to "Set up an SSH key later" as the SSH public key source. After the compute instance is created, you can visit the Details page of your compute instance and click to edit your SSH keys. From there, you will be able to add your SSH key.
+Although SSH can't be enabled or disabled after creation, you can set up an SSH key later on an SSH-enabled compute instance. To do so, enable SSH on your compute instance and select **Set up an SSH key later** as the SSH public key source. After the compute instance is created, visit its Details page, select to edit your SSH keys, and add your SSH key.
-An example of a common use case for this is when creating a compute instance on behalf of another user (see [Create on behalf of](#create-on-behalf-of)) When provisioning a compute instance on behalf of another user, you can enable SSH for the new compute instance owner by selecting "Set up an SSH key later". This allows for the new owner of the compute instance to set up their SSH key for their newly owned compute instance once it has been created and assigned to them following the steps above.
+A common use case is creating a compute instance on behalf of another user (see [Create on behalf of](#create-on-behalf-of)). When provisioning a compute instance on behalf of another user, you can enable SSH for the new compute instance owner by selecting __Set up an SSH key later__. This allows the new owner of the compute instance to set up their SSH key for their newly owned compute instance once it's been created and assigned to them, following the previous steps.
### Connect with SSH
Use either Studio or Studio (preview) to see how to set up applications.
# [Azure CLI](#tab/azure-cli)
-USe either Studio or Studio (preview) to see how to set up applications.
+Use either Studio or Studio (preview) to see how to set up applications.
# [Studio](#tab/azure-studio)
RStudio is one of the most popular IDEs among R developers for ML and data scien
To use RStudio, set up a custom application as follows:
-1. Follow the steps listed above to **Add application** when creating your compute instance.
+1. Follow the previous steps to **Add application** when creating your compute instance.
1. Select **Custom Application** on the **Application** dropdown.
1. Configure the **Application name** you would like to use.
1. Set up the application to run on **Target port** `8787` - the Docker image for RStudio open source listed below needs to run on this target port.
To use RStudio, set up a custom application as follows:
Set up other custom applications on your compute instance by providing the application on a Docker image.
-1. Follow the steps listed above to **Add application** when creating your compute instance.
+1. Follow the previous steps to **Add application** when creating your compute instance.
1. Select **Custom Application** on the **Application** dropdown.
1. Configure the **Application name**, the **Target port** you wish to run the application on, the **Published port** you wish to access the application on, and the **Docker image** that contains your application.
1. Optionally, add **Environment variables** you wish to use for your application.
1. Use **Bind mounts** to add access to the files in your default storage account:
    * Specify **/home/azureuser/cloudfiles** for **Host path**.
    * Specify **/home/azureuser/cloudfiles** for the **Container path**.
- * Select **Add** to add this mounting. Because the files are mounted, changes you make to them will be available in other compute instances and applications.
+ * Select **Add** to add this mounting. Because the files are mounted, changes you make to them are available in other compute instances and applications.
1. Select **Create** to set up the custom application on your compute instance. :::image type="content" source="media/how-to-create-compute-instance/custom-service.png" alt-text="Screenshot show custom application settings." lightbox="media/how-to-create-compute-instance/custom-service.png":::
Access the custom applications that you set up in studio:
:::image type="content" source="media/how-to-create-compute-instance/custom-service-access.png" alt-text="Screenshot shows studio access for your custom applications."::: > [!NOTE]
-> It might take a few minutes after setting up a custom application until you can access it via the links above. The amount of time taken will depend on the size of the image used for your custom application. If you see a 502 error message when trying to access the application, wait for some time for the application to be set up and try again.
+> It might take a few minutes after setting up a custom application until you can access it via the links. The amount of time taken depends on the size of the image used for your custom application. If you see a 502 error message when trying to access the application, wait for the application to finish setting up and try again.
## Next steps
machine-learning How To Create Vector Index https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-vector-index.md
After you create a vector index, you can add it to a prompt flow from the prompt
:::image type="content" source="media/how-to-create-vector-index/vector-index-lookup-tool.png" alt-text="Screenshot that shows the Vector Index Lookup tool.":::
-1. Enter the path to your vector index, along with the query that you want to perform against the index.
+1. Enter the path to your vector index. The path is the location of the MLIndex created in the "Create a vector index" section of this tutorial. To find this location, select the desired vector index, select **Details**, and then select **Index Data**. On the **Index data** page, copy the **Datasource URI** value from the **Data sources** section.
+
+1. Enter a query that you want to perform against the index. A query is a question either as plain string or an embedding from the input cell of the previous step. If you choose to enter an embedding, be sure your query is defined in the input section of your prompt flow like the example here:
+
+ :::image type="content" source="media/how-to-create-vector-index/query-example.png" alt-text="Screenshot that shows the Vector Index Lookup tool query.":::
+
+   An example of a plain string you can input in this case would be: `How to use SDK V2?`. Here's an example of an embedding as an input: `${embed_the_question.output}`. Passing a plain string works only when the vector index is used in the workspace that created it.
## Next steps
machine-learning How To Deploy Online Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-online-endpoints.md
endpoint_name = "endpt-" + datetime.datetime.now().strftime("%m%d%H%M%f")
# create an online endpoint endpoint = ManagedOnlineEndpoint( name = endpoint_name,
- description="this is a sample endpoint"
+ description="this is a sample endpoint",
auth_mode="key" ) ```
machine-learning How To Enable Studio Virtual Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-enable-studio-virtual-network.md
monikerRange: 'azureml-api-2 || azureml-api-1'
# Use Azure Machine Learning studio in an Azure virtual network
+
In this article, you learn how to use Azure Machine Learning studio in a virtual network. The studio includes features like AutoML, the designer, and data labeling. Some of the studio's features are disabled by default in a virtual network. To re-enable these features, you must enable managed identity for storage accounts you intend to use in the studio.
In this article, you learn how to:
### Designer sample pipeline
-There's a known issue where user cannot run sample pipeline in Designer homepage. This is the sample dataset used in the sample pipeline is Azure Global dataset, and it cannot satisfy all virtual network environment.
+There's a known issue where users can't run the sample pipeline on the Designer homepage. This problem occurs because the sample dataset used in the sample pipeline is an Azure Global dataset. It can't be accessed from a virtual network environment.
-To resolve this issue, you can use a public workspace to run sample pipeline to get to know how to use the designer and then replace the sample dataset with your own dataset in the workspace within virtual network.
+To resolve this issue, use a public workspace to run the sample pipeline. Or replace the sample dataset with your own dataset in the workspace within a virtual network.
## Datastore: Azure Storage Account
Use the following steps to enable access to data stored in Azure Blob and File s
> [!TIP] > The first step is not required for the default storage account for the workspace. All other steps are required for *any* storage account behind the VNet and used by the workspace, including the default storage account.
-1. **If the storage account is the *default* storage for your workspace, skip this step**. If it is not the default, __Grant the workspace managed identity the 'Storage Blob Data Reader' role__ for the Azure storage account so that it can read data from blob storage.
+1. **If the storage account is the *default* storage for your workspace, skip this step**. If it isn't the default, __Grant the workspace managed identity the 'Storage Blob Data Reader' role__ for the Azure storage account so that it can read data from blob storage.
For more information, see the [Blob Data Reader](../role-based-access-control/built-in-roles.md#storage-blob-data-reader) built-in role.
Use the following steps to enable access to data stored in Azure Blob and File s
For more information, see the [Reader](../role-based-access-control/built-in-roles.md#reader) built-in role. <a id='enable-managed-identity'></a>
-1. __Enable managed identity authentication for default storage accounts__. Each Azure Machine Learning workspace has two default storage accounts, a default blob storage account and a default file store account, which are defined when you create your workspace. You can also set new defaults in the __Datastore__ management page.
+1. __Enable managed identity authentication for default storage accounts__. Each Azure Machine Learning workspace has two default storage accounts, a default blob storage account and a default file store account. Both are defined when you create your workspace. You can also set new defaults in the __Datastore__ management page.
![Screenshot showing where default datastores can be found](./media/how-to-enable-studio-virtual-network/default-datastores.png)
Use the following steps to enable access to data stored in Azure Blob and File s
|Storage account | Notes | |||
- |Workspace default blob storage| Stores model assets from the designer. Enable managed identity authentication on this storage account to deploy models in the designer. If managed identity authentication is disabled, the user's identity is used to access data stored in the blob. <br> <br> You can visualize and run a designer pipeline if it uses a non-default datastore that has been configured to use managed identity. However, if you try to deploy a trained model without managed identity enabled on the default datastore, deployment will fail regardless of any other datastores in use.|
+ |Workspace default blob storage| Stores model assets from the designer. Enable managed identity authentication on this storage account to deploy models in the designer. If managed identity authentication is disabled, the user's identity is used to access data stored in the blob. <br> <br> You can visualize and run a designer pipeline if it uses a non-default datastore that has been configured to use managed identity. However, if you try to deploy a trained model without managed identity enabled on the default datastore, deployment fails regardless of any other datastores in use.|
|Workspace default file store| Stores AutoML experiment assets. Enable managed identity authentication on this storage account to submit AutoML experiments. | 1. __Configure datastores to use managed identity authentication__. After you add an Azure storage account to your virtual network with either a [service endpoint](how-to-secure-workspace-vnet.md?tabs=se#secure-azure-storage-accounts) or [private endpoint](how-to-secure-workspace-vnet.md?tabs=pe#secure-azure-storage-accounts), you must configure your datastore to use [managed identity](../active-directory/managed-identities-azure-resources/overview.md) authentication. Doing so lets the studio access data in your storage account.
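As an SDK v2 illustration, a datastore registered without saved credentials uses identity-based access instead of an account key; the account and container names below are hypothetical, and `ml_client` is assumed to be an authenticated `MLClient`:

```python
from azure.ai.ml.entities import AzureBlobDatastore

# A datastore created without credentials falls back to identity-based data access.
blob_datastore = AzureBlobDatastore(
    name="secure_blob_datastore",     # hypothetical datastore name
    account_name="mystorageaccount",  # hypothetical storage account
    container_name="mycontainer",     # hypothetical container
)
ml_client.create_or_update(blob_datastore)
```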
After you create a SQL contained user, grant permissions to it by using the [GRA
## Intermediate component output
-When using the Azure Machine Learning designer intermediate component output, you can specify the output location for any component in the designer. Use this to store intermediate datasets in separate location for security, logging, or auditing purposes. To specify output, use the following steps:
+When using the Azure Machine Learning designer intermediate component output, you can specify the output location for any component in the designer. Use this output to store intermediate datasets in a separate location for security, logging, or auditing purposes. To specify output, use the following steps:
1. Select the component whose output you'd like to specify. 1. In the component settings pane that appears to the right, select __Output settings__. 1. Specify the datastore you want to use for each component output.
-Make sure that you have access to the intermediate storage accounts in your virtual network. Otherwise, the pipeline will fail.
+Make sure that you have access to the intermediate storage accounts in your virtual network. Otherwise, the pipeline fails.
[Enable managed identity authentication](#enable-managed-identity) for intermediate storage accounts to visualize output data. ## Access the studio from a resource inside the VNet
-If you are accessing the studio from a resource inside of a virtual network (for example, a compute instance or virtual machine), you must allow outbound traffic from the virtual network to the studio.
+If you're accessing the studio from a resource inside of a virtual network (for example, a compute instance or virtual machine), you must allow outbound traffic from the virtual network to the studio.
-For example, if you are using network security groups (NSG) to restrict outbound traffic, add a rule to a __service tag__ destination of __AzureFrontDoor.Frontend__.
+For example, if you're using network security groups (NSG) to restrict outbound traffic, add a rule to a __service tag__ destination of __AzureFrontDoor.Frontend__.
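For illustration, such a rule might be created with the Azure CLI as in the following sketch; the NSG name, resource group, and priority value are assumptions:

```azurecli
az network nsg rule create --resource-group my-rg --nsg-name my-nsg \
    --name AllowAzureFrontDoorFrontend --priority 500 --direction Outbound \
    --access Allow --protocol Tcp --destination-port-ranges 443 \
    --destination-address-prefixes AzureFrontDoor.Frontend
```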
## Firewall settings
-Some storage services, such as Azure Storage Account, have firewall settings that apply to the public endpoint for that specific service instance. Usually this setting allows you to allow/disallow access from specific IP addresses from the public internet. __This is not supported__ when using Azure Machine Learning studio. It is supported when using the Azure Machine Learning SDK or CLI.
+Some storage services, such as Azure Storage Account, have firewall settings that apply to the public endpoint for that specific service instance. Usually this setting allows you to allow/disallow access from specific IP addresses from the public internet. __This is not supported__ when using Azure Machine Learning studio. It's supported when using the Azure Machine Learning SDK or CLI.
> [!TIP] > Azure Machine Learning studio is supported when using the Azure Firewall service. For more information, see [Use your workspace behind a firewall](how-to-access-azureml-behind-firewall.md).
machine-learning How To Github Actions Machine Learning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-github-actions-machine-learning.md
GitHub Actions uses a workflow YAML (.yml) file in the `/.github/workflows/` pat
* A GitHub account. If you don't have one, sign up for [free](https://github.com/join).
-## Step 1. Get the code
+## Step 1: Get the code
Fork the following repo at GitHub:
Fork the following repo at GitHub:
https://github.com/azure/azureml-examples ```
-## Step 2. Authenticate with Azure
+## Step 2: Authenticate with Azure
You'll need to first define how to authenticate with Azure. You can use a [service principal](../active-directory/develop/app-objects-and-service-principals.md#service-principal-object) or [OpenID Connect](https://docs.github.com/en/actions/deployment/security-hardening-your-deployments/about-security-hardening-with-openid-connect).
You'll need to first define how to authenticate with Azure. You can use a [servi
[!INCLUDE [include](~/articles/reusable-content/github-actions/create-secrets-with-openid.md)]
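In the workflow itself, the sign-in step then typically looks like the following sketch. The `AZURE_CREDENTIALS` secret name is an assumption; with OpenID Connect, you'd pass `client-id`, `tenant-id`, and `subscription-id` inputs instead of `creds`:

```yaml
# Sign in to Azure with a service principal stored as a repository secret.
- name: Azure login
  uses: azure/login@v1
  with:
    creds: ${{ secrets.AZURE_CREDENTIALS }}
```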
-## Step 3. Update `setup.sh` to connect to your Azure Machine Learning workspace
+## Step 3: Update `setup.sh` to connect to your Azure Machine Learning workspace
You'll need to update the CLI setup file variables to match your workspace.
You'll need to update the CLI setup file variables to match your workspace.
|LOCATION | Location of your workspace (example: `eastus2`) | |WORKSPACE | Name of Azure Machine Learning workspace |
-## Step 4. Update `pipeline.yml` with your compute cluster name
+## Step 4: Update `pipeline.yml` with your compute cluster name
You'll use a `pipeline.yml` file to deploy your Azure Machine Learning pipeline. This is a machine learning pipeline and not a DevOps pipeline. You only need to make this update if you're using a name other than `cpu-cluster` for your compute cluster name.
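For reference, the cluster is referenced in the machine learning pipeline YAML with an `azureml:` prefix. This is a hypothetical excerpt, not the repo's actual file contents:

```yaml
# Hypothetical excerpt from pipeline.yml: each job targets the compute cluster by name.
jobs:
  train_job:
    type: command
    compute: azureml:cpu-cluster  # replace if your cluster uses a different name
```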
machine-learning How To Manage Kubernetes Instance Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-kubernetes-instance-types.md
Title: Create and manage instance types for efficient compute resource utilization
-description: Learn about what is instance types, and how to create and manage them, and what are benefits of using instance types
+ Title: Create and manage instance types for efficient utilization of compute resources
+description: Learn about what instance types are, how to create and manage them, and what the benefits of using them are.
-# Create and manage instance types for efficient compute resource utilization
+# Create and manage instance types for efficient utilization of compute resources
-## What are instance types?
+Instance types are an Azure Machine Learning concept that allows targeting certain types of compute nodes for training and inference workloads. For an Azure virtual machine, an example of an instance type is `STANDARD_D2_V3`.
-Instance types are an Azure Machine Learning concept that allows targeting certain types of compute nodes for training and inference workloads. For an Azure VM, an example for an instance type is `STANDARD_D2_V3`.
+In Kubernetes clusters, instance types are represented in a custom resource definition (CRD) that's installed with the Azure Machine Learning extension. Two elements in the Azure Machine Learning extension represent the instance types:
-In Kubernetes clusters, instance types are represented in a custom resource definition (CRD) that is installed with the Azure Machine Learning extension. Two elements in Azure Machine Learning extension represent the instance types:
-[nodeSelector](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector)
-and [resources](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/).
+- Use [nodeSelector](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector) to specify which node a pod should run on. The node must have a corresponding label.
+- In the [resources](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/) section, you can set the compute resources (CPU, memory, and NVIDIA GPU) for the pod.
-In short, a `nodeSelector` lets you specify which node a pod should run on. The node must have a corresponding label. In the `resources` section, you can set the compute resources (CPU, memory and NVIDIA GPU) for the pod.
+If you [specify a nodeSelector field when deploying the Azure Machine Learning extension](./how-to-deploy-kubernetes-extension.md#review-azure-machine-learning-extension-configuration-settings), the `nodeSelector` field will be applied to all instance types. This means that:
->[!IMPORTANT]
->
-> If you have [specified a nodeSelector when deploying the Azure Machine Learning extension](./how-to-deploy-kubernetes-extension.md#review-azure-machine-learning-extension-configuration-settings), the nodeSelector will be applied to all instance types. This means that:
-> - For each instance type creating, the specified nodeSelector should be a subset of the extension-specified nodeSelector.
-> - If you use an instance type **with nodeSelector**, the workload will run on any node matching both the extension-specified nodeSelector and the instance type-specified nodeSelector.
-> - If you use an instance type **without a nodeSelector**, the workload will run on any node mathcing the extension-specified nodeSelector.
+- For each instance type that you create, the specified `nodeSelector` field should be a subset of the extension-specified `nodeSelector` field.
+- If you use an instance type with `nodeSelector`, the workload will run on any node that matches both the extension-specified `nodeSelector` field and the instance-type-specified `nodeSelector` field.
+- If you use an instance type without a `nodeSelector` field, the workload will run on any node that matches the extension-specified `nodeSelector` field.
+## Create a default instance type
-## Default instance type
-
-By default, a `defaultinstancetype` with the following definition is created when you attach a Kubernetes cluster to an Azure Machine Learning workspace:
-- If you don't apply a `nodeSelector`, it means the pod can get scheduled on any node.-- The workload's pods are assigned default resources with 0.1 cpu cores, 2-GB memory and 0 GPU for request.-- The resources used by the workload's pods are limited to 2 cpu cores and 8-GB memory:
+By default, an instance type called `defaultinstancetype` is created when you attach a Kubernetes cluster to an Azure Machine Learning workspace. Here's the definition:
```yaml
resources:
  requests:
    cpu: "100m"
    memory: "2Gi"
  limits:
    cpu: "2"
    memory: "8Gi"
    nvidia.com/gpu: null
```
-> [!NOTE]
-> - The default instance type purposefully uses little resources. To ensure all ML workloads run with appropriate resources, for example GPU resource, it is highly recommended to create custom instance types.
-> - `defaultinstancetype` will not appear as an InstanceType custom resource in the cluster when running the command ```kubectl get instancetype```, but it will appear in all clients (UI, CLI, SDK).
-> - `defaultinstancetype` can be overridden with a custom instance type definition having the same name as `defaultinstancetype` (see [Create custom instance types](#create-custom-instance-types) section)
+If you don't apply a `nodeSelector` field, the pod can be scheduled on any node. The workload's pods are assigned default resources with 0.1 CPU cores, 2 GB of memory, and 0 GPUs for the request. The resources that the workload's pods use are limited to 2 CPU cores and 8 GB of memory.
+
+The default instance type purposefully uses few resources. To ensure that all machine learning workloads run with appropriate resources (for example, GPU resource), we highly recommend that you [create custom instance types](#create-a-custom-instance-type).
+
+Keep in mind the following points about the default instance type:
+
+- `defaultinstancetype` doesn't appear as an `InstanceType` custom resource in the cluster when you're running the command `kubectl get instancetype`, but it does appear in all clients (UI, Azure CLI, SDK).
+- `defaultinstancetype` can be overridden with the definition of a custom instance type that has the same name.
-### Create custom instance types
+## Create a custom instance type
-To create a new instance type, create a new custom resource for the instance type CRD. For example:
+To create a new instance type, create a new custom resource for the instance type CRD. For example:
```bash
kubectl apply -f my_instance_type.yaml
```
-With `my_instance_type.yaml`:
+Here are the contents of *my_instance_type.yaml*:
```yaml
apiVersion: amlarc.azureml.com/v1alpha1
kind: InstanceType
metadata:
  name: myinstancetypename
spec:
  nodeSelector:
    mylabel: mylabelvalue
  resources:
    limits:
      cpu: "1"
      nvidia.com/gpu: 1
      memory: "2Gi"
    requests:
      cpu: "700m"
      memory: "1500Mi"
```
-The following steps create an instance type with the labeled behavior:
-- Pods are scheduled only on nodes with label `mylabel: mylabelvalue`.-- Pods are assigned resource requests of `700m` CPU and `1500Mi` memory.-- Pods are assigned resource limits of `1` CPU, `2Gi` memory and `1` NVIDIA GPU.
+The preceding code creates an instance type with the labeled behavior:
-Creation of custom instance types must meet the following parameters and definition rules, otherwise the instance type creation fails:
+- Pods are scheduled only on nodes that have the label `mylabel: mylabelvalue`.
+- Pods are assigned resource requests of `700m` for CPU and `1500Mi` for memory.
+- Pods are assigned resource limits of `1` for CPU, `2Gi` for memory, and `1` for NVIDIA GPU.
-| Parameter | Required | Description |
-| | | |
-| name | required | String values, which must be unique in cluster.|
-| CPU request | required | String values, which cannot be 0 or empty. <br>You can specify the CPU in millicores; for example, `100m`. You can also specify it as full numbers; for example, `"1"` is equivalent to `1000m`.|
-| Memory request | required | String values, which cannot be 0 or empty. <br>You can specify the memory as a full number + suffix; for example, `1024Mi` for 1024 MiB.|
-| CPU limit | required | String values, which cannot be 0 or empty. <br>You can specify the CPU in millicores; for example, `100m`. You can also specify it as full numbers; for example, `"1"` is equivalent to `1000m`.|
-| Memory limit | required | String values, which cannot be 0 or empty. <br>You can specify the memory as a full number + suffix; for example, `1024Mi` for 1024 MiB.|
-| GPU | optional | Integer values, which can only be specified in the `limits` section. <br>For more information, see the Kubernetes [documentation](https://kubernetes.io/docs/tasks/manage-gpus/scheduling-gpus/#using-device-plugins). |
-| nodeSelector | optional | Map of string keys and values. |
+Creation of a custom instance type must meet the following parameter and definition rules, or it will fail:
+| Parameter | Required or optional | Description |
+| | | |
+| `name` | Required | String values, which must be unique in a cluster.|
+| `CPU request` | Required | String values, which can't be zero or empty. <br>You can specify the CPU in millicores; for example, `100m`. You can also specify it as full numbers. For example, `"1"` is equivalent to `1000m`.|
+| `Memory request` | Required | String values, which can't be zero or empty. <br>You can specify the memory as a full number + suffix; for example, `1024Mi` for 1,024 mebibytes (MiB).|
+| `CPU limit` | Required | String values, which can't be zero or empty. <br>You can specify the CPU in millicores; for example, `100m`. You can also specify it as full numbers. For example, `"1"` is equivalent to `1000m`.|
+| `Memory limit` | Required | String values, which can't be zero or empty. <br>You can specify the memory as a full number + suffix; for example, `1024Mi` for 1024 MiB.|
+| `GPU` | Optional | Integer values, which can be specified only in the `limits` section. <br>For more information, see the [Kubernetes documentation](https://kubernetes.io/docs/tasks/manage-gpus/scheduling-gpus/#using-device-plugins). |
+| `nodeSelector` | Optional | Map of string keys and values. |
It's also possible to create multiple instance types at once:
It's also possible to create multiple instance types at once:
```bash
kubectl apply -f my_instance_type_list.yaml
```
-With `my_instance_type_list.yaml`:
+Here are the contents of *my_instance_type_list.yaml*:
+```yaml
+apiVersion: amlarc.azureml.com/v1alpha1
+kind: InstanceTypeList
items:
memory: "1Gi" ```
-The above example creates two instance types: `cpusmall` and `defaultinstancetype`. This `defaultinstancetype` definition overrides the `defaultinstancetype` definition created when Kubernetes cluster was attached to Azure Machine Learning workspace.
+The preceding example creates two instance types: `cpusmall` and `defaultinstancetype`. This `defaultinstancetype` definition overrides the `defaultinstancetype` definition that was created when you attached the Kubernetes cluster to the Azure Machine Learning workspace.
-If you submit a training or inference workload without an instance type, it uses the `defaultinstancetype`. To specify a default instance type for a Kubernetes cluster, create an instance type with name `defaultinstancetype`. It's automatically recognized as the default.
+If you submit a training or inference workload without an instance type, it uses `defaultinstancetype`. To specify a default instance type for a Kubernetes cluster, create an instance type with the name `defaultinstancetype`. It's automatically recognized as the default.
+## Select an instance type to submit a training job
-### Select instance type to submit training job
+### [Azure CLI](#tab/select-instancetype-to-trainingjob-with-cli)
-#### [Azure CLI](#tab/select-instancetype-to-trainingjob-with-cli)
-
-To select an instance type for a training job using CLI (V2), specify its name as part of the
-`resources` properties section in job YAML. For example:
+To select an instance type for a training job by using the Azure CLI (v2), specify its name as part of the
+`resources` properties section in the job YAML. For example:
```yaml command: python -c "print('Hello world!')"
environment:
image: library/python:latest compute: azureml:<Kubernetes-compute_target_name> resources:
- instance_type: <instance_type_name>
+ instance_type: <instance type name>
```
-#### [Python SDK](#tab/select-instancetype-to-trainingjob-with-sdk)
+### [Python SDK](#tab/select-instancetype-to-trainingjob-with-sdk)
-To select an instance type for a training job using SDK (V2), specify its name for `instance_type` property in `command` class. For example:
+To select an instance type for a training job by using the SDK (v2), specify its name for the `instance_type` property in the `command` class. For example:
```python from azure.ai.ml import command
command_job = command(
command="python -c "print('Hello world!')"", environment="AzureML-lightgbm-3.2-ubuntu18.04-py37-cpu@latest", compute="<Kubernetes-compute_target_name>",
- instance_type="<instance_type_name>"
+ instance_type="<instance type name>"
) ```+
-In the above example, replace `<Kubernetes-compute_target_name>` with the name of your Kubernetes compute
-target and replace `<instance_type_name>` with the name of the instance type you wish to select. If there's no `instance_type` property specified, the system uses `defaultinstancetype` to submit the job.
+In the preceding example, replace `<Kubernetes-compute_target_name>` with the name of your Kubernetes compute target. Replace `<instance type name>` with the name of the instance type that you want to select. If you don't specify an `instance_type` property, the system uses `defaultinstancetype` to submit the job.
-### Select instance type to deploy model
+## Select an instance type to deploy a model
-#### [Azure CLI](#tab/select-instancetype-to-modeldeployment-with-cli)
+### [Azure CLI](#tab/select-instancetype-to-modeldeployment-with-cli)
-To select an instance type for a model deployment using CLI (V2), specify its name for the `instance_type` property in the deployment YAML. For example:
+To select an instance type for a model deployment by using the Azure CLI (v2), specify its name for the `instance_type` property in the deployment YAML. For example:
```yaml name: blue
environment:
image: mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04:latest ```
-#### [Python SDK](#tab/select-instancetype-to-modeldeployment-with-sdk)
+### [Python SDK](#tab/select-instancetype-to-modeldeployment-with-sdk)
-To select an instance type for a model deployment using SDK (V2), specify its name for the `instance_type` property in the `KubernetesOnlineDeployment` class. For example:
+To select an instance type for a model deployment by using the SDK (v2), specify its name for the `instance_type` property in the `KubernetesOnlineDeployment` class. For example:
```python from azure.ai.ml import KubernetesOnlineDeployment,Model,Environment,CodeConfiguration
blue_deployment = KubernetesOnlineDeployment(
instance_type="<instance type name>", ) ```+
-In the above example, replace `<instance_type_name>` with the name of the instance type you wish to select. If there's no `instance_type` property specified, the system uses `defaultinstancetype` to deploy the model.
+In the preceding example, replace `<instance type name>` with the name of the instance type that you want to select. If you don't specify an `instance_type` property, the system uses `defaultinstancetype` to deploy the model.
> [!IMPORTANT]
->
-> For MLFlow model deployment, the resource request require at least 2 CPU and 4 GB memory, otherwise the deployment will fail.
+> For MLflow model deployment, the resource request requires at least 2 CPU cores and 4 GB of memory. Otherwise, the deployment will fail.
+
+### Resource section validation
-#### Resource section validation
-If you're using the `resource section` to define the resource request and limit of your model deployments, for example:
+You can use the `resources` section to define the resource request and limit of your model deployments. For example:
#### [Azure CLI](#tab/define-resource-to-modeldeployment-with-cli)
resources:
memory: "0.5Gi" instance_type: <instance type name> ```+ #### [Python SDK](#tab/define-resource-to-modeldeployment-with-sdk) ```python
blue_deployment = KubernetesOnlineDeployment(
instance_type="<instance type name>", ) ```+
-If you use the `resource section`, the valid resource definition need to meet the following rules, otherwise the model deployment fails due to the invalid resource definition:
+If you use the `resources` section, a valid resource definition needs to meet the following rules. An invalid resource definition will cause the model deployment to fail.
-| Parameter | If necessary | Description |
+| Parameter | Required or optional | Description |
| | | |
-| `requests:`<br>`cpu:`| Required | String values, which can't be 0 or empty. <br>You can specify the CPU in millicores, for example `100m`, or in full numbers, for example `"1"` is equivalent to `1000m`.|
-| `requests:`<br>`memory:` | Required | String values, which can't be 0 or empty. <br>You can specify the memory as a full number + suffix, for example `1024Mi` for 1024 MiB. <br>Memory can't be less than **1 MBytes**.|
-| `limits:`<br>`cpu:` | Optional <br>(only required when need GPU) | String values, which can't be 0 or empty. <br>You can specify the CPU in millicores, for example `100m`, or in full numbers, for example `"1"` is equivalent to `1000m`. |
-| `limits:`<br>`memory:` | Optional <br>(only required when need GPU) | String values, which can't be 0 or empty. <br>You can specify the memory as a full number + suffix, for example `1024Mi` for 1024 MiB.|
-| `limits:`<br>`nvidia.com/gpu:` | Optional <br>(only required when need GPU) | Integer values, which can't be empty and can only be specified in the `limits` section. <br>For more information, see the Kubernetes [documentation](https://kubernetes.io/docs/tasks/manage-gpus/scheduling-gpus/#using-device-plugins). <br>If require CPU only, you can omit the entire `limits` section.|
-
-> [!NOTE]
->
->If the resource section definition is invalid, the deployment will fail.
->
-> The `instance type` is **required** for model deployment. If you have defined the resource section, and it will be validated against the instance type, the rules are as follows:
- > * With a valid resource section definition, the resource limits must be less than instance type limits, otherwise deployment will fail.
- > * If the user does not define instance type, the `defaultinstancetype` will be used to be validated with resource section.
- > * If the user does not define resource section, the instance type will be used to create deployment.
+| `requests:`<br>`cpu:`| Required | String values, which can't be zero or empty. <br>You can specify the CPU in millicores; for example, `100m`. You can also specify it in full numbers. For example, `"1"` is equivalent to `1000m`.|
+| `requests:`<br>`memory:` | Required | String values, which can't be zero or empty. <br>You can specify the memory as a full number + suffix; for example, `1024Mi` for 1024 MiB. <br>Memory can't be less than 1 MB.|
+| `limits:`<br>`cpu:` | Optional <br>(required only when you need GPU) | String values, which can't be zero or empty. <br>You can specify the CPU in millicores; for example `100m`. You can also specify it in full numbers. For example, `"1"` is equivalent to `1000m`. |
+| `limits:`<br>`memory:` | Optional <br>(required only when you need GPU) | String values, which can't be zero or empty. <br>You can specify the memory as a full number + suffix; for example, `1024Mi` for 1,024 MiB.|
+| `limits:`<br>`nvidia.com/gpu:` | Optional <br>(required only when you need GPU) | Integer values, which can't be empty and can be specified only in the `limits` section. <br>For more information, see the [Kubernetes documentation](https://kubernetes.io/docs/tasks/manage-gpus/scheduling-gpus/#using-device-plugins). <br>If you require CPU only, you can omit the entire `limits` section.|
+
+The instance type is *required* for model deployment. If you define the `resources` section, it's validated against the instance type according to the following rules:
+- With a valid `resource` section definition, the resource limits must be less than the instance type limits. Otherwise, deployment will fail.
+- If you don't define an instance type, the system uses `defaultinstancetype` for validation with the `resources` section.
+- If you don't define the `resources` section, the system uses the instance type to create the deployment.
## Next steps - [Azure Machine Learning inference router and connectivity requirements](./how-to-kubernetes-inference-routing-azureml-fe.md)-- [Secure AKS inferencing environment](./how-to-secure-kubernetes-inferencing-environment.md)
+- [Secure Azure Kubernetes Service inferencing environment](./how-to-secure-kubernetes-inferencing-environment.md)
machine-learning How To Manage Optimize Cost https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-optimize-cost.md
Previously updated : 06/08/2021 Last updated : 08/01/2023 # Manage and optimize Azure Machine Learning costs
For information on planning and monitoring costs, see the [plan to manage costs
With constantly changing data, you need fast and streamlined model training and retraining to maintain accurate models. However, continuous training comes at a cost, especially for deep learning models on GPUs.
-Azure Machine Learning users can use the managed Azure Machine Learning compute cluster, also called AmlCompute. AmlCompute supports a variety of GPU and CPU options. The AmlCompute is internally hosted on behalf of your subscription by Azure Machine Learning. It provides the same enterprise grade security, compliance and governance at Azure IaaS cloud scale.
+Azure Machine Learning users can use the managed Azure Machine Learning compute cluster, also called AmlCompute. AmlCompute supports various GPU and CPU options. AmlCompute is internally hosted on behalf of your subscription by Azure Machine Learning. It provides the same enterprise-grade security, compliance, and governance at Azure IaaS cloud scale.
Because these compute pools are inside of Azure's IaaS infrastructure, you can deploy, scale, and manage your training with the same security and compliance requirements as the rest of your infrastructure. These deployments occur in your subscription and obey your governance rules. Learn more about [Azure Machine Learning compute](how-to-create-attach-compute-cluster.md).
Because these compute pools are inside of Azure's IaaS infrastructure, you can d
Autoscaling clusters based on the requirements of your workload helps reduce your costs so you only use what you need.
-AmlCompute clusters are designed to scale dynamically based on your workload. The cluster can be scaled up to the maximum number of nodes you configure. As each job completes, the cluster will release nodes and scale to your configured minimum node count.
+AmlCompute clusters are designed to scale dynamically based on your workload. The cluster can be scaled up to the maximum number of nodes you configure. As each job completes, the cluster releases nodes and scales to your configured minimum node count.
[!INCLUDE [min-nodes-note](includes/machine-learning-min-nodes.md)]
To set quotas at the workspace level, start in the [Azure portal](https://portal
## Set job autotermination policies
-In some cases, you should configure your training runs to limit their duration or terminate them early. For example, when you are using Azure Machine Learning's built-in hyperparameter tuning or automated machine learning.
+In some cases, you should configure your training runs to limit their duration or terminate them early. For example, when you're using Azure Machine Learning's built-in hyperparameter tuning or automated machine learning.
Here are a few options that you have:

* Define a parameter called `max_run_duration_seconds` in your RunConfiguration to control the maximum duration a run can extend to on the compute you choose (either local or remote cloud compute), as sketched after this list.
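A minimal sketch of that first option with the v1 Python SDK (where `RunConfiguration` lives); the script and experiment names are hypothetical:

```python
from azureml.core import Experiment, ScriptRunConfig, Workspace

ws = Workspace.from_config()
src = ScriptRunConfig(source_directory=".", script="train.py")

# Cancel the run automatically if it exceeds one hour.
src.run_config.max_run_duration_seconds = 3600

run = Experiment(ws, "capped-experiment").submit(src)
```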
Low-Priority VMs have a single quota separate from the dedicated quota value, wh
## Schedule compute instances
-When you create a [compute instance](concept-compute-instance.md), the VM stays on so it is available for your work.
+When you create a [compute instance](concept-compute-instance.md), the VM stays on so it's available for your work.
* [Enable idle shutdown (preview)](how-to-create-compute-instance.md#configure-idle-shutdown) to save on cost when the VM has been idle for a specified time period. * Or [set up a schedule](how-to-create-compute-instance.md#schedule-automatic-start-and-stop) to automatically start and stop the compute instance (preview) to save cost when you aren't planning to use it.
When you create a [compute instance](concept-compute-instance.md), the VM stays
Another way to save money on compute resources is Azure Reserved VM Instance. With this offering, you commit to one-year or three-year terms. These discounts range up to 72% of the pay-as-you-go prices and are applied directly to your monthly Azure bill.
-Azure Machine Learning Compute supports reserved instances inherently. If you purchase a one-year or three-year reserved instance, we will automatically apply discount against your Azure Machine Learning managed compute.
+Azure Machine Learning Compute supports reserved instances inherently. If you purchase a one-year or three-year reserved instance, we automatically apply the discount against your Azure Machine Learning managed compute.
## Parallelize training
-One of the key methods of optimizing cost and performance is by parallelizing the workload with the help of a parallel component in Azure Machine Learning. A parallel component allows you to use many smaller nodes to execute the task in parallel, hence allowing you to scale horizontally. There is an overhead for parallelization. Depending on the workload and the degree of parallelism that can be achieved, this may or may not be an option. For further information, see the [ParallelComponent](/python/api/azure-ai-ml/azure.ai.ml.entities.parallelcomponent) documentation.
+One of the key methods of optimizing cost and performance is to parallelize the workload with the help of a parallel component in Azure Machine Learning. A parallel component allows you to use many smaller nodes to execute the task in parallel, allowing you to scale horizontally. There's an overhead for parallelization. Depending on the workload and the degree of parallelism that can be achieved, this may or may not be an option. For more information, see the [ParallelComponent](/python/api/azure-ai-ml/azure.ai.ml.entities.parallelcomponent) documentation.
## Set data retention & deletion policies
machine-learning How To Manage Registries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-registries.md
Previously updated : 05/23/2023 Last updated : 08/24/2023
The response should provide an access token good for one hour. Make note of the
``` To create a registry, use the following command. You can edit the JSON to change the inputs as needed. Replace the `<YOUR-ACCESS-TOKEN>` value with the access token retrieved previously:
-
+
+> [!TIP]
+> We recommend using the latest API version when working with the REST API. For a list of the current REST API versions for Azure Machine Learning, see the [Machine Learning REST API reference](/rest/api/azureml/). The current API versions are listed in the table of contents on the left side of the page.
+
```bash
-curl -X PUT https://management.azure.com/subscriptions/<your-subscription-id>/resourceGroups/<your-resource-group>/providers/Microsoft.MachineLearningServices/registries/reg-from-rest?api-version=2022-12-01-preview -H "Authorization:Bearer <YOUR-ACCESS-TOKEN>" -H 'Content-Type: application/json' -d '
+curl -X PUT https://management.azure.com/subscriptions/<your-subscription-id>/resourceGroups/<your-resource-group>/providers/Microsoft.MachineLearningServices/registries/reg-from-rest?api-version=2023-04-01 -H "Authorization:Bearer <YOUR-ACCESS-TOKEN>" -H 'Content-Type: application/json' -d '
{ "properties": {
Decide if you want to allow users to only use assets (models, environments and c
### Allow users to use assets from the registry
-To let a user only read assets, you can grant the user the built-in __Reader__ role. If don't want to use the built-in role, create a custom role with the following permissions
+To let a user only read assets, you can grant the user the built-in __Reader__ role. If you don't want to use the built-in role, create a custom role with the following permissions:
Permission | Description --|--
machine-learning How To Manage Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-workspace.md
Last updated 09/21/2022 -+ # Manage Azure Machine Learning workspaces in the portal or with the Python SDK (v2)
As your needs change or requirements for automation increase you can also manage
[!INCLUDE [register-namespace](includes/machine-learning-register-namespace.md)]
-* If you're using Azure Container Registry (ACR), Storage Account, Key Vault, or Application Insights in the different subscription than the workspace, you can't use network isolation with managed online endpoints. If you want to use network isolation with managed online endpoints, you must have ACR, Storage Account, Key Vault, and Application Insights in the same subscription with the workspace. For limitations that apply to network isolation with managed online endpoints, see [How to secure online endpoint](how-to-secure-online-endpoint.md#limitations).
+* When you use network isolation that is based on a workspace's managed virtual network (preview) with a deployment, you can use resources (Azure Container Registry (ACR), Storage account, Key Vault, and Application Insights) from a different resource group or subscription than that of your workspace. However, these resources must belong to the same tenant as your workspace. For limitations that apply to securing managed online endpoints using a workspace's managed virtual network, see [Network isolation with managed online endpoints](concept-secure-online-endpoint.md#limitations).
* By default, creating a workspace also creates an Azure Container Registry (ACR). Since ACR doesn't currently support unicode characters in resource group names, use a resource group that doesn't contain these characters.
machine-learning How To Managed Network Compute https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-managed-network-compute.md
+
+ Title: Managed computes in managed virtual network isolation (preview)
+
+description: Use managed compute resources with managed virtual network isolation with Azure Machine Learning.
++++++ Last updated : 08/22/2023+++
+# Use managed compute in a managed virtual network
+
+Learn how to configure compute clusters or compute instances in an Azure Machine Learning managed virtual network (preview).
+
+When using a managed network, compute resources managed by Azure Machine Learning can participate in the virtual network. Azure Machine Learning _compute clusters_, _compute instances_, and _managed online endpoints_ are created in the managed network.
+
+This article focuses on configuring compute clusters and compute instances in a managed network. For information on managed online endpoints, see [secure online endpoints with network isolation](how-to-secure-online-endpoint.md).
+
+> [!IMPORTANT]
+> If you plan on using serverless _Spark_ jobs, see the [managed virtual network](how-to-managed-network.md) article for configuration information. These steps must be followed when configuring the managed virtual network.
+
+## Prerequisites
+
+Before following the steps in this article, make sure you have the following prerequisites:
+
+# [Azure CLI](#tab/azure-cli)
+
+* An Azure Machine Learning workspace configured to use a [managed virtual network](how-to-managed-network.md).
+
+* The [Azure CLI](/cli/azure/) and the `ml` extension to the Azure CLI. For more information, see [Install, set up, and use the CLI (v2)](how-to-configure-cli.md).
+
+ >[!TIP]
+ > Azure Machine Learning managed virtual network was introduced on May 23rd, 2023. If you have an older version of the ml extension, you may need to update it for the examples in this article to work. To update the extension, use the following Azure CLI command:
+ >
+ > ```azurecli
+ > az extension update -n ml
+ > ```
+
+* The CLI examples in this article assume that you're using the Bash (or compatible) shell. For example, from a Linux system or [Windows Subsystem for Linux](/windows/wsl/about).
+
+* The Azure CLI examples in this article use `ws` to represent the name of the workspace, and `rg` to represent the name of the resource group. Change these values as needed when using the commands with your Azure subscription.
+
+# [Python SDK](#tab/python)
+
+* An Azure Machine Learning workspace configured to use a [managed virtual network](how-to-managed-network.md).
+
+* The Azure Machine Learning Python SDK v2. For more information on the SDK, see [Install the Python SDK v2 for Azure Machine Learning](/python/api/overview/azure/ai-ml-readme).
+
+ > [!TIP]
+ > Azure Machine Learning managed virtual network was introduced on May 23rd, 2023. If you have an older version of the SDK installed, you may need to update it for the examples in this article to work. To update the SDK, use the following command:
+ >
+ > ```bash
+ > pip install --upgrade azure-ai-ml azure-identity
+ > ```
+
+* The examples in this article assume that your code begins with the following Python. This code imports the classes required when creating a workspace with managed VNet, sets variables for your Azure subscription and resource group, and creates the `ml_client`:
+
+ ```python
+ from azure.ai.ml import MLClient
+ from azure.ai.ml.entities import (
+ Workspace,
+ ManagedNetwork,
+ IsolationMode,
+ ServiceTagDestination,
+ PrivateEndpointDestination,
+ FqdnDestination
+ )
+ from azure.identity import DefaultAzureCredential
+
+ # Replace with the values for your Azure subscription and resource group.
+ subscription_id = "<SUBSCRIPTION_ID>"
+ resource_group = "<RESOURCE_GROUP>"
+
+ # get a handle to the subscription
+ ml_client = MLClient(DefaultAzureCredential(), subscription_id, resource_group)
+ ```
+
+# [Studio](#tab/studio)
+
+* An Azure Machine Learning workspace configured to use a [managed virtual network](how-to-managed-network.md).
+++
+## Configure compute resources
+
+Use the tabs below to learn how to configure compute clusters and compute instances in a managed virtual network:
+
+> [!TIP]
+> When using a managed virtual network, compute clusters and compute instances are automatically created in the managed network. The steps below focus on configuring the compute resources to not use a public IP address.
+
+# [Azure CLI](#tab/azure-cli)
+
+To create a __compute cluster__ with no public IP, use the following command:
+
+```azurecli
+az ml compute create --name cpu-cluster --resource-group rg --workspace-name ws --type AmlCompute --set enable_node_public_ip=False
+```
+
+To create a __compute instance__ with no public IP, use the following command:
+
+```azurecli
+az ml compute create --name myci --resource-group rg --workspace-name ws --type ComputeInstance --set enable_node_public_ip=False
+```
+
+# [Python SDK](#tab/python)
+
+The following Python SDK example shows how to create a compute cluster and compute instance with no public IP:
+
+```python
+from azure.ai.ml.entities import AmlCompute
+
+# Create a compute cluster
+compute_cluster = AmlCompute(
+    name="mycomputecluster",
+ size="STANDARD_D2_V2",
+ min_instances=0,
+ max_instances=4,
+ enable_node_public_ip=False
+)
+ml_client.begin_create_or_update(entity=compute_cluster)
+
+# Create a compute instance
+from azure.ai.ml.entities import ComputeInstance
+
+compute_instance = ComputeInstance(
+ name="mycomputeinstance",
+ size="STANDARD_DS3_V2",
+ enable_node_public_ip=False
+)
+ml_client.begin_create_or_update(compute_instance)
+```
+
+# [Studio](#tab/studio)
+
+You can't create a compute cluster or compute instance from the Azure portal. Instead, use the following steps to create these computes from Azure Machine Learning [studio](https://ml.azure.com):
+
+1. From [studio](https://ml.azure.com), select your workspace.
+1. Select the __Compute__ page from the left navigation bar.
+1. Select __+ New__ from the navigation bar for _compute instance_ or _compute cluster_.
+1. Configure the VM size and configuration you need, then select __Next__ until you arrive at the following pages:
+
+ * For a __compute cluster__, use the __Advanced Settings__ page and select the __No Public IP__ option to remove the public IP address.
+
+ :::image type="content" source="./media/how-to-managed-network-compute/compute-cluster-no-public-ip.png" alt-text="A screenshot of how to configure no public IP for compute cluster." lightbox="./media/how-to-managed-network-compute/compute-cluster-no-public-ip.png":::
+
+ * For a __compute instance__, use the __Security__ page and select the __No Public IP__ option to remove the public IP address.
+
+ :::image type="content" source="./media/how-to-managed-network-compute/compute-instance-no-public-ip.png" alt-text="A screenshot of how to configure no public IP for compute instance." lightbox="./media/how-to-managed-network-compute/compute-instance-no-public-ip.png":::
+
+1. Continue with the creation of the compute resource.
+++
+## Limitations
+
+* Creating a compute cluster in a different region than the workspace isn't supported when using a managed virtual network.
+
+### Migration of compute resources
+
+If you have an existing workspace and want to enable managed virtual network for it, there's currently no supported migration path for existing managed compute resources. You'll need to delete all existing managed compute resources and recreate them after enabling the managed virtual network. The following list contains the compute resources that must be deleted and recreated:
+
+* Compute cluster
+* Compute instance
+* Managed online endpoints
+
+## Next steps
+
+* [Managed virtual network isolation](how-to-managed-network.md)
+* [Secure online endpoints with network isolation](how-to-secure-online-endpoint.md)
machine-learning How To Managed Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-managed-network.md
Title: Managed virtual network isolation (Preview)
+ Title: Managed virtual network isolation (preview)
description: Use managed virtual network isolation for network security with Azure Machine Learning.
Previously updated : 07/19/2023 Last updated : 08/22/2023
[!INCLUDE [dev v2](includes/machine-learning-dev-v2.md)]
-Azure Machine Learning provides preview support for managed virtual network (VNet) isolation. Managed VNet isolation streamlines and automates your network isolation configuration with a built-in, workspace-level Azure Machine Learning managed virtual network.
+Azure Machine Learning provides support for managed virtual network (VNet) isolation. Managed VNet isolation streamlines and automates your network isolation configuration with a built-in, workspace-level Azure Machine Learning managed virtual network.
[!INCLUDE [machine-learning-preview-generic-disclaimer](includes/machine-learning-preview-generic-disclaimer.md)] ## Managed virtual network architecture
-When you enable managed virtual network isolation, a managed VNet is created for the workspace. Managed compute resources (compute clusters and compute instances) for the workspace automatically use this managed VNet. The managed VNet can use private endpoints for Azure resources that are used by your workspace, such as Azure Storage, Azure Key Vault, and Azure Container Registry.
-
-The following diagram shows how a managed virtual network uses private endpoints to communicate with the storage, key vault, and container registry used by the workspace.
-
+When you enable managed virtual network isolation, a managed VNet is created for the workspace. Managed compute resources you create for the workspace automatically use this managed VNet. The managed VNet can use private endpoints for Azure resources that are used by your workspace, such as Azure Storage, Azure Key Vault, and Azure Container Registry.
There are two different configuration modes for outbound traffic from the managed virtual network:
There are two different configuration modes for outbound traffic from the manage
| Outbound mode | Description | Scenarios | | -- | -- | -- | | Allow internet outbound | Allow all internet outbound traffic from the managed VNet. | Recommended if you need access to machine learning artifacts on the Internet, such as python packages or pretrained models. |
-| Allow only approved outbound | Outbound traffic is allowed by specifying service tags. | Recommended if you want to minimize the risk of data exfiltration but you will need to prepare all required machine learning artifacts in your private locations. |
+| Allow only approved outbound | Outbound traffic is allowed by specifying service tags. | Recommended if you want to minimize the risk of data exfiltration but you need to prepare all required machine learning artifacts in your private locations. |
+
+The managed virtual network is preconfigured with [required default rules](#list-of-required-rules). It's also configured for private endpoint connections to your workspace default storage, container registry and key vault __if they're configured as private__. After choosing the isolation mode, you only need to consider other outbound requirements you may need to add.
+
+The following diagram shows a managed virtual network configured to __allow internet outbound__:
++
+The following diagram shows a managed virtual network configured to __allow only approved outbound__:
+
+> [!NOTE]
+> In this configuration, the storage, key vault, and container registry used by the workspace are flagged as private. Since they are flagged as private, a private endpoint is used to communicate with them.
++
+### Azure Machine Learning studio
+
+If you want to use the integrated notebook or create datasets in the default storage account from studio, your client needs access to the default storage account. Create a _private endpoint_ or _service endpoint_ for the default storage account in the Azure Virtual Network that the clients use.
+
+Part of Azure Machine Learning studio runs locally in the client's web browser, and communicates directly with the default storage for the workspace. Creating a private endpoint or service endpoint for the default storage account in the virtual network ensures that the client can communicate with the storage account.
+
+> [!TIP]
+> Using a service endpoint in this configuration can reduce costs.
-The managed virtual network is preconfigured with [required default rules](#list-of-required-rules). It's also configured for private endpoint connections to your workspace default storage, container registry and key vault if they're configured as private. After choosing the isolation mode, you only need to consider other outbound requirements you may need to add.
+For more information on creating a private endpoint or service endpoint, see the [Connect privately to a storage account](/azure/storage/common/storage-private-endpoints) and [Service Endpoints](/azure/virtual-network/virtual-network-service-endpoints-overview) articles.
-## Supported scenarios in preview and to be supported scenarios
+## Supported scenarios
-|Scenarios|Supported in preview|To be supported|
-||||
-|Isolation Mode| &#x2022; Allow internet outbound<br>&#x2022; Allow only approved outbound||
-|Compute|&#x2022; [Compute Instance](concept-compute-instance.md)<br>&#x2022; [Compute Cluster](how-to-create-attach-compute-cluster.md)<br>&#x2022; [Serverless](how-to-use-serverless-compute.md)<br>&#x2022; [Serverless spark](apache-spark-azure-ml-concepts.md)|&#x2022; New managed online endpoint creation<br>&#x2022; Migration of existing managed online endpoint<br>&#x2022; No Public IP option of Compute Instance, Compute Cluster and Serverless|
-|Outbound|&#x2022; Private Endpoint<br>&#x2022; Service Tag|&#x2022; FQDN|
+|Scenarios|Supported|
+|||
+|Isolation Mode| &#x2022; Allow internet outbound<br>&#x2022; Allow only approved outbound|
+|Compute|&#x2022; [Compute Instance](concept-compute-instance.md)<br>&#x2022; [Compute Cluster](how-to-create-attach-compute-cluster.md)<br>&#x2022; [Serverless](how-to-use-serverless-compute.md)<br>&#x2022; [Serverless spark](apache-spark-azure-ml-concepts.md)<br>&#x2022; New managed online endpoint creation<br>&#x2022; No Public IP option of Compute Instance, Compute Cluster and Serverless |
+|Outbound|&#x2022; Private Endpoint<br>&#x2022; Service Tag<br>&#x2022; FQDN |
## Prerequisites Before following the steps in this article, make sure you have the following prerequisites:
-> [!IMPORTANT]
-> To use the information in this article, you must enable this preview feature for your subscription. To check whether it has been registered, or to register it, use the steps in the [Set up preview features in Azure subscription](/azure/azure-resource-manager/management/preview-features). Depending on whether you use the Azure portal, Azure CLI, or Azure PowerShell, you may need to register the feature with a different name. Use the following table to determine the name of the feature to register:
->
-> | Registration method | Feature name |
-> | -- | -- |
-> | Azure portal | `Azure Machine Learning Managed Network` |
-> | Azure CLI | `AMLManagedNetworkEnabled` |
-> | Azure PowerShell | `AMLManagedNetworkEnabled` |
- # [Azure CLI](#tab/azure-cli) * An Azure subscription. If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/).
Before following the steps in this article, make sure you have the following pre
ManagedNetwork, IsolationMode, ServiceTagDestination,
- PrivateEndpointDestination
+ PrivateEndpointDestination,
+ FqdnDestination
) from azure.identity import DefaultAzureCredential
Before following the steps in this article, make sure you have the following pre
## Configure a managed virtual network to allow internet outbound > [!IMPORTANT]
-> The creation of the managed virtual network is deferred until a compute resource is created or provisioning is manually started. [Manually start provisioning if you plan to submit serverless spark jobs](#configure-for-serverless-spark-jobs).
+> The creation of the managed virtual network is deferred until a compute resource is created or provisioning is manually started. If you want to provision the managed virtual network and private endpoints, use the `az ml workspace provision-network` command from the Azure CLI. For example, `az ml workspace provision-network --name ws --resource-group rg`.
+>
+> __If you plan to submit serverless spark jobs__, you must manually start provisioning. For more information, see the [configure for serverless spark jobs](#configure-for-serverless-spark-jobs) section.
# [Azure CLI](#tab/azure-cli)
You can configure a managed VNet using either the `az ml workspace create` or `a
* __Update an existing workspace__:
- > [!WARNING]
- > Before updating an existing workspace to use a managed virtual network, you must delete all computing resources for the workspace. This includes compute instance, compute cluster, and managed online endpoints.
+ [!INCLUDE [managed-vnet-update](includes/managed-vnet-update.md)]
The following example updates an existing workspace. The `--managed-network allow_internet_outbound` parameter configures a managed VNet for the workspace:
To configure a managed VNet that allows internet outbound communications, use th
* __Update an existing workspace__:
- > [!WARNING]
- > Before updating an existing workspace to use a managed virtual network, you must delete all computing resources for the workspace. This includes compute instance, compute cluster, and managed online endpoints.
+ [!INCLUDE [managed-vnet-update](includes/managed-vnet-update.md)]
The following example demonstrates how to create a managed VNet for an existing Azure Machine Learning workspace named `myworkspace`:
To configure a managed VNet that allows internet outbound communications, use th
:::image type="content" source="./media/how-to-managed-network/use-managed-network-internet-outbound.png" alt-text="Screenshot of creating a workspace with an internet outbound managed network." lightbox="./media/how-to-managed-network/use-managed-network-internet-outbound.png":::
+ 1. To add an _outbound rule_, select __Add user-defined outbound rules__ from the __Networking__ tab. From the __Workspace outbound rules__ sidebar, provide the following information:
+
+ * __Rule name__: A name for the rule. The name must be unique for this workspace.
+ * __Destination type__: Private Endpoint is the only option when the network isolation is private with internet outbound. Azure Machine Learning managed virtual network doesn't support creating a private endpoint to all Azure resource types. For a list of supported resources, see the [Private endpoints](#private-endpoints) section.
+ * __Subscription__: The subscription that contains the Azure resource you want to add a private endpoint for.
+ * __Resource group__: The resource group that contains the Azure resource you want to add a private endpoint for.
+ * __Resource type__: The type of the Azure resource.
+ * __Resource name__: The name of the Azure resource.
+ * __Sub Resource__: The sub resource of the Azure resource type.
+ * __Spark enabled__: Select this option if you want to enable serverless spark jobs for the workspace. This option is only available if the resource type is Azure Storage.
+
+ :::image type="content" source="./media/how-to-managed-network/outbound-rule-private-endpoint.png" alt-text="Screenshot of adding an outbound rule for a private endpoint." lightbox="./media/how-to-managed-network/outbound-rule-private-endpoint.png":::
+
+ Select __Save__ to save the rule. You can continue using __Add user-defined outbound rules__ to add rules.
+ 1. Continue creating the workspace as normal. * __Update an existing workspace__:
- > [!WARNING]
- > Before updating an existing workspace to use a managed virtual network, you must delete all computing resources for the workspace. This includes compute instance, compute cluster, and managed online endpoints.
+ [!INCLUDE [managed-vnet-update](includes/managed-vnet-update.md)]
1. Sign in to the [Azure portal](https://portal.azure.com), and select the Azure Machine Learning workspace that you want to enable managed virtual network isolation for.
- 1. Select __Networking__, then select __Private with Internet Outbound__. Select __Save__ to save the changes.
+ 1. Select __Networking__, then select __Private with Internet Outbound__.
:::image type="content" source="./media/how-to-managed-network/update-managed-network-internet-outbound.png" alt-text="Screenshot of updating a workspace to managed network with internet outbound." lightbox="./media/how-to-managed-network/update-managed-network-internet-outbound.png":::
+ * To _add_ an _outbound rule_, select __Add user-defined outbound rules__ from the __Networking__ tab. From the __Workspace outbound rules__ sidebar, provide the following information:
+
+ * __Rule name__: A name for the rule. The name must be unique for this workspace.
+ * __Destination type__: Private Endpoint is the only option when the network isolation is private with internet outbound. Azure Machine Learning managed virtual network doesn't support creating a private endpoint to all Azure resource types. For a list of supported resources, see the [Private endpoints](#private-endpoints) section.
+ * __Subscription__: The subscription that contains the Azure resource you want to add a private endpoint for.
+ * __Resource group__: The resource group that contains the Azure resource you want to add a private endpoint for.
+ * __Resource type__: The type of the Azure resource.
+ * __Resource name__: The name of the Azure resource.
+ * __Sub Resource__: The sub resource of the Azure resource type.
+ * __Spark enabled__: Select this option if you want to enable serverless spark jobs for the workspace. This option is only available if the resource type is Azure Storage.
+
+ :::image type="content" source="./media/how-to-managed-network/outbound-rule-private-endpoint.png" alt-text="Screenshot of updating a managed network by adding a private endpoint." lightbox="./media/how-to-managed-network/outbound-rule-private-endpoint.png":::
+
+ * To __delete__ an outbound rule, select __delete__ for the rule.
+
+ :::image type="content" source="./media/how-to-managed-network/delete-outbound-rule.png" alt-text="Screenshot of the delete rule icon for an approved outbound managed network.":::
+
+ 1. Select __Save__ at the top of the page to save the changes to the managed network.
+ ## Configure a managed virtual network to allow only approved outbound > [!IMPORTANT]
-> The creation of the managed virtual network is deferred until a compute resource is created or provisioning is manually started. [Manually start provisioning if you plan to submit serverless spark jobs](#configure-for-serverless-spark-jobs).
+> The creation of the managed virtual network is deferred until a compute resource is created or provisioning is manually started. If you want to provision the managed virtual network and private endpoints, use the `az ml workspace provision-network` command from the Azure CLI. For example, `az ml workspace provision-network --name ws --resource-group rg`.
+>
+> __If you plan to submit serverless spark jobs__, you must manually start provisioning. For more information, see the [configure for serverless spark jobs](#configure-for-serverless-spark-jobs) section.
# [Azure CLI](#tab/azure-cli)
managed_network:
isolation_mode: allow_only_approved_outbound ```
-You can also define _outbound rules_ to define approved outbound communication. An outbound rule can be created for a type of `service_tag`. You can also define _private endpoints_ that allow an Azure resource to securely communicate with the managed VNet. The following rule demonstrates adding a private endpoint to an Azure Blob resource, a service tag to Azure Data Factory:
+You can also define _outbound rules_ to define approved outbound communication. An outbound rule can be created for a type of `service_tag` or `fqdn`. You can also define _private endpoints_ that allow an Azure resource to securely communicate with the managed VNet. The following rule demonstrates adding a private endpoint to an Azure Blob resource, a service tag to Azure Data Factory, and an FQDN to `pypi.org`:
-> [!TIP]
-> Adding an outbound for a service tag is only valid when the managed VNet is configured to `allow_only_approved_outbound`.
+> [!IMPORTANT]
+> * Adding an outbound for a service tag or FQDN is only valid when the managed VNet is configured to `allow_only_approved_outbound`.
+> * If you add outbound rules, Microsoft can't guarantee that you're protected from data exfiltration to those destinations.
```yaml managed_network:
managed_network:
protocol: TCP service_tag: DataFactory type: service_tag
+ - name: add-fqdnrule
+ destination: 'pypi.org'
+ type: fqdn
- name: added-perule destination: service_resource_id: /subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<RESOURCE_GROUP>/providers/Microsoft.Storage/storageAccounts/<STORAGE_ACCOUNT_NAME>
You can configure a managed VNet using either the `az ml workspace create` or `a
* __Update an existing workspace__
- > [!WARNING]
- > Before updating an existing workspace to use a managed virtual network, you must delete all computing resources for the workspace. This includes compute instance, compute cluster, and managed online endpoints.
+ [!INCLUDE [managed-vnet-update](includes/managed-vnet-update.md)]
The following example uses the `--managed-network allow_only_approved_outbound` parameter to configure the managed VNet for an existing workspace:
You can configure a managed VNet using either the `az ml workspace create` or `a
protocol: TCP service_tag: DataFactory type: service_tag
+ - name: add-fqdnrule
+ destination: 'pypi.org'
+ type: fqdn
- name: added-perule destination: service_resource_id: /subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<RESOURCE_GROUP>/providers/Microsoft.Storage/storageAccounts/<STORAGE_ACCOUNT_NAME>
You can configure a managed VNet using either the `az ml workspace create` or `a
# [Python SDK](#tab/python)
-To configure a managed VNet that allows only approved outbound communications, use the `ManagedNetwork` class to define a network with `IsolationMode.ALLOw_ONLY_APPROVED_OUTBOUND`. You can then use the `ManagedNetwork` object to create a new workspace or update an existing one. To define _outbound rules_ to Azure services that the workspace relies on, use the `PrivateEndpointDestination` class to define a new private endpoint to the service.
+To configure a managed VNet that allows only approved outbound communications, use the `ManagedNetwork` class to define a network with `IsolationMode.ALLOW_ONLY_APPROVED_OUTBOUND`. You can then use the `ManagedNetwork` object to create a new workspace or update an existing one. To define _outbound rules_, use the following classes:
+
+| Destination | Class |
+| -- | -- |
+| __Azure service that the workspace relies on__ | `PrivateEndpointDestination` |
+| __Azure service tag__ | `ServiceTagDestination` |
+| __Fully qualified domain name (FQDN)__ | `FqdnDestination` |
* __Create a new workspace__:
To configure a managed VNet that allows only approved outbound communications, u
* `myrule` - Adds a private endpoint for an Azure Blob store. * `datafactory` - Adds a service tag rule to communicate with Azure Data Factory.
- > [!TIP]
- > Adding an outbound for a service tag is only valid when the managed VNet is configured to `IsolationMode.ALLOW_ONLY_APPROVED_OUTBOUND`.
+ > [!IMPORTANT]
+ > * Adding an outbound for a service tag is only valid when the managed VNet is configured to `IsolationMode.ALLOW_ONLY_APPROVED_OUTBOUND`.
+> * If you add outbound rules, Microsoft can't guarantee that you're protected from data exfiltration to those destinations.
```python # Basic managed virtual network configuration
To configure a managed VNet that allows only approved outbound communications, u
) )
+ # Example FQDN rule
+ ws.managed_network.outbound_rules.append(
+ FqdnDestination(
+ name="fqdnrule",
+ destination="pypi.org"
+ )
+ )
+ # Create the workspace ws = ml_client.workspaces.begin_create(ws).result() ``` * __Update an existing workspace__:
- > [!WARNING]
- > Before updating an existing workspace to use a managed virtual network, you must delete all computing resources for the workspace. This includes compute instance, compute cluster, and managed online endpoints.
+ [!INCLUDE [managed-vnet-update](includes/managed-vnet-update.md)]
The following example demonstrates how to create a managed VNet for an existing Azure Machine Learning workspace named `myworkspace`. The example also adds several outbound rules for the managed VNet:
To configure a managed VNet that allows only approved outbound communications, u
) )
+ # Example FQDN rule
+ ws.managed_network.outbound_rules.append(
+ FqdnDestination(
+ name="fqdnrule",
+ destination="pypi.org"
+ )
+ )
+ # Update the workspace ml_client.workspaces.begin_update(ws) ```
To configure a managed VNet that allows only approved outbound communications, u
:::image type="content" source="./media/how-to-managed-network/use-managed-network-approved-outbound.png" alt-text="Screenshot of creating a workspace with an approved outbound managed network." lightbox="./media/how-to-managed-network/use-managed-network-approved-outbound.png":::
+ 1. To add an _outbound rule_, select __Add user-defined outbound rules__ from the __Networking__ tab. From the __Workspace outbound rules__ sidebar, provide the following information:
+
+ * __Rule name__: A name for the rule. The name must be unique for this workspace.
+ * __Destination type__: Private Endpoint, Service Tag, or FQDN. Service Tag and FQDN are only available when the network isolation is private with approved outbound.
+
+ If the destination type is __Private Endpoint__, provide the following information:
+
+ * __Subscription__: The subscription that contains the Azure resource you want to add a private endpoint for.
+ * __Resource group__: The resource group that contains the Azure resource you want to add a private endpoint for.
+ * __Resource type__: The type of the Azure resource.
+ * __Resource name__: The name of the Azure resource.
+ * __Sub Resource__: The sub resource of the Azure resource type.
+ * __Spark enabled__: Select this option if you want to enable serverless spark jobs for the workspace. This option is only available if the resource type is Azure Storage.
+
+ > [!TIP]
+ > Azure Machine Learning managed virtual network doesn't support creating a private endpoint to all Azure resource types. For a list of supported resources, see the [Private endpoints](#private-endpoints) section.
+
+ :::image type="content" source="./media/how-to-managed-network/outbound-rule-private-endpoint.png" alt-text="Screenshot of updating an approved outbound network by adding a private endpoint." lightbox="./media/how-to-managed-network/outbound-rule-private-endpoint.png":::
+
+ If the destination type is __Service Tag__, provide the following information:
+
+ * __Service tag__: The service tag to add to the approved outbound rules.
+ * __Protocol__: The protocol to allow for the service tag.
+ * __Port ranges__: The port ranges to allow for the service tag.
+
+ :::image type="content" source="./media/how-to-managed-network/outbound-rule-service-tag.png" alt-text="Screenshot of updating an approved outbound network by adding a service tag." lightbox="./media/how-to-managed-network/outbound-rule-service-tag.png" :::
+
+ If the destination type is __FQDN__, provide the following information:
+
+ * __FQDN destination__: The fully qualified domain name to add to the approved outbound rules.
+
+ :::image type="content" source="./media/how-to-managed-network/outbound-rule-fqdn.png" alt-text="Screenshot of updating an approved outbound network by adding an FQDN rule for an approved outbound managed network." lightbox="./media/how-to-managed-network/outbound-rule-fqdn.png":::
+
+ Select __Save__ to save the rule. You can continue using __Add user-defined outbound rules__ to add rules.
+ 1. Continue creating the workspace as normal. * __Update an existing workspace__:
- > [!WARNING]
- > Before updating an existing workspace to use a managed virtual network, you must delete all computing resources for the workspace. This includes compute instance, compute cluster, and managed online endpoints.
+ [!INCLUDE [managed-vnet-update](includes/managed-vnet-update.md)]
1. Sign in to the [Azure portal](https://portal.azure.com), and select the Azure Machine Learning workspace that you want to enable managed virtual network isolation for.
- 1. Select __Networking__, then select __Private with Approved Outbound__. Select __Save__ to save the changes.
+ 1. Select __Networking__, then select __Private with Approved Outbound__.
:::image type="content" source="./media/how-to-managed-network/update-managed-network-approved-outbound.png" alt-text="Screenshot of updating a workspace to managed network with approved outbound." lightbox="./media/how-to-managed-network/update-managed-network-approved-outbound.png":::
+ * To _add_ an _outbound rule_, select __Add user-defined outbound rules__ from the __Networking__ tab. From the __Workspace outbound rules__ sidebar, provide the following information:
+
+ * __Rule name__: A name for the rule. The name must be unique for this workspace.
+ * __Destination type__: Private Endpoint, Service Tag, or FQDN. Service Tag and FQDN are only available when the network isolation is private with approved outbound.
+
+ If the destination type is __Private Endpoint__, provide the following information:
+
+ * __Subscription__: The subscription that contains the Azure resource you want to add a private endpoint for.
+ * __Resource group__: The resource group that contains the Azure resource you want to add a private endpoint for.
+ * __Resource type__: The type of the Azure resource.
+ * __Resource name__: The name of the Azure resource.
+ * __Sub Resource__: The sub resource of the Azure resource type.
+ * __Spark enabled__: Select this option if you want to enable serverless spark jobs for the workspace. This option is only available if the resource type is Azure Storage.
+
+ > [!TIP]
+ > Azure Machine Learning managed virtual network doesn't support creating a private endpoint to all Azure resource types. For a list of supported resources, see the [Private endpoints](#private-endpoints) section.
+
+ :::image type="content" source="./media/how-to-managed-network/outbound-rule-private-endpoint.png" alt-text="Screenshot of updating an approved outbound network by adding a private endpoint rule." lightbox="./media/how-to-managed-network/outbound-rule-private-endpoint.png":::
+
+ If the destination type is __Service Tag__, provide the following information:
+
+ * __Service tag__: The service tag to add to the approved outbound rules.
+ * __Protocol__: The protocol to allow for the service tag.
+ * __Port ranges__: The port ranges to allow for the service tag.
+
+ :::image type="content" source="./media/how-to-managed-network/outbound-rule-service-tag.png" alt-text="Screenshot of updating an approved outbound network by adding a service tag rule." lightbox="./media/how-to-managed-network/outbound-rule-service-tag.png" :::
+
+ If the destination type is __FQDN__, provide the following information:
+
+ * __FQDN destination__: The fully qualified domain name to add to the approved outbound rules.
+
+ :::image type="content" source="./media/how-to-managed-network/outbound-rule-fqdn.png" alt-text="Screenshot of updating an approved outbound network by adding an FQDN rule." lightbox="./media/how-to-managed-network/outbound-rule-fqdn.png":::
+
+ Select __Save__ to save the rule. You can continue using __Add user-defined outbound rules__ to add rules.
+
+ * To __delete__ an outbound rule, select __delete__ for the rule.
+
+ :::image type="content" source="./media/how-to-managed-network/delete-outbound-rule.png" alt-text="Screenshot of the delete rule icon for an approved outbound managed network.":::
+
+ 1. Select __Save__ at the top of the page to save the changes to the managed network.
+ ## Configure for serverless spark jobs > [!TIP]
-> The steps in this section are only needed for __Spark serverless__. If you are using [serverless __compute cluster__](how-to-use-serverless-compute.md), you can skip this section.
+> The steps in this section are only needed if you plan to submit __serverless spark jobs__. If you don't plan to submit serverless spark jobs, you can skip this section.
To enable the [serverless spark jobs](how-to-submit-spark-jobs.md) for the managed VNet, you must perform the following actions:
To enable the [serverless spark jobs](how-to-submit-spark-jobs.md) for the manag
Use the __Azure CLI__ or __Python SDK__ tabs to learn how to manually provision the managed VNet with serverless spark support.
-
+
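+A minimal Python sketch of this provisioning call is shown below. It assumes an authenticated `MLClient` named `ml_client`, as in the earlier examples, and that your SDK version includes the `begin_provision_network` operation and its `include_spark` parameter:
+
+```python
+# Manually provision the managed virtual network with serverless spark support.
+# Assumes `ml_client` is an authenticated MLClient for the workspace;
+# `begin_provision_network` and `include_spark` are assumptions based on
+# the SDK version in use.
+provision_result = ml_client.workspaces.begin_provision_network(
+    workspace_name="myworkspace",
+    include_spark=True,  # set False if you don't need serverless spark support
+).result()
+```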
## Manage outbound rules
ml_client._workspace_outbound_rules.begin_remove(resource_group, ws_name, rule_n
:::image type="content" source="./media/how-to-managed-network/manage-outbound-rules.png" alt-text="Screenshot of the outbound rules section." lightbox="./media/how-to-managed-network/manage-outbound-rules.png":::
+* To _add_ an _outbound rule_, select __Add user-defined outbound rules__ from the __Networking__ tab, then provide the rule information in the __Workspace outbound rules__ sidebar.
+
+* To __enable__ or __disable__ a rule, use the toggle in the __Active__ column.
+
+* To __delete__ an outbound rule, select __delete__ for the rule.
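+For scripted management, the following is a minimal Python sketch built on the SDK operations referenced in this section. The `list` operation and its signature are assumptions that mirror the `begin_remove` call shown above; `resource_group` and `ws_name` are assumed to be defined:
+
+```python
+# List the outbound rules for the workspace, then remove one by name.
+# NOTE: `list` and its signature are assumed to mirror the `begin_remove`
+# operation shown earlier in this section.
+rules = ml_client._workspace_outbound_rules.list(resource_group, ws_name)
+for rule in rules:
+    print(rule.name)
+
+# Remove a rule by name (operation shown earlier in this section)
+ml_client._workspace_outbound_rules.begin_remove(resource_group, ws_name, "myrule").result()
+```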
+ ## List of required rules
ml_client._workspace_outbound_rules.begin_remove(resource_group, ws_name, rule_n
> These rules are automatically added to the managed VNet. __Private endpoints__:
-* When the isolation mode for the managed network is `Allow internet outbound`, private endpoint outbound rules will be automatically created as required rules from the managed network for the workspace and associated resources __with public network access disabled__ (Key Vault, Storage Account, Container Registry, Azure ML Workspace).
-* When the isolation mode for the managed network is `Allow only approved outbound`, private endpoint outbound rules will be automatically created as required rules from the managed network for the workspace and associated resources __regardless of public network access mode for those resources__ (Key Vault, Storage Account, Container Registry, Azure ML Workspace).
+* When the isolation mode for the managed network is `Allow internet outbound`, private endpoint outbound rules are automatically created as required rules from the managed network for the workspace and associated resources __with public network access disabled__ (Key Vault, Storage Account, Container Registry, Azure Machine Learning workspace).
+* When the isolation mode for the managed network is `Allow only approved outbound`, private endpoint outbound rules are automatically created as required rules from the managed network for the workspace and associated resources __regardless of public network access mode for those resources__ (Key Vault, Storage Account, Container Registry, Azure Machine Learning workspace).
__Outbound__ service tag rules:
__Outbound__ service tag rules:
__Inbound__ service tag rules: * `AzureMachineLearning`
-## List of recommended outbound rules
+## List of scenario-specific outbound rules
-Currently we don't have any recommended outbound rules.
+### Scenario: Access public machine learning packages
+
+To allow installation of __Python packages for training and deployment__, add outbound _FQDN_ rules to allow traffic to the following host names:
++
+### Scenario: Use Visual Studio Code desktop or web with compute instance
+
+If you plan to use __Visual Studio Code__ with Azure Machine Learning, add outbound _FQDN_ rules to allow traffic to the following hosts:
+
+* `*.vscode.dev`
+* `vscode.blob.core.windows.net`
+* `*.gallerycdn.vsassets.io`
+* `raw.githubusercontent.com`
+* `*.vscode-unpkg.net`
+* `*.vscode-cdn.net`
+* `*.vscodeexperiments.azureedge.net`
+* `default.exp-tas.com`
+* `code.visualstudio.com`
+* `update.code.visualstudio.com`
+* `*.vo.msecnd.net`
+* `marketplace.visualstudio.com`
+
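+As a sketch, these hosts can be added programmatically with the `FqdnDestination` class used earlier in this article. FQDN rules apply only when the managed network uses the allow only approved outbound mode; the rule names below are illustrative:
+
+```python
+from azure.ai.ml.entities import FqdnDestination
+
+# Visual Studio Code hosts from the list above
+vscode_hosts = [
+    "*.vscode.dev", "vscode.blob.core.windows.net", "*.gallerycdn.vsassets.io",
+    "raw.githubusercontent.com", "*.vscode-unpkg.net", "*.vscode-cdn.net",
+    "*.vscodeexperiments.azureedge.net", "default.exp-tas.com",
+    "code.visualstudio.com", "update.code.visualstudio.com",
+    "*.vo.msecnd.net", "marketplace.visualstudio.com",
+]
+
+# Assumes `ws` is a Workspace object with a managed network and `ml_client`
+# is an authenticated MLClient, as in the earlier examples.
+for i, host in enumerate(vscode_hosts):
+    ws.managed_network.outbound_rules.append(
+        FqdnDestination(name=f"vscode-{i}", destination=host)
+    )
+
+ml_client.workspaces.begin_update(ws)
+```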
+### Scenario: Use batch endpoints
+
+If you plan to use __Azure Machine Learning batch endpoints__ for deployment, add outbound _private endpoint_ rules to allow traffic to the following sub resources for the default storage account:
+
+* `queue`
+* `table`
+
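+A minimal sketch of adding these rules with the `PrivateEndpointDestination` class from earlier in this article; the storage resource ID placeholders and rule names are illustrative:
+
+```python
+from azure.ai.ml.entities import PrivateEndpointDestination
+
+# Resource ID of the workspace default storage account (placeholder values)
+storage_id = (
+    "/subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<RESOURCE_GROUP>"
+    "/providers/Microsoft.Storage/storageAccounts/<STORAGE_ACCOUNT_NAME>"
+)
+
+# Assumes `ws` is a Workspace object with a managed network and `ml_client`
+# is an authenticated MLClient, as in the earlier examples.
+for subresource in ["queue", "table"]:
+    ws.managed_network.outbound_rules.append(
+        PrivateEndpointDestination(
+            name=f"batch-endpoint-{subresource}",
+            service_resource_id=storage_id,
+            subresource_target=subresource,
+        )
+    )
+
+ml_client.workspaces.begin_update(ws)
+```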
+### Scenario: Use prompt flow with Azure OpenAI, content safety, and cognitive search
+
+If you plan to use __prompt flow__ with Azure OpenAI, content safety, and cognitive search, add the following outbound _private endpoint_ rules:
+
+* Private endpoint to Azure AI Services
+* Private endpoint to Azure Cognitive Search
+
+## Private endpoints
+
+Private endpoints are currently supported for the following Azure services:
+
+* Azure Machine Learning
+* Azure Machine Learning registries
+* Azure Storage (all sub resource types)
+* Azure Container Registry
+* Azure Key Vault
+* Azure AI services
+* Azure Cognitive Search
+* Azure SQL Server
+* Azure Data Factory
+* Azure Cosmos DB (all sub resource types)
+* Azure Event Hubs
+* Azure Redis Cache
+* Azure Databricks
+* Azure Database for MariaDB
+* Azure Database for PostgreSQL
+* Azure Database for MySQL
+* Azure SQL Managed Instance
+
+When you create a private endpoint, you provide the _resource type_ and _subresource_ that the endpoint connects to. Some resources have multiple types and subresources. For more information, see [what is a private endpoint](/azure/private-link/private-endpoint-overview).
+
+When you create a private endpoint for Azure Machine Learning dependency resources, such as Azure Storage, Azure Container Registry, and Azure Key Vault, the resource can be in a different Azure subscription. However, the resource must be in the same tenant as the Azure Machine Learning workspace.
+
+> [!IMPORTANT]
+> When configuring private endpoints for an Azure Machine Learning managed virtual network, the private endpoints are only created when the first _compute is created_ or when managed network provisioning is forced. For more information on forcing the managed network provisioning, see [Configure for serverless spark jobs](#configure-for-serverless-spark-jobs).
+
+## Pricing
+
+The Azure Machine Learning managed virtual network feature is free. However, you're charged for the following resources that are used by the managed virtual network:
+
+* Azure Private Link - Private endpoints used to secure communications between the managed virtual network and Azure resources rely on Azure Private Link. For more information on pricing, see [Azure Private Link pricing](https://azure.microsoft.com/pricing/details/private-link/).
+* FQDN outbound rules - FQDN outbound rules are implemented using Azure Firewall. If you use outbound FQDN rules, charges for Azure Firewall are included in your billing.
+
+ > [!IMPORTANT]
+ > The firewall isn't created until you add an outbound FQDN rule. If you don't use FQDN rules, you aren't charged for Azure Firewall. For more information on pricing, see [Azure Firewall pricing](https://azure.microsoft.com/pricing/details/azure-firewall/).
## Limitations * Once you enable managed virtual network isolation of your workspace, you can't disable it. * Managed virtual network uses private endpoint connection to access your private resources. You can't have a private endpoint and a service endpoint at the same time for your Azure resources, such as a storage account. We recommend using private endpoints in all scenarios.
-* The managed network will be deleted and cleaned up when the workspace is deleted.
+* The managed network is deleted when the workspace is deleted.
+* Data exfiltration protection is automatically enabled for the only approved outbound mode. If you add other outbound rules, such as to FQDNs, Microsoft can't guarantee that you're protected from data exfiltration to those outbound destinations.
+* Creating a compute cluster in a different region than the workspace isn't supported when using a managed virtual network.
+
+### Migration of compute resources
+
+If you have an existing workspace and want to enable managed virtual network for it, there's currently no supported migration path for existing managed compute resources. You'll need to delete all existing managed compute resources and recreate them after enabling the managed virtual network. The following list contains the compute resources that must be deleted and recreated:
+
+* Compute cluster
+* Compute instance
+* Managed online endpoints
## Next steps * [Troubleshoot managed virtual network](how-to-troubleshoot-managed-network.md)
+* [Configure managed computes in a managed virtual network](how-to-managed-network-compute.md)
machine-learning How To Mltable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-mltable.md
You can optionally choose to load the MLTable object into Pandas, using:
``` #### Save the data loading steps
-Next, save all your data loading steps into an MLTable file. If you save your data loading steps, you can reproduce your Pandas data frame at a later point in time, and you don't need to redefine the data loading steps in your code.
+Next, save all your data loading steps into an MLTable file. Saving your data loading steps in an MLTable file allows you to reproduce your Pandas data frame at a later point in time, without needing to redefine the code each time.
+You can save the MLTable YAML file to cloud storage, or save it to local paths.
```python
-# serialize the data loading steps into an MLTable file
-tbl.save("./nyc_taxi")
+# save the data loading steps in an MLTable file to a cloud storage
+# NOTE: the tbl object was defined in the previous snippet.
+tbl.save(save_path_dirc="azureml://subscriptions/<subid>/resourcegroups/<rgname>/workspaces/<wsname>/datastores/<name>/paths/titanic", collocated=True, show_progress=True, allow_copy_errors=False, overwrite=True)
```
-You can optionally view the contents of the MLTable file, to understand how the data loading steps are serialized into a file:
- ```python
-with open("./nyc_taxi/MLTable", "r") as f:
- print(f.read())
+# save the data loading steps in an MLTable file to local
+# NOTE: the tbl object was defined in the previous snippet.
+tbl.save("./titanic")
```
+> [!IMPORTANT]
+> - If `collocated == True`, the data is copied into the same folder as the MLTable YAML file (if it isn't already collocated), and relative paths are used in the MLTable YAML.
+> - If `collocated == False`, the data isn't moved; absolute paths are used for cloud data and relative paths for local data.
+> - The following parameter combination isn't supported: the data is local, `collocated == False`, and `save_path_dirc` is a cloud directory. Upload your local data to the cloud and use the cloud data paths for the MLTable instead.
+> - The parameters `show_progress` (default `True`), `allow_copy_errors` (default `False`), and `overwrite` (default `True`) are optional.
+>
++ ### Reproduce data loading steps Now that the data loading steps have been serialized into a file, you can reproduce them at any point in time, with the load() method. This way, you don't need to redefine your data loading steps in code, and you can more easily share the file.
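+For example, a minimal sketch of reloading the MLTable saved in the previous snippet (assuming the local save path; a cloud URI also works):
+
+```python
+import mltable
+
+# Reload the data loading steps saved earlier
+tbl = mltable.load("./titanic")
+
+# Materialize into a Pandas data frame without redefining the loading steps
+df = tbl.to_pandas_dataframe()
+print(df.head())
+```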
machine-learning How To Network Isolation Planning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-network-isolation-planning.md
Last updated 02/14/2023 -+ # Plan for network isolation
-In this article, you learn how to plan your network isolation for Azure Machine Learning and our recommendations. This is a document for IT administrators who want to design network architecture.
+In this article, you learn how to plan your network isolation for Azure Machine Learning and our recommendations. This article is for IT administrators who want to design network architecture.
-## Key considerations
+## Recommended architecture (Managed Network Isolation pattern)
-### Azure Machine Learning managed virtual network and Azure Virtual Network
+[Using a Managed virtual network](how-to-managed-network.md) (preview) provides an easier configuration for network isolation. It automatically secures your workspace and managed compute resources in a managed virtual network. You can add private endpoint connections for other Azure services that the workspace relies on, such as Azure Storage Accounts. Depending on your needs, you can allow all outbound traffic to the public network or allow only the outbound traffic you approve. Outbound traffic required by the Azure Machine Learning service is automatically enabled for the managed virtual network. We recommend workspace managed network isolation as a built-in, frictionless network isolation method. There are two patterns: allow internet outbound mode and allow only approved outbound mode.
-Azure Machine Learning can use a managed virtual network (preview) or Azure Virtual Network to enable network isolation.
+### Allow internet outbound mode
-> [!IMPORTANT]
-> Azure Machine Learning managed virtual network is currently in preview. This preview version is provided without a service-level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+Use this option if you want to allow your machine learning engineers to access the internet freely. You can create other private endpoint outbound rules to let them access your private resources on Azure.
-Using a Managed virtual network provides an easier configuration for network isolation. It automatically secures your workspace and managed compute resources in a managed virtual network. You can add private endpoint connections for other Azure services that the workspace relies on, such as Azure Storage Accounts. Depending on your needs, you can allow all outbound traffic to the public network or allow only the outbound traffic you approve. Outbound traffic required by the Azure Machine Learning service is automatically enabled for the managed virtual network.
-Using Azure Virtual Networks provides a more customizable network isolation solution, with the caveat that you are responsible for configuration and management. An Azure Virtual Network can be used to connect unmanaged resources to your workspace. For example, an Azure Virtual Network might be used to enable clients to connect to the workspace through a Virtual Private Network (VPN) gateway, or to allow you to [attach on-premises kubernetes](how-to-attach-kubernetes-anywhere.md) as a compute resource.
+### Allow only approved outbound mode
-> [!TIP]
-> The information in this article is primarily about using Azure Virtual Networks. For more information on Azure Machine Learning managed virtual networks, see the [Managed virtual network](how-to-managed-network.md) article.
+Use this option if you want to minimize data exfiltration risk and control what your machine learning engineers can access. You can control outbound traffic by using private endpoint, service tag, and FQDN rules.
++
+## Recommended architecture (use your Azure VNet)
+
+If you have a specific requirement or company policy that prevents you from using a managed virtual network, you can use an __Azure virtual network__ for network isolation.
+
+The following diagram is our recommended architecture to make all resources private but allow outbound internet access from your VNet. This diagram describes the following architecture:
+* Put all resources in the same region.
+* A hub VNet, which contains your firewall.
+* A spoke VNet, which contains the following resources:
+ * A training subnet contains compute instances and clusters used for training ML models. These resources are configured for no public IP.
+ * A scoring subnet contains an AKS cluster.
+ * A 'pe' subnet contains private endpoints that connect to the workspace and private resources used by the workspace (storage, key vault, container registry, etc.)
+* Managed online endpoints use the private endpoint of the workspace to process incoming requests. A private endpoint is also used to allow managed online endpoint deployments to access private storage.
+
+This architecture balances your network security and your ML engineers' productivity.
++
+You can automate the creation of this environment by using [a template](tutorial-create-secure-workspace-template.md) without managed online endpoint or AKS. Managed online endpoint is the solution if you don't have an existing AKS cluster for your AI model scoring. For more information, see the [how to secure online endpoint](how-to-secure-online-endpoint.md) documentation. AKS with the Azure Machine Learning extension is the solution if you have an existing AKS cluster for your AI model scoring. For more information, see the [how to attach kubernetes](how-to-attach-kubernetes-anywhere.md) documentation.
+
+### Removing firewall requirement
+
+If you want to remove the firewall requirement, you can use network security groups and [Azure virtual network NAT](/azure/virtual-network/nat-gateway/nat-overview) to allow internet outbound from your private computing resources.
++
+### Using public workspace
+
+You can use a public workspace if you're OK with Azure AD authentication and authorization with conditional access. A public workspace has some features that show data in your private storage account, so we recommend using a private workspace.
+
+## Recommended architecture with data exfiltration prevention
+
+This diagram shows the recommended architecture to make all resources private and control outbound destinations to prevent data exfiltration. We recommend this architecture when using Azure Machine Learning with your sensitive data in production. This diagram describes the following architecture:
+* Put all resources in the same region.
+* A hub VNet, which contains your firewall.
+ * In addition to service tags, the firewall uses FQDNs to prevent data exfiltration.
+* A spoke VNet, which contains the following resources:
+ * A training subnet contains compute instances and clusters used for training ML models. These resources are configured for no public IP. Additionally, a service endpoint and service endpoint policy are in place to prevent data exfiltration.
+ * A scoring subnet contains an AKS cluster.
+ * A 'pe' subnet contains private endpoints that connect to the workspace and private resources used by the workspace (storage, key vault, container registry, etc.)
+* Managed online endpoints use the private endpoint of the workspace to process incoming requests. A private endpoint is also used to allow managed online endpoint deployments to access private storage.
++
+The following tables list the required outbound [Azure Service Tags](/azure/virtual-network/service-tags-overview) and fully qualified domain names (FQDN) with data exfiltration protection setting:
+
+| Outbound service tag | Protocol | Port |
+| - | -- | - |
+| `AzureActiveDirectory` | TCP | 80, 443 |
+| `AzureResourceManager` | TCP | 443 |
+| `AzureMachineLearning` | UDP | 5831 |
+| `BatchNodeManagement` | TCP | 443 |
+
+| Outbound FQDN | Protocol | Port |
+| - | - | - |
+| `mcr.microsoft.com` | TCP | 443 |
+| `*.data.mcr.microsoft.com` | TCP | 443 |
+| `ml.azure.com` | TCP | 443 |
+| `automlresources-prod.azureedge.net` | TCP | 443 |
+
+### Using public workspace
+
+You can use the public workspace if you're OK with Azure AD authentication and authorization with conditional access. A public workspace has some features that show data in your private storage account, so we recommend using a private workspace.
+
+## Key considerations
### Azure Machine Learning has both IaaS and PaaS resources
The following tables list the required outbound [Azure Service Tags](/azure/virt
### Managed online endpoint
-Azure Machine Learning managed online endpoint uses Azure Machine Learning managed VNet, instead of using your VNet. If you want to disallow public access to your endpoint, set the `public_network_access` flag to disabled. When this flag is disabled, your endpoint can be accessed via the private endpoint of your workspace, and it can't be reached from public networks. If you want to use a private storage account for your deployment, set the `egress_public_network_access` flag disabled. It automatically creates private endpoints to access your private resources.
+Security for inbound and outbound communication is configured separately for managed online endpoints.
-> [!TIP]
-> The workspace default storage account is the only private storage account supported by managed online endpoint.
+#### Inbound communication
+
+Azure Machine Learning uses a private endpoint to secure inbound communication to a managed online endpoint. Set the endpoint's `public_network_access` flag to `disabled` to prevent public access to it. When this flag is disabled, your endpoint can be accessed only via the private endpoint of your Azure Machine Learning workspace, and it can't be reached from public networks.
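+As a sketch, the flag can be set when defining the endpoint with the Python SDK; the endpoint name below is illustrative:
+
+```python
+from azure.ai.ml.entities import ManagedOnlineEndpoint
+
+# Disable public access; the endpoint is then reachable only through the
+# workspace private endpoint. The endpoint name is illustrative.
+endpoint = ManagedOnlineEndpoint(
+    name="my-secure-endpoint",
+    auth_mode="key",
+    public_network_access="disabled",
+)
+
+# Assumes `ml_client` is an authenticated MLClient for the workspace.
+ml_client.online_endpoints.begin_create_or_update(endpoint).result()
+```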
+#### Outbound communication
-For more information, see the [Network isolation of managed online endpoints](how-to-secure-online-endpoint.md) article.
+
+To secure outbound communication from a deployment to resources, Azure Machine Learning uses a workspace managed virtual network (preview). The deployment needs to be created in the workspace managed VNet so that it can use the private endpoints of the workspace managed virtual network for outbound communication.
+
+The following architecture diagram shows how communications flow through private endpoints to the managed online endpoint. Incoming scoring requests from a client's virtual network flow through the workspace's private endpoint to the managed online endpoint. Outbound communication from deployments to services is handled through private endpoints from the workspace's managed virtual network to those service instances.
++
+For more information, see [Network isolation with managed online endpoints](concept-secure-online-endpoint.md).
### Private IP address shortage in your main network
In this diagram, your main VNet requires the IPs for private endpoints. You can
### Network policy enforcement You can use [built-in policies](how-to-integrate-azure-policy.md) if you want to control network isolation parameters with self-service workspace and computing resources creation.
-### Other considerations
+### Other minor considerations
#### Image build compute setting for ACR behind VNet
If you plan on using the Azure Machine Learning studio, there are extra configur
<!-- ### Registry -->
-## Recommended architecture
-
-The following diagram is our recommended architecture to make all resources private but allow outbound internet access from your VNet. This diagram describes the following architecture:
-* Put all resources in the same region.
-* A hub VNet, which contains your firewall.
-* A spoke VNet, which contains the following resources:
- * A training subnet contains compute instances and clusters used for training ML models. These resources are configured for no public IP.
- * A scoring subnet contains an AKS cluster.
- * A 'pe' subnet contains private endpoints that connect to the workspace and private resources used by the workspace (storage, key vault, container registry, etc.)
-* Managed online endpoints use the private endpoint of the workspace to process incoming requests. A private endpoint is also used to allow managed online endpoint deployments to access private storage.
-
-This architecture balances your network security and your ML engineers' productivity.
--
-You can automate this environments creation using [a template](tutorial-create-secure-workspace-template.md) without managed online endpoint or AKS. Managed online endpoint is the solution if you don't have an existing AKS cluster for your AI model scoring. See [how to secure online endpoint](how-to-secure-online-endpoint.md) documentation for more info. AKS with Azure Machine Learning extension is the solution if you have an existing AKS cluster for your AI model scoring. See [how to attach kubernetes](how-to-attach-kubernetes-anywhere.md) documentation for more info.
-
-### Removing firewall requirement
-
-If you want to remove the firewall requirement, you can use network security groups and [Azure virtual network NAT](/azure/virtual-network/nat-gateway/nat-overview) to allow internet outbound from your private computing resources.
--
-### Using public workspace
-
-You can use a public workspace if you're OK with Azure AD authentication and authorization with conditional access. A public workspace has some features to show data in your private storage account and we recommend using private workspace.
-
-## Recommended architecture with data exfiltration prevention
-
-This diagram shows the recommended architecture to make all resources private and control outbound destinations to prevent data exfiltration. We recommend this architecture when using Azure Machine Learning with your sensitive data in production. This diagram describes the following architecture:
-* Put all resources in the same region.
-* A hub VNet, which contains your firewall.
- * In addition to service tags, the firewall uses FQDNs to prevent data exfiltration.
-* A spoke VNet, which contains the following resources:
- * A training subnet contains compute instances and clusters used for training ML models. These resources are configured for no public IP. Additionally, a service endpoint and service endpoint policy are in place to prevent data exfiltration.
- * A scoring subnet contains an AKS cluster.
- * A 'pe' subnet contains private endpoints that connect to the workspace and private resources used by the workspace (storage, key vault, container registry, etc.)
-* Managed online endpoints use the private endpoint of the workspace to process incoming requests. A private endpoint is also used to allow managed online endpoint deployments to access private storage.
--
-The following tables list the required outbound [Azure Service Tags](/azure/virtual-network/service-tags-overview) and fully qualified domain names (FQDN) with data exfiltration protection setting:
-
-| Outbound service tag | Protocol | Port |
-| - | -- | - |
-| `AzureActiveDirectory` | TCP | 80, 443 |
-| `AzureResourceManager` | TCP | 443 |
-| `AzureMachineLearning` | UDP | 5831 |
-| `BatchNodeManagement` | TCP | 443 |
-
-| Outbound FQDN | Protocol | Port |
-| - | - | - |
-| `mcr.microsoft.com` | TCP | 443 |
-| `*.data.mcr.microsoft.com` | TCP | 443 |
-| `ml.azure.com` | TCP | 443 |
-| `automlresources-prod.azureedge.net` | TCP | 443 |
+## Next steps
-### Using public workspace
+For more information on using a __managed virtual network__, see the following articles:
-You can use the public workspace if you're OK with Azure AD authentication and authorization with conditional access. A public workspace has some features to show data in your private storage account and we recommend using private workspace.
+* [Managed Network Isolation](how-to-managed-network.md)
+* [Use private endpoint to access your workspace](how-to-configure-private-link.md)
+* [Use custom DNS](how-to-custom-dns.md)
-## Next steps
+For more information on using an __Azure Virtual Network__, see the following articles:
* [Virtual network overview](how-to-network-security-overview.md) * [Secure the workspace resources](how-to-secure-workspace-vnet.md) * [Secure the training environment](how-to-secure-training-vnet.md) * [Secure the inference environment](how-to-secure-inferencing-vnet.md) * [Enable studio functionality](how-to-enable-studio-virtual-network.md)
-* [Configure inbound and outbound network traffic](how-to-access-azureml-behind-firewall.md)
-* [Use custom DNS](how-to-custom-dns.md)
+* [Configure inbound and outbound network traffic](how-to-access-azureml-behind-firewall.md)
machine-learning How To Network Security Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-network-security-overview.md
monikerRange: 'azureml-api-2 || azureml-api-1'
[!INCLUDE [dev v1](includes/machine-learning-dev-v1.md)] :::moniker-end
-Secure Azure Machine Learning workspace resources and compute environments using Azure Virtual Networks (VNets). This article uses an example scenario to show you how to configure a complete virtual network.
- [!INCLUDE [managed-vnet-note](includes/managed-vnet-note.md)]
+Secure Azure Machine Learning workspace resources and compute environments using Azure Virtual Networks (VNets). This article uses an example scenario to show you how to configure a complete virtual network.
+ This article is part of a series on securing an Azure Machine Learning workflow. See the other articles in this series: This article is part of a series on securing an Azure Machine Learning workflow. See the other articles in this series: :::moniker range="azureml-api-2"
+* [Use managed networks](how-to-managed-network.md) (preview)
* [Secure the workspace resources](how-to-secure-workspace-vnet.md) * [Secure machine learning registries](how-to-registry-network-isolation.md) * [Secure the training environment](how-to-secure-training-vnet.md)
For a tutorial on creating a secure workspace, see [Tutorial: Create a secure wo
## Prerequisites
-This article assumes that you have familiarity with the following topics:
+This article assumes that you have familiarity with the following articles:
+ [Azure Virtual Networks](../virtual-network/virtual-networks-overview.md) + [IP networking](../virtual-network/ip-services/public-ip-addresses.md) + [Azure Machine Learning workspace with private endpoint](how-to-configure-private-link.md)
The following table compares how services access different parts of an Azure Mac
* **Inference compute access** - Access Azure Kubernetes Services (AKS) compute clusters with private IP addresses.
-The next sections show you how to secure the network scenario described above. To secure your network, you must:
+The next sections show you how to secure the network scenario described previously. To secure your network, you must:
1. Secure the [**workspace and associated resources**](#secure-the-workspace-and-associated-resources). 1. Secure the [**training environment**](#secure-the-training-environment).
The next sections show you how to secure the network scenario described above. T
If you want to access the workspace over the public internet while keeping all the associated resources secured in a virtual network, use the following steps:
-1. Create an [Azure Virtual Network](../virtual-network/virtual-networks-overview.md) that will contain the resources used by the workspace.
+1. Create an [Azure Virtual Network](../virtual-network/virtual-networks-overview.md). This network secures the resources used by the workspace.
1. Use __one__ of the following options to create a publicly accessible workspace: :::moniker range="azureml-api-2"
If you want to access the workspace over the public internet while keeping all t
Use the following steps to secure your workspace and associated resources. These steps allow your services to communicate in the virtual network. :::moniker range="azureml-api-2"
-1. Create an [Azure Virtual Networks](../virtual-network/virtual-networks-overview.md) that will contain the workspace and other resources. Then create a [Private Link-enabled workspace](how-to-secure-workspace-vnet.md#secure-the-workspace-with-private-endpoint) to enable communication between your VNet and workspace.
+1. Create an [Azure Virtual Network](../virtual-network/virtual-networks-overview.md). This network secures the workspace and other resources. Then create a [Private Link-enabled workspace](how-to-secure-workspace-vnet.md#secure-the-workspace-with-private-endpoint) to enable communication between your VNet and workspace.
1. Add the following services to the virtual network by using _either_ a __service endpoint__ or a __private endpoint__. Also allow trusted Microsoft services to access these | Service | Endpoint information | Allow trusted information |
Use the following steps to secure your workspace and associated resources. These
| __Azure Container Registry__ | [Private endpoint](../container-registry/container-registry-private-link.md) | [Allow trusted services](../container-registry/allow-access-trusted-services.md) | :::moniker-end :::moniker range="azureml-api-1"
-1. Create an [Azure Virtual Networks](../virtual-network/virtual-networks-overview.md) that will contain the workspace and other resources. Then create a [Private Link-enabled workspace](./v1/how-to-secure-workspace-vnet.md#secure-the-workspace-with-private-endpoint) to enable communication between your VNet and workspace.
+1. Create an [Azure Virtual Network](../virtual-network/virtual-networks-overview.md). This virtual network secures the workspace and other resources. Then create a [Private Link-enabled workspace](./v1/how-to-secure-workspace-vnet.md#secure-the-workspace-with-private-endpoint) to enable communication between your VNet and workspace.
1. Add the following services to the virtual network by using _either_ a __service endpoint__ or a __private endpoint__. Also allow trusted Microsoft services to access these | Service | Endpoint information | Allow trusted information |
For detailed instructions on how to complete these steps, see [Secure a training
### Example training job submission
-In this section, you learn how Azure Machine Learning securely communicates between services to submit a training job. This shows you how all your configurations work together to secure communication.
+In this section, you learn how Azure Machine Learning securely communicates between services to submit a training job. This example shows you how all your configurations work together to secure communication.
1. The client uploads training scripts and training data to storage accounts that are secured with a service or private endpoint.
machine-learning How To Prepare Datasets For Automl Images https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-prepare-datasets-for-automl-images.md
In this article, you learn how to prepare image data for training computer visio
To generate models for computer vision tasks with automated machine learning, you need to bring labeled image data as input for model training in the form of an `MLTable`. You can create an `MLTable` from labeled training data in JSONL format.
-If your labeled training data is in a different format (like, pascal VOC or COCO), you can use a [conversion script](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/image-object-detection/coco2jsonl.py) to first convert it to JSONL, and then create an `MLTable`. Alternatively, you can use Azure Machine Learning's [data labeling tool](how-to-create-image-labeling-projects.md) to manually label images, and export the labeled data to use for training your AutoML model.
+If your labeled training data is in a different format (such as Pascal VOC or COCO), you can use a [conversion script](https://github.com/Azure/azureml-examples/blob/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/image-object-detection/coco2jsonl.py) to first convert it to JSONL, and then create an `MLTable`. Alternatively, you can use Azure Machine Learning's [data labeling tool](how-to-create-image-labeling-projects.md) to manually label images, and export the labeled data to use for training your AutoML model.
## Prerequisites
my_data = Data(
Next, you will need to get the label annotations in JSONL format. The schema of labeled data depends on the computer vision task at hand. Refer to [schemas for JSONL files for AutoML computer vision experiments](reference-automl-images-schema.md) to learn more about the required JSONL schema for each task type.
-If your training data is in a different format (like, pascal VOC or COCO), [helper scripts](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/image-object-detection/coco2jsonl.py) to convert the data to JSONL are available in [notebook examples](https://github.com/Azure/azureml-examples/blob/main/sdk/python/jobs/automl-standalone-jobs).
+If your training data is in a different format (such as Pascal VOC or COCO), [helper scripts](https://github.com/Azure/azureml-examples/blob/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/image-object-detection/coco2jsonl.py) to convert the data to JSONL are available in [notebook examples](https://github.com/Azure/azureml-examples/blob/main/sdk/python/jobs/automl-standalone-jobs).
Once you have created the JSONL file by following the preceding steps, you can register it as a data asset by using the UI. Make sure you select the `stream` type in the schema section, as shown below.
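If you prefer the CLI to the studio UI, the same registration can be scripted. The following is a minimal sketch, in which the asset name, version, and MLTable folder path are placeholder assumptions:

```azurecli
# Sketch: register the folder containing the JSONL file and MLTable
# definition as a data asset. Name, version, and path are illustrative.
az ml data create --name fridge-items-train \
    --version 1 \
    --type mltable \
    --path ./data/training-mltable-folder \
    --workspace-name <your-workspace> \
    --resource-group <your-resource-group>
```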
machine-learning How To Registry Network Isolation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-registry-network-isolation.md
To connect to a registry that's secured behind a VNet, use one of the following
* [Azure Bastion](/azure/bastion/bastion-overview) - In this scenario, you create an Azure Virtual Machine (sometimes called a jump box) inside the VNet. You then connect to the VM using Azure Bastion. Bastion allows you to connect to the VM using either an RDP or SSH session from your local web browser. You then use the jump box as your development environment. Since it is inside the VNet, it can directly access the registry. ### Share assets from workspace to registry
+> [!NOTE]
+> Currently, sharing an asset from a secure workspace to an Azure Machine Learning registry isn't supported if the storage account containing the asset has public access disabled.
Create a private endpoint to the registry, storage, and ACR from the VNet of the workspace. If you're trying to connect to multiple registries, create a private endpoint for each registry and its associated storage account and ACR. For more information, see the [How to create a private endpoint](#how-to-create-a-private-endpoint) section.
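As an illustration, the following sketch creates one such private endpoint from the workspace VNet to a registry's storage account; every name and the resource ID are placeholders, and the ACR endpoint would use `--group-id registry` instead of `blob`:

```azurecli
# Sketch: create a private endpoint from the workspace VNet to the storage
# account behind a registry. All names and the resource ID are placeholders.
az network private-endpoint create \
    --name registry-storage-pe \
    --resource-group <your-resource-group> \
    --vnet-name <workspace-vnet> \
    --subnet <workspace-subnet> \
    --private-connection-resource-id "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Storage/storageAccounts/<account>" \
    --group-id blob \
    --connection-name registry-storage-connection
```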
machine-learning How To Secure Inferencing Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-secure-inferencing-vnet.md
Last updated 09/06/2022
# Secure an Azure Machine Learning inferencing environment with virtual networks - In this article, you learn how to secure inferencing environments (online endpoints) with a virtual network in Azure Machine Learning. There are two inference options that can be secured using a VNet: * Azure Machine Learning managed online endpoints+
+ > [!TIP]
+ > Microsoft recommends using an Azure Machine Learning **managed virtual network** (preview) instead of the steps in this article when securing managed online endpoints. With a managed virtual network, Azure Machine Learning handles the job of network isolation for your workspace and managed computes. You can also add private endpoints for resources needed by the workspace, such as Azure Storage Account. For more information, see [Workspace managed network isolation](how-to-managed-network.md).
+ * Azure Kubernetes Service > [!TIP]
In this article, you learn how to secure inferencing environments (online endpoi
+ Read the [Network security overview](how-to-network-security-overview.md) article to understand common virtual network scenarios and overall virtual network architecture.
-+ An existing virtual network and subnet, that is used to secure the Azure Machine Learning workspace.
++ An existing virtual network and subnet that is used to secure the Azure Machine Learning workspace. [!INCLUDE [network-rbac](includes/network-rbac.md)]
To use Azure Kubernetes Service cluster for secure inference, use the following
* CLI v2 - https://github.com/Azure/azureml-examples/tree/main/cli/endpoints/online/kubernetes * Python SDK V2 - https://github.com/Azure/azureml-examples/tree/main/sdk/python/endpoints/online/kubernetes
- * Studio UI - Follow the steps in [managed online endpoint deployment](how-to-use-managed-online-endpoint-studio.md) through the Studio. After entering the __Endpoint name__ select __Kubernetes__ as the compute type instead of __Managed__
+ * Studio UI - Follow the steps in [managed online endpoint deployment](how-to-use-managed-online-endpoint-studio.md) through the Studio. After you enter the __Endpoint name__, select __Kubernetes__ as the compute type instead of __Managed__.
## Limit outbound connectivity from the virtual network
machine-learning How To Secure Online Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-secure-online-endpoint.md
Title: Network isolation of managed online endpoints
+ Title: How to secure managed online endpoints with network isolation
description: Use private endpoints to provide network isolation for Azure Machine Learning managed online endpoints.
Previously updated : 04/26/2023- Last updated : 08/18/2023+
-# Use network isolation with managed online endpoints
+# Secure your managed online endpoints with network isolation
-When deploying a machine learning model to a managed online endpoint, you can secure communication with the online endpoint by using [private endpoints](../private-link/private-endpoint-overview.md).
+In this article, you'll use network isolation to secure a managed online endpoint. You'll create a managed online endpoint that uses an Azure Machine Learning workspace's private endpoint for secure inbound communication. You'll also configure the workspace with a **managed virtual network** that **allows only approved outbound** communication for deployments (preview). Finally, you'll create a deployment that uses the private endpoints of the workspace's managed virtual network for outbound communication.
-You can secure the inbound scoring requests from clients to an _online endpoint_. You can also secure the outbound communications between a _deployment_ and the Azure resources it uses. Security for inbound and outbound communication are configured separately. For more information on endpoints and deployments, see [What are endpoints and deployments](concept-endpoints-online.md).
-The following diagram shows how communications flow through private endpoints to the managed online endpoint. Incoming scoring requests from clients are received through the workspace private endpoint from your virtual network. Outbound communication with services is handled through private endpoints to those service instances from the deployment:
-
+For examples that use the legacy method for network isolation, see the deployment files [deploy-moe-vnet-legacy.sh](https://github.com/Azure/azureml-examples/blob/main/cli/deploy-moe-vnet-legacy.sh) (for deployment using a generic model) and [deploy-moe-vnet-mlflow-legacy.sh](https://github.com/Azure/azureml-examples/blob/main/cli/deploy-moe-vnet-mlflow-legacy.sh) (for deployment using an MLflow model) in the azureml-examples GitHub repo.
## Prerequisites
-To start with, you would need an Azure subscription, CLI or SDK to interact with Azure Machine Learning workspace and related entities, and the right permission.
+To begin, you need an Azure subscription, the Azure CLI or SDK to interact with your Azure Machine Learning workspace and related entities, and the right permissions.
* To use Azure Machine Learning, you must have an Azure subscription. If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/) today.
-* You must install and configure the Azure CLI and `ml` extension or the Azure Machine Learning Python SDK v2. For more information, see the following articles:
+* Install and configure the [Azure CLI](/cli/azure/) and the `ml` extension for the Azure CLI. For more information, see [Install, set up, and use the CLI (v2)](how-to-configure-cli.md).
+ >[!TIP]
+ > Azure Machine Learning managed virtual network was introduced on May 23rd, 2023. If you have an older version of the `ml` extension, you may need to update it for the examples in this article to work. To update the extension, use the following Azure CLI command:
+ >
+ > ```azurecli
+ > az extension update -n ml
+ > ```
- * [Install, set up, and use the CLI (v2)](how-to-configure-cli.md).
- * [Install the Python SDK v2](https://aka.ms/sdk-v2-install).
+* The CLI examples in this article assume that you're using the Bash (or compatible) shell. For example, from a Linux system or [Windows Subsystem for Linux](/windows/wsl/about).
-* You must have an Azure Resource Group, in which you (or the service principal you use) need to have `Contributor` access. You'll have such a resource group if you configured your `ml` extension per the above article.
+* You must have an Azure Resource Group, in which you (or the service principal you use) need to have `Contributor` access. You'll have such a resource group if you've configured your `ml` extension.
* If you want to use a [user-assigned managed identity](../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md?pivots=identity-mi-methods-azp) to create and manage online endpoints and online deployments, the identity should have the proper permissions. For details about the required permissions, see [Set up service authentication](./how-to-identity-based-service-authentication.md#workspace). For example, you need to assign the proper RBAC permission for Azure Key Vault on the identity.
-There are additional prerequisites for workspace and its related entities.
-
-* You must have an Azure Machine Learning workspace, and the workspace must use a private endpoint. If you don't have one, the steps in this article create an example workspace, virtual network (VNet), and VM. For more information, see [Configure a private endpoint for Azure Machine Learning workspace](./how-to-configure-private-link.md).
-
-* The workspace has its `public_network_access` flag that can be either enabled or disabled. If you plan on using managed online endpoint deployments that use __public outbound__, then you must also [configure the workspace to allow public access](how-to-configure-private-link.md#enable-public-access). This is because outbound communication from managed online endpoint deployment is to the _workspace API_. When the deployment is configured to use __public outbound__, then the workspace must be able to accept that public communication (allow public access).
-
-* When the workspace is configured with a private endpoint, the Azure Container Registry for the workspace must be configured for __Premium__ tier. For more information, see [Azure Container Registry service tiers](../container-registry/container-registry-skus.md).
- ## Limitations
-* The `v1_legacy_mode` flag must be disabled (false) on your Azure Machine Learning workspace. If this flag is enabled, you won't be able to create a managed online endpoint. For more information, see [Network isolation with v2 API](how-to-configure-network-isolation-with-v2.md).
-* If your Azure Machine Learning workspace has a private endpoint that was created before May 24, 2022, you must recreate the workspace's private endpoint before configuring your online endpoints to use a private endpoint. For more information on creating a private endpoint for your workspace, see [How to configure a private endpoint for Azure Machine Learning workspace](how-to-configure-private-link.md).
+## Prepare your system
+1. Create the environment variables used by this example by running the following commands. Replace `<YOUR_WORKSPACE_NAME>` with the name to use for your workspace. Replace `<YOUR_RESOURCEGROUP_NAME>` with the resource group that will contain your workspace.
> [!TIP]
- > To confirm when a workspace is created, you can check the workspace properties. In Studio, click `View all properties in Azure Portal` from `Directory + Subscription + Workspace` section (top right of the Studio), Click JSON View from top right of the Overview page, and choose the latest API Version. You can check the value of `properties.creationTime`. You can do the same by using `az ml workspace show` with [CLI](how-to-manage-workspace-cli.md#get-workspace-information), or `my_ml_client.workspace.get("my-workspace-name")` with [SDK](how-to-manage-workspace.md?tabs=python#find-a-workspace), or `curl` on workspace with [REST API](how-to-manage-rest.md#drill-down-into-workspaces-and-their-resources).
-
-* When you use network isolation with a deployment, Azure Log Analytics is partially supported. All metrics and the `AMLOnlineEndpointTrafficLog` table are supported via Azure Log Analytics. `AMLOnlineEndpointConsoleLog` and `AMLOnlineEndpointEventLog` tables are currently not supported. As a workaround, you can use the [az ml online-deployment get_logs](/cli/azure/ml/online-deployment#az-ml-online-deployment-get-logs) CLI command, the [OnlineDeploymentOperations.get_logs()](/python/api/azure-ai-ml/azure.ai.ml.operations.onlinedeploymentoperations#azure-ai-ml-operations-onlinedeploymentoperations-get-logs) Python SDK, or the Deployment log tab in the Azure Machine Learning studio instead. For more information, see [Monitoring online endpoints](how-to-monitor-online-endpoints.md).
-
-* When you use network isolation with a deployment, you can use Azure Container Registry (ACR), Storage account, Key Vault and Application Insights from a different resource group in the same subscription, but you cannot use them if they are in a different subscription.
-
-* For online deployments with `egress_public_network_access` flag set to `disabled`, access from the deployments to Microsoft Container Registry (MCR) is restricted. If you want to leverage container images from MCR (such as when using curated environment or mlflow no-code deployment), recommendation is to build the image locally inside the virtual network ([docker build](https://docs.docker.com/engine/reference/commandline/build/)) and push the image into the private Azure Container Registry (ACR) which is attached with the workspace (for instance, using [docker push](../container-registry/container-registry-get-started-docker-cli.md#push-the-image-to-your-registry)). The images in this ACR is accessible to secured deployments via the private endpoints which are automatically created on behalf of you when you set `egress_public_network_access` flag to `disabled`. For a quick example, please refer to [build image under virtual network](https://github.com/Azure/azureml-examples/blob/main/cli/endpoints/online/managed/vnet/setup_vm/scripts/build_image.sh) and [end to end example for model deployment under virtual network](https://github.com/Azure/azureml-examples/blob/main/cli/deploy-moe-vnet.sh).
-
-> [!NOTE]
-> Requests to create, update, or retrieve the authentication keys are sent to the Azure Resource Manager over the public network.
-
-## Inbound (scoring)
-
-To secure scoring requests to the online endpoint to your virtual network, set the `public_network_access` flag for the endpoint to `disabled`:
-
-# [Azure CLI](#tab/cli)
-
-```azurecli
-az ml online-endpoint create -f endpoint.yml --set public_network_access=disabled
-```
-
-# [Python](#tab/python)
-
-```python
-from azure.ai.ml.entities import ManagedOnlineEndpoint
-
-endpoint = ManagedOnlineEndpoint(name='my-online-endpoint',
- description='this is a sample online endpoint',
- tags={'foo': 'bar'},
- auth_mode="key",
- public_network_access="disabled"
- # public_network_access="enabled"
-)
-```
-
-# [Studio](#tab/azure-studio)
-
-1. Go to the [Azure Machine Learning studio](https://ml.azure.com).
-1. Select the **Workspaces** page from the left navigation bar.
-1. Enter a workspace by clicking its name.
-1. Select the **Endpoints** page from the left navigation bar.
-1. Select **+ Create** to open the **Create deployment** setup wizard.
-1. Disable the **Public network access** flag at the **Create endpoint** step.
-
- :::image type="content" source="media/how-to-secure-online-endpoint/endpoint-disable-public-network-access.png" alt-text="A screenshot of how to disable public network access for an endpoint." lightbox="media/how-to-secure-online-endpoint/endpoint-disable-public-network-access.png":::
---
-When `public_network_access` is `disabled`, inbound scoring requests are received using the [private endpoint of the Azure Machine Learning workspace](./how-to-configure-private-link.md), and the endpoint can't be reached from public networks.
-
-> [!NOTE]
-> You can update (enable or disable) the `public_network_access` flag of an online endpoint after creating it.
-
-## Outbound (resource access)
-
-To restrict communication between a deployment and external resources, including the Azure resources it uses, set the deployment's `egress_public_network_access` flag to `disabled`. Use this flag to ensure that the download of the model, code, and images needed by your deployment are secured with a private endpoint. Note that disabling the flag alone is not enough ΓÇö your workspace must also have a private link that allows access to Azure resources via a private endpoint. See the [Prerequisites](#prerequisites) for more details.
-
-Secure outbound communication creates three private endpoints per deployment. One to the Azure Blob storage, one to the Azure Container Registry, and one to your workspace.
-
-> [!IMPORTANT]
-> * Each managed online endpoint deployment has its own independent Azure Machine Learning managed VNet. If the endpoint has multiple deployments, each deployment has its own managed VNet.
-> * We do not support peering between a deployment's managed VNet and your client VNet. For secure access to resources needed by the deployment, we use private endpoints to communicate with the resources.
-
-> [!WARNING]
-> * You cannot update (enable or disable) the `egress_public_network_access` flag after creating the deployment. Attempting to change the flag while updating the deployment will fail with an error.
-
-# [Azure CLI](#tab/cli)
-
-```azurecli
-az ml online-deployment create -f deployment.yml --set egress_public_network_access=disabled
-```
-
-# [Python](#tab/python)
-
-```python
-blue_deployment = ManagedOnlineDeployment(name='blue',
- endpoint_name='my-online-endpoint',
- model=model,
- code_configuration=CodeConfiguration(code_local_path='./model-1/onlinescoring/',
- scoring_script='score.py'),
- environment=env,
- instance_type='Standard_DS2_v2',
- instance_count=1,
- egress_public_network_access="disabled"
- # egress_public_network_access="enabled"
-)
-
-ml_client.begin_create_or_update(blue_deployment)
-```
-
-# [Studio](#tab/azure-studio)
-
-1. Follow the steps in the **Create deployment** setup wizard to the **Deployment** step.
-1. Disable the **Egress public network access** flag.
-
- :::image type="content" source="media/how-to-secure-online-endpoint/deployment-disable-egress-public-network-access.png" alt-text="A screenshot of how to disable the egress public network access for a deployment." lightbox="media/how-to-secure-online-endpoint/deployment-disable-egress-public-network-access.png":::
---
-The deployment communicates with these resources over the private endpoint:
-
-* The Azure Machine Learning workspace
-* The Azure Storage blob that is associated with the workspace
-* The Azure Container Registry for the workspace
-
-When you configure the `egress_public_network_access` to `disabled`, a new private endpoint is created per deployment, per service. For example, if you set the flag to `disabled` for three deployments to an online endpoint, a total of nine private endpoints are created. Each deployment would have three private endpoints to communicate with the workspace, blob, and container registry. To confirm the creation of the private endpoints, first check the storage account and container registry associated with the workspace (see [Download a configuration file](how-to-manage-workspace.md#download-a-configuration-file)), find each resource from Azure portal and check `Private endpoint connections` tab under the `Networking` menu.
-
-> [!IMPORTANT]
-> - As mentioned earlier, outbound communication from managed online endpoint deployment is to the _workspace API_. When the endpoint is configured to use __public outbound__ (in other words, `public_network_access` flag for the endpoint is set to `enabled`), then the workspace must be able to accept that public communication (`public_network_access` flag for the workspace set to `enabled`).
-> - When online deployments are created with `egress_public_network_access` flag set to `disabled`, they will have access to above secured resources only. For instance, if the deployment uses model assets uploaded to other storage accounts, the model download will fail. Ensure model assets are on the storage account associated with the workspace.
-> - When `egress_public_network_access` is set to `disabled`, the deployment can only access the workspace-associated resources secured in the virtual network. On the contrary, when `egress_public_network_access` is set to `enabled`, the deployment can only access the resources with public access, which means it cannot access the resources secured in the virtual network.
+ > Before creating a new workspace, you must create an Azure Resource Group to contain it. For more information, see [Manage Azure Resource Groups](/azure/azure-resource-manager/management/manage-resource-groups-cli).
+ ```azurecli
+ export RESOURCEGROUP_NAME="<YOUR_RESOURCEGROUP_NAME>"
+ export WORKSPACE_NAME="<YOUR_WORKSPACE_NAME>"
+ ```
-## Scenarios
-
-The following table lists the supported configurations when configuring inbound and outbound communications for an online endpoint:
-
-| Configuration | Inbound </br> (Endpoint property) | Outbound </br> (Deployment property) | Supported? |
-| -- | -- | | |
-| secure inbound with secure outbound | `public_network_access` is disabled | `egress_public_network_access` is disabled | Yes |
-| secure inbound with public outbound | `public_network_access` is disabled | `egress_public_network_access` is enabled</br>The workspace must also allow public access as the deployment outbound is to the workspace API. | Yes |
-| public inbound with secure outbound | `public_network_access` is enabled | `egress_public_network_access` is disabled | Yes |
-| public inbound with public outbound | `public_network_access` is enabled | `egress_public_network_access` is enabled</br>The workspace must also allow public access as the deployment outbound is to the workspace API. | Yes |
--
-## End-to-end example
-
-Use the information in this section to create an example configuration that uses private endpoints to secure online endpoints.
-
-> [!TIP]
-> The end-to-end example in this article comes from the files in the __azureml-examples__ GitHub repository. To clone the samples repository and switch to the repository's `cli/` directory, use the following commands:
->
-> ```azurecli
-> git clone https://github.com/Azure/azureml-examples
-> cd azureml-examples/cli
-> ```
->
-> In this example, an Azure Virtual Machine is created inside the virtual network. You connect to the VM using SSH, and run the deployment from the VM. This configuration is used to simplify the steps in this example, and does not represent a typical secure configuration. For example, in a production environment you would most likely use a VPN client or Azure ExpressRoute to directly connect clients to the virtual network.
-
-### Create workspace and secured resources
-
-The steps in this section use an Azure Resource Manager template to create the following Azure resources:
-
-* Azure Virtual Network
-* Azure Machine Learning workspace
-* Azure Container Registry
-* Azure Key Vault
-* Azure Storage account (blob & file storage)
-
-Public access is disabled for all the services. While the Azure Machine Learning workspace is secured behind a virtual network, it's configured to allow public network access. For more information, see [CLI 2.0 secure communications](how-to-configure-cli.md#secure-communications). A scoring subnet is created, along with outbound rules that allow communication with the following Azure
-
-* Azure Active Directory
-* Azure Resource Manager
-* Azure Front Door
-* Microsoft Container Registries
-
-The following diagram shows the different components created in this architecture:
-
-The following diagram shows the overall architecture of this example:
+1. Create your workspace. The `-m allow_only_approved_outbound` parameter configures a managed virtual network for the workspace and blocks outbound traffic except to approved destinations.
+ :::code language="azurecli" source="~/azureml-examples-main/cli/deploy-managed-online-endpoint-workspacevnet.sh" ID="create_workspace_allow_only_approved_outbound" :::
-To create the resources, use the following Azure CLI commands. To create a resource group. Replace `<my-resource-group>` and `<my-location>` with the desired values.
+ Alternatively, if you'd like to allow the deployment to send outbound traffic to the internet, uncomment the following code and run it instead.
-```azurecli
-# create resource group
-az group create --name <my-resource-group> --location <my-location>
-```
+ :::code language="azurecli" source="~/azureml-examples-main/cli/deploy-managed-online-endpoint-workspacevnet.sh" ID="create_workspace_internet_outbound" :::
-Clone the example files for the deployment, use the following command:
+ For more information on how to create a new workspace or to upgrade your existing workspace to use a managed virtual network, see [Configure a managed virtual network to allow internet outbound](how-to-managed-network.md#configure-a-managed-virtual-network-to-allow-internet-outbound).
-```azurecli
-#Clone the example files
-git clone https://github.com/Azure/azureml-examples
-```
+ When the workspace is configured with a private endpoint, the Azure Container Registry for the workspace must be configured for __Premium__ tier. For more information, see [Azure Container Registry service tiers](../container-registry/container-registry-skus.md).
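    For reference, the included snippet is roughly equivalent to the following sketch, using the variables exported earlier; the exact sample script may differ:

    ```azurecli
    # Sketch: create a workspace whose managed virtual network only allows
    # approved outbound traffic.
    az ml workspace create --name $WORKSPACE_NAME \
        --resource-group $RESOURCEGROUP_NAME \
        -m allow_only_approved_outbound
    ```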
-To create the resources, use the following Azure CLI commands. Replace `<UNIQUE_SUFFIX>` with a unique suffix for the resources that are created.
+1. Configure the defaults for the CLI so that you can avoid passing in the values for your workspace and resource group multiple times.
-```azurecli
-az deployment group create --template-file endpoints/online/managed/vnet/setup_ws/main.bicep --parameters suffix=$SUFFIX --resource-group <my-resource-group>
-```
-### Create the virtual machine jump box
+ ```azurecli
+ az configure --defaults workspace=$WORKSPACE_NAME group=$RESOURCEGROUP_NAME
+ ```
-To create an Azure Virtual Machine that can be used to connect to the virtual network, use the following command. Replace `<your-new-password>` with the password you want to use when connecting to this VM:
+1. Clone the examples repository to get the example files for the endpoint and deployment, then go to the repository's `cli` directory.
-```azurecli
-# create vm
-az vm create --name test-vm --vnet-name vnet-$SUFFIX --subnet snet-scoring --image UbuntuLTS --admin-username azureuser --admin-password <your-new-password> --resource-group <my-resource-group>
-```
+ ```azurecli
+ git clone --depth 1 https://github.com/Azure/azureml-examples
+ cd azureml-examples/cli
+ ```
-> [!IMPORTANT]
-> The VM created by these commands has a public endpoint that you can connect to over the public network.
+The commands in this tutorial are in the file `deploy-managed-online-endpoint-workspacevnet.sh` in the `cli` directory, and the YAML configuration files are in the `endpoints/online/managed/sample/` subdirectory.
-The response from this command is similar to the following JSON document:
+## Create a secured managed online endpoint
-```json
-{
- "fqdns": "",
- "id": "/subscriptions/<GUID>/resourceGroups/<my-resource-group>/providers/Microsoft.Compute/virtualMachines/test-vm",
- "location": "westus",
- "macAddress": "00-0D-3A-ED-D8-E8",
- "powerState": "VM running",
- "privateIpAddress": "192.168.0.12",
- "publicIpAddress": "20.114.122.77",
- "resourceGroup": "<my-resource-group>",
- "zones": ""
-}
-```
+To create a secured managed online endpoint, create the endpoint in your workspace and set the endpoint's `public_network_access` to `disabled` to control inbound communication. The endpoint will then have to use the workspace's private endpoint for inbound communication.
-Use the following command to connect to the VM using SSH. Replace `publicIpAddress` with the value of the public IP address in the response from the previous command:
+Because the workspace is configured to have a managed virtual network, any deployments of the endpoint will use the private endpoints of the managed virtual network for outbound communication (preview).
-```azurecli
-ssh azureusere@publicIpAddress
-```
+1. Set the endpoint's name.
-When prompted, enter the password you used when creating the VM.
+ :::code language="azurecli" source="~/azureml-examples-main/cli/deploy-managed-online-endpoint-workspacevnet.sh" ID="set_endpoint_name" :::
-### Configure the VM
+1. Create an endpoint with `public_network_access` disabled to block inbound traffic.
-1. Use the following commands from the SSH session to install the CLI and Docker:
+ :::code language="azurecli" source="~/azureml-examples-main/cli/deploy-managed-online-endpoint-workspacevnet.sh" ID="create_endpoint_inbound_blocked" :::
- :::code language="azurecli" source="~/azureml-examples-main/cli/endpoints/online/managed/vnet/setup_vm/scripts/vmsetup.sh" id="setup_docker_az_cli":::
+ Alternatively, if you'd like to allow the endpoint to receive scoring requests from the internet, uncomment the following code and run it instead.
-1. To create the environment variables used by this example, run the following commands. Replace `<YOUR_SUBSCRIPTION_ID>` with your Azure subscription ID. Replace `<YOUR_RESOURCE_GROUP>` with the resource group that contains your workspace. Replace `<SUFFIX_USED_IN_SETUP>` with the suffix you provided earlier. Replace `<LOCATION>` with the location of your Azure workspace. Replace `<YOUR_ENDPOINT_NAME>` with the name to use for the endpoint.
+ :::code language="azurecli" source="~/azureml-examples-main/cli/deploy-managed-online-endpoint-workspacevnet.sh" ID="create_endpoint_inbound_allowed" :::
- > [!TIP]
- > Use the tabs to select whether you want to perform a deployment using an MLflow model or generic ML model.
+1. Create a deployment in the workspace managed virtual network.
- # [Generic model](#tab/model)
+ :::code language="azurecli" source="~/azureml-examples-main/cli/deploy-managed-online-endpoint-workspacevnet.sh" ID="create_deployment" :::
- :::code language="azurecli" source="~/azureml-examples-main/cli/deploy-moe-vnet.sh" id="set_env_vars":::
+1. Get the status of the deployment.
- # [MLflow model](#tab/mlflow)
+ :::code language="azurecli" source="~/azureml-examples-main/cli/deploy-managed-online-endpoint-workspacevnet.sh" ID="get_status" :::
- :::code language="azurecli" source="~/azureml-examples-main/cli/deploy-moe-vnet-mlflow.sh" id="set_env_vars":::
+1. Test the endpoint with a scoring request, using the CLI.
-
+ :::code language="azurecli" source="~/azureml-examples-main/cli/deploy-managed-online-endpoint-workspacevnet.sh" ID="test_endpoint" :::
-1. To sign in to the Azure CLI in the VM environment, use the following command:
+1. Get deployment logs.
- :::code language="azurecli" source="~/azureml-examples-main/cli/misc.sh" id="az_login":::
+ :::code language="azurecli" source="~/azureml-examples-main/cli/deploy-managed-online-endpoint-workspacevnet.sh" ID="get_logs" :::
-1. To configure the defaults for the CLI, use the following commands:
+1. Delete the endpoint if you no longer need it.
- :::code language="azurecli" source="~/azureml-examples-main/cli/endpoints/online/managed/vnet/setup_vm/scripts/vmsetup.sh" id="configure_defaults":::
+ :::code language="azurecli" source="~/azureml-examples-main/cli/deploy-managed-online-endpoint-workspacevnet.sh" ID="delete_endpoint" :::
-1. To clone the example files for the deployment, use the following command:
+1. Delete all the resources created in this article. Replace `<resource-group-name>` with the name of the resource group used in this example:
```azurecli
- sudo mkdir -p /home/samples; sudo git clone -b main --depth 1 https://github.com/Azure/azureml-examples.git /home/samples/azureml-examples
+ az group delete --resource-group <resource-group-name>
```
-1. To build a custom docker image to use with the deployment, use the following commands:
-
- :::code language="azurecli" source="~/azureml-examples-main/cli/endpoints/online/managed/vnet/setup_vm/scripts/build_image.sh" id="build_image":::
-
- > [!TIP]
- > In this example, we build the Docker image before pushing it to Azure Container Registry. Alternatively, you can build the image in your virtual network by using an Azure Machine Learning compute cluster and environments. For more information, see [Secure Azure Machine Learning workspace](how-to-secure-workspace-vnet.md#enable-azure-container-registry-acr).
-
-### Create a secured managed online endpoint
-
-1. To create a managed online endpoint that is secured using a private endpoint for inbound and outbound communication, use the following commands:
-
- > [!TIP]
- > You can test or debug the Docker image locally by using the `--local` flag when creating the deployment. For more information, see the [Deploy and debug locally](how-to-deploy-online-endpoints.md#deploy-and-debug-locally-by-using-local-endpoints) article.
-
- :::code language="azurecli" source="~/azureml-examples-main/cli/endpoints/online/managed/vnet/setup_vm/scripts/create_moe.sh" id="create_vnet_deployment":::
--
-1. To make a scoring request with the endpoint, use the following commands:
-
- :::code language="azurecli" source="~/azureml-examples-main/cli/endpoints/online/managed/vnet/setup_vm/scripts/score_endpoint.sh" id="check_deployment":::
-
-### Cleanup
-
-To delete the endpoint, use the following command:
--
-To delete the VM, use the following command:
--
-To delete all the resources created in this article, use the following command. Replace `<resource-group-name>` with the name of the resource group used in this example:
-
-```azurecli
-az group delete --resource-group <resource-group-name>
-```
- ## Troubleshooting [!INCLUDE [network isolation issues](includes/machine-learning-online-endpoint-troubleshooting.md)] ## Next steps
+- [Network isolation with managed online endpoints](concept-secure-online-endpoint.md)
+- [Workspace managed network isolation](how-to-managed-network.md)
+- [Tutorial: How to create a secure workspace](tutorial-create-secure-workspace.md)
- [Safe rollout for online endpoints](how-to-safely-rollout-online-endpoints.md)-- [How to autoscale managed online endpoints](how-to-autoscale-endpoints.md)-- [View costs for an Azure Machine Learning managed online endpoint](how-to-view-online-endpoints-costs.md) - [Access Azure resources with a online endpoint and managed identity](how-to-access-resources-from-endpoints-managed-identities.md) - [Troubleshoot online endpoints deployment](how-to-troubleshoot-online-endpoints.md)
machine-learning How To Secure Training Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-secure-training-vnet.md
ms.devlang: azurecli
[!INCLUDE [SDK v2](includes/machine-learning-sdk-v2.md)] Azure Machine Learning compute instance and compute cluster can be used to securely train models in an Azure Virtual Network. When planning your environment, you can configure the compute instance/cluster with or without a public IP address. The general differences between the two are:
The following table contains the differences between these configurations:
You can also use Azure Databricks or HDInsight to train models in a virtual network.
-> [!TIP]
-> Azure Machine Learning also provides **managed virtual networks** (preview). With a managed virtual network, Azure Machine Learning handles the job of network isolation for your workspace and managed computes. You can also add private endpoints for resources needed by the workspace, such as Azure Storage Account.
->
-> At this time, the managed virtual networks preview **doesn't** support no public IP configuration for compute resources. For more information, see [Workspace managed network isolation](how-to-managed-network.md).
- > [!IMPORTANT] > Items marked (preview) in this article are currently in public preview. > The preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
To create a compute cluster in an Azure Virtual Network in a different region th
## Compute instance/cluster with no public IP
+> [!WARNING]
+> This information is only valid when using an _Azure Virtual Network_. If you are using a _managed virtual network_, see [managed compute with a managed network](how-to-managed-network-compute.md).
+ > [!IMPORTANT] > If you have been using compute instances or compute clusters configured for no public IP without opting-in to the preview, you will need to delete and recreate them after January 20, 2023 (when the feature is generally available). >
ml_client.begin_create_or_update(entity=compute)
## Compute instance/cluster with public IP
+> [!IMPORTANT]
+> This information is only valid when using an _Azure Virtual Network_. If you are using a _managed virtual network_, see [managed compute with a managed network](how-to-managed-network-compute.md).
+ The following configurations are in addition to those listed in the [Prerequisites](#prerequisites) section, and are specific to **creating** compute instances/clusters that have a public IP: + If you put multiple compute instances/clusters in one virtual network, you may need to request a quota increase for one or more of your resources. The Machine Learning compute instance or cluster automatically allocates networking resources __in the resource group that contains the virtual network__. For each compute instance or cluster, the service allocates the following resources:
machine-learning How To Secure Workspace Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-secure-workspace-vnet.md
Previously updated : 01/19/2023 Last updated : 08/22/2023
[!INCLUDE [sdk/cli v2](includes/machine-learning-dev-v2.md)] In this article, you learn how to secure an Azure Machine Learning workspace and its associated resources in an Azure Virtual Network. - This article is part of a series on securing an Azure Machine Learning workflow. See the other articles in this series: * [Virtual network overview](how-to-network-security-overview.md)
In this article you learn how to enable the following workspaces resources in a
### Azure storage account
-* If you plan to use Azure Machine Learning studio and the storage account is also in the VNet, there are extra validation requirements:
+* If you plan to use Azure Machine Learning studio and the storage account is also in the virtual network, there are extra validation requirements:
- * If the storage account uses a __service endpoint__, the workspace private endpoint and storage service endpoint must be in the same subnet of the VNet.
- * If the storage account uses a __private endpoint__, the workspace private endpoint and storage private endpoint must be in the same VNet. In this case, they can be in different subnets.
+ * If the storage account uses a __service endpoint__, the workspace private endpoint and storage service endpoint must be in the same subnet of the virtual network.
+ * If the storage account uses a __private endpoint__, the workspace private endpoint and storage private endpoint must be in the same virtual network. In this case, they can be in different subnets.
### Azure Container Instances
-When your Azure Machine Learning workspace is configured with a private endpoint, deploying to Azure Container Instances in a VNet isn't supported. Instead, consider using a [Managed online endpoint with network isolation](how-to-secure-online-endpoint.md).
+When your Azure Machine Learning workspace is configured with a private endpoint, deploying to Azure Container Instances in a virtual network isn't supported. Instead, consider using a [Managed online endpoint with network isolation](how-to-secure-online-endpoint.md).
### Azure Container Registry
Azure Container Registry can be configured to use a private endpoint. Use the fo
1. Configure the ACR for the workspace to [Allow access by trusted services](../container-registry/allow-access-trusted-services.md).
-1. Create an Azure Machine Learning compute cluster. This cluster is used to build Docker images when ACR is behind a VNet. For more information, see [Create a compute cluster](how-to-create-attach-compute-cluster.md).
+1. Create an Azure Machine Learning compute cluster. This cluster is used to build Docker images when ACR is behind a virtual network. For more information, see [Create a compute cluster](how-to-create-attach-compute-cluster.md).
1. Use one of the following methods to configure the workspace to build Docker images using the compute cluster.
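For example, with the Azure CLI, pointing the workspace at a build cluster might look like the following sketch; the workspace, resource group, and cluster names are placeholders:

```azurecli
# Sketch: configure the workspace to build Docker images on a compute
# cluster instead of in ACR. All names are placeholders.
az ml workspace update --name <your-workspace> \
    --resource-group <your-resource-group> \
    --image-build-compute <your-compute-cluster>
```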
Azure Container Registry can be configured to use a private endpoint. Use the fo
To enable network isolation for Azure Monitor and the Application Insights instance for the workspace, use the following steps:
-1. Open your Application Insights resource in the Azure Portal. The __Overview__ tab may or may not have a Workspace property. If it _doesn't_ have the property, perform step 2. If it _does_, then you can proceed directly to step 3.
+1. Open your Application Insights resource in the Azure portal. The __Overview__ tab may or may not have a Workspace property. If it _doesn't_ have the property, perform step 2. If it _does_, then you can proceed directly to step 3.
> [!TIP] > New workspaces create a workspace-based Application Insights resource by default. If your workspace was recently created, then you would not need to perform step 2. 1. Upgrade the Application Insights instance for your workspace. For steps on how to upgrade, see [Migrate to workspace-based Application Insights resources](/azure/azure-monitor/app/convert-classic-resource).
-1. Create an Azure Monitor Private Link Scope and add the Application Insights instance from step 1 to the scope. For steps on how to do this, see [Configure your Azure Monitor private link](/azure/azure-monitor/logs/private-link-configure).
+1. Create an Azure Monitor Private Link Scope and add the Application Insights instance from step 1 to the scope. For more information, see [Configure your Azure Monitor private link](/azure/azure-monitor/logs/private-link-configure).
## Securely connect to your workspace
To enable network isolation for Azure Monitor and the Application Insights insta
> [!IMPORTANT] > While this is a supported configuration for Azure Machine Learning, Microsoft doesn't recommend it. You should verify this configuration with your security team before using it in production.
-In some cases, you may need to allow access to the workspace from the public network (without connecting through the VNet using the methods detailed the [Securely connect to your workspace](#securely-connect-to-your-workspace) section). Access over the public internet is secured using TLS.
+In some cases, you may need to allow access to the workspace from the public network (without connecting through the virtual network using the methods detailed in the [Securely connect to your workspace](#securely-connect-to-your-workspace) section). Access over the public internet is secured using TLS.
To enable public network access to the workspace, use the following steps: 1. [Enable public access](how-to-configure-private-link.md#enable-public-access) to the workspace after configuring the workspace's private endpoint.
-1. [Configure the Azure Storage firewall](../storage/common/storage-network-security.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json#grant-access-from-an-internet-ip-range) to allow communication with the IP address of clients that connect over the public internet. You may need to change the allowed IP address if the clients don't have a static IP. For example, if one of your Data Scientists is working from home and can't establish a VPN connection to the VNet.
+1. [Configure the Azure Storage firewall](../storage/common/storage-network-security.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json#grant-access-from-an-internet-ip-range) to allow communication with the IP address of clients that connect over the public internet. You may need to change the allowed IP address if the clients don't have a static IP; for example, if one of your data scientists is working from home and can't establish a VPN connection to the virtual network.
## Next steps
machine-learning How To Setup Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-setup-authentication.md
For information and samples on authenticating with MSAL, see the following artic
* JavaScript - [How to migrate a JavaScript app from ADAL.js to MSAL.js](../active-directory/develop/msal-compare-msal-js-and-adal-js.md). * Node.js - [How to migrate a Node.js app from Microsoft Authentication Library to MSAL](../active-directory/develop/msal-node-migration.md).
-* Python - [Microsoft Authentication Library to MSAL migration guide for Python](../active-directory/develop/migrate-python-adal-msal.md).
+* Python - [Microsoft Authentication Library to MSAL migration guide for Python](/entra/msal/python/advanced/migrate-python-adal-msal).
## Use managed identity authentication
machine-learning How To Setup Mlops Azureml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-setup-mlops-azureml.md
Before you can set up an MLOps project with Azure Machine Learning, you need to
} ```
-1. Repeat **Step 3.** if you're creating service principals for Dev and Prod environments. For this demo, we'll be creating only one environment, which is Prod.
+1. Repeat **Step 3** if you're creating service principals for Dev and Prod environments. For this demo, we'll be creating only one environment, which is Prod.
1. Close the Cloud Shell once the service principals are created.
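For reference, the service principal creation in Step 3 typically uses a command along these lines; the display name and subscription scope below are placeholders:

```azurecli
# Sketch: create a service principal with Contributor rights on the
# subscription. The JSON output includes the appId, password, and tenant
# values referenced when creating the service connection below.
az ad sp create-for-rbac --name <your-service-principal-name> \
    --role contributor \
    --scopes /subscriptions/<SUBSCRIPTION_ID>
```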
Before you can set up an MLOps project with Azure Machine Learning, you need to
5. Select **Azure Resource Manager**, select **Next**, select **Service principal (manual)**, select **Next** and select the Scope Level **Subscription**. - **Subscription Name** - Use the name of the subscription where your service principal is stored.
- - **Subscription Id** - Use the `subscriptionId` you used in **Step 1.** input as the Subscription ID
- - **Service Principal Id** - Use the `appId` from **Step 1.** output as the Service Principal ID
- - **Service principal key** - Use the `password` from **Step 1.** output as the Service Principal Key
- - **Tenant ID** - Use the `tenant` from **Step 1.** output as the Tenant ID
+ - **Subscription Id** - Use the `subscriptionId` you used in **Step 1** input as the Subscription ID
+ - **Service Principal Id** - Use the `appId` from **Step 1** output as the Service Principal ID
+ - **Service principal key** - Use the `password` from **Step 1** output as the Service Principal Key
+ - **Tenant ID** - Use the `tenant` from **Step 1** output as the Tenant ID
6. Name the service connection **Azure-ARM-Prod**.
machine-learning How To Submit Spark Jobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-submit-spark-jobs.md
These prerequisites cover the submission of a Spark job from Azure Machine Learn
### Attach user assigned managed identity using `ARMClient`
-1. Install [ARMClient](https://github.com/projectkudu/ARMClient), a simple command line tool that invokes the Azure Resource Manager API.
+1. Install [`ARMClient`](https://github.com/projectkudu/ARMClient), a simple command line tool that invokes the Azure Resource Manager API.
1. Create a JSON file that defines the user-assigned managed identity that should be attached to the workspace: ```json {
These prerequisites cover the submission of a Spark job from Azure Machine Learn
> - To ensure successful execution of the Spark job, assign the **Contributor** and **Storage Blob Data Contributor** roles, on the Azure storage account used for data input and output, to the identity that the Spark job uses > - Public Network Access should be enabled in Azure Synapse workspace to ensure successful execution of the Spark job using an [attached Synapse Spark pool](./how-to-manage-synapse-spark-pool.md). > - If an [attached Synapse Spark pool](./how-to-manage-synapse-spark-pool.md) points to a Synapse Spark pool, in an Azure Synapse workspace that has a managed virtual network associated with it, [a managed private endpoint to storage account should be configured](../synapse-analytics/security/connect-to-a-secure-storage-account.md) to ensure data access.
-> - Serverless Spark compute supports a managed virtual network (preview). If a [managed network is provisioned for the serverless Spark compute, the corresponding private endpoints for the storage account should also be provisioned](./how-to-managed-network.md#configure-for-serverless-spark-jobs) to ensure data access.
+> - Serverless Spark compute supports Azure Machine Learning managed virtual network (preview). If a [managed network is provisioned for the serverless Spark compute, the corresponding private endpoints for the storage account should also be provisioned](./how-to-managed-network.md#configure-for-serverless-spark-jobs) to ensure data access.
## Submit a standalone Spark job A Python script developed by [interactive data wrangling](./interactive-data-wrangling-with-apache-spark-azure-ml.md) can be used to submit a batch job to process a larger volume of data, after making necessary changes for Python script parameterization. A simple data wrangling batch job can be submitted as a standalone Spark job.
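For instance, with the CLI v2, a standalone Spark job defined in a YAML specification might be submitted as in the following sketch; the file name, workspace, and resource group are placeholders:

```azurecli
# Sketch: submit a standalone Spark job from a YAML specification.
# The file name and workspace details are illustrative.
az ml job create -f spark-job.yml \
    --workspace-name <your-workspace> \
    --resource-group <your-resource-group>
```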
machine-learning How To Troubleshoot Auto Ml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-troubleshoot-auto-ml.md
+
+ Title: Troubleshoot automated ML experiments
+
+description: Learn how to troubleshoot and resolve issues in your automated machine learning experiments.
+++++ Last updated : 10/21/2021++++
+# Troubleshoot automated ML experiments
++
+In this guide, learn how to identify and resolve issues in your automated machine learning experiments.
+
+## Troubleshoot automated ML for Images and NLP in Studio
+
+If a run for Automated ML for Images or NLP fails, use the following steps to understand the error.
+1. In the studio UI, the AutoML run shows a failure message that indicates the reason for the failure.
+2. For more details, go to the child run of this AutoML run. This child run is a HyperDrive run.
+3. On the "Trials" tab, you can check all the trials performed for this HyperDrive run.
+4. Go to the failed trial runs.
+5. These runs have an error message in the "Status" section of the "Overview" tab that indicates the reason for the failure.
+ Select "See more details" to get more information about the failure.
+6. Check "std_log.txt" on the "Outputs + Logs" tab for detailed logs and exception traces.
+
+If your Automated ML run uses pipeline runs for trials, use the following steps to understand the error.
+1. Follow steps 1-4 above to identify the failed trial run.
+2. This run shows the pipeline run; the failed nodes in the pipeline are marked in red.
+3. Double-click the failed node in the pipeline.
+4. These runs have an error message in the "Status" section of the "Overview" tab that indicates the reason for the failure.
+ Select "See more details" to get more information about the failure.
+5. Check "std_log.txt" on the "Outputs + Logs" tab for detailed logs and exception traces.
+
+## Next steps
+++ [Train computer vision models with automated machine learning](how-to-auto-train-image-models.md).++ [Train natural language processing models with automated machine learning](how-to-auto-train-nlp-models.md).
machine-learning How To Troubleshoot Kubernetes Extension https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-troubleshoot-kubernetes-extension.md
To update the extension with a custom controller class:
``` az ml extension update --config nginxIngress.controller="k8s.io/amlarc-ingress-nginx" ```
+#### Nginx ingress controller installed with the Azure Machine Learning extension crashes due to out-of-memory (OOM) errors
+**Symptom**
+ The nginx ingress controller installed with the Azure Machine Learning extension crashes due to out-of-memory (OOM) errors even when there is no workload. The controller logs do not show any useful information to diagnose the problem.
+**Possible Cause**
+This issue can occur if the nginx ingress controller runs on a node with many CPUs. By default, the nginx ingress controller spawns worker processes according to the number of CPUs, which can consume more memory and cause OOM errors on nodes with more CPUs. This is a known [issue](https://github.com/kubernetes/ingress-nginx/issues/8166) reported on GitHub.
+
+**Resolution**
+
+To resolve this issue, you can:
+* Adjust the number of worker processes by installing the extension with the parameter `nginxIngress.controllerConfig.worker-processes=8`.
+* Increase the memory limit by using the parameter `nginxIngress.resources.controller.limits.memory=<new limit>`.
+
+Be sure to adjust these two parameters according to your specific node specifications and workload requirements, as in the sketch below.
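For example, a sketch that mirrors the update command shown earlier (the `2Gi` value, and passing both settings in one `--config` list, are illustrative assumptions to verify against your extension version):
``` az ml extension update --config nginxIngress.controllerConfig.worker-processes=8 nginxIngress.resources.controller.limits.memory=2Gi ```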
machine-learning How To Use Automl Onnx Model Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-automl-onnx-model-dotnet.md
ONNX is an open-source format for AI models. ONNX supports interoperability betw
- [.NET Core SDK 3.1 or greater](https://dotnet.microsoft.com/download) - Text Editor or IDE (such as [Visual Studio](https://visualstudio.microsoft.com/vs/) or [Visual Studio Code](https://code.visualstudio.com/Download))-- ONNX model. To learn how to train an AutoML ONNX model, see the following [bank marketing classification notebook (SDK v1)](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/classification-bank-marketing-all-features/auto-ml-classification-bank-marketing-all-features.ipynb).
+- ONNX model. To learn how to train an AutoML ONNX model, see the following [bank marketing classification notebook (SDK v1)](https://github.com/Azure/azureml-examples/blob/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/classification-bank-marketing-all-features/auto-ml-classification-bank-marketing-all-features.ipynb).
- [Netron](https://github.com/lutzroeder/netron) (optional) ## Create a C# console application
machine-learning How To Use Batch Model Deployments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-batch-model-deployments.md
A model deployment is a set of resources required for hosting the model that doe
| `settings.retry_settings.timeout` | [Optional] The timeout in seconds for a `scoring_script` `run()` for scoring a mini batch. | | `settings.error_threshold` | [Optional] The number of input file scoring failures that should be ignored. If the error count for the entire input goes above this value, the batch scoring job will be terminated. The example uses `-1`, which indicates that any number of failures is allowed without terminating the batch scoring job. | | `settings.logging_level` | [Optional] Log verbosity. Values in increasing verbosity are: WARNING, INFO, and DEBUG. |
+ | `settings.environment_variables` | [Optional] Dictionary of environment variable name-value pairs to set for each batch scoring job. |
# [Python](#tab/python)
A model deployment is a set of resources required for hosting the model that doe
| `settings.retry_settings.timeout` | The timeout in seconds for scoring a mini batch (default is 30) | | `settings.output_action` | Indicates how the output should be organized in the output file. Allowed values are `append_row` or `summary_only`. Default is `append_row` | | `settings.logging_level` | The log verbosity level. Allowed values are `warning`, `info`, `debug`. Default is `info`. |
- | `environment_variables` | Dictionary of environment variable name-value pairs to set for each batch scoring job. |
+ | `settings.environment_variables` | Dictionary of environment variable name-value pairs to set for each batch scoring job. |
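As a minimal sketch (assuming an existing `MLClient` named `ml_client` and a registered model object `model`; all names are placeholders), per-job environment variables are passed through the deployment settings:

```python
from azure.ai.ml.entities import (
    BatchRetrySettings,
    ModelBatchDeployment,
    ModelBatchDeploymentSettings,
)

deployment = ModelBatchDeployment(
    name="my-deployment",          # placeholder deployment name
    endpoint_name="my-endpoint",   # placeholder endpoint name
    model=model,                   # previously registered model object
    compute="batch-cluster",
    settings=ModelBatchDeploymentSettings(
        mini_batch_size=10,
        output_action="append_row",
        retry_settings=BatchRetrySettings(max_retries=3, timeout=30),
        logging_level="info",
        # Environment variables set for each batch scoring job.
        environment_variables={"DATA_FORMAT": "csv", "MY_FEATURE_FLAG": "1"},
    ),
)
ml_client.batch_deployments.begin_create_or_update(deployment)
```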
# [Studio](#tab/studio)
machine-learning How To Use Batch Pipeline Deployments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-batch-pipeline-deployments.md
Once the deployment is created, it's ready to receive jobs. You can invoke the d
> [!TIP]
-> In this example, the pipeline doesn't have inputs or outputs. However, they can be indicated at invocation time if any. To learn more about how to indicate inputs and outputs, see [Create jobs and input data for batch endpoints](how-to-access-data-batch-endpoints-jobs.md).
+> In this example, the pipeline doesn't have inputs or outputs. However, if the pipeline component requires some, they can be indicated at invocation time. To learn how to indicate inputs and outputs, see [Create jobs and input data for batch endpoints](how-to-access-data-batch-endpoints-jobs.md) or the tutorial [How to deploy a pipeline to perform batch scoring with preprocessing (preview)](how-to-use-batch-scoring-pipeline.md).
You can monitor the progress of the job and stream the logs using:
ml_client.compute.begin_delete(name="batch-cluster")
- [How to deploy a training pipeline with batch endpoints (preview)](how-to-use-batch-training-pipeline.md) - [How to deploy a pipeline to perform batch scoring with preprocessing (preview)](how-to-use-batch-scoring-pipeline.md) - [Create batch endpoints from pipeline jobs (preview)](how-to-use-batch-pipeline-from-job.md)-- [Access data from batch endpoints jobs](how-to-access-data-batch-endpoints-jobs.md)
+- [Create jobs and input data for batch endpoints](how-to-access-data-batch-endpoints-jobs.md)
- [Troubleshooting batch endpoints](how-to-troubleshoot-batch-endpoints.md)
machine-learning How To Use Batch Pipeline From Job https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-batch-pipeline-from-job.md
To deploy the pipeline component, we have to create a batch deployment from the
Notice now how we use the property `job_definition` instead of `component`: ```python
- deployment = BatchPipelineComponentDeployment(
+ deployment = PipelineComponentBatchDeployment(
name="hello-batch-from-job", description="A hello world deployment with a single step. This deployment is created from a pipeline job.", endpoint_name=endpoint.name,
machine-learning How To Use Foundation Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-foundation-models.md
You can quickly test out any pre-trained model using the Sample Inference widget
> * When `egress_public_network_access` is set to `disabled`, the deployment can only access the workspace-associated resources secured in the virtual network. > * When `egress_public_network_access` is set to `enabled` for a managed online endpoint deployment, the deployment can only access the resources with public access. Which means that it cannot access resources secured in the virtual network. >
-> For more information, see [Outbound resource access for managed online endpoints](how-to-secure-online-endpoint.md#outbound-resource-access).
+> For more information, see [Secure outbound access with legacy network isolation method](concept-secure-online-endpoint.md#secure-outbound-access-with-legacy-network-isolation-method).
## How to evaluate foundation models using your own test data
machine-learning How To Use Serverless Compute https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-serverless-compute.md
Last updated 05/09/2023
[!INCLUDE [dev v2](includes/machine-learning-dev-v2.md)]
-You no longer need to [create and manage compute](./how-to-create-attach-compute-cluster.md) to train your model in a scalable way. Your job can instead be submitted to a new compute target type, called _serverless compute_. Serverless compute is the easiest way to run training jobs on Azure Machine Learning. Serverless compute is a fully-managed, on-demand compute. It is created, scaled, and managed by Azure Machine Learning for you. Through model training with serverless compute, machine learning professionals can focus on their expertise of building machine learning models and not have to learn about compute infrastructure or setting it up.
+You no longer need to [create and manage compute](./how-to-create-attach-compute-cluster.md) to train your model in a scalable way. Your job can instead be submitted to a new compute target type, called _serverless compute_. Serverless compute is the easiest way to run training jobs on Azure Machine Learning. Serverless compute is a fully managed, on-demand compute. It is created, scaled, and managed by Azure Machine Learning for you. Through model training with serverless compute, machine learning professionals can focus on their expertise of building machine learning models and not have to learn about compute infrastructure or setting it up.
[!INCLUDE [machine-learning-preview-generic-disclaimer](includes/machine-learning-preview-generic-disclaimer.md)]
-Machine learning professionals can specify the resources the job needs. Azure Machine Learning manages the compute infrastructure, and provides managed network isolation reducing the burden on you.
+Machine learning professionals can specify the resources the job needs. Azure Machine Learning manages the compute infrastructure and provides managed network isolation (preview), reducing the burden on you.
Enterprises can also reduce costs by specifying optimal resources for each job, as in the sketch below. IT admins can still apply control by specifying a cores quota at the subscription and workspace level, and by applying Azure policies.
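As a minimal sketch with the Python SDK v2, omitting `compute` submits the job to serverless compute; the script, environment, and instance type below are placeholder assumptions:

```python
from azure.ai.ml import command
from azure.ai.ml.entities import ResourceConfiguration

# A command job with no `compute` specified runs on serverless compute.
job = command(
    code="./src",                       # folder containing train.py (assumed)
    command="python train.py",
    environment="AzureML-sklearn-1.0-ubuntu20.04-py38-cpu@latest",
)

# Optionally request specific resources for the job.
job.resources = ResourceConfiguration(instance_type="Standard_E4s_v3", instance_count=1)

returned_job = ml_client.jobs.create_or_update(job)  # ml_client is an existing MLClient
```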
Serverless compute can be used to run command, sweep, AutoML, pipeline, distribu
* When using [Azure Machine Learning designer](concept-designer.md), select **Serverless** as default compute. > [!IMPORTANT]
-> If you want to use serverless compute with a workspace that is configured for network isolation, the workspace must be using a managed network isolation (preview). For more information, see [workspace managed network isolation](how-to-managed-network.md).
+> If you want to use serverless compute with a workspace that is configured for network isolation, the workspace must be using managed network isolation. For more information, see [workspace managed network isolation](how-to-managed-network.md).
## Performance considerations
machine-learning Application Sharing Policy Not Supported https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/known-issues/application-sharing-policy-not-supported.md
+
+ Title: Known issue - Application Sharing Policy isn't supported
+
+description: Configuring the applicationSharingPolicy property for a compute instance has no effect
+++++ Last updated : 08/14/2023+++
+# Known issue - The ApplicationSharingPolicy property isn't supported for compute instances
+
+Configuring the `applicationSharingPolicy` property for a compute instance has no effect because that property isn't supported.
+
+
+
+**Status:** Open
+
+**Problem area:** Compute
++
+## Symptoms
+
+When creating a compute instance, the documentation lists an `applicationSharingPolicy` property with the following options:
+
+- Personal: only the creator can access applications on this compute instance.
+- Shared: any workspace user can access applications on this instance, depending on their assigned role.
+
+Neither of these configurations has any effect on the compute instance.
+
+## Solutions and workarounds
+
+There's no workaround as this property isn't supported. The documentation will be updated to remove the reference to this property.
+
+## Next steps
+
+- [About known issues](azure-machine-learning-known-issues.md)
machine-learning Azure Machine Learning Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/known-issues/azure-machine-learning-known-issues.md
+
+ Title: Azure Machine Learning known issues
+description: Identify issues that are affecting Azure Machine Learning features.
+++++ Last updated : 08/04/2023+++
+# Azure Machine Learning known issues
+
+This page lists known issues for Azure Machine Learning features. Before submitting a Support request, review this list to see if the issue that you're experiencing is already known and being addressed.
++
+## Currently active known issues
+
+Select the **Title** to view more information about that specific known issue.
++
+|Area |Title |Issue publish date |
+||||
+|Compute | [Jupyter R Kernel doesn't start in new compute instance images](jupyter-r-kernel-not-starting.md) | August 14, 2023 |
+|Compute | [Provisioning error when creating a compute instance with A10 SKU](compute-a10-sku-not-supported.md) | August 14, 2023 |
+|Compute | [Idleshutdown property in Bicep template causes error](compute-idleshutdown-bicep.md) | August 14, 2023 |
+|Compute | [Slowness in compute instance terminal from a mounted path](compute-slowness-terminal-mounted-path.md)| August 14, 2023|
+|Compute| [Creating compute instance after a workspace move results in an Etag conflict error.](workspace-move-compute-instance-same-name.md)| August 14, 2023 |
+
+
+## Next steps
++
+- [See Azure service level outages](https://azure.status.microsoft/status)
+- [Get your questions answered by the Azure Machine Learning community](https://learn.microsoft.com/answers/tags/75/azure-machine-learning)
machine-learning Compute A10 Sku Not Supported https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/known-issues/compute-a10-sku-not-supported.md
+
+ Title: Known issue - A10 SKU not supported
+
+description: While trying to create a compute instance with A10 SKU, users encounter a provisioning error.
+++++ Last updated : 08/14/2023+++
+# Known issue - Provisioning error when creating a compute instance with A10 SKU
+
+While trying to create a compute instance with an A10 SKU, you'll encounter a provisioning error.
+++
+**Status:** Open
+
+**Problem area:** Compute Instance
+
+## Solutions and workarounds
+
+A10 SKUs aren't supported for compute instances. Consult the list of supported SKUs: [Supported VM series and sizes](https://learn.microsoft.com/azure/machine-learning/concept-compute-target?view=azureml-api-2&preserve-view=true#supported-vm-series-and-sizes).
+
+## Next steps
+
+- [About known issues](azure-machine-learning-known-issues.md)
machine-learning Compute Idleshutdown Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/known-issues/compute-idleshutdown-bicep.md
+
+ Title: Known issue - Compute | Idleshutdown property in Bicep template causes error
+
+description: When creating an Azure Machine Learning compute instance through Bicep compiled using MSBuild NuGet, using the `idleTimeBeforeShutdown` property as described in the API reference results in an error.
+++++ Last updated : 08/04/2023+++
+# Known issue - Idleshutdown property in Bicep template causes error
+
+When creating an Azure Machine Learning compute instance through Bicep compiled using [MSBuild/NuGet](../../azure-resource-manager/bicep/msbuild-bicep-file.md), using the `idleTimeBeforeShutdown` property as described in the API reference [Microsoft.MachineLearningServices workspaces/computes API reference](/azure/templates/microsoft.machinelearningservices/workspaces/computes?pivots=deployment-language-bicep) results in an error.
+
+
+++
+**Status:** Open
++
+**Problem area:** Compute
+
+## Symptoms
+
+When creating an Azure Machine Learning compute instance through Bicep compiled using [MSBuild/NuGet](../../azure-resource-manager/bicep/msbuild-bicep-file.md), using the `idleTimeBeforeShutdown` property as described in the API reference [Microsoft.MachineLearningServices workspaces/computes API reference](/azure/templates/microsoft.machinelearningservices/workspaces/computes?pivots=deployment-language-bicep) results in an error.
++
+## Solutions and workarounds
+
+To allow the property to be set, you can suppress warnings with the `#disable-next-line` directive. Enter `#disable-next-line BCP037` in the template above the line with the warning:
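A minimal sketch of where the directive goes (the API version, names, and values are illustrative assumptions, not a definitive template):

```bicep
param workspaceName string
param location string = resourceGroup().location

resource computeInstance 'Microsoft.MachineLearningServices/workspaces/computes@2023-04-01' = {
  name: '${workspaceName}/my-compute-instance'
  location: location
  properties: {
    computeType: 'ComputeInstance'
    properties: {
      vmSize: 'STANDARD_DS3_V2'
      #disable-next-line BCP037
      idleTimeBeforeShutdown: 'PT30M' // ISO 8601 duration: 30 minutes
    }
  }
}
```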
++
+## Next steps
+
+- [About known issues](azure-machine-learning-known-issues.md)
machine-learning Compute Slowness Terminal Mounted Path https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/known-issues/compute-slowness-terminal-mounted-path.md
+
+ Title: Known issue - Slowness in compute instance terminal from a mounted path
+
+description: While using the compute instance terminal inside a mounted path of a data folder, any commands executed from the terminal result in slowness.
+++++ Last updated : 08/04/2023+++
+# Known issue - Slowness in compute instance terminal from a mounted path
+
+While using the compute instance terminal inside a mounted path of a data folder, any commands executed from the terminal result in slowness. This issue is restricted to the terminal; running the commands from SDK using a notebook works as expected.
++
+
+**Status:** Open
+
+**Problem area:** Compute
+
+## Symptoms
+
+While using the compute instance terminal inside a mounted path of a data folder, any commands executed from the terminal result in slowness. This issue is restricted to the terminal; running the commands from SDK using a notebook works as expected.
+
+### Cause
+
+The `LD_LIBRARY_PATH` contains an empty string by default, which is treated as the current directory. This causes many library lookups on remote storage, resulting in slowness.
+
+As an example:
+
+```
+LD_LIBRARY_PATH /opt/intel/compilers_and_libraries_2018.3.222/linux/mpi/intel64/lib:/opt/intel/compilers_and_libraries_2018.3.222/linux/mpi/mic/lib::/anaconda/envs/azureml_py38/lib/:/usr/local/cuda/lib64:/usr/local/cuda/extras/CUPTI/lib64/
+```
+
+Notice the `::` in the path. This marks an empty entry, which is treated as the current directory.
+
+When one of the paths in the list is an empty string, every executable tries to find the dynamic libraries it needs relative to the current working directory.
+
+## Solutions and workarounds
+
+On the compute instance, set the path, making sure that `LD_LIBRARY_PATH` doesn't contain an empty string:
+
+```bash
+export LD_LIBRARY_PATH="$(echo $LD_LIBRARY_PATH | sed 's/\(:\)\1\+/\1/g')"
+```
+
+## Next steps
+
+- [About known issues](azure-machine-learning-known-issues.md)
machine-learning Jupyter R Kernel Not Starting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/known-issues/jupyter-r-kernel-not-starting.md
+
+ Title: Known issue - Compute instance | Jupyter R Kernel doesn't start in new compute instance images
+
+description: When trying to launch an R kernel in JupyterLab or a notebook in a new compute instance, the kernel fails to start
+++++ Last updated : 08/14/2023+++
+# Known issue - Jupyter R Kernel doesn't start in new compute instance images
+
+When trying to launch an R kernel in JupyterLab or a notebook in a new compute instance, the kernel fails to start with `Error: .onLoad failed in loadNamespace()`
+++
+**Status:** Open
++
+**Problem area:** Compute
++
+## Symptoms
+
+After creating a new compute instance, try to launch an R kernel in JupyterLab or a Jupyter notebook. The kernel fails to launch. You'll see the following messages in the Jupyter logs:
++
+```
+Aug 01 14:18:48 august-compute2Q6DP2A jupyter[11568]: Error: .onLoad failed in loadNamespace() for 'pbdZMQ', details:
+Aug 01 14:18:48 august-compute2Q6DP2A jupyter[11568]: call: dyn.load(file, DLLpath = DLLpath, ...)
+Aug 01 14:18:48 august-compute2Q6DP2A jupyter[11568]: error: unable to load shared object '/usr/local/lib/R/site-library/pbdZMQ/libs/pbdZMQ.so':
+Aug 01 14:18:48 august-compute2Q6DP2A jupyter[11568]: libzmq.so.5: cannot open shared object file: No such file or directory
+Aug 01 14:18:48 august-compute2Q6DP2A jupyter[11568]: Execution halted
+```
+
+## Solutions and workarounds
+
+To work around this issue, run this code in the compute instance terminal:
+
+```bash
+jupyter kernelspec list
+
+sudo rm -r <path/to/kernel/directory>
+
+conda create -n r -y -c conda-forge r-irkernel jupyter_client
+conda run -n r bash -c 'Rscript <(echo "IRkernel::installspec()")'
+jupyter kernelspec list
+
+```
+
+## Next steps
+
+- [About known issues](azure-machine-learning-known-issues.md)
machine-learning Workspace Move Compute Instance Same Name https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/known-issues/workspace-move-compute-instance-same-name.md
+
+ Title: Known issue - After a workspace move, creating a compute instance with the same name as a previous compute instance will fail
+
+description: After moving a workspace to a different subscription or resource group, creating a compute instance with the same name as a previous compute instance will fail with an Etag conflict error.
+++++ Last updated : 08/14/2023+++
+# Known issue - Creating compute instance after a workspace move results in an Etag conflict error.
+
+After moving a workspace to a different subscription or resource group, creating a compute instance with the same name as a previous compute instance will fail with an Etag conflict error.
++
+
+**Status:** Open
+
+**Problem area:** Compute
+
+## Symptoms
+
+After a workspace move, creating a compute instance with the same name as a previous compute instance will fail due to an Etag conflict error.
+
+When you move a workspace, the compute resources aren't moved to the target subscription. However, you can't reuse the compute instance names that you were using previously.
++
+## Solutions and workarounds
+
+To resolve this issue, use a different name for the compute instance.
+
+## Next steps
+
+- [About known issues](azure-machine-learning-known-issues.md)
machine-learning Migrate To V2 Assets Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/migrate-to-v2-assets-model.md
This article gives a comparison of scenario(s) in SDK v1 and SDK v2.
ml_client.models.create_or_update(run_model) ```
+For more information about models, see [Work with models in Azure Machine Learning](how-to-manage-models.md).
+ ## Mapping of key functionality in SDK v1 and SDK v2 |Functionality in SDK v1|Rough mapping in SDK v2|
machine-learning Migrate To V2 Execution Automl https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/migrate-to-v2-execution-automl.md
This article gives a comparison of scenario(s) in SDK v1 and SDK v2.
## Submit AutoML run
-* SDK v1: Below is a sample AutoML classification task. For the entire code, check out our [examples repo](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/classification-credit-card-fraud/auto-ml-classification-credit-card-fraud.ipynb).
+* SDK v1: Below is a sample AutoML classification task. For the entire code, check out our [examples repo](https://github.com/Azure/azureml-examples/blob/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/classification-credit-card-fraud/auto-ml-classification-credit-card-fraud.ipynb).
```python # Imports
machine-learning Migrate To V2 Execution Pipeline https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/migrate-to-v2-execution-pipeline.md
For more information, see the documentation here:
* [steps in SDK v1](/python/api/azureml-pipeline-steps/azureml.pipeline.steps?view=azure-ml-py&preserve-view=true) * [Create and run machine learning pipelines using components with the Azure Machine Learning SDK v2](how-to-create-component-pipeline-python.md)
-* [Build a simple ML pipeline for image classification (SDK v1)](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/using-pipelines/image-classification.ipynb)
+* [Build a simple ML pipeline for image classification (SDK v1)](https://github.com/Azure/azureml-examples/blob/v1-archive/v1/python-sdk/tutorials/using-pipelines/image-classification.ipynb)
* [OutputDatasetConfig](/python/api/azureml-core/azureml.data.output_dataset_config.outputdatasetconfig?view=azure-ml-py&preserve-view=true) * [`mldesigner`](https://pypi.org/project/mldesigner/)
machine-learning Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/policy-reference.md
Title: Built-in policy definitions for Azure Machine Learning description: Lists Azure Policy built-in policy definitions for Azure Machine Learning. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2023 Last updated : 08/30/2023
machine-learning How To Create Manage Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/prompt-flow/how-to-create-manage-runtime.md
If your compute instance is behind a VNet, you need to make the following change
- Make sure the managed identity of the workspace has the `Storage Blob Data Contributor` and `Storage Table Data Contributor` roles on the workspace default storage account. > [!NOTE]
-> This only works if your AOAI and other cognitive services allow access from all networks.
+> This only works if your AOAI and other Azure AI services allow access from all networks.
### Managed endpoint runtime related
machine-learning How To Develop A Standard Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/prompt-flow/how-to-develop-a-standard-flow.md
We also support the input type of int, bool, double, list and object.
:::image type="content" source="./media/how-to-develop-a-standard-flow/flow-input-datatype.png" alt-text="Screenshot of inputs showing the type drop-down menu with string selected. " lightbox = "./media/how-to-develop-a-standard-flow/flow-input-datatype.png":::
-You should first set the input schema (name: url; type: string), then set a value manually or by:
+## Develop the flow using different tools
-1. Inputting data manually in the value field.
-2. Selecting a row of existing dataset in **fill value from data**.
--
-The dataset selection supports search and autosuggestion.
--
-After selecting a row, the url is backfilled to the value field.
-
-If the existing datasets don't meet your needs, upload new data from files. We support **.csv** and **.txt** for now.
--
-## Develop tool in your flow
-
-In one flow, you can consume different kinds of tools. We now support LLM, Python, Serp API, Content Safety and Vector Search.
+In one flow, you can consume different kinds of tools. We now support LLM, Python, Serp API, Content Safety, Vector Search, and more.
### Add tool as your need
First define flow output schema, then select in drop-down the node whose output
## Next steps -- [Develop a customized evaluation flow](how-to-develop-an-evaluation-flow.md)
+- [Bulk test using more data and evaluate the flow performance](how-to-bulk-test-evaluate-flow.md)
- [Tune prompts using variants](how-to-tune-prompts-using-variants.md) - [Deploy a flow](how-to-deploy-for-real-time-inference.md)
machine-learning How To Integrate With Langchain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/prompt-flow/how-to-integrate-with-langchain.md
Prompt Flow can also be used together with the [LangChain](https://python.langch
> Prompt flow is currently in public preview. This preview is provided without a service-level agreement, and is not recommended for production workloads. Certain features might not be supported or might have constrained capabilities. > For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+This article covers the following sections:
+* [Benefits of LangChain integration](#benefits-of-langchain-integration)
+* [How to convert LangChain code into flow](#how-to-convert-langchain-code-into-flow)
+ * [Prerequisites for environment and runtime](#prerequisites-for-environment-and-runtime)
+ * [Convert credentials to prompt flow connection](#convert-credentials-to-prompt-flow-connection)
+ * [LangChain code conversion to a runnable flow](#langchain-code-conversion-to-a-runnable-flow)
+ ## Benefits of LangChain integration The integration of LangChain and prompt flow is a powerful combination that can help you build and test your custom language models with ease, especially when you want to use LangChain modules to initially build your flow and then use prompt flow to scale the experiments for bulk testing, evaluation, and eventual deployment.
Then you can create a [Prompt flow runtime](./how-to-create-manage-runtime.md) b
:::image type="content" source="./media/how-to-integrate-with-langchain/runtime-custom-env.png" alt-text="Screenshot of flows on the runtime tab with the add compute instance runtime popup. " lightbox = "./media/how-to-integrate-with-langchain/runtime-custom-env.png":::
-### Convert credentials to custom connection
+### Convert credentials to prompt flow connection
+
+When developing your LangChain code, you might have [defined environment variables to store your credentials, such as the AzureOpenAI API key](https://python.langchain.com/docs/integrations/llms/azure_openai_example), which is necessary for invoking the AzureOpenAI model.
-Custom connection helps you to securely store and manage secret keys or other sensitive credentials required for interacting with LLM, rather than exposing them in environment variables hard code in your code and running on the cloud, protecting them from potential security breaches.
-#### Create a custom connection
+Instead of hard-coding the credentials or exposing them as environment variables when you run LangChain code in the cloud, it's recommended to convert the credentials from environment variables into a prompt flow connection. A connection lets you securely store and manage the credentials separately from your code.
-Create a custom connection that stores all your LLM API KEY or other required credentials.
+#### Create a connection
+
+Create a connection that securely stores your credentials, such as your LLM API key or other required credentials.
1. Go to Prompt flow in your workspace, then go to **connections** tab.
-2. Select **Create** and select **Custom**.
+2. Select **Create** and select a connection type to store your credentials. (This example uses a custom connection.)
:::image type="content" source="./media/how-to-integrate-with-langchain/custom-connection-1.png" alt-text="Screenshot of flows on the connections tab highlighting the custom button in the create drop-down menu. " lightbox = "./media/how-to-integrate-with-langchain/custom-connection-1.png":::
-1. In the right panel, you can define your connection name, and you can add multiple *Key-value pairs* to store your credentials and keys by selecting **Add key-value pairs**.
+3. In the right panel, you can define your connection name, and you can add multiple *Key-value pairs* to store your credentials and keys by selecting **Add key-value pairs**.
:::image type="content" source="./media/how-to-integrate-with-langchain/custom-connection-2.png" alt-text="Screenshot of add custom connection point to the add key-value pairs button. " lightbox = "./media/how-to-integrate-with-langchain/custom-connection-2.png"::: > [!NOTE]
Create a custom connection that stores all your LLM API KEY or other required cr
Then this custom connection is used to replace the key and credential you explicitly defined in LangChain code. If you already have a LangChain integration prompt flow, you can jump to [Configure connection, input and output](#configure-connection-input-and-output). + ### LangChain code conversion to a runnable flow All LangChain code can directly run in the Python tools in your flow as long as your runtime environment contains the dependency packages. You can easily convert your LangChain code into a flow by following the steps below.
-#### Create a flow with Prompt tools and Python tools
+#### Convert LangChain code to flow structure
> [!NOTE] > There are two ways to convert your LangChain code into a flow.
All LangChain code can directly run in the Python tools in your flow as long as
- To simplify the conversion process, you can directly initialize the LLM model for invocation in a Python node by utilizing the LangChain integrated LLM library. - Another approach is converting the LLM consumption in your LangChain code to our LLM tools in the flow, for better experiment management. - For quick conversion of LangChain code into a flow, we recommend two types of flow structures, based on the use case: || Types | Desc | Case | |-| -- | -- | -- |
-|**Type A**| A flow that includes both **prompt tools** and **python tools**| You can extract your prompt template from your code into a prompt node, then combine the remaining code in a single Python node or multiple Python tools. | This structure is ideal for who want to easily **tune the prompt** by running flow variants and then choose the optimal one based on evaluation results.|
-|**Type B**| A flow that includes **python tools** only| You can create a new flow with python tools only, all code including prompt definition will run in python tools.| This structure is suitable for who don't need to explicit tune the prompt in workspace, but require faster batch testing based on larger scale datasets. |
+|**Type A**| A flow that includes both **prompt nodes** and **python nodes**| You can extract your prompt template from your code into a prompt node, then combine the remaining code in a single Python node or multiple Python tools. | This structure is ideal for those who want to easily **tune the prompt** by running flow variants and then choose the optimal one based on evaluation results.|
+|**Type B**| A flow that includes **python nodes** only| You can create a new flow with python nodes only; all code, including the prompt definition, will run in python nodes.| This structure is suitable for those who don't need to explicitly tune the prompt in the workspace, but require faster batch testing based on larger-scale datasets. |
For example the type A flow from the chart is like:
To create a flow in Azure Machine Learning, you can go to your workspace, then s
#### Configure connection, input and output
-After you have a properly structured flow and are done moving the code to specific tools, you need to configure the input, output, and connection settings in your flow and code to replace your original definitions.
+After you have a properly structured flow and are done moving the code to specific tool nodes, you need to replace the original environment variables with the corresponding key in the connection, and configure the input and output of the flow.
+
+**Configure connection**
+
+To utilize a connection that replaces the environment variables you originally defined in LangChain code, you need to import the promptflow connections library `promptflow.connections` in the Python node.
-To utilize a [custom connection](#create-a-custom-connection) that stores all the required keys and credentials, follow these steps:
+For example:
-1. In the python tools, import custom connection library `from promptflow.connections import CustomConnection`, and define an input parameter of type `CustomConnection` in the tool function.
+If you have LangChain code that consumes the AzureOpenAI model, you can replace the environment variables with the corresponding keys from the Azure OpenAI connection:
+
+Import the library with `from promptflow.connections import AzureOpenAIConnection`.
+++
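As a minimal sketch of a Python node that consumes the connection (the attribute names follow the promptflow connection schema; the deployment name and LangChain call are illustrative assumptions):

```python
from langchain.llms import AzureOpenAI
from promptflow import tool
from promptflow.connections import AzureOpenAIConnection

@tool
def ask_llm(question: str, conn: AzureOpenAIConnection) -> str:
    # Read credentials from the connection instead of environment variables.
    llm = AzureOpenAI(
        openai_api_key=conn.api_key,
        openai_api_base=conn.api_base,
        openai_api_version=conn.api_version,
        deployment_name="text-davinci-003",  # assumed deployment name
    )
    return llm(question)
```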
+For a custom connection, follow these steps (a sketch follows the list):
+
+1. Import library `from promptflow.connections import CustomConnection`, and define an input parameter of type `CustomConnection` in the tool function.
:::image type="content" source="./media/how-to-integrate-with-langchain/custom-connection-python-node-1.png" alt-text="Screenshot of doc search chain node highlighting the custom connection. " lightbox = "./media/how-to-integrate-with-langchain/custom-connection-python-node-1.png"::: 1. Parse the input to the input section, then select your target custom connection in the value dropdown. :::image type="content" source="./media/how-to-integrate-with-langchain/custom-connection-python-node-2.png" alt-text="Screenshot of the chain node highlighting the connection. " lightbox = "./media/how-to-integrate-with-langchain/custom-connection-python-node-2.png"::: 1. Replace the environment variables that originally defined the key and credential with the corresponding key added in the connection. 1. Save and return to authoring page, and configure the connection parameter in the node input.
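Putting the steps above together, a minimal sketch of a Python node that reads a custom connection (the key names are whatever key-value pairs you added to the connection; those shown here are illustrative):

```python
from promptflow import tool
from promptflow.connections import CustomConnection

@tool
def search_docs(query: str, connection: CustomConnection) -> str:
    # Key names match the key-value pairs stored in the custom connection.
    api_key = connection.my_api_key    # secret value (assumed key name)
    api_base = connection.my_api_base  # non-secret value (assumed key name)
    # ... invoke your LangChain chain here using api_key / api_base ...
    return f"searched '{query}' against {api_base}"
```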
+**Configure input and output**
+ Before running the flow, configure the **node input and output**, as well as the overall **flow input and output**. This step is crucial to ensure that all the required data is properly passed through the flow and that the desired results are obtained. ## Next steps
machine-learning Faiss Index Lookup Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/prompt-flow/tools-reference/faiss-index-lookup-tool.md
Last updated 06/30/2023
Faiss Index Lookup is a tool tailored for querying within a user-provided Faiss-based vector store. In combination with our Large Language Model (LLM) tool, it empowers users to extract contextually relevant information from a domain knowledge base.
-## Requirements
-- embeddingstore --extra-index-url https://azuremlsdktestpypi.azureedge.net/embeddingstore- ## Prerequisites - Prepare an accessible path on Azure Blob Storage. Here's the guide if a new storage account needs to be created: [Azure Storage Account](../../../storage/common/storage-account-create.md).-- Create related Faiss-based index files on Azure Blob Storage. We support the LangChain format (index.faiss + index.pkl) for the index files, which can be prepared either by employing our EmbeddingStore SDK or following the quick guide from [LangChain documentation](https://python.langchain.com/docs/modules/data_connection/vectorstores/integrations/faiss). Please refer to [the sample notebook for creating Faiss index](https://aka.ms/pf-sample-build-faiss-index) for building index using EmbeddingStore SDK.
+- Create related Faiss-based index files on Azure Blob Storage. We support the LangChain format (index.faiss + index.pkl) for the index files, which can be prepared either by employing the promptflow-vectordb SDK or by following the quick guide from [LangChain documentation](https://python.langchain.com/docs/modules/data_connection/vectorstores/integrations/faiss) (a minimal LangChain sketch follows this list). Refer to [the sample notebook for creating Faiss index](https://aka.ms/pf-sample-build-faiss-index) for building an index using the promptflow-vectordb SDK.
- Based on where you put your own index files, the identity used by the promptflow runtime should be granted with certain roles. Please refer to [Steps to assign an Azure role](../../../role-based-access-control/role-assignments-steps.md): | Location | Role | | - | - | | workspace datastores or workspace default blob | AzureML Data Scientist | | other blobs | Storage Blob Data Reader |
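As a minimal LangChain sketch for producing the index.faiss + index.pkl pair (assumes the `langchain` and `faiss-cpu` packages and an OpenAI key; upload the resulting folder to your Blob Storage path afterwards):

```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import FAISS

texts = ["sample text1", "sample text2"]

# Build an in-memory FAISS store from raw texts (requires OPENAI_API_KEY to be set).
db = FAISS.from_texts(texts, OpenAIEmbeddings())

# Writes index.faiss and index.pkl into the target folder.
db.save_local("my_faiss_index")
```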
+> [!NOTE]
+> When legacy tools switch to code-first mode, if you encounter the "'embeddingstore.tool.faiss_index_lookup.search' is not found" error, refer to the [Troubleshoot Guidance](./troubleshoot-guidance.md).
## Inputs
The tool accepts the following inputs:
## Outputs
-The following is an example for JSON format response returned by the tool, which includes the top-k scored entities. The entity follows a generic schema of vector search result provided by our EmbeddingStore SDK. For the Faiss Index Search, the following fields are populated:
+The following is an example of the JSON format response returned by the tool, which includes the top-k scored entities. The entity follows a generic schema of vector search results provided by the promptflow-vectordb SDK. For the Faiss Index Search, the following fields are populated:
| Field Name | Type | Description | | - | - | -- |
machine-learning Troubleshoot Guidance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/prompt-flow/tools-reference/troubleshoot-guidance.md
+
+ Title: Troubleshoot guidance
+
+description: This article addresses frequent questions about tool usage.
+++++++ Last updated : 09/05/2023++
+# Troubleshoot guidance
+
+This article addresses frequent questions about tool usage.
+
+## Error "package tool is not found" occurs when updating the flow for code first experience.
+
+When you update a flow for the code-first experience, if the flow used these tools (Faiss Index Lookup, Vector Index Lookup, Vector DB Lookup, Content Safety (Text)), you may encounter an error message like the following:
+
+<code><i>Package tool 'embeddingstore.tool.faiss_index_lookup.search' is not found in the current environment.</i></code>
+
+To resolve the issue, you have two options:
+
+- **Option 1**
+ - Update your runtime to the latest version.
+ ![how-to-switch-to-raw-file-mode](../media/faq/switch-to-raw-file-mode.png)
+ - Update the tool names.
+ ![how-to-update-tool-name](../media/faq/update-tool-name.png)
+
+ | Tool | New tool name |
+ | - | - |
+ | Faiss Index Lookup tool | promptflow_vectordb.tool.faiss_index_lookup.FaissIndexLookup.search |
+ | Vector Index Lookup | promptflow_vectordb.tool.vector_index_lookup.VectorIndexLookup.search |
+ | Vector DB Lookup | promptflow_vectordb.tool.vector_db_lookup.VectorDBLookup.search |
+ | Content Safety (Text) | content_safety_text.tools.content_safety_text_tool.analyze_text |
+ - Save the "flow.dag.yaml" file. A sketch of an updated node follows this list.
+
+- **Option 2**
+ - Update your runtime to the latest version.
+ - Remove the old tool and re-create a new tool.
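As a sketch, an updated Faiss Index Lookup node in "flow.dag.yaml" might look like the following (node and input names are illustrative assumptions; only the `tool` value comes from the table above):

```yaml
nodes:
- name: search_question_from_indexed_docs
  type: python
  source:
    type: package
    tool: promptflow_vectordb.tool.faiss_index_lookup.FaissIndexLookup.search
  inputs:
    path: ${inputs.index_path}
    vector: ${embed_the_question.output}
    top_k: 3
```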
machine-learning Vector Db Lookup Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/prompt-flow/tools-reference/vector-db-lookup-tool.md
Vector DB Lookup is a vector search tool that allows users to search top k simil
| Name | Description | | | | | Azure Cognitive Search | Microsoft's cloud search service with built-in AI capabilities that enrich all types of information to help identify and explore relevant content at scale. |
+| Qdrant | Qdrant is a vector similarity search engine that provides a production-ready service with a convenient API to store, search, and manage points (that is, vectors) with an additional payload. |
+| Weaviate | Weaviate is an open source vector database that stores both objects and vectors. This allows for combining vector search with structured filtering. |
-This tool adds support for more vector databases, including Pinecone, Weaviete, Qdrant etc.
+This tool will support more vector databases.
> [!IMPORTANT] > Prompt flow is currently in public preview. This preview is provided without a service-level agreement, and is not recommended for production workloads. Certain features might not be supported or might have constrained capabilities. > For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-## Requirements
--- embeddingstore --extra-index-url https://azuremlsdktestpypi.azureedge.net/embeddingstore- ## Prerequisites
-The tool searches data from a third-party vector database. To use it, you should create resources in advance and establish connections between the tool and the resource.
+The tool searches data from a third-party vector database. To use it, you should create resources in advance and establish a connection between the tool and the resource.
- **Azure Cognitive Search:** - Create resource [Azure Cognitive Search](../../../search/search-create-service-portal.md).
- - Add "CognitiveSearchConnection" connection. Fill "API key" field with "Primary admin key" from "Keys" section of created resource, and fill "Api Base" field with the URL, the URL format is `https://{your_serive_name}.search.windows.net`.
+ - Add "Cognitive search" connection. Fill "API key" field with "Primary admin key" from "Keys" section of created resource, and fill "API base" field with the URL, the URL format is `https://{your_serive_name}.search.windows.net`.
+
+ - **Qdrant:**
+ - Follow the [installation guide](https://qdrant.tech/documentation/quick-start/) to deploy Qdrant to a self-maintained cloud server.
+ - Add a "Qdrant" connection. Fill "API base" with your self-maintained cloud server address and fill in the "API key" field.
+
+ - **Weaviate:**
+ - Follow the [installation guide](https://weaviate.io/developers/weaviate/installation) to deploy Weaviate to a self-maintained instance.
+ - Add a "Weaviate" connection. Fill "API base" with your self-maintained instance address and fill in the "API key" field.
+
+> [!NOTE]
+> When legacy tools switch to code-first mode, if you encounter the "'embeddingstore.tool.vector_db_lookup.search' is not found" error, refer to the [Troubleshoot Guidance](./troubleshoot-guidance.md).
## Inputs
-The following are available input parameters:
+The tool accepts the following inputs:
- **Azure Cognitive Search:** | Name | Type | Description | Required | | - | - | -- | -- |
- | connection | CognitiveSearchConnection | The created workspace connection for accessing to Cognitive Search service endpoint. | Yes |
+ | connection | CognitiveSearchConnection | The created connection for accessing the Cognitive Search endpoint. | Yes |
| index_name | string | The index name created in Cognitive Search resource. | Yes |
- | text_field | string | The text field name. The returned text filed will populate the result of text. | No |
+ | text_field | string | The text field name. The returned text field populates the text of the output. | No |
| vector_field | string | The vector field name. The target vector is searched in this vector field. | Yes |
- | search_params | dict | The search parameters. It's key-value pairs. Except for parameters in the tool input list mentioned above, additional search parameters can be formed into a JSON object as search_params. For example, use `{"select": ""}` as search_params to select the returned fields, use `{"search": "", "queryType": "", ""semanticConfiguration": "", "queryLanguage": ""}` to perform a hybrid search. | No |
+ | search_params | dict | The search parameters. It's key-value pairs. Except for parameters in the tool input list mentioned above, additional search parameters can be formed into a JSON object as search_params. For example, use `{"select": ""}` as search_params to select the returned fields, use `{"search": ""}` to perform a [hybrid search](../../../search/search-get-started-vector.md#hybrid-search). | No |
| search_filters | dict | The search filters. It's key-value pairs, the input format is like `{"filter": ""}` | No |
- | vector | list | The target vector to be queried, which can be generated by the LLM tool. | Yes |
+ | vector | list | The target vector to be queried, which can be generated by the Embedding tool. | Yes |
| top_k | int | The count of top-scored entities to return. Default value is 3 | No |
+- **Qdrant:**
-## Outputs
+ | Name | Type | Description | Required |
+ | - | - | -- | -- |
+ | connection | QdrantConnection | The created connection for accessing the Qdrant server. | Yes |
+ | collection_name | string | The collection name created in the self-maintained cloud server. | Yes |
+ | text_field | string | The text field name. The returned text field populates the text of the output. | No |
+ | search_params | dict | The search parameters can be formed into a JSON object as search_params. For example, use `{"params": {"hnsw_ef": 0, "exact": false, "quantization": null}}` to set search_params. | No |
+ | search_filters | dict | The search filters. It's key-value pairs; the input format is like `{"filter": {"should": [{"key": "", "match": {"value": ""}}]}}` | No |
+ | vector | list | The target vector to be queried, which can be generated by the Embedding tool. | Yes |
+ | top_k | int | The count of top-scored entities to return. Default value is 3 | No |
-The following is an example JSON format response returned by the tool, which includes the top-k scored entities. The entity follows a generic schema of vector search result provided by our EmbeddingStore SDK.
+- **Weaviate:**
-**Azure Cognitive Search:**
+ | Name | Type | Description | Required |
+ | - | - | -- | -- |
+ | connection | WeaviateConnection | The created connection for accessing Weaviate. | Yes |
+ | class_name | string | The class name. | Yes |
+ | text_field | string | The text field name. The returned text field populates the text of the output. | No |
+ | vector | list | The target vector to be queried, which can be generated by the Embedding tool. | Yes |
+ | top_k | int | The count of top-scored entities to return. Default value is 3 | No |
- For the Azure Cognitive Search, the following fields are populated:
+## Outputs
+
+The following is an example of the JSON format response returned by the tool, which includes the top-k scored entities. The entity follows a generic schema of vector search results provided by the promptflow-vectordb SDK.
+- **Azure Cognitive Search:**
-| Field Name | Type | Description |
-|--|--|-|
-| vector | list | vector of the entity, the vector field name is specified in input |
-| text | string | text of the entity, the text field name is specified in input |
-| score | float | computed by the BM25 similarity algorithm |
-| original_entity | dict | the original response json from search REST API |
+ For Azure Cognitive Search, the following fields are populated:
+ | Field Name | Type | Description |
+ | - | - | -- |
+ | original_entity | dict | the original response json from search REST API|
+ | score | float | @search.score from the original entity, which evaluates the similarity between the entity and the query vector |
+ | text | string | text of the entity|
+ | vector | list | vector of the entity|
+ <details>
+ <summary>Output</summary>
+
```json [ {
The following is an example JSON format response returned by the tool, which inc
"original_entity": { "@search.score": 0.5099789, "id": "",
- "your_text_filed_name": "text",
+ "your_text_filed_name": "sample text1",
"your_vector_filed_name": [-0.40517663431890405, 0.5856996257406859, -0.1593078462266455, -0.9776269170785785, -0.6145604369828972], "your_additional_field_name": "" }, "score": 0.5099789,
- "text": "text",
+ "text": "sample text1",
"vector": [-0.40517663431890405, 0.5856996257406859, -0.1593078462266455, -0.9776269170785785, -0.6145604369828972] } ] ```
+ </details>
+
+- **Qdrant:**
+
+ For Qdrant, the following fields are populated:
+
+ | Field Name | Type | Description |
+ | - | - | -- |
+ | original_entity | dict | the original response json from search REST API|
+ | metadata | dict | payload from the original entity|
+ | score | float | score from the original entity, which evaluates the similarity between the entity and the query vector|
+ | text | string | text of the payload|
+ | vector | list | vector of the entity|
+
+ <details>
+ <summary>Output</summary>
+
+ ```json
+ [
+ {
+ "metadata": {
+ "text": "sample text1"
+ },
+ "original_entity": {
+ "id": 1,
+ "payload": {
+ "text": "sample text1"
+ },
+ "score": 1,
+ "vector": [0.18257418, 0.36514837, 0.5477226, 0.73029673],
+ "version": 0
+ },
+ "score": 1,
+ "text": "sample text1",
+ "vector": [0.18257418, 0.36514837, 0.5477226, 0.73029673]
+ }
+ ]
+ ```
+ </details>
+
+- **Weaviate:**
+
+ For Weaviate, the following fields are populated:
+
+ | Field Name | Type | Description |
+ | - | - | -- |
+ | original_entity | dict | the original response json from search REST API|
+ | score | float | certainty from the original entity, which evaluates the similarity between the entity and the query vector|
+ | text | string | text in the original entity|
+ | vector | list | vector of the entity|
+
+ <details>
+ <summary>Output</summary>
+
+ ```json
+ [
+ {
+ "metadata": null,
+ "original_entity": {
+ "_additional": {
+ "certainty": 1,
+ "distance": 0,
+ "vector": [
+ 0.58,
+ 0.59,
+ 0.6,
+ 0.61,
+ 0.62
+ ]
+ },
+ "text": "sample text1."
+ },
+ "score": 1,
+ "text": "sample text1.",
+ "vector": [
+ 0.58,
+ 0.59,
+ 0.6,
+ 0.61,
+ 0.62
+ ]
+ }
+ ]
+ ```
+ </details>
machine-learning Vector Index Lookup Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/prompt-flow/tools-reference/vector-index-lookup-tool.md
Vector index lookup is a tool tailored for querying within an Azure Machine Lear
> Prompt flow is currently in public preview. This preview is provided without a service-level agreement, and is not recommended for production workloads. Certain features might not be supported or might have constrained capabilities. > For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-## Requirements
--- embeddingstore --extra-index-url https://azuremlsdktestpypi.azureedge.net/embeddingstore- ## Prerequisites - Follow the instructions from sample flow `Bring your own Data QnA` to prepare a Vector Index as an input.
Vector index lookup is a tool tailored for querying within an Azure Machine Lear
| - | - | | workspace datastores or workspace default blob | AzureML Data Scientist | | other blobs | Storage Blob Data Reader |
+> [!NOTE]
+> When legacy tools switch to code-first mode, if you encounter the "'embeddingstore.tool.vector_index_lookup.search' is not found" error, refer to the [Troubleshoot Guidance](./troubleshoot-guidance.md).
## Inputs
The tool accepts the following inputs:
## Outputs
-The following is an example for JSON format response returned by the tool, which includes the top-k scored entities. The entity follows a generic schema of vector search result provided by our EmbeddingStore SDK. For the Vector Index Search, the following fields are populated:
+The following is an example of the JSON format response returned by the tool, which includes the top-k scored entities. The entity follows a generic schema of vector search results provided by the promptflow-vectordb SDK. For the Vector Index Search, the following fields are populated:
| Field Name | Type | Description | | - | - | -- | | text | string | Text of the entity |
-| score | float | Depends on index type defined in Vector Index. Might be value of distance or similarity |
+| score | float | Depends on index type defined in Vector Index. If index type is Faiss, score is L2 distance. If index type is Azure Cognitive Search, score is cosine similarity. |
| metadata | dict | Customized key-value pairs provided by user when creating the index | | original_entity | dict | Depends on index type defined in Vector Index. The original response json from search REST API|
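For reference, and as a hedged note rather than a statement from this article: Faiss's default flat index reports squared Euclidean (L2) distance, where a smaller score means a closer match, while cosine similarity approaches 1 for closer matches:

$$
d_{\mathrm{L2}}(\mathbf{q},\mathbf{v})=\sum_i (q_i-v_i)^2,\qquad
\mathrm{cosine}(\mathbf{q},\mathbf{v})=\frac{\mathbf{q}\cdot\mathbf{v}}{\lVert\mathbf{q}\rVert\,\lVert\mathbf{v}\rVert}
$$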
machine-learning Quickstart Create Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/quickstart-create-resources.md
Title: "Create workspace resources"
+ Title: "Tutorial: Create workspace resources"
description: Create an Azure Machine Learning workspace and cloud resources that can be used to train machine learning models.
Previously updated : 03/15/2023 Last updated : 08/17/2023 adobe-target: true
+content_well_notification:
+ - AI-contribution
#Customer intent: As a data scientist, I want to create a workspace so that I can start to use Azure Machine Learning.
-# Create resources you need to get started
+# Tutorial: Create resources you need to get started
-In this article, you'll create the resources you need to start working with Azure Machine Learning.
+In this tutorial, you will create the resources you need to start working with Azure Machine Learning.
-* A *workspace*. To use Azure Machine Learning, you'll first need a workspace. The workspace is the central place to view and manage all the artifacts and resources you create.
-* A *compute instance*. A compute instance is a pre-configured cloud-computing resource that you can use to train, automate, manage, and track machine learning models. A compute instance is the quickest way to start using the Azure Machine Learning SDKs and CLIs. You'll use it to run Jupyter notebooks and Python scripts in the rest of the tutorials.
+> [!div class="checklist"]
+>* A *workspace*. To use Azure Machine Learning, you'll first need a workspace. The workspace is the central place to view and manage all the artifacts and resources you create.
+>* A *compute instance*. A compute instance is a pre-configured cloud-computing resource that you can use to train, automate, manage, and track machine learning models. A compute instance is the quickest way to start using the Azure Machine Learning SDKs and CLIs. You'll use it to run Jupyter notebooks and Python scripts in the rest of the tutorials.
+
+This video shows you how to create a workspace and compute instance. The steps are also described in the sections below.
+> [!VIDEO https://learn-video.azurefd.net/vod/player?id=a0e901d2-e82a-4e96-9c7f-3b5467859969]
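+For readers who prefer the CLI, a minimal sketch of the same resources using the Azure CLI `ml` extension (names, size, and region are placeholders; the tutorial itself walks through the studio UI):
+
+```azurecli
+# Sketch only: create a resource group, workspace, and compute instance (placeholder names throughout).
+az group create --name my-ml-rg --location eastus
+az ml workspace create --name my-ml-ws --resource-group my-ml-rg
+az ml compute create --name my-instance --type ComputeInstance \
+    --size Standard_DS11_v2 --resource-group my-ml-rg --workspace-name my-ml-ws
+```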
## Prerequisites
machine-learning Reference Automl Images Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-automl-images-schema.md
In instance segmentation, output consists of multiple boxes with their scaled to
> These settings are currently in public preview. They are provided without a service-level agreement. Certain features might not be supported or might have constrained capabilities. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).

> [!WARNING]
-> **Explainability** is supported only for **multi-class classification** and **multi-label classification**. While generating explanations on online endpoint, if you encounter timeout issues, use [batch scoring notebook (SDK v1)](https://github.com/Azure/azureml-examples/tree/main/v1/python-sdk/tutorials/automl-with-azureml/image-classification-multiclass-batch-scoring) to generate explanations.
+> **Explainability** is supported only for **multi-class classification** and **multi-label classification**. If you encounter timeout issues while generating explanations on an online endpoint, use the [batch scoring notebook (SDK v1)](https://github.com/Azure/azureml-examples/tree/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/image-classification-multiclass-batch-scoring) to generate explanations.
In this section, we document the input data format required to make predictions and generate explanations for the predicted class or classes using a deployed model. There's no separate deployment needed for explainability. The same endpoint for online scoring can be used to generate explanations. We just need to pass some extra explainability-related parameters in the input schema and get either visualizations of explanations and/or attribution score matrices (pixel-level explanations).
If `model_explainability`, `visualizations`, `attributions` are set to `True` in
> [!WARNING]
-> While generating explanations on online endpoint, make sure to select only few classes based on confidence score in order to avoid timeout issues on the endpoint or use the endpoint with GPU instance type. To generate explanations for large number of classes in multi-label classification, refer to [batch scoring notebook (SDK v1)](https://github.com/Azure/azureml-examples/tree/main/v1/python-sdk/tutorials/automl-with-azureml/image-classification-multiclass-batch-scoring).
+> While generating explanations on an online endpoint, make sure to select only a few classes based on confidence score in order to avoid timeout issues on the endpoint, or use an endpoint with a GPU instance type. To generate explanations for a large number of classes in multi-label classification, refer to the [batch scoring notebook (SDK v1)](https://github.com/Azure/azureml-examples/tree/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/image-classification-multiclass-batch-scoring).
```json [
machine-learning Reference Machine Learning Cloud Parity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-machine-learning-cloud-parity.md
The information in the rest of this document provides information on what featur
| **SDK support** | | | |
| [Python SDK support](/python/api/overview/azure/ml/) | GA | YES | YES |
| **[Security](concept-enterprise-security.md)** | | | |
+| Managed virtual network support | Preview | Preview | Preview |
| Virtual Network (VNet) support for training | GA | YES | YES |
| Virtual Network (VNet) support for inference | GA | YES | YES |
| Scoring endpoint authentication | Public Preview | YES | YES |
The information in the rest of this document provides information on what featur
| **Compute instance** | | | |
| Managed compute Instances for integrated Notebooks | GA | YES | N/A |
| Jupyter, JupyterLab Integration | GA | YES | N/A |
+| Managed virtual network support | Preview | Preview | N/A |
| Virtual Network (VNet) support | GA | YES | N/A |
| **SDK support** | | | |
| Python SDK support | GA | YES | N/A |
machine-learning Reference Yaml Deployment Batch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-deployment-batch.md
When `type: model`, the following syntax is enforced:
| `settings.retry_settings.timeout` | integer | The timeout in seconds for scoring a single mini batch. Use larger values when the mini-batch size is bigger or the model is more expensive to run. | | `30` |
| `settings.output_action` | string | Indicates how the output should be organized in the output file. Use `summary_only` if you are generating the output files as indicated at [Customize outputs in model deployments](how-to-deploy-model-custom-output.md). Use `append_row` if you are returning predictions as part of the `run()` function `return` statement. | `append_row`, `summary_only` | `append_row` |
| `settings.output_file_name` | string | Name of the batch scoring output file. | | `predictions.csv` |
-| `environment_variables` | object | Dictionary of environment variable key-value pairs to set for each batch scoring job. | | |
+| `settings.environment_variables` | object | Dictionary of environment variable key-value pairs to set for each batch scoring job. | | |
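For illustration, a minimal sketch of where these `settings` keys sit in a model deployment YAML (names and values are placeholders):

```yaml
# Sketch: nesting of the output and environment-variable settings (placeholder values).
name: my-batch-deployment
endpoint_name: my-batch-endpoint
type: model
settings:
  output_action: append_row
  output_file_name: predictions.csv
  environment_variables:
    MY_SETTING: "value"
```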
### YAML syntax for pipeline component deployments
machine-learning Reference Yaml Deployment Managed Online https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-deployment-managed-online.md
-+ Last updated 01/24/2023
The source JSON schema can be found at https://azuremlschemas.azureedge.net/late
| `request_settings` | object | Scoring request settings for the deployment. See [RequestSettings](#requestsettings) for the set of configurable properties. | | |
| `liveness_probe` | object | Liveness probe settings for monitoring the health of the container regularly. See [ProbeSettings](#probesettings) for the set of configurable properties. | | |
| `readiness_probe` | object | Readiness probe settings for validating if the container is ready to serve traffic. See [ProbeSettings](#probesettings) for the set of configurable properties. | | |
-| `egress_public_network_access` | string | This flag secures the deployment by restricting communication between the deployment and the Azure resources used by it. Set to `disabled` to ensure that the download of the model, code, and images needed by your deployment are secured with a private endpoint. This flag is applicable only for managed online endpoints. | `enabled`, `disabled` | `enabled` |
+| `egress_public_network_access` | string |**Note:** This key is applicable when you use the [legacy network isolation method](concept-secure-online-endpoint.md#secure-outbound-access-with-legacy-network-isolation-method) to secure outbound communication for a deployment. We strongly recommend that you secure outbound communication for deployments using [a workspace managed VNet](concept-secure-online-endpoint.md) (preview) instead. <br><br>This flag secures the deployment by restricting communication between the deployment and the Azure resources used by it. Set to `disabled` to ensure that the download of the model, code, and images needed by your deployment are secured with a private endpoint. This flag is applicable only for managed online endpoints. | `enabled`, `disabled` | `enabled` |
### RequestSettings
machine-learning Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Machine Learning description: Lists Azure Policy Regulatory Compliance controls available for Azure Machine Learning. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/03/2023 Last updated : 08/25/2023
machine-learning Tutorial Auto Train Image Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-auto-train-image-models.md
Previously updated : 05/26/2022 Last updated : 08/26/2023
This object detection model identifies whether the image contains objects, such
Automated ML accepts training data and configuration settings, and automatically iterates through combinations of different feature normalization/standardization methods, models, and hyperparameter settings to arrive at the best model.
-You'll write code using the Python SDK in this tutorial and learn the following tasks:
+You write code using the Python SDK in this tutorial and learn the following tasks:
> [!div class="checklist"] > * Download and transform data
You'll write code using the Python SDK in this tutorial and learn the following
You first need to set up a compute target to use for your automated ML model training. Automated ML models for image tasks require GPU SKUs.
-This tutorial uses the NCsv3-series (with V100 GPUs) as this type of compute target leverages multiple GPUs to speed up training. Additionally, you can set up multiple nodes to take advantage of parallelism when tuning hyperparameters for your model.
+This tutorial uses the NCsv3-series (with V100 GPUs) as this type of compute target uses multiple GPUs to speed up training. Additionally, you can set up multiple nodes to take advantage of parallelism when tuning hyperparameters for your model.
The following code creates a GPU compute of size `Standard_NC24s_v3` with four nodes.
compute: azureml:gpu-cluster
> [!IMPORTANT] > This feature is currently in public preview. This preview version is provided without a service-level agreement. Certain features might not be supported or might have constrained capabilities. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-In your AutoML job, you can perform an automatic hyperparameter sweep in order to find the optimal model (we call this functionality AutoMode). You only specify the number of trials; the hyperparameter search space, sampling method and early termination policy are not needed. The system will automatically determine the region of the hyperparameter space to sweep based on the number of trials. A value between 10 and 20 will likely work well on many datasets.
+In your AutoML job, you can perform an automatic hyperparameter sweep to find the optimal model (we call this functionality AutoMode). You only specify the number of trials; the hyperparameter search space, sampling method, and early termination policy aren't needed. The system automatically determines the region of the hyperparameter space to sweep based on the number of trials. A value between 10 and 20 will likely work well on many datasets.
# [Azure CLI](#tab/cli)
When you've configured your AutoML Job to the desired settings, you can submit t
In your AutoML job, you can specify the model architectures by using `model_name` parameter and configure the settings to perform a hyperparameter sweep over a defined search space to find the optimal model.
-In this example, we will train an object detection model with `yolov5` and `fasterrcnn_resnet50_fpn`, both of which are pretrained on COCO, a large-scale object detection, segmentation, and captioning dataset that contains over thousands of labeled images with over 80 label categories.
+In this example, we'll train an object detection model with `yolov5` and `fasterrcnn_resnet50_fpn`, both of which are pretrained on COCO, a large-scale object detection, segmentation, and captioning dataset that contains thousands of labeled images with over 80 label categories.
You can perform a hyperparameter sweep over a defined search space to find the optimal model.

#### Job limits
-You can control the resources spent on your AutoML Image training job by specifying the `timeout_minutes`, `max_trials` and the `max_concurrent_trials` for the job in limit settings. Please refer to [detailed description on Job Limits parameters](./how-to-auto-train-image-models.md#job-limits).
+You can control the resources spent on your AutoML Image training job by specifying the `timeout_minutes`, `max_trials` and the `max_concurrent_trials` for the job in limit settings. Refer to [detailed description on Job Limits parameters](./how-to-auto-train-image-models.md#job-limits).
# [Azure CLI](#tab/cli) [!INCLUDE [cli v2](includes/machine-learning-cli-v2.md)]
limits:
-The following code defines the search space in preparation for the hyperparameter sweep for each defined architecture, `yolov5` and `fasterrcnn_resnet50_fpn`. In the search space, specify the range of values for `learning_rate`, `optimizer`, `lr_scheduler`, etc., for AutoML to choose from as it attempts to generate a model with the optimal primary metric. If hyperparameter values are not specified, then default values are used for each architecture.
+The following code defines the search space in preparation for the hyperparameter sweep for each defined architecture, `yolov5` and `fasterrcnn_resnet50_fpn`. In the search space, specify the range of values for `learning_rate`, `optimizer`, `lr_scheduler`, etc., for AutoML to choose from as it attempts to generate a model with the optimal primary metric. If hyperparameter values aren't specified, then default values are used for each architecture.
For the tuning settings, use random sampling to pick samples from this parameter space by using the `random` sampling_algorithm. The job limits configured above tell automated ML to try a total of 10 trials with these different samples, running two trials at a time on our compute target, which was set up using four nodes. The more parameters the search space has, the more trials you need to find optimal models.
-The Bandit early termination policy is also used. This policy terminates poor performing trials; that is, those trials that are not within 20% slack of the best performing trial, which significantly saves compute resources.
+The Bandit early termination policy is also used. This policy terminates poor performing trials; that is, those trials that aren't within 20% slack of the best performing trial, which significantly saves compute resources.
# [Azure CLI](#tab/cli)
auth_mode: key
### Create the endpoint
-Using the `MLClient` created earlier, we'll now create the Endpoint in the workspace. This command will start the endpoint creation and return a confirmation response while the endpoint creation continues.
+Using the `MLClient` created earlier, we'll now create the endpoint in the workspace. This command starts the endpoint creation and returns a confirmation response while the endpoint creation continues.
# [Azure CLI](#tab/cli)
We can also create a batch endpoint for batch inferencing on large volumes of da
### Configure online deployment
-A deployment is a set of resources required for hosting the model that does the actual inferencing. We'll create a deployment for our endpoint using the `ManagedOnlineDeployment` class. You can use either GPU or CPU VM SKUs for your deployment cluster.
+A deployment is a set of resources required for hosting the model that does the actual inferencing. We create a deployment for our endpoint using the `ManagedOnlineDeployment` class. You can use either GPU or CPU VM SKUs for your deployment cluster.
# [Azure CLI](#tab/cli)
readiness_probe:
### Create the deployment
-Using the `MLClient` created earlier, we'll now create the deployment in the workspace. This command will start the deployment creation and return a confirmation response while the deployment creation continues.
+Using the `MLClient` created earlier, we'll now create the deployment in the workspace. This command starts the deployment creation and returns a confirmation response while the deployment creation continues.
# [Azure CLI](#tab/cli)
az ml online-deployment create --file .\create_deployment.yml --workspace-name [
### Update traffic:
-By default the current deployment is set to receive 0% traffic. you can set the traffic percentage current deployment should receive. Sum of traffic percentages of all the deployments with one end point should not exceed 100%.
+By default, the current deployment is set to receive 0% traffic. You can set the traffic percentage that the current deployment should receive. The sum of the traffic percentages of all the deployments with one endpoint shouldn't exceed 100%.
# [Azure CLI](#tab/cli)
A CLI example isn't available; use the Python SDK instead.
## Clean up resources
-Do not complete this section if you plan on running other Azure Machine Learning tutorials.
+Don't complete this section if you plan on running other Azure Machine Learning tutorials.
If you don't plan to use the resources you created, delete them, so you don't incur any charges.
In this automated machine learning tutorial, you did the following tasks:
# [Azure CLI](#tab/cli) [!INCLUDE [cli v2](includes/machine-learning-cli-v2.md)]
- * Review detailed code examples and use cases in the [azureml-examples repository for automated machine learning samples](https://github.com/Azure/azureml-examples/tree/sdk-preview/cli/jobs/automl-standalone-jobs). Please check the folders with 'cli-automl-image-' prefix for samples specific to building computer vision models.
+ * Review detailed code examples and use cases in the [azureml-examples repository for automated machine learning samples](https://github.com/Azure/azureml-examples/tree/sdk-preview/cli/jobs/automl-standalone-jobs). Check the folders with 'cli-automl-image-' prefix for samples specific to building computer vision models.
# [Python SDK](#tab/python) [!INCLUDE [sdk v2](includes/machine-learning-sdk-v2.md)]
- * Review detailed code examples and use cases in the [GitHub notebook repository for automated machine learning samples](https://github.com/Azure/azureml-examples/tree/main/sdk/python/jobs/automl-standalone-jobs). Please check the folders with 'automl-image-' prefix for samples specific to building computer vision models.
+ * Review detailed code examples and use cases in the [GitHub notebook repository for automated machine learning samples](https://github.com/Azure/azureml-examples/tree/main/sdk/python/jobs/automl-standalone-jobs). Check the folders with 'automl-image-' prefix for samples specific to building computer vision models.
machine-learning Tutorial Automated Ml Forecast https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-automated-ml-forecast.md
Also try automated machine learning for these other model types:
* An Azure Machine Learning workspace. See [Create workspace resources](quickstart-create-resources.md).
-* Download the [bike-no.csv](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/forecasting-bike-share/bike-no.csv) data file
+* Download the [bike-no.csv](https://github.com/Azure/azureml-examples/blob/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/forecasting-bike-share/bike-no.csv) data file
## Sign in to the studio
Before you configure your experiment, upload your data file to your workspace in
1. Select **Upload files** from the **Upload** drop-down.
- 1. Choose the **bike-no.csv** file on your local computer. This is the file you downloaded as a [prerequisite](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/forecasting-bike-share/bike-no.csv).
+ 1. Choose the **bike-no.csv** file on your local computer. This is the file you downloaded as a [prerequisite](https://github.com/Azure/azureml-examples/blob/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/forecasting-bike-share/bike-no.csv).
1. Select **Next**
machine-learning Tutorial Create Secure Workspace Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-create-secure-workspace-vnet.md
+
+ Title: Create a secure workspace with Azure Virtual Network
+
+description: Create an Azure Machine Learning workspace and required Azure services inside an Azure Virtual Network.
+Last updated : 08/22/2023
+monikerRange: 'azureml-api-2 || azureml-api-1'
+
+# Tutorial: How to create a secure workspace with an Azure Virtual Network
+
+In this article, learn how to create and connect to a secure Azure Machine Learning workspace. The steps in this article use an Azure Virtual Network to create a security boundary around resources used by Azure Machine Learning.
+
+> [!IMPORTANT]
+> We recommend using the Azure Machine Learning managed virtual network instead of an Azure Virtual Network. For a version of this tutorial that uses a managed virtual network, see [Tutorial: Create a secure workspace with a managed virtual network](tutorial-create-secure-workspace.md).
+
+In this tutorial, you accomplish the following tasks:
+
+> [!div class="checklist"]
+> * Create an Azure Virtual Network (VNet) to __secure communications between services in the virtual network__.
+> * Create an Azure Storage Account (blob and file) behind the VNet. This service is used as __default storage for the workspace__.
+> * Create an Azure Key Vault behind the VNet. This service is used to __store secrets used by the workspace__. For example, the security information needed to access the storage account.
+> * Create an Azure Container Registry (ACR). This service is used as a repository for Docker images. __Docker images provide the compute environments needed when training a machine learning model or deploying a trained model as an endpoint__.
+> * Create an Azure Machine Learning workspace.
+> * Create a jump box. A jump box is an Azure Virtual Machine that is behind the VNet. Since the VNet restricts access from the public internet, __the jump box is used as a way to connect to resources behind the VNet__.
+> * Configure Azure Machine Learning studio to work behind a VNet. The studio provides a __web interface for Azure Machine Learning__.
+> * Create an Azure Machine Learning compute cluster. A compute cluster is used when __training machine learning models in the cloud__. In configurations where Azure Container Registry is behind the VNet, it is also used to build Docker images.
+> * Connect to the jump box and use the Azure Machine Learning studio.
+
+> [!TIP]
+> If you're looking for a template (Microsoft Bicep or Hashicorp Terraform) that demonstrates how to create a secure workspace, see [Tutorial - Create a secure workspace using a template](tutorial-create-secure-workspace-template.md).
+
+After completing this tutorial, you'll have the following architecture:
+
+* An Azure Virtual Network, which contains three subnets:
+ * __Training__: Contains the Azure Machine Learning workspace, dependency services, and resources used for training models.
+ * __Scoring__: For the steps in this tutorial, it isn't used. However if you continue using this workspace for other tutorials, we recommend using this subnet when deploying models to [endpoints](concept-endpoints.md).
+ * __AzureBastionSubnet__: Used by the Azure Bastion service to securely connect clients to Azure Virtual Machines.
+* An Azure Machine Learning workspace that uses a private endpoint to communicate using the VNet.
+* An Azure Storage Account that uses private endpoints to allow storage services such as blob and file to communicate using the VNet.
+* An Azure Container Registry that uses a private endpoint to communicate using the VNet.
+* Azure Bastion, which allows you to use your browser to securely communicate with the jump box VM inside the VNet.
+* An Azure Virtual Machine that you can remotely connect to and access resources secured inside the VNet.
+* An Azure Machine Learning compute instance and compute cluster.
+
+> [!TIP]
+> The Azure Batch Service listed on the diagram is a back-end service required by the compute clusters and compute instances.
++
+## Prerequisites
+
+* Familiarity with Azure Virtual Networks and IP networking. If you aren't familiar, try the [Fundamentals of computer networking](/training/modules/network-fundamentals/) module.
+* While most of the steps in this article use the Azure portal or the Azure Machine Learning studio, some steps use the Azure CLI extension for Machine Learning v2.
+
+## Create a virtual network
+
+To create a virtual network, use the following steps:
+
+1. In the [Azure portal](https://portal.azure.com), select the portal menu in the upper left corner. From the menu, select __+ Create a resource__ and then enter __Virtual Network__ in the search field. Select the __Virtual Network__ entry, and then select __Create__.
++
+ :::image type="content" source="./media/tutorial-create-secure-workspace-vnet/create-resource-search-vnet.png" alt-text="Screenshot of the create resource search form with virtual network selected.":::
+
+ :::image type="content" source="./media/tutorial-create-secure-workspace-vnet/create-resource-vnet.png" alt-text="Screenshot of the virtual network create form.":::
+
+1. From the __Basics__ tab, select the Azure __subscription__ to use for this resource and then select or create a new __resource group__. Under __Instance details__, enter a friendly __name__ for your virtual network and select the __region__ to create it in.
+
+ :::image type="content" source="./media/tutorial-create-secure-workspace-vnet/create-vnet-basics.png" alt-text="Screenshot of the basic virtual network configuration form.":::
+
+1. Select __Security__. Select __Enable Azure Bastion__. [Azure Bastion](../bastion/bastion-overview.md) provides a secure way to access the VM jump box you'll create inside the VNet in a later step. Use the following values for the remaining fields:
+
+ * __Bastion name__: A unique name for this Bastion instance
+ * __Public IP address__: Create a new public IP address.
+
+ Leave the other fields at the default values.
+
+ :::image type="content" source="./media/tutorial-create-secure-workspace-vnet/create-bastion.png" alt-text="Screenshot of Bastion config.":::
+
+1. Select __IP Addresses__. The default settings should be similar to the following image:
+
+ :::image type="content" source="./media/tutorial-create-secure-workspace-vnet/create-vnet-ip-address-default.png" alt-text="Screenshot of the default IP Address form.":::
+
+ Use the following steps to configure the IP address and configure a subnet for training and scoring resources:
+
+ > [!TIP]
+ > While you can use a single subnet for all Azure Machine Learning resources, the steps in this article show how to create two subnets to separate the training & scoring resources.
+ >
+ > The workspace and other dependency services will go into the training subnet. They can still be used by resources in other subnets, such as the scoring subnet.
+
+ 1. Look at the default __IPv4 address space__ value. In the screenshot, the value is __172.16.0.0/16__. __The value may be different for you__. While you can use a different value, the rest of the steps in this tutorial are based on the __172.16.0.0/16 value__.
+
+ > [!IMPORTANT]
+       > We don't recommend using the 172.17.0.0/16 IP address range for your VNet. This is the default subnet range used by the Docker bridge network. Other ranges might also conflict depending on what you want to connect to the virtual network. For example, a conflict occurs if you plan to connect your on-premises network to the VNet and the on-premises network also uses the 172.16.0.0/16 range. Ultimately, it is up to __you__ to plan your network infrastructure.
+
+ 1. Select the __Default__ subnet and then select __Remove subnet__.
+
+ :::image type="content" source="./media/tutorial-create-secure-workspace-vnet/delete-default-subnet.png" alt-text="Screenshot of deleting default subnet.":::
+
+ 1. To create a subnet to contain the workspace, dependency services, and resources used for _training_, select __+ Add subnet__ and set the subnet name, starting address, and subnet size. The following are the values used in this tutorial:
+ * __Name__: Training
+ * __Starting address__: 172.16.0.0
+ * __Subnet size__: /24 (256 addresses)
+
+ :::image type="content" source="./media/tutorial-create-secure-workspace-vnet/vnet-add-training-subnet.png" alt-text="Screenshot of Training subnet.":::
+
+ 1. To create a subnet for compute resources used to _score_ your models, select __+ Add subnet__ again, and set the name and address range:
+ * __Subnet name__: Scoring
+ * __Starting address__: 172.16.1.0
+ * __Subnet size__: /24 (256 addresses)
+
+ :::image type="content" source="./media/tutorial-create-secure-workspace-vnet/vnet-add-scoring-subnet.png" alt-text="Screenshot of Scoring subnet.":::
+
+ 1. To create a subnet for _Azure Bastion_, select __+ Add subnet__ and set the template, starting address, and subnet size:
+ * __Subnet template__: Azure Bastion
+ * __Starting address__: 172.16.2.0
+ * __Subnet size__: /26 (64 addresses)
+
+ :::image type="content" source="./media/tutorial-create-secure-workspace-vnet/vnet-add-azure-bastion-subnet.png" alt-text="Screenshot of Azure Bastion subnet.":::
+
+1. Select __Review + create__.
+
+ :::image type="content" source="./media/tutorial-create-secure-workspace-vnet/create-vnet-ip-address-final.png" alt-text="Screenshot of the review + create button.":::
+
+1. Verify that the information is correct, and then select __Create__.
+
+ :::image type="content" source="./media/tutorial-create-secure-workspace-vnet/create-vnet-review.png" alt-text="Screenshot of the virtual network review + create page.":::
+
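+A rough Azure CLI equivalent of the portal steps above (a sketch only; resource names are placeholders, and the Bastion setup is omitted):
+
+```azurecli
+# Sketch: VNet with the Training, Scoring, and AzureBastionSubnet subnets (names are placeholders).
+az network vnet create --resource-group my-rg --name my-vnet \
+    --address-prefixes 172.16.0.0/16 \
+    --subnet-name Training --subnet-prefixes 172.16.0.0/24
+az network vnet subnet create --resource-group my-rg --vnet-name my-vnet \
+    --name Scoring --address-prefixes 172.16.1.0/24
+az network vnet subnet create --resource-group my-rg --vnet-name my-vnet \
+    --name AzureBastionSubnet --address-prefixes 172.16.2.0/26
+```
+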
+## Create a storage account
+
+1. In the [Azure portal](https://portal.azure.com), select the portal menu in the upper left corner. From the menu, select __+ Create a resource__ and then enter __Storage account__. Select the __Storage Account__ entry, and then select __Create__.
+1. From the __Basics__ tab, select the __subscription__, __resource group__, and __region__ you previously used for the virtual network. Enter a unique __Storage account name__, and set __Redundancy__ to __Locally-redundant storage (LRS)__.
+
+ :::image type="content" source="./media/tutorial-create-secure-workspace-vnet/create-storage.png" alt-text="Screenshot of storage account basic config.":::
+
+1. From the __Networking__ tab, select __Private endpoint__ and then select __+ Add private endpoint__.
+
+ :::image type="content" source="./media/tutorial-create-secure-workspace-vnet/storage-enable-private-endpoint.png" alt-text="Screenshot of the form to add the blob private network.":::
+
+1. On the __Create private endpoint__ form, use the following values:
+ * __Subscription__: The same Azure subscription that contains the previous resources you've created.
+ * __Resource group__: The same Azure resource group that contains the previous resources you've created.
+ * __Location__: The same Azure region that contains the previous resources you've created.
+ * __Name__: A unique name for this private endpoint.
+ * __Target sub-resource__: blob
+ * __Virtual network__: The virtual network you created earlier.
+ * __Subnet__: Training (172.16.0.0/24)
+ * __Private DNS integration__: Yes
+ * __Private DNS Zone__: privatelink.blob.core.windows.net
+
+ Select __OK__ to create the private endpoint.
+
+1. Select __Review + create__. Verify that the information is correct, and then select __Create__.
+
+1. Once the Storage Account has been created, select __Go to resource__:
+
+ :::image type="content" source="./media/tutorial-create-secure-workspace-vnet/storage-go-to-resource.png" alt-text="Screenshot of the go to new storage resource button.":::
+
+1. From the left navigation, select __Networking__, select the __Private endpoint connections__ tab, and then select __+ Private endpoint__:
+
+ > [!NOTE]
+ > While you created a private endpoint for Blob storage in the previous steps, you must also create one for File storage.
+
+ :::image type="content" source="./media/tutorial-create-secure-workspace-vnet/storage-file-networking.png" alt-text="Screenshot of the storage account networking form.":::
+
+1. On the __Create a private endpoint__ form, use the same __subscription__, __resource group__, and __Region__ that you've used for previous resources. Enter a unique __Name__.
+
+ :::image type="content" source="./media/tutorial-create-secure-workspace-vnet/storage-file-private-endpoint.png" alt-text="Screenshot of the basics form when adding the file private endpoint.":::
+
+1. Select __Next : Resource__, and then set __Target sub-resource__ to __file__.
+
+ :::image type="content" source="./media/tutorial-create-secure-workspace-vnet/storage-file-private-endpoint-resource.png" alt-text="Screenshot of the resource form when selecting a sub-resource of 'file'.":::
+
+1. Select __Next : Configuration__, and then use the following values:
+ * __Virtual network__: The network you created previously
+ * __Subnet__: Training
+ * __Integrate with private DNS zone__: Yes
+ * __Private DNS zone__: privatelink.file.core.windows.net
+
+ :::image type="content" source="./media/tutorial-create-secure-workspace-vnet/storage-file-private-endpoint-config.png" alt-text="Screenshot of the configuration form when adding the file private endpoint.":::
+
+1. Select __Review + Create__. Verify that the information is correct, and then select __Create__.
+
+> [!TIP]
+> If you plan to use a [batch endpoint](concept-endpoints.md) or an Azure Machine Learning pipeline that uses a [ParallelRunStep](./tutorial-pipeline-batch-scoring-classification.md), you must also configure private endpoints that target the **queue** and **table** sub-resources. ParallelRunStep uses queue and table under the hood for task scheduling and dispatching.
+
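+A rough CLI sketch of the storage account and its two private endpoints (names are placeholders; the private DNS zone wiring that the portal does for you is omitted):
+
+```azurecli
+# Sketch: storage account plus blob and file private endpoints in the Training subnet.
+az storage account create --name mymlstorage --resource-group my-rg --sku Standard_LRS
+STORAGE_ID=$(az storage account show --name mymlstorage --resource-group my-rg --query id -o tsv)
+az network private-endpoint create --name storage-blob-pe --resource-group my-rg \
+    --vnet-name my-vnet --subnet Training \
+    --private-connection-resource-id $STORAGE_ID \
+    --group-id blob --connection-name storage-blob-conn
+az network private-endpoint create --name storage-file-pe --resource-group my-rg \
+    --vnet-name my-vnet --subnet Training \
+    --private-connection-resource-id $STORAGE_ID \
+    --group-id file --connection-name storage-file-conn
+```
+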
+## Create a key vault
+
+1. In the [Azure portal](https://portal.azure.com), select the portal menu in the upper left corner. From the menu, select __+ Create a resource__ and then enter __Key Vault__. Select the __Key Vault__ entry, and then select __Create__.
+1. From the __Basics__ tab, select the __subscription__, __resource group__, and __region__ you previously used for the virtual network. Enter a unique __Key vault name__. Leave the other fields at the default value.
+
+ :::image type="content" source="./media/tutorial-create-secure-workspace-vnet/create-key-vault.png" alt-text="Screenshot of the basics form when creating a new key vault.":::
+
+1. From the __Networking__ tab, select __Private endpoint__ and then select __+ Add__.
+
+ :::image type="content" source="./media/tutorial-create-secure-workspace-vnet/key-vault-networking.png" alt-text="Screenshot of the networking form when adding a private endpoint for the key vault.":::
+
+1. On the __Create private endpoint__ form, use the following values:
+ * __Subscription__: The same Azure subscription that contains the previous resources you've created.
+ * __Resource group__: The same Azure resource group that contains the previous resources you've created.
+ * __Location__: The same Azure region that contains the previous resources you've created.
+ * __Name__: A unique name for this private endpoint.
+ * __Target sub-resource__: Vault
+ * __Virtual network__: The virtual network you created earlier.
+ * __Subnet__: Training (172.16.0.0/24)
+ * __Private DNS integration__: Yes
+ * __Private DNS Zone__: privatelink.vaultcore.azure.net
+
+ Select __OK__ to create the private endpoint.
+
+ :::image type="content" source="./media/tutorial-create-secure-workspace-vnet/key-vault-private-endpoint.png" alt-text="Screenshot of the key vault private endpoint configuration form.":::
+
+1. Select __Review + create__. Verify that the information is correct, and then select __Create__.
+
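+The same pattern in CLI form for the key vault (a sketch only; DNS zone configuration is omitted, and names are placeholders):
+
+```azurecli
+# Sketch: key vault plus a private endpoint for the 'vault' sub-resource.
+az keyvault create --name my-ml-kv --resource-group my-rg
+KV_ID=$(az keyvault show --name my-ml-kv --query id -o tsv)
+az network private-endpoint create --name keyvault-pe --resource-group my-rg \
+    --vnet-name my-vnet --subnet Training \
+    --private-connection-resource-id $KV_ID \
+    --group-id vault --connection-name keyvault-conn
+```
+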
+## Create a container registry
+
+1. In the [Azure portal](https://portal.azure.com), select the portal menu in the upper left corner. From the menu, select __+ Create a resource__ and then enter __Container Registry__. Select the __Container Registry__ entry, and then select __Create__.
+1. From the __Basics__ tab, select the __subscription__, __resource group__, and __location__ you previously used for the virtual network. Enter a unique __Registry name__ and set the __SKU__ to __Premium__.
+
+ :::image type="content" source="./media/tutorial-create-secure-workspace-vnet/create-container-registry.png" alt-text="Screenshot of the basics form when creating a container registry.":::
+
+1. From the __Networking__ tab, select __Private endpoint__ and then select __+ Add__.
+
+ :::image type="content" source="./media/tutorial-create-secure-workspace-vnet/container-registry-networking.png" alt-text="Screenshot of the networking form when adding a container registry private endpoint.":::
+
+1. On the __Create private endpoint__ form, use the following values:
+ * __Subscription__: The same Azure subscription that contains the previous resources you've created.
+ * __Resource group__: The same Azure resource group that contains the previous resources you've created.
+ * __Location__: The same Azure region that contains the previous resources you've created.
+ * __Name__: A unique name for this private endpoint.
+ * __Target sub-resource__: registry
+ * __Virtual network__: The virtual network you created earlier.
+ * __Subnet__: Training (172.16.0.0/24)
+ * __Private DNS integration__: Yes
+ * __Private DNS Zone__: privatelink.azurecr.io
+
+ Select __OK__ to create the private endpoint.
+
+ :::image type="content" source="./media/tutorial-create-secure-workspace-vnet/container-registry-private-endpoint.png" alt-text="Screenshot of the configuration form for the container registry private endpoint.":::
+
+1. Select __Review + create__. Verify that the information is correct, and then select __Create__.
+1. After the container registry has been created, select __Go to resource__.
+
+ :::image type="content" source="./media/tutorial-create-secure-workspace-vnet/container-registry-go-to-resource.png" alt-text="Screenshot of the 'go to resource' button.":::
+
+1. From the left of the page, select __Access keys__, and then enable __Admin user__. This setting is required when using Azure Container Registry inside a virtual network with Azure Machine Learning.
+
+ :::image type="content" source="./media/tutorial-create-secure-workspace-vnet/container-registry-admin-user.png" alt-text="Screenshot of the container registry access keys form, with the 'admin user' option enabled.":::
+
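+A rough CLI sketch for the registry (names are placeholders; DNS zone configuration is omitted):
+
+```azurecli
+# Sketch: Premium registry with the admin user enabled, plus a private endpoint.
+az acr create --name mymlregistry --resource-group my-rg --sku Premium --admin-enabled true
+ACR_ID=$(az acr show --name mymlregistry --query id -o tsv)
+az network private-endpoint create --name registry-pe --resource-group my-rg \
+    --vnet-name my-vnet --subnet Training \
+    --private-connection-resource-id $ACR_ID \
+    --group-id registry --connection-name registry-conn
+```
+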
+## Create a workspace
+
+1. In the [Azure portal](https://portal.azure.com), select the portal menu in the upper left corner. From the menu, select __+ Create a resource__ and then enter __Machine Learning__. Select the __Machine Learning__ entry, and then select __Create__.
+
+ :::image type="content" source="./media/tutorial-create-secure-workspace-vnet/machine-learning-create.png" alt-text="Screenshot of the create page for Azure Machine Learning.":::
+
+1. From the __Basics__ tab, select the __subscription__, __resource group__, and __Region__ you previously used for the virtual network. Use the following values for the other fields:
+ * __Workspace name__: A unique name for your workspace.
+ * __Storage account__: Select the storage account you created previously.
+ * __Key vault__: Select the key vault you created previously.
+ * __Application insights__: Use the default value.
+ * __Container registry__: Use the container registry you created previously.
+
+ :::image type="content" source="./media/tutorial-create-secure-workspace-vnet/create-machine-learning-workspace.png" alt-text="Screenshot of the basic workspace configuration form.":::
+
+1. From the __Networking__ tab, select __Private with Internet Outbound__. In the __Workspace inbound access__ section, select __+ add__.
+
+1. On the __Create private endpoint__ form, use the following values:
+ * __Subscription__: The same Azure subscription that contains the previous resources you've created.
+ * __Resource group__: The same Azure resource group that contains the previous resources you've created.
+ * __Location__: The same Azure region that contains the previous resources you've created.
+ * __Name__: A unique name for this private endpoint.
+ * __Target sub-resource__: amlworkspace
+ * __Virtual network__: The virtual network you created earlier.
+ * __Subnet__: Training (172.16.0.0/24)
+ * __Private DNS integration__: Yes
+ * __Private DNS Zone__: Leave the two private DNS zones at the default values of __privatelink.api.azureml.ms__ and __privatelink.notebooks.azure.net__.
+
+ Select __OK__ to create the private endpoint.
+
+ :::image type="content" source="./media/tutorial-create-secure-workspace-vnet/machine-learning-workspace-private-endpoint.png" alt-text="Screenshot of the workspace private network configuration form.":::
+
+1. From the __Networking__ tab, in the __Workspace outbound access__ section, select __Use my own virtual network__.
+1. Select __Review + create__. Verify that the information is correct, and then select __Create__.
+1. Once the workspace has been created, select __Go to resource__.
+1. From the __Settings__ section on the left, select __Private endpoint connections__ and then select the link in the __Private endpoint__ column:
+
+ :::image type="content" source="./media/tutorial-create-secure-workspace-vnet/workspace-private-endpoint-connections.png" alt-text="Screenshot of the private endpoint connections for the workspace.":::
+
+1. Once the private endpoint information appears, select __DNS configuration__ from the left of the page. Save the IP address and fully qualified domain name (FQDN) information on this page, as it will be used later.
+
+ :::image type="content" source="./media/tutorial-create-secure-workspace-vnet/workspace-private-endpoint-dns.png" alt-text="screenshot of the IP and FQDN entries for the workspace.":::
+
+> [!IMPORTANT]
+> There are still some configuration steps needed before you can fully use the workspace. However, these require you to connect to the workspace.
+
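+A rough CLI sketch of the workspace and its private endpoint, reusing the resource IDs captured in the earlier sketches (the portal also wires up the two private DNS zones for you, which is omitted here):
+
+```azurecli
+# Sketch: workspace that reuses the storage, key vault, and registry created earlier.
+az ml workspace create --name my-ml-ws --resource-group my-rg \
+    --storage-account $STORAGE_ID --key-vault $KV_ID --container-registry $ACR_ID
+WS_ID=$(az ml workspace show --name my-ml-ws --resource-group my-rg --query id -o tsv)
+az network private-endpoint create --name workspace-pe --resource-group my-rg \
+    --vnet-name my-vnet --subnet Training \
+    --private-connection-resource-id $WS_ID \
+    --group-id amlworkspace --connection-name workspace-conn
+```
+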
+## Enable studio
+
+Azure Machine Learning studio is a web-based application that lets you easily manage your workspace. However, it needs some extra configuration before it can be used with resources secured inside a VNet. Use the following steps to enable studio:
+
+1. When using an Azure Storage Account that has a private endpoint, add the service principal for the workspace as a __Reader__ for the storage private endpoint(s). From the Azure portal, select your storage account and then select __Networking__. Next, select __Private endpoint connections__.
+
+ :::image type="content" source="./media/tutorial-create-secure-workspace-vnet/storage-private-endpoint-select.png" alt-text="Screenshot of storage private endpoint connections.":::
+
+1. For __each private endpoint listed__, use the following steps:
+
+ 1. Select the link in the __Private endpoint__ column.
+
+ :::image type="content" source="./media/tutorial-create-secure-workspace-vnet/storage-private-endpoint-selected.png" alt-text="Screenshot of the endpoint links in the private endpoint column.":::
+
+ 1. Select __Access control (IAM)__ from the left side.
+ 1. Select __+ Add__, and then __Add role assignment (Preview)__.
+
+ ![Access control (IAM) page with Add role assignment menu open.](../../includes/role-based-access-control/media/add-role-assignment-menu-generic.png)
+
+    1. On the __Role__ tab, select the __Reader__ role.
+
+ ![Add role assignment page with Role tab selected.](../../includes/role-based-access-control/media/add-role-assignment-role-generic.png)
+
+ 1. On the __Members__ tab, select __User, group, or service principal__ in the __Assign access to__ area and then select __+ Select members__. In the __Select members__ dialog, enter the name as your Azure Machine Learning workspace. Select the service principal for the workspace, and then use the __Select__ button.
+
+ 1. On the **Review + assign** tab, select **Review + assign** to assign the role.
+
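+A rough CLI equivalent of the role assignment above (a sketch; the query paths are assumptions, so verify the output shape of `az ml workspace show` for your CLI version):
+
+```azurecli
+# Sketch: grant the workspace's managed identity Reader on a storage private endpoint.
+WS_PRINCIPAL_ID=$(az ml workspace show --name my-ml-ws --resource-group my-rg \
+    --query identity.principal_id -o tsv)
+PE_ID=$(az network private-endpoint show --name storage-blob-pe --resource-group my-rg \
+    --query id -o tsv)
+az role assignment create --role Reader --assignee-object-id $WS_PRINCIPAL_ID \
+    --assignee-principal-type ServicePrincipal --scope $PE_ID
+```
+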
+## Secure Azure Monitor and Application Insights
+
+> [!NOTE]
+> For more information on securing Azure Monitor and Application Insights, see the following links:
+> * [Migrate to workspace-based Application Insights resources](../azure-monitor/app/convert-classic-resource.md).
+> * [Configure your Azure Monitor private link](../azure-monitor/logs/private-link-configure.md).
+
+1. In the [Azure portal](https://portal.azure.com), select your Azure Machine Learning workspace. From __Overview__, select the __Application Insights__ link.
+
+ :::image type="content" source="./media/tutorial-create-secure-workspace-vnet/workspace-application-insight.png" alt-text="Screenshot of the Application Insights link.":::
+
+1. In the __Properties__ for Application Insights, check the __WORKSPACE__ entry to see if it contains a value. If it _doesn't_, select __Migrate to Workspace-based__, select the __Subscription__ and __Log Analytics Workspace__ to use, then select __Apply__.
+
+ :::image type="content" source="./media/tutorial-create-secure-workspace-vnet/migrate-workspace-based.png" alt-text="Screenshot of the link to migrate to workspace-based.":::
+
+1. In the Azure portal, select __Home__, and then search for __Private link__. Select the __Azure Monitor Private Link Scope__ result and then select __Create__.
+1. From the __Basics__ tab, select the same __Subscription__, __Resource Group__, and __Resource group region__ as your Azure Machine Learning workspace. Enter a __Name__ for the instance, and then select __Review + Create__. To create the instance, select __Create__.
+1. Once the Azure Monitor Private Link Scope instance has been created, select the instance in the Azure portal. From the __Configure__ section, select __Azure Monitor Resources__ and then select __+ Add__.
+
+ :::image type="content" source="./media/tutorial-create-secure-workspace-vnet/add-monitor-resources.png" alt-text="Screenshot of the add button.":::
+
+1. From __Select a scope__, use the filters to select the Application Insights instance for your Azure Machine Learning workspace. Select __Apply__ to add the instance.
+1. From the __Configure__ section, select __Private Endpoint connections__ and then select __+ Private Endpoint__.
+
+ :::image type="content" source="./media/tutorial-create-secure-workspace-vnet/private-endpoint-connections.png" alt-text="Screenshot of the add private endpoint button.":::
+
+1. Select the same __Subscription__, __Resource Group__, and __Region__ that contains your VNet. Select __Next: Resource__.
+
+ :::image type="content" source="./media/tutorial-create-secure-workspace-vnet/monitor-private-endpoint-basics.png" alt-text="Screenshot of the Azure Monitor private endpoint basics.":::
+
+1. Select `Microsoft.insights/privateLinkScopes` as the __Resource type__. Select the Private Link Scope you created earlier as the __Resource__. Select `azuremonitor` as the __Target sub-resource__. Finally, select __Next: Virtual Network__ to continue.
+
+ :::image type="content" source="./media/tutorial-create-secure-workspace-vnet/monitor-private-endpoint-resource.png" alt-text="Screenshot of the Azure Monitor private endpoint resources.":::
+
+1. Select the __Virtual network__ you created earlier, and the __Training__ subnet. Select __Next__ until you arrive at __Review + Create__. Select __Create__ to create the private endpoint.
+
+ :::image type="content" source="./media/tutorial-create-secure-workspace-vnet/monitor-private-endpoint-network.png" alt-text="Screenshot of the Azure Monitor private endpoint network.":::
+
+1. After the private endpoint has been created, return to the __Azure Monitor Private Link Scope__ resource in the portal. From the __Configure__ section, select __Access modes__. Select __Private only__ for __Ingestion access mode__ and __Query access mode__, then select __Save__.
+
+ :::image type="content" source="./media/tutorial-create-secure-workspace-vnet/access-modes.png" alt-text="Screenshot of the private link scope access modes.":::
+
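+A rough CLI sketch of the private link scope steps (resource names are placeholders; replace the Application Insights resource ID with your own):
+
+```azurecli
+# Sketch: private link scope, scoped Application Insights resource, and a private endpoint.
+az monitor private-link-scope create --name my-ampls --resource-group my-rg
+az monitor private-link-scope scoped-resource create --name appinsights-scoped \
+    --resource-group my-rg --scope-name my-ampls \
+    --linked-resource <application-insights-resource-id>
+AMPLS_ID=$(az monitor private-link-scope show --name my-ampls --resource-group my-rg --query id -o tsv)
+az network private-endpoint create --name monitor-pe --resource-group my-rg \
+    --vnet-name my-vnet --subnet Training \
+    --private-connection-resource-id $AMPLS_ID \
+    --group-id azuremonitor --connection-name monitor-conn
+```
+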
+## Connect to the workspace
+
+There are several ways that you can connect to the secured workspace. The steps in this article use a __jump box__, which is a virtual machine in the VNet. You can connect to it using your web browser and Azure Bastion. The following table lists several other ways that you might connect to the secure workspace:
+
+| Method | Description |
+| -- | -- |
+| [Azure VPN gateway](../vpn-gateway/vpn-gateway-about-vpngateways.md) | Connects on-premises networks to the VNet over a private connection. Connection is made over the public internet. |
+| [ExpressRoute](https://azure.microsoft.com/services/expressroute/) | Connects on-premises networks into the cloud over a private connection. Connection is made using a connectivity provider. |
+
+> [!IMPORTANT]
+> When using a __VPN gateway__ or __ExpressRoute__, you will need to plan how name resolution works between your on-premises resources and those in the VNet. For more information, see [Use a custom DNS server](how-to-custom-dns.md).
+
+### Create a jump box (VM)
+
+Use the following steps to create an Azure Virtual Machine to use as a jump box. Azure Bastion enables you to connect to the VM desktop through your browser. From the VM desktop, you can then use the browser on the VM to connect to resources inside the VNet, such as Azure Machine Learning studio. Or you can install development tools on the VM.
+
+> [!TIP]
+> The steps below create a Windows 11 enterprise VM. Depending on your requirements, you may want to select a different VM image. The Windows 11 (or 10) enterprise image is useful if you need to join the VM to your organization's domain.
+
+1. In the [Azure portal](https://portal.azure.com), select the portal menu in the upper left corner. From the menu, select __+ Create a resource__ and then enter __Virtual Machine__. Select the __Virtual Machine__ entry, and then select __Create__.
+
+1. From the __Basics__ tab, select the __subscription__, __resource group__, and __Region__ you previously used for the virtual network. Provide values for the following fields:
+
+ * __Virtual machine name__: A unique name for the VM.
+ * __Username__: The username you'll use to log in to the VM.
+ * __Password__: The password for the username.
+ * __Security type__: Standard.
+ * __Image__: Windows 11 Enterprise.
+
+ > [!TIP]
+    > If Windows 11 Enterprise isn't in the list for image selection, use __See all images__. Find the __Windows 11__ entry from Microsoft, and use the __Select__ drop-down to select the enterprise image.
++
+ You can leave other fields at the default values.
+
+ :::image type="content" source="./media/tutorial-create-secure-workspace-vnet/create-virtual-machine-basic.png" alt-text="Screenshot of the virtual machine basics configuration.":::
+
+1. Select __Networking__, and then select the __Virtual network__ you created earlier. Use the following information to set the remaining fields:
+
+ * Select the __Training__ subnet.
+ * Set the __Public IP__ to __None__.
+ * Leave the other fields at the default value.
+
+ :::image type="content" source="./media/tutorial-create-secure-workspace-vnet/create-virtual-machine-network.png" alt-text="Screenshot of the virtual machine network configuration.":::
+
+1. Select __Review + create__. Verify that the information is correct, and then select __Create__.
++
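+A rough CLI sketch of the jump box (the image URN is an assumption; list the available options with `az vm image list --publisher MicrosoftWindowsDesktop --all`):
+
+```azurecli
+# Sketch: Windows 11 jump box in the Training subnet with no public IP (placeholder names).
+az vm create --name my-jumpbox --resource-group my-rg \
+    --vnet-name my-vnet --subnet Training --public-ip-address "" \
+    --image MicrosoftWindowsDesktop:windows-11:win11-22h2-ent:latest \
+    --admin-username azureuser --admin-password <secure-password>
+```
+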
+### Connect to the jump box
+
+1. Once the virtual machine has been created, select __Go to resource__.
+1. From the top of the page, select __Connect__ and then __Bastion__.
+
+ :::image type="content" source="./media/tutorial-create-secure-workspace-vnet/virtual-machine-connect.png" alt-text="Screenshot of the 'connect' list, with 'Bastion' selected.":::
+
+1. Select __Use Bastion__ and provide your authentication information for the virtual machine; a connection is then established in your browser.
+
+ :::image type="content" source="./media/tutorial-create-secure-workspace-vnet/use-bastion.png" alt-text="Screenshot of the Use Bastion button.":::
+
+## Create a compute cluster and compute instance
+
+A compute cluster is used by your training jobs. A compute instance provides a Jupyter Notebook experience on a shared compute resource attached to your workspace.
+
+1. From an Azure Bastion connection to the jump box, open the __Microsoft Edge__ browser on the remote desktop.
+1. In the remote browser session, go to __https://ml.azure.com__. When prompted, authenticate using your Azure AD account.
+1. From the __Welcome to studio!__ screen, select the __Machine Learning workspace__ you created earlier and then select __Get started__.
+
+ > [!TIP]
+ > If your Azure AD account has access to multiple subscriptions or directories, use the __Directory and Subscription__ dropdown to select the one that contains the workspace.
+
+ :::image type="content" source="./media/tutorial-create-secure-workspace-vnet/studio-select-workspace.png" alt-text="Screenshot of the select Machine Learning workspace form.":::
+
+1. From studio, select __Compute__, __Compute clusters__, and then __+ New__.
+
+ :::image type="content" source="./media/tutorial-create-secure-workspace-vnet/studio-new-compute-cluster.png" alt-text="Screenshot of the compute clusters page, with the 'new' button selected.":::
+
+1. From the __Virtual Machine__ dialog, select __Next__ to accept the default virtual machine configuration.
+
+ :::image type="content" source="./media/tutorial-create-secure-workspace-vnet/studio-new-compute-vm.png" alt-text="Screenshot of the compute cluster virtual machine configuration.":::
+
+1. From the __Configure Settings__ dialog, enter __cpu-cluster__ as the __Compute name__. Set the __Subnet__ to __Training__ and then select __Create__ to create the cluster.
+
+ > [!TIP]
+ > Compute clusters dynamically scale the nodes in the cluster as needed. We recommend leaving the minimum number of nodes at 0 to reduce costs when the cluster is not in use.
+
+ :::image type="content" source="./media/tutorial-create-secure-workspace-vnet/studio-new-compute-settings.png" alt-text="Screenshot of the configure settings form.":::
+
+1. From studio, select __Compute__, __Compute instance__, and then __+ New__.
+
+ :::image type="content" source="./media/tutorial-create-secure-workspace-vnet/create-compute-instance.png" alt-text="Screenshot of the compute instances page, with the 'new' button selected.":::
+
+1. From the __Virtual Machine__ dialog, enter a unique __Compute name__ and select __Next: Advanced Settings__.
+
+ :::image type="content" source="./media/tutorial-create-secure-workspace-vnet/create-compute-instance-vm.png" alt-text="Screenshot of compute instance virtual machine configuration.":::
+
+1. From the __Advanced Settings__ dialog, set the __Subnet__ to __Training__, and then select __Create__.
+
+ :::image type="content" source="./media/tutorial-create-secure-workspace-vnet/create-compute-instance-settings.png" alt-text="Screenshot of the advanced settings.":::
+
+> [!TIP]
+> When you create a compute cluster or compute instance, Azure Machine Learning dynamically adds a Network Security Group (NSG). This NSG contains the following rules, which are specific to compute cluster and compute instance:
+>
+> * Allow inbound TCP traffic on ports 29876-29877 from the `BatchNodeManagement` service tag.
+> * Allow inbound TCP traffic on port 44224 from the `AzureMachineLearning` service tag.
+>
+> The following screenshot shows an example of these rules:
+>
+> :::image type="content" source="./media/how-to-secure-training-vnet/compute-instance-cluster-network-security-group.png" alt-text="Screenshot of NSG":::
+
+For more information on creating a compute cluster and compute instance, including how to do so with Python and the CLI, see the following articles:
+
+* [Create a compute cluster](how-to-create-attach-compute-cluster.md)
+* [Create a compute instance](how-to-create-compute-instance.md)
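+
+If you prefer to script this step, the Azure Machine Learning CLI (v2) can create a similar cluster. The following is a minimal sketch with placeholder resource names; it omits the virtual network settings shown in the studio steps, which the linked articles cover:
+
+```azurecli-interactive
+# Minimal sketch; replace the placeholder names with your own values.
+az ml compute create --name cpu-cluster \
+    --type AmlCompute \
+    --size Standard_DS3_v2 \
+    --min-instances 0 \
+    --max-instances 4 \
+    --resource-group myresourcegroup \
+    --workspace-name myworkspace
+```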
+
+## Configure image builds
++
+When Azure Container Registry is behind the virtual network, Azure Machine Learning can't use it to directly build Docker images (used for training and deployment). Instead, configure the workspace to build images on the compute cluster you created earlier. Use the following steps to configure the workspace:
+
+1. Navigate to [https://shell.azure.com/](https://shell.azure.com/) to open the Azure Cloud Shell.
+1. From the Cloud Shell, use the following command to install the 2.0 CLI for Azure Machine Learning:
+
+ ```azurecli-interactive
+ az extension add -n ml
+ ```
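+
+    If the extension is already installed, you can update it to the latest version instead:
+
+    ```azurecli-interactive
+    az extension update -n ml
+    ```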
+
+1. To update the workspace to use the compute cluster to build Docker images, run the following command. Replace `myresourcegroup` with your resource group, `myworkspace` with your workspace name, and `mycomputecluster` with the name of the compute cluster to use:
+
+ ```azurecli-interactive
+ az ml workspace update \
+ -n myworkspace \
+ -g myresourcegroup \
+ -i mycomputecluster
+ ```
+
+ > [!NOTE]
+ > You can use the same compute cluster to train models and build Docker images for the workspace.
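+
+    To confirm the change, query the workspace afterward. The `image_build_compute` property name here is taken from the v2 workspace schema; treat it as an assumption and verify it against your CLI version:
+
+    ```azurecli-interactive
+    az ml workspace show -n myworkspace -g myresourcegroup \
+        --query image_build_compute
+    ```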
+
+## Use the workspace
+
+> [!IMPORTANT]
+> The steps in this article put Azure Container Registry behind the VNet. In this configuration, you cannot deploy a model to Azure Container Instances inside the VNet. We do not recommend using Azure Container Instances with Azure Machine Learning in a virtual network. For more information, see [Secure the inference environment (SDK/CLI v1)](./v1/how-to-secure-inferencing-vnet.md).
+>
+> As an alternative to Azure Container Instances, try Azure Machine Learning managed online endpoints. For more information, see [Enable network isolation for managed online endpoints](how-to-secure-online-endpoint.md).
+
+At this point, you can use the studio to interactively work with notebooks on the compute instance and run training jobs on the compute cluster. For a tutorial on using the compute instance and compute cluster, see [Tutorial: Azure Machine Learning in a day](tutorial-azure-ml-in-a-day.md).
+
+## Stop compute instance and jump box
+
+> [!WARNING]
+> While they're running (started), the compute instance and jump box continue to accrue charges against your subscription. To avoid excess cost, __stop__ them when they're not in use.
+
+The compute cluster dynamically scales between the minimum and maximum node count set when you created it. If you accepted the defaults, the minimum is 0, which effectively turns off the cluster when not in use.
+
+### Stop the compute instance
+
+From studio, select __Compute__, __Compute instances__, and then select the compute instance. Finally, select __Stop__ from the top of the page.
++
+### Stop the jump box
+
+To stop the jump box, select the virtual machine in the Azure portal and then use the __Stop__ button. When you're ready to use it again, use the __Start__ button to start it.
++
+You can also configure the jump box to automatically shut down at a specific time. To do so, select __Auto-shutdown__, __Enable__, set a time, and then select __Save__.
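+
+Both resources can also be stopped from the command line, which is useful for automation. A minimal sketch, assuming the placeholder names used elsewhere in this tutorial:
+
+```azurecli-interactive
+# Stop the compute instance (replace the names with your own values).
+az ml compute stop --name mycomputeinstance \
+    --resource-group myresourcegroup --workspace-name myworkspace
+
+# Deallocate the jump box VM so it stops accruing compute charges.
+az vm deallocate --resource-group myresourcegroup --name myjumpbox
+```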
++
+## Clean up resources
+
+If you plan to continue using the secured workspace and other resources, skip this section.
+
+To delete all resources created in this tutorial, use the following steps:
+
+1. In the Azure portal, select __Resource groups__ on the far left.
+1. From the list, select the resource group that you created in this tutorial.
+1. Select __Delete resource group__.
+
+ :::image type="content" source="./media/tutorial-create-secure-workspace-vnet/delete-resources.png" alt-text="Screenshot of the delete resource group link.":::
+
+1. Enter the resource group name, then select __Delete__.
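+
+If you prefer the CLI, deleting the resource group removes everything it contains. A one-line sketch, assuming `myresourcegroup` is the group you created for this tutorial:
+
+```azurecli-interactive
+az group delete --name myresourcegroup --yes --no-wait
+```
+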
+## Next steps
+
+Now that you've created a secure workspace and can access studio, learn how to [deploy a model to an online endpoint with network isolation](how-to-secure-online-endpoint.md).
+To deploy a model by using SDK/CLI v1 instead, see [Deploy a model](./v1/how-to-deploy-and-where.md).
machine-learning Tutorial Create Secure Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-create-secure-workspace.md
Title: Create a secure workspace
+ Title: Create a secure workspace with a managed virtual network
-description: Create an Azure Machine Learning workspace and required Azure services inside a secure virtual network.
+description: Create an Azure Machine Learning workspace and required Azure services inside a managed virtual network.
Previously updated : 09/06/2022 Last updated : 08/11/2023 - monikerRange: 'azureml-api-2 || azureml-api-1'
-# Tutorial: How to create a secure workspace
+# Tutorial: How to create a secure workspace with a managed virtual network
-In this article, learn how to create and connect to a secure Azure Machine Learning workspace. The steps in this article use an Azure Virtual Network to create a security boundary around resources used by Azure Machine Learning.
-
+In this article, learn how to create and connect to a secure Azure Machine Learning workspace. The steps in this article use an Azure Machine Learning managed virtual network to create a security boundary around resources used by Azure Machine Learning.
In this tutorial, you accomplish the following tasks:

> [!div class="checklist"]
-> * Create an Azure Virtual Network (VNet) to __secure communications between services in the virtual network__.
-> * Create an Azure Storage Account (blob and file) behind the VNet. This service is used as __default storage for the workspace__.
-> * Create an Azure Key Vault behind the VNet. This service is used to __store secrets used by the workspace__. For example, the security information needed to access the storage account.
-> * Create an Azure Container Registry (ACR). This service is used as a repository for Docker images. __Docker images provide the compute environments needed when training a machine learning model or deploying a trained model as an endpoint__.
-> * Create an Azure Machine Learning workspace.
-> * Create a jump box. A jump box is an Azure Virtual Machine that is behind the VNet. Since the VNet restricts access from the public internet, __the jump box is used as a way to connect to resources behind the VNet__.
-> * Configure Azure Machine Learning studio to work behind a VNet. The studio provides a __web interface for Azure Machine Learning__.
-> * Create an Azure Machine Learning compute cluster. A compute cluster is used when __training machine learning models in the cloud__. In configurations where Azure Container Registry is behind the VNet, it is also used to build Docker images.
-> * Connect to the jump box and use the Azure Machine Learning studio.
-
-> [!TIP]
-> If you're looking for a template (Microsoft Bicep or Hashicorp Terraform) that demonstrates how to create a secure workspace, see [Tutorial - Create a secure workspace using a template](tutorial-create-secure-workspace-template.md).
+> * Create an Azure Machine Learning workspace configured to use a managed virtual network.
+> * Create an Azure Machine Learning compute cluster. A compute cluster is used when __training machine learning models in the cloud__.
After completing this tutorial, you'll have the following architecture:
-* An Azure Virtual Network, which contains three subnets:
- * __Training__: Contains the Azure Machine Learning workspace, dependency services, and resources used for training models.
- * __Scoring__: For the steps in this tutorial, it isn't used. However if you continue using this workspace for other tutorials, we recommend using this subnet when deploying models to [endpoints](concept-endpoints.md).
- * __AzureBastionSubnet__: Used by the Azure Bastion service to securely connect clients to Azure Virtual Machines.
-* An Azure Machine Learning workspace that uses a private endpoint to communicate using the VNet.
-* An Azure Storage Account that uses private endpoints to allow storage services such as blob and file to communicate using the VNet.
-* An Azure Container Registry that uses a private endpoint communicate using the VNet.
-* Azure Bastion, which allows you to use your browser to securely communicate with the jump box VM inside the VNet.
-* An Azure Virtual Machine that you can remotely connect to and access resources secured inside the VNet.
-* An Azure Machine Learning compute instance and compute cluster.
-
-> [!TIP]
-> The Azure Batch Service listed on the diagram is a back-end service required by the compute clusters and compute instances.
-
+* An Azure Machine Learning workspace that uses a private endpoint to communicate using the managed network.
+* An Azure Storage Account that uses private endpoints to allow storage services such as blob and file to communicate using the managed network.
+* An Azure Container Registry that uses a private endpoint to communicate using the managed network.
+* An Azure Key Vault that uses a private endpoint to communicate using the managed network.
+* An Azure Machine Learning compute instance and compute cluster secured by the managed network.
## Prerequisites
-* Familiarity with Azure Virtual Networks and IP networking. If you aren't familiar, try the [Fundamentals of computer networking](/training/modules/network-fundamentals/) module.
-* While most of the steps in this article use the Azure portal or the Azure Machine Learning studio, some steps use the Azure CLI extension for Machine Learning v2.
-
-## Create a virtual network
-
-To create a virtual network, use the following steps:
-
-1. In the [Azure portal](https://portal.azure.com), select the portal menu in the upper left corner. From the menu, select __+ Create a resource__ and then enter __Virtual Network__ in the search field. Select the __Virtual Network__ entry, and then select __Create__.
--
- :::image type="content" source="./media/tutorial-create-secure-workspace/create-resource-search-vnet.png" alt-text="The create resource UI search":::
-
- :::image type="content" source="./media/tutorial-create-secure-workspace/create-resource-vnet.png" alt-text="Virtual network create":::
-
-1. From the __Basics__ tab, select the Azure __subscription__ to use for this resource and then select or create a new __resource group__. Under __Instance details__, enter a friendly __name__ for your virtual network and select the __region__ to create it in.
-
- :::image type="content" source="./media/tutorial-create-secure-workspace/create-vnet-basics.png" alt-text="Image of the basic virtual network config":::
-
-1. Select __Security__. Select to __Enable Azure Bastion__. [Azure Bastion](../bastion/bastion-overview.md) provides a secure way to access the VM jump box you'll create inside the VNet in a later step. Use the following values for the remaining fields:
-
- * __Bastion name__: A unique name for this Bastion instance
- * __Public IP address__: Create a new public IP address.
-
- Leave the other fields at the default values.
-
- :::image type="content" source="./media/tutorial-create-secure-workspace/create-bastion.png" alt-text="Screenshot of Bastion config.":::
-
-1. Select __IP Addresses__. The default settings should be similar to the following image:
-
- :::image type="content" source="./media/tutorial-create-secure-workspace/create-vnet-ip-address-default.png" alt-text="Default IP Address screen.":::
-
- Use the following steps to configure the IP address and configure a subnet for training and scoring resources:
-
- > [!TIP]
- > While you can use a single subnet for all Azure Machine Learning resources, the steps in this article show how to create two subnets to separate the training & scoring resources.
- >
- > The workspace and other dependency services will go into the training subnet. They can still be used by resources in other subnets, such as the scoring subnet.
-
- 1. Look at the default __IPv4 address space__ value. In the screenshot, the value is __172.16.0.0/16__. __The value may be different for you__. While you can use a different value, the rest of the steps in this tutorial are based on the __172.16.0.0/16 value__.
-
- > [!IMPORTANT]
- > We do not recommend using the 172.17.0.0/16 IP address range for your VNet. This is the default subnet range used by the Docker bridge network. Other ranges may also conflict depending on what you want to connect to the virtual network. For example, if you plan to connect your on premises network to the VNet, and your on-premises network also uses the 172.16.0.0/16 range. Ultimately, it is up to __you__ to plan your network infrastructure.
-
- 1. Select the __Default__ subnet and then select __Remove subnet__.
-
- :::image type="content" source="./media/tutorial-create-secure-workspace/delete-default-subnet.png" alt-text="Screenshot of deleting default subnet.":::
-
- 1. To create a subnet to contain the workspace, dependency services, and resources used for _training_, select __+ Add subnet__ and set the subnet name, starting address, and subnet size. The following are the values used in this tutorial:
- * __Name__: Training
- * __Starting address__: 172.16.0.0
- * __Subnet size__: /24 (256 addresses)
-
- :::image type="content" source="./media/tutorial-create-secure-workspace/vnet-add-training-subnet.png" alt-text="Screenshot of Training subnet.":::
-
- 1. To create a subnet for compute resources used to _score_ your models, select __+ Add subnet__ again, and set the name and address range:
- * __Subnet name__: Scoring
- * __Starting address__: 172.16.1.0
- * __Subnet size__: /24 (256 addresses)
-
- :::image type="content" source="./media/tutorial-create-secure-workspace/vnet-add-scoring-subnet.png" alt-text="Screenshot of Scoring subnet.":::
-
- 1. To create a subnet for _Azure Bastion_, select __+ Add subnet__ and set the template, starting address, and subnet size:
- * __Subnet template__: Azure Bastion
- * __Starting address__: 172.16.2.0
- * __Subnet size__: /26 (64 addresses)
-
- :::image type="content" source="./media/tutorial-create-secure-workspace/vnet-add-azure-bastion-subnet.png" alt-text="Screenshot of Azure Bastion subnet.":::
-
-1. Select __Review + create__.
-
- :::image type="content" source="./media/tutorial-create-secure-workspace/create-vnet-ip-address-final.png" alt-text="Screenshot showing the review + create button":::
-
-1. Verify that the information is correct, and then select __Create__.
-
- :::image type="content" source="./media/tutorial-create-secure-workspace/create-vnet-review.png" alt-text="Screenshot of the review page":::
-
-## Create a storage account
-
-1. In the [Azure portal](https://portal.azure.com), select the portal menu in the upper left corner. From the menu, select __+ Create a resource__ and then enter __Storage account__. Select the __Storage Account__ entry, and then select __Create__.
-1. From the __Basics__ tab, select the __subscription__, __resource group__, and __region__ you previously used for the virtual network. Enter a unique __Storage account name__, and set __Redundancy__ to __Locally-redundant storage (LRS)__.
-
- :::image type="content" source="./media/tutorial-create-secure-workspace/create-storage.png" alt-text="Image of storage account basic config":::
-
-1. From the __Networking__ tab, select __Private endpoint__ and then select __+ Add private endpoint__.
-
- :::image type="content" source="./media/tutorial-create-secure-workspace/storage-enable-private-endpoint.png" alt-text="UI to add the blob private network":::
-
-1. On the __Create private endpoint__ form, use the following values:
- * __Subscription__: The same Azure subscription that contains the previous resources you've created.
- * __Resource group__: The same Azure resource group that contains the previous resources you've created.
- * __Location__: The same Azure region that contains the previous resources you've created.
- * __Name__: A unique name for this private endpoint.
- * __Target sub-resource__: blob
- * __Virtual network__: The virtual network you created earlier.
- * __Subnet__: Training (172.16.0.0/24)
- * __Private DNS integration__: Yes
- * __Private DNS Zone__: privatelink.blob.core.windows.net
-
- Select __OK__ to create the private endpoint.
-
-1. Select __Review + create__. Verify that the information is correct, and then select __Create__.
-
-1. Once the Storage Account has been created, select __Go to resource__:
-
- :::image type="content" source="./media/tutorial-create-secure-workspace/storage-go-to-resource.png" alt-text="Go to new storage resource":::
-
-1. From the left navigation, select __Networking__ the __Private endpoint connections__ tab, and then select __+ Private endpoint__:
-
- > [!NOTE]
- > While you created a private endpoint for Blob storage in the previous steps, you must also create one for File storage.
-
- :::image type="content" source="./media/tutorial-create-secure-workspace/storage-file-networking.png" alt-text="UI for storage account networking":::
-
-1. On the __Create a private endpoint__ form, use the same __subscription__, __resource group__, and __Region__ that you've used for previous resources. Enter a unique __Name__.
-
- :::image type="content" source="./media/tutorial-create-secure-workspace/storage-file-private-endpoint.png" alt-text="UI to add the file private endpoint":::
-
-1. Select __Next : Resource__, and then set __Target sub-resource__ to __file__.
-
- :::image type="content" source="./media/tutorial-create-secure-workspace/storage-file-private-endpoint-resource.png" alt-text="Add the subresource of 'file'":::
-
-1. Select __Next : Configuration__, and then use the following values:
- * __Virtual network__: The network you created previously
- * __Subnet__: Training
- * __Integrate with private DNS zone__: Yes
- * __Private DNS zone__: privatelink.file.core.windows.net
-
- :::image type="content" source="./media/tutorial-create-secure-workspace/storage-file-private-endpoint-config.png" alt-text="UI to configure the file private endpoint":::
-
-1. Select __Review + Create__. Verify that the information is correct, and then select __Create__.
-
-> [!TIP]
-> If you plan to use a [batch endpoint](concept-endpoints.md) or an Azure Machine Learning pipeline that uses a [ParallelRunStep](./tutorial-pipeline-batch-scoring-classification.md), it is also required to configure private endpoints target **queue** and **table** sub-resources. ParallelRunStep uses queue and table under the hood for task scheduling and dispatching.
-
-## Create a key vault
-
-1. In the [Azure portal](https://portal.azure.com), select the portal menu in the upper left corner. From the menu, select __+ Create a resource__ and then enter __Key Vault__. Select the __Key Vault__ entry, and then select __Create__.
-1. From the __Basics__ tab, select the __subscription__, __resource group__, and __region__ you previously used for the virtual network. Enter a unique __Key vault name__. Leave the other fields at the default value.
-
- :::image type="content" source="./media/tutorial-create-secure-workspace/create-key-vault.png" alt-text="Create a new key vault":::
-
-1. From the __Networking__ tab, select __Private endpoint__ and then select __+ Add__.
-
- :::image type="content" source="./media/tutorial-create-secure-workspace/key-vault-networking.png" alt-text="Key vault networking":::
-
-1. On the __Create private endpoint__ form, use the following values:
- * __Subscription__: The same Azure subscription that contains the previous resources you've created.
- * __Resource group__: The same Azure resource group that contains the previous resources you've created.
- * __Location__: The same Azure region that contains the previous resources you've created.
- * __Name__: A unique name for this private endpoint.
- * __Target sub-resource__: Vault
- * __Virtual network__: The virtual network you created earlier.
- * __Subnet__: Training (172.16.0.0/24)
- * __Private DNS integration__: Yes
- * __Private DNS Zone__: privatelink.vaultcore.azure.net
-
- Select __OK__ to create the private endpoint.
-
- :::image type="content" source="./media/tutorial-create-secure-workspace/key-vault-private-endpoint.png" alt-text="Configure a key vault private endpoint":::
-
-1. Select __Review + create__. Verify that the information is correct, and then select __Create__.
-
-## Create a container registry
-
-1. In the [Azure portal](https://portal.azure.com), select the portal menu in the upper left corner. From the menu, select __+ Create a resource__ and then enter __Container Registry__. Select the __Container Registry__ entry, and then select __Create__.
-1. From the __Basics__ tab, select the __subscription__, __resource group__, and __location__ you previously used for the virtual network. Enter a unique __Registry name__ and set the __SKU__ to __Premium__.
-
- :::image type="content" source="./media/tutorial-create-secure-workspace/create-container-registry.png" alt-text="Create a container registry":::
-
-1. From the __Networking__ tab, select __Private endpoint__ and then select __+ Add__.
-
- :::image type="content" source="./media/tutorial-create-secure-workspace/container-registry-networking.png" alt-text="Container registry networking":::
-
-1. On the __Create private endpoint__ form, use the following values:
- * __Subscription__: The same Azure subscription that contains the previous resources you've created.
- * __Resource group__: The same Azure resource group that contains the previous resources you've created.
- * __Location__: The same Azure region that contains the previous resources you've created.
- * __Name__: A unique name for this private endpoint.
- * __Target sub-resource__: registry
- * __Virtual network__: The virtual network you created earlier.
- * __Subnet__: Training (172.16.0.0/24)
- * __Private DNS integration__: Yes
- * __Private DNS Zone__: privatelink.azurecr.io
-
- Select __OK__ to create the private endpoint.
+* An Azure subscription. If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/).
- :::image type="content" source="./media/tutorial-create-secure-workspace/container-registry-private-endpoint.png" alt-text="Configure container registry private endpoint":::
+## Create a jump box (VM)
-1. Select __Review + create__. Verify that the information is correct, and then select __Create__.
-1. After the container registry has been created, select __Go to resource__.
-
- :::image type="content" source="./media/tutorial-create-secure-workspace/container-registry-go-to-resource.png" alt-text="Select 'go to resource'":::
-
-1. From the left of the page, select __Access keys__, and then enable __Admin user__. This setting is required when using Azure Container Registry inside a virtual network with Azure Machine Learning.
-
- :::image type="content" source="./media/tutorial-create-secure-workspace/container-registry-admin-user.png" alt-text="Screenshot of admin user toggle":::
-
-## Create a workspace
-
-1. In the [Azure portal](https://portal.azure.com), select the portal menu in the upper left corner. From the menu, select __+ Create a resource__ and then enter __Machine Learning__. Select the __Machine Learning__ entry, and then select __Create__.
-
- :::image type="content" source="./media/tutorial-create-secure-workspace/machine-learning-create.png" alt-text="{alt-text}":::
-
-1. From the __Basics__ tab, select the __subscription__, __resource group__, and __Region__ you previously used for the virtual network. Use the following values for the other fields:
- * __Workspace name__: A unique name for your workspace.
- * __Storage account__: Select the storage account you created previously.
- * __Key vault__: Select the key vault you created previously.
- * __Application insights__: Use the default value.
- * __Container registry__: Use the container registry you created previously.
-
- :::image type="content" source="./media/tutorial-create-secure-workspace/create-machine-learning-workspace.png" alt-text="Basic workspace configuration":::
-
-1. From the __Networking__ tab, select __Private with Internet Outbound__. In the __Workspace inbound access__ section, select __+ add__.
-
-1. On the __Create private endpoint__ form, use the following values:
- * __Subscription__: The same Azure subscription that contains the previous resources you've created.
- * __Resource group__: The same Azure resource group that contains the previous resources you've created.
- * __Location__: The same Azure region that contains the previous resources you've created.
- * __Name__: A unique name for this private endpoint.
- * __Target sub-resource__: amlworkspace
- * __Virtual network__: The virtual network you created earlier.
- * __Subnet__: Training (172.16.0.0/24)
- * __Private DNS integration__: Yes
- * __Private DNS Zone__: Leave the two private DNS zones at the default values of __privatelink.api.azureml.ms__ and __privatelink.notebooks.azure.net__.
-
- Select __OK__ to create the private endpoint.
-
- :::image type="content" source="./media/tutorial-create-secure-workspace/machine-learning-workspace-private-endpoint.png" alt-text="Screenshot of workspace private network config":::
-
-1. From the __Networking__ tab, in the __Workspace outbound access__ section, select __Use my own virtual network__.
-1. Select __Review + create__. Verify that the information is correct, and then select __Create__.
-1. Once the workspace has been created, select __Go to resource__.
-1. From the __Settings__ section on the left, select __Private endpoint connections__ and then select the link in the __Private endpoint__ column:
-
- :::image type="content" source="./media/tutorial-create-secure-workspace/workspace-private-endpoint-connections.png" alt-text="Screenshot of workspace private endpoint connections":::
-
-1. Once the private endpoint information appears, select __DNS configuration__ from the left of the page. Save the IP address and fully qualified domain name (FQDN) information on this page, as it will be used later.
-
- :::image type="content" source="./media/tutorial-create-secure-workspace/workspace-private-endpoint-dns.png" alt-text="screenshot of IP and FQDN entries":::
-
-> [!IMPORTANT]
-> There are still some configuration steps needed before you can fully use the workspace. However, these require you to connect to the workspace.
-
-## Enable studio
-
-Azure Machine Learning studio is a web-based application that lets you easily manage your workspace. However, it needs some extra configuration before it can be used with resources secured inside a VNet. Use the following steps to enable studio:
-
-1. When using an Azure Storage Account that has a private endpoint, add the service principal for the workspace as a __Reader__ for the storage private endpoint(s). From the Azure portal, select your storage account and then select __Networking__. Next, select __Private endpoint connections__.
-
- :::image type="content" source="./media/tutorial-create-secure-workspace/storage-private-endpoint-select.png" alt-text="Screenshot of storage private endpoints":::
-
-1. For __each private endpoint listed__, use the following steps:
-
- 1. Select the link in the __Private endpoint__ column.
-
- :::image type="content" source="./media/tutorial-create-secure-workspace/storage-private-endpoint-selected.png" alt-text="Screenshot of endpoints to select":::
-
- 1. Select __Access control (IAM)__ from the left side.
- 1. Select __+ Add__, and then __Add role assignment (Preview)__.
-
- ![Access control (IAM) page with Add role assignment menu open.](../../includes/role-based-access-control/media/add-role-assignment-menu-generic.png)
-
- 1. On the __Role__ tab, select the __Reader__.
-
- ![Add role assignment page with Role tab selected.](../../includes/role-based-access-control/media/add-role-assignment-role-generic.png)
-
- 1. On the __Members__ tab, select __User, group, or service principal__ in the __Assign access to__ area and then select __+ Select members__. In the __Select members__ dialog, enter the name as your Azure Machine Learning workspace. Select the service principal for the workspace, and then use the __Select__ button.
-
- 1. On the **Review + assign** tab, select **Review + assign** to assign the role.
-
-## Secure Azure Monitor and Application Insights
-
-> [!NOTE]
-> For more information on securing Azure Monitor and Application Insights, see the following links:
-> * [Migrate to workspace-based Application Insights resources](../azure-monitor/app/convert-classic-resource.md).
-> * [Configure your Azure Monitor private link](../azure-monitor/logs/private-link-configure.md).
-
-1. In the [Azure portal](https://portal.azure.com), select your Azure Machine Learning workspace. From __Overview__, select the __Application Insights__ link.
-
- :::image type="content" source="./media/tutorial-create-secure-workspace/workspace-application-insight.png" alt-text="Screenshot of the Application Insights link.":::
+There are several ways that you can connect to the secured workspace. In this tutorial, a __jump box__ is used. A jump box is a virtual machine in an Azure Virtual Network. You can connect to it using your web browser and Azure Bastion.
-1. In the __Properties__ for Application Insights, check the __WORKSPACE__ entry to see if it contains a value. If it _doesn't_, select __Migrate to Workspace-based__, select the __Subscription__ and __Log Analytics Workspace__ to use, then select __Apply__.
-
- :::image type="content" source="./media/tutorial-create-secure-workspace/migrate-workspace-based.png" alt-text="Screenshot of the link to migrate to workspace-based.":::
-
-1. In the Azure portal, select __Home__, and then search for __Private link__. Select the __Azure Monitor Private Link Scope__ result and then select __Create__.
-1. From the __Basics__ tab, select the same __Subscription__, __Resource Group__, and __Resource group region__ as your Azure Machine Learning workspace. Enter a __Name__ for the instance, and then select __Review + Create__. To create the instance, select __Create__.
-1. Once the Azure Monitor Private Link Scope instance has been created, select the instance in the Azure portal. From the __Configure__ section, select __Azure Monitor Resources__ and then select __+ Add__.
-
- :::image type="content" source="./media/tutorial-create-secure-workspace/add-monitor-resources.png" alt-text="Screenshot of the add button.":::
-
-1. From __Select a scope__, use the filters to select the Application Insights instance for your Azure Machine Learning workspace. Select __Apply__ to add the instance.
-1. From the __Configure__ section, select __Private Endpoint connections__ and then select __+ Private Endpoint__.
-
- :::image type="content" source="./media/tutorial-create-secure-workspace/private-endpoint-connections.png" alt-text="Screenshot of the add private endpoint button.":::
-
-1. Select the same __Subscription__, __Resource Group__, and __Region__ that contains your VNet. Select __Next: Resource__.
-
- :::image type="content" source="./media/tutorial-create-secure-workspace/monitor-private-endpoint-basics.png" alt-text="Screenshot of the Azure Monitor private endpoint basics.":::
-
-1. Select `Microsoft.insights/privateLinkScopes` as the __Resource type__. Select the Private Link Scope you created earlier as the __Resource__. Select `azuremonitor` as the __Target sub-resource__. Finally, select __Next: Virtual Network__ to continue.
-
- :::image type="content" source="./media/tutorial-create-secure-workspace/monitor-private-endpoint-resource.png" alt-text="Screenshot of the Azure Monitor private endpoint resources.":::
-
-1. Select the __Virtual network__ you created earlier, and the __Training__ subnet. Select __Next__ until you arrive at __Review + Create__. Select __Create__ to create the private endpoint.
-
- :::image type="content" source="./media/tutorial-create-secure-workspace/monitor-private-endpoint-network.png" alt-text="Screenshot of the Azure Monitor private endpoint network.":::
-
-1. After the private endpoint has been created, return to the __Azure Monitor Private Link Scope__ resource in the portal. From the __Configure__ section, select __Access modes__. Select __Private only__ for __Ingestion access mode__ and __Query access mode__, then select __Save__.
-
- :::image type="content" source="./media/tutorial-create-secure-workspace/access-modes.png" alt-text="Screenshot of the private link scope access modes.":::
-
-## Connect to the workspace
-
-There are several ways that you can connect to the secured workspace. The steps in this article use a __jump box__, which is a virtual machine in the VNet. You can connect to it using your web browser and Azure Bastion. The following table lists several other ways that you might connect to the secure workspace:
+The following table lists several other ways that you might connect to the secure workspace:
| Method | Description |
| -- | -- |
-| [Azure VPN gateway](../vpn-gateway/vpn-gateway-about-vpngateways.md) | Connects on-premises networks to the VNet over a private connection. Connection is made over the public internet. |
+| [Azure VPN gateway](../vpn-gateway/vpn-gateway-about-vpngateways.md) | Connects on-premises networks to an Azure Virtual Network over a private connection. A private endpoint for your workspace is created within that virtual network. Connection is made over the public internet. |
| [ExpressRoute](https://azure.microsoft.com/services/expressroute/) | Connects on-premises networks into the cloud over a private connection. Connection is made using a connectivity provider. |

> [!IMPORTANT]
-> When using a __VPN gateway__ or __ExpressRoute__, you will need to plan how name resolution works between your on-premises resources and those in the VNet. For more information, see [Use a custom DNS server](how-to-custom-dns.md).
+> When using a __VPN gateway__ or __ExpressRoute__, you will need to plan how name resolution works between your on-premises resources and those in the cloud. For more information, see [Use a custom DNS server](how-to-custom-dns.md).
-### Create a jump box (VM)
-
-Use the following steps to create an Azure Virtual Machine to use as a jump box. Azure Bastion enables you to connect to the VM desktop through your browser. From the VM desktop, you can then use the browser on the VM to connect to resources inside the VNet, such as Azure Machine Learning studio. Or you can install development tools on the VM.
+Use the following steps to create an Azure Virtual Machine to use as a jump box. From the VM desktop, you can then use the browser on the VM to connect to resources inside the managed virtual network, such as Azure Machine Learning studio. Or you can install development tools on the VM.
> [!TIP]
-> The steps below create a Windows 11 enterprise VM. Depending on your requirements, you may want to select a different VM image. The Windows 11 (or 10) enterprise image is useful if you need to join the VM to your organization's domain.
+> The following steps create a Windows 11 enterprise VM. Depending on your requirements, you may want to select a different VM image. The Windows 11 (or 10) enterprise image is useful if you need to join the VM to your organization's domain.
1. In the [Azure portal](https://portal.azure.com), select the portal menu in the upper left corner. From the menu, select __+ Create a resource__ and then enter __Virtual Machine__. Select the __Virtual Machine__ entry, and then select __Create__.
-1. From the __Basics__ tab, select the __subscription__, __resource group__, and __Region__ you previously used for the virtual network. Provide values for the following fields:
+1. From the __Basics__ tab, select the __subscription__, __resource group__, and __Region__ to create the service in. Provide values for the following fields:
* __Virtual machine name__: A unique name for the VM.
- * __Username__: The username you'll use to log in to the VM.
+ * __Username__: The username you use to sign in to the VM.
    * __Password__: The password for the username.
    * __Security type__: Standard.
    * __Image__: Windows 11 Enterprise.
Use the following steps to create an Azure Virtual Machine to use as a jump box.
You can leave other fields at the default values.
- :::image type="content" source="./media/tutorial-create-secure-workspace/create-virtual-machine-basic.png" alt-text="Image of VM basic configuration":::
+ :::image type="content" source="./media/tutorial-create-secure-workspace/create-virtual-machine-basic.png" alt-text="Screenshot of the virtual machine basics configuration.":::
-1. Select __Networking__, and then select the __Virtual network__ you created earlier. Use the following information to set the remaining fields:
+1. Select __Networking__. Review the networking information and make sure that it's not using the 172.17.0.0/16 IP address range. If it is, select a different range such as 172.16.0.0/16; the 172.17.0.0/16 range can cause conflicts with Docker.
- * Select the __Training__ subnet.
- * Set the __Public IP__ to __None__.
- * Leave the other fields at the default value.
+ > [!NOTE]
+ > The Azure Virtual Machine creates its own Azure Virtual Network for network isolation. This network is separate from the managed virtual network used by Azure Machine Learning.
- :::image type="content" source="./media/tutorial-create-secure-workspace/create-virtual-machine-network.png" alt-text="Image of VM network configuration":::
+ :::image type="content" source="./media/tutorial-create-secure-workspace/virtual-machine-networking.png" alt-text="Screenshot of the networking tab for the virtual machine.":::
1. Select __Review + create__. Verify that the information is correct, and then select __Create__.
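+
+If you want to script the jump box instead, `az vm create` can produce a similar VM. This is a sketch only; the Windows 11 image URN is an assumption, so verify it with `az vm image list --publisher MicrosoftWindowsDesktop --all` before relying on it:
+
+```azurecli-interactive
+# Sketch; verify the image URN and supply your own names and password.
+az vm create --resource-group myresourcegroup --name jumpbox \
+    --image "MicrosoftWindowsDesktop:windows-11:win11-22h2-ent:latest" \
+    --admin-username azureuser \
+    --admin-password "<your-password>" \
+    --public-ip-address ""
+```
+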
+### Enable Azure Bastion for the VM
-### Connect to the jump box
-
-1. Once the virtual machine has been created, select __Go to resource__.
-1. From the top of the page, select __Connect__ and then __Bastion__.
-
- :::image type="content" source="./media/tutorial-create-secure-workspace/virtual-machine-connect.png" alt-text="Image of the connect/bastion UI":::
-
-1. Select __Use Bastion__, and then provide your authentication information for the virtual machine, and a connection will be established in your browser.
+Azure Bastion enables you to connect to the VM desktop through your browser.
- :::image type="content" source="./media/tutorial-create-secure-workspace/use-bastion.png" alt-text="Image of use bastion dialog":::
+1. In the Azure portal, select the VM you created earlier. From the __Operations__ section of the page, select __Bastion__ and then __Deploy Bastion__.
-## Create a compute cluster and compute instance
+ :::image type="content" source="./media/tutorial-create-secure-workspace/virtual-machine-deploy-bastion.png" alt-text="Screenshot of the deploy Bastion option.":::
-A compute cluster is used by your training jobs. A compute instance provides a Jupyter Notebook experience on a shared compute resource attached to your workspace.
+1. Once the Bastion service has been deployed, you're presented with a connection page. Leave this dialog for now.
-1. From an Azure Bastion connection to the jump box, open the __Microsoft Edge__ browser on the remote desktop.
-1. In the remote browser session, go to __https://ml.azure.com__. When prompted, authenticate using your Azure AD account.
-1. From the __Welcome to studio!__ screen, select the __Machine Learning workspace__ you created earlier and then select __Get started__.
-
- > [!TIP]
- > If your Azure AD account has access to multiple subscriptions or directories, use the __Directory and Subscription__ dropdown to select the one that contains the workspace.
-
- :::image type="content" source="./media/tutorial-create-secure-workspace/studio-select-workspace.png" alt-text="Screenshot of the select workspace dialog":::
+## Create a workspace
-1. From studio, select __Compute__, __Compute clusters__, and then __+ New__.
+1. In the [Azure portal](https://portal.azure.com), select the portal menu in the upper left corner. From the menu, select __+ Create a resource__ and then enter __Azure Machine Learning__. Select the __Azure Machine Learning__ entry, and then select __Create__.
- :::image type="content" source="./media/tutorial-create-secure-workspace/studio-new-compute-cluster.png" alt-text="Screenshot of new compute cluster workflow":::
+1. From the __Basics__ tab, select the __subscription__, __resource group__, and __Region__ to create the service in. Enter a unique name for the __Workspace name__. Leave the rest of the fields at the default values; new instances of the required services are created for the workspace.
-1. From the __Virtual Machine__ dialog, select __Next__ to accept the default virtual machine configuration.
+ :::image type="content" source="./media/tutorial-create-secure-workspace/create-workspace.png" alt-text="Screenshot of the workspace creation form.":::
- :::image type="content" source="./media/tutorial-create-secure-workspace/studio-new-compute-vm.png" alt-text="Screenshot of compute cluster vm settings":::
-
-1. From the __Configure Settings__ dialog, enter __cpu-cluster__ as the __Compute name__. Set the __Subnet__ to __Training__ and then select __Create__ to create the cluster.
+1. From the __Networking__ tab, select __Private with Internet Outbound__.
- > [!TIP]
- > Compute clusters dynamically scale the nodes in the cluster as needed. We recommend leaving the minimum number of nodes at 0 to reduce costs when the cluster is not in use.
+ :::image type="content" source="./media/tutorial-create-secure-workspace/private-internet-outbound.png" alt-text="Screenshot of the workspace network tab with internet outbound selected.":::
- :::image type="content" source="./media/tutorial-create-secure-workspace/studio-new-compute-settings.png" alt-text="Screenshot of new compute cluster settings":::
+1. From the __Networking__ tab, in the __Workspace inbound access__ section, select __+ Add__.
-1. From studio, select __Compute__, __Compute instance__, and then __+ New__.
+ :::image type="content" source="./media/tutorial-create-secure-workspace/workspace-inbound-access.png" alt-text="Screenshot showing the add button for inbound access.":::
- :::image type="content" source="./media/tutorial-create-secure-workspace/create-compute-instance.png" alt-text="Screenshot of new compute instance workflow":::
+1. From the __Create private endpoint__ form, enter a unique value in the __Name__ field. Select the __Virtual network__ created earlier with the VM, and select the default __Subnet__. Leave the rest of the fields at the default values. Select __OK__ to save the endpoint.
-1. From the __Virtual Machine__ dialog, enter a unique __Computer name__ and select __Next: Advanced Settings__.
+ :::image type="content" source="./media/tutorial-create-secure-workspace/private-endpoint-workspace.png" alt-text="Screenshot of the create private endpoint form.":::
- :::image type="content" source="./media/tutorial-create-secure-workspace/create-compute-instance-vm.png" alt-text="Screenshot of compute instance vm settings":::
+1. Select __Review + create__. Verify that the information is correct, and then select __Create__.
-1. From the __Advanced Settings__ dialog, set the __Subnet__ to __Training__, and then select __Create__.
+1. Once the workspace has been created, select __Go to resource__.
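+
+If you prefer to create the workspace from the CLI, recent versions of the ml extension accept a managed network isolation mode at creation time. A sketch, with the `--managed-network` parameter and its value taken as assumptions to verify against your extension version:
+
+```azurecli-interactive
+# Sketch; requires a recent version of the ml CLI extension.
+az ml workspace create --name myworkspace \
+    --resource-group myresourcegroup \
+    --managed-network allow_internet_outbound
+```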
- :::image type="content" source="./media/tutorial-create-secure-workspace/create-compute-instance-settings.png" alt-text="Screenshot of compute instance settings":::
+## Connect to the VM desktop
-> [!TIP]
-> When you create a compute cluster or compute instance, Azure Machine Learning dynamically adds a Network Security Group (NSG). This NSG contains the following rules, which are specific to compute cluster and compute instance:
->
-> * Allow inbound TCP traffic on ports 29876-29877 from the `BatchNodeManagement` service tag.
-> * Allow inbound TCP traffic on port 44224 from the `AzureMachineLearning` service tag.
->
-> The following screenshot shows an example of these rules:
->
-> :::image type="content" source="./media/how-to-secure-training-vnet/compute-instance-cluster-network-security-group.png" alt-text="Screenshot of NSG":::
+1. From the [Azure portal](https://portal.azure.com), select the VM you created earlier.
+1. From the __Connect__ section, select __Bastion__. Enter the username and password you configured for the VM, and then select __Connect__.
-For more information on creating a compute cluster and compute cluster, including how to do so with Python and the CLI, see the following articles:
+ :::image type="content" source="./media/tutorial-create-secure-workspace/virtual-machine-bastion-connect.png" alt-text="Screenshot of the Bastion connect form.":::
-* [Create a compute cluster](how-to-create-attach-compute-cluster.md)
-* [Create a compute instance](how-to-create-compute-instance.md)
+## Connect to studio
-## Configure image builds
+At this point, the workspace has been created __but the managed virtual network has not__. The managed virtual network is _configured_ when you create the workspace, but it isn't created until you create the first compute resource or manually provision it.
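+
+If you want to provision the managed network ahead of time rather than waiting for the first compute resource, the ml CLI extension provides a provisioning command. A sketch, assuming a recent extension version:
+
+```azurecli-interactive
+az ml workspace provision-network --name myworkspace \
+    --resource-group myresourcegroup
+```
+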
+Use the following steps to create a compute instance.
-When Azure Container Registry is behind the virtual network, Azure Machine Learning can't use it to directly build Docker images (used for training and deployment). Instead, configure the workspace to use the compute cluster you created earlier. Use the following steps to create a compute cluster and configure the workspace to use it to build images:
+1. From the __VM desktop__, use the browser to open the [Azure Machine Learning studio](https://ml.azure.com) and select the workspace you created earlier.
-1. Navigate to [https://shell.azure.com/](https://shell.azure.com/) to open the Azure Cloud Shell.
-1. From the Cloud Shell, use the following command to install the 2.0 CLI for Azure Machine Learning:
-
- ```azurecli-interactive
- az extension add -n ml
- ```
+1. From studio, select __Compute__, __Compute instances__, and then __+ New__.
-1. To update the workspace to use the compute cluster to build Docker images. Replace `docs-ml-rg` with your resource group. Replace `docs-ml-ws` with your workspace. Replace `cpu-cluster` with the compute cluster to use:
+ :::image type="content" source="./media/tutorial-create-secure-workspace/create-new-compute-instance.png" alt-text="Screenshot of the new compute option in studio.":::
- ```azurecli-interactive
- az ml workspace update \
- -n myworkspace \
- -g myresourcegroup \
- -i mycomputecluster
- ```
+1. From the __Configure required settings__ dialog, enter a unique value as the __Compute name__. Leave the rest of the selections at the default value.
- > [!NOTE]
- > You can use the same compute cluster to train models and build Docker images for the workspace.
+1. Select __Create__. The compute instance takes a few minutes to create, and it's provisioned within the managed network.
-## Use the workspace
-
-> [!IMPORTANT]
-> The steps in this article put Azure Container Registry behind the VNet. In this configuration, you cannot deploy a model to Azure Container Instances inside the VNet. We do not recommend using Azure Container Instances with Azure Machine Learning in a virtual network. For more information, see [Secure the inference environment (SDK/CLI v1)](./v1/how-to-secure-inferencing-vnet.md).
->
-> As an alternative to Azure Container Instances, try Azure Machine Learning managed online endpoints. For more information, see [Enable network isolation for managed online endpoints](how-to-secure-online-endpoint.md).
+ > [!TIP]
+ > It may take several minutes to create the first compute resource. This delay occurs because the managed virtual network is also being created. The managed virtual network isn't created until the first compute resource is created. Subsequent managed compute resources will be created much faster.
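+
+The compute instance can also be created from the CLI. A minimal sketch with placeholder names:
+
+```azurecli-interactive
+az ml compute create --name mycomputeinstance \
+    --type ComputeInstance \
+    --size Standard_DS3_v2 \
+    --resource-group myresourcegroup \
+    --workspace-name myworkspace
+```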
-At this point, you can use the studio to interactively work with notebooks on the compute instance and run training jobs on the compute cluster. For a tutorial on using the compute instance and compute cluster, see [Tutorial: Azure Machine Learning in a day](tutorial-azure-ml-in-a-day.md).
+### Enable studio access to storage
-## Stop compute instance and jump box
+Because Azure Machine Learning studio runs partially in the client's web browser, the client needs direct access to the workspace's default storage account to perform data operations. To enable this access, use the following steps:
-> [!WARNING]
-> While it is running (started), the compute instance and jump box will continue charging your subscription. To avoid excess cost, __stop__ them when they are not in use.
+1. From the [Azure portal](https://portal.azure.com), select the jump box VM you created earlier. From the __Overview__ section, copy the __Private IP address__.
+1. From the [Azure portal](https://portal.azure.com), select the workspace you created earlier. From the __Overview__ section, select the link for the __Storage__ entry.
+1. From the storage account, select __Networking__, and add the jump box's _private_ IP address to the __Firewall__ section.
-The compute cluster dynamically scales between the minimum and maximum node count set when you created it. If you accepted the defaults, the minimum is 0, which effectively turns off the cluster when not in use.
-### Stop the compute instance
+ > [!TIP]
+ > In a scenario where you use a VPN gateway or ExpressRoute instead of a jump box, you could add a private endpoint or service endpoint for the storage account to the Azure Virtual Network. Using a private endpoint or service endpoint would allow multiple clients connecting through the Azure Virtual Network to successfully perform storage operations through studio.
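+
+    If you script this step, `az storage account network-rule add` makes the same firewall change. A sketch, assuming placeholder names and the private IP address you copied:
+
+    ```azurecli-interactive
+    az storage account network-rule add \
+        --resource-group myresourcegroup \
+        --account-name mystorageaccount \
+        --ip-address <jump-box-private-ip>
+    ```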
-From studio, select __Compute__, __Compute clusters__, and then select the compute instance. Finally, select __Stop__ from the top of the page.
+ At this point, you can use the studio to interactively work with notebooks on the compute instance and run training jobs. For a tutorial, see [Tutorial: Model development](tutorial-cloud-workstation.md).
-### Stop the jump box
+## Stop compute instance
-Once it has been created, select the virtual machine in the Azure portal and then use the __Stop__ button. When you're ready to use it again, use the __Start__ button to start it.
+While it's running (started), the compute instance continues to accrue charges against your subscription. To avoid excess cost, __stop__ it when not in use.
+From studio, select __Compute__, __Compute instances__, and then select the compute instance. Finally, select __Stop__ from the top of the page.
-You can also configure the jump box to automatically shut down at a specific time. To do so, select __Auto-shutdown__, __Enable__, set a time, and then select __Save__.
- ## Clean up resources
If you plan to continue using the secured workspace and other resources, skip this section.
To delete all resources created in this tutorial, use the following steps:
-1. In the Azure portal, select __Resource groups__ on the far left.
+1. In the Azure portal, select __Resource groups__.
1. From the list, select the resource group that you created in this tutorial.
1. Select __Delete resource group__.

    :::image type="content" source="./media/tutorial-create-secure-workspace/delete-resources.png" alt-text="Screenshot of delete resource group button":::

1. Enter the resource group name, then select __Delete__.
+
## Next steps

Now that you've created a secure workspace and can access studio, learn how to [deploy a model to an online endpoint with network isolation](how-to-secure-online-endpoint.md).
-Now that you've created a secure workspace, learn how to [deploy a model](./v1/how-to-deploy-and-where.md).
+
+For more information on the managed virtual network, see [Secure your workspace with a managed virtual network](how-to-managed-network.md).
machine-learning Tutorial Enable Materialization Backfill Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-enable-materialization-backfill-data.md
Title: "Tutorial #2: enable materialization and backfill feature data (preview)"-
-description: Managed Feature Store tutorial part 2.
+ Title: "Tutorial 2: Enable materialization and backfill feature data (preview)"
+
+description: This is part 2 of a tutorial series on managed feature store.
#Customer intent: As a professional data scientist, I want to know how to build and deploy a model with Azure Machine Learning by using Python in a Jupyter Notebook.
-# Tutorial #2: Enable materialization and backfill feature data (preview)
+# Tutorial 2: Enable materialization and backfill feature data (preview)
-This tutorial series shows how features seamlessly integrate all phases of the ML lifecycle: prototyping, training and operationalization.
+This tutorial series shows how features seamlessly integrate all phases of the machine learning lifecycle: prototyping, training, and operationalization.
-Part 1 of this tutorial showed how to create a feature set spec with custom transformations, and use that feature set to generate training data. This tutorial describes materialization, which computes the feature values for a given feature window, and then stores those values in a materialization store. All feature queries can then use the values from the materialization store. A feature set query applies the transformations to the source on the fly, to compute the features before it returns the values. This works well for the prototyping phase. However, for training and inference operations in a production environment, it's recommended that you materialize the features, for greater reliability and availability.
+This tutorial is the second part of a four-part series. The first tutorial showed how to create a feature set specification with custom transformations, and then use that feature set to generate training data. This tutorial describes materialization.
-This tutorial is part two of a four part series. In this tutorial, you'll learn how to:
+Materialization computes the feature values for a feature window and then stores those values in a materialization store. All feature queries can then use the values from the materialization store.
+
+Without materialization, a feature set query applies the transformations to the source on the fly, to compute the features before it returns the values. This process works well for the prototyping phase. However, for training and inference operations in a production environment, we recommend that you materialize the features for greater reliability and availability.
+
+In this tutorial, you learn how to:
> [!div class="checklist"]
-> * Enable offline store on the feature store by creating and attaching an Azure Data Lake Storage Gen2 container and a user assigned managed identity
-> * Enable offline materialization on the feature sets, and backfill the feature data
+> * Enable an offline store on the feature store by creating and attaching an Azure Data Lake Storage Gen2 container and a user-assigned managed identity (UAI).
+> * Enable offline materialization on the feature sets, and backfill the feature data.
-> [!IMPORTANT]
-> This feature is currently in public preview. This preview version is provided without a service-level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
## Prerequisites
-Before you proceed with this article, make sure you cover these prerequisites:
-
-* Complete the part 1 tutorial, to create the required feature store, account entity and transaction feature set
-* An Azure Resource group, where you (or the service principal you use) have `User Access Administrator`and `Contributor` roles.
+Before you proceed with this tutorial, be sure to cover these prerequisites:
-To proceed with this article, your user account needs the owner role or contributor role for the resource group that holds the created feature store.
+* Completion of [Tutorial 1: Develop and register a feature set with managed feature store](tutorial-get-started-with-feature-store.md), to create the required feature store, account entity, and `transactions` feature set.
+* An Azure resource group, where you (or the service principal that you use) have User Access Administrator and Contributor roles.
+* On your user account, the Owner or Contributor role for the resource group that holds the created feature store.
## Set up

This list summarizes the required setup steps:
-1. In your project workspace, create an Azure Machine Learning compute resource, to run the training pipeline
-1. In your feature store workspace, create an offline materialization store: create an Azure gen2 storage account and a container inside it, and attach it to the feature store. Optional: you can use an existing storage container
-1. Create and assign a user-assigned managed identity to the feature store. Optionally, you can use an existing managed identity. The system managed materialization jobs - in other words, the recurrent jobs - use the managed identity. Part 3 of the tutorial relies on this
-1. Grant required role-based authentication control (RBAC) permissions to the user-assigned managed identity
-1. Grant required role-based authentication control (RBAC) to your Azure AD identity. Users, including yourself, need read access to the sources and the materialization store
+1. In your project workspace, create an Azure Machine Learning compute resource to run the training pipeline.
+1. In your feature store workspace, create an offline materialization store. Create an Azure Data Lake Storage Gen2 account and a container inside it, and attach it to the feature store. Optionally, you can use an existing storage container.
+1. Create and assign a UAI to the feature store. Optionally, you can use an existing managed identity. The system-managed materialization jobs - in other words, the recurrent jobs - use the managed identity. The third tutorial in the series relies on it.
+1. Grant required role-based access control (RBAC) permissions to the UAI.
+1. Grant required RBAC permissions to your Azure Active Directory (Azure AD) identity. Users, including you, need read access to the sources and the materialization store.
-### Configure the Azure Machine Learning spark notebook
+### Configure the Azure Machine Learning Spark notebook
-1. Running the tutorial:
+You can create a new notebook and execute the instructions in this tutorial step by step. You can also open the existing notebook named *2. Enable materialization and backfill feature data.ipynb* from the *featurestore_sample/notebooks* directory, and then run it. You can choose *sdk_only* or *sdk_and_cli*. Keep this tutorial open and refer to it for documentation links and more explanation.
- You can create a new notebook, and execute the instructions in this document, step by step. You can also open the existing notebook named `2. Enable materialization and backfill feature data.ipynb`, and then run it. You can find the notebooks in the `featurestore_sample/notebooks directory`. You can select from `sdk_only`, or `sdk_and_cli`. You can keep this document open, and refer to it for documentation links and more explanation.
-
-1. Select Azure Machine Learning Spark compute in the "Compute" dropdown, located in the top nav.
+1. On the top menu, in the **Compute** dropdown list, select **Serverless Spark Compute** under **Azure Machine Learning Serverless Spark**.
1. Configure the session:
- * Select "configure session" in the bottom nav
- * Select **upload conda file**
- * Upload the **conda.yml** file you [uploaded in Tutorial #1](./tutorial-get-started-with-feature-store.md#prepare-the-notebook-environment-for-development)
- * Increase the session time-out (idle time) to avoid frequent prerequisite reruns
+ 1. On the toolbar, select **Configure session**.
+ 1. On the **Python packages** tab, select **Upload Conda file**.
+ 1. Upload the *conda.yml* file that you [uploaded in the first tutorial](./tutorial-get-started-with-feature-store.md#prepare-the-notebook-environment).
+ 1. Increase the session time-out (idle time) to avoid frequent prerequisite reruns.
[!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/2. Enable materialization and backfill feature data.ipynb?name=start-spark-session)]
This list summarizes the required setup steps:
[!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/2. Enable materialization and backfill feature data.ipynb?name=root-dir)]
- 1. Set up the CLI
+1. Set up the CLI.
+
+ # [Python SDK](#tab/python)
+
+ Not applicable.
+
+ # [Azure CLI](#tab/cli)
+
+ 1. Install the Azure Machine Learning extension.
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/2. Enable materialization and backfill feature data.ipynb?name=install-ml-ext-cli)]
+
+ 1. Authenticate.
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/2. Enable materialization and backfill feature data.ipynb?name=auth-cli)]
+
+ 1. Set the default subscription.
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/2. Enable materialization and backfill feature data.ipynb?name=set-default-subs-cli)]
- # [Python SDK](#tab/python)
-
- Not applicable
-
- # [Azure CLI](#tab/cli)
-
- 1. Install the Azure Machine Learning extension
-
- [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/2. Enable materialization and backfill feature data.ipynb?name=install-ml-ext-cli)]
-
- 1. Authentication
-
- [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/2. Enable materialization and backfill feature data.ipynb?name=auth-cli)]
-
- 1. Set the default subscription
-
- [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/2. Enable materialization and backfill feature data.ipynb?name=set-default-subs-cli)]
-
-1. Initialize the project workspace properties
+1. Initialize the project workspace properties.
   This is the current workspace. You'll run the tutorial notebook from this workspace.

   [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/2. Enable materialization and backfill feature data.ipynb?name=init-ws-crud-client)]
-1. Initialize the feature store properties
+1. Initialize the feature store properties.
- Make sure that you update the `featurestore_name` and `featurestore_location` values shown, to reflect what you created in part 1 of this tutorial.
+ Be sure to update the `featurestore_name` and `featurestore_location` values to reflect what you created in the first tutorial.
[!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/2. Enable materialization and backfill feature data.ipynb?name=init-fs-crud-client)]
-1. Initialize the feature store core SDK client
+1. Initialize the feature store core SDK client.
[!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/2. Enable materialization and backfill feature data.ipynb?name=init-fs-core-sdk)]
-1. Set up the offline materialization store
+1. Set up the offline materialization store.
- You can create a new gen2 storage account and a container. You can also reuse an existing gen2 storage account and container as the offline materialization store for the feature store.
+ You can create a new storage account and a container. You can also reuse an existing storage account and container as the offline materialization store for the feature store.
# [Python SDK](#tab/python)
This list summarizes the required setup steps:
# [Azure CLI](#tab/cli)
- Not applicable
+ Not applicable.
-## Set values for the Azure Data Lake Storage Gen2 storage
+## Set values for Azure Data Lake Storage Gen2 storage
- The materialization store uses these values. You can optionally override the default settings.
+The materialization store uses these values. You can optionally override the default settings.
- [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/2. Enable materialization and backfill feature data.ipynb?name=set-offline-store-params)]
+[!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/2. Enable materialization and backfill feature data.ipynb?name=set-offline-store-params)]
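The storage values compose into an Azure Resource Manager (ARM) ID for the container. Here's a minimal sketch, with placeholder names, of the format that the offline store attachment expects:

```python
# Placeholder values; replace them with your own storage details.
storage_subscription_id = "<storage-subscription-id>"
storage_resource_group_name = "<storage-resource-group>"
storage_account_name = "<storage-account>"
storage_file_system_name = "offlinestore"  # container (file system) name

# ARM ID format for a blob container; the offline store attachment uses this ID.
gen2_container_arm_id = (
    f"/subscriptions/{storage_subscription_id}"
    f"/resourceGroups/{storage_resource_group_name}"
    f"/providers/Microsoft.Storage/storageAccounts/{storage_account_name}"
    f"/blobServices/default/containers/{storage_file_system_name}"
)
print(gen2_container_arm_id)
```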
-1. Storage containers
+1. Create storage containers.
- Option 1: create new storage and container resources
+ The first option is to create new storage and container resources.
# [Python SDK](#tab/python)
This list summarizes the required setup steps:
- Option 2: reuse an existing storage container
+ The second option is to reuse an existing storage container.
# [Python SDK](#tab/python)
-
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/2. Enable materialization and backfill feature data.ipynb?name=use-existing-storage)]
-
+ # [Azure CLI](#tab/cli)
-
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/2. Enable materialization and backfill feature data.ipynb?name=use-existing-storage)]
-
+
-1. Set up user assigned managed identity (UAI)
+1. Set up a UAI.
- The system-managed materialization jobs will use the UAI. For example, the recurrent job in part 3 of this tutorial uses this UAI.
+ The system-managed materialization jobs will use the UAI. For example, the recurrent job in the third tutorial uses this UAI.
### Set the UAI values
- [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/2. Enable materialization and backfill feature data.ipynb?name=set-uai-params)]
-
-### User assigned managed identity (option 1)
+[!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/2. Enable materialization and backfill feature data.ipynb?name=set-uai-params)]
- Create a new one
+### Set up a UAI
- [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/2. Enable materialization and backfill feature data.ipynb?name=create-new-uai)]
+The first option is to create a new managed identity.
-### User assigned managed identity (option 2)
+[!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/2. Enable materialization and backfill feature data.ipynb?name=create-new-uai)]
- Reuse an existing managed identity
+The second option is to reuse an existing managed identity.
- [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/2. Enable materialization and backfill feature data.ipynb?name=use-existing-uai)]
+[!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/2. Enable materialization and backfill feature data.ipynb?name=use-existing-uai)]
### Retrieve UAI properties
- Run this code sample in the SDK to retrieve the UAI properties:
+Run this code sample in the SDK to retrieve the UAI properties.
- [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/2. Enable materialization and backfill feature data.ipynb?name=retrieve-uai-properties)]
+[!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/2. Enable materialization and backfill feature data.ipynb?name=retrieve-uai-properties)]
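If you prefer to retrieve the properties outside the notebook, the following sketch uses the azure-mgmt-msi package; the subscription ID, resource group, and identity name are placeholders.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.msi import ManagedServiceIdentityClient

credential = DefaultAzureCredential()
msi_client = ManagedServiceIdentityClient(credential, "<subscription-id>")

# Look up the user-assigned managed identity and read the IDs that the
# upcoming role assignments need.
identity = msi_client.user_assigned_identities.get(
    resource_group_name="<resource-group>",
    resource_name="<uai-name>",
)

print(identity.id)            # ARM resource ID of the identity
print(identity.principal_id)  # object ID, used for RBAC assignments
print(identity.client_id)     # client ID, used by workloads
```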
-
+
-## Grant RBAC permission to the user assigned managed identity (UAI)
+## Grant RBAC permission to the UAI
- This UAI is assigned to the feature store shortly. It requires these permissions:
+This UAI is assigned to the feature store shortly. It requires these permissions:
- | **Scope** | **Action/Role** |
- ||--|
- | Feature Store | Azure Machine Learning Data Scientist role |
- | Storage account of feature store offline store | Blob storage data contributor role |
- | Storage accounts of source data | Blob storage data reader role |
+| Scope | Role |
+||--|
+| Feature store | Azure Machine Learning Data Scientist role |
+| Storage account of the offline store on the feature store | Storage Blob Data Contributor role |
+| Storage accounts of the source data | Storage Blob Data Reader role |
- The next CLI commands will assign the first two roles to the UAI. In this example, "Storage accounts of source data" doesn't apply because we read the sample data from a public access blob storage. To use your own data sources, you must assign the required roles to the UAI. To learn more about access control, see the [access control document]() in the documentation resources.
+The next CLI commands assign the first two roles to the UAI. In this example, the "storage accounts of the source data" scope doesn't apply because you read the sample data from a public access blob storage. To use your own data sources, you must assign the required roles to the UAI. To learn more about access control, see [Manage access control for managed feature store](./how-to-setup-access-control-feature-store.md).
- # [Python SDK](#tab/python)
+# [Python SDK](#tab/python)
- [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/2. Enable materialization and backfill feature data.ipynb?name=grant-rbac-to-uai)]
+[!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/2. Enable materialization and backfill feature data.ipynb?name=grant-rbac-to-uai)]
- # [Azure CLI](#tab/cli)
+# [Azure CLI](#tab/cli)
- [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/2. Enable materialization and backfill feature data.ipynb?name=grant-rbac-to-uai-fs)]
+[!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/2. Enable materialization and backfill feature data.ipynb?name=grant-rbac-to-uai-fs)]
- [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/2. Enable materialization and backfill feature data.ipynb?name=grant-rbac-to-uai-offline-store)]
+[!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/2. Enable materialization and backfill feature data.ipynb?name=grant-rbac-to-uai-offline-store)]
-
+
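The notebook cells perform these assignments for you. Purely as a sketch of what one assignment amounts to, here's an equivalent call with the azure-mgmt-authorization package. The scope and principal values are placeholders; the GUID is the well-known built-in role definition ID for Storage Blob Data Contributor.

```python
import uuid

from azure.identity import DefaultAzureCredential
from azure.mgmt.authorization import AuthorizationManagementClient
from azure.mgmt.authorization.models import RoleAssignmentCreateParameters

subscription_id = "<subscription-id>"
auth_client = AuthorizationManagementClient(DefaultAzureCredential(), subscription_id)

# Built-in role definition ID for Storage Blob Data Contributor.
role_definition_id = (
    f"/subscriptions/{subscription_id}/providers/Microsoft.Authorization"
    "/roleDefinitions/ba92f5b4-2d11-453d-a403-e96b0029c9fe"
)

# Scope the assignment to the offline store's storage account.
scope = (
    f"/subscriptions/{subscription_id}/resourceGroups/<resource-group>"
    "/providers/Microsoft.Storage/storageAccounts/<storage-account>"
)

auth_client.role_assignments.create(
    scope=scope,
    role_assignment_name=str(uuid.uuid4()),  # assignment names are GUIDs
    parameters=RoleAssignmentCreateParameters(
        role_definition_id=role_definition_id,
        principal_id="<uai-principal-id>",
        principal_type="ServicePrincipal",
    ),
)
```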
-### Grant the blob data reader role access to your user account in the offline store
+### Grant the Storage Blob Data Reader role access to your user account in the offline store
- If the feature data is materialized, you need this role to read feature data from the offline materialization store.
+If the feature data is materialized, you need the Storage Blob Data Reader role to read feature data from the offline materialization store.
- Obtain your Azure AD object ID value from the Azure portal as described [here](/partner-center/find-ids-and-domain-names#find-the-user-object-id).
+Obtain your Azure AD object ID value from the Azure portal, as described in [Find the user object ID](/partner-center/find-ids-and-domain-names#find-the-user-object-id).
- To learn more about access control, see the [access control document](./how-to-setup-access-control-feature-store.md).
+To learn more about access control, see [Manage access control for managed feature store](./how-to-setup-access-control-feature-store.md).
- [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/2. Enable materialization and backfill feature data.ipynb?name=grant-rbac-to-user-identity)]
+[!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/2. Enable materialization and backfill feature data.ipynb?name=grant-rbac-to-user-identity)]
- The following steps grant the blob data reader role access to your user account.
+The following steps grant the Storage Blob Data Reader role access to your user account:
- 1. Attach the offline materialization store and UAI, to enable the offline store on the feature store
+1. Attach the offline materialization store and UAI, to enable the offline store on the feature store.
# [Python SDK](#tab/python)
This list summarizes the required setup steps:
# [Azure CLI](#tab/cli)
- Action: inspect file `xxxx`. This command attaches the offline store and the UAI, to update the feature store.
+ Inspect file `xxxx`. This command attaches the offline store and the UAI, to update the feature store.
[!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/2. Enable materialization and backfill feature data.ipynb?name=dump_featurestore_yaml)]
This list summarizes the required setup steps:
- 2. Enable offline materialization on the transactions feature set
+2. Enable offline materialization on the `transactions` feature set.
- Once materialization is enabled on a feature set, you can perform a backfill, as explained in this tutorial. You can also schedule recurrent materialization jobs. See [part 3](./tutorial-experiment-train-models-using-features.md) of this tutorial series for more information.
+ After you enable materialization on a feature set, you can perform a backfill, as explained in this tutorial. You can also schedule recurrent materialization jobs. For more information, see [the third tutorial in the series](./tutorial-experiment-train-models-using-features.md).
- # [Python SDK](#tab/python)
+ # [Python SDK](#tab/python)
- [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/2. Enable materialization and backfill feature data.ipynb?name=enable-offline-mat-txns-fset)]
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/2. Enable materialization and backfill feature data.ipynb?name=enable-offline-mat-txns-fset)]
- # [Azure CLI](#tab/cli)
+ # [Azure CLI](#tab/cli)
- [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/2. Enable materialization and backfill feature data.ipynb?name=enable-offline-mat-txns-fset)]
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/2. Enable materialization and backfill feature data.ipynb?name=enable-offline-mat-txns-fset)]
-
+
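    The enablement boils down to setting materialization settings on the feature set asset and updating it. Here's a minimal sketch, assuming the azure-ai-ml preview SDK's `MaterializationSettings` entity and the `fs_client` feature store client from setup; the notebook cell might pass more arguments, such as compute settings.

    ```python
    # Minimal sketch: turn on offline materialization for the feature set.
    from azure.ai.ml.entities import MaterializationSettings

    transactions_fset = fs_client.feature_sets.get(name="transactions", version="1")
    transactions_fset.materialization_settings = MaterializationSettings(
        offline_enabled=True,
    )
    poller = fs_client.feature_sets.begin_create_or_update(transactions_fset)
    print(poller.result())
    ```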
- Optional: you can save the feature set asset as a YAML resource
+ Optionally, you can save the feature set asset as a YAML resource.
- # [Python SDK](#tab/python)
+ # [Python SDK](#tab/python)
- [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/2. Enable materialization and backfill feature data.ipynb?name=dump-txn-fset-yaml)]
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/2. Enable materialization and backfill feature data.ipynb?name=dump-txn-fset-yaml)]
- # [Azure CLI](#tab/cli)
+ # [Azure CLI](#tab/cli)
- Not applicable
+ Not applicable.
-
+
- 3. Backfill data for the transactions feature set
+3. Backfill data for the `transactions` feature set.
- As explained earlier in this tutorial, materialization computes the feature values for a given feature window, and stores these computed values in a materialization store. Feature materialization increases the reliability and availability of the computed values. All feature queries now use the values from the materialization store. This step performs a one-time backfill, for a feature window of three months.
+ As explained earlier in this tutorial, materialization computes the feature values for a feature window, and it stores these computed values in a materialization store. Feature materialization increases the reliability and availability of the computed values. All feature queries now use the values from the materialization store. This step performs a one-time backfill for a feature window of three months.
> [!NOTE]
- > You might need to determine a backfill data window. The window must match the window of your training data. For example, to use two years of data for training, you need to retrieve features for the same window. This means you should backfill for a two year window.
+ > You might need to determine a backfill data window. The window must match the window of your training data. For example, to use two years of data for training, you need to retrieve features for the same window. This means you should backfill for a two-year window.
[!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/2. Enable materialization and backfill feature data.ipynb?name=backfill-txns-fset)]
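    In rough outline, the backfill call in the preceding cell submits a job over an explicit window. The method and parameter names in this sketch assume the azure-ai-ml preview SDK, and the dates are placeholders:

    ```python
    # Sketch: one-time backfill over a three-month feature window.
    from datetime import datetime

    poller = fs_client.feature_sets.begin_backfill(
        name="transactions",
        version="1",
        feature_window_start_time=datetime(2023, 1, 1),
        feature_window_end_time=datetime(2023, 4, 1),
    )
    print(poller.result())
    ```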
- We'll print sample data from the feature set. The output information shows that the data was retrieved from the materialization store. The `get_offline_features()` method retrieved the training and inference data, and it also uses the materialization store by default.
+ Next, print sample data from the feature set. The output information shows that the data was retrieved from the materialization store. The `get_offline_features()` method retrieved the training and inference data. It also uses the materialization store by default.
[!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/2. Enable materialization and backfill feature data.ipynb?name=sample-txns-fset-data)]
-## Cleanup
+## Clean up
-The Tutorial #4 [clean up step](./tutorial-enable-recurrent-materialization-run-batch-inference.md#cleanup) describes how to delete the resources
+The [fourth tutorial in the series](./tutorial-enable-recurrent-materialization-run-batch-inference.md#clean-up) describes how to delete the resources.
## Next steps
-* [Part 3: tutorial features and the machine learning lifecycle](./tutorial-experiment-train-models-using-features.md)
-* [Understand identity and access control for feature store](./how-to-setup-access-control-feature-store.md)
-* [View feature store troubleshooting guide](./troubleshooting-managed-feature-store.md)
-* Reference: [YAML reference](./reference-yaml-overview.md)
+* Go to the next tutorial in the series: [Experiment and train models by using features](./tutorial-experiment-train-models-using-features.md).
+* Learn about [identity and access control for managed feature store](./how-to-setup-access-control-feature-store.md).
+* View the [troubleshooting guide for managed feature store](./troubleshooting-managed-feature-store.md).
+* View the [YAML reference](./reference-yaml-overview.md).
machine-learning Tutorial Enable Recurrent Materialization Run Batch Inference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-enable-recurrent-materialization-run-batch-inference.md
Title: "Tutorial #4: enable recurrent materialization and run batch inference (preview)"-
-description: Managed Feature Store tutorial part 4
+ Title: "Tutorial 4: Enable recurrent materialization and run batch inference (preview)"
+
+description: This is part 4 of a tutorial series on managed feature store.
#Customer intent: As a professional data scientist, I want to know how to build and deploy a model with Azure Machine Learning by using Python in a Jupyter Notebook.
-# Tutorial #4: Enable recurrent materialization and run batch inference (preview)
+# Tutorial 4: Enable recurrent materialization and run batch inference (preview)
-This tutorial series shows how features seamlessly integrate all phases of the ML lifecycle: prototyping, training and operationalization.
+This tutorial series shows how features seamlessly integrate all phases of the machine learning lifecycle: prototyping, training, and operationalization.
-Part 1 of this tutorial showed how to create a feature set spec with custom transformations, and use that feature set to generate training data. Part 2 of the tutorial showed how to enable materialization and perform a backfill. Part 3 of this tutorial showed how to experiment with features, as a way to improve model performance. Part 3 also showed how a feature store increases agility in the experimentation and training flows. Tutorial 4 explains how to
+The first tutorial showed how to create a feature set specification with custom transformations, and then use that feature set to generate training data. The second tutorial showed how to enable materialization and perform a backfill. The third tutorial showed how to experiment with features as a way to improve model performance. It also showed how a feature store increases agility in the experimentation and training flows.
+
+This tutorial explains how to:
> [!div class="checklist"]
-> * Run batch inference for the registered model
-> * Enable recurrent materialization for the `transactions` feature set
-> * Run a batch inference pipeline on the registered model
+> * Run batch inference for the registered model.
+> * Enable recurrent materialization for the `transactions` feature set.
+> * Run a batch inference pipeline on the registered model.
-> [!IMPORTANT]
-> This feature is currently in public preview. This preview version is provided without a service-level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
## Prerequisites
-Before you proceed with this article, make sure you complete parts 1, 2, and 3 of this tutorial series.
+Before you proceed with the following procedures, be sure to complete the first, second, and third tutorials in the series.
## Set up
-### Configure the Azure Machine Learning spark notebook
-
- 1. In the "Compute" dropdown in the top nav, select "Configure session"
-
- To run this tutorial, you can create a new notebook, and execute the instructions in this document, step by step. You can also open and run the existing notebook named `4. Enable recurrent materialization and run batch inference`. You can find that notebook, and all the notebooks in this series, at the `featurestore_sample/notebooks directory`. You can select from `sdk_only`, or `sdk_and_cli`. You can keep this document open, and refer to it for documentation links and more explanation.
+1. Configure the Azure Machine Learning Spark notebook.
- 1. Select Azure Machine Learning Spark compute in the "Compute" dropdown, located in the top nav.
+ To run this tutorial, you can create a new notebook and execute the instructions step by step. You can also open and run the existing notebook named *4. Enable recurrent materialization and run batch inference*. You can find that notebook, and all the notebooks in this series, in the *featurestore_sample/notebooks* directory. You can choose *sdk_only* or *sdk_and_cli*. Keep this tutorial open and refer to it for documentation links and more explanation.
- 1. Configure session:
+ 1. On the top menu, in the **Compute** dropdown list, select **Serverless Spark Compute** under **Azure Machine Learning Serverless Spark**.
- * Select "configure session" in the bottom nav
- * Select **upload conda file**
- * Upload the **conda.yml** file you [uploaded in Tutorial #1](./tutorial-get-started-with-feature-store.md#prepare-the-notebook-environment-for-development)
- * (Optional) Increase the session time-out (idle time) to avoid frequent prerequisite reruns
+ 1. Configure the session:
+
+ 1. When the toolbar displays **Configure session**, select it.
+ 1. On the **Python packages** tab, select **Upload conda file**.
+ 1. Upload the *conda.yml* file that you [uploaded in the first tutorial](./tutorial-get-started-with-feature-store.md#prepare-the-notebook-environment).
+ 1. Optionally, increase the session time-out (idle time) to avoid frequent prerequisite reruns.
-### Start the spark session
+ 1. Start the Spark session.
- [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/4. Enable recurrent materialization and run batch inference.ipynb?name=start-spark-session)]
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/4. Enable recurrent materialization and run batch inference.ipynb?name=start-spark-session)]
-### Set up the root directory for the samples
+ 1. Set up the root directory for the samples.
- [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/4. Enable recurrent materialization and run batch inference.ipynb?name=root-dir)]
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/4. Enable recurrent materialization and run batch inference.ipynb?name=root-dir)]
- ### [Python SDK](#tab/python)
+ ### [Python SDK](#tab/python)
- Not applicable
+ Not applicable.
- ### [Azure CLI](#tab/cli)
+ ### [Azure CLI](#tab/cli)
- **Set up the CLI**
+ Set up the CLI:
- 1. Install the Azure Machine Learning extension
+ 1. Install the Azure Machine Learning extension.
- [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/4. Enable recurrent materialization and run batch inference.ipynb?name=install-ml-ext-cli)]
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/4. Enable recurrent materialization and run batch inference.ipynb?name=install-ml-ext-cli)]
- 1. Authentication
+ 1. Authenticate.
- [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/4. Enable recurrent materialization and run batch inference.ipynb?name=auth-cli)]
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/4. Enable recurrent materialization and run batch inference.ipynb?name=auth-cli)]
- 1. Set the default subscription
+ 1. Set the default subscription.
- [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/4. Enable recurrent materialization and run batch inference.ipynb?name=set-default-subs-cli)]
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/4. Enable recurrent materialization and run batch inference.ipynb?name=set-default-subs-cli)]
-
+
-1. Initialize the project workspace CRUD client
+1. Initialize the project workspace CRUD (create, read, update, and delete) client.
- The tutorial notebook runs from this current workspace
+ The tutorial notebook runs from this current workspace.
[!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/4. Enable recurrent materialization and run batch inference.ipynb?name=init-ws-crud-client)]
-1. Initialize the feature store variables
+1. Initialize the feature store variables.
- Make sure that you update the `featurestore_name` value, to reflect what you created in part 1 of this tutorial.
+ Be sure to update the `featurestore_name` value, to reflect what you created in the first tutorial.
[!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/4. Enable recurrent materialization and run batch inference.ipynb?name=init-fs-crud-client)]
-1. Initialize the feature store SDK client
+1. Initialize the feature store SDK client.
[!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/4. Enable recurrent materialization and run batch inference.ipynb?name=init-fs-core-sdk)]
-## Enable recurrent materialization on the `transactions` feature set
+## Enable recurrent materialization on the transactions feature set
-We enabled materialization in tutorial part 2, and we also performed backfill on the transactions feature set. Backfill is an on-demand, one-time operation that computes and places feature values in the materialization store. However, to handle inference of the model in production, you might want to set up recurrent materialization jobs to keep the materialization store up-to-date. These jobs run on user-defined schedules. The recurrent job schedule works this way:
+In the second tutorial, you enabled materialization and performed backfill on the `transactions` feature set. Backfill is an on-demand, one-time operation that computes and places feature values in the materialization store.
-* Interval and frequency values define a window. For example, values of
+To handle inference of the model in production, you might want to set up recurrent materialization jobs to keep the materialization store up to date. These jobs run on user-defined schedules. The recurrent job schedule works this way:
- * interval = 3
- * frequency = Hour
+* Interval and frequency values define a window. For example, the following values define a three-hour window:
- define a three-hour window.
+ * `interval` = `3`
+ * `frequency` = `Hour`
-* The first window starts at the start_time defined in the RecurrenceTrigger, and so on.
+* The first window starts at the `start_time` value defined in `RecurrenceTrigger`, and so on.
* The first recurrent job is submitted at the start of the next window after the update time.
-* Later recurrent jobs will be submitted at every window after the first job.
+* Later recurrent jobs are submitted at every window after the first job.
-As explained in earlier parts of this tutorial, once data is materialized (backfill / recurrent materialization), feature retrieval uses the materialized data by default.
+As explained in earlier tutorials, after data is materialized (backfill or recurrent materialization), feature retrieval uses the materialized data by default.
[!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/4. Enable recurrent materialization and run batch inference.ipynb?name=enable-recurrent-mat-txns-fset)]
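In rough outline, the schedule in the preceding cell is built from a recurrence trigger. Here's a minimal sketch, assuming the azure-ai-ml SDK's `RecurrenceTrigger` entity; the start time is a placeholder:

```python
# Sketch: a three-hour recurrence window for materialization jobs.
from datetime import datetime
from azure.ai.ml.entities import RecurrenceTrigger

schedule = RecurrenceTrigger(
    frequency="hour",  # with interval=3, this defines a three-hour window
    interval=3,
    start_time=datetime(2023, 4, 15, 0, 0, 0),
)
```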
-## (Optional) Save the feature set asset yaml file
+## (Optional) Save the YAML file for the feature set asset
- We use the updated settings to save the yaml file
+You use the updated settings to save the YAML file.
- ### [Python SDK](#tab/python)
+### [Python SDK](#tab/python)
- [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/4. Enable recurrent materialization and run batch inference.ipynb?name=dump-txn-fset-with-mat-yaml)]
+[!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/4. Enable recurrent materialization and run batch inference.ipynb?name=dump-txn-fset-with-mat-yaml)]
- ### [Azure CLI](#tab/cli)
+### [Azure CLI](#tab/cli)
- Not applicable
+Not applicable.
-
++
+## Run the batch inference pipeline
-## Run the batch-inference pipeline
+The batch inference has these steps:
- The batch-inference has these steps:
+1. You use the same built-in feature retrieval component that you used in the training pipeline (covered in the third tutorial). For pipeline training, you provided a feature retrieval specification as a component input. For batch inference, you pass the registered model as the input. The component looks for the feature retrieval specification in the model artifact.
- 1. Feature retrieval: this uses the same built-in feature retrieval component used in the training pipeline, covered in tutorial part 3. For pipeline training, we provided a feature retrieval spec as a component input. However, for batch inference, we pass the registered model as the input, and the component looks for the feature retrieval spec in the model artifact.
-
- Additionally, for training, the observation data had the target variable. However, the batch inference observation data doesn't have the target variable. The feature retrieval step joins the observation data with the features, and outputs the data for batch inference.
+ Additionally, for training, the observation data had the target variable. However, the batch inference observation data doesn't have the target variable. The feature retrieval step joins the observation data with the features and outputs the data for batch inference.
- 1. Batch inference: This step uses the batch inference input data from previous step, runs inference on the model, and appends the predicted value as output.
+1. The pipeline uses the batch inference input data from the previous step, runs inference on the model, and appends the predicted value as output.
> [!NOTE]
- > We use a job for batch inference in this example. You can also use Azure ML's batch endpoints.
+ > You use a job for batch inference in this example. You can also use batch endpoints in Azure Machine Learning.
[!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/4. Enable recurrent materialization and run batch inference.ipynb?name=run-batch-inf-pipeline)]
- ### Inspect the batch inference output data
+### Inspect the output data for batch inference
+
+In the pipeline view:
- In the pipeline view
- 1. Select `inference_step` in the `outputs` card
- 1. Copy the Data field value. It looks something like `azureml_995abbc2-3171-461e-8214-c3c5d17ede83_output_data_data_with_prediction:1`
- 1. Paste the Data field value in the following cell, with separate name and version values (note that the last character is the version, preceded by a `:`).
- 1. Note the `predict_is_fraud` column that the batch inference pipeline generated
+1. Select `inference_step` in the `outputs` card.
+1. Copy the `Data` field value. It looks something like `azureml_995abbc2-3171-461e-8214-c3c5d17ede83_output_data_data_with_prediction:1`.
+1. Paste the `Data` field value in the following cell, with separate name and version values. The last character is the version, preceded by a colon (`:`).
+1. Note the `predict_is_fraud` column that the batch inference pipeline generated.
- Explanation: In the batch inference pipeline (`/project/fraud_mode/pipelines/batch_inference_pipeline.yaml`) outputs, since we didn't provide `name` or `version` values in the `outputs` of the `inference_step`, the system created an untracked data asset with a guid as the name value, and 1 as the version value. In this cell, we derive and then display the data path from the asset:
+ In the batch inference pipeline (*/project/fraud_mode/pipelines/batch_inference_pipeline.yaml*) outputs, because you didn't provide `name` or `version` values for `outputs` of `inference_step`, the system created an untracked data asset with a GUID as the name value and `1` as the version value. In this cell, you derive and then display the data path from the asset.
- [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/4. Enable recurrent materialization and run batch inference.ipynb?name=inspect-batch-inf-output-data)]
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/4. Enable recurrent materialization and run batch inference.ipynb?name=inspect-batch-inf-output-data)]
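    Splitting the copied `Data` field value into the name and version that the cell expects is straightforward; the asset name below is illustrative:

    ```python
    # The last ":"-separated token is the version; everything before it is the name.
    output_value = "azureml_995abbc2-3171-461e-8214-c3c5d17ede83_output_data_data_with_prediction:1"
    name, version = output_value.rsplit(":", 1)
    print(name, version)
    ```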
-## Cleanup
+## Clean up
-If you created a resource group for the tutorial, you can delete the resource group, to delete all the resources associated with this tutorial. Otherwise, you can delete the resources individually:
+If you created a resource group for the tutorial, you can delete the resource group to delete all the resources associated with this tutorial. Otherwise, you can delete the resources individually:
-1. To delete the feature store, go to the resource group in the Azure portal, select the feature store, and delete it
-1. Follow [these instructions](../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md) to delete the user-assigned managed identity
-1. To delete the offline store (storage account), go to the resource group in the Azure portal, select the storage you created, and delete it
+- To delete the feature store, go to the resource group in the Azure portal, select the feature store, and delete it.
+- To delete the user-assigned managed identity, follow [these instructions](../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md).
+- To delete the offline store (storage account), go to the resource group in the Azure portal, select the storage that you created, and delete it.
## Next steps
-* Understand concepts: [feature store concepts](./concept-what-is-managed-feature-store.md), [top level entities in managed feature store](./concept-top-level-entities-in-managed-feature-store.md)
-* [Understand identity and access control for feature store](./how-to-setup-access-control-feature-store.md)
-* [View feature store troubleshooting guide](./troubleshooting-managed-feature-store.md)
-* Reference: [YAML reference](./reference-yaml-overview.md)
+* Learn about [feature store concepts](./concept-what-is-managed-feature-store.md) and [top-level entities in managed feature store](./concept-top-level-entities-in-managed-feature-store.md).
+* Learn about [identity and access control for managed feature store](./how-to-setup-access-control-feature-store.md).
+* View the [troubleshooting guide for managed feature store](./troubleshooting-managed-feature-store.md).
+* View the [YAML reference](./reference-yaml-overview.md).
machine-learning Tutorial Experiment Train Models Using Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-experiment-train-models-using-features.md
Title: "Tutorial #3: experiment and train models using features (preview)"-
-description: Managed Feature Store tutorial part 3.
+ Title: "Tutorial 3: Experiment and train models by using features (preview)"
+
+description: This is part 3 of a tutorial series on managed feature store.
#Customer intent: As a professional data scientist, I want to know how to build and deploy a model with Azure Machine Learning by using Python in a Jupyter Notebook.
-# Tutorial #3: Experiment and train models using features (preview)
+# Tutorial 3: Experiment and train models by using features (preview)
-This tutorial series shows how features seamlessly integrate all phases of the ML lifecycle: prototyping, training and operationalization.
+This tutorial series shows how features seamlessly integrate all phases of the machine learning lifecycle: prototyping, training, and operationalization.
-Part 1 of this tutorial showed how to create a feature set spec with custom transformations, and use that feature set to generate training data. Part 2 of the tutorial showed how to enable materialization and perform a backfill. Tutorial 3 shows how to experiment with features, as a way to improve model performance. This tutorial also shows how a feature store increases agility in the experimentation and training flows. It shows how to:
+The first tutorial showed how to create a feature set specification with custom transformations, and then use that feature set to generate training data. The second tutorial showed how to enable materialization and perform a backfill.
+
+This tutorial shows how to experiment with features as a way to improve model performance. It also shows how a feature store increases agility in the experimentation and training flows.
+
+In this tutorial, you learn how to:
> [!div class="checklist"]
-> * Prototype a new `accounts` feature set spec, using existing precomputed values as features. Then, register the local feature set spec as a feature set in the feature store. This differs from tutorial part 1, where we created a feature set that had custom transformations
-> * Select features for the model from the `transactions` and `accounts` feature sets, and save them as a feature-retrieval spec
-> * Run a training pipeline that uses the feature retrieval spec to train a new model. This pipeline uses the built-in feature-retrieval component, to generate the training data
+> * Prototype a new `accounts` feature set specification, by using existing precomputed values as features. Then, register the local feature set specification as a feature set in the feature store. This process differs from the first tutorial, where you created a feature set that had custom transformations.
+> * Select features for the model from the `transactions` and `accounts` feature sets, and save them as a feature retrieval specification.
+> * Run a training pipeline that uses the feature retrieval specification to train a new model. This pipeline uses the built-in feature retrieval component to generate the training data.
-> [!IMPORTANT]
-> This feature is currently in public preview. This preview version is provided without a service-level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
## Prerequisites
-Before you proceed with this article, make sure you complete parts 1 and 2 of this tutorial series.
+Before you proceed with the following procedures, be sure to complete the first and second tutorials in the series.
## Set up
-1. Configure the Azure Machine Learning spark notebook
+1. Configure the Azure Machine Learning Spark notebook.
- 1. Running the tutorial: You can create a new notebook, and execute the instructions in this document step by step. You can also open and run existing notebook `3. Experiment and train models using features.ipynb`. You can find the notebooks in the `featurestore_sample/notebooks directory`. You can select from `sdk_only`, or `sdk_and_cli`. You can keep this document open, and refer to it for documentation links and more explanation.
+ You can create a new notebook and execute the instructions in this tutorial step by step. You can also open and run the existing notebook named *3. Experiment and train models using features.ipynb* from the *featurestore_sample/notebooks* directory. You can choose *sdk_only* or *sdk_and_cli*. Keep this tutorial open and refer to it for documentation links and more explanation.
- 1. Select Azure Machine Learning Spark compute in the "Compute" dropdown, located in the top nav. Wait for a status bar in the top to display "configure session".
+ 1. On the top menu, in the **Compute** dropdown list, select **Serverless Spark Compute** under **Azure Machine Learning Serverless Spark**.
1. Configure the session:
- * Select "configure session" in the bottom nav
- * Select **upload conda file**
- * Upload the **conda.yml** file you [uploaded in Tutorial #1](./tutorial-get-started-with-feature-store.md#prepare-the-notebook-environment-for-development)
- * (Optional) Increase the session time-out (idle time) to avoid frequent prerequisite reruns
+ 1. When the toolbar displays **Configure session**, select it.
+ 1. On the **Python packages** tab, select **Upload Conda file**.
+ 1. Upload the *conda.yml* file that you [uploaded in the first tutorial](./tutorial-get-started-with-feature-store.md#prepare-the-notebook-environment).
+ 1. Optionally, increase the session time-out (idle time) to avoid frequent prerequisite reruns.
- 1. Start the spark session
+ 1. Start the Spark session.
[!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/3. Experiment and train models using features.ipynb?name=start-spark-session)]
- 1. Set up the root directory for the samples
+ 1. Set up the root directory for the samples.
      [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/3. Experiment and train models using features.ipynb?name=root-dir)]

   ### [Python SDK](#tab/python)
-
- Not applicable
-
+
+ Not applicable.
+ ### [Azure CLI](#tab/cli)
-
- Set up the CLI
-
- 1. Install the Azure Machine Learning extension
-
+
+ Set up the CLI:
+
+ 1. Install the Azure Machine Learning extension.
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/3. Experiment and train models using features.ipynb?name=install-ml-ext-cli)]
-
- 1. Authentication
-
+
+ 1. Authenticate.
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/3. Experiment and train models using features.ipynb?name=auth-cli)]
-
- 1. Set the default subscription
-
+
+ 1. Set the default subscription.
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/3. Experiment and train models using features.ipynb?name=set-default-subs-cli)]
-
+
-1. Initialize the project workspace variables
+1. Initialize the project workspace variables.
   This is the current workspace, and the tutorial notebook runs in this resource.

   [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/3. Experiment and train models using features.ipynb?name=init-ws-crud-client)]
-1. Initialize the feature store variables
+1. Initialize the feature store variables.
- Make sure that you update the `featurestore_name` and `featurestore_location` values shown, to reflect what you created in part 1 of this tutorial.
+ Be sure to update the `featurestore_name` and `featurestore_location` values to reflect what you created in the first tutorial.
[!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/3. Experiment and train models using features.ipynb?name=init-fs-crud-client)]
-1. Initialize the feature store consumption client
+1. Initialize the feature store consumption client.
[!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/3. Experiment and train models using features.ipynb?name=init-fs-core-sdk)]
-1. Create a compute cluster
+1. Create a compute cluster named `cpu-cluster` in the project workspace.
- We'll create a compute cluster named `cpu-cluster` in the project workspace. We need this compute cluster when we run the training / batch inference jobs.
+ You'll need this compute cluster when you run the training/batch inference jobs.
[!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/3. Experiment and train models using features.ipynb?name=create-compute-cluster)]
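    Here's a minimal sketch of that cluster definition, assuming the azure-ai-ml SDK and the project workspace client (`ws_client`) from setup; the VM size and scale limits are illustrative:

    ```python
    from azure.ai.ml.entities import AmlCompute

    cluster = AmlCompute(
        name="cpu-cluster",
        size="STANDARD_DS3_V2",  # illustrative VM size
        min_instances=0,
        max_instances=4,
    )
    ws_client.compute.begin_create_or_update(cluster)
    ```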
-## Create the accounts feature set locally
+## Create the account feature set locally
+
+In the first tutorial, you created a `transactions` feature set that had custom transformations. Here, you create an `accounts` feature set that uses precomputed values.
-In tutorial part 1, we created a transactions feature set that had custom transformations. Here, we create an accounts feature set that uses precomputed values.
+To onboard precomputed features, you can create a feature set specification without writing any transformation code. You use a feature set specification to develop and test a feature set in a fully local development environment.
-To onboard precomputed features, you can create a feature set spec without writing any transformation code. A feature set spec is a specification that we use to develop and test a feature set, in a fully local development environment. We don't need to connect to a feature store. In this step, you create the feature set spec locally, and then sample the values from it. For managed feature store capabilities, you must use a feature asset definition to register the feature set spec with a feature store. Later steps in this tutorial provide more details.
+You don't need to connect to a feature store. In this procedure, you create the feature set specification locally, and then sample the values from it. For capabilities of managed feature store, you must use a feature asset definition to register the feature set specification with a feature store. Later steps in this tutorial provide more details.
-1. Explore the source data for the accounts
+1. Explore the source data for the accounts.
> [!NOTE]
- > This notebook uses sample data hosted in a publicly-accessible blob container. Only a `wasbs` driver can read it in Spark. When you create feature sets using your own source data, please host those feature sets in an adls gen2 account, and use an `abfss` driver in the data path.
+ > This notebook uses sample data hosted in a publicly accessible blob container. Only a `wasbs` driver can read it in Spark. When you create feature sets by using your own source data, host those feature sets in an Azure Data Lake Storage Gen2 account, and use an `abfss` driver in the data path.
[!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/3. Experiment and train models using features.ipynb?name=explore-accts-fset-src-data)]
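    The exploration amounts to a Spark read of the Parquet source. Here's a sketch with a placeholder path; the notebook cell points at the sample dataset's actual location:

    ```python
    # Placeholder wasbs path to the precomputed account data.
    accounts_data_path = "wasbs://<container>@<account>.blob.core.windows.net/<path>/*.parquet"

    accounts_df = spark.read.parquet(accounts_data_path)
    accounts_df.show(5)
    ```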
-1. Create the `accounts` feature set spec in local, from these precomputed features
+1. Create the `accounts` feature set specification locally, from these precomputed features.
- We don't need any transformation code here, because we reference precomputed features.
+ You don't need any transformation code here, because you reference precomputed features.
[!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/3. Experiment and train models using features.ipynb?name=create-accts-fset-spec)]
-1. Export as a feature set spec
+1. Export as a feature set specification.
+
+ To register the feature set specification with the feature store, you must save the feature set specification in a specific format.
+
+ After you run the next cell, inspect the generated `accounts` feature set specification. To see the specification, open the *featurestore/featuresets/accounts/spec/FeatureSetSpec.yaml* file from the file tree.
- To register the feature set spec with the feature store, you must save the feature set spec in a specific format.
+ The specification has these important elements:
- Action: After you run the next cell, inspect the generated `accounts` feature set spec. To see the spec, open the `featurestore/featuresets/accounts/spec/FeatureSetSpec.yaml` file from the file tree to see the spec.
+ - `source`: A reference to a storage resource. In this case, it's a Parquet file in a blob storage resource.
- The spec has these important elements:
+ - `features`: A list of features and their datatypes. With provided transformation code (see the "Day 2" section), the code must return a DataFrame that maps to the features and datatypes. Without the provided transformation code, the system builds the query to map the features and datatypes to the source. In this case, the transformation code is the generated `accounts` feature set specification, because it's precomputed.
- 1. `source`: a reference to a storage resource, in this case, a parquet file in a blog storage resource
-
- 1. `features`: a list of features and their datatypes. With provided transformation code (see the Day 2 section), the code must return a dataframe that maps to the features and datatypes. Without the provided transformation code (in this case, the generated `accounts` feature set spec, because it's precomputed), the system builds the query to map the features and datatypes to the source
-
- 1. `index_columns`: the join keys required to access values from the feature set
+ - `index_columns`: The join keys required to access values from the feature set.
- See the [top level feature store entities document](./concept-top-level-entities-in-managed-feature-store.md) and the [feature set spec yaml reference](./reference-yaml-featureset-spec.md) to learn more.
+ To learn more, see [Understanding top-level entities in managed feature store](./concept-top-level-entities-in-managed-feature-store.md) and the [CLI (v2) feature set specification YAML schema](./reference-yaml-featureset-spec.md).
As an extra benefit, persisting supports source control.
- We don't need any transformation code here, because we reference precomputed features.
+ You don't need any transformation code here, because you reference precomputed features.
   [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/3. Experiment and train models using features.ipynb?name=dump-accts-fset-spec)]

## Locally experiment with unregistered features
-As you develop features, you might want to locally test and validate them, before you register them with the feature store or run training pipelines in the cloud. A combination of a local unregistered feature set (`accounts`), and a feature set registered in the feature store (`transactions`), generates training data for the ML model.
+As you develop features, you might want to locally test and validate them before you register them with the feature store or run training pipelines in the cloud. A combination of a local unregistered feature set (`accounts`) and a feature set registered in the feature store (`transactions`) generates training data for the machine learning model.
-1. Select features for the model
+1. Select features for the model.
[!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/3. Experiment and train models using features.ipynb?name=select-unreg-features-for-model)]
-1. Locally generate training data
+1. Locally generate training data.
   This step generates training data for illustrative purposes. As an option, you can locally train models here. Later steps in this tutorial explain how to train a model in the cloud.

   [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/3. Experiment and train models using features.ipynb?name=gen-training-data-locally)]
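   Here's a sketch of that local generation, assuming the azureml-featurestore package's `get_offline_features` helper; the variable names are illustrative:

   ```python
   from azureml.featurestore import get_offline_features

   # observation_df holds the join keys plus event timestamps; features is
   # the list selected in the previous step.
   training_df = get_offline_features(
       features=features,
       observation_data=observation_df,
       timestamp_column="timestamp",
   )
   training_df.show(5)
   ```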
-1. Register the `accounts` feature set with the feature store
+1. Register the `accounts` feature set with the feature store.
- After you locally experiment with different feature definitions, and they seem reasonable, you can register a feature set asset definition with the feature store.
+ After you locally experiment with feature definitions, and they seem reasonable, you can register a feature set asset definition with the feature store.
[!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/3. Experiment and train models using features.ipynb?name=reg-accts-fset)]
-1. Get the registered feature set, and sanity test it
+1. Get the registered feature set and test it.
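   The included cell samples data from the registered feature set. As a sketch of what it does, assuming `featurestore` is the feature store core SDK client (`azureml-featurestore`) initialized earlier in the series and that the method names follow the preview sample notebooks:

   ```python
   # Retrieve the registered feature set and inspect a few rows (sketch).
   accounts_fset = featurestore.feature_sets.get(name="accounts", version="1")

   # Materializing a small sample is a quick sanity test of the definition.
   accounts_df = accounts_fset.to_spark_dataframe()
   accounts_df.show(5)
   ```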
[!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/3. Experiment and train models using features.ipynb?name=sample-accts-fset-data)]

## Run a training experiment
-In this step, you select a list of features, run a training pipeline, and register the model. You can repeat this step until the model performs as you'd like.
+In the following steps, you select a list of features, run a training pipeline, and register the model. You can repeat these steps until the model performs as you want.
-1. (Optional) Discover features from the feature store UI
+1. Optionally, discover features from the feature store UI.
- Part 1 of this tutorial covered this, when you registered the transactions feature set. Since you also have an accounts feature set, you can browse the available features:
+ The first tutorial covered this step, when you registered the `transactions` feature set. Because you also have an `accounts` feature set, you can browse through the available features:
- * Go to the [Azure Machine Learning global landing page](https://ml.azure.com/home?flight=FeatureStores).
- * In the left nav, select `feature stores`
- * The list of feature stores that you can access appears. Select the feature store that you created earlier.
+ 1. Go to the [Azure Machine Learning global landing page](https://ml.azure.com/home).
+ 1. On the left pane, select **Feature stores**.
+ 1. In the list of feature stores, select the feature store that you created earlier.
- You can see the feature sets and entity that you created. Select the feature sets to browse the feature definitions. You can use the global search box to search for feature sets across feature stores.
+ The UI shows the feature sets and entity that you created. Select the feature sets to browse through the feature definitions. You can use the global search box to search for feature sets across feature stores.
-1. (Optional) Discover features from the SDK
+1. Optionally, discover features from the SDK.
[!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/3. Experiment and train models using features.ipynb?name=discover-features-from-sdk)]
-1. Select features for the model, and export the model as a feature-retrieval spec
+1. Select features for the model, and export the model as a feature retrieval specification.
- In the previous steps, you selected features from a combination of registered and unregistered feature sets, for local experimentation and testing. You can now experiment in the cloud. Your model shipping agility increases if you save the selected features as a feature-retrieval spec, and use the spec in the mlops/cicd flow for training and inference.
+ In the previous steps, you selected features from a combination of registered and unregistered feature sets, for local experimentation and testing. You can now experiment in the cloud. Your model-shipping agility increases if you save the selected features as a feature retrieval specification, and then use the specification in the machine learning operations (MLOps) or continuous integration and continuous delivery (CI/CD) flow for training and inference.
-1. Select features for the model
+1. Select features for the model.
[!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/3. Experiment and train models using features.ipynb?name=select-reg-features)]
-1. Export selected features as a feature-retrieval spec
+1. Export selected features as a feature retrieval specification.
- > [!NOTE]
- > A **feature retrieval spec** is a portable definition of the feature list associated with a model. It can help streamline ML model development and operationalization. It will become an input to the training pipeline which generates the training data. Then, it will be packaged with the model. The inference phase uses it to look up the features. It becomes a glue that integrates all phases of the machine learning lifecycle. Changes to the training/inference pipeline can stay at a minimum as you experiment and deploy.
+ A feature retrieval specification is a portable definition of the feature list that's associated with a model. It can help streamline the development and operationalization of a machine learning model. It becomes an input to the training pipeline that generates the training data, and it's then packaged with the model.
+
+ The inference phase uses the feature retrieval specification to look up the features. The specification acts as glue that integrates all phases of the machine learning lifecycle, so changes to the training and inference pipelines can stay at a minimum as you experiment and deploy.
- Use of the feature retrieval spec and the built-in feature retrieval component is optional. You can directly use the `get_offline_features()` API, as shown earlier. The name of the spec should be **feature_retrieval_spec.yaml** when it's packaged with the model. This way, the system can recognize it.
+ Use of the feature retrieval specification and the built-in feature retrieval component is optional. You can directly use the `get_offline_features()` API, as shown earlier. The name of the specification should be *feature_retrieval_spec.yaml* when it's packaged with the model. This way, the system can recognize it.
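   The included cell performs the export. Roughly, it amounts to the following sketch, assuming the feature store core SDK client (`featurestore`) and the `features` list selected earlier; the output folder path is illustrative:

   ```python
   # Export the selected features as a feature retrieval specification (sketch).
   feature_retrieval_spec_folder = "./project/fraud_model/feature_retrieval_spec"  # illustrative

   # Writes feature_retrieval_spec.yaml into the folder. Keeping that file name
   # lets the system recognize the spec when it's packaged with the model.
   featurestore.generate_feature_retrieval_spec(feature_retrieval_spec_folder, features)
   ```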
[!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/3. Experiment and train models using features.ipynb?name=export-as-frspec)]

## Train in the cloud with pipelines, and register the model
-In this step, you manually trigger the training pipeline. In a production scenario, a ci/cd pipeline could trigger it, based on changes to the feature-retrieval spec in the source repository. You can register the model if it's satisfactory.
+In this procedure, you manually trigger the training pipeline. In a production scenario, a CI/CD pipeline could trigger it, based on changes to the feature retrieval specification in the source repository. You can register the model if it's satisfactory.
-1. Run the training pipeline
+1. Run the training pipeline.
The training pipeline has these steps:
- 1. Feature retrieval: For its input, this built-in component takes the feature retrieval spec, the observation data, and the timestamp column name. It then generates the training data as output. It runs these steps as a managed spark job.
-
- 1. Training: Based on the training data, this step trains the model, and then generates a model (not yet registered)
-
- 1. Evaluation: This step validates whether or not the model performance and quality fall within a threshold (in our case, it's a placeholder/dummy step for illustration purposes)
-
- 1. Register the model: This step registers the model
+ 1. Feature retrieval: For its input, this built-in component takes the feature retrieval specification, the observation data, and the time-stamp column name. It then generates the training data as output. It runs these steps as a managed Spark job.
+
+ 1. Training: Based on the training data, this step trains the model and then generates a model (not yet registered).
+
+ 1. Evaluation: This step validates whether the model performance and quality fall within a threshold. (In this tutorial, it's a placeholder step for illustration purposes.)
+
+ 1. Register the model: This step registers the model.
> [!NOTE]
- > In part 2 of this tutorial, you ran a backfill job to materialize data for the `transactions` feature set. The feature retrieval step reads feature values from the offline store for this feature set. The behavior will be the same, even if you use the `get_offline_features()` API.
+ > In the second tutorial, you ran a backfill job to materialize data for the `transactions` feature set. The feature retrieval step reads feature values from the offline store for this feature set. The behavior is the same, even if you use the `get_offline_features()` API.
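   The included cell submits the pipeline job. As a minimal sketch of such a manual trigger, assuming an `MLClient` (`ws_client`) scoped to the project workspace and a hypothetical pipeline YAML path:

   ```python
   # Manually trigger the training pipeline (sketch; in production, CI/CD would submit this).
   from azure.ai.ml import load_job

   training_pipeline = load_job(
       source="./project/fraud_model/pipelines/training_pipeline.yaml"  # illustrative path
   )
   pipeline_job = ws_client.jobs.create_or_update(training_pipeline)

   # Stream logs until the feature retrieval, training, evaluation, and
   # model registration steps finish.
   ws_client.jobs.stream(pipeline_job.name)
   ```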
[!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/3. Experiment and train models using features.ipynb?name=run-training-pipeline)]
- 1. Inspect the training pipeline and the model
+ 1. Inspect the training pipeline and the model.
- 1. Open the above pipeline, and run "web view" in a new window to see the pipeline steps.
+ 1. Open the pipeline. Run the web view in a new window to display the pipeline steps.
-1. Use the feature retrieval spec in the model artifacts
+1. Use the feature retrieval specification in the model artifacts:
- 1. In the left nav of the current workspace, select `Models`
- 1. Select open in a new tab or window
- 1. Select **fraud_model**
- 1. In the top nav, select Artifacts
+ 1. On the left pane of the current workspace, select **Models**.
+ 1. Select **Open in a new tab or window**.
+ 1. Select **fraud_model**.
+ 1. Select **Artifacts**.
- The feature retrieval spec is packaged along with the model. The model registration step in the training pipeline handled this step. You created the feature retrieval spec during experimentation. Now it became part of the model definition. In the next tutorial, you'll see how inferencing uses it.
+ The feature retrieval specification is packaged along with the model. The model registration step in the training pipeline handled this step. You created the feature retrieval specification during experimentation. Now it's part of the model definition. In the next tutorial, you'll see how inferencing uses it.
## View the feature set and model dependencies
-1. View the list of feature sets associated with the model
+1. View the list of feature sets associated with the model.
- In the same models page, select the `feature sets` tab. This tab shows both the `transactions` and the `accounts` feature sets on which this model depends.
+ On the same **Models** page, select the **Feature sets** tab. This tab shows both the `transactions` and `accounts` feature sets on which this model depends.
-1. View the list of models that use the feature sets
+1. View the list of models that use the feature sets:
- 1. Open the feature store UI (explained earlier in this tutorial)
- 1. Select `Feature sets` on the left nav
- 1. Select a feature set
- 1. Select the `Models` tab
+ 1. Open the feature store UI (explained earlier in this tutorial).
+ 1. On the left pane, select **Feature sets**.
+ 1. Select a feature set.
+ 1. Select the **Models** tab.
- You can see the list of models that use the feature sets. The feature retrieval spec determined this list when the model was registered.
+ The feature retrieval specification determined this list when the model was registered.
-## Cleanup
+## Clean up
-The Tutorial #4 [clean up step](./tutorial-enable-recurrent-materialization-run-batch-inference.md#cleanup) describes how to delete the resources
+The [fourth tutorial in the series](./tutorial-enable-recurrent-materialization-run-batch-inference.md#clean-up) describes how to delete the resources.
## Next steps
-* Understand concepts: [feature store concepts](./concept-what-is-managed-feature-store.md), [top level entities in managed feature store](./concept-top-level-entities-in-managed-feature-store.md)
-* [Understand identity and access control for feature store](./how-to-setup-access-control-feature-store.md)
-* [View feature store troubleshooting guide](./troubleshooting-managed-feature-store.md)
-* Reference: [YAML reference](./reference-yaml-overview.md)
+* Go to the next tutorial in the series: [Enable recurrent materialization and run batch inference](./tutorial-enable-recurrent-materialization-run-batch-inference.md).
+* Learn about [feature store concepts](./concept-what-is-managed-feature-store.md) and [top-level entities in managed feature store](./concept-top-level-entities-in-managed-feature-store.md).
+* Learn about [identity and access control for managed feature store](./how-to-setup-access-control-feature-store.md).
+* View the [troubleshooting guide for managed feature store](./troubleshooting-managed-feature-store.md).
+* View the [YAML reference](./reference-yaml-overview.md).
machine-learning Tutorial Get Started With Feature Store https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-get-started-with-feature-store.md
Title: "Tutorial #1: develop and register a feature set with managed feature store (preview)"-
-description: Managed Feature Store tutorial part 1.
+ Title: "Tutorial 1: Develop and register a feature set with managed feature store (preview)"
+
+description: This is part 1 of a tutorial series on managed feature store.
#Customer intent: As a professional data scientist, I want to know how to build and deploy a model with Azure Machine Learning by using Python in a Jupyter Notebook.
-# Tutorial #1: develop and register a feature set with managed feature store (preview)
+# Tutorial 1: Develop and register a feature set with managed feature store (preview)
-This tutorial series shows how features seamlessly integrate all phases of the ML lifecycle: prototyping, training and operationalization.
+This tutorial series shows how features seamlessly integrate all phases of the machine learning lifecycle: prototyping, training, and operationalization.
-Azure Machine Learning managed feature store lets you discover, create and operationalize features. The machine learning lifecycle includes a prototyping phase, where you experiment with various features. It also involves an operationalization phase, where models are deployed and inference steps look up feature data. Features serve as the connective tissue in the machine learning lifecycle. To learn more about basic feature store concepts, see [what is managed feature store](./concept-what-is-managed-feature-store.md) and [top level entities in managed feature store](./concept-top-level-entities-in-managed-feature-store.md).
+You can use Azure Machine Learning managed feature store to discover, create, and operationalize features. The machine learning lifecycle includes a prototyping phase, where you experiment with various features. It also involves an operationalization phase, where models are deployed and inference steps look up feature data. Features serve as the connective tissue in the machine learning lifecycle. To learn more about basic concepts for managed feature store, see [What is managed feature store?](./concept-what-is-managed-feature-store.md) and [Understanding top-level entities in managed feature store](./concept-top-level-entities-in-managed-feature-store.md).
-This tutorial is the first part of a four part series. Here, you'll learn how to:
+This tutorial is the first part of a four-part series. Here, you learn how to:
> [!div class="checklist"]
-> * Create a new minimal feature store resource
-> * Develop and locally test a feature set with feature transformation capability
-> * Register a feature store entity with the feature store
-> * Register the feature set that you developed with the feature store
-> * Generate a sample training dataframe using the features you created
+> * Create a new, minimal feature store resource.
+> * Develop and locally test a feature set with feature transformation capability.
+> * Register a feature store entity with the feature store.
+> * Register the feature set that you developed with the feature store.
+> * Generate a sample training DataFrame by using the features that you created.
-> [!IMPORTANT]
-> This feature is currently in public preview. This preview version is provided without a service-level agreement, and it's not recommended for production workloads. Certain features might not be supported, or might have constrained capabilities. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+This tutorial series has two tracks:
-## Prerequisites
-
-> [!NOTE]
-> This tutorial series has two tracks:
-> * SDK only track: Uses only Python SDKs. Choose this track for pure, Python-based development and deployment.
-> * SDK & CLI track: This track uses the CLI for CRUD operations (create, update, and delete), and the Python SDK for feature set development and testing only. This is useful in CI / CD, or GitOps, scenarios, where CLI/yaml is preferred.
+* The SDK-only track uses only Python SDKs. Choose this track for pure, Python-based development and deployment.
+* The SDK and CLI track uses the Python SDK for feature set development and testing only, and it uses the CLI for CRUD (create, read, update, and delete) operations. This track is useful in continuous integration and continuous delivery (CI/CD) or GitOps scenarios, where CLI/YAML is preferred.
-Before you proceed with this article, make sure you cover these prerequisites:
-* An Azure Machine Learning workspace. See [Quickstart: Create workspace resources](./quickstart-create-resources.md) article for more information about workspace creation.
+## Prerequisites
-* To proceed with this article, your user account must be assigned the owner or contributor role to the resource group where the feature store is created
+Before you proceed with this tutorial, be sure to cover these prerequisites:
- (Optional): If you use a new resource group for this tutorial, you can easily delete all the resources by deleting the resource group
+* An Azure Machine Learning workspace. For more information about workspace creation, see [Quickstart: Create workspace resources](./quickstart-create-resources.md).
-## Set up
+* On your user account, the Owner or Contributor role for the resource group where the feature store is created.
-### Prepare the notebook environment for development
+ If you choose to use a new resource group for this tutorial, you can easily delete all the resources by deleting the resource group.
-> [!NOTE]
-> This tutorial uses an Azure Machine Learning Spark notebook for development.
+## Prepare the notebook environment
-1. In the Azure Machine Learning studio environment, first select **Notebooks** in the left nav, and then select the **Samples** tab. Navigate to the **featurestore_sample** directory
+This tutorial uses an Azure Machine Learning Spark notebook for development.
- **Samples -> SDK v2 -> sdk -> python -> featurestore_sample**
+1. In the Azure Machine Learning studio environment, select **Notebooks** on the left pane, and then select the **Samples** tab.
- and then select **Clone**, as shown in this screenshot:
+1. Browse to the *featurestore_sample* directory (select **Samples** > **SDK v2** > **sdk** > **python** > **featurestore_sample**), and then select **Clone**.
- :::image type="content" source="media/tutorial-get-started-with-feature-store/clone-featurestore-example-notebooks.png" lightbox="media/tutorial-get-started-with-feature-store/clone-featurestore-example-notebooks.png" alt-text="Screenshot showing selection of the featurestore_sample directory in Azure Machine Learning studio UI.":::
+ :::image type="content" source="media/tutorial-get-started-with-feature-store/clone-featurestore-example-notebooks.png" lightbox="media/tutorial-get-started-with-feature-store/clone-featurestore-example-notebooks.png" alt-text="Screenshot that shows selection of the sample directory in Azure Machine Learning studio.":::
-1. The **Select target directory** panel opens next. Select the User directory, in this case **testUser**, and then select **Clone**, as shown in this screenshot:
+1. The **Select target directory** panel opens. Select the user directory (in this case, **testUser**), and then select **Clone**.
- :::image type="content" source="media/tutorial-get-started-with-feature-store/select-target-directory.png" lightbox="media/tutorial-get-started-with-feature-store/select-target-directory.png" alt-text="Screenshot showing selection of the target directory location in Azure Machine Learning studio UI for the featurestore_sample resource.":::
+ :::image type="content" source="media/tutorial-get-started-with-feature-store/select-target-directory.png" lightbox="media/tutorial-get-started-with-feature-store/select-target-directory.png" alt-text="Screenshot that shows selection of the target directory location in Azure Machine Learning studio for the sample resource.":::
-1. To configure the notebook environment, you must upload the **conda.yml** file. Select **Notebooks** in the left nav, and then select the **Files** tab. Navigate to the **env** directory
+1. To configure the notebook environment, you must upload the *conda.yml* file:
- **Users -> testUser -> featurestore_sample -> project -> env**
+ 1. Select **Notebooks** on the left pane, and then select the **Files** tab.
+ 1. Browse to the *env* directory (select **Users** > **testUser** > **featurestore_sample** > **project** > **env**), and then select the *conda.yml* file. In this path, *testUser* is the user directory.
+ 1. Select **Download**.
- and select the **conda.yml** file. In this navigation, **testUser** is the user directory. Select **Download**, as shown in this screenshot:
+ :::image type="content" source="media/tutorial-get-started-with-feature-store/download-conda-file.png" lightbox="media/tutorial-get-started-with-feature-store/download-conda-file.png" alt-text="Screenshot that shows selection of the Conda YAML file in Azure Machine Learning studio.":::
- :::image type="content" source="media/tutorial-get-started-with-feature-store/download-conda-file.png" lightbox="media/tutorial-get-started-with-feature-store/download-conda-file.png" alt-text="Screenshot showing selection of the conda.yml file in Azure Machine Learning studio UI.":::
+1. In the Azure Machine Learning environment, open the notebook, and then select **Configure session**.
-1. At the Azure Machine Learning environment, open the notebook, and select **Configure Session**, as shown in this screenshot:
+ :::image type="content" source="media/tutorial-get-started-with-feature-store/open-configure-session.png" lightbox="media/tutorial-get-started-with-feature-store/open-configure-session.png" alt-text="Screenshot that shows selections for configuring a session for a notebook.":::
- :::image type="content" source="media/tutorial-get-started-with-feature-store/open-configure-session.png" lightbox="media/tutorial-get-started-with-feature-store/open-configure-session.png" alt-text="Screenshot showing Open Configure Session for this notebook.":::
+1. On the **Configure session** panel, select **Python packages**.
-1. At the **Configure Session** panel, select **Python packages**. To upload the Conda file, select **Upload Conda file**, and **Browse** to the directory that hosts the Conda file. Select **conda.yml**, and then select **Open**, as shown in this screenshot:
+1. Upload the Conda file:
+ 1. On the **Python packages** tab, select **Upload Conda file**.
+ 1. Browse to the directory that hosts the Conda file.
+ 1. Select **conda.yml**, and then select **Open**.
- :::image type="content" source="media/tutorial-get-started-with-feature-store/open-conda-file.png" lightbox="media/tutorial-get-started-with-feature-store/open-conda-file.png" alt-text="Screenshot showing the directory hosting the Conda file.":::
+ :::image type="content" source="media/tutorial-get-started-with-feature-store/open-conda-file.png" lightbox="media/tutorial-get-started-with-feature-store/open-conda-file.png" alt-text="Screenshot that shows the directory that hosts the Conda file.":::
-1. Select **Apply**, as shown in this screenshot:
+1. Select **Apply**.
- :::image type="content" source="media/tutorial-get-started-with-feature-store/upload-conda-file.png" lightbox="media/tutorial-get-started-with-feature-store/upload-conda-file.png" alt-text="Screenshot showing the Conda file upload.":::
+ :::image type="content" source="media/tutorial-get-started-with-feature-store/upload-conda-file.png" lightbox="media/tutorial-get-started-with-feature-store/upload-conda-file.png" alt-text="Screenshot that shows the Conda file upload.":::
## Start the Spark session
Before you proceed with this article, make sure you cover these prerequisites:
[!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/1. Develop a feature set and register with managed feature store.ipynb?name=root-dir)]
-### [SDK Track](#tab/SDK-track)
+### [SDK track](#tab/SDK-track)
-Not applicable
+Not applicable.
-### [SDK and CLI Track](#tab/SDK-and-CLI-track)
+### [SDK and CLI track](#tab/SDK-and-CLI-track)
### Set up the CLI
-1. Install the Azure Machine Learning extension
+1. Install the Azure Machine Learning extension.
[!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/1. Develop a feature set and register with managed feature store.ipynb?name=install-ml-ext-cli)]
-1. Authentication
+1. Authenticate.
[!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/1. Develop a feature set and register with managed feature store.ipynb?name=auth-cli)]
-1. Set the default subscription
+1. Set the default subscription.
[!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/1. Develop a feature set and register with managed feature store.ipynb?name=set-default-subs-cli)]

> [!NOTE]
-> Feature store Vs Project workspace: You'll use a feature store to reuse features across projects. You'll use a project workspace (an Azure Machine Learning workspace) to train and inference models, by leveraging features from feature stores. Many project workspaces can share and reuse the same feature store.
+> You use a feature store to reuse features across projects. You use a project workspace (an Azure Machine Learning workspace) to train models and run inference, by taking advantage of features from feature stores. Many project workspaces can share and reuse the same feature store.
-### [SDK Track](#tab/SDK-track)
+### [SDK track](#tab/SDK-track)
This tutorial uses two SDKs:
-* The Feature Store CRUD SDK
-* You use the same MLClient (package name azure-ai-ml) SDK that you use with the Azure Machine Learning workspace. A feature store is implemented as a type of workspace. As a result, this SDK is used for feature store CRUD operations for feature store, feature set, and feature store entity.
-* The feature store core SDK
-
- This SDK (azureml-featurestore) is intended for feature set development and consumption. Later steps in this tutorial describe these operations:
-
- * Feature set specification development
- * Feature data retrieval
- * List and Get registered feature sets
- * Generate and resolve feature retrieval specs
- * Generate training and inference data using point-in-time joins
+* *Feature store CRUD SDK*
+
+ You use the same `MLClient` (package name `azure-ai-ml`) SDK that you use with the Azure Machine Learning workspace. A feature store is implemented as a type of workspace. As a result, this SDK is used for CRUD operations for feature stores, feature sets, and feature store entities.
+
+* *Feature store core SDK*
+
+ This SDK (`azureml-featurestore`) is for feature set development and consumption. Later steps in this tutorial describe these operations:
+
+ * Develop a feature set specification.
+ * Retrieve feature data.
+ * List or get a registered feature set.
+ * Generate and resolve feature retrieval specifications.
+ * Generate training and inference data by using point-in-time joins.
+
+This tutorial doesn't require explicit installation of those SDKs, because the earlier Conda YAML instructions cover this step.
-This tutorial doesn't require explicit installation of those SDKs, because the earlier **conda YAML** instructions cover this step.
+### [SDK and CLI track](#tab/SDK-and-CLI-track)
-### [SDK and CLI Track](#tab/SDK-and-CLI-track)
+This tutorial uses both the feature store core SDK and the CLI for CRUD operations. It uses the Python SDK only for feature set development and testing. This approach is useful for GitOps or CI/CD scenarios, where CLI/YAML is preferred.
-This tutorial uses both the Feature store core SDK, and the CLI, for CRUD operations. It only uses the Python SDK for Feature set development and testing. This approach is useful for GitOps or CI / CD scenarios, where CLI / yaml is preferred.
+Here are general guidelines:
-* Use the CLI for CRUD operations on feature store, feature set, and feature store entities
-* Feature store core SDK: This SDK (`azureml-featurestore`) is meant for feature set development and consumption. This tutorial covers these operations:
+* Use the CLI for CRUD operations on feature stores, feature sets, and feature store entities.
+* The feature store core SDK (`azureml-featurestore`) is for feature set development and consumption. This tutorial covers these operations:
- * List / Get a registered feature set
- * Generate / resolve a feature retrieval spec
- * Execute a feature set definition, to generate a Spark dataframe
- * Generate training with a point-in-time join
+ * List or get a registered feature set
+ * Generate or resolve a feature retrieval specification
+ * Execute a feature set definition, to generate a Spark DataFrame
+ * Generate training data by using point-in-time joins
-This tutorial doesn't need explicit installation of these resources, because the instructions cover these steps. The **conda.yaml** file includes them in an earlier step.
+This tutorial doesn't need explicit installation of these resources, because the instructions cover these steps. The *conda.yml* file includes them in an earlier step.
## Create a minimal feature store
-1. Set feature store parameters
-
- Set the name, location, and other values for the feature store
+1. Set feature store parameters, including name, location, and other values.
[!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/1. Develop a feature set and register with managed feature store.ipynb?name=fs-params)]
-1. Create the feature store
+1. Create the feature store.
- ### [SDK Track](#tab/SDK-track)
+ ### [SDK track](#tab/SDK-track)
[!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/1. Develop a feature set and register with managed feature store.ipynb?name=create-fs)]
- ### [SDK and CLI Track](#tab/SDK-and-CLI-track)
+ ### [SDK and CLI track](#tab/SDK-and-CLI-track)
[!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/1. Develop a feature set and register with managed feature store.ipynb?name=create-fs-cli)]
-1. Initialize an Azure Machine Learning feature store core SDK client
+1. Initialize a feature store core SDK client for Azure Machine Learning.
- As explained earlier in this tutorial, the feature store core SDK client is used to develop and consume features
+ As explained earlier in this tutorial, the feature store core SDK client is used to develop and consume features.
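   The included cell performs this initialization. As a sketch of what it looks like, assuming the `featurestore_*` parameter variables set in the earlier step and the preview packages from the Conda file:

   ```python
   # Initialize the feature store core SDK client (sketch).
   from azure.ai.ml.identity import AzureMLOnBehalfOfCredential
   from azureml.featurestore import FeatureStoreClient

   featurestore = FeatureStoreClient(
       credential=AzureMLOnBehalfOfCredential(),  # runs as you inside the Spark session
       subscription_id=featurestore_subscription_id,
       resource_group_name=featurestore_resource_group_name,
       name=featurestore_name,
   )
   ```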
[!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/1. Develop a feature set and register with managed feature store.ipynb?name=init-fs-core-sdk)]

## Prototype and develop a feature set
-We'll build a feature set named `transactions` that has rolling, window aggregate-based features
+In the following steps, you build a feature set named `transactions` that has rolling, window aggregate-based features:
-1. Explore the transactions source data
+1. Explore the `transactions` source data.
- > [!NOTE]
- > This notebook uses sample data hosted in a publicly-accessible blob container. It can only be read into Spark with a `wasbs` driver. When you create feature sets using your own source data, host them in an adls gen2 account, and use an `abfss` driver in the data path.
+ This notebook uses sample data hosted in a publicly accessible blob container. It can be read into Spark only through a `wasbs` driver. When you create feature sets by using your own source data, host them in an Azure Data Lake Storage Gen2 account, and use an `abfss` driver in the data path.
[!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/1. Develop a feature set and register with managed feature store.ipynb?name=explore-txn-src-data)]
-1. Locally develop the feature set
-
- A feature set specification is a self-contained feature set definition that you can locally develop and test. Here, we create these rolling window aggregate features:
+1. Locally develop the feature set.
- * transactions three-day count
- * transactions amount three-day sum
- * transactions amount three-day avg
- * transactions seven-day count
- * transactions amount seven-day sum
- * transactions amount seven-day avg
+ A feature set specification is a self-contained definition of a feature set that you can locally develop and test. Here, you create these rolling window aggregate features:
- **Action:**
+ * `transactions three-day count`
+ * `transactions amount three-day sum`
+ * `transactions amount three-day avg`
+ * `transactions seven-day count`
+ * `transactions amount seven-day sum`
+ * `transactions amount seven-day avg`
- - Review the feature transformation code file: `featurestore/featuresets/transactions/transformation_code/transaction_transform.py`. Note the rolling aggregation defined for the features. This is a spark transformer.
+ Review the feature transformation code file: *featurestore/featuresets/transactions/transformation_code/transaction_transform.py*. Note the rolling aggregation defined for the features. This is a Spark transformer.
- See [feature store concepts](./concept-what-is-managed-feature-store.md) and **transformation concepts** to learn more about the feature set and transformations.
+ To learn more about the feature set and transformations, see [What is managed feature store?](./concept-what-is-managed-feature-store.md).
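   As a rough illustration of what such a transformer computes (not the actual *transaction_transform.py* code), here's a self-contained three-day rolling aggregation in PySpark; the column names and sample rows are made up:

   ```python
   # Illustrative rolling three-day aggregation per account (sketch).
   from pyspark.sql import SparkSession, functions as F
   from pyspark.sql.window import Window

   spark = SparkSession.builder.getOrCreate()

   # Tiny made-up sample; the real source is the transactions Parquet data.
   df = spark.createDataFrame(
       [
           ("A1", "T1", 1672531200, 25.0),  # timestamps as epoch seconds
           ("A1", "T2", 1672617600, 75.0),
           ("A1", "T3", 1673222400, 40.0),
       ],
       ["accountID", "transactionID", "timestamp", "transactionAmount"],
   )

   three_days = 3 * 24 * 60 * 60  # window length in seconds
   w = (
       Window.partitionBy("accountID")
       .orderBy(F.col("timestamp").cast("long"))
       .rangeBetween(-three_days, 0)
   )

   df_agg = (
       df.withColumn("transaction_3d_count", F.count("transactionID").over(w))
       .withColumn("transaction_amount_3d_sum", F.sum("transactionAmount").over(w))
       .withColumn("transaction_amount_3d_avg", F.avg("transactionAmount").over(w))
   )
   df_agg.show()
   ```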
[!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/1. Develop a feature set and register with managed feature store.ipynb?name=develop-txn-fset-locally)]
-1. Export as a feature set spec
+1. Export as a feature set specification.
+
+ To register the feature set specification with the feature store, you must save that specification in a specific format.
- To register the feature set spec with the feature store, you must save that spec in a specific format.
+ Review the generated `transactions` feature set specification. Open this file from the file tree to see the specification: *featurestore/featuresets/transactions/spec/FeaturesetSpec.yaml*.
- **Action:** Review the generated `transactions` feature set spec: Open this file from the file tree to see the spec: `featurestore/featuresets/accounts/spec/FeaturesetSpec.yaml`
+ The specification contains these elements:
- The spec contains these elements:
-
- 1. `source`: a reference to a storage resource. In this case, it's a parquet file in a blob storage resource.
- 1. `features`: a list of features and their datatypes. If you provide transformation code (see the Day 2 section), the code must return a dataframe that maps to the features and datatypes.
- 1. `index_columns`: the join keys required to access values from the feature set
+ * `source`: A reference to a storage resource. In this case, it's a Parquet file in a blob storage resource.
+ * `features`: A list of features and their datatypes. If you provide transformation code (see the "Day 2" section), the code must return a DataFrame that maps to the features and datatypes.
+ * `index_columns`: The join keys required to access values from the feature set.
- To learn more about the spec, see [top level feature store entities document](./concept-top-level-entities-in-managed-feature-store.md) and the [feature set spec yaml reference](./reference-yaml-feature-set.md).
+ To learn more about the specification, see [Understanding top-level entities in managed feature store](./concept-top-level-entities-in-managed-feature-store.md) and [CLI (v2) feature set YAML schema](./reference-yaml-feature-set.md).
- Persisting the feature set spec offers another benefit: the feature set spec can be source controlled.
+ Persisting the feature set specification offers another benefit: the feature set specification can be source controlled.
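   The included cell persists the specification. Roughly, assuming the locally developed spec object (`transactions_featureset_spec`) from the earlier cell, the dump looks like this sketch:

   ```python
   # Persist the feature set spec so it can be source controlled (sketch).
   import os

   spec_folder = "featurestore/featuresets/transactions/spec"  # illustrative location
   os.makedirs(spec_folder, exist_ok=True)

   # Writes the spec YAML into the folder for later registration.
   transactions_featureset_spec.dump(spec_folder)
   ```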
[!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/1. Develop a feature set and register with managed feature store.ipynb?name=dump-transactions-fs-spec)]
-## Register a feature-store entity
+## Register a feature store entity
+
+As a best practice, entities help enforce use of the same join key definition across feature sets that use the same logical entities. Examples of entities include accounts and customers. Entities are typically created once and then reused across feature sets. To learn more, see [Understanding top-level entities in managed feature store](./concept-top-level-entities-in-managed-feature-store.md).
+
+### [SDK track](#tab/SDK-track)
-As a best practice, entities help enforce use of the same join key definition across feature sets that use the same logical entities. Examples of entities can include accounts, customers, etc. Entities are typically created once, and then reused across feature sets. To learn more, see [feature store concepts](./concept-top-level-entities-in-managed-feature-store.md).
+1. Initialize the feature store CRUD client.
- ### [SDK Track](#tab/SDK-track)
+ As explained earlier in this tutorial, `MLClient` is used for creating, reading, updating, and deleting a feature store asset. The notebook code cell sample shown here searches for the feature store that you created in an earlier step. Here, you can't reuse the same `ml_client` value that you used earlier in this tutorial, because it's scoped at the resource group level. Proper scoping is a prerequisite for feature store creation.
- 1. Initialize the Feature Store CRUD client
+ In this code sample, the client is scoped at the feature store level.
- As explained earlier in this tutorial, the MLClient is used for feature store asset CRUD (create, update, and delete). The notebook code cell sample shown here searches for the feature store we created in an earlier step. Here, we can't reuse the same ml_client we used earlier in this tutorial, because the earlier ml_client is scoped at the resource group level. Proper scoping is a prerequisite for feature store creation. In this code sample, the client is scoped at feature store level.
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/1. Develop a feature set and register with managed feature store.ipynb?name=init-fset-crud-client)]
- [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/1. Develop a feature set and register with managed feature store.ipynb?name=init-fset-crud-client)]
+1. Register the `account` entity with the feature store.
- 1. Register the `account` entity with the feature store
+ Create an `account` entity that has the join key `accountID` of type `string`.
- Create an account entity that has the join key `accountID`, of type string.
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/1. Develop a feature set and register with managed feature store.ipynb?name=register-acct-entity)]
- [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/1. Develop a feature set and register with managed feature store.ipynb?name=register-acct-entity)]
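   As a sketch of what that cell does, assuming `fs_client` is the feature-store-scoped `MLClient` and the preview entity classes behave as in the sample notebooks:

   ```python
   # Register an entity with a string join key (sketch).
   from azure.ai.ml.entities import DataColumn, DataColumnType, FeatureStoreEntity

   account_entity = FeatureStoreEntity(
       name="account",
       version="1",
       index_columns=[DataColumn(name="accountID", type=DataColumnType.STRING)],
   )
   poller = fs_client.feature_store_entities.begin_create_or_update(account_entity)
   print(poller.result())
   ```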
+### [SDK and CLI track](#tab/SDK-and-CLI-track)
- ### [SDK and CLI Track](#tab/SDK-and-CLI-track)
+1. Initialize the feature store CRUD client.
- 1. Initialize the Feature Store CRUD client
+ As explained earlier in this tutorial, `MLClient` is used for creating, reading, updating, and deleting a feature store asset. The notebook code cell sample shown here searches for the feature store that you created in an earlier step. Here, you can't reuse the same `ml_client` value that you used earlier in this tutorial, because it's scoped at the resource group level. Proper scoping is a prerequisite for feature store creation.
- As explained earlier in this tutorial, MLClient is used for feature store asset CRUD (create, update, and delete). The notebook code cell sample shown here searches for the feature store we created in an earlier step. Here, we can't reuse the same ml_client we used earlier in this tutorial, because the earlier ml_client is scoped at the resource group level. Proper scoping is a prerequisite for feature store creation. In this code sample, the client is scoped at the feature store level, and it registers the `account` entity with the feature store. Additionally, it creates an account entity that has the join key `accountID`, of type string.
+ In this code sample, the client is scoped at the feature store level, and it registers the `account` entity with the feature store. Additionally, it creates an account entity that has the join key `accountID` of type `string`.
- [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/1. Develop a feature set and register with managed feature store.ipynb?name=register-acct-entity-cli)]
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/1. Develop a feature set and register with managed feature store.ipynb?name=register-acct-entity-cli)]
-
+ ## Register the transaction feature set with the feature store
-First, register a feature set asset with the feature store. You can then reuse that asset, and easily share it. Feature set asset registration offers managed capabilities, including versioning and materialization. Later steps in this tutorial series cover managed capabilities.
+Use the following code to register a feature set asset with the feature store. You can then reuse that asset and easily share it. Registration of a feature set asset offers managed capabilities, including versioning and materialization. Later steps in this tutorial series cover managed capabilities.
- ### [SDK Track](#tab/SDK-track)
+### [SDK track](#tab/SDK-track)
- [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/1. Develop a feature set and register with managed feature store.ipynb?name=register-txn-fset)]
+[!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/1. Develop a feature set and register with managed feature store.ipynb?name=register-txn-fset)]
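As a sketch of the registration in the included cell, assuming `fs_client` is the feature-store-scoped `MLClient`, the entity registered in the previous step, and the spec folder persisted earlier (class and property names follow the preview sample notebooks):

```python
# Register the feature set asset, referencing the persisted spec (sketch).
from azure.ai.ml.entities import FeatureSet, FeatureSetSpecification

transactions_fset = FeatureSet(
    name="transactions",
    version="1",
    entities=["azureml:account:1"],  # entity registered in the previous step
    specification=FeatureSetSpecification(
        path="featurestore/featuresets/transactions/spec"  # illustrative path
    ),
)
poller = fs_client.feature_sets.begin_create_or_update(transactions_fset)
print(poller.result())
```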
- ### [SDK and CLI Track](#tab/SDK-and-CLI-track)
+### [SDK and CLI track](#tab/SDK-and-CLI-track)
- [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/1. Develop a feature set and register with managed feature store.ipynb?name=register-txn-fset-cli)]
+[!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/1. Develop a feature set and register with managed feature store.ipynb?name=register-txn-fset-cli)]
-
+ ## Explore the feature store UI
-* Open the [Azure Machine Learning global landing page](https://ml.azure.com/home).
-* Select `Feature stores` in the left nav
-* From this list of accessible feature stores, select the feature store you created earlier in this tutorial.
+Feature store asset creation and updates can happen only through the SDK and CLI. You can use the UI to search or browse through the feature store:
-> [!NOTE]
-> Feature store asset creation and updates can happen only through the SDK and CLI. You can use the UI to search or browse the feature store.
+1. Open the [Azure Machine Learning global landing page](https://ml.azure.com/home).
+1. Select **Feature stores** on the left pane.
+1. From the list of accessible feature stores, select the feature store that you created earlier in this tutorial.
-## Generate a training data dataframe using the registered feature set
+## Generate a training data DataFrame by using the registered feature set
-1. Load observation data
+1. Load observation data.
- Observation data typically involves the core data used for training and inferencing. This data joins with the feature data to create the full training data resource. Observation data is data captured during the event itself. Here, it has core transaction data, including transaction ID, account ID, and transaction amount values. Since we use it for training, it also has an appended target variable (**is_fraud**).
+ Observation data typically involves the core data used for training and inferencing. This data joins with the feature data to create the full training data resource.
+
+ Observation data is data captured during the event itself. Here, it has core transaction data, including transaction ID, account ID, and transaction amount values. Because you use it for training, it also has an appended target variable (**is_fraud**).
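   The included cell loads and displays this data. As a minimal sketch, assuming sample Parquet data in a publicly accessible blob container (the `wasbs` path and column name are illustrative):

   ```python
   # Load observation data from a public blob container (sketch).
   observation_data_path = (
       "wasbs://data@azuremlexampledata.blob.core.windows.net/"
       "feature-store-prp/observation_data/train/*.parquet"
   )
   observation_data_df = spark.read.parquet(observation_data_path)

   obs_data_timestamp_column = "timestamp"  # event-time column used for the join
   observation_data_df.show(5)
   ```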
[!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/1. Develop a feature set and register with managed feature store.ipynb?name=load-obs-data)]
-1. Get the registered feature set, and list its features
+1. Get the registered feature set, and list its features.
[!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/1. Develop a feature set and register with managed feature store.ipynb?name=get-txn-fset)]

[!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/1. Develop a feature set and register with managed feature store.ipynb?name=print-txn-fset-sample-values)]
-1. Select features, and generate training data
-
- Here, we select the features that become part of the training data. Then, we use the feature store SDK to generate the training data itself.
+1. Select the features that become part of the training data. Then, use the feature store SDK to generate the training data itself.
[!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/1. Develop a feature set and register with managed feature store.ipynb?name=select-features-and-gen-training-data)]

A point-in-time join appends the features to the training data.
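As a sketch of that generation step, assuming the `azureml-featurestore` package and the names used in the earlier cells (a `features` list, the observation DataFrame, and its timestamp column):

```python
# Generate training data with a point-in-time join (sketch).
from azureml.featurestore import get_offline_features

training_df = get_offline_features(
    features=features,
    observation_data=observation_data_df,
    timestamp_column=obs_data_timestamp_column,
)
training_df.show(5)
```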
-This tutorial built the training data with features from feature store. Optional: you can save the training data to storage for later use, or you can run model training on it directly.
+This tutorial built the training data with features from the feature store. Optionally, you can save the training data to storage for later use, or you can run model training on it directly.
-## Cleanup
+## Clean up
-The Tutorial #4 [clean up step](./tutorial-enable-recurrent-materialization-run-batch-inference.md#cleanup) describes how to delete the resources
+The [fourth tutorial in the series](./tutorial-enable-recurrent-materialization-run-batch-inference.md#clean-up) describes how to delete the resources.
## Next steps
-* [Part 2: enable materialization and back fill feature data](./tutorial-enable-materialization-backfill-data.md)
-* Understand concepts: [feature store concepts](./concept-what-is-managed-feature-store.md), [top level entities in managed feature store](./concept-top-level-entities-in-managed-feature-store.md)
-* [Understand identity and access control for feature store](./how-to-setup-access-control-feature-store.md)
-* [View feature store troubleshooting guide](./troubleshooting-managed-feature-store.md)
-* Reference: [YAML reference](./reference-yaml-overview.md)
+* Go to the next tutorial in the series: [Enable materialization and backfill feature data](./tutorial-enable-materialization-backfill-data.md).
+* Learn about [feature store concepts](./concept-what-is-managed-feature-store.md) and [top-level entities in managed feature store](./concept-top-level-entities-in-managed-feature-store.md).
+* Learn about [identity and access control for managed feature store](./how-to-setup-access-control-feature-store.md).
+* View the [troubleshooting guide for managed feature store](./troubleshooting-managed-feature-store.md).
+* View the [YAML reference](./reference-yaml-overview.md).
machine-learning Concept Automated Ml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/concept-automated-ml.md
Classification is a common machine learning task. Classification is a type of su
The main goal of classification models is to predict which categories new data will fall into based on learnings from its training data. Common classification examples include fraud detection, handwriting recognition, and object detection. Learn more and see an example at [Create a classification model with automated ML (v1)](../tutorial-first-experiment-automated-ml.md).
-See examples of classification and automated machine learning in these Python notebooks: [Fraud Detection](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/classification-credit-card-fraud/auto-ml-classification-credit-card-fraud.ipynb), [Marketing Prediction](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/classification-bank-marketing-all-features/auto-ml-classification-bank-marketing-all-features.ipynb), and [Newsgroup Data Classification](https://github.com/Azure/azureml-examples/tree/main/v1/python-sdk/tutorials/automl-with-azureml/classification-text-dnn)
+See examples of classification and automated machine learning in these Python notebooks: [Fraud Detection](https://github.com/Azure/azureml-examples/blob/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/classification-credit-card-fraud/auto-ml-classification-credit-card-fraud.ipynb), [Marketing Prediction](https://github.com/Azure/azureml-examples/blob/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/classification-bank-marketing-all-features/auto-ml-classification-bank-marketing-all-features.ipynb), and [Newsgroup Data Classification](https://github.com/Azure/azureml-examples/tree/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/classification-text-dnn)
### Regression
Similar to classification, regression tasks are also a common supervised learnin
Different from classification where predicted output values are categorical, regression models predict numerical output values based on independent predictors. In regression, the objective is to help establish the relationship among those independent predictor variables by estimating how one variable impacts the others. For example, automobile price based on features like, gas mileage, safety rating, etc. Learn more and see an example of [regression with automated machine learning (v1)](how-to-auto-train-models-v1.md).
-See examples of regression and automated machine learning for predictions in these Python notebooks: [CPU Performance Prediction](https://github.com/Azure/azureml-examples/tree/main/v1/python-sdk/tutorials/automl-with-azureml/regression-explanation-featurization),
+See examples of regression and automated machine learning for predictions in these Python notebooks: [CPU Performance Prediction](https://github.com/Azure/azureml-examples/tree/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/regression-explanation-featurization),
### Time-series forecasting
Advanced forecasting configuration includes:
* rolling window aggregate features
-See examples of regression and automated machine learning for predictions in these Python notebooks: [Sales Forecasting](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/forecasting-orange-juice-sales/auto-ml-forecasting-orange-juice-sales.ipynb), [Demand Forecasting](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/forecasting-energy-demand/auto-ml-forecasting-energy-demand.ipynb), and [Forecasting GitHub's Daily Active Users](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/forecasting-github-dau/auto-ml-forecasting-github-dau.ipynb).
+See examples of regression and automated machine learning for predictions in these Python notebooks: [Sales Forecasting](https://github.com/Azure/azureml-examples/blob/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/forecasting-orange-juice-sales/auto-ml-forecasting-orange-juice-sales.ipynb), [Demand Forecasting](https://github.com/Azure/azureml-examples/blob/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/forecasting-energy-demand/auto-ml-forecasting-energy-demand.ipynb), and [Forecasting GitHub's Daily Active Users](https://github.com/Azure/azureml-examples/blob/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/forecasting-github-dau/auto-ml-forecasting-github-dau.ipynb).
### Computer vision
See the [how-to (v1)](how-to-configure-auto-train.md#ensemble-configuration) for
With Azure Machine Learning, you can use automated ML to build a Python model and have it converted to the ONNX format. Once the models are in the ONNX format, they can be run on a variety of platforms and devices. Learn more about [accelerating ML models with ONNX](../concept-onnx.md).
-See how to convert to ONNX format [in this Jupyter notebook example](https://github.com/Azure/azureml-examples/tree/main/v1/python-sdk/tutorials/automl-with-azureml/classification-bank-marketing-all-features). Learn which [algorithms are supported in ONNX (v1)](../how-to-configure-auto-train.md#supported-algorithms).
+See how to convert to ONNX format [in this Jupyter notebook example](https://github.com/Azure/azureml-examples/tree/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/classification-bank-marketing-all-features). Learn which [algorithms are supported in ONNX (v1)](../how-to-configure-auto-train.md#supported-algorithms).
The ONNX runtime also supports C#, so you can use the model built automatically in your C# apps without any need for recoding or any of the network latencies that REST endpoints introduce. Learn more about [using an AutoML ONNX model in a .NET application with ML.NET](../how-to-use-automl-onnx-model-dotnet.md) and [inferencing ONNX models with the ONNX runtime C# API](https://onnxruntime.ai/docs/api/csharp-api.html).
How-to articles provide additional detail into what functionality automated ML o
### Jupyter notebook samples
-Review detailed code examples and use cases in the [GitHub notebook repository for automated machine learning samples](https://github.com/Azure/azureml-examples/tree/main/v1/python-sdk/tutorials/automl-with-azureml).
+Review detailed code examples and use cases in the [GitHub notebook repository for automated machine learning samples](https://github.com/Azure/azureml-examples/tree/v1-archive/v1/python-sdk/tutorials/automl-with-azureml).
### Python SDK reference
machine-learning How To Auto Train Forecast https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-auto-train-forecast.md
The table shows resulting feature engineering that occurs when window aggregatio
![target rolling window](../media/how-to-auto-train-forecast/target-roll.svg)
-View a Python code example applying the [target rolling window aggregate feature](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/forecasting-energy-demand/auto-ml-forecasting-energy-demand.ipynb).
+View a Python code example applying the [target rolling window aggregate feature](https://github.com/Azure/azureml-examples/blob/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/forecasting-energy-demand/auto-ml-forecasting-energy-demand.ipynb).
### Short series handling
mse = mean_squared_error(
rolling_forecast_df[fitted_model.actual_column_name], rolling_forecast_df[fitted_model.forecast_column_name]) ```
-In this sample, the step size for the rolling forecast is set to one which means that the forecaster is advanced one period, or one day in our demand prediction example, at each iteration. The total number of forecasts returned by `rolling_forecast` thus depends on the length of the test set and this step size. For more details and examples see the [rolling_forecast() documentation](/python/api/azureml-training-tabular/azureml.training.tabular.models.forecasting_pipeline_wrapper_base.forecastingpipelinewrapperbase#azureml-training-tabular-models-forecasting-pipeline-wrapper-base-forecastingpipelinewrapperbase-rolling-forecast) and the [Forecasting away from training data notebook](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/forecasting-forecast-function/auto-ml-forecasting-function.ipynb).
+In this sample, the step size for the rolling forecast is set to one, which means that the forecaster is advanced one period (one day in our demand prediction example) at each iteration. The total number of forecasts returned by `rolling_forecast` therefore depends on the length of the test set and this step size. For more details and examples, see the [rolling_forecast() documentation](/python/api/azureml-training-tabular/azureml.training.tabular.models.forecasting_pipeline_wrapper_base.forecastingpipelinewrapperbase#azureml-training-tabular-models-forecasting-pipeline-wrapper-base-forecastingpipelinewrapperbase-rolling-forecast) and the [Forecasting away from training data notebook](https://github.com/Azure/azureml-examples/blob/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/forecasting-forecast-function/auto-ml-forecasting-function.ipynb).
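As a rough sketch (assuming `fitted_model` is the fitted forecaster and `X_test`/`y_test` hold the test-set features and actuals), the rolling forecast that feeds the metric computation above could be produced like this:

```python
# Advance the forecast origin one period per iteration (step=1).
rolling_forecast_df = fitted_model.rolling_forecast(X_test, y_test, step=1)
```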
### Prediction into the future
-The [forecast_quantiles()](/python/api/azureml-train-automl-client/azureml.train.automl.model_proxy.modelproxy#forecast-quantiles-x-values--typing-any--y-values--typing-union-typing-any--nonetype-none--forecast-destination--typing-union-typing-any--nonetype-none--ignore-data-errors--boolfalse--azureml-data-abstract-dataset-abstractdataset) function allows specifications of when predictions should start, unlike the `predict()` method, which is typically used for classification and regression tasks. The forecast_quantiles() method by default generates a point forecast or a mean/median forecast which doesn't have a cone of uncertainty around it. Learn more in the [Forecasting away from training data notebook](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/forecasting-forecast-function/auto-ml-forecasting-function.ipynb).
+The [forecast_quantiles()](/python/api/azureml-train-automl-client/azureml.train.automl.model_proxy.modelproxy#forecast-quantiles-x-values--typing-any--y-values--typing-union-typing-any--nonetype-none--forecast-destination--typing-union-typing-any--nonetype-none--ignore-data-errors--boolfalse--azureml-data-abstract-dataset-abstractdataset) function lets you specify when predictions should start, unlike the `predict()` method, which is typically used for classification and regression tasks. By default, `forecast_quantiles()` generates a point forecast, or a mean/median forecast, that doesn't have a cone of uncertainty around it. Learn more in the [Forecasting away from training data notebook](https://github.com/Azure/azureml-examples/blob/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/forecasting-forecast-function/auto-ml-forecasting-function.ipynb).
In the following example, you first replace all values in `y_pred` with `NaN`. The forecast origin is at the end of training data in this case. However, if you replaced only the second half of `y_pred` with `NaN`, the function would leave the numerical values in the first half unmodified, but forecast the `NaN` values in the second half. The function returns both the forecasted values and the aligned features.
fitted_model.forecast_quantiles(
    test_dataset, label_query, forecast_destination=pd.Timestamp(2019, 1, 8))
```
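To get distribution forecasts rather than a single point forecast, one option (a sketch reusing `fitted_model`, `test_dataset`, and `label_query` from the snippet above) is to set the `quantiles` property before calling `forecast_quantiles()`:

```python
import pandas as pd

# Request the 5th, 50th, and 95th percentile forecasts.
fitted_model.quantiles = [0.05, 0.5, 0.95]
quantile_predictions = fitted_model.forecast_quantiles(
    test_dataset, label_query, forecast_destination=pd.Timestamp(2019, 1, 8))
```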
-You can calculate model metrics like, root mean squared error (RMSE) or mean absolute percentage error (MAPE) to help you estimate the models performance. See the Evaluate section of the [Bike share demand notebook](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/forecasting-bike-share/auto-ml-forecasting-bike-share.ipynb) for an example.
+You can calculate model metrics like root mean squared error (RMSE) or mean absolute percentage error (MAPE) to help you estimate the model's performance. See the Evaluate section of the [Bike share demand notebook](https://github.com/Azure/azureml-examples/blob/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/forecasting-bike-share/auto-ml-forecasting-bike-share.ipynb) for an example.
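For instance, a minimal sketch of both metrics with scikit-learn and NumPy, assuming `actual` and `predicted` are aligned arrays of test-set values:

```python
import numpy as np
from sklearn.metrics import mean_squared_error

rmse = np.sqrt(mean_squared_error(actual, predicted))
mape = np.mean(np.abs((actual - predicted) / actual)) * 100  # assumes no zero actuals
print(f"RMSE: {rmse:.3f}, MAPE: {mape:.2f}%")
```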
After the overall model accuracy has been determined, the most realistic next step is to use the model to forecast unknown future values.
The following diagram shows the workflow for the many models solution.
![Many models concept diagram](../media/how-to-auto-train-forecast/many-models.svg)
-The following code demonstrates the key parameters users need to set up their many models run. See the [Many Models- Automated ML notebook](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/forecasting-many-models/auto-ml-forecasting-many-models.ipynb) for a many models forecasting example
+The following code demonstrates the key parameters users need to set up their many models run. See the [Many Models - Automated ML notebook](https://github.com/Azure/azureml-examples/blob/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/forecasting-many-models/auto-ml-forecasting-many-models.ipynb) for a many models forecasting example.
```python
from azureml.train.automl.runtime._many_models.many_models_parameters import ManyModelsTrainParameters
```
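A hedged sketch of the parameters object itself, assuming `automl_settings` is a standard AutoML settings dictionary and `partition_column_names` lists the columns whose distinct value combinations define one model each:

```python
mm_parameters = ManyModelsTrainParameters(
    automl_settings=automl_settings,
    partition_column_names=partition_column_names,
)
```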
To further visualize this, the leaf levels of the hierarchy contain all the time
The hierarchical time series solution is built on top of the Many Models solution and shares a similar configuration setup.
-The following code demonstrates the key parameters to set up your hierarchical time series forecasting runs. See the [Hierarchical time series- Automated ML notebook](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/forecasting-hierarchical-timeseries/auto-ml-forecasting-hierarchical-timeseries.ipynb), for an end to end example.
+The following code demonstrates the key parameters to set up your hierarchical time series forecasting runs. See the [Hierarchical time series - Automated ML notebook](https://github.com/Azure/azureml-examples/blob/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/forecasting-hierarchical-timeseries/auto-ml-forecasting-hierarchical-timeseries.ipynb) for an end-to-end example.
```python
hts_parameters = HTSTrainParameters(
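    # Sketch completion with assumed values: automl_settings is an AutoML settings
    # dictionary defined earlier; hierarchy_column_names and training_level are
    # hypothetical and depend on your data's hierarchy.
    automl_settings=automl_settings,
    hierarchy_column_names=["state", "store_id"],
    training_level="state",
)
```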
## Example notebooks
-See the [forecasting sample notebooks](https://github.com/Azure/azureml-examples/tree/main/v1/python-sdk/tutorials/automl-with-azureml) for detailed code examples of advanced forecasting configuration including:
+See the [forecasting sample notebooks](https://github.com/Azure/azureml-examples/tree/v1-archive/v1/python-sdk/tutorials/automl-with-azureml) for detailed code examples of advanced forecasting configuration including:
-* [holiday detection and featurization](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/forecasting-bike-share/auto-ml-forecasting-bike-share.ipynb)
-* [rolling-origin cross validation](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/forecasting-energy-demand/auto-ml-forecasting-energy-demand.ipynb)
-* [configurable lags](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/forecasting-bike-share/auto-ml-forecasting-bike-share.ipynb)
-* [rolling window aggregate features](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/forecasting-energy-demand/auto-ml-forecasting-energy-demand.ipynb)
+* [holiday detection and featurization](https://github.com/Azure/azureml-examples/blob/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/forecasting-bike-share/auto-ml-forecasting-bike-share.ipynb)
+* [rolling-origin cross validation](https://github.com/Azure/azureml-examples/blob/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/forecasting-energy-demand/auto-ml-forecasting-energy-demand.ipynb)
+* [configurable lags](https://github.com/Azure/azureml-examples/blob/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/forecasting-bike-share/auto-ml-forecasting-bike-share.ipynb)
+* [rolling window aggregate features](https://github.com/Azure/azureml-examples/blob/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/forecasting-energy-demand/auto-ml-forecasting-energy-demand.ipynb)
## Next steps
machine-learning How To Auto Train Image Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-auto-train-image-models.md
Automated ML supports model training for computer vision tasks like image classi
To install the SDK you can either,
* Create a compute instance, which automatically installs the SDK and is pre-configured for ML workflows. For more information, see [Create and manage an Azure Machine Learning compute instance](../how-to-create-compute-instance.md).
- * [Install the `automl` package yourself](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/README.md#setup-using-a-local-conda-environment), which includes the [default installation](/python/api/overview/azure/ml/install#default-install) of the SDK.
+ * [Install the `automl` package yourself](https://github.com/Azure/azureml-examples/blob/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/README.md#setup-using-a-local-conda-environment), which includes the [default installation](/python/api/overview/azure/ml/install#default-install) of the SDK.
> [!NOTE]
> Only Python 3.7 and 3.8 are compatible with automated ML support for computer vision tasks.
For a detailed description on task specific hyperparameters, please refer to [Hy
If you want to use tiling and control its behavior, the following parameters are available: `tile_grid_size`, `tile_overlap_ratio`, and `tile_predictions_nms_thresh`. For more details on these parameters, see [Train a small object detection model using AutoML](../how-to-use-automl-small-object-detect.md).
## Example notebooks
-Review detailed code examples and use cases in the [GitHub notebook repository for automated machine learning samples](https://github.com/Azure/azureml-examples/tree/main/v1/python-sdk/tutorials/automl-with-azureml). Please check the folders with 'image-' prefix for samples specific to building computer vision models.
+Review detailed code examples and use cases in the [GitHub notebook repository for automated machine learning samples](https://github.com/Azure/azureml-examples/tree/v1-archive/v1/python-sdk/tutorials/automl-with-azureml). Please check the folders with 'image-' prefix for samples specific to building computer vision models.
## Next steps
machine-learning How To Auto Train Models V1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-auto-train-models-v1.md
If you don't have an Azure subscription, create a free account before you begin.
This article is also available on [GitHub](https://github.com/Azure/MachineLearningNotebooks/tree/master/tutorials) if you wish to run it in your own [local environment](how-to-configure-environment.md). To get the required packages,
-* [Install the full `automl` client](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/README.md#setup-using-a-local-conda-environment).
+* [Install the full `automl` client](https://github.com/Azure/azureml-examples/blob/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/README.md#setup-using-a-local-conda-environment).
* Run `pip install azureml-opendatasets azureml-widgets` to get the required packages.
## Download and prepare data
machine-learning How To Auto Train Nlp Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-auto-train-nlp-models.md
You can seamlessly integrate with the [Azure Machine Learning data labeling](../
To install the SDK you can either,
* Create a compute instance, which automatically installs the SDK and is pre-configured for ML workflows. See [Create and manage an Azure Machine Learning compute instance](../how-to-create-compute-instance.md) for more information.
- * [Install the `automl` package yourself](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/README.md#setup-using-a-local-conda-environment), which includes the [default installation](/python/api/overview/azure/ml/install#default-install) of the SDK.
+ * [Install the `automl` package yourself](https://github.com/Azure/azureml-examples/blob/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/README.md#setup-using-a-local-conda-environment), which includes the [default installation](/python/api/overview/azure/ml/install#default-install) of the SDK.
[!INCLUDE [automl-sdk-version](../includes/machine-learning-automl-sdk-version.md)]
Doing so schedules distributed training of the NLP models and automatically sca
## Example notebooks
See the sample notebooks for detailed code examples for each NLP task.
-* [Multi-class text classification](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/automl-nlp-multiclass/automl-nlp-text-classification-multiclass.ipynb)
+* [Multi-class text classification](https://github.com/Azure/azureml-examples/blob/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/automl-nlp-multiclass/automl-nlp-text-classification-multiclass.ipynb)
* [Multi-label text classification](
-https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/automl-nlp-multilabel/automl-nlp-text-classification-multilabel.ipynb)
-* [Named entity recognition](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/automl-nlp-ner/automl-nlp-ner.ipynb)
+https://github.com/Azure/azureml-examples/blob/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/automl-nlp-multilabel/automl-nlp-text-classification-multilabel.ipynb)
+* [Named entity recognition](https://github.com/Azure/azureml-examples/blob/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/automl-nlp-ner/automl-nlp-ner.ipynb)
## Next steps
+ Learn more about [how and where to deploy a model](../how-to-deploy-online-endpoints.md).
machine-learning How To Configure Auto Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-configure-auto-features.md
In order to invoke BERT, set `enable_dnn: True` in your automl_settings and use
Automated ML takes the following steps for BERT.
-1. **Preprocessing and tokenization of all text columns**. For example, the "StringCast" transformer can be found in the final model's featurization summary. An example of how to produce the model's featurization summary can be found in [this notebook](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/classification-text-dnn/auto-ml-classification-text-dnn.ipynb).
+1. **Preprocessing and tokenization of all text columns**. For example, the "StringCast" transformer can be found in the final model's featurization summary. An example of how to produce the model's featurization summary can be found in [this notebook](https://github.com/Azure/azureml-examples/blob/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/classification-text-dnn/auto-ml-classification-text-dnn.ipynb).
2. **Concatenate all text columns into a single text column**, hence the `StringConcatTransformer` in the final model.
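As a minimal sketch of turning this on (names are assumptions; `compute_target` would be a GPU cluster and `train_data` a dataset with free-text columns):

```python
from azureml.train.automl import AutoMLConfig

automl_config = AutoMLConfig(
    task="classification",
    training_data=train_data,
    label_column_name="label",      # hypothetical label column
    compute_target=compute_target,  # assumed GPU compute target
    enable_dnn=True,                # enables DNN-based text featurization such as BERT
)
```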
machine-learning How To Configure Auto Train https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-configure-auto-train.md
For this article you need,
To install the SDK you can either,
* Create a compute instance, which automatically installs the SDK and is preconfigured for ML workflows. See [Create and manage an Azure Machine Learning compute instance](../how-to-create-compute-instance.md) for more information.
- * [Install the `automl` package yourself](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/README.md#setup-using-a-local-conda-environment), which includes the [default installation](/python/api/overview/azure/ml/install#default-install) of the SDK.
+ * [Install the `automl` package yourself](https://github.com/Azure/azureml-examples/blob/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/README.md#setup-using-a-local-conda-environment), which includes the [default installation](/python/api/overview/azure/ml/install#default-install) of the SDK.
[!INCLUDE [automl-sdk-version](../includes/machine-learning-automl-sdk-version.md)]
Use&nbsp;data&nbsp;streaming&nbsp;algorithms <br> [(studio UI experiments)](../h
Next, determine where the model will be trained. An automated ML training experiment can run on the following compute options.
- * **Choose a local compute**: If your scenario is about initial explorations or demos using small data and short trains (i.e. seconds or a couple of minutes per child run), training on your local computer might be a better choice. There is no setup time, the infrastructure resources (your PC or VM) are directly available. See [this notebook](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/local-run-classification-credit-card-fraud/auto-ml-classification-credit-card-fraud-local.ipynb) for a local compute example.
+ * **Choose a local compute**: If your scenario is about initial explorations or demos using small data and short trains (that is, seconds or a couple of minutes per child run), training on your local computer might be a better choice. There's no setup time; the infrastructure resources (your PC or VM) are directly available. See [this notebook](https://github.com/Azure/azureml-examples/blob/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/local-run-classification-credit-card-fraud/auto-ml-classification-credit-card-fraud-local.ipynb) for a local compute example.
 * **Choose a remote ML compute cluster**: If you're training with larger datasets, as in production training that creates models needing longer runs, remote compute provides much better end-to-end performance because `AutoML` parallelizes training across the cluster's nodes. On a remote compute, the start-up time for the internal infrastructure adds around 1.5 minutes per child run, plus additional minutes for the cluster infrastructure if the VMs aren't yet up and running. [Azure Machine Learning Managed Compute](../concept-compute-target.md#azure-machine-learning-compute-managed) is a managed service that enables training machine learning models on clusters of Azure virtual machines. Compute instance is also supported as a compute target. A minimal cluster-provisioning sketch follows this list.
- * An **Azure Databricks cluster** in your Azure subscription. You can find more details in [Set up an Azure Databricks cluster for automated ML](how-to-configure-databricks-automl-environment.md). See this [GitHub site](https://github.com/Azure/azureml-examples/tree/main/v1/python-sdk/tutorials/automl-with-databricks) for examples of notebooks with Azure Databricks.
+ * An **Azure Databricks cluster** in your Azure subscription. You can find more details in [Set up an Azure Databricks cluster for automated ML](how-to-configure-databricks-automl-environment.md). See this [GitHub site](https://github.com/Azure/azureml-examples/tree/v1-archive/v1/python-sdk/tutorials/automl-with-databricks) for examples of notebooks with Azure Databricks.
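As a sketch of provisioning a remote cluster for the second option (the workspace `ws`, cluster name, and VM size are assumptions):

```python
from azureml.core.compute import AmlCompute, ComputeTarget

provisioning_config = AmlCompute.provisioning_configuration(
    vm_size="STANDARD_DS3_V2",  # assumed SKU
    max_nodes=4,
)
compute_target = ComputeTarget.create(ws, "cpu-cluster", provisioning_config)
compute_target.wait_for_completion(show_output=True)
```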
Consider these factors when choosing your compute target:
machine-learning How To Configure Databricks Automl Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-configure-databricks-automl-environment.md
In AutoML config, when using Azure Databricks, add the following parameters:
## ML notebooks that work with Azure Databricks
Try it out:
-+ While many sample notebooks are available, **only [these sample notebooks](https://github.com/Azure/azureml-examples/tree/main/v1/python-sdk/tutorials/automl-with-databricks) work with Azure Databricks.**
++ While many sample notebooks are available, **only [these sample notebooks](https://github.com/Azure/azureml-examples/tree/v1-archive/v1/python-sdk/tutorials/automl-with-databricks) work with Azure Databricks.**
+ Import these samples directly from your workspace. See below:
![Select Import](../media/how-to-configure-environment/azure-db-screenshot.png)
machine-learning How To Deploy And Where https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-deploy-and-where.md
For more information on `az ml model register`, see the [reference documentation
You can register a model by providing the local path of the model. You can provide the path of either a folder or a single file on your local machine. <!-- python nb call -->
-[!Notebook-python[] (~/azureml-examples-main/v1/python-sdk/tutorials/deploy-local/1.deploy-local.ipynb?name=register-model-from-local-file-code)]
+[!Notebook-python[] (~/azureml-examples-archive/v1/python-sdk/tutorials/deploy-local/1.deploy-local.ipynb?name=register-model-from-local-file-code)]
To include multiple files in the model registration, set `model_path` to the path of a folder that contains the files.
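For example, a minimal sketch of registering a folder of files as a single model (the workspace `ws`, model name, and folder path are assumptions):

```python
from azureml.core.model import Model

model = Model.register(
    workspace=ws,
    model_name="mymodel",
    model_path="models/",  # registers every file under this folder
)
```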
The two things you need to accomplish in your entry script are:
For your initial deployment, use a dummy entry script that prints the data it receives. Save this file as `echo_score.py` inside a directory called `source_dir`. This dummy script returns the data you send to it, so it doesn't use the model. But it's useful for testing that the scoring script is running.
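A minimal sketch of such an echo script (the tutorial's actual file may differ slightly):

```python
import json

def init():
    print("This is init")

def run(data):
    test = json.loads(data)
    print(f"received data {test}")
    return f"test is {test}"
```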
You can use any [Azure Machine Learning inference curated environments](../conce
A minimal inference configuration can be written as:
Save this file with the name `dummyinferenceconfig.json`.
The following example demonstrates how to create a minimal environment with no pip dependencies, using the dummy scoring script you defined above.
-[!Notebook-python[] (~/azureml-examples-main/v1/python-sdk/tutorials/deploy-local/1.deploy-local.ipynb?name=inference-configuration-code)]
+[!Notebook-python[] (~/azureml-examples-archive/v1/python-sdk/tutorials/deploy-local/1.deploy-local.ipynb?name=inference-configuration-code)]
For more information on environments, see [Create and manage environments for training and deployment](../how-to-use-environments.md).
For more information, see the [deployment schema](reference-azure-machine-learni
The following Python code demonstrates how to create a local deployment configuration:
-[!Notebook-python[] (~/azureml-examples-main/v1/python-sdk/tutorials/deploy-local/1.deploy-local.ipynb?name=deployment-configuration-code)]
+[!Notebook-python[] (~/azureml-examples-archive/v1/python-sdk/tutorials/deploy-local/1.deploy-local.ipynb?name=deployment-configuration-code)]
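For reference, a hedged sketch of what such a configuration looks like in code; the port number is an arbitrary choice:

```python
from azureml.core.webservice import LocalWebservice

deployment_config = LocalWebservice.deploy_configuration(port=6789)
```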
az ml model deploy -n myservice \
# [Python SDK](#tab/python)
-[!Notebook-python[] (~/azureml-examples-main/v1/python-sdk/tutorials/deploy-local/1.deploy-local.ipynb?name=deploy-model-code)]
+[!Notebook-python[] (~/azureml-examples-archive/v1/python-sdk/tutorials/deploy-local/1.deploy-local.ipynb?name=deploy-model-code)]
-[!Notebook-python[] (~/azureml-examples-main/v1/python-sdk/tutorials/deploy-local/1.deploy-local.ipynb?name=deploy-model-print-logs)]
+[!Notebook-python[] (~/azureml-examples-archive/v1/python-sdk/tutorials/deploy-local/1.deploy-local.ipynb?name=deploy-model-print-logs)]
For more information, see the documentation for [Model.deploy()](/python/api/azureml-core/azureml.core.model.model#deploy-workspace--name--models--inference-config-none--deployment-config-none--deployment-target-none--overwrite-false-) and [Webservice](/python/api/azureml-core/azureml.core.webservice.webservice).
curl -v -X POST -H "content-type:application/json" \
# [Python SDK](#tab/python) <!-- python nb call -->
-[!Notebook-python[] (~/azureml-examples-main/v1/python-sdk/tutorials/deploy-local/1.deploy-local.ipynb?name=call-into-model-code)]
+[!Notebook-python[] (~/azureml-examples-archive/v1/python-sdk/tutorials/deploy-local/1.deploy-local.ipynb?name=call-into-model-code)]
curl -v -X POST -H "content-type:application/json" \
Now it's time to actually load your model. First, modify your entry script: Save this file as `score.py` inside of `source_dir`.
Notice the use of the `AZUREML_MODEL_DIR` environment variable to locate your re
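A hedged sketch of such an entry script, assuming a scikit-learn model serialized with joblib (the model file name and payload shape are hypothetical):

```python
import json
import os

import joblib

def init():
    global model
    # AZUREML_MODEL_DIR points at the folder where registered model files are placed.
    model_path = os.path.join(os.getenv("AZUREML_MODEL_DIR"), "sklearn_model.joblib")
    model = joblib.load(model_path)

def run(data):
    sample = json.loads(data)["data"]  # assumed request payload shape
    return model.predict(sample).tolist()
```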
[!INCLUDE [cli v1](../includes/machine-learning-cli-v1.md)] Save this file as `inferenceconfig.json`
az ml model deploy -n myservice \
# [Python SDK](#tab/python)
-[!Notebook-python[] (~/azureml-examples-main/v1/python-sdk/tutorials/deploy-local/1.deploy-local.ipynb?name=re-deploy-model-code)]
+[!Notebook-python[] (~/azureml-examples-archive/v1/python-sdk/tutorials/deploy-local/1.deploy-local.ipynb?name=re-deploy-model-code)]
-[!Notebook-python[] (~/azureml-examples-main/v1/python-sdk/tutorials/deploy-local/1.deploy-local.ipynb?name=re-deploy-model-print-logs)]
+[!Notebook-python[] (~/azureml-examples-archive/v1/python-sdk/tutorials/deploy-local/1.deploy-local.ipynb?name=re-deploy-model-print-logs)]
For more information, see the documentation for [Model.deploy()](/python/api/azureml-core/azureml.core.model.model#deploy-workspace--name--models--inference-config-none--deployment-config-none--deployment-target-none--overwrite-false-) and [Webservice](/python/api/azureml-core/azureml.core.webservice.webservice).
curl -v -X POST -H "content-type:application/json" \
# [Python SDK](#tab/python)
-[!Notebook-python[] (~/azureml-examples-main/v1/python-sdk/tutorials/deploy-local/1.deploy-local.ipynb?name=send-post-request-code)]
+[!Notebook-python[] (~/azureml-examples-archive/v1/python-sdk/tutorials/deploy-local/1.deploy-local.ipynb?name=send-post-request-code)]
Change your deploy configuration to correspond to the compute target you've chos
The options available for a deployment configuration differ depending on the compute target you choose. Save this file as `re-deploymentconfig.json`.
For more information, see [this reference](reference-azure-machine-learning-cli.
# [Python SDK](#tab/python)
-[!Notebook-python[] (~/azureml-examples-main/v1/python-sdk/tutorials/deploy-local/1.deploy-local.ipynb?name=deploy-model-on-cloud-code)]
+[!Notebook-python[] (~/azureml-examples-archive/v1/python-sdk/tutorials/deploy-local/1.deploy-local.ipynb?name=deploy-model-on-cloud-code)]
az ml service get-logs -n myservice \
# [Python SDK](#tab/python)
-[!Notebook-python[] (~/azureml-examples-main/v1/python-sdk/tutorials/deploy-local/1.deploy-local.ipynb?name=re-deploy-service-code)]
+[!Notebook-python[] (~/azureml-examples-archive/v1/python-sdk/tutorials/deploy-local/1.deploy-local.ipynb?name=re-deploy-service-code)]
-[!Notebook-python[] (~/azureml-examples-main/v1/python-sdk/tutorials/deploy-local/1.deploy-local.ipynb?name=re-deploy-service-print-logs)]
+[!Notebook-python[] (~/azureml-examples-archive/v1/python-sdk/tutorials/deploy-local/1.deploy-local.ipynb?name=re-deploy-service-print-logs)]
For more information, see the documentation for [Model.deploy()](/python/api/azureml-core/azureml.core.model.model#deploy-workspace--name--models--inference-config-none--deployment-config-none--deployment-target-none--overwrite-false-) and [Webservice](/python/api/azureml-core/azureml.core.webservice.webservice).
For more information, see the documentation for [Model.deploy()](/python/api/azu
When you deploy remotely, you may have key authentication enabled. The example below shows how to get your service key with Python in order to make an inference request.
-[!Notebook-python[] (~/azureml-examples-main/v1/python-sdk/tutorials/deploy-local/1.deploy-local.ipynb?name=call-remote-web-service-code)]
+[!Notebook-python[] (~/azureml-examples-archive/v1/python-sdk/tutorials/deploy-local/1.deploy-local.ipynb?name=call-remote-web-service-code)]
-[!Notebook-python[] (~/azureml-examples-main/v1/python-sdk/tutorials/deploy-local/1.deploy-local.ipynb?name=call-remote-webservice-print-logs)]
+[!Notebook-python[] (~/azureml-examples-archive/v1/python-sdk/tutorials/deploy-local/1.deploy-local.ipynb?name=call-remote-webservice-print-logs)]
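A minimal sketch of that pattern (`service` is the deployed `Webservice`; the payload shape is hypothetical):

```python
import json

import requests

primary_key, _ = service.get_keys()  # returns the primary and secondary keys
headers = {
    "Content-Type": "application/json",
    "Authorization": f"Bearer {primary_key}",
}
payload = json.dumps({"data": [[1, 2, 3, 4]]})
response = requests.post(service.scoring_uri, data=payload, headers=headers)
print(response.json())
```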
The following table describes the different service states:
[!INCLUDE [cli v1](../includes/machine-learning-cli-v1.md)]
-[!Notebook-python[] (~/azureml-examples-main/v1/python-sdk/tutorials/deploy-local/2.deploy-local-cli.ipynb?name=delete-resource-code)]
+[!Notebook-python[] (~/azureml-examples-archive/v1/python-sdk/tutorials/deploy-local/2.deploy-local-cli.ipynb?name=delete-resource-code)]
```azurecli-interactive
az ml service delete -n myservice
```
Read more about [deleting a webservice](/cli/azure/ml(v1)/computetarget/create#a
# [Python SDK](#tab/python)
-[!Notebook-python[] (~/azureml-examples-main/v1/python-sdk/tutorials/deploy-local/1.deploy-local.ipynb?name=delete-resource-code)]
+[!Notebook-python[] (~/azureml-examples-archive/v1/python-sdk/tutorials/deploy-local/1.deploy-local.ipynb?name=delete-resource-code)]
To delete a deployed web service, use `service.delete()`. To delete a registered model, use `model.delete()`.
machine-learning How To Deploy Fpga Web Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-deploy-fpga-web-service.md
Before you can deploy to FPGAs, convert the model to the [ONNX](https://onnx.ai/
### Containerize and deploy the model
-Next, create a Docker image from the converted model and all dependencies. This Docker image can then be deployed and instantiated. Supported deployment targets include Azure Kubernetes Service (AKS) in the cloud or an edge device such as [Azure Azure Stack Edge](../../databox-online/azure-stack-edge-overview.md). You can also add tags and descriptions for your registered Docker image.
+Next, create a Docker image from the converted model and all dependencies. This Docker image can then be deployed and instantiated. Supported deployment targets include Azure Kubernetes Service (AKS) in the cloud or an edge device such as [Azure Stack Edge](../../databox-online/azure-stack-edge-overview.md). You can also add tags and descriptions for your registered Docker image.
```python
from azureml.core.image import Image
```
Next, create a Docker image from the converted model and all dependencies. This
#### Deploy to a local edge server
-All [Azure Azure Stack Edge devices](../../databox-online/azure-stack-edge-overview.md) contain an FPGA for running the model. Only one model can be running on the FPGA at one time. To run a different model, just deploy a new container. Instructions and sample code can be found in [this Azure Sample](https://github.com/Azure-Samples/aml-hardware-accelerated-models).
+All [Azure Stack Edge devices](../../databox-online/azure-stack-edge-overview.md) contain an FPGA for running the model. Only one model can be running on the FPGA at one time. To run a different model, just deploy a new container. Instructions and sample code can be found in [this Azure Sample](https://github.com/Azure-Samples/aml-hardware-accelerated-models).
### Consume the deployed model
machine-learning How To High Availability Machine Learning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-high-availability-machine-learning.md
The following artifacts can be exported and imported between workspaces by using
### Workspace deletion
-If you accidentally deleted your workspace, you might able to recover it. For recovery steps, see [Recover workspace data after accidental deletion with soft delete (Preview)](../concept-soft-delete.md).
+If you accidentally deleted your workspace, you might be able to recover it. For recovery steps, see [Recover workspace data after accidental deletion with soft delete](../concept-soft-delete.md).
Even if your workspace cannot be recovered, you may still be able to retrieve your notebooks from the workspace-associated Azure storage resource by following these steps:
* In the [Azure portal](https://portal.azure.com) navigate to the storage account that was linked to the deleted Azure Machine Learning workspace.
Even if your workspace cannot be recovered, you may still be able to retrieve yo
## Next steps
To learn about repeatable infrastructure deployments with Azure Machine Learning, use an [Azure Resource Manager template](../tutorial-create-secure-workspace-template.md).
machine-learning How To Inference Onnx Automl Image Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-inference-onnx-automl-image-models.md
arguments = ['--model_name', 'maskrcnn_resnet50_fpn', # enter the maskrcnn mode
-Download and keep the `ONNX_batch_model_generator_automl_for_images.py` file in the current directory and submit the script. Use [ScriptRunConfig](/python/api/azureml-core/azureml.core.scriptrunconfig) to submit the script `ONNX_batch_model_generator_automl_for_images.py` available in the [azureml-examples GitHub repository](https://github.com/Azure/azureml-examples/tree/main/v1/python-sdk/tutorials/automl-with-azureml), to generate an ONNX model of a specific batch size. In the following code, the trained model environment is used to submit this script to generate and save the ONNX model to the outputs directory.
+Download the `ONNX_batch_model_generator_automl_for_images.py` file and keep it in the current directory. Use [ScriptRunConfig](/python/api/azureml-core/azureml.core.scriptrunconfig) to submit this script, available in the [azureml-examples GitHub repository](https://github.com/Azure/azureml-examples/tree/v1-archive/v1/python-sdk/tutorials/automl-with-azureml), to generate an ONNX model of a specific batch size. In the following code, the trained model environment is used to submit the script, which generates and saves the ONNX model to the outputs directory.
```python
script_run_config = ScriptRunConfig(source_directory='.',
                                    script='ONNX_batch_model_generator_automl_for_images.py',
Every ONNX model has a predefined set of input and output formats.
# [Multi-class image classification](#tab/multi-class)
-This example applies the model trained on the [fridgeObjects](https://cvbp-secondary.z19.web.core.windows.net/datasets/image_classification/fridgeObjects.zip) dataset with 134 images and 4 classes/labels to explain ONNX model inference. For more information on training an image classification task, see the [multi-class image classification notebook](https://github.com/Azure/azureml-examples/tree/main/v1/python-sdk/tutorials/automl-with-azureml/image-classification-multiclass).
+This example applies the model trained on the [fridgeObjects](https://cvbp-secondary.z19.web.core.windows.net/datasets/image_classification/fridgeObjects.zip) dataset with 134 images and 4 classes/labels to explain ONNX model inference. For more information on training an image classification task, see the [multi-class image classification notebook](https://github.com/Azure/azureml-examples/tree/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/image-classification-multiclass).
### Input format
The output is an array of logits for all the classes/labels.
# [Multi-label image classification](#tab/multi-label)
-This example uses the model trained on the [multi-label fridgeObjects dataset](https://cvbp-secondary.z19.web.core.windows.net/datasets/image_classification/multilabelFridgeObjects.zip) with 128 images and 4 classes/labels to explain ONNX model inference. For more information on model training for multi-label image classification, see the [multi-label image classification notebook](https://github.com/Azure/azureml-examples/tree/main/v1/python-sdk/tutorials/automl-with-azureml/image-classification-multilabel).
+This example uses the model trained on the [multi-label fridgeObjects dataset](https://cvbp-secondary.z19.web.core.windows.net/datasets/image_classification/multilabelFridgeObjects.zip) with 128 images and 4 classes/labels to explain ONNX model inference. For more information on model training for multi-label image classification, see the [multi-label image classification notebook](https://github.com/Azure/azureml-examples/tree/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/image-classification-multilabel).
### Input format
The output is an array of logits for all the classes/labels.
# [Object detection with Faster R-CNN or RetinaNet](#tab/object-detect-cnn)
-This object detection example uses the model trained on the [fridgeObjects detection dataset](https://cvbp-secondary.z19.web.core.windows.net/datasets/object_detection/odFridgeObjects.zip) of 128 images and 4 classes/labels to explain ONNX model inference. This example trains Faster R-CNN models to demonstrate inference steps. For more information on training object detection models, see the [object detection notebook](https://github.com/Azure/azureml-examples/tree/main/v1/python-sdk/tutorials/automl-with-azureml/image-object-detection).
+This object detection example uses the model trained on the [fridgeObjects detection dataset](https://cvbp-secondary.z19.web.core.windows.net/datasets/object_detection/odFridgeObjects.zip) of 128 images and 4 classes/labels to explain ONNX model inference. This example trains Faster R-CNN models to demonstrate inference steps. For more information on training object detection models, see the [object detection notebook](https://github.com/Azure/azureml-examples/tree/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/image-object-detection).
### Input format
The following table describes boxes, labels and scores returned for each sample
# [Object detection with YOLO](#tab/object-detect-yolo)
-This object detection example uses the model trained on the [fridgeObjects detection dataset](https://cvbp-secondary.z19.web.core.windows.net/datasets/object_detection/odFridgeObjects.zip) of 128 images and 4 classes/labels to explain ONNX model inference. This example trains YOLO models to demonstrate inference steps. For more information on training object detection models, see the [object detection notebook](https://github.com/Azure/azureml-examples/tree/main/v1/python-sdk/tutorials/automl-with-azureml/image-object-detection).
+This object detection example uses the model trained on the [fridgeObjects detection dataset](https://cvbp-secondary.z19.web.core.windows.net/datasets/object_detection/odFridgeObjects.zip) of 128 images and 4 classes/labels to explain ONNX model inference. This example trains YOLO models to demonstrate inference steps. For more information on training object detection models, see the [object detection notebook](https://github.com/Azure/azureml-examples/tree/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/image-object-detection).
### Input format
Each cell in the list indicates box detections of a sample with shape `(n_boxes,
# [Instance segmentation](#tab/instance-segmentation)
-For this instance segmentation example, you use the Mask R-CNN model that has been trained on the [fridgeObjects dataset](https://cvbp-secondary.z19.web.core.windows.net/datasets/object_detection/odFridgeObjectsMask.zip) with 128 images and 4 classes/labels to explain ONNX model inference. For more information on training of the instance segmentation model, see the [instance segmentation notebook](https://github.com/Azure/azureml-examples/tree/main/v1/python-sdk/tutorials/automl-with-azureml/image-instance-segmentation).
+For this instance segmentation example, you use the Mask R-CNN model that has been trained on the [fridgeObjects dataset](https://cvbp-secondary.z19.web.core.windows.net/datasets/object_detection/odFridgeObjectsMask.zip) with 128 images and 4 classes/labels to explain ONNX model inference. For more information on training of the instance segmentation model, see the [instance segmentation notebook](https://github.com/Azure/azureml-examples/tree/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/image-instance-segmentation).
> [!IMPORTANT]
> Only Mask R-CNN is supported for instance segmentation tasks. The input and output formats are based on Mask R-CNN only.
batch, channel, height_onnx, width_onnx = session.get_inputs()[0].shape
batch, channel, height_onnx, width_onnx
```
-For preprocessing required for YOLO, refer to [yolo_onnx_preprocessing_utils.py](https://github.com/Azure/azureml-examples/tree/main/v1/python-sdk/tutorials/automl-with-azureml/image-object-detection).
+For preprocessing required for YOLO, refer to [yolo_onnx_preprocessing_utils.py](https://github.com/Azure/azureml-examples/tree/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/image-object-detection).
```python
import glob
```
machine-learning How To Prepare Datasets For Automl Images https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-prepare-datasets-for-automl-images.md
If you already have a data labeling project and you want to use that data, you c
## Use conversion scripts
-If you have labeled data in popular computer vision data formats, like VOC or COCO, [helper scripts](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/image-object-detection/coco2jsonl.py) to generate JSONL files for training and validation data are available in [notebook examples](https://github.com/Azure/azureml-examples/tree/main/v1/python-sdk/tutorials/automl-with-azureml).
+If you have labeled data in popular computer vision data formats, like VOC or COCO, [helper scripts](https://github.com/Azure/azureml-examples/blob/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/image-object-detection/coco2jsonl.py) to generate JSONL files for training and validation data are available in [notebook examples](https://github.com/Azure/azureml-examples/tree/v1-archive/v1/python-sdk/tutorials/automl-with-azureml).
If your data doesn't follow any of the previously mentioned formats, you can use your own script to generate JSON Lines files based on schemas defined in [Schema for JSONL files for AutoML image experiments](../reference-automl-images-schema.md).
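For instance, a minimal sketch of writing classification annotations in JSONL by hand; the datastore path and labels are hypothetical, and the exact field set depends on your task's schema:

```python
import json

records = [
    {"image_url": "AmlDatastore://workspaceblobstore/data/img_1.jpg", "label": "milk_bottle"},
    {"image_url": "AmlDatastore://workspaceblobstore/data/img_2.jpg", "label": "carton"},
]
with open("train_annotations.jsonl", "w") as f:
    for record in records:
        f.write(json.dumps(record) + "\n")
```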
machine-learning How To Train Distributed Gpu https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-train-distributed-gpu.md
Make sure your code follows these tips:
### Horovod example
-* [azureml-examples: TensorFlow distributed training using Horovod](https://github.com/Azure/azureml-examples/tree/main/v1/python-sdk/workflows/train/tensorflow/mnist-distributed-horovod)
+* [azureml-examples: TensorFlow distributed training using Horovod](https://github.com/Azure/azureml-examples/tree/v1-archive/v1/python-sdk/workflows/train/tensorflow/mnist-distributed-horovod)
### DeepSpeed
Make sure your code follows these tips:
### DeepSpeed example
-* [azureml-examples: Distributed training with DeepSpeed on CIFAR-10](https://github.com/Azure/azureml-examples/tree/main/v1/python-sdk/workflows/train/deepspeed/cifar)
+* [azureml-examples: Distributed training with DeepSpeed on CIFAR-10](https://github.com/Azure/azureml-examples/tree/v1-archive/v1/python-sdk/workflows/train/deepspeed/cifar)
### Environment variables from Open MPI
run = Experiment(ws, 'experiment_name').submit(run_config)
### PyTorch per-process-launch example
-- [azureml-examples: Distributed training with PyTorch on CIFAR-10](https://github.com/Azure/azureml-examples/tree/main/v1/python-sdk/workflows/train/pytorch/cifar-distributed)
+- [azureml-examples: Distributed training with PyTorch on CIFAR-10](https://github.com/Azure/azureml-examples/tree/v1-archive/v1/python-sdk/workflows/train/pytorch/cifar-distributed)
### <a name="per-node-launch"></a> Using torch.distributed.launch (per-node-launch)
run = Experiment(ws, 'experiment_name').submit(run_config)
### PyTorch per-node-launch example
-- [azureml-examples: Distributed training with PyTorch on CIFAR-10](https://github.com/Azure/azureml-examples/tree/main/v1/python-sdk/workflows/train/pytorch/cifar-distributed)
+- [azureml-examples: Distributed training with PyTorch on CIFAR-10](https://github.com/Azure/azureml-examples/tree/v1-archive/v1/python-sdk/workflows/train/pytorch/cifar-distributed)
### PyTorch Lightning
TF_CONFIG='{
### TensorFlow example
-- [azureml-examples: Distributed TensorFlow training with MultiWorkerMirroredStrategy](https://github.com/Azure/azureml-examples/tree/main/v1/python-sdk/workflows/train/tensorflow/mnist-distributed)
+- [azureml-examples: Distributed TensorFlow training with MultiWorkerMirroredStrategy](https://github.com/Azure/azureml-examples/tree/v1-archive/v1/python-sdk/workflows/train/tensorflow/mnist-distributed)
## <a name="infiniband"></a> Accelerating distributed GPU training with InfiniBand
machine-learning How To Train Pytorch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-train-pytorch.md
ws = Workspace.from_config()
### Get the data
-The dataset consists of about 120 training images each for turkeys and chickens, with 100 validation images for each class. We'll download and extract the dataset as part of our training script `pytorch_train.py`. The images are a subset of the [Open Images v5 Dataset](https://storage.googleapis.com/openimages/web/index.html). For more steps on creating a JSONL to train with your own data, see this [Jupyter notebook](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/image-classification-multiclass/auto-ml-image-classification-multiclass.ipynb).
+The dataset consists of about 120 training images each for turkeys and chickens, with 100 validation images for each class. We'll download and extract the dataset as part of our training script `pytorch_train.py`. The images are a subset of the [Open Images v5 Dataset](https://storage.googleapis.com/openimages/web/index.html). For more steps on creating a JSONL to train with your own data, see this [Jupyter notebook](https://github.com/Azure/azureml-examples/blob/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/image-classification-multiclass/auto-ml-image-classification-multiclass.ipynb).
### Prepare training script
machine-learning How To Train With Custom Image https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-train-with-custom-image.md
print(compute_target.get_status().serialize())
## Configure your training job
-For this tutorial, use the training script *train.py* on [GitHub](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/workflows/train/fastai/pets/src/train.py). In practice, you can take any custom training script and run it, as is, with Azure Machine Learning.
+For this tutorial, use the training script *train.py* on [GitHub](https://github.com/Azure/azureml-examples/blob/v1-archive/v1/python-sdk/workflows/train/fastai/pets/src/train.py). In practice, you can take any custom training script and run it, as is, with Azure Machine Learning.
Create a `ScriptRunConfig` resource to configure your job for running on the desired [compute target](how-to-set-up-training-targets.md).
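A rough sketch of that configuration and submission (assuming `fastai_env` is the custom-image `Environment` and `compute_target` the cluster created earlier):

```python
from azureml.core import Experiment, ScriptRunConfig

src = ScriptRunConfig(
    source_directory="fastai-example",  # assumed folder containing train.py
    script="train.py",
    compute_target=compute_target,
    environment=fastai_env,
)
run = Experiment(ws, "fastai-custom-image").submit(src)
run.wait_for_completion(show_output=True)
```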
machine-learning How To Trigger Published Pipeline https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-trigger-published-pipeline.md
published_pipeline = PublishedPipeline.get(ws, id="<pipeline-id-here>")
published_pipeline.endpoint
```
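Before wiring up the logic app, you can smoke-test the endpoint directly; a hedged sketch using interactive authentication (the experiment name is hypothetical):

```python
import requests
from azureml.core.authentication import InteractiveLoginAuthentication

auth = InteractiveLoginAuthentication()
headers = auth.get_authentication_header()

response = requests.post(
    published_pipeline.endpoint,
    headers=headers,
    json={"ExperimentName": "endpoint-smoke-test"},
)
print(response.json().get("Id"))  # ID of the submitted pipeline run
```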
-## Create a Logic App
+## Create a logic app in Azure
-Now create an [Azure Logic App](../../logic-apps/logic-apps-overview.md) instance. If you wish, [use an integration service environment (ISE)](../../logic-apps/connect-virtual-network-vnet-isolated-environment.md) and [set up a customer-managed key](../../logic-apps/customer-managed-keys-integration-service-environment.md) for use by your Logic App.
-
-Once your Logic App has been provisioned, use these steps to configure a trigger for your pipeline:
+Now create an [Azure Logic App](../../logic-apps/logic-apps-overview.md) instance. After your logic app is provisioned, use these steps to configure a trigger for your pipeline:
1. [Create a system-assigned managed identity](../../logic-apps/create-managed-service-identity.md) to give the app access to your Azure Machine Learning Workspace.
machine-learning How To Use Automl Small Object Detect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-use-automl-small-object-detect.md
The following are the parameters you can use to control the tiling feature.
## Example notebooks
-See the [object detection sample notebook](https://github.com/Azure/azureml-examples/tree/main/v1/python-sdk/tutorials/automl-with-azureml/image-object-detection/auto-ml-image-object-detection.ipynb) for detailed code examples of setting up and training an object detection model.
+See the [object detection sample notebook](https://github.com/Azure/azureml-examples/tree/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/image-object-detection/auto-ml-image-object-detection.ipynb) for detailed code examples of setting up and training an object detection model.
> [!NOTE]
> All images in this article are made available in accordance with the permitted use section of the [MIT licensing agreement](https://choosealicense.com/licenses/mit/).
machine-learning Tutorial Auto Train Image Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/tutorial-auto-train-image-models.md
You'll write code using the Python SDK in this tutorial and learn the following
* Complete the [Quickstart: Get started with Azure Machine Learning](../quickstart-create-resources.md#create-the-workspace) if you don't already have an Azure Machine Learning workspace.
-* Download and unzip the [**odFridgeObjects.zip*](https://cvbp-secondary.z19.web.core.windows.net/datasets/object_detection/odFridgeObjects.zip) data file. The dataset is annotated in Pascal VOC format, where each image corresponds to an xml file. Each xml file contains information on where its corresponding image file is located and also contains information about the bounding boxes and the object labels. In order to use this data, you first need to convert it to the required JSONL format as seen in the [Convert the downloaded data to JSONL](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/image-object-detection/auto-ml-image-object-detection.ipynb) section of the notebook.
+* Download and unzip the [**odFridgeObjects.zip**](https://cvbp-secondary.z19.web.core.windows.net/datasets/object_detection/odFridgeObjects.zip) data file. The dataset is annotated in Pascal VOC format, where each image corresponds to an XML file. Each XML file contains information on where its corresponding image file is located and also contains information about the bounding boxes and the object labels. In order to use this data, you first need to convert it to the required JSONL format as seen in the [Convert the downloaded data to JSONL](https://github.com/Azure/azureml-examples/blob/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/image-object-detection/auto-ml-image-object-detection.ipynb) section of the notebook.
-This tutorial is also available in the [azureml-examples repository on GitHub](https://github.com/Azure/azureml-examples/tree/main/v1/python-sdk/tutorials/automl-with-azureml/image-object-detection) if you wish to run it in your own [local environment](how-to-configure-environment.md). To get the required packages,
+This tutorial is also available in the [azureml-examples repository on GitHub](https://github.com/Azure/azureml-examples/tree/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/image-object-detection) if you wish to run it in your own [local environment](how-to-configure-environment.md). To get the required packages,
* Run `pip install azureml`
-* [Install the full `automl` client](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/README.md#setup-using-a-local-conda-environment)
+* [Install the full `automl` client](https://github.com/Azure/azureml-examples/blob/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/README.md#setup-using-a-local-conda-environment)
## Compute target setup
In this automated machine learning tutorial, you did the following tasks:
* [Learn how to set up AutoML to train computer vision models with Python](../how-to-auto-train-image-models.md).
* [Learn how to configure incremental training on computer vision models](../how-to-auto-train-image-models.md#incremental-training-optional).
* See [what hyperparameters are available for computer vision tasks](../reference-automl-images-hyperparameters.md).
-* Review detailed code examples and use cases in the [GitHub notebook repository for automated machine learning samples](https://github.com/Azure/azureml-examples/tree/main/v1/python-sdk/tutorials/automl-with-azureml). Please check the folders with 'image-' prefix for samples specific to building computer vision models.
+* Review detailed code examples and use cases in the [GitHub notebook repository for automated machine learning samples](https://github.com/Azure/azureml-examples/tree/v1-archive/v1/python-sdk/tutorials/automl-with-azureml). Please check the folders with 'image-' prefix for samples specific to building computer vision models.
> [!NOTE]
> Use of the fridge objects dataset is available under the [MIT License](https://github.com/microsoft/computervision-recipes/blob/master/LICENSE).
machine-learning Tutorial Pipeline Python Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/tutorial-pipeline-python-sdk.md
The above code specifies a dataset that is based on the output of a pipeline ste
The code that you've executed so far has created and controlled Azure resources. Now it's time to write code that does the first step in the domain.
-If you're following along with the example in the [Azure Machine Learning Examples repo](https://github.com/Azure/azureml-examples/tree/main/v1/python-sdk/tutorials/using-pipelines), the source file is already available as `keras-mnist-fashion/prepare.py`.
+If you're following along with the example in the [Azure Machine Learning Examples repo](https://github.com/Azure/azureml-examples/tree/v1-archive/v1/python-sdk/tutorials/using-pipelines), the source file is already available as `keras-mnist-fashion/prepare.py`.
If you're working from scratch, create a subdirectory called `keras-mnist-fashion/`. Create a new file, add the following code to it, and name the file `prepare.py`.
Once the data has been converted from the compressed format to CSV files, it can
With larger pipelines, it's a good practice to put each step's source code in a separate directory (`src/prepare/`, `src/train/`, and so on), but for this tutorial, just use or create the file `train.py` in the same `keras-mnist-fashion/` source directory. Most of this code should be familiar to ML developers:
managed-grafana How To Create Api Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/how-to-create-api-keys.md
Last updated 11/17/2022
-# Generate and manage Grafana API keys in Azure Managed Grafana
+# Create and manage Grafana API keys in Azure Managed Grafana (Deprecated)
-> [!NOTE]
-> This document is deprecated as the API keys feature has been replaced by a new feature in Grafana 9.1. Go to [Service accounts](./how-to-service-accounts.md) to access the current recommended method to create and manage API keys.
-
-> [!TIP]
-> To switch to using service accounts, in Grafana instances created before the release of Grafana 9.1, go to **Configuration > API keys and select Migrate to service accounts now**. Select **Yes, migrate now**. Each existing API keys will be automatically migrated into a service account with a token. The service account will be created with the same permission as the API Key and current API keys will continue to work as before.
+> [!IMPORTANT]
+> This document is deprecated as the API keys feature has been replaced by [service accounts](./how-to-service-accounts.md) in Grafana 9.1. To switch to service accounts in Grafana instances created before the release of Grafana 9.1, go to **Configuration > API keys** and select **Migrate to service accounts now**, then select **Yes, migrate now**. Each existing API key is automatically migrated into a service account with a token. The service account is created with the same permissions as the API key, and current API keys continue to work as before.
In this guide, learn how to generate and manage API keys, and start making API calls to the Grafana server. Grafana API keys will enable you to create integrations between Azure Managed Grafana and other services.
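As a quick illustration, a hedged sketch of calling the Grafana HTTP API with such a key; the instance URL is an assumption, and `/api/org` is a standard Grafana route that returns the current organization:

```python
import requests

GRAFANA_URL = "https://my-instance-abcd.grafana.azure.com"  # assumed instance endpoint
API_KEY = "<grafana-api-key>"

response = requests.get(
    f"{GRAFANA_URL}/api/org",
    headers={"Authorization": f"Bearer {API_KEY}"},
)
print(response.status_code, response.json())
```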
managed-grafana How To Grafana Enterprise https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/how-to-grafana-enterprise.md
Title: Subscribe to Grafana Enterprise
-description: Activate Grafana Enterprise (preview) to access Grafana Enterprise plugins within Azure Managed Grafana
+description: Activate Grafana Enterprise to access Grafana Enterprise plugins within Azure Managed Grafana
Last updated 01/09/2023
-# Enable Grafana Enterprise (preview)
+# Enable Grafana Enterprise
-In this guide, learn how to activate the Grafana Enterprise (preview) add-on in Azure Managed Grafana, update your Grafana Enterprise plan, and access [Grafana Enterprise plugins](https://grafana.com/docs/plugins/).
+In this guide, learn how to activate the Grafana Enterprise add-on in Azure Managed Grafana, update your Grafana Enterprise plan, and access [Grafana Enterprise plugins](https://grafana.com/docs/plugins/).
The Grafana Enterprise plans offered through Azure Managed Grafana enable users to access Grafana Enterprise plugins to do more with Azure Managed Grafana.
You can enable access to Grafana Enterprise plugins by selecting a Grafana Enter
> [!NOTE] > The Grafana Enterprise monthly plan is a paid plan, owned and charged by Grafana Labs, through Azure Marketplace. Go to [Azure Managed Grafana pricing](https://azure.microsoft.com/pricing/details/managed-grafana/) for details.
-> [!IMPORTANT]
-> Grafana Enterprise is currently in preview within Azure Managed Grafana.
-> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
- ## Prerequisites - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free).
managed-grafana Troubleshoot Managed Grafana https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/troubleshoot-managed-grafana.md
This issue can happen if:
1. Your account is a foreign account: the Grafana instance isn't registered in your home tenant. 1. If you recently addressed this problem and have been assigned a sufficient Grafana role, you may need to wait for some time before the cookie expires and gets refreshed. This process normally takes 5 minutes. If in doubt, delete all cookies or start a private browser session to force a fresh cookie with new role information.
+## Authorized users don't show up in Grafana Users configuration
+
+After you add a user to one of Managed Grafana's built-in RBAC roles, such as Grafana Viewer, the user doesn't appear in Grafana's **Configuration** UI page right away. This behavior is *by design*. Managed Grafana's RBAC role assignments are stored in Azure Active Directory (Azure AD). For performance reasons, Managed Grafana doesn't automatically synchronize users assigned to the built-in roles to every instance: there's no notification for changes in RBAC assignments, and querying Azure AD periodically for current assignments would add significant load to the Azure AD service.
+
+There's no "fix" for this in itself. After a user signs into your Grafana instance, the user shows up in the **Users** tab under Grafana **Configuration**. You can see the corresponding role that user has been assigned to.
+ ## Azure Managed Grafana dashboard panel doesn't display any data One or several Managed Grafana dashboard panels show no data.
managed-instance-apache-cassandra Best Practice Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-instance-apache-cassandra/best-practice-performance.md
For more information refer to [Virtual Machine and disk performance](../virtual-
### Network performance
-In most cases network performance is sufficient. However, if you are frequently streaming data (such as frequent horizontal scale-up/scale down) or there are huge ingress/egress data movements, this can become a problem. You may need to evaluate the network performance of your SKU. For example, the `Standard_DS14_v2` SKU supports 12,000 Mb/s, compare this to the byte-in/out in the metrics:
+In most cases network performance is sufficient. However, if you're frequently streaming data (such as frequent horizontal scale-up/scale-down) or there are huge ingress/egress data movements, this can become a problem. You may need to evaluate the network performance of your SKU. For example, the `Standard_DS14_v2` SKU supports 12,000 Mb/s; compare this to the bytes in/out in the metrics:
:::image type="content" source="./media/best-practice-performance/metrics-network.png" alt-text="Screenshot of network metrics." lightbox="./media/best-practice-performance/metrics-network.png" border="true":::
If you only see the network elevated for a small number of nodes, you might have
### Too many connected clients
-Deployments should be planned and provisioned to support the maximum number of parallel requests required for the desired latency of an application. For a given deployment, introducing more load to the system above a minimum threshold increases overall latency. Monitor the number of connected clients to ensure this does not exceed tolerable limits.
+Deployments should be planned and provisioned to support the maximum number of parallel requests required for the desired latency of an application. For a given deployment, introducing more load to the system above a minimum threshold increases overall latency. Monitor the number of connected clients to ensure this doesn't exceed tolerable limits.
:::image type="content" source="./media/best-practice-performance/metrics-connections.png" alt-text="Screenshot of connected client metrics." lightbox="./media/best-practice-performance/metrics-connections.png" border="true"::: ### Disk space
-In most cases, there is sufficient disk space as default deployments are optimized for IOPS, which leads to low utilization of the disk. Nevertheless, we advise occasionally reviewing disk space metrics. Cassandra accumulates a lot of disk and then reduces it when compaction is triggered. Hence it is important to review disk usage over longer periods to establish trends - like compaction unable to recoup space.
+In most cases, there's sufficient disk space as default deployments are optimized for IOPS, which leads to low utilization of the disk. Nevertheless, we advise occasionally reviewing disk space metrics. Cassandra accumulates a lot of disk space and then reduces it when compaction is triggered. Hence it's important to review disk usage over longer periods to establish trends, such as compaction being unable to recoup space.
> [!NOTE] > In order to ensure available space for compaction, disk utilization should be kept to around 50%.
Our default formula assigns half the VM's memory to the JVM with an upper limit
In most cases, memory gets reclaimed effectively by the Java garbage collector, but if the CPU is often above 80%, there aren't enough CPU cycles left for the garbage collector. Any CPU performance problems should be addressed before memory problems.
-If the CPU hovers below 70%, and the garbage collection isn't able to reclaim memory, you might need more JVM memory. This is especially the case if you are on a SKU with limited memory. In most cases, you will need to review your queries and client settings and reduce `fetch_size` along with what is chosen in `limit` within your CQL query.
+If the CPU hovers below 70%, and the garbage collection isn't able to reclaim memory, you might need more JVM memory. This is especially the case if you're on a SKU with limited memory. In most cases, you'll need to review your queries and client settings and reduce `fetch_size`, along with the `limit` chosen in your CQL query.
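For example, with the DataStax Python driver the page size can be lowered per statement; the sketch below assumes a placeholder contact point and table, and uses 100 rows per page instead of the driver default of 5,000.

```python
from cassandra.cluster import Cluster
from cassandra.query import SimpleStatement

cluster = Cluster(["<node-address>"])  # placeholder; see the connection samples
session = cluster.connect()

# Smaller pages hold fewer rows in memory at once, on both client and coordinator.
statement = SimpleStatement(
    "SELECT * FROM my_keyspace.events LIMIT 1000",  # placeholder keyspace/table
    fetch_size=100,
)
for row in session.execute(statement):
    print(row)
```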
If you indeed need more memory, you can:
You might encounter this warning in the [CassandraLogs](monitor-clusters.md#crea
`Writing large partition <table> (105.426MiB) to sstable <file>`
-This indicates a problem in the data model. Here is a [stack overflow article](https://stackoverflow.com/questions/74024443/how-do-i-analyse-and-solve-writing-large-partition-warnings-in-cassandra) that goes into more detail. This can cause severe performance issues and needs to be addressed.
+This indicates a problem in the data model. Here's a [stack overflow article](https://stackoverflow.com/questions/74024443/how-do-i-analyse-and-solve-writing-large-partition-warnings-in-cassandra) that goes into more detail. This can cause severe performance issues and needs to be addressed.
+
+## Specialized optimizations
+### Compression
+Cassandra allows the selection of an appropriate compression algorithm when a table is created (see [Compression](https://cassandra.apache.org/doc/latest/cassandra/operating/compression.html)). The default is LZ4, which is excellent for throughput and CPU but consumes more space on disk. Using Zstd (Cassandra 4.0 and up) saves about 12% of space with minimal CPU overhead.
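As a hedged illustration, the compression algorithm can be set per table in CQL; the sketch below uses the DataStax Python driver with placeholder names, and `ZstdCompressor` requires Cassandra 4.0 or later.

```python
from cassandra.cluster import Cluster

cluster = Cluster(["<node-address>"])  # placeholder contact point
session = cluster.connect("my_keyspace")  # placeholder keyspace

# 'class' selects the compressor; 'chunk_length_in_kb' is the block size Cassandra
# compresses at a time (larger chunks compress better but cost more on reads).
session.execute("""
    CREATE TABLE IF NOT EXISTS events (
        id uuid PRIMARY KEY,
        payload text
    ) WITH compression = {'class': 'ZstdCompressor', 'chunk_length_in_kb': 64}
""")
```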
+
+### Optimizing memtable heap space
+Our default is to use 1/4 of the JVM heap for [memtable_heap_space](https://cassandra.apache.org/doc/latest/cassandra/configuration/cass_yaml_file.html#memtable_heap_space) in the cassandra.yaml. For write-oriented applications, or on SKUs with small memory, this can lead to frequent flushing and fragmented sstables, thus requiring more compaction. In such cases, increasing it to at least 4048 might be beneficial, but this requires careful benchmarking to make sure other operations (for example, reads) aren't affected.
## Next steps In this article, we laid out some best practices for optimal performance. You can now start working with the cluster: > [!div class="nextstepaction"]
-> [Create a cluster using Azure Portal](create-cluster-portal.md)
+> [Create a cluster using Azure Portal](create-cluster-portal.md)
managed-instance-apache-cassandra Create Cluster Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-instance-apache-cassandra/create-cluster-cli.md
cqlsh $host 9042 -u cassandra -p $initial_admin_password --ssl
### Connecting from an application
-As with CQLSH, connecting from an application using one of the supported [Apache Cassandra client drivers](https://cassandra.apache.org/doc/latest/cassandra/getting_started/drivers.html) requires SSL encryption to be enabled, and certification verification to be disabled. See samples for connecting to Azure Managed Instance for Apache Cassandra using [Java](https://github.com/Azure-Samples/azure-cassandra-mi-java-v4-getting-started), [.NET](https://github.com/Azure-Samples/azure-cassandra-mi-dotnet-core-getting-started), and [Python](https://github.com/Azure-Samples/azure-cassandra-mi-python-v4-getting-started).
+As with CQLSH, connecting from an application using one of the supported [Apache Cassandra client drivers](https://cassandra.apache.org/doc/latest/cassandra/getting_started/drivers.html) requires SSL encryption to be enabled, and certificate verification to be disabled. See samples for connecting to Azure Managed Instance for Apache Cassandra using [Java](https://github.com/Azure-Samples/azure-cassandra-mi-java-v4-getting-started), [.NET](https://github.com/Azure-Samples/azure-cassandra-mi-dotnet-core-getting-started), [Node.js](https://github.com/Azure-Samples/azure-cassandra-mi-nodejs-getting-started), and [Python](https://github.com/Azure-Samples/azure-cassandra-mi-python-v4-getting-started).
Disabling certificate verification is recommended because certificate verification won't work unless you map IP addresses of your cluster nodes to the appropriate domain. If you have an internal policy that mandates SSL certificate verification for any application, you can facilitate this by adding entries like `10.0.1.5 host1.managedcassandra.cosmos.azure.com` in your hosts file for each node. If taking this approach, you would also need to add new entries whenever scaling up nodes. For Java, we also highly recommend enabling [speculative execution policy](https://docs.datastax.com/en/developer/java-driver/4.10/manual/core/speculative_execution/) where applications are sensitive to tail latency. You can find a demo illustrating how this works and how to enable the policy [here](https://github.com/Azure-Samples/azure-cassandra-mi-java-v4-speculative-execution). > [!NOTE]
-> In the vast majority of cases it should **not be necessary** to configure or install certificates (rootCA, node or client, truststores, etc) to connect to Azure Managed Instance for Apache Cassandra. SSL encryption can be enabled by using the default truststore and password of the runtime being used by the client (see [Java](https://github.com/Azure-Samples/azure-cassandra-mi-java-v4-getting-started) and [.NET](https://github.com/Azure-Samples/azure-cassandra-mi-dotnet-core-getting-started) samples), because Azure Managed Instance for Apache Cassandra certificates will be trusted by that environment. In rare cases, if the certificate is not trusted, you may need to add it to the truststore.
+> In the vast majority of cases it should **not be necessary** to configure or install certificates (rootCA, node or client, truststores, etc.) to connect to Azure Managed Instance for Apache Cassandra. SSL encryption can be enabled by using the default truststore and password of the runtime being used by the client (see [Java](https://github.com/Azure-Samples/azure-cassandra-mi-java-v4-getting-started), [.NET](https://github.com/Azure-Samples/azure-cassandra-mi-dotnet-core-getting-started), [Node.js](https://github.com/Azure-Samples/azure-cassandra-mi-nodejs-getting-started), and [Python](https://github.com/Azure-Samples/azure-cassandra-mi-python-v4-getting-started) samples), because Azure Managed Instance for Apache Cassandra certificates will be trusted by that environment. In rare cases, if the certificate isn't trusted, you may need to add it to the truststore.
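For reference, a minimal Python sketch of such a connection follows, with SSL enabled and certificate verification disabled as described above; the host and credentials are placeholders, and the linked Python sample remains the authoritative version.

```python
from ssl import CERT_NONE, PROTOCOL_TLSv1_2, SSLContext

from cassandra.auth import PlainTextAuthProvider
from cassandra.cluster import Cluster

# SSL on, certificate verification off, per the guidance above.
ssl_context = SSLContext(PROTOCOL_TLSv1_2)
ssl_context.check_hostname = False
ssl_context.verify_mode = CERT_NONE

auth = PlainTextAuthProvider(username="cassandra", password="<initial-admin-password>")
cluster = Cluster(["<node-ip-or-host>"], port=9042,
                  ssl_context=ssl_context, auth_provider=auth)
session = cluster.connect()
print(session.execute("SELECT release_version FROM system.local").one())
```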
### Configuring client certificates (optional)
managed-instance-apache-cassandra Create Cluster Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-instance-apache-cassandra/create-cluster-portal.md
cqlsh $host 9042 -u cassandra -p $initial_admin_password --ssl
``` ### Connecting from an application
-As with CQLSH, connecting from an application using one of the supported [Apache Cassandra client drivers](https://cassandra.apache.org/doc/latest/cassandra/getting_started/drivers.html) requires SSL encryption to be enabled, and certification verification to be disabled. See samples for connecting to Azure Managed Instance for Apache Cassandra using [Java](https://github.com/Azure-Samples/azure-cassandra-mi-java-v4-getting-started), [.NET](https://github.com/Azure-Samples/azure-cassandra-mi-dotnet-core-getting-started), and [Python](https://github.com/Azure-Samples/azure-cassandra-mi-python-v4-getting-started).
+As with CQLSH, connecting from an application using one of the supported [Apache Cassandra client drivers](https://cassandra.apache.org/doc/latest/cassandra/getting_started/drivers.html) requires SSL encryption to be enabled, and certificate verification to be disabled. See samples for connecting to Azure Managed Instance for Apache Cassandra using [Java](https://github.com/Azure-Samples/azure-cassandra-mi-java-v4-getting-started), [.NET](https://github.com/Azure-Samples/azure-cassandra-mi-dotnet-core-getting-started), [Node.js](https://github.com/Azure-Samples/azure-cassandra-mi-nodejs-getting-started), and [Python](https://github.com/Azure-Samples/azure-cassandra-mi-python-v4-getting-started).
Disabling certificate verification is recommended because certificate verification won't work unless you map IP addresses of your cluster nodes to the appropriate domain. If you have an internal policy that mandates SSL certificate verification for any application, you can facilitate this by adding entries like `10.0.1.5 host1.managedcassandra.cosmos.azure.com` in your hosts file for each node. If taking this approach, you would also need to add new entries whenever scaling up nodes. For Java, we also highly recommend enabling [speculative execution policy](https://docs.datastax.com/en/developer/java-driver/4.10/manual/core/speculative_execution/) where applications are sensitive to tail latency. You can find a demo illustrating how this works and how to enable the policy [here](https://github.com/Azure-Samples/azure-cassandra-mi-java-v4-speculative-execution). > [!NOTE]
-> In the vast majority of cases it should **not be necessary** to configure or install certificates (rootCA, node or client, truststores, etc) to connect to Azure Managed Instance for Apache Cassandra. SSL encryption can be enabled by using the default truststore and password of the runtime being used by the client (see [Java](https://github.com/Azure-Samples/azure-cassandra-mi-java-v4-getting-started) and [.NET](https://github.com/Azure-Samples/azure-cassandra-mi-dotnet-core-getting-started) samples), because Azure Managed Instance for Apache Cassandra certificates will be trusted by that environment. In rare cases, if the certificate is not trusted, you may need to add it to the truststore.
+> In the vast majority of cases it should **not be necessary** to configure or install certificates (rootCA, node or client, truststores, etc.) to connect to Azure Managed Instance for Apache Cassandra. SSL encryption can be enabled by using the default truststore and password of the runtime being used by the client (see [Java](https://github.com/Azure-Samples/azure-cassandra-mi-java-v4-getting-started), [.NET](https://github.com/Azure-Samples/azure-cassandra-mi-dotnet-core-getting-started), [Node.js](https://github.com/Azure-Samples/azure-cassandra-mi-nodejs-getting-started), and [Python](https://github.com/Azure-Samples/azure-cassandra-mi-python-v4-getting-started) samples), because Azure Managed Instance for Apache Cassandra certificates will be trusted by that environment. In rare cases, if the certificate isn't trusted, you may need to add it to the truststore.
### Configuring client certificates (optional)
managed-instance-apache-cassandra Manage Resources Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-instance-apache-cassandra/manage-resources-cli.md
az managed-cassandra datacenter update \
> - seed_provider > - initial_token > - autobootstrap
-> - client_ecncryption_options
+> - client_encryption_options
> - server_encryption_options > - transparent_data_encryption_options > - audit_logging_options
az managed-cassandra datacenter update \
> - commitlog_directory > - cdc_raw_directory > - saved_caches_directory
+> - endpoint_snitch
+> - partitioner
+> - rpc_address
+> - rpc_interface
managed-instance-apache-cassandra Monitor Clusters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-instance-apache-cassandra/monitor-clusters.md
Use the [Azure Monitor REST API](/rest/api/monitor/diagnosticsettings/createorup
## Audit whitelist > [!NOTE]
-> This article contains references to the term *whitelist*, a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.
+> This article contains references to a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.
By default, audit logging creates a record for every login attempt and CQL query. The result can be rather overwhelming and increase overhead. You can use the audit whitelist feature in Cassandra 3.11 to set what operations *don't* create an audit record. The audit whitelist feature is enabled by default in Cassandra 3.11. To learn how to configure your whitelist, see [Role-based whitelist management](https://github.com/Ericsson/ecaudit/blob/release/c2.2/doc/role_whitelist_management.md).
mariadb Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/policy-reference.md
Previously updated : 08/08/2023 Last updated : 08/30/2023 # Azure Policy built-in definitions for Azure Database for MariaDB
mariadb Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Database for MariaDB description: Lists Azure Policy Regulatory Compliance controls available for Azure Database for MariaDB. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/03/2023 Last updated : 08/25/2023
migrate Common Questions Appliance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/common-questions-appliance.md
ms. - Previously updated : 12/12/2022+ Last updated : 08/24/2022 # Azure Migrate appliance: Common questions
The appliance can be deployed using a couple of methods:
- The appliance can be deployed using a template for servers running in VMware or Hyper-V environment ([OVA template for VMware](how-to-set-up-appliance-vmware.md) or [VHD for Hyper-V](how-to-set-up-appliance-hyper-v.md)). - If you don't want to use a template, you can deploy the appliance for VMware or Hyper-V environment using a [PowerShell installer script](deploy-appliance-script.md). - In Azure Government, you should deploy the appliance using a PowerShell installer script. Refer to the steps of deployment [here](deploy-appliance-script-government.md).-- For physical or virtualized servers on-premises or any other cloud, you always deploy the appliance using a PowerShell installer script.Refer to the steps of deployment [here](how-to-set-up-appliance-physical.md).
+- For physical or virtualized servers on-premises or any other cloud, you always deploy the appliance using a PowerShell installer script. Refer to the steps of deployment [here](how-to-set-up-appliance-physical.md).
## How does the appliance connect to Azure?
migrate Concepts Business Case Calculation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/concepts-business-case-calculation.md
ms. Previously updated : 06/06/2023 Last updated : 08/24/2023+ # Business case (preview) overview
Cost components for running on-premises servers. For TCO calculations, an annual
| | | | | Compute | Hardware | Server Hardware (Host machines) | Total hardware acquisition cost is calculated using a cost per core linear regression formula: Cost per core = 16.232*(Hyperthreaded core: memory in GB ratio) + 113.87. Hyperthreaded cores = 2*(cores) | | Software - SQL Server licensing | License cost | Calculated per two core pack license pricing of 2019 Enterprise or Standard. |
+| | SQL Server - Extended Security Update (ESU) | License cost | Calculated for 3 years after the end of support of the SQL Server license as follows (see the worked sketch after these tables):<br/><br/> ESU (Year 1) - 75% of the license cost <br/><br/> ESU (Year 2) - 100% of the license cost <br/><br/> ESU (Year 3) - 125% of the license cost <br/><br/> |
| | | Software Assurance | Calculated per year as in settings. | | | Software - Windows Server licensing | License cost | Calculated per two core pack license pricing of Windows Server. |
+| | Windows Server - Extended Security Update (ESU) | License cost | Calculated for 3 years after the end of support of the Windows Server license: <br/><br/> ESU (Year 1) - 75% of the license cost <br/><br/> ESU (Year 2) - 100% of the license cost <br/><br/> ESU (Year 3) - 125% of the license cost <br/><br/>|
| | | Software Assurance | Calculated per year as in settings. |
-| | Virtualization software for servers running in VMware environment | Virtualization Software (VMware license cost + support + management software cost) | License cost for vSphere Standard license + Production support for vSphere Standard license + Management software cost for VSphere Standard + production support cost of management software. _Not included- other hypervisor software cost_ or _Antivirus / Monitoring Agents_.|
+| | Virtualization software for servers running in VMware environment | Virtualization Software (VMware license cost + support + management software cost) | License cost for vSphere Standard license + Production support for vSphere Standard license + Management software cost for vSphere Standard + production support cost of management software. _Not included- other hypervisor software cost_ or _Antivirus / Monitoring Agents_.|
| | Virtualization software for servers running in Microsoft Hyper-V environment| Virtualization Software (management software cost + software assurance) | Management software cost for System Center + software assurance. _Not included- other hypervisor software cost_ or _Antivirus / Monitoring Agents_.| | Storage | Storage Hardware | | The total storage hardware acquisition cost is calculated by multiplying the Total volume of storage attached to per GB cost. Default is USD 2 per GB per month. | | | Storage Maintenance | | Default is 10% of storage hardware acquisition cost. |
Cost components for running on-premises servers. For TCO calculations, an annual
| **Operating Asset Expense (OPEX) (B)** | | | | | Network maintenance | Per year | | | | Storage maintenance | Per year | Power draw per Server, Average price per KW per month based on location. | |
-| License Support | License support cost for virtualization + Windows Server + SQL Server + Linux OS | | VMware licenses aren't retained; Windows, SQL and Hyper-V management software licenses are retained based on AHUB option in Azure. |
+| License Support | License support cost for virtualization + Windows Server + SQL Server + Linux OS + Windows Server Extended Security Update (ESU) + SQL Server Extended Security Update (ESU) | | VMware licenses aren't retained; Windows, SQL and Hyper-V management software licenses are retained based on AHUB option in Azure. |
| Security | Per year | Per server annual security/protection cost. | | | Datacenter Admin cost | Number of people * hourly cost * 730 hours | Cost per hour based on location. | |
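As a quick sanity check on the formulas in the tables above, the sketch below works through the hardware cost-per-core regression and the three-year ESU schedule in Python; the input values, and the assumption that the per-core figure is multiplied by the hyperthreaded core count, are illustrative only, not the exact Business case implementation.

```python
# Illustrative sketch of the cost formulas above; inputs are placeholders.
def hardware_cost_per_core(cores: int, memory_gb: float) -> float:
    """Cost per core = 16.232 * (hyperthreaded core : memory-in-GB ratio) + 113.87."""
    hyperthreaded_cores = 2 * cores  # Hyperthreaded cores = 2 * (cores)
    return 16.232 * (hyperthreaded_cores / memory_gb) + 113.87

def esu_costs(license_cost: float) -> list:
    """ESU cost for years 1-3 after end of support: 75%, 100%, 125% of license cost."""
    return [license_cost * pct for pct in (0.75, 1.00, 1.25)]

# Example: a 16-core host with 128 GB of memory and a 10,000 USD license.
per_core = hardware_cost_per_core(16, 128)
print(f"Cost per hyperthreaded core: {per_core:.2f}")
print("ESU years 1-3:", esu_costs(10_000))
```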
migrate Concepts Dependency Visualization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/concepts-dependency-visualization.md
ms. Previously updated : 07/14/2023- Last updated : 08/24/2023+ # Dependency analysis
The differences between agentless visualization and agent-based visualization ar
**Requirement** | **Agentless** | **Agent-based** | |
-**Support** | Available for VMware VMs in general availability (GA). | In general availability (GA).
+**Support** | Generally available for VMware VMs, Hyper-V VMs, physical servers, and servers running on other public clouds like AWS and GCP. | In general availability (GA).
**Agent** | No agents needed on servers you want to analyze. | Agents required on each on-premises server that you want to analyze. **Log Analytics** | Not required. | Azure Migrate uses the [Service Map](/previous-versions/azure/azure-monitor/vm/service-map) solution in [Azure Monitor logs](../azure-monitor/logs/log-query-overview.md) for dependency analysis.<br/><br/> You associate a Log Analytics workspace with a project. The workspace must reside in the East US, Southeast Asia, or West Europe regions. The workspace must be in a region in which [Service Map is supported](../azure-monitor/vm/vminsights-configure-workspace.md#supported-regions). **Process** | Captures TCP connection data. After discovery, it gathers data at intervals of five minutes. | Service Map agents installed on a server gather data about TCP processes, and inbound/outbound connections for each process.
migrate Concepts Migration Webapps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/concepts-migration-webapps.md
Previously updated : 02/28/2023 Last updated : 08/31/2023
Azure Migrate now supports agentless at-scale migration of ASP.NET web apps to [
Support | Details | **Supported servers** | Currently supported only for Windows servers running IIS in your VMware environment.
-**Windows servers** | Windows Server 2008 R2 and later are supported.
+**Windows servers** | Windows Server 2012 R2 and later are supported.
**Linux servers** | Currently not supported. **IIS access** | Web apps discovery requires a local admin user account. **IIS versions** | IIS 7.5 and later are supported.
Support | Details
- [Networking features](../app-service/networking-features.md). - [Monitor App Service with Azure Monitor](../app-service/monitor-app-service.md). - [Configure Azure AD authentication](../app-service/configure-authentication-provider-aad.md).-- [Review best practices](../app-service/deploy-best-practices.md) for deploying to Azure App service.
+- [Review best practices](../app-service/deploy-best-practices.md) for deploying to Azure App Service.
migrate Concepts Vmware Agentless Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/concepts-vmware-agentless-migration.md
ms. Previously updated : 05/31/2023 Last updated : 08/21/2023 # Azure Migrate agentless migration of VMware virtual machines
Delta replication cycles are scheduled as follows:
- First delta replication cycle is scheduled immediately after the initial replication cycle completes - Next delta replication cycles are scheduled according to the following logic:
- min[max[(Previous delta replication cycle time/2), 1 hour], 12 hours]
+ min[max[1 hour, (Previous delta replication cycle time/2)], 12 hours]
That is, the next delta replication will be scheduled no sooner than one hour and no later than 12 hours. For example, if a VM takes four hours for a delta replication cycle, the next delta replication cycle is scheduled in two hours, and not in the next hour.
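The clamping rule above can be written as a one-liner; this is a sketch of the stated scheduling logic, not Azure Migrate's actual code.

```python
def next_delta_cycle_hours(previous_cycle_hours: float) -> float:
    """Next cycle = min[max[1 hour, previous/2], 12 hours]."""
    return min(max(1.0, previous_cycle_hours / 2), 12.0)

assert next_delta_cycle_hours(4) == 2.0    # the four-hour example above
assert next_delta_cycle_hours(0.5) == 1.0  # never sooner than one hour
assert next_delta_cycle_hours(30) == 12.0  # never later than 12 hours
```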
migrate How To Automate Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/how-to-automate-migration.md
ms. Previously updated : 11/15/2022 Last updated : 05/11/2023
migrate How To Create Assessment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/how-to-create-assessment.md
Run an assessment as follows:
:::image type="content" source="./media/tutorial-assess-vmware-azure-vm/assess-group.png" alt-text="Screenshot of adding VMs to a group.":::
-1. Select the appliance, and select the VMs you want to add to the group. Then select **Next**.
+1. Select the appliance, and select the VMs you want to add to the group. Then select **Next**. We recommend that you prioritize migrating servers whose licenses are in extended support or out of support.
1. In **Review + create assessment**, review the assessment details, and select **Create Assessment** to create the group and run the assessment.
An Azure VM assessment describes:
:::image type="content" source="./media/how-to-create-assessment/assessment-summary.png" alt-text="Screenshot of an Assessment summary.":::
+### Review support status
+
+The assessment summary displays the support status of the operating system licenses.
+
+1. Select the graph in the **Supportability** section to view a list of the assessed VMs.
+2. The **Operating system license support status** column displays the support status of the operating system: mainstream support, extended support, or out of support. Selecting the support status opens a pane on the right that shows the type of support status, the duration of support, and the recommended steps to secure your workloads.
+ - To view the remaining duration of support, that is, the number of months for which the license is valid,
+select **Columns** > **Support ends in** > **Submit**. The **Support ends in** column displays the duration in months.
++ ### Review Azure readiness 1. In **Azure readiness**, verify whether servers are ready for migration to Azure.
migrate How To Create Azure Sql Assessment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/how-to-create-azure-sql-assessment.md
Previously updated : 03/15/2023- Last updated : 08/24/2023+ # Create an Azure SQL assessment
Run an assessment as follows:
3. Review the assessment summary. You can also edit the assessment settings or recalculate the assessment.
+### Review support status
+
+The assessment summary displays the support status of the database instance licenses.
+
+1. Select the graph in the **Supportability** section to view a list of the assessed VMs.
+2. The **Database instance license support status** column displays the support status of the database instance: mainstream support, extended support, or out of support. Selecting the support status opens a pane on the right that shows the type of support status, the duration of support, and the recommended steps to secure your workloads.
+ - To view the remaining duration of support, that is, the number of months for which the license is valid,
+select **Columns** > **Support ends in** > **Submit**. The **Support ends in** column displays the duration in months.
++ ### Discovered entities This indicates the number of SQL servers, instances, and databases that were assessed in this assessment.
migrate How To Upgrade Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/how-to-upgrade-windows.md
ms. Previously updated : 07/17/2023- Last updated : 08/30/2023+ # Azure Migrate Windows Server upgrade (Preview)ΓÇ»
This article describes how to upgrade Windows Server OS while migrating to Azure
- Ensure you have an existing Migrate project or [create](create-manage-projects.md) a project. - Ensure you have discovered the servers according to [Discover servers in VMware environment](tutorial-discover-vmware.md) and replicated the servers as described in [Migrate VMware VMs](tutorial-migrate-vmware.md#replicate-vms). - Verify the operating system disk has enough [free space](https://learn.microsoft.com/windows-server/get-started/hardware-requirements#storage-controller-and-disk-space-requirements) to perform the in-place upgrade. The minimum disk space requirement is 32 GB.  -- The upgrade feature only works for Windows Server Standard and Datacenter editions.
+- The upgrade feature only works for Windows Server Standard and Datacenter editions.
+- The upgrade feature doesn't work for servers running a non-en-US language operating system.
- This feature does not work for Windows Server with an evaluation license and needs a full license. If you have any server with an evaluation license, upgrade to full edition before starting migration to Azure. - Disable antivirus and anti-spyware software and firewalls. These types of software can conflict with the upgrade process. Re-enable antivirus and anti-spyware software and firewalls after the upgrade is completed. - Ensure that your VM has the capability of adding another data disk as this feature requires the addition of an extra data disk temporarily for a seamless upgrade experience.ΓÇ»
migrate How To View A Business Case https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/how-to-view-a-business-case.md
ms. Previously updated : 01/17/2023- Last updated : 08/24/2023+
There are four major reports that you need to review:
- Estimated year on year cashflow savings based on the estimated migration completed that year. - Savings from unique Azure benefits like Azure Hybrid Benefit. - Discovery insights covering the scope of the business case.
+ - Support status of the operating system and database licenses.
- **On-premises vs Azure**: This report covers the breakdown of the total cost of ownership by cost categories and insights on savings. - **Azure IaaS**: This report covers the Azure and on-premises footprint of the servers and workloads recommended for migrating to Azure IaaS. - **Azure PaaS**: This report covers the Azure and on-premises footprint of the workloads recommended for migrating to Azure PaaS.
As you plan to migrate to Azure in phases, this line chart shows your cashflow p
- The future state cost shows how your net cashflow will be as you migrate some percentage to Azure per year as in the 'Azure cost' assumptions, while your infrastructure is growing 5% per year. ### Savings with Azure Hybrid Benefits
-Currently, this card shows a static percentage of max savings you could get with Azure hybrid Benefits.
+This card shows a static percentage of the maximum savings you could get with Azure Hybrid Benefit.
+
+### Savings with extended security updates
+This card shows the potential savings from Extended Security Update (ESU) licenses: the cost of the ESU licenses required to run Windows Server and SQL Server securely on-premises after their licenses reach end of support. Extended security updates are offered at no additional cost on Azure.
+ ### Discovery insights
-It covers the total severs scoped in the business case computation, virtualization distribution, utilization insights and distribution of servers based on workloads running on them.
+It covers the total servers scoped in the business case computation, virtualization distribution, utilization insights, support status of the licenses, and distribution of servers based on workloads running on them.
-### Utilization insights
+#### Utilization insights
It covers which servers are ideal for cloud, servers that can be decommissioned on-premises, and servers that can't be classified based on resource utilization/performance data (a sketch of these rules follows the list): - Ideal for cloud: These servers are best fit for migrating to Azure and comprise active and idle servers: - Active servers: These servers delivered business value by being on and had their CPU and memory utilization above 5% and network utilization above 2%.
It covers which servers are ideal for cloud, servers that can be decommissioned
- Zombie: The CPU, memory, and network utilization were 0% with no performance data collection issues. - These servers were on but don't have adequate metrics available: - Unknown: Many servers can land in this section if the discovery is still ongoing or has some unaddressed discovery issues.
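A compact way to read these rules is as a classifier over average utilization percentages. The sketch below is illustrative: the idle rule isn't spelled out in this excerpt, so treating idle as "on, but below the active thresholds" is an assumption.

```python
from typing import Optional

def classify(cpu: Optional[float], memory: Optional[float],
             network: Optional[float]) -> str:
    """Bucket a server by its average utilization percentages."""
    if cpu is None or memory is None or network is None:
        return "unknown"   # discovery still ongoing or metrics unavailable
    if cpu == 0 and memory == 0 and network == 0:
        return "zombie"    # on, but delivering no business value
    if cpu > 5 and memory > 5 and network > 2:
        return "active"    # ideal for cloud
    return "idle"          # assumption: on, below the active thresholds

print(classify(42.0, 30.5, 8.1))  # -> active
```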
-## On-premises vs Azure report
+## On-premises vs Azure report
It covers cost components for on-premises and Azure, savings, and insights to understand the savings better. :::image type="content" source="./media/how-to-view-a-business-case/comparison-inline.png" alt-text="Screenshot of on-premises and Azure comparison." lightbox="./media/how-to-view-a-business-case/comparison-expanded.png":::
It covers cost components for on-premises and Azure, savings, and insights to un
**Azure tab** This section contains the cost estimate by recommended target (Annual cost and also includes Compute, Storage, Network, labor components) and savings from Hybrid benefits.
+- IaaS cost estimate:
+ - **Estimated cost by target**: This card includes the cost based on the target.
+ - **Compute and license cost**: This card shows the comparison of compute and license cost with and without Azure Hybrid Benefit.
+ - **Savings**: This card displays the estimated maximum savings over a period of one year when using Azure Hybrid Benefit and extended security updates.
- Azure VM: - **Estimated cost by savings options**: This card includes compute cost for Azure VMs. It is recommended that all idle servers are migrated via Pay as you go Dev/Test and others (Active and unknown) are migrated using 3 year Reserved Instance or 3 year Azure Savings Plan to maximize savings. - **Recommended VM family**: This card covers the VM sizes recommended. The ones marked Unknown are the VMs that have some readiness issues and no SKUs could be found for them.
This section assumes instance to SQL Server on Azure VM migration recommendation
- On-premises footprint of the servers recommended to be migrated to Azure IaaS. - Contribution of Zombie servers in the on-premises cost. - Distribution of servers by OS, virtualization, and activity state.
+- Distribution by support status of OS licenses and OS versions.
## Azure PaaS report **Azure tab** This section contains the cost estimate by recommended target (Annual cost and also includes Compute, Storage, Network, labor components) and savings from Hybrid benefits.
+- PaaS cost estimate:
+ - **Estimated cost by target**: This card includes the cost based on the target.
+ - **Compute and license cost**: This card shows the comparison of compute and license cost with and without Azure Hybrid Benefit.
+ - **Savings**: This card displays the estimated maximum savings over a period of one year when using Azure Hybrid Benefit and extended security updates.
- Azure SQL: - **Estimated cost by savings options**: This card includes compute cost for Azure SQL MI. It is recommended that all idle SQL instances are migrated via Pay as you go Dev/Test and others (Active and unknown) are migrated using 3 year Reserved Instance to maximize savings. - **Distribution by recommended service tier** : This card covers the recommended service tier.
This section contains the cost estimate by recommended target (Annual cost and a
- On-premises footprint of the servers recommended to be migrated to Azure PaaS. - Contribution of Zombie SQL instances in the on-premises cost.
+- Distribution by support status of OS licenses and OS versions.
- Distribution of SQL instances by SQL version and activity state. + ## Next steps - [Learn more](concepts-business-case-calculation.md) about how business cases are calculated.
migrate Migrate Appliance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/migrate-appliance.md
ms. Previously updated : 12/12/2022 Last updated : 08/22/2023
The following table summarizes the Azure Migrate appliance requirements for VMwa
**Project limits** | An appliance can only be registered with a single project.<br> A single project can have multiple registered appliances. **Discovery limits** | An appliance can discover up to 10,000 servers running across multiple vCenter Servers.<br>A single appliance can connect to up to 10 vCenter Servers. **Supported deployment** | Deploy as new server running on vCenter Server using OVA template.<br><br> Deploy on an existing server running Windows Server 2022 using PowerShell installer script.
-**OVA template** | Download from project or from [here](https://go.microsoft.com/fwlink/?linkid=2140333)<br><br> Download size is 11.9 GB.<br><br> The downloaded appliance template comes with a Windows Server 2022 evaluation license, which is valid for 180 days.<br>If the evaluation period is close to expiry, we recommend that you download and deploy a new appliance using OVA template, or you activate the operating system license of the appliance server.
+**OVA template** | Download from project or from [here](https://go.microsoft.com/fwlink/?linkid=2191954).<br><br> Download size is 11.9 GB.<br><br> The downloaded appliance template comes with a Windows Server 2022 evaluation license, which is valid for 180 days.<br>If the evaluation period is close to expiry, we recommend that you download and deploy a new appliance using OVA template, or you activate the operating system license of the appliance server.
**OVA verification** | [Verify](tutorial-discover-vmware.md#verify-security) the OVA template downloaded from project by checking the hash values. **PowerShell script** | Refer to this [article](./deploy-appliance-script.md#set-up-the-appliance-for-vmware) on how to deploy an appliance using the PowerShell installer script.<br/><br/> **Hardware and network requirements** | The appliance should run on server with Windows Server 2022, 32-GB RAM, 8 vCPUs, around 80 GB of disk storage, and an external virtual switch.<br/> The appliance requires internet access, either directly or through a proxy.<br/><br/> If you deploy the appliance using OVA template, you need enough resources on the vCenter Server to create a server that meets the hardware requirements.<br/><br/> If you run the appliance on an existing server, make sure that it is running Windows Server 2022, and meets hardware requirements.<br/>_(Currently the deployment of appliance is only supported on Windows Server 2022.)_
migrate Migrate Replication Appliance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/migrate-replication-appliance.md
ms. Previously updated : 02/27/2023 Last updated : 08/29/2023
The replication appliance is deployed when you set up agent-based migration of V
## Appliance requirements
-When you set up the replication appliance using the OVA template provided in the Azure Migrate hub, the appliance runs Windows Server 2022 and complies with the support requirements. If you set up the replication appliance manually on a physical server, then make sure that it complies with the requirements.
+When you set up the replication appliance using the OVA template provided in the Azure Migrate hub, the appliance runs Windows Server 2016 and complies with the support requirements. If you set up the replication appliance manually on a physical server, then make sure that it complies with the requirements.
**Component** | **Requirement** |
RAM | 16 GB
Number of disks | Two: The OS disk and the process server cache disk. Free disk space (cache) | 600 GB **Software settings** |
-Operating system | Windows Server 2022 or Windows Server 2012 R2
-License | The appliance comes with a Windows Server 2022 evaluation license, which is valid for 180 days. <br>If the evaluation period is close to expiry, we recommend that you download and deploy a new appliance, or that you activate the operating system license of the appliance VM.
+Operating system | Windows Server 2016 or Windows Server 2012 R2
+License | The appliance comes with a Windows Server 2016 evaluation license, which is valid for 180 days. <br>If the evaluation period is close to expiry, we recommend that you download and deploy a new appliance, or that you activate the operating system license of the appliance VM.
Operating system locale | English (en-us) TLS | TLS 1.2 should be enabled. .NET Framework | .NET Framework 4.6 or later should be installed on the machine (with strong cryptography enabled).
migrate Migrate Servers To Azure Using Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/migrate-servers-to-azure-using-private-link.md
With discovery completed, you can begin replication of Hyper-V VMs to Azure.
1. In **Replication storage account**, select the Azure storage account in which replicated data will be stored in Azure.
-1. Next, [**create a private endpoint for the storage account**](migrate-servers-to-azure-using-private-link.md#create-a-private-endpoint-for-the-storage-account-1) and [**grant permissions to the Recovery Services vault managed identity**](migrate-servers-to-azure-using-private-link.md#grant-access-permissions-to-the-recovery-services-vault) to access the storage account required by Azure Migrate. This is mandatory before you proceed.
+1. Next, [**create a private endpoint for the storage account**](https://learn.microsoft.com/azure/migrate/migrate-servers-to-azure-using-private-link?pivots=agentlessvmware#create-a-private-endpoint-for-the-storage-account) and [**grant permissions to the Recovery Services vault managed identity**](https://learn.microsoft.com/azure/migrate/migrate-servers-to-azure-using-private-link?pivots=agentbased#grant-access-permissions-to-the-recovery-services-vault-1) to access the storage account required by Azure Migrate. This is mandatory before you proceed.
- For Hyper-V VM migrations to Azure, if the replication storage account is of *Premium* type, you must select another storage account of *Standard* type for the cache storage account. In this case, you must create private endpoints for both the replication and cache storage account.
migrate Migrate Support Matrix Hyper V Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/migrate-support-matrix-hyper-v-migration.md
ms. Previously updated : 05/23/2023- Last updated : 09/01/2023+ # Support matrix for Hyper-V migration
You can select up to 10 VMs at once for replication. If you want to migrate more
| :-- | :- | | **Operating system** | All [Windows](https://support.microsoft.com/help/2721672/microsoft-server-software-support-for-microsoft-azure-virtual-machines) and [Linux](../virtual-machines/linux/endorsed-distros.md) operating systems that are supported by Azure. | **Windows Server 2003** | For VMs running Windows Server 2003, you need to [install Hyper-V Integration Services](prepare-windows-server-2003-migration.md) before migration. |
-**Linux VMs in Azure** | Some VMs might require changes so that they can run in Azure.<br><br> For Linux, Azure Migrate makes the changes automatically for these operating systems:<br> - Red Hat Enterprise Linux 8.x, 7.9, 7.8, 7.7, 7.6, 7.5, 7.4, 7.0, 6.x<br> - Cent OS 8.x, 7.7, 7.6, 7.5, 7.4, 6.x</br> - SUSE Linux Enterprise Server 15 SP0, 15 SP1, 12, 11 SP4, 11 SP3 <br>- Ubuntu 20.04, 19.04, 19.10, 18.04LTS, 16.04LTS, 14.04LTS<br> - Debian 10, 9, 8, 7<br> - Oracle Linux 8, 7.7-CI, 7.7, 6<br> For other operating systems, you make the [required changes](prepare-for-migration.md#verify-required-changes-before-migrating) manually.
+**Linux VMs in Azure** | Some VMs might require changes so that they can run in Azure.<br><br> For Linux, Azure Migrate makes the changes automatically for these operating systems:<br> - Red Hat Enterprise Linux 9.x, 8.x, 7.9, 7.8, 7.7, 7.6, 7.5, 7.4, 7.0, 6.x<br> - CentOS 9.x (Release and Stream), 8.x (Release and Stream), 7.7, 7.6, 7.5, 7.4, 6.x<br> - SUSE Linux Enterprise Server 15 SP4, 15 SP3, 15 SP2, 15 SP1, 15 SP0, 12, 11 SP4, 11 SP3 <br>- Ubuntu 22.04, 21.04, 20.04, 19.04, 19.10, 18.04LTS, 16.04LTS, 14.04LTS<br> - Debian 11, 10, 9, 8, 7<br> - Oracle Linux 9, 8, 7.7-CI, 7.7, 6<br> - Kali Linux (2016, 2017, 2018, 2019, 2020, 2021, 2022) <br> For other operating systems, you make the [required changes](prepare-for-migration.md#verify-required-changes-before-migrating) manually.
| **Required changes for Azure** | Some VMs might require changes so that they can run in Azure. Make adjustments manually before migration. The relevant articles contain instructions about how to do this. | | **Linux boot** | If /boot is on a dedicated partition, it should reside on the OS disk, and not be spread across multiple disks.<br> If /boot is part of the root (/) partition, then the '/' partition should be on the OS disk, and not span other disks. | | **UEFI boot** | Supported. UEFI-based VMs will be migrated to Azure generation 2 VMs. |
migrate Migrate Support Matrix Physical Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/migrate-support-matrix-physical-migration.md
ms. Previously updated : 05/23/2023 Last updated : 07/11/2023+ # Support matrix for migration of physical servers, AWS VMs, and GCP VMs
migrate Migrate Support Matrix Vmware Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/migrate-support-matrix-vmware-migration.md
ms. Previously updated : 07/12/2023 Last updated : 09/01/2023
The table summarizes VMware vSphere hypervisor requirements.
**VMware** | **Details** | **VMware vCenter Server** | Version 5.5, 6.0, 6.5, 6.7, 7.0.
-**VMware vSphere ESXi host** | Version 5.5, 6.0, 6.5, 6.7, 7.0.
+**VMware vSphere ESXi host** | Version 5.5, 6.0, 6.5, 6.7, 7.0, 8.0.
**vCenter Server permissions** | Agentless migration uses the [Migrate Appliance](migrate-appliance.md). The appliance needs these permissions in vCenter Server:<br/><br/> - **Datastore.Browse** (Datastore -> Browse datastore): Allow browsing of VM log files to troubleshoot snapshot creation and deletion.<br/><br/> - **Datastore.FileManagement** (Datastore -> Low level file operations): Allow read/write/delete/rename operations in the datastore browser, to troubleshoot snapshot creation and deletion.<br/><br/> - **VirtualMachine.Config.ChangeTracking** (Virtual machine -> Disk change tracking): Allow enable or disable change tracking of VM disks, to pull changed blocks of data between snapshots.<br/><br/> - **VirtualMachine.Config.DiskLease** (Virtual machine -> Disk lease): Allow disk lease operations for a VM, to read the disk using the VMware vSphere Virtual Disk Development Kit (VDDK).<br/><br/> - **VirtualMachine.Provisioning.DiskRandomRead** (Virtual machine -> Provisioning -> Allow read-only disk access): Allow opening a disk on a VM, to read the disk using the VDDK.<br/><br/> - **VirtualMachine.Provisioning.DiskRandomAccess** (Virtual machine -> Provisioning -> Allow disk access): Allow opening a disk on a VM, to read the disk using the VDDK.<br/><br/> - **VirtualMachine.Provisioning.GetVmFiles** (Virtual machine -> Provisioning -> Allow virtual machine download): Allows read operations on files associated with a VM, to download the logs and troubleshoot if failure occurs.<br/><br/> - **VirtualMachine.State.\*** (Virtual machine -> Snapshot management): Allow creation and management of VM snapshots for replication.<br/><br/> - **VirtualMachine.GuestOperations.\*** (Virtual machine -> Guest operations): Allow Discovery, Software Inventory, and Dependency Mapping on VMs.<br/><br/> -**VirtualMachine.Interact.PowerOff** (Virtual machine > Interaction > Power off): Allow the VM to be powered off during migration to Azure. **Multiple vCenter Servers** | A single appliance can connect to up to 10 vCenter Servers.
The table summarizes agentless migration requirements for VMware vSphere VMs.
| **Supported operating systems** | You can migrate [Windows](https://support.microsoft.com/help/2721672/microsoft-server-software-support-for-microsoft-azure-virtual-machines) and [Linux](../virtual-machines/linux/endorsed-distros.md) operating systems supported by Azure. **Windows VMs in Azure** | You might need to [make some changes](prepare-for-migration.md#verify-required-changes-before-migrating) on VMs before migration.
-**Linux VMs in Azure** | Some VMs might require changes so that they can run in Azure.<br/><br/> For Linux, Azure Migrate makes the changes automatically for these operating systems:<br/> - Red Hat Enterprise Linux 8.x, 7.x, 6.x <br/> - Cent OS 8.x, 7.x, 6.x</br> - SUSE Linux Enterprise Server 15 SP0, 15 SP1, 12, 11 SP4, 11 SP3<br/> - Ubuntu 20.04, 19.04, 19.10, 14.04LTS, 16.04LTS, 18.04LTS<br/> - Debian 10, 9, 8, 7<br/> - Oracle Linux 8, 7.7-CI, 7.7, 6<br/> For other operating systems, you make the [required changes](prepare-for-migration.md#verify-required-changes-before-migrating) manually.<br/> The `SELinux Enforced` setting is currently not fully supported. It causes Dynamic IP setup and Microsoft Azure Linux Guest agent (waagent/WALinuxAgent) installation to fail. You can still migrate and use the VM.
+**Linux VMs in Azure** | Some VMs might require changes so that they can run in Azure.<br/><br/> For Linux, Azure Migrate makes the changes automatically for these operating systems:<br/> - Red Hat Enterprise Linux 9.x, 8.x, 7.9, 7.8, 7.7, 7.6, 7.5, 7.4, 7.0, 6.x<br> - CentOS 9.x (Release and Stream), 8.x (Release and Stream), 7.7, 7.6, 7.5, 7.4, 6.x<br> - SUSE Linux Enterprise Server 15 SP4, 15 SP3, 15 SP2, 15 SP1, 15 SP0, 12, 11 SP4, 11 SP3 <br>- Ubuntu 22.04, 21.04, 20.04, 19.04, 19.10, 18.04LTS, 16.04LTS, 14.04LTS<br> - Debian 11, 10, 9, 8, 7<br> - Oracle Linux 9, 8, 7.7-CI, 7.7, 6<br> - Kali Linux (2016, 2017, 2018, 2019, 2020, 2021, 2022) <br> For other operating systems, you make the [required changes](prepare-for-migration.md#verify-required-changes-before-migrating) manually.<br/> The `SELinux Enforced` setting is currently not fully supported. It causes Dynamic IP setup and Microsoft Azure Linux Guest agent (waagent/WALinuxAgent) installation to fail. You can still migrate and use the VM.
**Boot requirements** | **Windows VMs:**<br/>OS Drive (C:\\) and System Reserved Partition (EFI System Partition for UEFI VMs) should reside on the same disk.<br/>If `/boot` is on a dedicated partition, it should reside on the OS disk and not be spread across multiple disks. <br/> If `/boot` is part of the root (/) partition, then the '/' partition should be on the OS disk and not span other disks. <br/><br/> **Linux VMs:**<br/> If `/boot` is on a dedicated partition, it should reside on the OS disk and not be spread across multiple disks.<br/> If `/boot` is part of the root (/) partition, then the '/' partition should be on the OS disk and not span other disks. **UEFI boot** | Supported. UEFI-based VMs will be migrated to Azure generation 2 VMs. **Disk size** | Up to 2-TB OS disk for gen 1 VM and gen 2 VMs; 32 TB for data disks. Changing the size of the source disk after initiating replication is supported and will not impact ongoing replication cycle.
The table summarizes agentless migration requirements for VMware vSphere VMs.
**IPv6** | Not supported. **Target disk** | VMs can be migrated only to managed disks (standard HDD, standard SSD, premium SSD) in Azure. **Simultaneous replication** | Up to 300 simultaneously replicating VMs per vCenter Server with one appliance. Up to 500 simultaneously replicating VMs per vCenter Server when an additional [scale-out appliance](./how-to-scale-out-for-migration.md) is deployed.
-**Automatic installation of Azure VM agent (Windows and Linux Agent)** | Supported for Windows Server 2008 R2 onwards. <br/> Supported for RHEL6, RHEL7, CentOS7, Ubuntu 14.04, Ubuntu 16.04, Ubuntu18.04, Ubuntu 19.04, Ubuntu 19.10, Ubuntu 20.04.
+**Automatic installation of Azure VM agent (Windows and Linux Agent)** | Supported for Windows Server 2008 R2 onwards. Supported for RHEL 6, RHEL 7, CentOS 7, Ubuntu 14.04, Ubuntu 16.04, Ubuntu 18.04, Ubuntu 19.04, Ubuntu 19.10, Ubuntu 20.04.
> [!NOTE]
> Ensure that the following special characters are not passed in any credentials as they are not supported for SSO passwords:
migrate Migrate Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/migrate-support-matrix.md
ms. Previously updated : 01/03/2023- Last updated : 07/27/2023+ # Azure Migrate support matrix
migrate Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/policy-reference.md
Title: Built-in policy definitions for Azure Migrate description: Lists Azure Policy built-in policy definitions for Azure Migrate. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2023 Last updated : 08/30/2023
migrate Prepare For Agentless Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/prepare-for-agentless-migration.md
Previously updated : 12/12/2022 Last updated : 09/01/2023
Azure Migrate automatically handles these configuration changes for the followin
**Operating system versions supported for hydration**
- Windows Server 2008 or later
-- Red Hat Enterprise Linux 8.x, 7.9, 7.8, 7.7, 7.6, 7.5, 7.4, 7.0, 6.x
-- CentOS 8.x, 7.7, 7.6, 7.5, 7.4, 6.x
-- SUSE Linux Enterprise Server 15 SP0, 15 SP1, 12, 11 SP4, 11 SP3
-- Ubuntu 20.04, 19.04, 19.10, 18.04LTS, 16.04LTS, 14.04LTS
-- Debian 10, 9, 8, 7
-- Oracle Linux 8, 7.7-CI, 7.7, 6
+- Red Hat Enterprise Linux 9.x, 8.x, 7.9, 7.8, 7.7, 7.6, 7.5, 7.4, 7.0, 6.x
+- CentOS 9.x (Release and Stream), 8.x (Release and Stream), 7.7, 7.6, 7.5, 7.4, 6.x
+- SUSE Linux Enterprise Server 15 SP4, 15 SP3, 15 SP2, 15 SP1, 15 SP0, 12, 11 SP4, 11 SP3
+- Ubuntu 22.04, 21.04, 20.04, 19.04, 19.10, 18.04LTS, 16.04LTS, 14.04LTS
+- Kali Linux (2016, 2017, 2018, 2019, 2020, 2021, 2022)
+- Debian 11, 10, 9, 8, 7
+- Oracle Linux 9, 8, 7.7-CI, 7.7, 6
You can also use this article to manually prepare VMs for migration to Azure for operating system versions not listed above. At a high level, these changes include:
migrate Troubleshoot Appliance Diagnostic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/troubleshoot-appliance-diagnostic.md
ms. Previously updated : 08/11/2021 Last updated : 07/11/2023+ # Diagnose and solve issues with Azure Migrate appliance
You can run **Diagnose and solve** at any time from the appliance configuration
## Diagnostic checks
-*Diagnose and solve* runs some pre-validations to see if the required configuration files are not missing or blocked by an anti-virus software on the appliance and then performs the following checks:
+*Diagnose and solve* runs some pre-validations to verify that the required configuration files aren't missing or blocked by antivirus software on the appliance, and then performs the following checks:
**Category** | **Diagnostics check** | **Description**
--- | --- | ---
You can run **Diagnose and solve** at any time from the appliance configuration
||VDDK check | Checks if the required VDDK files have been downloaded and copied at the required location on the appliance server.
**Service health checks** |Operational status |Checks if the agents on the appliance are in running state. <br>*If not, appliance will auto-resolve by restarting the agents.*
||Service endpoint connectivity |Checks if the agents can communicate to their respective services on Azure either directly or via proxy.
-**Azure-specific checks** |AAD App availability* | Checks if the AAD App created during the appliance registration is available and is accessible from the appliance.
+**Azure-specific checks** |Azure Active Directory App availability* | Checks if the Azure Active Directory App created during the appliance registration is available and is accessible from the appliance.
||Migrate project availability* | Checks if the Migrate project to which the appliance has been registered still exists and is accessible from the appliance.
||Essential resources availability*| Checks if the Migrate resources created during appliance registration still exist and are accessible from the appliance.
**Appliance-specific checks** | Key Vault certificate availability* | Checks if the certificate downloaded from Key Vault during appliance registration is still available on the appliance server. <br> *If not, appliance will auto-resolve by downloading the certificate again, provided the Key Vault is available and accessible*.
-|| Credential store availability | Checks if the Credential store resources on the appliance server have not been moved/deleted/edited.
+|| Credential store availability | Checks if the Credential store resources on the appliance server haven't been moved/deleted/edited.
|| Replication appliance/ASR components | Checks if the same server has also been used to install any ASR/replication appliance components. *It is currently not supported to install both Azure Migrate and replication appliance (for agent-based migration) on the same server.*
|| OS license availability | Checks if the evaluation license on the appliance server created from OVA/VHD is still valid. *The Windows Server 2022 evaluation license is valid for 180 days.*
|| CPU & memory utilization | Checks the CPU and memory utilized by the Migrate agents on the appliance server.
You can run **Diagnose and solve** at any time from the appliance configuration
## Running diagnostic checks
-If you are getting any issues with the appliance during its configuration or seeing issues with the ongoing Migrate operations like discovery, assessment and/or replication (*in case of VMware appliance*) on the portal, you can go to the appliance configuration manager and run diagnostics.
+If you're experiencing issues with the appliance during its configuration, or issues with ongoing Migrate operations like discovery, assessment, and/or replication (*in the case of the VMware appliance*) in the portal, you can go to the appliance configuration manager and run diagnostics.
> [!NOTE]
> Currently, **Diagnose and solve** can perform checks related to appliance connectivity to Azure and the availability of required resources on the appliance server and/or Azure. Connectivity or discovery issues with the source environment, like vCenter Server/ESXi hosts/Hyper-V hosts/VMs/physical servers, are currently not covered under **Diagnose and solve**.
If you are getting any issues with the appliance during its configuration or see
![Diagnostic report](./media/troubleshoot-appliance-diagnostic-solve/diagnostic-report.png)
-1. Once diagnostic checks have completed, you can either view the report in another tab where you can choose it save it in a PDF format or you can go to this location-**C:\Users\Public\Desktop\DiagnosticsReport** on the appliance server where the report gets auto-saved in an HTML format.
+1. Once diagnostic checks have completed, you can either view the report in another tab, where you can choose to save it in PDF format, or go to **C:\Users\Public\Desktop\DiagnosticsReport** on the appliance server, where the report is auto-saved in HTML format.
![View diagnostic report](./media/troubleshoot-appliance-diagnostic-solve/view-diagnostic-report.png)
If you are getting any issues with the appliance during its configuration or see
![View status of diagnostic report](./media/troubleshoot-appliance-diagnostic-solve/view-status.png)
-1. You can follow the remediation steps on the report to solve an issue. If you are unable to resolve the issue, it is recommended that you attach the diagnostics report while creating a Microsoft support case so that it helps expedite the resolution.
+1. You can follow the remediation steps in the report to solve an issue. If you're unable to resolve the issue, we recommend attaching the diagnostics report when you create a Microsoft support case, to help expedite resolution.
## Next steps
-If you are getting issues not covered under **Diagnose and solve**, you can go to [troubleshoot the Azure Migrate appliance](./troubleshoot-appliance.md) to find the remediation steps.
+If you're getting issues not covered under **Diagnose and solve**, you can go to [troubleshoot the Azure Migrate appliance](./troubleshoot-appliance.md) to find the remediation steps.
migrate Troubleshoot Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/troubleshoot-upgrade.md
Previously updated : 06/08/2023- Last updated : 07/11/2023+ # Troubleshoot Windows OS upgrade issues
migrate Tutorial App Containerization Aspnet App Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-app-containerization-aspnet-app-service.md
ms. Previously updated : 10/14/2021 Last updated : 07/14/2023+ # ASP.NET app containerization and migration to Azure App Service
migrate Tutorial Assess Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-assess-sql.md
Previously updated : 06/29/2023- Last updated : 08/17/2023+
Even when SQL Server credentials are not available, this report will provide rig
1. **Migrate all SQL databases to Azure SQL Database**: In this strategy, you can see how to migrate individual databases to Azure SQL Database and review the readiness and cost estimates.
+### Review support status
+
+This section indicates the support status of the SQL servers, instances, and databases evaluated in the assessment.
+
+The Supportability section displays the support status of the SQL licenses.
+The Discovery details section gives a graphic representation of the number of discovered SQL instances and their SQL editions.
+
+1. Select the graph in the **Supportability** section to view a list of the assessed SQL instances.
+2. The **Database instance license support status** column displays the support status of the SQL license: mainstream support, extended support, or out of support. Selecting the support status opens a pane on the right that shows the type of support status, the duration of support, and the recommended steps to secure the workloads.
+ - To view the remaining duration of support, that is, the number of months for which the license is valid, select **Columns** > **Support ends in** > **Submit**. The **Support ends in** column displays the duration in months.
++ ### Review readiness You can review readiness reports for different migration strategies:
migrate Tutorial Assess Vmware Azure Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-assess-vmware-azure-vm.md
ms. Previously updated : 06/29/2023 Last updated : 08/24/2023 -+ #Customer intent: As a VMware VM admin, I want to assess my VMware VMs in preparation for migration to Azure.
An assessment describes:
To view an assessment: 1. In **Servers, databases and web apps** > **Azure Migrate: Discovery and assessment**, select the number next to **Azure VM assessment**.
-2. In **Assessments**, select an assessment to open it. As an example (estimations and costs for example only):
-
- ![Screenshot of Assessment summary.](./media/tutorial-assess-vmware-azure-vm/assessment-summary.png)
-
-3. Review the assessment summary. You can also edit the assessment properties, or recalculate the assessment.
-
+2. In **Assessments**, select an assessment to open it.
+3. Review the assessment summary. You can also edit the assessment properties, or recalculate the assessment.
+ - The Azure readiness graph displays the status of the VM.
+ - The Supportability section displays the distribution by OS license support status and the distribution by Windows Server version.
+ - The Savings option section displays the estimated savings on moving to Azure.
### Review readiness
migrate Tutorial Assess Webapps Hyper V https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-assess-webapps-hyper-v.md
- Title: Tutorial to assess web apps for migration to Azure App Service for Hyper-V VMs
-description: Learn how to create assessment for Azure App Service for Hyper-V VMs in Azure Migrate
---- Previously updated : 06/29/2023----
-# Tutorial: Assess ASP.NET web apps for migration to Azure App Service for Hyper-V VMs
-
-As part of your migration journey to Azure, you assess your on-premises workloads to measure cloud readiness, identify risks, and estimate costs and complexity.
-This article shows you how to assess discovered ASP.NET web apps running on IIS web servers in preparation for migration to Azure App Service, using the Azure Migrate: Discovery and assessment tool.
-
-In this tutorial, you learn how to:
-
-> [!div class="checklist"]
-> * Run an assessment based on web apps configuration data.
-> * Review an Azure App Service assessment
-
-> [!NOTE]
-> Tutorials show the quickest path for trying out a scenario, and use default options where possible.
-
-## Prerequisites
--- If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/pricing/free-trial/) before you begin.-- Before you follow this tutorial to assess your web apps for migration to Azure App Service, make sure you've discovered the web apps you want to assess using the Azure Migrate appliance, [follow this tutorial](tutorial-discover-hyper-v.md)-- If you want to try out this feature in an existing project, ensure that you have completed the [prerequisites](how-to-discover-sql-existing-project.md) in this article.-
-## Run an assessment
-
-Run an assessment as follows:
-
-1. On the **Get started** page > **Servers, databases and web apps**, select **Discover, assess and migrate**.
-
- :::image type="content" source="./media/tutorial-assess-webapps/discover-assess-migrate.png" alt-text="Screenshot of Overview page for Azure Migrate.":::
-
-2. On **Azure Migrate: Discovery and assessment**, select **Assess** and choose the assessment type as **Azure App Service**.
-
- :::image type="content" source="./media/tutorial-assess-webapps/assess.png" alt-text="Screenshot of dropdown to choose assessment type as Azure App Service.":::
-
-3. In **Create assessment**, you will be able to see the assessment type pre-selected as **Azure App Service** and the discovery source defaulted to **Servers discovered from Azure Migrate appliance**.
-
-4. Select **Edit** to review the assessment properties.
-
- :::image type="content" source="./media/tutorial-assess-webapps/assess-webapps.png" alt-text="Screenshot of Edit button from where assessment properties can be customized.":::
-
-5. Here's what's included in Azure App Service assessment properties:
-
- **Property** | **Details**
- |
- **Target location** | The Azure region to which you want to migrate. Azure App Service configuration and cost recommendations are based on the location that you specify.
- **Isolation required** | Select yes if you want your web apps to run in a private and dedicated environment in an Azure datacenter using Dv2-series VMs with faster processors, SSD storage, and double the memory to core ratio compared to Standard plans.
- - In **Savings options (compute)**, specify the savings option that you want the assessment to consider, helping to optimize your Azure compute cost.
- - [Azure reservations](../cost-management-billing/reservations/save-compute-costs-reservations.md) (1 year or 3 year reserved) are a good option for the most consistently running resources.
- - [Azure Savings Plan](../cost-management-billing/savings-plan/savings-plan-compute-overview.md) (1 year or 3 year savings plan) provide additional flexibility and automated cost optimization. Ideally post migration, you could use Azure reservation and savings plan at the same time (reservation will be consumed first), but in the Azure Migrate assessments, you can only see cost estimates of 1 savings option at a time.
- - When you select 'None', the Azure compute cost is based on the Pay as you go rate or based on actual usage.
- - You need to select pay-as-you-go in offer/licensing program to be able to use Reserved Instances or Azure Savings Plan. When you select any savings option other than 'None', the 'Discount (%)' setting is not applicable.
- **Offer** | The [Azure offer](https://azure.microsoft.com/support/legal/offer-details/) in which you're enrolled. The assessment estimates the cost for that offer.
- **Currency** | The billing currency for your account.
- **Discount (%)** | Any subscription-specific discounts you receive on top of the Azure offer. The default setting is 0%.
- **EA subscription** | Specifies that an Enterprise Agreement (EA) subscription is used for cost estimation. Takes into account the discount applicable to the subscription. <br/><br/> Leave the settings for reserved instances, and discount (%) properties with their default settings.
-
-1. In **Create assessment**, select **Next**.
-1. In **Select servers to assess** > **Assessment name** > specify a name for the assessment.
-1. In **Select or create a group**, select **Create New** and specify a group name.
-1. Select the appliance, and select the servers that you want to add to the group. Select **Next**.
-1. In **Review + create assessment**, review the assessment details, and select **Create Assessment** to create the group and run the assessment.
-1. After the assessment is created, go to **Servers, databases and web apps** > **Azure Migrate: Discovery and assessment**. Refresh the tile data by selecting the **Refresh** option on top of the tile. Wait for the data to refresh.
-
- :::image type="content" source="./media/tutorial-assess-webapps/tile-refresh.png" alt-text="Screenshot of Refresh discovery and assessment tool data.":::
-
-1. Select the number next to Azure App Service assessment.
-
- :::image type="content" source="./media/tutorial-assess-webapps/assessment-webapps-navigation.png" alt-text="Screenshot of Navigating to created assessment.":::
-
-1. Select the assessment name, which you wish to view.
-
-## Review an assessment
-
-**To view an assessment**:
-
-1. **Servers, databases and web apps** > **Azure Migrate: Discovery and assessment**, select the number next to the Azure App Service assessment.
-2. Select the assessment name, which you wish to view.
-
- :::image type="content" source="./media/tutorial-assess-webapps/assessment-webapps-summary.png" alt-text="Screenshot of App Service assessment overview.":::
-
-3. Review the assessment summary. You can also edit the assessment properties or recalculate the assessment.
-
-#### Azure App Service readiness
-
-This indicates the distribution of the assessed web apps. You can drill down to understand the details around migration issues/warnings that you can remediate before migration to Azure App Service. [Learn More](concepts-azure-webapps-assessment-calculation.md).
-You can also view the recommended App Service SKU and plan for migrating to Azure App Service.
-
-#### Azure App Service cost details
-
-An [App Service plan](../app-service/overview-hosting-plans.md) carries a [charge](https://azure.microsoft.com/pricing/details/app-service/windows/) on the compute resources it uses.
-
-### Review readiness
-
-1. Select **Azure App Service readiness**.
-
- :::image type="content" source="./media/tutorial-assess-webapps/assessment-webapps-readiness.png" alt-text="Screenshot of Azure App Service readiness details.":::
-
-1. Review Azure App Service readiness column in table, for the assessed web apps:
- 1. If there are no compatibility issues found, the readiness is marked as **Ready** for the target deployment type.
- 1. If there are non-critical compatibility issues, such as degraded or unsupported features that do not block the migration to a specific target deployment type, the readiness is marked as **Ready with conditions** (hyperlinked) with **warning** details and recommended remediation guidance.
- 1. If there are any compatibility issues that may block the migration to a specific target deployment type, the readiness is marked as **Not ready** with **issue** details and recommended remediation guidance.
- 1. If the discovery is still in progress or there are any discovery issues for a web app, the readiness is marked as **Unknown** as the assessment could not compute the readiness for that web app.
-1. Review the recommended SKU for the web apps, which is determined as per the matrix below:
-
- **Isolation required** | **Reserved instance** | **App Service plan/ SKU**
- | |
- Yes | Yes | I1
- Yes | No | I1
- No | Yes | P1v3
- No | No | P1v2
-
- **Azure App Service readiness** | **Determine App Service SKU** | **Determine Cost estimates**
- | |
- Ready | Yes | Yes
- Ready with conditions | Yes | Yes
- Not ready | No | No
- Unknown | No | No
-
-1. Select the App Service plan link in the Azure App Service readiness table to see the App Service plan details such as compute resources and other web apps that are part of the same plan.
-
-### Review cost estimates
-
-The assessment summary shows the estimated monthly costs for hosting you web apps in App Service. In App Service, you pay charges per App Service plan and not per web app. One or more apps can be configured to run on the same computing resources (or in the same App Service plan). The apps that you add into this App Service plan run on the compute resources defined by your App Service plan.
-To optimize cost, Azure Migrate assessment allocates multiple web apps to each recommended App Service plan. The number of web apps allocated to each plan instance is shown below.
-
-**App Service plan** | **Web apps per App Service plan**
- |
-I1 | 8
-P1v2 | 8
-P1v3 | 16
--
-## Next steps
--- Learn how to [perform at-scale agentless migration of ASP.NET web apps to Azure App Service](./tutorial-migrate-webapps.md).-- [Learn more](concepts-azure-webapps-assessment-calculation.md) about how Azure App Service assessments are calculated.
migrate Tutorial Assess Webapps Physical https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-assess-webapps-physical.md
- Title: Tutorial to assess web apps for migration to Azure App Service for Physical machines
-description: Learn how to create assessment for Azure App Service for Physical machines in Azure Migrate
---- Previously updated : 06/29/2023----
-# Tutorial: Assess ASP.NET web apps for migration to Azure App Service for Physical machines
-
-As part of your migration journey to Azure, you assess your on-premises workloads to measure cloud readiness, identify risks, and estimate costs and complexity.
-This article shows you how to assess discovered ASP.NET web apps running on IIS web servers in preparation for migration to Azure App Service, using the Azure Migrate: Discovery and assessment tool.
-
-In this tutorial, you learn how to:
-
-> [!div class="checklist"]
-> * Run an assessment based on web apps configuration data.
-> * Review an Azure App Service assessment
-
-> [!NOTE]
-> Tutorials show the quickest path for trying out a scenario, and use default options where possible.
-
-## Prerequisites
--- If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/pricing/free-trial/) before you begin.-- Before you follow this tutorial to assess your web apps for migration to Azure App Service, make sure you've discovered the web apps you want to assess using the Azure Migrate appliance, [follow this tutorial](tutorial-discover-physical.md)-- If you want to try out this feature in an existing project, ensure that you have completed the [prerequisites](how-to-discover-sql-existing-project.md) in this article.-
-## Run an assessment
-
-Run an assessment as follows:
-
-1. On the **Get started** page > **Servers, databases and web apps**, select **Discover, assess and migrate**.
-
- :::image type="content" source="./media/tutorial-assess-webapps/discover-assess-migrate.png" alt-text="Screenshot of Overview page for Azure Migrate.":::
-
-2. On **Azure Migrate: Discovery and assessment**, select **Assess** and choose the assessment type as **Azure App Service**.
-
- :::image type="content" source="./media/tutorial-assess-webapps/assess.png" alt-text="Screenshot of Dropdown to choose assessment type as Azure App Service.":::
-
-3. In **Create assessment**, you will be able to see the assessment type pre-selected as **Azure App Service** and the discovery source defaulted to **Servers discovered from Azure Migrate appliance**.
-
-4. Select **Edit** to review the assessment properties.
-
- :::image type="content" source="./media/tutorial-assess-webapps/assess-webapps.png" alt-text="Screenshot of Edit button from where assessment properties can be customized.":::
-
-5. Here's what's included in Azure App Service assessment properties:
-
- **Property** | **Details**
- |
- **Target location** | The Azure region to which you want to migrate. Azure App Service configuration and cost recommendations are based on the location that you specify.
- **Isolation required** | Select yes if you want your web apps to run in a private and dedicated environment in an Azure datacenter using Dv2-series VMs with faster processors, SSD storage, and double the memory to core ratio compared to Standard plans.
- - In **Savings options (compute)**, specify the savings option that you want the assessment to consider, helping to optimize your Azure compute cost.
- - [Azure reservations](../cost-management-billing/reservations/save-compute-costs-reservations.md) (1 year or 3 year reserved) are a good option for the most consistently running resources.
- - [Azure Savings Plan](../cost-management-billing/savings-plan/savings-plan-compute-overview.md) (1 year or 3 year savings plan) provide additional flexibility and automated cost optimization. Ideally post migration, you could use Azure reservation and savings plan at the same time (reservation will be consumed first), but in the Azure Migrate assessments, you can only see cost estimates of 1 savings option at a time.
- - When you select 'None', the Azure compute cost is based on the Pay as you go rate or based on actual usage.
- - You need to select pay-as-you-go in offer/licensing program to be able to use Reserved Instances or Azure Savings Plan. When you select any savings option other than 'None', the 'Discount (%)' setting is not applicable.
- **Offer** | The [Azure offer](https://azure.microsoft.com/support/legal/offer-details/) in which you're enrolled. The assessment estimates the cost for that offer.
- **Currency** | The billing currency for your account.
- **Discount (%)** | Any subscription-specific discounts you receive on top of the Azure offer. The default setting is 0%.
- **EA subscription** | Specifies that an Enterprise Agreement (EA) subscription is used for cost estimation. Takes into account the discount applicable to the subscription. <br/><br/> Leave the settings for reserved instances, and discount (%) properties with their default settings.
-
-1. In **Create assessment**, select **Next**.
-1. In **Select servers to assess** > **Assessment name** > specify a name for the assessment.
-1. In **Select or create a group**, select **Create New** and specify a group name.
-1. Select the appliance, and select the servers that you want to add to the group. Select **Next**.
-1. In **Review + create assessment**, review the assessment details, and select **Create Assessment** to create the group and run the assessment.
-1. After the assessment is created, go to **Servers, databases and web apps** > **Azure Migrate: Discovery and assessment**. Refresh the tile data by selecting the **Refresh** option on top of the tile. Wait for the data to refresh.
-
- :::image type="content" source="./media/tutorial-assess-webapps/tile-refresh.png" alt-text="Screenshot of Refresh discovery and assessment tool data.":::
-
-1. Select the number next to Azure App Service assessment.
-
- :::image type="content" source="./media/tutorial-assess-webapps/assessment-webapps-navigation.png" alt-text="Screenshot of Navigation to created assessment.":::
-
-1. Select the assessment name, which you wish to view.
-
-## Review an assessment
-
-**To view an assessment**:
-
-1. **Servers, databases and web apps** > **Azure Migrate: Discovery and assessment**, select the number next to the Azure App Service assessment.
-2. Select the assessment name, which you wish to view.
-
- :::image type="content" source="./media/tutorial-assess-webapps/assessment-webapps-summary.png" alt-text="Screenshot of App Service assessment overview.":::
-
-3. Review the assessment summary. You can also edit the assessment properties or recalculate the assessment.
-
-#### Azure App Service readiness
-
-This indicates the distribution of the assessed web apps. You can drill down to understand the details around migration issues/warnings that you can remediate before migration to Azure App Service. [Learn More](concepts-azure-webapps-assessment-calculation.md).
-You can also view the recommended App Service SKU and plan for migrating to Azure App Service.
-
-#### Azure App Service cost details
-
-An [App Service plan](../app-service/overview-hosting-plans.md) carries a [charge](https://azure.microsoft.com/pricing/details/app-service/windows/) on the compute resources it uses.
-
-### Review readiness
-
-1. Select **Azure App Service readiness**.
-
- :::image type="content" source="./media/tutorial-assess-webapps/assessment-webapps-readiness.png" alt-text="Screenshot of Azure App Service readiness details.":::
-
-1. Review Azure App Service readiness column in table, for the assessed web apps:
- 1. If there are no compatibility issues found, the readiness is marked as **Ready** for the target deployment type.
- 1. If there are non-critical compatibility issues, such as degraded or unsupported features that do not block the migration to a specific target deployment type, the readiness is marked as **Ready with conditions** (hyperlinked) with **warning** details and recommended remediation guidance.
- 1. If there are any compatibility issues that may block the migration to a specific target deployment type, the readiness is marked as **Not ready** with **issue** details and recommended remediation guidance.
- 1. If the discovery is still in progress or there are any discovery issues for a web app, the readiness is marked as **Unknown** as the assessment could not compute the readiness for that web app.
-1. Review the recommended SKU for the web apps, which is determined as per the matrix below:
-
- **Isolation required** | **Reserved instance** | **App Service plan/ SKU**
- | |
- Yes | Yes | I1
- Yes | No | I1
- No | Yes | P1v3
- No | No | P1v2
-
- **Azure App Service readiness** | **Determine App Service SKU** | **Determine Cost estimates**
- | |
- Ready | Yes | Yes
- Ready with conditions | Yes | Yes
- Not ready | No | No
- Unknown | No | No
-
-1. Select the App Service plan link in the Azure App Service readiness table to see the App Service plan details such as compute resources and other web apps that are part of the same plan.
-
-### Review cost estimates
-
-The assessment summary shows the estimated monthly costs for hosting you web apps in App Service. In App Service, you pay charges per App Service plan and not per web app. One or more apps can be configured to run on the same computing resources (or in the same App Service plan). The apps that you add into this App Service plan run on the compute resources defined by your App Service plan.
-To optimize cost, Azure Migrate assessment allocates multiple web apps to each recommended App Service plan. The number of web apps allocated to each plan instance is shown below.
-
-**App Service plan** | **Web apps per App Service plan**
- |
-I1 | 8
-P1v2 | 8
-P1v3 | 16
--
-## Next steps
--- Learn how to [perform at-scale agentless migration of ASP.NET web apps to Azure App Service](./tutorial-migrate-webapps.md).-- [Learn more](concepts-azure-webapps-assessment-calculation.md) about how Azure App Service assessments are calculated.
migrate Tutorial Assess Webapps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-assess-webapps.md
description: Learn how to create assessment for Azure App Service in Azure Migra
Previously updated : 06/29/2023 Last updated : 08/24/2023 -+
As part of your migration journey to Azure, you assess your on-premises workloads to measure cloud readiness, identify risks, and estimate costs and complexity. This article shows you how to assess discovered ASP.NET web apps running on IIS web servers in preparation for migration to Azure App Service, using the Azure Migrate: Discovery and assessment tool.
-In this tutorial, you learn how to:
+In this tutorial, you learn how to:
> [!div class="checklist"] > * Run an assessment based on web apps configuration data.
In this tutorial, you learn how to:
## Prerequisites - If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/pricing/free-trial/) before you begin.-- Before you follow this tutorial to assess your web apps for migration to Azure App Service, make sure you've discovered the web apps you want to assess using the Azure Migrate appliance, [follow this tutorial](tutorial-discover-vmware.md)
+- Before you follow this tutorial to assess your web apps for migration to Azure App Service, make sure you've discovered the web apps you want to assess using the Azure Migrate appliance for [VMware](tutorial-discover-vmware.md), [Hyper-V](tutorial-discover-hyper-v.md), or [Physical servers](tutorial-discover-physical.md).
- If you want to try out this feature in an existing project, ensure that you have completed the [prerequisites](how-to-discover-sql-existing-project.md) in this article. ## Run an assessment
migrate Tutorial Discover Hyper V https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-discover-hyper-v.md
ms. Previously updated : 07/10/2023- Last updated : 08/24/2023+ #Customer intent: As a Hyper-V admin, I want to discover my on-premises servers on Hyper-V.
After discovery finishes, you can verify that the servers appear in the portal.
1. Open the Azure Migrate dashboard. 2. In **Azure Migrate - Servers** > **Azure Migrate: Discovery and assessment** page, click the icon that displays the count for **Discovered servers**.
+#### View support status
+
+You can gain deeper insights into the support posture of your environment from the **Discovered servers** and **Discovered database instances** sections.
+
+The **Operating system license support status** column displays the support status of the operating system: mainstream support, extended support, or out of support. Selecting the support status opens a pane on the right that provides clear guidance on the actionable steps you can take to secure servers and databases in extended support or out of support.
+
+To view the remaining duration until end of support, that is, the number of months for which the license is valid, select **Columns** > **Support ends in** > **Submit**. The **Support ends in** column displays the duration in months.
+
+The **Database instances** section displays the number of instances discovered by Azure Migrate. Select the number of instances to view the database instance details. The **Database instance license support status** column displays the support status of each database instance. Selecting the support status opens a pane on the right that provides clear guidance on the actionable steps you can take to secure databases in extended support or out of support.
+
+To view the remaining duration until end of support, that is, the number of months for which the license is valid, select **Columns** > **Support ends in** > **Submit**. The **Support ends in** column displays the duration in months.
++ ## Next steps - [Assess servers on Hyper-V environment](tutorial-assess-hyper-v.md) for migration to Azure VMs.
migrate Tutorial Discover Physical https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-discover-physical.md
ms. Previously updated : 07/10/2023 Last updated : 08/30/2023 -+ #Customer intent: As a server admin I want to discover my on-premises server inventory.
Before you start this tutorial, ensure you have these prerequisites in place.
**Requirement** | **Details**
--- | ---
-**Appliance** | You need a server to run the Azure Migrate appliance. The server should have:<br/><br/> - Windows Server 2016 installed.<br/> _(Currently the deployment of appliance is only supported on Windows Server 2016.)_<br/><br/> - 16-GB RAM, 8 vCPUs, around 80 GB of disk storage<br/><br/> - A static or dynamic IP address, with internet access, either directly or through a proxy.<br/><br/> - Outbound internet connectivity to the required [URLs](migrate-appliance.md#url-access) from the appliance.
+**Appliance** | You need a server to run the Azure Migrate appliance. The server should have:<br/><br/> - Windows Server 2022 installed.<br/> _(Currently the deployment of appliance is only supported on Windows Server 2022.)_<br/><br/> - 16-GB RAM, 8 vCPUs, around 80 GB of disk storage<br/><br/> - A static or dynamic IP address, with internet access, either directly or through a proxy.<br/><br/> - Outbound internet connectivity to the required [URLs](migrate-appliance.md#url-access) from the appliance.
**Windows servers** | Allow inbound connections on WinRM port 5985 (HTTP) for discovery of Windows servers.<br /><br /> To discover ASP.NET web apps running on IIS web server, check [supported Windows OS and IIS versions](migrate-support-matrix-vmware.md#web-apps-discovery-requirements).
**Linux servers** | Allow inbound connections on port 22 (TCP) for discovery of Linux servers.<br /><br /> To discover Java web apps running on Apache Tomcat web server, check [supported Linux OS and Tomcat versions](migrate-support-matrix-vmware.md#web-apps-discovery-requirements).
**SQL Server access** | To discover SQL Server instances and databases, the Windows or SQL Server account must be a member of the sysadmin server role or have [these permissions](migrate-support-matrix-physical.md#configure-the-custom-login-for-sql-server-discovery) for each SQL Server instance.
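For the **SQL Server access** row above, a minimal T-SQL sketch of granting the discovery account sysadmin membership (hedged: `CONTOSO\MigrateSvc` is a hypothetical placeholder account, and the linked custom-login permissions are the least-privilege alternative):

```sql
-- Run on each SQL Server instance that Azure Migrate should discover.
-- CONTOSO\MigrateSvc is a placeholder; substitute the Windows or SQL account used for discovery.
ALTER SERVER ROLE [sysadmin] ADD MEMBER [CONTOSO\MigrateSvc];
```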
After discovery finishes, you can verify that the servers appear in the portal.
1. Open the Azure Migrate dashboard. 2. In **Azure Migrate - Servers** > **Azure Migrate: Discovery and assessment** page, select the icon that displays the count for **Discovered servers**.
+#### View support status
+
+You can gain deeper insights into the support posture of your environment from the **Discovered servers** and **Discovered database instances** sections.
+
+The **Operating system license support status** column displays the support status of the operating system: mainstream support, extended support, or out of support. Selecting the support status opens a pane on the right that provides clear guidance on the actionable steps you can take to secure servers and databases in extended support or out of support.
+
+To view the remaining duration until end of support, that is, the number of months for which the license is valid, select **Columns** > **Support ends in** > **Submit**. The **Support ends in** column displays the duration in months.
+
+The **Database instances** section displays the number of instances discovered by Azure Migrate. Select the number of instances to view the database instance details. The **Database instance license support status** column displays the support status of each database instance. Selecting the support status opens a pane on the right that provides clear guidance on the actionable steps you can take to secure databases in extended support or out of support.
+
+To view the remaining duration until end of support, that is, the number of months for which the license is valid, select **Columns** > **Support ends in** > **Submit**. The **Support ends in** column displays the duration in months.
++ ## Delete servers After the discovery has been initiated, you can delete any of the added servers from the appliance configuration manager by searching for the server name in the **Add discovery source** table and by selecting **Delete**.
migrate Tutorial Discover Vmware https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-discover-vmware.md
ms. Previously updated : 07/10/2023 Last updated : 08/24/2023 -+ #Customer intent: As an VMware admin, I want to discover my on-premises servers running in a VMware environment.
To start vCenter Server discovery, select **Start discovery**. After the discove
:::image type="content" source="./media/tutorial-discover-vmware/discovery-assessment-tile.png" alt-text="Screenshot that shows how to refresh data in discovery and assessment tile.":::
+Details such as OS license support status, inventory, and database instances are displayed.
+
+#### View support status
+
+You can gain deeper insights into the support posture of your environment from the **Discovered servers** and **Discovered database instances** sections.
+
+The **Operating system license support status** column displays the support status of the operating system: mainstream support, extended support, or out of support. Selecting the support status opens a pane on the right that provides clear guidance on the actionable steps you can take to secure servers and databases in extended support or out of support.
+
+To view the remaining duration until end of support, that is, the number of months for which the license is valid, select **Columns** > **Support ends in** > **Submit**. The **Support ends in** column displays the duration in months.
+
+The **Database instances** section displays the number of instances discovered by Azure Migrate. Select the number of instances to view the database instance details. The **Database instance license support status** column displays the support status of each database instance. Selecting the support status opens a pane on the right that provides clear guidance on the actionable steps you can take to secure databases in extended support or out of support.
+
+To view the remaining duration until end of support, that is, the number of months for which the license is valid, select **Columns** > **Support ends in** > **Submit**. The **Support ends in** column displays the duration in months.
+
+ ## Next steps - Learn how to [assess servers to migrate to Azure VMs](./tutorial-assess-vmware-azure-vm.md).
migrate Tutorial Migrate Hyper V https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-migrate-hyper-v.md
ms. Previously updated : 12/12/2022- Last updated : 07/13/2023+ # Migrate Hyper-V VMs to Azure
migrate Tutorial Migrate Vmware Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-migrate-vmware-powershell.md
ms. Previously updated : 08/18/2022 Last updated : 05/11/2023
migrate Tutorial Modernize Asp Net Aks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-modernize-asp-net-aks.md
Previously updated : 02/28/2023 Last updated : 08/31/2023
Before you begin this tutorial, you should address the following:
### Limitations - You can migrate ASP.NET applications using Microsoft .NET framework 3.5 or later.
+ - You can migrate application servers running Windows Server 2012 R2 or later (application servers should be running PowerShell version 5.1).
- Applications should be running on Internet Information Services (IIS) 7.5 or later. ## Enable replication
migrate Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/whats-new.md
ms. Previously updated : 07/26/2023- Last updated : 08/24/2023+ # What's new in Azure Migrate [Azure Migrate](migrate-services-overview.md) helps you to discover, assess, and migrate on-premises servers, apps, and data to the Microsoft Azure cloud. This article summarizes new releases and features in Azure Migrate.
+## Update (August 2023)
+- Azure Migrate now helps you gain deeper insights into the support posture of your IT estate by surfacing Windows Server and SQL Server license support information. You can stay ahead of license support deadlines with the *Support ends in* information, which shows the time left until the end of support for the respective servers and databases.
+- Azure Migrate also provides clear guidance regarding actionable steps that can be taken to secure servers and databases in extended support or out of support.
+- Envision Extended Security Update (ESU) savings for out of support Windows Server and SQL Server licenses using Azure Migrate Business case.
## Update (July 2023) - Discover Azure Migrate from Operations Manager console: Operations Manager 2019 UR3 and later allows you to discover Azure Migrate from the console. You can now generate a complete inventory of your on-premises environment without an appliance. This inventory can be used in Azure Migrate to assess machines at scale. [Learn more](https://support.microsoft.com/topic/discover-azure-migrate-for-operations-manager-04b33766-f824-4e99-9065-3109411ede63). - Public Preview: Upgrade your Windows OS during Migration using the Migration and modernization tool in your VMware environment. [Learn more](how-to-upgrade-windows.md).
mysql Concepts Backup Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-backup-restore.md
Previously updated : 07/26/2022 Last updated : 07/26/2022 # Backup and restore in Azure Database for MySQL - Flexible Server
The Backup and Restore blade in the Azure portal provides a complete list of the
In Azure Database for MySQL, performing a restore creates a new server from the original server's backups. There are two types of restore available: - Point-in-time restore: is available with either backup redundancy option and creates a new server in the same region as your original server.-- Geo-restore: is available only if you configured your server for geo-redundant storage and it allows you to restore your server to the geo-paired region. Geo-restore to other regions is not supported currently.
+- Geo-restore: is available only if you configured your server for geo-redundant storage. It allows you to restore your server to either the geo-paired region or any other Azure region where Flexible Server is available. Note that geo-restore to regions other than the paired region is currently in public preview.
+
+> [!NOTE]
+> Universal geo-restore (geo-restore to a region other than the paired region) in Azure Database for MySQL - Flexible Server is currently in **public preview**. The regions currently not supported for the universal geo-restore feature in public preview are "Brazil South", "USGov Virginia", and "West US 3".
The estimated time for the recovery of the server depends on several factors:
The estimated time of recovery depends on several factors including the database
## Geo-restore
-You can restore a server to it's [geo-paired region](overview.md#azure-regions) where the service is available if you have configured your server for geo-redundant backups. Geo-restore to other regions is not supported currently.
+You can restore a server to its [geo-paired region](overview.md#azure-regions) where the service is available if you have configured your server for geo-redundant backups. Geo-restore to other regions is currently supported in public preview.
Geo-restore is the default recovery option when your server is unavailable because of an incident in the region where the server is hosted. If a large-scale incident in a region results in unavailability of your database application, you can restore a server from the geo-redundant backups to a server in any other region. Geo-restore uses the most recent backup of the server. There's a delay between when a backup is taken and when it's replicated to a different region. This delay can be up to an hour, so if a disaster occurs, there can be up to one hour of data loss.
The estimated time for the recovery of the server depends on several factors:
- Learn about [business continuity](./concepts-business-continuity.md) - Learn about [zone redundant high availability](./concepts-high-availability.md) - Learn about [backup and recovery](./concepts-backup-restore.md)+++
mysql Concepts Data In Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-data-in-replication.md
The parameter `replicate_wild_ignore_table` creates a replication filter for tab
- With **public access**, ensure that the source server has a public IP address, that DNS is publicly accessible, or that the source server has a fully qualified domain name (FQDN).
- With **private access**, ensure that the source server name can be resolved and is accessible from the VNet where the Azure Database for MySQL instance is running. (For more details, visit [Name resolution for resources in Azure virtual networks](../../virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances.md)).
+### Generated Invisible Primary Key
+
+For MySQL version 8.0 and above, [Generated Invisible Primary Keys](https://dev.mysql.com/doc/refman/8.0/en/create-table-gipks.html) (GIPK) is enabled by default for all Azure Database for MySQL flexible servers. MySQL 8.0+ servers add the invisible column *my_row_id* and a primary key on that column to any InnoDB table that is created without an explicit primary key. When enabled, this feature may affect some data-in replication use cases, as described below:
+
+- Data-in replication fails with the replication error "**ERROR 1068 (42000): Multiple primary key defined**" if the source server adds a primary key to a table that was created without one. To mitigate, run the following SQL command, skip the replication error, and restart [data-in replication](how-to-data-in-replication.md).
+
+ ```sql
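+ -- Drop the generated invisible key, then add the source table's explicit primary key (placeholders as in the surrounding text):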
+ alter table <table name> drop column my_row_id, add primary key <primary key name>(<column name>);
+ ```
+
+- Data-in replication fails with the replication error "**ERROR 1075 (42000): Incorrect table definition; there can be only one auto column and it must be defined as a key**" if the source server adds an auto_increment column as a unique key. To mitigate, run the following SQL command, set "[sql_generate_invisible_primary_key](https://dev.mysql.com/doc/refman/8.0/en/server-system-variables.html#sysvar_sql_generate_invisible_primary_key)" to OFF, skip the replication error, and restart [data-in replication](how-to-data-in-replication.md).
+ ```sql
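+ -- Drop the generated invisible key and redefine the column as auto_increment (placeholders as in the surrounding text):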
+ alter table <table name> drop column my_row_id, modify <column name> int auto_increment;
+ ```
+
+- Data-in replication fails if the source server runs any other SQL that isn't supported when "[sql_generate_invisible_primary_key](https://dev.mysql.com/doc/refman/8.0/en/server-system-variables.html#sysvar_sql_generate_invisible_primary_key)" is ON, for example, creating a partitioned table. In such a scenario, the mitigation is to set "[sql_generate_invisible_primary_key](https://dev.mysql.com/doc/refman/8.0/en/server-system-variables.html#sysvar_sql_generate_invisible_primary_key)" to OFF and restart [data-in replication](how-to-data-in-replication.md).
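+
+As a quick check (a minimal sketch using standard MySQL statements; `<table name>` is a placeholder), you can confirm whether GIPK is active and whether a replicated table received the generated key:
+
+```sql
+-- Returns 1 (ON) when the server generates invisible primary keys:
+SELECT @@sql_generate_invisible_primary_key;
+-- A generated key appears as an invisible `my_row_id` column in the table definition:
+SHOW CREATE TABLE <table name>;
+```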
+ ## Next steps - Learn more on how to [set up data-in replication](how-to-data-in-replication.md)
mysql Concepts High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-high-availability.md
Data-in Replication isn't supported for HA servers. But Data-in Replication for
1. Enable HA. - **To reduce downtime, can I fail over to the standby server during server restarts or while scaling up or down?** </br>
-Currently, when you do a scale up or scale down operation, the standby and primary are scaled at the same time. So failing over doesn't help. Allowing scaling up of the standby first, followed by failover, and then scaling up of the primary is on our roadmap but isn't supported yet.</br>
+Currently, Azure Database for MySQL - Flexible Server uses planned failover to optimize HA operations, including scaling up/down and planned maintenance, to help reduce downtime.
+When such an operation starts, it's applied to the original standby instance first; a planned failover is then triggered, and the operation is then applied to the original primary instance. </br>
- **Can we change the availability mode (Zone-redundant HA/same-zone) of the server?** </br> If you create the server with Zone-redundant HA mode enabled, you can change from Zone-redundant HA to same-zone and vice versa. To change the availability mode, you can set **High Availability** to **Disabled** on the **High Availability** pane, and then set it back to **Zone Redundant or same-zone** and choose **High Availability Mode**.</br>
mysql Concepts Limitations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-limitations.md
This article describes limitations in the Azure Database for MySQL - Flexible Se
Azure Database for MySQL supports tuning the values of server parameters. The min and max values of some parameters (for example, `max_connections`, `join_buffer_size`, `query_cache_size`) are determined by the compute tier and compute size of the server. Refer to [server parameters](./concepts-server-parameters.md) for more information about these limits.
+### Generated Invisible Primary Keys
+For MySQL version 8.0 and above, [Generated Invisible Primary Keys](https://dev.mysql.com/doc/refman/8.0/en/create-table-gipks.html) (GIPK) is enabled by default for all Azure Database for MySQL flexible servers. MySQL 8.0+ servers add the invisible column *my_row_id* and a primary key on that column to any InnoDB table created without an explicit primary key. For this reason, you can't create a table having a column named *my_row_id* unless the table creation statement also specifies an explicit primary key. [Learn more](https://dev.mysql.com/doc/refman/8.0/en/create-table-gipks.html).
+By default, GIPKs are shown in the output of [SHOW CREATE TABLE](https://dev.mysql.com/doc/refman/8.0/en/show-create-table.html), [SHOW COLUMNS](https://dev.mysql.com/doc/refman/8.0/en/show-columns.html), and [SHOW INDEX](https://dev.mysql.com/doc/refman/8.0/en/show-index.html), and are visible in the Information Schema [COLUMNS](https://dev.mysql.com/doc/refman/8.0/en/information-schema-columns-table.html) and [STATISTICS](https://dev.mysql.com/doc/refman/8.0/en/information-schema-statistics-table.html) tables.
+For more details on GIPK and its use cases with [Data-in-Replication](./concepts-data-in-replication.md) in Azure Database for MySQL - Flexible Server, refer to [GIPK with Data-in-Replication](./concepts-data-in-replication.md#generated-invisible-primary-key).
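+
+As a brief illustration (a hedged sketch; `t1` is a hypothetical table name), the column-name restriction behaves as follows:
+
+```sql
+-- Fails while GIPK is ON: the name collides with the key the server would generate.
+CREATE TABLE t1 (my_row_id INT);
+-- Succeeds: an explicit primary key means no invisible key is generated.
+CREATE TABLE t1 (my_row_id INT, id INT PRIMARY KEY);
+```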
+
+#### Steps to disable GIPK
+
+- You can update the value of the server parameter [sql_generate_invisible_primary_key](https://dev.mysql.com/doc/refman/8.0/en/server-system-variables.html#sysvar_sql_generate_invisible_primary_key) to OFF by following the steps to update any server parameter from the [Azure portal](./how-to-configure-server-parameters-portal.md#configure-server-parameters) or by using the [Azure CLI](./how-to-configure-server-parameters-cli.md#modify-a-server-parameter-value).
+
+- Alternatively, you can connect to your Azure Database for MySQL flexible server and run the following command:
+```sql
+SET sql_generate_invisible_primary_key = OFF;
+```
+```
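+
+Note that `SET` without the `GLOBAL` keyword changes the value for the current session only (standard MySQL behavior); to persist the change for all new connections, update the server parameter as described above. You can verify the current value with:
+
+```sql
+SELECT @@sql_generate_invisible_primary_key;
+```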
++ ## Storage engines MySQL supports many storage engines. On Azure Database for MySQL - Flexible Server, the following is the list of supported and unsupported storage engines:
The following are unsupported:
### Network -- Connectivity method can't be changed after creating the server. If the server is created with *Private access (VNet Integration)*, it can't be changed to *Public access (allowed IP addresses)* after creation, and vice versa
+- Connectivity method can't be changed after creating the server. If the server is created with *Private access (virtual network integration)*, it can't be changed to *Public access (allowed IP addresses)* after creation, and vice versa.
### Stop/start operation
mysql Concepts Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-monitoring.md
These metrics are available for Azure Database for MySQL:
|Metric display name|Metric|Unit|Description|
|---|---|---|---|
|Host CPU percent|cpu_percent|Percent|Host CPU percent is the total utilization of CPU to process all the tasks on your server over a selected period. This metric includes the workload of your Azure Database for MySQL - Flexible Server and Azure MySQL processes. A high CPU percent can help you find out whether your database server has more workload than it can handle. This metric is equivalent to total CPU utilization, similar to the CPU utilization of any virtual machine.|
+|CPU Credit Consumed|cpu_credits_consumed|Count|**Applies to the Burstable tier only.** CPU credits consumed are calculated based on workload. See [B-series burstable virtual machine sizes](/azure/virtual-machines/sizes-b-series-burstable) for more information.|
+|CPU Credit Remaining|cpu_credits_remaining|Count|**Applies to the Burstable tier only.** CPU credits remaining are calculated based on workload. See [B-series burstable virtual machine sizes](/azure/virtual-machines/sizes-b-series-burstable) for more information.|
|Host Network In|network_bytes_ingress|Bytes|Total sum of incoming network traffic on the server for a selected period. This metric includes traffic to your database and to Azure MySQL features such as monitoring and logs.|
|Host Network Out|network_bytes_egress|Bytes|Total sum of outgoing network traffic on the server for a selected period. This metric includes traffic from your database and from Azure MySQL features such as monitoring and logs.|
|Active Connections|active_connection|Count|The number of active connections to the server. Active connections are the total number of [threads connected](https://dev.mysql.com/doc/refman/8.0/en/server-status-variables.html#statvar_Threads_connected) to your server, which also includes threads from [azure_superuser](../single-server/how-to-create-users.md).|
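For context, the Active Connections metric reflects the MySQL `Threads_connected` status variable, which you can also inspect directly from a client session. A minimal sketch:

```sql
-- Threads_connected is the server status variable behind the
-- Active Connections metric (it includes azure_superuser threads).
SHOW GLOBAL STATUS LIKE 'Threads_connected';
```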
mysql Concepts Networking Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-networking-private-link.md
-# Private Link for Azure Database for MySQL - Flexible Server (Preview)
+# Private Link for Azure Database for MySQL - Flexible Server
[!INCLUDE[applies-to-mysql-flexible-server](../includes/applies-to-mysql-flexible-server.md)]
To learn more about Azure Database for MySQL security features, see the followin
- To configure a firewall for Azure Database for MySQL, see [firewall support](../single-server/concepts-firewall-rules.md). - For an overview of Azure Database for MySQL connectivity, see [Azure Database for MySQL Connectivity Architecture](../single-server/concepts-connectivity-architecture.md)
mysql Concepts Networking Public https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-networking-public.md
Granting permission to an IP address is called a firewall rule. If a connection
You can consider enabling connections from all Azure data center IP addresses if a fixed outgoing IP address isn't available for your Azure service.
-> [!IMPORTANT]
-> The **Allow public access from Azure services and resources within Azure** option configures the firewall to allow all connections from Azure, including connections from the subscriptions of other customers. When selecting this option, ensure your login and user permissions limit access to only authorized users.
+> [!IMPORTANT]
+> - The **Allow public access from Azure services and resources within Azure** option configures the firewall to allow all connections from Azure, including connections from the subscriptions of other customers. When selecting this option, ensure your login and user permissions limit access to only authorized users.
+> - You can create a maximum of 500 IP firewall rules.
+>
Learn how to enable and manage public access (allowed IP addresses) using the [Azure portal](how-to-manage-firewall-portal.md) or [Azure CLI](how-to-manage-firewall-cli.md).
Consider the following points when access to the Microsoft Azure Database for My
- **Firewall rules aren't available in IPv6 format:** Firewall rules must be in IPv4 format. If you specify firewall rules in IPv6 format, a validation error is shown. > [!NOTE]
-> We recommend you use the fully qualified domain name (FQDN) '<servername>.mysql.database.azure.com' in connection strings when connecting to your flexible server. The server's IP address is not guaranteed to remain static. Using the FQDN will help you avoid making changes to your connection string.
+> We recommend you use the fully qualified domain name (FQDN) `<servername>.mysql.database.azure.com` in connection strings when connecting to your flexible server. The server's IP address is not guaranteed to remain static. Using the FQDN will help you avoid making changes to your connection string.
## Next steps - Learn how to enable public access (allowed IP addresses) using the [Azure portal](how-to-manage-firewall-portal.md) or [Azure CLI](how-to-manage-firewall-cli.md) - Learn how to [use TLS](how-to-connect-tls-ssl.md)
mysql Concepts Networking Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-networking-vnet.md
To configure correctly, you need the following resources: You can then use the flexible server name (FQDN) to connect from the client application in a peered virtual network or an on-premises network to the flexible server.
You can then use the flexible servername (FQDN) to connect from the client application in peered virtual network or on-premises network to flexible server. > [!NOTE]
-> We recommend you use the fully qualified domain name (FQDN) '<servername>.mysql.database.azure.com' in connection strings when connecting to your flexible server. The server's IP address is not guaranteed to remain static. Using the FQDN will help you avoid making changes to your connection string.
+> We recommend you use the fully qualified domain name (FQDN) `<servername>.mysql.database.azure.com` in connection strings when connecting to your flexible server. The server's IP address is not guaranteed to remain static. Using the FQDN will help you avoid making changes to your connection string.
## Unsupported virtual network scenarios
mysql Concepts Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-networking.md
The following characteristics apply whether you choose to use the private access
## Hostname
-Regardless of your networking option, we recommend you use the fully qualified domain name (FQDN) '<servername>.mysql.database.azure.com' in connection strings when connecting to your flexible server. The server's IP address is not guaranteed to remain static. Using the FQDN will help you avoid making changes to your connection string.
+Regardless of your networking option, we recommend you use the fully qualified domain name (FQDN) `<servername>.mysql.database.azure.com` in connection strings when connecting to your flexible server. The server's IP address is not guaranteed to remain static. Using the FQDN will help you avoid making changes to your connection string.
An example that uses an FQDN as a host name is `hostname = servername.mysql.database.azure.com`. Where possible, avoid using `hostname = 10.0.0.4` (a private address) or `hostname = 40.2.45.67` (a public address).
mysql Concepts Read Replicas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-read-replicas.md
Because replicas are read-only, they don't directly reduce write-capacity burden
The read replica feature uses MySQL asynchronous replication. The feature isn't meant for synchronous replication scenarios. There's a measurable delay between the source and the replica. The data on the replica eventually becomes consistent with the data on the source. Use this feature for workloads that can accommodate this delay.
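As a rough check of that delay, you can inspect the replication status on the replica itself. A generic MySQL sketch (the statement applies to MySQL 8.0.22 and later; older versions use `SHOW SLAVE STATUS`):

```sql
-- Run on the replica: the Seconds_Behind_Source field indicates
-- how far the replica lags behind the source.
SHOW REPLICA STATUS;
```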
-## Cross-region replication in Geo-Paired Region
-
-You can create a read replica in a different region from your source server. Cross-region replication can be helpful for scenarios like disaster recovery planning or bringing data closer to your users. Azure database for MySQL Flexible Server allows you to provision read-replica in the Azure supported [geo-paired region] (https://learn.microsoft.com/azure/reliability/cross-region-replication-azure) to the source server.
+## Cross-region replication
+You can create a read replica in a different region from your source server. Cross-region replication can be helpful for scenarios like disaster recovery planning or bringing data closer to your users. Azure Database for MySQL Flexible Server allows you to provision a read replica in any [Azure supported region](https://learn.microsoft.com/azure/reliability/cross-region-replication-azure) where Azure Database for MySQL Flexible Server is available. Using this feature, a source server can have a replica in its paired region or in the universal replica regions. Refer to [Azure regions](./overview.md#azure-regions) for the list of Azure regions where Azure Database for MySQL Flexible Server is available today.
## Create a replica
mysql How To Azure Ad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-azure-ad.md
Last updated 11/21/2022
mysql How To Data Encryption Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-data-encryption-portal.md
In this tutorial, you learn how to:
- Configure data encryption for restoration. - Configure data encryption for replica servers.
+ > [!NOTE]
+> Azure Key Vault access configuration now supports two types of permission models - [Azure role-based access control](../../role-based-access-control/overview.md) and [Vault access policy](../../key-vault/general/assign-access-policy.md). This tutorial describes configuring data encryption for Azure Database for MySQL - Flexible Server using Vault access policy. However, you can choose to use Azure RBAC as the permission model to grant access to Azure Key Vault. To do so, you need any built-in or custom role that has the following three permissions, and assign it through "role assignments" using the Access control (IAM) tab in the key vault: a) KeyVault/vaults/keys/wrap/action b) KeyVault/vaults/keys/unwrap/action c) KeyVault/vaults/keys/read
## Prerequisites - An Azure account with an active subscription.
After your Azure Database for MySQL - Flexible Server is encrypted with a custom
- [Customer managed keys data encryption](concepts-customer-managed-key.md) - [Data encryption with Azure CLI](how-to-data-encryption-cli.md)
mysql How To Move Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-move-regions.md
Title: Move Azure regions - Azure portal - Azure Database for MySQL - Flexible Server description: Move an Azure Database for MySQL - Flexible Server from one Azure region to another using the Azure portal. Previously updated : 04/08/2022 Last updated : 08/23/2023
-#Customer intent: As an Azure service administrator, I want to move my service resources to another Azure region.
# Move an Azure Database for MySQL - Flexible Server to another region by using the Azure portal There are various scenarios for moving an existing Azure Database for MySQL - Flexible Server from one region to another. For example, you might want to move a production server to another region as part of your disaster recovery planning.
-You can use Azure Database for MySQL - Flexible Server's [geo-restore](concepts-backup-restore.md#geo-restore) feature to complete the move to another region. To do so, first ensure geo-redundancy is enabled for your flexible server. Next, trigger geo-restore for your geo-redundant server and move your server to the geo-paired region.
+You can use Azure Database for MySQL - Flexible Server's [geo restore](concepts-backup-restore.md#geo-restore) feature to complete the move to another region. To do so, first ensure geo-redundancy is enabled for your flexible server. Next, trigger geo-restore for your geo-redundant server and move your server to the geo-paired region.
-> [!NOTE]
+> [!NOTE]
> This article focuses on moving your server to a different region. If you want to move your server to a different resource group or subscription, refer to the [move](../../azure-resource-manager/management/move-resource-group-and-subscription.md) article. ## Prerequisites
To move the Azure Database for MySQL - Flexible Server to the geo-paired region
1. In the [Azure portal](https://portal.azure.com/), choose your flexible server that you want to restore the backup from.
-2. Click **Overview** from the left panel.
+1. Select **Overview** from the left panel.
-3. From the overview page, click **Restore**.
+1. From the overview page, select **Restore**.
-4. Restore page will be shown with an option to choose **Geo-redundant restore**. If you have configured your server for geographically redundant backups, the server can be restored to the corresponding Azure paired region and the geo-redundant restore option can be enabled. Geo-redundant restore option restores the server to Latest UTC Now timestamp and hence after selection of Geo-redundant restore, the point-in-time restore options cannot be selected simultaneously.
+1. The Restore page is shown with an option to choose **Geo-redundant restore**. If you have configured your server for geographically redundant backups, the server can be restored to the corresponding Azure paired region and the geo-redundant restore option can be enabled. The geo-redundant restore option restores the server to the latest UTC timestamp; after you select geo-redundant restore, the point-in-time restore options can't be selected simultaneously.
- :::image type="content" source="./media/how-to-restore-server-portal/georestore-flex.png" alt-text="Geo-restore option":::
+ :::image type="content" source="./media/how-to-move-regions/geo-restore-flex.png" alt-text="Screenshot of Geo-restore option" lightbox="./media/how-to-move-regions/geo-restore-flex.png":::
- :::image type="content" source="./media/how-to-restore-server-portal/georestore-enabled-flex.png" alt-text="Enabling Geo-Restore":::
+ :::image type="content" source="./media/how-to-move-regions/geo-restore-enabled-flex.png" alt-text="Screenshot of Enabling Geo-Restore" lightbox="./media/how-to-move-regions/geo-restore-enabled-flex.png":::
-5. Provide a new server name in the **Name** field in the Server details section.
+1. Provide a new server name in the **Name** field in the Server details section.
-6. When primary region is down, one cannot create geo-redundant servers in the respective geo-paired region as storage cannot be provisioned in the primary region. One must wait for the primary region to be up to provision geo-redundant servers in the geo-paired region. With the primary region down one can still geo-restore the source server to the geo-paired region by disabling the geo-redundancy option in the Compute + Storage Configure Server settings in the restore portal experience and restore as a locally redundant server to ensure business continuity.
+1. When the primary region is down, you can't create geo-redundant servers in the respective geo-paired region, as storage can't be provisioned in the primary region. You must wait for the primary region to be up to provision geo-redundant servers in the geo-paired region. With the primary region down, you can still geo-restore the source server to the geo-paired region by disabling the geo-redundancy option in the Compute + Storage Configure Server settings in the restore portal experience, and restore it as a locally redundant server to ensure business continuity.
- :::image type="content" source="./media/how-to-restore-server-portal/georestore-region-down-1.png" alt-text="Compute + Storage window":::
+ :::image type="content" source="./media/how-to-move-regions/geo-restore-region-down-1.png" alt-text="Screenshot of Compute + Storage window" lightbox="./media/how-to-move-regions/geo-restore-region-down-1.png":::
- :::image type="content" source="./media/how-to-restore-server-portal/georestore-region-down-2.png" alt-text="Disabling Geo-Redundancy":::
+ :::image type="content" source="./media/how-to-move-regions/geo-restore-region-down-2.png" alt-text="Screenshot of Disabling Geo-Redundancy" lightbox="./media/how-to-move-regions/geo-restore-region-down-2.png":::
- :::image type="content" source="./media/how-to-restore-server-portal/georestore-region-down-3.png" alt-text="Restoring as Locally redundant server":::
+ :::image type="content" source="./media/how-to-move-regions/geo-restore-region-down-3.png" alt-text="Screenshot of Restoring as Locally redundant server" lightbox="./media/how-to-move-regions/geo-restore-region-down-3.png":::
-7. Select **Review + Create** to review your selections.
+1. Select **Review + Create** to review your selections.
-8. A notification will be shown that the restore operation has been initiated. This operation may take a few minutes.
+1. A notification is shown that the restore operation has been initiated. This operation may take a few minutes.
-The new server created by geo-restore has the same server admin login name and password that was valid for the existing server at the time the restore was initiated. The password can be changed from the new server's Overview page. Additionally during a geo-restore, **Networking** settings such as virtual network settings and firewall rules can be configured as described in the below section.
+The new server created by geo-restore has the same server admin sign-in name and password that was valid for the existing server at the time the restore was initiated. The password can be changed from the new server's Overview page. Additionally, during a geo-restore, **Networking** settings such as virtual network settings and firewall rules can be configured as described in the section below.
## Clean up source server
In this tutorial, you moved an Azure Database for MySQL - Flexible Server from o
- Learn more about [geo-restore](concepts-backup-restore.md#geo-restore) - Learn more about [Azure paired regions](overview.md#azure-regions) supported for Azure Database for MySQL - Flexible Server-- Learn more about [business continuity](concepts-business-continuity.md) options
+- Learn more about [business continuity](concepts-business-continuity.md) options
mysql How To Networking Private Link Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-networking-private-link-azure-cli.md
-# Create and manage Private Link for Azure Database for MySQL - Flexible Server using CLI (Preview)
+# Create and manage Private Link for Azure Database for MySQL - Flexible Server using CLI
[!INCLUDE[applies-to-mysql-flexible-server](../includes/applies-to-mysql-flexible-server.md)]
Connect to the VM *myVm* from the internet as follows:
1. If prompted, select **Connect**.
- 1. Enter the username and password you specified when creating the VM.
-
- > [!NOTE]
- > You may need to select **More choices** > **Use a different account**, to specify the credentials you entered when you created the VM.
-
+1. Enter the username and password you specified when creating the VM.
+ > [!NOTE]
+ > You may need to select **More choices** > **Use a different account** to specify the credentials you entered when you created the VM.
1. Select **OK**. 1. You may receive a certificate warning during the sign-in process. Select **Yes** or **Continue** if you receive a certificate warning.
az network private-endpoint-connection delete --id {PrivateEndpointConnectionID}
- Learn how to [manage connectivity](concepts-networking.md) to your Azure Database for MySQL flexible server. - Learn how to add another layer of encryption to your Azure Database for MySQL flexible server using [Customer Managed Keys](concepts-customer-managed-key.md). - Learn how to configure and use [Azure AD authentication](concepts-azure-ad-authentication.md) on your Azure Database for MySQL flexible server.
mysql How To Networking Private Link Deny Public Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-networking-private-link-deny-public-access.md
-# Deny Public Network Access in Azure Database for MySQL - Flexible Server using Azure portal (Preview)
+# Deny Public Network Access in Azure Database for MySQL - Flexible Server using Azure portal
[!INCLUDE[applies-to-mysql-flexible-server](../includes/applies-to-mysql-flexible-server.md)]
This article describes how you can configure an Azure Database for MySQL flexibl
- Learn how to [manage connectivity](concepts-networking.md) to your Azure Database for MySQL flexible server. - Learn how to add another layer of encryption to your Azure Database for MySQL flexible server using [Customer Managed Keys](concepts-customer-managed-key.md). - Learn how to configure and use [Azure AD authentication](concepts-azure-ad-authentication.md) on your Azure Database for MySQL flexible server.
mysql How To Networking Private Link Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-networking-private-link-portal.md
-# Create and manage Private Link for Azure Database for MySQL - Flexible Server using the portal (Preview)
+# Create and manage Private Link for Azure Database for MySQL - Flexible Server using the portal
[!INCLUDE[applies-to-mysql-flexible-server](../includes/applies-to-mysql-flexible-server.md)]
mysql How To Read Replicas Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-read-replicas-portal.md
description: Learn how to set up and manage read replicas in Azure Database for
Previously updated : 05/10/2023 Last updated : 08/11/2023
[!INCLUDE[applies-to-mysql-flexible-server](../includes/applies-to-mysql-flexible-server.md)]
-In this article, you'll learn how to create and manage read replicas in the Azure Database for MySQL - Flexible Server using the Azure portal.
+In this article, you learn how to create and manage read replicas in the Azure Database for MySQL - Flexible Server using the Azure portal.
-> [!NOTE]
->
+> [!NOTE]
+>
> If GTID is enabled on a primary server (`gtid_mode` = ON), newly created replicas also have GTID enabled and use GTID-based replication. To learn more, refer to [Global transaction identifier (GTID)](concepts-read-replicas.md#global-transaction-identifier-gtid) ## Prerequisites
In this article, you'll learn how to create and manage read replicas in the Azur
## Create a read replica
-> [!IMPORTANT]
->When you create a replica for a source that has no existing replicas, the source first restarts to prepare itself for replication. Take this into consideration and perform these operations during an off-peak period.
+> [!IMPORTANT]
+> When you create a replica for a source that has no existing replicas, the source first restarts to prepare itself for replication. Take this into consideration and perform these operations during an off-peak period.
A read replica server can be created using the following steps: 1. Sign in to the [Azure portal](https://portal.azure.com/).
-2. Select the existing Azure Database for MySQL - Flexible Server that you want to use as a source. This action opens the **Overview** page.
+1. Select the existing Azure Database for MySQL - Flexible Server that you want to use as a source. This action opens the **Overview** page.
1. Select **Replication** from the menu, under **SETTINGS**.
A read replica server can be created using the following steps:
:::image type="content" source="./media/how-to-read-replica-portal/replica-name.png" alt-text="Screenshot of adding a replica name." lightbox="./media/how-to-read-replica-portal/replica-name.png":::
-1. Enter location based on your need to create an in-region or cross-region read-replica.
+1. Enter the location based on your need to create an in-region or universal cross-region read replica.
:::image type="content" source="media/how-to-read-replica-portal/select-cross-region.png" alt-text="Screenshot of selecting a cross region.":::
mysql How To Restore Server Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-restore-server-portal.md
Title: Restore an Azure Database for MySQL - Flexible Server with Azure portal.
+ Title: Restore MySQL - Flexible Server with Azure portal
+ description: This article describes how to perform restore operations in Azure Database for MySQL - Flexible Server through the Azure portal. Previously updated : 07/26/2022 Last updated : 08/22/2023
-# Point-in-time restore of a Azure Database for MySQL - Flexible Server using Azure portal
+# Point-in-time restore of an Azure Database for MySQL - Flexible Server using Azure portal
This article provides a step-by-step procedure to perform point-in-time recoveries in flexible server using backups.
Follow these steps to restore your flexible server using an earliest existing ba
1. In the [Azure portal](https://portal.azure.com/), choose your flexible server that you want to restore the backup from.
-2. Click **Overview** from the left panel.
+1. Select **Overview** from the left panel.
-3. From the overview page, click **Restore**.
+1. From the overview page, select **Restore**.
-4. Restore page will be shown with an option to choose between **Latest restore point** and Custom restore point.
+1. The Restore page is shown with an option to choose between **Latest restore point** and Custom restore point.
-5. Select **Latest restore point**.
+1. Select **Latest restore point**.
-6. Provide a new server name in the **Restore to new server** field.
+1. Provide a new server name in the **Restore to new server** field.
- :::image type="content" source="./media/how-to-restore-server-portal/point-in-time-restore-latest.png" alt-text="Earliest restore time":::
+ :::image type="content" source="./media/how-to-restore-server-portal/point-in-time-restore-latest.png" alt-text="Screenshot of earliest restore time." lightbox="./media/how-to-restore-server-portal/point-in-time-restore-latest.png":::
-7. Click **OK**.
-
-8. A notification will be shown that the restore operation has been initiated.
+1. Select **OK**.
+1. A notification is shown that the restore operation has been initiated.
## Restore to a fastest restore point
-Follow these steps to restore your flexible server using an existing full backup as the fastest restore point.
+Follow these steps to restore your flexible server using an existing full backup as the fastest restore point.
-1. In the [Azure portal](https://portal.azure.com/), choose your flexible server that you want to restore the backup from.
+1. In the [Azure portal](https://portal.azure.com/), choose your flexible server that you want to restore the backup from.
-2. Click **Overview** from the left panel.
+1. Select **Overview** from the left panel.
-3. From the overview page, click **Restore**.
+1. From the overview page, select **Restore**.
-4. Restore page will be shown with an option to choose between Latest restore point, Custom restore point and Fastest Restore Point.
+1. The Restore page is shown with an option to choose between Latest restore point, Custom restore point, and Fastest Restore Point.
-5. Select option **Select fastest restore point (Restore using full backup)**.
+1. Select the **Select fastest restore point (Restore using full backup)** option.
-6. Select the desired full backup from the **Fastest Restore Point (UTC)** drop down list .
-
- :::image type="content" source="./media/how-to-restore-server-portal/fastest-restore-point.png" alt-text="Fastest Restore Point":::
+1. Select the desired full backup from the **Fastest Restore Point (UTC)** dropdown list.
-7. Provide a new server name in the **Restore to new server** field.
+ :::image type="content" source="./media/how-to-restore-server-portal/fastest-restore-point.png" alt-text="Screenshot of Fastest Restore Point." lightbox="./media/how-to-restore-server-portal/fastest-restore-point.png":::
-8. Click **Review + Create**.
+1. Provide a new server name in the **Restore to new server** field.
-9. Post clicking **Create**, a notification will be shown that the restore operation has been initiated.
+1. Select **Review + Create**.
-## Restore from a full backup through the Backup and Restore blade
+1. After you select **Create**, a notification is shown that the restore operation has been initiated.
-Follow these steps to restore your flexible server using an existing full backup.
+## Restore from a full backup through the Backup and Restore page
-1. In the [Azure portal](https://portal.azure.com/), choose your flexible server that you want to restore the backup from.
+Follow these steps to restore your flexible server using an existing full backup.
-2. Click **Backup and Restore** from the left panel.
+1. In the [Azure portal](https://portal.azure.com/), choose your flexible server that you want to restore the backup from.
+
+1. Select **Backup and Restore** from the left panel.
-3. View Available Backups page will be shown with the option to restore from available full automated backups and on-demand backups taken for the server within the retention period.
+1. The View Available Backups page is shown with the option to restore from available full automated backups and on-demand backups taken for the server within the retention period.
-4. Select the desired full backup from the list by clicking on corresponding **Restore** action.
-
- :::image type="content" source="./media/how-to-restore-server-portal/view-available-backups.png" alt-text="View Available Backups":::
+1. Select the desired full backup from the list by selecting the corresponding **Restore** action.
-5. Restore page will be shown with the Fastest Restore Point option selected by default and the desired full backup timestamp selected on the View Available backups page.
+ :::image type="content" source="./media/how-to-restore-server-portal/view-available-backups.png" alt-text="Screenshot of view Available Backups." lightbox="./media/how-to-restore-server-portal/view-available-backups.png":::
-6. Provide a new server name in the **Restore to new server** field.
+1. The Restore page is shown with the Fastest Restore Point option selected by default and the desired full backup timestamp selected on the View Available Backups page.
-7. Click **Review + Create**.
+1. Provide a new server name in the **Restore to new server** field.
-8. Post clicking **Create**, a notification will be shown that the restore operation has been initiated.
+1. Select **Review + Create**.
+1. After you select **Create**, a notification is shown that the restore operation has been initiated.
-## Geo-restore to latest restore point
+## Geo restore to latest restore point
1. In the [Azure portal](https://portal.azure.com/), choose your flexible server that you want to restore the backup from.
-2. Click **Overview** from the left panel.
+1. Select **Overview** from the left panel.
-3. From the overview page, click **Restore**.
+1. From the overview page, select **Restore**.
-4. Restore page will be shown with an option to choose **Geo-redundant restore**. If you have configured your server for geographically redundant backups, the server can be restored to the corresponding Azure paired region and the geo-redundant restore option can be enabled. Geo-redundant restore option restores the server to Latest UTC Now timestamp and hence after selection of Geo-redundant restore, the point-in-time restore options cannot be selected simultaneously.
+1. The Restore page is shown with an option to choose **Geo-redundant restore**. If you have configured your server for geographically redundant backups, the server can be restored to the corresponding Azure paired region and the geo-redundant restore option can be enabled. The geo-redundant restore option restores the server to the latest UTC timestamp; after you select geo-redundant restore, the point-in-time restore options can't be selected simultaneously.
- :::image type="content" source="./media/how-to-restore-server-portal/georestore-flex.png" alt-text="Geo-restore option":::
+ :::image type="content" source="./media/how-to-restore-server-portal/geo-restore-flex.png" alt-text="Screenshot of Geo-restore option." lightbox="./media/how-to-restore-server-portal/geo-restore-flex.png":::
- :::image type="content" source="./media/how-to-restore-server-portal/georestore-enabled-flex.png" alt-text="Enabling Geo-Restore":::
+ :::image type="content" source="./media/how-to-restore-server-portal/geo-restore-enabled-flex.png" alt-text="Screenshot of enabling Geo-Restore." lightbox="./media/how-to-restore-server-portal/geo-restore-enabled-flex.png":::
-5. Provide a new server name in the **Name** field in the Server details section.
+ :::image type="content" source="./media/how-to-restore-server-portal/geo-restore-flex-location-dropdown.png" alt-text="Screenshot of location dropdown." lightbox="./media/how-to-restore-server-portal/geo-restore-flex-location-dropdown.png":::
-6. When primary region is down, one cannot create geo-redundant servers in the respective geo-paired region as storage cannot be provisioned in the primary region. One must wait for the primary region to be up to provision geo-redundant servers in the geo-paired region. With the primary region down one can still geo-restore the source server to the geo-paired region by disabling the geo-redundancy option in the Compute + Storage Configure Server settings in the restore portal experience and restore as a locally redundant server to ensure business continuity.
+1. Provide a new server name in the **Name** field in the Server details section.
- :::image type="content" source="./media/how-to-restore-server-portal/georestore-region-down-1.png" alt-text="Compute + Storage window":::
+1. When the primary region is down, you can't create geo-redundant servers in the respective geo-paired region, as storage can't be provisioned in the primary region. You must wait for the primary region to be up to provision geo-redundant servers in the geo-paired region. With the primary region down, you can still geo-restore the source server to the geo-paired region by disabling the geo-redundancy option in the Compute + Storage Configure Server settings in the restore portal experience, and restore it as a locally redundant server to ensure business continuity.
+ :::image type="content" source="./media/how-to-restore-server-portal/geo-restore-region-down-1.png" alt-text="Screenshot of Compute + Storage window." lightbox="./media/how-to-restore-server-portal/geo-restore-region-down-1.png":::
- :::image type="content" source="./media/how-to-restore-server-portal/georestore-region-down-2.png" alt-text="Disabling Geo-Redundancy":::
+ :::image type="content" source="./media/how-to-restore-server-portal/geo-restore-region-down-2.png" alt-text="Screenshot of Disabling Geo-Redundancy." lightbox="./media/how-to-restore-server-portal/geo-restore-region-down-2.png":::
- :::image type="content" source="./media/how-to-restore-server-portal/georestore-region-down-3.png" alt-text="Restoring as Locally redundant server":::
+ :::image type="content" source="./media/how-to-restore-server-portal/geo-restore-region-down-3.png" alt-text="Screenshot of Restoring as Locally redundant server." lightbox="./media/how-to-restore-server-portal/geo-restore-region-down-3.png":::
-7. Select **Review + Create** to review your selections.
+1. Select **Review + Create** to review your selections.
-8. A notification will be shown that the restore operation has been initiated. This operation may take a few minutes.
+1. A notification is shown that the restore operation has been initiated. This operation may take a few minutes.
-The new server created by geo-restore has the same server admin login name and password that was valid for the existing server at the time the restore was initiated. The password can be changed from the new server's Overview page. Additionally during a geo-restore, **Networking** settings such as virtual network settings and firewall rules can be configured as described in the below section.
+The new server created by geo restore has the same server admin sign-in name and password that was valid for the existing server at the time the restore was initiated. The password can be changed from the new server's Overview page. Additionally, during a restore, **Networking** settings such as virtual network settings and firewall rules can be configured as described in the section below.
-## Using restore to move a server from Public access to Private access
+## Use restore to move a server from Public access to Private access
Follow these steps to restore your flexible server using the earliest existing backup. 1. In the [Azure portal](https://portal.azure.com/), choose your flexible server that you want to restore the backup from.
-2. From the overview page, click **Restore**.
+1. From the overview page, select **Restore**.
-3. Restore page will be shown with an option to choose between Geo-restore or Point-in-time restore options.
+1. The Restore page is shown with an option to choose between the geo restore and point-in-time restore options.
-4. Choose either **Geo-restore** or a **Point-in-time restore** option.
+1. Choose either **Geo restore** or a **Point-in-time restore** option.
-5. Provide a new server name in the **Restore to new server** field.
+1. Provide a new server name in the **Restore to new server** field.
- :::image type="content" source="./media/how-to-restore-server-portal/point-in-time-restore-private-dns-zone.png" alt-text="view overview":::
+ :::image type="content" source="./media/how-to-restore-server-portal/point-in-time-restore-private-dns-zone.png" alt-text="Screenshot of view overview." lightbox="./media/how-to-restore-server-portal/point-in-time-restore-private-dns-zone.png":::
-6. Go to the **Networking** tab to configure networking settings.
+1. Go to the **Networking** tab to configure networking settings.
-7. In the **Connectivity method**, select **Private access (VNet Integration)**. Go to **Virtual Network** section, you can either select an already existing *virtual network* and *Subnet* that is delegated to *Microsoft.DBforMySQL/flexibleServers* or create a new one by clicking the *create virtual network* link.
- > [!Note]
- > Only virtual networks and subnets in the same region and subscription will be listed in the drop down. </br>
- > The chosen subnet will be delegated to *Microsoft.DBforMySQL/flexibleServers*. It means that only Azure Database for MySQL - Flexible Servers can use that subnet.</br>
+1. For **Connectivity method**, select **Private access (VNet Integration)**. In the **Virtual Network** section, you can either select an existing *virtual network* and *Subnet* that is delegated to *Microsoft.DBforMySQL/flexibleServers*, or create a new one by selecting the *create virtual network* link.
+ > [!NOTE]
+ > Only virtual networks and subnets in the same region and subscription are listed in the dropdown list. </br>
+ > The chosen subnet is delegated to *Microsoft.DBforMySQL/flexibleServers*. It means that only Azure Database for MySQL - Flexible Servers can use that subnet.</br>
- :::image type="content" source="./media/how-to-manage-virtual-network-portal/vnet-creation.png" alt-text="Vnet configuration":::
+ :::image type="content" source="./media/how-to-manage-virtual-network-portal/vnet-creation.png" alt-text="Screenshot of Vnet configuration." lightbox="./media/how-to-manage-virtual-network-portal/vnet-creation.png":::
-8. Create a new or Select an existing **Private DNS Zone**.
- > [!NOTE]
+1. Create a new **Private DNS Zone** or select an existing one.
+ > [!NOTE]
> Private DNS zone names must end with `mysql.database.azure.com`. </br> > If you don't see the option to create a new private DNS zone, enter the server name on the **Basics** tab.</br> > After the flexible server is deployed to a virtual network and subnet, you cannot move it to Public access (allowed IP addresses).</br>
- :::image type="content" source="./media/how-to-manage-virtual-network-portal/private-dns-zone.png" alt-text="dns configuration":::
-9. Select **Review + create** to review your flexible server configuration.
-10. Select **Create** to provision the server. Provisioning can take a few minutes.
-
-11. A notification will be shown that the restore operation has been initiated.
+ :::image type="content" source="./media/how-to-manage-virtual-network-portal/private-dns-zone.png" alt-text="Screenshot of dns configuration." lightbox="./media/how-to-manage-virtual-network-portal/private-dns-zone.png":::
+1. Select **Review + create** to review your flexible server configuration.
+1. Select **Create** to provision the server. Provisioning can take a few minutes.
+1. A notification is shown that the restore operation has been initiated.
## Perform post-restore tasks After the restore is completed, you should perform the following tasks to get your users and applications back up and running: - If the new server is meant to replace the original server, redirect clients and client applications to the new server.-- Ensure appropriate VNet rules are in place for users to connect. These rules are not copied over from the original server.
+- Ensure appropriate virtual network rules are in place for users to connect. These rules aren't copied over from the original server.
- Ensure appropriate logins and database level permissions are in place. - Configure alerts as appropriate for the newly restored server. ## Next steps
-Learn more about [business continuity](concepts-business-continuity.md)
+- Learn more about [business continuity](concepts-business-continuity.md)
mysql Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/overview.md
Last updated 05/24/2022
[!INCLUDE[applies-to-mysql-flexible-server](../includes/applies-to-mysql-flexible-server.md)]
-<iframe src="https://aka.ms/docs/player?id=492c7a41-5f0a-4482-828b-72be1b38e691" width="640" height="370"></iframe>
+> [!VIDEO https://aka.ms/docs/player?id=492c7a41-5f0a-4482-828b-72be1b38e691]
Azure Database for MySQL powered by the MySQL community edition is available in two deployment modes:
mysql Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/whats-new.md
This article summarizes new releases and features in Azure Database for MySQL -
> [!NOTE] > This article references the term slave, which Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.
+## September 2023
+
+- **Universal Cross Region Read Replica on Azure Database for MySQL - Flexible Server (General Availability)**
+Azure Database for MySQL - Flexible Server now supports Universal Read Replicas in public regions. The feature allows you to replicate your data from an instance of Azure Database for MySQL Flexible Server to a read-only server in a universal region, which can be any Azure supported region where flexible server is available. [Learn more](concepts-read-replicas.md)
+
+- **Private Link for Azure Database for MySQL - Flexible Server (General Availability)**
+You can now enable private endpoints to provide a secure means to access Azure Database for MySQL Flexible Server via a Private Link, allowing both public and private access simultaneously. If necessary, you have the choice to restrict public access, ensuring that connections are exclusively routed through private endpoints for heightened network security. It's also possible to configure or update Private Link settings either during or after the creation of the server. [Learn more](./concepts-networking-private-link.md).
+
+## August 2023
+
+- **Universal Geo Restore in Azure Database for MySQL - Flexible Server (Public Preview)**
+The Universal Geo Restore feature allows you to restore a source server instance to an alternate region from the list of Azure supported regions where flexible server is [available](./overview.md#azure-regions). If a large-scale incident in a region results in unavailability of your database application, you can use this feature as a disaster recovery option to restore the server to an Azure supported target region that is different from the source server region. [Learn more](concepts-backup-restore.md#restore)
+
+- **Generated Invisible Primary Key in Azure Database for MySQL - Flexible Server**
+Azure Database for MySQL Flexible Server now supports [generated invisible primary key (GIPK)](https://dev.mysql.com/doc/refman/8.0/en/create-table-gipks.html) for MySQL version 8.0. With this change, by default, the value of the server system variable [sql_generate_invisible_primary_key](https://dev.mysql.com/doc/refman/8.0/en/server-system-variables.html#sysvar_sql_generate_invisible_primary_key) is ON for all MySQL - Flexible Servers on MySQL 8.0. With GIPK mode ON, MySQL generates an invisible primary key for any InnoDB table that is newly created without an explicit primary key. Learn more about the GIPK mode:
+[Generated Invisible Primary Keys](./concepts-limitations.md#generated-invisible-primary-keys)
+[Invisible Column Metadata](https://dev.mysql.com/doc/refman/8.0/en/invisible-columns.html#invisible-column-metadata)
## July 2023 - **Autoscale IOPS in Azure Database for MySQL - Flexible Server (General Availability)**
If you have questions about or suggestions for working with Azure Database for M
- Learn more about [Azure Database for MySQL pricing](https://azure.microsoft.com/pricing/details/mysql/server/). - Browse the [public documentation](index.yml) for Azure Database for MySQL - Flexible Server. - Review details on [troubleshooting common migration errors](../howto-troubleshoot-common-errors.md).
mysql Migrate Single Flexible In Place Auto Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/migrate/migrate-single-flexible-in-place-auto-migration.md
[!INCLUDE[applies-to-mysql-single-server](../includes/applies-to-mysql-single-server.md)]
-In-place automigration from Azure Database for MySQL - Single Server to Flexible Server is a service-initiated in-place migration during planned maintenance window for Single Server database workloads with Basic or General Purpose SKU, data storage used < 10 GiB and no complex features enabled. The eligible servers are identified by the service and are sent an advance notification detailing steps to review migration details.
+**In-place automigration** from Azure Database for MySQL - Single Server to Flexible Server is a service-initiated in-place migration during a planned maintenance window for Single Server database workloads with **Basic or General Purpose SKU**, data storage used **< 10 GiB**, and **no complex features (CMK, AAD, Read Replica, Private Link) enabled**. The eligible servers are identified by the service and are sent an advance notification detailing steps to review migration details.
-The in-place migration provides a highly resilient and self-healing offline migration experience during a planned maintenance window, with less than 5 mins of downtime. It uses backup and restore technology for faster migration time. This migration removes the overhead to manually migrate your server and ensure you can take advantage of the benefits of Flexible Server, including better price & performance, granular control over database configuration, and custom maintenance windows. Following described are the key phases of the migration:
+The in-place migration provides a highly resilient and self-healing offline migration experience during a planned maintenance window, with less than **5 mins** of downtime. It uses backup and restore technology for faster migration time. This migration removes the overhead of manually migrating your server and ensures you can take advantage of the benefits of Flexible Server, including better price & performance, granular control over database configuration, and custom maintenance windows. The key phases of the migration are described below:
-* Target Flexible Server is deployed, inheriting all feature set and properties (including server parameters and firewall rules) from source Single Server. Source Single Server is set to read-only and backup from source Single Server is copied to the target Flexible Server.
-* DNS switch and cutover are performed successfully within the planned maintenance window with minimal downtime, allowing maintenance of the same connection string post-migration. Client applications seamlessly connect to the target flexible server without any user driven manual updates. In addition to both connection string formats (Single and Flexible Server) being supported on migrated Flexible Server, both username formats - username@server_name and username are also supported on the migrated Flexible Server.
-* The migrated Flexible Server is online and can now be managed via Azure portal/CLI. Stopped Single Server is deleted post days set as it's Backup Retention Period.
+* **Target Flexible Server is deployed**, inheriting all feature set and properties (including server parameters and firewall rules) from the source Single Server. The source Single Server is set to read-only, and a backup from the source Single Server is copied to the target Flexible Server.
+* **DNS switch and cutover** are performed successfully within the planned maintenance window with minimal downtime, allowing maintenance of the same connection string post-migration. Client applications seamlessly connect to the target flexible server without any user-driven manual updates. In addition to both connection string formats (Single and Flexible Server) being supported on the migrated Flexible Server, both username formats - username@server_name and username - are also supported on the migrated Flexible Server.
+* The **migrated Flexible Server is online** and can now be managed via the Azure portal/CLI. The stopped Single Server is deleted after the number of days set as its backup retention period.
> [!NOTE]
-> In-place migration is only for Single Server database workloads with Basic or GP SKU, data storage used < 10 GiB and no complex features enabled. All other Single Server workloads are recommended to use user-initiated migration tooling offered by Azure - Azure DMS, Azure MySQL Import to migrate.
+> In-place migration is only for Single Server database workloads with Basic or GP SKU, data storage used < 10 GiB and no complex features (CMK, AAD, Read Replica, Private Link) enabled. All other Single Server workloads are recommended to use the user-initiated migration tooling offered by Azure (Azure DMS, Azure MySQL Import) to migrate.
## Configure migration alerts and review migration schedule
Servers eligible for in-place automigration are sent an advance notification by
You can check and configure automigration notifications in the following ways: * Subscription owners for Single Servers scheduled for automigration receive an email notification.
-* Configure service health alerts to receive in-place migration schedule and progress notifications via email/SMS by following steps [here](../single-server/concepts-planned-maintenance-notification.md#to-receive-planned-maintenance-notification).
-* Check the in-place migration notification on the Azure portal by following steps [here](../single-server/concepts-planned-maintenance-notification.md#check-planned-maintenance-notification-from-azure-portal).
+* Configure **service health alerts** to receive in-place migration schedule and progress notifications via email/SMS by following steps [here](../single-server/concepts-planned-maintenance-notification.md#to-receive-planned-maintenance-notification).
+* Check the in-place migration **notification on the Azure portal** by following steps [here](../single-server/concepts-planned-maintenance-notification.md#check-planned-maintenance-notification-from-azure-portal).
You can review your migration schedule in the following ways once you have received the in-place automigration notification: > [!NOTE] > The migration schedule will be locked 7 days prior to the scheduled migration window, after which you'll be unable to reschedule.
-* The Single Server overview page for your instance displays a portal banner with information about your migration schedule.
-* For Single Servers scheduled for automigration, a new Migration blade is lighted on the portal. You can review the migration schedule by navigating to the Migration blade of your Single Server instance.
+* The **Single Server overview page** for your instance displays a portal banner with information about your migration schedule.
+* For Single Servers scheduled for automigration, a new **Migration blade** is shown on the portal. You can review the migration schedule by navigating to the Migration blade of your Single Server instance.
* If you wish to defer the migration, you can defer by a month at a time by navigating to the Migration blade of your single server instance on the Azure portal and rescheduling the migration by selecting another migration window within a month.
-* If your Single Server has General Purpose SKU, you have the other option to enable High Availability when reviewing the migration schedule. As High Availability can only be enabled during create time for a MySQL Flexible Server, it's highly recommended that you enable this feature when reviewing the migration schedule.
+* If your Single Server has **General Purpose SKU**, you also have the option to enable **High Availability** when reviewing the migration schedule. As High Availability can only be enabled during create time for a MySQL Flexible Server, it's highly recommended that you enable this feature when reviewing the migration schedule.
## Pre-requisite checks for in-place auto-migration
-* The Single Server instance should be in ready state and should not be in stopped state during the planned maintenance window for automigration to take place.
-* For Single Server instance with SSL enabled, ensure you have both certificates (BaltimoreCyberTrustRoot & DigiCertGlobalRootG2 Root CA) available in the trusted root store. Additionally, if you have the certificate pinned to the connection string create a combined CA certificate before scheduled auto-migration by following steps [here](../single-server/concepts-certificate-rotation.md#create-a-combined-ca-certificate) to ensure business continuity post-migration.
+* The Single Server instance should be in **ready state** and shouldn't be in a stopped state during the planned maintenance window for the automigration to take place.
+* For Single Server instances with **SSL enabled**, ensure you have both certificates (**BaltimoreCyberTrustRoot & DigiCertGlobalRootG2 Root CA**) available in the trusted root store. Additionally, if you have the certificate pinned to the connection string, create a combined CA certificate before the scheduled auto-migration by following the steps [here](../single-server/concepts-certificate-rotation.md#create-a-combined-ca-certificate) to ensure business continuity post-migration.
+* The MySQL engine doesn't guarantee any sort order if there's no `ORDER BY` clause present in queries. Post in-place automigration, you may observe a change in the sort order. If preserving sort order is crucial, ensure your queries are updated to include an `ORDER BY` clause before the scheduled in-place automigration, as illustrated in the sketch below.
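A minimal illustration of the sort-order point above, using hypothetical table and column names:

```sql
-- Without ORDER BY, the result order isn't guaranteed and may change
-- after the in-place automigration.
SELECT order_id, created_at
FROM orders;

-- With an explicit ORDER BY, the sort order is deterministic.
SELECT order_id, created_at
FROM orders
ORDER BY created_at, order_id;
```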
## How is the target MySQL Flexible Server auto-provisioned?
Following described are the ways to review your migration schedule once you have
| Memory Optimized | 32 | MemoryOptimized | Standard_E32ds_v4 | * The MySQL version, region, *storage size, subscription, and resource group for the target Flexible Server are the same as those of the source Single Server.
-*For Single Servers with less than 20 GiB storage, the storage size is set to 20 GiB as that is the minimum storage limit on Azure Database for MySQL - Flexible Server.
+* For Single Servers with less than 20 GiB storage, the storage size is set to 20 GiB as that is the minimum storage limit on Azure Database for MySQL - Flexible Server.
* Both username formats - username@server_name (Single Server) and username (Flexible Server) - are supported on the migrated Flexible Server. * Both connection string formats - Single Server and Flexible Server - are supported on the migrated Flexible Server.
Copy the following properties from the source Single Server to target Flexible S
**Q. How can I set up or view in-place migration alerts?**
-**A.**
+**A.** Following are the ways you can set up alerts:
* Configure service health alerts to receive in-place migration schedule and progress notifications via email/SMS by following steps [here](../single-server/concepts-planned-maintenance-notification.md#to-receive-planned-maintenance-notification). * Check the in-place migration notification on the Azure portal by following steps [here](../single-server/concepts-planned-maintenance-notification.md#check-planned-maintenance-notification-from-azure-portal).
Copy the following properties from the source Single Server to target Flexible S
**Q. What are some post-migration activities I need to perform?**
-**A.**
+**A.** Following are some post-migration activities:
* Monitoring page settings (Alerts, Metrics, and Diagnostic settings) * Any Terraform/CLI scripts you host to manage your Single Server instance should be updated with Flexible Server references.
mysql Migrate Single Flexible Mysql Import Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/migrate/migrate-single-flexible-mysql-import-cli.md
iops | 500 | Number of IOPS to be allocated for the target Azure Database for My
## How long does MySQL Import take to migrate my Single Server instance?

Below is the benchmarked performance based on storage size.

| Single Server Storage Size | MySQL Import time |
| - | :-: |
| 1 GiB | 0 min 23 secs |
Below is the benchmarked performance based on storage size.
From the table above, as the storage size increases, the time required for data copying also increases, almost in a linear relationship. However, it's important to note that copy speed can be significantly impacted by network fluctuations. Therefore, the data provided here should be taken as a reference only.

Below is the benchmarked performance based on varying number of tables for 10 GiB storage size.

| Number of tables in Single Server instance | MySQL Import time |
- | - |:-:|
+ | - | :-: |
| 100 | 4 min 24 secs |
| 200 | 4 min 40 secs |
| 800 | 4 min 52 secs |
Below is the benchmarked performance based on varying number of tables for 10 Gi
## Post-import steps

- Copy the following properties from the source Single Server to the target Flexible Server after the MySQL Import operation completes successfully:
- - Server parameters
  - Firewall rules
  - Read-Replicas
  - Monitoring page settings (Alerts, Metrics, and Diagnostic settings)
mysql Concepts Connection Libraries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-connection-libraries.md
MySQL offers standard database driver connectivity for using MySQL with applicat
| PHP | Windows, Linux | [MySQL native driver for PHP - mysqlnd](https://dev.mysql.com/downloads/connector/php-mysqlnd/) | [Download](https://secure.php.net/downloads.php) |
| ODBC | Windows, Linux, macOS X, and Unix platforms | [MySQL Connector/ODBC Developer Guide](https://dev.mysql.com/doc/connector-odbc/en/) | [Download](https://dev.mysql.com/downloads/connector/odbc/) |
| ADO.NET | Windows | [MySQL Connector/Net Developer Guide](https://dev.mysql.com/doc/connector-net/en/) | [Download](https://dev.mysql.com/downloads/connector/net/) |
-| JDBC | Platform independent | [MySQL Connector/J 5.1 Developer Guide](https://dev.mysql.com/doc/connector-j/5.1/en/) | [Download](https://dev.mysql.com/downloads/connector/j/) |
+| JDBC | Platform independent | [MySQL Connector/J 8.1 Developer Guide](https://dev.mysql.com/doc/connector-j/8.1/en/) | [Download](https://dev.mysql.com/downloads/connector/j/) |
| Node.js | Windows, Linux, macOS X | [sidorares/node-mysql2](https://github.com/sidorares/node-mysql2/tree/master/documentation) | [Download](https://github.com/sidorares/node-mysql2) |
| Python | Windows, Linux, macOS X | [MySQL Connector/Python Developer Guide](https://dev.mysql.com/doc/connector-python/en/) | [Download](https://dev.mysql.com/downloads/connector/python/) |
| C++ | Windows, Linux, macOS X | [MySQL Connector/C++ Developer Guide](https://dev.mysql.com/doc/connector-cpp/en/) | [Download](https://dev.mysql.com/downloads/connector/cpp/) |
mysql Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/policy-reference.md
Previously updated : 08/08/2023
Last updated : 08/30/2023

# Azure Policy built-in definitions for Azure Database for MySQL
mysql Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/security-controls-policy.md
Previously updated : 08/03/2023
Last updated : 08/25/2023

# Azure Policy Regulatory Compliance controls for Azure Database for MySQL
mysql Whats Happening To Mysql Single Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/whats-happening-to-mysql-single-server.md
Learn how to migrate from Azure Database for MySQL - Single Server to Azure Data
For more information on migrating from Single Server to Flexible Server using other migration tools, visit [Select the right tools for migration to Azure Database for MySQL](../migrate/how-to-decide-on-right-migration-tools.md).

> [!NOTE]
-> In-place auto-migration from Azure Database for MySQL – Single Server to Flexible Server is a service-initiated in-place migration during planned maintenance window for Single Server database workloads with Basic or General Purpose SKU, data storage used < 10 GiB and no complex features enabled. The eligible servers are identified by the service and are sent an advance notification detailing steps to review migration details. All other Single Server workloads are recommended to use user-initiated migration tooling offered by Azure - Azure DMS, Azure MySQL Import to migrate. Learn more about in-place auto-migration [here](../migrate/migrate-single-flexible-in-place-auto-migration.md).
+> In-place auto-migration from Azure Database for MySQL – Single Server to Flexible Server is a service-initiated in-place migration during planned maintenance window for Single Server database workloads with Basic or General Purpose SKU, data storage used < 10 GiB and no complex features (CMK, AAD, Read Replica, Private Link) enabled. The eligible servers are identified by the service and are sent an advance notification detailing steps to review migration details. All other Single Server workloads are recommended to use user-initiated migration tooling offered by Azure - Azure DMS, Azure MySQL Import to migrate. Learn more about in-place auto-migration [here](../migrate/migrate-single-flexible-in-place-auto-migration.md).
## Migration Eligibility
To upgrade to Azure Database for MySQL Flexible Server, it's important to know w
| Single Server configuration not supported in Flexible Server | How and when to migrate? |
| | -- |
| Single servers with Private Link enabled | Private Link for flexible server is available now, and you can start migrating your single server. |
-| Single servers with Cross-Region Read Replicas enabled | Cross-Region Read Replicas for flexible server (for paired region) is available now, and you can start migrating your single server. |
+| Single servers with Cross-Region Read Replicas enabled | Cross-Region Read Replicas for flexible server is available now, and you can start migrating your single server. |
| Single servers with Query Store enabled | You are eligible to migrate and you can configure slow query logs on the target flexible server by following steps [here](https://learn.microsoft.com/azure/mysql/flexible-server/tutorial-query-performance-insights#configure-slow-query-logs-by-using-the-azure-portal). You can then view query insights by using [workbooks template](https://learn.microsoft.com/azure/mysql/flexible-server/tutorial-query-performance-insights#view-query-insights-by-using-workbooks). |
| Single server deployed in regions where flexible server isn't supported (Learn more about regions [here](https://azure.microsoft.com/explore/global-infrastructure/products-by-region/?regions=all&products=mysql)). | Azure Database Migration Service (classic) supports cross-region migration. Deploy your target flexible server in a suitable region and migrate using DMS (classic). |
To upgrade to Azure Database for MySQL Flexible Server, it's important to know w
**Q. I have cross-region read replicas configured for my single server, and this feature is not currently supported in Flexible Server. How do I migrate?**
-**A.** Cross-Region Read Replicas for flexible server (for paired region) is available now, and you can start migrating your single server.
+**A.** Cross-Region Read Replicas for flexible server is available now, and you can start migrating your single server.
**Q. I have TLS v1.0/1.1 configured for my v8.0 single server, and this feature is not currently supported in Flexible Server. How do I migrate?**
network-watcher Diagnose Vm Network Traffic Filtering Problem Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/diagnose-vm-network-traffic-filtering-problem-cli.md
Title: 'Quickstart: Diagnose a VM network traffic filter problem - Azure CLI'
+ Title: 'Quickstart: Diagnose a VM traffic filter problem - Azure CLI'
-description: In this quickstart, you learn how to diagnose a virtual machine network traffic filter problem using the IP flow verify capability of Azure Network Watcher in Azure CLI.
+description: In this quickstart, you learn how to diagnose a virtual machine network traffic filter problem using Azure Network Watcher IP flow verify in Azure CLI.
-Previously updated : 06/30/2023
-#Customer intent: I need to diagnose a virtual machine (VM) network traffic filter problem that prevents communication to and from a VM.
+Last updated : 08/23/2023
+#Customer intent: I want to diagnose a virtual machine (VM) network traffic filter problem using IP flow verify to know which security rule is denying the traffic and causing the communication problem to the VM.
# Quickstart: Diagnose a virtual machine network traffic filter problem using the Azure CLI
-Azure allows and denies network traffic to and from a virtual machine based on its [effective security rules](network-watcher-security-group-view-overview.md). These security rules come from the network security groups applied to the virtual machine's network interface and subnet.
+In this quickstart, you deploy a virtual machine and use Network Watcher [IP flow verify](network-watcher-ip-flow-verify-overview.md) to test the connectivity to and from different IP addresses. Using the IP flow verify results, you determine the security rule that's blocking the traffic and causing the communication failure and learn how you can resolve it. You also learn how to use the [effective security rules](network-watcher-security-group-view-overview.md) for a network interface to determine why a security rule is allowing or denying traffic.
-In this quickstart, you deploy a virtual machine and use Network Watcher [IP flow verify](network-watcher-ip-flow-verify-overview.md) to test the connectivity to and from different IP addresses. Using the IP flow verify results, you determine the cause of a communication failure and learn how you can resolve it.
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
If you don't have an Azure subscription, create a [free account](https://azure.m
In this section, you create a virtual network and a subnet in the East US region. Then, you create a virtual machine in the subnet with a default network security group.
-1. Before you can create a VM, you must create a resource group to contain the VM. Create a resource group with [az group create](/cli/azure/group). The following example creates a resource group named *myResourceGroup* in the *eastus* location:
+1. Create a resource group using [az group create](/cli/azure/group). An Azure resource group is a logical container into which Azure resources are deployed and managed.
-```azurecli-interactive
-az group create --name myResourceGroup --location eastus
-```
+ ```azurecli-interactive
+ # Create a resource group.
+ az group create --name 'myResourceGroup' --location 'eastus'
+ ```
-2. Create a VM with [az vm create](/cli/azure/vm). If SSH keys don't already exist in a default key location, the command creates them. To use a specific set of keys, use the `--ssh-key-value` option. The following example creates a VM named *myVm*:
+1. Create a virtual network using [az network vnet create](/cli/azure/network/vnet#az-network-vnet-create).
-```azurecli-interactive
-az vm create \
- --resource-group myResourceGroup \
- --name myVm \
- --image UbuntuLTS \
- --generate-ssh-keys
-```
+ ```azurecli-interactive
+ # Create a virtual network and a subnet.
+ az network vnet create --resource-group 'myResourceGroup' --name 'myVNet' --subnet-name 'mySubnet' --subnet-prefixes 10.0.0.0/24
+ ```
+
+1. Create a default network security group using [az network nsg create](/cli/azure/network/nsg#az-network-nsg-create).
+
+ ```azurecli-interactive
+ # Create a default network security group.
+ az network nsg create --name 'myVM-nsg' --resource-group 'myResourceGroup' --location 'eastus'
+ ```
+
+1. Create a virtual machine using [az vm create](/cli/azure/vm#az-vm-create). When prompted, enter a username and password.
-The VM takes a few minutes to create. Don't continue with the remaining steps until the VM is created and the Azure CLI returns the output.
+ ```azurecli-interactive
+ # Create a Linux virtual machine using the latest Ubuntu 20.04 LTS image.
+ az vm create --resource-group 'myResourceGroup' --name 'myVM' --location 'eastus' --vnet-name 'myVNet' --subnet 'mySubnet' --public-ip-address '' --nsg 'myVM-nsg' --image 'Canonical:0001-com-ubuntu-server-focal:20_04-lts-gen2:latest'
+ ```
## Test network communication using IP flow verify

In this section, you use the IP flow verify capability of Network Watcher to test network communication to and from the virtual machine.
-When you create a VM, Azure allows and denies network traffic to and from the VM, by default. You might override Azure's defaults later, allowing or denying additional types of traffic. To test whether traffic is allowed or denied to different destinations and from a source IP address, use the [az network watcher test-ip-flow](/cli/azure/network/watcher#az-network-watcher-test-ip-flow) command.
+1. Use the [az network watcher test-ip-flow](/cli/azure/network/watcher#az-network-watcher-test-ip-flow) command to test outbound communication from **myVM** to **13.107.21.200** using IP flow verify (`13.107.21.200` is one of the public IP addresses used by `www.bing.com`):
-Test outbound communication from the VM to one of the IP addresses for www.bing.com:
-```azurecli-interactive
-az network watcher test-ip-flow \
- --direction outbound \
- --local 10.0.0.4:60000 \
- --protocol TCP \
- --remote 13.107.21.200:80 \
- --vm myVm \
- --nic myVmVMNic \
- --resource-group myResourceGroup \
- --out table
-```
+ ```azurecli-interactive
+ # Start the IP flow verify session to test outbound flow to www.bing.com.
+ az network watcher test-ip-flow --direction 'outbound' --protocol 'TCP' --local '10.0.0.4:60000' --remote '13.107.21.200:80' --vm 'myVM' --nic 'myVmVMNic' --resource-group 'myResourceGroup' --out 'table'
+ ```
-After several seconds, the result returned informs you that access is allowed by a security rule named **DenyAllOutBound**.
+ After a few seconds, you get a similar output to the following example:
-Test outbound communication from the VM to 172.31.0.100:
+ ```output
+ Access RuleName
+ --
+ Allow defaultSecurityRules/AllowInternetOutBound
+ ```
-```azurecli-interactive
-az network watcher test-ip-flow \
- --direction outbound \
- --local 10.0.0.4:60000 \
- --protocol TCP \
- --remote 172.31.0.100:80 \
- --vm myVm \
- --nic myVmVMNic \
- --resource-group myResourceGroup \
- --out table
-```
+ The test result indicates that access is allowed to **13.107.21.200** because of the default security rule **AllowInternetOutBound**. By default, Azure virtual machines can access the internet.
-The result returned informs you that access is denied by a security rule named **DenyAllOutBound**.
+1. Change **RemoteIPAddress** to **10.0.1.10** and repeat the test. **10.0.1.10** is a private IP address in **myVNet** address space.
-Test inbound communication to the VM from 172.31.0.100:
+ ```azurecli-interactive
+ # Start the IP flow verify session to test outbound flow to 10.0.1.10.
+ az network watcher test-ip-flow --direction 'outbound' --protocol 'TCP' --local '10.0.0.4:60000' --remote '10.0.1.10:80' --vm 'myVM' --nic 'myVmVMNic' --resource-group 'myResourceGroup' --out 'table'
+ ```
-```azurecli-interactive
-az network watcher test-ip-flow \
- --direction inbound \
- --local 10.0.0.4:80 \
- --protocol TCP \
- --remote 172.31.0.100:60000 \
- --vm myVm \
- --nic myVmVMNic \
- --resource-group myResourceGroup \
- --out table
-```
+ After a few seconds, you get a similar output to the following example:
-The result returned informs you that access is denied because of a security rule named **DenyAllInBound**. Now that you know which security rules are allowing or denying traffic to or from a VM, you can determine how to resolve the problems.
+ ```output
+ Access RuleName
+ --
+ Allow defaultSecurityRules/AllowVnetOutBound
+ ```
-## View details of a security rule
+ The result of the second test indicates that access is allowed to **10.0.1.10** because of the default security rule **AllowVnetOutBound**. By default, an Azure virtual machine can access all IP addresses in the address space of its virtual network.
-To determine why the rules in the previous section are allowing or preventing communication, review the effective security rules for the network interface with the [az network nic list-effective-nsg](/cli/azure/network/nic#az-network-nic-list-effective-nsg) command:
+1. Change **RemoteIPAddress** to **10.10.10.10** and repeat the test. **10.10.10.10** is a private IP address that isn't in **myVNet** address space.
-```azurecli-interactive
-az network nic list-effective-nsg \
- --resource-group myResourceGroup \
- --name myVmVMNic
-```
+ ```azurecli-interactive
+ # Start the IP flow verify session to test outbound flow to 10.10.10.10.
+ az network watcher test-ip-flow --direction 'outbound' --protocol 'TCP' --local '10.0.0.4:60000' --remote '10.10.10.10:80' --vm 'myVM' --nic 'myVmVMNic' --resource-group 'myResourceGroup' --out 'table'
+ ```
-The output includes the following text for the **AllowInternetOutbound** rule that allowed outbound access to www.bing.com in a previous step under [Test network communication using IP flow verify](#test-network-communication-using-ip-flow-verify) section:
+ After a few seconds, you get a similar output to the following example:
+
+ ```output
+ Access RuleName
+ --
+ Deny defaultSecurityRules/DenyAllOutBound
+ ```
-```console
-{
- "access": "Allow",
- "additionalProperties": {},
- "destinationAddressPrefix": "Internet",
- "destinationAddressPrefixes": [
- "Internet"
- ],
- "destinationPortRange": "0-65535",
- "destinationPortRanges": [
- "0-65535"
- ],
- "direction": "Outbound",
- "expandedDestinationAddressPrefix": [
- "1.0.0.0/8",
- "2.0.0.0/7",
- "4.0.0.0/6",
- "8.0.0.0/7",
- "11.0.0.0/8",
- "12.0.0.0/6",
- ...
- ],
- "expandedSourceAddressPrefix": null,
- "name": "defaultSecurityRules/AllowInternetOutBound",
- "priority": 65001,
- "protocol": "All",
- "sourceAddressPrefix": "0.0.0.0/0",
- "sourceAddressPrefixes": [
- "0.0.0.0/0"
- ],
- "sourcePortRange": "0-65535",
- "sourcePortRanges": [
- "0-65535"
- ]
-},
-```
+ The result of the third test indicates that access is denied to **10.10.10.10** because of the default security rule **DenyAllOutBound**.
-You can see in the previous output that **destinationAddressPrefix** is **Internet**. It's not clear how 13.107.21.200 relates to **Internet** though. You see several address prefixes listed under **expandedDestinationAddressPrefix**. One of the prefixes in the list is **12.0.0.0/6**, which encompasses the 12.0.0.1-15.255.255.254 range of IP addresses. Since 13.107.21.200 is within that address range, the **AllowInternetOutBound** rule allows the outbound traffic. Additionally, there are no higher priority (lower number) rules shown in the previous output that override this rule. To deny outbound communication to an IP address, you could add a security rule with a higher priority, that denies port 80 outbound to the IP address.
+1. Change **direction** to **inbound**, the local port to **80**, and the remote port to **60000**, and then repeat the test.
-When you ran the [az network watcher test-ip-flow](/cli/azure/network/watcher#az-network-watcher-test-ip-flow) command to test outbound communication to 172.131.0.100 in [Test network communication using IP flow verify](#test-network-communication-using-ip-flow-verify) section, the output informed you that the **DenyAllOutBound** rule denied the communication. The **DenyAllOutBound** rule equates to the **DenyAllOutBound** rule listed in the following output from the [az network nic list-effective-nsg](/cli/azure/network/nic#az-network-nic-list-effective-nsg) command:
+ ```azurecli-interactive
+ # Start the IP flow verify session to test inbound flow from 10.10.10.10.
+ az network watcher test-ip-flow --direction 'inbound' --protocol 'TCP' --local '10.0.0.4:80' --remote '10.10.10.10:60000' --vm 'myVM' --nic 'myVmVMNic' --resource-group 'myResourceGroup' --out 'table'
+ ```
-```console
-{
- "access": "Deny",
- "additionalProperties": {},
- "destinationAddressPrefix": "0.0.0.0/0",
- "destinationAddressPrefixes": [
- "0.0.0.0/0"
- ],
- "destinationPortRange": "0-65535",
- "destinationPortRanges": [
- "0-65535"
- ],
- "direction": "Outbound",
- "expandedDestinationAddressPrefix": null,
- "expandedSourceAddressPrefix": null,
- "name": "defaultSecurityRules/DenyAllOutBound",
- "priority": 65500,
- "protocol": "All",
- "sourceAddressPrefix": "0.0.0.0/0",
- "sourceAddressPrefixes": [
- "0.0.0.0/0"
- ],
- "sourcePortRange": "0-65535",
- "sourcePortRanges": [
- "0-65535"
- ]
-}
+ After a few seconds, you get a similar output to the following example:
+
+ ```output
+ Access RuleName
+ --
+ Deny defaultSecurityRules/DenyAllInBound
+ ```
+
+ The result of the fourth test indicates that access is denied from **10.10.10.10** because of the default security rule **DenyAllInBound**. By default, all access to an Azure virtual machine from outside the virtual network is denied.
+
+## View details of a security rule
+
+To determine why the rules in the previous section allow or deny communication, review the effective security rules for the network interface of **myVM** virtual machine using the [az network nic list-effective-nsg](/cli/azure/network/nic#az-network-nic-list-effective-nsg) command:
+
+```azurecli-interactive
+# Get the effective security rules for the network interface of myVM.
+az network nic list-effective-nsg --resource-group 'myResourceGroup' --name 'myVmVMNic'
```
-The rule lists **0.0.0.0/0** as the **destinationAddressPrefix**. The rule denies the outbound communication to 172.131.0.100 because the address is not within the **destinationAddressPrefix** of any of the other outbound rules in the output from the [az network nic list-effective-nsg](/cli/azure/network/nic#az-network-nic-list-effective-nsg) command. To allow the outbound communication, you could add a security rule with a higher priority, that allows outbound traffic to port 80 at 172.131.0.100.
+The returned output includes the following information for the **AllowInternetOutBound** rule that allowed outbound access to `www.bing.com`:
-When you ran the [az network watcher test-ip-flow](/cli/azure/network/watcher#az-network-watcher-test-ip-flow) command in [Test network communication using IP flow verify](#test-network-communication-using-ip-flow-verify) section to test inbound communication from 172.131.0.100, the output informed you that the **DenyAllInBound** rule denied the communication. The **DenyAllInBound** rule equates to the **DenyAllInBound** rule listed in the following output from the [az network nic list-effective-nsg](/cli/azure/network/nic#az-network-nic-list-effective-nsg) command:
-```console
+```output
{
- "access": "Deny",
- "additionalProperties": {},
- "destinationAddressPrefix": "0.0.0.0/0",
- "destinationAddressPrefixes": [
- "0.0.0.0/0"
- ],
- "destinationPortRange": "0-65535",
- "destinationPortRanges": [
- "0-65535"
- ],
- "direction": "Inbound",
- "expandedDestinationAddressPrefix": null,
- "expandedSourceAddressPrefix": null,
- "name": "defaultSecurityRules/DenyAllInBound",
- "priority": 65500,
- "protocol": "All",
- "sourceAddressPrefix": "0.0.0.0/0",
- "sourceAddressPrefixes": [
- "0.0.0.0/0"
- ],
- "sourcePortRange": "0-65535",
- "sourcePortRanges": [
- "0-65535"
- ]
+ "access": "Allow",
+ "destinationAddressPrefix": "Internet",
+ "destinationAddressPrefixes": [
+ "Internet"
+ ],
+ "destinationPortRange": "0-65535",
+ "destinationPortRanges": [
+ "0-65535"
+ ],
+ "direction": "Outbound",
+ "expandedDestinationAddressPrefix": [
+ "1.0.0.0/8",
+ "2.0.0.0/7",
+ "4.0.0.0/9",
+ "4.144.0.0/12",
+ "4.160.0.0/11",
+ "4.192.0.0/10",
+ "5.0.0.0/8",
+ "6.0.0.0/7",
+ "8.0.0.0/7",
+ "11.0.0.0/8",
+ "12.0.0.0/8",
+ "13.0.0.0/10",
+ "13.64.0.0/11",
+ "13.104.0.0/13",
+ "13.112.0.0/12",
+ "13.128.0.0/9",
+ "14.0.0.0/7",
+ ...
+ ...
+ ...
+ "200.0.0.0/5",
+ "208.0.0.0/4"
+ ],
+ "name": "defaultSecurityRules/AllowInternetOutBound",
+ "priority": 65001,
+ "protocol": "All",
+ "sourceAddressPrefix": "0.0.0.0/0",
+ "sourceAddressPrefixes": [
+ "0.0.0.0/0",
+ "0.0.0.0/0"
+ ],
+ "sourcePortRange": "0-65535",
+ "sourcePortRanges": [
+ "0-65535"
+ ]
},
```
-The **DenyAllInBound** rule is applied because, as shown in the output, no other higher priority rule exists in the output from the [az network nic list-effective-nsg](/cli/azure/network/nic#az-network-nic-list-effective-nsg) command that allows port 80 inbound to the VM from 172.131.0.100. To allow the inbound communication, you could add a security rule with a higher priority that allows port 80 inbound from 172.131.0.100.
+You can see in the output that address prefix **13.104.0.0/13** is among the address prefixes of the **AllowInternetOutBound** rule. This prefix encompasses the IP address **13.107.21.200**, which you used to test outbound communication to `www.bing.com`.
-The checks in this quickstart tested Azure configuration. If the checks return the expected results and you still have network problems, ensure that you don't have a firewall between your VM and the endpoint you're communicating with and that the operating system in your VM doesn't have a firewall that is allowing or denying communication.
+Similarly, you can check the other rules to see the source and destination IP address prefixes under each rule.
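If you want to double-check a prefix match yourself, here's a minimal sketch using Python's standard `ipaddress` module; the prefix list is abbreviated from the effective rule output above:

```python
# Minimal sketch: check which expanded destination prefix contains the tested IP.
import ipaddress

tested_ip = ipaddress.ip_address("13.107.21.200")
# A few prefixes copied from the effective rule output above (abbreviated).
prefixes = ["1.0.0.0/8", "13.64.0.0/11", "13.104.0.0/13", "14.0.0.0/7"]

for prefix in prefixes:
    if tested_ip in ipaddress.ip_network(prefix):
        print(f"{tested_ip} is covered by {prefix}")  # prints 13.104.0.0/13
```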
## Clean up resources
-When no longer needed, you can use [az group delete](/cli/azure/group) to remove the resource group and all of the resources it contains:
+When no longer needed, use [az group delete](/cli/azure/group) to delete **myResourceGroup** resource group and all of the resources it contains:
```azurecli-interactive
-az group delete --name myResourceGroup --yes
+# Delete the resource group and all resources it contains.
+az group delete --name 'myResourceGroup' --yes
```

## Next steps

In this quickstart, you created a VM and diagnosed inbound and outbound network traffic filters. You learned that network security group rules allow or deny traffic to and from a VM. Learn more about [security rules](../virtual-network/network-security-groups-overview.md?toc=%2fazure%2fnetwork-watcher%2ftoc.json) and how to [create security rules](../virtual-network/manage-network-security-group.md?toc=%2fazure%2fnetwork-watcher%2ftoc.json#create-a-security-rule).
-Even with the proper network traffic filters in place, communication to a virtual machine can still fail, due to routing configuration. To learn how to diagnose virtual machine routing problems, see [Diagnose a virtual machine network routing problem](diagnose-vm-network-routing-problem-cli.md). To diagnose outbound routing, latency, and traffic filtering problems with one tool, see [Troubleshoot connections with Azure Network Watcher](network-watcher-connectivity-cli.md).
+Even with the proper network traffic filters in place, communication to a virtual machine can still fail, due to routing configuration. To learn how to diagnose virtual machine routing problems, see [Diagnose a virtual machine network routing problem](diagnose-vm-network-routing-problem-cli.md). To diagnose outbound routing, latency, and traffic filtering problems with one tool, see [Troubleshoot connections with Azure Network Watcher](network-watcher-connectivity-cli.md).
network-watcher Diagnose Vm Network Traffic Filtering Problem Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/diagnose-vm-network-traffic-filtering-problem-powershell.md
Title: 'Quickstart: Diagnose a VM network traffic filter problem - Azure PowerShell'
+ Title: 'Quickstart: Diagnose a VM traffic filter problem - Azure PowerShell'
-description: In this quickstart, you learn how to diagnose a virtual machine network traffic filter problem using the IP flow verify capability of Azure Network Watcher in Azure PowerShell.
+description: In this quickstart, you learn how to diagnose a virtual machine network traffic filter problem using Azure Network Watcher IP flow verify in Azure PowerShell.
-Previously updated : 07/17/2023
-#Customer intent: I need to diagnose a virtual machine (VM) network traffic filter problem that prevents communication to and from a VM.
+Last updated : 08/23/2023
+#Customer intent: I want to diagnose a virtual machine (VM) network traffic filter problem using IP flow verify to know which security rule is denying the traffic and causing the communication problem to the VM.
# Quickstart: Diagnose a virtual machine network traffic filter problem using Azure PowerShell
-Azure allows and denies network traffic to and from a virtual machine based on its [effective security rules](network-watcher-security-group-view-overview.md). These security rules come from the network security groups applied to the virtual machine's network interface and subnet.
-
-In this quickstart, you deploy a virtual machine and use Network Watcher [IP flow verify](network-watcher-ip-flow-verify-overview.md) to test the connectivity to and from different IP addresses. Using the IP flow verify results, you determine the cause of a communication failure and learn how you can resolve it.
+In this quickstart, you deploy a virtual machine and use Network Watcher [IP flow verify](network-watcher-ip-flow-verify-overview.md) to test the connectivity to and from different IP addresses. Using the IP flow verify results, you determine the security rule that's blocking the traffic and causing the communication failure and learn how you can resolve it. You also learn how to use the [effective security rules](network-watcher-security-group-view-overview.md) for a network interface to determine why a security rule is allowing or denying traffic.
:::image type="content" source="./media/diagnose-vm-network-traffic-filtering-problem-powershell/ip-flow-verify-quickstart-diagram.png" alt-text="Diagram shows the resources created in Network Watcher quickstart.":::
In this section, you use the IP flow verify capability of Network Watcher to tes
1. Change **Direction** to **Inbound**, the **LocalPort** to **80**, and the **RemotePort** to **60000**, and then repeat the test.

    ```azurepowershell-interactive
- # Start the IP flow verify session to test outbound flow to 10.10.10.10.
+ # Start the IP flow verify session to test inbound flow from 10.10.10.10.
    Test-AzNetworkWatcherIPFlow -Location 'eastus' -TargetVirtualMachineId $vm.Id -Direction 'Inbound' -Protocol 'TCP' -RemoteIPAddress '10.10.10.10' -RemotePort '60000' -LocalIPAddress '10.0.0.4' -LocalPort '80'
    ```
Similarly, you can check the other rules to see the source and destination IP ad
When no longer needed, use [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup) to delete the resource group and all of the resources it contains:

```azurepowershell-interactive
-# Delete the resource group and all resources it contains.
+# Delete the resource group and all resources it contains.
Remove-AzResourceGroup -Name 'myResourceGroup' -Force
```
network-watcher Diagnose Vm Network Traffic Filtering Problem https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/diagnose-vm-network-traffic-filtering-problem.md
Title: 'Quickstart: Diagnose a VM network traffic filter problem - Azure portal'
+ Title: 'Quickstart: Diagnose a VM traffic filter problem - Azure portal'
-description: In this quickstart, you learn how to diagnose a virtual machine network traffic filter problem using the IP flow verify capability of Azure Network Watcher in the Azure portal.
+description: In this quickstart, you learn how to diagnose a virtual machine network traffic filter problem using Azure Network Watcher IP flow verify in the Azure portal.
-Previously updated : 07/17/2023
-#Customer intent: I need to diagnose and troubleshoot a virtual machine (VM) network traffic filter problem that prevents communication to and from a VM.
+Last updated : 08/23/2023
+#Customer intent: I want to diagnose a virtual machine (VM) network traffic filter problem using IP flow verify to know which security rule is denying the traffic and causing the communication problem to the VM.
# Quickstart: Diagnose a virtual machine network traffic filter problem using the Azure portal
-Azure allows and denies network traffic to and from a virtual machine based on its [effective security rules](network-watcher-security-group-view-overview.md). These security rules come from the network security groups applied to the virtual machine's network interface and subnet.
-
-In this quickstart, you deploy a virtual machine and use Network Watcher [IP flow verify](network-watcher-ip-flow-verify-overview.md) to test the connectivity to and from different IP addresses. Using the IP flow verify results, you determine the cause of a communication failure and learn how you can resolve it.
+In this quickstart, you deploy a virtual machine and use Network Watcher [IP flow verify](network-watcher-ip-flow-verify-overview.md) to test the connectivity to and from different IP addresses. Using the IP flow verify results, you determine the security rule that's blocking the traffic and causing the communication failure and learn how you can resolve it. You also learn how to use the [effective security rules](network-watcher-security-group-view-overview.md) for a network interface to determine why a security rule is allowing or denying traffic.
:::image type="content" source="./media/diagnose-vm-network-traffic-filtering-problem/ip-flow-verify-quickstart-diagram.png" alt-text="Diagram shows the resources created in Network Watcher quickstart.":::
network-watcher Monitor Vm Communication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/monitor-vm-communication.md
Title: 'Tutorial: Monitor network communication between two VMs - Azure portal'
description: In this tutorial, learn how to monitor network communication between two Azure virtual machines with Azure Network Watcher's connection monitor capability.
-Previously updated : 07/17/2023
-# Customer intent: I need to monitor the communication between two virtual machines in Azure. If the communication fails, I need to be alerted and I want to know why it failed, so that I can resolve the problem.
Last updated : 08/24/2023
+#CustomerIntent: As an Azure administrator, I want to monitor the communication between two virtual machines in Azure so I can be alerted and take action if the communication fails. I also want to know why the communication failed, so that I can resolve the problem.
# Tutorial: Monitor network communication between two virtual machines using the Azure portal
In this tutorial, you learn how to:
> * Monitor communication between the two virtual machines
> * Diagnose a communication problem between the two virtual machines
+If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
## Prerequisites

-- An Azure account with an active subscription. If you don't have one, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+- An Azure account with an active subscription.
## Sign in to Azure
network-watcher Network Watcher Nsg Flow Logging Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-nsg-flow-logging-cli.md
az network watcher flow-log delete --name 'myFlowLog' --location 'eastus' --no-w
> [!NOTE]
> Deleting a flow log does not delete the flow log data from the storage account. Flow logs data stored in the storage account follow the configured retention policy.
-## Next Steps
+## Next steps
- To learn how to use Azure built-in policies to audit or deploy NSG flow logs, see [Manage NSG flow logs using Azure Policy](nsg-flow-logs-policy-portal.md).
- To learn about traffic analytics, see [Traffic analytics](traffic-analytics.md).
network-watcher Network Watcher Nsg Flow Logging Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-nsg-flow-logging-powershell.md
Remove-AzNetworkWatcherFlowLog -Name 'myFlowLog' -Location 'eastus'
> [!NOTE]
> Deleting a flow log does not delete the flow log data from the storage account. Flow logs data stored in the storage account follow the configured retention policy.
-## Next Steps
+## Next steps
- To learn how to use Azure built-in policies to audit or deploy NSG flow logs, see [Manage NSG flow logs using Azure Policy](nsg-flow-logs-policy-portal.md).
- To learn about traffic analytics, see [Traffic analytics](traffic-analytics.md).
network-watcher Network Watcher Security Group View Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-security-group-view-cli.md
# Analyze your Virtual Machine security with Security Group View using Azure CLI
-> [!div class="op_single_selector"]
-> - [PowerShell](network-watcher-security-group-view-powershell.md)
-> - [Azure CLI](network-watcher-security-group-view-cli.md)
-> - [REST API](network-watcher-security-group-view-rest.md)
> [!NOTE]
> The Security Group View API is no longer being maintained and will be deprecated soon. Please use the [Effective Security Rules feature](./network-watcher-security-group-view-overview.md) which provides the same functionality.
A virtual machine is required to run the `vm list` cmdlet. The following command
az vm list --resource-group resourceGroupName
```
-Once you know the virtual machine, you can use the `vm show` cmdlet to get its resource Id:
+Once you know the virtual machine, you can use the `vm show` cmdlet to get its resource ID:
```azurecli az vm show -resource-group resourceGroupName -name virtualMachineName
network-watcher Network Watcher Security Group View Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-security-group-view-powershell.md
# Analyze your Virtual Machine security with Security Group View using PowerShell
-> [!div class="op_single_selector"]
-> - [PowerShell](network-watcher-security-group-view-powershell.md)
-> - [Azure CLI](network-watcher-security-group-view-cli.md)
-> - [REST API](network-watcher-security-group-view-rest.md)
> [!NOTE]
> The Security Group View API is no longer being maintained and will be deprecated soon. Please use the [Effective Security Rules feature](./network-watcher-security-group-view-overview.md) which provides the same functionality.
network-watcher Network Watcher Security Group View Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-security-group-view-rest.md
- Title: Analyze network security - Security Group View - Azure REST API-
-description: This article will describe how to the Azure REST API to analyze a virtual machines security with Security Group View.
----- Previously updated : 03/01/2022----
-# Analyze your Virtual Machine security with Security Group View using REST API
-
-> [!div class="op_single_selector"]
-> - [PowerShell](network-watcher-security-group-view-powershell.md)
-> - [Azure CLI](network-watcher-security-group-view-cli.md)
-> - [REST API](network-watcher-security-group-view-rest.md)
-
-> [!NOTE]
-> The Security Group View API is no longer being maintained and will be deprecated soon. Please use the [Effective Security Rules feature](./network-watcher-security-group-view-overview.md) which provides the same functionality.
-
-Security group view returns configured and effective network security rules that are applied to a virtual machine. This capability is useful to audit and diagnose Network Security Groups and rules that are configured on a VM to ensure traffic is being correctly allowed or denied. In this article, we show you how to retrieve the effective and applied security rules to a virtual machine using REST API
---
-## Before you begin
-
-In this scenario, you call the Network Watcher REST API to get the security group view for a virtual machine. ARMclient is used to call the REST API using PowerShell. ARMClient is found on chocolatey at [ARMClient on Chocolatey](https://chocolatey.org/packages/ARMClient)
-
-This scenario assumes you have already followed the steps in [Create a Network Watcher](network-watcher-create.md) to create a Network Watcher. The scenario also assumes that a Resource Group with a valid virtual machine exists to be used.
-
-## Scenario
-
-The scenario covered in this article retrieves the effective and applied security rules for a given virtual machine.
-
-## Log in with ARMClient
-
-```powershell
-armclient login
-```
-
-## Retrieve a virtual machine
-
-Run the following script to return a virtual machine. The following code needs variables:
-
-- **subscriptionId** - The subscription ID can also be retrieved with the **Get-AzSubscription** cmdlet.
-- **resourceGroupName** - The name of a resource group that contains virtual machines.
-
-```powershell
-$subscriptionId = '<subscription id>'
-$resourceGroupName = '<resource group name>'
-
-armclient get https://management.azure.com/subscriptions/${subscriptionId}/ResourceGroups/${resourceGroupName}/providers/Microsoft.Compute/virtualMachines?api-version=2015-05-01-preview
-```
-
-The information that is needed is the **id** under the type `Microsoft.Compute/virtualMachines` in response, as seen in the following example:
-
-```json
-...,
- "networkProfile": {
- "networkInterfaces": [
- {
- "id": "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft
-.Network/networkInterfaces/{nicName}"
- }
- ]
- },
- "provisioningState": "Succeeded"
- },
- "resources": [
- {
- "id": "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Com
-pute/virtualMachines/{vmName}/extensions/CustomScriptExtension"
- }
- ],
- "type": "Microsoft.Compute/virtualMachines",
- "location": "westcentralus",
- "id": "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute
-/virtualMachines/{vmName}",
- "name": "{vmName}"
- }
- ]
-}
-```
-
-## Get security group view for virtual machine
-
-The following example requests the security group view of a targeted virtual machine. The results from this example can be used to compare to the rules and security defined by the origination to look for configuration drift.
-
-```powershell
-$subscriptionId = "<subscription id>"
-$resourceGroupName = "<resource group name>"
-$networkWatcherName = "<network watcher name>"
-$targetUri = "<uri of target resource>" # Example: /subscriptions/$subscriptionId/resourceGroups/$resourceGroupName/providers/Microsoft.compute/virtualMachine/$vmName
-
-$requestBody = @"
-{
- 'targetResourceId': '${targetUri}'
-
-}
-"@
-armclient post "https://management.azure.com/subscriptions/${subscriptionId}/ResourceGroups/${resourceGroupName}/providers/Microsoft.Network/networkWatchers/${networkWatcherName}/securityGroupView?api-version=2016-12-01" $requestBody -verbose
-```
-
-## View the response
-
-The following sample is the response returned from the preceding command. The results show all the effective and applied security rules on the virtual machine broken down in groups of **NetworkInterfaceSecurityRules**, **DefaultSecurityRules**, and **EffectiveSecurityRules**.
-
-```json
-
-{
- "networkInterfaces": [
- {
- "securityRuleAssociations": {
- "networkInterfaceAssociation": {
- "id": "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/networkInterfaces/{nicName}",
- "securityRules": [
- {
- "name": "default-allow-rdp",
- "id": "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/networkSecurityGroups/{nsgName}/securityRules/default-allow-rdp",
- "etag": "W/\"d4c411d4-0d62-49dc-8092-3d4b57825740\"",
- "properties": {
- "provisioningState": "Succeeded",
- "protocol": "TCP",
- "sourcePortRange": "*",
- "destinationPortRange": "3389",
- "sourceAddressPrefix": "*",
- "destinationAddressPrefix": "*",
- "access": "Allow",
- "priority": 1000,
- "direction": "Inbound"
- }
- }
- ]
- },
- "defaultSecurityRules": [
- {
- "name": "AllowVnetInBound",
- "id": "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/networkSecurityGroups/{nsgName}/defaultSecurityRules/",
- "properties": {
- "provisioningState": "Succeeded",
- "description": "Allow inbound traffic from all VMs in VNET",
- "protocol": "*",
- "sourcePortRange": "*",
- "destinationPortRange": "*",
- "sourceAddressPrefix": "VirtualNetwork",
- "destinationAddressPrefix": "VirtualNetwork",
- "access": "Allow",
- "priority": 65000,
- "direction": "Inbound"
- }
- },
- ...
- ],
- "effectiveSecurityRules": [
- {
- "name": "DefaultOutboundDenyAll",
- "protocol": "All",
- "sourcePortRange": "0-65535",
- "destinationPortRange": "0-65535",
- "sourceAddressPrefix": "*",
- "destinationAddressPrefix": "*",
- "access": "Deny",
- "priority": 65500,
- "direction": "Outbound"
- },
- ...
- ]
- }
- }
- ]
-}
-```
-
-## Next steps
-
-Visit [Auditing Network Security Groups (NSG) with Network Watcher](network-watcher-security-group-view-powershell.md) to learn how to automate validation of Network Security Groups.
network-watcher Nsg Flow Logging https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/nsg-flow-logging.md
You can permanently delete an NSG flow log. Deleting a flow log deletes all its
> [!NOTE]
> Deleting a flow log does not delete the flow log data from the storage account. Flow logs data stored in the storage account follows the configured retention policy or stays stored in the storage account until manually deleted (in case no retention policy is configured).
-## Next Steps
+## Next steps
- To learn how to use Azure built-in policies to audit or deploy NSG flow logs, see [Manage NSG flow logs using Azure Policy](nsg-flow-logs-policy-portal.md).
- To learn about traffic analytics, see [Traffic analytics](traffic-analytics.md).
network-watcher Required Rbac Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/required-rbac-permissions.md
Title: Azure RBAC permissions required to use Azure Network Watcher capabilities
description: Learn which Azure role-based access control (Azure RBAC) permissions are required to use Azure Network Watcher capabilities.
-Previously updated : 04/03/2023
+Last updated : 08/18/2023

# Azure role-based access control permissions required to use Network Watcher capabilities
Azure role-based access control (Azure RBAC) enables you to assign only the spec
| Microsoft.Network/networkWatchers/write | Create or update a network watcher |
| Microsoft.Network/networkWatchers/delete | Delete a network watcher |
-## NSG flow logs
+## Flow logs
| Action | Description |
| | - |
| Microsoft.Network/networkWatchers/configureFlowLog/action | Configure a flow log |
| Microsoft.Network/networkWatchers/queryFlowLogStatus/action | Query status for a flow log |
+| Microsoft.Storage/storageAccounts/listServiceSas/Action, </br> Microsoft.Storage/storageAccounts/listAccountSas/Action, <br> Microsoft.Storage/storageAccounts/listKeys/Action | Fetch shared access signatures (SAS) enabling [secure access to storage account](../storage/common/storage-sas-overview.md) and write to the storage account |
## Connection troubleshoot
Microsoft.Network/networkWatchers/packetCaptures/queryStatus/read | View the sta
Network Watcher capabilities also require the following actions:
-| Action(s) | Description |
-| | - |
-| Microsoft.Authorization/\*/Read | Used to fetch Azure role assignments and policy definitions |
-| Microsoft.Resources/subscriptions/resourceGroups/Read | Used to enumerate all the resource groups in a subscription |
-| Microsoft.Storage/storageAccounts/Read | Used to get the properties for the specified storage account |
-| Microsoft.Storage/storageAccounts/listServiceSas/Action, </br> Microsoft.Storage/storageAccounts/listAccountSas/Action, <br> Microsoft.Storage/storageAccounts/listKeys/Action| Used to fetch shared access signatures (SAS) enabling [secure access to storage account](../storage/common/storage-sas-overview.md) and write to the storage account |
-| Microsoft.Compute/virtualMachines/Read, </br> Microsoft.Compute/virtualMachines/Write| Used to log in to the VM, do a packet capture and upload it to storage account|
-| Microsoft.Compute/virtualMachines/extensions/Read </br> Microsoft.Compute/virtualMachines/extensions/Write| Used to check if Network Watcher extension is present, and install if necessary |
-| Microsoft.Compute/virtualMachineScaleSets/Read, </br> Microsoft.Compute/virtualMachineScaleSets/Write| Used to access virtual machine scale sets, do packet captures and upload them to storage account|
-| Microsoft.Compute/virtualMachineScaleSets/extensions/Read, </br> Microsoft.Compute/virtualMachineScaleSets/extensions/Write| Used to check if Network Watcher extension is present, and install if necessary |
-| Microsoft.Insights/alertRules/* | Used to set up metric alerts |
-| Microsoft.Support/* | Used to create and update support tickets from Network Watcher |
+| Action(s) | Description |
+| | - |
+| Microsoft.Authorization/\*/Read | Fetch Azure role assignments and policy definitions |
+| Microsoft.Resources/subscriptions/resourceGroups/Read | Enumerate all the resource groups in a subscription |
+| Microsoft.Storage/storageAccounts/Read | Get the properties for the specified storage account |
+| Microsoft.Storage/storageAccounts/listServiceSas/Action, </br> Microsoft.Storage/storageAccounts/listAccountSas/Action, <br> Microsoft.Storage/storageAccounts/listKeys/Action | Fetch shared access signatures (SAS) enabling [secure access to storage account](../storage/common/storage-sas-overview.md) and write to the storage account |
+| Microsoft.Compute/virtualMachines/Read, </br> Microsoft.Compute/virtualMachines/Write| Log in to the VM, do a packet capture and upload it to storage account |
+| Microsoft.Compute/virtualMachines/extensions/Read, </br> Microsoft.Compute/virtualMachines/extensions/Write | Check if Network Watcher extension is present, and install if necessary |
+| Microsoft.Compute/virtualMachineScaleSets/Read, </br> Microsoft.Compute/virtualMachineScaleSets/Write | Access virtual machine scale sets, do packet captures and upload them to storage account |
+| Microsoft.Compute/virtualMachineScaleSets/extensions/Read, </br> Microsoft.Compute/virtualMachineScaleSets/extensions/Write| Check if Network Watcher extension is present, and install if necessary |
+| Microsoft.Insights/alertRules/* | Set up metric alerts |
+| Microsoft.Support/* | Create and update support tickets from Network Watcher |
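To audit whether these actions are granted, one option is to list the role assignments at the relevant scope. The following is a minimal sketch, assuming the `azure-identity` and `azure-mgmt-authorization` packages are installed; the subscription ID is a placeholder:

```python
# Minimal sketch; the subscription ID is a placeholder.
from azure.identity import DefaultAzureCredential
from azure.mgmt.authorization import AuthorizationManagementClient

subscription_id = "<subscription-id>"
client = AuthorizationManagementClient(DefaultAzureCredential(), subscription_id)

# List role assignments at the subscription scope; inspect the role definitions
# they reference to confirm the required actions are included.
scope = f"/subscriptions/{subscription_id}"
for assignment in client.role_assignments.list_for_scope(scope):
    print(assignment.principal_id, assignment.role_definition_id)
```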
network-watcher Traffic Analytics Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/traffic-analytics-schema.md
Title: Traffic analytics schema and data aggregation
description: Learn about schema and data aggregation in Azure Network Watcher traffic analytics to analyze flow logs.
-Previously updated : 04/11/2023
+Last updated : 08/25/2023
+#CustomerIntent: As an administrator, I want to learn about the traffic analytics schema so I can easily use the queries and understand their output.
# Schema and data aggregation in Azure Network Watcher traffic analytics
Traffic analytics is a cloud-based solution that provides visibility into user a
## Data aggregation
+# [**NSG flow logs**](#tab/nsg)
- All flow logs at a network security group between `FlowIntervalStartTime_t` and `FlowIntervalEndTime_t` are captured at one-minute intervals as blobs in a storage account.
- Default processing interval of traffic analytics is 60 minutes, meaning that every hour, traffic analytics picks blobs from the storage account for aggregation. However, if a processing interval of 10 minutes is selected, traffic analytics will instead pick blobs from the storage account every 10 minutes.
-- Flows that have the same `Source IP`, `Destination IP`, `Destination port`, `NSG name`, `NSG rule`, `Flow Direction`, and `Transport layer protocol` (TCP or UDP) (Note: source port is excluded for aggregation) are clubbed into a single flow by traffic analytics.
-- This single record is decorated (details in the section below) and ingested in Log Analytics by traffic analytics. This process can take up to 1 hour max.
+- Flows that have the same `Source IP`, `Destination IP`, `Destination port`, `NSG name`, `NSG rule`, `Flow Direction`, and `Transport layer protocol (TCP or UDP)` are clubbed into a single flow by traffic analytics (Note: source port is excluded for aggregation).
+- This single record is decorated (details in the section below) and ingested in Azure Monitor logs by traffic analytics. This process can take up to 1 hour.
- `FlowStartTime_t` field indicates the first occurrence of such an aggregated flow (same four-tuple) in the flow log processing interval between `FlowIntervalStartTime_t` and `FlowIntervalEndTime_t`.
-- For any resource in traffic analytics, the flows indicated in the Azure portal are total flows seen by the network security group, but in Log Analytics user sees only the single, reduced record. To see all the flows, use the `blob_id` field, which can be referenced from storage. The total flow count for that record matches the individual flows seen in the blob.
+- For any resource in traffic analytics, the flows indicated in the Azure portal are total flows seen by the network security group, but in Azure Monitor logs, user sees only the single, reduced record. To see all the flows, use the `blob_id` field, which can be referenced from storage. The total flow count for that record matches the individual flows seen in the blob.
+
+# [**VNet flow logs (preview)**](#tab/vnet)
+
+- All flow logs between `FlowIntervalStartTime` and `FlowIntervalEndTime` are captured at one-minute intervals as blobs in a storage account.
+- Default processing interval of traffic analytics is 60 minutes, meaning that every hour, traffic analytics picks blobs from the storage account for aggregation. However, if a processing interval of 10 minutes is selected, traffic analytics will instead pick blobs from the storage account every 10 minutes.
+- Flows that have the same `Source IP`, `Destination IP`, `Destination port`, `NSG name`, `NSG rule`, `Flow Direction`, and `Transport layer protocol (TCP or UDP)` are clubbed into a single flow by traffic analytics (Note: source port is excluded for aggregation).
+- This single record is decorated (details in the section below) and ingested in Azure Monitor logs by traffic analytics. This process can take up to 1 hour.
+- `FlowStartTime` field indicates the first occurrence of such an aggregated flow (same four-tuple) in the flow log processing interval between `FlowIntervalStartTime` and `FlowIntervalEndTime`.
+- For any resource in traffic analytics, the flows indicated in the Azure portal are total flows seen, but in Azure Monitor logs, user sees only the single, reduced record. To see all the flows, use the `blob_id` field, which can be referenced from storage. The total flow count for that record matches the individual flows seen in the blob.
The following query helps you look at all subnets interacting with non-Azure public IPs in the last 30 days.
TableWithBlobId
The previous query constructs a URL to access the blob directly. The URL with placeholders is as follows:

```
-https://{saName}@insights-logs-networksecuritygroupflowevent/resoureId=/SUBSCRIPTIONS/{subscriptionId}/RESOURCEGROUPS/{resourceGroup}/PROVIDERS/MICROSOFT.NETWORK/NETWORKSECURITYGROUPS/{nsgName}/y={year}/m={month}/d={day}/h={hour}/m=00/macAddress={macAddress}/PT1H.json
+https://{storageAccountName}@insights-logs-networksecuritygroupflowevent/resoureId=/SUBSCRIPTIONS/{subscriptionId}/RESOURCEGROUPS/{resourceGroup}/PROVIDERS/MICROSOFT.NETWORK/NETWORKSECURITYGROUPS/{networkSecurityGroupName}/y={year}/m={month}/d={day}/h={hour}/m=00/macAddress={macAddress}/PT1H.json
```

## Traffic analytics schema
+Traffic analytics is built on top of Azure Monitor logs, so you can run custom queries on data decorated by traffic analytics and set alerts.
+
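For example, a minimal sketch of such a custom query, run from Python with the `azure-monitor-query` and `azure-identity` packages (the workspace ID is a placeholder), could look like this:

```python
# Minimal sketch; the workspace ID is a placeholder.
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

client = LogsQueryClient(DefaultAzureCredential())

# Query flow log records decorated by traffic analytics,
# using fields from the schema table below.
query = """AzureNetworkAnalytics_CL
| where SubType_s == 'FlowLog' and FlowType_s == 'ExternalPublic'
| project FlowStartTime_t, VMIP_s, DestPort_d, L7Protocol_s, FlowStatus_s
| take 10"""

response = client.query_workspace(
    workspace_id="<log-analytics-workspace-id>",
    query=query,
    timespan=timedelta(days=1),
)
for table in response.tables:
    for row in table.rows:
        print(row)
```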
+### NSG flow logs
+
+The following table lists the fields in the schema and what they signify for NSG flow logs.
+
+| Field | Format | Comments |
+| -- | | -- |
+| **TableName** | AzureNetworkAnalytics_CL | Table for traffic analytics data. |
+| **SubType_s** | FlowLog | Subtype for the flow logs. Use only **FlowLog**; other values of **SubType_s** are for internal use. |
+| **FASchemaVersion_s** | 2 | Schema version. Doesn't reflect NSG flow log version. |
+| **TimeProcessed_t** | Date and time in UTC | Time at which the traffic analytics processed the raw flow logs from the storage account. |
+| **FlowIntervalStartTime_t** | Date and time in UTC | Starting time of the flow log processing interval (time from which flow interval is measured). |
+| **FlowIntervalEndTime_t** | Date and time in UTC | Ending time of the flow log processing interval. |
+| **FlowStartTime_t** | Date and time in UTC | First occurrence of the flow (which gets aggregated) in the flow log processing interval between `FlowIntervalStartTime_t` and `FlowIntervalEndTime_t`. This flow gets aggregated based on aggregation logic. |
+| **FlowEndTime_t** | Date and time in UTC | Last occurrence of the flow (which gets aggregated) in the flow log processing interval between `FlowIntervalStartTime_t` and `FlowIntervalEndTime_t`. In terms of flow log v2, this field contains the time when the last flow with the same four-tuple started (marked as **B** in the raw flow record). |
+| **FlowType_s** | - IntraVNet <br> - InterVNet <br> - S2S <br> - P2S <br> - AzurePublic <br> - ExternalPublic <br> - MaliciousFlow <br> - Unknown Private <br> - Unknown | See [Notes](#notes) for definitions. |
+| **SrcIP_s** | Source IP address | Blank in AzurePublic and ExternalPublic flows. |
+| **DestIP_s** | Destination IP address | Blank in AzurePublic and ExternalPublic flows. |
+| **VMIP_s** | IP of the VM | Used for AzurePublic and ExternalPublic flows. |
+| **DestPort_d** | Destination Port | Port at which traffic is incoming. |
+| **L4Protocol_s** | - T <br> - U | Transport Protocol. T = TCP <br> U = UDP. |
+| **L7Protocol_s** | Protocol Name | Derived from destination port. |
+| **FlowDirection_s** | - I = Inbound <br> - O = Outbound | Direction of the flow: in or out of network security group per flow log. |
+| **FlowStatus_s** | - A = Allowed <br> - D = Denied | Status of flow whether allowed or denied by the network security group per flow log. |
+| **NSGList_s** | \<SUBSCRIPTIONID\>/\<RESOURCEGROUP_NAME\>/\<NSG_NAME\> | Network security group associated with the flow. |
+| **NSGRules_s** | \<Index value 0>\|\<NSG_RULENAME>\|\<Flow Direction>\|\<Flow Status>\|\<FlowCount ProcessedByRule> | Network security group rule that allowed or denied this flow. |
+| **NSGRule_s** | NSG_RULENAME | Network security group rule that allowed or denied this flow. |
+| **NSGRuleType_s** | - User Defined <br> - Default | The type of network security group rule used by the flow. |
+| **MACAddress_s** | MAC Address | MAC address of the NIC at which the flow was captured. |
+| **Subscription_s** | Subscription of the Azure virtual network / network interface / virtual machine is populated in this field | Applicable only for FlowType = S2S, P2S, AzurePublic, ExternalPublic, MaliciousFlow, and UnknownPrivate flow types (flow types where only one side is Azure). |
+| **Subscription1_s** | Subscription ID | Subscription ID of virtual network / network interface / virtual machine that the source IP in the flow belongs to. |
+| **Subscription2_s** | Subscription ID | Subscription ID of virtual network/ network interface / virtual machine that the destination IP in the flow belongs to. |
+| **Region_s** | Azure region of virtual network / network interface / virtual machine that the IP in the flow belongs to. | Applicable only for FlowType = S2S, P2S, AzurePublic, ExternalPublic, MaliciousFlow, and UnknownPrivate flow types (flow types where only one side is Azure). |
+| **Region1_s** | Azure Region | Azure region of virtual network / network interface / virtual machine that the source IP in the flow belongs to. |
+| **Region2_s** | Azure Region | Azure region of virtual network that the destination IP in the flow belongs to. |
+| **NIC_s** | \<resourcegroup_Name\>/\<NetworkInterfaceName\> | NIC associated with the VM sending or receiving the traffic. |
+| **NIC1_s** | \<resourcegroup_Name\>/\<NetworkInterfaceName\> | NIC associated with the source IP in the flow. |
+| **NIC2_s** | \<resourcegroup_Name\>/\<NetworkInterfaceName> | NIC associated with the destination IP in the flow. |
+| **VM_s** | \<resourcegroup_Name\>/\<NetworkInterfaceName\> | Virtual Machine associated with the Network interface NIC_s. |
+| **VM1_s** | \<resourcegroup_Name\>/\<VirtualMachineName\> | Virtual Machine associated with the source IP in the flow. |
+| **VM2_s** | \<resourcegroup_Name\>/\<VirtualMachineName\> | Virtual Machine associated with the destination IP in the flow. |
+| **Subnet_s** | \<ResourceGroup_Name\>/\<VirtualNetwork_Name\>/\<SubnetName\> | Subnet associated with the NIC_s. |
+| **Subnet1_s** | \<ResourceGroup_Name\>/\<VirtualNetwork_Name\>/\<SubnetName\> | Subnet associated with the Source IP in the flow. |
+| **Subnet2_s** | \<ResourceGroup_Name\>/<VirtualNetwork_Name\>/\<SubnetName\> | Subnet associated with the Destination IP in the flow. |
+| **ApplicationGateway1_s** | \<SubscriptionID\>/\<ResourceGroupName\>/\<ApplicationGatewayName\> | Application gateway associated with the Source IP in the flow. |
+| **ApplicationGateway2_s** | \<SubscriptionID\>/\<ResourceGroupName\>/\<ApplicationGatewayName\> | Application gateway associated with the Destination IP in the flow. |
+| **ExpressRouteCircuit1_s** | \<SubscriptionID\>/\<ResourceGroupName\>/\<ExpressRouteCircuitName\> | ExpressRoute circuit ID - when flow is sent from site via ExpressRoute. |
+| **ExpressRouteCircuit2_s** | \<SubscriptionID\>/\<ResourceGroupName\>/\<ExpressRouteCircuitName\> | ExpressRoute circuit ID - when flow is received from cloud by ExpressRoute. |
+| **ExpressRouteCircuitPeeringType_s** | - AzurePrivatePeering <br> - AzurePublicPeering <br> - MicrosoftPeering | ExpressRoute peering type involved in the flow. |
+| **LoadBalancer1_s** | \<SubscriptionID\>/\<ResourceGroupName\>/\<LoadBalancerName\> | Load balancer associated with the Source IP in the flow. |
+| **LoadBalancer2_s** | \<SubscriptionID\>/\<ResourceGroupName\>/\<LoadBalancerName\> | Load balancer associated with the Destination IP in the flow. |
+| **LocalNetworkGateway1_s** | \<SubscriptionID\>/\<ResourceGroupName\>/\<LocalNetworkGatewayName\> | Local network gateway associated with the Source IP in the flow. |
+| **LocalNetworkGateway2_s** | \<SubscriptionID\>/\<ResourceGroupName\>/\<LocalNetworkGatewayName\> | Local network gateway associated with the Destination IP in the flow. |
+| **ConnectionType_s** | - VNetPeering <br> - VpnGateway <br> - ExpressRoute | The connection type. |
+| **ConnectionName_s** | \<SubscriptionID\>/\<ResourceGroupName\>/\<ConnectionName\> | The connection name. For flow type P2S, it's formatted as \<gateway name\>_\<VPN Client IP\>. |
+| **ConnectingVNets_s** | Space separated list of virtual network names | In case of hub and spoke topology, hub virtual networks are populated here. |
+| **Country_s** | Two letter country code (ISO 3166-1 alpha-2) | Populated for flow type ExternalPublic. All IP addresses in PublicIPs_s field share the same country code. |
+| **AzureRegion_s** | Azure region locations | Populated for flow type AzurePublic. All IP addresses in PublicIPs_s field share the Azure region. |
+| **AllowedInFlows_d** | | Count of inbound flows that were allowed, which represents the number of flows that shared the same four-tuple inbound to the network interface at which the flow was captured. |
+| **DeniedInFlows_d** | | Count of inbound flows that were denied. (Inbound to the network interface at which the flow was captured). |
+| **AllowedOutFlows_d** | | Count of outbound flows that were allowed (Outbound to the network interface at which the flow was captured). |
+| **DeniedOutFlows_d** | | Count of outbound flows that were denied (Outbound to the network interface at which the flow was captured). |
+| **FlowCount_d** | | Deprecated. Total flows that matched the same four-tuple. In case of flow types ExternalPublic and AzurePublic, count includes the flows from various PublicIP addresses as well. |
+| **InboundPackets_d** | Represents packets sent from the destination to the source of the flow | Populated only for Version 2 of NSG flow log schema. |
+| **OutboundPackets_d** | Represents packets sent from the source to the destination of the flow | Populated only for Version 2 of NSG flow log schema. |
+| **InboundBytes_d** | Represents bytes sent from the destination to the source of the flow | Populated only for Version 2 of NSG flow log schema. |
+| **OutboundBytes_d** | Represents bytes sent from the source to the destination of the flow | Populated only for Version 2 of NSG flow log schema. |
+| **CompletedFlows_d** | | Populated with a nonzero value only for Version 2 of the NSG flow log schema. |
+| **PublicIPs_s** | <PUBLIC_IP>\|\<FLOW_STARTED_COUNT>\|\<FLOW_ENDED_COUNT>\|\<OUTBOUND_PACKETS>\|\<INBOUND_PACKETS>\|\<OUTBOUND_BYTES>\|\<INBOUND_BYTES> | Entries separated by bars. |
+| **SrcPublicIPs_s** | <SOURCE_PUBLIC_IP>\|\<FLOW_STARTED_COUNT>\|\<FLOW_ENDED_COUNT>\|\<OUTBOUND_PACKETS>\|\<INBOUND_PACKETS>\|\<OUTBOUND_BYTES>\|\<INBOUND_BYTES> | Entries separated by bars. |
+| **DestPublicIPs_s** | <DESTINATION_PUBLIC_IP>\|\<FLOW_STARTED_COUNT>\|\<FLOW_ENDED_COUNT>\|\<OUTBOUND_PACKETS>\|\<INBOUND_PACKETS>\|\<OUTBOUND_BYTES>\|\<INBOUND_BYTES> | Entries separated by bars. |
+| **IsFlowCapturedAtUDRHop_b** | - True <br> - False | If the flow was captured at a UDR hop, the value is True. |
+ > [!IMPORTANT]
-> The traffic analytics schema was updated on August 22, 2019. The new schema provides source and destination IPs separately, removing need to parse the `FlowDirection` field so that queries are simpler. These are changes in the updated schema:
+> The traffic analytics schema was updated on August 22, 2019. The new schema provides source and destination IPs separately, removing the need to parse the `FlowDirection` field so that queries are simpler. The updated schema had the following changes:
>
> - `FASchemaVersion_s` updated from 1 to 2.
> - Deprecated fields: `VMIP_s`, `Subscription_s`, `Region_s`, `NSGRules_s`, `Subnet_s`, `VM_s`, `NIC_s`, `PublicIPs_s`, `FlowCount_d`
> - New fields: `SrcPublicIPs_s`, `DestPublicIPs_s`, `NSGRule_s`
->
-> Deprecated fields are available until November 2022.
->
-Traffic analytics is built on top of Log Analytics, so you can run custom queries on data decorated by traffic analytics and set alerts on the same.
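+To make these fields concrete, here's a minimal Kusto sketch against the table above (the one-day window and top-10 cutoff are illustrative assumptions) that surfaces the network security group rules denying the most flows:
+
+```kusto
+// Minimal sketch: top 10 NSG rules by denied flow count over the last day.
+AzureNetworkAnalytics_CL
+| where SubType_s == "FlowLog"
+| where FlowIntervalStartTime_t > ago(1d)
+| summarize DeniedFlows = sum(DeniedInFlows_d) + sum(DeniedOutFlows_d) by NSGRule_s, NSGList_s
+| top 10 by DeniedFlows
+```
+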
+### VNet flow logs (preview)
-The following table lists the fields in the schema and what they signify.
+The following table lists the fields in the schema and what they signify for VNet flow logs.
| Field | Format | Comments |
| -- | -- | -- |
-| TableName | AzureNetworkAnalytics_CL | Table for traffic analytics data. |
-| SubType_s | FlowLog | Subtype for the flow logs. Use only "FlowLog", other values of SubType_s are for internal workings of the product. |
-| FASchemaVersion_s | 2 | Schema version. Doesn't reflect NSG flow log version. |
-| TimeProcessed_t | Date and Time in UTC | Time at which the traffic analytics processed the raw flow logs from the storage account. |
-| FlowIntervalStartTime_t | Date and Time in UTC | Starting time of the flow log processing interval (time from which flow interval is measured). |
-| FlowIntervalEndTime_t | Date and Time in UTC | Ending time of the flow log processing interval. |
-| FlowStartTime_t | Date and Time in UTC | First occurrence of the flow (which will get aggregated) in the flow log processing interval between ΓÇ£FlowIntervalStartTime_tΓÇ¥ and ΓÇ£FlowIntervalEndTime_tΓÇ¥. This flow gets aggregated based on aggregation logic. |
-| FlowEndTime_t | Date and Time in UTC | Last occurrence of the flow (which will get aggregated) in the flow log processing interval between ΓÇ£FlowIntervalStartTime_tΓÇ¥ and ΓÇ£FlowIntervalEndTime_tΓÇ¥. In terms of flow log v2, this field contains the time when the last flow with the same four-tuple started (marked as ΓÇ£BΓÇ¥ in the raw flow record). |
-| FlowType_s | * IntraVNet <br> * InterVNet <br> * S2S <br> * P2S <br> * AzurePublic <br> * ExternalPublic <br> * MaliciousFlow <br> * Unknown Private <br> * Unknown | Definition in notes below the table. |
-| SrcIP_s | Source IP address | Will be blank in case of AzurePublic and ExternalPublic flows. |
-| DestIP_s | Destination IP address | Will be blank in case of AzurePublic and ExternalPublic flows. |
-| VMIP_s | IP of the VM | Used for AzurePublic and ExternalPublic flows. |
-| DestPort_d | Destination Port | Port at which traffic is incoming. |
-| L4Protocol_s | * T <br> * U | Transport Protocol. T = TCP <br> U = UDP. |
-| L7Protocol_s | Protocol Name | Derived from destination port. |
-| FlowDirection_s | * I = Inbound<br> * O = Outbound | Direction of the flow in/out of NSG as per flow log. |
-| FlowStatus_s | * A = Allowed by NSG Rule <br> * D = Denied by NSG Rule | Status of flow allowed/nblocked by NSG as per flow log. |
-| NSGList_s | \<SUBSCRIPTIONID>\/<RESOURCEGROUP_NAME>\/<NSG_NAME> | Network Security Group (NSG) associated with the flow. |
-| NSGRules_s | \<Index value 0)>\|\<NSG_RULENAME>\|\<Flow Direction>\|\<Flow Status>\|\<FlowCount ProcessedByRule> | NSG rule that allowed or denied this flow. |
-| NSGRule_s | NSG_RULENAME | NSG rule that allowed or denied this flow. |
-| NSGRuleType_s | * User Defined * Default | The type of NSG Rule used by the flow. |
-| MACAddress_s | MAC Address | MAC address of the NIC at which the flow was captured. |
-| Subscription_s | Subscription of the Azure virtual network/ network interface/ virtual machine is populated in this field | Applicable only for FlowType = S2S, P2S, AzurePublic, ExternalPublic, MaliciousFlow, and UnknownPrivate flow types (flow types where only one side is Azure). |
-| Subscription1_s | Subscription ID | Subscription ID of virtual network/ network interface/ virtual machine to which the source IP in the flow belongs to. |
-| Subscription2_s | Subscription ID | Subscription ID of virtual network/ network interface/ virtual machine to which the destination IP in the flow belongs to. |
-| Region_s | Azure region of virtual network/ network interface/ virtual machine to which the IP in the flow belongs to | Applicable only for FlowType = S2S, P2S, AzurePublic, ExternalPublic, MaliciousFlow, and UnknownPrivate flow types (flow types where only one side is Azure). |
-| Region1_s | Azure Region | Azure region of virtual network/ network interface/ virtual machine to which the source IP in the flow belongs to. |
-| Region2_s | Azure Region | Azure region of virtual network to which the destination IP in the flow belongs to. |
-| NIC_s | \<resourcegroup_Name>\/\<NetworkInterfaceName> | NIC associated with the VM sending or receiving the traffic. |
-| NIC1_s | <resourcegroup_Name>/\<NetworkInterfaceName> | NIC associated with the source IP in the flow. |
-| NIC2_s | <resourcegroup_Name>/\<NetworkInterfaceName> | NIC associated with the destination IP in the flow. |
-| VM_s | <resourcegroup_Name>\/\<NetworkInterfaceName> | Virtual Machine associated with the Network interface NIC_s. |
-| VM1_s | <resourcegroup_Name>/\<VirtualMachineName> | Virtual Machine associated with the source IP in the flow. |
-| VM2_s | <resourcegroup_Name>/\<VirtualMachineName> | Virtual Machine associated with the destination IP in the flow. |
-| Subnet_s | <ResourceGroup_Name>/<VNET_Name>/\<SubnetName> | Subnet associated with the NIC_s. |
-| Subnet1_s | <ResourceGroup_Name>/<VNET_Name>/\<SubnetName> | Subnet associated with the Source IP in the flow. |
-| Subnet2_s | <ResourceGroup_Name>/<VNET_Name>/\<SubnetName> | Subnet associated with the Destination IP in the flow. |
-| ApplicationGateway1_s | \<SubscriptionID>/\<ResourceGroupName>/\<ApplicationGatewayName> | Application gateway associated with the Source IP in the flow. |
-| ApplicationGateway2_s | \<SubscriptionID>/\<ResourceGroupName>/\<ApplicationGatewayName> | Application gateway associated with the Destination IP in the flow. |
-| LoadBalancer1_s | \<SubscriptionID>/\<ResourceGroupName>/\<LoadBalancerName> | Load balancer associated with the Source IP in the flow. |
-| LoadBalancer2_s | \<SubscriptionID>/\<ResourceGroupName>/\<LoadBalancerName> | Load balancer associated with the Destination IP in the flow. |
-| LocalNetworkGateway1_s | \<SubscriptionID>/\<ResourceGroupName>/\<LocalNetworkGatewayName> | Local network gateway associated with the Source IP in the flow. |
-| LocalNetworkGateway2_s | \<SubscriptionID>/\<ResourceGroupName>/\<LocalNetworkGatewayName> | Local network gateway associated with the Destination IP in the flow. |
-| ConnectionType_s | Possible values are VNetPeering, VpnGateway, and ExpressRoute | Connection Type. |
-| ConnectionName_s | \<SubscriptionID>/\<ResourceGroupName>/\<ConnectionName> | Connection Name. For flow type P2S, it will be formatted as \<gateway name\>_\<VPN Client IP\>. |
-| ConnectingVNets_s | Space separated list of virtual network names | In case of hub and spoke topology, hub virtual networks will be populated here. |
-| Country_s | Two letter country code (ISO 3166-1 alpha-2) | Populated for flow type ExternalPublic. All IP addresses in PublicIPs_s field will share the same country code. |
-| AzureRegion_s | Azure region locations | Populated for flow type AzurePublic. All IP addresses in PublicIPs_s field will share the Azure region. |
-| AllowedInFlows_d | | Count of inbound flows that were allowed. This represents the number of flows that shared the same four-tuple inbound to the network interface at which the flow was captured. |
-| DeniedInFlows_d | | Count of inbound flows that were denied. (Inbound to the network interface at which the flow was captured). |
-| AllowedOutFlows_d | | Count of outbound flows that were allowed (Outbound to the network interface at which the flow was captured). |
-| DeniedOutFlows_d | | Count of outbound flows that were denied (Outbound to the network interface at which the flow was captured). |
-| FlowCount_d | Deprecated. Total flows that matched the same four-tuple. In case of flow types ExternalPublic and AzurePublic, count includes the flows from various PublicIP addresses as well. |
-| InboundPackets_d | Represents packets sent from the destination to the source of the flow | This field is only populated for Version 2 of NSG flow log schema. |
-| OutboundPackets_d | Represents packets sent from the source to the destination of the flow | This field is only populated for Version 2 of NSG flow log schema. |
-| InboundBytes_d | Represents bytes sent from the destination to the source of the flow | This field is only populated Version 2 of NSG flow log schema. |
-| OutboundBytes_d | Represents bytes sent from the source to the destination of the flow | This field is only populated Version 2 of NSG flow log schema. |
-| CompletedFlows_d | | This field is only populated with nonzero value for Version 2 of NSG flow log schema. |
-| PublicIPs_s | <PUBLIC_IP>\|\<FLOW_STARTED_COUNT>\|\<FLOW_ENDED_COUNT>\|\<OUTBOUND_PACKETS>\|\<INBOUND_PACKETS>\|\<OUTBOUND_BYTES>\|\<INBOUND_BYTES> | Entries separated by bars. |
-| SrcPublicIPs_s | <SOURCE_PUBLIC_IP>\|\<FLOW_STARTED_COUNT>\|\<FLOW_ENDED_COUNT>\|\<OUTBOUND_PACKETS>\|\<INBOUND_PACKETS>\|\<OUTBOUND_BYTES>\|\<INBOUND_BYTES> | Entries separated by bars. |
-| DestPublicIPs_s | <DESTINATION_PUBLIC_IP>\|\<FLOW_STARTED_COUNT>\|\<FLOW_ENDED_COUNT>\|\<OUTBOUND_PACKETS>\|\<INBOUND_PACKETS>\|\<OUTBOUND_BYTES>\|\<INBOUND_BYTES> | Entries separated by bars. |
+| **TableName** | NTANetAnalytics | Table for traffic analytics data. |
+| **SubType** | FlowLog | Subtype for the flow logs. Use only **FlowLog**; other values of **SubType** are for internal use. |
+| **FASchemaVersion** | 3 | Schema version. Doesn't reflect NSG flow log version. |
+| **TimeProcessed** | Date and time in UTC | Time at which the traffic analytics processed the raw flow logs from the storage account. |
+| **FlowIntervalStartTime** | Date and time in UTC | Starting time of the flow log processing interval (time from which flow interval is measured). |
+| **FlowIntervalEndTime**| Date and time in UTC | Ending time of the flow log processing interval. |
+| **FlowStartTime** | Date and time in UTC | First occurrence of the flow (which gets aggregated) in the flow log processing interval between `FlowIntervalStartTime` and `FlowIntervalEndTime`. This flow gets aggregated based on aggregation logic. |
+| **FlowEndTime** | Date and time in UTC | Last occurrence of the flow (which gets aggregated) in the flow log processing interval between `FlowIntervalStartTime` and `FlowIntervalEndTime`. In terms of flow log v2, this field contains the time when the last flow with the same four-tuple started (marked as **B** in the raw flow record). |
+| **FlowType** | - IntraVNet <br> - InterVNet <br> - S2S <br> - P2S <br> - AzurePublic <br> - ExternalPublic <br> - MaliciousFlow <br> - Unknown Private <br> - Unknown | See [Notes](#notes) for definitions. |
+| **SrcIP** | Source IP address | Blank in AzurePublic and ExternalPublic flows. |
+| **DestIP** | Destination IP address | Blank in AzurePublic and ExternalPublic flows. |
+| **TargetResourceId** | ResourceGroupName/ResourceName | The ID of the resource at which flow logging and traffic analytics is enabled. |
+| **TargetResourceType** | VirtualNetwork/Subnet/NetworkInterface | Type of resource at which flow logging and traffic analytics is enabled (virtual network, subnet, NIC or network security group).|
+| **FlowLogResourceId** | ResourceGroupName/NetworkWatcherName/FlowLogName | The resource ID of the flow log. |
+| **DestPort** | Destination Port | Port at which traffic is incoming. |
+| **L4Protocol** | - T <br> - U | Transport Protocol. **T** = TCP <br> **U** = UDP |
+| **L7Protocol** | Protocol Name | Derived from destination port. |
+| **FlowDirection** | - **I** = Inbound <br> - **O** = Outbound | Direction of the flow: in or out of the network security group per flow log. |
+| **FlowStatus** | - **A** = Allowed <br> - **D** = Denied | Status of flow: allowed or denied by network security group per flow log. |
+| **NSGList** | \<SUBSCRIPTIONID\>/\<RESOURCEGROUP_NAME\>/\<NSG_NAME\> | Network security group associated with the flow. |
+| **NSGRule** | NSG_RULENAME | Network security group rule that allowed or denied the flow. |
+| **NSGRuleType** | - User Defined <br> - Default | The type of network security group rule used by the flow. |
+| **MACAddress** | MAC Address | MAC address of the NIC at which the flow was captured. |
+| **SrcSubscription** | Subscription ID | Subscription ID of virtual network / network interface / virtual machine that the source IP in the flow belongs to. |
+| **DestSubscription** | Subscription ID | Subscription ID of virtual network / network interface / virtual machine that the destination IP in the flow belongs to. |
+| **SrcRegion** | Azure Region | Azure region of virtual network / network interface / virtual machine that the source IP in the flow belongs to. |
+| **DestRegion** | Azure Region | Azure region of virtual network that the destination IP in the flow belongs to. |
+| **SrcNIC** | \<resourcegroup_Name\>/\<NetworkInterfaceName\> | NIC associated with the source IP in the flow. |
+| **DestNIC** | \<resourcegroup_Name\>/\<NetworkInterfaceName\> | NIC associated with the destination IP in the flow. |
+| **SrcVM** | \<resourcegroup_Name\>/\<VirtualMachineName\> | Virtual machine associated with the source IP in the flow. |
+| **DestVM** | \<resourcegroup_Name\>/\<VirtualMachineName\> | Virtual machine associated with the destination IP in the flow. |
+| **SrcSubnet** | \<ResourceGroup_Name\>/\<VirtualNetwork_Name\>/\<SubnetName\> | Subnet associated with the source IP in the flow. |
+| **DestSubnet** | \<ResourceGroup_Name\>/\<VirtualNetwork_Name\>/\<SubnetName\> | Subnet associated with the destination IP in the flow. |
+| **SrcApplicationGateway** | \<SubscriptionID\>/\<ResourceGroupName\>/\<ApplicationGatewayName\> | Application gateway associated with the source IP in the flow. |
+| **DestApplicationGateway** | \<SubscriptionID\>/\<ResourceGroupName\>/\<ApplicationGatewayName\> | Application gateway associated with the destination IP in the flow. |
+| **SrcExpressRouteCircuit** | \<SubscriptionID\>/\<ResourceGroupName\>/\<ExpressRouteCircuitName\> | ExpressRoute circuit ID - when flow is sent from site via ExpressRoute. |
+| **DestExpressRouteCircuit** | \<SubscriptionID\>/\<ResourceGroupName\>/\<ExpressRouteCircuitName\> | ExpressRoute circuit ID - when flow is received from cloud by ExpressRoute. |
+| **ExpressRouteCircuitPeeringType** | - AzurePrivatePeering <br> - AzurePublicPeering <br> - MicrosoftPeering | ExpressRoute peering type involved in the flow. |
+| **SrcLoadBalancer** | \<SubscriptionID\>/\<ResourceGroupName\>/\<LoadBalancerName\> | Load balancer associated with the source IP in the flow. |
+| **DestLoadBalancer** | \<SubscriptionID\>/\<ResourceGroupName\>/\<LoadBalancerName\> | Load balancer associated with the destination IP in the flow. |
+| **SrcLocalNetworkGateway** | \<SubscriptionID\>/\<ResourceGroupName\>/\<LocalNetworkGatewayName\> | Local network gateway associated with the source IP in the flow. |
+| **DestLocalNetworkGateway** | \<SubscriptionID\>/\<ResourceGroupName\>/\<LocalNetworkGatewayName\> | Local network gateway associated with the destination IP in the flow. |
+| **ConnectionType** | - VNetPeering <br> - VpnGateway <br> - ExpressRoute | The connection type. |
+| **ConnectionName** | \<SubscriptionID\>/\<ResourceGroupName\>/\<ConnectionName\> | The connection name. For flow type P2S, it's formatted as \<GatewayName>_\<VPNClientIP> |
+| **ConnectingVNets** | Space separated list of virtual network names. | In hub and spoke topology, hub virtual networks are populated here. |
+| **Country** | Two-letter country code (ISO 3166-1 alpha-2) | Populated for flow type ExternalPublic. All IP addresses in PublicIPs field share the same country code. |
+| **AzureRegion** | Azure region locations | Populated for flow type AzurePublic. All IP addresses in PublicIPs field share the Azure region. |
+| **AllowedInFlows**| - | Count of inbound flows that were allowed, which represents the number of flows that shared the same four-tuple inbound to the network interface at which the flow was captured. |
+| **DeniedInFlows** | - | Count of inbound flows that were denied. (Inbound to the network interface at which the flow was captured). |
+| **AllowedOutFlows** | - | Count of outbound flows that were allowed (Outbound to the network interface at which the flow was captured). |
+| **DeniedOutFlows** | - | Count of outbound flows that were denied (Outbound to the network interface at which the flow was captured). |
+| **FlowCount** | - | Deprecated. Total flows that matched the same four-tuple. In flow types ExternalPublic and AzurePublic, count includes the flows from various PublicIP addresses as well. |
+| **PacketsDestToSrc** | Represents packets sent from the destination to the source of the flow | Populated only for Version 2 of the NSG flow log schema. |
+| **PacketsSrcToDest** | Represents packets sent from the source to the destination of the flow | Populated only for Version 2 of the NSG flow log schema. |
+| **BytesDestToSrc** | Represents bytes sent from the destination to the source of the flow | Populated only for Version 2 of the NSG flow log schema. |
+| **BytesSrcToDest** | Represents bytes sent from the source to the destination of the flow | Populated only for Version 2 of the NSG flow log schema. |
+| **CompletedFlows** | - | Populated with a nonzero value only for Version 2 of the NSG flow log schema. |
+| **SrcPublicIPs** | \<SOURCE_PUBLIC_IP\>\|\<FLOW_STARTED_COUNT\>\|\<FLOW_ENDED_COUNT\>\|\<OUTBOUND_PACKETS\>\|\<INBOUND_PACKETS\>\|\<OUTBOUND_BYTES\>\|\<INBOUND_BYTES\> | Entries separated by bars. |
+| **DestPublicIPs** | <DESTINATION_PUBLIC_IP>\|\<FLOW_STARTED_COUNT>\|\<FLOW_ENDED_COUNT>\|\<OUTBOUND_PACKETS>\|\<INBOUND_PACKETS>\|\<OUTBOUND_BYTES>\|\<INBOUND_BYTES> | Entries separated by bars. |
+| **FlowEncryption** | - Encrypted <br>- Unencrypted <br>- Unsupported hardware <br>- Software not ready <br>- Drop due to no encryption <br>- Discovery not supported <br>- Destination on same host <br>- Fall back to no encryption. | Encryption level of flows. |
+| **IsFlowCapturedAtUDRHop** | - True <br> - False | If the flow was captured at a UDR hop, the value is True. |
+
+> [!NOTE]
+> *NTANetAnalytics* in VNet flow logs replaces *AzureNetworkAnalytics_CL* used in NSG flow logs.
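+
+For example, a minimal Kusto sketch (field names per the table above; the one-day window is an illustrative assumption) that summarizes the encryption state of recent flows:
+
+```kusto
+// Minimal sketch: distribution of flow encryption states over the last day.
+NTANetAnalytics
+| where SubType == "FlowLog"
+| where FlowIntervalStartTime > ago(1d)
+| summarize Flows = count() by FlowEncryption
+| order by Flows desc
+```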
## Public IP details schema
Traffic analytics provides WHOIS data and geographic location for all public IPs
The following table details public IP schema:
+# [**NSG flow logs**](#tab/nsg)
| Field | Format | Comments |
| -- | -- | -- |
-| TableName | AzureNetworkAnalyticsIPDetails_CL | Table that contains traffic analytics IP details data. |
-| SubType_s | FlowLog | Subtype for the flow logs. **Use only "FlowLog"**, other values of SubType_s are for internal workings of the product. |
-| FASchemaVersion_s | 2 | Schema version. It doesn't reflect NSG flow log version. |
-| FlowIntervalStartTime_t | Date and Time in UTC | Start time of the flow log processing interval (time from which flow interval is measured). |
-| FlowIntervalEndTime_t | Date and Time in UTC | End time of the flow log processing interval. |
-| FlowType_s | - AzurePublic <br> - ExternalPublic <br> - MaliciousFlow | Definition in notes below the table. |
-| IP | Public IP | Public IP whose information is provided in the record. |
-| Location | Location of the IP | - For Azure Public IP: Azure region of virtual network/network interface/virtual machine to which the IP belongs OR Global for IP [168.63.129.16](../virtual-network/what-is-ip-address-168-63-129-16.md). <br> - For External Public IP and Malicious IP: 2-letter country code where IP is located (ISO 3166-1 alpha-2). |
-| PublicIPDetails | Information about IP | - For AzurePublic IP: Azure Service owning the IP or Microsoft virtual public IP for [168.63.129.16](../virtual-network/what-is-ip-address-168-63-129-16.md). <br> - ExternalPublic/Malicious IP: WhoIS information of the IP. |
-| ThreatType | Threat posed by malicious IP | **For Malicious IPs only**: One of the threats from the list of currently allowed values (described in the next table). |
-| ThreatDescription | Description of the threat | **For Malicious IPs only**: Description of the threat posed by the malicious IP. |
-| DNSDomain | DNS domain | **For Malicious IPs only**: Domain name associated with this IP. |
+| **TableName** | AzureNetworkAnalyticsIPDetails_CL | Table that contains traffic analytics IP details data. |
+| **SubType_s** | FlowLog | Subtype for the flow logs. **Use only "FlowLog"**, other values of SubType_s are for internal workings of the product. |
+| **FASchemaVersion_s** | 2 | Schema version. Doesn't reflect NSG flow log version. |
+| **FlowIntervalStartTime_t** | Date and Time in UTC | Start time of the flow log processing interval (time from which flow interval is measured). |
+| **FlowIntervalEndTime_t** | Date and Time in UTC | End time of the flow log processing interval. |
+| **FlowType_s** | - AzurePublic <br> - ExternalPublic <br> - MaliciousFlow | See [Notes](#notes) for definitions. |
+| **IP** | Public IP | Public IP whose information is provided in the record. |
+| **Location** | Location of the IP | - For Azure Public IP: Azure region of virtual network/network interface/virtual machine to which the IP belongs OR Global for IP [168.63.129.16](../virtual-network/what-is-ip-address-168-63-129-16.md). <br> - For External Public IP and Malicious IP: 2-letter country code where IP is located (ISO 3166-1 alpha-2). |
+| **PublicIPDetails** | Information about IP | - For AzurePublic IP: Azure Service owning the IP or Microsoft virtual public IP for [168.63.129.16](../virtual-network/what-is-ip-address-168-63-129-16.md). <br> - ExternalPublic/Malicious IP: WhoIS information of the IP. |
+| **ThreatType** | Threat posed by malicious IP | **For Malicious IPs only**: One of the threats from the list of currently allowed values (described in the next table). |
+| **ThreatDescription** | Description of the threat | **For Malicious IPs only**: Description of the threat posed by the malicious IP. |
+| **DNSDomain** | DNS domain | **For Malicious IPs only**: Domain name associated with this IP. |
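+
+As an illustration, a minimal Kusto sketch (field names per the table above; the seven-day window is an illustrative assumption) that lists malicious public IPs with their threat details:
+
+```kusto
+// Minimal sketch: malicious public IPs observed in the last 7 days.
+AzureNetworkAnalyticsIPDetails_CL
+| where SubType_s == "FlowLog" and FlowType_s == "MaliciousFlow"
+| where FlowIntervalStartTime_t > ago(7d)
+| project IP, Location, ThreatType, ThreatDescription, DNSDomain
+```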
+
+# [**VNet flow logs (preview)**](#tab/vnet)
+
+| Field | Format | Comments |
+| -- | -- | -- |
+| **TableName**| NTAIpDetails | Table that contains traffic analytics IP details data. |
+| **SubType**| FlowLog | Subtype for the flow logs. Use only **FlowLog**. Other values of SubType are for internal workings of the product. |
+| **FASchemaVersion** | 2 | Schema version. Doesn't reflect NSG flow log version. |
+| **FlowIntervalStartTime**| Date and time in UTC | Start time of the flow log processing interval (the time from which flow interval is measured). |
+| **FlowIntervalEndTime**| Date and time in UTC | End time of the flow log processing interval. |
+| **FlowType** | - AzurePublic <br> - ExternalPublic <br> - MaliciousFlow | See [Notes](#notes) for definitions. |
+| **IP**| Public IP | Public IP whose information is provided in the record. |
+| **PublicIPDetails** | Information about IP | **For AzurePublic IP**: Azure Service owning the IP or **Microsoft Virtual Public IP** for the IP 168.63.129.16. <br> **ExternalPublic/Malicious IP**: WhoIS information of the IP. |
+| **ThreatType** | Threat posed by malicious IP | *For Malicious IPs only*. One of the threats from the list of currently allowed values. For more information, see [Notes](#notes). |
+| **DNSDomain** | DNS domain | *For Malicious IPs only*. Domain name associated with this IP. |
+| **ThreatDescription** |Description of the threat | *For Malicious IPs only*. Description of the threat posed by the malicious IP. |
+| **Location** | Location of the IP | **For Azure Public IP**: Azure region of virtual network / network interface / virtual machine to which the IP belongs or Global for IP 168.63.129.16. <br> **For External Public IP and Malicious IP**: two-letter country code (ISO 3166-1 alpha-2) where IP is located. |
+
+> [!NOTE]
+> *NTAIpDetails* in VNet flow logs replaces *AzureNetworkAnalyticsIPDetails_CL* used in NSG flow logs.
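+
+The equivalent lookup against the VNet flow logs table is a one-table swap; here's a minimal sketch assuming the `NTAIpDetails` fields above (the seven-day window is an illustrative assumption):
+
+```kusto
+// Minimal sketch: malicious public IPs from VNet flow log IP details.
+NTAIpDetails
+| where SubType == "FlowLog" and FlowType == "MaliciousFlow"
+| where FlowIntervalStartTime > ago(7d)
+| project IP, Location, ThreatType, ThreatDescription, DNSDomain
+```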
List of threat types:
| Phishing | Indicators relating to a phishing campaign. |
| Proxy | Indicator of a proxy service. |
| PUA | Potentially Unwanted Application. |
-| WatchList | A generic bucket into which indicators are placed when it can't be determined exactly what the threat is or will require manual interpretation. `WatchList` should typically not be used by partners submitting data into the system. |
+| WatchList | A generic bucket into which indicators are placed when it can't be determined exactly what the threat is or requires manual interpretation. `WatchList` should typically not be used by partners submitting data into the system. |
## Notes

-- In case of `AzurePublic` and `ExternalPublic` flows, customer owned Azure virtual machine IP is populated in `VMIP_s` field, while the Public IP addresses are populated in the `PublicIPs_s` field. For these two flow types, you should use `VMIP_s` and `PublicIPs_s` instead of `SrcIP_s` and `DestIP_s` fields. For AzurePublic and ExternalPublic IP addresses, we aggregate further, so that the number of records ingested to log analytics workspace is minimal. (This field will be deprecated soon and you should be using SrcIP_ and DestIP_s depending on whether the virtual machine was the source or the destination in the flow).
+- In case of `AzurePublic` and `ExternalPublic` flows, the customer-owned Azure virtual machine IP is populated in the `VMIP_s` field, while the public IP addresses are populated in the `PublicIPs_s` field. For these two flow types, use `VMIP_s` and `PublicIPs_s` instead of the `SrcIP_s` and `DestIP_s` fields. For AzurePublic and ExternalPublic IP addresses, we aggregate further, so that the number of records ingested to the Log Analytics workspace is minimal. (This field will be deprecated soon; use `SrcIP_s` and `DestIP_s` depending on whether the virtual machine was the source or the destination in the flow.)
- Some field names are appended with `_s` or `_d`, which don't signify source and destination but indicate the data types *string* and *decimal* respectively.
- Based on the IP addresses involved in the flow, we categorize the flows into the following flow types:
    - `IntraVNet`: Both IP addresses in the flow reside in the same Azure virtual network.
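
To see how traffic breaks down across these flow types, here's a minimal Kusto sketch (NSG flow log field names as documented above; the one-day window is an illustrative assumption):

```kusto
// Minimal sketch: flow record counts by flow type category.
AzureNetworkAnalytics_CL
| where SubType_s == "FlowLog"
| where FlowIntervalStartTime_t > ago(1d)
| summarize Records = count() by FlowType_s
| order by Records desc
```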
List of threat types:
## Next Steps

-- To learn more about traffic analytics, see [Azure Network Watcher Traffic analytics](traffic-analytics.md).
-- See [Traffic analytics FAQ](traffic-analytics-faq.yml) for answers to traffic analytics frequently asked questions.
+- To learn more about traffic analytics, see [Traffic analytics overview](traffic-analytics.md).
+- See [Traffic analytics FAQ](traffic-analytics-faq.yml) for answers to the most frequently asked questions about traffic analytics.
network-watcher Vnet Flow Logs Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/vnet-flow-logs-cli.md
+
+ Title: Manage VNet flow logs - Azure CLI
+
+description: Learn how to create, change, enable, disable, or delete Azure Network Watcher VNet flow logs using the Azure CLI.
+ Last updated: 08/16/2023
+# Create, change, enable, disable, or delete VNet flow logs using the Azure CLI
+
+> [!IMPORTANT]
+> VNet flow logs is currently in PREVIEW. This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+Virtual network flow logging is a feature of Azure Network Watcher that allows you to log information about IP traffic flowing through an Azure virtual network. For more information about virtual network flow logging, see [VNet flow logs overview](vnet-flow-logs-overview.md).
+
+In this article, you learn how to create, change, enable, disable, or delete a VNet flow log using the Azure CLI. You can also learn how to manage a VNet flow log using [PowerShell](vnet-flow-logs-powershell.md).
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+
+- The *Microsoft.Insights* resource provider registered in your subscription. For more information, see [Register Insights provider](#register-insights-provider).
+
+- A virtual network. If you need to create a virtual network, see [Create a virtual network using the Azure CLI](../virtual-network/quick-create-cli.md).
+
+- An Azure storage account. If you need to create a storage account, see [Create a storage account using the Azure CLI](../storage/common/storage-account-create.md?tabs=azure-cli).
+
+- Bash environment in [Azure Cloud Shell](https://shell.azure.com) or the Azure CLI installed locally. To learn more about using Bash in Azure Cloud Shell, see [Azure Cloud Shell Quickstart - Bash](../cloud-shell/quickstart.md).
+
+ - If you choose to install and use Azure CLI locally, this article requires the Azure CLI version 2.39.0 or later. Run `az --version` to find the installed version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli). Run `az login` to sign in to Azure.
+
+## Register insights provider
+
+The *Microsoft.Insights* provider must be registered to successfully log traffic in a virtual network. If you aren't sure whether the *Microsoft.Insights* provider is registered, use [az provider register](/cli/azure/provider#az-provider-register) to register it.
+
+```azurecli-interactive
+# Register Microsoft.Insights provider.
+az provider register --namespace Microsoft.Insights
+```
+
+## Enable VNet flow logs
+
+Use [az network watcher flow-log create](/cli/azure/network/watcher/flow-log#az-network-watcher-flow-log-create) to create a VNet flow log.
+
+```azurecli-interactive
+# Create a VNet flow log.
+az network watcher flow-log create --location eastus --resource-group myResourceGroup --name myVNetFlowLog --vnet myVNet --storage-account myStorageAccount
+```
+
+## Enable VNet flow logs and traffic analytics
+
+Use [az monitor log-analytics workspace create](/cli/azure/monitor/log-analytics/workspace#az-monitor-log-analytics-workspace-create) to create a traffic analytics workspace, and then use [az network watcher flow-log create](/cli/azure/network/watcher/flow-log#az-network-watcher-flow-log-create) to create a VNet flow log that uses it.
+
+```azurecli-interactive
+# Create a traffic analytics workspace.
+az monitor log-analytics workspace create --name myWorkspace --resource-group myResourceGroup --location eastus
+
+# Create a VNet flow log.
+az network watcher flow-log create --location eastus --name myVNetFlowLog --resource-group myResourceGroup --vnet myVNet --storage-account myStorageAccount --workspace myWorkspace --interval 10 --traffic-analytics true
+```
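+
+After traffic analytics is enabled, decorated records land in the Log Analytics workspace; ingestion can take up to an hour. Here's a minimal Kusto sketch to confirm data is arriving (it assumes the `NTANetAnalytics` table described in the traffic analytics schema article):
+
+```kusto
+// Minimal sketch: check the most recent traffic analytics processing time.
+NTANetAnalytics
+| where SubType == "FlowLog"
+| summarize LatestProcessed = max(TimeProcessed)
+```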
+
+## List all flow logs in a region
+
+Use [az network watcher flow-log list](/cli/azure/network/watcher/flow-log#az-network-watcher-flow-log-list) to list all flow log resources in a particular region in your subscription.
+
+```azurecli-interactive
+# Get all flow logs in East US region.
+az network watcher flow-log list --location eastus --out table
+```
+
+## View VNet flow log resource
+
+Use [az network watcher flow-log show](/cli/azure/network/watcher/flow-log#az-network-watcher-flow-log-show) to see details of a flow log resource.
+
+```azurecli-interactive
+# Get the flow log details.
+az network watcher flow-log show --name myVNetFlowLog --resource-group NetworkWatcherRG --location eastus
+```
+
+## Download a flow log
+
+To access and download VNet flow logs from your storage account, you can use Azure Storage Explorer. For more information, see [Get started with Storage Explorer](../vs-azure-tools-storage-manage-with-storage-explorer.md).
+
+VNet flow log files saved to a storage account follow the logging path shown in the following example:
+
+```
+https://{storageAccountName}.blob.core.windows.net/insights-logs-flowlogflowevent/flowLogResourceID=/SUBSCRIPTIONS/{subscriptionID}/RESOURCEGROUPS/NETWORKWATCHERRG/PROVIDERS/MICROSOFT.NETWORK/NETWORKWATCHERS/NETWORKWATCHER_{Region}/FLOWLOGS/{FlowlogResourceName}/y={year}/m={month}/d={day}/h={hour}/m=00/macAddress={macAddress}/PT1H.json
+```
+
+## Disable traffic analytics on flow log resource
+
+To disable traffic analytics on the flow log resource and continue to generate and save VNet flow logs to a storage account, use [az network watcher flow-log update](/cli/azure/network/watcher/flow-log#az-network-watcher-flow-log-update).
+
+```azurecli-interactive
+# Update the VNet flow log.
+az network watcher flow-log update --location eastus --name myVNetFlowLog --resource-group myResourceGroup --vnet myVNet --storage-account myStorageAccount --traffic-analytics false
+```
+
+## Delete a VNet flow log resource
+
+To delete a VNet flow log resource, use [az network watcher flow-log delete](/cli/azure/network/watcher/flow-log#az-network-watcher-flow-log-delete).
+
+```azurecli-interactive
+# Delete the VNet flow log.
+az network watcher flow-log delete --name myVNetFlowLog --location eastus
+```
+
+## Next steps
+
+- To learn about traffic analytics, see [Traffic analytics](traffic-analytics.md).
+- To learn how to use Azure built-in policies to audit or enable traffic analytics, see [Manage traffic analytics using Azure Policy](traffic-analytics-policy-portal.md).
network-watcher Vnet Flow Logs Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/vnet-flow-logs-overview.md
+
+ Title: VNet flow logs (preview)
+
+description: Learn about the VNet flow logs feature of Azure Network Watcher.
+ Last updated: 08/16/2023
+# VNet flow logs (preview)
+
+> [!IMPORTANT]
+> VNet flow logs is currently in PREVIEW. This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+Virtual network (VNet) flow logs is a feature of Azure Network Watcher that allows you to log information about IP traffic flowing through a virtual network. Flow data is sent to Azure Storage from where you can access it and export it to any visualization tool, security information and event management (SIEM) solution, or intrusion detection system (IDS) of your choice. Network Watcher VNet flow logs capability overcomes some of the existing limitations of [NSG flow logs](network-watcher-nsg-flow-logging-overview.md).
+
+## Why use flow logs?
+
+It's vital to monitor, manage, and know your network so that you can protect and optimize it. You may need to know the current state of the network, who's connecting, and where users are connecting from. You may also need to know which ports are open to the internet, what network behavior is expected, what network behavior is irregular, and when sudden rises in traffic happen.
+
+Flow logs are the source of truth for all network activity in your cloud environment. Whether you're in a startup that's trying to optimize resources or a large enterprise that's trying to detect intrusion, flow logs can help. You can use them for optimizing network flows, monitoring throughput, verifying compliance, detecting intrusions, and more.
+
+## Common use cases
+
+#### Network monitoring
+
+- Identify unknown or undesired traffic.
+- Monitor traffic levels and bandwidth consumption.
+- Filter flow logs by IP and port to understand application behavior.
+- Export flow logs to analytics and visualization tools of your choice to set up monitoring dashboards.
+
+#### Usage monitoring and optimization
+
+- Identify top talkers in your network.
+- Combine with GeoIP data to identify cross-region traffic.
+- Understand traffic growth for capacity forecasting.
+- Use data to remove overly restrictive traffic rules.
+
+#### Compliance
+
+- Use flow data to verify network isolation and compliance with enterprise access rules.
+
+#### Network forensics and security analysis
+
+- Analyze network flows from compromised IPs and network interfaces.
+- Export flow logs to any SIEM or IDS tool of your choice.
+
+## VNet flow logs compared to NSG flow logs
+
+Both VNet flow logs and [NSG flow logs](network-watcher-nsg-flow-logging-overview.md) record IP traffic, but they differ in their behavior and capabilities. VNet flow logs simplify the scope of traffic monitoring by allowing you to enable logging at the [virtual network](../virtual-network/virtual-networks-overview.md) level, ensuring that traffic through all supported workloads within a virtual network is recorded. VNet flow logs also avoid the need to enable multi-level flow logging, such as with [NSG flow logs](network-watcher-nsg-flow-logging-overview.md#best-practices) where network security groups are configured at both the subnet and NIC levels.
+
+In addition to the existing support for identifying traffic allowed or denied by [network security group rules](../virtual-network/network-security-groups-overview.md), VNet flow logs support identifying traffic allowed or denied by [Azure Virtual Network Manager security admin rules](../virtual-network-manager/concept-security-admins.md). VNet flow logs also support evaluating the encryption status of your network traffic in scenarios where [virtual network encryption](../virtual-network/virtual-network-encryption-overview.md) is enabled.
+
+## How logging works
+
+Key properties of VNet flow logs include:
+
+- Flow logs operate at Layer 4 of the Open Systems Interconnection (OSI) model and record all IP flows going through a virtual network.
+- Logs are collected at 1-minute intervals through the Azure platform and don't affect your Azure resources or network traffic.
+- Logs are written in the JSON (JavaScript Object Notation) format.
+- Each log record contains the network interface (NIC) the flow applies to, 5-tuple information, traffic direction, flow state, encryption state and throughput information.
+- All traffic flows in your network are evaluated through the rules in the applicable [network security group rules](../virtual-network/network-security-groups-overview.md) or [Azure Virtual Network Manager security admin rules](../virtual-network-manager/concept-security-admins.md). For more information, see [Log format](#log-format).
+
+## Log format
+
+VNet flow logs have the following properties:
+
+- `time`: Time in UTC when the event was logged.
+- `flowLogVersion`: Version of flow log schema.
+- `flowLogGUID`: The resource GUID of the FlowLog resource.
+- `macAddress`: MAC address of the network interface where the event was captured.
+- `category`: Category of the event. The category is always `FlowLogFlowEvent`.
+- `flowLogResourceID`: Resource ID of the FlowLog resource.
+- `targetResourceID`: Resource ID of target resource associated to the FlowLog resource.
+- `operationName`: Always `FlowLogFlowEvent`.
+- `flowRecords`: Collection of flow records.
+ - `flows`: Collection of flows. This property has multiple entries for different ACLs.
+ - `aclID`: Identifier of the resource evaluating traffic, either a network security group or Virtual Network Manager. For cases like traffic denied by encryption, this value is `unspecified`.
+ - `flowGroups`: Collection of flow records at a rule level.
+ - `rule`: Name of the rule that allowed or denied the traffic. For traffic denied due to encryption, this value is `unspecified`.
+ - `flowTuples`: A string that contains multiple properties for the flow tuple in a comma-separated format (see the parsing sketch after this list):
+ - `Time Stamp`: Time stamp of when the flow occurred in UNIX epoch format.
+ - `Source IP`: Source IP address.
+ - `Destination IP`: Destination IP address.
+ - `Source port`: Source port.
+ - `Destination port`: Destination Port.
+ - `Protocol`: Layer 4 protocol of the flow expressed in IANA assigned values.
+ - `Flow direction`: Direction of the traffic flow. Valid values are `I` for inbound and `O` for outbound.
+ - `Flow state`: State of the flow. Possible states are:
+ - `B`: Begin, when a flow is created. No statistics are provided.
+ - `C`: Continuing for an ongoing flow. Statistics are provided at 5-minute intervals.
+ - `E`: End, when a flow is terminated. Statistics are provided.
+ - `D`: Deny, when a flow is denied.
+ - `Flow encryption`: Encryption state of the flow. Possible values are:
+ - `X`: Encrypted.
+ - `NX`: Unencrypted.
+ - `NX_HW_NOT_SUPPORTED`: Unsupported hardware.
+ - `NX_SW_NOT_READY`: Software not ready.
+ - `NX_NOT_ACCEPTED`: Drop due to no encryption.
+ - `NX_NOT_SUPPORTED`: Discovery not supported.
+ - `NX_LOCAL_DST`: Destination on same host.
+ - `NX_FALLBACK`: Fall back to no encryption.
+ - `Packets sent`: Total number of packets sent from source to destination since the last update.
+ - `Bytes sent`: Total number of packet bytes sent from source to destination since the last update. Packet bytes include the packet header and payload.
+ - `Packets received`: Total number of packets sent from destination to source since the last update.
+ - `Bytes received`: Total number of packet bytes sent from destination to source since the last update. Packet bytes include packet header and payload.
+
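+Because each `flowTuples` entry is a flat comma-separated string, it can be split positionally into the fields listed above. Here's a minimal Kusto sketch; the tuple value is taken from the sample record later in this article, and the millisecond-epoch conversion is an assumption based on the 13-digit timestamps in that sample:
+
+```kusto
+// Minimal sketch: split one flow tuple into its documented fields.
+let tuple = "1663146003606,10.0.0.6,52.239.184.180,23956,443,6,O,E,NX,3,767,2,1580";
+print fields = split(tuple, ",")
+| project TimeStamp = unixtime_milliseconds_todatetime(tolong(fields[0])),
+          SrcIp = tostring(fields[1]), DestIp = tostring(fields[2]),
+          SrcPort = toint(fields[3]), DestPort = toint(fields[4]),
+          Protocol = toint(fields[5]), Direction = tostring(fields[6]),
+          FlowState = tostring(fields[7]), Encryption = tostring(fields[8]),
+          PacketsSent = tolong(fields[9]), BytesSent = tolong(fields[10]),
+          PacketsReceived = tolong(fields[11]), BytesReceived = tolong(fields[12])
+```
+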
+Traffic in your virtual networks is unencrypted (`NX`) by default. For encrypted traffic, enable [virtual network encryption](../virtual-network/virtual-network-encryption-overview.md).
+
+`Flow encryption` has the following possible encryption statuses:
+
+| Encryption Status | Description |
+| -- | -- |
+| `X` | **Connection is encrypted**. Encryption is configured and the platform has encrypted the connection. |
+| `NX` | **Connection is Unencrypted**. This event is logged in two scenarios: <br> - When encryption isn't configured. <br> - When an encrypted virtual machine communicates with an endpoint that lacks encryption (such as an internet endpoint). |
+| `NX_HW_NOT_SUPPORTED` | **Unsupported hardware**. Encryption is configured, but the virtual machine is running on a host that doesn't support encryption. This issue usually occurs when the FPGA isn't attached to the host or is faulty. Report this issue to Microsoft for investigation. |
+| `NX_SW_NOT_READY` | **Software not ready**. Encryption is configured, but the software component (GFT) in the host networking stack isn't ready to process encrypted connections. This issue can happen when the virtual machine boots for the first time, restarts, or is redeployed. It can also happen when the networking components on the host where the virtual machine is running are updated. In all these scenarios, the packet gets dropped. The issue should be temporary; encryption should start working once the virtual machine is fully up and running or the software update on the host is complete. If the issue persists, report it to Microsoft for investigation. |
+| `NX_NOT_ACCEPTED` | **Drop due to no encryption**. Encryption is configured on both source and destination endpoints with the drop on unencrypted policy. If traffic can't be encrypted, the packet is dropped. |
+| `NX_NOT_SUPPORTED` | **Discovery not supported**. Encryption is configured, but the encryption session wasn't established because discovery isn't supported in the host networking stack. In this case, the packet is dropped. If you encounter this issue, report it to Microsoft for investigation. |
+| `NX_LOCAL_DST` | **Destination on same host**. Encryption is configured, but the source and destination virtual machines are running on the same Azure host. In this case, the connection isn't encrypted by design. |
+| `NX_FALLBACK` | **Fall back to no encryption**. Encryption is configured with the allow unencrypted policy for both source and destination endpoints. Encryption was attempted but ran into an issue. In this case, the connection is allowed but isn't encrypted. For example, the virtual machine initially landed on a node that supports encryption, but this support was later disabled. |
++
+## Sample log record
+
+The following example of VNet flow logs contains multiple records that follow the property list described earlier.
+
+```json
+{
+ "records": [
+ {
+ "time": "2022-09-14T09:00:52.5625085Z",
+ "flowLogVersion": 4,
+ "flowLogGUID": "abcdef01-2345-6789-0abc-def012345678",
+ "macAddress": "00224871C205",
+ "category": "FlowLogFlowEvent",
+ "flowLogResourceID": "/SUBSCRIPTIONS/00000000-0000-0000-0000-000000000000/RESOURCEGROUPS/NETWORKWATCHERRG/PROVIDERS/MICROSOFT.NETWORK/NETWORKWATCHERS/NETWORKWATCHER_EASTUS2EUAP/FLOWLOGS/VNETFLOWLOG",
+ "targetResourceID": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myResourceGroup/providers/Microsoft.Network/virtualNetworks/myVNet",
+ "operationName": "FlowLogFlowEvent",
+ "flowRecords": {
+ "flows": [
+ {
+ "aclID": "00000000-1234-abcd-ef00-c1c2c3c4c5c6",
+ "flowGroups": [
+ {
+ "rule": "DefaultRule_AllowInternetOutBound",
+ "flowTuples": [
+ "1663146003599,10.0.0.6,52.239.184.180,23956,443,6,O,B,NX,0,0,0,0",
+ "1663146003606,10.0.0.6,52.239.184.180,23956,443,6,O,E,NX,3,767,2,1580",
+ "1663146003637,10.0.0.6,40.74.146.17,22730,443,6,O,B,NX,0,0,0,0",
+ "1663146003640,10.0.0.6,40.74.146.17,22730,443,6,O,E,NX,3,705,4,4569",
+ "1663146004251,10.0.0.6,40.74.146.17,22732,443,6,O,B,NX,0,0,0,0",
+ "1663146004251,10.0.0.6,40.74.146.17,22732,443,6,O,E,NX,3,705,4,4569",
+ "1663146004622,10.0.0.6,40.74.146.17,22734,443,6,O,B,NX,0,0,0,0",
+ "1663146004622,10.0.0.6,40.74.146.17,22734,443,6,O,E,NX,2,134,1,108",
+ "1663146017343,10.0.0.6,104.16.218.84,36776,443,6,O,B,NX,0,0,0,0",
+ "1663146022793,10.0.0.6,104.16.218.84,36776,443,6,O,E,NX,22,2217,33,32466"
+ ]
+ }
+ ]
+ },
+ {
+ "aclID": "01020304-abcd-ef00-1234-102030405060",
+ "flowGroups": [
+ {
+ "rule": "BlockHighRiskTCPPortsFromInternet",
+ "flowTuples": [
+ "1663145998065,101.33.218.153,10.0.0.6,55188,22,6,I,D,NX,0,0,0,0",
+ "1663146005503,192.241.200.164,10.0.0.6,35276,119,6,I,D,NX,0,0,0,0"
+ ]
+ },
+ {
+ "rule": "Internet",
+ "flowTuples": [
+ "1663145989563,20.106.221.10,10.0.0.6,50557,44357,6,I,D,NX,0,0,0,0",
+ "1663145989679,20.55.117.81,10.0.0.6,62797,35945,6,I,D,NX,0,0,0,0",
+ "1663145989709,20.55.113.5,10.0.0.6,51961,65515,6,I,D,NX,0,0,0,0",
+ "1663145990049,13.65.224.51,10.0.0.6,40497,40129,6,I,D,NX,0,0,0,0",
+ "1663145990145,20.55.117.81,10.0.0.6,62797,30472,6,I,D,NX,0,0,0,0",
+ "1663145990175,20.55.113.5,10.0.0.6,51961,28184,6,I,D,NX,0,0,0,0",
+ "1663146015545,20.106.221.10,10.0.0.6,50557,31244,6,I,D,NX,0,0,0,0"
+ ]
+ }
+ ]
+ }
+ ]
+ }
+ }
+ ]
+}
+
+```
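+
+To work with these logs programmatically, you can flatten the nested structure into one object per flow tuple. The following is a minimal PowerShell sketch, assuming a downloaded `PT1H.json` file and the 13-field version 4 tuple layout shown above:
+
+```powershell
+# Flatten VNet flow log records into one object per flow tuple.
+$log = Get-Content -Raw -Path .\PT1H.json | ConvertFrom-Json
+
+$tuples = foreach ($record in $log.records) {
+    foreach ($flow in $record.flowRecords.flows) {
+        foreach ($group in $flow.flowGroups) {
+            foreach ($tuple in $group.flowTuples) {
+                $f = $tuple -split ','
+                [pscustomobject]@{
+                    Rule       = $group.rule
+                    TimeStamp  = $f[0]
+                    SrcIp      = $f[1]
+                    DstIp      = $f[2]
+                    DstPort    = $f[4]
+                    Direction  = $f[6]
+                    State      = $f[7]
+                    Encryption = $f[8]
+                }
+            }
+        }
+    }
+}
+$tuples | Format-Table
+```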
+## Log tuple and bandwidth calculation
++
+Here's an example bandwidth calculation for flow tuples from a TCP conversation between **185.170.185.105:35370** and **10.2.0.4:23**:
+
+`1493763938,185.170.185.105,10.2.0.4,35370,23,6,I,B,NX,,,,`
+`1493695838,185.170.185.105,10.2.0.4,35370,23,6,I,C,NX,1021,588096,8005,4610880`
+`1493696138,185.170.185.105,10.2.0.4,35370,23,6,I,E,NX,52,29952,47,27072`
+
+For continuation (`C`) and end (`E`) flow states, byte and packet counts are aggregate counts from the time of the previous flow's tuple record. In the example conversation, the total number of packets transferred is 1021+52+8005+47 = 9125. The total number of bytes transferred is 588096+29952+4610880+27072 = 5256000.
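+
+The same arithmetic can be scripted. Here's a minimal PowerShell sketch over the `C` and `E` tuples of the example conversation, where zero-based fields 9-12 hold the packet and byte counters:
+
+```powershell
+# Sum packets and bytes across the continuation and end records.
+$tuples = @(
+    '1493695838,185.170.185.105,10.2.0.4,35370,23,6,I,C,NX,1021,588096,8005,4610880',
+    '1493696138,185.170.185.105,10.2.0.4,35370,23,6,I,E,NX,52,29952,47,27072'
+)
+
+$packets = 0; $bytes = 0
+foreach ($tuple in $tuples) {
+    $f = $tuple -split ','
+    $packets += [long]$f[9] + [long]$f[11]   # packets src->dst + packets dst->src
+    $bytes   += [long]$f[10] + [long]$f[12]  # bytes src->dst + bytes dst->src
+}
+"Total packets: $packets"   # 9125
+"Total bytes: $bytes"       # 5256000
+```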
+
+## Considerations for VNet flow logs
+
+### Storage account
+
+- **Location**: The storage account used must be in the same region as the virtual network.
+- **Performance tier**: Currently, only standard-tier storage accounts are supported.
+- **Self-managed key rotation**: If you change or rotate the access keys to your storage account, VNet flow logs stop working. To fix this problem, you must disable and then re-enable VNet flow logs.
+
+### Cost
+
+VNet flow logging is billed on the volume of logs produced. High traffic volume can result in a large flow-log volume and the associated costs.
+
+Pricing of VNet flow logs doesn't include the underlying costs of storage. Using the retention policy feature with VNet flow logs means incurring separate storage costs for extended periods of time.
+
+If you want to retain data forever and don't want to apply any retention policy, set retention days to 0. For more information, see [Network Watcher pricing](https://azure.microsoft.com/pricing/details/network-watcher/) and [Azure Storage pricing](https://azure.microsoft.com/pricing/details/storage/).
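+
+For example, a zero-day retention policy might be applied as in the following sketch. It assumes the `$vnet` and `$storageAccount` variables from the companion PowerShell article, and that the `-EnableRetention` and `-RetentionPolicyDays` parameters of `Set-AzNetworkWatcherFlowLog` (mirroring the NSG flow log cmdlets) are available in your Az.Network version:
+
+```azurepowershell-interactive
+# Sketch: keep flow log data until you delete it by setting retention days to 0.
+# Verify the retention parameter names and semantics for your module version.
+Set-AzNetworkWatcherFlowLog -Enabled $true -Name myVNetFlowLog `
+    -NetworkWatcherName NetworkWatcher_eastus -ResourceGroupName NetworkWatcherRG `
+    -StorageId $storageAccount.Id -TargetResourceId $vnet.Id `
+    -EnableRetention $true -RetentionPolicyDays 0
+```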
+
+## Pricing
+
+VNet flow logs aren't currently billed. In the future, VNet flow logs will be charged per gigabyte of "Network Logs Collected" and will come with a free tier of 5 GB/month per subscription. If traffic analytics is enabled with VNet flow logs, existing traffic analytics pricing applies. For more information, see [Network Watcher pricing](https://azure.microsoft.com/pricing/details/network-watcher/).
+
+## Availability
+
+VNet flow logs are available in the following regions during the preview:
+
+- East US 2 EUAP
+- Central US EUAP
+- West Central US
+- East US
+- East US 2
+- West US
+- West US 2
+
+To sign up to obtain access to the public preview, see [VNet flow logs - public preview sign up](https://aka.ms/VNetflowlogspreviewsignup).
+
+## Next steps
+
+- To learn how to create, change, enable, disable, or delete VNet flow logs, see the [PowerShell](vnet-flow-logs-powershell.md) or [Azure CLI](vnet-flow-logs-cli.md) VNet flow logs articles.
+- To learn about traffic analytics, see [Traffic analytics](traffic-analytics.md) and [Traffic analytics schema](traffic-analytics-schema.md).
+- To learn how to use Azure built-in policies to audit or enable traffic analytics, see [Manage traffic analytics using Azure Policy](traffic-analytics-policy-portal.md).
+++
network-watcher Vnet Flow Logs Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/vnet-flow-logs-powershell.md
+
+ Title: Manage VNet flow logs - PowerShell
+
+description: Learn how to create, change, enable, disable, or delete Azure Network Watcher VNet flow logs using Azure PowerShell.
++++ Last updated : 08/16/2023+++
+# Create, change, enable, disable, or delete VNet flow logs using Azure PowerShell
+
+> [!IMPORTANT]
+> VNet flow logs is currently in PREVIEW. This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+Virtual network flow logging is a feature of Azure Network Watcher that allows you to log information about IP traffic flowing through an Azure virtual network. For more information about virtual network flow logging, see [VNet flow logs overview](vnet-flow-logs-overview.md).
+
+In this article, you learn how to create, change, enable, disable, or delete a VNet flow log using Azure PowerShell. To manage a VNet flow log using the Azure CLI instead, see [Azure CLI](vnet-flow-logs-cli.md).
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+
- The *Microsoft.Insights* provider registered in your subscription. For more information, see [Register Insights provider](#register-insights-provider).
+
+- A virtual network. If you need to create a virtual network, see [Create a virtual network using PowerShell](../virtual-network/quick-create-powershell.md).
+
+- An Azure storage account. If you need to create a storage account, see [Create a storage account using PowerShell](../storage/common/storage-account-create.md?tabs=azure-powershell).
+
+- PowerShell environment in [Azure Cloud Shell](https://shell.azure.com) or Azure PowerShell installed locally. To learn more about using PowerShell in Azure Cloud Shell, see [Azure Cloud Shell Quickstart - PowerShell](../cloud-shell/quickstart-powershell.md).
+
 - If you choose to install and use PowerShell locally, this article requires Azure PowerShell version 7.4.0 or later. Run `Get-InstalledModule -Name Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-Az-ps). Run `Connect-AzAccount` to sign in to Azure.
+
+## Register insights provider
+
+The *Microsoft.Insights* provider must be registered to successfully log traffic in a virtual network. If you aren't sure whether the *Microsoft.Insights* provider is registered, use [Register-AzResourceProvider](/powershell/module/az.resources/register-azresourceprovider) to register it.
+
+```azurepowershell-interactive
+# Register Microsoft.Insights provider.
+Register-AzResourceProvider -ProviderNamespace Microsoft.Insights
+```
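+
+If you want to confirm the registration state first, the following is a minimal check using [Get-AzResourceProvider](/powershell/module/az.resources/get-azresourceprovider):
+
+```azurepowershell-interactive
+# Check the registration state of the Microsoft.Insights provider.
+Get-AzResourceProvider -ProviderNamespace Microsoft.Insights |
+    Format-Table ProviderNamespace, RegistrationState
+```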
+
+## Enable VNet flow logs
+
+Use [New-AzNetworkWatcherFlowLog](/powershell/module/az.network/new-aznetworkwatcherflowlog) to create a VNet flow log.
+
+```azurepowershell-interactive
+# Place the virtual network configuration into a variable.
+$vnet = Get-AzVirtualNetwork -Name myVNet -ResourceGroupName myResourceGroup
+# Place the storage account configuration into a variable.
+$storageAccount = Get-AzStorageAccount -Name myStorageAccount -ResourceGroupName myResourceGroup
+
+# Create a VNet flow log.
+New-AzNetworkWatcherFlowLog -Enabled $true -Name myVNetFlowLog -NetworkWatcherName NetworkWatcher_eastus -ResourceGroupName NetworkWatcherRG -StorageId $storageAccount.Id -TargetResourceId $vnet.Id
+```
+
+## Enable VNet flow logs and traffic analytics
+
+Use [New-AzOperationalInsightsWorkspace](/powershell/module/az.operationalinsights/new-azoperationalinsightsworkspace) to create a traffic analytics workspace, and then use [New-AzNetworkWatcherFlowLog](/powershell/module/az.network/new-aznetworkwatcherflowlog) to create a VNet flow log that uses it.
+
+```azurepowershell-interactive
+# Place the virtual network configuration into a variable.
+$vnet = Get-AzVirtualNetwork -Name myVNet -ResourceGroupName myResourceGroup
+# Place the storage account configuration into a variable.
+$storageAccount = Get-AzStorageAccount -Name myStorageAccount -ResourceGroupName myResourceGroup
+
+# Create a traffic analytics workspace and place its configuration into a variable.
+$workspace = New-AzOperationalInsightsWorkspace -Name myWorkspace -ResourceGroupName myResourceGroup -Location EastUS
+
+# Create a VNet flow log.
+New-AzNetworkWatcherFlowLog -Enabled $true -Name myVNetFlowLog -NetworkWatcherName NetworkWatcher_eastus -ResourceGroupName NetworkWatcherRG -StorageId $storageAccount.Id -TargetResourceId $vnet.Id -EnableTrafficAnalytics -TrafficAnalyticsWorkspaceId $workspace.ResourceId -TrafficAnalyticsInterval 10
+```
+
+## List all flow logs in a region
+
+Use [Get-AzNetworkWatcherFlowLog](/powershell/module/az.network/get-aznetworkwatcherflowlog) to list all flow log resources in a particular region in your subscription.
+
+```azurepowershell-interactive
+# Get all flow logs in East US region.
+Get-AzNetworkWatcherFlowLog -NetworkWatcherName NetworkWatcher_eastus -ResourceGroupName NetworkWatcherRG | Format-Table Name
+```
+
+## View VNet flow log resource
+
+Use [Get-AzNetworkWatcherFlowLog](/powershell/module/az.network/get-aznetworkwatcherflowlog) to see details of a flow log resource.
+
+```azurepowershell-interactive
+# Get the flow log details.
+Get-AzNetworkWatcherFlowLog -NetworkWatcherName NetworkWatcher_eastus -ResourceGroupName NetworkWatcherRG -Name myVNetFlowLog
+```
+
+## Download a flow log
+
+To access and download VNet flow logs from your storage account, you can use Azure Storage Explorer. For more information, see [Get started with Storage Explorer](../vs-azure-tools-storage-manage-with-storage-explorer.md).
+
+VNet flow log files saved to a storage account follow the logging path shown in the following example:
+
+```
+https://{storageAccountName}.blob.core.windows.net/insights-logs-flowlogflowevent/flowLogResourceID=/SUBSCRIPTIONS/{subscriptionID}/RESOURCEGROUPS/NETWORKWATCHERRG/PROVIDERS/MICROSOFT.NETWORK/NETWORKWATCHERS/NETWORKWATCHER_{Region}/FLOWLOGS/{FlowlogResourceName}/y={year}/m={month}/d={day}/h={hour}/m=00/macAddress={macAddress}/PT1H.json
+```
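+
+If you prefer to script the download, here's a minimal sketch using the Az.Storage cmdlets. The storage account name and destination folder are placeholders, and it assumes your account has data-plane (blob read) access:
+
+```azurepowershell-interactive
+# Sketch: download all VNet flow log blobs from the storage account.
+$ctx = New-AzStorageContext -StorageAccountName myStorageAccount -UseConnectedAccount
+New-Item -ItemType Directory -Path .\flowlogs -Force | Out-Null
+Get-AzStorageBlob -Container insights-logs-flowlogflowevent -Context $ctx |
+    Get-AzStorageBlobContent -Destination .\flowlogs -Force
+```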
+
+## Disable traffic analytics on flow log resource
+
+To disable traffic analytics on the flow log resource and continue to generate and save VNet flow logs to the storage account, use [Set-AzNetworkWatcherFlowLog](/powershell/module/az.network/set-aznetworkwatcherflowlog).
+
+```azurepowershell-interactive
+# Place the virtual network configuration into a variable.
+$vnet = Get-AzVirtualNetwork -Name myVNet -ResourceGroupName myResourceGroup
+# Place the storage account configuration into a variable.
+$storageAccount = Get-AzStorageAccount -Name mynwstorageaccount -ResourceGroupName Storage
+
+# Update the VNet flow log. Omitting the traffic analytics parameters disables traffic analytics while keeping flow logging enabled.
+Set-AzNetworkWatcherFlowLog -Enabled $true -Name myVNetFlowLog -NetworkWatcherName NetworkWatcher_eastus -ResourceGroupName NetworkWatcherRG -StorageId $storageAccount.Id -TargetResourceId $vnet.Id
+```
+
+## Disable VNet flow logging
+
+To disable a VNet flow log without deleting it so you can re-enable it later, use [Set-AzNetworkWatcherFlowLog](/powershell/module/az.network/set-aznetworkwatcherflowlog).
+
+```azurepowershell-interactive
+# Place the virtual network configuration into a variable.
+$vnet = Get-AzVirtualNetwork -Name myVNet -ResourceGroupName myResourceGroup
+# Place the storage account configuration into a variable.
+$storageAccount = Get-AzStorageAccount -Name mynwstorageaccount -ResourceGroupName Storage
+
+# Disable the VNet flow log.
+Set-AzNetworkWatcherFlowLog -Enabled $false -Name myVNetFlowLog -NetworkWatcherName NetworkWatcher_eastus -ResourceGroupName NetworkWatcherRG -StorageId $storageAccount.Id -TargetResourceId $vnet.Id
+```
+
+## Delete a VNet flow log resource
+
+To delete a VNet flow log resource, use [Remove-AzNetworkWatcherFlowLog](/powershell/module/az.network/remove-aznetworkwatcherflowlog).
+
+```azurepowershell-interactive
+# Delete the VNet flow log.
+Remove-AzNetworkWatcherFlowLog -Name myVNetFlowLog -NetworkWatcherName NetworkWatcher_eastus -ResourceGroupName NetworkWatcherRG
+```
+
+## Next steps
+
+- To learn about traffic analytics, see [Traffic analytics](traffic-analytics.md).
+- To learn how to use Azure built-in policies to audit or enable traffic analytics, see [Manage traffic analytics using Azure Policy](traffic-analytics-policy-portal.md).
networking Networking Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/networking/fundamentals/networking-overview.md
Using Azure DNS, you can host and resolve public domains, manage DNS resolution
### <a name="nat"></a>Virtual network NAT Gateway
-[Virtual Network NAT](../../virtual-network/nat-gateway/nat-overview.md) (network address translation) simplifies outbound-only Internet connectivity for virtual networks. When configured on a subnet, all outbound connectivity uses your specified static public IP addresses. Outbound connectivity is possible without load balancer or public IP addresses directly attached to virtual machines.
-For more information, see [What is virtual network NAT gateway?
+Virtual Network NAT (network address translation) simplifies outbound-only Internet connectivity for virtual networks. When configured on a subnet, all outbound connectivity uses your specified static public IP addresses. Outbound connectivity is possible without a load balancer or public IP addresses directly attached to virtual machines.
+For more information, see [What is virtual network NAT gateway?](../../virtual-network/nat-gateway/nat-overview.md)
:::image type="content" source="./media/networking-overview/flow-map.png" alt-text="Virtual network NAT gateway":::
This section describes networking services in Azure that help protect your netwo
[Azure DDoS Protection](../../ddos-protection/manage-ddos-protection.md) provides countermeasures against the most sophisticated DDoS threats. The service provides enhanced DDoS mitigation capabilities for your application and resources deployed in your virtual networks. Additionally, customers using Azure DDoS Protection have access to DDoS Rapid Response support to engage DDoS experts during an active attack.
+Azure DDoS Protection consists of two tiers:
+
+- [DDoS Network Protection](../../ddos-protection/ddos-protection-overview.md#ddos-network-protection): Azure DDoS Network Protection, combined with application design best practices, provides enhanced DDoS mitigation features to defend against DDoS attacks. It's automatically tuned to help protect your specific Azure resources in a virtual network.
+- [DDoS IP Protection](../../ddos-protection/ddos-protection-overview.md#ddos-ip-protection): DDoS IP Protection is a pay-per-protected IP model. DDoS IP Protection contains the same core engineering features as DDoS Network Protection, but will differ in the following value-added
+ ### <a name="privatelink"></a>Azure Private Link
networking Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/networking/policy-reference.md
Title: Built-in policy definitions for Azure networking services description: Lists Azure Policy built-in policy definitions for Azure networking services. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2023 Last updated : 08/30/2023
networking Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/networking/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure networking services description: Lists Azure Policy Regulatory Compliance controls available for Azure networking services. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/03/2023 Last updated : 08/25/2023
notification-hubs Create Notification Hub Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/notification-hubs/create-notification-hub-portal.md
A namespace contains one or more notification hubs, so type a name for the hub i
1. Review the [**Availability Zones**](./notification-hubs-high-availability.md#zone-redundant-resiliency) option. If you chose a region that has availability zones, the check box is selected by default. Availability Zones is a paid feature, so an additional fee is added to your tier. > [!NOTE]
- > Availability zones, and the ability to edit cross region disaster recovery options, are public preview features. Availability Zones is available for an additional cost; however, you will not be charged while the feature is in preview. For more information, see [High availability for Azure Notification Hubs](./notification-hubs-high-availability.md).
+ > The Availability Zones feature is currently in public preview. Availability Zones is available for an additional cost; however, you will not be charged while the feature is in preview. For more information, see [High availability for Azure Notification Hubs](./notification-hubs-high-availability.md).
1. Choose a **Disaster recovery** option: **None**, **Paired recovery region**, or **Flexible recovery region**. If you choose **Paired recovery region**, the failover region is displayed. If you select **Flexible recovery region**, use the drop-down to choose from a list of recovery regions.
notification-hubs Notification Hubs High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/notification-hubs/notification-hubs-high-availability.md
Previously updated : 07/17/2023 Last updated : 08/22/2023
Last updated 07/17/2023
Android, Windows, etc.) from any back-end (cloud or on-premises). This article describes the configuration options to achieve the availability characteristics required by your solution. For more information about our SLA, see the [Notification Hubs SLA][]. > [!NOTE]
-> The following features are available in preview:
+> The following feature is available in preview:
>
-> - Ability to edit your cross region disaster recovery options
> - Availability zones >
-> If you're not participating in the preview, your failover region defaults to one of the [Azure paired regions][].
->
-> Availability zones support will incur an additional cost on top of existing tier pricing. You will not be charged to preview the feature. Once it becomes generally available, you will automatically be billed.
+> Availability zones support will incur an additional cost on top of existing tier pricing. You will not be charged to preview the feature. Once it becomes generally available, you'll be billed automatically.
Notification Hubs offers two availability configurations:
metadata are replicated across data centers in the availability zone. In the eve
New availability zones are being added regularly. The following regions currently support availability zones:
-| Americas | Europe | Africa | Asia Pacific |
-||-|-|--|
-| West US 3 | West Europe | South Africa North | Australia East |
-| East US 2 | France Central | | East Asia |
-| West US 2 | Poland Central | | Qatar |
-| Canada Central| UK South | | India Central |
-| | North Europe | | |
-| | Sweden Central | | |
+| Americas | Europe | Africa | Asia Pacific |
+|---|---|---|---|
+| West US 3 | West Europe | South Africa North | Australia East |
+| East US 2 | France Central | | East Asia |
+| West US 2 | Poland Central | | Qatar |
+| Canada Central| UK South | | India Central |
+| | North Europe | | Japan East |
+| | Sweden Central | | Korea Central |
+| | Norway East | | |
+| | Germany West Central | | |
### Enable availability zones
openshift Azure Redhat Openshift Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/azure-redhat-openshift-release-notes.md
Previously updated : 06/21/2023 Last updated : 08/16/2023
Azure Red Hat OpenShift receives improvements on an ongoing basis. To stay up to date with the most recent developments, this article provides you with information about the latest releases.
+## Version 4.12 - August 2023
+
+We're pleased to announce the launch of OpenShift 4.12 for Azure Red Hat OpenShift. This release enables [OpenShift Container Platform 4.12](https://docs.openshift.com/container-platform/4.12/release_notes/ocp-4-12-release-notes.html).
+
+> [!NOTE]
+> Starting with ARO version 4.12, the support lifecycle for new versions will be set to 14 months from the day of general availability. That means that the end date for support of each version will no longer be dependent on the previous version (as shown in the table above for version 4.12). This does not affect support for the previous version; two generally available (GA) minor versions of Red Hat OpenShift Container Platform will continue to be supported.
+>
+ ## Update - June 2023 - Removed dependencies on service endpoints
openshift Howto Create A Storageclass https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-create-a-storageclass.md
Title: Create an Azure Files StorageClass on Azure Red Hat OpenShift 4
description: Learn how to create an Azure Files StorageClass on Azure Red Hat OpenShift Previously updated : 10/16/2020 Last updated : 08/28/2023 keywords: aro, openshift, az aro, red hat, cli, azure file
In this article, you'll create a StorageClass for Azure Red Hat OpenShift 4 th
> * Setup the prerequisites and install the necessary tools > * Create an Azure Red Hat OpenShift 4 StorageClass with the Azure File provisioner
-If you choose to install and use the CLI locally, this tutorial requires that you are running the Azure CLI version 2.6.0 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli).
+If you choose to install and use the CLI locally, this tutorial requires that you're running the Azure CLI version 2.6.0 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli).
## Before you begin
Deploy an Azure Red Hat OpenShift 4 cluster into your subscription, see [Create
### Set up Azure storage account
-This step will create a resource group outside of the Azure Red Hat OpenShift (ARO) cluster's resource group. This resource group will contain the Azure Files shares that are created by Azure Red Hat OpenShift's dynamic provisioner.
+This step creates a resource group outside of the Azure Red Hat OpenShift (ARO) cluster's resource group. This resource group contains the Azure Files shares that are created by Azure Red Hat OpenShift's dynamic provisioner.
```azurecli AZURE_FILES_RESOURCE_GROUP=aro_azure_files
az role assignment create --role Contributor --scope /subscriptions/mySubscripti
### Set ARO cluster permissions
-The OpenShift persistent volume binder service account will require the ability to read secrets. Create and assign an OpenShift cluster role to achieve this.
+The OpenShift persistent volume binder service account requires the ability to read secrets. Create and assign an OpenShift cluster role to achieve this.
```azurecli ARO_API_SERVER=$(az aro list --query "[?contains(name,'$CLUSTER')].[apiserverProfile.url]" -o tsv)
oc adm policy add-cluster-role-to-user azure-secret-reader system:serviceaccount
## Create StorageClass with Azure Files provisioner
-This step will create a StorageClass with an Azure Files provisioner. Within the StorageClass manifest, the details of the storage account are required so that the ARO cluster knows to look at a storage account outside of the current resource group.
+This step creates a StorageClass with an Azure Files provisioner. Within the StorageClass manifest, the details of the storage account are required so that the ARO cluster knows to look at a storage account outside of the current resource group.
-During storage provisioning, a secret named by secretName is created for the mounting credentials. In a multi-tenancy context, it is strongly recommended to set the value for secretNamespace explicitly, otherwise the storage account credentials may be read by other users.
+During storage provisioning, a secret named by secretName is created for the mounting credentials. In a multi-tenancy context, it's strongly recommended to set the value for secretNamespace explicitly, otherwise the storage account credentials may be read by other users.
```bash
cat << EOF >> azure-storageclass-azure-file.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: azure-file
-provisioner: kubernetes.io/azure-file
+provisioner: file.csi.azure.com
mountOptions:
  - dir_mode=0777
  - file_mode=0777
EOF
oc create -f azure-storageclass-azure-file.yaml
```
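
To confirm that the StorageClass provisions storage end to end, you can create a claim that references it. The following claim is illustrative; the name and size are arbitrary:

```bash
# Create a PersistentVolumeClaim that uses the new azure-file StorageClass.
cat << EOF | oc apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: azure-file-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: azure-file
  resources:
    requests:
      storage: 5Gi
EOF
```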
-Mount options for Azure Files will generally be dependent on the workload that you are deploying and the requirements of the application. Specifically for Azure files, there are additional parameters that you should consider using.
+Mount options for Azure Files generally depend on the workload you're deploying and the requirements of the application. Specifically for Azure Files, there are other parameters that you should consider using.
Mandatory parameters:

- "mfsymlinks" to map symlinks to a form the client can use
openshift Howto Create Private Cluster 4X https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-create-private-cluster-4x.md
Title: Create an Azure Red Hat OpenShift 4 private cluster
description: Learn how to create an Azure Red Hat OpenShift private cluster running OpenShift 4 Previously updated : 03/17/2023-- Last updated : 09/01/2023++ keywords: aro, openshift, az aro, red hat, cli #Customer intent: As an operator, I need to create a private Azure Red Hat OpenShift cluster
After executing the `az aro create` command, it normally takes about 35 minutes
> > By default OpenShift uses self-signed certificates for all of the routes created on `*.apps.<random>.<location>.aroapp.io`. If you choose Custom DNS, after connecting to the cluster, you'll need to follow the OpenShift documentation to [configure a custom certificate for your ingress controller](https://docs.openshift.com/container-platform/4.8/security/certificates/replacing-default-ingress-certificate.html) and [custom certificate for your API server](https://docs.openshift.com/container-platform/4.8/security/certificates/api-server.html). -
-### Create a private cluster without a public IP address (preview)
+### Create a private cluster without a public IP address
Typically, private clusters are created with a public IP address and load balancer, providing a means for outbound connectivity to other services. However, you can create a private cluster without a public IP address. This may be required in situations in which security or policy requirements prohibit the use of public IP addresses.
-> [!IMPORTANT]
-> Currently, this Azure Red Hat OpenShift feature is being offered in preview only. Preview features are available on a self-service, opt-in basis. Previews are provided "as is" and "as available," and they are excluded from the service-level agreements and limited warranty. Azure Red Hat OpenShift previews are partially covered by customer support on a best-effort basis. As such, these features are not meant for production use.
-
-To create a private cluster without a public IP address, register for the feature flag `UserDefinedRouting` using the following command structure:
+To create a private cluster without a public IP address, [follow the procedure above](#create-the-cluster), adding the parameter `--outbound-type UserDefinedRouting` to the `az aro create` command, as in the following example:
```
-az feature register --namespace Microsoft.RedHatOpenShift --name UserDefinedRouting
+az aro create \
+ --resource-group $RESOURCEGROUP \
+ --name $CLUSTER \
+ --vnet aro-vnet \
+ --master-subnet master-subnet \
+ --worker-subnet worker-subnet \
+ --apiserver-visibility Private \
+ --ingress-visibility Private \
+ --outbound-type UserDefinedRouting
```
-After you've registered the feature flag, create the cluster [using the command above](#create-the-cluster).
-Enabling this User Defined Routing option prevents a public IP address from being provisioned. User Defined Routing (UDR) allows you to create custom routes in Azure to override the default system routes or to add more routes to a subnet's route table. See
+> [!NOTE]
+> The UserDefinedRouting flag can only be used when creating clusters with `--apiserver-visibility Private` and `--ingress-visibility Private` parameters.
+>
+
+This User Defined Routing option prevents a public IP address from being provisioned. User Defined Routing (UDR) allows you to create custom routes in Azure to override the default system routes or to add more routes to a subnet's route table. See
[Virtual network traffic routing](../virtual-network/virtual-networks-udr-overview.md) to learn more.
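
For example, a route table that forces all egress through a network virtual appliance might be set up as follows before cluster creation. This is an illustrative sketch: the route table name and appliance IP address are placeholders, and the subnet names match those used earlier.

```
az network route-table create --resource-group $RESOURCEGROUP --name aro-udr

az network route-table route create --resource-group $RESOURCEGROUP --route-table-name aro-udr \
    --name default-route --address-prefix 0.0.0.0/0 \
    --next-hop-type VirtualAppliance --next-hop-ip-address 10.0.100.4

az network vnet subnet update --resource-group $RESOURCEGROUP --vnet-name aro-vnet \
    --name master-subnet --route-table aro-udr
```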
-> [!NOTE]
+> [!IMPORTANT]
> Be sure to specify the correct subnet with the properly configured routing table when creating your private cluster. For egress, the User Defined Routing option ensures that the newly created cluster has the egress lockdown feature enabled to allow you to secure outbound traffic from your new private cluster. See [Control egress traffic for your Azure Red Hat OpenShift (ARO) cluster (preview)](howto-restrict-egress.md) to learn more.
+> [!NOTE]
+> If you choose the User Defined Routing network type, you're completely responsible for managing the egress of your cluster's routing outside of your virtual network (for example, getting access to public internet). Azure Red Hat OpenShift cannot manage this for you.
+>
## Connect to the private cluster You can log into the cluster using the `kubeadmin` user. Run the following command to find the password for the `kubeadmin` user.
openshift Howto Deploy Java Liberty App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-deploy-java-liberty-app.md
This article shows you how to quickly stand up IBM WebSphere Liberty and Open Li
For step-by-step guidance in setting up Liberty and Open Liberty on Azure Red Hat OpenShift, see [Deploy a Java application with Open Liberty/WebSphere Liberty on an Azure Red Hat OpenShift cluster](/azure/developer/java/ee/liberty-on-aro).
+This article is intended to help you quickly get to deployment. Before going to production, you should explore [Tuning Liberty](https://www.ibm.com/docs/was-liberty/base?topic=tuning-liberty).
+ ## Prerequisites - [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
openshift Howto Infrastructure Nodes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-infrastructure-nodes.md
keywords: infrastructure nodes, aro, deploy, openshift, red hat Previously updated : 06/15/2023 Last updated : 08/30/2023
In a production deployment, it's recommended that you deploy at least three mach
## Qualified workloads
-The following infrastructure workloads don't incur OpenShift Container Platform worker subscriptions:
+The following infrastructure workloads don't incur Azure Red Hat OpenShift worker subscriptions:
-- Kubernetes and OpenShift Container Platform control plane services that run on masters
+- Kubernetes and Azure Red Hat OpenShift control plane services that run on masters
- The default router
openshift Howto Kubeconfig https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-kubeconfig.md
+
+ Title: Use Admin Kubeconfig to access an Azure Red Hat OpenShift (ARO) cluster
+description: Learn how to use Admin Kubeconfig to access an Azure Red Hat OpenShift (ARO) cluster.
+++
+keywords: openshift, red hat, kubeconfig, cluster
+ Last updated : 07/12/2023+++
+# Use Admin Kubeconfig to access an Azure Red Hat OpenShift (ARO) cluster
+
+This article shows you how to regain access to an ARO cluster using the Admin Kubeconfig feature. The Admin Kubeconfig feature lets you download and log in with the Admin Kubeconfig file using the OpenShift CLI rather than the ARO console, thus bypassing components that may not be functioning properly. This can be helpful in the following instances:
+
+- The Azure Red Hat OpenShift (ARO) console isn't responding or isn't allowing a login.
+- The OpenShift CLI isn't responding to requests.
+- Cluster operators may not be available or accessible.
+- An alternate cluster login method is required in order to fix the above issues.
+
+The Admin Kubeconfig feature allows cluster access in scenarios where the kube-apiserver is available, but `openshift-ingress`, `openshift-console`, or `openshift-authentication` aren't allowing login.
+
+> [!NOTE]
+> When using the Admin Kubeconfig feature in an environment with multiple clusters, make sure you are working in the correct context. For more information about contexts, see the [Red Hat OpenShift documentation](https://docs.openshift.com/container-platform/4.12/cli_reference/openshift_cli/managing-cli-profiles.html).
+>
+
+## Before you begin
+
+Ensure you're running Azure CLI version 2.50.0 or later.
+
+To check the version of Azure CLI, run:
+```azurecli-interactive
+# Azure CLI version
+az --version
+```
+To install or upgrade Azure CLI, see [Install Azure
+CLI](/cli/azure/install-azure-cli).
+
+## Retrieve Admin Kubeconfig
+
+Run the following to retrieve Admin Kubeconfig:
+
+```
+export SUBSCRIPTION_ID=<your-subscription-ID>
+export RESOURCE_GROUP=<your-resource-group-name>
+export CLUSTER=<name-of-ARO-cluster>
+
+az aro get-admin-kubeconfig --subscription $SUBSCRIPTION_ID --resource-group $RESOURCE_GROUP --name $CLUSTER
+```
+
+## Source and use the Kubeconfig
+
+By default, the command used previously to retrieve Admin Kubeconfig saves it to the local directory under the name *kubeconfig*. To use it, set the environment variable `KUBECONFIG` to the path of that file:
+
+```
+export KUBECONFIG=/path/to/kubeconfig
+oc get nodes
+[output will show up here]
+```
+
+Now there's no need to log in with the OpenShift CLI (`oc`) because the admin user is already logged in and the kubeconfig file is present.
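+
+If you manage multiple clusters, you can verify which context is active before running commands. These are standard `oc config` subcommands:
+
+```
+# List all contexts in the kubeconfig and show the active one.
+oc config get-contexts
+oc config current-context
+```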
openshift Openshift Service Definitions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/openshift-service-definitions.md
Previously updated : 02/01/2022 Last updated : 08/24/2023 keywords: azure, openshift, aro, red hat, service, definition #Customer intent: I need to understand Azure Red Hat OpenShift service definitions to manage my subscription.
No monitoring of these private network connections is provided by Red Hat SRE. M
Azure Red Hat OpenShift customers can specify their own DNS servers. For more information, see [Configure custom DNS for your Azure Red Hat OpenShift cluster](./howto-custom-dns.md).
+### Container Network Interface
+
+Azure Red Hat OpenShift comes with OVN (Open Virtual Network) as the Container Network Interface (CNI). Replacing the CNI is not a supported operation. For more information, see [OVN-Kubernetes network provider for Azure Red Hat OpenShift clusters](concepts-ovn-kubernetes.md).
+ ## Storage The following sections provide information about Azure Red Hat OpenShift storage.
openshift Support Lifecycle https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/support-lifecycle.md
Previously updated : 06/01/2023 Last updated : 08/10/2023 # Support lifecycle for Azure Red Hat OpenShift 4
When a new minor version is introduced, the oldest minor version is deprecated a
## Release and deprecation process
-You can reference upcoming version releases and deprecations on the Azure Red Hat OpenShift Release Calendar.
+You can reference upcoming version releases and deprecations on the [Azure Red Hat OpenShift release calendar](#azure-red-hat-openshift-release-calendar).
For new minor versions of Red Hat OpenShift Container Platform: * The Azure Red Hat OpenShift SRE team publishes a pre-announcement with the planned date of a new version release, and respective old version deprecation, in the [Azure Red Hat OpenShift Release notes](https://github.com/Azure/OpenShift/releases) at least 30 days prior to removal.
See the following guide for the [past Red Hat OpenShift Container Platform (upst
|4.9|November 2021| February 1 2022|4.11 GA| |4.10|March 2022| June 21 2022|4.12 GA| |4.11|August 2022| March 2 2023|4.13 GA|
+|4.12|January 2023| August 19 2023|October 19 2024|
+> [!IMPORTANT]
+> Starting with ARO version 4.12, the support lifecycle for new versions will be set to 14 months from the day of general availability. That means that the end date for support of each version will no longer be dependent on the previous version (as shown in the table above for version 4.12). This does not affect support for the previous version; two generally available (GA) minor versions of Red Hat OpenShift Container Platform will continue to be supported, as [explained previously](#red-hat-openshift-container-platform-version-support-policy).
+>
## FAQ **What happens when a user upgrades an OpenShift cluster with a minor version that is not supported?**
operator-nexus Concepts Network Fabric https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/concepts-network-fabric.md
Key capabilities offered in Azure Operator Nexus Network Fabric:
* **Network Policy Automation** - Automating the management of consistent network policies across the fabric to ensure security, performance, and access controls are enforced uniformly.
-* **Networking features built for Operators** - Support for unique features like multicast, SCTP, and jumbo frames.
+* **Networking features built for Operators** - Support for unique features like multicast, SCTP, and jumbo frames.
operator-nexus Concepts Nexus Kubernetes Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/concepts-nexus-kubernetes-cluster.md
to learn about Kubernetes.
## Nexus Kubernetes cluster
-Nexus Kubernetes cluster (NAKS) is an Operator Nexus version of AKS for on-premises use. It is optimized to automate creation of containers to
-run tenant network function workloads.
+Nexus Kubernetes cluster (NKS) is an Operator Nexus version of Kubernetes for on-premises use. It's optimized to automate the creation of containers to run tenant network function workloads.
Like any Kubernetes cluster, Nexus Kubernetes cluster has two components:
operator-nexus Concepts Observability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/concepts-observability.md
The Operator Nexus observability framework provides operational insights into your on-premises instances. The framework supports logging, monitoring, and alerting (LMA), analytics, and visualization of operational (platform and workloads) data and metrics.
-<! IMG ![ Operator Nexus Logging, Monitoring and Alerting (LMA) Framework](Docs/media/log-monitoring-analytics-framework.png) IMG >
:::image type="content" source="media/log-monitoring-analytics-framework.png" alt-text="Screenshot of Operator Nexus Logging, Monitoring and Alerting (LMA) Framework."::: - Figure: Operator Nexus Logging, Monitoring and Alerting (LMA) Framework The key highlights of Operator Nexus observability framework are:
The logs from Operator Nexus platform are stored in the following tables:
The 'InsightMetrics' table in the Logs section contains the metrics collected from Bare Metal Machines and the undercloud Kubernetes cluster. In addition, a few selected metrics collected from the undercloud can be observed by opening the Metrics tab from the Azure Monitor menu.
-<! IMG ![Azure Monitor Metrics Selection](Docs/media/azure-monitor-metrics-selection.png) IMG >
:::image type="content" source="media/azure-monitor-metrics-selection.png" alt-text="Screenshot of Azure Monitor Metrics Selection."::: Figure: Azure Monitor Metrics Selection
You can use the sample Azure Resource Manager alarm templates for [Operator Nexu
## Log Analytic Workspace
-A [LAW](../azure-monitor/logs/log-analytics-workspace-overview.md)
+A [Log Analytics Workspace (LAW)](../azure-monitor/logs/log-analytics-workspace-overview.md)
is a unique environment to log data from Azure Monitor and other Azure services. Each workspace has its own data repository and configuration but may combine data from multiple services. Each workspace consists of multiple data tables.
operator-nexus Concepts Resource Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/concepts-resource-types.md
This article introduces you to the Operator Nexus components represented as Azure resources in Azure Resource Manager.
-<! IMG ![Resource Types](Docs/media/resource-types.png) IMG >
:::image type="content" source="media/resource-types.png" alt-text="Screenshot of Resource Types."::: Figure: Resource model
The Operator Nexus Cluster (or Instance) platform components include the infrast
### Network Fabric Controller
-Network Fabric Controller (NFC) is an Operator Nexus resource which runs in your subscription in your desired resource group and [Virtual Network](../virtual-network/virtual-networks-overview.md). The Network Fabric Controller acts as a bridge between the Azure control plane and your on-premises infrastructure to manage the lifecycle and configuration of the Network Devices in a Network Fabric instance.
+Network Fabric Controller (NFC) is an Operator Nexus resource that runs in your subscription in your desired resource group and [Virtual Network](../virtual-network/virtual-networks-overview.md). The Network Fabric Controller acts as a bridge between the Azure control plane and your on-premises infrastructure to manage the lifecycle and configuration of the Network Devices in a Network Fabric instance.
-The Network Fabric Controller achieves this by establishing a private connectivity channel between your Azure environment and on-premises using [Azure ExpressRoute](../expressroute/expressroute-introduction.md) and other supporting resources which are deployed in a managed resource group. The NFC is typically the first resource which you would create to establish this connectivity to bootstrap and configure your management and workload networks.
+The Network Fabric Controller achieves this by establishing a private connectivity channel between your Azure environment and on-premises using [Azure ExpressRoute](../expressroute/expressroute-introduction.md) and other supporting resources which are deployed in a managed resource group. The NFC is typically the first resource that you would create to establish this connectivity to bootstrap and configure your management and workload networks.
The Network Fabric Controller enables you to manage all the Network resources within your Operator Nexus instance like Network Fabric, Network Racks, Network Devices, Isolation Domains, Route Policies, etc.
You can manage the lifecycle of a Network Fabric Controller via Azure using any
### Network Fabric
-Network Fabric (NF) resource is a representation of your on-premises network topology in Azure. Every Network Fabric must be associated to and controlled by a Network Fabric Controller which is deployed in the same Azure region. You can associate multiple Network Fabric resources per Network Fabric Controller, see [Nexus Limits and Quotas](./reference-limits-and-quotas.md). A single deployment of the infrastructure is considered a Network Fabric instance.
+Network Fabric (NF) resource is a representation of your on-premises network topology in Azure. Every Network Fabric must be associated with and controlled by a Network Fabric Controller that is deployed in the same Azure region. You can associate multiple Network Fabric resources per Network Fabric Controller, see [Nexus Limits and Quotas](./reference-limits-and-quotas.md). A single deployment of the infrastructure is considered a Network Fabric instance.
Operator Nexus allows you to create Network Fabrics based on specific SKU types, where each SKU represents the number of network racks and compute servers in each rack deployed on-premises.
-Each Network Fabric resource can contain a collection of network racks, network devices, isolation domains for their interconnections. Once a Network Fabric is created and you've validated that your network devices are connected, then it can be Provisioned. Provisioning a Network Fabric is the process of bootstrapping the Network Fabric instance to get the management network up.
+Each Network Fabric resource can contain a collection of network racks, network devices, and isolation domains for their interconnections. Once a Network Fabric is created and you've validated that your network devices are connected, then it can be Provisioned. Provisioning a Network Fabric is the process of bootstrapping the Network Fabric instance to get the management network up.
You can manage the lifecycle of a Network Fabric via Azure using any of the supported interfaces - Azure CLI, REST API, etc. See [how to create and provision a Network Fabric](./howto-configure-network-fabric.md) to learn more. ### Network racks
-Network Rack resource is a representation of your on-premises Racks from the networking perspective. The number of network racks in an Operator Nexus instance depends on the Network Fabric SKU which was chosen while creation.
+Network Rack resource is a representation of your on-premises racks from the networking perspective. The number of network racks in an Operator Nexus instance depends on the Network Fabric SKU that was chosen during creation.
-Each network rack consists of Network Devices which are part of that rack. For example - Customer Edge (CE) routers, Top of Rack (ToR) Switches, Management Switches, Network Packet Brokers (NPB).
+Each network rack consists of Network Devices that are part of that rack. For example - Customer Edge (CE) routers, Top of Rack (ToR) Switches, Management Switches, and Network Packet Brokers (NPB).
-The Network Rack also models the connectivity to the operator's Physical Edge switches (PEs) and the ToRs on the other Racks via Network to Network Interconnect (NNI) resource.
+The Network Rack also models the connectivity to the operator's Physical Edge switches (PEs) and the ToRs on the other racks via Network to Network Interconnect (NNI) resource.
-The lifecycle of Network Rack resources is tied to the Network Fabric resource. The Network Racks are automatically created when you create the Network Fabric and the number of racks depends on the SKU which was chosen. When the Network Fabric resource is deleted, all the associated Network Racks are also deleted along with it.
+The lifecycle of Network Rack resources is tied to the Network Fabric resource. The Network Racks are automatically created when you create the Network Fabric and the number of racks depends on the SKU that was chosen. When the Network Fabric resource is deleted, all the associated Network Racks are also deleted along with it.
### Network devices
-Network Devices represent the Customer Edge (CE) routers, Top of Rack (ToR) Switches, Management Switches, Network Packet Brokers (NPB) which are deployed as part of the Network Fabric instance. Each Network Device resource is associated to a specific Network Rack where it is deployed.
+Network Devices represent the Customer Edge (CE) routers, Top of Rack (ToR) Switches, Management Switches, and Network Packet Brokers (NPB) which are deployed as part of the Network Fabric instance. Each Network Device resource is associated with a specific Network Rack where it is deployed.
-Each network device resource has a SKU, Role, Host Name, and Serial Number as properties, and can have multiple network interfaces associated. Network Interfaces contain the IPv4 and IPv6 addresses, physical identifier, interface type, and the associated connections. Network Interfaces also has the administrativeState property which indicates whether the interface is enabled or disabled.
+Each network device resource has a SKU, Role, Host Name, and Serial Number as properties, and can have multiple network interfaces associated. Network Interfaces contain the IPv4 and IPv6 addresses, physical identifier, interface type, and the associated connections. Network Interfaces also have the `administrativeState` property that indicates whether the interface is enabled or disabled.
-The lifecycle of the Network Interface depends on the Network Device and can exist as long as the parent network device resource exists. However, you can perform certain operations on a network interface resource like enable/disable the administrativeState via Azure using any of the supported interfaces - Azure CLI, REST API, etc.
+The lifecycle of the Network Interface depends on the Network Device and can exist as long as the parent network device resource exists. However, you can perform certain operations on a network interface resource like enable/disable the `administrativeState` via Azure using any of the supported interfaces - Azure CLI, REST API, etc.
The lifecycle of the Network Device resources depends on the network rack resource and will exist as long as the parent Network Fabric resource exists. However, before provisioning the Network Fabric, you can perform certain operations on a network device like setting a custom hostname and updating the serial number of the device via Azure using any of the supported interfaces - Azure CLI, REST API, etc. ### Isolation domains
-Isolation Domains enable east-west or north-south connectivity across Operator Nexus instance. They provide the required network connectivity between infrastructure components and also workload components. In principle, there are two types of networks which are established by isolation domains - management network and workload or tenant network.
+Isolation Domains enable east-west or north-south connectivity across Operator Nexus instance. They provide the required network connectivity between infrastructure components and also workload components. In principle, there are two types of networks that are established by isolation domains - management network and workload or tenant network.
-Management network is the private connectivity that enables communication between the Network Fabric instance which is deployed on-premises and Azure Virtual Network. You can create workload or tenant networks to enable communication between the workloads which are deployed across the Operator Nexus instance.
+A management network provides private connectivity that enables communication between the Network Fabric instance that is deployed on-premises and Azure Virtual Network. You can create workload or tenant networks to enable communication between the workloads that are deployed across the Operator Nexus instance.
-Each isolation domain is associated to a specific Network Fabric resource and has the option to be enabled/disabled. Only when an isolation domain is enabled, it's configured on the network devices and the configuration is removed once the isolation domain is removed.
+Each isolation domain is associated with a specific Network Fabric resource and has the option to be enabled/disabled. Only when an isolation domain is enabled, it's configured on the network devices, and the configuration is removed once the isolation domain is removed.
Primarily, there are two types of isolation domains:
There are two types of Layer 3 networks that you can create:
* Internal Network * External Network
-Internal networks enable layer 3 east-west connectivity across racks within the Operator Nexus instance and external networks enable layer 3 north-south connectivity from the Operator Nexus instance to networks outside the instance. A Layer 3 isolation domain must be configured with at least one internal network and external networks are optional.
+Internal networks enable layer 3 east-west connectivity across racks within the Operator Nexus instance and external networks enable layer 3 north-south connectivity from the Operator Nexus instance to networks outside the instance. A Layer 3 isolation domain must be configured with at least one internal network; external networks are optional.
### Cluster manager
-The Cluster Manager (CM) is hosted on Azure and manages the lifecycle of all on-premises clusters.
+The Cluster Manager (CM) is hosted on Azure and manages the lifecycle of all on-premises infrastructure (also referred to as infra clusters).
Like NFC, a CM can manage multiple Operator Nexus instances. The CM and the NFC are hosted in the same Azure subscription.
-### Cluster
+### Infrastructure Cluster
-The Cluster (or Compute Cluster) resource models a collection of racks, bare metal machines, storage, and networking.
-Each cluster is mapped to the on-premises Network Fabric. A cluster provides a holistic view of the deployed compute capacity.
-Cluster capacity examples include the number of vCPUs, the amount of memory, and the amount of storage space. A cluster is also the basic unit for compute and storage upgrades.
+The Infrastructure Cluster (or Compute Cluster or infra cluster) resource models a collection of racks, bare metal machines, storage, and networking.
+Each infra cluster is mapped to the on-premises Network Fabric. The cluster provides a holistic view of the deployed compute capacity.
+Infra cluster capacity examples include the number of vCPUs, the amount of memory, and the amount of storage space. A cluster is also the basic unit for compute and storage upgrades.
### Rack
-The Rack (or a compute rack) resource represents the compute servers (Bare Metal Machines), management servers, management switch and ToRs. The Rack is created, updated or deleted as part of the Cluster lifecycle management.
+The Rack (or a compute rack) resource represents the compute servers (Bare Metal Machines), management servers, management switches, and ToRs. The Rack is created, updated, or deleted as part of the infra cluster lifecycle management.
### Storage appliance
-Storage Appliances represent storage arrays used for persistent data storage in the Operator Nexus instance. All user and consumer data is stored in these appliances local to your premises. This local storage complies with some of the most stringent local data storage requirements.
+Storage Appliances represent storage arrays used for persistent data storage in the Operator Nexus instance. All user and consumer data is stored in these local on-premises appliances. This local storage complies with some of the most stringent local data storage requirements.
### Bare Metal Machine
-Bare Metal Machines represent the physical servers in a rack. They're lifecycle managed by the Cluster Manager.
+Bare Metal Machines represent the physical servers in a rack. Their lifecycle is managed by the Cluster Manager.
Bare Metal Machines are used by workloads to host Virtual Machines and Kubernetes clusters. ## Workload components
You can use VMs to host your Virtualized Network Function (VNF) workloads.
### Nexus Kubernetes cluster
-Nexus Kubernetes cluster is Azure Kubernetes Service cluster modified to run on your on-premises Operator Nexus instance. The Nexus Kubernetes cluster is designed to host your Containerized Network Function (CNF) workloads.
+Nexus Kubernetes cluster is a Kubernetes cluster modified to run on your on-premises Operator Nexus instance. The Nexus Kubernetes cluster is designed to host your Containerized Network Function (CNF) workloads.
operator-nexus Concepts Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/concepts-security.md
+
+ Title: "Azure Operator Nexus: Security concepts"
+description: Security overview for Azure Operator Nexus
++++ Last updated : 08/14/2023+++
+# Azure Operator Nexus security
+
+Azure Operator Nexus is designed and built to both detect and defend against
+the latest security threats and comply with the strict requirements of government
+and industry security standards. Two cornerstones form the foundation of its
+security architecture:
+
+* **Security by default** - Security resiliency is an inherent part of the platform with little to no configuration changes needed to use it securely.
+* **Assume breach** - The underlying assumption is that any system can be compromised, and as such the goal is to minimize the impact of a security breach if one occurs.
+
+Azure Operator Nexus realizes these principles by leveraging Microsoft cloud-native security tools that give you the ability to improve your cloud security posture while allowing you to protect your operator workloads.
+
+## Platform-wide protection via Microsoft Defender for Cloud
+
+[Microsoft Defender for Cloud](../defender-for-cloud/defender-for-cloud-introduction.md) is a cloud-native application protection platform (CNAPP) that provides the security capabilities needed to harden your resources, manage your security posture, protect against cyberattacks, and streamline security management. These are some of the key features of Defender for Cloud that apply to the Azure Operator Nexus platform:
+
+* **Vulnerability assessment for virtual machines and container registries** - Easily enable vulnerability assessment solutions to discover, manage, and resolve vulnerabilities. View, investigate, and remediate the findings directly from within Defender for Cloud.
+* **Hybrid cloud security** - Get a unified view of security across all your on-premises and cloud workloads. Apply security policies and continuously assess the security of your hybrid cloud workloads to ensure compliance with security standards. Collect, search, and analyze security data from multiple sources, including firewalls and other partner solutions.
+* **Threat protection alerts** - Advanced behavioral analytics and the Microsoft Intelligent Security Graph provide an edge over evolving cyberattacks. Built-in behavioral analytics and machine learning can identify attacks and zero-day exploits. Monitor networks, machines, Azure Storage and cloud services for incoming attacks and post-breach activity. Streamline investigation with interactive tools and contextual threat intelligence.
+* **Compliance assessment against a variety of security standards** - Defender for Cloud continuously assesses your hybrid cloud environment to analyze the risk factors according to the controls and best practices in Azure Security Benchmark. When you enable the advanced security features, you can apply a range of other industry standards, regulatory standards, and benchmarks according to your organization's needs. Add standards and track your compliance with them from the regulatory compliance dashboard.
+* **Container security features** - Benefit from vulnerability management and real-time threat protection on your containerized environments.
+
+There are enhanced security options that let you protect your on-premises host servers as well as the Kubernetes clusters that run your operator workloads. These options are described below.
+
+## Bare metal machine host operating system protection via Microsoft Defender for Endpoint
+
+Azure Operator Nexus bare-metal machines (BMMs), which host the on-premises infrastructure compute servers, are protected when you elect to enable the [Microsoft Defender for Endpoint](/microsoft-365/security/defender-endpoint/microsoft-defender-endpoint) solution. Microsoft Defender for Endpoint provides preventative antivirus (AV), endpoint detection and response (EDR), and vulnerability management capabilities.
+
+You have the option to enable Microsoft Defender for Endpoint protection once you have selected and activated a [Microsoft Defender for Servers](../defender-for-cloud/tutorial-enable-servers-plan.md) plan, as Defender for Servers plan activation is a prerequisite for Microsoft Defender for Endpoint. Once enabled, the Microsoft Defender for Endpoint configuration is managed by the platform to ensure optimal security and performance, and to reduce the risk of misconfigurations.
+
+## Kubernetes cluster workload protection via Microsoft Defender for Containers
+
+On-premises Kubernetes clusters that run your operator workloads are protected when you elect to enable the Microsoft Defender for Containers solution. [Microsoft Defender for Containers](../defender-for-cloud/defender-for-containers-introduction.md) provides run-time threat protection for clusters and Linux nodes as well as cluster environment hardening against misconfigurations.
+
+You have the option to enable Defender for Containers protection within Defender for Cloud by activating the Defender for Containers plan.
+
+## Cloud security is a shared responsibility
+
+It is important to understand that in a cloud environment, security is a [shared responsibility](../security/fundamentals/shared-responsibility.md) between you and the cloud provider. The responsibilities vary depending on the type of cloud service your workloads run on, whether it is Software as a Service (SaaS), Platform as a Service (PaaS), or Infrastructure as a Service (IaaS), as well as where the workloads are hosted - within the cloud provider's or your own on-premises datacenters.
+
+Azure Operator Nexus workloads run on servers in your datacenters, so you are in control of changes to your on-premises environment. Microsoft periodically makes new platform releases available that contain security and other updates. You must then decide when to apply these releases to your environment as appropriate for your organization's business needs.
operator-nexus How To Route Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/how-to-route-policy.md
# Route Policy in Network Fabric
-Route policies provides Operators the capability to allow or deny routes in regards to Layer 3 isolation domains in Network Fabric.
+Route policies provide Operators the capability to allow or deny routes in regards to Layer 3 isolation domains in Network Fabric.
With route policies, routes are tagged with certain attributes via community values and extended community values when they're distributed via Border Gateway Protocol (BGP).
Expected output:
## IP extended community
-The `IPExtendedCommunity`resource allows operators to manipulate routes based on route targets. Operators use it to specify conditions and actions for adding/removing routes as they're propagated up-stream/down-stream or tag them with specific extended community values. The operator must create an ARM resource of the type `I`PExtendedCommunityList` by providing a list of community values and specific properties. ExtendedCommunityLists are used in specifying match conditions and the action properties for route policies.
+The `IPExtendedCommunity` resource allows operators to manipulate routes based on route targets. Operators use it to specify conditions and actions for adding/removing routes as they're propagated upstream/downstream, or to tag them with specific extended community values. The operator must create an ARM resource of the type `IPExtendedCommunityList` by providing a list of community values and specific properties. ExtendedCommunityLists are used in specifying match conditions and the action properties for route policies.
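As an illustration only, creating such a resource from the CLI might look like the following sketch. The `az networkfabric ipextendedcommunity create` command group, its parameter names, and the values shown are assumptions that may differ across versions of the `managednetworkfabric` extension; consult that extension's reference before use.

```azurecli
# Hypothetical sketch: create an IP extended community resource that permits
# routes carrying a given route target. All names and values are examples.
az networkfabric ipextendedcommunity create \
  --resource-group "example-rg" \
  --resource-name "example-ipextcommunity" \
  --location "eastus" \
  --action "Permit" \
  --route-targets "65046:10039"
```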
### Parameters for IP extended community
operator-nexus Howto Azure Operator Nexus Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-azure-operator-nexus-prerequisites.md
To get started with Operator Nexus, you need to create a Network Fabric Controll
in your target Azure region. Each NFC is associated with a CM in the same Azure region and your subscription.
-The NFC/CM pair lifecycle manages up to 32 Azure Operator Nexus instances deployed in your sites connected to this Azure region.
-You'll need to complete the prerequisites before you can deploy the Operator Nexus first NFC and CM pair.
-In subsequent deployments of Operator Nexus, you can skip to creating the NFC and CM.
+You'll need to complete the prerequisites before you can deploy the first Operator Nexus NFC and CM pair.
+In subsequent deployments of Operator Nexus, you only need to create a new NFC and CM after the [quota](./reference-limits-and-quotas.md#network-fabric) of supported Operator Nexus instances has been reached.
## Resource Provider Registration
In subsequent deployments of Operator Nexus, you can skip to creating the NFC an
- Microsoft.Resources

## Dependent Azure resources setup

- Establish [ExpressRoute](/azure/expressroute/expressroute-introduction) connectivity from your on-premises network to an Azure Region:
  - ExpressRoute circuit [creation and verification](/azure/expressroute/expressroute-howto-circuit-portal-resource-manager)
operator-nexus Howto Baremetal Bmm Ssh https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-baremetal-bmm-ssh.md
There's no limit to the number of users in a group.
- It's important to specify the Cluster facing IP addresses for the jump hosts. These IP addresses may be different than the public facing IP address used to access the jump host.
- Once added, users are able to access bare metal machines from any specified jump host IP including a jump host IP defined in another bare metal machine keyset group.
- Existing SSH access remains when adding the first bare metal machine keyset. However, the keyset command limits an existing user's SSH access to the specified jump host IPs in the keyset commands.
-- Currently, only IPv4 jump host addresses are supported. There is a known issue that IPv4 jump host addresses may be mis-parsed and lost if IPv6 addresses are also specified in the `--jump-hosts-allowed` argument of an `az networkcloud cluster baremetalmachinekeyset` command. Use only IPv4 addresses until IPv6 support is added.

## Prerequisites
operator-nexus Howto Cluster Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-cluster-manager.md
# Cluster
-The Cluster Manager is deployed in the operator's Azure subscription to manage the lifecycle of Operator Nexus Clusters.
+The Cluster Manager is deployed in the operator's Azure subscription to manage the lifecycle of Operator Nexus Infrastructure Clusters.
## Before you begin
operator-nexus Howto Cluster Runtime Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-cluster-runtime-upgrade.md
This how-to guide explains the steps for installing the required Azure CLI and e
## Prerequisites

1. The [Azure CLI][installation-instruction] must be installed.
-2. the `networkcloud` extension is required. If the `networkcloud` extension isn't installed, it can be installed following the steps listed [here](https://github.com/MicrosoftDocs/azure-docs-pr/blob/main/articles/operator-nexus/howto-install-cli-extensions.md).
+2. The `networkcloud` CLI extension is required. If the `networkcloud` extension isn't installed, it can be installed following the steps listed [here](https://github.com/MicrosoftDocs/azure-docs-pr/blob/main/articles/operator-nexus/howto-install-cli-extensions.md).
3. Access to the Azure portal for the target cluster to be upgraded.
4. You must be logged in to the same subscription as your target cluster via `az login` (see the sketch after this list).
5. Target cluster must be in a running state, with all control plane nodes healthy and 80+% of compute nodes in a running and healthy state.
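A minimal sketch of satisfying prerequisites 2 and 4 from a shell, assuming you already know the subscription ID of your target cluster:

```azurecli
# Install (or update) the networkcloud CLI extension.
az extension add --name networkcloud --upgrade

# Sign in and select the subscription that contains the target cluster.
az login
az account set --subscription "<subscription-id>"
```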
In the output, you can find the `availableUpgradeVersions` property and look at
"expectedDuration": "Upgrades may take up to 4 hours + 2 hours per rack", "impactDescription": "Workloads will be disrupted during rack-by-rack upgrade", "supportExpiryDate": "2023-07-31",
- "targetClusterVersion": "3.2.0",
+ "targetClusterVersion": "3.3.0",
"workloadImpact": "True" } ],
az networkcloud cluster update-version --cluster-name "clusterName" --target-clu
"versionNumber" --resource-group "resourceGroupName" ```
-The runtime upgrade is a long process. The upgrade is considered to be finished 80% of compute nodes and 100% of management/control nodes have been successfully upgraded.
+The runtime upgrade is a long process. The upgrade first upgrades the management nodes and then sequentially rack by rack for the worker nodes.
+The upgrade is considered to be finished when 80% of worker nodes per rack and 100% of management nodes have been successfully upgraded.
+Workloads may be impacted while the worker nodes in a rack are in the process of being upgraded; however, workloads in all other racks won't be impacted. Consider workload placement across racks in light of this upgrade behavior.
Upgrading all the nodes takes multiple hours but can take more if other processes, like firmware updates, are also part of the upgrade. Due to the length of the upgrade process, it's advised to check the Cluster's detail status periodically for the current state of the upgrade.
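For example, one way to poll progress is to read the cluster's detailed status fields with the CLI; the property names `detailedStatus` and `detailedStatusMessage` in the `--query` below are assumptions based on the cluster resource shape and may differ in your extension version:

```azurecli
# Check the current state of the upgrade (property names are assumptions).
az networkcloud cluster show \
  --cluster-name "clusterName" \
  --resource-group "resourceGroupName" \
  --query "{status:detailedStatus, message:detailedStatusMessage}" \
  --output table
```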
If the rack's spec wasn't updated to the upgraded runtime version before the har
### After a runtime upgrade the cluster shows "Failed" Provisioning State
-During a runtime upgrade the cluster will enter a state of "Upgrading." In the event of a failure of the runtime upgrade, for reasons related to the resources, the cluster will go into a "Failed" Provisioning state. This state could be linked to the lifecycle of the components related to the cluster (e.g StorageAppliance) and may be necessary to diagnose the failure with Microsoft support.
+During a runtime upgrade, the cluster enters an `Upgrading` state. In the event of a failure of the runtime upgrade, for reasons related to the resources, the cluster goes into a `Failed` provisioning state. This state could be linked to the lifecycle of the components related to the cluster (for example, StorageAppliance), and it may be necessary to diagnose the failure with Microsoft support.
<!-- LINKS - External --> [installation-instruction]: https://aka.ms/azcli
operator-nexus Howto Configure Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-configure-cluster.md
The metrics generated from the logging data are available in [Azure Monitor metr
## Create a Cluster
-The Cluster resource represents an on-premises deployment of the platform
+The Infrastructure Cluster resource represents an on-premises deployment of the platform
within the Cluster Manager. All other platform-specific resources are dependent upon it for their lifecycle.
az networkcloud cluster deploy \
--name "$CLUSTER_NAME" \ --resource-group "$CLUSTER_RESOURCE_GROUP" \ --subscription "$SUBSCRIPTION_ID" \
- --no-wait --debug
+ --no-wait --debug
``` > [!TIP]
provided through the Cluster's rack definition. Based on the results of these ch
and any user-skipped machines, a determination is made on whether sufficient nodes passed and/or are available to meet the thresholds necessary for deployment to continue.
+> [!IMPORTANT]
+> The hardware validation process will write the results to the specified `analyticsWorkspaceId` at Cluster Creation.
+> Additionally, the provided Service Principal in the Cluster object is used for authentication against the Log Analytics Workspace Data Collection API.
+> This capability is only visible during a new deployment (Green Field); existing cluster will not have the logs available retroactively.
+
+By default, the hardware validation process writes the results to the configured Cluster `analyticsWorkspaceId`.
+However, due to the nature of Log Analytics Workspace data collection and schema evaluation, ingestion can be delayed by several minutes or more.
+For this reason, the Cluster deployment proceeds even if there was a failure to write the results to the Log Analytics Workspace.
+To help address this possibility, the results are also logged within the Cluster Manager for redundancy.
+
+In the provided Cluster object's Log Analytics Workspace, a new custom table with the Cluster's name as prefix and the suffix `*_CL` should appear.
+In the _Logs_ section of the LAW resource, a query can be executed against the new `*_CL` Custom Log table.
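As a sketch, such a query can also be issued from the CLI, assuming the `log-analytics` extension is installed and a cluster named `contoso`, which would yield a table named roughly `contoso_CL` (the exact name is whatever appears in your workspace):

```azurecli
# Hypothetical example: read the ten most recent hardware validation records.
az monitor log-analytics query \
  --workspace "<log-analytics-workspace-guid>" \
  --analytics-query "contoso_CL | take 10" \
  --output table
```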
#### Cluster Deploy Action with skipping specific bare-metal-machine

A parameter can be passed in to the deploy command that represents the names of
operator-nexus Howto Install Cli Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-install-cli-extensions.md
Previously updated : 08/01/2023 Last updated : 08/21/2023 #
Example output:
Name Version
-- -
monitor-control-service 0.2.0
-connectedmachine 0.5.1
-connectedk8s 1.3.20
+connectedmachine 0.6.0
+connectedk8s 1.4.0
k8s-extension 1.4.2
-networkcloud 1.0.0b2
+networkcloud 1.0.0
k8s-configuration 1.7.0
-managednetworkfabric 3.1.0
+managednetworkfabric 3.2.0
customlocation 0.1.3
ssh 2.0.1
```
operator-nexus Howto Run Instance Readiness Testing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-run-instance-readiness-testing.md
A service principal with the following role assignments. The supplemental script
* `Contributor` - For creating and manipulating resources
* `Storage Blob Data Contributor` - For reading from and writing to the storage blob container
-* `Azure ARC Kubernetes Admin` - For ARC enrolling the NAKS cluster
+* `Azure ARC Kubernetes Admin` - For ARC enrolling the NKS cluster
Additionally, the script creates the necessary security group, and adds the service principal to the security group. If the security group exists, it adds the service principal to the existing security group.
operator-nexus Howto Set Up Defender For Cloud Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-set-up-defender-for-cloud-security.md
+
+ Title: "Azure Operator Nexus: How to set up the Defender for Cloud security environment"
+description: Learn how to enable and configure Defender for Cloud security plan features on your Operator Nexus subscription.
++++ Last updated : 08/18/2023+++
+# Set up the Defender for Cloud security environment on your Operator Nexus subscription
+
+This guide provides you with instructions on how to enable Microsoft Defender for Cloud and activate and configure some of its enhanced security plan options that can be used to secure your Operator Nexus bare metal compute servers and workloads.
+
+## Before you begin
+
+To aid your understanding of Defender for Cloud and its many security features, there's a wide variety of material available on the [Microsoft Defender for Cloud documentation](https://learn.microsoft.com/azure/defender-for-cloud/) site that you might find helpful.
+
+## Prerequisites
+
+To successfully complete the actions in this guide:
+- You must have an Azure Operator Nexus subscription.
+- You must have a deployed Azure Arc-connected Operator Nexus instance running in your on-premises environment.
+- You must use an Azure portal user account in your subscription with Owner, Contributor or Reader role.
+
+## Enable Defender for Cloud
+
+Enabling Microsoft Defender for Cloud on your Operator Nexus subscription is simple and immediately gives you access to its free included security features. To turn on Defender for Cloud:
+
+1. Sign in to [Azure portal](https://portal.azure.com).
+2. In the search box at the top, enter "Defender for Cloud."
+3. Select Microsoft Defender for Cloud under Services.
+
+When the Defender for Cloud [overview page](../defender-for-cloud/overview-page.md) opens, you have successfully activated Defender for Cloud on your subscription. The overview page is an interactive dashboard user experience that provides a comprehensive view of your Operator Nexus security posture. It displays security alerts, coverage information, and much more. Using this dashboard, you can assess the security of your workloads and identify and mitigate risks.
+
+After activating Defender for Cloud, you have the option to enable Defender for Cloud's enhanced security features that provide important server and workload protections:
+- [Defender for Servers](../defender-for-cloud/tutorial-enable-servers-plan.md)
+- [Defender for Endpoint](/microsoft-365/security/defender-endpoint/microsoft-defender-endpoint) - made available through Defender for Servers
+- [Defender for Containers](../defender-for-cloud/defender-for-containers-introduction.md)
+
+## Set up a Defender for Servers plan to protect your bare metal servers
+
+To take advantage of the added security protection for your on-premises bare metal machine (BMM) compute servers provided by Microsoft Defender for Endpoint, you can enable and configure a [Defender for Servers plan](../defender-for-cloud/plan-defender-for-servers-select-plan.md) on your Operator Nexus subscription.
+
+### Prerequisites
+
+- Defender for Cloud must be enabled on your subscription.
+
+To set up a Defender for Servers plan:
+1. [Turn on the Defender for Servers plan feature](../defender-for-cloud/tutorial-enable-servers-plan.md#enable-the-defender-for-servers-plan) under Defender for Cloud (a CLI sketch for enabling the plan follows these steps).
+2. [Select one of the Defender for Servers plans](../defender-for-cloud/tutorial-enable-servers-plan.md#select-a-defender-for-servers-plan).
+3. While on the *Defender plans* page, click the Settings link for Servers under the "Monitoring coverage" column. The *Settings & monitoring* page will open.
+ * Ensure that **Log Analytics agent/Azure Monitor agent** is set to Off.
+ * Ensure that **Endpoint protection** is set to Off.
+ :::image type="content" source="media/security/nexus-defender-for-servers-plan-settings.png" alt-text="Screenshot of Defender for Servers plan settings for Operator Nexus." lightbox="media/security/nexus-defender-for-servers-plan-settings.png":::
+ * Click Continue to save any changed settings.
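As an alternative to the portal steps above, the plan can be enabled at the subscription scope with the CLI; this is a minimal sketch that assumes `VirtualMachines` is the pricing name for Defender for Servers (verify the plan and subplan names against the current `az security pricing` reference):

```azurecli
# Enable the Defender for Servers plan on the current subscription.
az security pricing create --name VirtualMachines --tier Standard
```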
+
+### Operator Nexus-specific requirement for enabling Defender for Endpoint
+
+> [!IMPORTANT]
+> In Operator Nexus, Microsoft Defender for Endpoint is enabled on a per-cluster basis rather than across all clusters at once, which is the default behavior when the Endpoint protection setting is enabled in Defender for Servers. To request Endpoint protection to be turned on in one or more of your on-premises workload clusters you will need to open a Microsoft Support ticket, and the Support team will subsequently perform the enablement actions. You must have a Defender for Servers plan active in your subscription prior to opening a ticket.
+
+Once Defender for Endpoint is enabled by Microsoft Support, its configuration is managed by the platform to ensure optimal security and performance, and to reduce the risk of misconfigurations.
+
+## Set up the Defender for Containers plan to protect your Azure Kubernetes Service cluster workloads
+
+You can protect the on-premises Kubernetes clusters that run your operator workloads by enabling and configuring the [Defender for Containers](../defender-for-cloud/defender-for-containers-introduction.md) plan on your subscription.
+
+### Prerequisites
+
+- Defender for Cloud must be enabled on your subscription.
+
+To set up the Defender for Containers plan:
+
+1. Turn on the [Defender for Containers plan feature](../defender-for-cloud/tutorial-enable-containers-azure.md#enable-the-defender-for-containers-plan) under Defender for Cloud (a one-line CLI sketch follows these steps).
+2. While on the *Defender plans* page, click the Settings link for Containers under the "Monitoring coverage" column. The *Settings & monitoring* page will open.
+ * Ensure that **DefenderDaemonSet** is set to Off.
+ * Ensure that **Azure Policy for Kubernetes** is set to Off.
+ :::image type="content" source="media/security/nexus-defender-for-containers-plan-settings.png" alt-text="Screenshot of Defender for Containers plan settings for Operator Nexus." lightbox="media/security/nexus-defender-for-containers-plan-settings.png":::
+ * Click Continue to save any changed settings.
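As with the Servers plan, a minimal CLI sketch for enabling this plan at the subscription scope, assuming `Containers` is the pricing name:

```azurecli
# Enable the Defender for Containers plan on the current subscription.
az security pricing create --name Containers --tier Standard
```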
operator-nexus Howto Use Azure Policy For Aks Cluster Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-use-azure-policy-for-aks-cluster-security.md
+
+ Title: "Azure Operator Nexus: How to use Azure Policy to protect on-premises Azure Kubernetes Service clusters"
+description: Learn how to assign Azure built-in policies or create custom policies to secure your Operator Nexus on-premises Azure Kubernetes Service (AKS) clusters.
++++ Last updated : 08/18/2023+++
+# Use Azure Policy to secure your Azure Kubernetes Service (AKS) clusters
+
+You can set up extra security protections for your Operator Nexus Arc-connected on-premises Azure Kubernetes Service (AKS) clusters using Azure Policy. With Azure Policy, you assign a single policy or group of related policies (called an initiative or policy set) to one or more of your clusters. Individual policies can be either built-in or custom policy definitions that you create.
+
+This guide provides information on how to apply policy definitions to your clusters and verify those assignments are being enforced.
+
+## Before you begin
+
+If you're new to Azure Policy, here are some helpful resources that you can use to become more familiar with Azure Policy, how it can be used to secure your AKS clusters, and the built-in policy definitions that are available for you to use for AKS resource protection:
+
+- [Azure Policy documentation](https://learn.microsoft.com/azure/governance/policy/)
+- [Understand Azure Policy for Kubernetes clusters](../governance/policy/concepts/policy-for-kubernetes.md)
+- [Azure Policy Built-in definitions for AKS](../aks/policy-reference.md)
+
+## Prerequisites
+
+- One or more on-premises AKS clusters that are Arc-connected to Azure.
+
+ > [!NOTE]
+ > Operator Nexus does not require you to install the Azure Policy add-on for AKS in your clusters since the extension is automatically installed during AKS cluster deployment.
+
+- A user account in your subscription with the appropriate role:
+ * A [Resource Policy Contributor](../role-based-access-control/built-in-roles.md#resource-policy-contributor) or Owner can view, create, assign, and disable policies.
+ * A Contributor or Reader can view policies and policy assignments.
+
+## Apply and validate policies against your AKS clusters
+
+The process for assigning a policy or initiative to your AKS clusters and validating the assignment is as follows:
+
+1. Determine whether there is an existing [built-in AKS policy](../aks/policy-reference.md) or initiative that is suitable for your security requirements.
+2. Sign in to the [Azure portal](https://portal.azure.com) to perform the appropriate type of policy or initiative assignment on your Operator Nexus subscription based on your research in Step 1.
+ * If a built-in policy or initiative exists, you can assign it using the instructions [here](../aks/use-azure-policy.md?source=recommendations#assign-a-built-in-policy-definition-or-initiative).
+ * Otherwise, you can create and assign a [custom policy definition](../aks/use-azure-policy.md?source=recommendations#create-and-assign-a-custom-policy-definition). A minimal CLI sketch of a policy assignment appears after these steps.
+
+3. [Validate](../aks/use-azure-policy.md?source=recommendations#validate-an-azure-policy-is-running) that the policy or initiative has been applied to your clusters.
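For reference, a minimal sketch of step 2 using the CLI rather than the portal; the assignment name, definition ID, and scope are placeholders you would replace with real values:

```azurecli
# Assign a built-in AKS policy definition to the resource group that
# contains the Arc-connected cluster. All IDs shown are placeholders.
az policy assignment create \
  --name "example-aks-policy-assignment" \
  --policy "<built-in-policy-definition-id>" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<cluster-resource-group>"
```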
operator-nexus Quickstarts Kubernetes Cluster Deployment Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/quickstarts-kubernetes-cluster-deployment-cli.md
Before you run the commands, you need to set several variables to define the con
| SERVICE_CIDR | The network range for the Kubernetes services in the cluster, in CIDR notation. |
| DNS_SERVICE_IP | The IP address for the Kubernetes DNS service. |

Once you've defined these variables, you can run the Azure CLI command to create the cluster. Add the ```--debug``` flag at the end to provide more detailed output for troubleshooting purposes. To define these variables, use the following set commands and replace the example values with your preferred values. You can also use the default values for some of the variables, as shown in the following example:

```bash
RESOURCE_GROUP="myResourceGroup"
-LOCATION="$(az group show --name $RESOURCE_GROUP --query location | tr -d '\"')"
-SUBSCRIPTION_ID="$(az account show -o tsv --query id)"
+SUBSCRIPTION_ID="<Azure subscription ID>"
+LOCATION="$(az group show --name $RESOURCE_GROUP --query location --subscription $SUBSCRIPTION_ID -o tsv)"
CUSTOM_LOCATION="/subscriptions/<subscription_id>/resourceGroups/<managed_resource_group>/providers/microsoft.extendedlocation/customlocations/<custom-location-name>"
CSN_ARM_ID="/subscriptions/<subscription_id>/resourceGroups/<resource_group>/providers/Microsoft.NetworkCloud/cloudServicesNetworks/<csn-name>"
CNI_ARM_ID="/subscriptions/<subscription_id>/resourceGroups/<resource_group>/providers/Microsoft.NetworkCloud/l3Networks/<l3Network-name>"
K8S_VERSION="v1.24.9"
ADMIN_USERNAME="azureuser"
SSH_PUBLIC_KEY="$(cat ~/.ssh/id_rsa.pub)"
CONTROL_PLANE_COUNT="1"
-CONTROL_PLANE_VM_SIZE="NC_G2_v1"
+CONTROL_PLANE_VM_SIZE="NC_G6_28_v1"
INITIAL_AGENT_POOL_NAME="${CLUSTER_NAME}-nodepool-1"
INITIAL_AGENT_POOL_COUNT="1"
-INITIAL_AGENT_POOL_VM_SIZE="NC_M4_v1"
+INITIAL_AGENT_POOL_VM_SIZE="NC_P10_56_v1"
POD_CIDR="10.244.0.0/16"
SERVICE_CIDR="10.96.0.0/16"
DNS_SERVICE_IP="10.96.0.10"
```

> [!IMPORTANT]
> It is essential that you replace the placeholders for CUSTOM_LOCATION, CSN_ARM_ID, CNI_ARM_ID, and AAD_ADMIN_GROUP_OBJECT_ID with your actual values before running these commands.
After defining these variables, you can create the Kubernetes cluster by executi
```azurecli
-az networkcloud kubernetescluster create \name "${CLUSTER_NAME}" \resource-group "${RESOURCE_GROUP}" \subscription "${SUBSCRIPTION_ID}" \extended-location name="${CUSTOM_LOCATION}" type=CustomLocation \location "${LOCATION}" \kubernetes-version "${K8S_VERSION}" \aad-configuration admin-group-object-ids="[${AAD_ADMIN_GROUP_OBJECT_ID}]" \admin-username "${ADMIN_USERNAME}" \ssh-key-values "${SSH_PUBLIC_KEY}" \control-plane-node-configuration \
+ --name "${CLUSTER_NAME}" \
+ --resource-group "${RESOURCE_GROUP}" \
+ --subscription "${SUBSCRIPTION_ID}" \
+ --extended-location name="${CUSTOM_LOCATION}" type=CustomLocation \
+ --location "${LOCATION}" \
+ --kubernetes-version "${K8S_VERSION}" \
+ --aad-configuration admin-group-object-ids="[${AAD_ADMIN_GROUP_OBJECT_ID}]" \
+ --admin-username "${ADMIN_USERNAME}" \
+ --ssh-key-values "${SSH_PUBLIC_KEY}" \
+ --control-plane-node-configuration \
count="${CONTROL_PLANE_COUNT}" \ vm-sku-name="${CONTROL_PLANE_VM_SIZE}" \initial-agent-pool-configurations "[{count:${INITIAL_AGENT_POOL_COUNT},mode:System,name:${INITIAL_AGENT_POOL_NAME},vm-sku-name:${INITIAL_AGENT_POOL_VM_SIZE}}]" \network-configuration \
+ --initial-agent-pool-configurations "[{count:${INITIAL_AGENT_POOL_COUNT},mode:System,name:${INITIAL_AGENT_POOL_NAME},vm-sku-name:${INITIAL_AGENT_POOL_VM_SIZE}}]" \
+ --network-configuration \
    cloud-services-network-id="${CSN_ARM_ID}" \
    cni-network-id="${CNI_ARM_ID}" \
    pod-cidrs="[${POD_CIDR}]" \
After a few minutes, the command completes and returns information about the clu
[!INCLUDE [quickstart-cluster-connect](./includes/kubernetes-cluster/quickstart-cluster-connect.md)]

## Add an agent pool

The cluster created in the previous step has a single node pool. Let's add a second agent pool using the ```az networkcloud kubernetescluster agentpool create``` command. The following example creates an agent pool named ```myNexusAKSCluster-nodepool-2```:

You can also use the default values for some of the variables, as shown in the following example:
RESOURCE_GROUP="myResourceGroup"
CUSTOM_LOCATION="/subscriptions/<subscription_id>/resourceGroups/<managed_resource_group>/providers/microsoft.extendedlocation/customlocations/<custom-location-name>"
CLUSTER_NAME="myNexusAKSCluster"
AGENT_POOL_NAME="${CLUSTER_NAME}-nodepool-2"
-AGENT_POOL_VM_SIZE="NC_M4_v1"
+AGENT_POOL_VM_SIZE="NC_P10_56_v1"
AGENT_POOL_COUNT="1"
AGENT_POOL_MODE="User"
```

After defining these variables, you can add an agent pool by executing the following Azure CLI command:

```azurecli
az networkcloud kubernetescluster agentpool create \
--name "${AGENT_POOL_NAME}" \ --kubernetes-cluster-name "${CLUSTER_NAME}" \ --resource-group "${RESOURCE_GROUP}" \
+ --subscription "${SUBSCRIPTION_ID}" \
  --extended-location name="${CUSTOM_LOCATION}" type=CustomLocation \
  --count "${AGENT_POOL_COUNT}" \
  --mode "${AGENT_POOL_MODE}" \
operator-nexus Quickstarts Tenant Workload Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/quickstarts-tenant-workload-deployment.md
To define these variables, use the following set commands and replace the exampl
```bash
# Azure parameters
RESOURCE_GROUP="myResourceGroup"
-SUBSCRIPTION="$(az account show -o tsv --query id)"
+SUBSCRIPTION="<Azure subscription ID>"
CUSTOM_LOCATION="/subscriptions/<subscription_id>/resourceGroups/<managed_resource_group>/providers/microsoft.extendedlocation/customlocations/<custom-location-name>"
-LOCATION="$(az group show --name $RESOURCE_GROUP --query location | tr -d '\"')"
+LOCATION="$(az group show --name $RESOURCE_GROUP --query location --subscription $SUBSCRIPTION -o tsv)"
# VM parameters
VM_NAME="myNexusVirtualMachine"
operator-nexus Reference Limits And Quotas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/reference-limits-and-quotas.md
The creation of the Network Cloud specific resources is subject to the following
| Racks | Up to BOM-specified Compute Racks per Nexus Cluster |
| Bare Metal Machines | Up to BOM-specified BareMetal machines per Rack |
| Storage Appliances | Up to BOM-specified Storage appliances per Nexus Cluster instance |
-| NAKS Cluster | Depends on selection of VM flavor and number of nodes per NAKS cluster |
+| NKS Cluster | Depends on selection of VM flavor and number of nodes per NKS cluster |
| Layer 2 Networks | 3500 per Nexus instance |
| Layer 3 Networks | 200 per Nexus instance |
| Trunked Networks | 3500 per Nexus instance |
The table here briefly mentions other Azure resources that are necessary. Howeve
| Resource Type | Notes |
| - | - |
| Subscription | [Subscription limits](../azure-resource-manager/management/azure-subscription-service-limits.md) |
-| Resource Group | [Resource Group Limits](../azure-resource-manager/management/azure-subscription-service-limits.md#resource-group-limits). There's a max limit for RG per subscription. Operators need to make appropriate consideration for how they want to manage Resource Groups for NAKS clusters vs Virtual machines per Nexus instance. |
+| Resource Group | [Resource Group Limits](../azure-resource-manager/management/azure-subscription-service-limits.md#resource-group-limits). There's a max limit for RG per subscription. Operators need to make appropriate consideration for how they want to manage Resource Groups for NKS clusters vs Virtual machines per Nexus instance. |
| VM Flavors | Customer generally has VM flavor quota in each region within subscription. You need to ensure that you can still create VMs per the requirements. |
| AKS Clusters | [AKS Limits](../azure-resource-manager/management/azure-subscription-service-limits.md#azure-kubernetes-service-limits) |
| Virtual Networks | [Virtual Network Limits](../azure-resource-manager/management/azure-subscription-service-limits.md#azure-resource-manager-virtual-networking-limits) |
The table here briefly mentions other Azure resources that are necessary. Howeve
| Load Balancers (Standard) | [Load Balancer Limits](../azure-resource-manager/management/azure-subscription-service-limits.md#load-balancer) |
| Public IP Address (Standard) | [Public IP Address Limits](../azure-resource-manager/management/azure-subscription-service-limits.md#publicip-address) |
| Azure Monitor Metrics | [Azure Monitor Limits](../azure-resource-manager/management/azure-subscription-service-limits.md#azure-monitor-limits) |
-| Log Analytics Workspace | [Log Analytics Workspace Limits](../azure-monitor/service-limits.md#log-analytics-workspaces) |
+| Log Analytics Workspace | [Log Analytics Workspace Limits](../azure-monitor/service-limits.md#log-analytics-workspaces) |
operator-nexus Reference Near Edge Compute https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/reference-near-edge-compute.md
Azure Operator Nexus offers a group of on-premises cloud solutions. One of the o
In a near-edge environment (also known as an instance), the compute servers (also known as bare-metal machines) represent the physical machines on the rack. They run the CBL-Mariner operating system and provide support for running high-performance workloads.
-## Available SKUs
+<!-- ## Available SKUs
The Azure Operator Nexus offering is built with the following compute nodes for near-edge instances (the nodes that run the actual customer workloads).

| SKU | Description |
| -- | -- |
| Dell R750 | Compute node for near edge |
+-->
## Compute connectivity
operator-nexus Reference Near Edge Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/reference-near-edge-storage.md
This table lists the characteristics of the storage appliance.
| Number of maximum I/O operations supported per second <br>(with 80/20 read/write ratio) | 250K+ (4K) <br>150K+ (16K) |
| Number of I/O operations supported per volume per second | 50K+ |
| Maximum I/O latency supported | 10 ms |
-| Nominal failover time supported | 10 s |
+| Nominal failover time supported | 10 s |
operator-nexus Reference Nexus Kubernetes Cluster Sku https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/reference-nexus-kubernetes-cluster-sku.md
+
+ Title: Azure Operator Nexus Kubernetes cluster VM SKUs
+description: Learn about Azure Operator Nexus Kubernetes cluster SKUs
++++ Last updated : 08/11/2023+++
+# Azure Operator Nexus Kubernetes cluster VM SKUs
+
+The Azure Operator Nexus Kubernetes cluster VMs are grouped into node pools, which are collections of VMs that have the same configuration. The VMs in a node pool are used to run your Kubernetes workloads. The Azure Operator Nexus Kubernetes cluster supports the following VM SKUs. These SKUs are available in all Azure regions where the Azure Operator Nexus Kubernetes cluster is available.
+
+There are two types of VM SKUs:
+
+* General purpose
+* Performance optimized
+
+The primary difference between the two types of VMs is their approach to emulator thread isolation. VM SKUs optimized for performance have dedicated emulator threads, which allow each VM to operate at maximum efficiency. Conversely, general-purpose VM SKUs have emulator threads that run on the same processors as applications running inside the VM. For application workloads that cannot tolerate other workloads sharing their processors, we recommend using the performance-optimized SKUs.
+
+All these SKUs have the following characteristics:
+
+- Dedicated host-to-VM CPU mapping
+- Reserved CPUs for Kubelet are 0 and 1, except for NC_G2_8_v1 and NC_P4_28_v1
+
+These VM SKUs can be used for both worker and control plane nodes within the Azure Operator Nexus Kubernetes cluster.
+
+## General purpose VM SKUs
+
+| VM SKU Name | vCPU | Memory (GiB) | Root Disk (GiB) |
+|---|---|---|---|
+| NC_G48_224_v1 | 48 | 224 | 300 |
+| NC_G36_168_v1 | 36 | 168 | 300 |
+| NC_G24_112_v1 | 24 | 112 | 300 |
+| NC_G12_56_v1 | 12 | 56 | 300 |
+| NC_G6_28_v1 | 6 | 28 | 300 |
+| NC_G2_8_v1 | 2 | 8 | 300 |
+
+## Performance optimized VM SKUs
+
+| VM SKU Name | vCPU | Memory (GiB) | Root Disk (GiB) |
+|---|---|---|---|
+| NC_P46_224_v1 | 46 | 224 | 300 |
+| NC_P34_168_v1 | 34 | 168 | 300 |
+| NC_P22_112_v1 | 22 | 112 | 300 |
+| NC_P10_56_v1 | 10 | 56 | 300 |
+| NC_P4_28_v1 | 4 | 28 | 300 |
+
+## Next steps
+
+Try these SKUs in the Azure Operator Nexus Kubernetes cluster. For more information, see [Quickstart: Deploy an Azure Operator Nexus Kubernetes cluster](./quickstarts-kubernetes-cluster-deployment-bicep.md).
orbital License Spacecraft https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/license-spacecraft.md
To initiate the spacecraft licensing process, you'll need:
- A spacecraft object that corresponds to the spacecraft in orbit or slated for launch. The links in this object must match all current and planned filings.
- A list of ground stations that you wish to use to communicate with your satellite.
-## Step 1 - Initiate the request
+## Step 1: Initiate the request
The process starts by initiating the licensing request via the Azure portal.
The process starts by initiating the licensing request via the Azure portal.
1. Click next to Review + Create.
1. Click Create.
-## Step 2 - Provide more details
+## Step 2: Provide more details
When the request is generated, our regulatory team will investigate the request and determine if more detail is required. If so, a customer support representative will reach out to you with a regulatory intake form. You'll need to input information regarding relevant filings, call signs, orbital parameters, link details, antenna details, points of contact, etc. Fill out all relevant fields in this form, as it helps speed up the process. When you're done entering information, email this form back to the customer support representative.
-## Step 3 - Await feedback from our regulatory team
+## Step 3: Await feedback from our regulatory team
Based on the details provided in the steps above, our regulatory team will make an assessment on time and cost to onboard your spacecraft to all requested ground stations. This step will take a few weeks to execute. Once the determination is made, we'll confirm the cost with you and ask you to authorize before proceeding.
-## Step 4 - Azure Orbital requests the relevant licensing
+## Step 4: Azure Orbital requests the relevant licensing
Upon authorization, you will be billed the fees associated with each relevant ground station. Our regulatory team will seek the relevant licenses to enable your spacecraft to communicate with the desired ground stations. Refer to the following table for an estimated timeline for execution:
Upon authorization, you will be billed the fees associated with each relevant gr
| -- | - | - | - | - | - |
| Onboarding Timeframe | 3-6 months | 3-6 months | 3-6 months | <1 month | 3-6 months |
-## Step 5 - Spacecraft is authorized
+## Step 5: Spacecraft is authorized
Once the licenses are in place, the spacecraft object will be updated by Azure Orbital to represent the licenses held at the specified ground stations. To understand how the authorizations are applied, see [Spacecraft Object](./spacecraft-object.md).
orbital Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/overview.md
With Azure Orbital Ground Station, you can focus on your missions by off-loading
Azure Orbital Ground Station uses Microsoft's global infrastructure and low-latency global network along with an expansive partner ecosystem of ground station networks, cloud modems, and "Telemetry, Tracking, & Control" (TT&C) functions.

## Earth Observation with Azure Orbital Ground Station
For a full end-to-end solution to manage fleet operations and "Telemetry, Tracki
* Direct data ingestion into Azure
* Marketplace integration with third-party data processing and image calibration services
* Integrated cloud modems for X and S bands
- * Global reach through integrated third-party networks
+ * Global reach through first-party and integrated third-party networks
## Links to learn more

- [Overview, features, security, and FAQ](https://azure.microsoft.com/products/orbital/#layout-container-uid189e)
orbital Satellite Imagery With Orbital Ground Station https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/satellite-imagery-with-orbital-ground-station.md
# Tutorial: Process Aqua satellite data using NASA-provided tools
+> [!NOTE]
+> NASA has deprecated support of the DRL software used to process Aqua satellite imagery. Please see: [DRL Current Status](https://directreadout.sci.gsfc.nasa.gov/home.html). Steps 2, 3, and 4 of this tutorial are no longer relevant but presented for informational purposes only.
This article is a comprehensive walk-through showing how to use the [Azure Orbital Ground Station (AOGS)](https://azure.microsoft.com/services/orbital/) to capture and process satellite imagery. It introduces the AOGS and its core concepts and shows how to schedule contacts. The article also steps through an example in which we collect and process NASA Aqua satellite data in an Azure virtual machine (VM) using NASA-provided tools.

Aqua is a polar-orbiting spacecraft launched by NASA in 2002. Data from all science instruments aboard Aqua is downlinked to the Earth using direct broadcast over the X-band in near real-time. More information about Aqua can be found on the [Aqua Project Science](https://aqua.nasa.gov/) website.
payment-hsm View Payment Hsms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/payment-hsm/view-payment-hsms.md
-+ Last updated 08/09/2023
postgresql Concepts Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-monitoring.md
Previously updated : 8/8/2023 Last updated : 9/5/2023 # Monitor metrics on Azure Database for PostgreSQL - Flexible Server
You can choose from the following categories of enhanced metrics:
|Display name|Metric ID|Unit|Description|Dimension|Default enabled|
|---|---|---|---|---|---|
-|**Sessions By State** (Preview)|`sessions_by_state` |Count|Overall state of the back ends. |State|No|
-|**Sessions By WaitEventType** (Preview)|`sessions_by_wait_event_type` |Count|Sessions by the type of event for which the back end is waiting.|Wait Event Type|No|
-|**Oldest Backend** (Preview) |`oldest_backend_time_sec` |Seconds|Age in seconds of the oldest back end (irrespective of the state).|Doesn't apply|No|
-|**Oldest Query** (Preview) |`longest_query_time_sec`|Seconds|Age in seconds of the longest query that's currently running. |Doesn't apply|No|
-|**Oldest Transaction** (Preview) |`longest_transaction_time_sec`|Seconds|Age in seconds of the longest transaction (including idle transactions).|Doesn't apply|No|
-|**Oldest xmin** (Preview)|`oldest_backend_xmin`|Count|The actual value of the oldest `xmin`. If `xmin` isn't increasing, it indicates that there are some long-running transactions that can potentially hold dead tuples from being removed. |Doesn't apply|No|
-|**Oldest xmin Age** (Preview)|`oldest_backend_xmin_age`|Count|Age in units of the oldest `xmin`. Indicates how many transactions passed since the oldest `xmin`. |Doesn't apply|No|
+|**Sessions By State** |`sessions_by_state` |Count|Overall state of the back ends. |State|No|
+|**Sessions By WaitEventType** |`sessions_by_wait_event_type` |Count|Sessions by the type of event for which the back end is waiting.|Wait Event Type|No|
+|**Oldest Backend** |`oldest_backend_time_sec` |Seconds|Age in seconds of the oldest back end (irrespective of the state).|Doesn't apply|No|
+|**Oldest Query** |`longest_query_time_sec`|Seconds|Age in seconds of the longest query that's currently running. |Doesn't apply|No|
+|**Oldest Transaction** |`longest_transaction_time_sec`|Seconds|Age in seconds of the longest transaction (including idle transactions).|Doesn't apply|No|
+|**Oldest xmin** |`oldest_backend_xmin`|Count|The actual value of the oldest `xmin`. If `xmin` isn't increasing, it indicates that there are some long-running transactions that can potentially hold dead tuples from being removed. |Doesn't apply|No|
+|**Oldest xmin Age** |`oldest_backend_xmin_age`|Count|Age in units of the oldest `xmin`. Indicates how many transactions passed since the oldest `xmin`. |Doesn't apply|No|
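Once enabled, these metrics can be read like any other Azure Monitor platform metric. As a sketch, the following assumes a standard flexible-server resource ID; replace the placeholders with real values:

```azurecli
az monitor metrics list \
  --resource "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.DBforPostgreSQL/flexibleServers/<server-name>" \
  --metric "sessions_by_state" \
  --interval PT1M \
  --output table
```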
#### Database
-|Display name|Metric ID|Unit|Description|Dimension|Default enabled|
-|||||||
-|**Backends** (Preview) |`numbackends`|Count|Number of back ends that are connected to this database.|DatabaseName|No|
-|**Deadlocks** (Preview)|`deadlocks` |Count|Number of deadlocks that are detected in this database.|DatabaseName|No|
-|**Disk Blocks Hit** (Preview)|`blks_hit` |Count|Number of times disk blocks were found already in the buffer cache, so that a read wasn't necessary.|DatabaseName|No|
-|**Disk Blocks Read** (Preview) |`blks_read`|Count|Number of disk blocks that were read in this database.|DatabaseName|No|
-|**Temporary Files** (Preview)|`temp_files` |Count|Number of temporary files that were created by queries in this database. |DatabaseName|No|
-|**Temporary Files Size** (Preview) |`temp_bytes` |Bytes|Total amount of data that's written to temporary files by queries in this database. |DatabaseName|No|
-|**Total Transactions** (Preview) |`xact_total` |Count|Number of total transactions that executed in this database. |DatabaseName|No|
-|**Transactions Committed** (Preview) |`xact_commit`|Count|Number of transactions in this database that have been committed.|DatabaseName|No|
-|**Transactions Rolled back** (Preview) |`xact_rollback`|Count|Number of transactions in this database that have been rolled back.|DatabaseName|No|
-|**Tuples Deleted** (Preview) |`tup_deleted`|Count|Number of rows that were deleted by queries in this database. |DatabaseName|No|
-|**Tuples Fetched** (Preview) |`tup_fetched`|Count|Number of rows that were fetched by queries in this database. |DatabaseName|No|
-|**Tuples Inserted** (Preview)|`tup_inserted` |Count|Number of rows that were inserted by queries in this database.|DatabaseName|No|
-|**Tuples Returned** (Preview)|`tup_returned` |Count|Number of rows that were returned by queries in this database.|DatabaseName|No|
-|**Tuples Updated** (Preview) |`tup_updated`|Count|Number of rows that were updated by queries in this database. |DatabaseName|No|
+|Display name |Metric ID |Unit |Description |Dimension |Default enabled|
+|---|---|---|---|---|---|
+|**Backends** |`numbackends` |Count|Number of back ends that are connected to this database. |DatabaseName|No |
+|**Deadlocks** |`deadlocks` |Count|Number of deadlocks that are detected in this database. |DatabaseName|No |
+|**Disk Blocks Hit** |`blks_hit` |Count|Number of times disk blocks were found already in the buffer cache, so that a read wasn't necessary.|DatabaseName|No |
+|**Disk Blocks Read** |`blks_read` |Count|Number of disk blocks that were read in this database. |DatabaseName|No |
+|**Temporary Files** |`temp_files` |Count|Number of temporary files that were created by queries in this database. |DatabaseName|No |
+|**Temporary Files Size** |`temp_bytes` |Bytes|Total amount of data that's written to temporary files by queries in this database. |DatabaseName|No |
+|**Total Transactions** |`xact_total` |Count|Number of total transactions that executed in this database. |DatabaseName|No |
+|**Transactions Committed** |`xact_commit` |Count|Number of transactions in this database that have been committed. |DatabaseName|No |
+|**Transactions per second (Preview)**|`tps` |Count|Number of transactions executed within a second. |DatabaseName|No |
+|**Transactions Rolled back** |`xact_rollback`|Count|Number of transactions in this database that have been rolled back. |DatabaseName|No |
+|**Tuples Deleted** |`tup_deleted` |Count|Number of rows that were deleted by queries in this database. |DatabaseName|No |
+|**Tuples Fetched** |`tup_fetched` |Count|Number of rows that were fetched by queries in this database. |DatabaseName|No |
+|**Tuples Inserted** |`tup_inserted` |Count|Number of rows that were inserted by queries in this database. |DatabaseName|No |
+|**Tuples Returned** |`tup_returned` |Count|Number of rows that were returned by queries in this database. |DatabaseName|No |
+|**Tuples Updated** |`tup_updated` |Count|Number of rows that were updated by queries in this database. |DatabaseName|No |
#### Logical replication

|Display name|Metric ID|Unit|Description|Dimension|Default enabled|
|---|---|---|---|---|---|
-|**Max Logical Replication Lag** (Preview)|`logical_replication_delay_in_bytes`|Bytes|Maximum lag across all logical replication slots.|Doesn't apply|Yes |
+|**Max Logical Replication Lag** |`logical_replication_delay_in_bytes`|Bytes|Maximum lag across all logical replication slots.|Doesn't apply|Yes |
#### Replication

|Display name|Metric ID|Unit|Description|Dimension|Default enabled|
|---|---|---|---|---|---|
-|**Max Physical Replication Lag** (Preview)|`physical_replication_delay_in_bytes`|Bytes|Maximum lag across all asynchronous physical replication slots.|Doesn't apply|Yes |
-|**Read Replica Lag** (Preview)|`physical_replication_delay_in_seconds`|Seconds|Read replica lag in seconds. |Doesn't apply|Yes |
+|**Max Physical Replication Lag** |`physical_replication_delay_in_bytes`|Bytes|Maximum lag across all asynchronous physical replication slots.|Doesn't apply|Yes |
+|**Read Replica Lag** |`physical_replication_delay_in_seconds`|Seconds|Read replica lag in seconds. |Doesn't apply|Yes |
#### Saturation
Autovacuum metrics can be used to monitor and tune autovacuum performance for Az
### List of autovacuum metrics
-|Display name|Metric ID|Unit|Description|Dimension|Default enabled|
-|||||||
-|**Analyze Counter User Tables** (Preview)|`analyze_count_user_tables`|Count|Number of times user-only tables have been manually analyzed in this database. |DatabaseName|No |
-|**AutoAnalyze Counter User Tables** (Preview)|`autoanalyze_count_user_tables`|Count|Number of times user-only tables have been analyzed by the autovacuum daemon in this database. |DatabaseName|No |
-|**AutoVacuum Counter User Tables** (Preview) |`autovacuum_count_user_tables` |Count|Number of times user-only tables have been vacuumed by the autovacuum daemon in this database. |DatabaseName|No |
-|**Estimated Dead Rows User Tables** (Preview)|`n_dead_tup_user_tables` |Count|Estimated number of dead rows for user-only tables in this database. |DatabaseName|No |
-|**Estimated Live Rows User Tables** (Preview)|`n_live_tup_user_tables` |Count|Estimated number of live rows for user-only tables in this database. |DatabaseName|No |
-|**Estimated Modifications User Tables** (Preview)|`n_mod_since_analyze_user_tables`|Count|Estimated number of rows that were modified since user-only tables were last analyzed. |DatabaseName|No |
-|**User Tables Analyzed** (Preview) |`tables_analyzed_user_tables`|Count|Number of user-only tables that have been analyzed in this database. |DatabaseName|No |
-|**User Tables AutoAnalyzed** (Preview) |`tables_autoanalyzed_user_tables`|Count|Number of user-only tables that have been analyzed by the autovacuum daemon in this database.|DatabaseName|No |
-|**User Tables AutoVacuumed** (Preview) |`tables_autovacuumed_user_tables`|Count|Number of user-only tables that have been vacuumed by the autovacuum daemon in this database.|DatabaseName|No |
-|**User Tables Counter** (Preview)|`tables_counter_user_tables` |Count|Number of user-only tables in this database.|DatabaseName|No |
-|**User Tables Vacuumed** (Preview) |`tables_vacuumed_user_tables`|Count|Number of user-only tables that have been vacuumed in this database. |DatabaseName|No |
-|**Vacuum Counter User Tables** (Preview) |`vacuum_count_user_tables` |Count|Number of times user-only tables have been manually vacuumed in this database (not counting `VACUUM FULL`).|DatabaseName|No |
+|Display name |Metric ID |Unit |Description |Dimension |Default enabled|
+|---|---|---|---|---|---|
+|**Analyze Counter User Tables** |`analyze_count_user_tables` |Count |Number of times user-only tables have been manually analyzed in this database. |DatabaseName|No |
+|**AutoAnalyze Counter User Tables** |`autoanalyze_count_user_tables` |Count |Number of times user-only tables have been analyzed by the autovacuum daemon in this database. |DatabaseName|No |
+|**AutoVacuum Counter User Tables** |`autovacuum_count_user_tables` |Count |Number of times user-only tables have been vacuumed by the autovacuum daemon in this database. |DatabaseName|No |
+|**Bloat Percent (Preview)** |`bloat_percent` |Percent|Estimated bloat percentage for user only tables. |DatabaseName|No |
+|**Estimated Dead Rows User Tables** |`n_dead_tup_user_tables` |Count |Estimated number of dead rows for user-only tables in this database. |DatabaseName|No |
+|**Estimated Live Rows User Tables** |`n_live_tup_user_tables` |Count |Estimated number of live rows for user-only tables in this database. |DatabaseName|No |
+|**Estimated Modifications User Tables**|`n_mod_since_analyze_user_tables`|Count |Estimated number of rows that were modified since user-only tables were last analyzed. |DatabaseName|No |
+|**User Tables Analyzed** |`tables_analyzed_user_tables` |Count |Number of user-only tables that have been analyzed in this database. |DatabaseName|No |
+|**User Tables AutoAnalyzed** |`tables_autoanalyzed_user_tables`|Count |Number of user-only tables that have been analyzed by the autovacuum daemon in this database. |DatabaseName|No |
+|**User Tables AutoVacuumed** |`tables_autovacuumed_user_tables`|Count |Number of user-only tables that have been vacuumed by the autovacuum daemon in this database. |DatabaseName|No |
+|**User Tables Counter** |`tables_counter_user_tables` |Count |Number of user-only tables in this database. |DatabaseName|No |
+|**User Tables Vacuumed** |`tables_vacuumed_user_tables` |Count |Number of user-only tables that have been vacuumed in this database. |DatabaseName|No |
+|**Vacuum Counter User Tables** |`vacuum_count_user_tables` |Count |Number of times user-only tables have been manually vacuumed in this database (not counting `VACUUM FULL`).|DatabaseName|No |
### Considerations for using autovacuum metrics
You can use PgBouncer metrics to monitor the performance of the PgBouncer proces
|Display name|Metric ID|Unit|Description|Dimension|Default enabled|
|---|---|---|---|---|---|
-|**Active client connections** (Preview) |`client_connections_active` |Count|Connections from clients that are associated with an Azure Database for PostgreSQL connection. |DatabaseName|No |
-|**Waiting client connections** (Preview)|`client_connections_waiting`|Count|Connections from clients that are waiting for an Azure Database for PostgreSQL connection to service them.|DatabaseName|No |
-|**Active server connections** (Preview) |`server_connections_active` |Count|Connections to Azure Database for PostgreSQL that are in use by a client connection. |DatabaseName|No |
-|**Idle server connections** (Preview) |`server_connections_idle` |Count|Connections to Azure Database for PostgreSQL that are idle and ready to service a new client connection. |DatabaseName|No |
-|**Total pooled connections** (Preview)|`total_pooled_connections`|Count|Current number of pooled connections. |DatabaseName|No |
-|**Number of connection pools** (Preview)|`num_pools` |Count|Total number of connection pools. |DatabaseName|No |
+|**Active client connections** |`client_connections_active` |Count|Connections from clients that are associated with an Azure Database for PostgreSQL connection. |DatabaseName|No |
+|**Waiting client connections** |`client_connections_waiting`|Count|Connections from clients that are waiting for an Azure Database for PostgreSQL connection to service them.|DatabaseName|No |
+|**Active server connections** |`server_connections_active` |Count|Connections to Azure Database for PostgreSQL that are in use by a client connection. |DatabaseName|No |
+|**Idle server connections** |`server_connections_idle` |Count|Connections to Azure Database for PostgreSQL that are idle and ready to service a new client connection. |DatabaseName|No |
+|**Total pooled connections** |`total_pooled_connections`|Count|Current number of pooled connections. |DatabaseName|No |
+|**Number of connection pools** |`num_pools` |Count|Total number of connection pools. |DatabaseName|No |
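These metrics surface the same counters PgBouncer reports through its admin console. As an illustrative sketch, assuming the built-in PgBouncer is enabled and you can connect to the special `pgbouncer` admin database (availability depends on your configuration), the console accepts commands such as:

```sql
-- PgBouncer admin console commands (not regular SQL); run them while
-- connected to the pgbouncer admin database.
SHOW POOLS;  -- per-pool active/waiting client and server connection counts
SHOW STATS;  -- aggregate traffic statistics per database
```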
### Considerations for using the PgBouncer metrics
Is-db-alive is a database server availability metric for Azure Postgres Flexible Server
|Display Name |Metric ID |Unit |Description |Dimension |Default enabled|
|-|-|-|-|-|-|
-|**Database Is Alive** (Preview) |`is_db_alive` |Count |Indicates if the database is up or not |N/a |Yes |
+|**Database Is Alive** |`is_db_alive` |Count |Indicates if the database is up or not |N/a |Yes |
#### Considerations when using the Database availability metrics
-- Aggregating this metric with `MAX()` will allow customers to determine weather the server has been up or down in the last minute.
+- Aggregating this metric with `MAX()` will allow customers to determine whether the server has been up or down in the last minute.
- Customers have the option to further aggregate these metrics with any desired frequency (5m, 10m, 30m, etc.) to suit their alerting requirements and avoid false positives.
- Other possible aggregations are `AVG()` and `MIN()`.
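Conceptually, the availability metric behaves like a periodic lightweight probe against the server; the platform emits the metric itself, so the following is only an illustration of the kind of check it represents:

```sql
-- Classic liveness probe: succeeds only if the server accepts
-- connections and can execute a trivial query.
SELECT 1;
```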
postgresql Concepts Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-networking.md
Here are some concepts to be familiar with when you're using virtual networks wi
* **Private DNS zone integration**. Azure private DNS zone integration allows you to resolve the private DNS within the current virtual network or any in-region peered virtual network where the private DNS zone is linked.
### Using a private DNS zone
-If you use the Azure portal or the Azure CLI to create flexible servers with a virtual network, a new private DNS zone is automatically provisioned for each server in your subscription by using the server name that you provided.
+[Azure Private DNS](../../dns/private-dns-overview.md) provides a reliable and secure DNS service for your virtual network. Azure Private DNS manages and resolves domain names in the virtual network without the need to configure a custom DNS solution.
+
+When using private network access with an Azure virtual network, providing the private DNS zone information is mandatory for DNS resolution; a private DNS zone must be used when configuring a flexible server with private access.
+For new Azure Database for PostgreSQL Flexible Server creation using private network access with API, ARM, or Terraform, create private DNS zones and use them while configuring flexible servers with private access. See more information on [REST API specifications for Microsoft Azure](https://github.com/Azure/azure-rest-api-specs/blob/master/specification/postgresql/resource-manager/Microsoft.DBforPostgreSQL/stable/2021-06-01/postgresql.json). If you use the [Azure portal](./how-to-manage-virtual-network-portal.md) or [Azure CLI](./how-to-manage-virtual-network-cli.md) for creating flexible servers, you can either provide a private DNS zone name that you previously created in the same or a different subscription, or a default private DNS zone is automatically created in your subscription.
If you use an Azure API, an Azure Resource Manager template (ARM template), or Terraform, create private DNS zones that end with `.postgres.database.azure.com`. Use those zones while configuring flexible servers with private access. For example, use the form `[name1].[name2].postgres.database.azure.com` or `[name].postgres.database.azure.com`. If you choose to use the form `[name].postgres.database.azure.com`, the name can't be the name you use for one of your flexible servers or an error message will be shown during provisioning. For more information, see the [private DNS zones overview](../../dns/private-dns-overview.md).
-When using private network access with Azure virtual network, providing the private DNS zone information is mandatory across various interfaces, including API, ARM, and Terraform. Therefore, for new Azure Database for PostgreSQL Flexible Server creation using private network access with API, ARM, or Terraform, create private DNS zones and use them while configuring flexible servers with private access. See more information on [REST API specifications for Microsoft Azure](https://github.com/Azure/azure-rest-api-specs/blob/master/specification/postgresql/resource-manager/Microsoft.DBforPostgreSQL/stable/2021-06-01/postgresql.json). If you use the [Azure portal](./how-to-manage-virtual-network-portal.md) or [Azure CLI](./how-to-manage-virtual-network-cli.md) for creating flexible servers, you can either provide a private DNS zone name that you had previously created in the same or a different subscription or a default private DNS zone is automatically created in your subscription.
Using the Azure portal, CLI, or ARM, you can also change the private DNS zone from the one you provided when creating your Azure Database for PostgreSQL - Flexible Server to another private DNS zone that exists in the same or a different subscription.
> [!IMPORTANT]
> The ability to change the private DNS zone from the one you provided when creating your Azure Database for PostgreSQL - Flexible Server to another private DNS zone is currently disabled for servers with the High Availability feature enabled.
+After you create a private DNS zone in Azure, you'll need to [link](../../dns/private-dns-virtual-network-links.md) a virtual network to it. Once linked, resources hosted in that virtual network can access the private DNS zone.
+ > [!IMPORTANT]
+ > We no longer validate virtual network link presence on server creation for Azure Database for PostgreSQL - Flexible Server with private networking. When creating a server through the Azure portal, you can choose to create the link at server creation via the *"Link Private DNS Zone to your virtual network"* checkbox.
### Integration with a custom DNS server
postgresql Concepts Query Store https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-query-store.md
Previously updated : 7/1/2023 Last updated : 9/1/2023 # Monitor Performance with Query Store
This view returns all the data in Query Store. There is one row for each distinc
|user_id |oid |pg_authid.oid |OID of user who executed the statement|
|db_id |oid |pg_database.oid |OID of database in which the statement was executed|
|query_id |bigint || Internal hash code, computed from the statement's parse tree|
-|query_sql_text |Varchar(10000) || Text of a representative statement. Different queries with the same structure are clustered together; this text is the text for the first of the queries in the cluster.|
+|query_sql_text |varchar(10000) || Text of a representative statement. Different queries with the same structure are clustered together; this text is the text for the first of the queries in the cluster. The default query text length is 6,000 characters and can be modified using the query store parameter `pg_qs.max_query_text_length`.|
|plan_id |bigint | |ID of the plan corresponding to this query|
|start_time |timestamp || Queries are aggregated by time buckets - the time span of a bucket is 15 minutes by default. This is the start time corresponding to the time bucket for this entry.|
|end_time |timestamp || End time corresponding to the time bucket for this entry.|
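To see these columns in practice, you can query the runtime statistics view directly. A minimal sketch, assuming Query Store is enabled and the view is exposed as `query_store.qs_view`:

```sql
-- Most recent time buckets captured by Query Store, with the
-- (possibly truncated) representative query text.
SELECT query_id,
       left(query_sql_text, 80) AS query_text,  -- shorten for display
       start_time,
       end_time
FROM query_store.qs_view
ORDER BY start_time DESC
LIMIT 10;

-- Check the current cap on stored query text length
-- (parameter available when Query Store is enabled).
SHOW pg_qs.max_query_text_length;
```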
postgresql How To Use Pgvector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-use-pgvector.md
Last updated 05/09/2023
## Enable extension
-To install the extension, run the [CREATE EXTENSION](https://www.postgresql.org/docs/current/static/sql-createextension.html) command from the psql tool to load the packaged objects into your database.
+Before you can enable `pgvector` on your Flexible Server, you need to add it to your allowlist as described in [how to use PostgreSQL extensions](./concepts-extensions.md#how-to-use-postgresql-extensions) and check if it's correctly added by running `SHOW azure.extensions;`.
+
+Then you can install the extension by connecting to your target database and running the [CREATE EXTENSION](https://www.postgresql.org/docs/current/static/sql-createextension.html) command. You need to repeat the command separately for every database you want the extension to be available in.
```postgresql
CREATE EXTENSION vector;
```
> [!Note]
-> To disable an extension use `drop_extension()`
+> To remove the extension from the currently connected database use `DROP EXTENSION vector;`.
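To confirm the extension is installed in the currently connected database, you can check the extensions catalog:

```sql
-- Verify that pgvector is installed and see its version.
SELECT extname, extversion
FROM pg_extension
WHERE extname = 'vector';
```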
[!INCLUDE [`pgvector`](../../cosmos-db/postgresql/includes/pgvector-basics.md)]
postgresql Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/overview.md
One advantage of running your workload in Azure is global reach. The flexible se
| West Central US | :heavy_check_mark: | :x: | :heavy_check_mark: | :heavy_check_mark: |
| West Europe | :heavy_check_mark: | :x: $ | :heavy_check_mark: | :heavy_check_mark: |
| West US | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
-| West US 2 | :x: $$ | :x: $ | :x: $ | :heavy_check_mark: |
+| West US 2 | :heavy_check_mark: | :x: $ | :x: $ | :heavy_check_mark: |
| West US 3 | :heavy_check_mark: | :heavy_check_mark: ** | :heavy_check_mark: | :x: |
$ New Zone-redundant high availability deployments are temporarily blocked in these regions. Already provisioned HA servers are fully supported.
In addition, consider the following points of contact as appropriate:
## Next steps
-Now that you've read an introduction to Azure Database for PostgreSQL flexible server deployment mode, you're ready to create your first server: [Create an Azure Database for PostgreSQL - Flexible Server using Azure portal](./quickstart-create-server-portal.md)
+Now that you've read an introduction to Azure Database for PostgreSQL flexible server deployment mode, you're ready to create your first server: [Create an Azure Database for PostgreSQL - Flexible Server using Azure portal](./quickstart-create-server-portal.md)
postgresql Concepts Single To Flexible https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/migrate/concepts-single-to-flexible.md
The following table shows the time for performing offline migrations for databas
> [!IMPORTANT]
> In order to perform faster migrations, pick a higher SKU for your flexible server. You can always change the SKU to match the application needs post migration.
+## Migration of users/roles, ownerships and privileges
+Along with data migration, the tool automatically provides the following built-in capabilities:
+- Migration of users/roles present on your source server to the target server.
+- Migration of ownership of all the database objects on your source server to the target server.
+- Migration of permissions (such as GRANTs/REVOKEs) on database objects from your source server to the target server.
+
+> [!NOTE]
+> This functionality is enabled only for flexible servers in the **North Europe** region. It will be enabled for flexible servers in other Azure regions soon. In the meantime, you can follow the steps mentioned in this [doc](../single-server/how-to-upgrade-using-dump-and-restore.md#migrate-the-roles) to perform users/roles migration.
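Before triggering a migration, it can help to review which roles exist on the source server so you know what to expect on the target. A minimal sketch that excludes the built-in `pg_*` roles:

```sql
-- List user-defined roles on the source server.
SELECT rolname, rolcanlogin, rolsuper
FROM pg_roles
WHERE rolname NOT LIKE 'pg_%'
ORDER BY rolname;
```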
## Limitations
- You can have only one active migration to your flexible server.
- You can select a max of eight databases in one migration attempt. If you have more than eight databases, you must wait for the first migration to be complete before initiating another migration for the rest of the databases. Support for migration of more than eight databases in a single migration will be introduced later.
- The source and target server must be in the same Azure region. Cross region migrations are not supported.
-- The tool takes care of the migration of data and schema. It doesn't migrate managed service features such as server parameters, connection security details, firewall rules, users, roles and permissions. In the later part of the document, we point you to docs that can help you perform the migration of users, roles and firewall rules from single server to flexible server.
+- The tool takes care of the migration of data and schema. It doesn't migrate managed service features such as server parameters, connection security details, and firewall rules.
- The migration tool shows the number of tables copied from source to target server. You need to validate the data in the target server post migration.
- The tool only migrates user databases and not system databases like template_0, template_1, azure_sys and azure_maintenance.
+> [!NOTE]
+> The following limitations are applicable only for flexible servers on which the migration of users/roles functionality is enabled.
+
+- AAD users present on your source server will not be migrated to the target server. To mitigate this limitation, manually create all AAD users on your target server using this [link](../flexible-server/how-to-manage-azure-ad-users.md) before triggering a migration. If AAD users are not created on the target server, migration will fail with an appropriate error message.
+- If the target flexible server uses the SCRAM-SHA-256 password encryption method, connections to the flexible server using the users/roles migrated from the single server will fail since those passwords are encrypted using the md5 algorithm. To mitigate this limitation, choose the option **MD5** for the **password_encryption** server parameter on your flexible server.
+- Though the ownership of database objects such as tables, views, and sequences is copied to the target server, the owner of the database on your target server will be the migration user of your target server. This limitation can be mitigated by executing the following command:
+
+```sql
+ ALTER DATABASE <dbname> OWNER TO <user>;
+```
+ Make sure the user executing the above command is a member of the role to which ownership is being assigned. This limitation will be fixed in upcoming releases of the migration tool so that database owners match those on your source server.
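To verify the result, you can list each database and its owner on the target server:

```sql
-- List non-template databases and their owners.
SELECT d.datname AS database,
       pg_get_userbyid(d.datdba) AS owner
FROM pg_database d
WHERE NOT d.datistemplate
ORDER BY d.datname;
```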
## Experience
Get started with the Single to Flex migration tool by using any of the following methods:
For calculating the total downtime to perform offline migration of production se
> [!NOTE]
> The size of databases is not the right metric for validation. The source server might have bloat/dead tuples, which can bump up the size on the source server. Also, the storage containers used in single and flexible servers are completely different. It is completely normal to have size differences between source and target servers. If there is an issue in the first three steps of validation, it indicates a problem with the migration.
-- **Migration of server settings** - The users, roles/privileges, server parameters, firewall rules (if applicable), tags, alerts need to be manually copied from single server to flexible server. Users and roles are migrated from Single to Flexible server by following the steps listed in this [doc](../single-server/how-to-upgrade-using-dump-and-restore.md).
+- **Migration of server settings** - The server parameters, firewall rules (if applicable), tags, and alerts need to be manually copied from single server to flexible server.
- **Changing connection strings** - Post successful validation, applications should change their connection strings to point to the flexible server. This activity is coordinated with the application team to make changes to all the references of connection strings pointing to single server. Note that in the flexible server the user parameter in the connection string no longer needs to be in the **username@servername** format. You should just use the **user=username** format for this parameter in the connection string. For example
The following table summarizes the list of networking scenarios supported by the
##### Allow list required extensions
-Use the following select command in the Single Server databases to list all the extensions that are being used.
+The migration tool automatically allowlists all extensions used by your single server databases on your flexible server, except for the ones whose libraries need to be loaded at server start.
+
+Use the following select command to list all the extensions used on your Single server databases.
```sql
select * from pg_extension;
```
-Search for the **azure.extensions** parameter on the Server Parameters blade on your Flexible server. Select the list of extensions obtained by running the above query on your Single server database to this server parameter and select Save. You should wait for the deployment to complete before proceeding further.
--
-> [!NOTE]
-> If TIMESCALEDB, POSTGIS_TOPOLOGY, POSTGIS_TIGER_GEOCODER or PG_PARTMAN extensions are used in your single server database, please raise a support request since the Single to Flex migration tool will not handle these extensions.
-
Check if the list contains any of the following extensions:
-- PG_CRON65
+- PG_CRON
- PG_HINT_PLAN
- PG_PARTMAN_BGW
- PG_PREWARM
Check if the list contains any of the following extensions:
If yes, then follow the steps below.
-Go to the server parameters blade and search for **shared_preload_libraries** parameter. This parameter indicates the set of extension libraries that are preloaded at the server restart. Pg_cron and pg_stat_statements extensions are selected by default. Select the list of above extensions used by the single server database to this parameter and select on Save.
+Go to the server parameters blade and search for the **shared_preload_libraries** parameter. The PG_CRON and PG_STAT_STATEMENTS extensions are selected by default. Add the extensions from the above list that are used by your single server databases to this parameter, and select Save.
:::image type="content" source="./media/concepts-single-to-flexible/shared-preload-libraries.png" alt-text="Diagram that shows allow listing of shared preload libraries on Flexible Server." lightbox="./media/concepts-single-to-flexible/shared-preload-libraries.png":::
-The changes to this server parameter would require a server restart to come into effect.
+For the changes to take effect, a server restart is required.
:::image type="content" source="./media/concepts-single-to-flexible/save-and-restart.png" alt-text="Diagram that shows save and restart option on Flexible Server." lightbox="./media/concepts-single-to-flexible/save-and-restart.png":::
-Use the **Save and Restart** option and wait for the postgresql server to restart.
+Use the **Save and Restart** option and wait for the flexible server to restart.
+
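Once the server is back up, you can confirm that the libraries were preloaded:

```sql
-- Confirm the preloaded libraries after the restart.
SHOW shared_preload_libraries;
```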
+> [!NOTE]
+> If TIMESCALEDB, POSTGIS_TOPOLOGY, POSTGIS_TIGER_GEOCODER, POSTGRES_FDW or PG_PARTMAN extensions are used in your single server, please raise a support request since the migration tool does not handle these extensions.
+
+##### Create AAD users on target server
+> [!NOTE]
+> This pre-requisite is applicable only for flexible servers on which the migration of users/roles functionality is enabled.
+
+Execute the following query on your source server to get the list of AAD users.
+```sql
+SELECT r.rolname
+ FROM
+ pg_roles r
+ JOIN pg_auth_members am ON r.oid = am.member
+ JOIN pg_roles m ON am.roleid = m.oid
+ WHERE
+ m.rolname IN (
+ 'azure_ad_admin',
+ 'azure_ad_user',
+ 'azure_ad_mfa'
+ );
+```
+Create the AAD users on your target flexible server using this [link](../flexible-server/how-to-manage-azure-ad-users.md) before creating a migration.
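The linked article uses the `pgaadauth_create_principal` function to create Azure AD users on the flexible server. A minimal sketch, assuming that function is available on your target server; the principal name below is a placeholder:

```sql
-- Create a non-admin, non-MFA Azure AD user on the target server.
SELECT * FROM pgaadauth_create_principal('user@contoso.com', false, false);

-- List the Azure AD principals that now exist on the server.
SELECT * FROM pgaadauth_list_principals(false);
```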
### Migration
In summary, the Single to Flexible migration tool will migrate a table in parall
- Once the migration is complete, verify the data on your flexible server and make sure it's an exact copy of the single server.
- Post verification, enable the HA option as needed on your flexible server.
- Change the SKU of the flexible server to match the application needs. This change needs a database server restart.
-- Migrate users and roles from single to flexible servers. This step can be done by creating users on flexible servers and providing them with suitable privileges or by using the steps that are listed in this [doc](../single-server/how-to-upgrade-using-dump-and-restore.md).
- If you've changed any server parameters from their default values in single server, copy those server parameter values to flexible server.
- Copy other server settings like tags, alerts, firewall rules (if applicable) from single server to flexible server.
- Make changes to your application to point the connection strings to flexible server.
postgresql How To Migrate Single To Flexible Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/migrate/how-to-migrate-single-to-flexible-cli.md
In this tutorial, you learn about:
To complete this tutorial, you need to:
1. Use an existing instance of Azure Database for PostgreSQL – Single Server (the source server)
-2. All extensions used on the Single Server (source) must be [allow-listed on the Flexible Server (target)](./concepts-single-to-flexible.md#allow-list-required-extensions)
+2. Allowlist extensions whose libraries need to be loaded at server start by following the steps mentioned in this [doc](./concepts-single-to-flexible.md#allow-list-required-extensions). It is important to allowlist these extensions before you initiate a migration using this tool.
>[!NOTE] > If TIMESCALEDB, POSTGIS_TOPOLOGY, POSTGIS_TIGER_GEOCODER, POSTGRES_FDW or PG_PARTMAN extensions are used in your single server database, please raise a support request since the Single to Flex migration tool will not handle these extensions.
The `create` parameters that go into the JSON file are as shown below:
| `sourceServerUserName` | Required | The default value is the admin user created during the creation of single server and the password provided will be used for authentication against this user. In case you are not using the default user, this parameter is the user or role on the source server used for performing the migration. This user should have necessary privileges and ownership on the database objects involved in the migration and should be a member of **azure_pg_admin** role. |
| `targetServerUserName` | Required | The default value is the admin user created during the creation of flexible server and the password provided will be used for authentication against this user. In case you are not using the default user, this parameter is the user or role on the target server used for performing the migration. This user should be a member of **azure_pg_admin**, **pg_read_all_settings**, **pg_read_all_stats**, **pg_stat_scan_tables** roles and should have the **Create role, Create DB** attributes. |
| `dbsToMigrate` | Required | Specify the list of databases that you want to migrate to Flexible Server. You can include a maximum of eight database names at a time. |
-| `overwriteDbsInTarget` | Required | When set to true (default), if the target server happens to have an existing database with the same name as the one you're trying to migrate, migration tool automatically overwrites the database. |
+| `overwriteDbsInTarget` | Required | When set to true, if the target server happens to have an existing database with the same name as the one you're trying to migrate, the migration tool automatically overwrites the database. |
| `SetupLogicalReplicationOnSourceDBIfNeeded` | Optional | You can enable logical replication on the source server automatically by setting this property to `true`. This change in the server settings requires a server restart with a downtime of two to three minutes. |
| `SourceDBServerFullyQualifiedDomainName` | Optional | Use it when a custom DNS server is used for name resolution for a virtual network. Provide the FQDN of the Single Server source according to the custom DNS server for this property. |
| `TargetDBServerFullyQualifiedDomainName` | Optional | Use it when a custom DNS server is used for name resolution inside a virtual network. Provide the FQDN of the Flexible Server target according to the custom DNS server. <br> `SourceDBServerFullyQualifiedDomainName` and `TargetDBServerFullyQualifiedDomainName` are included as a part of the JSON only in the rare scenario that a custom DNS server is used for name resolution instead of Azure-provided DNS. Otherwise, don't include these parameters as a part of the JSON file. |
postgresql How To Migrate Single To Flexible Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/migrate/how-to-migrate-single-to-flexible-portal.md
In this tutorial, you learn to:
1. Create the target flexible server. For guided steps, refer to the quickstart [Create an Azure Database for PostgreSQL flexible server using the Portal](../flexible-server/quickstart-create-server-portal.md)
-2. Allowlist all required extensions as shown in [Migrate from Azure Database for PostgreSQL Single Server to Flexible Server](./concepts-single-to-flexible.md#allow-list-required-extensions). It is important to allowlist the extensions before you initiate a migration using this tool.
+2. Allowlist extensions whose libraries need to be loaded at server start by following the steps mentioned in this [doc](./concepts-single-to-flexible.md#allow-list-required-extensions). It is important to allowlist these extensions before you initiate a migration using this tool.
3. Check if the data distribution among all the tables of a database is skewed, with most of the data present in a single table (or a few tables). If it is skewed, the migration speed could be slower than expected. In this case, the migration speed can be increased by [migrating the large table(s) in parallel](./concepts-single-to-flexible.md#improve-migration-speedparallel-migration-of-tables).
After deploying the Flexible Server, follow the steps 3 to 5 under [Configure th
### Setup tab
-The first tab is **Setup**. Just in case you missed it, allowlist all required extensions as shown in [Migrate from Azure Database for PostgreSQL Single Server to Flexible Server](./concepts-single-to-flexible.md#allow-list-required-extensions). It is important to allowlist the extensions before you initiate a migration using this tool.
+The first tab is **Setup**. Just in case you missed it, allowlist necessary extensions as shown in [Migrate from Azure Database for PostgreSQL Single Server to Flexible Server](./concepts-single-to-flexible.md#allow-list-required-extensions). It is important to allowlist these extensions before you initiate a migration using this tool.
>[!NOTE] > If TIMESCALEDB, POSTGIS_TOPOLOGY, POSTGIS_TIGER_GEOCODER, POSTGRES_FDW or PG_PARTMAN extensions are used in your single server database, please raise a support request since the Single to Flex migration tool will not handle these extensions.
Select the **Next** button.
### Review + create tab
>[!NOTE]
-> Gentle reminder to allowlist the [extensions](./concepts-single-to-flexible.md#allow-list-required-extensions) before you select **Create** in case it is not yet complete.
+> Gentle reminder to allowlist necessary [extensions](./concepts-single-to-flexible.md#allow-list-required-extensions) before you select **Create** in case it is not yet complete.
The **Review + create** tab summarizes all the details for creating the migration. Review the details and select the **Create** button to start the migration.
postgresql Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/policy-reference.md
Previously updated : 08/08/2023 Last updated : 08/30/2023 # Azure Policy built-in definitions for Azure Database for PostgreSQL
postgresql Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/security-controls-policy.md
Previously updated : 08/03/2023 Last updated : 08/25/2023 # Azure Policy Regulatory Compliance controls for Azure Database for PostgreSQL
postgresql Whats Happening To Postgresql Single Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/whats-happening-to-postgresql-single-server.md
Learn how to migrate from Azure Database for PostgreSQL - Single Server to Azure
**Q. Will I incur downtime when I migrate my Azure Database from PostgreSQL - Single Server to a Flexible Server?**
-**A.** Currently, the Single Server to Flexible Server migration tool only supports offline migrations. Offline migration requires downtime to your applications during the migration process. Visit Single Server to Flexible Server migration tool](../migrate/concepts-single-to-flexible.md) for more information.
+**A.** Currently, the Single Server to Flexible Server migration tool only supports offline migrations. Offline migration requires downtime to your applications during the migration process. For more information, see [Migration tool - Azure Database for PostgreSQL Single Server to Flexible Server](../migrate/concepts-single-to-flexible.md).
Downtime depends on several factors, including the number of databases, size of your databases, number of tables inside each database, number of indexes, and the distribution of data across tables. It also depends on the SKU of the source and target server and the IOPS available on the source and target server.
private-5g-core Azure Stack Edge Virtual Machine Sizing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/azure-stack-edge-virtual-machine-sizing.md
The following table contains information about the VMs that Azure Private 5G Cor
| AP5GC Cluster Control Plane VM | Standard_F4s_v1 | 4 | 4 | Ephemeral - 128 | Control Plane of the Kubernetes cluster used for AP5GC |
| AP5GC Cluster Node VM | Standard_F16s_HPN | 16 | 32 | Ephemeral - 128 </br> Persistent - 102 GB | AP5GC workload node |
| Control plane upgrade reserve | | 4 | 4 | 0 | Used by ASE during upgrade of the control plane VM |
-| **Total requirements** | | **24** | **44** | **Ephemeral - 336** </br> **Persistent - 102** </br> **Total - 438** | |
+| **Total requirements** | | **28** | **44** | **Ephemeral - 336** </br> **Persistent - 102** </br> **Total - 438** | |
## Remaining usable resource on Azure Stack Edge Pro
private-5g-core Commission Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/commission-cluster.md
Run the following commands at the PowerShell prompt, specifying the object ID yo
```powershell
Invoke-Command -Session $minishellSession -ScriptBlock {Set-HcsKubeClusterArcInfo -CustomLocationsObjectId *object ID*}
+
+Invoke-Command -Session $minishellSession -ScriptBlock {Enable-HcsAzureKubernetesService -f}
```
Once you've run this command, you should see an updated option in the local UI – **Kubernetes** becomes **Kubernetes (Preview)** as shown in the following image.
:::image type="content" source="media/commission-cluster/commission-cluster-kubernetes-preview.png" alt-text="Screenshot of configuration menu, with Kubernetes (Preview) highlighted.":::
-Select the **This Kubernetes cluster is for Azure Private 5G Core or SAP Digital Manufacturing Cloud workloads** checkbox.
-- If you go to the Azure portal and navigate to your **Azure Stack Edge** resource, you should see an **Azure Kubernetes Service** option. You'll set up the Azure Kubernetes Service in [Start the cluster and set up Arc](#start-the-cluster-and-set-up-arc). :::image type="content" source="media/commission-cluster/commission-cluster-ase-resource.png" alt-text="Screenshot of Azure Stack Edge resource in the Azure portal. Azure Kubernetes Service (PREVIEW) is shown under Edge services in the left menu.":::
The Azure Private 5G Core private mobile network requires a custom location and
1. Create the Network Function Operator Kubernetes extension: ```azurecli
- Add-Content -Path $TEMP_FILE -Value @"
+ cat > $TEMP_FILE <<EOF
{ "helm.versions": "v3", "Microsoft.CustomLocation.ServiceAccount": "azurehybridnetwork-networkfunction-operator",
The Azure Private 5G Core private mobile network requires a custom location and
"helm.release-namespace": "azurehybridnetwork", "managed-by": "helm" }
- "@
+ EOF
``` ```azurecli
private-5g-core Complete Private Mobile Network Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/complete-private-mobile-network-prerequisites.md
In this how-to guide, you'll carry out each of the tasks you need to complete be
## Get access to Azure Private 5G Core for your Azure subscription
-Contact your trials engineer and ask them to register your Azure subscription for access to Azure Private 5G Core. If you don't already have a trials engineer and are interested in trialing Azure Private 5G Core, contact your Microsoft account team, or express your interest through the [partner registration form](https://aka.ms/privateMECMSP).
+Contact your trials engineer and ask them to register your Azure subscription for access to Azure Private 5G Core. If you don't already have a trials engineer and are interested in trialing Azure Private 5G Core, contact your Microsoft account team, or express your interest through the [partner registration form](https://forms.office.com/r/4Q1yNRakXe).
## Choose the core technology type (5G or 4G)
private-5g-core Gather Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/gather-diagnostics.md
You must already have an AP5GC site deployed to collect diagnostics.
1. Copy the contents of the **URL** field in the **Container properties** view.
1. Create a [User-assigned identity](../active-directory/managed-identities-azure-resources/overview.md) and assign it to the storage account created above with the **Storage Blob Data Contributor** role.
> [!TIP]
- > Make sure the same User-assigned identity is used during site creation.
+ > You may have already created and associated a user-assigned identity when creating the site.
1. Navigate to the **Packet core control plane** resource for the site.
-1. Select **Identity** under **Settings** on the left side menu.
-1. Toggle **Modify user assigned managed identity?** to **Yes** and select **+ Add**.
-1. In the **Add user assigned managed identity** select the user-signed managed identity you created.
+1. Select **Identity** under **Settings** in the left side menu.
1. Select **Add**.
-1. Select **Next**.
-1. Select **Create**.
+1. Select the user-assigned managed identity you created and select **Add**.
## Gather diagnostics for a site
private-5g-core Modify Packet Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/modify-packet-core.md
# Modify a packet core instance
-Each Azure Private 5G Core site contains a packet core instance, which is a cloud-native implementation of the 3GPP standards-defined 5G Next Generation Core (5G NGC or 5GC). In this how-to guide, you'll learn how to modify a packet core instance using the Azure portal; this includes modifying the packet core's custom location, connected Azure Stack Edge (ASE) device, and access network configuration. You'll also learn how to add and modify the data networks attached to the packet core instance.
+Each Azure Private 5G Core site contains a packet core instance, which is a cloud-native implementation of the 3GPP standards-defined 5G Next Generation Core (5G NGC or 5GC). In this how-to guide, you'll learn how to modify a packet core instance using the Azure portal; this includes modifying the packet core's custom location, connected Azure Stack Edge (ASE) device, and access network configuration. You'll also learn how to add, modify, and remove the data networks attached to the packet core instance.
If you want to modify a packet core instance's local access configuration, follow [Modify the local access configuration in a site](modify-local-access-configuration.md).
To make changes to a data network attached to your packet core instance:
1. Select **Modify**. You should see your changes under the **Data networks** tab.
1. Go to [Submit and verify changes](#submit-and-verify-changes).
+## Remove an attached data network
+
+To remove a data network attached to the packet core:
+
+1. Select the checkbox for the data network you want to delete.
+1. Select **Delete**.
++
+This change will require a manual packet core reinstall to take effect; see [Next steps](#next-steps).
## Submit and verify changes
1. Select **Modify**.
To make changes to a data network attached to your packet core instance:
- If you made changes to the packet core configuration, check that the fields under **Connected ASE device**, **Azure Arc Custom Location** and **Access network** contain the updated information.
- If you made changes to the attached data networks, check that the fields under **Data networks** contain the updated information.
+## Remove data network resource
+
+If you removed an attached data network from the packet core and it is no longer attached to any packet cores or referenced by any SIM policies, you may remove the data network from the resource group:
+> [!NOTE]
+> The data network that you want to delete must have no SIM policies associated with it. If the data network has one or more associated SIM policies, data network removal will be prevented.
+
+1. If you need to delete the data network from a SIM policy's configuration:
+ 1. Navigate to the **SIM Policy** resource.
+ 1. Select **Modify SIM Policy**.
+ 1. Either:
+
+ - Select the **Delete** button for the network slice containing the associated data network.
+ - Or
+ 1. Select the **Edit** button for the network slice containing the associated data network.
+ 1. Select a new **Data network** to be associated with the network slice.
+ 1. Select **Modify**.
+ 1. Select **Review + Modify**.
+ 1. Select **Modify**.
+1. Navigate to the resource group containing your AP5GC resources.
+1. Select the checkbox for the data network resource you want to delete.
+1. Select **Delete**.
+ ## Restore backed up deployment information If you made changes that triggered a packet core reinstall, reconfigure your deployment using the information you gathered in [Back up deployment information](#back-up-deployment-information).
private-5g-core Upgrade Packet Core Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/upgrade-packet-core-arm-template.md
In this step, you'll roll back your packet core using a REST API request. Follow
If any of the configuration you set while your packet core instance was running a newer version isn't supported in the version that you want to roll back to, you'll need to revert to the previous configuration before you're able to perform a rollback. Check the packet core release notes for information on when new features were introduced.
-> [!NOTE]
-> You can roll back your packet core instance to version [PMN-2211-0](azure-private-5g-core-release-notes-2211.md) or later.
-
1. Ensure you have a backup of your deployment information. If you need to back up again, follow [Back up deployment information](#back-up-deployment-information).
1. Perform a [rollback POST request](/rest/api/mobilenetwork/packet-core-control-planes/rollback?tabs=HTTP).
private-5g-core Upgrade Packet Core Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/upgrade-packet-core-azure-portal.md
If you encountered issues after the upgrade, you can roll back the packet core i
If any of the configuration you set while your packet core instance was running a newer version isn't supported in the version that you want to roll back to, you'll need to revert to the previous configuration before you're able to perform a rollback. Check the packet core release notes for information on when new features were introduced.
-> [!NOTE]
-> You can roll back your packet core instance to version [PMN-2211-0](azure-private-5g-core-release-notes-2211.md) or later.
-
1. Ensure you have a backup of your deployment information. If you need to back up again, follow [Back up deployment information](#back-up-deployment-information).
1. Navigate to the **Packet Core Control Plane** resource that you want to roll back as described in [View the current packet core version](#view-the-current-packet-core-version).
1. Select **Rollback version**.
If any of the configuration you set while your packet core instance was running
:::image type="content" source="media/upgrade-packet-core-azure-portal/confirm-rollback.png" alt-text="Screenshot of the Azure portal showing the Confirm rollback field in the Rollback packet core screen.":::
1. Select **Roll back packet core**.
-1. Azure will now redeploy the packet core instance at the new software version. You can check the latest status of the rollback by looking at the **Packet core installation state** field. The **Packet Core Control Plane** resource's overview page will refresh every 20 seconds, and you can select **Refresh** to trigger a manual update. The **Packet core installation state** field will show as **RollingBack** during the rollback and update to **Installed** when the process completes.
+1. Azure will now redeploy the packet core instance at the previous software version. You can check the latest status of the rollback by looking at the **Packet core installation state** field. The **Packet Core Control Plane** resource's overview page will refresh every 20 seconds, and you can select **Refresh** to trigger a manual update. The **Packet core installation state** field will show as **RollingBack** during the rollback and update to **Installed** when the process completes.
1. Follow the steps in [Restore backed up deployment information](#restore-backed-up-deployment-information) to reconfigure your deployment.
1. Follow the steps in [Verify upgrade](#verify-upgrade) to check if the rollback was successful.
private-link Create Private Link Service Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/create-private-link-service-portal.md
Previously updated : 06/22/2023 Last updated : 08/29/2023 #Customer intent: As someone with a basic network background who's new to Azure, I want to create an Azure Private Link service by using the Azure portal
Get started creating a Private Link service that refers to your service. Give Pr
* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-## Create an internal load balancer
+## <a name="create-a-virtual-network"></a> Sign in to Azure
-In this section, you create a virtual network and an internal Azure Load Balancer.
+Sign in to the [Azure portal](https://portal.azure.com) with your Azure account.
-### Load balancer virtual network
-
-Create a virtual network and subnet to host the load balancer that accesses your Private Link service.
-
-1. Sign-in to the [Azure portal](https://portal.azure.com).
-
-1. In the search box at the top of the portal, enter **Virtual network**. Select **Virtual networks** in the search results.
-
-1. Select **+ Create**.
-
-1. In **Create virtual network**, enter or select this information in the **Basics** tab:
-
- | **Setting** | **Value** |
- ||--|
- | **Project Details** | |
- | Subscription | Select your Azure subscription |
- | Resource Group | Select **Create new**. Enter **test-rg**. </br> Select **OK**. |
- | **Instance details** | |
- | Name | Enter **vnet-1** |
- | Region | Select **East US 2** |
-
-1. Select the **IP Addresses** tab or select the **Next: IP Addresses** button at the bottom of the page.
-
-1. In the **IP Addresses** tab, under **IPv4 address space**, select the garbage deletion icon to remove any address space that already appears, and then enter **10.0.0.0/16**.
-
-1. Select **+ Add subnet**.
-
-1. In **Add subnet**, enter the following information:
-
- | Setting | Value |
- |--|-|
- | Subnet name | Enter **subnet-1** |
- | Subnet address range | Enter **10.0.0.0/24** |
-
-1. Select **Add**.
-
-1. Select the **Review + create** tab or select the **Review + create** button.
-
-1. Select **Create**.
### Create load balancer
In this section, you map the private link service to a private endpoint. A virtu
### Create private endpoint virtual network
-1. In the search box at the top of the portal, enter **Virtual network**. Select **Virtual networks** in the search results.
-
-1. Select **+ Create**.
-
-1. In the **Basics** tab, enter or select the following information:
-
- | **Setting** | **Value** |
- ||--|
- | **Project Details** | |
- | Subscription | Select your Azure subscription |
- | Resource Group | Select **test-rg** |
- | **Instance details** | |
- | Name | Enter **vnet-pe** |
- | Region | Select **East US 2** |
-
-1. Select **Next: IP Addresses** or the **IP Addresses** tab.
-
-1. In the **IP Addresses** tab, under **IPv4 address space**, select the garbage deletion icon to remove any address space that already appears, and then enter **10.1.0.0/16**.
+Repeat the steps in [Create a virtual network](#create-a-virtual-network) to create a virtual network with the following settings:
-1. Select **+ Add subnet**.
-
-1. In **Add subnet**, enter the following information:
-
- | Setting | Value |
- |--|-|
- | Subnet name | Enter **subnet-pe** |
- | Subnet address range | Enter **10.1.0.0/24** |
-
-1. Select **Add**.
-
-1. Select **Review + create**.
-
-1. Select **Create**.
+| Setting | Value |
+| - | -- |
+| Name | **vnet-pe** |
+| Location | **East US 2** |
+| Address space | **10.1.0.0/16** |
+| Subnet name | **subnet-pe** |
+| Subnet address range | **10.1.0.0/24** |
### Create private endpoint
private-link Inspect Traffic With Azure Firewall https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/inspect-traffic-with-azure-firewall.md
Title: 'Use Azure Firewall to inspect traffic destined to a private endpoint'
+ Title: 'Azure Firewall scenarios to inspect traffic destined to a private endpoint'
-description: Learn how you can inspect traffic destined to a private endpoint using Azure Firewall.
+description: Learn about different scenarios to inspect traffic destined to a private endpoint using Azure Firewall.
- Previously updated : 04/27/2023+ Last updated : 08/14/2023
-# Use Azure Firewall to inspect traffic destined to a private endpoint
+# Azure Firewall scenarios to inspect traffic destined to a private endpoint
> [!NOTE] > If you want to secure traffic to private endpoints in Azure Virtual WAN using secured virtual hub, see [Secure traffic destined to private endpoints in Azure Virtual WAN](../firewall-manager/private-link-inspection-secure-virtual-hub.md).
If your security requirements require client traffic to services exposed via pri
The same considerations as in scenario 2 above apply. In this scenario, there aren't virtual network peering charges. For more information about how to configure your DNS servers to allow on-premises workloads to access private endpoints, see [on-premises workloads using a DNS forwarder](./private-endpoint-dns.md#on-premises-workloads-using-a-dns-forwarder).
-## Prerequisites
-
-* An Azure subscription.
-
-* A Log Analytics workspace.
-
-See, [Create a Log Analytics workspace in the Azure portal](../azure-monitor/logs/quick-create-workspace.md) to create a workspace if you don't have one in your subscription.
-
-## Sign in to Azure
-
-Sign in to the [Azure portal](https://portal.azure.com).
-
-## Create a VM
-
-In this section, you create a virtual network and subnet to host the VM used to access your private link resource. An Azure SQL database is used as the example service.
-
-## Virtual networks and parameters
-
-Create three virtual networks and their corresponding subnets to:
-
-* Contain the Azure Firewall used to restrict communication between the VM and the private endpoint.
-
-* Host the VM that is used to access your private link resource.
-
-* Host the private endpoint.
-
-Replace the following parameters in the steps with the following information:
-
-### Azure Firewall network
-
-| Parameter | Value |
-|--|-|
-| **\<resource-group-name>** | myResourceGroup |
-| **\<virtual-network-name>** | myAzFwVNet |
-| **\<region-name>** | South Central US |
-| **\<IPv4-address-space>** | 10.0.0.0/16 |
-| **\<subnet-name>** | AzureFirewallSubnet |
-| **\<subnet-address-range>** | 10.0.0.0/24 |
-
-### Virtual machine network
-
-| Parameter | Value |
-|--|-|
-| **\<resource-group-name>** | myResourceGroup |
-| **\<virtual-network-name>** | myVMVNet |
-| **\<region-name>** | South Central US |
-| **\<IPv4-address-space>** | 10.1.0.0/16 |
-| **\<subnet-name>** | VMSubnet |
-| **\<subnet-address-range>** | 10.1.0.0/24 |
-
-### Private endpoint network
-
-| Parameter | Value |
-|--|-|
-| **\<resource-group-name>** | myResourceGroup |
-| **\<virtual-network-name>** | myPEVNet |
-| **\<region-name>** | South Central US |
-| **\<IPv4-address-space>** | 10.2.0.0/16 |
-| **\<subnet-name>** | PrivateEndpointSubnet |
-| **\<subnet-address-range>** | 10.2.0.0/24 |
--
-10. Repeat steps 1 to 9 to create the virtual networks for hosting the virtual machine and private endpoint resources.
-
-### Create virtual machine
-
-1. On the upper-left side of the screen in the Azure portal, select **Create a resource** > **Compute** > **Virtual machine**.
-
-2. In **Create a virtual machine - Basics**, enter or select this information:
-
- | Setting | Value |
- | - | -- |
- | **Project details** | |
- | Subscription | Select your subscription. |
- | Resource group | Select **myResourceGroup**. You created this resource group in the previous section. |
- | **Instance details** | |
- | Virtual machine name | Enter **myVM**. |
- | Region | Select **(US) South Central US**. |
- | Availability options | Leave the default **No infrastructure redundancy required**. |
- | Image | Select **Ubuntu Server 18.04 LTS - Gen1**. |
- | Size | Select **Standard_B2s**. |
- | **Administrator account** | |
- | Authentication type | Select **Password**. |
- | Username | Enter a username of your choosing. |
- | Password | Enter a password of your choosing. The password must be at least 12 characters long and meet the [defined complexity requirements](../virtual-machines/linux/faq.yml?toc=%2fazure%2fvirtual-network%2ftoc.json#what-are-the-password-requirements-when-creating-a-vm-).|
- | Confirm Password | Reenter password. |
- | **Inbound port rules** | |
- | Public inbound ports | Select **None**. |
--
-3. Select **Next: Disks**.
-
-4. In **Create a virtual machine - Disks**, leave the defaults and select **Next: Networking**.
-
-5. In **Create a virtual machine - Networking**, select this information:
-
- | Setting | Value |
- | - | -- |
- | Virtual network | Select **myVMVNet**. |
- | Subnet | Select **VMSubnet (10.1.0.0/24)**.|
- | Public IP | Leave the default **(new) myVm-ip**. |
- | Public inbound ports | Select **Allow selected ports**. |
- | Select inbound ports | Select **SSH**.|
- ||
-
-6. Select **Review + create**. You're taken to the **Review + create** page where Azure validates your configuration.
-
-7. When you see the **Validation passed** message, select **Create**.
--
-## Deploy the Firewall
-
-1. On the Azure portal menu or from the **Home** page, select **Create a resource**.
-
-2. Type **firewall** in the search box and press **Enter**.
-
-3. Select **Firewall** and then select **Create**.
-
-4. On the **Create a Firewall** page, use the following table to configure the firewall:
-
- | Setting | Value |
- | - | -- |
- | **Project details** | |
- | Subscription | Select your subscription. |
- | Resource group | Select **myResourceGroup**. |
- | **Instance details** | |
- | Name | Enter **myAzureFirewall**. |
- | Region | Select **South Central US**. |
- | Availability zone | Leave the default **None**. |
- | Choose a virtual network | Select **Use Existing**. |
- | Virtual network | Select **myAzFwVNet**. |
- | Public IP address | Select **Add new** and in Name enter **myFirewall-ip**. |
- | Forced tunneling | Leave the default **Disabled**. |
- |||
-5. Select **Review + create**. You're taken to the **Review + create** page where Azure validates your configuration.
-
-6. When you see the **Validation passed** message, select **Create**.
-
-## Enable firewall logs
-
-In this section, you enable the logs on the firewall.
-
-1. In the Azure portal, select **All resources** in the left-hand menu.
-
-2. Select the firewall **myAzureFirewall** in the list of resources.
-
-3. Under **Monitoring** in the firewall settings, select **Diagnostic settings**
-
-4. Select **+ Add diagnostic setting** in the Diagnostic settings.
-
-5. In **Diagnostics setting**, enter or select this information:
-
- | Setting | Value |
- | - | -- |
- | Diagnostic setting name | Enter **myDiagSetting**. |
- | Category details | |
- | log | Select **AzureFirewallApplicationRule** and **AzureFirewallNetworkRule**. |
- | Destination details | Select **Send to Log Analytics**. |
- | Subscription | Select your subscription. |
- | Log Analytics workspace | Select your Log Analytics workspace. |
-
-6. Select **Save**.
-
-## Create Azure SQL database
-
-In this section, you create a private SQL Database.
-
-1. On the upper-left side of the screen in the Azure portal, select **Create a resource** > **Databases** > **SQL Database**.
-
-2. In **Create SQL Database - Basics**, enter or select this information:
-
- | Setting | Value |
- | - | -- |
- | **Project details** | |
- | Subscription | Select your subscription. |
- | Resource group | Select **myResourceGroup**. You created this resource group in the previous section.|
- | **Database details** | |
- | Database name | Enter **mydatabase**. |
- | Server | Select **Create new** and enter the following information. |
- | Server name | Enter **mydbserver**. If this name is taken, enter a unique name. |
- | Server admin sign in | Enter a name of your choosing. |
- | Password | Enter a password of your choosing. |
- | Confirm Password | Reenter password |
- | Location | Select **(US) South Central US**. |
- | Want to use SQL elastic pool | Leave the default **No**. |
- | Compute + storage | Leave the default **General Purpose Gen5, 2 vCores, 32 GB Storage**. |
- |||
-
-3. Select **Review + create**. You're taken to the **Review + create** page where Azure validates your configuration.
-
-4. When you see the **Validation passed** message, select **Create**.
-
-## Create private endpoint
-
-In this section, you create a private endpoint for the Azure SQL database in the previous section.
-
-1. In the Azure portal, select **All resources** in the left-hand menu.
-
-2. Select the Azure SQL server **mydbserver** in the list of services. If you used a different server name, choose that name.
-
-3. In the server settings, select **Private endpoint connections** under **Security**.
-
-4. Select **+ Private endpoint**.
-
-5. In **Create a private endpoint**, enter or select this information in the **Basics** tab:
-
- | Setting | Value |
- | - | -- |
- | **Project details** | |
- | Subscription | Select your subscription. |
- | Resource group | Select **myResourceGroup**. |
- | **Instance details** | |
- | Name | Enter **SQLPrivateEndpoint**. |
- | Region | Select **(US) South Central US.** |
-
-6. Select the **Resource** tab or select **Next: Resource** at the bottom of the page.
-
-7. In the **Resource** tab, enter or select this information:
-
- | Setting | Value |
- | - | -- |
- | Connection method | Select **Connect to an Azure resource in my directory**. |
- | Subscription | Select your subscription. |
- | Resource type | Select **Microsoft.Sql/servers**. |
- | Resource | Select **mydbserver** or the name of the server you created in the previous step. |
- | Target subresource | Select **sqlServer**. |
-
-8. Select the **Configuration** tab or select **Next: Configuration** at the bottom of the page.
-
-9. In the **Configuration** tab, enter or select this information:
-
- | Setting | Value |
- | - | -- |
- | **Networking** | |
- | Virtual network | Select **myPEVnet**. |
- | Subnet | Select **PrivateEndpointSubnet**. |
- | **Private DNS integration** | |
- | Integrate with private DNS zone | Select **Yes**. |
- | Subscription | Select your subscription. |
- | Private DNS zones | Leave the default **privatelink.database.windows.net**. |
-
-10. Select the **Review + create** tab or select **Review + create** at the bottom of the page.
-
-11. Select **Create**.
-
-12. After the endpoint is created, select **Firewalls and virtual networks** under **Security**.
-
-13. In **Firewalls and virtual networks**, select **Yes** next to **Allow Azure services and resources to access this server**.
-
-14. Select **Save**.
-
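A rough Azure CLI equivalent of the private endpoint creation above, assuming the tutorial's resource names (the connection name is arbitrary and chosen here for illustration):

```bash
# Resource ID of the SQL server the endpoint connects to.
SQL_ID=$(az sql server show --name mydbserver --resource-group myResourceGroup --query id -o tsv)

# SQLConnection is an illustrative connection name, not from the portal steps.
az network private-endpoint create \
  --name SQLPrivateEndpoint \
  --resource-group myResourceGroup \
  --vnet-name myPEVnet \
  --subnet PrivateEndpointSubnet \
  --private-connection-resource-id "$SQL_ID" \
  --group-id sqlServer \
  --connection-name SQLConnection
```
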
-## Connect the virtual networks using virtual network peering
-
-In this section, we connect virtual networks **myVMVNet** and **myPEVNet** to **myAzFwVNet** using peering. There isn't direct connectivity between **myVMVNet** and **myPEVNet**.
-
-1. In the portal's search bar, enter **myAzFwVNet**.
-
-2. Select **Peerings** under **Settings** menu and select **+ Add**.
-
-3. In **Add Peering** enter or select the following information:
-
- | Setting | Value |
- | - | -- |
- | Name of the peering from myAzFwVNet to remote virtual network | Enter **myAzFwVNet-to-myVMVNet**. |
- | **Peer details** | |
- | Virtual network deployment model | Leave the default **Resource Manager**. |
- | I know my resource ID | Leave unchecked. |
- | Subscription | Select your subscription. |
- | Virtual network | Select **myVMVNet**. |
- | Name of the peering from remote virtual network to myAzFwVNet | Enter **myVMVNet-to-myAzFwVNet**. |
- | **Configuration** | |
- | **Configure virtual network access settings** | |
- | Allow virtual network access from myAzFwVNet to remote virtual network | Leave the default **Enabled**. |
- | Allow virtual network access from remote virtual network to myAzFwVNet | Leave the default **Enabled**. |
- | **Configure forwarded traffic settings** | |
- | Allow forwarded traffic from remote virtual network to myAzFwVNet | Select **Enabled**. |
- | Allow forwarded traffic from myAzFwVNet to remote virtual network | Select **Enabled**. |
- | **Configure gateway transit settings** | |
- | Allow gateway transit | Leave unchecked. |
-
-4. Select **OK**.
-
-5. Select **+ Add**.
-
-6. In **Add Peering** enter or select the following information:
-
- | Setting | Value |
- | - | -- |
- | Name of the peering from myAzFwVNet to remote virtual network | Enter **myAzFwVNet-to-myPEVNet**. |
- | **Peer details** | |
- | Virtual network deployment model | Leave the default **Resource Manager**. |
- | I know my resource ID | Leave unchecked. |
- | Subscription | Select your subscription. |
- | Virtual network | Select **myPEVNet**. |
- | Name of the peering from remote virtual network to myAzFwVNet | Enter **myPEVNet-to-myAzFwVNet**. |
- | **Configuration** | |
- | **Configure virtual network access settings** | |
- | Allow virtual network access from myAzFwVNet to remote virtual network | Leave the default **Enabled**. |
- | Allow virtual network access from remote virtual network to myAzFwVNet | Leave the default **Enabled**. |
- | **Configure forwarded traffic settings** | |
- | Allow forwarded traffic from remote virtual network to myAzFwVNet | Select **Enabled**. |
- | Allow forwarded traffic from myAzFwVNet to remote virtual network | Select **Enabled**. |
- | **Configure gateway transit settings** | |
- | Allow gateway transit | Leave unchecked. |
-
-7. Select **OK**.
-
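Each peering is a pair of links, one in each direction. A minimal Azure CLI sketch for the **myAzFwVNet**/**myVMVNet** pair follows; repeat it with **myPEVnet** for the second pair. It assumes all virtual networks are in **myResourceGroup**:

```bash
# Link from the firewall network to the VM network, with forwarded traffic allowed.
az network vnet peering create \
  --name myAzFwVNet-to-myVMVNet \
  --resource-group myResourceGroup \
  --vnet-name myAzFwVNet \
  --remote-vnet myVMVNet \
  --allow-vnet-access \
  --allow-forwarded-traffic

# Reverse link from the VM network back to the firewall network.
az network vnet peering create \
  --name myVMVNet-to-myAzFwVNet \
  --resource-group myResourceGroup \
  --vnet-name myVMVNet \
  --remote-vnet myAzFwVNet \
  --allow-vnet-access \
  --allow-forwarded-traffic
```
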
-## Link the virtual networks to the private DNS zone
-
-In this section, we link virtual networks **myVMVNet** and **myAzFwVNet** to the **privatelink.database.windows.net** private DNS zone. This zone was created when we created the private endpoint.
-
-The link is required for the VM and firewall to resolve the database FQDN to its private endpoint address. Virtual network **myPEVnet** was automatically linked when the private endpoint was created.
-
->[!NOTE]
->If you don't link the VM and firewall virtual networks to the private DNS zone, both the VM and firewall will still be able to resolve the SQL Server FQDN. They will resolve to its public IP address.
-
-1. In the portal's search bar, enter **privatelink.database**.
-
-2. Select **privatelink.database.windows.net** in the search results.
-
-3. Select **Virtual network links** under **Settings**.
-
-4. Select **+ Add**.
-
-5. In **Add virtual network link** enter or select the following information:
-
- | Setting | Value |
- | - | -- |
- | Link name | Enter **Link-to-myVMVNet**. |
- | **Virtual network details** | |
- | I know the resource ID of virtual network | Leave unchecked. |
- | Subscription | Select your subscription. |
- | Virtual network | Select **myVMVNet**. |
- | **CONFIGURATION** | |
- | Enable auto registration | Leave unchecked. |
-
-6. Select **OK**.
-
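The same link can be created from the command line; a sketch assuming the zone and network names above (repeat with a second link name for **myAzFwVNet**):

```bash
# Registration is disabled because the zone only resolves the private endpoint record.
az network private-dns link vnet create \
  --resource-group myResourceGroup \
  --zone-name privatelink.database.windows.net \
  --name Link-to-myVMVNet \
  --virtual-network myVMVNet \
  --registration-enabled false
```
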
-## Configure an application rule with SQL FQDN in Azure Firewall
-
-In this section, you configure an application rule to allow communication between **myVM** and the private endpoint for SQL Server **mydbserver.database.windows.net**.
-
-This rule allows communication through the firewall that we created in the previous steps.
-
-1. In the portal's search bar, enter **myAzureFirewall**.
-
-2. Select **myAzureFirewall** in the search results.
-
-3. Select **Rules** under **Settings** in the **myAzureFirewall** overview.
-
-4. Select the **Application rule collection** tab.
-
-5. Select **+ Add application rule collection**.
-
-6. In **Add application rule collection** enter or select the following information:
-
- | Setting | Value |
- | - | -- |
- | Name | Enter **SQLPrivateEndpoint**. |
- | Priority | Enter **100**. |
- | Action | Enter **Allow**. |
- | **Rules** | |
- | **FQDN tags** | |
- | Name | Leave blank. |
- | Source type | Leave the default **IP address**. |
- | Source | Leave blank. |
- | FQDN tags | Leave the default **0 selected**. |
- | **Target FQDNs** | |
- | Name | Enter **SQLPrivateEndpoint**. |
- | Source type | Leave the default **IP address**. |
- | Source | Enter **10.1.0.0/16**. |
- | Protocol: Port | Enter **mssql:1433**. |
- | Target FQDNs | Enter **mydbserver.database.windows.net**. |
-
-7. Select **Add**.
-
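For reference, a minimal Azure CLI sketch of the same application rule (the `az network firewall` commands require the `azure-firewall` extension):

```bash
az network firewall application-rule create \
  --firewall-name myAzureFirewall \
  --resource-group myResourceGroup \
  --collection-name SQLPrivateEndpoint \
  --name SQLPrivateEndpoint \
  --priority 100 \
  --action Allow \
  --protocols mssql=1433 \
  --source-addresses 10.1.0.0/16 \
  --target-fqdns mydbserver.database.windows.net
```
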
-## Route traffic between the virtual machine and private endpoint through Azure Firewall
-
-We didn't create a virtual network peering directly between virtual networks **myVMVNet** and **myPEVNet**. The virtual machine **myVM** doesn't have a route to the private endpoint we created.
-
-In this section, we create a route table with a custom route.
-
-The route sends traffic from the **myVM** subnet to the address space of virtual network **myPEVNet**, through the Azure Firewall.
-
-1. On the Azure portal menu or from the **Home** page, select **Create a resource**.
-
-2. Type **route table** in the search box and press **Enter**.
-
-3. Select **Route table** and then select **Create**.
-
-4. On the **Create Route table** page, use the following table to configure the route table:
-
- | Setting | Value |
- | - | -- |
- | **Project details** | |
- | Subscription | Select your subscription. |
- | Resource group | Select **myResourceGroup**. |
- | **Instance details** | |
- | Region | Select **South Central US**. |
- | Name | Enter **VMsubnet-to-AzureFirewall**. |
- | Propagate gateway routes | Select **No**. |
-
-5. Select **Review + create**. You're taken to the **Review + create** page where Azure validates your configuration.
-
-6. When you see the **Validation passed** message, select **Create**.
-
-7. Once the deployment completes select **Go to resource**.
-
-8. Select **Routes** under **Settings**.
-
-9. Select **+ Add**.
-
-10. On the **Add route** page, enter or select this information:
-
- | Setting | Value |
- | - | -- |
- | Route name | Enter **myVMsubnet-to-privateendpoint**. |
- | Address prefix | Enter **10.2.0.0/16**. |
- | Next hop type | Select **Virtual appliance**. |
- | Next hop address | Enter **10.0.0.4**. |
-
-11. Select **OK**.
-
-12. Select **Subnets** under **Settings**.
-
-13. Select **+ Associate**.
-
-14. On the **Associate subnet** page, enter or select this information:
-
- | Setting | Value |
- | - | -- |
- | Virtual network | Select **myVMVNet**. |
- | Subnet | Select **VMSubnet**. |
-
-15. Select **OK**.
-
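The route table, route, and subnet association can also be scripted. A sketch using the same names and addresses as the steps above:

```bash
# Route table with gateway route propagation disabled, matching the portal setting.
az network route-table create \
  --name VMsubnet-to-AzureFirewall \
  --resource-group myResourceGroup \
  --location southcentralus \
  --disable-bgp-route-propagation true

# Send traffic destined for myPEVnet (10.2.0.0/16) to the firewall's private IP.
az network route-table route create \
  --route-table-name VMsubnet-to-AzureFirewall \
  --resource-group myResourceGroup \
  --name myVMsubnet-to-privateendpoint \
  --address-prefix 10.2.0.0/16 \
  --next-hop-type VirtualAppliance \
  --next-hop-ip-address 10.0.0.4

# Associate the route table with the VM subnet.
az network vnet subnet update \
  --vnet-name myVMVNet \
  --name VMSubnet \
  --resource-group myResourceGroup \
  --route-table VMsubnet-to-AzureFirewall
```
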
-## Connect to the virtual machine from your client computer
-
-Connect to the VM **myVM** from the internet as follows:
-
-1. In the portal's search bar, enter **myVm-ip**.
-
-2. Select **myVM-ip** in the search results.
-
-3. Copy or write down the value under **IP address**.
-
-4. If you're using Windows 10, run the following command using PowerShell. For other Windows client versions, use an SSH client like [PuTTY](https://www.putty.org/):
-
-* Replace **username** with the admin username you entered during VM creation.
-
-* Replace **IPaddress** with the IP address from the previous step.
-
- ```bash
- ssh username@IPaddress
- ```
-
-5. Enter the password you defined when creating **myVM**.
-
-## Access SQL Server privately from the virtual machine
-
-In this section, you connect privately to the SQL Database using the private endpoint.
-
-1. Enter `nslookup mydbserver.database.windows.net`.
-
- You receive a message similar to the following output:
-
- ```output
- Server: 127.0.0.53
- Address: 127.0.0.53#53
-
- Non-authoritative answer:
- mydbserver.database.windows.net canonical name = mydbserver.privatelink.database.windows.net.
- Name: mydbserver.privatelink.database.windows.net
- Address: 10.2.0.4
- ```
-
-2. Install [SQL Server command-line tools](/sql/linux/quickstart-install-connect-ubuntu#tools).
-
-3. Run the following command to connect to the SQL Server. Use the server admin and password you defined when you created the SQL Server in the previous steps.
-
-* Replace **\<ServerAdmin>** with the admin username you entered during the SQL server creation.
-
-* Replace **\<YourPassword>** with the admin password you entered during SQL server creation.
-
- ```bash
- sqlcmd -S mydbserver.database.windows.net -U '<ServerAdmin>' -P '<YourPassword>'
- ```
-
-4. A SQL command prompt is displayed on successful sign-in. Enter **exit** to exit the **sqlcmd** tool.
-
-5. Close the connection to **myVM** by entering **exit**.
-
-## Validate the traffic in Azure Firewall logs
-
-1. In the Azure portal, select **All Resources** and select your Log Analytics workspace.
-
-2. Select **Logs** under **General** in the Log Analytics workspace page.
-
-3. Select the blue **Get Started** button.
-
-4. In the **Example queries** window, select **Firewalls** under **All Queries**.
-
-5. Select the **Run** button under **Application rule log data**.
-
-6. In the log query output, verify **mydbserver.database.windows.net** is listed under **FQDN** and **SQLPrivateEndpoint** is listed under **RuleCollection**.
-
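The same check can be run from the command line with a sketch like the following; it assumes your workspace GUID (the workspace ID, not its resource ID) and the legacy `AzureDiagnostics` table that these diagnostic categories write to:

```bash
# <workspace-guid> is the Log Analytics workspace (customer) ID.
az monitor log-analytics query \
  --workspace <workspace-guid> \
  --analytics-query "AzureDiagnostics | where Category == 'AzureFirewallApplicationRule' | project TimeGenerated, msg_s | take 20" \
  --output table
```
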
-## Clean up resources
-
-When you're done using the resources, delete the resource group and all of the resources it contains:
-
-1. Enter **myResourceGroup** in the **Search** box at the top of the portal and select **myResourceGroup** from the search results.
-
-1. Select **Delete resource group**.
-
-1. Enter **myResourceGroup** for **TYPE THE RESOURCE GROUP NAME** and select **Delete**.
- ## Next steps
-In this article, you explored different scenarios that you can use to restrict traffic between a virtual machine and a private endpoint using Azure Firewall.
+In this article, you explored different scenarios that you can use to restrict traffic between a virtual machine and a private endpoint using Azure Firewall.
-You connected to the VM and securely communicated to the database through Azure Firewall using private link.
+For a tutorial on how to configure Azure Firewall to inspect traffic destined to a private endpoint, see [Tutorial: Inspect private endpoint traffic with Azure Firewall](tutorial-inspect-traffic-azure-firewall.md).
To learn more about private endpoint, see [What is Azure Private Endpoint?](private-endpoint-overview.md).
private-link Private Endpoint Dns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/private-endpoint-dns.md
You can use the following options to configure your DNS settings for private end
Azure creates a canonical name DNS record (CNAME) on the public DNS. The CNAME record redirects the resolution to the private domain name. You can override the resolution with the private IP address of your private endpoints.
-Your applications don't need to change the connection URL. When resolving to a public DNS service, the DNS server will resolve to your private endpoints. The process doesn't affect your existing applications.
+Your applications don't need to change the connection URL. When resolving to a public DNS service, the DNS server will resolve to your private endpoints. The process doesn't affect your existing applications. However, the share will need to be remounted if it's currently mounted using the public endpoint.
> [!IMPORTANT]
> * Private networks already using the private DNS zone for a given type can only connect to public resources if they don't have any private endpoint connections. Otherwise, a corresponding DNS configuration is required on the private DNS zone to complete the DNS resolution sequence.
private-link Private Endpoint Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/private-endpoint-overview.md
A private-link resource is the destination target of a specified private endpoin
| Azure Data Factory | Microsoft.DataFactory/factories | dataFactory | | Azure Data Explorer | Microsoft.Kusto/clusters | cluster | | Azure Database for MariaDB | Microsoft.DBforMariaDB/servers | mariadbServer |
-| Azure Database for MySQL | Microsoft.DBforMySQL/servers | mysqlServer |
+| Azure Database for MySQL - Single Server | Microsoft.DBforMySQL/servers | mysqlServer |
+| Azure Database for MySQL - Flexible Server | Microsoft.DBforMySQL/flexibleServers | mysqlServer |
| Azure Database for PostgreSQL - Single server | Microsoft.DBforPostgreSQL/servers | postgresqlServer | | Azure Device Provisioning Service | Microsoft.Devices/provisioningServices | iotDps | | Azure IoT Hub | Microsoft.Devices/IotHubs | iotHub |
For complete, detailed information about recommendations to configure DNS for pr
## Limitations
-The following information lists the known limitations to the use of private endpoints:
+The following information lists the known limitations to the use of private endpoints:
+
+### Static IP address
+
+| Limitation | Description |
+| | |
+| Static IP address configuration currently unsupported. | **Azure Kubernetes Service (AKS)** </br> **Azure Application Gateway** </br> **HDInsight**. |
### Network security group
private-link Tutorial Dns On Premises Private Resolver https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/tutorial-dns-on-premises-private-resolver.md
Previously updated : 12/01/2022 Last updated : 08/29/2023
In this tutorial, you learn how to:
- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-## Create virtual networks
+## <a name="create-a-virtual-network"></a> Sign in to Azure
-A virtual network for the Azure Web App and simulated on-premises network is used for the resources in the tutorial. You'll create two virtual networks and peer them to simulate an Express Route or VPN connection between on-premises and Azure.
+Sign in to the [Azure portal](https://portal.azure.com) with your Azure account.
-### Create cloud virtual network
+## Overview
-1. Sign in to the [Azure portal](https://portal.azure.com).
+A virtual network for the Azure Web App and simulated on-premises network is used for the resources in the tutorial. You create two virtual networks and peer them to simulate an Express Route or VPN connection between on-premises and Azure. An Azure Bastion host is deployed in the simulated on-premises network to connect to the test virtual machine. The test virtual machine is used to test the private endpoint connection to the web app and DNS resolution.
-2. In the search box at the top of the portal, enter **Virtual network**. Select **Virtual networks** in the search results.
+The following resources are used in this tutorial to simulate an on-premises and cloud network infrastructure:
-3. Select **+ Create**.
+| Resource | Name | Description |
+|-|-|-|
+| Simulated on-premises virtual network | **vnet-1** | The virtual network that simulates an on-premises network. |
+| Cloud virtual network | **vnet-2** | The virtual network where the Azure Web App is deployed. |
+| Bastion host | **bastion** | Bastion host used to connect to the virtual machine in the simulated on-premises network. |
+| Test virtual machine | **vm-1** | Virtual machine used to test the private endpoint connection to the web app and DNS resolution. |
+| Virtual network peer | **vnet-1-to-vnet-2** | Virtual network peer between the simulated on-premises network and cloud virtual network. |
+| Virtual network peer | **vnet-2-to-vnet-1** | Virtual network peer between the cloud virtual network and simulated on-premises network. |
-4. Enter or select the following information in the **Basics** tab of **Create Virtual network**:
- | Setting | Value |
- | - | -- |
- | **Project details** | |
- | Subscription | Select your subscription. |
- | Resource group | Select **Create new**. </br> In **Name**, enter **TutorPEonPremDNS-rg** </br> Select **OK**. |
- | **Instance details** | |
- | Name | Enter **myVNet-cloud**. |
- | Region | Select **West US 2**. |
-
-5. Select **Next: IP Addresses** or the **IP Addresses tab**.
-
-6. In **IPv4 address space**, select the existing address space. Enter **10.1.0.0/16**.
-
-7. Select **+ Add subnet**.
-
-8. Enter or select the following information in **+ Add subnet**:
-
- | Setting | Value |
- | - | -- |
- | Subnet name | Enter **mySubnet-cloud**. |
- | Subnet address range | Enter **10.1.0.0/24**. |
-
-9. Select **Add**.
-
-10. Select **Review + create**.
-
-11. Select **Create**.
-
-### Create simulated on-premises network
-
-1. In the search box at the top of the portal, enter **Virtual network**. Select **Virtual networks** in the search results.
-
-2. Select **+ Create**.
+It takes a few minutes for the Bastion host deployment to complete. The Bastion host is used later in the tutorial to connect to the "on-premises" virtual machine to test the private endpoint. You can proceed to the next steps when the virtual network is created.
-3. Enter or select the following information in the **Basics** tab of **Create Virtual network**:
+## Create cloud virtual network
- | Setting | Value |
- | - | -- |
- | **Project details** | |
- | Subscription | Select your subscription. |
- | Resource group | Select **TutorPEonPremDNS-rg**. |
- | **Instance details** | |
- | Name | Enter **myVNet-onprem**. |
- | Region | Select **West US 2**. |
-
-4. Select **Next: IP Addresses** or the **IP Addresses tab**.
-
-5. In **IPv4 address space**, select the existing address space. Enter **10.2.0.0/16**.
-
-6. Select **+ Add subnet**.
-
-7. Enter or select the following information in **+ Add subnet**:
-
- | Setting | Value |
- | - | -- |
- | Subnet name | Enter **mySubnet-onprem**. |
- | Subnet address range | Enter **10.2.0.0/24**. |
-
-8. Select **Add**.
-
-9. Select **Next: Security** or the **Security** tab.
-
-10. Select **Enable** next to **BastionHost**.
-
-11. Enter or select the following information for **BastionHost**:
-
- | Setting | Value |
- | - | -- |
- | Bastion name | Enter **myBastion**. |
- | AzureBastionSubnet address space | Enter **10.2.1.0/26**. |
- | Public IP address | Select **Create new**. </br> Enter **myPublicIP-Bastion** in **Name**. </br> Select **OK**. |
+Repeat the previous steps to create a cloud virtual network for the Azure Web App private endpoint. Use the following values for the cloud virtual network:
-12. Select **Review + create**.
+>[!NOTE]
+> The Azure Bastion deployment section can be skipped for the cloud virtual network. The Bastion host is only required for the simulated on-premises network.
-13. Select **Create**.
+| Setting | Value |
+| - | -- |
+| Name | **vnet-2** |
+| Location | **East US 2** |
+| Address space | **10.1.0.0/16** |
+| Subnet name | **subnet-1** |
+| Subnet address range | **10.1.0.0/24** |
-It will take a few minutes for the Bastion host deployment to complete. The Bastion host is used later in the tutorial to connect to the "on-premises" virtual machine to test the private endpoint. You can proceed to the next steps when the virtual network is created.
-### Peer virtual networks
-You'll peer the virtual networks together to simulate an on-premises network. In a production environment, a site to site VPN or Express Route connection is present between the on-premises network and the Azure Virtual Network.
-
-1. In the search box at the top of the portal, enter **Virtual network**. Select **Virtual networks** in the search results.
-
-2. Select **myVNet-cloud**.
-
-3. In **Settings**, select **Peerings**.
-
-4. Select **+ Add**.
-
-5. In **Add peering**, enter or select the following information:
-
- | Setting | Value |
- | - | -- |
- | **This virtual network** | |
- | Peering link name | Enter **myPeer-onprem**. |
- | Traffic to remote virtual network | Leave the default of **Allow (default)**. |
- | Traffic forwarded from remote virtual network | Leave the default of **Allow (default)**. |
- | Virtual network gateway or Route Server | Leave the default of **None (default)**. |
- | **Remote virtual network** | |
- | Peering link name | Enter **myPeer-cloud**. |
- | Virtual network deployment model | Leave the default of **Resource manager**. |
- | Subscription | Select your subscription. |
- | Virtual Network | Select **myVNet-onprem**. |
- | Traffic to remote virtual network | Leave the default of **Allow (default)**. |
- | Traffic forwarded from remote virtual network | Leave the default of **Allow (default)**. |
- | Virtual network gateway or Route Server | Leave the default of **None (default)**. |
-
-6. Select **Add**.
-
-## Create web app
-
-You'll create an Azure web app for the cloud resource accessed by the on-premises workload.
-
-1. In the search box at the top of the portal, enter **App Service**. Select **App Services** in the search results.
-
-2. Select **+ Create**.
-
-3. Enter or select the following information in the **Basics** tab of **Create Web App**.
-
- | Setting | Value |
- | - | -- |
- | **Project Details** | |
- | Subscription | Select your subscription. |
- | Resource Group | Select **TutorPEonPremDNS-rg**. |
- | **Instance Details** | |
- | Name | Enter a unique name for the web app. The name **mywebapp8675** is used for the examples in this tutorial. |
- | Publish | Select **Code**. |
- | Runtime stack | Select **.NET 6 (LTS)**. |
- | Operating System | Select **Windows**. |
- | Region | Select **West US 2**. |
- | **Pricing plans** | |
- | Windows Plan (West US 2) | Leave the default name. |
- | Pricing plan | Select **Change size**. |
-
-4. In **Spec Picker**, select **Production** for the workload.
-
-5. In **Recommended pricing tiers**, select **P1V2**.
-
-6. Select **Apply**.
-
-7. Select **Next: Deployment**.
-
-8. Select **Next: Networking**.
-
-9. Change 'Enable public access' to false.
-
-10. Select **Review + create**.
-
-11. Select **Create**.
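
A rough Azure CLI sketch of the plan and app creation; the plan name is a placeholder invented for illustration, and runtime identifiers vary by CLI version (list them with `az webapp list-runtimes`):

```bash
# myAppServicePlan is a placeholder name, not from the portal steps.
az appservice plan create \
  --name myAppServicePlan \
  --resource-group TutorPEonPremDNS-rg \
  --location westus2 \
  --sku P1V2

az webapp create \
  --name mywebapp8675 \
  --resource-group TutorPEonPremDNS-rg \
  --plan myAppServicePlan \
  --runtime "dotnet:6"
```
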
## Create private endpoint

An Azure private endpoint creates a network interface for a supported Azure service in your virtual network. The private endpoint enables the Azure service to be accessed from a private connection in your Azure Virtual Network or on-premises network.
-You'll create a private endpoint for the web app you created previously.
+You create a private endpoint for the web app you created previously.
1. In the search box at the top of the portal, enter **Private endpoint**. Select **Private endpoints** in the search results.
You'll create a private endpoint for the web app you created previously.
| - | -- |
| **Project details** | |
| Subscription | Select your subscription. |
- | Resource group | Select **TutorPEonPremDNS-rg**. |
+ | Resource group | Select **test-rg**. |
| **Instance details** | |
- | Name | Enter **myPrivateEndpoint-webapp**. |
+ | Name | Enter **private-endpoint**. |
| Network Interface Name | Leave the default name. |
- | Region | Select **West US 2**. |
+ | Region | Select **East US 2**. |
4. Select **Next: Resource**.
You'll create a private endpoint for the web app you created previously.
| Connection method | Select **Connect to an Azure resource in my directory**. |
| Subscription | Select your subscription. |
| Resource type | Select **Microsoft.Web/sites**. |
- | Resource | Select your webapp. The name **mywebapp8675** is used for the examples in this tutorial. |
+ | Resource | Select your webapp. The name **webapp8675** is used for the examples in this tutorial. |
| Target subresource | Select **sites**. |

6. Select **Next: Virtual Network**.
You'll create a private endpoint for the web app you created previously.
| Setting | Value |
| - | -- |
| **Networking** | |
- | Virtual network | Select **myVNet-cloud (TutorPEonPremDNS-rg)**. |
- | Subnet | Select **mySubnet-cloud**. |
+ | Virtual network | Select **vnet-2 (test-rg)**. |
+ | Subnet | Select **subnet-1**. |
| Network policy for private endpoints | Leave the default of **Disabled**. |
| **Private IP configuration** | Select **Statically allocate IP address**. |
- | **Name** | Enter **myIPconfig**. |
+ | **Name** | Enter **ipconfig-1**. |
| **Private IP** | Enter **10.1.0.10**. |

8. Select **Next: DNS**.
You'll create a private endpoint for the web app you created previously.
## Create a private resolver
-You'll create a private resolver in the virtual network where the private endpoint resides. The resolver will receive DNS requests from the simulated on-premises workload. Those requests are forwarded to the Azure provided DNS. The Azure provided DNS will resolve the Azure Private DNS zone for the private endpoint and return the IP address to the on-premises workload.
+You create a private resolver in the virtual network where the private endpoint resides. The resolver receives DNS requests from the simulated on-premises workload. Those requests are forwarded to the Azure-provided DNS. The Azure-provided DNS resolves the Azure Private DNS zone for the private endpoint and returns the IP address to the on-premises workload.
1. In the search box at the top of the portal, enter **DNS private resolver**. Select **DNS private resolvers** in the search results.
You'll create a private resolver in the virtual network where the private endpoi
| - | -- |
| **Project details** | |
| Subscription | Select your subscription. |
- | Resource group | Select **TutorPEonPremDNS-rg** |
+ | Resource group | Select **test-rg** |
| **Instance details** | |
- | Name | Enter **myPrivateResolver**. |
- | Region | Select **(US) West US 2**. |
+ | Name | Enter **private-resolver**. |
+ | Region | Select **(US) East US 2**. |
| **Virtual Network** | |
- | Virtual Network | Select **myVNet-cloud**. |
+ | Virtual Network | Select **vnet-2**. |
4. Select **Next: Inbound Endpoints**.
You'll create a private resolver in the virtual network where the private endpoi
| Setting | Value |
| - | -- |
- | Endpoint name | Enter **myInboundEndpoint**. |
- | Subnet | Select **Create new**. </br> Enter **mySubnet-resolver** in **Name**. </br> Leave the default **Subnet address range**. </br> Select **Create**. |
+ | Endpoint name | Enter **inbound-endpoint**. |
+ | Subnet | Select **Create new**. </br> Enter **subnet-resolver** in **Name**. </br> Leave the default **Subnet address range**. </br> Select **Create**. |
7. Select **Save**.
When the private resolver deployment is complete, continue to the next steps.
### Set up DNS for simulated network
-The following steps will set the private resolver as the primary DNS server for the simulated on-premises network **myVNet-onprem**.
+The following steps set the private resolver as the primary DNS server for the simulated on-premises network **vnet-1**.
-In a production environment, these steps aren't needed and are only to simulate the DNS resolution for the private endpoint. Your local DNS server will have a conditional forwarder to this IP address to resolve the private endpoint DNS records from the on-premises network.
+In a production environment, these steps aren't needed and are only to simulate the DNS resolution for the private endpoint. Your local DNS server has a conditional forwarder to this IP address to resolve the private endpoint DNS records from the on-premises network.
1. In the search box at the top of the portal, enter **DNS private resolver**. Select **DNS private resolvers** in the search results.
-2. Select **myPrivateResolver**.
+2. Select **private-resolver**.
3. Select **Inbound endpoints** in **Settings**.
-4. Make note of the **IP address** of the endpoint named **myInboundEndpoint**. In the example for this tutorial, the IP address is **10.1.1.4**.
+4. Make note of the **IP address** of the endpoint named **inbound-endpoint**. In the example for this tutorial, the IP address is **10.1.1.4**.
5. In the search box at the top of the portal, enter **Virtual network**. Select **Virtual networks** in the search results.
-6. Select **myVNet-onprem**.
+6. Select **vnet-1**.
7. Select **DNS servers** in **Settings**.
In a production environment, these steps aren't needed and are only to simulate
10. Select **Save**.
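
The same DNS assignment is a single Azure CLI call; a sketch assuming the inbound endpoint IP noted above:

```bash
# Point the simulated on-premises network at the resolver's inbound endpoint.
az network vnet update \
  --name vnet-1 \
  --resource-group test-rg \
  --dns-servers 10.1.1.4
```
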
-## Create a virtual machine
-
-You'll create a virtual machine that will be used to test the private endpoint from the simulated on-premises network.
-
-1. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines** in the search results.
-
-2. Select **+ Create** > **Azure virtual machine**.
-
-3. In **Create a virtual machine**, enter or select the following information in the **Basics** tab.
-
- | Setting | Value |
- |--|-|
- | **Project Details** | |
- | Subscription | Select your Azure subscription. |
- | Resource Group | Select **TutorPEonPremDNS-rg**. |
- | **Instance details** | |
- | Virtual machine name | Enter **myVM-onprem**. |
- | Region | Select **(US) West US 2**. |
- | Availability Options | Select **No infrastructure redundancy required**. |
- | Security type | Select **Standard**. |
- | Image | Select **Windows Server 2022 Datacenter: Azure Edition - Gen2**. |
- | Size | Choose VM size or leave the default setting. |
- | **Administrator account** | |
- | Username | Enter a username. |
- | Password | Enter a password. |
- | Confirm password | Reenter password. |
- | **Inbound port rules** | |
- | Public inbound ports | Select **None**. |
-
-4. Select the **Networking** tab, or select **Next: Disks**, then **Next: Networking**.
-
-5. In the **Networking** tab, enter or select the following information:
-
- | Setting | Value |
- |-|-|
- | **Network interface** | |
- | Virtual network | Select **myVNet-onprem**. |
- | Subnet | Select **mySubnet-onprem (10.2.0.0/24)**. |
- | Public IP | Select **None**. |
- | NIC network security group | Select **Basic**. |
- | Public inbound ports | Select **None**. |
-
-6. Select **Review + create**.
-
-7. Review the settings, and then select **Create**.
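
A rough Azure CLI sketch of the same test VM; the image alias is an assumption (verify with `az vm image list`), and the credentials are placeholders:

```bash
# Win2022AzureEditionCore is an assumed image alias for Windows Server 2022 Azure Edition.
az vm create \
  --name myVM-onprem \
  --resource-group TutorPEonPremDNS-rg \
  --location westus2 \
  --image Win2022AzureEditionCore \
  --admin-username '<username>' \
  --admin-password '<password>' \
  --vnet-name myVNet-onprem \
  --subnet mySubnet-onprem \
  --public-ip-address "" \
  --nsg-rule NONE
```
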
- ## Test connectivity to private endpoint
-In this section, you'll use the virtual machine you created in the previous step to connect to the web app across the private endpoint.
+In this section, you use the virtual machine you created in the previous step to connect to the web app across the private endpoint.
1. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines** in the search results.
-2. Select **myVM-onprem**.
+2. Select **vm-1**.
-3. On the overview page for **myVM-onprem**, select **Connect** then **Bastion**.
+3. On the overview page for **vm-1**, select **Connect** then **Bastion**.
4. Enter the username and password that you entered during the virtual machine creation.
In this section, you'll use the virtual machine you created in the previous step
6. Open Windows PowerShell on the server after you connect.
-7. Enter `nslookup <webapp-name>.azurewebsites.net`. Replace **\<webapp-name>** with the name of the web app you created in the previous steps. You'll receive a message similar to what is displayed below:
+7. Enter `nslookup <webapp-name>.azurewebsites.net`. Replace **\<webapp-name>** with the name of the web app you created in the previous steps. You receive a message similar to the following output:
- ```powershell
+ ```output
Server: UnKnown
Address: 168.63.129.16

Non-authoritative answer:
- Name: mywebapp.privatelink.azurewebsites.net
+ Name: webapp.privatelink.azurewebsites.net
Address: 10.1.0.10
- Aliases: mywebapp.azurewebsites.net
+ Aliases: webapp.azurewebsites.net
```
- A private IP address of **10.1.0.10** is returned for the web app name. This address is in **mySubnet-cloud** subnet of **myVNet-cloud** virtual network you created previously.
+ A private IP address of **10.1.0.10** is returned for the web app name. This address is in the **subnet-1** subnet of the **vnet-2** virtual network that you created previously.
8. Open Microsoft Edge, and enter the URL of your web app, `https://<webapp-name>.azurewebsites.net`.
In this section, you'll use the virtual machine you created in the previous step
:::image type="content" source="./media/tutorial-dns-on-premises-private-resolver/web-app-default-page.png" alt-text="Screenshot of Microsoft Edge showing default web app page." border="true":::
-10. Close the connection to **myVM-onprem**.
+10. Close the connection to **vm-1**.
11. Open a web browser on your local computer and enter the URL of your web app, `https://<webapp-name>.azurewebsites.net`.
In this section, you'll use the virtual machine you created in the previous step
:::image type="content" source="./media/tutorial-dns-on-premises-private-resolver/web-app-ext-403.png" alt-text="Screenshot of web browser showing a blue page with Error 403 for external web app address." border="true":::
-## Clean up resources
-
-If you're not going to continue to use this application, delete
-the virtual networks, private endpoint and resolver, and virtual machine with the following steps:
-
-1. In the search box at the top of the portal, enter **Resource group**. Select **Resource groups** in the search results.
-
-2. Select **TutorPEonPremDNS-rg**.
-
-3. Select **Delete resource group**. Enter the name of the resource group in **TYPE THE RESOURCE GROUP NAME:**.
-
-4. Select **Delete**.
## Next steps
private-link Tutorial Inspect Traffic Azure Firewall https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/tutorial-inspect-traffic-azure-firewall.md
+
+ Title: 'Tutorial: Inspect private endpoint traffic with Azure Firewall'
+description: Learn how to inspect private endpoint traffic with Azure Firewall.
+++++ Last updated : 08/15/2023++
+# Tutorial: Inspect private endpoint traffic with Azure Firewall
+
+Azure Private Endpoint is the fundamental building block for Azure Private Link. Private endpoints enable Azure resources deployed in a virtual network to communicate privately with private link resources.
+
+Private endpoints allow resources access to the private link service deployed in a virtual network. Access to the private endpoint through virtual network peering and on-premises network connections extends the connectivity.
+
+You may need to inspect or block traffic from clients to the services exposed via private endpoints. Complete this inspection by using [Azure Firewall](../firewall/overview.md) or a third-party network virtual appliance.
+
+For more information and scenarios that involve private endpoints and Azure Firewall, see [Azure Firewall scenarios to inspect traffic destined to a private endpoint](inspect-traffic-with-azure-firewall.md).
+
+In this tutorial, you learn how to:
+
+> [!div class="checklist"]
+> * Create a virtual network and bastion host for the test virtual machine.
+> * Create the private endpoint virtual network.
+> * Create a test virtual machine.
+> * Deploy Azure Firewall.
+> * Create an Azure SQL database.
+> * Create a private endpoint for Azure SQL.
+> * Create a network peer between the private endpoint virtual network and the test virtual machine virtual network.
+> * Link the virtual networks to a private DNS zone.
+> * Configure application rules in Azure Firewall for Azure SQL.
+> * Route traffic between the test virtual machine and Azure SQL through Azure Firewall.
+> * Test the connection to Azure SQL and validate in Azure Firewall logs.
+
+If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+
+## Prerequisites
+
+- An Azure account with an active subscription.
+
+- A Log Analytics workspace. For more information about creating a Log Analytics workspace, see [Create a Log Analytics workspace in the Azure portal](../azure-monitor/logs/quick-create-workspace.md).
+
+## Sign in to the Azure portal
+
+Sign in to the [Azure portal](https://portal.azure.com).
++++
+## Deploy Azure Firewall
+
+1. In the search box at the top of the portal, enter **Firewall**. Select **Firewalls** in the search results.
+
+1. In **Firewalls**, select **+ Create**.
+
+1. Enter or select the following information in the **Basics** tab of **Create a firewall**:
+
+ | Setting | Value |
+ |||
+ | **Project details** | |
+ | Subscription | Select your subscription. |
+ | Resource group | Select **test-rg**. |
+ | **Instance details** | |
+ | Name | Enter **firewall**. |
+ | Region | Select **East US 2**. |
+ | Availability zone | Select **None**. |
+ | Firewall SKU | Select **Standard**. |
+ | Firewall management | Select **Use a Firewall Policy to manage this firewall**. |
+ | Firewall policy | Select **Add new**. </br> Enter **firewall-policy** in **Policy name**. </br> Select **East US 2** in region. </br> Select **OK**. |
+ | Choose a virtual network | Select **Create new**. |
+ | Virtual network name | Enter **vnet-firewall**. |
+ | Address space | Enter **10.2.0.0/16**. |
+ | Subnet address space | Enter **10.2.1.0/26**. |
+ | Public IP address | Select **Add new**. </br> Enter **public-ip-firewall** in **Name**. </br> Select **OK**. |
+
+1. Select **Review + create**.
+
+1. Select **Create**.
+
+Wait for the firewall deployment to complete before you continue.
+
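+A minimal Azure CLI sketch of the same deployment; the IP configuration name is a placeholder chosen for illustration, and the `az network firewall` commands require the `azure-firewall` extension:
+
+```bash
+az network firewall policy create --name firewall-policy --resource-group test-rg --location eastus2
+
+az network vnet create --name vnet-firewall --resource-group test-rg --location eastus2 \
+  --address-prefix 10.2.0.0/16 --subnet-name AzureFirewallSubnet --subnet-prefix 10.2.1.0/26
+
+az network public-ip create --name public-ip-firewall --resource-group test-rg --location eastus2 --sku Standard
+
+az network firewall create --name firewall --resource-group test-rg --location eastus2 \
+  --firewall-policy firewall-policy
+
+# The IP configuration attaches the firewall to AzureFirewallSubnet and the public IP.
+az network firewall ip-config create --firewall-name firewall --resource-group test-rg \
+  --name ipconfig-firewall --public-ip-address public-ip-firewall --vnet-name vnet-firewall
+```
+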
+## Enable firewall logs
+
+In this section, you enable the firewall logs and send them to the Log Analytics workspace.
+
+> [!NOTE]
+> You must have a Log Analytics workspace in your subscription before you can enable firewall logs. For more information, see [Prerequisites](#prerequisites).
+
+1. In the search box at the top of the portal, enter **Firewall**. Select **Firewalls** in the search results.
+
+1. Select **firewall**.
+
+1. In **Monitoring** select **Diagnostic settings**.
+
+1. Select **+ Add diagnostic setting**.
+
+1. In **Diagnostic setting** enter or select the following information:
+
+ | Setting | Value |
+ |||
+ | Diagnostic setting name | Enter **diagnostic-setting-firewall**. |
+ | **Logs** | |
+ | Categories | Select **Azure Firewall Application Rule (Legacy Azure Diagnostics)** and **Azure Firewall Network Rule (Legacy Azure Diagnostics)**. |
+ | **Destination details** | |
+ | Destination | Select **Send to Log Analytics workspace**. |
+ | Subscription | Select your subscription. |
+ | Log Analytics workspace | Select your log analytics workspace. |
+
+1. Select **Save**.
+
+## Create an Azure SQL database
+
+1. In the search box at the top of the portal, enter **SQL**. Select **SQL databases** in the search results.
+
+1. In **SQL databases**, select **+ Create**.
+
+1. In the **Basics** tab of **Create SQL Database**, enter or select the following information:
+
+ | Setting | Value |
+ |||
+ | **Project details** | |
+ | Subscription | Select your subscription. |
+ | Resource group | Select **test-rg**. |
+ | **Database details** | |
+ | Database name | Enter **sql-db**. |
+ | Server | Select **Create new**. </br> Enter **sql-server-1** in **Server name** (server names must be unique; replace **sql-server-1** with a unique value). </br> Select **(US) East US 2** in **Location**. </br> Select **Use SQL authentication**. </br> Enter a server admin sign-in and password. </br> Select **OK**. |
+ | Want to use SQL elastic pool? | Select **No**. |
+ | Workload environment | Leave the default of **Production**. |
+ | **Backup storage redundancy** | |
+ | Backup storage redundancy | Select **Locally redundant backup storage**. |
+
+1. Select **Next: Networking**.
+
+1. In the **Networking** tab of **Create SQL Database**, enter or select the following information:
+
+ | Setting | Value |
+ |||
+ | **Network connectivity** | |
+ | Connectivity method | Select **Private endpoint**. |
+ | **Private endpoints** | |
+ | Select **+Add private endpoint**. | |
+ | **Create private endpoint** | |
+ | Subscription | Select your subscription. |
+ | Resource group | Select **test-rg**. |
+ | Location | Select **East US 2**. |
+ | Name | Enter **private-endpoint-sql**. |
+ | Target subresource | Select **SqlServer**. |
+ | **Networking** | |
+ | Virtual network | Select **vnet-private-endpoint**. |
+ | Subnet | Select **subnet-private-endpoint**. |
+ | **Private DNS integration** | |
+ | Integrate with private DNS zone | Select **Yes**. |
+ | Private DNS zone | Leave the default of **privatelink.database.windows.net**. |
+
+1. Select **OK**.
+
+1. Select **Review + create**.
+
+1. Select **Create**.
+
+## Connect virtual networks with virtual network peering
+
+In this section, you connect the virtual networks with virtual network peering. The networks **vnet-1** and **vnet-private-endpoint** are connected to **vnet-firewall**. There isn't direct connectivity between **vnet-1** and **vnet-private-endpoint**.
+
+1. In the search box at the top of the portal, enter **Virtual networks**. Select **Virtual networks** in the search results.
+
+1. Select **vnet-firewall**.
+
+1. In **Settings** select **Peerings**.
+
+1. In **Peerings** select **+ Add**.
+
+1. In **Add peering**, enter or select the following information:
+
+ | Setting | Value |
+ |||
+ | **This virtual network** | |
+ | Peering link name | Enter **vnet-firewall-to-vnet-1**. |
+ | Traffic to remote virtual network | Select **Allow (default)**. |
+ | Traffic forwarded from remote virtual network | Select **Allow (default)**. |
+ | Virtual network gateway or Route Server | Select **None (default)**. |
+ | **Remote virtual network** | |
+ | Peering link name | Enter **vnet-1-to-vnet-firewall**. |
+ | Virtual network deployment model | Select **Resource manager**. |
+ | Subscription | Select your subscription. |
+ | Virtual network | Select **vnet-1**. |
+ | Traffic to remote virtual network | Select **Allow (default)**. |
+ | Traffic forwarded from remote virtual network | Select **Allow (default)**. |
+ | Virtual network gateway or Route Server | Select **None (default)**. |
+
+1. Select **Add**.
+
+1. In **Peerings** select **+ Add**.
+
+1. In **Add peering**, enter or select the following information:
+
+ | Setting | Value |
+ |||
+ | **This virtual network** | |
+ | Peering link name | Enter **vnet-firewall-to-vnet-private-endpoint**. |
+ | Traffic to remote virtual network | Select **Allow (default)**. |
+ | Traffic forwarded from remote virtual network | Select **Allow (default)**. |
+ | Virtual network gateway or Route Server | Select **None (default)**. |
+ | **Remote virtual network** | |
+ | Peering link name | Enter **vnet-private-endpoint-to-vnet-firewall**. |
+ | Virtual network deployment model | Select **Resource manager**. |
+ | Subscription | Select your subscription. |
+ | Virtual network | Select **vnet-private-endpoint**. |
+ | Traffic to remote virtual network | Select **Allow (default)**. |
+ | Traffic forwarded from remote virtual network | Select **Allow (default)**. |
+ | Virtual network gateway or Route Server | Select **None (default)**. |
+
+1. Select **Add**.
+
+1. Verify the **Peering status** displays **Connected** for both network peers.
+
+## Link the virtual networks to the private DNS zone
+
+The private DNS zone created during the private endpoint creation in the previous section must be linked to the **vnet-1** and **vnet-firewall** virtual networks.
+
+1. In the search box at the top of the portal, enter **Private DNS zone**. Select **Private DNS zones** in the search results.
+
+1. Select **privatelink.database.windows.net**.
+
+1. In **Settings** select **Virtual network links**.
+
+1. Select **+ Add**.
+
+1. In **Add virtual network link**, enter or select the following information:
+
+ | Setting | Value |
+ |||
+ | **Virtual network link** | |
+ | Virtual network link name | Enter **link-to-vnet-1**. |
+ | Subscription | Select your subscription. |
+ | Virtual network | Select **vnet-1 (test-rg)**. |
+ | Configuration | Leave the default of unchecked for **Enable auto registration**. |
+
+1. Select **OK**.
+
+1. Select **+ Add**.
+
+1. In **Add virtual network link**, enter or select the following information:
+
+ | Setting | Value |
+ |||
+ | **Virtual network link** | |
+ | Virtual network link name | Enter **link-to-vnet-firewall**. |
+ | Subscription | Select your subscription. |
+ | Virtual network | Select **vnet-firewall (test-rg)**. |
+ | Configuration | Leave the default of unchecked for **Enable auto registration**. |
+
+1. Select **OK**.
+
+## Create route between vnet-1 and vnet-private-endpoint
+
+A network link between **vnet-1** and **vnet-private-endpoint** doesn't exist. You must create a route to allow traffic to flow between the virtual networks through Azure Firewall.
+
+The route sends traffic from **vnet-1** to the address space of virtual network **vnet-private-endpoint**, through the Azure Firewall.
+
+1. In the search box at the top of the portal, enter **Route tables**. Select **Route tables** in the search results.
+
+1. Select **+ Create**.
+
+1. In the **Basics** tab of **Create Route table**, enter or select the following information:
+
+ | Setting | Value |
+ |||
+ | **Project details** | |
+ | Subscription | Select your subscription. |
+ | Resource group | Select **test-rg**. |
+ | **Instance details** | |
+ | Region | Select **East US 2**. |
+ | Name | Enter **vnet-1-to-vnet-firewall**. |
+ | Propagate gateway routes | Leave the default of **Yes**. |
+
+1. Select **Review + create**.
+
+1. Select **Create**.
+
+1. In the search box at the top of the portal, enter **Route tables**. Select **Route tables** in the search results.
+
+1. Select **vnet-1-to-vnet-firewall**.
+
+1. In **Settings** select **Routes**.
+
+1. Select **+ Add**.
+
+1. In **Add route**, enter or select the following information:
+
+ | Setting | Value |
+ |||
+ | Route name | Enter **subnet-1-to-subnet-private-endpoint**. |
+ | Destination type | Select **IP Addresses**. |
+ | Destination IP addresses/CIDR ranges | Enter **10.1.0.0/16**. |
+ | Next hop type | Select **Virtual appliance**. |
+ | Next hop address | Enter **10.2.1.4**. |
+
+1. Select **Add**.
+
+1. In **Settings**, select **Subnets**.
+
+1. Select **+ Associate**.
+
+1. In **Associate subnet**, enter or select the following information:
+
+ | Setting | Value |
+ |||
+ | Virtual network | Select **vnet-1 (test-rg)**. |
+ | Subnet | Select **subnet-1**. |
+
+1. Select **OK**.
+
+## Configure an application rule in Azure Firewall
+
+Create an application rule to allow communication from **vnet-1** to the private endpoint of the Azure SQL server **sql-server-1.database.windows.net**. Replace **sql-server-1** with the name of your Azure SQL server.
+
+1. In the search box at the top of the portal, enter **Firewall**. Select **Firewall Policies** in the search results.
+
+1. In **Firewall Policies**, select **firewall-policy**.
+
+1. In **Settings** select **Application rules**.
+
+1. Select **+ Add a rule collection**.
+
+1. In **Add a rule collection**, enter or select the following information:
+
+ | Setting | Value |
+ |||
+ | Name | Enter **rule-collection-sql**. |
+ | Rule collection type | Leave the selection of **Application**. |
+ | Priority | Enter **100**. |
+ | Rule collection action | Select **Allow**. |
+ | Rule collection group | Leave the default of **DefaultApplicationRuleCollectionGroup**. |
+ | **Rules** | |
+ | **Rule 1** | |
+ | Name | Enter **SQLPrivateEndpoint**. |
+ | Source type | Select **IP Address**. |
+ | Source | Enter **10.0.0.0/16**. |
+ | Protocol | Enter **mssql:1433**. |
+ | Destination type | Select **FQDN**. |
+ | Destination | Enter **sql-server-1.database.windows.net**. |
+
+1. Select **Add**.
+
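+The same rule can be added to the firewall policy from the command line. A sketch, with **sql-server-1** standing in for your server name:
+
+```bash
+# If the default collection group doesn't exist yet, create it first:
+# az network firewall policy rule-collection-group create --name DefaultApplicationRuleCollectionGroup \
+#   --policy-name firewall-policy --resource-group test-rg --priority 300
+
+az network firewall policy rule-collection-group collection add-filter-collection \
+  --resource-group test-rg \
+  --policy-name firewall-policy \
+  --rule-collection-group-name DefaultApplicationRuleCollectionGroup \
+  --name rule-collection-sql \
+  --collection-priority 100 \
+  --action Allow \
+  --rule-name SQLPrivateEndpoint \
+  --rule-type ApplicationRule \
+  --protocols mssql=1433 \
+  --source-addresses 10.0.0.0/16 \
+  --target-fqdns sql-server-1.database.windows.net
+```
+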
+## Test connection to Azure SQL from virtual machine
+
+1. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines** in the search results.
+
+1. Select **vm-1**.
+
+1. In **Operations** select **Bastion**.
+
+1. Enter the username and password for the virtual machine.
+
+1. Select **Connect**.
+
+1. To verify name resolution of the private endpoint, enter the following command in the terminal window:
+
+ ```bash
+ nslookup sql-server-1.database.windows.net
+ ```
+
+ You receive a message similar to the following example. The IP address returned is the private IP address of the private endpoint.
+
+ ```output
+ Server: 127.0.0.53
+ Address: 127.0.0.53#53
+
+ Non-authoritative answer:
+ sql-server-8675.database.windows.net canonical name = sql-server-8675.privatelink.database.windows.net.
+ Name: sql-server-8675.privatelink.database.windows.net
+ Address: 10.1.0.4
+ ```
+
+1. Install the SQL Server command-line tools from [Install the SQL Server command-line tools sqlcmd and bcp on Linux](/sql/linux/sql-server-linux-setup-tools). Proceed with the next steps after the installation is complete.
+
+1. Use the following commands to connect to the SQL server you created in the previous steps.
+
+ * Replace **\<server-admin>** with the admin username you entered during the SQL server creation.
+
+ * Replace **\<admin-password>** with the admin password you entered during SQL server creation.
+
+ * Replace **sql-server-1** with the name of your SQL server.
+
+ ```bash
+ sqlcmd -S sql-server-1.database.windows.net -U '<server-admin>' -P '<admin-password>'
+ ```
+
+1. A SQL command prompt is displayed on successful sign in. Enter **exit** to exit the **sqlcmd** tool.
+
+## Validate traffic in the Azure Firewall logs
+
+1. In the search box at the top of the portal, enter **Log Analytics**. Select **Log Analytics** in the search results.
+
+1. Select your log analytics workspace. In this example, the workspace is named **log-analytics-workspace**.
+
+1. In **General**, select **Logs**.
+
+1. In the example **Queries** search box, enter **Application rule**. In the results under **Network**, select **Run** for **Application rule log data**.
+
+1. In the log query output, verify **sql-server-1.database.windows.net** is listed under **FQDN** and **SQLPrivateEndpoint** is listed under **Rule**.
++
+## Next steps
+
+Advance to the next article to learn how to use a private endpoint with Azure Private Resolver:
+> [!div class="nextstepaction"]
+> [Create a private endpoint DNS infrastructure with Azure Private Resolver for an on-premises workload](tutorial-dns-on-premises-private-resolver.md)
private-link Tutorial Private Endpoint Cosmosdb Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/tutorial-private-endpoint-cosmosdb-portal.md
- Title: 'Tutorial: Connect to an Azure Cosmos DB account using an Azure Private endpoint'-
-description: Get started with this tutorial using Azure Private endpoint to connect to an Azure Cosmos DB account privately.
---- Previously updated : 06/22/2022---
-# Tutorial: Connect to an Azure Cosmos DB account using an Azure Private Endpoint
-
-Azure Private endpoint is the fundamental building block for Private Link in Azure. It enables Azure resources, like virtual machines (VMs), to privately and securely communicate with Private Link resources such as Azure Cosmos DB.
-
-In this tutorial, you learn how to:
-
-> [!div class="checklist"]
-> * Create a virtual network and bastion host.
-> * Create a virtual machine.
-> * Create an Azure Cosmos DB account with a private endpoint.
-> * Test connectivity to the private endpoint.
-
-If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
-
-## Prerequisites
-
-* An Azure subscription
-
-## Sign in to Azure
-
-Sign in to the [Azure portal](https://portal.azure.com).
-
-## Create a virtual network and bastion host
-
-In this section, you'll create a virtual network, subnet, and bastion host.
-
-The bastion host will be used to connect securely to the virtual machine for testing the private endpoint.
-
-1. On the upper-left side of the screen, select **Create a resource > Networking > Virtual network** or search for **Virtual network** in the search box.
-
-2. In **Create virtual network**, enter or select this information in the **Basics** tab:
-
- | Setting | Value |
- ||--|
- | **Project Details** | |
- | Subscription | Select your Azure subscription. |
- | Resource Group | Select **Create new**. </br> Enter **myResourceGroup** in **Name**. </br> Select **OK**. |
- | **Instance details** | |
- | Name | Enter **myVNet**. |
- | Region | Select **East US**. |
-
-3. Select the **IP Addresses** tab or select the **Next: IP Addresses** button at the bottom of the page.
-
-4. In the **IP Addresses** tab, enter this information:
-
- | Setting | Value |
- |--|-|
- | IPv4 address space | Enter **10.1.0.0/16**. |
-
-5. Under **Subnet name**, select the word **default**.
-
-6. In **Edit subnet**, enter this information:
-
- | Setting | Value |
- |--|-|
- | Subnet name | Enter **mySubnet**. |
- | Subnet address range | Enter **10.1.0.0/24**. |
-
-7. Select **Save**.
-
-8. Select the **Security** tab.
-
-9. Under **BastionHost**, select **Enable**. Enter this information:
-
- | Setting | Value |
- |--|-|
- | Bastion name | Enter **myBastionHost**. |
- | AzureBastionSubnet address space | Enter **10.1.1.0/24**. |
- | Public IP Address | Select **Create new**. </br> For **Name**, enter **myBastionIP**. </br> Select **OK**. |
--
-8. Select the **Review + create** tab or select the **Review + create** button.
-
-9. Select **Create**.
-
-## Create a virtual machine
-
-In this section, you'll create a virtual machine that will be used to test the private endpoint.
-
-1. On the upper-left side of the portal, select **Create a resource** > **Compute** > **Virtual machine** or search for **Virtual machine** in the search box.
-
-2. In **Create a virtual machine**, type or select the values in the **Basics** tab:
-
- | Setting | Value |
- |--|-|
- | **Project Details** | |
- | Subscription | Select your Azure subscription. |
- | Resource Group | Select **myResourceGroup**. |
- | **Instance details** | |
- | Virtual machine name | Enter **myVM**. |
- | Region | Select **East US**. |
- | Availability Options | Select **No infrastructure redundancy required**. |
- | Security type | Select **Standard**. |
- | Image | Select **Windows Server 2019 Datacenter - Gen2**. |
- | Azure Spot instance | Select **No**. |
- | Size | Choose VM size or take default setting. |
- | **Administrator account** | |
- | Username | Enter a username. |
- | Password | Enter a password. |
- | Confirm password | Reenter password. |
-
-3. Select the **Networking** tab, or select **Next: Disks**, then **Next: Networking**.
-
-4. In the Networking tab, select or enter:
-
- | Setting | Value |
- |-|-|
- | **Network interface** | |
- | Virtual network | **myVNet**. |
- | Subnet | **mySubnet**. |
- | Public IP | Select **None**. |
- | NIC network security group | **Basic**. |
- | Public inbound ports | Select **None**. |
-
-5. Select **Review + create**.
-
-6. Review the settings, and then select **Create**.
--
-## Create an Azure Cosmos DB account with a private endpoint
-
-In this section, you'll create an Azure Cosmos DB account and configure the private endpoint.
-
-1. In the left-hand menu, select **Create a resource** > **Databases** > **Azure Cosmos DB**, or search for **Azure Cosmos DB** in the search box.
-
-2. On the **Select API option** page, select **Create** under **Azure Cosmos DB for NoSQL**.
-
-3. In the **Basics** tab of **Create Azure Cosmos DB account**, enter or select the following information:
-
- | Setting | Value |
- |--|-|
- | **Project Details** | |
- | Subscription | Select your Azure subscription. |
- | Resource Group | Select **myResourceGroup**. |
- | **Instance details** | |
- | Account name | Enter **mycosmosdb**. If the name is unavailable, enter a unique name. |
- | Location | Select **(US) East US**. |
- | Capacity mode | Leave the default **Provisioned throughput**. |
- | Apply Free Tier Discount | Leave the default **Do Not Apply**. |
-
-4. Select the **Networking** tab, or select **Next: Global Distribution**, then **Next: Networking**.
-
-5. In the **Networking** tab, enter or select the following information:
-
- | Setting | Value |
- | - | -- |
- | **Network connectivity** | |
- | Connectivity method | Select **Private endpoint**. |
- | **Configure Firewall** | |
- | Allow access from the Azure portal | Leave the default **Allow**. |
- | Allow access from my IP | Leave the default **Deny**. |
-
-6. In **Private endpoint**, select **+ Add**.
-
-7. In **Create private endpoint**, enter or select the following information:
-
- | Setting | Value |
- |--|-|
- | Subscription | Select your Azure subscription. |
- | Resource Group | Select **myResourceGroup**. |
- | Location | Select **East US**. |
- | Name | Enter **myPrivateEndpoint**. |
- | Azure Cosmos DB sub-resource | Leave the default **Azure Cosmos DB for NoSQL - Recommended**. |
- | **Networking** | |
- | Virtual network | Select **myVNet**. |
- | Subnet | Select **mySubnet**. |
- | **Private DNS integration** | |
- | Integrate with private DNS zone | Leave the default **Yes**. |
- | Private DNS Zone | Leave the default **(New) privatelink.documents.azure.com**. |
-
-8. Select **OK**.
-
-9. Select **Review + create**.
-
-10. Select **Create**.
-
-### Add a database and a container
-
-1. Select **Go to resource**, or in the left-hand menu of the Azure portal, select **All Resources** > **mycosmosdb**.
-
-2. In the left-hand menu, select **Data Explorer**.
-
-3. In the **Data Explorer** window, select **New Container**.
-
-4. In **New Container**, enter or select the following information:
-
- | Setting | Value |
- | - | -- |
- | Database id | Leave the default of **Create new**. </br> Enter **mydatabaseid** in the box. |
- | Database throughput (400 - unlimited RU/s) | Select **Manual**. </br> Enter **400** in the box. |
- | Container id | Enter **mycontainerid**. |
- | Partition key | Enter **/mykey**. |
-
-5. Select **OK**.
-
-6. In the **Settings** section of the Azure Cosmos DB account, select **Keys**.
-
-7. Select copy on the **PRIMARY CONNECTION STRING**. A valid connection string is in the format: `AccountEndpoint=https://<cosmosdb-account-name>.documents.azure.com:443/;AccountKey=<accountKey>;`
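-
-   If you prefer the CLI, you can also retrieve the connection strings programmatically. This is a minimal sketch, assuming the **mycosmosdb** account name and **myResourceGroup** resource group used earlier in this tutorial:
-
-   ```azurecli
-   # Account and resource group names are assumed from earlier steps in this tutorial.
-   az cosmosdb keys list \
-       --name mycosmosdb \
-       --resource-group myResourceGroup \
-       --type connection-strings
-   ```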
-
-## Test connectivity to private endpoint
-
-In this section, you'll use the virtual machine you created in the previous steps to connect to the Azure Cosmos DB account across the private endpoint using **Azure Cosmos DB Explorer**.
-
-1. Select **Resource groups** in the left-hand navigation pane.
-
-1. Select **myResourceGroup**.
-
-1. Select **myVM**.
-
-1. On the overview page for **myVM**, select **Connect** then **Bastion**.
-
-1. Enter the username and password that you entered during the virtual machine creation.
-
-1. Select the **Connect** button.
-
-1. Open Windows PowerShell on the server after you connect.
-
-1. Enter `nslookup <cosmosdb-account-name>.documents.azure.com` and validate the name resolution. Replace **\<cosmosdb-account-name>** with the name of the Azure Cosmos DB account you created in the previous steps. You'll receive a message similar to what is displayed below:
-
- ```powershell
- Server: UnKnown
- Address: 168.63.129.16
-
- Non-authoritative answer:
- Name: mycosmosdb.privatelink.documents.azure.com
- Address: 10.1.0.5
- Aliases: mycosmosdb.documents.azure.com
- ```
- A private IP address of **10.1.0.5** is returned for the Azure Cosmos DB account name. This address is in the **mySubnet** subnet of the **myVNet** virtual network you created previously.
-
-1. Go to [Azure Cosmos DB](https://cosmos.azure.com/). Select **Connect to your account with connection string**, then paste the connection string that you copied in the previous steps and select **Connect**.
-
-1. Under the **Azure Cosmos DB for NoSQL** menu on the left, you see **mydatabaseid** and **mycontainerid** that you previously created in **mycosmosdb**.
-
-1. Close the connection to **myVM**.
-
-## Clean up resources
-
-If you're not going to continue to use this application, delete the virtual network, virtual machine, and Azure Cosmos DB account with the following steps:
-
-1. From the left-hand menu, select **Resource groups**.
-
-2. Select **myResourceGroup**.
-
-3. Select **Delete resource group**.
-
-4. Enter **myResourceGroup** in **TYPE THE RESOURCE GROUP NAME**.
-
-5. Select **Delete**.
-
-## Next steps
-
-In this tutorial, you learned how to create:
-
-* Virtual network and bastion host.
-* Virtual Machine.
-* Azure Cosmos DB account.
-
-Learn how to connect to a web app using an Azure Private Endpoint:
-> [!div class="nextstepaction"]
-> [Connect to a web app using Private Endpoint](tutorial-private-endpoint-webapp-portal.md)
private-link Tutorial Private Endpoint Sql Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/tutorial-private-endpoint-sql-portal.md
Previously updated : 06/22/2022 Last updated : 08/30/2023 # Customer intent: As someone with a basic network background who is new to Azure, I want to create a private endpoint on a SQL server so that I can securely connect to it.
If you don't have an Azure subscription, create a [free account](https://azure.m
Sign in to the [Azure portal](https://portal.azure.com).
-## Create a virtual network and bastion host
-In this section, you'll create a virtual network, subnet, and bastion host.
-
-The bastion host will be used to connect securely to the virtual machine for testing the private endpoint.
-
-1. On the upper-left side of the screen, select **Create a resource > Networking > Virtual network** or search for **Virtual network** in the search box.
-
-2. In **Create virtual network**, enter or select this information in the **Basics** tab:
-
- | Setting | Value |
- ||--|
- | **Project Details** | |
- | Subscription | Select your Azure subscription. |
- | Resource Group | Select **Create new**. </br> Enter **CreateSQLEndpointTutorial** in **Name**. </br> Select **OK**. |
- | **Instance details** | |
- | Name | Enter **myVNet**. |
- | Region | Select **East US**. |
-
-3. Select the **IP Addresses** tab or select the **Next: IP Addresses** button at the bottom of the page.
-
-4. In the **IP Addresses** tab, enter this information:
-
- | Setting | Value |
- |--|-|
- | IPv4 address space | Enter **10.1.0.0/16**. |
-
-5. Under **Subnet name**, select the word **default**.
-
-6. In **Edit subnet**, enter this information:
-
- | Setting | Value |
- |--|-|
- | Subnet name | Enter **mySubnet**. |
- | Subnet address range | Enter **10.1.0.0/24**. |
-
-7. Select **Save**.
-
-8. Select the **Security** tab.
-
-9. Under **BastionHost**, select **Enable**. Enter this information:
-
- | Setting | Value |
- |--|-|
- | Bastion name | Enter **myBastionHost**. |
- | AzureBastionSubnet address space | Enter **10.1.1.0/24**. |
- | Public IP Address | Select **Create new**. </br> For **Name**, enter **myBastionIP**. </br> Select **OK**. |
-
-10. Select the **Review + create** tab or select the **Review + create** button.
-
-11. Select **Create**.
-
-## Create a virtual machine
-
-In this section, you'll create a virtual machine that will be used to test the private endpoint.
-
-1. On the upper-left side of the portal, select **Create a resource** > **Compute** > **Virtual machine** or search for **Virtual machine** in the search box.
-
-2. In **Create a virtual machine**, enter or select the values in the **Basics** tab:
-
- | Setting | Value |
- |--|-|
- | **Project Details** | |
- | Subscription | Select your Azure subscription. |
- | Resource Group | Select **CreateSQLEndpointTutorial**. |
- | **Instance details** | |
- | Virtual machine name | Enter **myVM**. |
- | Region | Select **(US) East US**. |
- | Availability Options | Select **No infrastructure redundancy required**. |
- | Security type | Select **Standard**. |
- | Image | Select **Windows Server 2019 Datacenter - Gen2**. |
- | Azure Spot instance | Select **No**. |
- | Size | Choose VM size or take default setting. |
- | **Administrator account** | |
- | Username | Enter a username. |
- | Password | Enter a password. |
- | Confirm password | Reenter password. |
-
-3. Select the **Networking** tab, or select **Next: Disks**, then **Next: Networking**.
-
-4. In the **Networking** tab, enter or select this information:
-
- | Setting | Value |
- |-|-|
- | **Network interface** | |
- | Virtual network | Select **myVNet**. |
- | Subnet | Select **mySubnet**. |
- | Public IP | Select **None**. |
- | NIC network security group | Select **Basic**. |
- | Public inbound ports | Select **None**. |
-
-5. Select **Review + create**.
-
-6. Review the settings, and then select **Create**.
## <a name="create-a-private-endpoint"></a>Create an Azure SQL server and private endpoint
-In this section, you'll create a SQL server in Azure.
-
-1. On the upper-left side of the screen in the Azure portal, select **Create a resource** > **Databases** > **SQL database**.
-
-1. In the **Basics** tab of **Create SQL database**, enter or select this information:
+In this section, you create a SQL server in Azure.
- | Setting | Value |
- | - | -- |
- | **Project details** | |
- | Subscription | Select your subscription. |
- | Resource group | Select **CreateSQLEndpointTutorial**. You created this resource group in the previous section.|
- | **Database details** | |
- | Database name | Enter **mysqldatabase**. |
- | Server | Select **Create new**. |
-
-1. In **Create SQL Database Server**, enter or select this information:
+1. In the search box at the top of the portal, enter **SQL**. Select **SQL databases** in the search results.
- | Setting | Value |
- | - | -- |
- | **Server details** | |
- | Server name | Enter **mysqlserver**. If this name is taken, create a unique name.|
- | Location | Select **(US) East US**. |
- | **Authentication** | |
- | Authentication method | Select **Use SQL authentication**. |
- | Server admin login | Enter an administrator name of your choosing. |
- | Password | Enter a password of your choosing. The password must be at least eight characters long and meet the defined requirements. |
- | Confirm password | Reenter password. |
-
-1. Select **OK**.
+1. In **SQL databases**, select **+ Create**.
-1. In the **Basics** tab, enter or select this information after creating the SQL database server:
+1. In the **Basics** tab of **Create SQL Database**, enter or select the following information:
| Setting | Value |
- | - | -- |
+ |||
+ | **Project details** | |
+ | Subscription | Select your subscription. |
+ | Resource group | Select **test-rg**. |
| **Database details** | |
- | Want to use SQL elastic pool? | Select **No**. |
- | Compute + Storage | Take default settings or select **Configure database** to configure compute and storage settings. |
+ | Database name | Enter **sql-db**. |
+ | Server | Select **Create new**. </br> Enter **sql-server-1** in **Server name** (Server names must be unique, replace **sql-server-1** with a unique value). </br> Select **(US) East US 2** in **Location**. </br> Select **Use SQL authentication**. </br> Enter a server admin sign-in and password. </br> Select **OK**. |
+ | Want to use SQL elastic pool? | Select **No**. |
+ | Workload environment | Leave the default of **Production**. |
| **Backup storage redundancy** | |
- | Backup storage redundancy | Select **Locally-redundant backup storage**. |
-
- :::image type="content" source="./media/tutorial-private-endpoint-sql-portal/create-sql-database-basics-tab-inline.png" alt-text="Screenshot of Create S Q L Database page showing the settings used." lightbox="./media/tutorial-private-endpoint-sql-portal/create-sql-database-basics-tab-expanded.png":::
+ | Backup storage redundancy | Select **Locally redundant backup storage**. |
-1. Select the **Networking** tab or select the **Next: Networking** button.
+1. Select **Next: Networking**.
-1. In the **Networking** tab, enter or select this information:
+1. In the **Networking** tab of **Create SQL Database**, enter or select the following information:
| Setting | Value |
- | - | -- |
- | **Network connectivity** | |
+ |||
+ | **Network connectivity** | |
| Connectivity method | Select **Private endpoint**. |
-
-1. Select **+ Add private endpoint** in **Private endpoints**.
-
-1. In **Create private endpoint**, enter or select this information:
-
- | Setting | Value |
- | - | -- |
+ | **Private endpoints** | |
+ | Select **+Add private endpoint**. | |
+ | **Create private endpoint** | |
| Subscription | Select your subscription. |
- | Resource group | Select **CreateSQLEndpointTutorial**. |
- | Location | Select **East US**. |
- | Name | Enter **myPrivateSQLendpoint**. |
- | Target sub-resource | Select **SqlServer**. |
+ | Resource group | Select **test-rg**. |
+ | Location | Select **East US 2**. |
+ | Name | Enter **private-endpoint-sql**. |
+ | Target subresource | Select **SqlServer**. |
| **Networking** | |
- | Virtual network | Select **myVNet**. |
- | Subnet | Select **mySubnet**. |
- | **Private DNS integration** | |
- | Integrate with private DNS zone | Leave the default **Yes**. |
- | Private DNS Zone | Leave the default **(New) privatelink.database.windows.net**. |
-
-1. Select **OK**.
+ | Virtual network | Select **vnet-1**. |
+ | Subnet | Select **subnet-1**. |
+ | **Private DNS integration** | |
+ | Integrate with private DNS zone | Select **Yes**. |
+ | Private DNS zone | Leave the default of **privatelink.database.windows.net**. |
- :::image type="content" source="./media/tutorial-private-endpoint-sql-portal/create-private-endpoint-sql-inline.png" alt-text="Screenshot of Create private endpoint page showing the settings used." lightbox="./media/tutorial-private-endpoint-sql-portal/create-private-endpoint-sql-expanded.png":::
+1. Select **OK**.
1. Select **Review + create**.
In this section, you'll create a SQL server in Azure.
> When adding a Private endpoint connection, public routing to your Azure SQL server is not blocked by default. The setting "Deny public network access" under the "Firewall and virtual networks" blade is left unchecked by default. To disable public network access, ensure this is checked.

## Disable public access to Azure SQL logical server

For this scenario, assume you would like to disable all public access to your Azure SQL server and only allow connections from your virtual network.
-1. In the Azure portal search box, enter **mysqlserver** or the server name you entered in the previous steps.
-2. On the **Networking** page, select **Public access** tab, then select **Disable** for **Public network access**.
+1. In the search box at the top of the portal, enter **SQL server**. Select **SQL servers** in the search results.
- :::image type="content" source="./media/tutorial-private-endpoint-sql-portal/disable-sql-server-public-access-inline.png" alt-text="Screenshot of the S Q L server Networking page showing how to disable public access." lightbox="./media/tutorial-private-endpoint-sql-portal/disable-sql-server-public-access-expanded.png":::
+1. Select **sql-server-1**.
+
+1. On the **Networking** page, select the **Public access** tab, then select **Disable** for **Public network access**.
-3. Select **Save**.
+1. Select **Save**.
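+
+Alternatively, you can disable public network access with the Azure CLI. This is a minimal sketch, assuming the **sql-server-1** server name and **test-rg** resource group used in this tutorial:
+
+```azurecli
+# Server and resource group names are assumed from this tutorial.
+az sql server update \
+    --name sql-server-1 \
+    --resource-group test-rg \
+    --enable-public-network false
+```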
## Test connectivity to private endpoint
-In this section, you'll use the virtual machine you created in the previous steps to connect to the SQL server across the private endpoint.
+In this section, you use the virtual machine you created in the previous steps to connect to the SQL server across the private endpoint.
-1. Select **Resource groups** in the left-hand navigation pane.
+1. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines** in the search results.
-2. Select **CreateSQLEndpointTutorial**.
+1. Select **vm-1**.
-3. Select **myVM**.
+1. In **Operations** select **Bastion**.
-4. On the overview page for **myVM**, select **Connect** then **Bastion**.
+1. Enter the username and password for the virtual machine.
-5. Enter the username and password that you entered during the virtual machine creation.
+1. Select **Connect**.
-6. Select **Connect** button.
+1. To verify name resolution of the private endpoint, enter the following command in the terminal window:
-7. Open Windows PowerShell on the server after you connect.
+ ```bash
+ nslookup sql-server-1.database.windows.net
+ ```
-8. Enter `nslookup <sqlserver-name>.database.windows.net`. Replace **\<sqlserver-name>** with the name of the SQL server you created in the previous steps. You'll receive a message similar to what is displayed below:
+ You receive a message similar to the following example. The IP address returned is the private IP address of the private endpoint.
- ```powershell
- Server: UnKnown
- Address: 168.63.129.16
+ ```output
+ Server: 127.0.0.53
+ Address: 127.0.0.53#53
Non-authoritative answer:
- Name: mysqlserver.privatelink.database.windows.net
- Address: 10.1.0.5
- Aliases: mysqlserver.database.windows.net
+ sql-server-8675.database.windows.net    canonical name = sql-server-8675.privatelink.database.windows.net.
+ Name:   sql-server-8675.privatelink.database.windows.net
+ Address: 10.1.0.4
```
- A private IP address of **10.1.0.5** is returned for the SQL server name. This address is in the **mySubnet** subnet of the **myVNet** virtual network you created previously.
-9. Install [SQL Server Management Studio](/sql/ssms/download-sql-server-management-studio-ssms?preserve-view=true&view=sql-server-2017) on **myVM**.
+1. Install the SQL server command line tools from [Install the SQL Server command-line tools sqlcmd and bcp on Linux](/sql/linux/sql-server-linux-setup-tools). Proceed with the next steps after the installation is complete.
-10. Open **SQL Server Management Studio**.
+1. Use the following commands to connect to the SQL server you created in the previous steps.
-4. In **Connect to server**, enter or select this information:
+ * Replace **\<server-admin>** with the admin username you entered during the SQL server creation.
- | Setting | Value |
- | - | -- |
- | Server type | Select **Database Engine**.|
- | Server name | Enter **\<sqlserver-name>.database.windows.net**. |
- | Authentication | Select **SQL Server Authentication**. |
- | User name | Enter the username you entered during server creation. |
- | Password | Enter the password you entered during server creation. |
- | Remember password | Select **Yes**. |
+ * Replace **\<admin-password>** with the admin password you entered during SQL server creation.
-1. Select **Connect**.
-2. Browse databases from left menu.
-3. (Optionally) Create or query information from **mysqldatabase**.
-4. Close the remote desktop connection to **myVM**.
+ * Replace **sql-server-1** with the name of your SQL server.
+
+ ```bash
+ sqlcmd -S sql-server-1.database.windows.net -U '<server-admin>' -P '<admin-password>'
+ ```
+
+1. A SQL command prompt is displayed on successful sign in. Enter **exit** to exit the **sqlcmd** tool.
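+
+1. Optionally, you can run a single query and exit in one step with the `-Q` option of **sqlcmd**. This is a quick sanity check rather than a required step; the placeholders are the same as above:
+
+ ```bash
+ # Runs one query against the server and exits.
+ sqlcmd -S sql-server-1.database.windows.net -U '<server-admin>' -P '<admin-password>' -Q 'SELECT @@version;'
+ ```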
-## Clean up resources
-When you're done using the private endpoint, SQL server, and the VM, delete the resource group and all of the resources it contains:
-1. Enter **CreateSQLEndpointTutorial** in the **Search** box at the top of the portal and select **CreateSQLEndpointTutorial** from the search results.
-2. Select **Delete resource group**.
-3. Enter *CreateSQLEndpointTutorial* for **TYPE THE RESOURCE GROUP NAME** and select **Delete**.
## Next steps

In this tutorial, you learned how to create:

* Virtual network and bastion host.
* Virtual machine.
* Azure SQL server with private endpoint.

You used the virtual machine to test connectivity privately and securely to the SQL server across the private endpoint.
public-multi-access-edge-compute-mec Key Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/public-multi-access-edge-compute-mec/key-concepts.md
Title: Key concepts for Azure public MEC description: Learn about important concepts for Azure public multi-access edge compute (MEC). Last updated 11/22/2022
quotas Quotas Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/quotas/quotas-overview.md
Title: Quotas overview description: Learn how to view quotas and request increases in the Azure portal. Previously updated : 07/22/2022 Last updated : 08/17/2023
Many Azure services have quotas, which are the assigned number of resources for your Azure subscription. Each quota represents a specific countable resource, such as the number of virtual machines you can create, the number of storage accounts you can use concurrently, the number of networking resources you can consume, or the number of API calls to a particular service you can make.
-The concept of quotas is designed to help protect customers from things like inaccurately resourced deployments and mistaken consumption. For Azure, it helps minimize risks from deceptive or inappropriate consumption and unexpected demand. Quotas are set and enforced in the scope of the [subscription](/microsoft-365/enterprise/subscriptions-licenses-accounts-and-tenants-for-microsoft-cloud-offerings?view=o365-worldwide).
+The concept of quotas is designed to help protect customers from things like inaccurately resourced deployments and mistaken consumption. For Azure, it helps minimize risks from deceptive or inappropriate consumption and unexpected demand. Quotas are set and enforced in the scope of the [subscription](/microsoft-365/enterprise/subscriptions-licenses-accounts-and-tenants-for-microsoft-cloud-offerings).
## Quotas or limits?
Different entry points, data views, actions, and programming options are availab
| Option | Azure portal | Quota APIs | Support API |
|---|---|---|---|
-| Summary | The portal provides a customer-friendly user interface for accessing quota information.<br><br>From [Azure Home](https://portal.azure.com/#home), **Quotas** is a centralized location to directly view quotas and quota usage and request quota increases.<br><br>From the Subscriptions page, **Quotas + usage** offers quick access to requesting quota increases for a given subscription.| The [Azure Quota API](/rest/api/reserved-vm-instances/quotaapi) programmatically provides the ability to get current quota limits, find current usage, and request quota increases by subscription, resource provider, and location. | The [Azure Support REST API](/rest/api/support/) enables customers to create service quota support tickets programmatically. |
+| Summary | The portal provides a customer-friendly user interface for accessing quota information.<br><br>From [Azure Home](https://portal.azure.com/#home), **Quotas** is a centralized location to directly view quotas and quota usage and request quota increases.<br><br>From the Subscriptions page, **Quotas + usage** offers quick access to requesting quota increases for a given subscription.| The [Azure Quota Service REST API](/rest/api/quota) programmatically provides the ability to get current quota limits, find current usage, and request quota increases by subscription, resource provider, and location. | The [Azure Support REST API](/rest/api/support/) enables customers to create service quota support tickets programmatically. |
| Availability | All customers | All customers | All customers with unified, premier, professional direct support plans |
| Which to choose? | Useful for customers desiring a central location and an efficient visual interface for viewing and managing quotas. Provides quick access to requesting quota increases. | Useful for customers who want granular and programmatic control of quota management for adjustable quotas. Intended for end to end automation of quota usage validation and quota increase requests through APIs. | Customers who want end to end automation of support request creation and management. Provides an alternative path to Azure portal for requests. |
| Providers supported | All providers | Compute, Machine Learning | All providers |
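
For example, the quota data behind these options is also exposed through the Azure CLI **quota** extension (`az extension add --name quota`). This is a minimal sketch; the resource name and location are illustrative:

```azurecli
# Requires the 'quota' extension; resource name and location are examples.
az quota show \
    --resource-name standardDSv3Family \
    --scope "/subscriptions/<subscription-id>/providers/Microsoft.Compute/locations/eastus"
```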
reliability Availability Zones Service Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/availability-zones-service-support.md
Azure offerings are grouped into three categories that reflect their _regional_
| [Azure Batch](../batch/create-pool-availability-zones.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
| [Azure Cache for Redis](../azure-cache-for-redis/cache-high-availability.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) ![An icon that signifies this service is zonal](media/icon-zonal.svg) |
| [Azure Cognitive Search](../search/search-reliability.md#availability-zones) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
-| [Azure Container Apps](../container-apps/disaster-recovery.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
+| [Azure Container Apps](reliability-azure-container-apps.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
| [Azure Container Instances](../container-instances/availability-zones.md) | ![An icon that signifies this service is zonal](media/icon-zonal.svg) |
| [Azure Container Registry](../container-registry/zone-redundancy.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
| [Azure Data Explorer](/azure/data-explorer/create-cluster-database-portal) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
reliability Reliability Azure Container Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-azure-container-apps.md
+
+ Title: Reliability in Azure Container Apps
+description: Learn how to ensure application reliability in Azure Container Apps
+ Last updated : 08/29/2023
+# Reliability in Azure Container Apps
+
+This article describes reliability support in Azure Container Apps, and covers both regional resiliency with availability zones and cross-region resiliency with disaster recovery. For a more detailed overview of reliability in Azure, see [Azure reliability](/azure/well-architected/resiliency/).
+
+## Availability zone support
++
+Azure Container Apps uses [availability zones](availability-zones-overview.md#availability-zones) in regions where they're available to provide high-availability protection for your applications and data from data center failures.
+
+When you enable Container Apps' zone redundancy feature, replicas are automatically distributed across the zones in the region. Traffic is load balanced among the replicas. If a zone outage occurs, traffic is automatically routed to the replicas in the remaining zones.
+
+> [!NOTE]
+> There is no extra charge for enabling zone redundancy, but it only provides benefits when you have 2 or more replicas, with 3 or more being ideal since most regions that support zone redundancy have 3 zones.
+
+### Prerequisites
+
+Azure Container Apps offers the same reliability support regardless of your plan type.
+
+Azure Container Apps uses [availability zones](availability-zones-overview.md#availability-zones) in regions where they're available. For a list of regions that support availability zones, see [Availability zone service and regional support](availability-zones-service-support.md).
+
+### SLA improvements
+
+There are no increased SLAs for Azure Container Apps. For more information on the Azure Container Apps SLAs, see [Service Level Agreement for Azure Container Apps](https://azure.microsoft.com/support/legal/sla/container-apps/).
+
+### Create a resource with availability zone enabled
+
+#### Set up zone redundancy in your Container Apps environment
+
+To take advantage of availability zones, you must enable zone redundancy when you create a Container Apps environment. The environment must include a virtual network with an available subnet. To ensure proper distribution of replicas, set your app's minimum replica count to three.
+
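+For example, when you create your container app with the Azure CLI, you can set the minimum replica count directly. This is a minimal sketch; the app name, image, and placeholder values are illustrative:
+
+```azurecli-interactive
+# Placeholder names; the image is the Container Apps quickstart sample.
+az containerapp create \
+  --name my-container-app \
+  --resource-group <RESOURCE_GROUP_NAME> \
+  --environment <CONTAINER_APP_ENV_NAME> \
+  --image mcr.microsoft.com/azuredocs/containerapps-helloworld:latest \
+  --min-replicas 3 \
+  --max-replicas 5
+```
+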
+##### Enable zone redundancy via the Azure portal
+
+To create a container app in an environment with zone redundancy enabled using the Azure portal:
+
+1. Navigate to the Azure portal.
+1. Search for **Container Apps** in the top search box.
+1. Select **Container Apps**.
+1. Select **Create New** in the *Container Apps Environment* field to open the *Create Container Apps Environment* panel.
+1. Enter the environment name.
+1. Select **Enabled** for the *Zone redundancy* field.
+
+Zone redundancy requires a virtual network with an infrastructure subnet. You can choose an existing virtual network or create a new one. When creating a new virtual network, you can accept the values provided for you or customize the settings.
+
+1. Select the **Networking** tab.
+1. To assign a custom virtual network name, select **Create New** in the *Virtual Network* field.
+1. To assign a custom infrastructure subnet name, select **Create New** in the *Infrastructure subnet* field.
+1. You can select **Internal** or **External** for the *Virtual IP*.
+1. Select **Create**.
++
+##### Enable zone redundancy with the Azure CLI
+
+Create a virtual network and infrastructure subnet to include with the Container Apps environment.
+
+When using these commands, replace the `<PLACEHOLDERS>` with your values.
+
+>[!NOTE]
+> The Consumption only environment requires a dedicated subnet with a CIDR range of `/23` or larger. The workload profiles environment requires a dedicated subnet with a CIDR range of `/27` or larger. To learn more about subnet sizing, see the [networking architecture overview](../container-apps/networking.md#subnet).
+
+# [Azure CLI](#tab/azure-cli)
+
+```azurecli-interactive
+az network vnet create \
+ --resource-group <RESOURCE_GROUP_NAME> \
+ --name <VNET_NAME> \
+ --location <LOCATION> \
+ --address-prefix 10.0.0.0/16
+```
+
+```azurecli-interactive
+az network vnet subnet create \
+ --resource-group <RESOURCE_GROUP_NAME> \
+ --vnet-name <VNET_NAME> \
+ --name infrastructure \
+ --address-prefixes 10.0.0.0/21
+```
+
+# [Azure PowerShell](#tab/azure-powershell)
+
+```azurepowershell-interactive
+$SubnetArgs = @{
+ Name = 'infrastructure-subnet'
+ AddressPrefix = '10.0.0.0/21'
+}
+$subnet = New-AzVirtualNetworkSubnetConfig @SubnetArgs
+```
+
+```azurepowershell-interactive
+$VnetArgs = @{
+ Name = <VNetName>
+ Location = <Location>
+ ResourceGroupName = <ResourceGroupName>
+ AddressPrefix = '10.0.0.0/16'
+ Subnet = $subnet
+}
+$vnet = New-AzVirtualNetwork @VnetArgs
+```
+++
+Next, query for the infrastructure subnet ID.
+
+# [Azure CLI](#tab/azure-cli)
+
+```azurecli-interactive
+INFRASTRUCTURE_SUBNET=`az network vnet subnet show --resource-group <RESOURCE_GROUP_NAME> --vnet-name <VNET_NAME> --name infrastructure --query "id" -o tsv | tr -d '[:space:]'`
+```
+
+# [Azure PowerShell](#tab/azure-powershell)
+
+```azurepowershell-interactive
+$InfrastructureSubnet=(Get-AzVirtualNetworkSubnetConfig -Name $SubnetArgs.Name -VirtualNetwork $vnet).Id
+```
+++
+Finally, create the environment with the `--zone-redundant` parameter. The location must be the same location used when creating the virtual network.
+
+# [Azure CLI](#tab/azure-cli)
+
+```azurecli-interactive
+az containerapp env create \
+ --name <CONTAINER_APP_ENV_NAME> \
+ --resource-group <RESOURCE_GROUP_NAME> \
+ --location "<LOCATION>" \
+ --infrastructure-subnet-resource-id $INFRASTRUCTURE_SUBNET \
+ --zone-redundant
+```
+
+# [Azure PowerShell](#tab/azure-powershell)
+
+A Log Analytics workspace is required for the Container Apps environment. The following commands create a Log Analytics workspace and save the workspace ID and primary shared key to environment variables.
+
+```azurepowershell-interactive
+$WorkspaceArgs = @{
+ Name = 'myworkspace'
+ ResourceGroupName = <ResourceGroupName>
+ Location = <Location>
+ PublicNetworkAccessForIngestion = 'Enabled'
+ PublicNetworkAccessForQuery = 'Enabled'
+}
+New-AzOperationalInsightsWorkspace @WorkspaceArgs
+$WorkspaceId = (Get-AzOperationalInsightsWorkspace -ResourceGroupName <ResourceGroupName> -Name $WorkspaceArgs.Name).CustomerId
+$WorkspaceSharedKey = (Get-AzOperationalInsightsWorkspaceSharedKey -ResourceGroupName <ResourceGroupName> -Name $WorkspaceArgs.Name).PrimarySharedKey
+```
+
+To create the environment, run the following command:
+
+```azurepowershell-interactive
+$EnvArgs = @{
+ EnvName = <EnvironmentName>
+ ResourceGroupName = <ResourceGroupName>
+ Location = <Location>
+ AppLogConfigurationDestination = "log-analytics"
+ LogAnalyticConfigurationCustomerId = $WorkspaceId
+ LogAnalyticConfigurationSharedKey = $WorkspaceSharedKey
+ VnetConfigurationInfrastructureSubnetId = $InfrastructureSubnet
+ VnetConfigurationInternal = $true
+}
+New-AzContainerAppManagedEnv @EnvArgs
+```
+++
+### Safe deployment techniques
+
+When you set up [zone redundancy in your container app](#set-up-zone-redundancy-in-your-container-apps-environment), replicas are distributed automatically across the zones in the region. After the replicas are distributed, traffic is load balanced among them. If a zone outage occurs, traffic automatically routes to the replicas in the remaining zones.
+
+You should still use safe deployment techniques such as [blue-green deployment](../container-apps/blue-green-deployment.md). Azure Container Apps doesn't provide one-zone-at-a-time deployment or upgrades.
+
+If you have enabled [session affinity](../container-apps/sticky-sessions.md), and a zone goes down, clients for that zone are routed to new replicas because the previous replicas are no longer available. Any state associated with the previous replicas is lost.
+
+### Availability zone redeployment and migration
+
+To take advantage of availability zones, enable zone redundancy as you create the Container Apps environment. The environment must include a virtual network with an available subnet. You can't migrate an existing Container Apps environment from nonavailability zone support to availability zone support.
+
+## Disaster recovery: cross-region failover
+
+In the unlikely event of a full region outage, you have the option of using one of two strategies:
+
+- **Manual recovery**: Manually deploy to a new region, or wait for the region to recover, and then manually redeploy all environments and apps.
+
+- **Resilient recovery**: First, deploy your container apps in advance to multiple regions. Next, use Azure Front Door or Azure Traffic Manager to handle incoming requests, pointing traffic to your primary region. Then, should an outage occur, you can redirect traffic away from the affected region. For more information, see [Cross-region replication in Azure](cross-region-replication-azure.md).
+
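+As an illustrative sketch of the resilient approach, you could front two pre-deployed container apps with an Azure Traffic Manager profile that uses priority routing. The profile name, DNS label, and app FQDNs below are placeholders:
+
+```azurecli-interactive
+# All names, the DNS label, and the FQDNs below are placeholders.
+az network traffic-manager profile create \
+  --name my-aca-profile \
+  --resource-group <RESOURCE_GROUP_NAME> \
+  --routing-method Priority \
+  --unique-dns-name <UNIQUE_DNS_LABEL>
+
+# Primary region receives traffic while healthy.
+az network traffic-manager endpoint create \
+  --name primary-region \
+  --profile-name my-aca-profile \
+  --resource-group <RESOURCE_GROUP_NAME> \
+  --type externalEndpoints \
+  --target <PRIMARY_APP_FQDN> \
+  --priority 1
+
+# Secondary region receives traffic if the primary is unavailable.
+az network traffic-manager endpoint create \
+  --name secondary-region \
+  --profile-name my-aca-profile \
+  --resource-group <RESOURCE_GROUP_NAME> \
+  --type externalEndpoints \
+  --target <SECONDARY_APP_FQDN> \
+  --priority 2
+```
+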
+> [!NOTE]
+> Regardless of which strategy you choose, make sure your deployment configuration files are in source control so you can easily redeploy if necessary.
+
+## More guidance
+
+The following resources can help you create your own disaster recovery plan:
+
+- [Failure and disaster recovery for Azure applications](/azure/architecture/reliability/disaster-recovery)
+- [Azure resiliency technical guidance](/azure/architecture/checklist/resiliency-per-service)
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Reliability in Azure](availability-zones-overview.md)
reliability Reliability Guidance Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-guidance-overview.md
Azure reliability guidance contains the following:
| **Products** | | | |[Azure Cosmos DB](../cosmos-db/high-availability.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)|
-[Azure Database for PostgreSQL - Flexible Server](reliability-postgre-flexible.md)|
+[Azure Database for PostgreSQL - Flexible Server](reliability-postgresql-flexible-server.md)|
[Azure Event Hubs](../event-hubs/event-hubs-geo-dr.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json#availability-zones)| [Azure ExpressRoute](../expressroute/designing-for-high-availability-with-expressroute.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)| [Azure Key Vault](../key-vault/general/disaster-recovery-guidance.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)|
Azure reliability guidance contains the following:
[Azure Storage Mover](reliability-azure-storage-mover.md)| [Azure Virtual Machine Scale Sets](../virtual-machines/availability.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)| [Azure Virtual Machines](reliability-virtual-machines.md)|
+[Azure Virtual Machines Image Builder](reliability-image-builder.md)|
[Azure Virtual Network](../vpn-gateway/create-zone-redundant-vnet-gateway.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)| [Azure VPN Gateway](../vpn-gateway/about-zone-redundant-vnet-gateways.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)|
reliability Reliability Image Builder https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-image-builder.md
+
+ Title: Reliability in Azure Image Builder
+description: Find out about reliability in Azure Image Builder
++++++ Last updated : 08/22/2023++
+# Reliability in Azure Image Builder (AIB)
+
+This article describes reliability support in Azure Image Builder. Azure Image Builder doesn't currently support availability zones, but it does support [cross-regional resiliency with disaster recovery](#disaster-recovery-cross-region-failover).
++
+Azure Image Builder (AIB) is a regional service, with each cluster serving a single region. The AIB regional setup keeps data and resources within the regional boundary. The AIB service doesn't fail over its cluster or SQL database in region-down scenarios.
++
+For an architectural overview of reliability in Azure, see [Azure reliability](/azure/architecture/framework/resiliency/overview).
++
+## Disaster recovery: cross-region failover
+
+If a region-wide disaster occurs, Azure can provide protection from regional or large geography disasters with disaster recovery by making use of another region. For more information on Azure disaster recovery architecture, see [Azure to Azure disaster recovery architecture](../site-recovery/azure-to-azure-architecture.md).
+
+To ensure fast and easy recovery for Azure Image Builder (AIB), it's recommended that you run an image template in region pairs or multiple regions when designing your AIB solution. You should also replicate resources from the start when you're setting up your image templates.
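+
+For example, if you publish the built image to an Azure Compute Gallery, you can replicate each image version to more than one region at publish time. This is an illustrative sketch with placeholder names, not a required step:
+
+```azurecli-interactive
+# Placeholder names; replicates the image version to two regions.
+az sig image-version create \
+  --resource-group <RESOURCE_GROUP_NAME> \
+  --gallery-name <GALLERY_NAME> \
+  --gallery-image-definition <IMAGE_DEFINITION> \
+  --gallery-image-version 1.0.0 \
+  --managed-image <MANAGED_IMAGE_ID> \
+  --target-regions "eastus2=1" "centralus=1"
+```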
++
+### Cross-region disaster recovery in multi-region geography
+
+When a regional disaster occurs, Microsoft is responsible for outage detection, notifications, and support for AIB. However, you're responsible for setting up disaster recovery for the control (service side) and data planes.
++
+#### Outage detection, notification, and management
+
+Microsoft sends a notification if there's an outage in the Azure Image Builder (AIB) service. One common outage symptom is image templates getting 500 errors when attempting to run. You can review Azure Image Builder outage notifications and status updates through [support request management](../azure-portal/supportability/how-to-manage-azure-support-request.md).
++
+#### Set up disaster recovery and outage detection
+
+You're responsible for setting up disaster recovery for your Azure Image Builder (AIB) environment, as there isn't a region failover at the AIB service side. You need to configure both the control plane (service side) and data plane.
+
+It's recommended that you create an AIB resource in another nearby region, into which you can replicate your resources. For more information, see the [supported regions](../virtual-machines/image-builder-overview.md#regions) and what resources are included in an [AIB creation](/azure/virtual-machines/image-builder-overview#how-it-works).
+
+### Single-region geography disaster recovery
+
+In the case of a single-region disaster, you still need to be able to get an image template resource from that region even when that region isn't available. You can either maintain a copy of an image template locally or use [Azure Resource Graph](../governance/resource-graph/index.yml) from the Azure portal to get an image template resource.
+
+To get an image template resource using Resource Graph from the Azure portal:
+
+1. Go to the search bar in Azure portal and search for *resource graph explorer*.
+
+ ![Screenshot of Azure Resource Graph Explorer in the portal.](../virtual-machines//media/image-builder-reliability/resource-graph-explorer-portal.png#lightbox)
+
+1. Use the search bar on the far left to search for the resource by type and name; the resulting details show the properties of the image template. The *See details* option on the bottom right shows the image template's *properties* attribute and tags separately. The template name, location, ID, and tenant ID can be used to get the correct image template resource.
+
+ ![Screenshot of using Azure Resource Graph Explorer search.](../virtual-machines//media/image-builder-reliability/resource-graph-explorer-search.png#lightbox)
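+
+If the portal is unavailable, you can run a similar lookup with the Azure CLI **resource-graph** extension (`az extension add --name resource-graph`). This is a sketch; the query simply lists image template resources and the properties mentioned above:
+
+```azurecli-interactive
+# Requires the 'resource-graph' extension.
+az graph query -q "resources | where type =~ 'microsoft.virtualmachineimages/imagetemplates' | project name, location, id, tenantId"
+```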
++
+### Capacity and proactive disaster recovery resiliency
+
+Microsoft and its customers operate under the [shared responsibility model](./business-continuity-management-program.md#shared-responsibility-model). In customer-enabled DR (customer-responsible services), you're responsible for addressing DR for any service you deploy and control. To ensure that recovery is proactive, you should always pre-deploy secondaries. Without pre-deployed secondaries, there's no guarantee of capacity at time of impact.
+
+When planning where to replicate a template, consider:
+
+- AIB region availability:
+ - Choose [AIB supported regions](../virtual-machines//image-builder-overview.md#regions) close to your users.
+ - AIB continually expands into new regions.
+- Azure paired regions:
+ - For your geographic area, choose two regions paired together.
+ - Recovery efforts for paired regions are prioritized where needed.
+
+## Additional guidance
+
+For information about how your data is processed, see the Azure Image Builder [data residency](../virtual-machines//linux/image-builder-json.md#data-residency) details.
++
+## Next steps
+
+- [Reliability in Azure](../reliability/overview.md)
+- [Enable Azure VM disaster recovery between availability zones](../site-recovery/azure-to-azure-how-to-enable-zone-to-zone-disaster-recovery.md)
+- [Azure Image Builder overview](../virtual-machines//image-builder-overview.md)
reliability Reliability Postgre Flexible https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-postgre-flexible.md
- Title: Reliability and high availability in Azure Database for PostgreSQL - Flexible Server
-description: Find out about reliability and high availability in Azure Database for PostgreSQL - Flexible Server
- Previously updated : 08/04/2023
-<!--#Customer intent: I want to understand reliability support in Azure Database for PostgreSQL - Flexible Server so that I can respond to and/or avoid failures in order to minimize downtime and data loss. -->
-
-# High availability (Reliability) in Azure Database for PostgreSQL - Flexible Server
---
-This article describes high availability in Azure Database for PostgreSQL - Flexible Server, which includes [availability zones](#availability-zone-support) and [cross-region resiliency with disaster recovery](#disaster-recovery-cross-region-failover). For a more detailed overview of reliability in Azure, see [Azure reliability](/azure/architecture/framework/resiliency/overview).
-
-Azure Database for PostgreSQL: Flexible Server offers high availability support by provisioning a physically separate primary and standby replica either within the same availability zone (zonal) or across availability zones (zone-redundant). This high availability model is designed to ensure that committed data is never lost in the case of failures. The model is also designed so that the database doesn't become a single point of failure in your software architecture. For more information on high availability and availability zone support, see [Availability zone support](#availability-zone-support).
--
-## Availability zone support
--
-Azure Database for PostgreSQL - Flexible Server supports both [zone-redundant and zonal models](availability-zones-service-support.md#azure-services-with-availability-zone-support) for high availability configurations. Both high availability configurations enable automatic failover capability with zero data loss during both planned and unplanned events.
-- **Zone-redundant**. Zone-redundant high availability deploys a standby replica in a different zone with automatic failover capability. Zone redundancy provides the highest level of availability, but requires you to configure application redundancy across zones. For that reason, choose zone redundancy when you want protection from availability zone level failures and when latency across the availability zones is acceptable.
-
 You can choose the region and the availability zones for both primary and standby servers. The standby replica server is provisioned in the chosen availability zone in the same region with a similar compute, storage, and network configuration as the primary server. Data files and transaction log files (write-ahead logs, or WAL) are stored on locally redundant storage (LRS) within each availability zone, automatically storing **three** data copies. A zone-redundant configuration provides physical isolation of the entire stack between primary and standby servers.
-
- :::image type="content" source="../postgresql/flexible-server/media/business-continuity/concepts-zone-redundant-high-availability-architecture.png" alt-text="Pictures illustrating redundant high availability architecture.":::
-- **Zonal**. Choose a zonal deployment when you want to achieve the highest level of availability within a single availability zone, but with the lowest network latency. You can choose the region and the availability zone to deploy your primary database server. A standby replica server is *automatically* provisioned and managed in the *same* availability zone - with similar compute, storage, and network configuration - as the primary server. A zonal configuration protects your databases from node-level failures and also helps with reducing application downtime during planned and unplanned downtime events. Data from the primary server is replicated to the standby replica in synchronous mode. In the event of any disruption to the primary server, the server is automatically failed over to the standby replica.
-
- :::image type="content" source="../postgresql/flexible-server/media/business-continuity/concepts-same-zone-high-availability-architecture.png" alt-text="Pictures illustrating zonal high availability architecture.":::
-
-
->[!NOTE]
->Both zonal and zone-redundant deployment models architecturally behave the same. Various discussions in the following sections apply to both unless called out otherwise.
-
-### Prerequisites
-
-**Zone redundancy:**
-- The **zone-redundancy** option is only available in [regions that support availability zones](../postgresql/flexible-server/overview.md#azure-regions).
-
-- Zone-redundant high availability is **not** supported for:
-
 - Azure Database for PostgreSQL - Single Server SKU.
- - Burstable compute tier.
- - Regions with single-zone availability.
-
-**Zonal:**
-- The **zonal** deployment option is available in all [Azure regions](../postgresql/flexible-server/overview.md#azure-regions) where you can deploy Flexible Server.
-
-### High availability features
-
-* A standby replica is deployed in the same VM configuration - including vCores, storage, network settings - as the primary server.
-
-* You can add availability zone support for an existing database server.
-
-* You can remove the standby replica by disabling high availability.
-
-* You can choose availability zones for your primary and standby database servers for zone-redundant availability.
-
-* Operations such as stop, start, and restart are performed on both primary and standby database servers at the same time.
-
-* In zone-redundant and zonal models, automatic backups are performed periodically from the primary database server. At the same time, the transaction logs are continuously archived in the backup storage from the standby replica. If the region supports availability zones, backup data is stored on zone-redundant storage (ZRS). In regions that don't support availability zones, backup data is stored on local redundant storage (LRS).
-
-* Clients always connect to the end hostname of the primary database server.
-
-* Any changes to the server parameters are also applied to the standby replica.
-
-* Ability to restart the server to pick up any static server parameter changes.
-
-* Periodic maintenance activities, such as minor version upgrades, happen on the standby first, and the service is then failed over to reduce downtime.
-
-### High availability limitations
-
-* Due to synchronous replication to the standby server, especially with a zone-redundant configuration, applications can experience elevated write and commit latency.
-
-* Standby replica cannot be used for read queries.
-
-* Depending on the workload and activity on the primary server, the failover process might take longer than 120 seconds due to the recovery involved at the standby replica before it can be promoted.
-
-* The standby server typically recovers WAL files at 40 MB/s. If your workload exceeds this limit, you may encounter extended time for the recovery to complete either during the failover or after establishing a new standby.
-
-* Configuring for availability zones adds some latency to writes and commits, with no impact on read queries. The performance impact varies depending on your workload. As a general guideline, the impact on writes and commits can be around 20-30%.
-
-* Restarting the primary database server also restarts the standby replica.
-
-* Configuring extra standbys is not supported.
-
-* Customer-initiated management tasks can't be scheduled during the managed maintenance window.
-
-* Planned events such as scale compute and scale storage happen on the standby first and then on the primary server. Currently, the server doesn't fail over for these planned operations.
-
-* If logical decoding or logical replication is configured with an availability-configured Flexible Server, in the event of a failover to the standby server, the logical replication slots are not copied over to the standby server.
-
-* Configuring availability zones between private (VNET) and public access isn't supported. You must configure availability zones within a VNET (spanned across availability zones within a region) or public access.
-
-* Availability zones are configured only within a single region. Availability zones cannot be configured across regions.
-
-### SLA
-- The **zone-redundant** model offers an uptime [SLA of 99.99%](https://azure.microsoft.com/support/legal/sla/postgresql).
-
-- The **zonal** model offers an uptime [SLA of 99.95%](https://azure.microsoft.com/support/legal/sla/postgresql).
-
-### Create an Azure Database for PostgreSQL - Flexible Server with availability zone enabled
-
-To learn how to create an Azure Database for PostgreSQL - Flexible Server for high availability with availability zones, see [Quickstart: Create an Azure Database for PostgreSQL - Flexible Server in the Azure portal](/azure/postgresql/flexible-server/quickstart-create-server-portal).
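-
-As an illustrative CLI sketch (placeholder names; availability zone numbers depend on the region), a zone-redundant server can be created as follows:
-
-```azurecli
-# Placeholder names; zone numbers depend on the region.
-az postgres flexible-server create \
-  --resource-group <RESOURCE_GROUP_NAME> \
-  --name <SERVER_NAME> \
-  --location eastus \
-  --high-availability ZoneRedundant \
-  --zone 1 \
-  --standby-zone 2
-```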
-
-### Availability zone redeployment and migration
-
-To learn how to enable or disable the high availability configuration in your flexible server, in both zone-redundant and zonal deployment models, see [Manage high availability in Flexible Server](../postgresql/flexible-server/how-to-manage-high-availability-portal.md).
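-
-For example, high availability can be enabled or disabled on an existing server with the Azure CLI. This is a minimal sketch with placeholder names; allowed values depend on your CLI version:
-
-```azurecli
-# Placeholder names; other accepted values include SameZone and Disabled.
-az postgres flexible-server update \
-  --resource-group <RESOURCE_GROUP_NAME> \
-  --name <SERVER_NAME> \
-  --high-availability ZoneRedundant
-```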
--
-### High availability components and workflow
-
-#### Transaction completion
-
-Application transaction-triggered writes and commits are first logged to the WAL on the primary server. They are then streamed to the standby server using the Postgres streaming protocol. Once the logs are persisted on the standby server's storage, the standby acknowledges the write to the primary server, and only then is the write confirmed to the application. This extra round trip adds latency to your application; the percentage of impact depends on the application. The acknowledgment process does not wait for the logs to be applied to the standby server. The standby server is permanently in recovery mode until it is promoted.
-
-#### Health check
-
-Flexible server health monitoring periodically checks for both the primary and standby health. If, after multiple pings, health monitoring detects that a primary server isn't reachable, the service then initiates an automatic failover to the standby server. The health monitoring algorithm is based on multiple data points to avoid false positive situations.
-
-#### Failover modes
-
-Flexible server supports two failover modes, [**Planned failover**](#planned-failover) and [**Unplanned failover**](#unplanned-failover). In both modes, once the replication is severed, the standby server runs the recovery before being promoted as a primary and opens for read/write. With automatic DNS entries updated with the new primary server endpoint, applications can connect to the server using the same endpoint. A new standby server is established in the background, so that your application can maintain connectivity.
--
-#### High availability status
-
-The health of primary and standby servers are continuously monitored, and appropriate actions are taken to remediate issues, including triggering a failover to the standby server. The table below lists the possible high availability statuses:
-
-| **Status** | **Description** |
-| - | |
-| **Initializing** | In the process of creating a new standby server. |
-| **Replicating Data** | After the standby is created, it is catching up with the primary. |
-| **Healthy** | Replication is in steady state and healthy. |
-| **Failing Over** | The database server is in the process of failing over to the standby. |
-| **Removing Standby** | In the process of deleting standby server. |
-| **Not Enabled** | Zone redundant high availability is not enabled. |
-
-
->[!NOTE]
-> You can enable high availability during server creation or at a later time. If you are enabling or disabling high availability after the server is created, it's recommended to perform the operation when primary server activity is low.
-
-#### Steady-state operations
-
-PostgreSQL client applications are connected to the primary server using the DB server name. Application reads are served directly from the primary server. At the same time, commits and writes are confirmed to the application only after the log data is persisted on both the primary server and the standby replica. Due to this extra round-trip, applications can expect elevated latency for writes and commits. You can monitor the health of the high availability on the portal.
--
-1. Clients connect to the flexible server and perform write operations.
-2. Changes are replicated to the standby site.
-3. Primary receives an acknowledgment.
-4. Writes/commits are acknowledged.
-
-#### Point-in-time restore of high availability servers
-
-For flexible servers configured with high availability, log data is replicated in real time to the standby server. Any user errors on the primary server - such as an accidental drop of a table or incorrect data updates - are replicated to the standby replica. So, you cannot use the standby to recover from such logical errors. To recover from such errors, you have to perform a point-in-time restore from the backup. Using a flexible server's point-in-time restore capability, you can restore to a time before the error occurred. For databases configured with high availability, a new database server is restored as a single-zone flexible server with a new user-provided server name. You can use the restored server for a few use cases:
-
-- You can use the restored server for production and optionally enable zone-redundant high availability.
-- If you want to restore an object, export it from the restored database server and import it to your production database server.
-- If you want to clone your database server for testing and development purposes or to restore for any other purposes, you can perform the point-in-time restore.
-
-To learn how to do a point-in-time restore of a flexible server, see [Point-in-time restore of a flexible server](/azure/postgresql/flexible-server/how-to-restore-server-portal).
-
-### Failover Support
-
-#### Planned failover
-
-Planned downtime events include Azure scheduled periodic software updates and minor version upgrades.You can also use a planned failover to return the primary server to a preferred availability zone. When configured in high availability, these operations are first applied to the standby replica while the applications continue to access the primary server. Once the standby replica is updated, primary server connections are drained, and a failover is triggered, which activates the standby replica to be the primary with the same database server name. Client applications have to reconnect with the same database server name to the new primary server and can resume their operations. A new standby server is established in the same zone as the old primary.
-
-For other user-initiated operations such as scale-compute or scale-storage, the changes are applied on the standby first, followed by the primary. Currently, the service is not failed over to the standby, and hence while the scale operation is carried out on the primary server, applications will encounter a short downtime.
-
-You can also use this feature to failover to the standby server with reduced downtime. For example, your primary could be on a different availability zone after an unplanned failover than the application. You want to bring the primary server back to the previous zone to colocate with your application.
-
-When executing this feature, the standby server is first prepared to ensure it is caught up with recent transactions, allowing the application to continue performing reads/writes. The standby is then promoted, and the connections to the primary are severed. Your application can continue to write to the primary while a new standby server is established in the background. The following are the steps involved with planned failover.
-
-| **Step** | **Description** | **App downtime expected?** |
- | - | | -- |
- | 1 | Wait for the standby server to have caught-up with the primary. | No |
- | 2 | Internal monitoring system initiates the failover workflow. | No |
- | 3 | Application writes are blocked when the standby server is close to the primary log sequence number (LSN). | Yes |
- | 4 | Standby server is promoted to be an independent server. | Yes |
- | 5 | DNS record is updated with the new standby server's IP address. | Yes |
- | 6 | Application to reconnect and resume its read/write with new primary | No |
- | 7 | A new standby server in another zone is established. | No |
- | 8 | Standby server starts to recover logs (from Azure BLOB) that it missed during its establishment. | No |
- | 9 | A steady state between the primary and the standby server is established. | No |
- | 10 | Planned failover process is complete. | No |
-
-Application downtime starts at step #3 and can resume operation post step #5. The rest of the steps happen in the background without impacting application writes and commits.
--
->[!TIP]
->With flexible server, you can optionally schedule Azure-initiated maintenance activities by choosing a 60-minute window on a day of your preference where the activities on the databases are expected to be low. Azure maintenance tasks such as patching or minor version upgrades would happen during that window. If you don't choose a custom window, a system allocated 1-hr window between 11 pm - 7 am local time is selected for your server.
-
->These Azure-initiated maintenance activities are also performed on the standby replica for flexible servers that are configured with availability zones.
--
-For a list of possible planned downtime events, see [Planned downtime events](/azure/postgresql/flexible-server/concepts-business-continuity#planned-downtime-events)
-
-#### Unplanned failover
-
-Unplanned downtimes can occur as a result of unforeseen disruptions such as underlying hardware fault, networking issues, and software bugs. If the database server configured with high availability goes down unexpectedly, then the standby replica is activated and the clients can resume their operations. If not configured with high availability (HA), then if the restart attempt fails, a new database server is automatically provisioned. While an unplanned downtime can't be avoided, flexible server helps mitigating the downtime by automatically performing recovery operations without requiring human intervention.
-
-For information on unplanned failovers and downtime, including possible scenarios, see [Unplanned downtime mitigation](/azure/postgresql/flexible-server/concepts-business-continuity#unplanned-downtime-mitigation).
--
-#### Failover testings (forced failover)
-
-With a forced failover, you can simulate an unplanned outage scenario while running your production workload and observe your application downtime. You can also use a forced failover when your primary server becomes unresponsive.
-
-A forced failover brings the primary server down and initiates the failover workflow in which the standby promote operation is performed. Once the standby completes the recovery process till the last committed data, it is promoted to be the primary server. DNS records are updated, and your application can connect to the promoted primary server. Your application can continue to write to the primary while a new standby server is established in the background, which doesn't impact the uptime.
-
-The following are the steps during forced failover:
-
- | **Step** | **Description** | **App downtime expected?** |
- | - | | -- |
- | 1 | Primary server is stopped shortly after receiving the failover request. | Yes |
- | 2 | Application encounters downtime as the primary server is down. | Yes |
- | 3 | Internal monitoring system detects the failure and initiates a failover to the standby server. | Yes |
- | 4 | Standby server enters recovery mode before being fully promoted as an independent server. | Yes |
- | 5 | The failover process waits for the standby recovery to complete. | Yes |
- | 6 | Once the server is up, the DNS record is updated with the same hostname but using the standby's IP address. | Yes |
- | 7 | Application can reconnect to the new primary server and resume the operation. | No |
- | 8 | A standby server in the preferred zone is established. | No |
- | 9 | Standby server starts to recover logs (from Azure BLOB) that it missed during its establishment. | No |
- | 10 | A steady state between the primary and the standby server is established. | No |
- | 11 | Forced failover process is complete. | No |
-
-Application downtime is expected to start after step #1 and persists until step #6 is completed. The rest of the steps happen in the background without impacting the application writes and commits.
-
->[!Important]
->The end-to-end failover process includes (a) failing over to the standby server after the primary failure and (b) establishing a new standby server in a steady state. As your application incurs downtime until the failover to the standby is complete, **please measure the downtime from your application/client perspective** instead of the overall end-to-end failover process.
--
-#### Considerations while performing forced failovers
-
-* The overall end-to-end operation time may be seen as longer than the actual downtime experienced by the application.
-
- >[!IMPORTANT]
- > Always observe the downtime from the application perspective!
-
-* Don't perform immediate, back-to-back failovers. Wait for at least 15-20 minutes between failovers, allowing the new standby server to be fully established.
-
-* It's recommended that your perform a forced failover during a low-activity period to reduce downtime.
--
-### Zone-down experience
-
-**Zonal**. To recover from a zone-level failure, you can [perform point-in-time restore](#point-in-time-restore-of-high-availability-servers) using the backup. You can choose a custom restore point with the latest time to restore the latest data. A new flexible server is deployed in another non-impacted zone. The time taken to restore depends on the previous backup and the volume of transaction logs to recover.
-
-For more information on point-in-time restore see [Backup and restore in Azure Database for PostgreSQL-Flexible Server]
-(/azure/postgresql/flexible-server/concepts-backup-restore).
-
-**Zone-redundant**. Flexible server is automatically failed over to the standby server within 60-120s with zero data loss.
--
-## Configurations without availability zones
-
-Although it's not recommended, you can configure you flexible server without high availability enabled. For flexible servers configured without high availability, the service provides local redundant storage with three copies of data, zone-redundant backup (in regions where it is supported), and built-in server resiliency to automatically restart a crashed server and relocate the server to another physical node. Uptime [SLA of 99.9%](https://azure.microsoft.com/support/legal/sla/postgresql) is offered in this configuration. During planned or unplanned failover events, if the server goes down, the service maintains the availability of the servers using the following automated procedure:
-
-1. A new compute Linux VM is provisioned.
-2. The storage with data files is mapped to the new virtual machine
-3. PostgreSQL database engine is brought online on the new virtual machine.
-
-The picture below shows the transition between VM and storage failure.
--
-## Disaster recovery: cross-region failover
-
-In the case of a region-wide disaster, Azure can provide protection from regional or large geography disasters with disaster recovery by making use of another region. For more information on Azure disaster recovery architecture, see [Azure to Azure disaster recovery architecture](../site-recovery/azure-to-azure-architecture.md).
-
-Flexible server provides features that protect data and mitigates downtime for your mission-critical databases during planned and unplanned downtime events. Built on top of the Azure infrastructure that offers robust resiliency and availability, flexible server offers business continuity features that provide fault-protection, address recovery time requirements, and reduce data loss exposure. As you architect your applications, you should consider the downtime tolerance - the recovery time objective (RTO), and data loss exposure - the recovery point objective (RPO). For example, your business-critical database requires stricter uptime than a test database.
-
-### Cross-region disaster recovery in multi-region geography
-
-#### Geo-redundant backup and restore
-
-Geo-redundant backup and restore provides the ability to restore your server in a different region in the event of a disaster. It also provides at least 99.99999999999999 percent (16 nines) durability of backup objects over a year.
-
-Geo-redundant backup can be configured only at the time of server creation. When the server is configured with geo-redundant backup, the backup data and transaction logs are copied to the paired region asynchronously through storage replication.
-
-For more information on geo-redundant backup and restore, see [geo-redundant backup and restore](/azure/postgresql/flexible-server/concepts-backup-restore#geo-redundant-backup-and-restore).
-
-#### Read replicas
-
-Cross region read replicas can be deployed to protect your databases from region-level failures. Read replicas are updated asynchronously using PostgreSQL's physical replication technology, and may lag the primary. Read replicas are supported in general purpose and memory optimized compute tiers.
-
-For more information on on read replica features and considerations, see [Read replicas](/azure/postgresql/flexible-server/concepts-read-replicas).
-
-#### Outage detection, notification, and management
-
-If your server is configured with geo-redundant backup, you can perform geo-restore in the paired region. A new server is provisioned and recovered to the last available data that was copied to this region.
-
-You can also use cross region read replicas. In the event of region failure you can perform disaster recovery operation by promoting your read replica to be a standalone read-writeable server. RPO is expected to be up to 5 minutes (data loss possible) except in the case of severe regional failure when the RPO can be close to the replication lag at the time of failure.
-
-For more information on unplanned downtime mitigation as well as recovery after regional disaster, see [Unplanned downtime mitigation](/azure/postgresql/flexible-server/concepts-business-continuity#unplanned-downtime-mitigation).
-----
-## Next steps
-> [!div class="nextstepaction"]
-> [Azure Database for PostgreSQL documentation](/azure/postgresql/)
-
-> [!div class="nextstepaction"]
-> [Reliability in Azure](availability-zones-overview.md)
reliability Reliability Postgresql Flexible Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-postgresql-flexible-server.md
+
+ Title: Reliability and high availability in PostgreSQL - Flexible Server
+titleSuffix: Azure Database for PostgreSQL - Flexible Server
+description: Find out about reliability and high availability in Azure Database for PostgreSQL - Flexible Server
+ Last updated : 08/24/2023
+ - references_regions
+ - subject-reliability
+
+<!--#Customer intent: I want to understand reliability support in Azure Database for PostgreSQL - Flexible Server so that I can respond to and/or avoid failures in order to minimize downtime and data loss. -->
+
+# High availability (Reliability) in Azure Database for PostgreSQL - Flexible Server
++
+This article describes high availability in Azure Database for PostgreSQL - Flexible Server, which includes [availability zones](#availability-zone-support) and [cross-region resiliency with disaster recovery](#disaster-recovery-cross-region-failover). For a more detailed overview of reliability in Azure, see [Azure reliability](/azure/architecture/framework/resiliency/overview).
+
+Azure Database for PostgreSQL - Flexible Server offers high availability support by provisioning a physically separate primary and standby replica, either within the same availability zone (zonal) or across availability zones (zone-redundant). This high availability model is designed to ensure that committed data is never lost in the case of failures. The model is also designed so that the database doesn't become a single point of failure in your software architecture. For more information on high availability and availability zone support, see [Availability zone support](#availability-zone-support).
+
+## Availability zone support
++
+Azure Database for PostgreSQL - Flexible Server supports both [zone-redundant and zonal models](availability-zones-service-support.md#azure-services-with-availability-zone-support) for high availability configurations. Both high availability configurations enable automatic failover capability with zero data loss during both planned and unplanned events.
+
+- **Zone-redundant**. Zone redundant high availability deploys a standby replica in a different zone with automatic failover capability. Zone redundancy provides the highest level of availability, but requires you to configure application redundancy across zones. For that reason, choose zone redundancy when you want protection from availability zone level failures and when latency across the availability zones is acceptable.
+
+ You can choose the region and the availability zones for both the primary and standby servers. The standby replica server is provisioned in the chosen availability zone in the same region, with a compute, storage, and network configuration similar to the primary server's. Data files and transaction log files (write-ahead logs, also known as WAL) are stored on locally redundant storage (LRS) within each availability zone, which automatically stores **three** data copies. A zone-redundant configuration provides physical isolation of the entire stack between the primary and standby servers.
+
+ :::image type="content" source="../postgresql/flexible-server/media/business-continuity/concepts-zone-redundant-high-availability-architecture.png" alt-text="Pictures illustrating redundant high availability architecture." lightbox="../postgresql/flexible-server/media/business-continuity/concepts-zone-redundant-high-availability-architecture.png":::
+
+- **Zonal**. Choose a zonal deployment when you want the highest level of availability within a single availability zone, with the lowest network latency. You can choose the region and the availability zone where your primary database server is deployed. A standby replica server is *automatically* provisioned and managed in the *same* availability zone - with similar compute, storage, and network configuration - as the primary server. A zonal configuration protects your databases from node-level failures and also helps with reducing application downtime during planned and unplanned downtime events. Data from the primary server is replicated to the standby replica in synchronous mode. In the event of any disruption to the primary server, the server is automatically failed over to the standby replica.
+
+ :::image type="content" source="../postgresql/flexible-server/media/business-continuity/concepts-same-zone-high-availability-architecture.png" alt-text="Pictures illustrating zonal high availability architecture." lightbox="../postgresql/flexible-server/media/business-continuity/concepts-same-zone-high-availability-architecture.png":::
+
+> [!NOTE]
+> Both zonal and zone-redundant deployment models architecturally behave the same. Various discussions in the following sections apply to both unless called out otherwise.
+
+### Prerequisites
+
+**Zone redundancy:**
+
+- The **zone-redundancy** option is only available in [regions that support availability zones](../postgresql/flexible-server/overview.md#azure-regions).
+
+- Zone-redundant high availability is **not** supported for:
+
+ - Azure Database for PostgreSQL - Single Server SKU.
+ - Burstable compute tier.
+ - Regions with single-zone availability.
+
+**Zonal:**
+
+- The **zonal** deployment option is available in all [Azure regions](../postgresql/flexible-server/overview.md#azure-regions) where you can deploy Flexible Server.
+
+### High availability features
+
+- A standby replica is deployed in the same VM configuration - including vCores, storage, network settings - as the primary server.
+
+- You can add availability zone support for an existing database server.
+
+- You can remove the standby replica by disabling high availability.
+
+- You can choose availability zones for your primary and standby database servers for zone-redundant availability.
+
+- Operations such as stop, start, and restart are performed on both primary and standby database servers at the same time.
+
+- In zone-redundant and zonal models, automatic backups are performed periodically from the primary database server. At the same time, the transaction logs are continuously archived in the backup storage from the standby replica. If the region supports availability zones, backup data is stored on zone-redundant storage (ZRS). In regions that don't support availability zones, backup data is stored on local redundant storage (LRS).
+
+- Clients always connect to the end hostname of the primary database server.
+
+- Any changes to the server parameters are also applied to the standby replica.
+
+- You can restart the server to pick up any static server parameter changes.
+
+- Periodic maintenance activities, such as minor version upgrades, happen at the standby first, and the service then fails over to the standby to reduce downtime.
+
+### High availability limitations
+
+- Due to synchronous replication to the standby server, especially with a zone-redundant configuration, applications can experience elevated write and commit latency.
+
+- Standby replica can't be used for read queries.
+
+- Depending on the workload and activity on the primary server, the failover process might take longer than 120 seconds due to the recovery involved at the standby replica before it can be promoted.
+
+- The standby server typically recovers WAL files at 40 MB/s. If your workload exceeds this limit, you may encounter extended time for the recovery to complete either during the failover or after establishing a new standby.
+
+- Configuring for availability zones induces some latency to writes and commits, with no impact on read queries. The performance impact varies depending on your workload. As a general guideline, the impact on writes and commits can be around 20-30%.
+
+- Restarting the primary database server also restarts the standby replica.
+
+- Configuring an extra standby isn't supported.
+
+- Customer-initiated management tasks can't be scheduled during the managed maintenance window.
+
+- Planned events such as scale compute and scale storage happen on the standby first and then on the primary server. Currently, the server doesn't fail over for these planned operations.
+
+- If logical decoding or logical replication is configured with an availability-configured Flexible Server, in the event of a failover to the standby server, the logical replication slots aren't copied over to the standby server.
+
+- Configuring availability zones between private (VNET) and public access isn't supported. You must configure availability zones within a VNET (spanned across availability zones within a region) or public access.
+
+- Availability zones are configured only within a single region. Availability zones can't be configured across regions.
+
+### SLA
+
+- **Zone-redundant** model offers uptime [SLA of 99.99%](https://azure.microsoft.com/support/legal/sla/postgresql).
+
+- **Zonal** model offers uptime [SLA of 99.95%](https://azure.microsoft.com/support/legal/sla/postgresql).
+
+### Create an Azure Database for PostgreSQL - Flexible Server with availability zone enabled
+
+To learn how to create an Azure Database for PostgreSQL - Flexible Server for high availability with availability zones, see [Quickstart: Create an Azure Database for PostgreSQL - Flexible Server in the Azure portal](/azure/postgresql/flexible-server/quickstart-create-server-portal).
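+
+If you prefer scripting to the portal, the same deployment can be sketched with the Azure CLI. This is a minimal illustrative example, not the authoritative procedure: the resource group, server name, region, zones, and SKU below are placeholder assumptions you'd replace with your own values.
+
+```bash
+# Create a flexible server with zone-redundant high availability
+# (requires a region that supports availability zones).
+az postgres flexible-server create \
+  --resource-group myresourcegroup \
+  --name mydemoserver \
+  --location eastus \
+  --tier GeneralPurpose \
+  --sku-name Standard_D2ds_v4 \
+  --high-availability ZoneRedundant \
+  --zone 1 \
+  --standby-zone 2
+```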
+
+### Availability zone redeployment and migration
+
+To learn how to enable or disable high availability configuration in your flexible server in both zone-redundant and zonal deployment models, see [Manage high availability in Flexible Server](../postgresql/flexible-server/how-to-manage-high-availability-portal.md).
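+
+As a rough CLI sketch of the same enable/disable operations (the server and resource group names are placeholders, and flag support can vary by CLI version):
+
+```bash
+# Enable zone-redundant high availability on an existing server.
+az postgres flexible-server update \
+  --resource-group myresourcegroup \
+  --name mydemoserver \
+  --high-availability ZoneRedundant
+
+# Disable high availability, which removes the standby replica.
+az postgres flexible-server update \
+  --resource-group myresourcegroup \
+  --name mydemoserver \
+  --high-availability Disabled
+```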
+
+### High availability components and workflow
+
+#### Transaction completion
+
+Application transaction-triggered writes and commits are first logged to the WAL on the primary server. The WAL is then streamed to the standby server using the Postgres streaming protocol. Once the logs are persisted on the standby server's storage, the standby acknowledges the primary server, and only then are the writes confirmed to the application. This extra round-trip adds latency to your application; the percentage of impact depends on the application. The acknowledgment process doesn't wait for the logs to be applied to the standby server. The standby server is permanently in recovery mode until it's promoted.
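+
+Because this is standard PostgreSQL physical streaming replication, you can observe it from the primary with the standard `pg_stat_replication` view. The following is an illustrative sketch only - the connection values are placeholders, and it assumes the service exposes the HA standby's WAL sender in this view:
+
+```bash
+# Inspect the streaming replication state and lag from the primary.
+psql "host=mydemoserver.postgres.database.azure.com port=5432 dbname=postgres user=myadmin sslmode=require" \
+  -c "SELECT application_name, state, sync_state, write_lag, flush_lag, replay_lag FROM pg_stat_replication;"
+```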
+
+#### Health check
+
+Flexible server health monitoring periodically checks the health of both the primary and standby servers. If health monitoring detects, after multiple pings, that the primary server isn't reachable, the service initiates an automatic failover to the standby server. The health monitoring algorithm is based on multiple data points to avoid false positives.
+
+#### Failover modes
+
+Flexible server supports two failover modes, [**Planned failover**](#planned-failover) and [**Unplanned failover**](#unplanned-failover). In both modes, once the replication is severed, the standby server runs the recovery before being promoted as a primary and opens for read/write. With automatic DNS entries updated with the new primary server endpoint, applications can connect to the server using the same endpoint. A new standby server is established in the background, so that your application can maintain connectivity.
+
+#### High availability status
+
+The health of the primary and standby servers is continuously monitored, and appropriate actions are taken to remediate issues, including triggering a failover to the standby server. The following table lists the possible high availability statuses:
+
+| **Status** | **Description** |
+| --- | --- |
+| **Initializing** | In the process of creating a new standby server. |
+| **Replicating Data** | After the standby is created, it's catching up with the primary. |
+| **Healthy** | Replication is in steady state and healthy. |
+| **Failing Over** | The database server is in the process of failing over to the standby. |
+| **Removing Standby** | In the process of deleting standby server. |
+| **Not Enabled** | Zone redundant high availability isn't enabled. |
+
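+You can also read this status programmatically. As a sketch with the Azure CLI (names are placeholders), the `show` command returns a `highAvailability` object whose `state` field carries the values listed in the table above:
+
+```bash
+az postgres flexible-server show \
+  --resource-group myresourcegroup \
+  --name mydemoserver \
+  --query "highAvailability" \
+  --output json
+```
+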
+> [!NOTE]
+> You can enable high availability during server creation or at a later time. If you're enabling or disabling high availability after server creation, it's recommended to do so when primary server activity is low.
+
+#### Steady-state operations
+
+PostgreSQL client applications are connected to the primary server using the DB server name. Application reads are served directly from the primary server. At the same time, commits and writes are confirmed to the application only after the log data is persisted on both the primary server and the standby replica. Due to this extra round-trip, applications can expect elevated latency for writes and commits. You can monitor the health of your high availability configuration in the portal.
++
+1. Clients connect to the flexible server and perform write operations.
+1. Changes are replicated to the standby site.
+1. Primary receives an acknowledgment.
+1. Writes/commits are acknowledged.
+
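+Because the server name always resolves to the current primary, a simple retry loop is often all a client needs to ride through a failover. A minimal illustrative sketch (connection values are placeholders; assumes `PGPASSWORD` is exported):
+
+```bash
+# Retry until the server - old or newly promoted primary - accepts connections.
+until psql "host=mydemoserver.postgres.database.azure.com port=5432 dbname=postgres user=myadmin sslmode=require" \
+           -c "SELECT pg_is_in_recovery();" >/dev/null 2>&1
+do
+  echo "Server not reachable yet; retrying in 5 seconds..."
+  sleep 5
+done
+```
+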
+#### Point-in-time restore of high availability servers
+
+For flexible servers configured with high availability, log data is replicated in real time to the standby server. Any user errors on the primary server - such as an accidental table drop or incorrect data updates - are replicated to the standby replica. So, you can't use the standby to recover from such logical errors. To recover from such errors, you have to perform a point-in-time restore from a backup. Using flexible server's point-in-time restore capability, you can restore to a time before the error occurred. For databases configured with high availability, a new database server is restored as a single-zone flexible server with a new, user-provided server name. You can use the restored server for a few use cases:
+
+- You can use the restored server for production and optionally enable zone-redundant high availability.
+
+- If you want to restore an object, export it from the restored database server and import it to your production database server.
+- If you want to clone your database server for testing and development purposes or to restore for any other purposes, you can perform the point-in-time restore.
+
+To learn how to do a point-in-time restore of a flexible server, see [Point-in-time restore of a flexible server](/azure/postgresql/flexible-server/how-to-restore-server-portal).
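+
+As a CLI sketch, the same restore can be initiated as follows; the target server name, source server, and timestamp are placeholders:
+
+```bash
+# Restore a new single-zone server to a point in time before the error.
+az postgres flexible-server restore \
+  --resource-group myresourcegroup \
+  --name mydemoserver-restored \
+  --source-server mydemoserver \
+  --restore-time "2023-08-24T13:10:00+00:00"
+```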
+
+### Failover Support
+
+#### Planned failover
+
+Planned downtime events include Azure scheduled periodic software updates and minor version upgrades. You can also use a planned failover to return the primary server to a preferred availability zone. When configured in high availability, these operations are first applied to the standby replica while the applications continue to access the primary server. Once the standby replica is updated, primary server connections are drained, and a failover is triggered, which activates the standby replica to be the primary with the same database server name. Client applications have to reconnect with the same database server name to the new primary server and can resume their operations. A new standby server is established in the same zone as the old primary.
+
+For other user-initiated operations such as scale-compute or scale-storage, the changes are applied on the standby first, followed by the primary. Currently, the service isn't failed over to the standby for these operations, and hence while the scale operation is carried out on the primary server, applications encounter a short downtime.
+
+You can also use this feature to fail over to the standby server with reduced downtime. For example, after an unplanned failover, your primary server could be in a different availability zone than your application. In that case, you can bring the primary server back to the previous zone to colocate it with your application.
+
+When executing this feature, the standby server is first prepared to ensure it's caught up with recent transactions, allowing the application to continue performing reads/writes. The standby is then promoted, and the connections to the primary are severed. Your application can continue to write to the primary while a new standby server is established in the background. The following are the steps involved with planned failover.
+
+| **Step** | **Description** | **App downtime expected?** |
+ | --- | --- | --- |
+ | 1 | Wait for the standby server to have caught-up with the primary. | No |
+ | 2 | Internal monitoring system initiates the failover workflow. | No |
+ | 3 | Application writes are blocked when the standby server is close to the primary log sequence number (LSN). | Yes |
+ | 4 | Standby server is promoted to be an independent server. | Yes |
+ | 5 | DNS record is updated with the new standby server's IP address. | Yes |
+ | 6 | Application reconnects and resumes its read/write operations with the new primary. | No |
+ | 7 | A new standby server in another zone is established. | No |
+ | 8 | Standby server starts to recover logs (from Azure BLOB) that it missed during its establishment. | No |
+ | 9 | A steady state between the primary and the standby server is established. | No |
+ | 10 | Planned failover process is complete. | No |
+
+Application downtime starts at step #3, and the application can resume operations after step #5. The rest of the steps happen in the background without affecting application writes and commits.
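+
+A planned failover can also be triggered on demand, for example to move the primary back to its preferred availability zone. A sketch with the Azure CLI (names are placeholders):
+
+```bash
+az postgres flexible-server restart \
+  --resource-group myresourcegroup \
+  --name mydemoserver \
+  --failover Planned
+```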
+
+> [!TIP]
+> With flexible server, you can optionally schedule Azure-initiated maintenance activities by choosing a 60-minute window on a day of your preference where the activities on the databases are expected to be low. Azure maintenance tasks such as patching or minor version upgrades would happen during that window. If you don't choose a custom window, a system allocated 1-hr window between 11 pm - 7 am local time is selected for your server.
+> These Azure-initiated maintenance activities are also performed on the standby replica for flexible servers that are configured with availability zones.
+
+For a list of possible planned downtime events, see [Planned downtime events](/azure/postgresql/flexible-server/concepts-business-continuity#planned-downtime-events).
+
+#### Unplanned failover
+
+Unplanned downtimes can occur as a result of unforeseen disruptions such as underlying hardware faults, networking issues, and software bugs. If a database server configured with high availability goes down unexpectedly, the standby replica is activated and clients can resume their operations. If the server isn't configured with high availability (HA) and the restart attempt fails, a new database server is automatically provisioned. While unplanned downtime can't be avoided, flexible server helps mitigate the downtime by automatically performing recovery operations without requiring human intervention.
+
+For information on unplanned failovers and downtime, including possible scenarios, see [Unplanned downtime mitigation](/azure/postgresql/flexible-server/concepts-business-continuity#unplanned-downtime-mitigation).
+
+#### Failover testing (forced failover)
+
+With a forced failover, you can simulate an unplanned outage scenario while running your production workload and observe your application downtime. You can also use a forced failover when your primary server becomes unresponsive.
+
+A forced failover brings the primary server down and initiates the failover workflow in which the standby promote operation is performed. Once the standby completes the recovery process up to the last committed data, it's promoted to be the primary server. DNS records are updated, and your application can connect to the promoted primary server. Your application can continue to write to the primary while a new standby server is established in the background, which doesn't affect the uptime.
+
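+As a sketch, a forced failover uses the same restart command with the `Forced` option (names are placeholders):
+
+```bash
+az postgres flexible-server restart \
+  --resource-group myresourcegroup \
+  --name mydemoserver \
+  --failover Forced
+```
+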
+The following are the steps during forced failover:
+
+ | **Step** | **Description** | **App downtime expected?** |
+ | --- | --- | --- |
+ | 1 | Primary server is stopped shortly after receiving the failover request. | Yes |
+ | 2 | Application encounters downtime as the primary server is down. | Yes |
+ | 3 | Internal monitoring system detects the failure and initiates a failover to the standby server. | Yes |
+ | 4 | Standby server enters recovery mode before being fully promoted as an independent server. | Yes |
+ | 5 | The failover process waits for the standby recovery to complete. | Yes |
+ | 6 | Once the server is up, the DNS record is updated with the same hostname but using the standby's IP address. | Yes |
+ | 7 | Application can reconnect to the new primary server and resume the operation. | No |
+ | 8 | A standby server in the preferred zone is established. | No |
+ | 9 | Standby server starts to recover logs (from Azure BLOB) that it missed during its establishment. | No |
+ | 10 | A steady state between the primary and the standby server is established. | No |
+ | 11 | Forced failover process is complete. | No |
+
+Application downtime is expected to start after step #1 and persists until step #6 is completed. The rest of the steps happen in the background without affecting the application writes and commits.
+
+> [!IMPORTANT]
+> The end-to-end failover process includes (a) failing over to the standby server after the primary failure and (b) establishing a new standby server in a steady state. As your application incurs downtime until the failover to the standby is complete, **please measure the downtime from your application/client perspective** instead of the overall end-to-end failover process.
+
+#### Considerations while performing forced failovers
+
+- The overall end-to-end operation time may appear longer than the actual downtime experienced by the application.
+
+ > [!IMPORTANT]
+ > Always observe the downtime from the application perspective!
+
+- Don't perform immediate, back-to-back failovers. Wait for at least 15-20 minutes between failovers, allowing the new standby server to be fully established.
+
+- It's recommended that you perform a forced failover during a low-activity period to reduce downtime.
+
+### Zone-down experience
+
+**Zonal**. To recover from a zone-level failure, you can [perform point-in-time restore](#point-in-time-restore-of-high-availability-servers) using the backup. You can choose a custom restore point with the latest time to restore the latest data. A new flexible server is deployed in another nonaffected zone. The time taken to restore depends on the previous backup and the volume of transaction logs to recover.
+
+For more information on point-in-time restore, see [Backup and restore in Azure Database for PostgreSQL - Flexible Server](/azure/postgresql/flexible-server/concepts-backup-restore).
+
+**Zone-redundant**. Flexible server automatically fails over to the standby server within 60-120 seconds, with zero data loss.
+
+## Configurations without availability zones
+
+Although it's not recommended, you can configure your flexible server without high availability enabled. For flexible servers configured without high availability, the service provides locally redundant storage with three copies of data, zone-redundant backup (in regions where it's supported), and built-in server resiliency to automatically restart a crashed server and relocate the server to another physical node. An uptime [SLA of 99.9%](https://azure.microsoft.com/support/legal/sla/postgresql) is offered in this configuration. During planned or unplanned failover events, if the server goes down, the service maintains the availability of the servers using the following automated procedure:
+
+1. A new compute Linux VM is provisioned.
+1. The storage with data files is mapped to the new virtual machine.
+1. PostgreSQL database engine is brought online on the new virtual machine.
+
+The picture below shows the transition between VM and storage failure.
++
+## Disaster recovery: cross-region failover
+
+In the case of a region-wide disaster, Azure can provide protection through disaster recovery that makes use of another region. For more information on Azure disaster recovery architecture, see [Azure to Azure disaster recovery architecture](../site-recovery/azure-to-azure-architecture.md).
+
+Flexible server provides features that protect data and mitigate downtime for your mission-critical databases during planned and unplanned downtime events. Built on top of Azure infrastructure that offers robust resiliency and availability, flexible server offers business continuity features that provide fault protection, address recovery time requirements, and reduce data loss exposure. As you architect your applications, you should consider the downtime tolerance (the recovery time objective, RTO) and the data loss exposure (the recovery point objective, RPO). For example, your business-critical database requires stricter uptime than a test database.
+
+### Cross-region disaster recovery in multi-region geography
+
+#### Geo-redundant backup and restore
+
+Geo-redundant backup and restore provides the ability to restore your server in a different region in the event of a disaster. It also provides at least 99.99999999999999 percent (16 nines) durability of backup objects over a year.
+
+Geo-redundant backup can be configured only at the time of server creation. When the server is configured with geo-redundant backup, the backup data and transaction logs are copied to the paired region asynchronously through storage replication.
+
+For more information on geo-redundant backup and restore, see [geo-redundant backup and restore](/azure/postgresql/flexible-server/concepts-backup-restore#geo-redundant-backup-and-restore).
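+
+As an illustrative sketch, a geo-restore into the paired region can be initiated with the Azure CLI, assuming the source server was created with geo-redundant backup enabled (all names and the target location are placeholders):
+
+```bash
+az postgres flexible-server geo-restore \
+  --resource-group myresourcegroup \
+  --name mydemoserver-georestored \
+  --source-server mydemoserver \
+  --location westus
+```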
+
+#### Read replicas
+
+Cross-region read replicas can be deployed to protect your databases from region-level failures. Read replicas are updated asynchronously using PostgreSQL's physical replication technology, and may lag behind the primary. Read replicas are supported in the General Purpose and Memory Optimized compute tiers.
+
+For more information on read replica features and considerations, see [Read replicas](/azure/postgresql/flexible-server/concepts-read-replicas).
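+
+As a sketch, a cross-region read replica can be created with the Azure CLI; the names and target region are placeholders, and the exact promotion subcommand varies by CLI version, so check `az postgres flexible-server replica --help` before relying on it:
+
+```bash
+# Create a read replica of an existing server in another region.
+az postgres flexible-server replica create \
+  --replica-name mydemoserver-replica \
+  --resource-group myresourcegroup \
+  --source-server mydemoserver \
+  --location westus
+```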
+
+#### Outage detection, notification, and management
+
+If your server is configured with geo-redundant backup, you can perform geo-restore in the paired region. A new server is provisioned and recovered to the last available data that was copied to this region.
+
+You can also use cross-region read replicas. In the event of a region failure, you can perform a disaster recovery operation by promoting your read replica to a standalone read-writeable server. The RPO is expected to be up to 5 minutes (data loss possible), except in the case of a severe regional failure, when the RPO can be close to the replication lag at the time of failure.
+
+For more information on unplanned downtime mitigation and recovery after regional disaster, see [Unplanned downtime mitigation](/azure/postgresql/flexible-server/concepts-business-continuity#unplanned-downtime-mitigation).
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Azure Database for PostgreSQL documentation](/azure/postgresql/)
+
+> [!div class="nextstepaction"]
+> [Reliability in Azure](availability-zones-overview.md)
remote-rendering Custom Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/remote-rendering/tutorials/unity/custom-models/custom-models.md
Title: Interfaces and custom models
-description: Add view controllers and ingest custom models to be rendered by Azure Remote Rendering
+description: Add view controllers and ingest custom models to render them with Azure Remote Rendering
Last updated 06/15/2020
In this tutorial, you learn how to:
## Get started with the Mixed Reality Toolkit (MRTK)
-The Mixed Reality Toolkit (MRTK) is a cross-platform toolkit for building mixed reality experiences. We'll use MRTK 2.5.1 for its interaction and visualization features.
+The Mixed Reality Toolkit (MRTK) is a cross-platform toolkit for building mixed reality experiences. We use MRTK 2.8.3 for its interaction and visualization features.
-To add MRTK, follow the [Required steps](https://microsoft.github.io/MixedRealityToolkit-Unity/version/releases/2.5.1/Documentation/Installation.html#required) listed in the [MRTK Installation Guide](https://microsoft.github.io/MixedRealityToolkit-Unity/version/releases/2.5.1/Documentation/Installation.html).
-
-Those steps are:
- - Even though it says "latest", please use version 2.5.1 from the MRTK release page.
- - We only use the *Foundation* package in this tutorial. The *Extensions*, *Tools*, and *Examples* packages are not required.
- - You should have done this step already in the first chapter, but now is a good time to double check!
- - You can add MRTK to a new scene and re-add your coordinator and model objects/scripts, or you can add MRTK to your existing scene using the *Mixed Reality Toolkit -> Add to Scene and Configure* menu command.
+The [official guide](/training/modules/learn-mrtk-tutorials/1-5-exercise-configure-resources?tabs=openxr) to import MRTK contains some steps we don't need to do. Only these three steps are necessary:
+ - Import the 'Mixed Reality Toolkit/Mixed Reality Toolkit Foundation' version 2.8.3 package to your project through the Mixed Reality Feature Tool ([Import MRTK](/training/modules/learn-mrtk-tutorials/1-5-exercise-configure-resources?tabs=openxr#import-the-mrtk-unity-foundation-package)).
+ - Run the configuration wizard of MRTK ([Configure MRTK](/training/modules/learn-mrtk-tutorials/1-5-exercise-configure-resources?tabs=openxr#configure-the-unity-project)).
+ - Add MRTK to the current scene ([Add to scene](/training/modules/learn-mrtk-tutorials/1-5-exercise-configure-resources?tabs=openxr#create-the-scene-and-configure-mrtk)). Use the *ARRMixedRealityToolkitConfigurationProfile* here instead of the profile suggested in the tutorial.
## Import assets used by this tutorial
-Starting in this chapter, we'll implement a simple [model-view-controller pattern](https://en.wikipedia.org/wiki/Model%E2%80%93view%E2%80%93controller) for much of the material covered. The *model* part of the pattern is the Azure Remote Rendering specific code and the state management related to Azure Remote Rendering. The *view* and *controller* parts of the pattern are implemented using MRTK assets and some custom scripts. It is possible to use the *model* in this tutorial without the *view-controller* implemented here. This separation allows you to easily integrate the code found in this tutorial into your own application where it will take over the *view-controller* part of the design pattern.
+Starting in this chapter, we'll implement a basic [model-view-controller pattern](https://en.wikipedia.org/wiki/Model%E2%80%93view%E2%80%93controller) for much of the material covered. The *model* part of the pattern is the Azure Remote Rendering specific code and the state management related to Azure Remote Rendering. The *view* and *controller* parts of the pattern are implemented using MRTK assets and some custom scripts. It's possible to use the *model* in this tutorial without the *view-controller* implemented here. This separation allows you to easily integrate the code found in this tutorial into your own application where it takes over the *view-controller* part of the design pattern.
-With the introduction of MRTK, there are a number of scripts, prefabs, and assets that can now be added to the project to support interactions and visual feedback. These assets, referred to as the **Tutorial Assets**, are bundled into a [Unity Asset Package](https://docs.unity3d.com/Manual/AssetPackages.html), which is included in the [Azure Remote Rendering GitHub](https://github.com/Azure/azure-remote-rendering) in '\Unity\TutorialAssets\TutorialAssets.unitypackage'.
+With the introduction of MRTK, there are multiple scripts, prefabs, and assets that can now be added to the project to support interactions and visual feedback. These assets, referred to as the **Tutorial Assets**, are bundled into a [Unity Asset Package](https://docs.unity3d.com/Manual/AssetPackages.html), which is included in the [Azure Remote Rendering GitHub](https://github.com/Azure/azure-remote-rendering) in '\Unity\TutorialAssets\TutorialAssets.unitypackage'.
1. Clone or download the git repository [Azure Remote Rendering](https://github.com/Azure/azure-remote-rendering); if you download it, extract the zip to a known location. 1. In your Unity project, choose *Assets -> Import Package -> Custom Package*.
-1. In the file explorer, navigate to the directory where you cloned or unzipped the Azure Remote Rendering repository, then select the .unitypackage found in **Unity -> TutorialAssets -> TutorialAssets.unitypackage**
+1. In the file explorer, navigate to the directory where you cloned or unzipped the Azure Remote Rendering repository, then select the `.unitypackage` found in **Unity -> TutorialAssets -> TutorialAssets.unitypackage**
1. Select the **Import** button to import the contents of the package into your project. 1. In the Unity Editor, select *Mixed Reality Toolkit -> Utilities -> Upgrade MRTK Standard Shader for Lightweight Render Pipeline* from the top menu bar and follow the prompts to upgrade the shader.
-Once MRTK and the Tutorial Assets are included in the project, we'll switch the MRTK profile to one more suitable for the tutorial.
+Once MRTK and the Tutorial Assets are set up, double-check that the correct profile is selected.
1. Select the **MixedRealityToolkit** GameObject in the scene hierarchy. 1. In the Inspector, under the **MixedRealityToolkit** component, switch the configuration profile to *ARRMixedRealityToolkitConfigurationProfile*. 1. Press *Ctrl+S* to save your changes.
-This will configure MRTK, primarily, with the default HoloLens 2 profiles. The provided profiles are pre-configured in the following ways:
+This step configures MRTK primarily with the default HoloLens 2 profiles. The provided profiles are preconfigured in the following ways:
- Turn off the profiler (Press 9 to toggle it on/off, or say "Show/Hide Profiler" on device). - Turn off the eye gaze cursor. - Enable Unity mouse clicks, so you can click MRTK UI elements with the mouse instead of the simulated hand. ## Add the App Menu
-Most of the view controllers in this tutorial operate against abstract base classes instead of against concrete classes. This pattern provides more flexibility and allows us to provide the view controllers for you, while still helping you learn the Azure Remote Rendering code. For simplicity, the **RemoteRenderingCoordinator** class does not have an abstract class provided and its view controller operates directly against the concrete class.
+Most of the view controllers in this tutorial operate against abstract base classes instead of against concrete classes. This pattern provides more flexibility and allows us to provide the view controllers for you, while still helping you learn the Azure Remote Rendering code. For simplicity, the **RemoteRenderingCoordinator** class doesn't have an abstract class provided and its view controller operates directly against the concrete class.
-You can now add the prefab **AppMenu** to the scene, for visual feedback of the current session state. This view controller will "unlock" more sub menu view controllers as we implement and integrate more ARR features into the scene. For now, the **AppMenu** will have a visual indication of the ARR state and present the modal panel that the user uses to authorize the application to connect to ARR.
+You can now add the prefab **AppMenu** to the scene, for visual feedback of the current session state. The **AppMenu** also presents the modal panel that the user uses to authorize the application to connect to ARR.
1. Locate the **AppMenu** prefab in *Assets/RemoteRenderingTutorial/Prefabs/AppMenu* 1. Drag the **AppMenu** prefab into the scene.
-1. You'll likely see a dialog for **TMP Importer**, since this is the first time we're including *Text Mesh Pro* assets in the scene. Follow the prompts to **Import TMP Essentials**. Then close the importer dialog, the examples and extras are not needed.
+1. If you see a dialog for **TMP Importer**, follow the prompts to **Import TMP Essentials**. Then close the importer dialog, as the examples and extras aren't needed.
1. The **AppMenu** is configured to automatically hook up and provide the modal for consenting to connecting to a Session, so we can remove the bypass placed earlier. On the **RemoteRenderingCoordinator** GameObject, remove the bypass for authorization we implemented previously, by pressing the '-' button on the **On Requesting Authorization** event. ![Remove bypass](./media/remove-bypass-event.png).
You can now add the prefab **AppMenu** to the scene, for visual feedback of the
1. Test the view controller by pressing **Play** in the Unity Editor. 1. In the Editor, now that MRTK is configured, you can use the WASD keys to change the position of your view, and hold the right mouse button while moving the mouse to change your view direction. Try "driving" around the scene a bit to get a feel for the controls. 1. On device, you can raise your palm up to summon the **AppMenu**; in the Unity Editor, use the hotkey 'M'.
-1. If you've lost sight of the menu, press the 'M' key to summon the menu. The menu will be placed near the camera for easy interaction.
-1. The authorization will now show as a request to the right of the **AppMenu**, from now on, you'll use this to authorize the app to manage remote rendering sessions.
+1. If you've lost sight of the menu, press the 'M' key to summon the menu. The menu is placed near the camera for easy interaction.
+1. The **AppMenu** presents an authorization request UI element to its right. From now on, you should use this UI element to authorize the app to manage remote rendering sessions.
![UI authorize](./media/authorize-request-ui.png)
You can now add the prefab **AppMenu** to the scene, for visual feedback of the
## Manage model state
-Now we'll implement a new script, **RemoteRenderedModel** that is for tracking state, responding to events, firing events, and configuration. Essentially, **RemoteRenderedModel** stores the remote path for the model data in `modelPath`. It will listen for state changes in the **RemoteRenderingCoordinator** to see if it should automatically load or unload the model it defines. The GameObject that has the **RemoteRenderedModel** attached to it will be the local parent for the remote content.
+We need a new script called **RemoteRenderedModel** that is responsible for tracking state, responding to events, firing events, and configuration. Essentially, **RemoteRenderedModel** stores the remote path for the model data in `modelPath`. It listens for state changes in the **RemoteRenderingCoordinator** to see if it should automatically load or unload the model it defines. The GameObject that has the **RemoteRenderedModel** attached to it is the local parent for the remote content.
-Notice that the **RemoteRenderedModel** script implements **BaseRemoteRenderedModel**, included from **Tutorial Assets**. This will allow the remote model view controller to bind with your script.
+Notice that the **RemoteRenderedModel** script implements **BaseRemoteRenderedModel**, included from **Tutorial Assets**. This connection allows the remote model view controller to bind with your script.
1. Create a new script named **RemoteRenderedModel** in the same folder as **RemoteRenderingCoordinator**. Replace the entire contents with the following code:
Notice that the **RemoteRenderedModel** script implements **BaseRemoteRenderedMo
} ```
-In the most basic terms, **RemoteRenderedModel** holds the data needed to load a model (in this case the SAS or *builtin://* URI) and tracks the remote model state. When it's time to load, the `LoadModel` method is called on **RemoteRenderingCoordinator** and the Entity containing the model is returned for reference and unloading.
+In the most basic terms, **RemoteRenderedModel** holds the data needed to load a model (in this case the SAS or *builtin://* URI) and tracks the remote model state. When it's time to load the model, the `LoadModel` method is called on **RemoteRenderingCoordinator**, and the Entity containing the model is returned for reference and unloading.
## Load the Test Model
-Let's test the new script by loading the test model again. We'll add a Game Object to contain the script and be a parent to the test model. We'll also create a virtual stage that contains the model. The stage will stay fixed relative to the real world using a [WorldAnchor](/windows/mixed-reality/develop/unity/spatial-anchors-in-unity?tabs=worldanchor). We use a fixed stage so that the model itself can still be moved around later on.
+Let's test the new script by loading the test model again. For this test, we need a Game Object to contain the script and be a parent to the test model, and we also need a virtual stage that contains the model. The stage stays fixed relative to the real world using a [WorldAnchor](/windows/mixed-reality/develop/unity/spatial-anchors-in-unity?tabs=worldanchor). We use a fixed stage so that the model itself can still be moved around later on.
1. Create a new empty Game Object in the scene and name it **ModelStage**. 1. Add a World Anchor component to **ModelStage**
Let's test the new script by loading the test model again. We'll add a Game Obje
1. Ensure **AutomaticallyLoad** is turned on. 1. Press **Play** in the Unity Editor to test the application.
-1. Grant authorization by clicking the *Connect* button to allow the app to create a session and it will connect to a Session and automatically load the model.
+1. Grant authorization by clicking the *Connect* button to allow the app to create a session, connect to it, and automatically load the model.
-Watch the Console as the application progresses through its states. Keep in mind, some states may take some time to complete, and won't show progress. Eventually, you'll see the logs from the model loading and then the test model will be rendered in the scene.
+Watch the Console as the application progresses through its states. Keep in mind that some states may take some time to complete, and there might be no progress updates for a while. Eventually, you see logs from the model loading, and shortly after, the rendered test model appears in the scene.
-Try moving and rotating the **TestModel** GameObject via the Transform in the Inspector, or in the Scene view. You'll see the model move and rotate it in the Game view.
+Try moving and rotating the **TestModel** GameObject via the Transform in the Inspector, or in the Scene view and observe the transformations in the Game view.
![Unity Log](./media/unity-loading-log.png) ## Provision Blob Storage in Azure and custom model ingestion
-Now we can try loading your own model. To do that, you'll need to configure Blob Storage and on Azure, upload and convert a model, then we'll load the model using the **RemoteRenderedModel** script. The custom model loading steps can be safely skipped if you don't have your own model to load at this time.
+Now we can try loading your own model. To do that, you need to configure Blob Storage on Azure, upload and convert a model, and then load the model using the **RemoteRenderedModel** script. The custom model loading steps can be safely skipped if you don't have your own model to load at this time.
-Follow the steps specified in the [Quickstart: Convert a model for rendering](../../../quickstarts/convert-model.md). Skip the **Insert new model into Quickstart Sample App** section for the purpose of this tutorial. Once you have your ingested model's *Shared Access Signature (SAS)* URI, continue to the next step below.
+Follow the steps specified in the [Quickstart: Convert a model for rendering](../../../quickstarts/convert-model.md). Skip the **Insert new model into Quickstart Sample App** section for this tutorial. Once you have your ingested model's *Shared Access Signature (SAS)* URI, continue.
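If you work from the command line, the SAS URI for the converted model can also be produced with the Azure CLI. A minimal sketch, assuming placeholder names for the storage account (`arrstorage`), the output container (`arroutput`), and the converted model (`model.arrAsset`), and that you're authorized to the storage account:

```bash
# Generate a read-only SAS URI for the converted model.
# Account, container, blob name, and expiry are placeholders.
az storage blob generate-sas \
    --account-name "arrstorage" \
    --container-name "arroutput" \
    --name "model.arrAsset" \
    --permissions r \
    --expiry "2030-01-01T00:00Z" \
    --https-only \
    --full-uri \
    --output tsv
```

The printed URI is what goes into the `Model Path` field in the next section.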
## Load and render a custom model
Follow the steps specified in the [Quickstart: Convert a model for rendering](..
![Add RemoteRenderedModel component](./media/add-remote-rendered-model-script.png)

1. Fill in the `Model Display Name` with an appropriate name for your model.
-1. Fill in the `Model Path` with the model's *Shared Access Signature (SAS)* URI you created in the ingestion steps above.
+1. Fill in the `Model Path` with the model's *Shared Access Signature (SAS)* URI you created in the [Provision Blob Storage in Azure and custom model ingestion](#provision-blob-storage-in-azure-and-custom-model-ingestion) step.
1. Position the GameObject in front of the camera, at position **x = 0, y = 0, z = 3.**
1. Ensure **AutomaticallyLoad** is turned on.
1. Press **Play** in the Unity Editor to test the application.
- You will see the Console begin to populate with the current state, and eventually, model loading progress messages. Your custom model will then load into the scene.
+ Once the session is connected, the console shows the current session state and then the model loading progress messages.
-1. Remove your custom model object from the scene. The best experience for this tutorial will be using the test model. While multiple models are certainly supported in ARR, this tutorial was written to best support a single remote model at a time.
+1. Remove your custom model object from the scene. The best experience for this tutorial is with the test model. While multiple models are supported in ARR, this tutorial was written to best support a single remote model at a time.
## Next steps
-You can now load your own models into Azure Remote Rendering and view them in your application! Next, we'll guide you through manipulating your models.
+You can now load your own models into Azure Remote Rendering and view them in your application! Next, we guide you through manipulating your models.
> [!div class="nextstepaction"]
> [Next: Manipulating models](../manipulate-models/manipulate-models.md)
remote-rendering Manipulate Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/remote-rendering/tutorials/unity/manipulate-models/manipulate-models.md
The bounds of a model are defined by the box that contains the entire model - ju
> [!NOTE]
> If you see an error in Visual Studio claiming *Feature 'X' is not available in C# 6. Please use language version 7.0 or greater*, this error can be safely ignored. It's related to Unity's solution and project generation.
- This script should be added to the same GameObject as the script that implements **BaseRemoteRenderedModel**. In this case, that means **RemoteRenderedModel**. Similar to previous scripts, this initial code will handle all the state changes, events, and data related to remote bounds.
+ This script should be added to the same GameObject as the script that implements **BaseRemoteRenderedModel**. In this case, that means **RemoteRenderedModel**. Similar to previous scripts, this initial code handles all the state changes, events, and data related to remote bounds.
There is only one method left to implement: **QueryBounds**. **QueryBounds** fetches the bounds asynchronously, takes the result of the query and applies it to the local **BoxCollider**.
The bounds of a model are defined by the box that contains the entire model - ju
} ```
- We'll check the query result to see if it was successful. If yes, convert and apply the returned bounds in a format that the **BoxCollider** can accept.
+ We check the query result to see if it was successful. If it was, we convert the returned bounds into a format that the **BoxCollider** can accept and apply them.
-Now, when the **RemoteBounds** script is added to the same game object as the **RemoteRenderedModel**, a **BoxCollider** will be added if needed and when the model reaches its `Loaded` state, the bounds will automatically be queried and applied to the **BoxCollider**.
+Now, when the **RemoteBounds** script is added to the same game object as the **RemoteRenderedModel**, a **BoxCollider** is added if needed. When the model reaches its `Loaded` state, the bounds are automatically queried and applied to the **BoxCollider**.
1. Using the **TestModel** GameObject created previously, add the **RemoteBounds** component.
1. Confirm the script is added.
This tutorial is using MRTK for object interaction. Most of the MRTK specific im
1. Press Unity's Play button to play the scene and open the **Model Tools** menu inside the **AppMenu**.

   ![View controller](./media/model-with-view-controller.png)
-The **AppMenu** has a sub menu **Model Tools** that implements a view controller for binding with the model. When the GameObject contains a **RemoteBounds** component, the view controller will add a [**BoundingBox**](https://microsoft.github.io/MixedRealityToolkit-Unity/Documentation/README_BoundingBox.html) component, which is an MRTK component that renders a bounding box around an object with a **BoxCollider**. A [**ObjectManipulator**](https://microsoft.github.io/MixedRealityToolkit-Unity/version/releases/2.5.1/api/Microsoft.MixedReality.Toolkit.UI.ObjectManipulator.html), which is responsible for hand interactions. These scripts combined will allow us to move, rotate, and scale the remotely rendered model.
+The **AppMenu** has a submenu **Model Tools** that implements a view controller for binding with the model. When the GameObject contains a **RemoteBounds** component, the view controller adds a [**BoundingBox**](https://microsoft.github.io/MixedRealityToolkit-Unity/Documentation/README_BoundingBox.html) component, which is an MRTK component that renders a bounding box around an object with a **BoxCollider**, and an [**ObjectManipulator**](/windows/mixed-reality/mrtk-unity/mrtk2/features/ux-building-blocks/object-manipulator), which is responsible for hand interactions. Combined, these scripts allow us to move, rotate, and scale the remotely rendered model.
1. Move your mouse to the Game panel and click inside it to give it focus.
1. Using [MRTK's hand simulation](https://microsoft.github.io/MixedRealityToolkit-Unity/Documentation/InputSimulation/InputSimulationService.html#hand-simulation), press and hold the left Shift key.
role-based-access-control Built In Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/built-in-roles.md
The following table provides a brief description of each built-in role. Click th
> | [Contributor](#contributor) | Grants full access to manage all resources, but does not allow you to assign roles in Azure RBAC, manage assignments in Azure Blueprints, or share image galleries. | b24988ac-6180-42a0-ab88-20f7382dd24c |
> | [Owner](#owner) | Grants full access to manage all resources, including the ability to assign roles in Azure RBAC. | 8e3af657-a8ff-443c-a75c-2fe8c4bcb635 |
> | [Reader](#reader) | View all resources, but does not allow you to make any changes. | acdd72a7-3385-48ef-bd42-f606fba81ae7 |
+> | [Role Based Access Control Administrator (Preview)](#role-based-access-control-administrator-preview) | Manage access to Azure resources by assigning roles using Azure RBAC. This role does not allow you to manage access using other ways, such as Azure Policy. | f58310d9-a9f6-439a-9e8d-f62e7b41a168 |
> | [User Access Administrator](#user-access-administrator) | Lets you manage user access to Azure resources. | 18d7d88d-d35e-4fb5-a5c3-7773c20a72d9 |
> | **Compute** | | |
> | [Classic Virtual Machine Contributor](#classic-virtual-machine-contributor) | Lets you manage classic virtual machines, but not access to them, and not the virtual network or storage account they're connected to. | d73bb868-a0df-4d4d-bd69-98a00b01fccb |
View all resources, but does not allow you to make any changes. [Learn more](rba
"type": "Microsoft.Authorization/roleDefinitions" } ```
+### Role Based Access Control Administrator (Preview)
+
+Manage access to Azure resources by assigning roles using Azure RBAC. This role does not allow you to manage access using other ways, such as Azure Policy.
+
+> [!div class="mx-tableFixed"]
+> | Actions | Description |
+> | | |
+> | [Microsoft.Authorization](resource-provider-operations.md#microsoftauthorization)/roleAssignments/write | Create a role assignment at the specified scope. |
+> | [Microsoft.Authorization](resource-provider-operations.md#microsoftauthorization)/roleAssignments/delete | Delete a role assignment at the specified scope. |
+> | */read | Read resources of all types, except secrets. |
+> | [Microsoft.Support](resource-provider-operations.md#microsoftsupport)/* | Create and update a support ticket |
+> | **NotActions** | |
+> | *none* | |
+> | **DataActions** | |
+> | *none* | |
+> | **NotDataActions** | |
+> | *none* | |
+
+```json
+{
+ "assignableScopes": [
+ "/"
+ ],
+ "description": "Manage access to Azure resources by assigning roles using Azure RBAC. This role does not allow you to manage access using other ways, such as Azure Policy.",
+ "id": "/providers/Microsoft.Authorization/roleDefinitions/f58310d9-a9f6-439a-9e8d-f62e7b41a168",
+ "name": "f58310d9-a9f6-439a-9e8d-f62e7b41a168",
+ "permissions": [
+ {
+ "actions": [
+ "Microsoft.Authorization/roleAssignments/write",
+ "Microsoft.Authorization/roleAssignments/delete",
+ "*/read",
+ "Microsoft.Support/*"
+ ],
+ "notActions": [],
+ "dataActions": [],
+ "notDataActions": []
+ }
+ ],
+ "roleName": "Role Based Access Control Administrator (Preview)",
+ "roleType": "BuiltInRole",
+ "type": "Microsoft.Authorization/roleDefinitions"
+}
+```
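As a sketch, this role can be assigned like any other built-in role by using the Azure CLI; the assignee and scope below are placeholders:

```bash
# Assign Role Based Access Control Administrator (Preview) at resource group scope.
# The assignee and scope values are placeholders.
az role assignment create \
    --role "f58310d9-a9f6-439a-9e8d-f62e7b41a168" \
    --assignee "user@contoso.com" \
    --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
```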
### User Access Administrator
role-based-access-control Elevate Access Global Admin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/elevate-access-global-admin.md
You should remove this elevated access once you have made the changes you need t
Follow these steps to elevate access for a Global Administrator using the Azure portal.
-1. Sign in to the [Azure portal](https://portal.azure.com) or the [Azure Active Directory admin center](https://aad.portal.azure.com) as a Global Administrator.
+1. Sign in to the [Azure portal](https://portal.azure.com) as a Global Administrator.
If you are using Azure AD Privileged Identity Management, [activate your Global Administrator role assignment](../active-directory/privileged-identity-management/pim-how-to-activate-role.md).
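The elevation itself is exposed through the `elevateAccess` REST API, so it can also be triggered from the command line. A minimal sketch using the Azure CLI's generic `az rest` command, assuming you're signed in as the Global Administrator:

```bash
# Elevate the signed-in Global Administrator to User Access Administrator
# at root scope (/). The API version shown is the commonly documented one.
az rest --method post --url "/providers/Microsoft.Authorization/elevateAccess?api-version=2016-07-01"
```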
role-based-access-control Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/policy-reference.md
Title: Built-in policy definitions for Azure RBAC description: Lists Azure Policy built-in policy definitions for Azure RBAC. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2023 Last updated : 08/30/2023
role-based-access-control Role Assignments Steps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/role-assignments-steps.md
When you assign a role at a parent scope, those permissions are inherited to the
[!INCLUDE [Scope for Azure RBAC least privilege](../../includes/role-based-access-control/scope-least.md)] For more information, see [Understand scope](scope-overview.md).
-## Step 4. Check your prerequisites
+## Step 4: Check your prerequisites
To assign roles, you must be signed in with a user that's assigned a role with the role assignments write permission, such as [Owner](built-in-roles.md#owner) or [User Access Administrator](built-in-roles.md#user-access-administrator), at the scope where you're trying to assign the role. Similarly, to remove a role assignment, you must have the role assignments delete permission.
If your user account doesn't have permission to assign a role within your subscr
If you are using a service principal to assign roles, you might get the error "Insufficient privileges to complete the operation." This error is likely because Azure is attempting to look up the assignee identity in Azure Active Directory (Azure AD) and the service principal cannot read Azure AD by default. In this case, you need to grant the service principal permissions to read data in the directory. Alternatively, if you are using Azure CLI, you can create the role assignment by using the assignee object ID to skip the Azure AD lookup. For more information, see [Troubleshoot Azure RBAC](troubleshooting.md).
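As a sketch of that workaround, the Azure CLI can create the assignment directly from the assignee's object ID so that no Azure AD lookup is needed; the object ID, role, and scope below are placeholders:

```bash
# Create a role assignment by object ID to skip the Azure AD lookup.
# The object ID, role, and scope are placeholders.
az role assignment create \
    --assignee-object-id "22222222-2222-2222-2222-222222222222" \
    --assignee-principal-type ServicePrincipal \
    --role "Reader" \
    --scope "/subscriptions/<subscription-id>"
```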
-## Step 5. Assign role
+## Step 5: Assign role
Once you know the security principal, role, and scope, you can assign the role. You can assign roles using the Azure portal, Azure PowerShell, Azure CLI, Azure SDKs, or REST APIs.
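For example, a minimal Azure CLI sketch with placeholder values:

```bash
# Assign a role to a user at resource group scope. All values are placeholders.
az role assignment create \
    --assignee "user@contoso.com" \
    --role "Virtual Machine Contributor" \
    --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
```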
role-based-access-control Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure RBAC description: Lists Azure Policy Regulatory Compliance controls available for Azure role-based access control (Azure RBAC). These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/03/2023 Last updated : 08/25/2023
role-based-access-control Tutorial Role Assignments Group Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/tutorial-role-assignments-group-powershell.md
-+ Last updated 02/02/2019
role-based-access-control Tutorial Role Assignments User Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/tutorial-role-assignments-user-powershell.md
-+ Last updated 02/02/2019
route-server Expressroute Vpn Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/route-server/expressroute-vpn-support.md
Previously updated : 08/14/2023- Last updated : 08/15/2023 # Azure Route Server support for ExpressRoute and Azure VPN
For example, in the following diagram:
You can also replace the SDWAN appliance with Azure VPN gateway. Since Azure VPN and ExpressRoute gateways are fully managed, you only need to enable the route exchange for the two on-premises networks to talk to each other.
+If you enable BGP on the VPN gateway, the gateway learns *On-premises 1* routes dynamically over BGP. For more information, see [How to configure BGP for Azure VPN Gateway](../vpn-gateway/bgp-howto.md). If you don't enable BGP on the VPN gateway, the gateway learns *On-premises 1* routes that are defined in the local network gateway of *On-premises 1*. For more information, see [Create a local network gateway](../vpn-gateway/tutorial-site-to-site-portal.md#LocalNetworkGateway). Whether you enable BGP on the VPN gateway or not, the gateway advertises the routes it learns to the Route Server if route exchange is enabled. For more information, see [Configure route exchange](quickstart-configure-route-server-portal.md#configure-route-exchange).
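As a sketch, route exchange (surfaced as **branch-to-branch** in the portal) can also be toggled with the Azure CLI; the resource names below are placeholders:

```bash
# Enable route exchange (branch-to-branch) on an existing Route Server.
# The Route Server and resource group names are placeholders.
az network routeserver update \
    --name "myRouteServer" \
    --resource-group "myResourceGroup" \
    --allow-b2b-traffic true
```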
+ > [!IMPORTANT]
-> Azure VPN gateway must be configured in [**active-active**](../vpn-gateway/vpn-gateway-activeactive-rm-powershell.md) mode and have the ASN set to 65515. It's not necessary to have BGP enabled on the VPN gateway.
+> Azure VPN gateway must be configured in [**active-active**](../vpn-gateway/vpn-gateway-activeactive-rm-powershell.md) mode and have the ASN set to 65515. It's not a requirement to have BGP enabled on the VPN gateway to communicate with the Route Server.
-> [!IMPORTANT]
+> [!NOTE]
> When the same route is learned over ExpressRoute, Azure VPN or an SDWAN appliance, the ExpressRoute network will be preferred.

## Next steps
route-server Next Hop Ip https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/route-server/next-hop-ip.md
Previously updated : 07/25/2023- Last updated : 08/21/2023 # Next Hop IP support
-With the support for Next Hop IP in Azure Route Server, you can peer with network virtual appliances (NVAs) that are deployed behind an Azure internal load balancer. The internal load balancer lets you set up active-passive connectivity scenarios and leverage load balancing to improve connectivity performance.
+With the support for Next Hop IP in Azure Route Server, you can peer with network virtual appliances (NVAs) that are deployed behind an Azure internal load balancer. The internal load balancer lets you set up active-passive connectivity scenarios and use load balancing to improve connectivity performance.
:::image type="content" source="./media/next-hop-ip/route-server-next-hop.png" alt-text="Diagram of two NVAs behind a load balancer and a Route Server.":::
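A BGP peering is still set up with each NVA instance individually, even when the NVAs sit behind the load balancer. A minimal Azure CLI sketch with placeholder names, address, and ASN:

```bash
# Peer the Route Server with one NVA instance. All values are placeholders.
az network routeserver peering create \
    --routeserver "myRouteServer" \
    --resource-group "myResourceGroup" \
    --name "nva1" \
    --peer-ip "10.0.1.4" \
    --peer-asn 65001
```

With Next Hop IP, the NVA can then advertise routes whose next hop is the load balancer's frontend IP instead of its own address.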
route-server Quickstart Configure Route Server Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/route-server/quickstart-configure-route-server-portal.md
Previously updated : 08/11/2022 Last updated : 08/11/2023
route-server Route Server Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/route-server/route-server-faq.md
Previously updated : 08/14/2023 Last updated : 08/18/2023 # Azure Route Server frequently asked questions (FAQ)
No, Azure Route Server supports only 16-bit (2 bytes) ASNs.
If the route has the same AS path length, Azure Route Server will program multiple copies of the route, each with a different next hop, to the virtual machines (VMs) in the virtual network. When a VM sends traffic to the destination of this route, the VM host uses Equal-Cost Multi-Path (ECMP) routing. However, if one NVA sends the route with a shorter AS path length than other NVAs, Azure Route Server will only program the route that has the next hop set to this NVA to the VMs in the virtual network.
+### Does creating a Route Server affect the operation of existing virtual network gateways (VPN or ExpressRoute)?
+
+Yes. When you create or delete a Route Server in a virtual network that contains a virtual network gateway (ExpressRoute or VPN), expect downtime until the operation is complete. If you have an ExpressRoute circuit connected to the virtual network where you're creating or deleting the Route Server, the downtime doesn't affect the ExpressRoute circuit or its connections to other virtual networks.
+
### Does Azure Route Server exchange routes by default between NVAs and the virtual network gateways (VPN or ExpressRoute)?

No. By default, Azure Route Server doesn't propagate routes it receives from an NVA and a virtual network gateway to each other. The Route Server exchanges these routes after you enable **branch-to-branch** in it.
You can still use Route Server to direct traffic between subnets in different vi
No, Azure Route Server provides transit only between ExpressRoute and Site-to-Site (S2S) VPN gateway connections (when enabling the *branch-to-branch* setting).
+### Can I create an Azure Route Server in a spoke VNet that's connected to a Virtual WAN hub?
+
+No. The spoke VNet can't have a Route Server if it's connected to the virtual WAN hub.
+
## Limitations

### How many Azure Route Servers can I create in a virtual network?
No, Azure Route Server doesn't support configuring a user defined route (UDR) on
No, Azure Route Server doesn't support network security group association to the ***RouteServerSubnet*** subnet.
-### <a name = "limitations"></a>What are Azure Route Server limits?
+### <a name = "limits"></a>What are Azure Route Server limits?
Azure Route Server has the following limits (per deployment).
sap Configure Control Plane https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/configure-control-plane.md
Title: Configure control plane for automation framework
-description: Configure your deployment control plane for the SAP on Azure Deployment Automation Framework.
+description: Configure your deployment control plane for SAP Deployment Automation Framework.
# Configure the control plane
-The control plane for the [SAP on Azure Deployment Automation Framework](deployment-framework.md) consists of the following components:
+The control plane for [SAP Deployment Automation Framework](deployment-framework.md) consists of the following components:
+
- Deployer
- SAP library

## Deployer
-The [deployer](deployment-framework.md#deployment-components) is the execution engine of the [SAP automation framework](deployment-framework.md). It's a pre-configured virtual machine (VM) that is used for executing Terraform and Ansible commands. When using Azure DevOps the deployer is a self-hosted agent.
+The [deployer](deployment-framework.md#deployment-components) is the execution engine of [SAP Deployment Automation Framework](deployment-framework.md). It's a preconfigured virtual machine (VM) that's used for running Terraform and Ansible commands. When you use Azure DevOps, the deployer is a self-hosted agent.
-The configuration of the deployer is performed in a Terraform tfvars variable file.
+The configuration of the deployer is performed in a Terraform `tfvars` variable file.
-## Terraform Parameters
+## Terraform parameters
-This table shows the Terraform parameters, these parameters need to be entered manually if not using the deployment scripts
+This table shows the Terraform parameters. These parameters need to be entered manually if you aren't using the deployment scripts.
> [!div class="mx-tdCol2BreakAll "]
> | Variable | Description | Type |
> | -- | | - |
-> | `tfstate_resource_id` | Azure resource identifier for the storage account in the SAP Library that contains the Terraform state files | Required |
-
+> | `tfstate_resource_id` | Azure resource identifier for the storage account in the SAP library that contains the Terraform state files | Required |
-### Environment Parameters
+### Environment parameters
This table shows the parameters that define the resource naming.

> [!div class="mx-tdCol2BreakAll "]
> | Variable | Description | Type | Notes |
> | - | - | - | - |
-> | `environment` | Identifier for the control plane (max 5 chars) | Mandatory | For example, `PROD` for a production environment and `NP` for a non-production environment. |
-> | `location` | The Azure region in which to deploy. | Required | Use lower case |
-> | 'name_override_file' | Name override file | Optional | see [Custom naming](naming-module.md) |
-> | 'place_delete_lock_on_resources | Place a delete lock on the key resources | Optional |
-### Resource Group
+> | `environment` | Identifier for the control plane (maximum of five characters). | Mandatory | For example, `PROD` for a production environment and `NP` for a nonproduction environment. |
+> | `location` | Azure region in which to deploy. | Required | Use lowercase. |
+> | `name_override_file` | Name override file. | Optional | See [Custom naming](naming-module.md). |
+> | `place_delete_lock_on_resources` | Place a delete lock on the key resources. | Optional | |
+
+### Resource group
This table shows the parameters that define the resource group.
This table shows the parameters that define the resource group.
> | `resource_group_arm_id` | Azure resource identifier for an existing resource group | Optional |
> | `resourcegroup_tags` | Tags to be associated with the resource group | Optional |
+### Network parameters
-### Network Parameters
+The automation framework supports creating the virtual network and the subnets (green field), using an existing virtual network and existing subnets (brown field), or a combination of green field and brown field:
-The automation framework supports both creating the virtual network and the subnets (green field) or using an existing virtual network and existing subnets (brown field) or a combination of green field and brown field.
+ - **Green-field scenario**: The virtual network address space and the subnet address prefixes must be specified.
+ - **Brown-field scenario**: The Azure resource identifier for the virtual network and the subnets must be specified.
The recommended CIDR of the virtual network address space is /27, which allows space for 32 IP addresses. A CIDR value of /28 only allows 16 IP addresses. If you want to include Azure Firewall, use a CIDR value of /25, because Azure Firewall requires a range of /26.
-The recommended CIDR value for the management subnet is /28 that allows 16 IP addresses.
-The recommended CIDR value for the firewall subnet is /26 that allows 64 IP addresses.
+The recommended CIDR value for the management subnet is /28, which allows 16 IP addresses.
+The recommended CIDR value for the firewall subnet is /26, which allows 64 IP addresses.
This table shows the networking parameters.

> [!div class="mx-tdCol2BreakAll "]
> | Variable | Description | Type | Notes |
> | | - | - | |
-> | `management_network_name` | The name of the VNet into which the deployer will be deployed | Optional | For green field deployments. |
+> | `management_network_name` | The name of the virtual network into which the deployer will be deployed | Optional | For green-field deployments |
> | `management_network_logical_name` | The logical name of the network (DEV-WEEU-MGMT01-INFRASTRUCTURE) | Required | |
-> | `management_network_arm_id` | The Azure resource identifier for the virtual network | Optional | For brown field deployments. |
-> | `management_network_address_space` | The address range for the virtual network | Mandatory | For green field deployments. |
+> | `management_network_arm_id` | The Azure resource identifier for the virtual network | Optional | For brown-field deployments |
+> | `management_network_address_space` | The address range for the virtual network | Mandatory | For green-field deployments |
> | | | | |
> | `management_subnet_name` | The name of the subnet | Optional | |
-> | `management_subnet_address_prefix` | The address range for the subnet | Mandatory | For green field deployments. |
-> | `management_subnet_arm_id` | The Azure resource identifier for the subnet | Mandatory | For brown field deployments. |
-> | `management_subnet_nsg_name` | The name of the Network Security Group name | Optional | |
-> | `management_subnet_nsg_arm_id` | The Azure resource identifier for the Network Security Group | Mandatory | Mandatory For brown field deployments. |
+> | `management_subnet_address_prefix` | The address range for the subnet | Mandatory | For green-field deployments |
+> | `management_subnet_arm_id` | The Azure resource identifier for the subnet | Mandatory | For brown-field deployments |
+> | `management_subnet_nsg_name` | The name of the network security group | Optional | |
+> | `management_subnet_nsg_arm_id` | The Azure resource identifier for the network security group | Mandatory | For brown-field deployments |
> | `management_subnet_nsg_allowed_ips` | Range of allowed IP addresses to add to Azure Firewall | Optional | | > | | | | |
-> | `management_firewall_subnet_arm_id` | The Azure resource identifier for the Firewall subnet | Mandatory | For brown field deployments. |
-> | `management_firewall_subnet_address_prefix` | The address range for the subnet | Mandatory | For green field deployments. |
+> | `management_firewall_subnet_arm_id` | The Azure resource identifier for the Azure Firewall subnet | Mandatory | For brown-field deployments |
+> | `management_firewall_subnet_address_prefix` | The address range for the subnet | Mandatory | For green-field deployments |
> | | | | |
-> | `management_bastion_subnet_arm_id` | The Azure resource identifier for the Bastion subnet | Mandatory | For brown field deployments. |
-> | `management_bastion_subnet_address_prefix` | The address range for the subnet | Mandatory | For green field deployments. |
+> | `management_bastion_subnet_arm_id` | The Azure resource identifier for the Azure Bastion subnet | Mandatory | For brown-field deployments |
+> | `management_bastion_subnet_address_prefix` | The address range for the subnet | Mandatory | For green-field deployments |
> | | | | |
-> | `webapp_subnet_arm_id` | The Azure resource identifier for the web app subnet | Mandatory | For brown field deployments using the web app |
-> | `webapp_subnet_address_prefix` | The address range for the subnet | Mandatory | For green field deployments using the web app |
+> | `webapp_subnet_arm_id` | The Azure resource identifier for the web app subnet | Mandatory | For brown-field deployments that use the web app |
+> | `webapp_subnet_address_prefix` | The address range for the subnet | Mandatory | For green-field deployments that use the web app |
> [!NOTE]
-> When using an existing subnet for the web app, the subnet must be empty, in the same region as the resource group being deployed, and delegated to Microsoft.Web/serverFarms
-
+> When you use an existing subnet for the web app, the subnet must be empty, in the same region as the resource group being deployed, and delegated to Microsoft.Web/serverFarms.
-### Deployer Virtual Machine Parameters
+### Deployer virtual machine parameters
-This table shows the parameters related to the deployer virtual machine.
+This table shows the parameters related to the deployer VM.
> [!div class="mx-tdCol2BreakAll "]
> | Variable | Description | Type |
> | - | -- | - |
-> | `deployer_size` | Defines the Virtual machine SKU to use, for example Standard_D4s_v3 | Optional |
-> | `deployer_count` | Defines the number of Deployers | Optional |
-> | `deployer_image` | Defines the Virtual machine image to use, see below | Optional |
-> | `plan` | Defines the plan associated to the Virtual machine image, see below | Optional |
-> | `deployer_disk_type` | Defines the disk type, for example Premium_LRS | Optional |
-> | `deployer_use_DHCP` | Controls if Azure subnet provided IP addresses should be used (dynamic) true | Optional |
-> | `deployer_private_ip_address` | Defines the Private IP address to use | Optional |
+> | `deployer_size` | Defines the VM SKU to use, for example, Standard_D4s_v3 | Optional |
+> | `deployer_count` | Defines the number of deployers | Optional |
+> | `deployer_image` | Defines the VM image to use | Optional |
+> | `plan` | Defines the plan associated to the VM image | Optional |
+> | `deployer_disk_type` | Defines the disk type, for example, Premium_LRS | Optional |
+> | `deployer_use_DHCP` | Controls if the Azure subnet-provided IP addresses should be used (dynamic) true | Optional |
+> | `deployer_private_ip_address` | Defines the private IP address to use | Optional |
> | `deployer_enable_public_ip` | Defines if the deployer has a public IP | Optional |
-> | `auto_configure_deployer` | Defines deployer will be configured with the required software (Terraform and Ansible) | Optional |
-> | `add_system_assigned_identity` | Defines deployer will be assigned a system identity | Optional |
+> | `auto_configure_deployer` | Defines if the deployer is configured with the required software (Terraform and Ansible) | Optional |
+> | `add_system_assigned_identity` | Defines if the deployer is assigned a system identity | Optional |
+The VM image is defined by using the following structure:
-The Virtual Machine image is defined using the following structure:
```python
{
  "os_type" = ""
The Virtual Machine image is defined using the following structure:
```

> [!NOTE]
-> type can be marketplace/marketplace_with_plan/custom
-> Note that using a image of type 'marketplace_with_plan' will require that the image in question has been used at least once in the subscription. This is because the first usage prompts the user to accept the License terms and the automation has no mean to approve it.
---
-### Authentication Parameters
+> The type can be `marketplace/marketplace_with_plan/custom`.
+> Using an image of type `marketplace_with_plan` requires that the image in question was used at least once in the subscription. The first usage prompts the user to accept the license terms and the automation has no means to approve it.
-The table below defines the parameters used for defining the Virtual Machine authentication
+### Authentication parameters
+This section defines the parameters used for defining the VM authentication.
> [!div class="mx-tdCol2BreakAll "]
> | Variable | Description | Type |
> | | | |
-> | `deployer_vm_authentication_type` | Defines the default authentication for the Deployer | Optional |
+> | `deployer_vm_authentication_type` | Defines the default authentication for the deployer | Optional |
> | `deployer_authentication_username` | Administrator account name | Optional |
> | `deployer_authentication_password` | Administrator password | Optional |
> | `deployer_authentication_path_to_public_key` | Path to the public key used for authentication | Optional |
> | `deployer_authentication_path_to_private_key` | Path to the private key used for authentication | Optional |
-### Key Vault Parameters
+### Key vault parameters
-The table below defines the parameters used for defining the Key Vault information
+This section defines the parameters used for defining the Azure Key Vault information.
> [!div class="mx-tdCol2BreakAll "]
> | Variable | Description | Type |
> | | | - |
-> | `user_keyvault_id` | Azure resource identifier for the user key vault | Optional |
-> | `spn_keyvault_id` | Azure resource identifier for the key vault containing the deployment credentials | Optional |
-> | `deployer_private_key_secret_name` | The Azure Key Vault secret name for the deployer private key | Optional |
-> | `deployer_public_key_secret_name` | The Azure Key Vault secret name for the deployer public key | Optional |
-> | `deployer_username_secret_name` | The Azure Key Vault secret name for the deployer username | Optional |
-> | `deployer_password_secret_name` | The Azure Key Vault secret name for the deployer password | Optional |
-> | `additional_users_to_add_to_keyvault_policies` | A list of user object IDs to add to the deployment KeyVault access policies | Optional |
-> | `set_secret_expiry` | Set expiry of 12 months for key vault secrets | Optional |
+> | `user_keyvault_id` | Azure resource identifier for the user key vault. | Optional |
+> | `spn_keyvault_id` | Azure resource identifier for the key vault that contains the deployment credentials. | Optional |
+> | `deployer_private_key_secret_name` | The key vault secret name for the deployer private key. | Optional |
+> | `deployer_public_key_secret_name` | The key vault secret name for the deployer public key. | Optional |
+> | `deployer_username_secret_name` | The key vault secret name for the deployer username. | Optional |
+> | `deployer_password_secret_name` | The key vault secret name for the deployer password. | Optional |
+> | `additional_users_to_add_to_keyvault_policies` | A list of user object IDs to add to the deployment key vault access policies. | Optional |
+> | `set_secret_expiry` | Set expiry of 12 months for key vault secrets. | Optional |
-### DNS Support
+### DNS support
> [!div class="mx-tdCol2BreakAll "]
> | Variable | Description | Type |
> | -- | -- | -- |
-> | `dns_label` | DNS name of the private DNS zone | Optional |
-> | `use_custom_dns_a_registration` | Uses an external system for DNS, set to false for Azure native | Optional |
-> | `management_dns_subscription_id` | Subscription ID for the subscription containing the Private DNS Zone | Optional |
-> | `management_dns_resourcegroup_name` | Resource group containing the Private DNS Zone | Optional |
-
+> | `dns_label` | DNS name of the Private DNS zone. | Optional |
+> | `use_custom_dns_a_registration` | Uses an external system for DNS, set to false for Azure native. | Optional |
+> | `management_dns_subscription_id` | Subscription ID for the subscription that contains the Private DNS zone. | Optional |
+> | `management_dns_resourcegroup_name` | Resource group that contains the Private DNS zone. | Optional |
### Other parameters

> [!div class="mx-tdCol2BreakAll "]
> | Variable | Description | Type | Notes |
> | -- | - | -- | -- |
-> | `firewall_deployment` | Boolean flag controlling if an Azure firewall is to be deployed | Optional | |
-> | `bastion_deployment` | Boolean flag controlling if Azure Bastion host is to be deployed | Optional | |
-> | `bastion_sku` | SKU for Azure Bastion host to be deployed (Basic/Standard) | Optional | |
-> | `enable_purge_control_for_keyvaults` | Boolean flag controlling if purge control is enabled on the Key Vault. | Optional | Use only for test deployments |
-> | `use_private_endpoint` | Use private endpoints | Optional |
-> | `use_service_endpoint` | Use service endpoints for subnets | Optional |
-> | `enable_firewall_for_keyvaults_and_storage` | Restrict access to selected subnets | Optional |
+> | `firewall_deployment` | Boolean flag that controls if an Azure firewall is to be deployed. | Optional | |
+> | `bastion_deployment` | Boolean flag that controls if Azure Bastion host is to be deployed. | Optional | |
+> | `bastion_sku` | SKU for Azure Bastion host to be deployed (Basic/Standard). | Optional | |
+> | `enable_purge_control_for_keyvaults` | Boolean flag that controls if purge control is enabled on the key vault. | Optional | Use only for test deployments. |
+> | `use_private_endpoint` | Use private endpoints. | Optional |
+> | `use_service_endpoint` | Use service endpoints for subnets. | Optional |
+> | `enable_firewall_for_keyvaults_and_storage` | Restrict access to selected subnets. | Optional |
### Example parameters file for deployer (required parameters only)
firewall_deployment=true
bastion_deployment=true
```
+## SAP library
-## SAP Library
-
-The [SAP Library](deployment-framework.md#deployment-components) provides the persistent storage of the Terraform state files and the downloaded SAP installation media for the control plane.
+The [SAP library](deployment-framework.md#deployment-components) provides the persistent storage of the Terraform state files and the downloaded SAP installation media for the control plane.
-The configuration of the SAP Library is performed in a Terraform tfvars variable file.
+The configuration of the SAP library is performed in a Terraform `tfvars` variable file.
-### Terraform Parameters
+### Terraform parameters
-This table shows the Terraform parameters, these parameters need to be entered manually when not using the deployment scripts
+This table shows the Terraform parameters. These parameters need to be entered manually if you aren't using the deployment scripts or Azure Pipelines.
> [!div class="mx-tdCol2BreakAll "]
> | Variable | Description | Type | Notes |
> | -- | - | - | -- |
-> | `deployer_tfstate_key` | The state file name for the deployer | Required |
+> | `deployer_tfstate_key` | State file name for the deployer | Required |
-### Environment Parameters
+### Environment parameters
This table shows the parameters that define the resource naming.

> [!div class="mx-tdCol2BreakAll "]
> | Variable | Description | Type | Notes |
> | -- | - | - | - |
-> | `environment` | Identifier for the control plane (max 5 chars) | Mandatory | For example, `PROD` for a production environment and `NP` for a non-production environment. |
-> | `location` | The Azure region in which to deploy. | Required | Use lower case |
-> | 'name_override_file' | Name override file | Optional | see [Custom naming](naming-module.md) |
+> | `environment` | Identifier for the control plane (maximum of five characters) | Mandatory | For example, `PROD` for a production environment and `NP` for a nonproduction environment. |
+> | `location` | Azure region in which to deploy | Required | Use lowercase. |
+> | `name_override_file` | Name override file | Optional | See [Custom naming](naming-module.md). |
-### Resource Group
+### Resource group
This table shows the parameters that define the resource group.
This table shows the parameters that define the resource group.
> | `resource_group_arm_id` | Azure resource identifier for an existing resource group | Optional |
> | `resourcegroup_tags` | Tags to be associated with the resource group | Optional |
-
-### SAP Installation media storage account
+### SAP installation media storage account
> [!div class="mx-tdCol2BreakAll "]
> | Variable | Description | Type |
This table shows the parameters that define the resource group.
> | -- | -- | - |
> | `library_terraform_state_arm_id` | Azure resource identifier | Optional |
-### DNS Support
-
+### DNS support
> [!div class="mx-tdCol2BreakAll "]
> | Variable | Description | Type |
> | -- | -- | -- |
-> | `dns_label` | DNS name of the private DNS zone | Optional |
-> | `use_custom_dns_a_registration` | Use an existing Private DNS zone | Optional |
-> | `management_dns_subscription_id` | Subscription ID for the subscription containing the Private DNS Zone | Optional |
-> | `management_dns_resourcegroup_name` | Resource group containing the Private DNS Zone | Optional |
-
+> | `dns_label` | DNS name of the Private DNS zone. | Optional |
+> | `use_custom_dns_a_registration` | Use an existing Private DNS zone. | Optional |
+> | `management_dns_subscription_id` | Subscription ID for the subscription that contains the Private DNS zone. | Optional |
+> | `management_dns_resourcegroup_name` | Resource group that contains the Private DNS zone. | Optional |
### Extra parameters
-
> [!div class="mx-tdCol2BreakAll "]
-> | Variable | Description | Type |
-> | -- | -- | -- |
-> | `use_private_endpoint` | Use private endpoints | Optional |
-> | `use_service_endpoint` | Use service endpoints for subnets | Optional |
-> | `enable_firewall_for_keyvaults_and_storage` | Restrict access to selected subnets | Optional |
+> | Variable | Description | Type |
+> | | | -- |
+> | `use_private_endpoint` | Use private endpoints. | Optional |
+> | `use_service_endpoint` | Use service endpoints for subnets. | Optional |
+> | `enable_firewall_for_keyvaults_and_storage` | Restrict access to selected subnets. | Optional |
+> | `subnets_to_add_to_firewall_for_keyvaults_and_storage` | Subnets that need access to key vaults and storage accounts. | Optional |
-### Example parameters file for sap library (required parameters only)
+### Example parameters file for the SAP library (required parameters only)
```terraform
# The environment value is a mandatory field, it is used for partitioning the environments, for example (PROD and NP)
location = "westeurope"
``` -
-## Next steps
+## Next step
> [!div class="nextstepaction"] > [Configure SAP system](configure-system.md)
sap Configure Devops https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/configure-devops.md
Title: Configure Azure DevOps Services for SAP on Azure Deployment Automation Framework
-description: Configure your Azure DevOps Services for the SAP on Azure Deployment Automation Framework.
+ Title: Configure Azure DevOps Services for SAP Deployment Automation Framework
+description: Configure your Azure DevOps Services for SAP Deployment Automation Framework.
-# Use SAP on Azure Deployment Automation Framework from Azure DevOps Services
+# Use SAP Deployment Automation Framework from Azure DevOps Services
-Using Azure DevOps will streamline the deployment process by providing pipelines that can be executed to perform both the infrastructure deployment and the configuration and SAP installation activities.
-You can use Azure Repos to store your configuration files and Azure Pipelines to deploy and configure the infrastructure and the SAP application.
+Azure DevOps streamlines the deployment process by providing pipelines that you can run to perform the infrastructure deployment and the configuration and SAP installation activities.
+
+You can use Azure Repos to store your configuration files and use Azure Pipelines to deploy and configure the infrastructure and the SAP application.
## Sign up for Azure DevOps Services
-To use Azure DevOps Services, you'll need an Azure DevOps organization. An organization is used to connect groups of related projects. Use your work or school account to automatically connect your organization to your Azure Active Directory (Azure AD). To create an account, open [Azure DevOps](https://azure.microsoft.com/services/devops/) and either _sign-in_ or create a new account.
+To use Azure DevOps Services, you need an Azure DevOps organization. An organization is used to connect groups of related projects. Use your work or school account to automatically connect your organization to your Azure Active Directory. To create an account, open [Azure DevOps](https://azure.microsoft.com/services/devops/) and either sign in or create a new account.
-## Configure Azure DevOps Services for the SAP on Azure Deployment Automation Framework
+## Configure Azure DevOps Services for SAP Deployment Automation Framework
-You can use the following script to do a basic installation of Azure Devops Services for the SAP on Azure Deployment Automation Framework.
+You can use the following script to do a basic installation of Azure DevOps Services for SAP Deployment Automation Framework.
Open PowerShell ISE and copy the following script and update the parameters to match your environment.
Open PowerShell ISE and copy the following script and update the parameters to m
```
-Run the script and follow the instructions. The script will open browser windows for authentication and for performing tasks in the Azure DevOps project.
+Run the script and follow the instructions. The script opens browser windows for authentication and for performing tasks in the Azure DevOps project.
You can choose to either run the code directly from GitHub or import a copy of the code into your Azure DevOps project.
-
-Validate that the project has been created by navigating to the Azure DevOps portal and selecting the project. Ensure that the Repo is populated and that the pipelines have been created.
+To confirm that the project was created, go to the Azure DevOps portal and select the project. Ensure that the repo was populated and that the pipelines were created.
> [!IMPORTANT]
-> Run the following steps on your local workstation, also ensure that you have the latest Azure CLI installed by running the 'az upgrade' command.
+> Run the following steps on your local workstation. Also ensure that you have the latest Azure CLI installed by running the `az upgrade` command.
-### Configure Azure DevOps Services artifacts for a new Workload zone.
+### Configure Azure DevOps Services artifacts for a new workload zone
-You can use the following script to deploy the artifacts needed to support a new workload zone. This will create the Variable group and the Service Connection in Azure DevOps as well as optionally the deployment service principal.
+Use the following script to deploy the artifacts that are needed to support a new workload zone. This process creates the variable group and the service connection in Azure DevOps and, optionally, the deployment service principal.
Open PowerShell ISE and copy the following script and update the parameters to match your environment.
Open PowerShell ISE and copy the following script and update the parameters to m
```
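If you prefer to create the workload zone variable group yourself, here's a minimal sketch using the Azure DevOps CLI extension; the group name, variable names, and values are placeholders, and the script above sets more values than shown here:

```bash
# Create a variable group for a workload zone (requires the azure-devops
# CLI extension; names and values are placeholders).
az pipelines variable-group create \
    --name "SDAF-DEV" \
    --variables ARM_SUBSCRIPTION_ID="<subscription-id>" Agent="Azure Pipelines" \
    --organization "https://dev.azure.com/<your-organization>" \
    --project "<your-project>"
```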
+### Create a sample control plane configuration
-### Create a sample Control Plane configuration
+You can run the `Create Sample Deployer Configuration` pipeline to create a sample configuration for the control plane. When it's running, choose the appropriate Azure region. You can also control if you want to deploy Azure Firewall and Azure Bastion.
-You can run the 'Create Sample Deployer Configuration' pipeline to create a sample configuration for the Control Plane. When running choose the appropriate Azure region. You can also control if you want to deploy Azure Firewall and Azure Bastion.
+## Manual configuration of Azure DevOps Services for SAP Deployment Automation Framework
-## Manual configuration of Azure DevOps Services for the SAP on Azure Deployment Automation Framework
+You can manually configure Azure DevOps Services for SAP Deployment Automation Framework.
### Create a new project
-You can use Azure Repos to store both the code from the sap-automation GitHub repository and the environment configuration files.
+You can use Azure Repos to store the code from the sap-automation GitHub repository and the environment configuration files.
-Open (https://dev.azure.com) and create a new project by clicking on the _New Project_ button and enter the project details. The project will contain both the Azure Repos source control repository and Azure Pipelines for performing deployment activities.
+Open [Azure DevOps](https://dev.azure.com) and create a new project by selecting **New Project** and entering the project details. The project contains the Azure Repos source control repository and Azure Pipelines for performing deployment activities.
-> [!NOTE]
-> If you are unable to see _New Project_ ensure that you have permissions to create new projects in the organization.
+If you don't see **New Project**, ensure that you have permissions to create new projects in the organization.
Record the URL of the project.
+
### Import the repository
-Start by importing the SAP on Azure Deployment Automation Framework GitHub repository into Azure Repos.
+Start by importing the SAP Deployment Automation Framework GitHub repository into Azure Repos.
-Navigate to the Repositories section and choose Import a repository, import the 'https://github.com/Azure/sap-automation-bootstrap.git' repository into Azure DevOps. For more info, see [Import a repository](/azure/devops/repos/git/import-git-repository?view=azure-devops&preserve-view=true)
+Go to the **Repositories** section and select **Import a repository**. Import the `https://github.com/Azure/sap-automation-bootstrap.git` repository into Azure DevOps. For more information, see [Import a repository](/azure/devops/repos/git/import-git-repository?view=azure-devops&preserve-view=true).
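The import can also be scripted. A sketch using the Azure DevOps CLI extension, assuming the target repository already exists and is empty; the organization, project, and repository names are placeholders:

```bash
# Import the bootstrap repository into an existing, empty Azure Repos repository
# (requires the azure-devops CLI extension; bracketed values are placeholders).
az repos import create \
    --git-source-url "https://github.com/Azure/sap-automation-bootstrap.git" \
    --repository "<your-repository>" \
    --organization "https://dev.azure.com/<your-organization>" \
    --project "<your-project>"
```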
-If you're unable to import a repository, you can create the repository manually, and then import the content from the SAP on Azure Deployment Automation Framework GitHub repository to it.
+If you're unable to import a repository, you can create the repository manually. Then you can import the content from the SAP Deployment Automation Framework GitHub repository to it.
### Create the repository for manual import
-> [!NOTE]
-> Only do this step if you are unable to import the repository directly.
+Only do this step if you're unable to import the repository directly.
+
+To create the **workspaces** repository, in the **Repos** section, under **Project settings**, select **Create**.
-Create the 'workspaces' repository by navigating to the 'Repositories' section in 'Project Settings' and clicking the _Create_ button.
+Choose the **Git** repository type and provide a name for the repository. For example, use **SAP Configuration Repository**.
-Choose the repository type 'Git' and provide a name for the repository, for example 'SAP Configuration Repository'.
-### Cloning the repository
+### Clone the repository
-In order to provide a more comprehensive editing capability of the content, you can clone the repository to a local folder and edit the contents locally.
-Clone the repository to a local folder by clicking the _Clone_ button in the Files view in the Repos section of the portal. For more info, see [Cloning a repository](/azure/devops/repos/git/clone?view=azure-devops#clone-an-azure-repos-git-repo&preserve-view=true)
+To provide a more comprehensive editing capability of the content, you can clone the repository to a local folder and edit the contents locally.
+To clone the repository to a local folder, on the **Repos** section of the portal, under **Files**, select **Clone**. For more information, see [Clone a repository](/azure/devops/repos/git/clone?view=azure-devops#clone-an-azure-repos-git-repo&preserve-view=true).
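From the command line, the clone looks like the following sketch; the URL components are placeholders for your organization, project, and repository:

```bash
# Clone the Azure Repos repository to a local folder (URL parts are placeholders).
git clone "https://dev.azure.com/<your-organization>/<your-project>/_git/<your-repository>"
```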
-### Manually importing the repository content using a local clone
-You can also download the content from the SAP on Azure Deployment Automation Framework repository manually and add it to your local clone of the Azure DevOps repository.
+### Manually import the repository content by using a local clone
-Navigate to 'https://github.com/Azure/SAP-automation-samples' repository and download the repository content as a ZIP file by clicking the _Code_ button and choosing _Download ZIP_.
+You can also manually download the content from the SAP Deployment Automation Framework repository and add it to your local clone of the Azure DevOps repository.
-Copy the content from the zip file to the root folder of your local clone.
+Go to the `https://github.com/Azure/SAP-automation-samples` repository and download the repository content as a .zip file. Select **Code** and choose **Download ZIP**.
-Open the local folder in Visual Studio code, you should see that there are changes that need to be synchronized by the indicator by the source control icon as is shown in the picture below.
+Copy the content from the .zip file to the root folder of your local clone.
+Open the local folder in Visual Studio Code. You should see that changes need to be synchronized by the indicator by the source control icon shown here.
-Select the source control icon and provide a message about the change, for example: "Import from GitHub" and press Cntr-Enter to commit the changes. Next select the _Sync Changes_ button to synchronize the changes back to the repository.
-### Choosing the source for the Terraform and Ansible code
+Select the source control icon and provide a message about the change. For example, enter **Import from GitHub** and select Ctrl+Enter to commit the changes. Next, select **Sync Changes** to synchronize the changes back to the repository.
-You can either run the SDAF code directly from GitHub or you can import it locally.
-#### Running the code from a local repository
+### Choose the source for the Terraform and Ansible code
-If you want to run the SDAF code from the local Azure DevOps project you need to create a separate code repository and a configuration repository in the Azure DevOps project.
+You can either run the SAP Deployment Automation Framework code directly from GitHub or you can import it locally.
-Name of code repository: 'sap-automation', source: 'https://github.com/Azure/sap-automation.git'
+#### Run the code from a local repository
-Name of sample and template repository: 'sap-samples', source: 'https://github.com/Azure/sap-automation-samples.git'
+If you want to run the SAP Deployment Automation Framework code from the local Azure DevOps project, you need to create a separate code repository and a configuration repository in the Azure DevOps project:
-#### Running the code directly from GitHub
-If you want to run the code directly from GitHub you need to provide credentials for Azure DevOps to be able to pull the content from GitHub.
-#### Creating the GitHub Service connection
+- **Name of code repository**: `sap-automation`. Source is `https://github.com/Azure/sap-automation.git`.
+- **Name of sample and template repository**: `sap-samples`. Source is `https://github.com/Azure/sap-automation-samples.git`.
-To pull the code from GitHub, you need a GitHub service connection. For more information, see [Manage service connections](/azure/devops/pipelines/library/service-endpoints?view=azure-devops&preserve-view=true)
+#### Run the code directly from GitHub
-To create the service connection, go to Project settings and navigate to the Service connections setting in the Pipelines section.
+If you want to run the code directly from GitHub, you need to provide credentials for Azure DevOps to be able to pull the content from GitHub.
+#### Create the GitHub service connection
-Choose _GitHub_ as the service connection type. Choose 'Azure Pipelines' in the OAuth Configuration drop-down.
+To pull the code from GitHub, you need a GitHub service connection. For more information, see [Manage service connections](/azure/devops/pipelines/library/service-endpoints?view=azure-devops&preserve-view=true).
-Click 'Authorize' to log on to GitHub.
-
-Enter a Service connection name, for instance 'SDAF Connection to GitHub' and ensure that the _Grant access permission to all pipelines_ checkbox is checked. Select _Save_ to save the service connection.
+To create the service connection, go to **Project Settings** and under the **Pipelines** section, go to **Service connections**.
+Select **GitHub** as the service connection type. Select **Azure Pipelines** in the **OAuth Configuration** dropdown.
+
+Select **Authorize** to sign in to GitHub.
+
+Enter a service connection name, for instance, **SDAF Connection to GitHub**. Ensure that the **Grant access permission to all pipelines** checkbox is selected. Select **Save** to save the service connection.
## Set up the web app
-The automation framework optionally provisions a web app as a part of the control plane to assist with the SAP workload zone and system configuration files. If you would like to use the web app, you must first create an app registration for authentication purposes. Open the Azure Cloud Shell and execute the following commands:
+The automation framework optionally provisions a web app as a part of the control plane to assist with the SAP workload zone and system configuration files. If you want to use the web app, you must first create an app registration for authentication purposes. Open Azure Cloud Shell and run the following commands.
# [Linux](#tab/linux)
-Replace MGMT with your environment as necessary.
+
+Replace `MGMT` with your environment, as necessary.
```bash
echo '[{"resourceAppId":"00000003-0000-0000-c000-000000000000","resourceAccess":[{"id":"e1fe6dd8-ba31-4d61-89e7-88639da4683d","type":"Scope"}]}]' >> manifest.json

az ad app credential reset --id $TF_VAR_app_registration_app_id --append --query "password"

rm manifest.json
```

# [Windows](#tab/windows)
-Replace MGMT with your environment as necessary.
+
+Replace `MGMT` with your environment, as necessary.
```powershell
Add-Content -Path manifest.json -Value '[{"resourceAppId":"00000003-0000-0000-c000-000000000000","resourceAccess":[{"id":"e1fe6dd8-ba31-4d61-89e7-88639da4683d","type":"Scope"}]}]'

del manifest.json
```
Save the app registration ID and password values for later use.
+## Create Azure pipelines
-## Create Azure Pipelines
+Azure pipelines are implemented as YAML files. They're stored in the *deploy/pipelines* folder in the repository.
-Azure Pipelines are implemented as YAML files and they're stored in the 'deploy/pipelines' folder in the repository.
## Control plane deployment pipeline
-Create the control plane deployment pipeline by choosing _New Pipeline_ from the Pipelines section, select 'Azure Repos Git' as the source for your code. Configure your Pipeline to use an existing Azure Pipelines YAML File. Specify the pipeline with the following settings:
+Create the control plane deployment pipeline. Under the **Pipelines** section, select **New Pipeline**. Select **Azure Repos Git** as the source for your code. Configure your pipeline to use an existing Azure Pipelines YAML file. Specify the pipeline with the following settings:
| Setting | Value |
| - | -- |
| Branch | main |
| Path | `deploy/pipelines/01-deploy-control-plane.yml` |
| Name | Control plane deployment |
-Save the Pipeline, to see the Save option select the chevron next to the Run button. Navigate to the Pipelines section and select the pipeline. Rename the pipeline to 'Control plane deployment' by choosing 'Rename/Move' from the three-dot menu on the right.
+Save the pipeline. To see **Save**, select the chevron next to **Run**. Go to the **Pipelines** section and select the pipeline. Choose **Rename/Move** from the ellipsis menu on the right and rename the pipeline as **Control plane deployment**.
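If you prefer scripting pipeline creation, this Azure DevOps CLI sketch creates the same pipeline definition. It assumes CLI defaults are configured and that `<your code repository>` is the Azure Repos repository that contains the *deploy/pipelines* folder. Repeat the command with the matching path and name for each of the pipelines that follow.

```bash
az pipelines create --name "Control plane deployment" \
  --repository '<your code repository>' \
  --repository-type tfsgit \
  --branch main \
  --yml-path deploy/pipelines/01-deploy-control-plane.yml \
  --skip-first-run true
```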
## SAP workload zone deployment pipeline
-Create the SAP workload zone pipeline by choosing _New Pipeline_ from the Pipelines section, select 'Azure Repos Git' as the source for your code. Configure your Pipeline to use an existing Azure Pipelines YAML File. Specify the pipeline with the following settings:
+Create the SAP workload zone pipeline. Under the **Pipelines** section, select **New Pipeline**. Select **Azure Repos Git** as the source for your code. Configure your pipeline to use an existing Azure Pipelines YAML file. Specify the pipeline with the following settings:
| Setting | Value |
| - | -- |
| Branch | main |
| Path | `deploy/pipelines/02-sap-workload-zone.yml` |
| Name | SAP workload zone deployment |
-Save the Pipeline, to see the Save option select the chevron next to the Run button. Navigate to the Pipelines section and select the pipeline. Rename the pipeline to 'SAP workload zone deployment' by choosing 'Rename/Move' from the three-dot menu on the right.
+Save the pipeline. To see **Save**, select the chevron next to **Run**. Go to the **Pipelines** section and select the pipeline. Choose **Rename/Move** from the ellipsis menu on the right and rename the pipeline as **SAP workload zone deployment**.
## SAP system deployment pipeline
-Create the SAP system deployment pipeline by choosing _New Pipeline_ from the Pipelines section, select 'Azure Repos Git' as the source for your code. Configure your Pipeline to use an existing Azure Pipelines YAML File. Specify the pipeline with the following settings:
+Create the SAP system deployment pipeline. Under the **Pipelines** section, select **New Pipeline**. Select **Azure Repos Git** as the source for your code. Configure your pipeline to use an existing Azure Pipelines YAML file. Specify the pipeline with the following settings:
| Setting | Value |
| - | |
| Branch | main |
| Path | `deploy/pipelines/03-sap-system-deployment.yml` |
| Name | SAP system deployment (infrastructure) |
-Save the Pipeline, to see the Save option select the chevron next to the Run button. Navigate to the Pipelines section and select the pipeline. Rename the pipeline to 'SAP system deployment (infrastructure)' by choosing 'Rename/Move' from the three-dot menu on the right.
+Save the pipeline. To see **Save**, select the chevron next to **Run**. Go to the **Pipelines** section and select the pipeline. Choose **Rename/Move** from the ellipsis menu on the right and rename the pipeline as **SAP system deployment (infrastructure)**.
## SAP software acquisition pipeline
-Create the SAP software acquisition pipeline by choosing _New Pipeline_ from the Pipelines section, select 'Azure Repos Git' as the source for your code. Configure your Pipeline to use an existing Azure Pipelines YAML File. Specify the pipeline with the following settings:
+Create the SAP software acquisition pipeline. Under the **Pipelines** section, select **New Pipeline**. Select **Azure Repos Git** as the source for your code. Configure your pipeline to use an existing Azure Pipelines YAML file. Specify the pipeline with the following settings:
| Setting | Value |
| - | |
| Branch | main |
| Path | `deploy/pipelines/04-sap-software-download.yml` |
| Name | SAP software acquisition |
-Save the Pipeline, to see the Save option select the chevron next to the Run button. Navigate to the Pipelines section and select the pipeline. Rename the pipeline to 'SAP software acquisition' by choosing 'Rename/Move' from the three-dot menu on the right.
+Save the pipeline. To see **Save**, select the chevron next to **Run**. Go to the **Pipelines** section and select the pipeline. Choose **Rename/Move** from the ellipsis menu on the right and rename the pipeline as **SAP software acquisition**.
## SAP configuration and software installation pipeline
-Create the SAP configuration and software installation pipeline by choosing _New Pipeline_ from the Pipelines section, select 'Azure Repos Git' as the source for your code. Configure your Pipeline to use an existing Azure Pipelines YAML File. Specify the pipeline with the following settings:
+Create the SAP configuration and software installation pipeline. Under the **Pipelines** section, select **New Pipeline**. Select **Azure Repos Git** as the source for your code. Configure your pipeline to use an existing Azure Pipelines YAML file. Specify the pipeline with the following settings:
| Setting | Value |
| - | -- |
| Branch | main |
| Path | `deploy/pipelines/05-DB-and-SAP-installation.yml` |
| Name | Configuration and SAP installation |
-Save the Pipeline, to see the Save option select the chevron next to the Run button. Navigate to the Pipelines section and select the pipeline. Rename the pipeline to 'SAP configuration and software installation' by choosing 'Rename/Move' from the three-dot menu on the right.
+Save the pipeline. To see **Save**, select the chevron next to **Run**. Go to the **Pipelines** section and select the pipeline. Choose **Rename/Move** from the ellipsis menu on the right and rename the pipeline as **SAP configuration and software installation**.
## Deployment removal pipeline
-Create the deployment removal pipeline by choosing _New Pipeline_ from the Pipelines section, select 'Azure Repos Git' as the source for your code. Configure your Pipeline to use an existing Azure Pipelines YAML File. Specify the pipeline with the following settings:
+Create the deployment removal pipeline. Under the **Pipelines** section, select **New Pipeline**. Select **Azure Repos Git** as the source for your code. Configure your pipeline to use an existing Azure Pipelines YAML file. Specify the pipeline with the following settings:
| Setting | Value |
| - | -- |
| Branch | main |
| Path | `deploy/pipelines/10-remover-terraform.yml` |
| Name | Deployment removal |
-Save the Pipeline, to see the Save option select the chevron next to the Run button. Navigate to the Pipelines section and select the pipeline. Rename the pipeline to 'Deployment removal' by choosing 'Rename/Move' from the three-dot menu on the right.
+Save the pipeline. To see **Save**, select the chevron next to **Run**. Go to the **Pipelines** section and select the pipeline. Choose **Rename/Move** from the ellipsis menu on the right and rename the pipeline as **Deployment removal**.
## Control plane removal pipeline
-Create the control plane deployment removal pipeline by choosing _New Pipeline_ from the Pipelines section, select 'Azure Repos Git' as the source for your code. Configure your Pipeline to use an existing Azure Pipelines YAML File. Specify the pipeline with the following settings:
+Create the control plane deployment removal pipeline. Under the **Pipelines** section, select **New Pipeline**. Select **Azure Repos Git** as the source for your code. Configure your pipeline to use an existing Azure Pipelines YAML file. Specify the pipeline with the following settings:
| Setting | Value |
| - | -- |
| Branch | main |
| Path | `deploy/pipelines/12-remove-control-plane.yml` |
| Name | Control plane removal |
-Save the Pipeline, to see the Save option select the chevron next to the Run button. Navigate to the Pipelines section and select the pipeline. Rename the pipeline to 'Control plane removal' by choosing 'Rename/Move' from the three-dot menu on the right.
+Save the pipeline. To see **Save**, select the chevron next to **Run**. Go to the **Pipelines** section and select the pipeline. Choose **Rename/Move** from the ellipsis menu on the right and rename the pipeline as **Control plane removal**.
-## Deployment removal pipeline using Azure Resource Manager
+## Deployment removal pipeline by using Azure Resource Manager
-Create the deployment removal ARM pipeline by choosing _New Pipeline_ from the Pipelines section, select 'Azure Repos Git' as the source for your code. Configure your Pipeline to use an existing Azure Pipelines YAML File. Specify the pipeline with the following settings:
+Create the deployment removal Azure Resource Manager pipeline. Under the **Pipelines** section, select **New Pipeline**. Select **Azure Repos Git** as the source for your code. Configure your pipeline to use an existing Azure Pipelines YAML file. Specify the pipeline with the following settings:
| Setting | Value |
| - | -- |
| Branch | main |
| Path | `deploy/pipelines/11-remover-arm-fallback.yml` |
-| Name | Deployment removal using ARM |
+| Name | Deployment removal using ARM processor |
-Save the Pipeline, to see the Save option select the chevron next to the Run button. Navigate to the Pipelines section and select the pipeline. Rename the pipeline to 'Deployment removal using ARM processor' by choosing 'Rename/Move' from the three-dot menu on the right.
+Save the pipeline. To see **Save**, select the chevron next to **Run**. Go to the **Pipelines** section and select the pipeline. Choose **Rename/Move** from the ellipsis menu on the right and rename the pipeline as **Deployment removal using ARM processor**.
> [!NOTE]
-> Only use this pipeline as last resort, removing just the resource groups will leave remnants that may complicate re-deployments.
+> Only use this pipeline as a last resort. Removing just the resource groups leaves remnants that might complicate redeployments.
## Repository updater pipeline
-Create the Repository updater pipeline by choosing _New Pipeline_ from the Pipelines section, select 'Azure Repos Git' as the source for your code. Configure your Pipeline to use an existing Azure Pipelines YAML File. Specify the pipeline with the following settings:
+Create the repository updater pipeline. Under the **Pipelines** section, select **New Pipeline**. Select **Azure Repos Git** as the source for your code. Configure your pipeline to use an existing Azure Pipelines YAML file. Specify the pipeline with the following settings:
| Setting | Value |
| - | -- |
| Branch | main |
| Path | `deploy/pipelines/20-update-ado-repository.yml` |
| Name | Repository updater |
-Save the Pipeline, to see the Save option select the chevron next to the Run button. Navigate to the Pipelines section and select the pipeline. Rename the pipeline to 'Repository updater' by choosing 'Rename/Move' from the three-dot menu on the right.
+Save the pipeline. To see **Save**, select the chevron next to **Run**. Go to the **Pipelines** section and select the pipeline. Choose **Rename/Move** from the ellipsis menu on the right and rename the pipeline as **Repository updater**.
This pipeline should be used when there's an update in the sap-automation repository that you want to use.
-## Import Ansible task from Visual Studio Marketplace
+## Import the Ansible task from Visual Studio Marketplace
-The pipelines use a custom task to run Ansible. The custom task can be installed from [Ansible](https://marketplace.visualstudio.com/items?itemName=ms-vscs-rm.vss-services-ansible). Install it to your Azure DevOps organization before running the _Configuration and SAP installation_ or _SAP software acquisition_ pipelines.
+The pipelines use a custom task to run Ansible. You can install the custom task from [Ansible](https://marketplace.visualstudio.com/items?itemName=ms-vscs-rm.vss-services-ansible). Install it to your Azure DevOps organization before you run the **Configuration and SAP installation** or **SAP software acquisition** pipelines.
-## Import Cleanup task from Visual Studio Marketplace
+## Import the cleanup task from Visual Studio Marketplace
-The pipelines use a custom task to perform cleanup activities post deployment. The custom task can be installed from [Post Build Cleanup](https://marketplace.visualstudio.com/items?itemName=mspremier.PostBuildCleanup). Install it to your Azure DevOps organization before running the pipelines.
+The pipelines use a custom task to perform post-deployment cleanup activities. You can install the custom task from [Post Build Cleanup](https://marketplace.visualstudio.com/items?itemName=mspremier.PostBuildCleanup). Install it to your Azure DevOps organization before you run the pipelines.
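Both Marketplace tasks can also be installed from the command line. The publisher and extension IDs in this sketch are taken from the Marketplace URLs above; it assumes you have organization-level permission to install extensions.

```bash
# Install the Ansible task (itemName=ms-vscs-rm.vss-services-ansible)
az devops extension install --publisher-id ms-vscs-rm --extension-id vss-services-ansible

# Install the Post Build Cleanup task (itemName=mspremier.PostBuildCleanup)
az devops extension install --publisher-id mspremier --extension-id PostBuildCleanup
```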
+## Preparations for a self-hosted agent
-## Preparations for self-hosted agent
+1. Create an agent pool by going to **Organization settings**. Under the **Pipelines** section, select **Agent pools** > **Add pool**. Select **Self-hosted** as the pool type. Name the pool to align with the control plane environment. For example, use `MGMT-WEEU-POOL`. Ensure that **Grant access permission to all pipelines** is selected and select **Create** to create the pool. For a scripted alternative, see the REST sketch after these steps.
+1. Sign in with the user account you plan to use in your [Azure DevOps](https://dev.azure.com) organization.
-1. Create an Agent Pool by navigating to the Organizational Settings and selecting _Agent Pools_ from the Pipelines section. Click the _Add Pool_ button and choose Self-hosted as the pool type. Name the pool to align with the control plane environment, for example `MGMT-WEEU-POOL`. Ensure _Grant access permission to all pipelines_ is selected and create the pool using the _Create_ button.
+1. From your home page, open your user settings and select **Personal access tokens**.
-1. Sign in with the user account you plan to use in your Azure DevOps organization (https://dev.azure.com).
+ :::image type="content" source="./media/devops/automation-select-personal-access-tokens.jpg" alt-text="Diagram that shows the creation of a personal access token.":::
-1. From your home page, open your user settings, and then select _Personal access tokens_.
+1. Create a personal access token with these settings:
- :::image type="content" source="./media/devops/automation-select-personal-access-tokens.jpg" alt-text="Diagram showing the creation of the Personal Access Token (PAT).":::
+ - **Agent Pools**: Select **Read & manage**.
+ - **Build**: Select **Read & execute**.
+ - **Code**: Select **Read & write**.
+ - **Variable Groups**: Select **Read, create, & manage**.
-1. Create a personal access token. Ensure that _Read & manage_ is selected for _Agent Pools_, _Read & write_ is selected for _Code_, _Read & execute_ is selected for _Build_, and _Read, create, & manage_ is selected for _Variable Groups_. Write down the created token value.
+ Write down the created token value.
- :::image type="content" source="./media/devops/automation-new-pat.png" alt-text="Diagram showing the attributes of the Personal Access Token (PAT).":::
+ :::image type="content" source="./media/devops/automation-new-pat.png" alt-text="Diagram that shows the attributes of the personal access token.":::
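For step 1, the agent pool can also be created over REST. The following is a sketch, assuming the distributedtask pools endpoint and payload shown here; the organization name and token are placeholders, and the token needs the **Agent Pools (read & manage)** scope.

```bash
ORG_URL="https://dev.azure.com/<your-organization>"
PAT="<personal access token>"

# Create a self-hosted pool; autoProvision grants access to all projects
curl -sS -u :"$PAT" \
  -H "Content-Type: application/json" \
  -d '{"name": "MGMT-WEEU-POOL", "autoProvision": true}' \
  "$ORG_URL/_apis/distributedtask/pools?api-version=7.1"
```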
## Variable definitions
-The deployment pipelines are configured to use a set of predefined parameter values defined using variable groups.
-
+The deployment pipelines are configured to use a set of predefined parameter values defined by using variable groups.
### Common variables
-There's a set of common variables that are used by all the deployment pipelines. These variables are stored in a variable group called 'SDAF-General'.
+Common variables are used by all the deployment pipelines. They're stored in a variable group called `SDAF-General`.
-Create a new variable group 'SDAF-General' using the Library page in the Pipelines section. Add the following variables:
+Create a new variable group named `SDAF-General` by using the **Library** page in the **Pipelines** section. Add the following variables:
| Variable | Value | Notes |
| - | | - |
-| Deployment_Configuration_Path | WORKSPACES | For testing the sample configuration use 'samples/WORKSPACES' instead of WORKSPACES. |
+| Deployment_Configuration_Path | WORKSPACES | For testing the sample configuration, use `samples/WORKSPACES` instead of WORKSPACES. |
| Branch | main | |
| S-Username | `<SAP Support user account name>` | |
-| S-Password | `<SAP Support user password>` | Change variable type to secret by clicking the lock icon. |
-| `tf_version` | 1.3.0 | The Terraform version to use, see [Terraform download](https://www.terraform.io/downloads) |
+| S-Password | `<SAP Support user password>` | Change the variable type to secret by selecting the lock icon. |
+| `tf_version` | 1.3.0 | The Terraform version to use. See [Terraform download](https://www.terraform.io/downloads). |
Save the variables.
-Or alternatively you can use the Azure DevOps CLI to set up the groups.
+Alternatively, you can use the Azure DevOps CLI to set up the groups.
```bash
s-user="<SAP Support user account name>"

az pipelines variable-group create --name SDAF-General --variables ANSIBLE_HOST_
```
-> [!NOTE]
-> Remember to assign permissions for all pipelines using _Pipeline permissions_.
+Remember to assign permissions for all pipelines by using **Pipeline permissions**.
-### Environment specific variables
+### Environment-specific variables
-As each environment may have different deployment credentials you'll need to create a variable group per environment, for example 'SDAF-MGMT','SDAF-DEV', 'SDAF-QA'.
+Because each environment might have different deployment credentials, you need to create a variable group per environment. For example, use `SDAF-MGMT`, `SDAF-DEV`, and `SDAF-QA`.
-Create a new variable group 'SDAF-MGMT' for the control plane environment using the Library page in the Pipelines section. Add the following variables:
+Create a new variable group named `SDAF-MGMT` for the control plane environment by using the **Library** page in the **Pipelines** section. Add the following variables:
| Variable | Value | Notes |
| - | | -- |
-| Agent | 'Azure Pipelines' or the name of the agent pool | Note, this pool will be created in a later step. |
-| CP_ARM_CLIENT_ID | 'Service principal application ID'. | |
-| CP_ARM_OBJECT_ID | 'Service principal object ID'. | |
-| CP_ARM_CLIENT_SECRET | 'Service principal password'. | Change variable type to secret by clicking the lock icon |
-| CP_ARM_SUBSCRIPTION_ID | 'Target subscription ID'. | |
-| CP_ARM_TENANT_ID | 'Tenant ID' for the service principal. | |
-| AZURE_CONNECTION_NAME | Previously created connection name. | |
-| sap_fqdn | SAP Fully Qualified Domain Name, for example 'sap.contoso.net'. | Only needed if Private DNS isn't used. |
-| FENCING_SPN_ID | 'Service principal application ID' for the fencing agent. | Required for highly available deployments using a service principal for fencing agent. |
-| FENCING_SPN_PWD | 'Service principal password' for the fencing agent. | Required for highly available deployments using a service principal for fencing agent. |
-| FENCING_SPN_TENANT | 'Service principal tenant ID' for the fencing agent. | Required for highly available deployments using a service principal for fencing agent. |
-| PAT | `<Personal Access Token>` | Use the Personal Token defined in the previous step |
-| POOL | `<Agent Pool name>` | The Agent pool to use for this environment |
+| Agent | `Azure Pipelines` or the name of the agent pool | This pool is created in a later step. |
+| CP_ARM_CLIENT_ID | `Service principal application ID` | |
+| CP_ARM_OBJECT_ID | `Service principal object ID` | |
+| CP_ARM_CLIENT_SECRET | `Service principal password` | Change the variable type to secret by selecting the lock icon. |
+| CP_ARM_SUBSCRIPTION_ID | `Target subscription ID` | |
+| CP_ARM_TENANT_ID | `Tenant ID` for the service principal | |
+| AZURE_CONNECTION_NAME | Previously created connection name | |
+| sap_fqdn | SAP fully qualified domain name, for example, `sap.contoso.net` | Only needed if Private DNS isn't used. |
+| FENCING_SPN_ID | `Service principal application ID` for the fencing agent | Required for highly available deployments that use a service principal for the fencing agent. |
+| FENCING_SPN_PWD | `Service principal password` for the fencing agent | Required for highly available deployments that use a service principal for the fencing agent. |
+| FENCING_SPN_TENANT | `Service principal tenant ID` for the fencing agent | Required for highly available deployments that use a service principal for the fencing agent. |
+| PAT | `<Personal Access Token>` | Use the personal token defined in the previous step. |
+| POOL | `<Agent Pool name>` | The agent pool to use for this environment. |
| | | |
-| APP_REGISTRATION_APP_ID | 'App registration application ID' | Required if deploying the web app |
-| WEB_APP_CLIENT_SECRET | 'App registration password' | Required if deploying the web app |
+| APP_REGISTRATION_APP_ID | `App registration application ID` | Required if deploying the web app. |
+| WEB_APP_CLIENT_SECRET | `App registration password` | Required if deploying the web app. |
| | | |
-| SDAF_GENERAL_GROUP_ID | The group ID for the SDAF-General group | The ID can be retrieved from the URL parameter 'variableGroupId' when accessing the variable group using a browser. For example: 'variableGroupId=8 |
-| WORKLOADZONE_PIPELINE_ID | The ID for the 'SAP workload zone deployment' pipeline | The ID can be retrieved from the URL parameter 'definitionId' from the pipeline page in Azure DevOps. For example: 'definitionId=31. |
-| SYSTEM_PIPELINE_ID | The ID for the 'SAP system deployment (infrastructure)' pipeline | The ID can be retrieved from the URL parameter 'definitionId' from the pipeline page in Azure DevOps. For example: 'definitionId=32. |
+| SDAF_GENERAL_GROUP_ID | The group ID for the SDAF-General group | The ID can be retrieved from the URL parameter `variableGroupId` when accessing the variable group by using a browser. For example: `variableGroupId=8`. |
+| WORKLOADZONE_PIPELINE_ID | The ID for the `SAP workload zone deployment` pipeline | The ID can be retrieved from the URL parameter `definitionId` from the pipeline page in Azure DevOps. For example: `definitionId=31`. |
+| SYSTEM_PIPELINE_ID | The ID for the `SAP system deployment (infrastructure)` pipeline | The ID can be retrieved from the URL parameter `definitionId` from the pipeline page in Azure DevOps. For example: `definitionId=32`. |
Save the variables.
-> [!NOTE]
-> Remember to assign permissions for all pipelines using _Pipeline permissions_.
->
-> When using the web app, ensure that the Build Service has at least Contribute permissions.
->
-> You can use the clone functionality to create the next environment variable group. APP_REGISTRATION_APP_ID, WEB_APP_CLIENT_SECRET, SDAF_GENERAL_GROUP_ID, WORKLOADZONE_PIPELINE_ID and SYSTEM_PIPELINE_ID are only needed for the SDAF-MGMT group.
+Remember to assign permissions for all pipelines by using **Pipeline permissions**.
+When you use the web app, ensure that the Build Service has at least Contribute permissions.
+You can use the clone functionality to create the next environment variable group. APP_REGISTRATION_APP_ID, WEB_APP_CLIENT_SECRET, SDAF_GENERAL_GROUP_ID, WORKLOADZONE_PIPELINE_ID, and SYSTEM_PIPELINE_ID are only needed for the SDAF-MGMT group.
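As with the SDAF-General group, you can script the environment-specific group. This sketch creates the group with a few of the nonsecret values from the table and then adds one secret; the group ID comes from the output of the create command, and all bracketed values are placeholders.

```bash
# Create the group with nonsecret values (placeholders)
az pipelines variable-group create --name SDAF-MGMT --variables \
  Agent='Azure Pipelines' \
  AZURE_CONNECTION_NAME='Connection to MGMT subscription' \
  CP_ARM_SUBSCRIPTION_ID='<target subscription ID>' \
  POOL='MGMT-WEEU-POOL'

# Add secret values individually so that they're stored as secrets
az pipelines variable-group variable create --group-id <group ID> \
  --name CP_ARM_CLIENT_SECRET --value '<service principal password>' --secret true
```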
## Create a service connection
-To remove the Azure resources, you need an Azure Resource Manager service connection. For more information, see [Manage service connections](/azure/devops/pipelines/library/service-endpoints?view=azure-devops&preserve-view=true)
+To remove the Azure resources, you need an Azure Resource Manager service connection. For more information, see [Manage service connections](/azure/devops/pipelines/library/service-endpoints?view=azure-devops&preserve-view=true).
-To create the service connection, go to Project settings and navigate to the Service connections setting in the Pipelines section.
+To create the service connection, go to **Project Settings**. Under the **Pipelines** section, select **Service connections**.
-Choose _Azure Resource Manager_ as the service connection type and _Service principal (manual)_ as the authentication method. Enter the target subscription, typically the control plane subscription, and provide the service principal details. Validate the credentials using the _Verify_ button. For more information on how to create a service principal, see [Creating a Service Principal](deploy-control-plane.md#prepare-the-deployment-credentials).
+Select **Azure Resource Manager** as the service connection type and **Service principal (manual)** as the authentication method. Enter the target subscription, which is typically the control plane subscription. Enter the service principal details. Select **Verify** to validate the credentials. For more information on how to create a service principal, see [Create a service principal](deploy-control-plane.md#prepare-the-deployment-credentials).
-Enter a Service connection name, for instance 'Connection to MGMT subscription' and ensure that the _Grant access permission to all pipelines_ checkbox is checked. Select _Verify and save_ to save the service connection.
+Enter a **Service connection name**, for instance, use `Connection to MGMT subscription`. Ensure that the **Grant access permission to all pipelines** checkbox is selected. Select **Verify and save** to save the service connection.
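You can also create the Azure Resource Manager connection with the Azure DevOps CLI. In this sketch, the service principal secret is passed through an environment variable and the remaining values are placeholders.

```bash
# The CLI reads the service principal secret from this environment variable
export AZURE_DEVOPS_EXT_AZURE_RM_SERVICE_PRINCIPAL_KEY='<service principal password>'

az devops service-endpoint azurerm create \
  --name "Connection to MGMT subscription" \
  --azure-rm-service-principal-id '<service principal application ID>' \
  --azure-rm-subscription-id '<target subscription ID>' \
  --azure-rm-subscription-name '<subscription name>' \
  --azure-rm-tenant-id '<tenant ID>'
```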
## Permissions
-> [!NOTE]
-> Most of the pipelines will add files to the Azure Repos and therefore require pull permissions. Assign "Contribute" permissions to the 'Build Service' using the Security tab of the source code repository in the Repositories section in Project settings.
+Most of the pipelines add files to the Azure Repos repositories and therefore require pull permissions. In **Project Settings**, under the **Repositories** section, select the **Security** tab of the source code repository and assign Contribute permissions to the `Build Service`.
-## Deploy the Control Plane
+## Deploy the control plane
-Newly created pipelines might not be visible in the default view. Select on recent tab and go back to All tab to view the new pipelines.
-
-Select the _Control plane deployment_ pipeline, provide the configuration names for the deployer and the SAP library and choose "Run" to deploy the control plane. Make sure to check "Deploy the configuration web application" if you would like to set up the configuration web app.
+Newly created pipelines might not be visible in the default view. Select the **Recent** tab and then go back to the **All** tab to view the new pipelines.
+Select the **Control plane deployment** pipeline and enter the configuration names for the deployer and the SAP library. Select **Run** to deploy the control plane. Make sure to select the **Deploy the configuration web application** checkbox if you want to set up the configuration web app.
### Configure the Azure DevOps Services self-hosted agent manually
-> [!NOTE]
->This is only needed if the Azure DevOps Services agent is not automatically configured. Please check that the agent pool is empty before proceeding.
-
+Manual configuration is only needed if the Azure DevOps Services agent isn't automatically configured. Check that the agent pool is empty before you proceed.
-Connect to the deployer by following these steps:
+To connect to the deployer:
1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Navigate to the resource group containing the deployer virtual machine.
+1. Go to the resource group that contains the deployer virtual machine.
-1. Connect to the virtual machine using Azure Bastion.
+1. Connect to the virtual machine by using Azure Bastion.
-1. The default username is *azureadm*
+1. The default username is **azureadm**.
-1. Choose *SSH Private Key from Azure Key Vault*
+1. Select **SSH Private Key from Azure Key Vault**.
-1. Select the subscription containing the control plane.
+1. Select the subscription that contains the control plane.
1. Select the deployer key vault.
-1. From the list of secrets choose the secret ending with *-sshkey*.
+1. From the list of secrets, select the secret that ends with **-sshkey**.
1. Connect to the virtual machine.
-Run the following script to configure the deployer.
+Run the following script to configure the deployer:
```bash
mkdir -p ~/Azure_SAP_Automated_Deployment

cd ~/Azure_SAP_Automated_Deployment

# Clone the automation framework repository if it isn't already present
git clone https://github.com/Azure/sap-automation.git

cd sap-automation/deploy/scripts

./configure_deployer.sh
```
-Reboot the deployer and reconnect and run the following script to set up the Azure DevOps agent.
+Reboot the deployer, reconnect, and run the following script to set up the Azure DevOps agent:
```bash
cd ~/Azure_SAP_Automated_Deployment/

$DEPLOYMENT_REPO_PATH/deploy/scripts/setup_ado.sh
```
-Accept the license and when prompted for server URL, enter the URL you captured when you created the Azure DevOps Project. For authentication, choose PAT and enter the token value from the previous step.
+Accept the license and, when you're prompted for the server URL, enter the URL you captured when you created the Azure DevOps project. For authentication, select **PAT** and enter the token value from the previous step.
-When prompted enter the application pool name, you created in the previous step. Accept the default agent name and the default work folder name.
-The agent will now be configured and started.
+When prompted, enter the agent pool name that you created in the previous step. Accept the default agent name and the default work folder name. The agent is now configured and starts.
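The same prompts can be answered unattended. This sketch passes the values to the agent's *config.sh* directly; it assumes the agent software is already extracted into the current directory, and that the URL, token, and pool values are the ones you created earlier.

```bash
# Unattended agent configuration (all bracketed values are placeholders)
./config.sh --unattended \
  --url https://dev.azure.com/<your-organization> \
  --auth pat \
  --token '<personal access token>' \
  --pool '<agent pool name>' \
  --acceptTeeEula
```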
+## Deploy the control plane web application
-## Deploy the Control Plane Web Application
+Selecting the `deploy the web app infrastructure` parameter when you run the control plane deployment pipeline provisions the infrastructure necessary for hosting the web app. The **Deploy web app** pipeline publishes the application's software to that infrastructure.
-Checking the "deploy the web app infrastructure" parameter when running the Control plane deployment pipeline will provision the infrastructure necessary for hosting the web app. The "Deploy web app" pipeline will publish the application's software to that infrastructure.
+Wait for the deployment to finish. Select the **Extensions** tab and follow the instructions to finalize the configuration. Update the `reply-url` values for the app registration.
-Wait for the deployment to finish. Once the deployment is complete, navigate to the Extensions tab and follow the instructions to finalize the configuration and update the 'reply-url' values for the app registration.
-
-As a result of running the control plane pipeline, part of the web app URL needed will be stored in a variable named "WEBAPP_URL_BASE" in your environment-specific variable group. You can at any time update the URLs of the registered application web app using the following command.
+As a result of running the control plane pipeline, part of the web app URL that's needed is stored in a variable named `WEBAPP_URL_BASE` in your environment-specific variable group. At any time, you can update the URLs of the registered application web app by using the following command.
# [Linux](#tab/linux)
```bash
webapp_url_base=<WEBAPP_URL_BASE>

az ad app update --id $TF_VAR_app_registration_app_id --web-home-page-url https://${webapp_url_base}.azurewebsites.net --web-redirect-uris https://${webapp_url_base}.azurewebsites.net/ https://${webapp_url_base}.azurewebsites.net/.auth/login/aad/callback
```

# [Windows](#tab/windows)

```powershell
$webapp_url_base="<WEBAPP_URL_BASE>"

az ad app update --id $TF_VAR_app_registration_app_id --web-home-page-url https://${webapp_url_base}.azurewebsites.net --web-redirect-uris https://${webapp_url_base}.azurewebsites.net/ https://${webapp_url_base}.azurewebsites.net/.auth/login/aad/callback
```
-You will also need to grant reader permissions to the app service system-assigned managed identity. Navigate to the app service resource. On the left hand side, click "Identity". In the "system assigned" tab, click on "Azure role assignments" > "Add role assignment". Select "subscription" as the scope, and "reader" as the role. Then click save. Without this step, the web app dropdown functionality won't work.
+You also need to grant reader permissions to the app service system-assigned managed identity. Go to the app service resource. On the left side, select **Identity**. On the **System assigned** tab, select **Azure role assignments** > **Add role assignment**. Select **Subscription** as the scope and **Reader** as the role. Then select **Save**. Without this step, the web app dropdown functionality won't work.
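The role assignment can also be scripted. This sketch looks up the system-assigned identity first; the app service name, resource group, and subscription ID are placeholders.

```bash
# Get the principal ID of the app service's system-assigned identity
principal_id=$(az webapp identity show --name <app service name> \
  --resource-group <resource group> --query principalId --output tsv)

# Grant Reader at subscription scope
az role assignment create --assignee "$principal_id" \
  --role Reader --scope "/subscriptions/<subscription ID>"
```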
-You should now be able to visit the web app, and use it to deploy SAP workload zones and SAP system infrastructure.
+You should now be able to visit the web app and use it to deploy SAP workload zones and SAP system infrastructure.
## Next step

> [!div class="nextstepaction"]
-> [DevOps hands on lab](devops-tutorial.md)
+> [Azure DevOps hands-on lab](devops-tutorial.md)
sap Configure Extra Disks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/configure-extra-disks.md
Title: Custom disk configurations
-description: Provide custom disk configurations for your system in the SAP on Azure Deployment Automation Framework. Add extra disks to a new system, or an existing system.
+description: Provide custom disk configurations for your system in SAP Deployment Automation Framework. Add extra disks to a new system or an existing system.
-# Change the disk configuration for the SAP deployment automation
+# Change the disk configuration for SAP Deployment Automation Framework
-By default, the [SAP on Azure Deployment Automation Framework](deployment-framework.md) defines the disk configuration for the SAP systems. As needed, you can change the default configuration by providing a custom disk configuration json file.
+By default, [SAP Deployment Automation Framework](deployment-framework.md) defines the disk configuration for SAP systems. As needed, you can change the default configuration by providing a custom disk configuration JSON file.
> [!TIP] > When possible, it's a best practice to increase the disk size instead of adding more disks. - ### HANA databases
-The table shows the default disk configuration for HANA systems.
+The table shows the default disk configuration for HANA systems.
-| Size | VM SKU | OS disk | Data disks | Log disks | Hana shared | User SAP | Backup |
+| Size | VM SKU | OS disk | Data disks | Log disks | HANA shared | User SAP | Backup |
|--|||||-|--|--|
| Default | Standard_D8s_v3 | E6 (64 GB) | P20 (512 GB) | P20 (512 GB) | E20 (512 GB) | E6 (64 GB) | E20 (512 GB) |
| S4DEMO | Standard_E32ds_v4 | P10 (128 GB) | P10x4 (128 GB) | P10x3 (128 GB) | | P20 (512 GB) | P20 (512 GB) |
### AnyDB databases
-The table shows the default disk configuration for AnyDB systems.
+The table shows the default disk configuration for AnyDB systems.
| Size | VM SKU | OS disk | Data disks | Log disks |
|||-||--|
| 40 TB | Standard_M128s | P10 (128 GB) | P50x10 (4096 GB) | P40x2 (2048 GB) |
| 50 TB | Standard_M128s | P10 (128 GB) | P50x13 (4096 GB) | P40x2 (2048 GB) |

## Custom sizing file
-The disk sizing for an SAP system can be defined using a custom sizing json file. The file is grouped in four sections: "db", "app", "scs", and "web" and each section contains a list of disk configuration names, for example for the database tier "M32ts", "M64s", etc.
+You can define the disk sizing for an SAP system by using a custom sizing JSON file. The file is grouped in four sections: `db`, `app`, `scs`, and `web`. Each section contains a list of disk configuration names. For example, for the database tier, the names might be `M32ts` or `M64s`.
-These sections contain the information for which is the default Virtual machine size and the list of disk to be deployed for each tier.
+These sections contain the information for the default virtual machine size and the list of disks to be deployed for each tier.
-Create a file using the structure shown below and save the file in the same folder as the parameter file for the system, for instance 'XO1_sizes.json'. Then, define the parameter `custom_disk_sizes_filename` in the parameter file. For example, `custom_disk_sizes_filename = "XO1_db_sizes.json"`.
+Create a file by using the structure shown in the following code sample. Save the file in the same folder as the parameter file for the system. For instance, use `XO1_sizes.json`. Then define the parameter `custom_disk_sizes_filename` in the parameter file. For example, use `custom_disk_sizes_filename = "XO1_db_sizes.json"`.
> [!TIP]
-> The path to the disk configuration needs to be relative to the folder containing the tfvars file.
-
+> The path to the disk configuration needs to be relative to the folder that contains the `tfvars` file.
-The following sample code is an example configuration file. It defines three data disks (LUNs 0, 1, and 2), a log disk (LUN 9, using the Ultra SKU) and a backup disk (LUN 13). The application tier servers (Application, Central Services amd Web Dispatchers) will be deployed with jus a single 'sap' data disk.
-
-The three data disks will be striped using LVM. The log disk will be mounted as a single disk. The backup disk will be mounted as a single disk.
+The following sample code is an example configuration file. It defines three data disks (LUNs 0, 1, and 2), a log disk (LUN 9, using the Ultra SKU), and a backup disk (LUN 13). The application tier servers (application, central services, and web dispatchers) are deployed with just a single `sap` data disk.
+The three data disks are striped by using LVM. The log disk and the backup disk are each mounted as a single disk.
```json
{
}
```
-## Add extra disks to existing system
+## Add extra disks to an existing system
If you need to add disks to an already deployed system, you can add a new block to your JSON structure. Include the attribute `append` in this block, and set the value to `true`. For example, in the following sample code, the last block contains the attribute `"append" : true,`. The last block adds a new disk to the database tier, which is already configured in the first `"data"` block in the code.
-## Next steps
+## Next step
> [!div class="nextstepaction"]
-> [Configure custom naming module](naming-module.md)
-
+> [Configure custom naming](naming-module.md)
sap Configure Sap Parameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/configure-sap-parameters.md
Title: Configure SAP parameters file for Ansible
-description: Define SAP parameters for Ansible
+ Title: Configure SAP parameters files for Ansible
+description: Learn how to define SAP parameters for Ansible.
-# Configure SAP Installation parameters
+# Configure SAP installation parameters
-The Ansible playbooks use a combination of default parameters and parameters defined by the Terraform deployment for the SAP installation.
+The Ansible playbooks use a combination of default parameters and parameters defined by the Terraform deployment for the SAP installation.
+## Default parameters
-## Default Parameters
-
-This table contains the default parameters defined by the framework.
+The following tables contain the default parameters defined by the framework.
### User IDs

This table contains the IDs for the SAP users and groups for the different platforms.

> [!div class="mx-tdCol2BreakAll "]
-> | Parameter | Description | Default Value |
+> | Parameter | Description | Default value |
> | - | -- | - |
> | HANA | | |
-> | `sapadm_uid` | The UID for the sapadm account. | 2100 |
-> | `sidadm_uid` | The UID for the sidadm account. | 2003 |
-> | `hdbadm_uid` | The UID for the hdbadm account. | 2200 |
-> | `sapinst_gid` | The GID for the sapinst group. | 2001 |
-> | `sapsys_gid` | The GID for the sapsys group. | 2000 |
-> | `hdbshm_gid` | The GID for the hdbshm group. | 2002 |
+> | `sapadm_uid` | The UID for the sapadm account | 2100 |
+> | `sidadm_uid` | The UID for the sidadm account | 2003 |
+> | `hdbadm_uid` | The UID for the hdbadm account | 2200 |
+> | `sapinst_gid` | The GID for the sapinst group | 2001 |
+> | `sapsys_gid` | The GID for the sapsys group | 2000 |
+> | `hdbshm_gid` | The GID for the hdbshm group | 2002 |
> | DB2 | | |
-> | `db2sidadm_uid` | The UID for the db2sidadm account. | 3004 |
-> | `db2sapsid_uid` | The UID for the db2sapsid account. | 3005 |
-> | `db2sysadm_gid` | The UID for the db2sysadm group. | 3000 |
-> | `db2sysctrl_gid` | The UID for the db2sysctrl group. | 3001 |
-> | `db2sysmaint_gid` | The UID for the db2sysmaint group. | 3002 |
-> | `db2sysmon_gid` | The UID for the db2sysmon group. | 2003 |
+> | `db2sidadm_uid` | The UID for the db2sidadm account | 3004 |
+> | `db2sapsid_uid` | The UID for the db2sapsid account | 3005 |
+> | `db2sysadm_gid` | The UID for the db2sysadm group | 3000 |
+> | `db2sysctrl_gid` | The UID for the db2sysctrl group | 3001 |
+> | `db2sysmaint_gid` | The UID for the db2sysmaint group | 3002 |
+> | `db2sysmon_gid` | The UID for the db2sysmon group | 2003 |
> | ORACLE | | |
-> | `orasid_uid` | The UID for the orasid account. | 3100 |
-> | `oracle_uid` | The UID for the oracle account. | 3101 |
-> | `observer_uid` | The UID for the observer account. | 4000 |
-> | `dba_gid` | The GID for the dba group. | 3100 |
-> | `oper_gid` | The GID for the oper group. | 3101 |
-> | `asmoper_gid` | The GID for the asmoper group. | 3102 |
-> | `asmadmin_gid` | The GID for the asmadmin group. | 3103 |
-> | `asmdba_gid` | The GID for the asmdba group. | 3104 |
-> | `oinstall_gid` | The GID for the oinstall group. | 3105 |
-> | `backupdba_gid` | The GID for the backupdba group. | 3106 |
-> | `dgdba_gid` | The GID for the dgdba group. | 3107 |
-> | `kmdba_gid` | The GID for the kmdba group. | 3108 |
-> | `racdba_gid` | The GID for the racdba group. | 3108 |
-
+> | `orasid_uid` | The UID for the orasid account | 3100 |
+> | `oracle_uid` | The UID for the oracle account | 3101 |
+> | `observer_uid` | The UID for the observer account | 4000 |
+> | `dba_gid` | The GID for the dba group | 3100 |
+> | `oper_gid` | The GID for the oper group | 3101 |
+> | `asmoper_gid` | The GID for the asmoper group | 3102 |
+> | `asmadmin_gid` | The GID for the asmadmin group | 3103 |
+> | `asmdba_gid` | The GID for the asmdba group | 3104 |
+> | `oinstall_gid` | The GID for the oinstall group | 3105 |
+> | `backupdba_gid` | The GID for the backupdba group | 3106 |
+> | `dgdba_gid` | The GID for the dgdba group | 3107 |
+> | `kmdba_gid` | The GID for the kmdba group | 3108 |
+> | `racdba_gid` | The GID for the racdba group | 3108 |
### Windows parameters

This table contains the information pertinent to Windows deployments.

> [!div class="mx-tdCol2BreakAll "]
-> | Parameter | Description | Default Value |
+> | Parameter | Description | Default value |
> | - | -- | - |
> | `mssserver_version` | SQL Server version | `mssserver2019` |

## Parameters
-This table contains the parameters stored in the sap-parameters.yaml file, most of the values are prepopulated via the Terraform deployment.
+The following tables contain the parameters stored in the *sap-parameters.yaml* file. Most of the values are prepopulated via the Terraform deployment.
### Infrastructure
> [!div class="mx-tdCol2BreakAll "]
> | Parameter | Description | Type |
> | - | - | - |
> | `sap_fqdn` | The FQDN suffix for the virtual machines to be added to the local hosts file | Required |
-### Application Tier
+### Application tier
> [!div class="mx-tdCol2BreakAll "]
> | Parameter | Description | Type |
> | - | - | - |
> | `bom_base_name` | The name of the SAP Application Bill of Materials file | Required |
> | `sap_sid` | The SID of the SAP application | Required |
-> | `scs_high_availability` | Defines if the Central Services is deployed highly available | Required |
+> | `scs_high_availability` | Defines if the central services is deployed highly available | Required |
> | `scs_instance_number` | Defines the instance number for ASCS | Optional |
> | `scs_lb_ip` | IP address of ASCS instance | Optional |
> | `scs_virtual_hostname` | The host name of the ASCS instance | Optional |
> | `ers_lb_ip` | IP address of ERS instance | Optional |
> | `ers_virtual_hostname` | The host name of the ERS instance | Optional |
> | `pas_instance_number` | Defines the instance number for PAS | Optional |
-> | `web_sid` | The SID for the Web Dispatcher | Required if web dispatchers are deployed |
-> | `scs_clst_lb_ip` | IP address of Windows Cluster service | Optional |
+> | `web_sid` | The SID for the web dispatcher | Required if web dispatchers are deployed |
+> | `scs_clst_lb_ip` | IP address of Windows cluster service | Optional |
-### Database Tier
+### Database tier
> [!div class="mx-tdCol2BreakAll "]
> | Parameter | Description | Type |
> | - | - | - |
-> | `db_sid` | The SID of the SAP database | Required |
-> | `db_instance_number` | Defines the instance number for the database | Required |
-> | `db_high_availability` | Defines if the database is deployed highly available | Required |
-> | `db_lb_ip` | IP address of the database load balancer | Optional |
-> | `platform` | The database platform. Valid values are: ASE, DB2, HANA, ORACLE, SQLSERVER | Required |
-> | `db_clst_lb_ip` | IP address of database cluster for Windows | Optional |
+> | `db_sid` | The SID of the SAP database. | Required |
+> | `db_instance_number` | Defines the instance number for the database. | Required |
+> | `db_high_availability` | Defines if the database is deployed highly available. | Required |
+> | `db_lb_ip` | IP address of the database load balancer. | Optional |
+> | `platform` | The database platform. Valid values are ASE, DB2, HANA, ORACLE, and SQLSERVER. | Required |
+> | `db_clst_lb_ip` | IP address of database cluster for Windows. | Optional |
### NFS

> [!div class="mx-tdCol2BreakAll "]
> | Parameter | Description | Type |
> | - | - | - |
-> | `NFS_provider` | Defines what NFS backend to use, the options are 'AFS' for Azure Files NFS or 'ANF' for Azure NetApp files, 'NONE' for NFS from the SCS server or 'NFS' for an external NFS solution. | Optional |
-> | `sap_mnt` | The NFS path for sap_mnt | Required |
-> | `sap_trans` | The NFS path for sap_trans | Required |
-> | `usr_sap_install_mountpoint` | The NFS path for usr/sap/install | Required |
+> | `NFS_provider` | Defines what NFS back end to use. The options are `AFS` for Azure Files NFS or `ANF` for Azure NetApp Files, `NONE` for NFS from the SCS server, or `NFS` for an external NFS solution. | Optional |
+> | `sap_mnt` | The NFS path for sap_mnt. | Required |
+> | `sap_trans` | The NFS path for sap_trans. | Required |
+> | `usr_sap_install_mountpoint` | The NFS path for usr/sap/install. | Required |
### Azure NetApp Files

> [!div class="mx-tdCol2BreakAll "]
### Windows

> [!div class="mx-tdCol2BreakAll "]
> | Parameter | Description | Type |
> | - | - | - |
-> | `domain_name` | Defines the Windows domain name, for example sap.contoso.net. | Required |
-> | `domain` | Defines the Windows domain Netbios name, for example sap. | Optional |
+> | `domain_name` | Defines the Windows domain name, for example, sap.contoso.net | Required |
+> | `domain` | Defines the Windows domain Netbios name, for example, sap | Optional |
> | SQL | | |
-> | `use_sql_for_SAP` | Uses the SAP defined SQL Server media, defaults to 'true' | Optional |
+> | `use_sql_for_SAP` | Uses the SAP-defined SQL Server media, defaults to `true` | Optional |
> | `win_cluster_share_type` | Defines the cluster type (CSD/FS), defaults to CSD | Optional |

### Miscellaneous
> [!div class="mx-tdCol2BreakAll "]
> | Parameter | Description | Type |
> | - | - | - |
-> | `kv_name` | The name of the Azure key vault containing the system credentials | Required |
-> | `secret_prefix` | The prefix for the name of the secrets for the SID stored in key vault | Required |
-> | `upgrade_packages` | Update all installed packages on the virtual machines | Required |
-> | `use_msi_for_clusters` | Use managed identities for fencing | Required |
+> | `kv_name` | The name of the Azure key vault that contains the system credentials | Required |
+> | `secret_prefix` | The prefix for the name of the secrets for the SID stored in the key vault | Required |
+> | `upgrade_packages` | Updates all installed packages on the virtual machines | Required |
+> | `use_msi_for_clusters` | Uses managed identities for fencing | Required |
### Disks
-Disks define a dictionary with information about the disks of all the virtual machines in the SAP Application virtual machines.
+Disks define a dictionary with information about the disks of all the virtual machines in the SAP application virtual machines.
> [!div class="mx-tdCol2BreakAll "]
-> | attribute | Description | Type |
+> | Attribute | Description | Type |
> | - | - | - |
-> | `host` | The computer name of the virtual machine | Required |
-> | `LUN` | Defines the LUN number that the disk is attached to | Required |
-> | `type` | This attribute is used to group the disks, each disk of the same type will be added to the LVM on the virtual machine | Required |
-
+> | `host` | The computer name of the virtual machine. | Required |
+> | `LUN` | Defines the LUN number that the disk is attached to. | Required |
+> | `type` | This attribute is used to group the disks. Each disk of the same type is added to the LVM on the virtual machine. | Required |
Example of the disks dictionary:

```yaml
disks:
```
### Oracle support
-From the v3.4 release, it's possible to deploy SAP on Azure systems in a Shared Home configuration using an Oracle database backend. For more information on running SAP on Oracle in Azure, see [Azure Virtual Machines Oracle DBMS deployment for SAP workload](../workloads/dbms-guide-oracle.md).
+From the v3.4 release, it's possible to deploy SAP on Azure systems in a shared home configuration by using an Oracle database back end. For more information on running SAP on Oracle in Azure, see [Azure Virtual Machines Oracle DBMS deployment for SAP workload](../workloads/dbms-guide-oracle.md).
-In order to install the Oracle backend using the SAP on Azure Deployment Automation Framework, you need to provide the following parameters
+To install the Oracle back end by using SAP Deployment Automation Framework, you need to provide the following parameters:
> [!div class="mx-tdCol2BreakAll "]
> | Parameter | Description | Type |
> | - | - | - |
-> | `platform` | The database backend, 'ORACLE' | Required |
-> | `ora_release` | The Oracle release version, for example 19 | Required |
-> | `ora_release` | The Oracle release version, for example 19.0.0 | Required |
+> | `platform` | The database back end, `ORACLE` | Required |
+> | `ora_release` | The Oracle release version, for example, 19 | Required |
+> | `ora_version` | The Oracle version, for example, 19.0.0 | Required |
> | `oracle_sbp_patch` | The Oracle SBP patch file name | Required |
-#### Shared Home support
+#### Shared home support
-To configure shared home support for Oracle, you need to add a dictionary defining the SIDs to be deployed. You can do that by adding the parameter 'MULTI_SIDS' that contains a list of the SIDs and the SID details.
+To configure shared home support for Oracle, you need to add a dictionary that defines the SIDs to be deployed. You can do that by adding the parameter `MULTI_SIDS` that contains a list of the SIDs and the SID details.
```yaml
MULTI_SIDS:
  - {sid: 'QE1', dbsid_uid: '3006', sidadm_uid: '2002', ascs_inst_no: '01', pas_inst_no: '01', app_inst_no: '01'}
```
-Each row must specify the following parameters.
-
+Each row must specify the following parameters:
> [!div class="mx-tdCol2BreakAll "]
> | Parameter | Description | Type |
> | - | - | - |
> | `pas_inst_no` | The PAS instance number for the instance | Required |
> | `app_inst_no` | The APP instance number for the instance | Required |
+## Override the default parameters
-## Overriding the default parameters
-
-You can override the default parameters by either specifying them in the sap-parameters.yaml file or by passing them as command line parameters to the Ansible playbooks.
+You can override the default parameters by either specifying them in the *sap-parameters.yaml* file or by passing them as command-line parameters to the Ansible playbooks.
-For example if you want to override the default value of the group ID for the sapinst group (`sapinst_gid`) parameter, you can do it by adding the following line to the sap-parameters.yaml file:
+For example, if you want to override the default value of the group ID for the `sapinst` group (`sapinst_gid`) parameter, add the following line to the *sap-parameters.yaml* file:
```yaml
sapinst_gid: 1000
```
-If you want to provide them as parameters for the Ansible playbooks, you can do it by adding the following parameter to the command line:
+If you want to provide them as parameters for the Ansible playbooks, add the following parameter to the command line:
```bash
ansible-playbook -i hosts SID_hosts.yaml --extra-vars "sapinst_gid=1000" .....
```
-You can also override the default parameters by specifying them in the `configuration_settings' variable in your tfvars file. For example, if you want to override 'sapinst_gid' your tfvars file should contain the following line:
+You can also override the default parameters by specifying them in the `configuration_settings` variable in your `tfvars` file. For example, if you want to override `sapinst_gid`, your `tfvars` file should contain the following line:
```terraform
configuration_settings = {
  sapinst_gid = "1000"
}
```
--
-## Next steps
+## Next step
> [!div class="nextstepaction"]
-> [Deploy SAP System](deploy-system.md)
+> [Deploy the SAP system](deploy-system.md)
sap Configure System https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/configure-system.md
Title: Configure SAP system parameters for automation
-description: Define the SAP system properties for the SAP on Azure Deployment Automation Framework using a parameters file.
+description: Define the SAP system properties for SAP Deployment Automation Framework by using a parameters file.
# Configure SAP system parameters
-Configuration for the [SAP on Azure Deployment Automation Framework](deployment-framework.md)] happens through parameters files. You provide information about your SAP system infrastructure in a tfvars file, which the automation framework uses for deployment. You can find examples of the variable file in the 'samples' repository.
+Configuration for [SAP Deployment Automation Framework](deployment-framework.md) happens through parameters files. You provide information about your SAP system infrastructure in a `tfvars` file, which the automation framework uses for deployment. You can find examples of the variable file in the `samples` repository.
-The automation supports both creating resources (green field deployment) or using existing resources (brownfield deployment).
-
-For the green field scenario, the automation defines default names for resources, however some resource names may be defined in the tfvars file.
-For the brownfield scenario, the Azure resource identifiers for the resources must be specified.
+The automation supports creating resources (green-field deployment) or using existing resources (brown-field deployment):
+- **Green-field scenario**: The automation defines default names for resources, but some resource names might be defined in the `tfvars` file.
+- **Brown-field scenario**: The Azure resource identifiers for the resources must be specified.
## Deployment topologies
-The automation framework can be used to deploy the following SAP architectures:
+You can use the automation framework to deploy the following SAP architectures:
- Standalone
- Distributed
-- Distributed (Highly Available)
+- Distributed (highly available)
### Standalone
-In the Standalone architecture, all the SAP roles are installed on a single server.
-
+In the standalone architecture, all the SAP roles are installed on a single server.
To configure this topology, define the database tier values and set `enable_app_tier_deployment` to false.

### Distributed
-The distributed architecture has a separate database server and application tier. The application tier can further be separated by having SAP Central Services on a virtual machine and one or more application servers.
-To configure this topology, define the database tier values and define `scs_server_count` = 1, `application_server_count` >= 1
+The distributed architecture has a separate database server and application tier. The application tier can further be separated by having SAP central services on a virtual machine and one or more application servers.
-### High Availability
+To configure this topology, define the database tier values and define `scs_server_count` = 1, `application_server_count` >= 1.
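As a minimal sketch, a distributed topology in the system `tfvars` file might look like the following. The values shown are illustrative assumptions, not prescribed defaults:

```terraform
# Hypothetical tfvars excerpt for a distributed topology.
database_platform        = "HANA"   # example database tier value
scs_server_count         = 1        # one central services server
application_server_count = 2        # at least one application server
```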
-The Distributed (Highly Available) deployment is similar to the Distributed architecture. In this deployment, the database and/or SAP Central Services can both be configured using a highly available configuration using two virtual machines each with Pacemaker clusters or in case of Windows with Windows Failover clustering.
+### High availability
-To configure this topology, define the database tier values and set `database_high_availability` to true. Set `scs_server_count = 1` and `scs_high_availability` = true and
-`application_server_count` >= 1
+The distributed (highly available) deployment is similar to the distributed architecture. In this deployment, the database and/or SAP central services can both be configured by using a highly available configuration that uses two virtual machines, each with Pacemaker clusters or Windows failover clustering.
-## Environment parameters
+To configure this topology, define the database tier values and set `database_high_availability` to true. Set `scs_server_count` = 1 and `scs_high_availability` = true and `application_server_count` >= 1.
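A hedged sketch of the highly available variant, with illustrative values:

```terraform
# Hypothetical tfvars excerpt for a distributed, highly available topology.
database_high_availability = true
scs_server_count           = 1
scs_high_availability      = true
application_server_count   = 2
```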
-The table below contains the parameters that define the environment settings.
+## Environment parameters
+This section contains the parameters that define the environment settings.
> [!div class="mx-tdCol2BreakAll "]
> | Variable | Description | Type | Notes |
> | | -- | - | - |
-> | `environment` | Identifier for the workload zone (max 5 chars) | Mandatory | For example, `PROD` for a production environment and `NP` for a non-production environment. |
-> | `location` | The Azure region in which to deploy. | Required | |
+> | `environment` | Identifier for the workload zone (maximum five characters) | Mandatory | For example, `PROD` for a production environment and `NP` for a nonproduction environment. |
+> | `location` | The Azure region in which to deploy | Required | |
> | `custom_prefix` | Specifies the custom prefix used in the resource naming | Optional | |
> | `use_prefix` | Controls if the resource naming includes the prefix | Optional | DEV-WEEU-SAP01-X00_xxxx |
-> | 'name_override_file' | Name override file | Optional | see [Custom naming](naming-module.md) |
-> | 'save_naming_information | Create a sample naming json file | Optional | see [Custom naming](naming-module.md) |
+> | `name_override_file` | Name override file | Optional | See [Custom naming](naming-module.md). |
+> | `save_naming_information` | Creates a sample naming JSON file | Optional | See [Custom naming](naming-module.md). |
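For example, a minimal environment definition might look like this sketch, where `PROD` and `westeurope` are example values:

```terraform
# Illustrative environment settings.
environment = "PROD"
location    = "westeurope"
```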
## Resource group parameters
-The table below contains the parameters that define the resource group.
+This section contains the parameters that define the resource group.
> [!div class="mx-tdCol2BreakAll "]
> | `resourcegroup_tags` | Tags to be associated to the resource group | Optional |
-## SAP Virtual Hostname parameters
+## SAP virtual hostname parameters
-In the SAP on Azure Deployment Automation Framework, the SAP virtual hostname is defined by specifying the `use_secondary_ips` parameter.
+In SAP Deployment Automation Framework, the SAP virtual hostname is defined by specifying the `use_secondary_ips` parameter.
> [!div class="mx-tdCol2BreakAll "]
> | Variable | Description | Type |
> | -- | -- | - |
-> | `use_secondary_ips` | Boolean flag indicating if SAP should be installed using Virtual hostnames | Optional |
+> | `use_secondary_ips` | Boolean flag that indicates if SAP should be installed by using virtual hostnames | Optional |
### Database tier parameters
-The database tier defines the infrastructure for the database tier, supported database backends are:
+The database tier defines the infrastructure for the database tier. Supported database back ends are:
- `HANA`
- `DB2`
- `ORACLE-ASM`
- `ASE`
- `SQLSERVER`
-- `NONE` (in this case no database tier is deployed)
-
+- `NONE` (in this case, no database tier is deployed)
> [!div class="mx-tdCol2BreakAll "]
> | Variable | Description | Type | Notes |
> | - | -- | -- | |
-> | `database_sid` | Defines the database SID. | Required | |
-> | `database_platform` | Defines the database backend. | Supported values are `HANA`, `DB2`, `ORACLE`, `ASE`, `SQLSERVER`, `NONE` |
-> | `database_high_availability` | Defines if the database tier is deployed highly available. | Optional | See [High availability configuration](configure-system.md#high-availability-configuration) |
-> | `database_server_count` | Defines the number of database servers. | Optional | Default value is 1 |
-> | `database_vm_zones` | Defines the Availability Zones for the database servers. | Optional | |
-> | `db_sizing_dictionary_key` | Defines the database sizing information. | Required | See [Custom Sizing](configure-extra-disks.md) |
-> | `db_disk_sizes_filename` | Defines the custom database sizing file name. | Optional | See [Custom Sizing](configure-extra-disks.md) |
-> | `database_vm_use_DHCP` | Controls if Azure subnet provided IP addresses should be used. | Optional | |
-> | `database_vm_db_nic_ips` | Defines the IP addresses for the database servers (database subnet). | Optional | |
-> | `database_vm_db_nic_secondary_ips` | Defines the secondary IP addresses for the database servers (database subnet). | Optional | |
-> | `database_vm_admin_nic_ips` | Defines the IP addresses for the database servers (admin subnet). | Optional | |
-> | `database_vm_image` | Defines the Virtual machine image to use, see below. | Optional | |
-> | `database_vm_authentication_type` | Defines the authentication type (key/password). | Optional | |
-> | `database_use_avset` | Controls if the database servers are placed in availability sets. | Optional | default is false |
-> | `database_use_ppg` | Controls if the database servers will be placed in proximity placement groups. | Optional | default is true |
-> | `database_vm_avset_arm_ids` | Defines the existing availability sets Azure resource IDs. | Optional | Primarily used with ANF pinning |
-> | `hana_dual_nics` | Controls if the HANA database servers will have dual network interfaces. | Optional | default is true |
-
-The Virtual Machine and the operating system image is defined using the following structure:
+> | `database_sid` | Defines the database SID | Required | |
+> | `database_platform` | Defines the database back end | Required | Supported values are `HANA`, `DB2`, `ORACLE`, `ASE`, `SQLSERVER`, and `NONE`. |
+> | `database_high_availability` | Defines if the database tier is deployed highly available | Optional | See [High-availability configuration](configure-system.md#high-availability-configuration). |
+> | `database_server_count` | Defines the number of database servers | Optional | Default value is 1. |
+> | `database_vm_zones` | Defines the availability zones for the database servers | Optional | |
+> | `db_sizing_dictionary_key` | Defines the database sizing information | Required | See [Custom sizing](configure-extra-disks.md). |
+> | `db_disk_sizes_filename` | Defines the custom database sizing file name | Optional | See [Custom sizing](configure-extra-disks.md). |
+> | `database_vm_use_DHCP` | Controls if Azure subnet-provided IP addresses should be used | Optional | |
+> | `database_vm_db_nic_ips` | Defines the IP addresses for the database servers (database subnet) | Optional | |
+> | `database_vm_db_nic_secondary_ips` | Defines the secondary IP addresses for the database servers (database subnet) | Optional | |
+> | `database_vm_admin_nic_ips` | Defines the IP addresses for the database servers (admin subnet) | Optional | |
+> | `database_vm_image` | Defines the virtual machine image to use | Optional | |
+> | `database_vm_authentication_type` | Defines the authentication type (key/password) | Optional | |
+> | `database_use_avset` | Controls if the database servers are placed in availability sets | Optional | Default is false. |
+> | `database_use_ppg` | Controls if the database servers are placed in proximity placement groups | Optional | Default is true. |
+> | `database_vm_avset_arm_ids` | Defines the existing availability sets Azure resource IDs | Optional | Primarily used with ANF pinning. |
+> | `hana_dual_nics` | Controls if the HANA database servers will have dual network interfaces | Optional | Default is true. |
+
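As a minimal sketch that ties a few of these parameters together, assuming a single-node HANA deployment (the SID, zone, and sizing key are example values):

```terraform
# Hypothetical database tier excerpt.
database_sid             = "HDB"
database_platform        = "HANA"
database_server_count    = 1
database_vm_zones        = ["1"]
db_sizing_dictionary_key = "Default"   # placeholder sizing key
```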
+The virtual machine and the operating system image are defined by using the following structure:
```python
{
### Common application tier parameters
-The application tier defines the infrastructure for the application tier, which can consist of application servers, central services servers and web dispatch servers
-
+The application tier defines the infrastructure for the application tier, which can consist of application servers, central services servers, and web dispatch servers.
> [!div class="mx-tdCol2BreakAll "]
> | Variable | Description | Type | Notes |
> | - | | --| |
> | `enable_app_tier_deployment` | Defines if the application tier is deployed | Optional | |
> | `sid` | Defines the SAP application SID | Required | |
-> | `app_tier_sizing_dictionary_key` | Lookup value defining the VM SKU and the disk layout for tha application tier servers | Optional |
-> | `app_disk_sizes_filename` | Defines the custom disk size file for the application tier servers | Optional | See [Custom Sizing](configure-extra-disks.md) |
-> | `app_tier_authentication_type` | Defines the authentication type for the application tier virtual machine(s) | Optional | |
-> | `app_tier_use_DHCP` | Controls if Azure subnet provided IP addresses should be used (dynamic) | Optional | |
+> | `app_tier_sizing_dictionary_key` | Lookup value that defines the VM SKU and the disk layout for the application tier servers | Optional |
+> | `app_disk_sizes_filename` | Defines the custom disk size file for the application tier servers | Optional | See [Custom sizing](configure-extra-disks.md). |
+> | `app_tier_authentication_type` | Defines the authentication type for the application tier virtual machines | Optional | |
+> | `app_tier_use_DHCP` | Controls if Azure subnet-provided IP addresses should be used (dynamic) | Optional | |
> | `app_tier_dual_nics` | Defines if the application tier server will have two network interfaces | Optional | |
-### SAP Central services parameters
-
+### SAP central services parameters
> [!div class="mx-tdCol2BreakAll "]
> | Variable | Description | Type | Notes |
> | -- | | -| |
-> | `scs_server_count` | Defines the number of SCS servers. | Required | |
-> | `scs_high_availability` | Defines if the Central Services is highly available. | Optional | See [High availability configuration](configure-system.md#high-availability-configuration) |
-> | `scs_instance_number` | The instance number of SCS. | Optional | |
-> | `ers_instance_number` | The instance number of ERS. | Optional | |
-> | `scs_server_sku` | Defines the Virtual machine SKU to use. | Optional | |
-> | `scs_server_image` | Defines the Virtual machine image to use. | Required | |
-> | `scs_server_zones` | Defines the availability zones of the SCS servers. | Optional | |
-> | `scs_server_app_nic_ips` | List of IP addresses for the SCS servers (app subnet). | Optional | |
-> | `scs_server_app_nic_secondary_ips[]` | List of secondary IP addresses for the SCS servers (app subnet). | Optional | |
-> | `scs_server_app_admin_nic_ips` | List of IP addresses for the SCS servers (admin subnet). | Optional | |
-> | `scs_server_loadbalancer_ips` | List of IP addresses for the scs load balancer (app subnet). | Optional | |
-> | `scs_server_use_ppg` | Controls if the SCS servers are placed in availability sets. | Optional | |
-> | `scs_server_use_avset` | Controls if the SCS servers will be placed in proximity placement groups.| Optional | |
-> | `scs_server_tags` | Defines a list of tags to be applied to the SCS servers. | Optional | |
+> | `scs_server_count` | Defines the number of SCS servers | Required | |
+> | `scs_high_availability` | Defines if the central services instance is highly available | Optional | See [High-availability configuration](configure-system.md#high-availability-configuration). |
+> | `scs_instance_number` | The instance number of SCS | Optional | |
+> | `ers_instance_number` | The instance number of ERS | Optional | |
+> | `scs_server_sku` | Defines the virtual machine SKU to use | Optional | |
+> | `scs_server_image` | Defines the virtual machine image to use | Required | |
+> | `scs_server_zones` | Defines the availability zones of the SCS servers | Optional | |
+> | `scs_server_app_nic_ips` | List of IP addresses for the SCS servers (app subnet) | Optional | |
+> | `scs_server_app_nic_secondary_ips[]` | List of secondary IP addresses for the SCS servers (app subnet) | Optional | |
+> | `scs_server_app_admin_nic_ips` | List of IP addresses for the SCS servers (admin subnet) | Optional | |
+> | `scs_server_loadbalancer_ips` | List of IP addresses for the SCS load balancer (app subnet) | Optional | |
+> | `scs_server_use_ppg` | Controls if the SCS servers are placed in availability sets | Optional | |
+> | `scs_server_use_avset` | Controls if the SCS servers are placed in proximity placement groups | Optional | |
+> | `scs_server_tags` | Defines a list of tags to be applied to the SCS servers | Optional | |
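A hedged sketch of a central services definition, assuming a nonclustered SCS server with example instance numbers:

```terraform
# Hypothetical central services excerpt.
scs_server_count      = 1
scs_high_availability = false
scs_instance_number   = "00"
ers_instance_number   = "02"
```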
### Application server parameters
-
> [!div class="mx-tdCol2BreakAll "]
> | Variable | Description | Type | Notes |
> | -- | - | --| |
-> | `application_server_count` | Defines the number of application servers. | Required | |
-> | `application_server_sku` | Defines the Virtual machine SKU to use. | Optional | |
-> | `application_server_image` | Defines the Virtual machine image to use. | Required | |
-> | `application_server_zones` | Defines the availability zones to which the application servers are deployed.| Optional | |
-> | `application_server_app_nic_ips[]` | List of IP addresses for the application servers (app subnet). | Optional | |
-> | `application_server_nic_secondary_ips[]` | List of secondary IP addresses for the application servers (app subnet). | Optional | |
-> | `application_server_app_admin_nic_ips` | List of IP addresses for the application server (admin subnet). | Optional | |
-> | `application_server_use_ppg` | Controls if application servers are placed in availability sets. | Optional | |
-> | `application_server_use_avset` | Controls if application servers will be placed in proximity placement | Optional | |
-> | `application_server_tags` | Defines a list of tags to be applied to the application servers. | Optional | |
+> | `application_server_count` | Defines the number of application servers | Required | |
+> | `application_server_sku` | Defines the virtual machine SKU to use | Optional | |
+> | `application_server_image` | Defines the virtual machine image to use | Required | |
+> | `application_server_zones` | Defines the availability zones to which the application servers are deployed | Optional | |
+> | `application_server_app_nic_ips[]` | List of IP addresses for the application servers (app subnet) | Optional | |
+> | `application_server_nic_secondary_ips[]` | List of secondary IP addresses for the application servers (app subnet) | Optional | |
+> | `application_server_app_admin_nic_ips` | List of IP addresses for the application server (admin subnet) | Optional | |
+> | `application_server_use_ppg` | Controls if application servers are placed in availability sets | Optional | |
+> | `application_server_use_avset` | Controls if application servers are placed in proximity placement groups | Optional | |
+> | `application_server_tags` | Defines a list of tags to be applied to the application servers | Optional | |
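For example, two zonal application servers might be sketched as follows; the SKU is an example value:

```terraform
# Hypothetical application server excerpt.
application_server_count = 2
application_server_sku   = "Standard_D4s_v3"
application_server_zones = ["1", "2"]
```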
### Web dispatcher parameters
-
> [!div class="mx-tdCol2BreakAll "]
> | Variable | Description | Type | Notes |
> | | | | |
-> | `webdispatcher_server_count` | Defines the number of web dispatcher servers. | Required | |
-> | `webdispatcher_server_sku` | Defines the Virtual machine SKU to use. | Optional | |
-> | `webdispatcher_server_image` | Defines the Virtual machine image to use. | Optional | |
-> | `webdispatcher_server_zones` | Defines the availability zones to which the web dispatchers are deployed. | Optional | |
-> | `webdispatcher_server_app_nic_ips[]` | List of IP addresses for the web dispatcher server (app/web subnet). | Optional | |
-> | `webdispatcher_server_nic_secondary_ips[]` | List of secondary IP addresses for the web dispatcher server (app/web subnet). | Optional | |
-> | `webdispatcher_server_app_admin_nic_ips` | List of IP addresses for the web dispatcher server (admin subnet). | Optional | |
-> | `webdispatcher_server_use_ppg` | Controls if web dispatchers are placed in availability sets. | Optional | |
-> | `webdispatcher_server_use_avset` | Controls if web dispatchers will be placed in proximity placement | Optional | |
-> | `webdispatcher_server_tags` | Defines a list of tags to be applied to the web dispatcher servers. | Optional | |
+> | `webdispatcher_server_count` | Defines the number of web dispatcher servers | Required | |
+> | `webdispatcher_server_sku` | Defines the virtual machine SKU to use | Optional | |
+> | `webdispatcher_server_image` | Defines the virtual machine image to use | Optional | |
+> | `webdispatcher_server_zones` | Defines the availability zones to which the web dispatchers are deployed | Optional | |
+> | `webdispatcher_server_app_nic_ips[]` | List of IP addresses for the web dispatcher server (app/web subnet) | Optional | |
+> | `webdispatcher_server_nic_secondary_ips[]` | List of secondary IP addresses for the web dispatcher server (app/web subnet) | Optional | |
+> | `webdispatcher_server_app_admin_nic_ips` | List of IP addresses for the web dispatcher server (admin subnet) | Optional | |
+> | `webdispatcher_server_use_ppg` | Controls if web dispatchers are placed in availability sets | Optional | |
+> | `webdispatcher_server_use_avset` | Controls if web dispatchers are placed in proximity placement groups | Optional | |
+> | `webdispatcher_server_tags` | Defines a list of tags to be applied to the web dispatcher servers | Optional | |
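A minimal sketch, assuming that a count of zero skips the web dispatcher tier:

```terraform
# Hypothetical web dispatcher excerpt; 0 is assumed to deploy no web dispatchers.
webdispatcher_server_count = 0
```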
## Network parameters

If the subnets aren't deployed by using the workload zone deployment, they can be added in the system's `tfvars` file.
-The automation framework can either deploy the virtual network and the subnets (green field deployment) or using an existing virtual network and existing subnets (brown field deployments).
+The automation framework can either deploy the virtual network and the subnets (green-field deployment) or use an existing virtual network and existing subnets (brown-field deployments):
-Ensure that the virtual network address space is large enough to host all the resources.
+ - **Green-field scenario**: The virtual network address space and the subnet address prefixes must be specified.
+ - **Brown-field scenario**: The Azure resource identifier for the virtual network and the subnets must be specified.
-The table below contains the networking parameters.
+Ensure that the virtual network address space is large enough to host all the resources.
+This section contains the networking parameters.
> [!div class="mx-tdCol2BreakAll "]
> | Variable | Description | Type | Notes |
> | -- | -- | | - |
-> | `network_logical_name` | The logical name of the network. | Required | |
+> | `network_logical_name` | The logical name of the network | Required | |
> | | | | |
-> | `admin_subnet_name` | The name of the 'admin' subnet. | Optional | |
-> | `admin_subnet_address_prefix` | The address range for the 'admin' subnet. | Mandatory | For green field deployments. |
-> | `admin_subnet_arm_id` * | The Azure resource identifier for the 'admin' subnet. | Mandatory | For brown field deployments. |
-> | `admin_subnet_nsg_name` | The name of the 'admin' Network Security Group name. | Optional | |
-> | `admin_subnet_nsg_arm_id` * | The Azure resource identifier for the 'admin' Network Security Group | Mandatory | For brown field deployments. |
+> | `admin_subnet_name` | The name of the `admin` subnet | Optional | |
+> | `admin_subnet_address_prefix` | The address range for the `admin` subnet | Mandatory | For green-field deployments |
+> | `admin_subnet_arm_id` * | The Azure resource identifier for the `admin` subnet | Mandatory | For brown-field deployments |
+> | `admin_subnet_nsg_name` | The name of the `admin` network security group | Optional | |
+> | `admin_subnet_nsg_arm_id` * | The Azure resource identifier for the `admin` network security group | Mandatory | For brown-field deployments |
> | | | Optional | |
-> | `db_subnet_name` | The name of the 'db' subnet. | Optional | |
-> | `db_subnet_address_prefix` | The address range for the 'db' subnet. | Mandatory | For green field deployments. |
-> | `db_subnet_arm_id` * | The Azure resource identifier for the 'db' subnet. | Mandatory | For brown field deployments. |
-> | `db_subnet_nsg_name` | The name of the 'db' Network Security Group name. | Optional | |
-> | `db_subnet_nsg_arm_id` * | The Azure resource identifier for the 'db' Network Security Group. | Mandatory | For brown field deployments. |
+> | `db_subnet_name` | The name of the `db` subnet | Optional | |
+> | `db_subnet_address_prefix` | The address range for the `db` subnet | Mandatory | For green-field deployments |
+> | `db_subnet_arm_id` * | The Azure resource identifier for the `db` subnet | Mandatory | For brown-field deployments |
+> | `db_subnet_nsg_name` | The name of the `db` network security group | Optional | |
+> | `db_subnet_nsg_arm_id` * | The Azure resource identifier for the `db` network security group | Mandatory | For brown-field deployments |
> | | | Optional | |
-> | `app_subnet_name` | The name of the 'app' subnet. | Optional | |
-> | `app_subnet_address_prefix` | The address range for the 'app' subnet. | Mandatory | For green field deployments. |
-> | `app_subnet_arm_id` * | The Azure resource identifier for the 'app' subnet. | Mandatory | For brown field deployments. |
-> | `app_subnet_nsg_name` | The name of the 'app' Network Security Group name. | Optional | |
-> | `app_subnet_nsg_arm_id` * | The Azure resource identifier for the 'app' Network Security Group. | Mandatory | For brown field deployments. |
+> | `app_subnet_name` | The name of the `app` subnet | Optional | |
+> | `app_subnet_address_prefix` | The address range for the `app` subnet | Mandatory | For green-field deployments |
+> | `app_subnet_arm_id` * | The Azure resource identifier for the `app` subnet | Mandatory | For brown-field deployments |
+> | `app_subnet_nsg_name` | The name of the `app` network security group | Optional | |
+> | `app_subnet_nsg_arm_id` * | The Azure resource identifier for the `app` network security group | Mandatory | For brown-field deployments |
> | | | Optional | |
-> | `web_subnet_name` | The name of the 'web' subnet. | Optional | |
-> | `web_subnet_address_prefix` | The address range for the 'web' subnet. | Mandatory | For green field deployments. |
-> | `web_subnet_arm_id` * | The Azure resource identifier for the 'web' subnet. | Mandatory | For brown field deployments. |
-> | `web_subnet_nsg_name` | The name of the 'web' Network Security Group name. | Optional | |
-> | `web_subnet_nsg_arm_id` * | The Azure resource identifier for the 'web' Network Security Group. | Mandatory | For brown field deployments. |
+> | `web_subnet_name` | The name of the `web` subnet | Optional | |
+> | `web_subnet_address_prefix` | The address range for the `web` subnet | Mandatory | For green-field deployments |
+> | `web_subnet_arm_id` * | The Azure resource identifier for the `web` subnet | Mandatory | For brown-field deployments |
+> | `web_subnet_nsg_name` | The name of the `web` network security group | Optional | |
+> | `web_subnet_nsg_arm_id` * | The Azure resource identifier for the `web` network security group | Mandatory | For brown-field deployments |
-\* = Required For brown field deployments.
+\* = Required for brown-field deployments
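To contrast the two scenarios, here's a hedged sketch for the `admin` subnet; the resource ID is a placeholder, not a real subnet:

```terraform
# Green-field sketch: provide an address prefix and let the framework create the subnet.
admin_subnet_address_prefix = "10.110.0.0/27"

# Brown-field sketch: reference an existing subnet by its Azure resource ID instead.
# admin_subnet_arm_id = "/subscriptions/<subscriptionID>/resourceGroups/<rg>/providers/Microsoft.Network/virtualNetworks/<vnet>/subnets/<admin-subnet>"
```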
-## Key Vault Parameters
+## Key vault parameters
-If you don't want to use the workload zone key vault but another one, this can be added in the system's tfvars file.
+If you don't want to use the workload zone key vault but another one, you can define the key vault's Azure resource identifier in the system's `tfvars` file.
-The table below defines the parameters used for defining the Key Vault information.
+This section defines the parameters used for defining the key vault information.
> [!div class="mx-tdCol2BreakAll "]
> | Variable | Description | Type | Notes |
> | -- | | | -- |
> | `user_keyvault_id` | Azure resource identifier for existing system credentials key vault | Optional | |
> | `spn_keyvault_id` | Azure resource identifier for existing deployment credentials (SPNs) key vault | Optional | |
-> | `enable_purge_control_for_keyvaults | Disables the purge protection for Azure key vaults. | Optional | Only use this for test environments |
-
+> | `enable_purge_control_for_keyvaults` | Disables the purge protection for Azure key vaults | Optional | Only use for test environments. |
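For example, reusing an existing credentials key vault might be sketched as follows; the resource ID is a placeholder:

```terraform
# Hypothetical reference to an existing system credentials key vault.
user_keyvault_id = "/subscriptions/<subscriptionID>/resourceGroups/<rg>/providers/Microsoft.KeyVault/vaults/<kv-name>"
```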
### Anchor virtual machine parameters
-The SAP on Azure Deployment Automation Framework supports having an Anchor virtual machine. The anchor virtual machine will be the first virtual machine to be deployed and is used to anchor the proximity placement group.
-
-The table below contains the parameters related to the anchor virtual machine.
+SAP Deployment Automation Framework supports having an anchor virtual machine. The anchor virtual machine is the first virtual machine to be deployed. It's used to anchor the proximity placement group.
+This section contains the parameters related to the anchor virtual machine.
> [!div class="mx-tdCol2BreakAll "]
> | Variable | Description | Type |
> | - | | -- |
-> | `deploy_anchor_vm` | Defines if the anchor Virtual Machine is used | Optional |
-> | `anchor_vm_sku` | Defines the VM SKU to use. For example, Standard_D4s_v3. | Optional |
-> | `anchor_vm_image` | Defines the VM image to use. See the following code sample. | Optional |
-> | `anchor_vm_use_DHCP` | Controls whether to use dynamic IP addresses provided by Azure subnet. | Optional |
-> | `anchor_vm_accelerated_networking` | Defines if the Anchor VM is configured to use accelerated networking | Optional |
+> | `deploy_anchor_vm` | Defines if the anchor virtual machine is used | Optional |
+> | `anchor_vm_sku` | Defines the VM SKU to use, for example, Standard_D4s_v3 | Optional |
+> | `anchor_vm_image` | Defines the VM image to use (as shown in the following code sample) | Optional |
+> | `anchor_vm_use_DHCP` | Controls whether to use dynamic IP addresses provided by Azure subnet | Optional |
+> | `anchor_vm_accelerated_networking` | Defines if the anchor VM is configured to use accelerated networking | Optional |
> | `anchor_vm_authentication_type` | Defines the authentication type for the anchor VM key and password | Optional |
-The Virtual Machine and the operating system image is defined using the following structure:
+The virtual machine and the operating system image are defined by using the following structure:
+
```python
{
  os_type = "linux"
### Authentication parameters
-By default the SAP System deployment uses the credentials from the SAP Workload zone. If the SAP system needs unique credentials, you can provide them using these parameters.
+By default, the SAP system deployment uses the credentials from the SAP workload zone. If the SAP system needs unique credentials, you can provide them by using these parameters.
> [!div class="mx-tdCol2BreakAll "]
> | Variable | Description | Type |
> | `automation_path_to_public_key` | Path to existing public key | Optional |
> | `automation_path_to_private_key` | Path to existing private key | Optional |
-
## Other parameters
-
> [!div class="mx-tdCol2BreakAll "]
> | Variable | Description | Type |
> | - | - | -- |
-> | `use_msi_for_clusters` | If defined, configures the Pacemaker cluster using managed Identities | Optional |
-> | `resource_offset` | Provides and offset for resource naming. The offset number for resource naming when creating multiple resources. The default value is 0, which creates a naming pattern of disk0, disk1, and so on. An offset of 1 creates a naming pattern of disk1, disk2, and so on. | Optional |
-> | `disk_encryption_set_id` | The disk encryption key to use for encrypting managed disks using customer provided keys | Optional |
-> | `use_loadbalancers_for_standalone_deployments` | Controls if load balancers are deployed for standalone installations | Optional |
-> | `license_type` | Specifies the license type for the virtual machines. | Possible values are `RHEL_BYOS` and `SLES_BYOS`. For Windows the possible values are `None`, `Windows_Client` and `Windows_Server`. |
-> | `use_zonal_markers` | Specifies if zonal Virtual Machines will include a zonal identifier. 'xooscs_z1_00l###' vs 'xooscs00l###'| Default value is true. |
-> | `proximityplacementgroup_names` | Specifies the names of the proximity placement groups | |
-> | `proximityplacementgroup_arm_ids` | Specifies the Azure resource identifiers of existing proximity placement groups| |
-
+> | `use_msi_for_clusters` | If defined, configures the Pacemaker cluster by using managed identities. | Optional |
+> | `resource_offset` | Provides an offset for resource naming when creating multiple resources. The default value is 0, which creates a naming pattern of disk0, disk1, and so on. An offset of 1 creates a naming pattern of disk1, disk2, and so on. | Optional |
+> | `disk_encryption_set_id` | The disk encryption key to use for encrypting managed disks by using customer-provided keys. | Optional |
+> | `use_loadbalancers_for_standalone_deployments` | Controls if load balancers are deployed for standalone installations. | Optional |
+> | `license_type` | Specifies the license type for the virtual machines. | Possible values are `RHEL_BYOS` and `SLES_BYOS`. For Windows, the possible values are `None`, `Windows_Client`, and `Windows_Server`. |
+> | `use_zonal_markers` | Specifies if zonal virtual machines will include a zonal identifier: `xooscs_z1_00l###` versus `xooscs00l###`. | Default value is true. |
+> | `proximityplacementgroup_names` | Specifies the names of the proximity placement groups. | |
+> | `proximityplacementgroup_arm_ids` | Specifies the Azure resource identifiers of existing proximity placement groups. | |
+> | `use_simple_mount` | Specifies if simple mounts are used (applicable for SLES 15 SP# or newer). | Optional |
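As a hedged illustration of a few of these parameters (the values shown are assumptions, not defaults):

```terraform
# Hypothetical excerpt.
resource_offset = 1            # disk naming starts at disk1
license_type    = "SLES_BYOS"  # bring-your-own-subscription licensing
```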
## NFS support

> [!div class="mx-tdCol2BreakAll "]
> | Variable | Description | Type |
> | - | -- | -- |
-> | `NFS_provider` | Defines what NFS backend to use, the options are 'AFS' for Azure Files NFS or 'ANF' for Azure NetApp files. |
-> | `sapmnt_volume_size` | Defines the size (in GB) for the 'sapmnt' volume | Optional |
-
-### Azure files NFS Support
+> | `NFS_provider` | Defines what NFS back end to use. The options are `AFS` for Azure Files NFS or `ANF` for Azure NetApp Files. | Optional |
+> | `sapmnt_volume_size` | Defines the size (in GB) for the `sapmnt` volume. | Optional |
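For example, selecting Azure Files NFS with a 128-GB `sapmnt` volume might be sketched as:

```terraform
# Hypothetical NFS configuration.
NFS_provider       = "AFS"
sapmnt_volume_size = 128
```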
+### Azure Files NFS support
> [!div class="mx-tdCol2BreakAll "]
> | Variable | Description | Type |
> | - | -- | -- |
-> | `azure_files_storage_account_id` | If provided the Azure resource ID of the storage account used for sapmnt | Optional |
+> | `azure_files_storage_account_id` | If provided, the Azure resource ID of the storage account used for `sapmnt` | Optional |
-### Azure NetApp Files Support
+### Azure NetApp Files support
> [!div class="mx-tdCol2BreakAll "]
> | Variable | Description | Type | Notes |
> | -- | --| -- | |
> | `ANF_HANA_data` | Create Azure NetApp Files volume for HANA data. | Optional | |
-> | `ANF_HANA_data_use_existing_volume` | Use existing Azure NetApp Files volume for HANA data. | Optional | Use for pre-created volumes |
+> | `ANF_HANA_data_use_existing_volume` | Use existing Azure NetApp Files volume for HANA data. | Optional | Use for pre-created volumes. |
> | `ANF_HANA_data_volume_name` | Azure NetApp Files volume name for HANA data. | Optional | |
-> | `ANF_HANA_data_volume_size` | Azure NetApp Files volume size in GB for HANA data. | Optional | default size 256 |
-> | `ANF_HANA_data_volume_throughput` | Azure NetApp Files volume throughput for HANA data. | Optional | default is 128 MBs/s |
+> | `ANF_HANA_data_volume_size` | Azure NetApp Files volume size in GB for HANA data. | Optional | Default size is 256. |
+> | `ANF_HANA_data_volume_throughput` | Azure NetApp Files volume throughput for HANA data. | Optional | Default is 128 MBs/s. |
> | | | | | > | `ANF_HANA_log` | Create Azure NetApp Files volume for HANA log. | Optional | |
-> | `ANF_HANA_log_use_existing` | Use existing Azure NetApp Files volume for HANA log. | Optional | Use for pre-created volumes |
+> | `ANF_HANA_log_use_existing` | Use existing Azure NetApp Files volume for HANA log. | Optional | Use for pre-created volumes. |
> | `ANF_HANA_log_volume_name` | Azure NetApp Files volume name for HANA log. | Optional | |
-> | `ANF_HANA_log_volume_size` | Azure NetApp Files volume size in GB for HANA log. | Optional | default size 128 |
-> | `ANF_HANA_log_volume_throughput` | Azure NetApp Files volume throughput for HANA log. | Optional | default is 128 MBs/s |
+> | `ANF_HANA_log_volume_size` | Azure NetApp Files volume size in GB for HANA log. | Optional | Default size is 128. |
+> | `ANF_HANA_log_volume_throughput` | Azure NetApp Files volume throughput for HANA log. | Optional | Default is 128 MBs/s. |
> | | | | | > | `ANF_HANA_shared` | Create Azure NetApp Files volume for HANA shared. | Optional | |
-> | `ANF_HANA_shared_use_existing` | Use existing Azure NetApp Files volume for HANA shared. | Optional | Use for pre-created volumes |
+> | `ANF_HANA_shared_use_existing` | Use existing Azure NetApp Files volume for HANA shared. | Optional | Use for pre-created volumes. |
> | `ANF_HANA_shared_volume_name` | Azure NetApp Files volume name for HANA shared. | Optional | |
-> | `ANF_HANA_shared_volume_size` | Azure NetApp Files volume size in GB for HANA shared. | Optional | default size 128 |
-> | `ANF_HANA_shared_volume_throughput` | Azure NetApp Files volume throughput for HANA shared. | Optional | default is 128 MBs/s |
+> | `ANF_HANA_shared_volume_size` | Azure NetApp Files volume size in GB for HANA shared. | Optional | Default size is 128. |
+> | `ANF_HANA_shared_volume_throughput` | Azure NetApp Files volume throughput for HANA shared. | Optional | Default is 128 MBs/s. |
> | | | | |
-> | `ANF_sapmnt` | Create Azure NetApp Files volume for sapmnt. | Optional | |
-> | `ANF_sapmnt_use_existing_volume` | Use existing Azure NetApp Files volume for sapmnt. | Optional | Use for pre-created volumes |
-> | `ANF_sapmnt_volume_name` | Azure NetApp Files volume name for sapmnt. | Optional | |
-> | `ANF_sapmnt_volume_size` | Azure NetApp Files volume size in GB for sapmnt. | Optional | default size 128 |
-> | `ANF_sapmnt_throughput` | Azure NetApp Files volume throughput for sapmnt. | Optional | default is 128 MBs/s |
+> | `ANF_sapmnt` | Create Azure NetApp Files volume for `sapmnt`. | Optional | |
+> | `ANF_sapmnt_use_existing_volume` | Use existing Azure NetApp Files volume for `sapmnt`. | Optional | Use for pre-created volumes. |
+> | `ANF_sapmnt_volume_name` | Azure NetApp Files volume name for `sapmnt`. | Optional | |
+> | `ANF_sapmnt_volume_size` | Azure NetApp Files volume size in GB for `sapmnt`. | Optional | Default size is 128. |
+> | `ANF_sapmnt_throughput` | Azure NetApp Files volume throughput for `sapmnt`. | Optional | Default is 128 MBs/s. |
> | | | | |
-> | `ANF_usr_sap` | Create Azure NetApp Files volume for usrsap. | Optional | |
-> | `ANF_usr_sap_use_existing` | Use existing Azure NetApp Files volume for usrsap. | Optional | Use for pre-created volumes |
-> | `ANF_usr_sap_volume_name` | Azure NetApp Files volume name for usrsap. | Optional | |
-> | `ANF_usr_sap_volume_size` | Azure NetApp Files volume size in GB for usrsap. | Optional | default size 128 |
-> | `ANF_usr_sap_throughput` | Azure NetApp Files volume throughput for usrsap. | Optional | default is 128 MBs/s |
-
+> | `ANF_usr_sap` | Create Azure NetApp Files volume for `usrsap`. | Optional | |
+> | `ANF_usr_sap_use_existing` | Use existing Azure NetApp Files volume for `usrsap`. | Optional | Use for pre-created volumes. |
+> | `ANF_usr_sap_volume_name` | Azure NetApp Files volume name for `usrsap`. | Optional | |
+> | `ANF_usr_sap_volume_size` | Azure NetApp Files volume size in GB for `usrsap`. | Optional | Default size is 128. |
+> | `ANF_usr_sap_throughput` | Azure NetApp Files volume throughput for `usrsap`. | Optional | Default is 128 MBs/s. |
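A minimal sketch that creates an ANF data volume, echoing the documented defaults:

```terraform
# Hypothetical Azure NetApp Files excerpt.
ANF_HANA_data                   = true
ANF_HANA_data_volume_size       = 256   # GB (documented default)
ANF_HANA_data_volume_throughput = 128   # MB/s (documented default)
```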
## Oracle parameters
-> [!NOTE]
-> These parameters need to be updated in the sap-parameters.yaml file when deploying Oracle based systems.
-
+These parameters need to be updated in the *sap-parameters.yaml* file when you deploy Oracle-based systems.
> [!div class="mx-tdCol2BreakAll "]
> | Variable | Description | Type | Notes |
> | - | --| -- | |
-> | `ora_release` | Release of Oracle, e.g. 19 | Mandatory | |
-> | `ora_version` | Version of Oracle, e.g. 19.0.0 | Mandatory | |
-> | `oracle_sbp_patch` | Oracle SBP patch file name, e.g. SAP19P_2202-70004508.ZIP | Mandatory | Must be part of the Bill of Materials |
+> | `ora_release` | Release of Oracle, for example, 19 | Mandatory | |
+> | `ora_version` | Version of Oracle, for example, 19.0.0 | Mandatory | |
+> | `oracle_sbp_patch` | Oracle SBP patch file name, for example, SAP19P_2202-70004508.ZIP | Mandatory | Must be part of the Bill of Materials |
## Terraform parameters
-The table below contains the Terraform parameters, these parameters need to be entered manually if not using the deployment scripts.
-
+This section contains the Terraform parameters. These parameters need to be entered manually if you're not using the deployment scripts.
> [!div class="mx-tdCol2BreakAll "]
> | Variable | Description | Type |
> | - | - | - |
-> | `tfstate_resource_id` | Azure resource identifier for the Storage account in the SAP Library that will contain the Terraform state files | Required * |
-> | `deployer_tfstate_key` | The name of the state file for the Deployer | Required * |
+> | `tfstate_resource_id` | Azure resource identifier for the storage account in the SAP library that will contain the Terraform state files | Required * |
+> | `deployer_tfstate_key` | The name of the state file for the deployer | Required * |
> | `landscaper_tfstate_key` | The name of the state file for the workload zone | Required * |
-\* = required for manual deployments
+\* = Required for manual deployments
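For a manual deployment, these inputs might be sketched as follows; the resource ID and state file names are placeholders:

```terraform
# Hypothetical Terraform inputs for a manual deployment.
tfstate_resource_id    = "/subscriptions/<subscriptionID>/resourceGroups/<library-rg>/providers/Microsoft.Storage/storageAccounts/<tfstateaccount>"
deployer_tfstate_key   = "<deployer>.terraform.tfstate"
landscaper_tfstate_key = "<workloadzone>.terraform.tfstate"
```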
-## High availability configuration
+## High-availability configuration
-The high availability configuration for the database tier and the SCS tier is configured using the `database_high_availability` and `scs_high_availability` flags. For Red Hat and SUSE should use the appropriate 'HA' version of the virtual machine images (RHEL-SAP-HA, sles-sap-15-sp?).
+The high-availability configuration for the database tier and the SCS tier is configured by using the `database_high_availability` and `scs_high_availability` flags. Red Hat and SUSE should use the appropriate HA version of the virtual machine images (RHEL-SAP-HA, sles-sap-15-sp?).
-High availability configurations use Pacemaker with Azure fencing agents.
+High-availability configurations use Pacemaker with Azure fencing agents.
> [!NOTE]
-> The highly available Central Services deployment requires using a shared file system for sap_mnt. This can be achieved by using Azure Files or Azure NetApp Files, using the NFS_provider attribute. The default is Azure Files. To use Azure NetApp Files, set the NFS_provider attribute to ANF.
-
+> The highly available central services deployment requires using a shared file system for `sap_mnt`. You can use Azure Files or Azure NetApp Files by using the `NFS_provider` attribute. The default is Azure Files. To use Azure NetApp Files, set the `NFS_provider` attribute to `ANF`.
-### Fencing agent configuration
+### Fencing agent configuration
-SDAF supports using either managed identities or service principals for fencing agents. The following section describe how to configure each option.
+SAP Deployment Automation Framework supports using either managed identities or service principals for fencing agents. The following section describes how to configure each option.
-By defining the variable 'use_msi_for_clusters' to true the fencing agent will use managed identities. This is the recommended option.
+If you set the variable `use_msi_for_clusters` to `true`, the fencing agent uses managed identities.
-If you want to use a service principal for the fencing agent set that variable to false.
+If you want to use a service principal for the fencing agent, set that variable to `false`.
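As a sketch, the choice reduces to a single flag in the system configuration:

```terraform
# Use managed identities for the fencing agent.
use_msi_for_clusters = true
# Set to false to use a service principal instead.
```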
-The fencing agents should be configured to use a unique service principal with permissions to stop and start virtual machines. For more information, see [Create Fencing Agent](../../virtual-machines/workloads/sap/high-availability-guide-suse-pacemaker.md#create-an-azure-fence-agent-device)
+The fencing agents should be configured to use a unique service principal with permissions to stop and start virtual machines. For more information, see [Create a fencing agent](../../virtual-machines/workloads/sap/high-availability-guide-suse-pacemaker.md#create-an-azure-fence-agent-device).
```azurecli-interactive
az ad sp create-for-rbac --role="Linux Fence Agent Role" --scopes="/subscriptions/<subscriptionID>" --name="<prefix>-Fencing-Agent"
```
-Replace `<prefix>` with the name prefix of your environment, such as `DEV-WEEU-SAP01` and `<subscriptionID>` with the workload zone subscription ID.
+Replace `<prefix>` with the name prefix of your environment, such as `DEV-WEEU-SAP01`. Replace `<subscriptionID>` with the workload zone subscription ID.
> [!IMPORTANT]
-> The name of the Fencing Agent Service Principal must be unique in the tenant. The script assumes that a role 'Linux Fence Agent Role' has already been created
+> The name of the fencing agent service principal must be unique in the tenant. The script assumes that a role `Linux Fence Agent Role` was already created.
>
-> Record the values from the Fencing Agent SPN.
+> Record the values from the fencing agent SPN:
> - appId
> - password
> - tenant
-The fencing agent details must be stored in the workload zone key vault using a predefined naming convention. Replace `<prefix>` with the name prefix of your environment, such as `DEV-WEEU-SAP01`, `<workload_kv_name>` with the name of the key vault from the workload zone resource group and for the other values use the values recorded from the previous step and run the script.
-
+The fencing agent details must be stored in the workload zone key vault by using a predefined naming convention. Replace `<prefix>` with the name prefix of your environment, such as `DEV-WEEU-SAP01`. Replace `<workload_kv_name>` with the name of the key vault from the workload zone resource group. For the other values, use the values recorded from the previous step and run the script.
```azurecli-interactive
az keyvault secret set --name "<prefix>-fencing-spn-id" --vault-name "<workload_kv_name>" --value "<appId>";
sap Configure Workload Zone https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/configure-workload-zone.md
Title: About workload zone configuration in automation framework
-description: Overview of the SAP workload zone configuration process within the SAP on Azure Deployment Automation Framework.
+ Title: Workload zone configuration in the automation framework
+description: Overview of the SAP workload zone configuration process within SAP Deployment Automation Framework.
-# Workload zone configuration in SAP automation framework
+# Workload zone configuration in the SAP automation framework
-An [SAP application](deployment-framework.md#sap-concepts) typically has multiple development tiers. For example, you might have development, quality assurance, and production tiers. The [SAP on Azure Deployment Automation Framework](deployment-framework.md) refers to these tiers as [workload zones](deployment-framework.md#deployment-components). See the following diagram for an example of a workload zone with two SAP systems.
-
+An [SAP application](deployment-framework.md#sap-concepts) typically has multiple development tiers. For example, you might have development, quality assurance, and production tiers. [SAP Deployment Automation Framework](deployment-framework.md) calls these tiers [workload zones](deployment-framework.md#deployment-components). See the following diagram for an example of a workload zone with two SAP systems.
## Workload zone deployment configuration
-The configuration of the SAP workload zone is done via a Terraform tfvars variable file. You can find examples of the variable file in the 'samples/WORKSPACES/LANDSCAPE' folder.
+The configuration of the SAP workload zone is done via a Terraform `tfvars` variable file. You can find examples of the variable file in the `samples/WORKSPACES/LANDSCAPE` folder.
-The sections below show the different sections of the variable file.
+The following sections show the different sections of the variable file.
## Environment parameters
-The table below contains the parameters that define the environment settings.
-
+This table contains the parameters that define the environment settings.
> [!div class="mx-tdCol2BreakAll "]
> | Variable | Description | Type | Notes |
> | -- | -- | - | - |
-> | `environment` | Identifier for the workload zone (max 5 chars) | Mandatory | For example, `PROD` for a production environment and `NP` for a non-production environment. |
-> | `location` | The Azure region in which to deploy. | Required | |
-> | 'name_override_file' | Name override file | Optional | see [Custom naming](naming-module.md) |
-
+> | `environment` | Identifier for the workload zone (maximum five characters) | Mandatory | For example, `PROD` for a production environment and `NP` for a nonproduction environment. |
+> | `location` | The Azure region in which to deploy | Required | |
+> | `name_override_file` | Name override file | Optional | See [Custom naming](naming-module.md). |
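A minimal environment definition for a workload zone might look like this sketch, where `DEV` and `westeurope` are example values:

```terraform
# Illustrative workload zone environment settings.
environment = "DEV"
location    = "westeurope"
```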
## Resource group parameters
-The table below contains the parameters that define the resource group.
-
+This table contains the parameters that define the resource group.
> [!div class="mx-tdCol2BreakAll "]
> | Variable | Description | Type |
> | `resource_group_name` | Name of the resource group to be created | Optional |
> | `resource_group_arm_id` | Azure resource identifier for an existing resource group | Optional |
+## Network parameters
-## Network Parameters
+The automation framework supports both creating the virtual network and the subnets (green field) or using an existing virtual network and existing subnets (brown field) or a combination of green field and brown field:
-The automation framework supports both creating the virtual network and the subnets For green field deployments. (Green field) or using an existing virtual network and existing subnets For brown field deployments. (Brown field) or a combination of For green field deployments. and For brown field deployments.
+ - **Green-field scenario**: The virtual network address space and the subnet address prefixes must be specified.
+ - **Brown-field scenario**: The Azure resource identifier for the virtual network and the subnets must be specified.
-Ensure that the virtual network address space is large enough to host all the resources
+Ensure that the virtual network address space is large enough to host all the resources.
-The table below contains the networking parameters.
+This table contains the networking parameters.
> [!div class="mx-tdCol2BreakAll "]
> | Variable | Description | Type | Notes |
> | -- | | | - |
-> | `network_name` | The name of the network. | Optional | |
-> | `network_logical_name` | The logical name of the network, for eaxmple 'SAP01' | Required | Used for resource naming. |
-> | `network_arm_id` | The Azure resource identifier for the virtual network. | Optional | For brown field deployments. |
-> | `network_address_space` | The address range for the virtual network. | Mandatory | For green field deployments. |
+> | `network_name` | The name of the network | Optional | |
+> | `network_logical_name` | The logical name of the network, for example, `SAP01` | Required | Used for resource naming |
+> | `network_arm_id` | The Azure resource identifier for the virtual network | Optional | For brown-field deployments |
+> | `network_address_space` | The address range for the virtual network | Mandatory | For green-field deployments |
> | | | | |
-> | `admin_subnet_name` | The name of the `admin` subnet. | Optional | |
-> | `admin_subnet_address_prefix` | The address range for the `admin` subnet. | Mandatory | For green field deployments. |
-> | `admin_subnet_arm_id` | The Azure resource identifier for the `admin` subnet. | Mandatory | For brown field deployments. |
+> | `admin_subnet_name` | The name of the `admin` subnet | Optional | |
+> | `admin_subnet_address_prefix` | The address range for the `admin` subnet | Mandatory | For green-field deployments |
+> | `admin_subnet_arm_id` | The Azure resource identifier for the `admin` subnet | Mandatory | For brown-field deployments |
> | | | | |
-> | `admin_subnet_nsg_name` | The name of the `admin`Network Security Group name. | Optional | |
-> | `admin_subnet_nsg_arm_id` | The Azure resource identifier for the `admin` Network Security Group. | Mandatory | For brown field deployments. |
+> | `admin_subnet_nsg_name` | The name of the `admin` network security group | Optional | |
+> | `admin_subnet_nsg_arm_id` | The Azure resource identifier for the `admin` network security group | Mandatory | For brown-field deployments |
> | | | | |
-> | `db_subnet_name` | The name of the `db` subnet. | Optional | |
-> | `db_subnet_address_prefix` | The address range for the `db` subnet. | Mandatory | For green field deployments. |
-> | `db_subnet_arm_id` | The Azure resource identifier for the `db` subnet. | Mandatory | For brown field deployments. |
+> | `db_subnet_name` | The name of the `db` subnet | Optional | |
+> | `db_subnet_address_prefix` | The address range for the `db` subnet | Mandatory | For green-field deployments |
+> | `db_subnet_arm_id` | The Azure resource identifier for the `db` subnet | Mandatory | For brown-field deployments |
> | | | | |
-> | `db_subnet_nsg_name` | The name of the `db` Network Security Group name. | Optional | |
-> | `db_subnet_nsg_arm_id` | The Azure resource identifier for the `db` Network Security Group | Mandatory | For brown field deployments. |
+> | `db_subnet_nsg_name` | The name of the `db` network security group | Optional | |
+> | `db_subnet_nsg_arm_id` | The Azure resource identifier for the `db` network security group | Mandatory | For brown-field deployments |
> | | | | |
-> | `app_subnet_name` | The name of the `app` subnet. | Optional | |
-> | `app_subnet_address_prefix` | The address range for the `app` subnet. | Mandatory | For green field deployments. |
-> | `app_subnet_arm_id` | The Azure resource identifier for the `app` subnet. | Mandatory | For brown field deployments. |
+> | `app_subnet_name` | The name of the `app` subnet | Optional | |
+> | `app_subnet_address_prefix` | The address range for the `app` subnet | Mandatory | For green-field deployments |
+> | `app_subnet_arm_id` | The Azure resource identifier for the `app` subnet | Mandatory | For brown-field deployments |
> | | | | |
-> | `app_subnet_nsg_name` | The name of the `app` Network Security Group name. | Optional | |
-> | `app_subnet_nsg_arm_id` | The Azure resource identifier for the `app` Network Security Group. | Mandatory | For brown field deployments. |
+> | `app_subnet_nsg_name` | The name of the `app` network security group | Optional | |
+> | `app_subnet_nsg_arm_id` | The Azure resource identifier for the `app` network security group | Mandatory | For brown-field deployments |
> | | | | |
-> | `web_subnet_name` | The name of the `web` subnet. | Optional | |
-> | `web_subnet_address_prefix` | The address range for the `web` subnet. | Mandatory | For green field deployments. |
-> | `web_subnet_arm_id` | The Azure resource identifier for the `web` subnet. | Mandatory | For brown field deployments. |
+> | `web_subnet_name` | The name of the `web` subnet | Optional | |
+> | `web_subnet_address_prefix` | The address range for the `web` subnet | Mandatory | For green-field deployments |
+> | `web_subnet_arm_id` | The Azure resource identifier for the `web` subnet | Mandatory | For brown-field deployments |
> | | | | |
-> | `web_subnet_nsg_name` | The name of the `web` Network Security Group name. | Optional | |
-> | `web_subnet_nsg_arm_id` | The Azure resource identifier for the `web` Network Security Group | Mandatory | For brown field deployments. |
--
-The table below contains the networking parameters if Azure NetApp Files are used.
+> | `web_subnet_nsg_name` | The name of the `web` network security group | Optional | |
+> | `web_subnet_nsg_arm_id` | The Azure resource identifier for the `web` network security group | Mandatory | For brown-field deployments |
+This table contains the networking parameters if Azure NetApp Files is used.
> [!div class="mx-tdCol2BreakAll "]
> | Variable | Description | Type | Notes |
> | -- | -- | -- | - |
-> | `anf_subnet_name` | The name of the ANF subnet. | Optional | |
-> | `anf_subnet_arm_id` | The Azure resource identifier for the `ANF` subnet. | Required | When using existing subnets |
-> | `anf_subnet_address_prefix` | The address range for the `ANF` subnet. | Required | When using ANF for new deployments |
-
+> | `anf_subnet_name` | The name of the `ANF` subnet | Optional | |
+> | `anf_subnet_arm_id` | The Azure resource identifier for the `ANF` subnet | Required | When using existing subnets |
+> | `anf_subnet_address_prefix` | The address range for the `ANF` subnet | Required | When using `ANF` for new deployments |
-**Minimum required network definition**
+#### Minimum required network definition
```terraform
network_logical_name = "SAP01"
app_subnet_address_prefix = "10.110.32.0/19"
```
-### Authentication Parameters
+### Authentication parameters
-The table below defines the credentials used for defining the Virtual Machine authentication
+This table defines the credentials used for virtual machine authentication.
> [!div class="mx-tdCol2BreakAll "]
> | Variable | Description | Type | Notes |
> | - | - | -- | - |
-> | `automation_username` | Administrator account name | Optional | Default: 'azureadm' |
+> | `automation_username` | Administrator account name | Optional | Default: `azureadm` |
> | `automation_password` | Administrator password | Optional | |
> | `automation_path_to_public_key` | Path to existing public key | Optional | |
> | `automation_path_to_private_key` | Path to existing private key | Optional | |
-**Minimum required authentication definition**
+#### Minimum required authentication definition
```terraform
automation_username = "azureadm"
```
+## Key vault parameters
-## Key Vault Parameters
-
-The table below defines the parameters used for defining the Key Vault information
+This table defines the parameters used for defining the key vault information.
> [!div class="mx-tdCol2BreakAll "]
> | Variable | Description | Type | Notes |
> | -- | -- | -- | -- |
> | `user_keyvault_id` | Azure resource identifier for existing system credentials key vault | Optional | |
> | `spn_keyvault_id` | Azure resource identifier for existing deployment credentials (SPNs) key vault | Optional | |
-> | `enable_purge_control_for_keyvaults` | Disables the purge protection for Azure key vaults. | Optional | Only use this for test environments |
-> | `additional_users_to_add_to_keyvault_policies` | A list of user object IDs to add to the deployment KeyVault access policies | Optional | |
+> | `enable_purge_control_for_keyvaults` | Disables the purge protection for Azure key vaults | Optional | Use only for test environments. |
+> | `additional_users_to_add_to_keyvault_policies` | A list of user object IDs to add to the deployment key vault access policies | Optional | |
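For illustration, a key vault definition that reuses an existing credentials vault might look like the following sketch; the resource ID and object ID values are placeholders, not values from this guide.

```terraform
# Placeholder resource ID for an existing system credentials key vault.
user_keyvault_id = "/subscriptions/<subscriptionID>/resourceGroups/<resourceGroup>/providers/Microsoft.KeyVault/vaults/<keyVaultName>"

# Leave purge protection enabled for anything other than test environments.
enable_purge_control_for_keyvaults = false

# Placeholder object ID for an extra user who needs access to the deployment key vault.
additional_users_to_add_to_keyvault_policies = ["<userObjectID>"]
```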
## Private DNS

> [!div class="mx-tdCol2BreakAll "]
> | Variable | Description | Type |
> | - | -- | -- |
> | `dns_label` | If specified, is the DNS name of the private DNS zone | Optional |
-> | `dns_resource_group_name` | The name of the resource group containing the Private DNS zone | Optional |
+> | `dns_resource_group_name` | The name of the resource group that contains the private DNS zone | Optional |
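As a sketch, assuming a private DNS zone named `sap.contoso.net` in a resource group called `MGMT-DNS` (both hypothetical values):

```terraform
# Hypothetical private DNS zone and resource group.
dns_label               = "sap.contoso.net"
dns_resource_group_name = "MGMT-DNS"
```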
-## NFS Support
+## NFS support
> [!div class="mx-tdCol2BreakAll "]
> | Variable | Description | Type | Notes |
> | - | -- | -- | - |
-> | `NFS_provider` | Defines what NFS backend to use, the options are 'AFS' for Azure Files NFS or 'ANF' for Azure NetApp files, 'NONE' for NFS from the SCS server or 'NFS' for an external NFS solution. | Optional | |
-> | `install_volume_size` | Defines the size (in GB) for the 'install' volume | Optional | |
-> | `install_private_endpoint_id` | Azure resource ID for the 'install' private endpoint | Optional | For existing endpoints|
-> | `transport_volume_size` | Defines the size (in GB) for the 'transport' volume | Optional | |
-> | `transport_private_endpoint_id` | Azure resource ID for the 'transport' private endpoint | Optional | For existing endpoints|
+> | `NFS_provider` | Defines what NFS back end to use. The options are `AFS` for Azure Files NFS or `ANF` for Azure NetApp Files, `NONE` for NFS from the SCS server, or `NFS` for an external NFS solution. | Optional | |
+> | `install_volume_size` | Defines the size (in GB) for the `install` volume. | Optional | |
+> | `install_private_endpoint_id` | Azure resource ID for the `install` private endpoint. | Optional | For existing endpoints|
+> | `transport_volume_size` | Defines the size (in GB) for the `transport` volume. | Optional | |
+> | `transport_private_endpoint_id` | Azure resource ID for the `transport` private endpoint. | Optional | For existing endpoints|
-### Azure Files NFS Support
+### Azure Files NFS support
> [!div class="mx-tdCol2BreakAll "]
> | Variable | Description | Type | Notes |
> | - | -- | -- | - |
-> | `install_storage_account_id` | Azure resource identifier for the 'install' storage account. | Optional | For brown field deployments. |
-> | `transport_storage_account_id` | Azure resource identifier for the 'transport' storage account. | Optional | For brown field deployments. |
+> | `install_storage_account_id` | Azure resource identifier for the `install` storage account | Optional | For brown-field deployments |
+> | `transport_storage_account_id` | Azure resource identifier for the `transport` storage account | Optional | For brown-field deployments |
-**Minimum required Azure Files NFS definition**
+#### Minimum required Azure Files NFS definition
```terraform
NFS_provider = "AFS"
use_private_endpoint = true
```
-### Azure NetApp Files Support
+### Azure NetApp Files support
> [!div class="mx-tdCol2BreakAll "]
> | Variable | Description | Type | Notes |
> | - | -- | -- | - |
-> | `ANF_account_name` | Name for the Azure NetApp Files Account. | Optional | |
-> | `ANF_service_level` | Service level for the Azure NetApp Files Capacity Pool. | Optional | |
-> | `ANF_pool_size` | The size (in GB) of the Azure NetApp Files Capacity Pool. | Optional | |
-> | `ANF_qos_type` | The Quality of Service type of the pool (Auto or Manual). | Optional | |
-> | `ANF_use_existing_pool` | Use existing the Azure NetApp Files Capacity Pool. | Optional | |
-> | `ANF_pool_name` | The name of the Azure NetApp Files Capacity Pool. | Optional | |
-> | `ANF_account_arm_id` | Azure resource identifier for the Azure NetApp Files Account. | Optional | For brown field deployments. |
+> | `ANF_account_name` | Name for the Azure NetApp Files account | Optional | |
+> | `ANF_service_level` | Service level for the Azure NetApp Files capacity pool | Optional | |
+> | `ANF_pool_size` | The size (in GB) of the Azure NetApp Files capacity pool | Optional | |
+> | `ANF_qos_type` | The quality of service type of the pool (auto or manual) | Optional | |
+> | `ANF_use_existing_pool` | Use an existing Azure NetApp Files capacity pool | Optional | |
+> | `ANF_pool_name` | The name of the Azure NetApp Files capacity pool | Optional | |
+> | `ANF_account_arm_id` | Azure resource identifier for the Azure NetApp Files account | Optional | For brown-field deployments |
> | | | | |
-> | `ANF_transport_volume_use_existing` | Defines if an existing transport volume is used. | Optional | |
-> | `ANF_transport_volume_name` | Defines the transport volume name. | Optional | For brown field deployments. |
-> | `ANF_transport_volume_size` | Defines the size of the transport volume in GB. | Optional | |
-> | `ANF_transport_volume_throughput` | Defines the throughput of the transport volume. | Optional | |
+> | `ANF_transport_volume_use_existing` | Defines if an existing transport volume is used | Optional | |
+> | `ANF_transport_volume_name` | Defines the transport volume name | Optional | For brown-field deployments |
+> | `ANF_transport_volume_size` | Defines the size of the transport volume in GB | Optional | |
+> | `ANF_transport_volume_throughput` | Defines the throughput of the transport volume | Optional | |
> | | | | |
-> | `ANF_install_volume_use_existing` | Defines if an existing install volume is used. | Optional | |
-> | `ANF_install_volume_name` | Defines the install volume name. | Optional | For brown field deployments. |
-> | `ANF_install_volume_size` | Defines the size of the install volume in GB. | Optional | |
-> | `ANF_install_volume_throughput` | Defines the throughput of the install volume. | Optional | |
+> | `ANF_install_volume_use_existing` | Defines if an existing install volume is used | Optional | |
+> | `ANF_install_volume_name` | Defines the install volume name | Optional | For brown-field deployments |
+> | `ANF_install_volume_size` | Defines the size of the install volume in GB | Optional | |
+> | `ANF_install_volume_throughput` | Defines the throughput of the install volume | Optional | |
-
-**Minimum required ANF definition**
+#### Minimum required ANF definition
```terraform
NFS_provider = "ANF"
ANF_service_level = "Ultra"
```
-### DNS Support
-
+### DNS support
> [!div class="mx-tdCol2BreakAll "]
> | Variable | Description | Type |
> | -- | -- | -- |
-> | `use_custom_dns_a_registration` | Use an existing Private DNS zone | Optional |
-> | `management_dns_subscription_id` | Subscription ID for the subscription containing the Private DNS Zone | Optional |
-> | `management_dns_resourcegroup_name` | Resource group containing the Private DNS Zone | Optional |
-> | `dns_label` | DNS name of the private DNS zone | Optional |
-
+> | `use_custom_dns_a_registration` | Use an existing private DNS zone. | Optional |
+> | `management_dns_subscription_id` | Subscription ID for the subscription that contains the private DNS zone. | Optional |
+> | `management_dns_resourcegroup_name` | Resource group that contains the private DNS zone. | Optional |
+> | `dns_label` | DNS name of the private DNS zone. | Optional |
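A minimal sketch that wires in an existing private DNS zone; the subscription ID, resource group name, and zone name are assumptions:

```terraform
use_custom_dns_a_registration     = true
# Placeholder subscription and resource group that host the private DNS zone.
management_dns_subscription_id    = "<subscriptionID>"
management_dns_resourcegroup_name = "MGMT-DNS"
dns_label                         = "sap.contoso.net"
```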
-## Other Parameters
+## Other parameters
> [!div class="mx-tdCol2BreakAll "]
> | Variable | Description | Type | Notes |
> | - | - | -- | - |
-> | `enable_purge_control_for_keyvaults` | Is purge control is enabled on the Key Vault. | Optional | Use only for test deployments |
+> | `enable_purge_control_for_keyvaults` | Whether purge control is enabled on the key vault. | Optional | Use only for test deployments. |
> | `use_private_endpoint` | Whether private endpoints are created for storage accounts and key vaults. | Optional | |
> | `use_service_endpoint` | Whether service endpoints are defined for the subnets. | Optional | |
-> | `enable_firewall_for_keyvaults_and_storage` | Restrict access to selected subnets | Optional |
-> | `diagnostics_storage_account_arm_id` | The Azure resource identifier for the diagnostics storage account | Required | For brown field deployments. |
-> | `witness_storage_account_arm_id` | The Azure resource identifier for the witness storage account | Required | For brown field deployments. |
--
-## ISCSI Parameters
+> | `enable_firewall_for_keyvaults_and_storage` | Restrict access to selected subnets. | Optional | |
+> | `diagnostics_storage_account_arm_id` | The Azure resource identifier for the diagnostics storage account. | Required | For brown-field deployments. |
+> | `witness_storage_account_arm_id` | The Azure resource identifier for the witness storage account. | Required | For brown-field deployments. |
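For example, a hardened test configuration might combine these parameters as in the following sketch; the witness storage account ID is a placeholder that applies only to brown-field deployments:

```terraform
use_private_endpoint                      = true
use_service_endpoint                      = true
enable_firewall_for_keyvaults_and_storage = true

# Placeholder resource ID; required only when reusing an existing witness storage account.
witness_storage_account_arm_id = "/subscriptions/<subscriptionID>/resourceGroups/<resourceGroup>/providers/Microsoft.Storage/storageAccounts/<storageAccount>"
```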
+## iSCSI parameters
> [!div class="mx-tdCol2BreakAll "]
> | Variable | Description | Type | Notes |
> | -- | - | - | -- |
-> | `iscsi_subnet_name` | The name of the `iscsi` subnet. | Optional | |
-> | `iscsi_subnet_address_prefix` | The address range for the `iscsi` subnet. | Mandatory | For green field deployments. |
-> | `iscsi_subnet_arm_id` | The Azure resource identifier for the `iscsi` subnet. | Mandatory | For brown field deployments. |
-> | `iscsi_subnet_nsg_name` | The name of the `iscsi` Network Security Group name | Optional | |
-> | `iscsi_subnet_nsg_arm_id` | The Azure resource identifier for the `iscsi` Network Security Group | Mandatory | For brown field deployments. |
-> | `iscsi_count` | The number of iSCSI Virtual Machines | Optional | |
+> | `iscsi_subnet_name` | The name of the `iscsi` subnet | Optional | |
+> | `iscsi_subnet_address_prefix` | The address range for the `iscsi` subnet | Mandatory | For green-field deployments |
+> | `iscsi_subnet_arm_id` | The Azure resource identifier for the `iscsi` subnet | Mandatory | For brown-field deployments |
+> | `iscsi_subnet_nsg_name` | The name of the `iscsi` network security group | Optional | |
+> | `iscsi_subnet_nsg_arm_id` | The Azure resource identifier for the `iscsi` network security group | Mandatory | For brown-field deployments |
+> | `iscsi_count` | The number of iSCSI virtual machines | Optional | |
> | `iscsi_use_DHCP` | Controls whether to use dynamic IP addresses provided by the Azure subnet | Optional | |
-> | `iscsi_image` | Defines the Virtual machine image to use, see below | Optional | |
-> | `iscsi_authentication_type` | Defines the default authentication for the iSCSI Virtual Machines | Optional | |
+> | `iscsi_image` | Defines the virtual machine image to use (next table) | Optional | |
+> | `iscsi_authentication_type` | Defines the default authentication for the iSCSI virtual machines | Optional | |
> | `iscsi__authentication_username` | Administrator account name | Optional | |
-> | `iscsi_nic_ips` | IP addresses for the iSCSI Virtual Machines | Optional | ignored if `iscsi_use_DHCP` is defined |
--
-## Utility VM Parameters
+> | `iscsi_nic_ips` | IP addresses for the iSCSI virtual machines | Optional | Ignored if `iscsi_use_DHCP` is defined |
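As an illustration, a green-field iSCSI definition could look like this sketch; the address range and VM count are assumptions:

```terraform
# Hypothetical green-field values.
iscsi_subnet_address_prefix = "10.110.64.0/27"
iscsi_count                 = 3
iscsi_use_DHCP              = true
```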
+## Utility VM parameters
> [!div class="mx-tdCol2BreakAll "]
> | Variable | Description | Type | Notes |
> | -- | - | - | - |
-> | `utility_vm_count` | Defines the number of Utility virtual machines to deploy. | Optional | Use the utility virtual machine to host SAPGui |
-> | `utility_vm_size` | Defines the SKU for the Utility virtual machines. | Optional | Default: Standard_D4ds_v4 |
-> | `utility_vm_useDHCP` | Defines if Azure subnet provided IPs should be used. | Optional | |
-> | `utility_vm_image` | Defines the virtual machine image to use. | Optional | Default: Windows Server 2019 |
-> | `utility_vm_nic_ips` | Defines the IP addresses for the virtual machines. | Optional | |
-
+> | `utility_vm_count` | Defines the number of utility virtual machines to deploy | Optional | Use the utility virtual machine to host SAPGui |
+> | `utility_vm_size` | Defines the SKU for the utility virtual machines | Optional | Default: Standard_D4ds_v4 |
+> | `utility_vm_useDHCP` | Defines if Azure subnet provided IPs should be used | Optional | |
+> | `utility_vm_image` | Defines the virtual machine image to use | Optional | Default: Windows Server 2019 |
+> | `utility_vm_nic_ips` | Defines the IP addresses for the virtual machines | Optional | |
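A minimal sketch that deploys a single utility virtual machine with the default SKU; the values shown are assumptions:

```terraform
utility_vm_count   = 1
utility_vm_size    = "Standard_D4ds_v4"
utility_vm_useDHCP = true
```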
-## Terraform Parameters
-
-The table below contains the Terraform parameters. These parameters need to be entered manually if not using the deployment scripts.
+## Terraform parameters
+This table contains the Terraform parameters. These parameters need to be entered manually if you're not using the deployment scripts.
| Variable | Description | Type |
| -- | - | - |
-| `tfstate_resource_id` | Azure resource identifier for the Storage account in the SAP Library that will contain the Terraform state files | Required |
-| `deployer_tfstate_key` | The name of the state file for the Deployer | Required |
-
+| `tfstate_resource_id` | The Azure resource identifier for the storage account in the SAP library that contains the Terraform state files. | Required |
+| `deployer_tfstate_key` | The name of the state file for the deployer. | Required |
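As a sketch, if you run Terraform without the deployment scripts, these parameters might look like the following; every segment of the resource ID and the state file name are placeholders:

```terraform
tfstate_resource_id  = "/subscriptions/<subscriptionID>/resourceGroups/<libraryResourceGroup>/providers/Microsoft.Storage/storageAccounts/<storageAccount>"
deployer_tfstate_key = "MGMT-WEEU-DEP00-INFRASTRUCTURE.terraform.tfstate"
```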
-## Next Step
+## Next step
> [!div class="nextstepaction"] > [About SAP system deployment in automation framework](deploy-workload-zone.md)
sap Deploy Control Plane https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/deploy-control-plane.md
Title: About Control Plane deployment for the SAP on Azure Deployment Automation Framework
-description: Overview of the Control Plan deployment process within the SAP on Azure Deployment Automation Framework.
+ Title: Deploy the control plane for SAP Deployment Automation Framework
+description: Overview of the control plane deployment process in SAP Deployment Automation Framework.
# Deploy the control plane
-The control plane deployment for the [SAP on Azure Deployment Automation Framework](deployment-framework.md) consists of the following components:
+The control plane deployment for [SAP Deployment Automation Framework](deployment-framework.md) consists of the following components:
+- Deployer
+- SAP library
+ ## Prepare the deployment credentials
-The SAP Deployment Frameworks uses Service Principals when doing the deployments. You can create the Service Principal for the Control Plane deployment using the following steps using an account with permissions to create Service Principals:
+SAP Deployment Automation Framework uses service principals for deployments. To create a service principal for the control plane deployment, use an account that has permissions to create service principals:
```azurecli
az ad sp create-for-rbac --role="Contributor" --scopes="/subscriptions/<subscriptionID>" --name="<environment>-Deployment-Account"
```

> [!IMPORTANT]
-> The name of the Service Principal must be unique.
+> The name of the service principal must be unique.
>
-> Record the output values from the command.
+> Record the output values from the command:
> - appId
> - password
> - tenant
-Optionally assign the following permissions to the Service Principal:
+Optionally, assign the following permissions to the service principal:
```azurecli
az role assignment create --assignee <appId> --role "User Access Administrator" --scope /subscriptions/<subscriptionID>/resourceGroups/<resourceGroupName>
```
-## Prepare the webapp
-This step is optional. If you would like a browser-based UX to help the configuration of SAP workload zones and systems, run the following commands before deploying the control plane.
+## Prepare the web app
+This step is optional. If you want a browser-based UX to help the configuration of SAP workload zones and systems, run the following commands before you deploy the control plane.
# [Linux](#tab/linux)
del manifest.json
# [Azure DevOps](#tab/devops)
-It's currently not possible to perform this action from Azure DevOps.
+Currently, it isn't possible to perform this action from Azure DevOps.
## Deploy the control plane
-
-The sample Deployer configuration file `MGMT-WEEU-DEP00-INFRASTRUCTURE.tfvars` is located in the `~/Azure_SAP_Automated_Deployment/samples/Terraform/WORKSPACES/DEPLOYER/MGMT-WEEU-DEP00-INFRASTRUCTURE` folder.
-The sample SAP Library configuration file `MGMT-WEEU-SAP_LIBRARY.tfvars` is located in the `~/Azure_SAP_Automated_Deployment/samples/Terraform/WORKSPACES/LIBRARY/MGMT-WEEU-SAP_LIBRARY` folder.
+The sample deployer configuration file `MGMT-WEEU-DEP00-INFRASTRUCTURE.tfvars` is located in the `~/Azure_SAP_Automated_Deployment/samples/Terraform/WORKSPACES/DEPLOYER/MGMT-WEEU-DEP00-INFRASTRUCTURE` folder.
-Running the following command creates the Deployer, the SAP Library and adds the Service Principal details to the deployment key vault. If you followed the web app setup in the previous step, this command also creates the infrastructure to host the application.
+The sample SAP library configuration file `MGMT-WEEU-SAP_LIBRARY.tfvars` is located in the `~/Azure_SAP_Automated_Deployment/samples/Terraform/WORKSPACES/LIBRARY/MGMT-WEEU-SAP_LIBRARY` folder.
+
+Run the following command to create the deployer and the SAP library. The command adds the service principal details to the deployment key vault. If you followed the web app setup in the previous step, this command also creates the infrastructure to host the application.
# [Linux](#tab/linux)
Run the following command to deploy the control plane:
```bash
-az logout
-cd ~/Azure_SAP_Automated_Deployment
-cp -Rp samples/Terraform/WORKSPACES config
-cd config/WORKSPACES
export ARM_SUBSCRIPTION_ID="<subscriptionId>"
export ARM_CLIENT_ID="<appId>"
export ARM_CLIENT_SECRET="<password>"
export ARM_TENANT_ID="<tenantId>"
export env_code="MGMT"
export region_code="WEEU"
-export vnet_code="WEEU"
+export vnet_code="DEP01"
+export DEPLOYMENT_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/sap-automation"
+export CONFIG_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES"
+export SAP_AUTOMATION_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/sap-automation"
+
+az logout
az login --service-principal -u "${ARM_CLIENT_ID}" -p="${ARM_CLIENT_SECRET}" --tenant "${ARM_TENANT_ID}"
-export DEPLOYMENT_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/sap-automation"
-="${subscriptionId}"
-export CONFIG_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/config/WORKSPACES"
-export SAP_AUTOMATION_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/sap-automation"
+cd ~/Azure_SAP_Automated_Deployment/WORKSPACES
sudo ${SAP_AUTOMATION_REPO_PATH}/deploy/scripts/deploy_controlplane.sh \
    --subscription "${ARM_SUBSCRIPTION_ID}" \
    --spn_id "${ARM_CLIENT_ID}" \
    --spn_secret "${ARM_CLIENT_SECRET}" \
- --tenant_id "${ARM_TENANT_ID}" \
- --auto-approve
+ --tenant_id "${ARM_TENANT_ID}"
```

# [Windows](#tab/windows)

You can't perform a control plane deployment from Windows.

# [Azure DevOps](#tab/devops)
-Open (https://dev.azure.com) and go to your Azure DevOps project.
+Open [Azure DevOps](https://dev.azure.com) and go to your Azure DevOps project.
-> [!NOTE]
-> Ensure that the 'Deployment_Configuration_Path' variable in the 'SDAF-General' variable group is set to the folder that contains your configuration files, for this example you can use 'samples/WORKSPACES'.
+Ensure that the `Deployment_Configuration_Path` variable in the `SDAF-General` variable group is set to the folder that contains your configuration files. For this example, you can use `samples/WORKSPACES`.
-The deployment uses the configuration defined in the Terraform variable files located in the 'WORKSPACES/DEPLOYER/MGMT-WEEU-DEP00-INFRASTRUCTURE' and 'WORKSPACES/LIBRARY/MGMT-WEEU-SAP_LIBRARY' folders.
+The deployment uses the configuration defined in the Terraform variable files located in the `WORKSPACES/DEPLOYER/MGMT-WEEU-DEP00-INFRASTRUCTURE` and `WORKSPACES/LIBRARY/MGMT-WEEU-SAP_LIBRARY` folders.
-Run the pipeline by selecting the _Deploy control plane_ pipeline from the Pipelines section. Enter the configuration names for the deployer and the SAP library. Use 'MGMT-WEEU-DEP00-INFRASTRUCTURE' as the Deployer configuration name and 'MGMT-WEEU-SAP_LIBRARY' as the SAP Library configuration name.
+Run the pipeline by selecting the `Deploy control plane` pipeline from the **Pipelines** section. Enter the configuration names for the deployer and the SAP library. Use `MGMT-WEEU-DEP00-INFRASTRUCTURE` as the deployer configuration name and `MGMT-WEEU-SAP_LIBRARY` as the SAP library configuration name.
-You can track the progress in the Azure DevOps portal. Once the deployment is complete, you can see the Control Plane details in the _Extensions_ tab.
+You can track the progress in the Azure DevOps portal. After the deployment is finished, you can see the control plane details on the **Extensions** tab.
- :::image type="content" source="media/devops/automation-run-pipeline-control-plane.png" alt-text="Screenshot of the run Azure DevOps pipeline run results.":::
+ :::image type="content" source="media/devops/automation-run-pipeline-control-plane.png" alt-text="Screenshot that shows the run Azure DevOps pipeline run results.":::
-### Manually configure the deployer using Azure Bastion
+### Manually configure the deployer by using Azure Bastion
-Connect to the deployer by following these steps:
+To connect to the deployer:
1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Navigate to the resource group containing the deployer virtual machine.
+1. Go to the resource group that contains the deployer virtual machine (VM).
-1. Connect to the virtual machine using Azure Bastion.
+1. Connect to the VM by using Azure Bastion.
-1. The default username is *azureadm*
+1. The default username is **azureadm**.
-1. Choose *SSH Private Key from Azure Key Vault*
+1. Select **SSH Private Key from Azure Key Vault**.
-1. Select the subscription containing the control plane.
+1. Select the subscription that contains the control plane.
1. Select the deployer key vault.
-1. From the list of secrets choose the secret ending with *-sshkey*.
+1. From the list of secrets, choose the secret that ends with **-sshkey**.
-1. Connect to the virtual machine.
+1. Connect to the VM.
-Run the following script to configure the deployer.
+Run the following script to configure the deployer:
```bash
cd sap-automation/deploy/scripts
./configure_deployer.sh
```
-The script installs Terraform and Ansible and configure the deployer.
+The script installs Terraform and Ansible and configures the deployer.
### Manually configure the deployer
-> [!NOTE]
->You need to connect to the deployer virtual Machine from a computer that is able to reach the Azure Virtual Network
+Connect to the deployer VM from a computer that can reach the Azure virtual network.
-Connect to the deployer by following these steps:
+To connect to the deployer:
1. Sign in to the [Azure portal](https://portal.azure.com).

1. Select or search for **Key vaults**.
-1. On the **Key vault** page, find the deployer key vault. The name starts with `MGMT[REGION]DEP00user`. Filter by the **Resource group** or **Location** if necessary.
+1. On the **Key vault** page, find the deployer key vault. The name starts with `MGMT[REGION]DEP00user`. Filter by the **Resource group** or **Location**, if necessary.
-1. Select **Secrets** from the **Settings** section in the left pane.
+1. On the **Settings** section in the left pane, select **Secrets**.
-1. Find and select the secret containing **sshkey**. It might look like this: `MGMT-[REGION]-DEP00-sshkey`
+1. Find and select the secret that contains **sshkey**. It might look like `MGMT-[REGION]-DEP00-sshkey`.
-1. On the secret's page, select the current version. Then, copy the **Secret value**.
+1. On the secret's page, select the current version. Then copy the **Secret value**.
-1. Open a plain text editor. Copy in the secret value.
-
-1. Save the file where you keep SSH keys. For example, `C:\\Users\\<your-username>\\.ssh`.
-
-1. Save the file. If you're prompted to **Save as type**, select **All files** if **SSH** isn't an option. For example, use `deployer.ssh`.
+1. Open a plain text editor. Copy the secret value.
-1. Connect to the deployer VM through any SSH client such as Visual Studio Code. Use the private IP address of the deployer, and the SSH key you downloaded. For instructions on how to connect to the Deployer using Visual Studio Code see [Connecting to Deployer using Visual Studio Code](tools-configuration.md#configuring-visual-studio-code). If you're using PuTTY, convert the SSH key file first using PuTTYGen.
+1. Save the file where you keep SSH keys. An example is `C:\Users\<your-username>\.ssh`.
-> [!NOTE]
->The default username is *azureadm*
+1. Save the file. If you're prompted to **Save as type**, select **All files** if **SSH** isn't an option. For example, use `deployer.ssh`.
+
+1. Connect to the deployer VM through any SSH client, such as Visual Studio Code. Use the private IP address of the deployer and the SSH key you downloaded. For instructions on how to connect to the deployer by using Visual Studio Code, see [Connect to the deployer by using Visual Studio Code](tools-configuration.md#configure-visual-studio-code). If you're using PuTTY, convert the SSH key file first by using PuTTYGen.
-Configure the deployer using the following script:
+> [!NOTE]
+>The default username is **azureadm**.
+Configure the deployer by using the following script:
```bash
mkdir -p ~/Azure_SAP_Automated_Deployment; cd $_
cd sap-automation/deploy/scripts
./configure_deployer.sh
```
-The script installs Terraform and Ansible and configure the deployer.
--
+The script installs Terraform and Ansible and configures the deployer.
## Next step > [!div class="nextstepaction"]
-> [Configure SAP Workload Zone](configure-workload-zone.md)
+> [Configure SAP workload zone](configure-workload-zone.md)
sap Deploy System https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/deploy-system.md
Title: About SAP system deployment for the automation framework
-description: Overview of the SAP system deployment process within the SAP on Azure Deployment Automation Framework.
+ Title: SAP system deployment for the automation framework
+description: Overview of the SAP system deployment process in SAP Deployment Automation Framework.
# SAP system deployment for the automation framework
-The creation of the [SAP system](deployment-framework.md#sap-concepts) is part of the [SAP on Azure Deployment Automation Framework](deployment-framework.md) process. The SAP system creates your virtual machines (VMs), and supporting components for your [SAP application](deployment-framework.md#sap-concepts).
+The creation of the [SAP system](deployment-framework.md#sap-concepts) is part of the [SAP Deployment Automation Framework](deployment-framework.md) process. The SAP system deployment creates your virtual machines (VMs) and supporting components for your [SAP application](deployment-framework.md#sap-concepts).
The SAP system deploys:

-- The [database tier](#database-tier), which deploys database VMs, their disks, and a Standard Azure Load Balancer. You can run [HANA databases](configure-extra-disks.md#hana-databases) or [AnyDB databases](configure-extra-disks.md#anydb-databases) in this tier.
-- The [SAP central services tier](#central-services-tier), which deploys a customer-defined number of VMs and an Azure Standard Load Balancer.
+- The [database tier](#database-tier), which deploys database VMs, their disks, and a Standard instance of Azure Load Balancer. You can run [HANA databases](configure-extra-disks.md#hana-databases) or [AnyDB databases](configure-extra-disks.md#anydb-databases) in this tier.
+- The [SAP central services tier](#central-services-tier), which deploys a customer-defined number of VMs and a Standard instance of Load Balancer.
- The [application tier](#application-tier), which deploys the VMs and their disks.
-- The [web dispatcher tier](#web-dispatcher-tier)
+- The [web dispatcher tier](#web-dispatcher-tier).
## Application tier The application tier deploys a customer-defined number of VMs. These VMs are size **Standard_D4s_v3** with a 30-GB operating system (OS) disk and a 512-GB data disk.
-To set the application server count, define the parameter `application_server_count` for this tier in your parameter file. For example, `application_server_count= 3`.
-
+To set the application server count, define the parameter `application_server_count` for this tier in your parameter file. For example, use `application_server_count= 3`.
## Central services tier
-The SAP central services (SCS) tier deploys a customer-defined number of VMs. These VMs are size **Standard_D4s_v3** with a 30-GB OS disk and a 512-GB data disk. This tier also deploys an [Azure Standard Load Balancer](../../load-balancer/load-balancer-overview.md).
-
-To set the SCS server count, define the parameter `scs_server_count` for this tier in your parameter file. For example, `scs_server_count=1`.
+The SAP central services (SCS) tier deploys a customer-defined number of VMs. These VMs are size **Standard_D4s_v3** with a 30-GB OS disk and a 512-GB data disk. This tier also deploys a [Standard instance of Load Balancer](../../load-balancer/load-balancer-overview.md).
+To set the SCS server count, define the parameter `scs_server_count` for this tier in your parameter file. For example, use `scs_server_count=1`.
## Web dispatcher tier
-The web dispatcher tier deploys a customer-defined number of VMs. This tier also deploys an [Azure Standard Load Balancer](../../load-balancer/load-balancer-overview.md).
+The web dispatcher tier deploys a customer-defined number of VMs. This tier also deploys a [Standard instance of Load Balancer](../../load-balancer/load-balancer-overview.md).
-To set the web server count, define the parameter `web_server_count` for this tier in your parameter file. For example, `web_server_count = 2`.
+To set the web server count, define the parameter `web_server_count` for this tier in your parameter file. For example, use `web_server_count = 2`.
## Database tier
-The database tier deploys the VMs and their disks, and also an [Azure Standard Load Balancer](../../load-balancer/load-balancer-overview.md). You can use either [HANA databases](configure-extra-disks.md#hana-databases) or [AnyDB databases](configure-extra-disks.md#anydb-databases) as your database VMs.
+The database tier deploys the VMs and their disks and also deploys a [Standard instance of Load Balancer](../../load-balancer/load-balancer-overview.md). You can use either [HANA databases](configure-extra-disks.md#hana-databases) or [AnyDB databases](configure-extra-disks.md#anydb-databases) as your database VMs.
-You can set the size of database VMs with the parameter `size` for this tier. For example, `"size": "S4Demo"` for HANA databases or `"size": "1 TB"` for AnyDB databases. Refer to the **Size** parameter in the tables of [HANA database VM options](configure-extra-disks.md#hana-databases) and [AnyDB database VM options](configure-extra-disks.md#anydb-databases) for possible values.
+You can set the size of database VMs with the parameter `size` for this tier. For example, use `"size": "S4Demo"` for HANA databases or `"size": "1 TB"` for AnyDB databases. For possible values, see the **Size** parameter in the tables of [HANA database VM options](configure-extra-disks.md#hana-databases) and [AnyDB database VM options](configure-extra-disks.md#anydb-databases).
-By default, the automation framework deploys the correct disk configuration for HANA database deployments. For HANA database deployments, the framework calculates default disk configuration based on VM size. However, for AnyDB database deployments, the framework calculates default disk configuration based on database size. You can set a disk size as needed by creating a custom JSON file in your deployment. For an example, [see the following JSON code sample and replace values as necessary for your configuration](configure-extra-disks.md#custom-sizing-file). Then, define the parameter `db_disk_sizes_filename` in the parameter file for the database tier. For example, `db_disk_sizes_filename = "path/to/JSON/file"`.
+By default, the automation framework deploys the correct disk configuration for HANA database deployments. For HANA database deployments, the framework calculates default disk configuration based on VM size. However, for AnyDB database deployments, the framework calculates default disk configuration based on database size. You can set a disk size as needed by creating a custom JSON file in your deployment. For an example, [see the following JSON code sample and replace values as necessary for your configuration](configure-extra-disks.md#custom-sizing-file). Then, define the parameter `db_disk_sizes_filename` in the parameter file for the database tier. An example is `db_disk_sizes_filename = "path/to/JSON/file"`.
-You can also [add extra disks to a new system](configure-extra-disks.md#custom-sizing-file), or [add extra disks to an existing system](configure-extra-disks.md#add-extra-disks-to-existing-system).
+You can also [add extra disks to a new system](configure-extra-disks.md#custom-sizing-file) or [add extra disks to an existing system](configure-extra-disks.md#add-extra-disks-to-an-existing-system).
## Core configuration
webdispatcher_server_count=0
```
-## Deploying the SAP system
-
-The sample SAP System configuration file `DEV-WEEU-SAP01-X01.tfvars` is located in the `~/Azure_SAP_Automated_Deployment/samples/WORKSPACES/SYSTEM/DEV-WEEU-SAP01-X01` folder.
+## Deploy the SAP system
+
+The sample SAP system configuration file `DEV-WEEU-SAP01-X01.tfvars` is located in the `~/Azure_SAP_Automated_Deployment/samples/WORKSPACES/SYSTEM/DEV-WEEU-SAP01-X01` folder.
-Running the command below will deploy the SAP System.
+Run the following command to deploy the SAP system.
# [Linux](#tab/linux)
-> [!TIP]
-> Perform this task from the deployer.
+Perform this task from the deployer.
You can copy the sample configuration files to start testing the deployment automation framework.
xcopy sap-automation\deploy\samples\WORKSPACES WORKSPACES
```

```powershell
cd C:\Azure_SAP_Automated_Deployment\WORKSPACES\SYSTEM\DEV-WEEU-SAP01-X01
New-SAPSystem -Parameterfile DEV-WEEU-SAP01-X01.tfvars
```
# [Azure DevOps](#tab/devops)
-Open (https://dev.azure.com) and go to your Azure DevOps Services project.
+Open [Azure DevOps](https://dev.azure.com) and go to your Azure DevOps Services project.
-> [!NOTE]
-> Ensure that the 'Deployment_Configuration_Path' variable in the 'SDAF-General' variable group is set to the folder that contains your configuration files, for this example you can use 'samples/WORKSPACES'.
+Ensure that the `Deployment_Configuration_Path` variable in the `SDAF-General` variable group is set to the folder that contains your configuration files. For this example, you can use `samples/WORKSPACES`.
-The deployment will use the configuration defined in the Terraform variable file located in the 'samples/WORKSPACES/SYSTEM/DEV-WEEU-SAP01-X00' folder.
+The deployment uses the configuration defined in the Terraform variable file located in the `samples/WORKSPACES/SYSTEM/DEV-WEEU-SAP01-X00` folder.
-Run the pipeline by selecting the _SAP system deployment_ pipeline from the Pipelines section. Enter 'DEV-WEEU-SAP01-X00' as the SAP System configuration name.
+Run the pipeline by selecting the `SAP system deployment` pipeline from the **Pipelines** section. Enter `DEV-WEEU-SAP01-X00` as the SAP system configuration name.
-You can track the progress in the Azure DevOps Services portal. Once the deployment is complete, you can see the SAP System details in the _Extensions_ tab.
+You can track the progress in the Azure DevOps Services portal. After the deployment is finished, you can see the SAP system details on the **Extensions** tab.
### Output files
-The deployment will create an Ansible hosts file (`SID_hosts.yaml`) and an Ansible parameter file (`sap-parameters.yaml`) that are required input for the Ansible playbooks.
-## Next steps
+The deployment creates an Ansible hosts file (`SID_hosts.yaml`) and an Ansible parameter file (`sap-parameters.yaml`). These files are required input for the Ansible playbooks.
+
+## Next step
> [!div class="nextstepaction"]
-> [About workload zone deployment with automation framework](software.md)
+> [Workload zone deployment with automation framework](software.md)
sap Deploy Workload Zone https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/deploy-workload-zone.md
Title: About workload zone deployment in automation framework
-description: Overview of the SAP workload zone deployment process within the SAP on Azure Deployment Automation Framework.
+description: Overview of the SAP workload zone deployment process within SAP Deployment Automation Framework.
-# Workload zone deployment in SAP automation framework
+# Workload zone deployment in the SAP automation framework
-An [SAP application](deployment-framework.md#sap-concepts) typically has multiple development tiers. For example, you might have development, quality assurance, and production tiers. The [SAP on Azure Deployment Automation Framework](deployment-framework.md) refers to these tiers as [workload zones](deployment-framework.md#deployment-components).
+An [SAP application](deployment-framework.md#sap-concepts) typically has multiple development tiers. For example, you might have development, quality assurance, and production tiers. [SAP Deployment Automation Framework](deployment-framework.md) calls these tiers [workload zones](deployment-framework.md#deployment-components).
-You can use workload zones in multiple Azure regions. Each workload zone then has its own Azure Virtual Network (Azure virtual network)
+You can use workload zones in multiple Azure regions. Each workload zone then has its own instance of Azure Virtual Network.
The following services are provided by the SAP workload zone:

-- Azure Virtual Network, including subnets and network security groups.
-- Azure Key Vault, for system credentials.
-- Storage account for boot diagnostics
-- Storage account for cloud witnesses
-- Azure NetApp account and capacity pools (optional)
-- Azure Files NFS Shares (optional)
+- A virtual network, including subnets and network security groups
+- An Azure Key Vault instance, for system credentials
+- An Azure Storage account for boot diagnostics
+- A Storage account for cloud witnesses
+- An Azure NetApp Files account and capacity pools (optional)
+- Azure Files NFS shares (optional)
-The workload zones are typically deployed in spokes in a hub and spoke architecture. They may be in their own subscriptions.
-
-Supports the Private DNS from the Control Plane or from a configurable source.
+The workload zones are typically deployed in spokes in a hub-and-spoke architecture. They can be in their own subscriptions.
+The private DNS is supported from the control plane or from a configurable source.
## Core configuration
location="westeurope"
# The network logical name is mandatory - it is used in the naming convention and should map to the workload virtual network logical name
network_name="SAP01"
-# network_address_space is a mandatory parameter when an existing Virtual network is not used
+# network_address_space is a mandatory parameter when an existing virtual network is not used
network_address_space="10.110.0.0/16"

# admin_subnet_address_prefix is a mandatory parameter if the subnets are not defined in the workload or if existing subnets are not used
automation_username="azureadm"
```
-## Preparing the Workload zone deployment credentials
-
-The SAP Deployment Frameworks uses Service Principals when doing the deployment. You can create the Service Principal for the Workload Zone deployment using the following steps using an account with permissions to create Service Principals:
+## Prepare the workload zone deployment credentials
+SAP Deployment Automation Framework uses service principals when doing the deployment. To create the service principal for the workload zone deployment, use an account with permissions to create service principals.
```azurecli-interactive
az ad sp create-for-rbac --role="Contributor" --scopes="/subscriptions/<subscriptionID>" --name="<environment>-Deployment-Account"
```

> [!IMPORTANT]
-> The name of the Service Principal must be unique.
+> The name of the service principal must be unique.
>
-> Record the output values from the command.
+> Record the output values from the command:
> - appId
> - password
> - tenant
-Assign the correct permissions to the Service Principal:
+Assign the correct permissions to the service principal.
```azurecli
az role assignment create --assignee <appId> \
  --role "User Access Administrator"
```
-## Deploying the SAP Workload zone
+## Deploy the SAP workload zone
-The sample Workload Zone configuration file `DEV-WEEU-SAP01-INFRASTRUCTURE.tfvars` is located in the `~/Azure_SAP_Automated_Deployment/samples/Terraform/WORKSPACES/LANDSCAPE/DEV-WEEU-SAP01-INFRASTRUCTURE` folder.
+The sample workload zone configuration file `DEV-WEEU-SAP01-INFRASTRUCTURE.tfvars` is located in the `~/Azure_SAP_Automated_Deployment/samples/Terraform/WORKSPACES/LANDSCAPE/DEV-WEEU-SAP01-INFRASTRUCTURE` folder.
-Running the following command deploys the SAP Workload Zone.
+Run the following command to deploy the SAP workload zone.
# [Linux](#tab/linux)
-> [!TIP]
-> Perform this task from the deployer.
+Perform this task from the deployer.
You can copy the sample configuration files to start testing the deployment automation framework.
export region_code="<region_code>"
export vnet_code="SAP02"
export deployer_environment="MGMT"
-az login --service-principal -u "${ARM_CLIENT_ID}" -p="${ARM_CLIENT_SECRET}" --tenant "${ARM_TENANT_ID}"
export DEPLOYMENT_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/sap-automation"
export CONFIG_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/config/WORKSPACES"
export SAP_AUTOMATION_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/sap-automation"
+az login --service-principal -u "${ARM_CLIENT_ID}" -p="${ARM_CLIENT_SECRET}" --tenant "${ARM_TENANT_ID}"
cd "${CONFIG_REPO_PATH}/LANDSCAPE/${env_code}-${region_code}-${vnet_code}-INFRASTRUCTURE"
parameterFile="${env_code}-${region_code}-${vnet_code}-INFRASTRUCTURE.tfvars"
$SAP_AUTOMATION_REPO_PATH/deploy/scripts/install_workloadzone.sh \
    --subscription "${ARM_SUBSCRIPTION_ID}" \
    --spn_id "${ARM_CLIENT_ID}" \
    --spn_secret "${ARM_CLIENT_SECRET}" \
- --tenant_id "${ARM_TENANT_ID}" \
- --auto-approve
+ --tenant_id "${ARM_TENANT_ID}"
```

# [Windows](#tab/windows)

It isn't possible to perform the deployment from Windows.
-> [!NOTE]
-> Be sure to replace the sample value `<subscriptionID>` with your subscription ID.
-> Replace the `<appID>`, `<password>`, `<tenant>` values with the output values of the SPN creation
-> Replace `<keyvault>` with the deployer key vault name
-> Replace `<storageaccount>` with the name of the storage account containing the Terraform state files
-> Replace `<statefile_subscription>` with the subscription ID for the storage account containing the Terraform state files
+To begin, be sure to replace:
+
+- The sample value `<subscriptionID>` with your subscription ID.
+- The `<appID>`, `<password>`, and `<tenant>` values with the output values of the SPN creation.
+- The `<keyvault>` value with the deployer key vault name.
+- The `<storageaccount>` value with the name of the storage account that contains the Terraform state files.
+- The `<statefile_subscription>` value with the subscription ID for the storage account that contains the Terraform state files.
# [Azure DevOps](#tab/devops)
-Open (https://dev.azure.com) and go to your Azure DevOps Services project.
+Open [Azure DevOps](https://dev.azure.com) and go to your Azure DevOps Services project.
-> [!NOTE]
-> Ensure that the 'Deployment_Configuration_Path' variable in the 'SDAF-General' variable group is set to the folder that contains your configuration files, for this example you can use 'samples/WORKSPACES'.
+Ensure that the `Deployment_Configuration_Path` variable in the `SDAF-General` variable group is set to the folder that contains your configuration files. For this example, you can use `samples/WORKSPACES`.
-The deployment uses the configuration defined in the Terraform variable file located in the 'samples/WORKSPACES/LANDSCAPE/DEV-WEEU-SAP01-INFRASTRUCTURE' folder.
+The deployment uses the configuration defined in the Terraform variable file located in the `samples/WORKSPACES/LANDSCAPE/DEV-WEEU-SAP01-INFRASTRUCTURE` folder.
-Run the pipeline by selecting the _Deploy workload zone_ pipeline from the Pipelines section. Enter the workload zone configuration name and the deployer environment name. Use 'DEV-WEEU-SAP01-INFRASTRUCTURE' as the Workload zone configuration name and 'MGMT' as the Deployer Environment Name.
+Run the pipeline by selecting the `Deploy workload zone` pipeline from the **Pipelines** section. Enter the workload zone configuration name and the deployer environment name. Use `DEV-WEEU-SAP01-INFRASTRUCTURE` as the workload zone configuration name and `MGMT` as the deployer environment name.
-You can track the progress in the Azure DevOps Services portal. Once the deployment is complete, you can see the Workload Zone details in the _Extensions_ tab.
+You can track the progress in the Azure DevOps Services portal. After the deployment is finished, you can see the workload zone details on the **Extensions** tab.
> [!TIP]
-> If the scripts fail to run, it can sometimes help to clear the local cache files by removing `~/.sap_deployment_automation/` and `~/.terraform.d/` directories before running the scripts again.
+> If the scripts fail to run, it can sometimes help to clear the local cache files by removing the `~/.sap_deployment_automation/` and `~/.terraform.d/` directories before you run the scripts again.
-## Next steps
+## Next step
> [!div class="nextstepaction"]
-> [About SAP system deployment in automation framework](configure-system.md)
+> [SAP system deployment with the automation framework](configure-system.md)
sap Deployment Framework https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/deployment-framework.md
Title: About SAP on Azure Deployment Automation Framework
-description: Overview of the framework and tooling for the SAP on Azure Deployment Automation Framework.
+ Title: About SAP Deployment Automation Framework
+description: Overview of the framework and tooling for SAP Deployment Automation Framework.
-# SAP on Azure Deployment Automation Framework
+# SAP Deployment Automation Framework
-The [SAP on Azure Deployment Automation Framework](https://github.com/Azure/sap-automation) is an open-source orchestration tool for deploying, installing and maintaining SAP environments. You can create infrastructure for SAP landscapes based on SAP HANA and NetWeaver with AnyDB using [Terraform](https://www.terraform.io/), and [Ansible](https://www.ansible.com/) for the operating system and application configuration. The systems can be deployed on any of the SAP-supported operating system versions and deployed into any Azure region.
+[SAP Deployment Automation Framework](https://github.com/Azure/sap-automation) is an open-source orchestration tool that's used to deploy, install, and maintain SAP environments. You can create infrastructure for SAP landscapes based on SAP HANA and NetWeaver with AnyDB by using [Terraform](https://www.terraform.io/) and [Ansible](https://www.ansible.com/) for the operating system and application configuration. You can deploy the systems on any of the SAP-supported operating system versions and into any Azure region.
-Hashicorp [Terraform](https://www.terraform.io/) is an open-source tool for provisioning and managing cloud infrastructure.
+[Terraform](https://www.terraform.io/) from Hashicorp is an open-source tool for provisioning and managing cloud infrastructure.
-[Ansible](https://www.ansible.com/) is an open-source platform by Red Hat that automates cloud provisioning, configuration management, and application deployments. Using Ansible, you can automate deployment and configuration of resources in your environment.
+[Ansible](https://www.ansible.com/) is an open-source platform by Red Hat that automates cloud provisioning, configuration management, and application deployments. When you use Ansible, you can automate deployment and configuration of resources in your environment.
The [automation framework](https://github.com/Azure/sap-automation) has two main components:

-- Deployment infrastructure (control plane, hub component)
-- SAP Infrastructure (SAP Workload, spoke component)
-You'll use the control plane of the SAP on Azure Deployment Automation Framework to deploy the SAP Infrastructure and the SAP application. The deployment uses Terraform templates to create the [infrastructure as a service (IaaS)](https://azure.microsoft.com/overview/what-is-iaas) defined infrastructure to host the SAP Applications.
+- Deployment infrastructure (control plane and hub component)
+- SAP infrastructure (SAP workload and spoke component)
+
+You use the control plane of SAP Deployment Automation Framework to deploy the SAP infrastructure and the SAP application. The deployment uses Terraform templates to create the [infrastructure as a service (IaaS)](https://azure.microsoft.com/overview/what-is-iaas)-defined infrastructure to host the SAP applications.
> [!NOTE]
-> This automation framework is based on Microsoft best practices and principles for SAP on Azure. Review the [get-started guide for SAP on Azure virtual machines (Azure VMs)](get-started.md) to understand how to use certified virtual machines and storage solutions for stability, reliability, and performance.
+> This automation framework is based on Microsoft best practices and principles for SAP on Azure. To understand how to use certified virtual machines (VMs) and storage solutions for stability, reliability, and performance, see [Get started with SAP automation framework on Azure](get-started.md).
> > This automation framework also follows the [Microsoft Cloud Adoption Framework for Azure](/azure/cloud-adoption-framework/).
-The automation framework can be used to deploy the following SAP architectures:
-- Standalone
-- Distributed
-- Distributed (Highly Available)
-In the Standalone architecture, all the SAP roles are installed on a single server. In the distributed architecture, you can separate the database server and the application tier. The application tier can further be separated in two by having SAP Central Services on a virtual machine and one or more application servers.
+You can use the automation framework to deploy the following SAP architectures:
-The Distributed (Highly Available) deployment is similar to the Distributed architecture. In this deployment, the database and/or SAP Central Services can both be configured using a highly available configuration using two virtual machines each with Pacemaker clusters.
+- **Standalone**: For this architecture, all the SAP roles are installed on a single server.
+- **Distributed**: With this architecture, you can separate the database server and the application tier. The application tier can further be separated in two by having SAP central services on a VM and one or more application servers.
+- **Distributed (highly available)**: This architecture is similar to the distributed architecture. In this deployment, the database and/or SAP central services can both be configured by using a highly available configuration that uses two VMs, each with Pacemaker clusters.
-The dependency between the control plane and the application plane is illustrated in the diagram below. In a typical deployment, a single control plane is used to manage multiple SAP deployments.
+The dependency between the control plane and the application plane is illustrated in the following diagram. In a typical deployment, a single control plane is used to manage multiple SAP deployments.
## About the control plane
-The control plane houses the deployment infrastructure from which other environments will be deployed. Once the control plane is deployed, it rarely needs to be redeployed, if ever.
+The control plane houses the deployment infrastructure from which other environments are deployed. After the control plane is deployed, it rarely needs to be redeployed, if ever.
+
+The control plane provides the following services:
-The control plane provides the following services
- Deployment agents for running:
- - Terraform Deployment
+ - Terraform deployment
  - Ansible configuration
- Persistent storage for the Terraform state files
-- Persistent storage for the Downloaded SAP Software
+- Persistent storage for the downloaded SAP software
- Azure Key Vault for secure storage for deployment credentials
- Private DNS zone (optional)
-- Configuration Web Application
+- Configuration for web applications
+
+The control plane is typically a regional resource deployed into the hub subscription in a [hub-and-spoke architecture](/azure/architecture/reference-architectures/hybrid-networking/hub-spoke).
-The control plane is typically a regional resource deployed in to the hub subscription in a [hub and spoke architecture](/azure/architecture/reference-architectures/hybrid-networking/hub-spoke).
+The following diagram shows the key components of the control plane and the workload zone.
-The following diagram shows the key components of the control plane and workload zone.
+The application configuration is performed from the deployment agents in the control plane by using a set of predefined playbooks. These playbooks will:
+- Configure base operating system settings.
+- Configure SAP-specific operating system settings.
+- Make the installation media available in the system.
+- Install the SAP system components.
+- Install the SAP database (SAP HANA and AnyDB).
+- Configure high availability by using Pacemaker.
+- Configure high availability for your SAP database.
-The application configuration will be performed from the deployment agents in the Control plane using a set of pre-defined playbooks. These playbooks will:
+For more information about how to configure and deploy the control plane, see [Configure the control plane](configure-control-plane.md) and [Deploy the control plane](deploy-control-plane.md).
-- Configure base operating system settings
-- Configure SAP-specific operating system settings
-- Make the installation media available in the system
-- Install the SAP system components
-- Install the SAP database (SAP HANA, AnyDB)
-- Configure high availability (HA) using Pacemaker
-- Configure high availability (HA) for your SAP database
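As a rough sketch of what the deployment agents run, each activity in the playbook list above corresponds to a numbered playbook in the repository. The playbook, inventory, and parameter file names below are illustrative only; the actual names depend on your SID and the framework release, and the pipelines normally run the playbooks for you.

```bash
# Illustrative only: file names vary by SID and release.
ansible-playbook -i X00_hosts.yaml \
  -e "@sap-parameters.yaml" \
  playbook_01_os_base_config.yaml   # base operating system settings
```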
+## Software acquisition process
+The framework also provides an Ansible playbook that can be used to download the software from SAP and persist it in the storage accounts in the control plane's SAP library resource group.
-For more information of how to configure and deploy the control plane, see [Configuring the control plane](configure-control-plane.md) and [Deploying the control plane](deploy-control-plane.md).
+The software acquisition process uses an SAP application manifest file that contains the list of SAP software to be downloaded. The manifest file is a YAML file that contains the:
-## Software acquisition process
+- List of files to be downloaded.
+- List of the product IDs for the SAP application components.
+- Set of template files used to provide the parameters for the unattended installation.
+
+The SAP software download playbook processes the manifest file and the dependent manifest files and downloads the SAP software from SAP by using the specified SAP user account. The software is downloaded to the SAP library storage account and is available for the installation process.
-The framework also provides an Ansible playbook that can be used to download the software from SAP and persist it in the storage accounts in the Control Plane's SAP Library resource group.
+As part of the download process, the application manifest and the supporting templates are also persisted in the storage account. The application manifest and the dependent manifests are aggregated into a single manifest file that's used by the installation process.
-The software acquisition is using an SAP Application manifest file that contains the list of SAP software to be downloaded. The manifest file is a YAML file that contains the following information:
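The following abbreviated sketch shows the general shape of such a manifest. The field names and values are illustrative only; consult the sample Bill of Materials files in the repository for the exact schema.

```yaml
# Hypothetical, abbreviated manifest for illustration; not a complete schema.
name: 'S41909SPS03_v0011ms'                      # Bill of Materials name
product_ids:
  scs: 'NW_ABAP_ASCS:S4HANA1909.CORE.HDB.ABAP'   # example product ID
materials:
  media:
    - name: SAPCAR                               # a file to be downloaded
  templates:
    - name: 'S41909SPS03_v0011ms unattended install template'
```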
+### Deployer VMs
-- List of files to be downloaded
-- List of the Product IDs for the SAP application components
-- A set of template files used to provide the parameters for the unattended installation
+These VMs are used to run the orchestration scripts that deploy the Azure resources by using Terraform. They're also Ansible controllers and are used to execute the Ansible playbooks on all the managed nodes, that is, the VMs of an SAP deployment.
-The SAP Software download playbook will process the manifest file and the dependent manifest files and download the SAP software from SAP using the specified SAP user account. The software will be downloaded to the SAP Library storage account and will be available for the installation process. As part of the download the process the application manifest and the supporting templates will also be persisted in the storage account. The application manifest and the dependent manifests will be aggregated into a single manifest file that will be used by the installation process.
+## About the SAP workload
-### Deployer Virtual Machines
+The SAP workload contains all the Azure infrastructure resources for the SAP deployments. These resources are deployed from the control plane.
-These virtual machines are used to run the orchestration scripts that will deploy the Azure resources using Terraform. They are also Ansible Controllers and are used to execute the Ansible playbooks on all the managed nodes, i.e the virtual machines of an SAP deployment.
+The SAP workload has two main components:
-## About the SAP Workload
+- SAP workload zone
+- SAP systems
-The SAP Workload contains all the Azure infrastructure resources for the SAP Deployments. These resources are deployed from the control plane.
-The SAP Workload has two main components:
-- SAP Workload Zone
-- SAP System(s)
+## About the SAP workload zone
-## About the SAP Workload Zone
+The workload zone allows for partitioning of the deployments into different environments, such as development, test, and production. The workload zone provides the shared services (networking and credentials management) to the SAP systems.
-The workload zone allows for partitioning of the deployments into different environments (Development, Test, Production). The Workload zone will provide the shared services (networking, credentials management) to the SAP systems.
+The SAP workload zone provides the following services to the SAP systems:
-The SAP Workload Zone provides the following services to the SAP Systems
-- Virtual Networking infrastructure
-- Azure Key Vault for system credentials (Virtual Machines and SAP)
-- Shared Storage (optional)
+- Virtual networking infrastructure
+- Azure Key Vault for system credentials (VMs and SAP)
+- Shared storage (optional)
-For more information of how to configure and deploy the SAP Workload zone, see [Configuring the workload zone](configure-workload-zone.md) and [Deploying the SAP workload zone](deploy-workload-zone.md).
+For more information about how to configure and deploy the SAP workload zone, see [Configure the workload zone](configure-workload-zone.md) and [Deploy the SAP workload zone](deploy-workload-zone.md).
-## About the SAP System
+## About the SAP system
-The system deployment consists of the virtual machines that will be running the SAP application, including the web, app and database tiers.
+The system deployment consists of the VMs that run the SAP application, including the web, app, and database tiers.
-The SAP System provides the following services
-- Virtual machine, storage, and supporting infrastructure to host the SAP applications.
+The SAP system provides the VMs, storage, and supporting infrastructure to host the SAP applications.
-For more information of how to configure and deploy the SAP System, see [Configuring the SAP System](configure-system.md) and [Deploying the SAP system](deploy-system.md).
+For more information about how to configure and deploy the SAP system, see [Configure the SAP system](configure-system.md) and [Deploy the SAP system](deploy-system.md).
## Glossary
The following terms are important concepts for understanding the automation fram
> [!div class="mx-tdCol2BreakAll "]
> | Term | Description |
> | - | -- |
-> | System | An instance of an SAP application that contains the resources the application needs to run. Defined by a unique three-letter identifier, the **SID**.
+> | System | An instance of an SAP application that contains the resources the application needs to run. Defined by a unique three-letter identifier, the *SID*.
> | Landscape | A collection of systems in different environments within an SAP application. For example, SAP ERP Central Component (ECC), SAP customer relationship management (CRM), and SAP Business Warehouse (BW). |
-> | Workload zone | Partitions the SAP applications to environments, such as non-production and production environments or development, quality assurance, and production environments. Provides shared resources, such as virtual networks and key vault, to all systems within. |
+> | Workload zone | Partitions the SAP applications to environments, such as nonproduction and production environments or development, quality assurance, and production environments. Provides shared resources, such as virtual networks and key vaults, to all systems within. |
The following diagram shows the relationships between SAP systems, workload zones (environments), and landscapes. In this example setup, the customer has three SAP landscapes: ECC, CRM, and BW. Each landscape contains three workload zones: production, quality assurance, and development. Each workload zone contains one or more systems.

### Deployment components

> [!div class="mx-tdCol2BreakAll "]
> | Term | Description | Scope |
> | - | -- | -- |
-> | Deployer | A virtual machine that can execute Terraform and Ansible commands. | Region |
+> | Deployer | A VM that can execute Terraform and Ansible commands. | Region |
> | Library | Provides storage for the Terraform state files and the SAP installation media. | Region |
-> | Workload zone | Contains the virtual network for the SAP systems and a key vault that holds the system credentials | Workload zone |
-> | System | The deployment unit for the SAP application (SID). Contains all infrastructure assets | Workload zone |
-
+> | Workload zone | Contains the virtual network for the SAP systems and a key vault that holds the system credentials. | Workload zone |
+> | System | The deployment unit for the SAP application (SID). Contains all infrastructure assets. | Workload zone |
## Next steps

> [!div class="nextstepaction"]
-> [Get started with the deployment automation framework](get-started.md)
-> [Planning for the automation framwework](plan-deployment.md)
-> [Configuring Azure DevOps for the automation framwework](configure-devops.md)
-> [Configuring the control plane](configure-control-plane.md)
-> [Configuring the workload zone](configure-workload-zone.md)
-> [Configuring the SAP System](configure-system.md)
-
+> - [Get started with the deployment automation framework](get-started.md)
+> - [Plan for the automation framework](plan-deployment.md)
+> - [Configure Azure DevOps for the automation framework](configure-devops.md)
+> - [Configure the control plane](configure-control-plane.md)
+> - [Configure the workload zone](configure-workload-zone.md)
+> - [Configure the SAP system](configure-system.md)
sap Devops Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/devops-tutorial.md
Title: SAP on Azure Deployment Automation Framework DevOps hands-on lab
-description: DevOps Hands-on lab for the SAP on Azure Deployment Automation Framework.
+ Title: 'Tutorial: Use SAP Deployment Automation Framework with DevOps'
+description: This tutorial shows you how to use SAP Deployment Automation Framework by using Azure DevOps Services.
-# SAP on Azure Deployment Automation Framework DevOps - Hands-on lab
+# Tutorial: Use SAP Deployment Automation Framework with DevOps
-This tutorial shows how to perform the deployment activities of the [SAP on Azure Deployment Automation Framework](deployment-framework.md) using Azure DevOps Services.
+This tutorial shows you how to perform the deployment activities of [SAP Deployment Automation Framework](deployment-framework.md) by using Azure DevOps Services.
-You'll perform the following tasks during this lab:
+In this tutorial, you learn how to:
> [!div class="checklist"]
-> * Deploy the Control Plane (Deployer Infrastructure & Library)
-> * Deploy the Workload Zone (Landscape, System)
-> * Deploy the SAP Infrastructure
-> * Install HANA Database
-> * Install SCS server
-> * Load HANA Database
-> * Install Primary Application Server
-> * Download the SAP software
-> * Install SAP
+> * Deploy the control plane (deployer infrastructure and library).
+> * Deploy the workload zone (landscape and system).
+> * Deploy the SAP infrastructure.
+> * Install the HANA database.
+> * Install the SCS server.
+> * Load the HANA database.
+> * Install the primary application server.
+> * Download the SAP software.
+> * Install SAP.
## Prerequisites

-- An Azure subscription. If you don't have an Azure subscription, you can [create a free account here](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- An Azure subscription. If you don't have an Azure subscription, you can [create a free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-> [!Note]
-> The free Azure account may not be sufficient to run the deployment.
-
-- A Service Principal with 'Contributor' permissions in the target subscriptions. For more information, see [Prepare the deployment credentials](deploy-control-plane.md#prepare-the-deployment-credentials).
-
-- A configured Azure DevOps instance, follow the steps here [Configure Azure DevOps Services for SAP Deployment Automation](configure-devops.md)
+ > [!Note]
+ > The free Azure account might not be sufficient to run the deployment.
-- For the 'SAP software acquisition' and the 'Configuration and SAP installation' pipelines a configured self hosted agent.
+- A service principal with Contributor permissions in the target subscriptions. For more information, see [Prepare the deployment credentials](deploy-control-plane.md#prepare-the-deployment-credentials).
+- A configured Azure DevOps instance. For more information, see [Configure Azure DevOps Services for SAP Deployment Automation](configure-devops.md).
+- For the `SAP software acquisition` and the `Configuration and SAP installation` pipelines, a configured self-hosted agent.
-> [!Note]
-> The self hosted agent virtual machine will be deployed as part of the control plane deployment.
+The self-hosted agent virtual machine is deployed as part of the control plane deployment.
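If you still need to create the service principal from the prerequisites, a minimal Azure CLI sketch follows. The display name is an example only; scope the assignment to your target subscription.

```bash
# Example only: replace the name and subscription ID with your own values.
az ad sp create-for-rbac --name "MGMT-Deployment-Account" \
  --role "Contributor" \
  --scopes "/subscriptions/<subscription_id>"
```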
## Overview
-These steps reference and use the [default naming convention](naming.md) for the automation framework. Example values are also used for naming throughout the configurations. In this tutorial, the following names are used:
-- Azure DevOps Services project name is `SAP-Deployment`
-- Azure DevOps Services repository name is `sap-automation`
-- The control plane environment is named `MGMT`, in the region West Europe (`WEEU`) and installed in the virtual network `DEP00`, giving a deployer configuration name: `MGMT-WEEU-DEP00-INFRASTRUCTURE`
+These steps reference and use the [default naming convention](naming.md) for the automation framework. Example values are also used for naming throughout the configurations. This tutorial uses the following names:
-- The SAP workload zone has the environment name `DEV` and is in the same region as the control plane using the virtual network `SAP01`, giving the SAP workload zone configuration name: `DEV-WEEU-SAP01-INFRASTRUCTURE`
-- The SAP System with SID `X00` will be installed in this SAP workload zone. The configuration name for the SAP System: `DEV-WEEU-SAP01-X00`
+- The Azure DevOps Services project name is `SAP-Deployment`.
+- The Azure DevOps Services repository name is `sap-automation`.
+- The control plane environment is named `MGMT`. It's in the region West Europe (`WEEU`) and is installed in the virtual network `DEP00`. The deployer configuration name is `MGMT-WEEU-DEP00-INFRASTRUCTURE`.
+- The SAP workload zone has the environment name `DEV`. It's in the same region as the control plane and uses the virtual network `SAP01`. The SAP workload zone configuration name is `DEV-WEEU-SAP01-INFRASTRUCTURE`.
+- The SAP system with SID `X00` is installed in this SAP workload zone. The configuration name for the SAP system is `DEV-WEEU-SAP01-X00`.
| Artifact type | Configuration name | Location |
| - | - | - |
-| Control Plane | MGMT-WEEU-DEP00-INFRASTRUCTURE | westeurope |
-| Workload Zone | DEP-WEEU-SAP01-INFRASTRUCTURE | westeurope |
-| SAP System | DEP-WEEU-SAP01-X00 | westeurope |
-
-The deployed infrastructure is shown in the diagram below.
+| Control plane | MGMT-WEEU-DEP00-INFRASTRUCTURE | westeurope |
+| Workload zone | DEV-WEEU-SAP01-INFRASTRUCTURE | westeurope |
+| SAP system | DEV-WEEU-SAP01-X00 | westeurope |
- :::image type="content" source="media/devops/automation-devops-tutorial-design.png" alt-text="Picture showing the DevOps tutorial infrastructure design":::
+The following diagram shows the deployed infrastructure.
+ :::image type="content" source="media/devops/automation-devops-tutorial-design.png" alt-text="Diagram that shows the DevOps tutorial infrastructure design.":::
> [!Note]
-> In this tutorial the X00 SAP system will be deployed with the following configuration:
+> In this tutorial, the X00 SAP system is deployed with the following configuration:
+>
> * Standalone deployment
> * HANA DB VM SKU: Standard_M32ts
> * ASCS VM SKU: Standard_D4s_v3
> * APP VM SKU: Standard_D4s_v3
-## Deploy the Control Plane
-
-The deployment will use the configuration defined in the Terraform variable files located in the 'samples/WORKSPACES/DEPLOYER/MGMT-WEEU-DEP00-INFRASTRUCTURE' and 'samples/WORKSPACES/LIBRARY/MGMT-WEEU-SAP_LIBRARY' folders.
+## Deploy the control plane
-Ensure that the 'Deployment_Configuration_Path' variable in the 'SDAF-General' variable group is set to 'samples/WORKSPACES'
+The deployment uses the configuration defined in the Terraform variable files located in the `samples/WORKSPACES/DEPLOYER/MGMT-WEEU-DEP00-INFRASTRUCTURE` and `samples/WORKSPACES/LIBRARY/MGMT-WEEU-SAP_LIBRARY` folders.
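For reference, the deployer configuration file encodes the environment, region, and virtual network that make up the configuration name. A minimal sketch of such an excerpt follows; the parameter names shown are assumptions, so check the sample file for the full, authoritative set.

```terraform
# Illustrative excerpt of MGMT-WEEU-DEP00-INFRASTRUCTURE.tfvars
environment = "MGMT"        # environment name
location    = "westeurope"  # Azure region (WEEU)
```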
-Run the pipeline by selecting the _Deploy control plane_ pipeline from the Pipelines section. Enter 'MGMT-WEEU-DEP00-INFRASTRUCTURE' as the Deployer configuration name and 'MGMT-WEEU-SAP_LIBRARY' as the SAP Library configuration name.
+Ensure that the `Deployment_Configuration_Path` variable in the `SDAF-General` variable group is set to `samples/WORKSPACES`.
+Run the pipeline by selecting the `Deploy control plane` pipeline from the **Pipelines** section. Enter `MGMT-WEEU-DEP00-INFRASTRUCTURE` as the deployer configuration name and `MGMT-WEEU-SAP_LIBRARY` as the SAP library configuration name.
-You can track the progress in the Azure DevOps Services portal. Once the deployment is complete, you can see the Control Plane details in the _Extensions_ tab.
- :::image type="content" source="media/devops/automation-run-pipeline-control-plane.png" alt-text="Screenshot of the DevOps run pipeline results.":::
+You can track the progress in the Azure DevOps Services portal. After the deployment is finished, you can see the control plane details on the **Extensions** tab.
+ :::image type="content" source="media/devops/automation-run-pipeline-control-plane.png" alt-text="Screenshot that shows the DevOps Run pipeline results.":::
-## Deploy the Workload zone
+## Deploy the workload zone
-The deployment will use the configuration defined in the Terraform variable file located in the 'samples/WORKSPACES/LANDSCAPE/DEV-WEEU-SAP01-INFRASTRUCTURE' folder.
+The deployment uses the configuration defined in the Terraform variable file located in the `samples/WORKSPACES/LANDSCAPE/DEV-WEEU-SAP01-INFRASTRUCTURE` folder.
-Run the pipeline by selecting the _Deploy workload zone_ pipeline from the Pipelines section. Enter 'DEV-WEEU-SAP01-INFRASTRUCTURE' as the Workload zone configuration name and 'MGMT' as the Deployer Environment Name.
+Run the pipeline by selecting the `Deploy workload zone` pipeline from the **Pipelines** section. Enter `DEV-WEEU-SAP01-INFRASTRUCTURE` as the workload zone configuration name and `MGMT` as the deployer environment name.
-You can track the progress in the Azure DevOps Services portal. Once the deployment is complete, you can see the Workload Zone details in the _Extensions_ tab.
+You can track the progress in the Azure DevOps Services portal. After the deployment is finished, you can see the workload zone details on the **Extensions** tab.
-## Deploy the SAP System
+## Deploy the SAP system
-The deployment will use the configuration defined in the Terraform variable file located in the 'samples/WORKSPACES/SYSTEM/DEV-WEEU-SAP01-X00' folder.
+The deployment uses the configuration defined in the Terraform variable file located in the `samples/WORKSPACES/SYSTEM/DEV-WEEU-SAP01-X00` folder.
-Run the pipeline by selecting the _SAP system deployment_ pipeline from the Pipelines section. Enter 'DEV-WEEU-SAP01-X00' as the SAP System configuration name.
+Run the pipeline by selecting the `SAP system deployment` pipeline from the **Pipelines** section. Enter `DEV-WEEU-SAP01-X00` as the SAP system configuration name.
-You can track the progress in the Azure DevOps Services portal. Once the deployment is complete, you can see the SAP System details in the _Extensions_ tab.
+You can track the progress in the Azure DevOps Services portal. After the deployment is finished, you can see the SAP system details on the **Extensions** tab.
-## Download the SAP Software
+## Download the SAP software
-Run the pipeline by selecting the _SAP software acquisition_ pipeline from the Pipelines section. Enter 'S41909SPS03_v0011ms' as the Name of Bill of Materials (BoM), 'MGMT' as the Control Plane Environment name: MGMT and 'WEEU' as the
-Control Plane (SAP Library) location code.
+Run the pipeline by selecting the `SAP software acquisition` pipeline from the **Pipelines** section. Enter `S41909SPS03_v0011ms` as the name of the Bill of Materials (BoM), `MGMT` as the control plane environment name, and `WEEU` as the control plane (SAP library) location code.
-You can track the progress in the Azure DevOps portal.
+You can track the progress in the Azure DevOps portal.
-## Run the Configuration and SAP Installation pipeline
+## Run the configuration and SAP installation pipeline
-Run the pipeline by selecting the _Configuration and SAP installation_ pipeline from the Pipelines section. Enter 'DEV-WEEU-SAP01-X00' as the SAP System configuration name and 'S41909SPS03_v0010ms' as the Bill of Materials name.
+Run the pipeline by selecting the `Configuration and SAP installation` pipeline from the **Pipelines** section. Enter `DEV-WEEU-SAP01-X00` as the SAP system configuration name and `S41909SPS03_v0011ms` as the Bill of Materials name.
-Choose the playbooks to execute.
+Choose the playbooks to run.
-You can track the progress in the Azure DevOps Services portal.
+You can track the progress in the Azure DevOps Services portal.
-## Run the Repository update pipeline
+## Run the repository update pipeline
-Run the pipeline by selecting the _Repository updater_ pipeline from the Pipelines section. Enter 'https://github.com/Azure/sap-automation.git' as the Source repository and 'main' as the source branch to update from.
-
-Only choose 'Force the update' if the update fails.
+Run the pipeline by selecting the `Repository updater` pipeline from the **Pipelines** section. Enter `https://github.com/Azure/sap-automation.git` as the source repository and `main` as the source branch to update from.
+Only select **Force the update** if the update fails.
## Run the removal pipeline
-Run the pipeline by selecting the _Deployment removal_ pipeline from the Pipelines section.
-
-### SAP System removal
-
-Enter 'DEV-WEEU-SAP01-X00' as the SAP System configuration name.
+Run the pipeline by selecting the `Deployment removal` pipeline from the **Pipelines** section.
-### SAP Workload Zone removal
+### SAP system removal
-Enter 'DEV-WEEU-SAP01-INFRASTRUCTURE' as the SAP workload zone configuration name.
+Enter `DEV-WEEU-SAP01-X00` as the SAP system configuration name.
-### Control Plane removal
+### SAP workload zone removal
-Enter 'MGMT-WEEU-DEP00-INFRASTRUCTURE' as the Deployer configuration name and 'MGMT-WEEU-SAP_LIBRARY' as the
-SAP Library configuration name.
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Configure Control Plane](configure-control-plane.md)
+Enter `DEV-WEEU-SAP01-INFRASTRUCTURE` as the SAP workload zone configuration name.
+### Control plane removal
+Enter `MGMT-WEEU-DEP00-INFRASTRUCTURE` as the deployer configuration name and enter `MGMT-WEEU-SAP_LIBRARY` as the SAP library configuration name.
+## Next step
+> [!div class="nextstepaction"]
+> [Configure control plane](configure-control-plane.md)
sap Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/get-started.md
Title: Get started with the SAP on Azure deployment automation framework
-description: Quickly get started with the SAP on Azure Deployment Automation Framework. Deploy an example configuration using sample parameter files.
+ Title: Get started with SAP Deployment Automation Framework
+description: Quickly get started with SAP Deployment Automation Framework. Deploy an example configuration by using sample parameter files.
-# Get started with SAP automation framework on Azure
+# Get started with SAP Deployment Automation Framework
-Get started quickly with the [SAP on Azure Deployment Automation Framework](deployment-framework.md).
+Get started quickly with [SAP Deployment Automation Framework](deployment-framework.md).
## Prerequisites
+To get started with SAP Deployment Automation Framework, you need:
- An Azure subscription. If you don't have an Azure subscription, you can [create a free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-- Ability to [download of the SAP software](software.md) in your Azure environment.
+- The ability to [download the SAP software](software.md) in your Azure environment.
- An [Azure CLI](/cli/azure/install-azure-cli) installation on your local computer.
- An [Azure PowerShell](/powershell/azure/install-az-ps#update-the-azure-powershell-module) installation on your local computer.
-- A Service Principal to use for the control plane deployment
-- Ability to create an Azure Devops project if you want to use Azure DevOps for deployment.
+- A service principal to use for the control plane deployment.
+- An ability to create an Azure DevOps project if you want to use Azure DevOps for deployment.
-Some of the prerequisites may already be installed in your deployment environment. Both Cloud Shell and the deployer have Terraform and the Azure CLI installed.
+Some of the prerequisites might already be installed in your deployment environment. Both Azure Cloud Shell and the deployer have Terraform and the Azure CLI installed.
-## Use SAP on Azure Deployment Automation Framework from Azure DevOps Services
+## Use SAP Deployment Automation Framework from Azure DevOps Services
-Using Azure DevOps streamlines the deployment process by providing pipelines that can be executed to perform both the infrastructure deployment and the configuration and SAP installation activities.
-You can use Azure Repos to store your configuration files and Azure Pipelines to deploy and configure the infrastructure and the SAP application.
+Using Azure DevOps streamlines the deployment process. Azure DevOps provides pipelines that you can run to perform the infrastructure deployment and the configuration and SAP installation activities.
+
+You can use Azure Repos to store your configuration files. Use Azure Pipelines to deploy and configure the infrastructure and the SAP application.
### Sign up for Azure DevOps Services
-To use Azure DevOps Services, you need an Azure DevOps organization. An organization is used to connect groups of related projects. Use your work or school account to automatically connect your organization to your Azure Active Directory (Azure AD). To create an account, open [Azure DevOps](https://azure.microsoft.com/services/devops/) and either _sign-in_ or create a new account.
+To use Azure DevOps Services, you need an Azure DevOps organization. An organization is used to connect groups of related projects. Use your work or school account to automatically connect your organization to your Azure Active Directory. To create an account, open [Azure DevOps](https://azure.microsoft.com/services/devops/) and either sign in or create a new account.
+
+To configure Azure DevOps for SAP Deployment Automation Framework, see [Configure Azure DevOps for SAP Deployment Automation Framework](configure-devops.md).
-Follow the guidance here [Configure Azure DevOps for SDAF](configure-devops.md) to configure Azure DevOps for the SAP on Azure Deployment Automation Framework.
+## Create the SAP Deployment Automation Framework environment without Azure DevOps
-## Creating the SAP on Azure Deployment Automation Framework environment without Azure DevOps
+You can run SAP Deployment Automation Framework from a virtual machine in Azure. The following steps describe how to create the environment.
-You can run the SAP on Azure Deployment Automation Framework from a virtual machine in Azure. The following steps describe how to create the environment.
+> [!IMPORTANT]
+> Ensure that the virtual machine is using either a system-assigned or user-assigned identity with permissions on the subscription to create resources.
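A minimal sketch of enabling a system-assigned identity and granting it access with the Azure CLI follows; the resource names and IDs are placeholders.

```bash
# Example only: replace the resource group, VM name, principal ID, and subscription ID.
az vm identity assign --resource-group <resource_group> --name <vm_name>
az role assignment create --assignee <principal_id> \
  --role "Contributor" \
  --scope "/subscriptions/<subscription_id>"
```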
-Clone the repository and prepare the execution environment by using the following steps on a Linux Virtual machine in Azure:
+Ensure the virtual machine has the following prerequisites installed:
-Ensure the Virtual Machine has the following prerequisites installed:
- git
- jq
- unzip
-
-Ensure that the virtual machine is using either a system assigned or user assigned identity with permissions on the subscription to create resources.
-
+ - virtualenv (if running on Ubuntu)
-- Create a directory called `Azure_SAP_Automated_Deployment` for your automation framework deployment.
+You can install the prerequisites on an Ubuntu virtual machine by using the following command:
```bash
-mkdir -p ~/Azure_SAP_Automated_Deployment; cd $_
+sudo apt-get install -y git jq unzip virtualenv
-git clone https://github.com/Azure/sap-automation.git sap-automation
+```
-git clone https://github.com/Azure/sap-automation-samples.git samples
+You can then install the deployer components by using the following commands:
-git clone https://github.com/Azure/sap-automation-bootstrap.git config
+```bash
-cd sap-automation/deploy/scripts
-
+wget https://raw.githubusercontent.com/Azure/sap-automation/main/deploy/scripts/configure_deployer.sh -O configure_deployer.sh
+chmod +x ./configure_deployer.sh
./configure_deployer.sh
-```
+# Source the new variables
+. /etc/profile.d/deploy_server.sh
-> [!TIP]
-> The deployer already clones the required repositories.
+```
## Samples
-The ~/Azure_SAP_Automated_Deployment/samples folder contains a set of sample configuration files to start testing the deployment automation framework. You can copy them using the following steps.
-
+The `~/Azure_SAP_Automated_Deployment/samples` folder contains a set of sample configuration files to start testing the deployment automation framework. You can copy them by using the following commands:
```bash
cd ~/Azure_SAP_Automated_Deployment
-cp -Rp samples/Terraform/WORKSPACES config
+cp -Rp samples/Terraform/WORKSPACES ~/Azure_SAP_Automated_Deployment
```
-
## Next step

> [!div class="nextstepaction"]
sap Naming Module https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/naming-module.md
Title: Configure custom naming for the automation framework
-description: Explanation of how to implement custom naming conventions for the SAP on Azure Deployment Automation Framework.
+description: Explanation of how to implement custom naming conventions for SAP Deployment Automation Framework.
-# Overview
+# Configure custom naming for the automation framework
-The [SAP on Azure Deployment Automation Framework](deployment-framework.md) uses a standard naming convention for Azure [resource naming](naming.md).
+[SAP Deployment Automation Framework](deployment-framework.md) uses a standard naming convention for Azure [resource naming](naming.md).
-The Terraform module `sap_namegenerator` defines the names of all resources that the automation framework deploys. The module is located at `/deploy/terraform/terraform-units/modules/sap_namegenerator/` in the repository. The framework also supports providing your own names for some of the resources using the [parameter files](configure-system.md).
+The Terraform module `sap_namegenerator` defines the names of all resources that the automation framework deploys. The module is located at `/deploy/terraform/terraform-units/modules/sap_namegenerator/` in the repository. The framework also supports providing your own names for some of the resources by using the [parameter files](configure-system.md).
The naming of the resources uses the following format: resource prefix + resource_group_prefix + separator + resource name + resource suffix.
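For example, with the resource prefix `DEV-WEEU-SAP01-X01`, a data disk on an application server might be named `DEV-WEEU-SAP01-X01_x01app01l538-data00`: the prefix, the separator (`_`), the resource name (the virtual machine name), and the resource suffix (`-data00`).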
+If these capabilities aren't enough, you can also use custom naming logic by either providing a custom JSON file that contains the resource names or by modifying the naming module used by the automation.
-If these capabilities are not enough, you can also use custom naming logic by either providing a custom json file containing the resource names or by modifying the naming module used by the automation.
+## Provide name overrides by using a JSON file
-## Provide name overrides using a json file
+You can specify a custom naming JSON file in your `tfvars` parameter file by using the `name_override_file` parameter.
-You can specify a custom naming json file in your tfvars parameter file using the 'name_override_file' parameter.
-
-The json file has sections for the different resource types.
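A minimal sketch of the relevant line in the `tfvars` file follows; the JSON file name is illustrative.

```terraform
name_override_file = "DEV-WEEU-SAP01-X00_custom_naming.json"
```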
+The JSON file has sections for the different resource types.
The deployment types are:

-- DEPLOYER (Control Plane)
-- SDU (SAP System Infrastructure)
-- WORKLOAD_ZONE (Workload zone)
+- DEPLOYER (control plane)
+- SDU (SAP system infrastructure)
+- WORKLOAD_ZONE (workload zone)
### Availability set names
-The names for the availability sets are defined in the "availabilityset_names" structure. The example below lists the availability set names for a deployment.
+The names for the availability sets are defined in the `availabilityset_names` structure. The following example lists the availability set names for a deployment.
```json "availabilityset_names" : {
The names for the availability sets are defined in the "availabilityset_names" s
"web": "web-avset" } ```
-### Key Vault names
-The names for the key vaults are defined in the "keyvault_names" structure. The example below lists the key vault names for a deployment in the "DEV" environment in West Europe.
+### Key vault names
+
+The names for the key vaults are defined in the `keyvault_names` structure. The following example lists the key vault names for a deployment in the `DEV` environment in West Europe.
```json "keyvault_names": {
The names for the key vaults are defined in the "keyvault_names" structure. The
}
```
-> [!NOTE]
-> This key vault names need to be unique across Azure, SAP on Azure Deployment Automation Framework appends 3 random characters (ABC in the example) at the end of the key vault name to reduce the likelihood for name conflicts.
+The key vault names need to be unique across Azure. SAP Deployment Automation Framework appends three random characters (ABC in the example) at the end of the key vault name to reduce the likelihood of name conflicts.
-The "private_access" names are currently not used.
+The `private_access` names are currently not used.
-### Storage Account names
+### Storage account names
-The names for the storage accounts are defined in the "storageaccount_names" structure. The example below lists the storage account names for a deployment in the "DEV" environment in West Europe.
+The names for the storage accounts are defined in the `storageaccount_names` structure. The following example lists the storage account names for a deployment in the `DEV` environment in West Europe.
```json "storageaccount_names": {
The names for the storage accounts are defined in the "storageaccount_names" str
}
```
-> [!NOTE]
-> This key vault names need to be unique across Azure, SAP on Azure Deployment Automation Framework appends 3 random characters (abc in the example) at the end of the key vault name to reduce the likelihood for name conflicts.
-### Virtual Machine names
+The storage account names need to be unique across Azure. SAP Deployment Automation Framework appends three random characters (abc in the example) at the end of the storage account name to reduce the likelihood of name conflicts.
+
+### Virtual machine names
-The names for the virtual machines are defined in the "virtualmachine_names" structure. Both the computer and the virtual machine names can be provided.
+The names for the virtual machines are defined in the `virtualmachine_names` structure. Both the computer and the virtual machine names can be provided.
-The example below lists the virtual machine names for a deployment in the "DEV" environment in West Europe. The deployment has a database server, two application servers, a Central Services server and a web dispatcher.
+The following example lists the virtual machine names for a deployment in the `DEV` environment in West Europe. The deployment has a database server, two application servers, a central services server, and a web dispatcher.
```json "virtualmachine_names": {
The example below lists the virtual machine names for a deployment in the "DEV"
}
```
-## Configure custom naming module
+## Configure the custom naming module
There are multiple files within the module for naming resources:

-- Virtual machine (VM) and computer names are defined in (`vm.tf`)
-- Resource group naming is defined in (`resourcegroup.tf`)
-- Key vaults in (`keyvault.tf`)
-- Resource suffixes (`variables_local.tf`)
+- Virtual machine and computer names are defined in `vm.tf`.
+- Resource group naming is defined in `resourcegroup.tf`.
+- Key vaults are defined in `keyvault.tf`.
+- Resource suffixes are defined in `variables_local.tf`.
+
+The different resource names are identified by prefixes in the Terraform code:
-The different resource names are identified by prefixes in the Terraform code.
-- SAP deployer deployments use resource names with the prefix `deployer_`
-- SAP library deployments use resource names with the prefix `library`
-- SAP landscape deployments use resource names with the prefix `vnet_`
-- SAP system deployments use resource names with the prefix `sdu_`
+- SAP deployer deployments use resource names with the prefix `deployer_`.
+- SAP library deployments use resource names with the prefix `library`.
+- SAP landscape deployments use resource names with the prefix `vnet_`.
+- SAP system deployments use resource names with the prefix `sdu_`.
-The calculated names are returned in a data dictionary, which is used by all the terraform modules.
+The calculated names are returned in a data dictionary, which is used by all the Terraform modules.
-## Using custom names
+## Use custom names
-Some of the resource names can be changed by providing parameters in the tfvars parameter file.
+Some of the resource names can be changed by providing parameters in the `tfvars` parameter file.
| Resource | Parameter | Notes |
| - | -- | - |
-| `Prefix` | `custom_prefix` | This is used as prefix for all the resources in the resource group |
+| `Prefix` | `custom_prefix` | Used as prefix for all the resources in the resource group |
| `Resource group` | `resourcegroup_name` | |
| `admin subnet name` | `admin_subnet_name` | |
| `admin nsg name` | `admin_subnet_nsg_name` | |
Some of the resource names can be changed by providing parameters in the tfvars
| `web nsg name` | `web_subnet_nsg_name` | |
| `admin nsg name` | `admin_subnet_nsg_name` | |
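A short sketch of how these parameters appear in the `tfvars` file follows; the values are examples only.

```terraform
# Example values only; the parameter names come from the preceding table.
custom_prefix      = "CONTOSO-WEEU-SAP01-X00"
resourcegroup_name = "CONTOSO-SAP-X00"
admin_subnet_name  = "CONTOSO-admin-subnet"
```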
-## Changing the naming module
+## Change the naming module
-To prepare your Terraform environment for custom naming, you first need to create custom naming module. The easiest way is to copy the existing module and make the required changes in the copied module.
+To prepare your Terraform environment for custom naming, you first need to create a custom naming module. The easiest way is to copy the existing module and make the required changes in the copied module.
-1. Create a root-level folder in your Terraform environment. For example, `Azure_SAP_Automated_Deployment`.
-1. Navigate to your new root-level folder.
+1. Create a root-level folder in your Terraform environment. An example is `Azure_SAP_Automated_Deployment`.
+1. Go to your new root-level folder.
1. Clone the [automation framework repository](https://github.com/Azure/sap-automation). This step creates a new folder `sap-automation`.
1. Create a folder within the root-level folder called `Contoso_naming`.
-1. Navigate to the `sap-automation` folder.
-1. Check out the appropriate branch in git.
-1. Navigate to `\deploy\terraform\terraform-units\modules` within the `sap-automation` folder.
+1. Go to the `sap-automation` folder.
+1. Check out the appropriate branch in Git.
+1. Go to `\deploy\terraform\terraform-units\modules` within the `sap-automation` folder.
1. Copy the folder `sap_namegenerator` to the `Contoso_naming` folder.
-The naming module is called from the root terraform folders:
+The naming module is called from the root `terraform` folders:
```terraform
module "sap_namegenerator" {
For each file, change the source for the module `sap_namegenerator` to point to
## Change resource group naming logic
-To change your resource group's naming logic, navigate to your custom naming module folder (for example, `Workspaces\Contoso_naming`). Then, edit the file `resourcegroup.tf`. Modify the following code with your own naming logic.
+To change your resource group's naming logic, go to your custom naming module folder (for example, `Workspaces\Contoso_naming`). Then, edit the file `resourcegroup.tf`. Modify the following code with your own naming logic.
```terraform
locals {
locals {
## Change resource suffixes
-To change your resource suffixes, navigate to your custom naming module folder (for example, `Workspaces\Contoso_naming`). Then, edit the file `variables_local.tf`. Modify the following map with your own resource suffixes.
+To change your resource suffixes, go to your custom naming module folder (for example, `Workspaces\Contoso_naming`). Then, edit the file `variables_local.tf`. Modify the following map with your own resource suffixes.
> [!NOTE]
-> Only change the map **values**. Don't change the map **key**, which the Terraform code uses.
+> Only change the map *values*. Don't change the map *key*, which the Terraform code uses.
> For example, if you want to rename the administrator network interface component, change `"admin-nic" = "-admin-nic"` to `"admin-nic" = "yourNICname"`.

```terraform
variable resource_suffixes {
}
```
-## Next steps
+## Next step
> [!div class="nextstepaction"]
> [Learn about naming conventions](naming.md)
sap Naming https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/naming.md
Title: Naming standards for the automation framework
-description: Explanation of naming conventions for the SAP on Azure Deployment Automation Framework.
+description: Explanation of naming conventions for SAP Deployment Automation Framework.
-# Naming conventions for SAP automation framework
+# Naming conventions for SAP Deployment Automation Framework
-The [SAP on Azure Deployment Automation Framework](deployment-framework.md) uses standard naming conventions. Consistent naming helps the automation framework run correctly with Terraform. Standard naming helps you deploy the automation framework smoothly. For example, consistent naming helps you to:
+[SAP Deployment Automation Framework](deployment-framework.md) uses standard naming conventions. Consistent naming helps the automation framework run correctly with Terraform. Standard naming helps you deploy the automation framework smoothly. For example, consistent naming helps you to:
- Deploy the SAP virtual network infrastructure into any supported Azure region.
-
-- Do multiple deployments with partitioned virtual networks.
-
-- Deploy the SAP system into any SAP workload zone.
-
-- Run regular and high availability (HA) instances
-
+- Do multiple deployments with partitioned virtual networks.
+- Deploy the SAP system into any SAP workload zone.
+- Run regular and high availability instances.
- Do disaster recovery and fall forward behavior.
-Review the standard terms, area paths, variable names before you begin your deployment. If necessary, you can also [configure custom naming](naming-module.md).
+Review the standard terms, area paths, and variable names before you begin your deployment. If necessary, you can also [configure custom naming](naming-module.md).
## Placeholder values The naming convention's example formats use the following placeholder values.
-| Placeholder | Concept | Character limit | Example |
+| Placeholder | Concept | Character limit | Example |
| -- | - | - | - |
| `{ENVIRONMENT}` | Environment | 5 | `DEV`, `PROTO`, `NP`, `PROD` |
| `{REGION_MAP}` | [Region](#azure-region-names) map | 4 | `weus` for `westus` |
-| `{SAP_VNET}` | SAP virtual network (VNet) | 7 | `SAP0` |
+| `{SAP_VNET}` | SAP virtual network | 7 | `SAP0` |
| `{SID}` | SAP system identifier | 3 | `X01` | | `{PREFIX}` | SAP resource prefix | | `DEV-WEEU-SAP01-X01` |
-| `{DEPLOY_VNET}` | Deployer VNet | 7 | |
-| `{REMOTE_VNET}` | Remote VNet | 7 | |
-| `{LOCAL_VNET}` |Local VNet | 7 | |
+| `{DEPLOY_VNET}` | Deployer virtual network | 7 | |
+| `{REMOTE_VNET}` | Remote virtual network | 7 | |
+| `{LOCAL_VNET}` |Local virtual network | 7 | |
| `{CODENAME}` | Logical name for version | | `version1`, `beta` |
| `{VM_NAME}` | VM name | | |
| `{SUBNET}` | Subnet | | |
For an explanation of the **Format** column, see the [definitions for placeholde
| Concept | Character limit | Format | Example |
| - | - | - | - |
-| Resource Group | 80 | `{ENVIRONMENT}-{REGION_MAP}-{DEPLOY_VNET}-INFRASTRUCTURE` | `MGMT-WEEU-DEP00-INFRASTRUCTURE` |
+| Resource group | 80 | `{ENVIRONMENT}-{REGION_MAP}-{DEPLOY_VNET}-INFRASTRUCTURE` | `MGMT-WEEU-DEP00-INFRASTRUCTURE` |
| Virtual network | 38 (64) | `{ENVIRONMENT}-{REGION_MAP}-{DEPLOY_VNET}-vnet` | `MGMT-WEEU-DEP00-vnet` |
| Subnet | 80 | `{ENVIRONMENT}-{REGION_MAP}-{DEPLOY_VNET}_deployment-subnet` | `MGMT-WEEU-DEP00_deployment-subnet` |
| Storage account | 24 | `{ENVIRONMENT}{REGION_MAP}{SAP_VNET}{DIAG}{RND}` | `mgmtweeudep00diagxxx` |
For an explanation of the **Format** column, see the [definitions for placeholde
| Key vault | 24 | `{ENVIRONMENT}{REGION_MAP}{DEPLOY_VNET}{USER}{RND}` (deployment credentials) | `MGMTWEEUDEP00userxxx` |
| Public IP address | | `{ENVIRONMENT}-{REGION_MAP}-{DEPLOY_VNET}_{COMPUTER_NAME}-pip` | `MGMT-WEEU-DEP00_permweeudep00deploy00-pip` |
-### SAP Library names
+### SAP library names
For an explanation of the **Format** column, see the [definitions for placeholder values](#placeholder-values).
For an explanation of the **Format** column, see the [definitions for placeholde
| Storage account | 24 | `{ENVIRONMENT}{REGION_MAP}saplib(12CHAR){RND}` | `mgmtweeusaplibxxx` |
| Storage account | 24 | `{ENVIRONMENT}{REGION_MAP}tfstate(12CHAR){RND}` | `mgmtweeutfstatexxx` |
-### SAP Workload zone names
+### SAP workload zone names
For an explanation of the **Format** column, see the [definitions for placeholder values](#placeholder-values).
For an explanation of the **Format** column, see the [definitions for placeholde
| NetApp account | | `{ENVIRONMENT}{REGION_MAP}{SAP_VNET}_netapp_account` | `DEV-WEEU-SAP01_netapp_account` |
| NetApp capacity pool | 24 | `{ENVIRONMENT}{REGION_MAP}{SAP_VNET}_netapp_pool` | `DEV-WEEU-SAP01_netapp_pool` |
-### SAP System names
+### SAP system names
For an explanation of the **Format** column, see the [definitions for placeholder values](#placeholder-values).
For an explanation of the **Format** column, see the [definitions for placeholde
| Network security group | 80 | `{PREFIX}_utility-nsg` | `DEV-WEEU-SAP01_X01_dbSubnet-nsg` |
| Network interface component | | `{PREFIX}_{VM_NAME}-{SUBNET}-nic` | `-app-nic`, `-web-nic`, `-admin-nic`, `-db-nic` |
| Computer name (database) | 14 | `{SID}d{DBSID}##{OS flag l/w}{primary/secondary 0/1}{RND}` | `DEV-WEEU-SAP01-X01_x01dxdb00l0xxx` |
-| Computer name (non-database) | 14 | `{SID}{ROLE}##{OS flag l/w}{RND}` | `DEV-WEEU-SAP01-X01_x01app01l538`, `DEV-WEEU-SAP01-X01_x01scs01l538` |
+| Computer name (nondatabase) | 14 | `{SID}{ROLE}##{OS flag l/w}{RND}` | `DEV-WEEU-SAP01-X01_x01app01l538`, `DEV-WEEU-SAP01-X01_x01scs01l538` |
| VM | | `{PREFIX}_{COMPUTER-NAME}` | |
| Disk | | `{PREFIX}_{VM_NAME}-{disk_type}{counter}` | `{VM-NAME}-sap00`, `{VM-NAME}-data00`, `{VM-NAME}-log00`, `{VM-NAME}-backup00` |
| OS disk | | `{PREFIX}_{VM_NAME}-osDisk` | `DEV-WEEU-SAP01-X01_x01scs00lxxx-OsDisk` |
| Azure load balancer (utility)| 80 | `{PREFIX}_db-alb` | `DEV-WEEU-SAP01-X01_db-alb` |
| Load balancer front-end IP address (utility)| | `{PREFIX}_dbAlb-feip` | `DEV-WEEU-SAP01-X01_dbAlb-feip` |
-| Load balancer backend pool (utility)| | `{PREFIX}_dbAlb-bePool` | `DEV-WEEU-SAP01-X01_dbAlb-bePool` |
+| Load balancer back-end pool (utility)| | `{PREFIX}_dbAlb-bePool` | `DEV-WEEU-SAP01-X01_dbAlb-bePool` |
| Load balancer health probe (utility)| | `{PREFIX}_dbAlb-hp` | `DEV-WEEU-SAP01-X01_dbAlb-hp`|
| Key vault (user) | 24 | `{SHORTPREFIX}u{RND}` | `DEVWEEUSAP01uX01xxx` |
| NetApp volume (utility) | 24 | `{PREFIX}-utility` | `DEV-WEEU-SAP01-X01_sapmnt` |
-
-
> [!NOTE]
> Disk numbering starts at zero. The naming convention uses a two-character format; for example, `00`.
You can set the mapping under the variable `_region_mapping` in the name generat
Then, you can use the `_region_mapping` variable elsewhere, such as an area path. The format for an area path is `{ENVIRONMENT}-{REGION_MAP}-{SAP_VNET}-{ARTIFACT}` where:

-- `{ENVIRONMENT}` is the name of the environment, or workload zone.
+- `{ENVIRONMENT}` is the name of the environment or workload zone.
- `{REGION_MAP}` is the short form of the Azure region name.
- `{SAP_VNET}` is the SAP virtual network within the environment.
- `{ARTIFACT}` is the deployment artifact within the virtual network, such as `INFRASTRUCTURE`.
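For example, `DEV-WEEU-SAP01-INFRASTRUCTURE` is the area path for the `INFRASTRUCTURE` artifact in the `SAP01` virtual network of the `DEV` environment in West Europe.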
sap New Vs Existing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/new-vs-existing.md
Title: Configuring the automation framework for new and existing deployments
-description: How to configure the SAP on Azure Deployment Automation Framework for both new and existing scenarios.
+ Title: Configure the automation framework for new and existing deployments
+description: Learn how to configure SAP Deployment Automation Framework for both new and existing scenarios.
-# Configuring for new and existing deployments
+# Configure new and existing deployments
-You can use the [SAP on Azure Deployment Automation Framework](deployment-framework.md) in both new and existing deployment scenarios.
+You can use [SAP Deployment Automation Framework](deployment-framework.md) in both new and existing deployment scenarios.
In new deployment scenarios, the automation framework doesn't use existing Azure infrastructure. The deployment process creates the virtual networks, subnets, key vaults, and more. In existing deployment scenarios, the automation framework uses existing Azure infrastructure. For example, the deployment uses existing virtual networks.
+
## New deployment scenarios
-The following is an example of new deployment scenarios that create new resources.
+The following examples show new deployment scenarios that create new resources.
> [!IMPORTANT]
> Modify all example configurations as necessary for your scenario.
+
### New deployment
-In this scenario, the automation framework creates all Azure components, and uses the [deployer](deployment-framework.md#deployment-components). This example deployment contains:
+In this scenario, the automation framework creates all Azure components and uses the [deployer](deployment-framework.md#deployment-components). This example deployment contains:
-- Two environments the West Europe Azure region
- - Management (`MGMT`) hosts the control plane
- - Development (`DEV`) hosts the development environment
+- Two environments in the West Europe Azure region:
+ - Management (`MGMT`) hosts the control plane.
+ - Development (`DEV`) hosts the development environment.
- A deployer
-- SAP Library
+- SAP library
- SAP system (`SID X00`) with:
- - Two application servers
- - A highly available (HA) central services instance
- - A web dispatcher with a single node HANA backend using SUSE 12 SP5
+ - Two application servers.
+ - A highly available central services instance.
+ - A web dispatcher with a single node HANA back end that uses SUSE 12 SP5.
| Component | Parameter file location |
| - | -- |
In this scenario, the automation framework creates all Azure components, and use
| Workload zone | LANDSCAPE/DEV-WEEU-SAP01-INFRASTRUCTURE/DEV-WEEU-SAP01-INFRASTRUCTURE.tfvars |
| System | SYSTEM/DEV-WEEU-SAP01-X00/DEV-WEEU-SAP01-X00.tfvars |
+To test this scenario:
-To test this scenario:
-
-Clone the [SAP on Azure Deployment Automation Framework](https://github.com/Azure/sap-automation/) repository and copy the sample files to your root folder for parameter files:
+Clone the [SAP Deployment Automation Framework](https://github.com/Azure/sap-automation/) repository and copy the sample files to your root folder for parameter files:
```bash
cd ~/Azure_SAP_Automated_Deployment
mkdir -p WORKSPACES/SYSTEM
cp sap-automation/samples/WORKSPACES/SYSTEM/DEV-WEEU-SAP01-X00 WORKSPACES/SYSTEM/. -r
cd WORKSPACES
```
-
+
Prepare the control plane by installing the deployer and library. Be sure to replace the sample values with your service principal's information.

```bash
New-SAPAutomationRegion -DeployerParameterfile .\DEPLOYER\MGMT-WEEU-DEP01-INFRAS
-Tenant_id $Tenant_id
```
-Deploy the workload zone by running either the Bash or PowerShell script.
+Deploy the workload zone by running either the Bash or PowerShell script.
-Be sure to replace the sample credentials with your service principal's information. You can use the same service principal credentials that you used in the control plane deployment, however for production deployments we recommend using different service principals per workload zone.
+Be sure to replace the sample credentials with your service principal's information. You can use the same service principal credentials that you used in the control plane deployment. For production deployments, we recommend using different service principals per workload zone.
```bash
cd \Azure_SAP_Automated_Deployment\WORKSPACES\SYSTEM\DEV-WEEU-SAP01-X00
New-SAPSystem --parameterfile .\DEV-WEEU-SAP01-X00.tfvars -Type sap_system
```
+
## Existing example scenarios
-The following are example existing scenarios that use existing Azure resources.
+The following examples show existing scenarios that use existing Azure resources.
> [!IMPORTANT]
> Modify all example configurations as necessary for your scenario.
-> Update all the '<arm_resource_id>' placeholders
+> Update all the `<arm_resource_id>` placeholders.
### Existing environment scenario
-In this scenario, the automation framework uses existing Azure components, and uses the [deployer](deployment-framework.md#deployment-components). These existing components include resource groups, storage accounts, virtual networks, subnets, and network security groups. This example deployment contains:
+In this scenario, the automation framework uses existing Azure components and uses the [deployer](deployment-framework.md#deployment-components). These existing components include resource groups, storage accounts, virtual networks, subnets, and network security groups. This example deployment contains:
- Two environments in the East US 2 region
- - Management (`MGMT`) hosts the control plane
- - Quality assurance (`QA`) hosts the SAP QA environment
+ - Management (`MGMT`) hosts the control plane.
+ - Quality assurance (`QA`) hosts the SAP QA environment.
- A deployer
-- The SAP Library
+- The SAP library
- An SAP system (`SID X01`) with:
- - Two application servers
- - An HA central services instance
- - A database using a Microsoft SQL server backend running Windows Server 2016
- - A web dispatcher
+ - Two application servers.
+ - An HA central services instance.
+ - A database that uses a Microsoft SQL Server back end running Windows Server 2016.
+ - A web dispatcher.
| Component | Parameter file location |
| - | -- |
In this scenario, the automation framework uses existing Azure components, and u
| Workload zone | LANDSCAPE/QA-EUS2-SAP03-INFRASTRUCTURE/QA-EUS2-SAP03-INFRASTRUCTURE.tfvars |
| System | SYSTEM/QA-EUS2-SAP03-X01/QA-EUS2-SAP03-X01.tfvars |
-
Copy the sample files to your root folder for parameter files:

```bash
cp sap-automation/samples/WORKSPACES/SYSTEM/QA-EUS2-SAP03-X01 WORKSPACES/SYSTEM/
cd WORKSPACES
```
-The sample tfvars file has placeholders '"<azure_resource_id>"' which you need to replace with the actual Azure resource IDs for resource groups, virtual networks, and subnets.
+The sample `tfvars` file has `<azure_resource_id>` placeholders. You need to replace them with the actual Azure resource IDs for resource groups, virtual networks, and subnets.
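As an illustration of the expected value format, an existing virtual network resource ID looks like the following sketch. The parameter name `network_arm_id` is an assumption here; use the parameter names that already appear in your sample `tfvars` file.

```terraform
# Illustrative only: keep the <...> placeholders until you substitute your own IDs.
network_arm_id = "/subscriptions/<subscription_id>/resourceGroups/<resource_group>/providers/Microsoft.Network/virtualNetworks/<virtual_network>"
```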
-Deploy the control plane installing the deployer and SAP Library. Run either the Bash or PowerShell command. Be sure to replace the sample credentials with your service principal's information.
+Deploy the control plane by installing the deployer and SAP library. Run either the Bash or PowerShell command. Be sure to replace the sample credentials with your service principal's information.
```bash cd ~/Azure_SAP_Automated_Deployment/WORKSPACES
New-SAPAutomationRegion
-Silent ```
-Deploy the workload zone by running either the Bash or PowerShell script.
+Deploy the workload zone by running either the Bash or PowerShell script.
-Be sure to replace the sample credentials with your service principals information. You can use the same service principal credentials that you used in the control plane deployment, however for production deployments we recommend using different service principals per workload zone.
+Be sure to replace the sample credentials with your service principal's information. You can use the same service principal credentials that you used in the control plane deployment. For production deployments, we recommend using different service principals per workload zone.
```bash
New-SAPWorkloadZone --parameterfile .\QA-EUS2-SAP03-INFRASTRUCTURE.tfvars
-Tenant_id $tenant_id ``` - Deploy the SAP system in the QA environment. Run either the Bash or PowerShell command. - ```bash cd ~/Azure_SAP_Automated_Deployment/WORKSPACES/SYSTEM/QA-EUS2-SAP03-X01
cd \Azure_SAP_Automated_Deployment\WORKSPACES\SYSTEM\QA-EUS2-SAP03-X01
New-SAPSystem --parameterfile .\QA-EUS2-SAP03-tfvars.json -Type sap_system ```
-## Next steps
+
+## Next step
> [!div class="nextstepaction"]
-> [Hands-on lab instructions](tutorial.md)
+> [Tutorial: Enterprise scale for SAP Deployment Automation Framework](tutorial.md)
sap Plan Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/plan-deployment.md
Title: Plan your SAP deployment with the automation framework on Azure
-description: Prepare for using the SAP on Azure Deployment Automation Framework. Steps include planning for credentials management, DevOps structure, and deployment scenarios.
+description: Prepare for using SAP Deployment Automation Framework. Steps include planning for credentials management, DevOps structure, and deployment scenarios.
-# Plan your deployment of SAP automation framework
+# Plan your deployment of the SAP automation framework
-There are multiple considerations for planning an SAP deployment and running the [SAP on Azure Deployment Automation Framework](deployment-framework.md), this includes topics like deployment mechanisms, credentials management, virtual network design.
+There are multiple considerations for planning an SAP deployment and running [SAP Deployment Automation Framework](deployment-framework.md) like deployment mechanisms, credentials management, and virtual network design.
-For generic SAP on Azure design considerations, visit [Introduction to an SAP adoption scenario](/azure/cloud-adoption-framework/scenarios/sap)
+For generic SAP on Azure design considerations, see [Introduction to an SAP adoption scenario](/azure/cloud-adoption-framework/scenarios/sap).
> [!NOTE]
-> The Terraform deployment uses Terraform templates provided by Microsoft from the [SAP on Azure Deployment Automation Framework repository](https://github.com/Azure/SAP-automation-samples/tree/main/Terraform/WORKSPACES). The templates use parameter files with your system-specific information to perform the deployment.
+> The Terraform deployment uses Terraform templates provided by Microsoft from the [SAP Deployment Automation Framework repository](https://github.com/Azure/SAP-automation-samples/tree/main/Terraform/WORKSPACES). The templates use parameter files with your system-specific information to perform the deployment.
-## Control Plane planning
+## Control plane planning
-You can perform the deployment and configuration activities from either Azure Pipelines or by using the provided shell scripts directly from Azure hosted Linux virtual machines. This environment is referred to as the control plane. For setting up Azure DevOps for the deployment framework, see [Set up Azure DevOps for SDAF](configure-control-plane.md).
+You can perform the deployment and configuration activities from either Azure Pipelines or by using the provided shell scripts directly from Azure-hosted Linux virtual machines. This environment is referred to as the control plane. For setting up Azure DevOps for the deployment framework, see [Set up Azure DevOps for SAP Deployment Automation Framework](configure-control-plane.md).
Before you design your control plane, consider the following questions: * In which regions do you need to deploy workloads? * Is there a dedicated subscription for the control plane?
-* Is there a dedicated deployment credential (Service Principal) for the control plane?
-* Are you deploying to an existing Virtual Network or creating a new Virtual Network?
-* How is outbound internet provided for the Virtual Machines?
+* Is there a dedicated deployment credential (service principal) for the control plane?
+* Are you deploying to an existing virtual network or creating a new virtual network?
+* How is outbound internet provided for the virtual machines?
* Are you going to deploy Azure Firewall for outbound internet connectivity? * Are private endpoints required for storage accounts and the key vault?
-* Are you going to use an existing Private DNS zone for the Virtual Machines or will you use the Control Plane for it?
-* Are you going to use Azure Bastion for secure remote access to the Virtual Machines?
-* Are you going to use the SDAF Configuration Web Application for performing configuration and deployment activities?
+* Are you going to use an existing private DNS zone for the virtual machines or will you use the control plane for it?
+* Are you going to use Azure Bastion for secure remote access to the virtual machines?
+* Are you going to use the SAP Deployment Automation Framework configuration web application for performing configuration and deployment activities?
-### Control Plane
+### Control plane
If you're supporting multiple workload zones in a region, use a unique identifier for your control plane. Don't use the same identifier as for the workload zone. For example, use `MGMT` for management purposes.
The control plane provides the following
- Deployment VMs, which do Terraform deployments and Ansible configuration. They act as Azure DevOps self-hosted agents. - A key vault, which contains the deployment credentials (service principals) used by Terraform when performing the deployments. - Azure Firewall for providing outbound internet connectivity.-- Azure Bastion for providing secure remote access to the deployed Virtual Machines.-- An SDAF Configuration Azure Web Application for performing configuration and deployment activities.
+- Azure Bastion for providing secure remote access to the deployed virtual machines.
+- An SAP Deployment Automation Framework configuration Azure web application for performing configuration and deployment activities.
-The control plane is defined using two configuration files:
+The control plane is defined by using two configuration files.
The deployment configuration file defines the region, environment name, and virtual network information. For example:
enable_firewall_for_keyvaults_and_storage = true
### DNS considerations
-When planning the DNS configuration for the automation framework, consider the following questions:
+When you plan the DNS configuration for the automation framework, consider the following questions:
-You can integrate with an existing Private DNS Zone by providing the following values in your tfvars files:
 + - Is there an existing private DNS zone that the solution can integrate with, or do you need to use a custom private DNS zone for the deployment environment?
+ - Are you going to use predefined IP addresses for the virtual machines or let Azure assign them dynamically?
+
+You can integrate with an existing private DNS zone by providing the following values in your `tfvars` files:
```tfvars management_dns_subscription_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
management_dns_subscription_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
use_custom_dns_a_registration = false ```
-Without these values, a Private DNS Zone will be created in the SAP Library resource group.
+Without these values, a private DNS zone is created in the SAP library resource group.
For more information, see the [in-depth explanation of how to configure the deployer](configure-control-plane.md).
+## SAP library configuration
-## SAP Library configuration
-
-The SAP library resource group provides storage for SAP installation media, Bill of Material (BOM) files, Terraform state files and optionally the Private DNS Zones. The configuration file defines the region and environment name for the SAP library. For parameter information and examples, see [how to configure the SAP library for automation](configure-control-plane.md).
+The SAP library resource group provides storage for SAP installation media, Bill of Material files, Terraform state files, and, optionally, the private DNS zones. The configuration file defines the region and environment name for the SAP library. For parameter information and examples, see [Configure the SAP library for automation](configure-control-plane.md).
## Workload zone planning
-Most SAP application landscapes are partitioned in different tiers. In SDAF these are called workload zones, for example, you might have different workload zones for development, quality assurance, and production. See [workload zones](deployment-framework.md#deployment-components).
+Most SAP application landscapes are partitioned into different tiers. In SAP Deployment Automation Framework, these tiers are called workload zones. For example, you might have different workload zones for development, quality assurance, and production. For more information, see [Workload zones](deployment-framework.md#deployment-components).
-The default naming convention for workload zones is `[ENVIRONMENT]-[REGIONCODE]-[NETWORK]-INFRASTRUCTURE`, for example, `DEV-WEEU-SAP01-INFRASTRUCTURE` for a development environment hosted in the West Europe region using the SAP01 virtual network or `PRD-WEEU-SAP02-INFRASTRUCTURE` for a production environment hosted in the West Europe region using the SAP02 virtual network.
+The default naming convention for workload zones is `[ENVIRONMENT]-[REGIONCODE]-[NETWORK]-INFRASTRUCTURE`. For example, `DEV-WEEU-SAP01-INFRASTRUCTURE` is for a development environment hosted in the West Europe region by using the SAP01 virtual network. `PRD-WEEU-SAP02-INFRASTRUCTURE` is for a production environment hosted in the West Europe region by using the SAP02 virtual network.
-The `SAP01` and `SAP02` define the logical names for the Azure virtual networks, these can be used to further partition the environments. If you need two Azure Virtual Networks for the same workload zone, for example, for a multi subscription scenario where you host development environments in two subscriptions, you can use the different logical names for each virtual network. For example, `DEV-WEEU-SAP01-INFRASTRUCTURE` and `DEV-WEEU-SAP02-INFRASTRUCTURE`.
+The `SAP01` and `SAP02` designations define the logical names for the Azure virtual networks. They can be used to further partition the environments. Suppose you need two Azure virtual networks for the same workload zone. For example, you might have a multi-subscription scenario where you host development environments in two subscriptions. You can use different logical names for each virtual network. For example, you can use `DEV-WEEU-SAP01-INFRASTRUCTURE` and `DEV-WEEU-SAP02-INFRASTRUCTURE`.
-The workload zone provides the following shared services for the SAP Applications:
+The workload zone provides the following shared services for the SAP applications:
-* Azure Virtual Network, subnets and network security groups.
+* Azure Virtual Network, for virtual networks, subnets, and network security groups.
* Azure Key Vault, for storing the virtual machine and SAP system credentials.
-* Azure Storage accounts, for Boot Diagnostics and Cloud Witness.
-* Shared storage for the SAP Systems either Azure Files or Azure NetApp Files.
+* Azure Storage accounts for boot diagnostics and Cloud Witness.
+* Shared storage for the SAP systems, either Azure Files or Azure NetApp Files.
Before you design your workload zone layout, consider the following questions: * In which regions do you need to deploy workloads?
-* How many workload zones does your scenario require (development, quality assurance, production etc.)?
-* Are you deploying into new Virtual networks or are you using existing virtual networks
-* How is DNS configured (integrate with existing DNS or deploy a Private DNS zone in the control plane)?
-* What storage type do you need for the shared storage (Azure Files NFS, Azure NetApp Files)?
+* How many workload zones does your scenario require (development, quality assurance, and production)?
+* Are you deploying into new virtual networks or are you using existing virtual networks?
+* How is DNS configured (integrated with existing DNS or deployed a private DNS zone in the control plane)?
+* What storage type do you need for the shared storage (Azure Files NFS or Azure NetApp Files)?
-For more information, see [how to configure a workload zone deployment for automation](deploy-workload-zone.md).
+For more information, see [Configure a workload zone deployment for automation](deploy-workload-zone.md).
-### Windows based deployments
+### Windows-based deployments
-When doing Windows based deployments the Virtual Machines in the workload zone's Virtual Network need to be able to communicate with Active Directory in order to join the SAP Virtual Machines to the Active Directory Domain. The provided DNS name needs to be resolvable by the Active Directory.
+When you perform Windows-based deployments, the virtual machines in the workload zone's virtual network need to be able to communicate with Active Directory to join the SAP virtual machines to the Active Directory domain. The provided DNS name needs to be resolvable by Active Directory.
-As SDAF won't create accounts in Active Directory the accounts need to be precreated and stored in the workload zone key vault.
+SAP Deployment Automation Framework doesn't create accounts in Active Directory, so the accounts need to be precreated and stored in the workload zone key vault.
| Credential | Name | Example | | | -- | -- | | Account that can perform domain join activities | [IDENTIFIER]-ad-svc-account | DEV-WEEU-SAP01-ad-svc-account | | Password for the account that performs the domain join | [IDENTIFIER]-ad-svc-account-password | DEV-WEEU-SAP01-ad-svc-account-password |
-| 'sidadm' account password | [IDENTIFIER]-[SID]-win-sidadm_password_id | DEV-WEEU-SAP01-W01-winsidadm_password_id |
+| `sidadm` account password | [IDENTIFIER]-[SID]-win-sidadm_password_id | DEV-WEEU-SAP01-W01-winsidadm_password_id |
| SID Service account password | [IDENTIFIER]-[SID]-svc-sidadm-password | DEV-WEEU-SAP01-W01-svc-sidadm-password | | SQL Server Service account | [IDENTIFIER]-[SID]-sql-svc-account | DEV-WEEU-SAP01-W01-sql-svc-account | | SQL Server Service account password | [IDENTIFIER]-[SID]-sql-svc-password | DEV-WEEU-SAP01-W01-sql-svc-password |
As SDAF won't create accounts in Active Directory the accounts need to be precre
#### DNS settings
-For High Availability scenarios a DNS record is needed in the Active Directory for the SAP Central Services cluster. The DNS record needs to be created in the Active Directory DNS zone. The DNS record name is defined as '[sid]>scs[scs instance number]cl1'. For example, `w01scs00cl1` for the cluster for the 'W01' SID using the instance number '00'.
+For high-availability scenarios, a DNS record is needed in Active Directory for the SAP central services cluster. The DNS record needs to be created in the Active Directory DNS zone. The DNS record name is defined as `[sid]scs[scs instance number]cl1`. For example, `w01scs00cl1` is used for the cluster, with `W01` for the SID and `00` for the instance number.
+ ## Credentials management
-The automation framework uses [Service Principals](#service-principal-creation) for infrastructure deployment. It's recommended to use different deployment credentials (service principals) for each [workload zone](#workload-zone-planning). The framework stores these credentials in the [deployer's](deployment-framework.md#deployment-components) key vault. Then, the framework retrieves these credentials dynamically during the deployment process.
+The automation framework uses [service principals](#service-principal-creation) for infrastructure deployment. We recommend using different deployment credentials (service principals) for each [workload zone](#workload-zone-planning). The framework stores these credentials in the [deployer's](deployment-framework.md#deployment-components) key vault. Then, the framework retrieves these credentials dynamically during the deployment process.
-### SAP and Virtual machine Credentials management
+### SAP and virtual machine credentials management
-The automation framework will use the workload zone key vault for storing both the automation user credentials and the SAP system credentials. The virtual machine credentials are named as follows:
+The automation framework uses the workload zone key vault for storing both the automation user credentials and the SAP system credentials. The following table lists the names of the virtual machine credentials.
| Credential | Name | Example | | - | - | - |
The automation framework will use the workload zone key vault for storing both t
| Public key | [IDENTIFIER]-sshkey-pub | DEV-WEEU-SAP01-sid-sshkey-pub | | Username | [IDENTIFIER]-username | DEV-WEEU-SAP01-sid-username | | Password | [IDENTIFIER]-password | DEV-WEEU-SAP01-sid-password |
-| sidadm Password | [IDENTIFIER]-[SID]-sap-password | DEV-WEEU-SAP01-X00-sap-password |
-| sidadm account password | [IDENTIFIER]-[SID]-winsidadm_password_id | DEV-WEEU-SAP01-W01-winsidadm_password_id |
+| `sidadm` password | [IDENTIFIER]-[SID]-sap-password | DEV-WEEU-SAP01-X00-sap-password |
+| `sidadm` account password | [IDENTIFIER]-[SID]-winsidadm_password_id | DEV-WEEU-SAP01-W01-winsidadm_password_id |
| SID Service account password | [IDENTIFIER]-[SID]-svc-sidadm-password | DEV-WEEU-SAP01-W01-svc-sidadm-password | - ### Service principal creation
-Create your service principal:
+To create your service principal:
+
+1. Sign in to the [Azure CLI](/cli/azure/) with an account that has permissions to create a service principal.
+1. Create a new service principal by running the command `az ad sp create-for-rbac`. Make sure to use a descriptive name for `--name`. For example:
-1. Sign in to the [Azure CLI](/cli/azure/) with an account that has permissions to create a Service Principal.
-1. Create a new Service Principal by running the command `az ad sp create-for-rbac`. Make sure to use a description name for `--name`. For example:
```azurecli az ad sp create-for-rbac --role="Contributor" --scopes="/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" --name="DEV-Deployment-Account" ```+ 1. Note the output. You need the application identifier (`appId`), password (`password`), and tenant identifier (`tenant`) for the next step. For example:+ ```json { "appId": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
Create your service principal:
"tenant": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" } ```
-1. Optionally assign the User Access Administrator role to your service principal. For example:
+
+1. Optionally, assign the User Access Administrator role to your service principal. For example:
+ ```azurecli az role assignment create --assignee <your-application-ID> --role "User Access Administrator" --scope /subscriptions/<your-subscription-ID>/resourceGroups/<your-resource-group-name> ```
-For more information, see [the Azure CLI documentation for creating a service principal](/cli/azure/create-an-azure-service-principal-azure-cli)
+For more information, see the [Azure CLI documentation for creating a service principal](/cli/azure/create-an-azure-service-principal-azure-cli).
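As a hedged illustration of how the framework can retrieve these credentials later, you can store the service principal details in the deployer key vault. The vault name, prefix, and secret names below are assumptions; the framework's **Set SPN secrets** script (see the Bash reference later in this document) is the supported way to do this.
```bash
# Hedged sketch: store the service principal credentials in the deployer
# key vault. Vault name, prefix, and secret names are illustrative; the
# values come from the az ad sp create-for-rbac output above.
key_vault="MGMTWEEUDEP01userXXX"
prefix="DEV-WEEU"

az keyvault secret set --vault-name "${key_vault}" --name "${prefix}-client-id"       --value "${app_id}"
az keyvault secret set --vault-name "${key_vault}" --name "${prefix}-client-secret"   --value "${password}"
az keyvault secret set --vault-name "${key_vault}" --name "${prefix}-tenant-id"       --value "${tenant}"
az keyvault secret set --vault-name "${key_vault}" --name "${prefix}-subscription-id" --value "${subscription_id}"
```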
+ ### Permissions management
-In a locked down environment, you might need to assign another permissions to the service principals. For example, you might need to assign the User Access Administrator role to the service principal.
+In a locked-down environment, you might need to assign more permissions to the service principals. For example, you might need to assign the User Access Administrator role to the service principal.
#### Required permissions
-The following table shows the required permissions for the service principals:
+The following table shows the required permissions for the service principals.
> [!div class="mx-tdCol2BreakAll "] > | Credential | Area | Required permissions | > | -- | -- | - |
-> | Control Plane SPN | Control Plane subscription | Contributor |
+> | Control Plane SPN | Control plane subscription | Contributor |
> | Workload Zone SPN | Target subscription | Contributor | > | Workload Zone SPN | Control plane subscription | Reader |
-> | Workload Zone SPN | Control Plane Virtual Network | Network Contributor |
-> | Workload Zone SPN | SAP Library tfstate storage account | Storage Account Contributor |
-> | Workload Zone SPN | SAP Library sapbits storage account | Reader |
-> | Workload Zone SPN | Private DNS Zone | Private DNS Zone Contributor |
+> | Workload Zone SPN | Control plane virtual network | Network Contributor |
+> | Workload Zone SPN | SAP library `tfstate` storage account | Storage Account Contributor |
+> | Workload Zone SPN | SAP library `sapbits` storage account | Reader |
+> | Workload Zone SPN | Private DNS zone | Private DNS Zone Contributor |
> | Web Application Identity | Target subscription | Reader |
-> | Cluster Virtual Machine Identity | Resource Group | Fencing role |
+> | Cluster Virtual Machine Identity | Resource group | Fencing role |
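The Contributor and Reader assignments from the table can be granted with the Azure CLI. A minimal sketch; the scopes are placeholders that you replace with your own IDs:
```bash
# Hedged sketch: grant the workload zone SPN the roles from the table above.
spn_app_id="xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"

az role assignment create --assignee "${spn_app_id}" --role "Contributor" \
    --scope "/subscriptions/<target-subscription-id>"

az role assignment create --assignee "${spn_app_id}" --role "Reader" \
    --scope "/subscriptions/<control-plane-subscription-id>"

az role assignment create --assignee "${spn_app_id}" --role "Network Contributor" \
    --scope "<control-plane-virtual-network-resource-id>"
```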
### Firewall configuration > [!div class="mx-tdCol2BreakAll "] > | Component | Addresses | Duration | Notes | > | -- | | - | -- |
-> | SDAF | 'github.com/Azure/sap-automation', 'github.com/Azure/sap-automation-samples', 'githubusercontent.com' | Setup of Deployer | |
-> | Terraform | 'releases.hashicorp.com', 'registry.terraform.io', 'checkpoint-api.hashicorp.com' | Setup of Deployer | See [Installing Terraform](https://developer.hashicorp.com/terraform/downloads?product_intent=terraform) |
-> | Azure CLI | Installing [Azure CLI](/cli/azure/install-azure-cli-linux) | Setup of Deployer and during deployments | The firewall requirements for Azure CLI installation are defined here: [Installing Azure CLI](/cli/azure/azure-cli-endpoints) |
-> | PIP | 'bootstrap.pypa.io' | Setup of Deployer | See [Installing Ansible](https://docs.ansible.com/ansible/latest/installation_guide/intro_installation.html) |
-> | Ansible | 'pypi.org', 'pythonhosted.org', 'galaxy.ansible.com' | Setup of Deployer | |
-> | PowerShell Gallery | 'onegetcdn.azureedge.net', 'psg-prod-centralus.azureedge.net', 'psg-prod-eastus.azureedge.net' | Setup of Windows based systems | See [PowerShell Gallery](/powershell/gallery/getting-started#network-access-to-the-powershell-gallery) |
-> | Windows components | 'download.visualstudio.microsoft.com', 'download.visualstudio.microsoft.com', 'download.visualstudio.com' | Setup of Windows based systems | See [Visual Studio components](/visualstudio/install/install-and-use-visual-studio-behind-a-firewall-or-proxy-server#install-visual-studio) |
-> | SAP Downloads | 'softwaredownloads.sap.com'ΓÇéΓÇéΓÇéΓÇéΓÇéΓÇéΓÇéΓÇéΓÇéΓÇéΓÇéΓÇéΓÇéΓÇéΓÇéΓÇéΓÇéΓÇéΓÇéΓÇéΓÇéΓÇéΓÇéΓÇéΓÇéΓÇéΓÇéΓÇéΓÇéΓÇéΓÇéΓÇéΓÇéΓÇéΓÇé | SAP Software download | See [SAP Downloads](https://launchpad.support.sap.com/#/softwarecenter) |
-> | Azure DevOps Agent | 'https://vstsagentpackage.azureedge.net'ΓÇéΓÇéΓÇéΓÇéΓÇéΓÇéΓÇéΓÇéΓÇéΓÇéΓÇéΓÇéΓÇéΓÇéΓÇéΓÇéΓÇéΓÇéΓÇéΓÇéΓÇéΓÇéΓÇéΓÇéΓÇéΓÇéΓÇéΓÇéΓÇéΓÇéΓÇéΓÇéΓÇéΓÇé | Setup Azure DevOps | |
+> | SDAF | `github.com/Azure/sap-automation`, `github.com/Azure/sap-automation-samples`, `githubusercontent.com` | Setup of deployer | |
+> | Terraform | `releases.hashicorp.com`, `registry.terraform.io`, `checkpoint-api.hashicorp.com` | Setup of deployer | See [Installing Terraform](https://developer.hashicorp.com/terraform/downloads?product_intent=terraform). |
+> | Azure CLI | Installing [Azure CLI](/cli/azure/install-azure-cli-linux) | Setup of deployer and during deployments | The firewall requirements for the Azure CLI installation are defined in [Installing Azure CLI](/cli/azure/azure-cli-endpoints). |
+> | PIP | `bootstrap.pypa.io` | Setup of deployer | See [Installing Ansible](https://docs.ansible.com/ansible/latest/installation_guide/intro_installation.html). |
+> | Ansible | `pypi.org`, `pythonhosted.org`, `galaxy.ansible.com` | Setup of deployer | |
+> | PowerShell Gallery | `onegetcdn.azureedge.net`, `psg-prod-centralus.azureedge.net`, `psg-prod-eastus.azureedge.net` | Setup of Windows-based systems | See [PowerShell Gallery](/powershell/gallery/getting-started#network-access-to-the-powershell-gallery). |
+> | Windows components | `download.visualstudio.microsoft.com`, `download.visualstudio.microsoft.com`, `download.visualstudio.com` | Setup of Windows-based systems | See [Visual Studio components](/visualstudio/install/install-and-use-visual-studio-behind-a-firewall-or-proxy-server#install-visual-studio). |
+> | SAP downloads | `softwaredownloads.sap.com` | SAP software download | See [SAP downloads](https://launchpad.support.sap.com/#/softwarecenter). |
+> | Azure DevOps agent | `https://vstsagentpackage.azureedge.net` | Setup of Azure DevOps | |
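Before a deployment, you can spot-check that the deployer can reach the endpoints in the table. A minimal sketch using `curl`; any HTTP status code (even 403) shows that the firewall allows the FQDN, while a timeout suggests that it's blocked:
```bash
# Probe a representative subset of the required FQDNs from the deployer VM.
for fqdn in github.com releases.hashicorp.com registry.terraform.io \
            bootstrap.pypa.io pypi.org galaxy.ansible.com; do
    code=$(curl -s -o /dev/null -m 10 -w '%{http_code}' "https://${fqdn}") || code="timeout"
    echo "${fqdn}: ${code}"
done
```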
## DevOps structure
The deployment framework uses three separate repositories for the deployment art
This repository contains the Terraform parameter files and the files needed for the Ansible playbooks for all the workload zone and system deployments.
-You can create this repository by cloning the [SAP on Azure Deployment Automation Framework bootstrap repository](https://github.com/Azure/sap-automation-bootstrap/) into your source control repository.
+You can create this repository by cloning the [SAP Deployment Automation Framework bootstrap repository](https://github.com/Azure/sap-automation-bootstrap/) into your source control repository.
> [!IMPORTANT] > This repository must be the default repository for your Azure DevOps project.
The following sample folder hierarchy shows how to structure your configuration
| None (root level) | Configuration files, template files | The root folder for all systems that you're managing from this deployment environment. | | CONFIGURATION | Shared configuration files | A shared folder for referring to custom configuration files from multiple places. For example, custom disk sizing configuration files. | | DEPLOYER | Configuration files for the deployer | A folder with [deployer configuration files](configure-control-plane.md) for all deployments that the environment manages. Name each subfolder by the naming convention of **Environment - Region - Virtual Network**. For example, **PROD-WEEU-DEP00-INFRASTRUCTURE**. |
-| LIBRARY | Configuration files for SAP Library | A folder with [SAP Library configuration files](configure-control-plane.md) for all deployments that the environment manages. Name each subfolder by the naming convention of **Environment - Region - Virtual Network**. For example, **PROD-WEEU-SAP-LIBRARY**. |
+| LIBRARY | Configuration files for SAP library | A folder with [SAP library configuration files](configure-control-plane.md) for all deployments that the environment manages. Name each subfolder by the naming convention of **Environment - Region - Virtual Network**. For example, **PROD-WEEU-SAP-LIBRARY**. |
| LANDSCAPE | Configuration files for landscape deployments | A folder with [configuration files for all workload zones](deploy-workload-zone.md) that the environment manages. Name each subfolder by the naming convention **Environment - Region - Virtual Network**. For example, **PROD-WEEU-SAP00-INFRASTRUCTURE**. |
-| SYSTEM | Configuration files for the SAP systems | A folder with [configuration files for all SAP System Identification (SID) deployments](configure-system.md) that the environment manages. Name each subfolder by the naming convention **Environment - Region - Virtual Network - SID**. for example, **PROD-WEEU-SAPO00-ABC**. |
+| SYSTEM | Configuration files for the SAP systems | A folder with [configuration files for all SAP System Identification (SID) deployments](configure-system.md) that the environment manages. Name each subfolder by the naming convention **Environment - Region - Virtual Network - SID**. For example, **PROD-WEEU-SAP00-ABC**. |
-> [!IMPORTANT]
-> Your parameter file's name becomes the name of the Terraform state file. Make sure to use a unique parameter file name for this reason.
+Your parameter file's name becomes the name of the Terraform state file. Make sure to use a unique parameter file name for this reason.
### Code repository
-This repository contains the Terraform automation templates and the Ansible playbooks as well as the deployment pipelines and scripts. For most use cases, consider this repository as read-only and don't modify it.
+This repository contains the Terraform automation templates, the Ansible playbooks, and the deployment pipelines and scripts. For most use cases, consider this repository as read-only and don't modify it.
-You can create this repository by cloning the [SAP on Azure Deployment Automation Framework repository](https://github.com/Azure/sap-automation/) into your source control repository.
+To create this repository, clone the [SAP Deployment Automation Framework repository](https://github.com/Azure/sap-automation/) into your source control repository.
-> [!IMPORTANT]
-> This repository should be named 'sap-automation'.
+Name this repository `sap-automation`.
### Sample repository This repository contains the sample Bill of Materials files and the sample Terraform configuration files.
-You can create this repository by cloning the [SAP on Azure Deployment Automation Framework samples repository](https://github.com/Azure/sap-automation-samples/) into your source control repository.
-
-> [!IMPORTANT]
-> This repository should be named 'samples'.
+To create this repository, clone the [SAP Deployment Automation Framework samples repository](https://github.com/Azure/sap-automation-samples/) into your source control repository.
+Name this repository `samples`.
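For example, both repositories can be cloned with the names this document expects. The target directory matches the `~/Azure_SAP_Automated_Deployment` path used elsewhere in this document:
```bash
# Clone the code and sample repositories with the expected names.
cd ~/Azure_SAP_Automated_Deployment
git clone https://github.com/Azure/sap-automation.git sap-automation
git clone https://github.com/Azure/sap-automation-samples.git samples
```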
## Supported deployment scenarios
Before you deploy a solution, it's important to consider which Azure regions to
The automation framework supports deployments into multiple Azure regions. Each region hosts: -- Deployment infrastructure-- SAP Library with state files and installation media-- 1-N workload zones-- 1-N SAP systems in the workload zones
+- The deployment infrastructure.
+- The SAP library with state files and installation media.
+- 1-N workload zones.
+- 1-N SAP systems in the workload zones.
## Deployment environments
The automation framework also supports having the deployment environment and SAP
The deployment environment provides the following -- One or more deployment virtual machines, which perform the infrastructure deployments using Terraform and performs the system configuration and SAP installation using Ansible playbooks.
+- One or more deployment virtual machines, which perform the infrastructure deployments by using Terraform and perform the system configuration and SAP installation by using Ansible playbooks.
- A key vault, which contains service principal identity information for use by Terraform deployments. - An Azure Firewall component, which provides outbound internet connectivity.
For more information, see the [in-depth explanation of how to configure the depl
Most SAP configurations have multiple [workload zones](deployment-framework.md#deployment-components) for different application tiers. For example, you might have different workload zones for development, quality assurance, and production.
-You'll be creating or granting access to the following services in each workload zone:
+You create or grant access to the following services in each workload zone:
-* Azure Virtual Networks, for virtual networks, subnets and network security groups.
-* Azure Key Vault, for system credentials and the deployment Service Principal.
-* Azure Storage accounts, for Boot Diagnostics and Cloud Witness.
-* Shared storage for the SAP Systems either Azure Files or Azure NetApp Files.
+* Azure Virtual Networks, for virtual networks, subnets, and network security groups.
+* Azure Key Vault, for system credentials and the deployment service principal.
+* Azure Storage accounts, for boot diagnostics and Cloud Witness.
+* Shared storage for the SAP systems, either Azure Files or Azure NetApp Files.
Before you design your workload zone layout, consider the following questions:
Before you design your workload zone layout, consider the following questions:
* In which regions do you need to deploy workloads? * What's your [deployment scenario](#supported-deployment-scenarios)?
-For more information, see [how to configure a workload zone deployment for automation](deploy-workload-zone.md).
+For more information, see [Configure a workload zone deployment for automation](deploy-workload-zone.md).
## SAP system setup
The SAP system contains all Azure components required to host the SAP applicatio
Before you configure the SAP system, consider the following questions:
-* What database backend do you want to use?
+* What database back end do you want to use?
* How many database servers do you need? * Does your scenario require high availability? * How many application servers do you need? * How many web dispatchers do you need, if any? * How many central services instances do you need?
-* What size virtual machine (VM) do you need?
-* Which VM image do you want to use? Is the image on Azure Marketplace or custom?
-* Are you deploying to [a new or existing deployment scenario](#supported-deployment-scenarios)?
-* What is your IP allocation strategy? Do you want Azure to set IPs or use custom settings?
+* What size virtual machine do you need?
+* Which virtual machine image do you want to use? Is the image on Azure Marketplace or custom?
+* Are you deploying to a [new or existing deployment scenario](#supported-deployment-scenarios)?
+* What's your IP allocation strategy? Do you want Azure to set IPs or use custom settings?
-For more information, see [how to configure the SAP system for automation](configure-system.md).
+For more information, see [Configure the SAP system for automation](configure-system.md).
## Deployment flow
-When planning a deployment, it's important to consider the overall flow. There are three main steps of an SAP deployment on Azure with the automation framework.
+When you plan a deployment, it's important to consider the overall flow. There are three main steps of an SAP deployment on Azure with the automation framework. A consolidated script sketch follows the list.
-1. Deploy the control plane. This step deploys components to support the SAP automation framework in a specified Azure region. Some parts of this step are:
- 1. Creating the deployment environment
- 1. Creating shared storage for Terraform state files
- 1. Creating shared storage for SAP installation media
+1. Deploy the control plane. This step deploys components to support the SAP automation framework in a specified Azure region. As part of this step, you:
+ 1. Create the deployment environment.
+ 1. Create shared storage for Terraform state files.
+ 1. Create shared storage for SAP installation media.
1. Deploy the workload zone. This step deploys the [workload zone components](#workload-zone-planning), such as the virtual network and key vaults.
-1. Deploy the system. This step includes the [infrastructure for the SAP system](#sap-system-setup) deployment and the SAP configuration [configuration and SAP installation](run-ansible.md).
-
+1. Deploy the system. This step includes the [infrastructure for the SAP system](#sap-system-setup) deployment and the [SAP configuration and SAP installation](run-ansible.md).
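Taken together, the three steps map onto the shell scripts described in the Bash reference later in this document. A hedged sketch; the flag spellings are assumptions, and the parameter-file variables are placeholders:
```bash
scripts=~/Azure_SAP_Automated_Deployment/sap-automation/deploy/scripts

# 1. Control plane: deployment environment, state storage, media storage.
"${scripts}/prepare_region.sh" --deployer_parameter_file "${deployer_tfvars}" \
    --library_parameter_file "${library_tfvars}"

# 2. Workload zone: virtual network, key vaults, shared services.
"${scripts}/install_workloadzone.sh" --parameterfile "${workload_zone_tfvars}"

# 3. SAP system: infrastructure via Terraform; configuration and SAP
#    installation then run through the Ansible playbooks.
"${scripts}/installer.sh" --parameterfile "${system_tfvars}" --type sap_system
```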
## Naming conventions
-The automation framework uses a default naming convention. If you'd like to use a custom naming convention, plan and define your custom names before deployment. For more information, see [how to configure the naming convention](naming-module.md).
+The automation framework uses a default naming convention. If you want to use a custom naming convention, plan and define your custom names before deployment. For more information, see [Configure the naming convention](naming-module.md).
## Disk sizing If you want to [configure custom disk sizes](configure-extra-disks.md), make sure to plan your custom setup before deployment.
-## Next steps
+## Next step
> [!div class="nextstepaction"]
-> [About manual deployments of automation framework](manual-deployment.md)
+> [Manual deployment of the automation framework](manual-deployment.md)
sap Reference Bash https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/reference-bash.md
Title: SAP on Azure Deployment Automation Framework Bash reference | Microsoft Docs
-description: SAP on Azure Deployment Automation Framework Bash reference
+ Title: SAP Deployment Automation Framework Bash reference | Microsoft Docs
+description: Use shell scripts to deploy SAP Deployment Automation Framework components.
Last updated 11/17/2021
-# Using SAP on Azure Deployment Automation Framework shell scripts
+# Use SAP Deployment Automation Framework shell scripts
-You can deploy all [SAP on Azure Deployment Automation Framework](deployment-framework.md) components using shell scripts.
+You can deploy all [SAP Deployment Automation Framework](deployment-framework.md) components by using shell scripts.
-## Control Plane operations
+## Control plane operations
-You can deploy or update the control plane using the [prepare_region](bash/prepare-region.md) shell script.
+You can deploy or update the control plane by using the [prepare_region](bash/prepare-region.md) shell script.
-Remove the control plane using the [remove_region](bash/remove-region.md) shell script.
+Remove the control plane by using the [remove_region](bash/remove-region.md) shell script.
-You can bootstrap the deployer in the control plane using the [install_deployer](bash/install-deployer.md) Shell script.
+You can bootstrap the deployer in the control plane by using the [install_deployer](bash/install-deployer.md) shell script.
-You can bootstrap the SAP Library in the control plane using the [install_library](bash/install-library.md) Shell script.
+You can bootstrap the SAP library in the control plane by using the [install_library](bash/install-library.md) shell script.
-## Workload Zone operations
+## Workload zone operations
-Deploy or update the workload zone using the [`install_workloadzone`](bash/install-workloadzone.md) shell script.
+Deploy or update the workload zone by using the [install_workloadzone](bash/install-workloadzone.md) shell script.
-Remove the workload zone using the [`remover`](bash/remover.md) shell script.
+Remove the workload zone by using the [remover](bash/remover.md) shell script.
+## SAP system operations
-## SAP System operations
-
-Deploy or update the SAP system using the [`installer`](bash/installer.md) shell script.
-
-Remove the SAP system using the [`remover`](bash/remover.md) Shell script.
+Deploy or update the SAP system by using the [installer](bash/installer.md) shell script.
+Remove the SAP system by using the [remover](bash/remover.md) shell script.
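For example, a deploy-then-remove cycle for one SAP system might look like the following sketch. The flag spellings mirror the PowerShell examples earlier in this document but are assumptions; check the scripts in your repository clone:
```bash
# Hedged sketch: deploy, then remove, an SAP system with the Bash scripts.
# Run from the system's configuration folder.
cd ~/Azure_SAP_Automated_Deployment/WORKSPACES/SYSTEM/DEV-WEEU-SAP01-X00

"${SAP_AUTOMATION_REPO_PATH}/deploy/scripts/installer.sh" \
    --parameterfile DEV-WEEU-SAP01-X00.tfvars --type sap_system

"${SAP_AUTOMATION_REPO_PATH}/deploy/scripts/remover.sh" \
    --parameterfile DEV-WEEU-SAP01-X00.tfvars --type sap_system
```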
## Other operations
-Set the deployment credentials using the
-[`Set SPN secrets`](bash/set-secrets.md) Shell script.
+Set the deployment credentials by using the
+[Set SPN secrets](bash/set-secrets.md) shell script.
-Update the Terraform state file using the
-[`Update Terraform state`](bash/advanced-state-management.md) Shell script.
+Update the Terraform state file by using the
+[Update Terraform state](bash/advanced-state-management.md) shell script.
-## Next steps
+## Next step
> [!div class="nextstepaction"]
-> [Deploying the control plane using bash](bash/prepare-region.md)
+> [Deploy the control plane by using Bash](bash/prepare-region.md)
sap Run Ansible https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/run-ansible.md
Title: Run Ansible to configure SAP system
-description: Configure the environment and install SAP using Ansible playbooks with the SAP on Azure Deployment Automation Framework.
+ Title: Run Ansible to configure the SAP system
+description: Configure the environment and install SAP by using Ansible playbooks with SAP Deployment Automation Framework.
-# Get started Ansible configuration
+# Get started with Ansible configuration
-When you use the [SAP on Azure Deployment Automation Framework](deployment-framework.md), you have the option to do an [automated infrastructure deployment](get-started.md), However, you can also do the required operating system configurations and install SAP using Ansible playbooks provided in the repository. These playbooks are located in the automation framework repository in the `/sap-automation/deploy/ansible` folder.
+When you use [SAP Deployment Automation Framework](deployment-framework.md), you can perform an [automated infrastructure deployment](get-started.md). You can also do the required operating system configurations and install SAP by using Ansible playbooks provided in the repository. These playbooks are located in the automation framework repository in the `/sap-automation/deploy/ansible` folder.
| Filename | Description | | | - |
When you use the [SAP on Azure Deployment Automation Framework](deployment-frame
| `playbook_05_02_sap_pas_install.yaml` | SAP primary application server (PAS) installation | | `playbook_05_03_sap_app_install.yaml` | SAP application server installation | | `playbook_05_04_sap_web_install.yaml` | SAP web dispatcher installation |
-| `playbook_04_00_01_hana_hsr.yaml` | SAP HANA HA configuration |
+| `playbook_04_00_01_hana_hsr.yaml` | SAP HANA high-availability configuration |
## Prerequisites
-The Ansible playbooks require the following files `sap-parameters.yaml` and `SID_host.yaml` in the current directory.
+The Ansible playbooks require the `sap-parameters.yaml` and `SID_host.yaml` files in the current directory.
### Configuration files
-The **sap-parameters.yaml** contains information that Ansible uses for configuration of the SAP infrastructure
+The `sap-parameters.yaml` file contains information that Ansible uses for configuration of the SAP infrastructure.
```yaml
scs_high_availability: false
# SCS Instance Number scs_instance_number: "00" # scs_lb_ip is the SCS IP address of the load balancer in
-# from of the SAP Central Services virtual machines
+# front of the SAP Central Services virtual machines
scs_lb_ip: 10.110.32.26 # ERS Instance Number ers_instance_number: "02" # ecs_lb_ip is the ERS IP address of the load balancer in
-# from of the SAP Central Services virtual machines
+# front of the SAP Central Services virtual machines
ers_lb_ip: # sap_sid is the database SID
platform: HANA
# db_high_availability is a boolean flag indicating if the # SAP database servers are deployed using high availability db_high_availability: false
-# db_lb_ip is the IP address of the load balancer in from of the database virtual machines
+# db_lb_ip is the IP address of the load balancer in front of the database virtual machines
db_lb_ip: 10.110.96.13 disks:
disks:
... ```
-The **`X01_hosts.yaml`** is the inventory file Ansible uses for configuration of the SAP infrastructure. 'X01' may differ for your deployments.
+The `X01_hosts.yaml` file is the inventory file that Ansible uses for configuration of the SAP infrastructure. The `X01` label might differ for your deployments.
```yaml X01_DB:
X01_WEB:
## Run a playbook
-Make sure you've [downloaded the SAP software](software.md) to your Azure environment before running this step.
-
-To execute a playbook or multiple playbooks, use the command `ansible-playbook` as follows. The example below runs the Operating System configuration playbook.
+Make sure that you [download the SAP software](software.md) to your Azure environment before you run this step.
+To run a playbook or multiple playbooks, use the following `ansible-playbook` command. This example runs the operating system configuration playbook.
```bash
ansible-playbook "${playbook_options[@]}" ~/Azure_SAP_Automated_Deployment/sap-a
```
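The `playbook_options` array isn't defined in the snippet above. One plausible definition, based on the two prerequisite files described earlier, is shown below; the exact options and the playbook file name are assumptions:
```bash
# Hedged sketch: wire the inventory and parameter files into ansible-playbook.
sap_sid="X01"
playbook_options=(
    --inventory="${sap_sid}_hosts.yaml"      # Ansible inventory (SID_host.yaml)
    --extra-vars="@sap-parameters.yaml"      # SAP infrastructure parameters
    --private-key="${HOME}/.ssh/id_rsa"      # key for the automation user
)

ansible-playbook "${playbook_options[@]}" \
    ~/Azure_SAP_Automated_Deployment/sap-automation/deploy/ansible/playbook_01_os_base_config.yaml
```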
-### Operating System Configuration
+### Operating system configuration
-The Operating System Configuration playbook is used to configure the operating system of the SAP virtual machines. The playbook performs the following tasks:
+The operating system configuration playbook is used to configure the operating system of the SAP virtual machines. The playbook performs the following tasks.
# [Linux](#tab/linux) The following tasks are executed on Linux virtual machines:-- Enables logging for sudo operations+
+- Enables logging for `sudo` operations
- Ensures that the Azure virtual machine agent is configured correctly - Ensures that all the repositories are registered and enabled-- Ensures that all the packaged are installed-- Creates to volume groups and logical volumes
+- Ensures that all the packages are installed
+- Creates volume groups and logical volumes
- Configures the kernel parameters-- Configures routing for additional network interfaces (if required)-- Crates the user accounts and groups-- Configures the banners displayed when logged in
+- Configures routing for more network interfaces (if necessary)
+- Creates the user accounts and groups
+- Configures the banners displayed when signed in
- Configures the services required # [Windows](#tab/windows) -- Ensures that all the components are installed
+- Ensures that all the components are installed:
- StorageDsc - NetworkingDsc - ComputerManagementDsc
The following tasks are executed on Linux virtual machines:
- ServerManager - SecurityPolicyDsc - Visual C++ runtime libraries
- - ODBC Drivers
+ - ODBC drivers
- Configures the swap file size - Initializes the disks - Configures Windows Firewall
The following tasks are executed on Linux virtual machines:
-### SAP Specific Operating System Configuration
+### SAP-specific operating system configuration
-The SAP Specific Operating System Configuration playbook is used to configure the operating system of the SAP virtual machines. The playbook performs the following tasks:
+The SAP-specific operating system configuration playbook is used to configure the operating system of the SAP virtual machines. The playbook performs the following tasks.
# [Linux](#tab/linux) The following tasks are executed on Linux virtual machines:+ - Configures the hosts file-- Ensures that all the SAP specific repositories are registered and enabled-- Ensures that all the SAP specific packaged are installed
+- Ensures that all the SAP-specific repositories are registered and enabled
+- Ensures that all the SAP-specific packages are installed
- Performs the disk mount operations-- Configures the SAP specific services
+- Configures the SAP-specific services
- Implements configurations defined in the relevant SAP Notes # [Windows](#tab/windows) -- Add local groups and permissions
+- Adds local groups and permissions
- Connects to the Windows file shares ### Local software download
-This playbooks downloads the installation media from the control plane to the installation media source. The installation media can be shared out from the Central Services instance or from Azure Files or Azure NetApp Files.
+This playbook downloads the installation media from the control plane to the installation media source. The installation media can be shared out from the central services instance or from Azure Files or Azure NetApp Files.
# [Linux](#tab/linux)
-The following tasks are executed on the Central services instance virtual machine:
-- Download the software from the storage account and make it available for the other virtual machines
+The following tasks are executed on the central services instance virtual machine:
+
+- Download the software from the storage account and make it available for the other virtual machines.
# [Windows](#tab/windows)
-The following tasks are executed on the Central services instance virtual machine:
-- Download the software from the storage account and make it available for the other virtual machines
+The following tasks are executed on the central services instance virtual machine:
+
+- Download the software from the storage account and make it available for the other virtual machines.
sap Software https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/software.md
Title: Download SAP software for automation framework
-description: Download the SAP software to your Azure environment using Ansible playbooks to use the SAP on Azure Deployment Automation Framework.
+ Title: Download SAP software for the automation framework
+description: Download the SAP software to your Azure environment by using Ansible playbooks to use SAP Deployment Automation Framework.
# Download SAP software
-You need a copy of the SAP software before you can use [the SAP on Azure Deployment Automation Framework](deployment-framework.md). [Prepare your Azure environment](#configure-key-vault) so you can put the SAP media in your storage account. Then, [download the SAP software using Ansible playbooks](#download-sap-software).
+You need a copy of the SAP software before you can use [SAP Deployment Automation Framework](deployment-framework.md). [Prepare your Azure environment](#configure-a-key-vault) so that you can put the SAP media in your storage account. Then, [download the SAP software by using Ansible playbooks](#download-sap-software).
## Prerequisites - An Azure subscription. If you don't have an Azure subscription, you can [create a free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). - An SAP user account (SAP-User or S-User account) with software download privileges.
-## Configure key vault
+## Configure a key vault
First, configure your deployer key vault secrets. For this example configuration, the resource group is `DEMO-EUS2-DEP00-INFRASTRUCTURE` or `DEMO-SCUS-DEP00-INFRASTRUCTURE`.
First, configure your deployer key vault secrets. For this example configuration
az keyvault secret set --name "S-Password" --vault-name "${key_vault}" --value "${sap_user_password}"; ```
-1. There are two other secrets which are needed in this step for the storage account `sapbits`, are automatically setup by the automation framework. However its always good to verify whether these are existed in your deployer keyvault or not.
+1. Two other secrets are needed in this step for the `sapbits` storage account. The automation framework sets them up automatically, but it's a good practice to verify that they exist in your deployer key vault.
```text sapbits-access-key
First, configure your deployer key vault secrets. For this example configuration
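You can confirm that the `sapbits` secrets exist with the Azure CLI. A minimal sketch, assuming `${key_vault}` still holds your deployer key vault name:
```bash
# List any secrets whose names contain "sapbits" in the deployer key vault.
az keyvault secret list --vault-name "${key_vault}" \
    --query "[?contains(name, 'sapbits')].name" --output table
```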
## Download SAP software
-Next, [configure your SAP parameters file](#configure-parameters-file) for the download process. Then, [download the SAP software using Ansible playbooks](#download-sap-software).
+Next, [configure your SAP parameters file](#configure-the-parameters-file) for the download process. Then, [download the SAP software by using Ansible playbooks](#download-sap-software).
-### Configure parameters file
+### Configure the parameters file
-Configure the SAP parameters file:
+To configure the SAP parameters file:
-1. Create a new directory called `BOMS`:
+1. Create a new directory called `BOMS`.
```bash mkdir -p ~/Azure_SAP_Automated_Deployment/WORKSPACES/BOMS; cd $_
Configure the SAP parameters file:
1. Change the value of `kv_name` to the name of the deployer key vault.
- 1. (If needed) Change the value of `secret_prefix` to match the prefix in your environment (for example DEV-WEEU-SAP)
-
-### Execute Ansible playbooks
+ 1. (If needed) Change the value of `secret_prefix` to match the prefix in your environment (for example, `DEV-WEEU-SAP`).
+
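If you prefer scripting the edit, the file can be written with a heredoc. A hedged sketch: only `kv_name` and `secret_prefix` are taken from the steps above; the file name and any other keys the downloader expects should come from the sample file in the samples repository.
```bash
# Hedged sketch: write the two values called out in the steps above.
cd ~/Azure_SAP_Automated_Deployment/WORKSPACES/BOMS
cat > sap-parameters.yaml <<EOF
kv_name: MGMTWEEUDEP01userXXX   # deployer key vault name (illustrative)
secret_prefix: DEV-WEEU-SAP     # prefix used for the secrets (illustrative)
EOF
```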
+### Run the Ansible playbooks
-Then, execute the Ansible playbooks. One way you can execute the playbooks is to use the validator test menu:
+You're ready to run the Ansible playbooks. One way you can run the playbooks is to use the validator test menu.
1. Run the download menu script:
Then, execute the Ansible playbooks. One way you can execute the playbooks is to
~/Azure_SAP_Automated_Deployment/sap-automation/deploy/ansible/download_menu.sh ```
-1. Select the playbook to execute. For example:
+1. Select the playbook to run. For example:
```text 1) BoM Downloader
Then, execute the Ansible playbooks. One way you can execute the playbooks is to
Please select playbook: ```
-Another option is to execute the Ansible playbooks using the command `ansible-playbook`.
+Another option is to run the Ansible playbooks by using the `ansible-playbook` command.
```bash ansible-playbook \
ansible-playbook
~/Azure_SAP_Automated_Deployment/sap-automation/deploy/ansible/playbook_bom_downloader.yaml ```
-## Next steps
+## Next step
> [!div class="nextstepaction"]
-> [Deploy the SAP Infrastructure](deploy-system.md)
+> [Deploy the SAP infrastructure](deploy-system.md)
sap Supportability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/supportability.md
Title: Supportability matrix for the SAP on Azure Deployment Automation Framework
-description: Supported platforms, topologies, and capabilities for the SAP on Azure Deployment Automation Framework.
+ Title: Supportability matrix for SAP Deployment Automation Framework
+description: Supported platforms, topologies, and capabilities for SAP Deployment Automation Framework.
-# Supportability matrix for the SAP Automation Framework
+# Supportability matrix for the SAP automation framework
-The [SAP on Azure Deployment Automation Framework](deployment-framework.md) supports deployment of all the supported SAP on Azure topologies.
+[SAP Deployment Automation Framework](deployment-framework.md) supports deployment of all the supported SAP on Azure topologies.
## Supported operating systems
+The automation framework supports the following operating systems.
+ ### Control plane
-The deployer virtual machine of the control plane must be deployed on Linux as the Ansible controllers only work on Linux.
+The deployer virtual machine of the control plane must be deployed on Linux because the Ansible controllers only work on Linux.
-### SAP Infrastructure
+### SAP infrastructure
-The automation framework supports deployment of the SAP on Azure infrastructure both on Linux or Windows virtual machines on x86-64 or x64 hardware.
+The automation framework supports deployment of the SAP on Azure infrastructure on both Linux and Windows virtual machines on x86-64 (x64) hardware.
The framework supports the following operating systems and distributions: - Windows server 64 bit for the x86-64 platform-- SUSE linux 64 bit for the x86-64 platform (12.x and 15.x)
+- SUSE Linux 64 bit for the x86-64 platform (12.x and 15.x)
- Red Hat Linux 64 bit for the x86-64 platform (7.x and 8.x) - Oracle Linux 64 bit for the x86-64 platform The following distributions have been tested with the framework:+ - Red Hat 7.9 - Red Hat 8.2 - Red Hat 8.4
The following distributions have been tested with the framework:
- Windows Server 2019 - Windows Server 2022
-## Supported database backends
+## Supported database back ends
-The framework supports the following database backends:
+The automation framework supports the following database back ends:
- SAP HANA - DB2
The framework supports the following database backends:
- Sybase - Microsoft SQL Server - ## Supported topologies
-By default, the automation framework deploys with database and application tiers. The application tier is split into three more tiers: application, central services, and web dispatchers.
+By default, the automation framework deploys with database and application tiers. The application tier is split into three more tiers: application, central services, and web dispatchers.
You can also deploy the automation framework to a standalone server by specifying a configuration without an application tier. ## Supported deployment topologies
-The automation framework supports both green field and brown field deployments.
+The automation framework supports both green-field and brown-field deployments.
-### Greenfield deployments
-In a green field deployment, the automation framework creates all the required resources.
+### Green-field deployments
+In a green-field deployment, the automation framework creates all the required resources.
-In this scenario, you provide the relevant data (address spaces for networks and subnets) when configuring the environment. See [Configuring the workload zone](configure-workload-zone.md) for more examples.
+In this scenario, you provide the relevant data (address spaces for networks and subnets) when you configure the environment. For more examples, see [Configure the workload zone](configure-workload-zone.md).
-### Brownfield deployments
-In the brownfield deployment existing Azure resources can be used as part of the deployment.
+### Brown-field deployments
+In a brown-field deployment, you can use existing Azure resources as part of the deployment.
-In this scenario, you provide the Azure resource identifiers for the existing resources when configuring the environment. See [Configuring the workload zone](configure-workload-zone.md) for more examples.
+In this scenario, you provide the Azure resource identifiers for the existing resources when you configure the environment. For more examples, see [Configure the workload zone](configure-workload-zone.md).
## Supported Azure features
-The automation framework uses or can use the following Azure services, features, and capabilities.
+The automation framework can use the following Azure services, features, and capabilities:
-- Azure Virtual Machines (VMs)
- - Accelerated Networking
+- Azure Virtual Machines
+ - Accelerated networking
  - Anchor VMs (optional)
  - SSH authentication
  - Username and password authentication
  - SKU configuration
  - Custom images
  - New or existing proximity placement groups
-- Azure Virtual Network (VNet)
+- Azure Virtual Network
  - Deployment in networks peered to your SAP network
  - Customer-specified IP addressing
  - Azure-provided IP addressing
  - New or existing network security groups
  - New or existing virtual networks
  - New or existing subnets
-- Azure Availability Zones
+- Azure availability zones
  - High availability (HA)
- Azure Firewall
- Azure Load Balancer
  - Standard load balancers
- Azure Storage
  - Boot diagnostics storage
- - SAP Installation Media storage
+ - SAP installation media storage
  - Terraform state file storage
  - Cloud Witness storage for HA scenarios
- Azure Key Vault
  - New or existing key vaults
  - Customer-managed keys for disk encryption
-- Azure Application Security Groups (ASG)
+- Azure application security groups
- Azure Files for NFS
-- Azure NetApp Files (ANF)
+- Azure NetApp Files
  - For shared files
  - For database files
-## Unsupported Azure features
-
-At this time the automation framework **doesn't support** the following Azure services, features, or capabilities:
-
## Supported SAP architectures
-The automation framework can be used to deploy the following SAP architectures:
+You can use the automation framework to deploy the following SAP architectures:
- Standalone
- Distributed
-- Distributed (Highly Available)
-
-
-## Next steps
+- Distributed (highly available)
+## Next step
> [!div class="nextstepaction"]
> [Get started with the automation framework](get-started.md)
sap Tools Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/tools-configuration.md
Title: Configuring external tools for the SAP on Azure Deployment Automation Framework
-description: Describes how to configure external tools for using SAP on Azure Deployment Automation Framework.
+ Title: Configure external tools for SAP Deployment Automation Framework
+description: Learn how to configure external tools for using SAP Deployment Automation Framework.
-# Configuring external tools to use with the SAP on Azure Deployment Automation Framework
+# Configure external tools to use with SAP Deployment Automation Framework
-This document describes how to configure external tools to use the SAP on Azure Deployment Automation Framework.
+This article describes how to configure external tools to use SAP Deployment Automation Framework.
-## Configuring Visual Studio Code
+## Configure Visual Studio Code
-### Copy the ssh key from the key vault
+Follow these steps to configure Visual Studio Code.
+
+### Copy the SSH key from the key vault
1. Sign in to the [Azure portal](https://portal.azure.com).
1. Select or search for **Key vaults**.
-1. On the **Key vault** page, find the deployer key vault. The name starts with `MGMT[REGION]DEP00user`. Filter by the **Resource group** or **Location** if necessary.
+1. On the **Key vault** page, find the deployer key vault. The name starts with `MGMT[REGION]DEP00user`. Filter by **Resource group** or **Location**, if necessary.
+
+1. On the **Settings** section in the left pane, select **Secrets**.
-1. Select **Secrets** from the **Settings** section in the left pane.
+1. Find and select the secret that contains **sshkey**. It might look like `MGMT-[REGION]-DEP00-sshkey`.
-1. Find and select the secret containing **sshkey**. It might look like this: `MGMT-[REGION]-DEP00-sshkey`
+1. On the secret's page, select the current version. Copy the **Secret value**.
-1. On the secret's page, select the current version. Then, copy the **Secret value**.
+1. Create a new file in Visual Studio Code and copy in the secret value.
-1. Create a new file in VS Code and copy in the secret value.
-
-1. Save the file where you keep SSH keys. For example, `C:\\Users\\<your-username>\\.ssh\weeu_deployer.ssh`. Make sure that you save the file without an extension.
+1. Save the file where you keep SSH keys. For example, use `C:\\Users\\<your-username>\\.ssh\weeu_deployer.ssh`. Make sure that you save the file without an extension.
-Once you have downloaded the ssh key for the deployer, you can use it to connect to the Deployer virtual machine.
+After you've downloaded the SSH key for the deployer, you can use it to connect to the deployer virtual machine.
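If you prefer to script this step, here's a minimal sketch using the Azure CLI; the vault and secret names are illustrative, following the naming pattern described above:

```bash
# Assumes you're signed in with the Azure CLI (az login).
vault_name="MGMTWEEUDEP00user"         # illustrative deployer key vault name
secret_name="MGMT-WEEU-DEP00-sshkey"   # illustrative secret name

# Download the private key and restrict its permissions so SSH accepts it.
az keyvault secret show \
  --vault-name "$vault_name" \
  --name "$secret_name" \
  --query value --output tsv > ~/.ssh/weeu_deployer.ssh
chmod 600 ~/.ssh/weeu_deployer.ssh
```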
### Get the public IP of the deployer

1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Find the resource group for the Deployer. The name starts with `MGMT-[REGION_CODE]-DEP00` unless you have deployed the control plane using a custom naming convention. The contents of the Deployer resource group should look like the image shown below.
+1. Find the resource group for the deployer. The name starts with `MGMT-[REGION_CODE]-DEP00` unless you've deployed the control plane by using a custom naming convention. The contents of the deployer resource group should look like the following image.
- :::image type="content" source="media/tutorial/deployer-resource-group.png" alt-text="Screenshot of Deployer resources":::
+ :::image type="content" source="media/tutorial/deployer-resource-group.png" alt-text="Screenshot that shows deployer resources":::
-1. Find the public IP for the deployer. The name should end with `-pip`. Filter by the **type** if necessary.
-
-1. Copy the IP Address.
+1. Find the public IP for the deployer. The name should end with `-pip`. Filter by **type**, if necessary.
+1. Copy the IP address.
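The same lookup can be scripted. A hedged sketch with the Azure CLI (the resource group name is illustrative):

```bash
# Print the IP address of any public IP in the deployer resource group whose name ends with -pip.
az network public-ip list \
  --resource-group "MGMT-WEEU-DEP00-INFRASTRUCTURE" \
  --query "[?ends_with(name, '-pip')].ipAddress" --output tsv
```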
### Install the Remote Development extension
-1. Open the Extensions window by selecting View - Extensions or by using the `Ctrl-Shift-X` keyboard shortcut.
-
-1. Ensure that the *Remote Development* extension is installed
+1. Open the **Extensions** window by selecting **View** > **Extensions** or by selecting Ctrl+Shift+X.
+
+1. Ensure that the **Remote Development** extension is installed.
-### Connect to the Deployer
+### Connect to the deployer
-1. Open the Command Palette by selecting View - Command Palette or by using the `Ctrl-Shift-P` keyboard shortcut and type "Connect to host". You can also click on the icon in the lower left corner of VS Code and choose "Connect to host"
+1. Open the command palette by selecting **View** > **Command Palette** or by selecting Ctrl+Shift+P. Enter **Connect to host**. You can also select the icon in the lower-left corner of Visual Studio Code and select **Connect to host**.
-1. Choose "Add New SSH Host"
+1. Select **Add New SSH Host**.
   ```bash
   ssh -i C:\\Users\\<your-username>\\weeu_deployer.ssh azureadm@<IP_Address>
   ```
- > [!NOTE]
- >Change the <IP_Address> to reflect the Deployer IP.
-1. Click Connect, choose Linux when prompted for the target operating system, accept the remaining dialogues (key, trust etc.)
+ > [!NOTE]
+ > Change <IP_Address> to reflect the deployer IP.
-1. When connected choose Open Folder and open the "/Azure_SAP_Automated_Deployment" folder.
+1. Select **Connect**. Select **Linux** when you're prompted for the target operating system, and accept the remaining dialogs (such as key and trust).
+
+1. When connected, select **Open Folder** and open the `/Azure_SAP_Automated_Deployment` folder.
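Behind the scenes, **Add New SSH Host** persists an entry in your SSH configuration file. If you'd rather add it yourself, a sketch under the same assumptions (host alias and paths are illustrative):

```bash
# Append an SSH host entry so 'ssh weeu-deployer' (and Visual Studio Code) can connect directly.
cat >> ~/.ssh/config <<'EOF'
Host weeu-deployer
    HostName <IP_Address>
    User azureadm
    IdentityFile ~/.ssh/weeu_deployer.ssh
EOF
```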
## Next step

> [!div class="nextstepaction"]
-> [Configure SAP Workload Zone](deploy-workload-zone.md)
--
+> [Configure the SAP workload zone](deploy-workload-zone.md)
sap Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/tutorial.md
Make sure you can connect to your deployer VM:
1. Save the file. If you're prompted to **Save as type**, select **All files** if **SSH** isn't an option. For example, use `deployer.ssh`.
-1. Connect to the deployer VM through any SSH client such as VSCode. Use the public IP address you noted earlier, and the SSH key you downloaded. For instructions on how to connect to the Deployer using VSCode see [Connecting to Deployer using VSCode](tools-configuration.md#configuring-visual-studio-code). If you're using PuTTY, convert the SSH key file first using PuTTYGen.
+1. Connect to the deployer VM through any SSH client, such as VS Code. Use the public IP address you noted earlier and the SSH key you downloaded. For instructions on how to connect to the deployer by using VS Code, see [Connect to the deployer by using VS Code](tools-configuration.md#configure-visual-studio-code). If you're using PuTTY, convert the SSH key file first by using PuTTYGen.
> [!NOTE]
> The default username is *azureadm*.
sap Compliance Bcdr Reliabilty https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/center-sap-solutions/compliance-bcdr-reliabilty.md
-+ Last updated 05/15/2023
sap Compliance Cedr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/center-sap-solutions/compliance-cedr.md
+ Last updated 05/15/2023

# Customer enabled disaster recovery in *Azure Center for SAP solutions*
sap Deploy S4hana https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/center-sap-solutions/deploy-s4hana.md
Last updated 02/22/2023--++ #Customer intent: As a developer, I want to deploy S/4HANA infrastructure using Azure Center for SAP solutions so that I can manage SAP workloads in the Azure portal.
sap Get Quality Checks Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/center-sap-solutions/get-quality-checks-insights.md
Last updated 10/19/2022--++ #Customer intent: As a developer, I want to use the quality checks feature so that I can learn more insights about virtual machines within my Virtual Instance for SAP resource.
sap Get Sap Installation Media https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/center-sap-solutions/get-sap-installation-media.md
Last updated 04/06/2023--++ #Customer intent: As a developer, I want to download the necessary SAP media for installing the SAP software and upload it for us with Azure Center for SAP solutions.
Next, set up a virtual machine (VM) where you will download the SAP components l
Next, download the SAP installation media to the VM using a script.
-1. Run the Ansible script **playbook_bom_download** with your own information. Enter the actual values **within** double quotes but **without** the triangular brackets. The Ansible command that you run should look like:
+1. Run the Ansible script **playbook_bom_download** with your own information. With the exception of the `s_password` variable, enter the actual values **within** double quotes but **without** the angle brackets. For the `s_password` variable, use single quotes. The Ansible command that you run should look like:
```bash
export bom_base_name="<Enter bom base name>"
export s_user="<s-user>"
- export s_password="<password>"
+ export s_password='<password>'
export storage_account_access_key="<storageAccountAccessKey>"
export sapbits_location_base_path="<containerBasePath>"
export BOM_directory="<BOM_directory_path>"
```
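The single quotes around `s_password` matter because S-user passwords often contain characters that the shell would otherwise expand. A small illustration (the password value is made up):

```bash
export s_password='P@ss$word!'   # single quotes keep $word and ! literal
echo "$s_password"               # prints P@ss$word! unchanged
```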
sap Install Software https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/center-sap-solutions/install-software.md
Last updated 02/03/2023--++ #Customer intent: As a developer, I want to install SAP software so that I can use Azure Center for SAP solutions.
sap Manage Virtual Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/center-sap-solutions/manage-virtual-instance.md
Last updated 02/03/2023--++ #Customer intent: As a SAP Basis Admin, I want to view and manage my SAP systems using Virtual Instance for SAP solutions resource where I can find SAP system properties.
sap Monitor Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/center-sap-solutions/monitor-portal.md
Last updated 10/19/2022--++ #Customer intent: As a developer, I want to set up monitoring for my Virtual Instance for SAP solutions, so that I can monitor the health and status of my SAP system in Azure Center for SAP solutions.
sap Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/center-sap-solutions/overview.md
Last updated 10/19/2022--++ #Customer intent: As a developer, I want to learn about Azure Center for SAP solutions so that I can decide to use the service with a new or existing SAP system.
sap Prepare Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/center-sap-solutions/prepare-network.md
Last updated 10/19/2022--++ #Customer intent: As a developer, I want to create a virtual network so that I can deploy S/4HANA infrastructure in Azure Center for SAP solutions.
sap Register Existing System https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/center-sap-solutions/register-existing-system.md
Last updated 02/03/2023--++ #Customer intent: As a developer, I want to register my existing SAP system so that I can use the system with Azure Center for SAP solutions.
sap Start Stop Sap Systems https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/center-sap-solutions/start-stop-sap-systems.md
Last updated 10/19/2022--++ #Customer intent: As a developer, I want to start and stop SAP systems in Azure Center for SAP solutions so that I can control instances through the Virtual Instance for SAP resource.
sap Large Instance High Availability Rhel https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/large-instances/large-instance-high-availability-rhel.md
Last updated 04/19/2021
# Azure Large Instances high availability for SAP on RHEL

> [!NOTE]
-> This article contains references to the terms *blacklist* and *slave*, terms that Microsoft no longer uses. When the term is removed from the software, weΓÇÖll remove it from this article.
+> This article contains references to terms that Microsoft no longer uses. When the terms are removed from the software, we'll remove them from this article.
In this article, you learn how to configure the Pacemaker cluster in RHEL 7 to automate an SAP HANA database failover. You need to have a good understanding of Linux, SAP HANA, and Pacemaker to complete the steps in this guide.
sap Os Upgrade Hana Large Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/large-instances/os-upgrade-hana-large-instance.md
This article describes the details of operating system (OS) upgrades on HANA Large Instances (HLI), otherwise known as BareMetal Infrastructure.

> [!NOTE]
-> This article contains references to the terms *blacklist* and *slave*, terms that Microsoft no longer uses. When the term is removed from the software, weΓÇÖll remove it from this article.
+> This article contains references to terms that Microsoft no longer uses. When the terms are removed from the software, we'll remove them from this article.
> [!NOTE]
> Upgrading the OS is your responsibility. Microsoft operations support can guide you in key areas of the upgrade, but consult your operating system vendor as well when planning an upgrade.
sap Provider Netweaver https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/monitor/provider-netweaver.md
to unprotect the web-methods in the SAP Windows virtual machine.
RFC metrics are only supported for **AS ABAP applications** and don't apply to SAP Java systems. This step is **mandatory** when the selected connection type is **SOAP+RFC**. Perform the following steps as a prerequisite to enable RFC:
-1. **Create or upload role** in the SAP NW ABAP system. Azure Monitor for SAP solutions requires this role to connect to SAP. The role uses the least privileged access. Download and unzip [Z_AMS_NETWEAVER_MONITORING.zip](https://github.com/hsridharan/azure-docs-pr/files/12114525/Z_AMS_NETWEAVER_MONITORING.zip)
+1. **Create or upload role** in the SAP NW ABAP system. Azure Monitor for SAP solutions requires this role to connect to SAP. The role uses the least privileged access. Download and unzip [Z_AMS_NETWEAVER_MONITORING.zip](https://github.com/MicrosoftDocs/azure-docs-pr/files/12528831/Z_AMS_NETWEAVER_MONITORING.zip).
+
1. Sign in to your SAP system.
1. Use the transaction code **PFCG** > select **Role Upload** in the menu.
sap Cal S4h https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/cal-s4h.md
The online library is continuously updated with Appliances for demo, proof of co
| Appliance Template | Date | Description | Creation Link |
| - | - | -- | - |
+| [**SAP S/4HANA 2022 FPS02, Fully-Activated Appliance**](https://cal.sap.com/catalog?provider=208b780d-282b-40ca-9590-5dd5ad1e52e8#/applianceTemplates/983008db-db92-4d4d-ac79-7e2afa95a2e0)| July 16 2023 |This appliance contains SAP S/4HANA 2022 (FPS02) with pre-activated SAP Best Practices for SAP S/4HANA core functions, and further scenarios for Service, Master Data Governance (MDG), Portfolio Mgmt. (PPM), Human Capital Management (HCM), Analytics, and more. User access happens via SAP Fiori, SAP GUI, SAP HANA Studio, Windows remote desktop, or the backend operating system for full administrative access. | [Create Appliance](https://cal.sap.com/registration?sguid=983008db-db92-4d4d-ac79-7e2afa95a2e0&provider=208b780d-282b-40ca-9590-5dd5ad1e52e8)
| [**SAP S/4HANA 2022 FPS01, Fully-Activated Appliance**](https://cal.sap.com/catalog?provider=208b780d-282b-40ca-9590-5dd5ad1e52e8#/applianceTemplates/3722f683-42af-4059-90db-4e6a52dc9f54) | April 20 2023 |This appliance contains SAP S/4HANA 2022 (FPS01) with pre-activated SAP Best Practices for SAP S/4HANA core functions, and further scenarios for Service, Master Data Governance (MDG), Portfolio Mgmt. (PPM), Human Capital Management (HCM), Analytics, and more. User access happens via SAP Fiori, SAP GUI, SAP HANA Studio, Windows remote desktop, or the backend operating system for full administrative access. | [Create Appliance](https://cal.sap.com/registration?sguid=3722f683-42af-4059-90db-4e6a52dc9f54&provider=208b780d-282b-40ca-9590-5dd5ad1e52e8) |
-| [**SAP S/4HANA 2022, Fully-Activated Appliance**]( https://cal.sap.com/catalog?provider=208b780d-282b-40ca-9590-5dd5ad1e52e8#/applianceTemplates/f4e6b3ba-ba8f-485f-813f-be27ed5c8311) | December 15 2022 |This appliance contains SAP S/4HANA 2022 (SP00) with pre-activated SAP Best Practices for SAP S/4HANA core functions, and further scenarios for Service, Master Data Governance (MDG), Portfolio Mgmt. (PPM), Human Capital Management (HCM), Analytics, and more. User access happens via SAP Fiori, SAP GUI, SAP HANA Studio, Windows remote desktop, or the backend operating system for full administrative access. | [Create Appliance](https://cal.sap.com/registration?sguid=f4e6b3ba-ba8f-485f-813f-be27ed5c8311&provider=208b780d-282b-40ca-9590-5dd5ad1e52e8)
+| [**SAP S/4HANA 2021 FPS01, Fully-Activated Appliance**](https://cal.sap.com/catalog?provider=208b780d-282b-40ca-9590-5dd5ad1e52e8#/applianceTemplates/a954cc12-da16-4caa-897e-cf84bc74cf15)| April 26 2022 |This appliance contains SAP S/4HANA 2021 (FPS01) with pre-activated SAP Best Practices for SAP S/4HANA core functions, and further scenarios for Service, Master Data Governance (MDG), Portfolio Mgmt. (PPM), Human Capital Management (HCM), Analytics, Migration Cockpit, and more. User access happens via SAP Fiori, SAP GUI, SAP HANA Studio, Windows remote desktop, or the backend operating system for full administrative access. |[Create Appliance](https://cal.sap.com/registration?sguid=a954cc12-da16-4caa-897e-cf84bc74cf15&provider=208b780d-282b-40ca-9590-5dd5ad1e52e8) |
| [**SAP BW/4HANA 2021 SP04 Developer Edition**](https://cal.sap.com/catalog?provider=208b780d-282b-40ca-9590-5dd5ad1e52e8#/applianceTemplates/1b0ac659-a5b4-4d3b-b1ae-f1a1cb89c6db)| March 23 2023 | This solution offers you an insight of SAP BW/4HANA2021 SP04. SAP BW/4HANA is the next generation Data Warehouse optimized for SAP HANA. Beside the basic BW/4HANA options the solution offers a bunch of SAP HANA optimized BW/4HANA Content and the next step of Hybrid Scenarios with SAP Data Warehouse Cloud. | [Create Appliance](https://cal.sap.com/registration?sguid=1b0ac659-a5b4-4d3b-b1ae-f1a1cb89c6db&provider=208b780d-282b-40ca-9590-5dd5ad1e52e8) |
| [**SAP ABAP Platform 1909, Developer Edition**](https://cal.sap.com/catalog?provider=208b780d-282b-40ca-9590-5dd5ad1e52e8#/applianceTemplates/5a830213-f0cb-423e-ab5f-f7736e57f5a1)| May 10 2023 | The SAP ABAP Platform on SAP HANA gives you access to your own copy of SAP ABAP Platform 1909 Developer Edition on SAP HANA. Note that this solution is preconfigured with many additional elements, including: SAP ABAP RESTful Application Programming Model, SAP Fiori launchpad, SAP gCTS, SAP ABAP Test Cockpit, and preconfigured frontend / backend connections, etc. It also includes all the standard ABAP AS infrastructure: Transaction Management, database operations / persistence, Change and Transport System, SAP Gateway, interoperability with ABAP Development Toolkit and SAP WebIDE, and much more. | [Create Appliance](https://cal.sap.com/registration?sguid=5a830213-f0cb-423e-ab5f-f7736e57f5a1&provider=208b780d-282b-40ca-9590-5dd5ad1e52e8) |
-| [**SAP Focused Run 4.0 FP01, unconfigured**](https://cal.sap.com/catalog?provider=208b780d-282b-40ca-9590-5dd5ad1e52e8#/applianceTemplates/2afd7a3e-ecf4-4a20-a975-ce05c4360e55) | June 29 2023 | SAP Focused Run is designed specifically for businesses that need high-volume system and application monitoring, alerting, and analytics. It's a powerful solution for service providers, who want to host all their customers in one central, scalable, safe, and automated environment. It also addresses customers with advanced needs regarding system management, user monitoring, integration monitoring, and configuration and security analytics.| [Create Appliance](https://cal.sap.com/registration?sguid=2afd7a3e-ecf4-4a20-a975-ce05c4360e55&provider=208b780d-282b-40ca-9590-5dd5ad1e52e8) |
| [**SAP NetWeaver 7.5 SP15 on SAP ASE**](https://cal.sap.com/catalog?provider=208b780d-282b-40ca-9590-5dd5ad1e52e8#/solutions/69efd5d1-04de-42d8-a279-813b7a54c1f6) | January 3 2018 | SAP NetWeaver 7.5 SP15 on SAP ASE | [Create Appliance](https://cal.sap.com/registration?sguid=69efd5d1-04de-42d8-a279-813b7a54c1f6&provider=208b780d-282b-40ca-9590-5dd5ad1e52e8) |
The following links highlight the Product stacks that you can quickly deploy on
| All products | Link |
| -- | :- |
+| **SAP S/4HANA 2022 FPS02 for Productive Deployments** | [Deploy System](https://cal.sap.com/registration?sguid=c86d7a56-4130-4459-8060-ffad1a1118ce&provider=208b780d-282b-40ca-9590-5dd5ad1e52e8&provType=newInstallation) |
+|This solution comes as a standard S/4HANA system installation including High Availability capabilities to ensure higher system uptime for productive usage. The system parameters can be customized during initial provisioning according to the requirements for the target system. | [Details]( https://cal.sap.com/catalog?provider=208b780d-282b-40ca-9590-5dd5ad1e52e8#/products/c86d7a56-4130-4459-8060-ffad1a1118ce) |
+| **SAP S/4HANA 2022 FPS01 for Productive Deployments** | [Deploy System](https://cal.sap.com/registration?sguid=1294f31c-2697-443c-bacc-117d5924fcb2&provider=208b780d-282b-40ca-9590-5dd5ad1e52e8&provType=newInstallation) |
+|This solution comes as a standard S/4HANA system installation including High Availability capabilities to ensure higher system uptime for productive usage. The system parameters can be customized during initial provisioning according to the requirements for the target system. | [Details]( https://cal.sap.com/catalog?provider=208b780d-282b-40ca-9590-5dd5ad1e52e8#/products/1294f31c-2697-443c-bacc-117d5924fcb2) |
| **SAP S/4HANA 2022 FPS00 for Productive Deployments** | [Deploy System](https://cal.sap.com/registration?sguid=3b1dc287-c865-4f79-b9ed-d5ec2dc755e9&provider=208b780d-282b-40ca-9590-5dd5ad1e52e8&provType=pd) |
|This solution comes as a standard S/4HANA system installation including High Availability capabilities to ensure higher system uptime for productive usage. The system parameters can be customized during initial provisioning according to the requirements for the target system. | [Details]( https://cal.sap.com/catalog?provider=208b780d-282b-40ca-9590-5dd5ad1e52e8#/products/3b1dc287-c865-4f79-b9ed-d5ec2dc755e9) |
| **SAP S/4HANA 2021 FPS04 for Productive Deployments** | [Deploy System](https://cal.sap.com/registration?sguid=29403c63-6504-4919-b5dd-319d7a99804e&provider=208b780d-282b-40ca-9590-5dd5ad1e52e8&provType=newInstallation) |
The following links highlight the Product stacks that you can quickly deploy on
|This solution comes as a standard S/4HANA system installation including High Availability capabilities to ensure higher system uptime for productive usage. The system parameters can be customized during initial provisioning according to the requirements for the target system. | [Details](https://cal.sap.com/catalog?provider=208b780d-282b-40ca-9590-5dd5ad1e52e8#/products/1c796928-0617-490b-a87d-478568a49628)|
| **SAP S/4HANA 2021 FPS00 for Productive Deployments** | [Deploy System](https://cal.sap.com/registration?sguid=108febf9-5e7b-4e47-a64d-231b6c4c821d&provider=208b780d-282b-40ca-9590-5dd5ad1e52e8&provType=pd) |
|This solution comes as a standard S/4HANA system installation including High Availability capabilities to ensure higher system uptime for productive usage. The system parameters can be customized during initial provisioning according to the requirements for the target system. | [Details](https://cal.sap.com/catalog?provider=208b780d-282b-40ca-9590-5dd5ad1e52e8#/products/108febf9-5e7b-4e47-a64d-231b6c4c821d) |
+| **SAP S/4HANA 2020 FPS04 for Productive Deployments**| [Deploy System](https://cal.sap.com/registration?sguid=615c5c18-5226-4dcb-b0ab-19d0141baf9b&provider=208b780d-282b-40ca-9590-5dd5ad1e52e8&provType=newInstallation) |
+|This solution comes as a standard S/4HANA system installation including High Availability capabilities to ensure higher system uptime for productive usage. The system parameters can be customized during initial provisioning according to the requirements for the target system. | [Details]( https://cal.sap.com/catalog?provider=208b780d-282b-40ca-9590-5dd5ad1e52e8#/products/615c5c18-5226-4dcb-b0ab-19d0141baf9b) |
+
sap Dbms Guide Ha Ibm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/dbms-guide-ha-ibm.md
IBM Db2 for Linux, UNIX, and Windows (LUW) in [high availability and disaster recovery (HADR) configuration](https://www.ibm.com/support/knowledgecenter/en/SSEPGG_10.5.0/com.ibm.db2.luw.admin.ha.doc/doc/c0011267.html) consists of one node that runs a primary database instance and at least one node that runs a secondary database instance. Changes to the primary database instance are replicated to a secondary database instance synchronously or asynchronously, depending on your configuration.

> [!NOTE]
-> This article contains references to the terms *master* and *slave*, terms that Microsoft no longer uses. When these terms are removed from the software, we'll remove them from this article.
+> This article contains references to terms that Microsoft no longer uses. When these terms are removed from the software, we'll remove them from this article.
This article describes how to deploy and configure the Azure virtual machines (VMs), install the cluster framework, and install the IBM Db2 LUW with HADR configuration.
sap Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/get-started.md
Title: Get started with SAP on Azure VMs | Microsoft Docs
description: Learn about SAP solutions that run on virtual machines (VMs) in Microsoft Azure -
-tags: azure-resource-manager
-keywords: ''
Previously updated : 08/03/2023 Last updated : 08/24/2023
-
# Use Azure to host and run SAP workload scenarios

When you use Microsoft Azure, you can reliably run your mission-critical SAP workloads and scenarios on a scalable, compliant, and enterprise-proven platform. You get the scalability, flexibility, and cost savings of Azure. With the expanded partnership between Microsoft and SAP, you can run SAP applications across development and test and production scenarios in Azure and be fully supported. From SAP NetWeaver to SAP S/4HANA, SAP BI on Linux to Windows, and SAP HANA to SQL Server, Oracle, Db2, etc., we've got you covered.
-Besides hosting SAP NetWeaver and S/4HANA scenarios with the different DBMS on Azure, you can host other SAP workload scenarios, like SAP BI on Azure. Our partnership with SAP resulted in a variety of integration scenarios with the overall Microsoft ecosystem. Check out the **dedicated [Integration section](./integration-get-started.md)** to learn more.
+Besides hosting SAP NetWeaver and S/4HANA scenarios with the different DBMS on Azure, you can host other SAP workload scenarios, like SAP BI on Azure. Our partnership with SAP resulted in various integration scenarios with the overall Microsoft ecosystem. Check out the **dedicated [Integration section](./integration-get-started.md)** to learn more.
We just announced our new services, Azure Center for SAP solutions and Azure Monitor for SAP solutions 2.0, entering the public preview stage. These services give you the ability to deploy SAP workloads on Azure in a highly automated manner in an optimal architecture and configuration, and to monitor your Azure infrastructure, OS, DBMS, and ABAP stack deployments in a single pane of glass.
In the SAP workload documentation space, you can find the following areas:
## Change Log
+- August 24, 2023: Support of priority-fencing-delay cluster property on two-node pacemaker cluster to address split-brain situation in RHEL is updated on [Setting up Pacemaker on RHEL in Azure](./high-availability-guide-rhel-pacemaker.md), [High availability of SAP HANA on Azure VMs on RHEL](./sap-hana-high-availability-rhel.md), [High availability of SAP HANA Scale-up with ANF on RHEL](./sap-hana-high-availability-netapp-files-red-hat.md), [Azure VMs high availability for SAP NW on RHEL with NFS on Azure Files](./high-availability-guide-rhel-nfs-azure-files.md), and [Azure VMs high availability for SAP NW on RHEL with Azure NetApp Files](./high-availability-guide-rhel-netapp-files.md) documents.
- August 03, 2023: Change of recommendation to use a /25 IP range for delegated subnet for ANF for SAP workload [NFS v4.1 volumes on Azure NetApp Files for SAP HANA](./hana-vm-operations-netapp.md)
- August 03, 2023: Change in support of block storage and NFS on ANF storage for SAP HANA documented in [SAP HANA Azure virtual machine storage configurations](./hana-vm-operations-storage.md)
- July 25, 2023: Adding reference to SAP Note #3074643 to [Azure Virtual Machines Oracle DBMS deployment for SAP workload](./dbms-guide-oracle.md)
-- July 13, 2023: Clarifying dfifferences in zonal replication between NFS on AFS and ANF in table in [Azure Storage types for SAP workload](./planning-guide-storage.md)
-- July 13, 2023: Statement that 512byte and 4096 sector size for Premium SSD v2 do not show any performance difference in [SAP HANA Azure virtual machine Ultra Disk storage configurations](./hana-vm-ultra-disk.md)
+- July 21, 2023: Support of priority-fencing-delay cluster property on two-node pacemaker cluster to address split-brain situation in SLES is updated on [High availability for SAP HANA on Azure VMs on SLES](./sap-hana-high-availability.md), [High availability of SAP HANA Scale-up with ANF on SLES](./sap-hana-high-availability-netapp-files-suse.md), [Azure VMs high availability for SAP NetWeaver on SLES for SAP Applications with simple mount and NFS](./high-availability-guide-suse-nfs-simple-mount.md), [Azure VMs high availability for SAP NW on SLES with NFS on Azure Files](./high-availability-guide-suse-nfs-azure-files.md), [Azure VMs high availability for SAP NW on SLES with Azure NetApp Files](./high-availability-guide-suse-netapp-files.md) document.
+- July 13, 2023: Clarifying differences in zonal replication between NFS on AFS and ANF in table in [Azure Storage types for SAP workload](./planning-guide-storage.md)
+- July 13, 2023: Statement that 512byte and 4096 sector size for Premium SSD v2 don't show any performance difference in [SAP HANA Azure virtual machine Ultra Disk storage configurations](./hana-vm-ultra-disk.md)
- July 13, 2023: Replaced links in ANF section of [Azure Virtual Machines Oracle DBMS deployment for SAP workload](./dbms-guide-oracle.md) to new ANF related documentation
- July 11, 2023: Add a note about Azure NetApp Files application volume group for SAP HANA in [HA for HANA Scale-up with ANF on SLES](sap-hana-high-availability-netapp-files-suse.md), [HANA scale-out with standby node with ANF on SLES](./sap-hana-scale-out-standby-netapp-files-suse.md), [HA for HANA Scale-out HA on SLES](sap-hana-high-availability-scale-out-hsr-suse.md), [HA for HANA scale-up with ANF on RHEL](./sap-hana-high-availability-netapp-files-red-hat.md), [HANA scale-out with standby node on Azure VMs with ANF on RHEL](./sap-hana-scale-out-standby-netapp-files-rhel.md) and [HA for HANA scale-out on RHEL](./sap-hana-high-availability-scale-out-hsr-rhel.md)
- June 29, 2023: Update important considerations and sizing information in [HA for HANA scale-up with ANF on RHEL](./sap-hana-high-availability-netapp-files-red-hat.md), [HANA scale-out with standby node on Azure VMs with ANF on RHEL](./sap-hana-scale-out-standby-netapp-files-rhel.md)
In the SAP workload documentation space, you can find the following areas:
- November 30, 2022: Added storage recommendations for Premium SSD v2 into [SAP ASE Azure Virtual Machines DBMS deployment for SAP workload](./dbms-guide-sapase.md)
- November 22, 2022: Release of Disaster Recovery guidelines for SAP workload on Azure - [Disaster Recovery overview and infrastructure guidelines for SAP workload](disaster-recovery-overview-guide.md) and [Disaster Recovery recommendation for SAP workload](disaster-recovery-sap-guide.md).
- November 22, 2022: Update of [SAP workloads on Azure: planning and deployment checklist](deployment-checklist.md) to add latest recommendations
-- November 18, 2022: Add a recommendation to use Pacemaker simple mount configuration for new implementations on SLES 15 in [Azure VMs HA for SAP NW on SLES with simple mount and NFS](high-availability-guide-suse-nfs-simple-mount.md), [Azure VMs HA for SAP NW on SLES with NFS on Azure File](high-availability-guide-suse-nfs-azure-files.md), [Azure VMs HA for SAP NW on SLES with Azure NetApp Files](high-availability-guide-suse-netapp-files.md) and [Azure VMs HA for SAP NW on SLES](high-availability-guide-suse.md)
+- November 18, 2022: Add a recommendation to use Pacemaker simple mount configuration for new implementations on SLES 15 in [Azure VMs HA for SAP NW on SLES with simple mount and NFS](high-availability-guide-suse-nfs-simple-mount.md), [Azure VMs HA for SAP NW on SLES with NFS on Azure File](high-availability-guide-suse-nfs-azure-files.md), [Azure VMs HA for SAP NW on SLES with Azure NetApp Files](high-availability-guide-suse-netapp-files.md) and [Azure VMs HA for SAP NW on SLES](high-availability-guide-suse.md)
- November 15, 2022: Change in [HA for SAP HANA Scale-up with ANF on SLES](sap-hana-high-availability-netapp-files-suse.md), [SAP HANA scale-out with standby node on Azure VMs with ANF on SLES](./sap-hana-scale-out-standby-netapp-files-suse.md), [HA for SAP HANA scale-up with ANF on RHEL](./sap-hana-high-availability-netapp-files-red-hat.md) and [SAP HANA scale-out with standby node on Azure VMs with ANF on RHEL](./sap-hana-scale-out-standby-netapp-files-rhel.md) to add recommendation to use mount option `nconnect` for workloads with higher throughput requirements - November 15, 2022: Add a recommendation for minimum required version of package resource-agents in [High availability of IBM Db2 LUW on Azure VMs on Red Hat Enterprise Linux Server](./high-availability-guide-rhel-ibm-db2-luw.md) - November 14, 2022: Provided more details about nconnect mount option in [NFS v4.1 volumes on Azure NetApp Files for SAP HANA](./hana-vm-operations-netapp.md)-- November 14, 2022: Change in [HA for SAP HANA scale-up with ANF on RHEL](./sap-hana-high-availability-netapp-files-red-hat.md) and [SAP HANA scale-out HSR with Pacemaker on Azure VMs on RHEL](./sap-hana-high-availability-scale-out-hsr-rhel.md) to update suggested timeouts for `FileSystem` Pacemaker cluster resources
+- November 14, 2022: Change in [HA for SAP HANA scale-up with ANF on RHEL](./sap-hana-high-availability-netapp-files-red-hat.md) and [SAP HANA scale-out HSR with Pacemaker on Azure VMs on RHEL](./sap-hana-high-availability-scale-out-hsr-rhel.md) to update suggested timeouts for `FileSystem` Pacemaker cluster resources
- November 07, 2022: Added HANA hook susChkSrv for scale-up pacemaker cluster in [High availability of SAP HANA on Azure VMs on SLES](sap-hana-high-availability.md), [High availability of SAP HANA Scale-up with ANF on SLES](sap-hana-high-availability-netapp-files-suse.md)
- November 07, 2022: Added monitor operation for azure-lb resource in [High availability of SAP HANA on Azure VMs on SLES](sap-hana-high-availability.md), [SAP HANA scale-out with HSR and Pacemaker on SLES](sap-hana-high-availability-scale-out-hsr-suse.md), [Set up IBM Db2 HADR on Azure virtual machines (VMs)](dbms-guide-ha-ibm.md), [Azure VMs high availability for SAP NetWeaver on SLES for SAP Applications with simple mount and NFS](high-availability-guide-suse-nfs-simple-mount.md), [Azure VMs high availability for SAP NW on SLES with NFS on Azure File](high-availability-guide-suse-nfs-azure-files.md), [Azure VMs high availability for SAP NW on SLES with Azure NetApp Files](high-availability-guide-suse-netapp-files.md), [Azure VMs high availability for SAP NetWeaver on SLES](high-availability-guide-suse.md), [High availability for NFS on Azure VMs on SLES](high-availability-guide-suse-nfs.md), [Azure VMs high availability for SAP NetWeaver on SLES multi-SID guide](high-availability-guide-suse-multi-sid.md)
- October 31, 2022: Change in [HA for NFS on Azure VMs on SLES](./high-availability-guide-suse-nfs.md) to fix script location for DRBD 9.0
In the SAP workload documentation space, you can find the following areas:
- October 27, 2022: Adding Ev4 and Ev5 VM families and updated OS releases to table in [SAP ASE Azure Virtual Machines DBMS deployment for SAP workload](./dbms-guide-sapase.md)
- October 20, 2022: Change in [HA for NFS on Azure VMs on SLES](./high-availability-guide-suse-nfs.md) and [HA for SAP NW on Azure VMs on SLES for SAP applications](./high-availability-guide-suse.md) to indicate that we're de-emphasizing SAP reference architectures, utilizing NFS clusters
- October 18, 2022: Clarify some considerations around using Azure Availability Zones in [SAP workload configurations with Azure Availability Zones](./high-availability-zones.md)
-- October 17, 2022: Change in [HA for SAP HANA on Azure VMs on SLES](./sap-hana-high-availability.md) and [HA for SAP HANA on Azure VMs on RHEL](./sap-hana-high-availability-rhel.md) to add guidance for setting up parameter `AUTOMATED_REGISTER`
+- October 17, 2022: Change in [HA for SAP HANA on Azure VMs on SLES](./sap-hana-high-availability.md) and [HA for SAP HANA on Azure VMs on RHEL](./sap-hana-high-availability-rhel.md) to add guidance for setting up parameter `AUTOMATED_REGISTER`
- September 29, 2022: Announcing HANA Large Instances being in sunset mode in [SAP workload on Azure virtual machine supported scenarios](./planning-supported-configurations.md) and [What is SAP HANA on Azure (Large Instances)?](../../virtual-machines/workloads/sap/hana-overview-architecture.md). Adding some statements around Azure VMware and Azure Active Directory support status in [SAP workload on Azure virtual machine supported scenarios](./planning-supported-configurations.md)
- September 27, 2022: Minor changes in [HA for SAP ASCS/ERS with NFS simple mount](./high-availability-guide-suse-nfs-simple-mount.md) on SLES 15 for SAP Applications to adjust mount instructions
- September 14, 2022: Release of updated SAP on Oracle guide with new and updated content [Azure Virtual Machines Oracle DBMS deployment for SAP workload](./dbms-guide-oracle.md)
In the SAP workload documentation space, you can find the following areas:
- June 17, 2020: Change in [High availability of SAP HANA on Azure VMs on RHEL](./sap-hana-high-availability-rhel.md) to remove meta keyword from HANA resource creation command (RHEL 8.x)
- June 09, 2021: Correct VM SKU names for M192_v2 in [SAP HANA Azure virtual machine storage configurations](./hana-vm-operations-storage.md)
- May 26, 2021: Change in [SAP HANA scale-out HSR with Pacemaker on Azure VMs on SLES](./sap-hana-high-availability-scale-out-hsr-suse.md), [HA for SAP HANA scale-up with ANF on RHEL](./sap-hana-high-availability-netapp-files-red-hat.md) and [SAP HANA scale-out HSR with Pacemaker on Azure VMs on RHEL](./sap-hana-high-availability-scale-out-hsr-rhel.md) to add configuration to prepare the OS for running HANA on ANF
-- May 13, 2021: Change in [Setting up Pacemaker on SLES in Azure](./high-availability-guide-suse-pacemaker.md) to clarify how resource agent azure-events operates
+- May 13, 2021: Change in [Setting up Pacemaker on SLES in Azure](./high-availability-guide-suse-pacemaker.md) to clarify how resource agent azure-events operate
- April 30, 2021: Change in [Setting up Pacemaker on SLES in Azure](./high-availability-guide-suse-pacemaker.md) to include warning about incompatible change with Azure Fence Agent in a version of package python3-azure-mgmt-compute (SLES 15)
- April 27, 2021: Change in [SAP ASCS/SCS instance with WSFC and file share](./sap-high-availability-guide-wsfc-file-share.md) to add links to important SAP notes in the prerequisites section
- April 27, 2021: Added new Msv2, Mdsv2 VMs into HANA storage configuration in [SAP HANA Azure virtual machine storage configurations](./hana-vm-operations-storage.md)
In the SAP workload documentation space, you can find the following areas:
- February 03, 2021: More details on I/O scheduler settings for SUSE in article [SAP HANA Azure virtual machine storage configurations](./hana-vm-operations-storage.md)
- February 01, 2021: Change in [HA for SAP HANA scale-up with ANF on RHEL](./sap-hana-high-availability-netapp-files-red-hat.md), [SAP HANA scale-out HSR with Pacemaker on Azure VMs on RHEL](./sap-hana-high-availability-scale-out-hsr-rhel.md), [SAP HANA scale-out with standby node on Azure VMs with ANF on SLES](./sap-hana-scale-out-standby-netapp-files-suse.md) and [SAP HANA scale-out with standby node on Azure VMs with ANF on RHEL](./sap-hana-scale-out-standby-netapp-files-rhel.md) to add a link to [NFS v4.1 volumes on Azure NetApp Files for SAP HANA](./hana-vm-operations-netapp.md)
- January 23, 2021: Introduce the functionality of HANA data volume partitioning as functionality to stripe I/O operations against HANA data files across different Azure disks or NFS shares without using a disk volume manager in articles [SAP HANA Azure virtual machine storage configurations](./hana-vm-operations-storage.md) and [NFS v4.1 volumes on Azure NetApp Files for SAP HANA](./hana-vm-operations-netapp.md)
-- January18, 2021: Added support of Azure net Apps Files based NFS for Oracle in [Azure Virtual Machines Oracle DBMS deployment for SAP workload](./dbms-guide-oracle.md) and adjusting decimals in table in document [NFS v4.1 volumes on Azure NetApp Files for SAP HANA](./hana-vm-operations-netapp.md)
+- January 18, 2021: Added support of Azure net Apps Files based NFS for Oracle in [Azure Virtual Machines Oracle DBMS deployment for SAP workload](./dbms-guide-oracle.md) and adjusting decimals in table in document [NFS v4.1 volumes on Azure NetApp Files for SAP HANA](./hana-vm-operations-netapp.md)
- January 11, 2021: Minor changes in [HA for SAP NW on Azure VMs on RHEL for SAP applications](./high-availability-guide-rhel.md), [HA for SAP NW on Azure VMs on RHEL with ANF](./high-availability-guide-rhel-netapp-files.md) and [HA for SAP NW on Azure VMs on RHEL multi-SID guide](./high-availability-guide-rhel-multi-sid.md) to adjust commands to work for both RHEL8 and RHEL7, and ENSA1 and ENSA2
- January 05, 2021: Changes in [SAP HANA scale-out with standby node on Azure VMs with ANF on SLES](./sap-hana-scale-out-standby-netapp-files-suse.md) and [SAP HANA scale-out with standby node on Azure VMs with ANF on RHEL](./sap-hana-scale-out-standby-netapp-files-rhel.md), revising the recommended configuration to allow SAP Host Agent to manage the local port range
- January 04, 2021: Add new Azure regions supported by HLI into [What is SAP HANA on Azure (Large Instances)](../large-instances/hana-overview-architecture.md)
sap Hana Vm Premium Ssd V1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/hana-vm-premium-ssd-v1.md
Previously updated : 10/07/2022 Last updated : 08/30/2023
This document is about HANA storage configurations for Azure premium storage or
> [!IMPORTANT]
-> The suggestions for the storage configurations in this document are meant as directions to start with. Running workload and analyzing storage utilization patterns, you might realize that you are not utilizing all the storage bandwidth or IOPS provided. You might consider downsizing on storage then. Or in contrary, your workload might need more storage throughput than suggested with these configurations. As a result, you might need to deploy more capacity, IOPS or throughput. In the field of tension between storage capacity required, storage latency needed, storage throughput and IOPS required and least expensive configuration, Azure offers enough different storage types with different capabilities and different price points to find and adjust to the right compromise for you and your HANA workload.
+> The suggestions for the storage configurations in this document are meant as directions to start with. Running workload and analyzing storage utilization patterns, you might realize that you aren't utilizing all the storage bandwidth or IOPS provided. You might consider downsizing on storage then. Or in contrary, your workload might need more storage throughput than suggested with these configurations. As a result, you might need to deploy more capacity, IOPS or throughput. In the field of tension between storage capacity required, storage latency needed, storage throughput and IOPS required and least expensive configuration, Azure offers enough different storage types with different capabilities and different price points to find and adjust to the right compromise for you and your HANA workload.
## Solutions with premium storage and Azure Write Accelerator for Azure M-Series virtual machines

Azure Write Accelerator is a functionality that is available for Azure M-Series VMs exclusively in combination with Azure premium storage. As the name states, the purpose of the functionality is to improve I/O latency of writes against the Azure premium storage. For SAP HANA, Write Accelerator is supposed to be used against the **/hana/log** volume only. Therefore, the **/hana/data** and **/hana/log** are separate volumes, with Azure Write Accelerator supporting the **/hana/log** volume only.
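As a rough operational sketch of these recommendations (resource, VM, and disk names are illustrative; verify the flags against the current Azure CLI reference):

```bash
# Attach a /hana/log disk with host caching disabled ...
az vm disk attach --resource-group SAP-RG --vm-name hana-m64s \
  --name hana-log-disk0 --lun 2 --caching None

# ... then enable Write Accelerator on that LUN (M-series VMs only).
az vm update --resource-group SAP-RG --name hana-m64s \
  --write-accelerator 2=true
```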
Azure Write Accelerator is a functionality that is available for Azure M-Series
The caching recommendations for Azure premium disks below assume the following I/O characteristics for SAP HANA:

-- There hardly is any read workload against the HANA data files. Exceptions are large sized I/Os after restart of the HANA instance or when data is loaded into HANA. Another case of larger read I/Os against data files can be HANA database backups. As a result read caching mostly does not make sense since in most of the cases, all data file volumes need to be read completely. 
-- Writing against the data files is experienced in bursts based by HANA savepoints and HANA crash recovery. Writing savepoints is asynchronous and are not holding up any user transactions. Writing data during crash recovery is performance critical in order to get the system responding fast again. However, crash recovery should be rather exceptional situations
+- There hardly is any read workload against the HANA data files. Exceptions are large sized I/Os after restart of the HANA instance or when data is loaded into HANA. Another case of larger read I/Os against data files can be HANA database backups. As a result read caching mostly doesn't make sense since in most of the cases, all data file volumes need to be read completely.
+- Writing against the data files is experienced in bursts based by HANA savepoints and HANA crash recovery. Writing savepoints is asynchronous and aren't holding up any user transactions. Writing data during crash recovery is performance critical in order to get the system responding fast again. However, crash recovery should be rather exceptional situations
- There are hardly any reads from the HANA redo files. Exceptions are large I/Os when performing transaction log backups, crash recovery, or in the restart phase of a HANA instance.
- Main load against the SAP HANA redo log file is writes. Dependent on the nature of workload, you can have I/Os as small as 4 KB or in other cases I/O sizes of 1 MB or more. Write latency against the SAP HANA redo log is performance critical.
- All writes need to be persisted on disk in a reliable fashion.

**Recommendation: As a result of these observed I/O patterns by SAP HANA, the caching for the different volumes using Azure premium storage should be set like:**

-- **/hana/data** - no caching or read caching
-- **/hana/log** - no caching - exception for M- and Mv2-Series VMs where Azure Write Accelerator should be enabled
+- **/hana/data** - None or read caching
+- **/hana/log** - None. Enable Write Accelerator for M- and Mv2-Series VMs; the option in the Azure portal is "None + Write Accelerator."
- **/hana/shared** - read caching
- **OS disk** - don't change default caching that is set by Azure at creation time of the VM

### Azure burst functionality for premium storage
-For Azure premium storage disks smaller or equal to 512 GiB in capacity, burst functionality is offered. The exact way how disk bursting works is described in the article [Disk bursting](../../virtual-machines/disk-bursting.md). When you read the article, you understand the concept of accruing IOPS and throughput in the times when your I/O workload is below the nominal IOPS and throughput of the disks (for details on the nominal throughput see [Managed Disk pricing](https://azure.microsoft.com/pricing/details/managed-disks/)). You are going to accrue the delta of IOPS and throughput between your current usage and the nominal values of the disk. The bursts are limited to a maximum of 30 minutes.
+For Azure premium storage disks smaller or equal to 512 GiB in capacity, burst functionality is offered. The exact way how disk bursting works is described in the article [Disk bursting](../../virtual-machines/disk-bursting.md). When you read the article, you understand the concept of accruing IOPS and throughput in the times when your I/O workload is below the nominal IOPS and throughput of the disks (for details on the nominal throughput see [Managed Disk pricing](https://azure.microsoft.com/pricing/details/managed-disks/)). You're going to accrue the delta of IOPS and throughput between your current usage and the nominal values of the disk. The bursts are limited to a maximum of 30 minutes.
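As a hedged back-of-the-envelope illustration, take a P10 disk (500 provisioned IOPS, 3,500-IOPS burst ceiling per the managed disk documentation). A workload idling at 100 IOPS accrues credits at the 400-IOPS delta:

$$(500 - 100)\,\text{IOPS} \times 3600\,\text{s} = 1.44 \times 10^{6}\ \text{I/O credits per hour}$$

The disk can later spend those credits to burst up to 3,500 IOPS, for at most 30 minutes at a time.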
The ideal cases where this burst functionality can be planned in is likely going to be the volumes or disks that contain data files for the different DBMS. The I/O workload expected against those volumes, especially with small to mid-ranged systems, is expected to look like:

-- Low to moderate read workload since data ideally is cached in memory, or like in the case of HANA should be completely in memory
+- Low to moderate read workload since data ideally is cached in memory, or like with SAP HANA should be completely in memory
- Bursts of write triggered by database checkpoints or savepoints that are issued on a regular basis
-- Backup workload that reads in a continuous stream in cases where backups are not executed via storage snapshots
+- Backup workload that reads in a continuous stream in cases where backups aren't executed via storage snapshots
- For SAP HANA, load of the data into memory after an instance restart

Especially on smaller DBMS systems where your workload is handling a few hundred transactions per second only, such a burst functionality can make sense as well for the disks or volumes that store the transaction or redo log. Expected workload against such a disk or volumes looks like:
Especially on smaller DBMS systems where your workload is handling a few hundred
> SAP HANA certification for Azure M-Series virtual machines is exclusively with Azure Write Accelerator for the **/hana/log** volume. As a result, production scenario SAP HANA deployments on Azure M-Series virtual machines are expected to be configured with Azure Write Accelerator for the **/hana/log** volume. > [!NOTE]
-> In scenarios that involve Azure premium storage, we are implementing burst capabilities into the configuration. As you are using storage test tools of whatever shape or form, keep the way [Azure premium disk bursting works](../../virtual-machines/disk-bursting.md) in mind. Running the storage tests delivered through the SAP HWCCT or HCMT tool, we are not expecting that all tests will pass the criteria since some of the tests will exceed the bursting credits you can accumulate. Especially when all the tests run sequentially without break.
+> In scenarios that involve Azure premium storage, we are implementing burst capabilities into the configuration. As you're using storage test tools of whatever shape or form, keep the way [Azure premium disk bursting works](../../virtual-machines/disk-bursting.md) in mind. Running the storage tests delivered through the SAP HWCCT or HCMT tool, we aren't expecting that all tests will pass the criteria since some of the tests will exceed the bursting credits you can accumulate. Especially when all the tests run sequentially without break.
> [!NOTE]
> With M32ts and M32ls VMs it can happen that disk throughput could be lower than expected using HCMT/HWCCT disk tests. Even with disk bursting or with sufficiently provisioned I/O throughput of the underlying disks. Root cause of the observed behavior was that the HCMT/HWCCT storage test files were completely cached in the read cache of the Premium storage data disks. This cache is located on the compute host that hosts the virtual machine and can cache the test files of HCMT/HWCCT completely. In such a case the quotas listed in the column **Max cached and temp storage throughput: IOPS/MBps (cache size in GiB)** in the article [M-series](../../virtual-machines/m-series.md) are relevant. Specifically for M32ts and M32ls, the throughput quota against the read cache is only 400MB/sec. As a result of the test files being completely cached, it is possible that despite disk bursting or higher provisioned I/O throughput, the tests can fall slightly short of 400MB/sec maximum throughput. As an alternative, you can test without read cache enabled on the Azure Premium storage data disks.
Configuration for SAP **/hana/data** volume:
| M208s_v2 | 2,850 GiB | 1,000 MBps | 4 x P30 | 800 MBps | no bursting | 20,000 | no bursting |
| M208ms_v2 | 5,700 GiB | 1,000 MBps | 4 x P40 | 1,000 MBps | no bursting | 30,000 | no bursting |
| M416s_v2 | 5,700 GiB | 2,000 MBps | 4 x P40 | 1,000 MBps | no bursting | 30,000 | no bursting |
+| M416s_8_v2 | 7,600 GiB | 2,000 MBps | 4 x P40 | 1,000 MBps | no bursting | 30,000 | no bursting |
| M416ms_v2 | 11,400 GiB | 2,000 MBps | 4 x P50 | 1,000 MBps | no bursting | 30,000 | no bursting |
+| M832isx<sup>1</sup> | 14,902 GiB | larger than 2,000 MBps | 4 x P60<sup>2</sup> | 2,000 MBps | no bursting | 64,000 | no bursting |
+| M832isx_v2<sup>1</sup> | 23,088 GiB | larger than 2,000 MBps | 4 x P60<sup>2</sup> | 2,000 MBps | no bursting | 64,000 | no bursting |
+
+<sup>1</sup> VM type isn't available by default. Contact your Microsoft account team.
+
+<sup>2</sup> Maximum throughput provided by the VM and the throughput requirement of the SAP HANA workload, especially savepoint activity, can force you to deploy significantly more premium storage v1 capacity.
For the **/hana/log** volume, the configuration would look like:
| M192idms_v2, M192ims_v2 | 4,096 GiB | 2,000 MBps | 3 x P15 | 375 MBps | 510 MBps | 3,300 | 10,500 |
| M208s_v2 | 2,850 GiB | 1,000 MBps | 3 x P15 | 375 MBps | 510 MBps | 3,300 | 10,500 |
| M208ms_v2 | 5,700 GiB | 1,000 MBps | 3 x P15 | 375 MBps | 510 MBps | 3,300 | 10,500 |
-| M416s_v2 | 5,700 GiB | 2,000 MBps | 3 x P15 | 375 MBps | 510 MBps | 3,300 | 10,500 |
+| M416s_v2 | 5,700 GiB | 2,000 MBps | 3 x P15 | 375 MBps | 510 MBps | 3,300 | 10,500 |
+| M416s_8_v2 | 7,600 GiB | 2,000 MBps | 3 x P15 | 375 MBps | 510 MBps | 3,300 | 10,500 |
| M416ms_v2 | 11,400 GiB | 2,000 MBps | 3 x P15 | 375 MBps | 510 MBps | 3,300 | 10,500 |
+| M832isx<sup>1</sup> | 14,902 GiB | larger than 2,000 MBps | 4 x P20 | 600 MBps | 680 MBps | 9,200 | 14,000 |
+| M832isx_v2<sup>1</sup> | 23,088 GiB | larger than 2,000 MBps | 4 x P20 | 600 MBps | 680 MBps | 9,200 | 14,000 |
+
+<sup>1</sup> VM type isn't available by default. Contact your Microsoft account team.
For the other volumes, the configuration would look like:
| M208s_v2 | 2,850 GiB | 1,000 MBps | 1 x P30 | 1 x P10 | 1 x P6 |
| M208ms_v2 | 5,700 GiB | 1,000 MBps | 1 x P30 | 1 x P10 | 1 x P6 |
| M416s_v2 | 5,700 GiB | 2,000 MBps | 1 x P30 | 1 x P10 | 1 x P6 |
+| M416s_8_v2 | 7,600 GiB | 2,000 MBps | 1 x P30 | 1 x P10 | 1 x P6 |
| M416ms_v2 | 11,400 GiB | 2,000 MBps | 1 x P30 | 1 x P10 | 1 x P6 |
+| M832isx<sup>1</sup> | 14,902 GiB | larger than 2,000 MBps | 1 x P30 | 1 x P10 | 1 x P6 |
+| M832isx_v2<sup>1</sup> | 23,088 GiB | larger than 2,000 MBps | 1 x P30 | 1 x P10 | 1 x P6 |
+
+<sup>1</sup> VM type isn't available by default. Contact your Microsoft account team.
Check whether the storage throughput for the different suggested volumes meets the workload that you want to run. If the workload requires higher throughput for **/hana/data** and **/hana/log**, you need to increase the number of Azure premium storage VHDs. Sizing a volume with more VHDs than listed increases the IOPS and I/O throughput within the limits of the Azure virtual machine type. Azure Write Accelerator only works with [Azure managed disks](https://azure.microsoft.com/services/managed-disks/). So at least the Azure premium storage disks forming the **/hana/log** volume need to be deployed as managed disks.
-For the HANA certified VMs of the Azure [Esv3](../../virtual-machines/ev3-esv3-series.md?toc=/azure/virtual-machines/linux/toc.json&bc=/azure/virtual-machines/linux/breadcrumb/toc.json#esv3-series) family and the [Edsv4](../../virtual-machines/edv4-edsv4-series.md?toc=/azure/virtual-machines/linux/toc.json&bc=/azure/virtual-machines/linux/breadcrumb/toc.json#edsv4-series), [Edsv5](../../virtual-machines/easv5-eadsv5-series.md?toc=/azure/virtual-machines/linux/toc.json&bc=/azure/virtual-machines/linux/breadcrumb/toc.jsons), and [Esv5](../../virtual-machines/ev5-esv5-series.md?toc=/azure/virtual-machines/linux/toc.json&bc=/azure/virtual-machines/linux/breadcrumb/toc.json#esv5-series) you need to use ANF for the **/hana/data** and **/hana/log** volume. Or you need to leverage Azure Ultra disk storage instead of Azure premium storage only for the **/hana/log** volume to be compliant with the SAP HANA certification KPIs. Though, many customers are using premium storage SSD disks for the **/hana/log** volume for non-production purposes or even for smaller production workloads since the write latency experienced with premium storage for the critical redo log writes are meeting the workload requirements. The configurations for the **/hana/data** volume on Azure premium storage could look like:
+You may want to use Azure Ultra disk storage instead of Azure premium storage only for the **/hana/log** volume to be compliant with the SAP HANA certification KPIs when using E-series VMs. However, many customers use premium storage SSD disks for the **/hana/log** volume for non-production purposes, or even for smaller production workloads, since the write latency experienced with premium storage for the critical redo log writes meets the workload requirements. The configurations for the **/hana/data** volume on Azure premium storage could look like:
| VM SKU | RAM | Max. VM I/O<br /> Throughput | /hana/data | Provisioned Throughput | Maximum burst throughput | IOPS | Burst IOPS |
| | | | | | | | |
A less costly alternative for such configurations could look like:
| M208s_v2 | 2,850 GiB | 1,000 MB/s | 4 x P30 | 1 x E30 | 1 x E10 | 1 x E6 | Using Write Accelerator for combined data and log volume will limit IOPS rate to 10,000<sup>2</sup> |
| M208ms_v2 | 5,700 GiB | 1,000 MB/s | 4 x P40 | 1 x E30 | 1 x E10 | 1 x E6 | Using Write Accelerator for combined data and log volume will limit IOPS rate to 10,000<sup>2</sup> |
| M416s_v2 | 5,700 GiB | 2,000 MB/s | 4 x P40 | 1 x E30 | 1 x E10 | 1 x E6 | Using Write Accelerator for combined data and log volume will limit IOPS rate to 20,000<sup>2</sup> |
+| M416s_8_v2 | 7,600 GiB | 2,000 MB/s | 5 x P40 | 1 x E30 | 1 x E10 | 1 x E6 | Using Write Accelerator for combined data and log volume will limit IOPS rate to 20,000<sup>2</sup> |
| M416ms_v2 | 11,400 GiB | 2,000 MB/s | 7 x P40 | 1 x E30 | 1 x E10 | 1 x E6 | Using Write Accelerator for combined data and log volume will limit IOPS rate to 20,000<sup>2</sup> |
<sup>2</sup> The VM family supports [Azure Write Accelerator](../../virtual-machines/how-to-enable-write-accelerator.md), but there's a potential that the IOPS limit of Write Accelerator could limit the disk configuration's IOPS capabilities.
-In the case of combining the data and log volume for SAP HANA, the disks building the striped volume should not have read cache or read/write cache enabled.
+When combining the data and log volume for SAP HANA, the disks building the striped volume shouldn't have read cache or read/write cache enabled.
-There are VM types listed that are not certified with SAP and as such not listed in the so called [SAP HANA hardware directory](https://www.sap.com/dmc/exp/2014-09-02-hana-hardware/enEN/iaas.html#categories=Microsoft%20Azure). Feedback of customers was that those non-listed VM types were used successfully for some non-production tasks.
+There are VM types listed that aren't certified with SAP and as such aren't listed in the so-called [SAP HANA hardware directory](https://www.sap.com/dmc/exp/2014-09-02-hana-hardware/enEN/iaas.html#categories=Microsoft%20Azure). Customer feedback was that those non-listed VM types were used successfully for some non-production tasks.
## Next steps
sap Hana Vm Premium Ssd V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/hana-vm-premium-ssd-v2.md
Previously updated : 06/22/2023 Last updated : 08/30/2023
When you look up the price list for Azure managed disks, then it becomes apparen
- You try to simplify your storage architecture by using a single disk for **/hana/data** and **/hana/log** and pay for more IOPS and throughput as needed to achieve the levels we recommend below, with the awareness that a single disk has a throughput level of 1,200 MBps and 80,000 IOPS.
- You want to benefit from the 3,000 IOPS and 125 MBps that come for free with each disk. To do so, you would build multiple smaller disks that sum up to the capacity you need and then build a striped volume with a logical volume manager across these multiple disks. Striping across multiple disks would give you the possibility to reduce the IOPS and throughput cost factors, but would result in some more effort in automating deployments and operating such solutions.
-Since we don't want to define which direction you should go, we're leaving the decision to you on whether to take the single disk approach or to take the multiple disk approach. Though keep in mind that the single disk approach can hit its limitations with the 1,200MB/sec throughput. There might be a point where you need to stretch /hana/data across multiple volumes. also keep in mind that the capabilities of Azure VMs in providing storage throughput are going to grow over time. And that HANA savepoints are extremely critical and demand high throughput for the **/hana/data** volume
+Since we don't want to define which direction you should go, we're leaving the decision to you on whether to take the single disk approach or the multiple disk approach. Though keep in mind that the single disk approach can hit its limitations with the 1,200 MB/sec throughput. There might be a point where you need to stretch /hana/data across multiple volumes. Also keep in mind that the capabilities of Azure VMs in providing storage throughput are going to grow over time, and that HANA savepoints are critical and demand high throughput for the **/hana/data** volume.
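As a rough sketch of the multiple disk approach, assuming two Premium SSD v2 data disks are already attached and visible as /dev/sdc and /dev/sdd (verify the device names with `lsblk` first), a striped LVM volume for **/hana/data** could be built like this:

```bash
# Create physical volumes and a volume group across both disks
sudo pvcreate /dev/sdc /dev/sdd
sudo vgcreate vg_hana_data /dev/sdc /dev/sdd

# Stripe across both disks (-i 2); the 256 KiB stripe size is an example value
sudo lvcreate -n lv_hana_data -i 2 -I 256 -l 100%FREE vg_hana_data

# Create a file system and mount it
sudo mkfs.xfs /dev/vg_hana_data/lv_hana_data
sudo mkdir -p /hana/data
sudo mount /dev/vg_hana_data/lv_hana_data /hana/data
```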
> [!IMPORTANT]
> You can define the sector size of Azure Premium SSD v2 as 512 Bytes or 4096 Bytes. The default sector size is 4096 Bytes. Tests conducted with HCMT didn't reveal any significant differences in performance and throughput between the different sector sizes. This sector size is different from the stripe size that you need to define when using a logical volume manager.
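A sketch of how the sector size could be set at disk creation time with the Azure CLI; resource group, disk name, region, zone, and the performance values are placeholders:

```bash
# Create a Premium SSD v2 disk with an explicit 512-byte logical sector size
az disk create \
  --resource-group my-rg \
  --name hana-data-disk-1 \
  --location westeurope \
  --zone 1 \
  --sku PremiumV2_LRS \
  --size-gb 1024 \
  --disk-iops-read-write 20000 \
  --disk-mbps-read-write 600 \
  --logical-sector-size 512
```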
-**Recommendation: The recommended configurations with Azure premium storage for production scenarios look like:**
+**Recommendation: The recommended starting configurations with Azure premium storage v2 for production scenarios look like:**
Configuration for SAP **/hana/data** volume:
| M208s_v2 | 2,850 GiB | 1,000 MBps | 40,000 | 3,424 GB | 1,000 MBps | 15,000 |
| M208ms_v2 | 5,700 GiB | 1,000 MBps | 40,000 | 6,848 GB | 1,000 MBps | 15,000 |
| M416s_v2 | 5,700 GiB | 2,000 MBps | 80,000 | 6,848 GB | 1,200 MBps | 17,000 |
-| M416ms_v2 | 11,400 GiB | 2,000 MBps | 80,000 | 13,680 GB | 1,200 MBps| 25,000 |
+| M416s_8_v2 | 7,600 GiB | 2,000 MBps | 80,000 | 9,120 GB | 1,250 MBps| 20,000 |
+| M416ms_v2 | 11,400 GiB | 2,000 MBps | 80,000 | 13,680 GB | 1,300 MBps| 25,000 |
+| M832isx<sup>1</sup> | 14,902 GiB | larger than 2,000 MBps | 80,000 | 19,200 GB | 2,000 MBps<sup>2</sup> | 40,000 |
+| M832isx_v2<sup>1</sup> | 23,088 GiB | larger than 2,000 MBps | 80,000 | 28,400 GB | 2,000 MBps<sup>2</sup> | 60,000 |
+
+<sup>1</sup> VM type isn't available by default. Contact your Microsoft account team.
+
+<sup>2</sup> Maximum throughput provided by the VM and the throughput requirement of the SAP HANA workload, especially savepoint activity, can force you to deploy significantly more throughput and IOPS.
For the **/hana/log** volume, the configuration would look like:
| M208s_v2 | 2,850 GiB | 1,000 MBps | 40,000 | 512 GB | 300 MBps | 4,000 | 1,024 GB |
| M208ms_v2 | 5,700 GiB | 1,000 MBps | 40,000 | 512 GB | 350 MBps | 4,500 | 1,024 GB |
| M416s_v2 | 5,700 GiB | 2,000 MBps | 80,000 | 512 GB | 400 MBps | 5,000 | 1,024 GB |
+| M416s_8_v2 | 7,600 GiB | 2,000 MBps | 80,000 | 512 GB | 400 MBps | 5,000 | 1,024 GB |
| M416ms_v2 | 11,400 GiB | 2,000 MBps | 80,000 | 512 GB | 400 MBps | 5,000 | 1,024 GB |
+| M832isx<sup>1</sup> | 14,902 GiB | larger than 2,000 MBps | 80,000 | 512 GB | 600 MBps | 9,000 | 1,024 GB |
+| M832isx_v2<sup>1</sup> | 23,088 GiB | larger than 2,000 MBps | 80,000 | 512 GB | 600 MBps | 9,000 | 1,024 GB |
+
+<sup>1</sup> VM type isn't available by default. Contact your Microsoft account team.
Check whether the storage throughput for the different suggested volumes meets the workload that you want to run. If the workload requires higher throughput for **/hana/data** and **/hana/log**, you need to increase the IOPS and/or throughput on the individual disks you're using.
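A minimal sketch of such an adjustment with the Azure CLI, using placeholder names and target values; keep in mind that Azure limits how frequently the performance settings of a disk can be changed:

```bash
# Raise the provisioned IOPS and throughput of an existing Premium SSD v2 disk
az disk update \
  --resource-group my-rg \
  --name hana-data-disk-1 \
  --disk-iops-read-write 30000 \
  --disk-mbps-read-write 800
```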
A few examples on how combining multiple Premium SSD v2 disks with a stripe set
| M416ms_v2 | 11,400 GiB | 1 | 13,680 | 25,000 | 3,000 | 22,000 | 1,200 MBps | 125 MBps | 1,075 MBps |
| M416ms_v2 | 11,400 GiB | 2 | 6,840 | 25,000 | 6,000 | 19,000 | 1,200 MBps | 250 MBps | 950 MBps |
| M416ms_v2 | 11,400 GiB | 4 | 3,420 | 25,000 | 12,000 | 13,000 | 1,200 MBps | 500 MBps | 700 MBps |
+| M832isx<sup>1</sup> | 14,902 GiB | 2 | 7,451 GB | 40,000 | 6,000 | 34,000 | 2,000 MBps | 250 MBps | 1,750 MBps |
+| M832isx<sup>1</sup> | 14,902 GiB | 4 | 3,726 GB | 40,000 | 12,000 | 28,000 | 2,000 MBps | 500 MBps | 1,500 MBps |
+| M832isx<sup>1</sup> | 14,902 GiB | 8 | 1,863 GB | 40,000 | 24,000 | 16,000 | 2,000 MBps | 1,000 MBps | 1,000 MBps |
+
+<sup>1</sup> VM type isn't available by default. Contact your Microsoft account team.
For **/hana/log**, a similar approach of using two disks could look like:
| M128s, M128ds_v2, M128s_v2 | 2,048 GiB | 2 | 256 GB | 4,000 | 6,000 | 0 | 300 MBps | 250 MBps | 50 MBps |
| M416ms_v2 | 11,400 GiB | 1 | 512 GB | 5,000 | 3,000 | 2,000 | 400 MBps | 125 MBps | 275 MBps |
| M416ms_v2 | 11,400 GiB | 2 | 256 GB | 5,000 | 6,000 | 0 | 400 MBps | 250 MBps | 150 MBps |
+| M832isx<sup>1</sup> | 14,902 GiB | 1 | 512 GB | 9,000 | 3,000 | 6,000 | 600 MBps | 125 MBps | 475 MBps |
+| M832isx<sup>1</sup> | 14,902 GiB | 2 | 256 GB | 9,000 | 6,000 | 3,000 | 600 MBps | 250 MBps | 350 MBps |
+
+<sup>1</sup> VM type isn't available by default. Contact your Microsoft account team.
These tables combined with the [prices of IOPS and throughput](https://azure.microsoft.com/pricing/details/managed-disks/) should give you an idea of how striping across multiple Premium SSD v2 disks could reduce the costs for the particular storage configuration you're looking at. Based on these calculations, you can decide whether to move ahead with a single disk approach for **/hana/data** and/or **/hana/log**.
sap Hana Vm Ultra Disk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/hana-vm-ultra-disk.md
Previously updated : 07/13/2023 Last updated : 08/30/2023
This document is about HANA storage configurations for Azure Ultra Disk storage
Another Azure storage type is called [Azure Ultra disk](../../virtual-machines/disks-types.md#ultra-disks). The significant difference between Azure storage offered so far and Ultra disk is that the disk capabilities aren't bound to the disk size anymore. As a customer you can define these capabilities for Ultra disk:
- Size of a disk ranging from 4 GiB to 65,536 GiB
-- IOPS range from 100 IOPS to 160K IOPS (maximum depends on VM types as well)
+- IOPS range from 100 IOPS to 160,000 IOPS (maximum depends on VM types as well)
- Storage throughput from 300 MB/sec to 2,000 MB/sec

Ultra disk gives you the possibility to define a single disk that fulfills your size, IOPS, and disk throughput range, instead of using logical volume managers like LVM or MDADM on top of Azure premium storage to construct volumes that fulfill IOPS and storage throughput requirements. You can run a configuration mix between Ultra disk and premium storage. As a result, you can limit the usage of Ultra disk to the performance critical **/hana/data** and **/hana/log** volumes and cover the other volumes with Azure premium storage.
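A sketch of creating such an Ultra disk, where capacity, IOPS, and throughput are chosen independently; all names and values are placeholders, and the VM needs Ultra disk support enabled (for example, via `--ultra-ssd-enabled` at VM creation):

```bash
# Create an Ultra disk with independently chosen size, IOPS, and throughput
az disk create \
  --resource-group my-rg \
  --name hana-log-disk-1 \
  --location westeurope \
  --zone 1 \
  --sku UltraSSD_LRS \
  --size-gb 512 \
  --disk-iops-read-write 4000 \
  --disk-mbps-read-write 400
```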
The recommendations are often exceeding the SAP minimum requirements as stated e
| VM SKU | RAM | Max. VM I/O<br /> Throughput | /hana/data volume | /hana/data I/O throughput | /hana/data IOPS | /hana/log volume | /hana/log I/O throughput | /hana/log IOPS |
| | | | | | | | | -- |
-| E20ds_v4 | 160 GiB | 480 MB/s | 200 GB | 400 MBps | 2,500 | 80 GB | 250 MB | 1,800 |
-| E32ds_v4 | 256 GiB | 768 MB/s | 300 GB | 400 MBps | 2,500 | 128 GB | 250 MBps | 1,800 |
-| E48ds_v4 | 384 GiB | 1152 MB/s | 460 GB | 400 MBps | 3,000 | 192 GB | 250 MBps | 1,800 |
-| E64ds_v4 | 504 GiB | 1200 MB/s | 610 GB | 400 MBps | 3,500 | 256 GB | 250 MBps | 1,800 |
-| E64s_v3 | 432 GiB | 1,200 MB/s | 610 GB | 400 MBps | 3,500 | 220 GB | 250 MB | 1,800 |
-| M32ts | 192 GiB | 500 MB/s | 250 GB | 400 MBps | 2,500 | 96 GB | 250 MBps | 1,800 |
-| M32ls | 256 GiB | 500 MB/s | 300 GB | 400 MBps | 2,500 | 256 GB | 250 MBps | 1,800 |
-| M64ls | 512 GiB | 1,000 MB/s | 620 GB | 400 MBps | 3,500 | 256 GB | 250 MBps | 1,800 |
-| M32dms_v2, M32ms_v2 | 875 GiB | 500 MB/s | 1,200 GB | 600 MBps | 5,000 | 512 GB | 250 MBps | 2,500 |
-| M64s, M64ds_v2, M64s_v2 | 1,024 GiB | 1,000 MB/s | 1,200 GB | 600 MBps | 5,000 | 512 GB | 250 MBps | 2,500 |
-| M64ms, M64dms_v2, M64ms_v2 | 1,792 GiB | 1,000 MB/s | 2,100 GB | 600 MBps | 5,000 | 512 GB | 250 MBps | 2,500 |
-| M128s, M128ds_v2, M128s_v2 | 2,048 GiB | 2,000 MB/s |2,400 GB | 750 MBps | 7,000 | 512 GB | 250 MBps | 2,500 |
-| M192ids_v2, M192is_v2 | 2,048 GiB | 2,000 MB/s |2,400 GB | 750 MBps | 7,000 | 512 GB | 250 MBps | 2,500 |
-| M128ms, M128dms_v2, M128ms_v2 | 3,892 GiB | 2,000 MB/s | 4,800 GB | 750 MBps |9,600 | 512 GB | 250 MBps | 2,500 |
-| M192idms_v2, M192ims_v2 | 4,096 GiB | 2,000 MB/s | 4,800 GB | 750 MBps |9,600 | 512 GB | 250 MBps | 2,500 |
-| M208s_v2 | 2,850 GiB | 1,000 MB/s | 3,500 GB | 750 MBps | 7,000 | 512 GB | 250 MBps | 2,500 |
-| M208ms_v2 | 5,700 GiB | 1,000 MB/s | 7,200 GB | 750 MBps | 14,400 | 512 GB | 250 MBps | 2,500 |
-| M416s_v2 | 5,700 GiB | 2,000 MB/s | 7,200 GB | 1,000 MBps | 14,400 | 512 GB | 400 MBps | 4,000 |
-| M416ms_v2 | 11,400 GiB | 2,000 MB/s | 14,400 GB | 1,500 MBps | 28,800 | 512 GB | 400 MBps | 4,000 |
+| E20ds_v4 | 160 GiB | 480 MBps | 200 GB | 400 MBps | 2,500 | 80 GB | 250 MBps | 1,800 |
+| E32ds_v4 | 256 GiB | 768 MBps | 300 GB | 400 MBps | 2,500 | 128 GB | 250 MBps | 1,800 |
+| E48ds_v4 | 384 GiB | 1,152 MBps | 460 GB | 400 MBps | 3,000 | 192 GB | 250 MBps | 1,800 |
+| E64ds_v4 | 504 GiB | 1,200 MBps | 610 GB | 400 MBps | 3,500 | 256 GB | 250 MBps | 1,800 |
+| E64s_v3 | 432 GiB | 1,200 MBps | 610 GB | 400 MBps | 3,500 | 220 GB | 250 MBps | 1,800 |
+| M32ts | 192 GiB | 500 MBps | 250 GB | 400 MBps | 2,500 | 96 GB | 250 MBps | 1,800 |
+| M32ls | 256 GiB | 500 MBps | 300 GB | 400 MBps | 2,500 | 256 GB | 250 MBps | 1,800 |
+| M64ls | 512 GiB | 1,000 MBps | 620 GB | 400 MBps | 3,500 | 256 GB | 250 MBps | 1,800 |
+| M32dms_v2, M32ms_v2 | 875 GiB | 500 MBps | 1,200 GB | 600 MBps | 5,000 | 512 GB | 250 MBps | 2,500 |
+| M64s, M64ds_v2, M64s_v2 | 1,024 GiB | 1,000 MBps | 1,200 GB | 600 MBps | 5,000 | 512 GB | 250 MBps | 2,500 |
+| M64ms, M64dms_v2, M64ms_v2 | 1,792 GiB | 1,000 MBps | 2,100 GB | 600 MBps | 5,000 | 512 GB | 250 MBps | 2,500 |
+| M128s, M128ds_v2, M128s_v2 | 2,048 GiB | 2,000 MBps |2,400 GB | 750 MBps | 7,000 | 512 GB | 250 MBps | 2,500 |
+| M192ids_v2, M192is_v2 | 2,048 GiB | 2,000 MBps |2,400 GB | 750 MBps | 7,000 | 512 GB | 250 MBps | 2,500 |
+| M128ms, M128dms_v2, M128ms_v2 | 3,892 GiB | 2,000 MBps | 4,800 GB | 750 MBps |9,600 | 512 GB | 250 MBps | 2,500 |
+| M192idms_v2, M192ims_v2 | 4,096 GiB | 2,000 MBps | 4,800 GB | 750 MBps |9,600 | 512 GB | 250 MBps | 2,500 |
+| M208s_v2 | 2,850 GiB | 1,000 MBps | 3,500 GB | 750 MBps | 7,000 | 512 GB | 250 MBps | 2,500 |
+| M208ms_v2 | 5,700 GiB | 1,000 MBps | 7,200 GB | 750 MBps | 14,400 | 512 GB | 250 MBps | 2,500 |
+| M416s_v2 | 5,700 GiB | 2,000 MBps | 7,200 GB | 1,000 MBps | 14,400 | 512 GB | 400 MBps | 4,000 |
+| M416s_8_v2 | 7,600 GiB | 2,000 MBps | 9,500 GB | 1,250 MBps | 20,000 | 512 GB | 400 MBps | 4,000 |
+| M416ms_v2 | 11,400 GiB | 2,000 MBps | 14,400 GB | 1,500 MBps | 28,800 | 512 GB | 400 MBps | 4,000 |
+| M832isx<sup>1</sup> | 14,902 GiB | larger than 2,000 MBps | 19,200 GB | 2,000 MBps<sup>2</sup> | 40,000 | 512 GB | 600 MBps | 9,000 |
+| M832isx_v2<sup>1</sup> | 23,088 GiB | larger than 2,000 MBps | 28,400 GB | 2,000 MBps<sup>2</sup> | 60,000 | 512 GB | 600 MBps | 9,000 |
+
+<sup>1</sup> VM type isn't available by default. Contact your Microsoft account team.
+
+<sup>2</sup> Maximum throughput provided by the VM and the throughput requirement of the SAP HANA workload, especially savepoint activity, can force you to deploy significantly more throughput and IOPS.
+ **The values listed are intended to be a starting point and need to be evaluated against the real demands.** The advantage of Azure Ultra disk is that the values for IOPS and throughput can be adapted without the need to shut down the VM or to halt the workload applied to the system.
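A sketch of such an online adjustment with the Azure CLI; the names and target values are placeholders, and Azure limits how often the performance settings of a disk can be adjusted:

```bash
# Adjust IOPS and throughput of an Ultra disk while the VM keeps running
az disk update \
  --resource-group my-rg \
  --name hana-log-disk-1 \
  --disk-iops-read-write 6000 \
  --disk-mbps-read-write 600
```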
sap High Availability Guide Rhel Ibm Db2 Luw https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/high-availability-guide-rhel-ibm-db2-luw.md
IBM Db2 for Linux, UNIX, and Windows (LUW) in [high availability and disaster recovery (HADR) configuration](https://www.ibm.com/support/knowledgecenter/en/SSEPGG_10.5.0/com.ibm.db2.luw.admin.ha.doc/doc/c0011267.html) consists of one node that runs a primary database instance and at least one node that runs a secondary database instance. Changes to the primary database instance are replicated to a secondary database instance synchronously or asynchronously, depending on your configuration. > [!NOTE]
-> This article contains references to the terms *master* and *slave*, terms that Microsoft no longer uses. When these terms are removed from the software, we'll remove them from this article.
+> This article contains references to terms that Microsoft no longer uses. When these terms are removed from the software, we'll remove them from this article.
This article describes how to deploy and configure the Azure virtual machines (VMs), install the cluster framework, and install the IBM Db2 LUW with HADR configuration.
sap High Availability Guide Rhel Netapp Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/high-availability-guide-rhel-netapp-files.md
Previously updated : 06/21/2023 Last updated : 08/23/2023
The following items are prefixed with either **[A]** - applicable to all nodes,
If using enqueue server 1 architecture (ENSA1), define the resources as follows:
- ```bash
- sudo pcs property set maintenance-mode=true
-
- # If using NFSv3
- sudo pcs resource create rsc_sap_QAS_ASCS00 SAPInstance \
+ ```bash
+ sudo pcs property set maintenance-mode=true
+
+ # If using NFSv3
+ sudo pcs resource create rsc_sap_QAS_ASCS00 SAPInstance \
InstanceName=QAS_ASCS00_anftstsapvh START_PROFILE="/sapmnt/QAS/profile/QAS_ASCS00_anftstsapvh" \
AUTOMATIC_RECOVER=false \
meta resource-stickiness=5000 migration-threshold=1 failure-timeout=60 \
The following items are prefixed with either **[A]** - applicable to all nodes,
op start interval=0 timeout=600 op stop interval=0 timeout=600 \
--group g-QAS_ASCS
- # If using NFSv4.1
- sudo pcs resource create rsc_sap_QAS_ASCS00 SAPInstance \
+ # If using NFSv4.1
+ sudo pcs resource create rsc_sap_QAS_ASCS00 SAPInstance \
InstanceName=QAS_ASCS00_anftstsapvh START_PROFILE="/sapmnt/QAS/profile/QAS_ASCS00_anftstsapvh" \
AUTOMATIC_RECOVER=false \
meta resource-stickiness=5000 migration-threshold=1 failure-timeout=60 \
The following items are prefixed with either **[A]** - applicable to all nodes,
op start interval=0 timeout=600 op stop interval=0 timeout=600 \
--group g-QAS_ASCS
- sudo pcs resource meta g-QAS_ASCS resource-stickiness=3000
+ sudo pcs resource meta g-QAS_ASCS resource-stickiness=3000
- # If using NFSv3
- sudo pcs resource create rsc_sap_QAS_ERS01 SAPInstance \
+ # If using NFSv3
+ sudo pcs resource create rsc_sap_QAS_ERS01 SAPInstance \
InstanceName=QAS_ERS01_anftstsapers START_PROFILE="/sapmnt/QAS/profile/QAS_ERS01_anftstsapers" \
AUTOMATIC_RECOVER=false IS_ERS=true \
op monitor interval=20 on-fail=restart timeout=60 op start interval=0 timeout=600 op stop interval=0 timeout=600 \
--group g-QAS_AERS
- # If using NFSv4.1
- sudo pcs resource create rsc_sap_QAS_ERS01 SAPInstance \
+ # If using NFSv4.1
+ sudo pcs resource create rsc_sap_QAS_ERS01 SAPInstance \
InstanceName=QAS_ERS01_anftstsapers START_PROFILE="/sapmnt/QAS/profile/QAS_ERS01_anftstsapers" \
AUTOMATIC_RECOVER=false IS_ERS=true \
op monitor interval=20 on-fail=restart timeout=105 op start interval=0 timeout=600 op stop interval=0 timeout=600 \
--group g-QAS_AERS
- sudo pcs constraint colocation add g-QAS_AERS with g-QAS_ASCS -5000
- sudo pcs constraint location rsc_sap_QAS_ASCS00 rule score=2000 runs_ers_QAS eq 1
- sudo pcs constraint order start g-QAS_ASCS then stop g-QAS_AERS kind=Optional symmetrical=false
+ sudo pcs constraint colocation add g-QAS_AERS with g-QAS_ASCS -5000
+ sudo pcs constraint location rsc_sap_QAS_ASCS00 rule score=2000 runs_ers_QAS eq 1
+ sudo pcs constraint order start g-QAS_ASCS then stop g-QAS_AERS kind=Optional symmetrical=false
- sudo pcs node unstandby anftstsapcl1
- sudo pcs property set maintenance-mode=false
- ```
+ sudo pcs node unstandby anftstsapcl1
+ sudo pcs property set maintenance-mode=false
+ ```
SAP introduced support for enqueue server 2, including replication, as of SAP NW 7.52. Starting with ABAP Platform 1809, enqueue server 2 is installed by default. See SAP note [2630416](https://launchpad.support.sap.com/#/notes/2630416) for enqueue server 2 support. If using enqueue server 2 architecture ([ENSA2](https://help.sap.com/viewer/cff8531bc1d9416d91bb6781e628d4e0/1709%20001/en-US/6d655c383abf4c129b0e5c8683e7ecd8.html)), install resource agent resource-agents-sap-4.1.1-12.el7.x86_64 or newer and define the resources as follows:

```bash
- sudo pcs property set maintenance-mode=true
+ sudo pcs property set maintenance-mode=true
# If using NFSv3
- sudo pcs resource create rsc_sap_QAS_ASCS00 SAPInstance \
+ sudo pcs resource create rsc_sap_QAS_ASCS00 SAPInstance \
InstanceName=QAS_ASCS00_anftstsapvh START_PROFILE="/sapmnt/QAS/profile/QAS_ASCS00_anftstsapvh" \
AUTOMATIC_RECOVER=false \
meta resource-stickiness=5000 \
The following items are prefixed with either **[A]** - applicable to all nodes,
op start interval=0 timeout=600 op stop interval=0 timeout=600 \
--group g-QAS_ASCS
- # If using NFSv4.1
- sudo pcs resource create rsc_sap_QAS_ASCS00 SAPInstance \
+ # If using NFSv4.1
+ sudo pcs resource create rsc_sap_QAS_ASCS00 SAPInstance \
InstanceName=QAS_ASCS00_anftstsapvh START_PROFILE="/sapmnt/QAS/profile/QAS_ASCS00_anftstsapvh" \
AUTOMATIC_RECOVER=false \
meta resource-stickiness=5000 \
The following items are prefixed with either **[A]** - applicable to all nodes,
op start interval=0 timeout=600 op stop interval=0 timeout=600 \
--group g-QAS_ASCS
- sudo pcs resource meta g-QAS_ASCS resource-stickiness=3000
+ sudo pcs resource meta g-QAS_ASCS resource-stickiness=3000
- # If using NFSv3
- sudo pcs resource create rsc_sap_QAS_ERS01 SAPInstance \
+ # If using NFSv3
+ sudo pcs resource create rsc_sap_QAS_ERS01 SAPInstance \
InstanceName=QAS_ERS01_anftstsapers START_PROFILE="/sapmnt/QAS/profile/QAS_ERS01_anftstsapers" \
AUTOMATIC_RECOVER=false IS_ERS=true \
op monitor interval=20 on-fail=restart timeout=60 op start interval=0 timeout=600 op stop interval=0 timeout=600 \
--group g-QAS_AERS
- # If using NFSv4.1
- sudo pcs resource create rsc_sap_QAS_ERS01 SAPInstance \
+ # If using NFSv4.1
+ sudo pcs resource create rsc_sap_QAS_ERS01 SAPInstance \
InstanceName=QAS_ERS01_anftstsapers START_PROFILE="/sapmnt/QAS/profile/QAS_ERS01_anftstsapers" \
AUTOMATIC_RECOVER=false IS_ERS=true \
op monitor interval=20 on-fail=restart timeout=105 op start interval=0 timeout=600 op stop interval=0 timeout=600 \
--group g-QAS_AERS
- sudo pcs resource meta rsc_sap_QAS_ERS01 resource-stickiness=3000
+ sudo pcs resource meta rsc_sap_QAS_ERS01 resource-stickiness=3000
- sudo pcs constraint colocation add g-QAS_AERS with g-QAS_ASCS -5000
- sudo pcs constraint order start g-QAS_ASCS then start g-QAS_AERS kind=Optional symmetrical=false
- sudo pcs constraint order start g-QAS_ASCS then stop g-QAS_AERS kind=Optional symmetrical=false
+ sudo pcs constraint colocation add g-QAS_AERS with g-QAS_ASCS -5000
+ sudo pcs constraint order start g-QAS_ASCS then start g-QAS_AERS kind=Optional symmetrical=false
+ sudo pcs constraint order start g-QAS_ASCS then stop g-QAS_AERS kind=Optional symmetrical=false
- sudo pcs node unstandby anftstsapcl1
- sudo pcs property set maintenance-mode=false
+ sudo pcs node unstandby anftstsapcl1
+ sudo pcs property set maintenance-mode=false
```
- If you are upgrading from an older version and switching to enqueue server 2, see SAP note [2641322](https://launchpad.support.sap.com/#/notes/2641322).
+ If you are upgrading from an older version and switching to enqueue server 2, see SAP note [2641322](https://launchpad.support.sap.com/#/notes/2641322).
- > [!NOTE]
- > The higher timeouts, suggested when using NFSv4.1 are necessary due to protocol-specific pause, related to NFSv4.1 lease renewals.
- > For more information, see [NFS in NetApp Best practice](https://www.netapp.com/media/10720-tr-4067.pdf).
- > The timeouts in the above configuration are just examples and may need to be adapted to the specific SAP setup.
+ > [!NOTE]
+ > The higher timeouts suggested when using NFSv4.1 are necessary due to a protocol-specific pause related to NFSv4.1 lease renewals.
+ > For more information, see [NFS in NetApp Best practice](https://www.netapp.com/media/10720-tr-4067.pdf).
+ > The timeouts in the above configuration are just examples and may need to be adapted to the specific SAP setup.
- Make sure that the cluster status is ok and that all resources are started. It isn't important on which node the resources are running.
+ Make sure that the cluster status is ok and that all resources are started. It isn't important on which node the resources are running.
```bash
sudo pcs status
The following items are prefixed with either **[A]** - applicable to all nodes,
# rsc_sap_QAS_ERS01 (ocf::heartbeat:SAPInstance): Started anftstsapcl1
```
-1. **[A]** Add firewall rules for ASCS and ERS on both nodes
+1. **[1]** Execute the step below to configure priority-fencing-delay (applicable only as of pacemaker-2.0.4-6.el8 or higher).
- Add the firewall rules for ASCS and ERS on both nodes.
+ > [!NOTE]
+ > If you have a two-node cluster, you have the option to configure the priority-fencing-delay cluster property. This property introduces an additional delay in fencing a node that has the higher total resource priority when a split-brain scenario occurs. For more information, see [Can Pacemaker fence the cluster node with the fewest running resources?](https://access.redhat.com/solutions/5110521).
+ >
+ > The property priority-fencing-delay is applicable for pacemaker-2.0.4-6.el8 version or higher. If you're setting up priority-fencing-delay on an existing cluster, make sure to unset the `pcmk_delay_max` option in the fencing device.
- ```bash
- # Probe Port of ASCS
- sudo firewall-cmd --zone=public --add-port=62000/tcp --permanent
- sudo firewall-cmd --zone=public --add-port=62000/tcp
- sudo firewall-cmd --zone=public --add-port=3200/tcp --permanent
- sudo firewall-cmd --zone=public --add-port=3200/tcp
- sudo firewall-cmd --zone=public --add-port=3600/tcp --permanent
- sudo firewall-cmd --zone=public --add-port=3600/tcp
- sudo firewall-cmd --zone=public --add-port=3900/tcp --permanent
- sudo firewall-cmd --zone=public --add-port=3900/tcp
- sudo firewall-cmd --zone=public --add-port=8100/tcp --permanent
- sudo firewall-cmd --zone=public --add-port=8100/tcp
- sudo firewall-cmd --zone=public --add-port=50013/tcp --permanent
- sudo firewall-cmd --zone=public --add-port=50013/tcp
- sudo firewall-cmd --zone=public --add-port=50014/tcp --permanent
- sudo firewall-cmd --zone=public --add-port=50014/tcp
- sudo firewall-cmd --zone=public --add-port=50016/tcp --permanent
- sudo firewall-cmd --zone=public --add-port=50016/tcp
-
- # Probe Port of ERS
- sudo firewall-cmd --zone=public --add-port=62101/tcp --permanent
- sudo firewall-cmd --zone=public --add-port=62101/tcp
- sudo firewall-cmd --zone=public --add-port=3201/tcp --permanent
- sudo firewall-cmd --zone=public --add-port=3201/tcp
- sudo firewall-cmd --zone=public --add-port=3301/tcp --permanent
- sudo firewall-cmd --zone=public --add-port=3301/tcp
- sudo firewall-cmd --zone=public --add-port=50113/tcp --permanent
- sudo firewall-cmd --zone=public --add-port=50113/tcp
- sudo firewall-cmd --zone=public --add-port=50114/tcp --permanent
- sudo firewall-cmd --zone=public --add-port=50114/tcp
- sudo firewall-cmd --zone=public --add-port=50116/tcp --permanent
- sudo firewall-cmd --zone=public --add-port=50116/tcp
- ```
+ ```bash
+ sudo pcs resource defaults update priority=1
+ sudo pcs resource update rsc_sap_QAS_ASCS00 meta priority=10
+
+ sudo pcs property set priority-fencing-delay=15s
+ ```
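To verify the result, a quick check like the following can help; `pcs property show` applies to pcs 0.10.x, while newer pcs releases use `pcs property config` instead:

```bash
# Confirm the cluster property and the ASCS resource priority
sudo pcs property show priority-fencing-delay
sudo pcs resource config rsc_sap_QAS_ASCS00
```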
+
+1. **[A]** Add firewall rules for ASCS and ERS on both nodes.
+
+ ```bash
+ # Probe Port of ASCS
+ sudo firewall-cmd --zone=public --add-port=62000/tcp --permanent
+ sudo firewall-cmd --zone=public --add-port=62000/tcp
+ sudo firewall-cmd --zone=public --add-port=3200/tcp --permanent
+ sudo firewall-cmd --zone=public --add-port=3200/tcp
+ sudo firewall-cmd --zone=public --add-port=3600/tcp --permanent
+ sudo firewall-cmd --zone=public --add-port=3600/tcp
+ sudo firewall-cmd --zone=public --add-port=3900/tcp --permanent
+ sudo firewall-cmd --zone=public --add-port=3900/tcp
+ sudo firewall-cmd --zone=public --add-port=8100/tcp --permanent
+ sudo firewall-cmd --zone=public --add-port=8100/tcp
+ sudo firewall-cmd --zone=public --add-port=50013/tcp --permanent
+ sudo firewall-cmd --zone=public --add-port=50013/tcp
+ sudo firewall-cmd --zone=public --add-port=50014/tcp --permanent
+ sudo firewall-cmd --zone=public --add-port=50014/tcp
+ sudo firewall-cmd --zone=public --add-port=50016/tcp --permanent
+ sudo firewall-cmd --zone=public --add-port=50016/tcp
+
+ # Probe Port of ERS
+ sudo firewall-cmd --zone=public --add-port=62101/tcp --permanent
+ sudo firewall-cmd --zone=public --add-port=62101/tcp
+ sudo firewall-cmd --zone=public --add-port=3201/tcp --permanent
+ sudo firewall-cmd --zone=public --add-port=3201/tcp
+ sudo firewall-cmd --zone=public --add-port=3301/tcp --permanent
+ sudo firewall-cmd --zone=public --add-port=3301/tcp
+ sudo firewall-cmd --zone=public --add-port=50113/tcp --permanent
+ sudo firewall-cmd --zone=public --add-port=50113/tcp
+ sudo firewall-cmd --zone=public --add-port=50114/tcp --permanent
+ sudo firewall-cmd --zone=public --add-port=50114/tcp
+ sudo firewall-cmd --zone=public --add-port=50116/tcp --permanent
+ sudo firewall-cmd --zone=public --add-port=50116/tcp
+ ```
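To confirm that both the runtime and the permanent firewall configuration contain the new rules, a check like this can be run on each node:

```bash
# List the open ports in the runtime and in the permanent configuration
sudo firewall-cmd --zone=public --list-ports
sudo firewall-cmd --zone=public --list-ports --permanent
```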
## SAP NetWeaver application server preparation
Follow these steps to install an SAP application server.
## Test the cluster setup
-1. Manually migrate the ASCS instance
-
- Resource state before starting the test:
-
- ```bash
- rsc_st_azure (stonith:fence_azure_arm): Started anftstsapcl1
- Resource Group: g-QAS_ASCS
- fs_QAS_ASCS (ocf::heartbeat:Filesystem): Started anftstsapcl1
- nc_QAS_ASCS (ocf::heartbeat:azure-lb): Started anftstsapcl1
- vip_QAS_ASCS (ocf::heartbeat:IPaddr2): Started anftstsapcl1
- rsc_sap_QAS_ASCS00 (ocf::heartbeat:SAPInstance): Started anftstsapcl1
- Resource Group: g-QAS_AERS
- fs_QAS_AERS (ocf::heartbeat:Filesystem): Started anftstsapcl2
- nc_QAS_AERS (ocf::heartbeat:azure-lb): Started anftstsapcl2
- vip_QAS_AERS (ocf::heartbeat:IPaddr2): Started anftstsapcl2
- rsc_sap_QAS_ERS01 (ocf::heartbeat:SAPInstance): Started anftstsapcl2
- ```
-
- Run the following commands as root to migrate the ASCS instance.
-
- ```bash
- [root@anftstsapcl1 ~]# pcs resource move rsc_sap_QAS_ASCS00
-
- [root@anftstsapcl1 ~]# pcs resource clear rsc_sap_QAS_ASCS00
-
- # Remove failed actions for the ERS that occurred as part of the migration
- [root@anftstsapcl1 ~]# pcs resource cleanup rsc_sap_QAS_ERS01
- ```
-
- Resource state after the test:
-
- ```text
- rsc_st_azure (stonith:fence_azure_arm): Started anftstsapcl1
- Resource Group: g-QAS_ASCS
- fs_QAS_ASCS (ocf::heartbeat:Filesystem): Started anftstsapcl2
- nc_QAS_ASCS (ocf::heartbeat:azure-lb): Started anftstsapcl2
- vip_QAS_ASCS (ocf::heartbeat:IPaddr2): Started anftstsapcl2
- rsc_sap_QAS_ASCS00 (ocf::heartbeat:SAPInstance): Started anftstsapcl2
- Resource Group: g-QAS_AERS
- fs_QAS_AERS (ocf::heartbeat:Filesystem): Started anftstsapcl1
- nc_QAS_AERS (ocf::heartbeat:azure-lb): Started anftstsapcl1
- vip_QAS_AERS (ocf::heartbeat:IPaddr2): Started anftstsapcl1
- rsc_sap_QAS_ERS01 (ocf::heartbeat:SAPInstance): Started anftstsapcl1
- ```
-
-1. Simulate node crash
-
- Resource state before starting the test:
-
- ```text
- rsc_st_azure (stonith:fence_azure_arm): Started anftstsapcl1
- Resource Group: g-QAS_ASCS
- fs_QAS_ASCS (ocf::heartbeat:Filesystem): Started anftstsapcl2
- nc_QAS_ASCS (ocf::heartbeat:azure-lb): Started anftstsapcl2
- vip_QAS_ASCS (ocf::heartbeat:IPaddr2): Started anftstsapcl2
- rsc_sap_QAS_ASCS00 (ocf::heartbeat:SAPInstance): Started anftstsapcl2
- Resource Group: g-QAS_AERS
- fs_QAS_AERS (ocf::heartbeat:Filesystem): Started anftstsapcl1
- nc_QAS_AERS (ocf::heartbeat:azure-lb): Started anftstsapcl1
- vip_QAS_AERS (ocf::heartbeat:IPaddr2): Started anftstsapcl1
- rsc_sap_QAS_ERS01 (ocf::heartbeat:SAPInstance): Started anftstsapcl1
- ```
-
- Run the following command as root on the node where the ASCS instance is running
-
- ```bash
- [root@anftstsapcl2 ~]# echo b > /proc/sysrq-trigger
- ```
-
- The status after the node is started again should look like this.
-
- ```text
- Online: [ anftstsapcl1 anftstsapcl2 ]
-
- Full list of resources:
-
- rsc_st_azure (stonith:fence_azure_arm): Started anftstsapcl1
- Resource Group: g-QAS_ASCS
- fs_QAS_ASCS (ocf::heartbeat:Filesystem): Started anftstsapcl1
- nc_QAS_ASCS (ocf::heartbeat:azure-lb): Started anftstsapcl1
- vip_QAS_ASCS (ocf::heartbeat:IPaddr2): Started anftstsapcl1
- rsc_sap_QAS_ASCS00 (ocf::heartbeat:SAPInstance): Started anftstsapcl1
- Resource Group: g-QAS_AERS
- fs_QAS_AERS (ocf::heartbeat:Filesystem): Started anftstsapcl2
- nc_QAS_AERS (ocf::heartbeat:azure-lb): Started anftstsapcl2
- vip_QAS_AERS (ocf::heartbeat:IPaddr2): Started anftstsapcl2
- rsc_sap_QAS_ERS01 (ocf::heartbeat:SAPInstance): Started anftstsapcl2
-
- Failed Actions:
- * rsc_sap_QAS_ERS01_monitor_11000 on anftstsapcl1 'not running' (7): call=45, status=complete, exitreason='',
- ```
-
- Use the following command to clean the failed resources.
-
- ```bash
- [root@anftstsapcl1 ~]# pcs resource cleanup rsc_sap_QAS_ERS01
- ```
-
- Resource state after the test:
-
- ```text
- rsc_st_azure (stonith:fence_azure_arm): Started anftstsapcl1
- Resource Group: g-QAS_ASCS
- fs_QAS_ASCS (ocf::heartbeat:Filesystem): Started anftstsapcl1
- nc_QAS_ASCS (ocf::heartbeat:azure-lb): Started anftstsapcl1
- vip_QAS_ASCS (ocf::heartbeat:IPaddr2): Started anftstsapcl1
- rsc_sap_QAS_ASCS00 (ocf::heartbeat:SAPInstance): Started anftstsapcl1
- Resource Group: g-QAS_AERS
- fs_QAS_AERS (ocf::heartbeat:Filesystem): Started anftstsapcl2
- nc_QAS_AERS (ocf::heartbeat:azure-lb): Started anftstsapcl2
- vip_QAS_AERS (ocf::heartbeat:IPaddr2): Started anftstsapcl2
- rsc_sap_QAS_ERS01 (ocf::heartbeat:SAPInstance): Started anftstsapcl2
- ```
-
-1. Kill message server process
-
- Resource state before starting the test:
-
- ```text
- rsc_st_azure (stonith:fence_azure_arm): Started anftstsapcl1
- Resource Group: g-QAS_ASCS
- fs_QAS_ASCS (ocf::heartbeat:Filesystem): Started anftstsapcl1
- nc_QAS_ASCS (ocf::heartbeat:azure-lb): Started anftstsapcl1
- vip_QAS_ASCS (ocf::heartbeat:IPaddr2): Started anftstsapcl1
- rsc_sap_QAS_ASCS00 (ocf::heartbeat:SAPInstance): Started anftstsapcl1
- Resource Group: g-QAS_AERS
- fs_QAS_AERS (ocf::heartbeat:Filesystem): Started anftstsapcl2
- nc_QAS_AERS (ocf::heartbeat:azure-lb): Started anftstsapcl2
- vip_QAS_AERS (ocf::heartbeat:IPaddr2): Started anftstsapcl2
- rsc_sap_QAS_ERS01 (ocf::heartbeat:SAPInstance): Started anftstsapcl2
- ```
-
- Run the following commands as root to identify the process of the message server and kill it.
-
- ```bash
- [root@anftstsapcl1 ~]# pgrep -f ms.sapQAS | xargs kill -9
- ```
-
- If you only kill the message server once, it will be restarted by `sapstart`. If you kill it often enough, Pacemaker will eventually move the ASCS instance to the other node. Run the following commands as root to clean up the resource state of the ASCS and ERS instance after the test.
-
- ```bash
- [root@anftstsapcl1 ~]# pcs resource cleanup rsc_sap_QAS_ASCS00
- [root@anftstsapcl1 ~]# pcs resource cleanup rsc_sap_QAS_ERS01
- ```
-
- Resource state after the test:
-
- ```text
- rsc_st_azure (stonith:fence_azure_arm): Started anftstsapcl1
- Resource Group: g-QAS_ASCS
- fs_QAS_ASCS (ocf::heartbeat:Filesystem): Started anftstsapcl2
- nc_QAS_ASCS (ocf::heartbeat:azure-lb): Started anftstsapcl2
- vip_QAS_ASCS (ocf::heartbeat:IPaddr2): Started anftstsapcl2
- rsc_sap_QAS_ASCS00 (ocf::heartbeat:SAPInstance): Started anftstsapcl2
- Resource Group: g-QAS_AERS
- fs_QAS_AERS (ocf::heartbeat:Filesystem): Started anftstsapcl1
- nc_QAS_AERS (ocf::heartbeat:azure-lb): Started anftstsapcl1
- vip_QAS_AERS (ocf::heartbeat:IPaddr2): Started anftstsapcl1
- rsc_sap_QAS_ERS01 (ocf::heartbeat:SAPInstance): Started anftstsapcl1
- ```
-
-1. Kill enqueue server process
-
- Resource state before starting the test:
-
- ```text
- rsc_st_azure (stonith:fence_azure_arm): Started anftstsapcl1
- Resource Group: g-QAS_ASCS
- fs_QAS_ASCS (ocf::heartbeat:Filesystem): Started anftstsapcl2
- nc_QAS_ASCS (ocf::heartbeat:azure-lb): Started anftstsapcl2
- vip_QAS_ASCS (ocf::heartbeat:IPaddr2): Started anftstsapcl2
- rsc_sap_QAS_ASCS00 (ocf::heartbeat:SAPInstance): Started anftstsapcl2
- Resource Group: g-QAS_AERS
- fs_QAS_AERS (ocf::heartbeat:Filesystem): Started anftstsapcl1
- nc_QAS_AERS (ocf::heartbeat:azure-lb): Started anftstsapcl1
- vip_QAS_AERS (ocf::heartbeat:IPaddr2): Started anftstsapcl1
- rsc_sap_QAS_ERS01 (ocf::heartbeat:SAPInstance): Started anftstsapcl1
- ```
-
- Run the following commands as root on the node where the ASCS instance is running to kill the enqueue server.
-
- ```bash
- #If using ENSA1
- [root@anftstsapcl2 ~]# pgrep -f en.sapQAS | xargs kill -9
-
- #If using ENSA2
- [root@anftstsapcl2 ~]# pgrep -f enq.sapQAS | xargs kill -9
- ```
-
- The ASCS instance should immediately fail over to the other node, in the case of ENSA1. The ERS instance should also fail over after the ASCS instance is started. Run the following commands as root to clean up the resource state of the ASCS and ERS instance after the test.
-
- ```bash
- [root@anftstsapcl2 ~]# pcs resource cleanup rsc_sap_QAS_ASCS00
- [root@anftstsapcl2 ~]# pcs resource cleanup rsc_sap_QAS_ERS01
- ```
-
- Resource state after the test:
-
- ```text
- rsc_st_azure (stonith:fence_azure_arm): Started anftstsapcl1
- Resource Group: g-QAS_ASCS
- fs_QAS_ASCS (ocf::heartbeat:Filesystem): Started anftstsapcl1
- nc_QAS_ASCS (ocf::heartbeat:azure-lb): Started anftstsapcl1
- vip_QAS_ASCS (ocf::heartbeat:IPaddr2): Started anftstsapcl1
- rsc_sap_QAS_ASCS00 (ocf::heartbeat:SAPInstance): Started anftstsapcl1
- Resource Group: g-QAS_AERS
- fs_QAS_AERS (ocf::heartbeat:Filesystem): Started anftstsapcl2
- nc_QAS_AERS (ocf::heartbeat:azure-lb): Started anftstsapcl2
- vip_QAS_AERS (ocf::heartbeat:IPaddr2): Started anftstsapcl2
- rsc_sap_QAS_ERS01 (ocf::heartbeat:SAPInstance): Started anftstsapcl2
- ```
-
-1. Kill enqueue replication server process
-
- Resource state before starting the test:
-
- ```text
- rsc_st_azure (stonith:fence_azure_arm): Started anftstsapcl1
- Resource Group: g-QAS_ASCS
- fs_QAS_ASCS (ocf::heartbeat:Filesystem): Started anftstsapcl1
- nc_QAS_ASCS (ocf::heartbeat:azure-lb): Started anftstsapcl1
- vip_QAS_ASCS (ocf::heartbeat:IPaddr2): Started anftstsapcl1
- rsc_sap_QAS_ASCS00 (ocf::heartbeat:SAPInstance): Started anftstsapcl1
- Resource Group: g-QAS_AERS
- fs_QAS_AERS (ocf::heartbeat:Filesystem): Started anftstsapcl2
- nc_QAS_AERS (ocf::heartbeat:azure-lb): Started anftstsapcl2
- vip_QAS_AERS (ocf::heartbeat:IPaddr2): Started anftstsapcl2
- rsc_sap_QAS_ERS01 (ocf::heartbeat:SAPInstance): Started anftstsapcl2
- ```
-
- Run the following command as root on the node where the ERS instance is running to kill the enqueue replication server process.
-
- ```bash
- #If using ENSA1
- [root@anftstsapcl2 ~]# pgrep -f er.sapQAS | xargs kill -9
-
- #If using ENSA2
- [root@anftstsapcl2 ~]# pgrep -f enqr.sapQAS | xargs kill -9
- ```
-
- If you only run the command once, `sapstart` will restart the process. If you run it often enough, `sapstart` will not restart the process, and the resource will be in a stopped state. Run the following commands as root to clean up the resource state of the ERS instance after the test.
-
- ```bash
- [root@anftstsapcl2 ~]# pcs resource cleanup rsc_sap_QAS_ERS01
- ```
-
- Resource state after the test:
-
- ```text
- rsc_st_azure (stonith:fence_azure_arm): Started anftstsapcl1
- Resource Group: g-QAS_ASCS
- fs_QAS_ASCS (ocf::heartbeat:Filesystem): Started anftstsapcl1
- nc_QAS_ASCS (ocf::heartbeat:azure-lb): Started anftstsapcl1
- vip_QAS_ASCS (ocf::heartbeat:IPaddr2): Started anftstsapcl1
- rsc_sap_QAS_ASCS00 (ocf::heartbeat:SAPInstance): Started anftstsapcl1
- Resource Group: g-QAS_AERS
- fs_QAS_AERS (ocf::heartbeat:Filesystem): Started anftstsapcl2
- nc_QAS_AERS (ocf::heartbeat:azure-lb): Started anftstsapcl2
- vip_QAS_AERS (ocf::heartbeat:IPaddr2): Started anftstsapcl2
- rsc_sap_QAS_ERS01 (ocf::heartbeat:SAPInstance): Started anftstsapcl2
- ```
-
-1. Kill enqueue sapstartsrv process
-
- Resource state before starting the test:
-
- ```text
- rsc_st_azure (stonith:fence_azure_arm): Started anftstsapcl1
- Resource Group: g-QAS_ASCS
- fs_QAS_ASCS (ocf::heartbeat:Filesystem): Started anftstsapcl1
- nc_QAS_ASCS (ocf::heartbeat:azure-lb): Started anftstsapcl1
- vip_QAS_ASCS (ocf::heartbeat:IPaddr2): Started anftstsapcl1
- rsc_sap_QAS_ASCS00 (ocf::heartbeat:SAPInstance): Started anftstsapcl1
- Resource Group: g-QAS_AERS
- fs_QAS_AERS (ocf::heartbeat:Filesystem): Started anftstsapcl2
- nc_QAS_AERS (ocf::heartbeat:azure-lb): Started anftstsapcl2
- vip_QAS_AERS (ocf::heartbeat:IPaddr2): Started anftstsapcl2
- rsc_sap_QAS_ERS01 (ocf::heartbeat:SAPInstance): Started anftstsapcl2
- ```
-
- Run the following commands as root on the node where the ASCS is running.
-
- ```bash
- [root@anftstsapcl1 ~]# pgrep -fl ASCS00.*sapstartsrv
- # 59545 sapstartsrv
-
- [root@anftstsapcl1 ~]# kill -9 59545
- ```
-
- The sapstartsrv process should always be restarted by the Pacemaker resource agent as part of the monitoring. Resource state after the test:
-
- ```text
- rsc_st_azure (stonith:fence_azure_arm): Started anftstsapcl1
- Resource Group: g-QAS_ASCS
- fs_QAS_ASCS (ocf::heartbeat:Filesystem): Started anftstsapcl1
- nc_QAS_ASCS (ocf::heartbeat:azure-lb): Started anftstsapcl1
- vip_QAS_ASCS (ocf::heartbeat:IPaddr2): Started anftstsapcl1
- rsc_sap_QAS_ASCS00 (ocf::heartbeat:SAPInstance): Started anftstsapcl1
- Resource Group: g-QAS_AERS
- fs_QAS_AERS (ocf::heartbeat:Filesystem): Started anftstsapcl2
- nc_QAS_AERS (ocf::heartbeat:azure-lb): Started anftstsapcl2
- vip_QAS_AERS (ocf::heartbeat:IPaddr2): Started anftstsapcl2
- rsc_sap_QAS_ERS01 (ocf::heartbeat:SAPInstance): Started anftstsapcl2
- ```
+Thoroughly test your Pacemaker cluster. [Execute the typical failover tests](high-availability-guide-rhel.md#test-the-cluster-setup).
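For example, a manual migration of the ASCS instance, run as root and using the resource names from the configuration above, is one of the typical tests:

```bash
pcs resource move rsc_sap_QAS_ASCS00
pcs resource clear rsc_sap_QAS_ASCS00

# Remove failed actions for the ERS that occurred as part of the migration
pcs resource cleanup rsc_sap_QAS_ERS01
```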
## Next steps
sap High Availability Guide Rhel Nfs Azure Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/high-availability-guide-rhel-nfs-azure-files.md
Previously updated : 06/21/2023 Last updated : 08/23/2023
The following items are prefixed with either **[A]** - applicable to all nodes,
pcs resource defaults update migration-threshold=3
```
-1. **[1]** Create a virtual IP resource and health-probe for the ASCS instance
+2. **[1]** Create a virtual IP resource and health-probe for the ASCS instance
```bash
sudo pcs node standby sap-cl2
The following items are prefixed with either **[A]** - applicable to all nodes,
# vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started sap-cl1
```
-1. **[1]** Install SAP NetWeaver ASCS
+3. **[1]** Install SAP NetWeaver ASCS
Install SAP NetWeaver ASCS as root on the first node using a virtual hostname that maps to the IP address of the load balancer frontend configuration for the ASCS, for example **sapascs**, **10.90.90.10** and the instance number that you used for the probe of the load balancer, for example **00**.
The following items are prefixed with either **[A]** - applicable to all nodes,
sudo chgrp sapsys /usr/sap/NW1/ASCS00
```
-1. **[1]** Create a virtual IP resource and health-probe for the ERS instance
+4. **[1]** Create a virtual IP resource and health-probe for the ERS instance
```bash
sudo pcs node unstandby sap-cl2
The following items are prefixed with either **[A]** - applicable to all nodes,
# vip_NW1_AERS (ocf::heartbeat:IPaddr2): Started sap-cl2
```
-1. **[2]** Install SAP NetWeaver ERS
+5. **[2]** Install SAP NetWeaver ERS
Install SAP NetWeaver ERS as root on the second node using a virtual hostname that maps to the IP address of the load balancer frontend configuration for the ERS, for example **sapers**, **10.90.90.9** and the instance number that you used for the probe of the load balancer, for example **01**.
The following items are prefixed with either **[A]** - applicable to all nodes,
sudo chgrp sapsys /usr/sap/NW1/ERS01
```
-1. **[1]** Adapt the ASCS/SCS and ERS instance profiles
+6. **[1]** Adapt the ASCS/SCS and ERS instance profiles
* ASCS/SCS profile
The following items are prefixed with either **[A]** - applicable to all nodes,
# Autostart = 1
```
-1. **[A]** Configure Keep Alive
+7. **[A]** Configure Keep Alive
The communication between the SAP NetWeaver application server and the ASCS/SCS is routed through a software load balancer. The load balancer disconnects inactive connections after a configurable timeout. To prevent this, you need to set a parameter in the SAP NetWeaver ASCS/SCS profile, if using ENSA1. Change the Linux system `keepalive` settings on all SAP servers for both ENSA1/ENSA2. Read [SAP Note 1410736][1410736] for more information.
The following items are prefixed with either **[A]** - applicable to all nodes,
sudo sysctl net.ipv4.tcp_keepalive_time=300
```
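The `sysctl` call only changes the running system. To make the setting survive a reboot, one common approach is a drop-in file under `/etc/sysctl.d` (the file name below is an example):

```bash
# Persist the keepalive setting across reboots
echo 'net.ipv4.tcp_keepalive_time = 300' | sudo tee /etc/sysctl.d/99-sap-keepalive.conf
sudo sysctl --system
```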
-1. **[A]** Update the /usr/sap/sapservices file
+8. **[A]** Update the /usr/sap/sapservices file
To prevent the start of the instances by the sapinit startup script, all instances managed by Pacemaker must be commented out from the /usr/sap/sapservices file.
The following items are prefixed with either **[A]** - applicable to all nodes,
# LD_LIBRARY_PATH=/usr/sap/NW1/ERS01/exe:$LD_LIBRARY_PATH; export LD_LIBRARY_PATH; /usr/sap/NW1/ERS01/exe/sapstartsrv pf=/usr/sap/NW1/ERS01/profile/NW1_ERS01_sapers -D -u nw1adm
```
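A sketch of how the comment-out can be scripted; the SID and instance numbers match the example system in this article and need to be adapted:

```bash
# Comment out the Pacemaker-managed ASCS and ERS entries in /usr/sap/sapservices
sudo sed -i '/NW1_ASCS00/s/^\([^#]\)/#\1/' /usr/sap/sapservices
sudo sed -i '/NW1_ERS01/s/^\([^#]\)/#\1/' /usr/sap/sapservices
```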
-1. **[1]** Create the SAP cluster resources
+9. **[1]** Create the SAP cluster resources.
- If using enqueue server 1 architecture (ENSA1), define the resources as follows:
+ If using enqueue server 1 architecture (ENSA1), define the resources as follows:
```bash
sudo pcs property set maintenance-mode=true
The following items are prefixed with either **[A]** - applicable to all nodes,
op monitor interval=20 on-fail=restart timeout=60 \
op start interval=0 timeout=600 op stop interval=0 timeout=600 \
--group g-NW1_ASCS
-
+
sudo pcs resource meta g-NW1_ASCS resource-stickiness=3000

sudo pcs resource create rsc_sap_NW1_ERS01 SAPInstance \
The following items are prefixed with either **[A]** - applicable to all nodes,
sudo pcs property set maintenance-mode=false
```
- SAP introduced support for enqueue server 2, including replication, as of SAP NW 7.52. Starting with ABAP Platform 1809, enqueue server 2 is installed by default. See SAP note [2630416](https://launchpad.support.sap.com/#/notes/2630416) for enqueue server 2 support.
- If using enqueue server 2 architecture ([ENSA2](https://help.sap.com/viewer/cff8531bc1d9416d91bb6781e628d4e0/1709%20001/en-US/6d655c383abf4c129b0e5c8683e7ecd8.html)), install resource agent resource-agents-sap-4.1.1-12.el7.x86_64 or newer and define the resources as follows:
+ SAP introduced support for enqueue server 2, including replication, as of SAP NW 7.52. Starting with ABAP Platform 1809, enqueue server 2 is installed by default. See SAP note [2630416](https://launchpad.support.sap.com/#/notes/2630416) for enqueue server 2 support.
+ If using enqueue server 2 architecture ([ENSA2](https://help.sap.com/viewer/cff8531bc1d9416d91bb6781e628d4e0/1709%20001/en-US/6d655c383abf4c129b0e5c8683e7ecd8.html)), install resource agent resource-agents-sap-4.1.1-12.el7.x86_64 or newer and define the resources as follows:
```bash
sudo pcs property set maintenance-mode=true
The following items are prefixed with either **[A]** - applicable to all nodes,
sudo pcs property set maintenance-mode=false
```
- If you are upgrading from an older version and switching to enqueue server 2, see SAP note [2641322](https://launchpad.support.sap.com/#/notes/2641322).
+ If you are upgrading from an older version and switching to enqueue server 2, see SAP note [2641322](https://launchpad.support.sap.com/#/notes/2641322).
- > [!NOTE]
- > The timeouts in the above configuration are just examples and may need to be adapted to the specific SAP setup.
+ > [!NOTE]
+ > The timeouts in the above configuration are just examples and may need to be adapted to the specific SAP setup.
- Make sure that the cluster status is ok and that all resources are started. It is not important on which node the resources are running.
+ Make sure that the cluster status is ok and that all resources are started. It is not important on which node the resources are running.
```bash
sudo pcs status
The following items are prefixed with either **[A]** - applicable to all nodes,
# nc_NW1_AERS (ocf::heartbeat:azure-lb): Started sap-cl1
# vip_NW1_AERS (ocf::heartbeat:IPaddr2): Started sap-cl1
# rsc_sap_NW1_ERS01 (ocf::heartbeat:SAPInstance): Started sap-cl1
- ```
+ ```
+
+10. **[1]** Execute the step below to configure priority-fencing-delay (applicable only as of pacemaker-2.0.4-6.el8 or higher).
+
+ > [!NOTE]
+ > If you have a two-node cluster, you have the option to configure the priority-fencing-delay cluster property. This property introduces an additional delay in fencing a node that has the higher total resource priority when a split-brain scenario occurs. For more information, see [Can Pacemaker fence the cluster node with the fewest running resources?](https://access.redhat.com/solutions/5110521).
+ >
+ > The property priority-fencing-delay is applicable for pacemaker-2.0.4-6.el8 version or higher. If you're setting up priority-fencing-delay on an existing cluster, make sure to unset the `pcmk_delay_max` option in the fencing device.
+
+ ```bash
+ sudo pcs resource defaults update priority=1
+ sudo pcs resource update rsc_sap_NW1_ASCS00 meta priority=10
+
+ sudo pcs property set priority-fencing-delay=15s
+ ```
-1. **[A]** Add firewall rules for ASCS and ERS on both nodes
- Add the firewall rules for ASCS and ERS on both nodes.
+11. **[A]** Add firewall rules for ASCS and ERS on both nodes.
```bash
# Probe Port of ASCS
sap High Availability Guide Rhel Pacemaker https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/high-availability-guide-rhel-pacemaker.md
Title: Setting up Pacemaker on RHEL in Azure | Microsoft Docs description: Setting up Pacemaker on Red Hat Enterprise Linux in Azure
-tags: azure-resource-manager
-keywords: ''
- vm-windows - Previously updated : 04/10/2022 Last updated : 08/23/2023
[virtual-machines-linux-maintenance]:../../virtual-machines/maintenance-and-updates.md#maintenance-that-doesnt-require-a-reboot
-The article describes how to configure basic Pacemaker cluster on Red Hat Enterprise Server(RHEL). The instructions cover RHEL 7, RHEL 8 and RHEL 9.
+The article describes how to configure a basic Pacemaker cluster on Red Hat Enterprise Linux (RHEL). The instructions cover RHEL 7, RHEL 8, and RHEL 9.
## Prerequisites
+
Read the following SAP Notes and papers first:
* SAP Note [1928533], which has:
Read the following SAP Notes and papers first:
> [!NOTE]
> Red Hat doesn't support software-emulated watchdog. Red Hat doesn't support SBD on cloud platforms. For details, see [Support Policies for RHEL High Availability Clusters - sbd and fence_sbd](https://access.redhat.com/articles/2800691).
+>
> The only supported fencing mechanism for Pacemaker Red Hat Enterprise Linux clusters on Azure is the Azure fence agent.

The following items are prefixed with either **[A]** - applicable to all nodes, **[1]** - only applicable to node 1 or **[2]** - only applicable to node 2. Differences in the commands or the configuration between RHEL 7 and RHEL 8/RHEL 9 are marked in the document.
The following items are prefixed with either **[A]** - applicable to all nodes,
For example, if deploying on RHEL 7, register your virtual machine and attach it to a pool that contains repositories for RHEL 7.
- <pre><code>sudo subscription-manager register
+ ```bash
+ sudo subscription-manager register
# List the available pools
sudo subscription-manager list --available --matches '*SAP*'
- sudo subscription-manager attach --pool=&lt;pool id&gt;
- </code></pre>
+ sudo subscription-manager attach --pool=<pool id>
+ ```
By attaching a pool to an Azure Marketplace PAYG RHEL image, you will be effectively double-billed for your RHEL usage: once for the PAYG image, and once for the RHEL entitlement in the pool you attach. To mitigate this situation, Azure now provides BYOS RHEL images. For more information, see [Red Hat Enterprise Linux bring-your-own-subscription Azure images](../../virtual-machines/workloads/redhat/byos.md).
The following items are prefixed with either **[A]** - applicable to all nodes,
In order to install the required packages on RHEL 7, enable the following repositories.
- <pre><code>sudo subscription-manager repos --disable "*"
+ ```bash
+ sudo subscription-manager repos --disable "*"
sudo subscription-manager repos --enable=rhel-7-server-rpms
sudo subscription-manager repos --enable=rhel-ha-for-rhel-7-server-rpms
sudo subscription-manager repos --enable=rhel-sap-for-rhel-7-server-rpms
sudo subscription-manager repos --enable=rhel-ha-for-rhel-7-server-eus-rpms
- </code></pre>
+ ```
1. **[A]** Install RHEL HA Add-On

   ```bash
   sudo yum install -y pcs pacemaker fence-agents-azure-arm nmap-ncat
   ```
-
+ > [!IMPORTANT]
+ > We recommend the following versions of Azure Fence agent (or later) for customers to benefit from a faster failover time, if a resource stop fails or the cluster nodes cannot communicate with each other anymore:
- > RHEL 7.7 or higher use the latest available version of fence-agents package
- > RHEL 7.6: fence-agents-4.2.1-11.el7_6.8
+ >
+ > RHEL 7.7 or higher use the latest available version of fence-agents package.
+ >
+ > RHEL 7.6: fence-agents-4.2.1-11.el7_6.8
+ >
> RHEL 7.5: fence-agents-4.0.11-86.el7_5.8
- > RHEL 7.4: fence-agents-4.0.11-66.el7_4.12
+ >
+ > RHEL 7.4: fence-agents-4.0.11-66.el7_4.12
+ >
> For more information, see [Azure VM running as a RHEL High Availability cluster member take a very long time to be fenced, or fencing fails / times-out before the VM shuts down](https://access.redhat.com/solutions/3408711).

> [!IMPORTANT]
- > We recommend the following versions of Azure Fence agent (or later) for customers wishing to use Managed Identities for Azure resources instead of service principal names for the fence agent.
- > RHEL 8.4: fence-agents-4.2.1-54.el8
+ > We recommend the following versions of Azure Fence agent (or later) for customers wishing to use Managed Identities for Azure resources instead of service principal names for the fence agent.
+ >
+ > RHEL 8.4: fence-agents-4.2.1-54.el8.
+ >
> RHEL 8.2: fence-agents-4.2.1-41.el8_2.4
+ >
> RHEL 8.1: fence-agents-4.2.1-30.el8_1.4
+ >
> RHEL 7.9: fence-agents-4.2.1-41.el7_9.4.

> [!IMPORTANT]
- > On RHEL 9, we recommend the following package versions (or later) to avoid issues with Azure Fence agent:
+ > On RHEL 9, we recommend the following package versions (or later) to avoid issues with Azure Fence agent:
+ >
> fence-agents-4.10.0-20.el9_0.7
- > fence-agents-common-4.10.0-20.el9_0.6
+ >
+ > fence-agents-common-4.10.0-20.el9_0.6
+ >
> ha-cloud-support-4.10.0-20.el9_0.6.x86_64.rpm

   Check the version of the Azure fence agent. If necessary, update it to the minimum required version or later.
- <pre><code># Check the version of the Azure Fence Agent
+ ```bash
+ # Check the version of the Azure Fence Agent
sudo yum info fence-agents-azure-arm
- </code></pre>
+ ```
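If the installed version is older than the minimum required one, a minimal update sketch follows; depending on your subscription setup, additional repositories may need to be enabled first.

```bash
# Update the Azure fence agent package to the latest available version
sudo yum update -y fence-agents-azure-arm
```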
> [!IMPORTANT]
> If you need to update the Azure Fence agent, and if using a custom role, make sure to update the custom role to include the action **powerOff**. For details, see [Create a custom role for the fence agent](#1-create-a-custom-role-for-the-fence-agent).
-1. If deploying on RHEL 9, install also the resource agents for cloud deployment:
-
+1. If deploying on RHEL 9, also install the resource agents for cloud deployment:
+ ```bash
+ sudo yum install -y resource-agents-cloud
+ ```
The following items are prefixed with either **[A]** - applicable to all nodes,
You can either use a DNS server or modify the /etc/hosts on all nodes. This example shows how to use the /etc/hosts file. Replace the IP address and the hostname in the following commands.
- >[!IMPORTANT]
+ > [!IMPORTANT]
> If using host names in the cluster configuration, it's vital to have reliable host name resolution. The cluster communication will fail if the names are not available, and that can lead to cluster failover delays.
+ >
+ > The benefit of using /etc/hosts is that your cluster becomes independent of DNS, which could be a single point of failure too.
- <pre><code>sudo vi /etc/hosts
- </code></pre>
+ ```bash
+ sudo vi /etc/hosts
+ ```
Insert the following lines into /etc/hosts. Change the IP address and hostname to match your environment.
- <pre><code># IP address of the first cluster node
- <b>10.0.0.6 prod-cl1-0</b>
+ ```text
+ # IP address of the first cluster node
+ 10.0.0.6 prod-cl1-0
# IP address of the second cluster node
- <b>10.0.0.7 prod-cl1-1</b>
- </code></pre>
+ 10.0.0.7 prod-cl1-1
+ ```
1. **[A]** Change hacluster password to the same password
- <pre><code>sudo passwd hacluster
- </code></pre>
+ ```bash
+ sudo passwd hacluster
+ ```
1. **[A]** Add firewall rules for pacemaker

   Add the following firewall rules to allow all cluster communication between the cluster nodes.
- <pre><code>sudo firewall-cmd --add-service=high-availability --permanent
+ ```bash
+ sudo firewall-cmd --add-service=high-availability --permanent
sudo firewall-cmd --add-service=high-availability
- </code></pre>
+ ```
1. **[A]** Enable basic cluster services

   Run the following commands to enable the Pacemaker service and start it.
- <pre><code>sudo systemctl start pcsd.service
+ ```bash
+ sudo systemctl start pcsd.service
sudo systemctl enable pcsd.service
- </code></pre>
+ ```
1. **[1]** Create Pacemaker cluster

   Run the following commands to authenticate the nodes and create the cluster. Set the token to 30000 to allow memory-preserving maintenance. For more information, see [this article for Linux][virtual-machines-linux-maintenance].
-
+ If building a cluster on **RHEL 7.x**, use the following commands:
- <pre><code>sudo pcs cluster auth <b>prod-cl1-0</b> <b>prod-cl1-1</b> -u hacluster
- sudo pcs cluster setup --name <b>nw1-azr</b> <b>prod-cl1-0</b> <b>prod-cl1-1</b> --token 30000
+
+ ```bash
+ sudo pcs cluster auth prod-cl1-0 prod-cl1-1 -u hacluster
+ sudo pcs cluster setup --name nw1-azr prod-cl1-0 prod-cl1-1 --token 30000
sudo pcs cluster start --all
- </code></pre>
+ ```
If building a cluster on **RHEL 8.x/RHEL 9.x**, use the following commands:
- <pre><code>sudo pcs host auth <b>prod-cl1-0</b> <b>prod-cl1-1</b> -u hacluster
- sudo pcs cluster setup <b>nw1-azr</b> <b>prod-cl1-0</b> <b>prod-cl1-1</b> totem token=30000
+
+ ```bash
+ sudo pcs host auth prod-cl1-0 prod-cl1-1 -u hacluster
+ sudo pcs cluster setup nw1-azr prod-cl1-0 prod-cl1-1 totem token=30000
sudo pcs cluster start --all
- </code></pre>
+ ```
Verify the cluster status by executing the following command:
- <pre><code> # Run the following command until the status of both nodes is online
+
+ ```bash
+ # Run the following command until the status of both nodes is online
sudo pcs status
+
# Cluster name: nw1-azr
# WARNING: no stonith devices and stonith-enabled is not false
# Stack: corosync
- # Current DC: <b>prod-cl1-1</b> (version 1.1.18-11.el7_5.3-2b07d5c5a9) - partition with quorum
+ # Current DC: prod-cl1-1 (version 1.1.18-11.el7_5.3-2b07d5c5a9) - partition with quorum
# Last updated: Fri Aug 17 09:18:24 2018
- # Last change: Fri Aug 17 09:17:46 2018 by hacluster via crmd on <b>prod-cl1-1</b>
+ # Last change: Fri Aug 17 09:17:46 2018 by hacluster via crmd on prod-cl1-1
#
# 2 nodes configured
# 0 resources configured
#
- # Online: [ <b>prod-cl1-0</b> <b>prod-cl1-1</b> ]
+ # Online: [ prod-cl1-0 prod-cl1-1 ]
#
# No resources
#
The following items are prefixed with either **[A]** - applicable to all nodes,
# corosync: active/disabled
# pacemaker: active/disabled
# pcsd: active/enabled
- </code></pre>
+ ```
+
+1. **[A]** Set Expected Votes.
-1. **[A]** Set Expected Votes.
+ ```bash
+ # Check the quorum votes
+ pcs quorum status
- <pre><code># Check the quorum votes
- pcs quorum status
- # If the quorum votes are not set to 2, execute the next command
- sudo pcs quorum expected-votes 2
- </code></pre>
+ # If the quorum votes are not set to 2, execute the next command
+ sudo pcs quorum expected-votes 2
+ ```
- >[!TIP]
- > If building multi-node cluster, that is cluster with more than two nodes, don't set the votes to 2.
+ > [!TIP]
+ > If building a multi-node cluster, that is, a cluster with more than two nodes, don't set the votes to 2.
1. **[1]** Allow concurrent fence actions
- <pre><code>sudo pcs property set concurrent-fencing=true
- </code></pre>
+ ```bash
+ sudo pcs property set concurrent-fencing=true
+ ```
## Create fencing device
-The fencing device uses either a managed identity for Azure resource or service principal to authorize against Microsoft Azure.
+The fencing device uses either a managed identity for Azure resources or a service principal to authorize against Microsoft Azure.
+
+### [Managed Identity](#tab/msi)
+
+To create a managed identity (MSI), [create a system-assigned](../../active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vm.md#system-assigned-managed-identity) managed identity for each VM in the cluster. Should a system-assigned managed identity already exist, it will be used. User-assigned managed identities should not be used with Pacemaker at this time. A fencing device based on managed identity is supported on RHEL 7.9 and RHEL 8.x/RHEL 9.x.
-### Using Managed Identity
-To create a managed identity (MSI), [create a system-assigned](../../active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vm.md#system-assigned-managed-identity) managed identity for each VM in the cluster. Should a system-assigned managed identity already exist, it will be used. User assigned managed identities should not be used with Pacemaker at this time. Fence device, based on managed identity is supported on RHEL 7.9 and RHEL 8.x/RHEL 9.x.
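As a sketch of the portal steps linked above, the system-assigned identity can also be enabled from the Azure CLI. The resource group name is a placeholder; the VM names follow the example host names used in this article.

```bash
# Enable a system-assigned managed identity on each cluster VM
az vm identity assign --resource-group MyResourceGroup --name prod-cl1-0
az vm identity assign --resource-group MyResourceGroup --name prod-cl1-1
```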
+### [Service Principal](#tab/spn)
-### Using Service Principal
Follow these steps to create a service principal, if not using managed identity.

1. Go to the [Azure portal](https://portal.azure.com).
-1. Open the Azure Active Directory blade
+2. Open the Azure Active Directory blade.
Go to Properties and make a note of the Directory ID. This is the **tenant ID**.
-1. Click App registrations
-1. Click New Registration
-1. Enter a Name, select "Accounts in this organization directory only"
-2. Select Application Type "Web", enter a sign-on URL (for example http:\//localhost) and click Add
- The sign-on URL isn't used and can be any valid URL
-1. Select Certificates and Secrets, then click New client secret
-1. Enter a description for a new key, select "Never expires" and click Add
-1. Make a node the Value. It is used as the **password** for the service principal
-1. Select Overview. Make a note the Application ID. It's used as the username (**login ID** in the steps below) of the service principal
+3. Click App registrations.
+4. Click New Registration.
+5. Enter a Name, select "Accounts in this organization directory only".
+6. Select Application Type "Web", enter a sign-on URL (for example http:\//localhost) and click Add.
+ The sign-on URL isn't used and can be any valid URL.
+7. Select Certificates and Secrets, then click New client secret.
+8. Enter a description for a new key, select "Two years" and click Add.
+9. Make a note of the Value. It is used as the **password** for the service principal.
+10. Select Overview. Make a note of the Application ID. It's used as the username (**login ID** in the steps below) of the service principal. A CLI alternative to these portal steps is sketched below.
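The portal steps above can also be approximated from the Azure CLI; this is a sketch, not part of the original article, and the display name is a placeholder. The JSON output contains `appId` (the **login ID**), `password`, and `tenant` (the **tenant ID**) used later in this article.

```bash
# Create a service principal for the fence agent
az ad sp create-for-rbac --name "pacemaker-fence-agent"
```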
### **[1]** Create a custom role for the fence agent
Use the following content for the input file. You need to adapt the content to y
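The role-definition file itself is truncated in this digest. As a sketch only: a fence-agent role typically needs read access plus the **powerOff** and **start** VM actions, and could be created from the CLI roughly as follows (the subscription ID is a placeholder; adapt the scopes to your environment):

```bash
# Sketch: create a custom role with the minimal fence-agent permissions
az role definition create --role-definition '{
  "Name": "Linux Fence Agent Role",
  "Description": "Allows to power off and start virtual machines",
  "Actions": [
    "Microsoft.Compute/*/read",
    "Microsoft.Compute/virtualMachines/powerOff/action",
    "Microsoft.Compute/virtualMachines/start/action"
  ],
  "NotActions": [],
  "AssignableScopes": ["/subscriptions/<subscription-id>"]
}'
```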
### **[A]** Assign the custom role
-#### Using Managed Identity
+#### [Managed Identity](#tab/msi)
Assign the custom role "Linux Fence Agent Role" that was created in the last chapter to each managed identity of the cluster VMs. Each VM system-assigned managed identity needs the role assigned for every cluster VM's resource. For detailed steps, see [Assign a managed identity access to a resource by using the Azure portal](../../active-directory/managed-identities-azure-resources/howto-assign-access-portal.md). Verify each VM's managed identity role assignment contains all cluster VMs.

> [!IMPORTANT]
> Be aware assignment and removal of authorization with managed identities [can be delayed](../../active-directory/managed-identities-azure-resources/managed-identity-best-practice-recommendations.md#limitation-of-using-managed-identities-for-authorization) until effective.
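A sketch of the same assignment from the CLI; all IDs and names are placeholders, and the assignment must be repeated for each identity and each target VM:

```bash
# Assign the custom role to a VM's system-assigned identity,
# scoped to one cluster VM resource
az role assignment create \
  --assignee "<principal-id-of-the-vm-identity>" \
  --role "Linux Fence Agent Role" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Compute/virtualMachines/<vm-name>"
```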
-#### Using Service Principal
+#### [Service Principal](#tab/spn)
+
+Assign the custom role "Linux Fence Agent Role" that was created in the last chapter to the service principal. Don't use the Owner role anymore! For detailed steps, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
-Assign the custom role "Linux Fence Agent Role" that was created in the last chapter to the service principal. Don't use the Owner role anymore! For detailed steps, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
-Make sure to assign the role for both cluster nodes.
+Make sure to assign the role for both cluster nodes.
### **[1]** Create the fencing devices

After you edited the permissions for the virtual machines, you can configure the fencing devices in the cluster.
-<pre><code>
+```bash
sudo pcs property set stonith-timeout=900
-</code></pre>
+```
> [!NOTE]
> Option 'pcmk_host_map' is ONLY required in the command if the RHEL host names and the Azure VM names are NOT identical. Specify the mapping in the format **hostname:vm-name**. For more information, see [What format should I use to specify node mappings to fencing devices in pcmk_host_map](https://access.redhat.com/solutions/2619961).

#### [Managed Identity](#tab/msi)
-For RHEL **7.x**, use the following command to configure the fence device:
-<pre><code>sudo pcs stonith create rsc_st_azure fence_azure_arm <b>msi=true</b> resourceGroup="<b>resource group</b>" \
-subscriptionId="<b>subscription id</b>" <b>pcmk_host_map="prod-cl1-0:prod-cl1-0-vm-name;prod-cl1-1:prod-cl1-1-vm-name"</b> \
+For RHEL **7.x**, use the following command to configure the fence device:
+
+```bash
+sudo pcs stonith create rsc_st_azure fence_azure_arm msi=true resourceGroup="resource group" \
+subscriptionId="subscription id" pcmk_host_map="prod-cl1-0:prod-cl1-0-vm-name;prod-cl1-1:prod-cl1-1-vm-name" \
power_timeout=240 pcmk_reboot_timeout=900 pcmk_monitor_timeout=120 pcmk_monitor_retries=4 pcmk_action_limit=3 pcmk_delay_max=15 \
op monitor interval=3600
-</code></pre>
+```
For RHEL **8.x/9.x**, use the following command to configure the fence device:
-<pre><code>sudo pcs stonith create rsc_st_azure fence_azure_arm <b>msi=true</b> resourceGroup="<b>resource group</b>" \
-subscriptionId="<b>subscription id</b>" <b>pcmk_host_map="prod-cl1-0:prod-cl1-0-vm-name;prod-cl1-1:prod-cl1-1-vm-name"</b> \
+
+```bash
+# If the pacemaker version is 2.0.4-6.el8 or greater, run the following command:
+sudo pcs stonith create rsc_st_azure fence_azure_arm msi=true resourceGroup="resource group" \
+subscriptionId="subscription id" pcmk_host_map="prod-cl1-0:prod-cl1-0-vm-name;prod-cl1-1:prod-cl1-1-vm-name" \
+power_timeout=240 pcmk_reboot_timeout=900 pcmk_monitor_timeout=120 pcmk_monitor_retries=4 pcmk_action_limit=3 \
+op monitor interval=3600
+
+# If the pacemaker version is less than 2.0.4-6.el8, run the following command:
+sudo pcs stonith create rsc_st_azure fence_azure_arm msi=true resourceGroup="resource group" \
+subscriptionId="subscription id" pcmk_host_map="prod-cl1-0:prod-cl1-0-vm-name;prod-cl1-1:prod-cl1-1-vm-name" \
power_timeout=240 pcmk_reboot_timeout=900 pcmk_monitor_timeout=120 pcmk_monitor_retries=4 pcmk_action_limit=3 pcmk_delay_max=15 \
op monitor interval=3600
-</code></pre>
+```
#### [Service Principal](#tab/spn)
-For RHEL **7.x**, use the following command to configure the fence device:
-<pre><code>sudo pcs stonith create rsc_st_azure fence_azure_arm login="<b>login ID</b>" passwd="<b>password</b>" \
-resourceGroup="<b>resource group</b>" tenantId="<b>tenant ID</b>" subscriptionId="<b>subscription id</b>" \
-<b>pcmk_host_map="prod-cl1-0:prod-cl1-0-vm-name;prod-cl1-1:prod-cl1-1-vm-name"</b> \
+For RHEL **7.x**, use the following command to configure the fence device:
+
+```bash
+sudo pcs stonith create rsc_st_azure fence_azure_arm login="login ID" passwd="password" \
+resourceGroup="resource group" tenantId="tenant ID" subscriptionId="subscription id" \
+pcmk_host_map="prod-cl1-0:prod-cl1-0-vm-name;prod-cl1-1:prod-cl1-1-vm-name" \
power_timeout=240 pcmk_reboot_timeout=900 pcmk_monitor_timeout=120 pcmk_monitor_retries=4 pcmk_action_limit=3 pcmk_delay_max=15 \
op monitor interval=3600
-</code></pre>
+```
For RHEL **8.x/9.x**, use the following command to configure the fence device:
-<pre><code>sudo pcs stonith create rsc_st_azure fence_azure_arm username="<b>login ID</b>" password="<b>password</b>" \
-resourceGroup="<b>resource group</b>" tenantId="<b>tenant ID</b>" subscriptionId="<b>subscription id</b>" \
-<b>pcmk_host_map="prod-cl1-0:prod-cl1-0-vm-name;prod-cl1-1:prod-cl1-1-vm-name"</b> \
+
+```bash
+# If the pacemaker version is 2.0.4-6.el8 or greater, run the following command:
+sudo pcs stonith create rsc_st_azure fence_azure_arm username="login ID" password="password" \
+resourceGroup="resource group" tenantId="tenant ID" subscriptionId="subscription id" \
+pcmk_host_map="prod-cl1-0:prod-cl1-0-vm-name;prod-cl1-1:prod-cl1-1-vm-name" \
+power_timeout=240 pcmk_reboot_timeout=900 pcmk_monitor_timeout=120 pcmk_monitor_retries=4 pcmk_action_limit=3 \
+op monitor interval=3600
+
+# If the pacemaker version is less than 2.0.4-6.el8, run the following command:
+sudo pcs stonith create rsc_st_azure fence_azure_arm username="login ID" password="password" \
+resourceGroup="resource group" tenantId="tenant ID" subscriptionId="subscription id" \
+pcmk_host_map="prod-cl1-0:prod-cl1-0-vm-name;prod-cl1-1:prod-cl1-1-vm-name" \
power_timeout=240 pcmk_reboot_timeout=900 pcmk_monitor_timeout=120 pcmk_monitor_retries=4 pcmk_action_limit=3 pcmk_delay_max=15 \
op monitor interval=3600
-</code></pre>
+```
If you're using a fencing device based on service principal configuration, read [Change from SPN to MSI for Pacemaker clusters using Azure fencing](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/sap-on-azure-high-availability-change-from-spn-to-msi-for/ba-p/3609278) and learn how to convert to managed identity configuration.

> [!TIP]
-> Only configure the `pcmk_delay_max` attribute in two node Pacemaker clusters. For more information on preventing fence races in a two node Pacemaker cluster, see [Delaying fencing in a two node cluster to prevent fence races of "fence death" scenarios](https://access.redhat.com/solutions/54829).
-
+> Only configure the `pcmk_delay_max` attribute in two-node clusters with a pacemaker version less than 2.0.4-6.el8. For more information on preventing fence races in a two-node Pacemaker cluster, see [Delaying fencing in a two node cluster to prevent fence races of "fence death" scenarios](https://access.redhat.com/solutions/54829).
> [!IMPORTANT]
> The monitoring and fencing operations are deserialized. As a result, if there is a longer running monitoring operation and a simultaneous fencing event, there is no delay to the cluster failover due to the already running monitoring operation.

### **[1]** Enable the use of a fencing device
-<pre><code>sudo pcs property set stonith-enabled=true
-</code></pre>
+```bash
+sudo pcs property set stonith-enabled=true
+```
> [!TIP]
> Azure Fence Agent requires outbound connectivity to public end points as documented, along with possible solutions, in [Public endpoint connectivity for VMs using standard ILB](./high-availability-guide-standard-load-balancer-outbound-connections.md).

## Optional fencing configuration

> [!TIP]
> This section is only applicable if you want to configure the special fencing device `fence_kdump`.
-If there is a need to collect diagnostic information within the VM, it may be useful to configure additional fencing device, based on fence agent `fence_kdump`. The `fence_kdump` agent can detect that a node entered kdump crash recovery and can allow the crash recovery service to complete, before other fencing methods are invoked. Note that `fence_kdump` isn't a replacement for traditional fence mechanisms, like Azure Fence Agent when using Azure VMs.
+If there is a need to collect diagnostic information within the VM, it may be useful to configure an additional fencing device based on the fence agent `fence_kdump`. The `fence_kdump` agent can detect that a node entered kdump crash recovery and can allow the crash recovery service to complete before other fencing methods are invoked. Note that `fence_kdump` isn't a replacement for traditional fence mechanisms, like Azure Fence Agent, when using Azure VMs.
> [!IMPORTANT]
-> Be aware that when `fence_kdump` is configured as a first level fencing device, it will introduce delays in the fencing operations and respectively delays in the application resources failover.
->
-> If a crash dump is successfully detected, the fencing will be delayed until the crash recovery service completes. If the failed node is unreachable or if it doesn't respond, the fencing will be delayed by time determined by the configured number of iterations and the `fence_kdump` timeout. For more details, see [How do I configure fence_kdump in a Red Hat Pacemaker cluster](https://access.redhat.com/solutions/2876971).
+> Be aware that when `fence_kdump` is configured as a first level fencing device, it will introduce delays in the fencing operations and respectively delays in the application resources failover.
+>
+> If a crash dump is successfully detected, the fencing will be delayed until the crash recovery service completes. If the failed node is unreachable or if it doesn't respond, the fencing will be delayed by time determined by the configured number of iterations and the `fence_kdump` timeout. For more details, see [How do I configure fence_kdump in a Red Hat Pacemaker cluster](https://access.redhat.com/solutions/2876971).
+>
> The proposed fence_kdump timeout may need to be adapted to the specific environment.
->
-> We recommend to configure `fence_kdump` fencing only when necessary to collect diagnostics within the VM and always in combination with traditional fence method as Azure Fence Agent.
+>
+> We recommend configuring `fence_kdump` fencing only when necessary to collect diagnostics within the VM, and always in combination with a traditional fencing method, such as Azure Fence Agent.
The following Red Hat KBs contain important information about configuring `fence_kdump` fencing:
The following Red Hat KBs contain important information about configuring `fence
* [fence_kdump fails with "timeout after X seconds" in a RHEL 6 or 7 HA cluster with kexec-tools older than 2.0.14](https://access.redhat.com/solutions/2388711)
* For information on how to change the default timeout, see [How do I configure kdump for use with the RHEL 6,7,8 HA Add-On](https://access.redhat.com/articles/67570)
* For information on how to reduce failover delay when using `fence_kdump`, see [Can I reduce the expected delay of failover when adding fence_kdump configuration](https://access.redhat.com/solutions/5512331)
-
-Execute the following optional steps to add `fence_kdump` as a first level fencing configuration, in addition to the Azure Fence Agent configuration.
+
+Execute the following optional steps to add `fence_kdump` as a first level fencing configuration, in addition to the Azure Fence Agent configuration.
+1. **[A]** Verify that kdump is active and configured.
-1. **[A]** Verify that kdump is active and configured.
- ```
+ ```bash
systemctl is-active kdump
# Expected result
# active
```
-2. **[A]** Install the `fence_kdump` fence agent.
- ```
+
+2. **[A]** Install the `fence_kdump` fence agent.
+
+ ```bash
yum install fence-agents-kdump
```
-3. **[1]** Create `fence_kdump` fencing device in the cluster.
- <pre><code>
- pcs stonith create rsc_st_kdump fence_kdump pcmk_reboot_action="off" <b>pcmk_host_list="prod-cl1-0 prod-cl1-1</b>" timeout=30
- </code></pre>
+
+3. **[1]** Create `fence_kdump` fencing device in the cluster.
+
+ ```bash
+ pcs stonith create rsc_st_kdump fence_kdump pcmk_reboot_action="off" pcmk_host_list="prod-cl1-0 prod-cl1-1" timeout=30
+ ```
4. **[1]** Configure fencing levels, so that the `fence_kdump` fencing mechanism is engaged first.
- <pre><code>
- pcs stonith create rsc_st_kdump fence_kdump pcmk_reboot_action="off" <b>pcmk_host_list="prod-cl1-0 prod-cl1-1</b>"
- pcs stonith level add 1 <b>prod-cl1-0</b> rsc_st_kdump
- pcs stonith level add 1 <b>prod-cl1-1</b> rsc_st_kdump
- pcs stonith level add 2 <b>prod-cl1-0</b> rsc_st_azure
- pcs stonith level add 2 <b>prod-cl1-1</b> rsc_st_azure
+
+ ```bash
+ pcs stonith level add 1 prod-cl1-0 rsc_st_kdump
+ pcs stonith level add 1 prod-cl1-1 rsc_st_kdump
+ pcs stonith level add 2 prod-cl1-0 rsc_st_azure
+ pcs stonith level add 2 prod-cl1-1 rsc_st_azure
+
# Check the fencing level configuration
pcs stonith level
# Example output
- # Target: <b>prod-cl1-0</b>
+ # Target: prod-cl1-0
# Level 1 - rsc_st_kdump
# Level 2 - rsc_st_azure
- # Target: <b>prod-cl1-1</b>
+ # Target: prod-cl1-1
# Level 1 - rsc_st_kdump
# Level 2 - rsc_st_azure
- </code></pre>
+ ```
5. **[A]** Allow the required ports for `fence_kdump` through the firewall
- ```
+
+ ```bash
firewall-cmd --add-port=7410/udp
firewall-cmd --add-port=7410/udp --permanent
```
-6. **[A]** Ensure that `initramfs` image file contains `fence_kdump` and `hosts` files. For details see [How do I configure fence_kdump in a Red Hat Pacemaker cluster](https://access.redhat.com/solutions/2876971).
- ```
+6. **[A]** Ensure that `initramfs` image file contains `fence_kdump` and `hosts` files. For details see [How do I configure fence_kdump in a Red Hat Pacemaker cluster](https://access.redhat.com/solutions/2876971).
+
+ ```bash
lsinitrd /boot/initramfs-$(uname -r)kdump.img | egrep "fence|hosts"
# Example output
# -rw-r--r-- 1 root root 208 Jun 7 21:42 etc/hosts
Execute the following optional steps to add `fence_kdump` as a first level fenci
7. **[A]** Perform the `fence_kdump_nodes` configuration in `/etc/kdump.conf` to avoid `fence_kdump` failing with a timeout for some `kexec-tools` versions. For details see [fence_kdump times out when fence_kdump_nodes is not specified with kexec-tools version 2.0.15 or later](https://access.redhat.com/solutions/4498151) and [fence_kdump fails with "timeout after X seconds" in a RHEL 6 or 7 High Availability cluster with kexec-tools versions older than 2.0.14](https://access.redhat.com/solutions/2388711). The example configuration for a two node cluster is presented below. After making a change in `/etc/kdump.conf`, the kdump image must be regenerated. That can be achieved by restarting the `kdump` service.
- <pre><code>
+ ```bash
vi /etc/kdump.conf
- # On node <b>prod-cl1-0</b> make sure the following line is added
- fence_kdump_nodes <b>prod-cl1-1</b>
- # On node <b>prod-cl1-1</b> make sure the following line is added
- fence_kdump_nodes <b>prod-cl1-0</b>
-
+ # On node prod-cl1-0 make sure the following line is added
+ fence_kdump_nodes prod-cl1-1
+ # On node prod-cl1-1 make sure the following line is added
+ fence_kdump_nodes prod-cl1-0
+
# Restart the service on each node
systemctl restart kdump
- </code></pre>
+ ```
8. Test the configuration by crashing a node. For details see [How do I configure fence_kdump in a Red Hat Pacemaker cluster](https://access.redhat.com/solutions/2876971).

   > [!IMPORTANT]
- > If the cluster is already in productive use, plan the test accordingly as crashing a node will have an impact on the application.
+ > If the cluster is already in productive use, plan the test accordingly as crashing a node will have an impact on the application.
- ```
+ ```bash
echo c > /proc/sysrq-trigger
```

## Next steps
-* [Azure Virtual Machines planning and implementation for SAP][planning-guide]
-* [Azure Virtual Machines deployment for SAP][deployment-guide]
-* [Azure Virtual Machines DBMS deployment for SAP][dbms-guide]
-* To learn how to establish high availability and plan for disaster recovery of SAP HANA on Azure VMs, see [High Availability of SAP HANA on Azure Virtual Machines (VMs)][sap-hana-ha]
+* [Azure Virtual Machines planning and implementation for SAP][planning-guide].
+* [Azure Virtual Machines deployment for SAP][deployment-guide].
+* [Azure Virtual Machines DBMS deployment for SAP][dbms-guide].
+* To learn how to establish high availability and plan for disaster recovery of SAP HANA on Azure VMs, see [High Availability of SAP HANA on Azure Virtual Machines (VMs)][sap-hana-ha].
sap High Availability Guide Rhel https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/high-availability-guide-rhel.md
Follow these steps to install an SAP application server.
rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started nw1-cl-0
```
-1. Simulate node crash
+2. Simulate node crash
Resource state before starting the test:
Follow these steps to install an SAP application server.
rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started nw1-cl-1
```
-1. Kill message server process
+3. Block network communication
+
+ Resource state before starting the test:
+
+ ```text
+ rsc_st_azure (stonith:fence_azure_arm): Started nw1-cl-0
+ Resource Group: g-NW1_ASCS
+ fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started nw1-cl-0
+ nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started nw1-cl-0
+ vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started nw1-cl-0
+ rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started nw1-cl-0
+ Resource Group: g-NW1_AERS
+ fs_NW1_AERS (ocf::heartbeat:Filesystem): Started nw1-cl-1
+ nc_NW1_AERS (ocf::heartbeat:azure-lb): Started nw1-cl-1
+ vip_NW1_AERS (ocf::heartbeat:IPaddr2): Started nw1-cl-1
+ rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started nw1-cl-1
+ ```
+
+ Execute a firewall rule to block the communication on one of the nodes.
+
+ ```bash
+ # Execute iptable rule on nw1-cl-0 (10.0.0.7) to block the incoming and outgoing traffic to nw1-cl-1 (10.0.0.8)
+ iptables -A INPUT -s 10.0.0.8 -j DROP; iptables -A OUTPUT -d 10.0.0.8 -j DROP
+ ```
+
+ When cluster nodes can't communicate with each other, there's a risk of a split-brain scenario. In such situations, cluster nodes try to fence each other simultaneously, resulting in a fence race. To avoid such a situation, it's recommended to set the [priority-fencing-delay](https://access.redhat.com/solutions/5110521) property in the cluster configuration (applicable only for [pacemaker-2.0.4-6.el8](https://access.redhat.com/errata/RHEA-2020:4804) or higher).
+
+ By enabling the priority-fencing-delay property, the cluster introduces an additional delay in the fencing action specifically on the node hosting the ASCS resource, allowing that node to win the fence race. A configuration sketch follows below.
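The sketch below mirrors the priority-fencing-delay configuration shown earlier in this digest, reusing the same resource name and values:

```bash
# Give resources a default priority, raise the priority of the ASCS
# instance, then set the fencing delay (pacemaker-2.0.4-6.el8 or higher)
sudo pcs resource defaults update priority=1
sudo pcs resource update rsc_sap_NW1_ASCS00 meta priority=10
sudo pcs property set priority-fencing-delay=15s
```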
+
+ Execute the command below to delete the firewall rule.
+
+ ```bash
+ # iptables rules are not persistent and are cleared when the node reboots. If the rules have not been cleared yet, remove them with the following command.
+ iptables -D INPUT -s 10.0.0.8 -j DROP; iptables -D OUTPUT -d 10.0.0.8 -j DROP
+ ```
+
+4. Kill message server process
Resource state before starting the test:
Follow these steps to install an SAP application server.
rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started nw1-cl-0
```
-1. Kill enqueue server process
+5. Kill enqueue server process
Resource state before starting the test:
Follow these steps to install an SAP application server.
rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started nw1-cl-1
```
-1. Kill enqueue replication server process
+6. Kill enqueue replication server process
Resource state before starting the test:
Follow these steps to install an SAP application server.
rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started nw1-cl-1
```
-1. Kill enqueue sapstartsrv process
+7. Kill enqueue sapstartsrv process
Resource state before starting the test:
sap High Availability Guide Suse Nfs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/high-availability-guide-suse-nfs.md
This article describes how to deploy the virtual machines, configure the virtual
This guide describes how to set up a highly available NFS server that is used by two SAP systems, NW1 and NW2. The names of the resources (for example virtual machines, virtual networks) in the example assume that you have used the [SAP file server template][template-file-server] with resource prefix **prod**. > [!NOTE]
-> This article contains references to the terms *slave* and *master*, terms that Microsoft no longer uses. When the terms are removed from the software, we'll remove them from this article.
+> This article contains references to terms that Microsoft no longer uses. When the terms are removed from the software, we'll remove them from this article.
Read the following SAP Notes and papers first
sap High Availability Guide Suse Pacemaker https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/high-availability-guide-suse-pacemaker.md
To create a service principal, do the following:
5. For **Application type**, select **Web**, enter a sign-on URL (for example, *http://localhost*), and then select **Add**. The sign-on URL isn't used and can be any valid URL.
6. Select **Certificates and secrets**, and then select **New client secret**.
-7. Enter a description for a new key, select **Never expires**, and then select **Add**.
+7. Enter a description for a new key, select **Two years**, and then select **Add**.
8. Write down the value, which you'll use as the password for the service principal.
9. Select **Overview**, and then write down the application ID, which you'll use as the username of the service principal.
sap Integration Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/integration-get-started.md
Select an area for resources about how to integrate SAP and Azure in that space.
| [Microsoft Teams](#microsoft-teams) | Discover collaboration scenarios boosting your daily productivity by interacting with your SAP applications directly from Microsoft Teams. |
| [Microsoft Power Platform](#microsoft-power-platform) | Learn about the available [out-of-the-box SAP applications](/power-automate/sap-integration/solutions) enabling your business users to achieve more with less. |
| [SAP Fiori](#sap-fiori) | Increase performance and security of your SAP Fiori applications by integrating them with Azure services. |
-| [Azure Active Directory (Azure AD)](#azure-ad) | Ensure end-to-end SAP user authentication and authorization with Azure Active Directory. Single sign-on (SSO) and multi-factor authentication (MFA) are the foundation for a secure and seamless user experience. |
+| [Microsoft Entra ID (formerly Azure Active Directory)](#microsoft-entra-id-formerly-azure-ad) | Ensure end-to-end SAP user authentication and authorization with Microsoft Entra ID. Single sign-on (SSO) and multi-factor authentication (MFA) are the foundation for a secure and seamless user experience. |
| [Azure Integration Services](#azure-integration-services) | Connect your SAP workloads with your end users, business partners, and their systems with world-class integration services. Learn about co-development efforts that enable SAP Event Mesh to exchange cloud events with Azure Event Grid, understand how you can achieve high-availability for services like SAP Cloud Integration, automate your SAP invoice processing with Logic Apps and Azure AI services and more. |
| [App Development in any language including ABAP and DevOps](#app-development-in-any-language-including-abap-and-devops) | Apply best-in-class developer tooling to your SAP app developments and DevOps processes. |
| [Azure Data Services](#azure-data-services) | Learn how to integrate your SAP data with Data Services like Azure Synapse Analytics, Azure Data Lake Storage, Azure Data Factory, Power BI, Data Warehouse Cloud, Analytics Cloud, which connector to choose, tune performance, efficiently troubleshoot, and more. |
-| [Threat Monitoring with Microsoft Sentinel for SAP](#microsoft-sentinel) | Learn how to best secure your SAP workload with Microsoft Sentinel, prevent incidents from happening and detect and respond to threats in real-time with this [SAP certified](https://www.sap.com/dmc/exp/2013_09_adpd/enEN/#/solutions?id=s:33db1376-91ae-4f36-a435-aafa892a88d8) solution. |
+| [Threat Monitoring and Response Automation with Microsoft Security Services for SAP](#microsoft-security-for-sap) | Learn how to best secure your SAP workload with Microsoft Defender for Cloud and the [SAP certified](https://www.sap.com/dmc/exp/2013_09_adpd/enEN/#/solutions?id=s:33db1376-91ae-4f36-a435-aafa892a88d8) Microsoft Sentinel solution. Prevent incidents from happening, detect and respond to threats in real-time. |
| [SAP Business Technology Platform (BTP)](#sap-btp) | Discover integration scenarios like SAP Private Link to securely and efficiently connect your BTP apps to your Azure workloads. |

### Azure OpenAI service
For more information about integration with [Azure OpenAI service](/azure/ai-ser
Also see these SAP resources: -- [empower SAP RISE enterprise users with Azure OpenAI in multi-cloud environment](https://blogs.sap.com/2023/02/14/empower-sap-rise-enterprise-users-with-chatgpt-in-multi-cloud-environment/)
+- [empower SAP RISE enterprise users with Azure OpenAI in multicloud environment](https://blogs.sap.com/2023/02/14/empower-sap-rise-enterprise-users-with-chatgpt-in-multi-cloud-environment/)
- [Consume OpenAI services (GPT) through CAP & SAP BTP, AI Core](https://github.com/SAP-samples/azure-openai-aicore-cap-api) - [SAP SuccessFactors Helps HR Solve Skills Gap with Generative AI | SAP News](https://news.sap.com/2023/05/sap-successfactors-helps-hr-solve-skills-gap-with-generative-ai/)
Also see the following SAP resources:
- [Azure CDN for SAPUI5 libraries](https://blogs.sap.com/2021/03/22/sap-fiori-using-azure-cdn-for-sapui5-libraries/) - [Web Application Firewall Setup for Internet facing SAP Fiori Apps](https://blogs.sap.com/2020/12/03/sap-on-azure-application-gateway-web-application-firewall-waf-v2-setup-for-internet-facing-sap-fiori-apps/)
-### Azure AD
+### Microsoft Entra ID (formerly Azure AD)
For more information about integration with Azure AD, see the following Azure documentation:
For more information about using SAP with Azure Integration services, see the fo
- [Connect to SAP from workflows in Azure Logic Apps](../../logic-apps/logic-apps-using-sap-connector.md) - [Import SAP OData metadata as an API into Azure API Management](../../api-management/sap-api.md) - [Apply SAP Principal Propagation to your Azure hosted APIs](https://github.com/Azure/api-management-policy-snippets/blob/master/examples/Request%20OAuth2%20access%20token%20from%20SAP%20using%20AAD%20JWT%20token.xml)
+- [Using Logic Apps (Standard) to connect with SAP BAPIs and RFC](https://www.youtube.com/watch?v=ZmOPPtIYYM4)
Also see the following SAP resources:
For more information about integrating SAP with Microsoft services natively, see
- [Use community-driven OData SDKs with Azure Functions](https://github.com/Azure/azure-sdk-for-sap-odata) Also see the following SAP resources: -- [SAP BTP ABAP Environment (aka. Steampunk) integration with Microsoft services](https://blogs.sap.com/2023/06/06/kick-start-your-sap-abap-platform-integration-journey-with-microsoft/)-- [SAP S/4HANA Cloud, private edition ΓÇô ABAP Environment (aka. Embedded Steampunk) integration with Microsoft services](https://blogs.sap.com/2023/06/06/kick-start-your-sap-abap-platform-integration-journey-with-microsoft/)
+- [SAP BTP ABAP Environment (also known as Steampunk) integration with Microsoft services](https://blogs.sap.com/2023/06/06/kick-start-your-sap-abap-platform-integration-journey-with-microsoft/)
+- [SAP S/4HANA Cloud, private edition ΓÇô ABAP Environment (also known as Embedded Steampunk) integration with Microsoft services](https://blogs.sap.com/2023/06/06/kick-start-your-sap-abap-platform-integration-journey-with-microsoft/)
- [dotNET speaks OData too, how to implement Azure App Service with SAP Gateway](https://blogs.sap.com/2021/08/12/.net-speaks-odata-too-how-to-implement-azure-app-service-with-sap-odata-gateway/) - [Apply cloud native deployment practice blue-green to SAP BTP apps with Azure DevOps](https://blogs.sap.com/2019/12/20/go-blue-green-for-your-cloud-foundry-app-from-webide-with-azure-devops/)
Also see the following SAP resources:
- [Integrate SAP Data Warehouse Cloud with Power BI and Azure Synapse Analytics](https://blogs.sap.com/2022/07/27/your-sap-on-azure-part-28-integrate-sap-data-warehouse-cloud-with-powerbi-and-azure-synapse/) - [Extend SAP Integrated Business Planning forecasting algorithms with Azure Machine Learning](https://blogs.sap.com/2022/10/03/microsoft-azure-machine-learning-for-supply-chain-planning/)
-### Microsoft Sentinel
+### Microsoft Security for SAP
+
+Protect your data, apps, and infrastructure against rapidly evolving cyber threats with cloud security services from Microsoft. Artificial intelligence (AI) and machine learning (ML) backed capabilities are required to keep up with the pace.
+
+Use [Microsoft Defender for Cloud](../../defender-for-cloud/defender-for-cloud-introduction.md) to secure your cloud infrastructure surrounding the SAP system, including automated responses.
+
+Complementing that, use the [SAP certified](https://www.sap.com/dmc/exp/2013_09_adpd/enEN/#/solutions?id=s:33db1376-91ae-4f36-a435-aafa892a88d8) solution [Microsoft Sentinel](../../sentinel/sap/sap-solution-security-content.md) to protect your SAP system from within, using signals from the SAP Audit Log among others.
+
+Learn more about identity focused integration capabilities that power the analysis on Defender and Sentinel via the [Microsoft Entra ID section](#microsoft-entra-id-formerly-azure-ad).
+
+#### Microsoft Defender for Cloud
+
+The [Defender product family](../../defender-for-cloud/defender-for-cloud-introduction.md) consists of multiple products tailored to provide cloud security posture management (CSPM) and cloud workload protection (CWPP) for the various workload types. The excerpt below serves as an entry point to start securing your SAP system.
+
+- Defender for Servers (SAP hosts)
+ - [Protect your SAP hosts with Defender](../../defender-for-cloud/defender-for-servers-introduction.md) including OS specific Endpoint protection with Microsoft Defender for Endpoint (MDE)
+ - [Microsoft Defender for Endpoint on Linux](/microsoft-365/security/defender-endpoint/microsoft-defender-endpoint-linux)
+ - [Microsoft Defender for Endpoint on Windows](/microsoft-365/security/defender-endpoint/microsoft-defender-endpoint)
+ - [Enable Defender for Servers](../../defender-for-cloud/tutorial-enable-servers-plan.md#enable-the-defender-for-servers-plan)
+- Defender for Storage (SAP SMB file shares on Azure)
+ - [Protect your SAP SMB file shares with Defender](../../defender-for-cloud/defender-for-storage-introduction.md)
+ - [Enable Defender for Storage](../../defender-for-cloud/tutorial-enable-storage-plan.md)
+- Defender for APIs (SAP Gateway, SAP Business Technology Platform, SAP SaaS)
+ - [Protect your OpenAPI APIs with Defender for APIs](../../defender-for-cloud/defender-for-apis-introduction.md)
+ - [Enable the Defender for APIs](../../defender-for-cloud/defender-for-apis-deploy.md)
+
+See SAP's recommendation to use antivirus software for SAP hosts and systems on both Linux and Windows based platforms [here](https://wiki.scn.sap.com/wiki/display/Basis/Protecting+SAP+systems+using+antivirus+softwares). Be aware that the threat landscape has evolved from file-based attacks to file-less attacks. Therefore, the protection approach has to evolve beyond pure antivirus capabilities too.
+
+For more information about using Microsoft Defender for Endpoint (MDE) via Microsoft Defender for Server for SAP applications regarding `Next-generation protection` (AntiVirus) and `Endpoint Detection and Response` (EDR), see the following Microsoft resources:
+
+- [SAP Applications and Microsoft Defender for Linux | Microsoft TechCommunity](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/sap-applications-and-microsoft-defender-for-linux/ba-p/3675480)
+- [Enable the Microsoft Defender for Endpoint integration](../../defender-for-cloud/integration-defender-for-endpoint.md#enable-the-microsoft-defender-for-endpoint-integration)
+- [Common mistakes to avoid when defining exclusions](/microsoft-365/security/defender-endpoint/common-exclusion-mistakes-microsoft-defender-antivirus)
+
+Also see the following SAP resources:
+
+- [2808515 - Installing security software on SAP servers running on Linux](https://me.sap.com/notes/2808515)
+- [1730997 - Unrecommended versions of antivirus software](https://me.sap.com/notes/1730997)
+
+> [!Note]
+> It is **not recommended** to exclude files, paths or processes from EDR because it creates blind spots for Defender. If exclusions are required nevertheless, open a support case with Microsoft Support via the Defender365 Portal specifying executables and/or paths to exclude. Follow the same process for tuning of real-time scans.
+
+> [!Note]
+> Certification for the SAP Virus Scan Interface (NW-VSI) doesn't apply to MDE, because it operates outside of the SAP system. It complements Microsoft Sentinel for SAP, which interacts with the SAP system directly. See more details and the SAP certification note for Sentinel below.
+
+> [!Tip]
+> MDE was formerly called Microsoft Defender Advanced Threat Protection (ATP). Older articles or SAP notes still refer to that name.
+
+> [!Tip]
+> Microsoft Defender for Server includes Endpoint detection and response (EDR) features that are provided by Microsoft Defender for Endpoint Plan 2.
+
+#### Microsoft Sentinel for SAP
For more information about [SAP certified](https://www.sap.com/dmc/exp/2013_09_adpd/enEN/#/solutions?id=s:33db1376-91ae-4f36-a435-aafa892a88d8) threat monitoring with Microsoft Sentinel for SAP, see the following Microsoft resources:
For more information about Azure integration with SAP Business Technology Platfo
- [Route Multi-Region Traffic to SAP BTP Services Intelligently with Azure Traffic Manager](https://discovery-center.cloud.sap/missiondetail/3603/) - [Distributed Resiliency of SAP CAP applications using SAP HANA Cloud with Azure Traffic Manager](https://blogs.sap.com/2022/11/12/distributed-resiliency-of-sap-cap-applications-using-sap-hana-cloud-multi-zone-replication-with-azure-traffic-manager/) - [Federate your data from Azure Data Explorer to SAP Data Warehouse Cloud](https://discovery-center.cloud.sap/missiondetail/3433/3473/)-- [Integrate globally available SAP BTP apps with Azure CosmosDB via OData](https://blogs.sap.com/2021/06/11/sap-where-can-i-get-toilet-paper-an-implementation-of-the-geodes-pattern-with-s4-btp-and-azure-cosmosdb/)
+- [Integrate globally available SAP BTP apps with Azure Cosmos DB via OData](https://blogs.sap.com/2021/06/11/sap-where-can-i-get-toilet-paper-an-implementation-of-the-geodes-pattern-with-s4-btp-and-azure-cosmosdb/)
- [Explore your Azure data sources with SAP Data Warehouse Cloud](https://discovery-center.cloud.sap/missiondetail/3656/3699/) - [Building Applications on SAP BTP with Microsoft Services | OpenSAP course](https://open.sap.com/courses/btpma1)
sap Sap Ascs Ha Multi Sid Wsfc Azure Shared Disk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/sap-ascs-ha-multi-sid-wsfc-azure-shared-disk.md
Following are some of the important points to consider with respect to Azure Pre
## Supported OS versions
-Both Windows Server 2016 and Windows Server 2019 are supported (use the latest data center images).
+Windows Server 2016, 2019, and higher are supported (use the latest datacenter images).
-We strongly recommend using **Windows Server 2019 Datacenter**, as:
+We strongly recommend using at least **Windows Server 2019 Datacenter**, as:
- Windows 2019 Failover Cluster Service is Azure aware
- There is added integration and awareness of Azure Host Maintenance and improved experience by monitoring for Azure schedule events.
- It is possible to use Distributed network name (it is the default option). Therefore, there is no need to have a dedicated IP address for the cluster network name. Also, there is no need to configure this IP address on Azure Internal Load Balancer.
sap Sap Hana High Availability Netapp Files Red Hat https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/sap-hana-high-availability-netapp-files-red-hat.md
Title: High availability of SAP HANA Scale-up with ANF on RHEL | Microsoft Docs description: Establish high availability of SAP HANA with ANF on Azure virtual machines (VMs).
vm-linux
Last updated 07/11/2023

# High availability of SAP HANA Scale-up with Azure NetApp Files on Red Hat Enterprise Linux
[deployment-guide]:deployment-guide.md [planning-guide]:planning-guide.md
-[anf-azure-doc]:/azure/azure-netapp-files/
-[anf-avail-matrix]:https://azure.microsoft.com/global-infrastructure/services/?products=netapp&regions=all
-
-[2205917]:https://launchpad.support.sap.com/#/notes/2205917
-[1944799]:https://launchpad.support.sap.com/#/notes/1944799
-[1928533]:https://launchpad.support.sap.com/#/notes/1928533
-[2015553]:https://launchpad.support.sap.com/#/notes/2015553
-[2178632]:https://launchpad.support.sap.com/#/notes/2178632
-[2191498]:https://launchpad.support.sap.com/#/notes/2191498
-[2243692]:https://launchpad.support.sap.com/#/notes/2243692
-[1984787]:https://launchpad.support.sap.com/#/notes/1984787
-[1999351]:https://launchpad.support.sap.com/#/notes/1999351
-[1410736]:https://launchpad.support.sap.com/#/notes/1410736
-[1900823]:https://launchpad.support.sap.com/#/notes/1900823
-[2292690]:https://launchpad.support.sap.com/#/notes/2292690
-[2455582]:https://launchpad.support.sap.com/#/notes/2455582
-[2593824]:https://launchpad.support.sap.com/#/notes/2593824
-[2009879]:https://launchpad.support.sap.com/#/notes/2009879
-[3108302]:https://launchpad.support.sap.com/#/notes/3108302
-
-[sap-swcenter]:https://support.sap.com/en/my-support/software-downloads.html
-
-[sap-hana-ha]:sap-hana-high-availability.md
-[nfs-ha]:high-availability-guide-suse-nfs.md
- This article describes how to configure SAP HANA System Replication in Scale-up deployment, when the HANA file systems are mounted via NFS, using Azure NetApp Files (ANF). In the example configurations and installation commands, instance number **03**, and HANA System ID **HN1** are used. SAP HANA Replication consists of one primary node and at least one secondary node. When steps in this document are marked with the following prefixes, the meaning is as follows:
When steps in this document are marked with the following prefixes, the meaning
- **[A]**: The step applies to all nodes
- **[1]**: The step applies to node1 only
- **[2]**: The step applies to node2 only
-
+ Read the following SAP Notes and papers first:

- SAP Note [1928533](https://launchpad.support.sap.com/#/notes/1928533), which has:
- - The list of Azure VM sizes that are supported for the deployment of SAP software.
- - Important capacity information for Azure VM sizes.
- - The supported SAP software, and operating system (OS) and database combinations.
- - The required SAP kernel version for Windows and Linux on Microsoft Azure.
+ - The list of Azure VM sizes that are supported for the deployment of SAP software.
+ - Important capacity information for Azure VM sizes.
+ - The supported SAP software, and operating system (OS) and database combinations.
+ - The required SAP kernel version for Windows and Linux on Microsoft Azure.
- SAP Note [2015553](https://launchpad.support.sap.com/#/notes/2015553) lists prerequisites for SAP-supported SAP software deployments in Azure.
- SAP Note [405827](https://launchpad.support.sap.com/#/notes/405827) lists the recommended file systems for a HANA environment.
- SAP Note [2002167](https://launchpad.support.sap.com/#/notes/2002167) has recommended OS settings for Red Hat Enterprise Linux.
Read the following SAP Notes and papers first:
- [Azure Virtual Machines DBMS deployment for SAP on Linux][dbms-guide]
- [SAP HANA system replication in pacemaker cluster.](https://access.redhat.com/articles/3004101)
- General RHEL documentation
- - [High Availability Add-On Overview](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/high_availability_add-on_overview/index)
- - [High Availability Add-On Administration.](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/high_availability_add-on_administration/index)
- - [High Availability Add-On Reference.](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/high_availability_add-on_reference/index)
- - [Configure SAP HANA System Replication in Scale-Up in a Pacemaker cluster when the HANA filesystems are on NFS shares](https://access.redhat.com/solutions/5156571)
+ - [High Availability Add-On Overview](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/high_availability_add-on_overview/index)
+ - [High Availability Add-On Administration.](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/high_availability_add-on_administration/index)
+ - [High Availability Add-On Reference.](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/high_availability_add-on_reference/index)
+ - [Configure SAP HANA System Replication in Scale-Up in a Pacemaker cluster when the HANA filesystems are on NFS shares](https://access.redhat.com/solutions/5156571)
- Azure-specific RHEL documentation:
- - [Support Policies for RHEL High Availability Clusters - Microsoft Azure Virtual Machines as Cluster Members.](https://access.redhat.com/articles/3131341)
- - [Installing and Configuring a Red Hat Enterprise Linux 7.4 (and later) High-Availability Cluster on Microsoft Azure.](https://access.redhat.com/articles/3252491)
- - [Configure SAP HANA scale-up system replication up Pacemaker cluster when the HANA file systems are on NFS shares](https://access.redhat.com/solutions/5156571)
+ - [Support Policies for RHEL High Availability Clusters - Microsoft Azure Virtual Machines as Cluster Members.](https://access.redhat.com/articles/3131341)
+ - [Installing and Configuring a Red Hat Enterprise Linux 7.4 (and later) High-Availability Cluster on Microsoft Azure.](https://access.redhat.com/articles/3252491)
+ - [Configure SAP HANA scale-up system replication up Pacemaker cluster when the HANA file systems are on NFS shares](https://access.redhat.com/solutions/5156571)
- [NetApp SAP Applications on Microsoft Azure using Azure NetApp Files](https://www.netapp.com/us/media/tr-4746.pdf)
- [NFS v4.1 volumes on Azure NetApp Files for SAP HANA](./hana-vm-operations-netapp.md)
In order to achieve SAP HANA High Availability of scale-up system on [Azure NetA
![SAP HANA HA Scale-up on ANF](./media/sap-hana-high-availability-rhel/sap-hana-scale-up-netapp-files-red-hat.png)
-SAP HANA filesystems are mounted on NFS shares using Azure NetApp Files on each node. File systems /hana/data, /hana/log, and /hana/shared are unique to each node.
+SAP HANA filesystems are mounted on NFS shares using Azure NetApp Files on each node. File systems /hana/data, /hana/log, and /hana/shared are unique to each node.
Mounted on node1 (**hanadb1**)
Mounted on node2 (**hanadb2**)
- 10.32.2.4:/**hanadb2**-shared-mnt00001 on /hana/shared

> [!NOTE]
-> File systems /hana/shared, /hana/data and /hana/log are not shared between the two nodes. Each cluster node has its own, separate file systems.
+> File systems /hana/shared, /hana/data and /hana/log are not shared between the two nodes. Each cluster node has its own, separate file systems.
The SAP HANA System Replication configuration uses a dedicated virtual hostname and virtual IP addresses. On Azure, a load balancer is required to use a virtual IP address. The presented configuration shows a load balancer with:

- Front-end IP address: 10.32.0.10 for hn1-db
-- Probe Port: 62503
+- Probe Port: 62503
## Set up the Azure NetApp Files infrastructure
For information about the availability of Azure NetApp Files by Azure region, se
### Important considerations
-As you are creating your Azure NetApp Files volumes for SAP HANA Scale-up systems, be aware of the important considerations documented in [NFS v4.1 volumes on Azure NetApp Files for SAP HANA](./hana-vm-operations-netapp.md#important-considerations).
+As you're creating your Azure NetApp Files volumes for SAP HANA Scale-up systems, be aware of the important considerations documented in [NFS v4.1 volumes on Azure NetApp Files for SAP HANA](./hana-vm-operations-netapp.md#important-considerations).
### Sizing of HANA database on Azure NetApp Files

The throughput of an Azure NetApp Files volume is a function of the volume size and service level, as documented in [Service level for Azure NetApp Files](../../azure-netapp-files/azure-netapp-files-service-levels.md).
-While designing the infrastructure for SAP HANA on Azure with Azure NetApp Files, be aware of the recommendations in [NFS v4.1 volumes on Azure NetApp Files for SAP HANA](./hana-vm-operations-netapp.md#sizing-for-hana-database-on-azure-netapp-files).
+While designing the infrastructure for SAP HANA on Azure with Azure NetApp Files, be aware of the recommendations in [NFS v4.1 volumes on Azure NetApp Files for SAP HANA](./hana-vm-operations-netapp.md#sizing-for-hana-database-on-azure-netapp-files).
-The configuration in this article is presented with simple Azure NetApp Files Volumes.
+The configuration in this article is presented with simple Azure NetApp Files Volumes.
> [!IMPORTANT]
-> For production systems, where performance is a key, we recommend to evaluate and consider using [Azure NetApp Files application volume group for SAP HANA](hana-vm-operations-netapp.md#deployment-through-azure-netapp-files-application-volume-group-for-sap-hana-avg).
+> For production systems, where performance is key, we recommend that you evaluate and consider using [Azure NetApp Files application volume group for SAP HANA](hana-vm-operations-netapp.md#deployment-through-azure-netapp-files-application-volume-group-for-sap-hana-avg).
### Deploy Azure NetApp Files resources
The following instructions assume that you've already deployed your [Azure virtu
1. Create a NetApp account in your selected Azure region by following the instructions in [Create a NetApp account](../../azure-netapp-files/azure-netapp-files-create-netapp-account.md).
-2. Set up an Azure NetApp Files capacity pool by following the instructions in [Set up an Azure NetApp Files capacity pool](../../azure-netapp-files/azure-netapp-files-set-up-capacity-pool.md).
+2. Set up an Azure NetApp Files capacity pool by following the instructions in [Set up an Azure NetApp Files capacity pool](../../azure-netapp-files/azure-netapp-files-set-up-capacity-pool.md).
+
+   The HANA architecture presented in this article uses a single Azure NetApp Files capacity pool at the *Ultra* service level. For HANA workloads on Azure, we recommend using an Azure NetApp Files *Ultra* or *Premium* [service level](../../azure-netapp-files/azure-netapp-files-service-levels.md).
- The HANA architecture presented in this article uses a single Azure NetApp Files capacity pool at the *Ultra* Service level. For HANA workloads on Azure, we recommend using an Azure NetApp Files *Ultra* or *Premium* [service Level](../../azure-netapp-files/azure-netapp-files-service-levels.md).
+3. Delegate a subnet to Azure NetApp Files, as described in the instructions in [Delegate a subnet to Azure NetApp Files](../../azure-netapp-files/azure-netapp-files-delegate-subnet.md).
-3. Delegate a subnet to Azure NetApp Files, as described in the instructions in [Delegate a subnet to Azure NetApp Files](../../azure-netapp-files/azure-netapp-files-delegate-subnet.md).
+4. Deploy Azure NetApp Files volumes by following the instructions in [Create an NFS volume for Azure NetApp Files](../../azure-netapp-files/azure-netapp-files-create-volumes.md).
-4. Deploy Azure NetApp Files volumes by following the instructions in [Create an NFS volume for Azure NetApp Files](../../azure-netapp-files/azure-netapp-files-create-volumes.md).
+ As you're deploying the volumes, be sure to select the NFSv4.1 version. Deploy the volumes in the designated Azure NetApp Files subnet. The IP addresses of the Azure NetApp volumes are assigned automatically.
- As you are deploying the volumes, be sure to select the NFSv4.1 version. Deploy the volumes in the designated Azure NetApp Files subnet. The IP addresses of the Azure NetApp volumes are assigned automatically.
+   Keep in mind that the Azure NetApp Files resources and the Azure VMs must be in the same Azure virtual network or in peered Azure virtual networks. For example, hanadb1-data-mnt00001, hanadb1-log-mnt00001, and so on, are the volume names, and nfs://10.32.2.4/hanadb1-data-mnt00001, nfs://10.32.2.4/hanadb1-log-mnt00001, and so on, are the file paths for the Azure NetApp Files volumes.
- Keep in mind that the Azure NetApp Files resources and the Azure VMs must be in the same Azure virtual network or in peered Azure virtual networks. For example, hanadb1-data-mnt00001, hanadb1-log-mnt00001, and so on, are the volume names and nfs://10.32.2.4/hanadb1-data-mnt00001, nfs://10.32.2.4/hanadb1-log-mnt00001, and so on, are the file paths for the Azure NetApp Files volumes.
-
- On **hanadb1**
+ On **hanadb1**
- - Volume hanadb1-data-mnt00001 (nfs://10.32.2.4:/hanadb1-data-mnt00001)
- - Volume hanadb1-log-mnt00001 (nfs://10.32.2.4:/hanadb1-log-mnt00001)
- - Volume hanadb1-shared-mnt00001 (nfs://10.32.2.4:/hanadb1-shared-mnt00001)
-
- On **hanadb2**
+ - Volume hanadb1-data-mnt00001 (nfs://10.32.2.4:/hanadb1-data-mnt00001)
+ - Volume hanadb1-log-mnt00001 (nfs://10.32.2.4:/hanadb1-log-mnt00001)
+ - Volume hanadb1-shared-mnt00001 (nfs://10.32.2.4:/hanadb1-shared-mnt00001)
- - Volume hanadb2-data-mnt00001 (nfs://10.32.2.4:/hanadb2-data-mnt00001)
- - Volume hanadb2-log-mnt00001 (nfs://10.32.2.4:/hanadb2-log-mnt00001)
- - Volume hanadb2-shared-mnt00001 (nfs://10.32.2.4:/hanadb2-shared-mnt00001)
+ On **hanadb2**
+
+ - Volume hanadb2-data-mnt00001 (nfs://10.32.2.4:/hanadb2-data-mnt00001)
+ - Volume hanadb2-log-mnt00001 (nfs://10.32.2.4:/hanadb2-log-mnt00001)
+ - Volume hanadb2-shared-mnt00001 (nfs://10.32.2.4:/hanadb2-shared-mnt00001)
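If you script the deployment, the account, pool, and volume steps above can be sketched with the Azure CLI. This is a sketch only; the resource group, location, VNet, and subnet names are placeholders, while sizes and service levels follow the examples above.

```bash
# Sketch: create the ANF account, an Ultra capacity pool, and one NFSv4.1 data volume.
# Resource group, location, VNet, and subnet names are placeholders.
az netappfiles account create --resource-group hana-rg --name hana-anf --location westus2

az netappfiles pool create --resource-group hana-rg --account-name hana-anf \
  --name hana-pool --location westus2 --service-level Ultra --size 4

az netappfiles volume create --resource-group hana-rg --account-name hana-anf \
  --pool-name hana-pool --name hanadb1-data-mnt00001 --location westus2 \
  --service-level Ultra --usage-threshold 500 --file-path hanadb1-data-mnt00001 \
  --vnet hana-vnet --subnet anf-subnet --protocol-types NFSv4.1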
> [!NOTE]
> All commands to mount /hana/shared in this article are presented for NFSv4.1 /hana/shared volumes.
> If you deployed the /hana/shared volumes as NFSv3 volumes, don't forget to adjust the mount commands for /hana/shared for NFSv3.
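For reference, an NFSv3 variant of the mount commands shown later in this article might look like the following. This is a sketch; the option string is an assumption adapted from the NFSv4.1 commands, with the version switched and `sec=sys` dropped.

```bash
# Sketch of an NFSv3 mount for the same volume; adjust options for your setup.
sudo mount -o rw,vers=3,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_netdev 10.32.2.4:/hanadb1-shared-mnt00001 /hana/shared
```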
-## Deploy Linux virtual machine via Azure portal
+## Deploy Linux virtual machine via Azure portal
-First you need to create the Azure NetApp Files volumes. Then do the following steps:
+This document assumes that you've already deployed a resource group, [Azure Virtual Network](../../virtual-network/virtual-networks-overview.md), and subnet.
-1. Create a resource group.
-2. Create a virtual network.
-3. Choose a [suitable deployment type](./sap-high-availability-architecture-scenarios.md#comparison-of-different-deployment-types-for-sap-workload) for SAP virtual machines. Typically a virtual machine scale set with flexible orchestration.
-4. Create a load balancer (internal). We recommend standard load balancer.
- Select the virtual network created in step 2.
-5. Create Virtual Machine 1 (**hanadb1**).
-6. Create Virtual Machine 2 (**hanadb2**).
-7. While creating virtual machine, we will not be adding any disk as all our mount points will be on NFS shares from Azure NetApp Files.
+Deploy virtual machines for SAP HANA. Choose a suitable RHEL image that is supported for the HANA system. You can deploy a VM in any one of the availability options: virtual machine scale set, availability zone, or availability set.
> [!IMPORTANT]
-> Floating IP is not supported on a NIC secondary IP configuration in load-balancing scenarios. For details see [Azure Load balancer Limitations](../../load-balancer/load-balancer-multivip-overview.md#limitations). If you need additional IP address for the VM, deploy a second NIC.
+>
+> Make sure that the OS you select is SAP certified for SAP HANA on the specific VM types that you plan to use in your deployment. You can look up SAP HANA-certified VM types and their OS releases in [SAP HANA Certified IaaS Platforms](https://www.sap.com/dmc/exp/2014-09-02-hana-hardware/enEN/#/solutions?filters=v:deCertified;ve:24;iaas;v:125;v:105;v:99;v:120). Make sure that you look at the details of the VM type to get the complete list of SAP HANA-supported OS releases for the specific VM type.
+
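For scripted deployments, each VM could be created with the Azure CLI along these lines. This is a sketch; the image URN, VM size, and network names are placeholders that you'd replace with an SAP HANA-certified combination.

```bash
# Sketch: create one HANA node. Image URN, size, and network names are placeholders.
az vm create --resource-group hana-rg --name hanadb1 \
  --image <RHEL-for-SAP-HA-image-URN> --size Standard_M32ls \
  --vnet-name hana-vnet --subnet hana-subnet \
  --admin-username azureuser --generate-ssh-keys
```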
+During VM configuration, we won't be adding any disk, as all our mount points are on NFS shares from Azure NetApp Files. Also, you have the option to create or select an existing load balancer in the networking section. If you're creating a new load balancer, follow these steps:
+
+1. To set up standard load balancer, follow these configuration steps:
+ 1. First, create a front-end IP pool:
+ 1. Open the load balancer, select **frontend IP pool**, and select **Add**.
+ 2. Enter the name of the new front-end IP pool (for example, **hana-frontend**).
+ 3. Set the **Assignment** to **Static** and enter the IP address (for example, **10.32.0.10**).
+ 4. Select **OK**.
+ 5. After the new front-end IP pool is created, note the pool IP address.
+ 2. Create a single back-end pool:
+ 1. Open the load balancer, select **Backend pools**, and then select **Add**.
+ 2. Enter the name of the new back-end pool (for example, **hana-backend**).
+ 3. Select **NIC** for Backend Pool Configuration.
+ 4. Select **Add a virtual machine**.
+ 5. Select the virtual machines of the HANA cluster.
+ 6. Select **Add**.
+ 7. Select **Save**.
+ 3. Next, create a health probe:
+ 1. Open the load balancer, select **health probes**, and select **Add**.
+ 2. Enter the name of the new health probe (for example, **hana-hp**).
+ 3. Select TCP as the protocol and port 625**03**. Keep the **Interval** value set to 5.
+ 4. Select **OK**.
+   4. Next, create the load-balancing rules:
+      1. Open the load balancer, select **load balancing rules**, and select **Add**.
+      2. Enter the name of the new load balancer rule (for example, **hana-lb**).
+      3. Select the front-end IP address, the back-end pool, and the health probe that you created earlier (for example, **hana-frontend**, **hana-backend** and **hana-hp**).
+      4. Increase the idle timeout to 30 minutes.
+      5. Select **HA Ports**.
+      6. Make sure to **enable Floating IP**.
+      7. Select **OK**.
-> [!NOTE]
-> When VMs without public IP addresses are placed in the backend pool of internal (no public IP address) Standard Azure load balancer, there will be no outbound internet connectivity, unless additional configuration is performed to allow routing to public end points. For details on how to achieve outbound connectivity see [Public endpoint connectivity for Virtual Machines using Azure Standard Load Balancer in SAP high-availability scenarios](./high-availability-guide-standard-load-balancer-outbound-connections.md).
+For more information about the required ports for SAP HANA, read the chapter [Connections to Tenant Databases](https://help.sap.com/viewer/78209c1d3a9b41cd8624338e42a12bf6/latest/en-US/7a9343c9f2a2436faa3cfdb5ca00c052.html) in the [SAP HANA Tenant Databases](https://help.sap.com/viewer/78209c1d3a9b41cd8624338e42a12bf6) guide or SAP Note [2388694](https://launchpad.support.sap.com/#/notes/2388694).
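The portal steps above can also be expressed with the Azure CLI. This is a sketch; the resource group and load balancer names are placeholders, while the pool, probe, and rule names reuse the examples above.

```bash
# Sketch: health probe on port 62503 (625 + instance number 03) and an HA-ports rule.
az network lb probe create --resource-group hana-rg --lb-name hana-ilb \
  --name hana-hp --protocol tcp --port 62503 --interval 5

az network lb rule create --resource-group hana-rg --lb-name hana-ilb \
  --name hana-lb --protocol All --frontend-port 0 --backend-port 0 \
  --frontend-ip-name hana-frontend --backend-pool-name hana-backend \
  --probe-name hana-hp --floating-ip true --idle-timeout 30
```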
-8. To set up standard load balancer, follow these configuration steps:
- 1. First, create a front-end IP pool:
- 1. Open the load balancer, select **frontend IP pool**, and select **Add**.
- 1. Enter the name of the new front-end IP pool (for example, **hana-frontend**).
- 1. Set the **Assignment** to **Static** and enter the IP address (for example, **10.32.0.10**).
- 1. Select **OK**.
- 1. After the new front-end IP pool is created, note the pool IP address.
- 1. Create a single back-end pool:
- 1. Open the load balancer, select **Backend pools**, and then select **Add**.
- 1. Enter the name of the new back-end pool (for example, **hana-backend**).
- 2. Select **NIC** for Backend Pool Configuration.
- 1. Select **Add a virtual machine**.
- 1. Select the virtual machines of the HANA cluster.
- 1. Select **Add**.
- 2. Select **Save**.
- 1. Next, create a health probe:
- 1. Open the load balancer, select **health probes**, and select **Add**.
- 1. Enter the name of the new health probe (for example, **hana-hp**).
- 1. Select TCP as the protocol and port 625**03**. Keep the **Interval** value set to 5.
- 1. Select **OK**.
- 1. Next, create the load-balancing rules:
- 1. Open the load balancer, select **load balancing rules**, and select **Add**.
- 1. Enter the name of the new load balancer rule (for example, **hana-lb**).
- 1. Select the front-end IP address, the back-end pool, and the health probe that you created earlier (for example, **hana-frontend**, **hana-backend** and **hana-hp**).
- 2. Increase idle timeout to 30 minutes
- 1. Select **HA Ports**.
- 1. Make sure to **enable Floating IP**.
- 1. Select **OK**.
+> [!IMPORTANT]
+> Floating IP is not supported on a NIC secondary IP configuration in load-balancing scenarios. For details see [Azure Load balancer Limitations](../../load-balancer/load-balancer-multivip-overview.md#limitations). If you need additional IP address for the VM, deploy a second NIC.
-For more information about the required ports for SAP HANA, read the chapter [Connections to Tenant Databases](https://help.sap.com/viewer/78209c1d3a9b41cd8624338e42a12bf6/latest/en-US/7a9343c9f2a2436faa3cfdb5ca00c052.html) in the [SAP HANA Tenant Databases](https://help.sap.com/viewer/78209c1d3a9b41cd8624338e42a12bf6) guide or SAP Note [2388694](https://launchpad.support.sap.com/#/notes/2388694).
+> [!NOTE]
+> When VMs without public IP addresses are placed in the backend pool of internal (no public IP address) Standard Azure load balancer, there will be no outbound internet connectivity, unless additional configuration is performed to allow routing to public end points. For details on how to achieve outbound connectivity see [Public endpoint connectivity for Virtual Machines using Azure Standard Load Balancer in SAP high-availability scenarios](./high-availability-guide-standard-load-balancer-outbound-connections.md).
> [!IMPORTANT]
> Do not enable TCP timestamps on Azure VMs placed behind Azure Load Balancer. Enabling TCP timestamps will cause the health probes to fail. Set parameter **net.ipv4.tcp_timestamps** to **0**. For details see [Load Balancer health probes](../../load-balancer/load-balancer-custom-probe-overview.md). See also SAP note [2382421](https://launchpad.support.sap.com/#/notes/2382421).

## Mount the Azure NetApp Files volume
-1. **[A]** Create mount points for the HANA database volumes.
+1. **[A]** Create mount points for the HANA database volumes.
```bash
sudo mkdir -p /hana/data
sudo mkdir -p /hana/log
sudo mkdir -p /hana/shared
```
For more information about the required ports for SAP HANA, read the chapter [Co
```bash
sudo cat /etc/idmapd.conf
```
+
Example output
+
```output
[General]
Domain = defaultv4iddomain.com
For more information about the required ports for SAP HANA, read the chapter [Co
Nobody-Group = nobody
```
- > [!IMPORTANT]
+ > [!IMPORTANT]
> Make sure to set the NFS domain in /etc/idmapd.conf on the VM to match the default domain configuration on Azure NetApp Files: **defaultv4iddomain.com**. If there's a mismatch between the domain configuration on the NFS client (that is, the VM) and the NFS server (that is, the Azure NetApp Files configuration), then the permissions for files on Azure NetApp volumes that are mounted on the VMs will be displayed as `nobody`.
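If the domain line needs to be changed, a minimal edit could look like this (a sketch; `nfsidmap -c` clears the client's cached ID mappings so the change takes effect):

```bash
# Sketch: set the NFSv4 domain and clear the cached ID mappings.
sudo sed -i 's/^#\?Domain =.*/Domain = defaultv4iddomain.com/' /etc/idmapd.conf
sudo nfsidmap -c
```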
-
-3. **[1]** Mount the node-specific volumes on node1 (**hanadb1**)
+3. **[1]** Mount the node-specific volumes on node1 (**hanadb1**)
```bash
sudo mount -o rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_netdev,sec=sys 10.32.2.4:/hanadb1-shared-mnt00001 /hana/shared
sudo mount -o rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_netdev,sec=sys 10.32.2.4:/hanadb1-log-mnt00001 /hana/log
sudo mount -o rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_netdev,sec=sys 10.32.2.4:/hanadb1-data-mnt00001 /hana/data
```
-
-4. **[2]** Mount the node-specific volumes on node2 (**hanadb2**)
-
+
+4. **[2]** Mount the node-specific volumes on node2 (**hanadb2**)
+   ```bash
    sudo mount -o rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_netdev,sec=sys 10.32.2.4:/hanadb2-shared-mnt00001 /hana/shared
    sudo mount -o rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_netdev,sec=sys 10.32.2.4:/hanadb2-log-mnt00001 /hana/log
    sudo mount -o rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_netdev,sec=sys 10.32.2.4:/hanadb2-data-mnt00001 /hana/data
    ```
For more information about the required ports for SAP HANA, read the chapter [Co
```bash
sudo nfsstat -m
```
- Verify that flag vers is set to 4.1
+
+ Verify that flag vers is set to 4.1
Example from hanadb1
-
+   ```output
    /hana/log from 10.32.2.4:/hanadb1-log-mnt00001
    Flags: rw,noatime,vers=4.1,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.32.0.4,local_lock=none,addr=10.32.2.4
For more information about the required ports for SAP HANA, read the chapter [Co
6. **[A]** Verify **nfs4_disable_idmapping**. It should be set to **Y**. To create the directory structure where **nfs4_disable_idmapping** is located, execute the mount command. You won't be able to manually create the directory under /sys/module, because access is reserved for the kernel and drivers.
- Check nfs4_disable_idmapping
- ```bash
+ Check nfs4_disable_idmapping
+
+ ```bash
sudo cat /sys/module/nfs/parameters/nfs4_disable_idmapping ```
- If you need to set nfs4_disable_idmapping to
+
+   If you need to set nfs4_disable_idmapping to **Y**, run:
+   ```bash
    echo "Y" | sudo tee /sys/module/nfs/parameters/nfs4_disable_idmapping
    ```
+
    Make the configuration permanent
+
    ```bash
    echo "options nfs nfs4_disable_idmapping=Y" | sudo tee -a /etc/modprobe.d/nfs.conf
    ```
- ΓÇïFor more details on how to change **nfs_disable_idmapping** parameter, see [https://access.redhat.com/solutions/1749883](https://access.redhat.com/solutions/1749883).
-
+   For more information on how to change the nfs4_disable_idmapping parameter, see [https://access.redhat.com/solutions/1749883](https://access.redhat.com/solutions/1749883).
## SAP HANA installation
For more information about the required ports for SAP HANA, read the chapter [Co
```

Insert the following lines in the /etc/hosts file. Change the IP address and hostname to match your environment
+
```output
10.32.0.4 hanadb1
10.32.0.5 hanadb2
```
-3. **[A]** Prepare the OS for running SAP HANA on Azure NetApp with NFS, as described in SAP note [3024346 - Linux Kernel Settings for NetApp NFS](https://launchpad.support.sap.com/#/notes/3024346). Create configuration file */etc/sysctl.d/91-NetApp-HANA.conf* for the NetApp configuration settings.
+2. **[A]** Prepare the OS for running SAP HANA on Azure NetApp with NFS, as described in SAP note [3024346 - Linux Kernel Settings for NetApp NFS](https://launchpad.support.sap.com/#/notes/3024346). Create configuration file */etc/sysctl.d/91-NetApp-HANA.conf* for the NetApp configuration settings.
```bash
sudo vi /etc/sysctl.d/91-NetApp-HANA.conf
```
+
Add the following entries in the configuration file
+
```output
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
For more information about the required ports for SAP HANA, read the chapter [Co
net.ipv4.tcp_sack = 1
```
-4. **[A]** Create configuration file */etc/sysctl.d/ms-az.conf* with additional optimization settings.
+3. **[A]** Create configuration file */etc/sysctl.d/ms-az.conf* with additional optimization settings.
```bash
sudo vi /etc/sysctl.d/ms-az.conf
```
+
Add the following entries in the configuration file
+
```output
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv4.tcp_max_syn_backlog = 16348
For more information about the required ports for SAP HANA, read the chapter [Co
```

> [!TIP]
- > Avoid setting net.ipv4.ip_local_port_range and net.ipv4.ip_local_reserved_ports explicitly in the sysctl configuration files to allow SAP Host Agent to manage the port ranges. For more details see SAP note [2382421](https://launchpad.support.sap.com/#/notes/2382421).
+ > Avoid setting net.ipv4.ip_local_port_range and net.ipv4.ip_local_reserved_ports explicitly in the sysctl configuration files to allow SAP Host Agent to manage the port ranges. For more information, see SAP note [2382421](https://launchpad.support.sap.com/#/notes/2382421).
-5. **[A]** Adjust the sunrpc settings, as recommended in SAP note [3024346 - Linux Kernel Settings for NetApp NFS](https://launchpad.support.sap.com/#/notes/3024346).
+4. **[A]** Adjust the sunrpc settings, as recommended in SAP note [3024346 - Linux Kernel Settings for NetApp NFS](https://launchpad.support.sap.com/#/notes/3024346).
```bash
sudo vi /etc/modprobe.d/sunrpc.conf
```

Insert the following line:
+
```output
options sunrpc tcp_max_slot_table_entries=128
```
-2. **[A]** RHEL for HANA Configuration
+5. **[A]** RHEL for HANA Configuration
Configure RHEL as described in the following SAP Notes, based on your RHEL version:
- [2292690 - SAP HANA DB: Recommended OS settings for RHEL 7](https://launchpad.support.sap.com/#/notes/2292690)
- [2777782 - SAP HANA DB: Recommended OS Settings for RHEL 8](https://launchpad.support.sap.com/#/notes/2777782)
- [2455582 - Linux: Running SAP applications compiled with GCC 6.x](https://launchpad.support.sap.com/#/notes/2455582)
- - [2593824 - Linux: Running SAP applications compiled with GCC 7.x](https://launchpad.support.sap.com/#/notes/2593824)
+ - [2593824 - Linux: Running SAP applications compiled with GCC 7.x](https://launchpad.support.sap.com/#/notes/2593824)
- [2886607 - Linux: Running SAP applications compiled with GCC 9.x](https://launchpad.support.sap.com/#/notes/2886607)
-3. **[A]** Install the SAP HANA
-
- Started with HANA 2.0 SPS 01, MDC is the default option. When you install HANA system, SYSTEMDB and a tenant with same SID will be created together. In some case you do not want the default tenant. In case, if you donΓÇÖt want to create initial tenant along with the installation you can follow SAP Note [2629711](https://launchpad.support.sap.com/#/notes/2629711)
-
- Run the **hdblcm** program from the HANA DVD. Enter the following values at the prompt:
- Choose installation: Enter **1** (for install)
- Select additional components for installation: Enter **1**.
- Enter Installation Path [/hana/shared]: press Enter to accept the default
- Enter Local Host Name [..]: Press Enter to accept the default
- Do you want to add additional hosts to the system? (y/n) [n]: **n**
- Enter SAP HANA System ID: Enter **HN1**.
- Enter Instance Number [00]: Enter **03**
- Select Database Mode / Enter Index [1]: press Enter to accept the default
- Select System Usage / Enter Index [4]: enter **4** (for custom)
- Enter Location of Data Volumes [/hana/data]: press Enter to accept the default
- Enter Location of Log Volumes [/hana/log]: press Enter to accept the default
- Restrict maximum memory allocation? [n]: press Enter to accept the default
- Enter Certificate Host Name For Host '...' [...]: press Enter to accept the default
- Enter SAP Host Agent User (sapadm) Password: Enter the host agent user password
- Confirm SAP Host Agent User (sapadm) Password: Enter the host agent user password again to confirm
- Enter System Administrator (hn1adm) Password: Enter the system administrator password
- Confirm System Administrator (hn1adm) Password: Enter the system administrator password again to confirm
- Enter System Administrator Home Directory [/usr/sap/HN1/home]: press Enter to accept the default
- Enter System Administrator Login Shell [/bin/sh]: press Enter to accept the default
- Enter System Administrator User ID [1001]: press Enter to accept the default
- Enter ID of User Group (sapsys) [79]: press Enter to accept the default
- Enter Database User (SYSTEM) Password: Enter the database user password
- Confirm Database User (SYSTEM) Password: Enter the database user password again to confirm
- Restart system after machine reboot? [n]: press Enter to accept the default
- Do you want to continue? (y/n): Validate the summary. Enter **y** to continue
+6. **[A]** Install the SAP HANA
+
+   Starting with HANA 2.0 SPS 01, MDC is the default option. When you install a HANA system, SYSTEMDB and a tenant with the same SID are created together. If you don't want to create the initial tenant along with the installation, you can follow SAP Note [2629711](https://launchpad.support.sap.com/#/notes/2629711).
+
+ Run the **hdblcm** program from the HANA DVD. Enter the following values at the prompt:
+ Choose installation: Enter **1** (for install)
+ Select additional components for installation: Enter **1**.
+ Enter Installation Path [/hana/shared]: press Enter to accept the default
+ Enter Local Host Name [..]: Press Enter to accept the default
+ Do you want to add additional hosts to the system? (y/n) [n]: **n**
+ Enter SAP HANA System ID: Enter **HN1**.
+ Enter Instance Number [00]: Enter **03**
+ Select Database Mode / Enter Index [1]: press Enter to accept the default
+ Select System Usage / Enter Index [4]: enter **4** (for custom)
+ Enter Location of Data Volumes [/hana/data]: press Enter to accept the default
+ Enter Location of Log Volumes [/hana/log]: press Enter to accept the default
+ Restrict maximum memory allocation? [n]: press Enter to accept the default
+ Enter Certificate Host Name For Host '...' [...]: press Enter to accept the default
+ Enter SAP Host Agent User (sapadm) Password: Enter the host agent user password
+ Confirm SAP Host Agent User (sapadm) Password: Enter the host agent user password again to confirm
+ Enter System Administrator (hn1adm) Password: Enter the system administrator password
+ Confirm System Administrator (hn1adm) Password: Enter the system administrator password again to confirm
+ Enter System Administrator Home Directory [/usr/sap/HN1/home]: press Enter to accept the default
+ Enter System Administrator Login Shell [/bin/sh]: press Enter to accept the default
+ Enter System Administrator User ID [1001]: press Enter to accept the default
+ Enter ID of User Group (sapsys) [79]: press Enter to accept the default
+ Enter Database User (SYSTEM) Password: Enter the database user password
+ Confirm Database User (SYSTEM) Password: Enter the database user password again to confirm
+ Restart system after machine reboot? [n]: press Enter to accept the default
+ Do you want to continue? (y/n): Validate the summary. Enter **y** to continue
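The same installation can also be driven non-interactively. This is a sketch; `<media>` is a placeholder for the extracted HANA installation media path, and in practice you'd supply passwords via the configfile or an XML password stream rather than on the command line.

```bash
# Sketch: batch-mode hdblcm with the same values as the prompts above.
# <media> is a placeholder for the extracted HANA installation media path.
sudo <media>/DATA_UNITS/HDB_LCM_LINUX_X86_64/hdblcm --batch --action=install \
  --sid=HN1 --number=03 --sapmnt=/hana/shared \
  --datapath=/hana/data --logpath=/hana/log \
  --configfile=/tmp/hdblcm.cfg
```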
4. **[A]** Upgrade SAP Host Agent
For more information about the required ports for SAP HANA, read the chapter [Co
## Configure SAP HANA system replication
-Follow the steps in Set up [SAP HANA System Replication](./sap-hana-high-availability-rhel.md#configure-sap-hana-20-system-replication) to configure SAP HANA System Replication.
+Follow the steps in [Set up SAP HANA System Replication](./sap-hana-high-availability-rhel.md#configure-sap-hana-20-system-replication) to configure SAP HANA System Replication.
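The linked procedure boils down to commands of this shape, run as hn1adm (a sketch; the site names SITE1/SITE2 are examples, and the backup prerequisite on the primary is omitted here):

```bash
# Sketch: enable replication on the primary (hanadb1) ...
hdbnsutil -sr_enable --name=SITE1

# ... then, on the secondary (hanadb2) with HANA stopped, register against the primary.
hdbnsutil -sr_register --remoteHost=hanadb1 --remoteInstance=03 \
  --replicationMode=sync --name=SITE2
```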
## Cluster configuration
-This section describes necessary steps required for cluster to operate seamlessly when SAP HANA is installed on NFS shares using Azure NetApp Files.
+This section describes the steps required for the cluster to operate seamlessly when SAP HANA is installed on NFS shares using Azure NetApp Files.
### Create a Pacemaker cluster
Follow the steps in [Setting up Pacemaker on Red Hat Enterprise Linux](./high-av
### Implement the Python system replication hook SAPHanaSR
-This is important step to optimize the integration with the cluster and improve the detection when a cluster failover is needed. It is highly recommended to configure the SAPHanaSR Python hook.
-
-1. **[A]** Install the HANA "system replication hook". The hook needs to be installed on both HANA DB nodes.
-
- > [!TIP]
- > The Python hook can only be implemented for HANA 2.0.
-
- 1. Prepare the hook as `root`.
-
- ```bash
- sudo mkdir -p /hana/shared/myHooks
- sudo cp /usr/share/SAPHanaSR/srHook/SAPHanaSR.py /hana/shared/myHooks
- sudo chown -R hn1adm:sapsys /hana/shared/myHooks
- ```
-
- 2. Stop HANA on both nodes. Execute as <sid\>adm:
-
- ```bash
- sapcontrol -nr 03 -function StopSystem
- ```
-
- 3. Adjust `global.ini` on each cluster node.
-
- ```config
- # add to global.ini
- [ha_dr_provider_SAPHanaSR]
- provider = SAPHanaSR
- path = /hana/shared/myHooks
- execution_order = 1
-
- [trace]
- ha_dr_saphanasr = info
- ```
-
-2. **[A]** The cluster requires sudoers configuration on each cluster node for <sid\>adm. In this example that is achieved by creating a new file. Execute the commands as `root`.
- ```bash
- sudo visudo -f /etc/sudoers.d/20-saphana
- ```
- Insert the following lines and then save
- ```output
- Cmnd_Alias SITE1_SOK = /usr/sbin/crm_attribute -n hana_hn1_site_srHook_SITE1 -v SOK -t crm_config -s SAPHanaSR
- Cmnd_Alias SITE1_SFAIL = /usr/sbin/crm_attribute -n hana_hn1_site_srHook_SITE1 -v SFAIL -t crm_config -s SAPHanaSR
- Cmnd_Alias SITE2_SOK = /usr/sbin/crm_attribute -n hana_hn1_site_srHook_SITE2 -v SOK -t crm_config -s SAPHanaSR
- Cmnd_Alias SITE2_SFAIL = /usr/sbin/crm_attribute -n hana_hn1_site_srHook_SITE2 -v SFAIL -t crm_config -s SAPHanaSR
- hn1adm ALL=(ALL) NOPASSWD: SITE1_SOK, SITE1_SFAIL, SITE2_SOK, SITE2_SFAIL
- Defaults!SITE1_SOK, SITE1_SFAIL, SITE2_SOK, SITE2_SFAIL !requiretty
- ```
-
-3. **[A]** Start SAP HANA on both nodes. Execute as <sid\>adm.
-
- ```bash
- sapcontrol -nr 03 -function StartSystem
- ```
-
-4. **[1]** Verify the hook installation. Execute as <sid\>adm on the active HANA system replication site.
-
- ```bash
- cdtrace
- awk '/ha_dr_SAPHanaSR.*crm_attribute/ \
- { printf "%s %s %s %s\n",$2,$3,$5,$16 }' nameserver_*
- ```
- Example output
-
- ```output
- # 2021-04-12 21:36:16.911343 ha_dr_SAPHanaSR SFAIL
- # 2021-04-12 21:36:29.147808 ha_dr_SAPHanaSR SFAIL
- # 2021-04-12 21:37:04.898680 ha_dr_SAPHanaSR SOK
- ```
-
-For more details on the implementation of the SAP HANA system replication hook see [Enable the SAP HA/DR provider hook](https://access.redhat.com/articles/3004101#enable-srhook).
+This is an important step to optimize the integration with the cluster and to improve detection when a cluster failover is needed. We highly recommend that you configure the SAPHanaSR Python hook. Follow the steps in [Implement the Python system replication hook SAPHanaSR](sap-hana-high-availability-rhel.md#implement-the-python-system-replication-hook-saphanasr).
### Configure filesystem resources
-In this example each cluster node has its own HANA NFS filesystems /hana/shared, /hana/data, and /hana/log.
+In this example, each cluster node has its own HANA NFS filesystems /hana/shared, /hana/data, and /hana/log.
1. **[1]** Put the cluster in maintenance mode.
- ```
+ ```bash
sudo pcs property set maintenance-mode=true
```
In this example each cluster node has its own HANA NFS filesystems /hana/shared,
`on-fail=fence` attribute is also added to the monitor operation. With this option, if the monitor operation fails on a node, that node is immediately fenced. Without this option, the default behavior is to stop all resources that depend on the failed resource, then restart the failed resource, then start all the resources that depend on the failed resource. Not only can this behavior take a long time when an SAPHana resource depends on the failed resource, but it also can fail altogether. The SAPHana resource cannot stop successfully if the NFS server holding the HANA executables is inaccessible.
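For illustration, a single NFS filesystem resource carrying these settings could be defined like this (a sketch for the /hana/shared volume on node 1; the names reuse the examples in this article, and the mount options mirror the mount commands above):

```bash
# Sketch: Filesystem resource with on-fail=fence and a read/write monitor check.
sudo pcs resource create hana_shared1 ocf:heartbeat:Filesystem \
  device=10.32.2.4:/hanadb1-shared-mnt00001 directory=/hana/shared fstype=nfs \
  options='rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_netdev,sec=sys' \
  op monitor interval=20s on-fail=fence timeout=120s OCF_CHECK_LEVEL=20 \
  --group hanadb1_nfs
```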
- The suggested timeouts values allow the cluster resources to withstand protocol-specific pause, related to NFSv4.1 lease renewals. For more information see [NFS in NetApp Best practice](https://www.netapp.com/media/10720-tr-4067.pdf). The timeouts in the above configuration may need to be adapted to the specific SAP setup.
+   The suggested timeout values allow the cluster resources to withstand protocol-specific pauses related to NFSv4.1 lease renewals. For more information, see [NFS in NetApp Best practice](https://www.netapp.com/media/10720-tr-4067.pdf). The timeouts in the above configuration may need to be adapted to the specific SAP setup.
- For workloads, that require higher throughput, consider using the `nconnect` mount option, as described in [NFS v4.1 volumes on Azure NetApp Files for SAP HANA](./hana-vm-operations-netapp.md#nconnect-mount-option). Check if `nconnect` is [supported by Azure NetApp Files](../../azure-netapp-files/performance-linux-mount-options.md#nconnect) on your Linux release.
+   For workloads that require higher throughput, consider using the `nconnect` mount option, as described in [NFS v4.1 volumes on Azure NetApp Files for SAP HANA](./hana-vm-operations-netapp.md#nconnect-mount-option). Check if `nconnect` is [supported by Azure NetApp Files](../../azure-netapp-files/performance-linux-mount-options.md#nconnect) on your Linux release.
4. **[1]** Configuring Location Constraints
In this example each cluster node has its own HANA NFS filesystems /hana/shared,
7. **[1]** Creating Ordering Constraints

   Configure ordering constraints so that a node's attribute resources start only after all of the node's NFS mounts are mounted.
-
+   ```bash
    sudo pcs constraint order hanadb1_nfs then hana_nfs1_active
    sudo pcs constraint order hanadb2_nfs then hana_nfs2_active
    ```

    > [!TIP]
- > If your configuration includes file systems, outside of group `hanadb1_nfs` or `hanadb2_nfs`, then include the `sequential=false` option, so that there are no ordering dependencies among the file systems. All file systems must start before `hana_nfs1_active`, but they do not need to start in any order relative to each other. For more details see [How do I configure SAP HANA System Replication in Scale-Up in a Pacemaker cluster when the HANA filesystems are on NFS shares](https://access.redhat.com/solutions/5156571)
+   > If your configuration includes file systems outside of group `hanadb1_nfs` or `hanadb2_nfs`, then include the `sequential=false` option, so that there are no ordering dependencies among the file systems. All file systems must start before `hana_nfs1_active`, but they do not need to start in any order relative to each other. For more information, see [How do I configure SAP HANA System Replication in Scale-Up in a Pacemaker cluster when the HANA filesystems are on NFS shares](https://access.redhat.com/solutions/5156571)
### Configure SAP HANA cluster resources
In this example each cluster node has its own HANA NFS filesystems /hana/shared,
2. **[1]** Configure constraints between the SAP HANA resources and the NFS mounts
- Location rule constraints will be set so that the SAP HANA resources can run on a node only if all of the node's NFS mounts are mounted.
+ Location rule constraints are set so that the SAP HANA resources can run on a node only if all of the node's NFS mounts are mounted.
```bash
sudo pcs constraint location SAPHanaTopology_HN1_03-clone rule score=-INFINITY hana_nfs1_active ne true and hana_nfs2_active ne true
```
+
On RHEL 7.x
+
```bash
sudo pcs constraint location SAPHana_HN1_03-master rule score=-INFINITY hana_nfs1_active ne true and hana_nfs2_active ne true
```
- On RHEL 8.x
+
+ On RHEL 8.x/9.x
+   ```bash
    sudo pcs constraint location SAPHana_HN1_03-clone rule score=-INFINITY hana_nfs1_active ne true and hana_nfs2_active ne true
    ```
+
    Take the cluster out of maintenance mode
+
    ```bash
    sudo pcs property set maintenance-mode=false
    ```

    Check the status of the cluster and all the resources

    > [!NOTE]
- > This article contains references to the term *slave*, a term that Microsoft no longer uses. When the term is removed from the software, weΓÇÖll remove it from this article.
-
+   > This article contains references to a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.
+   ```bash
    sudo pcs status
    ```
+
    Example output
+
    ```output
    Online: [ hanadb1 hanadb2 ]
Starting with SAP HANA 2.0 SPS 01 SAP allows Active/Read-Enabled setups for SAP
The additional configuration required to manage HANA Active/Read enabled system replication in a Red Hat high availability cluster with second virtual IP is described in [Configure HANA Active/Read Enabled System Replication in Pacemaker cluster](./sap-hana-high-availability-rhel.md#configure-hana-activeread-enabled-system-replication-in-pacemaker-cluster).
-Before proceeding further, make sure you have fully configured Red Hat High Availability Cluster managing SAP HANA database as described in above segments of the documentation.
-
+Before proceeding further, make sure that you've fully configured the Red Hat High Availability Cluster managing the SAP HANA database, as described in the preceding sections of this documentation.
## Test the cluster setup
-This section describes how you can test your setup.
+This section describes how you can test your setup.
-1. Before you start a test, make sure that Pacemaker does not have any failed action (via pcs status), there are no unexpected location constraints (for example leftovers of a migration test) and that HANA system replication is sync state, for example with systemReplicationStatus:
+1. Before you start a test, make sure that Pacemaker doesn't have any failed action (via pcs status), that there are no unexpected location constraints (for example, leftovers of a migration test), and that HANA system replication is in sync state, for example with systemReplicationStatus:
```bash sudo su - hn1adm -c "python /usr/sap/HN1/HDB03/exe/python_support/systemReplicationStatus.py"
This section describes how you can test your setup.
2. Verify the cluster configuration for a failure scenario when a node loses access to the NFS share (/hana/shared)

   The SAP HANA resource agents depend on binaries stored on `/hana/shared` to perform operations during failover. File system `/hana/shared` is mounted over NFS in the presented scenario.
- It is difficult to simulate a failure, where one of the servers loses access to the NFS share. A test that can be performed is to re-mount the file system as read-only.
- This approach validates that the cluster will be able to failover, if access to `/hana/shared` is lost on the active node.
-
+   It's difficult to simulate a failure where one of the servers loses access to the NFS share. A test that can be performed is to remount the file system as read-only.
+   This approach validates that the cluster will be able to fail over if access to `/hana/shared` is lost on the active node.
- **Expected Result:** On making `/hana/shared` as read-only file system, the `OCF_CHECK_LEVEL` attribute of the resource `hana_shared1` which performs read/write operation on file system will fail as it is not able to write anything on the file system and will perform HANA resource failover. The same result is expected when your HANA node loses access to the NFS shares.
+   **Expected result:** On making `/hana/shared` a read-only file system, the monitor operation of the resource `hana_shared1`, which performs a read/write check on the file system (`OCF_CHECK_LEVEL`), fails because it isn't able to write anything to the file system, and it triggers a HANA resource failover. The same result is expected when your HANA node loses access to the NFS shares.
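A minimal way to trigger this condition is a read-only remount (a sketch; run it on the node currently hosting the HANA primary):

```bash
# Sketch: simulate loss of write access to /hana/shared on the active node.
sudo mount -o remount,ro /hana/shared
```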
Resource state before starting the test:

```bash
sudo pcs status
```
+
Example output
+
```output
Full list of resources:
rsc_hdb_azr_agt (stonith:fence_azure_arm): Started hanadb1
This section describes how you can test your setup.
```bash
sudo pcs status
```
+
Example output
+
```output
Full list of resources:
This section describes how you can test your setup.
## Next steps
-* [Azure Virtual Machines planning and implementation for SAP][planning-guide]
-* [Azure Virtual Machines deployment for SAP][deployment-guide]
-* [Azure Virtual Machines DBMS deployment for SAP][dbms-guide]
-* [NFS v4.1 volumes on Azure NetApp Files for SAP HANA](./hana-vm-operations-netapp.md)
+- [Azure Virtual Machines planning and implementation for SAP][planning-guide]
+- [Azure Virtual Machines deployment for SAP][deployment-guide]
+- [Azure Virtual Machines DBMS deployment for SAP][dbms-guide]
+- [NFS v4.1 volumes on Azure NetApp Files for SAP HANA](./hana-vm-operations-netapp.md)
sap Sap Hana High Availability Netapp Files Suse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/sap-hana-high-availability-netapp-files-suse.md
Read the following SAP Notes and papers first:
- [Azure Virtual Machines planning and implementation for SAP on Linux](./planning-guide.md)

>[!NOTE]
-> This article contains references to the term *slave*, a term that Microsoft no longer uses. When the term is removed from the software, weΓÇÖll remove it from this article.
+> This article contains references to a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.
## Overview
sap Sap Hana High Availability Rhel https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/sap-hana-high-availability-rhel.md
documentationcenter: - Previously updated : 04/27/2023+ Last updated : 08/23/2023 # High availability of SAP HANA on Azure VMs on Red Hat Enterprise Linux
[deployment-guide]:deployment-guide.md [planning-guide]:planning-guide.md
-[2205917]:https://launchpad.support.sap.com/#/notes/2205917
-[1944799]:https://launchpad.support.sap.com/#/notes/1944799
[1928533]:https://launchpad.support.sap.com/#/notes/1928533 [2015553]:https://launchpad.support.sap.com/#/notes/2015553 [2178632]:https://launchpad.support.sap.com/#/notes/2178632 [2191498]:https://launchpad.support.sap.com/#/notes/2191498 [2243692]:https://launchpad.support.sap.com/#/notes/2243692
-[1984787]:https://launchpad.support.sap.com/#/notes/1984787
[1999351]:https://launchpad.support.sap.com/#/notes/1999351 [2388694]:https://launchpad.support.sap.com/#/notes/2388694
-[2292690]:https://launchpad.support.sap.com/#/notes/2292690
-[2455582]:https://launchpad.support.sap.com/#/notes/2455582
[2002167]:https://launchpad.support.sap.com/#/notes/2002167 [2009879]:https://launchpad.support.sap.com/#/notes/2009879 [3108302]:https://launchpad.support.sap.com/#/notes/3108302 [sap-swcenter]:https://launchpad.support.sap.com/#/softwarecenter
-[template-multisid-db]:https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fapplication-workloads%2Fsap%2Fsap-3-tier-marketplace-image-multi-sid-db-md%2Fazuredeploy.json
For on-premises development, you can use either HANA System Replication or use shared storage to establish high availability for SAP HANA. On Azure virtual machines (VMs), HANA System Replication on Azure is currently the only supported high availability function.
SAP HANA System Replication setup uses a dedicated virtual hostname and virtual
The Azure Marketplace contains images qualified for SAP HANA with the High Availability add-on, which you can use to deploy new virtual machines using various versions of Red Hat.
-### Deploy with a template
-
-You can use one of the quickstart templates that are on GitHub to deploy all the required resources. The template deploys the virtual machines, the load balancer, the availability set, and so on.
-To deploy the template, follow these steps:
-
-1. Open the [database template][template-multisid-db] on the Azure portal.
-1. Enter the following parameters:
- * **Sap System ID**: Enter the SAP system ID of the SAP system you want to install. The ID is used as a prefix for the resources that are deployed.
- * **Os Type**: Select one of the Linux distributions. For this example, select **RHEL 7**.
- * **Db Type**: Select **HANA**.
- * **Sap System Size**: Enter the number of SAPS that the new system is going to provide. If you're not sure how many SAPS the system requires, ask your SAP Technology Partner or System Integrator.
- * **System Availability**: Select **HA**.
- * **Admin Username, Admin Password or SSH key**: A new user is created that can be used to sign in to the machine.
- * **Subnet ID**: If you want to deploy the VM into an existing VNet where you have a subnet defined the VM should be assigned to, name the ID of that specific subnet. The ID usually looks like **/subscriptions/\<subscription ID>/resourceGroups/\<resource group name>/providers/Microsoft.Network/virtualNetworks/\<virtual network name>/subnets/\<subnet name>**. Leave empty, if you want to create a new virtual network
-
-### Manual deployment
-
-1. Create a resource group.
-1. Create a virtual network.
-1. Choose a [suitable deployment type](./sap-high-availability-architecture-scenarios.md#comparison-of-different-deployment-types-for-sap-workload) for SAP virtual machines. Typically a virtual machine scale set with flexible orchestration.
-1. Create a load balancer (internal). We recommend [standard load balancer](../../load-balancer/load-balancer-overview.md).
- * Select the virtual network created in step 2.
-1. Create virtual machine 1.
- Use a properly supported version of Red Hat for SAP + High Availability, supported for your version of SAP HANA. This page will use the image [Red Hat Enterprise Linux- SAP, HA, Update Services](https://portal.azure.com/#create/redhat.rhel-sap-ha).
- Select the availability set created in step 3.
-1. Create virtual machine 2.
- Use a properly supported version of Red Hat for SAP + High Availability, supported for your version of SAP HANA. This page will use the image [Red Hat Enterprise Linux- SAP, HA, Update Services](https://portal.azure.com/#create/redhat.rhel-sap-ha).
- Select the availability set created in step 3.
-1. Add data disks.
+### Deploy Linux VMs manually via Azure portal
+
+This document assumes that you've already deployed a resource group, [Azure Virtual Network](../../virtual-network/virtual-networks-overview.md), and subnet.
+
+Deploy virtual machines for SAP HANA. Choose a suitable RHEL image that is supported for the HANA system. You can deploy a VM in any one of the availability options: virtual machine scale set, availability zone, or availability set.
> [!IMPORTANT]
-> Floating IP is not supported on a NIC secondary IP configuration in load-balancing scenarios. For details see [Azure Load balancer Limitations](../../load-balancer/load-balancer-multivip-overview.md#limitations). If you need additional IP address for the VM, deploy a second NIC.
+>
+> Make sure that the OS you select is SAP certified for SAP HANA on the specific VM types that you plan to use in your deployment. You can look up SAP HANA-certified VM types and their OS releases in [SAP HANA Certified IaaS Platforms](https://www.sap.com/dmc/exp/2014-09-02-hana-hardware/enEN/#/solutions?filters=v:deCertified;ve:24;iaas;v:125;v:105;v:99;v:120). Make sure that you look at the details of the VM type to get the complete list of SAP HANA-supported OS releases for the specific VM type.
-> [!Note]
-> When VMs without public IP addresses are placed in the backend pool of internal (no public IP address) Standard Azure load balancer, there will be no outbound internet connectivity, unless additional configuration is performed to allow routing to public end points. For details on how to achieve outbound connectivity see [Public endpoint connectivity for Virtual Machines using Azure Standard Load Balancer in SAP high-availability scenarios](./high-availability-guide-standard-load-balancer-outbound-connections.md).
+During VM configuration, you can create or select an existing load balancer in the networking section. If you're creating a new load balancer, follow these steps:
-To set up standard load balancer, follow these configuration steps:
1. First, create a front-end IP pool:

   1. Open the load balancer, select **frontend IP pool**, and select **Add**.
- 1. Enter the name of the new front-end IP pool (for example, **hana-frontend**).
- 1. Set the **Assignment** to **Static** and enter the IP address (for example, **10.0.0.13**).
- 1. Select **OK**.
- 1. After the new front-end IP pool is created, note the pool IP address.
+ 2. Enter the name of the new front-end IP pool (for example, **hana-frontend**).
+ 3. Set the **Assignment** to **Static** and enter the IP address (for example, **10.0.0.13**).
+ 4. Select **OK**.
+ 5. After the new front-end IP pool is created, note the pool IP address.
+
+ 2. Create a single back-end pool:
- 1. Create a single back-end pool:
-
1. Open the load balancer, select **Backend pools**, and then select **Add**.
- 1. Enter the name of the new back-end pool (for example, **hana-backend**).
- 2. Select **NIC** for Backend Pool Configuration.
- 1. Select **Add a virtual machine**.
- 1. Select the virtual machines of the HANA cluster.
- 1. Select **Add**.
- 2. Select **Save**.
+ 2. Enter the name of the new back-end pool (for example, **hana-backend**).
+ 3. Select **NIC** for Backend Pool Configuration.
+ 4. Select **Add a virtual machine**.
+ 5. Select the virtual machines of the HANA cluster.
+ 6. Select **Add**.
+ 7. Select **Save**.
- 1. Next, create a health probe:
+ 3. Next, create a health probe:
1. Open the load balancer, select **health probes**, and select **Add**.
- 1. Enter the name of the new health probe (for example, **hana-hp**).
- 1. Select **TCP** as the protocol and port 625**03**. Keep the **Interval** value set to 5.
- 1. Select **OK**.
+ 2. Enter the name of the new health probe (for example, **hana-hp**).
+ 3. Select **TCP** as the protocol and port 625**03**. Keep the **Interval** value set to 5.
+ 4. Select **OK**.
+
+ 4. Next, create the load-balancing rules:
- 1. Next, create the load-balancing rules:
-
1. Open the load balancer, select **load balancing rules**, and select **Add**.
- 1. Enter the name of the new load balancer rule (for example, **hana-lb**).
- 1. Select the front-end IP address, the back-end pool, and the health probe that you created earlier (for example, **hana-frontend**, **hana-backend** and **hana-hp**).
- 2. Increase idle timeout to 30 minutes
- 1. Select **HA Ports**.
- 1. Increase the **idle timeout** to 30 minutes.
- 1. Make sure to **enable Floating IP**.
- 1. Select **OK**.
+ 2. Enter the name of the new load balancer rule (for example, **hana-lb**).
+ 3. Select the front-end IP address, the back-end pool, and the health probe that you created earlier (for example, **hana-frontend**, **hana-backend** and **hana-hp**).
+      4. Increase the **idle timeout** to 30 minutes.
+      5. Select **HA Ports**.
+      6. Make sure to **enable Floating IP**.
+      7. Select **OK**.
For more information about the required ports for SAP HANA, read the chapter [Connections to Tenant Databases](https://help.sap.com/viewer/78209c1d3a9b41cd8624338e42a12bf6/latest/en-US/7a9343c9f2a2436faa3cfdb5ca00c052.html) in the [SAP HANA Tenant Databases](https://help.sap.com/viewer/78209c1d3a9b41cd8624338e42a12bf6) guide or [SAP Note 2388694][2388694].
+> [!IMPORTANT]
+> Floating IP is not supported on a NIC secondary IP configuration in load-balancing scenarios. For details see [Azure Load balancer Limitations](../../load-balancer/load-balancer-multivip-overview.md#limitations). If you need additional IP address for the VM, deploy a second NIC.
+
+> [!NOTE]
+> When VMs without public IP addresses are placed in the backend pool of internal (no public IP address) Standard Azure load balancer, there will be no outbound internet connectivity, unless additional configuration is performed to allow routing to public end points. For details on how to achieve outbound connectivity see [Public endpoint connectivity for Virtual Machines using Azure Standard Load Balancer in SAP high-availability scenarios](./high-availability-guide-standard-load-balancer-outbound-connections.md).
> [!IMPORTANT]
> Do not enable TCP timestamps on Azure VMs placed behind Azure Load Balancer. Enabling TCP timestamps will cause the health probes to fail. Set parameter **net.ipv4.tcp_timestamps** to **0**. For details see [Load Balancer health probes](../../load-balancer/load-balancer-custom-probe-overview.md).
-> See also SAP note [2382421](https://launchpad.support.sap.com/#/notes/2382421).
+> See also SAP note [2382421](https://launchpad.support.sap.com/#/notes/2382421).
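As a quick illustration, the setting can be applied and persisted like this (a sketch; the drop-in file name is an arbitrary example):

```bash
# Sketch: disable TCP timestamps now and across reboots.
sudo sysctl -w net.ipv4.tcp_timestamps=0
echo "net.ipv4.tcp_timestamps = 0" | sudo tee /etc/sysctl.d/98-lb-health-probe.conf
```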
## Install SAP HANA
The steps in this section use the following prefixes:
```output
/dev/disk/azure/scsi1/lun0  /dev/disk/azure/scsi1/lun1  /dev/disk/azure/scsi1/lun2  /dev/disk/azure/scsi1/lun3
```
-
+ Create physical volumes for all of the disks that you want to use: ```bash
The steps in this section use the following prefixes:
sudo mkfs.xfs /dev/vg_hana_log_HN1/hana_log
sudo mkfs.xfs /dev/vg_hana_shared_HN1/hana_shared
```
-
- Do not mount the directories by issuing mount commands, rather enter the configurations into the fstab and issue a final `mount -a` to validate the syntax. Start by creating the mount directories for each volume:
+
+   Don't mount the directories by issuing mount commands; rather, enter the configurations into the fstab and issue a final `mount -a` to validate the syntax. Start by creating the mount directories for each volume:
```bash
sudo mkdir -p /hana/data
sudo mkdir -p /hana/log
sudo mkdir -p /hana/shared
```
-
+   Next, create `fstab` entries for the three logical volumes by inserting the following lines in the `/etc/fstab` file:

    /dev/mapper/vg_hana_data_HN1-hana_data /hana/data xfs defaults,nofail 0 2
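After all three entries are in place (the log and shared entries parallel the data entry shown), the validation mentioned above could look like this (a sketch):

```bash
# Sketch: validate fstab syntax, mount everything, and confirm the mounts.
sudo mount -a
df -h /hana/data /hana/log /hana/shared
```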
The steps in this section use the following prefixes:
    10.0.0.5 hn1-db-0
    10.0.0.6 hn1-db-1
-
1. **[A]** RHEL for HANA configuration

   Configure RHEL as described in the following notes:
- - [2447641 - Additional packages required for installing SAP HANA SPS 12 on RHEL 7.X](https://access.redhat.com/solutions/2447641)
- - [2292690 - SAP HANA DB: Recommended OS settings for RHEL 7](https://launchpad.support.sap.com/#/notes/2292690)
- - [2777782 - SAP HANA DB: Recommended OS Settings for RHEL 8](https://launchpad.support.sap.com/#/notes/2777782)
- - [2455582 - Linux: Running SAP applications compiled with GCC 6.x](https://launchpad.support.sap.com/#/notes/2455582)
- - [2593824 - Linux: Running SAP applications compiled with GCC 7.x](https://launchpad.support.sap.com/#/notes/2593824)
- - [2886607 - Linux: Running SAP applications compiled with GCC 9.x](https://launchpad.support.sap.com/#/notes/2886607)
+ * [2447641 - Additional packages required for installing SAP HANA SPS 12 on RHEL 7.X](https://access.redhat.com/solutions/2447641)
+ * [2292690 - SAP HANA DB: Recommended OS settings for RHEL 7](https://launchpad.support.sap.com/#/notes/2292690)
+ * [2777782 - SAP HANA DB: Recommended OS Settings for RHEL 8](https://launchpad.support.sap.com/#/notes/2777782)
+ * [2455582 - Linux: Running SAP applications compiled with GCC 6.x](https://launchpad.support.sap.com/#/notes/2455582)
+ * [2593824 - Linux: Running SAP applications compiled with GCC 7.x](https://launchpad.support.sap.com/#/notes/2593824)
+ * [2886607 - Linux: Running SAP applications compiled with GCC 9.x](https://launchpad.support.sap.com/#/notes/2886607)
1. **[A]** Install the SAP HANA
The steps in this section use the following prefixes:
```

1. **[2]** Configure System Replication on the second node:
-
+ Register the second node to start the system replication. Run the following command as <hanasid\>adm: ```bash
This is important step to optimize the integration with the cluster and improve
provider = SAPHanaSR
path = /hana/shared/myHooks
execution_order = 1
-
+
[trace]
ha_dr_saphanasr = info
```
This is important step to optimize the integration with the cluster and improve
# 2021-04-12 21:37:04.898680 ha_dr_SAPHanaSR SOK
```
-For more details on the implementation of the SAP HANA system replication hook see [Enable the SAP HA/DR provider hook](https://access.redhat.com/articles/3004101#enable-srhook).
+For more details on the implementation of the SAP HANA system replication hook, see [Enable the SAP HA/DR provider hook](https://access.redhat.com/articles/3004101#enable-srhook).
## Create SAP HANA cluster resources

Create the HANA topology. Run the following commands on one of the Pacemaker cluster nodes. Throughout these instructions, be sure to substitute your instance number, HANA system ID, IP addresses, and system names, where appropriate:

```bash
- sudo pcs property set maintenance-mode=true
+sudo pcs property set maintenance-mode=true
- sudo pcs resource create SAPHanaTopology_HN1_03 SAPHanaTopology SID=HN1 InstanceNumber=03 \
- op start timeout=600 op stop timeout=300 op monitor interval=10 timeout=600 \
- clone clone-max=2 clone-node-max=1 interleave=true
+sudo pcs resource create SAPHanaTopology_HN1_03 SAPHanaTopology SID=HN1 InstanceNumber=03 \
+ op start timeout=600 op stop timeout=300 op monitor interval=10 timeout=600 \
+ clone clone-max=2 clone-node-max=1 interleave=true
```

Next, create the HANA resources.

> [!NOTE]
-> This article contains references to the term *slave*, a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.
+> This article contains references to a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.
If building a cluster on **RHEL 7.x**, use the following commands:
sudo pcs resource defaults migration-threshold=5000
sudo pcs property set maintenance-mode=false
```
-If building a cluster on **RHEL 8.x**, use the following commands:
+If building a cluster on **RHEL 8.x/9.x**, use the following commands:
```bash
sudo pcs resource create SAPHana_HN1_03 SAPHana SID=HN1 InstanceNumber=03 PREFER_SITE_TAKEOVER=true DUPLICATE_PRIMARY_TIMEOUT=7200 AUTOMATED_REGISTER=false \
sudo pcs resource defaults update migration-threshold=5000
sudo pcs property set maintenance-mode=false
```
+To configure priority-fencing-delay for SAP HANA (applicable only as of pacemaker-2.0.4-6.el8 or higher), execute the following commands.
+
+> [!NOTE]
+> If you have a two-node cluster, you have the option to configure the priority-fencing-delay cluster property. This property introduces an additional delay in fencing the node that has higher total resource priority when a split-brain scenario occurs. For more information, see [Pacemaker cluster properties](https://access.redhat.com/documentation/red_hat_enterprise_linux/8/html/configuring_and_managing_high_availability_clusters/assembly_controlling-cluster-behavior-configuring-and-managing-high-availability-clusters).
+>
+> The priority-fencing-delay property is applicable for pacemaker-2.0.4-6.el8 or higher. If you're setting up priority-fencing-delay on an existing cluster, make sure to unset the `pcmk_delay_max` option in the fencing device.
+
+```bash
+sudo pcs property set maintenance-mode=true
+
+sudo pcs resource defaults update priority=1
+sudo pcs resource update SAPHana_HN1_03-clone meta priority=10
+
+sudo pcs property set priority-fencing-delay=15s
+
+sudo pcs property set maintenance-mode=false
+```
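+
+To confirm that the settings took effect, a quick check might look like the following (a sketch; `pcs` subcommand names vary slightly across versions):
+
+```bash
+# Show the cluster-wide property and the resource priority (pcs 0.10.x syntax assumed)
+sudo pcs property show priority-fencing-delay
+sudo pcs resource config SAPHana_HN1_03-clone | grep -i priority
+```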
+ > [!IMPORTANT]
-> It's a good idea to set `AUTOMATED_REGISTER` to `false`, while you're performing failover tests, to prevent a failed primary instance to automatically register as secondary. After testing, as a best practice, set `AUTOMATED_REGISTER` to `true`, so that after takeover, system replication can resume automatically.
+> It's a good idea to set `AUTOMATED_REGISTER` to `false` while you're performing failover tests, to prevent a failed primary instance from automatically registering as secondary. After testing, as a best practice, set `AUTOMATED_REGISTER` to `true` so that, after takeover, system replication can resume automatically.
Make sure that the cluster status is ok and that all of the resources are started. It's not important on which node the resources are running.

> [!NOTE]
> The timeouts in the above configuration are just examples and may need to be adapted to the specific HANA setup. For instance, you may need to increase the start timeout, if it takes longer to start the SAP HANA database.
-Use the command `sudo pcs status` to check the state of the cluster resources just created:
+Use the command `sudo pcs status` to check the state of the cluster resources created:
```output
# Online: [ hn1-db-0 hn1-db-1 ]
Before proceeding further, make sure you have fully configured Red Hat High Avai
### Additional setup in Azure load balancer for active/read-enabled setup
-To proceed with additional steps on provisioning second virtual IP, make sure you have configured Azure Load Balancer as described in [Manual Deployment](#manual-deployment) section.
+To proceed with additional steps on provisioning the second virtual IP, make sure you have configured Azure Load Balancer as described in the [Deploy Linux VMs manually via Azure portal](#deploy-linux-vms-manually-via-azure-portal) section.
1. For the **standard** load balancer, follow these additional steps on the same load balancer that you created in the earlier section.
- a. Create a second front-end IP pool:
+ a. Create a second front-end IP pool:
- - Open the load balancer, select **frontend IP pool**, and select **Add**.
- - Enter the name of the second front-end IP pool (for example, **hana-secondaryIP**).
- - Set the **Assignment** to **Static** and enter the IP address (for example, **10.0.0.14**).
- - Select **OK**.
- - After the new front-end IP pool is created, note the pool IP address.
+ * Open the load balancer, select **frontend IP pool**, and select **Add**.
+ * Enter the name of the second front-end IP pool (for example, **hana-secondaryIP**).
+ * Set the **Assignment** to **Static** and enter the IP address (for example, **10.0.0.14**).
+ * Select **OK**.
+ * After the new front-end IP pool is created, note the pool IP address.
b. Next, create a health probe:
- - Open the load balancer, select **health probes**, and select **Add**.
- - Enter the name of the new health probe (for example, **hana-secondaryhp**).
- - Select **TCP** as the protocol and port **62603**. Keep the **Interval** value set to 5, and the **Unhealthy threshold** value set to 2.
- - Select **OK**.
+ * Open the load balancer, select **health probes**, and select **Add**.
+ * Enter the name of the new health probe (for example, **hana-secondaryhp**).
+ * Select **TCP** as the protocol and port **62603**. Keep the **Interval** value set to 5, and the **Unhealthy threshold** value set to 2.
+ * Select **OK**.
c. Next, create the load-balancing rules:
- - Open the load balancer, select **load balancing rules**, and select **Add**.
- - Enter the name of the new load balancer rule (for example, **hana-secondarylb**).
- - Select the front-end IP address , the back-end pool, and the health probe that you created earlier (for example, **hana-secondaryIP**, **hana-backend** and **hana-secondaryhp**).
- - Select **HA Ports**.
- - Make sure to **enable Floating IP**.
- - Select **OK**.
+ * Open the load balancer, select **load balancing rules**, and select **Add**.
+ * Enter the name of the new load balancer rule (for example, **hana-secondarylb**).
+ * Select the front-end IP address, the back-end pool, and the health probe that you created earlier (for example, **hana-secondaryIP**, **hana-backend** and **hana-secondaryhp**).
+ * Select **HA Ports**.
+ * Make sure to **enable Floating IP**.
+ * Select **OK**.
### Configure HANA active/read enabled system replication
-The steps to configure HANA system replication are described in [Configure SAP HANA 2.0 System Replication](#configure-sap-hana-20-system-replication) section. If you are deploying read-enabled secondary scenario, while configuring system replication on the second node, execute following command as **hanasid**adm:
+The steps to configure HANA system replication are described in the [Configure SAP HANA 2.0 System Replication](#configure-sap-hana-20-system-replication) section. If you're deploying a read-enabled secondary scenario, execute the following command as **hanasid**adm while configuring system replication on the second node:
-```
+```bash
sapcontrol -nr 03 -function StopWait 600 10
hdbnsutil -sr_register --remoteHost=hn1-db-0 --remoteInstance=03 --replicationMode=sync --name=SITE2 --operationMode=logreplay_readaccess
hdbnsutil -sr_register --remoteHost=hn1-db-0 --remoteInstance=03 --replicationMo
The second virtual IP and the appropriate colocation constraint can be configured with the following commands:
-```
+```bash
pcs property set maintenance-mode=true

pcs resource create secvip_HN1_03 ocf:heartbeat:IPaddr2 ip="10.40.0.16"
pcs constraint location g_secip_HN1_03 rule score=4000 hana_hn1_sync_state eq PR
pcs property set maintenance-mode=false
```
-Make sure that the cluster status is ok and that all of the resources are started. The second virtual IP will run on the secondary site along with SAPHana secondary resource.
+
+Make sure that the cluster status is ok and that all of the resources are started. The second virtual IP runs on the secondary site along with SAPHana secondary resource.
```output
sudo pcs status
In the next section, you can find the typical set of failover tests to execute.
Be aware of the second virtual IP behavior while testing a HANA cluster configured with a read-enabled secondary:
-1. When you migrate **SAPHana_HN1_03** cluster resource to secondary site **hn1-db-1**, the second virtual IP will continue to run on the same site **hn1-db-1**. If you have set AUTOMATED_REGISTER="true" for the resource and HANA system replication is registered automatically on **hn1-db-0**, then your second virtual IP will also move to **hn1-db-0**.
+1. When you migrate **SAPHana_HN1_03** cluster resource to secondary site **hn1-db-1**, the second virtual IP continues to run on the same site **hn1-db-1**. If you have set AUTOMATED_REGISTER="true" for the resource and HANA system replication is registered automatically on **hn1-db-0**, then your second virtual IP will also move to **hn1-db-0**.
-2. On testing server crash, second virtual IP resources (**secvip_HN1_03**) and Azure load balancer port resource (**secnc_HN1_03**) will run on primary server alongside the primary virtual IP resources. So, till the time secondary server is down, application that are connected to read-enabled HANA database will connect to primary HANA database. The behavior is expected as you do not want applications that are connected to read-enabled HANA database to be inaccessible till the time secondary server is unavailable.
+2. When you test a server crash, the second virtual IP resources (**secvip_HN1_03**) and the Azure load balancer port resource (**secnc_HN1_03**) run on the primary server alongside the primary virtual IP resources. While the secondary server is down, applications that are connected to the read-enabled HANA database connect to the primary HANA database. This behavior is expected because you don't want applications that are connected to the read-enabled HANA database to be inaccessible while the secondary server is unavailable.
3. During failover and fallback of the second virtual IP address, existing connections on applications that use the second virtual IP to connect to the HANA database may be interrupted.
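
To observe these behaviors during the tests, a simple check (assuming the resource names used above) is to watch where the second virtual IP resources are running:

```bash
# Show the current placement of the second virtual IP and its load balancer port resource
sudo pcs status | grep -E 'secvip_HN1_03|secnc_HN1_03'
```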
The setup maximizes the time that the second virtual IP resource will be assigne
## Test the cluster setup
-This section describes how you can test your setup. Before you start a test, make sure that Pacemaker does not have any failed action (via pcs status), there are no unexpected location constraints (for example leftovers of a migration test) and that HANA is sync state, for example with systemReplicationStatus:
+This section describes how you can test your setup. Before you start a test, make sure that Pacemaker doesn't have any failed actions (via pcs status), that there are no unexpected location constraints (for example, leftovers of a migration test), and that HANA is in sync state, for example with systemReplicationStatus:
```bash
sudo su - hn1adm -c "python /usr/sap/HN1/HDB03/exe/python_support/systemReplicationStatus.py"
Resource Group: g_ip_HN1_03
You can migrate the SAP HANA master node by executing the following command as root:
-#### On RHEL 7.x
- ```bash
+# On RHEL 7.x
pcs resource move SAPHana_HN1_03-master
-```
-
-#### On RHEL 8.x
-
-```bash
+# On RHEL 8.x
pcs resource move SAPHana_HN1_03-clone --master
```
The SAP HANA resource on hn1-db-0 is stopped. In this case, configure the HANA i
```bash
sapcontrol -nr 03 -function StopWait 600 10
-hdbnsutil -sr_register --remoteHost=hn1-db-1 --remoteInstance=03 --replicationMod
-e=sync --name=SITE1
+hdbnsutil -sr_register --remoteHost=hn1-db-1 --remoteInstance=03 --replicationMode=sync --name=SITE1
```

The migration creates location constraints that need to be deleted again. Do the following as root, or via sudo:
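
As an illustrative sketch (the constraint ID below is hypothetical; list the constraints first to find the one generated by the move):

```bash
# List all constraints with their IDs, then remove the one generated by the move
pcs constraint list --full
pcs constraint remove cli-prefer-SAPHana_HN1_03-master   # example ID only; use the ID shown by the list command
```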
Resource Group: g_ip_HN1_03
vip_HN1_03 (ocf::heartbeat:IPaddr2): Started hn1-db-1
```
+### Blocking network communication
+
+Resource state before starting the test:
+
+```output
+Clone Set: SAPHanaTopology_HN1_03-clone [SAPHanaTopology_HN1_03]
+ Started: [ hn1-db-0 hn1-db-1 ]
+Master/Slave Set: SAPHana_HN1_03-master [SAPHana_HN1_03]
+ Masters: [ hn1-db-1 ]
+ Slaves: [ hn1-db-0 ]
+Resource Group: g_ip_HN1_03
+ nc_HN1_03 (ocf::heartbeat:azure-lb): Started hn1-db-1
+ vip_HN1_03 (ocf::heartbeat:IPaddr2): Started hn1-db-1
+```
+
+Execute a firewall rule to block the communication on one of the nodes.
+
+```bash
+# Execute iptable rule on hn1-db-1 (10.0.0.6) to block the incoming and outgoing traffic to hn1-db-0 (10.0.0.5)
+iptables -A INPUT -s 10.0.0.5 -j DROP; iptables -A OUTPUT -d 10.0.0.5 -j DROP
+```
+
+When cluster nodes can't communicate with each other, there's a risk of a split-brain scenario. In such situations, cluster nodes try to fence each other simultaneously, resulting in a fence race. To avoid this situation, we recommend setting the [priority-fencing-delay](#create-sap-hana-cluster-resources) property in the cluster configuration (applicable only for [pacemaker-2.0.4-6.el8](https://access.redhat.com/errata/RHEA-2020:4804) or higher).
+
+By enabling the priority-fencing-delay property, the cluster introduces an additional delay in the fencing action specifically on the node hosting the HANA master resource, allowing that node to win the fence race.
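+
+While the communication is blocked, you can watch the fencing decision from the surviving node, for example (a sketch; `pcs stonith history` is available in pcs 0.10 and later):
+
+```bash
+# Observe cluster membership and recent fencing events from the node that keeps quorum
+sudo pcs status --full
+sudo pcs stonith history show
+```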
+
+Execute the following command to delete the firewall rules.
+
+```bash
+# iptables rules are cleared when the server reboots. If they haven't been cleared yet, remove them with the following command.
+iptables -D INPUT -s 10.0.0.5 -j DROP; iptables -D OUTPUT -d 10.0.0.5 -j DROP
+```
### Test the Azure fencing agent

> [!NOTE]
-> This article contains references to the term *slave*, a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.
+> This article contains references to a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.
Resource state before starting the test:
hdbnsutil -sr_register --remoteHost=hn1-db-0 --remoteInstance=03 --replicationMo
Switch back to root and clean up the failed state
-#### On RHEL 7.x
- ```bash
+# On RHEL 7.x
pcs resource cleanup SAPHana_HN1_03-master
-```
-
-#### On RHEL 8.x
-```bash
+# On RHEL 8.x
pcs resource cleanup SAPHana_HN1_03 node=<hostname on which the resource needs to be cleaned>
```
hdbnsutil -sr_register --remoteHost=hn1-db-1 --remoteInstance=03 --replicationMo
```

Then, as root:
-#### On RHEL 7.x
```bash
+# On RHEL 7.x
pcs resource cleanup SAPHana_HN1_03-master
-```
-
-#### On RHEL 8.x
-
-```bash
+# On RHEL 8.x
pcs resource cleanup SAPHana_HN1_03 node=<hostname on which the resource needs to be cleaned>
```
Resource Group: g_ip_HN1_03
vip_HN1_03 (ocf::heartbeat:IPaddr2): Started hn1-db-1
```
-### Test a manual failover
-
-Resource state before starting the test:
-
-```output
-Clone Set: SAPHanaTopology_HN1_03-clone [SAPHanaTopology_HN1_03]
- Started: [ hn1-db-0 hn1-db-1 ]
-Master/Slave Set: SAPHana_HN1_03-master [SAPHana_HN1_03]
- Masters: [ hn1-db-0 ]
- Slaves: [ hn1-db-1 ]
-Resource Group: g_ip_HN1_03
- nc_HN1_03 (ocf::heartbeat:azure-lb): Started hn1-db-0
- vip_HN1_03 (ocf::heartbeat:IPaddr2): Started hn1-db-0
-```
-
-You can test a manual failover by stopping the cluster on the hn1-db-0 node, as root:
-
-```bash
-pcs cluster stop
-```
-

## Next steps

* [Azure Virtual Machines planning and implementation for SAP][planning-guide]
sap Sap Hana High Availability Scale Out Hsr Rhel https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/sap-hana-high-availability-scale-out-hsr-rhel.md
Now you're ready to create the cluster resources:
1. Create the HANA instance resource.

   > [!NOTE]
- > This article contains references to the term *slave*, a term that Microsoft no longer uses. When the term is removed from the software, we’ll remove it from this article.
+ > This article contains references to a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.
If you're building a RHEL **7.x** cluster, use the following commands:

```bash
sap Sap Hana High Availability Scale Out Hsr Suse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/sap-hana-high-availability-scale-out-hsr-suse.md
You can adjust the behavior of susChkSrv with parameter action_on_lost. Valid va
3. Next, create the HANA instance resource.

   > [!NOTE]
- > This article contains references to the terms *master* and *slave*, terms that Microsoft no longer uses. When these terms are removed from the software, we'll remove them from this article.
+ > This article contains references to terms that Microsoft no longer uses. When these terms are removed from the software, we'll remove them from this article.
```bash
sudo crm configure primitive rsc_SAPHana_HN1_HDB03 ocf:suse:SAPHanaController \
You can adjust the behavior of susChkSrv with parameter action_on_lost. Valid va
## Test SAP HANA failover

> [!NOTE]
-> This article contains references to the terms *master* and *slave*, terms that Microsoft no longer uses. When these terms are removed from the software, we'll remove them from this article.
+> This article contains references to terms that Microsoft no longer uses. When these terms are removed from the software, we'll remove them from this article.
1. Before you start a test, check the cluster and SAP HANA system replication status.
sap Sap Hana High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/sap-hana-high-availability.md
Next, create the HANA resources:
> For existing Pacemaker clusters, if your configuration was already changed to use `socat` as described in [Azure Load Balancer Detection Hardening](https://www.suse.com/support/kb/doc/?id=7024128), you don't need to immediately switch to the `azure-lb` resource agent.

> [!NOTE]
-> This article contains references to the terms *master* and *slave*, terms that Microsoft no longer uses. When these terms are removed from the software, we'll remove them from this article.
+> This article contains references to terms that Microsoft no longer uses. When these terms are removed from the software, we'll remove them from this article.
```bash
# Replace <placeholders> with your instance number, HANA system ID, and the front-end IP address of the Azure load balancer.
sap Sap Hana Scale Out Standby Netapp Files Rhel https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/sap-hana-scale-out-standby-netapp-files-rhel.md
This article describes how to deploy a highly available SAP HANA system in a sca
In the example configurations, installation commands, and so on, the HANA instance is **03** and the HANA system ID is **HN1**. The examples are based on HANA 2.0 SP4 and Red Hat Enterprise Linux for SAP 7.6.

> [!NOTE]
-> This article contains references to the terms *master* and *slave*, terms that Microsoft no longer uses. When these terms are removed from the software, we'll remove them from this article.
+> This article contains references to terms that Microsoft no longer uses. When these terms are removed from the software, we'll remove them from this article.
Before you begin, refer to the following SAP notes and papers:
sap Sap Hana Scale Out Standby Netapp Files Suse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/sap-hana-scale-out-standby-netapp-files-suse.md
In this example for deploying SAP HANA in scale-out configuration with standby n
## Test SAP HANA failover

> [!NOTE]
-> This article contains references to the terms *master* and *slave*, terms that Microsoft no longer uses. When these terms are removed from the software, we'll remove them from this article.
+> This article contains references to terms that Microsoft no longer uses. When these terms are removed from the software, we'll remove them from this article.
1. Simulate a node crash on an SAP HANA worker node. Do the following:
sap Sap High Availability Guide Wsfc Shared Disk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/sap-high-availability-guide-wsfc-shared-disk.md
Following are some of the important points to consider for Azure Premium shared
### Supported OS versions
-Both Windows Servers 2016 and 2019 are supported (use the latest data center images).
+Windows Server 2016, 2019, and higher are supported (use the latest data center images).
-We strongly recommend using **Windows Server 2019 Datacenter**, as:
+We strongly recommend using at least **Windows Server 2019 Datacenter**, as:
- Windows 2019 Failover Cluster Service is Azure aware.
- There is added integration and awareness of Azure Host Maintenance and an improved experience by monitoring for Azure scheduled events.
- It is possible to use a Distributed network name (the default option). Therefore, there is no need to have a dedicated IP address for the cluster network name. Also, there is no need to configure this IP address on Azure Internal Load Balancer.
search Cognitive Search Common Errors Warnings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-common-errors-warnings.md
- Last updated 08/01/2023
search Cognitive Search Defining Skillset https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-defining-skillset.md
Title: Create a skillset
-description: A skillset defines content extraction, natural language processing, and image analysis steps. A skillset is attached to indexer. It's used to enrich and extract information from source data for use in Azure Cognitive Search.
+description: A skillset defines data extraction, natural language processing, and image analysis steps. A skillset is attached to indexer. It's used to enrich and extract information from source data for use in Azure Cognitive Search.
Previously updated : 08/08/2023 Last updated : 07/14/2022

# Create a skillset in Azure Cognitive Search
search Cognitive Search Quickstart Blob https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-quickstart-blob.md
You're now ready to move on to the Import data wizard.
:::image type="content" source="medi.png" alt-text="Screenshot of the Import data command." border="true":::
-### Step 1 - Create a data source
+### Step 1: Create a data source
1. In **Connect to your data**, choose **Azure Blob Storage**.
If you get "Error detecting index schema from data source", the indexer that's p
| Resource is behind an IP firewall | [Create an inbound rule for Search and for Azure portal](search-indexer-howto-access-ip-restricted.md) |
| Resource requires a private endpoint connection | [Connect over a private endpoint](search-indexer-howto-access-private.md) |
-### Step 2 - Add cognitive skills
+### Step 2: Add cognitive skills
Next, configure AI enrichment to invoke OCR, image analysis, and natural language processing.
Next, configure AI enrichment to invoke OCR, image analysis, and natural languag
Continue to the next page.
-### Step 3 - Configure the index
+### Step 3: Configure the index
An index contains your searchable content and the **Import data** wizard can usually create the schema for you by sampling the data source. In this step, review the generated schema and potentially revise any settings. Below is the default schema created for the demo Blob data set.
Marking a field as **Retrievable** doesn't mean that the field *must* be present
Continue to the next page.
-### Step 4 - Configure the indexer
+### Step 4: Configure the indexer
The indexer drives the indexing process. It specifies the data source name, a target index, and frequency of execution. The **Import data** wizard creates several objects, including an indexer that you can reset and run repeatedly.
search Index Ranking Similarity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/index-ranking-similarity.md
Previously updated : 04/18/2023 Last updated : 08/31/2023

# Configure relevance scoring
BM25 similarity adds two parameters to control the relevance score calculation.
## Enable BM25 scoring on older services
-If you're running a search service that was created from March 2014 through July 15, 2020, you can enable BM25 by setting a "similarity" property on new indexes. The property is only exposed on new indexes, so if want BM25 on an existing index, you must drop and [rebuild the index](search-howto-reindex.md) with a "similarity" property set to "Microsoft.Azure.Search.BM25Similarity".
+If you're running a search service that was created from March 2014 through July 15, 2020, you can enable BM25 by setting a "similarity" property on new indexes. The property is only exposed on new indexes, so if you want BM25 on an existing index, you must drop and [rebuild the index](search-howto-reindex.md) with a "similarity" property set to "Microsoft.Azure.Search.BM25Similarity".
Once an index exists with a "similarity" property, you can switch between `BM25Similarity` or `ClassicSimilarity`.
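
As an illustrative sketch (the service name, index definition, API version, and admin key below are placeholders), the "similarity" property is set in the index definition when you create the index over REST:

```bash
# Create a new index whose "similarity" property selects BM25 (hypothetical values throughout)
curl -X PUT "https://<service-name>.search.windows.net/indexes/my-index?api-version=2020-06-30" \
  -H "Content-Type: application/json" \
  -H "api-key: <admin-key>" \
  -d '{
    "name": "my-index",
    "fields": [ { "name": "id", "type": "Edm.String", "key": true } ],
    "similarity": { "@odata.type": "#Microsoft.Azure.Search.BM25Similarity" }
  }'
```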
search Index Similarity And Scoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/index-similarity-and-scoring.md
Previously updated : 04/18/2023 Last updated : 08/31/2023

# Relevance and scoring in Azure Cognitive Search

This article explains the relevance and the scoring algorithms used to compute search scores in Azure Cognitive Search. A relevance score is computed for each match found in a [full text search](search-lucene-query-architecture.md), where the strongest matches are assigned higher search scores.
-Relevance applies to full text search only. Filter queries, autocomplete and suggested queries, wildcard search or fuzzy search queries are not scored or ranked for relevance.
+Relevance applies to full text search only. Filter queries, autocomplete and suggested queries, wildcard search or fuzzy search queries aren't scored or ranked for relevance.
In Azure Cognitive Search, you can tune search relevance and boost search scores through these mechanisms:
Relevance scoring refers to the computation of a search score that serves as an
The search score is computed based on statistical properties of the string input and the query itself. Azure Cognitive Search finds documents that match on search terms (some or all, depending on [searchMode](/rest/api/searchservice/search-documents#query-parameters)), favoring documents that contain many instances of the search term. The search score goes up even higher if the term is rare across the data index, but common within the document. The basis for this approach to computing relevance is known as *TF-IDF*, or term frequency-inverse document frequency.
-Search scores can be repeated throughout a result set. When multiple hits have the same search score, the ordering of the same scored items is undefined and not stable. Run the query again, and you might see items shift position, especially if you are using the free service or a billable service with multiple replicas. Given two items with an identical score, there is no guarantee which one appears first.
+Search scores can be repeated throughout a result set. When multiple hits have the same search score, the ordering of the same scored items is undefined and not stable. Run the query again, and you might see items shift position, especially if you are using the free service or a billable service with multiple replicas. Given two items with an identical score, there's no guarantee that one appears first.
If you want to break the tie among repeating scores, you can add an **$orderby** clause to first order by score, then order by another sortable field (for example, `$orderby=search.score() desc,Rating desc`). For more information, see [$orderby](search-query-odata-orderby.md).
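
A sketch of how that tie-breaking sort is supplied on a request (the service name, index, and key are placeholders):

```bash
# Order results by relevance score first, then by Rating to break ties among equal scores
curl -G "https://<service-name>.search.windows.net/indexes/hotels-sample-index/docs" \
  --data-urlencode 'api-version=2020-06-30' \
  --data-urlencode 'search=spa' \
  --data-urlencode '$orderby=search.score() desc,Rating desc' \
  -H 'api-key: <query-key>'
```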
Azure Cognitive Search provides the following scoring algorithms:
| Algorithm | Usage | Range |
|--|-|-|
-| BM25Similarity | Fixed algorithm on all search services created after July 2020. You can configure this algorithm, but you can't switch to an older one (classic). | Unbounded. |
-|ClassicSimilarity | Present on older search services. You can [opt-in for BM25](index-ranking-similarity.md) and choose an algorithm on a per-index basis. | 0 < 1.00 |
+| `BM25Similarity` | Fixed algorithm on all search services created after July 2020. You can configure this algorithm, but you can't switch to an older one (classic). | Unbounded. |
+|`ClassicSimilarity` | Present on older search services. You can [opt-in for BM25](index-ranking-similarity.md) and choose an algorithm on a per-index basis. | 0 < 1.00 |
Both BM25 and Classic are TF-IDF-like retrieval functions that use the term frequency (TF) and the inverse document frequency (IDF) as variables to calculate relevance scores for each document-query pair, which is then used for ranking results. While conceptually similar to classic, BM25 is rooted in probabilistic information retrieval that produces more intuitive matches, as measured by user research.
search Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/policy-reference.md
Title: Built-in policy definitions for Azure Cognitive Search description: Lists Azure Policy built-in policy definitions for Azure Cognitive Search. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2023 Last updated : 08/30/2023
search Query Lucene Syntax https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/query-lucene-syntax.md
You can embed Boolean operators in a query string to improve the precision of a
|--|--|--|--|
| AND | `+` | `wifi AND luxury` | Specifies terms that a match must contain. In the example, the query engine looks for documents containing both `wifi` and `luxury`. The plus character (`+`) can also be used directly in front of a term to make it required. For example, `+wifi +luxury` stipulates that both terms must appear somewhere in the field of a single document.|
| OR | (none) <sup>1</sup> | `wifi OR luxury` | Finds a match when either term is found. In the example, the query engine returns match on documents containing either `wifi` or `luxury` or both. Because OR is the default conjunction operator, you could also leave it out, such that `wifi luxury` is the equivalent of `wifi OR luxury`.|
-| NOT | `!`, `-` | `wifi -luxury` | Returns a match on documents that exclude the term. For example, `wifi -luxury` searches for documents that have the `wifi` term but not `luxury`. </p>It's important to note that the NOT operator (`NOT`, `!`, or `-`) behaves differently in full syntax than it does in simple syntax. In full syntax, negations will always be ANDed onto the query such that `wifi -luxury` is interpreted as "wifi AND NOT luxury" regardless of if the `searchMode` parameter is set to `any` or `all`. This gives you a more intuitive behavior for negations by default. </p>A single negation such as the query `-luxury` isn't allowed in full search syntax and will always return an empty result set.|
+| NOT | `!`, `-` | `wifi -luxury` | Returns a match on documents that exclude the term. For example, `wifi -luxury` searches for documents that have the `wifi` term but not `luxury`. |
<sup>1</sup> The `|` character isn't supported for OR operations.
+### <a name="bkmk_boolean_not"></a> NOT Boolean operator
+
+> [!Important]
+>
+> The NOT operator (`NOT`, `!`, or `-`) behaves differently in full syntax than it does in simple syntax.
+
+* In simple syntax, queries with negation always have a wildcard automatically added. For example, the query `-luxury` is automatically expanded to `-luxury *`.
+* In full syntax, queries with negation can't be combined with a wildcard. For example, the query `-luxury *` isn't allowed.
+* In full syntax, queries with a single negation aren't allowed. For example, the query `-luxury` isn't allowed.
+* In full syntax, negations will behave as if they are always ANDed onto the query regardless of the search mode.
+ * For example, the full syntax query `wifi -luxury` only fetches documents that contain the term `wifi`, and then applies the negation `-luxury` to those documents.
+* If you want to use negations to search over all documents in the index, simple syntax with the any search mode is recommended.
+* If you want to use negations to search over a subset of documents in the index, full syntax or the simple syntax with the all search mode are recommended.
+
+| Query Type | Search Mode | Example Query | Behavior |
+| - | -- | - | -- |
+| Simple | any | `wifi -luxury`| Returns all documents in the index. Documents with the term "wifi" or documents missing the term "luxury" are ranked higher than other documents. The query is expanded to `wifi OR -luxury OR *`. |
+| Simple | all | `wifi -luxury`| Returns only documents in the index that contain the term "wifi" and don't contain the term "luxury". The query is expanded to `wifi AND -luxury AND *`. |
+| Full | any | `wifi -luxury`| Returns only documents in the index that contain the term "wifi", and then documents that contain the term "luxury" are removed from the results. |
+| Full | all | `wifi -luxury`| Returns only documents in the index that contain the term "wifi", and then documents that contain the term "luxury" are removed from the results. |
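+
+A sketch of how the syntax and search mode are supplied on a request (the service name, index, and key are placeholders):
+
+```bash
+# Simple syntax with searchMode=all: only documents containing "wifi" and not "luxury"
+curl -G "https://<service-name>.search.windows.net/indexes/hotels-sample-index/docs" \
+  --data-urlencode 'api-version=2020-06-30' \
+  --data-urlencode 'search=wifi -luxury' \
+  --data-urlencode 'searchMode=all' \
+  -H 'api-key: <query-key>'
+```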
## <a name="bkmk_fields"></a> Fielded search

You can define a fielded search operation with the `fieldName:searchExpression` syntax, where the search expression can be a single word or a phrase, or a more complex expression in parentheses, optionally with Boolean operators. Some examples include the following:
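
For instance, an illustrative request (full Lucene syntax; the service name, index, field name, and key are placeholders):

```bash
# Fielded search: scope the Boolean expression to the Description field
curl -G "https://<service-name>.search.windows.net/indexes/hotels-sample-index/docs" \
  --data-urlencode 'api-version=2020-06-30' \
  --data-urlencode 'queryType=full' \
  --data-urlencode 'search=Description:(wifi AND luxury)' \
  -H 'api-key: <query-key>'
```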
search Resource Demo Sites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/resource-demo-sites.md
Previously updated : 10/27/2022 Last updated : 08/22/2023 # Demos - Azure Cognitive Search
The following demos are built and hosted by Microsoft.
| Demo name | Description | Source code |
|--|--|--|
+| [Chat with your data](https://entgptsearch.azurewebsites.net/) | An Azure web app that uses ChatGPT in Azure OpenAI with fictitious health plan data in a search index. | [https://github.com/Azure-Samples/azure-search-openai-demo/](https://github.com/Azure-Samples/azure-search-openai-demo/) |
| [AzSearchLab](https://azuresearchlab.azurewebsites.net/) | A web front end that makes calls to a search index. | [https://github.com/Azure-Samples/azure-search-lab](https://github.com/Azure-Samples/azure-search-lab) |
| [NYC Jobs demo](https://azjobsdemo.azurewebsites.net/) | An ASP.NET app with facets, filters, details, geo-search (map controls). | [https://github.com/Azure-Samples/search-dotnet-asp-net-mvc-jobs](https://github.com/Azure-Samples/search-dotnet-asp-net-mvc-jobs) |
| [JFK files demo](https://jfk-demo-2019.azurewebsites.net/#/) | An ASP.NET web app built on a public data set, transformed with custom and predefined skills to extract searchable content from scanned document (JPEG) files. [Learn more...](https://www.microsoft.com/ai/ai-lab-jfk-files) | [https://github.com/Microsoft/AzureSearch_JFK_Files](https://github.com/Microsoft/AzureSearch_JFK_Files) |
search Search Features List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-features-list.md
Previously updated : 07/10/2023 Last updated : 08/29/2023

# Features of Azure Cognitive Search
-Azure Cognitive Search provides a full-text search engine, persistent storage of search indexes, integrated AI used during indexing to extract more text and structure, and APIs and tools.
+Azure Cognitive Search provides information retrieval and uses optional AI integration to extract more text and structure content.
The following table summarizes features by category. For more information about how Cognitive Search compares with other search technologies, see [Compare search options](search-what-is-azure-search.md#compare-search-options).
+There's feature parity in all Azure public, private, and sovereign clouds, but some features aren't supported in specific regions. For more information, see [product availability by region](https://azure.microsoft.com/explore/global-infrastructure/products-by-region/?products=search&regions=all&rar=true).
> [!NOTE]
> Looking for preview features? See the [preview features list](search-api-preview.md).
search Search Get Started Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-get-started-portal.md
Title: "Quickstart: Create a search index in the Azure portal"
-description: Create, load, and query your first search index using the Import Data wizard in Azure portal. This quickstart uses a fictitious hotel dataset for sample data.
+description: Learn how to create, load, and query your first search index by using the Import Data wizard in the Azure portal. This quickstart uses a fictitious hotel dataset for sample data.
Previously updated : 11/16/2022 Last updated : 09/01/2023
+
# Quickstart: Create a search index in the Azure portal
-In this Azure Cognitive Search quickstart, you'll create your first search index using the **Import data** wizard and a built-in sample data source consisting of fictitious hotel data. The wizard guides you through the creation of a search index (hotels-sample-index) so that you can write interesting queries within minutes.
+In this Azure Cognitive Search quickstart, you create your first _search index_ by using the [**Import data** wizard](search-import-data-portal.md) and a built-in sample data source consisting of fictitious hotel data. The wizard guides you through the creation of a search index to help you write interesting queries within minutes.
+
+Search queries iterate over an index that contains searchable data, metadata, and other constructs that optimize certain search behaviors. An indexer is a source-specific crawler that can read metadata and content from supported Azure data sources. Normally, indexers are created programmatically. In the Azure portal, you can create them through the **Import data** wizard. For more information, see [Indexes in Azure Cognitive Search](search-what-is-an-index.md) and [Indexers in Azure Cognitive Search](search-indexer-overview.md).
-Although you won't use the options in this quickstart, the wizard includes a page for AI enrichment so that you can extract text and structure from image files and unstructured text. For a similar walkthrough that includes AI enrichment, see [Quickstart: Create a skillset](cognitive-search-quickstart-blob.md).
+> [!NOTE]
+> The **Import data** wizard includes options for AI enrichment that aren't reviewed in this quickstart. You can use these options to extract text and structure from image files and unstructured text. For a similar walkthrough that includes AI enrichment, see [Quickstart: Create a skillset in the Azure portal](cognitive-search-quickstart-blob.md).
## Prerequisites
-+ An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/).
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/).
-+ An Azure Cognitive Search service (any tier, any region). [Create a service](search-create-service-portal.md) or [find an existing service](https://portal.azure.com/#blade/HubsExtension/BrowseResourceBlade/resourceType/Microsoft.Search%2FsearchServices) under your current subscription. You can use a free service for this quickstart.
+- An Azure Cognitive Search service for any tier and any region. [Create a service](search-create-service-portal.md) or [find an existing service](https://portal.azure.com/#blade/HubsExtension/BrowseResourceBlade/resourceType/Microsoft.Search%2FsearchServices) under your current subscription. You can use a free service for this quickstart.
### Check for space

Many customers start with the free service. The free tier is limited to three indexes, three data sources, and three indexers. Make sure you have room for extra items before you begin. This quickstart creates one of each object.
-Check the service overview page to find out how many indexes, indexers, and data sources you already have.
+Check the **Overview** page for the service to see how many indexes, indexers, and data sources you already have.
## Create and load an index
-Search queries iterate over an [*index*](search-what-is-an-index.md) that contains searchable data, metadata, and other constructs that optimize certain search behaviors.
+Azure Cognitive Search loads the index by running an indexer, which the **Import data** wizard creates for you. The hotels-sample data set is hosted by Microsoft on Azure Cosmos DB and accessed over an internal connection. You don't need your own Azure Cosmos DB account or source files to access the data.
+
+### Start the wizard
+
+To get started, browse to your Azure Cognitive Search service in the Azure portal and open the **Import data** wizard.
+
+1. Sign in to the [Azure portal](https://portal.azure.com/) with your Azure account, and go to your Azure Cognitive Search service.
+
+1. On the **Overview** page, select **Import data** to create and populate a search index.
-For this quickstart, we'll create and load the index using a built-in sample dataset that can be crawled using an [*indexer*](search-indexer-overview.md) via the [**Import data wizard**](search-import-data-portal.md). The hotels-sample data set is hosted on Microsoft on Azure Cosmos DB and accessed over an internal connection. You don't need your own Cosmos DB account or source files to access the data.
+ :::image type="content" source="medi.png" alt-text="Screenshot that shows how to open the Import data wizard in the Azure portal.":::
-An indexer is a source-specific crawler that can read metadata and content from supported Azure data sources. Normally, indexers are created programmatically, but in the portal, you can create them through the **Import data wizard**.
+ The **Import data** wizard opens.
-### Step 1 - Start the Import data wizard and create a data source
+### Connect to a data source
-1. Sign in to the [Azure portal](https://portal.azure.com/) with your Azure account.
+The next step is to connect to a data source to use for the search index.
-1. [Find your search service](https://portal.azure.com/#blade/HubsExtension/BrowseResourceBlade/resourceType/Microsoft.Storage%2storageAccounts/) and on the Overview page, select **Import data** on the command bar to create and populate a search index.
+1. In the **Import data** wizard on the **Connect to your data** tab, expand the **Data Source** dropdown list and select **Samples**.
- :::image type="content" source="medi.png" alt-text="Screenshot of the Import data command in the command bar." border="true":::
+1. In the list of built-in samples, select **hotels-sample**.
-1. In the wizard, select **Connect to your data** > **Samples** > **hotels-sample**. This data source is built in. If you were creating your own data source, you would need to specify a name, type, and connection information. Once created, it becomes an "existing data source" that can be reused in other import operations.
+ :::image type="content" source="media/search-get-started-portal/import-hotels-sample.png" alt-text="Screenshot that shows how to select the hotels-sample data source in the Import data wizard." border="false":::
- :::image type="content" source="media/search-get-started-portal/import-datasource-sample.png" alt-text="Screenshot of the select sample dataset page in the wizard." border="true":::
+ In this quickstart, you use a built-in data source. If you want to create your own data source, you need to specify a name, type, and connection information. After you create a data source, it can be reused in other import operations.
-1. Continue to the next page.
+1. Select **Next: Add cognitive skills (Optional)** to continue.
-### Step 2 - Skip the "Enrich content" page
+### Skip configuration for cognitive skills
-The wizard supports the creation of an [AI enrichment pipeline](cognitive-search-concept-intro.md) for incorporating the Azure AI services algorithms into indexing.
+The **Import data** wizard supports the creation of an AI-enrichment pipeline for incorporating the Azure AI services algorithms into indexing. For more information, see [AI enrichment in Azure Cognitive Search](cognitive-search-concept-intro.md).
-We'll skip this step for now, and move directly on to **Customize target index**.
+1. For this quickstart, ignore the AI enrichment configuration options on the **Add cognitive skills** tab.
- :::image type="content" source="media/search-get-started-portal/skip-cog-skill-step.png" alt-text="Screenshot of the Skip cognitive skill button in the wizard." border="true":::
+1. Select **Skip to: Customize target index** to continue.
+
+ :::image type="content" source="media/search-get-started-portal/skip-cognitive-skills.png" alt-text="Screenshot that shows how to Skip to the Customize target index tab in the Import data wizard.":::
> [!TIP]
-> You can step through an AI-indexing example in a [quickstart](cognitive-search-quickstart-blob.md) or [tutorial](cognitive-search-tutorial-blob.md).
+> If you want to try an AI-indexing example, see the following articles:
+> - [Quickstart: Create a skillset in the Azure portal](cognitive-search-quickstart-blob.md)
+> - [Tutorial: Use REST and AI to generate searchable content from Azure blobs](cognitive-search-tutorial-blob.md)
-### Step 3 - Configure index
+### Configure the index
-For the built-in hotels sample index, a default index schema is defined for you. Except for a few advanced filter examples, queries in the documentation and samples that target the hotel-samples index will run on this index definition:
+The Azure Cognitive Search service generates a schema for the built-in hotels-sample index. Except for a few advanced filter examples, queries in the documentation and samples that target the hotels-sample index run on this index definition. The definition is shown on the **Customize target index** tab in the **Import data** wizard:
-Typically, in a code-based exercise, index creation is completed prior to loading data. The Import data wizard condenses these steps by generating a basic index for any data source it can crawl. Minimally, an index requires a name and a fields collection. One of the fields should be marked as the document key to uniquely identify each document. Additionally, you can specify language analyzers or suggesters if you want autocomplete or suggested queries.
+Typically, in a code-based exercise, index creation is completed prior to loading data. The **Import data** wizard condenses these steps by generating a basic index for any data source it can crawl.
-Fields have a data type and attributes. The check boxes across the top are *attributes* controlling how the field is used.
+At a minimum, the index requires an **Index name** and a collection of **Fields**. One field must be marked as the _document key_ to uniquely identify each document. The index **Key** provides the unique document identifier. The value is always a string. If you want autocomplete or suggested queries, you can specify language **Analyzers** or **Suggesters**.
-+ **Key** is the unique document identifier. It's always a string, and it's required. Only one field can be the key.
-+ **Retrievable** means that field contents show up in search results list. You can mark individual fields as off limits for search results by clearing this checkbox, for example for fields used only in filter expressions.
-+ **Filterable**, **Sortable**, and **Facetable** determine whether fields are used in a filter, sort, or faceted navigation structure.
-+ **Searchable** means that a field is included in full text search. Strings are searchable. Numeric fields and Boolean fields are often marked as not searchable.
+Each field has a name, data type, and _attributes_ that control how to use the field in the search index. The **Customize target index** tab uses checkboxes to enable or disable the following attributes for all fields or specific fields:
-[Storage requirements](search-what-is-an-index.md#example-demonstrating-the-storage-implications-of-attributes-and-suggesters) can vary as a result of attribute selection. For example, **Filterable** requires more storage, but **Retrievable** doesn't.
+- **Retrievable**: Include the field contents in the search index.
+- **Filterable**: Allow the field contents to be used as filters for the search index.
+- **Sortable**: Make the field contents available for sorting the search index.
+- **Facetable**: Use the field contents for faceted navigation structure.
+- **Searchable**: Include the field contents in full text search. Strings are searchable. Numeric fields and Boolean fields are often marked as not searchable.
-By default, the wizard scans the data source for unique identifiers as the basis for the key field. *Strings* are attributed as **Retrievable** and **Searchable**. *Integers* are attributed as **Retrievable**, **Filterable**, **Sortable**, and **Facetable**.
+The storage requirements for the index can vary as a result of attribute selection. For example, enabling a field as **Filterable** requires more storage, but enabling a field as **Retrievable** doesn't. For more information, see [Example demonstrating the storage implications of attributes and suggesters](search-what-is-an-index.md#example-demonstrating-the-storage-implications-of-attributes-and-suggesters).
-1. Accept the defaults.
+By default, the **Import data** wizard scans the data source for unique identifiers as the basis for the **Key** field. Strings are attributed as **Retrievable** and **Searchable**. Integers are attributed as **Retrievable**, **Filterable**, **Sortable**, and **Facetable**.
- If you rerun the wizard a second time using an existing hotels data source, the index won't be configured with default attributes. You'll have to manually select attributes on future imports.
+Follow these steps to configure the index:
-1. Continue to the next page.
+1. Accept the system-generated values for the **Index name** (_hotels-sample-index_) and **Key** field (_HotelId_).
-### Step 4 - Configure indexer
+1. Accept the system-generated values for all field attributes.
-Still in the **Import data** wizard, select **Indexer** > **Name**, and type a name for the indexer.
+ > [!IMPORTANT]
+ > If you rerun the wizard and use an existing hotels-sample data source, the index isn't configured with default attributes.
+ > You have to manually select attributes on future imports.
-This object defines an executable process. You could put it on recurring schedule, but for now use the default option to run the indexer once, immediately.
+1. Select **Next: Create an indexer** to continue.
-Select **Submit** to create and simultaneously run the indexer.
+### Configure the indexer
- :::image type="content" source="media/search-get-started-portal/hotels-indexer.png" alt-text="Screenshot of the hotels indexer definition in the wizard." border="true":::
+The last step is to configure the indexer for the search index. This object defines an executable process. You can configure the indexer to run on a recurring schedule.
-## Monitor progress
+1. Accept the system-generated value for the **Indexer name** (_hotels-sample-indexer_).
-The wizard should take you to the Indexers list where you can monitor progress. For self-navigation, go to the Overview page and select the **Indexers** tab.
+1. For this quickstart, use the default option to run the indexer once, immediately.
-It can take a few minutes for the portal to update the page, but you should see the newly created indexer in the list, with status indicating "in progress" or success, along with the number of documents indexed.
+1. Select **Submit** to create and simultaneously run the indexer.
- :::image type="content" source="media/search-get-started-portal/indexers-inprogress.png" alt-text="Screenshot of the indexer progress message in the wizard." border="true":::
+ :::image type="content" source="media/search-get-started-portal/hotels-sample-indexer.png" alt-text="Screenshot that shows how to configure the indexer for the hotels-sample data source in the Import data wizard." border="false":::
-## Check results
+## Monitor indexer progress
-The service overview page provides links to the resources created in your Azure Cognitive Search service. To view the index you just created, select **Indexes** from the list of links.
+After you complete the **Import data** wizard, you can monitor creation of the indexer or index. The service **Overview** page provides links to the resources created in your Azure Cognitive Search service.
-Wait for the portal page to refresh. After a few minutes, you should see the index with a document count and storage size.
+1. Go to the **Overview** page for your Azure Cognitive Search service in the Azure portal.
- :::image type="content" source="media/search-get-started-portal/indexes-list.png" alt-text="Screenshot of the Indexes list on the service dashboard." border="true":::
+1. Select **Usage** to see the summary details for the service resources.
-From this list, you can select on the *hotels-sample* index that you just created, view the index schema. and optionally add new fields.
+1. In the **Indexers** box, select **View indexers**.
-The **Fields** tab shows the index schema. If you're writing queries and need to check whether a field is filterable or sortable, this tab shows you the attributes.
+ :::image type="content" source="media/search-get-started-portal/view-indexers.png" alt-text="Screenshot that shows how to check the status of the indexer creation process in the Azure portal." lightbox="media/search-get-started-portal/view-indexers.png":::
-Scroll to the bottom of the list to enter a new field. While you can always create a new field, in most cases, you can't change existing fields. Existing fields have a physical representation in your search service and are thus non-modifiable, not even in code. To fundamentally change an existing field, create a new index, dropping the original.
+ It can take a few minutes for the page results to update in the Azure portal. You should see the newly created indexer in the list with a status of _In progress_ or _Success_. The list also shows the number of documents indexed.
- :::image type="content" source="media/search-get-started-portal/sample-index-def.png" alt-text="Screenshot of the sample index definition in Azure portal." border="true":::
+ :::image type="content" source="media/search-get-started-portal/indexers-status.png" alt-text="Screenshot that shows the creation of the indexer in progress in the Azure portal.":::
-Other constructs, such as scoring profiles and CORS options, can be added at any time.
+## Check search index results
-To clearly understand what you can and can't edit during index design, take a minute to view index definition options. Grayed-out options are an indicator that a value can't be modified or deleted.
+On the **Overview** page for the service, you can do a similar check for the index creation.
-## <a name="query-index"></a> Query using Search explorer
+1. In the **Indexes** box, select **View indexes**.
-You now have a search index that can be queried using [**Search explorer**](search-explorer.md).
+ Wait for the Azure portal page to refresh. You should see the index with a document count and storage size.
-**Search explorer** sends REST calls that conform to the [Search Documents API](/rest/api/searchservice/search-documents). The tool supports [simple query syntax](/rest/api/searchservice/simple-query-syntax-in-azure-search) and [full Lucene query parser](/rest/api/searchservice/lucene-query-syntax-in-azure-search).
+ :::image type="content" source="media/search-get-started-portal/indexes-list.png" alt-text="Screenshot of the Indexes list on the Azure Cognitive Search service dashboard in the Azure portal.":::
-1. Select **Search explorer** on the command bar.
+1. To view the schema for the new index, select the index name, **hotels-sample-index**.
- :::image type="content" source="medi.png" alt-text="Screenshot of the Search Explorer command on the command bar." border="true":::
+1. On the **hotels-sample-index** index page, select the **Fields** tab to view the index schema.
+
+   If you're writing queries and need to check whether a field is **Filterable** or **Sortable**, use this tab to see the attribute settings (or retrieve them over REST, as shown in the sketch after these steps).
-1. From **Index**, choose "hotels-sample-index".
+ :::image type="content" source="media/search-get-started-portal/index-schema-definition.png" alt-text="Screenshot that shows the schema definition for an index in the Azure Cognitive Search service in the Azure portal.":::
- :::image type="content" source="media/search-get-started-portal/search-explorer-changeindex.png" alt-text="Screenshot of the Index and API selection lists in Search Explorer." border="true":::
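+
+To check the same attribute settings outside the portal, you can retrieve the index definition over REST. A minimal sketch, assuming an admin API key; the companion `GET /indexes/hotels-sample-index/stats?api-version=2020-06-30` request returns the document count and storage size:
+
+```http
+GET https://{{search-service-name}}.search.windows.net/indexes/hotels-sample-index?api-version=2020-06-30
+api-key: {{admin-api-key}}
+```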
+## Add or change fields
-1. In the search bar, paste in a query string from the examples below and select **Search**.
+On the **Fields** tab, you can create a new field for a schema definition with the **Add field** option. Specify the field name, the data type, and attribute settings.
+
+While you can always create a new field, in most cases, you can't change existing fields. Existing fields have a physical representation in your search service so they aren't modifiable, not even in code. To fundamentally change an existing field, you need to create a new index, which replaces the original. Other constructs, such as scoring profiles and CORS options, can be added at any time.
- :::image type="content" source="media/search-get-started-portal/search-explorer-query-string-example.png" alt-text="Screenshot of the query string text field and search button in Search Explorer." border="true":::
+To clearly understand what you can and can't edit during index design, take a minute to view the index definition options. Grayed options in the field list indicate values that can't be modified or deleted.
-## Run more example queries
+## <a name="query-index"></a> Query with Search explorer
+
+You now have a search index that can be queried with the **Search explorer** tool in Azure Cognitive Search. **Search explorer** sends REST calls that conform to the [Search Documents REST API](/rest/api/searchservice/search-documents). The tool supports [simple query syntax](/rest/api/searchservice/simple-query-syntax-in-azure-search) and [full Lucene query syntax](/rest/api/searchservice/lucene-query-syntax-in-azure-search).
+
+You can access the tool from the **Search explorer** tab on the index page and from the **Overview** page for the service.
+
+1. Go to the **Overview** page for your Azure Cognitive Search service in the Azure portal, and select **Search explorer**.
-All of the queries in this section are designed for **Search Explorer** and the Hotels sample index. Results are returned as verbose JSON documents. All fields marked as "retrievable" in the index can appear in results. For more information about queries, see [Querying in Azure Cognitive Search](search-query-overview.md).
+ :::image type="content" source="media/search-get-started-portal/open-search-explorer.png" alt-text="Screenshot that shows how to open the Search Explorer tool from the Overview page for the Azure Cognitive Search service in the Azure portal.":::
-| Query | Description |
-|-|-|
-| `search=spa` | Simple full text query with top N results. The **`search=`** parameter is used for keyword search, in this case, returning hotel data for those containing *spa* in any searchable field in the document. |
-| `search=beach &$filter=Rating gt 4` | Filtered query. In this case, ratings greater than 4. |
-| `search=spa &$select=HotelName,Description,Tags &$count=true &$top=10` | Parameterized query. The **`&`** symbol is used to append search parameters, which can be specified in any order. </br>**`$select`** parameter returns a subset of fields for more concise search results. </br>**`$count=true`** parameter returns the total count of all documents that match the query. </br>**`$top=10`** returns the highest ranked 10 documents out of the total. By default, Azure Cognitive Search returns the first 50 best matches. You can increase or decrease the amount using this parameter. |
-| `search=* &facet=Category &$top=2` | Facet query, used to return an aggregated count of documents that match a facet value you provide. On an empty or unqualified search, all documents are represented. In the hotels index, the Category field is marked as "facetable". |
-| `search=spa &facet=Rating`| Facet on numeric values. This query is facet for rating, on a text search for "spa". The term "Rating" can be specified as a facet because the field is marked as retrievable, filterable, and facetable in the index, and its numeric values (1 through 5) are suitable for grouping results by each value.|
-| `search=beach &highlight=Description &$select=HotelName, Description, Category, Tags` | Hit highlighting. The term "beach" will be highlighted when it appears in the "Description" field. |
-| `search=seatle` followed by </br>`search=seatle~ &queryType=full` | Fuzzy search. By default, misspelled query terms, like *seatle* for "Seattle", fail to return matches in typical search. The first example returns no results. Adding **`queryType=full`** invokes the full Lucene query parser, which supports the `~` operand for fuzzy search. |
-| `$filter=geo.distance(Location, geography'POINT(-122.12 47.67)') le 5 &search=* &$select=HotelName, Address/City, Address/StateProvince &$count=true` | Geospatial search. The example query filters all results for positional data, where results are less than 5 kilometers from a given point, as specified by latitude and longitude coordinates (this example uses Redmond, Washington as the point of origin). |
+1. In the **Index** dropdown list, select the new index, **hotels-sample-index**.
-## Takeaways
+ :::image type="content" source="media/search-get-started-portal/search-explorer-change-index.png" alt-text="Screenshot that shows how to select an index in the Search Explorer tool in the Azure portal.":::
+
+ The **Request URL** box updates to show the link target with the selected index and API version.
+
+1. In the **Query string** box, enter a query string.
+
+ For this quickstart, you can choose a query string from the examples provided in the [Run more example queries](#run-more-example-queries) section. The following example uses the query `search=beach &$filter=Rating gt 4`.
+
+ :::image type="content" source="media/search-get-started-portal/search-explorer-query-string.png" alt-text="Screenshot that shows how to enter and run a query in the Search Explorer tool.":::
+
+ To change the presentation of the query syntax, use the **View** dropdown menu to switch between **Query view** and **JSON view**.
+
+1. Select **Search** to run the query.
+
+ The **Results** box updates to show the query results. For long results, use the **Mini-map** for the **Results** box to jump quickly to nonvisible areas of the output.
+
+ :::image type="content" source="media/search-get-started-portal/search-explorer-query-results.png" alt-text="Screenshot that shows long results for a query in the Search Explorer tool and the mini-map.":::
+
+For more information, see [Quickstart: Use Search explorer to run queries in the Azure portal](search-explorer.md).
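+
+As a rough REST equivalent of what **Search explorer** sends, you can POST the same query to the Search Documents REST API. A minimal sketch, assuming the `2020-06-30` API version and placeholder service name and key:
+
+```http
+POST https://{{search-service-name}}.search.windows.net/indexes/hotels-sample-index/docs/search?api-version=2020-06-30
+Content-Type: application/json
+api-key: {{query-api-key}}
+
+{
+    "search": "beach",
+    "filter": "Rating gt 4",
+    "count": true
+}
+```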
+
+## Run more example queries
-This quickstart provided a quick introduction to Azure Cognitive Search using the Azure portal.
+The queries in the following table are designed for searching the hotels-sample index with **Search Explorer**. The results are returned as verbose JSON documents. All fields marked as **Retrievable** in the index can appear in the results.
-You learned how to create a search index using the **Import data** wizard. You created your first [indexer](search-indexer-overview.md) and learned the basic workflow for index design. See [Import data wizard in Azure Cognitive Search](search-import-data-portal.md) for more information about the wizard's benefits and limitations.
+| Query syntax | Query type | Description | Results |
+| | | | |
+| `search=spa` | Full text query | The `search=` parameter searches for specific keywords. | The query seeks hotel data that contains the keyword `spa` in any searchable field in the document. |
+| `search=beach &$filter=Rating gt 4` | Filtered query | The `filter` parameter filters on the supplied conditions. | The query seeks beach hotels with a rating value greater than four. |
+| `search=spa &$select=HotelName,Description,Tags &$count=true &$top=10` | Parameterized query | The ampersand symbol `&` appends search parameters, which can be specified in any order. <br> - The `$select` parameter returns a subset of fields for more concise search results. <br> - The `$count=true` parameter returns the total count of all documents that match the query. <br> - The `$top` parameter returns the specified number of highest ranked documents out of the total. | The query seeks the top 10 spa hotels and displays their names, descriptions, and tags. <br><br> By default, Azure Cognitive Search returns the first 50 best matches. You can increase or decrease the amount by using this parameter. |
+| `search=* &facet=Category &$top=2` | Facet query on a string value | The `facet` parameter returns an aggregated count of documents that match the specified field. <br> - The specified field must be marked as **Facetable** in the index. <br> - On an empty or unqualified search, all documents are represented. | The query seeks the aggregated count for the `Category` field and displays the top 2. |
+| `search=spa &facet=Rating`| Facet query on a numeric value | The `facet` parameter returns an aggregated count of documents that match the specified field. <br> - Although the `Rating` field is a numeric value, it can be specified as a facet because it's marked as **Retrievable**, **Filterable**, and **Facetable** in the index. | The query seeks spa hotels for the `Rating` field data. The `Rating` field has numeric values (1 through 5) that are suitable for grouping results by each value. |
+| `search=beach &highlight=Description &$select=HotelName, Description, Category, Tags` | Hit highlighting | The `highlight` parameter applies highlighting to matching instances of the specified keyword in the document data. | The query seeks and highlights instances of the keyword `beach` in the `Description` field, and displays the corresponding hotel names, descriptions, category, and tags. |
+| Original: `search=seatle` <br><br> Adjusted: `search=seatle~ &queryType=full` | Fuzzy search | By default, misspelled query terms like `seatle` for `Seattle` fail to return matches in a typical search. The `queryType=full` parameter invokes the full Lucene query parser, which supports the tilde `~` operand. When these parameters are present, the query performs a fuzzy search for the specified keyword. The query seeks matching results along with results that are similar to but not an exact match to the keyword. | The original query returns no results because the keyword `seatle` is misspelled. <br><br> The adjusted query invokes the full Lucene query parser to match instances of the term `seatle~`. |
+| `$filter=geo.distance(Location, geography'POINT(-122.12 47.67)') le 5 &search=* &$select=HotelName, Address/City, Address/StateProvince &$count=true` | Geospatial search | The `$filter=geo.distance` parameter filters all results for positional data based on the specified `Location` and `geography'POINT` coordinates. | The query seeks hotels that are within 5 kilometers of the latitude longitude coordinates `-122.12 47.67`, which is "Redmond, Washington, USA." The query displays the total number of matches `&$count=true` with the hotel names and address locations. |
-Using the **Search explorer** in the Azure portal, you learned some basic query syntax through hands-on examples that demonstrated key capabilities such as filters, hit highlighting, fuzzy search, and geospatial search.
+Take a minute to try a few of these example queries for your index. For more information about queries, see [Querying in Azure Cognitive Search](search-query-overview.md).
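+
+As one worked example, the parameterized query from the table translates to the following POST body. This is a sketch only; note that parameter names drop the `$` prefix when they move into the request body:
+
+```http
+POST https://{{search-service-name}}.search.windows.net/indexes/hotels-sample-index/docs/search?api-version=2020-06-30
+Content-Type: application/json
+api-key: {{query-api-key}}
+
+{
+    "search": "spa",
+    "select": "HotelName,Description,Tags",
+    "count": true,
+    "top": 10
+}
+```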
## Clean up resources
-When you're working in your own subscription, it's a good idea at the end of a project to identify whether you still need the resources you created. Resources left running can cost you money. You can delete resources individually or delete the resource group to delete the entire set of resources.
+When you work in your own subscription, it's a good idea at the end of a project to identify whether you still need the resources you created. Resources left running can cost you money. You can delete resources individually or delete the resource group to delete the entire set of resources.
-You can find and manage resources in the portal, using the **All resources** or **Resource groups** link in the left-navigation pane.
+You can find and manage resources for your service in the Azure portal under **All resources** or **Resource groups** in the left pane.
-If you're using a free service, remember that the limit is three indexes, indexers, and data sources. You can delete individual items in the portal to stay under the limit.
+If you use a free service, remember that the limit is three indexes, indexers, and data sources. You can delete individual items in the Azure portal to stay under the limit.
## Next steps
-Use a portal wizard to generate a ready-to-use web app that runs in a browser. You can try out this wizard on the small index you just created, or use one of the built-in sample data sets for a richer search experience.
+Try an Azure portal wizard to generate a ready-to-use web app that runs in a browser. Use this wizard on the small index you created in this quickstart, or use one of the built-in sample data sets for a richer search experience.
> [!div class="nextstepaction"]
> [Create a demo app in the portal](search-create-app-portal.md)
search Search Get Started Semantic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-get-started-semantic.md
This quickstart walks you through the query modifications that invoke semantic s
+ An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/).
-+ Azure Cognitive Search, at Basic tier or higher, with [semantic search enabled](semantic-search-overview.md#enable-semantic-search).
++ Azure Cognitive Search, at Basic tier or higher, with [semantic search enabled](semantic-how-to-enable-disable.md).
+ An API key and service endpoint:
search Search Get Started Vector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-get-started-vector.md
Get started with vector search in Azure Cognitive Search using the **2023-07-01-
+ An Azure subscription. [Create one for free](https://azure.microsoft.com/free/).
-+ Azure Cognitive Search, in any region and on any tier. However, if you want to also use [semantic search](semantic-search-overview.md), as shown in the last two examples, your search service must be Basic tier or higher, with [semantic search enabled](semantic-search-overview.md#enable-semantic-search).
++ Azure Cognitive Search, in any region and on any tier. However, if you want to also use [semantic search](semantic-search-overview.md), as shown in the last two examples, your search service must be Basic tier or higher, with [semantic search enabled](semantic-how-to-enable-disable.md).

Most existing services support vector search. For a small subset of services created prior to January 2019, an index containing vector fields will fail on creation. In this situation, a new service must be created.
api-key: {{admin-api-key}}
### Semantic hybrid search
-Assuming that you've [enabled semantic search](semantic-search-overview.md#enable-semantic-search) and your index definition includes a [semantic configuration](semantic-how-to-query-request.md), you can formulate a query that includes vector search, plus keyword search with semantic ranking, caption, answers, and spell check.
+Assuming that you've [enabled semantic search](semantic-how-to-enable-disable.md) and your index definition includes a [semantic configuration](semantic-how-to-query-request.md), you can formulate a query that includes vector search, plus keyword search with semantic ranking, caption, answers, and spell check.
```http
POST https://{{search-service-name}}.search.windows.net/indexes/{{index-name}}/docs/search?api-version={{api-version}}
search Search Howto Index Cosmosdb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-index-cosmosdb.md
The data source definition specifies the data to index, credentials, and policie
### Supported credentials and connection strings
-Indexers can connect to a collection using the following connections. For connections that target the [SQL API](../cosmos-db/sql-query-getting-started.md), you can omit "ApiKind" from the connection string.
+Indexers can connect to a collection using the following connections.
Avoid port numbers in the endpoint URL. If you include the port number, the connection will fail.
Avoid port numbers in the endpoint URL. If you include the port number, the conn
| Managed identity connection string | ||
-|`{ "connectionString" : "ResourceId=/subscriptions/<your subscription ID>/resourceGroups/<your resource group name>/providers/Microsoft.DocumentDB/databaseAccounts/<your cosmos db account name>/;(ApiKind=[api-kind];)" }`|
-|This connection string doesn't require an account key, but you must have previously configured a search service to [connect using a managed identity](search-howto-managed-identities-data-sources.md) and created a role assignment that grants **Cosmos DB Account Reader Role** permissions. See [Setting up an indexer connection to an Azure Cosmos DB database using a managed identity](search-howto-managed-identities-cosmos-db.md) for more information.|
+|`{ "connectionString" : "ResourceId=/subscriptions/<your subscription ID>/resourceGroups/<your resource group name>/providers/Microsoft.DocumentDB/databaseAccounts/<your cosmos db account name>/;(ApiKind=[api-kind];)/(IdentityAuthType=[identity-auth-type])" }`|
+|This connection string doesn't require an account key, but you must have previously configured a search service to [connect using a managed identity](search-howto-managed-identities-data-sources.md). For connections that target the [SQL API](../cosmos-db/sql-query-getting-started.md), you can omit `ApiKind` from the connection string. For more information about `ApiKind` and `IdentityAuthType`, see [Setting up an indexer connection to an Azure Cosmos DB database using a managed identity](search-howto-managed-identities-cosmos-db.md).|
<a name="flatten-structures"></a>
search Search Howto Managed Identities Cosmos Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-managed-identities-cosmos-db.md
You can use a system-assigned managed identity or a user-assigned managed identi
* [Create a managed identity](search-howto-managed-identities-data-sources.md) for your search service.
-* [Assign a role](search-howto-managed-identities-data-sources.md#assign-a-role) in Azure Cosmos DB.
+* Assign the **Cosmos DB Account Reader** role to the search service managed identity. This role grants the ability to read Azure Cosmos DB account data. For more information about role assignments in Cosmos DB, see [Configure role-based access control to data](search-howto-managed-identities-data-sources.md#assign-a-role).
+
+* Data plane role assignment: For more information, see [Data plane role assignments](../cosmos-db/how-to-setup-rbac.md).
+
+* Example for a read-only data plane role assignment:
+```azurepowershell
+# Variables used by the role assignment command that follows
+$cosmosdbname = "<cosmos db account name>"
+$resourcegroup = "<resource group name>"
+$subscription = "<subscription id>"
+$sys_principal = "<principal id for system assigned identity>"
+$readOnlyRoleDefinitionId = "00000000-0000-0000-0000-000000000001"
+$scope=$(az cosmosdb show --name $cosmosdbname --resource-group $resourcegroup --query id --output tsv)
+```
+
+Role assignment for system-assigned identity:
- For data reader access, you'll need the **Cosmos DB Account Reader** role and the identity used to make the request. This role works for all Azure Cosmos DB APIs supported by Cognitive Search. This is a control plane RBAC role.
+```azurepowershell
+az cosmosdb sql role assignment create --account-name $cosmosdbname --resource-group $resourcegroup --role-definition-id $readOnlyRoleDefinitionId --principal-id $sys_principal --scope $scope
+```
+* For Cosmos DB for NoSQL, you can optionally [enforce RBAC as the only authentication method](../cosmos-db/how-to-setup-rbac.md#disable-local-auth) for data connections by setting `disableLocalAuth` to `true` for your Cosmos DB account.
- At this time, Cognitive Search obtains keys with the identity and uses those keys to connect to the Azure Cosmos DB account. This means that [enforcing RBAC as the only authentication method in Azure Cosmos DB](../cosmos-db/how-to-setup-rbac.md#disable-local-auth) isn't supported when using Search with managed identities to connect to Azure Cosmos DB.
+* *For Gremlin and MongoDB Collections*:
+   Indexer support is currently in preview. At this time, a preview limitation requires Cognitive Search to connect by using keys. You can still set up a managed identity and role assignment, but Cognitive Search uses the role assignment only to get keys for the connection. This limitation means that you can't configure an [RBAC-only approach](../cosmos-db/how-to-setup-rbac.md#disable-local-auth) for indexers that connect to Gremlin or MongoDB with a managed identity.
* You should be familiar with [indexer concepts](search-indexer-overview.md) and [configuration](search-howto-index-cosmosdb.md).
The [REST API](/rest/api/searchservice/create-data-source), Azure portal, and th
When you're connecting with a system-assigned managed identity, the only change to the data source definition is the format of the "credentials" property. You'll provide the database name and a ResourceId that has no account key or password. The ResourceId must include the subscription ID of Azure Cosmos DB, the resource group, and the Azure Cosmos DB account name.

* For SQL collections, the connection string doesn't require "ApiKind".
+* For SQL collections, add "IdentityAuthType=AccessToken" if RBAC is enforced as the only authentication method. It doesn't apply to MongoDB and Gremlin collections.
* For MongoDB collections, add "ApiKind=MongoDb" to the connection string and use a preview REST API.
* For Gremlin graphs, add "ApiKind=Gremlin" to the connection string and use a preview REST API.
api-key: [Search service admin key]
"name": "[my-cosmosdb-ds]", "type": "cosmosdb", "credentials": {
- "connectionString": "ResourceId=/subscriptions/[subscription-id]/resourceGroups/[rg-name]/providers/Microsoft.DocumentDB/databaseAccounts/[cosmos-account-name];Database=[cosmos-database];ApiKind=[SQL | Gremlin | MongoDB];"
+ "connectionString": "ResourceId=/subscriptions/[subscription-id]/resourceGroups/[rg-name]/providers/Microsoft.DocumentDB/databaseAccounts/[cosmos-account-name];Database=[cosmos-database];ApiKind=[SQL | Gremlin | MongoDB];IdentityAuthType=[AccessToken | AccountKey]"
}, "container": { "name": "[my-cosmos-collection]", "query": null }, "dataChangeDetectionPolicy": null
The 2021-04-30-preview REST API supports connections based on a user-assigned ma
* First, the format of the "credentials" property is the database name and a ResourceId that has no account key or password. The ResourceId must include the subscription ID of Azure Cosmos DB, the resource group, and the Azure Cosmos DB account name.

* For SQL collections, the connection string doesn't require "ApiKind".
+ * For SQL collections, add "IdentityAuthType=AccessToken" if RBAC is enforced as the only authentication method. It doesn't apply to MongoDB and Gremlin collections.
* For MongoDB collections, add "ApiKind=MongoDb" to the connection string.
* For Gremlin graphs, add "ApiKind=Gremlin" to the connection string.
api-key: [Search service admin key]
"name": "[my-cosmosdb-ds]", "type": "cosmosdb", "credentials": {
- "connectionString": "ResourceId=/subscriptions/[subscription-id]/resourceGroups/[rg-name]/providers/Microsoft.DocumentDB/databaseAccounts/[cosmos-account-name];Database=[cosmos-database];ApiKind=[SQL | Gremlin | MongoDB];"
+ "connectionString": "ResourceId=/subscriptions/[subscription-id]/resourceGroups/[rg-name]/providers/Microsoft.DocumentDB/databaseAccounts/[cosmos-account-name];Database=[cosmos-database];ApiKind=[SQL | Gremlin | MongoDB];IdentityAuthType=[AccessToken | AccountKey]"
}, "container": { "name": "[my-cosmos-collection]", "query": null
search Search Manage Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-manage-rest.md
PATCH https://management.azure.com/subscriptions/{{subscriptionId}}/resourcegrou
## (preview) Disable semantic search
-Although [semantic search isn't enabled](semantic-search-overview.md#enable-semantic-search) by default, you could lock down the feature at the service level.
+Although [semantic search isn't enabled](semantic-how-to-enable-disable.md) by default, you could lock down the feature at the service level.
```rest
PATCH https://management.azure.com/subscriptions/{{subscriptionId}}/resourcegroups/{{resource-group}}/providers/Microsoft.Search/searchServices/{{search-service-name}}?api-version=2021-04-01-Preview
search Search Pagination Page Layout https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-pagination-page-layout.md
Previously updated : 04/18/2023 Last updated : 08/31/2023

# How to work with search results in Azure Cognitive Search
This article explains how to work with a query response in Azure Cognitive Searc
Parameters on the query determine:
-+ Selection of fields within results
++ Field selection
+ Count of matches found in the index for the query
++ Paging results
+ Number of results in the response (up to 50, by default)
+ Sort order
+ Highlighting of terms within a result, matching on either the whole or partial term in the body
Count won't be affected by routine maintenance or other workloads on the search
By default, the search engine returns up to the first 50 matches. The top 50 are determined by search score, assuming the query is full text search or semantic search. Otherwise, the top 50 are an arbitrary order for exact match queries (where uniform "@searchScore=1.0" indicates arbitrary ranking).
-To control the paging of all documents returned in a result set, add `$top` and `$skip` parameters to the query request. The following list explains the logic.
+To control the paging of all documents returned in a result set, add `$top` and `$skip` parameters to the GET query request, or `top` and `skip` to the POST query request. The following list explains the logic.
+ Return the first set of 15 matching documents plus a count of total matches: `GET /indexes/<INDEX-NAME>/docs?search=<QUERY STRING>&$top=15&$skip=0&$count=true`
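The equivalent POST request expresses the same page as body parameters. A sketch, reusing the placeholders above:

```http
POST /indexes/<INDEX-NAME>/docs/search?api-version=2020-06-30
{
    "search": "<QUERY STRING>",
    "top": 15,
    "skip": 0,
    "count": true
}
```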
The results of paginated queries aren't guaranteed to be stable if the underlyin
Following is an example of how you might get duplicates. Assume an index with four documents:
-```text
+```json
{ "id": "1", "rating": 5 } { "id": "2", "rating": 3 } { "id": "3", "rating": 2 }
Following is an example of how you might get duplicates. Assume an index with fo
Now assume you want results returned two at a time, ordered by rating. You would execute this query to get the first page of results: `$top=2&$skip=0&$orderby=rating desc`, producing the following results:
-```text
+```json
{ "id": "1", "rating": 5 } { "id": "2", "rating": 3 } ``` On the service, assume a fifth document is added to the index in between query calls: `{ "id": "5", "rating": 4 }`. Shortly thereafter, you execute a query to fetch the second page: `$top=2&$skip=2&$orderby=rating desc`, and get these results:
-```text
+```json
{ "id": "2", "rating": 3 } { "id": "3", "rating": 2 } ``` Notice that document 2 is fetched twice. This is because the new document 5 has a greater value for rating, so it sorts before document 2 and lands on the first page. While this behavior might be unexpected, it's typical of how a search engine behaves.
+### Paging through a large number of results
+
+Using `$top` and `$skip` allows a search query to page through 100,000 results, but what if results are larger than 100,000? To page through a response this large, use a [sort order](search-query-odata-orderby.md) and [range filter](search-query-odata-comparison-operators.md) as a workaround for `$skip`.
+
+In this workaround, sort and filter are applied to a document ID field or another field that is unique for each document. The unique field must have `filterable` and `sortable` attribution in the search index.
+
+1. Issue a query to return a full page of sorted results.
+
+ ```http
+ POST /indexes/good-books/docs/search?api-version=2020-06-30
+ {
+ "search": "divine secrets",
+ "top": 50,
+ "orderby": "id asc"
+ }
+ ```
+
+1. Choose the last result returned by the search query. An example result with only an "id" value is shown here.
+
+ ```json
+ {
+ "id": "50"
+ }
+ ```
+
+1. Use that "id" value in a range query to fetch the next page of results. This "id" field should have unique values, otherwise pagination may include duplicate results.
+
+ ```http
+ POST /indexes/good-books/docs/search?api-version=2020-06-30
+ {
+ "search": "divine secrets",
+ "top": 50,
+ "orderby": "id asc",
+ "filter": "id ge 50"
+ }
+ ```
+
+1. Pagination ends when the query returns zero results.
+
+> [!NOTE]
+> The "filterable" and "sortable" attributes can only be enabled when a field is first added to an index, they cannot be enabled on an existing field.
+
## Ordering results

In a full text search query, results can be ranked by:
Hit highlighting instructions are provided on the [query request](/rest/api/sear
### Requirements for hit highlighting
-+ Fields must be Edm.String or Collection(Edm.String)
++ Fields must be `Edm.String` or `Collection(Edm.String)`
+ Fields must be attributed as **searchable**

### Specify highlighting in the request
Within a highlighted field, formatting is applied to whole terms. For example, o
"original_title": "Grave Secrets", "title": "Grave Secrets (Temperance Brennan, #5)" }
+]
``` ### Phrase search highlighting
POST /indexes/good-books/docs/search?api-version=2020-06-30
}
```
-Because the criteria now specifies both terms, only one match is found in the search index. The response to the above query looks like this:
+Because the criteria now has both terms, only one match is found in the search index. The response to the above query looks like this:
```json
{
search Search Performance Analysis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-performance-analysis.md
Title: Analyze performance
-description: TBD
+description: Learn about the tools, behaviors, and approaches for analyzing query and indexing performance in Cognitive Search.
Previously updated : 01/30/2023 Last updated : 08/31/2023

# Analyze performance in Azure Cognitive Search
search Search What Is Azure Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-what-is-azure-search.md
Previously updated : 08/10/2023 Last updated : 08/30/2023

# What's Azure Cognitive Search?
Across the Azure platform, Cognitive Search can integrate with other Azure servi
On the search service itself, the two primary workloads are *indexing* and *querying*.
-+ [**Indexing**](search-what-is-an-index.md) is an intake process that loads content into your search service and makes it searchable. Internally, inbound text is processed into tokens and store in inverted indexes, and inbound vectors are stored in vector indexes. You can upload JSON documents, or use an indexer to serialize your data into JSON.
++ [**Indexing**](search-what-is-an-index.md) is an intake process that loads content into your search service and makes it searchable. Internally, inbound text is processed into tokens and stored in inverted indexes, and inbound vectors are stored in vector indexes. The document format that Cognitive Search can index is JSON. You can upload JSON documents that you've assembled, or use an indexer to retrieve and serialize your data into JSON.
- [AI enrichment](cognitive-search-concept-intro.md) through [cognitive skills](cognitive-search-working-with-skillsets.md) is an extension of indexing. If your content needs image or language analysis before it can be indexed, AI enrichment can extract text embedded in application files, translate text, and also infer text and structure from non-text files by analyzing the content.
+ [AI enrichment](cognitive-search-concept-intro.md) through [cognitive skills](cognitive-search-working-with-skillsets.md) is an extension of indexing. If you have images or large unstructured text, you can attach image and language analysis in an indexing pipeline. AI enrichment can extract text embedded in application files, translate text, and also infer text and structure from non-text files by analyzing the content.
+ [**Querying**](search-query-overview.md) can happen once an index is populated with searchable text, when your client app sends query requests to a search service and handles responses. All query execution is over a search index that you control.
On the search service itself, the two primary workloads are *indexing* and *quer
Azure Cognitive Search is well suited for the following application scenarios:
-+ Consolidate heterogeneous content into a private, user-defined search index.
++ Search over your content, isolated from the internet.
+
++ Consolidate heterogeneous content into a user-defined search index.
+ Offload indexing and query workloads onto a dedicated search service.
+ Easily implement search-related features: relevance tuning, faceted navigation, filters (including geo-spatial search), synonym mapping, and autocomplete.
-+ Transform large undifferentiated text or image files, or application files stored in Azure Blob Storage or Azure Cosmos DB, into searchable chunks. This is achieved during indexing through [cognitive skills](cognitive-search-concept-intro.md) that add external processing.
++ Transform large undifferentiated text or image files, or application files stored in Azure Blob Storage or Azure Cosmos DB, into searchable chunks. This is achieved during indexing through [cognitive skills](cognitive-search-concept-intro.md) that add external processing from Azure AI.
+ Add linguistic or custom text analysis. If you have non-English content, Azure Cognitive Search supports both Lucene analyzers and Microsoft's natural language processors. You can also configure analyzers to achieve specialized processing of raw content, such as filtering out diacritics, or recognizing and preserving patterns in strings.
search Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Cognitive Search description: Lists Azure Policy Regulatory Compliance controls available for Azure Cognitive Search. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/03/2023 Last updated : 08/25/2023
search Semantic Answers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/semantic-answers.md
Previously updated : 01/16/2023 Last updated : 08/14/2023

# Return a semantic answer in Azure Cognitive Search
The "semanticConfiguration" parameter is required. It's defined in a search inde
+ "queryLanguage" must be one of the values from the [supported languages list (REST API)](/rest/api/searchservice/preview-api/search-documents#queryLanguage).
-+ A "semanticConfiguration" determines which string fields provide tokens to the extraction model. The same fields that produce captions also produce answers. See [Create a semantic configuration](semantic-how-to-query-request.md#2create-a-semantic-configuration) for details.
++ A "semanticConfiguration" determines which string fields provide tokens to the extraction model. The same fields that produce captions also produce answers. See [Create a semantic configuration](semantic-how-to-query-request.md#2create-a-semantic-configuration) for details. + For "answers", parameter construction is `"answers": "extractive"`, where the default number of answers returned is one. You can increase the number of answers by adding a `count` as shown in the above example, up to a maximum of 10. Whether you need more than one answer depends on the user experience of your app, and how you want to render results.
search Semantic How To Enable Disable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/semantic-how-to-enable-disable.md
+
+ Title: Enable or disable semantic search
+
+description: Steps for turning semantic search on or off in Cognitive Search.
+Last updated : 8/22/2023
+# Enable or disable semantic search
+
+> [!IMPORTANT]
+> Semantic search is in public preview under [supplemental terms of use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). It's available through Azure portal, preview REST APIs, and beta SDKs. This feature is billable. See [Availability and pricing](semantic-search-overview.md#availability-and-pricing).
+
+Semantic search is a premium feature that's billed by usage. By default, semantic search is disabled on all services.
+
+## Enable semantic search
+
+Follow these steps to enable [semantic search](semantic-search-overview.md) for your search service.
+
+### [**Azure portal**](#tab/enable-portal)
+
+1. Open the [Azure portal](https://portal.azure.com).
+
+1. Navigate to your search service. The service must be a billable tier.
+
+1. Determine whether the service region supports semantic search:
+
+ 1. Find your service region in the overview page in the Azure portal.
+
+    1. Check the [Products Available by Region](https://azure.microsoft.com/global-infrastructure/services/?products=search) page on the Azure website to see if your region is listed.
+
+1. On the left-nav pane, select **Semantic Search (Preview)**.
+
+1. Select either the **Free plan** or the **Standard plan**. You can switch between the free plan and the standard plan at any time.
++
+The free plan is capped at 1,000 queries per month. After the first 1,000 queries, the next time you issue a semantic query you'll receive an error message letting you know that you've exhausted your quota. To continue using semantic search, upgrade to the standard plan.
+
+### [**REST**](#tab/enable-rest)
+
+To enable Semantic Search using the REST API, you can use the [Create or Update Service API](/rest/api/searchmanagement/2021-04-01-preview/services/create-or-update#searchsemanticsearch).
+
+Management REST API calls are authenticated through Azure Active Directory. See [Manage your Azure Cognitive Search service with REST APIs](search-manage-rest.md) for instructions on how to authenticate.
+
+* Management REST API version 2021-04-01-Preview provides the semantic search property.
+
+* Owner or Contributor permissions are required to enable or disable features.
+
+> [!NOTE]
+> Create or Update supports two HTTP methods: PUT and PATCH. Both PUT and PATCH can be used to update existing services, but only PUT can be used to create a new service. If PUT is used to update an existing service, it replaces all properties in the service with their defaults if they are not specified in the request. When PATCH is used to update an existing service, it only replaces properties that are specified in the request. When using PUT to update an existing service, it's possible to accidentally introduce an unexpected scaling or configuration change. When enabling semantic search on an existing service, it's recommended to use PATCH instead of PUT.
+
+```http
+PATCH https://management.azure.com/subscriptions/{{subscriptionId}}/resourcegroups/{{resource-group}}/providers/Microsoft.Search/searchServices/{{search-service-name}}?api-version=2021-04-01-Preview
+ {
+ "properties": {
+ "semanticSearch": "standard"
+ }
+ }
+```
+++
+## Disable semantic search using the REST API
+
+To reverse feature enablement, or for full protection against accidental usage and charges, you can disable semantic search using the [Create or Update Service API](/rest/api/searchmanagement/2021-04-01-preview/services/create-or-update#searchsemanticsearch) on your search service. After the feature is disabled, any requests that include the semantic query type will be rejected.
+
+Management REST API calls are authenticated through Azure Active Directory. See [Manage your Azure Cognitive Search service with REST APIs](search-manage-rest.md) for instructions on how to authenticate.
+
+```http
+PATCH https://management.azure.com/subscriptions/{{subscriptionId}}/resourcegroups/{{resource-group}}/providers/Microsoft.Search/searchServices/{{search-service-name}}?api-version=2021-04-01-Preview
+ {
+ "properties": {
+ "semanticSearch": "disabled"
+ }
+ }
+```
+
+To re-enable semantic search, rerun the above request, setting "semanticSearch" to either "free" (default) or "standard".
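+
+For example, a minimal sketch of the same PATCH request that switches back to the free plan:
+
+```http
+PATCH https://management.azure.com/subscriptions/{{subscriptionId}}/resourcegroups/{{resource-group}}/providers/Microsoft.Search/searchServices/{{search-service-name}}?api-version=2021-04-01-Preview
+    {
+        "properties": {
+            "semanticSearch": "free"
+        }
+    }
+```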
+
+## Next steps
+
+[Configure semantic ranking](semantic-how-to-query-request.md) so that you can test out semantic search on your content.
search Semantic How To Query Request https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/semantic-how-to-query-request.md
Previously updated : 7/14/2023 Last updated : 8/15/2023

# Configure semantic ranking and return captions in search results
There are two main activities to perform:
If you have an existing Basic or greater service in a supported region, you can enable semantic search without having to create a new service.
-+ Semantic search [enabled on your search service](semantic-search-overview.md#enable-semantic-search).
++ Semantic search [enabled on your search service](semantic-how-to-enable-disable.md).
+ An existing search index with rich content in a [supported query language](/rest/api/searchservice/preview-api/search-documents#queryLanguage). Semantic search works best on content that is informational or descriptive.
You can add or update a semantic configuration at any time without rebuilding yo
### [**Azure portal**](#tab/portal)
-1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to a search service that has [semantic search enabled](semantic-search-overview.md#enable-semantic-search).
+1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to a search service that has [semantic search enabled](semantic-how-to-enable-disable.md).
1. Open an index.
Your next step is adding parameters to the query request. To be successful, your
[Search explorer](search-explorer.md) has been updated to include options for semantic queries. To configure semantic ranking in the portal, follow the steps below:
-1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to a search service that has semantic search [enabled](semantic-search-overview.md#enable-semantic-search).
+1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to a search service that has semantic search [enabled](semantic-how-to-enable-disable.md).
1. Select **Search explorer** at the top of the overview page.
The following example in this section uses the [hotels-sample-index](search-get-
1. Set "captions" to specify whether semantic captions are included in the result. If you're using a semantic configuration, you should set this parameter. While the ["searchFields" approach](#2buse-searchfields-for-field-prioritization) automatically included captions, "semanticConfiguration" doesn't.
- Currently, the only valid value for this parameter is "extractive". Captions can be configured to return results with or without highlights. The default is for highlights to be returned. This example returns captions without highlights: `extractive|highlight-false`.
+ Currently, the only valid value for this parameter is "extractive". Captions can be configured to return results with or without highlights. The default is for highlights to be returned. This example returns captions without highlights: `extractive|highlight-false`.
+
+   For semantic captions, the fields referenced in the "semanticConfiguration" must have a word limit in the range of 2,000-3,000 words (equivalent to about 10,000 tokens); otherwise, important caption results are missed. If you anticipate that the word count of the fields used by the "semanticConfiguration" could exceed this limit and you need captions, consider the [Text split cognitive skill](cognitive-search-skill-textsplit.md) as part of your [AI enrichment pipeline](cognitive-search-concept-intro.md) while indexing your data with [built-in pull indexers](search-indexer-overview.md).
1. Set "highlightPreTag" and "highlightPostTag" if you want to override the default highlight formatting that's applied to captions.
search Semantic Ranking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/semantic-ranking.md
Previously updated : 07/14/2023 Last updated : 08/31/2023

# Semantic ranking in Azure Cognitive Search
Each document is now represented by a single long string.
> [!NOTE] > In the 2020-06-30-preview, the "searchFields" parameter is used rather than the semantic configuration to determine which fields to use. We recommend upgrading to the 2021-04-30-preview API version for best results.
-The string is composed of tokens, not characters or words. The maximum token count is 128 unique tokens. For estimation purposes, you can assume that 128 tokens are roughly equivalent to a string that is 128 words in length.
+The string is composed of tokens, not characters or words. The maximum token count is 256 unique tokens. For estimation purposes, you can assume that 256 tokens are roughly equivalent to a string that is 256 words in length.
> [!NOTE] > Tokenization is determined in part by the analyzer assignment on searchable fields. If you are using specialized analyzer, such as nGram or EdgeNGram, you might want to exclude that field from "searchFields". For insights into how strings are tokenized, you can review the token output of an analyzer using the [Test Analyzer REST API](/rest/api/searchservice/test-analyzer).
search Semantic Search Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/semantic-search-overview.md
Previously updated : 07/31/2023 Last updated : 08/22/2023
Semantic search is a premium feature that's billed by usage. We recommend this a
> [!div class="checklist"] > * [Check regional and service tier requirements](#availability-and-pricing).
-> * [Enable semantic search for semantic ranking](#enable-semantic-search) on your search service.
+> * [Enable semantic search for semantic ranking](semantic-how-to-enable-disable.md) on your search service.
> * Create or modify queries to [return semantic captions and highlights](semantic-how-to-query-request.md).
> * Add a few more query properties to also [return semantic answers](semantic-answers.md).
Although semantic search isn't beneficial in every scenario, certain content can
## Availability and pricing
-Semantic search and spell check are available on services that meet the criteria in the table below. To use semantic search, your first need to [enable the capabilities](#enable-semantic-search) on your search service.
+Semantic search and spell check are available on services that meet the criteria in the table below. To use semantic search, you first need to [enable the capabilities](semantic-how-to-enable-disable.md) on your search service.
| Feature | Tier | Region | Sign up | Pricing |
|---|---|---|---|---|
Semantic search and spell check are available on services that meet the criteria
Charges for semantic search are levied when query requests include "queryType=semantic" and the search string isn't empty (for example, "search=pet friendly hotels in New York"). If your search string is empty ("search=*"), you won't be charged, even if the queryType is set to "semantic".
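
As an illustration of a billable request, the following sketch issues a semantic query against a hypothetical index. The `semanticConfiguration` name here is an assumption and must match a configuration defined in your index; "queryLanguage" is required for semantic queries in the preview API:

```http
POST https://{{search-service-name}}.search.windows.net/indexes/{{index-name}}/docs/search?api-version=2021-04-30-Preview
Content-Type: application/json
api-key: {{query-api-key}}

{
    "search": "pet friendly hotels in New York",
    "queryType": "semantic",
    "queryLanguage": "en-us",
    "semanticConfiguration": "my-semantic-config"
}
```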
-## Enable semantic search
-
-By default, semantic search is disabled on all services. To enable semantic search for your search service:
-
-1. Open the [Azure portal](https://portal.azure.com).
-1. Navigate to your Standard tier search service.
-1. Determine whether the service region supports semantic search. Search service region is noted on the overview page. Semantic search regions are noted on the [Products Available by Region](https://azure.microsoft.com/global-infrastructure/services/?products=search) page.
-1. On the left-nav pane, select **Semantic Search (Preview)**.
-1. Select either the **Free plan** or the **Standard plan**. You can switch between the free plan and the standard plan at any time.
--
-Semantic Search's free plan is capped at 1,000 queries per month. After the first 1,000 queries in the free plan, you'll receive an error message letting you know you've exhausted your quota whenever you issue a semantic query. When this happens, you need to upgrade to the standard plan to continue using semantic search.
-
-Alternatively, you can also enable semantic search using the REST API that's described in the next section.
-
-## Enable semantic search using the REST API
-
-To enable Semantic Search using the REST API, you can use the [Create or Update Service API](/rest/api/searchmanagement/2021-04-01-preview/services/create-or-update#searchsemanticsearch).
-
-> [!NOTE]
-> Create or Update supports two HTTP methods: PUT and PATCH. Both PUT and PATCH can be used to update existing services, but only PUT can be used to create a new service. If PUT is used to update an existing service, it replaces all properties in the service with their defaults if they are not specified in the request. When PATCH is used to update an existing service, it only replaces properties that are specified in the request. When using PUT to update an existing service, it's possible to accidentally introduce an unexpected scaling or configuration change. When enabling semantic search on an existing service, it's recommended to use PATCH instead of PUT.
-
-* Management REST API version 2021-04-01-Preview provides the semantic search property
-
-* Owner or Contributor permissions are required to enable or disable features
-
-```
-PATCH https://management.azure.com/subscriptions/{{subscriptionId}}/resourcegroups/{{resource-group}}/providers/Microsoft.Search/searchServices/{{search-service-name}}?api-version=2021-04-01-Preview
- {
- "properties": {
- "semanticSearch": "standard"
- }
- }
-```
-
-## Disable semantic search using the REST API
-
-To reverse feature enablement, or for full protection against accidental usage and charges, you can [disable semantic search](/rest/api/searchmanagement/2021-04-01-preview/services/create-or-update#searchsemanticsearch) using the Create or Update Service API on your search service. After the feature is disabled, any requests that include the semantic query type will be rejected.
-
-```http
-PATCH https://management.azure.com/subscriptions/{{subscriptionId}}/resourcegroups/{{resource-group}}/providers/Microsoft.Search/searchServices/{{search-service-name}}?api-version=2021-04-01-Preview
- {
- "properties": {
- "semanticSearch": "disabled"
- }
- }
-```
-
-To re-enable semantic search, rerun the above request, setting "semanticSearch" to either "free" (default) or "standard".
-
-> [!TIP]
-> Management REST API calls are authenticated through Azure Active Directory. For guidance on setting up a security principal and a request, see this blog post [Azure REST APIs with Postman (2021)](https://blog.jongallant.com/2021/02/azure-rest-apis-postman-2021/). The previous example was tested using the instructions and Postman collection provided in the blog post.
- ## Next steps
-[Enable semantic search](#enable-semantic-search) for your search service and follow the steps in [Configure semantic ranking](semantic-how-to-query-request.md) so that you can test out semantic search on your content.
-
+[Enable semantic search](semantic-how-to-enable-disable.md) for your search service and follow the steps in [Configure semantic ranking](semantic-how-to-query-request.md) so that you can test out semantic search on your content.
search Tutorial Javascript Create Load Index https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/tutorial-javascript-create-load-index.md
Previously updated : 07/18/2023 Last updated : 08/29/2023 ms.devlang: javascript
ms.devlang: javascript
# 2 - Create and load Search Index with JavaScript

Continue to build your search-enabled website by following these steps:
+
* Create a search resource
* Create a new index
-* Import data with JavaScript using the [sample script](https://github.com/Azure-Samples/azure-search-javascript-samples/blob/main/search-website-functions-v4/bulk-insert-v4/bulk_insert_books.js) and Azure SDK [@azure/search-documents](https://www.npmjs.com/package/@azure/search-documents).
+* Import data with JavaScript using the [bulk_insert_books script](https://github.com/Azure-Samples/azure-search-javascript-samples/blob/main/search-website-functions-v4/bulk-insert-v4/bulk_insert_books.js) and Azure SDK [@azure/search-documents](https://www.npmjs.com/package/@azure/search-documents).
## Create an Azure Cognitive Search resource
The ESM script uses the Azure SDK for Cognitive Search:
1. As the code runs, the console displays progress.
1. When the upload is complete, the last statement printed to the console is "done".
-## Review the new Search Index
+## Review the new search index
[!INCLUDE [tutorial-load-index-review-index](includes/tutorial-add-search-website-load-index-review.md)]
The ESM script uses the Azure SDK for Cognitive Search:
[!INCLUDE [tutorial-load-index-copy](includes/tutorial-add-search-website-load-index-copy-resource-name.md)]
-
## Next steps

[Deploy your Static Web App](tutorial-javascript-deploy-static-web-app.md)
search Tutorial Javascript Deploy Static Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/tutorial-javascript-deploy-static-web-app.md
Previously updated : 07/18/2023 Last updated : 08/29/2023 ms.devlang: javascript
ms.devlang: javascript
## Next steps
-* [Understand Search integration for the search-enabled website](tutorial-javascript-search-query-integration.md)
+[Explore the JavaScript search code](tutorial-javascript-search-query-integration.md)
search Tutorial Javascript Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/tutorial-javascript-overview.md
Previously updated : 07/18/2023 Last updated : 08/29/2023 ms.devlang: javascript

# 1 - Overview of adding search to a website
-This tutorial builds a website to search through a catalog of books then deploys the website to an Azure Static Web Apps resource.
+In this Azure Cognitive Search tutorial, create a web app that searches through a catalog of books, and then deploy the website to an Azure Static Web Apps resource.
-The application is available:
-* [Sample](https://github.com/Azure-Samples/azure-search-javascript-samples/tree/master/search-website-functions-v4)
-* [Demo website - aka.ms/azs-good-books](https://aka.ms/azs-good-books)
+This tutorial is for JavaScript developers who want to create a frontend client app that includes search interactions like faceted navigation, typeahead, and pagination. It also demonstrates the `@azure/search-documents` library in the Azure SDK for JavaScript for calls to Azure Cognitive Search for indexing and query workflows on the backend.
+
+Source code is available in the [azure-search-javascript-samples](https://github.com/Azure-Samples/azure-search-javascript-samples/tree/master/search-website-functions-v4) GitHub repository.
## What does the sample do?
The application is available:
## How is the sample organized?
-The [sample](https://github.com/Azure-Samples/azure-search-javascript-samples/tree/master/search-website-functions-v4) includes the following:
+The [sample](https://github.com/Azure-Samples/azure-search-javascript-samples/tree/master/search-website-functions-v4) includes the following components:
|App|Purpose|GitHub<br>Repository<br>Location| |--|--|--|
The [sample](https://github.com/Azure-Samples/azure-search-javascript-samples/tr
## Set up your development environment
-Install the following for your local development environment.
+Install the following software in your local development environment.
- [Node.js LTS](https://nodejs.org/en/download)
    - Select latest runtime and version from this [list of supported language versions](../azure-functions/functions-versions.md?pivots=programming-language-javascript&tabs=azure-cli%2clinux%2cin-process%2cv4#languages).
- - If you have a different version of Node.js installed on your local computer, consider using [Node Version Manager](https://github.com/nvm-sh/nvm) (nvm) or a Docker container.
+ - If you have a different version of Node.js installed on your local computer, consider using [Node Version Manager](https://github.com/nvm-sh/nvm) (`nvm`) or a Docker container.
- [Git](https://git-scm.com/downloads)
- [Visual Studio Code](https://code.visualstudio.com/) and the following extensions
  - [Azure Static Web App](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurestaticwebapps)
Forking the sample repository is critical to be able to deploy the Static Web Ap
1. On GitHub, [fork the sample repository](https://github.com/Azure-Samples/azure-search-javascript-samples/fork).
- Complete the fork process in your web browser with your GitHub account. This tutorial uses your fork as part of the deployment to an Azure Static Web App.
+ Complete the fork process in your web browser with your GitHub account. This tutorial uses your fork as part of the deployment to an Azure Static Web App.
+
+1. At a bash terminal, download your forked sample application to your local computer.
+
+ Replace `YOUR-GITHUB-ALIAS` with your GitHub alias.
+
+ ```bash
+ git clone https://github.com/YOUR-GITHUB-ALIAS/azure-search-javascript-samples
+ ```
+
+1. At the same bash terminal, go into your forked repository for this website search example:
+ ```bash
+ cd azure-search-javascript-samples
+ ```
+
+1. Use the Visual Studio Code command `code .` to open your forked repository. The remaining tasks are accomplished from Visual Studio Code, unless otherwise specified.
+
+ ```bash
+ code .
+ ```
## Create a resource group for your Azure resources
Forking the sample repository is critical to be able to deploy the Static Web Ap
## Next steps
-* [Create a Search Index and load with documents](tutorial-javascript-create-load-index.md)
-* [Deploy your Static Web App](tutorial-javascript-deploy-static-web-app.md)
+[Create a search index and load it with documents](tutorial-javascript-create-load-index.md)
search Tutorial Javascript Search Query Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/tutorial-javascript-search-query-integration.md
Previously updated : 07/18/2023 Last updated : 08/29/2023 ms.devlang: javascript

# 4 - Explore the JavaScript search code
-In the previous lessons, you added search to a Static Web App. This lesson highlights the essential steps that establish integration. If you're looking for a cheat sheet on how to integrate search into your JavaScript app, this article explains what you need to know.
+In the previous lessons, you added search to a static web app. This lesson highlights the essential steps that establish integration. If you're looking for a cheat sheet on how to integrate search into your JavaScript app, this article explains what you need to know.
-The application is available:
-* [Sample](https://github.com/Azure-Samples/azure-search-javascript-samples/tree/master/search-website-functions-v4)
-* [Demo website - aka.ms/azs-good-books](https://aka.ms/azs-good-books)
+The source code is available in the [azure-search-javascript-samples](https://github.com/Azure-Samples/azure-search-javascript-samples/tree/master/search-website-functions-v4) GitHub repository.
## Azure SDK @azure/search-documents
The Function app uses the Azure SDK for Cognitive Search:
* NPM: [@azure/search-documents](https://www.npmjs.com/package/@azure/search-documents)
* Reference Documentation: [Client Library](/javascript/api/overview/azure/search-documents-readme)
-The Function app authenticates through the SDK to the cloud-based Cognitive Search API using your resource name, resource key, and index name. The secrets are stored in the Static Web App settings and pulled in to the Function as environment variables.
+The Function app authenticates through the SDK to the cloud-based Cognitive Search API using your resource name, [API key](search-security-api-keys.md), and index name. The secrets are stored in the static web app settings and pulled into the function as environment variables.
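To make the pattern concrete, here's a minimal sketch of that authentication flow; it isn't the sample's exact code, and the environment variable names are illustrative assumptions:

```javascript
// Hypothetical sketch: build a SearchClient from app settings that the
// Static Web App exposes to the function as environment variables.
// Variable names are placeholders, not the sample's actual settings.
const { SearchClient, AzureKeyCredential } = require("@azure/search-documents");

const client = new SearchClient(
  `https://${process.env["SearchServiceName"]}.search.windows.net/`,
  process.env["SearchIndexName"],
  new AzureKeyCredential(process.env["SearchApiKey"])
);
```

Keeping the key in server-side settings means it's never shipped to the browser.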
## Configure secrets in a configuration file
The Function app authenticates through the SDK to the cloud-based Cognitive Sear
## Azure Function: Search the catalog
-The `Search` [API](https://github.com/Azure-Samples/azure-search-javascript-samples/blob/master/search-website-functions-v4/api-v4/src/functions/search.js) takes a search term and searches across the documents in the Search Index, returning a list of matches.
+The [Search API](https://github.com/Azure-Samples/azure-search-javascript-samples/blob/master/search-website-functions-v4/api-v4/src/functions/search.js) takes a search term and searches across the documents in the search index, returning a list of matches.
-The Azure Function pulls in the Search configuration information, and fulfills the query.
+The Azure Function pulls in the search configuration information, and fulfills the query.
:::code language="javascript" source="~/azure-search-javascript-samples/search-website-functions-v4/api-v4/src/functions/search.js" :::
Call the Azure Function in the React client with the following code.
## Azure Function: Suggestions from the catalog
-The `Suggest` [API](https://github.com/Azure-Samples/azure-search-javascript-samples/blob/master/search-website-functions-v4/api-v4/src/functions/suggest.js) takes a search term while a user is typing and suggests search terms such as book titles and authors across the documents in the search index, returning a small list of matches.
+The [Suggest API](https://github.com/Azure-Samples/azure-search-javascript-samples/blob/master/search-website-functions-v4/api-v4/src/functions/suggest.js) takes a search term while a user is typing and suggests search terms such as book titles and authors across the documents in the search index, returning a small list of matches.
The search suggester, `sg`, is defined in the [schema file](https://github.com/Azure-Samples/azure-search-javascript-samples/blob/master/search-website-functions-v4/bulk-insert-v4/good-books-index.json) used during bulk upload.
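For orientation, a suggester definition in an index schema generally takes the following shape. The `sourceFields` shown here are assumptions for illustration; check the linked schema file for the actual list:

```json
"suggesters": [
  {
    "name": "sg",
    "searchMode": "analyzingInfixMatching",
    "sourceFields": [ "title", "authors" ]
  }
]
```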
The Suggest function API is called in the React app at `\src\components\SearchBa
## Azure Function: Get specific document
-The `Lookup` [API](https://github.com/Azure-Samples/azure-search-javascript-samples/blob/master/search-website-functions-v4/api-v4/src/functions/lookup.js) takes an ID and returns the document object from the Search Index.
+The [Lookup API](https://github.com/Azure-Samples/azure-search-javascript-samples/blob/master/search-website-functions-v4/api-v4/src/functions/lookup.js) takes an ID and returns the document object from the search index.
:::code language="javascript" source="~/azure-search-javascript-samples/search-website-functions-v4/api-v4/src/functions/lookup.js" :::
This function API is called in the React app at `\src\pages\Details\Detail.js` a
## Next steps
-* [Index Azure SQL data](search-indexer-tutorial.md)
+In this tutorial series, you learned how to create and load a search index in JavaScript, and you built a web app that provides a search experience that includes a search bar, faceted navigation and filters, suggestions, pagination, and document lookup.
+
+As a next step, you can extend this sample in several directions:
+
+* Add [autocomplete](search-add-autocomplete-suggestions.md) for more typeahead.
+* Add or modify [facets](search-faceted-navigation.md) and [filters](search-filters.md).
+* Change the authentication and authorization model, using [Azure Active Directory](search-security-rbac.md) instead of [key-based authentication](search-security-api-keys.md).
+* Change the [indexing methodology](search-what-is-data-import.md). Instead of pushing JSON to a search index, preload a blob container with the good-books dataset and [set up a blob indexer](search-howto-indexing-azure-blob-storage.md) to ingest the data. Knowing how to work with indexers gives you more options for data ingestion and [content enrichment](cognitive-search-concept-intro.md) during indexing.
search Vector Search How To Query https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/vector-search-how-to-query.md
Code samples in the [cognitive-search-vector-pr](https://github.com/Azure/cognit
+ Azure Cognitive Search, in any region and on any tier. Most existing services support vector search. For a small subset of services created prior to January 2019, an index containing vector fields will fail on creation. In this situation, a new service must be created.
-+ A search index containing vector fields. See [Add vector fields to a search index](vector-search-how-to-query.md).
++ A search index containing vector fields. See [Add vector fields to a search index](vector-search-how-to-create-index.md).
+ Use REST API version **2023-07-01-Preview**, the [beta client libraries](https://github.com/Azure/cognitive-search-vector-pr/tree/main), or Search Explorer in the Azure portal.
-+ (Optional) If you want to also use [semantic search (preview)](semantic-search-overview.md) and vector search together, your search service must be Basic tier or higher, with [semantic search enabled](semantic-search-overview.md#enable-semantic-search).
++ (Optional) If you want to also use [semantic search (preview)](semantic-search-overview.md) and vector search together, your search service must be Basic tier or higher, with [semantic search enabled](semantic-how-to-enable-disable.md).

## Limitations
api-key: {{admin-api-key}}
The response includes 5 matches, and each result provides a search score, title, content, and category. In a similarity search, the response always includes "k" matches, even if the similarity is weak. For indexes that have fewer than "k" documents, only that many documents are returned.
-Notice that "select" returns textual fields from the index. Although the vector field is "retrievable" in this example, its content isn't usable as a search result, so it's not included in the results.
+Notice that "select" returns textual fields from the index. Although the vector field is "retrievable" in this example, its content isn't usable as a search result, so it's often excluded in the results.
+
+### Vector query response
+
+Here's a modified example so that you can see the basic structure of a response from a pure vector query.
+
+```json
+{
+ "@odata.count": 3,
+ "value": [
+ {
+ "@search.score": 0.80025613,
+ "title": "Azure Search",
+ "category": "AI + Machine Learning",
+ "contentVector": [
+ -0.0018343845,
+ 0.017952163,
+ 0.0025753193,
+ ...
+ ]
+ },
+ {
+ "@search.score": 0.78856903,
+ "title": "Azure Application Insights",
+ "category": "Management + Governance",
+ "contentVector": [
+ -0.016821077,
+ 0.0037742127,
+ 0.016136652,
+ ...
+ ]
+ },
+ {
+ "@search.score": 0.78650564,
+ "title": "Azure Media Services",
+ "category": "Media",
+ "contentVector": [
+ -0.025449317,
+ 0.0038463024,
+ -0.02488436,
+ ...
+ ]
+ }
+ ]
+}
+```
+
+**Key points:**
+
++ It's reduced to 3 "k" matches.
++ It shows a **`@search.score`** that's determined by the HNSW algorithm and a `cosine` similarity metric.
++ Fields include text and vector values. The content vector field consists of 1536 dimensions for each match, so it's truncated for brevity (normally, you might exclude vector fields from results). The text fields used in the response (`"select": "title, category"`) aren't used during query execution. The match is made on vector data alone. However, a response can include any "retrievable" field in an index. As such, the inclusion of text fields is helpful because their values are easily recognized by users.

### [**.NET**](#tab/dotnet-vector-query)
search Vector Search Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/vector-search-overview.md
Last updated 08/10/2023
> [!IMPORTANT]
> Vector search is in public preview under [supplemental terms of use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). It's available through the Azure portal, preview REST API, and [beta client libraries](https://github.com/Azure/cognitive-search-vector-pr#readme).
-This article is a high-level introduction to vector search in Azure Cognitive Search. It also explains integration with other Azure services and covers [terms and concepts](#vector-search-concepts) related to vector search development.
+This article is a high-level introduction to vector support in Azure Cognitive Search. It also explains integration with other Azure services and covers [terminology and concepts](#vector-search-concepts) related to vector search development.
We recommend this article for background, but if you'd rather get started, follow these steps:
Support for vector search is in public preview and available through the [**2023
## What's vector search in Cognitive Search?
-Vector search is a new capability for indexing, storing, and retrieving vector embeddings from a search index. You can use it to power similarity search, multi-modal search, recommendations engines, or applications implementing the [Retrieval Augmented Generation (RAG) architecture](https://arxiv.org/abs/2005.11401).
+Vector search is a new capability for indexing, storing, and retrieving vector embeddings from a search index. You can use it to power similarity search, multi-modal search, recommendations engines, or applications implementing the [Retrieval Augmented Generation (RAG) architecture](https://aka.ms/what-is-rag).
The following diagram shows the indexing and query workflows for vector search. On the indexing side, prepare source documents that contain embeddings. Cognitive Search doesn't generate embeddings, so your solution should include calls to Azure OpenAI or other models that can transform image, audio, text, and other content into vector representations. Add a *vector field* to your index definition on Cognitive Search. Load the index with a documents payload that includes the vectors. Your index is now ready to query.
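As a rough sketch, a vector field in a preview index definition pairs a collection type with a dimension count and a named algorithm configuration. The field name, dimension count, and configuration name below are illustrative assumptions:

```json
{
  "name": "contentVector",
  "type": "Collection(Edm.Single)",
  "searchable": true,
  "retrievable": true,
  "dimensions": 1536,
  "vectorSearchConfiguration": "my-vector-config"
}
```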
search Vector Search Ranking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/vector-search-ranking.md
Previously updated : 07/14/2023 Last updated : 08/31/2023

# Vector query execution and scoring in Azure Cognitive Search
This article is for developers who need a deeper understanding of vector query e
## Vector similarity
-In a vector query, the search query is a vector as opposed to text in full-text queries. Documents that match the vector query are ranked using vector similarity configured on the vector field defined in the index. A vector query specifies the `k` parameter, which determines how many nearest neighbors of the query vector should be returned from the index.
+In a vector query, the search query is a vector, as opposed to a string in full-text queries. Documents that match the vector query are ranked using the vector similarity algorithm configured on the vector field defined in the index. A vector query specifies the `k` parameter, which determines how many nearest neighbors of the query vector should be returned in the results.
> [!NOTE]
-> Full-text search queries could return fewer than the requested number of results if there are fewer or no matches, but vector search will return up to `k` matches as long as there are enough documents in the index. This is because with vector search, similarity is relative to the input query vector, not absolute. This means less relevant results have a worse similarity score, but they can still be the "nearest" vectors if there aren't any closer vectors. As such, a response with no meaningful results can still return `k` results, but each result's similarity score would be low.
+> Full-text search queries could return fewer than the requested number of results if there are insufficient matches, but vector search always returns up to `k` matches as long as there are enough documents in the index. This is because with vector search, similarity is relative to the input query vector, not absolute. Less relevant results have a worse similarity score, but they are still the "nearest" vectors if there isn't anything closer. As such, a response with no meaningful results can still return `k` results, but each result's similarity score would be low.
-In a typical application, the input data within a query request would be fed into the same machine learning model that generated the embedding space for the vector index. This model would output a vector in the same embedding space. Since similar data are clustered close together, finding matches is equivalent to finding the nearest vectors and returning the associated documents as the search result.
+In a typical application, the input value within a query request would be fed into the same machine learning model that generated the embedding space for the vector index. This model would output a vector in the same embedding space. Since similar vectors are clustered close together, finding matches is equivalent to finding the nearest vectors and returning the associated documents as the search result.
-If a query request is about dogs, the model maps the query into a vector that exists somewhere in the cluster of vectors representing documents about dogs. Identifying which vectors are the most similar to the query, based on a similarity metric, determines which documents are the most relevant.
+For example, if a query request is about dogs, the model maps the query into a vector that exists somewhere in the cluster of vectors representing documents about dogs. Identifying which vectors are the most similar to the query, based on a similarity metric, determines which documents are the most relevant.
-Commonly used similarity metrics include `cosine`, `euclidean` (also known as `l2 norm`), and `dotProduct`, which are summarized here:
+### Similarity metrics used to measure nearness
-+ `cosine` calculates the angle between two vectors. Cosine is the similarity metric used by [Azure OpenAI embedding models](/azure/ai-services/openai/concepts/understand-embeddings#cosine-similarity).
+A similarity metric measures the distance between neighboring vectors. Commonly used similarity metrics include `cosine`, `euclidean` (also known as `l2 norm`), and `dotProduct`, which are summarized in the following table.
-+ `euclidean` calculates the Euclidean distance between two vectors, which is the l2-norm of the difference of the two vectors.
+| Metric | Description |
+|--|-|
+| `cosine` | Calculates the angle between two vectors. Cosine is the similarity metric used by [Azure OpenAI embedding models](/azure/ai-services/openai/concepts/understand-embeddings#cosine-similarity). |
+| `euclidean` | Calculates the Euclidean distance between two vectors, which is the l2-norm of the difference of the two vectors. |
+| `dotProduct` | Calculates the product of the vectors' magnitudes and the cosine of the angle between them. |
-+ `dotProduct` is affected by both vectors' magnitudes and the angle between them.
-
-For normalized embedding spaces, dotProduct is equivalent to the cosine similarity, but is more efficient.
+For normalized embedding spaces, `dotProduct` is equivalent to the `cosine` similarity, but is more efficient.
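The equivalence is easy to see side by side. Here's an illustrative sketch of the three measures; it's conceptual, not service code:

```javascript
// Illustrative only: the three similarity/distance measures.
const dot = (a, b) => a.reduce((sum, ai, i) => sum + ai * b[i], 0);
const norm = (a) => Math.sqrt(dot(a, a));

const cosine = (a, b) => dot(a, b) / (norm(a) * norm(b));
const euclidean = (a, b) => norm(a.map((ai, i) => ai - b[i]));

// For normalized (unit-length) vectors, norm(a) === norm(b) === 1,
// so cosine(a, b) reduces to dot(a, b): same ranking, fewer operations.
```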
## Hybrid search
Hybrid search combines results from both term and vector queries, which use diff
## Reciprocal Rank Fusion (RRF) for hybrid queries
-For hybrid search scoring, Cognitive Search uses Reciprocal Rank Fusion (RRF). In information retrieval, RRF combines the results of different search methods to produce a single, more accurate and relevant result. (Here, a search method refers to methods such as vector search and full-text search.) RRF is based on the concept of reciprocal rank, which is the inverse of the rank of the first relevant document in a list of search results. 
+For hybrid search scoring, Cognitive Search uses Reciprocal Rank Fusion (RRF). In information retrieval, RRF combines the results of different search methods to produce a single, more accurate and relevant result. Here, a search method refers to methods such as vector search and full-text search. RRF is based on the concept of reciprocal rank, which is the inverse of the rank of the first relevant document in a list of search results. 
At a basic level, RRF works by taking the search results from multiple methods, assigning a reciprocal rank score to each document in the results, and then combining these scores to create a new ranking. The main idea behind this method is that documents appearing in the top positions across multiple search methods are likely to be more relevant and should be ranked higher in the combined result. Here's a simple explanation of the RRF process:
-1. Obtain search results from multiple methods: Let's say we have two search methods, A and B. (In the context of Azure Cognitive Search, this will be vector search and full-text search.) We search for a specific query on both methods and get ranked lists of documents as results.
+1. Obtain search results from multiple methods.
+
+ In the context of Azure Cognitive Search, this is vector search and full-text search, with or without semantic ranking. We search for a specific query using both methods and get parallel ranked lists of documents as results. Each method has a ranking methodology. With BM25 ranking on full-text search, rank is by **`@search.score`**. With semantic reranking over BM25 ranked results, rank is by **`@search.rerankerScore`**. With similarity search for vector queries, the similarity score is also articulated as **`@search.score`** within its result set.
+
+1. Assign reciprocal rank scores to each result in each of the ranked lists. A new **`@search.score`** property is generated by the RRF algorithm for each match in each result set. For each document in the search results, we assign a reciprocal rank score based on its position in the list. The score is calculated as `1/(rank + k)`, where `rank` is the position of the document in the list, and `k` is a constant, which was experimentally observed to perform best if it's set to a small value like 60.
+
+1. Combine scores. For each document, we sum the reciprocal rank scores obtained from each search system. This gives us a combined score for each document. 
-1. Assign reciprocal rank scores: For each document in the search results, we assign a reciprocal rank score based on its position in the list. The score is calculated as 1/(rank + k), where rank is the position of the document in the list, and k is a constant which was experimentally observed to perform best if it's set to a small value like 60.
+1. Rank documents based on combined scores. Finally, we sort the documents based on their combined scores, and the resulting list is the fused ranking.
-1. Combine scores: For each document, we sum the reciprocal rank scores obtained from each search system. This gives us a combined score for each document. 
+Whenever results are ranked, the **`@search.score`** property contains the value used to order the results. Scores are generated by ranking algorithms that vary for each method and aren't comparable.
-1. Rank documents based on combined scores: Finally, we sort the documents based on their combined scores, and the resulting list is the fused ranking.
+| Search method | @search.score algorithm |
+||-|
+| full-text search | **`@search.score`** is produced by the BM25 algorithm and its values are unbounded. |
+| vector similarity search | **`@search.score`** is produced by the HNSW algorithm, plus the similarity metric specified in the configuration. |
+| hybrid search | **`@search.score`** is produced by the RRF algorithm that merges results from parallel query execution, such as vector and full-text search. |
+| hybrid search with semantic reranking | **`@search.score`** is the RRF score from your initial retrieval, but you'll also see the **`@search.rerankerScore`**, which is from the reranking model powered by Bing and ranges from 0 to 4. |
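As an illustration of the fusion steps above, here's a minimal RRF sketch. It's a conceptual model, not the service's internal implementation:

```javascript
// Conceptual RRF: fuse ranked lists of document keys into one ranking.
function reciprocalRankFusion(rankedLists, k = 60) {
  const fused = new Map();
  for (const list of rankedLists) {
    list.forEach((docKey, index) => {
      const rank = index + 1; // 1-based position in this list
      fused.set(docKey, (fused.get(docKey) ?? 0) + 1 / (rank + k));
    });
  }
  // Highest combined score first is the fused ranking.
  return [...fused.entries()].sort((a, b) => b[1] - a[1]);
}

// "B" ranks near the top of both lists, so it tops the fused result.
console.log(reciprocalRankFusion([["A", "B", "C"], ["B", "D", "A"]]));
```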
By default, if you aren't using pagination, Cognitive Search returns the top 50 highest ranking matches for full text search, and it returns `k` matches for vector search. In a hybrid query, the top 50 highest ranked matches of the unified result set are returned. You can use `$top`, `$skip`, and `$next` for paginated results. For more information, see [How to work with search results](search-pagination-page-layout.md).
search Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/whats-new.md
Previously updated : 08/02/2023 Last updated : 08/22/2023
Learn about the latest updates to Azure Cognitive Search functionality, docs, an
> [!NOTE]
> Looking for preview features? Previews are announced here, but we also maintain a [preview features list](search-api-preview.md) so you can find them in one place.
+## August 2023
+
+| Item&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; | Type | Description |
+|--||--|
+| [**Enhanced semantic ranking**](semantic-ranking.md) | Feature | Upgraded models are rolling out for semantic reranking, and availability is extended to more regions. Maximum unique token counts doubled from 128 to 256. |
+
## July 2023

| Item&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; | Type | Description |
security Secure Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/develop/secure-deploy.md
Title: Deploy secure applications on Microsoft Azure description: This article discusses best practices to consider during the release and response phases of your web application project. -+ Previously updated : 06/15/2022 Last updated : 08/29/2023
security Secure Develop https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/develop/secure-develop.md
description: This article discusses best practices to consider during the implem
Previously updated : 01/22/2023 Last updated : 08/30/2023
The following SDL phases are covered in this article:
## Implementation
-The focus of the implementation phase is to establish best practices for early prevention and to detect and remove security issues from the code. Assume that your application will be used in ways that you didn't intend it to be used. This helps you guard against accidental or intentional misuse of your application.
+The focus of the implementation phase is to establish best practices for early prevention and to detect and remove security issues from the code. Assume that your application is used in ways that you didn't intend it to be used. This helps you guard against accidental or intentional misuse of your application.
### Perform code reviews
Before you check in code, conduct code reviews to increase overall code quality
### Perform static code analysis
-[Static code analysis](https://owasp.org/www-community/controls/Static_Code_Analysis) (also known as *source code analysis*) is usually performed as part of a code review. Static code analysis commonly refers to running static code analysis tools to find potential vulnerabilities in non-running code by using techniques like [taint checking](https://en.wikipedia.org/wiki/Taint_checking) and [data flow analysis](https://en.wikipedia.org/wiki/Data-flow_analysis).
+[Static code analysis](https://owasp.org/www-community/controls/Static_Code_Analysis) (also known as *source code analysis*) is performed as part of a code review. Static code analysis commonly refers to running static code analysis tools to find potential vulnerabilities in nonrunning code. Static code analysis uses techniques like [taint checking](https://en.wikipedia.org/wiki/Taint_checking) and [data flow analysis](https://en.wikipedia.org/wiki/Data-flow_analysis).
Azure Marketplace offers [developer tools](https://azuremarketplace.microsoft.com/marketplace/apps/category/developer-tools?page=1&search=code%20review) that perform static code analysis and assist with code reviews.
Any output that you present either visually or within a document should always b
Escaping makes sure that everything is displayed as *output.* Escaping also lets the interpreter know that the data isn't intended to be executed, and this prevents attacks from working. This is another common attack technique called *cross-site scripting* (XSS).
-If you are using a web framework from a third party, you can verify your options for output encoding on websites by using the [OWASP XSS prevention cheat sheet](https://github.com/OWASP/CheatSheetSeries/blob/master/cheatsheets/Cross_Site_Scripting_Prevention_Cheat_Sheet.md).
+If you're using a web framework from a third party, you can verify your options for output encoding on websites by using the [OWASP XSS prevention cheat sheet](https://github.com/OWASP/CheatSheetSeries/blob/master/cheatsheets/Cross_Site_Scripting_Prevention_Cheat_Sheet.md).
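At its simplest, HTML output encoding replaces the characters that can change parsing context. This hypothetical helper sketches the idea; in practice, prefer your framework's built-in encoder:

```javascript
// Minimal HTML-context encoder (illustrative; frameworks do this for you).
function escapeHtml(untrusted) {
  return String(untrusted)
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#x27;");
}
```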
### Use parameterized queries when you contact the database
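As one way to apply this practice, the following hedged sketch uses the Node.js `mssql` driver to bind user input as a typed parameter instead of concatenating it into the SQL string. The table, column, and function names are illustrative:

```javascript
// Hypothetical example: parameter binding with the Node.js "mssql" driver.
const sql = require("mssql");

async function findBooksByAuthor(pool, author) {
  const result = await pool.request()
    .input("author", sql.NVarChar, author) // bound parameter, never concatenated
    .query("SELECT title FROM Books WHERE author = @author");
  return result.recordset;
}
```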
See [removing standard server headers on Azure websites](https://azure.microsoft
### Segregate your production data
-Your production data, or "real" data, should not be used for development, testing, or any other purpose than what the business intended. A masked ([anonymized](https://en.wikipedia.org/wiki/Data_anonymization)) dataset should be used for all development and testing.
+Your production data, or "real" data, shouldn't be used for development, testing, or any other purpose than what the business intended. A masked ([anonymized](https://en.wikipedia.org/wiki/Data_anonymization)) dataset should be used for all development and testing.
This means fewer people have access to your real data, which reduces your attack surface. It also means fewer employees see personal data, which eliminates a potential breach in confidentiality.
Antimalware protection helps identify and remove viruses, spyware, and other mal
### Don't cache sensitive content
-Don't cache sensitive content on the browser. Browsers can store information for caching and history. Cached files are stored in a folder like the Temporary Internet Files folder, in the case of Internet Explorer. When these pages are referred to again, the browser displays the pages from its cache. If sensitive information (address, credit card details, Social security number, username) is displayed to the user, the information might be stored in the browser's cache and be retrievable by examining the browser's cache or by simply pressing the browser's **Back** button.
+Don't cache sensitive content on the browser. Browsers can store information for caching and history. Cached files are stored in a folder like the Temporary Internet Files folder, in the case of Internet Explorer. When these pages are referred to again, the browser displays the pages from its cache. If sensitive information (address, credit card details, Social security number, username) is displayed to the user, the information might be stored in the browser's cache and be retrievable by examining the browser's cache or by pressing the browser's **Back** button.
## Verification
You scan your application and its dependent libraries to identify any known vuln
### Test your application in an operating state
-Dynamic application security testing (DAST) is a process of testing an application in an operating state to find security vulnerabilities. DAST tools analyze programs while they are executing to find security vulnerabilities such as memory corruption, insecure server configuration, cross-site scripting, user privilege issues, SQL injection, and other critical security concerns.
+Dynamic application security testing (DAST) is a process of testing an application in an operating state to find security vulnerabilities. DAST tools analyze programs while they're executing to find security vulnerabilities such as memory corruption, insecure server configuration, cross-site scripting, user privilege issues, SQL injection, and other critical security concerns.
-DAST is different from static application security testing (SAST). SAST tools analyze source code or compiled versions of code when the code is not executing in order to find security flaws.
+DAST is different from static application security testing (SAST). SAST tools analyze source code or compiled versions of code when the code isn't executing in order to find security flaws.
Perform DAST, preferably with the assistance of a security professional (a [penetration tester](../fundamentals/pen-testing.md) or vulnerability assessor). If a security professional isn't available, you can perform DAST yourself with a web proxy scanner and some training. Plug in a DAST scanner early on to ensure that you don't introduce obvious security issues into your code. See the [OWASP](https://owasp.org/www-community/Vulnerability_Scanning_Tools) site for a list of web application vulnerability scanners.
In [fuzz testing](https://www.microsoft.com/security/blog/2007/09/20/fuzz-testin
Reviewing the attack surface after code completion helps ensure that any design or implementation changes to an application or system have been considered. It helps ensure that any new attack vectors that were created as a result of the changes, including threat models, have been reviewed and mitigated.
-You can build a picture of the attack surface by scanning the application. Microsoft offers an attack surface analysis tool called [Attack Surface Analyzer](https://www.microsoft.com/download/details.aspx?id=58105). You can choose from many commercial dynamic testing and vulnerability scanning tools or services, including [OWASP Zed Attack Proxy Project](https://owasp.org/www-project-zap/), [Arachni](http://arachni-scanner.com/), and [w3af](http://w3af.sourceforge.net/). These scanning tools crawl your app and map the parts of the application that are accessible over the web. You can also search the Azure Marketplace for similar [developer tools](https://azuremarketplace.microsoft.com/marketplace/apps/category/developer-tools?page=1).
+You can build a picture of the attack surface by scanning the application. Microsoft offers an attack surface analysis tool called [Attack Surface Analyzer](https://www.microsoft.com/download/details.aspx?id=58105). You can choose from many commercial dynamic testing and vulnerability scanning tools or services, including [OWASP Attack Surface Detector](https://owasp.org/www-project-attack-surface-detector/), [Arachni](http://arachni-scanner.com/), and [w3af](http://w3af.sourceforge.net/). These scanning tools crawl your app and map the parts of the application that are accessible over the web. You can also search the Azure Marketplace for similar [developer tools](https://azuremarketplace.microsoft.com/marketplace/apps/category/developer-tools?page=1).
### Perform security penetration testing
security Azure Marketplace Images https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/azure-marketplace-images.md
description: This article provides recommendations for images included in the ma
documentationcenter: na -+ ms.assetid: --++ Previously updated : 01/11/2019 Last updated : 08/29/2023
security Backup Plan To Protect Against Ransomware https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/backup-plan-to-protect-against-ransomware.md
Title: Azure backup and restore plan to protect against ransomware | Microsoft Docs description: Learn what to do before and during a ransomware attack to protect your critical business systems and ensure a rapid recovery of business operations. --++ Previously updated : 10/10/2022 Last updated : 08/29/2023
Apply these best practices before an attack.
| Task | Detail | | | | | Identify the important systems that you need to bring back online first (using top five categories above) and immediately begin performing regular backups of those systems. | To get back up and running as quickly as possible after an attack, determine today what is most important to you. |
-| Migrate your organization to the cloud. <br><br>Consider purchasing a Microsoft Unified Support plan or working with a Microsoft partner to help support your move to the cloud. | Reduce your on-premises exposure by moving data to cloud services with automatic backup and self-service rollback. Microsoft Azure has a robust set of tools to help you backup your business-critical systems and restore your backups faster. <br><br>[Microsoft Unified Support](https://www.microsoft.com/en-us/msservices/unified-support-solutions) is a cloud services support model that is there to help you whenever you need it. Unified Support: <br><br>Provides a designated team that is available 24x7 with as-needed problem resolution and critical incident escalation <br><br>Helps you monitor the health of your IT environment and works proactively to make sure problems are prevented before they happen |
+| Migrate your organization to the cloud. <br><br>Consider purchasing a Microsoft Unified Support plan or working with a Microsoft partner to help support your move to the cloud. | Reduce your on-premises exposure by moving data to cloud services with automatic backup and self-service rollback. Microsoft Azure has a robust set of tools to help you back up your business-critical systems and restore your backups faster. <br><br>[Microsoft Unified Support](https://www.microsoft.com/en-us/msservices/unified-support-solutions) is a cloud services support model that is there to help you whenever you need it. Unified Support: <br><br>Provides a designated team that is available 24x7 with as-needed problem resolution and critical incident escalation <br><br>Helps you monitor the health of your IT environment and works proactively to make sure problems are prevented before they happen |
| Move user data to cloud solutions like OneDrive and SharePoint to take advantage of [versioning and recycle bin capabilities](/compliance/assurance/assurance-malware-and-ransomware-protection#sharepoint-online-and-onedrive-for-business-protection-against-ransomware). <br><br>Educate users on how to recover their files by themselves to reduce delays and cost of recovery. For example, if a user's OneDrive files were infected by malware, they can [restore](https://support.microsoft.com/office/restore-your-onedrive-fa231298-759d-41cf-bcd0-25ac53eb8a15?ui=en-US&rs=en-US&ad=US) their entire OneDrive to a previous time. <br><br>Consider a defense strategy, such as [Microsoft 365 Defender](/microsoft-365/security/defender/microsoft-365-defender), before allowing users to restore their own files. | User data in the Microsoft cloud can be protected by built-in security and data management features. <br><br>It's good to teach users how to restore their own files but you need to be careful that your users do not restore the malware used to carry out the attack. You need to: <br><br>Ensure your users don't restore their files until you are confident that the attacker has been evicted <br><br>Have a mitigation in place in case a user does restore some of the malware <br><br>Microsoft 365 Defender uses AI-powered automatic actions and playbooks to remediate impacted assets back to a secure state. Microsoft 365 Defender leverages automatic remediation capabilities of the suite products to ensure all impacted assets related to an incident are automatically remediated where possible. |
| Implement the [Microsoft cloud security benchmark](/security/benchmark/azure/introduction). | The Microsoft cloud security benchmark is our security control framework based on industry-based security control frameworks such as NIST SP800-53, CIS Controls v7.1. It provides organizations guidance on how to configure Azure and Azure services and implement the security controls. See [Backup and Recovery](/security/benchmark/azure/security-controls-v3-backup-recovery). |
| Regularly exercise your business continuity/disaster recovery (BC/DR) plan. <br><br>Simulate incident response scenarios. Exercises you perform in preparing for an attack should be planned and conducted around your prioritized backup and restore lists. <br><br>Regularly test the 'Recover from Zero' scenario to ensure your BC/DR can rapidly bring critical business operations online from zero functionality (all systems down). | Ensures rapid recovery of business operations by treating a ransomware or extortion attack with the same importance as a natural disaster. <br><br>Conduct practice exercise(s) to validate cross-team processes and technical procedures, including out of band employee and customer communications (assume all email and chat is down). |
| Consider creating a risk register to identify potential risks and address how you will mitigate through preventative controls and actions. Add ransomware to the risk register as a high likelihood and high impact scenario. | A risk register can help you prioritize risks based on the likelihood of that risk occurring and the severity to your business should that risk occur. <br><br>Track mitigation status via [Enterprise Risk Management (ERM)](/compliance/assurance/assurance-risk-management) assessment cycle. |
-| Backup all critical business systems automatically on a regular schedule (including backup of critical dependencies like Active Directory). <br><br>Validate that your backup is good as your backup is created. | Allows you to recover data up to the last backup. |
+| Back up all critical business systems automatically on a regular schedule (including backup of critical dependencies like Active Directory). <br><br>Validate that your backup is good as your backup is created. | Allows you to recover data up to the last backup. |
| Protect (or print) supporting documents and systems required for recovery such as restoration procedure documents, CMDB, network diagrams, and SolarWinds instances. | Attackers deliberately target these resources because it impacts your ability to recover. | | Ensure you have well-documented procedures for engaging any third-party support, particularly support from threat intelligence providers, antimalware solution providers, and from the malware analysis provider. Protect (or print) these procedures. | Third-party contacts may be useful if the given ransomware variant has known weaknesses or decryption tools are available. | | Ensure backup and recovery strategy includes: <br><br>Ability to back up data to a specific point in time. <br><br>Multiple copies of backups are stored in isolated, offline (air-gapped) locations. <br><br>Recovery time objectives that establish how quickly backed up information can be retrieved and put into production environment. <br><br>Rapid restore of back up to a production environment/sandbox. | Backups are essential for resilience after an organization has been breached. Apply the 3-2-1 rule for maximum protection and availability: 3 copies (original + 2 backups), 2 storage types, and 1 offsite or cold copy. |
security Best Practices And Patterns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/best-practices-and-patterns.md
Title: Security best practices and patterns - Microsoft Azure | Microsoft Docs description: This article links you to security best practices and patterns for different Azure resources.-+ documentationcenter: na -+ ms.assetid: 1cbbf8dc-ea94-4a7e-8fa0-c2cb198956c5
na Previously updated : 6/02/2022 Last updated : 08/29/2023
security Feature Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/feature-availability.md
Title: Cloud feature availability for commercial and US Government customers description: This article describes security feature availability in Azure and Azure Government clouds + + - Previously updated : 01/13/2023+ Last updated : 08/31/2023 # Cloud feature availability for commercial and US Government customers
security Iaas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/iaas.md
documentationcenter: na ms.assetid: 02c5b7d2-a77f-4e7f-9a1e-40247c57e7e2-++ na Previously updated : 10/28/2019 Last updated : 08/29/2023
The first step in protecting your VMs is to ensure that only authorized users ca
-**Best practice**: Control VM access.
+**Best practice**: Control VM access.
**Detail**: Use [Azure policies](../../governance/policy/overview.md) to establish conventions for resources in your organization and create customized policies. Apply these policies to resources, such as [resource groups](../../azure-resource-manager/management/overview.md). VMs that belong to a resource group inherit its policies. If your organization has many subscriptions, you might need a way to efficiently manage access, policies, and compliance for those subscriptions. [Azure management groups](../../governance/management-groups/overview.md) provide a level of scope above subscriptions. You organize subscriptions into management groups (containers) and apply your governance conditions to those groups. All subscriptions within a management group automatically inherit the conditions applied to the group. Management groups give you enterprise-grade management at a large scale no matter what type of subscriptions you might have.
-**Best practice**: Reduce variability in your setup and deployment of VMs.
+**Best practice**: Reduce variability in your setup and deployment of VMs.
**Detail**: Use [Azure Resource Manager](../../azure-resource-manager/templates/syntax.md) templates to strengthen your deployment choices and make it easier to understand and inventory the VMs in your environment.
-**Best practice**: Secure privileged access.
+**Best practice**: Secure privileged access.
**Detail**: Use a [least privilege approach](/windows-server/identity/ad-ds/plan/security-best-practices/implementing-least-privilege-administrative-models) and built-in Azure roles to enable users to access and set up VMs: - [Virtual Machine Contributor](../../role-based-access-control/built-in-roles.md#virtual-machine-contributor): Can manage VMs, but not the virtual network or storage account to which they are connected.
Organizations that control VM access and setup improve their overall VM security
## Use multiple VMs for better availability

If your VM runs critical applications that need to have high availability, we strongly recommend that you use multiple VMs. For better availability, use an [availability set](../../virtual-machines/availability-set-overview.md) or availability [zones](../../availability-zones/az-overview.md).
-An availability set is a logical grouping that you can use in Azure to ensure that the VM resources you place within it are isolated from each other when theyΓÇÖre deployed in an Azure datacenter. Azure ensures that the VMs you place in an availability set run across multiple physical servers, compute racks, storage units, and network switches. If a hardware or Azure software failure occurs, only a subset of your VMs are affected, and your overall application continues to be available to your customers. Availability sets are an essential capability when you want to build reliable cloud solutions.
+An availability set is a logical grouping that you can use in Azure to ensure that the VM resources you place within it are isolated from each other when they're deployed in an Azure datacenter. Azure ensures that the VMs you place in an availability set run across multiple physical servers, compute racks, storage units, and network switches. If a hardware or Azure software failure occurs, only a subset of your VMs are affected, and your overall application continues to be available to your customers. Availability sets are an essential capability when you want to build reliable cloud solutions.
## Protect against malware
-You should install antimalware protection to help identify and remove viruses, spyware, and other malicious software. You can install [Microsoft Antimalware](antimalware.md) or a Microsoft partnerΓÇÖs endpoint protection solution ([Trend Micro](https://help.deepsecurity.trendmicro.com/Welcome.html), [Broadcom](https://www.broadcom.com/products), [McAfee](https://www.mcafee.com/us/products.aspx), [Windows Defender](https://www.microsoft.com/windows/comprehensive-security), and [System Center Endpoint Protection](/configmgr/protect/deploy-use/endpoint-protection)).
+You should install antimalware protection to help identify and remove viruses, spyware, and other malicious software. You can install [Microsoft Antimalware](antimalware.md) or a Microsoft partner's endpoint protection solution ([Trend Micro](https://cloudone.trendmicro.com/docs/workload-security/), [Broadcom](https://www.broadcom.com/products), [McAfee](https://www.mcafee.com/us/products.aspx), [Windows Defender](https://www.microsoft.com/windows/comprehensive-security), and [System Center Endpoint Protection](/configmgr/protect/deploy-use/endpoint-protection)).
Microsoft Antimalware includes features like real-time protection, scheduled scanning, malware remediation, signature updates, engine updates, samples reporting, and exclusion event collection. For environments that are hosted separately from your production environment, you can use an antimalware extension to help protect your VMs and cloud services.
security Identity Management Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/identity-management-best-practices.md
documentationcenter: na ms.assetid: 07d8e8a8-47e8-447c-9c06-3a88d2713bc1--++ na Previously updated : 12/19/2022 Last updated : 08/29/2023
The following sections list best practices for identity and access security usin
In a hybrid identity scenario we recommend that you integrate your on-premises and cloud directories. Integration enables your IT team to manage accounts from one location, regardless of where an account is created. Integration also helps your users be more productive by providing a common identity for accessing both cloud and on-premises resources.
-**Best practice**: Establish a single Azure AD instance. Consistency and a single authoritative source will increase clarity and reduce security risks from human errors and configuration complexity.
+**Best practice**: Establish a single Azure AD instance. Consistency and a single authoritative source will increase clarity and reduce security risks from human errors and configuration complexity.
**Detail**: Designate a single Azure AD directory as the authoritative source for corporate and organizational accounts.

**Best practice**: Integrate your on-premises directories with Azure AD.
In a hybrid identity scenario we recommend that you integrate your on-premises a
> [!Note]
> There are [factors that affect the performance of Azure AD Connect](../../active-directory/hybrid/plan-connect-performance-factors.md). Ensure Azure AD Connect has enough capacity to keep underperforming systems from impeding security and productivity. Large or complex organizations (organizations provisioning more than 100,000 objects) should follow the [recommendations](../../active-directory/hybrid/whatis-hybrid-identity.md) to optimize their Azure AD Connect implementation.
-**Best practice**: DonΓÇÖt synchronize accounts to Azure AD that have high privileges in your existing Active Directory instance.
+**Best practice**: Don't synchronize accounts to Azure AD that have high privileges in your existing Active Directory instance.
**Detail**: Don't change the default [Azure AD Connect configuration](../../active-directory/hybrid/how-to-connect-sync-configure-filtering.md) that filters out these accounts. This configuration mitigates the risk of adversaries pivoting from cloud to on-premises assets (which could create a major incident).

**Best practice**: Turn on password hash synchronization.
Even if you decide to use federation with Active Directory Federation Services (
For more information, see [Implement password hash synchronization with Azure AD Connect sync](../../active-directory/hybrid/how-to-connect-password-hash-synchronization.md).
-**Best practice**: For new application development, use Azure AD for authentication.
+**Best practice**: For new application development, use Azure AD for authentication.
**Detail**: Use the correct capabilities to support authentication: - Azure AD for employees
To balance security and productivity, you need to think about how a resource is
**Best practice**: Manage and control access to corporate resources.
**Detail**: Configure common Azure AD [Conditional Access policies](../../active-directory/conditional-access/concept-conditional-access-policy-common.md) based on a group, location, and application sensitivity for SaaS apps and Azure AD-connected apps.
-**Best practice**: Block legacy authentication protocols.
+**Best practice**: Block legacy authentication protocols.
**Detail**: Attackers exploit weaknesses in older protocols every day, particularly for password spray attacks. Configure Conditional Access to [block legacy protocols](../../active-directory/conditional-access/howto-conditional-access-policy-block-legacy.md). ## Plan for routine security improvements
Security is always evolving, and it is important to build into your cloud and id
Identity Secure Score is a set of recommended security controls that Microsoft publishes, which provides a numerical score to objectively measure your security posture and help plan future security improvements. You can also view your score in comparison to those in other industries as well as your own trends over time.
-**Best practice**: Plan routine security reviews and improvements based on best practices in your industry.
+**Best practice**: Plan routine security reviews and improvements based on best practices in your industry.
**Detail**: Use the Identity Secure Score feature to rank your improvements over time. ## Enable password management
If you have multiple tenants or you want to enable users to [reset their own pas
**Best practice**: Monitor how or if SSPR is really being used. **Detail**: Monitor the users who are registering by using the Azure AD [Password Reset Registration Activity report](../../active-directory/authentication/howto-sspr-reporting.md). The reporting feature that Azure AD provides helps you answer questions by using prebuilt reports. If you're appropriately licensed, you can also create custom queries.
-**Best practice**: Extend cloud-based password policies to your on-premises infrastructure.
+**Best practice**: Extend cloud-based password policies to your on-premises infrastructure.
**Detail**: Enhance password policies in your organization by performing the same checks for on-premises password changes as you do for cloud-based password changes. Install [Azure AD password protection](../../active-directory/authentication/concept-password-ban-bad.md) for Windows Server Active Directory agents on-premises to extend banned password lists to your existing infrastructure. Users and admins who change, set, or reset passwords on-premises are required to comply with the same password policy as cloud-only users. ## Enforce multi-factor verification for users
There are multiple options for requiring two-step verification. The best option
Following are options and benefits for enabling two-step verification:
-**Option 1**: Enable MFA for all users and login methods with Azure AD Security Defaults
+**Option 1**: Enable MFA for all users and login methods with Azure AD Security Defaults
**Benefit**: This option enables you to easily and quickly enforce MFA for all users in your environment with a stringent policy to: * Challenge administrative accounts and administrative logon mechanisms
This method is available to all licensing tiers but is not able to be mixed with
To determine where Multi-Factor Authentication needs to be enabled, see [Which version of Azure AD MFA is right for my organization?](../../active-directory/authentication/concept-mfa-howitworks.md).
-**Option 3**: [Enable Multi-Factor Authentication with Conditional Access policy](../../active-directory/authentication/howto-mfa-getstarted.md).
+**Option 3**: [Enable Multi-Factor Authentication with Conditional Access policy](../../active-directory/authentication/howto-mfa-getstarted.md).
**Benefit**: This option allows you to prompt for two-step verification under specific conditions by using [Conditional Access](../../active-directory/conditional-access/concept-conditional-access-policy-common.md). Specific conditions can be user sign-in from different locations, untrusted devices, or applications that you consider risky. Defining specific conditions where you require two-step verification enables you to avoid constant prompting for your users, which can be an unpleasant user experience. This is the most flexible way to enable two-step verification for your users. Enabling a Conditional Access policy works only for Azure AD Multi-Factor Authentication in the cloud and is a premium feature of Azure AD. You can find more information on this method in [Deploy cloud-based Azure AD Multi-Factor Authentication](../../active-directory/authentication/howto-mfa-getstarted.md).
Your security team needs visibility into your Azure resources in order to assess
You can use [Azure RBAC](../../role-based-access-control/overview.md) to assign permissions to users, groups, and applications at a certain scope. The scope of a role assignment can be a subscription, a resource group, or a single resource.
-**Best practice**: Segregate duties within your team and grant only the amount of access to users that they need to perform their jobs. Instead of giving everybody unrestricted permissions in your Azure subscription or resources, allow only certain actions at a particular scope.
+**Best practice**: Segregate duties within your team and grant only the amount of access to users that they need to perform their jobs. Instead of giving everybody unrestricted permissions in your Azure subscription or resources, allow only certain actions at a particular scope.
**Detail**: Use [Azure built-in roles](../../role-based-access-control/built-in-roles.md) in Azure to assign privileges to users.

> [!Note]
> Specific permissions create unneeded complexity and confusion, accumulating into a "legacy" configuration that's difficult to fix without fear of breaking something. Avoid resource-specific permissions. Instead, use management groups for enterprise-wide permissions and resource groups for permissions within subscriptions. Avoid user-specific permissions. Instead, assign access to groups in Azure AD.
-**Best practice**: Grant security teams with Azure responsibilities access to see Azure resources so they can assess and remediate risk.
+**Best practice**: Grant security teams with Azure responsibilities access to see Azure resources so they can assess and remediate risk.
**Detail**: Grant security teams the Azure RBAC [Security Reader](../../role-based-access-control/built-in-roles.md#security-reader) role. You can use the root management group or the segment management group, depending on the scope of responsibilities (a sketch of such an assignment follows this list):

* **Root management group** for teams responsible for all enterprise resources
* **Segment management group** for teams with limited scope (commonly because of regulatory or other organizational boundaries)
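A minimal sketch of that Security Reader assignment at management-group scope, assuming the `azure-identity` and `azure-mgmt-authorization` Python packages; the group names and IDs are placeholders, and the exact parameter model varies across SDK versions:

```python
import uuid

from azure.identity import DefaultAzureCredential
from azure.mgmt.authorization import AuthorizationManagementClient

SUBSCRIPTION_ID = "<subscription-id>"              # placeholder
MANAGEMENT_GROUP = "<management-group-id>"         # root or segment group
SECURITY_TEAM_OBJECT_ID = "<aad-group-object-id>"  # the security team's group

# Documented ID of the built-in Security Reader role; verify it against
# the role definitions visible in your tenant.
SECURITY_READER = "39bc4728-0917-49c7-9d2c-d95423bc2eb4"

scope = f"/providers/Microsoft.Management/managementGroups/{MANAGEMENT_GROUP}"

client = AuthorizationManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Assign Security Reader to the team's group at management-group scope.
# The wire-format dict below matches the REST payload; some SDK versions
# prefer a flat RoleAssignmentCreateParameters model instead.
client.role_assignments.create(
    scope=scope,
    role_assignment_name=str(uuid.uuid4()),
    parameters={
        "properties": {
            "roleDefinitionId": f"{scope}/providers/Microsoft.Authorization/roleDefinitions/{SECURITY_READER}",
            "principalId": SECURITY_TEAM_OBJECT_ID,
            "principalType": "Group",
        }
    },
)
```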
-**Best practice**: Grant the appropriate permissions to security teams that have direct operational responsibilities.
+**Best practice**: Grant the appropriate permissions to security teams that have direct operational responsibilities.
**Detail**: Review the Azure built-in roles for the appropriate role assignment. If the built-in roles don't meet the specific needs of your organization, you can create [Azure custom roles](../../role-based-access-control/custom-roles.md). As with built-in roles, you can assign custom roles to users, groups, and service principals at subscription, resource group, and resource scopes.
-**Best practices**: Grant Microsoft Defender for Cloud access to security roles that need it. Defender for Cloud allows security teams to quickly identify and remediate risks.
+**Best practice**: Grant Microsoft Defender for Cloud access to security roles that need it. Defender for Cloud allows security teams to quickly identify and remediate risks.
**Detail**: Add security teams with these needs to the Azure RBAC [Security Admin](../../role-based-access-control/built-in-roles.md#security-admin) role so they can view security policies, view security states, edit security policies, view alerts and recommendations, and dismiss alerts and recommendations. You can do this by using the root management group or the segment management group, depending on the scope of responsibilities.

Organizations that don't enforce data access control by using capabilities like Azure RBAC might be giving more privileges than necessary to their users. This can lead to data compromise by allowing users to access types of data (for example, high business impact) that they shouldn't have.
The following summarizes the best practices found in [Securing privileged access
**Best practice**: Ensure all critical admin accounts are managed Azure AD accounts.
**Detail**: Remove any consumer accounts from critical admin roles (for example, Microsoft accounts like hotmail.com, live.com, and outlook.com).
-**Best practice**: Ensure all critical admin roles have a separate account for administrative tasks in order to avoid phishing and other attacks to compromise administrative privileges.
+**Best practice**: Ensure all critical admin roles have a separate account for administrative tasks in order to avoid phishing and other attacks to compromise administrative privileges.
**Detail**: Create a separate admin account that's assigned the privileges needed to perform the administrative tasks. Block the use of these administrative accounts for daily productivity tools like Microsoft 365 email or arbitrary web browsing.

**Best practice**: Identify and categorize accounts that are in highly privileged roles.
The following summarizes the best practices found in [Securing privileged access
Evaluate the accounts that are assigned or eligible for the global admin role. If you don't see any cloud-only accounts by using the `*.onmicrosoft.com` domain (intended for emergency access), create them. For more information, see [Managing emergency access administrative accounts in Azure AD](../../active-directory/roles/security-emergency-access.md).
-**Best practice**: Have a "break glass" process in place in case of an emergency.
+**Best practice**: Have a "break glass" process in place in case of an emergency.
**Detail**: Follow the steps in [Securing privileged access for hybrid and cloud deployments in Azure AD](../../active-directory/roles/security-planning.md).
-**Best practice**: Require all critical admin accounts to be password-less (preferred), or require Multi-Factor Authentication.
+**Best practice**: Require all critical admin accounts to be password-less (preferred), or require Multi-Factor Authentication.
**Detail**: Use the [Microsoft Authenticator app](../../active-directory/authentication/howto-authentication-passwordless-phone.md) to sign in to any Azure AD account without using a password. Like [Windows Hello for Business](/windows/security/identity-protection/hello-for-business/hello-identity-verification), the Microsoft Authenticator uses key-based authentication to enable a user credential that's tied to a device and uses biometric authentication or a PIN.

Require Azure AD Multi-Factor Authentication at sign-in for all individual users who are permanently assigned to one or more of the Azure AD admin roles: Global Administrator, Privileged Role Administrator, Exchange Online Administrator, and SharePoint Online Administrator. Enable [Multi-Factor Authentication for your admin accounts](../../active-directory/authentication/howto-mfa-userstates.md) and ensure that admin account users have registered.
-**Best practice**: For critical admin accounts, have an admin workstation where production tasks aren't allowed (for example, browsing and email). This will protect your admin accounts from attack vectors that use browsing and email and significantly lower your risk of a major incident.
+**Best practice**: For critical admin accounts, have an admin workstation where production tasks aren't allowed (for example, browsing and email). This will protect your admin accounts from attack vectors that use browsing and email and significantly lower your risk of a major incident.
**Detail**: Use an admin workstation. Choose a level of workstation security:

- Highly secure productivity devices provide advanced security for browsing and other productivity tasks.
- [Privileged Access Workstations (PAWs)](https://4sysops.com/archives/understand-the-microsoft-privileged-access-workstation-paw-security-model/) provide a dedicated operating system that's protected from internet attacks and threat vectors for sensitive tasks.
-**Best practice**: Deprovision admin accounts when employees leave your organization.
+**Best practice**: Deprovision admin accounts when employees leave your organization.
**Detail**: Have a process in place that disables or deletes admin accounts when employees leave your organization.
-**Best practice**: Regularly test admin accounts by using current attack techniques.
+**Best practice**: Regularly test admin accounts by using current attack techniques.
**Detail**: Use Microsoft 365 Attack Simulator or a third-party offering to run realistic attack scenarios in your organization. This can help you find vulnerable users before a real attack occurs.

**Best practice**: Take steps to mitigate the most frequently used attack techniques.
security Infrastructure Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/infrastructure-monitoring.md
description: Learn about infrastructure monitoring aspects of the Azure producti
documentationcenter: na -+ ms.assetid: 61e95a87-39c5-48f5-aee6-6f90ddcd336e + na Previously updated : 06/28/2018 Last updated : 08/29/2023
security Infrastructure Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/infrastructure-operations.md
description: This article describes how Microsoft manages and operates the Azure
documentationcenter: n -+ ms.assetid: 61e95a87-39c5-48f5-aee6-6f90ddcd336e
na Previously updated : 05/30/2019 Last updated : 08/29/2023
security Infrastructure Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/infrastructure-sql.md
description: This article provides a general description of how Azure SQL Databa
documentationcenter: na -+ ms.assetid: 61e95a87-39c5-48f5-aee6-6f90ddcd336e--++ na Previously updated : 03/09/2020 Last updated : 08/29/2023
Azure SQL Database provides a relational database service in Azure. To protect c
Azure SQL Database supports only the tabular data stream (TDS) protocol, which requires the database to be accessible over only the default port of TCP/1433.

### Azure SQL Database firewall
-To help protect customer data, Azure SQL Database includes a firewall functionality, which by default prevents all access to SQL Database, as shown below.
+To help protect customer data, Azure SQL Database includes firewall functionality, which by default prevents all access to SQL Database.
![Azure SQL Database firewall](./media/infrastructure-sql/sql-database-firewall.png)
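Customers then open access explicitly. As a minimal sketch, a server-level rule admitting one source address range could be created programmatically, assuming the `azure-identity` and `azure-mgmt-sql` Python packages; the subscription, resource group, server, and address values are placeholders:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.sql import SqlManagementClient
from azure.mgmt.sql.models import FirewallRule

SUBSCRIPTION_ID = "<subscription-id>"  # placeholders throughout
RESOURCE_GROUP = "contoso-rg"
SERVER_NAME = "contoso-sql"

client = SqlManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Nothing can reach the server over TDS (TCP/1433) until a rule like
# this explicitly allows a source address range.
client.firewall_rules.create_or_update(
    resource_group_name=RESOURCE_GROUP,
    server_name=SERVER_NAME,
    firewall_rule_name="office-egress",
    parameters=FirewallRule(
        start_ip_address="203.0.113.10",
        end_ip_address="203.0.113.20",
    ),
)
```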
The gateway firewall can limit addresses, which allows customers granular contro
Customers can achieve firewall configuration by using a management portal or programmatically using the Azure SQL Database Management REST API. The Azure SQL Database gateway firewall by default prevents all customer TDS access to Azure SQL Database. Customers must configure access by using access-control lists (ACLs) to permit Azure SQL Database connections by source and destination internet addresses, protocols, and port numbers.

### DoSGuard
-Denial of service (DoS) attacks are reduced by a SQL Database gateway service called DoSGuard. DoSGuard actively tracks failed logins from IP addresses. If there are multiple failed logins from a specific IP address within a period of time, the IP address is blocked from accessing any resources in the service for a pre-defined time period.
+DoSGuard, a SQL Database gateway service, reduces denial of service (DoS) attacks. DoSGuard actively tracks failed logins from IP addresses. If there are multiple failed logins from an IP address within a period of time, the IP address is blocked from accessing any resources in the service for a predefined time period.
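Conceptually, the tracking DoSGuard performs resembles the following sketch. This is an illustration of the idea only; the real thresholds and implementation are internal to the service:

```python
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 60    # how far back failed logins are counted (made up)
MAX_FAILURES = 10      # failures tolerated within the window (made up)
BLOCK_SECONDS = 300    # how long an offending IP stays blocked (made up)

failures: dict[str, deque] = defaultdict(deque)
blocked_until: dict[str, float] = {}

def record_failed_login(ip: str, now: float | None = None) -> bool:
    """Track a failed login; return True if the IP is (now) blocked."""
    now = now if now is not None else time.time()
    if blocked_until.get(ip, 0.0) > now:
        return True
    window = failures[ip]
    window.append(now)
    while window and window[0] < now - WINDOW_SECONDS:
        window.popleft()  # drop failures older than the window
    if len(window) >= MAX_FAILURES:
        blocked_until[ip] = now + BLOCK_SECONDS
        window.clear()
        return True
    return False
```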
In addition, the Azure SQL Database gateway performs:

- Secure channel capability negotiations to implement TDS FIPS 140-2 validated encrypted connections when it connects to the database servers.
-- Stateful TDS packet inspection while it accepts connections from clients. The gateway validates the connection information and passes on the TDS packets to the appropriate physical server based on the database name that's specified in the connection string.
+- Stateful TDS packet inspection while it accepts connections from clients. The gateway validates the connection information. The gateway passes on the TDS packets to the appropriate physical server based on the database name that's specified in the connection string.
The overarching principle for network security of the Azure SQL Database offering is to allow only the connection and communication that is necessary to allow the service to operate. All other ports, protocols, and connections are blocked by default. Virtual local area networks (VLANs) and ACLs are used to restrict network communications by source and destination networks, protocols, and port numbers.
-Mechanisms that are approved to implement network-based ACLs include ACLs on routers and load balancers. These mechanisms are managed by Azure networking, guest VM firewall, and Azure SQL Database gateway firewall rules, which are configured by the customer.
+Mechanisms that are approved to implement network-based ACLs include ACLs on routers and load balancers. These mechanisms are managed by Azure networking, guest VM firewall, and Azure SQL Database gateway firewall rules configured by the customer.
## Data segregation and customer isolation

The Azure production network is structured such that publicly accessible system components are segregated from internal resources. Physical and logical boundaries exist between web servers that provide access to the public-facing Azure portal and the underlying Azure virtual infrastructure, where customer application instances and customer data reside.
-All publicly accessible information is managed within the Azure production network. The production network is subject to two-factor authentication and boundary protection mechanisms, uses the firewall and security feature set that is described in the previous section, and uses data isolation functions as noted in the next sections.
+All publicly accessible information is managed within the Azure production network. The production network is:
+
+- Subject to two-factor authentication and boundary protection mechanisms
+- Uses the firewall and security feature set described in the previous section
+- Uses data isolation functions noted in the next sections
### Unauthorized systems and isolation of the FC
-Because the fabric controller (FC) is the central orchestrator of the Azure fabric, significant controls are in place to mitigate threats to it, especially from potentially compromised FAs within customer applications. The FC does not recognize any hardware whose device information (for example, MAC address) is not pre-loaded within the FC. The DHCP servers on the FC have configured lists of MAC addresses of the nodes they are willing to boot. Even if unauthorized systems are connected, they are not incorporated into fabric inventory, and therefore not connected or authorized to communicate with any system within the fabric inventory. This reduces the risk of unauthorized systems' communicating with the FC and gaining access to the VLAN and Azure.
+Because the fabric controller (FC) is the central orchestrator of the Azure fabric, significant controls are in place to mitigate threats to it, especially from potentially compromised FAs within customer applications. The FC doesn't recognize any hardware whose device information (for example, MAC address) isn't preloaded within the FC. The DHCP servers on the FC have configured lists of MAC addresses of the nodes they're willing to boot. Even if unauthorized systems are connected, they're not incorporated into fabric inventory, and therefore not connected or authorized to communicate with any system within the fabric inventory. This reduces the risk of unauthorized systems' communicating with the FC and gaining access to the VLAN and Azure.
### VLAN isolation

The Azure production network is logically segregated into three primary VLANs:
The Azure production network is logically segregated into three primary VLANs:
The IPFilter and the software firewalls that are implemented on the root OS and guest OS of the nodes enforce connectivity restrictions and prevent unauthorized traffic between VMs.

### Hypervisor, root OS, and guest VMs
-The isolation of the root OS from the guest VMs and the guest VMs from one another is managed by the hypervisor and the root OS.
+The hypervisor and the root OS manage the isolation of the root OS from the guest VMs and the guest VMs from one another.
### Types of rules on firewalls

A rule is defined as: {Src IP, Src Port, Destination IP, Destination Port, Destination Protocol, In/Out, Stateful/Stateless, Stateful Flow Timeout}.
-Synchronous idle character (SYN) packets are allowed in or out only if any one of the rules permits it. For TCP, Azure uses stateless rules where the principle is that it allows only all non-SYN packets into or out of the VM. The security premise is that any host stack is resilient of ignoring a non-SYN if it has not seen a SYN packet previously. The TCP protocol itself is stateful, and in combination with the stateless SYN-based rule achieves an overall behavior of a stateful implementation.
+Synchronize (SYN) packets are allowed in or out only if one of the rules permits it. For TCP, Azure uses stateless rules where the principle is that only non-SYN packets are allowed into or out of the VM. The security premise is that any host stack is resilient in ignoring a non-SYN packet if it hasn't seen a SYN packet previously. The TCP protocol itself is stateful, and in combination with the stateless SYN-based rule achieves the overall behavior of a stateful implementation.
For User Datagram Protocol (UDP), Azure uses a stateful rule. Every time a UDP packet matches a rule, a reverse flow is created in the other direction. This flow has a built-in timeout.
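As a rough conceptual model (not Azure's implementation), the rule tuple and matching behavior described above can be sketched like this:

```python
from dataclasses import dataclass
from ipaddress import ip_address, ip_network

# {Src IP, Src Port, Dst IP, Dst Port, Protocol, In/Out, Stateful/Stateless, Timeout}
@dataclass(frozen=True)
class Rule:
    src_net: str              # e.g. "10.0.0.0/8"
    src_port: int | None      # None means "any"
    dst_net: str
    dst_port: int | None
    protocol: str             # "tcp" or "udp"
    direction: str            # "in" or "out"
    stateful: bool            # stateful (UDP reverse flow) vs stateless (TCP SYN)
    flow_timeout: int | None  # seconds, for stateful flows

    def matches(self, src_ip, src_port, dst_ip, dst_port, protocol, direction):
        return (
            ip_address(src_ip) in ip_network(self.src_net)
            and ip_address(dst_ip) in ip_network(self.dst_net)
            and self.src_port in (None, src_port)
            and self.dst_port in (None, dst_port)
            and self.protocol == protocol
            and self.direction == direction
        )

# A stateless TCP rule: only SYN packets are checked against it; the host's
# own TCP state machine supplies the "stateful" half of the behavior.
allow_https_out = Rule("10.0.0.0/8", None, "0.0.0.0/0", 443,
                       "tcp", "out", stateful=False, flow_timeout=None)

assert allow_https_out.matches("10.1.2.3", 50123, "93.184.216.34", 443, "tcp", "out")
```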
-Customers are responsible for setting up their own firewalls on top of what Azure provides. Here customers are able to define the rules for inbound and outbound traffic.
+Customers are responsible for setting up their own firewalls on top of what Azure provides. Here, customers are able to define the rules for inbound and outbound traffic.
### Production configuration management

Standard secure configurations are maintained by respective operations teams in Azure and Azure SQL Database. All configuration changes to production systems are documented and tracked through a central tracking system. Software and hardware changes are tracked through the central tracking system. Networking changes that relate to ACL are tracked using an ACL management service.
-All configuration changes to Azure are developed and tested in the staging environment, and they are thereafter deployed in production environment. Software builds are reviewed as part of testing. Security and privacy checks are reviewed as part of entry checklist criteria. Changes are deployed on scheduled intervals by the respective deployment team. Releases are reviewed and signed off by the respective deployment team personnel before they are deployed into production.
+All configuration changes to Azure are developed and tested in the staging environment, and they're thereafter deployed in the production environment. Software builds are reviewed as part of testing. Security and privacy checks are reviewed as part of entry checklist criteria. Changes are deployed at scheduled intervals by the respective deployment team. Releases are reviewed and signed off by the respective deployment team personnel before they're deployed into production.
Changes are monitored for success. In a failure scenario, the change is rolled back to its previous state, or a hotfix is deployed to address the failure with approval of the designated personnel. Source Depot, Git, TFS, Master Data Services (MDS), runners, Azure security monitoring, the FC, and the WinFabric platform are used to centrally manage, apply, and verify the configuration settings in the Azure virtual environment.
security Isolation Choices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/isolation-choices.md
Title: Isolation in the Azure Public Cloud | Microsoft Docs
description: Learn how Azure provides isolation against both malicious and non-malicious users and offers various isolation choices to architects. documentationcenter: na-+ ms.assetid:--++ na Previously updated : 08/30/2021 Last updated : 08/29/2023
One of the primary benefits of cloud computing is concept of a shared, common in
In the cloud-enabled workplace, a tenant can be defined as a client or organization that owns and manages a specific instance of that cloud service. With the identity platform provided by Microsoft Azure, a tenant is simply a dedicated instance of Azure Active Directory (Azure AD) that your organization receives and owns when it signs up for a Microsoft cloud service.
-Each Azure AD directory is distinct and separate from other Azure AD directories. Just like a corporate office building is a secure asset specific to only your organization, an Azure AD directory was also designed to be a secure asset for use by only your organization. The Azure AD architecture isolates customer data and identity information from co-mingling. This means that users and administrators of one Azure AD directory cannot accidentally or maliciously access data in another directory.
+Each Azure AD directory is distinct and separate from other Azure AD directories. Just like a corporate office building is a secure asset specific to only your organization, an Azure AD directory was also designed to be a secure asset for use by only your organization. The Azure AD architecture isolates customer data and identity information from co-mingling. This means that users and administrators of one Azure AD directory can't accidentally or maliciously access data in another directory.
### Azure Tenancy
Users, groups, and applications from that directory can manage resources in the
- Access to data in Azure AD requires user authentication via a security token service (STS). Information on the user's existence, enabled state, and role is used by the authorization system to determine whether the requested access to the target tenant is authorized for this user in this session.

-- Tenants are discrete containers and there is no relationship between these.
+- Tenants are discrete containers and there's no relationship between these.
- No access across tenants unless tenant admin grants it through federation or provisioning user accounts from other tenants.

- Physical access to servers that comprise the Azure AD service, and direct access to Azure AD's back-end systems, is restricted.

-- Azure AD users have no access to physical assets or locations, and therefore it is not possible for them to bypass the logical Azure RBAC policy checks stated following.
+- Azure AD users have no access to physical assets or locations, and therefore it isn't possible for them to bypass the logical Azure RBAC policy checks stated following.
For diagnostics and maintenance needs, an operational model that employs a just-in-time privilege elevation system is required and used. Azure AD Privileged Identity Management (PIM) introduces the concept of an eligible admin. [Eligible admins](../../active-directory/privileged-identity-management/pim-configure.md) should be users that need privileged access now and then, but not every day. The role is inactive until the user needs access, then they complete an activation process and become an active admin for a predetermined amount of time.
Azure Active Directory hosts each tenant in its own protected container, with po
The concept of tenant containers is deeply ingrained in the directory service at all layers, from portals all the way to persistent storage.
-Even when metadata from multiple Azure Active Directory tenants is stored on the same physical disk, there is no relationship between the containers other than what is defined by the directory service, which in turn is dictated by the tenant administrator.
+Even when metadata from multiple Azure Active Directory tenants is stored on the same physical disk, there's no relationship between the containers other than what is defined by the directory service, which in turn is dictated by the tenant administrator.
### Azure role-based access control (Azure RBAC)
Azure RBAC has three basic roles that apply to all resource types:
![Azure role-based access control (Azure RBAC)](./media/isolation-choices/azure-isolation-fig3.png)
-The rest of the Azure roles in Azure allow management of specific Azure resources. For example, the Virtual Machine Contributor role allows the user to create and manage virtual machines. It does not give them access to the Azure Virtual Network or the subnet that the virtual machine connects to.
+The rest of the Azure roles in Azure allow management of specific Azure resources. For example, the Virtual Machine Contributor role allows the user to create and manage virtual machines. It doesn't give them access to the Azure Virtual Network or the subnet that the virtual machine connects to.
[Azure built-in roles](../../role-based-access-control/built-in-roles.md) list the roles available in Azure. It specifies the operations and scope that each built-in role grants to users. If you're looking to define your own roles for even more control, see how to build [Custom roles in Azure RBAC](../../role-based-access-control/custom-roles.md).

Some other capabilities for Azure Active Directory include:

-- Azure AD enables SSO to SaaS applications, regardless of where they are hosted. Some applications are federated with Azure AD, and others use password SSO. Federated applications can also support user provisioning and [password vaulting](https://www.techopedia.com/definition/31415/password-vault).
+- Azure AD enables SSO to SaaS applications, regardless of where they're hosted. Some applications are federated with Azure AD, and others use password SSO. Federated applications can also support user provisioning and [password vaulting](https://www.techopedia.com/definition/31415/password-vault).
- Access to data in [Azure Storage](https://azure.microsoft.com/services/storage/) is controlled via authentication. Each storage account has a primary key ([storage account key](../../storage/common/storage-account-create.md), or SAK) and a secondary secret key (the shared access signature, or SAS).
Some other capabilities for Azure Active Directory include:
Microsoft takes strong measures to protect your data from inappropriate access or use by unauthorized persons. These operational processes and controls are backed by the [Online Services Terms](https://aka.ms/Online-Services-Terms), which offer contractual commitments that govern access to your data.

-- Microsoft engineers do not have default access to your data in the cloud. Instead, they are granted access, under management oversight, only when necessary. That access is carefully controlled and logged, and revoked when it is no longer needed.
-- Microsoft may hire other companies to provide limited services on its behalf. Subcontractors may access customer data only to deliver the services for which, we have hired them to provide, and they are prohibited from using it for any other purpose. Further, they are contractually bound to maintain the confidentiality of our customers' information.
+- Microsoft engineers don't have default access to your data in the cloud. Instead, they're granted access, under management oversight, only when necessary. That access is carefully controlled and logged, and revoked when it's no longer needed.
+- Microsoft may hire other companies to provide limited services on its behalf. Subcontractors may access customer data only to deliver the services we have hired them to provide, and they're prohibited from using it for any other purpose. Further, they're contractually bound to maintain the confidentiality of our customers' information.
Business services with audited certifications such as ISO/IEC 27001 are regularly verified by Microsoft and accredited audit firms, which perform sample audits to attest that access is only for legitimate business purposes.

You can always access your own customer data at any time and for any reason. If you delete any data, Microsoft Azure deletes the data, including any cached or backup copies. For in-scope services, that deletion will occur within 90 days after the end of the retention period. (In-scope services are defined in the Data Processing Terms section of our [Online Services Terms](https://aka.ms/Online-Services-Terms).)
-If a disk drive used for storage suffers a hardware failure, it is securely [erased or destroyed](https://microsoft.com/trustcenter/privacy/you-own-your-data) before Microsoft returns it to the manufacturer for replacement or repair. The data on the drive is overwritten to ensure that the data cannot be recovered by any means.
+If a disk drive used for storage suffers a hardware failure, it's securely [erased or destroyed](https://microsoft.com/trustcenter/privacy/you-own-your-data) before Microsoft returns it to the manufacturer for replacement or repair. The data on the drive is overwritten to ensure that the data can't be recovered by any means.
## Compute Isolation
In addition to the isolated hosts described in the preceding section, Azure also
### Hyper-V & Root OS Isolation Between Root VM & Guest VMs
-Azure's compute platform is based on machine virtualization, meaning that all customer code executes in a Hyper-V virtual machine. On each Azure node (or network endpoint), there is a Hypervisor that runs directly over the hardware and divides a node into a variable number of Guest Virtual Machines (VMs).
+Azure's compute platform is based on machine virtualization, meaning that all customer code executes in a Hyper-V virtual machine. On each Azure node (or network endpoint), there's a Hypervisor that runs directly over the hardware and divides a node into a variable number of Guest Virtual Machines (VMs).
![Hyper-V & Root OS Isolation Between Root VM & Guest VMs](./media/isolation-choices/azure-isolation-fig4.jpg)

Each node also has one special Root VM, which runs the Host OS. A critical boundary is the isolation of the root VM from the guest VMs and the guest VMs from one another, managed by the hypervisor and the root OS. The hypervisor/root OS pairing leverages Microsoft's decades of operating system security experience, and more recent learning from Microsoft's Hyper-V, to provide strong isolation of guest VMs.
-The Azure platform uses a virtualized environment. User instances operate as standalone virtual machines that do not have access to a physical host server.
+The Azure platform uses a virtualized environment. User instances operate as standalone virtual machines that don't have access to a physical host server.
The Azure hypervisor acts like a micro-kernel and passes all hardware access requests from guest virtual machines to the host for processing by using a shared-memory interface called VM Bus. This prevents users from obtaining raw read/write/execute access to the system and mitigates the risk of sharing system resources.
By default, all traffic is blocked when a virtual machine is created, and then t
There are two categories of rules that are programmed:

-- **Machine configuration or infrastructure rules:** By default, all communication is blocked. There are exceptions to allow a virtual machine to send and receive DHCP and DNS traffic. Virtual machines can also send traffic to the "public" internet and send traffic to other virtual machines within the same Azure Virtual Network and the OS activation server. The virtual machines' list of allowed outgoing destinations does not include Azure router subnets, Azure management, and other Microsoft properties.
+- **Machine configuration or infrastructure rules:** By default, all communication is blocked. There are exceptions to allow a virtual machine to send and receive DHCP and DNS traffic. Virtual machines can also send traffic to the "public" internet and send traffic to other virtual machines within the same Azure Virtual Network and the OS activation server. The virtual machines' list of allowed outgoing destinations doesn't include Azure router subnets, Azure management, and other Microsoft properties.
- **Role configuration file:** This defines the inbound Access Control Lists (ACLs) based on the tenant's service model.

### VLAN Isolation
Communication is permitted from the FC VLAN to the main VLAN, but cannot be init
As part of its fundamental design, Microsoft Azure separates VM-based computation from storage. This separation enables computation and storage to scale independently, making it easier to provide multi-tenancy and isolation.
-Therefore, Azure Storage runs on separate hardware with no network connectivity to Azure Compute except logically. This means that when a virtual disk is created, disk space is not allocated for its entire capacity. Instead, a table is created that maps addresses on the virtual disk to areas on the physical disk and that table is initially empty. **The first time a customer writes data on the virtual disk, space on the physical disk is allocated, and a pointer to it is placed in the table.**
+Therefore, Azure Storage runs on separate hardware with no network connectivity to Azure Compute except logically. This means that when a virtual disk is created, disk space isn't allocated for its entire capacity. Instead, a table is created that maps addresses on the virtual disk to areas on the physical disk and that table is initially empty. **The first time a customer writes data on the virtual disk, space on the physical disk is allocated, and a pointer to it is placed in the table.**
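Conceptually, that lazy allocation behaves like the following sketch; the `InMemoryStore` stand-in for physical media is hypothetical and purely illustrative:

```python
class InMemoryStore:
    """Toy stand-in for physical media (hypothetical)."""

    def __init__(self) -> None:
        self._buf = bytearray()

    def allocate(self, size: int) -> int:
        offset = len(self._buf)
        self._buf.extend(b"\x00" * size)
        return offset

    def write(self, offset: int, data: bytes) -> None:
        self._buf[offset:offset + len(data)] = data

    def read(self, offset: int, size: int) -> bytes:
        return bytes(self._buf[offset:offset + size])


class SparseVirtualDisk:
    BLOCK_SIZE = 4096

    def __init__(self, store: InMemoryStore) -> None:
        self.table: dict[int, int] = {}  # virtual block -> physical offset
        self.store = store

    def write(self, virtual_block: int, data: bytes) -> None:
        if virtual_block not in self.table:
            # First write: only now is physical space claimed.
            self.table[virtual_block] = self.store.allocate(self.BLOCK_SIZE)
        self.store.write(self.table[virtual_block], data)

    def read(self, virtual_block: int) -> bytes:
        if virtual_block not in self.table:
            return b"\x00" * self.BLOCK_SIZE  # never written: reads as zeros
        return self.store.read(self.table[virtual_block], self.BLOCK_SIZE)
```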
### Isolation Using Storage Access control
-**Access Control in Azure Storage** has a simple access control model. Each Azure subscription can create one or more Storage Accounts. Each Storage Account has a single secret key that is used to control access to all data in that Storage Account.
+**Access Control in Azure Storage** has a simple access control model. Each Azure subscription can create one or more Storage Accounts. Each Storage Account has a single secret key that's used to control access to all data in that Storage Account.
![Isolation Using Storage Access control](./media/isolation-choices/azure-isolation-fig9.png)
The SAS means that we can grant a client limited permissions, to objects in our
You can establish firewalls and define an IP address range for your trusted clients. With an IP address range, only clients that have an IP address within the defined range can connect to [Azure Storage](../../storage/blobs/security-recommendations.md).
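For example, a time-boxed, read-only SAS that is also restricted to an approved address range can be generated with the `azure-storage-blob` Python package. A minimal sketch; the account, key, container, blob, and IP values are placeholders:

```python
from datetime import datetime, timedelta, timezone

from azure.storage.blob import BlobSasPermissions, generate_blob_sas

ACCOUNT = "contosostorage"            # placeholders throughout
ACCOUNT_KEY = "<storage-account-key>"

# Read-only access to one blob, valid for 15 minutes, and honored only
# when the request originates from the approved address range.
sas = generate_blob_sas(
    account_name=ACCOUNT,
    container_name="reports",
    blob_name="q3-summary.pdf",
    account_key=ACCOUNT_KEY,
    permission=BlobSasPermissions(read=True),
    expiry=datetime.now(timezone.utc) + timedelta(minutes=15),
    ip="203.0.113.10-203.0.113.20",
)

url = f"https://{ACCOUNT}.blob.core.windows.net/reports/q3-summary.pdf?{sas}"
```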
-IP storage data can be protected from unauthorized users via a networking mechanism that is used to allocate a dedicated or dedicated tunnel of traffic to IP storage.
+IP storage data can be protected from unauthorized users via a networking mechanism that's used to allocate a dedicated tunnel of traffic to IP storage.
### Encryption
Azure offers the following types of Encryption to protect data:
#### Encryption in Transit
-Encryption in transit is a mechanism of protecting data when it is transmitted across networks. With Azure Storage, you can secure data using:
+Encryption in transit is a mechanism of protecting data when it's transmitted across networks. With Azure Storage, you can secure data using:
- [Transport-level encryption](../../storage/blobs/security-recommendations.md), such as HTTPS when you transfer data into or out of Azure Storage.
- [Wire encryption](../../storage/blobs/security-recommendations.md), such as SMB 3.0 encryption for Azure File shares.
-- [Client-side encryption](../../storage/blobs/security-recommendations.md), to encrypt the data before it is transferred into storage and to decrypt the data after it is transferred out of storage.
+- [Client-side encryption](../../storage/blobs/security-recommendations.md), to encrypt the data before it's transferred into storage and to decrypt the data after it's transferred out of storage.
#### Encryption at Rest
-For many organizations, [data encryption at rest](isolation-choices.md) is a mandatory step towards data privacy, compliance, and data sovereignty. There are three Azure features that provide encryption of data that is "at rest":
+For many organizations, [data encryption at rest](isolation-choices.md) is a mandatory step towards data privacy, compliance, and data sovereignty. There are three Azure features that provide encryption of data that's "at rest":
- [Storage Service Encryption](../../storage/blobs/security-recommendations.md) allows you to request that the storage service automatically encrypt data when writing it to Azure Storage.
- [Client-side Encryption](../../storage/blobs/security-recommendations.md) also provides the feature of encryption at rest.
For more information, see [Overview of managed disk encryption options](../../vi
The Disk Encryption solution for Windows is based on [Microsoft BitLocker Drive Encryption](/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/cc732774(v=ws.11)), and the Linux solution is based on [dm-crypt](https://en.wikipedia.org/wiki/Dm-crypt).
-The solution supports the following scenarios for IaaS VMs when they are enabled in Microsoft Azure:
+The solution supports the following scenarios for IaaS VMs when they're enabled in Microsoft Azure:
- Integration with Azure Key Vault
- Standard tier VMs: A, D, DS, G, GS, and so forth, series IaaS VMs
The solution supports the following scenarios for IaaS VMs when they are enabled
- Enabling encryption on Windows VMs that are configured by using storage spaces
- All Azure public regions are supported
-The solution does not support the following scenarios, features, and technology in the release:
+The solution doesn't support the following scenarios, features, and technology in the release:
- Basic tier IaaS VMs
- Disabling encryption on an OS drive for Linux IaaS VMs
The account and subscription are Microsoft Azure platform concepts to associate
Logical SQL servers and databases are SQL Database-specific concepts and are managed by using SQL Database, provided OData and TSQL interfaces or via the Azure portal.
-Servers in SQL Database are not physical or VM instances, instead they are collections of databases, sharing management and security policies, which are stored in so called "logical master" database.
+Servers in SQL Database aren't physical or VM instances; instead, they're collections of databases sharing management and security policies, which are stored in the so-called "logical master" database.
![SQL Database](./media/isolation-choices/azure-isolation-fig11.png)
Logical master databases include:
- SQL logins used to connect to the server
- Firewall rules
-Billing and usage-related information for databases from the same server are not guaranteed to be on the same physical instance in the cluster, instead applications must provide the target database name when connecting.
+Billing and usage-related information for databases from the same server aren't guaranteed to be on the same physical instance in the cluster, instead applications must provide the target database name when connecting.
From a customer perspective, a server is created in a geographical region while the actual creation of the server happens in one of the clusters in the region.
From a customer perspective, a server is created in a geographical region while
When a server is created and its DNS name is registered, the DNS name points to the so-called "Gateway VIP" address in the specific data center where the server was placed.
-Behind the VIP (virtual IP address), we have a collection of stateless gateway services. In general, gateways get involved when there is coordination needed between multiple data sources (master database, user database, etc.). Gateway services implement the following:
+Behind the VIP (virtual IP address), we have a collection of stateless gateway services. In general, gateways get involved when there's coordination needed between multiple data sources (master database, user database, etc.). Gateway services implement the following:
- **TDS connection proxying.** This includes locating the user database in the backend cluster, implementing the login sequence, and then forwarding the TDS packets to the backend and back (a toy sketch of this routing follows the list).
- **Database management.** This includes implementing a collection of workflows to do CREATE/ALTER/DROP database operations. The database operations can be invoked by either sniffing TDS packets or explicit OData APIs.
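A toy sketch of that routing idea, with hypothetical database and node names; the real gateway is far more involved:

```python
BACKEND_NODES = {
    "crm-db": "backend-node-17.internal",      # hypothetical names
    "billing-db": "backend-node-42.internal",
}

def route_connection(connection_string: str) -> str:
    """Pick the back-end server for an incoming TDS connection."""
    params = dict(
        part.split("=", 1) for part in connection_string.split(";") if "=" in part
    )
    try:
        return BACKEND_NODES[params["Database"]]
    except KeyError as exc:
        raise LookupError(f"unknown database: {exc}") from None

print(route_connection("Server=contoso;Database=crm-db;Encrypt=True"))
```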
Behind the VIP (virtual IP address), we have a collection of stateless gateway s
The tier behind the gateways is called "back-end". This is where all the data is stored in a highly available fashion. Each piece of data is said to belong to a "partition" or "failover unit", each of them having at least three replicas. Replicas are stored and replicated by SQL Server engine and managed by a failover system often referred to as "fabric".
-Generally, the back-end system does not communicate outbound to other systems as a security precaution. This is reserved to the systems in the front-end (gateway) tier. The gateway tier machines have limited privileges on the back-end machines to minimize the attack surface as a defense-in-depth mechanism.
+Generally, the back-end system doesn't communicate outbound to other systems as a security precaution. This is reserved to the systems in the front-end (gateway) tier. The gateway tier machines have limited privileges on the back-end machines to minimize the attack surface as a defense-in-depth mechanism.
### Isolation by Machine Function and Access
-SQL Database (is composed of services running on different machine functions. SQL Database is divided into "backend" Cloud Database and "front-end" (Gateway/Management) environments, with the general principle of traffic only going into back-end and not out. The front-end environment can communicate to the outside world of other services and in general, has only limited permissions in the back-end (enough to call the entry points it needs to invoke).
+SQL Database is composed of services running on different machine functions. SQL Database is divided into "backend" Cloud Database and "front-end" (Gateway/Management) environments, with the general principle of traffic only going into back-end and not out. The front-end environment can communicate to the outside world of other services and in general, has only limited permissions in the back-end (enough to call the entry points it needs to invoke).
## Networking Isolation
security Log Audit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/log-audit.md
documentationcenter: na ms.assetid:
na Previously updated : 07/08/2022 Last updated : 08/29/2023
security Operational Checklist https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/operational-checklist.md
This checklist is intended to help enterprises think through various operational
| [<br>Data Protection & Storage](../../storage/blobs/security-recommendations.md)|<ul><li>Use Management Plane Security to secure your Storage Account using [Azure role-based access control (Azure RBAC)](../../role-based-access-control/role-assignments-portal.md).</li><li>Data Plane Security to Securing Access to your Data using [Shared Access Signatures (SAS)](../../storage/common/storage-sas-overview.md) and Stored Access Policies.</li><li>Use Transport-Level Encryption ΓÇô Using HTTPS and the encryption used by [SMB (Server message block protocols) 3.0](/windows/win32/fileio/microsoft-smb-protocol-and-cifs-protocol-overview) for [Azure File Shares](../../storage/files/storage-dotnet-how-to-use-files.md).</li><li>Use [Client-side encryption](../../storage/common/storage-client-side-encryption.md) to secure data that you send to storage accounts when you require sole control of encryption keys. </li><li>Use [Storage Service Encryption (SSE)](../../storage/common/storage-service-encryption.md) to automatically encrypt data in Azure Storage, and [Azure Disk Encryption for Linux VMs](../../virtual-machines/linux/disk-encryption-overview.md) and [Azure Disk Encryption for Windows VMs](../../virtual-machines/linux/disk-encryption-overview.md) to encrypt virtual machine disk files for the OS and data disks.</li><li>Use Azure [Storage Analytics](/rest/api/storageservices/storage-analytics) to monitor authorization type; like with Blob Storage, you can see if users have used a Shared Access Signature or the storage account keys.</li><li>Use [Cross-Origin Resource Sharing (CORS)](/rest/api/storageservices/cross-origin-resource-sharing--cors--support-for-the-azure-storage-services) to access storage resources from different domains.</li></ul> | |[<br>Security Policies & Recommendations](../../defender-for-cloud/defender-for-cloud-planning-and-operations-guide.md#security-policies-and-recommendations)|<ul><li>Use [Microsoft Defender for Cloud](../../defender-for-cloud/integration-defender-for-endpoint.md) to deploy endpoint solutions.</li><li>Add a [web application firewall (WAF)](../../web-application-firewall/ag/ag-overview.md) to secure web applications.</li><li>Use [Azure Firewall](../../firewall/overview.md) to increase your security protections. </li><li>Apply security contact details for your Azure subscription. 
The [Microsoft Security Response Center](https://technet.microsoft.com/security/dn528958.aspx) (MSRC) contacts you if it discovers that your customer data has been accessed by an unlawful or unauthorized party.</li></ul> | | [<br>Identity & Access Management](identity-management-best-practices.md)|<ul><li>[Synchronize your on-premises directory with your cloud directory using Azure AD](../../active-directory/hybrid/whatis-hybrid-identity.md).</li><li>Use [single sign-on](../../active-directory/manage-apps/what-is-single-sign-on.md) to enable users to access their SaaS applications based on their organizational account in Azure AD.</li><li>Use the [Password Reset Registration Activity](../../active-directory/authentication/howto-sspr-reporting.md) report to monitor the users that are registering.</li><li>Enable [multi-factor authentication (MFA)](../../active-directory/authentication/concept-mfa-howitworks.md) for users.</li><li>Developers to use secure identity capabilities for apps like [Microsoft Security Development Lifecycle (SDL)](https://www.microsoft.com/download/details.aspx?id=12379).</li><li>Actively monitor for suspicious activities by using Azure AD Premium anomaly reports and [Azure AD identity protection capability](../../active-directory/identity-protection/overview-identity-protection.md).</li></ul> |
-|[<br>Ongoing Security Monitoring](../../defender-for-cloud/defender-for-cloud-introduction.md)|<ul><li>Use Malware Assessment Solution [Azure Monitor logs](../../azure-monitor/logs/log-query-overview.md) to report on the status of antimalware protection in your infrastructure.</li><li>Use [Update Management](../../automation/update-management/overview.md) to determine the overall exposure to potential security problems, and whether or how critical these updates are for your environment.</li><li>The [Azure Active Directory portal](https://aad.portal.azure.com/) to gain visibility into the integrity and security of your organization's directory. |
+|[<br>Ongoing Security Monitoring](../../defender-for-cloud/defender-for-cloud-introduction.md)|<ul><li>Use Malware Assessment Solution [Azure Monitor logs](../../azure-monitor/logs/log-query-overview.md) to report on the status of antimalware protection in your infrastructure.</li><li>Use [Update Management](../../automation/update-management/overview.md) to determine the overall exposure to potential security problems, and whether or how critical these updates are for your environment.</li><li>The [Microsoft Entra admin center](https://entra.microsoft.com) provides visibility into the integrity and security of your organization's directory. |
| [<br>Microsoft Defender for Cloud detection capabilities](../../security-center/security-center-alerts-overview.md#detect-threats)|<ul><li>Use [Cloud Security Posture Management](../../defender-for-cloud/concept-cloud-security-posture-management.md) (CSPM) for hardening guidance that helps you efficiently and effectively improve your security.</li><li>Use [alerts](../../defender-for-cloud/alerts-overview.md) to be notified when threats are identified in your cloud, hybrid, or on-premises environment. </li><li>Use [security policies, initiatives, and recommendations](../../defender-for-cloud/security-policy-concept.md) to improve your security posture.</li></ul> | ## Conclusion
security Operational Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/operational-overview.md
Title: Azure operational security overview| Microsoft Docs
description: Learn about Azure operational security in this overview. Operational security refers to asset protection services, controls, and features. documentationcenter: na-+ + ms.assetid:--++ na Previously updated : 10/31/2019- Last updated : 08/29/2023+
security Paas Applications Using App Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/paas-applications-using-app-services.md
description: "Learn about Azure App Service security best practices for securing your PaaS web and mobile applications. " documentationcenter: na--++ ms.assetid:-++ na Previously updated : 07/18/2019 Last updated : 08/29/2023
security Protection Customer Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/protection-customer-data.md
documentationcenter: na ms.assetid: 61e95a87-39c5-48f5-aee6-6f90ddcd336e--++ na Previously updated : 05/10/2022 Last updated : 08/29/2023
Azure provides customers with strong data security, both by default and as custo
**In-transit data protection**: Microsoft provides a number of options that can be utilized by customers for securing data in transit internally within the Azure network and externally across the Internet to the end user. These include communication through Virtual Private Networks (utilizing IPsec/IKE encryption), Transport Layer Security (TLS) 1.2 or later (via Azure components such as Application Gateway or Azure Front Door), protocols directly on the Azure virtual machines (such as Windows IPsec or SMB), and more.
-Additionally, "encryption by default" using MACsec (an IEEE standard at the data-link layer) is enabled for all Azure traffic travelling between Azure datacenters to ensure confidentiality and integrity of customer data.
+Additionally, "encryption by default" using MACsec (an IEEE standard at the data-link layer) is enabled for all Azure traffic traveling between Azure datacenters to ensure confidentiality and integrity of customer data.
**Data redundancy**: Microsoft helps ensure that data is protected if there is a cyberattack or physical damage to a datacenter. Customers may opt for:
security Ransomware Protection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/ransomware-protection.md
Title: Ransomware protection in Azure description: Ransomware protection in Azure-++ - Previously updated : 01/10/2022+ Last updated : 8/31/2023
This article lays out key Azure native capabilities and defenses for ransomware
## A growing threat
-Ransomware attacks have become one of the biggest security challenges facing businesses today. When successful, ransomware attacks can cripple a business core IT infrastructure, and cause destruction that could have a debilitating impact on the physical, economic security or safety of a business. Ransomware attacks are targeted to businesses of all types. This requires that all businesses take preventive measures to ensure protection.
+Ransomware attacks have become one of the biggest security challenges facing businesses today. When successful, ransomware attacks can disable a business's core IT infrastructure and cause destruction that could have a debilitating impact on the physical security, economic security, or safety of a business. Ransomware attacks target businesses of all types. This requires that all businesses take preventive measures to ensure protection.
Recent trends in the number of attacks are alarming. While 2020 wasn't a good year for ransomware attacks on businesses, 2021 started on a bad trajectory. On May 7, the Colonial Pipeline (Colonial) attack temporarily halted services such as pipeline transportation of diesel, gasoline, and jet fuel. Colonial shut down the critical fuel network supplying the populous eastern states.
security Service Fabric Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/service-fabric-best-practices.md
Title: Best practices for Azure Service Fabric security description: This article provides a set of best practices for Azure Service Fabric security.--+++ Previously updated : 01/16/2019 Last updated : 08/29/2023

# Azure Service Fabric security best practices

In addition to this article, please also review [Service Fabric security checklist](../../service-fabric/service-fabric-best-practices-security.md) for more information.
security Steps Secure Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/steps-secure-identity.md
Attackers who get control of privileged accounts can do tremendous damage, so it
All set? Let's get started on the checklist.
-## Step 1 - Strengthen your credentials
+## Step 1: Strengthen your credentials
Although other types of attacks are emerging, including consent phishing and attacks on nonhuman identities, password-based attacks on user identities are still the most prevalent vector of identity compromise. Well-established spear phishing and password spray campaigns by adversaries continue to be successful against organizations that haven't yet implemented multi-factor authentication (MFA) or other protections against this common tactic.
Passwords are never stored in clear text or encrypted with a reversible algorith
Smart lockout helps lock out bad actors that try to guess your users' passwords or use brute-force methods to get in. Smart lockout can recognize sign-ins that come from valid users and treat them differently from sign-ins by attackers and other unknown sources. Attackers get locked out, while your users continue to access their accounts and be productive. Organizations that configure applications to authenticate directly to Azure AD benefit from Azure AD smart lockout. Federated deployments that use AD FS 2016 and AD FS 2019 can enable similar benefits using [AD FS Extranet Lockout and Extranet Smart Lockout](/windows-server/identity/ad-fs/operations/configure-ad-fs-extranet-smart-lockout-protection).
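Conceptually, the lockout behavior resembles the following sketch. This is an illustration only; the real service weighs many more signals than a single familiar-location flag, and its thresholds are not public:

```python
import time
from collections import defaultdict

LOCKOUT_THRESHOLD = 10   # made-up numbers for illustration
LOCKOUT_SECONDS = 60

counters: dict[tuple[str, bool], int] = defaultdict(int)
locked_until: dict[tuple[str, bool], float] = {}

def failed_sign_in(user: str, from_familiar_location: bool) -> None:
    # Familiar and unfamiliar sources keep separate counters, so an
    # attacker spraying passwords from unknown IPs locks out only the
    # unfamiliar bucket, not the legitimate user's own sign-ins.
    key = (user, from_familiar_location)
    counters[key] += 1
    if counters[key] >= LOCKOUT_THRESHOLD:
        locked_until[key] = time.time() + LOCKOUT_SECONDS
        counters[key] = 0

def is_locked(user: str, from_familiar_location: bool) -> bool:
    return locked_until.get((user, from_familiar_location), 0.0) > time.time()
```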
-## Step 2 - Reduce your attack surface area
+## Step 2: Reduce your attack surface area
Given the pervasiveness of password compromise, minimizing the attack surface in your organization is critical. Disable the use of older, less secure protocols, limit access entry points, move to cloud authentication, exercise more significant control of administrative access to resources, and embrace Zero Trust security principles.
Make sure users can request admin approval for new applications to reduce user f
For more information, see the article [Azure Active Directory consent framework](../../active-directory/develop/consent-framework.md).
-## Step 3 - Automate threat response
+## Step 3: Automate threat response
Azure Active Directory has many capabilities that automatically intercept attacks, removing the latency between detection and response. You can reduce costs and risks when you reduce the time criminals have to embed themselves in your environment. Here are the concrete steps you can take.
Learn more about Microsoft Threat Protection and the importance of integrating d
Monitoring and auditing your logs is important to detect suspicious behavior. The Azure portal has several ways to integrate Azure AD logs with other tools, like Microsoft Sentinel, Azure Monitor, and other SIEM tools. For more information, see the [Azure Active Directory security operations guide](../../active-directory/fundamentals/security-operations-introduction.md#data-sources).
-## Step 4 - Utilize cloud intelligence
+## Step 4: Utilize cloud intelligence
Auditing and logging of security-related events and related alerts are essential components of an efficient protection strategy. Security logs and reports provide you with an electronic record of suspicious activities and help you detect patterns that may indicate attempted or successful external penetration of the network, and internal attacks. You can use auditing to monitor user activity, document regulatory compliance, do forensic analysis, and more. Alerts provide notifications of security events. Make sure you have a log retention policy in place for both your sign-in logs and audit logs for Azure AD by exporting into Azure Monitor or a SIEM tool.
Microsoft Azure services and features provide you with configurable security auditing and logging options.
Users can be tricked into navigating to a compromised website or apps that will gain access to their profile information and user data, such as their email. A malicious actor can use the consented permissions it received to encrypt mailbox content and demand a ransom to regain your mailbox data. [Administrators should review and audit](/office365/securitycompliance/detect-and-remediate-illicit-consent-grants) the permissions given by users. In addition to auditing the permissions given by users, you can [locate risky or unwanted OAuth applications](/cloud-app-security/investigate-risky-oauth) in premium environments.
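One way to support that review is to query the Azure AD audit logs for consent events. A hedged KQL sketch, assuming AuditLogs is collected and consent grants surface under the operation name shown below:

```kusto
// Sketch: list recent consent grants for admin review.
// Assumes "Consent to application" is the operation name emitted for consent events.
AuditLogs
| where TimeGenerated > ago(30d)
| where OperationName == "Consent to application"
| project TimeGenerated, InitiatedBy, TargetResources, Result
```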
-## Step 5 - Enable end-user self-service
+## Step 5: Enable end-user self-service
As much as possible, you'll want to balance security with productivity. If you approach your journey with the mindset that you're setting a foundation for security, you can remove friction from your organization by empowering your users while remaining vigilant, and reduce your operational overhead.
security Technical Capabilities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/technical-capabilities.md
Security benefits of Azure Active Directory (Azure AD) include the ability to:
- Provision secure remote access to on-premises web applications through Azure AD Application Proxy.
-The [Azure Active Directory portal](https://aad.portal.azure.com/) is available as part of the Azure portal. From this dashboard, you can get an overview of the state of your organization, and easily manage the directory, users, or application access.
- ![Azure Active Directory](./media/technical-capabilities/azure-security-technical-capabilities-fig2.png)
The following are core Azure identity management capabilities:
Not only do users not have to manage multiple sets of usernames and passwords, a
Security monitoring and alerts and machine learning-based reports that identify inconsistent access patterns can help you protect your business. You can use Azure Active Directory's access and usage reports to gain visibility into the integrity and security of your organization's directory. With this information, a directory admin can better determine where possible security risks may lie so that they can adequately plan to mitigate those risks.
-In the Azure portal or through the [Azure Active Directory portal](https://aad.portal.azure.com/), [reports](../../active-directory/reports-monitoring/overview-reports.md) are categorized in the following ways:
+In the [Azure portal](https://portal.azure.com), [reports](../../active-directory/reports-monitoring/overview-reports.md) are categorized in the following ways:
- Anomaly reports – contain sign-in events that we found to be anomalous. Our goal is to make you aware of such activity and enable you to decide whether an event is suspicious.
sentinel Add Entity To Threat Intelligence https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/add-entity-to-threat-intelligence.md
For example, you may discover an IP address performing port scans across your network.
Microsoft Sentinel allows you to flag these types of entities as malicious, right from within your incident investigation, and add them to your threat indicator lists. You'll then be able to view the added indicators both in Logs and in the Threat Intelligence blade, and use them across your Microsoft Sentinel workspace.
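For example, once an entity has been added, a query along these lines (a sketch, assuming indicators land in the standard ThreatIntelligenceIndicator table) surfaces the most recently created indicators:

```kusto
// Sketch: review the newest threat indicators in the workspace.
ThreatIntelligenceIndicator
| where TimeGenerated > ago(7d)
| summarize arg_max(TimeGenerated, *) by IndicatorId
| project TimeGenerated, Description, ThreatType, NetworkIP, DomainName, Url, ConfidenceScore
| order by TimeGenerated desc
```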
-> [!IMPORTANT]
-> Adding entities as TI indicators is currently in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
## Add an entity to your indicators list

The new [incident details page](investigate-incidents.md) gives you another way to add entities to threat intelligence, in addition to the investigation graph. Both ways are shown below.
The new [incident details page](investigate-incidents.md) gives you another way
1. Find the entity from the **Entities** widget that you want to add as a threat indicator. (You can filter the list or enter a search string to help you locate it.)
-1. Select the three dots to the right of the entity, and select **Add to TI (Preview)** from the pop-up menu.
+1. Select the three dots to the right of the entity, and select **Add to TI** from the pop-up menu.
Only the following types of entities can be added as threat indicators:

- Domain name
sentinel Connect Azure Functions Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/connect-azure-functions-template.md
Make sure that you have the following permissions and credentials before using Azure Functions based connectors.
> - Some data connectors depend on a parser based on a [Kusto Function](/azure/data-explorer/kusto/query/functions/user-defined-functions) to work as expected. See the section for your service in the [Microsoft Sentinel data connectors reference](data-connectors-reference.md) page for links to instructions to create the Kusto function and alias.
-### STEP 1 - Get your source system's API credentials
+### Step 1: Get your source system's API credentials
Follow your source system's instructions to get its **API credentials / authorization keys / tokens**. Copy and paste them into a text file for later. You can find details on the exact credentials you'll need, and links to your product's instructions for finding or creating them, on the data connector page in the portal and in the section for your service in the [Microsoft Sentinel data connectors reference](data-connectors-reference.md) page. You may also need to configure logging or other settings on your source system. You'll find the relevant instructions together with those in the preceding paragraph.
-### STEP 2 - Deploy the connector and the associated Azure Function App
+### Step 2: Deploy the connector and the associated Azure Function App
#### Choose a deployment option
This method provides an automated deployment of your Azure Function-based connector.
1. The **Custom deployment** screen will appear.
   - Select a **subscription**, **resource group**, and **region** in which to deploy your Function App.
- - Enter your API credentials / authorization keys / tokens that you saved in [Step 1](#step-1get-your-source-systems-api-credentials) above.
+ - Enter your API credentials / authorization keys / tokens that you saved in [Step 1](#step-1-get-your-source-systems-api-credentials) above.
- Enter your Microsoft Sentinel **Workspace ID** and **Workspace Key** (primary key) that you copied and put aside.
sentinel Connect Services Diagnostic Setting Based https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/connect-services-diagnostic-setting-based.md
To ingest data into Microsoft Sentinel:
|Data connector |Licensing, costs, and other information |
|||
|Azure Activity| This connector now uses the diagnostic settings pipeline. If you're using the legacy method, you must disconnect the existing subscriptions from the legacy method before setting up the new Azure Activity log connector.<br><br>1. From the Microsoft Sentinel navigation menu, select **Data connectors**. From the list of connectors, select **Azure Activity**, and then select the **Open connector page** button on the lower right.<br>2. Under the **Instructions** tab, in the **Configuration** section, in step 1, review the list of your existing subscriptions that are connected to the legacy method, and disconnect them all at once by clicking the **Disconnect All** button below.<br>3. Continue setting up the new connector with the instructions in this section. |
- |Azure DDoS Protection|- Configured [Azure DDoS Standard protection plan](../ddos-protection/manage-ddos-protection.md#create-a-ddos-protection-plan).<br>- Configured [virtual network with Azure DDoS Standard enabled](../ddos-protection/manage-ddos-protection.md#enable-ddos-protection-for-a-new-virtual-network)<br>- Other charges may apply<br>- The **Status** for Azure DDoS Protection Data Connector changes to **Connected** only when the protected resources are under a DDoS attack.|
+ |Azure DDoS Protection|- Configured [Azure DDoS Standard protection plan](../ddos-protection/manage-ddos-protection.md#create-a-ddos-protection-plan).<br>- Configured [virtual network with Azure DDoS Standard enabled](../ddos-protection/manage-ddos-protection.md#enable-for-a-new-virtual-network)<br>- Other charges may apply<br>- The **Status** for Azure DDoS Protection Data Connector changes to **Connected** only when the protected resources are under a DDoS attack.|
|Azure Storage Account|The storage account (parent) resource has within it other (child) resources for each type of storage: files, tables, queues, and blobs.</br>When configuring diagnostics for a storage account, you must select and configure: <br><br>- The parent account resource, exporting the **Transaction** metric.<br>- Each of the child storage-type resources, exporting all the logs and metrics.<br><br>You will only see the storage types that you actually have defined resources for.|

### Instructions
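After connecting one of these sources, a quick way to confirm that data is arriving is to query the corresponding table. A sketch for the Azure Activity connector, assuming it has been connected as described above:

```kusto
// Sketch: confirm Azure Activity events are being ingested.
AzureActivity
| where TimeGenerated > ago(1d)
| summarize Events = count() by OperationNameValue
| top 10 by Events
```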
sentinel Create Custom Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/create-custom-connector.md
For examples of this method, see:
- [Connect your VMware Carbon Black Cloud Endpoint Standard to Microsoft Sentinel with Azure Function](./data-connectors/vmware-carbon-black-cloud-using-azure-functions.md)
- [Connect your Okta Single Sign-On to Microsoft Sentinel with Azure Function](./data-connectors/okta-single-sign-on-using-azure-function.md)
-- [Connect your Proofpoint TAP to Microsoft Sentinel with Azure Function](./data-connectors/proofpoint-tap-using-azure-function.md)
+- [Connect your Proofpoint TAP to Microsoft Sentinel with Azure Function](./data-connectors/proofpoint-tap-using-azure-functions.md)
- [Connect your Qualys VM to Microsoft Sentinel with Azure Function](data-connectors/qualys-vulnerability-management-using-azure-functions.md)
- [Ingesting XML, CSV, or other formats of data](../azure-monitor/logs/create-pipeline-datacollector-api.md#ingesting-xml-csv-or-other-formats-of-data)
- [Monitoring Zoom with Microsoft Sentinel](https://techcommunity.microsoft.com/t5/azure-sentinel/monitoring-zoom-with-azure-sentinel/ba-p/1341516) (blog)
sentinel Create Nrt Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/create-nrt-rules.md
You create NRT rules the same way you create regular [scheduled-query analytics
The configuration of NRT rules is in most ways the same as that of scheduled analytics rules.
- - You can refer to [**watchlists**](watchlists.md) in your query logic.
+ - You can refer to multiple tables and [**watchlists**](watchlists.md) in your query logic.
- You can use all of the alert enrichment methods: [**entity mapping**](map-data-fields-to-entities.md), [**custom details**](surface-custom-details-in-alerts.md), and [**alert details**](customize-alert-details.md).
You create NRT rules the same way you create regular [scheduled-query analytics
In addition, the query itself has the following requirements:
- - The query itself can refer to only one table, and cannot contain unions or joins.
- - You can't run the query across workspaces.
- Due to the size limits of the alerts, your query should make use of `project` statements to include only the necessary fields from your table. Otherwise, the information you want to surface could end up being truncated.
sentinel Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/customer-managed-keys.md
To provision CMK, follow these steps: 
1. Onboard the workspace to Microsoft Sentinel via the [Onboarding API](/rest/api/securityinsights/preview/sentinel-onboarding-states/create).
1. Contact the Microsoft Sentinel Product group to confirm onboarding.
-### STEP 1: Create an Azure Key Vault and generate or import a key
+### Step 1: Create an Azure Key Vault and generate or import a key
1. [Create Azure Key Vault resource](/azure-stack/user/azure-stack-key-vault-manage-portal), then generate or import a key to be used for data encryption.
To provision CMK, follow these steps: 
- Turn on [Purge protection](../key-vault/general/soft-delete-overview.md#purge-protection) to guard against forced deletion of the secret/vault even after soft delete.
-### STEP 2: Enable CMK on your Log Analytics workspace
+### Step 2: Enable CMK on your Log Analytics workspace
Follow the instructions in [Azure Monitor customer-managed key configuration](../azure-monitor/logs/customer-managed-keys.md) in order to create a CMK workspace that is used as the Microsoft Sentinel workspace in the following steps.
-### STEP 3: Register the Azure Cosmos DB Resource Provider
+### Step 3: Register the Azure Cosmos DB Resource Provider
Microsoft Sentinel works with Azure Cosmos DB as an additional storage resource. Make sure to register to the Azure Cosmos DB Resource Provider. Follow the instructions to [Register the Azure Cosmos DB Resource Provider](../cosmos-db/how-to-setup-cmk.md#register-resource-provider) for your Azure subscription.
-### STEP 4: Add an access policy to your Azure Key Vault instance
+### Step 4: Add an access policy to your Azure Key Vault instance
Add an access policy that allows your Azure Cosmos DB to access the Azure Key Vault instance created in [**Step 1**](#step-1-create-an-azure-key-vault-and-generate-or-import-a-key).
Follow the instructions here to [add an access policy to your Azure Key Vault in
:::image type="content" source="../cosmos-db/media/how-to-setup-customer-managed-keys/add-access-policy-principal.png" lightbox="../cosmos-db/media/how-to-setup-customer-managed-keys/add-access-policy-principal.png" alt-text="Screenshot of the Select principal option on the Add access policy page.":::
-### STEP 5: Onboard the workspace to Microsoft Sentinel via the onboarding API
+### Step 5: Onboard the workspace to Microsoft Sentinel via the onboarding API
Onboard the CMK enabled workspace to Microsoft Sentinel via the [onboarding API](/rest/api/securityinsights/preview/sentinel-onboarding-states/create) using the `customerManagedKey` property as `true`. For more context on the onboarding API, see [this document](https://github.com/Azure/Azure-Sentinel/raw/master/docs/Azure%20Sentinel%20management.docx) in the Microsoft Sentinel GitHub repo.
PUT https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{
} ```
-### STEP 6: Contact the Microsoft Sentinel Product group to confirm onboarding
+### Step 6: Contact the Microsoft Sentinel Product group to confirm onboarding
Lastly, you must confirm the onboarding status of your CMK enabled workspace by contacting the [Microsoft Sentinel Product Group](mailto:onboardrecoeng@microsoft.com).
sentinel Data Connectors Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors-reference.md
Title: Find your Microsoft Sentinel data connector | Microsoft Docs
description: Learn about specific configuration steps for Microsoft Sentinel data connectors. Previously updated : 07/26/2023 Last updated : 08/28/2023
Data connectors are available as part of the following offerings:
## Box

-- [Box (using Azure Function)](data-connectors/box-using-azure-function.md)
+- [Box (using Azure Functions)](data-connectors/box-using-azure-functions.md)
## Broadcom
Data connectors are available as part of the following offerings:
## Cisco Systems, Inc.

- [Cisco Firepower eStreamer](data-connectors/cisco-firepower-estreamer.md)
+- [Cisco Software Defined WAN](data-connectors/cisco-software-defined-wan.md)
## Citrix
Data connectors are available as part of the following offerings:
- [AI Analyst Darktrace](data-connectors/ai-analyst-darktrace.md)
- [Darktrace Connector for Microsoft Sentinel REST API](data-connectors/darktrace-connector-for-microsoft-sentinel-rest-api.md)
+## Defend Limited
+
+- [Cortex XDR - Incidents](data-connectors/cortex-xdr-incidents.md)
## Delinea Inc.

- [Delinea Secret Server](data-connectors/delinea-secret-server.md)
Data connectors are available as part of the following offerings:
- [Common Event Format (CEF) via AMA](data-connectors/common-event-format-cef-via-ama.md)
- [DNS](data-connectors/dns.md)
- [Fortinet FortiWeb Web Application Firewall](data-connectors/fortinet-fortiweb-web-application-firewall.md)
+- [Microsoft 365 (formerly, Office 365)](data-connectors/microsoft-365.md)
- [Microsoft 365 Defender](data-connectors/microsoft-365-defender.md)
- [Microsoft 365 Insider Risk Management](data-connectors/microsoft-365-insider-risk-management.md)
- [Microsoft Defender for Cloud](data-connectors/microsoft-defender-for-cloud.md)
Data connectors are available as part of the following offerings:
- [Microsoft Purview (Preview)](data-connectors/microsoft-purview.md)
- [Microsoft Purview Information Protection](data-connectors/microsoft-purview-information-protection.md)
- [Network Security Groups](data-connectors/network-security-groups.md)
-- [Office 365](data-connectors/office-365.md)
- [Security Events via Legacy Agent](data-connectors/security-events-via-legacy-agent.md)
- [Syslog](data-connectors/syslog.md)
- [Threat intelligence - TAXII](data-connectors/threat-intelligence-taxii.md)
Data connectors are available as part of the following offerings:
- [Forcepoint CSG](data-connectors/forcepoint-csg.md)
- [Forcepoint DLP](data-connectors/forcepoint-dlp.md)
- [Forcepoint NGFW](data-connectors/forcepoint-ngfw.md)
+- [MISP2Sentinel](data-connectors/misp2sentinel.md)
## MongoDB
Data connectors are available as part of the following offerings:
- [MuleSoft Cloudhub (using Azure Functions)](data-connectors/mulesoft-cloudhub-using-azure-functions.md)
+## Nasuni Corporation
+
+- [Nasuni Edge Appliance](data-connectors/nasuni-edge-appliance.md)
## NetClean Technologies AB

- [Netclean ProActive Incidents](data-connectors/netclean-proactive-incidents.md)
Data connectors are available as part of the following offerings:
- [Palo Alto Networks (Firewall)](data-connectors/palo-alto-networks-firewall.md)
- [Palo Alto Networks Cortex Data Lake (CDL)](data-connectors/palo-alto-networks-cortex-data-lake-cdl.md)
-- [Palo Alto Prisma Cloud CSPM (using Azure Functions)](data-connectors/palo-alto-prisma-cloud-cspm-using-azure-function.md)
+- [Palo Alto Prisma Cloud CSPM (using Azure Functions)](data-connectors/palo-alto-prisma-cloud-cspm-using-azure-functions.md)
## Perimeter 81
Data connectors are available as part of the following offerings:
## Proofpoint

- [Proofpoint On Demand Email Security (using Azure Functions)](data-connectors/proofpoint-on-demand-email-security-using-azure-functions.md)
-- [Proofpoint TAP (using Azure Functions)](data-connectors/proofpoint-tap-using-azure-function.md)
+- [Proofpoint TAP (using Azure Functions)](data-connectors/proofpoint-tap-using-azure-functions.md)
## Pulse Secure
Data connectors are available as part of the following offerings:
## Rubrik, Inc.

-- [Rubrik Security Cloud data connector (using Azure Functions)](data-connectors/rubrik-security-cloud-data-connector-using-azure-function.md)
+- [Rubrik Security Cloud data connector (using Azure Functions)](data-connectors/rubrik-security-cloud-data-connector-using-azure-functions.md)
## SailPoint
Data connectors are available as part of the following offerings:
## Sophos

- [Sophos Cloud Optix](data-connectors/sophos-cloud-optix.md)
-- [Sophos Endpoint Protection (using Azure Functions)](data-connectors/sophos-endpoint-protection-using-azure-function.md)
+- [Sophos Endpoint Protection (using Azure Functions)](data-connectors/sophos-endpoint-protection-using-azure-functions.md)
- [Sophos XG Firewall](data-connectors/sophos-xg-firewall.md)

## Squid
Data connectors are available as part of the following offerings:
- [AI Vectra Stream](data-connectors/ai-vectra-stream.md)
- [Vectra AI Detect](data-connectors/vectra-ai-detect.md)
+- [Vectra XDR (using Azure Functions)](data-connectors/vectra-xdr-using-azure-functions.md)
## VMware
sentinel Amazon Web Services S3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/amazon-web-services-s3.md
Title: "Amazon Web Services S3 connector for Microsoft Sentinel"
+ Title: "Amazon Web Services S3 connector for Microsoft Sentinel (preview)"
description: "Learn how to install the connector Amazon Web Services S3 to connect your data source to Microsoft Sentinel."
-# Amazon Web Services S3 connector for Microsoft Sentinel
+# Amazon Web Services S3 connector for Microsoft Sentinel (preview)
This connector allows you to ingest AWS service logs, collected in AWS S3 buckets, to Microsoft Sentinel. The currently supported data types are:

* AWS CloudTrail
This connector allows you to ingest AWS service logs, collected in AWS S3 bucket
| Connector attribute | Description |
| | |
| **Log Analytics table(s)** | AWSGuardDuty<br/> AWSVPCFlow<br/> AWSCloudTrail<br/> |
-| **Data collection rules support** | Not currently supported |
+| **Data collection rules support** | [Supported as listed](/azure/azure-monitor/logs/tables-feature-support) |
| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) |
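Once the connector is running, a short verification query per table can confirm ingestion. A sketch for the CloudTrail stream, assuming the AWSCloudTrail table listed above is populated:

```kusto
// Sketch: check that AWS CloudTrail records are flowing in.
AWSCloudTrail
| where TimeGenerated > ago(1d)
| summarize Events = count() by EventName
| top 10 by Events
```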
sentinel Box Using Azure Function https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/box-using-azure-function.md
- Title: "Box (using Azure Functions) connector for Microsoft Sentinel"
-description: "Learn how to install the connector Box (using Azure Functions) to connect your data source to Microsoft Sentinel."
-- Previously updated : 06/22/2023----
-# Box (using Azure Functions) connector for Microsoft Sentinel
-
-The Box data connector provides the capability to ingest [Box enterprise's events](https://developer.box.com/guides/events/#admin-events) into Microsoft Sentinel using the Box REST API. Refer to [Box documentation](https://developer.box.com/guides/events/enterprise-events/for-enterprise/) for more information.
-
-## Connector attributes
-
-| Connector attribute | Description |
-| | |
-| **Application settings** | AzureSentinelWorkspaceId<br/>AzureSentinelSharedKey<br/>BOX_CONFIG_JSON<br/>logAnalyticsUri (optional) |
-| **Azure functions app code** | https://aka.ms/sentinel-BoxDataConnector-functionapp |
-| **Log Analytics table(s)** | BoxEvents_CL<br/> |
-| **Data collection rules support** | Not currently supported |
-| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) |
-
-## Query samples
-
-**All Box events**
- ```kusto
-BoxEvents
-
- | sort by TimeGenerated desc
- ```
---
-## Prerequisites
-
-To integrate with Box (using Azure Functions) make sure you have:
--- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App is required. [See the documentation to learn more about Azure Functions](/azure/azure-functions).-- **Box API Credentials**: Box config JSON file is required for Box REST API JWT authentication. [See the documentation to learn more about JWT authentication](https://developer.box.com/guides/authentication/jwt/).--
-## Vendor installation instructions
--
-> [!NOTE]
- > This connector uses Azure Functions to connect to the Box REST API to pull logs into Microsoft Sentinel. This might result in additional data ingestion costs. Check the [Azure Functions pricing page](https://azure.microsoft.com/pricing/details/functions/) for details.
--
->**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App.
--
-> [!NOTE]
- > This connector depends on a parser based on Kusto Function to work as expected [**BoxEvents**](https://aka.ms/sentinel-BoxDataConnector-parser) which is deployed with the Microsoft Sentinel Solution.
--
-**STEP 1 - Configuration of the Box events collection**
-
-See documentation to [setup JWT authentication](https://developer.box.com/guides/authentication/jwt/jwt-setup/) and [obtain JSON file with credentials](https://developer.box.com/guides/authentication/jwt/with-sdk/#prerequisites).
--
-**STEP 2 - Choose ONE from the following two deployment options to deploy the connector and the associated Azure Function**
-
->**IMPORTANT:** Before deploying the Box data connector, have the Workspace ID and Workspace Primary Key (can be copied from the following), as well as the Box JSON configuration file, readily available.
---
-Option 1 - Azure Resource Manager (ARM) Template
-
-Use this method for automated deployment of the Box data connector using an ARM Tempate.
-
-1. Click the **Deploy to Azure** button below.
-
- [![Deploy To Azure](https://aka.ms/deploytoazurebutton)](https://aka.ms/sentinel-BoxDataConnector-azuredeploy)
-2. Select the preferred **Subscription**, **Resource Group** and **Location**.
-3. Enter the **AzureSentinelWorkspaceId**, **AzureSentinelSharedKey**, **BoxConfigJSON**
-4. Mark the checkbox labeled **I agree to the terms and conditions stated above**.
-5. Click **Purchase** to deploy.
-
-Option 2 - Manual Deployment of Azure Functions
-
-Use the following step-by-step instructions to deploy the Box data connector manually with Azure Functions (Deployment via Visual Studio Code).
--
-**1. Deploy a Function App**
-
-> **NOTE:** You will need to [prepare VS code](/azure/azure-functions/create-first-function-vs-code-python) for Azure function development.
-
-1. Download the [Azure Function App](https://aka.ms/sentinel-BoxDataConnector-functionapp) file. Extract archive to your local development computer.
-2. Start VS Code. Choose File in the main menu and select Open Folder.
-3. Select the top level folder from extracted files.
-4. Choose the Azure icon in the Activity bar, then in the **Azure: Functions** area, choose the **Deploy to function app** button.
-If you aren't already signed in, choose the Azure icon in the Activity bar, then in the **Azure: Functions** area, choose **Sign in to Azure**
-If you're already signed in, go to the next step.
-5. Provide the following information at the prompts:
-
- a. **Select folder:** Choose a folder from your workspace or browse to one that contains your function app.
-
- b. **Select Subscription:** Choose the subscription to use.
-
- c. Select **Create new Function App in Azure** (Don't choose the Advanced option)
-
- d. **Enter a globally unique name for the function app:** Type a name that is valid in a URL path. The name you type is validated to make sure that it's unique in Azure Functions. (e.g. BoxXYZ).
-
- e. **Select a runtime:** Choose Python 3.6 (note that other versions of python are not supported for this function).
-
- f. Select a location for new resources. For better performance and lower costs choose the same [region](https://azure.microsoft.com/regions/) where Microsoft Sentinel is located.
-
-6. Deployment will begin. A notification is displayed after your function app is created and the deployment package is applied.
-7. Go to Azure Portal for the Function App configuration.
--
-**2. Configure the Function App**
-
-1. In the Function App, select the Function App Name and select **Configuration**.
-2. In the **Application settings** tab, select **+ New application setting**.
-3. Add each of the following application settings individually, with their respective string values (case-sensitive):
- AzureSentinelWorkspaceId
- AzureSentinelSharedKey
- BOX_CONFIG_JSON
- logAnalyticsUri (optional)
-> - Use logAnalyticsUri to override the log analytics API endpoint for dedicated cloud. For example, for public cloud, leave the value empty; for Azure GovUS cloud environment, specify the value in the following format: `https://<CustomerId>.ods.opinsights.azure.us`.
-3. Once all application settings have been entered, click **Save**.
---
-## Next steps
-
-For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-box?tab=Overview) in the Azure Marketplace.
sentinel Box Using Azure Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/box-using-azure-functions.md
+
+ Title: "Box (using Azure Functions) connector for Microsoft Sentinel"
+description: "Learn how to install the connector Box (using Azure Functions) to connect your data source to Microsoft Sentinel."
++ Last updated : 08/28/2023++++
+# Box (using Azure Functions) connector for Microsoft Sentinel
+
+The Box data connector provides the capability to ingest [Box enterprise's events](https://developer.box.com/guides/events/#admin-events) into Microsoft Sentinel using the Box REST API. Refer to [Box documentation](https://developer.box.com/guides/events/enterprise-events/for-enterprise/) for more information.
+
+## Connector attributes
+
+| Connector attribute | Description |
+| | |
+| **Log Analytics table(s)** | BoxEvents_CL<br/> |
+| **Data collection rules support** | Not currently supported |
+| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) |
+
+## Query samples
+
+**All Box events**
+ ```kusto
+BoxEvents
+
+ | sort by TimeGenerated desc
+ ```
+++
+## Prerequisites
+
+To integrate with Box (using Azure Functions) make sure you have:
+
+- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions are required to create a Function App. [See the documentation to learn more about Azure Functions](/azure/azure-functions/).
+- **Box API Credentials**: Box config JSON file is required for Box REST API JWT authentication. [See the documentation to learn more about JWT authentication](https://developer.box.com/guides/authentication/jwt/).
++
+## Vendor installation instructions
++
+> [!NOTE]
+ > This connector uses Azure Functions to connect to the Box REST API to pull logs into Microsoft Sentinel. This might result in additional data ingestion costs. Check the [Azure Functions pricing page](https://azure.microsoft.com/pricing/details/functions/) for details.
++
+>**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App.
++
+> [!NOTE]
+ > This connector depends on a parser based on Kusto Function to work as expected [**BoxEvents**](https://aka.ms/sentinel-BoxDataConnector-parser) which is deployed with the Microsoft Sentinel Solution.
++
+**STEP 1 - Configuration of the Box events collection**
+
+See the documentation to [set up JWT authentication](https://developer.box.com/guides/authentication/jwt/jwt-setup/) and [obtain a JSON file with credentials](https://developer.box.com/guides/authentication/jwt/with-sdk/#prerequisites).
++
+**STEP 2 - Choose ONE from the following two deployment options to deploy the connector and the associated Azure Function**
+
+>**IMPORTANT:** Before deploying the Box data connector, have the Workspace ID and Workspace Primary Key (can be copied from the following), as well as the Box JSON configuration file, readily available.
+++++++
+## Next steps
+
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-box?tab=Overview) in the Azure Marketplace.
sentinel Braodcom Symantec Dlp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/braodcom-symantec-dlp.md
Title: "Broadcom Symantec DLP connector for Microsoft Sentinel"
description: "Learn how to install the connector Broadcom Symantec DLP to connect your data source to Microsoft Sentinel." Previously updated : 07/26/2023 Last updated : 08/28/2023
Install the Microsoft Monitoring Agent on your Linux machine and configure the m
2. Forward Symantec DLP logs to a Syslog agent Configure Symantec DLP to forward Syslog messages in CEF format to your Microsoft Sentinel workspace via the Syslog agent.
-1. Use the IP address or hostname for the Linux device with the Linux agent installed as the Destination IP address.
+1. [Follow these instructions](https://knowledge.broadcom.com/external/article/159509/generating-syslog-messages-from-data-los.html) to configure Symantec DLP to forward syslog messages.
+2. Use the IP address or hostname for the Linux device with the Linux agent installed as the Destination IP address.
-2. Validate connection
+3. Validate connection
Follow the instructions to validate your connectivity:
sentinel Cisco Firepower Estreamer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/cisco-firepower-estreamer.md
Title: "Cisco Firepower eStreamer connector for Microsoft Sentinel"
+ Title: "Cisco Firepower eStreamer connector for Microsoft Sentinel (preview)"
description: "Learn how to install the connector Cisco Firepower eStreamer to connect your data source to Microsoft Sentinel."
-# Cisco Firepower eStreamer connector for Microsoft Sentinel
+# Cisco Firepower eStreamer connector for Microsoft Sentinel (preview)
eStreamer is a client-server API designed for the Cisco Firepower NGFW solution. The eStreamer client requests detailed event data on behalf of the SIEM or logging solution in the Common Event Format (CEF).
sentinel Cisco Software Defined Wan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/cisco-software-defined-wan.md
+
+ Title: "Cisco Software Defined WAN connector for Microsoft Sentinel"
+description: "Learn how to install the connector Cisco Software Defined WAN to connect your data source to Microsoft Sentinel."
++ Last updated : 08/28/2023++++
+# Cisco Software Defined WAN connector for Microsoft Sentinel
+
+The Cisco Software Defined WAN (SD-WAN) data connector provides the capability to ingest [Cisco SD-WAN](https://www.cisco.com/c/en_in/solutions/enterprise-networks/sd-wan/index.html) Syslog and Netflow data into Microsoft Sentinel.
+
+## Connector attributes
+
+| Connector attribute | Description |
+| | |
+| **Kusto function alias** | CiscoSyslogUTD |
+| **Kusto function url** | https://aka.ms/sentinel-CiscoSyslogUTD-parser |
+| **Log Analytics table(s)** | Syslog<br/> CiscoSDWANNetflow_CL<br/> |
+| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
+| **Supported by** | [Cisco Systems](https://globalcontacts.cloudapps.cisco.com/contacts/contactDetails/en_US/c1o1-c2o2-c3o8) |
+
+## Query samples
+
+**Syslog Events - All Syslog Events.**
+ ```kusto
+Syslog
+
+ | sort by TimeGenerated desc
+ ```
+
+**Cisco SD-WAN Netflow Events - All Netflow Events.**
+ ```kusto
+CiscoSDWANNetflow_CL
+
+ | sort by TimeGenerated desc
+ ```
+++
+## Vendor installation instructions
++
+**To ingest Cisco SD-WAN Syslog and Netflow data into Microsoft Sentinel, follow the steps below.**
+
+1. Steps to ingest Syslog data into Microsoft Sentinel
+
+Azure Monitor Agent will be used to collect the syslog data into Microsoft Sentinel. For that, you first need to create an Azure Arc server for the VM from which syslog data will be sent.
++
+1.1 Steps to Add Azure Arc Server
+
+1. In the Azure portal, go to Servers - Azure Arc and select Add.
+2. Select Generate Script under the Add a single server section. You can also generate a script for multiple servers.
+3. Review the information on the Prerequisites page, then select Next.
+4. On the Resource details page, provide the subscription and resource group of the Microsoft Sentinel, Region, Operating system and Connectivity method. Then select Next.
+5. On the Tags page, review the default Physical location tags suggested and enter a value, or specify one or more Custom tags to support your standards. Then select Next.
+6. Select Download to save the script file.
+7. Now that you have generated the script, the next step is to run it on the server that you want to onboard to Azure Arc.
+8. If you have Azure VM follow the steps mentioned in the [link](/azure/azure-arc/servers/plan-evaluate-on-azure-virtual-machine) before running the script.
+9. Run the script with the following command: `./<ScriptName>.sh`
+10. After you install the agent and configure it to connect to Azure Arc-enabled servers, go to the Azure portal to verify that the server has successfully connected. View your machine in the Azure portal.
+> **Reference link:** [https://learn.microsoft.com/azure/azure-arc/servers/learn/quick-enable-hybrid-vm](/azure/azure-arc/servers/learn/quick-enable-hybrid-vm)
+
+1.2 Steps to Create Data Collection Rule (DCR)
+
+1. In the Azure portal, search for Monitor. Under Settings, select Data Collection Rules and select Create.
+2. On the Basics panel, enter the Rule Name, Subscription, Resource group, Region and Platform Type.
+3. Select Next: Resources.
+4. Select Add resources. Use the filters to find the virtual machine that you'll use to collect logs.
+5. Select the virtual machine. Select Apply.
+6. Select Next: Collect and deliver.
+7. Select Add data source. For Data source type, select Linux syslog.
+8. For Minimum log level, leave the default value, LOG_DEBUG.
+9. Select Next: Destination.
+10. Select Add destination and add Destination type, Subscription and Account or namespace.
+11. Select Add data source. Select Next: Review + create.
+12. Select Create. Wait for 20 minutes. In Microsoft Sentinel or Azure Monitor, verify that the Azure Monitor agent is running on your VM.
+> **Reference link:** [https://learn.microsoft.com/azure/sentinel/forward-syslog-monitor-agent](/azure/sentinel/forward-syslog-monitor-agent)
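To verify from the workspace side that the agent is reporting, a heartbeat query is a reasonable check. A sketch, assuming the Heartbeat table is enabled and the agent reports under the Azure Monitor Agent category:

```kusto
// Sketch: confirm the Azure Monitor Agent is heartbeating from the VM.
Heartbeat
| where TimeGenerated > ago(1h)
| where Category == "Azure Monitor Agent"
| summarize LastHeartbeat = max(TimeGenerated) by Computer
```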
+
+2. Steps to ingest Netflow data into Microsoft Sentinel
+
+To ingest Netflow data into Microsoft Sentinel, Filebeat and Logstash need to be installed and configured on the VM. After configuration, the VM will be able to receive Netflow data on the configured port, and that data will be ingested into the Microsoft Sentinel workspace.
++
+2.1 Install Filebeat and Logstash
+
+1. To install Filebeat and Logstash using apt, refer to these docs:
+ 1. Filebeat: [https://www.elastic.co/guide/en/beats/filebeat/current/setup-repositories.html](https://www.elastic.co/guide/en/beats/filebeat/current/setup-repositories.html).
+ 2. Logstash: [https://www.elastic.co/guide/en/logstash/current/installing-logstash.html](https://www.elastic.co/guide/en/logstash/current/installing-logstash.html).
+2. To install Filebeat and Logstash on RedHat-based Linux (yum), follow these steps:
+ 1. Filebeat: [https://www.elastic.co/guide/en/beats/filebeat/current/setup-repositories.html#_yum](https://www.elastic.co/guide/en/beats/filebeat/current/setup-repositories.html#_yum).
+ 2. Logstash: [https://www.elastic.co/guide/en/logstash/current/installing-logstash.html#_yum](https://www.elastic.co/guide/en/logstash/current/installing-logstash.html#_yum)
+
+2.2 Configure Filebeat to send events to Logstash
+
+1. Edit filebeat.yml file: `vi /etc/filebeat/filebeat.yml`
+2. Comment out the Elasticsearch Output section.
+3. Uncomment the Logstash Output section (uncomment only these two lines):
+ output.logstash
+ hosts: ["localhost:5044"]
+4. In the Logstash Output section, if you want to send the data on a port other than the default (5044), replace the port number in the hosts field. (Note: this port should also be added in the conf file when configuring Logstash.)
+5. In the 'filebeat.inputs' section, comment out the existing configuration and add the following configuration:
+ - type: netflow
+ max_message_size: 10KiB
+ host: "0.0.0.0:2055"
+ protocols: [ v5, v9, ipfix ]
+ expiration_timeout: 30m
+ queue_size: 8192
+ custom_definitions:
+ - /etc/filebeat/custom.yml
+ detect_sequence_reset: true
+ enabled: true
+6. In the Filebeat inputs section, if you want to receive the data on a port other than the default (2055), replace the port number in the host field.
+7. Add the provided [custom.yml](https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/Solutions/Cisco%20SD-WAN/Data%20Connectors/custom.yml) file inside the /etc/filebeat/ directory.
+8. Open the filebeat input and output port in the firewall.
+ 1. Run command: `firewall-cmd --zone=public --permanent --add-port=2055/udp`
+ 2. Run command: `firewall-cmd --zone=public --permanent --add-port=5044/udp`
+> Note: if a custom port is added for filebeat input/output, then open that port in the firewall.
+
+2.3 Configure Logstash to send events to Microsoft Sentinel
+
+1. Install the Azure Log Analytics plugin:
+ 1. Run Command: `sudo /usr/share/logstash/bin/logstash-plugin install microsoft-logstash-output-azure-loganalytics`
+2. Store the Log Analytics workspace key in the Logstash key store. The workspace key can be found in the Azure portal under Log Analytics workspace > select your workspace > under Settings select Agent > Log Analytics agent instructions.
+3. Copy the Primary key and run the following commands:
+ 1. `sudo /usr/share/logstash/bin/logstash-keystore --path.settings /etc/logstash create LogAnalyticsKey`
+ 2. `sudo /usr/share/logstash/bin/logstash-keystore --path.settings /etc/logstash add LogAnalyticsKey`
+4. Create the configuration file /etc/logstash/cisco-netflow-to-sentinel.conf:
+ input {
+ beats {
+ port => <port_number> # (Enter the output port number configured in the filebeat.yml file during Filebeat configuration.)
+ }
+ }
+ output {
+ microsoft-logstash-output-azure-loganalytics {
+ workspace_id => "<workspace_id>"
+ workspace_key => "${LogAnalyticsKey}"
+ custom_log_table_name => "CiscoSDWANNetflow"
+ }
+ }
+> Note: If the table is not present in Microsoft Sentinel, a new table will be created automatically.
+
+2.4 Run Filebeat:
+
+1. Open a terminal and run the command:
+> `systemctl start filebeat`
+
+2. This command will start running Filebeat in the background. To see the logs, stop Filebeat (`systemctl stop filebeat`) and then run the following command:
+> `filebeat run -e`
+
+2.5 Run Logstash:
+
+1. In another terminal run the command:
+> `/usr/share/logstash/bin/logstash --path.settings /etc/logstash -f /etc/logstash/cisco-netflow-to-sentinel.conf &`
+
+2. This command will start running Logstash in the background. To see the Logstash logs, kill the above process and run the following command:
+> `/usr/share/logstash/bin/logstash --path.settings /etc/logstash -f /etc/logstash/cisco-netflow-to-sentinel.conf`
+++
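After both services are running, it can take several minutes for the custom table to appear. A sketch query to confirm Netflow records are arriving, assuming the CiscoSDWANNetflow_CL table name configured above:

```kusto
// Sketch: confirm Netflow events are landing in the custom table.
CiscoSDWANNetflow_CL
| where TimeGenerated > ago(1h)
| summarize Events = count() by bin(TimeGenerated, 5m)
| order by TimeGenerated desc
```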
+## Next steps
+
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/cisco.cisco-catalyst-sdwan-sentinel?tab=Overview) in the Azure Marketplace.
sentinel Citrix Adc Former Netscaler https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/citrix-adc-former-netscaler.md
Title: "Citrix ADC (former NetScaler) connector for Microsoft Sentinel"
description: "Learn how to install the connector Citrix ADC (former NetScaler) to connect your data source to Microsoft Sentinel." Previously updated : 07/26/2023 Last updated : 08/28/2023
CitrixADCEvent
> [!NOTE]
- > 1. This data connector depends on a parser based on a Kusto Function to work as expected which is deployed as part of the solution. To view the function code in Log Analytics, open Log Analytics/Microsoft Sentinel Logs blade, click Functions and search for the alias CitrixADCEvent and load the function code or click [here](https://github.com/Azure/Azure-Sentinel/blob/master/Solutions/Citrix%20ADC/Parsers/CitrixADCEvent.txt), this function maps Citrix ADC (former NetScaler) events to Advanced Security Information Model [ASIM](/azure/sentinel/normalization). The function usually takes 10-15 minutes to activate after solution installation/update.
+> 1. This data connector depends on a parser based on a Kusto Function to work as expected which is deployed as part of the solution. To view the function code in Log Analytics, open Log Analytics/Microsoft Sentinel Logs blade, click Functions and search for the alias CitrixADCEvent and load the function code or click [here](https://github.com/Azure/Azure-Sentinel/blob/master/Solutions/Citrix%20ADC/Parsers/CitrixADCEvent.txt), this function maps Citrix ADC (former NetScaler) events to Advanced Security Information Model [ASIM](/azure/sentinel/normalization). The function usually takes 10-15 minutes to activate after solution installation/update.
+> 2. This parser requires a watchlist named **`Sources_by_SourceType`**
-> [!NOTE]
- > 2. This parser requires a watchlist named **`Sources_by_SourceType`** with fields **`SourceType`** and **`Source`**. To create this watchlist and populate it with SourceType and Source data, follow the instructions [here](/azure/sentinel/normalization-manage-parsers#configure-the-sources-relevant-to-a-source-specific-parser).
+> i. If you don't have the watchlist already created, click [here](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2FAzure-Sentinel%2Fmaster%2FASIM%2Fdeploy%2FWatchlists%2FASimSourceType.json) to create it.
+
+> ii. Open watchlist **`Sources_by_SourceType`** and add entries for this data source.
+
+> iii. The SourceType value for CitrixADC is **`CitrixADC`**.
-> The SourceType value for CitrixADC is **`CitrixADC`**.
+> You can refer to [this](/azure/sentinel/normalization-manage-parsers?WT.mc_id=Portal-fx#configure-the-sources-relevant-to-a-source-specific-parser) documentation for more details.
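To confirm the watchlist entry is in place, you can read it back with the watchlist function. A sketch, assuming the standard _GetWatchlist() function and the field names above:

```kusto
// Sketch: verify the CitrixADC source entry exists in the watchlist.
_GetWatchlist('Sources_by_SourceType')
| where SourceType == 'CitrixADC'
```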
1. Install and onboard the agent for Linux
sentinel Common Event Format Cef Via Ama https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/common-event-format-cef-via-ama.md
Title: "Common Event Format (CEF) via AMA connector for Microsoft Sentinel (preview)"
+ Title: "Common Event Format (CEF) via AMA connector for Microsoft Sentinel"
description: "Learn how to install the connector Common Event Format (CEF) via AMA to connect your data source to Microsoft Sentinel." Previously updated : 07/27/2023 Last updated : 08/28/2023
-# Common Event Format (CEF) via AMA connector for Microsoft Sentinel (preview)
+# Common Event Format (CEF) via AMA connector for Microsoft Sentinel
Common Event Format (CEF) is an industry standard format on top of Syslog messages, used by many security vendors to allow event interoperability among different platforms. By connecting your CEF logs to Microsoft Sentinel, you can take advantage of search & correlation, alerting, and threat intelligence enrichment for each log. For more information, see the [Microsoft Sentinel documentation](https://go.microsoft.com/fwlink/p/?linkid=2223547&wt.mc_id=sentinel_dataconnectordocs_content_cnl_csasci).
sentinel Cortex Xdr Incidents https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/cortex-xdr-incidents.md
+
+ Title: "Cortex XDR - Incidents connector for Microsoft Sentinel"
+description: "Learn how to install the connector Cortex XDR - Incidents to connect your data source to Microsoft Sentinel."
++ Last updated : 08/28/2023++++
+# Cortex XDR - Incidents connector for Microsoft Sentinel
+
+A custom data connector from DEFEND that uses the Cortex API to ingest incidents from the Cortex XDR platform into Microsoft Sentinel.
+
+## Connector attributes
+
+| Connector attribute | Description |
+| | |
+| **Log Analytics table(s)** | {{graphQueriesTableName}}<br/> |
+| **Data collection rules support** | Not currently supported |
+| **Supported by** | [DEFEND Ltd.](https://www.defend.co.nz/) |
+
+## Query samples
+
+**All Cortex XDR Incidents**
+ ```kusto
+{{graphQueriesTableName}}
+
+ | sort by TimeGenerated desc
+ ```
+++
+## Prerequisites
+
+To integrate with Cortex XDR - Incidents make sure you have:
+
+- **Cortex API credentials**: **Cortex API Token** is required for REST API. [See the documentation to learn more about API](https://docs.paloaltonetworks.com/cortex/cortex-xdr/cortex-xdr-api.html). Check all requirements and follow the instructions for obtaining credentials.
++
+## Vendor installation instructions
+
+Enable Cortex XDR API
+
+Connect Cortex XDR to Microsoft Sentinel via Cortex API to process Cortex Incidents.
++++
+## Next steps
+
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/defendlimited1682894612656.cortex_xdr_connector?tab=Overview) in the Azure Marketplace.
sentinel Exabeam Advanced Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/exabeam-advanced-analytics.md
Title: "Exabeam Advanced Analytics connector for Microsoft Sentinel"
description: "Learn how to install the connector Exabeam Advanced Analytics to connect your data source to Microsoft Sentinel." Previously updated : 05/22/2023 Last updated : 08/28/2023
ExabeamEvent
> [!NOTE]
- > This data connector depends on a parser based on a Kusto Function to work as expected which is deployed as part of the solution. To view the function code in Log Analytics, open Log Analytics/Microsoft Sentinel Logs blade, click Functions and search for the alias - and load the function code, on the second line of the query, enter the hostname(s) of your - device(s) and any other unique identifiers for the logstream. The function usually takes 10-15 minutes to activate after solution installation/update.
+ > This data connector depends on a parser based on a Kusto Function to work as expected which is deployed as part of the solution. To view the function code in Log Analytics, open Log Analytics/Microsoft Sentinel Logs blade, click Functions and search for the alias Exabeam Advanced Analytics and load the function code or click [here](https://github.com/Azure/Azure-Sentinel/blob/master/Solutions/Exabeam%20Advanced%20Analytics/Parsers/ExabeamEvent.txt), on the second line of the query, enter the hostname(s) of your Exabeam Advanced Analytics device(s) and any other unique identifiers for the logstream. The function usually takes 10-15 minutes to activate after solution installation/update.
> [!NOTE]
Configure the custom log directory to be collected
3. Configure Exabeam event forwarding to Syslog
-[Follow these instructions](https://docs.exabeam.com/en/advanced-analytics/i54/advanced-analytics-administration-guide/113254-configure-advanced-analytics.html#UUID-7ce5ff9d-56aa-93f0-65de-c5255b682a08) to send Exabeam Advanced Analytics activity log data via syslog.
+[Follow these instructions](https://docs.exabeam.com/en/advanced-analytics/i56/advanced-analytics-administration-guide/125351-advanced-analytics.html#UUID-7ce5ff9d-56aa-93f0-65de-c5255b682a08) to send Exabeam Advanced Analytics activity log data via syslog.
sentinel Fortinet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/fortinet.md
Install the Microsoft Monitoring Agent on your Linux machine and configure the m
> 2. You must have elevated permissions (sudo) on your machine. Run the following command to install and apply the CEF collector:
+ ```
sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py && sudo python cef_installer.py {0} {1}
+ ```
2. Forward Fortinet logs to the Syslog agent
Set your Fortinet to send Syslog messages in CEF format to the proxy machine. Make sure to send the logs to port 514 TCP on the machine's IP address.
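Once forwarding is configured, CEF events land in the CommonSecurityLog table. A sketch query to confirm Fortinet events are arriving, assuming the standard DeviceVendor value:

```kusto
// Sketch: confirm Fortinet CEF events are being collected.
CommonSecurityLog
| where TimeGenerated > ago(1h)
| where DeviceVendor =~ "Fortinet"
| summarize Events = count() by DeviceProduct, Activity
| order by Events desc
```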
If the logs are not received, run the following connectivity validation script:
>2. You must have elevated permissions (sudo) on your machine. Run the following command to validate your connectivity:
+ ```
sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py && sudo python cef_troubleshoot.py {0}
+ ```
4. Secure your machine
Make sure to configure the machine's security according to your organization's security policy.
sentinel Github Using Webhooks Using Azure Function https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/github-using-webhooks-using-azure-function.md
If you're already signed in, go to the next step.
1. In the Function App, select the Function App Name and select **Configuration**.
2. In the **Application settings** tab, select **+ New application setting**.
-3. Add each of the following application settings individually, with their respective string values (case-sensitive):
- WorkspaceID
- WorkspaceKey
- logAnalyticsUri (optional) - Use logAnalyticsUri to override the log analytics API endpoint for dedicated cloud. For example, for public cloud, leave the value empty; for Azure GovUS cloud environment, specify the value in the following format: `https://<CustomerId>.ods.opinsights.azure.us`.
-4. Once all application settings have been entered, click **Save**.
+3. Add each of the following application settings individually, with their respective string values (case-sensitive):
+
+ - WorkspaceID
+ - WorkspaceKey
+ - logAnalyticsUri (optional) - Use logAnalyticsUri to override the log analytics API endpoint for dedicated cloud. For example, for public cloud, leave the value empty; for Azure GovUS cloud environment, specify the value in the following format: `https://<CustomerId>.ods.opinsights.azure.us`.
+ - GithubWebhookSecret (optional) - To enable webhook authentication generate a secret value and store it in this setting.
+
+1. Once all application settings have been entered, click **Save**.
**Post Deployment steps**
If you're already signed in, go to the next step.
2. Click on Settings.
 3. Click on "Webhooks" and enter the function app URL that was copied from STEP 1 above in the payload URL textbox.
 4. Choose content type as "application/json".
- 5. Subscribe for events and Click on "Add Webhook"
+ 1. (Optional) To enable webhook authentication, add to the "Secret" field the value you saved in the GithubWebhookSecret application setting.
+ 1. Subscribe for events and click on "Add Webhook"
*Now we are done with the GitHub webhook configuration. Once GitHub events are triggered, and after a delay of 20 to 30 minutes (there is a delay while Log Analytics spins up resources for the first time), you should be able to see all the transactional events from GitHub in the Log Analytics workspace table called "githubscanaudit_CL".*
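A sketch query to confirm events are reaching that table once the delay has passed, using the table name given above:

```kusto
// Sketch: confirm GitHub webhook events are arriving in the custom table.
githubscanaudit_CL
| where TimeGenerated > ago(1d)
| sort by TimeGenerated desc
| take 10
```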
sentinel Holm Security Asset Data Using Azure Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/holm-security-asset-data-using-azure-functions.md
Title: "Holm Security Asset Data (using Azure Functions) connector for Microsoft
description: "Learn how to install the connector Holm Security Asset Data (using Azure Functions) to connect your data source to Microsoft Sentinel." Previously updated : 07/26/2023 Last updated : 08/28/2023
To integrate with Holm Security Asset Data (using Azure Functions) make sure you
**STEP 1 - Configuration steps for the Holm Security API**
- [Follow these instructions](https://www.holmsecurity.com/platform/api-scanning) to create an API authentication token.
+ [Follow these instructions](https://support.holmsecurity.com/hc/en-us/articles/360027651591-How-do-I-set-up-an-API-token-) to create an API authentication token.
**STEP 2 - Use the below deployment option to deploy the connector and the associated Azure Function**
sentinel Illumio Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/illumio-core.md
Title: "Illumio Core connector for Microsoft Sentinel"
description: "Learn how to install the connector Illumio Core to connect your data source to Microsoft Sentinel." Previously updated : 05/22/2023 Last updated : 08/28/2023 # Illumio Core connector for Microsoft Sentinel
-The [Illumio Core](https://www.illumio.com/products/core) data connector provides the capability to ingest Illumio Core logs into Microsoft Sentinel.
+The [Illumio Core](https://www.illumio.com/products/) data connector provides the capability to ingest Illumio Core logs into Microsoft Sentinel.
## Connector attributes
IllumioCoreEvent
## Vendor installation instructions
-**NOTE:** This data connector depends on a parser based on a Kusto Function to work as expected which is deployed as part of the solution. To view the function code in Log Analytics, open Log Analytics/Microsoft Sentinel Logs blade, click Functions and search for the alias IllumioCoreEvent and load the function code or click [here](https://github.com/Azure/Azure-Sentinel/blob/master/Solutions/Illumio%20Core/Parsers/IllumioCoreEvent.txt).The function usually takes 10-15 minutes to activate after solution installation/update and maps Illumio Core events to Azure Sentinel Information Model (ASIM).
+**NOTE:** This data connector depends on a parser based on a Kusto Function to work as expected which is deployed as part of the solution. To view the function code in Log Analytics, open Log Analytics/Microsoft Sentinel Logs blade, click Functions and search for the alias IllumioCoreEvent and load the function code or click [here](https://github.com/Azure/Azure-Sentinel/blob/master/Solutions/Illumio%20Core/Parsers/IllumioCoreEvent.txt). The function usually takes 10-15 minutes to activate after solution installation/update and maps Illumio Core events to Microsoft Sentinel Information Model (ASIM).
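Once the solution is installed and the function has activated, a minimal query verifies that the parser loads and returns data (a sketch; `IllumioCoreEvent` is the parser alias named in the note above, queried like a table):

```kusto
IllumioCoreEvent
| where TimeGenerated > ago(1d)
| take 10
```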
1. Linux Syslog agent configuration
sentinel Island Enterprise Browser Admin Audit Polling Ccp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/island-enterprise-browser-admin-audit-polling-ccp.md
Title: "Island Enterprise Browser Admin Audit (Polling CCP) connector for Micros
description: "Learn how to install the connector Island Enterprise Browser Admin Audit (Polling CCP) to connect your data source to Microsoft Sentinel." Previously updated : 05/22/2023 Last updated : 08/28/2023
To integrate with Island Enterprise Browser Admin Audit (Polling CCP) make sure
Connect Island to Microsoft Sentinel
-Provide the Island API Key.
+Provide the Island API URL and Key. The API URL is https://management.island.io/api/external/v1/adminActions for US or https://eu.management.island.io/api/external/v1/adminActions for EU.
+ Generate the API Key in the Management Console under Settings > API.
sentinel Island Enterprise Browser User Activity Polling Ccp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/island-enterprise-browser-user-activity-polling-ccp.md
Title: "Island Enterprise Browser User Activity (Polling CCP) connector for Micr
description: "Learn how to install the connector Island Enterprise Browser User Activity (Polling CCP) to connect your data source to Microsoft Sentinel." Previously updated : 05/22/2023 Last updated : 08/28/2023
To integrate with Island Enterprise Browser User Activity (Polling CCP) make sur
Connect Island to Microsoft Sentinel
-Provide the Island API Key.
+Provide the Island API URL and Key. The API URL is https://management.island.io/api/external/v1/timeline for US or https://eu.management.island.io/api/external/v1/timeline for EU.
+ Generate the API Key in the Management Console under Settings > API.
sentinel Mcafee Network Security Platform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/mcafee-network-security-platform.md
The [McAfee® Network Security Platform](https://www.mcafee.com/enterprise/en-us
## Query samples

**Top 10 Sources**
 ```kusto
McAfeeNSPEvent
 | summarize count() by tostring(DvcHostname)
 | top 10 by count_
 ```

## Vendor installation instructions
-> [!NOTE]
- > This data connector depends on a parser based on a Kusto Function to work as expected [**McAfeeNSPEvent**](https://aka.ms/sentinel-mcafeensp-parser) which is deployed with the Microsoft Sentinel Solution.
-
- > [!NOTE]
- > This data connector has been developed using McAfee® Network Security Platform version: 10.1.x
-
-1. Install and onboard the agent for Linux or Windows
-
-Install the agent on the Server where the McAfee® Network Security Platform logs are forwarded.
-
-> Logs from McAfee® Network Security Platform Server deployed on Linux or Windows servers are collected by **Linux** or **Windows** agents.
-
+> This data connector depends on a parser based on a Kusto Function to work as expected [**McAfeeNSPEvent**](https://aka.ms/sentinel-mcafeensp-parser) which is deployed with the Microsoft Sentinel Solution. This data connector has been developed using McAfee® Network Security Platform version: 10.1.x
+1. Install and onboard the agent for Linux or Windows.
+ Install the agent on the Server where the McAfee® Network Security Platform logs are forwarded.
-2. Configure McAfee® Network Security Platform event forwarding
+ Logs from McAfee® Network Security Platform Server deployed on Linux or Windows servers are collected by **Linux** or **Windows** agents.
-Follow the configuration steps below to get McAfee® Network Security Platform logs into Microsoft Sentinel.
-1. [Follow these instructions](https://docs.mcafee.com/bundle/network-security-platform-10.1.x-product-guide/page/GUID-E4A687B0-FAFB-4170-AC94-1D968A10380F.html) to forward alerts from the Manager to a syslog server.
-2. Add a syslog notification profile, [more details here](https://docs.mcafee.com/bundle/network-security-platform-10.1.x-product-guide/page/GUID-5BADD5D7-21AE-4E3B-AEE2-A079F3FD6A38.html). This is mandatory. While creating profile, to make sure that events are formatted correctly, enter the following text in the Message text box:
- <SyslogAlertForwarderNSP>:|SENSOR_ALERT_UUID|ALERT_TYPE|ATTACK_TIME|ATTACK_NAME|ATTACK_ID
- |ATTACK_SEVERITY|ATTACK_SIGNATURE|ATTACK_CONFIDENCE|ADMIN_DOMAIN|SENSOR_NAME|INTERFACE
- |SOURCE_IP|SOURCE_PORT|DESTINATION_IP|DESTINATION_PORT|CATEGORY|SUB_CATEGORY
- |DIRECTION|RESULT_STATUS|DETECTION_MECHANISM|APPLICATION_PROTOCOL|NETWORK_PROTOCOL|
+2. Configure McAfee® Network Security Platform event forwarding.
+ Follow the configuration steps below to get McAfee® Network Security Platform logs into Microsoft Sentinel.
+
+ 1. [Follow these instructions](https://docs.mcafee.com/bundle/network-security-platform-10.1.x-product-guide/page/GUID-E4A687B0-FAFB-4170-AC94-1D968A10380F.html) to forward alerts from the Manager to a syslog server.
+ 2. Add a syslog notification profile, [more details here](https://docs.mcafee.com/bundle/network-security-platform-10.1.x-product-guide/page/GUID-5BADD5D7-21AE-4E3B-AEE2-A079F3FD6A38.html). This is mandatory. While creating a profile, to make sure that events are formatted correctly, enter the following text in the Message text box:
+
+ ```text
+ <SyslogAlertForwarderNSP>:|SENSOR_ALERT_UUID|ALERT_TYPE|ATTACK_TIME|ATTACK_NAME|ATTACK_ID
+ |ATTACK_SEVERITY|ATTACK_SIGNATURE|ATTACK_CONFIDENCE|ADMIN_DOMAIN|SENSOR_NAME|INTERFACE
+ |SOURCE_IP|SOURCE_PORT|DESTINATION_IP|DESTINATION_PORT|CATEGORY|SUB_CATEGORY
+ |DIRECTION|RESULT_STATUS|DETECTION_MECHANISM|APPLICATION_PROTOCOL|NETWORK_PROTOCOL|
+ ```
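After forwarding is configured and the parser is active, a quick distribution check confirms events are being parsed per sensor (a sketch; `McAfeeNSPEvent` and `DvcHostname` come from the query sample earlier on this page):

```kusto
McAfeeNSPEvent
| where TimeGenerated > ago(1h)
| summarize count() by tostring(DvcHostname)
```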
## Next steps
sentinel Microsoft 365 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/microsoft-365.md
+
+ Title: "Microsoft 365 connector for Microsoft Sentinel"
+description: "Learn how to install the connector Microsoft 365 (formerly, Office 365) to connect your data source to Microsoft Sentinel."
+
+ Last updated : 08/28/2023
+
+# Microsoft 365 connector for Microsoft Sentinel
+
+The Microsoft 365 (formerly, Office 365) activity log connector provides insight into ongoing user activities. You will get details of operations such as file downloads, access requests sent, changes to group events, set-mailbox and details of the user who performed the actions. By connecting Microsoft 365 logs into Microsoft Sentinel you can use this data to view dashboards, create custom alerts, and improve your investigation process. For more information, see the [Microsoft Sentinel documentation](https://go.microsoft.com/fwlink/p/?linkid=2219943&wt.mc_id=sentinel_dataconnectordocs_content_cnl_csasci).
+
+## Connector attributes
+
+| Connector attribute | Description |
+| | |
+| **Log Analytics table(s)** | OfficeActivity (SharePoint)<br/> OfficeActivity (Exchange)<br/> OfficeActivity (Teams)<br/> |
+| **Data collection rules support** | Not currently supported |
+| **Supported by** | [Microsoft Corporation](https://support.microsoft.com/) |
+
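This page ships no query samples, but once the connector is enabled, activity lands in the `OfficeActivity` table listed above. A breakdown by workload and operation is a reasonable first look (a minimal sketch; `OfficeWorkload` and `Operation` are standard columns of that table):

```kusto
OfficeActivity
| summarize count() by OfficeWorkload, Operation
| top 10 by count_
```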
+## Next steps
+
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-office365?tab=Overview) in the Azure Marketplace.
sentinel Microsoft Defender For Office 365 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/microsoft-defender-for-office-365.md
Title: "Microsoft Defender for Office 365 connector for Microsoft Sentinel"
+ Title: "Microsoft Defender for Office 365 connector for Microsoft Sentinel (preview)"
description: "Learn how to install the connector Microsoft Defender for Office 365 to connect your data source to Microsoft Sentinel."
-# Microsoft Defender for Office 365 connector for Microsoft Sentinel
+# Microsoft Defender for Office 365 connector for Microsoft Sentinel (preview)
Microsoft Defender for Office 365 safeguards your organization against malicious threats posed by email messages, links (URLs) and collaboration tools. By ingesting Microsoft Defender for Office 365 alerts into Microsoft Sentinel, you can incorporate information about email- and URL-based threats into your broader risk analysis and build response scenarios accordingly.
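The ingested alerts land in the Microsoft Sentinel `SecurityAlert` table (the table name is not stated on this page; this is where Microsoft security connectors write alerts). A summary by product helps locate the Defender for Office 365 entries after connecting (a minimal sketch):

```kusto
SecurityAlert
| summarize count() by ProductName, ProviderName
```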
sentinel Microsoft Powerbi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/microsoft-powerbi.md
Title: "Microsoft PowerBI connector for Microsoft Sentinel"
+ Title: "Microsoft PowerBI connector for Microsoft Sentinel (preview)"
description: "Learn how to install the connector Microsoft PowerBI to connect your data source to Microsoft Sentinel."
-# Microsoft PowerBI connector for Microsoft Sentinel
+# Microsoft PowerBI connector for Microsoft Sentinel (preview)
Microsoft PowerBI is a collection of software services, apps, and connectors that work together to turn your unrelated sources of data into coherent, visually immersive, and interactive insights. Your data may be an Excel spreadsheet, a collection of cloud-based and on-premises hybrid data warehouses, or a data store of some other type. This connector lets you stream PowerBI audit logs into Microsoft Sentinel, allowing you to track user activities in your PowerBI environment. You can filter the audit data by date range, user, dashboard, report, dataset, and activity type.
sentinel Microsoft Project https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/microsoft-project.md
Title: "Microsoft Project connector for Microsoft Sentinel"
+ Title: "Microsoft Project connector for Microsoft Sentinel (preview)"
description: "Learn how to install the connector Microsoft Project to connect your data source to Microsoft Sentinel."
-# Microsoft Project connector for Microsoft Sentinel
+# Microsoft Project connector for Microsoft Sentinel (preview)
Microsoft Project (MSP) is a project management software solution. Depending on your plan, Microsoft Project lets you plan projects, assign tasks, manage resources, create reports and more. This connector allows you to stream your Azure Project audit logs into Microsoft Sentinel in order to track your project activities.
sentinel Misp2sentinel https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/misp2sentinel.md
+
+ Title: "MISP2Sentinel connector for Microsoft Sentinel"
+description: "Learn how to install the connector MISP2Sentinel to connect your data source to Microsoft Sentinel."
+
+ Last updated : 08/28/2023
+
+# MISP2Sentinel connector for Microsoft Sentinel
+
+This solution installs the MISP2Sentinel connector that allows you to automatically push threat indicators from MISP to Microsoft Sentinel via the Upload Indicators REST API. After installing the solution, configure and enable this data connector by following guidance in Manage solution view.
+
+## Connector attributes
+
+| Connector attribute | Description |
+| | |
+| **Log Analytics table(s)** | ThreatIntelligenceIndicator<br/> |
+| **Data collection rules support** | Not currently supported |
+| **Supported by** | [Community](https://github.com/cudeso/misp2sentinel) |
+
+## Query samples
+
+**All Threat Intelligence APIs Indicators**
+ ```kusto
+ThreatIntelligenceIndicator
+ | where SourceSystem == 'MISP'
+ | sort by TimeGenerated desc
+ ```
+
+## Vendor installation instructions
+
+Installation and setup instructions
+
+Use the documentation from this GitHub repository to install and configure the MISP to Microsoft Sentinel connector:
+
+https://github.com/cudeso/misp2sentinel
+
+## Next steps
+
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/microsoftsentinelcommunity.azure-sentinel-solution-misp2sentinel?tab=Overview) in the Azure Marketplace.
sentinel Nasuni Edge Appliance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/nasuni-edge-appliance.md
+
+ Title: "Nasuni Edge Appliance connector for Microsoft Sentinel"
+description: "Learn how to install the connector Nasuni Edge Appliance to connect your data source to Microsoft Sentinel."
+
+ Last updated : 08/28/2023
+
+# Nasuni Edge Appliance connector for Microsoft Sentinel
+
+The [Nasuni](https://www.nasuni.com/) connector allows you to easily connect your Nasuni Edge Appliance Notifications and file system audit logs with Microsoft Sentinel. This gives you more insight into activity within your Nasuni infrastructure and improves your security operation capabilities.
+
+## Connector attributes
+
+| Connector attribute | Description |
+| | |
+| **Log Analytics table(s)** | Syslog<br/> |
+| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
+| **Supported by** | [Nasuni](https://github.com/nasuni-labs/Azure-Sentinel) |
+
+## Query samples
+
+**Last 1000 generated events**
+ ```kusto
+Syslog
+
+ | top 1000 by TimeGenerated
+ ```
+
+**All events by facility except for cron**
+ ```kusto
+Syslog
+
+ | summarize count() by Facility
+ | where Facility != "cron"
+ ```
+
+## Vendor installation instructions
+
+1. Install and onboard the agent for Linux
+
+Typically, you should install the agent on a different computer from the one on which the logs are generated.
+
+> Syslog logs are collected only from **Linux** agents.
+
+2. Configure the logs to be collected
+
+Follow the configuration steps below to configure your Linux machine to send Nasuni event information to Microsoft Sentinel. Refer to the [Azure Monitor Agent documentation](/azure/azure-monitor/agents/agents-overview) for additional details on these steps.
+Configure the facilities you want to collect and their severities.
+1. Select the link below to open your workspace agents configuration, and select the Syslog tab.
+2. Select Add facility and choose from the drop-down list of facilities. Repeat for all the facilities you want to add.
+3. Mark the check boxes for the desired severities for each facility.
+4. Click Apply.
+
+3. Configure Nasuni Edge Appliance settings
+
+Follow the instructions in the [Nasuni Management Console Guide](https://view.highspot.com/viewer/629a633ae5b4caaf17018daa?iid=5e6fbfcbc7143309f69fcfcf) to configure Nasuni Edge Appliances to forward syslog events. Use the IP address or hostname of the Linux device running the Azure Monitor Agent in the Servers configuration field for the syslog settings.
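To confirm the appliance's events reach the workspace after this step, filter the `Syslog` table by the appliance's hostname (a sketch; `nasuni-edge-01` is a hypothetical hostname, substitute your own):

```kusto
Syslog
| where HostName == "nasuni-edge-01" // hypothetical appliance hostname
| top 20 by TimeGenerated
```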
+
+## Next steps
+
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/nasunicorporation.nasuni-sentinel?tab=Overview) in the Azure Marketplace.
sentinel Netskope Using Azure Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/netskope-using-azure-functions.md
Title: "Netskope (using Azure Functions) connector for Microsoft Sentinel"
description: "Learn how to install the connector Netskope (using Azure Functions) to connect your data source to Microsoft Sentinel." Previously updated : 07/26/2023 Last updated : 08/28/2023
Netskope
To integrate with Netskope (using Azure Functions) make sure you have:

- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App is required. [See the documentation to learn more about Azure Functions](/azure/azure-functions/).
-- **Netskope API Token**: A Netskope API Token is required.
- > [!NOTE]
- > A Netskope account is required
+- **Netskope API Token**: A Netskope API Token is required. [See the documentation to learn more about Netskope API](https://innovatechcloud.goskope.com/docs/Netskope_Help/en/rest-api-v1-overview.html). **Note:** A Netskope account is required
+ ## Vendor installation instructions
-> [!NOTE]
-> This connector uses Azure Functions to connect to Netskope to pull logs into Microsoft Sentinel. This might result in additional data ingestion costs. Check the [Azure Functions pricing page](https://azure.microsoft.com/pricing/details/functions/) for details.
> [!NOTE]
-> This data connector depends on a parser based on a Kusto Function to work as expected which is deployed as part of the solution. To view the function code in Log Analytics, open Log Analytics/Microsoft Sentinel Logs blade, click Functions and search for the alias Netskope and load the function code or click [here](https://github.com/Azure/Azure-Sentinel/blob/master/Solutions/Netskope/Parsers/Netskope.txt), on the second line of the query, enter the hostname(s) of your Netskope device(s) and any other unique identifiers for the logstream. The function usually takes 10-15 minutes to activate after solution installation/update.
+> - This connector uses Azure Functions to connect to Netskope to pull logs into Microsoft Sentinel. This might result in additional data ingestion costs. Check the [Azure Functions pricing page](https://azure.microsoft.com/pricing/details/functions/) for details.
+> - This data connector depends on a parser based on a Kusto Function to work as expected which is deployed as part of the solution. To view the function code in Log Analytics, open Log Analytics/Microsoft Sentinel Logs blade, click Functions and search for the alias Netskope and load the function code or click [here](https://github.com/Azure/Azure-Sentinel/blob/master/Solutions/Netskope/Parsers/Netskope.txt), on the second line of the query, enter the hostname(s) of your Netskope device(s) and any other unique identifiers for the logstream. The function usually takes 10-15 minutes to activate after solution installation/update.
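Once the parser has activated, an hourly ingestion count is a simple way to confirm data is flowing (a sketch; `Netskope` is the parser alias named in the note above):

```kusto
Netskope
| summarize count() by bin(TimeGenerated, 1h)
```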
**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App.
This method provides the step-by-step instructions to deploy the Netskope connec
3. In the **Hosting** tab, ensure the **Consumption (Serverless)** plan type is selected.
4. Make other preferable configuration changes, if needed, then click **Create**.

**2. Import Function App Code**

1. In the newly created Function App, select **Functions** on the left pane and click **+ Add**.
This method provides the step-by-step instructions to deploy the Netskope connec
4. Once all application settings have been entered, click **Save**.
5. After successfully deploying the connector, download the Kusto Function to normalize the data fields. [Follow the steps](https://aka.ms/sentinelgithubparsersnetskope) to use the Kusto function alias, **Netskope**.

## Next steps

For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/netskope.netskope_mss?tab=Overview) in the Azure Marketplace.
sentinel Nozomi Networks N2os https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/nozomi-networks-n2os.md
Title: "Nozomi Networks N2OS connector for Microsoft Sentinel"
description: "Learn how to install the connector Nozomi Networks N2OS to connect your data source to Microsoft Sentinel." Previously updated : 05/22/2023 Last updated : 08/28/2023 # Nozomi Networks N2OS connector for Microsoft Sentinel
-The [Nozomi Networks](https://www.nozominetworks.com/) data connector provides the capability to ingest Nozomi Networks Events into Microsoft Sentinel. Refer to the Nozomi Networks [PDF documentation](https://www.nozominetworks.com/resources/data-sheets/) for more information.
+The [Nozomi Networks](https://www.nozominetworks.com/) data connector provides the capability to ingest Nozomi Networks Events into Microsoft Sentinel. Refer to the Nozomi Networks [PDF documentation](https://www.nozominetworks.com/resources/data-sheets-brochures-learning-guides/) for more information.
## Connector attributes
sentinel Nxlog Dns Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/nxlog-dns-logs.md
Title: "NXLog DNS Logs connector for Microsoft Sentinel"
description: "Learn how to install the connector NXLog DNS Logs to connect your data source to Microsoft Sentinel." Previously updated : 07/26/2023 Last updated : 08/28/2023
The NXLog DNS Logs data connector uses Event Tracing for Windows ([ETW](/windows
| | |
| **Log Analytics table(s)** | NXLog_DNS_Server_CL<br/> |
| **Data collection rules support** | Not currently supported |
-| **Supported by** | [NXLog](https://nxlog.co/) |
+| **Supported by** | [NXLog](https://nxlog.co/user?destination=node/add/support-ticket) |
## Query samples
sentinel Office 365 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/office-365.md
- Title: "Office 365 connector for Microsoft Sentinel"
-description: "Learn how to install the connector Office 365 to connect your data source to Microsoft Sentinel."
-
- Previously updated : 02/23/2023
-
-# Office 365 connector for Microsoft Sentinel
-
-The Office 365 activity log connector provides insight into ongoing user activities. You will get details of operations such as file downloads, access requests sent, changes to group events, set-mailbox and details of the user who performed the actions. By connecting Office 365 logs into Microsoft Sentinel you can use this data to view dashboards, create custom alerts, and improve your investigation process. For more information, see the [Microsoft Sentinel documentation](https://go.microsoft.com/fwlink/p/?linkid=2219943&wt.mc_id=sentinel_dataconnectordocs_content_cnl_csasci).
-
-## Connector attributes
-
-| Connector attribute | Description |
-| | |
-| **Log Analytics table(s)** | OfficeActivity (SharePoint)<br/> OfficeActivity (Exchange)<br/> OfficeActivity (Teams)<br/> |
-| **Data collection rules support** | Not currently supported |
-| **Supported by** | [Microsoft Corporation](https://support.microsoft.com/) |
-
-## Next steps
-
-For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-office365?tab=Overview) in the Azure Marketplace.
sentinel Okta Single Sign On Using Azure Function https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/okta-single-sign-on-using-azure-function.md
The [Okta Single Sign-On (SSO)](https://www.okta.com/products/single-sign-on/) c
## Query samples

**Top 10 Active Applications**
 ```kusto
Okta_CL
 | mv-expand todynamic(target_s)
 | where target_s.type == "AppInstance"
 | summarize count() by tostring(target_s.alternateId)
 | top 10 by count_
 ```

**Top 10 Client IP Addresses**
 ```kusto
Okta_CL
 | summarize count() by client_ipAddress_s
 | top 10 by count_
 ```

## Prerequisites
-To integrate with Okta Single Sign-On (using Azure Function) make sure you have:
+To integrate with Okta Single Sign-On (using Azure Function), make sure you have the following prerequisites:
- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App is required. [See the documentation to learn more about Azure Functions](/azure/azure-functions).
- **Okta API Token**: An Okta API Token is required. See the documentation to learn more about the [Okta System Log API](https://developer.okta.com/docs/reference/api/system-log/).

## Vendor installation instructions

> [!NOTE]
- > This connector uses Azure Functions to connect to Okta SSO to pull its logs into Microsoft Sentinel. This might result in additional data ingestion costs. Check the [Azure Functions pricing page](https://azure.microsoft.com/pricing/details/functions/) for details.
+> This connector uses Azure Functions to connect to Okta SSO to pull its logs into Microsoft Sentinel. This might result in additional data ingestion costs. Check the [Azure Functions pricing page](https://azure.microsoft.com/pricing/details/functions/) for details.
> [!NOTE]
- > This connector has been updated, if you have previously deployed an earlier version, and want to update, please delete the existing Okta Azure Function before redeploying this version.
+> This connector has been updated. If you have previously deployed an earlier version, and want to update, please delete the existing Okta Azure Function before redeploying this version.
>**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App.
To integrate with Okta Single Sign-On (using Azure Function) make sure you have:
[Follow these instructions](https://developer.okta.com/docs/guides/create-an-api-token/create-the-token/) to create an API Token. -
-**Note** - For more information on the rate limit restrictions enforced by Okta, please refer to the **[documentation](https://developer.okta.com/docs/reference/rl-global-mgmt/)**.
-
+ > [!NOTE]
+ > For more information on the rate limit restrictions enforced by Okta, see **[OKTA developer reference documentation](https://developer.okta.com/docs/reference/rl-global-mgmt/)**.
**STEP 2 - Choose ONE from the following two deployment options to deploy the connector and the associated Azure Function**
->**IMPORTANT:** Before deploying the Okta SSO connector, have the Workspace ID and Workspace Primary Key (can be copied from the following), as well as the Okta SSO API Authorization Token, readily available.
-
+> [!IMPORTANT]
+> Before deploying the Okta SSO connector, have the Workspace ID and Workspace Primary Key (can be copied from the following), as well as the Okta SSO API Authorization Token, readily available.
+### Option 1 - Azure Resource Manager (ARM) Template
-Option 1 - Azure Resource Manager (ARM) Template
+This method provides an automated deployment of the Okta SSO connector using an ARM Template.
-This method provides an automated deployment of the Okta SSO connector using an ARM Tempate.
+1. Select the following **Deploy to Azure** button.
-1. Click the **Deploy to Azure** button below.
+ [![Deploy To Azure](https://aka.ms/deploytoazurebutton)](https://aka.ms/sentineloktaazuredeployv2-solution)
- [![Deploy To Azure](https://aka.ms/deploytoazurebutton)](https://aka.ms/sentineloktaazuredeployv2-solution)
2. Select the preferred **Subscription**, **Resource Group** and **Location**.
3. Enter the **Workspace ID**, **Workspace Key**, **API Token** and **URI**.
-4. Mark the checkbox labeled **I agree to the terms and conditions stated above**.
-5. Click **Purchase** to deploy.
-Option 2 - Manual Deployment of Azure Functions
+ Use the following schema for the `uri` value: `https://<OktaDomain>/api/v1/logs?since=`. Replace `<OktaDomain>` with your domain. [Click here](https://developer.okta.com/docs/reference/api-overview/#url-namespace) for further details on how to identify your Okta domain namespace. There's no need to add a time value to the URI; the Function App dynamically appends the initial start time of logs (UTC 0:00 for the current UTC date) to the URI in the proper format.
-Use the following step-by-step instructions to deploy the Okta SSO connector manually with Azure Functions.
+ > [!NOTE]
+ > If using Azure Key Vault secrets for any of the preceding values, use the `@Microsoft.KeyVault(SecretUri={Security Identifier})` schema in place of the string values. Refer to [Key Vault references documentation](/azure/app-service/app-service-key-vault-references) for further details.
+
+4. Mark the checkbox labeled **I agree to the terms and conditions stated above**.
+
+5. Select **Purchase** to deploy.
+
+### Option 2 - Manual Deployment of Azure Functions
+Use the following step-by-step instructions to deploy the Okta SSO connector manually with Azure Functions.
**1. Create a Function App**
-1. From the Azure Portal, navigate to [Function App](https://portal.azure.com/#blade/HubsExtension/BrowseResource/resourceType/Microsoft.Web%2Fsites/kind/functionapp), and select **+ Add**.
+1. From the Azure portal, navigate to [Function App](https://portal.azure.com/#blade/HubsExtension/BrowseResource/resourceType/Microsoft.Web%2Fsites/kind/functionapp), and select **+ Add**.
2. In the **Basics** tab, ensure Runtime stack is set to **Powershell Core**.
3. In the **Hosting** tab, ensure the **Consumption (Serverless)** plan type is selected.
-4. Make other preferrable configuration changes, if needed, then click **Create**.
-
+4. Make other preferable configuration changes, if needed, then click **Create**.
**2. Import Function App Code**
Use the following step-by-step instructions to deploy the Okta SSO connector man
**3. Configure the Function App**

1. In the Function App, select the Function App Name and select **Configuration**.
-2. In the **Application settings** tab, select **+ New application setting**.
-3. Add each of the following five (5) application settings individually, with their respective string values (case-sensitive):
- apiToken
- workspaceID
- workspaceKey
- uri
- logAnalyticsUri (optional)
-4. Once all application settings have been entered, click **Save**.
+2. In the **Application settings** tab, select **+ New application setting**.
+3. Add each of the following five (5) application settings individually, with their respective string values (case-sensitive):
+ - apiToken
+ - workspaceID
+ - workspaceKey
+ - uri
+ - logAnalyticsUri (optional)
+
+ Use the following schema for the `uri` value: `https://<OktaDomain>/api/v1/logs?since=`. Replace `<OktaDomain>` with your domain. For more information on how to identify your Okta domain namespace, see the [Okta Developer reference](https://developer.okta.com/docs/reference/api-overview/#url-namespace). There's no need to add a time value to the URI. The Function App dynamically appends the initial start time of logs to UTC 0:00 (for the current UTC date) as a time value to the URI in the proper format.
+
+ > [!NOTE]
+ > If using Azure Key Vault secrets for any of the preceding values, use the `@Microsoft.KeyVault(SecretUri={Security Identifier})` schema in place of the string values. Refer to [Key Vault references documentation](/azure/app-service/app-service-key-vault-references) for further details.
+
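For example, with a hypothetical Okta domain `contoso.okta.com`, the `uri` value would be `https://contoso.okta.com/api/v1/logs?since=`; likewise, a Key Vault-backed `apiToken` setting might read `@Microsoft.KeyVault(SecretUri=https://contoso-kv.vault.azure.net/secrets/OktaApiToken/)` (the vault and secret names here are placeholders).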
+ Use logAnalyticsUri to override the log analytics API endpoint for a dedicated cloud. For example, for the public cloud, leave the value empty; for the Azure GovUS cloud environment, specify the value in the following format: `https://<CustomerId>.ods.opinsights.azure.us`.
+
+4. Once all application settings have been entered, click **Save**.
## Next steps
sentinel Palo Alto Prisma Cloud Cspm Using Azure Function https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/palo-alto-prisma-cloud-cspm-using-azure-function.md
- Title: "Palo Alto Prisma Cloud CSPM (using Azure Function) connector for Microsoft Sentinel"
-description: "Learn how to install the connector Palo Alto Prisma Cloud CSPM (using Azure Function) to connect your data source to Microsoft Sentinel."
-
- Previously updated : 02/23/2023
-
-# Palo Alto Prisma Cloud CSPM (using Azure Function) connector for Microsoft Sentinel
-
-The Palo Alto Prisma Cloud CSPM data connector provides the capability to ingest [Prisma Cloud CSPM alerts](https://prisma.pan.dev/api/cloud/cspm/alerts#operation/get-alerts) and [audit logs](https://prisma.pan.dev/api/cloud/cspm/audit-logs#operation/rl-audit-logs) into Microsoft sentinel using the Prisma Cloud CSPM API. Refer to [Prisma Cloud CSPM API documentation](https://prisma.pan.dev/api/cloud/cspm) for more information.
-
-## Connector attributes
-
-| Connector attribute | Description |
-| | |
-| **Azure function app code** | https://aka.ms/sentinel-PaloAltoPrismaCloud-functionapp |
-| **Log Analytics table(s)** | PaloAltoPrismaCloudAlert_CL<br/> PaloAltoPrismaCloudAudit_CL<br/> |
-| **Data collection rules support** | Not currently supported |
-| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) |
-
-## Query samples
-
-**All Prisma Cloud alerts**
- ```kusto
-PaloAltoPrismaCloudAlert_CL
-
- | sort by TimeGenerated desc
- ```
-
-**All Prisma Cloud audit logs**
- ```kusto
-PaloAltoPrismaCloudAudit_CL
-
- | sort by TimeGenerated desc
- ```
-
-## Prerequisites
-
-To integrate with Palo Alto Prisma Cloud CSPM (using Azure Function) make sure you have:
-- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App is required. [See the documentation to learn more about Azure Functions](/azure/azure-functions).
-- **Palo Alto Prisma Cloud API Credentials**: **Prisma Cloud API Url**, **Prisma Cloud Access Key ID**, **Prisma Cloud Secret Key** are required for Prisma Cloud API connection. See the documentation to learn more about [creating Prisma Cloud Access Key](https://docs.paloaltonetworks.com/prisma/prisma-cloud/prisma-cloud-admin/manage-prisma-cloud-administrators/create-access-keys.html) and about [obtaining Prisma Cloud API Url](https://prisma.pan.dev/api/cloud/api-urls)
-
-## Vendor installation instructions
-
-> [!NOTE]
- > This connector uses Azure Functions to connect to the Palo Alto Prisma Cloud REST API to pull logs into Microsoft sentinel. This might result in additional data ingestion costs. Check the [Azure Functions pricing page](https://azure.microsoft.com/pricing/details/functions/) for details.
-
->**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App.
-
-> [!NOTE]
- > This data connector depends on a parser based on a Kusto Function to work as expected [**PaloAltoPrismaCloud**](https://aka.ms/sentinel-PaloAltoPrismaCloud-parser) which is deployed with the Microsoft sentinel Solution.
-
-**STEP 1 - Configuration of the Prisma Cloud**
-
-Follow the documentation to [create Prisma Cloud Access Key](https://docs.paloaltonetworks.com/prisma/prisma-cloud/prisma-cloud-admin/manage-prisma-cloud-administrators/create-access-keys.html) and [obtain Prisma Cloud API Url](https://api.docs.prismacloud.io/reference)
-
- NOTE: Please use SYSTEM ADMIN role for giving access to Prisma Cloud API because only SYSTEM ADMIN role is allowed to View Prisma Cloud Audit Logs. Refer to [Prisma Cloud Administrator Permissions (paloaltonetworks.com)](https://docs.paloaltonetworks.com/prisma/prisma-cloud/prisma-cloud-admin/manage-prisma-cloud-administrators/prisma-cloud-admin-permissions) for more details of administrator permissions.
-
-**STEP 2 - Choose ONE from the following two deployment options to deploy the connector and the associated Azure Function**
-
->**IMPORTANT:** Before deploying the Prisma Cloud data connector, have the Workspace ID and Workspace Primary Key (can be copied from the following), as well as Prisma Cloud API credentials, readily available.
-
-Option 1 - Azure Resource Manager (ARM) Template
-
-Use this method for automated deployment of the Prisma Cloud data connector using an ARM Template.
-
-1. Click the **Deploy to Azure** button below.
-
- [![Deploy To Azure](https://aka.ms/deploytoazurebutton)](https://aka.ms/sentinel-PaloAltoPrismaCloud-azuredeploy)
-2. Select the preferred **Subscription**, **Resource Group** and **Location**.
-3. Enter the **Prisma Cloud API Url**, **Prisma Cloud Access Key ID**, **Prisma Cloud Secret Key**, **Microsoft sentinel Workspace Id**, **Microsoft sentinel Shared Key**
-4. Mark the checkbox labeled **I agree to the terms and conditions stated above**.
-5. Click **Purchase** to deploy.
-
-Option 2 - Manual Deployment of Azure Functions
-
-Use the following step-by-step instructions to deploy the Prisma Cloud data connector manually with Azure Functions (Deployment via Visual Studio Code).
-
-**1. Deploy a Function App**
-
-> **NOTE:** You will need to [prepare VS code](/azure/azure-functions/create-first-function-vs-code-python) for Azure function development.
-
-1. Download the [Azure Function App](https://aka.ms/sentinel-PaloAltoPrismaCloud-functionapp) file. Extract archive to your local development computer.
-2. Start VS Code. Choose File in the main menu and select Open Folder.
-3. Select the top level folder from extracted files.
-4. Choose the Azure icon in the Activity bar, then in the **Azure: Functions** area, choose the **Deploy to function app** button.
-If you aren't already signed in, choose the Azure icon in the Activity bar, then in the **Azure: Functions** area, choose **Sign in to Azure**
-If you're already signed in, go to the next step.
-5. Provide the following information at the prompts:
-
- a. **Select folder:** Choose a folder from your workspace or browse to one that contains your function app.
-
- b. **Select Subscription:** Choose the subscription to use.
-
- c. Select **Create new Function App in Azure** (Don't choose the Advanced option)
-
- d. **Enter a globally unique name for the function app:** Type a name that is valid in a URL path. The name you type is validated to make sure that it's unique in Azure Functions. (e.g. PrismaCloud).
-
- e. **Select a runtime:** Choose Python 3.8.
-
- f. Select a location for new resources. For better performance and lower costs choose the same [region](https://azure.microsoft.com/regions/) where Microsoft sentinel is located.
-
-6. Deployment will begin. A notification is displayed after your function app is created and the deployment package is applied.
-7. Go to Azure Portal for the Function App configuration.
-
-**2. Configure the Function App**
-
-1. In the Function App, select the Function App Name and select **Configuration**.
-2. In the **Application settings** tab, select **+ New application setting**.
-3. Add each of the following application settings individually, with their respective string values (case-sensitive):
- PrismaCloudAPIUrl
- PrismaCloudAccessKeyID
- PrismaCloudSecretKey
- AzureSentinelWorkspaceId
- AzureSentinelSharedKey
- logAnalyticsUri (Optional)
-4. Once all application settings have been entered, click **Save**.
-
-## Next steps
-
-For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-paloaltoprisma?tab=Overview) in the Azure Marketplace.
sentinel Palo Alto Prisma Cloud Cspm Using Azure Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/palo-alto-prisma-cloud-cspm-using-azure-functions.md
+
+ Title: "Palo Alto Prisma Cloud CSPM (using Azure Functions) connector for Microsoft Sentinel"
+description: "Learn how to install the connector Palo Alto Prisma Cloud CSPM (using Azure Functions) to connect your data source to Microsoft Sentinel."
+
+ Last updated : 08/28/2023
+
+# Palo Alto Prisma Cloud CSPM (using Azure Functions) connector for Microsoft Sentinel
+
+The Palo Alto Prisma Cloud CSPM data connector provides the capability to ingest [Prisma Cloud CSPM alerts](https://prisma.pan.dev/api/cloud/cspm/alerts#operation/get-alerts) and [audit logs](https://prisma.pan.dev/api/cloud/cspm/audit-logs#operation/rl-audit-logs) into Microsoft sentinel using the Prisma Cloud CSPM API. Refer to [Prisma Cloud CSPM API documentation](https://prisma.pan.dev/api/cloud/cspm) for more information.
+
+## Connector attributes
+
+| Connector attribute | Description |
+| | |
+| **Log Analytics table(s)** | PaloAltoPrismaCloudAlert_CL<br/> PaloAltoPrismaCloudAudit_CL<br/> |
+| **Data collection rules support** | Not currently supported |
+| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) |
+
+## Query samples
+
+**All Prisma Cloud alerts**
+ ```kusto
+PaloAltoPrismaCloudAlert_CL
+
+ | sort by TimeGenerated desc
+ ```
+
+**All Prisma Cloud audit logs**
+ ```kusto
+PaloAltoPrismaCloudAudit_CL
+
+ | sort by TimeGenerated desc
+ ```
+
+## Prerequisites
+
+To integrate with Palo Alto Prisma Cloud CSPM (using Azure Functions) make sure you have:
+
+- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App is required. [See the documentation to learn more about Azure Functions](/azure/azure-functions/).
+- **Palo Alto Prisma Cloud API Credentials**: **Prisma Cloud API Url**, **Prisma Cloud Access Key ID**, **Prisma Cloud Secret Key** are required for Prisma Cloud API connection. See the documentation to learn more about [creating Prisma Cloud Access Key](https://docs.paloaltonetworks.com/prisma/prisma-cloud/prisma-cloud-admin/manage-prisma-cloud-administrators/create-access-keys.html) and about [obtaining Prisma Cloud API Url](https://prisma.pan.dev/api/cloud/api-urls)
+
+## Vendor installation instructions
+
+> [!NOTE]
+ > This connector uses Azure Functions to connect to the Palo Alto Prisma Cloud REST API to pull logs into Microsoft sentinel. This might result in additional data ingestion costs. Check the [Azure Functions pricing page](https://azure.microsoft.com/pricing/details/functions/) for details.
+
+>**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App.
+
+> [!NOTE]
+ > This data connector depends on a parser based on a Kusto Function to work as expected [**PaloAltoPrismaCloud**](https://aka.ms/sentinel-PaloAltoPrismaCloud-parser) which is deployed with the Microsoft sentinel Solution.
+
+**STEP 1 - Configuration of the Prisma Cloud**
+
+Follow the documentation to [create Prisma Cloud Access Key](https://docs.paloaltonetworks.com/prisma/prisma-cloud/prisma-cloud-admin/manage-prisma-cloud-administrators/create-access-keys.html) and [obtain Prisma Cloud API Url](https://api.docs.prismacloud.io/reference)
+
+ NOTE: Please use SYSTEM ADMIN role for giving access to Prisma Cloud API because only SYSTEM ADMIN role is allowed to View Prisma Cloud Audit Logs. Refer to [Prisma Cloud Administrator Permissions (paloaltonetworks.com)](https://docs.paloaltonetworks.com/prisma/prisma-cloud/prisma-cloud-admin/manage-prisma-cloud-administrators/prisma-cloud-admin-permissions) for more details of administrator permissions.
+
+**STEP 2 - Choose ONE from the following two deployment options to deploy the connector and the associated Azure Function**
+
+>**IMPORTANT:** Before deploying the Prisma Cloud data connector, have the Workspace ID and Workspace Primary Key (can be copied from the following), as well as Prisma Cloud API credentials, readily available.
+
+## Next steps
+
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-paloaltoprisma?tab=Overview) in the Azure Marketplace.
sentinel Proofpoint Tap Using Azure Function https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/proofpoint-tap-using-azure-function.md
- Title: "Proofpoint TAP (using Azure Function) connector for Microsoft Sentinel"
-description: "Learn how to install the connector Proofpoint TAP (using Azure Function) to connect your data source to Microsoft Sentinel."
-
- Previously updated : 02/28/2023
-
-# Proofpoint TAP (using Azure Function) connector for Microsoft Sentinel
-
-The [Proofpoint Targeted Attack Protection (TAP)](https://www.proofpoint.com/us/products/advanced-threat-protection/targeted-attack-protection) connector provides the capability to ingest Proofpoint TAP logs and events into Microsoft Sentinel. The connector provides visibility into Message and Click events in Microsoft Sentinel to view dashboards, create custom alerts, and to improve monitoring and investigation capabilities.
-
-## Connector attributes
-
-| Connector attribute | Description |
-| | |
-| **Application settings** | apiUsername<br/>apipassword<br/>workspaceID<br/>workspaceKey<br/>uri<br/>logAnalyticsUri (optional) |
-| **Azure function app code** | https://aka.ms/sentinelproofpointtapazurefunctioncode |
-| **Log Analytics table(s)** | ProofPointTAPClicksPermitted_CL<br/> ProofPointTAPClicksBlocked_CL<br/> ProofPointTAPMessagesDelivered_CL<br/> ProofPointTAPMessagesBlocked_CL<br/> |
-| **Data collection rules support** | Not currently supported |
-| **Supported by** | [Microsoft Corporation](https://support.microsoft.com/) |
-
-## Query samples
-
-**Malware click events permitted**
- ```kusto
-ProofPointTAPClicksPermitted_CL
-
- | where classification_s == "malware"
-
- | take 10
- ```
-
-**Phishing click events blocked**
- ```kusto
-ProofPointTAPClicksBlocked_CL
-
- | where classification_s == "phish"
-
- | take 10
- ```
-
-**Malware messages events delivered**
- ```kusto
-ProofPointTAPMessagesDelivered_CL
-
- | mv-expand todynamic(threatsInfoMap_s)
-
- | extend classification = tostring(threatsInfoMap_s.classification)
-
- | where classification == "malware"
-
- | take 10
- ```
-
-**Phishing message events blocked**
- ```kusto
-ProofPointTAPMessagesBlocked_CL
-
- | mv-expand todynamic(threatsInfoMap_s)
-
- | extend classification = tostring(threatsInfoMap_s.classification)
-
- | where classification == "phish"
- ```
-
-## Prerequisites
-
-To integrate with Proofpoint TAP (using Azure Function) make sure you have:
-- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App is required. [See the documentation to learn more about Azure Functions](/azure/azure-functions).
-- **Proofpoint TAP API Key**: A Proofpoint TAP API username and password is required. [See the documentation to learn more about Proofpoint SIEM API](https://help.proofpoint.com/Threat_Insight_Dashboard/API_Documentation/SIEM_API).
-
-## Vendor installation instructions
-
-> [!NOTE]
- > This connector uses Azure Functions to connect to Proofpoint TAP to pull its logs into Microsoft Sentinel. This might result in additional data ingestion costs. Check the [Azure Functions pricing page](https://azure.microsoft.com/pricing/details/functions/) for details.
-
->**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App.
-
-**STEP 1 - Configuration steps for the Proofpoint TAP API**
-
-1. Log into the Proofpoint TAP console
-2. Navigate to **Connect Applications** and select **Service Principal**
-3. Create a **Service Principal** (API Authorization Key)
-
-**STEP 2 - Choose ONE from the following two deployment options to deploy the connector and the associated Azure Function**
-
->**IMPORTANT:** Before deploying the Proofpoint TAP connector, have the Workspace ID and Workspace Primary Key (can be copied from the following), as well as the Proofpoint TAP API Authorization Key(s), readily available.
-
-Option 1 - Azure Resource Manager (ARM) Template
-
-Use this method for automated deployment of the Proofpoint TAP connector.
-
-1. Click the **Deploy to Azure** button below.
-
- [![Deploy To Azure](https://aka.ms/deploytoazurebutton)](https://aka.ms/sentinelproofpointtapazuredeploy)
-2. Select the preferred **Subscription**, **Resource Group** and **Location**.
-3. Enter the **Workspace ID**, **Workspace Key**, **API Username**, **API Password**, and validate the **Uri**.
-> - The default URI is pulling data for the last 300 seconds (5 minutes) to correspond with the default Function App Timer trigger of 5 minutes. If the time interval needs to be modified, it is recommended to change the Function App Timer Trigger accordingly (in the function.json file, post deployment) to prevent overlapping data ingestion.
-> - Note: If using Azure Key Vault secrets for any of the values above, use the `@Microsoft.KeyVault(SecretUri={Security Identifier})` schema in place of the string values. Refer to [Key Vault references documentation](/azure/app-service/app-service-key-vault-references) for further details.
-4. Mark the checkbox labeled **I agree to the terms and conditions stated above**.
-5. Click **Purchase** to deploy.
-
-Option 2 - Manual Deployment of Azure Functions
-
-This method provides the step-by-step instructions to deploy the Proofpoint TAP connector manually with Azure Function.
-
-**1. Create a Function App**
-
-1. From the Azure Portal, navigate to [Function App](https://portal.azure.com/#blade/HubsExtension/BrowseResource/resourceType/Microsoft.Web%2Fsites/kind/functionapp), and select **+ Add**.
-2. In the **Basics** tab, ensure Runtime stack is set to **Powershell Core**.
-3. In the **Hosting** tab, ensure the **Consumption (Serverless)** plan type is selected.
-4. Make other preferable configuration changes, if needed, then click **Create**.
-
-**2. Import Function App Code**
-
-1. In the newly created Function App, select **Functions** on the left pane and click **+ Add**.
-2. Select **Timer Trigger**.
-3. Enter a unique Function **Name** and modify the cron schedule, if needed. The default value is set to run the Function App every 5 minutes. (Note: the Timer trigger should match the `timeInterval` value below to prevent overlapping data), click **Create**.
-4. Click on **Code + Test** on the left pane.
-5. Copy the [Function App Code](https://aka.ms/sentinelproofpointtapazurefunctioncode) and paste into the Function App `run.ps1` editor.
-5. Click **Save**.
-
-**3. Configure the Function App**
-
-1. In the Function App, select the Function App Name and select **Configuration**.
-2. In the **Application settings** tab, select **+ New application setting**.
-3. Add each of the following six (6) application settings individually, with their respective string values (case-sensitive):
- apiUsername
- apipassword
- workspaceID
- workspaceKey
- uri
- logAnalyticsUri (optional)
-> - Set the `uri` value to: `https://tap-api-v2.proofpoint.com/v2/siem/all?format=json&sinceSeconds=300`
-> - The default URI is pulling data for the last 300 seconds (5 minutes) to correspond with the default Function App Timer trigger of 5 minutes. If the time interval needs to be modified, it is recommended to change the Function App Timer Trigger accordingly to prevent overlapping data ingestion.
-> - Note: If using Azure Key Vault secrets for any of the values above, use the `@Microsoft.KeyVault(SecretUri={Security Identifier})` schema in place of the string values. Refer to [Key Vault references documentation](/azure/app-service/app-service-key-vault-references) for further details.
-> - Use logAnalyticsUri to override the log analytics API endpoint for dedicated cloud. For example, for public cloud, leave the value empty; for Azure GovUS cloud environment, specify the value in the following format: `https://<CustomerId>.ods.opinsights.azure.us`
-4. Once all application settings have been entered, click **Save**.
-
-## Next steps
-
-For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-proofpoint?tab=Overview) in the Azure Marketplace.
sentinel Proofpoint Tap Using Azure Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/proofpoint-tap-using-azure-functions.md
+
+ Title: "Proofpoint TAP (using Azure Functions) connector for Microsoft Sentinel"
+description: "Learn how to install the connector Proofpoint TAP (using Azure Functions) to connect your data source to Microsoft Sentinel."
+
+ Last updated : 08/28/2023
+
+# Proofpoint TAP (using Azure Functions) connector for Microsoft Sentinel
+
+The [Proofpoint Targeted Attack Protection (TAP)](https://www.proofpoint.com/us/products/advanced-threat-protection/targeted-attack-protection) connector provides the capability to ingest Proofpoint TAP logs and events into Microsoft Sentinel. The connector provides visibility into Message and Click events in Microsoft Sentinel to view dashboards, create custom alerts, and to improve monitoring and investigation capabilities.
+
+## Connector attributes
+
+| Connector attribute | Description |
+| | |
+| **Application settings** | apiUsername<br/>apipassword<br/>workspaceID<br/>workspaceKey<br/>uri<br/>logAnalyticsUri (optional) |
+| **Azure function app code** | https://aka.ms/sentinelproofpointtapazurefunctioncode |
+| **Log Analytics table(s)** | ProofPointTAPClicksPermitted_CL<br/> ProofPointTAPClicksBlocked_CL<br/> ProofPointTAPMessagesDelivered_CL<br/> ProofPointTAPMessagesBlocked_CL<br/> |
+| **Data collection rules support** | Not currently supported |
+| **Supported by** | [Microsoft Corporation](https://support.microsoft.com/) |
+
+## Query samples
+
+**Malware click events permitted**
+ ```kusto
+ProofPointTAPClicksPermitted_CL
+
+ | where classification_s == "malware"
+
+ | take 10
+ ```
+
+**Phishing click events blocked**
+ ```kusto
+ProofPointTAPClicksBlocked_CL
+
+ | where classification_s == "phish"
+
+ | take 10
+ ```
+
+**Malware messages events delivered**
+ ```kusto
+ProofPointTAPMessagesDelivered_CL
+
+ | mv-expand todynamic(threatsInfoMap_s)
+
+ | extend classification = tostring(threatsInfoMap_s.classification)
+
+ | where classification == "malware"
+
+ | take 10
+ ```
+
+**Phishing message events blocked**
+ ```kusto
+ProofPointTAPMessagesBlocked_CL
+
+ | mv-expand todynamic(threatsInfoMap_s)
+
+ | extend classification = tostring(threatsInfoMap_s.classification)
+
+ | where classification == "phish"
+ ```
+
+## Prerequisites
+
+To integrate with Proofpoint TAP (using Azure Functions) make sure you have:
+
+- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App is required. [See the documentation to learn more about Azure Functions](/azure/azure-functions/).
+- **Proofpoint TAP API Key**: A Proofpoint TAP API username and password is required. [See the documentation to learn more about Proofpoint SIEM API](https://help.proofpoint.com/Threat_Insight_Dashboard/API_Documentation/SIEM_API).
+
+## Vendor installation instructions
+
+> [!NOTE]
+ > This connector uses Azure Functions to connect to Proofpoint TAP to pull its logs into Microsoft Sentinel. This might result in additional data ingestion costs. Check the [Azure Functions pricing page](https://azure.microsoft.com/pricing/details/functions/) for details.
+
+>**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App.
+
+**STEP 1 - Configuration steps for the Proofpoint TAP API**
+
+1. Log into the Proofpoint TAP console
+2. Navigate to **Connect Applications** and select **Service Principal**
+3. Create a **Service Principal** (API Authorization Key)
+
+**STEP 2 - Choose ONE from the following two deployment options to deploy the connector and the associated Azure Function**
+
+>**IMPORTANT:** Before deploying the Proofpoint TAP connector, have the Workspace ID and Workspace Primary Key (can be copied from the following), as well as the Proofpoint TAP API Authorization Key(s), readily available.
+
+Option 1 - Azure Resource Manager (ARM) Template
+
+Use this method for automated deployment of the Proofpoint TAP connector.
+
+1. Click the **Deploy to Azure** button below.
+
+ [![Deploy To Azure](https://aka.ms/deploytoazurebutton)](https://aka.ms/sentinelproofpointtapazuredeploy)
+2. Select the preferred **Subscription**, **Resource Group** and **Location**.
+3. Enter the **Workspace ID**, **Workspace Key**, **API Username**, **API Password**, and validate the **Uri**.
+> - The default URI pulls data for the last 300 seconds (5 minutes) to correspond with the default Function App Timer trigger of 5 minutes. If the time interval needs to be modified, it's recommended to change the Function App Timer trigger accordingly (in the function.json file, post-deployment) to prevent overlapping data ingestion.
+> - Note: If using Azure Key Vault secrets for any of the values above, use the `@Microsoft.KeyVault(SecretUri={Security Identifier})` schema in place of the string values. Refer to [Key Vault references documentation](/azure/app-service/app-service-key-vault-references) for further details.
+4. Mark the checkbox labeled **I agree to the terms and conditions stated above**.
+5. Click **Purchase** to deploy.
+
+Option 2 - Manual Deployment of Azure Functions
+
+This method provides step-by-step instructions to deploy the Proofpoint TAP connector manually with Azure Functions.
+**1. Create a Function App**
+
+1. From the Azure Portal, navigate to [Function App](https://portal.azure.com/#blade/HubsExtension/BrowseResource/resourceType/Microsoft.Web%2Fsites/kind/functionapp), and select **+ Add**.
+2. In the **Basics** tab, ensure the Runtime stack is set to **PowerShell Core**.
+3. In the **Hosting** tab, ensure the **Consumption (Serverless)** plan type is selected.
+4. Make any other configuration changes as needed, then click **Create**.
+**2. Import Function App Code**
+
+1. In the newly created Function App, select **Functions** on the left pane and click **+ Add**.
+2. Select **Timer Trigger**.
+3. Enter a unique Function **Name** and modify the cron schedule, if needed. The default value is set to run the Function App every 5 minutes. (Note: the Timer trigger should match the `timeInterval` value below to prevent overlapping data.) Click **Create**.
+4. Click on **Code + Test** on the left pane.
+5. Copy the [Function App Code](https://aka.ms/sentinelproofpointtapazurefunctioncode) and paste it into the Function App `run.ps1` editor.
+6. Click **Save**.
+**3. Configure the Function App**
+
+1. In the Function App, select the Function App Name and select **Configuration**.
+2. In the **Application settings** tab, select **+ New application setting**.
+3. Add each of the following six (6) application settings individually, with their respective string values (case-sensitive):
+ apiUsername
+ apipassword
+ workspaceID
+ workspaceKey
+ uri
+ logAnalyticsUri (optional)
+> - Set the `uri` value to: `https://tap-api-v2.proofpoint.com/v2/siem/all?format=json&sinceSeconds=300`
+> - The default URI pulls data for the last 300 seconds (5 minutes) to correspond with the default Function App Timer trigger of 5 minutes. If the time interval needs to be modified, it's recommended to change the Function App Timer trigger accordingly to prevent overlapping data ingestion.
+> - Note: If using Azure Key Vault secrets for any of the values above, use the `@Microsoft.KeyVault(SecretUri={Security Identifier})` schema in place of the string values. Refer to [Key Vault references documentation](/azure/app-service/app-service-key-vault-references) for further details.
+> - Use logAnalyticsUri to override the log analytics API endpoint for dedicated cloud. For example, for public cloud, leave the value empty; for Azure GovUS cloud environment, specify the value in the following format: `https://<CustomerId>.ods.opinsights.azure.us`
+4. Once all application settings have been entered, click **Save**.
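
To sanity-check that the timer trigger and the `sinceSeconds=300` window in the `uri` line up without gaps or duplicate ingestion, counting events per 5-minute bin is a quick test. A sketch using only the table names documented above (`Type` is the standard Log Analytics column holding the table name):

```kusto
// Event counts per table per 5-minute bin; steady bins suggest the timer and URI window align.
union ProofPointTAPClicksPermitted_CL, ProofPointTAPClicksBlocked_CL,
      ProofPointTAPMessagesDelivered_CL, ProofPointTAPMessagesBlocked_CL
| where TimeGenerated > ago(1h)
| summarize Events = count() by Type, bin(TimeGenerated, 5m)
| sort by TimeGenerated desc
```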
+## Next steps
+
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-proofpoint?tab=Overview) in the Azure Marketplace.
sentinel Pulse Connect Secure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/pulse-connect-secure.md
Title: "Pulse Connect Secure connector for Microsoft Sentinel"
description: "Learn how to install the connector Pulse Connect Secure to connect your data source to Microsoft Sentinel." Previously updated : 03/25/2023 Last updated : 08/28/2023
Configure the facilities you want to collect and their severities.
3. Configure and connect the Pulse Connect Secure
-[Follow the instructions](https://docs.pulsesecure.net/WebHelp/Content/PCS/PCS_AdminGuide_8.2/Configuring%20Syslog.htm) to enable syslog streaming of Pulse Connect Secure logs. Use the IP address or hostname for the Linux device with the Linux agent installed as the Destination IP address.
+[Follow the instructions](https://docs.pulsesecure.net/WebHelp/PCS/8.3R3/Content/PCS/PCS_AdminGuide_8.3/Configuring_Syslog.htm) to enable syslog streaming of Pulse Connect Secure logs. Use the IP address or hostname for the Linux device with the Linux agent installed as the Destination IP address.
sentinel Rubrik Security Cloud Data Connector Using Azure Function https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/rubrik-security-cloud-data-connector-using-azure-function.md
- Title: "Rubrik Security Cloud data connector (using Azure Function) connector for Microsoft Sentinel"
-description: "Learn how to install the connector Rubrik Security Cloud data connector (using Azure Function) to connect your data source to Microsoft Sentinel."
- Previously updated : 05/22/2023
-# Rubrik Security Cloud data connector (using Azure Function) connector for Microsoft Sentinel
-
-The Rubrik Security Cloud data connector enables security operations teams to integrate insights from Rubrik's Data Observability services into Microsoft Sentinel. The insights include identification of anomalous filesystem behavior associated with ransomware and mass deletion, assessment of the blast radius of a ransomware attack, and identification of sensitive data, helping operators prioritize and more rapidly investigate potential incidents.
-
-## Connector attributes
-
-| Connector attribute | Description |
-| | |
-| **Azure function app code** | https://aka.ms/sentinel-RubrikWebhookEvents-functionapp |
-| **Log Analytics table(s)** | Rubrik_Anomaly_Data_CL<br/> Rubrik_Ransomware_Data_CL<br/> Rubrik_ThreatHunt_Data_CL<br/> |
-| **Data collection rules support** | Not currently supported |
-| **Supported by** | [Rubrik](https://support.rubrik.com) |
-
-## Query samples
-
-**Rubrik Anomaly Events - Anomaly Events for all severity types.**
- ```kusto
- Rubrik_Anomaly_Data_CL
- | sort by TimeGenerated desc
- ```
-
-**Rubrik Ransomware Analysis Events - Ransomware Analysis Events for all severity types.**
- ```kusto
- Rubrik_Ransomware_Data_CL
- | sort by TimeGenerated desc
- ```
-
-**Rubrik ThreatHunt Events - Threat Hunt Events for all severity types.**
- ```kusto
- Rubrik_ThreatHunt_Data_CL
- | sort by TimeGenerated desc
- ```
-## Prerequisites
-
-To integrate with the Rubrik Security Cloud data connector (using Azure Function), make sure you have:
-
-- **Microsoft.Web/sites permissions**: Read and write permissions on Azure Functions are required to create a Function App. [See the documentation to learn more about Azure Functions](/azure/azure-functions/).
-
-## Vendor installation instructions
-> [!NOTE]
 > This connector uses Azure Functions to connect to the Rubrik webhook, which pushes its logs into Microsoft Sentinel. This might result in additional data ingestion costs. Check the [Azure Functions pricing page](https://azure.microsoft.com/pricing/details/functions/) for details.
->**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App.
-**STEP 1 - Choose ONE from the following two deployment options to deploy the connector and the associated Azure Function**
-
->**IMPORTANT:** Before deploying the Rubrik Sentinel data connector, have the Workspace ID and Workspace Primary Key readily available.
-Option 1 - Azure Resource Manager (ARM) Template
-
-Use this method for automated deployment of the Rubrik connector.
-
-1. Click the **Deploy to Azure** button below.
-
- [![Deploy To Azure](https://aka.ms/deploytoazurebutton)](https://aka.ms/sentinel-RubrikWebhookEvents-azuredeploy)
-2. Select the preferred **Subscription**, **Resource Group** and **Location**.
-3. Enter the following information:
- Function Name
- Workspace ID
- Workspace Key
- Anomalies_table_name
- RansomwareAnalysis_table_name
- ThreatHunts_table_name
-
-4. Mark the checkbox labeled **I agree to the terms and conditions stated above**.
-5. Click **Purchase** to deploy.
-
-Option 2 - Manual Deployment of Azure Functions
-
-Use the following step-by-step instructions to deploy the Rubrik Sentinel data connector manually with Azure Functions (Deployment via Visual Studio Code).
-**1. Deploy a Function App**
-
-> **NOTE:** You will need to [prepare VS code](/azure/azure-functions/functions-create-first-function-python#prerequisites) for Azure function development.
-
-1. Download the [Azure Function App](https://aka.ms/sentinel-RubrikWebhookEvents-functionapp) file. Extract archive to your local development computer.
-2. Start VS Code. Choose File in the main menu and select Open Folder.
-3. Select the top level folder from extracted files.
-4. Choose the Azure icon in the Activity bar, then in the **Azure: Functions** area, choose the **Deploy to function app** button.
-If you aren't already signed in, choose the Azure icon in the Activity bar, then in the **Azure: Functions** area, choose **Sign in to Azure**.
-If you're already signed in, go to the next step.
-5. Provide the following information at the prompts:
-
- a. **Select folder:** Choose a folder from your workspace or browse to one that contains your function app.
-
- b. **Select Subscription:** Choose the subscription to use.
-
- c. Select **Create new Function App in Azure** (Don't choose the Advanced option)
-
- d. **Enter a globally unique name for the function app:** Type a name that is valid in a URL path. The name you type is validated to make sure that it's unique in Azure Functions. (e.g. RubrikXXXXX).
-
- e. **Select a runtime:** Choose Python 3.8 or above.
-
- f. Select a location for new resources. For better performance and lower costs choose the same [region](https://azure.microsoft.com/regions/) where Microsoft Sentinel is located.
-
-6. Deployment will begin. A notification is displayed after your function app is created and the deployment package is applied.
-7. Go to Azure Portal for the Function App configuration.
-**2. Configure the Function App**
-
-1. In the Function App, select the Function App Name and select **Configuration**.
-2. In the **Application settings** tab, select **+ New application setting**.
-3. Add each of the following application settings individually, with their respective values (case-sensitive):
- WorkspaceID
- WorkspaceKey
- Anomalies_table_name
- RansomwareAnalysis_table_name
- ThreatHunts_table_name
- logAnalyticsUri (optional)
-4. Once all application settings have been entered, click **Save**.
-**Post Deployment steps**
-**STEP 1 - To get the Azure Function url**
-
 1. Go to the Azure Function's Overview page and click **Functions** in the left blade.
 2. Click on the Rubrik-defined function for the event.
 3. Go to "GetFunctionurl" and copy the function URL.
-**STEP 2 - Follow the Rubrik User Guide instructions to [Add a Webhook](https://docs.rubrik.com/en-us/saas/saas/common/adding_webhook.html) to begin receiving event information related to Ransomware Anomalies.**
-
 1. Select **Generic** as the webhook provider (this uses CEF-formatted event information).
 2. Enter the Function App URL as the webhook URL endpoint for the Rubrik Sentinel Solution.
 3. Select the **Custom Authentication** option.
 4. Enter x-functions-key as the HTTP header.
 5. Enter the Function access key as the HTTP value. (Note: if you change this function access key in Sentinel in the future, you will need to update this webhook configuration.)
 6. Select the following event types: Anomaly, Ransomware Investigation Analysis, Threat Hunt.
 7. Select the following severity levels: Critical, Warning, Informational.

-*The Rubrik webhook configuration is now complete. Once the webhook events are triggered, you should see the Anomaly, Ransomware Analysis, and Threat Hunt events from Rubrik in the respective Log Analytics workspace tables: "Rubrik_Anomaly_Data_CL", "Rubrik_Ransomware_Data_CL", and "Rubrik_ThreatHunt_Data_CL".*
-## Next steps
-
-For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/rubrik_inc.rubrik_sentinel?tab=Overview) in the Azure Marketplace.
sentinel Rubrik Security Cloud Data Connector Using Azure Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/rubrik-security-cloud-data-connector-using-azure-functions.md
+
+ Title: "Rubrik Security Cloud data connector (using Azure Functions) connector for Microsoft Sentinel"
+description: "Learn how to install the connector Rubrik Security Cloud data connector (using Azure Functions) to connect your data source to Microsoft Sentinel."
 Last updated : 08/28/2023
+# Rubrik Security Cloud data connector (using Azure Functions) connector for Microsoft Sentinel
+
+The Rubrik Security Cloud data connector enables security operations teams to integrate insights from Rubrik's Data Observability services into Microsoft Sentinel. The insights include identification of anomalous filesystem behavior associated with ransomware and mass deletion, assessment of the blast radius of a ransomware attack, and identification of sensitive data, helping operators prioritize and more rapidly investigate potential incidents.
+
+## Connector attributes
+
+| Connector attribute | Description |
+| | |
+| **Azure function app code** | https://aka.ms/sentinel-RubrikWebhookEvents-functionapp |
+| **Log Analytics table(s)** | Rubrik_Anomaly_Data_CL<br/> Rubrik_Ransomware_Data_CL<br/> Rubrik_ThreatHunt_Data_CL<br/> |
+| **Data collection rules support** | Not currently supported |
+| **Supported by** | [Rubrik](https://support.rubrik.com) |
+
+## Query samples
+
+**Rubrik Anomaly Events - Anomaly Events for all severity types.**
 ```kusto
Rubrik_Anomaly_Data_CL
| sort by TimeGenerated desc
 ```
+
+**Rubrik Ransomware Analysis Events - Ransomware Analysis Events for all severity types.**
 ```kusto
Rubrik_Ransomware_Data_CL
| sort by TimeGenerated desc
 ```
+
+**Rubrik ThreatHunt Events - Threat Hunt Events for all severity types.**
 ```kusto
Rubrik_ThreatHunt_Data_CL
| sort by TimeGenerated desc
 ```
+## Prerequisites
+
+To integrate with the Rubrik Security Cloud data connector (using Azure Functions), make sure you have:
+
+- **Microsoft.Web/sites permissions**: Read and write permissions on Azure Functions are required to create a Function App. [See the documentation to learn more about Azure Functions](/azure/azure-functions/).
+## Vendor installation instructions
+> [!NOTE]
 > This connector uses Azure Functions to connect to the Rubrik webhook, which pushes its logs into Microsoft Sentinel. This might result in additional data ingestion costs. Check the [Azure Functions pricing page](https://azure.microsoft.com/pricing/details/functions/) for details.
+>**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App.
+**STEP 1 - Choose ONE from the following two deployment options to deploy the connector and the associated Azure Function**
+
+>**IMPORTANT:** Before deploying the Rubrik Microsoft Sentinel data connector, have the Workspace ID and Workspace Primary Key readily available.
+Option 1 - Azure Resource Manager (ARM) Template
+
+Use this method for automated deployment of the Rubrik connector.
+
+1. Click the **Deploy to Azure** button below.
+
+ [![Deploy To Azure](https://aka.ms/deploytoazurebutton)](https://aka.ms/sentinel-RubrikWebhookEvents-azuredeploy)
+2. Select the preferred **Subscription**, **Resource Group** and **Location**.
+3. Enter the following information:
+ Function Name
+ Workspace ID
+ Workspace Key
+ Anomalies_table_name
+ RansomwareAnalysis_table_name
+ ThreatHunts_table_name
+
+4. Mark the checkbox labeled **I agree to the terms and conditions stated above**.
+5. Click **Purchase** to deploy.
+
+Option 2 - Manual Deployment of Azure Functions
+
+Use the following step-by-step instructions to deploy the Rubrik Microsoft Sentinel data connector manually with Azure Functions (Deployment via Visual Studio Code).
+**1. Deploy a Function App**
+
+> **NOTE:** You will need to [prepare VS code](/azure/azure-functions/functions-create-first-function-python#prerequisites) for Azure function development.
+
+1. Download the [Azure Function App](https://aka.ms/sentinel-RubrikWebhookEvents-functionapp) file. Extract archive to your local development computer.
+2. Start VS Code. Choose File in the main menu and select Open Folder.
+3. Select the top level folder from extracted files.
+4. Choose the Azure icon in the Activity bar, then in the **Azure: Functions** area, choose the **Deploy to function app** button.
+If you aren't already signed in, choose the Azure icon in the Activity bar, then in the **Azure: Functions** area, choose **Sign in to Azure**.
+If you're already signed in, go to the next step.
+5. Provide the following information at the prompts:
+
+ a. **Select folder:** Choose a folder from your workspace or browse to one that contains your function app.
+
+ b. **Select Subscription:** Choose the subscription to use.
+
+ c. Select **Create new Function App in Azure** (Don't choose the Advanced option)
+
+ d. **Enter a globally unique name for the function app:** Type a name that is valid in a URL path. The name you type is validated to make sure that it's unique in Azure Functions. (e.g. RubrikXXXXX).
+
+ e. **Select a runtime:** Choose Python 3.8 or above.
+
+ f. Select a location for new resources. For better performance and lower costs choose the same [region](https://azure.microsoft.com/regions/) where Microsoft Sentinel is located.
+
+6. Deployment will begin. A notification is displayed after your function app is created and the deployment package is applied.
+7. Go to Azure Portal for the Function App configuration.
+**2. Configure the Function App**
+
+1. In the Function App, select the Function App Name and select **Configuration**.
+2. In the **Application settings** tab, select **+ New application setting**.
+3. Add each of the following application settings individually, with their respective values (case-sensitive):
+ WorkspaceID
+ WorkspaceKey
+ Anomalies_table_name
+ RansomwareAnalysis_table_name
+ ThreatHunts_table_name
+ logAnalyticsUri (optional)
+ - Use logAnalyticsUri to override the log analytics API endpoint for dedicated cloud. For example, for public cloud, leave the value empty; for Azure GovUS cloud environment, specify the value in the following format: `https://<CustomerId>.ods.opinsights.azure.us`.
+4. Once all application settings have been entered, click **Save**.
+**Post Deployment steps**
+**STEP 1 - To get the Azure Function url**
+
 1. Go to the Azure Function's Overview page and click **Functions** in the left blade.
 2. Click on the Rubrik-defined function for the event.
 3. Go to "GetFunctionurl" and copy the function URL.
+**STEP 2 - Follow the Rubrik User Guide instructions to [Add a Webhook](https://docs.rubrik.com/en-us/saas/saas/common/adding_webhook.html) to begin receiving event information related to Ransomware Anomalies.**
+
 1. Select **Generic** as the webhook provider (this uses CEF-formatted event information).
 2. Enter the Function App URL as the webhook URL endpoint for the Rubrik Microsoft Sentinel Solution.
 3. Select the **Custom Authentication** option.
 4. Enter x-functions-key as the HTTP header.
 5. Enter the Function access key as the HTTP value. (Note: if you change this function access key in Microsoft Sentinel in the future, you will need to update this webhook configuration.)
 6. Select the following event types: Anomaly, Ransomware Investigation Analysis, Threat Hunt.
 7. Select the following severity levels: Critical, Warning, Informational.

+*The Rubrik webhook configuration is now complete. Once the webhook events are triggered, you should see the Anomaly, Ransomware Analysis, and Threat Hunt events from Rubrik in the respective Log Analytics workspace tables: "Rubrik_Anomaly_Data_CL", "Rubrik_Ransomware_Data_CL", and "Rubrik_ThreatHunt_Data_CL".*
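
A single union query can confirm that all three tables are receiving events; this is a sketch over the table names above, not part of the official instructions:

```kusto
// Event counts per Rubrik table over the last 24 hours to verify webhook delivery.
union Rubrik_Anomaly_Data_CL, Rubrik_Ransomware_Data_CL, Rubrik_ThreatHunt_Data_CL
| where TimeGenerated > ago(24h)
| summarize Events = count() by Type
```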
+## Next steps
+
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/rubrik_inc.rubrik_sentinel?tab=Overview) in the Azure Marketplace.
sentinel Slack Audit Using Azure Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/slack-audit-using-azure-functions.md
Title: "Slack Audit (using Azure Functions) connector for Microsoft Sentinel"
description: "Learn how to install the connector Slack Audit (using Azure Functions) to connect your data source to Microsoft Sentinel." Previously updated : 07/26/2023 Last updated : 08/28/2023
The [Slack](https://slack.com) Audit data connector provides the capability to i
| Connector attribute | Description | | | |
-| **Application settings** | SlackAPIBearerToken<br/>WorkspaceID<br/>WorkspaceKey<br/>logAnalyticsUri (optional) |
-| **Azure function app code** | https://aka.ms/sentinel-SlackAuditAPI-functionapp |
| **Kusto function alias** | SlackAudit | | **Kusto function url** | https://aka.ms/sentinel-SlackAuditAPI-parser | | **Log Analytics table(s)** | SlackAudit_CL<br/> |
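
Because the solution ships both the `SlackAudit_CL` table and the **SlackAudit** parser alias, either can be queried once data flows. A minimal sketch, assuming the parser from the URL above has been deployed:

```kusto
// Most recent Slack audit events via the parser alias.
SlackAudit
| sort by TimeGenerated desc
| take 10
```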
To integrate with Slack Audit (using Azure Functions), make sure you have:
-Option 1 - Azure Resource Manager (ARM) Template
-Use this method for automated deployment of the Slack Audit data connector using an ARM Template.
-
-1. Click the **Deploy to Azure** button below.
-
- [![Deploy To Azure](https://aka.ms/deploytoazurebutton)](https://aka.ms/sentinel-SlackAuditAPI-azuredeploy)
-2. Select the preferred **Subscription**, **Resource Group** and **Location**.
-> **NOTE:** Within the same resource group, you can't mix Windows and Linux apps in the same region. Select an existing resource group without Windows apps in it, or create a new resource group.
-3. Enter the **SlackAPIBearerToken** and deploy.
-4. Mark the checkbox labeled **I agree to the terms and conditions stated above**.
-5. Click **Purchase** to deploy.
-
-Option 2 - Manual Deployment of Azure Functions
-
-Use the following step-by-step instructions to deploy the Slack Audit data connector manually with Azure Functions (Deployment via Visual Studio Code).
-**1. Deploy a Function App**
-
-> **NOTE:** You will need to [prepare VS code](/azure/azure-functions/functions-create-first-function-python#prerequisites) for Azure function development.
-
-1. Download the [Azure Function App](https://aka.ms/sentinel-SlackAuditAPI-functionapp) file. Extract archive to your local development computer.
-2. Start VS Code. Choose File in the main menu and select Open Folder.
-3. Select the top level folder from extracted files.
-4. Choose the Azure icon in the Activity bar, then in the **Azure: Functions** area, choose the **Deploy to function app** button.
-If you aren't already signed in, choose the Azure icon in the Activity bar, then in the **Azure: Functions** area, choose **Sign in to Azure**.
-If you're already signed in, go to the next step.
-5. Provide the following information at the prompts:
-
- a. **Select folder:** Choose a folder from your workspace or browse to one that contains your function app.
-
- b. **Select Subscription:** Choose the subscription to use.
-
- c. Select **Create new Function App in Azure** (Don't choose the Advanced option)
-
- d. **Enter a globally unique name for the function app:** Type a name that is valid in a URL path. The name you type is validated to make sure that it's unique in Azure Functions. (e.g. SlackAuditXXXXX).
-
- e. **Select a runtime:** Choose Python 3.8.
-
- f. Select a location for new resources. For better performance and lower costs choose the same [region](https://azure.microsoft.com/regions/) where Microsoft Sentinel is located.
-
-6. Deployment will begin. A notification is displayed after your function app is created and the deployment package is applied.
-7. Go to Azure Portal for the Function App configuration.
-**2. Configure the Function App**
-
-1. In the Function App, select the Function App Name and select **Configuration**.
-2. In the **Application settings** tab, select **+ New application setting**.
-3. Add each of the following application settings individually, with their respective string values (case-sensitive):
- SlackAPIBearerToken
- WorkspaceID
- WorkspaceKey
- logAnalyticsUri (optional)
-> - Use logAnalyticsUri to override the log analytics API endpoint for dedicated cloud. For example, for public cloud, leave the value empty; for Azure GovUS cloud environment, specify the value in the following format: `https://<CustomerId>.ods.opinsights.azure.us`.
-4. Once all application settings have been entered, click **Save**.
sentinel Sonicwall Firewall https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/sonicwall-firewall.md
Title: "SonicWall Firewall connector for Microsoft Sentinel"
description: "Learn how to install the connector SonicWall Firewall to connect your data source to Microsoft Sentinel." Previously updated : 05/22/2023 Last updated : 08/28/2023
CommonSecurityLog
| where DeviceVendor == "SonicWall"
- | sort by TimeGenerated
+ | sort by TimeGenerated desc
``` **Summarize by destination IP and port**
CommonSecurityLog
| summarize count() by DestinationIP, DestinationPort, TimeGenerated
- | sort by TimeGenerated
+ | sort by TimeGenerated desc
``` **Show all dropped traffic from the SonicWall Firewall**
sentinel Sophos Endpoint Protection Using Azure Function https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/sophos-endpoint-protection-using-azure-function.md
- Title: "Sophos Endpoint Protection (using Azure Functions) connector for Microsoft Sentinel"
-description: "Learn how to install the connector Sophos Endpoint Protection (using Azure Functions) to connect your data source to Microsoft Sentinel."
- Previously updated : 02/23/2023
-# Sophos Endpoint Protection (using Azure Functions) connector for Microsoft Sentinel
-
-The [Sophos Endpoint Protection](https://www.sophos.com/en-us/products/endpoint-antivirus.aspx) data connector provides the capability to ingest [Sophos events](https://docs.sophos.com/central/Customer/help/en-us/central/Customer/common/concepts/Events.html) into Microsoft Sentinel. Refer to [Sophos Central Admin documentation](https://docs.sophos.com/central/Customer/help/en-us/central/Customer/concepts/Logs.html) for more information.
-
-## Connector attributes
-
-| Connector attribute | Description |
-| | |
-| **Application settings** | SOPHOS_TOKEN<br/>WorkspaceID<br/>WorkspaceKey<br/>logAnalyticsUri (optional) |
-| **Azure functions app code** | https://aka.ms/sentinel-SophosEP-functionapp |
-| **Log Analytics table(s)** | SophosEP_CL<br/> |
-| **Data collection rules support** | Not currently supported |
-| **Supported by** | [Microsoft Corporation](https://support.microsoft.com/) |
-
-## Query samples
-
-**All logs**
- ```kusto
-SophosEP_CL
-
- | sort by TimeGenerated desc
- ```
---
-## Prerequisites
-
-To integrate with Sophos Endpoint Protection (using Azure Functions), make sure you have:
-
-- **Microsoft.Web/sites permissions**: Read and write permissions on Azure Functions are required to create a Function App. [See the documentation to learn more about Azure Functions](/azure/azure-functions).
-- **REST API Credentials/permissions**: An **API token** is required. [See the documentation to learn more about API tokens](https://docs.sophos.com/central/Customer/help/en-us/central/Customer/concepts/ep_ApiTokenManagement.html).
-## Vendor installation instructions
-> [!NOTE]
- > This connector uses Azure Functions to connect to the Sophos Central APIs to pull its logs into Microsoft Sentinel. This might result in additional data ingestion costs. Check the [Azure Functions pricing page](https://azure.microsoft.com/pricing/details/functions/) for details.
->**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App.
-> [!NOTE]
 > To work as expected, this data connector depends on a parser based on a Kusto Function, [**SophosEPEvent**](https://aka.ms/sentinel-SophosEP-parser), which is deployed with the Microsoft Sentinel solution.
-**STEP 1 - Configuration steps for the Sophos Central API**
-
- Follow the instructions to obtain the credentials.
-
-1. In Sophos Central Admin, go to **Global Settings > API Token Management**.
-2. To create a new token, click **Add token** from the top-right corner of the screen.
-3. Select a **token name** and click **Save**. The **API Token Summary** for this token is displayed.
-4. Click **Copy** to copy your **API Access URL + Headers** from the **API Token Summary** section into your clipboard.
-**STEP 2 - Choose ONE from the following two deployment options to deploy the connector and the associated Azure Function**
-
->**IMPORTANT:** Before deploying the Sophos Endpoint Protection data connector, have the Workspace ID and Workspace Primary Key (can be copied from the following).
-Option 1 - Azure Resource Manager (ARM) Template
-
-Use this method for automated deployment of the Sophos Endpoint Protection data connector using an ARM Template.
-
-1. Click the **Deploy to Azure** button below.
-
- [![Deploy To Azure](https://aka.ms/deploytoazurebutton)](https://aka.ms/sentinel-SophosEP-azuredeploy)
-2. Select the preferred **Subscription**, **Resource Group** and **Location**.
-> **NOTE:** Within the same resource group, you can't mix Windows and Linux apps in the same region. Select an existing resource group without Windows apps in it, or create a new resource group.
-3. Enter the **Sophos API Access URL and Headers**, **AzureSentinelWorkspaceId**, **AzureSentinelSharedKey**.
-4. Mark the checkbox labeled **I agree to the terms and conditions stated above**.
-5. Click **Purchase** to deploy.
-
-Option 2 - Manual Deployment of Azure Functions
-
-Use the following step-by-step instructions to deploy the Sophos Endpoint Protection data connector manually with Azure Functions (Deployment via Visual Studio Code).
-**1. Deploy a Function App**
-
-> **NOTE:** You will need to [prepare VS code](/azure/azure-functions/functions-create-first-function-python) for Azure function development.
-
-1. Download the [Azure Function App](https://aka.ms/sentinel-SophosEP-functionapp) file. Extract archive to your local development computer.
-2. Start VS Code. Choose File in the main menu and select Open Folder.
-3. Select the top level folder from extracted files.
-4. Choose the Azure icon in the Activity bar, then in the **Azure: Functions** area, choose the **Deploy to function app** button.
-If you aren't already signed in, choose the Azure icon in the Activity bar, then in the **Azure: Functions** area, choose **Sign in to Azure**.
-If you're already signed in, go to the next step.
-5. Provide the following information at the prompts:
-
- a. **Select folder:** Choose a folder from your workspace or browse to one that contains your function app.
-
- b. **Select Subscription:** Choose the subscription to use.
-
- c. Select **Create new Function App in Azure** (Don't choose the Advanced option)
-
- d. **Enter a globally unique name for the function app:** Type a name that is valid in a URL path. The name you type is validated to make sure that it's unique in Azure Functions. (e.g. SophosEPXXXXX).
-
- e. **Select a runtime:** Choose Python 3.8.
-
- f. Select a location for new resources. For better performance and lower costs choose the same [region](https://azure.microsoft.com/regions/) where Microsoft Sentinel is located.
-
-6. Deployment will begin. A notification is displayed after your function app is created and the deployment package is applied.
-7. Go to Azure Portal for the Function App configuration.
-**2. Configure the Function App**
-
-1. In the Function App, select the Function App Name and select **Configuration**.
-2. In the **Application settings** tab, select **+ New application setting**.
-3. Add each of the following application settings individually, with their respective string values (case-sensitive):
- SOPHOS_TOKEN
- WorkspaceID
- WorkspaceKey
- logAnalyticsUri (optional)
-> - Use logAnalyticsUri to override the log analytics API endpoint for dedicated cloud. For example, for public cloud, leave the value empty; for Azure GovUS cloud environment, specify the value in the following format: `https://<CustomerId>.ods.opinsights.azure.us`.
-4. Once all application settings have been entered, click **Save**.
-## Next steps
-
-For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-sophosep?tab=Overview) in the Azure Marketplace.
sentinel Sophos Endpoint Protection Using Azure Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/sophos-endpoint-protection-using-azure-functions.md
+
+ Title: "Sophos Endpoint Protection (using Azure Functions) connector for Microsoft Sentinel"
+description: "Learn how to install the connector Sophos Endpoint Protection (using Azure Functions) to connect your data source to Microsoft Sentinel."
 Last updated : 08/28/2023
+# Sophos Endpoint Protection (using Azure Functions) connector for Microsoft Sentinel
+
+The [Sophos Endpoint Protection](https://www.sophos.com/en-us/products/endpoint-antivirus.aspx) data connector provides the capability to ingest [Sophos events](https://docs.sophos.com/central/Customer/help/en-us/central/Customer/common/concepts/Events.html) into Microsoft Sentinel. Refer to [Sophos Central Admin documentation](https://docs.sophos.com/central/Customer/help/en-us/central/Customer/concepts/Logs.html) for more information.
+
+## Connector attributes
+
+| Connector attribute | Description |
+| | |
+| **Log Analytics table(s)** | SophosEP_CL<br/> |
+| **Data collection rules support** | Not currently supported |
+| **Supported by** | [Microsoft Corporation](https://support.microsoft.com/) |
+
+## Query samples
+
+**All logs**
 ```kusto
SophosEP_CL
| sort by TimeGenerated desc
 ```
+## Prerequisites
+
+To integrate with Sophos Endpoint Protection (using Azure Functions), make sure you have:
+
+- **Microsoft.Web/sites permissions**: Read and write permissions on Azure Functions are required to create a Function App. [See the documentation to learn more about Azure Functions](/azure/azure-functions/).
+- **REST API Credentials/permissions**: An **API token** is required. [See the documentation to learn more about API tokens](https://docs.sophos.com/central/Customer/help/en-us/central/Customer/concepts/ep_ApiTokenManagement.html).
+## Vendor installation instructions
+> [!NOTE]
+ > This connector uses Azure Functions to connect to the Sophos Central APIs to pull its logs into Microsoft Sentinel. This might result in additional data ingestion costs. Check the [Azure Functions pricing page](https://azure.microsoft.com/pricing/details/functions/) for details.
+>**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App.
+> [!NOTE]
 > To work as expected, this data connector depends on a parser based on a Kusto Function, [**SophosEPEvent**](https://aka.ms/sentinel-SophosEP-parser), which is deployed with the Microsoft Sentinel solution.
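
Once the solution has deployed the parser, queries can target the alias rather than the raw table. A minimal sketch, assuming the **SophosEPEvent** alias has finished activating in your workspace:

```kusto
// Normalized Sophos events via the parser alias.
SophosEPEvent
| take 10
```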
+**STEP 1 - Configuration steps for the Sophos Central API**
+
+ Follow the instructions to obtain the credentials.
+
+1. In Sophos Central Admin, go to **Global Settings > API Token Management**.
+2. To create a new token, click **Add token** from the top-right corner of the screen.
+3. Select a **token name** and click **Save**. The **API Token Summary** for this token is displayed.
+4. Click **Copy** to copy your **API Access URL + Headers** from the **API Token Summary** section into your clipboard.
+**STEP 2 - Choose ONE from the following two deployment options to deploy the connector and the associated Azure Function**
+
+>**IMPORTANT:** Before deploying the Sophos Endpoint Protection data connector, have the Workspace ID and Workspace Primary Key readily available.
+## Next steps
+
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-sophosep?tab=Overview) in the Azure Marketplace.
sentinel Symantec Vip https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/symantec-vip.md
Title: "Symantec VIP connector for Microsoft Sentinel"
description: "Learn how to install the connector Symantec VIP to connect your data source to Microsoft Sentinel." Previously updated : 03/25/2023 Last updated : 08/28/2023
Configure the facilities you want to collect and their severities.
2. Select **Apply below configuration to my machines** and select the facilities and severities. 3. Click **Save**.
+3. Configure and connect the Symantec VIP
+
+[Follow these instructions](https://help.symantec.com/cs/VIP_EG_INSTALL_CONFIG/VIP/v134652108_v128483142/Configuring-syslog) to configure the Symantec VIP Enterprise Gateway to forward syslog. Use the IP address or hostname for the Linux device with the Linux agent installed as the Destination IP address.
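
Syslog collected through the Linux agent lands in the built-in `Syslog` table, so a quick spot check confirms forwarding is working. The hostname filter below is a hypothetical placeholder, not part of the official instructions; replace it with your collector's name:

```kusto
// Spot-check recent syslog arriving from the VIP Enterprise Gateway's collector.
Syslog
| where TimeGenerated > ago(1h)
| where Computer == "<your-forwarder-hostname>" // hypothetical placeholder
| take 10
```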
## Next steps For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-symantecvip?tab=Overview) in the Azure Marketplace.
sentinel Vectra Xdr Using Azure Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/vectra-xdr-using-azure-functions.md
+
+ Title: "Vectra XDR (using Azure Functions) connector for Microsoft Sentinel"
+description: "Learn how to install the connector Vectra XDR (using Azure Functions) to connect your data source to Microsoft Sentinel."
 Last updated : 08/28/2023
+# Vectra XDR (using Azure Functions) connector for Microsoft Sentinel
+
+The [Vectra XDR](https://www.vectra.ai/) connector provides the capability to ingest Vectra Detections, Audits, Entity Scoring, Lockdown, and Health data into Microsoft Sentinel through the Vectra REST API. Refer to the API documentation for more information: `https://support.vectra.ai/s/article/KB-VS-1666`.
+
+## Connector attributes
+
+| Connector attribute | Description |
+| | |
+| **Azure function app code** | https://aka.ms/sentinel-VectraXDRAPI-functionapp |
+| **Kusto function alias** | VectraDetections |
+| **Kusto function url** | https://aka.ms/sentinel-VectraDetections-parser |
+| **Log Analytics table(s)** | Detections_Data_CL<br/> Audits_Data_CL<br/> Entity_Scoring_Data_CL<br/> Lockdown_Data_CL<br/> Health_Data_CL<br/> |
+| **Data collection rules support** | Not currently supported |
+| **Supported by** | [Vectra Support](https://www.vectra.ai/support) |
+
+## Query samples
+
+**Vectra Detections Events - All Detections Events.**
 ```kusto
Detections_Data_CL
| sort by TimeGenerated desc
 ```
+
+**Vectra Audits Events - All Audits Events.**
 ```kusto
Audits_Data_CL
| sort by TimeGenerated desc
 ```
+
+**Vectra Entity Scoring Events - All Entity Scoring Events.**
 ```kusto
Entity_Scoring_Data_CL
| sort by TimeGenerated desc
 ```
+
+**Vectra Lockdown Events - All Lockdown Events.**
 ```kusto
Lockdown_Data_CL
| sort by TimeGenerated desc
 ```
+
+**Vectra Health Events - All Health Events.**
 ```kusto
Health_Data_CL
| sort by TimeGenerated desc
 ```
+## Prerequisites
+
+To integrate with Vectra XDR (using Azure Functions), make sure you have:
+
+- **Microsoft.Web/sites permissions**: Read and write permissions on Azure Functions are required to create a Function App. [See the documentation to learn more about Azure Functions](/azure/azure-functions/).
+- **REST API Credentials/permissions**: A **Vectra Client ID** and **Client Secret** are required for Health, Entity Scoring, Detections, Lockdown, and Audit data collection. See the documentation to learn more about the API: `https://support.vectra.ai/s/article/KB-VS-1666`.
+## Vendor installation instructions
+> [!NOTE]
+ > This connector uses Azure Functions to connect to the Vectra API to pull its logs into Microsoft Sentinel. This might result in additional data ingestion costs. Check the [Azure Functions pricing page](https://azure.microsoft.com/pricing/details/functions/) for details.
+>**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App.
+> [!NOTE]
 > To work as expected, this data connector depends on parsers based on Kusto Functions. Follow these steps for the [Detections Parser](https://aka.ms/sentinel-VectraDetections-parser), [Audits Parser](https://aka.ms/sentinel-VectraAudits-parser), [Entity Scoring Parser](https://aka.ms/sentinel-VectraEntityScoring-parser), [Lockdown Parser](https://aka.ms/sentinel-VectraLockdown-parser), and [Health Parser](https://aka.ms/sentinel-VectraHealth-parser) to create the Kusto function aliases **VectraDetections**, **VectraAudits**, **VectraEntityScoring**, **VectraLockdown**, and **VectraHealth**.
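
After the parsers are created, the aliases can be queried directly. A minimal sketch, assuming the **VectraDetections** alias from the note above is deployed:

```kusto
// Normalized Vectra detections via the parser alias.
VectraDetections
| take 10
```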
+**STEP 1 - Configuration steps for the Vectra API Credentials**
+
+ Follow these instructions to create a Vectra Client ID and Client Secret.
+ 1. Log in to your Vectra portal.
+ 2. Navigate to Manage -> API Clients.
+ 3. From the API Clients page, select **Add API Client** to create a new client.
+ 4. Add a Client Name, select a Role, and click **Generate Credentials** to obtain your client credentials.
+ 5. Be sure to record your Client ID and Secret Key for safekeeping. You will need these two pieces of information to obtain an access token from the Vectra API. An access token is required to make requests to all of the Vectra API endpoints.
+**STEP 2 - Choose ONE from the following two deployment options to deploy the connector and the associated Azure Function**
+
+>**IMPORTANT:** Before deploying the Vectra data connector, have the Workspace ID and Workspace Primary Key, as well as the Vectra API authorization credentials, readily available.
+Option 1 - Azure Resource Manager (ARM) Template
+
+Use this method for automated deployment of the Vectra connector.
+
+1. Click the **Deploy to Azure** button below.
+
+ [![Deploy To Azure](https://aka.ms/deploytoazurebutton)](https://aka.ms/sentinel-VectraXDRAPI-azuredeploy)
+2. Select the preferred **Subscription**, **Resource Group** and **Location**.
+3. Enter the following information:
+ Function Name
+ Workspace ID
+ Workspace Key
+ Vectra Base URL (`https://<vectra-portal-url>`)
+ Vectra Client Id - Health
+ Vectra Client Secret Key - Health
+ Vectra Client Id - Entity Scoring
+ Vectra Client Secret - Entity Scoring
+ Vectra Client Id - Detections
+ Vectra Client Secret - Detections
+ Vectra Client ID - Audits
+ Vectra Client Secret - Audits
+ Vectra Client ID - Lockdown
+ Vectra Client Secret - Lockdown
+ StartTime (in MM/DD/YYYY HH:MM:SS Format)
+ Audits Table Name
+ Detections Table Name
+ Entity Scoring Table Name
+ Lockdown Table Name
+ Health Table Name
+ Log Level (Default: INFO)
+ Lockdown Schedule
+ Health Schedule
+ Detections Schedule
+ Audits Schedule
+ Entity Scoring Schedule
+4. Mark the checkbox labeled **I agree to the terms and conditions stated above**.
+5. Click **Purchase** to deploy.
+
+Option 2 - Manual Deployment of Azure Functions
+
+Use the following step-by-step instructions to deploy the Vectra data connector manually with Azure Functions (Deployment via Visual Studio Code).
+**1. Deploy a Function App**
+
+> **NOTE:** You will need to [prepare VS code](/azure/azure-functions/functions-create-first-function-python#prerequisites) for Azure function development.
+
+1. Download the [Azure Function App](https://aka.ms/sentinel-VectraXDRAPI-functionapp) file. Extract archive to your local development computer.
+2. Start VS Code. Choose File in the main menu and select Open Folder.
+3. Select the top level folder from extracted files.
+4. Choose the Azure icon in the Activity bar, then in the **Azure: Functions** area, choose the **Deploy to function app** button.
+If you aren't already signed in, choose the Azure icon in the Activity bar, then in the **Azure: Functions** area, choose **Sign in to Azure**.
+If you're already signed in, go to the next step.
+5. Provide the following information at the prompts:
+
+ a. **Select folder:** Choose a folder from your workspace or browse to one that contains your function app.
+
+ b. **Select Subscription:** Choose the subscription to use.
+
+ c. Select **Create new Function App in Azure** (Don't choose the Advanced option)
+
+ d. **Enter a globally unique name for the function app:** Type a name that is valid in a URL path. The name you type is validated to make sure that it's unique in Azure Functions. (e.g. VECTRAXXXXX).
+
+ e. **Select a runtime:** Choose Python 3.8 or above.
+
+ f. Select a location for new resources. For better performance and lower costs choose the same [region](https://azure.microsoft.com/regions/) where Microsoft Sentinel is located.
+
+6. Deployment will begin. A notification is displayed after your function app is created and the deployment package is applied.
+7. Go to Azure Portal for the Function App configuration.
+**2. Configure the Function App**
+
+1. In the Function App, select the Function App Name and select **Configuration**.
+2. In the **Application settings** tab, select **+ New application setting**.
+3. Add each of the following application settings individually, with their respective values (case-sensitive):
+ Workspace ID
+ Workspace Key
+ Vectra Base URL (`https://<vectra-portal-url>`)
+ Vectra Client Id - Health
+ Vectra Client Secret Key - Health
+ Vectra Client Id - Entity Scoring
+ Vectra Client Secret - Entity Scoring
+ Vectra Client Id - Detections
+ Vectra Client Secret - Detections
+ Vectra Client ID - Audits
+ Vectra Client Secret - Audits
+ Vectra Client ID - Lockdown
+ Vectra Client Secret - Lockdown
+ StartTime (in MM/DD/YYYY HH:MM:SS Format)
+ Audits Table Name
+ Detections Table Name
+ Entity Scoring Table Name
+ Lockdown Table Name
+ Health Table Name
+ Log Level (Default: INFO)
+ Lockdown Schedule
+ Health Schedule
+ Detections Schedule
+ Audits Schedule
+ Entity Scoring Schedule
+ logAnalyticsUri (optional)
+ - Use logAnalyticsUri to override the log analytics API endpoint for dedicated cloud. For example, for public cloud, leave the value empty; for Azure GovUS cloud environment, specify the value in the following format: `https://<CustomerId>.ods.opinsights.azure.us`.
+4. Once all application settings have been entered, click **Save**.
+## Next steps
+
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/vectraaiinc.vectra-xdr-for-microsoft-sentinel?tab=Overview) in the Azure Marketplace.
sentinel Vmware Vcenter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/vmware-vcenter.md
Title: "VMware vCenter connector for Microsoft Sentinel"
description: "Learn how to install the connector VMware vCenter to connect your data source to Microsoft Sentinel." Previously updated : 05/22/2023 Last updated : 08/28/2023
The [vCenter](https://www.vmware.com/in/products/vcenter-server.html) connector
```kusto vCenter
- | summarize count() by eventtype
+ | summarize count() by EventType
``` **log in/out to vCenter Server** ```kusto vCenter
- | where eventtype in ('UserLogoutSessionEvent','UserLoginSessionEvent')
+ | where EventType in ('UserLogoutSessionEvent','UserLoginSessionEvent')
- | summarize count() by eventtype,eventid,username,useragent
+ | summarize count() by EventType,EventID,UserName,UserAgent
| top 10 by count_ ```
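
The corrected column casing carries over to other aggregations. A sketch assuming the same `EventType` and `UserName` columns shown in the fixed samples:

```kusto
// Daily login counts per user from vCenter events.
vCenter
| where EventType == 'UserLoginSessionEvent'
| summarize Logins = count() by UserName, bin(TimeGenerated, 1d)
| sort by TimeGenerated desc
```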
vCenter
**NOTE:** This data connector depends on a parser based on a Kusto Function to work as expected, which is deployed as part of the solution. To view the function code in Log Analytics, open the Log Analytics/Microsoft Sentinel Logs blade, click **Functions**, search for the alias VMware vCenter, and load the function code, or click [here](https://github.com/Azure/Azure-Sentinel/blob/master/Solutions/VMware%20vCenter/Parsers/vCenter.txt). On the second line of the query, enter the hostname(s) of your VMware vCenter device(s) and any other unique identifiers for the logstream. The function usually takes 10-15 minutes to activate after solution installation/update.
-> 1. If you have not installed the vCenter solution from ContentHub then [Follow the steps](https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/Solutions/VMware%20vCenter/Parsers/vCenter.txt) to use the Kusto function alias, **vCenter**
+> 1. If you have not installed the vCenter solution from ContentHub then [Follow the steps](https://aka.ms/sentinel-vCenter-parser) to use the Kusto function alias, **vCenter**
1. Install and onboard the agent for Linux
sentinel Zero Networks Segment Audit Function Using Azure Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/zero-networks-segment-audit-function-using-azure-functions.md
Title: "Zero Networks Segment Audit (Function) (using Azure Functions) connector
description: "Learn how to install the connector Zero Networks Segment Audit (Function) (using Azure Functions) to connect your data source to Microsoft Sentinel." Previously updated : 07/26/2023 Last updated : 08/28/2023
The [Zero Networks Segment](https://zeronetworks.com/product/) Audit data connec
| Connector attribute | Description | | | | | **Application settings** | APIToken<br/>WorkspaceID<br/>WorkspaceKey<br/>logAnalyticsUri (optional)<br/>uri<br/>tableName |
+| **Azure function app code** | https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/ZeroNetworks/SegmentFunctionConnector/AzureFunction_ZeroNetworks_Segment_Audit.zip |
| **Log Analytics table(s)** | ZNSegmentAudit_CL<br/> | | **Data collection rules support** | Not currently supported | | **Supported by** | [Zero Networks](https://zeronetworks.com) |
Use the following step-by-step instructions to deploy the Zero Networks Segment
> **NOTE:** You will need to [prepare VS code](/azure/azure-functions/functions-create-first-function-powershell#prerequisites) for Azure function development.
-1. Download the Azure Function App file. Extract archive to your local development computer.
+1. Download the [Azure Function App](https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/ZeroNetworks/SegmentFunctionConnector/AzureFunction_ZeroNetworks_Segment_Audit.zip) file. Extract archive to your local development computer.
2. Start VS Code. Choose File in the main menu and select Open Folder. 3. Select the top level folder from extracted files. 4. Choose the Azure icon in the Activity bar, then in the **Azure: Functions** area, choose the **Deploy to function app** button.
sentinel Deploy Dynamics 365 Finance Operations Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/dynamics-365/deploy-dynamics-365-finance-operations-solution.md
To enable data collection, you create a new role in Finance and Operations with
To collect the managed identity application ID from Azure Active Directory:
-1. In the [Azure Active Directory portal](https://aad.portal.azure.com/), select **Enterprise Applications**.
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Browse to **Azure Active Directory** > **Enterprise applications**.
1. Change the application type filter to **Managed Identities**.
1. Search for and open the Function App created in the [previous step](#deploy-the-azure-resource-manager-arm-template). Copy the Application ID and save it for later use.

### Create a role for data collection in Finance and Operations
sentinel Incident Investigation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/incident-investigation.md
Last updated 01/01/2023
Microsoft Sentinel gives you a complete, full-featured case management platform for investigating and managing security incidents. **Incidents** are Microsoft Sentinel's name for case files that contain a complete and constantly updated chronology of a security threat, whether it's individual pieces of evidence (alerts), suspects and parties of interest (entities), insights collected and curated by security experts and AI/machine learning models, or comments and logs of all the actions taken in the course of the investigation.
-The incident investigation experience in Microsoft Sentinel begins with the **Incidents** page – a new experience designed to give you everything you need for your investigation in one place. The key goal of this new experience is to increase your SOC's efficiency and effectiveness, reducing its mean time to resolve (MTTR).
-
-> [!IMPORTANT]
->
-> The new incident experience is currently in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
->
-> Some of the individual functionalities mentioned below are also in **PREVIEW**. They will be so indicated.
+The incident investigation experience in Microsoft Sentinel begins with the **Incidents** page&mdash;a new experience designed to give you everything you need for your investigation in one place. The key goal of this new experience is to increase your SOC's efficiency and effectiveness, reducing its mean time to resolve (MTTR).
This article takes you through the phases of a typical incident investigation, presenting all the displays and tools available to you to help you along.
This can benefit your investigation in several ways:
The widget shows you the 20 most similar incidents. Microsoft Sentinel decides which incidents are similar based on common elements including entities, the source analytics rule, and alert details. From this widget you can jump directly to any of these incidents' full details pages, while keeping the connection to the current incident intact.
-Learn more about what you can do with [similar incidents](investigate-incidents.md#similar-incidents-preview).
+Learn more about what you can do with [similar incidents](investigate-incidents.md#similar-incidents).
### Examine top insights
The **Entities tab** contains a list of all the entities in the incident. When a
Depending on the entity type, you can take a number of further actions from this side panel:
- Pivot to the entity's full [entity page](entity-pages.md) to get even more details over a longer timespan or launch the graphical investigation tool centered on that entity.
- Run a [playbook](respond-threats-during-investigation.md) to take specific response or remediation actions on the entity (in Preview).
+- Classify the entity as an [indicator of compromise (IOC)](add-entity-to-threat-intelligence.md) and add it to your Threat intelligence list.
Each of these actions is currently supported for certain entity types and not for others. The following table shows which actions are supported for which entity types:
-| Available actions &#9654;<br>Entity types &#9660; | View full details<br>(in entity page) | Add to TI *<br>(Preview) | Run playbook *<br>(Preview) |
+| Available actions &#9654;<br>Entity types &#9660; | View full details<br>(in entity page) | Add to TI * | Run playbook *<br>(Preview) |
| -- | :-: | :-: | :-: |
| **User account** | &#10004; | | &#10004; |
| **Host** | &#10004; | | &#10004; |
Each of these actions is currently supported for certain entity types and not fo
| **Azure resource** | &#10004; | | |
| **IoT device** | &#10004; | | |
-\* For entities for which either or both of these two actions are available, you can take those actions right from the **Entities** widget in the **Overview tab**, never leaving the incident page.
+\* For entities for which the **Add to TI** or **Run playbook** actions are available, you can take those actions right from the **Entities** widget in the **Overview tab**, never leaving the incident page.
### Explore logs
sentinel Investigate Incidents https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/investigate-incidents.md
Microsoft Sentinel gives you a complete, full-featured case management platform
This article takes you through all the panels and options available on the incident details page, helping you navigate and investigate your incidents more quickly, effectively, and efficiently, and reducing your mean time to resolve (MTTR).
-> [!IMPORTANT]
->
-> The new incident experience is currently in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
->
-> Some of the individual functionalities mentioned below are also in **PREVIEW**. They will be so indicated.
-
See instructions for the [previous version of incident investigation](investigate-cases.md).

Incidents are your case files that contain an aggregation of all the relevant evidence for specific investigations. Each incident is created (or added to) based on pieces of evidence ([alerts](detect-threats-built-in.md)) that were either generated by analytics rules or imported from third-party security products that produce their own alerts. Incidents inherit the [entities](entities.md) contained in the alerts, as well as the alerts' properties, such as severity, status, and MITRE ATT&CK tactics and techniques.
The **Overview** tab contains the following widgets, each of which represents an
[Learn more about the **Incident timeline** widget below](#incident-timeline).

-- In the **Similar incidents (Preview)** widget, you'll see a collection of up to 20 other incidents that most closely resemble the current incident. This allows you to view the incident in a larger context and helps direct your investigation.
+- In the **Similar incidents** widget, you'll see a collection of up to 20 other incidents that most closely resemble the current incident. This allows you to view the incident in a larger context and helps direct your investigation.
- [Learn more about the **Similar incidents** widget below](#similar-incidents-preview).
+ [Learn more about the **Similar incidents** widget below](#similar-incidents).
- The **Entities** widget shows you all the [entities](entities.md) that have been identified in the alerts. These are the objects that played a role in the incident, whether they be users, devices, addresses, files, or [any other types](./entities-reference.md). Select an entity to see its full details (which will be displayed in the **Entities tab**&mdash;see below).
From the incident timeline widget, you can also take the following actions on al
:::image type="content" source="media/investigate-incidents/remove-alert.png" alt-text="Screenshot of removing an alert from an incident.":::
-### Similar incidents (preview)
+<a name="similar-incidents-preview"></a>
+### Similar incidents
As a security operations analyst, when investigating an incident you'll want to pay attention to its larger context. For example, you'll want to see if other incidents like this have happened before or are happening now.
As a security operations analyst, when investigating an incident you'll want to
- You might want to identify the owners of past similar incidents, to find the people in your SOC who can provide more context, or to whom you can escalate the investigation.
-The **similar incidents** widget in the incident details page, now in preview, presents up to 20 other incidents that are the most similar to the current one. Similarity is calculated by internal Microsoft Sentinel algorithms, and the incidents are sorted and displayed in descending order of similarity.
+The **similar incidents** widget in the incident details page presents up to 20 other incidents that are the most similar to the current one. Similarity is calculated by internal Microsoft Sentinel algorithms, and the incidents are sorted and displayed in descending order of similarity.
:::image type="content" source="media/investigate-incidents/similar-incidents.png" alt-text="Screenshot of the similar incidents display." lightbox="media/investigate-incidents/similar-incidents.png":::
You can search the list of entities in the entities widget, or filter the list b
:::image type="content" source="media/investigate-incidents/entity-actions-from-overview.png" alt-text="Screenshot of the actions you can take on an entity from the overview tab.":::
-If you already know that a particular entity is a known indicator of compromise, select the three dots on the entity's row and choose **Add to TI (Preview)** to [add the entity to your threat intelligence](add-entity-to-threat-intelligence.md). (This option is available for [supported entity types](incident-investigation.md#view-entities).)
+If you already know that a particular entity is a known indicator of compromise, select the three dots on the entity's row and choose **Add to TI** to [add the entity to your threat intelligence](add-entity-to-threat-intelligence.md). (This option is available for [supported entity types](incident-investigation.md#view-entities).)
If you want to [trigger an automatic response sequence for a particular entity](respond-threats-during-investigation.md), select the three dots and choose **Run playbook (Preview)**. (This option is available for [supported entity types](incident-investigation.md#view-entities).)
If the entity name appears as a link, selecting the entity's name will redirect
You can take the same actions here that you can take from the widget on the overview page. Select the three dots in the row of the entity to either run a playbook or add the entity to your threat intelligence.
-You can also take these actions by selecting the button next to **View full details** at the bottom of the side panel. The button will read either **Add to TI (Preview)**, **Run playbook (Preview)**, or **Entity actions**&mdash;in which case a menu will appear with the other two choices.
+You can also take these actions by selecting the button next to **View full details** at the bottom of the side panel. The button will read either **Add to TI**, **Run playbook (Preview)**, or **Entity actions**&mdash;in which case a menu will appear with the other two choices.
The **View full details** button itself will redirect you to the entity's full entity page.
sentinel Livestream https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/livestream.md
You can create a livestream session from an existing hunting query, or create yo
- If you started livestream from scratch, create your query.

   > [!NOTE]
- > Livestream supports **cross-resource queries** of data in Azure Data Explorer. [**Learn more about cross-resource queries**](../azure-monitor/logs/azure-monitor-data-explorer-proxy.md#cross-query-your-log-analytics-or-application-insights-resources-and-azure-data-explorer).
+ > Livestream supports **cross-resource queries** of data in Azure Data Explorer. [**Learn more about cross-resource queries**](../azure-monitor/logs/azure-monitor-data-explorer-proxy.md). A sketch of such a query follows these steps.
1. Select **Play** from the command bar.
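A minimal sketch of such a cross-resource query, using the public Azure Data Explorer samples cluster as a stand-in; the cluster URI, database, and table are placeholders you would replace with your own:

```kusto
// Hedged sketch: query an Azure Data Explorer table from a livestream
// (Log Analytics) query by using the adx() pattern.
adx('https://help.kusto.windows.net/Samples').StormEvents
| where StartTime > ago(1d)
| take 100
```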
sentinel Monitor Analytics Rule Integrity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/monitor-analytics-rule-integrity.md
For either **Scheduled analytics rule run** or **NRT analytics rule run**, you m
| An internal server error occurred while running the query. | |
| The query execution timed out. | |
| A table referenced in the query was not found. | Verify that the relevant data source is connected. |
- | A semantic error occurred while running the query. | Try resetting the alert rule by editing and saving it (without changing any settings). |
+ | A semantic error occurred while running the query. | Try resetting the analytics rule by editing and saving it (without changing any settings). |
| A function called by the query is named with a reserved word. | Remove or rename the function. |
- | A syntax error occurred while running the query. | Try resetting the alert rule by editing and saving it (without changing any settings). |
+ | A syntax error occurred while running the query. | Try resetting the analytics rule by editing and saving it (without changing any settings). |
| The workspace does not exist. | |
- | This query was found to use too many system resources and was prevented from running. | |
+ | This query was found to use too many system resources and was prevented from running. | Review and tune the analytics rule. Consult our Kusto Query Language [overview](kusto-overview.md) and [best practices](/azure/data-explorer/kusto/query/best-practices?toc=%2Fazure%2Fsentinel%2FTOC.json&bc=%2Fazure%2Fsentinel%2Fbreadcrumb%2Ftoc.json) documentation. A brief tuning sketch follows this table. |
| A function called by the query was not found. | Verify the existence in your workspace of all functions called by the query. |
| The workspace used in the query was not found. | Verify that all workspaces in the query exist. |
- | You don't have permissions to run this query. | Try resetting the alert rule by editing and saving it (without changing any settings). |
+ | You don't have permissions to run this query. | Try resetting the analytics rule by editing and saving it (without changing any settings). |
| You don't have access permissions to one or more of the resources in the query. | |
| The query referred to a storage path that was not found. | |
| The query was denied access to a storage path. | |
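For the resource-consumption error above, here's a minimal tuning sketch, assuming a workspace that ingests the standard `SecurityEvent` table; the event ID and columns are illustrative:

```kusto
// Hedged sketch: filter early and project only needed columns so a
// scheduled query stays within the service's resource limits.
SecurityEvent
| where TimeGenerated > ago(1h)      // narrow the time window first
| where EventID == 4625              // filter before heavier operators
| project TimeGenerated, Account, Computer, IpAddress
| summarize FailedLogons = count() by Account, IpAddress
```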
sentinel Near Real Time Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/near-real-time-rules.md
The following limitations currently govern the use of NRT rules:
(Since the NRT rule type is supposed to approximate **real-time** data ingestion, it doesn't afford you any advantage to use NRT rules on log sources with significant ingestion delay, even if it's far less than 12 hours.)
-1. As this type of rule is new, its syntax is currently limited but will gradually evolve. Therefore, at this time the following restrictions are in effect:
-
- 1. The query defined in an NRT rule can reference **only one table**. Queries can, however, refer to multiple watchlists.
-
- 1. You cannot use unions or joins.
+1. The syntax for this type of rule is gradually evolving. At this time the following limitations remain in effect:
1. Because this rule type is in near real time, we have reduced the built-in delay to a minimum (two minutes).
The following limitations currently govern the use of NRT rules:
1. Event grouping is now configurable to a limited degree. NRT rules can produce up to 30 single-event alerts. A rule with a query that results in more than 30 events will produce alerts for the first 29, then a 30th alert that summarizes all the applicable events.
+ 1. Queries defined in an NRT rule can now reference **more than one table**, as the sketch below illustrates.
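A hedged sketch of a two-table NRT-style query, assuming a workspace that ingests both `SigninLogs` and `AuditLogs`; whether a particular construct passes validation can still depend on this rule type's evolving restrictions:

```kusto
// Hypothetical NRT rule query that references two tables, correlating
// failed sign-ins with audit operations that share a correlation ID.
SigninLogs
| where ResultType != "0"
| join kind=inner (
    AuditLogs
    | project CorrelationId, OperationName
) on CorrelationId
| project TimeGenerated, UserPrincipalName, IPAddress, OperationName
```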
+
## Next steps

In this document, you learned how near-real-time (NRT) analytics rules work in Microsoft Sentinel.
sentinel Normalization Schema Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/normalization-schema-network.md
The following filtering parameters are available:
| **srcipaddr_has_any_prefix** | dynamic | Filter only network sessions for which the [source IP address field](#srcipaddr) prefix is in one of the listed values. Prefixes should end with a `.`, for example: `10.0.`. The length of the list is limited to 10,000 items.|
| **dstipaddr_has_any_prefix** | dynamic | Filter only network sessions for which the [destination IP address field](#dstipaddr) prefix is in one of the listed values. Prefixes should end with a `.`, for example: `10.0.`. The length of the list is limited to 10,000 items.|
| **ipaddr_has_any_prefix** | dynamic | Filter only network sessions for which the [destination IP address field](#dstipaddr) or [source IP address field](#srcipaddr) prefix is in one of the listed values. Prefixes should end with a `.`, for example: `10.0.`. The length of the list is limited to 10,000 items.<br><br>The field [ASimMatchingIpAddr](normalization-common-fields.md#asimmatchingipaddr) is set with one of the values `SrcIpAddr`, `DstIpAddr`, or `Both` to reflect the matching field or fields. |
-| **dstportnum** | Int | Filter only network sessions with the specified destination port number. |
+| **dstportnumber** | Int | Filter only network sessions with the specified destination port number. A usage sketch follows this table. |
| **hostname_has_any** | dynamic/string | Filter only network sessions for which the [destination hostname field](#dsthostname) has any of the values listed. The length of the list is limited to 10,000 items.<br><br> The field [ASimMatchingHostname](normalization-common-fields.md#asimmatchinghostname) is set with one of the values `SrcHostname`, `DstHostname`, or `Both` to reflect the matching field or fields. |
| **dvcaction** | dynamic/string | Filter only network sessions for which the [Device Action field](#dvcaction) is any of the values listed. |
| **eventresult** | String | Filter only network sessions with a specific **EventResult** value. |
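A minimal usage sketch for these parameters, assuming the ASIM parsers are deployed in your workspace; the port and aggregation are illustrative:

```kusto
// Invoke the ASIM unifying network-session parser with filtering
// parameters, so filtering happens at the source for better performance.
_Im_NetworkSession(starttime=ago(1d), endtime=now(), dstportnumber=3389)
| summarize Sessions = count() by SrcIpAddr
| top 10 by Sessions
```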
sentinel Deploy Data Connector Agent Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/deploy-data-connector-agent-container.md
In this section, you deploy the data connector agent. After you deploy the agent
### Prerequisites

-- To get started with deploying the data connector agent via the UI, **[complete the sign-up form](https://aka.ms/SentinelSAPMultiSIDUX)** so that we can provision your subscription with access to the preview. We'll send a confirmation email once your subscription is active.
- Follow the [Microsoft Sentinel Solution for SAP deployment prerequisites](prerequisites-for-deploying-sap-continuous-threat-monitoring.md).
- If you plan to ingest NetWeaver/ABAP logs over a secure connection using Secure Network Communications (SNC), [deploy the Microsoft Sentinel for SAP data connector with SNC](configure-snc.md).
- Set up a [managed identity](#managed-identity) or a [registered application](#registered-application). For more information on these options, see the [overview section](#data-connector-agent-deployment-overview).
In this section, you deploy the data connector agent. After you deploy the agent
Once the connector is deployed, proceed to deploy Microsoft Sentinel solution for SAP® applications content: > [!div class="nextstepaction"]
-> [Deploy the solution content from the content hub](deploy-sap-security-content.md)
+> [Deploy the solution content from the content hub](deploy-sap-security-content.md)
sentinel Deployment Solution Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/deployment-solution-configuration.md
By default, all analytics rules provided in the Microsoft Sentinel solution for
To enable or disable the ingestion of a specific log:
-1. Edit the *systemconfig.ini* file located under */opt/sapcon/SID/* on the connector's VM.
+1. Edit the *systemconfig.json* file located under */opt/sapcon/SID/* on the connector's VM.
1. Inside the configuration file, locate the relevant log and do one of the following:
    - To enable the log, change the value to `True`.
    - To disable the log, change the value to `False`.
To enable or disable the ingestion of a specific log:
For example, to stop ingestion for the `ABAPJobLog`, change its value to `False`:

```
-ABAPJobLog = False
+"abapjoblog": "True",
```
-Review the list of available logs in the [Systemconfig.ini file reference](reference-systemconfig.md#logs-activation-status-section).
+Review the list of available logs in the [Systemconfig.json file reference](reference-systemconfig-json.md).
You can also [stop ingesting the user master data tables](sap-solution-deploy-alternate.md#configuring-user-master-data-collection).
You can also [stop ingesting the user master data tables](sap-solution-deploy-al
To stop ingesting SAP logs into the Microsoft Sentinel workspace, and to stop the data stream from the Docker container, run this command:

```
-docker stop sapcon-[SID]
+docker stop sapcon-[SID/agent-name]
```
+To stop ingesting a specific SID for a multi-SID container, delete the SID from the connector page UI in Microsoft Sentinel.
The Docker container stops and doesn't send any more SAP logs to the Microsoft Sentinel workspace. This stops both the ingestion and billing for the SAP system related to the connector. If you need to reenable the Docker container, run this command:
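```
docker start sapcon-[SID/agent-name]
```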
Reference files:
- [Update script reference](reference-update.md)
- [Systemconfig.ini file reference](reference-systemconfig.md)
-For more information, see [Microsoft Sentinel solutions](../sentinel-solutions.md).
+For more information, see [Microsoft Sentinel solutions](../sentinel-solutions.md).
sentinel Sap Incident Response Playbooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/sap-incident-response-playbooks.md
The process for deploying Standard logic apps generally is more complex than it
Currently available Standard playbooks in GitHub:

- [**Lock SAP User from Teams - Basic** Standard playbook](https://github.com/Azure/Azure-Sentinel/tree/master/Solutions/SAP/Playbooks/Basic-SAPLockUser-STD)
-Keep tabs on the [SAP playbooks folder](https://github.com/Azure/Azure-Sentinel/tree/master/Solutions/SAP/Playbooks) in the GitHub repository for more playbooks as they become available. There's also a short introductory video there to help you get started.
+Keep tabs on the [SAP playbooks folder](https://github.com/Azure/Azure-Sentinel/tree/master/Solutions/SAP/Playbooks) in the GitHub repository for more playbooks as they become available. There's also a [short introductory video (external link)](https://www.youtube.com/watch?v=b-AZnR-nQpg) there to help you get started.
## Next steps
sentinel Sentinel Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sentinel-service-limits.md
This article lists the most common service limits you might encounter as you use
[!INCLUDE [sentinel-service-limits](includes/sentinel-limits-workbooks.md)]
+## Workspace manager limits
+
+
## Next steps

- [Azure subscription and service limits, quotas, and constraints](../azure-resource-manager/management/azure-subscription-service-limits.md)
sentinel Skill Up Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/skill-up-resources.md
Finally, do you want to try it yourself? The Microsoft Sentinel All-In-One Accel
Thousands of organizations and service providers are using Microsoft Sentinel. As is usual with security products, most organizations don't go public about it. Still, here are a few who have:
-* Find [public customer use cases](https://customers.microsoft.com/home).
+* Find [public customer use cases](https://customers.microsoft.com/).
* [Insight](https://www.insightcdct.com/) released a use case about [an NBA team adopting Microsoft Sentinel](https://www.insightcdct.com/Resources/Case-Studies/Case-Studies/NBA-Team-Adopts-Azure-Sentinel-for-a-Modern-Securi).
* Stuart Gregg, Security Operations Manager at ASOS, posted a much more detailed [blog post about the Microsoft Sentinel experience, focusing on hunting](https://medium.com/@stuart.gregg/proactive-phishing-with-azure-sentinel-part-1-b570fff3113).
sentinel Threat Intelligence Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/threat-intelligence-integration.md
To connect to Threat Intelligence Platform (TIP) feeds, follow the instructions
### MISP Open Source Threat Intelligence Platform

-- For a sample script that provides clients with MISP instances to migrate threat indicators to the Microsoft Graph Security API, see the [MISP to Microsoft Graph Security Script](https://github.com/microsoftgraph/security-api-solutions/tree/master/Samples/MISP).
+- Push threat indicators from MISP to Microsoft Sentinel using the TI upload indicators API with [MISP2Sentinel](https://www.misp-project.org/2023/08/26/MISP-Sentinel-UploadIndicatorsAPI.html/).
+- Azure Marketplace link for [MISP2Sentinel](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/microsoftsentinelcommunity.azure-sentinel-solution-misp2sentinel?tab=Overview).
- [Learn more about the MISP Project](https://www.misp-project.org/).

### Palo Alto Networks MineMeld
sentinel Tutorial Respond Threats Playbook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/tutorial-respond-threats-playbook.md
Another way to run playbooks automatically in response to **alerts** is to call
**This method will be deprecated as of March 2026.**
-Beginning **June 2023**, you can no longer add playbooks to analytics rules in this way. However, you can still see the existing playbooks called from analytics rules, and these playbooks will still run until March 2006. You are strongly encouraged to [create automation rules to call these playbooks instead](migrate-playbooks-to-automation-rules.md) before then.
+Beginning **June 2023**, you can no longer add playbooks to analytics rules in this way. However, you can still see the existing playbooks called from analytics rules, and these playbooks will still run until March 2026. You are strongly encouraged to [create automation rules to call these playbooks instead](migrate-playbooks-to-automation-rules.md) before then.
## Run a playbook on demand
sentinel Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/whats-new.md
See these [important announcements](#announcements) about recent changes to feat
[!INCLUDE [reference-to-feature-availability](includes/reference-to-feature-availability.md)]
+## August 2023
+
+- [New incident investigation experience is now GA](#new-incident-investigation-experience-is-now-ga)
+- [Updated MISP2Sentinel solution utilizes the new upload indicators API](#updated-misp2sentinel-solution)
+- [New and improved entity pages](#new-and-improved-entity-pages)
+
+### New incident investigation experience is now GA
+
+Microsoft Sentinel's comprehensive [incident investigation and case management experience](incident-investigation.md) is now generally available in both commercial and government clouds. This experience includes the revamped incident page, which itself includes displays of the incident's entities, insights, and similar incidents for comparison. The new experience also includes an incident log history and a task list.
+
+Also generally available are the similar incidents widget and the ability to add entities to your threat intelligence list of indicators of compromise (IoCs).
+
+- Learn more about [investigating incidents](investigate-incidents.md) in Microsoft Sentinel.
+
+### Updated MISP2Sentinel solution
+The open source threat intelligence sharing platform, MISP, has an updated solution to push indicators to Microsoft Sentinel. This notable solution utilizes the new [upload indicators API](#connect-threat-intelligence-with-the-upload-indicators-api) to take advantage of workspace granularity and align the MISP ingested TI to STIX-based properties.
+
+Learn more about the implementation details from the [MISP blog entry for MISP2Sentinel](https://www.misp-project.org/2023/08/26/MISP-Sentinel-UploadIndicatorsAPI.html/).
+
+### New and improved entity pages
+
+Microsoft Sentinel now provides you with enhanced and enriched entity pages and panels, giving you more security information on user accounts, full entity data to enrich your incident context, and a reduction in latency for a faster, smoother experience.
+
+- Read more about these changes in this blog post: [Taking Entity Investigation to the Next Level: Microsoft Sentinel's Upgraded Entity Pages](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/taking-entity-investigation-to-the-next-level-microsoft-sentinel/ba-p/3878382).
+
+- Learn more about [entities in Microsoft Sentinel](entities.md).
+
## July 2023

- [Higher limits for entities in alerts and entity mappings in analytics rules](#higher-limits-for-entities-in-alerts-and-entity-mappings-in-analytics-rules)
- Announcement: [Changes to Microsoft Defender for Office 365 connector alerts that apply when disconnecting and reconnecting](#changes-to-microsoft-defender-for-office-365-connector-alerts-that-apply-when-disconnecting-and-reconnecting)
- [Content Hub generally available and centralization changes released](#content-hub-generally-available-and-centralization-changes-released)
- [Deploy incident response playbooks for SAP](#deploy-incident-response-playbooks-for-sap)
-- [Microsoft Sentinel solution for D365 Finance and Operations (Preview)](#microsoft-sentinel-solution-for-d365-finance-and-operations-preview)
+- [Microsoft Sentinel solution for Dynamics 365 Finance and Operations (Preview)](#microsoft-sentinel-solution-for-dynamics-365-finance-and-operations-preview)
- [Simplified pricing tiers](#simplified-pricing-tiers) in [Announcements](#announcements) section below
- [Monitor and optimize the execution of your scheduled analytics rules (Preview)](#monitor-and-optimize-the-execution-of-your-scheduled-analytics-rules-preview)
Take advantage of Microsoft Sentinel's security orchestration, automation, and r
Learn more about [Microsoft Sentinel incident response playbooks for SAP](sap/sap-incident-response-playbooks.md).
-### Microsoft Sentinel solution for D365 Finance and Operations (Preview)
+### Microsoft Sentinel solution for Dynamics 365 Finance and Operations (Preview)
-The Microsoft Sentinel Solution for D365 Finance and Operations monitors and protects your Dynamics 365 Finance and Operations system: It collects audits and activity logs from the Dynamics 365 Finance and Operations environment, and detects threats, suspicious activities, illegitimate activities, and more.
+The Microsoft Sentinel Solution for Dynamics 365 Finance and Operations monitors and protects your Dynamics 365 Finance and Operations system: It collects audits and activity logs from the Dynamics 365 Finance and Operations environment, and detects threats, suspicious activities, illegitimate activities, and more.
The solution includes the **Dynamics 365 Finance and Operations** connector and [built-in analytics rules](dynamics-365/dynamics-365-finance-operations-security-content.md#built-in-analytics-rules) to detect suspicious activity in your Dynamics 365 Finance and Operations environment.
sentinel Workspace Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/workspace-manager.md
Workspace manager groups allow you to organize workspaces together based on busi
## Publish the Group definition

At this point, the content items selected haven't been published to the member workspace(s) yet.
+> [!NOTE]
+> The publish action will fail if the [maximum publish operations](#known-limitations) are exceeded.
+> Consider splitting up member workspaces into additional groups if you approach this limit.
+
1. Select the group > **Publish content**.

   :::image type="content" source="media/workspace-manager/publish-group.png" alt-text="Screenshot shows the group publish window.":::
Common reasons for failure include:
- A member workspace has been deleted.

### Known limitations
+- The maximum number of published operations per group is 2000. *Published operations* = (*member workspaces*) * (*content items*).<br>For example, if you have 10 member workspaces in a group and you publish 20 content items in that group,<br>*published operations* = *10* * *20* = *200*.
- Playbooks attributed or attached to analytics and automation rules aren't currently supported. - Workbooks stored in bring-your-own-storage aren't currently supported. - Workspace manager only manages content items published from the central workspace. It doesn't manage content created locally from member workspace(s).
service-bus-messaging Deprecate Service Bus Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/deprecate-service-bus-management.md
For more information on Service Manager and Resource Manager APIs for Azure Serv
| Service Manager APIs (Deprecated) | Resource Manager - Service Bus API | Resource Manager - Event Hubs API | Resource Manager - Relay API |
| | -- | -- | -- |
-| **Namespaces-GetNamespaceAsync** <br/>[Service Bus Get Namespace](/rest/api/servicebus/get-namespace)<br/>[Event Hubs Get Namespace](/rest/api/eventhub/get-event-hub)<br/>[Relay Get Namespace](/rest/api/servicebus/get-relays)<br/> ```GET https://management.core.windows.net/{subscription ID}/services/ServiceBus/Namespaces/{namespace name}``` | [get](/rest/api/servicebus/stable/namespaces/get) | [get](/rest/api/eventhub/stable/namespaces/get) | [get](/rest/api/relay/namespaces/get) |
-| **ConnectionDetails-GetConnectionDetails**<br/>Service Bus/Event Hub/Relay GetConnectionDetals<br/>```GET https://management.core.windows.net/{subscription ID}/services/ServiceBus/Namespaces/{namespace name}/ConnectionDetails``` | [listkeys](/rest/api/servicebus/stable/namespaces-authorization-rules/list-keys) | [listkeys](/rest/api/eventhub/stable/authorization-rules-event-hubs/list-keys) | [listkeys](/rest/api/relay/namespaces/listkeys) |
-| **Topics-GetTopicsAsync**<br/>Service Bus<br/>```GET https://management.core.windows.net/{subscription ID}/services/ServiceBus/Namespaces/{namespace name}/topics? $skip={skip}&$top={top}``` | [list](/rest/api/servicebus/stable/topics/listbynamespace) | &nbsp; | &nbsp; |
-| **Queues-GetQueueAsync** <br/>Service Bus<br/>```GET https://management.core.windows.net/{subscription ID}/services/ServiceBus/Namespaces/{namespace name}/queues/{queueName}``` | [get](/rest/api/servicebus/stable/queues/get) | &nbsp; | &nbsp; |
-| **Relays-GetRelaysAsync**<br/>[Get Relays](/rest/api/servicebus/get-relays)<br/>```GET https://management.core.windows.net/{subscription ID}/services/ServiceBus/Namespaces/{namespace name}/relays? $skip={skip}&$top={top}```| &nbsp; | &nbsp; | [list](/rest/api/relay/wcfrelays/listbynamespace) |
-| **NamespaceAuthorizationRules-GetNamespaceAuthorizationRuleAsync**<br/>Service Bus/Event Hub/Relay GetNamespaceAuthRule<br/>```GET https://management.core.windows.net/{subscription ID}/services/ServiceBus/Namespaces/{namespace name}/authorizationrules?``` | [getauthorizationrule](/rest/api/servicebus/stable/namespaces-authorization-rules/get-authorization-rule) | [getauthorizationrule](/rest/api/eventhub/stable/authorization-rules-namespaces/get-authorization-rule) | [getauthorizationrule](/rest/api/relay/namespaces/getauthorizationrule) |
-| **Namespaces-DeleteNamespaceAsync**<br/>[Service Bus Delete Namespace](/rest/api/servicebus/delete-namespace)<br/>[Event Hubs Delete Namespace](/rest/api/eventhub/delete-event-hub)<br/>[Relays Delete Namespace](/rest/api/servicebus/delete-namespace)<br/> ```DELETE https://management.core.windows.net/{subscription ID}/services/ServiceBus/Namespaces/{namespace name}``` | [delete](/rest/api/servicebus/stable/namespaces/delete) | [delete](/rest/api/eventhub/stable/namespaces/delete) | [delete](/rest/api/relay/namespaces/delete) |
-| **MessagingSKUPlan-GetPlanAsync**<br/>Service Bus/Event Hub/Relay Get Namespace<br/>```GET https://management.core.windows.net/{subscription ID}/services/ServiceBus/Namespaces/{namespace name}/MessagingPlan``` | [get](/rest/api/servicebus/stable/namespaces/get) | [get](/rest/api/eventhub/stable/namespaces/get) | [get](/rest/api/relay/namespaces/get) |
-| **MessagingSKUPlan-UpdatePlanAsync**<br/>Service Bus/Event Hub/Relay Get Namespace<br/>```PUT https://management.core.windows.net/{subscription ID}/services/ServiceBus/Namespaces/{namespace name}/MessagingPlan``` | [createorupdate](/rest/api/servicebus/stable/namespaces/createorupdate) | [createorupdate](/rest/api/eventhub/stable/namespaces/createorupdate) | [createorupdate](/rest/api/relay/namespaces/createorupdate) |
-| **NamespaceAuthorizationRules-UpdateNamespaceAuthorizationRuleAsync**<br/>Service Bus/Event Hub/Relay Get Namespace<br/>```PUT https://management.core.windows.net/{subscription ID}/services/ServiceBus/Namespaces/{namespace name}/AuthorizationRules/{rule name}``` | [createorupdate](/rest/api/servicebus/stable/namespaces/createorupdate) | [createorupdateauthorizationrule](/rest/api/eventhub/stable/authorization-rules-event-hubs/create-or-update-authorization-rule) | [createorupdateauthorizationrule](/rest/api/relay/namespaces/createorupdateauthorizationrule) |
+| **Namespaces-GetNamespaceAsync** <br/>[Service Bus Get Namespace](/rest/api/servicebus/get-namespace)<br/>[Event Hubs Get Namespace](/rest/api/eventhub/get-event-hub)<br/>[Relay Get Namespace](/rest/api/servicebus/get-relays)<br/> ```GET https://management.core.windows.net/{subscription ID}/services/ServiceBus/Namespaces/{namespace name}``` | [get](/rest/api/servicebus/controlplane-stable/namespaces/get) | [get](/rest/api/eventhub/controlplane-stable/namespaces/get) | [get](/rest/api/relay/controlplane-preview/namespaces/get) |
+| **ConnectionDetails-GetConnectionDetails**<br/>Service Bus/Event Hub/Relay GetConnectionDetals<br/>```GET https://management.core.windows.net/{subscription ID}/services/ServiceBus/Namespaces/{namespace name}/ConnectionDetails``` | [listkeys](/rest/api/servicebus/controlplane-stable/namespaces-authorization-rules/list-keys) | [listkeys](/rest/api/eventhub/controlplane-stable/authorization-rules-event-hubs/list-keys) | [listkeys](/rest/api/relay/controlplane-stable/wcf-relays/list-keys) |
+| **Topics-GetTopicsAsync**<br/>Service Bus<br/>```GET https://management.core.windows.net/{subscription ID}/services/ServiceBus/Namespaces/{namespace name}/topics? $skip={skip}&$top={top}``` | [list](/rest/api/servicebus/controlplane-preview/topics/list-by-namespace) | &nbsp; | &nbsp; |
+| **Queues-GetQueueAsync** <br/>Service Bus<br/>```GET https://management.core.windows.net/{subscription ID}/services/ServiceBus/Namespaces/{namespace name}/queues/{queueName}``` | [get](/rest/api/servicebus/controlplane-stable/queues/get) | &nbsp; | &nbsp; |
+| **Relays-GetRelaysAsync**<br/>[Get Relays](/rest/api/servicebus/get-relays)<br/>```GET https://management.core.windows.net/{subscription ID}/services/ServiceBus/Namespaces/{namespace name}/relays? $skip={skip}&$top={top}```| &nbsp; | &nbsp; | [list](/rest/api/relay/controlplane-stable/wcf-relays/list-by-namespace) |
+| **NamespaceAuthorizationRules-GetNamespaceAuthorizationRuleAsync**<br/>Service Bus/Event Hub/Relay GetNamespaceAuthRule<br/>```GET https://management.core.windows.net/{subscription ID}/services/ServiceBus/Namespaces/{namespace name}/authorizationrules?``` | [getauthorizationrule](/rest/api/servicebus/controlplane-stable/namespaces-authorization-rules/get-authorization-rule) | [getauthorizationrule](/rest/api/eventhub/controlplane-stable/authorization-rules-namespaces/get-authorization-rule) | [getauthorizationrule](/rest/api/relay/controlplane-stable/wcf-relays/get-authorization-rule) |
+| **Namespaces-DeleteNamespaceAsync**<br/>[Service Bus Delete Namespace](/rest/api/servicebus/delete-namespace)<br/>[Event Hubs Delete Namespace](/rest/api/eventhub/delete-event-hub)<br/>[Relays Delete Namespace](/rest/api/servicebus/delete-namespace)<br/> ```DELETE https://management.core.windows.net/{subscription ID}/services/ServiceBus/Namespaces/{namespace name}``` | [delete](/rest/api/servicebus/controlplane-stable/namespaces/delete) | [delete](/rest/api/eventhub/controlplane-stable/namespaces/delete) | [delete](/rest/api/relay/controlplane-preview/namespaces/delete) |
+| **MessagingSKUPlan-GetPlanAsync**<br/>Service Bus/Event Hub/Relay Get Namespace<br/>```GET https://management.core.windows.net/{subscription ID}/services/ServiceBus/Namespaces/{namespace name}/MessagingPlan``` | [get](/rest/api/servicebus/controlplane-stable/namespaces/get) | [get](/rest/api/eventhub/controlplane-stable/namespaces/get) | [get](/rest/api/relay/controlplane-preview/namespaces/get) |
+| **MessagingSKUPlan-UpdatePlanAsync**<br/>Service Bus/Event Hub/Relay Get Namespace<br/>```PUT https://management.core.windows.net/{subscription ID}/services/ServiceBus/Namespaces/{namespace name}/MessagingPlan``` | [createorupdate](/rest/api/servicebus/controlplane-preview/namespaces/create-or-update) | [createorupdate](/rest/api/eventhub/controlplane-stable/event-hubs/create-or-update) | [createorupdate](/rest/api/relay/controlplane-stable/namespaces/create-or-update) |
+| **NamespaceAuthorizationRules-UpdateNamespaceAuthorizationRuleAsync**<br/>Service Bus/Event Hub/Relay Get Namespace<br/>```PUT https://management.core.windows.net/{subscription ID}/services/ServiceBus/Namespaces/{namespace name}/AuthorizationRules/{rule name}``` | [createorupdate](/rest/api/servicebus/controlplane-preview/namespaces/create-or-update) | [createorupdateauthorizationrule](/rest/api/eventhub/controlplane-stable/authorization-rules-event-hubs/create-or-update-authorization-rule) | [createorupdateauthorizationrule](/rest/api/relay/controlplane-stable/namespaces/create-or-update-authorization-rule) |
| **NamespaceAuthorizationRules-CreateNamespaceAuthorizationRuleAsync**<br/>
-Service Bus/Event Hub/Relay<br/>```PUT https://management.core.windows.net/{subscription ID}/services/ServiceBus/Namespaces/{namespace name}/AuthorizationRules/{rule name}``` |[createorupdate](/rest/api/servicebus/stable/namespaces/createorupdate) | [createorupdateauthorizationrule](/rest/api/eventhub/stable/authorization-rules-event-hubs/create-or-update-authorization-rule) | [createorupdateauthorizationrule](/rest/api/relay/namespaces/createorupdateauthorizationrule) |
-| **NamespaceProperties-GetNamespacePropertiesAsync**<br/>[Service Bus Get Namespace](/rest/api/servicebus/get-namespace)<br/>[Event Hubs Get Namespace](/rest/api/eventhub/get-event-hub)<br/>[Relay Get Namespace](/rest/api/servicebus/get-relays)<br/>```GET https://management.core.windows.net/{subscription ID}/services/ServiceBus/Namespaces/{namespace name}``` | [get](/rest/api/servicebus/stable/namespaces/get) | [get](/rest/api/eventhub/stable/namespaces/get) | [get](/rest/api/relay/namespaces/get) |
+Service Bus/Event Hub/Relay<br/>```PUT https://management.core.windows.net/{subscription ID}/services/ServiceBus/Namespaces/{namespace name}/AuthorizationRules/{rule name}``` |[createorupdate](/rest/api/servicebus/controlplane-preview/namespaces/create-or-update) | [createorupdateauthorizationrule](/rest/api/eventhub/controlplane-stable/authorization-rules-event-hubs/create-or-update-authorization-rule) | [createorupdateauthorizationrule](/rest/api/relay/controlplane-stable/namespaces/create-or-update-authorization-rule) |
+| **NamespaceProperties-GetNamespacePropertiesAsync**<br/>[Service Bus Get Namespace](/rest/api/servicebus/get-namespace)<br/>[Event Hubs Get Namespace](/rest/api/eventhub/get-event-hub)<br/>[Relay Get Namespace](/rest/api/servicebus/get-relays)<br/>```GET https://management.core.windows.net/{subscription ID}/services/ServiceBus/Namespaces/{namespace name}``` | [get](/rest/api/servicebus/controlplane-stable/namespaces/get) | [get](/rest/api/eventhub/controlplane-stable/namespaces/get) | [get](/rest/api/relay/controlplane-preview/namespaces/get) |
| **RegionCodes-GetRegionCodesAsync**<br/>Service Bus/EventHub/Relay Get Namespace<br/>```GET https://management.core.windows.net/{subscription ID}/services/ServiceBus/Namespaces/{namespace name}``` | &nbsp; | &nbsp; | &nbsp; |
-| **NamespaceProperties-UpdateNamespacePropertyAsync**<br/>Service Bus/EventHub/Relay<br/>```GET https://management.core.windows.net/{subscription ID}/services/ServiceBus/Regions/``` | [createorupdate](/rest/api/servicebus/stable/namespaces/createorupdate) | [createorupdate](/rest/api/eventhub/stable/namespaces/createorupdate) | [createorupdate](/rest/api/relay/namespaces/createorupdate) |
-| **EventHubsCrud-ListEventHubsAsync**<br/>[List Event Hubs](/rest/api/eventhub/list-event-hubs)<br/>```GET https://management.core.windows.net/{subscription ID}/services/ServiceBus/Namespaces/{namespace name}/eventhubs?$skip={skip}&$top={top}``` | &nbsp; | [list](/rest/api/eventhub/preview/event-hubs/list-by-namespace) | &nbsp; |
+| **NamespaceProperties-UpdateNamespacePropertyAsync**<br/>Service Bus/EventHub/Relay<br/>```GET https://management.core.windows.net/{subscription ID}/services/ServiceBus/Regions/``` | [createorupdate](/rest/api/servicebus/controlplane-stable/namespaces/create-or-update) | [createorupdate](/rest/api/eventhub/controlplane-stable/event-hubs/create-or-update) | [createorupdate](/rest/api/relay/controlplane-stable/namespaces/create-or-update) |
+| **EventHubsCrud-ListEventHubsAsync**<br/>[List Event Hubs](/rest/api/eventhub/list-event-hubs)<br/>```GET https://management.core.windows.net/{subscription ID}/services/ServiceBus/Namespaces/{namespace name}/eventhubs?$skip={skip}&$top={top}``` | &nbsp; | [list](/rest/api/eventhub/controlplane-preview/event-hubs/list-by-namespace) | &nbsp; |
| **EventHubsCrud-GetEventHubAsync**<br/>[Get Event Hubs](/rest/api/eventhub/get-event-hub)<br/>```GET https://management.core.windows.net/{subscription ID}/services/ServiceBus/Namespaces/{namespace name}/eventhubs/{eventHubPath}``` | &nbsp; | [get](/rest/api/eventhub/get-event-hub) | &nbsp; |
-| **NamespaceAuthorizationRules-DeleteNamespaceAuthorizationRuleAsync**<br/>Service Bus/Event Hub/Relay<br/>```DELETE https://management.core.windows.net/{subscription ID}/services/ServiceBus/Namespaces/{namespace name}/AuthorizationRules/{rule name}``` | [deleteauthorizationrule](/rest/api/servicebus/stable/namespaces-authorization-rules/delete-authorization-rule) | [deleteauthorizationrule](/rest/api/eventhub/stable/authorization-rules-namespaces/delete-authorization-rule) | [deleteauthorizationrule](/rest/api/relay/namespaces/deleteauthorizationrule) |
-| **NamespaceAuthorizationRules-GetNamespaceAuthorizationRulesAsync**<br/>Service Bus/EventHub/Relay<br/>```GET https://management.core.windows.net/{subscription ID}/services/ServiceBus/Namespaces/{namespace name}/AuthorizationRules``` | [listauthorizationrules](/rest/api/servicebus/stable/namespaces-authorization-rules/list-authorization-rules) | [listauthorizationrules](/rest/api/eventhub/stable/authorization-rules-namespaces/list-authorization-rules) | [listauthorizationrules](/rest/api/relay/namespaces/listauthorizationrules) |
-| **NamespaceAvailability-IsNamespaceAvailable**<br/>[Service Bus Namespace Availability](/rest/api/servicebus/check-namespace-availability)<br/>```GET https://management.core.windows.net/{subscription ID}/services/ServiceBus/CheckNamespaceAvailability/?namespace=<namespaceValue>``` | [checknameavailability](/rest/api/servicebus/stable/namespaces-check-name-availability/check-name-availability) | [checknameavailability](/rest/api/eventhub/stable/check-name-availability-namespaces/check-name-availability) | [checknameavailability](/rest/api/relay/namespaces/checknameavailability) |
-| **Namespaces-CreateOrUpdateNamespaceAsync**<br/>Service Bus/Event Hub/Relay<br/>```PUT https://management.core.windows.net/{subscription ID}/services/ServiceBus/Namespaces/{namespace name}``` | [createorupdate](/rest/api/servicebus/stable/namespaces/createorupdate) | [createorupdate](/rest/api/eventhub/stable/namespaces/createorupdate) | [createorupdate](/rest/api/relay/namespaces/createorupdate) |
-| **Topics-GetTopicAsync**<br/>```GET https://management.core.windows.net/{subscription ID}/services/ServiceBus/Namespaces/{namespace name}/topics/{topicPath}``` | [get](/rest/api/servicebus/stable/topics/get) | &nbsp; | &nbsp; |
+| **NamespaceAuthorizationRules-DeleteNamespaceAuthorizationRuleAsync**<br/>Service Bus/Event Hub/Relay<br/>```DELETE https://management.core.windows.net/{subscription ID}/services/ServiceBus/Namespaces/{namespace name}/AuthorizationRules/{rule name}``` | [deleteauthorizationrule](/rest/api/servicebus/controlplane-stable/namespaces-authorization-rules/delete-authorization-rule) | [deleteauthorizationrule](/rest/api/eventhub/controlplane-stable/authorization-rules-namespaces/delete-authorization-rule) | [deleteauthorizationrule](/rest/api/relay/controlplane-stable/namespaces/delete-authorization-rule) |
+| **NamespaceAuthorizationRules-GetNamespaceAuthorizationRulesAsync**<br/>Service Bus/EventHub/Relay<br/>```GET https://management.core.windows.net/{subscription ID}/services/ServiceBus/Namespaces/{namespace name}/AuthorizationRules``` | [listauthorizationrules](/rest/api/servicebus/controlplane-stable/namespaces-authorization-rules/list-authorization-rules) | [listauthorizationrules](/rest/api/eventhub/controlplane-stable/authorization-rules-namespaces/list-authorization-rules) | [listauthorizationrules](/rest/api/relay/controlplane-stable/namespaces/list-authorization-rules) |
+| **NamespaceAvailability-IsNamespaceAvailable**<br/>[Service Bus Namespace Availability](/rest/api/servicebus/check-namespace-availability)<br/>```GET https://management.core.windows.net/{subscription ID}/services/ServiceBus/CheckNamespaceAvailability/?namespace=<namespaceValue>``` | [checknameavailability](/rest/api/servicebus/controlplane-stable/namespaces-check-name-availability/check-name-availability) | [checknameavailability](/rest/api/eventhub/controlplane-stable/check-name-availability-namespaces/check-name-availability) | [checknameavailability](/rest/api/relay/controlplane-stable/namespaces/check-name-availability) |
+| **Namespaces-CreateOrUpdateNamespaceAsync**<br/>Service Bus/Event Hub/Relay<br/>```PUT https://management.core.windows.net/{subscription ID}/services/ServiceBus/Namespaces/{namespace name}``` | [createorupdate](/rest/api/servicebus/controlplane-stable/namespaces/create-or-update) | [createorupdate](/rest/api/eventhub/controlplane-stable/namespaces/create-or-update) | [createorupdate](/rest/api/relay/controlplane-stable/namespaces/create-or-update) |
+| **Topics-GetTopicAsync**<br/>```GET https://management.core.windows.net/{subscription ID}/services/ServiceBus/Namespaces/{namespace name}/topics/{topicPath}``` | [get](/rest/api/servicebus/controlplane-stable/topics/get) | &nbsp; | &nbsp; |
## Service Manager PowerShell - Resource Manager PowerShell | Service Manager PowerShell command (Deprecated) | New Resource Manager Commands | Newer Resource Manager Command |
service-bus-messaging Explorer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/explorer.md
To use the Service Bus Explorer, navigate to the Service Bus namespace on which
1. If you're looking to run operations against a queue, select **Queues** from the navigation menu. If you're looking to run operations against a topic (and its related subscriptions), select **Topics**.

    :::image type="content" source="./media/service-bus-explorer/queue-topics-left-navigation.png" alt-text="Screenshot of left side navigation, where entity can be selected." lightbox="./media/service-bus-explorer/queue-topics-left-navigation.png":::
-
1. After selecting **Queues** or **Topics**, select the specific queue or topic.
+
+ :::image type="content" source="./media/service-bus-explorer/select-specific-queue.png" alt-text="Screenshot of the Queues page with a specific queue selected." lightbox="./media/service-bus-explorer/select-specific-queue.png":::
1. Select the **Service Bus Explorer** from the left navigation menu.

    :::image type="content" source="./media/service-bus-explorer/left-navigation-menu-selected.png" alt-text="Screenshot of queue page where Service Bus Explorer can be selected." lightbox="./media/service-bus-explorer/left-navigation-menu-selected.png":::
After peeking or receiving a message, we can resend it, which will send a copy o
When working with Service Bus Explorer, it's possible to use either **Access Key** or **Azure Active Directory** authentication.

1. Select the **Settings** button.
+
+ :::image type="content" source="./media/service-bus-explorer/select-settings.png" alt-text="Screenshot indicating the Settings button in Service Bus Explorer." lightbox="./media/service-bus-explorer/select-settings.png":::
1. Choose the desired authentication method, and select the **Save** button.

    :::image type="content" source="./media/service-bus-explorer/queue-select-authentication-type.png" alt-text="Screenshot indicating the Settings button and a page showing the different authentication types." lightbox="./media/service-bus-explorer/queue-select-authentication-type.png":::
service-bus-messaging Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/policy-reference.md
Title: Built-in policy definitions for Azure Service Bus Messaging description: Lists Azure Policy built-in policy definitions for Azure Service Bus Messaging. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2023 Last updated : 08/30/2023
service-bus-messaging Private Link Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/private-link-service.md
# Allow access to Azure Service Bus namespaces via private endpoints

Azure Private Link Service enables you to access Azure services (for example, Azure Service Bus, Azure Storage, and Azure Cosmos DB) and Azure hosted customer/partner services over a **private endpoint** in your virtual network.
-A private endpoint is a network interface that connects you privately and securely to a service powered by Azure Private Link. The private endpoint uses a private IP address from your VNet, effectively bringing the service into your VNet. All traffic to the service can be routed through the private endpoint, so no gateways, NAT devices, ExpressRoute or VPN connections, or public IP addresses are needed. Traffic between your virtual network and the service traverses over the Microsoft backbone network, eliminating exposure from the public Internet. You can connect to an instance of an Azure resource, giving you the highest level of granularity in access control.
+A private endpoint is a network interface that connects you privately and securely to a service powered by Azure Private Link. The private endpoint uses a private IP address from your virtual network, effectively bringing the service into your virtual network. All traffic to the service can be routed through the private endpoint, so no gateways, NAT devices, ExpressRoute or VPN connections, or public IP addresses are needed. Traffic between your virtual network and the service traverses over the Microsoft backbone network, eliminating exposure from the public Internet. You can connect to an instance of an Azure resource, giving you the highest level of granularity in access control.
For more information, see [What is Azure Private Link?](../private-link/private-link-overview.md)
There are four provisioning states:
1. In the search bar, type in **Service Bus**.
1. Select the **namespace** that you want to manage.
1. Select the **Networking** tab.
-5. Go to the appropriate section below based on the operation you want to: approve, reject, or remove.
+5. See the appropriate following section based on the operation you want to perform: approve, reject, or remove.
### Approve a private endpoint connection
Aliases: <service-bus-namespace-name>.servicebus.windows.net
## Limitations and Design Considerations
-**Pricing**: For pricing information, see [Azure Private Link pricing](https://azure.microsoft.com/pricing/details/private-link/).
-
-**Limitations**: This feature is available in all Azure public regions.
-
-**Maximum number of private endpoints per Service Bus namespace**: 120.
+- For pricing information, see [Azure Private Link pricing](https://azure.microsoft.com/pricing/details/private-link/).
+- This feature is available in all Azure public regions.
+- Maximum number of private endpoints per Service Bus namespace: 120.
+- The traffic is blocked at the application layer, not at the TCP layer. Therefore, you see TCP connections or `nslookup` operations succeeding against the public endpoint even though public access is disabled.
For more, see [Azure Private Link service: Limitations](../private-link/private-link-service-overview.md#limitations)
service-bus-messaging Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Service Bus Messaging description: Lists Azure Policy Regulatory Compliance controls available for Azure Service Bus Messaging. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/03/2023 Last updated : 08/25/2023
service-bus-messaging Service Bus Amqp Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-amqp-overview.md
Title: Overview of AMQP 1.0 in Azure Service Bus description: Learn how Azure Service Bus supports Advanced Message Queuing Protocol (AMQP), an open standard protocol. Previously updated : 05/31/2022 Last updated : 08/16/2023 # Advanced Message Queueing Protocol (AMQP) 1.0 support in Service Bus
service-bus-messaging Service Bus Amqp Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-amqp-troubleshoot.md
Title: Troubleshoot AMQP errors in Azure Service Bus | Microsoft Docs description: Provides a list of AMQP errors you may receive when using Azure Service Bus, and cause of those errors. Previously updated : 09/20/2021 Last updated : 08/16/2023 # AMQP errors in Azure Service Bus
-This article provides some of the errors you receive when using AMQP with Azure Service Bus. They are all standard behaviors of the service. You can avoid them by making send/receive calls on the connection/link, which automatically recreates the connection/link.
+This article provides some of the errors you receive when using AMQP with Azure Service Bus. They're all standard behaviors of the service. You can avoid them by making send/receive calls on the connection/link, which automatically recreates the connection/link.
## Link is closed

You see the following error when the AMQP connection and link are active but no calls (for example, send or receive) are made using the link for 10 minutes. So, the link is closed. The connection is still open.
amqp:link:detach-forced:The link 'G2:7223832:user.tenant0.cud_00000000000-0000-0
```

## Connection is closed
-You see the following error on the AMQP connection when all links in the connection have been closed because there was no activity (idle) and a new link has not been created in 5 minutes.
+You see the following error on the AMQP connection when all links in the connection have been closed because there was no activity (idle) and a new link hasn't been created in 5 minutes.
```
Error{condition=amqp:connection:forced, description='The connection was inactive for more than the allowed 300000 milliseconds and is closed by container 'LinkTracker'. TrackingId:00000000000000000000000000000000000_G21, SystemTracker:gateway5, Timestamp:2019-03-06T17:32:00', info=null}
```
-## Link is not created
-You see this error when a new AMQP connection is created but a link is not created within 1 minute of the creation of the AMQP Connection.
+## Link isn't created
+You see this error when a new AMQP connection is created but a link isn't created within 1 minute of the creation of the AMQP Connection.
```
Error{condition=amqp:connection:forced, description='The connection was inactive for more than the allowed 60000 milliseconds and is closed by container 'LinkTracker'. TrackingId:0000000000000000000000000000000000000_G21, SystemTracker:gateway5, Timestamp:2019-03-06T18:41:51', info=null}
service-bus-messaging Service Bus Geo Dr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-geo-dr.md
If you try to create a pairing between a primary namespace with a private endpoi
> [!NOTE]
> When you try to pair the primary namespace with a private endpoint and the secondary namespace, the validation process only checks whether a private endpoint exists on the secondary namespace. It doesn't check whether the endpoint works or will work after failover. It's your responsibility to ensure that the secondary namespace with private endpoint will work as expected after failover.
>
-> To test that the private endpoint configurations are same, send a [Get queues](/rest/api/servicebus/stable/queues/get) request to the secondary namespace from outside the virtual network, and verify that you receive an error message from the service.
+> To test that the private endpoint configurations are the same, send a [Get queues](/rest/api/servicebus/controlplane-stable/queues/get) request to the secondary namespace from outside the virtual network, and verify that you receive an error message from the service.
### Existing pairings If pairing between primary and secondary namespace already exists, private endpoint creation on the primary namespace will fail. To resolve, create a private endpoint on the secondary namespace first and then create one for the primary namespace.
Azure Active Directory (Azure AD) role-based access control (RBAC) assignments t
## Next steps -- See the Geo-disaster recovery [REST API reference here](/rest/api/servicebus/stable/disasterrecoveryconfigs).
+- See the Geo-disaster recovery [REST API reference here](/rest/api/servicebus/controlplane-stable/disaster-recovery-configs).
- Run the Geo-disaster recovery [sample on GitHub](https://github.com/Azure/azure-service-bus/tree/master/samples/DotNet/Microsoft.ServiceBus.Messaging/GeoDR/SBGeoDR2/SBGeoDR2). - See the Geo-disaster recovery [sample that sends messages to an alias](https://github.com/Azure/azure-service-bus/tree/master/samples/DotNet/Microsoft.ServiceBus.Messaging/GeoDR/TestGeoDR/ConsoleApp1).
service-bus-messaging Service Bus Ip Filtering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-ip-filtering.md
The IP firewall rules are applied at the Service Bus namespace level. Therefore,
- Azure Functions > [!NOTE]
-> You see the **Networking** tab only for **premium** namespaces. To set IP firewall rules for the other tiers, use [Azure Resource Manager templates](#use-resource-manager-template).
+> You see the **Networking** tab only for **premium** namespaces. To set IP firewall rules for the other tiers, use [Azure Resource Manager templates](#use-resource-manager-template), [Azure CLI](#use-azure-cli), [PowerShell](#use-azure-powershell), or [REST API](#rest-api).
## Use Azure portal
Use the following Azure PowerShell commands to add, list, remove, update, and de
- [`New-AzServiceBusIPRuleConfig`](/powershell/module/az.servicebus/new-azservicebusipruleconfig) and [`Set-AzServiceBusNetworkRuleSet`](/powershell/module/az.servicebus/set-azservicebusnetworkruleset) together to add an IP firewall rule - [`Remove-AzServiceBusIPRule`](/powershell/module/az.servicebus/remove-azservicebusiprule) to remove an IP firewall rule.
-## default action and public network access
+## Default action and public network access
### REST API
From API version **2021-06-01-preview onwards**, the default value of the `defau
The API version **2021-06-01-preview onwards** also introduces a new property named `publicNetworkAccess`. If it's set to `Disabled`, operations are restricted to private links only. If it's set to `Enabled`, operations are allowed over the public internet.
-For more information about these properties, see [Create or Update Network Rule Set](/rest/api/servicebus/preview/namespaces-network-rule-set/create-or-update-network-rule-set) and [Create or Update Private Endpoint Connections](/rest/api/servicebus/preview/private-endpoint-connections/create-or-update).
+For more information about these properties, see [Create or Update Network Rule Set](/rest/api/servicebus/controlplane-preview/namespaces-network-rule-set/create-or-update-network-rule-set) and [Create or Update Private Endpoint Connections](/rest/api/servicebus/controlplane-preview/private-endpoint-connections/create-or-update).
> [!NOTE] > None of the above settings bypass validation of claims via SAS or Azure AD authentication. The authentication check always runs after the service validates the network checks that are configured by `defaultAction`, `publicNetworkAccess`, `privateEndpointConnections` settings.
service-bus-messaging Service Bus Messaging Sql Filter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-messaging-sql-filter.md
Title: Azure Service Bus Subscription Rule SQL Filter syntax | Microsoft Docs description: This article provides details about SQL filter grammar. A SQL filter supports a subset of the SQL-92 standard. Previously updated : 05/31/2022 Last updated : 08/16/2023 # Subscription Rule SQL Filter Syntax
-A *SQL filter* is one of the available filter types for Service Bus topic subscriptions. It's a text expression that leans on a subset of the SQL-92 standard. Filter expressions are used with the `sqlExpression` element of the 'sqlFilter' property of a Service Bus `Rule` in an [Azure Resource Manager template](service-bus-resource-manager-namespace-topic-with-rule.md), or the Azure CLI `az servicebus topic subscription rule create` command's [`--filter-sql-expression`](/cli/azure/servicebus/topic/subscription/rule#az-servicebus-topic-subscription-rule-create) argument, and several SDK functions that allow managing subscription rules. The allowed expressions are shown below.
+A *SQL filter* is one of the available filter types for Service Bus topic subscriptions. It's a text expression that leans on a subset of the SQL-92 standard. Filter expressions are used with the `sqlExpression` element of the `sqlFilter` property of a Service Bus `Rule` in an [Azure Resource Manager template](service-bus-resource-manager-namespace-topic-with-rule.md), or the Azure CLI `az servicebus topic subscription rule create` command's [`--filter-sql-expression`](/cli/azure/servicebus/topic/subscription/rule#az-servicebus-topic-subscription-rule-create) argument, and several SDK functions that allow managing subscription rules. The allowed expressions are shown in this section.
Service Bus Premium also supports the [JMS SQL message selector syntax](https://docs.oracle.com/javaee/7/api/javax/jms/Message.html) through the JMS 2.0 API.
Service Bus Premium also supports the [JMS SQL message selector syntax](https://
## Remarks
-An attempt to access a non-existent system property is an error, while an attempt to access a non-existent user property isn't an error. Instead, a non-existent user property is internally evaluated as an unknown value. An unknown value is treated specially during operator evaluation.
+An attempt to access a nonexistent system property is an error, while an attempt to access a nonexistent user property isn't an error. Instead, a nonexistent user property is internally evaluated as an unknown value. An unknown value is treated specially during operator evaluation.
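As an illustrative sketch (not from the article), the following Python snippet uses the `azure-servicebus` administration client to attach a SQL filter rule to a subscription; the topic, subscription, and rule names are placeholders:

```python
from azure.servicebus.management import ServiceBusAdministrationClient, SqlRuleFilter

CONN_STR = "<service-bus-connection-string>"  # placeholder

with ServiceBusAdministrationClient.from_connection_string(CONN_STR) as admin_client:
    # Select only high-priority order messages for this subscription.
    # If 'priority' isn't set on a message, it evaluates as unknown and
    # the message isn't selected.
    admin_client.create_rule(
        topic_name="orders",
        subscription_name="high-priority",
        rule_name="HighPriorityFilter",
        filter=SqlRuleFilter("user.priority > 3 AND sys.label = 'order'"),
    )
```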
## property_name
Consider the following SQL filter semantics:
### Property evaluation semantics -- An attempt to evaluate a non-existent system property throws a `FilterException` exception.
+- An attempt to evaluate a nonexistent system property throws a `FilterException` exception.
- A property that doesn't exist is internally evaluated as **unknown**.
service-bus-messaging Service Bus Migrate Standard Premium https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-migrate-standard-premium.md
Title: Migrate Azure Service Bus namespaces - standard to premium
description: Guide to allow migration of existing Azure Service Bus standard namespaces to premium Previously updated : 06/27/2022 Last updated : 08/17/2023 # Migrate existing Azure Service Bus standard namespaces to the premium tier
Previously, Azure Service Bus offered namespaces only on the standard tier. Name
This article describes how to migrate existing standard tier namespaces to the premium tier. >[!WARNING]
-> Migration is intended for Service Bus standard namespaces to be upgraded to the premium tier. The migration tool does not support downgrading.
+> Migration is intended for Service Bus standard namespaces to be upgraded to the premium tier. The migration tool doesn't support downgrading.
Some of the points to note: - This migration is meant to happen in place, meaning that existing sender and receiver applications **don't require any changes to code or configuration**. The existing connection string will automatically point to the new premium namespace.-- The **premium** namespace should have **no entities** in it for the migration to succeed.
+- If you're using an existing premium namespace, the **premium** namespace should have **no entities** in it for the migration to succeed.
- All **entities** in the standard namespace are **copied** to the premium namespace during the migration process. - Migration supports **1,000 entities per messaging unit** on the premium tier. To identify how many messaging units you need, start with the number of entities that you have on your current standard namespace. - You can't directly migrate from **basic tier** to **premium tier**, but you can do so indirectly by migrating from basic to standard first and then from the standard to premium in the next step.-- The role-based access control (RBAC) settings are not migrated, so you will need to add them manually after the migration.
+- The role-based access control (RBAC) settings aren't migrated, so you'll need to add them manually after the migration.
## Migration steps Some conditions are associated with the migration process. Familiarize yourself with the following steps to reduce the possibility of errors. These steps outline the migration process, and the step-by-step details are listed in the sections that follow.
-1. Create a new premium namespace.
+1. Create a new premium namespace. You complete the next three steps using the following CLI or Azure portal instructions in this article.
1. Pair the standard and premium namespaces to each other. 1. Sync (copy-over) entities from the standard to the premium namespace. 1. Commit the migration.
To migrate your Service Bus standard namespace to premium by using the Azure CLI
1. Create a new Service Bus premium namespace. You can reference the [Azure Resource Manager templates](service-bus-resource-manager-namespace.md) or [use the Azure portal](service-bus-quickstart-portal.md#create-a-namespace-in-the-azure-portal). Be sure to select **premium** for the **serviceBusSku** parameter.
-1. Set the following environment variables to simplify the migration commands.
+1. Set the following environment variables to simplify the migration commands. You can get the Azure Resource Manager ID for your premium namespace by navigating to the namespace in the Azure portal and copying the portion of the URL that looks like the following sample: `/subscriptions/00000000-0000-0000-0000-00000000000000/resourceGroups/contosoresourcegroup/providers/Microsoft.ServiceBus/namespaces/contosopremiumnamespace`.
``` resourceGroup = <resource group for the standard namespace>
To migrate your Service Bus standard namespace to premium by using the Azure CLI
Migration by using the Azure portal has the same logical flow as migrating by using the commands. Follow these steps to migrate by using the Azure portal.
-1. On the **Navigation** menu in the left pane, select **Migrate to premium**. Click the **Get Started** button to continue to the next page.
+1. On the **Navigation** menu in the left pane, select **Migrate to premium**. Select the **Get Started** button to continue to the next page.
:::image type="content" source="./media/service-bus-standard-premium-migration/migrate-premium-page.png" alt-text="Image showing the Migrate to premium page."::: 1. You see the following **Setup Namespaces** page.
Migration by using the Azure portal has the same logical flow as migrating by us
## Caveats
-Some of the features provided by Azure Service Bus Standard tier are not supported by Azure Service Bus Premium tier. These are by design since the premium tier offers dedicated resources for predictable throughput and latency.
+Some of the features provided by the Azure Service Bus standard tier aren't supported by the Azure Service Bus premium tier. These differences are by design, because the premium tier offers dedicated resources for predictable throughput and latency.
-Here is a list of features not supported by Premium and their mitigation -
+Here's a list of features not supported by the premium tier and their mitigations:
### Express entities
-Express entities that don't commit any message data to storage are not supported in the **Premium** tier. Dedicated resources provided significant throughput improvement while ensuring that data is persisted, as is expected from any enterprise messaging system.
+Express entities that don't commit any message data to storage aren't supported in the **Premium** tier. Dedicated resources provide a significant throughput improvement while ensuring that data is persisted, as is expected from any enterprise messaging system.
During migration, any express entities in your standard namespace are created in the premium namespace as non-express entities.
-If you utilize Azure Resource Manager (ARM) templates, please ensure that you remove the 'enableExpress' flag from the deployment configuration so that your automated workflows execute without errors.
+If you use Azure Resource Manager templates, make sure to remove the `enableExpress` flag from the deployment configuration so that your automated workflows run without errors.
### RBAC settings The role-based access control (RBAC) settings on the namespace aren't migrated to the premium namespace. You'll need to add them manually after the migration.
The role-based access control (RBAC) settings on the namespace aren't migrated t
After the migration is committed, the connection string that pointed to the standard namespace will point to the premium namespace.
-The sender and receiver applications will disconnect from the standard Namespace and reconnect to the premium namespace automatically.
+The sender and receiver applications will disconnect from the standard namespace and reconnect to the premium namespace automatically.
-If your are using the ARM Id for configuration rather a connection string (e.g. as a destination for an Event Grid Subscription), then you need to update the ARM Id to be that of the Premium namespace.
+If you're using the Azure Resource Manager ID for configuration rather than a connection string (for example, as a destination for an Event Grid subscription), you need to update the Azure Resource Manager ID to be that of the premium namespace.
### What do I do after the standard to premium migration is complete? The standard to premium migration ensures that the entity metadata such as topics, subscriptions, and filters are copied from the standard namespace to the premium namespace. The message data that was committed to the standard namespace isn't copied from the standard namespace to the premium namespace.
-The standard namespace may have some messages that were sent and committed while the migration was underway. Manually drain these messages from the standard Namespace and manually send them to the premium Namespace. To manually drain the messages, use a console app or a script that drains the standard namespace entities by using the Post Migration DNS name that you specified in the migration commands. Send these messages to the premium namespace so that they can be processed by the receivers.
+The standard namespace may have some messages that were sent and committed while the migration was underway. Manually drain these messages from the standard namespace and manually send them to the premium namespace. To manually drain the messages, use a console app or a script that drains the standard namespace entities by using the post-migration DNS name that you specified in the migration commands. Send these messages to the premium namespace so that they can be processed by the receivers.
After the messages have been drained, delete the standard namespace. >[!IMPORTANT]
-> After the messages from the standard namespace have been drained, delete the standard namespace. This is important because the connection string that initially referred to the standard namespace now refers to the premium namespace. You won't need the standard Namespace anymore. Deleting the standard namespace that you migrated helps reduce later confusion.
+> After the messages from the standard namespace have been drained, delete the standard namespace. This is important because the connection string that initially referred to the standard namespace now refers to the premium namespace. You won't need the standard namespace anymore. Deleting the standard namespace that you migrated helps reduce later confusion.
### How much downtime do I expect?
During migration, the actual message data/payload isn't copied from the standard
However, if you can migrate during a planned maintenance/housekeeping window, and you don't want to manually drain and send the messages, follow these steps: 1. Stop the sender applications. The receiver applications will process the messages that are currently in the standard namespace and will drain the queue.
-1. After the queues and subscriptions in the standard Namespace are empty, follow the procedure that is described earlier to execute the migration from the standard to the premium namespace.
+1. After the queues and subscriptions in the standard namespace are empty, follow the procedure that is described earlier to execute the migration from the standard to the premium namespace.
1. After the migration is complete, you can restart the sender applications. 1. The senders and receivers will now automatically connect with the premium namespace.
service-bus-messaging Service Bus Prefetch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-prefetch.md
Title: Prefetch messages from Azure Service Bus description: Improve performance by prefetching messages from Azure Service Bus queues or subscriptions. Messages are readily available for local retrieval before the application requests for them. Previously updated : 12/15/2022 Last updated : 08/29/2023 ms.devlang: csharp,java,javascript,python # Prefetch Azure Service Bus messages
-When you enable the **Prefetch** feature for any of the official Service Bus clients, the receiver acquires more messages than what the application initially asked for, up to the specified prefetch count. As messages are returned to the application, the client acquires further messages in the background, to fill the prefetch buffer.
+When you enable the **Prefetch** feature for any of the official Service Bus clients, the receiver acquires more messages than what the application initially asked for, up to the specified prefetch count. As messages are returned to the application, the client acquires more messages in the background, to fill the prefetch buffer.
## Enable Prefetch To enable the Prefetch feature, set the prefetch count of the queue or subscription client to a number greater than zero. Setting the value to zero turns off prefetch. # [.NET](#tab/dotnet)
-If you're using the latest Azure.Messaging.ServiceBus library, you can set the prefetch count property on the [ServiceBusReceiver](/dotnet/api/azure.messaging.servicebus.servicebusreceiver.prefetchcount#Azure_Messaging_ServiceBus_ServiceBusReceiver_PrefetchCount) and [ServiceBusProcessor](/dotnet/api/azure.messaging.servicebus.servicebusprocessor.prefetchcount#Azure_Messaging_ServiceBus_ServiceBusProcessor_PrefetchCount) objects.
+Set the prefetch count property on the [ServiceBusReceiver](/dotnet/api/azure.messaging.servicebus.servicebusreceiver.prefetchcount#Azure_Messaging_ServiceBus_ServiceBusReceiver_PrefetchCount) and [ServiceBusProcessor](/dotnet/api/azure.messaging.servicebus.servicebusprocessor.prefetchcount#Azure_Messaging_ServiceBus_ServiceBusProcessor_PrefetchCount) objects.
-If you're using the older .NET client library for Service Bus (Microsoft.Azure.ServiceBus), you can set the prefetch count property on the [MessageReceiver](/dotnet/api/microsoft.servicebus.messaging.messagereceiver.prefetchcount), [QueueClient](/dotnet/api/microsoft.azure.servicebus.queueclient.prefetchcount#Microsoft_Azure_ServiceBus_QueueClient_PrefetchCount) or the [SubscriptionClient](/dotnet/api/microsoft.azure.servicebus.subscriptionclient.prefetchcount).
-
# [Java](#tab/java)
-If you're using the latest azure-messaging-servicebus library, you can set the prefetch count property on the [ServiceBusReceiver](/dotnet/api/azure.messaging.servicebus.servicebusreceiver.prefetchcount#Azure_Messaging_ServiceBus_ServiceBusReceiver_PrefetchCount) and [ServiceBusProcessor](/dotnet/api/azure.messaging.servicebus.servicebusprocessor.prefetchcount#Azure_Messaging_ServiceBus_ServiceBusProcessor_PrefetchCount) objects.
+Set the prefetch count property on the [ServiceBusReceiver](/dotnet/api/azure.messaging.servicebus.servicebusreceiver.prefetchcount#Azure_Messaging_ServiceBus_ServiceBusReceiver_PrefetchCount) and [ServiceBusProcessor](/dotnet/api/azure.messaging.servicebus.servicebusprocessor.prefetchcount#Azure_Messaging_ServiceBus_ServiceBusProcessor_PrefetchCount) objects.
-If you're using the older Java client library for Service Bus (azure-servicebus), you can set the prefetch count property on the [MessageReceiver](/java/api/com.microsoft.azure.servicebus.imessagereceiver.setprefetchcount#com_microsoft_azure_servicebus_IMessageReceiver_setPrefetchCount_int_), [QueueClient](/java/api/com.microsoft.azure.servicebus.queueclient.setprefetchcount#com_microsoft_azure_servicebus_QueueClient_setPrefetchCount_int_) or the [SubscriptionClient](/java/api/com.microsoft.azure.servicebus.subscriptionclient.setprefetchcount#com_microsoft_azure_servicebus_SubscriptionClient_setPrefetchCount_int_).
-
# [Python](#tab/python) You can set **prefetch_count** on the [azure.servicebus.ServiceBusReceiver](/python/api/azure-servicebus/azure.servicebus.servicebusreceiver) or [azure.servicebus.aio.ServiceBusReceiver](/python/api/azure-servicebus/azure.servicebus.aio.servicebusreceiver).
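For example, a minimal Python sketch (the connection string and queue name are placeholders) that enables prefetch on a queue receiver:

```python
from azure.servicebus import ServiceBusClient

CONN_STR = "<service-bus-connection-string>"  # placeholder

with ServiceBusClient.from_connection_string(CONN_STR) as client:
    # Keep up to 100 messages buffered locally, ahead of the application's receive calls.
    receiver = client.get_queue_receiver("<queue-name>", prefetch_count=100)
    with receiver:
        for message in receiver.receive_messages(max_message_count=10, max_wait_time=5):
            receiver.complete_message(message)
```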
While messages are available in the prefetch buffer, any subsequent receive call
## Why is Prefetch not the default option? Prefetch speeds up the message flow by having a message readily available for local retrieval before the application asks for one. This throughput gain is the result of a trade-off that the application author must make explicitly.
-With the [receive-and-delete](message-transfers-locks-settlement.md#receiveanddelete) mode, all messages that are acquired into the prefetch buffer are no longer available in the queue. The messages stay only in the in-memory prefetch buffer until they're received into the application. If the application ends before the messages are received into the application, those messages are irrecoverable (lost).
+When you use the [receive and delete](message-transfers-locks-settlement.md#receiveanddelete) mode, all messages that are acquired into the prefetch buffer are no longer available in the queue. The messages stay only in the in-memory prefetch buffer until they're received into the application. If the application ends before the messages are received into the application, those messages are irrecoverable (lost).
-In the [peek-lock](message-transfers-locks-settlement.md#peeklock) receive mode, messages fetched into the prefetch buffer are acquired into the buffer in a locked state. They have the timeout clock for the lock ticking. If the prefetch buffer is large, and processing takes so long that message locks expire while staying in the prefetch buffer or even while the application is processing the message, there might be some confusing events for the application to handle.
+When you use the [peek lock](message-transfers-locks-settlement.md#peeklock) receive mode, messages fetched into the prefetch buffer are acquired into the buffer in a locked state, with the lock's timeout clock already ticking. If the prefetch buffer is large, and processing takes so long that message locks expire while they sit in the prefetch buffer or even while the application is processing the message, there might be some confusing events for the application to handle. The application might acquire a message with an expired or imminently expiring lock. If so, the application might process the message, but then find that it can't complete the message because of a lock expiration. The application can check the `LockedUntilUtc` property (which is subject to clock skew between the broker and the local machine clock).
-The application might acquire a message with an expired or imminently expiring lock. If so, the application might process the message, but then find that it can't complete the message because of a lock expiration. The application can check the `LockedUntilUtc` property (which is subject to clock skew between the broker and local machine clock). If the message lock has expired, the application must ignore the message, and shouldn't make any API call on the message. If the message isn't expired but expiration is imminent, the lock can be renewed and extended by another default lock period.
+If the message lock has expired, the application must ignore the message, and shouldn't make any API call on the message. If the message isn't expired but expiration is imminent, the lock can be renewed and extended by another default lock period. If the lock silently expires in the prefetch buffer, the message is treated as abandoned and is again made available for retrieval from the queue. It might cause the message to be fetched into the prefetch buffer and placed at the end. If the prefetch buffer can't usually be worked through during the message expiration, messages are repeatedly prefetched but never effectively delivered in a usable (validly locked) state, and are eventually moved to the dead-letter queue once the maximum delivery count is exceeded.
-If the lock silently expires in the prefetch buffer, the message is treated as abandoned and is again made available for retrieval from the queue. It might cause the message to be fetched into the prefetch buffer and placed at the end. If the prefetch buffer can't usually be worked through during the message expiration, messages are repeatedly prefetched but never effectively delivered in a usable (validly locked) state, and are eventually moved to the dead-letter queue once the maximum delivery count is exceeded.
+If an application explicitly abandons a message, the message may again be available for retrieval from the queue. When the prefetch is enabled, the message is fetched into the prefetch buffer again and placed at the end. As the messages from the prefetch buffer are drained in the first-in first-out (FIFO) order, the application may receive messages out of order. For example, the application may receive a message with ID 2 and then a message with ID 1 (that was abandoned earlier) from the buffer.
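As a sketch of the lock check described above, in Python with the `azure-servicebus` package (the 10-second renewal threshold and the helper names are illustrative assumptions):

```python
from datetime import datetime, timedelta, timezone

def process_with_lock_check(receiver, message, handle):
    """Skip expired locks, renew imminently expiring ones, then process."""
    now = datetime.now(timezone.utc)
    locked_until = message.locked_until_utc  # subject to clock skew vs. the broker
    if locked_until <= now:
        # Lock already expired: make no API call on the message; the broker redelivers it.
        return
    if locked_until - now < timedelta(seconds=10):
        # Expiration is imminent: extend the lock by another default lock period.
        receiver.renew_message_lock(message)
    handle(message)
    receiver.complete_message(message)
```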
-If you need a high degree of reliability for message processing, and processing takes significant work and time, we recommend that you use the Prefetch feature conservatively, or not at all.
-
-If you need high throughput and message processing is commonly cheap, prefetch yields significant throughput benefits.
+If you need a high degree of reliability for message processing, and processing takes significant work and time, we recommend that you use the Prefetch feature conservatively, or not at all. If you need high throughput and message processing is commonly cheap, prefetch yields significant throughput benefits.
The maximum prefetch count and the lock duration configured on the queue or subscription need to be balanced such that the lock timeout at least exceeds the cumulative expected message processing time for the maximum size of the prefetch buffer, plus one message. For example, with a prefetch count of 20 and an expected processing time of one second per message, the lock duration should be at least 21 seconds. At the same time, the lock timeout shouldn't be so long that messages can exceed their maximum time to live when they're accidentally dropped and must wait for their lock to expire before being redelivered.
Try the samples in the language of your choice to explore Azure Service Bus feat
- [Azure Service Bus client library samples for JavaScript](/samples/azure/azure-sdk-for-js/service-bus-javascript/) - [Azure Service Bus client library samples for TypeScript](/samples/azure/azure-sdk-for-js/service-bus-typescript/)
-Find samples for the older .NET and Java client libraries below:
+Samples for the older .NET and Java client libraries:
- [Azure Service Bus client library samples for .NET (legacy)](https://github.com/Azure/azure-service-bus/tree/master/samples/DotNet/Microsoft.Azure.ServiceBus/) - See the **Prefetch** sample. - [Azure Service Bus client library samples for Java (legacy)](https://github.com/Azure/azure-service-bus/tree/master/samples/Java/azure-servicebus) - See the **Prefetch** sample.
service-bus-messaging Service Bus Service Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-service-endpoints.md
From API version **2021-06-01-preview onwards**, the default value of the `defau
The API version **2021-06-01-preview onwards** also introduces a new property named `publicNetworkAccess`. If it's set to `Disabled`, operations are restricted to private links only. If it's set to `Enabled`, operations are allowed over the public internet.
-For more information about these properties, see [Create or Update Network Rule Set](/rest/api/servicebus/preview/namespaces-network-rule-set/create-or-update-network-rule-set) and [Create or Update Private Endpoint Connections](/rest/api/servicebus/preview/private-endpoint-connections/create-or-update).
+For more information about these properties, see [Create or Update Network Rule Set](/rest/api/servicebus/controlplane-preview/namespaces-network-rule-set/create-or-update-network-rule-set) and [Create or Update Private Endpoint Connections](/rest/api/servicebus/controlplane-preview/private-endpoint-connections/create-or-update).
> [!NOTE] > None of the above settings bypass validation of claims via SAS or Azure AD authentication. The authentication check always runs after the service validates the network checks that are configured by `defaultAction`, `publicNetworkAccess`, `privateEndpointConnections` settings.
service-connector How To Integrate Mysql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-mysql.md
# Integrate Azure Database for MySQL with Service Connector
-This page shows the supported authentication types and client types of Azure Database for MySQL - Flexible Server using Service Connector. You might still be able to connect to Azure Database for MySQL in other programming languages without using Service Connector. This page also shows default environment variable names and values (or Spring Boot configuration) you get when you create the service connection. You can learn more about [Service Connector environment variable naming convention](concept-service-connector-internals.md).
+This page shows the supported authentication types, client types, and sample code for Azure Database for MySQL - Flexible Server using Service Connector. This page also shows the default environment variable names and values (or Spring Boot configuration) you get when you create the service connection, along with detailed steps and sample code for connecting to the database. You can learn more about [Service Connector environment variable naming convention](concept-service-connector-internals.md).
[!INCLUDE [Azure-database-for-mysql-single-server-deprecation](../mysql/includes/azure-database-for-mysql-single-server-deprecation.md)] ## Supported compute service -- Azure App Service-- Azure Container Apps-- Azure Spring Apps
+- Azure App Service. You can get the configuration from the App Service app settings.
+- Azure Container Apps. You can get the configuration from the Container Apps environment variables.
+- Azure Spring Apps. You can get the configuration from the Spring Apps runtime.
## Supported authentication types and client types
Supported authentication and clients for App Service, Container Apps, and Azure
> [!NOTE] > System-assigned managed identity, User-assigned managed identity and Service principal are only supported on Azure CLI.
-## Default environment variable names or application properties
+## Default environment variable names or application properties and sample code
-Use the connection details below to connect compute services to Azure Database for MySQL. For each example below, replace the placeholder texts `<MySQL-DB-name>`, `<MySQL-DB-username>`, `<MySQL-DB-password>`, `<server-host>`, and `<port>` with your Azure Database for MySQL name, Azure Database for MySQL username, Azure Database for MySQL password, server host, and port.
+Refer to the connection details and sample code in the following tables, according to your connection's authentication type and client type, to connect compute services to Azure Database for MySQL.
-### .NET (MySqlConnector)
-#### .NET (MySqlConnector) System-assigned managed identity
+### System-assigned managed identity
+
+#### [.NET](#tab/dotnet)
| Default environment variable name | Description | Example value | |--||-|
-| `Azure_MYSQL_CONNECTIONSTRING ` | ADO.NET MySQL connection string | `Server=<MySQL-DB-name>.mysql.database.azure.com;Database=<MySQL-DB-name>;Port=3306;User Id=<MySQL-DBusername>;SSL Mode=Required;` |
+| `AZURE_MYSQL_CONNECTIONSTRING` | ADO.NET MySQL connection string | `Server=<MySQL-DB-name>.mysql.database.azure.com;Database=<MySQL-DB-name>;Port=3306;User Id=<MySQL-DB-username>;SSL Mode=Required;` |
-#### .NET (MySqlConnector) User-assigned managed identity
-| Default environment variable name | Description | Example value |
-|--||-|
-| `Azure_MYSQL_CLIENTID` | Your client ID | `<identity-client-ID>` |
-| `Azure_MYSQL_CONNECTIONSTRING` | ADO.NET MySQL connection string | `Server=<MySQL-DB-name>.mysql.database.azure.com;Database=<MySQL-DB-name>;Port=3306;User Id=<MySQL-DBusername>;SSL Mode=Required;` |
-#### .NET (MySqlConnector) secret / connection string
+#### [Java](#tab/java)
-| Default environment variable name | Description | Example value |
-|--||-|
-| `Azure_MYSQL_CONNECTIONSTRING` | ADO.NET MySQL connection string | `Server=<MySQL-DB-name>.mysql.database.azure.com;Database=<MySQL-DB-name>;Port=3306;User Id=<MySQL-DBusername>;Password=<MySQL-DB-password>;SSL Mode=Required` |
+| Default environment variable name | Description | Example value |
+|--|||
+| `AZURE_MYSQL_CONNECTIONSTRING` | JDBC MySQL connection string | `jdbc:mysql://<MySQL-DB-name>.mysql.database.azure.com:3306/<MySQL-DB-name>?sslmode=required&user=<MySQL-DB-username>` |
-#### .NET (MySqlConnector) Service principal
-| Default environment variable name | Description | Example value |
-|--||-|
-| `Azure_MYSQL_CLIENTID` | Your client ID | `<client-ID>` |
-| `Azure_MYSQL_CLIENTSECRET` | Your client secret | `<client-secret>` |
-| `Azure_MYSQL_TENANTID` | Your tenant ID | `<tenant-ID>` |
-| `Azure_MYSQL_CONNECTIONSTRING` | ADO.NET MySQL connection string | `Server=<MySQL-DB-name>.mysql.database.azure.com;Database=<MySQL-DB-name>;Port=3306;User Id=<MySQL-DBusername>;SSL Mode=Required` |
-### Go (go-sql-driver for mysql)
+#### [SpringBoot](#tab/spring)
-#### Go (go-sql-driver for mysql) System-assigned managed identity
+| Application properties | Description | Example value |
+|||--|
+| `spring.datasource.azure.passwordless-enabled` | Enable passwordless authentication | `true` |
+| `spring.datasource.url` | Spring Boot JDBC database URL | `jdbc:mysql://<MySQL-DB-name>.mysql.database.azure.com:3306/<MySQL-DB-name>?sslmode=required` |
+| `spring.datasource.username` | Database username | `<MySQL-DB-username>` |
-| Default environment variable name | Description | Example value |
-|--||--|
-| `Azure_MYSQL_CONNECTIONSTRING` | Go-sql-driver connection string | `<MySQL-DB-username>@tcp(<server-host>:<port>)/<MySQL-DB-name>?tls=true` |
-#### Go (go-sql-driver for mysql) User-assigned managed identity
+#### [Python](#tab/python)
-| Default environment variable name | Description | Example value |
-|--||--|
-| `Azure_MYSQL_CONNECTIONSTRING` | Go-sql-driver connection string | `<MySQL-DB-username>@tcp(<server-host>:<port>)/<MySQL-DB-name>?tls=true` |
+| Default environment variable name | Description | Example value |
+|-|-|--|
+| `AZURE_MYSQL_NAME` | Database name | `MySQL-DB-name` |
+| `AZURE_MYSQL_HOST` | Database host URL | `<MySQL-DB-name>.mysql.database.azure.com` |
+| `AZURE_MYSQL_USER` | Database username | `<MySQL-DB-username>@<MySQL-DB-name>` |
-#### Go (go-sql-driver for mysql) secret / connection string
+#### [Django](#tab/django)
+
+| Default environment variable name | Description | Example value |
+|-|-|--|
+| `AZURE_MYSQL_NAME` | Database name | `MySQL-DB-name` |
+| `AZURE_MYSQL_HOST` | Database host URL | `<MySQL-DB-name>.mysql.database.azure.com` |
+| `AZURE_MYSQL_USER` | Database username | `<MySQL-DB-username>@<MySQL-DB-name>` |
-| Default environment variable name | Description | Example value |
-|--||--|
-| `Azure_MYSQL_CONNECTIONSTRING` | Go-sql-driver connection string | `<MySQL-DB-username>:<MySQL-DB-password>@tcp(<server-host>:<port>)/<MySQL-DB-name>?tls=true` |
-#### Go (go-sql-driver for mysql) Service principal
+#### [Go](#tab/go)
| Default environment variable name | Description | Example value | |--||--|
-| `Azure_MYSQL_CONNECTIONSTRING` | Go-sql-driver connection string | `<MySQL-DB-username>@tcp(<server-host>:<port>)/<MySQL-DB-name>?tls=true` |
+| `AZURE_MYSQL_CONNECTIONSTRING` | Go-sql-driver connection string | `<MySQL-DB-username>@tcp(<server-host>:<port>)/<MySQL-DB-name>?tls=true` |
-### Java (JDBC)
-#### Java (JDBC) System-assigned managed identity
+#### [NodeJS](#tab/node)
-| Default environment variable name | Description | Example value |
-|--|||
-| `Azure_MYSQL_CONNECTIONSTRING` | JDBC MySQL connection string | `jdbc:mysql://<MySQL-DB-name>.mysql.database.azure.com:3306/<MySQL-DB-name>?sslmode=required&user=<MySQL-DB-username>` |
+| Default environment variable name | Description | Example value |
+|-|-|--|
+| `AZURE_MYSQL_HOST` | Database host URL | `<MySQL-DB-name>.mysql.database.azure.com` |
+| `AZURE_MYSQL_USER` | Database username | `MySQL-DB-username` |
+| `AZURE_MYSQL_DATABASE` | Database name | `<database-name>` |
+| `AZURE_MYSQL_PORT` | Port number | `3306` |
+| `AZURE_MYSQL_SSL` | SSL option | `true` |
-#### Java (JDBC) User-assigned managed identity
-| Default environment variable name | Description | Example value |
-|--|||
-| `Azure_MYSQL_CLIENTID` | Your client ID | `<identity-client-ID>` |
-| `Azure_MYSQL_CONNECTIONSTRING` | JDBC MySQL connection string | `jdbc:mysql://<MySQL-DB-name>.mysql.database.azure.com:3306/<MySQL-DB-name>?sslmode=required&user=<MySQL-DB-username>` |
+#### [PHP](#tab/php)
-#### Java (JDBC) secret / connection string
+| Default environment variable name | Description | Example value |
+|-|--|--|
+| `AZURE_MYSQL_DBNAME` | Database name | `<MySQL-DB-name>` |
+| `AZURE_MYSQL_HOST` | Database host URL | `<MySQL-DB-name>.mysql.database.azure.com` |
+| `AZURE_MYSQL_PORT` | Port number | `3306` |
+| `AZURE_MYSQL_FLAG` | SSL or other flags | `MySQL_CLIENT_SSL` |
+| `AZURE_MYSQL_USERNAME` | Database username | `<MySQL-DB-username>` |
-| Default environment variable name | Description | Example value |
-|--||-|
-| `Azure_MYSQL_CONNECTIONSTRING` | JDBC MySQL connection string | `jdbc:mysql://<MySQL-DB-name>.mysql.database.azure.com:3306/<MySQL-DB-name>?sslmode=required&user=<MySQL-DB-username>&password=<Uri.EscapeDataString(<MySQL-DB-password>)` |
+#### [Ruby](#tab/ruby)
-#### Java (JDBC) Service principal
+| Default environment variable name | Description | Example value |
+|-|-|--|
+| `AZURE_MYSQL_DATABASE` | Database name | `<MySQL-DB-name>` |
+| `AZURE_MYSQL_HOST` | Database host URL | `<MySQL-DB-name>.mysql.database.azure.com` |
+| `AZURE_MYSQL_USERNAME` | Database username | `<MySQL-DB-username>@<MySQL-DB-name>` |
+| `AZURE_MYSQL_SSLMODE` | SSL option | `required` |
-| Default environment variable name | Description | Example value |
-|-|||
-| `Azure_MYSQL_CLIENTID` | Your client ID | `<client-ID>` |
-| `Azure_MYSQL_CLIENTSECRET` | Your client secret | `<client-secret>` |
-| `Azure_MYSQL_TENANTID` | Your tenant ID | `<tenant-ID>` |
-| `Azure_MYSQL_CONNECTIONSTRING` | JDBC MySQL connection string | `jdbc:mysql://<MySQL-DB-name>.mysql.database.azure.com:3306/<MySQL-DB-name>?sslmode=required&user=<MySQL-DB-username>` |
-### Java - Spring Boot (JDBC)
+
-#### Java - Spring Boot (JDBC) System-assigned managed identity
+#### Sample code
-| Application properties | Description | Example value |
-|||--|
-| `spring.datasource.azure.passwordless-enabled` | Enable passwordless authentication | `true` |
-| `spring.datasource.url` | Spring Boot JDBC database URL | `jdbc:mysql://<MySQL-DB-name>.mysql.database.azure.com:3306/<MySQL-DB-name>?sslmode=required` |
-| `spring.datasource.username` | Database username | `<MySQL-DB-username>` |
+Follow these steps and use the sample code to connect to Azure Database for MySQL.
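For illustration, here's a minimal Python sketch, assuming the `mysql-connector-python` and `azure-identity` packages and the environment variables from the tables above; the token audience `https://ossrdbms-aad.database.windows.net/.default` is the standard one for Azure AD authentication to Azure Database for MySQL, and error handling is omitted:

```python
import os

import mysql.connector
from azure.identity import DefaultAzureCredential

# Acquire an access token for Azure Database for MySQL with the system-assigned identity.
credential = DefaultAzureCredential()
token = credential.get_token("https://ossrdbms-aad.database.windows.net/.default")

conn = mysql.connector.connect(
    host=os.environ["AZURE_MYSQL_HOST"],
    user=os.environ["AZURE_MYSQL_USER"],
    password=token.token,  # the Azure AD token is used in place of a password
    database=os.environ["AZURE_MYSQL_NAME"],
)
```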
-#### Java - Spring Boot (JDBC) User-assigned managed identity
-| Application properties | Description | Example value |
-|--|--||
-| `spring.datasource.azure.passwordless-enabled` | Enable passwordless authentication| `true` |
-| `spring.cloud.azure.credential.client-id` | Your client ID | `<identity-client-ID>` |
-| `spring.cloud.azure.credential.client-managed-identity-enabled` | Enable client managed identity | `true` |
-| `spring.datasource.url` | Database URL | `jdbc:mysql://<MySQL-DB-name>.mysql.database.azure.com:3306/<MySQL-DB-name>?sslmode=required` |
-| `spring.datasource.username` | Database username | `username` |
+### User-assigned managed identity
+#### [.NET](#tab/dotnet)
+
+| Default environment variable name | Description | Example value |
+|--||-|
+| `AZURE_MYSQL_CLIENTID` | Your client ID | `<identity-client-ID>` |
+| `AZURE_MYSQL_CONNECTIONSTRING` | ADO.NET MySQL connection string | `Server=<MySQL-DB-name>.mysql.database.azure.com;Database=<MySQL-DB-name>;Port=3306;User Id=<MySQL-DB-username>;SSL Mode=Required;` |
-#### Java - Spring Boot (JDBC) secret / connection string
-| Application properties | Description | Example value |
-||-|--|
-| `spring.datasource.url` | Spring Boot JDBC database URL | `jdbc:mysql://<MySQL-DB-name>.mysql.database.azure.com:3306/<MySQL-DB-name>?sslmode=required` |
-| `spring.datasource.username` | Database username | `<MySQL-DB-username>` |
-| `spring.datasource.password` | Database password | `MySQL-DB-password`
+#### [Java](#tab/java)
+
+| Default environment variable name | Description | Example value |
+|--|||
+| `AZURE_MYSQL_CLIENTID` | Your client ID | `<identity-client-ID>` |
+| `AZURE_MYSQL_CONNECTIONSTRING` | JDBC MySQL connection string | `jdbc:mysql://<MySQL-DB-name>.mysql.database.azure.com:3306/<MySQL-DB-name>?sslmode=required&user=<MySQL-DB-username>` |
+
-#### Java - Spring Boot (JDBC) Service principal
+
+#### [SpringBoot](#tab/spring)
| Application properties | Description | Example value | |--|--||
-| `spring.datasource.azure.passwordless-enabled` | Enable passwordless authentication| `true` |
-| `spring.cloud.azure.credential.client-id` | Your client ID | `<client-ID>` |
-| `spring.cloud.azure.credential.client-secret` | Your client secret | `<client-secret>` |
-| `spring.cloud.azure.credential.tenant-id` | Your tenant ID | `<tenant-ID>` |
+| `spring.datasource.azure.passwordless-enabled` | Enable passwordless authentication| `true` |
+| `spring.cloud.azure.credential.client-id` | Your client ID | `<identity-client-ID>` |
+| `spring.cloud.azure.credential.client-managed-identity-enabled` | Enable client managed identity | `true` |
| `spring.datasource.url` | Database URL | `jdbc:mysql://<MySQL-DB-name>.mysql.database.azure.com:3306/<MySQL-DB-name>?sslmode=required` |
-| `spring.datasource.username` | Database username | `username` |
+| `spring.datasource.username` | Database username | `username` |
-### Node.js (mysql)
-#### Node.js (mysql) System-assigned managed identity
+#### [Python](#tab/python)
| Default environment variable name | Description | Example value | |-|-|--|
-| `Azure_MYSQL_HOST` | Database Host URL | `<MySQL-DB-name>.mysql.database.azure.com` |
-| `Azure_MYSQL_USER` | Database Username | `MySQL-DB-username` |
-| `Azure_MYSQL_DATABASE` | Database name | `<database-name>` |
-| `Azure_MYSQL_PORT` | Port number | `3306` |
-| `Azure_MYSQL_SSL` | SSL option | `true` |
+| `AZURE_MYSQL_NAME` | Database name | `MySQL-DB-name` |
+| `AZURE_MYSQL_HOST` | Database host URL | `<MySQL-DB-name>.mysql.database.azure.com` |
+| `AZURE_MYSQL_USER` | Database username | `<MySQL-DB-username>@<MySQL-DB-name>` |
+| `AZURE_MYSQL_CLIENTID` | Your client ID | `<identity-client-ID>` |
-#### Node.js (mysql) User-assigned managed identity
+#### [Django](#tab/django)
| Default environment variable name | Description | Example value | |-|-|--|
-| `Azure_MYSQL_HOST` | Database Host URL | `<MySQL-DB-name>.mysql.database.azure.com` |
-| `Azure_MYSQL_USER` | Database Username | `MySQL-DB-username` |
-| `Azure_MYSQL_DATABASE` | Database name | `<database-name>` |
-| `Azure_MYSQL_PORT` | Port number | `3306` |
-| `Azure_MYSQL_SSL` | SSL option | `true` |
-| `Azure_MYSQL_CLIENTID` | Your client ID | `<identity-client-ID>` |
+| `AZURE_MYSQL_NAME` | Database name | `MySQL-DB-name` |
+| `AZURE_MYSQL_HOST` | Database host URL | `<MySQL-DB-name>.mysql.database.azure.com` |
+| `AZURE_MYSQL_USER` | Database username | `<MySQL-DB-username>@<MySQL-DB-name>` |
+| `AZURE_MYSQL_CLIENTID` | Your client ID | `<identity-client-ID>` |
-#### Node.js (mysql) secret / connection string
-| Default environment variable name | Description | Example value |
-|-|-|--|
-| `Azure_MYSQL_HOST` | Database Host URL | `<MySQL-DB-name>.mysql.database.azure.com` |
-| `Azure_MYSQL_USER` | Database Username | `MySQL-DB-username` |
-| `Azure_MYSQL_PASSWORD` | Database password | `MySQL-DB-password` |
-| `Azure_MYSQL_DATABASE` | Database name | `<database-name>` |
-| `Azure_MYSQL_PORT` | Port number | `3306` |
-| `Azure_MYSQL_SSL` | SSL option | `true` |
+#### [Go](#tab/go)
-#### Node.js (mysql) Service principal
+| Default environment variable name | Description | Example value |
+|--||--|
+| `AZURE_MYSQL_CLIENTID` | Your client ID | `<identity-client-ID>` |
+| `AZURE_MYSQL_CONNECTIONSTRING` | Go-sql-driver connection string | `<MySQL-DB-username>@tcp(<server-host>:<port>)/<MySQL-DB-name>?tls=true` |
-| Default environment variable name | Description | Example value |
-|-|--|--|
-| `Azure_MYSQL_HOST ` | Database host URL | `<MySQL-DB-name>.mysql.database.azure.com` |
-| `Azure_MYSQL_USER` | Database username | `MySQL-DB-username` |
-| `Azure_MYSQL_DATABASE` | Database name | `<database-name>` |
-| `Azure_MYSQL_PORT ` | Port number | `3306` |
-| `Azure_MYSQL_SSL` | SSL option | `true` |
-| `Azure_MYSQL_CLIENTID` | Your client ID | `<client-ID>` |
-| `Azure_MYSQL_CLIENTSECRET` | Your client secret | `<client-secret>` |
-| `Azure_MYSQL_TENANTID` | Your tenant ID | `<tenant-ID>` |
-### Python (mysql-connector-python)
-#### Python (mysql-connector-python) System-assigned managed identity
+#### [NodeJS](#tab/node)
| Default environment variable name | Description | Example value | |-|-|--|
-| `Azure_MYSQL_NAME` | Database name | `MySQL-DB-name` |
-| `Azure_MYSQL_HOST ` | Database Host URL | `<MySQL-DB-name>.mysql.database.azure.com` |
-| `Azure_MYSQL_USER` | Database Username | `<MySQL-DB-username>@<MySQL-DB-name>` |
+| `AZURE_MYSQL_HOST` | Database host URL | `<MySQL-DB-name>.mysql.database.azure.com` |
+| `AZURE_MYSQL_USER` | Database username | `MySQL-DB-username` |
+| `AZURE_MYSQL_DATABASE` | Database name | `<database-name>` |
+| `AZURE_MYSQL_PORT` | Port number | `3306` |
+| `AZURE_MYSQL_SSL` | SSL option | `true` |
+| `AZURE_MYSQL_CLIENTID` | Your client ID | `<identity-client-ID>` |
-#### Python (mysql-connector-python) User-assigned managed identity
-| Default environment variable name | Description | Example value |
-|-|-|--|
-| `Azure_MYSQL_NAME` | Database name | `MySQL-DB-name` |
-| `Azure_MYSQL_HOST` | Database Host URL | `<MySQL-DB-name>.mysql.database.azure.com` |
-| `Azure_MYSQL_USER` | Database Username | `<MySQL-DB-username>@<MySQL-DB-name>` |
-| `Azure_MYSQL_CLIENTID` | Your client ID | `identity-client-ID` |
+#### [PHP](#tab/php)
-#### Python (mysql-connector-python) secret / connection string
+| Default environment variable name | Description | Example value |
+|-|--|--|
+| `AZURE_MYSQL_DBNAME` | Database name | `<MySQL-DB-name>` |
+| `AZURE_MYSQL_HOST` | Database host URL | `<MySQL-DB-name>.mysql.database.azure.com` |
+| `AZURE_MYSQL_PORT` | Port number | `3306` |
+| `AZURE_MYSQL_FLAG` | SSL or other flags | `MySQL_CLIENT_SSL` |
+| `AZURE_MYSQL_USERNAME` | Database username | `<MySQL-DB-username>@<MySQL-DB-name>` |
+| `AZURE_MYSQL_CLIENTID` | Your client ID | `<identity-client-ID>` |
+
+#### [Ruby](#tab/ruby)
| Default environment variable name | Description | Example value | |-|-|--|
-| `Azure_MYSQL_NAME` | Database name | `MySQL-DB-name` |
-| `Azure_MYSQL_HOST` | Database Host URL | `<MySQL-DB-name>.mysql.database.azure.com` |
-| `Azure_MYSQL_USER` | Database Username | `<MySQL-DB-username>@<MySQL-DB-name>` |
-| `Azure_MYSQL_PASSWORD` | Database password | `MySQL-DB-password` |
+| `AZURE_MYSQL_DATABASE` | Database name | `<MySQL-DB-name>` |
+| `AZURE_MYSQL_HOST` | Database host URL | `<MySQL-DB-name>.mysql.database.azure.com` |
+| `AZURE_MYSQL_USERNAME` | Database username | `<MySQL-DB-username>@<MySQL-DB-name>` |
+| `AZURE_MYSQL_SSLMODE` | SSL option | `required` |
+| `AZURE_MYSQL_CLIENTID` | Your client ID | `<identity-client-ID>` |
++
-#### Python (mysql-connector-python) Service principal
+#### Sample code
+
+Follow these steps and use the sample code to connect to Azure Database for MySQL.
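As a sketch under the same assumptions as the system-assigned example, but selecting the user-assigned identity through the `AZURE_MYSQL_CLIENTID` variable from the tables above:

```python
import os

import mysql.connector
from azure.identity import ManagedIdentityCredential

# Use the user-assigned identity whose client ID Service Connector stored.
credential = ManagedIdentityCredential(client_id=os.environ["AZURE_MYSQL_CLIENTID"])
token = credential.get_token("https://ossrdbms-aad.database.windows.net/.default")

conn = mysql.connector.connect(
    host=os.environ["AZURE_MYSQL_HOST"],
    user=os.environ["AZURE_MYSQL_USER"],
    password=token.token,  # the Azure AD token is used in place of a password
    database=os.environ["AZURE_MYSQL_NAME"],
)
```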
-| Default environment variable name | Description | Example value |
-|-|--|--|
-| `Azure_MYSQL_NAME` | Database name | `MySQL-DB-name` |
-| `Azure_MYSQL_HOST` | Database Host URL | `<MySQL-DB-name>.mysql.database.azure.com` |
-| `Azure_MYSQL_USER` | Database Username | `<MySQL-DB-username>@<MySQL-DB-name>` |
-| `Azure_MYSQL_CLIENTID` | Your client ID | `<client-ID>` |
-| `Azure_MYSQL_CLIENTSECRET` | Your client secret | `<client-secret>` |
-| `Azure_MYSQL_TENANTID` | Your tenant ID | `<tenant-ID>` |
-#### Python-Django System-assigned managed identity
+
+### Connection string
+
+#### [.NET](#tab/dotnet)
+
+| Default environment variable name | Description | Example value |
+|--||-|
+| `AZURE_MYSQL_CONNECTIONSTRING` | ADO.NET MySQL connection string | `Server=<MySQL-DB-name>.mysql.database.azure.com;Database=<MySQL-DB-name>;Port=3306;User Id=<MySQL-DB-username>;Password=<MySQL-DB-password>;SSL Mode=Required` |
+
+#### [Java](#tab/java)
+
+| Default environment variable name | Description | Example value |
+|--||-|
+| `AZURE_MYSQL_CONNECTIONSTRING` | JDBC MySQL connection string | `jdbc:mysql://<MySQL-DB-name>.mysql.database.azure.com:3306/<MySQL-DB-name>?sslmode=required&user=<MySQL-DB-username>&password=<Uri.EscapeDataString(<MySQL-DB-password>)` |
++
+#### [SpringBoot](#tab/spring)
+
+| Application properties | Description | Example value |
+||-|--|
+| `spring.datasource.url` | Spring Boot JDBC database URL | `jdbc:mysql://<MySQL-DB-name>.mysql.database.azure.com:3306/<MySQL-DB-name>?sslmode=required` |
+| `spring.datasource.username` | Database username | `<MySQL-DB-username>` |
+| `spring.datasource.password` | Database password | `MySQL-DB-password` |
+
+After you create a `springboot` client type connection, Service Connector automatically adds the properties `spring.datasource.url`, `spring.datasource.username`, and `spring.datasource.password`, so the Spring Boot application can configure its data source beans automatically.
+++
+#### [Python](#tab/python)
| Default environment variable name | Description | Example value | |-|-|--|
-| `Azure_MYSQL_NAME` | Database name | `MySQL-DB-name` |
-| `Azure_MYSQL_HOST` | Database Host URL | `<MySQL-DB-name>.mysql.database.azure.com` |
-| `Azure_MYSQL_USER` | Database Username | `<MySQL-DB-username>@<MySQL-DB-name>` |
+| `AZURE_MYSQL_NAME` | Database name | `MySQL-DB-name` |
+| `AZURE_MYSQL_HOST` | Database host URL | `<MySQL-DB-name>.mysql.database.azure.com` |
+| `AZURE_MYSQL_USER` | Database username | `<MySQL-DB-username>@<MySQL-DB-name>` |
+| `AZURE_MYSQL_PASSWORD` | Database password | `MySQL-DB-password` |
+
-#### Python-Django User-assigned managed identity
+#### [Django](#tab/django)
| Default environment variable name | Description | Example value | |-|-|--|
-| `Azure_MYSQL_NAME` | Database name | `MySQL-DB-name` |
-| `Azure_MYSQL_HOST` | Database Host URL | `<MySQL-DB-name>.mysql.database.azure.com` |
-| `Azure_MYSQL_USER ` | Database Username | `<MySQL-DB-username>@<MySQL-DB-name>` |
-| `Azure_MYSQL_CLIENTID` | Your client ID | `<identity-client-ID>` |
+| `AZURE_MYSQL_NAME` | Database name | `MySQL-DB-name` |
+| `AZURE_MYSQL_HOST` | Database host URL | `<MySQL-DB-name>.mysql.database.azure.com` |
+| `AZURE_MYSQL_USER` | Database username | `<MySQL-DB-username>@<MySQL-DB-name>` |
+| `AZURE_MYSQL_PASSWORD` | Database password | `MySQL-DB-password` |
+
+#### [Go](#tab/go)
-#### Python-Django secret / connection string
+| Default environment variable name | Description | Example value |
+|--||--|
+| `AZURE_MYSQL_CONNECTIONSTRING` | Go-sql-driver connection string | `<MySQL-DB-username>:<MySQL-DB-password>@tcp(<server-host>:<port>)/<MySQL-DB-name>?tls=true` |
++
+#### [NodeJS](#tab/node)
| Default environment variable name | Description | Example value | |-|-|--|
-| `Azure_MYSQL_NAME` | Database name | `MySQL-DB-name` |
-| `Azure_MYSQL_HOST` | Database Host URL | `<MySQL-DB-name>.mysql.database.azure.com` |
-| `Azure_MYSQL_USER` | Database Username | `<MySQL-DB-username>@<MySQL-DB-name>` |
-| `Azure_MYSQL_PASSWORD` | Database password | `MySQL-DB-password` |
+| `AZURE_MYSQL_HOST` | Database host URL | `<MySQL-DB-name>.mysql.database.azure.com` |
+| `AZURE_MYSQL_USER` | Database username | `MySQL-DB-username` |
+| `AZURE_MYSQL_PASSWORD` | Database password | `MySQL-DB-password` |
+| `AZURE_MYSQL_DATABASE` | Database name | `<database-name>` |
+| `AZURE_MYSQL_PORT` | Port number | `3306` |
+| `AZURE_MYSQL_SSL` | SSL option | `true` |
-#### Python-Django Service principal
+#### [PHP](#tab/php)
| Default environment variable name | Description | Example value | |-|--|--|
-| `Azure_MYSQL_NAME` | Database name | `MySQL-DB-name` |
-| `Azure_MYSQL_HOST` | Database Host URL | `<MySQL-DB-name>.mysql.database.azure.com` |
-| `Azure_MYSQL_USER ` | Database Username | `<MySQL-DB-username>@<MySQL-DB-name>` |
-| `Azure_MYSQL_CLIENTID` | Your client ID | `<client-ID>` |
-| `Azure_MYSQL_CLIENTSECRET` | Your client secret | `<client-secret>` |
-| `Azure_MYSQL_TENANTID` | Your tenant ID | `<tenant-ID>` |
+| `AZURE_MYSQL_DBNAME` | Database name | `<MySQL-DB-name>` |
+| `AZURE_MYSQL_HOST` | Database host URL | `<MySQL-DB-name>.mysql.database.azure.com` |
+| `AZURE_MYSQL_PORT` | Port number | `3306` |
+| `AZURE_MYSQL_FLAG` | SSL or other flags | `MySQL_CLIENT_SSL` |
+| `AZURE_MYSQL_USERNAME` | Database username | `<MySQL-DB-username>` |
+| `AZURE_MYSQL_PASSWORD` | Database password | `<MySQL-DB-password>` |
-#### PHP (MySQLi)
-#### PHP (MySQLi) System-assigned managed identity
-| Default environment variable name | Description | Example value |
-|-|--|--|
-| `Azure_MYSQL_DBNAME` | Database name | `<MySQL-DB-name>` |
-| `Azure_MYSQL_HOST` | Database Host URL | `<MySQL-DB-name>.mysql.database.azure.com` |
-| `Azure_MYSQL_PORT` | Port number | `3306` |
-| `Azure_MYSQL_FLAG` | SSL or other flags | `MySQL_CLIENT_SSL` |
-| `Azure_MYSQL_USERNAME` | Database Username | `<MySQL-DB-username>` |
+#### [Ruby](#tab/ruby)
+
+| Default environment variable name | Description | Example value |
+|-|-|--|
+| `AZURE_MYSQL_DATABASE` | Database name | `<MySQL-DB-name>` |
+| `AZURE_MYSQL_HOST` | Database host URL | `<MySQL-DB-name>.mysql.database.azure.com` |
+| `AZURE_MYSQL_USERNAME` | Database username | `<MySQL-DB-username>@<MySQL-DB-name>` |
+| `AZURE_MYSQL_PASSWORD` | Database password | `<MySQL-DB-password>` |
+| `AZURE_MYSQL_SSLMODE` | SSL option | `required` |
+++
+#### Sample code
+
+Follow these steps and use the sample code to connect to Azure Database for MySQL.
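As a sketch, assuming `mysql-connector-python` and the environment variables from the Python table above:

```python
import os

import mysql.connector

# Connect with the username and password details stored by Service Connector.
conn = mysql.connector.connect(
    host=os.environ["AZURE_MYSQL_HOST"],
    user=os.environ["AZURE_MYSQL_USER"],
    password=os.environ["AZURE_MYSQL_PASSWORD"],
    database=os.environ["AZURE_MYSQL_NAME"],
)
```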
-#### PHP (MySQLi) User-assigned managed identity
-| Default environment variable name | Description | Example value |
-|-|--|--|
-| `Azure_MYSQL_DBNAME` | Database name | `<MySQL-DB-name>` |
-| `Azure_MYSQL_HOST` | Database Host URL | `<MySQL-DB-name>.mysql.database.azure.com` |
-| `Azure_MYSQL_PORT` | Port number | `3306` |
-| `Azure_MYSQL_FLAG` | SSL or other flags | `MySQL_CLIENT_SSL` |
-| `Azure_MYSQL_USERNAME` | Database Username | `<MySQL-DB-username>@<MySQL-DB-name>` |
-| `Azure_MYSQL_CLIENTID` | Your client ID | `<identity-client-ID>` |
-#### PHP (MySQLi) secret / connection string
+### Service principal
+
+#### [.NET](#tab/dotnet)
+
+| Default environment variable name | Description | Example value |
+|--||-|
+| `AZURE_MYSQL_CLIENTID` | Your client ID | `<client-ID>` |
+| `AZURE_MYSQL_CLIENTSECRET` | Your client secret | `<client-secret>` |
+| `AZURE_MYSQL_TENANTID` | Your tenant ID | `<tenant-ID>` |
+| `AZURE_MYSQL_CONNECTIONSTRING` | ADO.NET MySQL connection string | `Server=<MySQL-DB-name>.mysql.database.azure.com;Database=<MySQL-DB-name>;Port=3306;User Id=<MySQL-DB-username>;SSL Mode=Required` |
++
+#### [Java](#tab/java)
+
+| Default environment variable name | Description | Example value |
+|-|||
+| `AZURE_MYSQL_CLIENTID` | Your client ID | `<client-ID>` |
+| `AZURE_MYSQL_CLIENTSECRET` | Your client secret | `<client-secret>` |
+| `AZURE_MYSQL_TENANTID` | Your tenant ID | `<tenant-ID>` |
+| `AZURE_MYSQL_CONNECTIONSTRING` | JDBC MySQL connection string | `jdbc:mysql://<MySQL-DB-name>.mysql.database.azure.com:3306/<MySQL-DB-name>?sslmode=required&user=<MySQL-DB-username>` |
++
+#### [SpringBoot](#tab/spring)
+
+| Application properties | Description | Example value |
+|--|--||
+| `spring.datasource.azure.passwordless-enabled` | Enable passwordless authentication | `true` |
+| `spring.cloud.azure.credential.client-id` | Your client ID | `<client-ID>` |
+| `spring.cloud.azure.credential.client-secret` | Your client secret | `<client-secret>` |
+| `spring.cloud.azure.credential.tenant-id` | Your tenant ID | `<tenant-ID>` |
+| `spring.datasource.url` | Database URL | `jdbc:mysql://<MySQL-DB-name>.mysql.database.azure.com:3306/<MySQL-DB-name>?sslmode=required` |
+| `spring.datasource.username` | Database username | `username` |
++
+#### [Python](#tab/python)
| Default environment variable name | Description | Example value | |-|--|--|
-| `Azure_MYSQL_DBNAME` | Database name | `<MySQL-DB-name>` |
-| `Azure_MYSQL_HOST` | Database Host URL | `<MySQL-DB-name>.mysql.database.azure.com` |
-| `Azure_MYSQL_PORT` | Port number | `3306` |
-| `Azure_MYSQL_FLAG` | SSL or other flags | `MySQL_CLIENT_SSL` |
-| `Azure_MYSQL_USERNAME` | Database Username | `<MySQL-DB-username>` |
-| `Azure_MYSQL_PASSWORD` | Database password | `<MySQL-DB-password>` |
+| `AZURE_MYSQL_NAME` | Database name | `<MySQL-DB-name>` |
+| `AZURE_MYSQL_HOST` | Database host URL | `<MySQL-DB-name>.mysql.database.azure.com` |
+| `AZURE_MYSQL_USER` | Database username | `<MySQL-DB-username>@<MySQL-DB-name>` |
+| `AZURE_MYSQL_CLIENTID` | Your client ID | `<client-ID>` |
+| `AZURE_MYSQL_CLIENTSECRET` | Your client secret | `<client-secret>` |
+| `AZURE_MYSQL_TENANTID` | Your tenant ID | `<tenant-ID>` |
-#### PHP (MySQLi) Service principal
+#### [Django](#tab/django)
| Default environment variable name | Description | Example value | |-|--|--|
-| `Azure_MYSQL_DBNAME` | Database name | `<MySQL-DB-name>` |
-| `Azure_MYSQL_HOST` | Database Host URL | `<MySQL-DB-name>.mysql.database.azure.com` |
-| `Azure_MYSQL_PORT` | Port number | `3306` |
-| `Azure_MYSQL_FLAG` | SSL or other flags | `MySQL_CLIENT_SSL` |
-| `Azure_MYSQL_USERNAME` | Database Username | `<MySQL-DB-username>@<MySQL-DB-name>` |
-| `Azure_MYSQL_CLIENTID` | Your client ID | `<client-ID>` |
-| `Azure_MYSQL_CLIENTSECRET` | Your client secret | `<client-secret>` |
-| `Azure_MYSQL_TENANTID` | Your tenant ID | `<tenant-ID>` |
+| `AZURE_MYSQL_NAME` | Database name | `<MySQL-DB-name>` |
+| `AZURE_MYSQL_HOST` | Database host URL | `<MySQL-DB-name>.mysql.database.azure.com` |
+| `AZURE_MYSQL_USER` | Database username | `<MySQL-DB-username>@<MySQL-DB-name>` |
+| `AZURE_MYSQL_CLIENTID` | Your client ID | `<client-ID>` |
+| `AZURE_MYSQL_CLIENTSECRET` | Your client secret | `<client-secret>` |
+| `AZURE_MYSQL_TENANTID` | Your tenant ID | `<tenant-ID>` |
-### Ruby (mysql2)
+#### [Go](#tab/go)
-#### Ruby (mysql2) System-assigned managed identity
+| Default environment variable name | Description | Example value |
+|--||--|
+| `AZURE_MYSQL_CLIENTID` | Your client ID |`<client-ID>` |
+| `AZURE_MYSQL_CLIENTSECRET` | Your client secret | `<client-secret>` |
+| `AZURE_MYSQL_TENANTID` | Your tenant ID | `<tenant-ID>` |
+| `AZURE_MYSQL_CONNECTIONSTRING` | Go-sql-driver connection string | `<MySQL-DB-username>@tcp(<server-host>:<port>)/<MySQL-DB-name>?tls=true` |
-| Default environment variable name | Description | Example value |
-|-|-|--|
-| `Azure_MYSQL_DATABASE` | Database name | `<MySQL-DB-name>` |
-| `Azure_MYSQL_HOST` | Database Host URL | `<MySQL-DB-name>.mysql.database.azure.com` |
-| `Azure_MYSQL_USERNAME` | Database Username | `<MySQL-DB-username>@<MySQL-DB-name>` |
-| `Azure_MYSQL_SSLMODE` | SSL option | `required` |
-#### Ruby (mysql2) User-assigned managed identity
+#### [NodeJS](#tab/node)
-| Default environment variable name | Description | Example value |
-|-|-|--|
-| `Azure_MYSQL_DATABASE` | Database name | `<MySQL-DB-name>` |
-| `Azure_MYSQL_HOST` | Database Host URL | `<MySQL-DB-name>.mysql.database.azure.com` |
-| `Azure_MYSQL_USERNAME` | Database Username | `<MySQL-DB-username>@<MySQL-DB-name>` |
-| `Azure_MYSQL_SSLMODE` | SSL option | `required` |
-| `Azure_MYSQL_CLIENTID` | Your client ID | `<identity-client-ID>` |
+| Default environment variable name | Description | Example value |
+|-|--|--|
+| `AZURE_MYSQL_HOST` | Database host URL | `<MySQL-DB-name>.mysql.database.azure.com` |
+| `AZURE_MYSQL_USER` | Database username | `MySQL-DB-username` |
+| `AZURE_MYSQL_DATABASE` | Database name | `<database-name>` |
+| `AZURE_MYSQL_PORT` | Port number | `3306` |
+| `AZURE_MYSQL_SSL` | SSL option | `true` |
+| `AZURE_MYSQL_CLIENTID` | Your client ID | `<client-ID>` |
+| `AZURE_MYSQL_CLIENTSECRET` | Your client secret | `<client-secret>` |
+| `AZURE_MYSQL_TENANTID` | Your tenant ID | `<tenant-ID>` |
-#### Ruby (mysql2) secret / connection string
+#### [PHP](#tab/php)
-| Default environment variable name | Description | Example value |
-|-|-|--|
-| `Azure_MYSQL_DATABASE` | Database name | `<MySQL-DB-name>` |
-| `Azure_MYSQL_HOST` | Database Host URL | `<MySQL-DB-name>.mysql.database.azure.com` |
-| `Azure_MYSQL_USERNAME` | Database Username | `<MySQL-DB-username>@<MySQL-DB-name>` |
-| `Azure_MYSQL_PASSWORD` | Database password | `<MySQL-DB-password>` |
-| `Azure_MYSQL_SSLMODE` | SSL option | `required` |
+| Default environment variable name | Description | Example value |
+|-|--|--|
+| `AZURE_MYSQL_DBNAME` | Database name | `<MySQL-DB-name>` |
+| `AZURE_MYSQL_HOST` | Database host URL | `<MySQL-DB-name>.mysql.database.azure.com` |
+| `AZURE_MYSQL_PORT` | Port number | `3306` |
+| `AZURE_MYSQL_FLAG` | SSL or other flags | `MySQL_CLIENT_SSL` |
+| `AZURE_MYSQL_USERNAME` | Database username | `<MySQL-DB-username>@<MySQL-DB-name>` |
+| `AZURE_MYSQL_CLIENTID` | Your client ID | `<client-ID>` |
+| `AZURE_MYSQL_CLIENTSECRET` | Your client secret | `<client-secret>` |
+| `AZURE_MYSQL_TENANTID` | Your tenant ID | `<tenant-ID>` |
-#### Ruby (mysql2) Service principal
+#### [Ruby](#tab/ruby)
| Default environment variable name | Description | Example value | |-|-|--|
-| `Azure_MYSQL_DATABASE` | Database name | `<MySQL-DB-name>` |
-| `Azure_MYSQL_HOST` | Database Host URL | `<MySQL-DB-name>.mysql.database.azure.com` |
-| `Azure_MYSQL_USERNAME` | Database Username | `<MySQL-DB-username>@<MySQL-DB-name>` |
-| `Azure_MYSQL_SSLMODE` | SSL option | `required` |
-| `Azure_MYSQL_CLIENTID` | Your client ID | `<client-ID>` |
-| `Azure_MYSQL_CLIENTSECRET` | Your client secret| `<client-secret>` |
-| `Azure_MYSQL_TENANTID` | Your tenant ID | `<tenant-ID>` |
+| `AZURE_MYSQL_DATABASE` | Database name | `<MySQL-DB-name>` |
+| `AZURE_MYSQL_HOST` | Database host URL | `<MySQL-DB-name>.mysql.database.azure.com` |
+| `AZURE_MYSQL_USERNAME` | Database username | `<MySQL-DB-username>@<MySQL-DB-name>` |
+| `AZURE_MYSQL_SSLMODE` | SSL option | `required` |
+| `AZURE_MYSQL_CLIENTID` | Your client ID | `<client-ID>` |
+| `AZURE_MYSQL_CLIENTSECRET` | Your client secret | `<client-secret>` |
+| `AZURE_MYSQL_TENANTID` | Your tenant ID | `<tenant-ID>` |
+++
+#### Sample code
+
+Follow these steps and the sample code to connect to Azure Database for MySQL.
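As a quick illustration (not the official Service Connector sample), a minimal Python sketch for service principal authentication might look like the following; it assumes the `azure-identity` and `mysql-connector-python` packages and passes a Microsoft Entra access token as the MySQL password, using the Python-tab variable names from the table above.

```python
import os

import mysql.connector
from azure.identity import ClientSecretCredential  # assumes: pip install azure-identity

# Acquire a Microsoft Entra token for Azure Database for MySQL; the scope below
# is the Azure Database for MySQL/PostgreSQL resource.
credential = ClientSecretCredential(
    tenant_id=os.environ["AZURE_MYSQL_TENANTID"],
    client_id=os.environ["AZURE_MYSQL_CLIENTID"],
    client_secret=os.environ["AZURE_MYSQL_CLIENTSECRET"],
)
token = credential.get_token("https://ossrdbms-aad.database.windows.net/.default")

# The user must be the Microsoft Entra identity configured on the server.
conn = mysql.connector.connect(
    host=os.environ["AZURE_MYSQL_HOST"],
    user=os.environ["AZURE_MYSQL_USER"],
    password=token.token,  # the access token is used as the password
    database=os.environ["AZURE_MYSQL_NAME"],
    ssl_disabled=False,
)
```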
++ ## Next steps
-Follow the tutorials listed below to learn more about Service Connector.
+Follow the documentation below to learn more about Service Connector.
> [!div class="nextstepaction"] > [Learn about Service Connector concepts](./concept-service-connector-internals.md)
service-connector How To Integrate Storage Blob https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-storage-blob.md
This page shows the supported authentication types and client types of Azure Blo
Supported authentication and clients for App Service, Container Apps and Azure Spring Apps:
-### [Azure App Service](#tab/app-service)
| Client type | System-assigned managed identity | User-assigned managed identity | Secret / connection string | Service principal | |--|--|--|--|--|
Supported authentication and clients for App Service, Container Apps and Azure S
| Python | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | | None | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
-### [Azure Container Apps](#tab/container-apps)
-
-| Client type | System-assigned managed identity | User-assigned managed identity | Secret / connection string | Service principal |
-|--|--|--|--|--|
-| .NET | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
-| Java | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
-| Java - Spring Boot | | | ![yes icon](./media/green-check.png) | |
-| Node.js | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
-| Python | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
-| None | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
-
-### [Azure Spring Apps](#tab/spring-apps)
-
-| Client type | System-assigned managed identity | User-assigned managed identity | Secret / connection string | Service principal |
-|--|--|--|--|--|
-| .NET | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
-| Java | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
-| Java - Spring Boot | | | ![yes icon](./media/green-check.png) | |
-| Node.js | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
-| Python | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
-| None | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
Supported authentication and clients for App Service, Container Apps and Azure S
Use the connection details below to connect compute services to Blob Storage. For each example below, replace the placeholder texts `<account name>`, `<account-key>`, `<client-ID>`, `<client-secret>`, `<tenant-ID>`, and `<storage-account-name>` with your own account name, account key, client ID, client secret, tenant ID and storage account name.
-### Azure App Service and Azure Container Apps
-
-#### Secret / connection string
+### Secret / connection string
+#### .NET, Java, Node.js, Python
| Default environment variable name | Description | Example value | ||--|| | AZURE_STORAGEBLOB_CONNECTIONSTRING | Blob Storage connection string | `DefaultEndpointsProtocol=https;AccountName=<account name>;AccountKey=<account-key>;EndpointSuffix=core.windows.net` |
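As an illustration (not the official sample), a minimal Python sketch assuming the `azure-storage-blob` package:

```python
import os

from azure.storage.blob import BlobServiceClient  # assumes: pip install azure-storage-blob

# Build a client from the connection string that Service Connector injects.
client = BlobServiceClient.from_connection_string(
    os.environ["AZURE_STORAGEBLOB_CONNECTIONSTRING"]
)
for container in client.list_containers():
    print(container.name)
```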
-#### system-assigned managed identity
+#### Java - Spring Boot
+
+| Application properties | Description | Example value |
+|--|--||
+| azure.storage.account-name | Your Blob Storage account name | `<storage-account-name>` |
+| azure.storage.account-key | Your Blob Storage account key | `<account-key>` |
+| azure.storage.blob-endpoint | Your Blob Storage endpoint | `https://<storage-account-name>.blob.core.windows.net/` |
++
+### System-assigned managed identity
| Default environment variable name | Description | Example value | ||--|| | AZURE_STORAGEBLOB_RESOURCEENDPOINT | Blob Storage endpoint | `https://<storage-account-name>.blob.core.windows.net/` |
-#### User-assigned managed identity
+### User-assigned managed identity
| Default environment variable name | Description | Example value | ||--|| | AZURE_STORAGEBLOB_RESOURCEENDPOINT | Blob Storage endpoint | `https://<storage-account-name>.blob.core.windows.net/` | | AZURE_STORAGEBLOB_CLIENTID | Your client ID | `<client-ID>` |
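As an illustration, a minimal Python sketch assuming the `azure-identity` and `azure-storage-blob` packages; it covers both managed identity variants, since the client ID variable is only injected for a user-assigned identity:

```python
import os

from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobServiceClient

# For a system-assigned identity, AZURE_STORAGEBLOB_CLIENTID is absent and
# DefaultAzureCredential falls back to the system-assigned identity.
credential = DefaultAzureCredential(
    managed_identity_client_id=os.environ.get("AZURE_STORAGEBLOB_CLIENTID")
)
client = BlobServiceClient(
    account_url=os.environ["AZURE_STORAGEBLOB_RESOURCEENDPOINT"],
    credential=credential,
)
```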
-#### Service principal
+### Service principal
| Default environment variable name | Description | Example value | ||--||
Use the connection details below to connect compute services to Blob Storage. Fo
| AZURE_STORAGEBLOB_CLIENTSECRET | Your client secret | `<client-secret>` | | AZURE_STORAGEBLOB_TENANTID | Your tenant ID | `<tenant-ID>` |
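As an illustration, a minimal Python sketch for service principal authentication, assuming the `azure-identity` and `azure-storage-blob` packages:

```python
import os

from azure.identity import ClientSecretCredential
from azure.storage.blob import BlobServiceClient

# Authenticate as the service principal using the injected variables.
credential = ClientSecretCredential(
    tenant_id=os.environ["AZURE_STORAGEBLOB_TENANTID"],
    client_id=os.environ["AZURE_STORAGEBLOB_CLIENTID"],
    client_secret=os.environ["AZURE_STORAGEBLOB_CLIENTSECRET"],
)
client = BlobServiceClient(
    account_url=os.environ["AZURE_STORAGEBLOB_RESOURCEENDPOINT"],
    credential=credential,
)
```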
-### Azure Spring Apps
-
-#### secret / connection string
-
-| Application properties | Description | Example value |
-|--|--||
-| azure.storage.account-name | Your Blob storage-account-name | `<storage-account-name>` |
-| azure.storage.account-key | Your Blob Storage account key | `<account-key>` |
-| azure.storage.blob-endpoint | Your Blob Storage endpoint | `https://<storage-account-name>.blob.core.windows.net/` |
## Next steps
service-connector How To Integrate Storage Queue https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-storage-queue.md
This page shows the supported authentication types and client types of Azure Que
Supported authentication and clients for App Service, Container Apps and Azure Spring Apps:
-### [Azure App Service](#tab/app-service)
-
-| Client type | System-assigned managed identity | User-assigned managed identity | Secret / connection string | Service principal |
-|--|--|--|--|--|
-| .NET | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
-| Java | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
-| Java - Spring Boot | ![yes icon](./media/green-check.png) | | | |
-| Node.js | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
-| Python | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
-
-### [Azure Container Apps](#tab/container-apps)
- | Client type | System-assigned managed identity | User-assigned managed identity | Secret / connection string | Service principal | |--|--|--|--|--| | .NET | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | | Java | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
-| Java - Spring Boot | ![yes icon](./media/green-check.png) | | | |
+| Java - Spring Boot | | | ![yes icon](./media/green-check.png) | |
| Node.js | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | | Python | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
-### [Azure Spring Apps](#tab/spring-apps)
-
-| Client type | System-assigned managed identity | User-assigned managed identity | Secret / connection string | Service principal |
-|--|--|--|--|--|
-| .NET | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
-| Java | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
-| Java - Spring Boot | ![yes icon](./media/green-check.png) | | | |
-| Node.js | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
-| Python | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
- ## Default environment variable names or application properties
Supported authentication and clients for App Service, Container Apps and Azure S
Use the connection details below to connect compute services to Queue Storage. For each example below, replace the placeholder texts `<account name>`, `<account-key>`, `<client-ID>`, `<client-secret>`, `<tenant-ID>`, and `<storage-account-name>` with your own account name, account key, client ID, client secret, tenant ID and storage account name.
-### .NET, Java, Node.JS, Python
+### Secret / connection string
-#### Secret/ connection string
+#### .NET, Java, Node.js, Python
| Default environment variable name | Description | Example value | |-||-| | AZURE_STORAGEQUEUE_CONNECTIONSTRING | Queue storage connection string | `DefaultEndpointsProtocol=https;AccountName=<account-name>;AccountKey=<account-key>;EndpointSuffix=core.windows.net` |
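As an illustration (not the official sample), a minimal Python sketch assuming the `azure-storage-queue` package:

```python
import os

from azure.storage.queue import QueueServiceClient  # assumes: pip install azure-storage-queue

# Build a client from the connection string that Service Connector injects.
client = QueueServiceClient.from_connection_string(
    os.environ["AZURE_STORAGEQUEUE_CONNECTIONSTRING"]
)
for queue in client.list_queues():
    print(queue.name)
```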
-#### System-assigned managed identity
+#### Java - Spring Boot
+
+| Application properties | Description | Example value |
+|-|-|--|
+| spring.cloud.azure.storage.account | Queue storage account name | `<storage-account-name>` |
+| spring.cloud.azure.storage.access-key | Queue storage account key | `<account-key>` |
+
+### System-assigned managed identity
| Default environment variable name | Description | Example value | |-||-| | AZURE_STORAGEQUEUE_RESOURCEENDPOINT | Queue storage endpoint | `https://<storage-account-name>.queue.core.windows.net/` |
-#### User-assigned managed identity
+
+### User-assigned managed identity
| Default environment variable name | Description | Example value | |-||-| | AZURE_STORAGEQUEUE_RESOURCEENDPOINT | Queue storage endpoint | `https://<storage-account-name>.queue.core.windows.net/` | | AZURE_STORAGEQUEUE_CLIENTID | Your client ID | `<client-ID>` |
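As an illustration, a minimal Python sketch assuming the `azure-identity` and `azure-storage-queue` packages; it covers both managed identity variants, since the client ID variable is only set for a user-assigned identity:

```python
import os

from azure.identity import DefaultAzureCredential
from azure.storage.queue import QueueServiceClient

# AZURE_STORAGEQUEUE_CLIENTID is absent for a system-assigned identity.
credential = DefaultAzureCredential(
    managed_identity_client_id=os.environ.get("AZURE_STORAGEQUEUE_CLIENTID")
)
client = QueueServiceClient(
    account_url=os.environ["AZURE_STORAGEQUEUE_RESOURCEENDPOINT"],
    credential=credential,
)
```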
-#### Service principal
+### Service principal
| Default environment variable name | Description | Example value | |-||-|
Use the connection details below to connect compute services to Queue Storage. F
| AZURE_STORAGEQUEUE_CLIENTSECRET | Your client secret | `<client-secret>` | | AZURE_STORAGEQUEUE_TENANTID | Your tenant ID | `<tenant-ID>` |
-### Azure Spring Apps
-
-#### Java - Spring Boot secret / connection string
-
-| Application properties | Description | Example value |
-|-|-|--|
-| spring.cloud.azure.storage.account | Queue storage account name | `<storage-account-name>` |
-| spring.cloud.azure.storage.access-key | Queue storage account key | `<account-key>` |
## Next steps
service-connector How To Integrate Storage Table https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-storage-table.md
Supported authentication and clients for App Service, Container Apps and Azure S
| Client type | System-assigned managed identity | User-assigned managed identity | Secret / connection string | Service principal | |-|-|--|--|-|
-| .NET | | | ![yes icon](./media/green-check.png) | |
-| Java | | | ![yes icon](./media/green-check.png) | |
-| Node.js | | | ![yes icon](./media/green-check.png) | |
-| Python | | | ![yes icon](./media/green-check.png) | |
+| .NET |![yes icon](./media/green-check.png)|![yes icon](./media/green-check.png)| ![yes icon](./media/green-check.png) |![yes icon](./media/green-check.png)|
+| Java |![yes icon](./media/green-check.png)|![yes icon](./media/green-check.png)| ![yes icon](./media/green-check.png) |![yes icon](./media/green-check.png)|
+| Node.js |![yes icon](./media/green-check.png)|![yes icon](./media/green-check.png)| ![yes icon](./media/green-check.png) |![yes icon](./media/green-check.png)|
+| Python |![yes icon](./media/green-check.png)|![yes icon](./media/green-check.png)| ![yes icon](./media/green-check.png) |![yes icon](./media/green-check.png)|
## Default environment variable names or application properties Use the connection details below to connect compute services to Azure Table Storage. For each example below, replace the placeholder texts `<account-name>` and `<account-key>` with your own account name and account key.
-### .NET, Java, Node.JS and Python secret / connection string
+### Secret / connection string
| Default environment variable name | Description | Example value | |-||-| | AZURE_STORAGETABLE_CONNECTIONSTRING | Table storage connection string | `DefaultEndpointsProtocol=https;AccountName=<account-name>;AccountKey=<account-key>;EndpointSuffix=core.windows.net` |
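As an illustration (not the official sample), a minimal Python sketch assuming the `azure-data-tables` package:

```python
import os

from azure.data.tables import TableServiceClient  # assumes: pip install azure-data-tables

# Build a client from the connection string that Service Connector injects.
client = TableServiceClient.from_connection_string(
    os.environ["AZURE_STORAGETABLE_CONNECTIONSTRING"]
)
for table in client.list_tables():
    print(table.name)
```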
+### System-assigned managed identity
+
+| Default environment variable name | Description | Example value |
+|-||-|
+| AZURE_STORAGETABLE_RESOURCEENDPOINT | Table storage endpoint | `https://<storage-account-name>.table.core.windows.net/` |
++
+### User-assigned managed identity
+
+| Default environment variable name | Description | Example value |
+|-||-|
+| AZURE_STORAGETABLE_RESOURCEENDPOINT | Table storage endpoint | `https://<storage-account-name>.table.core.windows.net/` |
+| AZURE_STORAGETABLE_CLIENTID | Your client ID | `<client-ID>` |
+
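As an illustration, a minimal Python sketch assuming the `azure-identity` and `azure-data-tables` packages; the client ID variable is only present for a user-assigned identity:

```python
import os

from azure.data.tables import TableServiceClient
from azure.identity import DefaultAzureCredential

# AZURE_STORAGETABLE_CLIENTID is absent for a system-assigned identity.
credential = DefaultAzureCredential(
    managed_identity_client_id=os.environ.get("AZURE_STORAGETABLE_CLIENTID")
)
client = TableServiceClient(
    endpoint=os.environ["AZURE_STORAGETABLE_RESOURCEENDPOINT"],
    credential=credential,
)
```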
+### Service principal
+
+| Default environment variable name | Description | Example value |
+|-||-|
+| AZURE_STORAGETABLE_RESOURCEENDPOINT | Table storage endpoint | `https://<storage-account-name>.table.core.windows.net/` |
+| AZURE_STORAGETABLE_CLIENTID | Your client ID | `<client-ID>` |
+| AZURE_STORAGETABLE_CLIENTSECRET | Your client secret | `<client-secret>` |
+| AZURE_STORAGETABLE_TENANTID | Your tenant ID | `<tenant-ID>` |
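As an illustration, a minimal Python sketch for service principal authentication, assuming the `azure-identity` and `azure-data-tables` packages:

```python
import os

from azure.data.tables import TableServiceClient
from azure.identity import ClientSecretCredential

# Authenticate as the service principal using the injected variables.
credential = ClientSecretCredential(
    tenant_id=os.environ["AZURE_STORAGETABLE_TENANTID"],
    client_id=os.environ["AZURE_STORAGETABLE_CLIENTID"],
    client_secret=os.environ["AZURE_STORAGETABLE_CLIENTSECRET"],
)
client = TableServiceClient(
    endpoint=os.environ["AZURE_STORAGETABLE_RESOURCEENDPOINT"],
    credential=credential,
)
```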
++ ## Next steps Follow the tutorials listed below to learn more about Service Connector.
service-connector Quickstart Portal Spring Cloud Connection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/quickstart-portal-spring-cloud-connection.md
This quickstart shows you how to connect Azure Spring Apps to other cloud resources using the Azure portal and Service Connector. Service Connector lets you quickly connect compute services to cloud services, while managing your connection's authentication and networking settings.
+> [!NOTE]
+> For information on connecting resources using Azure CLI, see [Create a service connection in Azure Spring Apps with the Azure CLI](./quickstart-cli-spring-cloud-connection.md).
+ ## Prerequisites - An Azure account with an active subscription. [Create an Azure account for free](https://azure.microsoft.com/free).
service-connector Tutorial Java Jboss Connect Managed Identity Mysql Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/tutorial-java-jboss-connect-managed-identity-mysql-database.md
Last updated 08/14/2023
-+ # Tutorial: Connect to a MySQL Database from Java JBoss EAP App Service with passwordless connection
service-connector Tutorial Passwordless https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/tutorial-passwordless.md
Last updated 07/17/2023 ms.devlang: azurecli-+ zone_pivot_group_filename: service-connector/zone-pivot-groups.json zone_pivot_groups: passwordless
service-fabric How To Managed Cluster Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/how-to-managed-cluster-networking.md
The following steps describe how to enable a public IP on your node.
```json {
- "name": "Secondary Node Type",
+ "name": "<secondary_node_type_name>",
"apiVersion": "2023-02-01-preview", "properties": { "isPrimary" : false,
- "vmImageResourceId": "/subscriptions/<SubscriptionID>/resourceGroups/<myRG>/providers/Microsoft.Compute/images/<MyCustomImage>",
+ "vmImageResourceId": "/subscriptions/<your_subscription_id>/resourceGroups/<your_resource_group>/providers/Microsoft.Compute/images/<your_custom_image>",
"vmSize": "Standard_D2", "vmInstanceCount": 5, "dataDiskSizeGB": 100,
The following steps describe how to enable a public IP on your node.
"ipAddress": "<ip_address_0>", "ipConfiguration": { "id": "<configuration_id_0>",
- "resourceGroup": "<your_resource_group"
+ "resourceGroup": "<your_resource_group>"
}, "ipTags": [], "name": "<name>", "provisioningState": "Succeeded", "publicIPAddressVersion": "IPv4", "publicIPAllocationMethod": "Static",
- "resourceGroup": "<your_resource_group",
+ "resourceGroup": "<your_resource_group>",
"resourceGuid": "resource_guid_0", "sku": { "name": "Standard"
The following steps describe how to enable a public IP on your node.
"ipAddress": "<ip_address_1>", "ipConfiguration": { "id": "<configuration_id_1>",
- "resourceGroup": "<your_resource_group"
+ "resourceGroup": "<your_resource_group>"
}, "ipTags": [], "name": "<name>",
The following steps describe how to enable a public IP on your node.
"ipAddress": "<ip_address_2>", "ipConfiguration": { "id": "<configuration_id_2>",
- "resourceGroup": "<your_resource_group"
+ "resourceGroup": "<your_resource_group>"
}, "ipTags": [], "name": "<name>",
service-fabric How To Managed Cluster Public Ip Prefix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/how-to-managed-cluster-public-ip-prefix.md
-+ Last updated 07/05/2023
service-fabric Managed Cluster Deny Assignment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/managed-cluster-deny-assignment.md
+
+ Title: Deny assignment policy for Service Fabric managed clusters
+description: An overview of the deny assignment policy for Service Fabric managed clusters.
+++++ Last updated : 08/18/2023++
+# Deny assignment policy for Service Fabric managed clusters
+
+Deny assignment policies for Service Fabric managed clusters enable customers to protect the resources of their clusters. Deny assignments attach a set of deny actions to a user, group, or service principal at a particular scope to deny access. Limiting access to certain actions helps prevent users from inadvertently damaging their clusters by deleting, deallocating, restarting, or reimaging their clusters' scale sets directly in the infrastructure resource group, which can cause the resources of the cluster to become unsynchronized with the data in the managed cluster.
+
+All actions that are related to managed clusters should be done through the managed cluster resource APIs instead of directly against the infrastructure resource group. Using the resource APIs ensures the resources of the cluster are synchronized with the data in the managed cluster.
+
+This feature ensures that the correct, supported APIs are used when performing delete operations to avoid any errors.
+
+You can learn more about deny assignments in the [Azure role-based access control (RBAC) documentation](../role-based-access-control/deny-assignments.md).
+
+## Best practices
+
+The following are some best practices to minimize the risk of your cluster's resources becoming unsynchronized:
+* Instead of deleting virtual machine scale sets directly from the managed resource group, use NodeType-level APIs to delete the NodeType or virtual machine scale set. Options include the Node blade in the Azure portal and [Azure PowerShell](/powershell/module/az.servicefabric/remove-azservicefabricmanagednodetype).
+* Use the correct APIs to restart or reimage your scale sets:
+ * [Virtual machine scale set restarts](/powershell/module/az.servicefabric/restart-azservicefabricmanagednodetype)
+ * [Virtual machine scale set reimage](/powershell/module/az.servicefabric/set-azservicefabricmanagednodetype)
+
+## Next steps
+
+* Learn more about [granting permission to access resources on managed clusters](how-to-managed-cluster-grant-access-other-resources.md)
service-fabric Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/policy-reference.md
Previously updated : 08/08/2023 Last updated : 08/30/2023 # Azure Policy built-in definitions for Azure Service Fabric
service-fabric Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/security-controls-policy.md
Previously updated : 08/03/2023 Last updated : 08/25/2023 # Azure Policy Regulatory Compliance controls for Azure Service Fabric
service-fabric Service Fabric Cluster Creation Create Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-cluster-creation-create-template.md
+ Last updated 07/14/2022
service-fabric Service Fabric Keyvault References https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-keyvault-references.md
string secret = Environment.GetEnvironmentVariable("MySecret");
## Use Managed KeyVaultReferences in your application
-First, you must enable secret monitoring by upgrading your cluster definition:
+First, you must enable secret monitoring by upgrading your cluster definition to add the `EnableSecretMonitoring` setting, in addition to the [other required CSS configurations](service-fabric-application-secret-store.md):
```json "fabricSettings": [
First, you must enable secret monitoring by upgrading your cluster definition:
{ "name": "EnableSecretMonitoring", "value": "true"
+ },
+ {
+ "name": "DeployedState",
+ "value": "enabled"
+ },
+ {
+ "name" : "EncryptionCertificateThumbprint",
+ "value": "<thumbprint>"
+ },
+ {
+ "name": "MinReplicaSetSize",
+ "value": "<size>"
+ },
+ {
+ "name": "TargetReplicaSetSize",
+ "value": "<size>"
} ] }
service-health Resource Health Checks Resource Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-health/resource-health-checks-resource-types.md
Below is a complete list of all the checks executed through resource health by r
## Microsoft.AnalysisServices/servers |Executed Checks| ||
-|<ul><li>Is the server up and running?</li><li>Has the server run out of memory?</li><li>Is the server starting up?</li><li>Is the server recovering?</li></ul>|
+| - Is the server up and running?<br> - Has the server run out of memory?<br> - Is the server starting up?<br> - Is the server recovering?|
## Microsoft.ApiManagement/service |Executed Checks| ||
-|<ul><li>Is the Api Management service up and running?</li></ul>|
+| - Is the API Management service up and running?|
## Microsoft.AppPlatform/Spring |Executed Checks| ||
-|<ul><li>Is the Azure Spring Cloud instance available?</li></ul>|
+| - Is the Azure Spring Cloud instance available?|
## Microsoft.Batch/batchAccounts |Executed Checks| ||
-|<ul><li>Is the Batch account up and running?</li><li>Has the pool quota been exceeded for this batch account?</li></ul>|
+| - Is the Batch account up and running?<br> - Has the pool quota been exceeded for this batch account?|
## Microsoft.Cache/Redis |Executed Checks| ||
-|<ul><li>Are all the Cache nodes up and running?</li><li>Can the Cache be reached from within the datacenter?</li><li>Has the Cache reached the maximum number of connections?</li><li> Has the cache exhausted its available memory? </li><li>Is the Cache experiencing a high number of page faults?</li><li>Is the Cache under heavy load?</li></ul>|
+| - Are all the Cache nodes up and running?<br> - Can the Cache be reached from within the datacenter?<br> - Has the Cache reached the maximum number of connections?<br> - Has the cache exhausted its available memory? <br> - Is the Cache experiencing a high number of page faults?<br> - Is the Cache under heavy load?|
## Microsoft.CDN/profile |Executed Checks| ||
-|<ul> <li>Is the supplemental portal accessible for CDN configuration operations?</li><li>Are there ongoing delivery issues with the CDN endpoints?</li><li>Can users change the configuration of their CDN resources?</li><li>Are configuration changes propagating at the expected rate?</li><li>Can users manage the CDN configuration using the Azure portal, PowerShell, or the API?</li> </ul>|
+| - Is the supplemental portal accessible for CDN configuration operations?<br> - Are there ongoing delivery issues with the CDN endpoints?<br> - Can users change the configuration of their CDN resources?<br> - Are configuration changes propagating at the expected rate?<br> - Can users manage the CDN configuration using the Azure portal, PowerShell, or the API?|
## Microsoft.classiccompute/virtualmachines |Executed Checks| ||
-|<ul><li>Is the server hosting this virtual machine up and running?</li><li>Is the virtual machine container provisioned and powered up?</li><li>Is there network connectivity between the host and the storage account?</li><li>Is there ongoing planned maintenance?</li><li>Is there heartbeats between Guest and host agent *(if Guest extension is installed)*?</li></ul>|
+| - Is the server hosting this virtual machine up and running?<br> - Is the virtual machine container provisioned and powered up?<br> - Is there network connectivity between the host and the storage account?<br> - Is there ongoing planned maintenance?<br> - Are there heartbeats between Guest and host agent *(if Guest extension is installed)*?|
## Microsoft.classiccompute/domainnames |Executed Checks| ||
-|<ul><li>Is production slot deployment healthy across all role instances?</li><li>Is the role healthy across all its VM instances?</li><li>What is the health status of each VM within a role of a cloud service?</li><li>Was the VM status change due to platform or customer initiated operation?</li><li>Has the booting of the guest OS completed?</li><li>Is there ongoing planned maintenance?</li><li>Is the host hardware degraded and predicted to fail soon?</li><li>[Learn More](../cloud-services/resource-health-for-cloud-services.md) about Executed Checks</li></ul>|
+| - Is production slot deployment healthy across all role instances?<br> - Is the role healthy across all its VM instances?<br> - What is the health status of each VM within a role of a cloud service?<br> - Was the VM status change due to platform or customer initiated operation?<br> - Has the booting of the guest OS completed?<br> - Is there ongoing planned maintenance?<br> - Is the host hardware degraded and predicted to fail soon?<br> - [Learn More](../cloud-services/resource-health-for-cloud-services.md) about Executed Checks|
## Microsoft.cognitiveservices/accounts |Executed Checks| ||
-|<ul><li>Can the account be reached from within the datacenter?</li><li>Is the Azure AI services resource provider available?</li><li>Is the Cognitive Service available in the appropriate region?</li><li>Can read operations be performed on the storage account holding the resource metadata?</li><li>Has the API call quota been reached?</li><li>Has the API call read-limit been reached?</li></ul>|
+| - Can the account be reached from within the datacenter?<br> - Is the Azure AI services resource provider available?<br> - Is the Cognitive Service available in the appropriate region?<br> - Can read operations be performed on the storage account holding the resource metadata?<br> - Has the API call quota been reached?<br> - Has the API call read-limit been reached?|
## Microsoft.compute/hostgroups/hosts |Executed Checks| ||
-|<ul><li>Is the host up and running?</li><li>Is the host hardware degraded?</li><li>Is the host deallocated?</li><li>Has the host hardware service healed to different hardware?</li></ul>|
+| - Is the host up and running?<br> - Is the host hardware degraded?<br> - Is the host deallocated?<br> - Has the host hardware service healed to different hardware?|
## Microsoft.compute/virtualmachines |Executed Checks| ||
-|<ul><li>Is the server hosting this virtual machine up and running?</li><li>Is the virtual machine container provisioned and powered up?</li><li>Is there network connectivity between the host and the storage account?</li><li>Is there ongoing planned maintenance?</li><li>Is there heartbeats between Guest and host agent *(if Guest extension is installed)*?</li></ul>|
+| - Is the server hosting this virtual machine up and running?<br> - Is the virtual machine container provisioned and powered up?<br> - Is there network connectivity between the host and the storage account?<br> - Is there ongoing planned maintenance?<br> - Are there heartbeats between Guest and host agent *(if Guest extension is installed)*?|
## Microsoft.compute/virtualmachinescalesets |Executed Checks| ||
-|<ul><li>Is the server hosting this virtual machine up and running?</li><li>Is the virtual machine container provisioned and powered up?</li><li>Is there network connectivity between the host and the storage account?</li><li>Is there ongoing planned maintenance?</li><li>Is there heartbeats between Guest and host agent *(if Guest extension is installed)*?</li></ul>|
+| - Is the server hosting this virtual machine up and running?<br> - Is the virtual machine container provisioned and powered up?<br> - Is there network connectivity between the host and the storage account?<br> - Is there ongoing planned maintenance?<br> - Are there heartbeats between Guest and host agent *(if Guest extension is installed)*?|
## Microsoft.ContainerService/managedClusters |Executed Checks| ||
-|<ul><li>Is the cluster up and running?</li><li>Are core services available on the cluster?</li><li>Are all cluster nodes ready?</li><li>Is the service principal current and valid?</li></ul>|
+| - Is the cluster up and running?<br> - Are core services available on the cluster?<br> - Are all cluster nodes ready?<br> - Is the service principal current and valid?|
## Microsoft.datafactory/factories |Executed Checks| ||
-|<ul><li>Have there been pipeline run failures?</li><li>Is the cluster hosting the Data Factory healthy?</li></ul>|
+| - Have there been pipeline run failures?<br> - Is the cluster hosting the Data Factory healthy?|
## Microsoft.datalakeanalytics/accounts |Executed Checks| ||
-|<ul><li>Have users experienced problems submitting or listing their Data Lake Analytics jobs?</li><li>Are Data Lake Analytics jobs unable to complete due to system errors?</li></ul>|
+| - Have users experienced problems submitting or listing their Data Lake Analytics jobs?<br> - Are Data Lake Analytics jobs unable to complete due to system errors?|
## Microsoft.datalakestore/accounts |Executed Checks| ||
-|<ul><li>Have users experienced problems uploading data to Data Lake Store?</li><li>Have users experienced problems downloading data from Data Lake Store?</li></ul>|
+| - Have users experienced problems uploading data to Data Lake Store?<br> - Have users experienced problems downloading data from Data Lake Store?|
## Microsoft.datamigration/services |Executed Checks| ||
-|<ul><li>Has the database migration service failed to provision?</li><li>Has the database migration service stopped due to inactivity or user request?</li></ul>|
+| - Has the database migration service failed to provision?<br> - Has the database migration service stopped due to inactivity or user request?|
## Microsoft.DataShare/accounts |Executed Checks| ||
-|<ul><li>Is the Data Share account up and running?</li><li>Is the cluster hosting the Data Share available?</li></ul>|
+| - Is the Data Share account up and running?<br> - Is the cluster hosting the Data Share available?|
## Microsoft.DBforMariaDB/servers |Executed Checks| ||
-|<ul><li>Is the server unavailable due to maintenance?</li><li>Is the server unavailable due to reconfiguration?</li></ul>|
+| - Is the server unavailable due to maintenance?<br> - Is the server unavailable due to reconfiguration?|
## Microsoft.DBforMySQL/servers |Executed Checks| ||
-|<ul><li>Is the server unavailable due to maintenance?</li><li>Is the server unavailable due to reconfiguration?</li></ul>|
+| - Is the server unavailable due to maintenance?<br> - Is the server unavailable due to reconfiguration?|
## Microsoft.DBforPostgreSQL/servers |Executed Checks| ||
-|<ul><li>Is the server unavailable due to maintenance?</li><li>Is the server unavailable due to reconfiguration?</li></ul>|
+| - Is the server unavailable due to maintenance?<br> - Is the server unavailable due to reconfiguration?|
## Microsoft.devices/iothubs |Executed Checks| ||
-|<ul><li>Is the IoT hub up and running?</li></ul>|
+| - Is the IoT hub up and running?|
## Microsoft.DigitalTwins/DigitalTwinsInstances |Executed Checks| ||
-|<ul><li>Is the Azure Digital Twins instance up and running?</li></ul>|
+| - Is the Azure Digital Twins instance up and running?|
## Microsoft.documentdb/databaseAccounts |Executed Checks| ||
-|<ul><li>Have there been any database or collection requests not served due to an Azure Cosmos DB service unavailability?</li><li>Have there been any document requests not served due to an Azure Cosmos DB service unavailability?</li></ul>|
+| - Have there been any database or collection requests not served due to an Azure Cosmos DB service unavailability?<br> - Have there been any document requests not served due to an Azure Cosmos DB service unavailability?|
## Microsoft.eventhub/namespaces |Executed Checks| ||
-|<ul><li>Is the Event Hubs namespace experiencing user generated errors?</li><li>Is the Event Hubs namespace currently being upgraded?</li></ul>|
+| - Is the Event Hubs namespace experiencing user generated errors?<br> - Is the Event Hubs namespace currently being upgraded?|
## Microsoft.hdinsight/clusters |Executed Checks| ||
-|<ul><li>Are core services available on the HDInsight cluster?</li><li>Can the HDInsight cluster access the key for BYOK encryption at rest?</li></ul>|
+| - Are core services available on the HDInsight cluster?<br> - Can the HDInsight cluster access the key for BYOK encryption at rest?|
## Microsoft.HybridCompute/machines |Executed Checks| ||
-|<ul><li>Is the agent on your server connected to Azure and sending heartbeats?</li></ul>|
+| - Is the agent on your server connected to Azure and sending heartbeats?|
## Microsoft.IoTCentral/IoTApps |Executed Checks| ||
-|<ul><li>Is the IoT Central Application available?</li></ul>|
+| - Is the IoT Central Application available?|
## Microsoft.KeyVault/vaults |Executed Checks| ||
-|<ul><li>Are requests to key vault failing due to Azure KeyVault platform issues?</li><li>Are requests to key vault being throttled due to too many requests made by customer?</li></ul>|
+| - Are requests to key vault failing due to Azure KeyVault platform issues?<br> - Are requests to key vault being throttled due to too many requests made by customer?|
## Microsoft.Kusto/clusters |Executed Checks| ||
-|<ul><li>Is the cluster experiencing low ingestion success rates?</li><li>Is the cluster experiencing high ingestion latency?</li><li>Is the cluster experiencing a high number of query failures?</li></ul>|
+| - Is the cluster experiencing low ingestion success rates?<br> - Is the cluster experiencing high ingestion latency?<br> - Is the cluster experiencing a high number of query failures?|
## Microsoft.MachineLearning/webServices |Executed Checks| ||
-|<ul><li>Is the web service up and running?</li></ul>|
+| - Is the web service up and running?|
## Microsoft.Media/mediaservices |Executed Checks| ||
-|<ul><li>Is the media service up and running?</li></ul>|
+| - Is the media service up and running?|
## Microsoft.network/applicationgateways |Executed Checks| ||
-|<ul><li>Is performance of the Application Gateway degraded?</li><li>Is the Application Gateway available?</li></ul>|
+| - Is performance of the Application Gateway degraded?<br> - Is the Application Gateway available?|
## Microsoft.network/azureFirewalls |Executed Checks| ||
-|<ul><li>Are there enough remaining available ports to perform Source NAT?</li><li>Are there enough remaining available connections?</li></ul>|
+| - Are there enough remaining available ports to perform Source NAT?<br> - Are there enough remaining available connections?|
## Microsoft.network/bastionhosts |Executed Checks| ||
-|<ul><li>Is the Bastion Host up and running?</li></ul>|
+| - Is the Bastion Host up and running?|
## Microsoft.network/connections |Executed Checks| ||
-|<ul><li>Is the VPN tunnel connected?</li><li>Are there configuration conflicts in the connection?</li><li>Are the pre-shared keys properly configured?</li><li>Is the VPN on-premises device reachable?</li><li>Are there mismatches in the IPSec/IKE security policy?</li><li>Is the S2S VPN connection properly provisioned or in a failed state?</li><li>Is the VNET-to-VNET connection properly provisioned or in a failed state?</li></ul>|
+| - Is the VPN tunnel connected?<br> - Are there configuration conflicts in the connection?<br> - Are the pre-shared keys properly configured?<br> - Is the VPN on-premises device reachable?<br> - Are there mismatches in the IPSec/IKE security policy?<br> - Is the S2S VPN connection properly provisioned or in a failed state?<br> - Is the VNET-to-VNET connection properly provisioned or in a failed state?|
## Microsoft.network/expressroutecircuits |Executed Checks| ||
-|<ul><li>Is the ExpressRoute circuit healthy?</li></ul>|
+| - Is the ExpressRoute circuit healthy?|
## Microsoft.network/frontdoors |Executed Checks| ||
-|<ul><li>Are Front Door backends responding with errors to health probes?</li><li>Are configuration changes delayed?</li></ul>|
+| - Are Front Door backends responding with errors to health probes?<br> - Are configuration changes delayed?|
## Microsoft.network/LoadBalancers |Executed Checks| ||
-|<ul><li>Are the load balancing endpoints available?</li></ul>|
+| - Are the load balancing endpoints available?|
## Microsoft.network/natGateways |Executed Checks| ||
-|<ul><li>Are the NAT gateway endpoints available?</li></ul>|
+| - Are the NAT gateway endpoints available?|
## Microsoft.network/trafficmanagerprofiles |Executed Checks| ||
-|<ul><li>Are there any issues impacting the Traffic Manager profile?</li></ul>|
+| - Are there any issues impacting the Traffic Manager profile?|
## Microsoft.network/virtualNetworkGateways |Executed Checks| ||
-|<ul><li>Is the VPN gateway reachable from the internet?</li><li>Is the VPN Gateway in standby mode?</li><li>Is the VPN service running on the gateway?</li></ul>|
+| - Is the VPN gateway reachable from the internet?<br> - Is the VPN Gateway in standby mode?<br> - Is the VPN service running on the gateway?|
## Microsoft.NotificationHubs/namespace |Executed Checks| ||
-|<ul><li>Can runtime operations like registration, installation, or send be performed on the namespace?</li></ul>|
+| - Can runtime operations like registration, installation, or send be performed on the namespace?|
## Microsoft.operationalinsights/workspaces |Executed Checks| ||
-|<ul><li>Are there indexing delays for the workspace?</li></ul>|
+| - Are there indexing delays for the workspace?|
## Microsoft.PowerBIDedicated/Capacities |Executed Checks| ||
-|<ul><li>Is the capacity resource up and running?</li><li>Are all the workloads up and running?</li></ul>
+| - Is the capacity resource up and running?<br> - Are all the workloads up and running?|
## Microsoft.search/searchServices |Executed Checks| ||
-|<ul><li>Can diagnostics operations be performed on the cluster?</li></ul>|
+| - Can diagnostics operations be performed on the cluster?|
## Microsoft.ServiceBus/namespaces |Executed Checks| ||
-|<ul><li>Are customers experiencing user generated Service Bus errors?</li><li>Are users experiencing an increase in transient errors due to a Service Bus namespace upgrade?</li></ul>|
+| - Are customers experiencing user generated Service Bus errors?<br> - Are users experiencing an increase in transient errors due to a Service Bus namespace upgrade?|
## Microsoft.ServiceFabric/clusters |Executed Checks| ||
-|<ul><li>Is the Service Fabric cluster up and running?</li><li>Can the Service Fabric cluster be managed through Azure Resource Manager?</li></ul>|
+| - Is the Service Fabric cluster up and running?<br> - Can the Service Fabric cluster be managed through Azure Resource Manager?|
## Microsoft.SQL/managedInstances/databases |Executed Checks| ||
-|<ul><li>Is the database up and running?</li></ul>|
+| - Is the database up and running?|
## Microsoft.SQL/servers/databases |Executed Checks| ||
-|<ul><li>Have login attempts to the database failed because the database was unavailable?</li></ul>|
+| - Have login attempts to the database failed because the database was unavailable?|
## Microsoft.Storage/storageAccounts |Executed Checks| ||
-|<ul><li>Are requests to read data from the Storage account failing due to Azure Storage platform issues?</li><li>Are requests to write data to the Storage account failing due to Azure Storage platform issues?</li><li>Is the Storage cluster where the Storage account resides unavailable?</li></ul>|
+| - Are requests to read data from the Storage account failing due to Azure Storage platform issues?<br> - Are requests to write data to the Storage account failing due to Azure Storage platform issues?<br> - Is the Storage cluster where the Storage account resides unavailable?|
## Microsoft.StreamAnalytics/streamingjobs |Executed Checks| ||
-|<ul><li>Are all the hosts where the job is executing up and running?</li><li>Was the job unable to start?</li><li>Are there ongoing runtime upgrades?</li><li>Is the job in an expected state (for example running or stopped by customer)?</li><li>Has the job encountered out of memory exceptions?</li><li>Are there ongoing scheduled compute updates?</li><li>Is the Execution Manager (control plan) available?</li></ul>|
+| - Are all the hosts where the job is executing up and running?<br> - Was the job unable to start?<br> - Are there ongoing runtime upgrades?<br> - Is the job in an expected state (for example running or stopped by customer)?<br> - Has the job encountered out of memory exceptions?<br> - Are there ongoing scheduled compute updates?<br> - Is the Execution Manager (control plan) available?|
## Microsoft.web/serverFarms |Executed Checks| ||
-|<ul><li>Is the host server up and running?</li><li>Is Internet Information Services running?</li><li>Is the Load balancer running?</li><li>Can the App Service Plan be reached from within the datacenter?</li><li>Is the storage account hosting the sites content for the serverFarm available??</li></ul>|
+| - Is the host server up and running?<br> - Is Internet Information Services running?<br> - Is the Load balancer running?<br> - Can the App Service Plan be reached from within the datacenter?<br> - Is the storage account hosting the sites' content for the serverFarm available?|
## Microsoft.web/sites |Executed Checks| ||
-|<ul><li>Is the host server up and running?</li><li>Is Internet Information server running?</li><li>Is the Load balancer running?</li><li>Can the Web App be reached from within the datacenter?</li><li>Is the storage account hosting the site content available?</li></ul>|
+| - Is the host server up and running?<br> - Is Internet Information server running?<br> - Is the Load balancer running?<br> - Can the Web App be reached from within the datacenter?<br> - Is the storage account hosting the site content available?|
## Microsoft.RecoveryServices/vaults | Executed Checks | | |
-|<ul><li>Are any Backup operations on Backup Items configured in this vault failing due to causes beyond user control?</li><li>Are any Restore operations on Backup Items configured in this vault failing due to causes beyond user control?</li></ul> |
+| - Are any Backup operations on Backup Items configured in this vault failing due to causes beyond user control?<br>- Are any Restore operations on Backup Items configured in this vault failing due to causes beyond user control?|
+
+## Microsoft.VoiceServices/communicationsgateway
+
+| Executed Checks |
+| |
+| - Are traffic-carrying instances running?<br>- Can the service handle calls?|
## Next Steps - See [Introduction to Azure Service Health dashboard](service-health-overview.md) and [Introduction to Azure Resource Health](resource-health-overview.md) to understand more about them.
site-recovery Avs Tutorial Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/avs-tutorial-replication.md
Previously updated : 08/23/2022 Last updated : 08/18/2023
site-recovery Azure To Azure How To Enable Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-how-to-enable-replication.md
Previously updated : 12/07/2022 Last updated : 07/24/2023
Prerequisites should be in place, and you should have created a Recovery Service
## Enable replication
-Use the following procedure to replicate Azure VMs to another Azure region. As an example, primary Azure region is Eastasia, and the secondary is Southeast Asia.
+Use the following procedure to replicate Azure VMs to another Azure region. As an example, the primary Azure region is East Asia, and the secondary is Southeast Asia.
1. In the vault > **Site Recovery** page, under **Azure virtual machines**, select **Enable replication**. 1. In the **Enable replication** page, under **Source**, do the following:
Use the following procedure to replicate Azure VMs to another Azure region. As a
:::image type="fields needed to configure replication" source="./media/azure-to-azure-how-to-enable-replication/source.png" alt-text="Screenshot that highlights the fields needed to configure replication."::: 1. Select **Next**.
-1. In **Virtual machines**, select each VM that you want to replicate. You can only select machines for which replication can be enabled. You can select up to ten VMs. Then select **Next**.
+1. In **Virtual machines**, select each VM that you want to replicate. You can only select machines for which replication can be enabled. You can select up to 10 VMs. Then select **Next**.
:::image type="Virtual machine selection" source="./media/azure-to-azure-how-to-enable-replication/virtual-machine-selection.png" alt-text="Screenshot that highlights where you select virtual machines."::: 1. In **Replication settings**, you can configure the following settings: 1. Under **Location and Resource group**,
- - **Target location**: Select the location where your source virtual machine data must be replicated. Depending on the location of selected machines, Site Recovery will provide you the list of suitable target regions. We recommend that you keep the target location the same as the Recovery Services vault location.
+ - **Target location**: Select the location where your source virtual machine data must be replicated. Depending on the location of selected machines, Site Recovery provides you with the list of suitable target regions. We recommend that you keep the target location the same as the Recovery Services vault location.
- **Target subscription**: Select the target subscription used for disaster recovery. By default, the target subscription will be the same as the source subscription. - **Target resource group**: Select the resource group to which all your replicated virtual machines belong. - By default, Site Recovery creates a new resource group in the target region with an *asr* suffix in the name.
Use the following procedure to replicate Azure VMs to another Azure region. As a
- **Replica-managed disk**: Site Recovery creates new replica-managed disks in the target region to mirror the source VM's managed disks with the same storage type (Standard or Premium) as the source VM's managed disk. - **Cache storage**: Site Recovery needs an extra storage account, called the cache storage account, in the source region. All the changes happening on the source VMs are tracked and sent to the cache storage account before replicating them to the target location. This storage account should be Standard. >[!Note]
- >Azure Site Recovery supports High churn (Public Preview) where you can choose to use **High Churn** for the VM. With this, you can use a *Premium Block Blob* type of storage account. By default, **Normal Churn** is selected. For more information, see [Azure VM Disaster Recovery - High Churn Support](./concepts-azure-to-azure-high-churn-support.md).
-
- :::image type="Cache storage" source="./media/azure-to-azure-how-to-enable-replication/cache-storage.png" alt-text="Screenshot of customize target settings.":::
+ >Azure Site Recovery has a *High Churn* option that you can choose to protect VMs with a high data change rate. With this option, you can use a *Premium Block Blob* type of storage account. By default, the **Normal Churn** option is selected. For more information, see [Azure VM Disaster Recovery - High Churn Support](./concepts-azure-to-azure-high-churn-support.md).
+ >:::image type="Churn" source="media/concepts-azure-to-azure-high-churn-support/churns.png" alt-text="Screenshot of churn.":::
1. **Availability options**: Select the appropriate availability option for your VM in the target region. If an availability set that was created by Site Recovery already exists, it's reused. Select **View/edit availability options** to view or edit the availability options. >[!NOTE]
Use the following procedure to replicate Azure VMs to another Azure region. As a
:::image type="Availability option" source="./media/azure-to-azure-how-to-enable-replication/availability-option.png" alt-text="Screenshot of availability option."::: 1. **Capacity reservation**: Capacity Reservation lets you purchase capacity in the recovery region, and then failover to that capacity. You can either create a new Capacity Reservation Group or use an existing one. For more information, see [how capacity reservation works](../virtual-machines/capacity-reservation-overview.md).
- Select **View or Edit Capacity Reservation group assignment** to modify the capacity reservation settings. On triggering Failover, the new VM will be created in the assigned Capacity Reservation Group.
+ Select **View or Edit Capacity Reservation group assignment** to modify the capacity reservation settings. When failover is triggered, the new VM is created in the assigned Capacity Reservation group.
:::image type="Capacity reservation" source="./media/azure-to-azure-how-to-enable-replication/capacity-reservation.png" alt-text="Screenshot of capacity reservation.":::
-1. Select **Next**.
+ 1. Select **Next**.
+
1. In **Manage**, do the following: 1. Under **Replication policy**, - **Replication policy**: Select the replication policy, which defines the settings for recovery point retention history and app-consistent snapshot frequency. By default, Site Recovery creates a new replication policy with default settings of 24 hours for recovery point retention.
site-recovery Azure To Azure How To Reprotect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-how-to-reprotect.md
Previously updated : 04/20/2023 Last updated : 07/14/2023
site-recovery Azure To Azure Network Mapping https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-network-mapping.md
Previously updated : 03/27/2023 Last updated : 08/31/2023
Map networks as follows:
If you haven't prepared network mapping before you configure disaster recovery for Azure VMs, you can specify a target network when you [set up and enable replication](azure-to-azure-how-to-enable-replication.md). When you do this, the following happens: - Based on the target you select, Site Recovery automatically creates network mappings from the source to target region, and from the target to source region.-- By default, Site Recovery creates a network in the target region that's identical to the source network. Site Recovery adds **-asr** as a suffix to the name of the source network. You can customize the target network.
+- By default, Site Recovery creates a network in the target region that's identical to the source network. Site Recovery adds **-asr** as a suffix to the name of the target network, and doesn't modify the source network's name. You can customize the target network. For example, if the source network name is *contoso-vnet*, the target network is named *contoso-vnet-asr*.
- If network mapping has already occurred for a source network, the mapped target network will always be the default at the time of enabling replications for more VMs. You can choose to change the target virtual network by choosing other available options from the dropdown. - To change the default target virtual network for new replications, you need to modify the existing network mapping. - If you wish to modify a network mapping from region A to region B, ensure that you first delete the network mapping from region B to region A. After reverse mapping deletion, modify the network mapping from region A to region B and then create the relevant reverse mapping.
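If you'd rather create a mapping yourself before enabling replication, it can be scripted with Azure PowerShell. The following is an illustrative sketch, not the article's own procedure: the fabric name, network resource IDs, and mapping name are placeholders, and it assumes the vault context has already been set.

```azurepowershell
# Sketch: create an Azure-to-Azure network mapping (all names are placeholders).
# Assumes Set-AzRecoveryServicesAsrVaultContext has already been run for the vault.
$primaryFabric = Get-AzRecoveryServicesAsrFabric -Name "asr-fabric-eastus"

# Full resource IDs of the source and target virtual networks.
$sourceVnetId = "/subscriptions/<sub-id>/resourceGroups/contoso-rg/providers/Microsoft.Network/virtualNetworks/contoso-vnet"
$targetVnetId = "/subscriptions/<sub-id>/resourceGroups/contoso-rg-asr/providers/Microsoft.Network/virtualNetworks/contoso-vnet-asr"

# Map the source network in the primary region to the target network.
New-AzRecoveryServicesAsrNetworkMapping -AzureToAzure `
    -Name "eastus-to-westus-map" `
    -PrimaryFabric $primaryFabric `
    -PrimaryAzureNetworkId $sourceVnetId `
    -RecoveryAzureNetworkId $targetVnetId
```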
site-recovery Azure To Azure Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-powershell.md
Previously updated : 12/07/2022 Last updated : 07/14/2023
$WusToEusPCMapping = Get-AzRecoveryServicesAsrProtectionContainerMapping -Protec
## Create cache storage account and target storage account
-A cache storage account is a standard storage account in the same Azure region as the virtual machine being replicated. The cache storage account is used to hold replication changes temporarily, before the changes are moved to the recovery Azure region. High churn support (Public Preview) is now available in Azure Site Recovery using which you can create a Premium Block Blob type of storage accounts that can be used as cache storage account to get high churn limits. You can choose to, but it's not necessary, to specify different cache storage accounts for the different disks of a virtual machine. If you use different cache storage accounts, ensure they are of the same type (Standard or Premium Block Blobs). For more information, see [Azure VM Disaster Recovery - High Churn Support](./concepts-azure-to-azure-high-churn-support.md).
+A cache storage account is a standard storage account in the same Azure region as the virtual machine being replicated. The cache storage account is used to hold replication changes temporarily, before the changes are moved to the recovery Azure region. High churn support is also available in Azure Site Recovery to get higher churn limits. To use this feature, create a Premium Block Blob storage account and then use it as the cache storage account. You can, but don't need to, specify different cache storage accounts for the different disks of a virtual machine. If you use different cache storage accounts, ensure they are of the same type (Standard or Premium Block Blobs). For more information, see [Azure VM Disaster Recovery - High Churn Support](./concepts-azure-to-azure-high-churn-support.md).
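As a hedged aside, the High Churn variant of the cache account can be created much like the standard one in the next snippet; this sketch uses placeholder account, resource group, and region names:

```azurepowershell
# Sketch: create a Premium Block Blob storage account for use as the
# High Churn cache storage account (names and region are placeholders).
New-AzStorageAccount -ResourceGroupName "ContosoRG" `
    -Name "contosohighchurncache" `
    -Location "East US" `
    -SkuName Premium_LRS `
    -Kind BlockBlobStorage
```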
```azurepowershell #Create Cache storage account for replication logs in the primary region
site-recovery Azure To Azure Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-quickstart.md
Title: Set up Azure VM disaster recovery to a secondary region with Azure Site Recovery description: Quickly set up disaster recovery to another Azure region for an Azure VM, using the Azure Site Recovery service. Previously updated : 05/02/2022 Last updated : 07/14/2023
The [Azure Site Recovery](site-recovery-overview.md) service contributes to your business continuity and disaster recovery (BCDR) strategy by keeping your business applications online during planned and unplanned outages. Site Recovery manages and orchestrates disaster recovery of on-premises machines and Azure virtual machines (VM), including replication, failover, and recovery.
+Azure Site Recovery has a *High Churn* option that lets you configure disaster recovery for Azure VMs with data churn of up to 100 MB/s. This helps you enable disaster recovery for more IO-intensive workloads. [Learn more](../site-recovery/concepts-azure-to-azure-high-churn-support.md).
+ This quickstart describes how to set up disaster recovery for an Azure VM by replicating it to a secondary Azure region. In general, default settings are used to enable replication. [Learn more](azure-to-azure-tutorial-enable-replication.md). ## Prerequisites
The following steps enable VM replication to a secondary location.
1. In **Operations**, select **Disaster recovery**. 1. From **Basics** > **Target region**, select the target region. 1. To view the replication settings, select **Review + Start replication**. If you need to change any defaults, select **Advanced settings**.
+ >[!Note]
+ >Azure Site Recovery has a *High Churn* option that you can choose to protect VMs with a high data change rate. With this option, you can use a *Premium Block Blob* type of storage account. By default, the **Normal Churn** option is selected. For more information, see [Azure VM Disaster Recovery - High Churn Support](./concepts-azure-to-azure-high-churn-support.md).
+ >:::image type="High churn" source="media/concepts-azure-to-azure-high-churn-support/churn-for-vms.png" alt-text="Screenshot of Churn for VM.":::
1. To start the job that enables VM replication, select **Start replication**. :::image type="content" source="media/azure-to-azure-quickstart/enable-replication1.png" alt-text="Enable replication.":::
site-recovery Azure To Azure Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-support-matrix.md
This table summarizes support for the cache storage account used by Site Recover
**Setting** | **Support** | **Details**
--- | --- | ---
General purpose V2 storage accounts (Hot and Cool tier) | Supported | Usage of GPv2 is recommended because GPv1 doesn't support ZRS (Zonal Redundant Storage).
-Premium storage | Supported | Use Premium Block Blob storage accounts to get High Churn support (in Public Preview). For more information, see [Azure VM Disaster Recovery - High Churn Support](./concepts-azure-to-azure-high-churn-support.md).
+Premium storage | Supported | Use Premium Block Blob storage accounts to get High Churn support. For more information, see [Azure VM Disaster Recovery - High Churn Support](./concepts-azure-to-azure-high-churn-support.md).
Region | Same region as virtual machine | Cache storage account should be in the same region as the virtual machine being protected.
Subscription | Can be different from source virtual machines | Cache storage account need not be in the same subscription as the source virtual machine(s).
Azure Storage firewalls for virtual networks | Supported | If you're using a firewall-enabled cache storage account or target storage account, ensure you ['Allow trusted Microsoft services'](../storage/common/storage-network-security.md#exceptions).<br></br>Also, ensure that you allow access to at least one subnet of the source virtual network.<br></br>Note: Don't restrict virtual network access to your storage accounts used for Site Recovery. You should allow access from 'All networks'.
Windows Server 2008 R2 with SP1/SP2 | Supported.<br/><br/> From version [9.30](h
Windows 10 (x64) | Supported. Windows 8.1 (x64) | Supported. Windows 8 (x64) | Supported.
-Windows 7 (x64) with SP1 onwards | From version [9.30](https://support.microsoft.com/help/4531426/update-rollup-42-for-azure-site-recovery) of the Mobility service extension for Azure VMs, you need to install a Windows [servicing stack update (SSU)](https://support.microsoft.com/help/4490628) and [SHA-2 update](https://support.microsoft.com/help/4474419) on machines running Windows 7 with SP1. SHA-1 isn't supported from September 2019, and if SHA-2 code signing isn't enabled the agent extension won't install/upgrade as expected.. Learn more about [SHA-2 upgrade and requirements](https://aka.ms/SHA-2KB).
+Windows 7 (x64) with SP1 onwards | From version [9.30](https://support.microsoft.com/help/4531426/update-rollup-42-for-azure-site-recovery) of the Mobility service extension for Azure VMs, you need to install a Windows [servicing stack update (SSU)](https://support.microsoft.com/help/4490628) and [SHA-2 update](https://support.microsoft.com/help/4474419) on machines running Windows 7 with SP1. SHA-1 isn't supported from September 2019, and if SHA-2 code signing isn't enabled the agent extension won't install/upgrade as expected. Learn more about [SHA-2 upgrade and requirements](https://aka.ms/SHA-2KB).
GRS | Supported |
RA-GRS | Supported |
ZRS | Supported |
Cool and Hot Storage | Not supported | Virtual machine disks aren't supported on cool and hot storage
-Azure Storage firewalls for virtual networks | Supported | If restrict virtual network access to storage accounts, enable [Allow trusted Microsoft services](../storage/common/storage-network-security.md#exceptions).
+Azure Storage firewalls for virtual networks | Supported | If you want to restrict virtual network access to storage accounts, enable [Allow trusted Microsoft services](../storage/common/storage-network-security.md#exceptions).
General purpose V2 storage accounts (Both Hot and Cool tier) | Supported | Transaction costs increase substantially compared to General purpose V1 storage accounts
Generation 2 (UEFI boot) | Supported
NVMe disks | Not supported
site-recovery Azure To Azure Tutorial Enable Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-tutorial-enable-replication.md
Title: Tutorial to set up Azure VM disaster recovery with Azure Site Recovery
description: In this tutorial, set up disaster recovery for Azure VMs to another Azure region, using the Site Recovery service. Previously updated : 01/04/2023 Last updated : 07/14/2023 #Customer intent: As an Azure admin, I want to set up disaster recovery for my Azure VMs, so that they're available in a secondary region if the primary region becomes unavailable.
Site Recovery retrieves the VMs associated with the selected subscription/resour
### Review replication settings 1. In **Replication settings**, review the settings. Site Recovery creates default settings/policy for the target region. For the purposes of this tutorial, we use the default settings.
+ >[!Note]
+ >Azure Site Recovery has a *High Churn* option that you can choose to protect VMs with a high data change rate. With this option, you can use a *Premium Block Blob* type of storage account. By default, the **Normal Churn** option is selected. For more information, see [Azure VM Disaster Recovery - High Churn Support](./concepts-azure-to-azure-high-churn-support.md). You can select the **High Churn** option from **Storage** > **View/edit storage configuration** > **Churn for the VM**.
+ >:::image type="Churn" source="media/concepts-azure-to-azure-high-churn-support/churns.png" alt-text="Screenshot of churn.":::
2. Select **Next**.
Site Recovery retrieves the VMs associated with the selected subscription/resour
- **Replication group**: Create replication group to replicate VMs together to generate Multi-VM consistent recovery points. Note that enabling multi-VM consistency can impact workload performance and should only be used if machines are running the same workload and you need consistency across multiple machines. 1. Under **Extension settings**, - Select **Update settings** and **Automation account**.-
- :::image type="manage" source="./media/azure-to-azure-tutorial-enable-replication/manage.png" alt-text="Screenshot showing manage tab.":::
+ :::image type="manage" source="./media/azure-to-azure-tutorial-enable-replication/manage.png" alt-text="Screenshot showing manage tab.":::
1. Select **Next**.
The VMs you enable appear on the vault > **Replicated items** page.
## Next steps
-In this tutorial, you enabled disaster recovery for an Azure VM. Now, run a drill to check that failover works as expected.
-
-> [!div class="nextstepaction"]
-> [Run a disaster recovery drill](azure-to-azure-tutorial-dr-drill.md)
+In this tutorial, you enabled disaster recovery for an Azure VM. Now, [run a disaster recovery drill](azure-to-azure-tutorial-dr-drill.md) to check that failover works as expected.
site-recovery Concepts Azure To Azure High Churn Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/concepts-azure-to-azure-high-churn-support.md
Title: Azure VM Disaster Recovery - High Churn support (Public Preview)
-description: Describes how to protect your Azure VMs having high churning workloads
+ Title: Azure VM Disaster Recovery - High Churn support
+description: Describes how to protect your Azure VMs that run high-churn workloads.
Previously updated : 12/07/2022 Last updated : 07/14/2023
-# Azure VM Disaster Recovery - High Churn Support (Public Preview)
+# Azure VM Disaster Recovery - High Churn Support
-Azure Site Recovery supports churn (data change rate) up to 100 MB/s per VM. You will be able to protect your Azure VMs having high churning workloads (like databases) using Azure Site Recovery which earlier could not be protected efficiently because Azure Site Recovery has churn limits up to 54 MB/s per VM. You may be able to achieve better RPO performance for your high churning workloads.
+Azure Site Recovery supports churn (data change rate) of up to 100 MB/s per VM. You can protect Azure VMs running high-churn workloads (like databases), which earlier couldn't be protected efficiently, by using the *High Churn* option, and you may achieve better RPO performance for these workloads. With the default *Normal Churn* option, you can [support churn only up to 54 MB/s per VM](./azure-to-azure-support-matrix.md#limits-and-data-change-rates).
## Limitations
Azure Site Recovery supports churn (data change rate) up to 100 MB/s per VM. You
- We recommend VM SKUs with a minimum of 32 GB of RAM.
- Source disks must be Managed Disks.
->[!Warning]
-> This Public Preview feature has been expanded in [all public regions](../site-recovery/azure-to-azure-support-matrix.md#region-support) where Azure Site Recovery is supported. However, this feature is not available in any Government cloud regions. When using *High Churn* with any other regions outside the supported regions, replication and/or reprotection may fail.
+> [!NOTE]
+> This feature is available in all [public regions](./azure-to-azure-support-matrix.md#region-support) where Azure Site Recovery is supported and premium block blobs are available. However, this feature is not yet available in any Government cloud regions.
+> When using High Churn with any other regions outside the supported regions, replication and/or reprotection may fail.
## Data change limits - These limits are based on our tests and don't cover all possible application I/O combinations. - Actual results may vary based on your app I/O mix. -- There are two limits to consider, per disk data churn and per virtual machine data churn.
+- There are two limits to consider:
+  - per-disk data churn
+  - per-VM data churn
- The per-VM data churn limit is 100 MB/s. For example, a P20 disk with 16 KB writes supports up to 20 MB/s of disk churn, but the VM as a whole remains capped at 100 MB/s. The following table summarizes Site Recovery limits:
The following table summarizes Site Recovery limits:
|Standard or P10 or P15|24 KB|6 MB/s|
|Standard or P10 or P15|32 KB and above|10 MB/s|
|P20|8 KB|10 MB/s|
-|P20|16 KB|20 MB/s|
+|P20 |16 KB|20 MB/s|
|P20|24 KB and above|30 MB/s|
|P30 and above|8 KB|20 MB/s|
|P30 and above|16 KB|35 MB/s|
The following table summarizes Site Recovery limits:
2. Under **Replication Settings** > **Storage**, select **View/edit storage configuration**. The **Customize target settings** page opens.
- :::image type="Replication settings" source="media/concepts-azure-to-azure-high-churn-support/replication-settings-storage.png" alt-text="Screenshot of Replication settings storage.":::
-
+ :::image type="Replication settings" source="media/concepts-azure-to-azure-high-churn-support/replication-settings-storages.png" alt-text="Screenshot of Replication settings storage." lightbox="media/concepts-azure-to-azure-high-churn-support/replication-settings-storages.png":::
3. Under **Churn for the VM**, there are two options:
- - **Normal Churn** (default option) - You can get up to 54 MB/s per VM. Select Normal Churn to use *Standard* storage accounts only for Cache Storage. Hence, Cache storage dropdown will list only *Standard* storage accounts.
+ - **Normal Churn** (the default option) - You can get up to 54 MB/s per VM. Select Normal Churn to use *Standard* storage accounts only for Cache Storage. The **Cache storage** dropdown then lists only *Standard* storage accounts.
- **High Churn** - You can get up to 100 MB/s per VM. Select High Churn to use *Premium Block Blob* storage accounts only for Cache Storage. The **Cache storage** dropdown then lists only *Premium Block Blob* storage accounts.
- :::image type="Churn" source="media/concepts-azure-to-azure-high-churn-support/churn.png" alt-text="Screenshot of churn.":::
+ :::image type="Churn" source="media/concepts-azure-to-azure-high-churn-support/churns.png" alt-text="Screenshot of churn.":::
+
-4. Select **High Churn (Public Preview)**.
+4. Select **High Churn** from the dropdown list.
- :::image type="High churn" source="media/concepts-azure-to-azure-high-churn-support/high-churn.png" alt-text="Screenshot of high-churn.":::
+ :::image type="High churn" source="media/concepts-azure-to-azure-high-churn-support/high-churn-new.png" alt-text="Screenshot of high-churn.":::
If you select multiple source VMs to configure Site Recovery and want to enable High Churn for all these VMs, select **High Churn** at the top level.
- :::image type="Churn top level" source="media/concepts-azure-to-azure-high-churn-support/churn-top-level.png" alt-text="Screenshot of churn top level.":::
- 5. After you select High Churn for the VM, only Premium Block Blob storage accounts are available for the cache storage account. Select a cache storage account and then select **Confirm Selection**.
- :::image type="Cache storage" source="media/concepts-azure-to-azure-high-churn-support/cache-storage.png" alt-text="Screenshot of Cache storage.":::
+ :::image type="Cache storage" source="media/concepts-azure-to-azure-high-churn-support/cache-storages.png" alt-text="Screenshot of Cache storage.":::
6. Configure other settings and enable the replication.
The following table summarizes Site Recovery limits:
:::image type="Storage" source="media/concepts-azure-to-azure-high-churn-support/storage-show-details.png" alt-text="Screenshot of Storage show details.":::
-6. Under **Storage settings** > **Churn for the VM**, select **High Churn (Public Preview)**. You will be able to use Premium Block Blob type of storage accounts only for cache storage.
+6. Under **Storage settings** > **Churn for the VM**, select **High Churn**. You will be able to use Premium Block Blob type of storage accounts only for cache storage.
- :::image type="High churn" source="media/concepts-azure-to-azure-high-churn-support/churn-for-vm.png" alt-text="Screenshot of Churn for VM.":::
+ :::image type="High churn" source="media/concepts-azure-to-azure-high-churn-support/churn-for-vms.png" alt-text="Screenshot of Churn for VM.":::
6. Select **Next: Review + Start replication**.
site-recovery Deploy Vmware Azure Replication Appliance Modernized https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/deploy-vmware-azure-replication-appliance-modernized.md
Title: Deploy Azure Site Recovery replication appliance - Modernized
-description: This article describes support and requirements when deploying the replication appliance for VMware disaster recovery to Azure with Azure Site Recovery - Modernized
+description: This article describes how to deploy the replication appliance for VMware disaster recovery to Azure with Azure Site Recovery - Modernized.
Previously updated : 08/01/2023 Last updated : 08/23/2023
You deploy an on-premises replication appliance when you use [Azure Site Recover
- The replication appliance coordinates communications between on-premises VMware and Azure. It also manages data replication. - [Learn more](vmware-azure-architecture-modernized.md) about the Azure Site Recovery replication appliance components and processes.
-## Pre-requisites
-
-### Hardware requirements
-
-**Component** | **Requirement**
- |
-CPU cores | 8
-RAM | 32 GB
-Number of disks | 2, including the OS disk - 80 GB and a data disk - 620 GB
-
-### Software requirements
-
-**Component** | **Requirement**
- |
-Operating system | Windows Server 2019
-Operating system locale | English (en-*)
-Windows Server roles | Don't enable these roles: <br> - Active Directory Domain Services <br>- Internet Information Services <br> - Hyper-V
-Group policies | Don't enable these group policies: <br> - Prevent access to the command prompt. <br> - Prevent access to registry editing tools. <br> - Trust logic for file attachments. <br> - Turn on Script Execution. <br> [Learn more](/previous-versions/windows/it-pro/windows-7/gg176671(v=ws.10))
-IIS | - No pre-existing default website <br> - No pre-existing website/application listening on port 443 <br>- Enable [anonymous authentication](/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/cc731244(v=ws.10)) <br> - Enable [FastCGI](/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/cc753077(v=ws.10)) setting
-FIPS (Federal Information Processing Standards) | Don't enable FIPS mode|
-
-### Network requirements
-
-|**Component** | **Requirement**|
-| | |
-|Fully qualified domain name (FQDN) | Static|
-|Ports | 443 (Control channel orchestration)<br>9443 (Data transport)|
-|NIC type | VMXNET3 (if the appliance is a VMware VM)|
-|NAT | Supported |
--
-#### Allow URLs
-
-Ensure the following URLs are allowed and reachable from the Azure Site Recovery replication appliance for continuous connectivity:
-
- | **URL** | **Details** |
- | - | -|
- | portal.azure.com | Navigate to the Azure portal. |
- | `login.windows.net `<br>`graph.windows.net `<br>`*.msftauth.net`<br>`*.msauth.net`<br>`*.microsoft.com`<br>`*.live.com `<br>`*.office.com ` | To sign-in to your Azure subscription. |
- |`*.microsoftonline.com `|Create Azure Active Directory (AD) apps for the appliance to communicate with Azure Site Recovery. |
- |management.azure.com |Create Azure AD apps for the appliance to communicate with the Azure Site Recovery service. |
- |`*.services.visualstudio.com `|Upload app logs used for internal monitoring. |
- |`*.vault.azure.net `|Manage secrets in the Azure Key Vault. Note: Ensure that the machines that need to be replicated have access to this URL. |
- |aka.ms |Allow access to "also known as" links. Used for Azure Site Recovery appliance updates. |
- |download.microsoft.com/download |Allow downloads from Microsoft download. |
- |`*.servicebus.windows.net `|Communication between the appliance and the Azure Site Recovery service. |
- |`*.discoverysrv.windowsazure.com `<br><br>`*.hypervrecoverymanager.windowsazure.com `<br><br> `*.backup.windowsazure.com ` |Connect to Azure Site Recovery micro-service URLs.
- |`*.blob.core.windows.net `|Upload data to Azure storage, which is used to create target disks. |
- | `*.prod.migration.windowsazure.com `| To discover your on-premises estate.
-
-#### Allow URLs for government clouds
-
-Ensure the following URLs are allowed and reachable from the Azure Site Recovery replication appliance for continuous connectivity, when enabling replication to a government cloud:
-
- | **URL for Fairfax** | **URL for Mooncake** | **Details** |
- | - | -| -|
- | `login.microsoftonline.us/*` <br> `graph.microsoftazure.us` | `login.chinacloudapi.cn/*` <br> `graph.chinacloudapi.cn` | To sign-in to your Azure subscription. |
- | `portal.azure.us` | `portal.azure.cn` |Navigate to the Azure portal. |
- | `*.microsoftonline.us/*` <br> `management.usgovcloudapi.net` | `*.microsoftonline.cn/*` <br> `management.chinacloudapi.cn/*` | Create Azure AD apps for the appliance to communicate with the Azure Site Recovery service. |
- | `*.hypervrecoverymanager.windowsazure.us` <br> `*.migration.windowsazure.us` <br> `*.backup.windowsazure.us` | `*.hypervrecoverymanager.windowsazure.cn` <br> `*.migration.windowsazure.cn` <br> `*.backup.windowsazure.cn` | Connect to Azure Site Recovery micro-service URLs. |
- |`*.vault.usgovcloudapi.net`| `*.vault.azure.cn` |Manage secrets in the Azure Key Vault. Note: Ensure that the machines, which need to be replicated have access to this URL. |
--
-### Folder exclusions from Antivirus program
-
-#### If Antivirus Software is active on appliance
-
-Exclude following folders from Antivirus software for smooth replication and to avoid connectivity issues.
-
-C:\ProgramData\Microsoft Azure <br>
-C:\ProgramData\ASRLogs <br>
-C:\Windows\Temp\MicrosoftAzure
-C:\Program Files\Microsoft Azure Appliance Auto Update <br>
-C:\Program Files\Microsoft Azure Appliance Configuration Manager <br>
-C:\Program Files\Microsoft Azure Push Install Agent <br>
-C:\Program Files\Microsoft Azure RCM Proxy Agent <br>
-C:\Program Files\Microsoft Azure Recovery Services Agent <br>
-C:\Program Files\Microsoft Azure Server Discovery Service <br>
-C:\Program Files\Microsoft Azure Site Recovery Process Server <br>
-C:\Program Files\Microsoft Azure Site Recovery Provider <br>
-C:\Program Files\Microsoft Azure to On-Premises Reprotect agent <br>
-C:\Program Files\Microsoft Azure VMware Discovery Service <br>
-C:\Program Files\Microsoft On-Premise to Azure Replication agent <br>
-E:\ <br>
-
-#### If Antivirus software is active on source machine
-
-If source machine has an Antivirus software active, installation folder should be excluded. So, exclude folder C:\ProgramData\ASR\agent for smooth replication.
-
-## Sizing and capacity
-An appliance that uses an in-built process server to protect the workload can handle up to 200 virtual machines, based on the following configurations:
-
- |CPU | Memory | Cache disk size | Data change rate | Protected machines |
- ||-|--||-|
- |16 vCPUs (2 sockets * 8 cores @ 2.5 GHz) | 32 GB | 1 TB | >1 TB to 2 TB | Use to replicate 151 to 200 machines.|
--- You can perform discovery of all the machines in a vCenter server, using any of the replication appliances in the vault.--- You can [switch a protected machine](switch-replication-appliance-modernized.md), between different appliances in the same vault, given the selected appliance is healthy.-
-For detailed information about how to use multiple appliances and failover a replication appliance, see [this article](switch-replication-appliance-modernized.md)
-- ## Prepare Azure account To create and register the Azure Site Recovery replication appliance, you need an Azure account with:
Go to **Recovery Services Vault** > **Getting Started**. In VMware machines to A
:::image type="Recovery Services Vault Modernized" source="./media/deploy-vmware-azure-replication-appliance-modernized/prepare-infra.png" alt-text="Screenshot showing recovery services vault modernized.":::
-To set up a new appliance, you can use an OVF template (recommended) or PowerShell. Ensure you meet all the [hardware ](#hardware-requirements) and [software requirements](#software-requirements), and any other prerequisites.
+To set up a new appliance, you can use an OVF template (recommended) or PowerShell. Ensure you meet all the [hardware](./replication-appliance-support-matrix.md#hardware-requirements) and [software requirements](./replication-appliance-support-matrix.md#software-requirements), and any other prerequisites.
## Create Azure Site Recovery replication appliance
The OVF template spins up a machine with the required specifications.
### Set up the appliance through PowerShell
-If there is any organizational restrictions, you can manually set up the Site Recovery replication appliance through PowerShell. Follow these steps:
+If there are any organizational restrictions, you can manually set up the Site Recovery replication appliance through PowerShell. Follow these steps:
1. Download the installers from [here](https://aka.ms/V2ARcmApplianceCreationPowershellZip) and place the folder on the Azure Site Recovery replication appliance. 2. After copying the zip folder successfully, unzip it and extract its components.
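Step 2 can be done with a single PowerShell command; a minimal sketch, assuming the zip was placed at a local path (both paths below are placeholders):

```azurepowershell
# Sketch: extract the downloaded installer package (paths are placeholders).
Expand-Archive -Path "C:\Downloads\V2ARcmApplianceCreationPowershell.zip" `
    -DestinationPath "C:\ASRSetup" -Force
```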
If there is any organizational restrictions, you can manually set up the Site Re
All Azure Site Recovery services will use these settings to connect to the internet. Only HTTP proxy is supported.
-2. Ensure the [required URLs](#allow-urls) are allowed and are reachable from the Azure Site Recovery replication appliance for continuous connectivity.
+2. Ensure the [required URLs](./replication-appliance-support-matrix.md#allow-urls) are allowed and are reachable from the Azure Site Recovery replication appliance for continuous connectivity.
3. Once the prerequisites have been checked, in the next step information about all the appliance components will be fetched. Review the status of all components and then select **Continue**. After saving the details, proceed to choose the appliance connectivity. 4. After saving connectivity details, Select **Continue** to proceed to registration with Microsoft Azure.
-5. Ensure the [prerequisites](#pre-requisites) are met, proceed with registration.
+5. Ensure the [prerequisites](./replication-appliance-support-matrix.md#pre-requisites) are met, and then proceed with registration.
:::image type="Register appliance" source="./media/deploy-vmware-azure-replication-appliance-modernized/app-setup-register.png" alt-text="Screenshot showing register appliance.":::
If there is any organizational restrictions, you can manually set up the Site Re
- **Azure Site Recovery replication appliance key**: Copy the key from the portal by navigating to **Recovery Services vault** > **Getting started** > **Site Recovery** > **VMware to Azure: Prepare Infrastructure**.
- - After pasting the key, select **Login.** You'll be redirected to a new authentication tab.
+ - After pasting the key, select **Login.** You're redirected to a new authentication tab.
- By default, an authentication code will be generated as highlighted below, in the **Appliance configuration manager** page. Use this code in the authentication tab.
+ By default, an authentication code is generated as highlighted below, in the **Appliance configuration manager** page. Use this code in the authentication tab.
- Enter your Microsoft Azure credentials to complete registration.
If there is any organizational restrictions, you can manually set up the Site Re
:::image type="Configuration of vCenter" source="./media/deploy-vmware-azure-replication-appliance-modernized/vcenter-information.png" alt-text="Screenshot showing configuration of vCenter.":::
-7. Select **Add vCenter Server** to add vCenter information. Enter the server name or IP address of the vCenter and port information. Post that, provide username, password, and friendly name. This is used to fetch details of [virtual machine managed through the vCenter](vmware-azure-tutorial-prepare-on-premises.md#prepare-an-account-for-automatic-discovery). The user account details will be encrypted and stored locally in the machine.
+7. Select **Add vCenter Server** to add vCenter information. Enter the server name or IP address of the vCenter and port information. Then, provide the username, password, and a friendly name. These credentials are used to fetch details of [virtual machines managed through the vCenter](vmware-azure-tutorial-prepare-on-premises.md#prepare-an-account-for-automatic-discovery). The user account details are encrypted and stored locally in the machine.
>[!NOTE] >If you're trying to add the same vCenter Server to multiple appliances, then ensure that the same friendly name is used in all the appliances.
If there is any organizational restrictions, you can manually set up the Site Re
>[!NOTE] > Appliance cloning is not supported with the modernized architecture. If you attempt to clone, it might disrupt the recovery flow. - ## View Azure Site Recovery replication appliance in Azure portal After successful configuration of Azure Site Recovery replication appliance, navigate to Azure portal, **Recovery Services Vault**.
You'll also be able to see a tab for **Discovered items** that lists all of the
## Next steps
-Set up disaster recovery of [VMware VMs](vmware-azure-set-up-replication-tutorial-modernized.md) to Azure.
+
+- Set up disaster recovery of [VMware VMs](vmware-azure-set-up-replication-tutorial-modernized.md) to Azure.
+- [Learn](replication-appliance-support-matrix.md) about the support requirements for Azure Site Recovery replication appliance.
site-recovery Hybrid How To Enable Replication Private Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/hybrid-how-to-enable-replication-private-endpoints.md
Previously updated : 01/30/2023 Last updated : 08/31/2023 # Replicate on-premises machines by using private endpoints
When using the private link with modernized experience for VMware VMs, public ac
| `*.windows.net `<br>`*.msftauth.net`<br>`*.msauth.net`<br>`*.microsoft.com`<br>`*.live.com `<br>`*.office.com ` | To sign-in to your Azure subscription. | |`*.microsoftonline.com `<br>`*.microsoftonline-p.com `| Create Azure Active Directory applications for the appliance to communicate with Azure Site Recovery. | | `management.azure.com` | Used for Azure Resource Manager deployments and operations. |
+ | `*.siterecovery.windowsazure.com` | Used to connect to Site Recovery services. |
Ensure the following URLs are allowed and reachable from the Azure Site Recovery replication appliance for continuous connectivity, when enabling replication to a government cloud:
Ensure the following URLs are allowed and reachable from the Azure Site Recovery
| `*.portal.azure.us` | `*.portal.azure.cn` | Navigate to the Azure portal. | | `management.usgovcloudapi.net` | `management.chinacloudapi.cn` | Create Azure Active Directory applications for the appliance to communicate with the Azure Site Recovery service. | - ## Create and use private endpoints for site recovery The following sections describe the steps you need to take to create and use private endpoints for site recovery in your virtual networks.
When the private endpoint is created, five fully qualified domain names (FQDNs)
The five domain names are formatted in this pattern:
-`{Vault-ID}-asr-pod01-{type}-.{target-geo-code}.siterecovery.windowsazure.com`
+`{Vault-ID}-asr-pod01-{type}-.{target-geo-code}.privatelink.siterecovery.windowsazure.com`
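When you host the `privatelink.siterecovery.windowsazure.com` zone yourself, each of these FQDNs needs an A record pointing at the corresponding private endpoint IP. A minimal sketch using Azure Private DNS, where the record name, resource group, and IP address are placeholders following the pattern above:

```azurepowershell
# Sketch: add an A record for one Site Recovery FQDN in the privatelink zone.
# The record name, resource group, and IP address are placeholders.
$record = New-AzPrivateDnsRecordConfig -IPv4Address "10.0.0.5"

New-AzPrivateDnsRecordSet -Name "<Vault-ID>-asr-pod01-<type>-.<target-geo-code>" `
    -RecordType A `
    -ZoneName "privatelink.siterecovery.windowsazure.com" `
    -ResourceGroupName "contoso-dns-rg" `
    -Ttl 3600 `
    -PrivateDnsRecords $record
```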
### Approve private endpoints for site recovery
site-recovery Hyper V Azure Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/hyper-v-azure-support-matrix.md
Hyper-V without Virtual Machine Manager | You can perform disaster recovery to A
**Server** | **Requirements** | **Details**
--- | --- | ---
-Hyper-V (running without Virtual Machine Manager) | Windows Server 2022, Windows Server 2019, Windows Server 2016, Windows Server 2012 R2 with latest updates <br/><br/> **Note:** Server core installation of these operating systems are also supported. | If you have already configured Windows Server 2012 R2 with/or SCVMM 2012 R2 with Azure Site Recovery and plan to upgrade the OS, please follow the guidance [documentation.](upgrade-2012R2-to-2016.md)
-Hyper-V (running with Virtual Machine Manager) | Virtual Machine Manager 2022 (Server core not supported), Virtual Machine Manager 2019, Virtual Machine Manager 2016, Virtual Machine Manager 2012 R2 <br/><br/> **Note:** Server core installation of these operating systems are also supported. | If Virtual Machine Manager is used, Windows Server 2019 hosts should be managed in Virtual Machine Manager 2019. Similarly, Windows Server 2016 hosts should be managed in Virtual Machine Manager 2016.
+Hyper-V (running without Virtual Machine Manager) | Windows Server 2022, Windows Server 2019, Windows Server 2016, Windows Server 2012 R2 with latest updates <br/><br/> **Note:** Server core installations of these operating systems are also supported. | If you have already configured Windows Server 2012 R2 and/or SCVMM 2012 R2 with Azure Site Recovery and plan to upgrade the OS, follow the [upgrade guidance](upgrade-2012R2-to-2016.md).
+Hyper-V (running with Virtual Machine Manager) | Virtual Machine Manager 2022 (Server core not supported), Virtual Machine Manager 2019, Virtual Machine Manager 2016, Virtual Machine Manager 2012 R2 <br/><br/> **Note:** Server core installations of these operating systems are also supported. | If Virtual Machine Manager is used, Windows Server 2019 hosts should be managed in Virtual Machine Manager 2019. Similarly, Windows Server 2016 hosts should be managed in Virtual Machine Manager 2016.
> [!NOTE] > Ensure that .NET Framework 4.6.2 or higher is present on the on-premises server.
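One way to check this on a host is to read the .NET Framework release value from the registry; a minimal sketch, where 394802 is the documented minimum release value for .NET Framework 4.6.2:

```azurepowershell
# Sketch: verify that .NET Framework 4.6.2 or later is installed on the host.
$release = (Get-ItemProperty 'HKLM:\SOFTWARE\Microsoft\NET Framework Setup\NDP\v4\Full').Release
if ($release -ge 394802) { ".NET Framework 4.6.2 or later is present." }
else { "Upgrade required: .NET Framework 4.6.2 or later is needed." }
```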
Move storage, network, Azure VMs across resource groups<br/><br/> Within and acr
> [!NOTE] > When replicating Hyper-VMs from on-premises to Azure, you can replicate to only one AD tenant from one specific environment - Hyper-V site or Hyper-V with VMM as applicable. - ## Provider and agent To make sure your deployment is compatible with settings in this article, make sure you're running the latest provider and agent versions.
To make sure your deployment is compatible with settings in this article, make s
Azure Site Recovery provider | Coordinates communications between on-premises servers and Azure <br/><br/> Hyper-V with Virtual Machine Microsoft Azure Recovery Services agent | Coordinates replication between Hyper-V VMs and Azure<br/><br/> Installed on on-premises Hyper-V servers (with or without Virtual Machine Manager) | Latest agent available from the portal ----- ## Next steps Learn how to [prepare Azure](tutorial-prepare-azure.md) for disaster recovery of on-premises Hyper-V VMs.
site-recovery Hyper V Azure Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/hyper-v-azure-tutorial.md
Title: Set up Hyper-V disaster recovery by using Azure Site Recovery
-description: Learn how to set up disaster recovery of on-premises Hyper-V VMs (without SCVMM) to Azure by using Site Recovery.
+description: Learn how to set up disaster recovery of on-premises Hyper-V VMs (without SCVMM) to Azure by using Site Recovery and MARS.
Last updated 05/04/2023
It's important to prepare the infrastructure before you set up disaster recovery
### Source settings
-To set up the source environment, you create a Hyper-V site. You add to the site the Hyper-V hosts that contain VMs you want to replicate. Then, you download and install the Azure Site Recovery provider and the Azure Recovery Services agent on each host, and register the Hyper-V site in the vault.
+To set up the source environment, you create a Hyper-V site. You add to the site the Hyper-V hosts that contain VMs you want to replicate. Then, you download and install the Azure Site Recovery provider and the Microsoft Azure Recovery Services (MARS) agent for Azure Site Recovery on each host, and register the Hyper-V site in the vault.
1. On **Prepare infrastructure**, on the **Source settings** tab, complete these steps: 1. For **Are you Using System Center VMM to manage Hyper-V hosts?**, select **No**.
Site Recovery checks for compatible Azure storage accounts and networks in your
#### Install the provider
-Install the downloaded setup file (*AzureSiteRecoveryProvider.exe*) on each Hyper-V host that you want to add to the Hyper-V site. Setup installs the Site Recovery provider and the Recovery Services agent on each Hyper-V host.
+Install the downloaded setup file (*AzureSiteRecoveryProvider.exe*) on each Hyper-V host that you want to add to the Hyper-V site. Setup installs the Site Recovery provider and the Recovery Services agent (MARS for Azure Site Recovery) on each Hyper-V host.
1. Run the setup file. 1. In the Azure Site Recovery provider setup wizard, for **Microsoft Update**, opt in to use Microsoft Update to check for provider updates.
site-recovery Monitor Log Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/monitor-log-analytics.md
Title: Monitor Azure Site Recovery with Azure Monitor Logs
description: Learn how to monitor Azure Site Recovery with Azure Monitor Logs (Log Analytics) Previously updated : 02/07/2023 Last updated : 08/31/2023
The churn and upload rate data will start feeding into the workspace.
You retrieve data from logs using log queries written with the [Kusto query language](../azure-monitor/logs/get-started-queries.md). This section provides a few examples of common queries you might use for Site Recovery monitoring. > [!NOTE]
-> Some of the examples use **replicationProviderName_s** set to **A2A**. This retrieves Azure VMs that are replicated to a secondary Azure region using Site Recovery. In these examples, you can replace **A2A** with **InMageAzureV2**, if you want to retrieve on-premises VMware VMs or physical servers that are replicated to Azure using Site Recovery.
+> Some of the examples use **replicationProviderName_s** set to **A2A**. This retrieves Azure VMs that are replicated to a secondary Azure region using Site Recovery. In these examples, you can replace **A2A** with **InMageRcm**, if you want to retrieve on-premises VMware VMs or physical servers that are replicated to Azure using Site Recovery.
### Query replication health
This query plots a summary table for VMware VMs and physical servers replicated
```
AzureDiagnostics
-| where replicationProviderName_s == "InMageAzureV2"
+| where replicationProviderName_s == "InMageRcm"
| where isnotempty(name_s) and isnotnull(name_s)
| summarize hint.strategy=partitioned arg_max(TimeGenerated, *) by name_s
| project VirtualMachine = name_s, Vault = Resource, ReplicationHealth = replicationHealth_s, Status = protectionState_s, RPO_in_seconds = rpoInSeconds_d, TestFailoverStatus = failoverHealth_s, AgentVersion = agentVersion_s, ReplicationError = replicationHealthErrors_s, ProcessServer = processServerName_g
AzureDiagnostics
You can set up Site Recovery alerts based on Azure Monitor data. [Learn more](../azure-monitor/alerts/alerts-log.md#create-a-new-log-alert-rule-in-the-azure-portal) about setting up log alerts. > [!NOTE]
-> Some of the examples use **replicationProviderName_s** set to **A2A**. This sets alerts for Azure VMs that are replicated to a secondary Azure region. In these examples, you can replace **A2A** with **InMageAzureV2** if you want to set alerts for on-premises VMware VMs or physical servers replicated to Azure.
+> Some of the examples use **replicationProviderName_s** set to **A2A**. This sets alerts for Azure VMs that are replicated to a secondary Azure region. In these examples, you can replace **A2A** with **InMageRcm** if you want to set alerts for on-premises VMware VMs or physical servers replicated to Azure.
### Multiple machines in a critical state
site-recovery Move From Classic To Modernized Vmware Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/move-from-classic-to-modernized-vmware-disaster-recovery.md
Ensure the following for the replicated items you're planning to move:
- The replicated item isn't replicating the data from Azure to on-premises. - The initial replication isn't in progress and has already been completed. - The replicated item isn't in the ‘resynchronization’ state. -- The configuration server’s version is 9.50 or later and its health is in a non critical state.
+- The configuration server's version is 9.50 or later and its health is in a noncritical state.
- The configuration server has a healthy heartbeat. - The mobility service agent's version, installed on the source machine, is 9.50 or later. - The Recovery Services vaults with MSI enabled are supported.
The same formula is used to calculate time for migration and is shown on the por
## How to define required infrastructure
-When migrating machines from classic to modernized architecture, you'll need to make sure that the required infrastructure has already been registered in the modernized Recovery Services vault. Refer to the replication applianceΓÇÖs [sizing and capacity details](./deploy-vmware-azure-replication-appliance-modernized.md#sizing-and-capacity) to help define the required infrastructure.
+When migrating machines from classic to modernized architecture, you'll need to make sure that the required infrastructure has already been registered in the modernized Recovery Services vault. Refer to the replication applianceΓÇÖs [sizing and capacity details](./replication-appliance-support-matrix.md#sizing-and-capacity) to help define the required infrastructure.
-As a rule, you should set up the same number of replication appliances, as the number of process servers in your classic Recovery Services vault. In the classic vault, if there was one configuration server and four process servers, then you should set up four replication appliances in the modernized Recovery Services vault.
+As a rule, you should set up the same number of replication appliances as the number of process servers in your classic Recovery Services vault. In the classic vault, if there was one configuration server and four process servers, then you should set up four replication appliances in the modernized Recovery Services vault.
## Pricing
site-recovery Replication Appliance Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/replication-appliance-support-matrix.md
+
+ Title: Support requirements for Azure Site Recovery replication appliance
+description: This article describes support and requirements when deploying the replication appliance for VMware disaster recovery to Azure with Azure Site Recovery - Modernized
++ Last updated : 08/09/2023++++
+# Support matrix for deploying the Azure Site Recovery replication appliance - Modernized
+
+This article describes support and requirements when deploying the replication appliance for VMware disaster recovery to Azure with Azure Site Recovery - Modernized.
+
+>[!NOTE]
+> The information in this article applies to Azure Site Recovery - Modernized. For information about configuration server requirements in Classic releases, [see this article](vmware-azure-configuration-server-requirements.md).
+
+>[!NOTE]
+> Ensure you create a new and exclusive Recovery Services vault for setting up the ASR replication appliance. Don't use an existing vault.
+
+You deploy an on-premises replication appliance when you use [Azure Site Recovery](site-recovery-overview.md) for disaster recovery of VMware VMs or physical servers to Azure.
+
+- The replication appliance coordinates communications between on-premises VMware and Azure. It also manages data replication.
+- [Learn more](vmware-azure-architecture-modernized.md) about the Azure Site Recovery replication appliance components and processes.
+
+## Pre-requisites
+
+### Hardware requirements
+
+**Component** | **Requirement**
+ |
+CPU cores | 8
+RAM | 32 GB
+Number of disks | 2, including the OS disk - 80 GB and a data disk - 620 GB
+
+### Software requirements
+
+**Component** | **Requirement**
+ |
+Operating system | Windows Server 2019
+Operating system locale | English (en-*)
+Windows Server roles | Don't enable these roles: <br> - Active Directory Domain Services <br>- Internet Information Services <br> - Hyper-V
+Group policies | Don't enable these group policies: <br> - Prevent access to the command prompt. <br> - Prevent access to registry editing tools. <br> - Trust logic for file attachments. <br> - Turn on Script Execution. <br> [Learn more](/previous-versions/windows/it-pro/windows-7/gg176671(v=ws.10))
+IIS | - No pre-existing default website <br> - No pre-existing website/application listening on port 443 <br>- Enable [anonymous authentication](/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/cc731244(v=ws.10)) <br> - Enable [FastCGI](/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/cc753077(v=ws.10)) setting
+FIPS (Federal Information Processing Standards) | Don't enable FIPS mode|
+
+### Network requirements
+
+|**Component** | **Requirement**|
+| | |
+|Fully qualified domain name (FQDN) | Static|
+|Ports | 443 (Control channel orchestration)<br>9443 (Data transport)|
+|NIC type | VMXNET3 (if the appliance is a VMware VM)|
+|NAT | Supported |
++
+#### Allow URLs
+
+Ensure the following URLs are allowed and reachable from the Azure Site Recovery replication appliance for continuous connectivity:
+
+ | **URL** | **Details** |
+ | - | -|
+ | portal.azure.com | Navigate to the Azure portal. |
+ | `login.windows.net `<br>`graph.windows.net `<br>`*.msftauth.net`<br>`*.msauth.net`<br>`*.microsoft.com`<br>`*.live.com `<br>`*.office.com ` | To sign-in to your Azure subscription. |
+ |`*.microsoftonline.com `|Create Azure Active Directory (AD) apps for the appliance to communicate with Azure Site Recovery. |
+ |management.azure.com |Create Azure AD apps for the appliance to communicate with the Azure Site Recovery service. |
+ |`*.services.visualstudio.com `|Upload app logs used for internal monitoring. |
+ |`*.vault.azure.net `|Manage secrets in the Azure Key Vault. Note: Ensure that the machines that need to be replicated have access to this URL. |
+ |aka.ms |Allow access to "also known as" links. Used for Azure Site Recovery appliance updates. |
+ |download.microsoft.com/download |Allow downloads from Microsoft download. |
+ |`*.servicebus.windows.net `|Communication between the appliance and the Azure Site Recovery service. |
+ |`*.discoverysrv.windowsazure.com `<br><br>`*.hypervrecoverymanager.windowsazure.com `<br><br> `*.backup.windowsazure.com ` |Connect to Azure Site Recovery micro-service URLs.
+ |`*.blob.core.windows.net `|Upload data to Azure storage, which is used to create target disks. |
+ | `*.prod.migration.windowsazure.com `| To discover your on-premises estate.
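A quick way to spot-check outbound connectivity from the appliance is `Test-NetConnection`; a minimal sketch over a few of the endpoints above (wildcard entries need a concrete hostname substituted, so the list here is only an example):

```azurepowershell
# Sketch: spot-check HTTPS reachability of a few required endpoints.
$endpoints = @("portal.azure.com", "management.azure.com", "download.microsoft.com")
foreach ($endpoint in $endpoints) {
    Test-NetConnection -ComputerName $endpoint -Port 443 |
        Select-Object ComputerName, TcpTestSucceeded
}
```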
+
+#### Allow URLs for government clouds
+
+Ensure the following URLs are allowed and reachable from the Azure Site Recovery replication appliance for continuous connectivity, when enabling replication to a government cloud:
+
+ | **URL for Fairfax** | **URL for Mooncake** | **Details** |
+ | - | -| -|
+ | `login.microsoftonline.us/*` <br> `graph.microsoftazure.us` | `login.chinacloudapi.cn/*` <br> `graph.chinacloudapi.cn` | To sign-in to your Azure subscription. |
+ | `portal.azure.us` | `portal.azure.cn` |Navigate to the Azure portal. |
+ | `*.microsoftonline.us/*` <br> `management.usgovcloudapi.net` | `*.microsoftonline.cn/*` <br> `management.chinacloudapi.cn/*` | Create Azure AD apps for the appliance to communicate with the Azure Site Recovery service. |
+ | `*.hypervrecoverymanager.windowsazure.us` <br> `*.migration.windowsazure.us` <br> `*.backup.windowsazure.us` | `*.hypervrecoverymanager.windowsazure.cn` <br> `*.migration.windowsazure.cn` <br> `*.backup.windowsazure.cn` | Connect to Azure Site Recovery micro-service URLs. |
+ |`*.vault.usgovcloudapi.net`| `*.vault.azure.cn` |Manage secrets in the Azure Key Vault. Note: Ensure that the machines, which need to be replicated have access to this URL. |
++
+### Folder exclusions from Antivirus program
+
+#### If Antivirus Software is active on appliance
+
+Exclude the following folders from the antivirus software for smooth replication and to avoid connectivity issues.
+
+C:\ProgramData\Microsoft Azure <br>
+C:\ProgramData\ASRLogs <br>
+C:\Windows\Temp\MicrosoftAzure <br>
+C:\Program Files\Microsoft Azure Appliance Auto Update <br>
+C:\Program Files\Microsoft Azure Appliance Configuration Manager <br>
+C:\Program Files\Microsoft Azure Push Install Agent <br>
+C:\Program Files\Microsoft Azure RCM Proxy Agent <br>
+C:\Program Files\Microsoft Azure Recovery Services Agent <br>
+C:\Program Files\Microsoft Azure Server Discovery Service <br>
+C:\Program Files\Microsoft Azure Site Recovery Process Server <br>
+C:\Program Files\Microsoft Azure Site Recovery Provider <br>
+C:\Program Files\Microsoft Azure to on-premises Reprotect agent <br>
+C:\Program Files\Microsoft Azure VMware Discovery Service <br>
+C:\Program Files\Microsoft on-premises to Azure Replication agent <br>
+E:\ <br>
+
+#### If Antivirus software is active on source machine
+
+If antivirus software is active on the source machine, the installation folder should be excluded. So, exclude the folder C:\ProgramData\ASR\agent for smooth replication.
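If the antivirus in question is Microsoft Defender Antivirus, the exclusions can be added with `Add-MpPreference`; a minimal sketch (other antivirus products have their own exclusion mechanisms):

```azurepowershell
# Sketch: add Microsoft Defender Antivirus folder exclusions.
# On the source machine, only the agent folder needs to be excluded.
Add-MpPreference -ExclusionPath "C:\ProgramData\ASR\agent"

# On the appliance, exclude the folders listed above, for example:
Add-MpPreference -ExclusionPath "C:\ProgramData\Microsoft Azure", "C:\ProgramData\ASRLogs"
```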
+
+## Sizing and capacity
+
+An appliance that uses an in-built process server to protect the workload can handle up to 200 virtual machines, based on the following configurations:
+
+ |CPU | Memory | Cache disk size | Data change rate | Protected machines |
+ ||-|--||-|
+ |16 vCPUs (2 sockets * 8 cores @ 2.5 GHz) | 32 GB | 1 TB | >1 TB to 2 TB | Use to replicate 151 to 200 machines.|
+
+- You can perform discovery of all the machines in a vCenter server, using any of the replication appliances in the vault.
+
+- You can [switch a protected machine](switch-replication-appliance-modernized.md), between different appliances in the same vault, given the selected appliance is healthy.
+
+For detailed information about how to use multiple appliances and how to fail over a replication appliance, see [this article](switch-replication-appliance-modernized.md).
+
+## Next steps
+
+- [Learn](vmware-azure-set-up-replication-tutorial-modernized.md) how to set up disaster recovery of VMware VMs to Azure.
+- [Learn](../site-recovery/deploy-vmware-azure-replication-appliance-modernized.md) how to deploy Azure Site Recovery replication appliance.
site-recovery Service Updates How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/service-updates-how-to.md
Title: Updates and component upgrades in Azure Site Recovery
-description: Provides an overview of Azure Site Recovery service updates, and component upgrades.
+description: Provides an overview of Azure Site Recovery service updates, MARS agent upgrades, and component upgrades.
We recommend always upgrading to the latest component versions:
Review the latest update rollup (version N) in [this article](site-recovery-whats-new.md). Remember that Site Recovery provides support for N-4 versions. - ## Component expiry Site Recovery notifies you of expired components (or nearing expiry) by email (if you subscribed to email notifications), or on the vault dashboard in the portal.
The example in the table shows how this works.
## Between an on-premises VMM site and Azure+ 1. Download the update for the Microsoft Azure Site Recovery Provider. 2. Install the Provider on the VMM server. If VMM is deployed in a cluster, install the Provider on all cluster nodes.
-3. Install the latest Microsoft Azure Recovery Services agent on all Hyper-V hosts or cluster nodes.
-
+3. Install the latest Microsoft Azure Recovery Services agent (MARS for Azure Site Recovery) on all Hyper-V hosts or cluster nodes.
## Between two on-premises VMM sites+ 1. Download the latest update for the Microsoft Azure Site Recovery Provider. 2. Install the latest Provider on the VMM server managing the secondary recovery site. If VMM is deployed in a cluster, install the Provider on all cluster nodes. 3. After the recovery site is updated, install the Provider on the VMM server that's managing the primary site. ## Next steps
-Follow our [Azure Updates](https://azure.microsoft.com/updates/?product=site-recovery) page to track new updates and releases.
+Follow our [Azure Updates](https://azure.microsoft.com/updates/?product=site-recovery) page to track new updates and releases.
site-recovery Site Recovery Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/site-recovery-overview.md
Title: About Azure Site Recovery
description: Provides an overview of the Azure Site Recovery service, and summarizes disaster recovery and migration deployment scenarios. Previously updated : 12/14/2022 Last updated : 07/24/2023
Azure Recovery Services contributes to your BCDR strategy:
- **Site Recovery service**: Site Recovery helps ensure business continuity by keeping business apps and workloads running during outages. Site Recovery [replicates](azure-to-azure-quickstart.md) workloads running on physical and virtual machines (VMs) from a primary site to a secondary location. When an outage occurs at your primary site, you fail over to a secondary location, and access apps from there. After the primary location is running again, you can fail back to it. - **Backup service**: The [Azure Backup](../backup/index.yml) service keeps your data safe and recoverable.
+Azure Site Recovery offers a *High Churn* option that enables you to configure disaster recovery for Azure VMs with data churn of up to 100 MB/s. This option helps you enable disaster recovery for more I/O-intensive workloads. [Learn more](../site-recovery/concepts-azure-to-azure-high-churn-support.md).
+ Site Recovery can manage replication for: - Azure VMs replicating between Azure regions
Site Recovery can manage replication for:
**VMware VM replication** | You can replicate VMware VMs to Azure using the improved Azure Site Recovery replication appliance that offers better security and resilience than the configuration server. For more information, see [Disaster recovery of VMware VMs](vmware-azure-about-disaster-recovery.md). **On-premises VM replication** | You can replicate on-premises VMs and physical servers to Azure, or to a secondary on-premises datacenter. Replication to Azure eliminates the cost and complexity of maintaining a secondary datacenter. **Workload replication** | Replicate any workload running on supported Azure VMs, on-premises Hyper-V and VMware VMs, and Windows/Linux physical servers.
-**Data resilience** | Site Recovery orchestrates replication without intercepting application data. When you replicate to Azure, data is stored in Azure storage, with the resilience that provides. When failover occurs, Azure VMs are created based on the replicated data. This also applies to Public MEC to Azure region Azure Site Recovery scenario. In case of Azure Public MEC to Public MEC Azure Site Recovery scenario (the ASR functionality for Public MEC is in preview state), data is stored in the Public MEC.
+**Data resilience** | Site Recovery orchestrates replication without intercepting application data. When you replicate to Azure, data is stored in Azure storage, with the resilience that provides. When failover occurs, Azure VMs are created based on the replicated data. This also applies to the Public MEC to Azure region Azure Site Recovery scenario. In the Azure Public MEC to Public MEC Azure Site Recovery scenario (the Azure Site Recovery functionality for Public MEC is in preview), data is stored in the Public MEC.
**RTO and RPO targets** | Keep recovery time objectives (RTO) and recovery point objectives (RPO) within organizational limits. Site Recovery provides continuous replication for Azure VMs and VMware VMs, and replication frequency as low as 30 seconds for Hyper-V. You can reduce RTO further by integrating with [Azure Traffic Manager](https://azure.microsoft.com/blog/reduce-rto-by-using-azure-traffic-manager-with-azure-site-recovery/). **Keep apps consistent over failover** | You can replicate using recovery points with application-consistent snapshots. These snapshots capture disk data, all data in memory, and all transactions in process. **Testing without disruption** | You can easily run disaster recovery drills, without affecting ongoing replication.
site-recovery Site Recovery Runbook Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/site-recovery-runbook-automation.md
Previously updated : 08/01/2023 Last updated : 08/16/2023 # Add Azure Automation runbooks to recovery plans
site-recovery Site Recovery Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/site-recovery-whats-new.md
For Site Recovery components, we support N-4 versions, where N is the latest rel
**Providers and agents** | Updates to Site Recovery agents and providers as detailed in the rollup KB article. **Issue fixes/improvements** | Many fixes and improvements as detailed in the rollup KB article. **Azure VM disaster recovery** | Added support for Oracle Linux 8.7 with UEK7 kernel, RHEL 9, and CentOS 9 Linux distros.
-**VMware VM/physical disaster recovery to Azure** | Added support for Oracle Linux 8.7 with UEK7 kernel, RHEL 9, Cent OS 9 and Oracle Linux 9 Linux distros. <br> <br/> Added support for Windows Server 2019 as the ASR replication appliance. <br> <br/> Added support for Microsoft Edge to be the default browser in Appliance Configuration Manager. <br> <br/> Added support to select an Availability set or a Proximity Placement group, after enabling replication using modernized VMware/Physical machine replication scenario.
+**VMware VM/physical disaster recovery to Azure** | Added support for Oracle Linux 8.7 with UEK7 kernel, RHEL 9, CentOS 9, and Oracle Linux 9 Linux distros. <br> <br/> Added support for Windows Server 2019 as the Azure Site Recovery replication appliance. <br> <br/> Added support for Microsoft Edge as the default browser in Appliance Configuration Manager. <br> <br/> Added support to select an Availability set or a Proximity Placement group, after enabling replication using the modernized VMware/Physical machine replication scenario.
## Updates (February 2023)
For Site Recovery components, we support N-4 versions, where N is the latest rel
**Providers and agents** | Updates to Site Recovery agents and providers as detailed in the rollup KB article. **Issue fixes/improvements** | Many fixes and improvements as detailed in the rollup KB article. **Azure VM disaster recovery** | Added support for Debian 11 and SUSE Linux Enterprise Server 15 SP 4 Linux distros.
-**VMware VM/physical disaster recovery to Azure** | Added support for Debian 11 and SUSE Linux Enterprise Server 15 SP 4 Linux distro.<br/><br/> Added Modernized VMware to Azure DR support for government clouds. [Learn more](deploy-vmware-azure-replication-appliance-modernized.md#allow-urls-for-government-clouds).
+**VMware VM/physical disaster recovery to Azure** | Added support for Debian 11 and SUSE Linux Enterprise Server 15 SP 4 Linux distros.<br/><br/> Added Modernized VMware to Azure DR support for government clouds. [Learn more](./replication-appliance-support-matrix.md#allow-urls-for-government-clouds).
## Updates (October 2022)
site-recovery Vmware Physical Large Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-physical-large-deployment.md
Title: Scale VMware/physical disaster recovery with Azure Site Recovery
description: Learn how to set up disaster recovery to Azure for large numbers of on-premises VMware VMs or physical servers with Azure Site Recovery. Previously updated : 12/30/2022 Last updated : 08/31/2023
Process server capacity is affected by data churn rates, and not by the number o
**CPU** | **Memory** | **Cache disk** | **Churn rate** | | |
-12 vCPUs<br> 2 sockets*6 cores @ 2.5 Ghz | 24 GB | 1 GB | Up to 2 TB a day
+12 vCPUs<br> 2 sockets*6 cores @ 2.5 Ghz | 24 GB | 1 TB | Up to 2 TB a day
Set up the process server as follows:
To run a large-scale failover, we recommend the following:
## Next steps > [!div class="nextstepaction"]
-> [Monitor Site Recovery](site-recovery-monitor-and-troubleshoot.md)
+> [Monitor Site Recovery](site-recovery-monitor-and-troubleshoot.md)
site-recovery Vmware Physical Secondary Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-physical-secondary-support-matrix.md
- Title: Support for VMware/physical disaster recovery to a secondary site with Azure Site Recovery
-description: Summarizes the support for disaster recovery of VMware VMs and physical servers to a secondary site with Azure Site Recovery.
--- Previously updated : 11/14/2019----
-# Support matrix for disaster recovery of VMware VMs and physical servers to a secondary site
-
-This article summarizes what's supported when you use the [Azure Site Recovery](site-recovery-overview.md) service for disaster recovery of VMware VMs or Windows/Linux physical servers to a secondary VMware site.
--- If you want to replicate VMware VMs or physical servers to Azure, review [this support matrix](vmware-physical-azure-support-matrix.md).-- If you want to replicate Hyper-V VMs to a secondary site, review [this support matrix](hyper-v-azure-support-matrix.md).-
-> [!NOTE]
-> Replication of on-premises VMware VMs and physical servers is provided by InMage Scout. InMage Scout is included in Azure Site Recovery service subscription.
-
-## End-of-support announcement
-The Site Recovery scenario for replication between on-premises VMware or physical datacenters is reaching end-of-support.
-- From August 2018, the scenario can't be configured in the Recovery Services vault, and the InMage Scout software can't be downloaded from the vault. Existing deployments will be supported.-
-Existing partners can onboard new customers to the scenario until support ends.
-- During 2018 and 2019, two updates will be released:-
- - Update 7: Fixes network configuration and compliance issues, and provides TLS 1.2 support.
- - Update 8: Adds support for Linux operating systems RHEL/CentOS 7.3/7.4/7.5, and for SUSE 12
- - After Update 8, no further updates will be released. There will be limited hotfix support for the operating systems added in Update 8, and bug fixes based on best effort.
-
-## Host servers
-
-**Operating system** | **Details**
- |
-vCenter server | vCenter 5.5, 6.0 and 6.5<br/><br/> If you run 6.0 or 6.5, note that only 5.5 features are supported.
--
-## Replicated VM support
-
-The following table summarizes operating system support for machines replicated with Site Recovery. Any workload can be running on the supported operating system.
-
-**Operating system** | **Details**
- |
-Windows Server | 64-bit Windows Server 2016, Windows Server 2012 R2, Windows Server 2012, Windows Server 2008 R2 with at least SP1.
-Linux | Red Hat Enterprise Linux 6.7, 6.8, 6.9, 7.1, 7.2 <br/><br/> Centos 6.5, 6.6, 6.7, 6.8, 6.9, 7.0, 7.1, 7.2 <br/><br/> Oracle Enterprise Linux 6.4, 6.5, 6.8 running the Red Hat compatible kernel, or Unbreakable Enterprise Kernel Release 3 (UEK3) <br/><br/> SUSE Linux Enterprise Server 11 SP3, 11 SP4
--
-## Linux machine storage
-
-Only Linux machines with the following storage can be replicated:
-- File system (EXT3, EXT4, ReiserFS, XFS).-- Multipath software-device Mapper.-- Volume manager (LVM2).-- Physical servers with HP CCISS controller storage are not supported.-- The ReiserFS file system is supported only on SUSE Linux Enterprise Server 11 SP3.-
-## Network configuration - Host/Guest VM
-
-**Configuration** | **Supported**
- |
-Host - NIC teaming | Yes
-Host - VLAN | Yes
-Host - IPv4 | Yes
-Host - IPv6 | No
-Guest VM - NIC teaming | No
-Guest VM - IPv4 | Yes
-Guest VM - IPv6 | No
-Guest VM - Windows/Linux - Static IP address | Yes
-Guest VM - Multi-NIC | Yes
--
-## Storage
-
-### Host storage
-
-**Storage (host)** | **Supported**
- |
-NFS | Yes
-SMB 3.0 | N/A
-SAN (ISCSI) | Yes
-Multi-path (MPIO) | Yes
-
-### Guest or physical server storage
-
-**Configuration** | **Supported**
- |
-VMDK | Yes
-VHD/VHDX | N/A
-Gen 2 VM | N/A
-Shared cluster disk | Yes
-Encrypted disk | No
-UEFI| Yes
-NFS | No
-SMB 3.0 | No
-RDM | Yes
-Disk > 1 TB | Yes
-Volume with striped disk > 1 TB<br/><br/> LVM | Yes
-Storage Spaces | No
-Hot add/remove disk | Yes
-Exclude disk | Yes
-Multi-path (MPIO) | N/A
-
-## Vaults
-
-**Action** | **Supported**
- |
-Move vaults across resource groups (within or across subscriptions) | No
-Move storage, network, Azure VMs across resource groups (within or across subscriptions) | No
-
-## Mobility service and updates
-
-The Mobility service coordinates replication between on-premises VMware servers or physical servers, and the secondary site. When you set up replication, you should make sure you have the latest version of the Mobility service, and of other components.
-
-| **Update** | **Details** |
-| | |
-|Scout updates | Scout updates are cumulative. <br/><br/> [Learn about and download](vmware-physical-secondary-disaster-recovery.md#updates) the latest Scout updates |
-|Component updates | Scout updates include updates for all components, including the RX server, configuration server, process and master target servers, vContinuum servers, and source servers you want to protect.<br/><br/> [Learn more](vmware-physical-secondary-disaster-recovery.md#download-and-install-component-updates).|
--
-## Next steps
-
-Download the [InMage Scout user guide](https://aka.ms/asr-scout-user-guide)
--- [Replicate Hyper-V VMs in VMM clouds to a secondary site](./hyper-v-vmm-disaster-recovery.md)-- [Replicate VMware VMs and physical servers to a secondary site](./vmware-physical-secondary-disaster-recovery.md)
spring-apps Concept App Customer Responsibilities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/concept-app-customer-responsibilities.md
description: This article describes customer responsibilities developing Azure S
+ Last updated 08/10/2023
spring-apps Connect Managed Identity To Azure Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/connect-managed-identity-to-azure-sql.md
> [!NOTE] > Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
-**This article applies to:** ✔️ Java ❌ C#
+**This article applies to:** ✔️ Java ✔️ C#
**This article applies to:** ✔️ Basic/Standard ✔️ Enterprise
spring.datasource.url=jdbc:sqlserver://$AZ_DATABASE_NAME.database.windows.net:14
#### [Service Connector](#tab/service-connector)
+> [!NOTE]
+> Service Connectors are created at the deployment level. If you create another deployment, you need to create the connections again.
+ Configure your app deployed to Azure Spring Apps to connect to an Azure SQL Database with a system-assigned managed identity using the `az spring connection create` command, as shown in the following example. 1. Use the following command to install the Service Connector passwordless extension for the Azure CLI:
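For reference, a minimal sketch of this step and the subsequent connection command follows, assuming the documented `serviceconnector-passwordless` extension name and placeholder resource names (none of these values come from this article):

```azurecli
# Install the Service Connector passwordless extension (assumed extension name).
az extension add --name serviceconnector-passwordless --upgrade

# Sketch: connect the app to Azure SQL Database with a system-assigned
# managed identity. All resource names are placeholders.
az spring connection create sql \
    --resource-group <spring-apps-resource-group> \
    --service <Azure-Spring-Apps-instance-name> \
    --app <app-name> \
    --target-resource-group <sql-resource-group> \
    --server <sql-server-name> \
    --database <database-name> \
    --system-identity
```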
spring-apps How To Bind Cosmos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-bind-cosmos.md
> [!NOTE] > Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
-**This article applies to:** ✔️ Java ❌ C#
+**This article applies to:** ✔️ Java ✔️ C#
**This article applies to:** ✔️ Basic/Standard ✔️ Enterprise
Instead of manually configuring your Spring Boot applications, you can automatic
* An Azure Cosmos DB database instance. * [Azure CLI](/cli/azure/install-azure-cli) version 2.45.0 or higher.
-## Prepare your Java project
+## Prepare your project
+
+### [Java](#tab/Java)
1. Add one of the following dependencies to your application's *pom.xml* file. Choose the dependency that is appropriate for your API type.
Instead of manually configuring your Spring Boot applications, you can automatic
1. Update the current app by running `az spring app deploy`, or create a new deployment for this change by running `az spring app deployment create`.
+### [Polyglot](#tab/Polyglot)
+
+All the connection strings and credentials are injected as environment variables, which you can reference in your application code.
+
+For the default environment variable names, see the following articles:
+
+* [Azure Cosmos DB for Table](../service-connector/how-to-integrate-cosmos-table.md?tabs=spring-apps#default-environment-variable-names-or-application-properties)
+* [Azure Cosmos DB for NoSQL](../service-connector/how-to-integrate-cosmos-sql.md?tabs=spring-apps#default-environment-variable-names-or-application-properties)
+* [Azure Cosmos DB for MongoDB](../service-connector/how-to-integrate-cosmos-db.md?tabs=spring-apps#default-environment-variable-names-or-application-properties)
+* [Azure Cosmos DB for Gremlin](../service-connector/how-to-integrate-cosmos-gremlin.md?tabs=spring-apps#default-environment-variable-names-or-application-properties)
+* [Azure Cosmos DB for Cassandra](../service-connector/how-to-integrate-cosmos-cassandra.md?tabs=spring-apps#default-environment-variable-names-or-application-properties)
+++ ## Connect your app to the Azure Cosmos DB ### [Service Connector](#tab/Service-Connector)
+> [!NOTE]
+> Service Connectors are created at the deployment level. If you create another deployment, you need to create the connections again.
+ #### Use the Azure CLI Use the Azure CLI to configure your Spring app to connect to a Cosmos NoSQL Database by using the `az spring connection create` command, as shown in the following example. Be sure to replace the variables in the example with actual values.
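A hedged sketch of that command follows; the `cosmos-sql` target type and all resource names are assumptions for illustration, not values confirmed by this article:

```azurecli
# Sketch: bind the app to an Azure Cosmos DB for NoSQL account with a
# system-assigned managed identity. All resource names are placeholders.
az spring connection create cosmos-sql \
    --resource-group <spring-apps-resource-group> \
    --service <Azure-Spring-Apps-instance-name> \
    --app <app-name> \
    --target-resource-group <cosmos-resource-group> \
    --account <cosmos-account-name> \
    --database <database-name> \
    --system-identity
```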
spring-apps How To Bind Mysql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-bind-mysql.md
> [!NOTE] > Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
-**This article applies to:** ✔️ Java ❌ C#
+**This article applies to:** ✔️ Java ✔️ C#
**This article applies to:** ✔️ Basic/Standard ✔️ Enterprise
With Azure Spring Apps, you can connect selected Azure services to your applicat
* An Azure Database for MySQL Flexible Server instance. * [Azure CLI](/cli/azure/install-azure-cli) version 2.45.0 or higher.
-## Prepare your Java project
+## Prepare your project
+
+### [Java](#tab/Java)
1. In your project's *pom.xml* file, add the following dependency:
With Azure Spring Apps, you can connect selected Azure services to your applicat
1. Update the current app by running `az spring app deploy`, or create a new deployment for this change by running `az spring app deployment create`.
+### [Polyglot](#tab/Polyglot)
+
+All the connection strings and credentials are injected as environment variables, which you can reference in your application code.
+
+For the default environment variable names, see [Integrate Azure Database for MySQL with Service Connector](../service-connector/how-to-integrate-mysql.md#default-environment-variable-names-or-application-properties-and-sample-codes).
+++ ## Connect your app to the Azure Database for MySQL instance ### [Service Connector](#tab/Service-Connector)
+> [!NOTE]
+> Service Connectors are created at the deployment level. If you create another deployment, you need to create the connections again.
+ Follow these steps to configure your Spring app to connect to an Azure Database for MySQL Flexible Server with a system-assigned managed identity. 1. Use the following command to install the Service Connector passwordless extension for the Azure CLI.
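Once the extension is installed, the connection command might look like the following sketch; the `mysql-identity-id` argument and all resource names are assumptions for illustration:

```azurecli
# Sketch: connect the app to MySQL Flexible Server with a system-assigned
# managed identity. mysql-identity-id is assumed to reference the
# user-assigned identity of the server's Azure AD admin; all names are
# placeholders.
az spring connection create mysql-flexible \
    --resource-group <spring-apps-resource-group> \
    --service <Azure-Spring-Apps-instance-name> \
    --app <app-name> \
    --target-resource-group <mysql-resource-group> \
    --server <mysql-server-name> \
    --database <database-name> \
    --system-identity mysql-identity-id=<aad-admin-identity-resource-id>
```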
spring-apps How To Bind Postgres https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-bind-postgres.md
zone_pivot_groups: passwordless-postgresql
> [!NOTE] > Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
-**This article applies to:** ✔️ Java ❌ C#
+**This article applies to:** ✔️ Java ✔️ C#
**This article applies to:** ✔️ Basic/Standard ✔️ Enterprise
PostgreSQL authentication uses accounts stored in PostgreSQL. If you choose to u
* An Azure Database for PostgreSQL Flexible Server instance. * [Azure CLI](/cli/azure/install-azure-cli) version 2.45.0 or higher.
-## Prepare your Java project
+## Prepare your project
+
+### [Java](#tab/JavaFlex)
Use the following steps to prepare your project.
Use the following steps to prepare your project.
1. Update the current app by running `az spring app deploy`, or create a new deployment for this change by running `az spring app deployment create`.
+### [Polyglot](#tab/PolyglotFlex)
+
+All the connection strings and credentials are injected as environment variables, which you can reference in your application code.
+
+For the default environment variable names, see [Integrate Azure Database for PostgreSQL with Service Connector](../service-connector/how-to-integrate-postgres.md#default-environment-variable-names-or-application-properties).
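As an illustration, a non-Java app can read the injected value straight from its environment; the variable name below is an assumption based on the Service Connector defaults linked above, so check that article for the exact name generated for your client type:

```bash
# Sketch: print the injected connection string at runtime (assumed variable name).
echo "$AZURE_POSTGRESQL_CONNECTIONSTRING"
```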
+++ ## Bind your app to the Azure Database for PostgreSQL instance > [!NOTE] > Be sure to select only one of the following approaches to create a connection. If you've already created tables with one connection, other users can't access or modify the tables. When you try the other approach, the application will throw errors such as "Permission denied". To fix this issue, connect to a new database or delete and recreate an existing one.
+> [!NOTE]
+> Service Connectors are created at the deployment level. If you create another deployment, you need to create the connections again.
+ ### [Passwordless (Recommended)](#tab/Passwordlessflex) 1. Install the [Service Connector](../service-connector/overview.md) passwordless extension for the Azure CLI:
Use the following steps to bind your app using a secret.
* An Azure Database for PostgreSQL Single Server instance. * [Azure CLI](/cli/azure/install-azure-cli) version 2.45.0 or higher.
-## Prepare your Java project
+## Prepare your project
+
+### [Java](#tab/JavaSingle)
Use the following steps to prepare your project.
Use the following steps to prepare your project.
1. Update the current app by running `az spring app deploy`, or create a new deployment for this change by running `az spring app deployment create`.
+### [Polyglot](#tab/PolyglotSingle)
+
+All the connection strings and credentials are injected as environment variables, which you can reference in your application code.
+
+For the default environment variable names, see [Integrate Azure Database for PostgreSQL with Service Connector](../service-connector/how-to-integrate-postgres.md#default-environment-variable-names-or-application-properties).
+++ ## Bind your app to the Azure Database for PostgreSQL instance
+> [!NOTE]
+> Service Connectors are created at the deployment level. If you create another deployment, you need to create the connections again.
+ ### [Passwordless](#tab/PasswordlessSingle) 1. Install the [Service Connector](../service-connector/overview.md) passwordless extension for the Azure CLI:
spring-apps How To Bind Redis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-bind-redis.md
> [!NOTE] > Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
-**This article applies to:** ✔️ Java ❌ C#
+**This article applies to:** ✔️ Java ✔️ C#
**This article applies to:** ✔️ Basic/Standard ✔️ Enterprise
Instead of manually configuring your Spring Boot applications, you can automatic
If you don't have a deployed Azure Spring Apps instance, follow the steps in the [Quickstart: Deploy your first application to Azure Spring Apps](./quickstart.md).
-## Prepare your Java project
+## Prepare your project
+
+### [Java](#tab/Java)
1. Add the following dependency to your project's *pom.xml* file:
If you don't have a deployed Azure Spring Apps instance, follow the steps in the
1. Update the current deployment using `az spring app update` or create a new deployment using `az spring app deployment create`.
+### [Polyglot](#tab/Polyglot)
+
+All the connection strings and credentials are injected as environment variables, which you can reference in your application code.
+
+For the default environment variable names, see [Integrate Azure Cache for Redis with Service Connector](../service-connector/how-to-integrate-redis-cache.md#default-environment-variable-names-or-application-properties).
+++ ## Connect your app to the Azure Cache for Redis ### [Service Connector](#tab/Service-Connector)
+> [!NOTE]
+> Service Connectors are created at the deployment level. If you create another deployment, you need to create the connections again.
+ 1. Use the Azure CLI to configure your Spring app to connect to a Redis database with an access key using the `az spring connection create` command, as shown in the following example. ```azurecli
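# Sketch continuing the elided example: bind the app to Azure Cache for Redis
# with an access key (--secret). All resource names are placeholders, not
# values confirmed by this article.
az spring connection create redis \
    --resource-group <spring-apps-resource-group> \
    --service <Azure-Spring-Apps-instance-name> \
    --app <app-name> \
    --target-resource-group <redis-resource-group> \
    --server <redis-cache-name> \
    --database 0 \
    --secret
```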
spring-apps How To Configure Enterprise Spring Cloud Gateway Filters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-configure-enterprise-spring-cloud-gateway-filters.md
Last updated 07/12/2023 -+ # How to use VMware Spring Cloud Gateway route filters with the Azure Spring Apps Enterprise plan
spring-apps How To Configure Palo Alto https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-configure-palo-alto.md
This article describes how to use Azure Spring Apps with a Palo Alto firewall.
-For example, the [Azure Spring Apps reference architecture](./reference-architecture.md) includes an Azure Firewall to secure your applications. However, if your current deployments include a Palo Alto firewall, you can omit the Azure Firewall from the Azure Spring Apps deployment and use Palo Alto instead, as described in this article.
+If your current deployments include a Palo Alto firewall, you can omit the Azure Firewall from the Azure Spring Apps deployment and use Palo Alto instead, as described in this article.
-You should keep configuration information, such as rules and address wildcards, in CSV files in a Git repository. This article shows you how to use automation to apply these files to Palo Alto. To understand the configuration to be applied to Palo Alto, see [Customer responsibilities for running Azure Spring Apps in a virtual network](./vnet-customer-responsibilities.md).
+You should keep configuration information, such as rules and address wildcards, in CSV files in a Git repository. This article shows you how to use automation to apply these files to Palo Alto. To understand the configuration to be applied to Palo Alto, see [Customer responsibilities for running Azure Spring Apps in a virtual network](./vnet-customer-responsibilities.md).
-> [!Note]
+> [!NOTE]
> In describing the use of REST APIs, this article uses the PowerShell variable syntax to indicate names and values that are left to your discretion. Be sure to use the same values in all the steps. > > After you've configured the TLS/SSL certificate in Palo Alto, remove the `-SkipCertificateCheck` argument from all Palo Alto REST API calls in the examples below.
spring-apps How To Create User Defined Route Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-create-user-defined-route-instance.md
The following example shows how to add rules to your firewall. For more informat
az network firewall network-rule create \ --resource-group $RG \ --firewall-name $FWNAME \
- --collection-name 'asafwnr' -n 'apiudp' \
- --protocols 'UDP' \
- --source-addresses '*' \
- --destination-addresses "AzureCloud" \
- --destination-ports 1194 \
- --action allow \
- --priority 100
-az network firewall network-rule create \
- --resource-group $RG \
- --firewall-name $FWNAME \
- --collection-name 'asafwnr' -n 'springcloudtcp' \
+ --collection-name 'asafwnr' \
+ --name 'springcloudtcp' \
--protocols 'TCP' \ --source-addresses '*' \ --destination-addresses "AzureCloud" \ --destination-ports 443 445
-az network firewall network-rule create \
- --resource-group $RG \
- --firewall-name $FWNAME \
- --collection-name 'asafwnr' \
- --name 'time' \
- --protocols 'UDP' \
- --source-addresses '*' \
- --destination-fqdns 'ntp.ubuntu.com' \
- --destination-ports 123
# Add firewall application rules.
az network firewall application-rule create \
--collection-name 'aksfwar'\ --name 'fqdn' \ --source-addresses '*' \
- --protocols 'http=80' 'https=443' \
+ --protocols 'https=443' \
--fqdn-tags "AzureKubernetesService" \ --action allow --priority 100 ```
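To complete the picture, a user-defined route can then steer outbound traffic through the firewall. The following is a minimal sketch that reuses the variable conventions above; `$ROUTE_TABLE` and the route name are assumptions, not commands taken from this article:

```azurecli
# Get the firewall's private IP address to use as the next hop.
FWPRIVATE_IP=$(az network firewall show \
    --resource-group $RG \
    --name $FWNAME \
    --query "ipConfigurations[0].privateIPAddress" \
    --output tsv)

# Create a route table with a default route that points at the firewall.
az network route-table create --resource-group $RG --name $ROUTE_TABLE
az network route-table route create \
    --resource-group $RG \
    --route-table-name $ROUTE_TABLE \
    --name 'fw-default-route' \
    --address-prefix 0.0.0.0/0 \
    --next-hop-type VirtualAppliance \
    --next-hop-ip-address $FWPRIVATE_IP
```

Associate the route table with the Azure Spring Apps subnets afterward so that outbound traffic actually flows through the firewall.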
spring-apps How To Enterprise Configure Apm Integration And Ca Certificates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-enterprise-configure-apm-integration-and-ca-certificates.md
You can create an APM configuration and bind to app builds and deployments, as e
You can manage APM integration by configuring properties or secrets in the APM configuration using the Azure portal or the Azure CLI. > [!NOTE]
-> When configuring properties or secrets for APM, use key names without a prefix. For example, don't use a `DT_` prefix for a Dynatrace binding or `APPLICATIONINSIGHTS_` for Application Insights. Tanzu APM buildpacks transform the key name to the original environment variable name with a prefix.
+> When configuring properties or secrets via APM configurations, use key names without the APM name as prefix. For example, don't use a `DT_` prefix for Dynatrace or `APPLICATIONINSIGHTS_` for Application Insights. Tanzu APM buildpacks transform the key name to the original environment variable name with a prefix.
+>
+> If you intend to override or configure some properties or secrets, such as the app name or app level, set environment variables with the original, APM-prefixed names when you deploy the app.
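For the CLI path, a hedged sketch of creating such an APM configuration follows; the `az spring apm` command group and the un-prefixed `connection-string` key are assumptions based on recent CLI versions, not values confirmed by this article:

```azurecli
# Sketch: create an Application Insights APM configuration whose secret key
# omits the APPLICATIONINSIGHTS_ prefix, as the note above describes.
az spring apm create \
    --resource-group <resource-group-name> \
    --service <Azure-Spring-Apps-instance-name> \
    --name <apm-configuration-name> \
    --type ApplicationInsights \
    --secrets connection-string=<your-connection-string>
```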
##### [Azure portal](#tab/azure-portal)
Use the following steps to show, add, edit, or delete an APM configuration:
1. Open the [Azure portal](https://portal.azure.com). 1. In the navigation pane, select **APM**.
-1. To create an APM configuration, select **Add**. If you want to enable the APM configuration globally, select **Enable globally**. All the subsequent builds and deployments will use the APM configuration automatically.
+1. To create an APM configuration, select **Add**. If you want to enable the APM configuration globally, select **Enable globally**. All the subsequent builds and deployments use the APM configuration automatically.
:::image type="content" source="media/how-to-enterprise-configure-apm-integration-and-ca-certificates/add-apm.png" alt-text="Screenshot of the Azure portal showing the APM configuration page with the Add button highlighted." lightbox="media/how-to-enterprise-configure-apm-integration-and-ca-certificates/add-apm.png":::
Use the following steps to show, add, edit, or delete an APM configuration:
:::image type="content" source="media/how-to-enterprise-configure-apm-integration-and-ca-certificates/show-apm.png" alt-text="Screenshot of the Azure portal showing the APM configuration page with the Edit APM option selected." lightbox="media/how-to-enterprise-configure-apm-integration-and-ca-certificates/show-apm.png":::
-1. To delete an APM configuration, select the ellipsis (**...**) button for the configuration, then select **Delete**. If the APM configuration is used by any build or deployment, you won't be able to delete it.
+1. To delete an APM configuration, select the ellipsis (**...**) button for the configuration and then select **Delete**. If the APM configuration is used by any build or deployment, you aren't able to delete it.
:::image type="content" source="media/how-to-enterprise-configure-apm-integration-and-ca-certificates/delete-apm.png" alt-text="Screenshot of the Azure portal showing the APM configuration page with the Delete button highlighted." lightbox="media/how-to-enterprise-configure-apm-integration-and-ca-certificates/delete-apm.png":::
spring-apps How To Enterprise Deploy Polyglot Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-enterprise-deploy-polyglot-apps.md
The following table lists the features supported in Azure Spring Apps:
| Enable configuration of labels on the created image | Configures both OCI-specified labels with short environment variable names and arbitrary labels using a space-delimited syntax in a single environment variable. | `BP_IMAGE_LABELS` <br> `BP_OCI_AUTHORS` <br> See more envs [here](https://github.com/paketo-buildpacks/image-labels). | `--build-env BP_OCI_AUTHORS=<value>` | | Support building Maven-based applications from source. | Used for a multi-module project. Indicates the module to find the application artifact in. Defaults to the root module (empty). | `BP_MAVEN_BUILT_MODULE` | `--build-env BP_MAVEN_BUILT_MODULE=./gateway` |
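For instance, a source deploy that combines several of these build environment variables might look like the following sketch; the app name, module path, and JVM version are placeholders:

```azurecli
# Sketch: build from source for a multi-module Maven project, pinning the
# JVM version and adding an image label. All values are placeholders.
az spring app deploy \
    --resource-group <resource-group-name> \
    --service <Azure-Spring-Apps-instance-name> \
    --name <app-name> \
    --source-path . \
    --build-env BP_MAVEN_BUILT_MODULE=./gateway BP_JVM_VERSION=17 BP_OCI_AUTHORS=<value>
```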
+There are some limitations for Java Native Image. For more information, see the [Java Native Image limitations](#java-native-image-limitations) section.
+ ### Deploy PHP applications The buildpack for deploying PHP applications is [tanzu-buildpacks/php](https://network.tanzu.vmware.com/products/tbs-dependencies/#/releases/1335849/artifact_references).
spring-apps How To Prepare App Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-prepare-app-deployment.md
Azure Spring Apps will support the latest Spring Boot or Spring Cloud major vers
The following table lists the supported Spring Boot and Spring Cloud combinations:
-### [Basic/Standard plan](#tab/basic-standard-plan)
-
-| Spring Boot version | Spring Cloud version |
-||--|
-| 3.0.x | 2022.0.x |
-| 2.7.x | 2021.0.3+ aka Jubilee |
- ### [Enterprise plan](#tab/enterprise-plan) | Spring Boot version | Spring Cloud version |
The following table lists the supported Spring Boot and Spring Cloud combination
| 2.6.x | 2021.0.0+ aka Jubilee | | 2.5.x | 2020.3+ aka Ilford+ |
+### [Basic/Standard plan](#tab/basic-standard-plan)
+
+| Spring Boot version | Spring Cloud version |
+||--|
+| 3.0.x | 2022.0.x |
+| 2.7.x | 2021.0.3+ aka Jubilee |
+ For more information, see the following pages:
public class GatewayApplication {
### Distributed configuration
-#### [Basic/Standard plan](#tab/basic-standard-plan)
-
-To enable distributed configuration, include the following `spring-cloud-config-client` dependency in the dependencies section of your *pom.xml* file:
-
-```xml
-<dependency>
- <groupId>org.springframework.cloud</groupId>
- <artifactId>spring-cloud-config-client</artifactId>
-</dependency>
-<dependency>
- <groupId>org.springframework.cloud</groupId>
- <artifactId>spring-cloud-starter-bootstrap</artifactId>
-</dependency>
-```
-
-> [!WARNING]
-> Don't specify `spring.cloud.config.enabled=false` in your bootstrap configuration. Otherwise, your application stops working with Config Server.
- #### [Enterprise plan](#tab/enterprise-plan) To enable distributed configuration in the Enterprise plan, use [Application Configuration Service for VMware Tanzu®](https://docs.pivotal.io/tcs-k8s/0-1/), which is one of the proprietary VMware Tanzu components. Application Configuration Service for Tanzu is Kubernetes-native, and totally different from Spring Cloud Config Server. Application Configuration Service for Tanzu enables the management of Kubernetes-native ConfigMap resources that are populated from properties defined in one or more Git repositories.
To use Application Configuration Service for Tanzu, do the following steps for e
--config-file-pattern <config-file-pattern> ```
+#### [Basic/Standard plan](#tab/basic-standard-plan)
+
+To enable distributed configuration, include the following `spring-cloud-config-client` dependency in the dependencies section of your *pom.xml* file:
+
+```xml
+<dependency>
+ <groupId>org.springframework.cloud</groupId>
+ <artifactId>spring-cloud-config-client</artifactId>
+</dependency>
+<dependency>
+ <groupId>org.springframework.cloud</groupId>
+ <artifactId>spring-cloud-starter-bootstrap</artifactId>
+</dependency>
+```
+
+> [!WARNING]
+> Don't specify `spring.cloud.config.enabled=false` in your bootstrap configuration. Otherwise, your application stops working with Config Server.
+ ### Metrics
spring-apps How To Start Stop Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-start-stop-service.md
> [!NOTE] > Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
-**This article applies to:** ✔️ Basic/Standard ✔️ Enterprise
+**This article applies to:** ❌ Standard consumption and dedicated (Preview) ✔️ Basic/Standard ✔️ Enterprise
This article shows you how to start or stop your Azure Spring Apps service instance.
-> [!NOTE]
-> You can stop and start your Azure Spring Apps service instance to help you save costs, but you shouldn't stop and start a running instance for service recovery.
- Your applications running in Azure Spring Apps may not need to run continuously. For example, an application may not need to run continuously if you have a service instance that's used only during business hours. There may be times when Azure Spring Apps is idle and running only the system components.
-You can reduce the active footprint of Azure Spring Apps by reducing the running instances and ensuring costs for compute resources are reduced.
+You can reduce the active footprint of Azure Spring Apps by reducing the running instances, which reduces costs for compute resources. For more information, see [Start, stop, and delete an application in Azure Spring Apps](./how-to-start-stop-delete.md) and [Scale an application in Azure Spring Apps](./how-to-scale-manual.md).
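As an example of trimming the footprint at the app level, the following sketch stops one app and scales another down to a single instance; the app names are placeholders:

```azurecli
# Sketch: stop an idle app and scale another down to one instance.
az spring app stop \
    --resource-group <resource-group-name> \
    --service <Azure-Spring-Apps-instance-name> \
    --name <idle-app-name>

az spring app scale \
    --resource-group <resource-group-name> \
    --service <Azure-Spring-Apps-instance-name> \
    --name <app-name> \
    --instance-count 1
```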
-To reduce your costs further, you can completely stop your Azure Spring Apps service instance. All user apps and system components will be stopped. However, all your objects and network settings will be saved so you can restart your service instance and pick up right where you left off.
+To reduce your costs further, you can completely stop your Azure Spring Apps service instance. All user apps and system components are stopped. However, all your objects and network settings are saved so you can restart your service instance and pick up right where you left off.
-> [!NOTE]
-> The state of a stopped Azure Spring Apps service instance is preserved for up to 90 days. If your cluster is stopped for more than 90 days, you can't recover the cluster state.
+## Limitations
+
+The ability to stop and start your Azure Spring Apps service instance has the following limitations:
-You can only start, view, or delete a stopped Azure Spring Apps service instance. You must start your service instance before performing any update operation, such as creating or scaling an app.
+- You can stop and start your Azure Spring Apps service instance to help you save costs. However, you shouldn't stop and start a running instance for service recovery - for example, to recover from an invalid virtual network configuration.
+- The state of a stopped Azure Spring Apps service instance is preserved for up to 90 days. If your cluster is stopped for more than 90 days, you can't recover the cluster state.
+- You can only start, view, or delete a stopped Azure Spring Apps service instance. You must start your service instance before performing any update operation, such as creating or scaling an app.
+- If an Azure Spring Apps service instance stops or starts successfully, you have to wait at least 30 minutes before you can stop or start it again. However, if your last operation failed, you can retry without waiting.
+- For virtual network instances, the start operation may fail due to invalid virtual network configurations. For more information, see [Customer responsibilities for running Azure Spring Apps in a virtual network](./vnet-customer-responsibilities.md).
## Prerequisites
In the Azure portal, use the following steps to stop a running Azure Spring Apps
:::image type="content" source="media/how-to-start-stop-service/spring-cloud-stop-service.png" alt-text="Screenshot of Azure portal showing the Azure Spring Apps Overview page with the Stop button and Status value highlighted.":::
-1. After the instance stops, the status will show **Succeeded (Stopped)**.
+1. After the instance stops, the status shows **Succeeded (Stopped)**.
## Start a stopped instance
In the Azure portal, use the following steps to start a stopped Azure Spring App
:::image type="content" source="media/how-to-start-stop-service/spring-cloud-start-service.png" alt-text="Screenshot of Azure portal showing the Azure Spring Apps Overview page with the Start button and Status value highlighted.":::
-1. After the instance starts, the status will show **Succeeded (Running)**.
+1. After the instance starts, the status shows **Succeeded (Running)**.
## [Azure CLI](#tab/azure-cli)
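The CLI steps boil down to two commands; a minimal sketch, assuming placeholder names, follows:

```azurecli
# Sketch: stop a running service instance, then start it again later.
az spring stop --resource-group <resource-group-name> --name <instance-name>
az spring start --resource-group <resource-group-name> --name <instance-name>
```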
spring-apps How To Use Enterprise Api Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-use-enterprise-api-portal.md
The following sections describe configuration in API portal.
API portal supports authentication and authorization using single sign-on (SSO) with an OpenID identity provider (IdP) that supports the OpenID Connect Discovery protocol. > [!NOTE]
-> Only authorization servers supporting the OpenID Connect Discovery protocol are supported. Be sure to configure the external authorization server to allow redirects back to the gateway. Refer to your authorization server's documentation and add `https://<gateway-external-url>/login/oauth2/code/sso` to the list of allowed redirect URIs.
+> Only authorization servers supporting the OpenID Connect Discovery protocol are supported. Be sure to configure the external authorization server to allow redirects back to the API portal. Refer to your authorization server's documentation and add `https://<api-portal-external-url>/login/oauth2/code/sso` to the list of allowed redirect URIs.
| Property | Required? | Description | | - | - | - |
To set up SSO with Azure AD, see [How to set up single sign-on with Azure AD for
> [!NOTE] > If you configure the wrong SSO property, such as the wrong password, you should remove the entire SSO property and re-add the correct configuration.
-> [!IMPORTANT]
-> If you're using the SSO feature, only one instance count is supported.
- ### Configure the instance count Configuration of the instance count for API portal is supported, unless you're using SSO. If you're using the SSO feature, only one instance count is supported.
spring-apps Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/overview.md
> [!NOTE] > Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
-**This article applies to:** ✔️ Standard consumption and dedicated (Preview) ✔️ Basic/Standard ✔️ Enterprise
+**This article applies to:** ✔️ Enterprise ✔️ Standard consumption and dedicated (Preview) ✔️ Basic/Standard
Azure Spring Apps makes it easy to deploy Spring Boot applications to Azure without any code changes. The service manages the infrastructure of Spring applications so developers can focus on their code. Azure Spring Apps provides lifecycle management using comprehensive monitoring and diagnostics, configuration management, service discovery, CI/CD integration, blue-green deployments, and more.
The following quickstarts apply to the Basic/Standard plan only. For Enterprise
* [Set up Spring Cloud Config Server for Azure Spring Apps](quickstart-setup-config-server.md) * [Build and deploy apps to Azure Spring Apps](quickstart-deploy-apps.md)
-## Standard consumption and dedicated plan
-
-The Standard consumption and dedicated plan provides a hybrid pricing solution that combines the best of pay-as-you-go and resource-based pricing. With this comprehensive package, you have the flexibility to pay only for compute time as you get started, while enjoying enhanced cost predictability and significant savings when your resources scale up.
-
-When you create a Standard consumption and dedicated plan, a consumption workload profile is always created by default. You can additionally add dedicated workload profiles to the same plan to fit the requirements of your workload.
-
-Workload profiles determine the amount of compute and memory resources available to Spring apps deployed in the Standard consumption and dedicated plan. There are different machine sizes and characteristics with different workload profiles. For more information, see [Workload profiles in Consumption + Dedicated plan structure environments in Azure Container Apps (preview)](../container-apps/workload-profiles-overview.md).
-
-You can run your apps in any combination of consumption or dedicated workload profiles. Consider using the consumption workload profile when your applications need to start from and scale to zero. Use the dedicated workload profile when you need dedicated hardware for single tenancy, and for customizable compute as with a memory optimized machine. You can also use the dedicated workload profile to optimize for cost savings when resources are running at scale.
-
-The Standard consumption and dedicated plan simplifies the virtual network experience for running polyglot applications. In the Standard consumption and dedicated plan, when you deploy frontend applications as containers in Azure Container Apps, all your applications share the same virtual network in the same Azure Container Apps environment. There's no need to create disparate subnets and Network Security Groups for frontend apps, Spring apps, and the Spring service runtime.
-
-The following diagram shows the architecture of a virtual network in Azure Spring Apps:
--
-### Get started with the Standard consumption and dedicated plan
-
-The following articles help you get started using the Standard consumption and dedicated plan:
-
-* [Provision an Azure Spring Standard consumption and dedicated plan service instance](quickstart-provision-standard-consumption-service-instance.md)
-* [Create an Azure Spring Apps Standard consumption and dedicated plan instance in an Azure Container Apps environment with a virtual network](quickstart-provision-standard-consumption-app-environment-with-virtual-network.md)
-* [Access applications using Azure Spring Apps Standard consumption and dedicated plan in a virtual network](quickstart-access-standard-consumption-within-virtual-network.md)
-* [Deploy an event-driven application to Azure Spring Apps](quickstart-deploy-event-driven-app.md)
-* [Set up autoscale for applications in Azure Spring Apps Standard consumption and dedicated plan](quickstart-apps-autoscale-standard-consumption.md)
-* [Map a custom domain to Azure Spring Apps with the Standard consumption and dedicated plan](quickstart-standard-consumption-custom-domain.md)
-* [Analyze logs and metrics in the Azure Spring Apps Standard consumption and dedicated plan](quickstart-analyze-logs-and-metrics-standard-consumption.md)
-* [Enable your own persistent storage in Azure Spring Apps with the Standard consumption and dedicated plan](how-to-custom-persistent-storage-with-standard-consumption.md)
-* [Customer responsibilities for Azure Spring Apps Standard consumption and dedicated plan in a virtual network](standard-consumption-customer-responsibilities.md)
- ## Enterprise plan The Enterprise plan provides commercially supported Tanzu components with SLA assurance. For more information, see the [SLA for Azure Spring Apps](https://azure.microsoft.com/support/legal/sla/spring-apps). This support helps enterprise customers ship faster for mission-critical workloads with peace of mind. The Enterprise plan helps unlock SpringΓÇÖs full potential while including feature parity and region parity with the Standard plan.
As a quick reference, the articles listed previously and the articles in the fol
* [Enable system-assigned managed identity for an application in Azure Spring Apps](how-to-enable-system-assigned-managed-identity.md?pivots=sc-enterprise-tier) * [Use Application Insights Java In-Process Agent in Azure Spring Apps](how-to-application-insights.md?pivots=sc-enterprise-tier)
+## Standard consumption and dedicated plan
+
+The Standard consumption and dedicated plan provides a hybrid pricing solution that combines the best of pay-as-you-go and resource-based pricing. With this comprehensive package, you have the flexibility to pay only for compute time as you get started, while enjoying enhanced cost predictability and significant savings when your resources scale up.
+
+When you create a Standard consumption and dedicated plan, a consumption workload profile is always created by default. You can additionally add dedicated workload profiles to the same plan to fit the requirements of your workload.
+
+Workload profiles determine the amount of compute and memory resources available to Spring apps deployed in the Standard consumption and dedicated plan. There are different machine sizes and characteristics with different workload profiles. For more information, see [Workload profiles in Consumption + Dedicated plan structure environments in Azure Container Apps (preview)](../container-apps/workload-profiles-overview.md).
+
+You can run your apps in any combination of consumption or dedicated workload profiles. Consider using the consumption workload profile when your applications need to start from and scale to zero. Use the dedicated workload profile when you need dedicated hardware for single tenancy, and for customizable compute as with a memory optimized machine. You can also use the dedicated workload profile to optimize for cost savings when resources are running at scale.
+
+The Standard consumption and dedicated plan simplifies the virtual network experience for running polyglot applications. In the Standard consumption and dedicated plan, when you deploy frontend applications as containers in Azure Container Apps, all your applications share the same virtual network in the same Azure Container Apps environment. There's no need to create disparate subnets and Network Security Groups for frontend apps, Spring apps, and the Spring service runtime.
+
+The following diagram shows the architecture of a virtual network in Azure Spring Apps:
++
+### Get started with the Standard consumption and dedicated plan
+
+The following articles help you get started using the Standard consumption and dedicated plan:
+
+* [Provision an Azure Spring Standard consumption and dedicated plan service instance](quickstart-provision-standard-consumption-service-instance.md)
+* [Create an Azure Spring Apps Standard consumption and dedicated plan instance in an Azure Container Apps environment with a virtual network](quickstart-provision-standard-consumption-app-environment-with-virtual-network.md)
+* [Access applications using Azure Spring Apps Standard consumption and dedicated plan in a virtual network](quickstart-access-standard-consumption-within-virtual-network.md)
+* [Deploy an event-driven application to Azure Spring Apps](quickstart-deploy-event-driven-app.md)
+* [Set up autoscale for applications in Azure Spring Apps Standard consumption and dedicated plan](quickstart-apps-autoscale-standard-consumption.md)
+* [Map a custom domain to Azure Spring Apps with the Standard consumption and dedicated plan](quickstart-standard-consumption-custom-domain.md)
+* [Analyze logs and metrics in the Azure Spring Apps Standard consumption and dedicated plan](quickstart-analyze-logs-and-metrics-standard-consumption.md)
+* [Enable your own persistent storage in Azure Spring Apps with the Standard consumption and dedicated plan](how-to-custom-persistent-storage-with-standard-consumption.md)
+* [Customer responsibilities for Azure Spring Apps Standard consumption and dedicated plan in a virtual network](standard-consumption-customer-responsibilities.md)
+ ## Next steps > [!div class="nextstepaction"]
spring-apps Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/policy-reference.md
Title: Built-in policy definitions for Azure Spring Apps description: Lists Azure Policy built-in policy definitions for Azure Spring Apps. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2023 Last updated : 08/30/2023
spring-apps Principles Microservice Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/principles-microservice-apps.md
- Title: Java and base OS for Azure Spring Apps apps
-description: Principles for maintaining healthy Java and base operating system for Azure Spring Apps apps
---- Previously updated : 10/12/2021---
-# Java and Base OS for Azure Spring Apps apps
-
-> [!NOTE]
-> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
-
-**This article applies to:** ✔️ Java
-
-The following are principles for maintaining healthy Java and base operating system for Azure Spring Apps apps.
-
-## Principles for healthy Java and Base OS
-
-* Shall be the same base operating system across plans - Basic | Standard | Premium.
-
- * Currently, apps on Azure Spring Apps use a mix of Debian 10 and Ubuntu 18.04.
- * VMware Tanzu® Build Service™ uses Ubuntu 18.04.
-
-* Shall be the same base operating system regardless of deployment starting points - source | JAR
-
- * Currently, apps on Azure Spring Apps use a mix of Debian 10 and Ubuntu 18.04.
-
-* Base operating system shall be free of security vulnerabilities.
-
- * Debian 10 base operating system has 147 open CVEs.
- * Ubuntu 18.04 base operating system has 132 open CVEs.
-
-* Shall use JRE-headless.
-
- * Currently, apps on Azure Spring Apps use JDK. JRE-headless is a smaller image.
-
-* Shall use the most recent builds of Java.
-
- * Currently, apps on Azure Spring Apps use Java 8 build 242. This is an outdated build.
-
-Azul Systems will continuously scan for changes to base operating systems and keep the last built images up to date. Azure Spring Apps looks for changes to images and continuously updates them across deployments.
-
-## FAQ for Azure Spring Apps
-
-* Which versions of Java are supported? Major version and build number.
-
- * Support LTS versions - Java 8 and 11.
- * Uses the most recent build - for example, right now, Java 8 build 252 and Java 11 build 7.
-
-* Who built these Java runtimes?
-
- * Azul Systems.
-
-* What is the base operating system for images?
-
- * Ubuntu 20.04 LTS (Focal Fossa). Apps will continue to stay on the most recent LTS version of Ubuntu.
- * See [Ubuntu 20.04 LTS (Focal Fossa)](http://releases.ubuntu.com/focal/)
-
-* How can I download a supported Java runtime for local dev?
-
- * See [Install the JDK for Azure and Azure Stack](/azure/developer/java/fundamentals/java-jdk-install)
-
-* How can I get support for issues at the Java runtime level?
-
- * Open a support ticket with Azure Support.
-
-## Default deployment on Azure Spring Apps
-
-> ![Default deployment](media/spring-cloud-principles/spring-cloud-default-deployment.png)
-
-## Next steps
-
-* [Quickstart: Deploy your first Spring Boot app in Azure Spring Apps](./quickstart.md)
-* [Java long-term support for Azure and Azure Stack](/azure/developer/java/fundamentals/java-support-on-azure)
spring-apps Quickstart Configure Single Sign On Enterprise https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quickstart-configure-single-sign-on-enterprise.md
To complete the single sign-on experience, use the following steps to deploy the
--name identity-routes \ --service <Azure-Spring-Apps-service-instance-name> \ --app-name identity-service \
- --routes-file azure/routes/identity-service.json
+ --routes-file azure-spring-apps-enterprise/resources/json/routes/identity-service.json
``` ## Configure single sign-on for Spring Cloud Gateway
spring-apps Quickstart Deploy Apps Enterprise https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quickstart-deploy-apps-enterprise.md
Use the following steps to deploy and build applications. For these steps, make
--resource-group <resource-group-name> \ --name quickstart-builder \ --service <Azure-Spring-Apps-service-instance-name> \
- --builder-file azure/builder.json
+ --builder-file azure-spring-apps-enterprise/resources/json/tbs/builder.json
``` 1. Use the following command to build and deploy the payment service:
Use the following steps to configure Spring Cloud Gateway and configure routes t
--name cart-routes \ --service <Azure-Spring-Apps-service-instance-name> \ --app-name cart-service \
- --routes-file azure/routes/cart-service.json
+ --routes-file azure-spring-apps-enterprise/resources/json/routes/cart-service.json
``` 1. Use the following command to create routes for the order service:
Use the following steps to configure Spring Cloud Gateway and configure routes t
--name order-routes \ --service <Azure-Spring-Apps-service-instance-name> \ --app-name order-service \
- --routes-file azure/routes/order-service.json
+ --routes-file azure-spring-apps-enterprise/resources/json/routes/order-service.json
``` 1. Use the following command to create routes for the catalog service:
Use the following steps to configure Spring Cloud Gateway and configure routes t
--name catalog-routes \ --service <Azure-Spring-Apps-service-instance-name> \ --app-name catalog-service \
- --routes-file azure/routes/catalog-service.json
+ --routes-file azure-spring-apps-enterprise/resources/json/routes/catalog-service.json
``` 1. Use the following command to create routes for the frontend:
Use the following steps to configure Spring Cloud Gateway and configure routes t
--name frontend-routes \ --service <Azure-Spring-Apps-service-instance-name> \ --app-name frontend \
- --routes-file azure/routes/frontend.json
+ --routes-file azure-spring-apps-enterprise/resources/json/routes/frontend.json
``` 1. Use the following commands to retrieve the URL for Spring Cloud Gateway:
spring-apps Quickstart Deploy Event Driven App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quickstart-deploy-event-driven-app.md
The sample project is an event-driven application that subscribes to a [Service
:::image type="content" source="media/quickstart-deploy-event-driven-app/diagram.png" alt-text="Diagram showing the Azure Spring Apps event-driven app architecture." lightbox="media/quickstart-deploy-event-driven-app/diagram.png" border="false"::: [!INCLUDE [quickstart-tool-introduction](includes/quickstart-deploy-event-driven-app/quickstart-tool-introduction.md)]
The sample project is an event-driven application that subscribes to a [Service
## 1. Prerequisites --- An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]-- [Git](https://git-scm.com/downloads).-- [Java Development Kit (JDK)](/java/azure/jdk/), version 17.-- [Azure CLI](/cli/azure/install-azure-cli) version 2.45.0 or higher.-- ### [Azure portal](#tab/Azure-portal)
The sample project is an event-driven application that subscribes to a [Service
- An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)] - [Git](https://git-scm.com/downloads). - [Java Development Kit (JDK)](/java/azure/jdk/), version 17.-- [Azure Developer CLI (AZD)](https://aka.ms/azd-install), version 1.0.2 or higher.
+- [Azure Developer CLI (AZD)](/azure/developer/azure-developer-cli/install-azd), version 1.2.0 or higher.
Use the following steps to confirm that the event-driven app works correctly. Yo
1. Confirm that there's a new message sent to the `upper-case` queue. For more information, see the [Peek a message](../service-bus-messaging/explorer.md#peek-a-message) section of [Use Service Bus Explorer to run data operations on Service Bus](../service-bus-messaging/explorer.md). 3. Use the following command to check the app's log to investigate any deployment issue:
Use the following steps to confirm that the event-driven app works correctly. Yo
::: zone-end 3. From the navigation pane of the Azure Spring Apps instance overview page, select **Logs** to check the app's logs.
spring-apps Quickstart Deploy Infrastructure Vnet Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quickstart-deploy-infrastructure-vnet-azure-cli.md
The Enterprise deployment plan includes the following Tanzu components:
## Review the Azure CLI deployment script
-The deployment script used in this quickstart is from the [Azure Spring Apps reference architecture](reference-architecture.md).
-
-### [Standard plan](#tab/azure-spring-apps-standard)
-
+The deployment script used in this quickstart is from the [Azure Spring Apps reference architecture](/previous-versions/azure/spring-apps/reference-architecture).
### [Enterprise plan](#tab/azure-spring-apps-enterprise) :::code language="azurecli" source="~/azure-spring-apps-reference-architecture/CLI/brownfield-deployment/azuredeploySpringEnterprise.sh":::
+### [Standard plan](#tab/azure-spring-apps-standard)
++ ## Deploy the cluster
In this quickstart, you deployed an Azure Spring Apps instance into an existing
* [Simple Hello World](./quickstart.md?pivots=programming-language-java&tabs=Azure-CLI). * Use [custom domains](how-to-custom-domain.md) with Azure Spring Apps. * Expose applications in Azure Spring Apps to the internet using Azure Application Gateway. For more information, see [Expose applications with end-to-end TLS in a virtual network](expose-apps-gateway-end-to-end-tls.md).
-* View the secure end-to-end [Azure Spring Apps reference architecture](reference-architecture.md), which is based on the [Microsoft Azure Well-Architected Framework](/azure/architecture/framework/).
+* View the secure end-to-end [Azure Spring Apps reference architecture](/previous-versions/azure/spring-apps/reference-architecture), which is based on the [Microsoft Azure Well-Architected Framework](/azure/architecture/framework/).
spring-apps Quickstart Deploy Infrastructure Vnet Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quickstart-deploy-infrastructure-vnet-bicep.md
To deploy the cluster, use the following steps.
First, create an *azuredeploy.bicep* file with the following contents:
-### [Standard plan](#tab/azure-spring-apps-standard)
-- ### [Enterprise plan](#tab/azure-spring-apps-enterprise) :::code language="bicep" source="~/azure-spring-apps-reference-architecture/Bicep/brownfield-deployment/azuredeploySpringEnterprise.bicep":::
+### [Standard plan](#tab/azure-spring-apps-standard)
++ Next, open a Bash window and run the following Azure CLI command, replacing the *\<value>* placeholders with the following values:
In this quickstart, you deployed an Azure Spring Apps instance into an existing
* [Simple Hello World](./quickstart.md?pivots=programming-language-java&tabs=Azure-CLI). * Use [custom domains](how-to-custom-domain.md) with Azure Spring Apps. * Expose applications in Azure Spring Apps to the internet using Azure Application Gateway. For more information, see [Expose applications with end-to-end TLS in a virtual network](expose-apps-gateway-end-to-end-tls.md).
-* View the secure end-to-end [Azure Spring Apps reference architecture](reference-architecture.md), which is based on the [Microsoft Azure Well-Architected Framework](/azure/architecture/framework/).
+* View the secure end-to-end [Azure Spring Apps reference architecture](/previous-versions/azure/spring-apps/reference-architecture), which is based on the [Microsoft Azure Well-Architected Framework](/azure/architecture/framework/).
spring-apps Quickstart Deploy Infrastructure Vnet Terraform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quickstart-deploy-infrastructure-vnet-terraform.md
For more customization including custom domain support, see the [Azure Spring Ap
## Review the Terraform plan
-The configuration file used in this quickstart is from the [Azure Spring Apps reference architecture](reference-architecture.md).
-
-### [Standard plan](#tab/azure-spring-apps-standard)
-
+The configuration file used in this quickstart is from the [Azure Spring Apps reference architecture](/previous-versions/azure/spring-apps/reference-architecture).
### [Enterprise plan](#tab/azure-spring-apps-enterprise) :::code language="hcl" source="~/azure-spring-apps-reference-architecture/terraform/brownfield-deployment/Enterprise/main.tf":::
+### [Standard plan](#tab/azure-spring-apps-standard)
++ ## Apply the Terraform plan
In this quickstart, you deployed an Azure Spring Apps instance into an existing
* [Simple Hello World](./quickstart.md?pivots=programming-language-java&tabs=Azure-CLI) * Use [custom domains](how-to-custom-domain.md) with Azure Spring Apps. * Expose applications in Azure Spring Apps to the internet using Azure Application Gateway. For more information, see [Expose applications with end-to-end TLS in a virtual network](expose-apps-gateway-end-to-end-tls.md).
-* View the secure end-to-end [Azure Spring Apps reference architecture](reference-architecture.md), which is based on the [Microsoft Azure Well-Architected Framework](/azure/architecture/framework/).
+* View the secure end-to-end [Azure Spring Apps reference architecture](/previous-versions/azure/spring-apps/reference-architecture), which is based on the [Microsoft Azure Well-Architected Framework](/azure/architecture/framework/).
spring-apps Quickstart Deploy Infrastructure Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quickstart-deploy-infrastructure-vnet.md
The Enterprise deployment plan includes the following Tanzu components:
## Review the template
-The templates used in this quickstart are from the [Azure Spring Apps Reference Architecture](reference-architecture.md).
-
-### [Standard plan](#tab/azure-spring-apps-standard)
-
+The templates used in this quickstart are from the [Azure Spring Apps Reference Architecture](/previous-versions/azure/spring-apps/reference-architecture).
### [Enterprise plan](#tab/azure-spring-apps-enterprise) :::code language="json" source="~/azure-spring-apps-reference-architecture/ARM/brownfield-deployment/azuredeploySpringEnterprise.json":::
+### [Standard plan](#tab/azure-spring-apps-standard)
++ Two Azure resources are defined in the template:
To deploy the template, use the following steps.
First, select the following image to sign in to Azure and open a template. The template creates an Azure Spring Apps instance in an existing Virtual Network and a workspace-based Application Insights instance in an existing Azure Monitor Log Analytics Workspace.
-### [Standard plan](#tab/azure-spring-apps-standard)
-- ### [Enterprise plan](#tab/azure-spring-apps-enterprise) :::image type="content" source="../media/template-deployments/deploy-to-azure.svg" alt-text="Button to deploy the ARM template to Azure." border="false" link="https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-spring-apps-landing-zone-accelerator%2Freference-architecture%2FARM%2Fbrownfield-deployment%2fazuredeploySpringEnterprise.json":::
+### [Standard plan](#tab/azure-spring-apps-standard)
++ Next, enter values for the following fields:
In this quickstart, you deployed an Azure Spring Apps instance into an existing
* [Simple Hello World](./quickstart.md?pivots=programming-language-java&tabs=Azure-CLI) * Use [custom domains](how-to-custom-domain.md) with Azure Spring Apps. * Expose applications in Azure Spring Apps to the internet using Azure Application Gateway. For more information, see [Expose applications with end-to-end TLS in a virtual network](expose-apps-gateway-end-to-end-tls.md).
-* View the secure end-to-end [Azure Spring Apps reference architecture](reference-architecture.md), which is based on the [Microsoft Azure Well-Architected Framework](/azure/architecture/framework/).
+* View the secure end-to-end [Azure Spring Apps reference architecture](/previous-versions/azure/spring-apps/reference-architecture), which is based on the [Microsoft Azure Well-Architected Framework](/azure/architecture/framework/).
* Learn more about [Azure Resource Manager](../azure-resource-manager/management/overview.md).
spring-apps Quickstart Deploy Java Native Image App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quickstart-deploy-java-native-image-app.md
+
+ Title: Quickstart - Deploy your first Java Native Image application to Azure Spring Apps
+description: Describes how to deploy a Java Native Image application to Azure Spring Apps.
+++ Last updated : 08/29/2023++++
+# Quickstart: Deploy your first Java Native Image application to Azure Spring Apps
+
+> [!NOTE]
+> The first 50 vCPU hours and 100 GB hours of memory are free each month. For more information, see [Price Reduction - Azure Spring Apps does more, costs less!](https://techcommunity.microsoft.com/t5/apps-on-azure-blog/price-reduction-azure-spring-apps-does-more-costs-less/ba-p/3614058) on the [Apps on Azure Blog](https://techcommunity.microsoft.com/t5/apps-on-azure-blog/bg-p/AppsonAzureBlog).
+
+> [!NOTE]
+> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
+
+**This article applies to:** ❌ Basic/Standard ✔️ Enterprise
+
+This quickstart shows how to deploy a Spring Boot application to Azure Spring Apps as a Native Image.
+
+The [Native Image](https://www.graalvm.org/latest/reference-manual/native-image/) capability enables you to compile Java applications into standalone executables, known as Native Images. These executables can provide significant benefits, including faster startup times and lower runtime memory overhead compared to a traditional Java Virtual Machine (JVM).
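+
+For context, the following commands are a sketch of how you might produce such an executable locally with GraalVM Native Build Tools, assuming a Spring Boot 3 project with the `native` Maven profile configured; this local build isn't a step required by the quickstart:
+
+```bash
+# Compile the Spring Boot app ahead of time into a standalone executable.
+# Assumes GraalVM and the Native Build Tools `native` profile are configured.
+./mvnw -Pnative native:compile
+
+# Run the resulting executable; the binary name depends on the project
+# (spring-petclinic is assumed here), and no JVM is required at run time.
+./target/spring-petclinic
+```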
+
+The sample project is the Spring Petclinic application. The following screenshot shows the application:
++
+## 1. Prerequisites
+
+- An Azure subscription. If you don't have a subscription, create a [free account](https://azure.microsoft.com/free/) before you begin.
+- [Git](https://git-scm.com/downloads).
+- [Java Development Kit (JDK)](/java/azure/jdk/), version 17.
+- [Azure CLI](/cli/azure/install-azure-cli) version 2.45.0 or higher. Use the following command to install the Azure Spring Apps extension: `az extension add --name spring`
+- If you're deploying an Azure Spring Apps Enterprise plan instance for the first time in the target subscription, see the [Requirements](./how-to-enterprise-marketplace-offer.md#requirements) section of [View Azure Spring Apps Enterprise tier offering in Azure Marketplace](./how-to-enterprise-marketplace-offer.md).
++
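+The validation commands that follow reference environment variables such as `${AZURE_SPRING_APPS_NAME}` and `${NATIVE_APP_NAME}`, which are set during the earlier deployment steps (not shown here). As a minimal sketch, assuming hypothetical names, you might define them as follows:
+
+```bash
+# Hypothetical placeholder values; substitute the names used in your deployment.
+export RESOURCE_GROUP=my-resource-group
+export AZURE_SPRING_APPS_NAME=my-spring-apps-instance
+export NATIVE_APP_NAME=native-petclinic
+export JAR_APP_NAME=jar-petclinic
+```
+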
+## 5. Validate Native Image App
+
+Now you can access the deployed Native Image app to see whether it works. Use the following steps to validate:
+
+1. After the deployment has completed, you can run the following command to get the app URL:
+
+ ```azurecli
+ az spring app show \
+ --service ${AZURE_SPRING_APPS_NAME} \
+ --name ${NATIVE_APP_NAME} \
+ --output table
+ ```
+
+    You can access the app with the URL shown in the output as `Public Url`. The page should appear as you saw it on localhost. If you only need the URL itself, see the query sketch after these steps.
+
+1. Use the following command to check the app's log to investigate any deployment issue:
+
+ ```azurecli
+ az spring app logs \
+ --service ${AZURE_SPRING_APPS_NAME} \
+ --name ${NATIVE_APP_NAME}
+ ```
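+
+If you only need the URL itself, the following variation is a sketch that assumes the same instance and app names and filters the output with a JMESPath query:
+
+```azurecli
+az spring app show \
+    --service ${AZURE_SPRING_APPS_NAME} \
+    --name ${NATIVE_APP_NAME} \
+    --query properties.url \
+    --output tsv
+```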
+
+## 6. Compare performance for JAR and Native Image
+
+The following sections describe how to compare the performance between JAR and Native Image deployment.
+
+### Server startup time
+
+Use the following command to check the app's log for an entry like `Started PetClinicApplication in XXX seconds`, which reports the server startup time for a JAR app:
+
+```azurecli
+az spring app logs \
+ --service ${AZURE_SPRING_APPS_NAME} \
+ --name ${JAR_APP_NAME}
+```
+
+The server startup time is around 25 s for a JAR app.
+
+Use the following command to check the app's log to get the server startup time for a Native Image app:
+
+```azurecli
+az spring app logs \
+ --service ${AZURE_SPRING_APPS_NAME} \
+ --name ${NATIVE_APP_NAME}
+```
+
+The server startup time is less than 0.5 s for a Native Image app.
+
+### Memory usage
+
+Use the following command to scale down the memory size to 512 Mi for a Native Image app:
+
+```azurecli
+az spring app scale \
+ --service ${AZURE_SPRING_APPS_NAME} \
+ --name ${NATIVE_APP_NAME} \
+ --memory 512Mi
+```
+
+The command output should show that the Native Image app started successfully.
+
+Use the following command to scale down the memory size to 512 Mi for the JAR app:
+
+```azurecli
+az spring app scale \
+ --service ${AZURE_SPRING_APPS_NAME} \
+ --name ${JAR_APP_NAME} \
+ --memory 512Mi
+```
+
+The command output should show that the JAR app failed to start due to insufficient memory. The output message should be similar to the following example: `Terminating due to java.lang.OutOfMemoryError: Java heap space`.
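+
+To recover the JAR app after this intentional failure, you can scale its memory back up. The following command is a sketch that assumes a 1 Gi allocation is enough for the JAR deployment:
+
+```azurecli
+az spring app scale \
+    --service ${AZURE_SPRING_APPS_NAME} \
+    --name ${JAR_APP_NAME} \
+    --memory 1Gi
+```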
+
+The following figure shows the optimized memory usage for the Native Image deployment under a constant workload of 400 requests per second into the Petclinic application. The memory usage is about one fifth of the memory consumed by the equivalent JAR deployment.
++
+Native Images offer quicker startup times and reduced runtime memory overhead when compared to the conventional Java Virtual Machine (JVM).
+
+## 7. Clean up resources
+
+If you plan to continue working with subsequent quickstarts and tutorials, you might want to leave these resources in place. When you no longer need the resources, delete them by deleting the resource group. Use the following command to delete the resource group:
+
+```azurecli
+az group delete --name ${RESOURCE_GROUP}
+```
+
+## 8. Next steps
+
+> [!div class="nextstepaction"]
+> [How to deploy Java Native Image apps in the Azure Spring Apps Enterprise plan](./how-to-enterprise-deploy-polyglot-apps.md#deploy-java-native-image-applications-preview)
+
+> [!div class="nextstepaction"]
+> [Structured application log for Azure Spring Apps](./structured-app-log.md)
+
+> [!div class="nextstepaction"]
+> [Map an existing custom domain to Azure Spring Apps](./tutorial-custom-domain.md)
+
+> [!div class="nextstepaction"]
+> [Set up Azure Spring Apps CI/CD with GitHub Actions](./how-to-github-actions.md)
+
+> [!div class="nextstepaction"]
+> [Set up Azure Spring Apps CI/CD with Azure DevOps](./how-to-cicd.md)
+
+> [!div class="nextstepaction"]
+> [Use managed identities for applications in Azure Spring Apps](./how-to-use-managed-identities.md)
+
+> [!div class="nextstepaction"]
+> [Create a service connection in Azure Spring Apps with the Azure CLI](../service-connector/quickstart-cli-spring-cloud-connection.md)
+
+> [!div class="nextstepaction"]
+> [Run the polyglot ACME fitness store apps on Azure Spring Apps](./quickstart-sample-app-acme-fitness-store-introduction.md)
+
+For more information, see the following articles:
+
+- [Azure Spring Apps Samples](https://github.com/Azure-Samples/Azure-Spring-Cloud-Samples).
+- [Spring on Azure](/azure/developer/java/spring/)
+- [Spring Cloud Azure](/azure/developer/java/spring-framework/)
spring-apps Quickstart Deploy Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quickstart-deploy-web-app.md
The following diagram shows the architecture of the system:
:::image type="content" source="media/quickstart-deploy-web-app/diagram.png" alt-text="Diagram that shows the architecture of a Spring web application." border="false"::: This article provides the following options for deploying to Azure Spring Apps:
This article provides the following options for deploying to Azure Spring Apps:
## 1. Prerequisites ### [Azure portal](#tab/Azure-portal)
This article provides the following options for deploying to Azure Spring Apps:
- An Azure subscription. If you don't have a subscription, create a [free account](https://azure.microsoft.com/free/) before you begin. - [Git](https://git-scm.com/downloads). - [Java Development Kit (JDK)](/java/azure/jdk/), version 17.-- [Azure Developer CLI](https://aka.ms/azd-install), version 1.0.
+- [Azure Developer CLI (AZD)](/azure/developer/azure-developer-cli/install-azd), version 1.2.0 or higher.
::: zone-end - An Azure subscription. If you don't have a subscription, create a [free account](https://azure.microsoft.com/free/) before you begin. - [Git](https://git-scm.com/downloads). - [Java Development Kit (JDK)](/java/azure/jdk/), version 17. - [Azure CLI](/cli/azure/install-azure-cli) version 2.45.0 or higher. Use the following command to install the Azure Spring Apps extension: `az extension add --name spring`--- - If you're deploying an Azure Spring Apps Enterprise plan instance for the first time in the target subscription, see the [Requirements](./how-to-enterprise-marketplace-offer.md#requirements) section of [Enterprise plan in Azure Marketplace](./how-to-enterprise-marketplace-offer.md). ::: zone-end
This article provides the following options for deploying to Azure Spring Apps:
Now you can access the deployed app to see whether it works. Use the following steps to validate: -
-1. After the deployment has completed, use the following command to access the app with the URL retrieved:
-
- ```azurecli
- az spring app show \
- --service ${AZURE_SPRING_APPS_NAME} \
- --name ${APP_NAME} \
- --query properties.url \
- --output tsv
- ```
-
- The page should appear as you saw in localhost.
-
-1. Use the following command to check the app's log to investigate any deployment issue:
-
- ```azurecli
- az spring app logs \
- --service ${AZURE_SPRING_APPS_NAME} \
- --name ${APP_NAME}
- ```
-- ::: zone pivot="sc-enterprise" 1. After the deployment has completed, you can access the app with this URL: `https://${AZURE_SPRING_APPS_NAME}-${APP_NAME}.azuremicroservices.io/`. The page should appear as you saw in localhost.
Now you can access the deployed app to see whether it works. Use the following s
::: zone-end 1. Access the application with the output application URL. The page should appear as you saw in localhost.
Now you can access the deployed app to see whether it works. Use the following s
## 6. Clean up resources [!INCLUDE [clean-up-resources-portal-or-azd](includes/quickstart-deploy-web-app/clean-up-resources.md)] ::: zone-end If you plan to continue working with subsequent quickstarts and tutorials, you might want to leave these resources in place. When you no longer need the resources, delete them by deleting the resource group. Use the following command to delete the resource group:
spring-apps Quickstart Integrate Azure Database And Redis Enterprise https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quickstart-integrate-azure-database-and-redis-enterprise.md
The following instructions describe how to provision an Azure Cache for Redis an
[!INCLUDE [About Azure Resource Manager](../../includes/resource-manager-quickstart-introduction.md)]
-You can find the template used in this quickstart in the [fitness store sample GitHub repository](https://github.com/Azure-Samples/acme-fitness-store/blob/Azure/azure/templates/azuredeploy.json).
+You can find the template used in this quickstart in the [fitness store sample GitHub repository](https://github.com/Azure-Samples/acme-fitness-store/blob/Azure/azure-spring-apps-enterprise/resources/json/deploy/azuredeploy.json).
To deploy this template, follow these steps: 1. Select the following image to sign in to Azure and open a template. The template creates an Azure Cache for Redis and an Azure Database for PostgreSQL Flexible Server.
- :::image type="content" source="../media/template-deployments/deploy-to-azure.svg" alt-text="Button to deploy the ARM template to Azure." border="false" link="https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure-Samples%2Facme-fitness-store%2FAzure%2Fazure%2Ftemplates%2Fazuredeploy.json":::
+ :::image type="content" source="../media/template-deployments/deploy-to-azure.svg" alt-text="Button to deploy the ARM template to Azure." border="false" link="https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure-Samples%2Facme-fitness-store%2FAzure%2Fazure-spring-apps-enterprise%2Fresources%2Fjson%2Fdeploy%2Fazuredeploy.json":::
1. Enter values for the following fields:
spring-apps Quickstart Sample App Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quickstart-sample-app-introduction.md
PetClinic is decomposed into four core Spring apps. All of them are independentl
There are several common patterns in distributed systems that support core services. Azure Spring Apps provides tools that enhance Spring Boot applications to implement the following patterns:
-### [Basic/Standard plan](#tab/basic-standard-plan)
-
-* **Config service**: Azure Spring Apps Config is a horizontally scalable centralized configuration service for distributed systems. It uses a pluggable repository that currently supports local storage, Git, and Subversion.
-* **Service discovery**: It allows automatic detection of network locations for service instances, which could have dynamically assigned addresses because of autoscaling, failures, and upgrades.
- ### [Enterprise plan](#tab/enterprise-plan) * **Application Configuration Service for Tanzu**: Application Configuration Service for Tanzu is one of the commercial VMware Tanzu components. It enables the management of Kubernetes-native ConfigMap resources that are populated from properties defined in one or more Git repositories. * **Tanzu Service Registry**: Tanzu Service Registry is one of the commercial VMware Tanzu components. It provides your apps with an implementation of the Service Discovery pattern, one of the key tenets of a Spring-based architecture. Your apps can use the Service Registry to dynamically discover and call registered services.
+### [Basic/Standard plan](#tab/basic-standard-plan)
+
+* **Config service**: Azure Spring Apps Config is a horizontally scalable centralized configuration service for distributed systems. It uses a pluggable repository that currently supports local storage, Git, and Subversion.
+* **Service discovery**: It allows automatic detection of network locations for service instances, which could have dynamically assigned addresses because of autoscaling, failures, and upgrades.
+ ## Database configuration
For full implementation details, see our fork of [PetClinic](https://github.com/
## Next steps
-### [Basic/Standard plan](#tab/basic-standard-plan)
+### [Enterprise plan](#tab/enterprise-plan)
> [!div class="nextstepaction"]
-> [Quickstart: Provision an Azure Spring Apps service instance](./quickstart-provision-service-instance.md)
+> [Quickstart: Build and deploy apps to Azure Spring Apps using the Enterprise plan](quickstart-deploy-apps-enterprise.md)
-### [Enterprise plan](#tab/enterprise-plan)
+### [Basic/Standard plan](#tab/basic-standard-plan)
> [!div class="nextstepaction"]
-> [Quickstart: Build and deploy apps to Azure Spring Apps using the Enterprise plan](quickstart-deploy-apps-enterprise.md)
+> [Quickstart: Provision an Azure Spring Apps service instance](./quickstart-provision-service-instance.md)
spring-apps Quickstart Set Request Rate Limits Enterprise https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quickstart-set-request-rate-limits-enterprise.md
az spring gateway route-config update \
--service <Azure-Spring-Apps-service-instance-name> \ --name catalog-routes \ --app-name catalog-service \
- --routes-file azure/routes/catalog-service_rate-limit.json
+ --routes-file azure-spring-apps-enterprise/resources/json/routes/catalog-service_rate-limit.json
``` Use the following commands to retrieve the URL for the `/products` route in Spring Cloud Gateway:
spring-apps Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quickstart.md
This article explains how to deploy a small application to run on Azure Spring A
The application code used in this tutorial is a simple app. When you've completed this example, the application is accessible online, and you can manage it through the Azure portal. [!INCLUDE [quickstart-tool-introduction](includes/quickstart/quickstart-tool-introduction.md)]
The application code used in this tutorial is a simple app. When you've complete
## 1. Prerequisites --- An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]-- [Git](https://git-scm.com/downloads).-- [Java Development Kit (JDK)](/java/azure/jdk/), version 17.-- [Azure CLI](/cli/azure/install-azure-cli) version 2.45.0 or higher.-- ### [Azure portal](#tab/Azure-portal)
The application code used in this tutorial is a simple app. When you've complete
- An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)] - [Git](https://git-scm.com/downloads). - [Java Development Kit (JDK)](/java/azure/jdk/), version 17.-- [Azure Developer CLI(AZD)](https://aka.ms/azd-install), version 1.0.2 or higher.
+- [Azure Developer CLI (AZD)](/azure/developer/azure-developer-cli/install-azd), version 1.2.0 or higher.
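+
+For readers new to AZD, the overall flow it drives looks like the following sketch; the template name is a placeholder rather than the sample's actual identifier:
+
+```bash
+# Sign in, initialize from a template (placeholder name), then provision and deploy.
+azd auth login
+azd init --template <template-name>
+azd up
+```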
The application code used in this tutorial is a simple app. When you've complete
After deployment, you can access the app at `https://<your-Azure-Spring-Apps-instance-name>-demo.azuremicroservices.io`. When you open the app, you get the response `Hello World`. -
-Use the following command to check the app's log to investigate any deployment issue:
-
-```azurecli
-az spring app logs \
- --service ${SERVICE_NAME} \
- --name ${APP_NAME}
-```
-- From the navigation pane of the Azure Spring Apps instance overview page, select **Logs** to check the app's logs.
spring-apps Reference Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/reference-architecture.md
- Previously updated : 05/31/2022-- Title: Azure Spring Apps reference architecture---
-description: This reference architecture is a foundation using a typical enterprise hub and spoke design for the use of Azure Spring Apps.
--
-# Azure Spring Apps reference architecture
-
-> [!NOTE]
-> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
-
-**This article applies to:** ✔️ Standard ✔️ Enterprise
-
-This reference architecture is a foundation using a typical enterprise hub and spoke design for the use of Azure Spring Apps. In the design, Azure Spring Apps is deployed in a single spoke that's dependent on shared services hosted in the hub. The architecture is built with components to achieve the tenets in the [Microsoft Azure Well-Architected Framework][16].
-
-There are two flavors of Azure Spring Apps: Standard plan and Enterprise plan.
-
-The Azure Spring Apps Standard plan is composed of the Spring Cloud Config Server, the Spring Cloud Service Registry, and the kpack build service.
-
-The Azure Spring Apps Enterprise plan is composed of the VMware Tanzu® Build Service™, Application Configuration Service for VMware Tanzu®, VMware Tanzu® Service Registry, Spring Cloud Gateway for VMware Tanzu®, and API portal for VMware Tanzu®.
-
-For an implementation of this architecture, see the [Azure Spring Apps Reference Architecture][10] on GitHub.
-
-Deployment options for this architecture include Azure Resource Manager (ARM), Terraform, Azure CLI, and Bicep. The artifacts in this repository provide a foundation that you can customize for your environment. You can group resources such as Azure Firewall or Application Gateway into different resource groups or subscriptions. This grouping helps keep different functions separate, such as IT infrastructure, security, business application teams, and so on.
-
-## Planning the address space
-
-Azure Spring Apps requires two dedicated subnets:
-
-* Service runtime
-* Spring Boot applications
-
-Each of these subnets requires a dedicated Azure Spring Apps cluster. Multiple clusters can't share the same subnets. The minimum size of each subnet is /28. The number of application instances that Azure Spring Apps can support varies based on the size of the subnet. You can find the detailed virtual network requirements in the [Virtual network requirements][11] section of [Deploy Azure Spring Apps in a virtual network][17].
-
-> [!WARNING]
-> The selected subnet size can't overlap with the existing virtual network address space, and shouldn't overlap with any peered or on-premises subnet address ranges.
-
-## Use cases
-
-Typical uses for this architecture include:
-
-* Private applications: Internal applications deployed in hybrid cloud environments
-* Public applications: Externally facing applications
-
-These use cases are similar except for their security and network traffic rules. This architecture is designed to support the nuances of each.
-
-## Private applications
-
-The following list describes the infrastructure requirements for private applications. These requirements are typical in highly regulated environments.
-
-* A subnet must only have one instance of Azure Spring Apps.
-* Adherence to at least one Security Benchmark should be enforced.
-* Application host Domain Name Service (DNS) records should be stored in Azure Private DNS.
-* Azure service dependencies should communicate through Service Endpoints or Private Link.
-* Data at rest should be encrypted.
-* Data in transit should be encrypted.
-* DevOps deployment pipelines can be used (for example, Azure DevOps) and require network connectivity to Azure Spring Apps.
-* Egress traffic should travel through a central Network Virtual Appliance (NVA) (for example, Azure Firewall).
-* If [Azure Spring Apps Config Server][8] is used to load config properties from a repository, the repository must be private.
-* Microsoft's Zero Trust security approach requires secrets, certificates, and credentials to be stored in a secure vault. The recommended service is Azure Key Vault.
-* Name resolution of hosts on-premises and in the Cloud should be bidirectional.
-* No direct egress to the public Internet except for control plane traffic.
-* Resource Groups managed by the Azure Spring Apps deployment must not be modified.
-* Subnets managed by the Azure Spring Apps deployment must not be modified.
-
-The following list shows the components that make up the design:
-
-* On-premises network
- * Domain Name Service (DNS)
- * Gateway
-* Hub subscription
- * Application Gateway Subnet
- * Azure Firewall Subnet
- * Shared Services Subnet
-* Connected subscription
- * Azure Bastion Subnet
- * Virtual Network Peer
-
-The following list describes the Azure services in this reference architecture:
-
-* [Azure Key Vault][2]: a hardware-backed credential management service that has tight integration with Microsoft identity services and compute resources.
-
-* [Azure Monitor][3]: an all-encompassing suite of monitoring services for applications that deploy both in Azure and on-premises.
-
-* [Azure Pipelines][5]: a fully featured Continuous Integration / Continuous Development (CI/CD) service that can automatically deploy updated Spring Boot apps to Azure Spring Apps.
-
-* [Microsoft Defender for Cloud][4]: a unified security management and threat protection system for workloads across on-premises, multiple clouds, and Azure.
-
-* [Azure Spring Apps][1]: a managed service that's designed and optimized specifically for Java-based Spring Boot applications and .NET-based [Steeltoe][9] applications.
-
-The following diagrams represent a well-architected hub and spoke design that addresses the above requirements:
-
-### [Standard plan](#tab/azure-spring-standard)
--
-### [Enterprise plan](#tab/azure-spring-enterprise)
----
-## Public applications
-
-The following list describes the infrastructure requirements for public applications. These requirements are typical in highly regulated environments.
-
-* A subnet must only have one instance of Azure Spring Apps.
-* Adherence to at least one Security Benchmark should be enforced.
-* Application host Domain Name Service (DNS) records should be stored in Azure Private DNS.
-* Azure DDoS Protection should be enabled.
-* Azure service dependencies should communicate through Service Endpoints or Private Link.
-* Data at rest should be encrypted.
-* Data in transit should be encrypted.
-* DevOps deployment pipelines can be used (for example, Azure DevOps) and require network connectivity to Azure Spring Apps.
-* Egress traffic should travel through a central Network Virtual Appliance (NVA) (for example, Azure Firewall).
-* Ingress traffic should be managed by at least Application Gateway or Azure Front Door.
-* Internet routable addresses should be stored in Azure Public DNS.
-* Microsoft's Zero Trust security approach requires secrets, certificates, and credentials to be stored in a secure vault. The recommended service is Azure Key Vault.
-* Name resolution of hosts on-premises and in the Cloud should be bidirectional.
-* No direct egress to the public Internet except for control plane traffic.
-* Resource Groups managed by the Azure Spring Apps deployment must not be modified.
-* Subnets managed by the Azure Spring Apps deployment must not be modified.
-
-The following list shows the components that make up the design:
-
-* On-premises network
- * Domain Name Service (DNS)
- * Gateway
-* Hub subscription
- * Application Gateway Subnet
- * Azure Firewall Subnet
- * Shared Services Subnet
-* Connected subscription
- * Azure Bastion Subnet
- * Virtual Network Peer
-
-The following list describes the Azure services in this reference architecture:
-
-* [Azure Application Firewall][7]: a feature of Azure Application Gateway that provides centralized protection of applications from common exploits and vulnerabilities.
-
-* [Azure Application Gateway][6]: a load balancer responsible for application traffic with Transport Layer Security (TLS) offload operating at layer 7.
-
-* [Azure Key Vault][2]: a hardware-backed credential management service that has tight integration with Microsoft identity services and compute resources.
-
-* [Azure Monitor][3]: an all-encompassing suite of monitoring services for applications that deploy both in Azure and on-premises.
-
-* [Azure Pipelines][5]: a fully featured Continuous Integration / Continuous Development (CI/CD) service that can automatically deploy updated Spring Boot apps to Azure Spring Apps.
-
-* [Microsoft Defender for Cloud][4]: a unified security management and threat protection system for workloads across on-premises, multiple clouds, and Azure.
-
-* [Azure Spring Apps][1]: a managed service that's designed and optimized specifically for Java-based Spring Boot applications and .NET-based [Steeltoe][9] applications.
-
-The following diagrams represent a well-architected hub and spoke design that addresses the above requirements. Only the hub-virtual-network communicates with the internet:
-
-### [Standard plan](#tab/azure-spring-standard)
--
-### [Enterprise plan](#tab/azure-spring-enterprise)
----
-## Azure Spring Apps on-premises connectivity
-
-Applications in Azure Spring Apps can communicate to various Azure, on-premises, and external resources. By using the hub and spoke design, applications can route traffic externally or to the on-premises network using Express Route or Site-to-Site Virtual Private Network (VPN).
-
-## Azure Well-Architected Framework considerations
-
-The [Azure Well-Architected Framework][16] is a set of guiding tenets to follow in establishing a strong infrastructure foundation. The framework contains the following categories: cost optimization, operational excellence, performance efficiency, reliability, and security.
-
-### Cost optimization
-
-Because of the nature of distributed system design, infrastructure sprawl is a reality. This reality results in unexpected and uncontrollable costs. Azure Spring Apps is built using components that scale so that it can meet demand and optimize cost. The core of this architecture is the Azure Kubernetes Service (AKS). The service is designed to reduce the complexity and operational overhead of managing Kubernetes, which includes efficiencies in the operational cost of the cluster.
-
-You can deploy different applications and application types to a single instance of Azure Spring Apps. The service supports autoscaling of applications triggered by metrics or schedules that can improve utilization and cost efficiency.
-
-You can also use Application Insights and Azure Monitor to lower operational cost. With the visibility provided by the comprehensive logging solution, you can implement automation to scale the components of the system in real time. You can also analyze log data to reveal inefficiencies in the application code that you can address to improve the overall cost and performance of the system.
-
-### Operational excellence
-
-Azure Spring Apps addresses multiple aspects of operational excellence. You can combine these aspects to ensure that the service runs efficiently in production environments, as described in the following list:
-
-* You can use Azure Pipelines to ensure that deployments are reliable and consistent while helping you avoid human error.
-* You can use Azure Monitor and Application Insights to store log and telemetry data.
- You can assess collected log and metric data to ensure the health and performance of your applications. Application Performance Monitoring (APM) is fully integrated into the service through a Java agent. This agent provides visibility into all the deployed applications and dependencies without requiring extra code. For more information, see the blog post [Effortlessly monitor applications and dependencies in Azure Spring Apps][15].
-* You can use Microsoft Defender for Cloud to ensure that applications maintain security by providing a platform to analyze and assess the data provided.
-* The service supports various deployment patterns. For more information, see [Set up a staging environment in Azure Spring Apps][14].
-
-### Reliability
-
-Azure Spring Apps is built on AKS. While AKS provides a level of resiliency through clustering, this reference architecture goes even further by incorporating services and architectural considerations to increase availability of the application if there's component failure.
-
-By building on top of a well-defined hub and spoke design, the foundation of this architecture ensures that you can deploy it to multiple regions. For the private application use case, the architecture uses Azure Private DNS to ensure continued availability during a geographic failure. For the public application use case, Azure Front Door and Azure Application Gateway ensure availability.
-
-### Security
-
-The security of this architecture is addressed by its adherence to industry-defined controls and benchmarks. In this context, "control" means a concise and well-defined best practice, such as "Employ the least privilege principle when implementing information system access. IAM-05" The controls in this architecture are from the [Cloud Control Matrix][19] (CCM) by the [Cloud Security Alliance][18] (CSA) and the [Microsoft Azure Foundations Benchmark][20] (MAFB) by the [Center for Internet Security][21] (CIS). In the applied controls, the focus is on the primary security design principles of governance, networking, and application security. It is your responsibility to handle the design principles of Identity, Access Management, and Storage as they relate to your target infrastructure.
-
-#### Governance
-
-The primary aspect of governance that this architecture addresses is segregation through the isolation of network resources. In the CCM, DCS-08 recommends ingress and egress control for the datacenter. To satisfy the control, the architecture uses a hub and spoke design using Network Security Groups (NSGs) to filter east-west traffic between resources. The architecture also filters traffic between central services in the hub and resources in the spoke. The architecture uses an instance of Azure Firewall to manage traffic between the internet and the resources within the architecture.
-
-The following list shows the control that addresses datacenter security in this reference:
-
-| CSA CCM Control ID | CSA CCM Control Domain |
-|:-|:--|
-| DCS-08 | Datacenter Security Unauthorized Persons Entry |
-
-#### Network
-
-The network design supporting this architecture is derived from the traditional hub and spoke model. This decision ensures that network isolation is a foundational construct. CCM control IVS-06 recommends that traffic between networks and virtual machines are restricted and monitored between trusted and untrusted environments. This architecture adopts the control by implementation of the NSGs for east-west traffic (within the "data center"), and the Azure Firewall for north-south traffic (outside of the "data center"). CCM control IPY-04 recommends that the infrastructure should use secure network protocols for the exchange of data between services. The Azure services supporting this architecture all use standard secure protocols such as TLS for HTTP and SQL.
-
-The following list shows the CCM controls that address network security in this reference:
-
-| CSA CCM Control ID | CSA CCM Control Domain |
-| :-- | :-|
-| IPY-04 | Network Protocols |
-| IVS-06 | Network Security |
-
-The network implementation is further secured by defining controls from the MAFB. The controls ensure that traffic into the environment is restricted from the public Internet.
-
-The following list shows the CIS controls that address network security in this reference:
-
-| CIS Control ID | CIS Control Description |
-|:|:|
-| 6.2 | Ensure that SSH access is restricted from the internet. |
-| 6.3 | Ensure no SQL Databases allow ingress 0.0.0.0/0 (ANY IP). |
-| 6.5 | Ensure that Network Watcher is 'Enabled'. |
-| 6.6 | Ensure that ingress using UDP is restricted from the internet. |
-
-Azure Spring Apps requires management traffic to egress from Azure when deployed in a secured environment. You must allow the network and application rules listed in [Customer responsibilities for running Azure Spring Apps in a virtual network](./vnet-customer-responsibilities.md).
-
-#### Application security
-
-This design principle covers the fundamental components of identity, data protection, key management, and application configuration. By design, an application deployed in Azure Spring Apps runs with least privilege required to function. The set of authorization controls is directly related to data protection when using the service. Key management strengthens this layered application security approach.
-
-The following list shows the CCM controls that address key management in this reference:
-
-| CSA CCM Control ID | CSA CCM Control Domain |
-|:-|:--|
-| EKM-01 | Encryption and Key Management Entitlement |
-| EKM-02 | Encryption and Key Management Key Generation |
-| EKM-03 | Encryption and Key Management Sensitive Data Protection |
-| EKM-04 | Encryption and Key Management Storage and Access |
-
-From the CCM, EKM-02, and EKM-03 recommend policies and procedures to manage keys and to use encryption protocols to protect sensitive data. EKM-01 recommends that all cryptographic keys have identifiable owners so that they can be managed. EKM-04 recommends the use of standard algorithms.
-
-The following list shows the CIS controls that address key management in this reference:
-
-| CIS Control ID | CIS Control Description |
-|:|:-|
-| 8.1 | Ensure that the expiration date is set on all keys. |
-| 8.2 | Ensure that the expiration date is set on all secrets. |
-| 8.4 | Ensure the key vault is recoverable. |
-
-The CIS controls 8.1 and 8.2 recommend that expiration dates are set for credentials to ensure that rotation is enforced. CIS control 8.4 ensures that the contents of the key vault can be restored to maintain business continuity.
-
-The aspects of application security set a foundation for the use of this reference architecture to support a Spring workload in Azure.
-
-## Next steps
-
-Explore this reference architecture through the ARM, Terraform, and Azure CLI deployments available in the [Azure Spring Apps Reference Architecture][10] repository.
-
-<!-- Reference links in article -->
-[1]: ./index.yml
-[2]: ../key-vault/index.yml
-[3]: ../azure-monitor/index.yml
-[4]: ../security-center/index.yml
-[5]: /azure/devops/pipelines/
-[6]: ../application-gateway/index.yml
-[7]: ../web-application-firewall/index.yml
-[8]: ./how-to-config-server.md
-[9]: https://steeltoe.io/
-[10]: https://github.com/Azure/azure-spring-apps-landing-zone-accelerator/tree/reference-architecture
-[11]: ./how-to-deploy-in-azure-virtual-network.md#virtual-network-requirements
-[12]: ./vnet-customer-responsibilities.md#azure-spring-apps-network-requirements
-[13]: ./vnet-customer-responsibilities.md#azure-spring-apps-fqdn-requirements--application-rules
-[14]: ./how-to-staging-environment.md
-[15]: https://devblogs.microsoft.com/java/monitor-applications-and-dependencies-in-azure-spring-cloud/
-[16]: /azure/architecture/framework/
-[17]: ./how-to-deploy-in-azure-virtual-network.md#virtual-network-requirements
-[18]: https://cloudsecurityalliance.org/
-[19]: https://cloudsecurityalliance.org/research/working-groups/cloud-controls-matrix
-[20]: /azure/security/benchmarks/v2-cis-benchmark
-[21]: https://www.cisecurity.org/
spring-apps Secure Communications End To End https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/secure-communications-end-to-end.md
Azure Spring Apps is jointly built, operated, and supported by Microsoft and VMw
- [Deploy Spring microservices to Azure](/training/modules/azure-spring-cloud-workshop/) - [Azure Key Vault Certificates Spring Cloud Azure Starter (GitHub.com)](https://github.com/Azure/azure-sdk-for-java/blob/main/sdk/spring/spring-cloud-azure-starter-keyvault-certificates/pom.xml)-- [Azure Spring Apps reference architecture](reference-architecture.md)
+- [Azure Spring Apps architecture design](/azure/architecture/web-apps/spring-apps?toc=/azure/spring-apps/toc.json&bc=/azure/spring-apps/breadcrumb/toc.json)
- Migrate your [Spring Boot](/azure/developer/java/migration/migrate-spring-boot-to-azure-spring-apps), [Spring Cloud](/azure/developer/java/migration/migrate-spring-cloud-to-azure-spring-apps), and [Tomcat](/azure/developer/java/migration/migrate-tomcat-to-azure-spring-apps) applications to Azure Spring Apps
spring-apps Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Spring Apps description: Lists Azure Policy Regulatory Compliance controls available for Azure Spring Apps. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/03/2023 Last updated : 08/25/2023
spring-apps Tutorial Authenticate Client With Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/tutorial-authenticate-client-with-gateway.md
+
+ Title: Tutorial - Authenticate client with Spring Cloud Gateway on Azure Spring Apps
+description: Learn how to authenticate client with Spring Cloud Gateway on Azure Spring Apps.
+++ Last updated : 08/31/2023++++
+# Tutorial: Authenticate client with Spring Cloud Gateway on Azure Spring Apps
+
+> [!NOTE]
+> The first 50 vCPU hours and 100 GB hours of memory are free each month. For more information, see [Price Reduction - Azure Spring Apps does more, costs less!](https://techcommunity.microsoft.com/t5/apps-on-azure-blog/price-reduction-azure-spring-apps-does-more-costs-less/ba-p/3614058) on the [Apps on Azure Blog](https://techcommunity.microsoft.com/t5/apps-on-azure-blog/bg-p/AppsonAzureBlog).
+
+> [!NOTE]
+> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
+
+**This article applies to:** ✔️ Standard consumption and dedicated (Preview)
+
+This tutorial shows you how to secure communication between a client application and a microservice application that's hosted on Azure Spring Apps and shielded with a Spring Cloud Gateway app. The client application is verified as a security principal to initiate contact with the microservice deployed on Azure Spring Apps, using the app built with [Spring Cloud Gateway](https://docs.spring.io/spring-cloud-gateway/docs/current/reference/html/). This method uses Spring Cloud Gateway's Token Relay and Spring Security's Resource Server features for authentication and authorization, implemented through the [OAuth 2.0 client credentials flow](../active-directory/develop/v2-oauth2-client-creds-grant-flow.md).
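+
+As an illustration of the client credentials flow the sample relies on, the following request is a sketch of how a client obtains an access token from Microsoft Entra ID; the tenant ID, client ID, client secret, and scope are placeholders for your own app registration:
+
+```bash
+# Request an access token using the OAuth 2.0 client credentials flow.
+# All values below are placeholders.
+curl -X POST "https://login.microsoftonline.com/<tenant-id>/oauth2/v2.0/token" \
+    -d "client_id=<client-id>" \
+    -d "client_secret=<client-secret>" \
+    -d "scope=api://<books-service-app-id>/.default" \
+    -d "grant_type=client_credentials"
+```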
+
+The following list shows the composition of the sample project:
+
+- Books SPA: This Single Page Application (SPA), hosted locally, interacts with the Books microservice for adding or searching for books.
+- Books microservice:
+ - A Spring Cloud Gateway app hosted in Azure Spring Apps. This app operates as a gateway to the Books RESTful APIs.
+ - A Spring Boot RESTful API app hosted in Azure Spring Apps. This app stores the book information in an H2 database. The Books service exposes two REST endpoints to write and read books.
+
+## 1. Prerequisites
+
+- An Azure subscription. If you don't have a subscription, create a [free account](https://azure.microsoft.com/free/) before you begin.
+- [Git](https://git-scm.com/downloads).
+- [Java Development Kit (JDK)](/java/azure/jdk/), version 17.
+- A Microsoft Entra ID tenant. For more information on how to create a Microsoft Entra ID tenant, see [Quickstart: Create a new tenant in Azure AD](../active-directory/fundamentals/create-new-tenant.md).
+- [Azure CLI](/cli/azure/install-azure-cli) version 2.45.0 or higher.
+- Install [Node.js](https://nodejs.org).
++
+## 5. Validate the app
+
+You can access the Books SPA app that communicates with the Books RESTful APIs through the `gateway-service` app.
+
+1. Go to `http://localhost:3000` in your browser to access the application.
+
+1. Enter values for **Author** and **Title**, and then select **Add Book**. You see a response similar to the following example:
+
+ ```output
+ Book added successfully: {"id":1,"author":"Jeff Black","title":"Spring In Action"}
+ ```
++
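+If you want to exercise the Books APIs directly rather than through the SPA, a request like the following sketch can call the gateway with the token obtained from the client credentials flow; the gateway URL and the `/books` path are hypothetical stand-ins for the sample's actual endpoint and route:
+
+```bash
+# Hypothetical endpoint and route; substitute the real gateway URL and path.
+curl -H "Authorization: Bearer ${ACCESS_TOKEN}" \
+    "https://<gateway-service-url>/books"
+```
+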
+## 7. Next steps
+
+> [!div class="nextstepaction"]
+> [Structured application log for Azure Spring Apps](./structured-app-log.md)
+
+> [!div class="nextstepaction"]
+> [Map an existing custom domain to Azure Spring Apps](./how-to-custom-domain.md)
+
+> [!div class="nextstepaction"]
+> [Set up Azure Spring Apps CI/CD with GitHub Actions](./how-to-github-actions.md)
+
+> [!div class="nextstepaction"]
+> [Set up Azure Spring Apps CI/CD with Azure DevOps](./how-to-cicd.md)
+
+> [!div class="nextstepaction"]
+> [Use managed identities for applications in Azure Spring Apps](./how-to-use-managed-identities.md)
+
+> [!div class="nextstepaction"]
+> [Run microservice apps (Pet Clinic)](./quickstart-sample-app-introduction.md)
+
+> [!div class="nextstepaction"]
+> [Run polyglot apps on Enterprise plan (ACME Fitness Store)](./quickstart-sample-app-acme-fitness-store-introduction.md)
+
+For more information, see the following articles:
+
+- [Azure Spring Apps Samples](https://github.com/Azure-Samples/Azure-Spring-Cloud-Samples).
+- [Spring on Azure](/azure/developer/java/spring/)
+- [Spring Cloud Azure](/azure/developer/java/spring-framework/)
spring-apps Vnet Customer Responsibilities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/vnet-customer-responsibilities.md
The following list shows the resource requirements for the Azure Spring Apps service.

| Destination Endpoint | Port | Use | Note |
|--|--|--|--|
| \*:443 *or* [ServiceTag](../virtual-network/service-tags-overview.md#available-service-tags) - AzureCloud:443 | TCP:443 | Azure Spring Apps Service Management. | Information about the service instance's "requiredTraffics" is available in the resource payload, under the "networkProfile" section. |
-| \*:123 *or* ntp.ubuntu.com:123 | UDP:123 | NTP time synchronization on Linux nodes. | |
| \*.azurecr.io:443 *or* [ServiceTag](../virtual-network/service-tags-overview.md#available-service-tags) - AzureContainerRegistry:443 | TCP:443 | Azure Container Registry. | Can be replaced by enabling *Azure Container Registry* [service endpoint in virtual network](../virtual-network/virtual-network-service-endpoints-overview.md). |
| \*.core.windows.net:443 and \*.core.windows.net:445 *or* [ServiceTag](../virtual-network/service-tags-overview.md#available-service-tags) - Storage:443 and Storage:445 | TCP:443, TCP:445 | Azure Files | Can be replaced by enabling *Azure Storage* [service endpoint in virtual network](../virtual-network/virtual-network-service-endpoints-overview.md). |
| \*.servicebus.windows.net:443 *or* [ServiceTag](../virtual-network/service-tags-overview.md#available-service-tags) - EventHub:443 | TCP:443 | Azure Event Hubs. | Can be replaced by enabling *Azure Event Hubs* [service endpoint in virtual network](../virtual-network/virtual-network-service-endpoints-overview.md). |
Azure Firewall provides the FQDN tag **AzureKubernetesService** to simplify the following configurations.

| FQDN | Port | Use |
|--|--|--|
| <i>*.azmk8s.io</i> | HTTPS:443 | Underlying Kubernetes Cluster management. |
| <i>mcr.microsoft.com</i> | HTTPS:443 | Microsoft Container Registry (MCR). |
-| <i>*.cdn.mscr.io</i> | HTTPS:443 | MCR storage backed by the Azure CDN. |
| <i>*.data.mcr.microsoft.com</i> | HTTPS:443 | MCR storage backed by the Azure CDN. |
| <i>management.azure.com</i> | HTTPS:443 | Underlying Kubernetes Cluster management. |
-| <i>*login.microsoftonline.com</i> | HTTPS:443 | Azure Active Directory authentication. |
-| <i>*login.microsoft.com</i> | HTTPS:443 | Azure Active Directory authentication. |
+| <i>login.microsoftonline.com</i> | HTTPS:443 | Azure Active Directory authentication. |
| <i>packages.microsoft.com</i> | HTTPS:443 | Microsoft packages repository. |
| <i>acs-mirror.azureedge.net</i> | HTTPS:443 | Repository required to install required binaries like kubenet and Azure CNI. |
-| *mscrl.microsoft.com*<sup>1</sup> | HTTPS:80 | Required Microsoft Certificate Chain Paths. |
-| *crl.microsoft.com*<sup>1</sup> | HTTPS:80 | Required Microsoft Certificate Chain Paths. |
-| *crl3.digicert.com*<sup>1</sup> | HTTPS:80 | Third-Party TLS/SSL Certificate Chain Paths. |
-
-<sup>1</sup> Please note that these FQDNs aren't included in the FQDN tag.
## Azure Spring Apps optional FQDN for third-party application performance management
static-web-apps Application Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/application-settings.md
In a terminal or command line, execute the following command to delete a setting
## Next steps

> [!div class="nextstepaction"]
-> [Define configuration for Azure Static Web Apps in the _staticwebapp.config.json_ file](configuration.md)
+> [Define configuration for Azure Static Web Apps in the _staticwebapp.config.json_ file](configuration.md)
## Related articles
static-web-apps Authentication Authorization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/authentication-authorization.md
# Authenticate and authorize Static Web Apps
-Azure Static Web Apps provides a streamlined authentication experience, where no other actions or configurations are required to use GitHub, Twitter, and Azure Active Directory (Azure AD) for authentication.
+> [!WARNING]
+> Due to changes in the X (formerly Twitter) API policy, we can't continue to support it as one of the pre-configured providers for your app.
+> If you want to continue to use X (formerly Twitter) for authentication/authorization with your app, update your app configuration to [register a custom provider](./authentication-custom.md).
++
+Azure Static Web Apps provides a streamlined authentication experience, where no other actions or configurations are required to use GitHub and Azure Active Directory (Azure AD) for authentication.
In this article, learn about default behavior, how to set up sign-in and sign-out, how to block an authentication provider, and more.
Be aware of the following defaults and resources for authentication and authoriz
**Defaults:**
- Any user can authenticate with a pre-configured provider
  - GitHub
- - Twitter
  - Azure Active Directory (Azure AD)
- To restrict an authentication provider, [block access](#block-an-authentication-provider) with a custom route rule
- After sign-in, users belong to the `anonymous` and `authenticated` roles. For more information about roles, see [Manage roles](authentication-custom.md#manage-roles)
Use the following table to find the provider-specific route.
| Authorization provider | Login route |
| - | -- |
| Azure AD | `/.auth/login/aad` |
| GitHub | `/.auth/login/github` |
-| Twitter | `/.auth/login/twitter` |
For example, to sign in with GitHub, you could include something similar to the following link.
static-web-apps Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/configuration.md
The `platform` section controls platform specific settings, such as the API lang
## Networking
-The `networking` section controls the network configuration of your static web app. To restrict access to your app, specify a list of allowed IP address blocks in `allowedIpRanges`.
+The `networking` section controls the network configuration of your static web app. To restrict access to your app, specify a list of allowed IP address blocks in `allowedIpRanges`. For details on the number of allowed IP address blocks, see [Quotas](quotas.md).
> [!NOTE]
> Networking configuration is only available in the Azure Static Web Apps Standard plan.
Define each IPv4 address block in Classless Inter-Domain Routing (CIDR) notation
When one or more IP address blocks are specified, requests originating from IP addresses that don't match a value in `allowedIpRanges` are denied access.
-In addition to IP address blocks, you can also specify [service tags](../virtual-network/service-tags-overview.md) in the `allowedIpRanges` array to restrict traffic to certain Azure services.
+In addition to IP address blocks, you can also specify [service tags](../virtual-network/service-tags-overview.md#discover-service-tags-by-using-downloadable-json-files) in the `allowedIpRanges` array to restrict traffic to certain Azure services.
```json "networking": {
static-web-apps Deploy Nextjs Static Export https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/deploy-nextjs-static-export.md
By default, the application is treated as a hybrid rendered Next.js application,
api_location: "" # Api source code path - optional output_location: "" # Built app content directory - optional env: # Add environment variables here
- is_static_export: true
+ IS_STATIC_EXPORT: true
```

### [Azure Pipelines](#tab/azure-pipelines)

```yaml
- - task: AzureStaticWebAppLatest@0
+ - task: AzureStaticWebApp@0
    inputs:
      azure_static_web_apps_api_token: $(AZURE_STATIC_WEB_APPS_TOKEN)
###### Repository/Build Configurations - These values can be configured to match your app requirements. ######
static-web-apps Getting Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/getting-started.md
If you don't already have the [Azure Static Web Apps extension for Visual Studio
If you're not going to continue to use this application, you can delete the Azure Static Web Apps instance through the extension.
-In the Visual Studio Code Explorer window, return to the _Resources_ section and under _Static Web Apps_, right-click **my-first-static-web-app** and select **Delete**.
+In the Visual Studio Code Azure window, return to the _Resources_ section and under _Static Web Apps_, right-click **my-first-static-web-app** and select **Delete**.
## Next steps
static-web-apps Key Vault Secrets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/key-vault-secrets.md
The access policy is now saved to Key Vault. Next, access the secret's URI to us
```text
@Microsoft.KeyVault(SecretUri=<YOUR-KEY-VAULT-SECRET-URI>)
```
+ For example, a final string would look like the following sample:
+
+ ```text
+ @Microsoft.KeyVault(SecretUri=https://myvault.vault.azure.net/secrets/mysecret/)
+ ```
+
+ Alternatively:
+
+ ```text
+ @Microsoft.KeyVault(VaultName=myvault;SecretName=mysecret)
+ ```
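+ As a sketch, one way to store such a reference as an application setting is with the Azure CLI (the app name and setting name here are placeholders):
+
+ ```azurecli
+ az staticwebapp appsettings set \
+     --name my-static-web-app \
+     --setting-names "MY_SECRET=@Microsoft.KeyVault(SecretUri=https://myvault.vault.azure.net/secrets/mysecret/)"
+ ```
+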
Use the following steps to build the full secret value.
static-web-apps Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/overview.md
With Static Web Apps, static assets are separated from a traditional web server
- **Free SSL certificates**, which are automatically renewed.
- **Custom domains** to provide branded customizations to your app.
- **Seamless security model** with a reverse-proxy when calling APIs, which requires no CORS configuration.
-- **Authentication provider integrations** with Azure Active Directory, GitHub, and Twitter.
+- **Authentication provider integrations** with Azure Active Directory and GitHub.
- **Customizable authorization role definition** and assignments.
- **Back-end routing rules** enabling full control over the content and routes you serve.
- **Generated staging versions** powered by pull requests enabling preview versions of your site before publishing.
static-web-apps Publish Vuepress https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/publish-vuepress.md
# Tutorial: Publish a VuePress site to Azure Static Web Apps
-This article demonstrates how to create and deploy a [VuePress](https://vuepress.vuejs.org/) web application to [Azure Azure Static Web Apps](overview.md). The final result is a new Azure Static Web Apps application with the associated GitHub Actions that give you control over how the app is built and published.
+This article demonstrates how to create and deploy a [VuePress](https://vuepress.vuejs.org/) web application to [Azure Static Web Apps](overview.md). The final result is a new Azure Static Web Apps application with the associated GitHub Actions that give you control over how the app is built and published.
In this tutorial, you learn how to:
storage-mover Agent Register https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage-mover/agent-register.md
The agent displays detailed progress. Once the registration is complete, you're
To accomplish seamless authentication with Azure and authorization to various Azure resources, the agent is registered with the following Azure services:

- Azure Storage Mover (Microsoft.StorageMover)
-- Azure ARC (Microsoft.HybridCompute)
+- Azure Arc (Microsoft.HybridCompute)
### Azure Storage Mover service
Registration to the Azure Storage mover service is visible and manageable throug
You can reference this Azure Resource Manager (ARM) resource when you want to assign migration jobs to the specific agent VM it symbolizes.
-### Azure ARC service
+### Azure Arc service
-The agent is also registered with the [Azure ARC service](../azure-arc/overview.md). ARC is used to assign and maintain an [Azure AD managed identity](../active-directory/managed-identities-azure-resources/overview.md) for this registered agent.
+The agent is also registered with the [Azure Arc service](../azure-arc/overview.md). Arc is used to assign and maintain an [Azure AD managed identity](../active-directory/managed-identities-azure-resources/overview.md) for this registered agent.
Azure Storage Mover uses a system-assigned managed identity. A managed identity is a service principal of a special type that can only be used with Azure resources. When the managed identity is deleted, the corresponding service principal is also automatically removed. The process of deletion is automatically initiated when you unregister the agent. However, there are other ways to remove this identity. Doing so incapacitates the registered agent and requires the agent to be unregistered. Only the registration process can enable an agent to obtain and maintain its Azure identity properly.

> [!NOTE]
-> During public preview, there is a side effect of the registration with the Azure ARC service. A separate resource of the type *Server-Azure Arc* is also deployed in the same resource group as your storage mover resource. You won't be able to manage the agent through this resource.
+> During public preview, there is a side effect of the registration with the Azure Arc service. A separate resource of the type *Server-Azure Arc* is also deployed in the same resource group as your storage mover resource. You won't be able to manage the agent through this resource.
It may appear that you're able to manage aspects of the storage mover agent through the *Server-Azure Arc* resource, but in most cases you can't. It's best to exclusively manage the agent through the *Registered agents* pane in your storage mover resource or through the local administrative shell.

> [!WARNING]
-> Do not delete the Azure ARC server resource that is created for a registered agent in the same resource group as the storage mover resource. The only safe time to delete this resource is when you previously unregistered the agent this resource corresponds to.
+> Do not delete the Azure Arc server resource that is created for a registered agent in the same resource group as the storage mover resource. The only safe time to delete this resource is when you previously unregistered the agent this resource corresponds to.
### Authorization
For a migration job, access to the target endpoint is perhaps the most important
These assignments are made in the admin's sign-in context in the Azure portal. Therefore, the admin must be a member of the role-based access control (RBAC) control plane role "Owner" for the target container. This assignment is made just-in-time when you start a migration job. It is at this point that you've selected an agent to execute a migration job. As part of this start action, the agent is given permissions to the data plane of the target container. The agent isn't authorized to perform any management plane actions, such as deleting the target container or configuring any features on it.

> [!WARNING]
-> Access is granted to a specific agent just-in-time for running a migration job. However, the agent's authorization to access the target is not automatically removed. You must either manually remove the agent's managed identity from a specific target or unregister the agent to destroy the service principal. This action removes all target storage authorization as well as the ability of the agent to communicate with the Storage Mover and Azure ARC services.
+> Access is granted to a specific agent just-in-time for running a migration job. However, the agent's authorization to access the target is not automatically removed. You must either manually remove the agent's managed identity from a specific target or unregister the agent to destroy the service principal. This action removes all target storage authorization as well as the ability of the agent to communicate with the Storage Mover and Azure Arc services.
## Next steps
storage-mover Endpoint Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage-mover/endpoint-manage.md
Previously updated : 08/07/2023- Last updated : 08/18/2023+ <!--
REVIEW Engineering: not reviewed
EDIT PASS: started Initial doc score: 93
-Current doc score: 100 (3269 words and 0 issues)
+Current doc score: 100 (3365 words and 0 issues)
!######################################################## -->
While the term *endpoint* is often used in networking, it's used in the context of the Storage Mover service to describe a storage location with a high level of detail.
-A storage mover endpoint is a resource that contains the path to either a source or destination location and other relevant information. Endpoints are used in the creation of a job definition. Only certain types of endpoints may be used as a source or a target, respectively.
+A storage mover endpoint is a resource that contains the path to either a source or destination location and other relevant information. Endpoints are used in the creation of a job definition to define the source and target locations for a particular copy operation. Only certain types of endpoints may be used as a source or a target, respectively. For example, data contained within an NFS (Network File System) file share endpoint can only be copied to a blob storage container. Similarly, data from an SMB-based (Server Message Block) file share source can only be migrated to an Azure file share.
This article guides you through the creation and management of Azure Storage Mover endpoints. To follow these examples, you need a top-level storage mover resource. If you haven't yet created one, follow the steps within the [Create a Storage Mover resource](storage-mover-create.md) article before continuing.
After you complete the steps within this article, you'll be able to create and m
Within the Azure Storage Mover resource hierarchy, a migration project is used to organize migration jobs into logical tasks or components. A migration project in turn contains at least one job definition, which describes both the source and target locations for your migration project. The [Understanding the Storage Mover resource hierarchy](resource-hierarchy.md) article contains more detailed information about the relationships between a Storage Mover, its endpoints, and its projects.
-Because a migration requires both a well-defined source and target, endpoints are parented to the top-level storage mover resource. This placement allows you to reuse endpoints across any number of job definitions. While there's only a single endpoint resource, the properties of each endpoint may vary based on its type. For example, NFS (Network File System) shares, SMB (Server Message Block) shares, and Azure Storage blob container endpoints each require fundamentally different information.
+Because a migration requires both a well-defined source and target, endpoints are parented to the top-level storage mover resource. This placement allows you to reuse endpoints across any number of job definitions. While there's only a single endpoint resource, the properties of each endpoint may vary based on its type. For example, NFS (Network File System) shares, SMB shares, and Azure Storage blob container endpoints each require fundamentally different information.
[!INCLUDE [protocol-endpoint-agent](includes/protocol-endpoint-agent.md)]
Agent access to both your Key Vault and target storage resources is controlled t
There are many use cases that require preserving metadata values such as file and folder timestamps, ACLs, and file attributes. Storage Mover supports the same level of file fidelity as the underlying Azure file share. Azure Files in turn [supports a subset](/rest/api/storageservices/set-file-properties) of the [NTFS file properties](/windows/win32/fileio/file-attribute-constants). The following table represents common metadata that is migrated:
-|Metadata property |Outcome |
-|--|--|
+|Metadata property |Outcome |
+|--||
|Directory structure |The original directory structure of the source is preserved on the target share. |
-|Access permissions |Permissions on the source file or directory are preserved on the target share. |
-|Symbolic links |Symbolic links on the source are preserved and mapped on the target share. |
+|Access permissions |Permissions on the source file or directory are preserved on the target share. |
+|Symbolic links |Symbolic links on the source are preserved and mapped on the target share. |
|Create timestamp |The original create timestamp of the source file is preserved on the target share. |
|Change timestamp |The original change timestamp of the source file is preserved on the target share. |
|Modified timestamp |The original modified timestamp of the source file is preserved on the target share. |
Follow the steps in this section to view endpoints accessible to your Storage Mo
1. On the **Storage endpoints** page, the default **Storage endpoints** view displays the names of any provisioned source endpoints and a summary of their associated properties. To view provisioned destination endpoints, select **Target endpoints**. You can filter the results further by selecting the **Protocol** or **Host** filters and the relevant option.
- :::image type="content" source="media/endpoint-manage/endpoint-filter.png" alt-text="Screenshot of the Storage Endpoints page within the Azure portal showing the endpoint details and the location of the target endpoint filters." lightbox="media/endpoint-manage/endpoint-filter-lrg.png":::
+ :::image type="content" source="media/endpoint-manage/endpoint-filter.png" alt-text="Screenshot of the Storage Endpoints page within the Azure portal showing endpoint details and the target endpoint filters location." lightbox="media/endpoint-manage/endpoint-filter-lrg.png":::
- At this time, the Azure Portal doesn't provide the ability to to directly modify provisioned endpoints. An endpoint's description, however, can be modified using Azure PowerShell by following [this example](endpoint-manage.md?tabs=powershell#view-and-edit-an-endpoints-properties). Endpoint resources that require updating within the Azure Portal should be deleted and recreated.
+ At this time, the Azure portal doesn't support the direct modification of provisioned endpoints. An endpoint's description, however, can be modified using Azure PowerShell by following [this example](endpoint-manage.md?tabs=powershell#view-and-edit-an-endpoints-properties). Endpoint resources that require updating within the Azure portal should be deleted and recreated.
### [PowerShell](#tab/powershell)
storage Anonymous Read Access Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/anonymous-read-access-configure.md
Title: Configure anonymous public read access for containers and blobs description: Learn how to allow or disallow anonymous access to blob data for the storage account. Set the container public access setting to make containers and blobs available for anonymous access.-+ Last updated 11/09/2022-+ ms.devlang: powershell, azurecli
storage Anonymous Read Access Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/anonymous-read-access-overview.md
Title: Overview of remediating anonymous public read access for blob data description: Learn how to remediate anonymous public read access to blob data for both Azure Resource Manager and classic storage accounts.-+ Last updated 11/09/2022-+
storage Anonymous Read Access Prevent Classic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/anonymous-read-access-prevent-classic.md
Title: Remediate anonymous public read access to blob data (classic deployments) description: Learn how to prevent anonymous requests against a classic storage account by disabling anonymous public access to containers.-+ Last updated 11/09/2022-+ ms.devlang: powershell, azurecli
storage Anonymous Read Access Prevent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/anonymous-read-access-prevent.md
Title: Remediate anonymous public read access to blob data (Azure Resource Manager deployments) description: Learn how to analyze anonymous requests against a storage account and how to prevent anonymous access for the entire storage account or for an individual container.-+ Last updated 05/23/2023-+ ms.devlang: powershell, azurecli
storage Archive Rehydrate Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/archive-rehydrate-overview.md
Rehydration of an archived blob may take up to 15 hours, and it is inefficient t
Azure Event Grid raises one of the following two events on blob rehydration, depending on which operation was used to rehydrate the blob:

-- The **Microsoft.Storage.BlobCreated** event fires when a blob is created. In the context of blob rehydration, this event fires when a [Copy Blob](/rest/api/storageservices/copy-blob) operation creates a new destination blob in either the Hot or Cool tier and the blob's data is fully rehydrated from the Archive tier.
+- The **Microsoft.Storage.BlobCreated** event fires when a blob is created. In the context of blob rehydration, this event fires when a [Copy Blob](/rest/api/storageservices/copy-blob) operation creates a new destination blob in either the Hot or Cool tier and the blob's data is fully rehydrated from the Archive tier. If the account has the **hierarchical namespace** feature enabled on it, the `CopyBlob` operation works a little differently. In that case, the **Microsoft.Storage.BlobCreated** event is triggered when the `CopyBlob` operation is **initiated** and not when the Block Blob is completely committed.
+
- The **Microsoft.Storage.BlobTierChanged** event fires when a blob's tier is changed. In the context of blob rehydration, this event fires when a [Set Blob Tier](/rest/api/storageservices/set-blob-tier) operation successfully changes an archived blob's tier to the Hot or Cool tier.

To learn how to capture an event on rehydration and send it to an Azure Function event handler, see [Run an Azure Function in response to a blob rehydration event](archive-rehydrate-handle-event.md).
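As a sketch, you could subscribe to just these two event types by using the Azure CLI (the resource ID and endpoint values are placeholders):

```azurecli
az eventgrid event-subscription create \
    --name rehydration-events \
    --source-resource-id "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>" \
    --endpoint "<event-handler-endpoint>" \
    --included-event-types Microsoft.Storage.BlobCreated Microsoft.Storage.BlobTierChanged
```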
storage Assign Azure Role Data Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/assign-azure-role-data-access.md
Title: Assign an Azure role for access to blob data description: Learn how to assign permissions for blob data to an Azure Active Directory security principal with Azure role-based access control (Azure RBAC). Azure Storage supports built-in and Azure custom roles for authentication and authorization via Azure AD.-+ Last updated 04/19/2022-+ ms.devlang: powershell, azurecli
storage Authorize Access Azure Active Directory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/authorize-access-azure-active-directory.md
Title: Authorize access to blobs using Active Directory description: Authorize access to Azure blobs using Azure Active Directory (Azure AD). Assign Azure roles for access rights. Access data with an Azure AD account.-+ Last updated 03/17/2023-+
storage Authorize Data Operations Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/authorize-data-operations-cli.md
Title: Authorize access to blob data with Azure CLI description: Specify how to authorize data operations against blob data with the Azure CLI. You can authorize data operations using Azure AD credentials, with the account access key, or with a shared access signature (SAS) token.-+ Last updated 07/12/2021-+ ms.devlang: azurecli
storage Authorize Data Operations Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/authorize-data-operations-portal.md
Title: Authorize access to blob data in the Azure portal description: When you access blob data using the Azure portal, the portal makes requests to Azure Storage under the covers. These requests to Azure Storage can be authenticated and authorized using either your Azure AD account or the storage account access key.-+ Last updated 12/10/2021-+
storage Authorize Data Operations Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/authorize-data-operations-powershell.md
Title: Run PowerShell commands with Azure AD credentials to access blob data description: PowerShell supports signing in with Azure AD credentials to run commands on blob data in Azure Storage. An access token is provided for the session and used to authorize calling operations. Permissions depend on the Azure role assigned to the Azure AD security principal.-+ Last updated 05/12/2022-+ ms.devlang: powershell
storage Blob Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blob-cli.md
Previously updated : 03/02/2022 Last updated : 08/28/2023 ms.devlang: azurecli
You can use the `az storage blob upload-batch` command to recursively upload mul
In the following example, the first operation uses the `az storage blob upload` command to upload a single, named file. The source file and destination storage container are specified with the `--file` and `--container-name` parameters.
-The second operation demonstrates the use of the `az storage blob upload-batch` command to upload multiple files. The `--if-unmodified-since` parameter ensures that only files modified with the last seven days will be uploaded. The value supplied by this parameter must be provided in UTC format.
+The second operation demonstrates the use of the `az storage blob upload-batch` command to upload multiple files. The `--if-modified-since` parameter ensures that only files modified within the last seven days will be uploaded. The value supplied by this parameter must be provided in UTC format.
```azurecli-interactive
#!/bin/bash
storageAccount="<storage-account>"
containerName="demo-container"
-lastModified=`date -d "10 days ago" '+%Y-%m-%dT%H:%MZ'`
+lastModified=`date -d "7 days ago" '+%Y-%m-%dT%H:%MZ'`
path="C:\\temp\\" filename="demo-file.txt"
az storage blob upload-batch \
    --pattern *.png \
    --account-name $storageAccount \
    --auth-mode login \
- --if-unmodified-since $lastModified
+ --if-modified-since $lastModified
```
storage Blob V11 Samples Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blob-v11-samples-dotnet.md
description: View code samples that use the Azure Blob Storage client library for .NET version 11.x. -+ Last updated 04/03/2023
storage Blob V11 Samples Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blob-v11-samples-javascript.md
description: View code samples that use the Azure Blob Storage client library for JavaScript version 11.x. -+ Last updated 04/03/2023
storage Blob V2 Samples Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blob-v2-samples-python.md
description: View code samples that use the Azure Blob Storage client library for Python version 2.1. -+ Last updated 04/03/2023
storage Client Side Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/client-side-encryption.md
description: The Blob Storage client library supports client-side encryption and
-+ Last updated 12/12/2022
storage Data Lake Storage Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-lake-storage-access-control.md
Previously updated : 05/09/2023 Last updated : 08/30/2023 ms.devlang: python
To set file and directory level permissions, see any of the following articles:
|REST API |[Path - Update](/rest/api/storageservices/datalakestoragegen2/path/update)|

> [!IMPORTANT]
-> If the security principal is a *service* principal, it's important to use the object ID of the service principal and not the object ID of the related app registration. To get the object ID of the service principal open the Azure CLI, and then use this command: `az ad sp show --id <Your App ID> --query objectId`. make sure to replace the `<Your App ID>` placeholder with the App ID of your app registration.
+> If the security principal is a *service* principal, it's important to use the object ID of the service principal and not the object ID of the related app registration. To get the object ID of the service principal open the Azure CLI, and then use this command: `az ad sp show --id <Your App ID> --query objectId`. Make sure to replace the `<Your App ID>` placeholder with the App ID of your app registration. The service principal is treated as a named user. You'll add this ID to the ACL as you would any named user. Named users are described later in this article.
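As a sketch, you could grant that service principal access to a directory by adding its object ID to the ACL with the Azure CLI (the account, container, and path names are placeholders). Note that `az storage fs access set` replaces the full ACL, so the base entries are included:

```azurecli
az storage fs access set \
    --acl "user::rwx,group::r-x,other::---,user:<service-principal-object-id>:r-x" \
    --path "my-directory" \
    --file-system "my-container" \
    --account-name "<storage-account>" \
    --auth-mode login
```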
## Types of ACLs
Every file and directory has distinct permissions for these identities:
The identities of users and groups are Azure Active Directory (Azure AD) identities. So unless otherwise noted, a *user*, in the context of Data Lake Storage Gen2, can refer to an Azure AD user, service principal, managed identity, or security group.
+### The super-user
+
+A super-user has the most rights of all the users. A super-user:
+
+- Has RWX Permissions to **all** files and folders.
+
+- Can change the permissions on any file or folder.
+
+- Can change the owning user or owning group of any file or folder.
+
+If a container, file, or directory is created using Shared Key, an Account SAS, or a Service SAS, then the owner and owning group are set to `$superuser`.
+
### The owning user

The user who created the item is automatically the owning user of the item. An owning user can:
storage Encryption Customer Provided Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/encryption-customer-provided-keys.md
Title: Provide an encryption key on a request to Blob storage
description: Clients making requests against Azure Blob storage can provide an encryption key on a per-request basis. Including the encryption key on the request provides granular control over encryption settings for Blob storage operations. -+ Last updated 05/09/2022 -+
storage Encryption Scope Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/encryption-scope-manage.md
Title: Create and manage encryption scopes
description: Learn how to create an encryption scope to isolate blob data at the container or blob level. -+ Last updated 05/10/2023 -+ ms.devlang: powershell, azurecli
storage Encryption Scope Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/encryption-scope-overview.md
Title: Encryption scopes for Blob storage
description: Encryption scopes provide the ability to manage encryption at the level of the container or an individual blob. You can use encryption scopes to create secure boundaries between data that resides in the same storage account but belongs to different customers. -+ Last updated 06/01/2023 -+
Keep in mind that customer-managed keys are protected by soft delete and purge p
## Billing for encryption scopes
-When you enable an encryption scope, you are billed for a minimum of one month (30 days). After the first month, charges for an encryption scope are prorated on an hourly basis.
+When you enable an encryption scope, you are billed for a minimum of 30 days. After 30 days, charges for an encryption scope are prorated on an hourly basis.
-If you disable the encryption scope within the first month, then you are billed for that full month, but not for subsequent months. If you disable the encryption scope after the first month, then you are charged for the first month, plus the number of hours that the encryption scope was in effect after the first month.
+After enabling the encryption scope, if you disable it within 30 days, you are still billed for 30 days. If you disable the encryption scope after 30 days, you are charged for those 30 days plus the number of hours the encryption scope was in effect after 30 days. For example, disabling a scope after 40 days incurs charges for 30 days plus the remaining 10 days (240 hours), prorated hourly.
Disable any encryption scopes that are not needed to avoid unnecessary charges.
To learn about pricing for encryption scopes, see [Blob Storage pricing](https:/
- [Create and manage encryption scopes](encryption-scope-manage.md)
- [Customer-managed keys for Azure Storage encryption](../common/customer-managed-keys-overview.md)
- [What is Azure Key Vault?](../../key-vault/general/overview.md)
storage Lifecycle Management Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/lifecycle-management-overview.md
description: Use Azure Storage lifecycle management policies to create automated
Previously updated : 08/10/2023 Last updated : 08/30/2023
Filters include:
| Filter name | Filter type | Notes | Is Required |
|-|-|-|-|
| blobTypes | An array of predefined enum values. | The current release supports `blockBlob` and `appendBlob`. Only delete is supported for `appendBlob`; set tier isn't supported. | Yes |
-| prefixMatch | An array of strings for prefixes to be matched. Each rule can define up to 10 case-sensitive prefixes. A prefix string must start with a container name. For example, if you want to match all blobs under `https://myaccount.blob.core.windows.net/sample-container/blob1/...` for a rule, the prefixMatch is `sample-container/blob1`.<br /><br />To match the container or blob name exactly, include the trailing forward slash ('/'), *e.g.*, `sample-container/` or `sample-container/blob1/`. To match the container or blob name pattern, omit the trailing forward slash, *e.g.*, `sample-container` or `sample-container/blob1`. | If you don't define prefixMatch, the rule applies to all blobs within the storage account. | No |
+| prefixMatch | An array of strings for prefixes to be matched. Each rule can define up to 10 case-sensitive prefixes. A prefix string must start with a container name. For example, if you want to match all blobs under `https://myaccount.blob.core.windows.net/sample-container/blob1/...` for a rule, the prefixMatch is `sample-container/blob1`.<br /><br />To match the container or blob name exactly, include the trailing forward slash ('/'), *e.g.*, `sample-container/` or `sample-container/blob1/`. To match the container or blob name pattern, omit the trailing forward slash, *e.g.*, `sample-container` or `sample-container/blob1`. | If you don't define prefixMatch, the rule applies to all blobs within the storage account. Prefix strings don't support wildcard matching. Characters such as `*` and `?` are treated as string literals. | No |
| blobIndexMatch | An array of dictionary values consisting of blob index tag key and value conditions to be matched. Each rule can define up to 10 blob index tag conditions. For example, if you want to match all blobs with `Project = Contoso` under `https://myaccount.blob.core.windows.net/` for a rule, the blobIndexMatch is `{"name": "Project","op": "==","value": "Contoso"}`. | If you don't define blobIndexMatch, the rule applies to all blobs within the storage account. | No |

To learn more about the blob index feature together with known issues and limitations, see [Manage and find data on Azure Blob Storage with blob index](storage-manage-find-blobs.md).
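For illustration, the following is a minimal policy sketch (the rule name and prefix are assumptions) that deletes block blobs under a case-sensitive prefix after 90 days:

```json
{
  "rules": [
    {
      "enabled": true,
      "name": "expire-sample-logs",
      "type": "Lifecycle",
      "definition": {
        "actions": {
          "baseBlob": {
            "delete": { "daysAfterModificationGreaterThan": 90 }
          }
        },
        "filters": {
          "blobTypes": [ "blockBlob" ],
          "prefixMatch": [ "sample-container/log" ]
        }
      }
    }
  ]
}
```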
The run conditions are based on age. Current versions use the last modified time
The platform runs the lifecycle policy once a day. Once you configure or edit a policy, it can take up to 24 hours for changes to go into effect. Once the policy is in effect, it could take up to 24 hours for some actions to run. Therefore, the policy actions may take up to 48 hours to complete.
-If you disable a policy, then no new policy runs will be scheduled, but if a run is already in progress, that run will continue until it completes.
+If you disable a policy, then no new policy runs will be scheduled, but if a run is already in progress, that run will continue until it completes and you're billed for any actions that are required to complete the run. See [Regional availability and pricing](#regional-availability-and-pricing).
### Lifecycle policy completed event
When last access time tracking is enabled, the blob property called `LastAccessT
If last access time tracking is enabled, lifecycle management uses `LastAccessTime` to determine whether the run condition **daysAfterLastAccessTimeGreaterThan** is met. Lifecycle management uses the date the lifecycle policy was enabled instead of `LastAccessTime` in the following cases: - The value of the `LastAccessTime` property of the blob is a null value.+ > [!NOTE] > The `LastAccessTime` property of the blob is null if a blob hasn't been accessed since last access time tracking was enabled.
To minimize the effect on read access latency, only the first read of the last 2
In the following example, blobs are moved to cool storage if they haven't been accessed for 30 days. The `enableAutoTierToHotFromCool` property is a Boolean value that indicates whether a blob should automatically be tiered from cool back to hot if it's accessed again after being tiered to cool.
+> [!TIP]
+> If a blob is moved to the cool tier, and then is automatically moved back before 30 days has elapsed, an early deletion fee is charged. Before you set the `enableAutoTierToHotFromCool` property, make sure to analyze the access patterns of your data so you can reduce unexpected charges.
+
```json
{
  "enabled": true,
Each update to a blob's last access time is billed under the [other operations](
For more information about pricing, see [Block Blob pricing](https://azure.microsoft.com/pricing/details/storage/blobs/).
+## Known issues and limitations
+
+- Tiering is not yet supported in a premium block blob storage account. For all other accounts, tiering is allowed only on block blobs and not for append and page blobs.
+
+- A lifecycle management policy must be read or written in full. Partial updates are not supported.
+
+- Each rule can have up to 10 case-sensitive prefixes and up to 10 blob index tag conditions.
+
+- If you enable firewall rules for your storage account, lifecycle management requests may be blocked. You can unblock these requests by providing exceptions for trusted Microsoft services. For more information, see the **Exceptions** section in [Configure firewalls and virtual networks](../common/storage-network-security.md#exceptions).
+
+- A lifecycle management policy can't change the tier of a blob that uses an encryption scope.
+
+- The delete action of a lifecycle management policy won't work with any blob in an immutable container. With an immutable policy, objects can be created and read, but not modified or deleted. For more information, see [Store business-critical blob data with immutable storage](./immutable-storage-overview.md).
+
## Frequently asked questions (FAQ)

See [Lifecycle management FAQ](storage-blob-faq.yml#lifecycle-management-policies).
storage Lifecycle Management Policy Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/lifecycle-management-policy-configure.md
description: Configure a lifecycle management policy to automatically move data
Previously updated : 05/02/2023 Last updated : 08/30/2023
To define a lifecycle management policy with an Azure Resource Manager template,
-A lifecycle management policy must be read or written in full. Partial updates are not supported.
--
-> [!NOTE]
-> Each rule can have up to 10 case-sensitive prefixes and up to 10 blob index tag conditions.
-
-> [!NOTE]
-> If you enable firewall rules for your storage account, lifecycle management requests may be blocked. You can unblock these requests by providing exceptions for trusted Microsoft services. For more information, see the **Exceptions** section in [Configure firewalls and virtual networks](../common/storage-network-security.md#exceptions).
-
-> [!NOTE]
-> A lifecycle management policy can't change the tier of a blob that uses an encryption scope.
-
-> [!NOTE]
-> The delete action of a lifecycle management policy won't work with any blob in an immutable container. With an immutable policy, objects can be created and read, but not modified or deleted. For more information, see [Store business-critical blob data with immutable storage](./immutable-storage-overview.md).
-
## See also

- [Optimize costs by automatically managing the data lifecycle](lifecycle-management-overview.md)
+- [Known issues and limitations for lifecycle management policies](lifecycle-management-overview.md#known-issues-and-limitations)
- [Access tiers for blob data](access-tiers-overview.md)
storage Network File System Protocol Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/network-file-system-protocol-known-issues.md
Previously updated : 06/23/2021 Last updated : 08/18/2023 -+

# Known issues with Network File System (NFS) 3.0 protocol support for Azure Blob Storage
This article describes limitations and known issues of Network File System (NFS)
- GRS, GZRS, and RA-GRS redundancy options aren't supported when you create an NFS 3.0 storage account.
+- Access control lists (ACLs) can't be used to authorize an NFS 3.0 request. In fact, if the ACL of a blob or directory contains an entry for a named user or group, that file becomes inaccessible on the client for non-root users. You'll have to remove these entries to restore access to non-root users on the client. For information about how to remove an ACL entry for named users and groups, see [How to set ACLs](data-lake-storage-access-control.md#how-to-set-acls).
+
## NFS 3.0 features

The following NFS 3.0 features aren't yet supported.
storage Network File System Protocol Support How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/network-file-system-protocol-support-how-to.md
Previously updated : 06/21/2023 Last updated : 08/18/2023 -+

# Mount Blob Storage by using the Network File System (NFS) 3.0 protocol
Your storage account must be contained within a virtual network. A virtual netwo
## Step 2: Configure network security
-Currently, the only way to secure the data in your storage account is by using a virtual network and other network security settings. Any other tools used to secure data, including account key authorization, Azure Active Directory (Azure AD) security, and access control lists (ACLs), are not yet supported in accounts that have the NFS 3.0 protocol support enabled on them.
+Currently, the only way to secure the data in your storage account is by using a virtual network and other network security settings. See [Network security recommendations for Blob storage](security-recommendations.md#networking).
-To secure the data in your account, see these recommendations: [Network security recommendations for Blob storage](security-recommendations.md#networking).
+Any other tools used to secure data, including account key authorization, Azure Active Directory (Azure AD) security, and access control lists (ACLs) can't be used to authorize an NFS 3.0 request. In fact, if you add an entry for a named user or group to the ACL of a blob or directory, that file becomes inaccessible on the client for non-root users. You would have to remove that entry to restore access to non-root users on the client.
> [!IMPORTANT]
> The NFS 3.0 protocol uses ports 111 and 2048. If you're connecting from an on-premises network, make sure that your client allows outgoing communication through those ports. If you have granted access to specific VNets, make sure that any network security groups associated with those VNets don't contain security rules that block incoming communication through those ports.
storage Network File System Protocol Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/network-file-system-protocol-support.md
Previously updated : 02/14/2023 Last updated : 08/18/2023 -+

# Network File System (NFS) 3.0 protocol support for Azure Blob Storage
For step-by-step guidance, see [Mount Blob storage by using the Network File Sys
## Network security
-Traffic must originate from a VNet. A VNet enables clients to securely connect to your storage account. The only way to secure the data in your account is by using a VNet and other network security settings. Any other tool used to secure data including account key authorization, Azure Active Directory (AD) security, and access control lists (ACLs) are not yet supported in accounts that have the NFS 3.0 protocol support enabled on them.
+Traffic must originate from a VNet. A VNet enables clients to securely connect to your storage account. The only way to secure the data in your account is by using a VNet and other network security settings. Any other tool used to secure data, including account key authorization, Azure Active Directory (AD) security, and access control lists (ACLs), can't be used to authorize an NFS 3.0 request.
To learn more, see [Network security recommendations for Blob storage](security-recommendations.md#networking).
storage Object Replication Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/object-replication-overview.md
Object replication isn't supported for blobs in the source account that are encr
Customer-managed failover isn't supported for either the source or the destination account in an object replication policy.
-Object replication is not supported for blobs that are uploaded to the Data Lake Storage endpoint (`dfs.core.windows.net`) by using [Data Lake Storage Gen2](/rest/api/storageservices/data-lake-storage-gen2) APIs.
+Object replication is not supported for blobs that are uploaded by using [Data Lake Storage Gen2](/rest/api/storageservices/data-lake-storage-gen2) APIs.
## How object replication works
storage Object Replication Prevent Cross Tenant Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/object-replication-prevent-cross-tenant-policies.md
For more information on how to configure object replication policies, including
To prevent object replication across Azure AD tenants, set the **AllowCrossTenantReplication** property for the storage account to **false**. If a storage account does not currently participate in any cross-tenant object replication policies, then setting the **AllowCrossTenantReplication** property to *false* prevents future configuration of cross-tenant object replication policies with this storage account as the source or destination. However, if a storage account currently participates in one or more cross-tenant object replication policies, then setting the **AllowCrossTenantReplication** property to *false* is not permitted until you delete the existing cross-tenant policies.
-Cross-tenant policies are permitted by default for a storage account. However, the **AllowCrossTenantReplication** property is not set by default for a new or existing storage account and does not return a value until you explicitly set it. The storage account can participate in object replication policies across tenants when the property value is either **null** or **true**.
+Cross-tenant policies are permitted by default for a storage account. However, the **AllowCrossTenantReplication** property is not set by default for a new or existing storage account and does not return a value until you explicitly set it. The storage account can participate in object replication policies across tenants when the property value is either **null** or **true**. Setting the **AllowCrossTenantReplication** property does not incur any downtime on the storage account.
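For example, a minimal sketch with the Azure CLI (the account and resource group names are placeholders):

```azurecli
az storage account update \
    --name <storage-account> \
    --resource-group <resource-group> \
    --allow-cross-tenant-replication false
```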
### Remediate cross-tenant replication for a new account
storage Quickstart Blobs C Plus Plus https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/quickstart-blobs-c-plus-plus.md
Title: "Quickstart: Azure Blob Storage library v12 - C++"
+ Title: "Quickstart: Azure Blob Storage library - C++"
-description: In this quickstart, you learn how to use the Azure Blob Storage client library version 12 for C++ to create a container and a blob in Blob (object) storage. Next, you learn how to download the blob to your local computer, and how to list all of the blobs in a container.
+description: In this quickstart, you learn how to use the Azure Blob Storage client library for C++ to create a container and a blob in Blob (object) storage. Next, you learn how to download the blob to your local computer, and how to list all of the blobs in a container.
Previously updated : 06/21/2021- Last updated : 08/30/2023+ ms.devlang: cpp
-# Quickstart: Azure Blob Storage client library v12 for C++
+# Quickstart: Azure Blob Storage client library for C++
-Get started with the Azure Blob Storage client library v12 for C++. Azure Blob Storage is Microsoft's object storage solution for the cloud. Follow steps to install the package and try out example code for basic tasks. Blob Storage is optimized for storing massive amounts of unstructured data.
+Get started with the Azure Blob Storage client library for C++. Azure Blob Storage is Microsoft's object storage solution for the cloud. Follow these steps to install the package and try out example code for basic tasks.
-Use the Azure Blob Storage client library v12 for C++ to:
-
-- Create a container
-- Upload a blob to Azure Storage
-- List all of the blobs in a container
-- Download the blob to your local computer
-- Delete a container
-
-Resources:
-
-- [API reference documentation](https://azure.github.io/azure-sdk-for-cpp/storage.html)
-- [Library source code](https://github.com/Azure/azure-sdk-for-cpp/tree/master/sdk/storage)
-- [Samples](../common/storage-samples-c-plus-plus.md?toc=/azure/storage/blobs/toc.json)
+| [API reference documentation](https://azure.github.io/azure-sdk-for-cpp/storage.html) | [Library source code](https://github.com/Azure/azure-sdk-for-cpp/tree/master/sdk/storage) | [Samples](../common/storage-samples-c-plus-plus.md?toc=/azure/storage/blobs/toc.json) |
## Prerequisites

-- [Azure subscription](https://azure.microsoft.com/free/)
-- [Azure storage account](../common/storage-account-create.md)
+- Azure subscription - [create one for free](https://azure.microsoft.com/free/)
+- Azure storage account - [create a storage account](../common/storage-account-create.md)
- [C++ compiler](https://azure.github.io/azure-sdk/cpp_implementation.html#supported-platforms) - [CMake](https://cmake.org/)-- [Vcpkg - C and C++ package manager](https://github.com/microsoft/vcpkg/blob/master/README.md)
+- [vcpkg - C and C++ package manager](https://vcpkg.io/en/getting-started.html)
## Setting up
-This section walks you through preparing a project to work with the Azure Blob Storage client library v12 for C++.
+This section walks you through preparing a project to work with the Azure Blob Storage client library for C++. The easiest way to acquire the Azure SDK for C++ is to use the `vcpkg` package manager.
### Install the packages
-The `vcpkg install` command will install the Azure Storage Blobs SDK for C++ and necessary dependencies:
+Use the `vcpkg install` command to install the Azure Blob Storage library for C++ and necessary dependencies:
+
+```console
+vcpkg.exe install azure-storage-blobs-cpp
+```
+
+The Azure Identity library is needed for passwordless connections to Azure:
```console
-vcpkg.exe install azure-storage-blobs-cpp:x64-windows
+vcpkg.exe install azure-identity-cpp
```
-For more information, visit GitHub to acquire and build the [Azure SDK for C++](https://github.com/Azure/azure-sdk-for-cpp/).
+For more information on project setup and working with the Azure SDK for C++, see the [Azure SDK for C++ readme](https://github.com/Azure/azure-sdk-for-cpp#azure-sdk-for-c).
### Create the project
-In Visual Studio, create a new C++ console application for Windows called *BlobQuickstartV12*.
+In Visual Studio, create a new C++ console application for Windows called *BlobQuickstart*.
:::image type="content" source="./media/quickstart-blobs-c-plus-plus/vs-create-project.jpg" alt-text="Visual Studio dialog for configuring a new C++ Windows console app"::: - ## Object model Azure Blob Storage is optimized for storing massive amounts of unstructured data. Unstructured data is data that doesn't adhere to a particular data model or definition, such as text or binary data. Blob Storage offers three types of resources:
Use these C++ classes to interact with these resources:
These example code snippets show you how to do the following tasks with the Azure Blob Storage client library for C++: - [Add include files](#add-include-files)-- [Get the connection string](#get-the-connection-string)
+- [Authenticate to Azure and authorize access to blob data](#authenticate-to-azure-and-authorize-access-to-blob-data)
- [Create a container](#create-a-container) - [Upload blobs to a container](#upload-blobs-to-a-container) - [List the blobs in a container](#list-the-blobs-in-a-container)
These example code snippets show you how to do the following tasks with the Azur
From the project directory:
-1. Open the *BlobQuickstartV12.sln* solution file in Visual Studio
-1. Inside Visual Studio, open the *BlobQuickstartV12.cpp* source file
+1. Open the *BlobQuickstart.sln* solution file in Visual Studio
+1. Inside Visual Studio, open the *BlobQuickstart.cpp* source file
1. Remove any code inside `main` that was autogenerated
-1. Add `#include` statements
+1. Add `#include` and `using namespace` statements
+
+```cpp
+#include <iostream>
+#include <azure/core.hpp>
+#include <azure/identity/default_azure_credential.hpp>
+#include <azure/storage/blobs.hpp>
+
+using namespace Azure::Identity;
+using namespace Azure::Storage::Blobs;
+```
+
+### Authenticate to Azure and authorize access to blob data
+
+Application requests to Azure Blob Storage must be authorized. Using the `DefaultAzureCredential` class provided by the Azure Identity client library is the recommended approach for implementing passwordless connections to Azure services in your code, including Blob Storage.
+
+You can also authorize requests to Azure Blob Storage by using the account access key. However, use this approach with caution. Be diligent about never exposing the access key in an unsecured location. Anyone who has the access key can authorize requests against the storage account and effectively has access to all the data. `DefaultAzureCredential` offers improved management and security benefits over the account key, enabling passwordless authentication. Both options are demonstrated in the following example.
+
+### [Passwordless (Recommended)](#tab/managed-identity)
+
+The Azure Identity library provides Azure Active Directory (Azure AD) token authentication support across the Azure SDK. It provides a set of `TokenCredential` implementations that you can use to construct Azure SDK clients that support Azure AD token authentication. `DefaultAzureCredential` supports multiple authentication methods and determines which method to use at runtime.
+
+#### Assign roles to your Azure AD user account
++
+#### Sign in and connect your app code to Azure using DefaultAzureCredential
+
+You can authorize access to data in your storage account using the following steps:
+
+1. Make sure you're authenticated with the same Azure AD account you assigned the role to on your storage account. You can authenticate via [Azure CLI](/cli/azure/install-azure-cli). Sign in to Azure through the Azure CLI using the following command:
+
+ ```azurecli
+ az login
+ ```
+
+2. To use `DefaultAzureCredential`, make sure that the **azure-identity-cpp** package is [installed](#install-the-packages) and the following `#include` is added:
+
+ ```cpp
+ #include <azure/identity/default_azure_credential.hpp>
+ ```
+
+3. Add this code to the end of `main()`. When the code runs on your local workstation, `DefaultAzureCredential` uses the developer credentials for Azure CLI to authenticate to Azure.
+ ```cpp
+ // Initialize an instance of DefaultAzureCredential
+ auto defaultAzureCredential = std::make_shared<DefaultAzureCredential>();
-### Get the connection string
+ auto accountURL = "https://<storage-account-name>.blob.core.windows.net";
+ BlobServiceClient blobServiceClient(accountURL, defaultAzureCredential);
+ ```
-The code below retrieves the connection string for your storage account from the environment variable created in [Configure your storage connection string](#configure-your-storage-connection-string).
+4. Make sure to update the storage account name in the URI of your `BlobServiceClient` object. The storage account name can be found on the overview page of the Azure portal.
-Add this code inside `main()`:
+ :::image type="content" source="./media/storage-quickstart-blobs-dotnet/storage-account-name.png" alt-text="A screenshot showing how to find the storage account name.":::
+ > [!NOTE]
+ > When using the C++ SDK in a production environment, it's recommended that you only enable credentials that you know your application will use. Instead of using `DefaultAzureCredential`, you should authorize using a specific credential type, or by using `ChainedTokenCredential` with the supported credentials.
+
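As a point of reference, a minimal sketch of that narrower approach might look like the following. This assumes the **azure-identity-cpp** package is installed; the specific credentials you chain should match where your application actually runs:

```cpp
#include <azure/identity/azure_cli_credential.hpp>
#include <azure/identity/chained_token_credential.hpp>
#include <azure/identity/managed_identity_credential.hpp>

using namespace Azure::Identity;

// Try managed identity first (for example, when the app runs in Azure),
// then fall back to Azure CLI credentials for local development.
auto chainedCredential = std::make_shared<ChainedTokenCredential>(
    ChainedTokenCredential::Sources{
        std::make_shared<ManagedIdentityCredential>(),
        std::make_shared<AzureCliCredential>()});
```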
+### [Connection String](#tab/connection-string)
+
+A connection string includes the storage account access key and uses it to authorize requests. Be careful never to expose the access key in an unsecured location.
+
+> [!NOTE]
+> To authorize data access with the storage account access key, you'll need permissions for the following Azure RBAC action: [Microsoft.Storage/storageAccounts/listkeys/action](../../role-based-access-control/resource-provider-operations.md#microsoftstorage). The least privileged built-in role with permissions for this action is [Reader and Data Access](../../role-based-access-control/built-in-roles.md#reader-and-data-access), but any role that includes this action will work.
++
+#### Configure your storage connection string
+
+After you copy the connection string, write it to a new environment variable on the local machine running the application. To set the environment variable, open a console window, and follow the instructions for your operating system. Replace `<yourconnectionstring>` with your actual connection string.
+
+**Windows**:
+
+```cmd
+setx AZURE_STORAGE_CONNECTION_STRING "<yourconnectionstring>"
+```
+
+After you add the environment variable in Windows, you must start a new instance of the command window.
+
+**Linux**:
+
+```bash
+export AZURE_STORAGE_CONNECTION_STRING="<yourconnectionstring>"
+```
+
+The following code example retrieves the connection string for the storage account from the environment variable created earlier, and uses the connection string to construct a service client object.
+
+Add this code to the end of `main()`:
+
+```cpp
+ // Retrieve the connection string for use with the application. The storage
+ // connection string is stored in an environment variable on the machine
+ // running the application called AZURE_STORAGE_CONNECTION_STRING.
+ // Note that _MSC_VER is set when using MSVC compiler.
+ static const char* AZURE_STORAGE_CONNECTION_STRING = "AZURE_STORAGE_CONNECTION_STRING";
+
+#if !defined(_MSC_VER)
+ const char* connectionString = std::getenv(AZURE_STORAGE_CONNECTION_STRING);
+#else
+ // Use getenv_s for MSVC
+ size_t requiredSize;
+ getenv_s(&requiredSize, NULL, NULL, AZURE_STORAGE_CONNECTION_STRING);
+ if (requiredSize == 0) {
+ throw std::runtime_error("missing connection string from env.");
+ }
+ std::vector<char> value(requiredSize);
+ getenv_s(&requiredSize, value.data(), value.size(), AZURE_STORAGE_CONNECTION_STRING);
+ std::string connectionStringStr = std::string(value.begin(), value.end());
+ const char* connectionString = connectionStringStr.c_str();
+#endif
+
+ auto blobServiceClient = BlobServiceClient::CreateFromConnectionString(connectionString);
+```
+
+> [!IMPORTANT]
+> The account access key should be used with caution. If your account access key is lost or accidentally placed in an insecure location, your service may become vulnerable. Anyone who has the access key is able to authorize requests against the storage account, and effectively has access to all the data. `DefaultAzureCredential` provides enhanced security features and benefits and is the recommended approach for managing authorization to Azure services.
++ ### Create a container
-Create an instance of the [BlobContainerClient](https://azuresdkdocs.blob.core.windows.net/$web/cpp/azure-storage-blobs/12.0.0/class_azure_1_1_storage_1_1_blobs_1_1_blob_container_client.html) class by calling the [CreateFromConnectionString](https://azuresdkdocs.blob.core.windows.net/$web/cpp/azure-storage-blobs/12.0.0/class_azure_1_1_storage_1_1_blobs_1_1_blob_container_client.html#a5d253aacb6e20578b7f5f233547be3e2) function. Then call [CreateIfNotExists](https://azuresdkdocs.blob.core.windows.net/$web/cpp/azure-storage-blobs/12.0.0/class_azure_1_1_storage_1_1_blobs_1_1_blob_container_client.html#ab3ef187d2e30e1a19ebadf45d0fdf9c4) to create the actual container in your storage account.
+Decide on a name for the new container. Then create an instance of `BlobContainerClient` and create the container.
> [!IMPORTANT] > Container names must be lowercase. For more information about naming containers and blobs, see [Naming and Referencing Containers, Blobs, and Metadata](/rest/api/storageservices/naming-and-referencing-containers--blobs--and-metadata). Add this code to the end of `main()`:
+```cpp
+std::string containerName = "myblobcontainer";
+auto containerClient = blobServiceClient.GetBlobContainerClient(containerName);
+
+// Create the container if it does not exist
+std::cout << "Creating container: " << containerName << std::endl;
+containerClient.CreateIfNotExists();
+```
### Upload blobs to a container The following code snippet:
-1. Declares a string containing "Hello Azure!".
+1. Declares a string containing "Hello Azure!"
1. Gets a reference to a [BlockBlobClient](https://azuresdkdocs.blob.core.windows.net/$web/cpp/azure-storage-blobs/1.0.0-beta.2/class_azure_1_1_storage_1_1_blobs_1_1_block_blob_client.html) object by calling [GetBlockBlobClient](https://azuresdkdocs.blob.core.windows.net/$web/cpp/azure-storage-blobs/1.0.0-beta.2/class_azure_1_1_storage_1_1_blobs_1_1_blob_container_client.html#acd8c68e3f37268fde0010dd478ff048f) on the container from the [Create a container](#create-a-container) section. 1. Uploads the string to the blob by calling the [UploadFrom](https://azuresdkdocs.blob.core.windows.net/$web/cpp/azure-storage-blobs/1.0.0-beta.2/class_azure_1_1_storage_1_1_blobs_1_1_block_blob_client.html#af93af7e37f8806e39481596ef253f93d) function. This function creates the blob if it doesn't already exist, or updates it if it does.
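A minimal sketch of this step follows, assuming the `containerClient` from the previous section; the blob name and content are illustrative:

```cpp
std::string blobName = "blob.txt";
std::string blobContent = "Hello Azure!";

// Get a client for a block blob in the container created earlier
BlockBlobClient blobClient = containerClient.GetBlockBlobClient(blobName);

std::cout << "Uploading blob: " << blobName << std::endl;

// UploadFrom creates the blob if it doesn't exist, or updates it if it does
blobClient.UploadFrom(
    reinterpret_cast<const uint8_t*>(blobContent.data()), blobContent.size());
```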
This app creates a container and uploads a text file to Azure Blob Storage. The
The output of the app is similar to the following example: ```output
-Azure Blob Storage v12 - C++ quickstart sample
+Azure Blob Storage - C++ quickstart sample
Creating container: myblobcontainer Uploading blob: blob.txt Listing blobs...
In this quickstart, you learned how to upload, download, and list blobs using C+
To see a C++ Blob Storage sample, continue to: > [!div class="nextstepaction"]
-> [Azure Blob Storage SDK v12 for C++ sample](https://github.com/Azure/azure-sdk-for-cpp/tree/main/sdk/storage/azure-storage-blobs/samples)
+> [Azure Blob Storage client library for C++ samples](https://github.com/Azure/azure-sdk-for-cpp/tree/main/sdk/storage/azure-storage-blobs/samples)
storage Quickstart Blobs Javascript Browser https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/quickstart-blobs-javascript-browser.md
This code calls the [ContainerClient.deleteBlob](/javascript/api/@azure/storage-
http://localhost:1234 ```
-## Step 1 - Create a container
+## Step 1: Create a container
1. In the web app, select **Create container**. The status indicates that a container was created. 2. In the Azure portal, verify your container was created. Select your storage account. Under **Blob service**, select **Containers**. Verify that the new container appears. (You may need to select **Refresh**.)
-## Step 2 - Upload a blob to the container
+## Step 2: Upload a blob to the container
1. On your local computer, create and save a test file, such as *test.txt*. 2. In the web app, select **Select and upload files**. 3. Browse to your test file, and then select **Open**. The status indicates that the file was uploaded, and the file list was retrieved. 4. In the Azure portal, select the name of the new container that you created earlier. Verify that the test file appears.
-## Step 3 - Delete the blob
+## Step 3: Delete the blob
1. In the web app, under **Files**, select the test file. 2. Select **Delete selected files**. The status indicates that the file was deleted and that the container contains no files. 3. In the Azure portal, select **Refresh**. Verify that you see **No blobs found**.
-## Step 4 - Delete the container
+## Step 4: Delete the container
1. In the web app, select **Delete container**. The status indicates that the container was deleted. 2. In the Azure portal, select the **\<account-name\> | Containers** link at the top-left of the portal pane.
storage Sas Service Create Dotnet Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/sas-service-create-dotnet-container.md
description: Learn how to create a service shared access signature (SAS) for a container using the Azure Blob Storage client library for .NET. -+ Last updated 06/22/2023
storage Sas Service Create Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/sas-service-create-dotnet.md
description: Learn how to create a service shared access signature (SAS) for a blob using the Azure Blob Storage client library for .NET. -+ Last updated 06/22/2023
storage Sas Service Create Java Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/sas-service-create-java-container.md
description: Learn how to create a service shared access signature (SAS) for a container using the Azure Blob Storage client library for Java. -+ Last updated 06/23/2023
storage Sas Service Create Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/sas-service-create-java.md
description: Learn how to create a service shared access signature (SAS) for a blob using the Azure Blob Storage client library for Java. -+ Last updated 06/23/2023
storage Sas Service Create Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/sas-service-create-javascript.md
description: Learn how to create a service shared access signature (SAS) for a container or blob using the Azure Blob Storage client library for JavaScript. -+ Last updated 01/19/2023
storage Sas Service Create Python Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/sas-service-create-python-container.md
description: Learn how to create a service shared access signature (SAS) for a container using the Azure Blob Storage client library for Python. -+ Last updated 06/09/2023
storage Sas Service Create Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/sas-service-create-python.md
description: Learn how to create a service shared access signature (SAS) for a blob using the Azure Blob Storage client library for Python. -+ Last updated 06/09/2023
storage Scalability Targets Premium Block Blobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/scalability-targets-premium-block-blobs.md
Title: Scalability targets for premium block blob storage accounts description: Learn about premium-performance block blob storage accounts. Block blob storage accounts are optimized for applications that use smaller, kilobyte-range objects.-+ Last updated 12/18/2019-+
storage Scalability Targets Premium Page Blobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/scalability-targets-premium-page-blobs.md
Title: Scalability targets for premium page blob storage accounts description: A premium performance page blob storage account is optimized for read/write operations. This type of storage account backs an unmanaged disk for an Azure virtual machine.-+ Last updated 09/24/2021-+
storage Scalability Targets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/scalability-targets.md
Title: Scalability and performance targets for Blob storage description: Learn about scalability and performance targets for Blob storage.-+ Last updated 01/11/2023-+
storage Security Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/security-recommendations.md
Title: Security recommendations for Blob storage description: Learn about security recommendations for Blob storage. Implementing this guidance will help you fulfill your security obligations as described in our shared responsibility model.-+ Last updated 04/06/2023-+
storage Simulate Primary Region Failure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/simulate-primary-region-failure.md
description: Simulate an error in reading data from the primary region when the storage account is configured for read-access geo-zone-redundant storage (RA-GZRS). -+ Last updated 09/06/2022
storage Snapshots Manage Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/snapshots-manage-dotnet.md
description: Learn how to use the .NET client library to create a read-only snap
-+ Last updated 08/27/2020 ms.devlang: csharp
storage Storage Auth Abac Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-auth-abac-powershell.md
-+ Last updated 03/15/2023
storage Storage Blob Account Delegation Sas Create Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-account-delegation-sas-create-javascript.md
description: Create and use account SAS tokens in a JavaScript application that
-+ Last updated 11/30/2022
storage Storage Blob Append https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-append.md
Title: Append data to a blob with .NET
-description: Learn how to append data to a blob in Azure Storage by using the.NET client library.
+description: Learn how to append data to an append blob in Azure Storage by using the .NET client library.
Previously updated : 03/28/2022- Last updated : 09/01/2023+ ms.devlang: csharp, python-+
-# Append data to a blob in Azure Storage using the .NET client library
+# Append data to an append blob with .NET
You can append data to a blob by creating an append blob. Append blobs are made up of blocks like block blobs, but are optimized for append operations. Append blobs are ideal for scenarios such as logging data from virtual machines.
static async Task AppendToBlob(
await appendBlobClient.CreateIfNotExistsAsync();
- var maxBlockSize = appendBlobClient.AppendBlobMaxAppendBlockBytes;
-
- if (logEntryStream.Length <= maxBlockSize)
- {
- await appendBlobClient.AppendBlockAsync(logEntryStream);
- }
- else
+ int maxBlockSize = appendBlobClient.AppendBlobMaxAppendBlockBytes;
+ long bytesLeft = logEntryStream.Length;
+ byte[] buffer = new byte[maxBlockSize];
+ while (bytesLeft > 0)
{
- var bytesLeft = logEntryStream.Length;
-
- while (bytesLeft > 0)
+ int blockSize = (int)Math.Min(bytesLeft, maxBlockSize);
+ int bytesRead = await logEntryStream.ReadAsync(buffer, 0, blockSize);
+ using (MemoryStream memoryStream = new MemoryStream(buffer, 0, bytesRead))
{
- var blockSize = (int)Math.Min(bytesLeft, maxBlockSize);
- var buffer = new byte[blockSize];
- await logEntryStream.ReadAsync(buffer, 0, blockSize);
- await appendBlobClient.AppendBlockAsync(new MemoryStream(buffer));
- bytesLeft -= blockSize;
+ await appendBlobClient.AppendBlockAsync(memoryStream);
}
+ bytesLeft -= bytesRead;
} } ```
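For comparison only, the same chunk-and-append pattern might be sketched with the C++ SDK as follows; the `AppendToBlob` helper, the `log.txt` blob name, and the 4-MiB block limit here are assumptions for illustration, not part of this article:

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>
#include <azure/core/io/body_stream.hpp>
#include <azure/storage/blobs.hpp>

using namespace Azure::Storage::Blobs;

void AppendToBlob(BlobContainerClient& containerClient, const std::vector<uint8_t>& logEntry)
{
    // Get a client for an append blob and create the blob if it doesn't exist yet
    AppendBlobClient appendBlobClient = containerClient.GetAppendBlobClient("log.txt");
    appendBlobClient.CreateIfNotExists();

    // Append in chunks so each block stays within the service's per-block limit
    const size_t maxBlockSize = 4 * 1024 * 1024; // assumed 4-MiB per-block limit
    size_t offset = 0;
    while (offset < logEntry.size())
    {
        size_t blockSize = std::min(maxBlockSize, logEntry.size() - offset);
        Azure::Core::IO::MemoryBodyStream stream(logEntry.data() + offset, blockSize);
        appendBlobClient.AppendBlock(stream);
        offset += blockSize;
    }
}
```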
storage Storage Blob Block Blob Premium https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-block-blob-premium.md
Title: Premium block blob storage accounts description: Achieve lower and consistent latencies for Azure Storage workloads that require fast and consistent response times.-+ -+ Last updated 10/14/2021
storage Storage Blob Client Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-client-management.md
description: Learn how to create and manage clients that interact with data reso
-+ Last updated 02/08/2023
storage Storage Blob Container Create Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-create-java.md
description: Learn how to create a blob container in your Azure Storage account
-+ Last updated 08/02/2023
storage Storage Blob Container Create Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-create-javascript.md
description: Learn how to create a blob container in your Azure Storage account using the JavaScript client library. -+ Last updated 11/30/2022
storage Storage Blob Container Create Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-create-python.md
description: Learn how to create a blob container in your Azure Storage account
-+ Last updated 08/02/2023
storage Storage Blob Container Create Typescript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-create-typescript.md
description: Learn how to create a blob container in your Azure Storage account using the JavaScript client library using TypeScript. -+ Last updated 03/21/2023
storage Storage Blob Container Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-create.md
description: Learn how to create a blob container in your Azure Storage account
-+ Last updated 07/25/2022
storage Storage Blob Container Delete Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-delete-java.md
description: Learn how to delete and restore a blob container in your Azure Stor
-+ Last updated 08/02/2023
storage Storage Blob Container Delete Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-delete-javascript.md
-+ Last updated 11/30/2022 ms.devlang: javascript
storage Storage Blob Container Delete Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-delete-python.md
description: Learn how to delete and restore a blob container in your Azure Stor
-+ Last updated 08/02/2023
storage Storage Blob Container Delete Typescript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-delete-typescript.md
-+ Last updated 03/21/2023 ms.devlang: TypeScript
storage Storage Blob Container Delete https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-delete.md
-+ Last updated 03/28/2022
storage Storage Blob Container Lease Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-lease-java.md
-+ Last updated 08/02/2023 ms.devlang: java
storage Storage Blob Container Lease Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-lease-javascript.md
-+ Last updated 05/01/2023 ms.devlang: javascript
storage Storage Blob Container Lease Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-lease-python.md
-+ Last updated 08/02/2023 ms.devlang: python
storage Storage Blob Container Lease Typescript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-lease-typescript.md
-+ Last updated 05/01/2023 ms.devlang: typescript
storage Storage Blob Container Lease https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-lease.md
-+ Last updated 04/10/2023 ms.devlang: csharp
storage Storage Blob Container Properties Metadata Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-properties-metadata-java.md
description: Learn how to set and retrieve system properties and store custom me
-+ Last updated 08/02/2023
storage Storage Blob Container Properties Metadata Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-properties-metadata-javascript.md
-+ Last updated 11/30/2022
storage Storage Blob Container Properties Metadata Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-properties-metadata-python.md
description: Learn how to set and retrieve system properties and store custom me
-+ Last updated 08/02/2023
storage Storage Blob Container Properties Metadata Typescript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-properties-metadata-typescript.md
-+ Last updated 03/21/2023
storage Storage Blob Container Properties Metadata https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-properties-metadata.md
-+ Last updated 03/28/2022 ms.devlang: csharp
storage Storage Blob Container User Delegation Sas Create Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-user-delegation-sas-create-dotnet.md
description: Learn how to create a user delegation SAS for a container with Azur
-+ Last updated 06/22/2023
storage Storage Blob Container User Delegation Sas Create Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-user-delegation-sas-create-java.md
description: Learn how to create a user delegation SAS for a container with Azur
-+ Last updated 06/12/2023
storage Storage Blob Container User Delegation Sas Create Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-user-delegation-sas-create-python.md
description: Learn how to create a user delegation SAS for a container with Azur
-+ Last updated 06/09/2023
storage Storage Blob Containers List Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-containers-list-java.md
description: Learn how to list blob containers in your Azure Storage account usi
-+ Last updated 08/02/2023
storage Storage Blob Containers List Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-containers-list-javascript.md
-+ Last updated 11/30/2022
storage Storage Blob Containers List Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-containers-list-python.md
description: Learn how to list blob containers in your Azure Storage account usi
-+ Last updated 08/02/2023
storage Storage Blob Containers List Typescript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-containers-list-typescript.md
-+ Last updated 03/21/2023
storage Storage Blob Containers List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-containers-list.md
-+ Last updated 03/28/2022 ms.devlang: csharp
storage Storage Blob Copy Async Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-copy-async-dotnet.md
Last updated 04/11/2023-+ ms.devlang: csharp
storage Storage Blob Copy Async Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-copy-async-java.md
Last updated 08/02/2023-+ ms.devlang: java
storage Storage Blob Copy Async Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-copy-async-javascript.md
Last updated 05/08/2023-+ ms.devlang: javascript
storage Storage Blob Copy Async Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-copy-async-python.md
Last updated 08/02/2023-+ ms.devlang: python
storage Storage Blob Copy Async Typescript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-copy-async-typescript.md
Last updated 05/08/2023-+ ms.devlang: typescript
storage Storage Blob Copy Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-copy-java.md
Last updated 04/18/2023-+ ms.devlang: java
storage Storage Blob Copy Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-copy-javascript.md
Last updated 05/08/2023-+ ms.devlang: javascript
storage Storage Blob Copy Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-copy-python.md
Last updated 04/28/2023-+ ms.devlang: python
storage Storage Blob Copy Typescript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-copy-typescript.md
Last updated 05/08/2023-+ ms.devlang: typescript
storage Storage Blob Copy Url Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-copy-url-dotnet.md
Last updated 04/11/2023-+ ms.devlang: csharp
storage Storage Blob Copy Url Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-copy-url-java.md
Last updated 08/02/2023-+ ms.devlang: java
storage Storage Blob Copy Url Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-copy-url-javascript.md
Last updated 05/08/2023-+ ms.devlang: javascript
storage Storage Blob Copy Url Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-copy-url-python.md
Last updated 08/02/2023-+ ms.devlang: python
storage Storage Blob Copy Url Typescript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-copy-url-typescript.md
Last updated 05/08/2023-+ ms.devlang: typescript
storage Storage Blob Copy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-copy.md
Last updated 04/14/2023-+ ms.devlang: csharp
storage Storage Blob Create User Delegation Sas Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-create-user-delegation-sas-javascript.md
-+ Last updated 07/15/2022
storage Storage Blob Customer Provided Key https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-customer-provided-key.md
Title: Specify a customer-provided key on a request to Blob storage with .NET
description: Learn how to specify a customer-provided key on a request to Blob storage using .NET. -+ Last updated 05/09/2022-+ ms.devlang: csharp
storage Storage Blob Delete Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-delete-java.md
Last updated 08/02/2023-+ ms.devlang: java
storage Storage Blob Delete Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-delete-javascript.md
Last updated 11/30/2022-+ ms.devlang: javascript
storage Storage Blob Delete Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-delete-python.md
Last updated 08/02/2023-+ ms.devlang: python
storage Storage Blob Delete Typescript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-delete-typescript.md
Last updated 03/21/2023-+ ms.devlang: typescript
storage Storage Blob Delete https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-delete.md
Last updated 05/11/2023-+ ms.devlang: csharp
storage Storage Blob Dotnet Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-dotnet-get-started.md
-+ Last updated 07/12/2023 ms.devlang: csharp
storage Storage Blob Download Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-download-java.md
Last updated 08/02/2023-+ ms.devlang: java
storage Storage Blob Download Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-download-javascript.md
Last updated 04/21/2023-+ ms.devlang: javascript
storage Storage Blob Download Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-download-python.md
Last updated 08/02/2023-+ ms.devlang: python
storage Storage Blob Download Typescript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-download-typescript.md
Last updated 06/21/2023-+ ms.devlang: typescript
storage Storage Blob Download https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-download.md
Last updated 05/23/2023-+ ms.devlang: csharp
storage Storage Blob Encryption Status https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-encryption-status.md
Title: Check the encryption status of a blob
description: Learn how to use Azure portal, PowerShell, or Azure CLI to check whether a given blob is encrypted. -+ Last updated 02/09/2023-+
storage Storage Blob Get Url Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-get-url-javascript.md
Last updated 09/13/2022-+ ms.devlang: javascript
storage Storage Blob Get Url Typescript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-get-url-typescript.md
Last updated 03/21/2023-+ ms.devlang: typescript-+ # Get URL for container or blob with TypeScript
storage Storage Blob Java Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-java-get-started.md
-+ Last updated 07/12/2023
storage Storage Blob Javascript Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-javascript-get-started.md
-+ Last updated 11/30/2022
storage Storage Blob Lease Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-lease-java.md
-+ Last updated 08/02/2023 ms.devlang: java
storage Storage Blob Lease Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-lease-javascript.md
-+ Last updated 05/01/2023 ms.devlang: javascript
storage Storage Blob Lease Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-lease-python.md
-+ Last updated 08/02/2023 ms.devlang: python
storage Storage Blob Lease Typescript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-lease-typescript.md
-+ Last updated 05/01/2023 ms.devlang: typescript
storage Storage Blob Lease https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-lease.md
-+ Last updated 04/10/2023 ms.devlang: csharp
storage Storage Blob Object Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-object-model.md
-+ Last updated 03/07/2023
storage Storage Blob Pageblob Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-pageblob-overview.md
Title: Overview of Azure page blobs description: An overview of Azure page blobs and their advantages, including use cases with sample scripts. -+ Last updated 05/11/2023-+ ms.devlang: csharp
storage Storage Blob Properties Metadata Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-properties-metadata-java.md
Last updated 08/02/2023-+ ms.devlang: java
storage Storage Blob Properties Metadata Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-properties-metadata-javascript.md
Last updated 11/30/2022-+ ms.devlang: javascript
storage Storage Blob Properties Metadata Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-properties-metadata-python.md
Last updated 08/02/2023-+ ms.devlang: python
storage Storage Blob Properties Metadata Typescript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-properties-metadata-typescript.md
Last updated 03/21/2023-+ ms.devlang: typescript
storage Storage Blob Properties Metadata https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-properties-metadata.md
Last updated 03/28/2022-+ ms.devlang: csharp
storage Storage Blob Python Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-python-get-started.md
-+ Last updated 07/12/2023
storage Storage Blob Query Endpoint Srp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-query-endpoint-srp.md
-+ Last updated 06/07/2023
storage Storage Blob Reserved Capacity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-reserved-capacity.md
Title: Optimize costs for Blob storage with reserved capacity
description: Learn about purchasing Azure Storage reserved capacity to save costs on block blob and Azure Data Lake Storage Gen2 resources. -+ Last updated 05/17/2021-+ # Optimize costs for Blob storage with reserved capacity
An Azure Storage reservation covers only the amount of data that is stored in a
Azure Storage reserved capacity is available for resources in standard storage accounts, including general-purpose v2 (GPv2) and Blob storage accounts.
-All access tiers (hot, cool, and archive) are supported for reservations. For more information on access tiers, see [Hot, Cool, and Archive access tiers for blob data](access-tiers-overview.md).
+Hot, cool, and archive tiers are supported for reservations. For more information on access tiers, see [Hot, Cool, and Archive access tiers for blob data](access-tiers-overview.md).
All types of redundancy are supported for reservations. For more information about redundancy options, see [Azure Storage redundancy](../common/storage-redundancy.md).
storage Storage Blob Tags Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-tags-java.md
Last updated 08/02/2023-+ ms.devlang: java
storage Storage Blob Tags Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-tags-javascript.md
Last updated 11/30/2022-+ ms.devlang: javascript
storage Storage Blob Tags Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-tags-python.md
Last updated 08/02/2023-+ ms.devlang: python
storage Storage Blob Tags Typescript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-tags-typescript.md
Last updated 03/21/2023-+ ms.devlang: typescript
storage Storage Blob Tags https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-tags.md
Last updated 03/28/2022-+ ms.devlang: csharp
storage Storage Blob Typescript Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-typescript-get-started.md
-+ Last updated 03/21/2023
storage Storage Blob Upload Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-upload-java.md
Last updated 08/02/2023-+ ms.devlang: java
storage Storage Blob Upload Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-upload-javascript.md
Last updated 06/20/2023-+ ms.devlang: javascript
storage Storage Blob Upload Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-upload-python.md
Last updated 08/02/2023-+ ms.devlang: python
storage Storage Blob Upload Typescript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-upload-typescript.md
Last updated 06/21/2023-+ ms.devlang: typescript
storage Storage Blob Upload https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-upload.md
description: Learn how to upload a blob to your Azure Storage account using the
Previously updated : 07/07/2023- Last updated : 08/28/2023+ ms.devlang: csharp
The following table shows the available options for the checksum algorithm, as d
| MD5 | 2 | Standard MD5 hash algorithm. | | StorageCrc64 | 3 | Azure Storage custom 64-bit CRC. |
+> [!NOTE]
+> If the checksum specified in the request doesn't match the checksum calculated by the service, the upload operation fails. The operation is not retried when using a default retry policy. In .NET, a `RequestFailedException` is thrown with status code 400 and error code `Md5Mismatch` or `Crc64Mismatch`, depending on which algorithm is used.
+ ### Upload with index tags Blob index tags categorize data in your storage account using key-value tag attributes. These tags are automatically indexed and exposed as a searchable multi-dimensional index to easily find data. You can add tags to a [BlobUploadOptions](/dotnet/api/azure.storage.blobs.models.blobuploadoptions) instance, and pass that instance into the `UploadAsync` method.
storage Storage Blob Use Access Tier Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-use-access-tier-dotnet.md
-+ Last updated 07/03/2023 ms.devlang: csharp
storage Storage Blob Use Access Tier Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-use-access-tier-java.md
-+ Last updated 08/02/2023 ms.devlang: java
storage Storage Blob Use Access Tier Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-use-access-tier-javascript.md
-+ Last updated 06/28/2023 ms.devlang: javascript
storage Storage Blob Use Access Tier Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-use-access-tier-python.md
-+ Last updated 08/02/2023 ms.devlang: python
storage Storage Blob Use Access Tier Typescript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-use-access-tier-typescript.md
-+ Last updated 06/28/2023 ms.devlang: typescript
storage Storage Blob User Delegation Sas Create Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-user-delegation-sas-create-cli.md
Title: Use Azure CLI to create a user delegation SAS for a container or blob
description: Learn how to create a user delegation SAS with Azure Active Directory credentials by using Azure CLI. -+ Last updated 12/18/2019-+
storage Storage Blob User Delegation Sas Create Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-user-delegation-sas-create-dotnet.md
description: Learn how to create a user delegation SAS for a blob with Azure Act
-+ Last updated 06/22/2023
storage Storage Blob User Delegation Sas Create Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-user-delegation-sas-create-java.md
description: Learn how to create a user delegation SAS for a blob with Azure Act
-+ Last updated 06/12/2023
storage Storage Blob User Delegation Sas Create Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-user-delegation-sas-create-powershell.md
Title: Use PowerShell to create a user delegation SAS for a container or blob
description: Learn how to create a user delegation SAS with Azure Active Directory credentials by using PowerShell. -+ Last updated 12/18/2019-+
storage Storage Blob User Delegation Sas Create Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-user-delegation-sas-create-python.md
description: Learn how to create a user delegation SAS for a blob with Azure Act
-+ Last updated 06/06/2023
storage Storage Blobs Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blobs-introduction.md
Title: Introduction to Blob (object) Storage
description: Use Azure Blob Storage to store massive amounts of unstructured object data, such as text or binary data. Azure Blob Storage is highly scalable and available. -+ Last updated 03/28/2023-+
storage Storage Blobs Latency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blobs-latency.md
Title: Latency in Blob storage
description: Understand and measure latency for Blob storage operations, and learn how to design your Blob storage applications for low latency. -+ Last updated 09/05/2019-+ # Latency in Blob storage
storage Storage Blobs List Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blobs-list-java.md
description: Learn how to list blobs in your storage account using the Azure Sto
-+ Previously updated : 08/02/2023 Last updated : 08/16/2023 ms.devlang: java
To list the blobs in a storage account, call one of these methods:
- [listBlobs](/java/api/com.azure.storage.blob.BlobContainerClient) - [listBlobsByHierarchy](/java/api/com.azure.storage.blob.BlobContainerClient)
+### Manage how many results are returned
+
+By default, a listing operation returns up to 5000 results at a time, but you can specify the number of results that you want each listing operation to return. The examples presented in this article show you how to return results in pages. To learn more about pagination concepts, see [Pagination with the Azure SDK for Java](/azure/developer/java/sdk/pagination).
+
+### Filter results with a prefix
+
+To filter the list of blobs, pass a string as the `prefix` parameter to [ListBlobsOptions.setPrefix(String prefix)](/java/api/com.azure.storage.blob.models.listblobsoptions). The prefix string can include one or more characters. Azure Storage then returns only the blobs whose names start with that prefix.
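These listing options work the same way conceptually across the SDKs. Purely as an illustration, here's a sketch using the C++ SDK from the quickstart earlier in this digest; the `containerClient`, the prefix value, and the page size are assumptions:

```cpp
#include <iostream>
#include <azure/storage/blobs.hpp>

using namespace Azure::Storage::Blobs;

void ListBlobsWithPrefix(BlobContainerClient& containerClient)
{
    ListBlobsOptions options;
    options.Prefix = "folderA/";  // return only blobs whose names start with this prefix
    options.PageSizeHint = 10;    // request up to 10 results per page

    // Iterate page by page; each page holds up to PageSizeHint results
    for (auto page = containerClient.ListBlobs(options); page.HasPage(); page.MoveToNextPage())
    {
        for (const auto& blob : page.Blobs)
        {
            std::cout << blob.Name << std::endl;
        }
    }
}
```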
+ ### Flat listing versus hierarchical listing Blobs in Azure Storage are organized in a flat paradigm, rather than a hierarchical paradigm (like a classic file system). However, you can organize blobs into *virtual directories* in order to mimic a folder structure. A virtual directory forms part of the name of the blob and is indicated by the delimiter character.
To organize blobs into virtual directories, use a delimiter character in the blo
If you name your blobs using a delimiter, then you can choose to list blobs hierarchically. For a hierarchical listing operation, Azure Storage returns any virtual directories and blobs beneath the parent object. You can call the listing operation recursively to traverse the hierarchy, similar to how you would traverse a classic file system programmatically.
-If you've enabled the hierarchical namespace feature on your account, directories aren't virtual. Instead, they're concrete, independent objects. Therefore, directories appear in the list as zero-length blobs.
- ## Use a flat listing By default, a listing operation returns blobs in a flat listing. In a flat listing, blobs aren't organized by virtual directory.
Page 3
Name: folderA/folderB/file3.txt, Is deleted? false ```
+> [!NOTE]
+> The sample output shown assumes that you have a storage account with a flat namespace. If you've enabled the hierarchical namespace feature for your storage account, directories are not virtual. Instead, they are concrete, independent objects. As a result, directories appear in the list as zero-length blobs.<br/><br/>For an alternative listing option when working with a hierarchical namespace, see [List directory contents (Azure Data Lake Storage Gen2)](data-lake-storage-directory-file-acl-java.md#list-directory-contents).
+ ## Use a hierarchical listing When you call a listing operation hierarchically, Azure Storage returns the virtual directories and blobs at the first level of the hierarchy.
storage Storage Blobs List Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blobs-list-javascript.md
-+ Previously updated : 11/30/2022 Last updated : 08/16/2023 ms.devlang: javascript
Related functionality can be found in the following methods:
### Manage how many results are returned
-By default, a listing operation returns up to 5000 results at a time, but you can specify the number of results that you want each listing operation to return. The examples presented in this article show you how to return results in pages.
+By default, a listing operation returns up to 5000 results at a time, but you can specify the number of results that you want each listing operation to return. The examples presented in this article show you how to return results in pages. To learn more about pagination concepts, see [Pagination with the Azure SDK for JavaScript](/azure/developer/javascript/core/use-azure-sdk#asynchronous-paging-of-results).
### Filter results with a prefix
-To filter the list of blobs, specify a string for the `prefix` property in the [list options](/javascript/api/@azure/storage-blob/containerlistblobsoptions). The prefix string can include one or more characters. Azure Storage then returns only the blobs whose names start with that prefix.
+To filter the list of blobs, specify a string for the `prefix` property in [ContainerListBlobsOptions](/javascript/api/@azure/storage-blob/containerlistblobsoptions). The prefix string can include one or more characters. Azure Storage then returns only the blobs whose names start with that prefix.
```javascript const listOptions = {
To organize blobs into virtual directories, use a delimiter character in the blo
If you name your blobs using a delimiter, then you can choose to list blobs hierarchically. For a hierarchical listing operation, Azure Storage returns any virtual directories and blobs beneath the parent object. You can call the listing operation recursively to traverse the hierarchy, similar to how you would traverse a classic file system programmatically.
-If you've enabled the hierarchical namespace feature on your account, directories are not virtual. Instead, they are concrete, independent objects. Therefore, directories appear in the list as zero-length blobs.
- ## Use a flat listing By default, a listing operation returns blobs in a flat listing. In a flat listing, blobs are not organized by virtual directory.
Flat listing: 5: folder2/sub1/c
Flat listing: 6: folder2/sub1/d ```
+> [!NOTE]
+> The sample output shown assumes that you have a storage account with a flat namespace. If you've enabled the hierarchical namespace feature for your storage account, directories are not virtual. Instead, they are concrete, independent objects. As a result, directories appear in the list as zero-length blobs.<br/><br/>For an alternative listing option when working with a hierarchical namespace, see [List directory contents (Azure Data Lake Storage Gen2)](data-lake-storage-directory-file-acl-javascript.md#list-directory-contents).
+ ## Use a hierarchical listing When you call a listing operation hierarchically, Azure Storage returns the virtual directories and blobs at the first level of the hierarchy.
storage Storage Blobs List Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blobs-list-python.md
description: Learn how to list blobs in your storage account using the Azure Sto
-+ Previously updated : 08/02/2023 Last updated : 08/16/2023 ms.devlang: python
To list the blobs in a container using a hierarchical listing, call the followin
- [ContainerClient.walk_blobs](/python/api/azure-storage-blob/azure.storage.blob.containerclient#azure-storage-blob-containerclient-walk-blobs) (along with the name, you can optionally include metadata, tags, and other information associated with each blob)
+### Filter results with a prefix
+
+To filter the list of blobs, specify a string for the `name_starts_with` keyword argument. The prefix string can include one or more characters. Azure Storage then returns only the blobs whose names start with that prefix.
+ ### Flat listing versus hierarchical listing Blobs in Azure Storage are organized in a flat paradigm, rather than a hierarchical paradigm (like a classic file system). However, you can organize blobs into *virtual directories* in order to mimic a folder structure. A virtual directory forms part of the name of the blob and is indicated by the delimiter character.
To organize blobs into virtual directories, use a delimiter character in the blo
If you name your blobs using a delimiter, then you can choose to list blobs hierarchically. For a hierarchical listing operation, Azure Storage returns any virtual directories and blobs beneath the parent object. You can call the listing operation recursively to traverse the hierarchy, similar to how you would traverse a classic file system programmatically.
-If you've enabled the hierarchical namespace feature on your account, directories aren't virtual. Instead, they're concrete, independent objects. Therefore, directories appear in the list as zero-length blobs.
- ## Use a flat listing By default, a listing operation returns blobs in a flat listing. In a flat listing, blobs aren't organized by virtual directory.
Name: folderA/file2.txt
Name: folderA/folderB/file3.txt ```
-You can also specify options to filter list results or show additional information. The following example lists blobs with a specified prefix, and also lists blob tags:
+You can also specify options to filter list results or show additional information. The following example lists blobs and blob tags:
:::code language="python" source="~/azure-storage-snippets/blobs/howto/python/blob-devguide-py/blob-devguide-blobs.py" id="Snippet_list_blobs_flat_options":::
Name: folderA/file2.txt, Tags: None
Name: folderA/folderB/file3.txt, Tags: {'tag1': 'value1', 'tag2': 'value2'} ```
+> [!NOTE]
+> The sample output shown assumes that you have a storage account with a flat namespace. If you've enabled the hierarchical namespace feature for your storage account, directories are not virtual. Instead, they are concrete, independent objects. As a result, directories appear in the list as zero-length blobs.<br/><br/>For an alternative listing option when working with a hierarchical namespace, see [List directory contents (Azure Data Lake Storage Gen2)](data-lake-storage-directory-file-acl-python.md#list-directory-contents).
+ ## Use a hierarchical listing When you call a listing operation hierarchically, Azure Storage returns the virtual directories and blobs at the first level of the hierarchy.
storage Storage Blobs List Typescript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blobs-list-typescript.md
-+ Previously updated : 03/21/2023 Last updated : 08/16/2023 ms.devlang: typescript
Related functionality can be found in the following methods:
### Manage how many results are returned
-By default, a listing operation returns up to 5000 results at a time, but you can specify the number of results that you want each listing operation to return. The examples presented in this article show you how to return results in pages.
+By default, a listing operation returns up to 5000 results at a time, but you can specify the number of results that you want each listing operation to return. The examples presented in this article show you how to return results in pages. To learn more about pagination concepts, see [Pagination with the Azure SDK for JavaScript](/azure/developer/javascript/core/use-azure-sdk#asynchronous-paging-of-results).
### Filter results with a prefix
-To filter the list of blobs, specify a string for the `prefix` property in the [list options](/javascript/api/@azure/storage-blob/containerlistblobsoptions). The prefix string can include one or more characters. Azure Storage then returns only the blobs whose names start with that prefix.
+To filter the list of blobs, specify a string for the `prefix` property in [ContainerListBlobsOptions](/javascript/api/@azure/storage-blob/containerlistblobsoptions). The prefix string can include one or more characters. Azure Storage then returns only the blobs whose names start with that prefix.
```typescript const listOptions: ContainerListBlobsOptions = {
To organize blobs into virtual directories, use a delimiter character in the blo
If you name your blobs using a delimiter, then you can choose to list blobs hierarchically. For a hierarchical listing operation, Azure Storage returns any virtual directories and blobs beneath the parent object. You can call the listing operation recursively to traverse the hierarchy, similar to how you would traverse a classic file system programmatically.
-If you've enabled the hierarchical namespace feature on your account, directories are not virtual. Instead, they are concrete, independent objects. Therefore, directories appear in the list as zero-length blobs.
- ## Use a flat listing By default, a listing operation returns blobs in a flat listing. In a flat listing, blobs are not organized by virtual directory.
Flat listing: 5: folder2/sub1/c
Flat listing: 6: folder2/sub1/d ```
+> [!NOTE]
+> The sample output shown assumes that you have a storage account with a flat namespace. If you've enabled the hierarchical namespace feature for your storage account, directories are not virtual. Instead, they are concrete, independent objects. As a result, directories appear in the list as zero-length blobs.<br /><br />For an alternative listing option when working with a hierarchical namespace, see [List directory contents (Azure Data Lake Storage Gen2)](data-lake-storage-directory-file-acl-javascript.md#list-directory-contents).
+ ## Use a hierarchical listing When you call a listing operation hierarchically, Azure Storage returns the virtual directories and blobs at the first level of the hierarchy.
storage Storage Blobs List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blobs-list.md
-+ Previously updated : 02/14/2023 Last updated : 08/16/2023 ms.devlang: csharp
To list the blobs in a storage account, call one of these methods:
### Manage how many results are returned
-By default, a listing operation returns up to 5000 results at a time, but you can specify the number of results that you want each listing operation to return. The examples presented in this article show you how to return results in pages.
+By default, a listing operation returns up to 5000 results at a time, but you can specify the number of results that you want each listing operation to return. The examples presented in this article show you how to return results in pages. To learn more about pagination concepts, see [Pagination with the Azure SDK for .NET](/dotnet/azure/sdk/pagination).
### Filter results with a prefix
By default, a listing operation returns blobs in a flat listing. In a flat listi
The following example lists the blobs in the specified container using a flat listing, with an optional segment size specified, and writes the blob name to a console window.
-If you've enabled the hierarchical namespace feature on your account, directories are not virtual. Instead, they are concrete, independent objects. Therefore, directories appear in the list as zero-length blobs.
- :::code language="csharp" source="~/azure-storage-snippets/blobs/howto/dotnet/dotnet-v12/CRUD.cs" id="Snippet_ListBlobsFlatListing"::: The sample output is similar to:
Blob name: FolderA/FolderB/FolderC/blob2.txt
Blob name: FolderA/FolderB/FolderC/blob3.txt ```
+> [!NOTE]
+> The sample output shown assumes that you have a storage account with a flat namespace. If you've enabled the hierarchical namespace feature for your storage account, directories are not virtual. Instead, they are concrete, independent objects. As a result, directories appear in the list as zero-length blobs.<br /><br />For an alternative listing option when working with a hierarchical namespace, see [List directory contents (Azure Data Lake Storage Gen2)](data-lake-storage-directory-file-acl-dotnet.md#list-directory-contents).
+ ## Use a hierarchical listing When you call a listing operation hierarchically, Azure Storage returns the virtual directories and blobs at the first level of the hierarchy.
storage Storage Blobs Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blobs-overview.md
Title: About Blob (object) storage
description: Azure Blob storage stores massive amounts of unstructured object data, such as text or binary data. Blob storage also supports Azure Data Lake Storage Gen2 for big data analytics. -+ Last updated 11/04/2019-+ # What is Azure Blob storage?
storage Storage Blobs Tune Upload Download Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blobs-tune-upload-download-python.md
description: Learn how to tune your uploads and downloads for better performance
-+ Last updated 07/07/2023 ms.devlang: python
storage Storage Blobs Tune Upload Download https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blobs-tune-upload-download.md
description: Learn how to tune your uploads and downloads for better performance
-+ Last updated 12/09/2022 ms.devlang: csharp
storage Storage Create Geo Redundant Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-create-geo-redundant-storage.md
description: Use read-access geo-zone-redundant (RA-GZRS) storage to make your a
-+ Last updated 09/02/2022
storage Storage Encrypt Decrypt Blobs Key Vault https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-encrypt-decrypt-blobs-key-vault.md
Title: Encrypt and decrypt blobs using Azure Key Vault
description: Learn how to encrypt and decrypt a blob using client-side encryption with Azure Key Vault. -+ Last updated 11/2/2022
storage Storage Performance Checklist https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-performance-checklist.md
Title: Performance and scalability checklist for Blob storage
description: A checklist of proven practices for use with Blob storage in developing high-performance applications. -+ Last updated 06/01/2023-+ ms.devlang: csharp
storage Storage Quickstart Blobs Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-quickstart-blobs-dotnet.md
description: In this quickstart, you will learn how to use the Azure Blob Storag
Last updated 11/09/2022-+ ms.devlang: csharp
storage Storage Quickstart Blobs Go https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-quickstart-blobs-go.md
description: In this quickstart, you learn how to use the Azure Blob Storage client library for Go to create a container and a blob in Blob (object) storage. Next, you learn how to download the blob to your local computer, and how to list all of the blobs in a container. Previously updated : 02/13/2023- Last updated : 08/29/2023+ ms.devlang: golang
go get github.com/Azure/azure-sdk-for-go/sdk/azidentity
## Authenticate to Azure and authorize access to blob data
+Application requests to Azure Blob Storage must be authorized. Using `DefaultAzureCredential` and the Azure Identity client library is the recommended approach for implementing passwordless connections to Azure services in your code, including Blob Storage.
-`DefaultAzureCredential` is a class provided by the Azure Identity client library for Go. `DefaultAzureCredential` supports multiple authentication methods and determines which method to use at runtime. This approach enables your app to use different authentication methods in different environments (local vs. production) without implementing environment-specific code.
+You can also authorize requests to Azure Blob Storage by using the account access key. However, this approach should be used with caution. Developers must be diligent to never expose the access key in an insecure location. Anyone who has the access key can authorize requests against the storage account, and effectively has access to all the data. `DefaultAzureCredential` offers improved management and security benefits over the account key by enabling passwordless authentication. Both options are demonstrated in the following example.
+
+`DefaultAzureCredential` is a credential chain implementation provided by the Azure Identity client library for Go. `DefaultAzureCredential` supports multiple authentication methods and determines which method to use at runtime. This approach enables your app to use different authentication methods in different environments (local vs. production) without implementing environment-specific code.
To learn more about the order and locations in which `DefaultAzureCredential` looks for credentials, see [Azure Identity library overview](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/azidentity#DefaultAzureCredential).
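The quickstart's Go sample shows both options. As a hedged cross-SDK illustration, the same two approaches in Python (assuming `azure-identity` and `azure-storage-blob`, with placeholder account details) look like this:

```python
from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobServiceClient

account_url = "https://<storage-account-name>.blob.core.windows.net"

# Recommended: passwordless authentication via the DefaultAzureCredential chain
client = BlobServiceClient(account_url, credential=DefaultAzureCredential())

# Alternative: Shared Key authorization with the account access key.
# Anyone holding this key can access all data, so never expose it in code
# or configuration that is checked into source control.
client_with_key = BlobServiceClient(account_url, credential="<account-access-key>")
```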
storage Storage Quickstart Blobs Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-quickstart-blobs-java.md
Last updated 10/24/2022-+ ms.devlang: java
storage Storage Quickstart Blobs Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-quickstart-blobs-nodejs.md
description: In this quickstart, you learn how to use the Azure Blob Storage for
Last updated 10/28/2022-+ ms.devlang: javascript
storage Storage Quickstart Blobs Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-quickstart-blobs-python.md
Last updated 10/24/2022 -+ ms.devlang: python
storage Storage Retry Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-retry-policy.md
description: Learn about retry policies and how to implement them for Blob Storage. This article helps you set up a retry policy for Blob Storage requests using the Azure Storage client library for .NET. -+ Last updated 12/14/2022
storage Versions Manage Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/versions-manage-dotnet.md
Title: Create and list blob versions in .NET description: Learn how to use the .NET client library to create a previous version of a blob.-+ -+ Last updated 02/14/2023
storage Account Encryption Key Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/account-encryption-key-create.md
Title: Create an account that supports customer-managed keys for tables and queu
description: Learn how to create a storage account that supports configuring customer-managed keys for tables and queues. Use the Azure CLI or an Azure Resource Manager template to create a storage account that relies on the account encryption key for Azure Storage encryption. You can then configure customer-managed keys for the account. -+ Last updated 06/09/2021-+
storage Authorization Resource Provider https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/authorization-resource-provider.md
Title: Use the Azure Storage resource provider to access management resources description: The Azure Storage resource provider is a service that provides access to management resources for Azure Storage. You can use the Azure Storage resource provider to create, update, manage, and delete resources such as storage accounts, private endpoints, and account access keys. -+ Last updated 12/12/2019-+
storage Authorize Data Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/authorize-data-access.md
Title: Authorize operations for data access
description: Learn about the different ways to authorize access to data in Azure Storage. Azure Storage supports authorization with Azure Active Directory, Shared Key authorization, or shared access signatures (SAS), and also supports anonymous access to blobs. -+ Last updated 05/31/2023-+
storage Classic Account Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/classic-account-migrate.md
Title: How to migrate your classic storage accounts to Azure Resource Manager
description: Learn how to migrate your classic storage accounts to the Azure Resource Manager deployment model. All classic accounts must be migrated by August 31, 2024. -+ Last updated 05/02/2023-+
For more information about errors that may occur when deleting disk artifacts an
# [PowerShell](#tab/azure-powershell)
-To learn how to locate and delete disk artifacts in classic storage accounts with PowerShell, see [Migrate to Resource Manager with PowerShell](../../virtual-machines/migration-classic-resource-manager-ps.md#step-52-migrate-a-storage-account).
+To learn how to locate and delete disk artifacts in classic storage accounts with PowerShell, see [Migrate to Resource Manager with PowerShell](../../virtual-machines/migration-classic-resource-manager-ps.md#step-5b-migrate-a-storage-account).
storage Classic Account Migration Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/classic-account-migration-overview.md
Title: We're retiring classic storage accounts on August 31, 2024
description: Overview of migration of classic storage accounts to the Azure Resource Manager deployment model. All classic accounts must be migrated by August 31, 2024. -+ Last updated 07/26/2023-+
storage Classic Account Migration Process https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/classic-account-migration-process.md
Title: Understand storage account migration from classic to Azure Resource Manag
description: Learn about the process of migrating classic storage accounts to the Azure Resource Manager deployment model. All classic accounts must be migrated by August 31, 2024. -+ Last updated 04/28/2023-+
The Validation step analyzes the state of resources in the classic deployment mo
The Validation step doesn't check for VM disks that may be associated with the storage account. You must check your storage accounts manually to determine whether they contain VM disks. For more information, see the following articles: - [Migrate classic storage accounts to Azure Resource Manager](classic-account-migrate.md)-- [Migrate VMs to Resource Manager with PowerShell](../../virtual-machines/migration-classic-resource-manager-ps.md#step-52-migrate-a-storage-account)
+- [Migrate VMs to Resource Manager with PowerShell](../../virtual-machines/migration-classic-resource-manager-ps.md#step-5b-migrate-a-storage-account)
- [Migrate VMs to Resource Manager using Azure CLI](../../virtual-machines/migration-classic-resource-manager-cli.md#step-5-migrate-a-storage-account) Keep in mind that it's not possible to check for every constraint that the Azure Resource Manager stack might impose on the storage account during migration. Some constraints are only checked when the resources undergo transformation in the next step of migration (the Prepare step).
storage Customer Managed Keys Configure Cross Tenant Existing Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/customer-managed-keys-configure-cross-tenant-existing-account.md
Title: Configure cross-tenant customer-managed keys for an existing storage acco
description: Learn how to configure Azure Storage encryption with customer-managed keys in an Azure key vault that resides in a different tenant than the tenant where the storage account resides. Customer-managed keys allow a service provider to encrypt the customer's data using an encryption key that is managed by the service provider's customer and that isn't accessible to the service provider. -+ Last updated 10/31/2022-+
storage Customer Managed Keys Configure Cross Tenant New Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/customer-managed-keys-configure-cross-tenant-new-account.md
Title: Configure cross-tenant customer-managed keys for a new storage account
description: Learn how to configure Azure Storage encryption with customer-managed keys in an Azure key vault that resides in a different tenant than the tenant where the storage account will be created. Customer-managed keys allow a service provider to encrypt the customer's data using an encryption key that is managed by the service provider's customer and that isn't accessible to the service provider. -+ Last updated 10/31/2022-+
storage Customer Managed Keys Configure Existing Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/customer-managed-keys-configure-existing-account.md
Title: Configure customer-managed keys in the same tenant for an existing storag
description: Learn how to configure Azure Storage encryption with customer-managed keys for an existing storage account by using the Azure portal, PowerShell, or Azure CLI. Customer-managed keys are stored in an Azure key vault. -+ Last updated 06/07/2023-+
storage Customer Managed Keys Configure Key Vault Hsm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/customer-managed-keys-configure-key-vault-hsm.md
Title: Configure encryption with customer-managed keys stored in Azure Key Vault
description: Learn how to configure Azure Storage encryption with customer-managed keys stored in Azure Key Vault Managed HSM by using Azure CLI. -+ Last updated 05/05/2022-+
storage Customer Managed Keys Configure New Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/customer-managed-keys-configure-new-account.md
Title: Configure customer-managed keys in the same tenant for a new storage acco
description: Learn how to configure Azure Storage encryption with customer-managed keys for a new storage account by using the Azure portal, PowerShell, or Azure CLI. Customer-managed keys are stored in an Azure key vault. -+ Last updated 03/23/2023-+
storage Customer Managed Keys Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/customer-managed-keys-overview.md
Title: Customer-managed keys for account encryption
description: You can use your own encryption key to protect the data in your storage account. When you specify a customer-managed key, that key is used to protect and control access to the key that encrypts your data. Customer-managed keys offer greater flexibility to manage access controls. -+ Last updated 05/11/2023 -+
storage Infrastructure Encryption Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/infrastructure-encryption-enable.md
Title: Enable infrastructure encryption for double encryption of data
description: Customers who require higher levels of assurance that their data is secure can also enable 256-bit AES encryption at the Azure Storage infrastructure level. When infrastructure encryption is enabled, data in a storage account or encryption scope is encrypted twice with two different encryption algorithms and two different keys. -+ Last updated 10/19/2022 -+
storage Lock Account Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/lock-account-resource.md
Title: Apply an Azure Resource Manager lock to a storage account
description: Learn how to apply an Azure Resource Manager lock to a storage account. -+ Last updated 03/09/2021-+
storage Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/policy-reference.md
Title: Built-in policy definitions for Azure Storage description: Lists Azure Policy built-in policy definitions for Azure Storage. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2023 Last updated : 08/30/2023 --++
storage Redundancy Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/redundancy-migration.md
Previously updated : 07/14/2023 Last updated : 08/18/2023
The following table provides an overview of redundancy options available for sto
| Premium file shares | &#x2705; | &#x2705; | | &#x2705; <sup>1</sup> | &#x2705; |
| Premium block blob | &#x2705; | &#x2705; | | | &#x2705; |
| Premium page blob | &#x2705; | | | | |
-| Managed disks<sup>2</sup> | &#x2705; | | | | |
+| Managed disks<sup>2</sup> | &#x2705; | &#x2705; | &#x2705; | | &#x2705; |
| Standard general purpose v1 | &#x2705; | | <sup>3</sup> | | &#x2705; |
| ZRS Classic<sup>4</sup><br /><sub>(available in standard general purpose v1 accounts)</sub> | &#x2705; | | | | |

<sup>1</sup> Conversion for premium file shares is only available by [opening a support request](#support-requested-conversion); [Customer-initiated conversion](#customer-initiated-conversion) is not currently supported.<br />
-<sup>2</sup> Managed disks are only available for LRS and cannot be migrated to ZRS. You can store snapshots and images for standard SSD managed disks on standard HDD storage and [choose between LRS and ZRS options](https://azure.microsoft.com/pricing/details/managed-disks/). For information about integration with availability sets, see [Introduction to Azure managed disks](../../virtual-machines/managed-disks-overview.md#integration-with-availability-sets).<br />
+<sup>2</sup> Managed disks are available for LRS and ZRS, though ZRS disks have some [limitations](../../virtual-machines/disks-redundancy.md#limitations). If an LRS disk is regional (no zone specified), it can be converted by [changing the SKU](../../virtual-machines/disks-convert-types.md). If an LRS disk is zonal, it can only be migrated manually by following the process in [Migrate your managed disks](../../reliability/migrate-vm.md#migrate-your-managed-disks). You can store snapshots and images for standard SSD managed disks on standard HDD storage and [choose between LRS and ZRS options](https://azure.microsoft.com/pricing/details/managed-disks/). For information about integration with availability sets, see [Introduction to Azure managed disks](../../virtual-machines/managed-disks-overview.md#integration-with-availability-sets).<br />
<sup>3</sup> If your storage account is v1, you'll need to upgrade it to v2 before performing a conversion. To learn how to upgrade your v1 account, see [Upgrade to a general-purpose v2 storage account](storage-account-upgrade.md).<br /> <sup>4</sup> ZRS Classic storage accounts have been deprecated. For information about converting ZRS Classic accounts, see [Converting ZRS Classic accounts](#converting-zrs-classic-accounts).<br />
storage Resource Graph Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/resource-graph-samples.md
Title: Azure Resource Graph sample queries description: Sample Azure Resource Graph queries for Azure Storage showing use of resource types and tables to access Azure Storage related resources and properties.-+ Last updated 07/07/2022 -+
storage Sas Expiration Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/sas-expiration-policy.md
Title: Configure an expiration policy for shared access signatures (SAS)
description: Configure a policy on the storage account that defines the length of time that a shared access signature (SAS) should be valid. Learn how to monitor policy violations to remediate security risks. -+ Last updated 12/12/2022-+
storage Scalability Targets Resource Provider https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/scalability-targets-resource-provider.md
Title: Scalability targets for the Azure Storage resource provider description: Scalability and performance targets for operations against the Azure Storage resource provider. The resource provider implements Azure Resource Manager for Azure Storage. -+ Last updated 12/18/2019-+
storage Scalability Targets Standard Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/scalability-targets-standard-account.md
Title: Scalability and performance targets for standard storage accounts
description: Learn about scalability and performance targets for standard storage accounts. -+ Last updated 05/25/2022-+
storage Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Storage description: Lists Azure Policy Regulatory Compliance controls available for Azure Storage. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/03/2023 Last updated : 08/25/2023 --++
storage Shared Key Authorization Prevent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/shared-key-authorization-prevent.md
Title: Prevent authorization with Shared Key
description: To require clients to use Azure AD to authorize requests, you can disallow requests to the storage account that are authorized with Shared Key. -+ Last updated 06/06/2023-+ ms.devlang: azurecli
storage Storage Account Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-account-create.md
Title: Create a storage account
description: Learn to create a storage account to store blobs, files, queues, and tables. An Azure storage account provides a unique namespace in Microsoft Azure for reading and writing your data. -+ Previously updated : 05/02/2023- Last updated : 08/18/2023+ -+ # Create a storage account
A storage account is an Azure Resource Manager resource. Resource Manager is the
Every Resource Manager resource, including an Azure storage account, must belong to an Azure resource group. A resource group is a logical container for grouping your Azure services. When you create a storage account, you have the option to either create a new resource group, or use an existing resource group. This how-to shows how to create a new resource group.
+### Storage account type parameters
+
+When you create a storage account using PowerShell, the Azure CLI, Bicep, or Azure Templates, the storage account type is specified by the `kind` parameter (for example, `StorageV2`). The performance tier and redundancy configuration are specified together by the `sku` or `SkuName` parameter (for example, `Standard_GRS`). The following table shows which values to use for the `kind` parameter and the `sku` or `SkuName` parameter to create a particular type of storage account with the desired redundancy configuration.
+
+| Type of storage account | Supported redundancy configurations | Supported values for the kind parameter | Supported values for the sku or SkuName parameter | Supports hierarchical namespace |
+|--|--|--|--|--|
+| Standard general-purpose v2 | LRS / GRS / RA-GRS / ZRS / GZRS / RA-GZRS | StorageV2 | Standard_LRS / Standard_GRS / Standard_RAGRS / Standard_ZRS / Standard_GZRS / Standard_RAGZRS | Yes |
+| Premium block blobs | LRS / ZRS | BlockBlobStorage | Premium_LRS / Premium_ZRS | Yes |
+| Premium file shares | LRS / ZRS | FileStorage | Premium_LRS / Premium_ZRS | No |
+| Premium page blobs | LRS | StorageV2 | Premium_LRS | No |
+| Legacy standard general-purpose v1 | LRS / GRS / RA-GRS | Storage | Standard_LRS / Standard_GRS / Standard_RAGRS | No |
+| Legacy blob storage | LRS / GRS / RA-GRS | BlobStorage | Standard_LRS / Standard_GRS / Standard_RAGRS | No |
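As a hedged sketch of how the `kind` and `sku` values from this table are used programmatically, here's an example with the `azure-mgmt-storage` Python SDK; the subscription, resource group, account name, and region are placeholders:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient

client = StorageManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Create a standard general-purpose v2 account with geo-redundant storage
poller = client.storage_accounts.begin_create(
    "<resource-group>",
    "<storageaccountname>",
    {
        "location": "eastus",
        "kind": "StorageV2",              # storage account type
        "sku": {"name": "Standard_GRS"},  # performance tier and redundancy
        "is_hns_enabled": False,          # set True to enable a hierarchical namespace
    },
)
account = poller.result()
print(account.provisioning_state)
```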
+ # [Portal](#tab/azure-portal) To create an Azure storage account with the Azure portal, follow these steps:
To enable a hierarchical namespace for the storage account to use [Azure Data La
The following table shows which values to use for the `SkuName` and `Kind` parameters to create a particular type of storage account with the desired redundancy configuration.
-| Type of storage account | Supported redundancy configurations | Supported values for the Kind parameter | Supported values for the SkuName parameter | Supports hierarchical namespace |
-|--|--|--|--|--|
-| Standard general-purpose v2 | LRS / GRS / RA-GRS / ZRS / GZRS / RA-GZRS | StorageV2 | Standard_LRS / Standard_GRS / Standard_RAGRS/ Standard_ZRS / Standard_GZRS / Standard_RAGZRS | Yes |
-| Premium block blobs | LRS / ZRS | BlockBlobStorage | Premium_LRS / Premium_ZRS | Yes |
-| Premium file shares | LRS / ZRS | FileStorage | Premium_LRS / Premium_ZRS | No |
-| Premium page blobs | LRS | StorageV2 | Premium_LRS | No |
-| Legacy standard general-purpose v1 | LRS / GRS / RA-GRS | Storage | Standard_LRS / Standard_GRS / Standard_RAGRS | No |
-| Legacy blob storage | LRS / GRS / RA-GRS | BlobStorage | Standard_LRS / Standard_GRS / Standard_RAGRS | No |
- # [Azure CLI](#tab/azure-cli) To create a general-purpose v2 storage account with Azure CLI, first create a new resource group by calling the [az group create](/cli/azure/group#az-group-create) command.
az storage account show \
To enable a hierarchical namespace for the storage account to use [Azure Data Lake Storage](https://azure.microsoft.com/services/storage/data-lake-storage/), set the `enable-hierarchical-namespace` parameter to `true` on the call to the **az storage account create** command. Creating a hierarchical namespace requires Azure CLI version 2.0.79 or later.
-The following table shows which values to use for the `sku` and `kind` parameters to create a particular type of storage account with the desired redundancy configuration.
-
-| Type of storage account | Supported redundancy configurations | Supported values for the kind parameter | Supported values for the sku parameter | Supports hierarchical namespace |
-|--|--|--|--|--|
-| Standard general-purpose v2 | LRS / GRS / RA-GRS / ZRS / GZRS / RA-GZRS | StorageV2 | Standard_LRS / Standard_GRS / Standard_RAGRS/ Standard_ZRS / Standard_GZRS / Standard_RAGZRS | Yes |
-| Premium block blobs | LRS / ZRS | BlockBlobStorage | Premium_LRS / Premium_ZRS | Yes |
-| Premium file shares | LRS / ZRS | FileStorage | Premium_LRS / Premium_ZRS | No |
-| Premium page blobs | LRS | StorageV2 | Premium_LRS | No |
-| Legacy standard general-purpose v1 | LRS / GRS / RA-GRS | Storage | Standard_LRS / Standard_GRS / Standard_RAGRS | No |
-| Legacy blob storage | LRS / GRS / RA-GRS | BlobStorage | Standard_LRS / Standard_GRS / Standard_RAGRS | No |
- # [Bicep](#tab/bicep) You can use either Azure PowerShell or Azure CLI to deploy a Bicep file to create a storage account. The Bicep file used in this how-to article is from [Azure Resource Manager quickstart templates](https://azure.microsoft.com/resources/templates/storage-account-create/). Bicep currently doesn't support deploying a remote file. Download and save [the Bicep file](https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.storage/storage-account-create/main.bicep) to your local computer, and then run the scripts.
az storage account delete --name storageAccountName --resource-group resourceGro
Alternately, you can delete the resource group, which deletes the storage account and any other resources in that resource group. For more information about deleting a resource group, see [Delete resource group and resources](../../azure-resource-manager/management/delete-resource-group.md).
+## Create a general purpose v1 storage account
++
+General-purpose v1 (GPv1) storage accounts can no longer be created from the Azure portal. If you need to create a GPv1 storage account, follow the steps in the [Create a storage account](#create-a-storage-account-1) section for PowerShell, the Azure CLI, Bicep, or Azure Templates. For the `kind` parameter, specify `Storage`, and choose a `sku` or `SkuName` from the [table of supported values](#storage-account-type-parameters).
+ ## Next steps - [Storage account overview](storage-account-overview.md)
Alternately, you can delete the resource group, which deletes the storage accoun
- [Move a storage account to another region](storage-account-move.md) - [Recover a deleted storage account](storage-account-recover.md) - [Migrate a classic storage account](classic-account-migrate.md)
-
-
storage Storage Account Get Info https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-account-get-info.md
Title: Get storage account configuration information
description: Use the Azure portal, PowerShell, or Azure CLI to retrieve storage account configuration properties, including the Azure Resource Manager resource ID, account location, account type, or replication SKU. -+ -+ Last updated 12/12/2022
storage Storage Account Keys Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-account-keys-manage.md
Title: Manage account access keys
description: Learn how to view, manage, and rotate your storage account access keys. -+ Last updated 03/22/2023-+
storage Storage Account Move https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-account-move.md
Title: Move an Azure Storage account to another region description: Shows you how to move an Azure Storage account to another region. -+ Last updated 06/15/2022-+
storage Storage Account Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-account-overview.md
Title: Storage account overview
description: Learn about the different types of storage accounts in Azure Storage. Review account naming, performance tiers, access tiers, redundancy, encryption, endpoints, and more. -+ Last updated 06/28/2022-+ # Storage account overview
storage Storage Account Recover https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-account-recover.md
Title: Recover a deleted storage account
description: Learn how to recover a deleted storage account within the Azure portal. -+ Last updated 01/25/2023-+
storage Storage Account Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-account-upgrade.md
Title: Upgrade to a general-purpose v2 storage account
description: Upgrade to general-purpose v2 storage accounts using the Azure portal, PowerShell, or the Azure CLI. Specify an access tier for blob data. -+ Previously updated : 04/29/2021-- Last updated : 08/17/2023++ # Upgrade to a general-purpose v2 storage account General-purpose v2 storage accounts support the latest Azure Storage features and incorporate all of the functionality of general-purpose v1 and Blob storage accounts. General-purpose v2 accounts are recommended for most storage scenarios. General-purpose v2 accounts deliver the lowest per-gigabyte capacity prices for Azure Storage, as well as industry-competitive transaction prices. General-purpose v2 accounts support default account access tiers of hot or cool and blob level tiering between hot, cool, or archive.
-Upgrading to a general-purpose v2 storage account from your general-purpose v1 or Blob storage accounts is straightforward. You can upgrade using the Azure portal, PowerShell, or Azure CLI. There is no downtime or risk of data loss associated with upgrading to a general-purpose v2 storage account. The account upgrade happens via a simple Azure Resource Manager operation that changes the account type.
+Upgrading to a general-purpose v2 storage account from your general-purpose v1 or Blob storage accounts is straightforward. You can upgrade using the Azure portal, PowerShell, or Azure CLI. There's no downtime or risk of data loss associated with upgrading to a general-purpose v2 storage account. The account upgrade happens via a simple Azure Resource Manager operation that changes the account type.
> [!IMPORTANT] > Upgrading a general-purpose v1 or Blob storage account to general-purpose v2 is permanent and cannot be undone.
-> [!NOTE]
-> Although Microsoft recommends general-purpose v2 accounts for most scenarios, Microsoft will continue to support general-purpose v1 accounts for new and existing customers. You can create general-purpose v1 storage accounts in new regions whenever Azure Storage is available in those regions. Microsoft does not currently have a plan to deprecate support for general-purpose v1 accounts and will provide at least one year's advance notice before deprecating any Azure Storage feature. Microsoft will continue to provide security updates for general-purpose v1 accounts, but no new feature development is expected for this account type.
->
-> For new Azure regions that have come online after October 1, 2020, pricing for general-purpose v1 accounts has changed and is equivalent to pricing for general-purpose v2 accounts in those regions. Pricing for general-purpose v1 accounts in Azure regions that existed prior to October 1, 2020 has not changed. For pricing details for general-purpose v1 accounts in a specific region, see the Azure Storage pricing page. Choose your region, and then next to **Pricing offers**, select **Other**.
## Upgrade an account
az storage account update -g <resource-group> -n <storage-account> --set kind=St
## Specify an access tier for blob data
-General-purpose v2 accounts support all Azure storage services and data objects, but access tiers are available only to block blobs within Blob storage. When you upgrade to a general-purpose v2 storage account, you can specify a default account access tier of hot or cool, which indicates the default tier your blob data will be uploaded as if the individual blob access tier parameter is not specified.
+General-purpose v2 accounts support all Azure storage services and data objects, but access tiers are available only to block blobs within Blob storage. When you upgrade to a general-purpose v2 storage account, you can specify a default account access tier of hot or cool, which indicates the default tier that your blob data is uploaded to if the individual blob access tier parameter isn't specified.
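As a hedged sketch of performing the upgrade with an explicit default access tier, using the `azure-mgmt-storage` Python SDK (subscription, resource group, and account name are placeholders):

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient

client = StorageManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Upgrade a general-purpose v1 account to v2 and set the default access tier
account = client.storage_accounts.update(
    "<resource-group>",
    "<storageaccountname>",
    {"kind": "StorageV2", "access_tier": "Hot"},
)
print(account.kind, account.access_tier)
```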
Blob access tiers enable you to choose the most cost-effective storage based on your anticipated usage patterns. Block blobs can be stored in a hot, cool, or archive tiers. For more information on access tiers, see [Azure Blob storage: Hot, Cool, and Archive storage tiers](../blobs/access-tiers-overview.md).
-By default, a new storage account is created in the hot access tier, and a general-purpose v1 storage account can be upgraded to either the hot or cool account tier. If an account access tier is not specified on upgrade, it will be upgraded to hot by default. If you are exploring which access tier to use for your upgrade, consider your current data usage scenario. There are two typical user scenarios for migrating to a general-purpose v2 account:
+By default, a new storage account is created in the hot access tier, and a general-purpose v1 storage account can be upgraded to either the hot or cool account tier. If an account access tier isn't specified on upgrade, it will be upgraded to hot by default. If you're exploring which access tier to use for your upgrade, consider your current data usage scenario. There are two typical user scenarios for migrating to a general-purpose v2 account:
- You have an existing general-purpose v1 storage account and want to evaluate an upgrade to a general-purpose v2 storage account, with the right storage access tier for blob data. - You have decided to use a general-purpose v2 storage account or already have one and want to evaluate whether you should use the hot or cool storage access tier for blob data.
In both cases, the first priority is to estimate the cost of storing, accessing,
## Pricing and billing
-Upgrading a v1 storage account to a general-purpose v2 account is free. You may specify the desired account tier during the upgrade process. If an account tier is not specified on upgrade, the default account tier of the upgraded account will be `Hot`. However, changing the storage access tier after the upgrade may result in changes to your bill so it is recommended to specify the new account tier during upgrade.
+Upgrading a v1 storage account to a general-purpose v2 account is free. You may specify the desired account tier during the upgrade process. If an account tier isn't specified on upgrade, the default account tier of the upgraded account will be `Hot`. However, changing the storage access tier after the upgrade may result in changes to your bill so it's recommended to specify the new account tier during upgrade.
All storage accounts use a pricing model for blob storage based on the tier of each blob. When using a storage account, the following billing considerations apply: - **Storage costs**: In addition to the amount of data stored, the cost of storing data varies depending on the storage access tier. The per-gigabyte cost decreases as the tier gets cooler. -- **Data access costs**: Data access charges increase as the tier gets cooler. For data in the cool and archive storage access tier, you are charged a per-gigabyte data access charge for reads.
+- **Data access costs**: Data access charges increase as the tier gets cooler. For data in the cool and archive storage access tier, you're charged a per-gigabyte data access charge for reads.
-- **Transaction costs**: There is a per-transaction charge for all tiers that increases as the tier gets cooler.
+- **Transaction costs**: There's a per-transaction charge for all tiers that increases as the tier gets cooler.
- **Geo-Replication data transfer costs**: This charge only applies to accounts with geo-replication configured, including GRS and RA-GRS. Geo-replication data transfer incurs a per-gigabyte charge.
With this enabled, capacity data is recorded daily for a storage account's Blob
To monitor data access patterns for Blob storage, you need to enable the hourly transaction metrics from the API. With hourly transaction metrics enabled, per API transactions are aggregated every hour, and recorded as a table entry that is written to the *$MetricsHourPrimaryTransactionsBlob* table within the same storage account. The *$MetricsHourSecondaryTransactionsBlob* table records the transactions to the secondary endpoint when using RA-GRS storage accounts. > [!NOTE]
-> If you have a general-purpose storage account in which you have stored page blobs and virtual machine disks, or queues, files, or tables, alongside block and append blob data, this estimation process is not applicable. The capacity data does not differentiate block blobs from other types, and does not give capacity data for other data types. If you use these types, an alternative methodology is to look at the quantities on your most recent bill.
+> If you have a general-purpose storage account in which you have stored page blobs and virtual machine disks, or queues, files, or tables, alongside block and append blob data, this estimation process isn't applicable. The capacity data doesn't differentiate block blobs from other types, and doesn't give capacity data for other data types. If you use these types, an alternative methodology is to look at the quantities on your most recent bill.
To get a good approximation of your data consumption and access pattern, we recommend you choose a retention period for the metrics that is representative of your regular usage and extrapolate. One option is to retain the metrics data for seven days and collect the data every week, for analysis at the end of the month. Another option is to retain the metrics data for the last 30 days and collect and analyze the data at the end of the 30-day period.
This total capacity consumed by both user data and analytics logs (if enabled) c
The sum of *'TotalBillableRequests'*, across all entries for an API in the transaction metrics table indicates the total number of transactions for that particular API. *For example*, the total number of *'GetBlob'* transactions in a given period can be calculated by the sum of total billable requests for all entries with the row key *'user;GetBlob'*.
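As a hedged illustration of that aggregation, assuming the `azure-data-tables` package can query the analytics metrics tables with a Shared Key connection string (the connection string is a placeholder):

```python
from azure.data.tables import TableServiceClient

service = TableServiceClient.from_connection_string("<connection-string>")
metrics = service.get_table_client("$MetricsHourPrimaryTransactionsBlob")

# Sum billable requests across all entries for a single API, for example GetBlob
total = sum(
    int(entity["TotalBillableRequests"])
    for entity in metrics.query_entities("RowKey eq 'user;GetBlob'")
)
print(f"Total GetBlob transactions: {total}")
```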
-In order to estimate transaction costs for Blob storage accounts, you need to break down the transactions into three groups since they are priced differently.
+In order to estimate transaction costs for Blob storage accounts, you need to break down the transactions into three groups since they're priced differently.
- Write transactions such as *'PutBlob'*, *'PutBlock'*, *'PutBlockList'*, *'AppendBlock'*, *'ListBlobs'*, *'ListContainers'*, *'CreateContainer'*, *'SnapshotBlob'*, and *'CopyBlob'*. - Delete transactions such as *'DeleteBlob'* and *'DeleteContainer'*.
In order to estimate transaction costs for GPv1 storage accounts, you need to ag
#### Data access and geo-replication data transfer costs
-While storage analytics does not provide the amount of data read from and written to a storage account, it can be roughly estimated by looking at the transaction metrics table. The sum of *'TotalIngress'* across all entries for an API in the transaction metrics table indicates the total amount of ingress data in bytes for that particular API. Similarly the sum of *'TotalEgress'* indicates the total amount of egress data, in bytes.
+While storage analytics doesn't provide the amount of data read from and written to a storage account, it can be roughly estimated by looking at the transaction metrics table. The sum of *'TotalIngress'* across all entries for an API in the transaction metrics table indicates the total amount of ingress data in bytes for that particular API. Similarly the sum of *'TotalEgress'* indicates the total amount of egress data, in bytes.
In order to estimate the data access costs for Blob storage accounts, you need to break down the transactions into two groups.
storage Storage Configure Connection String https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-configure-connection-string.md
Title: Configure a connection string
description: Configure a connection string for an Azure storage account. A connection string contains the information needed to authorize access to a storage account from your application at runtime using Shared Key authorization. -+ Last updated 01/24/2023-+
storage Storage Encryption Key Model Get https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-encryption-key-model-get.md
Title: Determine which encryption key model is in use for the storage account
description: Use Azure portal, PowerShell, or Azure CLI to check how encryption keys are being managed for the storage account. Keys may be managed by Microsoft (the default), or by the customer. Customer-managed keys must be stored in Azure Key Vault. -+ Last updated 03/13/2020-+
storage Storage Explorers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-explorers.md
Title: Microsoft client tools for working with Azure Storage description: A list of client tools provided by Microsoft that enable you to view and interact with your Azure Storage data. -+ Last updated 09/27/2019-+
storage Storage Failover Private Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-failover-private-endpoints.md
+
+ Title: Failover considerations for storage accounts with private endpoints
+
+description: Learn how to architect highly available storage accounts using Private Endpoints
+++++ Last updated : 05/07/2021+++
+# Failover considerations for storage accounts with private endpoints
+
+Storage accounts work differently than many other Azure services when it comes to high availability configurations. They don't often use a secondary instance deployed by the customer for resiliency. Instead, storage accounts configured to be [geo-redundant](./storage-account-overview.md#types-of-storage-accounts) replicate to another region, based on [regional pairs](/azure/reliability/cross-region-replication-azure). When necessary, the storage account can fail over to this replicated copy, and operate in the secondary region.
+
+This feature means that customers don't need to plan to have a second storage account already running in their secondary region. You could have multiple storage accounts and use customer-managed operations to move data between them, but that's an uncommon pattern.
+
+When a storage account is failed over, the name of the service itself doesn't change. If you're using the public endpoint for ingress, then systems can use the same DNS resolution to access the service regardless of its failover state.
+
+DNS resolution works when the storage account and the systems accessing it are failed over. It also works when just one set of services has been failed over. This resilience reduces the number of business continuity and disaster recovery (BCDR) tasks needed for the storage account.
+
+If you're using [private endpoints](../../private-link/private-endpoint-dns.md), more configuration is needed to support this feature. This article provides an example architecture of a geo-replicated storage account using private endpoints for secure networking, and what is needed for each BCDR scenario.
+
+> [!NOTE]
+> Not all storage account types support geo-redundant storage (GRS) or read-access geo-redundant storage (RA-GRS). For example, data lakes deployed with premium block blob can only be locally redundant or zone redundant in a single region. Review [Azure Storage redundancy](./storage-redundancy.md) to make sure your scenario is supported.
+
+## Example architecture
+
+This architecture uses a primary and secondary region that can be used to handle active/active or failover scenarios. Each region has a hub network for shared network infrastructure. Each region also has a spoke where the storage account and other workload solutions are deployed.
+
+The geo-redundant storage account is deployed in the primary region, but has private endpoints for its blob endpoint in both regions.
+
+[ ![Diagram of PE environment.](./media/storage-failover-private-endpoints/storage-failover-private-endpoints-topology.png) ](./media/storage-failover-private-endpoints/storage-failover-private-endpoints-topology.png#lightbox)
+
+The two private endpoints can't use the same Private DNS Zone for the same endpoint. As a result, each region uses its own Private DNS Zone. Each regional zone is attached to the hub network for the region. This design uses the [DNS forwarder scenario](../../private-link/private-endpoint-dns.md#virtual-network-and-on-premises-workloads-using-a-dns-forwarder) to provide resolution.
+
+As a result, no matter which region a VM connects from, a local private endpoint is available that can reach the storage blob, whichever region the storage account is currently operating in.
+
+For connections from a data center, a VPN connection would be made to the hub network in the region. For DNS resolution, however, each data center would configure its conditional forwarders to point to only one of the two DNS resolver sets, ensuring that names resolve to the closest network location.
+
+### Architecture concepts
+
+This architecture uses functionality of private endpoints that may not be commonly encountered when doing single region deployments.
+
+First, an individual service can have multiple private endpoints attached to it. For example, a storage account could have a private endpoint for its blob containers located in multiple different virtual networks, and each one functions independently.
+
+However, this pattern isn't used often in hub and spoke scenarios, because a Private DNS Zone can hold only one record for a given private endpoint. After you register the first private endpoint in a Private DNS Zone, additional private endpoints need to use other zones.
+
+In addition, the private endpoints aren't required to be in the same region as the resource they're connecting to. A storage account in East US 2 can have a private endpoint deployed in Central US, to give one example.
+
+So long as there's an alternate Private DNS Zone for that region, resources in the second region can resolve and interact with the storage account.
+
+It's common to use private endpoints located in the same region to reduce costs. But when considering failover, this functionality can allow regional private networking to work despite failures in one region.
+
+### Cross-region traffic costs
+
+There are costs associated with having private endpoints in multiple regions. First, there's a cost per private endpoint. The above design would have two endpoints, and so would be charged twice. In addition, there's a cost for sending the traffic between regions. For more information about private endpoint costs, see [Azure Private Link pricing](https://azure.microsoft.com/pricing/details/private-link/).
+
+Global virtual network peering is a service that connects virtual networks in multiple regions. It also has a data transfer cost between regions. This cost depends on the zone your networks are in. For more information about network costs, see [Virtual Network pricing](https://azure.microsoft.com/pricing/details/virtual-network).
+
+Global peering can be used to enable services to communicate to each other during a service failure in a region. However, it supports fewer scenarios and may have more manual activities involved in activating a failover. An organization should review the costs of operating in a highly available or resilient architecture, and compare that to the risks of longer durations to restore service.
+
+## Failover scenarios
+
+This topology supports the following scenarios, and each scenario has its own considerations for DNS failover.
+
+| Scenario | Description | DNS Considerations |
+||||
+| [Scenario 1 - Storage Account Failover](#scenario-1storage-account-failover) | A service interruption to the storage account hosted in the primary region requires it to be failed over to a secondary region. | No changes required. |
+| [Scenario 2 - Other Services Failover](#scenario-2other-services-failover) | A service interruption to services in the primary region requires them to be failed over to a secondary region. The storage account doesn't need to be failed over. | If the outage impacts the DNS servers hosted in the primary region, then the conditional forwarders from on-premises need to be updated to the secondary region. |
+| [Scenario 3 - Whole Region Outage](#scenario-3whole-region-outage) | A service interruption to multiple services in a region requires both the storage account and other services to be failed over. | Conditional forwarders from on-premises DNS need to be updated to the secondary region. |
+| [Scenario 4 - Running in High-Availability](#scenario-4running-in-high-availability) | The services and storage accounts are working in Azure in an active/active configuration. | If a region's DNS or storage account is impacted, conditional forwarders on-premises need to be updated to operational regions. |
+
+### Scenario 1 - storage account failover
+
+In this scenario, an issue with the storage account requires it to be failed over to the secondary regions. With storage accounts that are zone redundant and geo-redundant, these outages are uncommon, but should still be planned for.
+
+When the storage account is failed over to the paired secondary region, network routing stays the same. No changes to DNS are needed - each region can continue to use its local endpoint to communicate with the storage account.
+
+Connections from an on-premises data center connected by VPN would continue to operate as well. Each endpoint can respond to connections routed to it, and both hub networks are able to resolve to a valid endpoint.
+
+After failover, the service will operate as illustrated:
+
+[ ![Diagram of topology with storage account failed over, showing communication via DNS and Private Endpoint.](./media/storage-failover-private-endpoints/storage-failover-private-endpoints-scenario-1.png) ](./media/storage-failover-private-endpoints/storage-failover-private-endpoints-scenario-1.png#lightbox)
+
+When the service is restored in the primary region, the storage account can be failed back.
+
+### Scenario 2 - other services failover
+
+In this scenario, there's an issue with the services that connect to the storage account. In our environment, these are virtual machines, but they could be application services or other services.
+
+These resources need to be failed over to the secondary region following their own process. For example, you might use Azure Site Recovery to replicate VMs before the outage, or deploy a new instance of your web app in the secondary region.
+
+Once the services are active in the secondary region, they can begin to connect to the storage account through its regional endpoint. No changes are needed for it to support connections.
+
+Connections from an on-premises data center connected by VPN would continue to operate as long as the service outage doesn't impact the DNS resolution services in the hub. If the hub is disabled, such as due to a VM service outage, then the conditional forwarders in the data center would need to be adjusted to point to the secondary region until the service is restored.
+
+After failover, the service will operate as illustrated:
+
+[ ![Diagram of topology with services in the secondary region accessing the storage account through its regional endpoint.](./media/storage-failover-private-endpoints/storage-failover-private-endpoints-scenario-2.png) ](./media/storage-failover-private-endpoints/storage-failover-private-endpoints-scenario-2.png#lightbox)
+
+When the service is restored in the primary region, the services can be failed back and on-premises DNS reset.
+
+> [!NOTE]
+> If you only need to connect to the storage account from on-premises for administrative tasks, a jump box in the secondary region could be used instead of updating DNS in the primary region. On-premises DNS needs to be updated only if systems on-premises require a direct connection to the storage account.
+
+### Scenario 3 - whole region outage
+
+In this scenario, there's a regional outage of sufficient scope that both the storage account and the other services need to be failed over.
+
+This failover operates like a combination of Scenario 1 and Scenario 2. The storage account is failed over, as are the Azure services. The primary region is effectively unable to operate, but the services can continue to operate in the secondary region until the service is restored.
+
+Similar to Scenario 2, if the primary hub can't serve DNS responses for its endpoint, or there are other networking outages, then the conditional forwarders on-premises should be updated to the secondary region.
+
+After failover, the service will operate as illustrated:
+
+[ ![Diagram of topology with services in the secondary region working.](./media/storage-failover-private-endpoints/storage-failover-private-endpoints-scenario-3.png) ](./media/storage-failover-private-endpoints/storage-failover-private-endpoints-scenario-3.png#lightbox)
+
+When the services are restored, resources can be failed back, and on-premises DNS can be reset back to its normal configuration.
+
+### Scenario 4 - running in high availability
+
+In this scenario, you have your workload running in an active/active mode. There are compute resources running in both the primary and secondary regions, and clients connect to either region based on load-balancing rules.
+
+In this configuration, the services in each region communicate with the storage account through their regional private endpoints. See [Azure network round-trip latency statistics](../../networking/azure-network-latency.md) to review latency between regions.
+
+If there's a regional outage, the load balancing front end should redirect all application traffic to the active region.
+
+For connectivity from on-premises data center locations, if the outage impacts a region's DNS or storage account, then conditional forwarders from the data center need to be set to regions that are still available. This change doesn't impact Azure services.
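+
+To monitor the health of each location during normal operation or an outage, you can query the account's replication status. A sketch with hypothetical names:
+
+```azurecli
+# Show the status of the primary and secondary locations for the account.
+az storage account show --name stcontoso --resource-group rg-storage \
+    --query "{primary: primaryLocation, primaryStatus: statusOfPrimary, secondary: secondaryLocation, secondaryStatus: statusOfSecondary}"
+```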
+
+While both regions are healthy, the service operates as illustrated:
+
+[ ![Diagram of topology with services in both regions working independently.](./media/storage-failover-private-endpoints/storage-failover-private-endpoints-scenario-4.png) ](./media/storage-failover-private-endpoints/storage-failover-private-endpoints-scenario-4.png#lightbox)
+
+## Next steps
+
+As part of planning for your storage account resiliency, you can review the following articles for more information:
+
+- [Well-Architected Framework Reliability Pillar](/azure/well-architected/resiliency/overview)
+- [Storage Account Overview](./storage-account-overview.md)
+- [Azure Storage redundancy](./storage-redundancy.md)
+- [Initiate Account Failover](./storage-initiate-account-failover.md)
+- [Cross region replication](/azure/reliability/cross-region-replication-azure)
+- [Private endpoint DNS](../../private-link/private-endpoint-dns.md)
+
storage Storage Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-introduction.md
Title: Introduction to Azure Storage - Cloud storage on Azure description: The Azure Storage platform is Microsoft's cloud storage solution. Azure Storage provides highly available, secure, durable, massively scalable, and redundant storage for data objects in the cloud. Learn about the services available in Azure Storage and how you can use them in your applications, services, or enterprise solutions. -+ Last updated 01/10/2023-+
storage Storage Network Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-network-security.md
Title: Configure Azure Storage firewalls and virtual networks
-description: Configure layered network security for your storage account by using Azure Storage firewalls and Azure Virtual Network.
+description: Configure layered network security for your storage account by using the Azure Storage firewall.
Previously updated : 08/01/2023 Last updated : 08/15/2023 -+ # Configure Azure Storage firewalls and virtual networks
-Azure Storage provides a layered security model. This model enables you to control the level of access to your storage accounts that your applications and enterprise environments demand, based on the type and subset of networks or resources that you use.
+Azure Storage provides a layered security model. This model enables you to control the level of access to your storage accounts that your applications and enterprise environments require. In this article, you will learn how to configure the Azure Storage firewall to protect the data in your storage account at the network layer.
-When you configure network rules, only applications that request data over the specified set of networks or through the specified set of Azure resources can access a storage account. You can limit access to your storage account to requests that come from specified IP addresses, IP ranges, subnets in an Azure virtual network, or resource instances of some Azure services.
+> [!IMPORTANT]
+> Azure Storage firewall rules only apply to [data plane](../../azure-resource-manager/management/control-plane-and-data-plane.md#data-plane) operations. [Control plane](../../azure-resource-manager/management/control-plane-and-data-plane.md#control-plane) operations are not subject to the restrictions specified in firewall rules.
+>
+> Some operations, such as blob container operations, can be performed through both the control plane and the data plane. So if you attempt to perform an operation such as listing containers from the Azure portal, the operation will succeed unless it is blocked by another mechanism. Attempts to access blob data from an application such as Azure Storage Explorer are controlled by the firewall restrictions.
+>
+> For a list of data plane operations, see the [Azure Storage REST API Reference](/rest/api/storageservices/).
+> For a list of control plane operations, see the [Azure Storage Resource Provider REST API Reference](/rest/api/storagerp/).
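+
+As an illustration of the distinction, the following commands list containers through each plane. This is a sketch with hypothetical names; only the first command is subject to storage firewall rules:
+
+```azurecli
+# Data plane: blocked by the firewall if the client's network isn't allowed.
+az storage container list --account-name stcontoso --auth-mode login
+
+# Control plane: served by the Azure Storage resource provider, so firewall
+# rules don't apply.
+az storage container-rm list --storage-account stcontoso --resource-group rg-storage
+```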
-Storage accounts have a public endpoint that's accessible through the internet. You can also create [private endpoints for your storage account](storage-private-endpoints.md). Creating private endpoints assigns a private IP address from your virtual network to the storage account. It helps secure traffic between your virtual network and the storage account over a private link.
+## Configure network access to Azure Storage
-The Azure Storage firewall provides access control for the public endpoint of your storage account. You can also use the firewall to block all access through the public endpoint when you're using private endpoints. Your firewall configuration also enables trusted Azure platform services to access the storage account.
+You can control access to the data in your storage account through network endpoints, trusted services, or resource instances, in any combination of the following:
-An application that accesses a storage account when network rules are in effect still requires proper authorization for the request. Authorization is supported with Azure Active Directory (Azure AD) credentials for blobs and queues, with a valid account access key, or with a shared access signature (SAS) token. When you configure a blob container for anonymous public access, requests to read data in that container don't need to be authorized. The firewall rules remain in effect and will block anonymous traffic.
+- [Allow access from selected virtual network subnets using private endpoints](storage-private-endpoints.md).
+- [Allow access from selected virtual network subnets using service endpoints](#grant-access-from-a-virtual-network).
+- [Allow access from specific public IP addresses or ranges](#grant-access-from-an-internet-ip-range).
+- [Allow access from selected Azure resource instances](#grant-access-from-azure-resource-instances).
+- [Allow access from trusted Azure services](#grant-access-to-trusted-azure-services) (using [Manage exceptions](#manage-exceptions)).
+- [Configure exceptions for logging and metrics services](#manage-exceptions).
-Turning on firewall rules for your storage account blocks incoming requests for data by default, unless the requests originate from a service that operates within an Azure virtual network or from allowed public IP addresses. Requests that are blocked include those from other Azure services, from the Azure portal, and from logging and metrics services.
+### About virtual network endpoints
-You can grant access to Azure services that operate from within a virtual network by allowing traffic from the subnet that hosts the service instance. You can also enable a limited number of scenarios through the exceptions mechanism that this article describes. To access data from the storage account through the Azure portal, you need to be on a machine within the trusted boundary (either IP or virtual network) that you set up.
+There are two types of virtual network endpoints for storage accounts:
+- [Virtual Network service endpoints](../../virtual-network/virtual-network-service-endpoints-overview.md)
+- [Private endpoints](storage-private-endpoints.md)
-## Scenarios
+Virtual network service endpoints are public and accessible via the internet. The Azure Storage firewall provides the ability to control access to your storage account over such public endpoints. When you enable public network access to your storage account, all incoming requests for data are blocked by default. Only applications that request data from allowed sources that you configure in your storage account firewall settings will be able to access your data. Sources can include the source IP address or virtual network subnet of a client, or an Azure service or resource instance through which clients or services access your data. Requests that are blocked include those from other Azure services, from the Azure portal, and from logging and metrics services, unless you explicitly allow access in your firewall configuration.
-To secure your storage account, you should first configure a rule to deny access to traffic from all networks (including internet traffic) on the public endpoint, by default. Then, you should configure rules that grant access to traffic from specific virtual networks. You can also configure rules to grant access to traffic from selected public internet IP address ranges, enabling connections from specific internet or on-premises clients. This configuration helps you build a secure network boundary for your applications.
+A private endpoint uses a private IP address from your virtual network to access a storage account over the Microsoft backbone network. With a private endpoint, traffic between your virtual network and the storage account is secured over a private link. Storage firewall rules only apply to the public endpoints of a storage account, not private endpoints. The process of approving the creation of a private endpoint grants implicit access to traffic from the subnet that hosts the private endpoint. You can use [Network Policies](../../private-link/disable-private-endpoint-network-policy.md) to control traffic over private endpoints if you want to refine access rules. If you want to use private endpoints exclusively, you can use the firewall to block all access through the public endpoint.
-You can combine firewall rules that allow access from specific virtual networks and from public IP address ranges on the same storage account. You can apply storage firewall rules to existing storage accounts or when you create new storage accounts.
+To help you decide when to use each type of endpoint in your environment, see [Compare Private Endpoints and Service Endpoints](../../virtual-network/vnet-integration-for-azure-services.md#compare-private-endpoints-and-service-endpoints).
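+
+For reference, creating a private endpoint for the Blob service can look like the following sketch (all resource names are hypothetical; `--group-id blob` targets the Blob service sub-resource):
+
+```azurecli
+# Look up the storage account's resource ID.
+storageId=$(az storage account show --name stcontoso --resource-group rg-storage --query id --output tsv)
+
+# Create the private endpoint in the subnet where clients need private access.
+az network private-endpoint create \
+    --name pe-stcontoso-blob \
+    --resource-group rg-storage \
+    --vnet-name vnet-hub \
+    --subnet snet-private-endpoints \
+    --private-connection-resource-id $storageId \
+    --group-id blob \
+    --connection-name stcontoso-blob-connection
+```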
-Storage firewall rules apply to the public endpoint of a storage account. You don't need any firewall access rules to allow traffic for private endpoints of a storage account. The process of approving the creation of a private endpoint grants implicit access to traffic from the subnet that hosts the private endpoint.
+### How to approach network security for your storage account
-> [!IMPORTANT]
-> When referencing a service endpoint in a client application, it's recommended that you avoid taking a dependency on a cached IP address. The storage account IP address is subject to change, and relying on a cached IP address may result in unexpected behavior.
->
-> Additionally, it's recommended that you honor the time-to-live (TTL) of the DNS record and avoid overriding it. Overriding the DNS TTL may result in unexpected behavior.
+To secure your storage account and build a secure network boundary for your applications, follow these steps (an Azure CLI sketch of the key commands appears after the list):
+
+1. Start by disabling all public network access for the storage account under the **Public network access** setting in the storage account firewall.
+1. Where possible, configure private links to your storage account from private endpoints on the virtual network subnets where the clients that require access to your data reside.
+1. If client applications require access over the public endpoints, change the **Public network access** setting to **Enabled from selected virtual networks and IP addresses**. Then, as needed:
-Network rules are enforced on all network protocols for Azure Storage, including REST and SMB. To access data by using tools such as the Azure portal, Azure Storage Explorer, and AzCopy, you must configure explicit network rules.
+ 1. Specify the virtual network subnets from which you want to allow access.
+ 1. Specify the public IP address ranges of clients from which you want to allow access, such as those on on-premises networks.
+ 1. Allow access from selected Azure resource instances.
+ 1. Add exceptions to allow access from trusted services required for operations such as backing up data.
+ 1. Add exceptions for logging and metrics.
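+
+The following Azure CLI sketch mirrors the preceding steps with hypothetical resource names. The portal setting **Enabled from selected virtual networks and IP addresses** corresponds roughly to public network access enabled with a default action of Deny:
+
+```azurecli
+# Step 1: disable all public network access while you configure private endpoints.
+az storage account update --name stcontoso --resource-group rg-storage --public-network-access Disabled
+
+# Step 3: re-enable public access, restricted to the sources you allow below.
+az storage account update --name stcontoso --resource-group rg-storage \
+    --public-network-access Enabled --default-action Deny
+
+# Allow a virtual network subnet (the subnet needs a Microsoft.Storage service endpoint).
+az storage account network-rule add --account-name stcontoso --resource-group rg-storage \
+    --vnet-name vnet-spoke --subnet snet-app
+
+# Allow a public IP range, such as an on-premises egress range.
+az storage account network-rule add --account-name stcontoso --resource-group rg-storage \
+    --ip-address 203.0.113.0/24
+
+# Allow trusted Azure services plus the logging and metrics exceptions.
+az storage account update --name stcontoso --resource-group rg-storage \
+    --bypass AzureServices Logging Metrics
+```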
After you apply network rules, they're enforced for all requests. SAS tokens that grant access to a specific IP address serve to limit the access of the token holder, but they don't grant new access beyond configured network rules.
-Network rules don't affect virtual machine (VM) disk traffic, including mount and unmount operations and disk I/O. Network rules help protect REST access to page blobs.
+## Restrictions and considerations
+
+Before implementing network security for your storage accounts, review the important restrictions and considerations discussed in this section.
+
+> [!div class="checklist"]
+>
+> - Azure Storage firewall rules only apply to [data plane](../../azure-resource-manager/management/control-plane-and-data-plane.md#data-plane) operations. [Control plane](../../azure-resource-manager/management/control-plane-and-data-plane.md#control-plane) operations are not subject to the restrictions specified in firewall rules.
+> - Review the [Restrictions for IP network rules](#restrictions-for-ip-network-rules).
+> - To access data by using tools such as the Azure portal, Azure Storage Explorer, and AzCopy, you must be on a machine within the trusted boundary that you establish when configuring network security rules.
+> - Network rules are enforced on all network protocols for Azure Storage, including REST and SMB.
+> - Network rules don't affect virtual machine (VM) disk traffic, including mount and unmount operations and disk I/O, but they do help protect REST access to page blobs.
+> - You can use unmanaged disks in storage accounts with network rules applied to back up and restore VMs by [creating an exception](#manage-exceptions). Firewall exceptions aren't applicable to managed disks, because Azure already manages them.
+> - Classic storage accounts don't support firewalls and virtual networks.
+> - If you delete a subnet that's included in a virtual network rule, it will be removed from the network rules for the storage account. If you create a new subnet by the same name, it won't have access to the storage account. To allow access, you must explicitly authorize the new subnet in the network rules for the storage account.
+> - When referencing a service endpoint in a client application, it's recommended that you avoid taking a dependency on a cached IP address. The storage account IP address is subject to change, and relying on a cached IP address may result in unexpected behavior. Additionally, it's recommended that you honor the time-to-live (TTL) of the DNS record and avoid overriding it. Overriding the DNS TTL may result in unexpected behavior.
+> - By design, access to a storage account from trusted services takes the highest precedence over other network access restrictions. If you set **Public network access** to **Disabled** after previously setting it to **Enabled from selected virtual networks and IP addresses**, any [resource instances](#grant-access-from-azure-resource-instances) and [exceptions](#manage-exceptions) that you previously configured, including [Allow Azure services on the trusted services list to access this storage account](#grant-access-to-trusted-azure-services), will remain in effect. As a result, those resources and services might still have access to the storage account.
+
+### Authorization
-Classic storage accounts don't support firewalls and virtual networks.
+Clients granted access via network rules must continue to meet the authorization requirements of the storage account to access the data. Authorization is supported with Azure Active Directory (Azure AD) credentials for blobs and queues, with a valid account access key, or with a shared access signature (SAS) token.
-You can use unmanaged disks in storage accounts with network rules applied to back up and restore VMs by creating an exception. The [Manage exceptions](#manage-exceptions) section of this article documents this process. Firewall exceptions aren't applicable with managed disks, because Azure already manages them.
+When you configure a blob container for anonymous public access, requests to read data in that container don't need to be authorized, but the firewall rules remain in effect and will block anonymous traffic.
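+
+For example, you can generate a user delegation SAS for a single container with the Azure CLI. This is a sketch with hypothetical names; note that the resulting token still only works from networks that the firewall allows:
+
+```azurecli
+# Create a read-only SAS for one container, valid until the stated UTC expiry.
+az storage container generate-sas \
+    --account-name stcontoso \
+    --name mycontainer \
+    --permissions r \
+    --expiry 2023-12-31T23:59:00Z \
+    --auth-mode login --as-user
+```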
## Change the default network access rule
By default, storage accounts accept connections from clients on any network. You
You must set the default rule to **deny**, or network rules have no effect. However, changing this setting can affect your application's ability to connect to Azure Storage. Be sure to grant access to any allowed networks or set up access through a private endpoint before you change this setting. + ### [Portal](#tab/azure-portal) 1. Go to the storage account that you want to secure.
You can enable a [service endpoint](../../virtual-network/virtual-network-servic
Each storage account supports up to 200 virtual network rules. You can combine these rules with [IP network rules](#grant-access-from-an-internet-ip-range). > [!IMPORTANT]
-> If you delete a subnet that's included in a network rule, it will be removed from the network rules for the storage account. If you create a new subnet by the same name, it won't have access to the storage account. To allow access, you must explicitly authorize the new subnet in the network rules for the storage account.
+> When referencing a service endpoint in a client application, it's recommended that you avoid taking a dependency on a cached IP address. The storage account IP address is subject to change, and relying on a cached IP address may result in unexpected behavior.
+>
+> Additionally, it's recommended that you honor the time-to-live (TTL) of the DNS record and avoid overriding it. Overriding the DNS TTL may result in unexpected behavior.
### Required permissions
Cross-region service endpoints for Azure Storage became generally available in A
Configuring service endpoints between virtual networks and service instances in a [paired region](../../best-practices-availability-paired-regions.md) can be an important part of your disaster recovery plan. Service endpoints allow continuity during a regional failover and access to read-only geo-redundant storage (RA-GRS) instances. Network rules that grant access from a virtual network to a storage account also grant access to any RA-GRS instance.
-When you're planning for disaster recovery during a regional outage, you should create the virtual networks in the paired region in advance. Enable service endpoints for Azure Storage, with network rules granting access from these alternative virtual networks. Then apply these rules to your geo-redundant storage accounts.
+When you're planning for disaster recovery during a regional outage, create the virtual networks in the paired region in advance. Enable service endpoints for Azure Storage, with network rules granting access from these alternative virtual networks. Then apply these rules to your geo-redundant storage accounts.
Local and cross-region service endpoints can't coexist on the same subnet. To replace existing service endpoints with cross-region ones, delete the existing `Microsoft.Storage` endpoints and re-create them as cross-region endpoints (`Microsoft.Storage.Global`).
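A sketch of that replacement with the Azure CLI, using hypothetical network names (note that updating `--service-endpoints` overwrites the subnet's existing endpoint list):

```azurecli
# Replace the local service endpoint with the cross-region one.
az network vnet subnet update \
    --resource-group rg-network \
    --vnet-name vnet-dr \
    --name snet-app \
    --service-endpoints Microsoft.Storage.Global
```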
If you want to enable access to your storage account from a virtual network or s
6. Select **Save** to apply your changes.
+> [!IMPORTANT]
+> If you delete a subnet that's included in a network rule, it will be removed from the network rules for the storage account. If you create a new subnet by the same name, it won't have access to the storage account. To allow access, you must explicitly authorize the new subnet in the network rules for the storage account.
+ #### [PowerShell](#tab/azure-powershell) 1. Install [Azure PowerShell](/powershell/azure/install-azure-powershell) and [sign in](/powershell/azure/authenticate-azureps).
If you want to enable access to your storage account from a virtual network or s
You can create IP network rules to allow access from specific public internet IP address ranges. Each storage account supports up to 200 rules. These rules grant access to specific internet-based services and on-premises networks and block general internet traffic.
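To review or remove rules later, the same command group applies; a sketch with hypothetical names:

```azurecli
# List the current IP rules, then remove one.
az storage account network-rule list --account-name stcontoso --resource-group rg-storage --query ipRules
az storage account network-rule remove --account-name stcontoso --resource-group rg-storage --ip-address 203.0.113.0/24
```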
+### Restrictions for IP network rules
+ The following restrictions apply to IP address ranges: - IP network rules are allowed only for *public internet* IP addresses.
To learn more about working with storage analytics, see [Use Azure Storage analy
## Next steps Learn more about [Azure network service endpoints](../../virtual-network/virtual-network-service-endpoints-overview.md).- Dig deeper into [Azure Storage security](../blobs/security-recommendations.md).
storage Storage Powershell Independent Clouds https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-powershell-independent-clouds.md
Title: Use PowerShell to manage data in Azure independent clouds
description: Managing Storage in the China Cloud, Government Cloud, and German Cloud Using Azure PowerShell. -+ Last updated 12/04/2019-+
storage Storage Redundancy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-redundancy.md
ZRS is supported for premium file shares (Azure Files) through the `FileStorage`
For a list of regions that support zone-redundant storage (ZRS) for premium file share accounts, see [Azure Files zone-redundant storage for premium file shares](../files/redundancy-premium-file-shares.md).
+#### Managed disks
+
+ZRS is supported for managed disks with the following [limitations](../../virtual-machines/disks-redundancy.md#limitations).
+
+For a list of regions that support zone-redundant storage (ZRS) for managed disks, see [regional availability](../../virtual-machines/disks-redundancy.md#regional-availability).
+ ## Redundancy in a secondary region For applications requiring high durability, you can choose to additionally copy the data in your storage account to a secondary region that is hundreds of miles away from the primary region. If your storage account is copied to a secondary region, then your data is durable even in the case of a complete regional outage or a disaster in which the primary region isn't recoverable.
Unmanaged disks don't support ZRS or GZRS.
For pricing information for each redundancy option, see [Azure Storage pricing](https://azure.microsoft.com/pricing/details/storage/). > [!NOTE]
-> Azure Premium Disk Storage currently supports only locally redundant storage (LRS). Block blob storage accounts support locally redundant storage (LRS) and zone redundant storage (ZRS) in certain regions.
+> Block blob storage accounts support locally redundant storage (LRS) and zone-redundant storage (ZRS) in certain regions.
### Support for customer-managed account failover
Azure Storage regularly verifies the integrity of data stored using cyclic redun
- [Azure Files](https://azure.microsoft.com/pricing/details/storage/files/) - [Table Storage](https://azure.microsoft.com/pricing/details/storage/tables/) - [Queue Storage](https://azure.microsoft.com/pricing/details/storage/queues/)
+ - [Azure Disks](https://azure.microsoft.com/pricing/details/managed-disks/)
storage Storage Ref Azcopy Copy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-ref-azcopy-copy.md
azcopy copy [source] [destination] [flags]
## Examples
-Upload a single file by using OAuth authentication. If you haven't yet logged into AzCopy, please run the azcopy login command before you run the following command.
+Upload a single file by using OAuth authentication. If you haven't yet logged into AzCopy, run the azcopy login command before you run the following command.
`azcopy cp "/path/to/file.txt" "https://[account].blob.core.windows.net/[container]/[path/to/blob]"`
Upload files and directories to Azure Storage account and set the query-string e
- To set tags {key = "bla bla", val = "foo"} and {key = "bla bla 2", val = "bar"}, use the following syntax: - `azcopy cp "/path/*foo/*bar*" "https://[account].blob.core.windows.net/[container]/[path/to/directory]?[SAS]" --blob-tags="bla%20bla=foo&bla%20bla%202=bar"` - Keys and values are URL encoded and the key-value pairs are separated by an ampersand('&')-- While setting tags on the blobs, there are additional permissions('t' for tags) in SAS without which the service will give authorization error back.
+- To set tags on blobs, the SAS token must include the tags permission ('t'); without it, the service returns an authorization error.
-Download a single file by using OAuth authentication. If you haven't yet logged into AzCopy, please run the azcopy login command before you run the following command.
+Download a single file by using OAuth authentication. If you haven't yet logged into AzCopy, run the azcopy login command before you run the following command.
`azcopy cp "https://[account].blob.core.windows.net/[container]/[path/to/blob]" "/path/to/file.txt"`
Copy all buckets to Blob Storage from an Amazon Web Services (AWS) region by usi
`azcopy cp "https://s3-[region].amazonaws.com/" "https://[destaccount].blob.core.windows.net?[SAS]" --recursive=true`
-Copy a subset of buckets by using a wildcard symbol (*) in the bucket name. Like the previous examples, you'll need an access key and a SAS token. Make sure to set the environment variable AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY for AWS S3 source.
+Copy a subset of buckets by using a wildcard symbol (*) in the bucket name. Like the previous examples, you need an access key and a SAS token. Make sure to set the environment variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY for the AWS S3 source.
`azcopy cp "https://s3.amazonaws.com/[bucket*name]/" "https://[destaccount].blob.core.windows.net?[SAS]" --recursive=true`
Copy a subset of buckets by using a wildcard symbol (*) in the bucket name from
`--block-blob-tier` (string) upload block blob to Azure Storage using this blob tier. (default "None")
-`--block-size-mb` (float) Use this block size (specified in MiB) when uploading to Azure Storage, and downloading from Azure Storage. The default value is automatically calculated based on file size. Decimal fractions are allowed (For example: 0.25).
+`--block-size-mb` (float) Use this block size (specified in MiB) when uploading to Azure Storage, and downloading from Azure Storage. The default value is automatically calculated based on file size. Decimal fractions are allowed (For example: 0.25). When uploading or downloading, the maximum allowed block size is 0.75 * AZCOPY_BUFFER_GB. To learn more, see [Optimize memory use](storage-use-azcopy-optimize.md#optimize-memory-use).
`--cache-control` (string) Set the cache-control header. Returned on download.
Copy a subset of buckets by using a wildcard symbol (*) in the bucket name from
`--content-type` (string) Specifies the content type of the file. Implies no-guess-mime-type. Returned on download.
-`--cpk-by-name` (string) Client provided key by name that gives clients making requests against Azure Blob storage an option to provide an encryption key on a per-request basis. Provided key name will be fetched from Azure Key Vault and will be used to encrypt the data
+`--cpk-by-name` (string) Client-provided key by name that gives clients making requests against Azure Blob storage an option to provide an encryption key on a per-request basis. The provided key name is fetched from Azure Key Vault and used to encrypt the data.
-`--cpk-by-value` Client provided key by name that let clients making requests against Azure Blob storage an option to provide an encryption key on a per-request basis. Provided key and its hash will be fetched from environment variables
+`--cpk-by-value` Client-provided key by value that gives clients making requests against Azure Blob storage an option to provide an encryption key on a per-request basis. The provided key and its hash are fetched from environment variables.
-`--decompress` Automatically decompress files when downloading, if their content-encoding indicates that they're compressed. The supported content-encoding values are 'gzip' and 'deflate'. File extensions of '.gz'/'.gzip' or '.zz' aren't necessary, but will be removed if present.
+`--decompress` Automatically decompress files when downloading, if their content-encoding indicates that they're compressed. The supported content-encoding values are 'gzip' and 'deflate'. File extensions of '.gz'/'.gzip' or '.zz' aren't necessary, but are removed if present.
`--disable-auto-decoding` False by default to enable automatic decoding of illegal chars on Windows. Can be set to true to disable automatic decoding.
Copy a subset of buckets by using a wildcard symbol (*) in the bucket name from
`--no-guess-mime-type` Prevents AzCopy from detecting the content-type based on the extension or content of the file.
-`--overwrite` (string) Overwrite the conflicting files and blobs at the destination if this flag is set to true. (default 'true') Possible values include 'true', 'false', 'prompt', and 'ifSourceNewer'. For destinations that support folders, conflicting folder-level properties will be overwritten this flag is 'true' or if a positive response is provided to the prompt. (default "true")
+`--overwrite` (string) Overwrite the conflicting files and blobs at the destination if this flag is set to true. (default 'true') Possible values include 'true', 'false', 'prompt', and 'ifSourceNewer'. For destinations that support folders, conflicting folder-level properties are overwritten if this flag is 'true' or if a positive response is provided to the prompt. (default "true")
`--page-blob-tier` (string) Upload page blob to Azure Storage using this blob tier. (default 'None'). (default "None")
Copy a subset of buckets by using a wildcard symbol (*) in the bucket name from
`--preserve-owner` Only has an effect in downloads, and only when `--preserve-smb-permissions` is used. If true (the default), the file Owner and Group are preserved in downloads. If set to false,
-`--preserve-smb-permissions` will still preserve ACLs but Owner and Group will be based on the user running AzCopy (default true)
+`--preserve-smb-permissions` will still preserve ACLs, but Owner and Group are based on the user running AzCopy (default true)
`--preserve-permissions` False by default. Preserves ACLs between aware resources (Windows and Azure Files, or Azure Data Lake Storage Gen2 to Azure Data Lake Storage Gen2). For Hierarchical Namespace accounts, you'll need a container SAS or OAuth token with Modify Ownership and Modify Permissions permissions. For downloads, you'll also need the `--backup` flag to restore permissions where the new Owner won't be the user running AzCopy. This flag applies to both files and folders, unless a file-only filter is specified (for example, include-pattern).
-`--preserve-smb-info` For SMB-aware locations, flag will be set to true by default. Preserves SMB property info (last write time, creation time, attribute bits) between SMB-aware resources (Windows and Azure Files). Only the attribute bits supported by Azure Files will be transferred; any others will be ignored. This flag applies to both files and folders, unless a file-only filter is specified (for example, include-pattern). The info transferred for folders is the same as that for files, except for `Last Write Time` which is never preserved for folders. (default true)
+`--preserve-smb-info` For SMB-aware locations, flag is set to true by default. Preserves SMB property info (last write time, creation time, attribute bits) between SMB-aware resources (Windows and Azure Files). Only the attribute bits supported by Azure Files are transferred; any others are ignored. This flag applies to both files and folders, unless a file-only filter is specified (for example, include-pattern). The info transferred for folders is the same as that for files, except for `Last Write Time` which is never preserved for folders. (default true)
`--put-md5` Create an MD5 hash of each file, and save the hash as the Content-MD5 property of the destination blob or file. (By default the hash is NOT created.) Only available when uploading. `--recursive` Look into subdirectories recursively when uploading from local file system.
-`--s2s-detect-source-changed` Detect if the source file/blob changes while it is being read. (This parameter only applies to service-to-service copies, because the corresponding check is permanently enabled for uploads and downloads.)
+`--s2s-detect-source-changed` Detect if the source file/blob changes while it's being read. (This parameter only applies to service-to-service copies, because the corresponding check is permanently enabled for uploads and downloads.)
`--s2s-handle-invalid-metadata` (string) Specifies how invalid metadata keys are handled. Available options: ExcludeIfInvalid, FailIfInvalid, RenameIfInvalid. (default 'ExcludeIfInvalid'). (default "ExcludeIfInvalid")
storage Storage Ref Azcopy Sync https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-ref-azcopy-sync.md
Note: if include and exclude flags are used together, only files matching the in
## Options
-`--block-size-mb` (float) Use this block size (specified in MiB) when uploading to Azure Storage or downloading from Azure Storage. Default is automatically calculated based on file size. Decimal fractions are allowed (For example: 0.25).
+`--block-size-mb` (float) Use this block size (specified in MiB) when uploading to Azure Storage or downloading from Azure Storage. Default is automatically calculated based on file size. Decimal fractions are allowed (For example: 0.25). When uploading or downloading, the maximum allowed block size is 0.75 * AZCOPY_BUFFER_GB. To learn more, see [Optimize memory use](storage-use-azcopy-optimize.md#optimize-memory-use).
`--check-md5` (string) Specifies how strictly MD5 hashes should be validated when downloading. This option is only available when downloading. Available values include: NoCheck, LogOnly, FailIfDifferent, FailIfDifferentOrMissing. (default 'FailIfDifferent'). (default "FailIfDifferent")
Note: if include and exclude flags are used together, only files matching the in
`--exclude-regex` (string) Exclude the relative path of the files that match with the regular expressions. Separate regular expressions with ';'.
+`--force-if-read-only` When overwriting an existing file on Windows or Azure Files, force the overwrite to work even if the existing file has its read-only attribute set.
+ `--from-to` (string) Optionally specifies the source destination combination. For Example: LocalBlob, BlobLocal, LocalFile, FileLocal, BlobFile, FileBlob, etc. `-h`, `--help` help for sync
storage Storage Sas Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-sas-overview.md
Title: Grant limited access to data with shared access signatures (SAS)
description: Learn about using shared access signatures (SAS) to delegate access to Azure Storage resources, including blobs, queues, tables, and files. -+ Last updated 06/07/2023-+
storage Storage Service Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-service-encryption.md
Title: Azure Storage encryption for data at rest description: Azure Storage protects your data by automatically encrypting it before persisting it to the cloud. You can rely on Microsoft-managed keys for the encryption of the data in your storage account, or you can manage encryption with your own keys. -+ Last updated 02/09/2023 -+
storage Storage Use Azcopy Authorize Azure Active Directory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-use-azcopy-authorize-azure-active-directory.md
description: You can provide authorization credentials for AzCopy operations by
Previously updated : 04/01/2021 Last updated : 09/05/2023
$env:AZCOPY_SPA_CERT_PASSWORD="$(Read-Host -prompt "Enter key")"
Next, type the following command, and then press the ENTER key. ```azcopy
-azcopy login --service-principal --certificate-path <path-to-certificate-file> --tenant-id=<tenant-id>
+azcopy login --service-principal --application-id <application-id> --certificate-path <path-to-certificate-file> --tenant-id=<tenant-id>
```
-Replace the `<path-to-certificate-file>` placeholder with the relative or fully qualified path to the certificate file. AzCopy saves the path to this certificate but it doesn't save a copy of the certificate, so make sure to keep that certificate in place. Replace the `<tenant-id>` placeholder with the tenant ID of the organization to which the storage account belongs. To find the tenant ID, select **Azure Active Directory > Properties > Directory ID** in the Azure portal.
+Replace the `<application-id>` placeholder with the application ID of your service principal's app registration. Replace the `<path-to-certificate-file>` placeholder with the relative or fully qualified path to the certificate file. AzCopy saves the path to this certificate but it doesn't save a copy of the certificate, so make sure to keep that certificate in place. Replace the `<tenant-id>` placeholder with the tenant ID of the organization to which the storage account belongs. To find the tenant ID, select **Azure Active Directory > Properties > Directory ID** in the Azure portal.
> [!NOTE] > Consider using a prompt as shown in this example. That way, your password won't appear in your console's command history.
Type the following command, and then press the ENTER key.
```bash export AZCOPY_AUTO_LOGIN_TYPE=SPN
+export AZCOPY_SPA_APPLICATION_ID=<application-id>
export AZCOPY_SPA_CERT_PATH=<path-to-certificate-file> export AZCOPY_SPA_CERT_PASSWORD=<certificate-password> export AZCOPY_TENANT_ID=<tenant-id> ```
-Replace the `<path-to-certificate-file>` placeholder with the relative or fully qualified path to the certificate file. AzCopy saves the path to this certificate but it doesn't save a copy of the certificate, so make sure to keep that certificate in place. Replace the `<certificate-password>` placeholder with the password of the certificate. Replace the `<tenant-id>` placeholder with the tenant ID of the organization to which the storage account belongs. To find the tenant ID, select **Azure Active Directory > Properties > Directory ID** in the Azure portal.
+Replace the `<application-id>` placeholder with the application ID of your service principal's app registration. Replace the `<path-to-certificate-file>` placeholder with the relative or fully qualified path to the certificate file. AzCopy saves the path to this certificate but it doesn't save a copy of the certificate, so make sure to keep that certificate in place. Replace the `<certificate-password>` placeholder with the password of the certificate. Replace the `<tenant-id>` placeholder with the tenant ID of the organization to which the storage account belongs. To find the tenant ID, select **Azure Active Directory > Properties > Directory ID** in the Azure portal.
> [!NOTE] > Consider using a prompt to collect the password from the user. That way, your password won't appear in your command history.
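With the environment variables set, subsequent AzCopy commands authenticate automatically. For example, a hypothetical upload:

```azcopy
azcopy copy "/path/to/file.txt" "https://[account].blob.core.windows.net/[container]/[path/to/blob]"
```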
storage Container Storage Aks Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/container-storage/container-storage-aks-quickstart.md
description: Learn how to install Azure Container Storage Preview on an Azure Ku
Previously updated : 08/03/2023 Last updated : 08/18/2023 -+ # Quickstart: Use Azure Container Storage Preview with Azure Kubernetes Service
- If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. -- Sign up for the public preview by completing the [onboarding survey](https://aka.ms/AzureContainerStoragePreviewSignUp).- - This quickstart requires version 2.0.64 or later of the Azure CLI. See [How to install the Azure CLI](/cli/azure/install-azure-cli). - You'll need an AKS cluster with an appropriate [virtual machine type](install-container-storage-aks.md#vm-types). If you don't have one, see [Create an AKS cluster](install-container-storage-aks.md#create-aks-cluster). - You'll need the Kubernetes command-line client, `kubectl`. You can install it locally by running the `az aks install-cli` command.
+- Optional: We'd like input on how you plan to use Azure Container Storage. Please complete this [short survey](https://aka.ms/AzureContainerStoragePreviewSignUp).
+ ## Install Azure Container Storage Follow these instructions to install Azure Container Storage on your AKS cluster using an installation script.
Follow these instructions to install Azure Container Storage on your AKS cluster
| -g | --resource-group | The resource group name.| | -c  | --cluster-name | The name of the cluster where Azure Container Storage is to be installed.| | -n  | --nodepool-name | The name of the nodepool. Defaults to the first nodepool in the cluster.|
- | -r  | --release-train | The release train for the installation. Defaults to prod.|
+ | -r  | --release-train | The release train for the installation. Defaults to stable.|
For example:
storage Container Storage Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/container-storage/container-storage-introduction.md
description: An overview of Azure Container Storage Preview, a service built nat
Previously updated : 08/02/2023 Last updated : 08/14/2023
Azure Container Storage is a cloud-based volume management, deployment, and orchestration service built natively for containers. It integrates with Kubernetes, allowing you to dynamically and automatically provision persistent volumes to store data for stateful applications running on Kubernetes clusters.
-To sign up for Azure Container Storage Preview, complete the [onboarding survey](https://aka.ms/AzureContainerStoragePreviewSignUp). To get started using Azure Container Storage, see [Install Azure Container Storage for use with AKS](container-storage-aks-quickstart.md) or watch the video.
+To get started using Azure Container Storage, see [Use Azure Container Storage Preview with Azure Kubernetes Service](container-storage-aks-quickstart.md) or watch the video.
+
+We'd like input on how you plan to use Azure Container Storage. Please complete this [short survey](https://aka.ms/AzureContainerStoragePreviewSignUp).
:::row::: :::column:::
- <iframe width="560" height="315" src="https://www.youtube.com/embed/I_2nCQ1FKTU" title="Get started with Azure Container Storage" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>
+ > [!VIDEO https://www.youtube.com/embed/I_2nCQ1FKTU]
:::column-end::: :::column::: This video provides an introduction to Azure Container Storage, an end-to-end storage management and orchestration service for stateful applications. See how simple it is to create and manage volumes for production-scale stateful container applications. Learn how to optimize the performance of stateful workloads on Azure Kubernetes Service (AKS) to effectively scale across storage services while providing a cost-effective container-native experience.
storage Install Container Storage Aks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/container-storage/install-container-storage-aks.md
description: Learn how to install Azure Container Storage Preview for use with A
Previously updated : 08/02/2023 Last updated : 08/14/2023
- Take note of your Azure subscription ID. We recommend using a subscription on which you have an [Owner](../../role-based-access-control/built-in-roles.md#owner) role. If you don't have access to one, you can still proceed, but you'll need admin assistance to complete the steps in this article. -- Sign up for the public preview by completing the [onboarding survey](https://aka.ms/AzureContainerStoragePreviewSignUp).- - This article requires version 2.0.64 or later of the Azure CLI. See [How to install the Azure CLI](/cli/azure/install-azure-cli). If you're using the Bash environment in Azure Cloud Shell, the latest version is already installed. If you plan to run the commands locally instead of in Azure Cloud Shell, be sure to run them with administrative privileges. For more information, see [Quickstart for Bash in Azure Cloud Shell](../../cloud-shell/quickstart.md).
+- Optional: We'd like input on how you plan to use Azure Container Storage. Please complete this [short survey](https://aka.ms/AzureContainerStoragePreviewSignUp).
+ > [!NOTE] > Instead of following the steps in this article, you can install Azure Container Storage Preview using a provided installation script. See [Quickstart: Use Azure Container Storage Preview with Azure Kubernetes Service](container-storage-aks-quickstart.md).
storage Use Container Storage With Elastic San https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/container-storage/use-container-storage-with-elastic-san.md
description: Configure Azure Container Storage Preview for use with Azure Elasti
Previously updated : 07/03/2023 Last updated : 08/14/2023
## Regional availability
-Azure Container Storage Preview is only available in the following Azure regions:
--- East US-- West Europe-- West US 2-- West US 3
+Azure Container Storage Preview is only available in the following Azure regions: East US, East US 2, West US 2, West US 3, South Central US, Southeast Asia, Australia East, West Europe, North Europe, UK South, Sweden Central, and France Central.
## Create a storage pool
storage Use Container Storage With Local Disk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/container-storage/use-container-storage-with-local-disk.md
description: Configure Azure Container Storage Preview for use with Ephemeral Di
Previously updated : 07/03/2023 Last updated : 08/14/2023
## Regional availability
-Azure Container Storage Preview is only available in the following Azure regions:
--- East US-- West Europe-- West US 2-- West US 3
+Azure Container Storage Preview is only available in the following Azure regions: East US, East US 2, West US 2, West US 3, South Central US, Southeast Asia, Australia East, West Europe, North Europe, UK South, Sweden Central, and France Central.
## Create a storage pool
storage Use Container Storage With Managed Disks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/container-storage/use-container-storage-with-managed-disks.md
description: Configure Azure Container Storage Preview for use with Azure manage
Previously updated : 07/03/2023 Last updated : 08/14/2023
## Regional availability
-Azure Container Storage Preview is only available in the following Azure regions:
--- East US-- West Europe-- West US 2-- West US 3
+Azure Container Storage Preview is only available in the following Azure regions: East US, East US 2, West US 2, West US 3, South Central US, Southeast Asia, Australia East, West Europe, North Europe, UK South, Sweden Central, and France Central.
## Create a storage pool
storage Elastic San Connect Aks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-connect-aks.md
description: Learn how to connect to an Azure Elastic SAN Preview volume an Azur
Previously updated : 04/28/2023 Last updated : 07/11/2023
The iSCSI CSI driver for Kubernetes is [licensed under the Apache 2.0 license](h
## Prerequisites -- Have an [Azure Elastic SAN](elastic-san-create.md) with volumes - Use either the [latest Azure CLI](/cli/azure/install-azure-cli) or install the [latest Azure PowerShell module](/powershell/azure/install-azure-powershell) - Meet the [compatibility requirements](https://github.com/kubernetes-csi/csi-driver-iscsi/blob/master/README.md#container-images--kubernetes-compatibility) for the iSCSI CSI driver
+- [Deploy an Elastic SAN Preview](elastic-san-create.md)
+- [Configure a virtual network endpoint](elastic-san-networking.md#configure-a-virtual-network-endpoint)
+- [Configure virtual network rules](elastic-san-networking.md#configure-virtual-network-rules)
## Limitations
After deployment, check the pods status to verify that the driver installed.
```bash kubectl -n kube-system get pod -o wide -l app=csi-iscsi-node ```
-### Configure Elastic SAN Volume Group
-
-To connect an Elastic SAN volume to an AKS cluster, you need to configure Elastic SAN Volume Group to allow access from AKS node pool subnets, follow [Configure Elastic SAN networking Preview](elastic-san-networking.md)
### Get volume information
storage Elastic San Connect Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-connect-linux.md
description: Learn how to connect to an Azure Elastic SAN Preview volume from a
Previously updated : 04/24/2023 Last updated : 07/11/2023 -+ # Connect to Elastic SAN Preview volumes - Linux
In this article, you'll add the Storage service endpoint to an Azure virtual net
## Prerequisites -- Complete [Deploy an Elastic SAN Preview](elastic-san-create.md)-- An Azure Virtual Network, which you'll need to establish a connection from compute clients in Azure to your Elastic SAN volumes.
+- Use either the [latest Azure CLI](/cli/azure/install-azure-cli) or install the [latest Azure PowerShell module](/powershell/azure/install-azure-powershell)
+- [Deploy an Elastic SAN Preview](elastic-san-create.md)
+- [Configure a virtual network endpoint](elastic-san-networking.md#configure-a-virtual-network-endpoint)
+- [Configure virtual network rules](elastic-san-networking.md#configure-virtual-network-rules)
## Limitations [!INCLUDE [elastic-san-regions](../../../includes/elastic-san-regions.md)]
-## Networking configuration
-
-To connect to a SAN volume, you need to enable the storage service endpoint on your Azure virtual network subnet, and then connect your volume groups to your Azure virtual network subnets.
-
-### Enable Storage service endpoint
-
-In your virtual network, enable the Storage service endpoint on your subnet. This ensures traffic is routed optimally to your Elastic SAN. To enable service point for Azure Storage, you must have the appropriate permissions for the virtual network. This operation can be performed by a user that has been given permission to the Microsoft.Network/virtualNetworks/subnets/joinViaServiceEndpoint/action [Azure resource provider operation](../../role-based-access-control/resource-provider-operations.md#microsoftnetwork) via a custom Azure role. An Elastic SAN and the virtual networks granted access may be in different subscriptions, including subscriptions that are a part of a different Azure AD tenant.
-
-> [!NOTE]
-> Configuration of rules that grant access to subnets in virtual networks that are a part of a different Azure Active Directory tenant are currently only supported through PowerShell, CLI and REST APIs. These rules cannot be configured through the Azure portal, though they may be viewed in the portal.
-
-# [Portal](#tab/azure-portal)
-
-1. Navigate to your virtual network and select **Service Endpoints**.
-1. Select **+ Add** and for **Service** select **Microsoft.Storage.Global**.
-1. Select any policies you like, and the subnet you deploy your Elastic SAN into and select **Add**.
--
-# [PowerShell](#tab/azure-powershell)
-
-```powershell
-$resourceGroupName = "yourResourceGroup"
-$vnetName = "yourVirtualNetwork"
-$subnetName = "yourSubnet"
-
-$virtualNetwork = Get-AzVirtualNetwork -ResourceGroupName $resourceGroupName -Name $vnetName
-
-$subnet = Get-AzVirtualNetworkSubnetConfig -VirtualNetwork $virtualNetwork -Name $subnetName
-
-$virtualNetwork | Set-AzVirtualNetworkSubnetConfig -Name $subnetName -AddressPrefix $subnet.AddressPrefix -ServiceEndpoint "Microsoft.Storage.Global" | Set-AzVirtualNetwork
-```
-
-# [Azure CLI](#tab/azure-cli)
-
-```azurecli
-az network vnet subnet update --resource-group "myresourcegroup" --vnet-name "myvnet" --name "mysubnet" --service-endpoints "Microsoft.Storage.Global"
-```
--
-### Configure volume group networking
-
-Now that you've enabled the service endpoint, configure the network security settings on your volume groups. You can grant network access to a volume group from one or more Azure virtual networks.
-
-By default, no network access is allowed to any volumes in a volume group. Adding a virtual network to your volume group lets you establish iSCSI connections from clients in the same virtual network and subnet to the volumes in the volume group. For details on accessing your volumes from another region, see [Azure Storage cross-region service endpoints](elastic-san-networking.md#azure-storage-cross-region-service-endpoints).
-
-# [Portal](#tab/azure-portal)
-
-1. Navigate to your SAN and select **Volume groups**.
-1. Select a volume group and select **Create**.
-1. Add an existing virtual network and subnet and select **Save**.
-
-# [PowerShell](#tab/azure-powershell)
-
-```azurepowershell
-$rule = New-AzElasticSanVirtualNetworkRuleObject -VirtualNetworkResourceId $subnet.Id -Action Allow
-
-Add-AzElasticSanVolumeGroupNetworkRule -ResourceGroupName $resourceGroupName -ElasticSanName $sanName -VolumeGroupName $volGroupName -NetworkAclsVirtualNetworkRule $rule
-
-```
-# [Azure CLI](#tab/azure-cli)
-
-```azurecli
-# First, get the current length of the list of virtual networks. This is needed to ensure you append a new network instead of replacing existing ones.
-virtualNetworkListLength = az elastic-san volume-group show -e $sanName -n $volumeGroupName -g $resourceGroupName --query 'length(networkAcls.virtualNetworkRules)'
-
-az elastic-san volume-group update -e $sanName -g $resourceGroupName --name $volumeGroupName --network-acls virtual-network-rules[$virtualNetworkListLength] "{virtualNetworkRules:[{id:/subscriptions/subscriptionID/resourceGroups/RGName/providers/Microsoft.Network/virtualNetworks/vnetName/subnets/default, action:Allow}]}"
-```
## Connect to a volume You can create either a single session or multiple sessions to each Elastic SAN volume based on your application's multi-threaded capabilities and performance requirements. To achieve higher IOPS and throughput to a volume and reach its maximum limits, use multiple sessions and adjust the queue depth and IO size as needed, if your workload allows.
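On a Linux client, you establish those sessions with standard iSCSI tooling. A hedged sketch with placeholder values; take the target IQN and portal address from your volume's connection details:

```bash
# Register the target (replace <target-iqn> and <portal-ip> with your volume's values).
sudo iscsiadm -m node --targetname "<target-iqn>" --portal "<portal-ip>:3260" -o new

# Log in to create the first session.
sudo iscsiadm -m node --targetname "<target-iqn>" --portal "<portal-ip>:3260" --login

# Optionally duplicate an existing session (session ID from 'iscsiadm -m session')
# to add more sessions for higher IOPS and throughput.
sudo iscsiadm -m session -r <session-id> --op new
```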
storage Elastic San Connect Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-connect-windows.md
description: Learn how to connect to an Azure Elastic SAN Preview volume from a
Previously updated : 04/24/2023 Last updated : 07/11/2023 -+ # Connect to Elastic SAN Preview volumes - Windows
In this article, you'll add the Storage service endpoint to an Azure virtual net
## Prerequisites -- Complete [Deploy an Elastic SAN Preview](elastic-san-create.md)-- An Azure Virtual Network, which you'll need to establish a connection from compute clients in Azure to your Elastic SAN volumes.
+- Use either the [latest Azure CLI](/cli/azure/install-azure-cli) or install the [latest Azure PowerShell module](/powershell/azure/install-azure-powershell)
+- [Deploy an Elastic SAN Preview](elastic-san-create.md)
+- [Configure a virtual network endpoint](elastic-san-networking.md#configure-a-virtual-network-endpoint)
+- [Configure virtual network rules](elastic-san-networking.md#configure-virtual-network-rules)
## Limitations [!INCLUDE [elastic-san-regions](../../../includes/elastic-san-regions.md)]
-## Configure networking
-
-To connect to a SAN volume, you need to enable the storage service endpoint on your Azure virtual network subnet, and then connect your volume groups to your Azure virtual network subnets.
-
-### Enable Storage service endpoint
-
-In your virtual network, enable the Storage service endpoint on your subnet. This ensures traffic is routed optimally to your Elastic SAN. To enable service point for Azure Storage, you must have the appropriate permissions for the virtual network. This operation can be performed by a user that has been given permission to the Microsoft.Network/virtualNetworks/subnets/joinViaServiceEndpoint/action [Azure resource provider operation](../../role-based-access-control/resource-provider-operations.md#microsoftnetwork) via a custom Azure role. An Elastic SAN and the virtual networks granted access may be in different subscriptions, including subscriptions that are a part of a different Azure AD tenant.
-
-> [!NOTE]
-> Configuration of rules that grant access to subnets in virtual networks that are a part of a different Azure Active Directory tenant are currently only supported through PowerShell, CLI and REST APIs. These rules cannot be configured through the Azure portal, though they may be viewed in the portal.
-
-# [Portal](#tab/azure-portal)
-
-1. Navigate to your virtual network and select **Service Endpoints**.
-1. Select **+ Add** and for **Service** select **Microsoft.Storage.Global**.
-1. Select any policies you like, and the subnet you deploy your Elastic SAN into and select **Add**.
--
-# [PowerShell](#tab/azure-powershell)
-
-```powershell
-$resourceGroupName = "yourResourceGroup"
-$vnetName = "yourVirtualNetwork"
-$subnetName = "yourSubnet"
-
-$virtualNetwork = Get-AzVirtualNetwork -ResourceGroupName $resourceGroupName -Name $vnetName
-
-$subnet = Get-AzVirtualNetworkSubnetConfig -VirtualNetwork $virtualNetwork -Name $subnetName
-
-$virtualNetwork | Set-AzVirtualNetworkSubnetConfig -Name $subnetName -AddressPrefix $subnet.AddressPrefix -ServiceEndpoint "Microsoft.Storage.Global" | Set-AzVirtualNetwork
-```
-
-# [Azure CLI](#tab/azure-cli)
-
-```azurecli
-az network vnet subnet update --resource-group "myresourcegroup" --vnet-name "myvnet" --name "mysubnet" --service-endpoints "Microsoft.Storage.Global"
-```
--
-### Configure volume group networking
-
-Now that you've enabled the service endpoint, configure the network security settings on your volume groups. You can grant network access to a volume group from one or more Azure virtual networks.
-
-By default, no network access is allowed to any volumes in a volume group. Adding a virtual network to your volume group lets you establish iSCSI connections from clients in the same virtual network and subnet to the volumes in the volume group. For details on accessing your volumes from another region, see [Azure Storage cross-region service endpoints](elastic-san-networking.md#azure-storage-cross-region-service-endpoints).
-
-# [Portal](#tab/azure-portal)
-
-1. Navigate to your SAN and select **Volume groups**.
-1. Select a volume group and select **Create**.
-1. Add an existing virtual network and subnet and select **Save**.
-
-# [PowerShell](#tab/azure-powershell)
-
-```azurepowershell
-$rule = New-AzElasticSanVirtualNetworkRuleObject -VirtualNetworkResourceId $subnet.Id -Action Allow
-
-Add-AzElasticSanVolumeGroupNetworkRule -ResourceGroupName $resourceGroupName -ElasticSanName $sanName -VolumeGroupName $volGroupName -NetworkAclsVirtualNetworkRule $rule
-
-```
-# [Azure CLI](#tab/azure-cli)
-
-```azurecli
-# First, get the current length of the list of virtual networks. This is needed to ensure you append a new network instead of replacing existing ones.
-virtualNetworkListLength = az elastic-san volume-group show -e $sanName -n $volumeGroupName -g $resourceGroupName --query 'length(networkAcls.virtualNetworkRules)'
-
-az elastic-san volume-group update -e $sanName -g $resourceGroupName --name $volumeGroupName --network-acls virtual-network-rules[$virtualNetworkListLength] "{virtualNetworkRules:[{id:/subscriptions/subscriptionID/resourceGroups/RGName/providers/Microsoft.Network/virtualNetworks/vnetName/subnets/default, action:Allow}]}"
-```
-- ## Connect to a volume You can either create single sessions or multiple-sessions to every Elastic SAN volume based on your application's multi-threaded capabilities and performance requirements. To achieve higher IOPS and throughput to a volume and reach its maximum limits, use multiple sessions and adjust the queue depth and IO size as needed, if your workload allows.
storage Elastic San Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-create.md
description: Learn how to deploy an Azure Elastic SAN (preview) with the Azure p
Previously updated : 08/14/2023 Last updated : 08/30/2023
This article explains how to deploy and configure an elastic storage area networ
- If you're using Azure CLI, install the [latest version](/cli/azure/install-azure-cli). - Once you've installed the latest version, run `az extension add -n elastic-san` to install the extension for Elastic SAN. +
+## Preview registration
+Register your subscription with the Microsoft.ElasticSan resource provider and the preview feature using the following commands:
+
+# [Portal](#tab/azure-portal)
+If you're using the portal, follow the steps on either the PowerShell tab or the Azure CLI tab to register your subscription for the preview.
+
+# [PowerShell](#tab/azure-powershell)
+
+```azurepowershell
+Register-AzResourceProvider -ProviderNamespace Microsoft.ElasticSan
+Register-AzProviderFeature -FeatureName ElasticSanPreviewAccess -ProviderNamespace Microsoft.ElasticSan
+```
+
+It may take a few minutes for registration to complete. To confirm that you've registered, use the following commands:
+
+```azurepowershell
+Get-AzResourceProvider -ProviderNamespace Microsoft.ElasticSan
+Get-AzProviderFeature -FeatureName "ElasticSanPreviewAccess" -ProviderNamespace "Microsoft.ElasticSan"
+```
+
+# [Azure CLI](#tab/azure-cli)
+
+```azurecli
+az provider register --namespace Microsoft.ElasticSan
+az feature register --name ElasticSanPreviewAccess --namespace Microsoft.ElasticSan
+```
+
+It may take a few minutes for registration to complete. To confirm you've registered, use the following commands:
+
+```azurecli
+az provider show --namespace Microsoft.ElasticSan
+az feature show --name ElasticSanPreviewAccess --namespace Microsoft.ElasticSan
+```
++ ## Limitations [!INCLUDE [elastic-san-regions](../../../includes/elastic-san-regions.md)]
This article explains how to deploy and configure an elastic storage area networ
# [PowerShell](#tab/azure-powershell)
-The following command creates an Elastic SAN that uses locally redundant storage. To create one that uses zone-redundant storage, replace `Premium_LRS` with `Premium_ZRS`.
+Use one of these sets of sample code to create an Elastic SAN that uses locally redundant storage or zone-redundant storage. Replace all placeholder text with your own values and use the same variables in all the examples in this article:
+
+| Placeholder | Description |
+|-|-|
+| `<ResourceGroupName>` | The name of the resource group where the resources will be deployed. |
+| `<ElasticSanName>` | The name of the Elastic SAN to be created.<br>*The Elastic SAN name must be between 3 and 24 characters long. The name may only contain lowercase letters, numbers, hyphens and underscores, and must begin and end with a letter or a number. Each hyphen and underscore must be preceded and followed by an alphanumeric character.* |
+| `<ElasticSanVolumeGroupName>` | The name of the Elastic SAN Volume Group to be created. |
+| `<VolumeName>` | The name of the Elastic SAN Volume to be created. |
+| `<Location>` | The region where the new resources will be created. |
+| `<Zone>` | The availability zone where the Elastic SAN will be created.<br> *Specify the same availability zone as the zone that will host your workload.*<br>*Use only if the Elastic SAN will use locally-redundant storage.*<br> *Must be a zone supported in the target location such as `1`, `2`, or `3`.* |
+
+The following command creates an Elastic SAN that uses **locally-redundant** storage.
+
+```azurepowershell
+# Define some variables.
+$RgName = "<ResourceGroupName>"
+$EsanName = "<ElasticSanName>"
+$EsanVgName = "<ElasticSanVolumeGroupName>"
+$VolumeName = "<VolumeName>"
+$Location = "<Location>"
+$Zone = <Zone>
+
+# Create the SAN.
+New-AzElasticSAN -ResourceGroupName $RgName -Name $EsanName -AvailabilityZone $Zone -Location $Location -BaseSizeTib 100 -ExtendedCapacitySizeTiB 20 -SkuName Premium_LRS
+```
+
+The following command creates an Elastic SAN that uses **zone-redundant** storage.
```azurepowershell
-## Variables
-$rgName = "yourResourceGroupName"
-## Select the same availability zone as where you plan to host your workload
-$zone = 1
-## Select the same region as your Azure virtual network
-$region = "yourRegion"
-$sanName = "desiredSANName"
-$volGroupName = "desiredVolumeGroupName"
-$volName = "desiredVolumeName"
-
-## Create the SAN, itself
-New-AzElasticSAN -ResourceGroupName $rgName -Name $sanName -AvailabilityZone $zone -Location $region -BaseSizeTib 100 -ExtendedCapacitySizeTiB 20 -SkuName Premium_LRS
+# Define some variables.
+$RgName = "<ResourceGroupName>"
+$EsanName = "<ElasticSanName>"
+$EsanVgName = "<ElasticSanVolumeGroupName>"
+$VolumeName = "<VolumeName>"
+$Location = "<Location>"
+
+# Create the SAN
+New-AzElasticSAN -ResourceGroupName $RgName -Name $EsanName -Location $Location -BaseSizeTib 100 -ExtendedCapacitySizeTiB 20 -SkuName Premium_ZRS
```+ # [Azure CLI](#tab/azure-cli)
-The following command creates an Elastic SAN that uses locally redundant storage. To create one that uses zone-redundant storage, replace `Premium_LRS` with `Premium_ZRS`.
+Use one of these sets of sample code to create an Elastic SAN that uses locally redundant storage or zone-redundant storage. Replace all placeholder text with your own values and use the same variables in all the examples in this article:
+
+| Placeholder | Description |
+|-|-|
+| `<ResourceGroupName>` | The name of the resource group where the resources will be deployed. |
+| `<ElasticSanName>` | The name of the Elastic SAN to be created.<br>*The Elastic SAN name must be between 3 and 24 characters long. The name may only contain lowercase letters, numbers, hyphens and underscores, and must begin and end with a letter or a number. Each hyphen and underscore must be preceded and followed by an alphanumeric character.* |
+| `<ElasticSanVolumeGroupName>` | The name of the Elastic SAN Volume Group to be created. |
+| `<VolumeName>` | The name of the Elastic SAN Volume to be created. |
+| `<Location>` | The region where the new resources will be created. |
+| `<Zone>` | The availability zone where the Elastic SAN will be created.<br> *Specify the same availability zone as the zone that will host your workload.*<br>*Use only if the Elastic SAN will use locally-redundant storage.*<br> *Must be a zone supported in the target location such as `1`, `2`, or `3`.* |
+
+The following command creates an Elastic SAN that uses **locally-redundant** storage.
```azurecli
-## Variables
-sanName="yourSANNameHere"
-resourceGroupName="yourResourceGroupNameHere"
-sanLocation="desiredRegion"
-volumeGroupName="desiredVolumeGroupName"
+# Define some variables.
+RgName="<ResourceGroupName>"
+EsanName="<ElasticSanName>"
+EsanVgName="<ElasticSanVolumeGroupName>"
+VolumeName="<VolumeName>"
+Location="<Location>"
+Zone=<Zone>
+
+az elastic-san create -n $EsanName -g $RgName -l $Location --base-size-tib 100 --extended-capacity-size-tib 20 --sku "{name:Premium_LRS,tier:Premium}" --availability-zones $Zone
+```
-az elastic-san create -n $sanName -g $resourceGroupName -l $sanLocation --base-size-tib 100 --extended-capacity-size-tib 20 --sku "{name:Premium_LRS,tier:Premium}"
+The following command creates an Elastic SAN that uses **zone-redundant** storage.
+
+```azurecli
+# Define some variables.
+RgName="<ResourceGroupName>"
+EsanName="<ElasticSanName>"
+EsanVgName="<ElasticSanVolumeGroupName>"
+VolumeName="<VolumeName>"
+Location="<Location>"
+
+az elastic-san create -n $EsanName -g $RgName -l $Location --base-size-tib 100 --extended-capacity-size-tib 20 --sku "{name:Premium_ZRS,tier:Premium}"
```+ ## Create volume groups
Now that you've configured the basic settings and provisioned your storage, you
# [PowerShell](#tab/azure-powershell)
+The following sample command creates an Elastic SAN volume group in the Elastic SAN you created previously. Use the same variables and values you defined when you [created the Elastic SAN](#create-the-san).
```azurepowershell
-## Create the volume group, this script only creates one.
-New-AzElasticSanVolumeGroup -ResourceGroupName $rgName -ElasticSANName $sanName -Name $volGroupName
+# Create the volume group, this script only creates one.
+New-AzElasticSanVolumeGroup -ResourceGroupName $RgName -ElasticSANName $EsanName -Name $EsanVgName
``` # [Azure CLI](#tab/azure-cli)
+The following sample command creates an Elastic SAN volume group in the Elastic SAN you created previously. Use the same variables and values you defined when you [created the Elastic SAN](#create-the-san).
+ ```azurecli
-az elastic-san volume-group create --elastic-san-name $sanName -g $resourceGroupName -n $volumeGroupName
+az elastic-san volume-group create --elastic-san-name $EsanName -g $RgName -n $EsanVgName
```
Volumes are usable partitions of the SAN's total capacity, you must allocate a p
# [PowerShell](#tab/azure-powershell)
-In this article, we provide you the command to create a single volume. To create a batch of volumes, see [Create multiple Elastic SAN volumes](elastic-san-batch-create-sample.md).
+The following sample command creates a single volume in the Elastic SAN volume group you created previously. To create a batch of volumes, see [Create multiple Elastic SAN volumes](elastic-san-batch-create-sample.md). Use the same variables and values you defined when you [created the Elastic SAN](#create-the-san).
> [!IMPORTANT] > The volume name is part of your volume's iSCSI Qualified Name, and can't be changed once created.
-Replace `volumeName` with the name you'd like the volume to use, then run the following script:
+Use the same variables, then run the following script:
```azurepowershell
-## Create the volume, this command only creates one.
-New-AzElasticSanVolume -ResourceGroupName $rgName -ElasticSanName $sanName -VolumeGroupName $volGroupName -Name $volName -sizeGiB 2000
+# Create the volume, this command only creates one.
+New-AzElasticSanVolume -ResourceGroupName $RgName -ElasticSanName $EsanName -VolumeGroupName $EsanVgName -Name $VolumeName -sizeGiB 2000
``` # [Azure CLI](#tab/azure-cli)
New-AzElasticSanVolume -ResourceGroupName $rgName -ElasticSanName $sanName -Volu
> [!IMPORTANT] > The volume name is part of your volume's iSCSI Qualified Name, and can't be changed once created.
-Replace `$volumeName` with the name you'd like the volume to use, then run the following script:
+The following sample command creates an Elastic SAN volume in the Elastic SAN volume group you created previously. Use the same variables and values you defined when you [created the Elastic SAN](#create-the-san).
```azurecli
-az elastic-san volume create --elastic-san-name $sanName -g $resourceGroupName -v $volumeGroupName -n $volumeName --size-gib 2000
+az elastic-san volume create --elastic-san-name $EsanName -g $RgName -v $EsanVgName -n $VolumeName --size-gib 2000
```
storage Elastic San Delete https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-delete.md
# Delete an Elastic SAN Preview
-To delete an elastic storage area network (SAN), you first need to disconnect every volume in your Elastic SAN Preview from any connected hosts.
+Your Elastic storage area network (SAN) resources can be deleted at different resource levels. This article covers the overall deletion process: disconnecting iSCSI connections to volumes, deleting the volumes themselves, deleting a volume group, and deleting the elastic SAN itself. Before you delete your elastic SAN, make sure it isn't being used in any running workloads.
## Disconnect volumes from clients
iscsiadm --mode node --target **yourStorageTargetIQN** --portal **yourStorageTar
## Delete a SAN
-When your SAN has no active connections to any clients, you may delete it using the Azure portal or Azure PowerShell module.
+When your SAN has no active connections to any clients, you can delete it using the Azure portal, Azure PowerShell, or the Azure CLI. If you delete a SAN or a volume group, the corresponding child resources are deleted along with it. The delete commands for each of the resource levels are below.
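+
+For example, a complete teardown with the Azure CLI might look like the following sketch, reusing the same variables as the commands below:
+
+```azurecli
+# Delete each volume in the volume group first.
+az elastic-san volume delete -e $sanName -g $resourceGroupName -v $volumeGroupName -n $volumeName
+
+# Then delete the now-empty volume group.
+az elastic-san volume-group delete -e $sanName -g $resourceGroupName -n $volumeGroupName
+
+# Finally, delete the Elastic SAN itself.
+az elastic-san delete -n $sanName -g $resourceGroupName
+```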
-First, delete each volume.
+
+To delete volumes, run the following commands.
# [PowerShell](#tab/azure-powershell)
az elastic-san volume delete -e $sanName -g $resourceGroupName -v $volumeGroupNa
```
-Then, delete each volume group.
+To delete volume groups, run the following commands.
# [PowerShell](#tab/azure-powershell)
az elastic-san volume-group delete -e $sanName -g $resourceGroupName -n $volumeG
```
-Finally, delete the Elastic SAN itself.
+To delete the Elastic SAN itself, run the following commands.
# [PowerShell](#tab/azure-powershell)
storage Elastic San Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-introduction.md
description: An overview of Azure Elastic SAN Preview, a service that enables yo
Previously updated : 05/02/2023 Last updated : 08/15/2023
The status of items in this table may change over time.
| Encryption at rest| ✔️ | | Encryption in transit| ⛔ | | [LRS or ZRS redundancy types](elastic-san-planning.md#redundancy)| ✔️ |
-| Private endpoints | ⛔ |
+| Private endpoints | ✔️ |
| Grant network access to specific Azure virtual networks| ✔️ | | Soft delete | ⛔ | | Snapshots | ⛔ |
storage Elastic San Networking Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-networking-concepts.md
+
+ Title: Azure Elastic SAN networking Preview concepts
+description: An overview of Azure Elastic SAN Preview networking options, including storage service endpoints, private endpoints, and iSCSI.
+++ Last updated : 08/16/2023++++
+# Elastic SAN Preview networking
+
+Azure Elastic storage area network (SAN) Preview allows you to secure and control the level of access to your Elastic SAN volumes that your applications and enterprise environments require. This article describes the options for allowing users and applications access to Elastic SAN volumes from an [Azure virtual network infrastructure](../../virtual-network/vnet-integration-for-azure-services.md).
+
+You can configure Elastic SAN volume groups to only allow access over specific endpoints on specific virtual network subnets. The allowed subnets can belong to virtual networks in the same subscription or in a different subscription, including subscriptions that belong to a different Azure Active Directory tenant. Once network access is configured for a volume group, the configuration is inherited by all volumes belonging to the group.
+
+Depending on your configuration, applications on peered virtual networks or on-premises networks can also access volumes in the group. On-premises networks must be connected to the virtual network by a VPN or ExpressRoute. For more details about virtual network configurations, see [Azure virtual network infrastructure](../../virtual-network/vnet-integration-for-azure-services.md).
+
+There are two types of virtual network endpoints you can configure to allow access to an Elastic SAN volume group:
+
+- [Storage service endpoints](#storage-service-endpoints)
+- [Private endpoints](#private-endpoints)
+
+To decide which option is best for you, see [Compare Private Endpoints and Service Endpoints](../../virtual-network/vnet-integration-for-azure-services.md#compare-private-endpoints-and-service-endpoints). Generally, you should use private endpoints instead of service endpoints since Private Link offers better capabilities. For more information, see [Azure Private Link](../../private-link/private-endpoint-overview.md).
+
+After configuring endpoints, you can configure network rules to further control access to your Elastic SAN volume group. Once the endpoints and network rules have been configured, clients can connect to volumes in the group to process their workloads.
+
+## Storage service endpoints
+
+[Azure Virtual Network (VNet) service endpoints](../../virtual-network/virtual-network-service-endpoints-overview.md) provide secure and direct connectivity to Azure services using an optimized route over the Azure backbone network. Service endpoints allow you to secure your critical Azure service resources so only specific virtual networks can access them.
+
+[Cross-region service endpoints for Azure Storage](../common/storage-network-security.md#azure-storage-cross-region-service-endpoints) work between virtual networks and storage service instances in any region. With cross-region service endpoints, subnets no longer use a public IP address to communicate with any storage account, including those in another region. Instead, all the traffic from a subnet to a storage account uses a private IP address as a source IP.
+
+> [!TIP]
+> The original local service endpoints, identified as **Microsoft.Storage**, are still supported for backward compatibility, but you should create cross-region endpoints, identified as **Microsoft.Storage.Global**, for new deployments.
+>
+> Cross-region service endpoints and local ones can't coexist on the same subnet. To use cross-region service endpoints, you might have to delete existing **Microsoft.Storage** endpoints and recreate them as **Microsoft.Storage.Global**.
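+
+For example, enabling the cross-region endpoint on an existing subnet with the Azure CLI might look like the following sketch (the resource names are placeholders):
+
+```azurecli
+# Enable the cross-region storage service endpoint on the subnet.
+az network vnet subnet update \
+    --resource-group "<ResourceGroupName>" \
+    --vnet-name "<VnetName>" \
+    --name "<SubnetName>" \
+    --service-endpoints "Microsoft.Storage.Global"
+```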
+
+## Private endpoints
+
+> [!IMPORTANT]
+> For Elastic SANs using [locally-redundant storage (LRS)](elastic-san-planning.md#redundancy) as their redundancy option, private endpoints are supported in all regions where Elastic SAN is available. Private endpoints aren't currently supported for elastic SANs using [zone-redundant storage (ZRS)](elastic-san-planning.md#redundancy) as their redundancy option.
+
+Azure [Private Link](../../private-link/private-link-overview.md) enables you to access an Elastic SAN volume group securely over a [private endpoint](../../private-link/private-endpoint-overview.md) from a virtual network subnet. Traffic between your virtual network and the service traverses the Microsoft backbone network, eliminating the risk of exposing your service to the public internet. An Elastic SAN private endpoint uses a set of IP addresses from the subnet address space for each volume group. The maximum number used per endpoint is 20.
+
+Private endpoints have several advantages over service endpoints. For a complete comparison of private endpoints to service endpoints, see [Compare Private Endpoints and Service Endpoints](../../virtual-network/vnet-integration-for-azure-services.md#compare-private-endpoints-and-service-endpoints).
+
+Traffic between the virtual network and the Elastic SAN is routed over an optimal path on the Azure backbone network. Unlike service endpoints, you don't need to configure network rules to allow traffic from a private endpoint since the storage firewall only controls access through public endpoints.
+
+For details on how to configure private endpoints, see [Enable private endpoint](elastic-san-networking.md#configure-a-private-endpoint).
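+
+As a brief illustration, creating a private endpoint for a volume group with the Azure CLI might look like the following sketch; all names are placeholders, and the linked article covers the full procedure, including the approval step:
+
+```azurecli
+# Get the resource ID of the Elastic SAN.
+id=$(az elastic-san show \
+    --elastic-san-name "<ElasticSanName>" \
+    --resource-group "<ResourceGroupName>" \
+    --query 'id' \
+    --output tsv)
+
+# Create a private endpoint that targets one of the SAN's volume groups.
+az network private-endpoint create \
+    --connection-name "<PrivateLinkSvcConnectionName>" \
+    --name "<PrivateEndpointName>" \
+    --private-connection-resource-id $id \
+    --resource-group "<ResourceGroupName>" \
+    --vnet-name "<VnetName>" \
+    --subnet "<SubnetName>" \
+    --group-id "<ElasticSanVolumeGroupName>"
+```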
+
+## Virtual network rules
+
+To further secure access to your Elastic SAN volumes, you can create virtual network rules for volume groups configured with service endpoints to allow access from specific subnets. You don't need network rules to allow traffic from a private endpoint since the storage firewall only controls access through public endpoints.
+
+Each volume group supports up to 200 virtual network rules. If you delete a subnet that has been included in a network rule, it will be removed from the network rules for the volume group. If you create a new subnet with the same name, it won't have access to the volume group. To allow access, you must explicitly authorize the new subnet in the network rules for the volume group.
+
+Clients granted access via these network rules must also be granted the appropriate permissions to the Elastic SAN volume group.
+
+To learn how to define network rules, see [Managing virtual network rules](elastic-san-networking.md#configure-virtual-network-rules).
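+
+For example, allowing a subnet with the Azure CLI might look like the following sketch. All names are placeholders, and this form overwrites any existing rule list, so see the linked article for a pattern that appends a rule instead:
+
+```azurecli
+az elastic-san volume-group update \
+    -e "<ElasticSanName>" \
+    -g "<ResourceGroupName>" \
+    -n "<ElasticSanVolumeGroupName>" \
+    --network-acls "{virtualNetworkRules:[{id:<SubnetResourceId>,action:Allow}]}"
+```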
+
+## Client connections
+
+After you have enabled the desired endpoints and granted access in your network rules, you can connect to the appropriate Elastic SAN volumes using the iSCSI protocol. For more details on how to configure client connections, see [Configure access to Elastic SAN volumes from clients](elastic-san-networking.md#configure-client-connections).
+
+> [!NOTE]
+> If a connection between a virtual machine (VM) and an Elastic SAN volume is lost, the connection retries for 90 seconds before terminating. Losing a connection to an Elastic SAN volume won't cause the VM to restart.
+
+## Next steps
+
+[Configure Elastic SAN networking Preview](elastic-san-networking.md)
storage Elastic San Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-networking.md
Title: Azure Elastic SAN networking Preview
-description: An overview of Azure Elastic SAN Preview, a service that enables you to create and use network file shares in the cloud using either SMB or NFS protocols.
+ Title: How to configure Azure Elastic SAN Preview networking
+description: How to configure networking for Azure Elastic SAN Preview, a service that enables you to create and use network file shares in the cloud using either SMB or NFS protocols.
Previously updated : 05/04/2023 Last updated : 08/25/2023 -+
-# Configure Elastic SAN networking Preview
+# Configure networking for an Elastic SAN Preview
-Azure Elastic storage area network (SAN) allows you to secure and control the level of access to your Elastic SAN volumes that your applications and enterprise environments demand, based on the type and subset of networks or resources used. When network rules are configured, only applications requesting data over the specified set of networks or through the specified set of Azure resources that can access an Elastic SAN Preview. Access to your SAN's volumes are limited to resources in subnets in the same Azure Virtual Network that your SAN's volume group is configured with.
+Azure Elastic storage area network (SAN) Preview allows you to secure and control the level of access to your Elastic SAN volumes that your applications and enterprise environments require.
-Volume groups are configured to allow access only from specific subnets. The allowed subnets may belong to a virtual network in the same subscription, or those in a different subscription, including subscriptions belonging to a different Azure Active Directory tenant.
+This article describes how to configure your Elastic SAN to allow access from your Azure virtual network infrastructure.
-You must enable a [Service endpoint](../../virtual-network/virtual-network-service-endpoints-overview.md) for Azure Storage within the virtual network. The service endpoint routes traffic from the virtual network through an optimal path to the Azure Storage service. The identities of the subnet and the virtual network are also transmitted with each request. Administrators can then configure network rules for the SAN that allow requests to be received from specific subnets in a virtual network. Clients granted access via these network rules must continue to meet the authorization requirements of the Elastic SAN to access the data.
+To configure network access to your Elastic SAN:
-Each volume group supports up to 200 virtual network rules.
+> [!div class="checklist"]
+> - [Configure a virtual network endpoint](#configure-a-virtual-network-endpoint).
+> - [Configure client connections](#configure-client-connections).
+
+## Configure a virtual network endpoint
+
+You can configure your Elastic SAN volume groups to allow access only from endpoints on specific virtual network subnets. The allowed subnets can belong to virtual networks in the same subscription or in a different subscription, including a subscription that belongs to a different Azure Active Directory tenant.
+
+You can allow access to your Elastic SAN volume group from two types of Azure virtual network endpoints:
+
+- [Private endpoints](../../private-link/private-endpoint-overview.md)
+- [Storage service endpoints](../../virtual-network/virtual-network-service-endpoints-overview.md)
+
+A private endpoint uses one or more private IP addresses from your virtual network subnet to access an Elastic SAN volume group over the Microsoft backbone network. With a private endpoint, traffic between your virtual network and the volume group is secured over a private link.
+
+Virtual network service endpoints are public and accessible via the internet. You can [Configure virtual network rules](#configure-virtual-network-rules) to control access to your volume group when using storage service endpoints.
+
+Network rules only apply to the public endpoints of a volume group, not private endpoints. The process of approving the creation of a private endpoint grants implicit access to traffic from the subnet that hosts the private endpoint. You can use [Network Policies](../../private-link/disable-private-endpoint-network-policy.md) to control traffic over private endpoints if you want to refine access rules. If you want to use private endpoints exclusively, do not enable service endpoints for the volume group.
+
+To decide which type of endpoint works best for you, see [Compare Private Endpoints and Service Endpoints](../../virtual-network/vnet-integration-for-azure-services.md#compare-private-endpoints-and-service-endpoints).
+
+Once network access is configured for a volume group, the configuration is inherited by all volumes belonging to the group.
+
+The process for enabling each type of endpoint follows:
+
+- [Configure a private endpoint](#configure-a-private-endpoint)
+- [Configure an Azure Storage service endpoint](#configure-an-azure-storage-service-endpoint)
+
+### Configure a private endpoint
> [!IMPORTANT]
-> If you delete a subnet that has been included in a network rule, it will be removed from the network rules for the volume group. If you create a new subnet with the same name, it won't have access to the volume group. To allow access, you must explicitly authorize the new subnet in the network rules for the volume group.
+> - For Elastic SANs using [locally-redundant storage (LRS)](elastic-san-planning.md#redundancy) as their redundancy option, private endpoints are supported in all regions where Elastic SAN is available. Private endpoints aren't currently supported for elastic SANs using [zone-redundant storage (ZRS)](elastic-san-planning.md#redundancy) as their redundancy option.
+>
+> - Before you can create a private endpoint connection to a volume group, it must contain at least one volume.
-## Enable Storage service endpoint
+There are two steps involved in configuring a private endpoint connection:
-In your virtual network, enable the Storage service endpoint on your subnet. This ensures traffic is routed optimally to your Elastic SAN. To enable service point for Azure Storage, you must have the appropriate permissions for the virtual network. This operation can be performed by a user that has been given permission to the Microsoft.Network/virtualNetworks/subnets/joinViaServiceEndpoint/action [Azure resource provider operation](../../role-based-access-control/resource-provider-operations.md#microsoftnetwork) via a custom Azure role. An Elastic SAN and the virtual networks granted access may be in different subscriptions, including subscriptions that are a part of a different Azure AD tenant.
+> [!div class="checklist"]
+> - Creating the endpoint and the associated connection.
+> - Approving the connection.
-> [!NOTE]
-> Configuration of rules that grant access to subnets in virtual networks that are a part of a different Azure Active Directory tenant are currently only supported through PowerShell, CLI and REST APIs. These rules cannot be configured through the Azure portal, though they may be viewed in the portal.
+You can also use [Network Policies](../../private-link/disable-private-endpoint-network-policy.md) to refine access control over private endpoints.
+
+To create a private endpoint for an Elastic SAN volume group, you must have the [Elastic SAN Volume Group Owner](../../role-based-access-control/built-in-roles.md#elastic-san-volume-group-owner) role. To approve a new private endpoint connection, you must have permission to the [Azure resource provider operation](../../role-based-access-control/resource-provider-operations.md#microsoftelasticsan) `Microsoft.ElasticSan/elasticSans/PrivateEndpointConnectionsApproval/action`. Permission for this operation is included in the [Elastic SAN Network Admin](../../role-based-access-control/built-in-roles.md#elastic-san-owner) role, but it can also be granted via a custom Azure role.
+
+If you create the endpoint from a user account that has all of the necessary roles and permissions required for creation and approval, the process can be completed in one step. If not, it will require two separate steps by two different users.
+
+The Elastic SAN and the virtual network may be in different resource groups, regions and subscriptions, including subscriptions that belong to different Azure AD tenants. In these examples, we are creating the private endpoint in the same resource group as the virtual network.
# [Portal](#tab/azure-portal)
-1. Navigate to your virtual network and select **Service Endpoints**.
-1. Select **+ Add** and for **Service** select **Microsoft.Storage**.
-1. Select any policies you like, and the subnet you deploy your Elastic SAN into and select **Add**.
+Currently, you can only configure a private endpoint using PowerShell or the Azure CLI.
# [PowerShell](#tab/azure-powershell)
-```powershell
-$resourceGroupName = "yourResourceGroup"
-$vnetName = "yourVirtualNetwork"
-$subnetName = "yourSubnet"
+Deploying a private endpoint for an Elastic SAN Volume group using PowerShell involves these steps:
+
+1. Get the subnet from which applications will connect.
+1. Get the Elastic SAN Volume Group.
+1. Create a private link service connection using the volume group as input.
+1. Create the private endpoint using the subnet and the private link service connection as input.
+1. *(Optional if you're using the two-step process: creation, then approval)*: The Elastic SAN Network Admin approves the connection.
-$virtualNetwork = Get-AzVirtualNetwork -ResourceGroupName $resourceGroupName -Name $vnetName
+Use this sample code to create a private endpoint for your Elastic SAN volume group with PowerShell. Replace all placeholder text with your own values:
-$subnet = Get-AzVirtualNetworkSubnetConfig -VirtualNetwork $virtualNetwork -Name $subnetName
+| Placeholder | Description |
+|-|-|
+| `<ResourceGroupName>` | The name of the resource group where the resources are deployed. |
+| `<SubnetName>` | The name of the subnet from which access to the volume group will be configured. |
+| `<VnetName>` | The name of the virtual network that includes the subnet. |
+| `<ElasticSanVolumeGroupName>` | The name of the Elastic SAN Volume Group to which a connection is to be created. |
+| `<ElasticSanName>` | The name of the Elastic SAN that the volume group belongs to. |
+| `<PrivateLinkSvcConnectionName>` | The name of the new private link service connection to the volume group. |
+| `<PrivateEndpointName>` | The name of the new private endpoint. |
+| `<Location>` | The region where the new private endpoint will be created. |
+| `<ApprovalDesc>` | The description provided for the approval of the private endpoint connection. |
-$virtualNetwork | Set-AzVirtualNetworkSubnetConfig -Name $subnetName -AddressPrefix $subnet.AddressPrefix -ServiceEndpoint "Microsoft.Storage.Global" | Set-AzVirtualNetwork
+```powershell
+# Set the resource group name.
+$RgName = "<ResourceGroupName>"
+
+# Get the virtual network and subnet, which is input to creating the private endpoint.
+$VnetName = "<VnetName>"
+$SubnetName = "<SubnetName>"
+
+$Vnet = Get-AzVirtualNetwork -Name $VnetName -ResourceGroupName $RgName
+$Subnet = $Vnet | Select -ExpandProperty subnets | Where-Object {$_.Name -eq $SubnetName}
+
+# Get the Elastic SAN, which is input to creating the private endpoint service connection.
+$EsanName = "<ElasticSanName>"
+$EsanVgName = "<ElasticSanVolumeGroupName>"
+
+$Esan = Get-AzElasticSan -Name $EsanName -ResourceGroupName $RgName
+
+# Create the private link service connection, which is input to creating the private endpoint.
+$PLSvcConnectionName = "<PrivateLinkSvcConnectionName>"
+$EsanPlSvcConn = New-AzPrivateLinkServiceConnection -Name $PLSvcConnectionName -PrivateLinkServiceId $Esan.Id -GroupId $EsanVgName
+
+# Create the private endpoint.
+$EndpointName = '<PrivateEndpointName>'
+$Location = '<Location>'
+$PeArguments = @{
+ Name = $EndpointName
+ ResourceGroupName = $RgName
+ Location = $Location
+ Subnet = $Subnet
+ PrivateLinkServiceConnection = $EsanPlSvcConn
+}
+New-AzPrivateEndpoint @PeArguments # -ByManualRequest # (Uncomment the `-ByManualRequest` parameter if you are using the two-step process).
+```
+
+Use this sample code to approve the private link service connection if you are using the two-step process. Use the same variables from the previous code sample:
+
+```powershell
+# Get the private endpoint and associated connection.
+$PrivateEndpoint = Get-AzPrivateEndpoint -Name $EndpointName -ResourceGroupName $RgName
+$PeConnArguments = @{
+ ServiceName = $EsanName
+ ResourceGroupName = $RgName
+ PrivateLinkResourceType = "Microsoft.ElasticSan/elasticSans"
+}
+$EndpointConnection = Get-AzPrivateEndpointConnection @PeConnArguments |
+Where-Object {($_.PrivateEndpoint.Id -eq $PrivateEndpoint.Id)}
+
+# Approve the private link service connection.
+$ApprovalDesc="<ApprovalDesc>"
+Approve-AzPrivateEndpointConnection @PeConnArguments -Name $EndpointConnection.Name -Description $ApprovalDesc
+
+# Get the private endpoint connection anew and verify the connection status.
+$EndpointConnection = Get-AzPrivateEndpointConnection @PeConnArguments |
+Where-Object {($_.PrivateEndpoint.Id -eq $PrivateEndpoint.Id)}
+$EndpointConnection.PrivateLinkServiceConnectionState
``` # [Azure CLI](#tab/azure-cli)
+Deploying a private endpoint for an Elastic SAN Volume group using the Azure CLI involves three steps:
+
+1. Get the private connection resource ID of the Elastic SAN.
+1. Create the private endpoint using inputs:
+ 1. Private connection resource ID
+ 1. Volume group name
+ 1. Resource group name
+ 1. Subnet name
+ 1. Vnet name
+1. *(Optional if you're using the two-step process: creation, then approval)*: The Elastic SAN Network Admin approves the connection.
+
+Use this sample code to create a private endpoint for your Elastic SAN volume group with the Azure CLI. Uncomment the `--manual-request` parameter if you are using the two-step process. Replace all placeholder text with your own values:
+
+| Placeholder | Description |
+|-|-|
+| `<ResourceGroupName>` | The name of the resource group where the resources are deployed. |
+| `<SubnetName>` | The name of the subnet from which access to the volume group will be configured. |
+| `<VnetName>` | The name of the virtual network that includes the subnet. |
+| `<ElasticSanVolumeGroupName>` | The name of the Elastic SAN Volume Group to which a connection is to be created. |
+| `<ElasticSanName>` | The name of the Elastic SAN that the volume group belongs to. |
+| `<PrivateLinkSvcConnectionName>` | The name of the new private link service connection to the volume group. |
+| `<PrivateEndpointName>` | The name of the new private endpoint. |
+| `<Location>` | The region where the new private endpoint will be created. |
+| `<ApprovalDesc>` | The description provided for the approval of the private endpoint connection. |
+
+```azurecli
+# Define some variables.
+RgName="<ResourceGroupName>"
+VnetName="<VnetName>"
+SubnetName="<SubnetName>"
+EsanName="<ElasticSanName>"
+EsanVgName="<ElasticSanVolumeGroupName>"
+EndpointName="<PrivateEndpointName>"
+PLSvcConnectionName="<PrivateLinkSvcConnectionName>"
+Location="<Location>"
+ApprovalDesc="<ApprovalDesc>"
+
+# Get the id of the Elastic SAN.
+id=$(az elastic-san show \
+ --elastic-san-name $EsanName \
+ --resource-group $RgName \
+ --query 'id' \
+ --output tsv)
+
+# Create the private endpoint.
+az network private-endpoint create \
+ --connection-name $PLSvcConnectionName \
+ --name $EndpointName \
+ --private-connection-resource-id $id \
+ --resource-group $RgName \
+ --vnet-name $VnetName \
+ --subnet $SubnetName \
+ --location $Location \
+ --group-id $EsanVgName # --manual-request
+
+# Verify the status of the private endpoint connection.
+PLConnectionName=$(az network private-endpoint-connection list \
+ --name $EsanName \
+ --resource-group $RgName \
+ --type Microsoft.ElasticSan/elasticSans \
+ --query "[?properties.groupIds[0]=='$EsanVgName'].name" -o tsv)
+
+az network private-endpoint-connection show \
+ --resource-name $EsanName \
+ --resource-group $RgName \
+ --type Microsoft.ElasticSan/elasticSans \
+ --name $PLConnectionName
+```
+
+Use this sample code to approve the private link service connection if you are using the two-step process. Use the same variables from the previous code sample:
+ ```azurecli
-az network vnet subnet update --resource-group "myresourcegroup" --vnet-name "myvnet" --name "mysubnet" --service-endpoints "Microsoft.Storage.Global"
+az network private-endpoint-connection approve \
+ --resource-name $EsanName \
+ --resource-group $RgName \
+ --name $PLConnectionName \
+ --type Microsoft.ElasticSan/elasticSans \
+ --description $ApprovalDesc
```+
-### Available virtual network regions
+### Configure an Azure Storage service endpoint
+
+To configure an Azure Storage service endpoint from the virtual network where access is required, you must have permission to the `Microsoft.Network/virtualNetworks/subnets/joinViaServiceEndpoint/action` [Azure resource provider operation](../../role-based-access-control/resource-provider-operations.md#microsoftnetwork), which can be granted via a custom Azure role.
-Service endpoints for Azure Storage work between virtual networks and service instances in any region. They also work between virtual networks and service instances in [paired regions](../../availability-zones/cross-region-replication-azure.md) to allow continuity during a regional failover. When planning for disaster recovery during a regional outage, you should create the virtual networks in the paired region in advance. Enable service endpoints for Azure Storage, with network rules granting access from these alternative virtual networks. Then apply these rules to your zone-redundant SANs.
+Virtual network service endpoints are public and accessible via the internet. You can [Configure virtual network rules](#configure-virtual-network-rules) to control access to your volume group when using storage service endpoints.
-#### Azure Storage cross-region service endpoints
+> [!NOTE]
+> Configuration of rules that grant access to subnets in virtual networks that are a part of a different Azure Active Directory tenant are currently only supported through PowerShell, CLI and REST APIs. These rules cannot be configured through the Azure portal, though they may be viewed in the portal.
+
+# [Portal](#tab/azure-portal)
+
+1. Navigate to your virtual network and select **Service Endpoints**.
+1. Select **+ Add**.
+1. On the **Add service endpoints** screen:
+ 1. For **Service** select **Microsoft.Storage.Global** to add a [cross-region service endpoint](../common/storage-network-security.md#azure-storage-cross-region-service-endpoints).
+
+ > [!NOTE]
+ > You might see **Microsoft.Storage** listed as an available storage service endpoint. That option is for intra-region endpoints which exist for backward compatibility only. Always use cross-region endpoints unless you have a specific reason for using intra-region ones.
+
+1. For **Subnets** select all the subnets where you want to allow access.
+1. Select **Add**.
++
+# [PowerShell](#tab/azure-powershell)
+
+Use this sample code to create a storage service endpoint for your Elastic SAN volume group with PowerShell.
+
+```powershell
+# Define some variables
+$RgName = "<ResourceGroupName>"
+$VnetName = "<VnetName>"
+$SubnetName = "<SubnetName>"
+
+# Get the virtual network and subnet
+$Vnet = Get-AzVirtualNetwork -ResourceGroupName $RgName -Name $VnetName
+$Subnet = Get-AzVirtualNetworkSubnetConfig -VirtualNetwork $Vnet -Name $SubnetName
+
+# Enable the storage service endpoint
+$Vnet | Set-AzVirtualNetworkSubnetConfig -Name $SubnetName -AddressPrefix $Subnet.AddressPrefix -ServiceEndpoint "Microsoft.Storage.Global" | Set-AzVirtualNetwork
+```
+
+# [Azure CLI](#tab/azure-cli)
-Cross-region service endpoints for Azure became generally available in April of 2023. With cross-region service endpoints, subnets will no longer use a public IP address to communicate with any storage account. Instead, all the traffic from subnets to storage accounts will use a private IP address as a source IP. As a result, any storage accounts that use IP network rules to permit traffic from those subnets will no longer have an effect.
+Use this sample code to create a storage service endpoint for your Elastic SAN volume group with the Azure CLI.
+
+```azurecli
+# Define some variables
+RgName="<ResourceGroupName>"
+VnetName="<VnetName>"
+SubnetName="<SubnetName>"
+
+# Enable the storage service endpoint
+az network vnet subnet update --resource-group $RgName --vnet-name $VnetName --name $SubnetName --service-endpoints "Microsoft.Storage.Global"
+```
++
-To use cross-region service endpoints, it might be necessary to delete existing **Microsoft.Storage** endpoints and recreate them as cross-region (**Microsoft.Storage.Global**).
+#### Configure virtual network rules
-## Managing virtual network rules
+All incoming requests for data over a service endpoint are blocked by default. Only applications that request data from the allowed sources configured in your network rules can access your data.
You can manage virtual network rules for volume groups through the Azure portal, PowerShell, or CLI.
-> [!NOTE]
+> [!IMPORTANT]
> If you want to enable access to your storage account from a virtual network/subnet in another Azure AD tenant, you must use PowerShell or the Azure CLI. The Azure portal does not show subnets in other Azure AD tenants.
+>
+> If you delete a subnet that has been included in a network rule, it will be removed from the network rules for the volume group. If you create a new subnet with the same name, it won't have access to the volume group. To allow access, you must explicitly authorize the new subnet in the network rules for the volume group.
### [Portal](#tab/azure-portal)
You can manage virtual network rules for volume groups through the Azure portal,
- List virtual network rules. ```azurepowershell
- $Rules = Get-AzElasticSanVolumeGroup -ResourceGroupName $rgName -ElasticSanName $sanName -Name $volGroupName
+ $Rules = Get-AzElasticSanVolumeGroup -ResourceGroupName $RgName -ElasticSanName $sanName -Name $volGroupName
$Rules.NetworkAclsVirtualNetworkRule ```
You can manage virtual network rules for volume groups through the Azure portal,
- Add a network rule for a virtual network and subnet. ```azurepowershell
- $rule = New-AzElasticSanVirtualNetworkRuleObject -VirtualNetworkResourceId $subnet.Id -Action Allow
+ $rule = New-AzElasticSanVirtualNetworkRuleObject -VirtualNetworkResourceId $Subnet.Id -Action Allow
- Add-AzElasticSanVolumeGroupNetworkRule -ResourceGroupName $resourceGroupName -ElasticSanName $sanName -VolumeGroupName $volGroupName -NetworkAclsVirtualNetworkRule $rule
+ Add-AzElasticSanVolumeGroupNetworkRule -ResourceGroupName $RgName -ElasticSanName $sanName -VolumeGroupName $volGroupName -NetworkAclsVirtualNetworkRule $rule
``` > [!TIP]
You can manage virtual network rules for volume groups through the Azure portal,
- List information from a particular volume group, including their virtual network rules. ```azurecli
- az elastic-san volume-group show -e $sanName -g $resourceGroupName -n $volumeGroupName
+ az elastic-san volume-group show -e $sanName -g $RgName -n $volumeGroupName
``` - Enable service endpoint for Azure Storage on an existing virtual network and subnet.
You can manage virtual network rules for volume groups through the Azure portal,
```azurecli # First, get the current length of the list of virtual networks. This is needed to ensure you append a new network instead of replacing existing ones.
- virtualNetworkListLength = az elastic-san volume-group show -e $sanName -n $volumeGroupName -g $resourceGroupName --query 'length(networkAcls.virtualNetworkRules)'
+    virtualNetworkListLength=$(az elastic-san volume-group show -e $sanName -n $volumeGroupName -g $RgName --query 'length(networkAcls.virtualNetworkRules)')
- az elastic-san volume-group update -e $sanName -g $resourceGroupName --name $volumeGroupName --network-acls virtual-network-rules[$virtualNetworkListLength] "{virtualNetworkRules:[{id:/subscriptions/subscriptionID/resourceGroups/RGName/providers/Microsoft.Network/virtualNetworks/vnetName/subnets/default, action:Allow}]}"
+ az elastic-san volume-group update -e $sanName -g $RgName --name $volumeGroupName --network-acls virtual-network-rules[$virtualNetworkListLength] "{virtualNetworkRules:[{id:/subscriptions/subscriptionID/resourceGroups/RGName/providers/Microsoft.Network/virtualNetworks/$VnetName/subnets/default, action:Allow}]}"
``` - Remove a network rule. The following command removes the first network rule, modify it to remove the network rule you'd like. ```azurecli
- az elastic-san volume-group update -e $sanName -g $resourceGroupName -n $volumeGroupName --network-acls virtual-network-rules[1]=null
+ az elastic-san volume-group update -e $sanName -g $RgName -n $volumeGroupName --network-acls virtual-network-rules[1]=null
``` -+
+## Configure client connections
+
+After you have enabled the desired endpoints and granted access in your network rules, you are ready to configure your clients to connect to the appropriate Elastic SAN volumes.
+
+> [!NOTE]
+> If a connection between a virtual machine (VM) and an Elastic SAN volume is lost, the connection retries for 90 seconds before terminating. Losing a connection to an Elastic SAN volume won't cause the VM to restart.
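+
+As a rough illustration, establishing an iSCSI session from a Linux client might look like the following sketch. The target IQN and portal IP are placeholders that you'd copy from your volume's connect details, and the connection articles below provide complete scripts:
+
+```bash
+# Discover the target, then log in to create the iSCSI session.
+sudo iscsiadm --mode discovery --type sendtargets --portal <StorageTargetPortalIP>
+sudo iscsiadm --mode node --targetname <StorageTargetIQN> --portal <StorageTargetPortalIP>:3260 --login
+```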
## Next steps
-[Plan for deploying an Elastic SAN Preview](elastic-san-planning.md)
+- [Connect Azure Elastic SAN Preview volumes to an Azure Kubernetes Service cluster](elastic-san-connect-aks.md)
+- [Connect to Elastic SAN Preview volumes - Linux](elastic-san-connect-linux.md)
+- [Connect to Elastic SAN Preview volumes - Windows](elastic-san-connect-windows.md)
storage Elastic San Planning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-planning.md
description: Understand planning for an Azure Elastic SAN deployment. Learn abou
Previously updated : 05/02/2023 Last updated : 06/09/2023
Using the same example of a 100 TiB SAN that has 250,000 IOPS and 4,000 MB/s. Sa
## Networking
-In Preview, Elastic SAN supports public access from selected virtual networks, restricting access to specified virtual networks. You configure volume groups to allow network access only from specific vnet subnets. Once a volume group is configured to allow access from a subnet, this configuration is inherited by all volumes belonging to the volume group. You can then mount volumes from any clients in the subnet, with the [internet Small Computer Systems Interface](https://en.wikipedia.org/wiki/ISCSI) (iSCSI) protocol. You must enable [service endpoint for Azure Storage](../../virtual-network/virtual-network-service-endpoints-overview.md) in your virtual network before setting up the network rule on volume group.
+In the Elastic SAN Preview, you can configure access to volume groups over both public [Azure Storage service endpoints](../../virtual-network/virtual-network-service-endpoints-overview.md) and [private endpoints](../../private-link/private-endpoint-overview.md) from selected virtual network subnets. Once network access is configured for a volume group, the configuration is inherited by all volumes belonging to the group.
-If a connection between a virtual machine (VM) and an Elastic SAN volume is lost, the connection will retry for 90 seconds until terminating. Losing a connection to an Elastic SAN volume won't cause the VM to restart.
+To allow network access, you must [enable a service endpoint for Azure Storage](elastic-san-networking.md#configure-an-azure-storage-service-endpoint) or a [private endpoint](elastic-san-networking.md#configure-a-private-endpoint) in your virtual network, then [set up a network rule](elastic-san-networking.md#configure-virtual-network-rules) on the volume group for any service endpoints. You don't need a network rule to allow traffic from a private endpoint since the storage firewall only controls access through public endpoints. You can then mount volumes from [AKS](elastic-san-connect-aks.md), [Linux](elastic-san-connect-linux.md), or [Windows](elastic-san-connect-windows.md) clients in the subnet with the [internet Small Computer Systems Interface](https://en.wikipedia.org/wiki/ISCSI) (iSCSI) protocol.
## Redundancy
Elastic SAN supports the [internet Small Computer Systems Interface](https://en.
- VERIFY (16) - SYNCHRONIZE CACHE (10) - SYNCHRONIZE CACHE (16)
+- RESERVE
+- RELEASE
+- PERSISTENT RESERVE IN
+- PERSISTENT RESERVE OUT
The following iSCSI features aren't currently supported: - CHAP authorization
The following iSCSI features aren't currently supported:
For a video that goes over the general planning and deployment with a few example scenarios, see [Getting started with Azure Elastic SAN](/shows/inside-azure-for-it/getting-started-with-azure-elastic-san).
+[Networking options for Elastic SAN Preview](elastic-san-networking-concepts.md)
[Deploy an Elastic SAN Preview](elastic-san-create.md)
storage Elastic San Shared Volumes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-shared-volumes.md
+
+ Title: Use clustered applications on Azure Elastic SAN
+description: Learn more about using clustered applications on an Elastic SAN volume and sharing volumes between compute clients.
+++ Last updated : 08/15/2023++++
+# Use clustered applications on Azure Elastic SAN
+
+Azure Elastic SAN volumes can be attached to multiple compute clients simultaneously, allowing you to deploy or migrate clustered applications to Azure. To share an Elastic SAN volume, you need a cluster manager, like Windows Server Failover Cluster (WSFC) or Pacemaker, that handles cluster node communications and write locking. Elastic SAN doesn't natively offer a fully managed filesystem that can be accessed over SMB or NFS.
+
+When used as shared volumes, elastic SAN volumes can be shared across availability zones or regions. If you share a volume across availability zones, you should select [zone-redundant storage (ZRS)](elastic-san-planning.md#redundancy) when deploying your SAN. Sharing a volume in a locally redundant storage (LRS) SAN across zones reduces performance because of the increased latency between the volume and clients.
+
+## Limitations
+
+- Volumes in an Elastic SAN using [ZRS](elastic-san-planning.md#redundancy) can't be used as shared volumes.
+- Elastic SAN connection scripts can be used to attach shared volumes to virtual machines in Virtual Machine Scale Sets or virtual machines in Availability Sets. Fault domain alignment isn't supported.
+- The maximum number of sessions a shared volume supports is 128.
+ - An individual client can create multiple sessions to an individual volume for increased performance. For example, if you create 32 sessions on each of your clients, only four clients could connect to a single volume.
+
+See [Support for Azure Storage features](elastic-san-introduction.md#support-for-azure-storage-features) for other limitations of Elastic SAN.
+
+## Regional availability
+
+All regions that Elastic SAN is available in can use shared volumes.
+
+## How it works
+
+Elastic SAN shared volumes use [SCSI-3 Persistent Reservations](https://www.t10.org/members/w_spc3.htm) to allow initiators (clients) to control access to a shared elastic SAN volume. This protocol enables an initiator to reserve access to an elastic SAN volume, limit write (or read) access by other initiators, and persist the reservation on a volume beyond the lifetime of a session by default.
+
+SCSI-3 PR plays a pivotal role in maintaining data consistency and integrity within shared volumes in cluster scenarios. Compute nodes in a cluster can read or write to their attached elastic SAN volumes based on the reservation chosen by their cluster applications.
+
+## Persistent reservation flow
+
+The following diagram illustrates a sample 2-node clustered database application that uses SCSI-3 PR to enable failover from one node to the other.
+
+The flow is as follows:
+
+1. The clustered application running on both Azure VM1 and VM2 registers its intent to read or write to the elastic SAN volume.
+1. The application instance on VM1 then takes an exclusive reservation to write to the volume.
+1. This reservation is enforced on your volume and the database can now exclusively write to the volume. Any writes from the application instance on VM2 fail.
+1. If the application instance on VM1 goes down, the instance on VM2 can initiate a database failover and take over control of the volume.
+1. This reservation is now enforced on the volume, and it won't accept writes from VM1. It only accepts writes from VM2.
+1. The clustered application can complete the database failover and serve requests from VM2.
+
+The following diagram illustrates another common clustered workload consisting of multiple nodes reading data from an elastic SAN volume for running parallel processes, such as training of machine learning models.
+
+The flow is as follows:
+
+1. The clustered application running on all VMs registers its intent to read or write to the elastic SAN volume.
+1. The application instance on VM1 takes an exclusive reservation to write to the volume while opening up reads to the volume from other VMs.
+1. This reservation is enforced on the volume.
+1. All nodes in the cluster can now read from the volume. Only one node writes back results to the volume, on behalf of all nodes in the cluster.
+
+## Supported SCSI PR commands
+
+The following commands are supported with Elastic SAN volumes:
+
+To interact with the volume, start with the appropriate persistent reservation action:
+- PR_REGISTER_KEY
+- PR_REGISTER_AND_IGNORE
+- PR_GET_CONFIGURATION
+- PR_RESERVE
+- PR_PREEMPT_RESERVATION
+- PR_CLEAR_RESERVATION
+- PR_RELEASE_RESERVATION
+
+When using PR_RESERVE, PR_PREEMPT_RESERVATION, or PR_RELEASE_RESERVATION, provide one of the following persistent reservation types:
+- PR_NONE
+- PR_WRITE_EXCLUSIVE
+- PR_EXCLUSIVE_ACCESS
+- PR_WRITE_EXCLUSIVE_REGISTRANTS_ONLY
+- PR_EXCLUSIVE_ACCESS_REGISTRANTS_ONLY
+- PR_WRITE_EXCLUSIVE_ALL_REGISTRANTS
+- PR_EXCLUSIVE_ACCESS_ALL_REGISTRANTS
+
+The persistent reservation type determines the access that each node in the cluster has to the volume.
+
+|Persistent Reservation Type |Reservation Holder |Registered |Others |
+|||||
+|NO RESERVATION |N/A |Read-Write |Read-Write |
+|WRITE EXCLUSIVE |Read-Write |Read-Only |Read-Only |
+|EXCLUSIVE ACCESS |Read-Write |No Access |No Access |
+|WRITE EXCLUSIVE - REGISTRANTS ONLY |Read-Write |Read-Write |Read-Only |
+|EXCLUSIVE ACCESS - REGISTRANTS ONLY |Read-Write |Read-Write |No Access |
+|WRITE EXCLUSIVE - ALL REGISTRANTS |Read-Write |Read-Write |Read-Only |
+|EXCLUSIVE ACCESS - ALL REGISTRANTS |Read-Write |Read-Write |No Access |
+
+You also need to provide a persistent-reservation-key when using:
+- PR_RESERVE
+- PR_REGISTER_AND_IGNORE
+- PR_REGISTER_KEY
+- PR_PREEMPT_RESERVATION
+- PR_CLEAR_RESERVATION
+- PR_RELEASE_RESERVATION
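+
+Because these actions correspond to standard SCSI-3 persistent reservation operations, you can exercise them with common initiator-side tooling. The following sketch uses the Linux `sg_persist` utility from the sg3_utils package against an attached volume; the device path and reservation key are placeholders, and PROUT type `1` corresponds to a Write Exclusive reservation:
+
+```bash
+# Register a reservation key for this initiator (key value is a placeholder).
+sudo sg_persist --out --register --param-sark=0xabc123 /dev/sdc
+
+# Take a Write Exclusive reservation (PROUT type 1) with the registered key.
+sudo sg_persist --out --reserve --param-rk=0xabc123 --prout-type=1 /dev/sdc
+
+# Inspect the current reservation and the registered keys.
+sudo sg_persist --in --read-reservation /dev/sdc
+sudo sg_persist --in --read-keys /dev/sdc
+
+# Release the reservation when failover handling is complete.
+sudo sg_persist --out --release --param-rk=0xabc123 --prout-type=1 /dev/sdc
+```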
storage Files Data Protection Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/files-data-protection-overview.md
Azure Files gives you many tools to protect your data, including soft delete, sh
:::row::: :::column:::
- <iframe width="560" height="315" src="https://www.youtube.com/embed/TOHaNJpAOfc" title="How Azure Files can help protect against ransomware and accidental data loss" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>
+ > [!VIDEO https://www.youtube.com/embed/TOHaNJpAOfc]
:::column-end::: :::column::: Watch this video to learn how Azure Files advanced data protection helps enterprises stay protected against ransomware and accidental data loss while delivering greater business continuity.
storage Files Nfs Protocol https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/files-nfs-protocol.md
The status of items that appear in this table may change over time as support co
NFS Azure file shares are only offered on premium file shares, which store data on solid-state drives (SSD). The IOPS and throughput of NFS shares scale with the provisioned capacity. See the [provisioned model](understanding-billing.md#provisioned-model) section of the **Understanding billing** article to understand the formulas for IOPS, IO bursting, and throughput. The average IO latencies are low-single-digit-millisecond for small IO size, while average metadata latencies are high-single-digit-millisecond. Metadata heavy operations such as untar and workloads like WordPress may face additional latencies due to the high number of open and close operations. > [!NOTE]
-> You can use the `nconnect` Linux mount option to improve performance for NFS Azure file shares at scale. For more information, see [Improve NFS Azure file share performance with nconnect](nfs-nconnect-performance.md).
+> You can use the `nconnect` Linux mount option to improve performance for NFS Azure file shares at scale. For more information, see [Improve NFS Azure file share performance](nfs-performance.md).
## Workloads > [!IMPORTANT]
storage Files Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/files-whats-new.md
Azure Files customers can now use identity-based Kerberos authentication for Lin
### 2023 quarter 1 (January, February, March) #### Nconnect for NFS Azure file shares is generally available
-Nconnect is a client-side Linux mount option that increases performance at scale by allowing you to use more TCP connections between the Linux client and the Azure Premium Files service for NFSv4.1. With nconnect, you can increase performance at scale using fewer client machines to reduce total cost of ownership. For more information, see [Improve NFS Azure file share performance with nconnect](nfs-nconnect-performance.md).
+Nconnect is a client-side Linux mount option that increases performance at scale by allowing you to use more TCP connections between the Linux client and the Azure Premium Files service for NFSv4.1. With nconnect, you can increase performance at scale using fewer client machines to reduce total cost of ownership. For more information, see [Improve NFS Azure file share performance](nfs-performance.md).
#### Improved Azure File Sync service availability
SMB Multichannel enables SMB clients to establish multiple parallel connections
For more information, see: -- [SMB Multichannel performance in Azure Files](storage-files-smb-multichannel-performance.md)
+- [SMB Multichannel performance in Azure Files](smb-performance.md)
- [Enable SMB Multichannel](files-smb-protocol.md#smb-multichannel) - [Overview on SMB Multichannel in the Windows Server documentation](/azure-stack/hci/manage/manage-smb-multichannel)
storage Geo Redundant Storage For Large File Shares https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/geo-redundant-storage-for-large-file-shares.md
description: Azure Files geo-redundancy for large file shares (preview) signific
Previously updated : 08/13/2023 Last updated : 08/28/2023
Azure Files geo-redundancy for large file shares preview is currently available
- China North 2 - China North 3 - East Asia
+- East US
- East US 2 - France Central - France South
Azure Files geo-redundancy for large file shares preview is currently available
- Korea Central - Korea South - North Central US
+- North Europe
- Norway East - Norway West - South Africa North
Azure Files geo-redundancy for large file shares preview is currently available
- US Gov Texas - US Gov Virginia - West Central US
+- West Europe
- West India
+- West US
- West US 2
+- West US 3
## Pricing
storage Nfs Nconnect Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/nfs-nconnect-performance.md
- Title: Improve NFS Azure file share performance with nconnect
-description: Learn how using nconnect with Linux clients can improve the performance of NFS Azure file shares at scale.
--- Previously updated : 03/20/2023---
-# Improve NFS Azure file share performance with `nconnect`
-
-`Nconnect` is a client-side Linux mount option that increases performance at scale by allowing you to use more TCP connections between the client and the Azure Premium Files service for NFSv4.1, while maintaining the resiliency of platform as a service (PaaS).
-
-## Applies to
-| File share type | SMB | NFS |
-|-|:-:|:-:|
-| Standard file shares (GPv2), LRS/ZRS | ![No, this article doesn't apply to standard SMB Azure file shares LRS/ZRS.](../media/icons/no-icon.png) | ![NFS shares are only available in premium Azure file shares.](../media/icons/no-icon.png) |
-| Standard file shares (GPv2), GRS/GZRS | ![No, this article doesn't apply to standard SMB Azure file shares GRS/GZRS.](../media/icons/no-icon.png) | ![NFS is only available in premium Azure file shares.](../media/icons/no-icon.png) |
-| Premium file shares (FileStorage), LRS/ZRS | ![No, this article doesn't apply to premium SMB Azure file shares.](../media/icons/no-icon.png) | ![Yes, this article applies to premium NFS Azure file shares.](../media/icons/yes-icon.png) |
-
-## Benefits of `nconnect`
-
-With `nconnect`, you can increase performance at scale using fewer client machines to reduce total cost of ownership (TCO). `Nconnect` increases performance by using multiple TCP channels on one or more NICs, using single or multiple clients. Without `nconnect`, you'd need roughly 20 client machines in order to achieve the bandwidth scale limits (10 GiB/s) offered by the largest premium Azure file share provisioning size. With `nconnect`, you can achieve those limits using only 6-7 clients. That's almost a 70% reduction in computing cost, while providing significant improvements to IOPS and throughput at scale (see table).
-
-| **Metric (operation)** | **I/O size** | **Performance improvement** |
-|||--|
-| IOPS (write) | 64K, 1024K | 3x |
-| IOPS (read) | All I/O sizes | 2-4x |
-| Throughput (write) | 64K, 1024K | 3x |
-| Throughput (read) | All I/O sizes | 2-4x |
--
-## Prerequisites
--- The latest Linux distributions fully support `nconnect`. For older Linux distributions, ensure that the Linux kernel version is 5.3 or higher.-- Per-mount configuration is only supported when a single file share is used per storage account over a private endpoint.-
-## Performance impact of `nconnect`
-
-We achieved the following performance results when using the `nconnect` mount option with NFS Azure file shares on Linux clients at scale. For more information on how we achieved these results, see [performance test configuration](#performance-test-configuration).
---
-## Recommendations
-
-Follow these recommendations to get the best results from `nconnect`.
-
-### Set `nconnect=4`
-While Azure Files supports setting `nconnect` up to the maximum setting of 16, we recommend configuring the mount options with the optimal setting of `nconnect=4`. Currently, there are no gains beyond four channels for the Azure Files implementation of `nconnect`. In fact, exceeding four channels to a single Azure file share from a single client might adversely affect performance due to TCP network saturation.
-
-### Size virtual machines carefully
-Depending on your workload requirements, it's important to correctly size the client machines to avoid being restricted by their [expected network bandwidth](../../virtual-network/virtual-machine-network-throughput.md#expected-network-throughput). You don't need multiple NICs in order to achieve the expected network throughput. While it's common to use [general purpose VMs](../../virtual-machines/sizes-general.md) with Azure Files, various VM types are available depending on your workload needs and region availability. For more information, see [Azure VM Selector](https://azure.microsoft.com/pricing/vm-selector/).
-
-### Keep queue depth less than or equal to 64
-Queue depth is the number of pending I/O requests that a storage resource can service. We don't recommend exceeding the optimal queue depth of 64. If you do, you won't see any more performance gains. For more information, see [Queue depth](understand-performance.md#queue-depth).
-
-### `Nconnect` per-mount configuration
-If a workload requires mounting multiple shares with one or more storage accounts with different `nconnect` settings from a single client, we can't guarantee that those settings will persist when mounting over the public endpoint. Per-mount configuration is only supported when a single Azure file share is used per storage account over the private endpoint as described in Scenario 1.
-
-#### Scenario 1: (supported) `nconnect` per-mount configuration over private endpoint with multiple storage accounts
--- StorageAccount.file.core.windows.net = 10.10.10.10-- StorageAccount2.file.core.windows.net = 10.10.10.11
- - `Mount StorageAccount.file.core.windows.net:/StorageAccount/FileShare1 nconnect=4`
- - `Mount StorageAccount2.file.core.windows.net:/StorageAccount2/FileShare1`
-
-#### Scenario 2: (not supported) `nconnect` per-mount configuration over public endpoint
--- StorageAccount.file.core.windows.net = 52.239.238.8-- StorageAccount2.file.core.windows.net = 52.239.238.7
- - `Mount StorageAccount.file.core.windows.net:/StorageAccount/FileShare1 nconnect=4`
- - `Mount StorageAccount.file.core.windows.net:/StorageAccount/FileShare2`
- - `Mount StorageAccount2.file.core.windows.net:/StorageAccount2/FileShare1`
-
-> [!NOTE]
-> Even if the storage account resolves to a different IP address, we can't guarantee that address will persist because public endpoints aren't static addresses.
-
-#### Scenario 3: (not supported) `nconnect` per-mount configuration over private endpoint with multiple shares on single storage account
--- StorageAccount.file.core.windows.net = 10.10.10.10
- - `Mount StorageAccount.file.core.windows.net:/StorageAccount/FileShare1 nconnect=4`
- - `Mount StorageAccount.file.core.windows.net:/StorageAccount/FileShare2`
- - `Mount StorageAccount.file.core.windows.net:/StorageAccount/FileShare3`
-
-## Performance test configuration
-
-We used the following resources and benchmarking tools to achieve and measure the results outlined in this article.
--- **Single client:** Azure Virtual Machine ([DSv4-Series](../../virtual-machines/dv4-dsv4-series.md#dsv4-series)) with single NIC-- **OS:** Linux (Ubuntu 20.40)-- **NFS storage:** Azure Files premium file share (provisioned 30 TiB, set `nconnect=4`)-
-| **Size** | **vCPU** | **Memory** | **Temp storage (SSD)** | **Max data disks** | **Max NICs** | **Expected network bandwidth** |
-|--|--|||--|--|--|
-| Standard_D16_v4 | 16 | 64 GiB | Remote storage only | 32 | 8 | 12,500 Mbps |
-
-### Benchmarking tools and tests
-
-We used Flexible I/O Tester (FIO), a free, open-source disk I/O tool used both for benchmark and stress/hardware verification. To install FIO, follow the Binary Packages section in the [FIO README file](https://github.com/axboe/fio#readme) to install for the platform of your choice.
-
-While these tests focus on random I/O access patterns, you get similar results when using sequential I/O.
-
-#### High IOPS: 100% reads
-
-**4k I/O size - random read - 64 queue depth**
-
-```bash
-fio --ioengine=libaio --direct=1 --nrfiles=4 --numjobs=1 --runtime=1800 --time_based --bs=4k --iodepth=64 --filesize=4G --rw=randread --group_reporting --ramp_time=300
-```
-
-**8k I/O size - random read - 64 queue depth**
-
-```bash
-fio --ioengine=libaio --direct=1 --nrfiles=4 --numjobs=1 --runtime=1800 --time_based --bs=8k --iodepth=64 --filesize=4G --rw=randread --group_reporting --ramp_time=300
-```
-
-#### High throughput: 100% reads
-
-**64k I/O size - random read - 64 queue depth**
-
-```bash
-fio --ioengine=libaio --direct=1 --nrfiles=4 --numjobs=1 --runtime=1800 --time_based --bs=64k --iodepth=64 --filesize=4G --rw=randread --group_reporting --ramp_time=300
-```
-
-**1024k I/O size - 100% random read - 64 queue depth**
-
-```bash
-fio --ioengine=libaio --direct=1 --nrfiles=4 --numjobs=1 --runtime=1800 --time_based --bs=1024k --iodepth=64 --filesize=4G --rw=randread --group_reporting --ramp_time=300
-```
-
-#### High IOPS: 100% writes
-
-**4k I/O size - 100% random write - 64 queue depth**
-
-```bash
-fio --ioengine=libaio --direct=1 --nrfiles=4 --numjobs=1 --runtime=1800 --time_based --bs=4k --iodepth=64 --filesize=4G --rw=randwrite --group_reporting --ramp_time=300
-```
-
-**8k I/O size - 100% random write - 64 queue depth**
-
-```bash
-fio --ioengine=libaio --direct=1 --nrfiles=4 --numjobs=1 --runtime=1800 --time_based --bs=8k --iodepth=64 --filesize=4G --rw=randwrite --group_reporting --ramp_time=300
-```
-
-#### High throughput: 100% writes
-
-**64k I/O size - 100% random write - 64 queue depth**
-
-```bash
-fio --ioengine=libaio --direct=1 --nrfiles=4 --numjobs=1 --runtime=1800 --time_based --bs=64k --iodepth=64 --filesize=4G --rw=randwrite --group_reporting --ramp_time=300
-```
-
-**1024k I/O size - 100% random write - 64 queue depth**
-
-```bash
-fio --ioengine=libaio --direct=1 --nrfiles=4 --numjobs=1 --runtime=1800 --time_based --bs=1024k --iodepth=64 --filesize=4G --rw=randwrite --group_reporting --ramp_time=300
-```
-
-## Performance considerations
-
-When using the `nconnect` mount option, you should closely evaluate workloads that have the following characteristics:
--- Latency sensitive write workloads that are single threaded and/or use a low queue depth (less than 16)-- Latency sensitive read workloads that are single threaded and/or use a low queue depth in combination with smaller I/O sizes-
-Not all workloads require high-scale IOPS or throughout performance. For smaller scale workloads, `nconnect` might not make sense. Use the following table to decide whether `nconnect` will be advantageous for your workload. Scenarios highlighted in green are recommended, while those highlighted in red are not. Those highlighted in yellow are neutral.
--
-## See also
-- For mounting instructions, see [Mount NFS file Share to Linux](storage-files-how-to-mount-nfs-shares.md).-- For a comprehensive list of mount options, see [Linux NFS man page](https://linux.die.net/man/5/nfs).-- For information on latency, IOPS, throughput, and other performance concepts, see [Understand Azure Files performance](understand-performance.md).
storage Nfs Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/nfs-performance.md
+
+ Title: Improve NFS Azure file share performance
+description: Learn how to improve the performance of NFS Azure file shares at scale using the nconnect mount option for Linux clients.
+++ Last updated : 08/31/2023+++
+# Improve NFS Azure file share performance
+This article explains how you can improve performance for NFS Azure file shares.
+
+## Applies to
+| File share type | SMB | NFS |
+|-|:-:|:-:|
+| Standard file shares (GPv2), LRS/ZRS | ![No, this article doesn't apply to standard SMB Azure file shares LRS/ZRS.](../media/icons/no-icon.png) | ![NFS shares are only available in premium Azure file shares.](../media/icons/no-icon.png) |
+| Standard file shares (GPv2), GRS/GZRS | ![No, this article doesn't apply to standard SMB Azure file shares GRS/GZRS.](../media/icons/no-icon.png) | ![NFS is only available in premium Azure file shares.](../media/icons/no-icon.png) |
+| Premium file shares (FileStorage), LRS/ZRS | ![No, this article doesn't apply to premium SMB Azure file shares.](../media/icons/no-icon.png) | ![Yes, this article applies to premium NFS Azure file shares.](../media/icons/yes-icon.png) |
+
+## `Nconnect`
+
+`Nconnect` is a client-side Linux mount option that increases performance at scale by allowing you to use more TCP connections between the client and the Azure Premium Files service for NFSv4.1, while maintaining the resiliency of platform as a service (PaaS).
+
+## Benefits of `nconnect`
+
+With `nconnect`, you can increase performance at scale using fewer client machines to reduce total cost of ownership (TCO). `Nconnect` increases performance by using multiple TCP channels on one or more NICs, using single or multiple clients. Without `nconnect`, you'd need roughly 20 client machines in order to achieve the bandwidth scale limits (10 GiB/s) offered by the largest premium Azure file share provisioning size. With `nconnect`, you can achieve those limits using only 6-7 clients. That's almost a 70% reduction in computing cost, while providing significant improvements to IOPS and throughput at scale (see table).
+
+| **Metric (operation)** | **I/O size** | **Performance improvement** |
+|||--|
+| IOPS (write) | 64K, 1024K | 3x |
+| IOPS (read) | All I/O sizes | 2-4x |
+| Throughput (write) | 64K, 1024K | 3x |
+| Throughput (read) | All I/O sizes | 2-4x |
+
+## Prerequisites
+
+- The latest Linux distributions fully support `nconnect`. For older Linux distributions, ensure that the Linux kernel version is 5.3 or higher.
+- Per-mount configuration is only supported when a single file share is used per storage account over a private endpoint.
+
+## Performance impact of `nconnect`
+
+We achieved the following performance results when using the `nconnect` mount option with NFS Azure file shares on Linux clients at scale. For more information on how we achieved these results, see [performance test configuration](#performance-test-configuration).
+
+## Recommendations
+
+Follow these recommendations to get the best results from `nconnect`.
+
+### Set `nconnect=4`
+While Azure Files supports setting `nconnect` up to the maximum setting of 16, we recommend configuring the mount options with the optimal setting of `nconnect=4`. Currently, there are no gains beyond four channels for the Azure Files implementation of `nconnect`. In fact, exceeding four channels to a single Azure file share from a single client might adversely affect performance due to TCP network saturation.
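+
+For example, a mount command using `nconnect=4` might look like the following sketch; the storage account, share, and mount point names are placeholders, and the remaining options are the commonly documented NFSv4.1 settings for Azure file shares:
+
+```bash
+# Mount an NFS Azure file share over four TCP channels (placeholder names).
+sudo mkdir -p /mnt/myshare
+sudo mount -t nfs -o vers=4,minorversion=1,sec=sys,nconnect=4 \
+    mystorageaccount.file.core.windows.net:/mystorageaccount/myshare /mnt/myshare
+```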
+
+### Size virtual machines carefully
+Depending on your workload requirements, it's important to correctly size the client machines to avoid being restricted by their [expected network bandwidth](../../virtual-network/virtual-machine-network-throughput.md#expected-network-throughput). You don't need multiple NICs in order to achieve the expected network throughput. While it's common to use [general purpose VMs](../../virtual-machines/sizes-general.md) with Azure Files, various VM types are available depending on your workload needs and region availability. For more information, see [Azure VM Selector](https://azure.microsoft.com/pricing/vm-selector/).
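+
+If you want to survey candidate sizes from the command line instead, one option is the Azure CLI (the region and size filter below are placeholders):
+
+```bash
+# List D-series VM sizes available in a given region (placeholder values).
+az vm list-skus --location eastus --size Standard_D --output table
+```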
+
+### Keep queue depth less than or equal to 64
+Queue depth is the number of pending I/O requests that a storage resource can service. We don't recommend exceeding the optimal queue depth of 64. If you do, you won't see any more performance gains. For more information, see [Queue depth](understand-performance.md#queue-depth).
+
+### `Nconnect` per-mount configuration
+If a workload requires mounting multiple shares with one or more storage accounts with different `nconnect` settings from a single client, we can't guarantee that those settings will persist when mounting over the public endpoint. Per-mount configuration is only supported when a single Azure file share is used per storage account over the private endpoint as described in Scenario 1.
+
+#### Scenario 1: (supported) `nconnect` per-mount configuration over private endpoint with multiple storage accounts
+
+- StorageAccount.file.core.windows.net = 10.10.10.10
+- StorageAccount2.file.core.windows.net = 10.10.10.11
+ - `Mount StorageAccount.file.core.windows.net:/StorageAccount/FileShare1 nconnect=4`
+ - `Mount StorageAccount2.file.core.windows.net:/StorageAccount2/FileShare1`
+
+#### Scenario 2: (not supported) `nconnect` per-mount configuration over public endpoint
+
+- StorageAccount.file.core.windows.net = 52.239.238.8
+- StorageAccount2.file.core.windows.net = 52.239.238.7
+ - `Mount StorageAccount.file.core.windows.net:/StorageAccount/FileShare1 nconnect=4`
+ - `Mount StorageAccount.file.core.windows.net:/StorageAccount/FileShare2`
+ - `Mount StorageAccount2.file.core.windows.net:/StorageAccount2/FileShare1`
+
+> [!NOTE]
+> Even if the storage account resolves to a different IP address, we can't guarantee that address will persist because public endpoints aren't static addresses.
+
+#### Scenario 3: (not supported) `nconnect` per-mount configuration over private endpoint with multiple shares on single storage account
+
+- StorageAccount.file.core.windows.net = 10.10.10.10
+ - `Mount StorageAccount.file.core.windows.net:/StorageAccount/FileShare1 nconnect=4`
+ - `Mount StorageAccount.file.core.windows.net:/StorageAccount/FileShare2`
+ - `Mount StorageAccount.file.core.windows.net:/StorageAccount/FileShare3`
+
+## Performance test configuration
+
+We used the following resources and benchmarking tools to achieve and measure the results outlined in this article.
+
+- **Single client:** Azure Virtual Machine ([DSv4-Series](../../virtual-machines/dv4-dsv4-series.md#dsv4-series)) with single NIC
+- **OS:** Linux (Ubuntu 20.04)
+- **NFS storage:** Azure Files premium file share (provisioned 30 TiB, set `nconnect=4`)
+
+| **Size** | **vCPU** | **Memory** | **Temp storage (SSD)** | **Max data disks** | **Max NICs** | **Expected network bandwidth** |
+|--|--|||--|--|--|
+| Standard_D16_v4 | 16 | 64 GiB | Remote storage only | 32 | 8 | 12,500 Mbps |
+
+### Benchmarking tools and tests
+
+We used Flexible I/O Tester (FIO), a free, open-source disk I/O tool used both for benchmark and stress/hardware verification. To install FIO, follow the Binary Packages section in the [FIO README file](https://github.com/axboe/fio#readme) to install for the platform of your choice.
+
+While these tests focus on random I/O access patterns, you get similar results when using sequential I/O.
+
+#### High IOPS: 100% reads
+
+**4k I/O size - random read - 64 queue depth**
+
+```bash
+fio --ioengine=libaio --direct=1 --nrfiles=4 --numjobs=1 --runtime=1800 --time_based --bs=4k --iodepth=64 --filesize=4G --rw=randread --group_reporting --ramp_time=300
+```
+
+**8k I/O size - random read - 64 queue depth**
+
+```bash
+fio --ioengine=libaio --direct=1 --nrfiles=4 --numjobs=1 --runtime=1800 --time_based --bs=8k --iodepth=64 --filesize=4G --rw=randread --group_reporting --ramp_time=300
+```
+
+#### High throughput: 100% reads
+
+**64k I/O size - random read - 64 queue depth**
+
+```bash
+fio --ioengine=libaio --direct=1 --nrfiles=4 --numjobs=1 --runtime=1800 --time_based --bs=64k --iodepth=64 --filesize=4G --rw=randread --group_reporting --ramp_time=300
+```
+
+**1024k I/O size - 100% random read - 64 queue depth**
+
+```bash
+fio --ioengine=libaio --direct=1 --nrfiles=4 --numjobs=1 --runtime=1800 --time_based --bs=1024k --iodepth=64 --filesize=4G --rw=randread --group_reporting --ramp_time=300
+```
+
+#### High IOPS: 100% writes
+
+**4k I/O size - 100% random write - 64 queue depth**
+
+```bash
+fio --ioengine=libaio --direct=1 --nrfiles=4 --numjobs=1 --runtime=1800 --time_based --bs=4k --iodepth=64 --filesize=4G --rw=randwrite --group_reporting --ramp_time=300
+```
+
+**8k I/O size - 100% random write - 64 queue depth**
+
+```bash
+fio --ioengine=libaio --direct=1 --nrfiles=4 --numjobs=1 --runtime=1800 --time_based --bs=8k --iodepth=64 --filesize=4G --rw=randwrite --group_reporting --ramp_time=300
+```
+
+#### High throughput: 100% writes
+
+**64k I/O size - 100% random write - 64 queue depth**
+
+```bash
+fio --ioengine=libaio --direct=1 --nrfiles=4 --numjobs=1 --runtime=1800 --time_based --bs=64k --iodepth=64 --filesize=4G --rw=randwrite --group_reporting --ramp_time=300
+```
+
+**1024k I/O size - 100% random write - 64 queue depth**
+
+```bash
+fio --ioengine=libaio --direct=1 --nrfiles=4 --numjobs=1 --runtime=1800 --time_based --bs=1024k --iodepth=64 --filesize=4G --rw=randwrite --group_reporting --ramp_time=300
+```
+
+## Performance considerations
+
+When using the `nconnect` mount option, you should closely evaluate workloads that have the following characteristics:
+
+- Latency sensitive write workloads that are single threaded and/or use a low queue depth (less than 16)
+- Latency sensitive read workloads that are single threaded and/or use a low queue depth in combination with smaller I/O sizes
+
+Not all workloads require high-scale IOPS or throughput performance. For smaller scale workloads, `nconnect` might not make sense. Use the following table to decide whether `nconnect` will be advantageous for your workload. Scenarios highlighted in green are recommended, while those highlighted in red aren't. Those highlighted in yellow are neutral.
+
+## See also
+- For mounting instructions, see [Mount NFS file share to Linux](storage-files-how-to-mount-nfs-shares.md).
+- For a comprehensive list of mount options, see [Linux NFS man page](https://linux.die.net/man/5/nfs).
+- For information on latency, IOPS, throughput, and other performance concepts, see [Understand Azure Files performance](understand-performance.md).
storage Smb Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/smb-performance.md
+
+ Title: SMB performance - Azure Files
+description: Learn about different ways to improve performance for SMB Azure file shares, including SMB Multichannel.
+++ Last updated : 08/31/2023+++
+# Improve SMB Azure file share performance
+This article explains how you can improve performance for SMB Azure file shares, including using SMB Multichannel.
+
+## Applies to
+| File share type | SMB | NFS |
+|-|:-:|:-:|
+| Standard file shares (GPv2), LRS/ZRS | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) |
+| Standard file shares (GPv2), GRS/GZRS | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) |
+| Premium file shares (FileStorage), LRS/ZRS | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) |
+
+## Optimizing performance
+
+The following tips might help you optimize performance:
+
+- Ensure that your storage account and your client are colocated in the same Azure region to reduce network latency.
+- Use multi-threaded applications and spread load across multiple files.
+- Performance benefits of SMB Multichannel increase with the number of files distributing load.
+- Premium share performance is bound by provisioned share size (IOPS/egress/ingress) and single file limits. For details, see [Understanding provisioning for premium file shares](understanding-billing.md#provisioned-model).
+- Maximum performance of a single VM client is still bound by VM limits. For example, [Standard_D32s_v3](../../virtual-machines/dv3-dsv3-series.md) can support a maximum bandwidth of 16,000 Mbps (2 GBps). Egress from the VM (writes to storage) is metered; ingress (reads from storage) isn't. File share performance is subject to machine network limits, CPUs, internal storage, available network bandwidth, I/O sizes, parallelism, and other factors.
+- The initial test is usually a warm-up. Discard the results and repeat the test.
+- If performance is limited by a single client and workload is still below provisioned share limits, you can achieve higher performance by spreading load over multiple clients.
+
+### The relationship between IOPS, throughput, and I/O sizes
+
+**Throughput = IO size * IOPS**
+
+Larger I/O sizes drive higher throughput but incur higher latencies, resulting in fewer net IOPS. Smaller I/O sizes drive higher IOPS but result in lower net throughput and lower latencies. To learn more, see [Understand Azure Files performance](understand-performance.md).
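+
+For example, a workload driving 10,000 IOPS at a 64 KiB I/O size delivers roughly 10,000 × 64 KiB ≈ 625 MiB/s of throughput, while reaching that same 625 MiB/s at a 4 KiB I/O size would require 160,000 IOPS.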
+
+## SMB Multichannel
+SMB Multichannel enables an SMB 3.x client to establish multiple network connections to an SMB file share. Azure Files supports SMB Multichannel on premium file shares (file shares in the FileStorage storage account kind) for Windows clients. On the service side, SMB Multichannel is disabled by default in Azure Files, but there's no additional cost for enabling it.
+
+### Benefits
+SMB Multichannel enables clients to use multiple network connections that provide increased performance while lowering the cost of ownership. Increased performance is achieved through bandwidth aggregation over multiple NICs and utilizing Receive Side Scaling (RSS) support for NICs to distribute the I/O load across multiple CPUs.
+
+- **Increased throughput**:
+ Multiple connections allow data to be transferred over multiple paths in parallel, which significantly benefits workloads that use larger file sizes with larger I/O sizes and require high throughput from a single VM or a smaller set of VMs. Some of these workloads include media and entertainment for content creation or transcoding, genomics, and financial services risk analysis.
+- **Higher IOPS**:
+ NIC RSS capability allows effective load distribution across multiple CPUs with multiple connections. This helps achieve higher IOPS scale and effective utilization of VM CPUs. This is useful for workloads that have small I/O sizes, such as database applications.
+- **Network fault tolerance**:
+ Multiple connections mitigate the risk of disruption since clients no longer rely on an individual connection.
+- **Automatic configuration**:
+ When SMB Multichannel is enabled on clients and storage accounts, it allows for dynamic discovery of existing connections and can create additional connection paths as necessary.
+- **Cost optimization**:
+ Workloads can achieve higher scale from a single VM, or a small set of VMs, while connecting to premium shares. This could reduce the total cost of ownership by reducing the number of VMs necessary to run and manage a workload.
+
+To learn more about SMB Multichannel, refer to the [Windows documentation](/azure-stack/hci/manage/manage-smb-multichannel).
+
+This feature provides greater performance benefits to multi-threaded applications but typically doesn't help single-threaded applications. See the [Performance comparison](#performance-comparison) section for more details.
+
+### Limitations
+SMB Multichannel for Azure file shares currently has the following restrictions:
+- Only supported on Windows clients that are using SMB 3.1.1. Ensure SMB client operating systems are patched to recommended levels.
+- Not currently supported or recommended for Linux clients.
+- The maximum number of channels is four. For details, see [Cause 4: Number of SMB channels exceeds four](/troubleshoot/azure/azure-storage/files-troubleshoot-performance?toc=/azure/storage/files/toc.json#cause-4-number-of-smb-channels-exceeds-four).
+
+### Configuration
+SMB Multichannel only works when the feature is enabled on both client-side (your client) and service-side (your Azure storage account).
+
+On Windows clients, SMB Multichannel is enabled by default. You can verify your configuration by running the following PowerShell command:
+
+```PowerShell
+Get-SmbClientConfiguration | Select-Object -Property EnableMultichannel
+```
+
+On your Azure storage account, you'll need to enable SMB Multichannel. See [Enable SMB Multichannel](files-smb-protocol.md#smb-multichannel).
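+
+As a sketch of what that can look like with the Az.Storage PowerShell module (the resource group and storage account names are placeholders):
+
+```PowerShell
+# Enable SMB Multichannel on a premium (FileStorage) storage account.
+Update-AzStorageFileServiceProperty `
+    -ResourceGroupName "<resource-group>" `
+    -StorageAccountName "<storage-account>" `
+    -SmbMultichannelEnabled $true
+```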
+
+### Disable SMB Multichannel
+In most scenarios, particularly multi-threaded workloads, clients should see improved performance with SMB Multichannel. However, for some specific scenarios such as single-threaded workloads or for testing purposes, you might want to disable SMB Multichannel. See [Performance comparison](#performance-comparison) for more details.
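+
+For example, one way to turn the feature off on a Windows client for testing is the `Set-SmbClientConfiguration` cmdlet:
+
+```PowerShell
+# Disable SMB Multichannel on this client; set back to $true to re-enable.
+Set-SmbClientConfiguration -EnableMultichannel $false
+```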
+
+### Verify SMB Multichannel is configured correctly
+
+1. Create a new premium file share or use an existing premium share.
+1. Ensure your client supports SMB Multichannel (one or more network adapters has receive-side scaling enabled). Refer to the [Windows documentation](/azure-stack/hci/manage/manage-smb-multichannel) for more details.
+1. Mount a file share to your client.
+1. Generate load with your application.
+ A copy tool such as `robocopy /MT`, or a performance tool such as DiskSpd that reads and writes files, can generate load.
+1. Open PowerShell as an admin and use the following command:
+`Get-SmbMultichannelConnection | Format-List`
+1. Look for **MaxChannels** and **CurrentChannels** properties.
+
+### Performance comparison
+
+There are two categories of read/write workload patterns: single-threaded and multi-threaded. Most workloads use multiple files, but there could be specific use cases where the workload works with a single file in a share. This section covers different use cases and the performance impact for each of them. In general, most workloads are multi-threaded and distribute workload over multiple files so they should observe significant performance improvements with SMB Multichannel.
+
+- **Multi-threaded/multiple files**:
+ Depending on the workload pattern, you should see significant performance improvement in read and write I/Os over multiple channels. The performance gains vary from anywhere between 2x to 4x in terms of IOPS, throughput, and latency. For this category, SMB Multichannel should be enabled for the best performance.
+- **Multi-threaded/single file**:
+ For most use cases in this category, workloads will benefit from having SMB Multichannel enabled, especially if the workload has an average I/O size > ~16k. A few example scenarios that benefit from SMB Multichannel are backup or recovery of a single large file. An exception where you might want to disable SMB Multichannel is if your workload is heavy on small I/Os. In that case, you might observe a slight performance loss of ~10%. Depending on the use case, consider spreading load across multiple files, or disable the feature. See the [Configuration](#configuration) section for details.
+- **Single-threaded/multiple files or single file**:
+ For most single-threaded workloads, there are minimal performance benefits due to lack of parallelism, and there's usually a slight performance degradation of ~10% if SMB Multichannel is enabled. In this case, it's ideal to disable SMB Multichannel, with one exception: if the single-threaded workload can distribute load across multiple files and uses a larger average I/O size (> ~16k), then there should be slight performance benefits from SMB Multichannel.
+
+### Performance test configuration
+
+For the charts in this article, the following configuration was used: a single Standard D32s v3 VM with a single RSS-enabled NIC and four channels. Load was generated using diskspd.exe, multi-threaded with an I/O depth of 10, and random I/Os with various I/O sizes.
+
+| Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Max data disks | Max cached and temp storage throughput: IOPS/MBps (cache size in GiB) | Max uncached disk throughput: IOPS/MBps | Max NICs|Expected network bandwidth (Mbps) |
+||||||||||
+| [Standard_D32s_v3](../../virtual-machines/dv3-dsv3-series.md) | 32 | 128 | 256 | 32 | 64000/512 (800) | 51200/768 | 8|16000 |
+
+### Multi-threaded/multiple files with SMB Multichannel
+
+Load was generated against 10 files with various IO sizes. The scale up test results showed significant improvements in both IOPS and throughput test results with SMB Multichannel enabled. The following diagrams depict the results:
+
+- On a single NIC, for reads, performance increase of 2x-3x was observed and for writes, gains of 3x-4x in terms of both IOPS and throughput.
+- SMB Multichannel allowed IOPS and throughput to reach VM limits even with a single NIC and the four channel limit.
+- Since ingress to the VM (reads from storage) isn't metered, read throughput was able to exceed the VM's published limit of 16,000 Mbps (2 GBps); the test achieved >2.7 GiB/s. Egress from the VM (writes to storage) is still subject to VM limits.
+- Spreading load over multiple files allowed for substantial improvements.
+
+An example command used in this testing is:
+
+`diskspd.exe -W300 -C5 -r -w100 -b4k -t8 -o8 -Sh -d60 -L -c2G -Z1G z:\write0.dat z:\write1.dat z:\write2.dat z:\write3.dat z:\write4.dat z:\write5.dat z:\write6.dat z:\write7.dat z:\write8.dat z:\write9.dat`.
+
+### Multi-threaded/single file workloads with SMB Multichannel
+
+The load was generated against a single 128 GiB file. With SMB Multichannel enabled, the scale up test with multi-threaded/single files showed improvements in most cases. The following diagrams depict the results:
+
+- On a single NIC with larger average I/O size (> ~16k), there were significant improvements in both reads and writes.
+- For smaller I/O sizes, there was a slight impact of ~10% on performance with SMB Multichannel enabled. This could be mitigated by spreading the load over multiple files, or disabling the feature.
+- Performance is still bound by [single file limits](storage-files-scale-targets.md#file-scale-targets).
+
+## Next steps
+- [Enable SMB Multichannel](files-smb-protocol.md#smb-multichannel)
+- See the [Windows documentation](/azure-stack/hci/manage/manage-smb-multichannel) for SMB Multichannel
storage Storage Files How To Mount Nfs Shares https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-how-to-mount-nfs-shares.md
Azure file shares can be mounted in Linux distributions using either the Server
## Mount an NFS share using the Azure portal > [!NOTE]
-> You can use the `nconnect` Linux mount option to improve performance for NFS Azure file shares at scale. For more information, see [Improve NFS Azure file share performance with nconnect](nfs-nconnect-performance.md).
+> You can use the `nconnect` Linux mount option to improve performance for NFS Azure file shares at scale. For more information, see [Improve NFS Azure file share performance](nfs-performance.md).
1. Once the file share is created, select the share and select **Connect from Linux**. 1. Enter the mount path you'd like to use, then copy the script.
storage Storage Files Identity Auth Hybrid Identities Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-identity-auth-hybrid-identities-enable.md
Enable the Azure AD Kerberos functionality on the client machine(s) you want to
Use one of the following three methods: -- Configure this Intune [Policy CSP](/windows/client-management/mdm/policy-configuration-service-provider) and apply it to the client(s): [Kerberos/CloudKerberosTicketRetrievalEnabled](/windows/client-management/mdm/policy-csp-kerberos#kerberos-cloudkerberosticketretrievalenabled)
+- Configure this Intune [Policy CSP](/windows/client-management/mdm/policy-configuration-service-provider) and apply it to the client(s): [Kerberos/CloudKerberosTicketRetrievalEnabled](/windows/client-management/mdm/policy-csp-kerberos#kerberos-cloudkerberosticketretrievalenabled), set to 1
- Configure this group policy on the client(s): `Administrative Templates\System\Kerberos\Allow retrieving the Azure AD Kerberos Ticket Granting Ticket during logon` - Create the following registry value on the client(s): `reg add HKLM\SYSTEM\CurrentControlSet\Control\Lsa\Kerberos\Parameters /v CloudKerberosTicketRetrievalEnabled /t REG_DWORD /d 1`
Add an entry for each storage account that uses on-premises AD DS integration. U
Changes aren't instant, and require a policy refresh or a reboot to take effect.
+## Undo the client configuration to retrieve Kerberos tickets
+
+If you no longer want to use a client machine for Azure AD Kerberos authentication, you can disable the Azure AD Kerberos functionality on that machine. Use one of the following three methods:
+
+- Configure this Intune [Policy CSP](/windows/client-management/mdm/policy-configuration-service-provider) and apply it to the client(s): [Kerberos/CloudKerberosTicketRetrievalEnabled](/windows/client-management/mdm/policy-csp-kerberos#kerberos-cloudkerberosticketretrievalenabled), set to 0
+- Configure this group policy on the client(s): `Administrative Templates\System\Kerberos\Allow retrieving the Azure AD Kerberos Ticket Granting Ticket during logon`
+- Create the following registry value on the client(s): `reg add HKLM\SYSTEM\CurrentControlSet\Control\Lsa\Kerberos\Parameters /v CloudKerberosTicketRetrievalEnabled /t REG_DWORD /d 0`
+
+Changes are not instant, and require a policy refresh or a reboot to take effect.
+
+If you followed the steps in [Configure coexistence with storage accounts using on-premises AD DS](#configure-coexistence-with-storage-accounts-using-on-premises-ad-ds), you can optionally remove all host name to Kerberos realm mappings from the client machine. Use one of the following three methods:
+
+- Configure this Intune [Policy CSP](/windows/client-management/mdm/policy-configuration-service-provider) and apply it to the client(s): [Kerberos/HostToRealm](/windows/client-management/mdm/policy-csp-admx-kerberos#hosttorealm)
+- Configure this group policy on the client(s): `Administrative Template\System\Kerberos\Define host name-to-Kerberos realm mappings`
+- Run the `ksetup` Windows command on the client(s): `ksetup /delhosttorealmmap <hostname> <realmname>`
+ - For example, `ksetup /delhosttorealmmap <your storage account name>.file.core.windows.net contoso.local`
+ - You can view the list of current host name to Kerberos realm mappings by inspecting the registry key `HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa\Kerberos\HostToRealm`.
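+ - As a quick check, running `reg query HKLM\SYSTEM\CurrentControlSet\Control\Lsa\Kerberos\HostToRealm` on the client lists the currently configured mappings.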
+
+Changes aren't instant, and require a policy refresh or a reboot to take effect.
+
+> [!IMPORTANT]
+> Once this change is applied, the client(s) won't be able to connect to storage accounts that are configured for Azure AD Kerberos authentication. However, they'll still be able to connect to storage accounts configured for AD DS, without any additional configuration.
+ ## Disable Azure AD authentication on your storage account If you want to use another authentication method, you can disable Azure AD authentication on your storage account by using the Azure portal, Azure PowerShell, or Azure CLI.
storage Storage Files Migration Nas Cloud Databox https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-migration-nas-cloud-databox.md
To save time, you should proceed with this phase while you wait for your DataBox
:::row::: :::column:::
- <iframe width="560" height="315" src="https://www.youtube-nocookie.com/embed/jd49W33DxkQ" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
+ > [!VIDEO https://www.youtube-nocookie.com/embed/jd49W33DxkQ]
:::column-end::: :::column::: This video is a guide and demo for how to securely expose Azure file shares directly to information workers and apps in five simple steps.</br>
storage Storage Files Migration Robocopy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-migration-robocopy.md
With the information in this phase, you will be able to decide how your servers
:::row::: :::column:::
- <iframe width="560" height="315" src="https://www.youtube-nocookie.com/embed/jd49W33DxkQ" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
+ > [!VIDEO https://www.youtube-nocookie.com/embed/jd49W33DxkQ]
:::column-end::: :::column::: This video is a guide and demo for how to securely expose Azure file shares directly to information workers and apps in five simple steps.</br>
storage Storage Files Migration Storsimple 8000 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-migration-storsimple-8000.md
Your registered on-premises Windows Server instance must be ready and connected
:::row::: :::column:::
- <iframe width="560" height="315" src="https://www.youtube-nocookie.com/embed/jd49W33DxkQ" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
+ > [!VIDEO https://www.youtube-nocookie.com/embed/jd49W33DxkQ]
:::column-end::: :::column::: This video is a guide and demo for how to securely expose Azure file shares directly to information workers and apps in five simple steps.</br>
storage Storage Files Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-monitoring.md
description: Learn how to monitor the performance and availability of Azure File
-+ Last updated 08/07/2023 ms.devlang: csharp
storage Storage Files Networking Dns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-networking-dns.md
description: Learn how to configure DNS forwarding for Azure Files.
Previously updated : 07/02/2021 Last updated : 08/29/2023 - # Configuring DNS forwarding for Azure Files
We strongly recommend that you read [Planning for an Azure Files deployment](sto
| Premium file shares (FileStorage), LRS/ZRS | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ## Overview
-Azure Files provides two main types of endpoints for accessing Azure file shares:
+Azure Files provides the following types of endpoints for accessing Azure file shares:
+ - Public endpoints, which have a public IP address and can be accessed from anywhere in the world. - Private endpoints, which exist within a virtual network and have a private IP address from within the address space of that virtual network.
+- Service endpoints, which restrict access to the public endpoint to specific virtual networks. You still access the storage account via the public IP address, but access is only possible from the locations you specify in your configuration.
Public and private endpoints exist on the Azure storage account. A storage account is a management construct that represents a shared pool of storage in which you can deploy multiple file shares, as well as other storage resources, such as blob containers or queues.
Every storage account has a fully qualified domain name (FQDN). For the public c
By default, `storageaccount.file.core.windows.net` resolves to the public endpoint's IP address. The public endpoint for a storage account is hosted on an Azure storage cluster which hosts many other storage accounts' public endpoints. When you create a private endpoint, a private DNS zone is linked to the virtual network it was added to, with a CNAME record mapping `storageaccount.file.core.windows.net` to an A record entry for the private IP address of your storage account's private endpoint. This enables you to use `storageaccount.file.core.windows.net` FQDN within the virtual network and have it resolve to the private endpoint's IP address.
-Since our ultimate objective is to access the Azure file shares hosted within the storage account from on-premises using a network tunnel such as a VPN or ExpressRoute connection, you must configure your on-premises DNS servers to forward requests made to the Azure Files service to the Azure private DNS service. To accomplish this, you need to set up *conditional forwarding* of `*.core.windows.net` (or the appropriate storage endpoint suffix for the US Government, Germany, or China national clouds) to a DNS server hosted within your Azure virtual network. This DNS server will then recursively forward the request on to Azure's private DNS service that will resolve the fully qualified domain name of the storage account to the appropriate private IP address.
+Because our ultimate objective is to access the Azure file shares hosted within the storage account from on-premises using a network tunnel such as a VPN or ExpressRoute connection, you must configure your on-premises DNS servers to forward requests made to the Azure Files service to the Azure private DNS service.
+
+You can configure DNS forwarding one of two ways:
+
+- **Use DNS server VMs:** Set up *conditional forwarding* of `*.core.windows.net` (or the appropriate storage endpoint suffix for the US Government, Germany, or China national clouds) to a DNS server virtual machine hosted within your Azure virtual network. This DNS server will then recursively forward the request on to Azure's private DNS service, which will resolve the FQDN of the storage account to the appropriate private IP address. This is a one-time step for all the Azure file shares hosted within your virtual network.
-Configuring DNS forwarding for Azure Files will require running a virtual machine to host a DNS server to forward the requests, however this is a one time step for all the Azure file shares hosted within your virtual network. Additionally, this is not an exclusive requirement to Azure Files - any Azure service that supports private endpoints that you want to access from on-premises can make use of the DNS forwarding you will configure in this guide, including Azure Blob storage, Azure SQL, and Azure Cosmos DB.
+- **Use Azure DNS Private Resolver:** If you don't want to deploy a VM-based DNS server, you can accomplish the same task using Azure DNS Private Resolver.
-This guide shows the steps for configuring DNS forwarding for the Azure storage endpoint, so in addition to Azure Files, DNS name resolution requests for all of the other Azure storage services (Azure Blob storage, Azure Table storage, Azure Queue storage, etc.) will be forwarded to Azure's private DNS service. Additional endpoints for other Azure services can also be added if desired. DNS forwarding back to your on-premises DNS servers will also be configured, enabling cloud resources within your virtual network (such as a DFS-N server) to resolve on-premises machine names.
+In addition to Azure Files, DNS name resolution requests for other Azure storage services (Azure Blob storage, Azure Table storage, Azure Queue storage, etc.) will be forwarded to Azure's private DNS service. You can add additional endpoints for other Azure services if desired.
## Prerequisites
-Before you can setup DNS forwarding to Azure Files, you need to have completed the following steps:
+Before you can set up DNS forwarding to Azure Files, you'll need the following:
-- A storage account containing an Azure file share you would like to mount. To learn how to create a storage account and an Azure file share, see [Create an Azure file share](storage-how-to-create-file-share.md).-- A private endpoint for the storage account. To learn how to create a private endpoint for Azure Files, see [Create a private endpoint](storage-files-networking-endpoints.md#create-a-private-endpoint).
+- A storage account containing an Azure file share you'd like to mount. To learn how to create a storage account and an Azure file share, see [Create an Azure file share](storage-how-to-create-file-share.md).
+- A private endpoint for the storage account. See [Create a private endpoint](storage-files-networking-endpoints.md#create-a-private-endpoint).
- The [latest version](/powershell/azure/install-azure-powershell) of the Azure PowerShell module.
-> [!Important]
-> This guide assumes you're using the DNS server within Windows Server in your on-premises environment. All of the steps described in this guide are possible with any DNS server, not just the Windows DNS Server.
+## Configure DNS forwarding using VMs
+If you already have DNS servers in place within your Azure virtual network, or if you prefer to deploy your own DNS server VMs by whatever methodology your organization uses, you can configure DNS with the built-in DNS server PowerShell cmdlets.
-## Configuring DNS forwarding
-If you already have DNS servers in place within your Azure virtual network, or if you simply prefer to deploy your own virtual machines to be DNS servers by whatever methodology your organization uses, you can configure DNS with the built-in DNS server PowerShell cmdlets.
-On your on-premises DNS servers, create a conditional forwarder using `Add-DnsServerConditionalForwarderZone`. This conditional forwarder must be deployed on all of your on-premises DNS servers to be effective at properly forwarding traffic to Azure. Remember to replace `<azure-dns-server-ip>` with the appropriate IP addresses for your environment.
+> [!Important]
+> This guide assumes you're using the DNS server within Windows Server in your on-premises environment. All of the steps described here are possible with any DNS server, not just the Windows DNS Server.
+
+On your on-premises DNS servers, create a conditional forwarder using `Add-DnsServerConditionalForwarderZone`. This conditional forwarder must be deployed on all of your on-premises DNS servers to be effective at properly forwarding traffic to Azure. Remember to replace the `<azure-dns-server-ip>` entries with the appropriate IP addresses for your environment.
```powershell
$vnetDnsServers = "<azure-dns-server-ip>", "<azure-dns-server-ip>"

$storageAccountEndpoint = Get-AzContext | `
    Select-Object -ExpandProperty Environment | `
    Select-Object -ExpandProperty StorageEndpointSuffix

Add-DnsServerConditionalForwarderZone `
    -Name $storageAccountEndpoint `
    -MasterServers $vnetDnsServers
```
-On the DNS servers within your Azure virtual network, you also will need to put a forwarder in place such that requests for the storage account DNS zone are directed to the Azure private DNS service, which is fronted by the reserved IP address `168.63.129.16`. (Remember to populate `$storageAccountEndpoint` if you're running the commands within a different PowerShell session.)
+On the DNS servers within your Azure virtual network, you'll also need to put a forwarder in place so that requests for the storage account DNS zone are directed to the Azure private DNS service, which is fronted by the reserved IP address `168.63.129.16`. (Remember to populate `$storageAccountEndpoint` if you're running the commands within a different PowerShell session.)
```powershell
Add-DnsServerConditionalForwarderZone `
    -Name $storageAccountEndpoint `
    -MasterServers "168.63.129.16"
```
+## Configure DNS forwarding using Azure DNS Private Resolver
+If you prefer not to deploy DNS server VMs, you can accomplish the same task using Azure DNS Private Resolver. See [Create an Azure DNS Private Resolver using the Azure portal](../../dns/dns-private-resolver-get-started-portal.md).
+
+There's no difference in how you configure your on-premises DNS servers, except that instead of pointing to the IP addresses of the DNS servers in Azure, you point to the resolver's inbound endpoint IP address. The resolver doesn't require any configuration, as it will forward queries to the Azure private DNS server by default. If a private DNS zone is linked to the VNet where the resolver is deployed, the resolver will be able to reply with records from that DNS zone.
+
+> [!Warning]
+> When configuring forwarders for the *core.windows.net* zone, all queries for this public domain will be forwarded to your Azure DNS infrastructure. This causes an issue when you try to access a storage account of a different tenant that has been configured with private endpoints, because Azure DNS will answer the query for the storage account public name with a CNAME that doesn't exist in your private DNS zone. A workaround for this issue is to create a cross-tenant private endpoint in your environment to connect to that storage account.
+
+To configure DNS forwarding using Azure DNS Private Resolver, run this script on your on-premises DNS servers. Replace `<resolver-ip>` with the resolver's inbound endpoint IP address.
+
+```powershell
+$privateResolver = "<resolver-ip>"
+
+$storageAccountEndpoint = Get-AzContext | `
+ Select-Object -ExpandProperty Environment | `
+ Select-Object -ExpandProperty StorageEndpointSuffix
+
+Add-DnsServerConditionalForwarderZone `
+ -Name $storageAccountEndpoint `
+ -MasterServers $privateResolver
+```
+
## Confirm DNS forwarders
-Before testing to see if the DNS forwarders have successfully been applied, we recommend clearing the DNS cache on your local workstation using `Clear-DnsClientCache`. To test to see if you can successfully resolve the fully qualified domain name of your storage account, use `Resolve-DnsName` or `nslookup`.
+Before testing to see if the DNS forwarders have successfully been applied, we recommend clearing the DNS cache on your local workstation using `Clear-DnsClientCache`. To test if you can successfully resolve the FQDN of your storage account, use `Resolve-DnsName` or `nslookup`.
```powershell
# Replace storageaccount.file.core.windows.net with the appropriate FQDN for your storage account.
-# Note the proper suffix (core.windows.net) depends on the cloud you're deployed in.
+# Note that the proper suffix (core.windows.net) depends on the cloud you're deployed in.
Resolve-DnsName -Name storageaccount.file.core.windows.net
```
```
Section    : Answer
IP4Address : 192.168.0.4
```
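If you prefer `nslookup`, an equivalent check looks like this (same placeholder FQDN as above; clear the cache first so you see a fresh answer):

```powershell
# Flush the local resolver cache, then query the storage account FQDN.
Clear-DnsClientCache
nslookup storageaccount.file.core.windows.net
```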
-If you're mounting an SMB file share, you can also use the following `Test-NetConnection` command to see that a TCP connection can be successfully made to your storage account.
+If you're mounting an SMB file share, you can also use the `Test-NetConnection` command to confirm that a TCP connection can be successfully made to your storage account.
```PowerShell
Test-NetConnection -ComputerName storageaccount.file.core.windows.net -CommonTCPPort SMB
```
storage Storage Files Networking Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-networking-overview.md
Configuring public and private endpoints for Azure Files is done on the top-leve
:::row:::
   :::column:::
- <iframe width="560" height="315" src="https://www.youtube-nocookie.com/embed/jd49W33DxkQ" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
+ > [!VIDEO https://www.youtube-nocookie.com/embed/jd49W33DxkQ]
   :::column-end:::
   :::column:::
      This video is a guide and demo for how to securely expose Azure file shares directly to information workers and apps in five simple steps. The sections below provide links and additional context to the documentation referenced in the video.
storage Storage Files Scale Targets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-scale-targets.md
File scale targets apply to individual files stored in Azure file shares.
<sup>1 Applies to read and write I/Os (typically smaller I/O sizes less than or equal to 64 KiB). Metadata operations, other than reads and writes, may be lower. These are soft limits, and throttling can occur beyond these limits.</sup>
-<sup>2 Subject to machine network limits, available bandwidth, I/O sizes, queue depth, and other factors. For details see [SMB Multichannel performance](./storage-files-smb-multichannel-performance.md).</sup>
+<sup>2 Subject to machine network limits, available bandwidth, I/O sizes, queue depth, and other factors. For details see [SMB Multichannel performance](./smb-performance.md).</sup>
<sup>3 Azure Files supports 10,000 open handles on the root directory and 2,000 open handles per file and directory within the share. The number of active users supported per share is dependent on the applications that are accessing the share. If your applications are not opening a handle on the root directory, Azure Files can support more than 10,000 active users per share.</sup>
storage Storage Files Smb Multichannel Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-smb-multichannel-performance.md
- Title: SMB Multichannel performance - Azure Files
-description: Learn how SMB Multichannel can improve performance for Azure file shares.
- Previously updated : 02/22/2023
-# SMB Multichannel performance
-SMB Multichannel enables an SMB 3.x client to establish multiple network connections to an SMB file share. Azure Files supports SMB Multichannel on premium file shares (file shares in the FileStorage storage account kind) for Windows clients. On the service side, SMB Multichannel is disabled by default in Azure Files, but there's no additional cost for enabling it.
-
-## Applies to
-| File share type | SMB | NFS |
-|-|:-:|:-:|
-| Standard file shares (GPv2), LRS/ZRS | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) |
-| Standard file shares (GPv2), GRS/GZRS | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) |
-| Premium file shares (FileStorage), LRS/ZRS | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) |
-
-## Benefits
-SMB Multichannel enables clients to use multiple network connections that provide increased performance while lowering the cost of ownership. Increased performance is achieved through bandwidth aggregation over multiple NICs and utilizing Receive Side Scaling (RSS) support for NICs to distribute the I/O load across multiple CPUs.
-- **Increased throughput**:
- Multiple connections allow data to be transferred over multiple paths in parallel and thereby significantly benefits workloads that use larger file sizes with larger I/O sizes, and require high throughput from a single VM or a smaller set of VMs. Some of these workloads include media and entertainment for content creation or transcoding, genomics, and financial services risk analysis.
-- **Higher IOPS**:
- NIC RSS capability allows effective load distribution across multiple CPUs with multiple connections. This helps achieve higher IOPS scale and effective utilization of VM CPUs. This is useful for workloads that have small I/O sizes, such as database applications.
-- **Network fault tolerance**:
- Multiple connections mitigate the risk of disruption since clients no longer rely on an individual connection.
-- **Automatic configuration**:
- When SMB Multichannel is enabled on clients and storage accounts, it allows for dynamic discovery of existing connections, and can create additional connection paths as necessary.
-- **Cost optimization**:
- Workloads can achieve higher scale from a single VM, or a small set of VMs, while connecting to premium shares. This could reduce the total cost of ownership by reducing the number of VMs necessary to run and manage a workload.
-
-To learn more about SMB Multichannel, refer to the [Windows documentation](/azure-stack/hci/manage/manage-smb-multichannel).
-
-This feature provides greater performance benefits to multi-threaded applications but typically doesn't help single-threaded applications. See the [Performance comparison](#performance-comparison) section for more details.
-
-## Limitations
-SMB Multichannel for Azure file shares currently has the following restrictions:
-- Only supported on Windows clients that are using SMB 3.1.1. Ensure SMB client operating systems are patched to recommended levels.
-- Not currently supported or recommended for Linux clients.
-- Maximum number of channels is four; for details see [here](/troubleshoot/azure/azure-storage/files-troubleshoot-performance?toc=/azure/storage/files/toc.json#cause-4-number-of-smb-channels-exceeds-four).
-## Configuration
-SMB Multichannel only works when the feature is enabled on both client-side (your client) and service-side (your Azure storage account).
-
-On Windows clients, SMB Multichannel is enabled by default. You can verify your configuration by running the following PowerShell command:
-
-```PowerShell
-Get-SmbClientConfiguration | Select-Object -Property EnableMultichannel
-```
-
-On your Azure storage account, you'll need to enable SMB Multichannel. See [Enable SMB Multichannel](files-smb-protocol.md#smb-multichannel).
-
-### Disable SMB Multichannel
-In most scenarios, particularly multi-threaded workloads, clients should see improved performance with SMB Multichannel. However, for some specific scenarios such as single-threaded workloads or for testing purposes, you might want to disable SMB Multichannel. See [Performance comparison](#performance-comparison) for more details.
-
-## Verify SMB Multichannel is configured correctly
-
-1. Create a new premium file share or use an existing premium share.
-1. Ensure your client supports SMB Multichannel (one or more network adapters has receive-side scaling enabled). Refer to the [Windows documentation](/azure-stack/hci/manage/manage-smb-multichannel) for more details.
-1. Mount a file share to your client.
-1. Generate load with your application.
- A copy tool such as robocopy /MT, or any performance tool such as Diskspd to read/write files can generate load.
-1. Open PowerShell as an admin and use the following command:
-`Get-SmbMultichannelConnection |fl`
-1. Look for **MaxChannels** and **CurrentChannels** properties.
-## Performance comparison
-
-There are two categories of read/write workload patterns: single-threaded and multi-threaded. Most workloads use multiple files, but there could be specific use cases where the workload works with a single file in a share. This section covers different use cases and the performance impact for each of them. In general, most workloads are multi-threaded and distribute workload over multiple files so they should observe significant performance improvements with SMB Multichannel.
-- **Multi-threaded/multiple files**:
- Depending on the workload pattern, you should see significant performance improvement in read and write I/Os over multiple channels. The performance gains vary from anywhere between 2x to 4x in terms of IOPS, throughput, and latency. For this category, SMB Multichannel should be enabled for the best performance.
-- **Multi-threaded/single file**:
- For most use cases in this category, workloads will benefit from having SMB Multichannel enabled, especially if the workload has an average I/O size > ~16k. A few example scenarios that benefit from SMB Multichannel are backup or recovery of a single large file. An exception where you might want to disable SMB Multichannel is if your workload is heavy on small I/Os. In that case, you might observe a slight performance loss of ~10%. Depending on the use case, consider spreading load across multiple files, or disable the feature. See the [Configuration](#configuration) section for details.
-- **Single-threaded/multiple files or single file**:
- For most single-threaded workloads, there are minimum performance benefits due to lack of parallelism. Usually there is a slight performance degradation of ~10% if SMB Multichannel is enabled. In this case, it's ideal to disable SMB Multichannel, with one exception. If the single-threaded workload can distribute load across multiple files and uses on an average larger I/O size (> ~16k), then there should be slight performance benefits from SMB Multichannel.
-
-### Performance test configuration
-
-For the charts in this article, the following configuration was used: A single Standard D32s v3 VM with a single RSS enabled NIC with four channels. Load was generated using diskspd.exe, multiple-threaded with IO depth of 10, and random I/Os with various I/O sizes.
-
-| Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Max data disks | Max cached and temp storage throughput: IOPS/MBps (cache size in GiB) | Max uncached disk throughput: IOPS/MBps | Max NICs|Expected network bandwidth (Mbps) |
-||||||||||
-| [Standard_D32s_v3](../../virtual-machines/dv3-dsv3-series.md) | 32 | 128 | 256 | 32 | 64000/512 (800) | 51200/768 | 8|16000 |
-### Multi-threaded/multiple files with SMB Multichannel
-
-Load was generated against 10 files with various IO sizes. The scale up test results showed significant improvements in both IOPS and throughput test results with SMB Multichannel enabled. The following diagrams depict the results:
-- On a single NIC, for reads, performance increase of 2x-3x was observed and for writes, gains of 3x-4x in terms of both IOPS and throughput.
-- SMB Multichannel allowed IOPS and throughput to reach VM limits even with a single NIC and the four channel limit.
-- Since egress (or reads to storage) is not metered, read throughput was able to exceed the VM published limit of 16,000 Mbps (2 GiB/s). The test achieved >2.7 GiB/s. Ingress (or writes to storage) are still subject to VM limits.
-- Spreading load over multiple files allowed for substantial improvements.
-An example command used in this testing is:
-
-`diskspd.exe -W300 -C5 -r -w100 -b4k -t8 -o8 -Sh -d60 -L -c2G -Z1G z:\write0.dat z:\write1.dat z:\write2.dat z:\write3.dat z:\write4.dat z:\write5.dat z:\write6.dat z:\write7.dat z:\write8.dat z:\write9.dat `.
-
-### Multi-threaded/single file workloads with SMB Multichannel
-
-The load was generated against a single 128 GiB file. With SMB Multichannel enabled, the scale up test with multi-threaded/single files showed improvements in most cases. The following diagrams depict the results:
-- On a single NIC with larger average I/O size (> ~16k), there were significant improvements in both reads and writes.
-- For smaller I/O sizes, there was a slight impact of ~10% on performance with SMB Multichannel enabled. This could be mitigated by spreading the load over multiple files, or disabling the feature.
-- Performance is still bound by [single file limits](storage-files-scale-targets.md#file-scale-targets).
-## Optimizing performance
-
-The following tips might help you optimize performance:
-- Ensure that your storage account and your client are colocated in the same Azure region to reduce network latency.
-- Use multi-threaded applications and spread load across multiple files.
-- Performance benefits of SMB Multichannel increase with the number of files distributing load.
-- Premium share performance is bound by provisioned share size (IOPS/egress/ingress) and single file limits. For details, see [Understanding provisioning for premium file shares](understanding-billing.md#provisioned-model).
-- Maximum performance of a single VM client is still bound to VM limits. For example, [Standard_D32s_v3](../../virtual-machines/dv3-dsv3-series.md) can support a maximum bandwidth of 16,000 Mbps (or 2 GBps); egress from the VM (writes to storage) is metered, ingress (reads from storage) is not. File share performance is subject to machine network limits, CPUs, internal storage, available network bandwidth, IO sizes, parallelism, as well as other factors.
-- The initial test is usually a warm-up; discard its results and repeat the test.
-- If performance is limited by a single client and workload is still below provisioned share limits, higher performance can be achieved by spreading load over multiple clients.
-### The relationship between IOPS, throughput, and I/O sizes
-
-**Throughput = IO size * IOPS**
-
-Higher I/O sizes drive higher throughput and will have higher latencies, resulting in a lower number of net IOPS. Smaller I/O sizes will drive higher IOPS but will result in lower net throughput and latencies. To learn more, see [Understand Azure Files performance](understand-performance.md).
-
-## Next steps
-- [Enable SMB Multichannel](files-smb-protocol.md#smb-multichannel)
-- See the [Windows documentation](/azure-stack/hci/manage/manage-smb-multichannel) for SMB Multichannel
storage Storage How To Create File Share https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-how-to-create-file-share.md
To create an Azure file share, you need to answer three questions about how you
Premium file shares are available with local redundancy and zone redundancy in a subset of regions. To find out if premium file shares are available in your region, see [products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=storage). For more information, see [Azure Files redundancy](files-redundancy.md).
- **What size file share do you need?**
- In local and zone redundant storage accounts, Azure file shares can span up to 100 TiB. However, in geo- and geo-zone redundant storage accounts, Azure file shares can span only up to 5 TiB.
+ In local and zone redundant storage accounts, Azure file shares can span up to 100 TiB. However, in geo- and geo-zone redundant storage accounts, Azure file shares can span only up to 5 TiB unless you sign up for [Geo-redundant storage for large file shares (preview)](geo-redundant-storage-for-large-file-shares.md).
For more information on these three choices, see [Planning for an Azure Files deployment](storage-files-planning.md).
To create a FileStorage storage account, ensure the **Performance** radio button
:::image type="content" source="media/storage-how-to-create-file-share/files-create-smb-share-performance-premium.png" alt-text="A screenshot of the performance radio button with premium selected and account kind with FileStorage selected.":::

The other basics fields are independent from the choice of storage account:
-- **Storage account name**: The name of the storage account resource to be created. This name must be globally unique. The storage account name will be used as the server name when you mount an Azure file share via SMB. Storage account names must be between 3 and 24 characters in length and may contain numbers and lowercase letters only.
+- **Storage account name**: The name of the storage account resource to be created. This name must be globally unique. The storage account name will be used as the server name when you mount an Azure file share via SMB. Storage account names must be between 3 and 24 characters in length. They may contain numbers and lowercase letters only. (A quick way to check a candidate name is sketched after this list.)
- **Location**: The region for the storage account to be deployed into. This can be the region associated with the resource group, or any other available region.
-- **Replication**: Although this is labeled replication, this field actually means **redundancy**; this is the desired redundancy level: locally redundancy (LRS), zone redundancy (ZRS), geo-redundancy (GRS), and geo-zone-redundancy (GZRS). This drop-down list also contains read-access geo-redundancy (RA-GRS) and read-access geo-zone redundancy (RA-GZRS), which do not apply to Azure file shares; any file share created in a storage account with these selected will actually be either geo-redundant or geo-zone-redundant, respectively.
+- **Replication**: Although this is labeled replication, this field actually means **redundancy**; this is the desired redundancy level: local redundancy (LRS), zone redundancy (ZRS), geo-redundancy (GRS), and geo-zone-redundancy (GZRS). This drop-down list also contains read-access geo-redundancy (RA-GRS) and read-access geo-zone redundancy (RA-GZRS), which don't apply to Azure file shares; any file share created in a storage account with these selected will be either geo-redundant or geo-zone-redundant, respectively.
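As an illustration of the naming rules above, here's a hypothetical pre-flight check in PowerShell (the regex simply encodes the 3-24 character, lowercase-letters-and-digits rule):

```powershell
# Validate a proposed storage account name: 3-24 characters, lowercase letters and numbers only.
$proposedName = "mystorageacct123"

if ($proposedName -cmatch '^[a-z0-9]{3,24}$') {
    "'$proposedName' satisfies the storage account naming rules."
} else {
    "'$proposedName' is not a valid storage account name."
}
```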
#### Networking

The networking section allows you to configure networking options. These settings are optional for the creation of the storage account and can be configured later if desired. For more information on these options, see [Azure Files networking considerations](storage-files-networking-overview.md).
The advanced section contains several important settings for Azure file shares:
:::image type="content" source="media/storage-how-to-create-file-share/files-create-smb-share-secure-transfer.png" alt-text="A screenshot of secure transfer enabled in the advanced settings for the storage account.":::

-- **Large file shares**: This field enables the storage account for file shares spanning up to 100 TiB. Enabling this feature will limit your storage account to only locally redundant and zone redundant storage options. Once a GPv2 storage account has been enabled for large file shares, you cannot disable the large file share capability. FileStorage storage accounts (storage accounts for premium file shares) don't have this option, as all premium file shares can scale up to 100 TiB.
+- **Large file shares**: This field enables the storage account for file shares spanning up to 100 TiB. Enabling this feature will limit your storage account to only locally redundant and zone redundant storage options. Once a GPv2 storage account has been enabled for large file shares, you can't disable the large file share capability. FileStorage storage accounts (storage accounts for premium file shares) don't have this option, as all premium file shares can scale up to 100 TiB.
:::image type="content" source="media/storage-how-to-create-file-share/files-create-smb-share-large-file-shares.png" alt-text="A screenshot of the large file share setting in the storage account's advanced blade.":::
The other settings that are available in the advanced tab (hierarchical namespac
Tags are name/value pairs that enable you to categorize resources and view consolidated billing by applying the same tag to multiple resources and resource groups. These are optional and can be applied after storage account creation.

#### Review + create
-The final step to create the storage account is to select the **Create** button on the **Review + create** tab. This button won't be available if all of the required fields for a storage account are not filled.
+The final step to create the storage account is to select the **Create** button on the **Review + create** tab. This button won't be available unless all the required fields for a storage account are filled.
# [PowerShell](#tab/azure-powershell)
-To create a storage account using PowerShell, we will use the `New-AzStorageAccount` cmdlet. This cmdlet has many options; only the required options are shown. To learn more about advanced options, see the [`New-AzStorageAccount` cmdlet documentation](/powershell/module/az.storage/new-azstorageaccount).
+To create a storage account using PowerShell, use the `New-AzStorageAccount` cmdlet. This cmdlet has many options; only the required options are shown. To learn more about advanced options, see the [`New-AzStorageAccount` cmdlet documentation](/powershell/module/az.storage/new-azstorageaccount).
-To simplify the creation of the storage account and subsequent file share, we will store several parameters in variables. You may replace the variable contents with whatever values you wish; however, note that the storage account name must be globally unique.
+To simplify creating the storage account and subsequent file share, we'll store several parameters in variables. You may replace the variable contents with whatever values you wish; however, note that the storage account name must be globally unique.
```powershell
$resourceGroupName = "myResourceGroup"
$storageAccountName = "mystorageacct$(Get-Random)"
$region = "westus2"
```
-To create a storage account capable of storing standard Azure file shares, we will use the following command. The `-SkuName` parameter relates to the type of redundancy desired; if you desire a geo-redundant or geo-zone-redundant storage account, you must also remove the `-EnableLargeFileShare` parameter.
+To create a storage account capable of storing standard Azure file shares, use the following command. The `-SkuName` parameter relates to the type of redundancy desired; if you desire a geo-redundant or geo-zone-redundant storage account, remove the `-EnableLargeFileShare` parameter.
```powershell
$storAcct = New-AzStorageAccount `
    -ResourceGroupName $resourceGroupName `
    -Name $storageAccountName `
    -SkuName Standard_LRS `
    -Location $region `
    -Kind StorageV2 `
    -EnableLargeFileShare
```
-To create a storage account capable of storing premium Azure file shares, we will use the following command. Note that the `-SkuName` parameter has changed to include both `Premium` and the desired redundancy level of locally redundant (`LRS`). The `-Kind` parameter is `FileStorage` instead of `StorageV2` because premium file shares must be created in a FileStorage storage account instead of a GPv2 storage account.
+To create a storage account capable of storing premium Azure file shares, use the following command. Note that the `-SkuName` parameter has changed to include both `Premium` and the desired redundancy level of locally redundant storage (`LRS`). The `-Kind` parameter is `FileStorage` instead of `StorageV2` because premium file shares must be created in a FileStorage storage account instead of a GPv2 storage account.
```powershell
$storAcct = New-AzStorageAccount `
    -ResourceGroupName $resourceGroupName `
    -Name $storageAccountName `
    -SkuName Premium_LRS `
    -Location $region `
    -Kind FileStorage
```

# [Azure CLI](#tab/azure-cli)
-To create a storage account using Azure CLI, we will use the az storage account create command. This command has many options; only the required options are shown. To learn more about the advanced options, see the [`az storage account create` command documentation](/cli/azure/storage/account).
+To create a storage account using Azure CLI, use the az storage account create command. This command has many options; only the required options are shown. To learn more about the advanced options, see the [`az storage account create` command documentation](/cli/azure/storage/account).
-To simplify the creation of the storage account and subsequent file share, we will store several parameters in variables. You may replace the variable contents with whatever values you wish, however note that the storage account name must be globally unique.
+To simplify the creation of the storage account and subsequent file share, we'll store several parameters in variables. You may replace the variable contents with whatever values you wish; however, note that the storage account name must be globally unique.
```azurecli
resourceGroupName="myResourceGroup"
storageAccountName="mystorageacct$RANDOM"
region="westus2"
```
-To create a storage account capable of storing standard Azure file shares, we will use the following command. The `--sku` parameter relates to the type of redundancy desired; if you desire a geo-redundant or geo-zone-redundant storage account, you must also remove the `--enable-large-file-share` parameter.
+To create a storage account capable of storing standard Azure file shares, use the following command. The `--sku` parameter relates to the type of redundancy desired; if you desire a geo-redundant or geo-zone-redundant storage account, remove the `--enable-large-file-share` parameter.
```azurecli
az storage account create \
    --resource-group $resourceGroupName \
    --name $storageAccountName \
    --location $region \
    --kind StorageV2 \
    --sku Standard_LRS \
    --enable-large-file-share \
    --output none
```
-To create a storage account capable of storing premium Azure file shares, we will use the following command. Note that the `--sku` parameter has changed to include both `Premium` and the desired redundancy level of locally redundant (`LRS`). The `--kind` parameter is `FileStorage` instead of `StorageV2` because premium file shares must be created in a FileStorage storage account instead of a GPv2 storage account.
+To create a storage account capable of storing premium Azure file shares, use the following command. Note that the `--sku` parameter has changed to include both `Premium` and the desired redundancy level of locally redundant storage (`LRS`). The `--kind` parameter is `FileStorage` instead of `StorageV2` because premium file shares must be created in a FileStorage storage account instead of a GPv2 storage account.
```azurecli
az storage account create \
    --resource-group $resourceGroupName \
    --name $storageAccountName \
    --location $region \
    --kind FileStorage \
    --sku Premium_LRS \
    --output none
```
az storage account update --name <yourStorageAccountName> -g <yourResourceGroup>
## Create a file share

Once you've created your storage account, you can create your file share. This process is mostly the same regardless of whether you're using a premium file share or a standard file share. You should consider the following differences:
-Standard file shares may be deployed into one of the standard tiers: transaction optimized (default), hot, or cool. This is a per file share tier that is not affected by the **blob access tier** of the storage account (this property only relates to Azure Blob storage - it does not relate to Azure Files at all). You can change the tier of the share at any time after it has been deployed. Premium file shares cannot be directly converted to any standard tier.
+Standard file shares can be deployed into one of the standard tiers: transaction optimized (default), hot, or cool. This is a per file share tier that isn't affected by the **blob access tier** of the storage account (this property only relates to Azure Blob storage - it doesn't relate to Azure Files at all). You can change the tier of the share at any time after it has been deployed. Premium file shares can't be directly converted to any standard tier.
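Changing a standard share's tier can be done in place; as a sketch with the Az.Storage module (resource names are placeholders):

```powershell
# Move an existing standard file share to the cool tier.
Update-AzRmStorageShare `
    -ResourceGroupName "myResourceGroup" `
    -StorageAccountName "mystorageacct" `
    -Name "myshare" `
    -AccessTier Cool
```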
> [!Important]
> You can move file shares between tiers within GPv2 storage account types (transaction optimized, hot, and cool). Share moves between tiers incur transactions: moving from a hotter tier to a cooler tier will incur the cooler tier's write transaction charge for each file in the share, while a move from a cooler tier to a hotter tier will incur the cool tier's read transaction charge for each file in the share.

The **quota** property means something slightly different between premium and standard file shares:

-- For standard file shares, it's an upper boundary of the Azure file share, beyond which end-users cannot go. If a quota isn't specified, standard file shares can span up to 100 TiB (or 5 TiB if the large file shares property is not set for a storage account). If you did not create your storage account with large file shares enabled, see [Enable large files shares on an existing account](#enable-large-file-shares-on-an-existing-account) for how to enable 100 TiB file shares.
+- For standard file shares, it's an upper boundary of the Azure file share. If a quota isn't specified, standard file shares can span up to 100 TiB (or 5 TiB if the large file shares property isn't set for a storage account). If you didn't create your storage account with large file shares enabled, see [Enable large files shares on an existing account](#enable-large-file-shares-on-an-existing-account) for how to enable 100 TiB file shares.
- For premium file shares, quota means **provisioned size**. The provisioned size is the amount that you will be billed for, regardless of actual usage. The IOPS and throughput available on a premium file share is based on the provisioned size. For more information on how to plan for a premium file share, see [provisioning premium file shares](understanding-billing.md#provisioned-model).
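To make the quota property concrete, here's a minimal sketch that creates a share with an explicit quota via Az.Storage (names are placeholders; on a premium account the same parameter sets the provisioned size):

```powershell
# Create a file share with a 1 TiB quota.
New-AzRmStorageShare `
    -ResourceGroupName "myResourceGroup" `
    -StorageAccountName "mystorageacct" `
    -Name "myshare" `
    -QuotaGiB 1024
```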
storage Storage How To Use Files Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-how-to-use-files-windows.md
You don't need to mount the Azure file share to a particular drive letter to use
`\\storageaccountname.file.core.windows.net\myfileshare`
-You'll be asked to sign in with your network credentials. Sign in with the Azure subscription under which you've created the storage account and file share.
+You'll be asked to sign in with your network credentials. Sign in with the Azure subscription under which you've created the storage account and file share. If you don't get prompted for credentials, you can add them with the following command:
+
+`cmdkey /add:StorageAccountName.file.core.windows.net /user:localhost\StorageAccountName /pass:StorageAccountKey`
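With the credentials cached, mapping a drive letter should succeed without a prompt; for example (using the same placeholder names):

```powershell
# Map the file share to drive Z: using the stored credentials.
net use Z: \\StorageAccountName.file.core.windows.net\myfileshare
```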
For Azure Government Cloud, simply change the servername to:
storage Understand Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/understand-performance.md
Whether you're assessing performance requirements for a new or existing workload
- **Workload duration and frequency:** Short (minutes) and infrequent (hourly) workloads will be less likely to achieve the upper performance limits of standard file shares compared to long-running, frequently occurring workloads. On premium file shares, workload duration is helpful when determining the correct performance profile to use based on the provisioning size. Depending on how long the workload needs to [burst](understanding-billing.md#bursting) for and how long it spends below the baseline IOPS, you can determine if you're accumulating enough bursting credits to consistently satisfy your workload at peak times. Finding the right balance will reduce costs compared to over-provisioning the file share. A common mistake is to run performance tests for only a few minutes, which is often misleading. To get a realistic view of performance, be sure to test at a sufficiently high frequency and duration.

-- **Workload parallelization:** For workloads that perform operations in parallel, such as through multiple threads, processes, or application instances on the same client, premium file shares provide a clear advantage over standard file shares: SMB Multichannel. See [SMB Multichannel](storage-files-smb-multichannel-performance.md) for more information.
+- **Workload parallelization:** For workloads that perform operations in parallel, such as through multiple threads, processes, or application instances on the same client, premium file shares provide a clear advantage over standard file shares: SMB Multichannel. See [Improve SMB Azure file share performance](smb-performance.md) for more information.
- **API operation distribution**: Is the workload metadata heavy with file open/close operations? This is common for workloads that are performing read operations against a large number of files. See [Metadata or namespace heavy workload](/troubleshoot/azure/azure-storage/files-troubleshoot-performance?toc=/azure/storage/files/toc.json#cause-2-metadata-or-namespace-heavy-workload).
storage Understanding Billing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/understanding-billing.md
Azure Files provides two distinct billing models: provisioned and pay-as-you-go.
:::row:::
   :::column:::
- <iframe width="560" height="315" src="https://www.youtube-nocookie.com/embed/m5_-GsKv4-o" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
+ > [!VIDEO https://www.youtube-nocookie.com/embed/m5_-GsKv4-o]
   :::column-end:::
   :::column:::
      This video is an interview that discusses the basics of the Azure Files billing model. It covers how to optimize Azure file shares to achieve the lowest costs possible, and how to compare Azure Files to other file storage offerings on-premises and in the cloud.
The following table illustrates a few examples of these formulae for the provisi
| 51,200 | 54,200 | Up to 100,000 | 164,880,000 | 5,220 |
| 102,400 | 100,000 | Up to 100,000 | 0 | 10,340 |
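As a sanity check, the rows above are consistent with the premium provisioning formulae (baseline IOPS = 3,000 + 1 IOPS per provisioned GiB, burst up to 3× baseline, both capped at 100,000; burst credits = (burst − baseline) × 3,600; throughput ≈ 100 + 0.1 × provisioned GiB in MiB/s). A quick PowerShell sketch, with the formulae inferred from the table:

```powershell
# Reproduce the 51,200 GiB row from the provisioning formulae inferred above.
$provisionedGiB = 51200

$baselineIops    = [Math]::Min(3000 + $provisionedGiB, 100000)  # 54,200
$burstIops       = [Math]::Min(3 * $baselineIops, 100000)       # 100,000 (the "Up to" column)
$burstCredits    = ($burstIops - $baselineIops) * 3600          # 164,880,000 (one hour of bursting)
$throughputMiBps = 100 + 0.1 * $provisionedGiB                  # 5,220

"{0} baseline IOPS, {1} burst IOPS, {2} credits, {3} MiB/s" -f $baselineIops, $burstIops, $burstCredits, $throughputMiBps
```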
-Effective file share performance is subject to machine network limits, available network bandwidth, IO sizes, and parallelism, among many other factors. To achieve maximum benefit from parallelization, we recommend enabling SMB Multichannel on premium file shares. To learn more see [enable SMB Multichannel](files-smb-protocol.md#smb-multichannel). Refer to [SMB Multichannel performance](storage-files-smb-multichannel-performance.md) and [performance troubleshooting guide](/troubleshoot/azure/azure-storage/files-troubleshoot-performance?toc=/azure/storage/files/toc.json) for some common performance issues and workarounds.
+Effective file share performance is subject to machine network limits, available network bandwidth, IO sizes, and parallelism, among many other factors. To achieve maximum benefit from parallelization, we recommend enabling SMB Multichannel on premium file shares. To learn more, see [enable SMB Multichannel](files-smb-protocol.md#smb-multichannel). Refer to [SMB Multichannel performance](smb-performance.md) and the [performance troubleshooting guide](/troubleshoot/azure/azure-storage/files-troubleshoot-performance?toc=/azure/storage/files/toc.json) for some common performance issues and workarounds.
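For reference, enabling SMB Multichannel on the service side is a single cmdlet; a minimal sketch, assuming the Az.Storage module and a FileStorage (premium) account with placeholder names:

```powershell
# Enable SMB Multichannel on the storage account's file service.
Update-AzStorageFileServiceProperty `
    -ResourceGroupName "myResourceGroup" `
    -StorageAccountName "mystorageacct" `
    -EnableSmbMultichannel $true
```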
### Bursting

If your workload needs the extra performance to meet peak demand, your share can use burst credits to go above the share's baseline IOPS limit to give the share the performance it needs to meet the demand. Bursting is automated and operates based on a credit system. Bursting works on a best effort basis, and the burst limit isn't a guarantee.
storage Assign Azure Role Data Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/queues/assign-azure-role-data-access.md
Title: Assign an Azure role for access to queue data
description: Learn how to assign permissions for queue data to an Azure Active Directory security principal with Azure role-based access control (Azure RBAC). Azure Storage supports built-in and Azure custom roles for authentication and authorization via Azure AD.
Last updated 07/13/2021
storage Authorize Access Azure Active Directory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/queues/authorize-access-azure-active-directory.md
Title: Authorize access to queues using Active Directory
description: Authorize access to Azure queues using Azure Active Directory (Azure AD). Assign Azure roles for access rights. Access data with an Azure AD account.
Last updated 03/17/2023
storage Authorize Data Operations Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/queues/authorize-data-operations-cli.md
Title: Choose how to authorize access to queue data with Azure CLI
description: Specify how to authorize data operations against queue data with the Azure CLI. You can authorize data operations using Azure AD credentials, with the account access key, or with a shared access signature (SAS) token.
Last updated 02/10/2021
storage Authorize Data Operations Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/queues/authorize-data-operations-portal.md
Title: Choose how to authorize access to queue data in the Azure portal
description: When you access queue data using the Azure portal, the portal makes requests to Azure Storage under the covers. These requests to Azure Storage can be authenticated and authorized using either your Azure AD account or the storage account access key.
Last updated 12/13/2021
storage Authorize Data Operations Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/queues/authorize-data-operations-powershell.md
Title: Run PowerShell commands with Azure AD credentials to access queue data
description: PowerShell supports signing in with Azure AD credentials to run commands on Azure Queue Storage data. An access token is provided for the session and used to authorize calling operations. Permissions depend on the Azure role assigned to the Azure AD security principal.
Last updated 02/10/2021
storage Scalability Targets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/queues/scalability-targets.md
Title: Scalability and performance targets for Queue Storage
description: Learn about scalability and performance targets for Queue Storage.
Last updated 12/18/2019
storage Security Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/queues/security-recommendations.md
Title: Security recommendations for Queue Storage
description: Learn about security recommendations for Queue Storage. Implementing this guidance will help you fulfill your security obligations as described in our shared responsibility model.
Last updated 05/12/2022
storage Komprise Quick Start Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/solution-integration/validated-partners/data-management/komprise-quick-start-guide.md
Title: Analyze and migrate your file data to Azure with Komprise Intelligent Data Manager
-description: Getting started guide to implement Komprise Intelligent Data Manager. Guide shows how to analyze your file infrastructure, and migrate your data to Azure Files, Azure NetApp Files, Azure Blob Storage, or any available ISV NAS solution
- Previously updated : 05/20/2021
+description: Getting started guide to implement Komprise Intelligent Data Manager. This guide shows how to analyze your file infrastructure and migrate your data to Azure Files, Azure NetApp Files, Azure Blob Storage, or any available ISV NAS solution.
+ Last updated : 06/01/2023
-# Analyze and migrate to Azure with Komprise
+# Quickstart: Analyze and migrate to Azure with Komprise
-This article helps you integrate the Komprise Intelligent Data Management infrastructure with Azure storage services. It includes considerations and implementation guidance on how to analyze, and migrate your data.
+This article describes using Komprise Intelligent Data Management to identify and place the right data in the right Azure storage service.
-Komprise provides analytics and insights into file, and object data stored in network attached storage systems (NAS), and object stores, both on-premises and in the cloud. It enables migration of data to Azure storage services like Azure Files, Azure NetApp Files, Azure Blob Storage, or other ISV NAS solution. Learn more on [verified partner solutions for primary and secondary storage](../primary-secondary-storage/partner-overview.md).
+Moving data can be intimidating. There are often numerous challenges, beginning with identifying what to move, matching data value to the proper storage class, and then moving it promptly, all while minimizing end-user impact.
-Common use cases for Komprise include:
+Komprise makes it easy to move your data to Azure storage services like Azure Files, Azure NetApp Files, Azure Blob Storage or other ISV NAS solutions.
-- Analysis of unstructured file and object data to gain insights for data management, movement, positioning, archiving, protection, and confinement,
-- Migration of file data to Azure Files, Azure NetApp Files, or ISV NAS solution,
-- Policy based tiering and archiving of file data to Azure Blob Storage while retaining transparent access from the original NAS solution and allowing native object access in Azure,
-- Copy file data to Azure Blob Storage on configurable schedules while retaining native object access in Azure
-- Migration of object data to Azure Blob Storage,
-- Tiering and data lifecycle management of objects across Hot, Cool, and Archive tiers of Azure Blob Storage based on last access time
+Learn more about other ISV NAS solutions in the [verified partner solutions article](/azure/storage/solution-integration/validated-partners/primary-secondary-storage/partner-overview).
-## Reference architecture
+This article reviews where to get started, along with considerations and recommendations for moving data to Azure. Use the following links to jump to the topics that matter most to you.
+- [Know first, move smarter analyze, tier, move what matters](#know-first-move-smarter-analyze-tier-move-what-matters)
+- [Assessing network and storage performance](#assessing-network-and-storage-performance)
+- [Intelligent data management architecture](#intelligent-data-management-architecture)
+- [Getting started with Komprise](#getting-started-with-komprise)
+- [Getting started with Azure](#getting-started-with-azure)
+- [Migration guide](#migration-guide)
+- [Deployment instructions for migrating object data](#deployment-instructions-for-migrating-object-data)
+- [Migration API](#migration-api)
+- [Next steps](#next-steps)
-The following diagram provides a reference architecture for on-premises to Azure and in-Azure deployments.
+## Where to start
+Visit [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/komprise_inc.azure_data_migration_program?tab=PlansAndPrice) to learn more about Komprise and Azure together. Learn how you can get an introduction, reach out to ask questions, arrange to meet your local Komprise field team or sign up for a trial.
-The following diagram provides a reference architecture for migrating cloud and on-premises object workloads to Azure Blob Storage.
+[Visit Komprise directly](https://www.komprise.com/azure-migration) for more information about our solution, including white papers and reference architectures!
+## Know first, move smarter (analyze, tier, move what matters)
-Komprise is a software solution that is easily deployed in a virtual environment. The solutions consist of:
-- **Director** - The administration console for the Komprise Grid. It is used to configure the environment, monitor activities, view reports and graphs, and set policies.
-- **Observers** - Manage and analyze shares, summarize reports, communicate with the Director, and handle object and NFS data traffic.
-- **Proxies** - Simplify and accelerate SMB/CIFS data flow, easily scale to meet performance requirements of a growing environment.
+Komprise provides quick insights into your unstructured data across all storage platforms with Plan Analysis and Deep Analytics capabilities. Plan Analysis immediately gives summary results with usage graphs, and the Analysis Activities page surfaces important file system issues discovered. Deep Analytics allows customers to dig deeper into understanding their data with custom querying capabilities and graphs to find select data sets, orphaned files, and more.
-## Before you begin
+Understanding your data is the first step in selecting the appropriate Azure storage service. It's important to know the type of data, amount, file count, owners, and other information to help determine if the data should be in Azure Files or Azure NetApp Files. This information can also help you understand if the data should be migrated or tiered to Azure Blob for long-term storage and significant cost savings.
-Upfront planning will help in migrating the data with less risk.
+With a quick install of a local Komprise data Observer, in 30 minutes or less you can see:
-### Get started with Azure
+- Immediate results on capacity, file count, and temperature with the Komprise heat map. The data can be filtered to show results for all shares, groups, or individual shares.
+- Komprise includes a cost comparison tool that lets you edit cost models for your current on-premises storage and Azure storage solutions to determine the best savings and return on investment.
+- Usage graphs provide quick summaries and comparisons of file types, file sizes, file counts, top owners, groups, shares, and directories. Use this information to determine the order of the migration and assess the business impact of migrating data.
-Microsoft offers a framework to follow to get you started with Azure. The [Cloud Adoption Framework](/azure/architecture/cloud-adoption/) (CAF) is a detailed approach to enterprise digital transformation and comprehensive guide to planning a production grade cloud adoption. The CAF includes a step-by-step [Azure setup guide](/azure/cloud-adoption-framework/ready/azure-setup-guide/) to help you get up and running quickly and securely. You can find an interactive version in the [Azure portal](https://portal.azure.com/?feature.quickstart=true#blade/Microsoft_Azure_Resources/QuickstartCenterBlade). You'll find sample architectures, specific best practices for deploying applications, and free training resources to put you on the path to Azure expertise.
+ :::image type="content" source="./media/komprise-quick-start-guide-v2/sample-analysis-charts.png" alt-text="Analysis by file type and storage consumed" lightbox="./media/komprise-quick-start-guide-v2/sample-analysis-charts.png":::
-### Considerations for migrations
+- Look for opportunities to clean up expired data, which reduces the migration effort and the cost of the destination storage.
+- Identify cold data, not accessed in six months or more, that could be cost-effectively tiered or moved to Azure Blob storage.
+- The Analysis Activities page helps identify potential issues upfront, before moving data. Issues you don't want to encounter after starting to move data include:
+ - Files or directories with restricted access or resolution issues
+ - Data sets too large for the destination storage service in file count or capacity
+ - Data sets with an exceedingly large number of tiny files or with a large number of empty directories
+ - Slow-performing shares
+ - Lack of destination support for sparse files or symbolic links
-Several aspects are important when considering migrations of file data to Azure. Before proceeding learn more:
+Komprise knows it can be challenging to find just the right data across billions of files. Komprise Deep Analytics builds a Global File Index of all of your files' metadata, giving you a unified way to search, tag, and create select data sets across storage silos. You can identify orphan data, or data by name, location, owner, date, application type, or extension. Administrators can use these queries and tagged data sets to move, copy, confine, or feed your data pipelines. They can also set data workflow policies. This allows businesses to use other Azure cloud data services like personal data identification, running cloud data analytics, and culling and feeding edge data to cloud data lakes.
-- [Storage migration overview](../../../common/storage-migration-overview.md)
-- latest supported features by Komprise Intelligent Data Management in [migration tools comparison matrix](./migration-tools-comparison.md).
+Learn more at [Komprise Deep Analytics](https://www.komprise.com/use-cases/deep-analytics/)
-Remember, you'll require enough network capacity to support migrations without impacting production applications. This section outlines the tools and techniques that are available to assess your network needs.
-#### Determine unutilized internet bandwidth
+Use all this information when selecting the appropriate Azure storage service. Komprise helps identify key factors like shares, protocol, logical size, file count, data type and performance type.
-It's important to know how much typically unutilized bandwidth (or *headroom*) you have available on a day-to-day basis. To help you assess whether you can meet your goals for:
+- Azure Files
+ - [Azure Files Documentation Site](/azure/storage/files/)
+ - [Planning for an Azure Files deployment](/azure/storage/files/storage-files-planning)
+- Azure Block Blob
+ - [Azure Blob Documentation Site](/azure/storage/blobs/)
+ - [Access Tiers for Azure Blob](/azure/storage/blobs/access-tiers-overview?source=recommendations)
+- Azure Storage Accounts
+ - [Azure Storage Account Overview](/azure/storage/common/storage-account-overview?toc=/azure/storage/blobs/toc.json)
+ - [Create a Storage Account](/azure/storage/common/storage-account-create)
+- Azure NetApp Files
+ - [Azure NetApp Files Documentation Site](/azure/azure-netapp-files/)
+ - [Service Levels for Azure NetApp Files](/azure/azure-netapp-files/azure-netapp-files-service-levels)
-- initial time for migrations when you're not using Azure Data Box for offline method
-- time required to do incremental resync before final switch-over to the target file service
+## Assessing network and storage performance
-Use the following methods to identify the bandwidth headroom to Azure that is free to consume.
+Migrations move only as fast as the infrastructure allows. It's vital to know the combined performance abilities of the network and storage systems together. Measuring network and storage performance individually may not reveal hidden limitations in port configurations, routing, file system overloading, and more.
-- If you're an existing Azure ExpressRoute customer, view your [circuit usage](../../../../expressroute/expressroute-monitoring-metrics-alerts.md#circuits-metrics) in the Azure portal.
-- Contact your ISP and request reports to show your existing daily and monthly utilization.
-- There are several tools that can measure utilization by monitoring your network traffic at the router/switch level:
- - [SolarWinds Bandwidth Analyzer Pack](https://www.solarwinds.com/network-bandwidth-analyzer-pack?CMP=ORG-BLG-DNS)
- - [Paessler PRTG](https://www.paessler.com/bandwidth_monitoring)
- - [Cisco Network Assistant](https://www.cisco.com/c/en/us/products/cloud-systems-management/network-assistant/https://docsupdatetracker.net/index.html)
- - [WhatsUp Gold](https://www.whatsupgold.com/network-traffic-monitoring)
+Komprise assesses the network and storage performance, combined, to identify any connectivity issues between your datacenter and Azure storage.
+
+The Komprise Assessment of Customer Environment (ACE) is easy to deploy and run. The tool simulates a series of data movement scenarios between on-premises source NAS shares and destination Azure NAS storage services like Azure Files and Azure NetApp Files. It performs a set of read, write, and checksum operations, collecting overall performance numbers. The results can highlight potential performance losses to investigate. The following list details some tools and services to isolate issues.
-## Migration planning guide
+- [SolarWinds Bandwidth Analyzer Pack](https://www.solarwinds.com/network-bandwidth-analyzer-pack?CMP=ORG-BLG-DNS)
+- [Paessler PRTG](https://www.paessler.com/bandwidth_monitoring)
+- [Cisco Network Assistant](https://www.cisco.com/c/en/us/products/cloud-systems-management/network-assistant/https://docsupdatetracker.net/index.html)
+- [WhatsUp Gold](https://www.whatsupgold.com/network-traffic-monitoring)
-Komprise is simple to set up and enables running multiple migrations simultaneously in three steps:
+If you're using a public network connection, consider changing to a private VPN or contracting with an Azure ExpressRoute service provider. Making this change can improve security and performance, and provide greater opportunity to identify and resolve any connectivity issues.
-1. Analyze your data to identify files and objects to migrate or archive,
-1. Define policies to migrate, move, or copy unstructured data to Azure Storage,
-1. Activate policies that automatically move your data.
+To learn more about ExpressRoute:
+- [What is Azure ExpressRoute?](/azure/expressroute/expressroute-introduction)
+- [ExpressRoute connectivity models](/azure/expressroute/expressroute-connectivity-models)
+- [Extend an on-premises network using ExpressRoute](/azure/architecture/reference-architectures/hybrid-networking/expressroute)
-The first step is critical in finding and prioritizing the right data to migrate. Komprise analysis provides:
+Other performance items to investigate on private networks:
+- If you're an existing Azure ExpressRoute customer, review your [circuit usage](/azure/expressroute/expressroute-monitoring-metrics-alerts#circuits-metrics) in the Azure portal, as sketched below
+- Work with your ISP and request reports showing your existing daily and monthly utilization
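
Circuit metrics can also be pulled programmatically. A minimal sketch with the `azure-monitor-query` SDK, assuming a circuit resource ID and the `BitsInPerSecond`/`BitsOutPerSecond` metric names (verify the exact names on your circuit's metrics blade):

```python
# Query a week of hourly ExpressRoute circuit throughput metrics.
# The resource ID and metric names below are assumptions to verify.
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import MetricsQueryClient

client = MetricsQueryClient(DefaultAzureCredential())
circuit_id = (
    "/subscriptions/<sub-id>/resourceGroups/<rg>/providers"
    "/Microsoft.Network/expressRouteCircuits/<circuit-name>"
)
result = client.query_resource(
    circuit_id,
    metric_names=["BitsInPerSecond", "BitsOutPerSecond"],
    timespan=timedelta(days=7),
    granularity=timedelta(hours=1),
    aggregations=["Average", "Maximum"],
)
for metric in result.metrics:
    for point in metric.timeseries[0].data:
        print(metric.name, point.timestamp, point.average, point.maximum)
```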
-- Information on access time to identify:
- - Less frequently accessed files that you can cache on-premises or store on fast file service
- - Cold data you can archive to blob storage
-- Information on top users, groups, or shares to determine the order of the migration and the most impacted group within the organization to assess business impact
-- Number of files, or capacity per file type to determine type of stored files and if there are any possibilities to clean up the content. Cleaning up will reduce the migration effort, and reduce the cost of the target storage. Similar analytics is available for object data.
-- Number of files, or capacity per file size to determine the duration of migration. Large number of small files will take longer to migrate than small number of large files. Similar analytics is available for object data.
-- Cost of objects by storage tier to determine if cold data is incorrectly placed in expensive tiers, or hot data is incorrectly placed in cheaper tiers with high access costs. Right placing data based on access patterns enables optimizing overall cloud storage costs.
+## Intelligent data management architecture
- :::image type="content" source="./media/komprise-quick-start-guide/komprise-analyze-1.png" alt-text="Analysis by file type and access time":::
+Komprise provides a highly scalable infrastructure to meet every need. Begin assessing your environment with one data Observer, then rapidly scale up and out to move terabytes to petabytes of data with more data movers.
+*Example Komprise architecture overview*
- :::image type="content" source="./media/komprise-quick-start-guide/komprise-analyze-shares.png" alt-text="Example of share analysis":::
-- Custom query capability filter to filter exact set of files and objects for your specific needs
+Komprise software is easy to set up in virtual environments for complete resource flexibility. For optimum performance, flexibility and cost control, Komprise data managers (Observers) and data movers (Proxies) can be deployed on-premises or in the cloud to fit your unique requirements.
+- Director - The administration console for the Komprise Grid. It's used to configure the environment, monitor activities, view reports and graphs, and set policies.
+- Observers - Komprise data managers that analyze storage systems, summarize reports, communicate with the Director, manage migrations, and handle data movement.
+- Proxies - These scalable data movers simplify and accelerate SMB/CIFS data flow. Proxy data movers can easily scale to meet the performance requirements of a growing environment or a tight timeline.
- :::image type="content" source="./media/komprise-quick-start-guide/komprise-analyze-custom.png" alt-text="Analysis for custom query":::
-## Deployment guide
+## Getting started with Komprise
+1. Contact Komprise, and meet the local team who will set up your own Komprise Director console and assist with a preinstallation call and installation. With preparation, installation should take about 30 minutes from power-up to the first analysis results.
+ Sign up at [https://www.komprise.com/azure-migration](https://www.komprise.com/azure-migration)
+2. After logging in to the Director, the wizard Install page provides links to download the Komprise Observer virtual appliance. Power up the Observer VM and configure it with a static IP address and general network and domain information. The last step in the setup script is to sign in to the Director to establish communication.
-Before deploying Komprise, the target service must be deployed. You can learn more here:
+ :::image type="content" source="./media/komprise-quick-start-guide-v2/screenshot-komprise-download-page.png" alt-text="Screenshot of the Komprise download page" lightbox="./media/komprise-quick-start-guide-v2/screenshot-komprise-download-page.png":::
-- How to create [Azure File Share](../../../files/storage-how-to-create-file-share.md)
-- How to create an [SMB volume](../../../../azure-netapp-files/azure-netapp-files-create-volumes-smb.md) or [NFS export](../../../../azure-netapp-files/azure-netapp-files-create-volumes.md) in Azure NetApp Files
+3. Add shares for analysis on the Specify Shares page. Use Discover shares to identify a NAS system and automatically import all share information.
+ - Enter File System Information:
+ - A platform for the source NAS
+ - Hostname or IP address
+ - Display Name
+ - Credentials (for SMB shares)
-The Komprise Grid is deployed in a virtual environment (Hyper-V, VMware, KVM) for speed, scalability, and resilience. Alternatively, you may set up the environment in your Azure subscription using [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/komprise_inc.intelligent_data_management).
+ :::image type="content" source="./media/komprise-quick-start-guide-v2/screenshot-enter-credentials.png" alt-text="Screenshot of the dialog box to enter credentials" lightbox="./media/komprise-quick-start-guide-v2/screenshot-enter-credentials.png":::
-1. Open the Azure portal, and search for **storage accounts**.
+ - Repeat these steps to add other source and destination systems. From the menu, choose Shares > Sources > Add File Server.
+ - Once a File Server is added, drill down to the share level and select Enable share to start an analysis. See the Plan page for analysis results.
- :::image type="content" source="./media/komprise-quick-start-guide/azure-locate-storage-account.png" alt-text="Shows where you've typed storage in the search box of the Azure portal.":::
+ :::image type="content" source="./media/komprise-quick-start-guide-v2/screenshot-plan-page.png" alt-text="Screenshot of the Komprise Plan page" lightbox="./media/komprise-quick-start-guide-v2/screenshot-plan-page.png":::
- You can also click on the default **Storage accounts** icon.
+ - Pause to analyze the newly added shares. Review the Plan page, Usage graphs, and Analysis Activities results to uncover any issues to address, and to size and select the appropriate Azure Storage services. See the next section, Getting started with Azure, to create the destination Azure storage services.
+ - Use the Komprise ACE tool to identify and resolve any infrastructure network and storage performance issues before engaging the Komprise migration engines. Once everything looks good, continue to the next step and add Azure Storage services as destinations for Komprise migration.
+ - Add Azure Files as a migration destination and configure it on the Sources tab, not the Targets tab. Target systems are for Komprise Plan operations like seamless tiering with Komprise Transparent Movement Technology™ (TMT) and Deep Analytics Actions.
- :::image type="content" source="./media/komprise-quick-start-guide/azure-portal.png" alt-text="Shows adding a storage account in the Azure portal.":::
+ :::image type="content" source="./media/komprise-quick-start-guide-v2/screenshot-add-server-analysis.png" alt-text="Screenshot of the Add Server to Sources page" lightbox="./media/komprise-quick-start-guide-v2/screenshot-add-server-analysis.png":::
-2. Select **Create** to add an account:
- 1. Select existing resource group or **Create new**
- 2. Provide a unique name for your storage account
- 3. Choose the region
- 4. Select **Standard** or **Premium** performance, depending on your needs. If you select **Premium**, select **File shares** under **Premium account type**.
- 5. Choose the **[Redundancy](../../../common/storage-redundancy.md)** that meets your data protection requirements
-
- :::image type="content" source="./media/komprise-quick-start-guide/azure-account-create-1.png" alt-text="Shows storage account settings in the portal.":::
+ Example of adding Azure Files as a migration destination on the Sources tab:
-3. Next, we recommend the default settings from the **Advanced** screen. If you are migrating to Azure Files, we recommend enabling **Large file shares** if available.
+ :::image type="content" source="./media/komprise-quick-start-guide-v2/screenshot-add-server-destination.png" alt-text="Screenshot of the Add Destination to Sources page" lightbox="./media/komprise-quick-start-guide-v2/screenshot-add-server-destination.png":::
- :::image type="content" source="./media/komprise-quick-start-guide/azure-account-create-2.png" alt-text="Shows Advanced settings tab in the portal.":::
+## Getting started with Azure
+Microsoft offers a framework to get you started with Azure. The [Cloud Adoption Framework](/azure/architecture/cloud-adoption/) (CAF) is a detailed approach to enterprise digital transformation and a comprehensive guide to planning a production-grade cloud adoption. The CAF includes a step-by-step [Azure setup guide](/azure/cloud-adoption-framework/ready/azure-setup-guide/) to help you get up and running quickly and securely. You can find an interactive version in the [Azure portal](https://portal.azure.com/?feature.quickstart=true#blade/Microsoft_Azure_Resources/QuickstartCenterBlade). You'll find sample architectures, specific best practices for deploying applications, and free training resources to put you on the path to Azure expertise.
-4. Keep the default networking options for now and move on to **Data protection**. You can choose to enable soft delete, which allows you to recover an accidentally deleted data within the defined retention period. Soft delete offers protection against accidental or malicious deletion.
+Before starting your project, the target service must be deployed. You can learn more here:
+- How to create [Azure File Share](/azure/storage/files/storage-how-to-create-file-share)
+- How to create an [SMB volume](/azure/azure-netapp-files/azure-netapp-files-create-volumes-smb) or [NFS export](/azure/azure-netapp-files/azure-netapp-files-create-volumes) in Azure NetApp Files
- :::image type="content" source="./media/komprise-quick-start-guide/azure-account-create-3.png" alt-text="Shows the Data Protection settings in the portal.":::
+The Komprise Grid is deployed in a virtual environment (Hyper-V, VMware, KVM) for speed, scalability, and resilience. Alternatively, you may set up the environment in your Azure subscription using [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/komprise_inc.intelligent_data_management).
-5. Add tags for organization if you use tagging and **Create** your account.
-
-6. Two quick steps are all that are now required before you can add the account to your Komprise environment. Navigate to the account you created in the Azure portal and select File shares under the File service menu. Add a File share and choose a meaningful name. Then, navigate to the Access keys item under Settings and copy the Storage account name and one of the two access keys. If the keys are not showing, click on the **Show keys**.
+1. Open the Azure portal and search for storage accounts
- :::image type="content" source="./media/komprise-quick-start-guide/azure-access-key.png" alt-text="Shows access key settings in the portal.":::
+ :::image type="content" source="./media/komprise-quick-start-guide-v2/screenshot-portal-search.png" alt-text="Screenshot of the Azure Portal Search Dialog" lightbox="./media/komprise-quick-start-guide-v2/screenshot-portal-search.png":::
-7. Navigate to the **Properties** of the Azure File share. Write down the URL address, it will be required to add the Azure connection into the Komprise target file share:
+ You can also click on the default Storage accounts icon
- :::image type="content" source="./media/komprise-quick-start-guide/azure-files-endpoint.png" alt-text="Find Azure files endpoint.":::
+ :::image type="content" source="./media/komprise-quick-start-guide-v2/screenshot-storage-accounts.png" alt-text="Screenshot of the Azure Storage Account Dialog" lightbox="./media/komprise-quick-start-guide-v2/screenshot-storage-accounts.png":::
-8. (_Optional_) You can add extra layers of security to your deployment.
-
- 1. Configure role-based access to limit who can make changes to your storage account. For more information, see [Built-in roles for management operations](../../../common/authorization-resource-provider.md#built-in-roles-for-management-operations).
-
- 2. Restrict access to the account to specific network segments with [storage firewall settings](../../../common/storage-network-security.md). Configure firewall settings to prevent access from outside of your corporate network.
+2. Select Create to add an account:
+ a. Select an existing resource group or Create New.
+ b. Provide a unique name for your storage account.
+ c. Choose the region.
+ d. Select Standard or Premium performance, depending on your needs. If you select Premium, select File shares under Premium account type.
+ e. Choose the [Redundancy](/azure/storage/common/storage-redundancy) that meets your data protection requirements (a scripted version of this step follows the screenshot).
- :::image type="content" source="./media/komprise-quick-start-guide/azure-storage-firewall.png" alt-text="Shows storage firewall settings in the portal.":::
+ :::image type="content" source="./media/komprise-quick-start-guide-v2/screenshot-create-storage-account.png" alt-text="Screenshot of the Azure Create Storage Account Dialog" lightbox="./media/komprise-quick-start-guide-v2/screenshot-create-storage-account.png":::
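
If you prefer to script step 2, a minimal sketch with the `azure-mgmt-storage` SDK follows; the subscription, resource group, account name, region, and SKU are placeholder assumptions.

```python
# A sketch of step 2 via the SDK instead of the portal. Resource group,
# account name, region, and SKU below are placeholder assumptions.
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient
from azure.mgmt.storage.models import Kind, Sku, StorageAccountCreateParameters

client = StorageManagementClient(DefaultAzureCredential(), "<subscription-id>")

poller = client.storage_accounts.begin_create(
    resource_group_name="migration-rg",
    account_name="migrationfiles01",  # must be globally unique
    parameters=StorageAccountCreateParameters(
        sku=Sku(name="Standard_LRS"),       # redundancy choice (step 2e)
        kind=Kind.STORAGE_V2,
        location="eastus",
        large_file_shares_state="Enabled",  # large file shares (step 3)
    ),
)
account = poller.result()
print(account.primary_endpoints.file)
```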
- 3. Set a [delete lock](../../../../azure-resource-manager/management/lock-resources.md) on the account to prevent accidental deletion of the storage account.
+3. Next, consider keeping the recommended default settings on the Advanced screen. If you're migrating to Azure Files, enable large file shares if available.
- :::image type="content" source="./media/komprise-quick-start-guide/azure-resource-lock.png" alt-text="Shows setting a delete lock in the portal.":::
+ :::image type="content" source="./media/komprise-quick-start-guide-v2/screenshot-create-storage-account-advanced.png" alt-text="Screenshot of the Azure Create Storage Account Advanced Dialog" lightbox="./media/komprise-quick-start-guide-v2/screenshot-create-storage-account-advanced.png":::
- 4. Configure extra [security best practices](../../../blobs/security-recommendations.md).
+4. Keep the default networking options for now and move on to data protection. You can choose to enable soft delete, which allows you to recover accidentally deleted data within the defined retention period. Soft delete offers protection against accidental or malicious deletion.
-### Deployment instructions for managing file data
+ :::image type="content" source="./media/komprise-quick-start-guide-v2/screenshot-create-storage-account-data-protection.png" alt-text="Screenshot of the Azure Create Storage Account Data Protection Dialog" lightbox="./media/komprise-quick-start-guide-v2/screenshot-create-storage-account-data-protection.png":::
-1. **Download** the Komprise Observer virtual appliance from the Director, deploy it to your hypervisor and configure it with the network and domain. Director is provided as a cloud service managed by Komprise. Information needed to access Director is sent with the welcome email once you purchase the solution.
+5. Add tags for organization if you use tagging, and create your account.
- :::image type="content" source="./media/komprise-quick-start-guide/komprise-setup-1.png" alt-text="Download appropriate image for Komprise Observer from Director":::
+6. Two quick steps are all that's required before you can add the account to your Komprise environment. Navigate to the account you created in the Azure portal and select File shares under the File Service menu. Add a File share, providing a meaningful name. Then, navigate to the Access keys item under Settings and copy the Storage account name and one of the two access keys. If the keys aren't showing, select Show keys. A scripted version of this step is sketched after the screenshot.
-1. To add the shares to analyze and migrate, you have two options:
- 1. **Discover** all the shares in your storage environment by entering:
- - Platform for the source NAS
- - Hostname or IP address
- - Display name
- - Credentials (for SMB shares)
+ :::image type="content" source="./media/komprise-quick-start-guide-v2/screenshot-manage-access-keys.png" alt-text="Screenshot of the Manage Access Keys dialog" lightbox="./media/komprise-quick-start-guide-v2/screenshot-manage-access-keys.png":::
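
A scripted version of step 6, assuming the `azure-mgmt-storage` and `azure-storage-file-share` SDKs and placeholder resource names:

```python
# A sketch of step 6 with the SDK: read the account keys, then create a file
# share using a connection string built from them. Subscription, resource
# group, account, and share names are placeholder assumptions.
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient
from azure.storage.fileshare import ShareClient

client = StorageManagementClient(DefaultAzureCredential(), "<subscription-id>")
keys = client.storage_accounts.list_keys("migration-rg", "migrationfiles01")
key = keys.keys[0].value  # either of the two keys works

conn = (
    "DefaultEndpointsProtocol=https;"
    "AccountName=migrationfiles01;"
    f"AccountKey={key};"
    "EndpointSuffix=core.windows.net"
)
share = ShareClient.from_connection_string(conn, share_name="komprise-target")
share.create_share()
print(share.url)  # the URL you record for the Komprise target file share
```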
- :::image type="content" source="./media/komprise-quick-start-guide/komprise-setup-2.png" alt-text="Specify NAS system to discover":::
+7. Navigate to the Properties of the Azure File share. Write down the URL address, which is required to add the Azure connection into the Komprise target file share.
- 1. **Specify** a file share by entering:
- - Storage information
- - Protocol
- - Path
- - Display Name
- - Credentials (for SMB shares)
-
- :::image type="content" source="./media/komprise-quick-start-guide/komprise-setup-3.png" alt-text="Specify NAS solutions to discover":::
+ :::image type="content" source="./media/komprise-quick-start-guide-v2/screenshot-azure-file-share-properties.png" alt-text="Screenshot of Azure File Share Properties dialog" lightbox="./media/komprise-quick-start-guide-v2/screenshot-azure-file-share-properties.png":::
- This step must be repeated to add other source and destination shares. To add Azure Files as a destination, you need to provide the Azure storage account and file share details:
+8. (Optional) You can add extra layers of security to your deployment
+
+ a. Configure role-based access to limit who can make changes to your storage account. For more information, see [Built-in roles for management operations](/azure/storage/common/authorization-resource-provider#built-in-roles-for-management-operations)
+
+ b. Restrict access to the account to specific network segments with [storage firewall settings](/azure/storage/common/storage-network-security). Configure firewall settings to prevent access from outside of your corporate network
- :::image type="content" source="./media/komprise-quick-start-guide/komprise-azure-files-1.png" alt-text="Select Azure Files as a target service":::
+ :::image type="content" source="./media/komprise-quick-start-guide-v2/screenshot-network-security.png" alt-text="Screenshot of Azure Network Security dialog" lightbox="./media/komprise-quick-start-guide-v2/screenshot-network-security.png":::
- :::image type="content" source="./media/komprise-quick-start-guide/komprise-azure-files-2.png" alt-text="Enter details for Azure Files":::
+ c. Set a [delete lock](/azure/azure-resource-manager/management/lock-resources) on the account to prevent accidental deletion of the storage account.
+
+ :::image type="content" source="./media/komprise-quick-start-guide-v2/screenshot-delete-lock.png" alt-text="Screenshot of Azure Delete Lock dialog" lightbox="./media/komprise-quick-start-guide-v2/screenshot-delete-lock.png":::
-### Deployment instructions for managing object data
+ d. Review this document for other [security best practices](/azure/storage/blobs/security-recommendations)
-Managing object provides different experience. The Director and Observer are provided as a cloud services, managed by Komprise. If you only need to analyze and archive data in Azure Blob Storage, no further deployment is required. If you need to perform migrations into Azure Blob Storage, get the Komprise Observer virtual appliance sent with the welcome email, and deploy it in a Linux virtual machine in your Azure cloud infrastructure. After deploying, follow the steps on the Komprise Director.
-1. Navigate to **Data Stores** and **Add New Object Store**. Select **Microsoft Azure** as the provider.
+## Migration guide
+### Organizing the migration
+Simplify migration planning tasks by organizing them into a few operational classes. Review the number of files, capacity per file size, file ages, and the time required to complete the initial analysis to identify where to begin. Starting with the easy and building to the complex helps build experience and confidence, and confirms the cutover processes before tackling the harder migrations. These classes can be summarized as:
+- Tiering type: data that can move at any time. Because the data is typically cold data that no one is accessing, it can be sent to Azure Blob archive for long-term storage. Data included could be an entire share, or part of a share. With Transparent Tiering, Komprise leaves a symbolic link so end users never lose access to their files and data.
+- Easy type: fairly static shares with few users that move in one or two iterations. Minimal migration time and short cutover time required.
+- Moderate type: little to moderately active individual shares of average file size (~1 MB). Should need minimal migration time; may require scheduling a specific cutover window.
+- Active type: shares with data that changes daily, which can have a significant effect on data verification, operations, costs, Observer and Proxy system placement (on-premises or in the cloud), and final cutover time. It may require multiple migration iterations and scheduling longer final cutover times.
+- Complex type: represents shares with various dependencies: from multiple shares migrating in unison, to shares with many small files, or shares with many empty directories. Complex shares may require advance coordination, possibly several iterations, and longer cutover windows depending on the situation.
+
+### Migration administration
+Komprise provides live migration, where end users and applications have continuous data access while the data is moving. With Komprise elastic migrations, multiple migration activities automatically use the full architecture for maximum parallelization. The Director console simplifies the administration of all the migration tasks with one interface.
+Komprise's migration process automates moving directories, files, and links from a source to a destination. At each step, data integrity is checked. All attributes, permissions, and access controls from the source are applied. In an object migration, objects, prefixes, and metadata of each object are migrated too.
+To configure and run a migration, follow these steps:
+1. Once you've completed your analysis and confirmed that the storage and network performance are optimally configured, you're ready to start with the Archive and Easy migration types.
+2. Navigate to Migrate and select Add Migration
- :::image type="content" source="./media/komprise-quick-start-guide/komprise-add-object-store.png" alt-text="Screenshot that shows adding new object store":::
+ :::image type="content" source="./media/komprise-quick-start-guide-v2/screenshot-add-migration-dialog.png" alt-text="Screenshot of Komprise Add Migration Task" lightbox="./media/komprise-quick-start-guide-v2/screenshot-add-migration-dialog.png":::
-1. Add shares to analyze and migrate. These steps must be repeated for every source, and target share, or container. There are two options to perform the same action:
- 1. **Discover** all the containers by entering:
- - Storage account name
- - Primary access key
- - Display name
-
- :::image type="content" source="./media/komprise-quick-start-guide/komprise-discover-storage-account.png" alt-text="Screenshot that shows how to discover containers in storage account":::
+3. Add a migration task by selecting the proper source and destination shares. Provide a migration name. Once configured, select Start Migration. This step is slightly different for file and object data migrations because you're selecting data stores instead of shares. Review the following steps.
+You may also choose to verify each data transfer using an MD5 checksum. Depending on the position of the Komprise data movement components, egress costs may occur when cloud objects are retrieved to calculate the MD5 values; a spot-check sketch follows this step.
- Required information can be found in **[Azure Portal](https://portal.azure.com/)** by navigating to the **Access keys** item under **Settings** for the storage account. If the keys are not showing, click on the **Show keys**.
+ File Migration
- 1. **Specify** a container by entering:
- - Container name
- - Storage account name
- - Primary access key
- - Display name
+ :::image type="content" source="./media/komprise-quick-start-guide-v2/screenshot-file-migration-dialog.png" alt-text="Screenshot of Komprise Add File Migration Dialog" lightbox="./media/komprise-quick-start-guide-v2/screenshot-file-migration-dialog.png":::
- :::image type="content" source="./media/komprise-quick-start-guide/komprise-add-container.png" alt-text="Screenshot that shows how to add containers in storage account":::
+ File migration provides options to preserve access time and SMB ACLs on the destination. This option depends on the selected source and destination file service and protocol.
- Container name represents the target container for the migration and needs to be created before migration. Other required information can be found in **[Azure Portal](https://portal.azure.com/)** by navigating to the **Access keys** item under **Settings** for the storage account. If the keys are not showing, click on the **Show keys**.
+ Object Migration
-## Migration guide
+ :::image type="content" source="./media/komprise-quick-start-guide-v2/screenshot-object-migration-dialog.png" alt-text="Screenshot of Komprise Add Object Migration Dialog" lightbox="./media/komprise-quick-start-guide-v2/screenshot-object-migration-dialog.png":::
-Komprise provides live migration, where end users and applications are not disrupted and can continue to access data during the migration. The migration process automates migrating directories, files, and links from a source to a destination. At each step data integrity is checked. All attributes, permissions, and access controls from the source are applied. In an object migration, objects, prefixes, and metadata of each object are migrated.
+ Object migration provides options to choose the destination Azure storage tier (Hot, Cool, Archive).
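
Komprise performs its own verification, but if you want to spot-check a single transferred object yourself, here's a minimal sketch comparing a local file's MD5 with the Content-MD5 property stored on the destination blob. The names are placeholder assumptions, and Content-MD5 is only present if it was set at upload time.

```python
# Compare a local file's MD5 with the Content-MD5 recorded on the blob.
# Connection string, container, and blob names are placeholder assumptions.
import hashlib

from azure.storage.blob import BlobClient

blob = BlobClient.from_connection_string(
    "<connection-string>", container_name="migrated", blob_name="report.pdf"
)

props = blob.get_blob_properties()
remote_md5 = props.content_settings.content_md5  # None if never set at upload

with open("report.pdf", "rb") as f:
    local_md5 = hashlib.md5(f.read()).digest()

print("match" if remote_md5 == local_md5 else "MISMATCH")
```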
-To configure and run a migration, follow these steps:
+4. Once the migration has started, you can go to Migrate to monitor the progress.
-1. Log into your Komprise console. Information needed to access the console is sent with the welcome email once you purchase the solution.
-1. Navigate to **Migrate** and click on **Add Migration**.
+ :::image type="content" source="./media/komprise-quick-start-guide-v2/screenshot-migration-management-dialog.png" alt-text="Screenshot of Komprise Migration Management Dialog" lightbox="./media/komprise-quick-start-guide-v2/screenshot-migration-management-dialog.png":::
- :::image type="content" source="./media/komprise-quick-start-guide/komprise-new-migrate.png" alt-text="Add new migration job":::
+5. Once all changes have been migrated, run one final migration by clicking on Actions and selecting Start final iteration. Before final migration, we recommend stopping access to source file shares or moving them to read-only mode (for users and applications). This step makes sure no changes happen on the source.
-1. Add migration task by selecting proper source and destination share. Provide a migration name. Once configured, click on **Start Migration**. This step is slightly different for file and object data migrations.
-
- 1. File migration
+ :::image type="content" source="./media/komprise-quick-start-guide-v2/screenshot-migration-overview.png" alt-text="Screenshot of Komprise Migration Management Overview" lightbox="./media/komprise-quick-start-guide-v2/screenshot-migration-overview.png":::
+
+ Once the final migration finishes, transition all users and applications to the destination share. Switching over to the new file service usually requires changing the configuration of DNS servers and DFS servers or changing the mount points to the new destination.
+
+6. As the last step, mark the migration completed.
+
+7. There's a full migration audit folder containing all the information about files moved and deleted, attributes, and errors encountered for every iteration. The data is written to the ".komprise-audit" folder on the destination, or to a specified system log folder configured in System | Settings of the console.
- :::image type="content" source="./media/komprise-quick-start-guide/komprise-add-migration.png" alt-text="Specify details for the migration job":::
- File migration provides options to preserve access time and SMB ACLs on the destination. This option depends on the selected source and destination file service and protocol.
- 1. Object migration
+## Deployment instructions for migrating object data
+Migrating object storage systems to Azure Blob Storage is an easy process as well. The Director and Observer are provisioned by Komprise as cloud services. Similar to an on-premises deployment, you can analyze and understand the data on the source systems, identify any issues, and then efficiently move data to Azure Blob Storage.
+The flexibility of the Komprise architecture allows deploying the Observers where they provide the highest performance while keeping data movement costs low.
+To get started, sign in to the Director and do the following:
+1. Navigate to Data Stores and Add Object Store. Here you can choose to add systems by Add Account or by Add Bucket.
- :::image type="content" source="./media/komprise-quick-start-guide/komprise-add-object-migration.png" alt-text="Screenshot that shows adding object migration":::
+ :::image type="content" source="./media/komprise-quick-start-guide-v2/screenshot-add-object-store.png" alt-text="Screenshot of Komprise Add Object Store Dialog" lightbox="./media/komprise-quick-start-guide-v2/screenshot-add-object-store.png":::
- Object migration provides options to choose the destination Azure storage tier (Hot, Cool, Archive). You may also choose to verify each data transfer using MD5 checksum. Egress costs can occur with MD5 checksums as cloud objects must be retrieved to calculate the MD5 checksum.
+2. Continue adding source data stores.
+3. Enable buckets for analysis. Review the data stores to build a migration plan.
+4. Add Azure Blob Destination data stores, either by Account or Bucket.
-2. Once the migration started, you can go to **Migrate** to monitor the progress.
+ :::image type="content" source="./media/komprise-quick-start-guide-v2/screenshot-add-object-destination.png" alt-text="Screenshot of Komprise Add Object Destination Dialog" lightbox="./media/komprise-quick-start-guide-v2/screenshot-add-object-destination.png":::
- :::image type="content" source="./media/komprise-quick-start-guide/komprise-monitor-migrations.png" alt-text="Monitor all migration jobs":::
+ With Add Account, discover all the containers by entering:
+ - Storage account name
+ - Primary access key
+ - Display Name
-3. Once all changes have been migrated, run one final migration by clicking on **Actions** and selecting **Start final iteration**. Before final migration, we recommend stopping access to source file shares or moving them to read-only mode (for users and applications). This step will make sure no changes happen on the source.
 Required information can be found in the [Azure portal](https://portal.azure.com/) by navigating to the Access keys item under Settings for the storage account. If the keys aren't showing, select Show keys.
- :::image type="content" source="./media/komprise-quick-start-guide/komprise-final-migration.png" alt-text="Do one last migration before switching over":::
+ Or, specify a container by entering:
+ - Container Name
+ - Storage Account Name
+ - Primary Access Key
+ - Display Name
- Once the final migration finishes, transition all users and applications to the destination share. Switch over to the new file service usually requires changing the configuration of DNS servers, DFS servers, or changing the mount points to the new destination.
+The container name represents the destination container for the migration and must be created before migration (a creation sketch follows). Other required information can be found in the [Azure portal](https://portal.azure.com/) by navigating to the Access keys item under Settings for the storage account. If the keys aren't showing, select Show keys.
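
A minimal sketch of pre-creating that destination container with the `azure-storage-blob` SDK; the connection string and container name are placeholder assumptions:

```python
# Pre-create the destination container named in the Komprise dialog.
# Connection string and container name are placeholder assumptions.
from azure.storage.blob import BlobServiceClient

service = BlobServiceClient.from_connection_string("<connection-string>")
service.create_container("komprise-destination")  # must exist before migration
print("container ready")
```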
-4. As a last step, mark the migration completed.
+5. Migrating Object Data Stores uses the same iterative process to move data as the NAS migration steps described previously.
-## Support
-To open a case with Komprise, sign in to the [Komprise support site](https://komprise.freshdesk.com/)
+## Migration API
+Komprise provides full migration API support, so everything described in this document can be controlled via scripts; a purely illustrative sketch follows. Komprise also has an example script that customers use to move large numbers of shares effectively. Review with your Komprise team if you require the API.
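
Because the Komprise API reference isn't reproduced here, the following sketch is purely illustrative: every endpoint, field, and token is hypothetical and only shows the general shape of a driver script, not a working client. Your Komprise team can supply the real API details.

```python
# Purely illustrative driver script. The base URL, endpoint path, request
# fields, and token are all hypothetical; consult your Komprise team for the
# actual migration API reference.
import requests

DIRECTOR = "https://director.example.com/api/v1"  # hypothetical base URL
HEADERS = {"Authorization": "Bearer <token>"}     # hypothetical auth scheme

share_pairs = [
    (r"\\nas01\finance", "migrationfiles01/finance"),
    (r"\\nas01\legal", "migrationfiles01/legal"),
]

for source, destination in share_pairs:
    # Hypothetical endpoint: one migration task per source/destination pair.
    resp = requests.post(
        f"{DIRECTOR}/migrations",
        headers=HEADERS,
        json={"source": source, "destination": destination, "verify": "md5"},
        timeout=30,
    )
    resp.raise_for_status()
    print("started:", resp.json().get("id"))
```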
-## Marketplace
+### Maximize your data value with Azure and Komprise
+Komprise helps you plan and execute your file and object data migrations to Azure. Once your migrations are complete, you can use the full Komprise Intelligent Data Management service to manage data lifecycle, seamlessly tier data from on-premises to Azure and to search, find and execute new data workflows.
-Get Komprise listing on [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/komprise_inc.intelligent_data_management?tab=Overview).
## Next steps
-Various resources are available to learn more:
+### Marketplace
+
+Get Komprise Data Migration listing on [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/komprise_inc.intelligent_data_management?tab=Overview).
+Get Komprise full suite listing on [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/komprise_inc.intelligent_data_management?tab=Overview).
+
+### Education
-- [Storage migration overview](../../../common/storage-migration-overview.md)
-- Features supported by Komprise Intelligent Data Management in [migration tools comparison matrix](./migration-tools-comparison.md)
+Various resources are available to learn more:
+- [Storage migration overview](/azure/storage/common/storage-migration-overview)
+- Features supported by Komprise Intelligent Data Management in [migration tools comparison matrix](/azure/storage/solution-integration/validated-partners/data-management/migration-tools-comparison)
- [Komprise compatibility matrix](https://www.komprise.com/partners/microsoft-azure/)
storage Migration Tools Comparison https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/solution-integration/validated-partners/data-management/migration-tools-comparison.md
Title: Azure Storage migration tools comparison - Unstructured data description: Basic functionality and comparison between tools used for migration of unstructured data--++ Previously updated : 02/21/2022 Last updated : 08/25/2023
The following comparison matrix shows basic functionality of different tools that can be used for migration of unstructured data.
+> [!TIP]
+> Azure File Sync can be utilized for migrating data to Azure Files, even if you don't intend to use a hybrid solution for on-premises caching or syncing. This migration process is efficient and causes no downtime. To use Azure File Sync as a migration tool, [simply deploy it](../../../file-sync/file-sync-deployment-guide.md) and, after the migration is finished, [remove the server endpoint](../../../file-sync/file-sync-server-endpoint-delete.md). Ideally, Azure File Sync would be used long-term, while Storage Mover and AzCopy are intended for migration-focused activities.
+ ## Supported Azure services
-| | [Microsoft](https://www.microsoft.com/) | [Datadobi](https://www.datadobi.com) | [Data Dynamics](https://www.datadynamicsinc.com/) | [Komprise](https://www.komprise.com/) | [Atempo](https://www.atempo.com/) |
-| |--|--||||
-| **Solution name** | [Azure File Sync](../../../file-sync/file-sync-deployment-guide.md) | [DobiMigrate](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/datadobi1602192408529.datadobi_license_purchase?tab=Overview) | [Data Mobility and Migration](https://azuremarketplace.microsoft.com/marketplace/apps/datadynamicsinc1581991927942.vm_4?tab=PlansAndPrice) | [Elastic Data Migration](https://azuremarketplace.microsoft.com/marketplace/apps/komprise_inc.intelligent_data_management?tab=Overview) | [Miria](https://azuremarketplace.microsoft.com/marketplace/apps/atempo1612274992591.miria_saas_prod?tab=Overview) |
-| **Support provided by** | Microsoft | [Datadobi](https://support.datadobi.com/s/)<sub>1</sub> | [Data Dynamics](https://www.datdynsupport.com/)<sub>1</sub> | [Komprise](https://komprise.freshdesk.com/support/home)<sub>1</sub> | [Atempo](https://www.atempo.com/support-en/contacting-support/)<sub>1</sub>|
-| **Azure Files support (all tiers)** | Yes | Yes | Yes | Yes | Yes |
-| **Azure NetApp Files support** | No | Yes | Yes | Yes | Yes |
-| **Azure Blob Hot / Cool support** | No | Yes (via NFS ) | Yes | Yes | Yes |
-| **Azure Blob Archive tier support** | No | No | No | Yes | Yes |
-| **Azure Data Lake Storage support** | No | No | Yes | Yes | No |
-| **Supported Sources** | Windows Server 2012 R2 and up | NAS & cloud file systems | Any NAS, and S3 | Any NAS, Cloud File Storage, or S3 | Any NAS, S3, PFS, and Swift |
+| | [Microsoft](https://www.microsoft.com/) | [Microsoft](https://www.microsoft.com/) | [Data Dynamics](https://www.datadynamicsinc.com/) | [Komprise](https://www.komprise.com/) | [Atempo](https://www.atempo.com/) | [Datadobi](https://www.datadobi.com) |
+| |--|--|--||||
+| **Solution name** | [AzCopy](/azure/storage/common/storage-ref-azcopy-copy) | [Azure Storage Mover](/azure/storage-mover/) | [Data Mobility and Migration](https://azuremarketplace.microsoft.com/marketplace/apps/datadynamicsinc1581991927942.vm_4?tab=PlansAndPrice) | [Elastic Data Migration](https://azuremarketplace.microsoft.com/marketplace/apps/komprise_inc.intelligent_data_management?tab=Overview) | [Miria](https://azuremarketplace.microsoft.com/marketplace/apps/atempo1612274992591.miria_saas_prod?tab=Overview) | [DobiMigrate](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/datadobi1602192408529.datadobi_license_purchase?tab=Overview) |
+| **Support provided by** | Microsoft | Microsoft | [Data Dynamics](https://www.datdynsupport.com/)<sub>1</sub> | [Komprise](https://komprise.freshdesk.com/support/home)<sub>1</sub> | [Atempo](https://www.atempo.com/support-en/contacting-support/)<sub>1</sub>| [Datadobi](https://support.datadobi.com/s/)<sub>1</sub> |
+| **Azure Files support (all tiers)** | Yes | Yes | Yes | Yes | Yes | Yes |
+| **Azure NetApp Files support** | No | No | Yes | Yes | Yes | Yes |
+| **Azure Blob Hot / Cool support** | Yes | Yes | Yes | Yes | Yes | Yes (via NFS) |
+| **Azure Blob Archive tier support** | Yes | Yes | No | Yes | Yes | No |
+| **Azure Data Lake Storage support** | Yes | No | Yes | Yes | No | No |
+| **Supported Sources** | Any NAS, Azure Blob, Azure Files, Google Cloud Storage, and AWS S3 | NAS & cloud file systems | Any NAS, and S3 | Any NAS, Cloud File Storage, or S3 | Any NAS, S3, PFS, and Swift | NAS & cloud file systems |
## Supported protocols (source / destination)
-| | [Microsoft](https://www.microsoft.com/) | [Datadobi](https://www.datadobi.com) | [Data Dynamics](https://www.datadynamicsinc.com/) | [Komprise](https://www.komprise.com/) | [Atempo](https://www.atempo.com/) |
-| |--|--||||
-| **Solution name** | [Azure File Sync](../../../file-sync/file-sync-deployment-guide.md) | [DobiMigrate](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/datadobi1602192408529.datadobi_license_purchase?tab=Overview ) | [Data Mobility and Migration](https://azuremarketplace.microsoft.com/marketplace/apps/datadynamicsinc1581991927942.vm_4?tab=PlansAndPrice) | [Elastic Data Migration](https://azuremarketplace.microsoft.com/marketplace/apps/komprise_inc.intelligent_data_management?tab=Overview) | [Atempo](https://www.atempo.com/support-en/contacting-support/)|
-| **SMB 2.1** | Yes | Yes | Yes | Yes | Yes |
-| **SMB 3.0** | Yes | Yes | Yes | Yes | Yes |
-| **SMB 3.1** | Yes | Yes | Yes | Yes | Yes |
-| **NFS v3** | No | Yes | Yes | Yes | Yes |
-| **NFS v4.1** | No | Yes | No | Yes | Yes |
-| **Blob REST API** | No | No | Yes | Yes | Yes |
-| **S3** | No | Yes | Yes | Yes | Yes |
+| | [Microsoft](https://www.microsoft.com/) | [Microsoft](https://www.microsoft.com/) | [Data Dynamics](https://www.datadynamicsinc.com/) | [Komprise](https://www.komprise.com/) | [Atempo](https://www.atempo.com/) | [Datadobi](https://www.datadobi.com) |
+| |--|--|--||||
+| **Solution name** | [AzCopy](/azure/storage/common/storage-ref-azcopy-copy) | [Azure Storage Mover](/azure/storage-mover/) | [Data Mobility and Migration](https://azuremarketplace.microsoft.com/marketplace/apps/datadynamicsinc1581991927942.vm_4?tab=PlansAndPrice) | [Elastic Data Migration](https://azuremarketplace.microsoft.com/marketplace/apps/komprise_inc.intelligent_data_management?tab=Overview) | [Miria](https://azuremarketplace.microsoft.com/marketplace/apps/atempo1612274992591.miria_saas_prod?tab=Overview) | [DobiMigrate](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/datadobi1602192408529.datadobi_license_purchase?tab=Overview) |
+| **SMB 2.1** | Source | Source | Yes | Yes | Yes | Yes |
+| **SMB 3.0** | Source | Source | Yes | Yes | Yes | Yes |
+| **SMB 3.1** | Source/Destination (Azure Files SMB) | Source/Destination (Azure Files SMB) | Yes | Yes | Yes | Yes |
+| **NFS v3** | Source/Destination (Azure Blob NFSv3) | Source/Destination (Azure Blob NFSv3) | Yes | Yes | Yes | Yes |
+| **NFS v4.1** | Source | Source | Yes | No | Yes | Yes |
+| **Blob REST API** | Yes | Destination | Yes | Yes | Yes | No |
+| **S3** | Source | No | Yes | Yes | Yes | Yes |
+| **Google Cloud Storage** | Source | No | Yes | Yes | Yes | Yes |
## Extended features
-| | [Microsoft](https://www.microsoft.com/) | [Datadobi](https://www.datadobi.com) | [Data Dynamics](https://www.datadynamicsinc.com/) | [Komprise](https://www.komprise.com/) | [Atempo](https://www.atempo.com/) |
-| |--|--||||
-| **Solution name** | [Azure File Sync](../../../file-sync/file-sync-deployment-guide.md) | [DobiMigrate](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/datadobi1602192408529.datadobi_license_purchase?tab=Overview ) | [Data Mobility and Migration](https://azuremarketplace.microsoft.com/marketplace/apps/datadynamicsinc1581991927942.vm_4?tab=PlansAndPrice) | [Elastic Data Migration](https://azuremarketplace.microsoft.com/marketplace/apps/komprise_inc.intelligent_data_management?tab=Overview) | [Atempo](https://www.atempo.com/support-en/contacting-support/)|
-| **UID / SID remapping** | No | Yes | Yes | No | No |
-| **Protocol ACL remapping** | No | No | No | No | No |
-| **DFS Support** | Yes | Yes | Yes | Yes | No |
-| **Throttling support** | Yes | Yes | Yes | Yes | Yes |
-| **File pattern exclusions** | No | Yes | Yes | Yes | Yes |
-| **Support for selective file attributes** | Yes | Yes | Yes | Yes | Yes |
-| **Delete propagations** | Yes | Yes | Yes | Yes | Yes |
-| **Follow NTFS junctions** | No | Yes | No | Yes | Yes |
-| **Override SMB Owner and Group Owner** | Yes | Yes | Yes | No | Yes |
-| **Chain of custody reporting** | No | Yes | Yes | Yes | Yes |
-| **Support for alternate data streams** | No | Yes | Yes | No | Yes |
-| **Scheduling for migration** | No | Yes | Yes | Yes | Yes |
-| **Preserving ACL** | Yes | Yes | Yes | Yes | Yes |
-| **DACL support** | Yes | Yes | Yes | Yes | Yes |
-| **SACL support** | Yes | Yes | Yes | No | Yes |
-| **Preserving access time** | Yes | Yes | Yes | Yes | Yes |
-| **Preserving modified time** | Yes | Yes | Yes | Yes | Yes |
-| **Preserving creation time** | Yes | Yes | Yes | Yes | Yes |
-| **Azure Data Box support** | Yes | Yes | Yes | No | Yes |
-| **Migration of snapshots** | No | Manual | Yes | No | No |
-| **Symbolic link support** | No | Yes | No | Yes | Yes |
-| **Hard link support** | No | Migrated as separate files | Yes | Yes | Yes |
-| **Support for open / locked files** | Yes | Yes | Yes | Yes | Yes |
-| **Incremental migration** | Yes | Yes | Yes | Yes | Yes |
-| **Switchover support** | No | Yes | Yes | No (manual only) | Yes |
-| **[Other features](#other-features)** | [Link](#azure-file-sync)| [Link](#datadobi-dobimigrate) | [Link](#data-dynamics-data-mobility-and-migration) | [Link](#komprise-elastic-data-migration) | [Link](#atempo-miria) |
+| | [Microsoft](https://www.microsoft.com/) | [Microsoft](https://www.microsoft.com/) | [Data Dynamics](https://www.datadynamicsinc.com/) | [Komprise](https://www.komprise.com/) | [Atempo](https://www.atempo.com/) | [Datadobi](https://www.datadobi.com) |
+| |--|--|--||||
+| **Solution name** | [AzCopy](/azure/storage/common/storage-ref-azcopy-copy) | [Azure Storage Mover](/azure/storage-mover/) | [Data Mobility and Migration](https://azuremarketplace.microsoft.com/marketplace/apps/datadynamicsinc1581991927942.vm_4?tab=PlansAndPrice) | [Elastic Data Migration](https://azuremarketplace.microsoft.com/marketplace/apps/komprise_inc.intelligent_data_management?tab=Overview) | [Miria](https://azuremarketplace.microsoft.com/marketplace/apps/atempo1612274992591.miria_saas_prod?tab=Overview) | [DobiMigrate](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/datadobi1602192408529.datadobi_license_purchase?tab=Overview) |
+| **UID / SID remapping** | No | No | Yes | No | No | Yes |
+| **Protocol ACL remapping** | No | No | No | No | No | No |
+| **Azure Data Lake Storage Gen2** | Yes | No | Yes | Yes | No | Yes |
+| **Throttling support** | Yes | No | Yes | Yes | Yes | Yes |
+| **File pattern exclusions** | Yes | No | Yes | Yes | Yes | Yes |
+| **Support for selective file attributes** | No | No | Yes | Yes | Yes | Yes |
+| **Delete propagations** | No | No | Yes | Yes | Yes | Yes |
+| **Follow NTFS junctions** | No | No | No | Yes | Yes | Yes |
+| **Override SMB Owner and Group Owner** | No | No | Yes | No | Yes | Yes |
+| **Chain of custody reporting** | No | No | Yes | Yes | Yes | Yes |
+| **Support for alternate data streams** | No | No | Yes | No | Yes | Yes |
+| **Scheduling for migration** | No | No | Yes | Yes | Yes | Yes |
+| **Preserving ACL** | Yes | Yes | Yes | Yes | Yes | Yes |
+| **DACL support** | Yes | Yes | Yes | Yes | Yes | Yes |
+| **SACL support** | Yes | Yes | Yes | No | Yes | Yes |
+| **Preserving access time** | Yes (Azure Files) | Yes | Yes | Yes | Yes | Yes |
+| **Preserving modified time** | Yes (Azure Files) | Yes | Yes | Yes | Yes | Yes |
+| **Preserving creation time** | Yes (Azure Files) | Yes | Yes | Yes | Yes | Yes |
+| **Azure Data Box support** | Yes | No | Yes | No | Yes | Yes |
+| **Migration of snapshots** | No | No | Yes | No | No | Manual |
+| **Symbolic link support** | Yes | Yes | No | Yes | Yes | Yes |
+| **Hard link support** | Migrated as separate files | Migrated as separate files | Yes | Yes | Yes | Migrated as separate files |
+| **Support for open / locked files** | No | No | Yes | Yes | Yes | Yes |
+| **Incremental migration** | Yes | No | Yes | Yes | Yes | Yes |
+| **Switchover support** | No | No | Yes | No (manual only) | Yes | Yes |
+| **[Other features](#other-features)** | [Link](#azcopy)| | [Link](#data-dynamics-data-mobility-and-migration) | [Link](#komprise-elastic-data-migration) | [Link](#atempo-miria) | [Link](#datadobi-dobimigrate) |
## Assessment and reporting
-| | [Microsoft](https://www.microsoft.com/) | [Datadobi](https://www.datadobi.com) | [Data Dynamics](https://www.datadynamicsinc.com/) | [Komprise](https://www.komprise.com/) | [Atempo](https://www.atempo.com/) |
-| |--|--||||
-| **Solution name** | [Azure File Sync](../../../file-sync/file-sync-deployment-guide.md) | [DobiMigrate](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/datadobi1602192408529.datadobi_license_purchase?tab=Overview ) | [Data Mobility and Migration](https://azuremarketplace.microsoft.com/marketplace/apps/datadynamicsinc1581991927942.vm_4?tab=PlansAndPrice) | [Elastic Data Migration](https://azuremarketplace.microsoft.com/marketplace/apps/komprise_inc.intelligent_data_management?tab=Overview) | [Atempo](https://www.atempo.com/support-en/contacting-support/)|
-| **Capacity** | No | Yes | Yes | Yes | Yes |
-| **# of files / folders** | No | Yes | Yes | Yes | Yes |
-| **Age distribution over time** | No | Yes | Yes | Yes | Yes |
-| **Access time** | No | Yes | Yes | Yes | Yes |
-| **Modified time** | No | Yes | Yes | Yes | Yes |
-| **Creation time** | No | Yes | Yes | Yes | Yes |
-| **Per file / object report status** | Partial | Yes | Yes | Yes | Yes |
+| | [Microsoft](https://www.microsoft.com/) | [Microsoft](https://www.microsoft.com/) | [Data Dynamics](https://www.datadynamicsinc.com/) | [Komprise](https://www.komprise.com/) | [Atempo](https://www.atempo.com/) | [Datadobi](https://www.datadobi.com) |
+| |--|--|--||||
+| **Solution name** | [AzCopy](/azure/storage/common/storage-ref-azcopy-copy) | [Azure Storage Mover](/azure/storage-mover/) | [Data Mobility and Migration](https://azuremarketplace.microsoft.com/marketplace/apps/datadynamicsinc1581991927942.vm_4?tab=PlansAndPrice) | [Elastic Data Migration](https://azuremarketplace.microsoft.com/marketplace/apps/komprise_inc.intelligent_data_management?tab=Overview) | [Miria](https://azuremarketplace.microsoft.com/marketplace/apps/atempo1612274992591.miria_saas_prod?tab=Overview) | [DobiMigrate](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/datadobi1602192408529.datadobi_license_purchase?tab=Overview) |
+| **Capacity** | No | Reporting | Yes | Yes | Yes | Yes |
+| **# of files / folders** | Yes | Reporting | Yes | Yes | Yes | Yes |
+| **Age distribution over time** | No | No | Yes | Yes | Yes | Yes |
+| **Access time** | No | No | Yes | Yes | Yes | Yes |
+| **Modified time** | No | No | Yes | Yes | Yes | Yes |
+| **Creation time** | No | No | Yes | Yes | Yes | Yes |
+| **Per file / object report status** | Yes | Reporting | Yes | Yes | Yes | Yes |
## Licensing
-| | [Microsoft](https://www.microsoft.com/) | [Datadobi](https://www.datadobi.com) | [Data Dynamics](https://www.datadynamicsinc.com/) | [Komprise](https://www.komprise.com/) | [Atempo](https://www.atempo.com/) |
-| |--|--||| |
-| **Solution name** | [Azure File Sync](../../../file-sync/file-sync-deployment-guide.md) | [DobiMigrate](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/datadobi1602192408529.datadobi_license_purchase?tab=Overview ) | [Data Mobility and Migration](https://azuremarketplace.microsoft.com/marketplace/apps/datadynamicsinc1581991927942.vm_4?tab=PlansAndPrice) | [Elastic Data Migration](https://azuremarketplace.microsoft.com/marketplace/apps/komprise_inc.intelligent_data_management?tab=Overview) | [Atempo](https://www.atempo.com/support-en/contacting-support/)|
-| **BYOL** | N / A | Yes | Yes | Yes | Yes |
-| **Azure Commitment** | Yes | Yes | Yes | Yes | No |
+| | [Microsoft](https://www.microsoft.com/) | [Microsoft](https://www.microsoft.com/) | [Data Dynamics](https://www.datadynamicsinc.com/) | [Komprise](https://www.komprise.com/) | [Atempo](https://www.atempo.com/) | [Datadobi](https://www.datadobi.com) |
+| |--|--|--||||
+| **Solution name** | [AzCopy](/azure/storage/common/storage-ref-azcopy-copy) | [Azure Storage Mover](/azure/storage-mover/) | [Data Mobility and Migration](https://azuremarketplace.microsoft.com/marketplace/apps/datadynamicsinc1581991927942.vm_4?tab=PlansAndPrice) | [Elastic Data Migration](https://azuremarketplace.microsoft.com/marketplace/apps/komprise_inc.intelligent_data_management?tab=Overview) | [Miria](https://azuremarketplace.microsoft.com/marketplace/apps/atempo1612274992591.miria_saas_prod?tab=Overview) | [DobiMigrate](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/datadobi1602192408529.datadobi_license_purchase?tab=Overview) |
+| **BYOL** | N / A | N / A | Yes | Yes | Yes | Yes |
+| **Azure Commitment** | N / A | Yes | Yes | Yes | No | Yes |
## Other features
-### Azure File Sync
+### AzCopy
-- Internal hash validation
+- Multi-platform support
+- Windows 32-bit / 64-bit
+- Linux x86-64 and ARM64
+- macOS Intel and ARM64
+- Benchmarking [azcopy bench](/azure/storage/common/storage-ref-azcopy-bench)
+- Supports block blobs, page blobs, and append blobs
+- MD5 checks for downloads
+- Customizable transfer rate to preserve bandwidth on the client
+- Tagging
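
As a rough illustration of scripting the features above, this sketch (Python `subprocess`) runs an AzCopy download throttled with `--cap-mbps` and validated with `--check-md5`; the account, container, SAS token, and local path are placeholder assumptions.

```python
# Scripting AzCopy from Python. The account, container, SAS token, and local
# path are placeholder assumptions; azcopy must be on PATH.
import subprocess

subprocess.run(
    [
        "azcopy", "copy",
        "https://<account>.blob.core.windows.net/<container>?<SAS>",
        "/data/restore",
        "--recursive",
        "--cap-mbps", "400",               # throttle to preserve client bandwidth
        "--check-md5", "FailIfDifferent",  # validate MD5 on this download
    ],
    check=True,
)
```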
-> [!TIP]
-> Azure File Sync can be utilized for migrating data to Azure Files, even if you don't intend to use a hybrid solution for on-premises caching or syncing. This migration process is efficient and causes no downtime. To use Azure File Sync as a migration tool, [simply deploy it](../../../file-sync/file-sync-deployment-guide.md) and, after the migration is finished, [remove the server endpoint](../../../file-sync/file-sync-server-endpoint-delete.md).
-
-### Datadobi DobiMigrate
-
-- Migration pre checks
-- Migration Planning
-- Dry Run for cut over testing
-- Detect and alert on target side user activity prior to cut over
-- Policy driven migrations
-- Scheduled copy iterations
-- Configurable options for handling root directory security
-- On-demand verification runs
-- Data read back verification on source and destination
-- Graphical, interactive error handling workflow
-- Ability to restrict certain operations from propagating like deletes and updates
-- Ability to preserve access time on the source (in addition to destination)
-- Ability to execute rollback to source during migration switchover
-- Ability to migrate selected SMB file attributes
-- Ability to clean NTFS security descriptors
-- Ability to override NFSv3 permissions and write new mode bits to target
-- Ability to convert NFSv3 POSIX draft ACLS to NFSv4 ACLS
-- SMB 1 (CIFS)
-- Browser-based access
-- REST API support for configuration, and migration management
-- Support 24 x 7 x 365

### Data Dynamics Data Mobility and Migration
The following comparison matrix shows basic functionality of different tools that can be used for migration of unstructured data.
- Petabyte-scale data movements
- Hash validation
+### Datadobi DobiMigrate
+
+- Migration pre checks
+- Migration Planning
+- Dry Run for cut over testing
+- Detect and alert on target side user activity prior to cut over
+- Policy driven migrations
+- Scheduled copy iterations
+- Configurable options for handling root directory security
+- On-demand verification runs
+- Data read back verification on source and destination
+- Graphical, interactive error handling workflow
+- Ability to restrict certain operations from propagating like deletes and updates
+- Ability to preserve access time on the source (in addition to destination)
+- Ability to execute rollback to source during migration switchover
+- Ability to migrate selected SMB file attributes
+- Ability to clean NTFS security descriptors
+- Ability to override NFSv3 permissions and write new mode bits to target
+- Ability to convert NFSv3 POSIX draft ACLS to NFSv4 ACLS
+- SMB 1 (CIFS)
+- Browser-based access
+- REST API support for configuration, and migration management
+- Support 24 x 7 x 365
+ > [!NOTE]
-> List was last verified on February, 21st 2022.
+> List was last verified on August 24, 2023.
## See also
storage Assign Azure Role Data Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/tables/assign-azure-role-data-access.md
Title: Assign an Azure role for access to table data
description: Learn how to assign permissions for table data to an Azure Active Directory security principal with Azure role-based access control (Azure RBAC). Azure Storage supports built-in and Azure custom roles for authentication and authorization via Azure AD. -+ Last updated 03/03/2022-+
storage Authorize Access Azure Active Directory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/tables/authorize-access-azure-active-directory.md
Title: Authorize access to tables using Active Directory
description: Authorize access to Azure tables using Azure Active Directory (Azure AD). Assign Azure roles for access rights. Access data with an Azure AD account. -+ Last updated 02/09/2023-+
storage Scalability Targets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/tables/scalability-targets.md
Title: Scalability and performance targets for Table storage
description: Learn about scalability and performance targets for Table storage.
Last updated : 03/09/2020

# Scalability and performance targets for Table storage
storage Table Storage Design For Modification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/tables/table-storage-design-for-modification.md
Title: Design Azure Table storage for data modification
description: Design tables for data modification in Azure Table storage. Optimize insert, update, and delete operations. Ensure consistency in your stored entities.
Last updated : 04/23/2018
storage Table Storage Design For Query https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/tables/table-storage-design-for-query.md
Title: Design Azure Table storage for queries
description: Design tables for queries in Azure Table storage. Choose an appropriate partition key, optimize queries, and sort data for the Table service.
Last updated : 05/19/2023
storage Table Storage Design Guidelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/tables/table-storage-design-guidelines.md
Title: Guidelines for Azure storage table design
description: Understand guidelines for designing your Azure storage table service to support read and write operations efficiently.
Last updated : 04/23/2018
storage Table Storage Design Modeling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/tables/table-storage-design-modeling.md
Title: Modeling relationships in Azure Table storage design
description: Understand the modeling process when designing your Azure Table storage solution. Read about one-to-many, one-to-one, and inheritance relationships.
Last updated : 04/23/2018
storage Table Storage Design Patterns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/tables/table-storage-design-patterns.md
Title: Azure storage table design patterns
description: Review design patterns that are appropriate for use with Table service solutions in Azure. Address issues and trade-offs that are discussed in other articles.
Last updated : 06/24/2021
ms.devlang: csharp
storage Table Storage Design https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/tables/table-storage-design.md
Title: Design scalable and performant tables in Azure Table storage.
description: Learn to design scalable and performant tables in Azure Table storage. Review table partitions, Entity Group Transactions, and capacity and cost considerations.
Last updated : 03/09/2020
storage Table Storage How To Use Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/tables/table-storage-how-to-use-powershell.md
Title: Perform Azure Table storage operations with PowerShell
description: Learn how to run common tasks such as creating, querying, deleting data from Azure Table storage account by using PowerShell.
Last updated : 06/23/2022
storage Table Storage Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/tables/table-storage-overview.md
Title: Introduction to Table storage - Object storage in Azure
description: Store structured data in the cloud using Azure Table storage, a NoSQL data store.
Last updated : 05/27/2021
storage Table Storage Quickstart Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/tables/table-storage-quickstart-portal.md
Title: Create a table in the Azure portal
description: Learn how to use the Azure portal to create a new table in Azure Table storage.
Last updated : 01/25/2023
storsimple Storsimple Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storsimple/storsimple-overview.md
The following resources are available to help you migrate backup files or to cop
Use the following steps to copy data to your environment and then decommission your StorSimple 8000 appliance. If your data has already been migrated to your own environment, you can proceed with decommissioning your appliance.
-**Step 1. Copy backup files or live data to your own environment.**
+**Step 1: Copy backup files or live data to your own environment.**
- **Backup files.** If you have backup files, use the Azure StorSimple 8000 Series Copy Utility to migrate backup files to your environment. For more information, see [Copy Utility documentation](https://aka.ms/storsimple-copy-utility-docs).
- **Live data.** If you have live data to copy, you can access and copy live data to your environment via iSCSI.
-**Step 2. Decommission your device.**
+**Step 2: Decommission your device.**
After you complete your data migration, use the following steps to decommission the device. Before you decommission your device, make sure to copy all data from your appliance, using either local host copy operations or using the Utility.
Decommission operations can't be undone. We recommend that you complete your dat
The system reboots multiple times. You're notified when the reset has successfully completed. Depending on the system model, it can take 45-60 minutes for an 8100 device and 60-90 minutes for an 8600 device to finish this process.
-**Step 3. Shut down the device.**
+**Step 3: Shut down the device.**
This section explains how to shut down a running or a failed StorSimple device from a remote computer. A device is turned off after both the device controllers are shut down. A device shutdown is complete when the device is physically moved or is taken out of service.
-**Step 3.1** - Use the following steps to identify and shut down the passive controller on your device. Perform this operation in Windows PowerShell for StorSimple.
+**Step 3a:** Use the following steps to identify and shut down the passive controller on your device. Perform this operation in Windows PowerShell for StorSimple.
1. Access the device via the serial console or a telnet session from a remote computer. To connect to Controller 0 or Controller 1, follow these steps to use PuTTY to connect to the device serial console.
This section explains how to shut down a running or a failed StorSimple device f
This restarts the controller you're connected to. When you restart the active controller, it fails over to the passive controller before the restart.
-**Step 3.2** - Repeat the previous step to shut down the active controller.
+**Step 3b:** Repeat the previous step to shut down the active controller.
-**Step 3.3** - You must now look at the back plane of the device. After the two controllers are shut down, the status LEDs on both the controllers should be blinking red. To turn off the device completely at this time, flip the power switches on both Power and Cooling Modules (PCMs) to the OFF position. This turns off the device.
+**Step 3c:** You must now look at the back plane of the device. After the two controllers are shut down, the status LEDs on both the controllers should be blinking red. To turn off the device completely at this time, flip the power switches on both Power and Cooling Modules (PCMs) to the OFF position. This turns off the device.
## Create a support request
stream-analytics Capture Event Hub Data Parquet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/capture-event-hub-data-parquet.md
Previously updated : 05/24/2022 Last updated : 08/15/2023

# Capture data from Event Hubs in Parquet format
-
-This article explains how to use the no code editor to automatically capture streaming data in Event Hubs in an Azure Data Lake Storage Gen2 account in Parquet format. You have the flexibility of specifying a time or size interval.
+This article explains how to use the no code editor to automatically capture streaming data in Event Hubs in an Azure Data Lake Storage Gen2 account in the Parquet format.
## Prerequisites

-- Your Azure Event Hubs and Azure Data Lake Storage Gen2 resources must be publicly accessible and can't be behind a firewall or secured in an Azure Virtual Network.
-- The data in your Event Hubs must be serialized in either JSON, CSV, or Avro format.
+- An Azure Event Hubs namespace with an event hub and an Azure Data Lake Storage Gen2 account with a container to store the captured data. These resources must be publicly accessible and can't be behind a firewall or secured in an Azure virtual network.
+
+ If you don't have an event hub, create one by following instructions from [Quickstart: Create an event hub](../event-hubs/event-hubs-create.md).
+
+  If you don't have a Data Lake Storage Gen2 account, create one by following instructions from [Create a storage account](../storage/blobs/create-data-lake-storage-account.md). (See the PowerShell sketch after this list.)
+- The data in your Event Hubs must be serialized in either JSON, CSV, or Avro format. For testing purposes, select **Generate data (preview)** on the left menu, select **Stocks data** for dataset, and then select **Send**.
+
+ :::image type="content" source="./media/capture-event-hub-data-parquet/stocks-data.png" alt-text="Screenshot showing the Generate data page to generate sample stocks data." lightbox="./media/capture-event-hub-data-parquet/stocks-data.png":::
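If you'd rather script these prerequisites, here's a minimal Azure PowerShell sketch, assuming the Az.EventHub and Az.Storage modules and a signed-in session (`Connect-AzAccount`); every resource name below is hypothetical:

```powershell
# Hypothetical names throughout; adjust to your environment.
$rg = 'my-capture-rg'
New-AzResourceGroup -Name $rg -Location 'eastus'

# Event Hubs namespace and an event hub to capture from.
New-AzEventHubNamespace -ResourceGroupName $rg -Name 'my-capture-ns' -Location 'eastus'
New-AzEventHub -ResourceGroupName $rg -NamespaceName 'my-capture-ns' -Name 'stocks-hub'

# Data Lake Storage Gen2 account (hierarchical namespace enabled) and an output container.
$account = New-AzStorageAccount -ResourceGroupName $rg -Name 'mycapturestore' `
    -Location 'eastus' -SkuName Standard_LRS -Kind StorageV2 -EnableHierarchicalNamespace $true
New-AzStorageContainer -Name 'parquet-output' -Context $account.Context
```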
## Configure a job to capture data

Use the following steps to configure a Stream Analytics job to capture data in Azure Data Lake Storage Gen2.

1. In the Azure portal, navigate to your event hub.
-1. Select **Features** > **Process Data**, and select **Start** on the **Capture data to ADLS Gen2 in Parquet format** card.
+1. On the left menu, select **Process Data** under **Features**. Then, select **Start** on the **Capture data to ADLS Gen2 in Parquet format** card.
+ :::image type="content" source="./media/capture-event-hub-data-parquet/process-event-hub-data-cards.png" alt-text="Screenshot showing the Process Event Hubs data start cards." lightbox="./media/capture-event-hub-data-parquet/process-event-hub-data-cards.png" :::
-1. Enter a **name** to identify your Stream Analytics job. Select **Create**.
- :::image type="content" source="./media/capture-event-hub-data-parquet/new-stream-analytics-job-name.png" alt-text="Screenshot showing the New Stream Analytics job window where you enter the job name." lightbox="./media/capture-event-hub-data-parquet/new-stream-analytics-job-name.png" :::
-1. Specify the **Serialization** type of your data in the Event Hubs and the **Authentication method** that the job will use to connect to Event Hubs. Then select **Connect**.
+1. Enter a **name** for your Stream Analytics job, and then select **Create**.
+
+ :::image type="content" source="./media/capture-event-hub-data-parquet/new-stream-analytics-job-name.png" alt-text="Screenshot showing the New Stream Analytics job window where you enter the job name." :::
+1. Specify the **Serialization** type of your data in the Event Hubs and the **Authentication method** that the job uses to connect to Event Hubs. Then select **Connect**.
+ :::image type="content" source="./media/capture-event-hub-data-parquet/event-hub-configuration.png" alt-text="Screenshot showing the Event Hubs connection configuration." lightbox="./media/capture-event-hub-data-parquet/event-hub-configuration.png" :::
-1. When the connection is established successfully, you'll see:
+1. When the connection is established successfully, you see:
    - Fields that are present in the input data. You can choose **Add field** or you can select the three dot symbol next to a field to optionally remove, rename, or change its name.
    - A live sample of incoming data in the **Data preview** table under the diagram view. It refreshes periodically. You can select **Pause streaming preview** to view a static view of the sample input.
+
    :::image type="content" source="./media/capture-event-hub-data-parquet/edit-fields.png" alt-text="Screenshot showing sample data under Data Preview." lightbox="./media/capture-event-hub-data-parquet/edit-fields.png" :::
1. Select the **Azure Data Lake Storage Gen2** tile to edit the configuration.
1. On the **Azure Data Lake Storage Gen2** configuration page, follow these steps:
    1. Select the subscription, storage account name and container from the drop-down menu.
    1. Once the subscription is selected, the authentication method and storage account key should be automatically filled in.
+ 1. Select **Parquet** for **Serialization** format.
+
+ :::image type="content" source="./media/capture-event-hub-data-parquet/job-top-settings.png" alt-text="Screenshot showing the Data Lake Storage Gen2 configuration page." lightbox="./media/capture-event-hub-data-parquet/job-top-settings.png":::
1. For streaming blobs, the directory path pattern is expected to be a dynamic value. It's required for the date to be a part of the file path for the blob, referenced as `{date}`. To learn about custom path patterns, see [Azure Stream Analytics custom blob output partitioning](stream-analytics-custom-path-patterns-blob-storage-output.md). (A short expansion sketch follows these steps.)
+
    :::image type="content" source="./media/capture-event-hub-data-parquet/blob-configuration.png" alt-text="First screenshot showing the Blob window where you edit a blob's connection configuration." lightbox="./media/capture-event-hub-data-parquet/blob-configuration.png" :::
1. Select **Connect**.
-1. When the connection is established, you'll see fields that are present in the output data.
+1. When the connection is established, you see fields that are present in the output data.
1. Select **Save** on the command bar to save your configuration.
+
+ :::image type="content" source="./media/capture-event-hub-data-parquet/save-configuration.png" alt-text="Screenshot showing the Save button selected on the command bar." :::
1. Select **Start** on the command bar to start the streaming flow to capture data. Then in the Start Stream Analytics job window:
    1. Choose the output start time.
+ 1. Select the pricing plan.
1. Select the number of Streaming Units (SU) that the job runs with. SU represents the computing resources that are allocated to execute a Stream Analytics job. For more information, see [Streaming Units in Azure Stream Analytics](stream-analytics-streaming-unit-consumption.md).
- 1. In the **Choose Output data error handling** list, select the behavior you want when the output of the job fails due to data error. Select **Retry** to have the job retry until it writes successfully or select another option.
+
:::image type="content" source="./media/capture-event-hub-data-parquet/start-job.png" alt-text="Screenshot showing the Start Stream Analytics job window where you set the output start time, streaming units, and error handling." lightbox="./media/capture-event-hub-data-parquet/start-job.png" :::
+1. You should see the Stream Analytics job in the **Stream Analytics job** tab of the **Process data** page for your event hub.
-## Verify output
-Verify that the Parquet files are generated in the Azure Data Lake Storage container.
-
+ :::image type="content" source="./media/capture-event-hub-data-parquet/process-data-page-jobs.png" alt-text="Screenshot showing the Stream Analytics job on the Process data page." lightbox="./media/capture-event-hub-data-parquet/process-data-page-jobs.png" :::
+
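As an aside, here's a small sketch of how a dynamic path pattern could expand, assuming the default `{date}` format (`yyyy/MM/dd`) and `{time}` format (`HH`); the `output/` prefix is hypothetical:

```powershell
# Illustration only, not the service's implementation: expand 'output/{date}/{time}'
# for an event arriving at 2023-08-15 10:00 UTC.
$arrival = [datetime]::new(2023, 8, 15, 10, 0, 0, [System.DateTimeKind]::Utc)
$path = 'output/{0:yyyy}/{0:MM}/{0:dd}/{0:HH}' -f $arrival
$path  # -> output/2023/08/15/10
```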
+## Verify output
-The new job is shown on the **Stream Analytics jobs** tab. Select **Open metrics** to monitor it.
+1. On the Event Hubs instance page for your event hub, select **Generate data**, select **Stocks data** for dataset, and then select **Send** to send some sample data to the event hub.
+1. Verify that the Parquet files are generated in the Azure Data Lake Storage container. (A PowerShell check is sketched at the end of this section.)
+ :::image type="content" source="./media/capture-event-hub-data-parquet/verify-captured-data.png" alt-text="Screenshot showing the generated Parquet files in the ADLS container." lightbox="./media/capture-event-hub-data-parquet/verify-captured-data.png" :::
+1. Select **Process data** on the left menu. Switch to the **Stream Analytics jobs** tab. Select **Open metrics** to monitor it.
-Here's an example screenshot of metrics showing input and output events.
+ :::image type="content" source="./media/capture-event-hub-data-parquet/open-metrics-link.png" alt-text="Screenshot showing Open Metrics link selected." lightbox="./media/capture-event-hub-data-parquet/open-metrics-link.png" :::
+
+ Here's an example screenshot of metrics showing input and output events.
+ :::image type="content" source="./media/capture-event-hub-data-parquet/job-metrics.png" alt-text="Screenshot showing metrics of the Stream Analytics job." lightbox="./media/capture-event-hub-data-parquet/job-metrics.png" :::
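If you prefer to check the capture output from a shell instead of the portal, here's a minimal sketch using Az.Storage; the account and container names are the hypothetical ones used earlier:

```powershell
# Lists Parquet blobs written by the capture job; names are hypothetical.
$account = Get-AzStorageAccount -ResourceGroupName 'my-capture-rg' -Name 'mycapturestore'
Get-AzStorageBlob -Container 'parquet-output' -Context $account.Context |
    Where-Object Name -like '*.parquet' |
    Select-Object Name, Length, LastModified
```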
## Next steps
stream-analytics Data Protection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/data-protection.md
Azure Stream Analytics persists the following metadata and data in order to run:
* Connection details of the resources used by your Stream Analytics job
-To help you meet your compliance obligations in any regulated industry or environment, you can read more about [Microsoft's compliance offerings](https://gallery.technet.microsoft.com/Overview-of-Azure-c1be3942).
## In-Region Data Residency
+
Azure Stream Analytics stores customer data and other metadata described above. Customer data is stored by Azure Stream Analytics in a single region by default, so this service automatically satisfies in region data residency requirements including those specified in the [Trust Center](https://azuredatacentermap.azurewebsites.net/). Additionally, you can choose to store all data assets (customer data and other metadata) related to your stream analytics job in a single region by encrypting them in a storage account of your choice.
If the storage account you want to use is in an Azure Virtual Network, you must
Encrypt your storage account to secure all of your data and explicitly choose the location of your private data.
-To help you meet your compliance obligations in any regulated industry or environment, you can read more about [Microsoft's compliance offerings](https://gallery.technet.microsoft.com/Overview-of-Azure-c1be3942).
Use the following steps to configure your storage account for private data assets. This configuration is made from your Stream Analytics job, not from your storage account.

1. Sign in to the [Azure portal](https://portal.azure.com/).
Any private data that is required to be persisted by Stream Analytics is stored
Connection details of your resources, which are used by your Stream Analytics job, are also stored. Encrypt your storage account to secure all of your data.
-To help you meet your compliance obligations in any regulated industry or environment, you can read more about [Microsoft's compliance offerings](https://gallery.technet.microsoft.com/Overview-of-Azure-c1be3942).
- ## Enables Data Residency You may use this feature to enforce any data residency requirements you may have by providing a storage account accordingly.
stream-analytics Event Hubs Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/event-hubs-managed-identity.md
Last updated 05/15/2023
-# Use managed identities to access Event Hubs from an Azure Stream Analytics job
+# Use managed identities to access Event Hubs from an Azure Stream Analytics job
Azure Stream Analytics supports Managed Identity authentication for both Azure Event Hubs input and output. Managed identities eliminate the limitations of user-based authentication methods, like the need to reauthenticate because of password changes or user token expirations that occur every 90 days. When you remove the need to manually authenticate, your Stream Analytics deployments can be fully automated. 
stream-analytics No Code Transform Filter Ingest Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/no-code-transform-filter-ingest-sql.md
Previously updated : 06/07/2022 Last updated : 06/13/2023

# Use Azure Stream Analytics no-code editor to transform and store data in Azure SQL database
stream-analytics Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/policy-reference.md
Title: Built-in policy definitions for Azure Stream Analytics
description: Lists Azure Policy built-in policy definitions for Azure Stream Analytics. These built-in policy definitions provide common approaches to managing your Azure resources.
Previously updated : 08/08/2023 Last updated : 08/30/2023
stream-analytics Powerbi Output Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/powerbi-output-managed-identity.md
Previously updated : 05/30/2021 Last updated : 08/16/2023

# Use Managed Identity to authenticate your Azure Stream Analytics job to Power BI
-[Managed Identity authentication](../active-directory/managed-identities-azure-resources/overview.md) for output to Power BI gives Stream Analytics jobs direct access to a workspace within your Power BI account. This feature allows for deployments of Stream Analytics jobs to be fully automated, since it is no longer required for a user to interactively log in to Power BI via the Azure portal. Additionally, long running jobs that write to Power BI are now better supported, since you will not need to periodically reauthorize the job.
+[Managed Identity authentication](../active-directory/managed-identities-azure-resources/overview.md) for output to Power BI gives Stream Analytics jobs direct access to a workspace within your Power BI account. This feature allows for deployments of Stream Analytics jobs to be fully automated, since it's no longer required for a user to interactively sign in to Power BI via the Azure portal. Additionally, long running jobs that write to Power BI are now better supported, since you won't need to periodically reauthorize the job.
This article shows you how to enable Managed Identity for the Power BI output(s) of a Stream Analytics job through the Azure portal and through an Azure Resource Manager deployment.
+> [!NOTE]
+> Only **system-assigned** managed identities are supported with the Power BI output. Currently, using user-assigned managed identities with the Power BI output isn't supported.
+ ## Prerequisites
-The following are required for using this feature:
+You must have the following prerequisites before you use this feature:
- A Power BI account with a [Pro license](/power-bi/service-admin-purchasing-power-bi-pro).
-
-- An upgraded workspace within your Power BI account. See [Power BI's announcement](https://powerbi.microsoft.com/blog/announcing-new-workspace-experience-general-availability-ga/) of this feature for more details.
+- An upgraded workspace within your Power BI account. For more information, see [Power BI's announcement](https://powerbi.microsoft.com/blog/announcing-new-workspace-experience-general-availability-ga/).
## Create a Stream Analytics job using the Azure portal
-1. Create a new Stream Analytics job or open an existing job in the Azure portal. From the menu bar located on the left side of the screen, select **Managed Identity** located under **Configure**. Ensure that "Use System-assigned Managed Identity" is selected and then select the **Save** button on the bottom of the screen.
+1. Create a new Stream Analytics job or open an existing job in the Azure portal.
+1. From the menu bar located on the left side of the screen, select **Managed Identity** located under **Settings**.
- ![Configure Stream Analytics managed identity](./media/common/stream-analytics-enable-managed-identity.png)
+ :::image type="content" source="./media/stream-analytics-powerbi-output-managed-identity/managed-identity-select-button.png" alt-text="Screenshot showing the Managed Identity page with Select identity button selected." lightbox="./media/stream-analytics-powerbi-output-managed-identity/managed-identity-select-button.png":::
+1. On the **Select identity** page, select **System assigned identity** or **User assigned identity**. If you select the latter option, specify the managed identity you want to use. Then, select **Save**.
+ :::image type="content" source="./media/stream-analytics-powerbi-output-managed-identity/system-assigned-identity.png" alt-text="Screenshot showing the Select identity page with System assigned identity selected." lightbox="./media/stream-analytics-powerbi-output-managed-identity/system-assigned-identity.png":::
+1. On the **Managed identity** page, confirm that you see the **Principal ID** and **Principal name** assigned to your Stream Analytics job. The principal name should be the same as your Stream Analytics job name.
2. Before configuring the output, give the Stream Analytics job access to your Power BI workspace by following the directions in the [Give the Stream Analytics job access to your Power BI workspace](#give-the-stream-analytics-job-access-to-your-power-bi-workspace) section of this article.
+3. Navigate to the **Outputs** section of your Stream Analytics job, select **+ Add**, and then choose **Power BI**. Then, select the **Authorize** button and sign in with your Power BI account.
-3. Navigate to the **Outputs** section of your Stream Analytic's job, select **+ Add**, and then choose **Power BI**. Then, select the **Authorize** button and log in with your Power BI account.
-
- ![Authorize with Power BI account](./media/stream-analytics-powerbi-output-managed-identity/stream-analytics-authorize-powerbi.png)
+ [ ![Authorize with Power BI account](./media/stream-analytics-powerbi-output-managed-identity/stream-analytics-authorize-powerbi.png) ](./media/stream-analytics-powerbi-output-managed-identity/stream-analytics-authorize-powerbi.png#lightbox)
4. Once authorized, a dropdown list will be populated with all of the workspaces you have access to. Select the workspace that you authorized in the previous step. Then select **Managed Identity** as the "Authentication mode". Finally, select the **Save** button.
- ![Configure Power BI output with Managed Identity](./media/stream-analytics-powerbi-output-managed-identity/stream-analytics-configure-powerbi-with-managed-id.png)
+ :::image type="content" source="./media/stream-analytics-powerbi-output-managed-identity/stream-analytics-configure-powerbi-with-managed-id.png" alt-text="Screenshot showing the Power BI output configuration with Managed identity authentication mode selected." lightbox="./media/stream-analytics-powerbi-output-managed-identity/stream-analytics-configure-powerbi-with-managed-id.png":::
## Azure Resource Manager deployment
Azure Resource Manager allows you to fully automate the deployment of your Strea
}
```
- If you plan to use the Power BI REST API to add the Stream Analytics job to your Power BI workspace, make note of the returned "principalId".
+ If you plan to use the Power BI REST API to add the Stream Analytics job to your Power BI workspace, make note of the returned `principalId`.
3. Now that the job is created, continue to the [Give the Stream Analytics job access to your Power BI workspace](#give-the-stream-analytics-job-access-to-your-power-bi-workspace) section of this article.
Now that the Stream Analytics job has been created, it can be given access to a
### Use the Power BI UI

> [!Note]
- > In order to add the Stream Analytics job to your Power BI workspace using the UI, you also have to enable service principal access in the **Developer settings** in the Power BI admin portal. See [Get started with a service principal](/power-bi/developer/embed-service-principal) for more details.
+ > In order to add the Stream Analytics job to your Power BI workspace using the UI, you also have to enable service principal access in the **Developer settings** in the Power BI admin portal. For more information, see [Get started with a service principal](/power-bi/developer/embed-service-principal).
-1. Navigate to the workspace's access settings. See this article for more details: [Give access to your workspace](/power-bi/service-create-the-new-workspaces#give-access-to-your-workspace).
+1. Navigate to the workspace's access settings. For more information, see [Give access to your workspace](/power-bi/service-create-the-new-workspaces#give-access-to-your-workspace).
2. Type the name of your Stream Analytics job in the text box and select **Contributor** as the access level.
3. Select **Add** and close the pane.
- ![Add Stream Analytics job to Power BI workspace](./media/stream-analytics-powerbi-output-managed-identity/stream-analytics-add-job-to-powerbi-workspace.png)
+ [ ![Add Stream Analytics job to Power BI workspace](./media/stream-analytics-powerbi-output-managed-identity/stream-analytics-add-job-to-powerbi-workspace.png) ](./media/stream-analytics-powerbi-output-managed-identity/stream-analytics-add-job-to-powerbi-workspace.png#lightbox)
### Use the Power BI PowerShell cmdlets
Now that the Stream Analytics job has been created, it can be given access to a
> [!Important]
> Please ensure you are using version 1.0.821 or later of the cmdlets.
-```powershell
-Install-Module -Name MicrosoftPowerBIMgmt
-```
-
-2. Log in to Power BI.
-
-```powershell
-Login-PowerBI
-```
+ ```powershell
+ Install-Module -Name MicrosoftPowerBIMgmt
+ ```
+2. Sign in to Power BI.
+ ```powershell
+ Login-PowerBI
+ ```
3. Add your Stream Analytics job as a Contributor to the workspace.
-```powershell
-Add-PowerBIWorkspaceUser -WorkspaceId <group-id> -PrincipalId <principal-id> -PrincipalType App -AccessRight Contributor
-```
+ ```powershell
+ Add-PowerBIWorkspaceUser -WorkspaceId <group-id> -PrincipalId <principal-id> -PrincipalType App -AccessRight Contributor
+ ```
### Use the Power BI REST API
Request Body
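For orientation, here's a hedged sketch of the Add Group User call this section describes, assuming you already hold a Power BI access token with workspace admin rights; `<group-id>` and `<principal-id>` are placeholders:

```powershell
# Placeholders throughout; $token is assumed to be a valid Power BI access token.
$body = @{
    identifier           = '<principal-id>'   # the job's managed identity principal ID
    principalType        = 'App'
    groupUserAccessRight = 'Contributor'
} | ConvertTo-Json

Invoke-RestMethod -Method Post `
    -Uri 'https://api.powerbi.com/v1.0/myorg/groups/<group-id>/users' `
    -Headers @{ Authorization = "Bearer $token" } `
    -ContentType 'application/json' `
    -Body $body
```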
### Use a Service Principal to grant permission for an ASA job's Managed Identity
-For automated deployments, using an interactive login to give an ASA job access to a Power BI workspace is not possible. This can be done be using service principal to grant permission for an ASA job's managed identity. This is possible using PowerShell:
+For automated deployments, using an interactive sign-in to give an ASA job access to a Power BI workspace isn't possible. Instead, you can use a service principal to grant permission for the ASA job's managed identity, as shown in the following PowerShell example:
```powershell
Connect-PowerBIServiceAccount -ServicePrincipal -TenantId "<tenant-id>" -CertificateThumbprint "<thumbprint>" -ApplicationId "<app-id>"
Add-PowerBIWorkspaceUser -WorkspaceId <group-id> -PrincipalId <principal-id> -Pr
## Remove Managed Identity
-The Managed Identity created for a Stream Analytics job is deleted only when the job is deleted. There is no way to delete the Managed Identity without deleting the job. If you no longer want to use the Managed Identity, you can change the authentication method for the output. The Managed Identity will continue to exist until the job is deleted, and will be used if you decide to used Managed Identity authentication again.
+The Managed Identity created for a Stream Analytics job is deleted only when the job is deleted. There's no way to delete the Managed Identity without deleting the job. If you no longer want to use the Managed Identity, you can change the authentication method for the output. The Managed Identity will continue to exist until the job is deleted, and will be used if you decide to use Managed Identity authentication again.
## Limitations

Below are the limitations of this feature:

-- Classic Power BI workspaces are not supported.
+- Classic Power BI workspaces aren't supported.
- Azure accounts without Azure Active Directory.
-- Multi-tenant access is not supported. The Service principal created for a given Stream Analytics job must reside in the same Azure Active Directory tenant in which the job was created, and cannot be used with a resource that resides in a different Azure Active Directory tenant.
+- Multi-tenant access isn't supported. The Service principal created for a given Stream Analytics job must reside in the same Azure Active Directory tenant in which the job was created, and can't be used with a resource that resides in a different Azure Active Directory tenant.
-- [User Assigned Identity](../active-directory/managed-identities-azure-resources/overview.md) is not supported. This means you are not able to enter your own service principal to be used by their Stream Analytics job. The service principal must be generated by Azure Stream Analytics.
+- [User Assigned Identity](../active-directory/managed-identities-azure-resources/overview.md) isn't supported. This means you aren't able to enter your own service principal to be used by their Stream Analytics job. The service principal must be generated by Azure Stream Analytics.
## Next steps
stream-analytics Quick Create Azure Resource Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/quick-create-azure-resource-manager.md
Previously updated : 06/07/2023 Last updated : 08/07/2023

# Quickstart: Create an Azure Stream Analytics job by using an ARM template
stream-analytics Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Stream Analytics
description: Lists Azure Policy Regulatory Compliance controls available for Azure Stream Analytics. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources.
Previously updated : 08/03/2023 Last updated : 08/25/2023
stream-analytics Sql Reference Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/sql-reference-data.md
Use the following steps to add Azure SQL Database as a reference input source us
1. Create a Stream Analytics job.
-2. Create a storage account to be used by the Stream Analytics job.
+2. Create a storage account to be used by the Stream Analytics job.
+ > [!IMPORTANT]
+ > Azure Stream Analytics retains snapshots within this storage account. When you configure the retention policy, ensure that the chosen timespan covers the desired recovery duration for your Stream Analytics job.
-3. Create your Azure SQL Database with a data set to be used as reference data by the Stream Analytics job.
+4. Create your Azure SQL Database with a data set to be used as reference data by the Stream Analytics job.
### Define SQL Database reference data input
Use the following steps to add Azure SQL Database as a reference input source us
2. Become familiar with the [Stream Analytics tools for Visual Studio](stream-analytics-quick-create-vs.md) quickstart.
3. Create a storage account.
+ > [!IMPORTANT]
+ > Azure Stream Analytics retains snapshots within this storage account. When you configure the retention policy, ensure that the chosen timespan covers the desired recovery duration for your Stream Analytics job.
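As a minimal sketch, the snapshot storage account could be created with Az.Storage; the names are hypothetical:

```powershell
# Hypothetical names; any general-purpose v2 account works for snapshots.
New-AzStorageAccount -ResourceGroupName 'my-asa-rg' -Name 'myasasnapshots' `
    -Location 'eastus' -SkuName Standard_LRS -Kind StorageV2
```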
### Create a SQL Database table
stream-analytics Stream Analytics Get Started With Azure Stream Analytics To Process Data From Iot Devices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-get-started-with-azure-stream-analytics-to-process-data-from-iot-devices.md
Previously updated : 03/23/2022 Last updated : 08/15/2023

# Process real-time IoT data streams with Azure Stream Analytics
In this article, you learn how to create stream-processing logic to gather data
## Scenario
-Contoso, which is a company in the industrial automation space, has completely automated its manufacturing process. The machinery in this plant has sensors that are capable of emitting streams of data in real time. In this scenario, a production floor manager wants to have real-time insights from the sensor data to look for patterns and take actions on them. You can use Stream Analytics Query Language (SAQL) over the sensor data to find interesting patterns from the incoming stream of data.
+Contoso, a company in the industrial automation space, has automated its manufacturing process. The machinery in this plant has sensors that are capable of emitting streams of data in real time. In this scenario, a production floor manager wants to have real-time insights from the sensor data to look for patterns and take actions on them. You can use Stream Analytics Query Language (SAQL) over the sensor data to find interesting patterns from the incoming stream of data.
-In this example, the data is generated from a Texas Instruments sensor tag device. The payload of the data is in JSON format and looks like the following:
+In this example, the data is generated from a Texas Instruments sensor tag device. The payload of the data is in JSON format as shown in the following sample snippet:
```json
{
In this example, the data is generated from a Texas Instruments sensor tag devic
}
```
-In a real-world scenario, you could have hundreds of these sensors generating events as a stream. Ideally, a gateway device would run code to push these events to [Azure Event Hubs](https://azure.microsoft.com/services/event-hubs/) or [Azure IoT Hubs](https://azure.microsoft.com/services/iot-hub/). Your Stream Analytics job would ingest these events from Event Hubs or Iot Hubs and run real-time analytics queries against the streams. Then, you could send the results to one of the [supported outputs](stream-analytics-define-outputs.md).
+In a real-world scenario, you could have hundreds of these sensors generating events as a stream. Ideally, a gateway device would run code to push these events to [Azure Event Hubs](https://azure.microsoft.com/services/event-hubs/) or [Azure IoT Hubs](https://azure.microsoft.com/services/iot-hub/). Your Stream Analytics job would ingest these events from Event Hubs or IoT Hubs and run real-time analytics queries against the streams. Then, you could send the results to one of the [supported outputs](stream-analytics-define-outputs.md).
-For ease of use, this getting started guide provides a sample data file, which was captured from real sensor tag devices. You can run queries on the sample data and see results. In subsequent tutorials, you will learn how to connect your job to inputs and outputs and deploy them to the Azure service.
+For ease of use, this getting started guide provides a sample data file, which was captured from real sensor tag devices. You can run queries on the sample data and see results. In subsequent tutorials, you learn how to connect your job to inputs and outputs and deploy them to the Azure service.
## Create a Stream Analytics job
-1. In the [Azure portal](https://portal.azure.com), select **+ Create a resource** from the left navigation menu. Then, select **Stream Analytics job** from **Analytics**.
+1. Navigate to the [Azure portal](https://portal.azure.com).
+1. On the left navigation menu, select **All services**, select **Analytics**, hover the mouse over **Stream Analytics jobs**, and then select **Create**.
- ![Create a new Stream Analytics job](./media/stream-analytics-get-started-with-iot-devices/stream-analytics-get-started-with-iot-devices-02.png)
-
-1. Enter a unique job name and verify the subscription is the correct one for your job. Create a new resource group or select an existing one from your subscription.
-
-1. Select a location for your job. Use the same location for your resource group and all resources to increased processing speed and reduced of costs. After you've made the configurations, select **Create**.
+ :::image type="content" source="./media/stream-analytics-get-started-with-iot-devices/stream-analytics-get-started-with-iot-devices-02.png" alt-text="Screenshot that shows the selection of Create button for a Stream Analytics job." lightbox="./media/stream-analytics-get-started-with-iot-devices/stream-analytics-get-started-with-iot-devices-02.png":::
+1. On the **New Stream Analytics job** page, follow these steps:
+ 1. For **Subscription**, select your **Azure subscription**.
+ 1. For **Resource group**, select an existing resource group or create a resource group.
+ 1. For **Name**, enter a unique name for the Stream Analytics job.
+ 1. Select the **Region** in which you want to deploy the Stream Analytics job. Use the same location for your resource group and all resources to increase the processing speed and reduce costs.
+ 1. Select **Review + create**.
- ![Create a new Stream Analytics job details](./media/stream-analytics-get-started-with-iot-devices/stream-analytics-get-started-with-iot-devices-03.png)
+ :::image type="content" source="./media/stream-analytics-get-started-with-iot-devices/stream-analytics-get-started-with-iot-devices-03.png" alt-text="Screenshot that shows the New Stream Analytics job page.":::
+1. On the **Review + create** page, review settings, and select **Create**.
+1. After the deployment succeeds, select **Go to resource** to navigate to the **Stream Analytics job** page for your Stream Analytics job.
## Create an Azure Stream Analytics query
-The next step after your job is created is to write a query. You can test queries against sample data without connecting an input or output to your job.
-
-Download the [HelloWorldASA-InputStream.json](https://github.com/Azure/azure-stream-analytics/blob/master/Samples/GettingStarted/HelloWorldASA-InputStream.json
-) from GitHub. Then, navigate to your Azure Stream Analytics job in the Azure portal.
-
-Select **Query** under **Job topology** from the left menu. Then select **Upload sample input**. Upload the `HelloWorldASA-InputStream.json` file, and select **Ok**.
+After your job is created, write a query. You can test queries against sample data without connecting an input or output to your job.
-![Stream Analytics dashboard query tile](./media/stream-analytics-get-started-with-iot-devices/stream-analytics-get-started-with-iot-devices-05.png)
+1. Download the [HelloWorldASA-InputStream.json](https://github.com/Azure/azure-stream-analytics/blob/master/Samples/GettingStarted/HelloWorldASA-InputStream.json
+) from GitHub.
+1. On the **Azure Stream Analytics job** page in the Azure portal, select **Query** under **Job topology** from the left menu.
+1. Select **Upload sample input**, select the `HelloWorldASA-InputStream.json` file you downloaded, and select **OK**.
-Notice that a preview of the data is automatically populated in the **Input preview** table.
+ :::image type="content" source="./media/stream-analytics-get-started-with-iot-devices/stream-analytics-get-started-with-iot-devices-05.png" alt-text="Screenshot that shows the **Query** page with **Upload sample input** selected." lightbox="./media/stream-analytics-get-started-with-iot-devices/stream-analytics-get-started-with-iot-devices-05.png":::
+1. Notice that a preview of the data is automatically populated in the **Input preview** table.
-![Preview of sample input data](./media/stream-analytics-get-started-with-iot-devices/input-preview.png)
+ :::image type="content" source="./media/stream-analytics-get-started-with-iot-devices/input-preview.png" alt-text="Screenshot that shows sample input data in the Input preview tab.":::
### Query: Archive your raw data

The simplest form of query is a pass-through query that archives all input data to its designated output. This query is the default query populated in a new Azure Stream Analytics job.
-```sql
-SELECT
- *
-INTO
- Output
-FROM
- InputStream
-```
+1. In the **Query** window, enter the following query, and then select **Test query** on the toolbar.
-Select **Test query** and view the results in the **Test results** table.
+ ```sql
+ SELECT
+ *
+ INTO
+ youroutputalias
+ FROM
+ yourinputalias
+ ```
+2. View the results in the **Test results** tab in the bottom pane.
-![Test results for Stream Analytics query](./media/stream-analytics-get-started-with-iot-devices/stream-analytics-get-started-with-iot-devices-07.png)
+ :::image type="content" source="./media/stream-analytics-get-started-with-iot-devices/stream-analytics-get-started-with-iot-devices-07.png" alt-text="Screenshot that shows the sample query and its results.":::
### Query: Filter the data based on a condition
-Let's try to filter the results based on a condition. We would like to show results for only those events that come from "sensorA."
-
-```sql
-SELECT
- time,
- dspl AS SensorName,
- temp AS Temperature,
- hmdt AS Humidity
-INTO
- Output
-FROM
- InputStream
-WHERE dspl='sensorA'
-```
+Let's update the query to filter the results based on a condition. For example, the following query shows events that come from `sensorA`.
+
+1. Update the query with the following sample:
-Paste the query in the editor and select **Test query** to review the results.
+ ```sql
+ SELECT
+ time,
+ dspl AS SensorName,
+ temp AS Temperature,
+ hmdt AS Humidity
+ INTO
+ youroutputalias
+ FROM
+ yourinputalias
+ WHERE dspl='sensorA'
+ ```
+2. Select **Test query** to see the results of the query.
-![Filtering a data stream](./media/stream-analytics-get-started-with-iot-devices/stream-analytics-get-started-with-iot-devices-08.png)
+ :::image type="content" source="./media/stream-analytics-get-started-with-iot-devices/stream-analytics-get-started-with-iot-devices-08.png" alt-text="Screenshot that shows the query results with the filter.":::
### Query: Alert to trigger a business workflow

Let's make our query more detailed. For every type of sensor, we want to monitor average temperature per 30-second window and display results only if the average temperature is above 100 degrees.
-```sql
-SELECT
- System.Timestamp AS OutputTime,
- dspl AS SensorName,
- Avg(temp) AS AvgTemperature
-INTO
- Output
-FROM
- InputStream TIMESTAMP BY time
-GROUP BY TumblingWindow(second,30),dspl
-HAVING Avg(temp)>100
-```
+1. Update the query to:
+
+ ```sql
+ SELECT
+ System.Timestamp AS OutputTime,
+ dspl AS SensorName,
+ Avg(temp) AS AvgTemperature
+ INTO
+ youroutputalias
+ FROM
+ yourinputalias TIMESTAMP BY time
+ GROUP BY TumblingWindow(second,30),dspl
+ HAVING Avg(temp)>100
+ ```
+1. Select **Test query** to see the results of the query.
-![30-second filter query](./media/stream-analytics-get-started-with-iot-devices/stream-analytics-get-started-with-iot-devices-10.png)
+ :::image type="content" source="./media/stream-analytics-get-started-with-iot-devices/stream-analytics-get-started-with-iot-devices-10.png" alt-text="Screenshot that shows the query with a tumbling window.":::
-You should see results that contain only 245 rows and names of sensors where the average temperate is greater than 100. This query groups the stream of events by **dspl**, which is the sensor name, over a **Tumbling Window** of 30 seconds. Temporal queries must state how you want time to progress. By using the **TIMESTAMP BY** clause, you have specified the **OUTPUTTIME** column to associate times with all temporal calculations. For detailed information, read about [Time Management](/stream-analytics-query/time-management-azure-stream-analytics) and [Windowing functions](/stream-analytics-query/windowing-azure-stream-analytics).
+   You should see results that contain only 245 rows and names of sensors where the average temperature is greater than 100. This query groups the stream of events by **dspl**, which is the sensor name, over a **Tumbling Window** of 30 seconds. Temporal queries must state how you want time to progress. By using the **TIMESTAMP BY** clause, you have specified the **OUTPUTTIME** column to associate times with all temporal calculations. For detailed information, read about [Time Management](/stream-analytics-query/time-management-azure-stream-analytics) and [Windowing functions](/stream-analytics-query/windowing-azure-stream-analytics).
### Query: Detect absence of events
-How can we write a query to find a lack of input events? Let's find the last time that a sensor sent data and then did not send events for the next 5 seconds.
-
-```sql
-SELECT
- t1.time,
- t1.dspl AS SensorName
-INTO
- Output
-FROM
- InputStream t1 TIMESTAMP BY time
-LEFT OUTER JOIN InputStream t2 TIMESTAMP BY time
-ON
- t1.dspl=t2.dspl AND
- DATEDIFF(second,t1,t2) BETWEEN 1 and 5
-WHERE t2.dspl IS NULL
-```
+How can we write a query to find a lack of input events? Let's find the last time that a sensor sent data and then didn't send events for the next 5 seconds.
+
+1. Update the query to:
+
+ ```sql
+ SELECT
+ t1.time,
+ t1.dspl AS SensorName
+ INTO
+ youroutputalias
+ FROM
+ yourinputalias t1 TIMESTAMP BY time
+ LEFT OUTER JOIN yourinputalias t2 TIMESTAMP BY time
+ ON
+ t1.dspl=t2.dspl AND
+ DATEDIFF(second,t1,t2) BETWEEN 1 and 5
+ WHERE t2.dspl IS NULL
+ ```
+2. Select **Test query** to see the results of the query.
+
+ :::image type="content" source="./media/stream-analytics-get-started-with-iot-devices/stream-analytics-get-started-with-iot-devices-11.png" alt-text="Screenshot that shows the query that detects absence of events.":::
-![Detect absence of events](./media/stream-analytics-get-started-with-iot-devices/stream-analytics-get-started-with-iot-devices-11.png)
-Here we use a **LEFT OUTER** join to the same data stream (self-join). For an **INNER** join, a result is returned only when a match is found. For a **LEFT OUTER** join, if an event from the left side of the join is unmatched, a row that has NULL for all the columns of the right side is returned. This technique is very useful to find an absence of events. For more information, see [JOIN](/stream-analytics-query/join-azure-stream-analytics).
+ Here we use a **LEFT OUTER** join to the same data stream (self-join). For an **INNER** join, a result is returned only when a match is found. For a **LEFT OUTER** join, if an event from the left side of the join is unmatched, a row that has NULL for all the columns of the right side is returned. This technique is useful to find an absence of events. For more information, see [JOIN](/stream-analytics-query/join-azure-stream-analytics).
## Conclusion
-The purpose of this article is to demonstrate how to write different Stream Analytics Query Language queries and see results in the browser. However, this is just to get you started. Stream Analytics supports a variety of inputs and outputs and can even use functions in Azure Machine Learning to make it a robust tool for analyzing data streams. For more information about how to write queries, read the article about [common query patterns](stream-analytics-stream-analytics-query-patterns.md).
+The purpose of this article is to demonstrate how to write different Stream Analytics Query Language queries and see results in the browser. However, this article is just to get you started. Stream Analytics supports various inputs and outputs and can even use functions in Azure Machine Learning to make it a robust tool for analyzing data streams. For more information about how to write queries, read the article about [common query patterns](stream-analytics-stream-analytics-query-patterns.md).
stream-analytics Stream Analytics Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-introduction.md
As a managed service, Stream Analytics guarantees event processing with a 99.9%
In terms of security, Azure Stream Analytics encrypts all incoming and outgoing communications and supports TLS 1.2. Built-in checkpoints are also encrypted. Stream Analytics doesn't store the incoming data since all processing is done in-memory. Stream Analytics also supports Azure Virtual Networks (VNET) when running a job in a [Stream Analytics Cluster](./cluster-overview.md).
-### Compliance
-
-Azure Stream Analytics follows multiple compliance certifications as described in the [overview of Azure compliance](https://gallery.technet.microsoft.com/Overview-of-Azure-c1be3942).
## Performance
stream-analytics Stream Analytics Streaming Unit Consumption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-streaming-unit-consumption.md
There's an automatic conversion of Streaming Units which occurs from REST API la
| ... | ... | ... |
-## Understanding consumption and memory utilization
+## Understand consumption and memory utilization
To achieve low latency stream processing, Azure Stream Analytics jobs perform all processing in memory. When running out of memory, the streaming job fails. As a result, for a production job, it's important to monitor a streaming job's resource usage, and make sure there are enough resources allocated to keep the jobs running 24/7. The SU % utilization metric, which ranges from 0% to 100%, describes the memory consumption of your workload. For a streaming job with minimal footprint, this metric is usually between 10% to 20%. If SU% utilization is high (above 80%), or if input events get backlogged (even with a low SU% utilization since it doesn't show CPU usage), your workload likely requires more compute resources, which requires you to increase the number of streaming units. It's best to keep the SU metric below 80% to account for occasional spikes. To react to increased workloads and increase streaming units, consider setting an alert of 80% on the SU Utilization metric. Also, you can use watermark delay and backlogged events metrics to see if there's an impact.
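As a hedged sketch of such an alert using Az.Monitor, assuming the job's SU % utilization surfaces as the `ResourceUtilization` metric; the resource IDs and names are hypothetical:

```powershell
# Fires when SU % utilization exceeds 80% over a 5-minute window; names are hypothetical.
$criteria = New-AzMetricAlertRuleV2Criteria -MetricName 'ResourceUtilization' `
    -TimeAggregation Maximum -Operator GreaterThan -Threshold 80

Add-AzMetricAlertRuleV2 -Name 'asa-su-utilization-alert' -ResourceGroupName 'my-asa-rg' `
    -TargetResourceId '/subscriptions/<sub-id>/resourceGroups/my-asa-rg/providers/Microsoft.StreamAnalytics/streamingjobs/my-job' `
    -WindowSize 00:05:00 -Frequency 00:05:00 -Severity 2 -Condition $criteria
```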
stream-analytics Stream Analytics User Assigned Managed Identity Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-user-assigned-managed-identity-overview.md
Previously updated : 09/29/2022 Last updated : 08/15/2023

# User-assigned managed identities for Azure Stream Analytics
With support for both system-assigned identity and user-assigned identity, here
2. You can switch from an existing user-assigned identity to a newly created user-assigned identity. The previous identity is not removed from the storage access control list.
3. You cannot add multiple identities to your stream analytics job.
4. Currently we do not support deleting an identity from a stream analytics job. You can replace it with another user-assigned or system-assigned identity.
+5. You cannot use a user-assigned identity to authenticate via allow-trusted services.
## Next steps
synapse-analytics Overview Cognitive Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/machine-learning/overview-cognitive-services.md
The tutorial, [Pre-requisites for using Azure AI services in Azure Synapse](tuto
- Group: divides a group of faces into disjoint groups based on similarity ([Scala](https://mmlspark.blob.core.windows.net/docs/0.10.2/scala/com/microsoft/azure/synapse/ml/cognitive/GroupFaces.html), [Python](https://mmlspark.blob.core.windows.net/docs/0.10.2/pyspark/synapse.ml.cognitive.html#module-synapse.ml.cognitive.GroupFaces))

### Speech
-[**Speech Services**](https://azure.microsoft.com/services/cognitive-services/speech-services/)
+[**Speech Services**](https://azure.microsoft.com/products/ai-services/ai-speech)
- Speech-to-text: transcribes audio streams ([Scala](https://mmlspark.blob.core.windows.net/docs/0.10.2/scala/com/microsoft/azure/synapse/ml/cognitive/SpeechToText.html), [Python](https://mmlspark.blob.core.windows.net/docs/0.10.2/pyspark/synapse.ml.cognitive.html#module-synapse.ml.cognitive.SpeechToText))
- Conversation Transcription: transcribes audio streams into live transcripts with identified speakers. ([Scala](https://mmlspark.blob.core.windows.net/docs/0.10.2/scala/com/microsoft/azure/synapse/ml/cognitive/ConversationTranscription.html), [Python](https://mmlspark.blob.core.windows.net/docs/0.10.2/pyspark/synapse.ml.cognitive.html#module-synapse.ml.cognitive.ConversationTranscription))
- Text to Speech: Converts text to realistic audio ([Scala](https://mmlspark.blob.core.windows.net/docs/0.10.2/scala/com/microsoft/azure/synapse/ml/cognitive/TextToSpeech.html), [Python](https://mmlspark.blob.core.windows.net/docs/0.10.2/pyspark/synapse.ml.cognitive.html#module-synapse.ml.cognitive.TextToSpeech))

### Language
-[**Text Analytics**](https://azure.microsoft.com/services/cognitive-services/text-analytics/)
+[**Text Analytics**](https://azure.microsoft.com/products/ai-services/text-analytics)
- Language detection: detects language of the input text ([Scala](https://mmlspark.blob.core.windows.net/docs/0.10.2/scala/com/microsoft/azure/synapse/ml/cognitive/LanguageDetector.html), [Python](https://mmlspark.blob.core.windows.net/docs/0.10.2/pyspark/synapse.ml.cognitive.html#module-synapse.ml.cognitive.LanguageDetector))
- Key phrase extraction: identifies the key talking points in the input text ([Scala](https://mmlspark.blob.core.windows.net/docs/0.10.2/scala/com/microsoft/azure/synapse/ml/cognitive/KeyPhraseExtractor.html), [Python](https://mmlspark.blob.core.windows.net/docs/0.10.2/pyspark/synapse.ml.cognitive.html#module-synapse.ml.cognitive.KeyPhraseExtractor))
- Named entity recognition: identifies known entities and general named entities in the input text ([Scala](https://mmlspark.blob.core.windows.net/docs/0.10.2/scala/com/microsoft/azure/synapse/ml/cognitive/NER.html), [Python](https://mmlspark.blob.core.windows.net/docs/0.10.2/pyspark/synapse.ml.cognitive.html#module-synapse.ml.cognitive.NER))
The tutorial, [Pre-requisites for using Azure AI services in Azure Synapse](tuto
### Translation
-[**Translator**](https://azure.microsoft.com/services/cognitive-services/translator/)
+[**Translator**](https://azure.microsoft.com/products/ai-services/translator)
- Translate: Translates text. ([Scala](https://mmlspark.blob.core.windows.net/docs/0.10.2/scala/com/microsoft/azure/synapse/ml/cognitive/Translate.html), [Python](https://mmlspark.blob.core.windows.net/docs/0.10.2/pyspark/synapse.ml.cognitive.html#module-synapse.ml.cognitive.Translate))
- Transliterate: Converts text in one language from one script to another script. ([Scala](https://mmlspark.blob.core.windows.net/docs/0.10.2/scala/com/microsoft/azure/synapse/ml/cognitive/Transliterate.html), [Python](https://mmlspark.blob.core.windows.net/docs/0.10.2/pyspark/synapse.ml.cognitive.html#module-synapse.ml.cognitive.Transliterate))
- Detect: Identifies the language of a piece of text. ([Scala](https://mmlspark.blob.core.windows.net/docs/0.10.2/scala/com/microsoft/azure/synapse/ml/cognitive/Detect.html), [Python](https://mmlspark.blob.core.windows.net/docs/0.10.2/pyspark/synapse.ml.cognitive.html#module-synapse.ml.cognitive.Detect))
The tutorial, [Pre-requisites for using Azure AI services in Azure Synapse](tuto
- Document Translation: Translates documents across all supported languages and dialects while preserving document structure and data format. ([Scala](https://mmlspark.blob.core.windows.net/docs/0.10.2/scala/com/microsoft/azure/synapse/ml/cognitive/DocumentTranslator.html), [Python](https://mmlspark.blob.core.windows.net/docs/0.10.2/pyspark/synapse.ml.cognitive.html#module-synapse.ml.cognitive.DocumentTranslator))

### Document Intelligence
-[**Document Intelligence**](https://azure.microsoft.com/services/form-recognizer/) (formerly known as Azure AI Document Intelligence)
+[**Document Intelligence**](https://azure.microsoft.com/products/ai-services/ai-document-intelligence) (formerly known as Form Recognizer)
- Analyze Layout: Extract text and layout information from a given document. ([Scala](https://mmlspark.blob.core.windows.net/docs/0.10.2/scala/com/microsoft/azure/synapse/ml/cognitive/AnalyzeLayout.html), [Python](https://mmlspark.blob.core.windows.net/docs/0.10.2/pyspark/synapse.ml.cognitive.html#module-synapse.ml.cognitive.AnalyzeLayout))
- Analyze Receipts: Detects and extracts data from receipts using optical character recognition (OCR) and our receipt model, enabling you to easily extract structured data from receipts such as merchant name, merchant phone number, transaction date, transaction total, and more. ([Scala](https://mmlspark.blob.core.windows.net/docs/0.10.2/scala/com/microsoft/azure/synapse/ml/cognitive/AnalyzeReceipts.html), [Python](https://mmlspark.blob.core.windows.net/docs/0.10.2/pyspark/synapse.ml.cognitive.html#module-synapse.ml.cognitive.AnalyzeReceipts))
- Analyze Business Cards: Detects and extracts data from business cards using optical character recognition (OCR) and our business card model, enabling you to easily extract structured data from business cards such as contact names, company names, phone numbers, emails, and more. ([Scala](https://mmlspark.blob.core.windows.net/docs/0.10.2/scala/com/microsoft/azure/synapse/ml/cognitive/AnalyzeBusinessCards.html), [Python](https://mmlspark.blob.core.windows.net/docs/0.10.2/pyspark/synapse.ml.cognitive.html#module-synapse.ml.cognitive.AnalyzeBusinessCards))
The tutorial, [Pre-requisites for using Azure AI services in Azure Synapse](tuto
- List Custom Models: Get information about all custom models. ([Scala](https://mmlspark.blob.core.windows.net/docs/0.10.2/scala/com/microsoft/azure/synapse/ml/cognitive/ListCustomModels.html), [Python](https://mmlspark.blob.core.windows.net/docs/0.10.2/pyspark/synapse.ml.cognitive.html#module-synapse.ml.cognitive.ListCustomModels))

### Decision
-[**Anomaly Detector**](https://azure.microsoft.com/services/cognitive-services/anomaly-detector/)
+[**Anomaly Detector**](https://azure.microsoft.com/products/ai-services/ai-anomaly-detector)
- Anomaly status of latest point: generates a model using preceding points and determines whether the latest point is anomalous ([Scala](https://mmlspark.blob.core.windows.net/docs/0.10.2/scala/com/microsoft/azure/synapse/ml/cognitive/DetectLastAnomaly.html), [Python](https://mmlspark.blob.core.windows.net/docs/0.10.2/pyspark/synapse.ml.cognitive.html#module-synapse.ml.cognitive.DetectLastAnomaly))
- Find anomalies: generates a model using an entire series and finds anomalies in the series ([Scala](https://mmlspark.blob.core.windows.net/docs/0.10.2/scala/com/microsoft/azure/synapse/ml/cognitive/DetectAnomalies.html), [Python](https://mmlspark.blob.core.windows.net/docs/0.10.2/pyspark/synapse.ml.cognitive.html#module-synapse.ml.cognitive.DetectAnomalies))

### Search
-- [Bing Image search](https://azure.microsoft.com/services/cognitive-services/bing-image-search-api/) ([Scala](https://mmlspark.blob.core.windows.net/docs/0.10.2/scala/com/microsoft/azure/synapse/ml/cognitive/BingImageSearch.html), [Python](https://mmlspark.blob.core.windows.net/docs/0.10.2/pyspark/synapse.ml.cognitive.html#module-synapse.ml.cognitive.BingImageSearch))
+- [Bing Image search](https://www.microsoft.com/bing/apis/bing-image-search-api) ([Scala](https://mmlspark.blob.core.windows.net/docs/0.10.2/scala/com/microsoft/azure/synapse/ml/cognitive/BingImageSearch.html), [Python](https://mmlspark.blob.core.windows.net/docs/0.10.2/pyspark/synapse.ml.cognitive.html#module-synapse.ml.cognitive.BingImageSearch))
- [Azure Cognitive Search](../../search/search-what-is-azure-search.md) ([Scala](https://mmlspark.blob.core.windows.net/docs/0.10.2/scala/https://docsupdatetracker.net/index.html#com.microsoft.azure.synapse.ml.cognitive.search.AzureSearchWriter$), [Python](https://mmlspark.blob.core.windows.net/docs/0.10.2/pyspark/synapse.ml.cognitive.html#module-synapse.ml.cognitive.AzureSearchWriter))

## Prerequisites
synapse-analytics Tutorial Horovod Tensorflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/machine-learning/tutorial-horovod-tensorflow.md
Title: 'Tutorial: Distributed training with Horovod and Tensorflow'
-description: Tutorial on how to run distributed training with the Horovod Runner and Tensorflow
+ Title: 'Tutorial: Distributed training with Horovod and TensorFlow'
+description: Tutorial on how to run distributed training with the Horovod Runner and TensorFlow
-# Tutorial: Distributed Training with Horovod Runner and Tensorflow (Preview)
+# Tutorial: Distributed Training with Horovod Runner and TensorFlow (Preview)
[Horovod](https://github.com/horovod/horovod) is a distributed training framework for libraries like TensorFlow and PyTorch. With Horovod, users can scale up an existing training script to run on hundreds of GPUs in just a few lines of code.
-Within Azure Synapse Analytics, users can quickly get started with Horovod using the default Apache Spark 3 runtime.For Spark ML pipeline applications using Tensorflow, users can use ```HorovodRunner```. This notebook uses an Apache Spark dataframe to perform distributed training of a distributed neural network (DNN) model on MNIST dataset. This tutorial leverages Tensorflow and the ```HorovodRunner``` to run the training process.
+Within Azure Synapse Analytics, users can quickly get started with Horovod using the default Apache Spark 3 runtime. For Spark ML pipeline applications using TensorFlow, users can use ```HorovodRunner```. This notebook uses an Apache Spark dataframe to perform distributed training of a deep neural network (DNN) model on the MNIST dataset. This tutorial leverages TensorFlow and the ```HorovodRunner``` to run the training process.
## Prerequisites
Within Azure Synapse Analytics, users can quickly get started with Horovod using
## Configure the Apache Spark session
-At the start of the session, we will need to configure a few Apache Spark settings. In most cases, we only needs to set the ```numExecutors``` and ```spark.rapids.memory.gpu.reserve```. For very large models, users may also need to configure the ```spark.kryoserializer.buffer.max``` setting. For Tensorflow models, users will need to set the ```spark.executorEnv.TF_FORCE_GPU_ALLOW_GROWTH``` to be true.
+At the start of the session, we will need to configure a few Apache Spark settings. In most cases, we only need to set the ```numExecutors``` and ```spark.rapids.memory.gpu.reserve```. For very large models, users may also need to configure the ```spark.kryoserializer.buffer.max``` setting. For TensorFlow models, users will need to set ```spark.executorEnv.TF_FORCE_GPU_ALLOW_GROWTH``` to true.
In the example below, you can see how the Spark configurations can be passed with the ```%%configure``` command. The detailed meaning of each parameter is explained in the [Apache Spark configuration documentation](https://spark.apache.org/docs/latest/configuration.html). The values provided below are the suggested, best practice values for Azure Synapse GPU-large pools.
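For illustration, a minimal `%%configure` cell might look like the following sketch; the values shown here are placeholders for your own pool, not the documented best-practice settings.

```
%%configure -f
{
    "numExecutors": 3,
    "conf": {
        "spark.rapids.memory.gpu.reserve": "10g",
        "spark.kryoserializer.buffer.max": "2000m",
        "spark.executorEnv.TF_FORCE_GPU_ALLOW_GROWTH": "true"
    }
}
```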
def get_dataset(rank=0, size=1):
## Define DNN model
-Once we have finished processing our dataset, we can now define our Tensorflow model. The same code could also be used to train a single-node Tensorflow model.
+Once we have finished processing our dataset, we can now define our TensorFlow model. The same code could also be used to train a single-node TensorFlow model.
```python
# Define the TensorFlow model without any Horovod-specific parameters
def get_model():
## Define a training function for a single node
-First, we will train our Tensorflow model on the driver node of the Apache Spark pool. Once we have finished the training process, we will evaluate the model and print the loss and accuracy scores.
+First, we will train our TensorFlow model on the driver node of the Apache Spark pool. Once we have finished the training process, we will evaluate the model and print the loss and accuracy scores.
```python
To ensure the Spark instance is shut down, end any connected sessions(notebooks)
## Next steps

* [Check out Synapse sample notebooks](https://github.com/Azure-Samples/Synapse/tree/main/MachineLearning)
-* [Learn more about GPU-enabled Apache Spark pools](../spark/apache-spark-gpu-concept.md)
+* [Learn more about GPU-enabled Apache Spark pools](../spark/apache-spark-gpu-concept.md)
synapse-analytics System Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/partner/system-integration.md
This article highlights Microsoft system integration partner companies building
| :::image type="content" source="./media/system-integration/blue-granite-logo.png" alt-text="The logo of Blue Granite."::: |**Blue Granite**<br>The BlueGranite Catalyst for Analytics is an engagement approach that features their "think big, but start small" philosophy. Starting with collaborative envisioning and strategy sessions, Blue Granite works with clients to discover, create, and realize the value of new modern data and analytics solutions, using the latest technologies on the Microsoft platform.|[Blue Granite](https://www.blue-granite.com/)<br>|
| :::image type="content" source="./media/system-integration/capax-global-logo.png" alt-text="The logo of Capax Global."::: |**Capax Global**<br>We improve your business by making better use of information you already have. Building custom solutions that align to your business goals, and setting you up for long-term success. We combine well-established patterns and practices with technology while using our team's wide range of industry and commercial software development experience. We share a passion for technology, innovation, and client satisfaction. Our pride for what we do drives the success of our projects and is fundamental to why people partner with us.|[Capax Global](https://www.capaxglobal.com/)<br>|
| :::image type="content" source="./media/system-integration/coeo-logo.png" alt-text="The logo of Coeo."::: |**Coeo**<br>Coeo's team includes cloud consultants with deep expertise in Azure databases, and BI consultants dedicated to providing flexible and scalable analytic solutions. Coeo can help you move to a hybrid or full Azure solution.|[Coeo](https://www.coeo.com/analytics/)<br>|
-| :::image type="content" source="./media/system-integration/cognizant-logo.png" alt-text="The logo of Cognizant."::: |**Cognizant**<br>As a Microsoft strategic partner, Cognizant has the consulting skills and experience to help customers make the journey to the cloud. For each client project, Cognizant uses its strong partnership with Microsoft to maximize customer benefits from the Azure architecture.|[Cognizant](https://mbg.cognizant.com/technologies-capabilities/microsoft-azure/)<br>|
+| :::image type="content" source="./media/system-integration/cognizant-logo.png" alt-text="The logo of Cognizant."::: |**Cognizant**<br>As a Microsoft strategic partner, Cognizant has the consulting skills and experience to help customers make the journey to the cloud. For each client project, Cognizant uses its strong partnership with Microsoft to maximize customer benefits from the Azure architecture.|[Cognizant](https://www.cognizant.com/about-cognizant/partners/microsoft)<br>|
| :::image type="content" source="./media/system-integration/neal-analytics-logo.png" alt-text="The logo of Neal Analytics."::: |**Neal Analytics**<br>Neal Analytics helps companies navigate their digital transformation journey in converting data into valuable assets and a competitive advantage. With our machine learning and data engineering expertise, we use data to drive margin increases and profitable analytics projects. Comprised of consultants specializing in Data Science, Business Intelligence, Azure AI services, practical AI, Data Management, and IoT, Neal Analytics is trusted to solve unique business problems and optimize operations across industries.|[Neal Analytics](https://fractal.ai/)<br>|
| :::image type="content" source="./media/system-integration/pragmatic-works-logo.png" alt-text="The logo of Pragmatic Works."::: |**Pragmatic Works**<br>Pragmatic Works can help you capitalize on the value of your data by empowering more users and applications on the same dataset. We kickstart, accelerate, and maintain your cloud environment with a range of solutions that fit your business needs.|[Pragmatic Works](https://www.pragmaticworks.com/)<br>|
synapse-analytics Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/policy-reference.md
Title: Built-in policy definitions description: Lists Azure Policy built-in policy definitions for Azure Synapse Analytics. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2023 Last updated : 08/30/2023
synapse-analytics Quickstart Apache Spark Notebook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/quickstart-apache-spark-notebook.md
To ensure the Spark instance is shut down, end any connected sessions(notebooks)
In this quickstart, you learned how to create a serverless Apache Spark pool and run a basic Spark SQL query.

- [Azure Synapse Analytics](overview-what-is.md)
-- [.NET for Apache Spark documentation](/dotnet/spark)
+- [.NET for Apache Spark documentation](/previous-versions/dotnet/spark/what-is-apache-spark-dotnet)
synapse-analytics Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Synapse Analytics description: Lists Azure Policy Regulatory Compliance controls available for Azure Synapse Analytics. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/03/2023 Last updated : 08/25/2023
synapse-analytics Gateway Ip Addresses https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/security/gateway-ip-addresses.md
The table below lists the individual Gateway IP addresses and also Gateway IP address ranges per region.
-Periodically, we will retire Gateways using old hardware and migrate the traffic to new Gateways as per the process outlined at [Azure SQL Database traffic migration to newer Gateways](https://learn.microsoft.com/azure/azure-sql/database/gateway-migration?view=azuresql&tabs=in-progress-ip). We strongly encourage customers to use the **Gateway IP address subnets** in order to not be impacted by this activity in a region.
+Periodically, we will retire Gateways using old hardware and migrate the traffic to new Gateways as per the process outlined at [Azure SQL Database traffic migration to newer Gateways](/azure/azure-sql/database/gateway-migration?view=azuresql&tabs=in-progress-ip&preserve-view=true). We strongly encourage customers to use the **Gateway IP address subnets** in order to not be impacted by this activity in a region.
> [!IMPORTANT]
> - Logins for SQL Database or dedicated SQL pools (formerly SQL DW) in Azure Synapse can land on **any of the Gateways in a region**. For consistent connectivity to SQL Database or dedicated SQL pools (formerly SQL DW) in Azure Synapse, allow network traffic to and from **ALL** Gateway IP addresses and Gateway IP address subnets for the region.
synapse-analytics How To Set Up Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/security/how-to-set-up-access-control.md
This document uses standard names to simplify instructions. Replace them with na
| **Container** | `container1` | The container in STG1 that the workspace will use by default. |
| **Active directory tenant** | `contoso` | The Active Directory tenant name. |
-## STEP 1: Set up security groups
+## Step 1: Set up security groups
>[!Note]
>During the preview, you were encouraged to create security groups and to map them to Azure Synapse **Synapse SQL Administrator** and **Synapse Apache Spark Administrator** roles. With the introduction of new finer-grained Synapse RBAC roles and scopes, you are now encouraged to use newer options to control access to your workspace. They give you greater configuration flexibility and they acknowledge that developers often use a mix of SQL and Spark to create analytics applications. So developers may need access to individual resources rather than an entire workspace. [Learn more](./synapse-workspace-synapse-rbac.md) about Synapse RBAC.
These five groups are sufficient for a basic setup. Later, you can add security
>[!Tip]
>Individual Synapse users can use Azure Active Directory in the Azure portal to view their group memberships. This allows them to determine which roles they've been granted.
-## STEP 2: Prepare your ADLS Gen2 storage account
+## Step 2: Prepare your ADLS Gen2 storage account
Synapse workspaces use default storage containers for:

- Storage of backing data files for Spark tables
Identify the following information about your storage:
![Add role assignment page in Azure portal.](../../../includes/role-based-access-control/media/add-role-assignment-page.png)
-## STEP 3: Create and configure your Synapse workspace
+## Step 3: Create and configure your Synapse workspace
In Azure portal, create a Synapse workspace:
In Azure portal, create a Synapse workspace:
- Assign the **Synapse Contributor** role to `workspace1_SynapseContributors`
- Assign the **Synapse Compute Operator** role to `workspace1_SynapseComputeOperators`
-## STEP 4: Grant the workspace MSI access to the default storage container
+## Step 4: Grant the workspace MSI access to the default storage container
To run pipelines and perform system tasks, Azure Synapse requires managed service identity (MSI) to have access to `container1` in the default ADLS Gen2 account, for the workspace. For more information, see [Azure Synapse workspace managed identity](../../data-factory/data-factory-service-identity.md?context=/azure/synapse-analytics/context/context&tabs=synapse-analytics).
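The article assigns this access in the Azure portal; as a rough equivalent, an Azure CLI sketch might look like the following (the object ID, subscription, resource group, and storage account name are placeholders you must supply):

```azurecli
az role assignment create \
  --assignee-object-id <workspace-msi-object-id> \
  --assignee-principal-type ServicePrincipal \
  --role "Storage Blob Data Contributor" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<stg1>/blobServices/default/containers/container1"
```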
To run pipelines and perform system tasks, Azure Synapse requires managed servic
![Add role assignment page in Azure portal.](../../../includes/role-based-access-control/media/add-role-assignment-page.png)
-## STEP 5: Grant Synapse administrators an Azure Contributor role for the workspace
+## Step 5: Grant Synapse administrators an Azure Contributor role for the workspace
To create SQL pools, Apache Spark pools and Integration runtimes, users need an Azure Contributor role for the workspace, at minimum. A Contributor role also allows users to manage resources, including pausing and scaling. To use Azure portal or Synapse Studio to create SQL pools, Apache Spark pools and Integration runtimes, you need a Contributor role at the resource group level.
To create SQL pools, Apache Spark pools and Integration runtimes, users need an
![Add role assignment page in Azure portal.](../../../includes/role-based-access-control/media/add-role-assignment-page.png)
-## STEP 6: Assign an SQL Active Directory Admin role
+## Step 6: Assign an SQL Active Directory Admin role
The *workspace creator* is automatically assigned as *SQL Active Directory Admin* for the workspace. Only a single user or a group can be granted this role. In this step, you assign the SQL Active Directory Admin for the workspace to the `workspace1_SQLAdmins` security group. This gives the group highly privileged admin access to all SQL pools and databases in the workspace.
The *workspace creator* is automatically assigned as *SQL Active Directory Admin
>[!Note]
>Step 6 is optional. You might choose to grant the `workspace1_SQLAdmins` group a less privileged role. To assign `db_owner` or other SQL roles, you must run scripts on each SQL database.
-## STEP 7: Grant access to SQL pools
+## Step 7: Grant access to SQL pools
The Synapse Administrator is by default given the SQL `db_owner` role for serverless SQL pools in the workspace as well.
Access to SQL pools for other users is controlled by SQL permissions. Assigning
> [!TIP]
>You can grant access to all SQL databases by taking the following steps for **each** SQL pool. Section [Configure-Workspace-scoped permissions](#configure-workspace-scoped-permissions) is an exception to the rule and it allows you to assign a user a sysadmin role at the workspace level.
-### STEP 7.1: Serverless SQL pool, Built-in
+### Step 7a: Serverless SQL pool, Built-in
You can use the script examples in this section to give users permission to access an individual database or all databases in the serverless SQL pool, `Built-in`.
CREATE LOGIN [alias@domain.com] FROM EXTERNAL PROVIDER;
ALTER SERVER ROLE sysadmin ADD MEMBER [alias@domain.com];
```
-### STEP 7.2: configure Dedicated SQL pools
+### Step 7b: Configure dedicated SQL pools
You can grant access to a **single** dedicated SQL pool database. Use these steps in the Azure Synapse SQL script editor:
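As a minimal sketch of the kind of statements involved (the sign-in name and role here are placeholders, not the article's exact script):

```sql
-- Create a database user for an Azure AD identity (placeholder sign-in name)
CREATE USER [alias@domain.com] FROM EXTERNAL PROVIDER;

-- Grant a database role in this dedicated SQL pool database
EXEC sp_addrolemember 'db_datareader', 'alias@domain.com';
```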
You can grant access to a **single**, dedicated, SQL pool database. Use these st
After you have created your users, you can run queries to confirm that serverless SQL pools can query storage accounts.
-## STEP 8: Add users to security groups
+## Step 8: Add users to security groups
The initial configuration for your access control system is now complete. To manage access, you can now add users to and remove users from the security groups you've set up. You can manually assign users to Azure Synapse roles, but this sets permissions inconsistently. Instead, only add users to or remove users from your security groups.
-## STEP 9: Network security
+## Step 9: Network security
As a final step to secure your workspace, you should secure network access, using the [workspace firewall](./synapse-workspace-ip-firewall.md).
As a final step to secure your workspace, you should secure network access, usin
- Access from public networks can be controlled by enabling the [public network access feature](connectivity-settings.md#public-network-access) or the [workspace firewall](./synapse-workspace-ip-firewall.md).
- Alternatively, you can connect to your workspace using a [managed private endpoint](synapse-workspace-managed-private-endpoints.md) and [private Link](/azure/azure-sql/database/private-endpoint-overview). Azure Synapse workspaces without the [Azure Synapse Analytics Managed Virtual Network](synapse-workspace-managed-vnet.md) do not have the ability to connect via managed private endpoints.
-## STEP 10: Completion
+## Step 10: Completion
Your workspace is now fully configured and secured.
synapse-analytics Synapse Workspace Synapse Rbac Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/security/synapse-workspace-synapse-rbac-roles.md
The following table describes the built-in roles and the scopes at which they ca
|Synapse Administrator |Full Synapse access to SQL pools, Data Explorer pools, Apache Spark pools, and Integration runtimes. Includes create, read, update, and delete access to all published code artifacts. Includes Compute Operator, Linked Data Manager, and Credential User permissions on the workspace system identity credential. Includes assigning Synapse RBAC roles. In addition to Synapse Administrator, Azure Owners can also assign Synapse RBAC roles. Azure permissions are required to create, delete, and manage compute resources. </br></br>_Can read and write artifacts</br> Can do all actions on Spark activities.</br> Can view Spark pool logs</br> Can view saved notebook and pipeline output </br> Can use the secrets stored by linked services or credentials</br>Can assign and revoke Synapse RBAC roles at current scope_|Workspace </br> Spark pool<br/>Integration runtime </br>Linked service</br>Credential |
|Synapse Apache Spark Administrator</br>|Full Synapse access to Apache Spark Pools. Create, read, update, and delete access to published Spark job definitions, notebooks and their outputs, and to libraries, linked services, and credentials.  Includes read access to all other published code artifacts. Doesn't include permission to use credentials and run pipelines. Doesn't include granting access. </br></br>_Can do all actions on Spark artifacts</br>Can do all actions on Spark activities_|Workspace</br>Spark pool|
|Synapse SQL Administrator|Full Synapse access to serverless SQL pools. Create, read, update, and delete access to published SQL scripts, credentials, and linked services.  Includes read access to all other published code artifacts.  Doesn't include permission to use credentials and run pipelines. Doesn't include granting access. </br></br>*Can do all actions on SQL scripts<br/>Can connect to SQL serverless endpoints with SQL `db_datareader`, `db_datawriter`, `connect`, and `grant` permissions*|Workspace|
-|Synapse Contributor|Full Synapse access to Apache Spark pools and Integration runtimes. Includes create, read, update, and delete access to all published code artifacts and their outputs, including credentials and linked services.  Includes compute operator permissions. Doesn't include permission to use credentials and run pipelines. Doesn't include granting access. </br></br>_Can read and write artifacts</br>Can view saved notebook and pipeline output</br>Can do all actions on Spark activities</br>Can view Spark pool logs_|Workspace </br> Spark pool<br/> Integration runtime|
-|Synapse Artifact Publisher|Create, read, update, and delete access to published code artifacts and their outputs. Doesn't include permission to run code or pipelines, or to grant access. </br></br>_Can read published artifacts and publish artifacts</br>Can view saved notebook, Spark job, and pipeline output_|Workspace
+|Synapse Contributor|Full Synapse access to Apache Spark pools and Integration runtimes. Includes create, read, update, and delete access to all published code artifacts and their outputs, including scheduled pipelines, credentials and linked services.  Includes compute operator permissions. Doesn't include permission to use credentials and run pipelines. Doesn't include granting access. </br></br>_Can read and write artifacts</br>Can view saved notebook and pipeline output</br>Can do all actions on Spark activities</br>Can view Spark pool logs_|Workspace </br> Spark pool<br/> Integration runtime|
+|Synapse Artifact Publisher|Create, read, update, and delete access to published code artifacts and their outputs, including scheduled pipelines. Doesn't include permission to run code or pipelines, or to grant access. </br></br>_Can read published artifacts and publish artifacts</br>Can view saved notebook, Spark job, and pipeline output_|Workspace
|Synapse Artifact User|Read access to published code artifacts and their outputs. Can create new artifacts but can't publish changes or run code without additional permissions.|Workspace|
|Synapse Compute Operator |Submit Spark jobs and notebooks and view logs.  Includes canceling Spark jobs submitted by any user. Requires additional use credential permissions on the workspace system identity to run pipelines, view pipeline runs and outputs. </br></br>_Can submit and cancel jobs, including jobs submitted by others</br>Can view Spark pool logs_|Workspace</br>Spark pool</br>Integration runtime|
|Synapse Monitoring Operator |Read published code artifacts, including logs and outputs for pipeline runs and completed notebooks. Includes ability to list and view details of Apache Spark pools, Data Explorer pools, and Integration runtimes. Requires additional permissions to run/cancel pipelines, Spark notebooks, and Spark jobs.|Workspace |
synapse-analytics Synapse Workspace Understand What Role You Need https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/security/synapse-workspace-understand-what-role-you-need.md
You can pause or scale a dedicated SQL pool, configure a Spark pool, or an integ
With access to Synapse Studio, you can create new code artifacts, such as SQL scripts, KQL scripts, notebooks, Spark jobs, linked services, pipelines, dataflows, triggers, and credentials. These artifacts can be published or saved with additional permissions.
-If you're a Synapse Artifact User, Synapse Artifact Publisher, Synapse Contributor, or Synapse Administrator you can list, open, and edit already published code artifacts.
+If you're a Synapse Artifact User, Synapse Artifact Publisher, Synapse Contributor, or Synapse Administrator you can list, open, and edit already published code artifacts, including scheduled pipelines.
### Execute your code
synapse-analytics Apache Spark Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-concepts.md
Spark instances are created when you connect to a Spark pool, create a session,
When you submit a second job, if there's capacity in the pool and the existing Spark instance has capacity, the existing instance processes the job. Otherwise, if capacity is available at the pool level, then a new Spark instance will be created.
-Billing for the instances starts when the Azure VM(s) starts. Billing for the Spark pool instances stops when pool instances changes to terminating. For more details on how Azure VMs are started, de-allocated see [States and billing status of Azure Virtual Machines](https://learn.microsoft.com/azure/virtual-machines/states-billing)
+Billing for the instances starts when the Azure VM(s) starts. Billing for the Spark pool instances stops when pool instances change to terminating. For more information on how Azure VMs are started and deallocated, see [States and billing status of Azure Virtual Machines](/azure/virtual-machines/states-billing).
## Examples
synapse-analytics Apache Spark Development Using Notebooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-development-using-notebooks.md
We provide rich operations to develop notebooks:
+ [Collapse a cell output](#collapse-a-cell-output)
+ [Notebook outline](#notebook-outline)
+> [!NOTE]
+>
+> In the notebooks, there is a SparkSession automatically created for you, stored in a variable called `spark`. Also there is a variable for SparkContext which is called `sc`. Users can access these variables directly and should not change the values of these variables.
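For example, a quick sanity check of these pre-created variables might look like this sketch:

```python
# `spark` (SparkSession) and `sc` (SparkContext) are pre-created by Synapse notebooks
print(spark.version)       # version of the attached Spark pool
print(sc.applicationId)    # ID of the current Spark application
```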
+
<h3 id="add-a-cell">Add a cell</h3>

There are multiple ways to add a new cell to your notebook.
Select the **Undo** / **Redo** button or press **Z** / **Shift+Z** to revoke the
![Screenshot of Synapse undo cells of aznb](./media/apache-spark-development-using-notebooks/synapse-undo-cells-aznb.png)

Supported undo cell operations:
-+ Insert/Delete cell: You could revoke the delete operations by selecting **Undo**, the text content will be kept along with the cell.
++ Insert/Delete cell: You can revoke the delete operations by selecting **Undo**; the text content is kept along with the cell.
+ Reorder cell.
+ Toggle parameter.
+ Convert between Code cell and Markdown cell.
Select the **Cancel All** button to cancel the running cells or cells waiting in
### Notebook reference
-You can use ```%run <notebook path>``` magic command to reference another notebook within current notebook's context. All the variables defined in the reference notebook are available in the current notebook. ```%run``` magic command supports nested calls but not support recursive calls. You will receive an exception if the statement depth is larger than **five**.
+You can use the ```%run <notebook path>``` magic command to reference another notebook within the current notebook's context. All the variables defined in the reference notebook are available in the current notebook. The ```%run``` magic command supports nested calls but does not support recursive calls. You receive an exception if the statement depth is larger than **five**.
Example: ``` %run /<path>/Notebook1 { "parameterInt": 1, "parameterFloat": 2.5, "parameterBool": true, "parameterString": "abc" } ```.
Notebook reference works in both interactive mode and Synapse pipeline.
### Variable explorer
-Synapse notebook provides a built-in variables explorer for you to see the list of the variables name, type, length, and value in the current Spark session for PySpark (Python) cells. More variables will show up automatically as they are defined in the code cells. Clicking on each column header will sort the variables in the table.
+Synapse notebook provides a built-in variables explorer for you to see the list of variables, with their name, type, length, and value, in the current Spark session for PySpark (Python) cells. More variables show up automatically as they are defined in the code cells. Clicking on each column header sorts the variables in the table.
You can select the **Variables** button on the notebook command bar to open or hide the variable explorer.
Parameterized session configuration allows you to replace the value in %%configu
}
```
-Notebook will use default value if run a notebook in interactive mode directly or no parameter that match "activityParameterName" is given from Pipeline Notebook activity.
+The notebook uses the default value if you run the notebook in interactive mode directly, or if the pipeline Notebook activity doesn't pass a parameter that matches "activityParameterName".
During the pipeline run mode, you can configure pipeline Notebook activity settings as below:

![Screenshot of parameterized session configuration](./media/apache-spark-development-using-notebooks/parameterized-session-config.png)
You can access data in the primary storage account directly. There's no need to
## IPython Widgets
-Widgets are eventful Python objects that have a representation in the browser, often as a control like a slider, textbox etc. IPython Widgets only works in Python environment, it's not supported in other languages (e.g. Scala, SQL, C#) yet.
+Widgets are eventful Python objects that have a representation in the browser, often as a control like a slider or textbox. IPython Widgets only work in the Python environment; they're not supported in other languages (for example, Scala, SQL, C#) yet.
### To use IPython Widget

1. You need to import the `ipywidgets` module first to use the Jupyter Widget framework.
Widgets are eventful Python objects that have a representation in the browser, o
slider
```
-3. Run the cell, the widget will display at the output area.
+3. Run the cell; the widget displays in the output area.
![Screenshot of ipython widgets slider](./media/apache-spark-development-using-notebooks/ipython-widgets-slider.png)
-4. You can use multiple `display()` calls to render the same widget instance multiple times, but they will remain in sync with each other.
+4. You can use multiple `display()` calls to render the same widget instance multiple times, but they remain in sync with each other.
```python
slider = widgets.IntSlider()
Widgets are eventful Python objects that have a representation in the browser, o
|`widgets.jslink()`|You can use the `widgets.link()` function to link two similar widgets.|
|`FileUpload` widget| Not supported yet.|
-2. Global `display` function provided by Synapse does not support displaying multiple widgets in 1 call (i.e. `display(a, b)`), which is different from IPython `display` function.
+2. Global `display` function provided by Synapse does not support displaying multiple widgets in one call (that is, `display(a, b)`), which is different from IPython `display` function.
3. If you close a notebook that contains an IPython Widget, you will not be able to see or interact with it until you execute the corresponding cell again.
Available cell magics:
<h2 id="reference-unpublished-notebook">Reference unpublished notebook</h2>
-Reference unpublished notebook is helpful when you want to debug "locally", when enabling this feature, notebook run will fetch the current content in web cache, if you run a cell including a reference notebooks statement, you will reference the presenting notebooks in the current notebook browser instead of a saved versions in cluster, that means the changes in your notebook editor can be referenced immediately by other notebooks without having to be published(Live mode) or committed(Git mode), by leveraging this approach you can easily avoid common libraries getting polluted during developing or debugging process.
+Reference unpublished notebook is helpful when you want to debug "locally". When you enable this feature, a notebook run fetches the current content from the web cache. If you run a cell that includes a reference notebook statement, you reference the presenting notebooks in the current notebook browser instead of the saved versions in the cluster. That means the changes in your notebook editor can be referenced immediately by other notebooks without having to be published (Live mode) or committed (Git mode). With this approach, you can easily avoid common libraries getting polluted during the developing or debugging process.
You can enable Reference unpublished notebook from the Properties panel:
You can reuse your notebook sessions conveniently now without having to start ne
![Screenshot of notebook-manage-sessions](./media/apache-spark-development-using-notebooks/synapse-notebook-manage-sessions.png)
-In the **Active sessions** list you can see the session information and the corresponding notebook that is currently attached to the session. You can operate Detach with notebook, Stop the session, and View in monitoring from here. Moreover, you can easily connect your selected notebook to an active session in the list started from another notebook, the session will be detached from the previous notebook (if it's not idle) then attach to the current one.
+In the **Active sessions** list, you can see the session information and the corresponding notebook that is currently attached to the session. You can operate Detach with notebook, Stop the session, and View in monitoring from here. Moreover, you can easily connect your selected notebook to an active session in the list started from another notebook; the session is detached from the previous notebook (if it's not idle) and then attached to the current one.
![Screenshot of notebook-sessions-list](./media/apache-spark-development-using-notebooks/synapse-notebook-sessions-list.png)
To parameterize your notebook, select the ellipses (...) to access the **more co
-Azure Data Factory looks for the parameters cell and treats this cell as defaults for the parameters passed in at execution time. The execution engine will add a new cell beneath the parameters cell with input parameters in order to overwrite the default values.
+Azure Data Factory looks for the parameters cell and treats this cell as defaults for the parameters passed in at execution time. The execution engine adds a new cell beneath the parameters cell with input parameters in order to overwrite the default values.
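As a hedged illustration, a parameters cell might define defaults like the following; the variable names and values here are hypothetical:

```python
# Hypothetical parameters cell: these defaults are overridden by values
# the pipeline Notebook activity passes in at execution time
input_path = "abfss://data@contosolake.dfs.core.windows.net/raw"
run_date = "2023-09-01"
```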
### Assign parameter values from a pipeline
synapse-analytics Apache Spark Secure Credentials With Tokenlibrary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-secure-credentials-with-tokenlibrary.md
While Azure Synapse Analytics supports a variety of linked service connections (
- Azure Cosmos DB
- Azure Data Explorer
- Azure Database for MySQL
+ - Azure Database for PostgreSQL
- Azure Data Lake Store (Gen1)
- Azure Key Vault
- Azure Machine Learning
While Azure Synapse Analytics supports a variety of linked service connections (
- Azure SQL Data Warehouse (Dedicated and Serverless)
- Azure Storage
- #### mssparkutils.credenials.getToken()
+ #### mssparkutils.credentials.getToken()
When you need an OAuth bearer token to access services directly, you can use the `getToken` method. The following resources are supported:

| Service Name | String literal to be used in API call |
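For instance, a minimal sketch of acquiring a token from a notebook might look like this; the `'Storage'` audience literal is an assumption standing in for one of the supported string literals:

```python
from notebookutils import mssparkutils

# Acquire an OAuth bearer token for Azure Storage ('Storage' assumed to be
# the string literal for that service in the supported-resources table)
token = mssparkutils.credentials.getToken('Storage')
```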
synapse-analytics Spark Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/spark-dotnet.md
Title: Use .NET for Apache Spark
description: Learn about using .NET and Apache Spark to do batch processing, real-time streaming, machine learning, and write ad-hoc queries in Azure Synapse Analytics notebooks. --++ Previously updated : 05/01/2020 Last updated : 05/01/2020

# Use .NET for Apache Spark with Azure Synapse Analytics
-[.NET for Apache Spark](https://dot.net/spark) provides free, [open-source](https://github.com/dotnet/spark), and cross-platform .NET support for Spark.
+[.NET for Apache Spark](https://dot.net/spark) provides free, [open-source](https://github.com/dotnet/spark), and cross-platform .NET support for Spark.
It provides .NET bindings for Spark, which allows you to access Spark APIs through C# and F#. With .NET for Apache Spark, you can also write and execute user-defined functions for Spark written in .NET. The .NET APIs for Spark enable you to access all aspects of Spark DataFrames that help you analyze your data, including Spark SQL, Delta Lake, and Structured Streaming.
You can analyze data with .NET for Apache Spark through Spark batch job definiti
>[!IMPORTANT]
> The [.NET for Apache Spark](/previous-versions/dotnet/spark/what-is-apache-spark-dotnet) is an open-source project under the .NET Foundation that currently requires the .NET 3.1 library, which has reached the out-of-support status. We would like to inform users of Azure Synapse Spark of the removal of the .NET for Apache Spark library in the Azure Synapse Runtime for Apache Spark version 3.3. Users may refer to the [.NET Support Policy](https://dotnet.microsoft.com/platform/support/policy/dotnet-core) for more details on this matter.
>
-> As a result, it will no longer be possible for users to utilize Apache Spark APIs via C# and F#, or execute C# code in notebooks within Synapse or through Apache Spark Job definitions in Synapse. It is important to note that this change affects only Azure Synapse Runtime for Apache Spark 3.3 and above.
->
+> As a result, it will no longer be possible for users to utilize Apache Spark APIs via C# and F#, or execute C# code in notebooks within Synapse or through Apache Spark Job definitions in Synapse. It is important to note that this change affects only Azure Synapse Runtime for Apache Spark 3.3 and above.
+>
> We will continue to support .NET for Apache Spark in all previous versions of the Azure Synapse Runtime according to [their lifecycle stages](runtime-for-apache-spark-lifecycle-and-supportability.md). However, we do not have plans to support .NET for Apache Spark in Azure Synapse Runtime for Apache Spark 3.3 and future versions. We recommend that users with existing workloads written in C# or F# migrate to Python or Scala. Users are advised to take note of this information and plan accordingly. ## Submit batch jobs using the Spark job definition
The required .NET Spark version will be noted in the Synapse Studio interface un
:::image type="content" source="./media/apache-spark-job-definitions/net-spark-workspace-compatibility.png" alt-text="Screenshot that shows properties, including the .NET Spark version.":::

Create your project as a .NET console application that outputs an Ubuntu x86 executable.
-
+
```
<Project Sdk="Microsoft.NET.Sdk">
-
+
<PropertyGroup>
  <OutputType>Exe</OutputType>
  <TargetFramework>netcoreapp3.1</TargetFramework>
</PropertyGroup>
-
+
<ItemGroup>
  <PackageReference Include="Microsoft.Spark" Version="2.1.0" />
</ItemGroup>
-
+
</Project>
```

2. Run the following commands to publish your app. Be sure to replace *mySparkApp* with the path to your app.
-
+
```dotnetcli
cd mySparkApp
dotnet publish -c Release -f netcoreapp3.1 -r ubuntu.18.04-x64
```
-3. Zip the contents of the publish folder, `publish.zip` for example, that was created as a result of Step 1. All the assemblies should be in the root of the ZIP file and there should be no intermediate folder layer. This means when you unzip `publish.zip`, all assemblies are extracted into your current working directory.
+3. Zip the contents of the publish folder, `publish.zip` for example, that was created as a result of Step 1. All the assemblies should be in the root of the ZIP file and there should be no intermediate folder layer. This means when you unzip `publish.zip`, all assemblies are extracted into your current working directory.
**On Windows:**

Using Windows PowerShell or PowerShell 7, create a .zip from the contents of your publish directory.
+
```PowerShell
Compress-Archive publish/* publish.zip -Update
```
The required .NET Spark version will be noted in the Synapse Studio interface un
zip -r publish.zip
```
-## .NET for Apache Spark in Azure Synapse Analytics notebooks
+## .NET for Apache Spark in Azure Synapse Analytics notebooks
-Notebooks are a great option for prototyping your .NET for Apache Spark pipelines and scenarios. You can start working with, understanding, filtering, displaying, and visualizing your data quickly and efficiently.
+Notebooks are a great option for prototyping your .NET for Apache Spark pipelines and scenarios. You can start working with, understanding, filtering, displaying, and visualizing your data quickly and efficiently.
Data engineers, data scientists, business analysts, and machine learning engineers are all able to collaborate over a shared, interactive document. You see immediate results from data exploration, and can visualize your data in the same notebook.
The following features are available when you use .NET for Apache Spark in the A
* Access to the standard C# library (such as System, LINQ, Enumerables, and so on).
* Support for C# 8.0 language features.
* `spark` as a pre-defined variable to give you access to your Apache Spark session.
-* Support for defining [.NET user-defined functions that can run within Apache Spark](/dotnet/spark/how-to-guides/udf-guide). We recommend [Write and call UDFs in .NET for Apache Spark Interactive environments](/dotnet/spark/how-to-guides/dotnet-interactive-udf-issue) for learning how to use UDFs in .NET for Apache Spark Interactive experiences.
+* Support for defining [.NET user-defined functions that can run within Apache Spark](/previous-versions/dotnet/spark/how-to-guides/udf-guide). We recommend [Write and call UDFs in .NET for Apache Spark Interactive environments](/previous-versions/dotnet/spark/how-to-guides/dotnet-interactive-udf-issue) for learning how to use UDFs in .NET for Apache Spark Interactive experiences.
* Support for visualizing output from your Spark jobs using different charts (such as line, bar, or histogram) and layouts (such as single, overlaid, and so on) using the `XPlot.Plotly` library.
* Ability to include NuGet packages into your C# notebook.
+
## Troubleshooting

### `DotNetRunner: null` / `Futures timeout` in Synapse Spark Job Definition Run
+
Synapse Spark Job Definitions on Spark Pools using Spark 2.4 require `Microsoft.Spark` 1.0.0. Clear your `bin` and `obj` directories, and publish the project using 1.0.0.
-### OutOfMemoryError: java heap space at org.apache.spark...
+
+### OutOfMemoryError: java heap space at org.apache.spark
Dotnet Spark 1.0.0 uses a different debug architecture than 1.1.1+. You will have to use 1.0.0 for your published version and 1.1.1+ for local debugging.

## Next steps

* [.NET for Apache Spark documentation](/previous-versions/dotnet/spark/what-is-apache-spark-dotnet)
-* [.NET for Apache Spark Interactive guides](/dotnet/spark/how-to-guides/dotnet-interactive-udf-issue)
+* [.NET for Apache Spark Interactive guides](/previous-versions/dotnet/spark/how-to-guides/dotnet-interactive-udf-issue)
* [Azure Synapse Analytics](https://azure.microsoft.com/services/synapse-analytics/)
* [.NET Interactive](https://devblogs.microsoft.com/dotnet/creating-interactive-net-documentation/)
synapse-analytics Sql Data Warehouse Manage Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-manage-monitor.md
All queries executed on SQL pool are logged to [sys.dm_pdw_exec_requests](/sql/r
Here are steps to follow to investigate query execution plans and times for a particular query.
-### STEP 1: Identify the query you wish to investigate
+### Step 1: Identify the query you wish to investigate
```sql
-- Monitor active queries
FROM sys.dm_pdw_exec_requests
WHERE [label] = 'My Query';
```
-### STEP 2: Investigate the query plan
+### Step 2: Investigate the query plan
Use the Request ID to retrieve the query's distributed SQL (DSQL) plan from [sys.dm_pdw_request_steps](/sql/relational-databases/system-dynamic-management-views/sys-dm-pdw-request-steps-transact-sql?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json&view=azure-sqldw-latest&preserve-view=true)
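For example, a sketch of that lookup (the request ID `'QID####'` is a placeholder for the value returned in Step 1):

```sql
-- Retrieve the DSQL plan steps for a given request (placeholder request ID)
SELECT *
FROM sys.dm_pdw_request_steps
WHERE request_id = 'QID####'
ORDER BY step_index;
```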
To investigate further details about a single step, inspect the `operation_type`
* For **SQL operations** (OnOperation, RemoteOperation, ReturnOperation), proceed with [Step 3](#step-3-investigate-sql-on-the-distributed-databases).
* For **Data Movement operations** (ShuffleMoveOperation, BroadcastMoveOperation, TrimMoveOperation, PartitionMoveOperation, MoveOperation, CopyOperation), proceed with [Step 4](#step-4-investigate-data-movement-on-the-distributed-databases).
-### STEP 3: Investigate SQL on the distributed databases
+### Step 3: Investigate SQL on the distributed databases
Use the Request ID and the Step Index to retrieve details from [sys.dm_pdw_sql_requests](/sql/t-sql/database-console-commands/dbcc-pdw-showexecutionplan-transact-sql?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json&view=azure-sqldw-latest&preserve-view=true), which contains execution information of the query step on all of the distributed databases.
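A sketch of that query, with placeholder values for the request ID and step index:

```sql
-- Execution details of one query step across the distributed databases
SELECT *
FROM sys.dm_pdw_sql_requests
WHERE request_id = 'QID####' AND step_index = 2;
```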
When the query step is running, [DBCC PDW_SHOWEXECUTIONPLAN](/sql/t-sql/database
DBCC PDW_SHOWEXECUTIONPLAN(1, 78);
```
-### STEP 4: Investigate data movement on the distributed databases
+### Step 4: Investigate data movement on the distributed databases
Use the Request ID and the Step Index to retrieve information about a data movement step running on each distribution from [sys.dm_pdw_dms_workers](/sql/relational-databases/system-dynamic-management-views/sys-dm-pdw-dms-workers-transact-sql?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json&view=azure-sqldw-latest&preserve-view=true).
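A sketch of that query, again with placeholder values:

```sql
-- Data movement information for one step, per distribution
SELECT *
FROM sys.dm_pdw_dms_workers
WHERE request_id = 'QID####' AND step_index = 2;
```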
synapse-analytics Create Use External Tables https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/create-use-external-tables.md
The following table lists the data formats supported:
## Prerequisites
-Your first step is to create a database where the tables will be created. Before creating a database scoped credential, the database must have a master key to protect the credential. For more information on this, see [CREATE MASTER KEY &#40;Transact-SQL&#41;](https://learn.microsoft.com/sql/t-sql/statements/create-master-key-transact-sql). Then create the following objects that are used in this sample:
+Your first step is to create a database where the tables will be created. Before creating a database scoped credential, the database must have a master key to protect the credential. For more information on this, see [CREATE MASTER KEY &#40;Transact-SQL&#41;](/sql/t-sql/statements/create-master-key-transact-sql). Then create the following objects that are used in this sample:
- DATABASE SCOPED CREDENTIAL `sqlondemand` that enables access to SAS-protected `https://sqlondemandstorage.blob.core.windows.net` Azure storage account.

```sql
synapse-analytics Develop Storage Files Storage Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/develop-storage-files-storage-access-control.md
To query a file located in Azure Storage, your serverless SQL pool endpoint need
To grant the ability to manage credentials:

-- To allow a user to create or drop a server-level credential, an administrator must grant the `ALTER ANY CREDENTIAL` permission to the user. For example:
+- To allow a user to create or drop a server-level credential, an administrator must grant the `ALTER ANY CREDENTIAL` permission to its login in the master database. For example:
```sql
- GRANT ALTER ANY CREDENTIAL TO [user_name];
+ GRANT ALTER ANY CREDENTIAL TO [login_name];
```

-- To allow a user to create or drop a database scoped credential, an administrator must grant the `CONTROL` permission on the database to the user. For example:
+- To allow a user to create or drop a database scoped credential, an administrator must grant the `CONTROL` permission on the database to the database user in the user database. For example:
```sql
GRANT CONTROL ON DATABASE::[database_name] TO [user_name];
To grant the ability manage credentials:
Database users who access external storage must have permission to use credentials. To use the credential, a user must have the `REFERENCES` permission on a specific credential.
-To grant the `REFERENCES` permission on a server-level credential for a user, use the following T-SQL query:
+To grant the `REFERENCES` permission on a server-level credential for a login, use the following T-SQL query in the master database:
```sql
-GRANT REFERENCES ON CREDENTIAL::[server-level_credential] TO [user];
+GRANT REFERENCES ON CREDENTIAL::[server-level_credential] TO [login_name];
```
-To grant a `REFERENCES` permission on a database-scoped credential for a user, use the following T-SQL query:
+To grant a `REFERENCES` permission on a database-scoped credential for a database user, use the following T-SQL query in the user database:
```sql
-GRANT REFERENCES ON DATABASE SCOPED CREDENTIAL::[database-scoped_credential] TO [user];
+GRANT REFERENCES ON DATABASE SCOPED CREDENTIAL::[database-scoped_credential] TO [user_name];
```

## Server-level credential
These articles help you learn how query different folder types, file types, and
- [Query Parquet files](query-parquet-files.md)
- [Create and use views](create-use-views.md)
- [Query JSON files](query-json-files.md)
-- [Query Parquet nested types](query-parquet-nested-types.md)
+- [Query Parquet nested types](query-parquet-nested-types.md)
synapse-analytics Get Started Power Bi Professional https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/get-started-power-bi-professional.md
Open the Power BI desktop application and select the **Get data** option.
![Open Power BI desktop application and select get data.](./media/get-started-power-bi-professional/step-0-open-powerbi.png)
-### Step 1 - Select data source
+### Step 1: Select data source
Select **Azure** in the menu and then **Azure SQL Database**.

![Select data source.](./media/get-started-power-bi-professional/step-1-select-data-source.png)
-### Step 2 - Select database
+### Step 2: Select database
Write the URL for the database and the name of the database where the view resides.

![Select database on the endpoint.](./media/get-started-power-bi-professional/step-2-db.png)
synapse-analytics How To Pause Resume Pipelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/how-to-pause-resume-pipelines.md
Evaluate the desired state, Pause or Resume, and the current status, Online, or
1. On the Activities tab, select **+ Add Case**. Add the cases `Paused-Resume` and `Online-Pause`.

   ![Check status condition of the dedicated SQL pool](./media/how-to-pause-resume-pipelines/check-condition.png)
-### Step 5c: Pause or Resume dedicated SQL pools
+## Step 5c: Pause or Resume dedicated SQL pools
The final step, and for some requirements the only relevant one, is to initiate the pause or resume of your dedicated SQL pool. This step again uses a Web activity, calling the [Pause or Resume compute REST API for Azure Synapse](../sql-data-warehouse/sql-data-warehouse-manage-compute-rest-api.md#pause-compute).

1. Select the activity edit pencil and add a **Web** activity to the State-PauseorResume canvas.
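For reference, the REST call that the Web activity issues can also be made directly from Azure PowerShell. A minimal sketch, assuming the Az.Accounts module, a signed-in session, and placeholder subscription, workspace, and pool names:

```powershell
# Placeholder IDs below; substitute your own subscription, resource group, workspace, and pool.
$path = "/subscriptions/00000000-0000-0000-0000-000000000000" +
        "/resourceGroups/contoso-rg/providers/Microsoft.Synapse" +
        "/workspaces/contoso-ws/sqlPools/contosopool/pause?api-version=2021-06-01"

# A POST with an empty body initiates the pause; swap 'pause' for 'resume' to bring the pool back online.
Invoke-AzRestMethod -Method POST -Path $path
```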
time-series-insights Time Series Insights How To Scale Your Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/time-series-insights-how-to-scale-your-environment.md
However, changing the pricing tier SKU is not allowed. For example, an environme
- For more information, review [Understanding retention in Azure Time Series Insights](time-series-insights-concepts-retention.md). -- Learn about [configuring data retention in Azure Azure Time Series Insights](time-series-insights-how-to-configure-retention.md).
+- Learn about [configuring data retention in Azure Time Series Insights](time-series-insights-how-to-configure-retention.md).
- Learn about [planning out your environment](time-series-insights-environment-planning.md).
update-center Assessment Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/assessment-options.md
Title: Assessment options in update management center (preview).
-description: The article describes the assessment options available in Update management center (preview).
-
+ Title: Assessment options in Update Manager (preview).
+description: The article describes the assessment options available in Update Manager (preview).
+ Last updated 05/23/2023
-# Assessment options in update management center (preview)
+# Assessment options in Update Manager (preview)
**Applies to:** :heavy_check_mark: Windows VMs :heavy_check_mark: Linux VMs :heavy_check_mark: On-premises environment :heavy_check_mark: Azure Arc-enabled servers.
-This article provides an overview of the assessment options available by update management center (preview).
+This article provides an overview of the assessment options available by Update Manager (preview).
-Update management center (preview) provides you the flexibility to assess the status of available updates and manage the process of installing required updates for your machines.
+Update Manager (preview) provides you with the flexibility to assess the status of available updates and manage the process of installing required updates for your machines.
## Periodic assessment
- Periodic assessment is an update setting on a machine that allows you to enable automatic periodic checking of updates by update management center (preview). We recommend that you enable this property on your machines as it allows update management center (preview) to fetch latest updates for your machines every 24 hours and enables you to view the latest compliance status of your machines. You can enable this setting using update settings flow as detailed [here](manage-update-settings.md#configure-settings-on-single-vm) or enable it at scale by using [Policy](periodic-assessment-at-scale.md).
+ Periodic assessment is an update setting on a machine that allows you to enable automatic periodic checking of updates by Update Manager (preview). We recommend that you enable this property on your machines because it allows Update Manager (preview) to fetch the latest updates for your machines every 24 hours and enables you to view the latest compliance status of your machines. You can enable this setting by using the update settings flow as detailed [here](manage-update-settings.md#configure-settings-on-single-vm), or enable it at scale by using [Policy](periodic-assessment-at-scale.md).
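Under the hood, periodic assessment corresponds to the `assessmentMode` patch setting on the machine's ARM resource. As an illustrative sketch (the resource ID and api-version below are placeholders, and a signed-in Az PowerShell session is assumed), you could enable it on a single Azure Windows VM like this:

```powershell
# Placeholder resource ID; substitute your own subscription, resource group, and VM name.
$vmId = "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/contoso-rg" +
        "/providers/Microsoft.Compute/virtualMachines/contoso-vm"

# assessmentMode = AutomaticByPlatform turns on platform-driven periodic assessment.
$body = @{
    properties = @{
        osProfile = @{
            windowsConfiguration = @{
                patchSettings = @{ assessmentMode = "AutomaticByPlatform" }
            }
        }
    }
} | ConvertTo-Json -Depth 10

Invoke-AzRestMethod -Method PATCH -Path "${vmId}?api-version=2023-03-01" -Payload $body
```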
:::image type="content" source="media/updates-maintenance/periodic-assessment-inline.png" alt-text="Screenshot showing periodic assessment option." lightbox="media/updates-maintenance/periodic-assessment-expanded.png"::: ## Check for updates now/On-demand assessment
-Update management center (preview) allows you to check for latest updates on your machines at any time, on-demand. You can view the latest update status and act accordingly. Go to **Updates** blade on any VM and select **Check for updates** or select multiple machines from update management center (preview) and check for updates for all machines at once. For more information, see [check and install on-demand updates](view-updates.md).
+Update Manager (preview) allows you to check for the latest updates on your machines at any time, on demand. You can view the latest update status and act accordingly. Go to the **Updates** blade on any VM and select **Check for updates**, or select multiple machines from Update Manager (preview) and check for updates for all machines at once. For more information, see [check and install on-demand updates](view-updates.md).
## Update assessment scan

You can initiate a software updates compliance scan on a machine to get a current list of operating system updates available.
In the **Scheduling** section, you can either **create a maintenance configurati
## Next steps
-* To view update assessment and deployment logs generated by update management center (preview), see [query logs](query-logs.md).
-* To troubleshoot issues, see the [Troubleshoot](troubleshoot.md) update management center (preview).
+* To view update assessment and deployment logs generated by Update Manager (preview), see [query logs](query-logs.md).
+* To troubleshoot issues, see the [Troubleshoot](troubleshoot.md) article for Update Manager (preview).
update-center Configure Wu Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/configure-wu-agent.md
Title: Configure Windows Update settings in Update management center (Preview)
-description: This article tells how to configure Windows update settings to work with Update management center (Preview).
-
+ Title: Configure Windows Update settings in Azure Update Manager (preview)
+description: This article tells how to configure Windows update settings to work with Azure Update Manager (preview).
+ Last updated 05/02/2023
-# Configure Windows update settings for update management center (preview)
+# Configure Windows update settings for Azure Update Manager (preview)
-Update management center (Preview) relies on the [Windows Update client](/windows/deployment/update/windows-update-overview) to download and install Windows updates. There are specific settings that are used by the Windows Update client when connecting to Windows Server Update Services (WSUS) or Windows Update. Many of these settings can be managed by:
+Azure Update Manager (preview) relies on the [Windows Update client](/windows/deployment/update/windows-update-overview) to download and install Windows updates. There are specific settings that are used by the Windows Update client when connecting to Windows Server Update Services (WSUS) or Windows Update. Many of these settings can be managed by:
- Local Group Policy Editor - Group Policy - PowerShell - Directly editing the Registry
-The Update management center (preview) respects many of the settings specified to control the Windows Update client. If you use settings to enable non-Windows updates, the Update management center (preview) will also manage those updates. If you want to enable downloading of updates before an update deployment occurs, update deployment can be faster, more efficient, and less likely to exceed the maintenance window.
+Update Manager (preview) respects many of the settings specified to control the Windows Update client. If you use settings to enable non-Windows updates, Update Manager (preview) also manages those updates. If you enable downloading of updates before an update deployment occurs, update deployment can be faster, more efficient, and less likely to exceed the maintenance window.
For additional recommendations on setting up WSUS in your Azure subscription and keeping your Windows virtual machines up to date, review [Plan your deployment for updating Windows virtual machines in Azure using WSUS](/azure/architecture/example-scenario/wsus).

## Pre-download updates
-To configure the automatic downloading of updates without automatically installing them, you can use Group Policy to [configure the Automatic Updates setting](/windows-server/administration/windows-server-update-services/deploy/4-configure-group-policy-settings-for-automatic-updates#configure-automatic-updates) to 3. This setting enables downloads of the required updates in the background, and notifies you that the updates are ready to install. In this way, update management center (Preview) remains in control of schedules, but allows downloading of updates outside the maintenance window. This behavior prevents `Maintenance window exceeded` errors in update management center (preview).
+To configure the automatic downloading of updates without automatically installing them, you can use Group Policy to [configure the Automatic Updates setting](/windows-server/administration/windows-server-update-services/deploy/4-configure-group-policy-settings-for-automatic-updates#configure-automatic-updates) to 3. This setting enables downloads of the required updates in the background, and notifies you that the updates are ready to install. In this way, Update Manager (preview) remains in control of schedules, but allows downloading of updates outside the maintenance window. This behavior prevents `Maintenance window exceeded` errors in Update Manager (preview).
You can enable this setting in PowerShell:
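A hedged sketch of one way to do this, assuming you configure the Automatic Updates policy registry keys rather than the Windows Update COM API:

```powershell
# Assumption: the Group Policy registry path is used; value names per the linked article.
$au = "HKLM:\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU"
New-Item -Path $au -Force | Out-Null
# AUOptions = 3: download updates automatically and notify when they're ready to install.
Set-ItemProperty -Path $au -Name "AUOptions" -Value 3 -Type DWord
```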
By default, the Windows Update client is configured to provide updates only for
Use one of the following options to perform the settings change at scale:

-- For Servers configured to patch on a schedule from Update management center (that has the VM PatchSettings set to AutomaticByPlatform = Azure-Orchestrated), and for all Windows Servers running on an earlier operating system than server 2016, Run the following PowerShell script on the server you want to change.
+- For servers configured to patch on a schedule from Update Manager (preview) (that is, with the VM PatchSettings set to AutomaticByPlatform = Azure-Orchestrated), and for all Windows servers running an operating system earlier than Windows Server 2016, run the following PowerShell script on the server you want to change.
```powershell
$ServiceManager = (New-Object -com "Microsoft.Update.ServiceManager")
# $ServiceId identifies the update service to register; the Microsoft Update
# service GUID is 7971f918-a847-4430-9279-4a52d1efe18d.
$ServiceManager.AddService2($ServiceId,7,"")
```
-- For servers running Server 2016 or later which are not using Update management center scheduled patching (that has the VM PatchSettings set to AutomaticByOS = Azure-Orchestrated) you can use Group Policy to control this by downloading and using the latest Group Policy [Administrative template files](https://learn.microsoft.com/troubleshoot/windows-client/group-policy/create-and-manage-central-store).
+- For servers running Server 2016 or later that aren't using Update Manager (preview) scheduled patching (that is, with the VM PatchSettings set to AutomaticByOS = Azure-Orchestrated), you can use Group Policy to control this by downloading and using the latest Group Policy [Administrative template files](https://learn.microsoft.com/troubleshoot/windows-client/group-policy/create-and-manage-central-store).
## Make WSUS configuration settings
-Update management center (Preview) supports WSUS settings. You can specify sources for scanning and downloading updates using instructions in [Specify intranet Microsoft Update service location](/windows/deployment/update/waas-wu-settings#specify-intranet-microsoft-update-service-location). By default, the Windows Update client is configured to download updates from Windows Update. When you specify a WSUS server as a source for your machines, the update deployment fails, if the updates aren't approved in WSUS.
+Update Manager (preview) supports WSUS settings. You can specify sources for scanning and downloading updates by using the instructions in [Specify intranet Microsoft Update service location](/windows/deployment/update/waas-wu-settings#specify-intranet-microsoft-update-service-location). By default, the Windows Update client is configured to download updates from Windows Update. When you specify a WSUS server as a source for your machines, the update deployment fails if the updates aren't approved in WSUS.
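For illustration, pointing the Windows Update client at an internal WSUS server uses the same policy registry keys; the server URL below is a placeholder:

```powershell
$wu = "HKLM:\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate"
New-Item -Path "$wu\AU" -Force | Out-Null
# Placeholder WSUS endpoint; replace with your own server name and port.
Set-ItemProperty -Path $wu -Name "WUServer" -Value "http://wsus.contoso.com:8530"
Set-ItemProperty -Path $wu -Name "WUStatusServer" -Value "http://wsus.contoso.com:8530"
Set-ItemProperty -Path "$wu\AU" -Name "UseWUServer" -Value 1 -Type DWord
```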
To restrict machines to the internal update service, see [do not connect to any Windows Update Internet locations](/windows-server/administration/windows-server-update-services/deploy/4-configure-group-policy-settings-for-automatic-updates#do-not-connect-to-any-windows-update-internet-locations).
update-center Deploy Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/deploy-updates.md
Title: Deploy updates and track results in update management center (preview).
-description: The article details how to use update management center (preview) in the Azure portal to deploy updates and view results for supported machines.
-
+ Title: Deploy updates and track results in Azure Update Manager (preview).
+description: The article details how to use Azure Update Manager (preview) in the Azure portal to deploy updates and view results for supported machines.
+ Last updated 08/08/2023
-# Deploy updates now and track results with update management center (preview)
+# Deploy updates now and track results with Azure Update Manager (preview)
**Applies to:** :heavy_check_mark: Windows VMs :heavy_check_mark: Linux VMs :heavy_check_mark: On-premises environment :heavy_check_mark: Azure Arc-enabled servers.
-The article describes how to perform an on-demand update on a single VM or multiple VMs using update management center (preview).
+The article describes how to perform an on-demand update on a single VM or multiple VMs using Update Manager (preview).
See the following sections for detailed information:

- [Install updates on a single VM](#install-updates-on-single-vm)
See the following sections for detailed information:
## Supported regions
-Update management center (preview) is available in all [Azure public regions](support-matrix.md#supported-regions).
+Update Manager (preview) is available in all [Azure public regions](support-matrix.md#supported-regions).
+
+## Configure reboot settings
+
+The registry keys listed in [Configuring Automatic Updates by editing the registry](/windows/deployment/update/waas-wu-settings#configuring-automatic-updates-by-editing-the-registry) and [Registry keys used to manage restart](/windows/deployment/update/waas-restart#registry-keys-used-to-manage-restart) can cause your machines to reboot, even if you specify **Never Reboot** in the **Schedule** settings. Configure these registry keys to best suit your environment.
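To check whether any of these reboot-related policy values are already set on a machine, a quick sketch (value names taken from the linked articles):

```powershell
# Reads the Automatic Updates policy key; returns nothing if the key doesn't exist.
Get-ItemProperty -Path "HKLM:\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU" -ErrorAction SilentlyContinue |
    Select-Object AUOptions, NoAutoRebootWithLoggedOnUsers, AlwaysAutoRebootAtScheduledTime
```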
## Install updates on single VM

>[!NOTE]
-> You can install the updates from the Overview or Machines blade in update management center (preview) page or from the selected VM.
+> You can install the updates from the Overview or Machines blade on the Update Manager (preview) page, or from the selected VM.
# [From Overview blade](#tab/install-single-overview)
To install one-time updates on a single VM, follow these steps:
1. Sign in to the [Azure portal](https://portal.azure.com).
-1. In **Update management center (preview)**, **Overview**, choose your **Subscription** and select **One-time update** to install updates.
+1. In **Update Manager (preview)**, **Overview**, choose your **Subscription** and select **One-time update** to install updates.
:::image type="content" source="./media/deploy-updates/install-updates-now-inline.png" alt-text="Example of installing one-time updates." lightbox="./media/deploy-updates/install-updates-now-expanded.png":::
To install one time updates on a single VM, follow these steps:
- In **Select resources**, choose the machine and select **Add**.
-1. In **Updates**, specify the updates to include in the deployment. For each product, select or deselect all supported update classifications and specify the ones to include in your update deployment. If your deployment is meant to apply only for a select set of updates, its necessary to deselect all the pre-selected update classifications when configuring the **Inclusion/exclusion** updates described below. This ensures only the updates you've specified to include in this deployment are installed on the target machine.
+1. In **Updates**, specify the updates to include in the deployment. For each product, select or deselect all supported update classifications and specify the ones to include in your update deployment. If your deployment is meant to apply only for a select set of updates, it's necessary to deselect all the pre-selected update classifications when configuring the **Inclusion/exclusion** updates described below. This ensures only the updates you've specified to include in this deployment are installed on the target machine.
> [!NOTE]
- > - Selected Updates shows a preview of OS updates which may be installed based on the last OS update assessment information available. If the OS update assessment information in update center management (preview) is obsolete, the actual updates installed would vary. Especially if you have chosen to install a specific update category, where the OS updates applicable may vary as new packages or KB Ids may be available for the category.
- > - Update management center (preview) doesn't support driver updates.
+ > - Selected Updates shows a preview of OS updates that might be installed, based on the last OS update assessment information available. If the OS update assessment information in Update Manager (preview) is obsolete, the actual updates installed can vary, especially if you've chosen to install a specific update category, where the applicable OS updates can change as new packages or KB IDs become available for the category.
+ > - Update Manager (preview) doesn't support driver updates.
- Select **+Include update classification**. In **Include update classification**, select the appropriate classification(s) that must be installed on your machines.

   :::image type="content" source="./media/deploy-updates/include-update-classification-inline.png" alt-text="Screenshot on including update classification." lightbox="./media/deploy-updates/include-update-classification-expanded.png":::
- - Select **Include KB ID/package** to include in the updates. Enter a comma-separated list of Knowledge Base article ID numbers to include or exclude for Windows updates. For example, `3103696, 3134815`. For Windows, you can refer to the [MSRC link](https://msrc.microsoft.com/update-guide/deployments) to get the details of the latest Knowledge Base released. For supported Linux distros, you specify a comma separated list of packages by the package name, and you can include wildcards. For example, `kernel*, glibc, libc=1.0.1`. Based on the options specified, update management center (preview) shows a preview of OS updates under the **Selected Updates** section.
+ - Select **Include KB ID/package** to specify the updates to include. Enter a comma-separated list of Knowledge Base article ID numbers to include or exclude for Windows updates. For example, `3103696, 3134815`. For Windows, you can refer to the [MSRC link](https://msrc.microsoft.com/update-guide/deployments) to get the details of the latest Knowledge Base releases. For supported Linux distros, you specify a comma-separated list of packages by the package name, and you can include wildcards. For example, `kernel*, glibc, libc=1.0.1`. Based on the options specified, Update Manager (preview) shows a preview of OS updates under the **Selected Updates** section.
- To exclude updates that you don't want to install, select **Exclude KB ID/package**. We recommend checking this option because updates that are not displayed here might be installed, as newer updates might be available.
- - To ensure that the updates published are on or before a specific date, select **Include by maximum patch publish date** and in the Include by maximum patch publish date , choose the date and select **Add** and **Next**.
+ - To ensure that the updates published are on or before a specific date, select **Include by maximum patch publish date**. In **Include by maximum patch publish date**, choose the date, and then select **Add** and **Next**.
:::image type="content" source="./media/deploy-updates/include-patch-publish-date-inline.png" alt-text="Screenshot on including patch publish date." lightbox="./media/deploy-updates/include-patch-publish-date-expanded.png":::
To install one-time updates on a single VM, follow these steps:
1. Sign in to the [Azure portal](https://portal.azure.com).
-1. In **Update management center (Preview)**, **Machine**, choose your **Subscription**, choose your machine and select **One-time update** to install updates.
+1. In **Update Manager (preview)**, **Machines**, choose your **Subscription**, choose your machine, and select **One-time update** to install updates.
1. Select **Install now** to proceed with installing updates.
To install one-time updates on a single VM, follow these steps:
1. Select your virtual machine and the **virtual machines | Updates** page opens.
1. Under **Operations**, select **Updates**.
-1. In **Updates**, select **Go to Updates using Update Center**.
+1. In **Updates**, select **Go to Updates using Azure Update Manager**.
1. In **Updates (Preview)**, select **One-time update** to install the updates.
1. On the **Install one-time updates** page, the selected machine appears. Choose the machine, select **Next**, and follow the procedure from step 4 listed in **From Overview blade** of [Install updates on single VM](#install-updates-on-single-vm).
You can schedule updates
1. Sign in to the [Azure portal](https://portal.azure.com).
-1. In **Update management center (Preview)**, **Overview**, choose your **Subscription**, select **One-time update**, and **Install now** to install updates.
+1. In **Update Manager (preview)**, **Overview**, choose your **Subscription**, select **One-time update**, and **Install now** to install updates.
:::image type="content" source="./media/deploy-updates/install-updates-now-inline.png" alt-text="Example of installing one-time updates." lightbox="./media/deploy-updates/install-updates-now-expanded.png":::
A notification appears to inform you the activity has started and another is cre
You can browse information about your Azure VMs and Arc-enabled servers across your Azure subscriptions. For more information, see [Update deployment history](manage-multiple-machines.md#update-deployment-history).
-After your scheduled deployment starts, you can see it's status on the **History** tab. It displays the total number of deployments including the successful and failed deployments.
+After your scheduled deployment starts, you can see its status on the **History** tab. It displays the total number of deployments including the successful and failed deployments.
:::image type="content" source="./media/deploy-updates/updates-history-inline.png" alt-text="Screenshot showing updates history." lightbox="./media/deploy-updates/updates-history-expanded.png"::: > [!NOTE]
-> The **Windows update history** currently doesn't show the updates summary that are installed from Azure Update Management. To view a summary of the updates applied on your machines, go to **Update management center (preview)** > **Manage** > **History**.
+> The **Windows update history** currently doesn't show a summary of the updates that are installed from Azure Update Management. To view a summary of the updates applied on your machines, go to **Update Manager (preview)** > **Manage** > **History**.
A list of the deployments created is shown in the update deployment grid, with relevant information about each deployment. Every update deployment has a unique GUID, represented as **Operation ID**, which is listed along with **Status**, **Updates Installed** and **Time** details. You can filter the results listed in the grid.
Select any one of the update deployments from the list to open the **Update depl
## Next steps
-* To view update assessment and deployment logs generated by update management center (preview), see [query logs](query-logs.md).
-* To troubleshoot issues, see the [Troubleshoot](troubleshoot.md) update management center (preview).
+* To view update assessment and deployment logs generated by Update Manager (preview), see [query logs](query-logs.md).
+* To troubleshoot issues, see the [Troubleshoot](troubleshoot.md) article for Update Manager (preview).
update-center Dynamic Scope Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/dynamic-scope-overview.md
Title: An overview of dynamic scoping (preview) description: This article provides information about dynamic scoping (preview), its purpose and advantages.-+ Last updated 07/05/2023
The criteria will be evaluated at the scheduled run time, which will be the fina
> [!NOTE]
> You can associate one dynamic scope with one schedule.
-## Prerequisites
[!INCLUDE [dynamic-scope-prerequisites.md](includes/dynamic-scope-prerequisites.md)]
update-center Guidance Migration Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/guidance-migration-azure.md
+
+ Title: Patching guidance overview for Microsoft Configuration Manager to Azure
+description: Patching guidance overview for Microsoft Configuration Manager to Azure. Learn how to get started with Azure Update Manager, map capabilities of MCM software, and review FAQs.
+++ Last updated : 08/23/2023+++
+# Guidance on patching while migrating from Microsoft Configuration Manager to Azure
+
+**Applies to:** :heavy_check_mark: Windows VMs :heavy_check_mark: Linux VMs :heavy_check_mark: On-premises environment :heavy_check_mark: Azure Arc-enabled servers.
+
+This article provides details on how to patch your migrated virtual machines on Azure.
+
+Microsoft Configuration Manager (MCM) helps you to manage PCs and servers, keep software up to date, set configuration and security policies, and monitor system status.
+
+ The [Azure Migration tool](https://learn.microsoft.com/mem/configmgr/core/support/azure-migration-tool) helps you to programmatically create Azure virtual machines (VMs) for Configuration Manager and install the various site roles with default settings. The validation of new roles and removal of the on-premises site system role enables MCM to provide all the on-premises capabilities and experiences in Azure.
+
+Additionally, you can use the native [Azure Update Manager](overview.md) to manage and govern update compliance for Windows and Linux machines across your deployments in Azure, on-premises, and on other cloud platforms, from a single dashboard, with no operational cost for managing the patching infrastructure. Azure Update Manager is similar to the update management component of MCM, and is designed as a standalone Azure service that provides a SaaS experience on Azure for managing hybrid environments.
+
+Both MCM in Azure and Azure Update Manager can fulfill your patching requirements:
+- Using MCM, you can continue with the existing investments in MCM and the processes to maintain the patch update management cycle for Windows VMs.
+- Using Azure Update Manager, you can achieve consistent management of VMs and operating system updates across your cloud and hybrid environments. You don't need to maintain Azure virtual machines for hosting the different Configuration Manager roles, and you don't need an MCM license, thereby reducing the total cost of maintaining the patch update management cycle for all the machines in your environment. [Learn more](https://techcommunity.microsoft.com/t5/windows-it-pro-blog/what-s-uup-new-update-style-coming-next-week/ba-p/3773065).
++
+## Manage software updates using Azure Update Manager
+
+1. Sign in to the [Azure portal](https://portal.azure.com) and search for Azure Update Manager (preview).
+
+ :::image type="content" source="./media/guidance-migration-azure/update-manager-service-selection-inline.png" alt-text="Screenshot of selecting the Azure Update Manager from Azure portal." lightbox="./media/guidance-migration-azure/update-manager-service-selection-expanded.png":::
+
+1. On the **Azure Update Manager (preview)** home page, under **Manage** > **Machines**, select your subscription to view all your machines.
+1. Use the available filter options to view the status of your specific machines.
+
+ :::image type="content" source="./media/guidance-migration-azure/filter-machine-status-inline.png" alt-text="Screenshot of selecting the filters in Azure Update Manager to view the machines." lightbox="./media/guidance-migration-azure/filter-machine-status-expanded.png":::
+
+1. Select the [assessment](assessment-options.md) and [patching](updates-maintenance-schedules.md) options that suit your requirements.
+
+## Map MCM capabilities to Azure Update Manager
+
+The following table maps the software update management capabilities of MCM to Azure Update Manager.
+
+| **Capability** | **Microsoft Configuration Manager** | **Azure Update Manager**|
+| | | |
+|Synchronize software updates between sites (central admin site, primary, and secondary sites)| The top site (either the central admin site or a stand-alone primary site) connects to Microsoft Update to retrieve software updates. [Learn more](https://learn.microsoft.com/mem/configmgr/sum/understand/software-updates-introduction). After the top sites are synchronized, the child sites are synchronized. | There's no hierarchy of machines in Azure, and therefore all machines connected to Azure receive updates from the source repository. |
+|Synchronize software updates/check for updates (retrieve patch metadata) | You can scan for updates periodically by setting the configuration on the software update point. [Learn more](https://learn.microsoft.com/mem/configmgr/sum/get-started/synchronize-software-updates#to-schedule-software-updates-synchronization). | You can enable periodic assessment to scan for patches every 24 hours. [Learn more](assessment-options.md). |
+|Configuring classifications/products to synchronize/scan/assess | You can choose the update classifications (security or critical updates) to synchronize/scan/assess. [Learn more](https://learn.microsoft.com/mem/configmgr/sum/get-started/configure-classifications-and-products). | There's no such capability; the entire software metadata is scanned.|
+|Deploy software updates (install patches)| Provides three modes of deploying updates: <br> Manual deployment <br> Automatic deployment <br> Phased deployment [Learn more](https://learn.microsoft.com/mem/configmgr/sum/deploy-use/deploy-software-updates).| Manual deployment is mapped to deploying [one-time updates](deploy-updates.md) and automatic deployment is mapped to [scheduled updates](scheduled-patching.md). (The [Automatic Deployment Rules (ADRs)](https://learn.microsoft.com/mem/configmgr/sum/deploy-use/automatically-deploy-software-updates#BKMK_CreateAutomaticDeploymentRule) can be mapped to schedules.) There's no phased deployment option. |
+
+## Limitations in Azure Update Manager (preview)
+
+The following are the current limitations:
+
+- **Orchestration groups with pre/post scripts** - [Orchestration groups](https://learn.microsoft.com/mem/configmgr/sum/deploy-use/orchestration-groups) can't be created in Azure Update Manager to specify a maintenance sequence, allow some machines to update at the same time, and so on. (Orchestration groups allow you to use pre/post scripts to run tasks before and after a patch deployment.)
+
+### Patching machines
+After you set up configurations for assessment and patching, you can deploy or install updates only through [on-demand updates](deploy-updates.md) (one-time or manual update) or [scheduled updates](scheduled-patching.md) (automatic update). You can also deploy updates by using [Azure Update Manager's API](manage-vms-programmatically.md).
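As an illustrative sketch of the API route (the resource ID, api-version, and parameter values below are placeholders), a one-time installation on an Azure VM can be triggered with the `installPatches` action:

```powershell
$vmId = "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/contoso-rg" +
        "/providers/Microsoft.Compute/virtualMachines/contoso-vm"

$body = @{
    maximumDuration   = "PT2H"          # cap the run at two hours
    rebootSetting     = "IfRequired"    # reboot only when a patch requires it
    windowsParameters = @{ classificationsToInclude = @("Critical", "Security") }
} | ConvertTo-Json -Depth 5

Invoke-AzRestMethod -Method POST -Path "${vmId}/installPatches?api-version=2023-03-01" -Payload $body
```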
+
+## Frequently asked questions
+
+### Where does Azure Update Manager get its updates from?
+
+Azure Update Manager refers to the repository that the machines point to. By default, most Windows machines point to the Windows Update catalog, and Linux machines are configured to get updates from the `apt` or `yum` repositories. If the machines point to another repository, such as [WSUS](https://learn.microsoft.com/windows-server/administration/windows-server-update-services/get-started/windows-server-update-services-wsus) or a local repository, then Azure Update Manager gets the updates from that repository.
+
+### Can Azure Update Manager patch OS, SQL, and third-party software?
+
+Azure Update Manager refers to the repositories that the VMs point to. If the repository contains third-party and SQL patches, Azure Update Manager can install SQL and third-party patches.
+> [!NOTE]
+> By default, Windows VMs point to the Windows Update repository, which doesn't contain SQL and third-party patches. If the VMs point to Microsoft Update, Azure Update Manager will patch OS, SQL, and third-party updates.
+
+### Do I need to configure WSUS to use Azure Update Manager?
+
+You don't need WSUS to deploy patches in Azure Update Manager. Typically, all the machines connect to the internet repository to get updates (unless the machines point to WSUS or a local repository that isn't connected to the internet). [Learn more](https://learn.microsoft.com/mem/configmgr/sum/).
+
+## Next steps
+- [An overview on Azure Update Manager](overview.md)
+- [Check update compliance](view-updates.md)
+- [Deploy updates now (on-demand) for single machine](deploy-updates.md)
+- [Schedule recurring updates](scheduled-patching.md)
update-center Guidance Patching Sql Server Azure Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/guidance-patching-sql-server-azure-vm.md
+
+ Title: Guidance on patching for SQL Server on Azure VMs using Azure Update Manager.
+description: An overview on patching guidance for SQL Server on Azure VMs using Azure Update Manager
+++ Last updated : 08/29/2023+++
+# Guidance on patching for SQL Server on Azure VMs using Azure Update Manager
+
+**Applies to:** :heavy_check_mark: Windows VMs :heavy_check_mark: Linux VMs :heavy_check_mark: On-premises environment :heavy_check_mark: Azure Arc-enabled servers.
+
+This article provides details on how to integrate [Azure Update Manager](overview.md) with your [SQL virtual machines](https://learn.microsoft.com/azure/azure-sql/virtual-machines/windows/manage-sql-vm-portal?view=azuresql-vm) resource for your [SQL Server on Azure Virtual Machines (VMs)](https://learn.microsoft.com/azure/azure-sql/virtual-machines/windows/sql-server-on-azure-vm-iaas-what-is-overview?view=azuresql-vm).
+
+## Overview
+
+[Azure Update Manager](overview.md) is a unified service that allows you to manage and govern updates for all your Windows and Linux virtual machines across your deployments in Azure, on-premises, and on the other cloud platforms from a single dashboard.
+
+Azure Update Manager is designed as a standalone Azure service that provides a SaaS experience for managing hybrid environments in Azure.
+
+Using Azure Update Manager, you can manage and govern updates for all your SQL Server instances at scale. Unlike with [Automated Patching](https://learn.microsoft.com/azure/azure-sql/virtual-machines/windows/automated-patching?view=azuresql-vm), Update Manager installs cumulative updates for SQL Server.
+++
+
+## Next steps
+- [An overview on Azure Update Manager](overview.md)
+- [Check update compliance](view-updates.md)
+- [Deploy updates now (on-demand) for single machine](deploy-updates.md)
+- [Schedule recurring updates](scheduled-patching.md)
update-center Manage Arc Enabled Servers Programmatically https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/manage-arc-enabled-servers-programmatically.md
Title: Programmatically manage updates for Azure Arc-enabled servers in Update management center (preview)
-description: This article tells how to use Update management center (preview) using REST API with Azure Arc-enabled servers.
-
+ Title: Programmatically manage updates for Azure Arc-enabled servers in Azure Update Manager (preview)
+description: This article tells how to use Azure Update Manager (preview) using REST API with Azure Arc-enabled servers.
+ Last updated 06/15/2023
# How to programmatically manage updates for Azure Arc-enabled servers
-This article walks you through the process of using the Azure REST API to trigger an assessment and an update deployment on your Azure Arc-enabled servers with update management (preview) in Azure. If you're new to update management center (preview) and you want to learn more, see [overview of update management center (preview)](overview.md). To use the Azure REST API to manage Azure virtual machines, see [How to programmatically work with Azure virtual machines](manage-vms-programmatically.md).
+This article walks you through the process of using the Azure REST API to trigger an assessment and an update deployment on your Azure Arc-enabled servers with Azure Update Manager (preview). If you're new to Azure Update Manager (preview) and you want to learn more, see [overview of Update Manager (preview)](overview.md). To use the Azure REST API to manage Azure virtual machines, see [How to programmatically work with Azure virtual machines](manage-vms-programmatically.md).
-Update management center (preview) in Azure enables you to use the [Azure REST API](/rest/api/azure) for access programmatically. Additionally, you can use the appropriate REST commands from [Azure PowerShell](/powershell/azure) and [Azure CLI](/cli/azure).
+Update Manager (preview) in Azure enables you to use the [Azure REST API](/rest/api/azure) for programmatic access. Additionally, you can use the appropriate REST commands from [Azure PowerShell](/powershell/azure) and [Azure CLI](/cli/azure).
-Support for Azure REST API to manage Azure Arc-enabled servers is available through the update management center (preview) virtual machine extension.
+Support for Azure REST API to manage Azure Arc-enabled servers is available through the Update Manager (preview) virtual machine extension.
## Update assessment
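A hedged sketch of triggering an on-demand assessment on an Arc-enabled server: it's a POST to the `assessPatches` action, where the IDs and api-version below are placeholders:

```powershell
$machineId = "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/contoso-rg" +
             "/providers/Microsoft.HybridCompute/machines/contoso-arc-server"

# An empty-body POST starts the assessment; poll the returned Azure-AsyncOperation header for completion.
Invoke-AzRestMethod -Method POST -Path "${machineId}/assessPatches?api-version=2022-12-27"
```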
DELETE on `<ARC or Azure VM resourceId>/providers/Microsoft.Maintenance/configur
## Next steps
-* To view update assessment and deployment logs generated by Update management center (preview), see [query logs](query-logs.md).
-* To troubleshoot issues, see the [Troubleshoot](troubleshoot.md) Update management center (preview).
+* To view update assessment and deployment logs generated by Update Manager (preview), see [query logs](query-logs.md).
+* To troubleshoot issues, see the [Troubleshoot](troubleshoot.md) article for Update Manager (preview).
update-center Manage Dynamic Scoping https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/manage-dynamic-scoping.md
Title: Manage various operations of dynamic scoping (preview). description: This article describes how to manage dynamic scoping (preview) operations -+ Last updated 07/05/2023
This article describes how to view, add, edit, and delete a dynamic scope (preview).
-## Prerequisites
- [!INCLUDE [dynamic-scope-prerequisites.md](includes/dynamic-scope-prerequisites.md)]

## Add a Dynamic scope (preview)

To add a Dynamic scope to an existing configuration, follow these steps:
-1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to Update management center (preview).
+1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to Update Manager (preview).
1. Select **Machines** > **Browse maintenance configurations** > **Maintenance configurations**.
1. In the **Maintenance configurations** page, select the name of the maintenance configuration for which you want to add a Dynamic scope.
1. In the given maintenance configuration page, select **Dynamic scopes** > **Add a dynamic scope**.
To add a Dynamic scope to an existing configuration, follow these steps:
To view the list of Dynamic scopes (preview) associated with a given maintenance configuration, follow these steps:
-1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to **Update management center (preview)**.
+1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to **Update Manager (preview)**.
1. Select **Machines** > **Browse maintenance configurations** > **Maintenance configurations**.
1. In the **Maintenance configurations** page, select the name of the maintenance configuration for which you want to view the Dynamic scope.
1. In the given maintenance configuration page, select **Dynamic scopes** to view all the Dynamic scopes that are associated with the maintenance configuration.

## Edit a Dynamic scope (preview)
-1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to Update management center (preview).
+1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to Update Manager (preview).
1. Select **Machines** > **Browse maintenance configurations** > **Maintenance configurations**.
1. In the **Maintenance configurations** page, select the name of the maintenance configuration for which you want to edit an existing Dynamic scope.
1. In the given maintenance configuration page, select **Dynamic scopes** and select the scope you want to edit. Under the **Actions** column, select the edit icon.
To view the list of Dynamic scopes (preview) associated to a given maintenance c
## Delete a Dynamic scope (preview)
-1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to Update management center (preview).
+1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to Update Manager (preview).
1. Select **Machines** > **Browse maintenance configurations** > **Maintenance configurations**.
1. In the **Maintenance configurations** page, select the name of the maintenance configuration for which you want to delete an existing Dynamic scope.
1. In the given maintenance configuration page, select **Dynamic scopes** and select the scope you want to delete. Select **Remove dynamic scope** and then select **Ok**.

## View patch history of a Dynamic scope (preview)
-1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to Update management center (preview).
+1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to Update Manager (preview).
1. Select **History** > **Browse maintenance configurations** > **Maintenance configurations** to view the patch history of a dynamic scope.
Obtaining consent to apply updates is an important step in the workflow of dynam
#### [From Update Settings](#tab/us)
-1. In **Update management center**, go to **Overview** > **Update settings**.
+1. In **Update Manager**, go to **Overview** > **Update settings**.
1. In **Change Update settings**, select **+Add machine** to add the machines.
1. In the list of machines sorted by operating system, go to the **Patch orchestration** option and select **Azure-orchestrated with user managed schedules (Preview)** to confirm that:
Obtaining consent to apply updates is an important step in the workflow of dynam
* [Deploy updates now (on-demand) for single machine](deploy-updates.md)
* [Schedule recurring updates](scheduled-patching.md)
* [Manage update settings via Portal](manage-update-settings.md)
-* [Manage multiple machines using update management center](manage-multiple-machines.md)
+* [Manage multiple machines using Update Manager](manage-multiple-machines.md)
update-center Manage Multiple Machines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/manage-multiple-machines.md
Title: Manage multiple machines in update management center (preview)
-description: The article details how to use Update management center (preview) in Azure to manage multiple supported machines and view their compliance state in the Azure portal.
-
+ Title: Manage multiple machines in Azure Update Manager (preview)
+description: The article details how to use Azure Update Manager (preview) in Azure to manage multiple supported machines and view their compliance state in the Azure portal.
+ Last updated 05/02/2023
-# Manage multiple machines with update management center (Preview)
+# Manage multiple machines with Azure Update Manager (preview)
**Applies to:** :heavy_check_mark: Windows VMs :heavy_check_mark: Linux VMs :heavy_check_mark: On-premises environment :heavy_check_mark: Azure Arc-enabled servers.
> - For a seamless scheduled patching experience, we recommend that for all Azure VMs, you update the patch orchestration to **Customer Managed Schedules (Preview)**. If you fail to update the patch orchestration, you can experience a disruption in business continuity because the schedules will fail to patch the VMs.[Learn more](prerequsite-for-schedule-patching.md).
-This article describes the various features that update management center (Preview) offers to manage the system updates on your machines. Using the update management center (preview), you can:
+This article describes the various features that Update Manager (preview) offers to manage the system updates on your machines. Using Update Manager (preview), you can:
- Quickly assess the status of available operating system updates.
- Deploy updates.
This article describes the various features that update management center (Previ
Instead of performing these actions from a selected Azure VM or Arc-enabled server, you can manage all your machines in the Azure subscription.
-## View update management center (Preview) status
+## View Update Manager (preview) status
1. Sign in to the [Azure portal](https://portal.azure.com).
-1. To view update assessment across all machines, including Azure Arc-enabled servers navigate to **Update management center(Preview)**.
+1. To view update assessment across all machines, including Azure Arc-enabled servers, navigate to **Update Manager (preview)**.
- :::image type="content" source="./media/manage-multiple-machines/overview-page-inline.png" alt-text="Screenshot of update management center overview page in the Azure portal." lightbox="./media/manage-multiple-machines/overview-page-expanded.png":::
+ :::image type="content" source="./media/manage-multiple-machines/overview-page-inline.png" alt-text="Screenshot of the Update Manager overview page in the Azure portal." lightbox="./media/manage-multiple-machines/overview-page-expanded.png":::
On the **Overview** page, the summary tiles show the following status:
Instead of performing these actions from a selected Azure VM or Arc-enabled serv
- **Update status of machines**: shows the update status information for assessed machines that had applicable or needed updates. You can filter the results based on classification types. By default, all [classifications](../automation/update-management/overview.md#update-classifications) are selected, and the tile updates as per the classification selection.
- The graph provides a snapshot for all your machines in your subscription, regardless of whether you have used update management center (preview) for that machine. This assessment data comes from Azure Resource Graph, and it stores the data for seven days.
+ The graph provides a snapshot for all your machines in your subscription, regardless of whether you have used Update Manager (preview) for that machine. This assessment data comes from Azure Resource Graph, and it stores the data for seven days.
From the assessment data available, machines are classified into the following categories:
Instead of performing these actions from a selected Azure VM or Arc-enabled serv
## Summary of machine status
-Update management center (preview) in Azure enables you to browse information about your Azure VMs and Arc-enabled servers across your Azure subscriptions relevant to update management center (preview). The section shows how you can filter information to understand the update status of your machine resources, and for multiple machines, initiate an update assessment, update deployment, and manage their update settings.
+Update Manager (preview) in Azure enables you to browse information about your Azure VMs and Arc-enabled servers across your Azure subscriptions relevant to Update Manager (preview). The section shows how you can filter information to understand the update status of your machine resources, and for multiple machines, initiate an update assessment, update deployment, and manage their update settings.
- In the update management center (preview) page, select **Machines** from the left menu.
+ In the Update Manager (preview) page, select **Machines** from the left menu.
- :::image type="content" source="./media/manage-multiple-machines/update-center-machines-page-inline.png" alt-text="Screenshot of update management center(preview) Machines page in the Azure portal." lightbox="./media/manage-multiple-machines/update-center-machines-page-expanded.png":::
+ :::image type="content" source="./media/manage-multiple-machines/update-center-machines-page-inline.png" alt-text="Screenshot of Update Manager(preview) Machines page in the Azure portal." lightbox="./media/manage-multiple-machines/update-center-machines-page-expanded.png":::
On the page, the table lists all the machines in the specified subscription, and for each machine it helps you understand the following details, which show up based on the latest assessment.

- **Update status**: the total number of updates available identified as applicable to the machine's OS.
For machines that haven't had a compliance assessment scan for the first time, y
:::image type="content" source="./media/manage-multiple-machines/update-center-assess-now-complete-banner-inline.png" alt-text="Screenshot of assessment banner on Manage Machines page." lightbox="./media/manage-multiple-machines/update-center-assess-now-complete-banner-expanded.png":::
-Select a machine from the list to open update management center (Preview) scoped to that machine. Here, you can view its detailed assessment status, update history, configure its patch orchestration options, and initiate an update deployment.
+Select a machine from the list to open Update Manager (preview) scoped to that machine. Here, you can view its detailed assessment status, update history, configure its patch orchestration options, and initiate an update deployment.
### Deploy the updates
You can create a recurring update deployment for your machines. Select your mach
## Update deployment history
-Update management center (preview) enables you to browse information about your Azure VMs and Arc-enabled servers across your Azure subscriptions relevant to Update management center (preview). You can filter information to understand the update assessment and deployment history for multiple machines. In Update management center (preview), select **History** from the left menu.
+Update Manager (preview) enables you to browse information about your Azure VMs and Arc-enabled servers across your Azure subscriptions relevant to Update Manager (preview). You can filter information to understand the update assessment and deployment history for multiple machines. In Update Manager (preview), select **History** from the left menu.
## Update deployment history by machines
When you select any one maintenance run ID record, you can view an expanded stat
The update assessment and deployment data are available for querying in Azure Resource Graph. You can apply this data to scenarios that include security compliance, security operations, and troubleshooting. Select **Go to resource graph** to go to the Azure Resource Graph Explorer. It enables running Resource Graph queries directly in the Azure portal. Resource Graph supports Azure CLI, Azure PowerShell, Azure SDK for Python, and more. For more information, see [First query with Azure Resource Graph Explorer](../governance/resource-graph/first-query-portal.md).
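For example, a minimal sketch of running the same kind of query from PowerShell, assuming the Az.ResourceGraph module; `patchinstallationresources` is the Resource Graph table that holds update deployment results:

```powershell
# Last few patch installation runs across the selected subscriptions.
Search-AzGraph -Query @"
patchinstallationresources
| where type =~ 'microsoft.compute/virtualmachines/patchinstallationresults'
| project name, properties.status, properties.lastModifiedDateTime
| limit 10
"@
```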
-When the Resource Graph Explorer opens, it is automatically populated with the same query used to generate the results presented in the table on the **History** page in Update management center (preview). Ensure that you review the [query Update logs](query-logs.md) article to learn about the log records and their properties, and the sample queries included.
+When the Resource Graph Explorer opens, it is automatically populated with the same query used to generate the results presented in the table on the **History** page in Update Manager (preview). Ensure that you review the [query Update logs](query-logs.md) article to learn about the log records and their properties, and the sample queries included.
## Next steps

* To set up and manage recurring deployment schedules, see [Schedule recurring updates](scheduled-patching.md)
-* To view update assessment and deployment logs generated by update management center (preview), see [query logs](query-logs.md).
+* To view update assessment and deployment logs generated by Update Manager (preview), see [query logs](query-logs.md).
update-center Manage Update Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/manage-update-settings.md
Title: Manage update configuration settings in Update management center (preview)
-description: The article describes how to manage the update settings for your Windows and Linux machines managed by Update management center (preview).
-
+ Title: Manage update configuration settings in Azure Update Manager (preview)
+description: The article describes how to manage the update settings for your Windows and Linux machines managed by Azure Update Manager (preview).
+ Last updated 05/30/2023
**Applies to:** :heavy_check_mark: Windows VMs :heavy_check_mark: Linux VMs :heavy_check_mark: On-premises environment :heavy_check_mark: Azure Arc-enabled servers.
-The article describes how to configure update settings from Update management center (preview) in Azure, to control the update settings on your Azure VMs and Arc-enabled servers for one or more machines.
+The article describes how to configure update settings from Azure Update Manager (preview) to control the update settings on your Azure VMs and Arc-enabled servers, for one or more machines.
## Configure settings on single VM
The article describes how to configure update settings from Update management ce
To configure update settings on a single VM, follow these steps:

>[!NOTE]
-> You can schedule updates from the Overview blade or Machines blade in update management center (preview) page or from the selected VM.
+> You can schedule updates from the Overview blade or Machines blade on the Update Manager (preview) page, or from the selected VM.
# [From Overview blade](#tab/manage-single-overview)

1. Sign in to the [Azure portal](https://portal.azure.com).
-1. In **Update management center**, select **Overview**, select your **Subscription**, and select **Update settings**.
+1. In **Update Manager**, select **Overview**, select your **Subscription**, and select **Update settings**.
1. In **Change update settings**, select **+Add machine** to select the machine for which you want to change the update settings. 1. In **Select resources**, select the machine and select **Add**. 1. In the **Change update settings** page, you will see the machine classified as per the operating system with the list of following updates that you can select and apply.
To configure update settings on your machines on a single VM, follow these steps
- **Periodic assessment** - Periodic assessment is set to run every 24 hours. You can either enable or disable this setting.
- - **Hot patch** - You can enable [hot patching](../automanage/automanage-hotpatch.md) for Windows Server Azure Edition Virtual Machines (VMs). Hot patching is a new way to install updates on supported *Windows Server Azure Edition* virtual machines that doesn't require a reboot after installation. You can use update management center (preview) to install other patches by scheduling patch installation or triggering immediate patch deployment. You can enable, disable or reset this setting.
+ - **Hot patch** - You can enable [hot patching](../automanage/automanage-hotpatch.md) for Windows Server Azure Edition Virtual Machines (VMs). Hot patching is a new way to install updates on supported *Windows Server Azure Edition* virtual machines that doesn't require a reboot after installation. You can use Update Manager (preview) to install other patches by scheduling patch installation or triggering immediate patch deployment. You can enable, disable or reset this setting.
- **Patch orchestration** option provides the following:
To configure update settings on your machines on a single VM, follow these steps
# [From Machines blade](#tab/manage-single-machines) 1. Sign in to the [Azure portal](https://portal.azure.com).
-1. In **Update management center**, select **Machines** > your **subscription**.
+1. In **Update Manager**, select **Machines** > your **subscription**.
1. Select the checkbox of your machine from the list and select **Update settings**. 1. Select **Update Settings** to proceed with the type of update for your machine. 1. In **Change update settings**, select **+Add machine** to select the machine for which you want to change the update settings.
To configure update settings on your machines at scale, follow these steps:
1. Sign in to the [Azure portal](https://portal.azure.com).
-1. In **Update management center**, select **Overview**, select your **Subscription** and select **Update settings**.
+1. In **Update Manager**, select **Overview**, select your **Subscription** and select **Update settings**.
1. In **Change update settings**, select the update settings that you want to change for your machines. Follow the procedure from step 3 listed in **From Overview blade** of [Configure settings on single VM](#configure-settings-on-single-vm). # [From Machines blade](#tab/manage-scale-machines) 1. Sign in to the [Azure portal](https://portal.azure.com).
-1. In **Update management center**, select **Machines** > your **subscription**, and select the checkbox for all your machines from the list.
+1. In **Update Manager**, select **Machines** > your **subscription**, and select the checkbox for all your machines from the list.
1. Select **Update Settings** to proceed with the type of update for your machines. 1. In **Change update settings**, you can select the update settings that you want to change for your machine and follow the procedure from step 3 listed in **From Overview blade** of [Configure settings on single VM](#configure-settings-on-single-vm).
A notification appears to confirm that the update settings are successfully chan
## Next steps * [View assessment compliance](view-updates.md) and [deploy updates](deploy-updates.md) for a selected Azure VM or Arc-enabled server, or across [multiple machines](manage-multiple-machines.md) in your subscription in the Azure portal.
-* To view update assessment and deployment logs generated by update management center (preview), see [query logs](query-logs.md).
-* To troubleshoot issues, see the [Troubleshoot](troubleshoot.md) update management center (preview).
+* To view update assessment and deployment logs generated by Update Manager (preview), see [query logs](query-logs.md).
+* To troubleshoot issues, see the [Troubleshoot](troubleshoot.md) Update Manager (preview).
update-center Manage Updates Customized Images https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/manage-updates-customized-images.md
Title: Overview of customized images in Update management center (preview).
+ Title: Overview of customized images in Azure Update Manager (preview).
description: The article describes customized images, how to register and validate customized images for public preview, and their limitations.
-
+ Last updated 05/02/2023
This article describes the customized image support, how to enable the subscript
## Asynchronous check to validate customized image support
-If you're using the Azure Compute Gallery (formerly known as Shared Image Gallery) to create customized images, you can use Update management Center (preview) operations such as Check for updates, One-time update, Schedule updates, or Periodic assessment to validate if the virtual machines are supported for guest patching and then initiate patching if the VMs are supported.
+If you're using the Azure Compute Gallery (formerly known as Shared Image Gallery) to create customized images, you can use Update Manager (preview) operations such as Check for updates, One-time update, Schedule updates, or Periodic assessment to validate if the virtual machines are supported for guest patching and then initiate patching if the VMs are supported.
-Unlike marketplace images where support is validated even before Update management center operation is triggered. Here, there are no pre-existing validations in place and the Update management center operations are triggered and only their success or failure determines support.
+Unlike marketplace images, where support is validated even before an Update Manager operation is triggered, there are no pre-existing validations in place here; the Update Manager operations are triggered, and only their success or failure determines support.
For instance, an assessment call attempts to fetch the latest patch available from the image's OS family to check support. It stores this support-related data in an Azure Resource Graph (ARG) table, which you can query to see the support status for your Azure Compute Gallery image.
update-center Manage Vms Programmatically https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/manage-vms-programmatically.md
Title: Programmatically manage updates for Azure VMs
-description: This article tells how to use update management center (preview) in Azure using REST API with Azure virtual machines.
-
+description: This article tells how to use Azure Update Manager (preview) in Azure using REST API with Azure virtual machines.
+ Last updated 06/15/2023
# How to programmatically manage updates for Azure VMs
-This article walks you through the process of using the Azure REST API to trigger an assessment and an update deployment on your Azure virtual machine with update management center (preview) in Azure. If you're new to update management center (preview) and you want to learn more, see [overview of update management center (preview)](overview.md). To use the Azure REST API to manage Arc-enabled servers, see [How to programmatically work with Arc-enabled servers](manage-arc-enabled-servers-programmatically.md).
+This article walks you through the process of using the Azure REST API to trigger an assessment and an update deployment on your Azure virtual machine with Azure Update Manager (preview). If you're new to Update Manager (preview) and you want to learn more, see the [overview of Azure Update Manager (preview)](overview.md). To use the Azure REST API to manage Arc-enabled servers, see [How to programmatically work with Arc-enabled servers](manage-arc-enabled-servers-programmatically.md).
-Update management center (preview) in Azure enables you to use the [Azure REST API](/rest/api/azure/) for access programmatically. Additionally, you can use the appropriate REST commands from [Azure PowerShell](/powershell/azure/) and [Azure CLI](/cli/azure/).
+Azure Update Manager (preview) enables you to use the [Azure REST API](/rest/api/azure/) for programmatic access. Additionally, you can use the appropriate REST commands from [Azure PowerShell](/powershell/azure/) and [Azure CLI](/cli/azure/).
-Support for Azure REST API to manage Azure VMs is available through the update management center (preview) virtual machine extension.
+Support for Azure REST API to manage Azure VMs is available through the Update Manager (preview) virtual machine extension.
## Update assessment
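As an illustration of the REST flow, the following minimal Python sketch triggers an on-demand assessment by POSTing to the VM's `assessPatches` action. The resource names are placeholders and the `api-version` is an assumption; verify it against the current REST reference.

```python
# Minimal sketch: trigger an update assessment on an Azure VM via REST.
# Assumptions: pip install azure-identity requests; placeholder names;
# api-version is assumed - check the Virtual Machines REST reference.
import requests
from azure.identity import DefaultAzureCredential

SUB, RG, VM = "<subscription-id>", "<resource-group>", "<vm-name>"

token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token

url = (
    f"https://management.azure.com/subscriptions/{SUB}/resourceGroups/{RG}"
    f"/providers/Microsoft.Compute/virtualMachines/{VM}/assessPatches"
    "?api-version=2023-03-01"
)
resp = requests.post(url, headers={"Authorization": f"Bearer {token}"})
resp.raise_for_status()

# The call is asynchronous: poll the Azure-AsyncOperation URL from the response
# headers until the assessment reaches a terminal state.
print(resp.status_code, resp.headers.get("Azure-AsyncOperation"))
```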
DELETE on `<ARC or Azure VM resourceId>/providers/Microsoft.Maintenance/configur
## Next steps
-* To view update assessment and deployment logs generated by update management center (preview), see [query logs](query-logs.md).
-* To troubleshoot issues, see [Troubleshoot](troubleshoot.md) update management center (preview).
+* To view update assessment and deployment logs generated by Update Manager (preview), see [query logs](query-logs.md).
+* To troubleshoot issues, see [Troubleshoot](troubleshoot.md) Update Manager (preview).
update-center Manage Workbooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/manage-workbooks.md
Title: Create reports using workbooks in update management center (preview)..
+ Title: Create reports using workbooks in Azure Update Manager (preview).
description: This article describes how to create and manage workbooks for VM insights.
-
+ Last updated 05/23/2023
-# Create reports in update management center (preview)
+# Create reports in Azure Update Manager (preview)
**Applies to:** :heavy_check_mark: Windows VMs :heavy_check_mark: Linux VMs :heavy_check_mark: On-premises environment :heavy_check_mark: Azure Arc-enabled servers.
This article describes how to create a workbook and how to edit a workbook to cr
## Create a workbook
-1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to Update management center (preview).
-1. Under **Monitoring**, select **Workbooks** to view the Update management center (Preview)| Workbooks|Gallery.
+1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to Update Manager (preview).
+1. Under **Monitoring**, select **Workbooks** to view the Update Manager (preview)| Workbooks|Gallery.
1. Select **Quick start** tile > **Empty** or alternatively, you can select **+New** to create a workbook. 1. Select **+Add** to select any [elements](../azure-monitor/visualize/workbooks-create-workbook.md#create-a-new-azure-workbook) to add to the workbook.
This article describes how to create a workbook and how to edit a workbook to cr
1. Select **Done Editing**. ## Edit a workbook
-1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to Update management center (preview).
-1. Under **Monitoring**, select **Workbooks** to view the Update management center (Preview)| Workbooks|Gallery.
-1. Select **Update management center** tile > **Overview** to view the Update management center (Preview)|Workbooks|Overview page.
+1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to Update Manager (preview).
+1. Under **Monitoring**, select **Workbooks** to view the Update Manager (preview)| Workbooks|Gallery.
+1. Select **Update Manager** tile > **Overview** to view the Update Manager (preview)|Workbooks|Overview page.
1. Select your subscription, and select **Edit** to enable the edit mode for all four options. - Machines overall status & configuration
This article describes how to create a workbook and how to edit a workbook to cr
* [Deploy updates now (on-demand) for single machine](deploy-updates.md) * [Schedule recurring updates](scheduled-patching.md) * [Manage update settings via Portal](manage-update-settings.md)
-* [Manage multiple machines using update management center](manage-multiple-machines.md)
+* [Manage multiple machines using Update Manager](manage-multiple-machines.md)
update-center Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/overview.md
Title: Update management center (preview) overview
-description: The article tells what update management center (preview) in Azure is and the system updates for your Windows and Linux machines in Azure, on-premises, and other cloud environments.
-
+ Title: Azure Update Manager (preview) overview
+description: This article describes Azure Update Manager (preview) and how it manages system updates for your Windows and Linux machines in Azure, on-premises, and in other cloud environments.
+ Last updated 07/05/2023
-# About Update management center (preview)
+# About Azure Update Manager (preview)
> [!Important]
-> - [Automation Update management](../automation/update-management/overview.md) relies on [Log Analytics agent](../azure-monitor/agents/log-analytics-agent.md) (aka MMA agent), which is on a deprecation path and won't be supported after **August 31, 2024**. Update management center (Preview) is the v2 version of Automation Update management and the future of Update management in Azure. UMC is a native service in Azure and does not rely on [Log Analytics agent](../azure-monitor/agents/log-analytics-agent.md) or [Azure Monitor agent](../azure-monitor/agents/agents-overview.md).
-> - Guidance for migrating from Automation Update management to Update management center will be provided to customers once the latter is Generally Available. For customers using Automation Update management, we recommend continuing to use the Log Analytics agent and **NOT** migrate to Azure Monitoring agent until migration guidance is provided for Update management or else Automation Update management will not work. Also, the Log Analytics agent would not be deprecated before moving all Automation Update management customers to UMC.
+> - [Automation Update management](../automation/update-management/overview.md) relies on [Log Analytics agent](../azure-monitor/agents/log-analytics-agent.md) (aka MMA agent), which is on a deprecation path and won't be supported after **August 31, 2024**. Update Manager (preview) is the v2 version of Automation Update management and the future of Update management in Azure. Azure Update Manager (preview) is a native service in Azure and does not rely on [Log Analytics agent](../azure-monitor/agents/log-analytics-agent.md) or [Azure Monitor agent](../azure-monitor/agents/agents-overview.md).
+> - Guidance for migrating from Automation Update management to Update Manager (preview) will be provided to customers once the latter is Generally Available. For customers using Automation Update management, we recommend continuing to use the Log Analytics agent and **NOT** migrating to the Azure Monitor agent until migration guidance is provided for Update management; otherwise, Automation Update management will not work. Also, the Log Analytics agent won't be deprecated before all Automation Update management customers are moved to Update Manager (preview).
-Update management center (preview) is a unified service to help manage and govern updates for all your machines. You can monitor Windows and Linux update compliance across your deployments in Azure, on-premises, and on the other cloud platforms from a single dashboard. In addition, you can use the Update management center (preview) to make real-time updates or schedule them within a defined maintenance window.
+Update Manager (preview) is a unified service to help manage and govern updates for all your machines. You can monitor Windows and Linux update compliance across your deployments in Azure, on-premises, and on other cloud platforms from a single dashboard. In addition, you can use Update Manager (preview) to make real-time updates or schedule them within a defined maintenance window.
-You can use the update management center (preview) in Azure to:
+You can use Update Manager (preview) in Azure to:
- Oversee update compliance for your entire fleet of machines in Azure, on-premises, and other cloud environments. - Instantly deploy critical updates to help secure your machines.
You can use the update management center (preview) in Azure to:
We also offer other capabilities to help you manage updates for your Azure Virtual Machines (VM) that you should consider as part of your overall update management strategy. Review the Azure VM [Update options](../virtual-machines/updates-maintenance-overview.md) to learn more about the options available.
-Before you enable your machines for update management center (preview), make sure that you understand the information in the following sections.
+Before you enable your machines for Update Manager (preview), make sure that you understand the information in the following sections.
> [!IMPORTANT]
-> - Update management center (preview) doesn't store any customer data.
-> - Update management center (preview) can manage machines that are currently managed by Azure Automation [Update management](../automation/update-management/overview.md) feature without interrupting your update management process. However, we don't recommend migrating from Automation Update Management since this preview gives you a chance to evaluate and provide feedback on features before it's generally available (GA).
-> - While update management center is in **preview**, the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+> - Update Manager (preview) doesn't store any customer data.
+> - Update Manager (preview) can manage machines that are currently managed by Azure Automation [Update management](../automation/update-management/overview.md) feature without interrupting your update management process. However, we don't recommend migrating from Automation Update Management since this preview gives you a chance to evaluate and provide feedback on features before it's generally available (GA).
+> - While Update Manager is in **preview**, the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
## Key benefits
-Update management center (preview) has been redesigned and doesn't depend on Azure Automation or Azure Monitor Logs, as required by the [Azure Automation Update Management feature](../automation/update-management/overview.md). Update management center (preview) offers many new features and provides enhanced functionality over the original version available with Azure Automation and some of those benefits are listed below:
+Update Manager (preview) has been redesigned and doesn't depend on Azure Automation or Azure Monitor Logs, as required by the [Azure Automation Update Management feature](../automation/update-management/overview.md). Update Manager (preview) offers many new features and provides enhanced functionality over the original version available with Azure Automation; some of those benefits are listed below:
- Provides native experience with zero on-boarding. - Built as native functionality on Azure Compute and Azure Arc for Servers platform for ease of use.
Update management center (preview) has been redesigned and doesn't depend on Azu
- Global availability in all Azure Compute and Azure Arc regions. - Works with Azure roles and identity. - Granular access control at per resource level instead of access control at Automation account and Log Analytics workspace level.
- - Update management center now as Azure Resource Manager based operations. It allows RBAC and roles based of ARM in Azure.
+ - Azure Update Manager now has Azure Resource Manager-based operations. It allows RBAC and roles based on ARM in Azure.
- Enhanced flexibility - Ability to take immediate action either by installing updates immediately or by scheduling them for a later date. - Check updates automatically or on demand. - Helps secure machines with new ways of patching such as [automatic VM guest patching](../virtual-machines/automatic-vm-guest-patching.md) in Azure, [hotpatching](../automanage/automanage-hotpatch.md) or custom maintenance schedules. - Sync patch cycles in relation to Patch Tuesday, the unofficial term for Microsoft's scheduled security fix release on every second Tuesday of each month.
-The following diagram illustrates how update management center (preview) assesses and applies updates to all Azure machines and Arc-enabled servers for both Windows and Linux.
+The following diagram illustrates how Update Manager (preview) assesses and applies updates to all Azure machines and Arc-enabled servers for both Windows and Linux.
-![Update center workflow](./media/overview/update-management-center-overview.png)
+![Update Manager workflow](./media/overview/update-management-center-overview.png)
-To support management of your Azure VM or non-Azure machine, update management center (preview) relies on a new [Azure extension](../virtual-machines/extensions/overview.md) designed to provide all the functionality required to interact with the operating system to manage the assessment and application of updates. This extension is automatically installed when you initiate any update management center operations such as **check for updates**, **install one time update**, **periodic assessment** on your machine. The extension supports deployment to Azure VMs or Arc-enabled servers using the extension framework. The update management center (preview) extension is installed and managed using the following:
+To support management of your Azure VM or non-Azure machine, Update Manager (preview) relies on a new [Azure extension](../virtual-machines/extensions/overview.md) designed to provide all the functionality required to interact with the operating system to manage the assessment and application of updates. This extension is automatically installed when you initiate any Update Manager (preview) operations such as **check for updates**, **install one time update**, **periodic assessment** on your machine. The extension supports deployment to Azure VMs or Arc-enabled servers using the extension framework. The Update Manager (preview) extension is installed and managed using the following:
- [Azure virtual machine Windows agent](../virtual-machines/extensions/agent-windows.md) or [Azure virtual machine Linux agent](../virtual-machines/extensions/agent-linux.md) for Azure VMs. - [Azure arc-enabled servers agent](../azure-arc/servers/agent-overview.md) for non-Azure Linux and Windows machines or physical servers.
- The extension agent installation and configuration are managed by the update management center (preview). There's no manual intervention required as long as the Azure VM agent or Azure Arc-enabled server agent is functional. The update management center (preview) extension runs code locally on the machine to interact with the operating system, and it includes:
+ The extension agent installation and configuration are managed by Update Manager (preview). There's no manual intervention required as long as the Azure VM agent or Azure Arc-enabled server agent is functional. The Update Manager (preview) extension runs code locally on the machine to interact with the operating system, and it includes:
- Retrieving assessment information about the status of system updates, as reported by the Windows Update client or Linux package manager.
- Initiating the download and installation of approved updates with the Windows Update client or Linux package manager.
-All assessment information and update installation results are reported to update management center (preview) from the extension and is available for analysis with [Azure Resource Graph](../governance/resource-graph/overview.md). You can view up to the last seven days of assessment data, and up to the last 30 days of update installation results.
+All assessment information and update installation results are reported to Update Manager (preview) from the extension and are available for analysis with [Azure Resource Graph](../governance/resource-graph/overview.md). You can view up to the last seven days of assessment data, and up to the last 30 days of update installation results.
-The machines assigned to update management center (preview) report how up to date they're based on what source they're configured to synchronize with. [Windows Update Agent (WUA)](/windows/win32/wua_sdk/updating-the-windows-update-agent) on Windows machines can be configured to report to [Windows Server Update Services](/windows-server/administration/windows-server-update-services/get-started/windows-server-update-services-wsus) or Microsoft Update which is by default, and Linux machines can be configured to report to a local or public YUM or APT package repository. If the Windows Update Agent is configured to report to WSUS, depending on when WSUS last synchronized with Microsoft update, the results in update management center (preview) might differ from what Microsoft update shows. This behavior is the same for Linux machines that are configured to report to a local repository instead of a public package repository.
+The machines assigned to Update Manager (preview) report how up to date they are, based on the source they're configured to synchronize with. [Windows Update Agent (WUA)](/windows/win32/wua_sdk/updating-the-windows-update-agent) on Windows machines can be configured to report to [Windows Server Update Services](/windows-server/administration/windows-server-update-services/get-started/windows-server-update-services-wsus) or Microsoft Update (the default), and Linux machines can be configured to report to a local or public YUM or APT package repository. If the Windows Update Agent is configured to report to WSUS, depending on when WSUS last synchronized with Microsoft Update, the results in Update Manager (preview) might differ from what Microsoft Update shows. This behavior is the same for Linux machines that are configured to report to a local repository instead of a public package repository.
>[!NOTE]
-> You can manage your Azure VMs or Arc-enabled servers directly, or at-scale with update management center (preview).
+> You can manage your Azure VMs or Arc-enabled servers directly, or at-scale with Update Manager (preview).
## Prerequisites
-Along with the prerequisites listed below, see [support matrix](support-matrix.md) for update management center (preview).
+Along with the prerequisites listed below, see [support matrix](support-matrix.md) for Update Manager (preview).
### Role
Arc enabled server | [Azure Connected Machine Resource Administrator](../azure-a
### Permissions
-You need the following permissions to create and manage update deployments. The following table shows the permissions needed when using the update management center (preview).
+You need the following permissions to create and manage update deployments. The following table shows the permissions needed when using Update Manager (preview).
**Actions** |**Permission** |**Scope** | | | |
You need the following permissions to create and manage update deployments. The
For more information, see the [list of supported operating systems and VM images](support-matrix.md#supported-operating-systems). > [!NOTE]
-> Currently, update management center (preview) has the following limitations regarding the operating system support:
+> Currently, Update Manager (preview) has the following limitations regarding the operating system support:
> - Marketplace images other than the [list of supported marketplace OS images](../virtual-machines/automatic-vm-guest-patching.md#supported-os-images) are currently not supported.
-> - [Specialized images](../virtual-machines/linux/imaging.md#specialized-images) and **VMs created by Azure Migrate, Azure Backup, Azure Site Recovery** aren't fully supported for now. However, you can **use on-demand operations such as one-time update and check for updates** in update management center (preview).
+> - [Specialized images](../virtual-machines/linux/imaging.md#specialized-images) and **VMs created by Azure Migrate, Azure Backup, Azure Site Recovery** aren't fully supported for now. However, you can **use on-demand operations such as one-time update and check for updates** in Update Manager (preview).
>
-> For the above limitations, we recommend that you use [Automation update management](../automation/update-management/overview.md) till the support is available in Update management center (preview). [Learn more](support-matrix.md#supported-operating-systems).
+> For the above limitations, we recommend that you use [Automation update management](../automation/update-management/overview.md) until the support is available in Update Manager (preview). [Learn more](support-matrix.md#supported-operating-systems).
## VM Extensions
To view the available extensions for a VM in the Azure portal, follow these step
### Network planning
-To prepare your network to support update management center (preview), you may need to configure some infrastructure components.
+To prepare your network to support Update Manager (preview), you may need to configure some infrastructure components.
For Windows machines, you must allow traffic to any endpoints required by the Windows Update agent. You can find an updated list of required endpoints in [Issues related to HTTP/Proxy](/windows/deployment/update/windows-update-troubleshooting#issues-related-to-httpproxy). If you have a local [WSUS](/windows-server/administration/windows-server-update-services/plan/plan-your-wsus-deployment) deployment, you must also allow traffic to the server specified in your [WSUS key](/windows/deployment/update/waas-wu-settings#configuring-automatic-updates-by-editing-the-registry).
For Red Hat Linux machines, see [IPs for the RHUI content delivery servers](../v
- [Deploy updates now (on-demand) for single machine](deploy-updates.md)
- [Schedule recurring updates](scheduled-patching.md)
- [Manage update settings via Portal](manage-update-settings.md)
-- [Manage multiple machines using update management center](manage-multiple-machines.md)
+- [Manage multiple machines using Update Manager](manage-multiple-machines.md)
update-center Periodic Assessment At Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/periodic-assessment-at-scale.md
Title: Enable periodic assessment using policy
-description: This article describes how to manage the update settings for your Windows and Linux machines managed by update management center (preview).
-
+description: This article describes how to manage the update settings for your Windows and Linux machines managed by Azure Update Manager (preview).
+ Last updated 04/21/2022
# Automate assessment at scale using Policy to see latest update status
-This article describes how to enable Periodic Assessment for your machines at scale using Azure Policy. Periodic Assessment is a setting on your machine that enables you to see the latest updates available for your machines and removes the hassle of performing assessment manually every time you need to check the update status. Once you enable this setting, update management center (preview) fetches updates on your machine once every 24 hours.
+This article describes how to enable Periodic Assessment for your machines at scale using Azure Policy. Periodic Assessment is a setting on your machine that enables you to see the latest updates available for your machines and removes the hassle of performing assessment manually every time you need to check the update status. Once you enable this setting, Update Manager (preview) fetches updates on your machine once every 24 hours.
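Before the Policy walkthrough below, note that you can also enable the same setting on an individual Azure VM programmatically. The following is a minimal sketch, assuming a Windows VM, placeholder names, and an assumed `api-version`; the `assessmentMode` value `AutomaticByPlatform` is what Periodic Assessment maps to (use `linuxConfiguration` for Linux VMs).

```python
# Minimal sketch: enable periodic assessment (assessmentMode=AutomaticByPlatform)
# on a single Windows Azure VM via the ARM REST API.
# Assumptions: pip install azure-identity requests; placeholder names;
# api-version is assumed - verify against the current REST reference.
import requests
from azure.identity import DefaultAzureCredential

SUB, RG, VM = "<subscription-id>", "<resource-group>", "<vm-name>"

token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token

url = (
    f"https://management.azure.com/subscriptions/{SUB}/resourceGroups/{RG}"
    f"/providers/Microsoft.Compute/virtualMachines/{VM}?api-version=2023-03-01"
)
body = {
    "properties": {
        "osProfile": {
            "windowsConfiguration": {
                "patchSettings": {"assessmentMode": "AutomaticByPlatform"}
            }
        }
    }
}
resp = requests.patch(url, headers={"Authorization": f"Bearer {token}"}, json=body)
resp.raise_for_status()
```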
## Enable Periodic assessment for your Azure machines using Policy
You can monitor the compliance of resources under **Compliance** and remediation
## Enable Periodic assessment for your Arc machines using Policy 1. Go to **Policy** from the Azure portal and under **Authoring**, **Definitions**.
-1. From the **Category** dropdown, select **Update management center**. Select *[Preview]: Configure periodic checking for missing system updates on Azure Arc-enabled servers* for Arc-enabled machines.
+1. From the **Category** dropdown, select **Update Manager**. Select *[Preview]: Configure periodic checking for missing system updates on Azure Arc-enabled servers* for Arc-enabled machines.
1. When the Policy Definition opens, select **Assign**. 1. In **Basics**, select your subscription as your scope. You can also specify a resource group within subscription as the scope and select **Next**. 1. In **Parameters**, uncheck **Only show parameters that need input or review** so that you can see the values of parameters. In **Assessment** mode, select *AutomaticByPlatform*, select *Operating system* and select **Next**. You need to create separate policies for Windows and Linux.
You can monitor compliance of resources under **Compliance** and remediation sta
## Monitor if Periodic Assessment is enabled for your machines (both Azure and Arc-enabled machines) 1. Go to **Policy** from the Azure portal and under **Authoring**, go to **Definitions**.
-1. From the Category dropdown above, select **Update management center**. Select *[Preview]: Machines should be configured to periodically check for missing system updates*.
+1. From the Category dropdown above, select **Update Manager**. Select *[Preview]: Machines should be configured to periodically check for missing system updates*.
1. When the Policy Definition opens, select **Assign**. 1. In **Basics**, select your subscription as your scope. You can also specify a resource group within subscription as the scope. Select **Next.** 1. In **Parameters** and **Remediation**, select **Next.**
You can monitor compliance of resources under **Compliance** and remediation sta
## Next steps * [View assessment compliance](view-updates.md) and [deploy updates](deploy-updates.md) for a selected Azure VM or Arc-enabled server, or across [multiple machines](manage-multiple-machines.md) in your subscription in the Azure portal.
-* To view update assessment and deployment logs generated by update management center (preview), see [query logs](query-logs.md).
-* To troubleshoot issues, see the [Troubleshoot](troubleshoot.md) update management center (preview).
+* To view update assessment and deployment logs generated by Update Manager (preview), see [query logs](query-logs.md).
+* To troubleshoot issues, see the [Troubleshoot](troubleshoot.md) Update Manager (preview).
update-center Prerequsite For Schedule Patching https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/prerequsite-for-schedule-patching.md
Title: Configure schedule patching on Azure VMs to ensure business continuity in update management center (preview).
-description: The article describes the new prerequisites to configure scheduled patching to ensure business continuity in Update management center (preview).
-
+ Title: Configure schedule patching on Azure VMs to ensure business continuity in Azure Update Manager (preview).
+description: The article describes the new prerequisites to configure scheduled patching to ensure business continuity in Azure Update Manager (preview).
+ Last updated 05/09/2023
Additionally, in some instances, when you remove the schedule from a VM, there i
To identify the list of VMs with the associated schedules for which you have to enable the new VM property, follow these steps:
-1. Go to **Update management center (Preview)** home page and select **Machines** tab.
+1. Go to **Update Manager (preview)** home page and select **Machines** tab.
1. In **Patch orchestration** filter, select **Azure Managed - Safe Deployment**. 1. Use the **Select all** option to select the machines and then select **Export to CSV**. 1. Open the CSV file and in the column **Associated schedules**, select the rows that have an entry.
You can update the patch orchestration option for existing VMs that either alrea
To update the patch mode, follow these steps: 1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Go to **Update management center (Preview)**, select **Update Settings**.
+1. Go to **Update Manager (preview)**, select **Update Settings**.
1. In **Change update settings**, select **+Add machine**. 1. In **Select resources**, select your VMs and then select **Add**. 1. In **Change update settings**, under **Patch orchestration**, select *Customer Managed Schedules* and then select **Save**.
To update the patch mode, follow these steps:
To update the patch mode, follow these steps: 1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Go to **Update management center (Preview)**, select **Update Settings**.
+1. Go to **Update Manager (preview)**, select **Update Settings**.
1. In **Change update settings**, select **+Add machine**. 1. In **Select resources**, select your VMs and then select **Add**. 1. In **Change update settings**, under **Patch orchestration**, select ***Azure Managed - Safe Deployment*** and then select **Save**.
Scenario 8 | No | False | No | Neither the autopatch nor the schedule patch will
## Next steps
-* To troubleshoot issues, see the [Troubleshoot](troubleshoot.md) update management center (preview).
+* To troubleshoot issues, see the [Troubleshoot](troubleshoot.md) Update Manager (preview).
update-center Query Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/query-logs.md
Title: Query logs and results from Update management center (preview)
-description: The article provides details on how you can review logs and search results from update management center (preview) in Azure using Azure Resource Graph
+ Title: Query logs and results from Update Manager (preview)
+description: The article provides details on how you can review logs and search results from Update Manager (preview) in Azure using Azure Resource Graph
Last updated 04/21/2022
-# Overview of query logs in update management center (Preview)
+# Overview of query logs in Azure Update Manager (preview)
-Logs created from operations like update assessments and installations are stored by Update management center (preview) in an [Azure Resource Graph](../governance/resource-graph/overview.md). The Azure Resource Graph is a service in Azure designed to be the store for Azure service details without any cost or deployment requirements. Update management center (preview) uses the Azure Resource Graph to store its results, and you can view the update history of the last 30 days from the resources.
+Logs created from operations like update assessments and installations are stored by Update Manager (preview) in [Azure Resource Graph](../governance/resource-graph/overview.md). Azure Resource Graph is a service in Azure designed to be the store for Azure service details without any cost or deployment requirements. Update Manager (preview) uses Azure Resource Graph to store its results, and you can view the update history of the last 30 days from the resources.
Azure Resource Graph's query language is based on the [Kusto query language](../governance/resource-graph/concepts/query-language.md) used by Azure Data Explorer.
-The article describes the structure of the logs from Update management center (Preview) and how you can use [Azure Resource Graph Explorer](../governance/resource-graph/first-query-portal.md) to analyze them in support of your reporting, visualizing, and export needs.
+The article describes the structure of the logs from Update Manager (preview) and how you can use [Azure Resource Graph Explorer](../governance/resource-graph/first-query-portal.md) to analyze them in support of your reporting, visualizing, and export needs.
## Log structure
-Update management center (preview) sends the results of all its operation into Azure Resource Graph as logs, which are available for 30 days. Listed below are the structure of logs being sent to Azure Resource Graph.
+Update Manager (preview) sends the results of all its operations to Azure Resource Graph as logs, which are available for 30 days. The following sections describe the structure of the logs sent to Azure Resource Graph.
### Patch assessment results
If the `PROPERTIES` property for the resource type is `patchassessmentresults/so
|`publishedDateTime` |Timestamp representing when the specific update was made available by the OS vendor. Information is generated by the machine's OS update service or package manager. If your OS package manager or update service doesn't provide the detail of when an update was provided by OS vendor, then the value is null.| |`classifications` |Category of which the specific update belongs to as per the OS vendor. Information is generated by the machine's OS update service or package manager. If your OS package manager or update service doesn't provide the detail of category, then the value is `Others` (for Linux) or `Updates` (for Windows Server). | |`rebootRequired` |Value indicates if the specific update requires the OS to reboot to complete the installation. Information is generated by the machine's OS update service or package manager. If your OS package manager or update service doesn't require a reboot, then the value is `false`.|
-|`rebootBehavior` |Behavior set in the OS update installation runs job when configuring the update deployment if update management center (preview) can reboot the target machine. |
+|`rebootBehavior` |Behavior set in the OS update installation run job when configuring the update deployment, indicating whether Update Manager (preview) can reboot the target machine. |
|`patchName` |Name or label for the specific update generated by the machine's OS package manager or update service.| |`Kbid` |If the machine's OS is Windows Server, the value includes the unique KB ID for the update provided by the Windows Update service.| |`version` |If the machine's OS is Linux, the value includes the version details for the update as provided by Linux package manager. For example, `1.0.1.el7.3`.|
If the `PROPERTIES` property for the resource type is `patchinstallationresults/
|`publishedDateTime` |Timestamp representing when the specific update was made available by the OS vendor. Information is generated by the machine's OS update service or package manager. If your OS package manager or update service doesn't provide the detail of when an update was provided by OS vendor, then the value is null. | |`classifications` |Category that the specific update belongs to as per the OS vendor. As provided by machine's OS update service or package manager. If your OS package manager or update service doesn't provide the detail of category, then the value of the field will be Others (for Linux) and Updates (for Windows Server). | |`rebootRequired` |Flag to specify if the specific update requires the OS to reboot to complete installation. As provided by machine's OS update service or package manager. If your OS package manager or update service doesn't provide information regarding need of OS reboot, then the value of the field will be set to 'false'. |
-|`rebootBehavior` |Behavior set in the OS update installation runs job by user, regarding allowing update management center (preview) to reboot the OS. |
+|`rebootBehavior` |Behavior set by the user in the OS update installation run job, regarding whether Update Manager (preview) is allowed to reboot the OS. |
|`patchName` |Name or Label for the specific update as provided by the machine's OS package manager or update service. | |`Kbid` |If the machine's OS is Windows Server, the value includes the unique KB ID for the update provided by the Windows Update service. | |`version` |If the machine's OS is Linux, the value includes the version details for the update as provided by Linux package manager. For example, `1.0.1.el7.3`. |
If the `PROPERTIES` property for the resource type is `configurationassignments`
## Next steps - For details of sample queries, see [Sample query logs](sample-query-logs.md).-- To troubleshoot issues, see [Troubleshoot](troubleshoot.md) update management center (preview).
+- To troubleshoot issues, see [Troubleshoot](troubleshoot.md) Update Manager (preview).
update-center Quickstart On Demand https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/quickstart-on-demand.md
Title: Quickstart - deploy updates in using update management center in the Azure portal
-description: This quickstart helps you to deploy updates immediately and view results for supported machines in update management center (preview) using the Azure portal.
+ Title: Quickstart - deploy updates using Update Manager in the Azure portal
+description: This quickstart helps you to deploy updates immediately and view results for supported machines in Azure Update Manager (preview) using the Azure portal.
Last updated 04/21/2022
# Quickstart: Check and install on-demand updates
-Using the Update management center (preview) you can update automatically at scale with the help of built-in policies and schedule updates on a recurring basis or you can also take control by checking and installing updates manually.
+Using Update Manager (preview), you can update automatically at scale with the help of built-in policies, schedule updates on a recurring basis, or take control by checking and installing updates manually.
This quickstart shows you how to perform a manual assessment and apply updates to selected Azure virtual machines or Arc-enabled servers on-premises or in cloud environments.
This quickstart details you how to perform manual assessment and apply updates o
## Check updates
-1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to Update management center (preview).
+1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to Update Manager (preview).
1. Select **Getting started**, **On-demand assessment and updates**, select **Check for updates**.
For the assessed machines that are reporting updates, you can configure [hotpatc
To configure the settings on your machines, follow these steps:
-1. In **Update management center (Preview)|Getting started**, in **On-demand assessment and updates**, select **Update settings**.
+1. In **Update Manager (preview)|Getting started**, in **On-demand assessment and updates**, select **Update settings**.
In the **Change update settings** page, by default **Properties** is selected. 1. Select from the list of update settings to apply them to the selected machines.
To configure the settings on your machines, follow these steps:
Based on the last assessment performed on the selected machines, you can now select resources and machines to install the updates.
-1. In the **Update management center(Preview)|Getting started** page, in **On-demand assessment and updates**, select **Install updates by machines**.
+1. In the **Update Manager (preview)|Getting started** page, in **On-demand assessment and updates**, select **Install updates by machines**.
1. In the **Install one-time updates** page, select one or more machines from the list in the **Machines** tab and click **Next**.
As per the last assessment performed on the selected machines, you can now selec
1. In **Review + install**, verify the update deployment options and select **Install**.
-A notification confirms that the installation of updates is in progress and after completion, you can view the results in the **Update management center**, **History** page.
+A notification confirms that the installation of updates is in progress and after completion, you can view the results in the **Update Manager**, **History** page.
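If you'd rather trigger the same one-time installation programmatically, the following minimal sketch POSTs to the VM's `installPatches` action. The duration, reboot setting, and classifications are illustrative values, and the `api-version` is an assumption; verify the request shape against the current REST reference.

```python
# Minimal sketch: install selected update classifications on an Azure VM via REST.
# Assumptions: pip install azure-identity requests; placeholder names;
# api-version and body shape are assumed - verify against the REST reference.
import requests
from azure.identity import DefaultAzureCredential

SUB, RG, VM = "<subscription-id>", "<resource-group>", "<vm-name>"

token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token

url = (
    f"https://management.azure.com/subscriptions/{SUB}/resourceGroups/{RG}"
    f"/providers/Microsoft.Compute/virtualMachines/{VM}/installPatches"
    "?api-version=2023-03-01"
)
body = {
    "maximumDuration": "PT2H",      # ISO 8601 limit for the patch run
    "rebootSetting": "IfRequired",  # Never | IfRequired | Always
    "windowsParameters": {"classificationsToInclude": ["Critical", "Security"]},
}
resp = requests.post(url, headers={"Authorization": f"Bearer {token}"}, json=body)
resp.raise_for_status()
print(resp.status_code, resp.headers.get("Azure-AsyncOperation"))
```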
## Next steps
update-center Sample Query Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/sample-query-logs.md
Title: Sample query logs and results from Update management center (preview)
-description: The article provides details of sample query logs from update management center (preview) in Azure using Azure Resource Graph
-
+ Title: Sample query logs and results from Azure Update Manager (preview)
+description: The article provides details of sample query logs from Azure Update Manager (preview) in Azure using Azure Resource Graph
+ Last updated 04/21/2022
maintenanceresources
```
## Next steps
-- Review logs and search results from update management center (preview) in Azure using [Azure Resource Graph](query-logs.md).
-- Troubleshoot issues in update management center (preview), see the [Troubleshoot](troubleshoot.md).
+- Review logs and search results from Update Manager (preview) in Azure using [Azure Resource Graph](query-logs.md).
+- To troubleshoot issues in Update Manager (preview), see [Troubleshoot](troubleshoot.md).
update-center Scheduled Patching https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/scheduled-patching.md
Title: Scheduling recurring updates in Update management center (preview)
-description: The article details how to use update management center (preview) in Azure to set update schedules that install recurring updates on your machines.
-
+ Title: Scheduling recurring updates in Azure Update Manager (preview)
+description: The article details how to use Azure Update Manager (preview) in Azure to set update schedules that install recurring updates on your machines.
+ Last updated 05/30/2023
> - For a seamless scheduled patching experience, we recommend that for all Azure VMs, you update the patch orchestration to **Customer Managed Schedules (Preview)** by **30th June 2023**. If you fail to update the patch orchestration by **30th June 2023**, you can experience a disruption in business continuity because the schedules will fail to patch the VMs. [Learn more](prerequsite-for-schedule-patching.md).
-You can use update management center (preview) in Azure to create and save recurring deployment schedules. You can create a schedule on a daily, weekly or hourly cadence, specify the machines that must be updated as part of the schedule, and the updates to be installed. This schedule will then automatically install the updates as per the created schedule for single VM and at scale.
+You can use Update Manager (preview) in Azure to create and save recurring deployment schedules. You can create a schedule on a daily, weekly, or hourly cadence, specify the machines that must be updated as part of the schedule, and the updates to be installed. The schedule then automatically installs the updates as defined, both for a single VM and at scale.
-Update management center (preview) uses maintenance control schedule instead of creating its own schedules. Maintenance control enables customers to manage platform updates. For more information, see [Maintenance control documentation](/azure/virtual-machines/maintenance-control).
+Update Manager (preview) uses the maintenance control schedule instead of creating its own schedules. Maintenance control enables customers to manage platform updates. For more information, see the [Maintenance control documentation](/azure/virtual-machines/maintenance-control).
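For orientation, here is a hedged sketch of what creating such a maintenance configuration looks like through the ARM REST API. The recurrence syntax, `api-version`, and property names are assumptions based on the public Maintenance Configurations reference; verify them before relying on this.

```python
# Minimal sketch: create an in-guest patching maintenance configuration via REST.
# Assumptions: pip install azure-identity requests; placeholder names and region;
# api-version and recurEvery syntax are assumed - verify against the reference.
import requests
from azure.identity import DefaultAzureCredential

SUB, RG, CONFIG = "<subscription-id>", "<resource-group>", "<config-name>"

token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token

url = (
    f"https://management.azure.com/subscriptions/{SUB}/resourceGroups/{RG}"
    f"/providers/Microsoft.Maintenance/maintenanceConfigurations/{CONFIG}"
    "?api-version=2023-04-01"
)
body = {
    "location": "eastus",  # placeholder region
    "properties": {
        "maintenanceScope": "InGuestPatch",
        "maintenanceWindow": {
            "startDateTime": "2023-10-07 03:00",
            "duration": "03:55",
            "timeZone": "UTC",
            "recurEvery": "1Week Saturday",  # assumed recurrence syntax
        },
        "installPatches": {
            "rebootSetting": "IfRequired",
            "windowsParameters": {"classificationsToInclude": ["Critical", "Security"]},
        },
        "extensionProperties": {"InGuestPatchMode": "User"},
    },
}
resp = requests.put(url, headers={"Authorization": f"Bearer {token}"}, json=body)
resp.raise_for_status()
```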
## Prerequisites for scheduled patching
-1. See [Prerequisites for Update management center (preview)](./overview.md#prerequisites)
+1. See [Prerequisites for Update Manager (preview)](./overview.md#prerequisites)
1. Patch orchestration of the Azure machines should be set to **Customer Managed Schedules (Preview)**. For more information, see [how to enable schedule patching on existing VMs](prerequsite-for-schedule-patching.md#enable-schedule-patching-on-azure-vms). For Azure Arc-enabled machines, it isn't a requirement. > [!Note]
Update management center (preview) uses maintenance control schedule instead of
1. All VMs in a common [availability set](../virtual-machines/availability-set-overview.md) aren't updated concurrently. 1. VMs in a common availability set are updated within Update Domain boundaries and, VMs across multiple Update Domains aren't updated concurrently.
+## Configure reboot settings
+
+The registry keys listed in [Configuring Automatic Updates by editing the registry](/windows/deployment/update/waas-wu-settings#configuring-automatic-updates-by-editing-the-registry) and [Registry keys used to manage restart](/windows/deployment/update/waas-restart#registry-keys-used-to-manage-restart) can cause your machines to reboot, even if you specify **Never Reboot** in the **Schedule** settings. Configure these registry keys to best suit your environment.
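To see whether any of those reboot-related values are set on a machine, a quick inspection sketch like the one below can help. It checks two well-known value names under the Windows Update policy key; run it on the target Windows machine, and treat the value list as illustrative rather than exhaustive.

```python
# Minimal sketch: inspect Windows Update reboot-related registry values that can
# override "Never Reboot" schedule settings. Run on the target Windows machine.
# The value names below come from the public Windows Update registry docs;
# extend the tuple for any other values relevant to your environment.
import winreg

AU_KEY = r"SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU"
VALUES = ("NoAutoRebootWithLoggedOnUsers", "AlwaysAutoRebootAtScheduledTime")

try:
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, AU_KEY) as key:
        for name in VALUES:
            try:
                data, _ = winreg.QueryValueEx(key, name)
                print(f"{name} = {data}")
            except FileNotFoundError:
                print(f"{name} is not set")
except FileNotFoundError:
    print("WindowsUpdate AU policy key not present")
```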
+ ## Service limits The following are the recommended limits for the mentioned indicators:
The following are the recommended limits for the mentioned indicators:
## Schedule recurring updates on single VM >[!NOTE]
-> You can schedule updates from the Overview or Machines blade in update management center (preview) page or from the selected VM.
+> You can schedule updates from the Overview or Machines blade on the Update Manager (preview) page, or from the selected VM.
# [From Overview blade](#tab/schedule-updates-single-overview)
To schedule recurring updates on a single VM, follow these steps:
1. Sign in to the [Azure portal](https://portal.azure.com).
-1. In **Update management center (preview)**, **Overview**, select your **Subscription**, and select **Schedule updates**.
+1. In **Update Manager (preview)**, **Overview**, select your **Subscription**, and select **Schedule updates**.
1. In **Create new maintenance configuration**, you can create a schedule for a single VM.
To schedule recurring updates on a single VM, follow these steps:
1. In the **Updates** page, specify the updates to include in the deployment such as update classification(s) or KB ID/ packages that must be installed when you trigger your schedule. > [!Note]
- > Update management center (preview) doesn't support driver updates.
+ > Update Manager (preview) doesn't support driver updates.
1. In the **Tags** page, assign tags to maintenance configurations.
To schedule recurring updates on a single VM, follow these steps:
1. Sign in to the [Azure portal](https://portal.azure.com).
-1. In **Update management center (Preview)**, **Machines**, select your **Subscription**, select your machine and select **Schedule updates**.
+1. In **Update Manager (preview)**, **Machines**, select your **Subscription**, select your machine and select **Schedule updates**.
1. In **Create new maintenance configuration**, you can create a schedule for a single VM, assign machine and tags. Follow the procedure from step 3 listed in **From Overview blade** of [Schedule recurring updates on single VM](#schedule-recurring-updates-on-single-vm) to create a maintenance configuration and assign a schedule.
To schedule recurring updates at scale, follow these steps:
1. Sign in to the [Azure portal](https://portal.azure.com).
-1. In **Update management center (Preview)**, **Overview**, select your **Subscription** and select **Schedule updates**.
+1. In **Update Manager (preview)**, **Overview**, select your **Subscription** and select **Schedule updates**.
1. In the **Create new maintenance configuration** page, you can create a schedule for multiple machines.
To schedule recurring updates at scale, follow these steps:
1. In the **Updates** page, specify the updates to include in the deployment such as update classification(s) or KB ID/ packages that must be installed when you trigger your schedule. > [!Note]
- > Update management center (preview) doesn't support driver updates.
+ > Update Manager (preview) doesn't support driver updates.
1. In the **Tags** page, assign tags to maintenance configurations.
To schedule recurring updates at scale, follow these steps:
1. Sign in to the [Azure portal](https://portal.azure.com).
-1. In **Update management center (Preview)**, **Machines**, select your **Subscription**, select your machines and select **Schedule updates**.
+1. In **Update Manager (preview)**, **Machines**, select your **Subscription**, select your machines and select **Schedule updates**.
In **Create new maintenance configuration**, you can create a schedule for a single VM. Follow the procedure from step 3 listed in **From Overview blade** of [Schedule recurring updates on single VM](#schedule-recurring-updates-on-single-vm) to create a maintenance configuration and assign a schedule.
A notification appears that the deployment is created.
## Attach a maintenance configuration A maintenance configuration can be attached to multiple machines. It can be attached to machines at the time of creating a new maintenance configuration or even after you've created one.
- 1. In **Update management center**, select **Machines** and select your **Subscription**.
+ 1. In **Update Manager**, select **Machines** and select your **Subscription**.
1. Select your machine and in **Updates (Preview)**, select **Scheduled updates** to create a maintenance configuration or attach existing maintenance configuration to the scheduled recurring updates. 1. In **Scheduling**, select **Attach maintenance configuration**. 1. Select the maintenance configuration that you would want to attach and select **Attach**.
You can create a new Guest OS update maintenance configuration or modify an exis
## Onboarding to Schedule using Policy
-The update management center (preview) allows you to target a group of Azure or non-Azure VMs for update deployment via Azure Policy. The grouping using policy, keeps you from having to edit your deployment to update machines. You can use subscription, resource group, tags or regions to define the scope and use this feature for the built-in policies which you can customize as per your use-case.
+Update Manager (preview) allows you to target a group of Azure or non-Azure VMs for update deployment via Azure Policy. Grouping with policy keeps you from having to edit your deployment to update machines. You can use subscription, resource group, tags, or regions to define the scope, and use this feature with the built-in policies, which you can customize for your use case.
> [!NOTE] > This policy also ensures that the patch orchestration property for Azure machines is set to **Customer Managed Schedules (Preview)** as it is a prerequisite for scheduled patching.
Policy allows you to assign standards and assess compliance at scale. [Learn mor
1. Under **Basics**, in the **Assign policy** page:
   - In **Scope**, choose your subscription and resource group, and choose **Select**.
   - Select **Policy definition** to view a list of policies.
- - In **Available Definitions**, select **Built in** for Type and in search, enter - *[Preview] Schedule recurring updates using Update Management Center* and click **Select**.
+ - In **Available Definitions**, select **Built in** for **Type**. In the search box, enter *[Preview] Schedule recurring updates using Update Manager*, and then click **Select**.
:::image type="content" source="./media/scheduled-updates/dynamic-scoping-defintion.png" alt-text="Screenshot that shows on how to select the definition.":::
To view the current compliance state of your existing resources:
:::image type="content" source="./media/scheduled-updates/dynamic-scoping-policy-compliance.png" alt-text="Screenshot that shows on policy compliance."::: ## Check your scheduled patching run
-You can check the deployment status and history of your maintenance configuration runs from the Update management center portal. Follow [Update deployment history by maintenance run ID](./manage-multiple-machines.md#update-deployment-history-by-maintenance-run-id).
+You can check the deployment status and history of your maintenance configuration runs from the Update Manager portal. Follow [Update deployment history by maintenance run ID](./manage-multiple-machines.md#update-deployment-history-by-maintenance-run-id).
## Next steps
-* To view update assessment and deployment logs generated by update management center (preview), see [query logs](query-logs.md).
-* To troubleshoot issues, see the [Troubleshoot](troubleshoot.md) update management center (preview).
+* To view update assessment and deployment logs generated by Update Manager (preview), see [query logs](query-logs.md).
+* To troubleshoot issues, see [Troubleshoot Azure Update Manager (preview)](troubleshoot.md).
update-center Security Awareness Ubuntu Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/security-awareness-ubuntu-support.md
+
+ Title: Security awareness and Ubuntu Pro support in Azure Update Manager
+description: Guidance on security awareness and Ubuntu Pro support in Azure Update Manager.
+++ Last updated : 08/24/2023+++
+# Guidance on security awareness and Ubuntu Pro support
+
+**Applies to:** :heavy_check_mark: Windows VMs :heavy_check_mark: Linux VMs :heavy_check_mark: On-premises environment :heavy_check_mark: Azure Arc-enabled servers.
++
+This article provides the details on security vulnerabilities and Ubuntu Pro support in Azure Update Manager.
+
+If you're using Ubuntu 18.04 LTS, you must take steps to protect against security vulnerabilities, because the Ubuntu 18.04 image reached the end of its [standard security maintenance](https://ubuntu.com/blog/18-04-end-of-standard-support) in May 2023. Because Canonical stopped publishing new security or critical updates after May 2023, systems and data are at high risk from potential security threats. Without software updates, you may also experience performance or compatibility issues whenever new hardware or software is released.
+
+You can either upgrade to [Ubuntu Pro](https://ubuntu.com/azure/pro) or migrate to a newer LTS release to avoid future disruption to the patching mechanisms. When you [upgrade to Ubuntu Pro](https://ubuntu.com/blog/enhancing-the-ubuntu-experience-on-azure-introducing-ubuntu-pro-updates-awareness), you avoid the security and performance issues that arise from missing patches.
++
+## Ubuntu Pro on Azure Update Manager
+
+Azure Update Manager assesses both Azure VMs and Arc-enabled VMs and indicates any action needed. Update Manager helps identify Ubuntu instances that don't have the available security updates and lets you upgrade to Ubuntu Pro from the Azure portal. For example, an Ubuntu Server 18.04 LTS instance in Azure Update Manager shows information about upgrading to Ubuntu Pro.
++
+You can continue to use the Azure Update Manager [capabilities](updates-maintenance-schedules.md) to remain secure after migrating to a supported model from Canonical.
+
+> [!NOTE]
+> - [Ubuntu Pro](https://ubuntu.com/azure/pro) provides support for 18.04 LTS from Canonical until 2028 through Expanded Security Maintenance (ESM). You can also [upgrade to Ubuntu Pro from the Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/canonical.0001-com-ubuntu-pro-bionic?tab=Overview).
+> - Ubuntu offers 20.04 LTS and 22.04 LTS as a migration from 18.04 LTS. [Learn more](https://ubuntu.com/18-04/azure).
+
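As a hedged illustration, one documented step of the in-place upgrade is switching the VM's license type to Ubuntu Pro. The sketch below uses assumed resource names, and the license value should be verified against the current upgrade guidance before use.

```powershell
# Illustrative sketch only (assumed names); verify the current in-place Ubuntu Pro
# upgrade procedure before relying on this. Additional in-guest steps may be required.
$vm = Get-AzVM -ResourceGroupName "rg-linux" -Name "vm-ubuntu-1804"
$vm.LicenseType = "UBUNTU_PRO"
Update-AzVM -ResourceGroupName "rg-linux" -VM $vm
```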
+
+## Next steps
+- [An overview on Azure Update Manager](overview.md)
+- [View updates for single machine](view-updates.md)
+- [Deploy updates now (on-demand) for single machine](deploy-updates.md)
+- [Schedule recurring updates](scheduled-patching.md)
update-center Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/support-matrix.md
Title: Update management center (preview) support matrix
+ Title: Azure Update Manager (preview) support matrix
description: Provides a summary of supported regions and operating system settings.-+ Last updated 07/11/2023
-# Support matrix for update management center (preview)
+# Support matrix for Azure Update Manager (preview)
-This article details the Windows and Linux operating systems supported and system requirements for machines or servers managed by update management center (preview) including the supported regions and specific versions of the Windows Server and Linux operating systems running on Azure VMs or machines managed by Arc-enabled servers.
+This article details the Windows and Linux operating systems supported and the system requirements for machines or servers managed by Update Manager (preview), including the supported regions and the specific versions of the Windows Server and Linux operating systems running on Azure VMs or machines managed by Arc-enabled servers.
## Update sources supported
-**Windows**: [Windows Update Agent (WUA)](/windows/win32/wua_sdk/updating-the-windows-update-agent) reports to Microsoft Update by default, but you can configure it to report to [Windows Server Update Services (WSUS)](/windows-server/administration/windows-server-update-services/get-started/windows-server-update-services-wsus). If you configure WUA to report to WSUS, based on the WSUS's last synchronization with Microsoft update, the results in the update management center (preview) might differ to what the Microsoft update shows. You can specify sources for scanning and downloading updates using [specify intranet Microsoft Update service location](/windows/deployment/update/waas-wu-settings?branch=main#specify-intranet-microsoft-update-service-location). To restrict machines to the internal update service, see [Do not connect to any Windows Update Internet locations](/windows-server/administration/windows-server-update-services/deploy/4-configure-group-policy-settings-for-automatic-updates?branch=main#do-not-connect-to-any-windows-update-internet-locations)
+**Windows**: [Windows Update Agent (WUA)](/windows/win32/wua_sdk/updating-the-windows-update-agent) reports to Microsoft Update by default, but you can configure it to report to [Windows Server Update Services (WSUS)](/windows-server/administration/windows-server-update-services/get-started/windows-server-update-services-wsus). If you configure WUA to report to WSUS, then depending on when WSUS last synchronized with Microsoft Update, the results in Update Manager (preview) might differ from what Microsoft Update shows. You can specify sources for scanning and downloading updates using [specify intranet Microsoft Update service location](/windows/deployment/update/waas-wu-settings?branch=main#specify-intranet-microsoft-update-service-location). To restrict machines to the internal update service, see [Do not connect to any Windows Update Internet locations](/windows-server/administration/windows-server-update-services/deploy/4-configure-group-policy-settings-for-automatic-updates?branch=main#do-not-connect-to-any-windows-update-internet-locations).
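
For reference, pointing WUA at an internal update service comes down to the Windows Update policy registry values described in the linked articles. A minimal sketch, with an assumed WSUS server URL:

```powershell
# Illustrative sketch: point the Windows Update Agent at an internal WSUS server.
# The WSUS URL is an assumption; the registry values are the documented Windows Update policy keys.
$wuKey = "HKLM:\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate"
New-Item -Path $wuKey -Force | Out-Null
Set-ItemProperty -Path $wuKey -Name "WUServer" -Value "http://wsus.contoso.local:8530"
Set-ItemProperty -Path $wuKey -Name "WUStatusServer" -Value "http://wsus.contoso.local:8530"

# Tell the Automatic Updates client to use the intranet server configured above.
New-Item -Path "$wuKey\AU" -Force | Out-Null
Set-ItemProperty -Path "$wuKey\AU" -Name "UseWUServer" -Value 1 -Type DWord
```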
-**Linux**: You can configure Linux machines to report to a local or public YUM or APT package repository. The results shown in update management center (preview) depend on where the machines are configured to report.
+**Linux**: You can configure Linux machines to report to a local or public YUM or APT package repository. The results shown in Update Manager (preview) depend on where the machines are configured to report.
## Types of updates supported ### Operating system updates
-Update management center (preview) supports operating system updates for both Windows and Linux.
+Update Manager (preview) supports operating system updates for both Windows and Linux.
> [!NOTE]
-> Update management center (preview) doesn't support driver Updates.
+> Update Manager (preview) doesn't support driver Updates.
### First party updates on Windows

By default, the Windows Update client is configured to provide updates only for the Windows operating system. If you enable the **Give me updates for other Microsoft products when I update Windows** setting, you also receive updates for other Microsoft products, including security patches for Microsoft SQL Server and other Microsoft software. Use one of the following options to perform the settings change at scale:

-- For Servers configured to patch on a schedule from Update management center (that has the VM PatchSettings set to AutomaticByPlatform = Azure-Orchestrated), and for all Windows Servers running on an earlier operating system than server 2016, Run the following PowerShell script on the server you want to change.
+- For servers configured to patch on a schedule from Update Manager (that have the VM PatchSettings set to AutomaticByPlatform = Azure-Orchestrated), and for all Windows Servers running an operating system earlier than Server 2016, run the following PowerShell script on the server you want to change.
```powershell
$ServiceManager = (New-Object -com "Microsoft.Update.ServiceManager")
$ServiceID = "7971f918-a847-4430-9279-4a52d1efe18d" $ServiceManager.AddService2($ServiceId,7,"") ```-- For servers running Server 2016 or later which are not using Update management center scheduled patching (that has the VM PatchSettings set to AutomaticByOS = Azure-Orchestrated) you can use Group Policy to control this by downloading and using the latest Group Policy [Administrative template files](https://learn.microsoft.com/troubleshoot/windows-client/group-policy/create-and-manage-central-store).
+- For servers running Server 2016 or later which aren't using Update Manager scheduled patching (that has the VM PatchSettings set to AutomaticByOS = Azure-Orchestrated) you can use Group Policy to control this by downloading and using the latest Group Policy [Administrative template files](https://learn.microsoft.com/troubleshoot/windows-client/group-policy/create-and-manage-central-store).
> [!NOTE]
> Run the following PowerShell script on the server to disable first party updates.
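
The script itself is elided from this changelog. A minimal sketch of what such a script might look like, assuming the same Microsoft Update service ID registered above, is:

```powershell
# Hedged sketch: unregister the Microsoft Update service so the client only offers Windows updates.
# The service ID matches the one registered in the script above.
$ServiceManager = (New-Object -com "Microsoft.Update.ServiceManager")
$ServiceManager.RemoveService("7971f918-a847-4430-9279-4a52d1efe18d")
```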
### Third-party updates
-**Windows**: Update Management relies on the locally configured update repository to update supported Windows systems, either WSUS or Windows Update. Tools such as [System Center Updates Publisher](/mem/configmgr/sum/tools/updates-publisher) allow you to import and publish custom updates with WSUS. This scenario allows update management to update machines that use Configuration Manager as their update repository with third-party software. To learn how to configure Updates Publisher, see [Install Updates Publisher](/mem/configmgr/sum/tools/install-updates-publisher).
+**Windows**: Update Manager relies on the locally configured update repository to update supported Windows systems, either WSUS or Windows Update. Tools such as [System Center Updates Publisher](/mem/configmgr/sum/tools/updates-publisher) allow you to import and publish custom updates with WSUS. This scenario allows Update Manager to update machines that use Configuration Manager as their update repository with third-party software. To learn how to configure Updates Publisher, see [Install Updates Publisher](/mem/configmgr/sum/tools/install-updates-publisher).
-**Linux**: If you include a specific third party software repository in the Linux package manager repository location, it is scanned when it performs software update operations. The package won't be available for assessment and installation if you remove it.
+**Linux**: If you include a specific third-party software repository in the Linux package manager repository location, it's scanned when Update Manager performs software update operations. The package won't be available for assessment and installation if you remove it.
+
+> [!NOTE]
+> Update Manager does not support managing the Microsoft Configuration Manager client.
## Supported regions
-Update management center (preview) will scale to all regions for both Azure VMs and Azure Arc-enabled servers. Listed below are the Azure public cloud where you can use update management center (preview).
+Update Manager (preview) scales to all regions for both Azure VMs and Azure Arc-enabled servers. Listed below are the Azure public cloud regions where you can use Update Manager (preview).
# [Azure virtual machine](#tab/azurevm)
-Update management center (preview) is available in all Azure public regions where compute virtual machines are available.
+Update Manager (preview) is available in all Azure public regions where compute virtual machines are available.
# [Azure Arc-enabled servers](#tab/azurearc)
-Update management center (preview) is supported in the following regions currently. It implies that VMs must be in below regions:
+Update Manager (preview) is currently supported in the following regions, which means VMs must be in one of the regions below:
**Geography** | **Supported Regions**
--- | ---
Africa | South Africa North
Asia Pacific | East Asia </br> South East Asia
Australia | Australia East
Brazil | Brazil South
-Canada | Canada Central
+Canada | Canada Central </br> Canada East
Europe | North Europe </br> West Europe
France | France Central
India | Central India
Japan | Japan East
Korea | Korea Central
+Sweden | Sweden Central
Switzerland | Switzerland North
United Kingdom | UK South </br> UK West
United States | Central US </br> East US </br> East US 2</br> North Central US </br> South Central US </br> West Central US </br> West US </br> West US 2 </br> West US 3
> [!NOTE] > - All operating systems are assumed to be x64. x86 isn't supported for any operating system.
-> - Update management center (preview) doesn't support CIS hardened images.
+> - Update Manager (preview) doesn't support CIS hardened images.
# [Azure VMs](#tab/azurevm-os) > [!NOTE]
-> Currently, update management center has the following limitations regarding the operating system support:
-> - Marketplace images other than the [list of supported marketplace OS images](../virtual-machines/automatic-vm-guest-patching.md#supported-os-images) are currently not supported.
-> - [Specialized images](../virtual-machines/linux/imaging.md#specialized-images) and **VMs created by Azure Migrate, Azure Backup, Azure Site Recovery** aren't fully supported for now. However, you can **use on-demand operations such as one-time update and check for updates** in update management center (preview).
+> Currently, Update Manager has the following limitation regarding the operating system support:
+> - [Specialized images](../virtual-machines/linux/imaging.md#specialized-images) and **VMs created by Azure Migrate, Azure Backup, Azure Site Recovery** aren't fully supported for now. However, you can **use on-demand operations such as one-time update and check for updates** in Update Manager (preview).
>
-> For the above limitations, we recommend that you use [Automation update management](../automation/update-management/overview.md) till the support is available in Update management center (preview).
-
-**Marketplace/PIR images**
-
-Currently, we support a combination of Offer, Publisher, and Sku of the image. Ensure that you match all the three to confirm support. For more information, see [list of supported marketplace OS images](../virtual-machines/automatic-vm-guest-patching.md#supported-os-images).
-
-**Custom images**
-
-We support [generalized](../virtual-machines/linux/imaging.md#generalized-images) custom images. Table below lists the operating systems that we support for generalized images. Refer to [custom images (preview)](manage-updates-customized-images.md) for instructions on how to start using Update manage center to manage updates on custom images.
+> For the above limitation, we recommend that you use [Automation Update Management](../automation/update-management/overview.md) until support is available in Update Manager (preview).
++
+### Marketplace/PIR images
+
+The Marketplace image in Azure has the following attributes:
+- **Publisher** - The organization that creates the image. Examples: Canonical, MicrosoftWindowsServer
+- **Offer** - The name of the group of related images created by the publisher. Examples: UbuntuServer, WindowsServer
+- **SKU** - An instance of an offer, such as a major release of a distribution. Examples: 18.04LTS, 2019-Datacenter
+- **Version** - The version number of an image SKU.
+
+Azure Update Manager supports the following operating system versions. However, you could experience failures if there are any configuration changes on the VMs, such as package or repository changes.
+
+#### Windows operating systems
+
+| **Publisher** | **Version(s)** |
+|-|-|
+|Microsoft Windows Server | 1709, 1803, 1809, 2012, 2016, 2019, 2022|
+|Microsoft Windows Server HPC Pack | 2012, 2016, 2019 |
+|Microsoft SQL Server | 2008, 2012, 2014, 2016, 2017, 2019, 2022 |
+|Microsoft Visual Studio | ws2012r2, ws2016, ws2019, ws2022 |
+|Microsoft Azure Site Recovery | Windows 2012
+|Microsoft Biz Talk Server | 2016, 2020 |
+|Microsoft DynamicsAx | ax7 |
+|Microsoft Power BI | 2016, 2017, 2019, 2022 |
+|Microsoft Sharepoint | sp* |
+
+#### Linux operating systems
+
+| **Publisher** | **Version(s)** |
+|-|-|
+|Canonical | Ubuntu 16.04, 18.04, 20.04, 22.04 |
+|RedHat | RHEL 7,8,9|
+|Openlogic | CentOS 7|
+|SUSE 12 |sles, sles-byos, sap, sap-byos, sapcal, sles-standard |
+|SUSE 15 | basic, hpc, opensuse, sles, sap, sapcal|
+|Oracle Linux | 7*, ol7*, ol8*, ol9* |
+|Oracle Database | 21, 19-0904, 18.*|
+
+#### Unsupported operating systems
+
+The following table lists the operating systems for marketplace images that aren't supported:
+
+| **Publisher**| **OS Offer** | **SKU**|
+|-|-|--|
+|OpenLogic | CentOS | 8* |
+|OpenLogic | centos-hpc| * |
+|Oracle | Oracle-Linux | 8, 8-ci, 81, 81-ci, 81-gen2, ol82, ol8_2-gen2, ol82-gen2, ol83-lvm, ol83-lvm-gen2, ol84-lvm, ol84-lvm-gen2 |
+|Red Hat | RHEL | 74-gen2 |
+|Red Hat | RHEL-HANA | 7.4, 7.5, 7.6, 8.1, 81_gen2 |
+|Red Hat | RHEL-SAP | 7.4, 7.5, 7.7 |
+|Red Hat | RHEL-SAP-HANA | 7.5 |
+|Microsoft SQL Server | SQL 2019-SLES* | * |
+|Microsoft SQL Server | SQL 2019-RHEL7 | * |
+|Microsoft SQL Server | SQL 2017-RHEL7 | * |
+|Microsoft | microsoft-ads |*.* |
+|SUSE| sles-sap-15-*-byos | gen *|
+
+### Custom images
+
+We support [generalized](../virtual-machines/linux/imaging.md#generalized-images) custom images. The table below lists the operating systems that we support for generalized images. Refer to [custom images (preview)](manage-updates-customized-images.md) for instructions on how to start using Update Manager (preview) to manage updates on custom images.
|**Windows Operating System**|
|-- |
The following table lists the operating systems that aren't supported:
| Azure Kubernetes Nodes| We recommend the patching described in [Apply security and kernel updates to Linux nodes in Azure Kubernetes Service (AKS)](https://learn.microsoft.com/azure/aks/node-updates-kured).|
-As the Update management center (preview) depends on your machine's OS package manager or update service, ensure that the Linux package manager, or Windows Update client are enabled and can connect with an update source or repository. If you're running a Windows Server OS on your machine, see [configure Windows Update settings](configure-wu-agent.md).
+Because Update Manager (preview) depends on your machine's OS package manager or update service, ensure that the Linux package manager or the Windows Update client is enabled and can connect with an update source or repository. If you're running a Windows Server OS on your machine, see [configure Windows Update settings](configure-wu-agent.md).
## Next steps
update-center Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/troubleshoot.md
Title: Troubleshoot known issues with update management center (preview)
-description: The article provides details on the known issues and troubleshooting any problems with update management center (preview).
-
+ Title: Troubleshoot known issues with Azure Update Manager (preview)
+description: The article provides details on the known issues and troubleshooting any problems with Azure Update Manager (preview).
+ Last updated 05/30/2023
-# Troubleshoot issues with update management center (preview)
+# Troubleshoot issues with Azure Update Manager (preview)
-This article describes the errors that might occur when you deploy or use update management center (preview), how to resolve them and the known issues and limitations of scheduled patching.
+This article describes the errors that might occur when you deploy or use Update Manager (preview), how to resolve them, and the known issues and limitations of scheduled patching.
## General troubleshooting
If you don't want any patch installation to be orchestrated by Azure or aren't u
### Cause
-The Update Agent (Windows Update Agent on Windows; the package manager for a Linux distribution) isn't configured correctly. Update Management relies on the machine's Update Agent to provide the updates that are needed, the status of the patch, and the results of deployed patches. Without this information, Update Management can't properly report on the patches that are needed or installed.
+The Update Agent (Windows Update Agent on Windows; the package manager for a Linux distribution) isn't configured correctly. Update Manager relies on the machine's Update Agent to provide the updates that are needed, the status of the patch, and the results of deployed patches. Without this information, Update Manager can't properly report on the patches that are needed or installed.
### Resolution
To review the logs related to all actions performed by the extension, on Windows
- For concurrent/conflicting schedules, only one schedule is triggered. The other schedule is triggered once the first schedule finishes.
- If a machine is newly created, the schedule might have a trigger delay of up to 15 minutes in the case of Azure VMs.
-- Policy definition *[Preview]: Schedule recurring updates using Update Management Center* with version 1.0.0-preview successfully remediates resources however, it will always show them as non-compliant. The current value of the existence condition is a placeholder that will always evaluate to false.
+- Policy definition *[Preview]: Schedule recurring updates using Update Manager* with version 1.0.0-preview successfully remediates resources; however, it always shows them as non-compliant. The current value of the existence condition is a placeholder that always evaluates to false.
### Scenario: Unable to apply patches for the shutdown machines
Setting a longer time range for maximum duration when triggering an [on-demand u
## Next steps
-* To learn more about Azure Update management center (preview), see the [Overview](overview.md).
-* To view logged results from all your machines, see [Querying logs and results from update management center (preview)](query-logs.md).
+* To learn more about Azure Update Manager (preview), see the [Overview](overview.md).
+* To view logged results from all your machines, see [Querying logs and results from Update Manager (preview)](query-logs.md).
update-center Tutorial Dynamic Grouping For Scheduled Patching https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/tutorial-dynamic-grouping-for-scheduled-patching.md
Title: Schedule updates on Dynamic scoping (preview). description: In this tutorial, you learn how to group machines, dynamically apply the updates at scale.-+ Last updated 07/05/2023
In this tutorial, you learn how to:
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
-## Prerequisites
--- Patch Orchestration must be set to Customer Managed Schedules (Preview). This sets patch mode to AutomaticByPlatform and the **BypassPlatformSafetyChecksOnUserSchedule** = *True*.
-- Associate a Schedule with the VM.

## Create a Dynamic scope

To create a dynamic scope, follow the steps:
-1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to Update management center (preview).
+1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to Update Manager (preview).
1. Select **Overview** > **Schedule updates** > **Create a maintenance configuration**.
1. In the **Create a maintenance configuration** page, enter the details in the **Basics** tab and select **Maintenance scope** as *Guest* (Azure VM, Arc-enabled VMs/servers).
1. Select **Dynamic Scopes** and follow the steps to [Add Dynamic scope](manage-dynamic-scoping.md#add-a-dynamic-scope-preview).
update-center Updates Maintenance Schedules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/updates-maintenance-schedules.md
Title: Updates and maintenance in update management center (preview).
-description: The article describes the updates and maintenance options available in Update management center (preview).
-
+ Title: Updates and maintenance in Azure Update Manager (preview).
+description: The article describes the updates and maintenance options available in Azure Update Manager (preview).
+ Last updated 05/23/2023
-# Update options in update management center (preview)
+# Update options in Azure Update Manager (preview)
**Applies to:** :heavy_check_mark: Windows VMs :heavy_check_mark: Linux VMs :heavy_check_mark: On-premises environment :heavy_check_mark: Azure Arc-enabled servers.
> - For Arc-enabled servers, the updates and maintenance options such as Automatic VM Guest patching in Azure, Windows automatic updates and Hotpatching aren't supported.
-This article provides an overview of the various update and maintenance options available by update management center (preview).
+This article provides an overview of the various update and maintenance options available in Update Manager (preview).
-Update management center (preview) provides you the flexibility to take an immediate action or schedule an update within a defined maintenance window. It also supports new patching methods such as [automatic VM guest patching](../virtual-machines/automatic-vm-guest-patching.md), [Hotpatching](../automanage/automanage-hotpatch.md?context=%2fazure%2fvirtual-machines%2fcontext%2fcontext) and so on.
+Update Manager (preview) provides you with the flexibility to take immediate action or schedule an update within a defined maintenance window. It also supports new patching methods such as [automatic VM guest patching](../virtual-machines/automatic-vm-guest-patching.md), [Hotpatching](../automanage/automanage-hotpatch.md?context=%2fazure%2fvirtual-machines%2fcontext%2fcontext), and so on.
## Update Now/One-time update
-Update management center (preview) allows you to secure your machines immediately by installing updates on demand. To perform the on-demand updates, see [Check and install one time updates](deploy-updates.md#install-updates-on-single-vm).
+Update Manager (preview) allows you to secure your machines immediately by installing updates on demand. To perform the on-demand updates, see [Check and install one time updates](deploy-updates.md#install-updates-on-single-vm).
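
For an on-demand installation outside the portal, a minimal sketch with the Az.Compute module might look like this; the resource names, duration, and classifications are assumptions:

```powershell
# Hedged sketch: trigger a one-time patch installation on an Azure VM (assumed names/values).
Invoke-AzVMInstallPatch `
    -ResourceGroupName "rg-updates" `
    -VMName "vm-prod-01" `
    -Windows `
    -MaximumDuration "PT2H" `
    -RebootSetting "IfRequired" `
    -ClassificationToIncludeForWindows @("Critical", "Security")
```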
+
## Scheduled patching

You can create a schedule on a daily, weekly, or hourly cadence as per your requirement, specify the machines that must be updated as part of the schedule, and the updates that you must install. The schedule then automatically installs the updates as per your specifications.
-Update management center (preview) uses maintenance control schedule instead of creating its own schedules. Maintenance control enables customers to manage platform updates. For more information, see the [Maintenance control documentation](/azure/virtual-machines/maintenance-control).
+Update Manager (preview) uses maintenance control schedules instead of creating its own schedules. Maintenance control enables customers to manage platform updates. For more information, see the [Maintenance control documentation](/azure/virtual-machines/maintenance-control).
Start using [scheduled patching](scheduled-patching.md) to create and save recurring deployment schedules.

> [!NOTE]
Start using [scheduled patching](scheduled-patching.md) to create and save recur
This mode of patching lets the Azure platform automatically download and install all the security and critical updates on your machines every month and apply them on your machines following the availability-first principles. For more information, see [automatic VM guest patching](../virtual-machines/automatic-vm-guest-patching.md).
-In **Update management center** home page, go to **Update Settings** blade, select Patch orchestration as **Azure Managed - Safe Deployment** value to enable this VM property.
+On the **Update Manager** home page, go to the **Update Settings** blade and set **Patch orchestration** to **Azure Managed - Safe Deployment** to enable this VM property.
## Windows automatic updates
This mode of patching allows operating system to automatically install updates a
Hotpatching allows you to install updates on supported Windows Server Azure Edition virtual machines without requiring a reboot after installation. It reduces the number of reboots required on your mission-critical application workloads running on Windows Server. For more information, see [Hotpatch for new virtual machines](../automanage/automanage-hotpatch.md).
-Hotpatching property is available as a setting in Update management center (preview) which you can enable by using Update settings flow. Refer to detailed instructions [here](manage-update-settings.md#configure-settings-on-single-vm)
+The hotpatching property is available as a setting in Update Manager (preview), which you can enable by using the update settings flow. For detailed instructions, see [Configure settings on a single VM](manage-update-settings.md#configure-settings-on-single-vm).
:::image type="content" source="media/updates-maintenance/hot-patch-inline.png" alt-text="Screenshot that shows the hotpatch option." lightbox="media/updates-maintenance/hot-patch-expanded.png"::: ## Next steps
-* To view update assessment and deployment logs generated by update management center (preview), see [query logs](query-logs.md).
-* To troubleshoot issues, see the [Troubleshoot](troubleshoot.md) update management center (preview).
+* To view update assessment and deployment logs generated by Update Manager (preview), see [query logs](query-logs.md).
+* To troubleshoot issues, see [Troubleshoot Azure Update Manager (preview)](troubleshoot.md).
update-center View Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/view-updates.md
Title: Check update compliance in Update management center (preview)
-description: The article details how to use Azure Update management center (preview) in the Azure portal to assess update compliance for supported machines.
-
+ Title: Check update compliance in Azure Update Manager (preview)
+description: The article details how to use Azure Update Manager (preview) in the Azure portal to assess update compliance for supported machines.
+ Last updated 05/31/2023
-# Check update compliance with update management center (preview)
+# Check update compliance with Azure Update Manager (preview)
**Applies to:** :heavy_check_mark: Windows VMs :heavy_check_mark: Linux VMs :heavy_check_mark: On-premises environment :heavy_check_mark: Azure Arc-enabled servers.
-This article details how to check the status of available updates on a single VM or multiple VMs using update management center (preview).
+This article details how to check the status of available updates on a single VM or multiple VMs using Update Manager (preview).
## Check updates on single VM

>[!NOTE]
-> You can check the updates from the Overview or Machines blade in update management center (preview) page or from the selected VM.
+> You can check the updates from the Overview or Machines blade in Update Manager (preview) page or from the selected VM.
# [From Overview blade](#tab/singlevm-overview)

1. Sign in to the [Azure portal](https://portal.azure.com).
-1. In Update management center (Preview), **Overview**, select your **Subscription** to view all your machines and select **Check for updates**.
+1. In Update Manager (preview), **Overview**, select your **Subscription** to view all your machines and select **Check for updates**.
1. In **Select resources and check for updates**, choose the machine for which you want to check the updates and select **Check for updates**.
This article details how to check the status of available updates on a single VM
1. Sign in to the [Azure portal](https://portal.azure.com).
-1. In Update management center (preview), **Machines**, select your **Subscription** to view all your machines.
+1. In Update Manager (preview), **Machines**, select your **Subscription** to view all your machines.
1. Select your machine using the checkbox and select **Check for updates** > **Assess now**. Alternatively, select your machine, and in **Updates Preview**, select **Assess updates**; then in **Trigger assess now**, select **OK**.
This article details how to check the status of available updates on a single VM
1. Select your virtual machine and the **virtual machines | Updates** page opens.
1. Under **Operations**, select **Updates**.
-1. In **Updates**, select **Go to Updates using Update Management Center**.
+1. In **Updates**, select **Go to Updates using Update Manager**.
:::image type="content" source="./media/view-updates/resources-check-updates.png" alt-text="Screenshot showing selection of updates from Home page.":::
To check the updates on your machines at scale, follow these steps:
1. Sign in to the [Azure portal](https://portal.azure.com).
-1. In Update management center (preview), **Overview**, select your **Subscription** to view all your machines and select **Check for updates**.
+1. In Update Manager (preview), **Overview**, select your **Subscription** to view all your machines and select **Check for updates**.
1. In **Select resources and check for updates**, choose your machines for which you want to check the updates and select **Check for updates**.
To check the updates on your machines at scale, follow these steps:
1. Sign in to the [Azure portal](https://portal.azure.com).
-1. In Update management center (preview), **Machines**, select your **Subscription** to view all your machines.
+1. In Update Manager (preview), **Machines**, select your **Subscription** to view all your machines.
1. Select **Select all** to choose all your machines and select **Check for updates**.
1. Select **Assess now** to perform the assessment.
- A notification appears when the operation is initiated and completed. After a successful scan, the **Update management center (Preview) | Machines** page is refreshed to display the updates.
+ A notification appears when the operation is initiated and completed. After a successful scan, the **Update Manager (preview) | Machines** page is refreshed to display the updates.
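
You can trigger the same assessment programmatically. A minimal sketch with the Az.Compute module, assuming a resource group name, is:

```powershell
# Hedged sketch: trigger a patch assessment on every VM in an assumed resource group.
$vms = Get-AzVM -ResourceGroupName "rg-updates"
foreach ($vm in $vms) {
    Invoke-AzVMPatchAssessment -ResourceGroupName $vm.ResourceGroupName -VMName $vm.Name
}
```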
> [!NOTE]
-> In update management center (preview), you can initiate a software updates compliance scan on the machine to get the current list of operating system (guest) updates including the security and critical updates. On Windows, the software update scan is performed by the Windows Update Agent. On Linux, the software update scan is performed using OVAL-compatible tools to test for the presence of vulnerabilities based on the OVAL Definitions for that platform, which is retrieved from a local or remote repository.
+> In Update Manager (preview), you can initiate a software updates compliance scan on the machine to get the current list of operating system (guest) updates, including the security and critical updates. On Windows, the software update scan is performed by the Windows Update Agent. On Linux, the software update scan is performed using OVAL-compatible tools to test for the presence of vulnerabilities based on the OVAL definitions for that platform, which are retrieved from a local or remote repository.
## Next steps

* Learn about deploying updates on your machines to maintain security compliance by reading [deploy updates](deploy-updates.md).
-* To view the update assessment and deployment logs generated by update management center (preview), see [query logs](query-logs.md).
-* To troubleshoot issues, see [Troubleshoot](troubleshoot.md) Azure Update management center (preview).
+* To view the update assessment and deployment logs generated by Update Manager (preview), see [query logs](query-logs.md).
+* To troubleshoot issues, see [Troubleshoot Azure Update Manager (preview)](troubleshoot.md).
update-center Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/whats-new.md
Title: What's new in Update management center (Preview)
-description: Learn about what's new and recent updates in the Update management center (Preview) service.
-
+ Title: What's new in Azure Update Manager (preview)
+description: Learn about what's new and recent updates in the Azure Update Manager (preview) service.
+ Previously updated : 07/05/2023 Last updated : 08/30/2023
-# What's new in Update management center (Preview)
+# What's new in Azure Update Manager (Preview)
-[Update management center (preview)](overview.md) helps you manage and govern updates for all your machines. You can monitor Windows and Linux update compliance across your deployments in Azure, on-premises, and on the other cloud platforms from a single dashboard. This article summarizes new releases and features in Update management center (Preview).
+[Azure Update Manager (preview)](overview.md) helps you manage and govern updates for all your machines. You can monitor Windows and Linux update compliance across your deployments in Azure, on-premises, and on other cloud platforms from a single dashboard. This article summarizes new releases and features in Update Manager (preview).
+
+## August 2023
+
+### New region support
+
+Azure Update Manager (preview) is now available in Canada East and Sweden Central regions for Arc-enabled servers. [Learn more](support-matrix.md#supported-regions).
+
+### SQL Server patching (preview)
+
+SQL Server patching (preview) allows you to patch SQL Server instances. You can now manage and govern updates for all your SQL Server instances using the patching capabilities provided by Azure Update Manager. [Learn more](guidance-patching-sql-server-azure-vm.md).
## July 2023
Dynamic scope (preview) is an advanced capability of schedule patching. You can
### Customized image support
-Update management center (preview) now supports [generalized](../virtual-machines/linux/imaging.md#generalized-images) custom images, and a combination of offer, publisher, and SKU for Marketplace/PIR images.See the [list of supported operating systems](support-matrix.md#supported-operating-systems).
+Update Manager (preview) now supports [generalized](../virtual-machines/linux/imaging.md#generalized-images) custom images, and a combination of offer, publisher, and SKU for Marketplace/PIR images. See the [list of supported operating systems](support-matrix.md#supported-operating-systems).
### Multi-subscription support
-The limit on the number of subscriptions that you can manage to use the Update management center (preview) portal has now been removed. You can now manage all your subscriptions using the update management center (preview) portal.
+The limit on the number of subscriptions that you can manage using the Update Manager (preview) portal has now been removed. You can now manage all your subscriptions using the Update Manager (preview) portal.
## April 2023
A new patch orchestration - **Customer Managed Schedules (Preview)** is introduc
### New region support
-Update management center (Preview) now supports new five regions for Azure Arc-enabled servers. [Learn more](support-matrix.md#supported-regions).
+Update Manager (preview) now supports five new regions for Azure Arc-enabled servers. [Learn more](support-matrix.md#supported-regions).
## October 2022
update-center Whats Upcoming https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/whats-upcoming.md
Title: What's upcoming in Update management center (Preview)
-description: Learn about what's upcoming and updates in the Update management center (Preview) service.
-
+ Title: What's upcoming in Azure Update Manager (preview)
+description: Learn about what's upcoming and updates in the Azure Update Manager (preview) service.
+ Last updated 06/01/2023
-# What's upcoming in Update management center (Preview)
+# What are the upcoming features in Azure Update Manager (preview)
-The primary [what's New in Update management center (preview)](whats-new.md) contains updates of feature releases and this article lists all the upcoming features.
+The primary [What's new in Azure Update Manager (preview)](whats-new.md) article contains updates of feature releases, and this article lists all the upcoming features.
## Expanded support for operating system and VM images

Expanded support for [specialized images](../virtual-machines/linux/imaging.md#specialized-images), VMs created by Azure Migrate, Azure Backup, Azure Site Recovery, and marketplace images is upcoming in Q3 CY 2023. Until then, we recommend that you continue using [Automation update management](../automation/update-management/overview.md) for these images. [Learn more](support-matrix.md#supported-operating-systems).
-## Update management center will be GA soon
+## Update Manager will be GA soon
-Update management center will be declared GA soon.
+Update Manager will be declared GA soon.
+
+## Prescript and postscript
+
+The prescript and postscript will be available soon.
+
+## SQL Server patching
+SQL Server patching using Update Manager will be available soon.
## Next steps
update-center Workbooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/workbooks.md
Title: An overview of Workbooks description: This article provides information on how workbooks provide a flexible canvas for data analysis and the creation of rich visual reports.-+ Last updated 01/16/2023
**Applies to:** :heavy_check_mark: Windows VMs :heavy_check_mark: Linux VMs :heavy_check_mark: On-premises environment :heavy_check_mark: Azure Arc-enabled servers.
-Workbooks help you to create visual reports that help in data analysis. This article describes the various features that Workbooks offer in Update management center (preview).
+Workbooks help you create visual reports for data analysis. This article describes the various features that Workbooks offer in Update Manager (preview).
## Key benefits

- Provides a canvas for data analysis and creation of visual reports
The gallery lists all the saved workbooks and templates for your workspace. You
- In the **Recently modified** tile, you can view and edit the workbooks.

-- In the **Update management center** tile, you can view the following summary:
+- In the **Update Manager** tile, you can view the following summary:
:::image type="content" source="./media/workbooks/workbooks-summary-inline.png" alt-text="Screenshot of workbook summary." lightbox="./media/workbooks/workbooks-summary-expanded.png":::
virtual-desktop Administrative Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/administrative-template.md
Title: Administrative template for Azure Virtual Desktop
description: Learn how to use the administrative template (ADMX) for Azure Virtual Desktop with Intune or Group Policy to configure certain settings on your session hosts. Previously updated : 06/28/2023 Last updated : 08/25/2023
We've created an administrative template for Azure Virtual Desktop to configure
You can configure the following features with the administrative template:

- [Graphics related data logging](connection-latency.md#connection-graphics-data-preview)
-- [Screen capture protection](screen-capture-protection.md)
- [RDP Shortpath for managed networks](rdp-shortpath.md?tabs=managed-networks)
+- [Screen capture protection](screen-capture-protection.md)
- [Watermarking](watermarking.md)

## Prerequisites
To configure the administrative template, select a tab for your scenario and fol
# [Intune](#tab/intune)
-> [!IMPORTANT]
-> The administrative template for Azure Virtual Desktop is only available with the *templates* profile type, not the *settings catalog*. You can use the templates profile type with Windows 10 and Windows 11, but you can't use this with multi-session versions of these operating systems as they only support the settings catalog. You'll need to use one of the other methods with multi-session.
- 1. Sign in to the [Microsoft Intune admin center](https://endpoint.microsoft.com/).
-1. [Create or edit a configuration profile](/mem/intune/configuration/administrative-templates-windows) for **Windows 10 and later** devices, with the **Templates** profile type and **Administrative templates** template name.
+1. [Create or edit a configuration profile](/mem/intune/configuration/administrative-templates-windows) for **Windows 10 and later** devices, with the **Settings catalog** profile type.
-1. Browse to **Computer configuration** > **Windows Components** > **Remote Desktop Services** > **Remote Desktop Session Host** > **Azure Virtual Desktop**. You should see policy settings for Azure Virtual Desktop available for you to configure, as shown in the following screenshot:
+1. In the settings picker, browse to **Administrative templates** > **Windows Components** > **Remote Desktop Services** > **Remote Desktop Session Host** > **Azure Virtual Desktop**. You should see settings in the Azure Virtual Desktop subcategory available for you to configure, as shown in the following screenshot:
- :::image type="content" source="media/administrative-template/azure-virtual-desktop-intune-template.png" alt-text="Screenshot of the Intune admin center showing Azure Virtual Desktop policy settings." lightbox="media/administrative-template/azure-virtual-desktop-intune-template.png":::
+ :::image type="content" source="media/administrative-template/azure-virtual-desktop-intune-settings-catalog.png" alt-text="Screenshot of the Intune admin center showing Azure Virtual Desktop settings." lightbox="media/administrative-template/azure-virtual-desktop-intune-settings-catalog.png":::
-1. Apply the configuration profile to your session hosts, then restart your clients.
+1. Once you've configured settings, apply the configuration profile to your session hosts, then restart your session hosts for the settings to take effect.
# [Group Policy (AD)](#tab/group-policy-domain)
To configure the administrative template, select a tab for your scenario and fol
:::image type="content" source="media/administrative-template/azure-virtual-desktop-gpo.png" alt-text="Screenshot of the Group Policy Management Editor showing Azure Virtual Desktop policy settings." lightbox="media/administrative-template/azure-virtual-desktop-gpo.png":::
-1. Apply the policy to your session hosts, then restart your session hosts.
+1. Once you've configured settings, apply the policy to your session hosts, then restart your session hosts for the settings to take effect.
# [Local Group Policy](#tab/local-group-policy)
To configure the administrative template, select a tab for your scenario and fol
:::image type="content" source="media/administrative-template/azure-virtual-desktop-gpo.png" alt-text="Screenshot of the Local Group Policy Editor showing Azure Virtual Desktop policy settings." lightbox="media/administrative-template/azure-virtual-desktop-gpo.png":::
-1. Restart your session hosts for the settings to take effect.
+1. Once you've configured settings, restart your session hosts for the settings to take effect.
virtual-desktop Host Pool Load Balancing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/host-pool-load-balancing.md
Title: Azure Virtual Desktop host pool load-balancing - Azure
-description: Learn about host pool load-balancing algorithms for a Azure Virtual Desktop environment.
-
+ Title: Host pool load balancing algorithms in Azure Virtual Desktop - Azure
+description: Learn about the host pool load balancing algorithms available for pooled host pools in Azure Virtual Desktop.
+ Previously updated : 09/19/2022-- Last updated : 08/25/2023+
-# Host pool load-balancing algorithms
->[!IMPORTANT]
->This content applies to Azure Virtual Desktop with Azure Resource Manager Azure Virtual Desktop objects. If you're using Azure Virtual Desktop (classic) without Azure Resource Manager objects, see [this article](./virtual-desktop-fall-2019/host-pool-load-balancing-2019.md).
+# Host pool load balancing algorithms in Azure Virtual Desktop
-Azure Virtual Desktop supports two load-balancing algorithms. Each algorithm determines which session host will host a user's session when they connect to a resource in a pooled host pool. The information in this article only applies to pooled host pools.
+Azure Virtual Desktop supports two load balancing algorithms for pooled host pools. Each algorithm determines which session host is used when a user starts a remote session. Load balancing doesn't apply to personal host pools because users always have a 1:1 mapping to a session host within the host pool.
-The following load-balancing algorithms are available in Azure Virtual Desktop:
+The following load balancing algorithms are available for pooled host pools:
-- Breadth-first load balancing allows you to evenly distribute user sessions across the session hosts in a host pool. You don't have to specify a maximum session limit for the number of sessions.-- Depth-first load balancing allows you to saturate a session host with user sessions in a host pool. You have to specify a maximum session limit for the number of sessions. Once the first session host reaches its session limit threshold, the load balancer directs any new user connections to the next session host in the host pool until it reaches its limit, and so on.
+- **Breadth-first**, which aims to evenly distribute new user sessions across the session hosts in a host pool. You don't have to specify a maximum session limit for the number of sessions.
-Each host pool can only configure one type of load-balancing specific to it. However, both load-balancing algorithms share the following behaviors no matter which host pool they're in:
+- **Depth-first**, which keeps starting new user sessions on one session host until the maximum session limit is reached. Once the session limit is reached, any new user connections are directed to the next session host in the host pool until it reaches its session limit, and so on.
-- If a user already has an active or disconnected session in the host pool and signs in again, the load balancer will successfully redirect them to the session host with their existing session. This behavior applies even if that session host's AllowNewConnections property is set to False (drain mode is enabled).-- If a user doesn't already have a session in the host pool, then the load balancer won't consider session hosts whose AllowNewConnections property is set to False during load balancing.-- If you lower the maximum session limit on a session host while it has active user sessions, the change won't affect the active user sessions.
+You can only configure one load balancing algorithm at a time per pooled host pool, but you can change which one is used after a host pool is created. However, both load balancing algorithms share the following behaviors:
-## Breadth-first load-balancing algorithm
+- If a user already has an active or disconnected session in the host pool and signs in again, the load balancer will successfully redirect them to the session host with their existing session. This behavior applies even if [drain mode](drain-mode.md) has been enabled for that session host.
-The breadth-first load-balancing algorithm allows you to distribute user sessions across session hosts to optimize for session performance. This algorithm is ideal for organizations that want to provide the best experience for users connecting to their pooled virtual desktop environment.
+- If a user doesn't already have a session on a session host in the host pool, the load balancer doesn't consider a session host where drain mode has been enabled.
-The breadth-first algorithm first queries session hosts that allow new connections. The algorithm then selects a session host randomly from half the set of session hosts with the least number of sessions. For example, if there are nine machines with 11, 12, 13, 14, 15, 16, 17, 18, and 19 sessions, a new session you create won't automatically go to the first machine. Instead, it can go to any of the first five machines with the lowest number of sessions (11, 12, 13, 14, 15).
+- If you lower the maximum session limit on a session host while it has active user sessions, the change doesn't affect existing user sessions.
-## Depth-first load-balancing algorithm
+## Breadth-first load balancing algorithm
-The depth-first load-balancing algorithm allows you to saturate one session host at a time to optimize for scale down scenarios. This algorithm is ideal for cost-conscious organizations that want more granular control on the number of virtual machines they've allocated for a host pool.
+The breadth-first load balancing algorithm aims to distribute user sessions across session hosts to optimize for session performance. Breadth-first is ideal for organizations that want to provide the best experience for users connecting to their remote resources as session host resources, such as CPU, memory, and disk, are generally less contended.
-The depth-first algorithm first queries session hosts that allow new connections and haven't gone over their maximum session limit. The algorithm then selects the session host with highest number of sessions. If there's a tie, the algorithm selects the first session host in the query.
+The breadth-first algorithm first queries session hosts in a host pool that allow new connections. The algorithm then selects a session host randomly from half the set of available session hosts with the fewest sessions. For example, if there are nine session hosts with 11, 12, 13, 14, 15, 16, 17, 18, and 19 sessions, a new session doesn't automatically go to the session host with the fewest sessions. Instead, it can go to any of the first five session hosts with the fewest sessions at random. Due to the randomization, some sessions may not be evenly distributed across all session hosts.
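+
To make the selection rule concrete, here's a small illustrative PowerShell sketch (not product code) that approximates the behavior described above using the example session counts:

```powershell
# Illustrative only: approximate breadth-first selection over the example session counts above.
$sessionCounts = @{ sh1 = 11; sh2 = 12; sh3 = 13; sh4 = 14; sh5 = 15; sh6 = 16; sh7 = 17; sh8 = 18; sh9 = 19 }

# Rank session hosts by current session count, then take the half with the fewest sessions.
$ranked = $sessionCounts.GetEnumerator() | Sort-Object Value
$candidates = $ranked | Select-Object -First ([math]::Ceiling(@($ranked).Count / 2))

# Pick one of those candidates at random, as the algorithm does.
$chosen = $candidates | Get-Random
"New session goes to $($chosen.Key) (currently $($chosen.Value) sessions)"
```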
+
+## Depth-first load balancing algorithm
+
+The depth-first load balancing algorithm aims to saturate one session host at a time. This algorithm is ideal for cost-conscious organizations that want more granular control on the number of session hosts available in a host pool, enabling you to more easily scale down when there are fewer users.
+
+The depth-first algorithm first queries session hosts that allow new connections and haven't reached their maximum session limit. The algorithm then selects the session host with most sessions. If there's a tie, the algorithm selects the first session host from the query.
+
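Continuing the same illustrative sketch, depth-first selection skips session hosts at their maximum session limit and keeps the first host with the highest session count, matching the tie-breaking rule described above. The `MaxSessionLimit` property is hypothetical:

```powershell
# Hypothetical session hosts with a configured maximum session limit.
$sessionHosts = @(
    [pscustomobject]@{ Name = 'host-1'; Sessions = 11; MaxSessionLimit = 20 },
    [pscustomobject]@{ Name = 'host-2'; Sessions = 19; MaxSessionLimit = 20 },
    [pscustomobject]@{ Name = 'host-3'; Sessions = 20; MaxSessionLimit = 20 }
)

$target = $null
foreach ($sessionHost in $sessionHosts) {
    if ($sessionHost.Sessions -ge $sessionHost.MaxSessionLimit) { continue }  # Host is full.
    if (-not $target -or $sessionHost.Sessions -gt $target.Sessions) {
        $target = $sessionHost   # Strict '>' keeps the first host on a tie.
    }
}
$target.Name   # host-2
```
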
+You must [set a maximum session limit](configure-host-pool-load-balancing.md#configure-depth-first-load-balancing) when using the depth-first algorithm. You can use Azure Virtual Desktop Insights to monitor [the number of sessions on each session host](insights-use-cases.md#session-host-utilization) and [session host performance](insights-use-cases.md#session-host-performance) to help determine the best maximum session limit for your environment.
> [!IMPORTANT]
-> The maximum session limit parameter is required when you use the depth-first load balancing algorithm. For the best possible user experience, make sure to change the maximum session host limit parameter to a number that best suits your environment.
->
-> Once all session hosts have reached the maximum session limit, you will need to increase the limit or deploy more session hosts.
+> Once all session hosts have reached the maximum session limit, you need to increase the limit or [add more session hosts to the host pool](add-session-hosts-host-pool.md).
+
+## Next steps
+
+- Learn how to [configure load balancing for a host pool](configure-host-pool-load-balancing.md).
+
+- Understand how [autoscale](autoscale-scenarios.md) can automatically scale the number of available session hosts in a host pool.
virtual-desktop Insights Use Cases https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/insights-use-cases.md
+
+ Title: Use cases for Azure Virtual Desktop Insights - Azure Virtual Desktop
+description: Learn about how using Azure Virtual Desktop Insights can help you understand your deployments of Azure Virtual Desktop, including some use cases and example scenarios.
+++ Last updated : 08/24/2023++
+# Use cases for Azure Virtual Desktop Insights
+
+Using Azure Virtual Desktop Insights can help you understand your deployments of Azure Virtual Desktop. It can help you check which client versions are connecting, identify opportunities for cost savings, and determine whether you have resource limitations or connectivity issues. If you make changes, you can continually validate that the changes have the intended effect, and iterate if needed. This article provides some use cases for Azure Virtual Desktop Insights and example scenarios using the Azure portal.
+
+## Prerequisites
+
+- An existing host pool with session hosts, and a workspace [configured to use Azure Virtual Desktop Insights](insights.md).
+
+- Your environment needs to have had active sessions for a period of time before you can make informed decisions from the data.
+
+## Connectivity
+
+Connectivity issues can have a severe impact on the quality and reliability of the end-user experience with Azure Virtual Desktop. Azure Virtual Desktop Insights can help you identify connectivity issues and understand where improvements can be made.
+
+### High latency
+
+High latency can make a remote session slow and degrade its quality. Maintaining ideal interaction times generally requires latency below 100 milliseconds, and a session broadly becomes low quality above 200 ms. Azure Virtual Desktop Insights can help pinpoint the gateway regions and users impacted by latency by looking at the *round-trip time* (RTT), so that you can more easily find cases of user impact that are related to connectivity.
+
+To view round-trip time:
+
+1. Sign in to Azure Virtual Desktop Insights in the Azure portal by browsing to [https://aka.ms/avdi](https://aka.ms/avdi).
+
+1. From the drop-down lists, select one or more **subscriptions**, **resource groups**, **host pools**, and specify a **time range**, then select the **Connection Performance** tab.
+
+1. Review the section for **Round-trip time** and focus on the table for **RTT by gateway region** and the graph **RTT median and 95th percentile for all regions**. In the example below, most median latencies are under the ideal threshold of 100 ms, but several are higher. In many cases, the 95th percentile (p95) is substantially higher than the median, meaning that there are some users experiencing periods of higher latency.
+
+ :::image type="content" source="media/insights-use-cases/insights-connection-performance-latency-1.png" alt-text="A screenshot of a table and graph showing the round-trip time." lightbox="media/insights-use-cases/insights-connection-performance-latency-1.png":::
+
+1. For the table **RTT by gateway region**, select **Median** until the arrow next to it points down to sort by median latency in descending order. This order highlights the gateway regions where your users experience the highest latency and are likely most impacted. Select a gateway to view the graph of its RTT median and 95th percentile, and to filter the list of the top 20 users by RTT median to that region.
+
+ In this example, the **SAN** gateway region has the highest median latency, and the graph indicates that over time users are substantially over the threshold for poor connection quality.
+
+ :::image type="content" source="media/insights-use-cases/insights-connection-performance-latency-2.png" alt-text="A screenshot of a table and graph showing the round-trip time for a selected gateway." lightbox="media/insights-use-cases/insights-connection-performance-latency-2.png":::
+
+   You can use the list of users to identify who's impacted by these issues. Select the magnifying glass icon in the **Details** column to drill down further into the data.
+
+ :::image type="content" source="media/insights-use-cases/insights-connection-performance-latency-3.png" alt-text="A screenshot of a table showing the round-trip time per user." lightbox="media/insights-use-cases/insights-connection-performance-latency-3.png":::
+
+There are several possible reasons why latency may be higher than anticipated for some users, such as a poor Wi-Fi connection or issues with their internet service provider (ISP). However, with a list of impacted users, you can proactively contact them and investigate their network connectivity to resolve end-user experience problems.
+
+You should periodically review the round-trip time in your environment and the overall trend to identify potential performance concerns.
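If you'd rather work with the underlying data than the workbook, you can run a comparable query against the Log Analytics workspace used by Azure Virtual Desktop Insights. The following sketch assumes the `Az.OperationalInsights` PowerShell module is installed and that your workspace collects the `WVDConnectionNetworkData` table with its `EstRoundTripTimeInMs` column; the workspace ID is a placeholder:

```powershell
# Median and 95th percentile round-trip time per hour (illustrative sketch).
$query = @'
WVDConnectionNetworkData
| summarize MedianRTT = percentile(EstRoundTripTimeInMs, 50),
            P95RTT    = percentile(EstRoundTripTimeInMs, 95)
    by bin(TimeGenerated, 1h)
| order by TimeGenerated asc
'@

$results = Invoke-AzOperationalInsightsQuery -WorkspaceId '<workspace-id>' -Query $query
$results.Results
```
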
+
+## Session host performance
+
+Issues with session hosts, such as session hosts with too many sessions to cope with the workloads end users are running, can be a major cause of a poor end-user experience. Azure Virtual Desktop Insights can provide detailed information about resource utilization and [user input delay](/windows-server/remote/remote-desktop-services/rds-rdsh-performance-counters?toc=%2Fazure%2Fvirtual-desktop%2Ftoc.json) to help you quickly find whether users are impacted by limitations on resources like CPU or memory.
+
+To view session host performance:
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+
+1. In the search bar, type *Azure Virtual Desktop* and select the matching service entry to go to the Azure Virtual Desktop overview.
+
+1. Select **Host pools**, then select the name of the host pool for which you want to view session host performance.
+
+1. Select **Insights**, specify a **time range**, then select the **Host Performance** tab.
+
+1. Review the table for **Input delay by host** and the graph **Median input delay over time** to find a summary of the median and 95th percentile user input delay values for each session host in the host pool. Ideally, the user input delay for each host should be below 100 milliseconds, and lower values are better.
+
+   In the following example, the session hosts have a reasonable median user input delay, but values occasionally peak above the 100 ms threshold, suggesting potential impact to end users.
+
+ :::image type="content" source="media/insights-use-cases/insights-session-host-performance-1.png" alt-text="A screenshot of a table and graph showing the input delay of session hosts." lightbox="media/insights-use-cases/insights-session-host-performance-1.png":::
+
+1. If you find higher than expected user input delay (>100 ms), it can then be useful to look at the aggregated statistics for CPU, memory, and disk activity for the session hosts to see if there are periods of higher-than-expected utilization. The graphs for **Host CPU and memory metrics**, **Host disk timing metrics**, and **Host disk queue length** show either the aggregate across session hosts or a selected session host's resource metrics.
+
+   In this example, there are some periods of higher disk read times that correlate with the higher user input delay above.
+
+ :::image type="content" source="media/insights-use-cases/insights-session-host-performance-2.png" alt-text="A screenshot of graphs showing session host metrics." lightbox="media/insights-use-cases/insights-session-host-performance-2.png":::
+
+1. For more information about a specific session host, select the **Host Diagnostics** tab.
+
+1. Review the section for **Performance counters** to see a quick summary of any devices that have crossed the specified thresholds for:
+ - Available MBytes (available memory)
+ - Page Faults/sec
+ - CPU Utilization
+ - Disk Space
+ - Input Delay per Session
+
+ Selecting a parameter allows you to drill down and see the trend for a selected session host. In the following example, one session host had higher CPU usage (> 60%) for the selected duration (1 minute).
+
+ :::image type="content" source="media/insights-use-cases/insights-session-host-performance-3.png" alt-text="A screenshot showing values from the performance counters of session hosts." lightbox="media/insights-use-cases/insights-session-host-performance-3.png":::
+
+In cases where a session host has extended periods of high resource utilization, it's worth considering increasing the [Azure VM size](../virtual-machines/sizes.md) of the session host to better accommodate user workloads.
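You can also spot-check the input delay counters directly on a session host with PowerShell. This sketch uses the built-in `Get-Counter` cmdlet and the *User Input Delay per Session* counter set, which is available on Windows 10, Windows Server 2019, and later:

```powershell
# Take three samples of the maximum input delay per session, 5 seconds apart.
Get-Counter -Counter '\User Input Delay per Session(*)\Max Input Delay' -SampleInterval 5 -MaxSamples 3 |
    ForEach-Object { $_.CounterSamples | Select-Object InstanceName, CookedValue }
```
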
+
+## Client version usage
+
+A common source of issues for end users of Azure Virtual Desktop is the use of older clients, which may be missing new or updated features or have known issues that are resolved in more recent versions. Azure Virtual Desktop Insights lists the different clients in use and identifies clients that may be out of date.
+
+To view a list of users with outdated clients:
+
+1. Sign in to Azure Virtual Desktop Insights in the Azure portal by browsing to [https://aka.ms/avdi](https://aka.ms/avdi).
+
+1. From the drop-down lists, select one or more **subscriptions**, **resource groups**, **host pools**, and specify a **time range**, then select the **Clients** tab.
+
+1. Review the section for **Users with potentially outdated clients (all activity types)**. A summary table shows the highest version level of each client found connecting to your environment (marked as **Newest**) in the selected time range, and the count of users using outdated versions (in parentheses).
+
+   In the following example, the newest version of the Microsoft Remote Desktop client for Windows (MSRDC) is 1.2.4487.0, and 993 users are currently using an older version. The table also shows a count of connections and the number of days the older clients are behind the latest version.
+
+ :::image type="content" source="media/insights-use-cases/insights-client-version-usage-1.png" alt-text="A screenshot showing a table of outdated clients." lightbox="media/insights-use-cases/insights-client-version-usage-1.png":::
+
+1. To find more information, expand a client to see a list of users using an outdated version of that client, their versions, and the date they were last seen connecting with that version. You can export the data using the button in the top right-hand corner of the table to communicate with those users or to monitor the propagation of updates.
+
+ :::image type="content" source="media/insights-use-cases/insights-client-version-usage-2.png" alt-text="A screenshot showing a table of users with outdated clients." lightbox="media/insights-use-cases/insights-client-version-usage-2.png":::
+
+You should periodically review the versions of clients in use to ensure your users are getting the best experience.
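If you want the raw data behind this view, you can query the workspace directly. This sketch assumes your workspace collects the `WVDConnections` table with `ClientOS`, `ClientVersion`, and `UserName` columns, and that the `Az.OperationalInsights` module is installed; the workspace ID is a placeholder:

```powershell
# Distinct users and last-seen time per client OS and version (illustrative sketch).
$query = @'
WVDConnections
| where State == "Connected"
| summarize Users = dcount(UserName), LastSeen = max(TimeGenerated) by ClientOS, ClientVersion
| order by ClientOS asc, ClientVersion asc
'@

(Invoke-AzOperationalInsightsQuery -WorkspaceId '<workspace-id>' -Query $query).Results
```
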
+
+## Cost saving opportunities
+
+Understanding the utilization of session hosts can help illustrate where there's potential to reduce spend by using a scaling plan, resizing virtual machines, or reducing the number of session hosts in a host pool. Azure Virtual Desktop Insights can provide visibility into usage patterns to help you make the most informed decisions about how best to manage your resources based on real user usage.
+
+### Session host utilization
+
+Knowing when your session hosts are in peak demand, or when there are few or no sessions can help you make decisions about how to manage your session hosts. You can use [autoscale](autoscale-scenarios.md) to scale session hosts based on usage patterns. Azure Virtual Desktop Insights can help you identify broad patterns of user activity across multiple host pools. If you find opportunities to scale session hosts, you can use this information to [create a scaling plan](autoscale-scaling-plan.md).
+
+To view session host utilization:
+
+1. Sign in to Azure Virtual Desktop Insights in the Azure portal by browsing to [https://aka.ms/avdi](https://aka.ms/avdi).
+
+1. From the drop-down lists, select one or more **subscriptions**, **resource groups**, **host pools**, and specify a **time range**, then select the **Utilization** tab.
+
+1. Review the **Session history** chart, which displays the number of active and idle (disconnected) sessions over time. Identify periods of high and low activity from the peak user session count and the times at which the peaks occur. A regular, repeated pattern of activity usually indicates a good opportunity to implement a scaling plan.
+
+   In this example, the graph shows the number of user sessions over the course of a week. Peaks occur around midday on weekdays, and there's a noticeable lack of activity over the weekend. This pattern suggests an opportunity to scale session hosts to meet demand during the week and reduce the number of session hosts over the weekend.
+
+   :::image type="content" source="media/insights-use-cases/insights-session-count-over-time.png" alt-text="A screenshot of a graph showing the number of user sessions over the course of a week." lightbox="media/insights-use-cases/insights-session-count-over-time.png":::
+
+1. Use the **Session host count** chart to note the average number of active session hosts over time, and particularly the average number of session hosts that are idle (no sessions). Ideally, session hosts should be actively supporting connected sessions and active workloads, and be powered off by a scaling plan when not in use. You'll likely need to keep a minimum number of session hosts powered on to ensure availability for users at irregular times, so understanding usage over time can help you find an appropriate number of session hosts to keep powered on as a buffer.
+
+ Even if a scaling plan is ultimately not a good fit for your usage patterns, there's still an opportunity to balance the total number of session hosts available as a buffer by analyzing the session demand and potentially reducing the number of idle devices.
+
+   In this example, the graph shows long periods over the course of a week where idle session hosts are powered on, which increases costs.
+
+ :::image type="content" source="media/insights-use-cases/insights-session-host-idle-count-over-time.png" alt-text="A screenshot of a graph showing the number of active and idle session hosts over the course of a week." lightbox="media/insights-use-cases/insights-session-host-idle-count-over-time.png":::
+
+1. Use the drop-down lists to reduce the scope to a single host pool and repeat the analysis for **session history** and **session host count**. At this scope you can identify patterns that are specific to the session hosts in a particular host pool to help develop a scaling plan for that host pool.
+
+   In this example, the first graph shows the pattern of user activity throughout a week between 6 AM and 10 PM. On the weekend, there's minimal activity. The second graph shows the number of active and idle session hosts throughout the same week. There are long periods where idle session hosts are powered on. Use this information to help determine optimal ramp-up and ramp-down times for a scaling plan.
+
+   :::image type="content" source="media/insights-use-cases/insights-session-count-over-time-single-host-pool.png" alt-text="A graph showing the number of user sessions over the course of a week for a single host pool." lightbox="media/insights-use-cases/insights-session-count-over-time-single-host-pool.png":::
+
+ :::image type="content" source="media/insights-use-cases/insights-session-host-idle-count-over-time-single-host-pool.png" alt-text="A graph showing the number of active and idle session hosts over the course of a week for a single host pool." lightbox="media/insights-use-cases/insights-session-host-idle-count-over-time-single-host-pool.png":::
+
+1. [Create a scaling plan](autoscale-scaling-plan.md) based on the usage patterns you've identified, then [assign the scaling plan to your host pool](autoscale-new-existing-host-pool.md).
+
+After a period of time, you should repeat this process to validate that your session hosts are being utilized effectively. You can make changes to the scaling plan if needed, and continue to iterate until you find the optimal scaling plan for your usage patterns.
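As with the other views, you can chart session demand yourself from the collected data. This sketch counts successful connections per hour from the `WVDConnections` table as a rough proxy for demand; the table and column names are assumptions about your collected data:

```powershell
# Successful connections per hour (illustrative sketch).
$query = @'
WVDConnections
| where State == "Connected"
| summarize Connections = count() by bin(TimeGenerated, 1h)
| order by TimeGenerated asc
'@

(Invoke-AzOperationalInsightsQuery -WorkspaceId '<workspace-id>' -Query $query).Results
```
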
+
+## Next steps
+
+- [Create a scaling plan](autoscale-scaling-plan.md)
virtual-desktop Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/insights.md
Title: Use Azure Virtual Desktop Insights to monitor your deployment - Azure
description: How to set up Azure Virtual Desktop Insights to monitor your Azure Virtual Desktop environments. Previously updated : 06/14/2023 Last updated : 08/24/2023
Azure Virtual Desktop Insights is a dashboard built on Azure Monitor Workbooks t
Before you start using Azure Virtual Desktop Insights, you'll need to set up the following things: - All Azure Virtual Desktop environments you monitor must be based on the latest release of Azure Virtual Desktop that's compatible with Azure Resource Manager.+ - At least one configured Log Analytics Workspace. Use a designated Log Analytics workspace for your Azure Virtual Desktop session hosts to ensure that performance counters and events are only collected from session hosts in your Azure Virtual Desktop deployment.+ - Enable data collection for the following things in your Log Analytics workspace:
- - Diagnostics from your Azure Virtual Desktop environment
- - Recommended performance counters from your Azure Virtual Desktop session hosts
- - Recommended Windows Event Logs from your Azure Virtual Desktop session hosts
- The data setup process described in this article is the only one you'll need to monitor Azure Virtual Desktop. You can disable all other items sending data to your Log Analytics workspace to save costs.
+ - Diagnostics from your Azure Virtual Desktop environment
+ - Recommended performance counters from your Azure Virtual Desktop session hosts
+ - Recommended Windows Event Logs from your Azure Virtual Desktop session hosts
+
+ The data setup process described in this article is the only one you'll need to monitor Azure Virtual Desktop. You can disable all other items sending data to your Log Analytics workspace to save costs.
+
+- Anyone monitoring Azure Virtual Desktop Insights for your environment will also need to have the following Azure role-based access control (RBAC) roles assigned as a minimum:
-Anyone monitoring Azure Virtual Desktop Insights for your environment will also need the following read-access permissions:
+   - [Desktop Virtualization Reader](../role-based-access-control/built-in-roles.md#desktop-virtualization-reader) assigned on the resource group or subscription where the host pools, workspaces, and session hosts are.
+ - [Log Analytics Reader](../role-based-access-control/built-in-roles.md#log-analytics-reader) assigned on any Log Analytics workspace used with Azure Virtual Desktop Insights.
-- Read-access to the Azure resource groups that hold your Azure Virtual Desktop resources.-- Read-access to the subscription's resource groups that hold your Azure Virtual Desktop session hosts.-- Read access to the Log Analytics workspace. In the case that multiple Log Analytics workspaces are used, read access should be granted to each to allow viewing data.
+ You can also create a custom role to reduce the scope of assignment on the Log Analytics workspace. For more information, see [Manage access to Log Analytics workspaces](../azure-monitor/logs/manage-access.md).
-> [!NOTE]
-> Read access only lets admins view data. They'll need different permissions to manage resources in the Azure Virtual Desktop portal.
+ > [!NOTE]
+ > Read access only lets admins view data. They'll need different permissions to manage resources in the Azure Virtual Desktop portal.
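For illustration, the following sketch assigns both built-in roles with the `Az.Resources` PowerShell module. The user, resource group, and workspace resource ID are placeholders, and the scopes shown are one reasonable choice rather than the only option:

```powershell
# Desktop Virtualization Reader on the resource group that contains the Azure Virtual Desktop resources.
New-AzRoleAssignment -SignInName 'user@contoso.com' `
    -RoleDefinitionName 'Desktop Virtualization Reader' `
    -ResourceGroupName 'rg-avd-resources'

# Log Analytics Reader scoped to the workspace used with Azure Virtual Desktop Insights.
New-AzRoleAssignment -SignInName 'user@contoso.com' `
    -RoleDefinitionName 'Log Analytics Reader' `
    -Scope '/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.OperationalInsights/workspaces/<workspace-name>'
```
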
## Open Azure Virtual Desktop Insights
virtual-desktop Install Client Per User https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/install-client-per-user.md
- Title: Install the Remote Desktop client for Windows on a per-user basis with Intune or Configuration Manager - Azure
-description: How to install the Azure Virtual Desktop client on a per-user basis with Intune or Configuration Manager.
-- Previously updated : 06/09/2023---
-# Install the Remote Desktop client for Windows on a per-user basis with Intune or Configuration Manager
-
-You can install the Remote Desktop client on either a per-system or per-user basis. Installing it on a per-system basis installs the client on the machines for all users by default, and updates are controlled by the admin. Per-user installation installs the application into each user's profile, giving them control over when to apply updates.
-
-Per-system is the default way to install the client. However, if you're deploying the Remote Desktop client with Intune or Configuration Manager, using the per-system method can cause the Remote Desktop client auto-update feature to stop working. In these cases, you must use the per-user method instead.
-
-## Prerequisites
-
-In order to install the Remote Desktop client for Windows on a per-user basis with Intune or Configuration Manager, you need the following things:
--- An Azure Virtual Desktop or Windows 365 deployment.-- Download the latest version of [the Remote Desktop client](./users/connect-windows.md?toc=/azure/virtual-desktop/toc.json&bc=/azure/virtual-desktop/breadcrumb/toc.json).-- Microsoft Intune, Configuration Manager or other enterprise software distribution product.-
-## Install the Remote Desktop client using a batch file
-
-To install the client on a per-user basis using a batch file:
-
-#### [Intune](#tab/intune)
-
-1. Create a new folder containing the Remote Desktop client MSI file.
-
-1. Within that folder, create an `install.bat` batch file with the following content:
-
- ```batch
- cd "%~dp0"
-
- msiexec /i RemoteDesktop_x64.msi /qn ALLUSERS=2 MSIINSTALLPERUSER=1
- ```
-
- >[!NOTE]
- >The RemoteDesktop_x64.msi installer name must match the MSI contained in the folder.
-
-1. Follow the directions in [Prepare Win32 app content for upload](/mem/intune/apps/apps-win32-prepare) to convert the folder into an `.intunewin` file.
-
-1. Open the **Microsoft Intune admin center**, then go to **Apps** > **All apps** and select **Add**.
-
-1. For the app type, select **Windows app (Win32)**.
-
-1. Upload your `.intunewin` file, then fill out the required app information fields.
-
-1. In the **Program** tab, select the install.bat file as the installer, then for the uninstall command use `msiexec /x (6CE4170F-A4CD-47A0-ABFD-61C59E5F4B43)`, as shown in the following screenshot.
-
- :::image type="content" source="./media/install-client-per-user/uninstall-command.png" alt-text="A screenshot of the Program tab. The product code in the Uninstall command field is msiexec /x (6CE4170F-A4CD-47A0-ABFD-61C59E5F4B43)." lightbox="./media/install-client-per-user/uninstall-command.png" :::
-
-1. Toggle the **Install behavior** to **User**.
-
-1. In the **Detection rules** tab, enter the same MSI product code you used for the uninstall command.
-
-1. Follow the rest of the prompts until you complete the workflow.
-
-1. Follow the instructions in [Assign apps to groups with Microsoft Intune](/mem/intune/apps/apps-deploy) to deploy the client app to your users.
-
-#### [Configuration Manager](#tab/configmanager)
-
-1. Create a new folder in your package share.
-
-1. In this new folder, add the Remote Desktop client MSI file and an `install.bat` batch file with the following content:
-
- ```batch
- msiexec /i RemoteDesktop_x64.msi /qn ALLUSERS=2 MSIINSTALLPERUSER=1
- ```
-
- >[!NOTE]
- >The RemoteDesktop_x64.msi installer name must match the MSI contained in the folder.
-
-1. Open the **Configuration Manager** and go to **Software Library** > **Application Management** > **Applications**.
-
-1. Follow the directions in [Manually specify application information](/mem/configmgr/apps/deploy-use/create-applications#bkmk_manual-app) to create a new application with manually specified information.
-
-1. Enter the variables that apply to your organization into the **General Information** and **Software Center settings** fields.
-
-1. In the Deployment Types tab, select the **Add** button.
-
-1. Select **Script Installer** as the deployment type, then select **Next**.
-
-1. Enter the location of the folder you created in step 1 for the **Content location** field.
-
-1. Enter the path of the install.bat file in the **Installation program** field.
-
-1. For the **Uninstall program** field, enter `msiexec /x (6CE4170F-A4CD-47A0-ABFD-61C59E5F4B43)`.
-
- :::image type="content" source="./media/install-client-per-user/content-location-uninstall-id.png" alt-text="A screenshot of the Specify information about the content to be delivered to target devices window. The command msiexec /x (6CE4170F-A4CD-47A0-ABFD-61C59E5F4B43) is entered into the Uninstall program field ." lightbox="./media/install-client-per-user/content-location-uninstall-id.png" :::
-
-1. Next, enter the same MSI product ID you used for the uninstall command into the **Detection program** field.
-
- :::image type="content" source="./media/install-client-per-user/msi-product-code.png" alt-text="A screenshot of the Specify how this deployment type is detected window. In the rules box, the clause lists the product ID msiexec /x (6CE4170F-A4CD-47A0-ABFD-61C59E5F4B43)." lightbox="./media/install-client-per-user/msi-product-code.png" :::
-
-1. For User Experience, toggle the installation behavior to **Install for user**.
-
-1. Follow the rest of the prompts until you've finished the workflow.
-
-1. Once you're finished, follow the instructions in [Deploy applications with Configuration Manager](/mem/configmgr/apps/deploy-use/deploy-applications) to deploy the client app to your users.
---
-## Next steps
-
-Learn more about the Remote Desktop client at [Use features of the Remote Desktop client for Windows](./users/client-features-windows.md?toc=/azure/virtual-desktop/toc.json&bc=/azure/virtual-desktop/breadcrumb/toc.json).
virtual-desktop Install Windows Client Per User https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/install-windows-client-per-user.md
+
+ Title: Install the Remote Desktop client for Windows on a per-user basis with Intune or Configuration Manager - Azure
+description: How to install the Azure Virtual Desktop client on a per-user basis with Intune or Configuration Manager.
++ Last updated : 09/01/2023+++
+# Install the Remote Desktop client for Windows on a per-user basis with Intune or Configuration Manager
+
+You can install the [Remote Desktop client for Windows](./users/connect-windows.md) on either a per-system or per-user basis. Installing it on a per-system basis installs the client on the machine for all users by default, and administrators control updates. Per-user installation installs the application to a subfolder within the local AppData folder of each user's profile, enabling users to install updates without needing administrative rights.
+
+When you install the client using `msiexec.exe`, per-system is the default installation method. You can use the parameters `ALLUSERS=2 MSIINSTALLPERUSER=1` with `msiexec` to install the client per-user. However, if you're deploying the client with Intune or Configuration Manager, using `msiexec` directly causes the client to be installed per-system, regardless of the parameters used. Wrapping the `msiexec` command in a PowerShell script enables the client to be installed per-user successfully.
+
+## Prerequisites
+
+In order to install the Remote Desktop client for Windows on a per-user basis with Intune or Configuration Manager, you need the following things:
+
+- Download the latest version of [the Remote Desktop client for Windows](./users/connect-windows.md?toc=/azure/virtual-desktop/toc.json&bc=/azure/virtual-desktop/breadcrumb/toc.json).
+
+- Supported Windows devices managed by Microsoft Intune or Configuration Manager with permission to add applications.
+
+- For Intune, you need a local Windows device to use the [Microsoft Win32 Content Prep Tool](https://github.com/Microsoft/Microsoft-Win32-Content-Prep-Tool).
+
+## Install the Remote Desktop client per-user using a PowerShell script
+
+To install the client on a per-user basis using a PowerShell script, select the relevant tab for your scenario and follow the steps.
+
+#### [Intune](#tab/intune)
+
+Here's how to install the client on a per-user basis using a PowerShell script with Intune as a *Windows app (Win32)*.
+
+1. Create a new folder on your local Windows device and add the Remote Desktop client MSI file you downloaded.
+
+1. Within that folder, create a PowerShell script file called `Install.ps1` and add the following content, replacing `<RemoteDesktop>` with the filename of the `.msi` file you downloaded:
+
+ ```powershell
+ msiexec /i <RemoteDesktop>.msi /qn ALLUSERS=2 MSIINSTALLPERUSER=1
+ ```
+
+1. In the same folder, create a PowerShell script file called `Uninstall.ps1` and add the following content:
+
+ ```powershell
+ $productCode = (Get-WmiObject -Class Win32_Product | Where-Object {$_.Name -eq 'Remote Desktop' -and $_.Vendor -eq 'Microsoft Corporation'}).IdentifyingNumber
+
+ msiexec /x $productCode /qn
+ ```
+
+1. Follow the steps in [Prepare Win32 app content for upload](/mem/intune/apps/apps-win32-prepare) to package the contents of the folder into an `.intunewin` file.
+
+1. Follow the steps in [Add, assign, and monitor a Win32 app in Microsoft Intune](/mem/intune/apps/apps-win32-add) to add the Remote Desktop client. You need to specify the following information during the process:
+
+ | Parameter | Value/Description |
+ |--|--|
+ | Install command | `powershell.exe -ExecutionPolicy Bypass -WindowStyle Hidden -File .\Install.ps1` |
+ | Uninstall command | `powershell.exe -ExecutionPolicy Bypass -WindowStyle Hidden -File .\Uninstall.ps1` |
+ | Install behavior | Select **User**. |
+ | Operating system architecture | Select **64-bit**. |
+ | Detection rules format | Select **Manually configure detection rules**. |
+ | Detection rule type | Select **File**. |
+ | Detection rule path | `%LOCALAPPDATA%\Programs\Remote Desktop\` |
+ | Detection rule file or folder | `msrdc.exe` |
+ | Detection method | Select **File or folder exists**. |
+ | Assignments | Assign to users you want to use the Remote Desktop client. |
+
+#### [Configuration Manager](#tab/configmgr)
+
+Here's how to install the client on a per-user basis using a PowerShell script with Configuration Manager as a *Script Installer*.
+
+1. Create a new folder in your content location share for Configuration Manager and add the Remote Desktop client MSI file you downloaded.
+
+1. Within that folder, create a PowerShell script file called `Install.ps1` and add the following content, replacing `<RemoteDesktop>` with the filename of the `.msi` file you downloaded:
+
+ ```powershell
+ msiexec /i <RemoteDesktop>.msi /qn ALLUSERS=2 MSIINSTALLPERUSER=1
+ ```
+
+1. In the same folder, create a PowerShell script file called `Uninstall.ps1` and add the following content:
+
+ ```powershell
+ $productCode = (Get-WmiObject -Class Win32_Product | Where-Object {$_.Name -eq 'Remote Desktop' -and $_.Vendor -eq 'Microsoft Corporation'}).IdentifyingNumber
+
+ msiexec /x $productCode /qn
+ ```
+
+1. Follow the steps in [Create applications in Configuration Manager](/mem/configmgr/apps/deploy-use/create-applications) and [manually specify application information](/mem/configmgr/apps/deploy-use/create-applications#bkmk_manual-app) to add the Remote Desktop client. You need to specify the following information during the process:
+
+ | Parameter | Value/Description |
+ |--|--|
+ | Deployment type | Select **Script Installer**. |
+ | Content location | Enter the UNC path to the new folder you created. |
+ | Installation program | `powershell.exe -ExecutionPolicy Bypass -WindowStyle Hidden -File .\Install.ps1` |
+ | Uninstall program | `powershell.exe -ExecutionPolicy Bypass -WindowStyle Hidden -File .\Uninstall.ps1` |
+ | Detection method | Select **Configure rules to detect the presence of this deployment type**. |
+ | Detection rule setting type | Select **File System**. |
+ | Detection rule type | Select **File**. |
+ | Detection rule path | `%LOCALAPPDATA%\Programs\Remote Desktop\` |
+ | Detection rule file or folder name | `msrdc.exe` |
+ | Detection rule criteria | Select **The file system setting must exist on the target system to indicate presence of this application**. |
+ | Installation behavior | Select **Install for user**. |
+
+1. Follow the steps in [Deploy applications with Configuration Manager](/mem/configmgr/apps/deploy-use/deploy-applications) to deploy the Remote Desktop client to your users.
+++
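Whichever tool you use, you can confirm that a per-user install succeeded for a signed-in user by checking for the client executable at the same path the detection rules above use, for example:

```powershell
# Returns True if the client is installed in the current user's profile.
Test-Path "$env:LOCALAPPDATA\Programs\Remote Desktop\msrdc.exe"
```
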
+## Next steps
+
+Learn more about the Remote Desktop client at [Use features of the Remote Desktop client for Windows](./users/client-features-windows.md?toc=/azure/virtual-desktop/toc.json&bc=/azure/virtual-desktop/breadcrumb/toc.json).
virtual-desktop Multimedia Redirection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/multimedia-redirection.md
Before you can use multimedia redirection on Azure Virtual Desktop, you'll need
- Windows Desktop client: - To use video playback redirection, you must install [Windows Desktop client, version 1.2.3916 or later](/windows-server/remote/remote-desktop-services/clients/windowsdesktop-whatsnew). This feature is only compatible with version 1.2.3916 or later of the Windows Desktop client.
- - To use call redirection, you must install the Windows Desktop client, version 1.2.4237 or later with [Insider releases enabled](./users/client-features-windows.md#enable-insider-releases).
+ - To use call redirection, you must install the Windows Desktop client, version 1.2.4337 or later with [Insider releases enabled](./users/client-features-windows.md#enable-insider-releases).
- Microsoft Visual C++ Redistributable 2015-2022, version 14.32.31332.0 or later installed on your session hosts and Windows client devices. You can download the latest version from [Microsoft Visual C++ Redistributable latest supported downloads](/cpp/windows/latest-supported-vc-redist).
The following section will show you how to use advanced features for call redire
#### Enable call redirection for all sites
-Call redirection is currently limited to the web apps listed in [Websites that work with multimedia redirection](multimedia-redirection-intro.md#websites-that-work-with-multimedia-redirection) by default. If you're using a listed calling app with an internal URL, you must turn the **Enable WebRTC for all sites** setting to use call redirection. You can also enable call redirection for all sites to test the feature with web apps that aren't officially supported yet.
+Call redirection is currently limited to the web apps listed in [Websites that work with multimedia redirection](multimedia-redirection-intro.md#websites-that-work-with-multimedia-redirection) by default. If you're using one of the calling apps listed in [Call redirection](multimedia-redirection-intro.md#call-redirection) with an internal URL, you must turn on the **Enable WebRTC for all sites** setting to use call redirection. You can also enable call redirection for all sites to test the feature with web apps that aren't officially supported yet.
To enable call redirection for all sites:
virtual-desktop Private Link Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/private-link-overview.md
Private Link with Azure Virtual Desktop has the following limitations:
- Using both Private Link and [RDP Shortpath](./shortpath.md) at the same time isn't currently supported. -- Azure PowerShell cmdlets for Azure Virtual Desktop that support Private Link are in preview. You'll need to download and install the [preview version of the Az.DesktopVirtualization module](https://www.powershellgallery.com/packages/Az.DesktopVirtualization/5.0.0-preview) to use these cmdlets, which have been added in version 5.0.0.
+- Early in the preview of Private Link with Azure Virtual Desktop, the private endpoint for the initial feed discovery (for the *global* sub-resource) shared the private DNS zone name of `privatelink.wvd.microsoft.com` with other private endpoints for workspaces and host pools. In this configuration, users are unable to establish private endpoints exclusively for host pools and workspaces. Starting September 1, 2023, sharing the private DNS zone in this configuration will no longer be supported. You need to create a new private endpoint for the *global* sub-resource to use the private DNS zone name of `privatelink-global.wvd.microsoft.com`. For the steps to do this, see [Initial feed discovery](private-link-setup.md#initial-feed-discovery).
+
+- Azure PowerShell cmdlets for Azure Virtual Desktop that support Private Link are in preview. You need to download and install the [preview version of the Az.DesktopVirtualization module](https://www.powershellgallery.com/packages/Az.DesktopVirtualization/5.0.0-preview) to use these cmdlets, which have been added in version 5.0.0.
## Next steps
virtual-desktop Troubleshoot Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/troubleshoot-agent.md
To resolve this issue, first reinstall the side-by-side stack:
1. From a command prompt run `qwinsta.exe` again and verify the *STATE* column for **rdp-tcp** and **rdp-sxs** entries is **Listen**. If not, you must [re-register your VM and reinstall the agent](#your-issue-isnt-listed-here-or-wasnt-resolved) component.
-## Error: Session host VMs are stuck in Unavailable state
+## Error: Session hosts are stuck in Unavailable state
If your session host VMs are stuck in the Unavailable state, your VM didn't pass one of the health checks listed in [Health check](troubleshoot-statuses-checks.md#health-check). You must resolve the issue that's causing the VM to not pass the health check.
-## Error: VMs are stuck in the "Needs Assistance" state
+## Error: Session hosts are stuck in the Needs Assistance state
+
+There are several health checks that can cause your session host VMs to be stuck in the **Needs Assistance** state: *UrlsAccessibleCheck*, *MetaDataServiceCheck*, and *MonitoringAgentCheck*.
+
+### UrlsAccessibleCheck
If the session host doesn't pass the *UrlsAccessibleCheck* health check, you'll need to identify which [required URL](safe-url-list.md) your deployment is currently blocking. Once you know which URL is blocked, identify which setting is blocking that URL and remove it.
If your local hosts file is blocking the required URLs, make sure none of the re
**Name:** DataBasePath
+### MetaDataServiceCheck
+ If the session host doesn't pass the *MetaDataServiceCheck* health check, then the service can't access the IMDS endpoint. To resolve this issue, you'll need to do the following things: - Reconfigure your networking, firewall, or proxy settings to unblock the IP address 169.254.169.254.
If your issue is caused by a web proxy, add an exception for 169.254.169.254 in
netsh winhttp set proxy proxy-server="http=<customerwebproxyhere>" bypass-list="169.254.169.254" ```
+### MonitoringAgentCheck
+
+If the session host doesn't pass the *MonitoringAgentCheck* health check, you'll need to check the *Remote Desktop Services Infrastructure Geneva Agent* and validate that it's functioning correctly on the session host:
+
+1. Verify that the Remote Desktop Services Infrastructure Geneva Agent is installed on the session host. You can verify this in the list of installed programs on the session host. If you see multiple versions of this agent installed, uninstall older versions and keep only the latest version installed.
+
+1. If you don't find the Remote Desktop Services Infrastructure Geneva Agent installed on the session host, review the log file at *C:\Program Files\Microsoft RDInfra\GenevaInstall.txt* to see whether the installation is failing due to an error.
+
+1. Verify that the scheduled task *GenevaTask_\<version\>* exists. This scheduled task must be enabled and running. If it isn't, reinstall the agent using the `.msi` file named **Microsoft.RDInfra.Geneva.Installer-x64-\<version\>.msi**, which is available at **C:\Program Files\Microsoft RDInfra**.
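For example, you can quickly check the task's state with the built-in `ScheduledTasks` PowerShell module; the exact task name varies with the agent version:

```powershell
# List the Geneva agent scheduled task and its current state.
Get-ScheduledTask | Where-Object { $_.TaskName -like 'GenevaTask*' } |
    Select-Object TaskName, State
```
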
+ ## Error: Connection not found: RDAgent does not have an active connection to the broker Your session host VMs may be at their connection limit and can't accept new connections.
You must generate a new registration key that is used to re-register your sessio
### Step 4: Reinstall the agent and boot loader
-Reinstalling the latest version of the agent and boot loader also automatically installs the side-by-side stack and Geneva monitoring agent. To reinstall the agent and boot loader:
+Reinstalling the latest version of the agent and boot loader also automatically installs the side-by-side stack and Geneva monitoring agent. To reinstall the agent and boot loader, follow these steps, which use the latest downloadable version of the Azure Virtual Desktop Agent for [non-validation environments](terminology.md#validation-environment). For more information about the rollout of new versions of the agent, see [What's new in the Azure Virtual Desktop Agent](whats-new-agent.md#latest-agent-versions).
1. Sign in to your session host VM as an administrator and run the agent installer and bootloader for your session host VM:
virtual-desktop Troubleshoot Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/troubleshoot-insights.md
If your data isn't displaying properly, check the following common solutions:
- [Log Analytics Firewall Requirements](../azure-monitor/agents/log-analytics-agent.md#firewall-requirements). - Not seeing data from recent activity? You may want to wait for 15 minutes and refresh the feed. Azure Monitor has a 15-minute latency period for populating log data. To learn more, see [Log data ingestion time in Azure Monitor](../azure-monitor/logs/data-ingestion-time.md).
-If you're not missing any information but your data still isn't displaying properly, there may be an issue in the query or the data sources. Review [known issues and limitations](#known-issues-and-limitations).
+If you're not missing any information but your data still isn't displaying properly, there may be an issue in the query or the data sources. For more information, see [known issues and limitations](#known-issues-and-limitations).
# [Azure Monitor Agent (preview)](#tab/monitor)
If this article doesn't have the data point you need to resolve an issue, you ca
- To learn how to leave feedback, see [Troubleshooting overview, feedback, and support for Azure Virtual Desktop](troubleshoot-set-up-overview.md). - You can also leave feedback for Azure Virtual Desktop at the [Azure Virtual Desktop feedback hub](https://support.microsoft.com/help/4021566/windows-10-send-feedback-to-microsoft-with-feedback-hub-app). ++ ## Known issues and limitations The following are issues and limitations we're aware of and working to fix:
The following are issues and limitations we're aware of and working to fix:
- Do you see contradicting or unexpected connection times? While rare, a connection's completion event can go missing and can impact some visuals and metrics. - Time to connect includes the time it takes users to enter their credentials; this correlates to the experience but in some cases can show false peaks. -- ## Next steps - To get started, see [Use Azure Virtual Desktop Insights to monitor your deployment](insights.md).
virtual-desktop Troubleshoot Statuses Checks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/troubleshoot-statuses-checks.md
Title: Azure Virtual Desktop session host statuses and health checks description: How to troubleshoot the failed session host statuses and failed health checks-+ Last updated 05/03/2023--+ # Azure Virtual Desktop session host statuses and health checks
The following table lists all statuses for session hosts in the Azure portal eac
| Session host status | Description | How to resolve related issues | |||| |Available| This status means that the session host passed all health checks and is available to accept user connections. If a session host has reached its maximum session limit but has passed health checks, it's still listed as "Available." |N/A|
-|Needs Assistance|The session host didn't pass one or more of the following non-fatal health checks: the Geneva Monitoring Agent health check, the Azure Instance Metadata Service (IMDS) health check, or the URL health check. In this state, users can connect to VMs, but their user experience may degrade. You can find which health checks failed in the Azure portal by going to the **Session hosts** tab and selecting the name of your session host. |Follow the directions in [Error: VMs are stuck in "Needs Assistance" state](troubleshoot-agent.md#error-vms-are-stuck-in-the-needs-assistance-state) to resolve the issue.|
+|Needs Assistance|The session host didn't pass one or more of the following non-fatal health checks: the Geneva Monitoring Agent health check, the Azure Instance Metadata Service (IMDS) health check, or the URL health check. In this state, users can connect to VMs, but their user experience may degrade. You can find which health checks failed in the Azure portal by going to the **Session hosts** tab and selecting the name of your session host. |Follow the directions in [Error: Session hosts are stuck in "Needs Assistance" state](troubleshoot-agent.md#error-session-hosts-are-stuck-in-the-needs-assistance-state) to resolve the issue.|
|Shutdown| The session host has been shut down. If the agent enters a shutdown state before connecting to the broker, its status changes to *Unavailable*. If you've shut down your session host and see an *Unavailable* status, that means the session host shut down before it could update the status, and doesn't indicate an issue. You should use this status with the [VM instance view API](/rest/api/compute/virtual-machines/instance-view?tabs=HTTP#virtualmachineinstanceview) to determine the power state of the VM. |Turn on the session host. | |Unavailable| The session host is either turned off or hasn't passed fatal health checks, which prevents user sessions from connecting to this session host. |If the session host is off, turn it back on. If the session host didn't pass the domain join check or side-by-side stack listener health checks, refer to the table in [Health check](#health-check) for ways to resolve the issue. If the status is still "Unavailable" after following those directions, open a support case.| |Upgrade Failed| This status means that the Azure Virtual Desktop Agent couldn't update or upgrade. This status doesn't affect new nor existing user sessions. |Follow the instructions in the [Azure Virtual Desktop Agent troubleshooting article](troubleshoot-agent.md).|
The health check is a test run by the agent on the session host. The following t
| Geneva Monitoring Agent | Verifies that the session host has a healthy monitoring agent by checking if the monitoring agent is installed and running in the expected registry location. | If this check fails, it's semi-fatal. There may be successful connections, but they'll contain no logging information. To resolve this issue, make sure a monitoring agent is installed. If it's already installed, contact Microsoft support. | | Azure Instance Metadata Service (IMDS) reachable | Verifies that the service can access the IMDS endpoint. | If this check fails, it's semi-fatal. There may be successful connections, but they won't contain logging information. To resolve this issue, you'll need to reconfigure your networking, firewall, or proxy settings. | | Side-by-side (SxS) Stack Listener | Verifies that the side-by-side stack is up and running, listening, and ready to receive connections. | If this check fails, it's fatal, and users won't be able to connect to the session host. Try restarting your virtual machine (VM). If restarting doesn't work, contact Microsoft support. |
-| UrlsAccessibleCheck | Verifies that the required Azure Virtual Desktop service and Geneva URLs are reachable from the session host, including the RdTokenUri, RdBrokerURI, RdDiagnosticsUri, and storage blob URLs for Geneva agent monitoring. | If this check fails, it isn't always fatal. Connections may succeed, but if certain URLs are inaccessible, the agent can't apply updates or log diagnostic information. To resolve this issue, follow the directions in [Error: VMs are stuck in the Needs Assistance state](troubleshoot-agent.md#error-vms-are-stuck-in-the-needs-assistance-state). |
+| UrlsAccessibleCheck | Verifies that the required Azure Virtual Desktop service and Geneva URLs are reachable from the session host, including the RdTokenUri, RdBrokerURI, RdDiagnosticsUri, and storage blob URLs for Geneva agent monitoring. | If this check fails, it isn't always fatal. Connections may succeed, but if certain URLs are inaccessible, the agent can't apply updates or log diagnostic information. To resolve this issue, follow the directions in [Error: Session hosts are stuck in the Needs Assistance state](troubleshoot-agent.md#error-session-hosts-are-stuck-in-the-needs-assistance-state). |
| TURN (Traversal Using Relay NAT) Relay Access Health Check | When using [RDP Shortpath for public networks](rdp-shortpath.md?tabs=public-networks#how-rdp-shortpath-works) with an indirect connection, TURN uses User Datagram Protocol (UDP) to relay traffic between the client and session host through an intermediate server when direct connection isn't possible. | If this check fails, it's not fatal. Connections revert to the websocket TCP and the session host enters the "Needs assistance" state. To resolve the issue, follow the instructions in [Disable RDP Shortpath on managed and unmanaged windows clients using group policy](configure-rdp-shortpath.md?tabs=public-networks#disable-rdp-shortpath-on-managed-and-unmanaged-windows-clients-using-group-policy). | | App attach health check | Verifies that the [MSIX app attach](what-is-app-attach.md) service is working as intended during package staging or destaging. | If this check fails, it isn't fatal. However, certain apps stop working for end-users. | | Domain reachable | Verifies the domain the session host is joined to is still reachable. | If this check fails, it's fatal. The service won't be able to connect if it can't reach the domain. |
virtual-desktop Deploy Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/virtual-desktop-fall-2019/deploy-diagnostics.md
Title: Deploy the diagnostics tool for Azure Virtual Desktop (classic) - Azure
description: How to deploy the diagnostics UX tool for Azure Virtual Desktop (classic). + Last updated 12/15/2020
You can also interact with users on the session host:
## Next steps - Learn how to monitor activity logs at [Use diagnostics with Log Analytics](diagnostics-log-analytics-2019.md).-- Read about common error scenarios and how to fix them at [Identify and diagnose issues](diagnostics-role-service-2019.md).
+- Read about common error scenarios and how to fix them at [Identify and diagnose issues](diagnostics-role-service-2019.md).
virtual-desktop Manage Resources Using Ui Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/virtual-desktop-fall-2019/manage-resources-using-ui-powershell.md
Last updated 03/30/2020 -+
virtual-desktop Whats New Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/whats-new-agent.md
Title: What's new in the Azure Virtual Desktop Agent? - Azure
description: New features and product updates for the Azure Virtual Desktop Agent. Previously updated : 08/08/2023 Last updated : 09/06/2023
New versions of the Azure Virtual Desktop Agent are installed automatically. Whe
A rollout may take several weeks before the agent is available in all environments. Some agent versions may not reach non-validation environments, so you may see multiple versions of the agent deployed across your environments.
-## Version 1.0.7255.800
+| Release | Latest version |
+|--|--|
+| Production | 1.0.7033.1401 |
+| Validation | 1.0.7255.800 |
+
+## Version 1.0.7255.800 (validation)
This update was released at the end of July 2023 and includes the following changes:
virtual-desktop Whats New Client Android Chrome Os https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/whats-new-client-android-chrome-os.md
description: Learn about recent changes to the Remote Desktop client for Android
Previously updated : 01/04/2023 Last updated : 08/21/2023 # What's new in the Remote Desktop client for Android and Chrome OS
virtual-desktop Whats New Client Macos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/whats-new-client-macos.md
description: Learn about recent changes to the Remote Desktop client for macOS
Previously updated : 06/26/2023 Last updated : 08/21/2023 # What's new in the Remote Desktop client for macOS
virtual-desktop Whats New Client Windows Azure Virtual Desktop App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/whats-new-client-windows-azure-virtual-desktop-app.md
Title: What's new in the Azure Virtual Desktop Store app for Windows (preview) - Azure Virtual Desktop description: Learn about recent changes to the Azure Virtual Desktop Store app for Windows. -- Previously updated : 08/04/2023++ Last updated : 08/29/2023 # What's new in the Azure Virtual Desktop Store app for Windows (preview)
Last updated 08/04/2023
In this article you'll learn about the latest updates for the Azure Virtual Desktop Store app for Windows. To learn more about using the Azure Virtual Desktop Store app for Windows with Azure Virtual Desktop, see [Connect to Azure Virtual Desktop with the Azure Virtual Desktop Store app for Windows](users/connect-windows-azure-virtual-desktop-app.md) and [Use features of the Azure Virtual Desktop Store app for Windows when connecting to Azure Virtual Desktop](users/client-features-windows-azure-virtual-desktop-app.md).
-## Latest client versions
+## Supported client versions
-The following table lists the current version available for the public release. To enable Insider releases, see [Enable Insider releases](users/client-features-windows-azure-virtual-desktop-app.md#enable-insider-releases).
+The following table lists the current versions available for the public and Insider releases. To enable Insider releases, see [Enable Insider releases](users/client-features-windows-azure-virtual-desktop-app.md#enable-insider-releases).
| Release | Latest version | Download | |-||-| | Public | 1.2.4487 | [Microsoft Store](https://aka.ms/AVDStoreClient) |
-| Insider | 1.2.4487 | Download the public release, then [Enable Insider releases](users/client-features-windows-azure-virtual-desktop-app.md#enable-insider-releases) and check for updates. |
+| Insider | 1.2.4577 | Download the public release, then [Enable Insider releases](users/client-features-windows-azure-virtual-desktop-app.md#enable-insider-releases) and check for updates. |
+
+## Updates for version 1.2.4577 (Insider)
+
+*Date published: August 29, 2023*
+
+In this release, we've made the following changes:
+
+- Fixed an issue where, when using the default display settings and a change is made to the system display settings, the bar doesn't show when hovering over the top of the screen after it's hidden.
+- Improved client logging, diagnostics, and error classification to help admins troubleshoot connection and feed issues.
+- Accessibility improvements:
+ - Narrator now announces the view mode selector as "*View combo box*", instead of "*Tile view combo box*" or "*List view combo box*".
+ - Narrator now focuses on and announces **Learn more** hyperlinks.
+ - Keyboard focus is now set correctly when a warning dialog loads.
+ - Tooltip for the close button on the **About** panel now dismisses when keyboard focus moves.
+ - Keyboard focus is now properly displayed for certain drop-down selectors in the **Settings** panel for published desktops.
## Updates for version 1.2.4487
virtual-desktop Whats New Client Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/whats-new-client-windows.md
description: Learn about recent changes to the Remote Desktop client for Windows
Previously updated : 08/01/2023 Last updated : 08/31/2023 # What's new in the Remote Desktop client for Windows
The following table lists the current versions available for the public and Insi
| Release | Latest version | Download |
|---------|----------------|----------|
| Public | 1.2.4487 | [Windows 64-bit](https://go.microsoft.com/fwlink/?linkid=2139369) *(most common)*<br />[Windows 32-bit](https://go.microsoft.com/fwlink/?linkid=2139456)<br />[Windows ARM64](https://go.microsoft.com/fwlink/?linkid=2139370) |
-| Insider | 1.2.4487 | [Windows 64-bit](https://go.microsoft.com/fwlink/?linkid=2139233) *(most common)*<br />[Windows 32-bit](https://go.microsoft.com/fwlink/?linkid=2139144)<br />[Windows ARM64](https://go.microsoft.com/fwlink/?linkid=2139368) |
+| Insider | 1.2.4577 | [Windows 64-bit](https://go.microsoft.com/fwlink/?linkid=2139233) *(most common)*<br />[Windows 32-bit](https://go.microsoft.com/fwlink/?linkid=2139144)<br />[Windows ARM64](https://go.microsoft.com/fwlink/?linkid=2139368) |
+
+## Updates for version 1.2.4577 (Insider)
+
+*Date published: August 29, 2023*
+
+Download: [Windows 64-bit](https://go.microsoft.com/fwlink/?linkid=2139233), [Windows 32-bit](https://go.microsoft.com/fwlink/?linkid=2139144), [Windows ARM64](https://go.microsoft.com/fwlink/?linkid=2139368)
+
+In this release, we've made the following changes:
+
+- Fixed an issue where, when using the default display settings and a change is made to the system display settings, the bar doesn't show when hovering over the top of the screen after it's hidden.
+- Improved client logging, diagnostics, and error classification to help admins troubleshoot connection and feed issues.
+- Accessibility improvements:
+ - Narrator now announces the view mode selector as "*View combo box*", instead of "*Tile view combo box*" or "*List view combo box*".
+ - Narrator now focuses on and announces **Learn more** hyperlinks.
+ - Keyboard focus is now set correctly when a warning dialog loads.
+ - Tooltip for the close button on the **About** panel now dismisses when keyboard focus moves.
+ - Keyboard focus is now properly displayed for certain drop-down selectors in the **Settings** panel for published desktops.
## Updates for version 1.2.4487
In this release, we've made the following changes:
Download: [Windows 64-bit](https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RW17f1J), [Windows 32-bit](https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RW17mKo), [Windows ARM64](https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RW17sgF)
-In this release, we've made the following changes:
+In this release, we've made the following changes:
-- Added a new RDP file property called *allowed security protocols*. This property restricts the list of security protocols the client can negotiate.
-- Improved client logging, diagnostics, and error classification to help admins troubleshoot connection and feed issues.
+- Added a new RDP file property called *allowed security protocols*. This property restricts the list of security protocols the client can negotiate.
+- Improved client logging, diagnostics, and error classification to help admins troubleshoot connection and feed issues.
- Accessibility improvements:
  - Narrator now describes the toggle button in the display settings side panel as *toggle button* instead of *button*.
- - Control types for text now correctly say that they're *text* and not *custom*.
- - Fixed an issue where Narrator didn't read the error message that appears after the user selects **Delete**.
- - Added heading-level description to **Subscribe with URL**.
+ - Control types for text now correctly say that they're *text* and not *custom*.
+ - Fixed an issue where Narrator didn't read the error message that appears after the user selects **Delete**.
+ - Added heading-level description to **Subscribe with URL**.
- Dialog improvements:
- - Updated **file** and **URI launch** dialog error handling messages to be more specific and user-friendly.
+ - Updated **file** and **URI launch** dialog error handling messages to be more specific and user-friendly.
- The client now displays an error message after unsuccessfully checking for updates instead of incorrectly notifying the user that the client is up to date.
- Fixed an issue where, after having been automatically reconnected to the remote session, the **connection information** dialog gave inconsistent information about identity verification.
In this release, we've made the following changes:
*Date published: July 6, 2023*
-In this release, we've made the following changes:
+In this release, we've made the following changes:
- General improvements to Narrator experience.
- Fixed an issue that caused the text in the message for subscribing to workspaces to be cut off when the user increases the text size.
- Fixed an issue that caused the client to sometimes stop responding when attempting to start new connections.
-- Improved client logging, diagnostics, and error classification to help admins troubleshoot connection and feed issues.
+- Improved client logging, diagnostics, and error classification to help admins troubleshoot connection and feed issues.
-## Updates for version 1.2.4337
+## Updates for version 1.2.4337
-*Date published: June 13, 2023*
+*Date published: June 13, 2023*
-In this release, we've made the following changes:
+In this release, we've made the following changes:
- Fixed the vulnerability known as [CVE-2023-29362](https://msrc.microsoft.com/update-guide/vulnerability/CVE-2023-29362). - Fixed the vulnerability known as [CVE-2023-29352](https://msrc.microsoft.com/update-guide/vulnerability/CVE-2023-29352).
In this release, we've made the following changes:
- Fixed an application compatibility issue that affected preview versions of Windows.
- Moved the identity verification method from the lock window message in the connection bar to the end of the connection info message.
- Changed the error message that appears when the session host can't reach the authenticator to validate a user's credentials to be clearer.
-- Added a reconnect button to the disconnect message boxes that appear whenever the local PC goes into sleep mode or the session is locked.
+- Added a reconnect button to the disconnect message boxes that appear whenever the local PC goes into sleep mode or the session is locked.
- Improved client logging, diagnostics, and error classification to help admins troubleshoot connection and feed issues.
-## Updates for version 1.2.4240
+## Updates for version 1.2.4240
*Date published: May 16, 2023*
-In this release, we've made the following changes:
+In this release, we've made the following changes:
- Fixed an issue where the connection bar remained visible on local sessions when the user changed their contrast themes.
-- Made minor changes to connection bar UI, including improved button sizing.
-- Fixed an issue where the client stopped responding if closed from the system tray.
-- Improved client logging, diagnostics, and error classification to help admins troubleshoot connection and feed issues.
+- Made minor changes to connection bar UI, including improved button sizing.
+- Fixed an issue where the client stopped responding if closed from the system tray.
+- Improved client logging, diagnostics, and error classification to help admins troubleshoot connection and feed issues.
## Updates for version 1.2.4159
In this release, we've made the following changes:
- Fixed a bug where users aren't able to update the client if the client is installed with the flags *ALLUSERS=2* and *MSIINSTALLPERUSER=1*
- Fixed an issue that made the client disconnect and display error message 0x3000018 instead of showing a prompt to reconnect if the endpoint doesn't let users save their credentials.
- Fixed the vulnerability known as [CVE-2023-28267](https://msrc.microsoft.com/update-guide/vulnerability/CVE-2023-28267).
-- Fixed an issue that generated duplicate Activity IDs for unique connections.
+- Fixed an issue that generated duplicate Activity IDs for unique connections.
- Improved client logging, diagnostics, and error classification to help admins troubleshoot connection and feed issues.
- Fixed an application compatibility issue for preview versions of Windows.
In this release, we've made the following changes:
In this release, we've made the following changes:

- Fixed a bug where refreshes increased memory usage.
-- Improved client logging, diagnostics, and error classification to help admins troubleshoot connection and feed issues.
+- Improved client logging, diagnostics, and error classification to help admins troubleshoot connection and feed issues.
- Updates to Teams for Azure Virtual Desktop, including the following:
  - Bug fix for Background Effects persistence between Teams sessions.
- Updates to MMR for Azure Virtual Desktop, including the following:
  - Various bug fixes for multimedia redirection (MMR) video playback redirection.
- - [Multimedia redirection for Azure Virtual Desktop](multimedia-redirection.md) is now generally available.
+ - [Multimedia redirection for Azure Virtual Desktop](multimedia-redirection.md) is now generally available.
>[!IMPORTANT]
>This is the final version of the Remote Desktop client with Windows 7 support. After this version, if you try to use the Remote Desktop client with Windows 7, it may not work as expected. For more information about which versions of Windows the Remote Desktop client currently supports, see [Prerequisites](./users/connect-windows.md?toc=%2Fazure%2Fvirtual-desktop%2Ftoc.json&tabs=subscribe#prerequisites).
virtual-desktop Whats New Documentation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/whats-new-documentation.md
description: Learn about new and updated articles to the Azure Virtual Desktop d
Previously updated : 08/01/2023 Last updated : 08/30/2023 # What's new in documentation for Azure Virtual Desktop We update documentation for Azure Virtual Desktop regularly. In this article we highlight articles for new features and where there have been important updates to existing articles.
+## August 2023
+
+In August 2023, we published the following changes:
+
+- Updated [Administrative template for Azure Virtual Desktop](administrative-template.md) to include being able to configure settings using the settings catalog in Intune.
+- A new article for [Use cases for Azure Virtual Desktop Insights](insights-use-cases.md) that includes example scenarios for how you can use Azure Virtual Desktop Insights to help understand your Azure Virtual Desktop environment.
+
## July 2023

In July 2023, we published the following changes:
In June 2023, we published the following changes:
- Updated [Use Azure Virtual Desktop Insights](insights.md) to use the Azure Monitor Agent.
- Updated [Supported features for Microsoft Teams on Azure Virtual Desktop](teams-supported-features.md) to include simulcast, mirror my video, manage breakout rooms, call health panel.
-- New article to [Assign RBAC roles to the Azure Virtual Desktop service principal](service-principal-assign-roles.md).
+- A new article to [Assign RBAC roles to the Azure Virtual Desktop service principal](service-principal-assign-roles.md).
- Added Intune to [Administrative template for Azure Virtual Desktop](administrative-template.md).
- Updated [Configure single sign-on using Azure AD Authentication](configure-single-sign-on.md) to include how to use an Active Directory domain admin account with single sign-on, and highlight the need to create a Kerberos server object.
virtual-desktop Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/whats-new.md
Title: What's new in Azure Virtual Desktop? - Azure
description: New features and product updates for Azure Virtual Desktop. Previously updated : 07/18/2023 Last updated : 08/22/2023
Make sure to check back here often to keep up with new updates.
Here's what changed in July 2023:
+### Watermarking is now generally available
+
+[Watermarking](watermarking.md), when used with [screen capture protection](#screen-capture-protection), helps protect your sensitive information from capture on client endpoints. When you enable watermarking, QR code watermarks appear as part of remote desktops. The QR code contains the connection ID of a remote session that admins can use to trace the session. You can configure watermarking on session hosts and enforce it with the Remote Desktop client.
+
+### Audio call redirection for Azure Virtual Desktop in preview
+
+Call redirection, which optimizes audio calls for WebRTC-based calling apps, is now in preview. Multimedia redirection redirects media content from Azure Virtual Desktop to your local machine for faster processing and rendering. Both Microsoft Edge and Google Chrome support this feature when using the Windows Desktop client.
+
+For more information about which sites are compatible with this feature, see [Call redirection](multimedia-redirection-intro.md#call-redirection).
+
### Autoscale for personal host pools is currently in preview

Autoscale for personal host pools is now in preview. Autoscale lets you scale your session host virtual machines (VMs) in a host pool up or down according to a schedule to optimize deployment costs.
virtual-machine-scale-sets Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/policy-reference.md
Previously updated : 08/08/2023 Last updated : 08/30/2023 # Azure Policy built-in definitions for Azure Virtual Machine Scale Sets
virtual-machine-scale-sets Virtual Machine Scale Sets Orchestration Modes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-orchestration-modes.md
The following table compares the Flexible orchestration mode, Uniform orchestrat
| Monitor Application Health | Application health extension | Application health extension or Azure load balancer probe | Application health extension |
| Instance Repair (Virtual Machine Scale Set) | Yes, read [Instance Repair documentation](../virtual-machine-scale-sets/virtual-machine-scale-sets-automatic-instance-repairs.md) | Yes, read [Instance Repair documentation](../virtual-machine-scale-sets/virtual-machine-scale-sets-automatic-instance-repairs.md) | N/A |
| Instance Protection | No, use [Azure resource lock](../azure-resource-manager/management/lock-resources.md) | Yes | No |
-| Scale In Policy | No | Yes | No |
+| Scale In Policy | Yes | Yes | No |
| VMSS Get Instance View | No | Yes | N/A |
| VM Batch Operations (Start all, Stop all, delete subset, etc.) | Yes | Yes | No |
The following Virtual Machine Scale Set parameters aren't currently supported wi
- Application health via SLB health probe - use Application Health Extension on instances - Virtual Machine Scale Set upgrade policy - must be null or empty - Unmanaged disks-- Virtual Machine Scale Set Scale in Policy - Virtual Machine Scale Set Instance Protection - Basic Load Balancer - Port Forwarding via Standard Load Balancer NAT Pool - you can configure NAT rules
virtual-machine-scale-sets Virtual Machine Scale Sets Scale In Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-scale-in-policy.md
The scale-in policy feature provides users a way to configure the order in which
2. NewestVM
3. OldestVM
-> [!IMPORTANT]
-> Flexible orchestration for Virtual Machine Scale Sets does not currently support scale-in policy.
- ### Default scale-in policy
+#### Flexible orchestration
+With this policy, virtual machines are scaled in after balancing across availability zones (if the scale set is deployed in a zonal configuration), and the oldest virtual machine, as determined by `createdTime`, is scaled in first.
+Note that balancing across fault domains isn't available with the Default policy in Flexible orchestration mode.
+
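As a quick illustration (resource names are placeholders, and this is a sketch rather than the article's own sample), you can pick the scale-in policy with the Azure CLI when creating or updating a Flexible scale set:

```azurecli
# Minimal sketch (illustrative names): set a scale-in policy at creation time.
az vmss create \
  --resource-group myResourceGroup \
  --name myScaleSet \
  --image Ubuntu2204 \
  --orchestration-mode Flexible \
  --scale-in-policy OldestVM

# Or change the policy later on an existing scale set.
az vmss update \
  --resource-group myResourceGroup \
  --name myScaleSet \
  --scale-in-policy Default
```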
+#### Uniform orchestration
By default, Virtual Machine Scale Set applies this policy to determine which instance(s) will be scaled in. With the *Default* policy, VMs are selected for scale-in in the following order:

1. Balance virtual machines across availability zones (if the scale set is deployed in zonal configuration)
virtual-machines Bpsv2 Arm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/bpsv2-arm.md
Last updated 06/09/2023
-# Bpsv2-series (Public Preview)
+# Bpsv2-series
The Bpsv2-series virtual machines are based on the Arm architecture, featuring the Ampere® Altra® Arm-based processor operating at 3.0 GHz, and deliver outstanding price-performance for general-purpose workloads. These virtual machines offer a range of sizes, from 0.5 GiB to 4 GiB of memory per vCPU, to meet the needs of applications that don't need the full performance of the CPU continuously, such as development and test servers, low-traffic web servers, small databases, microservices, servers for proofs of concept, build servers, and code repositories. These workloads typically have burstable performance requirements. Bpsv2-series VMs let you purchase a VM size with baseline performance that builds up credits when the VM uses less than its baseline. When the VM has accumulated credits, it can burst above the baseline, using up to 100% of the vCPU, when your application requires higher CPU performance.
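As a quick, hedged illustration (resource names and image are placeholders; Bpsv2 is Arm64-based, so the image must be an Arm64 build), deploying a Bpsv2-series VM with the Azure CLI might look like:

```azurecli
# Minimal sketch (illustrative names): create a burstable Arm64 VM on a Bpsv2 size.
az vm create \
  --resource-group myResourceGroup \
  --name myBurstableVM \
  --size Standard_B2pts_v2 \
  --image Canonical:0001-com-ubuntu-server-jammy:22_04-lts-arm64:latest \
  --generate-ssh-keys
```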
virtual-machines Classic Vm Deprecation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/classic-vm-deprecation.md
VMs created using the classic deployment model will follow the [Modern Lifecycle
## How does this affect me?

- As of February 28, 2020, customers who didn't utilize IaaS VMs through ASM in the month of February 2020 can no longer create VMs (classic).
-- On September 6, 2023, customers will no longer be able to start IaaS VMs by using ASM. Any that are still running or allocated will be stopped and deallocated.
-- On September 6, 2023, subscriptions that are not migrated to Azure Resource Manager will be informed regarding timelines for deleting any remaining VMs (classic).
+- On September 6, 2023, any classic VM that hasn't been migrated to Azure Resource Manager will be stopped and deallocated.
This retirement does *not* affect the following Azure services and functionality:

- Storage accounts *not* used by VMs (classic)
virtual-machines Disks Convert Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disks-convert-types.md
Previously updated : 08/08/2023 Last updated : 08/18/2023
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows
-There are five disk types of Azure managed disks: Azure Ultra Disks, Premium SSD v2, premium SSD, Standard SSD, and Standard HDD. You can easily switch between Premium SSD, Standard SSD, and Standard HDD based on your performance needs. You aren't yet able to switch from or to an Ultra Disk or a Premium SSD v2, you must deploy a new one.
+There are five disk types of Azure managed disks: Azure Ultra Disks, Premium SSD v2, premium SSD, Standard SSD, and Standard HDD. You can easily switch between Premium SSD, Standard SSD, and Standard HDD based on your performance needs. Premium SSD and Standard SSD are also available with [Zone-redundant storage](disks-redundancy.md#zone-redundant-storage-for-managed-disks). You aren't yet able to switch from or to an Ultra Disk or a Premium SSD v2, you must deploy a new one.
This functionality isn't supported for unmanaged disks. But you can easily convert an unmanaged disk to a managed disk with [CLI](linux/convert-unmanaged-to-managed-disks.md) or [PowerShell](windows/convert-unmanaged-to-managed-disks.md) to be able to switch between disk types.
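For an unattached managed disk (or one on a deallocated VM), the switch itself is a single CLI call; a minimal sketch with placeholder names:

```azurecli
# Minimal sketch (illustrative names): change the SKU of a managed disk.
# The disk must be unattached, or its VM deallocated, for the conversion to apply.
az disk update \
  --resource-group myResourceGroup \
  --name myManagedDisk \
  --sku StandardSSD_LRS
```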
$rgName = 'yourResourceGroup'
# Name of your virtual machine
$vmName = 'yourVM'
-# Choose between Standard_LRS, StandardSSD_LRS and Premium_LRS based on your scenario
+# Choose between Standard_LRS, StandardSSD_LRS, StandardSSD_ZRS, Premium_ZRS, and Premium_LRS based on your scenario
$storageType = 'Premium_LRS'

# Premium capable size
vmName='yourVM'
#Required only if converting from Standard to Premium
size='Standard_DS2_v2'
-#Choose between Standard_LRS, StandardSSD_LRS and Premium_LRS based on your scenario
+#Choose between Standard_LRS, StandardSSD_LRS, StandardSSD_ZRS, Premium_ZRS, and Premium_LRS based on your scenario
sku='Premium_LRS'

#Deallocate the VM before changing the size of the VM
For your dev/test workload, you might want a mix of Standard and Premium disks t
$diskName = 'yourDiskName'
# resource group that contains the managed disk
$rgName = 'yourResourceGroupName'
-# Choose between Standard_LRS, StandardSSD_LRS and Premium_LRS based on your scenario
+# Choose between Standard_LRS, StandardSSD_LRS, StandardSSD_ZRS, Premium_ZRS, and Premium_LRS based on your scenario
$storageType = 'Premium_LRS'

# Premium capable size
$size = 'Standard_DS2_v2'
diskName='yourManagedDiskName'
#Required only if converting from Standard to Premium size='Standard_DS2_v2'
-#Choose between Standard_LRS, StandardSSD_LRS and Premium_LRS based on your scenario
+#Choose between Standard_LRS, StandardSSD_LRS, StandardSSD_ZRS, Premium_ZRS, and Premium_LRS based on your scenario
sku='Premium_LRS'

#Get the parent VM Id
virtual-machines Disks Enable Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disks-enable-performance.md
region=desiredRegion
sku=desiredSKU

#Size must be 513 or larger
size=513
-az disk create -g $myRG -n $myDisk --size-gb $size --sku $sku -l $region ΓÇôperformance-plus true
+az disk create -g $myRG -n $myDisk --size-gb $size --sku $sku -l $region --performance-plus true
az vm disk attach --vm-name $myVM --name $myDisk --resource-group $myRG
```
virtual-machines Disks Incremental Snapshots https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disks-incremental-snapshots.md
description: Learn about incremental snapshots for managed disks, including how
Previously updated : 08/11/2023 Last updated : 08/17/2023 ms.devlang: azurecli
ms.devlang: azurecli
# [Azure CLI](#tab/azure-cli)
-You can use the Azure CLI to create an incremental snapshot. You'll need the latest version of the Azure CLI. See the following articles to learn how to either [install](/cli/azure/install-azure-cli) or [update](/cli/azure/update-azure-cli) the Azure CLI.
+You can use the Azure CLI to create an incremental snapshot. You need the latest version of the Azure CLI. See the following articles to learn how to either [install](/cli/azure/install-azure-cli) or [update](/cli/azure/update-azure-cli) the Azure CLI.
-The following script will create an incremental snapshot of a particular disk:
+The following script creates an incremental snapshot of a particular disk:
```azurecli
# Declare variables
yourDiskID=$(az disk show -n $diskName -g $resourceGroupName --query "id" --outp
az snapshot create -g $resourceGroupName -n $snapshotName --source $yourDiskID --incremental true
```
-> [!IMPORTANT]
-> After taking a snapshot of an Ultra Disk, you must wait for the snapshot to complete before you can use it. See the [Check status of snapshots or disks](#check-status-of-snapshots-or-disks) section for details.
- You can identify incremental snapshots from the same disk with the `SourceResourceId` property of snapshots. `SourceResourceId` is the Azure Resource Manager resource ID of the parent disk. You can use `SourceResourceId` to create a list of all snapshots associated with a particular disk. Replace `yourResourceGroupNameHere` with your value and then you can use the following example to list your existing incremental snapshots:
az snapshot list --query "[?creationData.sourceResourceId=='$diskId' && incremen
# [Azure PowerShell](#tab/azure-powershell)
-You can use the Azure PowerShell module to create an incremental snapshot. You'll need the latest version of the Azure PowerShell module. The following command will either install it or update your existing installation to latest:
+You can use the Azure PowerShell module to create an incremental snapshot. You need the latest version of the Azure PowerShell module. The following command will either install it or update your existing installation to latest:
```PowerShell
Install-Module -Name Az -AllowClobber -Scope CurrentUser
$snapshotConfig=New-AzSnapshotConfig -SourceUri $yourDisk.Id -Location $yourDisk
New-AzSnapshot -ResourceGroupName $resourceGroupName -SnapshotName $snapshotName -Snapshot $snapshotConfig
```
-> [!IMPORTANT]
-> After taking a snapshot of a Premium SSD v2 or an Ultra Disk, you must wait for the snapshot to complete before you can use it. See the [Check status of snapshots or disks](#check-status-of-snapshots-or-disks) section for details.
- You can identify incremental snapshots from the same disk with the `SourceResourceId` and the `SourceUniqueId` properties of snapshots. `SourceResourceId` is the Azure Resource Manager resource ID of the parent disk. `SourceUniqueId` is the value inherited from the `UniqueId` property of the disk. If you delete a disk and then create a new disk with the same name, the value of the `UniqueId` property changes. You can use `SourceResourceId` and `SourceUniqueId` to create a list of all snapshots associated with a particular disk. Replace `yourResourceGroupNameHere` with your value and then you can use the following example to list your existing incremental snapshots:
$incrementalSnapshots
# [Portal](#tab/azure-portal)

[!INCLUDE [virtual-machines-disks-incremental-snapshots-portal](../../includes/virtual-machines-disks-incremental-snapshots-portal.md)]
-> [!IMPORTANT]
-> After taking a snapshot of a Premium SSD v2 or an Ultra Disk, you must wait for the snapshot to complete before you can use it. See the [Check status of snapshots or disks](#check-status-of-snapshots-or-disks) section for details.
# [Resource Manager Template](#tab/azure-resource-manager)

You can also use Azure Resource Manager templates to create an incremental snapshot. You'll need to make sure the apiVersion is set to **2022-03-22** and that the incremental property is also set to true. The following snippet is an example of how to create an incremental snapshot with Resource Manager templates:
You can also use Azure Resource Manager templates to create an incremental snaps
  ]
}
```
-> [!IMPORTANT]
-> After taking a snapshot of a Premium SSD v2 or an Ultra Disk, you must wait for the snapshot to complete before you can use it. See the [Check status of snapshots or disks](#check-status-of-snapshots-or-disks) section for details.
--
-## Check status of snapshots or disks
+## Check snapshot status
-Incremental snapshots of Premium SSD v2 or Ultra Disks can't be used to create new disks until the background process copying the data into the snapshot has completed. Similarly, Premium SSD v2 or Ultra Disks created from incremental snapshots can't be attached to a VM until the background process copying the data into the disk has completed.
+Incremental snapshots of Premium SSD v2 or Ultra Disks can't be used to create new disks until the background process copying the data into the snapshot has completed.
-You can use either the [CLI](#cli) or [PowerShell](#powershell) sections to check the status of the background copy from a disk to a snapshot and you can use the [Check disk creation status](#check-disk-creation-status) section to check the status of a background copy from a snapshot to a disk.
+You can use either the [CLI](#cli) or [PowerShell](#powershell) sections to check the status of the background copy from a disk to a snapshot.
### CLI
The following script returns a list of all snapshots associated with a particula
subscriptionId="yourSubscriptionId" resourceGroupName="yourResourceGroupNameHere" diskName="yourDiskNameHere"- az account set --subscription $subscriptionId- diskId=$(az disk show -n $diskName -g $resourceGroupName --query [id] -o tsv)- az snapshot list --query "[?creationData.sourceResourceId=='$diskId' && incremental]" -g $resourceGroupName --output table ```
The following script returns a list of all incremental snapshots associated with
$resourceGroupName = "yourResourceGroupNameHere" $snapshots = Get-AzSnapshot -ResourceGroupName $resourceGroupName $diskName = "yourDiskNameHere"- $yourDisk = Get-AzDisk -DiskName $diskName -ResourceGroupName $resourceGroupName- $incrementalSnapshots = New-Object System.Collections.ArrayList- foreach ($snapshot in $snapshots) { if($snapshot.Incremental -and $snapshot.CreationData.SourceResourceId -eq $yourDisk.Id -and $snapshot.CreationData.SourceUniqueId -eq $yourDisk.UniqueId)
foreach ($snapshot in $snapshots)
}
}
}
-
$incrementalSnapshots
```
You can check the `CompletionPercent` property of an individual snapshot to get
```azurepowershell
$resourceGroupName = "yourResourceGroupNameHere"
$snapshotName = "yourSnapshotName"
-
$targetSnapshot=Get-AzSnapshot -ResourceGroupName $resourceGroupName -SnapshotName $snapshotName
-
$targetSnapshot.CompletionPercent
```
-### Check disk creation status
-
-When creating a disk from either a Premium SSD v2 or an Ultra Disk snapshot, you must wait for the background copy process to complete before you can attach it. Currently, you must use the Azure CLI to check the progress of the copy process.
-
-The following script gives you the status of an individual disk's copy process. The value of `completionPercent` must be 100 before the disk can be attached.
-
-```azurecli
-subscriptionId=yourSubscriptionID
-resourceGroupName=yourResourceGroupName
-diskName=yourDiskName
-
-az account set --subscription $subscriptionId
-
-az disk show -n $diskName -g $resourceGroupName --query [completionPercent] -o tsv
-```
+ ## Check sector size
az snapshot show -g resourcegroupname -n snapshotname --query [creationData.logi
See [Copy an incremental snapshot to a new region](disks-copy-incremental-snapshot-across-regions.md) to learn how to copy an incremental snapshot across regions.
-If you have additional questions on snapshots, see the [snapshots](faq-for-disks.yml#snapshots) section of the FAQ.
+If you have more questions on snapshots, see the [snapshots](faq-for-disks.yml#snapshots) section of the FAQ.
If you'd like to see sample code demonstrating the differential capability of incremental snapshots, using .NET, see [Copy Azure Managed Disks backups to another region with differential capability of incremental snapshots](https://github.com/Azure-Samples/managed-disks-dotnet-backup-with-incremental-snapshots).
virtual-machines Disks Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disks-types.md
Title: Select a disk type for Azure IaaS VMs - managed disks
description: Learn about the available Azure disk types for virtual machines, including ultra disks, Premium SSDs v2, Premium SSDs, standard SSDs, and Standard HDDs. Previously updated : 07/12/2023 Last updated : 08/17/2023
To deploy a Premium SSD v2, see [Deploy a Premium SSD v2](disks-deploy-premium-v
## Premium SSDs
-Azure Premium SSDs deliver high-performance and low-latency disk support for virtual machines (VMs) with input/output (IO)-intensive workloads. To take advantage of the speed and performance of Premium SSDs, you can migrate existing VM disks to Premium SSDs. Premium SSDs are suitable for mission-critical production applications, but you can use them only with compatible VM series. Premium SSDs only supports 512E sector size.
+Azure Premium SSDs deliver high-performance and low-latency disk support for virtual machines (VMs) with input/output (IO)-intensive workloads. To take advantage of the speed and performance of Premium SSDs, you can migrate existing VM disks to Premium SSDs. Premium SSDs are suitable for mission-critical production applications, but you can use them only with compatible VM series. Premium SSDs support the [512E sector size](https://en.wikipedia.org/wiki/Advanced_Format#512_emulation_(512e)).
To learn more about individual Azure VM types and sizes for Windows or Linux, including size compatibility for premium storage, see [Sizes for virtual machines in Azure](sizes.md). You'll need to check each individual VM size article to determine if it's premium storage-compatible.
For Premium SSDs, each I/O operation less than or equal to 256 kB of throughput
## Standard SSDs
-Azure standard SSDs are optimized for workloads that need consistent performance at lower IOPS levels. They're an especially good choice for customers with varying workloads supported by on-premises hard disk drive (HDD) solutions. Compared to standard HDDs, standard SSDs deliver better availability, consistency, reliability, and latency. Standard SSDs are suitable for web servers, low IOPS application servers, lightly used enterprise applications, and non-production workloads. Like standard HDDs, standard SSDs are available on all Azure VMs. Standard SSD only supports 512E sector size.
+Azure standard SSDs are optimized for workloads that need consistent performance at lower IOPS levels. They're an especially good choice for customers with varying workloads supported by on-premises hard disk drive (HDD) solutions. Compared to standard HDDs, standard SSDs deliver better availability, consistency, reliability, and latency. Standard SSDs are suitable for web servers, low IOPS application servers, lightly used enterprise applications, and non-production workloads. Like standard HDDs, standard SSDs are available on all Azure VMs. Standard SSDs support the [512E sector size](https://en.wikipedia.org/wiki/Advanced_Format#512_emulation_(512e)).
### Standard SSD size
Standard SSDs offer disk bursting, which provides better tolerance for the unpre
## Standard HDDs
-Azure standard HDDs deliver reliable, low-cost disk support for VMs running latency-tolerant workloads. With standard storage, your data is stored on HDDs, and performance may vary more widely than that of SSD-based disks. Standard HDDs are designed to deliver write latencies of less than 10 ms and read latencies of less than 20 ms for most IO operations. Actual performance may vary depending on IO size and workload pattern, however. When working with VMs, you can use standard HDD disks for dev/test scenarios and less critical workloads. Standard HDDs are available in all Azure regions and can be used with all Azure VMs. Standard HDDs only supports 512E sector size.
+Azure standard HDDs deliver reliable, low-cost disk support for VMs running latency-tolerant workloads. With standard storage, your data is stored on HDDs, and performance may vary more widely than that of SSD-based disks. Standard HDDs are designed to deliver write latencies of less than 10 ms and read latencies of less than 20 ms for most IO operations. Actual performance may vary depending on IO size and workload pattern, however. When working with VMs, you can use standard HDD disks for dev/test scenarios and less critical workloads. Standard HDDs are available in all Azure regions and can be used with all Azure VMs. Standard HDDs support the [512E sector size](https://en.wikipedia.org/wiki/Advanced_Format#512_emulation_(512e)).
### Standard HDD size

[!INCLUDE [disk-storage-standard-hdd-sizes](../../includes/disk-storage-standard-hdd-sizes.md)]
virtual-machines Ebdsv5 Ebsv5 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/ebdsv5-ebsv5-series.md
The memory-optimized Ebsv5 and Ebdsv5 Azure virtual machine (VM) series deliver higher remote storage performance in each VM size than the [Ev4 series](ev4-esv4-series.md). The increased remote storage performance of the Ebsv5 and Ebdsv5 VMs is ideal for storage throughput-intensive workloads. For example, relational databases and data analytics applications.
-The Ebsv5 and Ebdsv5 VMs offer up to 260000 IOPS and 8000 MBps of remote disk storage throughput. Both series also include up to 672 GiB of RAM. The Ebdsv5 series has local SSD storage up to 3800 GiB. Both series provide a 3X increase in remote storage performance of data-intensive workloads compared to prior VM generations. You can use these series to consolidate existing workloads on fewer VMs or smaller VM sizes while achieving potential cost savings. The Ebdsv5 series comes with a local disk and Ebsv5 is without a local disk. Standard SSDs and Standard HDD disk storage aren't supported in the Ebv5 series.
+The Ebsv5 and Ebdsv5 VMs offer up to 260000 IOPS and 8000 MBps of remote disk storage throughput. Both series also include up to 672 GiB of RAM. The Ebdsv5 series has local SSD storage up to 3800 GiB. Both series provide a 3X increase in remote storage performance of data-intensive workloads compared to prior VM generations. You can use these series to consolidate existing workloads on fewer VMs or smaller VM sizes while achieving potential cost savings. The Ebdsv5 series comes with a local disk and Ebsv5 is without a local disk. We recommend choosing Premium SSD, Premium SSD v2 or Ultra disks to attain the published disk performance.
The Ebdsv5 and Ebsv5 series run on the Intel® Xeon® Platinum 8370C (Ice Lake) processors in a hyper-threaded configuration. The series are ideal for various memory-intensive enterprise applications. They feature:
Ebdsv5-series sizes run on the Intel® Xeon® Platinum 8370C (Ice Lake) processo
- SCSI Interface: Supported on Generation 1 and 2 VMs

## Ebdsv5 Series (SCSI)
-| Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Max data disks | Max temp storage throughput: IOPS / MBps | Max uncached Premium SSD and Standard SSD/HDD disk throughput: IOPS/MBps | Max burst uncached Premium SSD and Standard SSD/HDD disk throughput: IOPS/MBps | Max uncached Ultra Disk and Premium SSD V2 disk throughput: IOPS/MBps | Max burst uncached Ultra Disk and Premium SSD V2 disk throughput: IOPS/MBps | Max NICs | Network bandwidth |
+| Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Max data disks | Max temp storage throughput: IOPS / MBps | Max uncached Premium SSD disk throughput: IOPS/MBps | Max burst uncached Premium SSD disk throughput: IOPS/MBps | Max uncached Ultra Disk and Premium SSD V2 disk throughput: IOPS/MBps | Max burst uncached Ultra Disk and Premium SSD V2 disk throughput: IOPS/MBps | Max NICs | Network bandwidth |
|||||||||||||
| Standard_E2bds_v5 | 2 | 16 | 75 | 4 | 9000/125 | 5500/156 | 10000/1200 | 7370/156 | 15000/1200 | 2 | 12500 |
| Standard_E4bds_v5 | 4 | 32 | 150 | 8 | 19000/250 | 11000/350 | 20000/1200 | 14740/350|30000/1200 | 2 | 12500 |
| Standard_E8bds_v5 | 8 | 64 | 300 | 16 | 38000/500 | 22000/625 | 40000/1200 |29480/625 |60000/1200 | 4 | 12500 |
-| Standard_E16bds_v5 | 16 | 128 | 600 | 32 | 75000/1000 | 44000/1250 | 64000/2000 |58960/1250 |96000/2000 | 4 | 12500 |
+| Standard_E16bds_v5 | 16 | 128 | 600 | 32 | 75000/1000 | 44000/1250 | 64000/2000 |58960/1250 |96000/2000 | 8 | 12500 |
| Standard_E32bds_v5 | 32 | 256 | 1200 | 32 | 150000/2000 | 88000/2500 | 120000/4000 | 117920/2500|160000/4000| 8 | 16000 |
| Standard_E48bds_v5 | 48 | 384 | 1800 | 32 | 225000/3000 | 120000/4000 | 120000/4000 | 160000/4000|160000/4000 | 8 | 16000 |
| Standard_E64bds_v5 | 64 | 512 | 2400 | 32 | 300000/4000 | 120000/4000 | 120000/4000 |160000/4000 | 160000/4000| 8 | 20000 |
| Standard_E96bds_v5 | 96 | 672 | 3600 | 32 | 450000/4000 | 120000/4000 | 120000/4000 |160000/4000 | 160000/4000| 8 | 25000 |

## Ebdsv5 Series (NVMe)
-| Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Max data disks | Max temp storage throughput: IOPS / MBps | Max uncached Premium SSD and Standard SSD/HDD disk throughput: IOPS/MBps | Max burst uncached Premium SSD and Standard SSD/HDD disk throughput: IOPS/MBps | Max uncached Ultra Disk and Premium SSD V2 disk throughput: IOPS/MBps | Max burst uncached Ultra Disk and Premium SSD V2 disk throughput: IOPS/MBps | Max NICs | Network bandwidth |
+| Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Max data disks | Max temp storage throughput: IOPS / MBps | Max uncached Premium SSD disk throughput: IOPS/MBps | Max burst uncached Premium SSD disk throughput: IOPS/MBps | Max uncached Ultra Disk and Premium SSD V2 disk throughput: IOPS/MBps | Max burst uncached Ultra Disk and Premium SSD V2 disk throughput: IOPS/MBps | Max NICs | Network bandwidth |
|||||||||||||
| Standard_E2bds_v5 | 2 | 16 | 75 | 4 | 9000/125 | 5500/156 | 10000/1200 | 7370/156 | 15000/1200 | 2 | 12500 |
| Standard_E4bds_v5 | 4 | 32 | 150 | 8 | 19000/250 | 11000/350 | 20000/1200 | 14740/350|30000/1200 | 2 | 12500 |
Ebsv5-series sizes run on the Intel® Xeon® Platinum 8272CL (Ice Lake). These V
- NVMe Interface: Supported only on Generation 2 VMs
- SCSI Interface: Supported on Generation 1 and Generation 2 VMs

## Ebsv5 Series (SCSI)
-| Size | vCPU | Memory: GiB | Max data disks | Max uncached Premium SSD and Standard SSD/HDD disk throughput: IOPS/MBps | Max burst uncached Premium SSD and Standard SSD/HDD disk throughput: IOPS/MBps | Max uncached Ultra Disk and Premium SSD V2 disk throughput: IOPS/MBps | Max burst uncached Ultra Disk and Premium SSD V2 disk throughput: IOPS/MBps | Max NICs | Network bandwidth |
+| Size | vCPU | Memory: GiB | Max data disks | Max uncached Premium SSD disk throughput: IOPS/MBps | Max burst uncached Premium SSD disk throughput: IOPS/MBps | Max uncached Ultra Disk and Premium SSD V2 disk throughput: IOPS/MBps | Max burst uncached Ultra Disk and Premium SSD V2 disk throughput: IOPS/MBps | Max NICs | Network bandwidth |
| | | | | | | | | | |
| Standard_E2bs_v5 | 2 | 16 | 4 | 5500/156 | 10000/1200 | 7370/156|15000/1200 | 2 | 12500 |
| Standard_E4bs_v5 | 4 | 32 | 8 | 11000/350 | 20000/1200 | 14740/350|30000/1200 | 2 | 12500 |
Ebsv5-series sizes run on the Intel® Xeon® Platinum 8272CL (Ice Lake). These V
| Standard_E96bs_v5 | 96 | 672 | 32 | 120000/4000 | 120000/4000 | 160000/4000|160000/4000 | 8 | 25000 |

## Ebsv5 Series (NVMe)
-| Size | vCPU | Memory: GiB | Max data disks | Max uncached Premium SSD and Standard SSD/HDD disk throughput: IOPS/MBps | Max burst uncached Premium SSD and Standard SSD/HDD disk throughput: IOPS/MBps | Max uncached Ultra Disk and Premium SSD V2 disk throughput: IOPS/MBps | Max burst uncached Ultra Disk and Premium SSD V2 disk throughput: IOPS/MBps | Max NICs | Network bandwidth |
+| Size | vCPU | Memory: GiB | Max data disks | Max uncached Premium SSD disk throughput: IOPS/MBps | Max burst uncached Premium SSD disk throughput: IOPS/MBps | Max uncached Ultra Disk and Premium SSD V2 disk throughput: IOPS/MBps | Max burst uncached Ultra Disk and Premium SSD V2 disk throughput: IOPS/MBps | Max NICs | Network bandwidth |
| | | | | | | | | | |
| Standard_E2bs_v5 | 2 | 16 | 4 | 5500/156 | 10000/1200 | 7370/156|15000/1200 | 2 | 12500 |
| Standard_E4bs_v5 | 4 | 32 | 8 | 11000/350 | 20000/1200 | 14740/350|30000/1200 | 2 | 12500 |
virtual-machines Agent Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/agent-linux.md
Testing has confirmed that the following systems work with the Azure Linux VM Ag
Other supported systems:

-- The Agent works on more systems than those listed in the documentation. However, we do not test or provide support for distros that are not on the endorsed list. In particular, FreeBSD is not endorsed. The customer can try FreeBSD 8 and if they run into problems they can open an issue in our [Github repository](https://github.com/Azure/WALinuxAgent) and we may be able to help.
+- The Agent works on more systems than those listed in the documentation. However, we do not test or provide support for distros that are not on the endorsed list. In particular, FreeBSD is not endorsed. The customer can try FreeBSD 8 and if they run into problems they can open an issue in our [GitHub repository](https://github.com/Azure/WALinuxAgent) and we may be able to help.
The Linux agent depends on these system packages to function properly:
virtual-machines Key Vault Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/key-vault-linux.md
The following JSON shows the schema for the Key Vault VM extension. The extensio
| pollingIntervalInS | 3600 | string |
| certificateStoreName | It is ignored on Linux | string |
| linkOnRenewal | false | boolean |
-| certificateStoreLocation | /var/lib/waagent/Microsoft.Azure.KeyVault | string |
+| certificateStoreLocation | /var/lib/waagent/Microsoft.Azure.KeyVault.Store | string |
| requireInitialSync | true | boolean |
| observedCertificates | ["https://myvault.vault.azure.net/secrets/mycertificate", "https://myvault.vault.azure.net/secrets/mycertificate2"] | string array |
| msiEndpoint | http://169.254.169.254/metadata/identity | string |
virtual-machines Key Vault Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/key-vault-windows.md
The Key Vault VM extension supports the following certificate content types:
> [!NOTE]
> The Key Vault VM extension downloads all certificates to the Windows certificate store or to the location specified in the `certificateStoreLocation` property in the VM extension settings.
-## Updates in Version 3.0
+## Updates in Version 3.0+
Version 3.0 of the Key Vault VM extension for Windows adds support for the following features:

- Add ACL permissions to downloaded certificates
- Enable Certificate Store configuration per certificate
- Export private keys
+- IIS Certificate Rebind support
## Prerequisites
By default, Administrators and SYSTEM receive Full Control.
The extension relies on the default behavior of the [PFXImportCertStore API](/windows/win32/api/wincrypt/nf-wincrypt-pfximportcertstore). By default, if a certificate has a Provider Name attribute that matches with CAPI1, then the certificate is imported by using CAPI1 APIs. Otherwise, the certificate is imported by using CNG APIs.
-#### Does the extension support IIS certificate autobinding?
+#### Does the extension support certificate auto-rebinding?
-No. The Azure Key Vault VM extension doesn't support IIS automatic rebinding. The automatic rebinding process requires certificate services lifecycle notifications, and the extension doesn't write a certificate-renewal event (event ID 1001) upon newer versions.
+Yes, the Azure Key Vault VM extension supports certificate auto-rebinding. It supports S-channel binding on certificate renewal when the `linkOnRenewal` property is set to `true`.
-The recommended approach is to use the Key Vault VM extension schema's `linkOnRenewal` property. Upon installation, when the `linkOnRenewal` property is set to `true`, the previous version of a certificate is chained to its successor via the `CERT_RENEWAL_PROP_ID` certificate extension property. The chaining enables the S-channel to pick up the most recent (latest) valid certificate with a matching SAN. This feature enables autorotation of SSL certificates without necessitating a redeployment or binding.
+For IIS, you can configure auto-rebind by enabling automatic rebinding of certificate renewals in IIS. The Azure Key Vault VM extension generates Certificate Lifecycle Notifications when a certificate with a matching SAN is installed. IIS uses this event to auto-rebind the certificate. For more information, see [Certificate Rebind in IIS](https://statics.teams.cdn.office.net/evergreen-assets/safelinks/1/atp-safelinks.html).
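As a rough sketch of where `linkOnRenewal` sits in the extension settings (resource names and the certificate URL are placeholders, and the exact settings schema can vary by extension version, so verify against your deployment):

```azurecli
# Rough sketch (illustrative names): deploy the extension with linkOnRenewal enabled.
az vm extension set \
  --resource-group myResourceGroup \
  --vm-name myVM \
  --publisher Microsoft.Azure.KeyVault \
  --name KeyVaultForWindows \
  --settings '{
    "secretsManagementSettings": {
      "pollingIntervalInS": "3600",
      "linkOnRenewal": true,
      "observedCertificates": [
        "https://myvault.vault.azure.net/secrets/mycertificate"
      ]
    }
  }'
```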
### View extension status
Here are some other options to help you resolve deployment issues:
- If you don't find an answer on the site, you can post a question for input from Microsoft or other members of the community.
-- You can also [Contact Microsoft Support](https://support.microsoft.com/contactus/). For information about using Azure support, read the [Azure support FAQ](https://azure.microsoft.com/support/legal/faq/).
+- You can also [Contact Microsoft Support](https://support.microsoft.com/contactus/). For information about using Azure support, read the [Azure support FAQ](https://azure.microsoft.com/support/legal/faq/).
virtual-machines Network Watcher Update https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/network-watcher-update.md
Title: Update Network Watcher extension to the latest version
-description: Learn how to update the Azure Network Watcher extension to the latest version.
-
+description: Learn how to update the Azure Network Watcher Agent virtual machine (VM) extension to the latest version.
-tags: azure-resource-manager
-- Previously updated : 07/12/2023 -++ Last updated : 08/30/2023+ # Update Azure Network Watcher extension to the latest version
## Latest version
-The latest version of the Network Watcher extension is `1.4.2573.1`.
### Identify latest version
virtual-machines Hbv4 Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/hbv4-performance.md
Performance expectations using common HPC microbenchmarks are as follows:
## Memory bandwidth test
-The STREAM memory test can be run using the scripts in this github repository.
+The STREAM memory test can be run using the scripts in this GitHub repository.
```bash
git clone https://github.com/Azure/woc-benchmarking
cd woc-benchmarking/apps/hpc/stream/
sh stream_run_script.sh $PWD "hbrs_v4"
```

## Compute performance test
-The HPL benchmark can be run using the script in this github repository.
+The HPL benchmark can be run using the script in this GitHub repository.
```bash
git clone https://github.com/Azure/woc-benchmarking
cd woc-benchmarking/apps/hpc/hpl
virtual-machines Image Builder Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/image-builder-overview.md
az feature register --namespace Microsoft.VirtualMachineImages --name MooncakePu
## OS support
-VM Image Builder supports the following Azure Marketplace base operating system images:
-- Ubuntu 18.04
-- Ubuntu 16.04
-- RHEL 7.6, 7.7
-- CentOS 7.6, 7.7
-- SLES 12 SP4
-- SLES 15, SLES 15 SP1
-- Windows 10 RS5 Enterprise/Enterprise multi-session/Professional
-- Windows 2016
-- Windows 2019
-- CBL-Mariner
-
->[!IMPORTANT]
-> These operating systems have been tested and now work with VM Image Builder. However, VM Image Builder should work with any Linux or Windows image in the marketplace.
+VM Image Builder is designed to work with all Azure Marketplace base operating system images.
+
+
> [!NOTE]
> You can now use the Azure Image Builder service inside the portal as of March 2023. [Get started](https://ms.portal.azure.com/#create/Microsoft.ImageTemplate) with building and validating custom images inside the portal.
virtual-machines Image Builder Reliability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/image-builder-reliability.md
- Title: Reliability in Azure Image Builder
-description: Find out about reliability in Azure Image Builder
------ Previously updated : 02/03/2023--
-# Reliability in Azure Image Builder
-
-This article describes reliability support in Azure Image Builder, and covers both regional resiliency with availability zones. For a more detailed overview of reliability in Azure, see [Azure reliability](/azure/architecture/framework/resiliency/overview).
-
-Azure Image Builder (AIB) is a regional service with cluster serving single regions. The AIB regional setup keeps data and resources within the regional boundary. AIB as a service doesn't do fail over for cluster and SQL database in region down scenarios.
--
-## Availability zone support
-
-Azure availability zones are at least three physically separate groups of datacenters within each Azure region. Datacenters within each zone are equipped with independent power, cooling, and networking infrastructure. In the event of a local zone failure, availability zones are designed so that if the one zone is affected. Regional services, capacity, and high availability are supported by the remaining two zones. Failures can range from software and hardware failures to events such as earthquakes, floods, and fires. Tolerance to failures is achieved with redundancy and logical isolation of Azure services. For more detailed information on availability zones in Azure, see [Regions and availability zones](/azure/availability-zones/az-overview).
-
-Azure availability zones-enabled services are designed to provide the right level of reliability and flexibility. They can be configured in two ways. They can be either zone redundant, with automatic replication across zones, or zonal, with instances pinned to a specific zone. You can also combine these approaches. For more information on zonal vs. zone-redundant architecture, see [Build solutions with availability zones](/azure/architecture/high-availability/building-solutions-for-high-availability).
-
-> [!NOTE]
-> Azure Image Builder doesn't currently support availability zones at this time. Availability zone outage within a region is considered Regional outage for Azure Image Builder and customers are recommended to follow guidance as per the Disaster Recovery and failover to backup region.
---
-## Disaster recovery: cross-region failover
-
-In the event of a region-wide disaster, Azure can provide protection from regional or large geography disasters with disaster recovery by making use of another region. For more information on Azure disaster recovery architecture, see [Azure to Azure disaster recovery architecture](../site-recovery/azure-to-azure-architecture.md).
-
-To ensure fast and easy recovery for Azure Image Builder (AIB), it's recommended to run an image template in region pairs or multiple regions when designing your AIB solution. You'll also want to replicate resources from the start when you're setting up your image templates.
--
-### Cross-region disaster recovery in multi-region geography
-
-Microsoft will be responsible for outage detection, notifications, and support in the event of disaster recovery scenarios for Azure Image Builder. Customers will need to set up disaster recovery for the control plane (service side) and data plane.
--
-#### Outage detection, notification, and management
-
-Microsoft will send a notification if there's an outage for the Azure Image Builder (AIB) Service. The common outage symptom includes image templates getting 500 errors when attempting to run. Customers can review Azure Image Builder outage notifications and status updates through [support request management.](../azure-portal/supportability/how-to-manage-azure-support-request.md)
--
-#### Set up disaster recovery and outage detection
-
-Customers are responsible for setting up disaster recovery for their Azure Image Builder (AIB) environment, as there isn't a region failover at the AIB service side. Both the control plane (service side) and data plane will need to configure by the customer.
-
-The high level guidelines include creating a AIB resource in another region close by and replicating your resources. For more information, see the [supported regions](./image-builder-overview.md#regions) and what resources are involved in [AIB]( /azure/virtual-machines/image-builder-overview#how-it-works) creation.
-
-### Single-region geography disaster recovery
-
-On supporting single-region geography for Azure Image Builder, the challenge will be to get the image template resource since the region isn't available. For those cases, customers can either maintain a copy of an image template locally or can use [Azure Resource Graph](../governance/resource-graph/index.yml) from the Azure portal or Azure CLI to get an Image template resource.
-
-Below are instructions on how to get an image template resource using Resource Graph from the Azure portal:
-
-1. Go to the search bar in Azure portal and search for *resource graph explorer*.
-
- ![Screenshot of Azure Resource Graph Explorer in the portal](./media/image-builder-reliability/resource-graph-explorer-portal.png#lightbox)
-
-1. Use the search bar on the far left to search resource by type and name to see how the details will give you properties of the image template. The *See details* option on the bottom right will show the image template's properties attribute and tags separately. Template name, location, ID, and tenant ID can be used to get the correct image template resource.
-
- ![Screenshot of using Azure Resource Graph Explorer search](./media/image-builder-reliability/resource-graph-explorer-search.png#lightbox)
--
-### Capacity and proactive disaster recovery resiliency
-
-Microsoft and its customers operate under the Shared responsibility model. This means that for customer-enabled DR (customer-responsible services), the customer must address DR for any service they deploy and control. To ensure that recovery is proactive, customers should always pre-deploy secondaries because there's no guarantee of capacity at time of impact for those who haven't pre-allocated.
-
-When planning where to replicate a template, consider:
--- AIB region availability:
- - Choose [AIB supported regions](./image-builder-overview.md#regions) close to your users.
- - AIB continually expands into new regions.
-- Azure paired regions:
- - For your geographic area, choose two regions paired together.
 - Recovery efforts are prioritized for paired regions where needed.
-
-## Additional guidance
-
-For information about customer data processing, see the Azure Image Builder [data residency](./linux/image-builder-json.md#data-residency) details.
--
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Reliability in Azure](../reliability/overview.md)
-> [Enable Azure VM disaster recovery between availability zones](../site-recovery/azure-to-azure-how-to-enable-zone-to-zone-disaster-recovery.md)
-> [Azure Image Builder overview](./image-builder-overview.md)
virtual-machines Image Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/image-version.md
$targetSubID = "<subscription ID for the target>"
$sourceTenantID = "<tenant ID where for the source image>" $sourceImageID = "<resource ID of the source image>"
-# Login to the subscription where the new image will be created
-Connect-AzAccount -UseDeviceAuthentication -Subscription $targetSubID
# Login to the tenant where the source image is published
Connect-AzAccount -Tenant $sourceTenantID -UseDeviceAuthentication
-# Login to the subscription again where the new image will be created and set the context
+# Login to the subscription where the new image will be created and set the context
Connect-AzAccount -UseDeviceAuthentication -Subscription $targetSubID
Set-AzContext -Subscription $targetSubID

# Create the image version from another image version in a different tenant
-New-AzGalleryImageVersion \
- -ResourceGroupName myResourceGroup -GalleryName myGallery \
- -GalleryImageDefinitionName myImageDef \
- -Location "West US 2" \
- -Name 1.0.0 \
+New-AzGalleryImageVersion `
+ -ResourceGroupName myResourceGroup -GalleryName myGallery `
+ -GalleryImageDefinitionName myImageDef `
+ -Location "West US 2" `
+ -Name 1.0.0 `
   -SourceImageId $sourceImageID
```
virtual-machines Disk Encryption Key Vault Aad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/disk-encryption-key-vault-aad.md
Last updated 01/04/2023

# Creating and configuring a key vault for Azure Disk Encryption with Azure AD (previous release) for Linux VMs
If you would like to use certificate authentication and wrap the encryption key
## Next steps
-[Enable Azure Disk Encryption with Azure AD on Linux VMs (previous release)](disk-encryption-linux-aad.md)
+[Enable Azure Disk Encryption with Azure AD on Linux VMs (previous release)](disk-encryption-linux-aad.md)
virtual-machines Disks Upload Vhd To Managed Disk Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/disks-upload-vhd-to-managed-disk-cli.md
description: Learn how to upload a VHD to an Azure managed disk and copy a manag
Previously updated : 01/03/2023 Last updated : 08/25/2023
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets
-This article explains how to either upload a VHD from your local machine to an Azure managed disk or copy a managed disk to another region, using AzCopy. This process, direct upload, enables you to upload a VHD up to 32 TiB in size directly into a managed disk. Currently, direct upload is supported for standard HDD, standard SSD, and premium SSD managed disks. It isn't supported for ultra disks, yet.
+This article explains how to either upload a VHD from your local machine to an Azure managed disk or copy a managed disk to another region, using AzCopy. This process, direct upload, enables you to upload a VHD up to 32 TiB in size directly into a managed disk. Currently, direct upload is supported for Ultra Disks, Premium SSD v2, Premium SSD, Standard SSD, and Standard HDD.
If you're providing a backup solution for IaaS VMs in Azure, you should use direct upload to restore customer backups to managed disks. When uploading a VHD from a source external to Azure, speeds depend on your local bandwidth. When uploading or copying from an Azure VM, your bandwidth would be the same as standard HDDs.
Create an empty standard HDD for uploading by specifying both the **--for-uplo
Replace `<yourdiskname>`, `<yourresourcegroupname>`, `<yourregion>` with values of your choosing. The `--upload-size-bytes` parameter contains an example value of `34359738880`, replace it with a value appropriate for you.
-> [!TIP]
+> [!IMPORTANT]
> If you're creating an OS disk, add `--hyper-v-generation <yourGeneration>` to `az disk create`.
>
> If you're using Azure AD to secure disk uploads, add `--data-access-auth-mode 'AzureActiveDirectory'`.
+> When uploading to an Ultra Disk or Premium SSD v2 you need to select the correct sector size of the target disk. If you're using a VHDX file with a 4k logical sector size, the target disk must be set to 4k. If you're using a VHD file with a 512 logical sector size, the target disk must be set to 512.
+>
+> VHDX files with a logical sector size of 512 aren't supported.
```azurecli
+## For Ultra Disk or Premium SSD v2, add --logical-sector-size and specify either 512 or 4096, depending on whether you're using a VHD or VHDX
az disk create -n <yourdiskname> -g <yourresourcegroupname> -l <yourregion> --os-type Linux --for-upload --upload-size-bytes 34359738880 --sku standard_lrs
```
-If you would like to upload either a premium SSD or a standard SSD, replace **standard_lrs** with either **premium_LRS** or **standardssd_lrs**. Ultra disks are not supported for now.
+If you would like to upload a different disk type, replace **standard_lrs** with **premium_lrs**, **premium_zrs**, **standardssd_lrs**, **standardssd_zrs**, **premiumv2_lrs**, or **ultrassd_lrs**.
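For instance, a minimal sketch of creating a Premium SSD v2 target disk for a VHDX file with a 4k logical sector size might look like the following (placeholder names as above; adjust `--upload-size-bytes` for your file):

```azurecli
az disk create -n <yourdiskname> -g <yourresourcegroupname> -l <yourregion> --os-type Linux --for-upload --upload-size-bytes 34359738880 --sku premiumv2_lrs --logical-sector-size 4096
```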
### (Optional) Grant access to the disk
Sample returned value:
} ```
-## Upload a VHD
+## Upload a VHD or VHDX
Now that you have a SAS for your empty managed disk, you can use it to set your managed disk as the destination for your upload command.
-Use AzCopy v10 to upload your local VHD file to a managed disk by specifying the SAS URI you generated.
+Use AzCopy v10 to upload your local VHD or VHDX file to a managed disk by specifying the SAS URI you generated.
This upload has the same throughput as the equivalent [standard HDD](../disks-types.md#standard-hdds). For example, if you have a size that equates to S4, you will have a throughput of up to 60 MiB/s. But, if you have a size that equates to S70, you will have a throughput of up to 500 MiB/s.
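As a sketch, with a placeholder local path and the SAS URI generated earlier, the upload command looks like this (managed disks are page blobs underneath, so `--blob-type PageBlob` is required):

```
azcopy copy "<path/to/your/disk.vhd>" "<yourSASURI>" --blob-type PageBlob
```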
sourceDiskSizeBytes=$(az disk show -g $sourceRG -n $sourceDiskName --query '[dis
az disk create -g $targetRG -n $targetDiskName -l $targetLocation --os-type $targetOS --for-upload --upload-size-bytes $(($sourceDiskSizeBytes+512)) --sku standard_lrs
-targetSASURI=$(az disk grant-access -n $targetDiskName -g $targetRG --access-level Write --duration-in-seconds 86400 -o tsv)
+targetSASURI=$(az disk grant-access -n $targetDiskName -g $targetRG --access-level Write --duration-in-seconds 86400 --query [accessSas] -o tsv)
sourceSASURI=$(az disk grant-access -n $sourceDiskName -g $sourceRG --duration-in-seconds 86400 --query [accessSas] -o tsv)
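With both SAS URIs in hand, a minimal sketch of the server-to-server copy itself (assuming AzCopy v10 is installed) would be:

```
azcopy copy "$sourceSASURI" "$targetSASURI" --blob-type PageBlob
```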
virtual-machines Image Builder Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/image-builder-troubleshoot.md
The `customization.log` file includes the following stages:
- Ensure that Azure Policy and Firewall allow connectivity to remote resources.
- Output comments to the console by using `Write-Host` or `echo`. Doing so lets you search the *customization.log* file.

+## Troubleshoot common build errors
+### The template deployment failed because of policy violation
+
+#### Error
+
+```text
+{
+ "statusCode": "BadRequest",
+ "serviceRequestId": null,
+ "statusMessage": "{\"error\":{\"code\":\"InvalidTemplateDeployment\",\"message\":\"The template deployment failed because of policy violation. Please see details for more information.\",\"details\":[{\"code\":\"RequestDisallowedByPolicy\",\"target\":\"<target_name>\",\"message\":\"Resource '<resource_name>' was disallowed by policy. Policy identifiers: '[{\\\"policyAssignment\\\":{\\\"name\\\":\\\"[Initiative] KeyVault (Microsoft.KeyVault)\\\",\\\"id\\\":\\\"/providers/Microsoft.Management/managementGroups/<managementGroup_name>/providers/Microsoft.Authorization/policyAssignments/Microsoft.KeyVault\\\"},\\\"policyDefinition\\\":{\\\"name\\\":\\\"Azure Key Vault should disable public network access\\\",\\\"id\\\":\\\"/providers/Microsoft.Management/managementGroups/<managementGroup_name>/providers/Microsoft.Authorization/policyDefinitions/KeyVault.disablePublicNetworkAccess_deny_deny\\\"},\\\"policySetDefinition\\\":{\\\"name\\\":\\\"[Initiative] KeyVault (Microsoft.KeyVault)\\\",\\\"id\\\":\\\"/providers/Microsoft.Management/managementGroups/<managementGroup_name>/providers/Microsoft.Authorization/policySetDefinitions/Microsoft.KeyVault\\\"}}]'.\",\"additionalInfo\":[{\"type\":\"PolicyViolation\"}]}]}}",
+ "eventCategory": "Administrative",
+ "entity": "/subscriptions/<subscription_ID>/<resourcegroups>/<resourcegroupname>/providers/Microsoft.Resources/deployments/<deployment_name>",
+ "message": "Microsoft.Resources/deployments/validate/action",
+ "hierarchy": "<subscription_ID>/<resourcegroupname>/<policy_name>/<managementGroup_name>/<deployment_ID>"
+}
+```
+
+#### Cause
+
+This policy violation error results from using an Azure Key Vault with public network access disabled. At this time, Azure Image Builder doesn't support this configuration.
+
+#### Solution
+
+The Azure Key Vault must be created with public access enabled.
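A minimal sketch of creating such a vault with the Azure CLI (names are placeholders) might be:

```azurecli
az keyvault create -n <yourkeyvaultname> -g <yourresourcegroupname> -l <yourregion> --public-network-access Enabled
```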
+ ### Packer build command failure #### Error
virtual-machines Scheduled Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/scheduled-events.md
With Scheduled Events, your application can discover when maintenance will occur
Scheduled Events provides events in the following use cases:

-- [Platform initiated maintenance](../maintenance-and-updates.md?bc=/azure/virtual-machines/linux/breadcrumb/toc.json&toc=/azure/virtual-machines/linux/toc.json) (for example, VM reboot, live migration or memory preserving updates for host).
+- [Platform initiated maintenance](../maintenance-and-updates.md?bc=/azure/virtual-machines/windows/breadcrumb/toc.json&toc=/azure/virtual-machines/windows/toc.json) (for example, VM reboot, live migration or memory preserving updates for host).
- Virtual machine is running on [degraded host hardware](https://azure.microsoft.com/blog/find-out-when-your-virtual-machine-hardware-is-degraded-with-scheduled-events) that is predicted to fail soon.
- Virtual machine was running on a host that suffered a hardware failure.
- User-initiated maintenance (for example, a user restarts or redeploys a VM).
Scheduled Events provides events in the following use cases:
Scheduled events are delivered to and can be acknowledged by:

- Standalone Virtual Machines.
-- All the VMs in an [Azure cloud service (classic)](../../cloud-services/index.yml).
+- All the VMs in an [Azure cloud service (classic)](../../cloud-services/index.yml).
- All the VMs in an availability set.
- All the VMs in a scale set placement group.

> [!NOTE]
-> Scheduled Events for all virtual machines (VMs) in a Fabric Controller (FC) tenant are delivered to all VMs in a FC tenant. FC tenant equates to a standalone VM, an entire Cloud Service, an entire Availability Set, and a Placement Group for a VM Scale Set (VMSS) regardless of Availability Zone usage.
-> For example, if you have 100 VMs in a availability set and there's an update to one of them, the scheduled event will go to all 100, whereas if there are 100 single VMs in a zone, then event will only go to the VM which is getting impacted.
+> Scheduled Events for all virtual machines (VMs) in a Fabric Controller (FC) tenant are delivered to all VMs in an FC tenant. An FC tenant equates to a standalone VM, an entire Cloud Service, an entire Availability Set, and a Placement Group for a Virtual Machine Scale Set regardless of Availability Zone usage.
As a result, check the `Resources` field in the event to identify which VMs are affected.
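As a quick sketch (assuming the `jq` utility is installed on the VM), you can list the affected resources straight from the endpoint:

```
curl -s -H Metadata:true "http://169.254.169.254/metadata/scheduledevents?api-version=2020-07-01" | jq '.Events[].Resources'
```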
For VNET enabled VMs, Metadata Service is available from a static nonroutable IP
> `http://169.254.169.254/metadata/scheduledevents?api-version=2020-07-01`
-If the VM isn't created within a Virtual Network, the default cases for cloud services and classic VMs, extra logic is required to discover the IP address to use.
+If the VM isn't created within a Virtual Network, the default cases for cloud services and classic VMs, additional logic is required to discover the IP address to use.
To learn how to [discover the host endpoint](https://github.com/azure-samples/virtual-machines-python-scheduled-events-discover-endpoint-for-non-vnet-vm), see this sample.
-### Version and Region Availability
+### Version and region availability
The Scheduled Events service is versioned. Versions are mandatory; the current version is `2020-07-01`.

| Version | Release Type | Regions | Release Notes |
The Scheduled Events service is versioned. Versions are mandatory; the current v
> [!NOTE]
> Previous preview releases of Scheduled Events supported {latest} as the api-version. This format is no longer supported and will be deprecated in the future.
-### Enabling and Disabling Scheduled Events
-Scheduled Events are enabled for your service the first time you make a request for events. You should expect a delayed response in your first call of up to two minutes. Scheduled Events are disabled for your service if it doesn't make a request for 24 hours.
+### Enabling and disabling Scheduled Events
+Scheduled Events is enabled for your service the first time you make a request for events. You should expect a delayed response in your first call of up to two minutes. Scheduled Events is disabled for your service if it doesn't make a request to the endpoint for 24 hours.
-Scheduled events are disabled by default for [VMSS Guest OS upgrades or reimages](../../virtual-machine-scale-sets/virtual-machine-scale-sets-automatic-upgrade.md). To enable scheduled events for these operations, first enable them using [OSImageNotificationProfile](https://learn.microsoft.com/rest/api/compute/virtual-machine-scale-sets/create-or-update?tabs=HTTP#osimagenotificationprofile).
-
-### User-initiated Maintenance
+### User-initiated maintenance
User-initiated VM maintenance via the Azure portal, API, CLI, or PowerShell results in a scheduled event. You then can test the maintenance preparation logic in your application, and your application can prepare for user-initiated maintenance.
-If you restart a VM, an event with the type `Reboot` is scheduled. If you redeploy a VM, an event with the type `Redeploy` is scheduled. Typically events with a user event source can be immediately approved to avoid a delay on user-initiated actions. We advise having a primary and secondary VM communicating and approving user generated scheduled events in case the primary VM becomes unresponsive. This arrangement will prevent delays in recovering your application back to a good state.
+If you restart a VM, an event with the type `Reboot` is scheduled. If you redeploy a VM, an event with the type `Redeploy` is scheduled. Typically events with a user event source can be immediately approved to avoid a delay on user-initiated actions. We advise having a primary and secondary VM communicating and approving user generated scheduled events in case the primary VM becomes unresponsive. Immediately approving events prevents delays in recovering your application back to a good state.
+Scheduled events for [VMSS Guest OS upgrades or reimages](../../virtual-machine-scale-sets/virtual-machine-scale-sets-automatic-upgrade.md) are supported for general purpose VM sizes that [support memory preserving updates](../maintenance-and-updates.md#maintenance-that-doesnt-require-a-reboot) only. It doesn't work for G, M, N, and H series. Scheduled events for VMSS Guest OS upgrades and reimages are disabled by default. To enable scheduled events for these operations on supported VM sizes, first enable them using [OSImageNotificationProfile](/rest/api/compute/virtual-machine-scale-sets/create-or-update?tabs=HTTP).
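As a sketch of the relevant fragment of the scale set model (property names per the REST API linked above; the timeout value shown is an assumption, tune it to your needs), enabling these notifications looks like:

```JSON
"scheduledEventsProfile": {
   "osImageNotificationProfile": {
      "enable": true,
      "notBeforeTimeout": "PT15M"
   }
}
```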
+ ## Use the API
+### High level overview
+
+There are two major components to handling Scheduled Events, preparation and recovery. All current events impacting the customer will be available via the IMDS Scheduled Events endpoint. When the event has reached a terminal state, it is removed from the list of events. The following diagram shows the various state transitions that a single scheduled event can experience:
+
+![State diagram showing the various transitions a scheduled event can take.](media/scheduled-events/scheduled-events-states.png)
+
+For events in the EventStatus:"Scheduled" state, you'll need to take steps to prepare your workload. Once the preparation is complete, you should then approve the event using the scheduled event API. Otherwise, the event will be automatically approved when the NotBefore time is reached. If the VM is on shared infrastructure, the system will then wait for all other tenants on the same hardware to also approve the job or time out. Once approvals are gathered from all impacted VMs, or the NotBefore time is reached, Azure generates a new scheduled event payload with EventStatus:"Started" and triggers the start of the maintenance event. When the event has reached a terminal state, it is removed from the list of events, which serves as the signal for the tenant to recover their VM(s).
+
+Below is pseudocode (in Python style) demonstrating a process for how to read and manage scheduled events in your application:
+```
+previous_list_of_scheduled_events = []
+current_list_of_scheduled_events = get_latest_from_se_endpoint()
+# Prepare for new events
+for event in current_list_of_scheduled_events:
+    if event not in previous_list_of_scheduled_events:
+        prepare_for_event(event)
+# Recover from completed events
+for event in previous_list_of_scheduled_events:
+    if event not in current_list_of_scheduled_events:
+        recover_from_event(event)
+# Prepare for future events
+previous_list_of_scheduled_events = current_list_of_scheduled_events
+```
+As scheduled events are often used for applications with high availability requirements, there are a few exceptional cases that should be considered:
+
+1. Once a scheduled event is completed and removed from the array, there will be no further impacts without a new event, including another EventStatus:"Scheduled" event.
+2. Azure monitors maintenance operations across the entire fleet and, in rare circumstances, determines that a maintenance operation is too high risk to apply. In that case the scheduled event will go directly from "Scheduled" to being removed from the events array.
+3. In the case of hardware failure, Azure will bypass the "Scheduled" state and immediately move to the EventStatus:"Started" state.
+4. While the event is still in EventStatus:"Started" state, there may be additional impacts of a shorter duration than what was advertised in the scheduled event.
+
+As part of Azure's availability guarantee, VMs in different fault domains won't be impacted by routine maintenance operations at the same time. However, they may have operations serialized one after another. VMs in one fault domain can receive scheduled events with EventStatus:"Scheduled" shortly after another fault domain's maintenance is completed. Regardless of what architecture you choose, always keep checking for new events pending against your VMs.
+
+While the exact timings of events vary, the following diagram provides a rough guideline for how a typical maintenance operation proceeds:
+
+- EventStatus:"Scheduled" to Approval Timeout: 15 minutes
+- Impact Duration: 7 seconds
+- EventStatus:"Started" to Completed (event removed from Events array): 10 minutes
+
+![Diagram of a timeline showing the flow of a scheduled event.](media/scheduled-events/scheduled-events-timeline.png)
+
### Headers
When you query Metadata Service, you must provide the header `Metadata:true` to ensure the request wasn't unintentionally redirected. The `Metadata:true` header is required for all scheduled events requests. Failure to include the header in the request results in a "Bad Request" response from Metadata Service.
You can query for scheduled events by making the following call:
```
curl -H Metadata:true http://169.254.169.254/metadata/scheduledevents?api-version=2020-07-01
```
+#### PowerShell sample
+```
+Invoke-RestMethod -Headers @{"Metadata"="true"} -Method GET -Uri "http://169.254.169.254/metadata/scheduledevents?api-version=2020-07-01" | ConvertTo-Json -Depth 64
+```
#### Python sample
````
import json
In the case where there are scheduled events, the response contains an array of
}
```
-### Event Properties
+### Event properties
|Property | Description |
| - | - |
| Document Incarnation | Integer that increases when the events array changes. Documents with the same incarnation contain the same event information, and the incarnation will be incremented when an event changes. |
| EventId | Globally unique identifier for this event. <br><br> Example: <br><ul><li>602d9444-d2cd-49c7-8624-8643e7171297 |
-| EventType | Impact this event causes. <br><br> Values: <br><ul><li> `Freeze`: The Virtual Machine is scheduled to pause for a few seconds. CPU and network connectivity may be suspended, but there's no impact on memory or open files.<li>`Reboot`: The Virtual Machine is scheduled for reboot (non-persistent memory is lost). This event is made available on a best effort basis <li>`Redeploy`: The Virtual Machine is scheduled to move to another node (ephemeral disks are lost). This event is delivered on a best effort basis. <li>`Preempt`: The Spot Virtual Machine is being deleted (ephemeral disks are lost). <li> `Terminate`: The virtual machine is scheduled to be deleted. |
+| EventType | Impact this event causes. <br><br> Values: <br><ul><li> `Freeze`: The Virtual Machine is scheduled to pause for a few seconds. CPU and network connectivity may be suspended, but there's no impact on memory or open files.<li>`Reboot`: The Virtual Machine is scheduled for reboot (non-persistent memory is lost). <li>`Redeploy`: The Virtual Machine is scheduled to move to another node (ephemeral disks are lost). <li>`Preempt`: The Spot Virtual Machine is being deleted (ephemeral disks are lost). This event is made available on a best effort basis <li> `Terminate`: The virtual machine is scheduled to be deleted. |
| ResourceType | Type of resource this event affects. <br><br> Values: <ul><li>`VirtualMachine`|
| Resources| List of resources this event affects. <br><br> Example: <br><ul><li> ["FrontEnd_IN_0", "BackEnd_IN_0"] |
| EventStatus | Status of this event. <br><br> Values: <ul><li>`Scheduled`: This event is scheduled to start after the time specified in the `NotBefore` property.<li>`Started`: This event has started.</ul> No `Completed` or similar status is ever provided. The event is no longer returned when the event is finished. |
| NotBefore| Time after which this event can start. The event is guaranteed to not start before this time. Will be blank if the event has already started <br><br> Example: <br><ul><li> Mon, 19 Sep 2016 18:29:47 GMT |
| Description | Description of this event. <br><br> Example: <br><ul><li> Host server is undergoing maintenance. |
| EventSource | Initiator of the event. <br><br> Example: <br><ul><li> `Platform`: This event is initiated by platform. <li>`User`: This event is initiated by user. |
-| DurationInSeconds | The expected duration of the interruption caused by the event. <br><br> Example: <br><ul><li> `9`: The interruption caused by the event will last for 9 seconds. <li> `0`: The event won't interrupt the VM or impact its availability (for example, update to the network) <li>`-1`: The default value used if the impact duration is either unknown or not applicable. |
+| DurationInSeconds | The expected duration of the interruption caused by the event. <br><br> Example: <br><ul><li> `9`: The interruption caused by the event will last for 9 seconds. <li>`0`: The event won't interrupt the VM or impact its availability (e.g. update to the network) <li>`-1`: The default value used if the impact duration is either unknown or not applicable. |
-### Event Scheduling
+### Event scheduling
Each event is scheduled a minimum amount of time in the future based on the event type. This time is reflected in an event's `NotBefore` property.

|EventType | Minimum notice |
Each event is scheduled a minimum amount of time in the future based on the even
| Redeploy | 10 minutes |
| Terminate | [User Configurable](../../virtual-machine-scale-sets/virtual-machine-scale-sets-terminate-notification.md#enable-terminate-notifications): 5 to 15 minutes |
-Once an event is scheduled it will move into the started state after it is either approved or the not before time passes. However in rare cases the operation will be cancelled by Azure before it starts. In that case the event will be removed from the Events array and the impact will not occur as previously scheduled.
+Once an event is scheduled, it will move into the `Started` state after it's been approved or the `NotBefore` time passes. However, in rare cases, the operation will be canceled by Azure before it starts. In that case the event will be removed from the Events array, and the impact won't occur as previously scheduled.
> [!NOTE]
> In some cases, Azure is able to predict host failure due to degraded hardware and will attempt to mitigate disruption to your service by scheduling a migration. Affected virtual machines will receive a scheduled event with a `NotBefore` that is typically a few days in the future. The actual time varies depending on the predicted failure risk assessment. Azure tries to give 7 days' advance notice when possible, but the actual time varies and might be smaller if the prediction is that there's a high chance of the hardware failing imminently. To minimize risk to your service in case the hardware fails before the system-initiated migration, we recommend that you self-redeploy your virtual machine as soon as possible.

>[!NOTE]
> In the case the host node experiences a hardware failure, Azure will bypass the minimum notice period and immediately begin the recovery process for affected virtual machines. This reduces recovery time in the case that the affected VMs are unable to respond. During the recovery process, an event will be created for all impacted VMs with `EventType = Reboot` and `EventStatus = Started`.

### Polling frequency
You can poll the endpoint for updates as frequently or infrequently as you like. However, the longer the time between requests, the more time you potentially lose to react to an upcoming event. Most events have 5 to 15 minutes of advance notice, although in some cases advance notice might be as little as 30 seconds. To ensure that you have as much time as possible to take mitigating actions, we recommend that you poll the service once per second.

### Start an event
-After you learn of an upcoming event and finish your logic for graceful shutdown, you can approve the outstanding event by making a `POST` call to Metadata Service with `EventId`. This call indicates to Azure that it can shorten the minimum notification time (when possible). The event may not start immediately upon approval, in some cases Azure will require the approval of all the VMs hosted on the node before proceeding with the event.
+After you learn of an upcoming event and finish your logic for graceful shutdown, you can approve the outstanding event by making a `POST` call to Metadata Service with `EventId`. This call indicates to Azure that it can shorten the minimum notification time (when possible). The event may not start immediately upon approval, in some cases Azure will require the approval of all the VMs hosted on the node before proceeding with the event.
The following JSON sample is expected in the `POST` request body. The request should contain a list of `StartRequests`. Each `StartRequest` contains `EventId` for the event you want to expedite:

```
{
   "StartRequests" : [
      {
         "EventId": "<EventId>"
      }
   ]
}
```
-The service will always return a 200 success code for a valid event ID, even if it was already approved by a different VM. A 400 error code indicates that the request header or payload was malformed.
+The service will always return a 200 success code if it is passed a valid event ID, even if the event was already approved by a different VM. A 400 error code indicates that the request header or payload was malformed.
+> [!Note]
+> Events will not proceed unless they are either approved via a POST message or the NotBefore time elapses. This includes user triggered events such as VM restarts from the Azure portal.
#### Bash sample
```
curl -H Metadata:true -X POST -d '{"StartRequests": [{"EventId": "f020ba2e-3bc0-4c40-a10b-86575a9eabd5"}]}' http://169.254.169.254/metadata/scheduledevents?api-version=2020-07-01
```
+#### PowerShell sample
+```
+Invoke-RestMethod -Headers @{"Metadata" = "true"} -Method POST -body '{"StartRequests": [{"EventId": "5DD55B64-45AD-49D3-BBC9-F57D4EA97BD7"}]}' -Uri http://169.254.169.254/metadata/scheduledevents?api-version=2020-07-01 | ConvertTo-Json -Depth 64
+```
#### Python sample
````
import json
def confirm_scheduled_event(event_id):
> [!NOTE]
> Acknowledging an event allows the event to proceed for all `Resources` in the event, not just the VM that acknowledges the event. Therefore, you can choose to elect a leader to coordinate the acknowledgement, which might be as simple as the first machine in the `Resources` field.
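A minimal sketch of that election, assuming each VM's hostname matches its entry in the `Resources` field:

````
import socket

def i_am_leader(event):
    # The VM whose name sorts first in Resources coordinates the acknowledgement
    return socket.gethostname() == sorted(event["Resources"])[0]
````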
-## Example Responses
-The following response is an example of a series of events that were seen by two VMs that were live migrated to another node.
+## Example responses
+The following is an example of a series of events seen by two VMs that were live migrated to another node.
-The `DocumentIncarnation` is changing every time there's new information in `Events`. An approval of the event would allow the freeze to proceed for both WestNO_0 and WestNO_1. The `DurationInSeconds` of -1 indicates that the platform doesn't know how long the operation will take.
+The `DocumentIncarnation` is changing every time there is new information in `Events`. An approval of the event would allow the freeze to proceed for both WestNO_0 and WestNO_1. The `DurationInSeconds` of -1 indicates that the platform doesn't know how long the operation will take.
```JSON
{
def advanced_sample(last_document_incarnation):
int(event["DurationInSeconds"]) < 9): confirm_scheduled_event(event["EventId"])
- # Events that may be impactful (for example, Reboot or redeploy) may need custom
+ # Events that may be impactful (e.g. Reboot or redeploy) may need custom
# handling for your application else: #TODO Custom handling for impactful events
virtual-machines Time Sync https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/time-sync.md
cat /sys/class/ptp/ptp0/clock_name
This should return `hyperv`, meaning the Azure host.
-In Linux VMs with Accelerated Networking enabled, you may see multiple PTP devices listed because the Mellanox mlx5 driver also creates a /dev/ptp device. Because the initialization order can be different each time Linux boots, the PTP device corresponding to the Azure host might be `/dev/ptp0` or it might be `/dev/ptp1`, which makes it difficult to configure `chronyd` with the correct clock source. To solve this problem, the most recent Linux images have a `udev` rule that creates the symlink `/dev/ptp_hyperv` to whichever `/dev/ptp` entry corresponds to the Azure host. Chrony should be configured to use this symlink instead of `/dev/ptp0` or `/dev/ptp1`.
+In some Linux VMs you may see multiple PTP devices listed. For example, with Accelerated Networking enabled, the Mellanox mlx5 driver also creates a /dev/ptp device. Because the initialization order can be different each time Linux boots, the PTP device corresponding to the Azure host might be `/dev/ptp0` or it might be `/dev/ptp1`, which makes it difficult to configure `chronyd` with the correct clock source. To solve this problem, the most recent Linux images have a `udev` rule that creates the symlink `/dev/ptp_hyperv` to whichever `/dev/ptp` entry corresponds to the Azure host. Chrony should always be configured to use the `/dev/ptp_hyperv` symlink instead of `/dev/ptp0` or `/dev/ptp1`.
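As a minimal sketch of what that looks like in `/etc/chrony.conf` (the polling values here are common guidance and may need tuning for your distribution):

```
refclock PHC /dev/ptp_hyperv poll 3 dpoll -2 offset 0
```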
### chrony
virtual-machines Migration Classic Resource Manager Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/migration-classic-resource-manager-errors.md
This article catalogs the most common errors and mitigations during the migratio
| Migration isn't supported for Deployment {deployment-name} in HostedService {hosted-service-name} because it's a PaaS deployment (Web/Worker). |This happens when a deployment contains a web/worker role. Since migration is only supported for Virtual Machines, remove the web/worker role from the deployment and try migration again. | | Template {template-name} deployment failed. CorrelationId={guid} |In the backend of migration service, we use Azure Resource Manager templates to create resources in the Azure Resource Manager stack. Since templates are idempotent, usually you can safely retry the migration operation to get past this error. If this error continues to persist, [contact Azure support](../azure-portal/supportability/how-to-create-azure-support-request.md) and give them the CorrelationId. <br><br> **NOTE:** Once the incident is tracked by the support team, don't attempt any self-mitigation as this might have unintended consequences on your environment. | | The virtual network {virtual-network-name} doesn't exist. |This can happen if you created the Virtual Network in the new Azure portal. The actual Virtual Network name follows the pattern "Group * \<VNET name>" |
-| VM {vm-name} in HostedService {hosted-service-name} contains Extension {extension-name} which isn't supported in Azure Resource Manager. It's recommended to uninstall it from the VM before continuing with migration. |XML extensions such as BGInfo 1.\* aren't supported in Azure Resource Manager. Therefore, these extensions can't be migrated. If these extensions are left installed on the virtual machine, they're automatically uninstalled before completing the migration. |
| VM {vm-name} in HostedService {hosted-service-name} contains Extension {extension-name} which isn't supported in Azure Resource Manager. It's recommended to uninstall it from the VM before continuing with migration. |**NOTE:** The error message is in the process of being updated; moving forward, <b>it is required to uninstall the extension before the migration</b>. XML extensions such as BGInfo 1.\* aren't supported in Azure Resource Manager. Therefore, these extensions can't be migrated. |
| VM {vm-name} in HostedService {hosted-service-name} contains Extension VMSnapshot/VMSnapshotLinux, which is currently not supported for Migration. Uninstall it from the VM and add it back using Azure Resource Manager after the Migration is Complete |This is the scenario where the virtual machine is configured for Azure Backup. Since this is currently an unsupported scenario, follow the workaround at https://aka.ms/vmbackupmigration | | VM {vm-name} in HostedService {hosted-service-name} contains Extension {extension-name} whose Status isn't being reported from the VM. Hence, this VM can't be migrated. Ensure that the Extension status is being reported or uninstall the extension from the VM and retry migration. <br><br> VM {vm-name} in HostedService {hosted-service-name} contains Extension {extension-name} reporting Handler Status: {handler-status}. Hence, the VM can't be migrated. Ensure that the Extension handler status being reported is {handler-status} or uninstall it from the VM and retry migration. <br><br> VM Agent for VM {vm-name} in HostedService {hosted-service-name} is reporting the overall agent status as Not Ready. Hence, the VM may not be migrated, if it has a migratable extension. Ensure that the VM Agent is reporting overall agent status as Ready. Refer to https://aka.ms/classiciaasmigrationfaqs. |Azure guest agent & VM Extensions need outbound internet access to the VM storage account to populate their status. Common causes of status failure include <li> a Network Security Group that blocks outbound access to the internet <li> If the VNET has on premises DNS servers and DNS connectivity is lost <br><br> If you continue to see an unsupported status, you can uninstall the extensions to skip this check and move forward with migration. | | Migration isn't supported for Deployment {deployment-name} in HostedService {hosted-service-name} because it has multiple Availabilities Sets. |Currently, only hosted services that have 1 or less Availability sets can be migrated. To work around this problem, move the additional availability sets, and Virtual machines in those availability sets, to a different hosted service. |
virtual-machines Migration Classic Resource Manager Ps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/migration-classic-resource-manager-ps.md
Set your Azure subscription for the current session. This example sets the defau
## Step 5: Run commands to migrate your IaaS resources
-* [Migrate VMs in a cloud service (not in a virtual network)](#step-51-option-1migrate-virtual-machines-in-a-cloud-service-not-in-a-virtual-network)
-* [Migrate VMs in a virtual network](#step-51-option-2migrate-virtual-machines-in-a-virtual-network)
-* [Migrate a storage account](#step-52-migrate-a-storage-account)
+* [Migrate VMs in a cloud service (not in a virtual network)](#step-5a-option-1migrate-virtual-machines-in-a-cloud-service-not-in-a-virtual-network)
+* [Migrate VMs in a virtual network](#step-5a-option-2migrate-virtual-machines-in-a-virtual-network)
+* [Migrate a storage account](#step-5b-migrate-a-storage-account)
> [!NOTE] > All the operations described here are idempotent. If you have a problem other than an unsupported feature or a configuration error, we recommend that you retry the prepare, abort, or commit operation. The platform then tries the action again.
-### Step 5.1: Option 1 - Migrate virtual machines in a cloud service (not in a virtual network)
+### Step 5a: Option 1 - Migrate virtual machines in a cloud service (not in a virtual network)
Get the list of cloud services by using the following command. Then pick the cloud service that you want to migrate. If the VMs in the cloud service are in a virtual network or if they have web or worker roles, the command returns an error message.

```powershell
If the prepared configuration looks good, you can move forward and commit the re
Move-AzureService -Commit -ServiceName $serviceName -DeploymentName $deploymentName ```
-### Step 5.1: Option 2 - Migrate virtual machines in a virtual network
+### Step 5a: Option 2 - Migrate virtual machines in a virtual network
To migrate virtual machines in a virtual network, you migrate the virtual network. The virtual machines automatically migrate with the virtual network. Pick the virtual network that you want to migrate. > [!NOTE]
If the prepared configuration looks good, you can move forward and commit the re
Move-AzureVirtualNetwork -Commit -VirtualNetworkName $vnetName ```
-### Step 5.2: Migrate a storage account
+### Step 5b: Migrate a storage account
After you're done migrating the virtual machines, perform the following prerequisite checks before you migrate the storage accounts. > [!NOTE]
virtual-machines N Series Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/n-series-migration.md
The [NC (v1)-Series](./nc-series.md) VMs are AzureΓÇÖs oldest GPU-accelerated co
Today, given the relatively low compute performance of the aging NVIDIA K80 GPU platform, in comparison to VM series featuring newer GPUs, a popular use case for the NC-series is real-time inference and analytics workloads, where an accelerated VM must be available in a steady state to serve request from applications as they arrive. In these cases the volume or batch size of requests may be insufficient to benefit from more performant GPUs. NC VMs are also popular for developers and students learning about, developing for, or experimenting with GPU acceleration, who need an inexpensive cloud-based CUDA deployment target upon which to iterate that doesnΓÇÖt need to perform to production levels.
-In general, NC-Series customers should consider moving directly across from NC sizes to [NC T4 v3](./nct4-v3-series.md) sizes, AzureΓÇÖs new GPU-accelerated platform for light workloads powered by NVIDIA Tesla T4 GPUs, although other VM SKUs should be considered for workloads running on InfiniBand-enabled [NDm A100 v4](./ndm-a100-v4-series.md) size.
+In general, NC-Series customers should consider moving directly across from NC sizes to [NC T4 v3](./nct4-v3-series.md) sizes, Azure's new GPU-accelerated platform for light workloads powered by NVIDIA Tesla T4 GPUs.
| Current VM Size | Target VM Size | Difference in Specification |
||||
| Standard_NC6 <br> Standard_NC6_Promo | Standard_NC4as_T4_v3 <br>or<br>Standard_NC8as_T4 | CPU: Intel Haswell vs AMD Rome<br>GPU count: 1 (same)<br>GPU generation: NVIDIA Kepler vs. Turing (+2 generations, ~2x FP32 FLOPs)<br>GPU memory (GiB per GPU): 16 (+4)<br>vCPU: 4 (-2) or 8 (+2)<br>Memory GiB: 16 (-40) or 56 (same)<br>Temp Storage (SSD) GiB: 180 (-160) or 360 (+20)<br>Max data disks: 8 (-4) or 16 (+4)<br>Accelerated Networking: Yes (+)<br>Premium Storage: Yes (+)|
| Standard_NC12<br>Standard_NC12_Promo | Standard_NC16as_T4_v3 | CPU: Intel Haswell vs AMD Rome<br>GPU count: 1 (-1)<br>GPU generation: NVIDIA Kepler vs. Turing (+2 generations, ~2x FP32 FLOPs)<br>GPU memory (GiB per GPU): 16 (+4)<br>vCPU: 16 (+4)<br>Memory GiB: 110 (-2)<br>Temp Storage (SSD) GiB: 360 (-320)<br>Max data disks: 48 (+16)<br>Accelerated Networking: Yes (+)<br>Premium Storage: Yes (+) |
| Standard_NC24<br>Standard_NC24_Promo | Standard_NC64as_T4_v3* | CPU: Intel Haswell vs AMD Rome<br>GPU count: 4 (same)<br>GPU generation: NVIDIA Kepler vs. Turing (+2 generations, ~2x FP32 FLOPs)<br>GPU memory (GiB per GPU): 16 (+4)<br>vCPU: 64 (+40)<br>Memory GiB: 440 (+216)<br>Temp Storage (SSD) GiB: 2880 (+1440)<br>Max data disks: 32 (-32)<br>Accelerated Networking: Yes (+)<br>Premium Storage: Yes (+) |
-|Standard_NC24r<br>Standard_NC24r_Promo<br><br>(InfiniBand clustering-enabled sizes) | Standard_ND96amsr_A100_v4 | CPU: Intel Haswell vs AMD Rome<br>GPU count: 8 (+4)<br>GPU generation: NVIDIA Kepler vs. Ampere (+3 generations)<br>GPU memory (GiB per GPU): 80 (+72)<br>vCPU: 96 (+72)<br>Memory GiB: 1900 (+1676)<br>Temp Storage (SSD) GiB: 6400 (+4960)<br>Max data disks: 32 (same)<br>Accelerated Networking: Yes (+)<br>Premium Storage: Yes (+)<br>InfiniBand interconnect: Yes |
+| Standard_NC24r<br>Standard_NC24r_Promo | Standard_NC64as_T4_v3* | CPU: Intel Haswell vs AMD Rome<br>GPU count: 4 (same)<br>GPU generation: NVIDIA Kepler vs. Turing (+2 generations, ~2x FP32 FLOPs)<br>GPU memory (GiB per GPU): 16 (+4)<br>vCPU: 64 (+40)<br>Memory GiB: 440 (+216)<br>Temp Storage (SSD) GiB: 2880 (+1440)<br>Max data disks: 32 (-32)<br>Accelerated Networking: Yes (+)<br>Premium Storage: Yes (+) <br>InfiniBand interconnect: No |
+### NC v2-Series VMs featuring NVIDIA Tesla P100 GPUs
+
+The NC v2-series virtual machines are a flagship platform originally designed for AI and Deep Learning workloads. They offered excellent performance for Deep Learning training, with per-GPU performance roughly 2x that of the original NC-Series, and are powered by NVIDIA Tesla P100 GPUs and Intel Xeon E5-2690 v4 (Broadwell) CPUs. Like the NC and ND-Series, the NC v2-Series offers a configuration with a secondary low-latency, high-throughput network through RDMA, and InfiniBand connectivity so you can run large-scale training jobs spanning many GPUs.
+
+In general, NCv2-Series customers should consider moving directly across to [NC A100 v4](./nc-a100-v4-series.md) sizes, Azure's new GPU-accelerated platform powered by NVIDIA Ampere A100 PCIe GPUs.
+
+| Current VM Size | Target VM Size | Difference in Specification |
+||||
+| Standard_NC6s_v2 | Standard_NC24ads_A100_v4 | CPU: Intel Broadwell vs AMD Milan <br>GPU count: 1 (same)<br>GPU generation: NVIDIA Pascal vs. Ampere (+2 generation)<br>GPU memory (GiB per GPU): 80 (+64)<br>vCPU: 24 (+18)<br>Memory GiB: 220 (+108)<br>Temp Storage (SSD) GiB: 1123 (+387)<br>Max data disks: 12 (same)<br>Accelerated Networking: Yes (+)<br>Premium Storage: Yes (+) |
+| Standard_NC12s_v2 | Standard_NC48ads_A100_v4 | CPU: Intel Broadwell vs AMD Milan<br>GPU count: 2 (same)<br>GPU generation: NVIDIA Pascal vs. Ampere (+2 generations)<br>GPU memory (GiB per GPU): 80 (+64)<br>vCPU: 48 (+36)<br>Memory GiB: 440 (+216)<br>Temp Storage (SSD) GiB: 2246 (+772)<br>Max data disks: 24 (same)<br>Accelerated Networking: Yes (+)<br>Premium Storage: Yes (+) |
+| Standard_NC24s_v2 | Standard_NC96ads_A100_v4 | CPU: Intel Broadwell vs AMD Milan<br>GPU count: 4 (same)<br>GPU generation: NVIDIA Pascal vs. Ampere (+2 generations)<br>GPU memory (GiB per GPU): 80 (+64)<br>vCPU: 96 (+72)<br>Memory GiB: 880 (+432)<br>Temp Storage (SSD) GiB: 4492 (+1544)<br>Max data disks: 32 (same)<br>Accelerated Networking: Yes (+)<br>Premium Storage: Yes (+) |
+| Standard_NC24rs_v2 | Standard_NC96ads_A100_v4 | CPU: Intel Broadwell vs AMD Milan <br>GPU count: 4 (Same)<br>GPU generation: NVIDIA Pascal vs. Ampere (+2 generations)<br>GPU memory (GiB per GPU): 80 (+64)<br>vCPU: 96 (+72)<br>Memory GiB: 880 (+432)<br>Temp Storage (SSD) GiB: 4492 (+1544)<br>Max data disks: 32 (same)<br>Accelerated Networking: Yes (+)<br>Premium Storage: Yes (+)<br>InfiniBand interconnect: No (-)|
### ND-Series VMs featuring NVIDIA Tesla P40 GPUs
The ND-series virtual machines are a midrange platform originally designed for A
| Standard_ND24 | Standard_NC64as_T4_v3* | CPU: Intel Broadwell vs AMD Rome<br>GPU count: 4 (same)<br>GPU generation: NVIDIA Pascal vs. Turing (+1 generations)<br>GPU memory (GiB per GPU): 16 (-8)<br>vCPU: 64 (+40)<br>Memory GiB: 440 (same)<br>Temp Storage (SSD) GiB: 2880 (same)<br>Max data disks: 32 (same)<br>Accelerated Networking: Yes (+)<br>Premium Storage: Yes (+) | | Standard_ND24r |Standard_ND96amsr_A100_v4 | CPU: Intel Broadwell vs AMD Rome<br>GPU count: 8 (+4)<br>GPU generation: NVIDIA Pascal vs. Ampere (+2 generation)<br>GPU memory (GiB per GPU): 80 (+56)<br>vCPU: 96 (+72)<br>Memory GiB: 1900 (+1452)<br>Temp Storage (SSD) GiB: 6400 (+3452)<br>Max data disks: 32 (same)<br>Accelerated Networking: Yes (+)<br>Premium Storage: Yes (+)<br>InfiniBand interconnect: Yes (Same) |
-### NC v2-Series VMs featuring NVIDIA Tesla P100 GPUs
-
-The NC v2-series virtual machines are a flagship platform originally designed for AI and Deep Learning workloads. They offered excellent performance for Deep Learning training, with per-GPU performance roughly 2x that of the original NC-Series and are powered by NVIDIA Tesla P100 GPUs and Intel Xeon E5-2690 v4 (Broadwell) CPUs. Like the NC and ND -Series, the NC v2-Series offers a configuration with a secondary low-latency, high-throughput network through RDMA, and InfiniBand connectivity so you can run large-scale training jobs spanning many GPUs.
-In general, NCv2-Series customers should consider moving directly across to [NC A100 v4](./nc-a100-v4-series.md) sizes, Azure's new GPU-accelerated platform powered by NVIDIA Ampere A100 PCIe GPUs, although other VM SKUs should be considered for workloads running on InfiniBand-enabled [NDm A100 v4](./ndm-a100-v4-series.md) size.
-
-| Current VM Size | Target VM Size | Difference in Specification |
-||||
-| Standard_NC6s_v2 | Standard_NC24ads_A100_v4 | CPU: Intel Broadwell vs AMD Milan <br>GPU count: 1 (same)<br>GPU generation: NVIDIA Pascal vs. Ampere (+2 generation)<br>GPU memory (GiB per GPU): 80 (+64)<br>vCPU: 24 (+18)<br>Memory GiB: 220 (+108)<br>Temp Storage (SSD) GiB: 1123 (+387)<br>Max data disks: 12 (same)<br>Accelerated Networking: Yes (+)<br>Premium Storage: Yes (+) |
-| Standard_NC12s_v2 | Standard_NC48ads_A100_v4 | CPU: Intel Broadwell vs AMD Milan<br>GPU count: 2 (same)<br>GPU generation: NVIDIA Pascal vs. Ampere (+2 generations)<br>GPU memory (GiB per GPU): 80 (+64)<br>vCPU: 48 (+36)<br>Memory GiB: 440 (+216)<br>Temp Storage (SSD) GiB: 2246 (+772)<br>Max data disks: 24 (same)<br>Accelerated Networking: Yes (+)<br>Premium Storage: Yes (+) |
-| Standard_NC24s_v2 | Standard_NC96ads_A100_v4 | CPU: Intel Broadwell vs AMD Milan<br>GPU count: 4 (same)<br>GPU generation: NVIDIA Pascal vs. Ampere (+2 generations)<br>GPU memory (GiB per GPU): 80 (+64)<br>vCPU: 96 (+72)<br>Memory GiB: 880 (+432)<br>Temp Storage (SSD) GiB: 4492 (+1544)<br>Max data disks: 32 (same)<br>Accelerated Networking: Yes (+)<br>Premium Storage: Yes (+) |
-| Standard_NC24rs_v2 | Standard_ND96amsr_A100_v4 | CPU: Intel Broadwell vs AMD Rome <br>GPU count: 8 (+4)<br>GPU generation: NVIDIA Pascal vs. Ampere (+2 generations)<br>GPU memory (GiB per GPU): 80 (+64)<br>vCPU: 96 (+72)<br>Memory GiB: 1900 (same)<br>Temp Storage (SSD) GiB: 6400 (+3452)<br>Max data disks: 32 (same)<br>Accelerated Networking: Yes (+)<br>Premium Storage: Yes (+)<br>InfiniBand interconnect: Yes (Same)|
## Migration Steps
virtual-machines Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/policy-reference.md
Title: Built-in policy definitions for Azure Virtual Machines description: Lists Azure Policy built-in policy definitions for Azure Virtual Machines. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2023 Last updated : 08/30/2023
virtual-machines Reserved Vm Instance Size Flexibility https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/reserved-vm-instance-size-flexibility.md
Azure keeps link and schema updated so that you can use the file programmaticall
## View VM size recommendations
-Azure shows VM size recommendations in the purchase experience. To view the smallest size recommendations, select **Group by smallest size**.
+Azure shows VM size recommendations in the purchase experience. When enabled, the **Optimize for instance size flexibility (preview)** option groups and sorts recommendations by instance size flexibility.
:::image type="content" source="./media/reserved-vm-instance-size-flexibility/select-product-recommended-quantity.png" alt-text="Screenshot showing recommended quantities." lightbox="./media/reserved-vm-instance-size-flexibility/select-product-recommended-quantity.png" :::
virtual-machines Security Controls Policy Image Builder https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/security-controls-policy-image-builder.md
Title: Azure Policy Regulatory Compliance controls for Azure VM Image Builder description: Lists Azure Policy Regulatory Compliance controls available for Azure VM Image Builder. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/03/2023 Last updated : 08/25/2023
virtual-machines Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Virtual Machines description: Lists Azure Policy Regulatory Compliance controls available for Azure Virtual Machines . These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/03/2023 Last updated : 08/25/2023
virtual-machines Setup Mpi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/setup-mpi.md
The following figure illustrates the architecture for the popular MPI libraries.
![Architecture for popular MPI libraries](./media/hpc/mpi-architecture.png)
-## UCX
-
-[Unified Communication X (UCX)](https://github.com/openucx/ucx) is a framework of communication APIs for HPC. It is optimized for MPI communication over InfiniBand and works with many MPI implementations such as OpenMPI and MPICH.
-
-```bash
-wget https://github.com/openucx/ucx/releases/download/v1.4.0/ucx-1.4.0.tar.gz
-tar -xvf ucx-1.4.0.tar.gz
-cd ucx-1.4.0
-./configure --prefix=<ucx-install-path>
-make -j 8 && make install
-```
-
-> [!NOTE]
-> Recent builds of UCX have fixed an [issue](https://github.com/openucx/ucx/pull/5965) whereby the right InfiniBand interface is chosen in the presence of multiple NIC interfaces. For more information, see [Troubleshooting known issues with HPC and GPU VMs](hb-hc-known-issues.md) on running MPI over InfiniBand when Accelerated Networking is enabled on the VM.
## HPC-X

The [HPC-X software toolkit](https://www.mellanox.com/products/hpc-x-toolkit) contains UCX and HCOLL and can be built against UCX.
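As a minimal sketch of putting HPC-X on the path before running MPI jobs (the archive and application names are placeholders for whichever build you download), the toolkit ships an `hpcx-init.sh` script and an `hpcx_load` helper:

```bash
# Placeholder archive name; substitute the HPC-X build matching your OS and OFED version
tar -xvf hpcx.tbz
cd hpcx
source hpcx-init.sh
hpcx_load
# mpirun from HPC-X is now first on the PATH
mpirun -np 2 ./my_mpi_app
```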
cat /sys/class/infiniband/mlx5_0/ports/1/pkeys/1
0x7fff
```
-Use the partition other than default (0x7fff) partition key. UCX requires the MSB of p-key to be cleared. For example, set UCX_IB_PKEY as 0x000b for 0x800b.
+Please note that interfaces are named mlx5_ib* inside the HPC VM image.
Also note that as long as the tenant (Availability Set or Virtual Machine Scale Set) exists, the PKEYs remain the same. This is true even when nodes are added/deleted. New tenants get different PKEYs.
virtual-machines Share Gallery Community https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/share-gallery-community.md
Previously updated : 05/24/2023 Last updated : 08/15/2023
ms.devlang: azurecli
# Share images using a community gallery (preview)
-To share a gallery with all Azure users, you can create a [community gallery (preview)](azure-compute-gallery.md#community-gallery). Community galleries can be used by anyone with an Azure subscription. Someone creating a VM can browse images shared with the community using the portal, REST, or the Azure CLI.
+To share a gallery with all Azure users, you can create a [community gallery](azure-compute-gallery.md#community-gallery). Community galleries can be used by anyone with an Azure subscription. Someone creating a VM can browse images shared with the community using the portal, REST, or the Azure CLI.
-Sharing images to the community is a new capability in [Azure Compute Gallery](./azure-compute-gallery.md#community). In the preview, you can make your image galleries public, and share them to all Azure customers. When a gallery is marked as a community gallery, all images under the gallery become available to all Azure customers as a new resource type under Microsoft.Compute/communityGalleries. All Azure customers can see the galleries and use them to create VMs. Your original resources of the type `Microsoft.Compute/galleries` are still under your subscription, and private.
+Sharing images to the community is a new capability in [Azure Compute Gallery](./azure-compute-gallery.md#community). You can make your image galleries public, and share them to all Azure customers. When a gallery is marked as a community gallery, all images under the gallery become available to all Azure customers as a new resource type under Microsoft.Compute/communityGalleries. All Azure customers can see the galleries and use them to create VMs. Your original resources of the type `Microsoft.Compute/galleries` are still under your subscription, and private.
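As a sketch, a gallery can be created with community sharing enabled from the Azure CLI; the publisher values below are placeholders, and the exact flags should be checked against your CLI version:

```azurecli
az sig create \
   --gallery-name <yourgalleryname> \
   --resource-group <yourresourcegroupname> \
   --location <yourregion> \
   --permissions community \
   --publisher-uri <https://your.contact.page> \
   --publisher-email <contact@example.com> \
   --eula <https://your.eula.link> \
   --public-name-prefix <yourprefix>
```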
> [!IMPORTANT]
-> Azure Compute Gallery - community galleries is currently in PREVIEW and subject to the [Preview Terms for Azure Compute Gallery - community gallery](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
->
> Microsoft does not provide support for images you share to the community. >
-> [!INCLUDE [community-gallery-artifacts](./includes/community-gallery-artifacts.md)]
->
-> To publish a community gallery, you'll need to enable the preview feature using the Azure CLI: `az feature register --name CommunityGalleries --namespace Microsoft.Compute` or PowerShell: `Register-AzProviderFeature -FeatureName "CommunityGalleries" -ProviderNamespace "Microsoft.Compute"`. For more information on enabling preview features and checking the status, see [Set up preview features in your Azure subscription](../azure-resource-manager/management/preview-features.md). Creating VMs from community gallery images is open to all Azure users.
->
-> You can't currently create a Flexible virtual machine scale set from an image shared by another tenant.
++ There are three main ways to share images in an Azure Compute Gallery, depending on who you want to share with:
There are three main ways to share images in an Azure Compute Gallery, depending
| RBAC + [Direct shared gallery](./share-gallery-direct.md) | Yes | Yes | Yes | Yes | No | | RBAC + [Community gallery](./share-gallery-community.md) | Yes | Yes | Yes | No | Yes |
+## Disclaimer
++ ## Limitations for images shared to the community
There are some limitations for sharing your gallery to the community:
-- For the preview, image resources need to be created in the same region as the gallery. For example, if you create a gallery in West US, the image definitions and image versions should be created in West US if you want to make them available during the public preview.
-- For the preview, you can't share [VM Applications](vm-applications.md) to the community.
-- The image version region in the gallery should be same as the region home region, creating of cross-region version where the home region is different than the gallery isn't supported, however once the image is in the home region it can be replicated to other regions
-- To find images shared to the community from the Azure portal, you need to go through the VM create or scale set creation pages. You can't search the portal or Azure Marketplace for the images
+- You can't convert an existing private gallery to Community gallery.
+- You can't use a third-party image from Marketplace and publish it to the community. For a list of approved operating system base images, see [approved base images](https://go.microsoft.com/fwlink/?linkid=2245050).
+- Encrypted images aren't supported.
+- Image resources need to be created in the same region as the gallery. For example, if you create a gallery in West US, the image definitions and image versions should be created in West US if you want to make them available.
+- You can't share [VM Applications](vm-applications.md) to the community yet.
## How sharing with the community works
The end-users can only interact with the proxy resources, they never interact wi
Azure users can see the latest image versions shared to the community in the portal, or query for them using the CLI. Only the latest version of an image is listed in the community gallery.
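As a sketch of the CLI side (the location value is a placeholder, and the public gallery name comes from the gallery's sharing profile), consumers can list community galleries and their image definitions like this:

```bash
# List community galleries visible in a region.
az sig list-community --location westus

# List the image definitions a specific community gallery exposes.
az sig image-definition list-community \
  --public-gallery-name <communityGalleryPublicName> \
  --location westus
```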
-When creating a community gallery, you will need to provide contact information for your images. The objective and underlying intention of this information is to facilitate communication between the consumer of the image and the publisher, like if the consumer needs assistance. Be aware that Microsoft does not offer support for these images. This information will be shown **publicly**, so be careful when providing it:
+When creating a community gallery, you will need to provide contact information for your images. The objective of this information is to facilitate communication between the consumer of the image and the publisher, such as when the consumer needs assistance. Microsoft doesn't offer support for these images. This information will be shown **publicly**, so be careful when providing it:
- Community gallery prefix - Publisher support email - Publisher URL
Information from your image definitions will also be publicly available, like wh
> [!WARNING] > If you want to stop sharing a gallery publicly, you can update the gallery to stop sharing, but making the gallery private will prevent existing virtual machine scale set users from scaling their resources.
->
-> If you stop sharing your gallery during the preview, you won't be able to re-share it.
+ ## Why share to the community?
Why use a marketplace image?
- Microsoft certified images
- Can be used for production workloads
- First party and third party images
-- Free and Paid images with additional software offerings
+- Paid images with additional software offerings
- Supported by Microsoft

When to use a community image?
- You trust and know how to contact the publisher
- You're looking for a community version of an image published by open-source community
- Using the image for testing
-"- Community images are free images. However, if the image used to build the community image was an Azure marketplace image with a cost associated with it, you will be billed for those same costs. You can see what costs are associated with a Marketplace image by looking at the **Plans + Pricing** tab for the image in the Azure portal."
+- Community images are free
- Supported by the owner of the image, not Microsoft.

## Reporting issues with a community image
Using community-submitted virtual machine images has several risks. Images could contain malware, security vulnerabilities, or violate someone's intellectual property. To help create a secure and reliable experience for the community, you can report images when you see these issues.
-Use the following links to report issues:
+The easiest way to report issues with a community gallery is to use the portal, which will pre-fill information for the report:
+- For issues with links or other information in the fields of an image definition, select **Report community image**.
+- If an image version contains malicious code or there are other issues with a specific version of an image, select **Report** under the **Report version** column in the table of image versions.
+
+You can also use the following links to report issues, but the forms won't be pre-filled:
- Malicious images: Contact [Abuse Report](https://msrc.microsoft.com/report/abuse).
- Intellectual Property violations: Contact [Infringement Report](https://msrc.microsoft.com/report/infringement).
virtual-machines Share Gallery Direct https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/share-gallery-direct.md
This article covers how to share an Azure Compute Gallery with specific subscrip
> [!IMPORTANT]
> Azure Compute Gallery - direct shared gallery is currently in PREVIEW and subject to the [Preview Terms for Azure Compute Gallery](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
>
-> To publish images to a direct shared gallery during the preview, you need to register at [https://aka.ms/directsharedgallery-preview](https://aka.ms/directsharedgallery-preview). Please submit the form and share your use case, We will evaluate the request and follow up in 10 business days after submitting the form. No additional access required to consume images, Creating VMs from a direct shared gallery is open to all Azure users in the target subscription or tenant the gallery is shared with. In most scenarios RBAC/Cross-tenant sharing using service principal is sufficient, request access to this feature only if you wish to share images widely with all users in the subscription/tenant.
+> To publish images to a direct shared gallery during the preview, you need to register at [https://aka.ms/directsharedgallery-preview](https://aka.ms/directsharedgallery-preview). Please submit the form and share your business case. No additional access is required to consume images; creating VMs from a direct shared gallery is open to all Azure users in the target subscription or tenant the gallery is shared with. In most scenarios, RBAC or cross-tenant sharing using a service principal is sufficient, and we encourage customers to use RBAC sharing. Request access to the direct shared gallery feature only if you want to share images widely with all users in the subscription or tenant and your business case requires it.
> > During the preview, you need to create a new gallery, with the property `sharingProfile.permissions` set to `Groups`. When using the CLI to create a gallery, use the `--permissions groups` parameter. You can't use an existing gallery, the property can't currently be updated.
During the preview:
- A direct shared gallery can't contain encrypted image versions. Encrypted images can't be created within a gallery that is directly shared.
- Only the owner of a subscription, or a user or service principal assigned to the `Compute Gallery Sharing Admin` role at the subscription or gallery level, will be able to enable group-based sharing.
- You need to create a new gallery, with the property `sharingProfile.permissions` set to `Groups`. When using the CLI to create a gallery, use the `--permissions groups` parameter. You can't use an existing gallery; the property can't currently be updated.
-- TrustedLaunch and ConfidentialVM are not supported
- PowerShell, Ansible, and Terraform aren't supported at this time.
- The image version region in the gallery should be the same as the gallery's home region. Creating a cross-region version where the home region is different from the gallery isn't supported; however, once the image is in the home region, it can be replicated to other regions.
- Not available in Government clouds
virtual-machines Shared Image Galleries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/shared-image-galleries.md
Previously updated : 05/23/2023 Last updated : 08/15/2023 #Customer intent: As an IT administrator, I want to learn about how to create shared VM images to minimize the number of post-deployment configuration tasks.
virtual-machines Sizes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/sizes.md
This article describes the available sizes and options for the Azure virtual mac
| Type | Sizes | Description | ||-|-|
-| [General purpose](sizes-general.md) | B, Dsv3, Dv3, Dasv4, Dav4, DSv2, Dv2, Av2, DC, DCv2, Dpdsv5, Dpldsv5, Dpsv5, Dplsv5, Dv4, Dsv4, Ddv4, Ddsv4, Dv5, Dsv5, Ddv5, Ddsv5, Dasv5, Dadsv5 | Balanced CPU-to-memory ratio. Ideal for testing and development, small to medium databases, and low to medium traffic web servers. |
+| [General purpose](sizes-general.md) | B, Dsv3, Dv3, Dasv4, Dav4, DSv2, Dv2, Av2, DC, DCv2, Dpdsv5, Dpldsv5, Dpsv5, Dplsv5, Dv4, Dsv4, Ddv4, Ddsv4, Dv5, Dsv5, Ddv5, Ddsv5, Dasv5, Dadsv5, DCasv5, DCadsv5 | Balanced CPU-to-memory ratio. Ideal for testing and development, small to medium databases, and low to medium traffic web servers. |
| [Compute optimized](sizes-compute.md) | F, Fs, Fsv2, FX | High CPU-to-memory ratio. Good for medium traffic web servers, network appliances, batch processes, and application servers. |
-| [Memory optimized](sizes-memory.md) | Esv3, Ev3, Easv4, Eav4, Epdsv5, Epsv5, Ev4, Esv4, Edv4, Edsv4, Ev5, Esv5, Edv5, Edsv5, Easv5, Eadsv5, Mv2, M, DSv2, Dv2 | High memory-to-CPU ratio. Great for relational database servers, medium to large caches, and in-memory analytics. |
+| [Memory optimized](sizes-memory.md) | Esv3, Ev3, Easv4, Eav4, Epdsv5, Epsv5, Ev4, Esv4, Edv4, Edsv4, Ev5, Esv5, Edv5, Edsv5, Easv5, Eadsv5, Mv2, M, DSv2, Dv2, ECasv5, ECadsv5 | High memory-to-CPU ratio. Great for relational database servers, medium to large caches, and in-memory analytics. |
| [Storage optimized](sizes-storage.md) | Lsv2, Lsv3, Lasv3 | High disk throughput and IO ideal for Big Data, SQL, NoSQL databases, data warehousing and large transactional databases. | | [GPU](sizes-gpu.md) | NC, NCv2, NCv3, NCasT4_v3, NC A100 v4, ND, NDv2, NGads V620, NV, NVv3, NVv4, NDasrA100_v4, NDm_A100_v4 | Specialized virtual machines targeted for heavy graphic rendering and video editing, as well as model training and inferencing (ND) with deep learning. Available with single or multiple GPUs. | | [High performance compute](sizes-hpc.md) | HB, HBv2, HBv3, HBv4, HC, HX | Our fastest and most powerful CPU virtual machines with optional high-throughput network interfaces (RDMA). |
This article describes the available sizes and options for the Azure virtual mac
## REST API
-For information on using the REST API to query for VM sizes, see the following:
+For information on using the REST API to query for VM sizes, see the following articles:
- [List available virtual machine sizes for resizing](/rest/api/compute/virtualmachines/listavailablesizes) - [List available virtual machine sizes for a subscription](/rest/api/compute/resourceskus/list)
virtual-machines Virtual Machines Create Restore Points https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/virtual-machines-create-restore-points.md
An individual VM restore point is a resource that stores VM configuration and po
VM restore points support both application consistency and crash consistency (in preview). Application consistency is supported for VMs running Windows operating systems; file system consistency is supported for VMs running Linux operating systems. Application-consistent restore points use VSS writers (or pre/post scripts for Linux) to ensure the consistency of the application data before a restore point is created. To get an application-consistent restore point, the application running in the VM needs to provide a VSS writer (for Windows), or pre and post scripts (for Linux) to achieve application consistency.
-Crash consistent VM restore point stores the VM configuration and point-in-time write-order consistent snapshots for all managed disks attached to a Virtual Machine. This is same as the status of data in the VM after a power outage or a crash. "consistencyMode" optional parameter has to be set to "crashConsistent" in the creation request. This feature is currently in preview.
+Multi-disk crash consistent VM restore point stores the VM configuration and point-in-time write-order consistent snapshots for all managed disks attached to a virtual machine. This is the same as the status of data in the VM after a power outage or a crash. The "consistencyMode" optional parameter has to be set to "crashConsistent" in the creation request. This feature is currently in preview.
+
+> [!NOTE]
+> For disks configured with read/write host caching, multi-disk crash consistency can't be guaranteed because writes occurring while the snapshot is taken might not have been acknowledged by Azure Storage. If maintaining consistency is crucial, we advise using the application consistency mode.
VM restore points are organized into restore point collections. A restore point collection is an Azure Resource Management resource that contains the restore points for a specific VM. If you want to utilize ARM templates for creating restore points and restore point collections, visit the public [Virtual-Machine-Restore-Points](https://github.com/Azure/Virtual-Machine-Restore-Points) repository on GitHub.
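As a rough sketch of a crash-consistent creation request (the resource names are placeholders, it assumes an existing restore point collection that references the source VM, and the api-version shown is an assumption; check the REST reference for the current one), you can drive the REST API with `az rest`:

```bash
# Create a restore point with crash consistency by setting consistencyMode
# in the request body, as described above.
az rest --method put \
  --url "https://management.azure.com/subscriptions/<subId>/resourceGroups/myResourceGroup/providers/Microsoft.Compute/restorePointCollections/myCollection/restorePoints/myRestorePoint?api-version=2023-03-01" \
  --body '{"properties": {"consistencyMode": "CrashConsistent"}}'
```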
virtual-machines Vm Generalized Image Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/vm-generalized-image-version.md
Previously updated : 03/23/2023 Last updated : 08/15/2023
New-AzVM -ResourceGroupName $resourceGroup -Location $location -VM $vmConfig
```
-## Community gallery
+
+<a name="community-gallery"></a>
+
+## Community gallery (preview)
> [!IMPORTANT]
-> Azure Compute Gallery ΓÇô community galleries is currently in PREVIEW and subject to the [Preview Terms for Azure Compute Gallery - community gallery](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
->
->To publish a community gallery, you'll need to [set up preview features in your Azure subscription](/azure/azure-resource-manager/management/preview-features?tabs=azure-portal). Creating VMs from community gallery images is open to all Azure users.
->
> Microsoft does not provide support for images in the [community gallery](azure-compute-gallery.md#community).

## Reporting issues with a community image
Using community-submitted virtual machine images has several risks. Images could contain malware, security vulnerabilities, or violate someone's intellectual property. To help create a secure and reliable experience for the community, you can report images when you see these issues.
-Use the following links to report issues:
+The easiest way to report issues with a community gallery is to use the portal, which will pre-fill information for the report:
+- For issues with links or other information in the fields of an image definition, select **Report community image**.
+- If an image version contains malicious code or there are other issues with a specific version of an image, select **Report** under the **Report version** column in the table of image versions.
+
+You can also use the following links to report issues, but the forms won't be pre-filled:
- Malicious images: Contact [Abuse Report](https://msrc.microsoft.com/report/abuse).
- Intellectual Property violations: Contact [Infringement Report](https://msrc.microsoft.com/report/infringement).
virtual-machines Vm Specialized Image Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/vm-specialized-image-version.md
Previously updated : 03/23/2023 Last updated : 08/15/2023
New-AzVM `
## Community gallery

> [!IMPORTANT]
-> Azure Compute Gallery ΓÇô community galleries is currently in PREVIEW and subject to the [Preview Terms for Azure Compute Gallery - community gallery](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
->
> Microsoft does not provide support for images in the [community gallery](azure-compute-gallery.md#community).

## Reporting issues with a community image
Using community-submitted virtual machine images has several risks. Images could contain malware, security vulnerabilities, or violate someone's intellectual property. To help create a secure and reliable experience for the community, you can report images when you see these issues.
-Use the following links to report issues:
+The easiest way to report issues with a community gallery is to use the portal, which will pre-fill information for the report:
+- For issues with links or other information in the fields of an image definition, select **Report community image**.
+- If an image version contains malicious code or there are other issues with a specific version of an image, select **Report** under the **Report version** column in the table of image versions.
+
+You can also use the following links to report issues, but the forms won't be pre-filled:
- Malicious images: Contact [Abuse Report](https://msrc.microsoft.com/report/abuse).
- Intellectual Property violations: Contact [Infringement Report](https://msrc.microsoft.com/report/infringement).
To create the VM from community gallery image, you must accept the license agree
1. For **Security type**, make sure *Standard* is selected.
1. For your **Image**, select **See all images**. The **Select an image** page will open.
   :::image type="content" source="media/shared-image-galleries/see-all-images.png" alt-text="Screenshot showing the link to select to see more image options.":::
-1. In the left menu, under **Other Items**, select **Community images (PREVIEW)**. The **Other Items | Community Images (PREVIEW)** page will open.
+1. In the left menu, under **Other Items**, select **Community images**. The **Other Items | Community Images** page will open.
:::image type="content" source="media/shared-image-galleries/community.png" alt-text="Screenshot showing where to select community gallery images.":::
1. Select an image from the list. Make sure that the **OS state** is *Specialized*. If you want to use a generalized image, see [Create a VM using a generalized image version](vm-generalized-image-version.md). Depending on the image chosen, the **Region** the VM will be created in will change to match the image.
1. Complete the rest of the options and then select the **Review + create** button at the bottom of the page.
virtual-machines Disk Encryption Key Vault Aad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/disk-encryption-key-vault-aad.md
Last updated 01/04/2023---+ # Creating and configuring a key vault for Azure Disk Encryption with Azure AD (previous release)
If you would like to use certificate authentication and wrap the encryption key
## Next steps
-[Enable Azure Disk Encryption with Azure AD on Windows VMs (previous release)](disk-encryption-windows-aad.md)
+[Enable Azure Disk Encryption with Azure AD on Windows VMs (previous release)](disk-encryption-windows-aad.md)
virtual-machines Disks Upload Vhd To Managed Disk Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/disks-upload-vhd-to-managed-disk-powershell.md
Title: Upload a VHD to Azure or copy a disk across regions - Azure PowerShell
description: Learn how to upload a VHD to an Azure managed disk and copy a managed disk across regions, using Azure PowerShell, via direct upload. Previously updated : 03/31/2023 Last updated : 08/25/2023 linux
**Applies to:** :heavy_check_mark: Windows VMs
-This article explains how to either upload a VHD from your local machine to an Azure managed disk or copy a managed disk to another region, using the Azure PowerShell module. The process of uploading a managed disk, also known as direct upload, enables you to upload a VHD up to 32 TiB in size directly into a managed disk. Currently, direct upload is supported for standard HDD, standard SSD, and premium SSDs. It isn't supported for ultra disks, yet.
+This article explains how to either upload a VHD from your local machine to an Azure managed disk or copy a managed disk to another region, using the Azure PowerShell module. The process of uploading a managed disk, also known as direct upload, enables you to upload a VHD up to 32 TiB in size directly into a managed disk. Currently, direct upload is supported for Ultra Disks, Premium SSD v2, Premium SSD, Standard SSD, and Standard HDD.
If you're providing a backup solution for IaaS VMs in Azure, you should use direct upload to restore customer backups to managed disks. When uploading a VHD from a source external to Azure, speeds depend on your local bandwidth. When uploading or copying from an Azure VM, your bandwidth would be the same as standard HDDs.
For detailed steps on assigning a role, see [Assign Azure roles using Azure Powe
There are two ways you can upload a VHD with the Azure PowerShell module: You can either use the [Add-AzVHD](/powershell/module/az.compute/add-azvhd) command, which will automate most of the process for you, or you can perform the upload manually with AzCopy.
-Generally, you should use [Add-AzVHD](#use-add-azvhd). However, if you need to upload a VHD that is larger than 50 GiB, consider [uploading the VHD manually with AzCopy](#manual-upload). VHDs 50 GiB and larger upload faster using AzCopy.
+For Premium SSDs, Standard SSDs, and Standard HDDs, you should generally use [Add-AzVHD](#use-add-azvhd). However, if you're uploading to an Ultra Disk or a Premium SSD v2, or if you need to upload a VHD that is larger than 50 GiB, you must [upload the VHD or VHDX manually with AzCopy](#manual-upload). VHDs 50 GiB and larger upload faster using AzCopy, and Add-AzVhd doesn't currently support uploading to an Ultra Disk or a Premium SSD v2.
For guidance on how to copy a managed disk from one region to another, see [Copy a managed disk](#copy-a-managed-disk).
Now, on your local shell, create an empty standard HDD for uploading by specifyi
Replace `<yourdiskname>`, `<yourresourcegroupname>`, and `<yourregion>` then run the following commands:
-> [!TIP]
+> [!IMPORTANT]
> If you're creating an OS disk, add `-HyperVGeneration '<yourGeneration>'` to `New-AzDiskConfig`.
>
> If you're using Azure AD to secure your uploads, add `-dataAccessAuthMode 'AzureActiveDirectory'` to `New-AzDiskConfig`.
>
+> When uploading to an Ultra Disk or Premium SSD v2, you need to select the correct sector size of the target disk. If you're using a VHDX file with a 4k logical sector size, the target disk must be set to 4k. If you're using a VHD file with a 512 logical sector size, the target disk must be set to 512.
+>
+> VHDX files with logical sector size of 512k aren't supported.
```powershell
$vhdSizeBytes = (Get-Item "<fullFilePathHere>").length
+## For Ultra Disks or Premium SSD v2, add -LogicalSectorSize and specify either 4096 or 512, depending on if you're using a VHDX or a VHD
+
$diskconfig = New-AzDiskConfig -SkuName 'Standard_LRS' -OsType 'Windows' -UploadSizeInBytes $vhdSizeBytes -Location '<yourregion>' -CreateOption 'Upload'
New-AzDisk -ResourceGroupName '<yourresourcegroupname>' -DiskName '<yourdiskname>' -Disk $diskconfig
```
-If you would like to upload either a premium SSD or a standard SSD, replace **Standard_LRS** with either **Premium_LRS** or **StandardSSD_LRS**. Ultra disks aren't currently supported.
+If you would like to upload a different disk type, replace **Standard_LRS** with **Premium_LRS**, **Premium_ZRS**, **StandardSSD_ZRS**, **StandardSSD_LRS**, or **UltraSSD_LRS**.
### Generate writeable SAS
$diskSas = Grant-AzDiskAccess -ResourceGroupName '<yourresourcegroupname>' -Disk
$disk = Get-AzDisk -ResourceGroupName '<yourresourcegroupname>' -DiskName '<yourdiskname>' ```
-### Upload a VHD
+### Upload a VHD or VHDX
Now that you have a SAS for your empty managed disk, you can use it to set your managed disk as the destination for your upload command.
-Use AzCopy v10 to upload your local VHD file to a managed disk by specifying the SAS URI you generated.
+Use AzCopy v10 to upload your local VHD or VHDX file to a managed disk by specifying the SAS URI you generated.
This upload has the same throughput as the equivalent [standard HDD](../disks-types.md#standard-hdds). For example, if you have a size that equates to S4, you will have a throughput of up to 60 MiB/s. But, if you have a size that equates to S70, you will have a throughput of up to 500 MiB/s.
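As a concrete sketch (the local path and SAS URI are placeholders for the values produced in the previous steps), the upload command looks like this:

```bash
# Upload the local VHD into the empty managed disk via its writeable SAS.
# --blob-type PageBlob is required when uploading to a managed disk.
azcopy copy "C:\temp\mydisk.vhd" "<SAS-URI-from-Grant-AzDiskAccess>" --blob-type PageBlob
```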
virtual-machines Quick Create Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/quick-create-portal.md
Previously updated : 08/29/2022 Last updated : 08/28/2023
Sign in to the [Azure portal](https://portal.azure.com).
1. Enter *virtual machines* in the search.
1. Under **Services**, select **Virtual machines**.
1. In the **Virtual machines** page, select **Create** and then **Azure virtual machine**. The **Create a virtual machine** page opens.
-1. Under **Instance details**, enter *myVM* for the **Virtual machine name** and choose *Windows Server 2022 Datacenter - Gen 2* for the **Image**. Leave the other defaults.
+1. Under **Instance details**, enter *myVM* for the **Virtual machine name** and choose *Windows Server 2022 Datacenter: Azure Edition - x64 Gen 2* for the **Image**. Leave the other defaults.
- :::image type="content" source="media/quick-create-portal/instance-details.png" alt-text="Screenshot of the Instance details section where you provide a name for the virtual machine and select its region, image and size.":::
+ :::image type="content" source="media/quick-create-portal/instance-details.png" alt-text="Screenshot of the Instance details section where you provide a name for the virtual machine and select its region, image and size." lightbox="media/quick-create-portal/instance-details.png":::
> [!NOTE] > Some users will now see the option to create VMs in multiple zones. To learn more about this new capability, see [Create virtual machines in an availability zone](../create-portal-availability-zone.md).
Sign in to the [Azure portal](https://portal.azure.com).
1. After validation runs, select the **Create** button at the bottom of the page.
- :::image type="content" source="media/quick-create-portal/validation.png" alt-text="Screenshot showing that validation has passed. Select the Create button to create the VM.":::
+ :::image type="content" source="media/quick-create-portal/validation.png" alt-text="Screenshot showing that validation has passed. Select the Create button to create the VM." lightbox="media/quick-create-portal/validation.png":::
1. After deployment is complete, select **Go to resource**.
virtual-machines Quick Create Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/quick-create-powershell.md
New-AzVm `
  -ResourceGroupName 'myResourceGroup' `
  -Name 'myVM' `
  -Location 'East US' `
+ -Image 'MicrosoftWindowsServer:WindowsServer:2022-datacenter-azure-edition:latest' `
  -VirtualNetworkName 'myVnet' `
  -SubnetName 'mySubnet' `
  -SecurityGroupName 'myNetworkSecurityGroup' `
virtual-machines Scheduled Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/scheduled-events.md
Scheduled events are delivered to and can be acknowledged by:
- All the VMs in a scale set placement group.

> [!NOTE]
-> Scheduled Events for all virtual machines (VMs) in a Fabric Controller (FC) tenant are delivered to all VMs in a FC tenant. FC tenant equates to a standalone VM, an entire Cloud Service, an entire Availability Set, and a Placement Group for a VM Scale Set (VMSS) regardless of Availability Zone usage.
+> Scheduled Events for all virtual machines (VMs) in a Fabric Controller (FC) tenant are delivered to all VMs in a FC tenant. FC tenant equates to a standalone VM, an entire Cloud Service, an entire Availability Set, and a Placement Group for a Virtual Machine Scale Set regardless of Availability Zone usage.
As a result, check the `Resources` field in the event to identify which VMs are affected.
Scheduled Events is enabled for your service the first time you make a request f
### User-initiated maintenance User-initiated VM maintenance via the Azure portal, API, CLI, or PowerShell results in a scheduled event. You then can test the maintenance preparation logic in your application, and your application can prepare for user-initiated maintenance.
-If you restart a VM, an event with the type `Reboot` is scheduled. If you redeploy a VM, an event with the type `Redeploy` is scheduled. Typically events with a user event source can be immediately approved to avoid a delay on user-initiated actions. We advise having a primary and secondary VM communicating and approving user generated scheduled events in case the primary VM becomes unresponsive. This will prevent delays in recovering your application back to a good state.
+If you restart a VM, an event with the type `Reboot` is scheduled. If you redeploy a VM, an event with the type `Redeploy` is scheduled. Typically events with a user event source can be immediately approved to avoid a delay on user-initiated actions. We advise having a primary and secondary VM communicating and approving user generated scheduled events in case the primary VM becomes unresponsive. Immediately approving events prevents delays in recovering your application back to a good state.
-Scheduled events are disabled by default for [VMSS Guest OS upgrades or reimages](../../virtual-machine-scale-sets/virtual-machine-scale-sets-automatic-upgrade.md). To enable scheduled events for these operations, first enable them using [OSImageNotificationProfile](https://learn.microsoft.com/rest/api/compute/virtual-machine-scale-sets/create-or-update?tabs=HTTP#osimagenotificationprofile).
+Scheduled events for [VMSS Guest OS upgrades or reimages](../../virtual-machine-scale-sets/virtual-machine-scale-sets-automatic-upgrade.md) are supported only for general purpose VM sizes that [support memory preserving updates](../maintenance-and-updates.md#maintenance-that-doesnt-require-a-reboot). They don't work for the G, M, N, and H series. Scheduled events for VMSS Guest OS upgrades and reimages are disabled by default. To enable scheduled events for these operations on supported VM sizes, first enable them using [OSImageNotificationProfile](/rest/api/compute/virtual-machine-scale-sets/create-or-update?tabs=HTTP).
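As a hedged sketch of enabling this on an existing scale set (the property path mirrors the OSImageNotificationProfile REST model linked above; the resource names and timeout value are example placeholders):

```bash
# Enable OS image scheduled event notifications on a scale set.
az vmss update -g myResourceGroup -n myScaleSet \
  --set virtualMachineProfile.scheduledEventsProfile.osImageNotificationProfile.enable=true \
        virtualMachineProfile.scheduledEventsProfile.osImageNotificationProfile.notBeforeTimeout=PT15M
```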
## Use the API
+### High level overview
+
+There are two major components to handling Scheduled Events: preparation and recovery. All current events impacting the customer will be available via the IMDS Scheduled Events endpoint. When the event has reached a terminal state, it is removed from the list of events. The following diagram shows the various state transitions that a single scheduled event can experience:
+
+![State diagram showing the various transitions a scheduled event can take.](media/scheduled-events/scheduled-events-states.png)
+
+For events in the EventStatus:"Scheduled" state, you'll need to take steps to prepare your workload. Once the preparation is complete, you should then approve the event using the scheduled event API. Otherwise, the event will be automatically approved when the NotBefore time is reached. If the VM is on shared infrastructure, the system will then wait for all other tenants on the same hardware to also approve the job or time out. Once approvals are gathered from all impacted VMs or the NotBefore time is reached, Azure generates a new scheduled event payload with EventStatus:"Started" and triggers the start of the maintenance event. When the event has reached a terminal state, it is removed from the list of events, which serves as the signal for the tenant to recover their VMs.
+
+The following pseudocode demonstrates how to read and manage scheduled events in your application:
+```
+current_list_of_scheduled_events = get_latest_from_se_endpoint()
+# Prepare for new events
+for event in current_list_of_scheduled_events:
+    if event not in previous_list_of_scheduled_events:
+        prepare_for_event(event)
+# Recover from completed events
+for event in previous_list_of_scheduled_events:
+    if event not in current_list_of_scheduled_events:
+        recover_from_event(event)
+# Prepare for future jobs
+previous_list_of_scheduled_events = current_list_of_scheduled_events
+```
+As scheduled events are often used for applications with high availability requirements, there are a few exceptional cases that should be considered:
+
+1. Once a scheduled event is completed and removed from the array, there will be no further impacts without a new event, including another EventStatus:"Scheduled" event.
+2. Azure monitors maintenance operations across the entire fleet and, in rare circumstances, determines that a maintenance operation is too high risk to apply. In that case, the scheduled event will go directly from "Scheduled" to being removed from the events array.
+3. In the case of hardware failure, Azure will bypass the "Scheduled" state and immediately move to the EventStatus:"Started" state.
+4. While the event is still in the EventStatus:"Started" state, there may be additional impacts of a shorter duration than what was advertised in the scheduled event.
+
+As part of Azure's availability guarantee, VMs in different fault domains won't be impacted by routine maintenance operations at the same time. However, they may have operations serialized one after another. VMs in one fault domain can receive scheduled events with EventStatus:"Scheduled" shortly after another fault domain's maintenance is completed. Regardless of what architecture you choose, always keep checking for new events pending against your VMs.
+
+While the exact timings of events vary, the following diagram provides a rough guideline for how a typical maintenance operation proceeds:
+
+- EventStatus:"Scheduled" to Approval Timeout: 15 minutes
+- Impact Duration: 7 seconds
+- EventStatus:"Started" to Completed (event removed from Events array): 10 minutes
+
+![Diagram of a timeline showing the flow of a scheduled event.](media/scheduled-events/scheduled-events-timeline.png)
### Headers

When you query Metadata Service, you must provide the header `Metadata:true` to ensure the request wasn't unintentionally redirected. The `Metadata:true` header is required for all scheduled events requests. Failure to include the header in the request results in a "Bad Request" response from Metadata Service.
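For example, a query from inside the VM looks like the following (the endpoint and api-version match the public Scheduled Events documentation):

```bash
# Query the Scheduled Events endpoint; the Metadata:true header is mandatory,
# and omitting it returns a "Bad Request" response.
curl -s -H "Metadata:true" \
  "http://169.254.169.254/metadata/scheduledevents?api-version=2020-07-01"
```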
Each event is scheduled a minimum amount of time in the future based on the even
| Redeploy | 10 minutes |
| Terminate | [User Configurable](../../virtual-machine-scale-sets/virtual-machine-scale-sets-terminate-notification.md#enable-terminate-notifications): 5 to 15 minutes |
-Once an event is scheduled, it will move into the `Started` state after it's been approved or the `NotBefore` time passes. However, in rare cases, the operation will be cancelled by Azure before it starts. In that case the event will be removed from the Events array, and the impact will not occur as previously scheduled.
+Once an event is scheduled, it will move into the `Started` state after it's been approved or the `NotBefore` time passes. However, in rare cases, the operation will be canceled by Azure before it starts. In that case the event will be removed from the Events array, and the impact won't occur as previously scheduled.
> [!NOTE] > In some cases, Azure is able to predict host failure due to degraded hardware and will attempt to mitigate disruption to your service by scheduling a migration. Affected virtual machines will receive a scheduled event with a `NotBefore` that is typically a few days in the future. The actual time varies depending on the predicted failure risk assessment. Azure tries to give 7 days' advance notice when possible, but the actual time varies and might be smaller if the prediction is that there's a high chance of the hardware failing imminently. To minimize risk to your service in case the hardware fails before the system-initiated migration, we recommend that you self-redeploy your virtual machine as soon as possible.
The following JSON sample is expected in the `POST` request body. The request sh
}
```
-The service will always return a 200 success code in the case of a valid event ID, even if it was already approved by a different VM. A 400 error code indicates that the request header or payload was malformed.
+The service will always return a 200 success code if it is passed a valid event ID, even if the event was already approved by a different VM. A 400 error code indicates that the request header or payload was malformed.
> [!Note]
> Events will not proceed unless they are either approved via a POST message or the NotBefore time elapses. This includes user-triggered events such as VM restarts from the Azure portal.
def confirm_scheduled_event(event_id):
> Acknowledging an event allows the event to proceed for all `Resources` in the event, not just the VM that acknowledges the event. Therefore, you can choose to elect a leader to coordinate the acknowledgement, which might be as simple as the first machine in the `Resources` field.
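For a non-Python variant, the same acknowledgement can be sketched with curl; the event ID below is a placeholder for a value read from a previous query:

```bash
# Approve a scheduled event by POSTing its EventId back to the endpoint.
curl -s -H "Metadata:true" \
  -d '{"StartRequests": [{"EventId": "<EventId-from-query>"}]}' \
  "http://169.254.169.254/metadata/scheduledevents?api-version=2020-07-01"
```

## Example responses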
-The following is an example of a series of events that were seen by two VMs that were live migrated to another node.
+The following example shows a series of events seen by two VMs that were live migrated to another node.
The `DocumentIncarnation` changes every time there is new information in `Events`. An approval of the event would allow the freeze to proceed for both WestNO_0 and WestNO_1. The `DurationInSeconds` of -1 indicates that the platform doesn't know how long the operation will take.
virtual-machines Ubuntu Pro In Place Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/canonical/ubuntu-pro-in-place-upgrade.md
description: Learn how to do an in-place upgrade from Ubuntu Server to Ubuntu Pr
-+ Last updated 08/07/2023
**Applies to:** :heavy_check_mark: Linux virtual machines
-Customers can now upgrade from Ubuntu Server (16.04 or higher) to Ubuntu Pro on your existing Azure Virtual Machines without redeployment or downtime. One of the major use cases includes conversion of Ubuntu 18.04 LTS going EOL to Ubuntu Pro. [Canonical announced that the Ubuntu 18.04 LTS (Bionic Beaver) OS images end-of-life (EOL)....](https://ubuntu.com/18-04/azure) Canonical no longer provides technical support, software updates, or security patches for this version. Customers need to upgrade to Ubuntu Pro to continue to be on Ubuntu 18.04 LTS.
-
-## What's Ubuntu Pro
-Ubuntu Pro is a cross-cloud OS, optimized for Azure, and security maintained for 10 years. The secure use of open-source software allows teams to utilize the latest technologies while meeting internal governance and compliance requirements. Ubuntu Pro 18.04 LTS, remains fully compatible with Ubuntu Server 18.04 LTS, but adds more security enabled by default, including compliance and management tools in a form suitable for small to large-scale Linux operations. Ubuntu Pro 18.04 LTS is fully supported until April 2028. Ubuntu Pro also comes with security patching for all Ubuntu packages due to Extended Security Maintenance (ESM) for Infrastructure and Applications and optional 24/7 phone and ticket support.
-
-Customers using Ubuntu Server 18.04, for example, can upgrade to Ubuntu Pro and continue to receive security patches from Canonical until 2028. Customers can upgrade to Ubuntu Pro via Azure CLI.
-
-## Why developers and devops choose Ubuntu Pro for Azure
-* Access to security updates for 23,000+ packages including Apache Kafka, NGINX, MongoDB, Redis and PostgreSQL, integrated into normal system tools (for example Azure Update Manager, apt)
-* Security hardening and audit tools (CIS) to establish a security baseline across your systems (and help you meet the Azure Linux Security Baseline policy)
+Customers can now upgrade from Ubuntu Server (16.04 or higher) to Ubuntu Pro on your existing Azure
+Virtual Machines without redeployment or downtime. One of the major use cases includes conversion of
+Ubuntu 18.04 LTS going EOL to Ubuntu Pro.
+[Canonical announced that the Ubuntu 18.04 LTS (Bionic Beaver) OS images end-of-life (EOL)](https://ubuntu.com/18-04/azure).
+Canonical no longer provides technical support, software updates, or security patches for this
+version. Customers need to upgrade to Ubuntu Pro to continue to be on Ubuntu 18.04 LTS.
+
+## What is Ubuntu Pro?
+
+Ubuntu Pro is a cross-cloud OS, optimized for Azure, and security maintained for 10 years. The
+secure use of open-source software allows teams to utilize the latest technologies while meeting
+internal governance and compliance requirements. Ubuntu Pro 18.04 LTS, remains fully compatible with
+Ubuntu Server 18.04 LTS, but adds more security enabled by default, including compliance and
+management tools in a form suitable for small to large-scale Linux operations. Ubuntu Pro 18.04 LTS
+is fully supported until April 2028. Ubuntu Pro also comes with security patching for all Ubuntu
+packages due to Extended Security Maintenance (ESM) for Infrastructure and Applications and optional
+24/7 phone and ticket support.
+
+Customers using Ubuntu Server 18.04, for example, can upgrade to Ubuntu Pro and continue to receive
+security patches from Canonical until 2028. Customers can upgrade to Ubuntu Pro via Azure CLI.
+
+## Why developers and devops choose Ubuntu Pro for Azure
+
+* Access to security updates for 23,000+ packages including Apache Kafka, NGINX, MongoDB, Redis and
+ PostgreSQL, integrated into normal system tools (for example Azure Update Manager, apt)
+* Security hardening and audit tools (CIS) to establish a security baseline across your systems (and
+ help you meet the Azure Linux Security Baseline policy)
* FIPS 140-2 certified modules
-* Common Criteria (CC) EAL2 provisioning packages
-* Kernel Live patch: kernel patches delivered immediately, without the need to reboot
-* Optimized performance: optimized kernel, with improved boot speed, outstanding runtime performance and advanced device support
-* 10-year security maintenance: Ubuntu Pro 18.04 LTS provides security maintenance until April 2028
-* Production ready: Ubuntu is the leading Linux in the public cloud with > 50% of Linux workloads
-* Developer friendly: Ubuntu is the \#1 Linux for developers offering the latest libraries and tools to innovate with the latest technologies
-* Non-stop security: Canonical publishes images frequently, ensuring security is present from the moment an instance launches
-* Portability: Ubuntu is available in all regions with content mirrors to reduce the need to go across regions or out to the Internet for updates
-* Consistent experience across platforms: from edge to multicloud, Ubuntu provides the same experience regardless of the platform. It ensures consistency of your CI/CD pipelines and management mechanisms.
-
-**This document presents the direction to upgrade from an Ubuntu Server (16.04 or higher) image to Ubuntu Pro with zero downtime for upgrade by executing the following steps in your VMs:**
-
-1. Converting to Ubuntu Pro license
-
-2. Validating the license
+* Common Criteria (CC) EAL2 provisioning packages
+* Kernel Live patch: kernel patches delivered immediately, without the need to reboot
+* Optimized performance: optimized kernel, with improved boot speed, outstanding runtime performance
+ and advanced device support
+* 10-year security maintenance: Ubuntu Pro 18.04 LTS provides security maintenance until April 2028
+* Production ready: Ubuntu is the leading Linux in the public cloud with > 50% of Linux workloads
+* Developer friendly: Ubuntu is the \#1 Linux for developers offering the latest libraries and tools
+ to innovate with the latest technologies
+* Non-stop security: Canonical publishes images frequently, ensuring security is present from the
+ moment an instance launches
+* Portability: Ubuntu is available in all regions with content mirrors to reduce the need to go
+ across regions or out to the Internet for updates
+* Consistent experience across platforms: from edge to multicloud, Ubuntu provides the same
+ experience regardless of the platform. It ensures consistency of your CI/CD pipelines and
+ management mechanisms.
+
+> [!NOTE]
+> This document presents the direction to upgrade from an Ubuntu Server (16.04 or higher) image to
+> Ubuntu Pro with zero downtime for upgrade by executing the following steps in your VMs:
+>
+> 1. Converting to Ubuntu Pro license
+> 2. Validating the license
+>
+> Converting to UBUNTU_PRO is an irreversible process. You can't even downgrade a VM by running
+> detach. Open a support ticket for any exceptions.
+
+## Convert to Ubuntu Pro using the Azure CLI
->[!NOTE]
-> Converting to UBUNTU_PRO is an irreversible process. You can't even downgrade a VM by running detach. Open a support ticket for any exceptions.
-
-## Convert to Ubuntu Pro using the Azure CLI
```azurecli-interactive
# The following will enable Ubuntu Pro on a virtual machine
-az vm update -g myResourceGroup -n myVmName --license-type UBUNTU_PRO
+az vm update -g myResourceGroup -n myVmName --license-type UBUNTU_PRO
```
-```In-VM commands
+```In-VM commands
# The next step is to execute two in-VM commands
-sudo apt install ubuntu-advantage-tools
-sudo pro auto-attach
+sudo apt install ubuntu-advantage-tools
+sudo pro auto-attach
```
-(Note that "sudo apt install ubuntu-advantage-tools" is only necessary if "pro --version" is lower than 28)
-## Validate the license
+(Note that "sudo apt install ubuntu-advantage-tools" is only necessary if "pro --version" is lower than 28)
+
+## Validate the license
Expected output:

![Screenshot of the expected output.](./expected-output.png)

## Create an Ubuntu Pro VM using the Azure CLI
+
You can also create a new VM using the Ubuntu Server images and apply Ubuntu Pro at create time. For example:

```azurecli-interactive
# The following will enable Ubuntu Pro on a virtual machine
-az vm update -g myResourceGroup -n myVmName --license-type UBUNTU_PRO
+az vm update -g myResourceGroup -n myVmName --license-type UBUNTU_PRO
```

```In-VM commands
# The next step is to execute two in-VM commands
-sudo apt install ubuntu-advantage-tools
-sudo pro auto-attach
+sudo apt install ubuntu-advantage-tools
+sudo pro auto-attach
```

> [!NOTE]
> For systems with advantage tools version 28 or higher installed, the system will perform a pro attach during a reboot.

## Check licensing model using the Azure CLI
+
You can use the az vm get-instance-view command to check the status. Look for a licenseType field in the response. If the licenseType field exists and the value is UBUNTU_PRO, your virtual machine has Ubuntu Pro enabled.

```Azure CLI
-az vm get-instance-view -g MyResourceGroup -n MyVm
+az vm get-instance-view -g MyResourceGroup -n MyVm
```

## Check the licensing model of an Ubuntu Pro enabled VM using Azure Instance Metadata Service
+
From within the virtual machine itself, you can query the attested metadata in Azure Instance Metadata Service to determine the virtual machine's licenseType value. A licenseType value of UBUNTU_PRO indicates that your virtual machine has Ubuntu Pro enabled. [Learn more about attested metadata](../../instance-metadata-service.md).
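For a quick non-attested check (a sketch; licenseType is also exposed by the IMDS compute endpoint, and the api-version shown is an assumption), you can run the following from inside the VM:

```bash
# Read licenseType from IMDS; the Metadata:true header is mandatory.
curl -s -H "Metadata:true" \
  "http://169.254.169.254/metadata/instance/compute/licenseType?api-version=2021-02-01&format=text"
```

## Billing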
-You are charged for Ubuntu Pro as part of the Preview. Visit the [pricing calculator](https://azure.microsoft.com/pricing/calculator/) for more details on Ubuntu Pro pricing. To cancel the Pro subscription during the preview period, open a support ticket through the Azure portal.
+
+You are charged for Ubuntu Pro as part of the Preview. Visit the
+[pricing calculator](https://azure.microsoft.com/pricing/calculator/) for more details on Ubuntu Pro
+pricing. To cancel the Pro subscription during the preview period, open a support ticket through the
+Azure portal.
## Frequently Asked Questions
-#### I launched an Ubuntu Pro VM. Do I need to configure it or enable something else?
-With the availability of outbound internet access, Ubuntu Pro automatically enables premium features such as Extended Security Maintenance for [Main and Universe repositories](https://help.ubuntu.com/community/Repositories) and [live patch](https://ubuntu.com/security/livepatch/docs). Should any specific hardening be required (for example CIS), check the using 'usg' to [harden your servers](https://ubuntu.com/tutorials/comply-with-cis-or-disa-stig-on-ubuntu#1-overview) tutorial. Should you require FIPS, check enabling FIPS tutorials.
+### What are the next steps after launching an Ubuntu Pro VM?
+
+With the availability of outbound internet access, Ubuntu Pro automatically enables premium features
+such as Extended Security Maintenance for
+[Main and Universe repositories](https://help.ubuntu.com/community/Repositories) and
+[live patch](https://ubuntu.com/security/livepatch/docs). Should any specific hardening be required
+(for example CIS), check the using 'usg' to
+[harden your servers](https://ubuntu.com/tutorials/comply-with-cis-or-disa-stig-on-ubuntu#1-overview)
+tutorial. Should you require FIPS, check enabling FIPS tutorials.
-For more information about networking requirements for making sure Pro enablement process works (such as egress traffic, endpoints and ports) [check this documentation](https://canonical-ubuntu-pro-client.readthedocs-hosted.com/en/latest/references/network_requirements.html).
+For more information about networking requirements for making sure Pro enablement process works
+(such as egress traffic, endpoints and ports)
+[check this documentation](https://canonical-ubuntu-pro-client.readthedocs-hosted.com/en/latest/references/network_requirements.html).
+
+### Does shutting down the machine stop billing?
-#### If I shut down the machine, does the billing continue?
If you launch Ubuntu Pro from Azure Marketplace, you pay as you go, so if you don't have any machines running, you won't pay anything additional.
-#### Can I get volume discounts?
+### Are there volume discounts?
+ Yes. Contact your Microsoft sales representative.
-#### Are Reserved Instances available?
+### Are Reserved Instances available?
Yes.
-#### If the customer doesn't do the auto attach will they still get attached to pro on reboot?
-If the customer doesn't perform the auto attach, they still get the Pro attached upon reboot. However, this applies only if they have v28 of the Pro client.
+### If the customer doesn't do the auto attach, will they still get attached to Pro on reboot?
+
+If the customer doesn't perform the auto attach, they still get the Pro attached upon reboot.
+However, this applies only if they have v28 of the Pro client.
+ * For Jammy and Focal, this process works as expected. * For Bionic and Xenial this process doesn't work due to the older versions of the Pro client installed.
virtual-machines Configure Oracle Asm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/oracle/configure-oracle-asm.md
Complete following steps to setup Oracle ASM.
3. In the **Create Disk Group** dialog box:
   1. Enter the disk group name **FRA**.
- 2. Under **Select Member Disks**, select **/dev/oracleasm/disks/VOL2**
- 3. Under **Allocation Unit Size**, select **4**.
- 4. Click **ok** to create the disk group.
- 5. Click **ok** to close the confirmation window.
+ 2. For the **Redundancy** option, select **External (None)**.
+ 3. Under **Select Member Disks**, select **/dev/oracleasm/disks/VOL2**
+ 4. Under **Allocation Unit Size**, select **4**.
+ 5. Click **ok** to create the disk group.
+ 6. Click **ok** to close the confirmation window.
:::image type="content" source="./media/oracle-asm/asm-config-assistant-02.png" alt-text="Screenshot of the Create Disk Group dialog box.":::
virtual-machines Oracle Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/oracle/oracle-migration.md
Last updated 06/03/2023
# Migrate Oracle workload to Azure VMs (IaaS)
-This article describes how to move your on-premises Oracle workload to the Azure VM infrastructure as a service (IaaS). It's based on several considerations and recommendations defined in the Azure [cloud adoption framework](https://learn.microsoft.com/azure/cloud-adoption-framework/adopt/cloud-adoption).
+This article describes how to move your on-premises Oracle workload to the Azure VM infrastructure as a service (IaaS). It's based on several considerations and recommendations defined in the Azure [cloud adoption framework](/azure/cloud-adoption-framework/adopt/cloud-adoption).
The first step in the migration journey is understanding the customer's Oracle setup, identifying the right size of Azure VMs with optimized licensing, and deploying Oracle on Azure VMs. When migrating Oracle workloads to Azure IaaS, the key is knowing how well you can prepare your VM-based architecture to deploy onto Azure following a clearly defined sequential process. Getting a complex Oracle setup onto Azure requires a detailed understanding of each migration step and of the Azure infrastructure as a service offering. This article describes each of the nine migration steps.
Take AWR reports from heavy usage time periods of the databases (such as peak ho
3. **Arrive at best Azure VM size for migration:** The output of the [AWR based workload analysis](https://techcommunity.microsoft.com/t5/data-architecture-blog/using-oracle-awr-and-infra-info-to-give-customers-complete/ba-p/3361648) indicates the required amount of memory, number of virtual cores, number, size and type of disks, and number of network interfaces. However, it's still up to the user to decide on which Azure VM type to select among the [many that Azure offers](https://azure.microsoft.com/pricing/details/virtual-machines/series/) keeping future requirements also in consideration.
-4. **Optimize Azure compute and choose deployment** **architecture:** Finalize the VM configuration that meets the requirements by optimizing compute and licenses, choose the right [deployment architecture](https://learn.microsoft.com/azure/virtual-machines/workloads/oracle/oracle-reference-architecture) (HA, Backup, etc.).
+4. **Optimize Azure compute and choose deployment architecture:** Finalize the VM configuration that meets the requirements by optimizing compute and licenses, and choose the right [deployment architecture](/azure/virtual-machines/workloads/oracle/oracle-reference-architecture) (HA, backup, and so on).
5. **Tuning parameters of Oracle on Azure:** Ensure the VM selected and the deployment architecture meet the performance requirements. Two major factors are throughput and read/write IOPS; meet the requirements by choosing the right [storage](oracle-storage.md) and [backup options](oracle-database-backup-strategies.md).
6. **Move your on-premises Oracle data to the Oracle on Azure VM:** Now that your required Oracle setup is done, the pending task is to move data from on-premises to the cloud. There are many approaches. The best approaches are:
-- Azure databox: [Copy your on-premises](https://learn.microsoft.com/training/modules/move-data-with-azure-data-box/3-how-azure-data-box-family-works) data and ship to Azure cloud securely. This suits high volume data scenarios. Data box [provides multiple options.](https://azure.microsoft.com/products/databox/data)
-- Data Factory [data pipeline to](https://learn.microsoft.com/azure/data-factory/connector-oracle?tabs=data-factory) move data from on-premises to Oracle on Azure - heavily dependent on bandwidth.
+- Azure Data Box: [Copy your on-premises](/training/modules/move-data-with-azure-data-box/3-how-azure-data-box-family-works) data and ship it to the Azure cloud securely. This approach suits high-volume data scenarios. Data Box [provides multiple options](https://azure.microsoft.com/products/databox/data).
+- Data Factory: Use a [data pipeline](/azure/data-factory/connector-oracle?tabs=data-factory) to move data from on-premises to Oracle on Azure; this approach is heavily dependent on bandwidth.
Depending on the size of your data, you can also select from the following available options.
Depending on the size of your data, you can also select from the following avail
Azure Data Box Disk is a powerful and flexible tool for businesses looking to transfer large amounts of data to Azure quickly and securely.
- Learn more [Microsoft Azure Data Box Heavy overview | Microsoft Learn](https://learn.microsoft.com/azure/databox/data-box-heavy-overview)
+ To learn more, see [Microsoft Azure Data Box Disk overview](/azure/databox/data-box-disk-overview).
- **Azure Data Box Heavy**: Azure Data Box Heavy is a powerful and flexible tool for businesses looking to transfer massive amounts of data to Azure quickly and securely.
- To learn more about data box, see [Microsoft Azure Data Box Heavy overview | Microsoft Learn](https://learn.microsoft.com/azure/databox/data-box-heavy-overview)
+ To learn more about Data Box Heavy, see [Microsoft Azure Data Box Heavy overview](/azure/databox/data-box-heavy-overview).
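For smaller datasets that don't justify shipping a physical device, a straight network copy of backup files into Azure Blob Storage is a common alternative. A minimal sketch with AzCopy, where the directory path, storage account, container, and SAS token are all hypothetical placeholders:

```azurecli-interactive
# Hypothetical example: copy an on-premises RMAN backup directory to Blob Storage.
# Replace the path, account, container, and SAS token with your own values.
azcopy copy "/u01/app/oracle/backups" \
  "https://mystorageaccount.blob.core.windows.net/oracle-backups?<SAS-token>" \
  --recursive
```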
7. **Load the data received in the cloud to the Oracle on Azure VM:**
Use the following handy tools and approaches.
- Use a change control management tool and consider checking in data changes, not just code changes, into the system.

## Next steps
-- [Storage options for Oracle on Azure VMs](oracle-storage.md)
+- [Storage options for Oracle on Azure VMs](oracle-storage.md)
virtual-machines Oracle Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/oracle/oracle-overview.md
You can also implement high availability and disaster recovery for Oracle Databa
We recommend placing the VMs in the same availability set to allow Azure to place them into separate fault domains and update domains. If you want to have geo-redundancy, set up the two databases to replicate between two different regions and connect the two instances with a VPN Gateway. To walk through the basic setup procedure on Azure, see Implement Oracle Data Guard on an Azure Linux virtual machine.
-With Oracle Data Guard, you can achieve high availability with a primary database in one VM, a secondary (standby) database in another VM, and one-way replication set up between them. The result is read access to the copy of the database. With Oracle GoldenGate, you can configure bi-directional replication between the two databases. To learn how to set up a high-availability solution for your databases using these tools, see Active Data Guard and GoldenGate. If you need read-write access to the copy of the database, you can use Oracle Active Data Guard.
+With Oracle Data Guard, you can achieve high availability with a primary database in one VM, a secondary (standby) database in another VM, and one-way replication set up between them. If you need read access to the copy of the database, you can use Oracle Active Data Guard. With Oracle GoldenGate, you can configure bi-directional replication between the two databases. To learn how to set up a high-availability solution for your databases using these tools, see [Active Data Guard and GoldenGate](https://www.oracle.com/docs/tech/database/oow14-con7715-adg-gg-bestpractices.pdf).
+ To walk through the basic setup procedure on Azure, see [Implement Oracle GoldenGate on an Azure Linux VM](configure-oracle-golden-gate.md).

In addition to having a high availability and disaster recovery solution architected in Azure, you should have a backup strategy in place to restore your database.
Different [backup strategies](oracle-database-backup-strategies.md) are availabl
- Using [Azure backup](oracle-database-backup-azure-backup.md)
- Using [Oracle RMAN Streaming data](oracle-rman-streaming-backup.md) backup

## Deploy Oracle applications on Azure
-Use Terraform templates to set up Azure infrastructure and install Oracle applications. For more information, see [Terraform on Azure](/azure/developer/terraform).
+Use Terraform templates, the Azure CLI, or the Azure portal to set up Azure infrastructure and install Oracle applications. You can also use Ansible to configure the database inside the VM. For more information, see [Terraform on Azure](/azure/developer/terraform).
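As a hedged illustration of the CLI path, you can discover Oracle-published marketplace images and then provision a VM from one. The resource names, VM size, and image URN below are placeholders, not values from this article; verify the URN with the list command first:

```azurecli-interactive
# List Oracle-published marketplace images to find a current URN (output varies by region).
az vm image list --publisher Oracle --all --output table

# Hypothetical VM creation; replace the URN with one returned by the command above.
az vm create \
  --resource-group oracle-rg \
  --name oracle-vm-01 \
  --image <oracle-image-urn> \
  --size Standard_E16ds_v5 \
  --generate-ssh-keys
```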
Oracle has certified the following applications to run in Azure when connecting to an Oracle database by using the Azure with Oracle Cloud interconnect solution:
- E-Business Suite
You can deploy custom applications in Azure that connect with OCI and other Azur
According to Oracle Support, JD Edwards EnterpriseOne versions 9.2 and above are supported on any public cloud offering that meets their specific Minimum Technical Requirements (MTR). You need to create custom images that meet their MTR specifications for operating system and software application compatibility. For more information, see [Doc ID 2178595.1](https://support.oracle.com/knowledge/JD%20Edwards%20EnterpriseOne/2178595_1.html).

## Licensing

Deployment of Oracle solutions in Azure is based on a bring-your-own-license model. This model assumes that you have licenses to use Oracle software and that you have a current support agreement in place with Oracle.
-Microsoft Azure is an authorized cloud environment for running Oracle Database. The Oracle Core Factor table isn't applicable when licensing Oracle databases in the cloud. Instead, when using VMs with Hyper-Threading Technology enabled for Enterprise Edition databases, count two vCPUs as equivalent to one Oracle Processor license if hyperthreading is enabled, as stated in the policy document. The policy details can be found at [Licensing Oracle Software in the Cloud Computing Environment](https://www.oracle.com/us/corporate/pricing/cloud-licensing-070579.pdf).
+Microsoft Azure is an authorized cloud environment for running Oracle Database. The Oracle Core Factor table isn't applicable when licensing Oracle databases in the cloud. For more information, see [Oracle Processor Core Factor Table](https://www.oracle.com/us/corporate/contracts/processor-core-factor-table-070634.pdf). Instead, when using VMs with Hyper-Threading Technology enabled for Enterprise Edition databases, count two vCPUs as equivalent to one Oracle Processor license, as stated in the policy document. The policy details can be found at [Licensing Oracle Software in the Cloud Computing Environment](https://www.oracle.com/us/corporate/pricing/cloud-licensing-070579.pdf).
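As a hypothetical worked example of that counting rule: an Enterprise Edition database on a hyperthreaded VM with 16 vCPUs would require 16 / 2 = 8 Oracle Processor licenses, and a 32-vCPU VM would require 16. Verify your counts against the policy document linked above.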
Oracle databases generally require higher memory and I/O. For this reason, we recommend [Memory Optimized VMs](/azure/virtual-machines/sizes-memory) for these workloads. To optimize your workloads further, we recommend [Constrained Core vCPUs](/azure/virtual-machines/constrained-vcpu) for Oracle Database workloads that require high memory, storage, and I/O bandwidth, but not a high core count. When you migrate Oracle software and workloads from on-premises to Microsoft Azure, Oracle provides license mobility as stated in [Oracle and Microsoft Strategic Partnership FAQ](https://www.oracle.com/cloud/azure/interconnect/faq/).

## Next steps
virtual-machines Oracle Rman Streaming Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/oracle/oracle-rman-streaming-backup.md
Title: Stream database backups using Oracle Recovery Manager | Microsoft Docs
+ Title: Stream database backups using Oracle Recovery Manager
description: Streaming database backups using Oracle Recovery Manager (RMAN).
Each of these options has advantages or disadvantages in the areas of capacity,
| **Type** | **Tier** | **Docs** | **Mount protocol for VM** | **Support model** | **Prices** | **Notes** |
|--|--|--|--|--|--|--|
-| **Managed disk** | Standard HDD | [Introduction to Azure managed disks](https://learn.microsoft.com/azure/virtual-machines/managed-disks-overview) | SCSI | Microsoft | [Managed disks pricing](https://azure.microsoft.com/pricing/details/managed-disks/) | 1 |
-| **Managed disk** | Standard SSD | [Introduction to Azure managed disks](https://learn.microsoft.com/azure/virtual-machines/managed-disks-overview) | SCSI | Microsoft | [Managed Disks pricing](https://azure.microsoft.com/pricing/details/managed-disks/) | 1 |
-| **Managed disk** | Premium SSD | [Introduction to Azure managed disks](https://learn.microsoft.com/azure/virtual-machines/managed-disks-overview) | SCSI | Microsoft | [Managed Disks pricing](https://azure.microsoft.com/pricing/details/managed-disks/) | 1 |
-| **Managed disk** | Premium SSD v2 | [Introduction to Azure managed disks](https://learn.microsoft.com/azure/virtual-machines/managed-disks-overview) | SCSI | Microsoft | [Managed Disks pricing](https://azure.microsoft.com/pricing/details/managed-disks/) | 1 |
-| **Managed disk** | UltraDisk | [Introduction to Azure managed disks](https://learn.microsoft.com/azure/virtual-machines/managed-disks-overview) | SCSI | Microsoft | [Managed Disks pricing](https://azure.microsoft.com/pricing/details/managed-disks/) | 1 |
-| **Azure blob** | Block blobs | [Mount Blob Storage by using the Network File System (NFS) 3.0 protocol](https://learn.microsoft.com/azure/storage/blobs/network-file-system-protocol-support-how-to?tabs=linux) | NFS v3.0 | Microsoft | [Azure Blob Storage pricing](https://azure.microsoft.com/pricing/details/storage/blobs/) | 2 |
-| **Azure** **blobfuse** | v1 | [How to mount Azure Blob Storage as a file system with BlobFuse v1](https://learn.microsoft.com/azure/storage/blobs/storage-how-to-mount-container-linux?tabs=RHEL) | Fuse | Open source/Github | n/a | 3, 5, 6 |
-| **Azure** **blobfuse** | v2 | [What is BlobFuse? - BlobFuse2](https://learn.microsoft.com/azure/storage/blobs/blobfuse2-what-is) | Fuse | Open source/Github | n/a | 3, 5, 6 |
-| **Azure Files** | Standard | [What is Azure Files?](https://learn.microsoft.com/azure/storage/files/storage-files-introduction) | SMB/CIFS | Microsoft | [Azure Files pricing](https://azure.microsoft.com/pricing/details/storage/files/) | 4, 6 |
-| **Azure Files** | Premium | [What is Azure Files?](https://learn.microsoft.com//azure/storage/files/storage-files-introduction) | SMB/CIFS, NFS v4.1 | Microsoft | [Azure Files pricing](https://azure.microsoft.com/pricing/details/storage/files/) | 4, 7 |
+| **Managed disk** | Standard HDD | [Introduction to Azure managed disks](/azure/virtual-machines/managed-disks-overview) | SCSI | Microsoft | [Managed disks pricing](https://azure.microsoft.com/pricing/details/managed-disks/) | 1 |
+| **Managed disk** | Standard SSD | [Introduction to Azure managed disks](/azure/virtual-machines/managed-disks-overview) | SCSI | Microsoft | [Managed Disks pricing](https://azure.microsoft.com/pricing/details/managed-disks/) | 1 |
+| **Managed disk** | Premium SSD | [Introduction to Azure managed disks](/azure/virtual-machines/managed-disks-overview) | SCSI | Microsoft | [Managed Disks pricing](https://azure.microsoft.com/pricing/details/managed-disks/) | 1 |
+| **Managed disk** | Premium SSD v2 | [Introduction to Azure managed disks](/azure/virtual-machines/managed-disks-overview) | SCSI | Microsoft | [Managed Disks pricing](https://azure.microsoft.com/pricing/details/managed-disks/) | 1 |
+| **Managed disk** | UltraDisk | [Introduction to Azure managed disks](/azure/virtual-machines/managed-disks-overview) | SCSI | Microsoft | [Managed Disks pricing](https://azure.microsoft.com/pricing/details/managed-disks/) | 1 |
+| **Azure blob** | Block blobs | [Mount Blob Storage by using the Network File System (NFS) 3.0 protocol](/azure/storage/blobs/network-file-system-protocol-support-how-to?tabs=linux) | NFS v3.0 | Microsoft | [Azure Blob Storage pricing](https://azure.microsoft.com/pricing/details/storage/blobs/) | 2 |
+| **Azure** **blobfuse** | v1 | [How to mount Azure Blob Storage as a file system with BlobFuse v1](/azure/storage/blobs/storage-how-to-mount-container-linux?tabs=RHEL) | Fuse | Open source/GitHub | n/a | 3, 5, 6 |
+| **Azure** **blobfuse** | v2 | [What is BlobFuse? - BlobFuse2](/azure/storage/blobs/blobfuse2-what-is) | Fuse | Open source/GitHub | n/a | 3, 5, 6 |
+| **Azure Files** | Standard | [What is Azure Files?](/azure/storage/files/storage-files-introduction) | SMB/CIFS | Microsoft | [Azure Files pricing](https://azure.microsoft.com/pricing/details/storage/files/) | 4, 6 |
+| **Azure Files** | Premium | [What is Azure Files?](/azure/storage/files/storage-files-introduction) | SMB/CIFS, NFS v4.1 | Microsoft | [Azure Files pricing](https://azure.microsoft.com/pricing/details/storage/files/) | 4, 7 |
| **Azure NetApp Files** | Standard | [Azure NetApp Files](https://docs.netapp.com/us-en/cloud-manager-azure-netapp-files/) | SMB/CIFS, NFS v3.0, NFS v4.1 | Microsoft/NetApp | [Azure NetApp Files pricing](https://azure.microsoft.com/pricing/details/netapp/) | 4, 8, 11 |
| **Azure NetApp Files** | Premium | [Azure NetApp Files](https://docs.netapp.com/us-en/cloud-manager-azure-netapp-files/) | SMB/CIFS, NFS v3.0, NFS v4.1 | Microsoft/NetApp | [Azure NetApp Files pricing](https://azure.microsoft.com/pricing/details/netapp/) | 4, 9, 11 |
| **Azure NetApp Files** | Ultra | [Azure NetApp Files](https://docs.netapp.com/us-en/cloud-manager-azure-netapp-files/) | SMB/CIFS, NFS v3.0, NFS v4.1 | Microsoft/NetApp | [Azure NetApp Files pricing](https://azure.microsoft.com/pricing/details/netapp/) | 4, 10, 11 |
Each of these options has advantages or disadvantages in the areas of capacity,
<sup>1</sup> Restricted by device-level and cumulative VM-level I/O limits on IOPS and I/O throughput.
- Device limits are specified in the pricing documentation.
-- cumulative limits for VM sizes are specified in the documentation [Sizes for virtual machines in Azure](https://learn.microsoft.com/azure/virtual-machines/sizes)
+- cumulative limits for VM sizes are specified in the documentation [Sizes for virtual machines in Azure](/azure/virtual-machines/sizes)
<sup>2 </sup>Choose _hierarchical storage_ in 1<sup>st</sup> drop-down, then _blob only_ in the 2<sup>nd</sup> drop-down.
Each of these options has advantages or disadvantages in the areas of capacity,
## Next steps

[Storage options for Oracle on Azure VMs](oracle-storage.md)
virtual-machines Oracle Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/oracle/oracle-storage.md
In this article, you learn about the storage choices available to you for Oracle
## Azure managed disks versus shared files

The throughput and IOPS are limited by the SKU of the selected disk and the virtual machine, whichever is lower. Managed disks are less expensive and simpler to manage than shared storage; however, managed disks may offer lower IOPS and throughput than a given virtual machine allows.
-For example, while AzureΓÇÖs Ultra Disks provides 160k IOPs and 2k MB/sec throughput that would become a bottleneck when attached to a Standard_L80s_v2 virtual machine that allows reads of more than 3 million IOPs and 20k MB/sec throughput. When high IOPs are required, consider selecting an appropriate virtual machine with shared storage choices like [Azure Elastic SAN](https://learn.microsoft.com/azure/storage/elastic-san/elastic-san-introduction), [Azure NetApp Files.](https://learn.microsoft.com/azure/azure-netapp-files/performance-oracle-multiple-volumes)
+For example, Azure's Ultra Disks provide 160K IOPS and 2,000 MB/sec throughput, which would become a bottleneck when attached to a Standard_L80s_v2 virtual machine that allows reads of more than 3 million IOPS and 20,000 MB/sec throughput. When high IOPS are required, consider pairing an appropriate virtual machine with shared storage choices like [Azure Elastic SAN](/azure/storage/elastic-san/elastic-san-introduction) or [Azure NetApp Files](/azure/azure-netapp-files/performance-oracle-multiple-volumes).
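As a hypothetical illustration of the "whichever is lower" rule: a Premium SSD P80 rated at 20,000 IOPS attached to a VM whose uncached disk limit is 12,800 IOPS has an effective ceiling of min(20,000, 12,800) = 12,800 IOPS, so the disk's headroom above the VM cap is unusable.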
## Azure managed disks
-The [Azure Managed Disk](https://learn.microsoft.com/azure/virtual-machines/managed-disks-overview) are block-level storage volumes managed by Azure for use with Azure Virtual Machines (VMs). They come in several performance tiers (Ultra Disk, Premium SSD, Standard SSD, and Standard HDD), offering different performance and cost options.
+[Azure managed disks](/azure/virtual-machines/managed-disks-overview) are block-level storage volumes managed by Azure and used with Azure Virtual Machines (VMs). They come in several performance tiers (Ultra Disk, Premium SSD, Standard SSD, and Standard HDD), offering different performance and cost options.
-- **Ultra Disk**: Azure [Ultra Disks](https://learn.microsoft.com/azure/virtual-machines/disks-enable-ultra-ssd?tabs=azure-portal) are high-performing managed disks designed for I/O-intensive workloads, including Oracle databases. They deliver high throughput and low latency, offering unparalleled performance for your data applications. Can deliver 160,000 I/O operations per second (IOPS), 2000 MB/s per disk with dynamic scalability. Compatible with VM series ESv3, DSv3, FS, and M series, which are commonly used to host Oracles on Azure.
+- **Ultra Disk**: Azure [Ultra Disks](/azure/virtual-machines/disks-enable-ultra-ssd?tabs=azure-portal) are high-performing managed disks designed for I/O-intensive workloads, including Oracle databases. They deliver high throughput and low latency, offering unparalleled performance for your data applications. They can deliver 160,000 I/O operations per second (IOPS) and 2,000 MB/s per disk with dynamic scalability, and they're compatible with the ESv3, DSv3, FS, and M VM series, which are commonly used to host Oracle databases on Azure.
-- **Premium SSD**: Azure [Premium SSDs](https://learn.microsoft.com/azure/virtual-machines/premium-storage-performance) are high-performance managed disks designed for production and performance-sensitive workloads. They offer a balance between cost and performance, making them a popular choice for many business applications, including Oracle databases. Can deliver 20,000 I/O operations per second (IOPS) per disk, highly available (99.9%) and compatible with DS, Gs & FS VM series.
+- **Premium SSD**: Azure [Premium SSDs](/azure/virtual-machines/premium-storage-performance) are high-performance managed disks designed for production and performance-sensitive workloads. They offer a balance between cost and performance, making them a popular choice for many business applications, including Oracle databases. They can deliver 20,000 I/O operations per second (IOPS) per disk, are highly available (99.9%), and are compatible with the DS, GS, and FS VM series.
- **Standard SSD**: Suitable for dev/test environments and noncritical workloads.
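A minimal Azure CLI sketch of provisioning a disk in the Ultra Disk tier described above, with explicit performance targets. The resource names, region, zone, and IOPS/throughput values are illustrative assumptions, not values from this article:

```azurecli-interactive
# Create an Ultra Disk with independently configurable IOPS and throughput.
# The zone and region must support Ultra Disks; values here are illustrative only.
az disk create \
  --resource-group oracle-rg \
  --name oracle-data-disk \
  --location eastus \
  --zone 1 \
  --size-gb 1024 \
  --sku UltraSSD_LRS \
  --disk-iops-read-write 80000 \
  --disk-mbps-read-write 1200
```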
The [Azure Managed Disk](https://learn.microsoft.com/azure/virtual-machines/mana
## Azure Elastic SAN
-The [Azure Elastic SAN](https://learn.microsoft.com/azure/storage/elastic-san/elastic-san-introduction) is a cloud-native service that offers a scalable, cost-effective, high-performance, and comprehensive storage solution for a range of compute options. Gain higher resiliency and minimize downtime with rapid provisioning. Can deliver up to 64,000 IOPs & supports Volume groups.
+[Azure Elastic SAN](/azure/storage/elastic-san/elastic-san-introduction) is a cloud-native service that offers a scalable, cost-effective, high-performance, and comprehensive storage solution for a range of compute options. Gain higher resiliency and minimize downtime with rapid provisioning. It can deliver up to 64,000 IOPS and supports volume groups.
## Azure NetApp Files
It is highly recommended to use Oracle's [dNFS](/azure/azure-netapp-files/perfor
## Lightbits on Azure
-The [Lightbits](https://www.lightbitslabs.com/azure/) Cloud Data Platform provides scalable and cost-efficient high-performance storage that is easy to consume on Azure. It removes the bottlenecks associated with native storage on the public cloud, such as scalable performance and consistently low latency. Removing these bottlenecks offers rich data services and resiliency that enterprises have come to rely on. It can deliver up to 1 million IOPS/volume and up to 3 million IOPs per VM. Lightbits cluster can scale vertically and horizontally. Lightbits support different sizes of [Lsv3](https://learn.microsoft.com/azure/virtual-machines/lsv3-series) and [Lasv3](https://learn.microsoft.com/azure/virtual-machines/lasv3-series) VMs for their clusters. For options, see L32sv3/L32asv3: 7.68 TB, L48sv3/L48asv3: 11.52 TB, L64sv3/L64asv3: 15.36 TB, L80sv3/L80asv3: 19.20 TB.
+The [Lightbits](https://www.lightbitslabs.com/azure/) Cloud Data Platform provides scalable and cost-efficient high-performance storage that is easy to consume on Azure. It removes the bottlenecks associated with native storage on the public cloud, such as scalable performance and consistently low latency. Removing these bottlenecks offers rich data services and resiliency that enterprises have come to rely on. It can deliver up to 1 million IOPS per volume and up to 3 million IOPS per VM. A Lightbits cluster can scale vertically and horizontally. Lightbits supports different sizes of [Lsv3](/azure/virtual-machines/lsv3-series) and [Lasv3](/azure/virtual-machines/lasv3-series) VMs for its clusters. For options, see L32sv3/L32asv3: 7.68 TB, L48sv3/L48asv3: 11.52 TB, L64sv3/L64asv3: 15.36 TB, L80sv3/L80asv3: 19.20 TB.
## Next steps
-- [Deploy premium SSD to Azure Virtual Machine](https://learn.microsoft.com/azure/virtual-machines/disks-deploy-premium-v2?tabs=azure-cli)
-- [Deploy an Elastic SAN](https://learn.microsoft.com/azure/storage/elastic-san/elastic-san-create?tabs=azure-portal)
-- [Setup Azure NetApp Files & create NFS Volume](https://learn.microsoft.com/azure/azure-netapp-files/azure-netapp-files-quickstart-set-up-account-create-volumes?tabs=azure-portal)
+- [Deploy premium SSD to Azure Virtual Machine](/azure/virtual-machines/disks-deploy-premium-v2?tabs=azure-cli)
+- [Deploy an Elastic SAN](/azure/storage/elastic-san/elastic-san-create?tabs=azure-portal)
+- [Setup Azure NetApp Files & create NFS Volume](/azure/azure-netapp-files/azure-netapp-files-quickstart-set-up-account-create-volumes?tabs=azure-portal)
- [Create Lightbits solution on Azure VM](https://www.lightbitslabs.com/resources/lightbits-on-azure-solution-brief/)
virtual-network-manager Concept Event Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/concept-event-logs.md
Azure Virtual Network Manager uses Azure Monitor for data collection and analysi
Azure Virtual Network Manager currently provides the following log categories:
- Network group membership change
  - Track when a particular virtual network's network group membership is modified. In other words, a log is emitted when a virtual network is added to or removed from a network group. This can be used to trace network group membership changes over time and to capture a snapshot of a particular virtual network's network group membership.
+- Rule collection change
+ - Track when a particular virtual networkΓÇÖs set of applied security admin rule collections changes. A log is emitted for every rule collection deployed to a virtual network via the network group the rule collection is targeting. Any removal of a rule collection from a network group through a deployment process will also result in a log for each affected virtual network. This schema can be used to track what rule collection(s) have been deployed to a particular virtual network over time.
+ - If a virtual network is receiving security admin rule collection(s) from multiple network managers, logs will be emitted separately for each network manager for their respective rule collection changes.
+ - If a virtual network is added to or removed from a network group that already has a rule collection(s) deployed onto it, a log will be emitted for that virtual network showing the state of applied rule collection(s).
## Network group membership change attributes
This category emits one log per network group membership change. So, when a virt
| time | Datetime when the event was logged. |
| resourceId | Resource ID of the network manager. |
| location | Location of the virtual network resource. |
-| operationName | Operation that resulted in the VNet being added or removed. Always the Microsoft.Network/virtualNetworks/networkGroupMembership/write operation. |
+| operationName | Operation that resulted in the virtual network being added or removed. Always the Microsoft.Network/virtualNetworks/networkGroupMembership/write operation. |
| category | Category of this log. Always NetworkGroupMembershipChange. |
| resultType | Indicates successful or failed operation. |
| correlationId | GUID that can help relate or debug logs. |
Within the `properties` attribute are several nested attributes:
| properties attributes | Description |
|--|-|
-| Message | Basic success or failure message. |
+| Message | A static message stating if a network group membership change was successful or unsuccessful. |
| MembershipId | Default membership ID of the virtual network. |
| GroupMemberships | Collection of what network groups the virtual network belongs to. There may be multiple `NetworkGroupId` and `Sources` listed within this property since a virtual network can belong to multiple network groups simultaneously. |
| MemberResourceIds | Resource ID of the virtual network that was added to or removed from a network group. |
Within the `Sources` attribute are several nested attributes:
| PolicyAssignmentId | If the Type value is Policy, this property appears. ID of the Azure Policy assignment that associates the Azure Policy definition to the network group. |
| PolicyDefinitionId | If the Type value is Policy, this property appears. ID of the Azure Policy definition that contains the conditions for the network group's membership. |
+## Rule collection change attributes
+
+This category emits one log per security admin rule collection change per virtual network. So, when a security admin rule collection is applied to or removed from a virtual network through its network group, a log is emitted correlating to that change in rule collection for that particular virtual network. The following attributes correspond to the logs that would be sent to your storage account; Log Analytics logs will have slightly different attributes.
+
+| Attribute | Description |
+|--|-|
+| time | Datetime when the event was logged. |
+| resourceId | Resource ID of the network manager. |
+| location | Location of the virtual network resource. |
+| operationName | Operation that resulted in the virtual network being added or removed. Always the Microsoft.Network/networkManagers/securityAdminRuleCollections/write operation. |
+| category | Category of this log. Always RuleCollectionChange. |
+| resultType | Indicates successful or failed operation. |
+| correlationId | GUID that can help relate or debug logs. |
+| level | Always Info. |
+| properties | Collection of properties of the log. |
+
+Within the `properties` attribute are several nested attributes:
+
+| properties attributes | Description |
+|--|-|
+| TargetResourceIds | Resource ID of the virtual network that experienced a change in rule collection application. |
+| Message | A static message stating if a rule collection change was successful or unsuccessful. |
+| AppliedRuleCollectionIds | Collection of what security admin rule collections are applied to the virtual network at the time the log was emitted. There may be multiple rule collection IDs listed since a virtual network can belong to multiple network groups and have multiple rule collections applied simultaneously. |
+
## Accessing logs

Depending on how you consume event logs, you need to set up a Log Analytics workspace or a storage account for storing your log events.
virtual-network-manager Concept Security Admins https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/concept-security-admins.md
Here are some scenarios where security admin rules can be used:
| **Enforcing application-level security** | Security admin rules can be used to enforce application-level security by blocking traffic to or from specific applications or services. |

With Azure Virtual Network Manager, you have a centralized location to manage security admin rules. Centralization allows you to define security policies at scale and apply them to multiple virtual networks at once.
+
+> [!NOTE]
+> Currently, security admin rules do not apply to private endpoints that fall under the scope of a managed virtual network.
+
## How do security admin rules work?

Security admin rules allow or deny traffic on specific ports, protocols, and source/destination IP prefixes in a specified direction. When you define a security admin rule, you specify the following conditions:
Security admin rules allow or deny traffic on specific ports, protocols, and sou
- The protocol to be used

To enforce security policies across multiple virtual networks, you [create and deploy a security admin configuration](how-to-block-network-traffic-portal.md). This configuration contains a set of rule collections, and each rule collection contains one or more security admin rules. Once created, you associate the rule collection with the network groups requiring security admin rules. The rules are then applied to all virtual networks contained in the network groups when the configuration is deployed. A single configuration provides centralized and scalable enforcement of security policies across multiple virtual networks.
+
### Evaluation of security admin rules and network security groups (NSGs)

Security admin rules and network security groups (NSGs) can be used to enforce network security policies in Azure. However, they have different scopes and priorities.
virtual-network-manager Concept Virtual Network Flow Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/concept-virtual-network-flow-logs.md
+
+ Title: Monitoring security admin rules with Virtual Network Flow Logs
+description: This article covers using Network Watcher and Virtual Network Flow Logs to monitor traffic through security admin rules in Azure Virtual Network Manager.
++++ Last updated : 08/11/2023++
+# Monitoring Azure Virtual Network Manager with VNet flow logs (Preview)
+
+Monitoring traffic is critical to understanding how your network is performing and to troubleshooting issues. Administrators can utilize VNet flow logs (Preview) to show whether traffic is flowing through or blocked on a VNet by a [security admin rule](concept-security-admins.md). VNet flow logs (Preview) are a feature of Network Watcher.
+
+Learn more about [VNet flow logs (Preview)](../network-watcher/vnet-flow-logs-overview.md), including usage and how to enable them.
+
+> [!IMPORTANT]
+> VNet flow logs is currently in PREVIEW. This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+> [!IMPORTANT]
+> Azure Virtual Network Manager is generally available for Virtual Network Manager and hub-and-spoke connectivity configurations. Mesh connectivity configurations and security admin rules remain in public preview.
+>
+> This preview version is provided without a service-level agreement, and we don't recommend it for production workloads. Certain features might not be supported or might have constrained capabilities. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+## Enable VNet flow logs (Preview)
+
+Currently, you need to enable VNet flow logs (Preview) on each VNet you want to monitor. You can enable VNet flow logs on a VNet by using [PowerShell](../network-watcher/vnet-flow-logs-powershell.md) or the [Azure CLI](../network-watcher/vnet-flow-logs-cli.md).
+
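A minimal sketch of enabling a VNet flow log with the Azure CLI; the `--vnet` parameter is an assumption drawn from the linked preview CLI article, and the resource names are hypothetical, so verify both against that article:

```azurecli-interactive
# Assumes Network Watcher is enabled in the region and your CLI version
# supports the VNet flow logs preview parameters.
az network watcher flow-log create \
  --location eastus \
  --resource-group test-rg \
  --name vnet-flow-log \
  --vnet vnet-1 \
  --storage-account mystorageaccount
```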
+Here's an example of a flow log:
+
+```json
+{
+ "records": [
+ {
+ "time": "2022-09-14T09:00:52.5625085Z",
+ "flowLogVersion": 4,
+ "flowLogGUID": "a1b2c3d4-e5f6-g7h8-i9j0-k1l2m3n4o5p6",
+ "macAddress": "00224871C205",
+ "category": "FlowLogFlowEvent",
+ "flowLogResourceID": "/SUBSCRIPTIONS/1a2b3c4d-5e6f-7g8h-9i0j-1k2l3m4n5o6p7/RESOURCEGROUPS/NETWORKWATCHERRG/PROVIDERS/MICROSOFT.NETWORK/NETWORKWATCHERS/NETWORKWATCHER_EASTUS2EUAP/FLOWLOGS/VNETFLOWLOG",
+ "targetResourceID": "/subscriptions/1a2b3c4d-5e6f-7g8h-9i0j-1k2l3m4n5o6p7/resourceGroups/myResourceGroup/providers/Microsoft.Network/virtualNetworks/myVNet01",
+ "operationName": "FlowLogFlowEvent",
+ "flowRecords": {
+ "flows": [
+ {
+ "aclID": "9a8b7c6d-5e4f-3g2h-1i0j-9k8l7m6n5o4p3",
+ "flowGroups": [
+ {
+ "rule": "DefaultRule_AllowInternetOutBound",
+ "flowTuples": [
+ "1663146003599,10.0.0.6,52.239.184.180,23956,443,6,O,B,NX,0,0,0,0",
+ "1663146003606,10.0.0.6,52.239.184.180,23956,443,6,O,E,NX,3,767,2,1580",
+ "1663146003637,10.0.0.6,40.74.146.17,22730,443,6,O,B,NX,0,0,0,0",
+ "1663146003640,10.0.0.6,40.74.146.17,22730,443,6,O,E,NX,3,705,4,4569",
+ "1663146004251,10.0.0.6,40.74.146.17,22732,443,6,O,B,NX,0,0,0,0",
+ "1663146004251,10.0.0.6,40.74.146.17,22732,443,6,O,E,NX,3,705,4,4569",
+ "1663146004622,10.0.0.6,40.74.146.17,22734,443,6,O,B,NX,0,0,0,0",
+ "1663146004622,10.0.0.6,40.74.146.17,22734,443,6,O,E,NX,2,134,1,108",
+ "1663146017343,10.0.0.6,104.16.218.84,36776,443,6,O,B,NX,0,0,0,0",
+ "1663146022793,10.0.0.6,104.16.218.84,36776,443,6,O,E,NX,22,2217,33,32466"
+ ]
+ }
+ ]
+ },
+ {
+ "aclID": "b1c2d3e4-f5g6-h7i8-j9k0-l1m2n3o4p5q6",
+ "flowGroups": [
+ {
+ "rule": "BlockHighRiskTCPPortsFromInternet",
+ "flowTuples": [
+ "1663145998065,101.33.218.153,10.0.0.6,55188,22,6,I,D,NX,0,0,0,0",
+ "1663146005503,192.241.200.164,10.0.0.6,35276,119,6,I,D,NX,0,0,0,0"
+ ]
+ },
+ {
+ "rule": "Internet",
+ "flowTuples": [
+ "1663145989563,20.106.221.10,10.0.0.6,50557,44357,6,I,D,NX,0,0,0,0",
+ "1663145989679,20.55.117.81,10.0.0.6,62797,35945,6,I,D,NX,0,0,0,0",
+ "1663145989709,20.55.113.5,10.0.0.6,51961,65515,6,I,D,NX,0,0,0,0",
+ "1663145990049,13.65.224.51,10.0.0.6,40497,40129,6,I,D,NX,0,0,0,0",
+ "1663145990145,20.55.117.81,10.0.0.6,62797,30472,6,I,D,NX,0,0,0,0",
+ "1663145990175,20.55.113.5,10.0.0.6,51961,28184,6,I,D,NX,0,0,0,0",
+ "1663146015545,20.106.221.10,10.0.0.6,50557,31244,6,I,D,NX,0,0,0,0"
+ ]
+ }
+ ]
+ }
+ ]
+ }
+ }
+ ]
+}
+
+```
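Reading one tuple from the example above, and assuming the field order documented in the VNet flow logs overview (timestamp, source IP, destination IP, source port, destination port, protocol, direction, flow state, encryption, then packet and byte counters): `1663145998065,101.33.218.153,10.0.0.6,55188,22,6,I,D,NX,0,0,0,0` is an inbound (`I`) TCP (`6`) attempt to port 22 that was denied (`D`) and not encrypted (`NX`), which is why all four traffic counters are zero.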
++
+## Next steps
+> [!div class="nextstepaction"]
+> Learn more about [VNet Flow Logs](../network-watcher/vnet-flow-logs-overview.md) and how to use them.
+> Learn more about [Event log options for Azure Virtual Network Manager](concept-event-logs.md).
virtual-network-manager Create Virtual Network Manager Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/create-virtual-network-manager-portal.md
Title: 'Quickstart: Create a mesh network topology with Azure Virtual Network Manager using the Azure portal'
-description: Use this quickstart to learn how to create a mesh network topology with Virtual Network Manager by using the Azure portal.
+ Title: 'Quickstart: Create a mesh network topology with Azure Virtual Network Manager - Azure portal'
+description: Learn to create a mesh virtual network topology with Azure Virtual Network Manager by using the Azure portal.
Previously updated : 04/12/2023 Last updated : 08/24/2023
-# Quickstart: Create a mesh network topology with Azure Virtual Network Manager by using the Azure portal
+# Quickstart: Create a mesh network topology with Azure Virtual Network Manager - Azure portal
Get started with Azure Virtual Network Manager by using the Azure portal to manage connectivity for all your virtual networks.
virtual-network-manager Create Virtual Network Manager Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/create-virtual-network-manager-template.md
Title: 'Quickstart: Create a mesh network topology with Azure Virtual Network Manager using Azure Resource Manager template - ARM template'
-description: In this article, you create a mesh network topology with Azure Virtual Network Manager using Azure Resource Manager template, ARM template.
+ Title: 'Quickstart: Deploy a network topology with Azure Virtual Network Manager using Azure Resource Manager template - ARM template'
+description: In this article, you deploy various network topologies with Azure Virtual Network Manager using an Azure Resource Manager template (ARM template).
-# Quickstart: Create a mesh network topology with Azure Virtual Network Manager using Azure Resource Manager template -ARM template
+# Quickstart: Deploy a network topology with Azure Virtual Network Manager using Azure Resource Manager template - ARM template
Get started with Azure Virtual Network Manager by using Azure Resource Manager templates to manage connectivity for all your virtual networks.
virtual-network-manager Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/faq.md
Yes,
In Azure, VNet peering and connected groups are two methods of establishing connectivity between virtual networks (VNets). While VNet peering works by creating a 1:1 mapping between each peered VNet, connected groups use a new construct that establishes connectivity without such a mapping. In a connected group, all virtual networks are connected without individual peering relationships. For example, if VNetA, VNetB, and VNetC are part of the same connected group, connectivity is enabled between each VNet without the need for individual peering relationships.
+### Do security admin rules apply to Azure Private Endpoints?
+
+Currently, security admin rules don't apply to Azure Private Endpoints that fall under the scope of a virtual network managed by Azure Virtual Network Manager.
### How can I explicitly allow Azure SQL Managed Instance traffic before having deny rules?

Azure SQL Managed Instance has some network requirements. If your security admin rules would block these network requirements, you can use the following sample rules to allow SQL Managed Instance traffic with a higher priority than the deny rules that would otherwise block it.
virtual-network-manager How To Configure Event Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/how-to-configure-event-logs.md
In this article, you learn how to monitor Azure Virtual Network Manager for virt
Depending on how you consume event logs, you need to set up a Log Analytics workspace or a storage account for storing your log events. These serve as storage targets when configuring diagnostic settings for Azure Virtual Network Manager. Once you have configured your diagnostic settings, you can view the event logs in the Log Analytics workspace or storage account.

> [!NOTE]
-> At least one virtual network must be added or removed from a network group in order to generate logs. A log will generate for this event a couple minutes after network group membership change occurs.
+> At least one virtual network must be added or removed from a network group in order to generate logs for the Network Group Membership Change schema. A log will generate for this event a couple minutes after network group membership change occurs.
### Configure event logs with Log Analytics

Log Analytics is one option for storing event logs. In this task, you configure your Azure Virtual Network Manager instance to use a Log Analytics workspace. This task assumes you have already deployed a Log Analytics workspace. If you haven't, see [Create a Log Analytics workspace](../azure-monitor/essentials/tutorial-resource-logs.md#create-a-log-analytics-workspace).
Log analytics is one option for storing event logs. In this task, you configure
1. Navigate to the network manager you want to obtain the logs of.
1. Under **Monitoring** in the left pane, select **Diagnostic settings**.
1. Select **+ Add diagnostic setting** and enter a diagnostic setting name.
-1. Under **Logs**, select **Network Group Membership Change**.
+1. Under **Logs**, select **Network Group Membership Change** or **Rule Collection Change**.
1. Under **Destination details**, select **Send to Log Analytics** and choose your subscription and Log Analytics workspace from the dropdown menus.

   :::image type="content" source="media/how-to-configure-event-logging/log-analytics-diagnostic-settings.png" alt-text="Screenshot of Diagnostic settings page for setting up Log Analytics workspace.":::
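An equivalent sketch with the Azure CLI, using the log category names given in this article; both resource IDs are hypothetical placeholders:

```azurecli-interactive
# Send network manager event logs to a Log Analytics workspace.
# Replace both resource IDs with your own.
az monitor diagnostic-settings create \
  --name avnm-event-logs \
  --resource "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Network/networkManagers/<network-manager>" \
  --workspace "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.OperationalInsights/workspaces/<workspace>" \
  --logs '[{"category":"NetworkGroupMembershipChange","enabled":true},{"category":"RuleCollectionChange","enabled":true}]'
```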
A storage account is another option for storing event logs. In this task, you co
1. Under **Monitoring** in the left pane, select **Diagnostic settings**.
1. Select **+ Add diagnostic setting** and enter a diagnostic setting name.
1. Under **Destination details**, select **Send to storage account** and choose your subscription and storage account from the dropdown menus.
-1. Under **Logs**, select **Network Group Membership Change** and enter a retention period.
+1. Under **Logs**, select **Network Group Membership Change** or **Rule Collection Change** and enter a retention period.
:::image type="content" source="media/how-to-configure-event-logging/storage-account-diagnostic-settings.png" alt-text="Screenshot of Diagnostic settings for storage account.":::
A storage account is another option for storing event logs. In this task, you co
In this task, you access the event logs for your Azure Virtual Network Manager instance.

1. Under **Monitoring** in the left pane, select **Logs**.
-1. In the **Diagnostics** window, select **Run** or **Load to editor** under **Get recent Network Group Membership Changes**.
+1. In the **Diagnostics** window, select **Run** or **Load to editor** under **Get recent Network Group Membership Changes** or any other preloaded query available from your selected schema(s).
:::image type="content" source="media/how-to-configure-event-logging/run-query.png" alt-text="Screenshot of Run and Load to editor buttons in the diagnostics window.":::
In this task, you access the event logs for your Azure Virtual Network Manager i
- Learn about [Security admin rules](concept-security-admins.md)
- Learn how to [Use queries in Azure Monitor Log Analytics](../azure-monitor/logs/queries.md)
-- Learn how to block network traffic with a [SecurityAdmin configuration](how-to-block-network-traffic-portal.md).
+- Learn how to block network traffic with a [SecurityAdmin configuration](how-to-block-network-traffic-portal.md).
virtual-network-manager How To Define Network Group Membership Azure Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/how-to-define-network-group-membership-azure-policy.md
List of supported operators:
## Basic editor
-Assume you have the following virtual networks in your subscription. Each virtual network has an associated tag named **environment** with the respective value of *Production* or *Test*.
+Assume you have the following virtual networks in your subscription. Each virtual network has an associated tag named **environment** with the respective value of *production* or *test*.
-| **Virtual Network** | **Tag** |
-| - | - |
-| myVNet01-EastUS | Production |
-| myVNet01-WestUS | Production |
-| myVNet02-WestUS | Test |
-| myVNet03-WestUS | Test |
+| **Virtual Network** | **Tag Name** | **Tag Value** |
+| - | - | - |
+| myVNet01-EastUS | environment | production |
+| myVNet01-WestUS | environment | production |
+| myVNet02-WestUS | environment | test |
+| myVNet03-WestUS | environment | test |
-You only want to select virtual networks that contain **WestUS** in the name. To begin using the basic editor to create your conditional statement, you need to create a new network group.
+You only want to select virtual networks whose tag has a key-value pair of **environment** equal to **production**. To begin using the basic editor to create your conditional statement, you need to create a new network group.
1. Go to your Azure Virtual Network Manager instance and select **Network Groups** under **Settings**. Then select **+ Create** to create a new network group.
1. Enter a **Name** and an optional **Description** for the network group, and select **Add**.
1. Select the network group from the list and select **Create Azure Policy**.
1. Enter a **Policy name** and leave the **Scope** selections unless changes are needed.
-1. Under **Criteria**, select **Name** from the drop-down under **Parameter** and then select **Contains** from the drop-down under *Operator*.
-1. Enter **WestUS** under **Condition** and select **Preview Resources**. You should see myVNet01-WestUS, myVNet02-WestUS, and myVNet03-WestUS show up in the list.
+1. Under **Criteria**, select **Tags** from the drop-down under **Parameter** and then select **Key value pair** from the drop-down under **Operator**.
+1. Enter **environment** and **production** under **Condition** and select **Preview Resources**. You should see myVNet01-EastUS and myVNet01-WestUS show up in the list.
+
+ :::image type="content" source="media/how-to-define-network-group-membership-azure-policy/add-key-value-pair-tag.png" alt-text="Screenshot of Create Azure Policy window setting tag with key value pair.":::
+ 1. Select **Close** and **Save**.
-1. After a few minutes, select your network group and select **Group Members** under **Settings**. You should only see myVNet01-WestUS, myVNet02-WestUS, and myVNet03-WestUS show up in the list.
+1. After a few minutes, select your network group and select **Group Members** under **Settings**. You should only see myVNet01-EastUS and myVNet01-WestUS.
> [!IMPORTANT]
> The **basic editor** is only available during the creation of an Azure Policy. Once a policy is created, all edits will be done using JSON in the **Policies** section of virtual network manager or via Azure Policy.
->
-> When using the basic editor, your condition options are limited through the portal experience. For complex conditions like creating a network group for VNets based on a [customer-defined tag](#example-3-using-custom-tag-values-with-advanced-editor), you must use the advanced editor. Learn more about [Azure Policy definition structure](../governance/policy/concepts/definition-structure.md).
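For reference, a hedged sketch of creating an equivalent policy definition from the Azure CLI instead of the portal. The `Microsoft.Network.Data` mode and `addToNetworkGroup` effect are assumptions drawn from how Azure Virtual Network Manager uses Azure Policy for dynamic membership, and the definition name and network group ID are placeholders; verify all of them against the Azure Policy definition structure docs:

```azurecli-interactive
# Hypothetical policy rules matching the tag condition above (environment = production).
cat > rules.json <<'EOF'
{
  "if": {
    "allOf": [
      { "field": "type", "equals": "Microsoft.Network/virtualNetworks" },
      { "field": "tags['environment']", "equals": "production" }
    ]
  },
  "then": {
    "effect": "addToNetworkGroup",
    "details": { "networkGroupId": "<network-group-resource-id>" }
  }
}
EOF

az policy definition create \
  --name "vnets-production-to-network-group" \
  --mode "Microsoft.Network.Data" \
  --rules rules.json
```

The definition must still be assigned at the appropriate scope before it takes effect, which is what produces the `PolicyAssignmentId` seen in the event log attributes described earlier.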
## Advanced editor
virtual-network Accelerated Networking Mana Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/accelerated-networking-mana-overview.md
Several [Azure Marketplace](https://learn.microsoft.com/marketplace/azure-market
We recommend using an operating system with support for MANA to maximize performance. In instances where the operating system doesn't or can't support MANA, network connectivity is provided through the hypervisor's virtual switch. The virtual switch is also used during some infrastructure servicing events where the Virtual Function (VF) is revoked.

### Using DPDK
-Utilizing DPDK on MANA hardware requires the Linux kernel 6.2 or later or a backport of the Ethernet and InfiniBand drivers from the latest Linux kernel. It also requires specific versions of DPDK and user-space drivers.
-
-DPDK requires the following set of drivers:
-1. [Linux kernel Ethernet driver](https://github.com/torvalds/linux/tree/master/drivers/net/ethernet/microsoft/mana) (5.15 kernel and later)
-1. [Linux kernel InfiniBand driver](https://github.com/torvalds/linux/tree/master/drivers/infiniband/hw/mana) (6.2 kernel and later)
-1. [DPDK MANA poll-mode driver](https://github.com/DPDK/dpdk/tree/main/drivers/net/mana) (DPDK 22.11 and later)
-1. [Libmana user-space drivers](https://github.com/linux-rdma/rdma-core/tree/master/providers/mana) (rdma-core v44 and later)
-
-DPDK only functions on Linux VMs.
+For information about DPDK on MANA hardware, see [Microsoft Azure Network Adapter (MANA) and DPDK on Linux](setup-dpdk-mana.md).
## Evaluating performance

Differences in VM SKUs, operating systems, applications, and tuning parameters can all affect network performance on Azure. For this reason, we recommend that you benchmark and test your workloads to ensure you achieve the expected network performance.
virtual-network Create Peering Different Subscriptions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/create-peering-different-subscriptions.md
Previously updated : 12/30/2022 Last updated : 08/23/2023
In this tutorial, you learn to create a virtual network peering between virtual networks created through Resource Manager. The virtual networks exist in different subscriptions that may belong to different Azure Active Directory (Azure AD) tenants. Peering two virtual networks enables resources in different virtual networks to communicate with each other with the same bandwidth and latency as though the resources were in the same virtual network. Learn more about [Virtual network peering](virtual-network-peering-overview.md).
-Depending on whether the virtual networks are in the same, or different subscriptions the steps to create a virtual network peering are different. Steps to peer networks created with the classic deployment model are different. For more information about deployment models, see [Azure deployment model](../azure-resource-manager/management/deployment-models.md?toc=%2fazure%2fvirtual-network%2ftoc.json).
+Depending on whether the virtual networks are in the same or different subscriptions, the steps to create a virtual network peering are different. Steps to peer networks created with the classic deployment model are different. For more information about deployment models, see [Azure deployment model](../azure-resource-manager/management/deployment-models.md?toc=%2fazure%2fvirtual-network%2ftoc.json).
Learn how to create a virtual network peering in other scenarios by selecting the scenario from the following table:
This tutorial peers virtual networks in the same region. You can also peer virtu
- Each user must accept the guest user invitation from the opposite Azure Active Directory tenant.
+- Sign in to the [Azure portal](https://portal.azure.com).
# [**PowerShell**](#tab/create-peering-powershell)

- An Azure account (or accounts) with two active subscriptions. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
The following resources and account examples are used in the steps in this artic
| User account | Resource group | Subscription | Virtual network |
| -- | -- | -- | -- |
-| **UserA** | **myResourceGroupA** | **SubscriptionA** | **myVNetA** |
-| **UserB** | **myResourceGroupB** | **SubscriptionB** | **myVNetB** |
+| **user-1** | **test-rg** | **subscription-1** | **vnet-1** |
+| **user-2** | **test-rg-2** | **subscription-2** | **vnet-2** |
-## Create virtual network - myVNetA
+## Create virtual network - vnet-1
> [!NOTE]
> If you are using a single account to complete the steps, you can skip the steps for logging out of the portal and assigning another user permissions to the virtual networks.

# [**Portal**](#tab/create-peering-portal)
-1. Sign-in to the [Azure portal](https://portal.azure.com) as **UserA**.
-
-2. In the search box a the top of the portal, enter **Virtual network**. Select **Virtual networks** in the search results.
-
-3. Select **+ Create**.
+<a name="create-virtual-network"></a>
-4. In the **Basics** tab of **Create virtual network**, enter or select the following information:
-
- | Setting | Value |
- | - | -- |
- | **Project details** | |
- | Subscription | Select your **SubscriptionA**. |
- | Resource group | Select **Create new**. </br> Enter **myResourceGroupA** in **Name**. </br> Select **OK**. |
- | **Instance details** | |
- | Name | Enter **myVNetA**. |
- | Region | Select a region. |
-
-5. Select **Next: IP Addresses**.
-
-6. In **IPv4 address space**, enter **10.1.0.0/16**.
-
-7. Select **+ Add subnet**.
-
-8. Enter or select the following information:
-
- | Setting | Value |
- | - | -- |
- | Subnet name | Enter **mySubnet**. |
- | Subnet address range | Enter **10.1.0.0/24**. |
-
-9. Select **Add**.
-
-10. Select **Review + create**.
-
-11. Select **Create**.
# [**PowerShell**](#tab/create-peering-powershell)
-### Sign in to SubscriptionA
+### Sign in to subscription-1
-Use [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) to sign in to **SubscriptionA**.
+Use [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) to sign in to **subscription-1**.
```azurepowershell-interactive
Connect-AzAccount
```
-If you're using one account for both subscriptions, sign in to that account and change the subscription context to **SubscriptionA** with [Set-AzContext](/powershell/module/az.accounts/set-azcontext).
+If you're using one account for both subscriptions, sign in to that account and change the subscription context to **subscription-1** with [Set-AzContext](/powershell/module/az.accounts/set-azcontext).
```azurepowershell-interactive
-Set-AzContext -Subscription SubscriptionA
+Set-AzContext -Subscription subscription-1
```
-### Create a resource group - myResourceGroupA
+### Create a resource group - test-rg
An Azure resource group is a logical container where Azure resources are deployed and managed.
Create a resource group with [New-AzResourceGroup](/powershell/module/az.resourc
```azurepowershell-interactive
$rsg = @{
- Name = 'myResourceGroupA'
- Location = 'westus3'
+ Name = 'test-rg'
+ Location = 'eastus2'
}
New-AzResourceGroup @rsg
```

### Create the virtual network
-Create a virtual network with [New-AzVirtualNetwork](/powershell/module/az.network/new-azvirtualnetwork). This example creates a default virtual network named **myVNetA** in the **West US 3** location:
+Create a virtual network with [New-AzVirtualNetwork](/powershell/module/az.network/new-azvirtualnetwork). This example creates a virtual network named **vnet-1** in the **East US 2** location:
```azurepowershell-interactive
$vnet = @{
- Name = 'myVNetA'
- ResourceGroupName = 'myResourceGroupA'
- Location = 'westus3'
- AddressPrefix = '10.1.0.0/16'
+ Name = 'vnet-1'
+ ResourceGroupName = 'test-rg'
+ Location = 'eastus2'
+ AddressPrefix = '10.0.0.0/16'
}
$virtualNetwork = New-AzVirtualNetwork @vnet
```

### Add a subnet
-Azure deploys resources to a subnet within a virtual network, so you need to create a subnet. Create a subnet configuration named **default** with [Add-AzVirtualNetworkSubnetConfig](/powershell/module/az.network/add-azvirtualnetworksubnetconfig):
+Azure deploys resources to a subnet within a virtual network, so you need to create a subnet. Create a subnet configuration named **subnet-1** with [Add-AzVirtualNetworkSubnetConfig](/powershell/module/az.network/add-azvirtualnetworksubnetconfig):
```azurepowershell-interactive
$subnet = @{
- Name = 'default'
+ Name = 'subnet-1'
VirtualNetwork = $virtualNetwork
- AddressPrefix = '10.1.0.0/24'
+ AddressPrefix = '10.0.0.0/24'
}
$subnetConfig = Add-AzVirtualNetworkSubnetConfig @subnet
```
$virtualNetwork | Set-AzVirtualNetwork
# [**Azure CLI**](#tab/create-peering-cli)
-### Sign in to SubscriptionA
+### Sign in to subscription-1
-Use [az login](/cli/azure/reference-index#az-login) to sign in to **SubscriptionA**.
+Use [az login](/cli/azure/reference-index#az-login) to sign in to **subscription-1**.
```azurecli-interactive
az login
```
-If you're using one account for both subscriptions, sign in to that account and change the subscription context to **SubscriptionA** with [az account set](/cli/azure/account#az-account-set).
+If you're using one account for both subscriptions, sign in to that account and change the subscription context to **subscription-1** with [az account set](/cli/azure/account#az-account-set).
```azurecli-interactive
-az account set --subscription "SubscriptionA"
+az account set --subscription "subscription-1"
```
-### Create a resource group - myResourceGroupA
+### Create a resource group - test-rg
An Azure resource group is a logical container where Azure resources are deployed and managed.
Create a resource group with [az group create](/cli/azure/group#az-group-create)
```azurecli-interactive
az group create \
- --name myResourceGroupA \
- --location westus3
+ --name test-rg \
+ --location eastus2
```

### Create the virtual network
-Create a virtual network and subnet with [az network vnet create](/cli/azure/network/vnet#az-network-vnet-create). This example creates a default virtual network named **myVNetA** in the **West US 3** location.
+Create a virtual network and subnet with [az network vnet create](/cli/azure/network/vnet#az-network-vnet-create). This example creates a virtual network named **vnet-1** in the **East US 2** location.
```azurecli-interactive az network vnet create \
- --resource-group myResourceGroupA\
- --location westus3 \
- --name myVNetA \
- --address-prefixes 10.1.0.0/16 \
- --subnet-name default \
- --subnet-prefixes 10.1.0.0/24
+ --resource-group test-rg\
+ --location eastus2 \
+ --name vnet-1 \
+ --address-prefixes 10.0.0.0/16 \
+ --subnet-name subnet-1 \
+ --subnet-prefixes 10.0.0.0/24
```
-## Assign permissions for UserB
+## Assign permissions for user-2
A user account from the other subscription that you want to peer with must be granted access to the virtual network you previously created. If you're using a single account for both subscriptions, you can skip this section. # [**Portal**](#tab/create-peering-portal)
-1. Remain signed in to the portal as **UserA**.
+1. Remain signed in to the portal as **user-1**.
-2. In the search box a the top of the portal, enter **Virtual network**. Select **Virtual networks** in the search results.
+1. In the search box at the top of the portal, enter **Virtual network**. Select **Virtual networks** in the search results.
-3. Select **myVNetA**.
+1. Select **vnet-1**.
-4. Select **Access control (IAM)**.
+1. Select **Access control (IAM)**.
-5. Select **+ Add** -> **Add role assignment**.
+1. Select **+ Add** -> **Add role assignment**.
-6. In **Add role assignment** in the **Role** tab, select **Network Contributor**.
+1. In **Add role assignment** in the **Role** tab, select **Network Contributor**.
-7. Select **Next**.
+1. Select **Next**.
-8. In the **Members** tab, select **+ Select members**.
+1. In the **Members** tab, select **+ Select members**.
-9. In **Select members** in the search box, enter **UserB**.
+1. In **Select members** in the search box, enter **user-2**.
-10. Select **Select**.
+1. Select **Select**.
-11. Select **Review + assign**.
+1. Select **Review + assign**.
-12. Select **Review + assign**.
+1. Select **Review + assign**.
# [**PowerShell**](#tab/create-peering-powershell)
-Use [Get-AzVirtualNetwork](/powershell/module/az.network/get-azvirtualnetwork) to obtain the resource ID for **myVNetA**. Assign **UserB** from **SubscriptionB** to **myVNetA** with [New-AzRoleAssignment](/powershell/module/az.resources/new-azroleassignment).
+Use [Get-AzVirtualNetwork](/powershell/module/az.network/get-azvirtualnetwork) to obtain the resource ID for **vnet-1**. Assign **user-2** from **subscription-2** to **vnet-1** with [New-AzRoleAssignment](/powershell/module/az.resources/new-azroleassignment).
-Use [Get-AzADUser](/powershell/module/az.resources/get-azaduser) to obtain the object ID for **UserB**.
+Use [Get-AzADUser](/powershell/module/az.resources/get-azaduser) to obtain the object ID for **user-2**.
-**UserB** is used in this example for the user account. Replace this value with the display name for the user from **SubscriptionB** that you wish to assign permissions to **myVNetA**. You can skip this step if you're using the same account for both subscriptions.
+**user-2** is used in this example for the user account. Replace this value with the display name of the user from **subscription-2** that you want to grant permissions on **vnet-1**. You can skip this step if you're using the same account for both subscriptions.
```azurepowershell-interactive $id = @{
- Name = 'myVNetA'
- ResourceGroupName = 'myResourceGroupA'
+ Name = 'vnet-1'
+ ResourceGroupName = 'test-rg'
} $vnet = Get-AzVirtualNetwork @id
-$obj = Get-AzADUser -DisplayName 'UserB'
+$obj = Get-AzADUser -DisplayName 'user-2'
$role = @{ ObjectId = $obj.id
New-AzRoleAssignment @role
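The digest truncates the `$role` hashtable above; a minimal sketch of the complete assignment, assuming the **Network Contributor** role from the portal steps and the virtual network's resource ID as the scope, might look like this:

```azurepowershell-interactive
# Assign user-2 the Network Contributor role scoped to vnet-1.
# $vnet and $obj come from the preceding commands; RoleDefinitionName and
# Scope are assumptions based on the portal steps in this article.
$role = @{
    ObjectId           = $obj.Id
    RoleDefinitionName = 'Network Contributor'
    Scope              = $vnet.Id
}
New-AzRoleAssignment @role
```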
# [**Azure CLI**](#tab/create-peering-cli)
-Use [az network vnet show](/cli/azure/network/vnet#az-network-vnet-show) to obtain the resource ID for **myVNetA**. Assign **UserB** from **SubscriptionB** to **myVNetA** with [az role assignment create](/cli/azure/role/assignment#az-role-assignment-create).
+Use [az network vnet show](/cli/azure/network/vnet#az-network-vnet-show) to obtain the resource ID for **vnet-1**. Assign **user-2** from **subscription-2** to **vnet-1** with [az role assignment create](/cli/azure/role/assignment#az-role-assignment-create).
-Use [az ad user list](/cli/azure/ad/user#az-ad-user-list) to obtain the object ID for **UserB**.
+Use [az ad user list](/cli/azure/ad/user#az-ad-user-list) to obtain the object ID for **user-2**.
-**UserB** is used in this example for the user account. Replace this value with the display name for the user from **SubscriptionB** that you wish to assign permissions to **myVNetA**. You can skip this step if you're using the same account for both subscriptions.
+**user-2** is used in this example for the user account. Replace this value with the display name of the user from **subscription-2** that you want to grant permissions on **vnet-1**. You can skip this step if you're using the same account for both subscriptions.
```azurecli-interactive
-az ad user list --display-name UserB
+az ad user list --display-name user-2
``` ```output [ { "businessPhones": [],
- "displayName": "UserB",
+ "displayName": "user-2",
"givenName": null, "id": "16d51293-ec4b-43b1-b54b-3422c108321a", "jobTitle": null,
- "mail": "userB@fabrikam.com",
+ "mail": "user-2@fabrikam.com",
"mobilePhone": null, "officeLocation": null, "preferredLanguage": null, "surname": null,
- "userPrincipalName": "userb_fabrikam.com#EXT#@contoso.onmicrosoft.com"
+ "userPrincipalName": "user-2_fabrikam.com#EXT#@contoso.onmicrosoft.com"
} ] ```
-Make note of the object ID of **UserB** in field **id**. In this example, its **16d51293-ec4b-43b1-b54b-3422c108321a**.
+Make note of the object ID of **user-2** in the **id** field. In this example, it's **16d51293-ec4b-43b1-b54b-3422c108321a**.
```azurecli-interactive vnetid=$(az network vnet show \
- --name myVNetA \
- --resource-group myResourceGroupA \
+ --name vnet-1 \
+ --resource-group test-rg \
--query id \ --output tsv)
az role assignment create \
--scope $vnetid ```
-Replace the example guid in **`--assignee`** with the real object ID for **UserB**.
+Replace the example GUID in **`--assignee`** with the real object ID for **user-2**.
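Taken together, a complete sketch of the assignment might look like the following; the role name **Network Contributor** matches the portal steps, and the assignee GUID is the example **user-2** object ID from the output above:

```azurecli-interactive
# Assign user-2 the Network Contributor role scoped to vnet-1.
az role assignment create \
    --assignee 16d51293-ec4b-43b1-b54b-3422c108321a \
    --role "Network Contributor" \
    --scope $vnetid
```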
-## Obtain resource ID of myVNetA
+## Obtain resource ID of vnet-1
# [**Portal**](#tab/create-peering-portal)
-1. Remain signed in to the portal as **UserA**.
+1. Remain signed in to the portal as **user-1**.
-2. In the search box a the top of the portal, enter **Virtual network**. Select **Virtual networks** in the search results.
+1. In the search box at the top of the portal, enter **Virtual network**. Select **Virtual networks** in the search results.
-3. Select **myVNetA**.
+1. Select **vnet-1**.
-4. In **Settings**, select **Properties**.
+1. In **Settings**, select **Properties**.
-5. Copy the information in the **Resource ID** field and save for the later steps. The resource ID is similar to the following example: **`/subscriptions/<Subscription Id>/resourceGroups/myResourceGroupA/providers/Microsoft.Network/virtualNetworks/myVnetA`**.
+1. Copy the information in the **Resource ID** field and save it for later steps. The resource ID is similar to the following example: **`/subscriptions/<Subscription Id>/resourceGroups/test-rg/providers/Microsoft.Network/virtualNetworks/vnet-1`**.
-6. Sign out of the portal as **UserA**.
+1. Sign out of the portal as **user-1**.
# [**PowerShell**](#tab/create-peering-powershell)
-The resource ID of **myVNetA** is required to set up the peering connection from **myVNetB** to **myVNetA**. Use [Get-AzVirtualNetwork](/powershell/module/az.network/get-azvirtualnetwork) to obtain the resource ID for **myVNetA**.
+The resource ID of **vnet-1** is required to set up the peering connection from **vnet-2** to **vnet-1**. Use [Get-AzVirtualNetwork](/powershell/module/az.network/get-azvirtualnetwork) to obtain the resource ID for **vnet-1**.
```azurepowershell-interactive $id = @{
- Name = 'myVNetA'
- ResourceGroupName = 'myResourceGroupA'
+ Name = 'vnet-1'
+ ResourceGroupName = 'test-rg'
} $vnetA = Get-AzVirtualNetwork @id
$vnetA.id
# [**Azure CLI**](#tab/create-peering-cli)
-The resource ID of **myVNetA** is required to set up the peering connection from **myVNetB** to **myVNetA**. Use [az network vnet show](/cli/azure/network/vnet#az-network-vnet-show) to obtain the resource ID for **myVNetA**.
+The resource ID of **vnet-1** is required to set up the peering connection from **vnet-2** to **vnet-1**. Use [az network vnet show](/cli/azure/network/vnet#az-network-vnet-show) to obtain the resource ID for **vnet-1**.
```azurecli-interactive vnetidA=$(az network vnet show \
- --name myVNetA \
- --resource-group myResourceGroupA \
+ --name vnet-1 \
+ --resource-group test-rg \
--query id \ --output tsv)
echo $vnetidA
-## Create virtual network - myVNetB
+## Create virtual network - vnet-2
-In this section, you sign in as **UserB** and create a virtual network for the peering connection to **myVNetA**.
+In this section, you sign in as **user-2** and create a virtual network for the peering connection to **vnet-1**.
# [**Portal**](#tab/create-peering-portal)
-1. Sign in to the portal as **UserB**. If you're using one account for both subscriptions, change to **SubscriptionB** in the portal.
-
-2. In the search box a the top of the portal, enter **Virtual network**. Select **Virtual networks** in the search results.
+Repeat the steps in the [previous section](#create-virtual-network) to create a second virtual network with the following values:
-3. Select **+ Create**.
-
-4. In the **Basics** tab of **Create virtual network**, enter or select the following information:
-
- | Setting | Value |
- | - | -- |
- | **Project details** | |
- | Subscription | Select your **SubscriptionB**. |
- | Resource group | Select **Create new**. </br> Enter **myResourceGroupB** in **Name**. </br> Select **OK**. |
- | **Instance details** | |
- | Name | Enter **myVNetB**. |
- | Region | Select a region. |
-
-5. Select **Next: IP Addresses**.
-
-6. In **IPv4 address space**, enter **10.2.0.0/16**.
-
-7. Select **+ Add subnet**.
-
-8. Enter or select the following information:
-
- | Setting | Value |
- | - | -- |
- | Subnet name | Enter **mySubnet**. |
- | Subnet address range | Enter **10.2.0.0/24**. |
-
-9. Select **Add**.
-
-10. Select **Review + create**.
-
-11. Select **Create**.
+| Setting | Value |
+| | |
+| Subscription | **subscription-2** |
+| Resource group | **test-rg-2** |
+| Name | **vnet-2** |
+| Address space | **10.1.0.0/16** |
+| Subnet name | **subnet-1** |
+| Subnet address range | **10.1.0.0/24** |
# [**PowerShell**](#tab/create-peering-powershell)
-### Sign in to SubscriptionB
+### Sign in to subscription-2
-Use [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) to sign in to **SubscriptionB**.
+Use [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) to sign in to **subscription-2**.
```azurepowershell-interactive Connect-AzAccount ```
-If you're using one account for both subscriptions, sign in to that account and change the subscription context to **SubscriptionB** with [Set-AzContext](/powershell/module/az.accounts/set-azcontext).
+If you're using one account for both subscriptions, sign in to that account and change the subscription context to **subscription-2** with [Set-AzContext](/powershell/module/az.accounts/set-azcontext).
```azurepowershell-interactive
-Set-AzContext -Subscription SubscriptionB
+Set-AzContext -Subscription subscription-2
```
-### Create a resource group - myResourceGroupB
+### Create a resource group - test-rg-2
An Azure resource group is a logical container where Azure resources are deployed and managed.
Create a resource group with [New-AzResourceGroup](/powershell/module/az.resourc
```azurepowershell-interactive $rsg = @{
- Name = 'myResourceGroupB'
- Location = 'westus3'
+ Name = 'test-rg-2'
+ Location = 'eastus2'
} New-AzResourceGroup @rsg ``` ### Create the virtual network
-Create a virtual network with [New-AzVirtualNetwork](/powershell/module/az.network/new-azvirtualnetwork). This example creates a default virtual network named **myVNetB** in the **West US 3** location:
+Create a virtual network with [New-AzVirtualNetwork](/powershell/module/az.network/new-azvirtualnetwork). This example creates a virtual network named **vnet-2** in the **East US 2** location:
```azurepowershell-interactive $vnet = @{
- Name = 'myVNetB'
- ResourceGroupName = 'myResourceGroupB'
- Location = 'westus3'
- AddressPrefix = '10.2.0.0/16'
+ Name = 'vnet-2'
+ ResourceGroupName = 'test-rg-2'
+ Location = 'eastus2'
+ AddressPrefix = '10.1.0.0/16'
} $virtualNetwork = New-AzVirtualNetwork @vnet ``` ### Add a subnet
-Azure deploys resources to a subnet within a virtual network, so you need to create a subnet. Create a subnet configuration named **default** with [Add-AzVirtualNetworkSubnetConfig](/powershell/module/az.network/add-azvirtualnetworksubnetconfig):
+Azure deploys resources to a subnet within a virtual network, so you need to create a subnet. Create a subnet configuration named **subnet-1** with [Add-AzVirtualNetworkSubnetConfig](/powershell/module/az.network/add-azvirtualnetworksubnetconfig):
```azurepowershell-interactive $subnet = @{
- Name = 'default'
+ Name = 'subnet-1'
VirtualNetwork = $virtualNetwork
- AddressPrefix = '10.2.0.0/24'
+ AddressPrefix = '10.1.0.0/24'
} $subnetConfig = Add-AzVirtualNetworkSubnetConfig @subnet ```
$virtualNetwork | Set-AzVirtualNetwork
# [**Azure CLI**](#tab/create-peering-cli)
-### Sign in to SubscriptionB
+### Sign in to subscription-2
-Use [az login](/cli/azure/reference-index#az-login) to sign in to **SubscriptionA**.
+Use [az login](/cli/azure/reference-index#az-login) to sign in to **subscription-2**.
```azurecli-interactive az login ```
-If you're using one account for both subscriptions, sign in to that account and change the subscription context to **SubscriptionB** with [az account set](/cli/azure/account#az-account-set).
+If you're using one account for both subscriptions, sign in to that account and change the subscription context to **subscription-2** with [az account set](/cli/azure/account#az-account-set).
```azurecli-interactive
-az account set --subscription "SubscriptionB"
+az account set --subscription "subscription-2"
```
-### Create a resource group - myResourceGroupB
+### Create a resource group - test-rg-2
An Azure resource group is a logical container where Azure resources are deployed and managed.
Create a resource group with [az group create](/cli/azure/group#az-group-create)
```azurecli-interactive az group create \
- --name myResourceGroupB \
- --location westus3
+ --name test-rg-2 \
+ --location eastus2
``` ### Create the virtual network
-Create a virtual network and subnet with [az network vnet create](/cli/azure/network/vnet#az-network-vnet-create). This example creates a default virtual network named **myVNetB** in the **West US 3** location.
+Create a virtual network and subnet with [az network vnet create](/cli/azure/network/vnet#az-network-vnet-create). This example creates a virtual network named **vnet-2** in the **East US 2** location.
```azurecli-interactive az network vnet create \
- --resource-group myResourceGroupB\
- --location westus3 \
- --name myVNetB \
- --address-prefixes 10.2.0.0/16 \
- --subnet-name default \
- --subnet-prefixes 10.2.0.0/24
+ --resource-group test-rg-2\
+ --location eastus2 \
+ --name vnet-2 \
+ --address-prefixes 10.1.0.0/16 \
+ --subnet-name subnet-1 \
+ --subnet-prefixes 10.1.0.0/24
```
-## Assign permissions for UserA
+## Assign permissions for user-1
A user account from the other subscription that you want to peer with must be granted access to the virtual network you previously created. If you're using a single account for both subscriptions, you can skip this section. # [**Portal**](#tab/create-peering-portal)
-1. Remain signed in to the portal as **UserB**.
+1. Remain signed in to the portal as **user-2**.
-2. In the search box a the top of the portal, enter **Virtual network**. Select **Virtual networks** in the search results.
+1. In the search box at the top of the portal, enter **Virtual network**. Select **Virtual networks** in the search results.
-3. Select **myVNetB**.
+1. Select **vnet-2**.
-4. Select **Access control (IAM)**.
+1. Select **Access control (IAM)**.
-5. Select **+ Add** -> **Add role assignment**.
+1. Select **+ Add** -> **Add role assignment**.
-6. In **Add role assignment** in the **Role** tab, select **Network Contributor**.
+1. In **Add role assignment** in the **Role** tab, select **Network Contributor**.
-7. Select **Next**.
+1. Select **Next**.
-8. In the **Members** tab, select **+ Select members**.
+1. In the **Members** tab, select **+ Select members**.
-9. In **Select members** in the search box, enter **UserA**.
+1. In **Select members** in the search box, enter **user-1**.
-10. Select **Select**.
+1. Select **Select**.
-11. Select **Review + assign**.
+1. Select **Review + assign**.
-12. Select **Review + assign**.
+1. Select **Review + assign**.
# [**PowerShell**](#tab/create-peering-powershell)
-Use [Get-AzVirtualNetwork](/powershell/module/az.network/get-azvirtualnetwork) to obtain the resource ID for **myVNetA**. Assign **UserA** from **SubscriptionA** to **myVNetB** with [New-AzRoleAssignment](/powershell/module/az.resources/new-azroleassignment).
+Use [Get-AzVirtualNetwork](/powershell/module/az.network/get-azvirtualnetwork) to obtain the resource ID for **vnet-2**. Assign **user-1** from **subscription-1** to **vnet-2** with [New-AzRoleAssignment](/powershell/module/az.resources/new-azroleassignment).
-Use [Get-AzADUser](/powershell/module/az.resources/get-azaduser) to obtain the object ID for **UserA**.
+Use [Get-AzADUser](/powershell/module/az.resources/get-azaduser) to obtain the object ID for **user-1**.
-**UserA** is used in this example for the user account. Replace this value with the display name for the user from **SubscriptionA** that you wish to assign permissions to **myVNetB**. You can skip this step if you're using the same account for both subscriptions.
+**user-1** is used in this example for the user account. Replace this value with the display name of the user from **subscription-1** that you want to grant permissions on **vnet-2**. You can skip this step if you're using the same account for both subscriptions.
```azurepowershell-interactive $id = @{
- Name = 'myVNetB'
- ResourceGroupName = 'myResourceGroupB'
+ Name = 'vnet-2'
+ ResourceGroupName = 'test-rg-2'
} $vnet = Get-AzVirtualNetwork @id
-$obj = Get-AzADUser -DisplayName 'UserA'
+$obj = Get-AzADUser -DisplayName 'user-1'
$role = @{ ObjectId = $obj.id
New-AzRoleAssignment @role
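As with the earlier assignment, the `$role` hashtable is truncated in the digest; a minimal sketch, assuming the **Network Contributor** role and the virtual network's resource ID as the scope, might look like this:

```azurepowershell-interactive
# Assign user-1 the Network Contributor role scoped to vnet-2.
# $vnet and $obj come from the preceding commands; RoleDefinitionName and
# Scope are assumptions based on the portal steps in this article.
$role = @{
    ObjectId           = $obj.Id
    RoleDefinitionName = 'Network Contributor'
    Scope              = $vnet.Id
}
New-AzRoleAssignment @role
```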
# [**Azure CLI**](#tab/create-peering-cli)
-Use [az network vnet show](/cli/azure/network/vnet#az-network-vnet-show) to obtain the resource ID for **myVNetB**. Assign **UserA** from **SubscriptionA** to **myVNetB** with [az role assignment create](/cli/azure/role/assignment#az-role-assignment-create).
+Use [az network vnet show](/cli/azure/network/vnet#az-network-vnet-show) to obtain the resource ID for **vnet-2**. Assign **user-1** from **subscription-1** to **vnet-2** with [az role assignment create](/cli/azure/role/assignment#az-role-assignment-create).
-Use [az ad user list](/cli/azure/ad/user#az-ad-user-list) to obtain the object ID for **UserA**.
+Use [az ad user list](/cli/azure/ad/user#az-ad-user-list) to obtain the object ID for **user-1**.
-**UserA** is used in this example for the user account. Replace this value with the display name for the user from **SubscriptionA** that you wish to assign permissions to **myVNetB**. You can skip this step if you're using the same account for both subscriptions.
+**user-1** is used in this example for the user account. Replace this value with the display name of the user from **subscription-1** that you want to grant permissions on **vnet-2**. You can skip this step if you're using the same account for both subscriptions.
```azurecli-interactive
-az ad user list --display-name UserA
+az ad user list --display-name user-1
``` ```output [ { "businessPhones": [],
- "displayName": "UserA",
+ "displayName": "user-1",
"givenName": null, "id": "ee0645cc-e439-4ffc-b956-79577e473969", "jobTitle": null,
- "mail": "userA@contoso.com",
+ "mail": "user-1@contoso.com",
"mobilePhone": null, "officeLocation": null, "preferredLanguage": null, "surname": null,
- "userPrincipalName": "usera_contoso.com#EXT#@fabrikam.onmicrosoft.com"
+ "userPrincipalName": "user-1_contoso.com#EXT#@fabrikam.onmicrosoft.com"
} ] ```
-Make note of the object ID of **UserA** in field **id**. In this example, it's **ee0645cc-e439-4ffc-b956-79577e473969**.
+Make note of the object ID of **user-1** in the **id** field. In this example, it's **ee0645cc-e439-4ffc-b956-79577e473969**.
```azurecli-interactive vnetid=$(az network vnet show \
- --name myVNetB \
- --resource-group myResourceGroupB \
+ --name vnet-2 \
+ --resource-group test-rg-2 \
--query id \ --output tsv)
az role assignment create \
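The digest cuts this command short; a complete sketch, mirroring the earlier assignment (the **Network Contributor** role name is an assumption based on the portal steps, and the GUID is the example **user-1** object ID from the output above):

```azurecli-interactive
# Assign user-1 the Network Contributor role scoped to vnet-2.
az role assignment create \
    --assignee ee0645cc-e439-4ffc-b956-79577e473969 \
    --role "Network Contributor" \
    --scope $vnetid
```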
-## Obtain resource ID of myVNetB
+## Obtain resource ID of vnet-2
-The resource ID of **myVNetB** is required to set up the peering connection from **myVNetA** to **myVNetB**. Use the following steps to obtain the resource ID of **myVNetB**.
+The resource ID of **vnet-2** is required to set up the peering connection from **vnet-1** to **vnet-2**. Use the following steps to obtain the resource ID of **vnet-2**.
# [**Portal**](#tab/create-peering-portal)
-1. Remain signed in to the portal as **UserB**.
+1. Remain signed in to the portal as **user-2**.
-2. In the search box a the top of the portal, enter **Virtual network**. Select **Virtual networks** in the search results.
+1. In the search box at the top of the portal, enter **Virtual network**. Select **Virtual networks** in the search results.
-3. Select **myVNetB**.
+1. Select **vnet-2**.
-4. In **Settings**, select **Properties**.
+1. In **Settings**, select **Properties**.
-5. Copy the information in the **Resource ID** field and save for the later steps. The resource ID is similar to the following example: **`/subscriptions/<Subscription Id>/resourceGroups/myResourceGroupB/providers/Microsoft.Network/virtualNetworks/myVnetB`**.
+1. Copy the information in the **Resource ID** field and save it for later steps. The resource ID is similar to the following example: **`/subscriptions/<Subscription Id>/resourceGroups/test-rg-2/providers/Microsoft.Network/virtualNetworks/vnet-2`**.
-6. Sign out of the portal as **UserB**.
+1. Sign out of the portal as **user-2**.
# [**PowerShell**](#tab/create-peering-powershell)
-The resource ID of **myVNetB** is required to set up the peering connection from **myVNetA** to **myVNetB**. Use [Get-AzVirtualNetwork](/powershell/module/az.network/get-azvirtualnetwork) to obtain the resource ID for **myVNetB**.
+The resource ID of **vnet-2** is required to set up the peering connection from **vnet-1** to **vnet-2**. Use [Get-AzVirtualNetwork](/powershell/module/az.network/get-azvirtualnetwork) to obtain the resource ID for **vnet-2**.
```azurepowershell-interactive $id = @{
- Name = 'myVNetB'
- ResourceGroupName = 'myResourceGroupB'
+ Name = 'vnet-2'
+ ResourceGroupName = 'test-rg-2'
} $vnetB = Get-AzVirtualNetwork @id
$vnetB.id
# [**Azure CLI**](#tab/create-peering-cli)
-The resource ID of **myVNetB** is required to set up the peering connection from **myVNetA** to **myVNetB**. Use [az network vnet show](/cli/azure/network/vnet#az-network-vnet-show) to obtain the resource ID for **myVNetB**.
+The resource ID of **vnet-2** is required to set up the peering connection from **vnet-1** to **vnet-2**. Use [az network vnet show](/cli/azure/network/vnet#az-network-vnet-show) to obtain the resource ID for **vnet-2**.
```azurecli-interactive vnetidB=$(az network vnet show \
- --name myVNetB \
- --resource-group myResourceGroupB \
+ --name vnet-2 \
+ --resource-group test-rg-2 \
--query id \ --output tsv)
echo $vnetidB
-## Create peering connection - myVNetA to myVNetB
+## Create peering connection - vnet-1 to vnet-2
-You need the **Resource ID** for **myVNetB** from the previous steps to set up the peering connection.
+You need the **Resource ID** for **vnet-2** from the previous steps to set up the peering connection.
# [**Portal**](#tab/create-peering-portal)
-1. Sign in to the [Azure portal](https://portal.azure.com) as **UserA**. If you're using one account for both subscriptions, change to **SubscriptionA** in the portal.
+1. Sign in to the [Azure portal](https://portal.azure.com) as **user-1**. If you're using one account for both subscriptions, change to **subscription-1** in the portal.
-2. In the search box a the top of the portal, enter **Virtual network**. Select **Virtual networks** in the search results.
+1. In the search box at the top of the portal, enter **Virtual network**. Select **Virtual networks** in the search results.
-3. Select **myVNetA**.
+1. Select **vnet-1**.
-4. Select **Peerings**.
+1. Select **Peerings**.
-5. Select **+ Add**.
+1. Select **+ Add**.
-6. Enter or select the following information in **Add peering**:
+1. Enter or select the following information in **Add peering**:
| Setting | Value | | - | -- | | **This virtual network** | |
- | Peering link name | Enter **myVNetAToMyVNetB**. |
- | Traffic to remote virtual network | Leave the default of **Allow (default)**. |
- | Traffic forwarded from remote virtual network | Leave the default of **Allow (default)**. |
- | Virtual network gateway or Route Server | Leave the default of **None (default)**. |
+ | Peering link name | Enter **vnet-1-to-vnet-2**. |
+ | Allow access to remote virtual network | Leave the default of selected. |
+ | Allow traffic to remote virtual network | Select the checkbox. |
+ | Allow traffic forwarded from the remote virtual network (allow gateway transit) | Leave the default of cleared. |
+ | Use remote virtual network gateway or route server | Leave the default of cleared. |
| **Remote virtual network** | | | Peering link name | Leave blank. | | Virtual network deployment model | Select **Resource manager**. | | Select the box for **I know my resource ID**. | |
- | Resource ID | Enter or paste the **Resource ID** for **myVNetB**. |
+ | Resource ID | Enter or paste the **Resource ID** for **vnet-2**. |
-7. In the pull-down box, select the **Directory** that corresponds with **myVNetB** and **UserB**.
+1. In the pull-down box, select the **Directory** that corresponds with **vnet-2** and **user-2**.
-8. Select **Authenticate**.
+1. Select **Authenticate**.
-9. Select **Add**.
+ :::image type="content" source="./media/create-peering-different-subscriptions/vnet-1-to-vnet-2-peering.png" alt-text="Screenshot of peering from vnet-1 to vnet-2.":::
-10. Sign out of the portal as **UserA**.
+1. Select **Add**.
+
+1. Sign out of the portal as **user-1**.
# [**PowerShell**](#tab/create-peering-powershell)
-### Sign in to SubscriptionA
+### Sign in to subscription-1
-Use [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) to sign in to **SubscriptionA**.
+Use [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) to sign in to **subscription-1**.
```azurepowershell-interactive Connect-AzAccount ```
-If you're using one account for both subscriptions, sign in to that account and change the subscription context to **SubscriptionA** with [Set-AzContext](/powershell/module/az.accounts/set-azcontext).
+If you're using one account for both subscriptions, sign in to that account and change the subscription context to **subscription-1** with [Set-AzContext](/powershell/module/az.accounts/set-azcontext).
```azurepowershell-interactive
-Set-AzContext -Subscription SubscriptionA
+Set-AzContext -Subscription subscription-1
```
-### Sign in to SubscriptionB
+### Sign in to subscription-2
-Authenticate to **SubscriptionB** so that the peering can be set up.
+Authenticate to **subscription-2** so that the peering can be set up.
-Use [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) to sign in to **SubscriptionB**.
+Use [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) to sign in to **subscription-2**.
```azurepowershell-interactive Connect-AzAccount ```
-### Change to SubscriptionA (optional)
+### Change to subscription-1 (optional)
-You may have to switch back to **SubscriptionA** to continue with the actions in **SubscriptionA**.
+You may have to switch back to **subscription-1** to continue with the actions in **subscription-1**.
-Change context to **SubscriptionA**.
+Change context to **subscription-1**.
```azurepowershell-interactive
-Set-AzContext -Subscription SubscriptionA
+Set-AzContext -Subscription subscription-1
``` ### Create peering connection
-Use [Add-AzVirtualNetworkPeering](/powershell/module/az.network/add-azvirtualnetworkpeering) to create a peering connection between **myVNetA** and **myVNetB**.
+Use [Add-AzVirtualNetworkPeering](/powershell/module/az.network/add-azvirtualnetworkpeering) to create a peering connection between **vnet-1** and **vnet-2**.
```azurepowershell-interactive $netA = @{
- Name = 'myVNetA'
- ResourceGroupName = 'myResourceGroupA'
+ Name = 'vnet-1'
+ ResourceGroupName = 'test-rg'
} $vnetA = Get-AzVirtualNetwork @netA $peer = @{
- Name = 'myVNetAToMyVNetB'
+ Name = 'vnet-1-to-vnet-2'
VirtualNetwork = $vnetA
- RemoteVirtualNetworkId = '/subscriptions/<SubscriptionB-Id>/resourceGroups/myResourceGroupB/providers/Microsoft.Network/virtualNetworks/myVnetB'
+ RemoteVirtualNetworkId = '/subscriptions/<subscription-2-Id>/resourceGroups/test-rg-2/providers/Microsoft.Network/virtualNetworks/vnet-2'
} Add-AzVirtualNetworkPeering @peer ```
-Use [Get-AzVirtualNetworkPeering](/powershell/module/az.network/get-azvirtualnetworkpeering) to obtain the status of the peering connections from **myVNetA** to **myVNetB**.
+Use [Get-AzVirtualNetworkPeering](/powershell/module/az.network/get-azvirtualnetworkpeering) to obtain the status of the peering connections from **vnet-1** to **vnet-2**.
```azurepowershell-interactive $status = @{
- ResourceGroupName = 'myResourceGroupA'
- VirtualNetworkName = 'myVNetA'
+ ResourceGroupName = 'test-rg'
+ VirtualNetworkName = 'vnet-1'
} Get-AzVirtualNetworkPeering @status | Format-Table VirtualNetworkName, PeeringState ```
PS /home/azureuser> Get-AzVirtualNetworkPeering @status | Format-Table VirtualNe
VirtualNetworkName PeeringState
-myVNetA Initiated
+vnet-1 Initiated
``` # [**Azure CLI**](#tab/create-peering-cli)
-### Sign in to SubscriptionA
+### Sign in to subscription-1
-Use [az login](/cli/azure/reference-index#az-login) to sign in to **SubscriptionA**.
+Use [az login](/cli/azure/reference-index#az-login) to sign in to **subscription-1**.
```azurecli-interactive az login ```
-If you're using one account for both subscriptions, sign in to that account and change the subscription context to **SubscriptionA** with [az account set](/cli/azure/account#az-account-set).
+If you're using one account for both subscriptions, sign in to that account and change the subscription context to **subscription-1** with [az account set](/cli/azure/account#az-account-set).
```azurecli-interactive
-az account set --subscription "SubscriptionA"
+az account set --subscription "subscription-1"
```
-### Sign in to SubscriptionB
+### Sign in to subscription-2
-Authenticate to **SubscriptionB** so that the peering can be set up.
+Authenticate to **subscription-2** so that the peering can be set up.
-Use [az login](/cli/azure/reference-index#az-login) to sign in to **SubscriptionB**.
+Use [az login](/cli/azure/reference-index#az-login) to sign in to **subscription-2**.
```azurecli-interactive az login ```
-### Change to SubscriptionA (optional)
+### Change to subscription-1 (optional)
-You may have to switch back to **SubscriptionA** to continue with the actions in **SubscriptionA**.
+You may have to switch back to **subscription-1** to continue with the actions in **subscription-1**.
-Change context to **SubscriptionA**.
+Change context to **subscription-1**.
```azurecli-interactive
-az account set --subscription "SubscriptionA"
+az account set --subscription "subscription-1"
``` ### Create peering connection
-Use [az network vnet peering create](/powershell/module/az.network/add-azvirtualnetworkpeering) to create a peering connection between **myVNetA** and **myVNetB**.
+Use [az network vnet peering create](/cli/azure/network/vnet/peering#az-network-vnet-peering-create) to create a peering connection between **vnet-1** and **vnet-2**.
```azurecli-interactive az network vnet peering create \
- --name myVNetAToMyVNetB \
- --resource-group myResourceGroupA \
- --vnet-name myVNetA \
- --remote-vnet /subscriptions/<SubscriptionB-Id>/resourceGroups/myResourceGroupB/providers/Microsoft.Network/VirtualNetworks/myVNetB \
+ --name vnet-1-to-vnet-2 \
+ --resource-group test-rg \
+ --vnet-name vnet-1 \
+ --remote-vnet /subscriptions/<subscription-2-Id>/resourceGroups/test-rg-2/providers/Microsoft.Network/VirtualNetworks/vnet-2 \
--allow-vnet-access ```
-Use [az network vnet peering list](/cli/azure/network/vnet/peering#az-network-vnet-peering-list) to obtain the status of the peering connections from **myVNetA** to **myVNetB**.
+Use [az network vnet peering list](/cli/azure/network/vnet/peering#az-network-vnet-peering-list) to obtain the status of the peering connections from **vnet-1** to **vnet-2**.
```azurecli-interactive az network vnet peering list \
- --resource-group myResourceGroupA \
- --vnet-name myVNetA \
+ --resource-group test-rg \
+ --vnet-name vnet-1 \
--output table ```
-The peering connection shows in **Peerings** in a **Initiated** state. To complete the peer, a corresponding connection must be set up in **myVNetB**.
+The peering connection shows in **Peerings** in an **Initiated** state. To complete the peering, a corresponding connection must be set up in **vnet-2**.
-## Create peering connection - myVNetB to myVNetA
+## Create peering connection - vnet-2 to vnet-1
-You need the **Resource IDs** for **myVNetA** from the previous steps to set up the peering connection.
+You need the **Resource ID** for **vnet-1** from the previous steps to set up the peering connection.
# [**Portal**](#tab/create-peering-portal)
-1. Sign in to the [Azure portal](https://portal.azure.com) as **UserB**. If you're using one account for both subscriptions, change to **SubscriptionB** in the portal.
+1. Sign in to the [Azure portal](https://portal.azure.com) as **user-2**. If you're using one account for both subscriptions, change to **subscription-2** in the portal.
-2. In the search box a the top of the portal, enter **Virtual network**. Select **Virtual networks** in the search results.
+1. In the search box at the top of the portal, enter **Virtual network**. Select **Virtual networks** in the search results.
-3. Select **myVNetB**.
+1. Select **vnet-2**.
-4. Select **Peerings**.
+1. Select **Peerings**.
-5. Select **+ Add**.
+1. Select **+ Add**.
-6. Enter or select the following information in **Add peering**:
+1. Enter or select the following information in **Add peering**:
| Setting | Value | | - | -- | | **This virtual network** | |
- | Peering link name | Enter **myVNetBToMyVNetA**. |
- | Traffic to remote virtual network | Leave the default of **Allow (default)**. |
- | Traffic forwarded from remote virtual network | Leave the default of **Allow (default)**. |
- | Virtual network gateway or Route Server | Leave the default of **None (default)**. |
+ | Peering link name | Enter **vnet-2-to-vnet-1**. |
+ | Allow access to remote virtual network | Leave the default of selected. |
+ | Allow traffic to remote virtual network | Select the checkbox. |
+ | Allow traffic forwarded from the remote virtual network (allow gateway transit) | Leave the default of cleared. |
+ | Use remote virtual network gateway or route server | Leave the default of cleared. |
| **Remote virtual network** | | | Peering link name | Leave blank. | | Virtual network deployment model | Select **Resource manager**. | | Select the box for **I know my resource ID**. | |
- | Resource ID | Enter or paste the **Resource ID** for **myVNetA**. |
+ | Resource ID | Enter or paste the **Resource ID** for **vnet-1**. |
+
+1. In the pull-down box, select the **Directory** that corresponds with **vnet-1** and **user-1**.
-7. In the pull-down box, select the **Directory** that corresponds with **myVNetA** and **UserA**.
+1. Select **Authenticate**.
-8. Select **Authenticate**.
+ :::image type="content" source="./media/create-peering-different-subscriptions/vnet-2-to-vnet-1-peering.png" alt-text="Screenshot of peering from vnet-2 to vnet-1.":::
-9. Select **Add**.
+1. Select **Add**.
# [**PowerShell**](#tab/create-peering-powershell)
-### Sign in to SubscriptionB
+### Sign in to subscription-2
-Use [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) to sign in to **SubscriptionB**.
+Use [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) to sign in to **subscription-2**.
```azurepowershell-interactive Connect-AzAccount ```
-If you're using one account for both subscriptions, sign in to that account and change the subscription context to **SubscriptionB** with [Set-AzContext](/powershell/module/az.accounts/set-azcontext).
+If you're using one account for both subscriptions, sign in to that account and change the subscription context to **subscription-2** with [Set-AzContext](/powershell/module/az.accounts/set-azcontext).
```azurepowershell-interactive
-Set-AzContext -Subscription SubscriptionB
+Set-AzContext -Subscription subscription-2
```
-## Sign in to SubscriptionA
+### Sign in to subscription-1
-Authenticate to **SubscriptionA** so that the peering can be set up.
+Authenticate to **subscription-1** so that the peering can be set up.
-Use [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) to sign in to **SubscriptionA**.
+Use [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) to sign in to **subscription-1**.
```azurepowershell-interactive Connect-AzAccount ```
-### Change to SubscriptionB (optional)
+### Change to subscription-2 (optional)
-You may have to switch back to **SubscriptionB** to continue with the actions in **SubscriptionB**.
+You may have to switch back to **subscription-2** to continue with the actions in **subscription-2**.
-Change context to **SubscriptionB**.
+Change context to **subscription-2**.
```azurepowershell-interactive
-Set-AzContext -Subscription SubscriptionB
+Set-AzContext -Subscription subscription-2
``` ### Create peering connection
-Use [Add-AzVirtualNetworkPeering](/powershell/module/az.network/add-azvirtualnetworkpeering) to create a peering connection between **myVNetB** and **myVNetA**.
+Use [Add-AzVirtualNetworkPeering](/powershell/module/az.network/add-azvirtualnetworkpeering) to create a peering connection between **vnet-2** and **vnet-1**.
```azurepowershell-interactive $netB = @{
- Name = 'myVNetB'
- ResourceGroupName = 'myResourceGroupB'
+ Name = 'vnet-2'
+ ResourceGroupName = 'test-rg-2'
} $vnetB = Get-AzVirtualNetwork @netB $peer = @{
- Name = 'myVNetBToMyVNetA'
+ Name = 'vnet-2-to-vnet-1'
VirtualNetwork = $vnetB
- RemoteVirtualNetworkId = '/subscriptions/<SubscriptionA-Id>/resourceGroups/myResourceGroupA/providers/Microsoft.Network/virtualNetworks/myVNetA'
+ RemoteVirtualNetworkId = '/subscriptions/<subscription-1-Id>/resourceGroups/test-rg/providers/Microsoft.Network/virtualNetworks/vnet-1'
} Add-AzVirtualNetworkPeering @peer ```
-User [Get-AzVirtualNetworkPeering](/powershell/module/az.network/get-azvirtualnetworkpeering) to obtain the status of the peering connections from **myVNetB** to **myVNetA**.
+Use [Get-AzVirtualNetworkPeering](/powershell/module/az.network/get-azvirtualnetworkpeering) to obtain the status of the peering connections from **vnet-2** to **vnet-1**.
```azurepowershell-interactive $status = @{
- ResourceGroupName = 'myResourceGroupB'
- VirtualNetworkName = 'myVNetB'
+ ResourceGroupName = 'test-rg-2'
+ VirtualNetworkName = 'vnet-2'
} Get-AzVirtualNetworkPeering @status | Format-Table VirtualNetworkName, PeeringState ```
PS /home/azureuser> Get-AzVirtualNetworkPeering @status | Format-Table VirtualNe
VirtualNetworkName PeeringState
-myVNetB Connected
+vnet-2 Connected
``` # [**Azure CLI**](#tab/create-peering-cli)
-### Sign in to SubscriptionB
+### Sign in to subscription-2
-Use [az login](/cli/azure/reference-index#az-login) to sign in to **SubscriptionB**.
+Use [az login](/cli/azure/reference-index#az-login) to sign in to **subscription-2**.
```azurecli-interactive az login ```
-If you're using one account for both subscriptions, sign in to that account and change the subscription context to **SubscriptionB** with [az account set](/cli/azure/account#az-account-set).
+If you're using one account for both subscriptions, sign in to that account and change the subscription context to **subscription-2** with [az account set](/cli/azure/account#az-account-set).
```azurecli-interactive
-az account set --subscription "SubscriptionB"
+az account set --subscription "subscription-2"
```
-### Sign in to SubscriptionA
+### Sign in to subscription-1
-Authenticate to **SubscriptionA** so that the peering can be set up.
+Authenticate to **subscription-1** so that the peering can be set up.
-Use [az login](/cli/azure/reference-index#az-login) to sign in to **SubscriptionA**.
+Use [az login](/cli/azure/reference-index#az-login) to sign in to **subscription-1**.
```azurecli-interactive az login ```
-### Change to SubscriptionB (optional)
+### Change to subscription-2 (optional)
-You may have to switch back to **SubscriptionB** to continue with the actions in **SubscriptionB**.
+You may have to switch back to **subscription-2** to continue with the actions in **subscription-2**.
-Change context to **SubscriptionB**.
+Change context to **subscription-2**.
```azurecli-interactive
-az account set --subscription "SubscriptionB"
+az account set --subscription "subscription-2"
``` ### Create peering connection
-Use [az network vnet peering create](/powershell/module/az.network/add-azvirtualnetworkpeering) to create a peering connection between **myVNetB** and **myVNetA**.
+Use [az network vnet peering create](/cli/azure/network/vnet/peering#az-network-vnet-peering-create) to create a peering connection between **vnet-2** and **vnet-1**.
```azurecli-interactive az network vnet peering create \
- --name myVNetBToMyVNetA \
- --resource-group myResourceGroupB \
- --vnet-name myVNetB \
- --remote-vnet /subscriptions/<SubscriptionA-Id>/resourceGroups/myResourceGroupA/providers/Microsoft.Network/VirtualNetworks/myVNetA \
+ --name vnet-2-to-vnet-1 \
+ --resource-group test-rg-2 \
+ --vnet-name vnet-2 \
+ --remote-vnet /subscriptions/<subscription-1-Id>/resourceGroups/test-rg/providers/Microsoft.Network/VirtualNetworks/vnet-1 \
--allow-vnet-access ```
-Use [az network vnet peering list](/cli/azure/network/vnet/peering#az-network-vnet-peering-list) to obtain the status of the peering connections from **myVNetB** to **myVNetA**.
+Use [az network vnet peering list](/cli/azure/network/vnet/peering#az-network-vnet-peering-list) to obtain the status of the peering connections from **vnet-2** to **vnet-1**.
```azurecli-interactive az network vnet peering list \
- --resource-group myResourceGroupB \
- --vnet-name myVNetB \
+ --resource-group test-rg-2 \
+ --vnet-name vnet-2 \
--output table ```
-The peering is successfully established after you see **Connected** in the **Peering status** column for both virtual networks in the peering. Any Azure resources you create in either virtual network are now able to communicate with each other through their IP addresses. If you're using default Azure name resolution for the virtual networks, the resources in the virtual networks aren't able to resolve names across the virtual networks. If you want to resolve names across virtual networks in a peering, you must create your own DNS server or use Azure DNS.
+The peering is successfully established after you see **Connected** in the **Peering status** column for both virtual networks in the peering. Any Azure resources you create in either virtual network are now able to communicate with each other through their IP addresses. If you're using default Azure name resolution for the virtual networks, the resources in the virtual networks aren't able to resolve names across the virtual networks. If you want to resolve names across virtual networks in a peering, you must create your own DNS server or use Azure DNS.
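As a sketch of the Azure DNS option, linking a single Azure Private DNS zone to both virtual networks gives them a shared namespace. The zone name **private.contoso.com** and the link names are assumptions for illustration:

```azurecli-interactive
# Create a private DNS zone in the first subscription (zone name is an assumption).
az network private-dns zone create \
    --resource-group test-rg \
    --name private.contoso.com

# Link the zone to vnet-1 and auto-register VM records in it.
az network private-dns link vnet create \
    --resource-group test-rg \
    --zone-name private.contoso.com \
    --name link-vnet-1 \
    --virtual-network vnet-1 \
    --registration-enabled true

# Link the zone to vnet-2 in the other subscription by resource ID.
az network private-dns link vnet create \
    --resource-group test-rg \
    --zone-name private.contoso.com \
    --name link-vnet-2 \
    --virtual-network /subscriptions/<subscription-2-Id>/resourceGroups/test-rg-2/providers/Microsoft.Network/virtualNetworks/vnet-2 \
    --registration-enabled false
```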
For more information about using your own DNS for name resolution, see, [Name resolution using your own DNS server](virtual-networks-name-resolution-for-vms-and-role-instances.md#name-resolution-that-uses-your-own-dns-server).
virtual-network Deploy Container Networking Docker Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/deploy-container-networking-docker-linux.md
Previously updated : 12/22/2022 Last updated : 08/28/2023 # Deploy container networking for a stand-alone Linux Docker host
-The Azure CNI plugin enables per container/pod networking for stand-alone docker hosts and Kubernetes clusters. In this article, you'll learn how to install and configure the CNI plugin for a standalone Linux Docker host.
+The Azure CNI plugin enables per-container/pod networking for stand-alone Docker hosts and Kubernetes clusters. In this article, you learn how to install and configure the CNI plugin for a stand-alone Linux Docker host.
## Prerequisites - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-## Create virtual network
-
-A virtual network contains the virtual machine used in this article. In this section, you'll create a virtual network and subnet. You'll enable Azure Bastion during the virtual network deployment. The Azure Bastion host is used to securely connect to the virtual machine to complete the steps in this article.
-
-1. Sign in to the [Azure portal](https://portal.azure.com).
-
-2. In the search box at the top of the portal, enter **Virtual network**. Select **Virtual networks** in the search results.
-
-3. Select **+ Create**.
-
-4. Enter or select the following information in the **Basics** tab of **Create virtual network**:
-
- | Setting | Value |
- | - | -- |
- | **Project details** | |
- | Subscription | Select your subscription. |
- | Resource group | Select **Create new**. </br> Enter **myResourceGroup** in **Name**. </br> Select **OK**. |
- | **Instance details** | |
- | Name | Enter **myVNet**. |
- | Region | Select a region. |
-
-5. Select **Next: IP Addresses**.
-
-6. In **IPv4 address space**, enter **10.1.0.0/16**.
-
-7. Select **+ Add subnet**.
-
-8. Enter or select the following information:
-
- | Setting | Value |
- | - | -- |
- | Subnet name | Enter **mySubnet**. |
- | Subnet address range | Enter **10.1.0.0/24**. |
-
-9. Select **Add**.
-
-10. Select **Next: Security**.
-
-11. Select **Enable** in **BastionHost**.
-
- >[!NOTE]
- >[!INCLUDE [Pricing](../../includes/bastion-pricing.md)]
-
-12. Enter or select the following information:
-
- | Setting | Value |
- | - | -- |
- | Bastion name | Enter **myBastion**. |
- | AzureBastionSubnet address space | Enter **10.1.1.0/26**. |
- | Public IP address | Select **Create new**. </br> Enter **myBastionIP** in **Name**. </br> Select **OK**. |
-
-13. Select **Review + create**.
-
-14. Select **Create**.
It can take a few minutes for the Bastion host to deploy. You can continue with the steps while the Bastion host is deploying.
-## Create virtual machine
-
-In this section, you'll create an Ubuntu virtual machine for the stand-alone Docker host. Ubuntu is used for the example in this article. The CNI plug-in supports Windows and other Linux distributions.
-
-1. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines** in the search results.
-
-2. Select **+ Create** > **Azure virtual machine**.
-
-3. Enter or select the following information in the **Basics** tab of **Create a virtual machine**:
-
- | Setting | Value |
- | - | -- |
- | **Project details** | |
- | Subscription | Select your subscription. |
- | Resource group | Select **myResourceGroup**. |
- | **Instance details** | |
- | Virtual machine name | Enter **myVM**. |
- | Region | Select a region. |
- | Availability options | Select **No infrastructure required**. |
- | Security type | Select **Standard**. |
- | Image | Select **Ubuntu Server 20.04 LTS -x64 Gen2**. |
- | VM architecture | Leave the default of **x64**. |
- | Run with Azure Spot discount | Leave the default of unchecked. |
- | Size | Select a size. |
- | **Administrator account** | |
- | Authentication type | Select **Password**. |
- | Username | Enter a username. |
- | Password | Enter a password. |
- | Confirm password | Reenter password. |
- | **Inbound port rules** | |
- | Public inbound ports | Select **None**. |
-
-4. Select **Next: Disks**, then **Next: Networking**.
-
-5. Enter or select the following information in the **Networking** tab:
-
- | Setting | Value |
- | - | -- |
- | **Network interface** | |
- | Virtual network | Select **myVNet**. |
- | Subnet | Select **mySubnet (10.1.0.0/24)**. |
- | Public IP | Select **None**. |
-
-6. Select **Review + create**.
-
-7. Select **Create**
## Add IP configuration
-The Azure CNI plugin allocates IP addresses to containers based on a pool of IP addresses you create on the virtual network interface of the virtual machine. For every container on the host, an IP configuration must exist on the virtual network interface. If the number of containers on the server outnumber the IP configurations on the virtual network interface, the container will start but won't have an IP address.
+The Azure CNI plugin allocates IP addresses to containers based on a pool of IP addresses you create on the virtual network interface of the virtual machine. For every container on the host, an IP configuration must exist on the virtual network interface. If the number of containers on the host exceeds the number of IP configurations on the virtual network interface, the container starts but doesn't have an IP address.
-In this section, you'll add an IP configuration to the virtual network interface of the virtual machine you created previously.
+In this section, you add an IP configuration to the virtual network interface of the virtual machine you created previously.
1. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines** in the search results.
-2. Select **myVM**.
+1. Select **vm-1**.
-3. In **Settings**, select **Networking**.
+1. In **Settings**, select **Networking**.
-4. Select the name of the network interface next to **Network Interface:**. The network interface is named **myvm** with a random number. In this example, it's **myvm27**.
+1. Select the name of the network interface next to **Network Interface:**. The network interface is named **vm-1** followed by a random number.
-5. In **Settings** of the network interface, select **IP configurations**.
+1. In **Settings** of the network interface, select **IP configurations**.
-6. in **IP configurations**, select **ipconfig1** in **Name**.
+1. In **IP configurations**, select **ipconfig1** in **Name**.
-7. In the **ipconfig1** settings, change the assignment of the private IP address from **Dynamic** to **Static**.
+1. In the **ipconfig1** settings, change the assignment of the private IP address from **Dynamic** to **Static**.
-8. Select **Save**.
+1. Select **Save**.
-9. Return to **IP configurations**.
+1. Return to **IP configurations**.
-10. Select **+ Add**.
+1. Select **+ Add**.
-11. Enter or select the following information for **Add IP configuration**:
+1. Enter or select the following information for **Add IP configuration**:
| Setting | Value | | - | -- |
- | Name | Enter **ipconfig2**. |
+ | Name | Enter **ipconfig-2**. |
| **Private IP address settings** | | | Allocation | Select **Static**. |
- | IP address | Enter **10.1.0.5**. |
+ | IP address | Enter **10.0.0.5**. |
-12. Select **OK**.
+1. Select **OK**.
-13. Verify **ipconfig2** has been added as a secondary IP configuration.
+1. Verify **ipconfig-2** has been added as a secondary IP configuration.
-Repeat steps 1 through 13 to add as many configurations as containers you wish to deploy on the container host.
+Repeat the previous steps to add as many IP configurations as the number of containers you plan to deploy on the container host.
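If you prefer to script this step, a minimal Azure CLI sketch might look like the following; the resource group **test-rg** and NIC name **vm-1-nic** are placeholders for the names generated by your deployment, and the IP values match the portal steps above:

```azurecli-interactive
# Add a second static IP configuration to the VM's network interface.
# Substitute your own resource group and NIC names.
az network nic ip-config create \
    --resource-group test-rg \
    --nic-name vm-1-nic \
    --name ipconfig-2 \
    --private-ip-address 10.0.0.5
```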
## Install Docker
Sign-in to the virtual machine you created previously with the Azure Bastion hos
1. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines** in the search results.
-2. Select **myVM**.
+1. Select **vm-1**.
-3. In the **Overview** of **myVM**, select **Connect** then **Bastion**.
+1. In the **Overview** of **vm-1**, select **Connect** then **Bastion**.
-4. Enter the username and password you created when you deployed the virtual machine in the previous steps.
+1. Enter the username and password you created when you deployed the virtual machine in the previous steps.
-5. Select **Connect**.
+1. Select **Connect**.
For install instructions for Docker on an Ubuntu container host, see [Install Docker Engine on Ubuntu](https://docs.docker.com/engine/install/ubuntu/).
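The linked guide covers the official Docker repository setup. As a quick alternative sketch, assuming Ubuntu's distribution package is acceptable for your scenario, the engine can be installed directly:

```bash
# Install the Docker engine packaged by Ubuntu.
sudo apt-get update
sudo apt-get install -y docker.io

# Confirm the daemon is running.
sudo systemctl status docker --no-pager
```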
After Docker is installed on your virtual machine, continue with the steps in th
## Install CNI plugin and create a test container
-The Azure CNI plugin is maintained as a GitHub project and is available for download from the project's GitHub page. For this article, you'll use **`git`** within the virtual machine to clone the repository for the plugin and then install and configure the plugin.
+The Azure CNI plugin is maintained as a GitHub project and is available for download from the project's GitHub page. In this article, you use **`git`** within the virtual machine to clone the plugin repository, then install and configure the plugin.
For more information about the Azure CNI plugin, see [Microsoft Azure Container Networking](https://github.com/Azure/azure-container-networking). 1. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines** in the search results.
-2. Select **myVM**.
+1. Select **vm-1**.
-3. In the **Overview** of **myVM**, select **Connect** then **Bastion**.
+1. In the **Overview** of **vm-1**, select **Connect** then **Bastion**.
-4. Enter the username and password you created when you deployed the virtual machine in the previous steps.
+1. Enter the username and password you created when you deployed the virtual machine in the previous steps.
-5. Select **Connect**.
+1. Select **Connect**.
-6. The application **jq** is required for the install script for the CNI plugin, use the following example to install the application:
+1. The **jq** application is required by the CNI plugin's install script. Use the following example to install it:
```bash sudo apt-get update sudo apt-get install jq ```
-7. Next, you'll clone the repository for the CNI plugin. Use the following example to clone the repository:
+1. Next, clone the repository for the CNI plugin:
```bash git clone https://github.com/Azure/azure-container-networking.git ```
-8. Configure permissions and install the CNI plugin. The install script command requires a version number for the CNI plugin. At the time of the writing of this article, the newest version is **`v1.4.39`**. To obtain the latest version number of the plugin or previous versions, see [Releases](https://github.com/Azure/azure-container-networking/releases).
+1. Configure permissions and install the CNI plugin. The install script requires a version number for the CNI plugin. At the time this article was written, the newest version was **`v1.4.39`**. To obtain the latest version number of the plugin or previous versions, see [Releases](https://github.com/Azure/azure-container-networking/releases).
```bash cd ./azure-container-networking/scripts
For more information about the Azure CNI plugin, see [Microsoft Azure Container
chmod u+x docker-run.sh ```
-9. To start a container with the CNI plugin, you must use a special script that comes with the plugin to create and start the container. The following example will create an Alpine container with the CNI plugin script:
+1. To start a container with the CNI plugin, use the **`docker-run.sh`** script that comes with the plugin to create and start the container. The following example creates an Alpine container with the script:
```bash sudo ./docker-run.sh vnetdocker1 default alpine ```
-10. To verify that the container received the IP address you previously configured, connect to the container and view the IP:
+1. To verify that the container received the IP address you previously configured, connect to the container and view the IP:
```bash
sudo docker exec -it vnetdocker1 /bin/sh
```
-11. Use the **`ifconfig`** command in the following example to verify the IP address was assigned to the container:
+1. Use the **`ifconfig`** command in the following example to verify the IP address was assigned to the container:
```bash
ifconfig
```

:::image type="content" source="./media/deploy-container-networking-docker-linux/ifconfig-output.png" alt-text="Screenshot of ifconfig output in Bash prompt of test container.":::
-## Clean up resources
-
-If you're not going to continue to use this application, delete the virtual network and virtual machine with the following steps:
-
-1. In the search box at the top of the portal, enter **Resource group**. Select **Resource groups** in the search results.
-
-2. Select **myResourceGroup**.
-
-3. In the **Overview** of **myResourceGroup**, select **Delete resource group**.
-
-4. In **TYPE THE RESOURCE GROUP NAME:**, enter **myResourceGroup**.
-
-5. Select **Delete**.
## Next steps
virtual-network Deploy Container Networking Docker Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/deploy-container-networking-docker-windows.md
Previously updated : 12/26/2022 Last updated : 08/28/2023 # Deploy container networking for a stand-alone Windows Docker host
-The Azure CNI plugin enables per container/pod networking for stand-alone docker hosts and Kubernetes clusters. In this article, you'll learn how to install and configure the CNI plugin for a standalone Windows Docker host.
+The Azure CNI plugin enables per-container/pod networking for stand-alone Docker hosts and Kubernetes clusters. In this article, you learn how to install and configure the CNI plugin for a stand-alone Windows Docker host.
## Prerequisites

- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-## Create virtual network
-
-A virtual network contains the virtual machine used in this article. In this section, you'll create a virtual network and subnet. You'll enable Azure Bastion during the virtual network deployment. The Azure Bastion host is used to securely connect to the virtual machine to complete the steps in this article.
-
-1. Sign in to the [Azure portal](https://portal.azure.com).
-
-2. In the search box at the top of the portal, enter **Virtual network**. Select **Virtual networks** in the search results.
-
-3. Select **+ Create**.
-
-4. Enter or select the following information in the **Basics** tab of **Create virtual network**:
-
- | Setting | Value |
- | - | -- |
- | **Project details** | |
- | Subscription | Select your subscription. |
- | Resource group | Select **Create new**. </br> Enter **myResourceGroup** in **Name**. </br> Select **OK**. |
- | **Instance details** | |
- | Name | Enter **myVNet**. |
- | Region | Select a region. |
-
-5. Select **Next: IP Addresses**.
-
-6. In **IPv4 address space**, enter **10.1.0.0/16**.
-
-7. Select **+ Add subnet**.
-
-8. Enter or select the following information:
-
- | Setting | Value |
- | - | -- |
- | Subnet name | Enter **mySubnet**. |
- | Subnet address range | Enter **10.1.0.0/24**. |
-
-9. Select **Add**.
-
-10. Select **Next: Security**.
-
-11. Select **Enable** in **BastionHost**.
-
- >[!NOTE]
- >[!INCLUDE [Pricing](../../includes/bastion-pricing.md)]
-
-12. Enter or select the following information:
-
- | Setting | Value |
- | - | -- |
- | Bastion name | Enter **myBastion**. |
- | AzureBastionSubnet address space | Enter **10.1.1.0/26**. |
- | Public IP address | Select **Create new**. </br> Enter **myBastionIP** in **Name**. </br> Select **OK**. |
-
-13. Select **Review + create**.
-
-14. Select **Create**.
It can take a few minutes for the network and Bastion host to deploy. Continue with the next steps when the virtual network creation is complete; the Bastion host can finish deploying in the background.
-## Create virtual machine
-
-In this section, you'll create a Windows Server 2022 virtual machine for the stand-alone Docker host. The CNI plug-in supports Windows and Linux.
-
-1. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines** in the search results.
-
-2. Select **+ Create** > **Azure virtual machine**.
-
-3. Enter or select the following information in the **Basics** tab of **Create a virtual machine**:
-
- | Setting | Value |
- | - | -- |
- | **Project details** | |
- | Subscription | Select your subscription. |
- | Resource group | Select **myResourceGroup**. |
- | **Instance details** | |
- | Virtual machine name | Enter **myVM**. |
- | Region | Select a region. |
- | Availability options | Select **No infrastructure required**. |
- | Security type | Select **Standard**. |
- | Image | Select **Windows Server 2022 Datacenter - x64 Gen2**. |
- | VM architecture | Leave the default of **x64**. |
- | Run with Azure Spot discount | Leave the default of unchecked. |
- | Size | Select a size. |
- | **Administrator account** | |
- | Authentication type | Select **Password**. |
- | Username | Enter a username. |
- | Password | Enter a password. |
- | Confirm password | Reenter password. |
- | **Inbound port rules** | |
- | Public inbound ports | Select **None**. |
-
-4. Select **Next: Disks**, then **Next: Networking**.
-
-5. Enter or select the following information in the **Networking** tab:
-
- | Setting | Value |
- | - | -- |
- | **Network interface** | |
- | Virtual network | Select **myVNet**. |
- | Subnet | Select **mySubnet (10.1.0.0/24)**. |
- | Public IP | Select **None**. |
-
-6. Select **Review + create**.
-
-7. Select **Create**
## Add IP configuration
-The Azure CNI plugin allocates IP addresses to containers based on a pool of IP addresses you create on the virtual network interface of the virtual machine. For every container on the host, an IP configuration must exist on the virtual network interface. If the number of containers on the server outnumber the IP configurations on the virtual network interface, the container will start but won't have an IP address.
+The Azure CNI plugin allocates IP addresses to containers based on a pool of IP addresses you create on the virtual network interface of the virtual machine. For every container on the host, an IP configuration must exist on the virtual network interface. If the number of containers on the server exceeds the number of IP configurations on the virtual network interface, the container starts but doesn't have an IP address.
-In this section, you'll add an IP configuration to the virtual network interface of the virtual machine you created previously.
+In this section, you add an IP configuration to the virtual network interface of the virtual machine you created previously.
1. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines** in the search results.
-2. Select **myVM**.
+1. Select **vm-1**.
-3. In **Settings**, select **Networking**.
+1. In **Settings**, select **Networking**.
-4. Select the name of the network interface next to **Network Interface:**. The network interface is named **myvm** with a random number. In this example, it's **myvm418**.
+1. Select the name of the network interface next to **Network Interface:**. The network interface is named **vm-1** with a random number.
- :::image type="content" source="./media/deploy-container-networking-docker-windows/select-nic-portal.png" alt-text="Screenshot of the network interface in settings for the virtual machine in the Azure portal.":::
+1. In **Settings** of the network interface, select **IP configurations**.
-5. In **Settings** of the network interface, select **IP configurations**.
+1. In **IP configurations**, select **ipconfig1** in **Name**.
-6. in **IP configurations**, select **ipconfig1** in **Name**.
+1. In the **ipconfig1** settings, change the assignment of the private IP address from **Dynamic** to **Static**.
- :::image type="content" source="./media/deploy-container-networking-docker-windows/nic-ip-configuration.png" alt-text="Screenshot of IP configuration of the virtual machine network interface.":::
+1. Select **Save**.
-7. In the **ipconfig1** settings, change the assignment of the private IP address from **Dynamic** to **Static**.
+1. Return to **IP configurations**.
-8. Select **Save**.
+1. Select **+ Add**.
-9. Return to **IP configurations**.
-
-10. Select **+ Add**.
-
-11. Enter or select the following information for **Add IP configuration**:
+1. Enter or select the following information for **Add IP configuration**:
| Setting | Value |
| - | -- |
- | Name | Enter **ipconfig2**. |
+ | Name | Enter **ipconfig-2**. |
| **Private IP address settings** | |
| Allocation | Select **Static**. |
- | IP address | Enter **10.1.0.5**. |
-
-12. Select **OK**.
+ | IP address | Enter **10.0.0.5**. |
-13. Verify **ipconfig2** has been added as a secondary IP configuration.
+1. Select **OK**.
- :::image type="content" source="./media/deploy-container-networking-docker-windows/verify-ip-configuration.png" alt-text="Screenshot of IP configuration of the virtual machine network interface with the secondary configuration.":::
+1. Verify **ipconfig2** has been added as a secondary IP configuration.
Repeat the previous steps to add as many IP configurations as the number of containers you plan to deploy on the container host. For a scripted alternative, see the Azure CLI sketch that follows.
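The resource group `test-rg` and NIC name `vm-1-nic` in this sketch are placeholder assumptions, not names taken from this article:

```azurecli-interactive
# Add a secondary static IPv4 configuration to the VM's network interface.
# Resource group and NIC names are placeholders; substitute your own.
az network nic ip-config create \
    --resource-group test-rg \
    --nic-name vm-1-nic \
    --name ipconfig-2 \
    --private-ip-address 10.0.0.5
```

Run the command once per planned container, incrementing the configuration name and private IP address each time.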
To assign multiple IP addresses to a Windows virtual machine, the IP addresses
1. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines** in the search results.
-2. Select **myVM**.
+1. Select **vm-1**.
-3. In the **Overview** of **myVM**, select **Connect** then **Bastion**.
+1. In the **Overview** of **vm-1**, select **Connect** then **Bastion**.
-4. Enter the username and password you created when you deployed the virtual machine in the previous steps.
+1. Enter the username and password you created when you deployed the virtual machine in the previous steps.
-5. Select **Connect**.
+1. Select **Connect**.
-6. Open the network connections configuration on the virtual machine. Select **Start** -> **Run** and enter **`ncpa.cpl`**.
+1. Open the network connections configuration on the virtual machine. Select **Start** -> **Run** and enter **`ncpa.cpl`**.
-7. Select **OK**.
+1. Select **OK**.
-8. Select the network interface of the virtual machine, then **Properties**:
+1. Select the network interface of the virtual machine, then **Properties**:
:::image type="content" source="./media/deploy-container-networking-docker-windows/select-network-interface.png" alt-text="Screenshot of select network interface in Windows OS.":::
-9. In **Ethernet Properties**, select **Internet Protocol Version 4 (TCP/IPv4)**, then **Properties**.
+1. In **Ethernet Properties**, select **Internet Protocol Version 4 (TCP/IPv4)**, then **Properties**.
-10. Enter or select the following information in the **General** tab:
+1. Enter or select the following information in the **General** tab:
| Setting | Value |
| - | -- |
| Select **Use the following IP address:** | |
- | IP address: | Enter **10.1.0.4** |
+ | IP address: | Enter **10.0.0.4** |
| Subnet mask: | Enter **255.255.255.0** |
- | Default gateway | Enter **10.1.0.1** |
+ | Default gateway | Enter **10.0.0.1** |
| Select **Use the following DNS server addresses:** | |
| Preferred DNS server: | Enter **168.63.129.16** *This IP is the DHCP assigned IP address for the default Azure DNS* |
- :::image type="content" source="./media/deploy-container-networking-docker-windows/ip-address-configuration.png" alt-text="Screenshot of the primary IP configuration in Windows.":::
+1. Select **Advanced...**.
-11. Select **Advanced...**.
+1. in **IP addresses**, select **Add...**.
-12. in **IP addresses**, select **Add...**.
-
- :::image type="content" source="./media/deploy-container-networking-docker-windows/advanced-ip-configuration.png" alt-text="Screenshot of the advanced IP configuration in Windows.":::
-
-13. Enter or select the following information:
+1. Enter or select the following information:
| Setting | Value |
| - | -- |
| **TCP/IP Address** | |
- | IP address: | Enter **10.1.0.5** |
+ | IP address: | Enter **10.0.0.5** |
| Subnet mask: | Enter **255.255.255.0** |
- :::image type="content" source="./media/deploy-container-networking-docker-windows/secondary-ip-address.png" alt-text="Screenshot of the secondary IP configuration addition.":::
-
-14. Select **Add**.
+1. Select **Add**.
-15. To add more IP addresses that correspond with any extra IP configurations created previously, select **Add**.
+1. To add more IP addresses that correspond with any extra IP configurations created previously, select **Add**.
-16. Select **OK**.
+1. Select **OK**.
-17. Select **OK**.
+1. Select **OK**.
-18. Select **OK**.
+1. Select **OK**.
-The Bastion connection will drop for a few seconds as the network configuration is applied. Wait a few seconds then attempt to reconnect. Continue when a reconnection is successful.
+The Bastion connection drops for a few seconds as the network configuration is applied. Wait a few seconds then attempt to reconnect. Continue when a reconnection is successful.
## Install Docker
Sign-in to the virtual machine you created previously with the Azure Bastion hos
1. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines** in the search results.
-2. Select **myVM**.
+1. Select **vm-1**.
-3. In the **Overview** of **myVM**, select **Connect** then **Bastion**.
+1. In the **Overview** of **vm-1**, select **Connect** then **Bastion**.
-4. Enter the username and password you created when you deployed the virtual machine in the previous steps.
+1. Enter the username and password you created when you deployed the virtual machine in the previous steps.
-5. Select **Connect**.
+1. Select **Connect**.
-6. Open **Windows PowerShell** on **myVM**.
+1. Open **Windows PowerShell** on **vm-1**.
-7. The following example installs **Docker CE/Moby**:
+1. The following example installs **Docker CE/Moby**:
```powershell
Invoke-WebRequest -UseBasicParsing "https://raw.githubusercontent.com/microsoft/Windows-Containers/Main/helpful_tools/Install-DockerCE/install-docker-ce.ps1" -o install-docker-ce.ps1
.\install-docker-ce.ps1
```
-The virtual machine will reboot to install the container support in Windows. Reconnect to the virtual machine and the Docker install will continue.
+The virtual machine reboots to install the container support in Windows. Reconnect to the virtual machine and the Docker install continues.
For more information about Windows containers, see, [Get started: Prep Windows for containers](/virtualization/windowscontainers/quick-start/set-up-environment?tabs=dockerce#windows-server-1).
After Docker is installed on your virtual machine, continue with the steps in th
## Install CNI plugin and jq
-The Azure CNI plugin is maintained as a GitHub project and is available for download from the project's GitHub page. For this article, you'll download the CNI plugin repository within the virtual machine and then install and configure the plugin.
+The Azure CNI plugin is maintained as a GitHub project and is available for download from the project's GitHub page. For this article, you download the CNI plugin repository within the virtual machine and then install and configure the plugin.
For more information about the Azure CNI plugin, see [Microsoft Azure Container Networking](https://github.com/Azure/azure-container-networking).

1. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines** in the search results.
-2. Select **myVM**.
+1. Select **vm-1**.
-3. In the **Overview** of **myVM**, select **Connect** then **Bastion**.
+1. In the **Overview** of **vm-1**, select **Connect** then **Bastion**.
-4. Enter the username and password you created when you deployed the virtual machine in the previous steps.
+1. Enter the username and password you created when you deployed the virtual machine in the previous steps.
-5. Select **Connect**.
+1. Select **Connect**.
-6. Use the following example to download and extract the CNI plugin to a temporary folder in the virtual machine:
+1. Use the following example to download and extract the CNI plugin to a temporary folder in the virtual machine:
```powershell
Invoke-WebRequest -Uri https://github.com/Azure/azure-container-networking/archive/refs/heads/master.zip -OutFile azure-container-networking.zip
Expand-Archive azure-container-networking.zip -DestinationPath azure-container-networking
```
-7. To install the CNI plugin, change to the scripts directory of the CNI plugin folder you downloaded in the previous step. The install script command requires a version number for the CNI plugin. At the time of the writing of this article, the newest version is **`v1.4.39`**. To obtain the latest version number of the plugin or previous versions, see [Releases](https://github.com/Azure/azure-container-networking/releases).
+1. To install the CNI plugin, change to the scripts directory of the CNI plugin folder you downloaded in the previous step. The install script command requires a version number for the CNI plugin. At the time of the writing of this article, the newest version is **`v1.4.39`**. To obtain the latest version number of the plugin or previous versions, see [Releases](https://github.com/Azure/azure-container-networking/releases).
```powershell
cd .\azure-container-networking\azure-container-networking-master\scripts\
.\Install-CniPlugin.ps1 v1.4.39
```
-8. The CNI plugin comes with a built-in network configuration file for the plugin. Use the following example to copy the file to the network configuration directory:
+1. The CNI plugin comes with a built-in network configuration file for the plugin. Use the following example to copy the file to the network configuration directory:
```powershell
Copy-Item -Path "c:\k\azurecni\bin\10-azure.conflist" -Destination "c:\k\azurecni\netconf"
```
The script that creates the containers with the Azure CNI plugin requires the ap
1. Open a web browser in the virtual machine and download the **jq** application.
-2. The download is a self-contained executable for the application. Copy the executable **`jq-win64.exe`** to the **`C:\Windows`** directory.
+1. The download is a self-contained executable for the application. Copy the executable **`jq-win64.exe`** to the **`C:\Windows`** directory.
## Create test container
-1. To start a container with the CNI plugin, you must use a special script that comes with the plugin to create and start the container. The following example will create a Windows Server container with the CNI plugin script:
+1. To start a container with the CNI plugin, you must use a special script that comes with the plugin to create and start the container. The following example creates a Windows Server container with the CNI plugin script:
```powershell
cd .\azure-container-networking\azure-container-networking-master\scripts\
.\docker-exec.ps1 vnetdocker1 default mcr.microsoft.com/windows/servercore/iis add
```
- It can take a few minutes for the image for the container to download for the first time. When the container starts and initializes the network, the Bastion connection will disconnect. Wait a few seconds and the connection will reestablish.
+ It can take a few minutes for the container image to download the first time. When the container starts and initializes the network, the Bastion connection disconnects. Wait a few seconds and the connection reestablishes.
-2. To verify that the container received the IP address you previously configured, connect to the container and view the IP:
+1. To verify that the container received the IP address you previously configured, connect to the container and view the IP:
```powershell
docker exec -it vnetdocker1 powershell
```
-3. Use the **`ipconfig`** command in the following example to verify the IP address was assigned to the container:
+1. Use the **`ipconfig`** command in the following example to verify the IP address was assigned to the container:
```powershell
ipconfig
```

:::image type="content" source="./media/deploy-container-networking-docker-windows/ipconfig-output.png" alt-text="Screenshot of ipconfig output in PowerShell prompt of test container.":::
-4. Exit the container and close the Bastion connection to **myVM**.
-
-## Clean up resources
-
-If you're not going to continue to use this application, delete the virtual network and virtual machine with the following steps:
-
-1. In the search box at the top of the portal, enter **Resource group**. Select **Resource groups** in the search results.
-
-2. Select **myResourceGroup**.
-
-3. In the **Overview** of **myResourceGroup**, select **Delete resource group**.
-
-4. In **TYPE THE RESOURCE GROUP NAME:**, enter **myResourceGroup**.
+1. Exit the container and close the Bastion connection to **vm-1**.
-5. Select **Delete**.
## Next steps
virtual-network How To Create Encryption Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/how-to-create-encryption-portal.md
Azure Virtual Network encryption is a feature of Azure Virtual Network. Virtual
- An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio).
-## Create a virtual network
-
-In this section, you create a virtual network and enable virtual network encryption.
-
-1. Sign in to the [Azure portal](https://portal.azure.com/).
-
-1. In the search box at the top of the portal, begin typing **Virtual networks**. When **Virtual networks** appears in the search results, select it.
-
-1. In **Virtual networks**, select **+ Create**.
-
-1. Enter or select the following information in the **Basics** tab of **Create virtual network**:
-
- | Setting | Value |
- | - | -- |
- | **Project details** | |
- | **Subscription** | Select your subscription. |
- | **Resource group** | Select **Create new**, then enter **test-rg** in **Name**. Select **OK**. |
- | **Instance details** | |
- | Virtual network name | Enter **vnet-1**. |
- | Region | Select **(US) East US 2**. |
-
-1. Select **Review + create**.
-
-1. Select **Create**.
> [!IMPORTANT]
> Azure Virtual Network encryption requires supported virtual machine SKUs in the virtual network for traffic to be encrypted. The setting **dropUnencrypted** will drop traffic between unsupported virtual machine SKUs if they are deployed in the virtual network. For more information, see [Azure Virtual Network encryption requirements](virtual-network-encryption-overview.md#requirements).
virtual-network Add Dual Stack Ipv6 Vm Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/add-dual-stack-ipv6-vm-cli.md
Title: Add a dual-stack network to an existing virtual machine - Azure CLI description: Learn how to add a dual-stack network to an existing virtual machine using the Azure CLI.--++ Previously updated : 08/24/2022 Last updated : 08/24/2023 ms.devlang: azurecli # Add a dual-stack network to an existing virtual machine using the Azure CLI
-In this article, you'll add IPv6 support to an existing virtual network. You'll configure an existing virtual machine with both IPv4 and IPv6 addresses. When completed, the existing virtual network will support private IPv6 addresses. The existing virtual machine network configuration will contain a public and private IPv4 and IPv6 address.
+In this article, you add IPv6 support to an existing virtual network. You configure an existing virtual machine with both IPv4 and IPv6 addresses. When completed, the existing virtual network supports private IPv6 addresses. The existing virtual machine network configuration contains a public and private IPv4 and IPv6 address.
## Prerequisites
In this article, you'll add IPv6 support to an existing virtual network. You'll
## Add IPv6 to virtual network
-In this section, you'll add an IPv6 address space and subnet to your existing virtual network.
+In this section, you add an IPv6 address space and subnet to your existing virtual network.
Use [az network vnet update](/cli/azure/network/vnet#az-network-vnet-update) to update the virtual network.
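The full command is elided in this digest; a minimal sketch, using placeholder names and an example IPv6 range (both assumptions):

```azurecli-interactive
# Append an IPv6 address space to the virtual network. Names and prefixes are
# placeholders. Include the existing IPv4 space in the list, because
# --address-prefixes replaces the full set of address spaces.
az network vnet update \
    --resource-group myResourceGroup \
    --name myVNet \
    --address-prefixes 10.0.0.0/16 2404:f800:8000:122::/63
```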
az network vnet subnet update \
## Create IPv6 public IP address
-In this section, you'll create a IPv6 public IP address for the virtual machine.
+In this section, you create an IPv6 public IP address for the virtual machine.
Use [az network public-ip create](/cli/azure/network/public-ip#az-network-public-ip-create) to create the public IP address.
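A minimal sketch with placeholder names:

```azurecli-interactive
# Create a static, standard-SKU IPv6 public IP address.
# Resource group and name are placeholders.
az network public-ip create \
    --resource-group myResourceGroup \
    --name myPublicIP-IPv6 \
    --sku Standard \
    --version IPv6 \
    --allocation-method Static
```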
virtual-network Add Dual Stack Ipv6 Vm Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/add-dual-stack-ipv6-vm-portal.md
Title: Add a dual-stack network to an existing virtual machine - Azure portal description: Learn how to add a dual stack network to an existing virtual machine using the Azure portal.--++ Previously updated : 08/19/2022 Last updated : 08/24/2023 # Add a dual-stack network to an existing virtual machine using the Azure portal
-In this article, you'll add IPv6 support to an existing virtual network. You'll configure an existing virtual machine with both IPv4 and IPv6 addresses. When completed, the existing virtual network will support private IPv6 addresses. The existing virtual machine network configuration will contain a public and private IPv4 and IPv6 address.
+In this article, you add IPv6 support to an existing virtual network. You configure an existing virtual machine with both IPv4 and IPv6 addresses. When completed, the existing virtual network supports private IPv6 addresses. The existing virtual machine network configuration contains a public and private IPv4 and IPv6 address.
## Prerequisites
In this article, you'll add IPv6 support to an existing virtual network. You'll
## Add IPv6 to virtual network
-In this section, you'll add an IPv6 address space and subnet to your existing virtual network.
+In this section, you add an IPv6 address space and subnet to your existing virtual network.
1. Sign in to the [Azure portal](https://portal.azure.com).
In this section, you'll add an IPv6 address space and subnet to your existing vi
## Create IPv6 public IP address
-In this section, you'll create a IPv6 public IP address for the virtual machine.
+In this section, you create an IPv6 public IP address for the virtual machine.
1. In the search box at the top of the portal, enter **Public IP address**. Select **Public IP addresses** in the search results.
In this section, you'll create a IPv6 public IP address for the virtual machine.
## Add IPv6 configuration to virtual machine
-The virtual machine must be stopped to add the IPv6 configuration to the existing virtual machine. You'll stop the virtual machine and add the IPv6 configuration to the existing virtual machine's network interface.
+The virtual machine must be stopped before you can add the IPv6 configuration. You stop the virtual machine and then add the IPv6 configuration to its network interface.
1. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines** in the search results.
virtual-network Add Dual Stack Ipv6 Vm Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/add-dual-stack-ipv6-vm-powershell.md
Title: Add a dual-stack network to an existing virtual machine - Azure PowerShell description: Learn how to add a dual-stack network to an existing virtual machine using Azure PowerShell.--++ Previously updated : 08/24/2022 Last updated : 08/24/2023 # Add a dual-stack network to an existing virtual machine using Azure PowerShell
-In this article, you'll add IPv6 support to an existing virtual network. You'll configure an existing virtual machine with both IPv4 and IPv6 addresses. When completed, the existing virtual network will support private IPv6 addresses. The existing virtual machine network configuration will contain a public and private IPv4 and IPv6 address.
+In this article, you add IPv6 support to an existing virtual network. You configure an existing virtual machine with both IPv4 and IPv6 addresses. When completed, the existing virtual network supports private IPv6 addresses. The existing virtual machine network configuration contains a public and private IPv4 and IPv6 address.
## Prerequisites
If you choose to install and use PowerShell locally, this article requires the A
## Add IPv6 to virtual network
-In this section, you'll add an IPv6 address space and subnet to your existing virtual network.
+In this section, you add an IPv6 address space and subnet to your existing virtual network.
Use [Set-AzVirtualNetwork](/powershell/module/az.network/set-azvirtualnetwork) to update the virtual network.
Set-AzVirtualNetwork -VirtualNetwork $vnet
## Create IPv6 public IP address
-In this section, you'll create a IPv6 public IP address for the virtual machine.
+In this section, you create an IPv6 public IP address for the virtual machine.
Use [New-AzPublicIpAddress](/powershell/module/az.network/new-azpublicipaddress) to create the public IP address.
virtual-network Associate Public Ip Address Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/associate-public-ip-address-vm.md
Title: Associate a public IP address to a virtual machine
description: Learn how to associate a public IP address to a virtual machine (VM) by using the Azure portal, the Azure CLI, or Azure PowerShell. -+ Previously updated : 03/17/2023- Last updated : 08/24/2023+
virtual-network Configure Public Ip Application Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/configure-public-ip-application-gateway.md
Title: Manage a public IP address with an Azure Application Gateway description: Learn about the ways a public IP address is used with an Azure Application Gateway and how to change and manage the configuration.--++ Previously updated : 06/28/2021 Last updated : 08/24/2023
Azure Application Gateway is a web traffic load balancer that manages traffic to
An Application Gateway frontend can be a private IP address, public IP address, or both. The V1 SKU of Application Gateway supports basic dynamic public IPs. The V2 SKU supports standard SKU public IPs that are static only. Application Gateway V2 SKU doesn't support an internal IP address as its only frontend. For more information, see [Application Gateway frontend IP address configuration](../../application-gateway/configuration-frontend-ip.md).
-In this article, you'll learn how to create an Application Gateway using an existing public IP in your subscription.
+In this article, you learn how to create an Application Gateway using an existing public IP in your subscription.
## Prerequisites
In this article, you'll learn how to create an Application Gateway using an exis
## Create Application Gateway existing public IP
-In this section, you'll create an Application Gateway resource. You'll select the IP address you created in the prerequisites as the public IP for the Application Gateway.
+In this section, you create an Application Gateway resource. You select the IP address you created in the prerequisites as the public IP for the Application Gateway.
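The portal steps follow. As a hedged Azure CLI equivalent (all resource names are placeholders; the public IP is the standard static IP from the prerequisites):

```azurecli-interactive
# Create a v2 application gateway fronted by an existing public IP address.
# All resource names are placeholders; the V2 SKU requires a standard,
# static public IP.
az network application-gateway create \
    --resource-group myResourceGroup \
    --name myAppGateway \
    --sku Standard_v2 \
    --public-ip-address myExistingPublicIP \
    --vnet-name myVNet \
    --subnet myAGSubnet
```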
1. Sign in to the [Azure portal](https://portal.azure.com).
virtual-network Configure Public Ip Bastion https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/configure-public-ip-bastion.md
Title: Manage a public IP address with Azure Bastion description: Learn about the ways a public IP address is used with Azure Bastion and how to change the configuration.--++ Previously updated : 06/28/2021 Last updated : 08/24/2023
Azure Bastion is deployed to provide secure management connectivity to virtual m
An Azure Bastion host requires a public IP address for its configuration.
-In this article, you'll learn how to create an Azure Bastion host using an existing public IP in your subscription. Azure Bastion doesn't support the change of the public IP address after creation. Azure Bastion doesn't support public IP prefixes.
+In this article, you learn how to create an Azure Bastion host using an existing public IP in your subscription. Azure Bastion doesn't support changing the public IP address after creation. Azure Bastion doesn't support public IP prefixes.
>[!NOTE]
>[!INCLUDE [Pricing](../../../includes/bastion-pricing.md)]
In this article, you'll learn how to create an Azure Bastion host using an exist
## Create Azure Bastion using existing IP
-In this section, you'll create an Azure Bastion host. You'll select the IP address you created in the prerequisites as the public IP for bastion host.
+In this section, you create an Azure Bastion host. You select the IP address you created in the prerequisites as the public IP for the bastion host.
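Before the portal steps, a hedged Azure CLI equivalent; it assumes the `bastion` CLI extension is installed and all names are placeholders:

```azurecli-interactive
# Create a Bastion host that uses an existing public IP address.
# Names and location are placeholders; the virtual network must already
# contain a subnet named AzureBastionSubnet.
az network bastion create \
    --resource-group myResourceGroup \
    --name myBastion \
    --vnet-name myVNet \
    --public-ip-address myExistingPublicIP \
    --location eastus2
```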
1. Sign in to the [Azure portal](https://portal.azure.com).
virtual-network Configure Public Ip Firewall https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/configure-public-ip-firewall.md
Title: Manage a public IP address by using Azure Firewall description: Learn about the ways a public IP address is used with Azure Firewall and how to change the configuration.--++ Previously updated : 03/28/2023 Last updated : 08/24/2023
virtual-network Configure Public Ip Load Balancer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/configure-public-ip-load-balancer.md
Title: Manage a public IP address with a load balancer description: Learn about the ways a public IP address is used with an Azure Load Balancer and how to change the configuration.--++ Previously updated : 12/15/2022 Last updated : 08/24/2023
Finally, the article reviews unique aspects of using public IPs and public IP pr
## Create load balancer using existing public IP
-In this section, you'll create a standard SKU load balancer. You'll select the IP address you created in the prerequisites as the frontend IP of the load balancer.
+In this section, you create a standard SKU load balancer. You select the IP address you created in the prerequisites as the frontend IP of the load balancer.
1. Sign in to the [Azure portal](https://portal.azure.com).
In this section, you'll create a standard SKU load balancer. You'll select the I
## Change or remove public IP address
-In this section, you'll change the frontend IP address of the load balancer.
+In this section, you change the frontend IP address of the load balancer.
An Azure Load Balancer must have an IP address associated with a frontend. A separate public IP address can be utilized as a frontend for ingress and egress traffic.
-To change the IP, you'll associate a new public IP address previously created with the load balancer frontend.
+To change the IP, you associate a new public IP address previously created with the load balancer frontend.
1. Sign in to the [Azure portal](https://portal.azure.com).
Standard load balancer supports outbound rules for Source Network Address Transl
Multiple IPs avoid SNAT port exhaustion. Each Frontend IP provides 64,000 ephemeral ports that the load balancer can use. For more information, see [Outbound Rules](../../load-balancer/outbound-rules.md).
-In this section, you'll change the frontend configuration used for outbound connections to use a public IP prefix.
+In this section, you change the frontend configuration used for outbound connections to use a public IP prefix.
1. Sign in to the [Azure portal](https://portal.azure.com).
In this section, you'll change the frontend configuration used for outbound conn
* Cross-region load balancers are a special type of standard public load balancer that can span multiple regions. The frontend of a cross-region load balancer can only be used with the global tier option of standard SKU public IPs. Traffic sent to the frontend IP of a cross-region load balancer is distributed across the regional public load balancers. The regional frontend IPs are contained in the backend pool of the cross-region load balancer. For more information, see [Cross-region load balancer](../../load-balancer/cross-region-overview.md).
-* By default, a public load balancer won't allow you to use multiple load-balancing rules with the same backend port. If a multiple rule configuration to the same backend port is required, then enable the floating IP option for a load-balancing rule. This setting overwrites the destination IP address of the traffic sent to the backend pool. Without floating IP enabled, the destination will be the backend pool private IP. With floating IP enabled, the destination IP will be the load balancer frontend public IP. The backend instance must have this public IP configured in its network configuration to correctly receive this traffic. A loopback interface with the frontend IP address must be configured in the instance. For more information, see [Azure Load Balancer Floating IP configuration](../../load-balancer/load-balancer-floating-ip.md).
+* By default, a public load balancer can't use multiple load-balancing rules with the same backend port. If multiple rules to the same backend port are required, enable the floating IP option for a load-balancing rule. This setting overwrites the destination IP address of the traffic sent to the backend pool. Without floating IP enabled, the destination is the backend pool private IP. With floating IP enabled, the destination IP is the load balancer frontend public IP. The backend instance must have this public IP configured in its network configuration to correctly receive this traffic. A loopback interface with the frontend IP address must be configured in the instance. For more information, see [Azure Load Balancer Floating IP configuration](../../load-balancer/load-balancer-floating-ip.md). A CLI sketch follows this list.
* With a load balancer setup, members of backend pool can often also be assigned instance-level public IPs. With this architecture, sending traffic directly to these IPs bypasses the load balancer.
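For the floating IP item above, a minimal CLI sketch of enabling the option on an existing rule (all names are placeholders):

```azurecli-interactive
# Enable floating IP on an existing load-balancing rule so multiple rules
# can share the same backend port. All names are placeholders.
az network lb rule update \
    --resource-group myResourceGroup \
    --lb-name myLoadBalancer \
    --name myHTTPRule \
    --floating-ip true
```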
virtual-network Configure Public Ip Nat Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/configure-public-ip-nat-gateway.md
Title: Manage a public IP address with a NAT gateway description: Learn about the ways a public IP address is used with an Azure Virtual Network NAT gateway and how to change the configuration.--++ Previously updated : 12/15/2022 Last updated : 08/24/2023
In this article, you learn how to:
## Create NAT gateway using existing public IP
-In this section, you'll create a NAT gateway resource. You'll select the IP address you created in the prerequisites as the public IP for the NAT gateway.
+In this section, you create a NAT gateway resource. You select the IP address you created in the prerequisites as the public IP for the NAT gateway.
1. Sign in to the [Azure portal](https://portal.azure.com).
In this section, you'll create a NAT gateway resource. You'll select the IP addr
## Change or remove public IP address
-In this section, you'll change the IP address of the NAT gateway.
+In this section, you change the IP address of the NAT gateway.
-To change the IP, you'll associate a new public IP address created previously with the NAT gateway. A NAT gateway must have at least one IP address assigned.
+To change the IP, you associate a new public IP address created previously with the NAT gateway. A NAT gateway must have at least one IP address assigned.
1. Sign in to the [Azure portal](https://portal.azure.com).
Public IP prefixes extend the extensibility of SNAT for outbound connections fro
> [!NOTE]
> When assigning a public IP prefix to a NAT gateway, the entire range will be used.
-In this section, you'll change the outbound IP configuration to use a public IP prefix you created previously.
+In this section, you change the outbound IP configuration to use a public IP prefix you created previously.
> [!NOTE]
> You can choose to remove the single IP address associated with the NAT gateway and reuse it, or leave it associated to the NAT gateway to increase the outbound SNAT ports. NAT gateway supports a combination of public IPs and prefixes in the outbound IP configuration. If you created a public IP prefix with 16 addresses, remove the single public IP. The number of allocated IPs can't exceed 16.
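A minimal CLI sketch of the swap the note describes, with placeholder names:

```azurecli-interactive
# Replace the NAT gateway's outbound configuration with a public IP prefix.
# Names are placeholders; the prefix's entire range is used for SNAT.
az network nat gateway update \
    --resource-group myResourceGroup \
    --name myNATgateway \
    --public-ip-prefixes myPublicIPPrefix
```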
virtual-network Configure Public Ip Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/configure-public-ip-vm.md
Title: Manage a public IP address with an Azure Virtual Machine description: Learn about the ways a public IP address is used with Azure Virtual Machines and how to change the configuration.--++ Previously updated : 06/28/2021 Last updated : 08/24/2023
virtual-network Configure Public Ip Vpn Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/configure-public-ip-vpn-gateway.md
Title: Manage a public IP address with a VPN gateway description: Learn about the ways a public IP address is used with a VPN gateway and how to change the configuration.--++
virtual-network Configure Routing Preference Virtual Machine Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/configure-routing-preference-virtual-machine-cli.md
Title: 'Tutorial: Configure routing preference for a VM - Azure CLI'
-description: In this tutorial, learn how to create a VM with a public IP address with routing preference choice using the Azure CLI.
--
+description: In this tutorial, learn how to configure routing preference for a VM using a public IP address with the Azure CLI.
++ Previously updated : 10/01/2021 Last updated : 08/24/2023 ms.devlang: azurecli
virtual-network Configure Routing Preference Virtual Machine Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/configure-routing-preference-virtual-machine-powershell.md
Title: 'Tutorial: Configure routing preference for a VM - Azure PowerShell'
-description: In this tutorial, learn how to create a VM with a public IP address with routing preference choice using Azure PowerShell.
--
+description: In this tutorial, learn how to configure routing preference for a VM using a public IP address with Azure PowerShell.
++ Previously updated : 10/01/2021 Last updated : 08/24/2023
virtual-network Create Custom Ip Address Prefix Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/create-custom-ip-address-prefix-cli.md
Title: Create a custom IPv4 address prefix - Azure CLI
-description: Learn about how to create a custom IP address prefix using the Azure CLI
-
+description: Learn how to create a custom IP address prefix using the Azure CLI
++ Previously updated : 03/31/2022- Last updated : 08/24/2023 # Create a custom IPv4 address prefix using the Azure CLI
virtual-network Create Custom Ip Address Prefix Ipv6 Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/create-custom-ip-address-prefix-ipv6-cli.md
Title: Create a custom IPv6 address prefix - Azure CLI
-description: Learn about how to create a custom IPv6 address prefix using Azure CLI
-
+description: Learn how to create a custom IPv6 address prefix using Azure CLI
++ Previously updated : 03/31/2022- Last updated : 08/24/2023 # Create a custom IPv6 address prefix using Azure CLI
-A custom IPv6 address prefix enables you to bring your own IPv6 ranges to Microsoft and associate it to your Azure subscription. The range would continue to be owned by you, though Microsoft would be permitted to advertise it to the Internet. A custom IP address prefix functions as a regional resource that represents a contiguous block of customer owned IP addresses.
+A custom IPv6 address prefix enables you to bring your own IPv6 ranges to Microsoft and associate it to your Azure subscription. You continue to own the range, though Microsoft is permitted to advertise it to the Internet. A custom IP address prefix functions as a regional resource that represents a contiguous block of customer owned IP addresses.
The steps in this article detail the process to:
* Provision the range for IP allocation
-* Enable the range to be advertised by Microsoft
+* Commission the IPv6 prefixes to advertise the range to the Internet
## Differences between using BYOIPv4 and BYOIPv6

> [!IMPORTANT]
> Onboarded custom IPv6 address prefixes have several unique attributes that make them different from custom IPv4 address prefixes.
-* Custom IPv6 prefixes use a "parent"/"child" model, where the global (parent) range is advertised by the Microsoft Wide Area Network (WAN) and the regional (child) range(s) are advertised by their respective region(s). Global ranges must be /48 in size, while regional ranges must always be /64 size.
+* Custom IPv6 prefixes use a *parent*/*child* model. In this model, the Microsoft Wide Area Network (WAN) advertises the global (parent) range, and the respective Azure regions advertise the regional (child) ranges. Global ranges must be /48 in size, while regional ranges must always be /64 size. You can have multiple /64 ranges per region.
* Only the global range needs to be validated using the steps detailed in the [Create Custom IP Address Prefix](create-custom-ip-address-prefix-portal.md) articles. The regional ranges are derived from the global range in a similar manner to the way public IP prefixes are derived from custom IP prefixes.
-* Public IPv6 prefixes must be derived from the regional ranges. Only the first 2048 IPv6 addresses of each regional /64 custom IP prefix can be utilized as valid IPv6 space. Attempting to create public IPv6 prefixes that span beyond this will result in an error.
+* Public IPv6 prefixes must be derived from the regional ranges. Only the first 2048 IPv6 addresses of each regional /64 custom IP prefix can be utilized as valid IPv6 space. Attempting to create public IPv6 prefixes beyond this space results in an error.
## Prerequisites - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). - This tutorial requires version 2.37 or later of the Azure CLI (you can run az version to determine which you have). If using Azure Cloud Shell, the latest version is already installed. - Sign in to Azure CLI and ensure you've selected the subscription with which you want to use this feature using `az account`.-- A customer owned IPv6 range to provision in Azure. A sample customer range (2a05:f500:2::/48) is used for this example, but would not be validated by Azure; you will need to replace the example range with yours.
- - A sample customer range (2a05:f500:2::/48) is used for this example. This range won't be validated by Azure. Replace the example range with yours.
+- A customer owned IPv6 range to provision in Azure.
+ - In this example, a sample customer range (2a05:f500:2::/48) is used. This range won't be validated by Azure. Replace the example range with yours.
> [!NOTE]
> For problems encountered during the provisioning process, please see [Troubleshooting for custom IP prefix](manage-custom-ip-address-prefix.md#troubleshooting-and-faqs).

## Pre-provisioning steps
-To utilize the Azure BYOIP feature, you must perform and number of steps prior to the provisioning of your IPv6 address range. Refer to the [IPv4 instructions](create-custom-ip-address-prefix-cli.md#pre-provisioning-steps) for details. Note that all these steps should be completed for the IPv6 global (parent) range.
+To utilize the Azure BYOIP feature, you must perform preparation steps prior to the provisioning of your IPv6 address range. Refer to the [IPv4 instructions](create-custom-ip-address-prefix-cli.md#pre-provisioning-steps) for details. All these steps should be completed for the IPv6 global (parent) range.
## Provisioning for IPv6
The following command creates a custom IP prefix in the specified region and res
### Provision a regional custom IPv6 address prefix
-After the global custom IP prefix is in a **Provisioned** state, regional custom IP prefixes can be created. These ranges must always be of size /64 to be considered valid. The ranges can be created in any region (it doesn't need to be the same as the global custom IP prefix), keeping in mind any geolocation restrictions associated with the original global range. The "children" custom IP prefixes will be advertised locally from the region they are created in. Because the validation is only done for global custom IP prefix provision, no Authorization or Signed message is required. (Because these ranges will be advertised from a specific region, zones can be utilized.)
+After the global custom IP prefix is in a **Provisioned** state, regional custom IP prefixes can be created. These ranges must always be of size /64 to be considered valid. The ranges can be created in any region (it doesn't need to be the same as the global custom IP prefix), keeping in mind any geolocation restrictions associated with the original global range. The *children* custom IP prefixes are advertised locally from the region they're created in. Because the validation is only done for global custom IP prefix provision, no Authorization or Signed message is required. (Because these ranges are advertised from a specific region, zones can be utilized.)
```azurecli-interactive
az network custom-ip prefix create \
    ...
    --zone 1 2 3
```
-Similar to IPv4 custom IP prefixes, after the regional custom IP prefix is in a **Provisioned** state, public IP prefixes can be derived from the regional custom IP prefix. These public IP prefixes and any public IP addresses derived from them can be attached to networking resources, though they are not yet being advertised.
+Similar to IPv4 custom IP prefixes, after the regional custom IP prefix is in a **Provisioned** state, public IP prefixes can be derived from the regional custom IP prefix. These public IP prefixes and any public IP addresses derived from them can be attached to networking resources, though they aren't yet being advertised.
> [!IMPORTANT]
> Public IPv6 prefixes derived from regional custom IPv6 prefixes can only utilize the first 2048 IPs of the /64 range.
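A hedged sketch of deriving a public IPv6 prefix from a regional custom prefix, with placeholder names:

```azurecli-interactive
# Derive a public IPv6 prefix from the regional custom IPv6 prefix.
# Names are placeholders; a /124 stays within the usable first 2048 IPs.
az network public-ip prefix create \
    --resource-group myResourceGroup \
    --name myPublicIPv6Prefix \
    --custom-ip-prefix-name myRegionalCustomIPv6Prefix \
    --version IPv6 \
    --length 124
```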
-### Commission the custom IPv6 address prefixes
+## Commission the custom IPv6 address prefixes
When commissioning custom IPv6 prefixes, the global and regional prefixes are treated separately. In other words, commissioning a regional custom IPv6 prefix isn't connected to commissioning the global custom IPv6 prefix.
az network custom-ip prefix update \
> [!NOTE]
> The estimated time to fully complete the commissioning process for a custom IPv6 global prefix is 3-4 hours. The estimated time to fully complete the commissioning process for a custom IPv6 regional prefix is 30 minutes.
-It is possible to commission the global custom IPv6 prefix prior to the regional custom IPv6 prefixes; however, this will mean the global range is being advertised to the Internet before the regional prefixes are ready, so this is not recommended for migrations of active ranges. Additionally, it is possible to decommission a global custom IPv6 prefix while there are still active (commissioned) regional custom IPv6 prefixes or to decommission a regional custom IP prefix while the global prefix is still active (commissioned).
+It's possible to commission the global custom IPv6 prefix prior to the regional custom IPv6 prefixes. Doing so advertises the global range to the Internet before the regional prefixes are ready, so it's not recommended for migrations of active ranges. You can decommission a global custom IPv6 prefix while there are still active (commissioned) regional custom IPv6 prefixes. Also, you can decommission a regional custom IP prefix while the global prefix is still active (commissioned).
> [!IMPORTANT]
> As the global custom IPv6 prefix transitions to a **Commissioned** state, the range is being advertised with Microsoft from the local Azure region and globally to the Internet by Microsoft's wide area network under Autonomous System Number (ASN) 8075. Advertising this same range to the Internet from a location other than Microsoft at the same time (for example, from a customer's on-premises site) could potentially create BGP routing instability or traffic loss. Plan any migration of an active range during a maintenance period to avoid impact.
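A minimal sketch of the state change discussed above, with placeholder names; decommissioning uses the same command with `--state decommission`:

```azurecli-interactive
# Commission a custom IPv6 prefix. Run against the regional (child) prefix
# first, then the global (parent) prefix. Names are placeholders.
az network custom-ip prefix update \
    --resource-group myResourceGroup \
    --name myRegionalCustomIPv6Prefix \
    --state commission
```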
virtual-network Create Custom Ip Address Prefix Ipv6 Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/create-custom-ip-address-prefix-ipv6-portal.md
Title: Create a custom IPv6 address prefix - Azure portal
-description: Learn about how to onboard a custom IPv6 address prefix using the Azure portal
-
+description: Learn how to onboard a custom IPv6 address prefix using the Azure portal
++ Previously updated : 05/03/2022- Last updated : 08/24/2023 # Create a custom IPv6 address prefix using the Azure portal
The steps in this article detail the process to:
> [!IMPORTANT]
> Onboarded custom IPv6 address prefixes have several unique attributes that make them different from custom IPv4 address prefixes.
-* Custom IPv6 prefixes use a "parent"/"child" model, where the global (parent) range is advertised by the Microsoft Wide Area Network (WAN) and the regional (child) range(s) are advertised by their respective region(s). Global ranges must be /48 in size, while regional ranges must always be /64 size.
+* Custom IPv6 prefixes use a *parent*/*child* model. In this model, the Microsoft Wide Area Network (WAN) advertises the global (parent) range, and the respective Azure regions advertise the regional (child) ranges. Global ranges must be /48 in size, while regional ranges must always be /64 size. You can have multiple /64 ranges per region.
* Only the global range needs to be validated using the steps detailed in the [Create Custom IP Address Prefix](create-custom-ip-address-prefix-portal.md) articles. The regional ranges are derived from the global range in a similar manner to the way public IP prefixes are derived from custom IP prefixes.
The steps in this article detail the process to:
## Prerequisites

- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
- A customer owned IPv6 range to provision in Azure. A sample customer range (2a05:f500:2::/48) is used for this example, but would not be validated by Azure; you will need to replace the example range with yours.
+- A customer owned IPv6 range to provision in Azure. A sample customer range (2a05:f500:2::/48) is used for this example, but isn't validated by Azure; replace the example range with yours.
> [!NOTE]
> For problems encountered during the provisioning process, please see [Troubleshooting for custom IP prefix](manage-custom-ip-address-prefix.md#troubleshooting-and-faqs).

## Pre-provisioning steps
-To utilize the Azure BYOIP feature, you must perform and number of steps prior to the provisioning of your IPv6 address range. Refer to the [IPv4 instructions](create-custom-ip-address-prefix-portal.md#pre-provisioning-steps) for details. Note that all these steps should be completed for the IPv6 global (parent) range.
+To utilize the Azure BYOIP feature, you must perform a number of steps prior to the provisioning of your IPv6 address range. Refer to the [IPv4 instructions](create-custom-ip-address-prefix-portal.md#pre-provisioning-steps) for details. All these steps should be completed for the IPv6 global (parent) range.
## Provisioning for IPv6
Sign in to the [Azure portal](https://portal.azure.com).
6. Select **Create**.
-The range will be pushed to the Azure IP Deployment Pipeline. The deployment process is asynchronous. You can check the status by reviewing the **Commissioned state** field for the custom IP prefix.
+The range is pushed to the Azure IP Deployment Pipeline. The deployment process is asynchronous. You can check the status by reviewing the **Commissioned state** field for the custom IP prefix.
### Provision a regional custom IPv6 address prefix
-After the global custom IP prefix is in a **Provisioned** state, regional custom IP prefixes can be created. These ranges must always be of size /64 to be considered valid. The ranges can be created in any region (it doesn't need to be the same as the global custom IP prefix), keeping in mind any geolocation restrictions associated with the original global range. The "children" custom IP prefixes will be advertised locally from the region they are created in. Because the validation is only done for global custom IP prefix provision, no Authorization or Signed message is required. (Because these ranges will be advertised from a specific region, zones can be utilized.)
+After the global custom IP prefix is in a **Provisioned** state, regional custom IP prefixes can be created. These ranges must always be of size /64 to be considered valid. The ranges can be created in any region (it doesn't need to be the same region as the global custom IP prefix), keeping in mind any geolocation restrictions associated with the original global range. The child custom IP prefixes are advertised locally from the region they're created in. Because validation is only done during global custom IP prefix provisioning, no Authorization or Signed message is required. (Because these ranges are advertised from a specific region, availability zones can be utilized.)
In the same **Create a custom IP prefix** page as before, enter or select the following information:
In the same **Create a custom IP prefix** page as before, enter or select the fo
| Signed message | Paste in the output of **$byoipauthsigned** from the pre-provisioning section. | | Availability Zones | Select **Zone-redundant**. |
-Similar to IPv4 custom IP prefixes, after the regional custom IP prefix is in a **Provisioned** state, public IP prefixes can be derived from the regional custom IP prefix. These public IP prefixes and any public IP addresses derived from them can be attached to networking resources, though they are not yet being advertised.
+Similar to IPv4 custom IP prefixes, after the regional custom IP prefix is in a **Provisioned** state, public IP prefixes can be derived from the regional custom IP prefix. These public IP prefixes and any public IP addresses derived from them can be attached to networking resources, though they aren't yet being advertised.
> [!IMPORTANT] > Public IPv6 prefixes derived from regional custom IPv6 prefixes can only utilize the first 2048 IPs of the /64 range. ### Commission the custom IPv6 address prefixes
-When commissioning custom IPv6 prefixes, the global and regional prefixes are treated separately. In other words, commissioning a regional custom IPv6 prefix isn't connected to commissioning the global custom IPv6 prefix.
+When you commission custom IPv6 prefixes, the global and regional prefixes are treated separately. In other words, commissioning a regional custom IPv6 prefix isn't connected to commissioning the global custom IPv6 prefix.
:::image type="content" source="./media/create-custom-ip-address-prefix-ipv6/any-region-prefix.png" alt-text="Diagram of custom IPv6 prefix showing parent prefix and child prefixes across multiple regions.":::
To commission a custom IPv6 prefix (regional or global) using the portal:
3. In **Custom IP Prefixes**, select the desired custom IPv6 prefix.
-4. In **Overview** page of the custom IPv6 prefix, select the **Commission** button near the top of the screen. If the range is global it will begin advertising from the Microsoft WAN. If the range is regional it will advertise only from the specific region.
+4. On the **Overview** page of the custom IPv6 prefix, select the **Commission** button near the top of the screen. If the range is global, it begins advertising from the Microsoft WAN. If the range is regional, it advertises only from the specific region.
Using the example ranges above, the sequence would be to first commission myCustomIPv6RegionalPrefix, followed by a commission of myCustomIPv6GlobalPrefix. > [!NOTE] > The estimated time to fully complete the commissioning process for a custom IPv6 global prefix is 3-4 hours. The estimated time to fully complete the commissioning process for a custom IPv6 regional prefix is 30 minutes.
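For readers who script this step, the same ordering can be sketched with `Update-AzCustomIpPrefix`, the cmdlet the PowerShell variant of this article uses for commissioning. This is a minimal sketch assuming the two prefix objects were captured when the prefixes were created.

```azurepowershell-interactive
# Commission the regional (child) range first, then the global (parent) range.
Update-AzCustomIpPrefix -ResourceId $myCustomIPv6RegionalPrefix.Id -Commission
Update-AzCustomIpPrefix -ResourceId $myCustomIPv6GlobalPrefix.Id -Commission
```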
-It is possible to commission the global custom IPv6 prefix prior to the regional custom IPv6 prefixes; however, this will mean the global range is being advertised to the Internet before the regional prefixes are ready, so this is not recommended for migrations of active ranges. Additionally, it is possible to decommission a global custom IPv6 prefix while there are still active (commissioned) regional custom IPv6 prefixes or to decommission a regional custom IP prefix while the global prefix is still active (commissioned).
+It's possible to commission the global custom IPv6 prefix prior to the regional custom IPv6 prefixes. Doing so advertises the global range to the Internet before the regional prefixes are ready, so it isn't recommended for migrations of active ranges. You can decommission a global custom IPv6 prefix while there are still active (commissioned) regional custom IPv6 prefixes. Also, you can decommission a regional custom IP prefix while the global prefix is still active (commissioned).
> [!IMPORTANT] > As the global custom IPv6 prefix transitions to a **Commissioned** state, the range is advertised by Microsoft from the local Azure region and globally to the Internet by Microsoft's wide area network under Autonomous System Number (ASN) 8075. Advertising this same range to the Internet from a location other than Microsoft at the same time (for example, from a customer's on-premises network) could potentially create BGP routing instability or traffic loss. Plan any migration of an active range during a maintenance period to avoid impact.
virtual-network Create Custom Ip Address Prefix Ipv6 Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/create-custom-ip-address-prefix-ipv6-powershell.md
Title: Create a custom IPv6 address prefix - Azure PowerShell
-description: Learn about how to create a custom IPv6 address prefix using Azure PowerShell
-
+description: Learn how to create a custom IPv6 address prefix using Azure PowerShell
++ Previously updated : 03/31/2022- Last updated : 08/24/2023 # Create a custom IPv6 address prefix using Azure PowerShell
-A custom IPv6 address prefix enables you to bring your own IPv6 ranges to Microsoft and associate it to your Azure subscription. The range would continue to be owned by you, though Microsoft would be permitted to advertise it to the Internet. A custom IP address prefix functions as a regional resource that represents a contiguous block of customer owned IP addresses.
+A custom IPv6 address prefix enables you to bring your own IPv6 ranges to Microsoft and associate it to your Azure subscription. You maintain ownership of the range, while Microsoft is permitted to advertise it to the Internet. A custom IP address prefix functions as a regional resource that represents a contiguous block of customer owned IP addresses.
The steps in this article detail the process to:
The steps in this article detail the process to:
> [!IMPORTANT] > Onboarded custom IPv6 address prefixes have several unique attributes which make them different than custom IPv4 address prefixes.
-* Custom IPv6 prefixes use a "parent"/"child" model, where the global (parent) range is advertised by the Microsoft Wide Area Network (WAN) and the regional (child) range(s) are advertised by their respective region(s). Global ranges must be /48 in size, while regional ranges must always be /64 size.
+* Custom IPv6 prefixes use a *parent*/*child* model. In this model, the Microsoft Wide Area Network (WAN) advertises the global (parent) range, and the respective Azure regions advertise the regional (child) ranges. Global ranges must be /48 in size, while regional ranges must always be /64 in size. You can have multiple /64 ranges per region. (A minimal Azure PowerShell sketch follows this list.)
* Only the global range needs to be validated using the steps detailed in the [Create Custom IP Address Prefix](create-custom-ip-address-prefix-portal.md) articles. The regional ranges are derived from the global range in a similar manner to the way public IP prefixes are derived from custom IP prefixes.
-* Public IPv6 prefixes must be derived from the regional ranges. Only the first 2048 IPv6 addresses of each regional /64 custom IP prefix can be utilized as valid IPv6 space. Attempting to create public IPv6 prefixes that span beyond this will result in an error.
+* Public IPv6 prefixes must be derived from the regional ranges. Only the first 2048 IPv6 addresses of each regional /64 custom IP prefix can be utilized as valid IPv6 space. Attempting to create public IPv6 prefixes beyond this space results in an error.
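The parent/child relationship above can be sketched in Azure PowerShell as follows. Names, the location, and the `$byoipauth*` variables are placeholders; the authorization and signed messages come from the pre-provisioning steps, and only the global (parent) range needs them.

```azurepowershell-interactive
# Global (parent) /48 range: requires the ROA-based validation messages
# produced during pre-provisioning (placeholder variables).
$myCustomIPv6GlobalPrefix = New-AzCustomIpPrefix -Name myCustomIPv6GlobalPrefix `
    -ResourceGroupName myResourceGroup -Location WestUS2 -Cidr '2a05:f500:2::/48' `
    -AuthorizationMessage $byoipauth -SignedMessage $byoipauthsigned

# Regional (child) /64 range: derived from the parent, so no validation messages are needed.
$myCustomIPv6RegionalPrefix = New-AzCustomIpPrefix -Name myCustomIPv6RegionalPrefix `
    -ResourceGroupName myResourceGroup -Location WestUS2 -Cidr '2a05:f500:2:1::/64' `
    -CustomIpPrefixParent $myCustomIPv6GlobalPrefix -Zone 1,2,3
```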
## Prerequisites - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). - Azure PowerShell installed locally or Azure Cloud Shell. - Sign in to Azure PowerShell and ensure you've selected the subscription with which you want to use this feature. For more information, see [Sign in with Azure PowerShell](/powershell/azure/authenticate-azureps).-- Ensure your Az.Network module is 5.1.1 or later. To verify the installed module, use the command Get-InstalledModule -Name "Az.Network". If the module requires an update, use the command Update-Module -Name "Az.Network" if necessary.-- A customer owned IPv6 range to provision in Azure. A sample customer range (2a05:f500:2::/48) is used for this example, but would not be validated by Azure; you will need to replace the example range with yours.
+- Ensure your Az.Network module is 5.1.1 or later. To verify the installed module, use the command `Get-InstalledModule -Name "Az.Network"`. If the module requires an update, use the command `Update-Module -Name "Az.Network"` if necessary.
+- A customer owned IPv6 range to provision in Azure. A sample customer range (2a05:f500:2::/48) is used for this example, but wouldn't be validated by Azure; you need to replace the example range with yours.
If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 5.4.1 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
$myCustomIPv6GlobalPrefix = New-AzCustomIPPrefix @prefix
### Provision a regional custom IPv6 address prefix
-After the global custom IP prefix is in a **Provisioned** state, regional custom IP prefixes can be created. These ranges must always be of size /64 to be considered valid. The ranges can be created in any region (it doesn't need to be the same as the global custom IP prefix), keeping in mind any geolocation restrictions associated with the original global range. The "children" custom IP prefixes will be advertised locally from the region they are created in. Because the validation is only done for global custom IP prefix provision, no Authorization or Signed message is required. (Because these ranges will be advertised from a specific region, zones can be utilized.)
+After the global custom IP prefix is in a **Provisioned** state, regional custom IP prefixes can be created. These ranges must always be of size /64 to be considered valid. The ranges can be created in any region (it doesn't need to be the same region as the global custom IP prefix), keeping in mind any geolocation restrictions associated with the original global range. The child custom IP prefixes are advertised locally from the region they're created in. Because validation is only done during global custom IP prefix provisioning, no Authorization or Signed message is required. (Because these ranges are advertised from a specific region, availability zones can be utilized.)
```azurepowershell-interactive $prefix =@{
$prefix =@{
} $myCustomIPv6RegionalPrefix = New-AzCustomIPPrefix @prefix -Zone 1,2,3 ```
-Similar to IPv4 custom IP prefixes, after the regional custom IP prefix is in a **Provisioned** state, public IP prefixes can be derived from the regional custom IP prefix. These public IP prefixes and any public IP addresses derived from them can be attached to networking resources, though they are not yet being advertised.
+Similar to IPv4 custom IP prefixes, after the regional custom IP prefix is in a **Provisioned** state, public IP prefixes can be derived from the regional custom IP prefix. These public IP prefixes and any public IP addresses derived from them can be attached to networking resources, though they aren't yet being advertised.
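As a sketch of that derivation (hypothetical names; the /124 length is illustrative and must stay within the 2048-IP limit noted below):

```azurepowershell-interactive
# Derive a public IPv6 prefix from the regional (child) custom IP prefix.
$publicIPv6Prefix = New-AzPublicIpPrefix -Name myPublicIPv6Prefix -ResourceGroupName myResourceGroup `
    -Location WestUS2 -Sku Standard -IpAddressVersion IPv6 -PrefixLength 124 `
    -CustomIpPrefix $myCustomIPv6RegionalPrefix
```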
> [!IMPORTANT] > Public IPv6 prefixes derived from regional custom IPv6 prefixes can only utilize the first 2048 IPs of the /64 range.
Update-AzCustomIpPrefix -ResourceId $myCustomIPv6GlobalPrefix.Id -Commission
> [!NOTE] > The estimated time to fully complete the commissioning process for a custom IPv6 global prefix is 3-4 hours. The estimated time to fully complete the commissioning process for a custom IPv6 regional prefix is 30 minutes.
-It is possible to commission the global custom IPv6 prefix prior to the regional custom IPv6 prefixes; however, this will mean the global range is being advertised to the Internet before the regional prefixes are ready, so this is not recommended for migrations of active ranges. Additionally, it is possible to decommission a global custom IPv6 prefix while there are still active (commissioned) regional custom IPv6 prefixes or to decommission a regional custom IP prefix while the global prefix is still active (commissioned).
+It's possible to commission the global custom IPv6 prefix prior to the regional custom IPv6 prefixes. Doing so advertises the global range to the Internet before the regional prefixes are ready, so it isn't recommended for migrations of active ranges. You can decommission a global custom IPv6 prefix while there are still active (commissioned) regional custom IPv6 prefixes. Also, you can decommission a regional custom IP prefix while the global prefix is still active (commissioned).
> [!IMPORTANT] > As the global custom IPv6 prefix transitions to a **Commissioned** state, the range is advertised by Microsoft from the local Azure region and globally to the Internet by Microsoft's wide area network under Autonomous System Number (ASN) 8075. Advertising this same range to the Internet from a location other than Microsoft at the same time (for example, from a customer's on-premises network) could potentially create BGP routing instability or traffic loss. Plan any migration of an active range during a maintenance period to avoid impact.
virtual-network Create Custom Ip Address Prefix Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/create-custom-ip-address-prefix-portal.md
Title: Create a custom IPv4 address prefix - Azure portal
-description: Learn about how to onboard a custom IP address prefix using the Azure portal
-
+description: Learn how to onboard a custom IP address prefix using the Azure portal
++ Previously updated : 03/31/2022- Last updated : 08/24/2023 # Create a custom IPv4 address prefix using the Azure portal
-A custom IPv4 address prefix enables you to bring your own IPv4 ranges to Microsoft and associate it to your Azure subscription. The range would continue to be owned by you, though Microsoft would be permitted to advertise it to the Internet. A custom IP address prefix functions as a regional resource that represents a contiguous block of customer owned IP addresses.
+A custom IPv4 address prefix enables you to bring your own IPv4 ranges to Microsoft and associate it to your Azure subscription. You maintain ownership of the range, while Microsoft is permitted to advertise it to the Internet. A custom IP address prefix functions as a regional resource that represents a contiguous block of customer owned IP addresses.
The steps in this article detail the process to:
To utilize the Azure BYOIP feature, you must perform the following steps prior t
### Requirements and prefix readiness
-* The address range must be owned by you and registered under your name with the one of the 5 major Regional Internet Registries:
+* The address range must be owned by you and registered under your name with one of the five major Regional Internet Registries:
* [American Registry for Internet Numbers (ARIN)](https://www.arin.net/) * [Réseaux IP Européens Network Coordination Centre (RIPE NCC)](https://www.ripe.net/) * [Asia Pacific Network Information Centre Regional Internet Registries (APNIC)](https://www.apnic.net/)
To utilize the Azure BYOIP feature, you must perform the following steps prior t
* The address range must be no smaller than a /24 so it will be accepted by Internet Service Providers.
-* A Route Origin Authorization (ROA) document that authorizes Microsoft to advertise the address range must be filled out by the customer on the appropriate Routing Internet Registry (RIR) website or via their API. The RIR will require the ROA to be digitally signed with the Resource Public Key Infrastructure (RPKI) of your RIR.
+* A Route Origin Authorization (ROA) document that authorizes Microsoft to advertise the address range must be filled out by the customer on the appropriate Routing Internet Registry (RIR) website or via their API. The RIR requires the ROA to be digitally signed with the Resource Public Key Infrastructure (RPKI) of your RIR.
For this ROA:
Sign in to the [Azure portal](https://portal.azure.com).
6. Select **Create**.
-The range will be pushed to the Azure IP Deployment Pipeline. The deployment process is asynchronous. You can check the status by reviewing the **Commissioned state** field for the custom IP prefix.
+The range is pushed to the Azure IP Deployment Pipeline. The deployment process is asynchronous. You can check the status by reviewing the **Commissioned state** field for the custom IP prefix.
> [!NOTE] > The estimated time to complete the provisioning process is 30 minutes.
The range will be pushed to the Azure IP Deployment Pipeline. The deployment pro
## Create a public IP prefix from custom IP prefix
-When you create a prefix, you must create static IP addresses from the prefix. In this section, you'll create a static IP address from the prefix you created earlier.
+When you create a prefix, you must create static IP addresses from the prefix. In this section, you create a static IP address from the prefix you created earlier.
1. In the search box at the top of the portal, enter **Custom IP**.
When you create a prefix, you must create static IP addresses from the prefix. I
6. Select **Review + create**, and then **Create** on the following page.
-10. Repeat steps 1-5 to return to the **Overview** page for **myCustomIPPrefix**. You'll see **myPublicIPPrefix** listed under the **Associated public IP prefixes** section. You can now allocate standard SKU public IP addresses from this prefix. For more information, see [Create a static public IP address from a prefix](manage-public-ip-address-prefix.md#create-a-static-public-ip-address-from-a-prefix).
+10. Repeat steps 1-5 to return to the **Overview** page for **myCustomIPPrefix**. You see **myPublicIPPrefix** listed under the **Associated public IP prefixes** section. You can now allocate standard SKU public IP addresses from this prefix. For more information, see [Create a static public IP address from a prefix](manage-public-ip-address-prefix.md#create-a-static-public-ip-address-from-a-prefix).
## Commission the custom IP address prefix
The operation is asynchronous. You can check the status by reviewing the **Commi
> The estimated time to fully complete the commissioning process is 3-4 hours. > [!IMPORTANT]
-> As the custom IP prefix transitions to a **Commissioned** state, the range is being advertised with Microsoft from the local Azure region and globally to the Internet by Microsoft's wide area network under Autonomous System Number (ASN) 8075. Advertising this same range to the Internet from a location other than Microsoft at the same time could potentially create BGP routing instability or traffic loss. For example, a customer on-premises building. Plan any migration of an active range during a maintenance period to avoid impact. To prevent these issues during initial deployment, you can choose the regional only commissioning option where your custom IP prefix will only be advertised within the Azure region it is deployed in. See [Manage a custom IP address prefix (BYOIP)](manage-custom-ip-address-prefix.md) for more information.
+> As the custom IP prefix transitions to a **Commissioned** state, the range is advertised by Microsoft from the local Azure region and globally to the Internet by Microsoft's wide area network under Autonomous System Number (ASN) 8075. Advertising this same range to the Internet from a location other than Microsoft at the same time (for example, from a customer's on-premises network) could potentially create BGP routing instability or traffic loss. Plan any migration of an active range during a maintenance period to avoid impact. To prevent these issues during initial deployment, you can choose the regional only commissioning option, where your custom IP prefix is only advertised within the Azure region it's deployed in. For more information, see [Manage a custom IP address prefix (BYOIP)](manage-custom-ip-address-prefix.md).
## Next steps
virtual-network Create Custom Ip Address Prefix Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/create-custom-ip-address-prefix-powershell.md
Title: Create a custom IP address prefix - Azure PowerShell
-description: Learn about how to create a custom IPv4 address prefix using Azure PowerShell
-
+description: Learn how to create a custom IPv4 address prefix using Azure PowerShell
++ Previously updated : 03/31/2022- Last updated : 08/24/2023 # Create a custom IPv4 address prefix using Azure PowerShell
-A custom IPv4 address prefix enables you to bring your own IPv4 ranges to Microsoft and associate it to your Azure subscription. The range would continue to be owned by you, though Microsoft would be permitted to advertise it to the Internet. A custom IP address prefix functions as a regional resource that represents a contiguous block of customer owned IP addresses.
+A custom IPv4 address prefix enables you to bring your own IPv4 ranges to Microsoft and associate it to your Azure subscription. You maintain ownership of the range, while Microsoft is permitted to advertise it to the Internet. A custom IP address prefix functions as a regional resource that represents a contiguous block of customer owned IP addresses.
The steps in this article detail the process to:
The steps in this article detail the process to:
- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). - Azure PowerShell installed locally or Azure Cloud Shell. - Sign in to Azure PowerShell and ensure you've selected the subscription with which you want to use this feature. For more information, see [Sign in with Azure PowerShell](/powershell/azure/authenticate-azureps).-- Ensure your Az.Network module is 5.1.1 or later. To verify the installed module, use the command Get-InstalledModule -Name "Az.Network". If the module requires an update, use the command Update-Module -Name "Az.Network" if necessary.
+- Ensure your `Az.Network` module is 5.1.1 or later. To verify the installed module, use the command `Get-InstalledModule -Name "Az.Network"`. If the module requires an update, use the command `Update-Module -Name "Az.Network"` if necessary.
- A customer owned IPv4 range to provision in Azure. - A sample customer range (1.2.3.0/24) is used for this example. This range won't be validated by Azure. Replace the example range with yours.
To utilize the Azure BYOIP feature, you must perform the following steps prior t
### Requirements and prefix readiness
-* The address range must be owned by you and registered under your name with the one of the 5 major Regional Internet Registries:
+* The address range must be owned by you and registered under your name with one of the five major Regional Internet Registries:
* [American Registry for Internet Numbers (ARIN)](https://www.arin.net/) * [Réseaux IP Européens Network Coordination Centre (RIPE NCC)](https://www.ripe.net/) * [Asia Pacific Network Information Centre Regional Internet Registries (APNIC)](https://www.apnic.net/)
$prefix =@{
$myCustomIpPrefix = New-AzCustomIPPrefix @prefix -Zone 1,2,3 ```
-The range will be pushed to the Azure IP Deployment Pipeline. The deployment process is asynchronous. To determine the status, execute the following command:
+The range is pushed to the Azure IP Deployment Pipeline. The deployment process is asynchronous. To determine the status, execute the following command:
```azurepowershell-interactive Get-AzCustomIpPrefix -ResourceId $myCustomIpPrefix.Id
As before, the operation is asynchronous. Use [Get-AzCustomIpPrefix](/powershell
> The estimated time to fully complete the commissioning process is 3-4 hours. > [!IMPORTANT]
-> As the custom IP prefix transitions to a **Commissioned** state, the range is being advertised with Microsoft from the local Azure region and globally to the Internet by Microsoft's wide area network under Autonomous System Number (ASN) 8075. Advertising this same range to the Internet from a location other than Microsoft at the same time could potentially create BGP routing instability or traffic loss. For example, a customer on-premises building. Plan any migration of an active range during a maintenance period to avoid impact. Additionally, you could take advantage of the regional commissioning feature to put a custom IP prefix into a state where it is only advertised within the Azure region it is deployed in-- see [Manage a custom IP address prefix (BYOIP)](manage-custom-ip-address-prefix.md) for more information.
+> As the custom IP prefix transitions to a **Commissioned** state, the range is advertised by Microsoft from the local Azure region and globally to the Internet by Microsoft's wide area network under Autonomous System Number (ASN) 8075. Advertising this same range to the Internet from a location other than Microsoft at the same time (for example, from a customer's on-premises network) could potentially create BGP routing instability or traffic loss. Plan any migration of an active range during a maintenance period to avoid impact. Additionally, you can take advantage of the regional commissioning feature to put a custom IP prefix into a state where it's only advertised within the Azure region it's deployed in. For more information, see [Manage a custom IP address prefix (BYOIP)](manage-custom-ip-address-prefix.md).
## Next steps
virtual-network Create Public Ip Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/create-public-ip-cli.md
Title: 'Quickstart: Create a public IP - Azure CLI'
-description: Learn how to create a public IP using the Azure CLI
+description: Learn how to create a public IP address using the Azure CLI
-++ Previously updated : 10/01/2021- Last updated : 08/24/2023 ms.devlang: azurecli # Quickstart: Create a public IP address using the Azure CLI
-In this quickstart, you'll learn how to create an Azure public IP address. Public IP addresses in Azure are used for public connections to Azure resources. Public IP addresses are available in two SKUs: basic, and standard. Two tiers of public IP addresses are available: regional, and global. The routing preference of a public IP address is set when created. Internet routing and Microsoft Network routing are the available choices.
+In this quickstart, you learn how to create an Azure public IP address. Public IP addresses in Azure are used for public connections to Azure resources. Public IP addresses are available in two SKUs: basic and standard. Two tiers of public IP addresses are available: regional and global. The routing preference of a public IP address is set when it's created. Internet routing and Microsoft Network routing are the available choices.
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)]
To create an IPv6 address, modify the **`--version`** parameter to **IPv6**.
# [**Basic SKU**](#tab/create-public-ip-basic)
-In this section, you'll create a basic IP. Basic public IPs don't support availability zones.
+In this section, you create a basic IP. Basic public IPs don't support availability zones.
Use [az network public-ip create](/cli/azure/network/public-ip#az-network-public-ip-create) to create a basic static public IPv4 address named **myBasicPublicIP** in **QuickStartCreateIP-rg**.
If it's acceptable for the IP address to change over time, **Dynamic** IP assign
## Create a zonal or no-zone IP address
-In this section, you'll learn how to create a zonal or no-zone public IP address.
+In this section, you learn how to create a zonal or no-zone public IP address.
# [**Zonal**](#tab/create-public-ip-zonal)
To create an IPv6 address, modify the **`--version`** parameter to **IPv6**.
# [**Non-zonal**](#tab/create-public-ip-non-zonal)
-In this section, you'll create a non-zonal IP address.
+In this section, you create a non-zonal IP address.
>[!NOTE] >The following command works for API version 2020-08-01 or later. For more information about the API version currently being used, please refer to [Resource Providers and Types](../../azure-resource-manager/management/resource-providers-and-types.md).
Standard SKU static public IPv4 addresses support Routing Preference or the Glob
# [**Routing Preference**](#tab/routing-preference)
-By default, the routing preference for public IP addresses is set to "Microsoft network", which delivers traffic over Microsoft's global wide area network to the user.
+By default, the routing preference for public IP addresses is set to **Microsoft network**, which delivers traffic over Microsoft's global wide area network to the user.
The selection of **Internet** minimizes travel on Microsoft's network, instead using the transit ISP network to deliver traffic at a cost-optimized rate.
virtual-network Create Public Ip Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/create-public-ip-portal.md
Title: 'Quickstart: Create a public IP address - Azure portal' description: In this quickstart, you learn how to create a public IP address for a Standard SKU and a Basic SKU. You also learn about routing preferences and tiers.--++ Previously updated : 03/24/2023 Last updated : 08/24/2023
virtual-network Create Public Ip Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/create-public-ip-powershell.md
Title: 'Quickstart: Create a public IP - PowerShell' description: In this quickstart, learn how to create a public IP using Azure PowerShell-++ Previously updated : 10/01/2021- Last updated : 08/24/2023 # Quickstart: Create a public IP address using PowerShell
-In this quickstart, you'll learn how to create an Azure public IP address. Public IP addresses in Azure are used for public connections to Azure resources. Public IP addresses are available in two SKUs: basic, and standard. Two tiers of public IP addresses are available: regional, and global. The routing preference of a public IP address is set when created. Internet routing and Microsoft Network routing are the available choices.
+In this quickstart, you learn how to create an Azure public IP address. Public IP addresses in Azure are used for public connections to Azure resources. Public IP addresses are available in two SKUs: basic and standard. Two tiers of public IP addresses are available: regional and global. The routing preference of a public IP address is set when it's created. Internet routing and Microsoft Network routing are the available choices.
## Prerequisites
New-AzResourceGroup @rg
> >The following command works for Az.Network module version 4.5.0 or later. For more information about the PowerShell modules currently being used, please refer to the [PowerShellGet documentation](/powershell/module/powershellget/).
-In this section, you'll create a public IP with zones. Public IP addresses can be zone-redundant or zonal.
+In this section, you create a public IP with zones. Public IP addresses can be zone-redundant or zonal.
Use [New-AzPublicIpAddress](/powershell/module/az.network/new-azpublicipaddress) to create a standard zone-redundant public IPv4 address named **myStandardPublicIP** in **QuickStartCreateIP-rg**.
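A minimal splatting sketch of that call follows; the location is an assumed placeholder, while the name and resource group match the article's examples.

```azurepowershell-interactive
$ip = @{
    Name              = 'myStandardPublicIP'
    ResourceGroupName = 'QuickStartCreateIP-rg'
    Location          = 'eastus2'   # placeholder region
    Sku               = 'Standard'
    AllocationMethod  = 'Static'
    IpAddressVersion  = 'IPv4'
    Zone              = 1, 2, 3     # zone-redundant
}
New-AzPublicIpAddress @ip
```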
New-AzPublicIpAddress @ip
>[!NOTE] >Standard SKU public IP is recommended for production workloads. For more information about SKUs, see **[Public IP addresses](public-ip-addresses.md)**.
-In this section, you'll create a basic IP. Basic public IPs don't support availability zones.
+In this section, you create a basic IP. Basic public IPs don't support availability zones.
Use [New-AzPublicIpAddress](/powershell/module/az.network/new-azpublicipaddress) to create a basic static public IPv4 address named **myBasicPublicIP** in **QuickStartCreateIP-rg**.
If it's acceptable for the IP address to change over time, **Dynamic** IP assign
## Create a zonal or no-zone public IP address
-In this section, you'll learn how to create a zonal or no-zone public IP address.
+In this section, you learn how to create a zonal or no-zone public IP address.
# [**Zonal**](#tab/create-public-ip-zonal)
New-AzPublicIpAddress @ip
# [**Non-zonal**](#tab/create-public-ip-non-zonal)
-In this section, you'll create a non-zonal IP address.
+In this section, you create a non-zonal IP address.
>[!NOTE] >The following command works for Az.Network module version 4.5.0 or later. For more information about the PowerShell modules currently being used, please refer to the [PowerShellGet documentation](/powershell/module/powershellget/).
Standard SKU static public IPv4 addresses support Routing Preference or the Glob
# [**Routing Preference**](#tab/routing-preference)
-By default, the routing preference for public IP addresses is set to "Microsoft network", which delivers traffic over Microsoft's global wide area network to the user.
+By default, the routing preference for public IP addresses is set to **Microsoft network**, which delivers traffic over Microsoft's global wide area network to the user.
The selection of **Internet** minimizes travel on Microsoft's network, instead using the transit ISP network to deliver traffic at a cost-optimized rate.
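As a sketch, the **Internet** routing preference is expressed as an IP tag on the public IP address; the resource names and location here are placeholders.

```azurepowershell-interactive
# Tag the public IP so its traffic egresses via the transit ISP network.
$tag = New-AzPublicIpTag -IpTagType 'RoutingPreference' -Tag 'Internet'
New-AzPublicIpAddress -Name myInternetRoutedIP -ResourceGroupName QuickStartCreateIP-rg `
    -Location eastus2 -Sku Standard -AllocationMethod Static -IpTag $tag
```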
virtual-network Create Public Ip Prefix Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/create-public-ip-prefix-cli.md
Title: 'Quickstart: Create a public IP address prefix - Azure CLI'
description: Learn how to create a public IP address prefix using the Azure CLI. -++ Previously updated : 10/01/2021- Last updated : 08/24/2023 ms.devlang: azurecli
Create a resource group with [az group create](/cli/azure/group#az-group-create)
## Create a public IP address prefix
-In this section, you'll create a zone redundant, zonal, and non-zonal public IP prefix using Azure PowerShell.
+In this section, you create a zone redundant, zonal, and non-zonal public IP prefix using the Azure CLI.
The prefixes in the examples are:
The removal of the **`--zone`** parameter is the default selection for standard
-# [**Routing Preference Interent IPv4 prefix**](#tab/ipv4-routing-pref)
+# [**Routing Preference Internet IPv4 prefix**](#tab/ipv4-routing-pref)
To create an IPv4 public IP prefix with routing preference Internet, enter **RoutingPreference=Internet** in the **`--ip-tags`** parameter.
The removal of the **`--zone`** parameter is the default selection for standard
## Create a static public IP address from a prefix
-Once you create a prefix, you must create static IP addresses from the prefix. In this section, you'll create a static IP address from the prefix you created earlier.
+Once you create a prefix, you must create static IP addresses from the prefix. In this section, you create a static IP address from the prefix you created earlier.
Create a public IP address with [az network public-ip create](/cli/azure/network/public-ip#az-network-public-ip-create) in the **myPublicIpPrefix** prefix.
To create an IPv6 public IP prefix, enter **IPv6** in the **`--version`** parameter.
## Delete a prefix
-In this section, you'll learn how to delete a prefix.
+In this section, you learn how to delete a prefix.
To delete a public IP prefix, use [az network public-ip prefix delete](/cli/azure/network/public-ip/prefix#az-network-public-ip-prefix-delete).
virtual-network Create Public Ip Prefix Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/create-public-ip-prefix-portal.md
Title: 'Quickstart: Create a public IP address prefix - Azure portal'
description: Learn how to create a public IP address prefix using the Azure portal. -++ Previously updated : 06/05/2023- Last updated : 08/24/2023
virtual-network Create Public Ip Prefix Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/create-public-ip-prefix-powershell.md
Title: 'Quickstart: Create a public IP address prefix - PowerShell'
description: Learn how to create a public IP address prefix using PowerShell. -++ Previously updated : 10/01/2021- Last updated : 08/24/2023
New-AzResourceGroup @rg
## Create a public IP address prefix
-In this section, you'll create a zone redundant, zonal, and non-zonal public IP prefix using Azure PowerShell.
+In this section, you create a zone redundant, zonal, and non-zonal public IP prefix using Azure PowerShell.
The prefixes in the examples are:
The removal of the **`-Zone`** parameter is the default selection for standard p
## Create a static public IP address from a prefix
-Once you create a prefix, you must create static IP addresses from the prefix. In this section, you'll create a static IP address from the prefix you created earlier.
+Once you create a prefix, you must create static IP addresses from the prefix. In this section, you create a static IP address from the prefix you created earlier.
Create a public IP address with [New-AzPublicIpAddress](/powershell/module/az.network/new-azpublicipaddress) in the **myPublicIpPrefix** prefix.
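A minimal sketch, assuming the prefix lives in a resource group named QuickStartCreateIPPrefix-rg (a placeholder):

```azurepowershell-interactive
# Allocate a static standard-SKU public IP address from the existing prefix.
$prefix = Get-AzPublicIpPrefix -Name myPublicIpPrefix -ResourceGroupName QuickStartCreateIPPrefix-rg
New-AzPublicIpAddress -Name myPublicIP -ResourceGroupName QuickStartCreateIPPrefix-rg `
    -Location eastus2 -Sku Standard -AllocationMethod Static -PublicIpPrefix $prefix
```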
New-AzPublicIpAddress @ipv6
## Delete a prefix
-In this section, you'll learn how to delete a prefix.
+In this section, you learn how to delete a prefix.
To delete a public IP prefix, use [Remove-AzPublicIpPrefix](/powershell/module/az.network/remove-azpublicipprefix).
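For example (placeholder resource group name; any public IP addresses allocated from the prefix must be deleted first):

```azurepowershell-interactive
Remove-AzPublicIpPrefix -Name myPublicIpPrefix -ResourceGroupName QuickStartCreateIPPrefix-rg
```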
virtual-network Create Public Ip Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/create-public-ip-template.md
Title: 'Quickstart: Create a public IP using a Resource Manager template'
description: Learn how to create a public IP using a Resource Manager template -++ Previously updated : 10/01/2021- Last updated : 08/24/2023
For more information on resources this public IP can be associated to and the di
## Create standard SKU public IP with zones
-In this section, you'll create a public IP with zones. Public IP addresses can be zone-redundant or zonal.
+In this section, you create a public IP with zones. Public IP addresses can be zone-redundant or zonal.
### Zone redundant
Template section to add:
## Create standard public IP without zones
-In this section, you'll create a non-zonal IP address.
+In this section, you create a non-zonal IP address.
The code in this section creates a standard no-zone public IPv4 address named **myStandardPublicIP**. The code section is valid for all regions with or without [Availability Zones](../../availability-zones/az-overview.md?toc=%2fazure%2fvirtual-network%2ftoc.json#availability-zones).
Template section to add:
## Create a basic public IP
-In this section, you'll create a basic IP. Basic public IPs don't support availability zones.
+In this section, you create a basic IP. Basic public IPs don't support availability zones.
The code in this section creates a basic public IPv4 address named **myBasicPublicIP**.
Standard SKU static public IPv4 addresses support Routing Preference or the Glob
### Routing preference
-By default, the routing preference for public IP addresses is set to "Microsoft network", which delivers traffic over Microsoft's global wide area network to the user.
+By default, the routing preference for public IP addresses is set to **Microsoft network**, which delivers traffic over Microsoft's global wide area network to the user.
The selection of **Internet** minimizes travel on Microsoft's network, instead using the transit ISP network to deliver traffic at a cost-optimized rate.
virtual-network Create Vm Dual Stack Ipv6 Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/create-vm-dual-stack-ipv6-cli.md
Title: Create an Azure virtual machine with a dual-stack network - Azure CLI description: In this article, learn how to use the Azure CLI to create a virtual machine with a dual-stack virtual network in Azure.--++ Previously updated : 04/19/2023 Last updated : 08/24/2023 ms.devlang: azurecli
virtual-network Create Vm Dual Stack Ipv6 Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/create-vm-dual-stack-ipv6-portal.md
Title: Create an Azure virtual machine with a dual-stack network - Azure portal description: In this article, learn how to use the Azure portal to create a virtual machine with a dual-stack virtual network in Azure.--++ Previously updated : 08/17/2022 Last updated : 08/24/2023
virtual-network Create Vm Dual Stack Ipv6 Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/create-vm-dual-stack-ipv6-powershell.md
Title: Create an Azure virtual machine with a dual-stack network - PowerShell description: In this article, learn how to use PowerShell to create a virtual machine with a dual-stack virtual network in Azure.--++ Previously updated : 08/15/2022 Last updated : 08/24/2023
virtual-network Custom Ip Address Prefix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/custom-ip-address-prefix.md
Title: Custom IP address prefix (BYOIP)
description: Learn about what an Azure custom IP address prefix is and how it enables customers to utilize their own ranges in Azure. -++ Previously updated : 05/27/2023- Last updated : 08/24/2023 # Custom IP address prefix (BYOIP)
virtual-network Default Outbound Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/default-outbound-access.md
Title: Default outbound access in Azure
description: Learn about default outbound access in Azure. -++ Previously updated : 05/28/2023- Last updated : 08/24/2023
If you deploy a virtual machine in Azure and it doesn't have explicit outbound c
* Customers don't own the default outbound access IP. This IP may change, and any dependency on it could cause issues in the future.
-## How can I disable default outbound access?
+## How can I transition to an explicit method of public connectivity (and disable default outbound access)?
There are multiple ways to turn off default outbound access:
There are multiple ways to turn off default outbound access:
* Use Flexible orchestration mode for Virtual Machine Scale Sets.
- * Flexible scale sets are secure by default. Any instances created via Flexible scale sets don't have the default outbound access IP associated with them. For more information, see [Flexible orchestration mode for Virtual Machine Scale Sets](../../virtual-machines/flexible-virtual-machine-scale-sets.md)
+ * Flexible scale sets are secure by default. Any instances created via Flexible scale sets don't have the default outbound access IP associated with them, so an explicit outbound method is required. For more information, see [Flexible orchestration mode for Virtual Machine Scale Sets](../../virtual-machines/flexible-virtual-machine-scale-sets.md)
>[!Important]
-> When a backend pool is configured by IP address, it will use default outbound access due to an ongoing known issue. For secure by default configuration and applications with demanding outbound needs, associate a NAT gateway to the VMs in your load balancer's backend pool to secure traffic. See more on existing [known issues](../../load-balancer/whats-new.md#known-issues).
+> When a load balancer backend pool is configured by IP address, it will use default outbound access due to an ongoing known issue. For secure by default configuration and applications with demanding outbound needs, associate a NAT gateway to the VMs in your load balancer's backend pool to secure traffic. See more on existing [known issues](../../load-balancer/whats-new.md#known-issues).
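A minimal Azure PowerShell sketch of that NAT gateway association follows; all names, the region, and the address prefix are placeholders, and it assumes an existing virtual network and subnet.

```azurepowershell-interactive
# Create a NAT gateway with a static public IP and attach it to an existing subnet.
$pip = New-AzPublicIpAddress -Name myNatIP -ResourceGroupName myResourceGroup `
    -Location eastus2 -Sku Standard -AllocationMethod Static
$nat = New-AzNatGateway -Name myNatGateway -ResourceGroupName myResourceGroup `
    -Location eastus2 -Sku Standard -PublicIpAddress $pip
$vnet = Get-AzVirtualNetwork -Name myVNet -ResourceGroupName myResourceGroup
Set-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name mySubnet `
    -AddressPrefix 10.0.0.0/24 -NatGateway $nat
$vnet | Set-AzVirtualNetwork
```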
## If I need outbound access, what is the recommended way?
NAT gateway is the recommended approach to have explicit outbound connectivity.
## Constraints
-* Connectivity maybe needed for Windows Updates.
+* Public connectivity is required for Windows Activation and Windows Updates. We recommend that you set up an explicit form of public outbound connectivity.
* Default outbound access IP doesn't support fragmented packets.
virtual-network Ip Services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/ip-services-overview.md
Title: What is Azure Virtual Network IP Services? description: Overview of Azure Virtual Network IP Services. Learn how IP services work and how to use IP resources in Azure.--++ Last updated : 08/24/2023 Previously updated : 04/19/2023
virtual-network Ipv6 Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/ipv6-overview.md
Title: Overview of IPv6 for Azure Virtual Network description: IPv6 description of IPv6 endpoints and data paths in an Azure virtual network. -++ Last updated : 08/24/2023 Previously updated : 05/03/2023- # What is IPv6 for Azure Virtual Network?
The current IPv6 for Azure Virtual Network release has the following limitations
- While it's possible to create NSG rules for IPv4 and IPv6 within the same NSG, it isn't currently possible to combine an IPv4 subnet with an IPv6 subnet in the same rule when specifying IP prefixes. -- When using a dual stack configuration with a load balancer, health probes will not function for IPv6 if a Network Security Group is not active.
+- When using a dual stack configuration with a load balancer, health probes won't function for IPv6 if a Network Security Group isn't active.
- ICMPv6 isn't currently supported in Network Security Groups. - Azure Virtual WAN currently supports IPv4 traffic only. -- Azure Firewall doesn't currently support IPv6. It can operate in a dual stack VNet using only IPv4, but the firewall subnet must be IPv4-only.
+- Azure Firewall doesn't currently support IPv6. It can operate in a dual stack virtual network using only IPv4, but the firewall subnet must be IPv4-only.
## Pricing
virtual-network Ipv6 Virtual Machine Scale Set https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/ipv6-virtual-machine-scale-set.md
Title: Deploy virtual machine scale sets with IPv6 in Azure
description: This article shows how to deploy virtual machine scale sets with IPv6 in an Azure virtual network. -++ Last updated : 08/24/2023 Previously updated : 03/31/2020- # Deploy virtual machine scale sets with IPv6 in Azure
-This article shows you how to deploy a dual stack (IPv4 + IPv6) Virtual Machine Scale Set with a dual stack external load balancer in an Azure virtual network. The process to create an IPv6-capable virtual machine scale set is nearly identical to the process for creating individual VMs described [here](../../load-balancer/ipv6-configure-standard-load-balancer-template-json.md). You'll start with the steps that are similar to ones described for individual VMs:
+This article shows you how to deploy a dual stack (IPv4 + IPv6) Virtual Machine Scale Set with a dual stack external load balancer in an Azure virtual network. The process to create an IPv6-capable virtual machine scale set is nearly identical to the process for creating individual VMs described [here](../../load-balancer/ipv6-configure-standard-load-balancer-template-json.md). You start with steps that are similar to those described for individual VMs:
1. Create IPv4 and IPv6 Public IPs. 2. Create a dual stack load balancer. 3. Create network security group (NSG) rules.
virtual-network Manage Custom Ip Address Prefix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/manage-custom-ip-address-prefix.md
Title: Manage a custom IP address prefix
description: Learn about custom IP address prefixes and how to manage and delete them. -++ Last updated : 08/24/2023 Previously updated : 05/27/2023-+ # Manage a custom IP address prefix
virtual-network Manage Public Ip Address Prefix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/manage-public-ip-address-prefix.md
Title: Create, change, or delete an Azure public IP address prefix description: Learn about public IP address prefixes and how to create, change, or delete them.-++ Last updated : 08/24/2023 Previously updated : 03/30/2023- # Manage a public IP address prefix
virtual-network Monitor Public Ip Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/monitor-public-ip-reference.md
Title: Monitoring Public IP addresses data reference description: Important reference material needed when you monitor Public IP addresses -++ Last updated : 08/24/2023 - Previously updated : 06/29/2022 # Monitoring Public IP addresses data reference
virtual-network Monitor Public Ip https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/monitor-public-ip.md
Title: Monitoring Public IP addresses description: Start here to learn how to monitor Public IP addresses--++ Last updated : 08/24/2023 Previously updated : 06/29/2022 # Monitoring Public IP addresses
virtual-network Private Ip Addresses https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/private-ip-addresses.md
Title: Private IP addresses in Azure description: Learn about private IP addresses in Azure.-++ Last updated : 08/24/2023 Previously updated : 05/03/2023- # Private IP addresses
virtual-network Public Ip Address Prefix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/public-ip-address-prefix.md
Title: Azure Public IP address prefix
description: Learn about what an Azure public IP address prefix is and how it can help you assign public IP addresses to your resources. -++ Last updated : 08/24/2023 Previously updated : 04/19/2023- # Public IP address prefix
virtual-network Public Ip Addresses https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/public-ip-addresses.md
Title: Public IP addresses in Azure
description: Learn about public IP addresses in Azure. - Previously updated : 05/28/2023-++ Last updated : 08/24/2023 # Public IP addresses
If a custom domain is desired for services that use a public IP, you can use [Az
Public IP addresses with a standard SKU can be created as nonzonal, zonal, or zone-redundant in [regions that support availability zones](../../availability-zones/az-region.md).
-A zone-redundant IP is created in all zones for a region and can survive any single zone failure. A zonal IP is tied to a specific availability zone, and shares fate with the health of the zone. A "nonzonal" public IP addresses are placed into a zone for you by Azure and doesn't give a guarantee of redundancy.
+A zone-redundant IP is created in all zones for a region and can survive any single zone failure. A zonal IP is tied to a specific availability zone, and shares fate with the health of the zone. A "nonzonal" public IP address is placed into a zone for you by Azure and doesn't give a guarantee of redundancy.
In regions without availability zones, all public IP addresses are created as nonzonal. Public IP addresses created in a region that is later upgraded to have availability zones remain nonzonal. A public IP's availability zone can't be changed after the public IP's creation.
virtual-network Public Ip Basic Upgrade Guidance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/public-ip-basic-upgrade-guidance.md
Title: Upgrading a basic public IP address to standard SKU - Guidance description: Overview of upgrade options and guidance for migrating basic public IP to standard public IP for future basic public IP address retirement- - Previously updated : 05/28/2023++ Last updated : 08/24/2023 #customer-intent: As an cloud engineer with Basic public IP services, I need guidance and direction on migrating my workloads off basic to Standard SKUs
virtual-network Public Ip Upgrade Classic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/public-ip-upgrade-classic.md
Title: Migrate a classic reserved IP address to a public IP address description: In this article, learn how to upgrade a classic deployment model reserved IP to an Azure Resource Manager public IP address.--++ Last updated : 08/24/2023 Previously updated : 05/20/2021
In this section, you'll use the Azure classic CLI to migrate a classic reserved
> The reserved IP must be removed from any cloud service that the IP address is associated to. ```azurecli-interactive
-azure network reserved-ip validate migration myReservedIP
+azure network reserved-ip validate-migration myReservedIP
``` The previous command displays any warnings and errors that block migration. If validation is successful, you can continue with the following steps to **Prepare** and **Commit** the migration:
virtual-network Public Ip Upgrade Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/public-ip-upgrade-cli.md
Title: 'Upgrade a public IP address - Azure CLI' description: In this article, learn how to upgrade a basic SKU public IP address using the Azure CLI.--++ Last updated : 08/24/2023 Previously updated : 10/28/2022 ms.devlang: azurecli
virtual-network Public Ip Upgrade Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/public-ip-upgrade-portal.md
Title: 'Upgrade a public IP address - Azure portal' description: In this article, you learn how to upgrade a basic SKU public IP address using the Azure portal.--++ Last updated : 08/24/2023 Previously updated : 10/28/2022
virtual-network Public Ip Upgrade Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/public-ip-upgrade-powershell.md
Title: 'Upgrade a public IP address - Azure PowerShell' description: In this article, you learn how to upgrade a basic SKU public IP address using Azure PowerShell.--++ Last updated : 08/24/2023 Previously updated : 10/28/2022
virtual-network Public Ip Upgrade Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/public-ip-upgrade-vm.md
Title: 'Upgrade public IP addresses attached to a VM from Basic to Standard'
description: This article shows you how to upgrade a public IP address attached to a VM to a standard public IP address + Last updated : 08/24/2023 Previously updated : 06/01/2023- # Upgrade public IP addresses attached to VM from Basic to Standard
There is no way to evaluate upgrading a Public IP without completing the action.
Yes, the process of upgrading a Zonal Basic SKU Public IP to a Zonal Standard SKU Public IP is identical and works in the script.
+## Use Resource Graph to list VMs with Public IPs requiring upgrade
+
+### Query to list virtual machines with Basic SKU public IP addresses
+
+This query returns a list of virtual machine IDs with Basic SKU public IP addresses attached.
+
+```kusto
+Resources
+| where type =~ 'microsoft.compute/virtualmachines'
+| project vmId = tolower(id), vmNics = properties.networkProfile.networkInterfaces
+| join (
+ Resources |
+ where type =~ 'microsoft.network/networkinterfaces' |
+ project nicVMId = tolower(tostring(properties.virtualMachine.id)), allVMNicID = tolower(id), nicIPConfigs = properties.ipConfigurations)
+ on $left.vmId == $right.nicVMId
+| join (
+ Resources
+ | where type =~ 'microsoft.network/publicipaddresses' and isnotnull(properties.ipConfiguration.id)
+ | where sku.name == 'Basic' // remove this line to list all VMs with public IPs, regardless of SKU
+ | project pipId = id, pipSku = sku.name, pipAssociatedNicId = tolower(tostring(split(properties.ipConfiguration.id, '/ipConfigurations/')[0])))
+ on $left.allVMNicID == $right.pipAssociatedNicId
+| project vmId, pipId, pipSku
+```
+
+### [Azure CLI](#tab/azure-cli)
+
+```azurecli-interactive
+az graph query -q "Resources | where type =~ 'microsoft.compute/virtualmachines' | project vmId = tolower(id), vmNics = properties.networkProfile.networkInterfaces | join (Resources | where type =~ 'microsoft.network/networkinterfaces' | project nicVMId = tolower(tostring(properties.virtualMachine.id)), allVMNicID = tolower(id), nicIPConfigs = properties.ipConfigurations) on \$left.vmId == \$right.nicVMId | join ( Resources | where type =~ 'microsoft.network/publicipaddresses' and isnotnull(properties.ipConfiguration.id) | where sku.name == 'Basic' | project pipId = id, pipSku = sku.name, pipAssociatedNicId = tolower(tostring(split(properties.ipConfiguration.id, '/ipConfigurations/')[0]))) on \$left.allVMNicID == \$right.pipAssociatedNicId | project vmId, pipId, pipSku"
+```
+
+### [Azure PowerShell](#tab/azure-powershell)
+
+```azurepowershell-interactive
+Search-AzGraph -Query "Resources | where type =~ 'microsoft.compute/virtualmachines' | project vmId = tolower(id), vmNics = properties.networkProfile.networkInterfaces | join (Resources | where type =~ 'microsoft.network/networkinterfaces' | project nicVMId = tolower(tostring(properties.virtualMachine.id)), allVMNicID = tolower(id), nicIPConfigs = properties.ipConfigurations) on `$left.vmId == `$right.nicVMId | join ( Resources | where type =~ 'microsoft.network/publicipaddresses' and isnotnull(properties.ipConfiguration.id) | where sku.name == 'Basic' | project pipId = id, pipSku = sku.name, pipAssociatedNicId = tolower(tostring(split(properties.ipConfiguration.id, '/ipConfigurations/')[0]))) on `$left.allVMNicID == `$right.pipAssociatedNicId | project vmId, pipId, pipSku"
+```
+
+### [Portal](#tab/azure-portal)
+
+Try this query in Azure Resource Graph Explorer:
+
+- Azure portal: <a href="https://portal.azure.com/?feature.customportal=false#blade/HubsExtension/ArgQueryBlade/query/Resources%0A%7C%20where%20type%20%3D~%20%27microsoft.compute%2Fvirtualmachines%27%0A%7C%20project%20vmId%20%3D%20tolower%28id%29%2C%20vmNics%20%3D%20properties.networkProfile.networkInterfaces%0A%7C%20join%20%28%0A%20%20Resources%20%7C%0A%20%20where%20type%20%3D~%20%27microsoft.network%2Fnetworkinterfaces%27%20%7C%0A%20%20project%20nicVMId%20%3D%20tolower%28tostring%28properties.virtualMachine.id%29%29%2C%20allVMNicID%20%3D%20tolower%28id%29%2C%20nicIPConfigs%20%3D%20properties.ipConfigurations%29%0A%20%20on%20%24left.vmId%20%3D%3D%20%24right.nicVMId%0A%7C%20join%20%28%0A%20%20Resources%0A%20%20%7C%20where%20type%20%3D~%20%27microsoft.network%2Fpublicipaddresses%27%20and%20isnotnull%28properties.ipConfiguration.id%29%0A%20%20%7C%20where%20sku.name%20%3D%3D%20%27Basic%27%0A%20%20%7C%20project%20pipId%20%3D%20id%2C%20pipSku%20%3D%20sku.name%2C%20pipAssociatedNicId%20%3D%20tolower%28tostring%28split%28properties.ipConfiguration.id%2C%20%27%2FipConfigurations%2F%27%29%5B0%5D%29%29%29%0A%20%20on%20%24left.allVMNicID%20%3D%3D%20%24right.pipAssociatedNicId%0A%7C%20project%20vmId%2C%20pipId%2C%20pipSku" target="_blank">portal.azure.com</a>
+- Azure Government portal: <a href="https://portal.azure.us/?feature.customportal=false#blade/HubsExtension/ArgQueryBlade/query/Resources%0A%7C%20where%20type%20%3D~%20%27microsoft.compute%2Fvirtualmachines%27%0A%7C%20project%20vmId%20%3D%20tolower%28id%29%2C%20vmNics%20%3D%20properties.networkProfile.networkInterfaces%0A%7C%20join%20%28%0A%20%20Resources%20%7C%0A%20%20where%20type%20%3D~%20%27microsoft.network%2Fnetworkinterfaces%27%20%7C%0A%20%20project%20nicVMId%20%3D%20tolower%28tostring%28properties.virtualMachine.id%29%29%2C%20allVMNicID%20%3D%20tolower%28id%29%2C%20nicIPConfigs%20%3D%20properties.ipConfigurations%29%0A%20%20on%20%24left.vmId%20%3D%3D%20%24right.nicVMId%0A%7C%20join%20%28%0A%20%20Resources%0A%20%20%7C%20where%20type%20%3D~%20%27microsoft.network%2Fpublicipaddresses%27%20and%20isnotnull%28properties.ipConfiguration.id%29%0A%20%20%7C%20where%20sku.name%20%3D%3D%20%27Basic%27%0A%20%20%7C%20project%20pipId%20%3D%20id%2C%20pipSku%20%3D%20sku.name%2C%20pipAssociatedNicId%20%3D%20tolower%28tostring%28split%28properties.ipConfiguration.id%2C%20%27%2FipConfigurations%2F%27%29%5B0%5D%29%29%29%0A%20%20on%20%24left.allVMNicID%20%3D%3D%20%24right.pipAssociatedNicId%0A%7C%20project%20vmId%2C%20pipId%2C%20pipSku" target="_blank">portal.azure.us</a>
+- Microsoft Azure operated by 21Vianet portal: <a href="https://portal.azure.cn/?feature.customportal=false#blade/HubsExtension/ArgQueryBlade/query/Resources%0A%7C%20where%20type%20%3D~%20%27microsoft.compute%2Fvirtualmachines%27%0A%7C%20project%20vmId%20%3D%20tolower%28id%29%2C%20vmNics%20%3D%20properties.networkProfile.networkInterfaces%0A%7C%20join%20%28%0A%20%20Resources%20%7C%0A%20%20where%20type%20%3D~%20%27microsoft.network%2Fnetworkinterfaces%27%20%7C%0A%20%20project%20nicVMId%20%3D%20tolower%28tostring%28properties.virtualMachine.id%29%29%2C%20allVMNicID%20%3D%20tolower%28id%29%2C%20nicIPConfigs%20%3D%20properties.ipConfigurations%29%0A%20%20on%20%24left.vmId%20%3D%3D%20%24right.nicVMId%0A%7C%20join%20%28%0A%20%20Resources%0A%20%20%7C%20where%20type%20%3D~%20%27microsoft.network%2Fpublicipaddresses%27%20and%20isnotnull%28properties.ipConfiguration.id%29%0A%20%20%7C%20where%20sku.name%20%3D%3D%20%27Basic%27%0A%20%20%7C%20project%20pipId%20%3D%20id%2C%20pipSku%20%3D%20sku.name%2C%20pipAssociatedNicId%20%3D%20tolower%28tostring%28split%28properties.ipConfiguration.id%2C%20%27%2FipConfigurations%2F%27%29%5B0%5D%29%29%29%0A%20%20on%20%24left.allVMNicID%20%3D%3D%20%24right.pipAssociatedNicId%0A%7C%20project%20vmId%2C%20pipId%2C%20pipSku" target="_blank">portal.azure.cn</a>
++ ## Next steps * [Upgrading a Basic public IP address to Standard SKU - Guidance](public-ip-basic-upgrade-guidance.md)
virtual-network Remove Public Ip Address Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/remove-public-ip-address-vm.md
Title: Dissociate a public IP address from an Azure VM
description: Learn how to dissociate a public IP address from an Azure virtual machine (VM) using the Azure portal, Azure CLI or Azure PowerShell. -++ Last updated : 08/24/2023 Previously updated : 12/16/2022-
virtual-network Routing Preference Azure Kubernetes Service Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/routing-preference-azure-kubernetes-service-cli.md
Title: 'Tutorial: Configure routing preference for an Azure Kubernetes Service - Azure CLI' description: Use this tutorial to learn how to configure routing preference for an Azure Kubernetes Service.--++ Last updated : 08/24/2023 Previously updated : 10/01/2021 ms.devlang: azurecli
virtual-network Routing Preference Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/routing-preference-cli.md
Title: Configure routing preference for a public IP address using Azure CLI
description: Learn how to create a public IP with an Internet traffic routing preference by using the Azure CLI. - Last updated : 08/24/2023++ Previously updated : 02/22/2021- # Configure routing preference for a public IP address using Azure CLI
virtual-network Routing Preference Mixed Network Adapter Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/routing-preference-mixed-network-adapter-portal.md
Title: 'Tutorial: Configure both routing preference options for a virtual machine - Azure portal' description: Use this tutorial to learn how to configure both routing preference options for a virtual machine using the Azure portal.-- Last updated : 08/24/2023++ Previously updated : 10/01/2021
virtual-network Routing Preference Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/routing-preference-overview.md
Title: Routing preference in Azure description: Learn about how you can choose how your traffic routes between Azure and the Internet with routing preference.- Last updated : 08/24/2023++ # Customer intent: As an Azure customer, I want to learn more about routing choices for my internet egress traffic. Previously updated : 05/08/2023-
virtual-network Routing Preference Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/routing-preference-portal.md
Title: Configure routing preference for a public IP address - Azure portal description: Learn how to create a public IP with an Internet traffic routing preference - Last updated : 08/24/2023++ Previously updated : 02/22/2021- # Configure routing preference for a public IP address using the Azure portal
virtual-network Routing Preference Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/routing-preference-powershell.md
Title: Configure routing preference for a public IP address - Azure PowerShell
description: Learn how to Configure routing preference for a public IP address using Azure PowerShell. - Last updated : 08/24/2023++ Previously updated : 02/22/2021-
virtual-network Routing Preference Unmetered https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/routing-preference-unmetered.md
Title: What is routing preference unmetered? description: Learn about how you can configure routing preference for your resources egressing data to CDN provider.- Last updated : 08/24/2023++ # Customer intent: As an Azure customer, I want to learn more about enabling routing preference for my CDN origin resources. Previously updated : 05/08/2023- # What is routing preference unmetered?
virtual-network Tutorial Routing Preference Virtual Machine Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/tutorial-routing-preference-virtual-machine-portal.md
Title: 'Tutorial: Configure routing preference for a VM - Azure portal' description: In this tutorial, learn how to create a VM with a public IP address with routing preference choice using the Azure portal.-- Last updated : 08/24/2023++ Previously updated : 10/01/2021
virtual-network Virtual Network Deploy Static Pip Arm Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/virtual-network-deploy-static-pip-arm-cli.md
Title: Create a VM with a static public IP address - Azure CLI description: Create a virtual machine (VM) with a static public IP address using the Azure CLI. Static public IP addresses are addresses that never change.-- Last updated : 08/24/2023++ Previously updated : 10/01/2021 ms.devlang: azurecli
virtual-network Virtual Network Deploy Static Pip Arm Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/virtual-network-deploy-static-pip-arm-portal.md
Title: Create a VM with a static public IP address - Azure portal description: Learn how to create a VM with a static public IP address using the Azure portal. - Last updated : 08/24/2023++ Previously updated : 12/16/2022- # Create a virtual machine with a static public IP address using the Azure portal
virtual-network Virtual Network Deploy Static Pip Arm Ps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/virtual-network-deploy-static-pip-arm-ps.md
Title: Create a VM with a static public IP address - Azure PowerShell description: Create a virtual machine (VM) with a static public IP address using Azure PowerShell. Static public IP addresses are addresses that never change.-- Last updated : 08/24/2023++ Previously updated : 10/01/2021
virtual-network Virtual Network Multiple Ip Addresses Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/virtual-network-multiple-ip-addresses-cli.md
Title: Assign multiple IP addresses to VMs - Azure CLI
description: Learn how to create a virtual machine with multiple IP addresses using the Azure CLI. - Last updated : 08/24/2023++ Previously updated : 04/19/2023- # Assign multiple IP addresses to virtual machines using the Azure CLI
virtual-network Virtual Network Multiple Ip Addresses Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/virtual-network-multiple-ip-addresses-portal.md
Title: Assign multiple IP addresses to VMs - Azure portal description: Learn how to assign multiple IP addresses to a virtual machine using the Azure portal. - Last updated : 08/24/2023++ Previously updated : 12/08/2022- # Assign multiple IP addresses to virtual machines using the Azure portal
virtual-network Virtual Network Multiple Ip Addresses Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/virtual-network-multiple-ip-addresses-powershell.md
Title: Assign multiple IP addresses to VMs - Azure PowerShell
description: Learn how to create a virtual machine with multiple IP addresses using Azure PowerShell. - Last updated : 08/24/2023++ Previously updated : 12/12/2022-
virtual-network Virtual Network Network Interface Addresses https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/virtual-network-network-interface-addresses.md
Title: Configure IP addresses for an Azure network interface description: Learn how to add, change, and remove private and public IP addresses for a network interface. - Last updated : 08/24/2023++ Previously updated : 12/06/2022-
virtual-network Virtual Network Public Ip Address https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/virtual-network-public-ip-address.md
Title: Create, change, or delete an Azure public IP address
description: Manage public IP addresses. Learn how a public IP address is a resource with configurable settings. - Last updated : 08/24/2023++ Previously updated : 05/28/2023-
virtual-network Virtual Networks Static Private Ip Arm Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/virtual-networks-static-private-ip-arm-cli.md
Title: 'Create a VM with a static private IP address - Azure CLI' description: Learn how to create a virtual machine with a static private IP address using the Azure CLI.-- Last updated : 08/24/2023++ Previously updated : 10/28/2022 ms.devlang: azurecli
virtual-network Virtual Networks Static Private Ip Arm Pportal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/virtual-networks-static-private-ip-arm-pportal.md
Title: 'Create a VM with a static private IP address - Azure portal' description: Learn how to create a virtual machine with a static private IP address using the Azure portal.-- Last updated : 08/24/2023++ Previously updated : 03/17/2023
virtual-network Virtual Networks Static Private Ip Arm Ps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/virtual-networks-static-private-ip-arm-ps.md
Title: 'Create a VM with a static private IP address - Azure PowerShell' description: Learn how to create a virtual machine with a static private IP address using Azure PowerShell.-- Last updated : 08/24/2023++ Previously updated : 10/28/2022
virtual-network Virtual Networks Static Private Ip Classic Pportal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/virtual-networks-static-private-ip-classic-pportal.md
Title: Configure private IP addresses for VMs (Classic) - Azure portal description: Learn how to configure private IP addresses for virtual machines (Classic) using the Azure portal. - Last updated : 08/24/2023++ Previously updated : 03/22/2023-
virtual-network Virtual Networks Static Private Ip Classic Ps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/virtual-networks-static-private-ip-classic-ps.md
Title: Configure private IP addresses for VMs (Classic) - Azure PowerShell description: Learn how to configure private IP addresses for virtual machines (Classic) using PowerShell. - Last updated : 08/24/2023++ Previously updated : 03/22/2023-
virtual-network Manage Subnet Delegation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/manage-subnet-delegation.md
Previously updated : 02/09/2023 Last updated : 08/23/2023 -+ # Add or remove a subnet delegation
Subnet delegation gives explicit permissions to the service to create service-sp
## Prerequisites
+# [**Portal**](#tab/manage-subnet-delegation-portal)
+ - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). - If you didn't create the subnet you would like to delegate to an Azure service, you need the following permission: `Microsoft.Network/virtualNetworks/subnets/write`. The built-in [Network Contributor](../role-based-access-control/built-in-roles.md?toc=%2fazure%2fvirtual-network%2ftoc.json#network-contributor) role also contains the necessary permissions.
+# [**PowerShell**](#tab/manage-subnet-delegation-powershell)
-- This how-to article requires version 2.31.0 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+
+- If you didn't create the subnet you would like to delegate to an Azure service, you need the following permission: `Microsoft.Network/virtualNetworks/subnets/write`. The built-in [Network Contributor](../role-based-access-control/built-in-roles.md?toc=%2fazure%2fvirtual-network%2ftoc.json#network-contributor) role also contains the necessary permissions.
- Azure PowerShell installed locally or Azure Cloud Shell.
Subnet delegation gives explicit permissions to the service to create service-sp
- Ensure your `Az.Network` module is 4.3.0 or later. To verify the installed module, use the command `Get-InstalledModule -Name "Az.Network"`. If the module requires an update, use the command `Update-Module -Name Az.Network`. If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 5.4.1 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
+# [**Azure CLI**](#tab/manage-subnet-delegation-cli)
-## Create the virtual network
-
-In this section, you create a virtual network and the subnet that you'll later delegate to an Azure service.
-# [**Portal**](#tab/manage-subnet-delegation-portal)
-
-1. Sign-in to the [Azure portal](https://portal.azure.com).
-
-1. In the search box at the top of the portal, enter **Virtual network**. Select **Virtual networks** in the search results.
-
-1. Select **+ Create**.
-
-1. Enter or select the following information in the **Basics** tab of **Create virtual network**:
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
- | Setting | Value |
- | - | -- |
- | **Project details** | |
- | Subscription | Select your subscription. |
- | Resource group | Select **Create new**. </br> Enter **myResourceGroup** in **Name**. </br> Select **OK**. |
- | **Instance details** | |
- | Name | Enter **myVNet**. |
- | Region | Select **East US 2** |
+- If you didn't create the subnet you would like to delegate to an Azure service, you need the following permission: `Microsoft.Network/virtualNetworks/subnets/write`. The built-in [Network Contributor](../role-based-access-control/built-in-roles.md?toc=%2fazure%2fvirtual-network%2ftoc.json#network-contributor) role also contains the necessary permissions.
-1. Select **Next: Security**, then **Next: IP Addresses**.
-1. Select **Add an IP address space**, in the **Add an IP address space** pane, enter or select the following information, then select **Add**.
+- This how-to article requires version 2.31.0 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
- | Setting | Value |
- | - | -- |
- | Address space type | Leave as default **IPV6**. |
- | Starting address | Enter **10.1.0.0**. |
- | Address space size | Select **/16**. |
+
-1. Select **+ Add subnet** in the new IP address space.
+## Create the virtual network
-1. Enter or select the following information in **Add a subnet**. Then select **Add**.
+In this section, you create a virtual network and the subnet that you delegate to an Azure service.
- | Setting | Value |
- | - | -- |
- | Name | Enter **mySubnet**. |
- | Starting address | Enter **10.1.0.0**. |
- | Subnet size | Select **/16**. |
+# [**Portal**](#tab/manage-subnet-delegation-portal)
-1. Select **Review + create**, then select **Create**.
# [**PowerShell**](#tab/manage-subnet-delegation-powershell) ### Create a resource group
-Create a resource group with [New-AzResourceGroup](/cli/azure/group). An Azure resource group is a logical container into which Azure resources are deployed and managed.
-The following example creates a resource group named **myResourceGroup** in the **eastus2** location:
+Create a resource group with [`New-AzResourceGroup`](/powershell/module/az.resources/new-azresourcegroup). An Azure resource group is a logical container into which Azure resources are deployed and managed.
+
+The following example creates a resource group named **test-rg** in the **eastus2** location:
```azurepowershell-interactive $rg = @{
- Name = 'myResourceGroup'
+ Name = 'test-rg'
Location = 'eastus2' } New-AzResourceGroup @rg ``` ### Create virtual network
-Create a virtual network named **myVnet** with a subnet named **mySubnet** using [New-AzVirtualNetworkSubnetConfig](/powershell/module/az.network/new-azvirtualnetworksubnetconfig) in the **myResourceGroup** using [New-AzVirtualNetwork](/powershell/module/az.network/new-azvirtualnetwork).
+Create a virtual network named **vnet-1** with a subnet named **subnet-1** (defined with [`New-AzVirtualNetworkSubnetConfig`](/powershell/module/az.network/new-azvirtualnetworksubnetconfig)) in the **test-rg** resource group using [`New-AzVirtualNetwork`](/powershell/module/az.network/new-azvirtualnetwork).
-The IP address space for the virtual network is **10.1.0.0/16**. The subnet within the virtual network is **10.1.0.0/24**.
+The IP address space for the virtual network is **10.0.0.0/16**. The subnet within the virtual network is **10.0.0.0/24**.
```azurepowershell-interactive $sub = @{
- Name = 'mySubnet'
- AddressPrefix = '10.1.0.0/24'
+ Name = 'subnet-1'
+ AddressPrefix = '10.0.0.0/24'
} $subnet = New-AzVirtualNetworkSubnetConfig @sub $net = @{
- Name = 'myVNet'
- ResourceGroupName = 'myResourceGroup'
+ Name = 'vnet-1'
+ ResourceGroupName = 'test-rg'
Location = 'eastus2'
- AddressPrefix = '10.1.0.0/16'
+ AddressPrefix = '10.0.0.0/16'
Subnet = $subnet } New-AzVirtualNetwork @net
New-AzVirtualNetwork @net
### Create a resource group
-Create a resource group with [az group create](/cli/azure/group). An Azure resource group is a logical container into which Azure resources are deployed and managed.
+Create a resource group with [`az group create`](/cli/azure/group). An Azure resource group is a logical container into which Azure resources are deployed and managed.
-The following example creates a resource group named **myResourceGroup** in the **eastu2** location:
+The following example creates a resource group named **test-rg** in the **eastus2** location:
```azurecli-interactive az group create \
- --name myResourceGroup \
+ --name test-rg \
--location eastus2 ``` ### Create a virtual network
-Create a virtual network named **myVnet** with a subnet named **mySubnet** in the **myResourceGroup** using [az network vnet create](/cli/azure/network/vnet).
+
+Create a virtual network named **vnet-1** with a subnet named **subnet-1** in the **test-rg** resource group using [`az network vnet create`](/cli/azure/network/vnet).
```azurecli-interactive az network vnet create \
- --resource-group myResourceGroup \
+ --resource-group test-rg \
--location eastus2 \
- --name myVNet \
- --address-prefix 10.1.0.0/16 \
- --subnet-name mySubnet \
- --subnet-prefix 10.1.0.0/24
+ --name vnet-1 \
+ --address-prefix 10.0.0.0/16 \
+ --subnet-name subnet-1 \
+ --subnet-prefix 10.0.0.0/24
```
In this section, you delegate the subnet that you created in the preceding secti
1. In the search box at the top of the portal, enter **Virtual network**. Select **Virtual networks** in the search results.
-1. Select **myVNet**.
+1. Select **vnet-1**.
1. Select **Subnets** in **Settings**.
-1. Select **mySubnet**.
+1. Select **subnet-1**.
1. Enter or select the following information:
In this section, you delegate the subnet that you created in the preceding secti
# [**PowerShell**](#tab/manage-subnet-delegation-powershell)
-Use [Add-AzDelegation](/powershell/module/az.network/add-azdelegation) to update the subnet named **mySubnet** with a delegation named **myDelegation** to an Azure service. In this example **Microsoft.Sql/managedInstances** is used for the example delegation:
+Use [`Add-AzDelegation`](/powershell/module/az.network/add-azdelegation) to update the subnet named **subnet-1** with a delegation named **myDelegation** to an Azure service. This example uses **Microsoft.Sql/managedInstances** as the delegated service:
```azurepowershell-interactive $net = @{
- Name = 'myVNet'
- ResourceGroupName = 'myResourceGroup'
+ Name = 'vnet-1'
+ ResourceGroupName = 'test-rg'
} $vnet = Get-AzVirtualNetwork @net $sub = @{
- Name = 'mySubnet'
+ Name = 'subnet-1'
VirtualNetwork = $vnet } $subnet = Get-AzVirtualNetworkSubnetConfig @sub
$subnet = Add-AzDelegation @del
Set-AzVirtualNetwork -VirtualNetwork $vnet ```
-Use [Get-AzDelegation](/powershell/module/az.network/get-azdelegation) to verify the delegation:
+Use [`Get-AzDelegation`](/powershell/module/az.network/get-azdelegation) to verify the delegation:
```azurepowershell-interactive $sub = @{
- Name = 'myVNet'
- ResourceGroupName = 'myResourceGroup'
+ Name = 'vnet-1'
+ ResourceGroupName = 'test-rg'
}
-$subnet = Get-AzVirtualNetwork @sub | Get-AzVirtualNetworkSubnetConfig -Name 'mySubnet'
+$subnet = Get-AzVirtualNetwork @sub | Get-AzVirtualNetworkSubnetConfig -Name 'subnet-1'
$dg = @{ Name ='myDelegation'
Get-AzDelegation @dg
Actions : {Microsoft.Network/virtualNetworks/subnets/join/action} Name : myDelegation Etag : W/"9cba4b0e-2ceb-444b-b553-454f8da07d8a"
- Id : /subscriptions/3bf09329-ca61-4fee-88cb-7e30b9ee305b/resourceGroups/myResourceGroup/providers/Microsoft.Network/virtualNetworks/myVnet/subnets/mySubnet/delegations/myDelegation
+ Id : /subscriptions/3bf09329-ca61-4fee-88cb-7e30b9ee305b/resourceGroups/test-rg/providers/Microsoft.Network/virtualNetworks/vnet-1/subnets/subnet-1/delegations/myDelegation
``` # [**Azure CLI**](#tab/manage-subnet-delegation-cli)
-Use [az network vnet subnet update](/cli/azure/network/vnet/subnet#az-network-vnet-subnet-update) to update the subnet named **mySubnet** with a delegation to an Azure service. In this example **Microsoft.Sql/managedInstances** is used for the example delegation:
+Use [`az network vnet subnet update`](/cli/azure/network/vnet/subnet#az-network-vnet-subnet-update) to update the subnet named **subnet-1** with a delegation to an Azure service. This example uses **Microsoft.Sql/managedInstances** as the delegated service:
```azurecli-interactive az network vnet subnet update \
- --resource-group myResourceGroup \
- --name mySubnet \
- --vnet-name myVNet \
+ --resource-group test-rg \
+ --name subnet-1 \
+ --vnet-name vnet-1 \
--delegations Microsoft.Sql/managedInstances ```
-To verify the delegation was applied, use [az network vnet subnet show](/cli/azure/network/vnet/subnet#az-network-vnet-subnet-show). Verify the service is delegated to the subnet in the property **serviceName**:
+To verify the delegation was applied, use [`az network vnet subnet show`](/cli/azure/network/vnet/subnet#az-network-vnet-subnet-show). Verify the service is delegated to the subnet in the property **serviceName**:
```azurecli-interactive az network vnet subnet show \
- --resource-group myResourceGroup \
- --name mySubnet \
- --vnet-name myVNet \
+ --resource-group test-rg \
+ --name subnet-1 \
+ --vnet-name vnet-1 \
--query delegations ```
az network vnet subnet show \
"Microsoft.Network/virtualNetworks/subnets/unprepareNetworkPolicies/action" ], "etag": "W/\"30184721-8945-4e4f-9cc3-aa16b26589ac\"",
- "id": "/subscriptions/23250d6d-28f0-41dd-9776-61fc80805b6e/resourceGroups/myResourceGroup/providers/Microsoft.Network/virtualNetworks/myVNet/subnets/mySubnet/delegations/0",
+ "id": "/subscriptions/23250d6d-28f0-41dd-9776-61fc80805b6e/resourceGroups/test-rg/providers/Microsoft.Network/virtualNetworks/vnet-1/subnets/subnet-1/delegations/0",
"name": "0", "provisioningState": "Succeeded",
- "resourceGroup": "myResourceGroup",
+ "resourceGroup": "test-rg",
"serviceName": "Microsoft.Sql/managedInstances", "type": "Microsoft.Network/virtualNetworks/subnets/delegations" }
az network vnet subnet show \
## Remove subnet delegation from an Azure service
-In this section, you'll remove a subnet delegation for an Azure service.
+In this section, you remove a subnet delegation for an Azure service.
# [**Portal**](#tab/manage-subnet-delegation-portal)
In this section, you'll remove a subnet delegation for an Azure service.
1. In the search box at the top of the portal, enter **Virtual network**. Select **Virtual networks** in the search results.
-1. Select **myVNet**.
+1. Select **vnet-1**.
1. Select **Subnets** in **Settings**.
-1. Select **mySubnet**.
+1. Select **subnet-1**.
1. Enter or select the following information:
In this section, you'll remove a subnet delegation for an Azure service.
# [**PowerShell**](#tab/manage-subnet-delegation-powershell)
-Use [Remove-AzDelegation](/powershell/module/az.network/remove-azdelegation) to remove the delegation from the subnet named **mySubnet**:
+Use [`Remove-AzDelegation`](/powershell/module/az.network/remove-azdelegation) to remove the delegation from the subnet named **subnet-1**:
```azurepowershell-interactive $net = @{
- Name = 'myVNet'
- ResourceGroupName = 'myResourceGroup'
+ Name = 'vnet-1'
+ ResourceGroupName = 'test-rg'
} $vnet = Get-AzVirtualNetwork @net $sub = @{
- Name = 'mySubnet'
+ Name = 'subnet-1'
VirtualNetwork = $vnet } $subnet = Get-AzVirtualNetworkSubnetConfig @sub
$subnet = Remove-AzDelegation @del
Set-AzVirtualNetwork -VirtualNetwork $vnet ```
-Use [Get-AzDelegation](/powershell/module/az.network/get-azdelegation) to verify the delegation was removed:
+Use [`Get-AzDelegation`](/powershell/module/az.network/get-azdelegation) to verify the delegation was removed:
```azurepowershell-interactive $sub = @{
- Name = 'myVNet'
- ResourceGroupName = 'myResourceGroup'
+ Name = 'vnet-1'
+ ResourceGroupName = 'test-rg'
}
-$subnet = Get-AzVirtualNetwork @sub | Get-AzVirtualNetworkSubnetConfig -Name 'mySubnet'
+$subnet = Get-AzVirtualNetwork @sub | Get-AzVirtualNetworkSubnetConfig -Name 'subnet-1'
$dg = @{ Name ='myDelegation'
Get-AzDelegation: Sequence contains no matching element
# [**Azure CLI**](#tab/manage-subnet-delegation-cli)
-Use [az network vnet subnet update](/cli/azure/network/vnet/subnet#az-network-vnet-subnet-update) to remove the delegation from the subnet named **mySubnet**:
+Use [`az network vnet subnet update`](/cli/azure/network/vnet/subnet#az-network-vnet-subnet-update) to remove the delegation from the subnet named **subnet-1**:
```azurecli-interactive az network vnet subnet update \
- --resource-group myResourceGroup \
- --name mySubnet \
- --vnet-name myVNet \
+ --resource-group test-rg \
+ --name subnet-1 \
+ --vnet-name vnet-1 \
--remove delegations ```
-To verify the delegation was removed, use [az network vnet subnet show](/cli/azure/network/vnet/subnet#az-network-vnet-subnet-show). Verify the service is removed from the subnet in the property **serviceName**:
+To verify the delegation was removed, use [`az network vnet subnet show`](/cli/azure/network/vnet/subnet#az-network-vnet-subnet-show). Verify the service is removed from the subnet in the property **serviceName**:
```azurecli-interactive az network vnet subnet show \
- --resource-group myResourceGroup \
- --name mySubnet \
- --vnet-name myVNet \
+ --resource-group test-rg \
+ --name subnet-1 \
+ --vnet-name vnet-1 \
--query delegations ``` Output from command is a null bracket:
Output from command is a null bracket:
-## Clean up resources
-
-When no longer needed, delete the resource group and all resources it contains:
-
-1. Enter *myResourceGroup* in the **Search** box at the top of the Azure portal. When you see **myResourceGroup** in the search results, select it.
-
-1. Select **Delete resource group**.
-
-1. Enter *myResourceGroup* for **TYPE THE RESOURCE GROUP NAME:** and select **Delete**.
## Next steps - Learn how to [manage subnets in Azure](virtual-network-manage-subnet.md).
virtual-network Manage Virtual Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/manage-virtual-network.md
Title: Create, change, or delete an Azure virtual network description: Create and delete a virtual network and change settings, like DNS servers and IP address spaces, for an existing virtual network.- - Previously updated : 11/16/2022 Last updated : 08/23/2023
The account you log into, or connect to Azure with, must be assigned to the [net
| Name | Enter a name for the virtual network you're creating. | The name must be unique in the resource group that you select to create the virtual network in. <br> You can't change the name after the virtual network is created. <br> For naming suggestions, see [Naming conventions](/azure/cloud-adoption-framework/ready/azure-best-practices/naming-and-tagging#naming-and-tagging-resources). Following a naming convention can help make it easier to manage multiple virtual networks. | | Region | Select an Azure [region](https://azure.microsoft.com/regions/). | A virtual network can be in only one Azure region. However, you can connect a virtual network in one region to a virtual network in another region using [virtual network peering](virtual-network-peering-overview.md). <br> Any Azure resource that you connect to the virtual network must be in the same region as the virtual network. |
-1. Select **IP Addresses** tab or **Next: IP Addresses >**, and enter the following IP address information:
+1. Select the **IP Addresses** tab, or select **Next: Security >** and then **Next: IP Addresses >**, and enter the following IP address information:
+
- **IPv4 Address space**: The address space for a virtual network is composed of one or more non-overlapping address ranges that are specified in CIDR notation. The address range you define can be public or private (RFC 1918). Whether you define the address range as public or private, the address range is reachable only from within the virtual network, from interconnected virtual networks, and from any on-premises networks that you've connected to the virtual network. You can't add the following address ranges:
virtual-network Network Security Groups Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/network-security-groups-overview.md
Application security groups enable you to configure network security as a natura
After the subscription is exempted from this block and the VMs are stopped and restarted, all VMs in that subscription are exempted going forward. The exemption applies only to the subscription requested and only to VM traffic that is routed directly to the internet. - **Pay-as-you-go:** Outbound port 25 communication is blocked from all resources. No requests to remove the restriction can be made, because requests aren't granted. If you need to send email from your virtual machine, you have to use an SMTP relay service.
- - **MSDN, Azure Pass, Azure in Open, Education, BizSpark, and Free trial**: Outbound port 25 communication is blocked from all resources. No requests to remove the restriction can be made, because requests aren't granted. If you need to send email from your virtual machine, you have to use an SMTP relay service.
+ - **MSDN, Azure Pass, Azure in Open, Education, and Free trial**: Outbound port 25 communication is blocked from all resources. No requests to remove the restriction can be made, because requests aren't granted. If you need to send email from your virtual machine, you have to use an SMTP relay service.
- **Cloud service provider**: Outbound port 25 communication is blocked from all resources. No requests to remove the restriction can be made, because requests aren't granted. If you need to send email from your virtual machine, you have to use an SMTP relay service. ## Next steps
virtual-network Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/policy-reference.md
Title: Built-in policy definitions for Azure Virtual Network description: Lists Azure Policy built-in policy definitions for Azure Virtual Network. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2023 Last updated : 08/30/2023
virtual-network Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Virtual Network description: Lists Azure Policy Regulatory Compliance controls available for Azure Virtual Network. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/03/2023 Last updated : 08/25/2023
virtual-network Service Tags Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/service-tags-overview.md
By default, service tags reflect the ranges for the entire cloud. Some service t
| **AzureLoadBalancer** | The Azure infrastructure load balancer. The tag translates to the [virtual IP address of the host](./network-security-groups-overview.md#azure-platform-considerations) (168.63.129.16) where the Azure health probes originate. This only includes probe traffic, not real traffic to your backend resource. If you're not using Azure Load Balancer, you can override this rule. | Both | No | No | | **AzureLoadTestingInstanceManagement** | This service tag is used for inbound connectivity from Azure Load Testing service to the load generation instances injected into your virtual network in the private load testing scenario. <br/><br/>**Note:** This tag is intended to be used in Azure Firewall, NSG, UDR and all other gateways for inbound connectivity. | Inbound | No | Yes | | **AzureMachineLearning** | Azure Machine Learning. | Both | No | Yes |
+| **AzureMachineLearningInference** | This service tag is used for restricting public network ingress in private network managed inferencing scenarios. | Inbound | No | Yes |
| **AzureManagedGrafana** | Azure Managed Grafana instance endpoint. | Outbound | No | Yes | | **AzureMonitor** | Log Analytics, Application Insights, AzMon, and custom metrics (GiG endpoints).<br/><br/>**Note**: For Log Analytics, the **Storage** tag is also required. If Linux agents are used, **GuestAndHybridManagement** tag is also required. | Outbound | No | Yes | | **AzureOpenDatasets** | Azure Open Datasets.<br/><br/>**Note**: This tag has a dependency on the **AzureFrontDoor.Frontend** and **Storage** tag. | Outbound | No | Yes |
virtual-network Setup Dpdk Mana https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/setup-dpdk-mana.md
+
+ Title: Microsoft Azure Network Adapter (MANA) and DPDK on Linux
+description: Learn about MANA and DPDK for Linux Azure VMs.
+++ Last updated : 07/10/2023+++
+# Microsoft Azure Network Adapter (MANA) and DPDK on Linux
+
+The Microsoft Azure Network Adapter (MANA) is new hardware for Azure virtual machines that enables higher throughput and reliability.
+To make use of MANA, users must modify their DPDK initialization routines. MANA requires two changes compared to legacy hardware:
+- [MANA EAL arguments](#mana-dpdk-eal-arguments) for the poll-mode driver (PMD) differ from previous hardware.
+- The Linux kernel must release control of the MANA network interfaces before DPDK initialization begins.
+
+The setup procedure for MANA DPDK is outlined in the [example code](#example-testpmd-setup-and-netvsc-test).
+
+## Introduction
+
+Legacy Azure Linux VMs rely on the mlx4 or mlx5 drivers and the accompanying hardware for accelerated networking. Azure DPDK users would select specific interfaces to include or exclude by passing bus addresses to the DPDK EAL. The setup procedure for MANA DPDK differs slightly, since the assumption of one bus address per Accelerated Networking interface no longer holds true. Rather than using a PCI bus address, the MANA PMD uses the MAC address to determine which interface it should bind to.
+
+## MANA DPDK EAL Arguments
+The MANA PMD probes all devices and ports on the system when no `--vdev` argument is present; the `--vdev` argument is not mandatory. In testing environments it's often desirable to leave one (primary) interface available for servicing the SSH connection to the VM. To use DPDK with a subset of the available VFs, users should pass both the bus address of the MANA device and the MAC address of the interfaces in the `--vdev` argument. For more detail, example code is available to demonstrate [DPDK EAL initialization on MANA](#example-testpmd-setup-and-netvsc-test).
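+As a concrete illustration of that argument shape (the bus address and MAC below are placeholders taken from the testpmd example later in this article):
+
+```bash
+# One --vdev argument selects a single MANA VF: the device bus info,
+# plus the MAC address of the specific interface DPDK should bind to.
+dpdk-testpmd -l 1-3 --vdev="7870:00:00.0,mac=f0:0d:3a:ec:b4:0a" -- --forward-mode=txonly --auto-start
+```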
+
+For general information about the DPDK Environment Abstraction Layer (EAL):
+- [DPDK EAL Arguments for Linux](https://doc.dpdk.org/guides/prog_guide/env_abstraction_layer.html#eal-in-a-linux-userland-execution-environment)
+- [DPDK EAL Overview](https://doc.dpdk.org/guides/prog_guide/env_abstraction_layer.html)
+
+## DPDK requirements for MANA
+
+Using DPDK on MANA hardware requires Linux kernel 6.2 or later, or a backport of the Ethernet and InfiniBand drivers from the latest Linux kernel. It also requires specific versions of DPDK and user-space drivers; a quick version check is sketched below, after the driver list.
+
+MANA DPDK requires the following set of drivers:
+1. [Linux kernel Ethernet driver](https://github.com/torvalds/linux/tree/master/drivers/net/ethernet/microsoft/mana) (5.15 kernel and later)
+1. [Linux kernel InfiniBand driver](https://github.com/torvalds/linux/tree/master/drivers/infiniband/hw/mana) (6.2 kernel and later)
+1. [DPDK MANA poll-mode driver](https://github.com/DPDK/dpdk/tree/main/drivers/net/mana) (DPDK 22.11 and later)
+1. [Libmana user-space drivers](https://github.com/linux-rdma/rdma-core/tree/master/providers/mana) (rdma-core v44 and later)
+
+>[!NOTE]
+>MANA DPDK is not available for Windows; it will only work on Linux VMs.
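+A quick sanity check for the kernel and rdma-core items (a sketch; the `dpkg-query` call assumes a Debian-family distribution such as Ubuntu):
+
+```bash
+# Kernel: 5.15+ is needed for the Ethernet driver, 6.2+ for the InfiniBand driver
+uname -r
+
+# rdma-core: v44 or later is needed for the libmana user-space drivers
+dpkg-query --show --showformat='${Version}\n' rdma-core
+```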
+
+## Example: Check for MANA
+
+>[!NOTE]
+>This article assumes the pciutils package containing the lspci command is installed on the system.
+
+```bash
+# check for pci devices with ID:
+# vendor: Microsoft Corporation (1414)
+# class: Ethernet Controller (0200)
+# device: Microsoft Azure Network Adapter VF (00ba)
+if [[ -n `lspci -d 1414:00ba:0200` ]]; then
+ echo "MANA device is available."
+else
+ echo "MANA was not detected."
+fi
+
+```
+
+## Example: DPDK installation (Ubuntu 22.04)
+
+>[!NOTE]
+>This article assumes compatible kernel and rdma-core are installed on the system.
+
+```bash
+DEBIAN_FRONTEND=noninteractive sudo apt-get install -q -y build-essential libudev-dev libnl-3-dev libnl-route-3-dev ninja-build libssl-dev libelf-dev python3-pip meson libnuma-dev
+
+pip3 install pyelftools
+
+# Try latest LTS DPDK, example uses DPDK tag v23.07-rc3
+git clone https://github.com/DPDK/dpdk.git -b v23.07-rc3 --depth 1
+pushd dpdk
+meson build
+cd build
+ninja
+sudo ninja install
+popd
+```
+
+## Example: Testpmd setup and netvsc test
+
+The following example code runs DPDK with MANA. The direct-to-VF 'netvsc' configuration on Azure is recommended for maximum performance with MANA.
+
+>[!NOTE]
+>DPDK requires either 2MB or 1GB hugepages to be enabled
+
+```bash
+# Enable 2MB hugepages.
+echo 1024 | tee /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages
+
+# Assuming use of eth1 for DPDK in this demo
+PRIMARY="eth1"
+
+# $ ip -br link show master eth1
+# > enP30832p0s0 UP f0:0d:3a:ec:b4:0a <... # truncated
+# grab interface name for device bound to primary
+SECONDARY="`ip -br link show master $PRIMARY | awk '{ print $1 }'`"
+# Get mac address for MANA interface (should match primary)
+MANA_MAC="`ip -br link show master $PRIMARY | awk '{ print $3 }'`"
++
+# $ ethtool -i enP30832p0s0 | grep bus-info
+# > bus-info: 7870:00:00.0
+# get MANA device bus info to pass to DPDK
+BUS_INFO="`ethtool -i $SECONDARY | grep bus-info | awk '{ print $2 }'`"
+
+# Set MANA interfaces DOWN before starting DPDK
+ip link set $PRIMARY down
+ip link set $SECONDARY down
++
+## Move synthetic channel to user mode and allow it to be used by NETVSC PMD in DPDK
+DEV_UUID=$(basename $(readlink /sys/class/net/$PRIMARY/device))
+NET_UUID="f8615163-df3e-46c5-913f-f2d2f965ed0e"
+modprobe uio_hv_generic
+echo $NET_UUID > /sys/bus/vmbus/drivers/uio_hv_generic/new_id
+echo $DEV_UUID > /sys/bus/vmbus/drivers/hv_netvsc/unbind
+echo $DEV_UUID > /sys/bus/vmbus/drivers/uio_hv_generic/bind
+
+# MANA single queue test
+dpdk-testpmd -l 1-3 --vdev="$BUS_INFO,mac=$MANA_MAC" -- --forward-mode=txonly --auto-start --txd=128 --rxd=128 --stats 2
+
+# MANA multiple queue test (example assumes > 9 cores)
+dpdk-testpmd -l 1-9 --vdev="$BUS_INFO,mac=$MANA_MAC" -- --forward-mode=txonly --auto-start --nb-cores=8 --txd=128 --rxd=128 --txq=8 --rxq=8 --stats 2
+
+```
+
+## Troubleshooting
+
+### Failure to set the interface down
+Failure to set the MANA-bound device to DOWN can result in low or zero packet throughput.
+Failure to release the device can result in EAL error messages related to transmit queues, for example:
+```
+mana_start_tx_queues(): Failed to create qp queue index 0
+mana_dev_start(): failed to start tx queues -19
+```
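+If you see this error, repeat the interface shutdown step from the example above before launching DPDK:
+
+```bash
+# Both the synthetic (primary) and VF (secondary) interfaces must be DOWN
+# before DPDK can take over the device; reuses $PRIMARY and $SECONDARY
+# from the testpmd setup above.
+ip link set $PRIMARY down
+ip link set $SECONDARY down
+```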
+
+### Failure to enable huge pages
+
+Enable huge pages and ensure the hugepage information is visible in `/proc/meminfo`; otherwise EAL initialization fails with errors like the following.
+```
+EAL: No free 2048 kB hugepages reported on node 0
+EAL: FATAL: Cannot get hugepage information.
+EAL: Cannot get hugepage information.
+EAL: Error - exiting with code: 1
+Cause: Cannot init EAL: Permission denied
+```
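+To enable 2MB hugepages and confirm they're registered (mirroring the setup example above):
+
+```bash
+# Reserve 1024 2MB hugepages on each NUMA node, then confirm in meminfo
+echo 1024 | sudo tee /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages
+grep -i huge /proc/meminfo
+```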
+
+### Low throughput with `--vdev="net_vdev_netvsc0,iface=eth1"`
+
+Failover configuration of either the `net_failsafe` or `net_vdev_netvsc` poll-mode drivers isn't recommended for high performance on Azure. The netvsc configuration with DPDK version 20.11 or later may give better results. For optimal performance, ensure your Linux kernel, rdma-core, and DPDK packages meet the listed requirements for DPDK and MANA.
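+One way to confirm the installed DPDK version (a sketch, assuming DPDK's `libdpdk.pc` pkg-config file is on the search path):
+
+```bash
+# The MANA PMD requires DPDK 22.11+; the netvsc configuration works best on 20.11+
+pkg-config --modversion libdpdk
+```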
virtual-network Setup Dpdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/setup-dpdk.md
DPDK consists of sets of user-space libraries that provide access to lower-level
DPDK can run on Azure virtual machines that are supporting multiple operating system distributions. DPDK provides key performance differentiation in driving network function virtualization implementations. These implementations can take the form of network virtual appliances (NVAs), such as virtual routers, firewalls, VPNs, load balancers, evolved packet cores, and denial-of-service (DDoS) applications.
+A list of setup instructions for DPDK on MANA VMs is available here: [Microsoft Azure Network Adapter (MANA) and DPDK on Linux](setup-dpdk-mana.md)
+ ## Benefit **Higher packets per second (PPS)**: Bypassing the kernel and taking control of packets in the user space reduces the cycle count by eliminating context switches. It also improves the rate of packets that are processed per second in Azure Linux virtual machines.
The following distributions from the Azure Marketplace are supported:
The noted versions are the minimum requirements. Newer versions are supported too.
+A list of requirements for DPDK on MANA VMs is available here: [Microsoft Azure Network Adapter (MANA) and DPDK on Linux](setup-dpdk-mana.md)
+ **Custom kernel support** For any Linux kernel version that's not listed, see [Patches for building an Azure-tuned Linux kernel](https://github.com/microsoft/azure-linux-kernel). For more information, you can also contact [aznetdpdk@microsoft.com](mailto:aznetdpdk@microsoft.com).
In addition, DPDK uses RDMA verbs to create data queues on the Network Adapter.
## Install DPDK manually (recommended)
+DPDK installation instructions for MANA VMs are available here: [Microsoft Azure Network Adapter (MANA) and DPDK on Linux](setup-dpdk-mana.md)
+ ### Install build dependencies # [RHEL, CentOS](#tab/redhat)
virtual-network Tutorial Connect Virtual Networks Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/tutorial-connect-virtual-networks-portal.md
Title: 'Tutorial: Connect virtual networks with VNet peering - Azure portal' description: In this tutorial, you learn how to connect virtual networks with virtual network peering using the Azure portal.- - Previously updated : 06/24/2022 Last updated : 08/22/2023 # Customer intent: I want to connect two virtual networks so that virtual machines in one virtual network can communicate with virtual machines in the other virtual network.
In this tutorial, you learn how to:
> * Deploy a virtual machine (VM) into each virtual network > * Communicate between VMs
-This tutorial uses the Azure portal. You can also complete it using [Azure CLI](tutorial-connect-virtual-networks-cli.md) or [PowerShell](tutorial-connect-virtual-networks-powershell.md).
-
-If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
- ## Prerequisites
-* An Azure subscription
+- An Azure account with an active subscription. You can [create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
## Sign in to Azure Sign in to the [Azure portal](https://portal.azure.com).
-## Create virtual networks
-
-1. On the Azure portal, select **+ Create a resource**.
-
-1. Search for **Virtual Network**, and then select **Create**.
-
- :::image type="content" source="./media/tutorial-connect-virtual-networks-portal/create-vnet.png" alt-text="Screenshot of create a resource for virtual network.":::
-
-1. On the **Basics** tab, enter or select the following information and accept the defaults for the remaining settings:
-
- |Setting|Value|
- |||
- |Subscription| Select your subscription.|
- |Resource group| Select **Create new** and enter *myResourceGroup*.|
- |Name| Enter *myVirtualNetwork1*.|
- |Region| Select **East US**.|
--
- :::image type="content" source="./media/tutorial-connect-virtual-networks-portal/create-basic-tab.png" alt-text="Screenshot of create virtual network basics tab.":::
-
-1. On the **IP Addresses** tab, enter *10.0.0.0/16* for the **IPv4 address Space** field. Select the **+ Add subnet** button below and enter *Subnet1* for **Subnet Name** and *10.0.0.0/24* for the **Subnet Address range**.
-
- :::image type="content" source="./media/tutorial-connect-virtual-networks-portal/ip-addresses-tab.png" alt-text="Screenshot of create a virtual network IP addresses tab.":::
-
-1. Select **Review + create** and then select **Create**.
-1. Repeat steps 1-5 again to create a second virtual network with the following settings:
-
- | Setting | Value |
- | | |
- | Name | myVirtualNetwork2 |
- | Address space | 10.1.0.0/16 |
- | Resource group | myResourceGroup |
- | Subnet name | Subnet2 |
- | Subnet address range | 10.1.0.0/24 |
-
-## Peer virtual networks
-
-1. In the search box at the top of the Azure portal, look for *myVirtualNetwork1*. When **myVirtualNetwork1** appears in the search results, select it.
-
- :::image type="content" source="./media/tutorial-connect-virtual-networks-portal/search-vnet.png" alt-text="Screenshot of searching for myVirtualNetwork1.":::
+Repeat the previous steps to create a second virtual network with the following values:
-1. Under **Settings**, select **Peerings**, and then select **+ Add**, as shown in the following picture:
+>[!NOTE]
+>The second virtual network can be in the same region as the first virtual network or in a different region. You can skip the **Security** tab and the Bastion deployment for the second virtual network. After the networks are peered, you can connect to both virtual machines with the same Bastion deployment.
- :::image type="content" source="./media/tutorial-connect-virtual-networks-portal/create-peering.png" alt-text="Screenshot of creating peerings for myVirtualNetwork1.":::
-
-1. Enter or select the following information, accept the defaults for the remaining settings, and then select **Add**.
-
- | Setting | Value |
- | | |
- | **This virtual network** | |
- | Peering link name | Enter *myVirtualNetwork1-myVirtualNetwork2* for the name of the peering from **myVirtualNetwork1** to the remote virtual network. |
- | **Remote virtual network** | |
- | Peering link name | Enter *myVirtualNetwork2-myVirtualNetwork1* for the name of the peering from the remote virtual network to **myVirtualNetwork1**. |
- | Subscription | Select your subscription of the remote virtual network. |
- | Virtual network | Select **myVirtualNetwork2** for the name of the remote virtual network. The remote virtual network can be in the same region of **myVirtualNetwork1** or in a different region. |
-
- :::image type="content" source="./media/tutorial-connect-virtual-networks-portal/peering-settings-bidirectional-inline.png" alt-text="Screenshot of virtual network peering configuration." lightbox="./media/tutorial-connect-virtual-networks-portal/peering-settings-bidirectional-expanded.png":::
-
- In the **Peerings** page, the **Peering status** is **Connected**, as shown in the following picture:
+| Setting | Value |
+| | |
+| Name | **vnet-2** |
+| Address space | **10.1.0.0/16** |
+| Resource group | **test-rg** |
+| Subnet name | **subnet-1** |
+| Subnet address range | **10.1.0.0/24** |
- :::image type="content" source="./media/tutorial-connect-virtual-networks-portal/peering-status-connected.png" alt-text="Screenshot of virtual network peering connection status.":::
+<a name="peer-virtual-networks"></a>
- If you don't see a **Connected** status, select the **Refresh** button.
## Create virtual machines
-Create a VM in each virtual network so that you can test the communication between them.
+Create a virtual machine in each virtual network to test the communication between them.
-### Create the first VM
-1. On the Azure portal, select **+ Create a resource**.
-
-1. Select **Compute**, and then **Create** under **Virtual machine**.
-
- :::image type="content" source="./media/tutorial-connect-virtual-networks-portal/create-vm.png" alt-text="Screenshot of create a resource for virtual machines.":::
-
-1. Enter or select the following information on the **Basics** tab. Accept the defaults for the remaining settings, and then select **Create**:
-
- | Setting | Value |
- | | |
- | Resource group| Select **myResourceGroup**. |
- | Name | Enter *myVm1*. |
- | Location | Select **(US) East US**. |
- | Image | Select an OS image. For this tutorial, *Windows Server 2019 Datacenter - Gen2* is selected. |
- | Size | Select a VM size. For this tutorial, *Standard_D2s_v3* is selected. |
- | Username | Enter a username. For this tutorial, the username *azure* is used. |
- | Password | Enter a password of your choosing. The password must be at least 12 characters long and meet the [defined complexity requirements](../virtual-machines/windows/faq.yml?toc=%2fazure%2fvirtual-network%2ftoc.json#what-are-the-password-requirements-when-creating-a-vm-). |
-
- :::image type="content" source="./media/tutorial-connect-virtual-networks-portal/create-vm-basic-tab-inline.png" alt-text="Screenshot of virtual machine basic tab configuration." lightbox="./media/tutorial-connect-virtual-networks-portal/create-vm-basic-tab-expanded.png":::
-
-1. On the **Networking** tab, select the following values:
-
- | Setting | Value |
- | | |
- | Virtual network | Select **myVirtualNetwork1**. |
- | Subnet | Select **Subnet1**. |
- | NIC network security group | Select **Basic**. |
- | Public inbound ports | Select **Allow selected ports**. |
- | Select inbound ports | Select **RDP (3389)**. |
-
- :::image type="content" source="./media/tutorial-connect-virtual-networks-portal/create-vm-networking-tab-inline.png" alt-text="Screenshot of virtual machine networking tab configuration." lightbox="./media/tutorial-connect-virtual-networks-portal/create-vm-networking-tab-expanded.png":::
-
-1. Select the **Review + Create** and then **Create** to start the VM deployment.
-
-### Create the second VM
-
-Repeat steps 1-5 again to create a second virtual machine with the following changes:
+Repeat the previous steps to create a second virtual machine in the second virtual network with the following values:
| Setting | Value | | | |
-| Name | myVm2 |
-| Virtual network | myVirtualNetwork2 |
+| Virtual machine name | **vm-2** |
+| Region | **East US 2** or same region as **vnet-2**. |
+| Virtual network | Select **vnet-2**. |
+| Subnet | Select **subnet-1 (10.1.0.0/24)**. |
+| Public IP | **None** |
+| Network security group name | **nsg-2** |
-The VMs take a few minutes to create. Don't continue with the remaining steps until both VMs are created.
+Wait for the virtual machines to be created before continuing with the next steps.
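As an alternative to the portal steps, here's a hedged Azure CLI sketch of the second virtual machine. The commands and flags are standard Azure CLI; `<password>` is a placeholder you replace, and it must meet the Azure password complexity requirements.

```azurecli
# Create vm-2 in vnet-2/subnet-1 with no public IP address and a new
# network security group named nsg-2, matching the table above.
az vm create \
  --resource-group test-rg \
  --name vm-2 \
  --location eastus2 \
  --image Ubuntu2204 \
  --vnet-name vnet-2 \
  --subnet subnet-1 \
  --public-ip-address "" \
  --nsg nsg-2 \
  --admin-username azureuser \
  --admin-password '<password>'   # placeholder; replace with your own
```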
+## Connect to a virtual machine
-## Communicate between VMs
-
-Test the communication between the two virtual machines over the virtual network peering by pinging from **myVm2** to **myVm1**.
+Use `ping` to test the communication between the virtual machines.
-1. In the search box at the top of the portal, look for *myVm1*. When **myVm1** appears in the search results, select it.
-
- :::image type="content" source="./media/tutorial-connect-virtual-networks-portal/search-vm.png" alt-text="Screenshot of searching for myVm1.":::
+1. In the portal, search for and select **Virtual machines**.
-1. To connect to the virtual machine, select **Connect** and then select **RDP** from the drop-down. Select **Download RDP file** to download the remote desktop file.
+1. On the **Virtual machines** page, select **vm-1**.
- :::image type="content" source="./media/tutorial-connect-virtual-networks-portal/connect-to-virtual-machine.png" alt-text="Screenshot of connect to virtual machine button.":::
+1. In the **Overview** of **vm-1**, select **Connect**.
-1. To connect to the VM, open the downloaded RDP file. If prompted, select **Connect**.
+1. In the **Connect to virtual machine** page, select the **Bastion** tab.
- :::image type="content" source="./media/tutorial-connect-virtual-networks-portal/rdp-connect.png" alt-text="Screenshot of connection screen for remote desktop.":::
+1. Select **Use Bastion**.
-1. Enter the username and password you specified when creating **myVm1** (you may need to select **More choices**, then **Use a different account**, to specify the credentials you entered when you created the VM), then select **OK**.
+1. Enter the username and password you created when you created the VM, and then select **Connect**.
- :::image type="content" source="./media/tutorial-connect-virtual-networks-portal/rdp-credentials.png" alt-text="Screenshot of R D P credential screen.":::
+## Communicate between VMs
-1. You may receive a certificate warning during the sign-in process. Select **Yes** to continue with the connection.
+1. At the bash prompt for **vm-1**, enter `ping -c 4 vm-2`.
-1. In a later step, ping is used to communicate with **myVm1** from **myVm2**. Ping uses the Internet Control Message Protocol (ICMP), which is denied through the Windows Firewall, by default. On **myVm1**, enable ICMP through the Windows firewall, so that you can ping this VM from **myVm2** in a later step, using PowerShell:
+ You get a reply similar to the following message:
- ```powershell
- New-NetFirewallRule -DisplayName "Allow ICMPv4-In" -Protocol ICMPv4
+ ```output
+ azureuser@vm-1:~$ ping -c 4 vm-2
+ PING vm-2.3bnkevn3313ujpr5l1kqop4n4d.cx.internal.cloudapp.net (10.1.0.4) 56(84) bytes of data.
+ 64 bytes from vm-2.internal.cloudapp.net (10.1.0.4): icmp_seq=1 ttl=64 time=1.83 ms
+ 64 bytes from vm-2.internal.cloudapp.net (10.1.0.4): icmp_seq=2 ttl=64 time=0.987 ms
+ 64 bytes from vm-2.internal.cloudapp.net (10.1.0.4): icmp_seq=3 ttl=64 time=0.864 ms
+ 64 bytes from vm-2.internal.cloudapp.net (10.1.0.4): icmp_seq=4 ttl=64 time=0.890 ms
```
- Though ping is used to communicate between VMs in this tutorial, allowing ICMP through the Windows Firewall for production deployments isn't recommended.
-
-1. To connect to **myVm2** from **myVm1**, enter the following command from a command prompt on **myVm1**:
-
- ```
- mstsc /v:10.1.0.4
- ```
-1. Enter the username and password you specified when creating **myVm2** and select **Yes** if you receive a certificate warning during the sign-in process.
-
- :::image type="content" source="./media/tutorial-connect-virtual-networks-portal/rdp-credentials-to-second-vm.png" alt-text="Screenshot of R D P credential screen for R D P session from first virtual machine to second virtual machine.":::
-
-1. Since you enabled ping on **myVm1**, you can now ping it from **myVm2**:
-
- ```powershell
- ping 10.0.0.4
- ```
-
- :::image type="content" source="./media/tutorial-connect-virtual-networks-portal/myvm2-ping-myvm1.png" alt-text="Screenshot of second virtual machine pinging first virtual machine.":::
+1. Close the Bastion connection to **vm-1**.
-1. Disconnect your RDP sessions to both *myVm1* and *myVm2*.
+1. Repeat the steps in [Connect to a virtual machine](#connect-to-a-virtual-machine) to connect to **vm-2**.
-## Clean up resources
+1. At the bash prompt for **vm-2**, enter `ping -c 4 vm-1`.
-When no longer needed, delete the resource group and all resources it contains:
+ You get a reply similar to the following message:
-1. Enter *myResourceGroup* in the **Search** box at the top of the Azure portal. When you see **myResourceGroup** in the search results, select it.
+ ```output
+ azureuser@vm-2:~$ ping -c 4 vm-1
+ PING vm-1.3bnkevn3313ujpr5l1kqop4n4d.cx.internal.cloudapp.net (10.0.0.4) 56(84) bytes of data.
+ 64 bytes from vm-1.internal.cloudapp.net (10.0.0.4): icmp_seq=1 ttl=64 time=0.695 ms
+ 64 bytes from vm-1.internal.cloudapp.net (10.0.0.4): icmp_seq=2 ttl=64 time=0.896 ms
+ 64 bytes from vm-1.internal.cloudapp.net (10.0.0.4): icmp_seq=3 ttl=64 time=3.43 ms
+ 64 bytes from vm-1.internal.cloudapp.net (10.0.0.4): icmp_seq=4 ttl=64 time=0.780 ms
+ ```
-1. Select **Delete resource group**.
+1. Close the Bastion connection to **vm-2**.
-1. Enter *myResourceGroup* for **TYPE THE RESOURCE GROUP NAME:** and select **Delete**.
## Next steps In this tutorial, you: * Created virtual network peering between two virtual networks.
-* Tested the communication between two virtual machines over the virtual network peering using ping command.
+
+* Tested the communication between two virtual machines over the virtual network peering with `ping`.
To learn more about a virtual network peering:
virtual-network Tutorial Create Route Table Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/tutorial-create-route-table-portal.md
Title: 'Tutorial: Route network traffic with a route table - Azure portal' description: In this tutorial, learn how to route network traffic with a route table using the Azure portal.- -- Previously updated : 06/27/2022 Last updated : 08/21/2023 + # Customer intent: I want to route traffic from one subnet, to a different subnet, through a network virtual appliance.
In this tutorial, you learn how to:
> * Associate a route table to a subnet > * Route traffic from one subnet to another through an NVA
-This tutorial uses the Azure portal. You can also complete it using the [Azure CLI](tutorial-create-route-table-cli.md) or [PowerShell](tutorial-create-route-table-powershell.md).
-
-If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
-
-## Overview
-
-This diagram shows the resources created in this tutorial along with the expected network routes.
-- ## Prerequisites
-* An Azure subscription
+- An Azure account with an active subscription. You can [create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
## Sign in to Azure Sign in to the [Azure portal](https://portal.azure.com).
-## Create a virtual network
-
-In this section, you'll create a virtual network, three subnets, and a bastion host. You'll use the bastion host to securely connect to the virtual machines.
-1. From the Azure portal menu, select **+ Create a resource** > **Networking** > **Virtual network**, or search for *Virtual Network* in the portal search box.
+## Create subnets
-2. Select **Create**.
+A **DMZ** and a **Private** subnet are needed for this tutorial. The **subnet-dmz** subnet is where you deploy the NVA, and the **subnet-private** subnet is where you deploy the virtual machine that you want to route traffic to. **subnet-1** is the subnet created in the previous steps; use **subnet-1** for the public virtual machine.
-2. On the **Basics** tab of **Create virtual network**, enter or select this information:
+1. In the search box at the top of the portal, enter **Virtual network**. Select **Virtual networks** in the search results.
- | Setting | Value |
- | - | -- |
- | Subscription | Select your subscription.|
- | Resource group | Select **Create new**, enter *myResourceGroup*. </br> Select **OK**. |
- | Name | Enter *myVirtualNetwork*. |
- | Region | Select **East US**.|
-
-3. Select the **IP Addresses** tab, or select the **Next: IP Addresses** button at the bottom of the page.
-
-4. In **IPv4 address space**, select the existing address space and change it to *10.0.0.0/16*.
+1. In **Virtual networks**, select **vnet-1**.
-4. Select **+ Add subnet**, then enter *Public* for **Subnet name** and *10.0.0.0/24* for **Subnet address range**.
+1. In **vnet-1**, select **Subnets** from the **Settings** section.
-5. Select **Add**.
+1. In the virtual network's subnet list, select **+ Subnet**.
-6. Select **+ Add subnet**, then enter *Private* for **Subnet name** and *10.0.1.0/24* for **Subnet address range**.
+1. In **Add subnet**, enter or select the following information:
-7. Select **Add**.
-
-8. Select **+ Add subnet**, then enter *DMZ* for **Subnet name** and *10.0.2.0/24* for **Subnet address range**.
+ | Setting | Value |
+ | - | -- |
+ | Name | Enter **subnet-private**. |
+ | Subnet address range | Enter **10.0.2.0/24**. |
-9. Select **Add**.
+ :::image type="content" source="./media/tutorial-create-route-table-portal/create-private-subnet.png" alt-text="Screenshot of private subnet creation in virtual network.":::
-10. Select the **Security** tab, or select the **Next: Security** button at the bottom of the page.
+1. Select **Save**.
-11. Under **BastionHost**, select **Enable**. Enter this information:
+1. Select **+ Subnet**.
- | Setting | Value |
- |--|-|
- | Bastion name | Enter *myBastionHost*. |
- | AzureBastionSubnet address space | Enter *10.0.3.0/24*. |
- | Public IP Address | Select **Create new**. </br> Enter *myBastionIP* for **Name**. </br> Select **OK**. |
+1. In **Add subnet**, enter or select the following information:
- >[!NOTE]
- >[!INCLUDE [Pricing](../../includes/bastion-pricing.md)]
+ | Setting | Value |
+ | - | -- |
+ | Name | Enter **subnet-dmz**. |
+ | Subnet address range | Enter **10.0.3.0/24**. |
-12. Select the **Review + create** tab or select the **Review + create** button.
+ :::image type="content" source="./media/tutorial-create-route-table-portal/create-dmz-subnet.png" alt-text="Screenshot of DMZ subnet creation in virtual network.":::
-13. Select **Create**.
+1. Select **Save**.
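If you'd rather script this step, the following Azure CLI sketch adds the same two subnets to **vnet-1**. The command and flags are standard Azure CLI; the names and address prefixes come from the tables above.

```azurecli
# Add the private subnet that receives the routed traffic.
az network vnet subnet create \
  --resource-group test-rg \
  --vnet-name vnet-1 \
  --name subnet-private \
  --address-prefixes 10.0.2.0/24

# Add the DMZ subnet that hosts the NVA.
az network vnet subnet create \
  --resource-group test-rg \
  --vnet-name vnet-1 \
  --name subnet-dmz \
  --address-prefixes 10.0.3.0/24
```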
## Create an NVA virtual machine
-Network virtual appliances (NVAs) are virtual machines that help with network functions, such as routing and firewall optimization. In this section, you'll create an NVA using a **Windows Server 2019 Datacenter** virtual machine. You can select a different operating system if you want.
+Network virtual appliances (NVAs) are virtual machines that help with network functions, such as routing and firewall optimization. In this section, create an NVA using an **Ubuntu 22.04** virtual machine.
+
+1. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines** in the search results.
-1. From the Azure portal menu, select **+ Create a resource** > **Compute** > **Virtual machine**, or search for *Virtual machine* in the portal search box.
+1. Select **+ Create** then **Azure virtual machine**.
-1. Select **Create**.
-
-2. On the **Basics** tab of **Create a virtual machine**, enter or select this information:
+1. In **Create a virtual machine** enter or select the following information in the **Basics** tab:
- | Setting | Value |
- |--|-|
- | **Project Details** | |
+ | Setting | Value |
+ | - | -- |
+ | **Project details** | |
| Subscription | Select your subscription. |
- | Resource Group | Select **myResourceGroup**. |
- | **Instance details** | |
- | Virtual machine name | Enter *myVMNVA*. |
- | Region | Select **(US) East US**. |
- | Availability Options | Select **No infrastructure redundancy required**. |
+ | Resource group | Select **test-rg**. |
+ | **Instance details** | |
+ | Virtual machine name | Enter **vm-nva**. |
+ | Region | Select **(US) East US 2**. |
+ | Availability options | Select **No infrastructure redundancy required**. |
| Security type | Select **Standard**. |
- | Image | Select **Windows Server 2019 Datacenter - Gen2**. |
- | Azure Spot instance | Select **No**. |
- | Size | Choose VM size or take default setting. |
- | **Administrator account** | |
+ | Image | Select **Ubuntu Server 22.04 LTS - x64 Gen2**. |
+ | VM architecture | Leave the default of **x64**. |
+ | Size | Select a size. |
+ | **Administrator account** | |
+ | Authentication type | Select **Password**. |
| Username | Enter a username. |
- | Password | Enter a password. The password must be at least 12 characters long and meet the [defined complexity requirements](../virtual-machines/windows/faq.yml?toc=%2fazure%2fvirtual-network%2ftoc.json#what-are-the-password-requirements-when-creating-a-vm-).|
+ | Password | Enter a password. |
| Confirm password | Reenter password. |
- | **Inbound port rules** | |
+ | **Inbound port rules** | |
| Public inbound ports | Select **None**. |
-3. Select the **Networking** tab, or select **Next: Disks**, then **Next: Networking**.
-
-4. In the Networking tab, select or enter:
+1. Select **Next: Disks** then **Next: Networking**.
+
+1. In the Networking tab, enter or select the following information:
| Setting | Value |
- |-|-|
- | **Network interface** | |
- | Virtual network | Select **myVirtualNetwork**. |
- | Subnet | Select **DMZ** |
- | Public IP | Select **None** |
- | NIC network security group | Select **Basic**|
- | Public inbound ports network | Select **None**. |
-
-5. Select the **Review + create** tab, or select **Review + create** button at the bottom of the page.
-
-6. Review the settings, and then select **Create**.
+ | - | -- |
+ | **Network interface** | |
+ | Virtual network | Select **vnet-1**. |
+ | Subnet | Select **subnet-dmz (10.0.3.0/24)**. |
+ | Public IP | Select **None**. |
+ | NIC network security group | Select **Advanced**. |
+ | Configure network security group | Select **Create new**. </br> In **Name** enter **nsg-nva**. </br> Select **OK**. |
-## Create public and private virtual machines
+1. Leave the rest of the options at the defaults and select **Review + create**.
+
+1. Select **Create**.
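For reference, a minimal Azure CLI sketch of the same NVA deployment follows. The flags are standard Azure CLI; `<password>` is a placeholder you replace.

```azurecli
# Create the NVA VM in subnet-dmz with no public IP address and a new
# network security group named nsg-nva.
az vm create \
  --resource-group test-rg \
  --name vm-nva \
  --location eastus2 \
  --image Ubuntu2204 \
  --vnet-name vnet-1 \
  --subnet subnet-dmz \
  --public-ip-address "" \
  --nsg nsg-nva \
  --admin-username azureuser \
  --admin-password '<password>'   # placeholder; replace with your own
```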
-You'll create two virtual machines in **myVirtualNetwork** virtual network, then you'll allow Internet Control Message Protocol (ICMP) on them so you can use *tracert* tool to trace traffic.
+## Create public and private virtual machines
-> [!NOTE]
-> For production environments, we don't recommend allowing ICMP through the Windows Firewall.
+Create two virtual machines in the **vnet-1** virtual network. One virtual machine is in the **subnet-1** subnet, and the other virtual machine is in the **subnet-private** subnet. Use the same virtual machine image for both virtual machines.
### Create public virtual machine
-1. From the Azure portal menu, select **Create a resource** > **Compute** > **Virtual machine**.
-
-2. In **Create a virtual machine**, enter or select this information in the **Basics** tab:
+The public virtual machine simulates a machine in the public internet. The public and private virtual machines are used to test the routing of network traffic through the NVA virtual machine.
+
+1. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines** in the search results.
- | Setting | Value |
- |--|-|
- | **Project Details** | |
+1. Select **+ Create** then **Azure virtual machine**.
+
+1. In **Create a virtual machine** enter or select the following information in the **Basics** tab:
+
+ | Setting | Value |
+ | - | -- |
+ | **Project details** | |
| Subscription | Select your subscription. |
- | Resource Group | Select **myResourceGroup**. |
- | **Instance details** | |
- | Virtual machine name | Enter *myVMPublic*. |
- | Region | Select **(US) East US**. |
- | Availability Options | Select **No infrastructure redundancy required**. |
+ | Resource group | Select **test-rg**. |
+ | **Instance details** | |
+ | Virtual machine name | Enter **vm-public**. |
+ | Region | Select **(US) East US 2**. |
+ | Availability options | Select **No infrastructure redundancy required**. |
| Security type | Select **Standard**. |
- | Image | Select **Windows Server 2019 Datacenter - Gen2**. |
- | Azure Spot instance | Select **No**. |
- | Size | Choose VM size or take default setting. |
- | **Administrator account** | |
+ | Image | Select **Ubuntu Server 22.04 LTS - x64 Gen2**. |
+ | VM architecture | Leave the default of **x64**. |
+ | Size | Select a size. |
+ | **Administrator account** | |
+ | Authentication type | Select **Password**. |
| Username | Enter a username. |
- | Password | Enter a password. The password must be at least 12 characters long and meet the [defined complexity requirements](../virtual-machines/windows/faq.yml?toc=%2fazure%2fvirtual-network%2ftoc.json#what-are-the-password-requirements-when-creating-a-vm-).|
+ | Password | Enter a password. |
| Confirm password | Reenter password. |
- | **Inbound port rules** | |
+ | **Inbound port rules** | |
| Public inbound ports | Select **None**. |
-3. Select the **Networking** tab, or select **Next: Disks**, then **Next: Networking**.
-
-4. In the Networking tab, select or enter:
+1. Select **Next: Disks** then **Next: Networking**.
+
+1. In the Networking tab, enter or select the following information:
| Setting | Value |
- |-|-|
- | **Network interface** | |
- | Virtual network | Select **myVirtualNetwork**. |
- | Subnet | Select **Public**. |
+ | - | -- |
+ | **Network interface** | |
+ | Virtual network | Select **vnet-1**. |
+ | Subnet | Select **subnet-1 (10.0.0.0/24)**. |
| Public IP | Select **None**. |
- | NIC network security group | Select **Basic**. |
- | Public inbound ports network | Select **None**. |
-
-5. Select the **Review + create** tab, or select the blue **Review + create** button at the bottom of the page.
-
-6. Review the settings, and then select **Create**.
+ | NIC network security group | Select **None**. |
+
+1. Leave the rest of the options at the defaults and select **Review + create**.
+
+1. Select **Create**.
### Create private virtual machine
-1. From the Azure portal menu, select **Create a resource** > **Compute** > **Virtual machine**.
-
-2. In **Create a virtual machine**, enter or select this information in the **Basics** tab:
+1. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines** in the search results.
+
+1. Select **+ Create** then **Azure virtual machine**.
- | Setting | Value |
- |--|-|
- | **Project Details** | |
+1. In **Create a virtual machine** enter or select the following information in the **Basics** tab:
+
+ | Setting | Value |
+ | - | -- |
+ | **Project details** | |
| Subscription | Select your subscription. |
- | Resource Group | Select **myResourceGroup**. |
- | **Instance details** | |
- | Virtual machine name | Enter *myVMPrivate*. |
- | Region | Select **(US) East US**. |
- | Availability Options | Select **No infrastructure redundancy required**. |
+ | Resource group | Select **test-rg**. |
+ | **Instance details** | |
+ | Virtual machine name | Enter **vm-private**. |
+ | Region | Select **(US) East US 2**. |
+ | Availability options | Select **No infrastructure redundancy required**. |
| Security type | Select **Standard**. |
- | Image | Select **Windows Server 2019 Datacenter - Gen2**. |
- | Azure Spot instance | Select **No**. |
- | Size | Choose VM size or take default setting. |
- | **Administrator account** | |
+ | Image | Select **Ubuntu Server 22.04 LTS - x64 Gen2**. |
+ | VM architecture | Leave the default of **x64**. |
+ | Size | Select a size. |
+ | **Administrator account** | |
+ | Authentication type | Select **Password**. |
| Username | Enter a username. |
- | Password | Enter a password. The password must be at least 12 characters long and meet the [defined complexity requirements](../virtual-machines/windows/faq.yml?toc=%2fazure%2fvirtual-network%2ftoc.json#what-are-the-password-requirements-when-creating-a-vm-).|
+ | Password | Enter a password. |
| Confirm password | Reenter password. |
- | **Inbound port rules** | |
+ | **Inbound port rules** | |
| Public inbound ports | Select **None**. |
-3. Select the **Networking** tab, or select **Next: Disks**, then **Next: Networking**.
-
-4. In the Networking tab, select or enter:
+1. Select **Next: Disks** then **Next: Networking**.
+
+1. In the Networking tab, enter or select the following information:
| Setting | Value |
- |-|-|
- | **Network interface** | |
- | Virtual network | Select **myVirtualNetwork**. |
- | Subnet | Select **Private**. |
+ | - | -- |
+ | **Network interface** | |
+ | Virtual network | Select **vnet-1**. |
+ | Subnet | Select **subnet-private (10.0.2.0/24)**. |
| Public IP | Select **None**. |
- | NIC network security group | Select **Basic**. |
- | Public inbound ports network | Select **None**. |
-
-5. Select the **Review + create** tab, or select the blue **Review + create** button at the bottom of the page.
-
-6. Review the settings, and then select **Create**.
+ | NIC network security group | Select **None**. |
-### Allow ICMP in Windows firewall
+1. Leave the rest of the options at the defaults and select **Review + create**.
-1. Select **Go to resource** or Search for *myVMPrivate* in the portal search box.
+1. Select **Create**.
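Because the two virtual machines differ only in name and subnet, you can script both with one loop. This is a sketch using standard Azure CLI; `<password>` is a placeholder you replace.

```azurecli
# Create vm-public in subnet-1 and vm-private in subnet-private,
# each with no public IP address and no network security group.
for pair in vm-public:subnet-1 vm-private:subnet-private; do
  az vm create \
    --resource-group test-rg \
    --name "${pair%%:*}" \
    --location eastus2 \
    --image Ubuntu2204 \
    --vnet-name vnet-1 \
    --subnet "${pair##*:}" \
    --public-ip-address "" \
    --nsg "" \
    --admin-username azureuser \
    --admin-password '<password>'   # placeholder; replace with your own
done
```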
-1. In the **Overview** page of **myVMPrivate**, select **Connect** then **Bastion**.
+## Enable IP forwarding
-1. Enter the username and password you created for **myVMPrivate** virtual machine previously.
+To route traffic through the NVA, turn on IP forwarding in Azure and in the operating system of **vm-nva**. When IP forwarding is enabled, any traffic received by **vm-nva** that's destined for a different IP address isn't dropped and is forwarded to the correct destination.
-1. Select **Connect** button.
+### Enable IP forwarding in Azure
-1. Open Windows PowerShell after you connect.
+In this section, you turn on IP forwarding for the network interface of the **vm-nva** virtual machine.
-1. Enter this command:
+1. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines** in the search results.
- ```powershell
- New-NetFirewallRule -DisplayName "Allow ICMPv4-In" -Protocol ICMPv4
- ```
+1. In **Virtual machines**, select **vm-nva**.
-1. From PowerShell, open a remote desktop connection to the **myVMPublic** virtual machine:
+1. In **vm-nva**, select **Networking** from the **Settings** section.
- ```powershell
- mstsc /v:myvmpublic
- ```
+1. Select the name of the interface next to **Network Interface:**. The name begins with **vm-nva** and has a random number assigned to the interface. The name of the interface in this example is **vm-nva124**.
-1. After you connect to **myVMPublic** VM, open Windows PowerShell and enter the same command from step 6.
+ :::image type="content" source="./media/tutorial-create-route-table-portal/nva-network-interface.png" alt-text="Screenshot of network interface of NVA virtual machine.":::
-1. Close the remote desktop connection to **myVMPublic** VM.
+1. In the network interface overview page, select **IP configurations** from the **Settings** section.
-## Turn on IP forwarding
+1. In **IP configurations**, select the box next to **Enable IP forwarding**.
-To route traffic through the NVA, turn on IP forwarding in Azure and in the operating system of **myVMNVA** virtual machine. Once IP forwarding is enabled, any traffic received by **myVMNVA** VM that's destined for a different IP address, won't be dropped and will be forwarded to the correct destination.
+ :::image type="content" source="./media/tutorial-create-route-table-portal/enable-ip-forwarding.png" alt-text="Screenshot of enablement of IP forwarding.":::
-### Turn on IP forwarding in Azure
+1. Select **Apply**.
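The same change can be made with the Azure CLI, which avoids looking up the generated NIC name by hand. The commands and the JMESPath query are standard Azure CLI; this sketch assumes the VM has a single network interface.

```azurecli
# Find the ID of the first (and only) NIC attached to vm-nva.
nic_id=$(az vm show \
  --resource-group test-rg \
  --name vm-nva \
  --query 'networkProfile.networkInterfaces[0].id' \
  --output tsv)

# Enable IP forwarding on that network interface.
az network nic update --ids "$nic_id" --ip-forwarding true
```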
-In this section, you'll turn on IP forwarding for the network interface of **myVMNVA** virtual machine in Azure.
+### Enable IP forwarding in the operating system
-1. Search for *myVMNVA* in the portal search box.
+In this section, turn on IP forwarding for the operating system of the **vm-nva** virtual machine to forward network traffic. Use the Azure Bastion service to connect to the **vm-nva** virtual machine.
-3. In the **myVMNVA** overview page, select **Networking** from the **Settings** section.
+1. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines** in the search results.
-4. In the **Networking** page of **myVMNVA**, select the network interface next to **Network Interface:**. The name of the interface will begin with **myvmnva**.
+1. In **Virtual machines**, select **vm-nva**.
- :::image type="content" source="./media/tutorial-create-route-table-portal/virtual-machine-networking.png" alt-text="Screenshot showing Networking page of network virtual appliance virtual machine in Azure portal." border="true":::
+1. Select **Bastion** in the **Operations** section.
-5. In the network interface overview page, select **IP configurations** from the **Settings** section.
+1. Enter the username and password you entered when the virtual machine was created.
-6. In the **IP configurations** page, set **IP forwarding** to **Enabled**, then select **Save**.
+1. Select **Connect**.
- :::image type="content" source="./media/tutorial-create-route-table-portal/enable-ip-forwarding.png" alt-text="Screenshot showing Enabled I P forwarding in Azure portal." border="true":::
+1. At the prompt of the virtual machine, enter the following command to open the configuration file that controls IP forwarding:
-### Turn on IP forwarding in the operating system
+ ```bash
+ sudo vim /etc/sysctl.conf
+ ```
-In this section, you'll turn on IP forwarding for the operating system of **myVMNVA** virtual machine to forward network traffic. You'll use the same bastion connection to **myVMPrivate** VM, that you started in the previous steps, to open a remote desktop connection to **myVMNVA** VM.
+1. In the Vim editor, remove the **`#`** from the line **`net.ipv4.ip_forward=1`**:
-1. From PowerShell on **myVMPrivate** VM, open a remote desktop connection to the **myVMNVA** VM:
+ Press the **Insert** key.
- ```powershell
- mstsc /v:myvmnva
+ ```bash
+ # Uncomment the next line to enable packet forwarding for IPv4
+ net.ipv4.ip_forward=1
```
-2. After you connect to **myVMNVA** VM, open Windows PowerShell and enter this command to turn on IP forwarding:
+ Press the **Esc** key.
- ```powershell
- Set-ItemProperty -Path HKLM:\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters -Name IpEnableRouter -Value 1
- ```
+ Enter **`:wq`** and press **Enter**.
-3. Restart **myVMNVA** VM.
+1. Close the Bastion session.
- ```powershell
- Restart-Computer
- ```
+1. Restart the virtual machine.
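If you'd rather not edit the file interactively, the same change can be scripted from the Bastion session. This sketch uses standard Linux tools and assumes the stock Ubuntu `/etc/sysctl.conf`, where the line ships commented out.

```bash
# Uncomment net.ipv4.ip_forward=1 so the setting persists across reboots.
sudo sed -i 's/^#\s*net.ipv4.ip_forward=1/net.ipv4.ip_forward=1/' /etc/sysctl.conf

# Apply the setting to the running kernel immediately.
sudo sysctl -w net.ipv4.ip_forward=1

# Verify; this should print: net.ipv4.ip_forward = 1
sysctl net.ipv4.ip_forward
```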
## Create a route table
-In this section, you'll create a route table.
+In this section, create a route table to define the route of the traffic through the NVA virtual machine. The route table is associated to the **subnet-1** subnet where the **vm-public** virtual machine is deployed.
-1. From the Azure portal menu, select **+ Create a resource** > **Networking** > **Route table**, or search for *Route table* in the portal search box.
+1. In the search box at the top of the portal, enter **Route table**. Select **Route tables** in the search results.
-3. Select **Create**.
+1. Select **+ Create**.
-4. On the **Basics** tab of **Create route table**, enter or select this information:
+1. In **Create Route table** enter or select the following information:
| Setting | Value | | - | -- | | **Project details** | |
- | Subscription | Select your subscription.|
- | Resource group | Select **myResourceGroup**. |
- | **Instance details** | |
- | Region | Select **East US**. |
- | Name | Enter *myRouteTablePublic*. |
- | Propagate gateway routes | Select **Yes**. |
+ | Subscription | Select your subscription. |
+ | Resource group | Select **test-rg**. |
+ | **Instance details** | |
+ | Region | Select **East US 2**. |
+ | Name | Enter **route-table-public**. |
+ | Propagate gateway routes | Leave the default of **Yes**. |
- :::image type="content" source="./media/tutorial-create-route-table-portal/create-route-table.png" alt-text="Screenshot showing Basics tab of Create route table in Azure portal." border="true":::
+1. Select **Review + create**.
-5. Select the **Review + create** tab, or select the blue **Review + create** button at the bottom of the page.
+1. Select **Create**.
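The equivalent Azure CLI call is a one-liner. Gateway route propagation is enabled by default, matching the portal value above; `--disable-bgp-route-propagation` would turn it off.

```azurecli
# Create the route table in the same region as the virtual network.
az network route-table create \
  --resource-group test-rg \
  --name route-table-public \
  --location eastus2
```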
## Create a route
-In this section, you'll create a route in the route table that you created in the previous steps.
+In this section, create a route in the route table that you created in the previous steps.
+
+1. In the search box at the top of the portal, enter **Route table**. Select **Route tables** in the search results.
-1. Select **Go to resource** or Search for *myRouteTablePublic* in the portal search box.
+1. Select **route-table-public**.
-3. In the **myRouteTablePublic** page, select **Routes** from the **Settings** section.
+1. In **Settings** select **Routes**.
-4. In the **Routes** page, select the **+ Add** button.
+1. Select **+ Add** in **Routes**.
-5. In **Add route**, enter or select this information:
+1. Enter or select the following information in **Add route**:
| Setting | Value | | - | -- |
- | Route name | Enter *ToPrivateSubnet*. |
- | Address prefix destination | Select **IP Addresses**. |
- | Destination IP addresses/CIDR ranges| Enter *10.0.1.0/24* (The address range of the **Private** subnet created earlier). |
+ | Route name | Enter **to-private-subnet**. |
+ | Destination type | Select **IP Addresses**. |
+ | Destination IP addresses/CIDR ranges | Enter **10.0.2.0/24**. |
| Next hop type | Select **Virtual appliance**. |
- | Next hop address | Enter *10.0.2.4* (The address of **myVMNVA** VM created earlier in the **DMZ** subnet). |
-
- :::image type="content" source="./media/tutorial-create-route-table-portal/add-route-inline.png" alt-text="Screenshot showing Add route configuration in Azure portal." lightbox="./media/tutorial-create-route-table-portal/add-route-expanded.png":::
-
-6. Select **Add**.
+ | Next hop address | Enter **10.0.3.4**. </br> **_This is the IP address of vm-nva that you created in the earlier steps._** |
-## Associate a route table to a subnet
+ :::image type="content" source="./media/tutorial-create-route-table-portal/add-route.png" alt-text="Screenshot of route creation in route table.":::
-In this section, you'll associate the route table that you created in the previous steps to a subnet.
+1. Select **Add**.
-1. Search for *myVirtualNetwork* in the portal search box.
+1. Select **Subnets** in **Settings**.
-3. In the **myVirtualNetwork** page, select **Subnets** from the **Settings** section.
+1. Select **+ Associate**.
-4. In the virtual network's subnet list, select **Public**.
+1. Enter or select the following information in **Associate subnet**:
-5. In **Route table**, select **myRouteTablePublic** that you created in the previous steps.
-
-6. Select **Save** to associate your route table to the **Public** subnet.
+ | Setting | Value |
+ | - | -- |
+ | Virtual network | Select **vnet-1 (test-rg)**. |
+ | Subnet | Select **subnet-1**. |
- :::image type="content" source="./media/tutorial-create-route-table-portal/associate-route-table-inline.png" alt-text="Screenshot showing Associate route table to the Public subnet in the virtual network in Azure portal." lightbox="./media/tutorial-create-route-table-portal/associate-route-table-expanded.png":::
+1. Select **OK**.
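Scripted, the route and the subnet association look like the following sketch. The commands are standard Azure CLI; the values mirror the portal steps above.

```azurecli
# Send traffic bound for subnet-private (10.0.2.0/24) to the NVA at 10.0.3.4.
az network route-table route create \
  --resource-group test-rg \
  --route-table-name route-table-public \
  --name to-private-subnet \
  --address-prefix 10.0.2.0/24 \
  --next-hop-type VirtualAppliance \
  --next-hop-ip-address 10.0.3.4

# Associate the route table with subnet-1 so the route takes effect there.
az network vnet subnet update \
  --resource-group test-rg \
  --vnet-name vnet-1 \
  --name subnet-1 \
  --route-table route-table-public
```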
## Test the routing of network traffic
-You'll test routing of network traffic using [tracert](/windows-server/administration/windows-commands/tracert) tool from **myVMPublic** VM to **myVMPrivate** VM, and then you'll test the routing in the opposite direction.
+Test the routing of network traffic from **vm-public** to **vm-private**, and then test the routing in the opposite direction, from **vm-private** to **vm-public**.
-### Test network traffic from myVMPublic VM to myVMPrivate VM
+### Test network traffic from vm-public to vm-private
-1. From PowerShell on **myVMPrivate** VM, open a remote desktop connection to the **myVMPublic** VM:
+1. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines** in the search results.
- ```powershell
- mstsc /v:myvmpublic
- ```
+1. In **Virtual machines**, select **vm-public**.
-2. After you connect to **myVMPublic** VM, open Windows PowerShell and enter this *tracert* command to trace the routing of network traffic from **myVMPublic** VM to **myVMPrivate** VM:
+1. Select **Bastion** in the **Operations** section.
+1. Enter the username and password you entered when the virtual machine was created.
- ```powershell
- tracert myvmprivate
- ```
+1. Select **Connect**.
- The response is similar to this example:
+1. In the prompt, enter the following command to trace the routing of network traffic from **vm-public** to **vm-private**:
- ```powershell
- Tracing route to myvmprivate.q04q2hv50taerlrtdyjz5nza1f.bx.internal.cloudapp.net [10.0.1.4]
- over a maximum of 30 hops:
+ ```bash
+ tracepath vm-private
+ ```
- 1 1 ms * 2 ms myvmnva.internal.cloudapp.net [10.0.2.4]
- 2 2 ms 1 ms 1 ms myvmprivate.internal.cloudapp.net [10.0.1.4]
+ The response is similar to the following example:
- Trace complete.
+ ```output
+ azureuser@vm-public:~$ tracepath vm-private
+ 1?: [LOCALHOST] pmtu 1500
+ 1: vm-nva.internal.cloudapp.net 1.766ms
+ 1: vm-nva.internal.cloudapp.net 1.259ms
+ 2: vm-private.internal.cloudapp.net 2.202ms reached
+ Resume: pmtu 1500 hops 2 back 1
```
- You can see that there are two hops in the above response for *tracert* ICMP traffic from **myVMPublic** VM to **myVMPrivate** VM. The first hop is **myVMNVA** VM, and the second hop is the destination **myVMPrivate** VM.
+ You can see that there are two hops in the above response for **`tracepath`** traffic from **vm-public** to **vm-private**. The first hop is **vm-nva**. The second hop is the destination **vm-private**.
- Azure sent the traffic from **Public** subnet through the NVA and not directly to **Private** subnet because you previously added **ToPrivateSubnet** route to **myRouteTablePublic** route table and associated it to **Public** subnet.
+ Azure sent the traffic from **subnet-1** through the NVA and not directly to **subnet-private** because you previously added the **to-private-subnet** route to **route-table-public** and associated it to **subnet-1**.
-1. Close the remote desktop connection to **myVMPublic** VM.
+1. Close the Bastion session.
-### Test network traffic from myVMPrivate VM to myVMPublic VM
+### Test network traffic from vm-private to vm-public
-1. From PowerShell on **myVMPrivate** VM, and enter this *tracert* command to trace the routing of network traffic from **myVmPrivate** VM to **myVmPublic** VM.
-
- ```powershell
- tracert myvmpublic
- ```
+1. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines** in the search results.
- The response is similar to this example:
+1. In **Virtual machines**, select **vm-private**.
- ```powershell
- Tracing route to myvmpublic.q04q2hv50taerlrtdyjz5nza1f.bx.internal.cloudapp.net [10.0.0.4]
- over a maximum of 30 hops:
+1. Select **Bastion** in the **Operations** section.
- 1 1 ms 1 ms 1 ms myvmpublic.internal.cloudapp.net [10.0.0.4]
+1. Enter the username and password you entered when the virtual machine was created.
- Trace complete.
- ```
+1. Select **Connect**.
- You can see that there's one hop in the above response, which is the destination **myVMPublic** virtual machine.
+1. In the prompt, enter the following command to trace the routing of network traffic from **vm-private** to **vm-public**:
- Azure sent the traffic directly from **Private** subnet to **Public** subnet. By default, Azure routes traffic directly between subnets.
+ ```bash
+ tracepath vm-public
+ ```
-1. Close the bastion session.
+ The response is similar to the following example:
-## Clean up resources
+ ```output
+ azureuser@vm-private:~$ tracepath vm-public
+ 1?: [LOCALHOST] pmtu 1500
+ 1: vm-public.internal.cloudapp.net 2.584ms reached
+ 1: vm-public.internal.cloudapp.net 2.147ms reached
+ Resume: pmtu 1500 hops 1 back 2
+ ```
-When the resource group is no longer needed, delete **myResourceGroup** and all the resources it contains:
+ You can see that there's one hop in the above response, which is the destination **vm-public**.
-1. Enter *myResourceGroup* in the **Search** box at the top of the Azure portal. When you see **myResourceGroup** in the search results, select it.
+ Azure sent the traffic directly from **subnet-private** to **subnet-1**. By default, Azure routes traffic directly between subnets.
-1. Select **Delete resource group**.
+1. Close the Bastion session.
-1. Enter *myResourceGroup* for **TYPE THE RESOURCE GROUP NAME:** and select **Delete**.
## Next steps In this tutorial, you: * Created a route table and associated it to a subnet.+ * Created a simple NVA that routed traffic from a public subnet to a private subnet.
-You can deploy different pre-configured NVAs from the [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/category/networking), which provide many useful network functions.
+You can deploy different preconfigured NVAs from the [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/category/networking), which provide many useful network functions.
To learn more about routing, see [Routing overview](virtual-networks-udr-overview.md) and [Manage a route table](manage-route-table.md).
virtual-network Virtual Network Bandwidth Testing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-network-bandwidth-testing.md
You can test throughput from Windows VMs by using [NTTTCP](https://github.com/mi
# [Windows](#tab/windows)
-### Set up NTTTPS and test configuration
+### Set up NTTTCP and test configuration
1. On both the sender and receiver VMs, [download the latest version of NTTTCP](https://github.com/microsoft/ntttcp/releases/latest) into a separate folder like *c:\\tools*.
Run *ntttcp.exe* from the Windows command line, not from PowerShell. Run the tes
# [Linux](#tab/linux)
-### Prepare VMs and install NTTTPS-for-Linux
+### Prepare VMs and install NTTTCP-for-Linux
To measure throughput from Linux machines, use [NTTTCP-for-Linux](https://github.com/Microsoft/ntttcp-for-linux).
virtual-network Virtual Network Encryption Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-network-encryption-overview.md
Virtual network encryption has the following requirements:
- Global Peering is supported in regions where virtual network encryption is supported. -- Traffic to unsupported Virtual Machines is unencrypted. Use Virtual Network Flow Logs to confirm flow encryption between virtual machines. For more information about Virtual Network Flow Logs, see [Virtual Network Flow Logs](/azure/network-watcher/network-watcher-nsg-flow-logging-portal).
+- Traffic to unsupported Virtual Machines is unencrypted. Use Virtual Network Flow Logs to confirm flow encryption between virtual machines. For more information, see [VNet flow logs](../network-watcher/vnet-flow-logs-overview.md).
- The start/stop of existing virtual machines may be required after enabling encryption in a virtual network. ## Availability
virtual-network Virtual Network Manage Peering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-network-manage-peering.md
Title: Create, change, or delete an Azure virtual network peering description: Learn how to create, change, or delete a virtual network peering. With virtual network peering, you connect virtual networks in the same region and across regions.- tags: azure-resource-manager - Previously updated : 11/14/2022 Last updated : 08/24/2023 -+ # Create, change, or delete a virtual network peering
-Learn how to create, change, or delete a virtual network peering. Virtual network peering enables you to connect virtual networks in the same region and across regions (also known as Global VNet Peering) through the Azure backbone network. Once peered, the virtual networks are still managed as separate resources. If you're new to virtual network peering, you can learn more about it in the [virtual network peering overview](virtual-network-peering-overview.md) or by completing the [virtual network peering tutorial](tutorial-connect-virtual-networks-portal.md).
+Learn how to create, change, or delete a virtual network peering. Virtual network peering enables you to connect virtual networks in the same region and across regions (also known as Global Virtual Network Peering) through the Azure backbone network. Once peered, the virtual networks are still managed as separate resources. If you're new to virtual network peering, you can learn more about it in the [virtual network peering overview](virtual-network-peering-overview.md) or by completing the [virtual network peering tutorial](tutorial-connect-virtual-networks-portal.md).
## Prerequisites If you don't have an Azure account with an active subscription, [create one for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). Complete one of these tasks before starting the remainder of this article: -- **Portal users**: Sign in to the [Azure portal](https://portal.azure.com) with an Azure account that has the [necessary permissions](#permissions) to work with peerings.
+# [**Portal**](#tab/peering-portal)
+
+Sign in to the [Azure portal](https://portal.azure.com) with an Azure account that has the [necessary permissions](#permissions) to work with peerings.
+
+# [**PowerShell**](#tab/peering-powershell)
-- **PowerShell users**: Either run the commands in the [Azure Cloud Shell](https://shell.azure.com/powershell), or run PowerShell locally from your computer. The Azure Cloud Shell is a free interactive shell that you can use to run the steps in this article. It has common Azure tools preinstalled and configured to use with your account. In the Azure Cloud Shell browser tab, find the **Select environment** dropdown list, then pick **PowerShell** if it isn't already selected.
+Either run the commands in the [Azure Cloud Shell](https://shell.azure.com/powershell), or run PowerShell locally from your computer. The Azure Cloud Shell is a free interactive shell that you can use to run the steps in this article. It has common Azure tools preinstalled and configured to use with your account. In the Azure Cloud Shell browser tab, find the **Select environment** dropdown list, then pick **PowerShell** if it isn't already selected.
- If you're running PowerShell locally, use Azure PowerShell module version 1.0.0 or later. Run `Get-Module -ListAvailable Az.Network` to find the installed version. If you need to install or upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). Run `Connect-AzAccount` to sign in to Azure with an account that has the [necessary permissions](#permissions) to work with VNet peerings.
+If you're running PowerShell locally, use Azure PowerShell module version 1.0.0 or later. Run `Get-Module -ListAvailable Az.Network` to find the installed version. If you need to install or upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). Run `Connect-AzAccount` to sign in to Azure with an account that has the [necessary permissions](#permissions) to work with VNet peerings.
-- **Azure CLI users**: Either run the commands in the [Azure Cloud Shell](https://shell.azure.com/bash), or run Azure CLI locally from your computer. The Azure Cloud Shell is a free interactive shell that you can use to run the steps in this article. It has common Azure tools preinstalled and configured to use with your account. In the Azure Cloud Shell browser tab, find the **Select environment** dropdown list, then pick **Bash** if it isn't already selected.
+# [**Azure CLI**](#tab/peering-cli)
- If you're running Azure CLI locally, use Azure CLI version 2.0.31 or later. Run `az --version` to find the installed version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli). Run `az login` to sign in to Azure with an account that has the [necessary permissions](#permissions) to work with VNet peerings.
+Either run the commands in the [Azure Cloud Shell](https://shell.azure.com/bash), or run Azure CLI locally from your computer. The Azure Cloud Shell is a free interactive shell that you can use to run the steps in this article. It has common Azure tools preinstalled and configured to use with your account. In the Azure Cloud Shell browser tab, find the **Select environment** dropdown list, then pick **Bash** if it isn't already selected.
+If you're running Azure CLI locally, use Azure CLI version 2.0.31 or later. Run `az --version` to find the installed version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli). Run `az login` to sign in to Azure with an account that has the [necessary permissions](#permissions) to work with VNet peerings.
-The account you log into, or connect to Azure with, must be assigned to the [network contributor](../role-based-access-control/built-in-roles.md#network-contributor) role or to a [custom role](../role-based-access-control/custom-roles.md) that gets assigned the appropriate actions listed in [Permissions](#permissions).
+The account you use to connect to Azure must be assigned the [network contributor](../role-based-access-control/built-in-roles.md#network-contributor) role or a [custom role](../role-based-access-control/custom-roles.md) that's assigned the appropriate actions listed in [Permissions](#permissions).
++ ## Create a peering
Before creating a peering, familiarize yourself with the [requirements and const
# [**Portal**](#tab/peering-portal)
-1. In the search box at the top of the Azure portal, enter *Virtual networks* in the search box. When **Virtual networks** appear in the search results, select it. Don't select **Virtual networks (classic)**, as you can't create a peering from a virtual network deployed through the classic deployment model.
-
- :::image type="content" source="./media/virtual-network-manage-peering/search-vnet.png" alt-text="Screenshot of searching for virtual networks.":::
-
-1. Select the virtual network in the list that you want to create a peering for.
+1. In the search box at the top of the Azure portal, enter **Virtual network**. Select **Virtual networks** in the search results.
- :::image type="content" source="./media/virtual-network-manage-peering/select-vnet.png" alt-text="Screenshot of selecting VNetA from the virtual networks page.":::
+1. In **Virtual networks**, select the network you want to create a peering for.
-1. Select **Peerings** under **Settings** and then select **+ Add**.
+1. Select **Peerings** in **Settings**.
- :::image type="content" source="./media/virtual-network-manage-peering/vneta-peerings.png" alt-text="Screenshot of peerings page for VNetA.":::
+1. Select **+ Add**.
1. <a name="add-peering"></a>Enter or select values for the following settings, and then select **Add**.
Before creating a peering, familiarize yourself with the [requirements and const
| -- | -- | | **This virtual network** | | | Peering link name | The name of the peering on this virtual network. The name must be unique within the virtual network. |
- | Traffic to remote virtual network | - Select **Allow (default)** if you want to enable communication between the two virtual networks through the default `VirtualNetwork` flow. Enabling communication between virtual networks allows resources that are connected to either virtual network to communicate with each other over the Azure private network. The **VirtualNetwork** service tag for network security groups encompasses the virtual network and peered virtual network when this setting is set to **Allow**. To learn more about service tags, see [Azure service tags](./service-tags-overview.md). </br> - Select **Block all traffic to the remote virtual network** if you don't want traffic to flow to the peered virtual network by default. You can select this setting if you have peering between two virtual networks but occasionally want to disable default traffic flow between the two. You may find enabling/disabling is more convenient than deleting and re-creating peerings. When this setting is selected, traffic doesn't flow between the peered virtual networks by default; however, traffic may still flow if explicitly allowed through a [network security group](./network-security-groups-overview.md) rule that includes the appropriate IP addresses or application security groups. </br></br> **NOTE:** *Selecting the **Block all traffic to remote virtual network** setting only changes the definition of the **VirtualNetwork** service tag. It *doesn't* fully prevent traffic flow across the peer connection, as explained in this setting description.* |
- | Traffic forwarded from remote virtual network | Select **Allow (default)** if you want traffic *forwarded* by a network virtual appliance in the remote virtual network (that didn't originate from the remote virtual network) to flow to this virtual network through a peering. For example, consider three virtual networks named Spoke1, Spoke2, and Hub. A peering exists between each spoke virtual network and the Hub virtual network, but peerings doesn't exist between the spoke virtual networks. A network virtual appliance gets deployed in the Hub virtual network, and user-defined routes gets applied to each spoke virtual network that route traffic between the subnets through the network virtual appliance. If this setting isn't selected for the peering between each spoke virtual network and the hub virtual network, traffic doesn't flow between the spoke virtual networks because the hub isn't forwarding the traffic between the virtual networks. While enabling this capability allows the forwarded traffic through the peering, it doesn't create any user-defined routes or network virtual appliances. User-defined routes and network virtual appliances are created separately. Learn about [user-defined routes](virtual-networks-udr-overview.md#user-defined). </br></br> **NOTE:** *You don't need to select this setting if traffic is forwarded between virtual networks through an Azure VPN Gateway.* |
- | Virtual network gateway or Route Server | Select **Use this virtual network's gateway or Route Server**: </br> - If you have a virtual network gateway deployed in this virtual network and want to allow traffic from the peered virtual network to flow through the gateway. For example, this virtual network may be attached to an on-premises network through a virtual network gateway. The gateway can be an ExpressRoute or VPN gateway. Selecting this setting allows traffic from the peered virtual network to flow through the gateway deployed in this virtual network to the on-premises network. </br> - If you have a Route Server deployed in this virtual network and you want the peered virtual network to communicate with the Route Server to exchange routes. For more information, see [Azure Route Server](../route-server/overview.md). </br></br> If you select **Use this virtual network's gateway or Router Server**, the peered virtual network can't have a gateway configured. The peered virtual network must have the **Use the remote virtual network's gateway or Route Server** selected when setting up the peering from the other virtual network to this virtual network. If you leave this setting as **None (default)**, traffic from the peered virtual network still flows to this virtual network, but can't flow through a virtual network gateway deployed in this virtual network. If the peering is between a virtual network (Resource Manager) and a virtual network (classic), the gateway must be in the virtual network (Resource Manager).</br></br> In addition to forwarding traffic to an on-premises network, a VPN gateway can forward network traffic between virtual networks that are peered with the virtual network the gateway is in, without the virtual networks needing to be peered with each other. Using a VPN gateway to forward traffic is useful when you want to use a VPN gateway in a hub (see the hub and spoke example described for **Allow forwarded traffic**) virtual network to route traffic between spoke virtual networks that aren't peered with each other. To learn more about allowing use of a gateway for transit, see [Configure a VPN gateway for transit in a virtual network peering](../vpn-gateway/vpn-gateway-peering-gateway-transit.md). This scenario requires implementing user-defined routes that specify the virtual network gateway as the next hop type. Learn about [user-defined routes](virtual-networks-udr-overview.md#user-defined). You can only specify a VPN gateway as a next hop type in a user-defined route, you can't specify an ExpressRoute gateway as the next hop type in a user-defined route. </br></br> Select **Use the remote virtual network's gateway or Route Server**: </br>- If you want to allow traffic from this virtual network to flow through a virtual network gateway deployed in the virtual network you're peering with. For example, the virtual network you're peering with has a VPN gateway that enables communication to an on-premises network. Selecting this setting allows traffic from this virtual network to flow through the VPN gateway in the peered virtual network. </br>- If you want this virtual network to use the remote Route Server to exchange routes. For more information, see [Azure Route Server](../route-server/overview.md). </br></br> If you select this setting, the peered virtual network must have a virtual network gateway deployed in it, and must have the **Use this virtual network's gateway or Route Server** setting selected. 
If you leave this setting as **None (default)**, traffic from this virtual network can still flow to the peered virtual network, but can't flow through a virtual network gateway in the peered virtual network. Only one peering for this virtual network can have this setting enabled. </br></br> **NOTE:** *You can't use remote gateways if you already have a gateway configured in your virtual network. To learn more about using a gateway for transit, see [Configure a VPN gateway for transit in a virtual network peering](../vpn-gateway/vpn-gateway-peering-gateway-transit.md)*. |
+ | Allow access to remote virtual network | Option is selected by **default**. </br></br> - Select **Allow access to remote virtual network** if you want to enable communication between the two virtual networks through the default `VirtualNetwork` flow. Enabling communication between virtual networks allows resources that are connected to either virtual network to communicate with each other over the Azure private network. The **VirtualNetwork** service tag for network security groups encompasses the virtual network and peered virtual network when this setting is selected. To learn more about service tags, see [Azure service tags](./service-tags-overview.md). |
+ | Allow traffic to remote virtual network | Option is deselected by **default**. </br></br> - Select **Allow traffic to remote virtual network** if you want traffic to flow to the peered virtual network. You can deselect this setting if you have a peering between virtual networks but occasionally want to disable default traffic flow between the two. You may find enabling/disabling is more convenient than deleting and re-creating peerings. When this setting is deselected, traffic doesn't flow between the peered virtual networks. Traffic may still flow if explicitly allowed through a [network security group](./network-security-groups-overview.md) rule that includes the appropriate IP addresses or application security groups. </br></br> **NOTE:** *Deselecting the **Allow traffic to remote virtual network** setting only changes the definition of the **VirtualNetwork** service tag. It *doesn't* fully prevent traffic flow across the peer connection, as explained in this setting description.* |
+ | Allow traffic forwarded from the remote virtual network (allow gateway transit) | Option is deselected by **default**. </br></br> - Select **Allow traffic forwarded from the remote virtual network (allow gateway transit)** if you want traffic *forwarded* by a network virtual appliance in the remote virtual network (that didn't originate from the remote virtual network) to flow to this virtual network through a peering. </br> For example, consider three virtual networks named **Spoke1**, **Spoke2**, and **Hub**. A peering exists between each spoke virtual network and the **Hub** virtual network, but peerings don't exist between the spoke virtual networks. </br> A network virtual appliance is deployed in the **Hub** virtual network. User-defined routes are applied to each spoke virtual network that route traffic between the subnets through the network virtual appliance. If this setting isn't selected for the peering between each spoke virtual network and the **Hub** virtual network, traffic doesn't flow between the spoke virtual networks because the **Hub** isn't forwarding the traffic between the virtual networks. </br> While enabling this capability allows the forwarded traffic through the peering, it doesn't create any user-defined routes or network virtual appliances. User-defined routes and network virtual appliances are created separately. Learn about [user-defined routes](virtual-networks-udr-overview.md#user-defined). </br></br> **NOTE:** *You don't need to select this setting if traffic is forwarded between virtual networks through an Azure VPN Gateway.* |
+ | Use remote virtual network gateway or route server | Option is deselected by **default**. </br></br> Select **Use remote virtual network gateway or route server**: </br> - If you want to allow traffic from this virtual network to flow through a virtual network gateway deployed in the virtual network you're peering with. For example, the virtual network you're peering with has a VPN gateway that enables communication to an on-premises network. Selecting this setting allows traffic from this virtual network to flow through the VPN gateway in the peered virtual network. </br> - If you want this virtual network to use the remote Route Server to exchange routes. For more information, see [Azure Route Server](../route-server/overview.md). </br></br> If you select this setting, the peered virtual network must have a virtual network gateway deployed in it, and must have the **Use current virtual network gateway or route server** setting selected. If you leave this setting deselected (default), traffic from this virtual network can still flow to the peered virtual network, but can't flow through a virtual network gateway in the peered virtual network. Only one peering for this virtual network can have this setting enabled. </br></br> **NOTE:** *You can't use remote gateways if you already have a gateway configured in your virtual network. To learn more about using a gateway for transit, see [Configure a VPN gateway for transit in a virtual network peering](../vpn-gateway/vpn-gateway-peering-gateway-transit.md)*.|
| **Remote virtual network** | |
| Peering link name | The name of the peering on the remote virtual network. The name must be unique within the virtual network. |
| Virtual network deployment model | Select which deployment model the virtual network you want to peer with was deployed through. |
- | I know my resource ID | If you have read access to the virtual network you want to peer with, leave this checkbox unchecked. If you don't have read access to the virtual network or subscription you want to peer with, check this checkbox. Enter the full resource ID of the virtual network you want to peer with in the **Resource ID** box that appeared when you checked the checkbox. The resource ID you enter must be for a virtual network that exists in the same, or [supported different](#requirements-and-constraints) Azure [region](https://azure.microsoft.com/regions) as this virtual network. The full resource ID looks similar to `/subscriptions/<Id>/resourceGroups/<resource-group-name>/providers/Microsoft.Network/virtualNetworks/<virtual-network-name>`. You can get the resource ID for a virtual network by viewing the properties for a virtual network. To learn how to view the properties for a virtual network, see [Manage virtual networks](manage-virtual-network.md#view-virtual-networks-and-settings). If the subscription is associated to a different Azure Active Directory tenant than the subscription with the virtual network you're creating the peering from, first add a user from each tenant as a [guest user](../active-directory/external-identities/add-users-administrator.md#add-guest-users-to-the-directory) in the opposite tenant. |
- | Resource ID | This field appears when you check **I know my resource ID** checkbox. The resource ID you enter must be for a virtual network that exists in the same, or [supported different](#requirements-and-constraints) Azure [region](https://azure.microsoft.com/regions) as this virtual network. The full resource ID looks similar to `/subscriptions/<Id>/resourceGroups/<resource-group-name>/providers/Microsoft.Network/virtualNetworks/<virtual-network-name>`. You can get the resource ID for a virtual network by viewing the properties for a virtual network. To learn how to view the properties for a virtual network, see [Manage virtual networks](manage-virtual-network.md#view-virtual-networks-and-settings). If the subscription is associated to a different Azure Active Directory tenant than the subscription with the virtual network you're creating the peering from, first add a user from each tenant as a [guest user](../active-directory/external-identities/add-users-administrator.md#add-guest-users-to-the-directory) in the opposite tenant.
+ | I know my resource ID | If you have read access to the virtual network you want to peer with, leave this checkbox unchecked. If you don't have read access to the virtual network or subscription you want to peer with, select this checkbox. </br> Enter the full resource ID of the virtual network you want to peer with in the **Resource ID** box that appears when you select the checkbox. </br> The resource ID you enter must be for a virtual network that exists in the same, or [supported different](#requirements-and-constraints) Azure [region](https://azure.microsoft.com/regions) as this virtual network. </br></br> The full resource ID looks similar to `/subscriptions/<Id>/resourceGroups/<resource-group-name>/providers/Microsoft.Network/virtualNetworks/<virtual-network-name>`. </br></br> You can get the resource ID for a virtual network by viewing the properties for a virtual network. To learn how to view the properties for a virtual network, see [Manage virtual networks](manage-virtual-network.md#view-virtual-networks-and-settings). If the subscription is associated with a different Azure Active Directory tenant than the subscription of the virtual network you're peering from, user permissions must be assigned first. Add a user from each tenant as a [guest user](../active-directory/external-identities/add-users-administrator.md#add-guest-users-to-the-directory) in the opposite tenant. |
+ | Resource ID | This field appears when you select the **I know my resource ID** checkbox. The resource ID you enter must be for a virtual network that exists in the same, or [supported different](#requirements-and-constraints) Azure [region](https://azure.microsoft.com/regions) as this virtual network. </br> The full resource ID looks similar to `/subscriptions/<Id>/resourceGroups/<resource-group-name>/providers/Microsoft.Network/virtualNetworks/<virtual-network-name>`. </br> You can get the resource ID for a virtual network by viewing the properties for a virtual network. To learn how to view the properties for a virtual network, see [Manage virtual networks](manage-virtual-network.md#view-virtual-networks-and-settings). If the subscription is associated with a different Azure Active Directory tenant than the subscription of the virtual network you're peering from, user permissions must be assigned first. Add a user from each tenant as a [guest user](../active-directory/external-identities/add-users-administrator.md#add-guest-users-to-the-directory) in the opposite tenant. |
| Subscription | Select the [subscription](../azure-glossary-cloud-terminology.md#subscription) of the virtual network you want to peer with. One or more subscriptions are listed, depending on how many subscriptions your account has read access to. If you checked the **I know my resource ID** checkbox, this setting isn't available. |
| Virtual network | Select the virtual network you want to peer with. You can select a virtual network created through either Azure deployment model. If you want to select a virtual network in a different region, you must select a virtual network in a [supported region](#cross-region). You must have read access to the virtual network for it to be visible in the list. If a virtual network is listed, but grayed out, it may be because the address space for the virtual network overlaps with the address space for this virtual network. If virtual network address spaces overlap, they can't be peered. If you checked the **I know my resource ID** checkbox, this setting isn't available. |
- | Traffic to remote virtual network | - Select **Allow (default)** if you want to enable communication between the two virtual networks through the default `VirtualNetwork` flow. Enabling communication between virtual networks allows resources that are connected to either virtual network to communicate with each other over the Azure private network. The **VirtualNetwork** service tag for network security groups encompasses the virtual network and peered virtual network when this setting is set to **Allow**. To learn more about service tags, see [Azure service tags](./service-tags-overview.md). </br> - Select **Block all traffic to the remote virtual network** if you don't want traffic to flow to the peered virtual network by default. You can select this setting if you have peering between two virtual networks but occasionally want to disable default traffic flow between the two. You may find enabling/disabling is more convenient than deleting and re-creating peerings. When this setting is selected, traffic doesn't flow between the peered virtual networks by default; however, traffic may still flow if explicitly allowed through a [network security group](./network-security-groups-overview.md) rule that includes the appropriate IP addresses or application security groups. </br></br> **NOTE:** *Selecting the **Block all traffic to remote virtual network** setting only changes the definition of the **VirtualNetwork** service tag. It *doesn't* fully prevent traffic flow across the peer connection, as explained in this setting description.* |
- | Traffic forwarded from remote virtual network | Select **Allow (default)** if you want traffic *forwarded* by a network virtual appliance in the remote virtual network (that didn't originate from the remote virtual network) to flow to this virtual network through a peering. For example, consider three virtual networks named Spoke1, Spoke2, and Hub. A peering exists between each spoke virtual network and the Hub virtual network, but peerings doesn't exist between the spoke virtual networks. A network virtual appliance gets deployed in the Hub virtual network, and user-defined routes gets applied to each spoke virtual network that route traffic between the subnets through the network virtual appliance. If this setting isn't selected for the peering between each spoke virtual network and the hub virtual network, traffic doesn't flow between the spoke virtual networks because the hub isn't forwarding the traffic between the virtual networks. While enabling this capability allows the forwarded traffic through the peering, it doesn't create any user-defined routes or network virtual appliances. User-defined routes and network virtual appliances are created separately. Learn about [user-defined routes](virtual-networks-udr-overview.md#user-defined). </br></br> **NOTE:** *You don't need to select this setting if traffic is forwarded between virtual networks through an Azure VPN Gateway.* |
- | Virtual network gateway or Route Server | Select **Use this virtual network's gateway or Route Server**: </br>- If you have a virtual network gateway deployed in this virtual network and want to allow traffic from the peered virtual network to flow through the gateway. For example, this virtual network may be attached to an on-premises network through a virtual network gateway. The gateway can be an ExpressRoute or VPN gateway. Selecting this setting allows traffic from the peered virtual network to flow through the gateway deployed in this virtual network to the on-premises network. </br>- If you have a Route Server deployed in this virtual network and you want the peered virtual network to communicate with the Route Server to exchange routes. For more information, see [Azure Route Server](../route-server/overview.md). </br></br> If you select **Use this virtual network's gateway or Router Server**, the peered virtual network can't have a gateway configured. The peered virtual network must have the **Use the remote virtual network's gateway or Route Server** selected when setting up the peering from the other virtual network to this virtual network. If you leave this setting as **None (default)**, traffic from the peered virtual network still flows to this virtual network, but can't flow through a virtual network gateway deployed in this virtual network. If the peering is between a virtual network (Resource Manager) and a virtual network (classic), the gateway must be in the virtual network (Resource Manager).</br></br> In addition to forwarding traffic to an on-premises network, a VPN gateway can forward network traffic between virtual networks that are peered with the virtual network the gateway is in, without the virtual networks needing to be peered with each other. Using a VPN gateway to forward traffic is useful when you want to use a VPN gateway in a hub (see the hub and spoke example described for **Allow forwarded traffic**) virtual network to route traffic between spoke virtual networks that aren't peered with each other. To learn more about allowing use of a gateway for transit, see [Configure a VPN gateway for transit in a virtual network peering](../vpn-gateway/vpn-gateway-peering-gateway-transit.md). This scenario requires implementing user-defined routes that specify the virtual network gateway as the next hop type. Learn about [user-defined routes](virtual-networks-udr-overview.md#user-defined). You can only specify a VPN gateway as a next hop type in a user-defined route, you can't specify an ExpressRoute gateway as the next hop type in a user-defined route. </br></br> Select **Use the remote virtual network's gateway or Route Server**: </br>- If you want to allow traffic from this virtual network to flow through a virtual network gateway deployed in the virtual network you're peering with. For example, the virtual network you're peering with has a VPN gateway that enables communication to an on-premises network. Selecting this setting allows traffic from this virtual network to flow through the VPN gateway in the peered virtual network. </br>- If you want this virtual network to use the remote Route Server to exchange routes. For more information, see [Azure Route Server](../route-server/overview.md). </br></br> If you select this setting, the peered virtual network must have a virtual network gateway deployed in it, and must have the **Use this virtual network's gateway or Route Server** setting selected. 
If you leave this setting as **None (default)**, traffic from this virtual network can still flow to the peered virtual network, but can't flow through a virtual network gateway in the peered virtual network. Only one peering for this virtual network can have this setting enabled. </br></br> **NOTE:** *You can't use remote gateways if you already have a gateway configured in your virtual network. To learn more about using a gateway for transit, see [Configure a VPN gateway for transit in a virtual network peering](../vpn-gateway/vpn-gateway-peering-gateway-transit.md)*. |
+ | Allow access to current virtual network | Option is selected by **default**. </br></br> - Select **Allow access to current virtual network** if you want to enable communication between the two virtual networks through the default `VirtualNetwork` flow. Enabling communication between virtual networks allows resources that are connected to either virtual network to communicate with each other over the Azure private network. The **VirtualNetwork** service tag for network security groups encompasses the virtual network and peered virtual network when this setting is selected. To learn more about service tags, see [Azure service tags](./service-tags-overview.md). |
+ | Allow traffic to current virtual network | Option is selected by **default**. </br></br> - Select **Allow traffic to current virtual network** if you want traffic to flow to the peered virtual network by default. You can deselect this setting if you have a peering between two virtual networks but occasionally want to disable traffic flow between them. You may find enabling/disabling is more convenient than deleting and re-creating peerings. When this setting is deselected, traffic doesn't flow between the peered virtual networks. Traffic may still flow if explicitly allowed through a [network security group](./network-security-groups-overview.md) rule that includes the appropriate IP addresses or application security groups. </br></br> **NOTE:** *Deselecting the **Allow traffic to current virtual network** setting only changes the definition of the **VirtualNetwork** service tag. It **doesn't** fully prevent traffic flow across the peer connection, as explained in this setting description.* |
+ | Allow traffic forwarded from current virtual network (allow gateway transit) | Option is deselected by **default**. </br></br> - Select **Allow traffic forwarded from current virtual network (allow gateway transit)** if you want traffic *forwarded* by a network virtual appliance in the remote virtual network (that didn't originate from the remote virtual network) to flow to this virtual network through a peering. For example, consider three virtual networks named **Spoke1**, **Spoke2**, and **Hub**. A peering exists between each spoke virtual network and the **Hub** virtual network, but peerings don't exist between the spoke virtual networks. A network virtual appliance is deployed in the **Hub** virtual network, and user-defined routes are applied to each spoke virtual network that route traffic between the subnets through the network virtual appliance. If this setting isn't selected for the peering between each spoke virtual network and the **Hub** virtual network, traffic doesn't flow between the spoke virtual networks because the **Hub** isn't forwarding the traffic between the virtual networks. While enabling this capability allows the forwarded traffic through the peering, it doesn't create any user-defined routes or network virtual appliances. User-defined routes and network virtual appliances are created separately. Learn about [user-defined routes](virtual-networks-udr-overview.md#user-defined). </br></br> **NOTE:** *You don't need to select this setting if traffic is forwarded between virtual networks through an Azure VPN Gateway.* |
+ | Use current virtual network gateway or route server | Option is deselected by **default**. </br></br> Select **Use current virtual network gateway or route server**: </br> - If you have a virtual network gateway deployed in this virtual network and want to allow traffic from the peered virtual network to flow through the gateway. For example, this virtual network may be attached to an on-premises network through a virtual network gateway. The gateway can be an ExpressRoute or VPN gateway. Selecting this setting allows traffic from the peered virtual network to flow through the gateway deployed in this virtual network to the on-premises network. </br>- If you have a Route Server deployed in this virtual network and you want the peered virtual network to communicate with the Route Server to exchange routes. For more information, see [Azure Route Server](../route-server/overview.md). </br></br> If you select **Use current virtual network gateway or route server**, the peered virtual network can't have a gateway configured. The peered virtual network must have the **Use remote virtual network gateway or route server** setting selected when setting up the peering from the other virtual network to this virtual network. If you leave this setting deselected (default), traffic from the peered virtual network still flows to this virtual network, but can't flow through a virtual network gateway deployed in this virtual network. If the peering is between a virtual network (Resource Manager) and a virtual network (classic), the gateway must be in the virtual network (Resource Manager).</br></br> In addition to forwarding traffic to an on-premises network, a VPN gateway can forward network traffic between virtual networks that are peered with the virtual network the gateway is in, without the virtual networks needing to be peered with each other. Using a VPN gateway to forward traffic is useful when you want to use a VPN gateway in a hub virtual network (see the hub-and-spoke example described for **Allow traffic forwarded from current virtual network (allow gateway transit)**) to route traffic between spoke virtual networks that aren't peered with each other. To learn more about allowing use of a gateway for transit, see [Configure a VPN gateway for transit in a virtual network peering](../vpn-gateway/vpn-gateway-peering-gateway-transit.md). This scenario requires implementing user-defined routes that specify the virtual network gateway as the next hop type. Learn about [user-defined routes](virtual-networks-udr-overview.md#user-defined). You can only specify a VPN gateway as a next hop type in a user-defined route, you can't specify an ExpressRoute gateway as the next hop type in a user-defined route. </br></br> **NOTE:** *You can't use remote gateways if you already have a gateway configured in your virtual network. To learn more about using a gateway for transit, see [Configure a VPN gateway for transit in a virtual network peering](../vpn-gateway/vpn-gateway-peering-gateway-transit.md)*. |
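If you script peerings instead of using the portal, these checkboxes correspond to switches on the `Add-AzVirtualNetworkPeering` cmdlet. The following is a minimal sketch, assuming the placeholder `vnet-1`, `vnet-2`, `test-rg`, and `test-rg-2` names used in the PowerShell examples later in this article:

```azurepowershell-interactive
## Look up both virtual networks (placeholder names). ##
$vnet1 = Get-AzVirtualNetwork -Name 'vnet-1' -ResourceGroupName 'test-rg'
$vnet2 = Get-AzVirtualNetwork -Name 'vnet-2' -ResourceGroupName 'test-rg-2'

## Create the peering from vnet-1 to vnet-2. Access between the virtual networks
## is allowed unless you add the -BlockVirtualNetworkAccess switch. The
## -AllowForwardedTraffic, -AllowGatewayTransit, and -UseRemoteGateways switches
## correspond to the forwarded-traffic and gateway checkboxes described above.
$peer = @{
    Name                   = 'vnet-1-to-vnet-2'
    VirtualNetwork         = $vnet1
    RemoteVirtualNetworkId = $vnet2.Id
}
Add-AzVirtualNetworkPeering @peer -AllowForwardedTraffic
```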
- :::image type="content" source="./media/virtual-network-manage-peering/add-peering.png" alt-text="Screenshot of peering configuration page." lightbox="./media/virtual-network-manage-peering/add-peering-expanded.png":::
+ :::image type="content" source="./media/virtual-network-manage-peering/add-peering.png" alt-text="Screenshot of peering configuration page.":::
> [!NOTE]
- > If you use a Virtual Network Gateway to send on-premises traffic transitively to a peered VNet, the peered VNet IP range for the on-premises VPN device must be set to 'interesting' traffic. You may need to add all Azure VNet's CIDR addresses to the Site-2-Site IPSec VPN Tunnel configuration on the on-premises VPN device. CIDR addresses include resources like such as Hub, Spokes, and Point-2-Site IP address pools. Otherwise, your on-premises resources won't be able to communicate with resources in the peered VNet.
- > Intersting traffic is communicated through Phase 2 security associations. The security association creates a dedicated VPN tunnel for each specified subnet. The on-premises and Azure VPN Gateway tier have to support the same number of Site-2-Site VPN tunnels and Azure VNet subnets. Otherwise, your on-premises resources won't be able to communicate with resources in the peered VNet. Consult your on-premises VPN documentation for instructions to create Phase 2 security associations for each specified Azure VNet subnet.
+ > If you use a Virtual Network Gateway to send on-premises traffic transitively to a peered virtual network, the peered virtual network IP range for the on-premises VPN device must be set to 'interesting' traffic. You may need to add all Azure virtual network CIDR addresses to the Site-2-Site IPSec VPN Tunnel configuration on the on-premises VPN device. CIDR addresses include resources such as hub, spoke, and Point-2-Site IP address pools. Otherwise, your on-premises resources won't be able to communicate with resources in the peered virtual network.
+ > Interesting traffic is communicated through Phase 2 security associations. The security association creates a dedicated VPN tunnel for each specified subnet. The on-premises and Azure VPN Gateway tier have to support the same number of Site-2-Site VPN tunnels and Azure virtual network subnets. Otherwise, your on-premises resources won't be able to communicate with resources in the peered virtual network. Consult your on-premises VPN documentation for instructions to create Phase 2 security associations for each specified Azure virtual network subnet.
-1. Select the **Refresh** button after a few seconds, and the peering status will change from *Updating* to *Connected*.
+1. Select the **Refresh** button after a few seconds, and the peering status will change from **Updating** to **Connected**.
:::image type="content" source="./media/virtual-network-manage-peering/vnet-peering-connected.png" alt-text="Screenshot of virtual network peering status on peerings page.":::
For step-by-step instructions for implementing peering between virtual networks
Use [Add-AzVirtualNetworkPeering](/powershell/module/az.network/add-azvirtualnetworkpeering) to create virtual network peerings.

```azurepowershell-interactive
-## Place the virtual network VNetA configuration into a variable. ##
-$vnetA = Get-AzVirtualNetwork -Name VNetA -ResourceGroupName myResourceGroup
-## Place the virtual network VNetB configuration into a variable. ##
-$vnetB = Get-AzVirtualNetwork -Name VNetB -ResourceGroupName myResourceGroup
-## Create peering from VNetA to VNetB. ##
-Add-AzVirtualNetworkPeering -Name VNetAtoVNetB -VirtualNetwork $vnetA -RemoteVirtualNetworkId $vnetB.Id
-## Create peering from VNetB to VNetA. ##
-Add-AzVirtualNetworkPeering -Name VNetBtoVNetA -VirtualNetwork $vnetB -RemoteVirtualNetworkId $vnetA.Id
+## Place the virtual network vnet-1 configuration into a variable. ##
+$net1 = @{
+ Name = 'vnet-1'
+ ResourceGroupName = 'test-rg'
+}
+$vnet1 = Get-AzVirtualNetwork @net1
+
+## Place the virtual network vnet-2 configuration into a variable. ##
+$net2 = @{
+ Name = 'vnet-2'
+ ResourceGroupName = 'test-rg-2'
+}
+$vnet2 = Get-AzVirtualNetwork @net2
+
+## Create peering from vnet-1 to vnet-2. ##
+$peer1 = @{
+ Name = 'vnet-1-to-vnet-2'
+ VirtualNetwork = $vnet1
+ RemoteVirtualNetworkId = $vnet2.Id
+}
+Add-AzVirtualNetworkPeering @peer1
+
+## Create peering from vnet-2 to vnet-1. ##
+$peer2 = @{
+ Name = 'vnet-2-to-vnet-1'
+ VirtualNetwork = $vnet2
+ RemoteVirtualNetworkId = $vnet1.Id
+}
+Add-AzVirtualNetworkPeering @peer2
```

# [**Azure CLI**](#tab/peering-cli)
Add-AzVirtualNetworkPeering -Name VNetBtoVNetA -VirtualNetwork $vnetB -RemoteVir
1. Use [az network vnet peering create](/cli/azure/network/vnet/peering#az-network-vnet-peering-create) to create virtual network peerings.

```azurecli-interactive
-## Create peering from VNetA to VNetB. ##
-az network vnet peering create --name VNetAtoVNetB --vnet-name VNetA --remote-vnet VNetB --resource-group myResourceGroup --allow-vnet-access --allow-forwarded-traffic
-## Create peering from VNetB to VNetA. ##
-az network vnet peering create --name VNetBtoVNetA --vnet-name VNetB --remote-vnet VNetA --resource-group myResourceGroup --allow-vnet-access --allow-forwarded-traffic
+## Create peering from vnet-1 to vnet-2. ##
+az network vnet peering create \
+ --name vnet-1-to-vnet-2 \
+ --vnet-name vnet-1 \
+ --remote-vnet vnet-2 \
+ --resource-group test-rg \
+ --allow-vnet-access \
+ --allow-forwarded-traffic
+
+## Create peering from vnet-2 to vnet-1. ##
+az network vnet peering create \
+ --name vnet-2-to-vnet-1 \
+ --vnet-name vnet-2 \
+ --remote-vnet vnet-1 \
+ --resource-group test-rg-2 \
+ --allow-vnet-access \
+ --allow-forwarded-traffic
```
Before changing a peering, familiarize yourself with the [requirements and const
# [**Portal**](#tab/peering-portal)
-1. Select the virtual network that you would like to view or change its peering settings.
+1. In the search box at the top of the Azure portal, enter **Virtual network**. Select **Virtual networks** in the search results.
- :::image type="content" source="./media/virtual-network-manage-peering/vnet-list.png" alt-text="Screenshot of the list of virtual networks in the subscription.":::
+1. In **Virtual networks**, select the virtual network whose peering settings you want to view or change.
-1. Select **Peerings** under *Settings* and then select the peering you want to view or change settings for.
+1. Select **Peerings** in **Settings** and then select the peering you want to view or change settings for.
:::image type="content" source="./media/virtual-network-manage-peering/select-peering.png" alt-text="Screenshot of select a peering to change settings from the virtual network.":::
Before changing a peering, familiarize yourself with the [requirements and const
:::image type="content" source="./media/virtual-network-manage-peering/change-peering-settings.png" alt-text="Screenshot of changing virtual network peering settings."::: - # [**PowerShell**](#tab/peering-powershell) Use [Get-AzVirtualNetworkPeering](/powershell/module/az.network/get-azvirtualnetworkpeering) to list peerings of a virtual network and their settings. ```azurepowershell-interactive
-Get-AzVirtualNetworkPeering -VirtualNetworkName VNetA -ResourceGroupName myResourceGroup
+$peer = @{
+ VirtualNetworkName = 'vnet-1'
+ ResourceGroupName = 'test-rg'
+}
+Get-AzVirtualNetworkPeering @peer
```

Use [Set-AzVirtualNetworkPeering](/powershell/module/az.network/set-azvirtualnetworkpeering) to change peering settings.

```azurepowershell-interactive
## Place the virtual network peering configuration into a variable. ##
-$peering = Get-AzVirtualNetworkPeering -VirtualNetworkName VNetA -ResourceGroupName myResourceGroup -Name VNetAtoVNetB
+$peer = @{
+ Name = 'vnet-1-to-vnet-2'
+ VirtualNetworkName = 'vnet-1'
+ ResourceGroupName = 'test-rg'
+}
+$peering = Get-AzVirtualNetworkPeering @peer
## Allow traffic forwarded from remote virtual network. ##
$peering.AllowForwardedTraffic = $True

## Update the peering with changes made. ##
Set-AzVirtualNetworkPeering -VirtualNetworkPeering $peering
```

# [**Azure CLI**](#tab/peering-cli)

Use [az network vnet peering list](/cli/azure/network/vnet/peering#az-network-vnet-peering-list) to list peerings of a virtual network.

```azurecli-interactive
-az network vnet peering list --resource-group myResourceGroup --vnet-name VNetA --out table
+az network vnet peering list \
+ --resource-group test-rg \
+ --vnet-name vnet-1 \
+ --out table
```

Use [az network vnet peering show](/cli/azure/network/vnet/peering#az-network-vnet-peering-show) to show settings for a specific peering.

```azurecli-interactive
-az network vnet peering show --resource-group myResourceGroup --name VNetAtoVNetB --vnet-name VNetA
+az network vnet peering show \
+ --resource-group test-rg \
+ --name vnet-1-to-vnet-2 \
+ --vnet-name vnet-1
```

Use [az network vnet peering update](/cli/azure/network/vnet/peering#az-network-vnet-peering-update) to change peering settings.

```azurecli-interactive
## Block traffic forwarded from remote virtual network. ##
-az network vnet peering update --resource-group myResourceGroup --name VNetAtoVNetB --vnet-name VNetA --set allowForwardedTraffic=false
+az network vnet peering update \
+ --resource-group test-rg \
+ --name vnet-1-to-vnet-2 \
+ --vnet-name vnet-1 \
+ --set allowForwardedTraffic=false
```
Before deleting a peering, familiarize yourself with the [requirements and const
# [**Portal**](#tab/peering-portal)
-When a peering between two virtual networks is deleted, traffic can no longer flow between the virtual networks. If you want virtual networks to communicate sometimes, but not always, rather than deleting a peering, you can set the **Traffic to remote virtual network** setting to **Block all traffic to the remote virtual network** instead. You may find disabling and enabling network access easier than deleting and recreating peerings.
+When a peering between two virtual networks is deleted, traffic can no longer flow between the virtual networks. If you want virtual networks to communicate sometimes, but not always, rather than deleting a peering, deselect the **Allow traffic to remote virtual network** setting to block traffic to the remote virtual network. You may find disabling and enabling network access easier than deleting and recreating peerings.
-1. Select the virtual network in the list that you want to delete a peering for.
+1. In the search box at the top of the Azure portal, enter **Virtual network**. Select **Virtual networks** in the search results.
- :::image type="content" source="./media/virtual-network-manage-peering/vnet-list.png" alt-text="Screenshot of selecting a virtual network in the subscription.":::
+1. In **Virtual networks**, select the virtual network you want to delete a peering from.
-1. Select **Peerings** under *Settings*.
+1. Select **Peerings** in **Settings**.
:::image type="content" source="./media/virtual-network-manage-peering/select-peering.png" alt-text="Screenshot of select a peering to delete from the virtual network.":::
When a peering between two virtual networks is deleted, traffic can no longer fl
Use [Remove-AzVirtualNetworkPeering](/powershell/module/az.network/remove-azvirtualnetworkpeering) to delete virtual network peerings.

```azurepowershell-interactive
-## Delete VNetA to VNetB peering. ##
-Remove-AzVirtualNetworkPeering -Name VNetAtoVNetB -VirtualNetworkName VNetA -ResourceGroupName myResourceGroup
-## Delete VNetB to VNetA peering. ##
-Remove-AzVirtualNetworkPeering -Name VNetBtoVNetA -VirtualNetworkName VNetB -ResourceGroupName myResourceGroup
+## Delete vnet-1 to vnet-2 peering. ##
+$peer1 = @{
+ Name = 'vnet-1-to-vnet-2'
+ VirtualNetworkName = 'vnet-1'
+ ResourceGroupName = 'test-rg'
+}
+Remove-AzVirtualNetworkPeering @peer1
+
+## Delete vnet-2 to vnet-1 peering. ##
+$peer2 = @{
+ Name = 'vnet-2-to-vnet-1'
+ VirtualNetworkName = 'vnet-2'
+ ResourceGroupName = 'test-rg-2'
+}
+Remove-AzVirtualNetworkPeering @peer2
```

# [**Azure CLI**](#tab/peering-cli)

Use [az network vnet peering delete](/cli/azure/network/vnet/peering#az-network-vnet-peering-delete) to delete virtual network peerings.

```azurecli-interactive
-## Delete VNetA to VNetB peering. ##
-az network vnet peering delete --resource-group myResourceGroup --name VNetAtoVNetB --vnet-name VNetA
-## Delete VNetB to VNetA peering. ##
-az network vnet peering delete --resource-group myResourceGroup --name VNetBtoVNetA --vnet-name VNetB
+## Delete vnet-1 to vnet-2 peering. ##
+az network vnet peering delete \
+ --resource-group test-rg \
+ --name vnet-1-to-vnet-2 \
+ --vnet-name vnet-1
+
+## Delete vnet-2 to vnet-1 peering. ##
+az network vnet peering delete \
+ --resource-group test-rg-2 \
+ --name vnet-2-to-vnet-1 \
+ --vnet-name vnet-2
```

## Requirements and constraints

-- <a name="cross-region"></a>You can peer virtual networks in the same region, or different regions. Peering virtual networks in different regions is also referred to as *Global VNet Peering*.
+- <a name="cross-region"></a>You can peer virtual networks in the same region, or different regions. Peering virtual networks in different regions is also referred to as **Global Virtual Network Peering**.
-- When creating a global peering, the peered virtual networks can exist in any Azure public cloud region or China cloud regions or Government cloud regions. You can't peer across clouds. For example, a VNet in Azure public cloud can't be peered to a VNet in Microsoft Azure operated by 21Vianet cloud.
+- When creating a global peering, the peered virtual networks can exist in any Azure public cloud region or China cloud regions or Government cloud regions. You can't peer across clouds. For example, a virtual network in Azure public cloud can't be peered to a virtual network in Microsoft Azure operated by 21Vianet cloud.
-- Resources in one virtual network can't communicate with the front-end IP address of a Basic Load Balancer (internal or public) in a globally peered virtual network. Support for Basic Load Balancer only exists within the same region. Support for Standard Load Balancer exists for both, VNet Peering and Global VNet Peering. Some services that use a Basic load balancer don't work over global virtual network peering. For more information, see [Constraints related to Global VNet Peering and Load Balancers](virtual-networks-faq.md#what-are-the-constraints-related-to-global-vnet-peering-and-load-balancers).
+- Resources in one virtual network can't communicate with the front-end IP address of a basic load balancer (internal or public) in a globally peered virtual network. Support for basic load balancer only exists within the same region. Support for standard load balancer exists for both Virtual Network Peering and Global Virtual Network Peering. Some services that use a basic load balancer don't work over global virtual network peering. For more information, see [Constraints related to Global Virtual Network Peering and Load Balancers](virtual-networks-faq.md#what-are-the-constraints-related-to-global-virtual-network-peering-and-load-balancers).
- You can use remote gateways or allow gateway transit in globally peered virtual networks and locally peered virtual networks.

- The virtual networks can be in the same, or different [subscriptions](#next-steps). When you peer virtual networks in different subscriptions, both subscriptions can be associated to the same or different Azure Active Directory tenant. If you don't already have an AD tenant, you can [create one](../active-directory/develop/quickstart-create-new-tenant.md).

-- The virtual networks you peer must have non-overlapping IP address spaces.
+- The virtual networks you peer must have nonoverlapping IP address spaces.
- You can peer two virtual networks deployed through Resource Manager or a virtual network deployed through Resource Manager with a virtual network deployed through the classic deployment model. You can't peer two virtual networks created through the classic deployment model. If you're not familiar with Azure deployment models, read the [Understand Azure deployment models](../azure-resource-manager/management/deployment-models.md) article. You can use a [VPN Gateway](../vpn-gateway/design.md#V2V) to connect two virtual networks created through the classic deployment model.

-- When peering two virtual networks created through Resource Manager, a peering must be configured for each virtual network in the peering. You see one of the following types for peering status:
+- When you peer two virtual networks created through Resource Manager, a peering must be configured for each virtual network in the peering. You see one of the following types for peering status:
- - *Initiated:* When you create the first peering, its status is *Initiated*.
- - *Connected:* When you create the second peering, peering status becomes *Connected* for both peerings. The peering isn't successfully established until the peering status for both virtual network peerings is *Connected*.
+ - **Initiated:** When you create the first peering, its status is **Initiated**.
+
+ - **Connected:** When you create the second peering, peering status becomes **Connected** for both peerings. The peering isn't successfully established until the peering status for both virtual network peerings is **Connected**.
+
+- When peering a virtual network created through Resource Manager with a virtual network created through the classic deployment model, you only configure a peering for the virtual network deployed through Resource Manager. You can't configure peering for a virtual network (classic), or between two virtual networks deployed through the classic deployment model. When you create the peering from the virtual network (Resource Manager) to the virtual network (Classic), the peering status is **Updating**, then shortly changes to **Connected**.
-- When peering a virtual network created through Resource Manager with a virtual network created through the classic deployment model, you only configure a peering for the virtual network deployed through Resource Manager. You can't configure peering for a virtual network (classic), or between two virtual networks deployed through the classic deployment model. When you create the peering from the virtual network (Resource Manager) to the virtual network (Classic), the peering status is *Updating*, then shortly changes to *Connected*.

- A peering is established between two virtual networks. Peerings by themselves aren't transitive. If you create peerings between:

  - VirtualNetwork1 and VirtualNetwork2
  - VirtualNetwork2 and VirtualNetwork3
- There's no connectivity between VirtualNetwork1 and VirtualNetwork3 through VirtualNetwork2. If you want VirtualNetwork1 and VirtualNetwork3 to directly communicate, you have to create an explicit peering between VirtualNetwork1 and VirtualNetwork3, or go through an NVA in the Hub network. To learn more, see [Hub-spoke network topology in Azure](/azure/architecture/reference-architectures/hybrid-networking/hub-spoke).
+ There's no connectivity between VirtualNetwork1 and VirtualNetwork3 through VirtualNetwork2. If you want VirtualNetwork1 and VirtualNetwork3 to directly communicate, you have to create an explicit peering between VirtualNetwork1 and VirtualNetwork3, or go through an NVA in the hub network. To learn more, see [Hub-spoke network topology in Azure](/azure/architecture/reference-architectures/hybrid-networking/hub-spoke).
- You can't resolve names in peered virtual networks using default Azure name resolution. To resolve names in other virtual networks, you must use [Azure Private DNS](../dns/private-dns-overview.md) or a custom DNS server. To learn how to set up your own DNS server, see [Name resolution using your own DNS server](virtual-networks-name-resolution-for-vms-and-role-instances.md#name-resolution-that-uses-your-own-dns-server).

- Resources in peered virtual networks in the same region can communicate with each other with the same latency as if they were within the same virtual network. The network throughput is based on the bandwidth that's allowed for the virtual machine, proportionate to its size. There isn't any extra restriction on bandwidth within the peering. Each virtual machine size has its own maximum network bandwidth. To learn more about maximum network bandwidth for different virtual machine sizes, see [Sizes for virtual machines in Azure](../virtual-machines/sizes.md).

- A virtual network can be peered to another virtual network, and also be connected to another virtual network with an Azure virtual network gateway. When virtual networks are connected through both peering and a gateway, traffic between the virtual networks flows through the peering configuration, rather than the gateway.

- Point-to-Site VPN clients must be downloaded again after virtual network peering has been successfully configured to ensure the new routes are downloaded to the client.

- There's a nominal charge for ingress and egress traffic that utilizes a virtual network peering. For more information, see the [pricing page](https://azure.microsoft.com/pricing/details/virtual-network).

## Permissions
az network vnet peering delete --resource-group myResourceGroup --name VNetBtoVN
The accounts you use to work with virtual network peering must be assigned to the following roles:

- [Network Contributor](../role-based-access-control/built-in-roles.md#network-contributor): For a virtual network deployed through Resource Manager.

-- [Classic Network Contributor](../role-based-access-control/built-in-roles.md#classic-network-contributor): For a virtual network deployed through the classic deployment model.
+- [Classic Network Contributor](../role-based-access-control/built-in-roles.md#classic-network-contributor): For a virtual network deployed through the classic deployment model.
If your account isn't assigned to one of the previous roles, it must be assigned to a [custom role](../role-based-access-control/custom-roles.md) that is assigned the necessary actions from the following table:

| Action | Name |
| --- | --- |
-| Microsoft.Network/virtualNetworks/virtualNetworkPeerings/write | Required to create a peering from virtual network A to virtual network B. Virtual network A must be a virtual network (Resource Manager) |
-| Microsoft.Network/virtualNetworks/peer/action | Required to create a peering from virtual network B (Resource Manager) to virtual network A |
-| Microsoft.ClassicNetwork/virtualNetworks/peer/action | Required to create a peering from virtual network B (classic) to virtual network A |
-| Microsoft.Network/virtualNetworks/virtualNetworkPeerings/read | Read a virtual network peering |
-| Microsoft.Network/virtualNetworks/virtualNetworkPeerings/delete | Delete a virtual network peering |
+| **Microsoft.Network/virtualNetworks/virtualNetworkPeerings/write** | Required to create a peering from virtual network A to virtual network B. Virtual network A must be a virtual network (Resource Manager) |
+| **Microsoft.Network/virtualNetworks/peer/action** | Required to create a peering from virtual network B (Resource Manager) to virtual network A |
+| **Microsoft.ClassicNetwork/virtualNetworks/peer/action** | Required to create a peering from virtual network B (classic) to virtual network A |
+| **Microsoft.Network/virtualNetworks/virtualNetworkPeerings/read** | Read a virtual network peering |
+| **Microsoft.Network/virtualNetworks/virtualNetworkPeerings/delete** | Delete a virtual network peering |
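For illustration only, a custom role limited to these actions might be defined as in the following sketch; the role name, description, and subscription scope are placeholder assumptions, not values from this article:

```azurepowershell-interactive
## Hypothetical custom role containing only the peering actions from the preceding table. ##
$roleJson = @'
{
  "Name": "Virtual Network Peering Operator (example)",
  "IsCustom": true,
  "Description": "Can create, read, and delete virtual network peerings.",
  "Actions": [
    "Microsoft.Network/virtualNetworks/virtualNetworkPeerings/write",
    "Microsoft.Network/virtualNetworks/virtualNetworkPeerings/read",
    "Microsoft.Network/virtualNetworks/virtualNetworkPeerings/delete",
    "Microsoft.Network/virtualNetworks/peer/action"
  ],
  "AssignableScopes": [ "/subscriptions/<subscription-id>" ]
}
'@
Set-Content -Path ./peering-role.json -Value $roleJson

## Create the role definition from the JSON file. ##
New-AzRoleDefinition -InputFile ./peering-role.json
```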
## Next steps
If your account isn't assigned to one of the previous roles, it must be assigned
|One Resource Manager, one classic |[Same](create-peering-different-deployment-models.md)|
| |[Different](create-peering-different-deployment-models-subscriptions.md)|

-- Learn how to create a [hub and spoke network topology](/azure/architecture/reference-architectures/hybrid-networking/hub-spoke)
+- Learn how to create a [hub and spoke network topology](/azure/architecture/reference-architectures/hybrid-networking/hub-spoke)
- Create a virtual network peering using [PowerShell](powershell-samples.md) or [Azure CLI](cli-samples.md) sample scripts, or using Azure [Resource Manager templates](template-samples.md)

- Create and assign [Azure Policy definitions](./policy-reference.md) for virtual networks
virtual-network Virtual Network Peering Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-network-peering-overview.md
The following constraints apply only when virtual networks are globally peered:
* Resources in one virtual network can't communicate with the front-end IP address of a Basic Load Balancer (internal or public) in a globally peered virtual network.
-* Some services that use a Basic load balancer don't work over global virtual network peering. For more information, see [What are the constraints related to Global VNet Peering and Load Balancers?](virtual-networks-faq.md#what-are-the-constraints-related-to-global-vnet-peering-and-load-balancers).
+* Some services that use a Basic load balancer don't work over global virtual network peering. For more information, see [What are the constraints related to Global VNet Peering and Load Balancers?](virtual-networks-faq.md#what-are-the-constraints-related-to-global-virtual-network-peering-and-load-balancers).
For more information, see [Requirements and constraints](virtual-network-manage-peering.md#requirements-and-constraints). To learn more about the supported number of peerings, see [Networking limits](../azure-resource-manager/management/azure-subscription-service-limits.md?toc=%2fazure%2fvirtual-network%2ftoc.json#azure-resource-manager-virtual-networking-limits).
Gateway Transit is a peering property that enables a virtual network to utilize
* To learn about all virtual network peering settings, see [Create, change, or delete a virtual network peering](virtual-network-manage-peering.md).
-* For answers to common virtual network peering and global virtual network peering questions, see [VNet Peering](virtual-networks-faq.md#vnet-peering).
+* For answers to common virtual network peering and global virtual network peering questions, see [VNet Peering](virtual-networks-faq.md#virtual-network-peering).
virtual-network Virtual Networks Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-networks-faq.md
# Azure Virtual Network frequently asked questions (FAQ)
-## Virtual Network basics
+## Basics
-### What is an Azure Virtual Network (VNet)?
-An Azure Virtual Network (VNet) is a representation of your own network in the cloud. It is a logical isolation of the Azure cloud dedicated to your subscription. You can use VNets to provision and manage virtual private networks (VPNs) in Azure and, optionally, link the VNets with other VNets in Azure, or with your on-premises IT infrastructure to create hybrid or cross-premises solutions. Each VNet you create has its own CIDR block and can be linked to other VNets and on-premises networks as long as the CIDR blocks do not overlap. You also have control of DNS server settings for VNets, and segmentation of the VNet into subnets.
+### What is a virtual network?
-Use VNets to:
+A virtual network is a representation of your own network in the cloud, as provided by the Azure Virtual Network service. A virtual network is a logical isolation of the Azure cloud that's dedicated to your subscription.
-* Create a dedicated private cloud-only VNet. Sometimes you don't require a cross-premises configuration for your solution. When you create a VNet, your services and VMs within your VNet can communicate directly and securely with each other in the cloud. You can still configure endpoint connections for the VMs and services that require Internet communication, as part of your solution.
+You can use virtual networks to provision and manage virtual private networks (VPNs) in Azure. Optionally, you can link virtual networks with other virtual networks in Azure, or with your on-premises IT infrastructure, to create hybrid or cross-premises solutions.
-* Securely extend your data center. With VNets, you can build traditional site-to-site (S2S) VPNs to securely scale your datacenter capacity. S2S VPNs use IPSEC to provide a secure connection between your corporate VPN gateway and Azure.
+Each virtual network that you create has its own CIDR block. You can link a virtual network to other virtual networks and on-premises networks as long as the CIDR blocks don't overlap. You also have control of DNS server settings for virtual networks, along with segmentation of the virtual network into subnets.
-* Enable hybrid cloud scenarios. VNets give you the flexibility to support a range of hybrid cloud scenarios. You can securely connect cloud-based applications to any type of on-premises system such as mainframes and Unix systems.
+Use virtual networks to:
+
+* Create a dedicated, private, cloud-only virtual network. Sometimes you don't require a cross-premises configuration for your solution. When you create a virtual network, your services and virtual machines (VMs) within your virtual network can communicate directly and securely with each other in the cloud. You can still configure endpoint connections for the VMs and services that require internet communication, as part of your solution.
+
+* Securely extend your datacenter. With virtual networks, you can build traditional site-to-site (S2S) VPNs to securely scale your datacenter capacity. S2S VPNs use IPsec to provide a secure connection between your corporate VPN gateway and Azure.
+
+* Enable hybrid cloud scenarios. You can securely connect cloud-based applications to any type of on-premises system, including mainframes and Unix systems.
### How do I get started?
-Visit the [Virtual network documentation](./index.yml) to get started. This content provides overview and deployment information for all of the VNet features.
-### Can I use VNets without cross-premises connectivity?
-Yes. You can use a VNet without connecting it to your premises. For example, you could run Microsoft Windows Server Active Directory domain controllers and SharePoint farms solely in an Azure VNet.
+Visit the [Azure Virtual Network documentation](./index.yml) to get started. This content provides overview and deployment information for all of the virtual network features.
+
+### Can I use virtual networks without cross-premises connectivity?
+
+Yes. You can use a virtual network without connecting it to your premises. For example, you could run Microsoft Windows Server Active Directory domain controllers and SharePoint farms solely in an Azure virtual network.
-### Can I perform WAN optimization between VNets or a VNet and my on-premises data center?
-Yes. You can deploy a [WAN optimization network virtual appliance](https://azuremarketplace.microsoft.com/en-us/marketplace/?term=wan%20optimization) from several vendors through the Azure Marketplace.
+### Can I perform WAN optimization between virtual networks or between a virtual network and my on-premises datacenter?
+
+Yes. You can deploy a [network virtual appliance for WAN optimization](https://azuremarketplace.microsoft.com/marketplace/?term=wan%20optimization) from several vendors through Azure Marketplace.
## Configuration
-### What tools do I use to create a VNet?
-You can use the following tools to create or configure a VNet:
+### What tools do I use to create a virtual network?
+
+You can use the following tools to create or configure a virtual network:
* Azure portal
* PowerShell
* Azure CLI
-* A network configuration file (netcfg - for classic VNets only). See the [Configure a VNet using a network configuration file](/previous-versions/azure/virtual-network/virtual-networks-using-network-configuration-file) article.
+* [Network configuration file](/previous-versions/azure/virtual-network/virtual-networks-using-network-configuration-file) (`netcfg`, for classic virtual networks only)
+
+### What address ranges can I use in my virtual networks?
+
+We recommend that you use the following address ranges, which are enumerated in [RFC 1918](https://tools.ietf.org/html/rfc1918). The IETF has set aside these ranges for private, non-routable address spaces.
+
+* 10.0.0.0 to 10.255.255.255 (10/8 prefix)
+* 172.16.0.0 to 172.31.255.255 (172.16/12 prefix)
+* 192.168.0.0 to 192.168.255.255 (192.168/16 prefix)
-### What address ranges can I use in my VNets?
-We recommend that you use the address ranges enumerated in [RFC 1918](https://tools.ietf.org/html/rfc1918), which have been set aside by the IETF for private, non-routable address spaces:
-* 10.0.0.0 - 10.255.255.255 (10/8 prefix)
-* 172.16.0.0 - 172.31.255.255 (172.16/12 prefix)
-* 192.168.0.0 - 192.168.255.255 (192.168/16 prefix)
+You can also deploy the shared address space reserved in [RFC 6598](https://datatracker.ietf.org/doc/html/rfc6598), which is treated as a private IP address space in Azure:
-You can also deploy the Shared Address space reserved in [RFC 6598](https://datatracker.ietf.org/doc/html/rfc6598), which is treated as Private IP Address space in Azure:
-* 100.64.0.0 - 100.127.255.255 (100.64/10 prefix)
+* 100.64.0.0 to 100.127.255.255 (100.64/10 prefix)
-Other address spaces, including all other IETF-recognized private, non-routable address spaces, may work but may have undesirable side effects.
+Other address spaces, including all other IETF-recognized private, non-routable address spaces, might work but have undesirable side effects.
-In addition, you cannot add the following address ranges:
-* 224.0.0.0/4 (Multicast)
-* 255.255.255.255/32 (Broadcast)
-* 127.0.0.0/8 (Loopback)
-* 169.254.0.0/16 (Link-local)
-* 168.63.129.16/32 (Internal DNS)
+In addition, you can't add the following address ranges:
+
+* 224.0.0.0/4 (multicast)
+* 255.255.255.255/32 (broadcast)
+* 127.0.0.0/8 (loopback)
+* 169.254.0.0/16 (link local)
+* 168.63.129.16/32 (internal DNS)
+
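As a sketch of putting one of these ranges to use (the names, region, and prefixes here are placeholders, not values from this article):

```azurepowershell-interactive
## Create a virtual network with a 10.0.0.0/16 RFC 1918 address space and one subnet. ##
$vnet = New-AzVirtualNetwork -Name 'vnet-example' -ResourceGroupName 'test-rg' `
    -Location 'eastus' -AddressPrefix '10.0.0.0/16'

Add-AzVirtualNetworkSubnetConfig -Name 'subnet-1' -VirtualNetwork $vnet `
    -AddressPrefix '10.0.0.0/24' | Set-AzVirtualNetwork
```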
+### Can I have public IP addresses in my virtual networks?
-### Can I have public IP addresses in my VNets?
Yes. For more information about public IP address ranges, see [Create a virtual network](manage-virtual-network.md#create-a-virtual-network). Public IP addresses are not directly accessible from the internet.
-### Is there a limit to the number of subnets in my VNet?
-Yes. See [Azure limits](../azure-resource-manager/management/azure-subscription-service-limits.md?toc=%2fazure%2fvirtual-network%2ftoc.json#networking-limits) for details. Subnet address spaces cannot overlap one another.
+### Is there a limit to the number of subnets in my virtual network?
+
+Yes. See [Networking limits](../azure-resource-manager/management/azure-subscription-service-limits.md?toc=%2fazure%2fvirtual-network%2ftoc.json#networking-limits) for details. Subnet address spaces can't overlap one another.
### Are there any restrictions on using IP addresses within these subnets?
-Yes. Azure reserves the first four and last IP address for a total of 5 IP addresses within each subnet.
+Yes. Azure reserves the first four addresses and the last address, for a total of five IP addresses within each subnet.
For example, the IP address range of 192.168.1.0/24 has the following reserved addresses:
-- 192.168.1.0 : Network address
-- 192.168.1.1 : Reserved by Azure for the default gateway
-- 192.168.1.2, 192.168.1.3 : Reserved by Azure to map the Azure DNS IPs to the VNet space
-- 192.168.1.255 : Network broadcast address.
+* 192.168.1.0: Network address.
+* 192.168.1.1: Reserved by Azure for the default gateway.
+* 192.168.1.2, 192.168.1.3: Reserved by Azure to map the Azure DNS IP addresses to the virtual network space.
+* 192.168.1.255: Network broadcast address.
+
+### How small and how large can virtual networks and subnets be?
+
+The smallest supported IPv4 subnet is /29, and the largest is /2 (using CIDR subnet definitions). IPv6 subnets must be exactly /64 in size.
+
+### Can I bring my VLANs to Azure by using virtual networks?
+
+No. Virtual networks are Layer 3 overlays. Azure does not support any Layer 2 semantics.
+
+### Can I specify custom routing policies on my virtual networks and subnets?
+
+Yes. You can create a route table and associate it with a subnet. For more information about routing in Azure, see [Custom routes](virtual-networks-udr-overview.md#custom-routes).
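
As an illustration, here's a hedged Azure PowerShell sketch of that flow. The route table name, virtual network name, subnet details, and appliance IP are placeholder assumptions:

```Powershell
# Placeholder names and addresses; adjust to your environment.
$route = New-AzRouteConfig -Name 'ToAppliance' -AddressPrefix '0.0.0.0/0' `
    -NextHopType VirtualAppliance -NextHopIpAddress '10.0.1.4'
$rt = New-AzRouteTable -Name 'MyRouteTable' -ResourceGroupName 'MyResourceGroup' `
    -Location 'eastus' -Route $route

# Associate the route table with an existing subnet, then persist the change.
$vnet = Get-AzVirtualNetwork -Name 'MyVnet' -ResourceGroupName 'MyResourceGroup'
Set-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name 'default' `
    -AddressPrefix '10.0.0.0/24' -RouteTable $rt | Set-AzVirtualNetwork
```
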
-### How small and how large can VNets and subnets be?
-The smallest supported IPv4 subnet is /29, and the largest is /2 (using CIDR subnet definitions). IPv6 subnets must be exactly /64 in size.
+### What's the behavior when I apply both an NSG and a UDR at the subnet?
-### Can I bring my VLANs to Azure using VNets?
-No. VNets are Layer-3 overlays. Azure does not support any Layer-2 semantics.
+For inbound traffic, network security group (NSG) inbound rules are processed. For outbound traffic, NSG outbound rules are processed, followed by user-defined route (UDR) rules.
-### Can I specify custom routing policies on my VNets and subnets?
-Yes. You can create a route table and associate it to a subnet. For more information about routing in Azure, see [Routing overview](virtual-networks-udr-overview.md#custom-routes).
+### What's the behavior when I apply an NSG at a NIC and a subnet for a VM?
-### What would be the behavior when I apply both NSG and UDR at subnet?
-For inbound traffic, NSG inbound rules are processed. For outbound, NSG outbound rules are processed followed by UDR rules.
+When you apply NSGs at both a network adapter (NIC) and a subnet for a VM:
-### What would be the behavior when I apply NSG at NIC and subnet for a VM?
-When NSGs are applied both at NIC & Subnets for a VM, subnet level NSG followed by NIC level NSG is processed for inbound and NIC level NSG followed by subnet level NSG for outbound traffic.
+* A subnet-level NSG, followed by a NIC-level NSG, is processed for inbound traffic.
+* A NIC-level NSG, followed by a subnet-level NSG, is processed for outbound traffic.
+
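The following hedged Azure PowerShell sketch shows both associations; all names are placeholders:

```Powershell
# Placeholder names; the subnet-level and NIC-level associations are independent.
$nsg = New-AzNetworkSecurityGroup -Name 'MyNsg' -ResourceGroupName 'MyResourceGroup' -Location 'eastus'

# Subnet-level NSG (evaluated first for inbound, second for outbound).
$vnet = Get-AzVirtualNetwork -Name 'MyVnet' -ResourceGroupName 'MyResourceGroup'
Set-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name 'default' `
    -AddressPrefix '10.0.0.0/24' -NetworkSecurityGroup $nsg | Set-AzVirtualNetwork

# NIC-level NSG (evaluated second for inbound, first for outbound).
$nic = Get-AzNetworkInterface -Name 'MyVmNic' -ResourceGroupName 'MyResourceGroup'
$nic.NetworkSecurityGroup = $nsg
Set-AzNetworkInterface -NetworkInterface $nic
```
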
+### Do virtual networks support multicast or broadcast?
-### Do VNets support multicast or broadcast?
No. Multicast and broadcast are not supported.
-### What protocols can I use within VNets?
-You can use TCP, UDP, ESP, AH, and ICMP TCP/IP protocols within VNets. Unicast is supported within VNets. Multicast, broadcast, IP-in-IP encapsulated packets, and Generic Routing Encapsulation (GRE) packets are blocked within VNets. You cannot use Dynamic Host Configuration Protocol (DHCP) via Unicast (source port UDP/68 / destination port UDP/67). UDP source port 65330 which is reserved for the host. See ["Can I deploy a DHCP server in a VNet"](#can-i-deploy-a-dhcp-server-in-a-vnet) for more detail what is and is not supported for DHCP.
+### What protocols can I use in virtual networks?
+
+You can use TCP, UDP, ESP, AH, and ICMP TCP/IP protocols in virtual networks.
+
+Unicast is supported in virtual networks. Multicast, broadcast, IP-in-IP encapsulated packets, and Generic Routing Encapsulation (GRE) packets are blocked in virtual networks. You can't use Dynamic Host Configuration Protocol (DHCP) via unicast (source port UDP/68, destination port UDP/67). UDP source port 65330 is reserved for the host.
+
+### Can I deploy a DHCP server in a virtual network?
+
+Azure virtual networks provide DHCP service and DNS to VMs. However, client/server DHCP traffic (source port UDP/68, destination port UDP/67) is not supported in a virtual network.
-### Can I deploy a DHCP server in a VNet?
-Azure VNets provide DHCP service and DNS to VMs and client/server DHCP (source port UDP/68, destination port UDP/67) not supported in a VNet. You cannot deploy your own DHCP service to receive and provide unicast/broadcast client/server DHCP traffic for endpoints inside a VNet. It is also an *unsupported* scenario to deploy a DHCP server VM with the intent to receive unicast DHCP relay (source port UDP/67, destination port UDP/67) DHCP traffic.
+You can't deploy your own DHCP service to receive and provide unicast or broadcast client/server DHCP traffic for endpoints inside a virtual network. Deploying a DHCP server VM with the intent to receive unicast DHCP relay (source port UDP/67, destination port UDP/67) traffic is also an *unsupported* scenario.
-### Can I ping default gateway within a VNet?
-No. Azure provided default gateway does not respond to ping. But you can use ping in your VNets to check connectivity and troubleshooting between VMs.
+### Can I ping a default gateway in a virtual network?
+
+No. An Azure-provided default gateway doesn't respond to a ping. But you can use pings in your virtual networks to check connectivity and for troubleshooting between VMs.
### Can I use tracert to diagnose connectivity?
-Yes.
-### Can I add subnets after the VNet is created?
-Yes. Subnets can be added to VNets at any time as long as the subnet address range is not part of another subnet and there is available space left in the virtual network's address range.
+Yes.
+
+### Can I add subnets after the virtual network is created?
+
+Yes. You can add subnets to virtual networks at any time, as long as both of these conditions exist (see the sketch after this list):
+
+* The subnet address range is not part of another subnet.
+* There's available space in the virtual network's address range.
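
A minimal Azure PowerShell sketch, with placeholder names and a placeholder address range:

```Powershell
# Placeholder names; the new range must not overlap an existing subnet.
$vnet = Get-AzVirtualNetwork -Name 'MyVnet' -ResourceGroupName 'MyResourceGroup'
Add-AzVirtualNetworkSubnetConfig -Name 'backend' -AddressPrefix '10.0.2.0/24' -VirtualNetwork $vnet
$vnet | Set-AzVirtualNetwork   # Persist the new subnet.
```
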
### Can I modify the size of my subnet after I create it?
-Yes. You can add, remove, expand, or shrink a subnet if there are no VMs or services deployed within it.
-### Can I modify Vnet after I created them?
-Yes. You can add, remove, and modify the CIDR blocks used by a VNet.
+Yes. You can add, remove, expand, or shrink a subnet if no VMs or services are deployed in it.
+
+### Can I modify a virtual network after I create it?
+
+Yes. You can add, remove, and modify the CIDR blocks that a virtual network uses.
+
+### If I'm running my services in a virtual network, can I connect to the internet?
+
+Yes. All services deployed in a virtual network can connect outbound to the internet. To learn more about outbound internet connections in Azure, see [Use Source Network Address Translation (SNAT) for outbound connections](../load-balancer/load-balancer-outbound-connections.md?toc=%2fazure%2fvirtual-network%2ftoc.json).
+
+If you want to connect inbound to a resource deployed through Azure Resource Manager, the resource must have a public IP address assigned to it. For more information, see [Create, change, or delete an Azure public IP address](./ip-services/virtual-network-public-ip-address.md).
+
+Every cloud service deployed in Azure has a publicly addressable virtual IP (VIP) assigned to it. You define input endpoints for platform as a service (PaaS) roles and endpoints for virtual machines to enable these services to accept connections from the internet.
+
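For example, you might create a static public IP address and attach it to a NIC's IP configuration. This is a hedged sketch with placeholder names:

```Powershell
# Placeholder names; a Standard SKU public IP uses static allocation.
$pip = New-AzPublicIpAddress -Name 'MyPublicIp' -ResourceGroupName 'MyResourceGroup' `
    -Location 'eastus' -AllocationMethod Static -Sku Standard
$nic = Get-AzNetworkInterface -Name 'MyVmNic' -ResourceGroupName 'MyResourceGroup'
$nic.IpConfigurations[0].PublicIpAddress = $pip
Set-AzNetworkInterface -NetworkInterface $nic
```
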
+### Do virtual networks support IPv6?
+
+Yes. Virtual networks can be IPv4 only or dual stack (IPv4 + IPv6). For details, see [What is IPv6 for Azure Virtual Network?](./ip-services/ipv6-overview.md).
+
+### Can a virtual network span regions?
-### If I am running my services in a VNet, can I connect to the internet?
-Yes. All services deployed within a VNet can connect outbound to the internet. To learn more about outbound internet connections in Azure, see [Outbound connections](../load-balancer/load-balancer-outbound-connections.md?toc=%2fazure%2fvirtual-network%2ftoc.json). If you want to connect inbound to a resource deployed through Resource Manager, the resource must have a public IP address assigned to it. To learn more about public IP addresses, see [Public IP addresses](./ip-services/virtual-network-public-ip-address.md). Every Azure Cloud Service deployed in Azure has a publicly addressable VIP assigned to it. You define input endpoints for PaaS roles and endpoints for virtual machines to enable these services to accept connections from the internet.
+No. A virtual network is limited to a single region. But a virtual network does span availability zones. To learn more about availability zones, see [What are Azure regions and availability zones?](../reliability/availability-zones-overview.md).
-### Do VNets support IPv6?
-Yes, VNets can be IPv4-only or dual stack (IPv4+IPv6). For details, see [Overview of IPv6 for Azure Virtual Networks](./ip-services/ipv6-overview.md).
+You can connect virtual networks in different regions by using virtual network peering. For details, see [Virtual network peering](virtual-network-peering-overview.md).
-### Can a VNet span regions?
-No. A VNet is limited to a single region. A virtual network does, however, span availability zones. To learn more about availability zones, see [Availability zones overview](../availability-zones/az-overview.md?toc=%2fazure%2fvirtual-network%2ftoc.json). You can connect virtual networks in different regions with virtual network peering. For details, see [Virtual network peering overview](virtual-network-peering-overview.md)
+### Can I connect a virtual network to another virtual network in Azure?
-### Can I connect a VNet to another VNet in Azure?
-Yes. You can connect one VNet to another VNet using either:
-- **Virtual network peering**: For details, see [VNet peering overview](virtual-network-peering-overview.md)-- **An Azure VPN Gateway**: For details, see [Configure a VNet-to-VNet connection](../vpn-gateway/vpn-gateway-howto-vnet-vnet-resource-manager-portal.md?toc=%2fazure%2fvirtual-network%2ftoc.json).
+Yes. You can connect one virtual network to another virtual network by using either:
-## Name Resolution (DNS)
+* Virtual network peering. For details, see [Virtual network peering](virtual-network-peering-overview.md).
+* An Azure VPN gateway. For details, see [Configure a network-to-network VPN gateway connection](../vpn-gateway/vpn-gateway-howto-vnet-vnet-resource-manager-portal.md?toc=%2fazure%2fvirtual-network%2ftoc.json).
-### What are my DNS options for VNets?
-Use the decision table on the [Name Resolution for VMs and Role Instances](virtual-networks-name-resolution-for-vms-and-role-instances.md) page to guide you through all the DNS options available.
+## Name resolution (DNS)
-### Can I specify DNS servers for a VNet?
-Yes. You can specify DNS server IP addresses in the VNet settings. The setting is applied as the default DNS server(s) for all VMs in the VNet.
+### What are my DNS options for virtual networks?
+
+Use the decision table in [Name resolution for resources in Azure virtual networks](virtual-networks-name-resolution-for-vms-and-role-instances.md) to guide you through the available DNS options.
+
+### Can I specify DNS servers for a virtual network?
+
+Yes. You can specify IP addresses for DNS servers in the virtual network settings. The setting is applied as the default DNS server or servers for all VMs in the virtual network.
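
For example, here's a hedged Azure PowerShell sketch that sets two DNS servers on a virtual network. The names and addresses are placeholders:

```Powershell
# Placeholder names and addresses.
$vnet = Get-AzVirtualNetwork -Name 'MyVnet' -ResourceGroupName 'MyResourceGroup'
$vnet.DhcpOptions.DnsServers = @('10.0.0.4', '10.0.0.5')
$vnet | Set-AzVirtualNetwork   # VMs pick up the change after a DHCP lease renewal.
```
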
### How many DNS servers can I specify?
-Reference [Azure limits](../azure-resource-manager/management/azure-subscription-service-limits.md?toc=%2fazure%2fvirtual-network%2ftoc.json#networking-limits).
-### Can I modify my DNS servers after I have created the network?
-Yes. You can change the DNS server list for your VNet at any time. If you change your DNS server list, you need to perform a DHCP lease renewal on all affected VMs in the VNet, for the new DNS settings to take effect. For VMs running Windows OS you can do this by typing `ipconfig /renew` directly on the VM. For other OS types, refer to the DHCP lease renewal documentation for the specific OS type.
+See [Networking limits](../azure-resource-manager/management/azure-subscription-service-limits.md?toc=%2fazure%2fvirtual-network%2ftoc.json#networking-limits).
+
+### Can I modify my DNS servers after I create the network?
+
+Yes. You can change the DNS server list for your virtual network at any time.
+
+If you change your DNS server list, you need to perform a DHCP lease renewal on all affected VMs in the virtual network. The new DNS settings take effect after lease renewal. For VMs running Windows, you can renew the lease by entering `ipconfig /renew` directly on the VM. For other OS types, refer to that operating system's documentation for DHCP lease renewal.
+
+### What is Azure-provided DNS, and does it work with virtual networks?
+
+Azure-provided DNS is a multitenant DNS service from Microsoft. Azure registers all of your VMs and cloud service role instances in this service. This service provides name resolution:
+
+* By host name for VMs and role instances in the same cloud service.
+* By fully qualified domain name (FQDN) for VMs and role instances in the same virtual network.
-### What is Azure-provided DNS and does it work with VNets?
-Azure-provided DNS is a multi-tenant DNS service offered by Microsoft. Azure registers all of your VMs and cloud service role instances in this service. This service provides name resolution by hostname for VMs and role instances contained within the same cloud service, and by FQDN for VMs and role instances in the same VNet. To learn more about DNS, see [Name Resolution for VMs and Cloud Services role instances](virtual-networks-name-resolution-for-vms-and-role-instances.md).
+To learn more about DNS, see [Name resolution for resources in Azure virtual networks](virtual-networks-name-resolution-for-vms-and-role-instances.md).
-There is a limitation to the first 100 cloud services in a VNet for cross-tenant name resolution using Azure-provided DNS. If you are using your own DNS server, this limitation does not apply.
+There's a limitation to the first 100 cloud services in a virtual network for cross-tenant name resolution through Azure-provided DNS. If you're using your own DNS server, this limitation doesn't apply.
-### Can I override my DNS settings on a per-VM or cloud service basis?
-Yes. You can set DNS servers per VM or cloud service to override the default network settings. However, it's recommended that you use network-wide DNS as much as possible.
+### Can I override my DNS settings for each VM or cloud service?
+
+Yes. You can set DNS servers for each VM or cloud service to override the default network settings. However, we recommend that you use network-wide DNS as much as possible.
### Can I bring my own DNS suffix?
-No. You cannot specify a custom DNS suffix for your VNets.
+
+No. You can't specify a custom DNS suffix for your virtual networks.
## Connecting virtual machines
-### Can I deploy VMs to a VNet?
-Yes. All network interfaces (NIC) attached to a VM deployed through the Resource Manager deployment model must be connected to a VNet. VMs deployed through the classic deployment model can optionally be connected to a VNet.
+### Can I deploy VMs to a virtual network?
+
+Yes. All network adapters (NICs) attached to a VM that's deployed through the Resource Manager deployment model must be connected to a virtual network. Optionally, you can connect VMs deployed through the classic deployment model to a virtual network.
+
+### What are the types of IP addresses that I can assign to VMs?
+
+* **Private**: Assigned to each NIC within each VM, through the static or dynamic method. Private IP addresses are assigned from the range that you specified in the subnet settings of your virtual network.
+
+ Resources deployed through the classic deployment model are assigned private IP addresses, even if they're not connected to a virtual network. The behavior of the allocation method is different depending on whether you deployed a resource by using the Resource Manager or classic deployment model:
+
+ * **Resource Manager**: A private IP address assigned through the dynamic or static method remains assigned to a virtual machine (Resource Manager) until the resource is deleted. The difference is that you select the address to assign when you're using the static method, and Azure chooses when you're using the dynamic method.
+ * **Classic**: A private IP address assigned through the dynamic method might change when a virtual machine (classic) is restarted after being in the stopped (deallocated) state. If you need to ensure that the private IP address for a resource deployed through the classic deployment model never changes, assign a private IP address by using the static method.
+
+* **Public**: Optionally assigned to NICs attached to VMs deployed through the Resource Manager deployment model. You can assign the address by using the static or dynamic allocation method.
+
+ All VMs and Azure Cloud Services role instances deployed through the classic deployment model exist within a cloud service. The cloud service is assigned a dynamic, public VIP address. You can optionally assign a public static IP address, called a [reserved IP address](/previous-versions/azure/virtual-network/virtual-networks-reserved-public-ip), as a VIP.
-### What are the different types of IP addresses I can assign to VMs?
-* **Private:** Assigned to each NIC within each VM. The address is assigned using either the static or dynamic method. Private IP addresses are assigned from the range that you specified in the subnet settings of your VNet. Resources deployed through the classic deployment model are assigned private IP addresses, even if they're not connected to a VNet. The behavior of the allocation method is different depending on whether a resource was deployed with the Resource Manager or classic deployment model:
+ You can assign public IP addresses to individual VMs or Cloud Services role instances deployed through the classic deployment model. These addresses are called [instance-level public IP](/previous-versions/azure/virtual-network/virtual-networks-instance-level-public-ip) addresses and can be assigned dynamically.
- - **Resource Manager**: A private IP address assigned with the dynamic or static method remains assigned to a virtual machine (Resource Manager) until the resource is deleted. The difference is that you select the address to assign when using static, and Azure chooses when using dynamic.
- - **Classic**: A private IP address assigned with the dynamic method may change when a virtual machine (classic) VM is restarted after having been in the stopped (deallocated) state. If you need to ensure that the private IP address for a resource deployed through the classic deployment model never changes, assign a private IP address with the static method.
+### Can I reserve a private IP address for a VM that I'll create at a later time?
-* **Public:** Optionally assigned to NICs attached to VMs deployed through the Azure Resource Manager deployment model. The address can be assigned with the static or dynamic allocation method. All VMs and Cloud Services role instances deployed through the classic deployment model exist within a cloud service, which is assigned a *dynamic*, public virtual IP (VIP) address. A public *static* IP address, called a [Reserved IP address](/previous-versions/azure/virtual-network/virtual-networks-reserved-public-ip), can optionally be assigned as a VIP. You can assign public IP addresses to individual VMs or Cloud Services role instances deployed through the classic deployment model. These addresses are called [Instance level public IP (ILPIP)](/previous-versions/azure/virtual-network/virtual-networks-instance-level-public-ip) addresses and can be assigned dynamically.
+No. You can't reserve a private IP address. If a private IP address is available, the DHCP server assigns it to a VM or role instance. The VM might or might not be the one that you want the private IP address assigned to. You can, however, change the private IP address of an existing VM to any available private IP address.
-### Can I reserve a private IP address for a VM that I will create at a later time?
-No. You cannot reserve a private IP address. If a private IP address is available, it is assigned to a VM or role instance by the DHCP server. The VM may or may not be the one that you want the private IP address assigned to. You can, however, change the private IP address of an already created VM, to any available private IP address.
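
If you need a specific private IP address, you can instead assign it statically when you create the NIC (or update an existing NIC). Here's a hedged Azure PowerShell sketch with placeholder names:

```Powershell
# Placeholder names; the static address must come from the subnet's range and be unassigned.
$vnet = Get-AzVirtualNetwork -Name 'MyVnet' -ResourceGroupName 'MyResourceGroup'
$subnet = Get-AzVirtualNetworkSubnetConfig -Name 'default' -VirtualNetwork $vnet
New-AzNetworkInterface -Name 'MyVmNic' -ResourceGroupName 'MyResourceGroup' `
    -Location 'eastus' -SubnetId $subnet.Id -PrivateIpAddress '10.0.0.10'
```
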
+### Do private IP addresses change for VMs in a virtual network?
-### Do private IP addresses change for VMs in a VNet?
-It depends. If the VM was deployed through Resource Manager, no, regardless of whether the IP address was assigned with the static or dynamic allocation method. If the VM was deployed through the classic deployment model, dynamic IP addresses can change when a VM is started after having been in the stopped (deallocated) state. The address is released from a VM deployed through either deployment model when the VM is deleted.
+It depends. If you deployed the VM by using Resource Manager, IP addresses can't change, regardless of whether you assigned the addresses by using the static or dynamic allocation method. If you deployed the VM by using the classic deployment model, dynamic IP addresses can change when you start a VM that was in the stopped (deallocated) state.
+
+The address is released from a VM deployed through either deployment model when you delete the VM.
### Can I manually assign IP addresses to NICs within the VM operating system?
-Yes, but it's not recommended unless necessary, such as when assigning multiple IP addresses to a virtual machine. For details, see [Adding multiple IP addresses to a virtual machine](./ip-services/virtual-network-multiple-ip-addresses-portal.md#os-config). If the IP address assigned to an Azure NIC attached to a VM changes, and the IP address within the VM operating system is different, you lose connectivity to the VM.
-### If I stop a Cloud Service deployment slot or shutdown a VM from within the operating system, what happens to my IP addresses?
-Nothing. The IP addresses (public VIP, public, and private) remain assigned to the cloud service deployment slot or VM.
+Yes, but we don't recommend it unless it's necessary, such as when you're assigning multiple IP addresses to a virtual machine. For details, see [Assign multiple IP addresses to virtual machines](./ip-services/virtual-network-multiple-ip-addresses-portal.md#os-config).
+
+If the IP address assigned to an Azure NIC that's attached to a VM changes, and the IP address within the VM operating system is different, you lose connectivity to the VM.
+
+### If I stop a cloud service deployment slot or shut down a VM from within the operating system, what happens to my IP addresses?
+
+Nothing. The IP addresses (public VIP, public, and private) remain assigned to the cloud service deployment slot or the VM.
+
+### Can I move VMs from one subnet to another subnet in a virtual network without redeploying?
-### Can I move VMs from one subnet to another subnet in a VNet without redeploying?
-Yes. You can find more information in the [How to move a VM or role instance to a different subnet](/previous-versions/azure/virtual-network/virtual-networks-move-vm-role-to-subnet) article.
+Yes. You can find more information in [Move a VM or role instance to a different subnet](/previous-versions/azure/virtual-network/virtual-networks-move-vm-role-to-subnet).
### Can I configure a static MAC address for my VM?
-No. A MAC address cannot be statically configured.
-### Will the MAC address remain the same for my VM once it's created?
-Yes, the MAC address remains the same for a VM deployed through both the Resource Manager and classic deployment models until it's deleted. Previously, the MAC address was released if the VM was stopped (deallocated), but now the MAC address is retained even when the VM is in the deallocated state. The MAC address remains assigned to the network interface until the network interface is deleted or the private IP address assigned to the primary IP configuration of the primary network interface is changed.
+No. You can't statically configure a MAC address.
+
+### Does the MAC address remain the same for my VM after it's created?
+
+Yes. The MAC address remains the same for a VM that you deployed through both the Resource Manager and classic deployment models until you delete it.
+
+Previously, the MAC address was released if you stopped (deallocated) the VM. But now, the VM retains the MAC address when it's in the deallocated state. The MAC address remains assigned to the network adapter until you do one of these tasks:
+
+* Delete the network adapter.
+* Change the private IP address that's assigned to the primary IP configuration of the primary network adapter.
-### Can I connect to the internet from a VM in a VNet?
-Yes. All VMs and Cloud Services role instances deployed within a VNet can connect to the Internet.
+### Can I connect to the internet from a VM in a virtual network?
-## Azure services that connect to VNets
+Yes. All VMs and Cloud Services role instances deployed within a virtual network can connect to the internet.
-### Can I use Azure App Service Web Apps with a VNet?
-Yes. You can deploy Web Apps inside a VNet using an ASE (App Service Environment), connect the backend of your apps to your VNets with VNet Integration, and lock down inbound traffic to your app with service endpoints. For more information, see the following articles:
+## Azure services that connect to virtual networks
+
+### Can I use Web Apps with a virtual network?
+
+Yes. You can deploy the Web Apps feature of Azure App Service inside a virtual network by using an App Service Environment. You can then:
+
+* Connect the back end of your apps to your virtual networks by using virtual network integration.
+* Lock down inbound traffic to your app by using service endpoints.
+
+For more information, see the following articles:
* [App Service networking features](../app-service/networking-features.md)
-* [Creating Web Apps in an App Service Environment](../app-service/environment/using.md?toc=%2fazure%2fvirtual-network%2ftoc.json)
-* [Integrate your app with an Azure Virtual Network](../app-service/overview-vnet-integration.md?toc=%2fazure%2fvirtual-network%2ftoc.json)
-* [App Service access restrictions](../app-service/app-service-ip-restrictions.md)
+* [Use an App Service Environment](../app-service/environment/using.md?toc=%2fazure%2fvirtual-network%2ftoc.json)
+* [Integrate your app with an Azure virtual network](../app-service/overview-vnet-integration.md?toc=%2fazure%2fvirtual-network%2ftoc.json)
+* [Set up Azure App Service access restrictions](../app-service/app-service-ip-restrictions.md)
+
+### Can I deploy Cloud Services with web and worker roles (PaaS) in a virtual network?
-### Can I deploy Cloud Services with web and worker roles (PaaS) in a VNet?
-Yes. You can (optionally) deploy Cloud Services role instances within VNets. To do so, you specify the VNet name and the role/subnet mappings in the network configuration section of your service configuration. You do not need to update any of your binaries.
+Yes. You can (optionally) deploy Cloud Services role instances in virtual networks. To do so, you specify the virtual network name and the role/subnet mappings in the network configuration section of your service configuration. You don't need to update any of your binaries.
-### Can I connect a virtual machine scale set to a VNet?
-Yes. You must connect a virtual machine scale set to a VNet.
+### Can I connect a virtual machine scale set to a virtual network?
-### Is there a complete list of Azure services that can I deploy resources from into a VNet?
-Yes, For details, see [Virtual network integration for Azure services](virtual-network-for-azure-services.md).
+Yes. You must connect a virtual machine scale set to a virtual network.
-### How can I restrict access to Azure PaaS resources from a VNet?
+### Is there a complete list of Azure services from which I can deploy resources into a virtual network?
-Resources deployed through some Azure PaaS services (such as Azure Storage and Azure SQL Database), can restrict network access to VNet through the use of virtual network service endpoints or Azure Private Link. For details, see [Virtual network service endpoints overview](virtual-network-service-endpoints-overview.md), [Azure Private Link overview](../private-link/private-link-overview.md)
+Yes. For details, see [Deploy dedicated Azure services into virtual networks](virtual-network-for-azure-services.md).
-### Can I move my services in and out of VNets?
-No. You cannot move services in and out of VNets. To move a resource to another VNet, you have to delete and redeploy the resource.
+### How can I restrict access to Azure PaaS resources from a virtual network?
+
+Resources deployed through some Azure PaaS services (such as Azure Storage and Azure SQL Database) can restrict network access to virtual networks through the use of virtual network service endpoints or Azure Private Link. For details, see [Virtual network service endpoints](virtual-network-service-endpoints-overview.md) and [What is Azure Private Link?](../private-link/private-link-overview.md).
+
+### Can I move my services in and out of virtual networks?
+
+No. You can't move services in and out of virtual networks. To move a resource to another virtual network, you have to delete and redeploy the resource.
## Security
-### What is the security model for VNets?
-VNets are isolated from one another, and other services hosted in the Azure infrastructure. A VNet is a trust boundary.
+### What is the security model for virtual networks?
+
+Virtual networks are isolated from one another and from other services hosted in the Azure infrastructure. A virtual network is a trust boundary.
-### Can I restrict inbound or outbound traffic flow to VNet-connected resources?
-Yes. You can apply [Network Security Groups](./network-security-groups-overview.md) to individual subnets within a VNet, NICs attached to a VNet, or both.
+### Can I restrict inbound or outbound traffic flow to resources that are connected to a virtual network?
-### Can I implement a firewall between VNet-connected resources?
-Yes. You can deploy a [firewall network virtual appliance](https://azure.microsoft.com/marketplace/?term=firewall) from several vendors through the Azure Marketplace.
+Yes. You can apply [network security groups](./network-security-groups-overview.md) to individual subnets within a virtual network, NICs attached to a virtual network, or both.
-### Is there information available about securing VNets?
-Yes. For details, see [Azure Network Security Overview](../security/fundamentals/network-overview.md?toc=%2fazure%2fvirtual-network%2ftoc.json).
+### Can I implement a firewall between resources that are connected to a virtual network?
-### Do Virtual Networks store customer data?
-No. Virtual Networks doesn't store any customer data.
+Yes. You can deploy a [firewall network virtual appliance](https://azure.microsoft.com/marketplace/?term=firewall) from several vendors through Azure Marketplace.
+
+### Is information available about securing virtual networks?
+
+Yes. See [Azure network security overview](../security/fundamentals/network-overview.md?toc=%2fazure%2fvirtual-network%2ftoc.json).
+
+### Do virtual networks store customer data?
+
+No. Virtual networks don't store any customer data.
+
+### Can I set the FlowTimeoutInMinutes property for an entire subscription?
+
+No. You must set the [FlowTimeoutInMinutes](/powershell/module/az.network/set-azvirtualnetwork) property at the virtual network. The following code can help you set this property automatically for larger subscriptions:
-### Can I set [FlowTimeoutInMinutes](/powershell/module/az.network/set-azvirtualnetwork) property for an entire subscription?
-No. This must be set at the virtual network. The following can assist automate setting this property for larger subscriptions:
```Powershell
$Allvnet = Get-AzVirtualNetwork
-$time = 4 #The value should be between 4 and 30 minutes (inclusive) to enable tracking, or null to disable tracking. $null to disable.
+$time = 4 #The value should be 4 to 30 minutes (inclusive) to enable tracking, or null to disable tracking.
ForEach ($vnet in $Allvnet)
{
    $vnet.FlowTimeoutInMinutes = $time
    # Persisting with Set-AzVirtualNetwork is assumed here; without it, the change applies only to the in-memory object.
    Set-AzVirtualNetwork -VirtualNetwork $vnet
}
```
## APIs, schemas, and tools
-### Can I manage VNets from code?
-Yes. You can use REST APIs for VNets in the [Azure Resource Manager](/rest/api/virtual-network) and [classic](/previous-versions/azure/ee460799(v=azure.100)) deployment models.
+### Can I manage virtual networks from code?
+
+Yes. You can use REST APIs for virtual networks in the [Azure Resource Manager](/rest/api/virtual-network) and [classic](/previous-versions/azure/ee460799(v=azure.100)) deployment models.
+
+### Is there tooling support for virtual networks?
-### Is there tooling support for VNets?
Yes. Learn more about using:
-- The Azure portal to deploy VNets through the [Azure Resource Manager](manage-virtual-network.md#create-a-virtual-network) and [classic](/previous-versions/azure/virtual-network/virtual-networks-create-vnet-classic-pportal) deployment models.
-- PowerShell to manage VNets deployed through the [Resource Manager](/powershell/module/az.network) deployment model.
-- The Azure CLI or Azure classic CLI to deploy and manage VNets deployed through the [Resource Manager](/cli/azure/network/vnet) and [classic](/previous-versions/azure/virtual-machines/azure-cli-arm-commands?toc=%2fazure%2fvirtual-network%2ftoc.json#network-resources) deployment models.
-## VNet peering
+* The Azure portal to deploy virtual networks through the [Azure Resource Manager](manage-virtual-network.md#create-a-virtual-network) and [classic](/previous-versions/azure/virtual-network/virtual-networks-create-vnet-classic-pportal) deployment models.
+* PowerShell to manage virtual networks deployed through the [Resource Manager](/powershell/module/az.network) deployment model.
+* The Azure CLI or Azure classic CLI to deploy and manage virtual networks deployed through the [Resource Manager](/cli/azure/network/vnet) and [classic](/previous-versions/azure/virtual-machines/azure-cli-arm-commands?toc=%2fazure%2fvirtual-network%2ftoc.json#network-resources) deployment models.
+
+## Virtual network peering
+
+### What is virtual network peering?
+
+Virtual network peering enables you to connect virtual networks. A peering connection between virtual networks enables you to route traffic between them privately through IPv4 addresses.
+
+Virtual machines in peered virtual networks can communicate with each other as if they're within the same network. These virtual networks can be in the same region or in different regions (also known as global virtual network peering).
+
+You can also create virtual network peering connections across Azure subscriptions.
+
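For example, here's a minimal Azure PowerShell sketch that peers two virtual networks in both directions (placeholder names; both links are required, as a later question on the Initiated state explains):

```Powershell
# Placeholder names; both links are needed for the peering to reach the Connected state.
$vnetA = Get-AzVirtualNetwork -Name 'VNetA' -ResourceGroupName 'MyResourceGroup'
$vnetB = Get-AzVirtualNetwork -Name 'VNetB' -ResourceGroupName 'MyResourceGroup'
Add-AzVirtualNetworkPeering -Name 'VNetAtoVNetB' -VirtualNetwork $vnetA -RemoteVirtualNetworkId $vnetB.Id
Add-AzVirtualNetworkPeering -Name 'VNetBtoVNetA' -VirtualNetwork $vnetB -RemoteVirtualNetworkId $vnetA.Id
```
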
+### Can I create a peering connection to a virtual network in a different region?
+
+Yes. Global virtual network peering enables you to peer virtual networks in different regions. Global virtual network peering is available in all Azure public regions, China cloud regions, and government cloud regions. You can't globally peer from Azure public regions to national cloud regions.
+
+### What are the constraints related to global virtual network peering and load balancers?
+
+If the two virtual networks in two regions are peered over global virtual network peering, you can't connect to resources that are behind a basic load balancer through the front-end IP of the load balancer. This restriction doesn't exist for a standard load balancer.
+
+The following resources can use basic load balancers, which means you can't reach them through a load balancer's front-end IP over global virtual network peering. But you can use global virtual network peering to reach the resources directly through their private virtual network IPs, if permitted.
-### What is VNet peering?
-VNet peering (or virtual network peering) enables you to connect virtual networks. A VNet peering connection between virtual networks enables you to route traffic between them privately through IPv4 addresses. Virtual machines in the peered VNets can communicate with each other as if they are within the same network. These virtual networks can be in the same region or in different regions (also known as Global VNet Peering). VNet peering connections can also be created across Azure subscriptions.
+* VMs behind basic load balancers
+* Virtual machine scale sets with basic load balancers
+* Azure Cache for Redis
+* Azure Application Gateway v1
+* Azure Service Fabric
+* Azure API Management stv1
+* Azure Active Directory Domain Services (AD DS)
+* Azure Logic Apps
+* Azure HDInsight
+* Azure Batch
+* App Service Environment v1 and v2
-### Can I create a peering connection to a VNet in a different region?
-Yes. Global VNet peering enables you to peer VNets in different regions. Global VNet peering is available in all Azure public regions, China cloud regions, and Government cloud regions. You cannot globally peer from Azure public regions to national cloud regions.
+You can connect to these resources via Azure ExpressRoute or network-to-network connections through virtual network gateways.
-### What are the constraints related to Global VNet Peering and Load Balancers?
-If the two virtual networks in two different regions are peered over Global VNet Peering, you cannot connect to resources that are behind a Basic Load Balancer through the Front End IP of the Load Balancer. This restriction does not exist for a Standard Load Balancer.
-The following resources can use Basic Load Balancers which means you cannot reach them through the Load Balancer's Front End IP over Global VNet Peering. You can however use Global VNet peering to reach the resources directly through their private VNet IPs, if permitted.
-- VMs behind Basic Load Balancers
-- Virtual machine scale sets with Basic Load Balancers
-- Redis Cache
-- Application Gateway (v1) SKU
-- Service Fabric
-- API Management (stv1)
-- Active Directory Domain Service (ADDS)
-- Logic Apps
-- HDInsight
-- Azure Batch
-- App Service Environment v1 and v2
+### Can I enable virtual network peering if my virtual networks belong to subscriptions within different Microsoft Entra tenants?
-You can connect to these resources via ExpressRoute or VNet-to-VNet through VNet Gateways.
+Yes. It's possible to establish virtual network peering (whether local or global) if your subscriptions belong to different Microsoft Entra tenants. You can do this via the Azure portal, PowerShell, or the Azure CLI.
-### Can I enable VNet Peering if my virtual networks belong to subscriptions within different Azure Active Directory tenants?
-Yes. It is possible to establish VNet Peering (whether local or global) if your subscriptions belong to different Azure Active Directory tenants. You can do this via Portal, PowerShell or CLI.
+### My virtual network peering connection is in an Initiated state. Why can't I connect?
-### My VNet peering connection is in *Initiated* state, why can't I connect?
-If your peering connection is in an *Initiated* state, this means you have created only one link. A bidirectional link must be created in order to establish a successful connection. For example, to peer VNet A to VNet B, a link must be created from VNetA to VNetB and from VNetB to VNetA. Creating both links will change the state to *Connected*.
+If your peering connection is in an **Initiated** state, you created only one link. You must create a bidirectional link to establish a successful connection.
-### My VNet peering connection is in *Disconnected* state, why can't I create a peering connection?
-If your VNet peering connection is in a *Disconnected* state, it means one of the links created was deleted. In order to re-establish a peering connection, you will need to delete the link and recreate it.
+For example, to peer VNetA to VNetB, you must create a link from VNetA to VNetB and from VNetB to VNetA. Creating both links changes the state to **Connected**.
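
You can check the state of each link with a sketch like this (placeholder names):

```Powershell
# PeeringState shows Initiated, Connected, or Disconnected for each link.
Get-AzVirtualNetworkPeering -VirtualNetworkName 'VNetA' -ResourceGroupName 'MyResourceGroup' |
    Select-Object Name, PeeringState
```
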
-### Can I peer my VNet with a VNet in a different subscription?
-Yes. You can peer VNets across subscriptions and across regions.
+### My virtual network peering connection is in a Disconnected state. Why can't I create a peering connection?
-### Can I peer two VNets with matching or overlapping address ranges?
-No. Address spaces must not overlap to enable VNet Peering.
+If your virtual network peering connection is in a **Disconnected** state, one of the links that you created was deleted. To re-establish a peering connection, you need to delete the remaining link and re-create both.
-### Can I peer a VNet to two different VNets with the 'Use Remote Gateway' option enabled on both the peerings?
-No. You can only enable the 'Use Remote Gateway' option on one peering to one of the VNets.
+### Can I peer my virtual network with a virtual network that's in a different subscription?
-### How much do VNet peering links cost?
-There is no charge for creating a VNet peering connection. Data transfer across peering connections is charged. [See here](https://azure.microsoft.com/pricing/details/virtual-network/).
+Yes. You can peer virtual networks across subscriptions and across regions.
-### Is VNet peering traffic encrypted?
-When Azure traffic moves between datacenters (outside physical boundaries not controlled by Microsoft or on behalf of Microsoft), [MACsec data-link layer encryption](../security/fundamentals/encryption-overview.md#encryption-of-data-in-transit) is utilized on the underlying network hardware. This is applicable to VNet peering traffic.
+### Can I peer two virtual networks that have matching or overlapping address ranges?
-### Why is my peering connection in a *Disconnected* state?
-VNet peering connections go into *Disconnected* state when one VNet peering link is deleted. You must delete both links in order to reestablish a successful peering connection.
+No. You can't enable virtual network peering if address spaces overlap.
+
+### Can I peer a virtual network to two virtual networks with the Use Remote Gateway option enabled on both peerings?
+
+No. You can enable the **Use Remote Gateway** option on only one peering to one of the virtual networks.
+
+### How much do virtual network peering links cost?
+
+There's no charge for creating a virtual network peering connection. Data transfer across peering connections is charged. For more information, see the [Azure Virtual Network pricing page](https://azure.microsoft.com/pricing/details/virtual-network/).
+
+### Is virtual network peering traffic encrypted?
+
+When Azure traffic moves between datacenters (outside physical boundaries not controlled by Microsoft or on behalf of Microsoft), the underlying network hardware uses [MACsec data-link layer encryption](../security/fundamentals/encryption-overview.md#encryption-of-data-in-transit). This encryption is applicable to virtual network peering traffic.
+
+### Why is my peering connection in a Disconnected state?
+
+Virtual network peering connections go into a **Disconnected** state when one virtual network peering link is deleted. You must delete both links to re-establish a successful peering connection.
### If I peer VNetA to VNetB and I peer VNetB to VNetC, does that mean VNetA and VNetC are peered?
-No. Transitive peering is not supported. You must peer VNetA and VNetC for this to take place.
+
+No. Transitive peering is not supported. You must manually peer VNetA to VNetC.
### Are there any bandwidth limitations for peering connections?
-No. VNet peering, whether local or global, does not impose any bandwidth restrictions. Bandwidth is only limited by the VM or the compute resource.
-### How can I troubleshoot VNet Peering issues?
-Here is a [troubleshooter guide](https://support.microsoft.com/en-us/help/4486956/troubleshooter-for-virtual-network-peering-issues) you can try.
+No. Virtual network peering, whether local or global, does not impose any bandwidth restrictions. Bandwidth is limited only by the VM or the compute resource.
+
+### How can I troubleshoot problems with virtual network peering?
+
+Try the [troubleshooting guide](https://support.microsoft.com/help/4486956/troubleshooter-for-virtual-network-peering-issues).
## Virtual network TAP

### Which Azure regions are available for virtual network TAP?
-Virtual network TAP preview is available in all Azure regions. The monitored network interfaces, the virtual network TAP resource, and the collector or analytics solution must be deployed in the same region.
-### Does Virtual Network TAP support any filtering capabilities on the mirrored packets?
-Filtering capabilities are not supported with the virtual network TAP preview. When a TAP configuration is added to a network interface a deep copy of all the ingress and egress traffic on the network interface is streamed to the TAP destination.
+The preview of virtual network terminal access point (TAP) is available in all Azure regions. You must deploy the monitored network adapters, the virtual network TAP resource, and the collector or analytics solution in the same region.
+
+### Does virtual network TAP support any filtering capabilities on the mirrored packets?
+
+Filtering capabilities are not supported with the virtual network TAP preview. When you add a TAP configuration to a network adapter, a deep copy of all the ingress and egress traffic on the network adapter is streamed to the TAP destination.
+
+### Can I add multiple TAP configurations to a monitored network adapter?
-### Can multiple TAP configurations be added to a monitored network interface?
-A monitored network interface can have only one TAP configuration. Check with the individual [partner solution](virtual-network-tap-overview.md#virtual-network-tap-partner-solutions) for the capability to stream multiple copies of the TAP traffic to the analytics tools of your choice.
+A monitored network adapter can have only one TAP configuration. Check with the individual [partner solution](virtual-network-tap-overview.md#virtual-network-tap-partner-solutions) for the capability to stream multiple copies of the TAP traffic to the analytics tools of your choice.
-### Can the same virtual network TAP resource aggregate traffic from monitored network interfaces in more than one virtual network?
-Yes. The same virtual network TAP resource can be used to aggregate mirrored traffic from monitored network interfaces in peered virtual networks in the same subscription or a different subscription. The virtual network TAP resource and the destination load balancer or destination network interface must be in the same subscription. All subscriptions must be under the same Azure Active Directory tenant.
+### Can the same virtual network TAP resource aggregate traffic from monitored network adapters in more than one virtual network?
-### Are there any performance considerations on production traffic if I enable a virtual network TAP configuration on a network interface?
+Yes. You can use the same virtual network TAP resource to aggregate mirrored traffic from monitored network adapters in peered virtual networks in the same subscription or a different subscription.
-Virtual network TAP is in preview. During preview, there is no service level agreement. The capability should not be used for production workloads. When a virtual machine network interface is enabled with a TAP configuration, the same resources on the Azure host allocated to the virtual machine to send the production traffic is used to perform the mirroring function and send the mirrored packets. Select the correct [Linux](../virtual-machines/sizes.md?toc=%2fazure%2fvirtual-network%2ftoc.json) or [Windows](../virtual-machines/sizes.md?toc=%2fazure%2fvirtual-network%2ftoc.json) virtual machine size to ensure that sufficient resources are available for the virtual machine to send the production traffic and the mirrored traffic.
+The virtual network TAP resource and the destination load balancer or destination network adapter must be in the same subscription. All subscriptions must be under the same Microsoft Entra tenant.
-### Is accelerated networking for [Linux](create-vm-accelerated-networking-cli.md) or [Windows](create-vm-accelerated-networking-powershell.md) supported with virtual network TAP?
+### Are there any performance considerations on production traffic if I enable a virtual network TAP configuration on a network adapter?
-You will be able to add a TAP configuration on a network interface attached to a virtual machine that is enabled with accelerated networking. But the performance and latency on the virtual machine will be affected by adding TAP configuration since the offload for mirroring traffic is currently not supported by Azure accelerated networking.
+Virtual network TAP is in preview. During preview, there is no service-level agreement. You shouldn't use the capability for production workloads.
+
+When you enable a virtual machine network adapter with a TAP configuration, the same resources on the Azure host allocated to the virtual machine to send the production traffic are used to perform the mirroring function and send the mirrored packets. Select the correct [Linux](../virtual-machines/sizes.md?toc=%2fazure%2fvirtual-network%2ftoc.json) or [Windows](../virtual-machines/sizes.md?toc=%2fazure%2fvirtual-network%2ftoc.json) virtual machine size to ensure that sufficient resources are available for the virtual machine to send the production traffic and the mirrored traffic.
+
+### Is accelerated networking for Linux or Windows supported with virtual network TAP?
+
+You can add a TAP configuration on a network adapter attached to a virtual machine that's enabled with accelerated networking for [Linux](create-vm-accelerated-networking-cli.md) or [Windows](create-vm-accelerated-networking-powershell.md). But adding the TAP configuration will affect the performance and latency on the virtual machine because Azure accelerated networking currently doesn't support the offload for mirroring traffic.
## Virtual network service endpoints

### What is the right sequence of operations to set up service endpoints to an Azure service?
+
There are two steps to secure an Azure service resource through service endpoints:
+
1. Turn on service endpoints for the Azure service.
-2. Set up VNet ACLs on the Azure service.
+2. Set up virtual network access control lists (ACLs) on the Azure service.
+
+The first step is a network-side operation, and the second step is a service resource-side operation. The same administrator or different administrators can perform the steps, based on the Azure role-based access control (RBAC) permissions granted to the administrator role.
-The first step is a network side operation and the second step is a service resource side operation. Both steps can be performed either by the same administrator or different administrators based on the Azure RBAC permissions granted to the administrator role. We recommend that you first turn on service endpoints for your virtual network prior to setting up VNet ACLs on Azure service side. Hence, the steps must be performed in the sequence listed above to set up VNet service endpoints.
+We recommend that you turn on service endpoints for your virtual network before you set up virtual network ACLs on the Azure service side. To set up virtual network service endpoints, you must perform the steps in the preceding sequence.
>[!NOTE]
-> Both the operations described above must be completed before you can limit the Azure service access to the allowed VNet and subnet. Only turning on service endpoints for the Azure service on the network side does not provide you the limited access. In addition, you must also set up VNet ACLs on the Azure service side.
+> You must complete both of the preceding operations before you can limit the Azure service access to the allowed virtual network and subnet. Only turning on service endpoints for the Azure service on the network side does not give you the limited access. You must also set up virtual network ACLs on the Azure service side.
+
+Certain services (such as Azure SQL and Azure Cosmos DB) allow exceptions to the preceding sequence through the `IgnoreMissingVnetServiceEndpoint` flag. After you set the flag to `True`, you can set up virtual network ACLs on the Azure service side before turning on the service endpoints on the network side. Azure services provide this flag to help customers in cases where the specific IP firewalls are configured on Azure services.
+
+Turning on the service endpoints on the network side can lead to a connectivity drop, because the source IP changes from a public IPv4 address to a private address. Setting up virtual network ACLs on the Azure service side before turning on service endpoints on the network side can help avoid a connectivity drop.
+
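As an illustration of the two steps for Azure Storage, here's a hedged Azure PowerShell sketch. The virtual network, subnet, and storage account names are placeholder assumptions:

```Powershell
# Step 1 (network side): turn on the service endpoint on the subnet.
$vnet = Get-AzVirtualNetwork -Name 'MyVnet' -ResourceGroupName 'MyResourceGroup'
$vnet = Set-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name 'default' `
    -AddressPrefix '10.0.0.0/24' -ServiceEndpoint 'Microsoft.Storage' | Set-AzVirtualNetwork

# Step 2 (service side): allow the subnet in the storage account's network rules.
$subnet = Get-AzVirtualNetworkSubnetConfig -Name 'default' -VirtualNetwork $vnet
Add-AzStorageAccountNetworkRule -ResourceGroupName 'MyResourceGroup' `
    -Name 'mystorageaccount' -VirtualNetworkResourceId $subnet.Id
```
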
+### Do all Azure services reside in the Azure virtual network that the customer provides? How does a virtual network service endpoint work with Azure services?
+
+Not all Azure services reside in the customer's virtual network. Most Azure data services (such as Azure Storage, Azure SQL, and Azure Cosmos DB) are multitenant services that can be accessed over public IP addresses. For more information, see [Deploy dedicated Azure services into virtual networks](virtual-network-for-azure-services.md).
+
+When you turn on virtual network service endpoints on the network side and set up appropriate virtual network ACLs on the Azure service side, access to an Azure service is restricted from an allowed virtual network and subnet.
+
+### How do virtual network service endpoints provide security?
+
+Virtual network service endpoints limit the Azure service's access to the allowed virtual network and subnet. In this way, they provide network-level security and isolation of the Azure service traffic.
-Certain services (such as Azure SQL and Azure Cosmos DB) allow exceptions to the above sequence through the `IgnoreMissingVnetServiceEndpoint` flag. Once the flag is set to `True`, VNet ACLs can be set on the Azure service side prior to setting up the service endpoints on the network side. Azure services provide this flag to help customers in cases where the specific IP firewalls are configured on Azure services and turning on the service endpoints on the network side can lead to a connectivity drop since the source IP changes from a public IPv4 address to a private address. Setting up VNet ACLs on the Azure service side before setting service endpoints on the network side can help avoid a connectivity drop.
+All traffic that uses virtual network service endpoints flows over the Microsoft backbone to provide another layer of isolation from the public internet. Customers can also choose to fully remove public internet access to the Azure service resources and allow traffic only from their virtual network through a combination of IP firewall and virtual network ACLs. Removing internet access helps protect the Azure service resources from unauthorized access.
-### Do all Azure services reside in the Azure virtual network provided by the customer? How does VNet service endpoint work with Azure services?
+### What does the virtual network service endpoint protect - virtual network resources or Azure service resources?
-No, not all Azure services reside in the customer's virtual network. The majority of Azure data services such as Azure Storage, Azure SQL, and Azure Cosmos DB, are multi-tenant services that can be accessed over public IP addresses. You can learn more about virtual network integration for Azure services [here](virtual-network-for-azure-services.md).
+Virtual network service endpoints help protect Azure service resources. Virtual network resources are protected through network security groups.
-When you use the VNet service endpoints feature (turning on VNet service endpoint on the network side and setting up appropriate VNet ACLs on the Azure service side), access to an Azure service is restricted from an allowed VNet and subnet.
+### Is there any cost for using virtual network service endpoints?
-### How does VNet service endpoint provide security?
+No. There's no additional cost for using virtual network service endpoints.
-The VNet service endpoint feature (turning on VNet service endpoint on the network side and setting up appropriate VNet ACLs on the Azure service side) limits the Azure service access to the allowed VNet and subnet, thus providing a network level security and isolation of the Azure service traffic. All traffic using VNet service endpoints flows over Microsoft backbone, thus providing another layer of isolation from the public internet. Moreover, customers can choose to fully remove public Internet access to the Azure service resources and allow traffic only from their virtual network through a combination of IP firewall and VNet ACLs, thus protecting the Azure service resources from unauthorized access.
+### Can I turn on virtual network service endpoints and set up virtual network ACLs if the virtual network and the Azure service resources belong to different subscriptions?
-### What does the VNet service endpoint protect - VNet resources or Azure service?
-VNet service endpoints help protect Azure service resources. VNet resources are protected through Network Security Groups (NSGs).
+Yes, it's possible. Virtual networks and Azure service resources can be in the same subscription or in different subscriptions. The only requirement is that both the virtual network and the Azure service resources must be under the same Microsoft Entra tenant.
-### Is there any cost for using VNet service endpoints?
+### Can I turn on virtual network service endpoints and set up virtual network ACLs if the virtual network and the Azure service resources belong to different Microsoft Entra tenants?
-No, there is no additional cost for using VNet service endpoints.
+Yes, it's possible when you're using service endpoints for Azure Storage and Azure Key Vault. For other services, virtual network service endpoints and virtual network ACLs are not supported across Microsoft Entra tenants.
-### Can I turn on VNet service endpoints and set up VNet ACLs if the virtual network and the Azure service resources belong to different subscriptions?
+### Can an on-premises device's IP address that's connected through an Azure virtual network gateway (VPN) or ExpressRoute gateway access Azure PaaS services over virtual network service endpoints?
-Yes, it is possible. Virtual networks and Azure service resources can be either in the same or different subscriptions. The only requirement is that both the virtual network and Azure service resources must be under the same Active Directory (AD) tenant.
+By default, Azure service resources secured to virtual networks are not reachable from on-premises networks. If you want to allow traffic from on-premises, you must also allow public (typically, NAT) IP addresses from on-premises or ExpressRoute. You can add these IP addresses through the IP firewall configuration for the Azure service resources.
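For instance, a short sketch that adds a hypothetical on-premises NAT address to a storage account's IP firewall:

```azurepowershell-interactive
# 203.0.113.25 is a placeholder for your on-premises public (NAT) IP address;
# myRG and mystorageacct are hypothetical names.
Add-AzStorageAccountNetworkRule -ResourceGroupName "myRG" -Name "mystorageacct" `
    -IPAddressOrRange "203.0.113.25"
```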
-### Can I turn on VNet service endpoints and set up VNet ACLs if the virtual network and the Azure service resources belong to different AD tenants?
-Yes, it is possible when using service endpoints for Azure Storage and Azure Key Vault. For rest of services, VNet service endpoints and VNet ACLs are not supported across AD tenants.
+### Can I use virtual network service endpoints to secure Azure services to multiple subnets within a virtual network or across multiple virtual networks?
-### Can an on-premises device's IP address that is connected through Azure Virtual Network gateway (VPN) or ExpressRoute gateway access Azure PaaS Service over VNet service endpoints?
-By default, Azure service resources secured to virtual networks are not reachable from on-premises networks. If you want to allow traffic from on-premises, you must also allow public (typically, NAT) IP addresses from your on-premises or ExpressRoute. These IP addresses can be added through the IP firewall configuration for the Azure service resources.
+To secure Azure services to multiple subnets within a virtual network or across multiple virtual networks, enable service endpoints on the network side on each of the subnets independently. Then, secure Azure service resources to all of the subnets by setting up appropriate virtual network ACLs on the Azure service side.
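A sketch of that sequence for two hypothetical subnets and a storage account, assuming the Az PowerShell module, might look like the following:

```azurepowershell-interactive
# Hypothetical names: myRG, myVNet, subnet1, subnet2, mystorageacct.
$vnet = Get-AzVirtualNetwork -Name "myVNet" -ResourceGroupName "myRG"

# Network side: turn on the Microsoft.Storage service endpoint on each subnet.
foreach ($name in "subnet1", "subnet2") {
    $subnet = Get-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name $name
    Set-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name $name `
        -AddressPrefix $subnet.AddressPrefix -ServiceEndpoint "Microsoft.Storage" | Out-Null
}
$vnet = Set-AzVirtualNetwork -VirtualNetwork $vnet

# Azure service side: secure the storage account to both subnets.
foreach ($name in "subnet1", "subnet2") {
    $subnet = Get-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name $name
    Add-AzStorageAccountNetworkRule -ResourceGroupName "myRG" -Name "mystorageacct" `
        -VirtualNetworkResourceId $subnet.Id | Out-Null
}
```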
-### Can I use VNet Service Endpoint feature to secure Azure service to multiple subnets within a virtual network or across multiple virtual networks?
-To secure Azure services to multiple subnets within a virtual network or across multiple virtual networks, enable service endpoints on the network side on each of the subnets independently and then secure Azure service resources to all of the subnets by setting up appropriate VNet ACLs on the Azure service side.
-
### How can I filter outbound traffic from a virtual network to Azure services and still use service endpoints?
-If you want to inspect or filter the traffic destined to an Azure service from a virtual network, you can deploy a network virtual appliance within the virtual network. You can then apply service endpoints to the subnet where the network virtual appliance is deployed and secure Azure service resources only to this subnet through VNet ACLs. This scenario might also be helpful if you wish to restrict Azure service access from your virtual network only to specific Azure resources using network virtual appliance filtering. For more information, see [egress with network virtual appliances](/azure/architecture/reference-architectures/dmz/nva-ha).
-### What happens when you access an Azure service account that has a virtual network access control list (ACL) enabled from outside the VNet?
-The HTTP 403 or HTTP 404 error is returned.
+If you want to inspect or filter the traffic destined to an Azure service from a virtual network, you can deploy a network virtual appliance within the virtual network. You can then apply service endpoints to the subnet where the network virtual appliance is deployed and secure Azure service resources only to this subnet through virtual network ACLs.
-### Are subnets of a virtual network created in different regions allowed to access an Azure service account in another region?
-Yes, for most of the Azure services, virtual networks created in different regions can access Azure services in another region through the VNet service endpoints. For example, if an Azure Cosmos DB account is in West US or East US and virtual networks are in multiple regions, the virtual network can access Azure Cosmos DB. Storage and SQL are exceptions and are regional in nature and both the virtual network and the Azure service need to be in the same region.
+This scenario might also be helpful if you want to restrict Azure service access from your virtual network only to specific Azure resources by using network virtual appliance filtering. For more information, see [Deploy highly available NVAs](/azure/architecture/reference-architectures/dmz/nva-ha).
+
+### What happens when someone accesses an Azure service account that has a virtual network ACL enabled from outside the virtual network?
+
+The service returns an HTTP 403 or HTTP 404 error.
+
+### Are subnets of a virtual network created in different regions allowed to access an Azure service account in another region?
+
+Yes. For most of the Azure services, virtual networks created in different regions can access Azure services in another region through the virtual network service endpoints. For example, if an Azure Cosmos DB account is in the West US or East US region, and virtual networks are in multiple regions, the virtual networks can access Azure Cosmos DB.
+
+Azure Storage and Azure SQL are exceptions and are regional in nature. Both the virtual network and the Azure service need to be in the same region.
-### Can an Azure service have both a VNet ACL and an IP firewall?
-Yes, a VNet ACL and an IP firewall can co-exist. Both features complement each other to ensure isolation and security.
-
-### What happens if you delete a virtual network or subnet that has service endpoint turned on for Azure service?
-Deletion of VNets and subnets are independent operations and are supported even when service endpoints are turned on for Azure services. In cases where the Azure services have VNet ACLs set up, for those VNets and subnets, the VNet ACL information associated with that Azure service is disabled when a VNet or subnet that has VNet service endpoint turned on is deleted.
-
-### What happens if an Azure service account that has a VNet Service endpoint enabled is deleted?
-The deletion of an Azure service account is an independent operation and is supported even when the service endpoint is enabled on the network side and VNet ACLs are set up on Azure service side.
-
-### What happens to the source IP address of a resource (like a VM in a subnet) that has VNet service endpoint enabled?
-When virtual network service endpoints are enabled, the source IP addresses of the resources in your virtual network's subnet switches from using public IPV4 addresses to the Azure virtual network's private IP addresses for traffic to Azure service. Note that this can cause specific IP firewalls that are set to public IPV4 address earlier on the Azure services to fail.
+### Can an Azure service have both a virtual network ACL and an IP firewall?
+
+Yes. A virtual network ACL and an IP firewall can coexist. The features complement each other to help ensure isolation and security.
+
+### What happens if you delete a virtual network or subnet that has service endpoints turned on for Azure services?
+
+Deletion of virtual networks and deletion of subnets are independent operations. They're supported even when you turn on service endpoints for Azure services.
+
+If you set up virtual network ACLs for Azure services, the ACL information associated with those Azure services is disabled when you delete a virtual network or subnet that has virtual network service endpoints turned on.
+
+### What happens if I delete an Azure service account that has a virtual network service endpoint turned on?
+
+The deletion of an Azure service account is an independent operation. It's supported even if you turned on the service endpoint on the network side and set up virtual network ACLs on the Azure service side.
+
+### What happens to the source IP address of a resource (like a VM in a subnet) that has virtual network service endpoints turned on?
+
+When you turn on virtual network service endpoints, the source IP addresses of the resources in your virtual network's subnet switch from public IPv4 addresses to the Azure virtual network's private IP addresses for traffic to Azure services. This switch can cause IP firewall rules on the Azure services that were set to the earlier public IPv4 addresses to fail.
### Does the service endpoint route always take precedence?
-Service endpoints add a system route which takes precedence over BGP routes and provides optimum routing for the service endpoint traffic. Service endpoints always take service traffic directly from your virtual network to the service on the Microsoft Azure backbone network. For more information about how Azure selects a route, see [Azure Virtual network traffic routing](virtual-networks-udr-overview.md).
+
+Service endpoints add a system route that takes precedence over Border Gateway Protocol (BGP) routes and provides optimum routing for the service endpoint traffic. Service endpoints always take service traffic directly from your virtual network to the service on the Microsoft Azure backbone network.
+
+For more information about how Azure selects a route, see [Virtual network traffic routing](virtual-networks-udr-overview.md).
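One way to see these system routes is to inspect the effective routes on a VM's network interface; entries with the next hop type `VirtualNetworkServiceEndpoint` come from service endpoints. The NIC and resource group names in this sketch are hypothetical:

```azurepowershell-interactive
# Hypothetical names: myRG, myVmNic.
Get-AzEffectiveRouteTable -ResourceGroupName "myRG" -NetworkInterfaceName "myVmNic" |
    Where-Object NextHopType -eq "VirtualNetworkServiceEndpoint"
```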
### Do service endpoints work with ICMP?
-No, ICMP traffic that is sourced from a subnet with service endpoints enabled will not take the service tunnel path to the desired endpoint. Service endpoints will only handle TCP traffic. This means that if you want to test latency or connectivity to an endpoint via service endpoints, tools like ping and tracert will not show the true path that the resources within the subnet will take.
-
-### How does NSG on a subnet work with service endpoints?
-To reach the Azure service, NSGs need to allow outbound connectivity. If your NSGs are opened to all Internet outbound traffic, then the service endpoint traffic should work. You can also limit the outbound traffic to service IPs only using the Service tags.
-
+
+No. ICMP traffic that's sourced from a subnet with service endpoints enabled won't take the service tunnel path to the desired endpoint. Service endpoints handle only TCP traffic. If you want to test latency or connectivity to an endpoint via service endpoints, tools like ping and tracert won't show the true path that the resources within the subnet will take.
+
+### How do NSGs on a subnet work with service endpoints?
+
+To reach the Azure service, NSGs need to allow outbound connectivity. If your NSGs are opened to all internet outbound traffic, the service endpoint traffic should work. You can also limit the outbound traffic to only service IP addresses by using service tags.
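For example, a sketch of an outbound NSG rule that allows only HTTPS traffic to Azure Storage by using the `Storage` service tag; the names, priority, and port are placeholder choices:

```azurepowershell-interactive
# Allow outbound HTTPS from the subnet to Azure Storage, using the Storage service tag.
$rule = New-AzNetworkSecurityRuleConfig -Name "Allow-Storage-Outbound" `
    -Description "Allow outbound traffic to the Storage service tag" `
    -Access Allow -Protocol Tcp -Direction Outbound -Priority 100 `
    -SourceAddressPrefix "VirtualNetwork" -SourcePortRange "*" `
    -DestinationAddressPrefix "Storage" -DestinationPortRange "443"

New-AzNetworkSecurityGroup -Name "myNsg" -ResourceGroupName "myRG" `
    -Location "westus" -SecurityRules $rule
```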
+
### What permissions do I need to set up service endpoints?
-Service endpoints can be configured on a virtual network independently by a user with write access to the virtual network. To secure Azure service resources to a VNet, the user must have permission **Microsoft.Network/virtualNetworks/subnets/joinViaServiceEndpoint/action** for the subnets being added. This permission is included in the built-in service administrator role by default and can be modified by creating custom roles. Learn more about built-in roles and assigning specific permissions to [custom roles](../role-based-access-control/custom-roles.md?toc=%2fazure%2fvirtual-network%2ftoc.json).
-
-### Can I filter virtual network traffic to Azure services, allowing only specific Azure service resources, over VNet service endpoints?
+You can configure service endpoints on a virtual network independently if you have write access to that network.
+
+To secure Azure service resources to a virtual network, you must have **Microsoft.Network/virtualNetworks/subnets/joinViaServiceEndpoint/action** permission for the subnets that you're adding. This permission is included in the built-in service administrator role by default and can be modified through the creation of custom roles.
+
+For more information about built-in roles and assigning specific permissions to custom roles, see [Azure custom roles](../role-based-access-control/custom-roles.md?toc=%2fazure%2fvirtual-network%2ftoc.json).
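As a sketch, a custom role that grants only the join action could be created like this; the role name and subscription ID are placeholders:

```azurepowershell-interactive
# Clone an existing role definition as a starting point, then trim it down.
$role = Get-AzRoleDefinition -Name "Reader"
$role.Id = $null
$role.Name = "Service Endpoint Subnet Joiner"   # hypothetical role name
$role.Description = "Can secure Azure service resources to subnets."
$role.Actions.Clear()
$role.Actions.Add("Microsoft.Network/virtualNetworks/subnets/joinViaServiceEndpoint/action")
$role.AssignableScopes.Clear()
$role.AssignableScopes.Add("/subscriptions/00000000-0000-0000-0000-000000000000")  # placeholder
New-AzRoleDefinition -Role $role
```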
+
+### Can I filter virtual network traffic to Azure services over service endpoints?
+
+You can use virtual network service endpoint policies to filter virtual network traffic to Azure services, allowing only specific Azure service resources over the service endpoints. Endpoint policies provide granular access control from the virtual network traffic to the Azure services.
-Virtual network (VNet) service endpoint policies allow you to filter virtual network traffic to Azure services, allowing only specific Azure service resources over the service endpoints. Endpoint policies provide granular access control from the virtual network traffic to the Azure services. You can learn more about the service endpoint policies [here](virtual-network-service-endpoint-policies-overview.md).
+To learn more, see [Virtual network service endpoint policies for Azure Storage](virtual-network-service-endpoint-policies-overview.md).
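For illustration, a sketch that scopes service endpoint traffic for Azure Storage to a single account; all names are hypothetical:

```azurepowershell-interactive
# Hypothetical names: myRG, mystorageacct, myEndpointPolicy.
$storageId = (Get-AzStorageAccount -ResourceGroupName "myRG" -Name "mystorageacct").Id

$definition = New-AzServiceEndpointPolicyDefinition -Name "allow-mystorageacct" `
    -Service "Microsoft.Storage" -ServiceResource $storageId `
    -Description "Allow service endpoint traffic only to this storage account"

New-AzServiceEndpointPolicy -ResourceGroupName "myRG" -Name "myEndpointPolicy" `
    -Location "westus" -ServiceEndpointPolicyDefinition $definition
```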
-### Does Azure Active Directory (Azure AD) support VNet service endpoints?
+### Does Microsoft Entra ID support virtual network service endpoints?
-Azure Active Directory (Azure AD) doesn't support service endpoints natively. Complete list of Azure Services supporting VNet service endpoints can be seen [here](./virtual-network-service-endpoints-overview.md). Note that the "Microsoft.AzureActiveDirectory" tag listed under services supporting service endpoints is used for supporting service endpoints to ADLS Gen 1. For ADLS Gen 1, virtual network integration for Azure Data Lake Storage Gen1 makes use of the virtual network service endpoint security between your virtual network and Azure Active Directory (Azure AD) to generate additional security claims in the access token. These claims are then used to authenticate your virtual network to your Data Lake Storage Gen1 account and allow access. Learn more about [Azure Data Lake Store Gen 1 VNet Integration](../data-lake-store/data-lake-store-network-security.md?toc=%2fazure%2fvirtual-network%2ftoc.json)
+Microsoft Entra ID doesn't support service endpoints natively. For a complete list of Azure services that support virtual network service endpoints, see [Virtual network service endpoints](./virtual-network-service-endpoints-overview.md).
-### Are there any limits on how many VNet service endpoints I can set up from my VNet?
-There is no limit on the total number of VNet service endpoints in a virtual network. For an Azure service resource (such as an Azure Storage account), services may enforce limits on the number of subnets used for securing the resource. The following table shows some example limits:
+In that list, the *Microsoft.AzureActiveDirectory* tag listed under services that support service endpoints is used for supporting service endpoints to Azure Data Lake Storage Gen1. [Virtual network integration for Data Lake Storage Gen1](../data-lake-store/data-lake-store-network-security.md?toc=%2fazure%2fvirtual-network%2ftoc.json) makes use of the virtual network service endpoint security between your virtual network and Microsoft Entra ID to generate additional security claims in the access token. These claims are then used to authenticate your virtual network to your Data Lake Storage Gen1 account and allow access.
-|Azure service| Limits on VNet rules|
+### Are there any limits on how many service endpoints I can set up from my virtual network?
+
+There's no limit on the total number of service endpoints in a virtual network. For an Azure service resource (such as an Azure Storage account), services might enforce limits on the number of subnets that you use for securing the resource. The following table shows some example limits:
+
+|Azure service| Limits on virtual network rules|
|---|---|
|Azure Storage| 200|
|Azure SQL| 128|
|Azure Synapse Analytics| 128|
-|Azure KeyVault| 200 |
+|Azure Key Vault| 200 |
|Azure Cosmos DB| 64|
-|Azure Event Hub| 128|
+|Azure Event Hubs| 128|
|Azure Service Bus| 128|
-|Azure Data Lake Store V1| 100|
-
+|Azure Data Lake Storage Gen1| 100|
+
>[!NOTE]
-> The limits are subjected to changes at the discretion of the Azure service. Refer to the respective service documentation for services details.
+> The limits are subject to change at the discretion of the Azure services. Refer to the respective service documentation for details.
+
+## Migration of classic network resources to Resource Manager
-## Migrate classic network resources to Resource Manager
+### What is Azure Service Manager, and what does the term "classic" mean?
-### What is Azure Service Manager and the term classic mean?
+Azure Service Manager is the old deployment model of Azure that was responsible for creating, managing, and deleting resources. The word *classic* in a networking service refers to resources managed by the Azure Service Manager model. For more information, see the [comparison of deployment models](../azure-resource-manager/management/deployment-models.md).
-Azure Service Manager is the old deployment model of Azure responsible for creating, managing, and deleting resources. The word *classic* in a networking service refers to resources managed by the Azure Service Manager model. For more information, see [Comparison between deployment models](../azure-resource-manager/management/deployment-models.md).
+### What is Azure Resource Manager?
-### What is Azure Resource Manager?
+Azure Resource Manager is the latest deployment and management model in Azure that's responsible for creating, managing, and deleting resources in your Azure subscription. For more information, see [What is Azure Resource Manager?](../azure-resource-manager/management/overview.md).
-Azure Resource Manager is the latest deployment and management model in Azure responsible for creating, managing, deleting resources in your Azure subscription. For more information, see [What is Azure Resource Manager?](../azure-resource-manager/management/overview.md)
+### Can I revert the migration after resources have been committed to Resource Manager?
-### Can I revert the migration after resources have been committed to Resource Manager?
+You can cancel the migration as long as resources are still in the prepared state. Rolling back to the previous deployment model isn't supported after you successfully migrate resources through the commit operation.
-You can cancel the migration as long as resources are still in the prepared state. Rolling back to the previous deployment model isn't supported after resources have been successfully migrated through the commit operation.
+### Can I revert the migration if the commit operation failed?
-### Can I revert the migration if the commit operation failed?
+You can't reverse a migration if the commit operation failed. No migration operation, including the commit operation, can be changed after it starts. We recommend that you retry the operation after a short period. If the operation continues to fail, submit a support request.
-You can't reverse a migration if the commit operation failed. All migration operations, including the commit operation can't be changed once started. It's recommended that you retry the operation again after a short period. If the operation continues to fail, submit a support request.
+### Can I validate my subscription or resources to see if they're capable of migration?
-### Can I validate my subscription or resources to see if they're capable of migration?
+Yes. The first step in preparing for migration is to validate that resources can be migrated. If the validation fails, you'll receive messages for all the reasons why the migration can't be completed.
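If you're using the classic (Azure Service Manager) PowerShell module, the sequence looks roughly like this sketch; `myClassicVNet` is a placeholder, and the cmdlets come from the classic Azure module rather than Az:

```azurepowershell-interactive
# Requires the classic Azure PowerShell module; "myClassicVNet" is hypothetical.
Move-AzureVirtualNetwork -Validate -VirtualNetworkName "myClassicVNet"

# If validation succeeds, prepare the migration, then either commit it
# or abort it while the resources are still in the prepared state.
Move-AzureVirtualNetwork -Prepare -VirtualNetworkName "myClassicVNet"
Move-AzureVirtualNetwork -Commit -VirtualNetworkName "myClassicVNet"
```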
-Yes. As part of the migration procedure, the first step in preparing for migration is to validate if resources are capable of being migrated. In the case the validate operation fails, you'll receive messages for all the reasons the migration can't be completed.
+### Are Application Gateway resources migrated as part of the virtual network migration from classic to Resource Manager?
-### Are Application Gateway resources migrated as part of the classic to Resource Manager VNet migration?
+Azure Application Gateway resources aren't migrated automatically as part of the virtual network migration process. If one is present in the virtual network, the migration won't be successful. To migrate an Application Gateway resource to Resource Manager, you have to remove and re-create the Application Gateway instance after the migration is complete.
-Application Gateway resources won't be migrated automatically as part of the VNet migration process. If one is present in the virtual network, the migration won't be successful. In order to migrate an Application Gateway resource to Resource Manager, you'll have to remove and recreate the Application Gateway once the migration is complete.
+### Are VPN Gateway resources migrated as part of the virtual network migration from classic to Resource Manager?
-### Does the VPN Gateway get migrated as part of the classic to Resource Manager VNet migration?
+Azure VPN Gateway resources are migrated as part of the virtual network migration process. The migration is completed one virtual network at a time with no other requirements. The migration steps are the same as for migrating a virtual network without a VPN gateway.
-VPN Gateway resources are migrated as part of VNet migration process. The migration is completed one virtual network at a time with no other requirements. The migration steps are the same as migrating a virtual network without a VPN gateway.
+### Is a service interruption associated with migrating classic VPN gateways to Resource Manager?
-### Is there a service interruption associated with migrating classic VPN gateways to Resource Manager?
-
-You won't experience any service interruption with your VPN connection when migrating to Resource Manager. Therefore existing workloads will continue to function without loss of on-premises connectivity during the migration.
+You won't experience any service interruption with your VPN connection when you're migrating to Resource Manager. Existing workloads will continue to function with full on-premises connectivity during the migration.
-### Do I need to reconfigure my on-premises device after the VPN Gateway has been migrated to Resource Manager?
+### Do I need to reconfigure my on-premises device after the VPN gateway is migrated to Resource Manager?
-The public IP address associated with the VPN gateway will remain the same even after the migration. You don't need to reconfigure your on-premises router.
+The public IP address associated with the VPN gateway remains the same after the migration. You don't need to reconfigure your on-premises router.
-### What are the supported scenarios for classic VPN Gateway migration to Resource Manager?
+### What are the supported scenarios for VPN gateway migration from classic to Resource Manager?
-Most of the common VPN connectivity scenarios are covered by the classic to Resource Manager migration. The supported scenarios include:
+The migration from classic to Resource Manager covers most of the common VPN connectivity scenarios. The supported scenarios include:
-* Point-to-site connectivity
+* Point-to-site connectivity.
-* Site-to-site connectivity with a VPN Gateway connected to an on-premises location
+* Site-to-site connectivity with a VPN gateway connected to an on-premises location.
-* VNet-to-VNet connectivity between two virtual networks using VPN gateways
+* Network-to-network connectivity between two virtual networks that use VPN gateways.
-* Multiple VNets connected to same on-premises location
+* Multiple virtual networks connected to the same on-premises location.
-* Multi-site connectivity
+* Multiple-site connectivity.
-* Forced tunneling enabled virtual networks
+* Virtual networks with forced tunneling enabled.
-### Which scenarios aren't supported for classic VPN Gateway migration to Resource Manager?
+### Which scenarios aren't supported for VPN gateway migration from classic to Resource Manager?
-Scenarios that aren't supported include:
+Scenarios that aren't supported include:
-* Virtual network with both an ExpressRoute Gateway and a VPN Gateway is currently not supported.
+* A virtual network with both an ExpressRoute gateway and a VPN gateway.
-* Virtual network with an ExpressRoute Gateway connected to a circuit in a different subscription.
+* A virtual network with an ExpressRoute gateway connected to a circuit in a different subscription.
* Transit scenarios where VM extensions are connected to on-premises servers.
-### Where can I find more information regarding classic to Azure Resource Manager migration?
+### Where can I find more information about migration from classic to Resource Manager?
-For more information, see [FAQ about classic to Azure Resource Manager migration](../virtual-machines/migration-classic-resource-manager-faq.yml).
+See [Frequently asked questions about classic to Azure Resource Manager migration](../virtual-machines/migration-classic-resource-manager-faq.yml).
-### How can I report an issue?
+### How can I report a problem?
-You can post your questions about your migration issues to the [Microsoft Q&A](/answers/topics/azure-virtual-network.html) page. It's recommended that you post all your questions on this forum. If you have a support contract, you can also file a support request.
+You can post questions about migration problems to the [Microsoft Q&A](/answers/topics/azure-virtual-network.html) page. We recommend that you post all your questions on this forum. If you have a support contract, you can also file a support request.
virtual-network Virtual Networks Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-networks-overview.md
Title: What is Azure Virtual Network? description: Learn about Azure Virtual Network concepts and features, including address space, subnets, regions, and subscriptions.
-# Customer intent: As someone with a basic network background that is new to Azure, I want to understand the capabilities of Azure Virtual Network, so that my Azure resources such as VMs, can securely communicate with each other, the internet, and my on-premises resources.
+# Customer intent: As someone with a basic network background who is new to Azure, I want to understand the capabilities of Azure Virtual Network so that my Azure resources can securely communicate with each other, the internet, and my on-premises resources.
Last updated 05/08/2023
# What is Azure Virtual Network?
-Azure Virtual Network is the fundamental building block for your private network in Azure. A virtual network enables many types of Azure resources, such as Azure Virtual Machines (VM), to securely communicate with each other, the internet, and on-premises networks. A virtual network is similar to a traditional network that you'd operate in your own data center. An Azure Virtual Network brings with it extra benefits of Azure's infrastructure such as scale, availability, and isolation.
+Azure Virtual Network is a service that provides the fundamental building block for your private network in Azure. An instance of the service (a virtual network) enables many types of Azure resources to securely communicate with each other, the internet, and on-premises networks. These Azure resources include virtual machines (VMs).
-## Why use an Azure Virtual network?
-Azure virtual network enables Azure resources to securely communicate with each other, the internet, and on-premises networks.
+A virtual network is similar to a traditional network that you'd operate in your own datacenter. But it brings extra benefits of the Azure infrastructure, such as scale, availability, and isolation.
+
+## Why use an Azure virtual network?
Key scenarios that you can accomplish with a virtual network include:
-- Communication of Azure resources with the internet
+- Communication of Azure resources with the internet.
-- Communication between Azure resources
+- Communication between Azure resources.
-- Communication with on-premises resources
+- Communication with on-premises resources.
-- Filtering network traffic
+- Filtering of network traffic.
-- Routing network traffic
+- Routing of network traffic.
- Integration with Azure services.
### Communicate with the internet
-All resources in a virtual network can communicate outbound to the internet, by default. You can communicate inbound to a resource by assigning a public IP address or a public load balancer. You can also use public IP, NAT gateway, or public load balancer to manage your outbound connections. To learn more about outbound connections in Azure, see [Outbound connections](../load-balancer/load-balancer-outbound-connections.md), [Public IP addresses](./ip-services/virtual-network-public-ip-address.md), [NAT Gateway](../nat-gateway/nat-overview.md) and [Load Balancer](../load-balancer/load-balancer-overview.md).
+All resources in a virtual network can communicate outbound with the internet, by default. You can also use a [public IP address](./ip-services/virtual-network-public-ip-address.md), [NAT gateway](../nat-gateway/nat-overview.md), or [public load balancer](../load-balancer/load-balancer-overview.md) to manage your [outbound connections](../load-balancer/load-balancer-outbound-connections.md). You can communicate inbound with a resource by assigning a public IP address or a public load balancer.
->[!NOTE]
->When using only an internal [Standard Load Balancer](../load-balancer/load-balancer-overview.md), outbound connectivity is not available until you define how you want [outbound connections](../load-balancer/load-balancer-outbound-connections.md) to work with an instance-level public IP or a public load balancer.
+When you're using only an [internal standard load balancer](../load-balancer/load-balancer-overview.md), outbound connectivity is not available until you define how you want outbound connections to work with an instance-level public IP address or a public load balancer.
### Communicate between Azure resources
Azure resources communicate securely with each other in one of the following ways:
-- **Through a virtual network**: You can deploy VMs, and other types of Azure resources to a virtual network. Examples of resources include Azure App Service Environments, the Azure Kubernetes Service (AKS), and Azure Virtual Machine Scale Sets. To view a complete list of Azure resources that you can deploy into a virtual network, see [Virtual network service integration](virtual-network-for-azure-services.md).
+- **Virtual network**: You can deploy VMs and other types of Azure resources in a virtual network. Examples of resources include App Service Environments, Azure Kubernetes Service (AKS), and Azure Virtual Machine Scale Sets. To view a complete list of Azure resources that you can deploy in a virtual network, see [Deploy dedicated Azure services into virtual networks](virtual-network-for-azure-services.md).
-- **Through a virtual network service endpoint**: Extend your virtual network private address space and the identity of your virtual network to Azure service resources. Examples of resources include Azure Storage accounts and Azure SQL Database, over a direct connection. Service endpoints allow you to secure your critical Azure service resources to only a virtual network. To learn more, see [Virtual network service endpoints overview](virtual-network-service-endpoints-overview.md).
+- **Virtual network service endpoint**: You can extend your virtual network's private address space and the identity of your virtual network to Azure service resources over a direct connection. Examples of resources include Azure Storage accounts and Azure SQL Database. Service endpoints allow you to secure your critical Azure service resources to only a virtual network. To learn more, see [Virtual network service endpoints](virtual-network-service-endpoints-overview.md).
-- **Through virtual network peering**: You can connect virtual networks to each other, enabling resources in either virtual network to communicate with each other, using virtual network peering. The virtual networks you connect can be in the same, or different, Azure regions. To learn more, see [Virtual network peering](virtual-network-peering-overview.md).
+- **Virtual network peering**: You can connect virtual networks to each other by using virtual network peering. The resources in either virtual network can then communicate with each other. The virtual networks that you connect can be in the same, or different, Azure regions. To learn more, see [Virtual network peering](virtual-network-peering-overview.md). A minimal sketch follows this list.
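The following is a minimal sketch, assuming the Az PowerShell module and hypothetical network names (`myRG`, `vnet1`, `vnet2`); peering must be created in each direction:

```azurepowershell-interactive
# Hypothetical names: myRG, vnet1, vnet2.
$vnet1 = Get-AzVirtualNetwork -Name "vnet1" -ResourceGroupName "myRG"
$vnet2 = Get-AzVirtualNetwork -Name "vnet2" -ResourceGroupName "myRG"

# Create the peering in both directions so traffic can flow either way.
Add-AzVirtualNetworkPeering -Name "vnet1-to-vnet2" -VirtualNetwork $vnet1 -RemoteVirtualNetworkId $vnet2.Id
Add-AzVirtualNetworkPeering -Name "vnet2-to-vnet1" -VirtualNetwork $vnet2 -RemoteVirtualNetworkId $vnet1.Id
```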
### Communicate with on-premises resources
-You can connect your on-premises computers and networks to a virtual network using any of the following options:
+You can connect your on-premises computers and networks to a virtual network by using any of the following options:
-- **Point-to-site virtual private network (VPN):** Established between a virtual network and a single computer in your network. Each computer that wants to establish connectivity with a virtual network must configure its connection. This connection type is great if you're just getting started with Azure, or for developers, because it requires little or no changes to your existing network. The communication between your computer and a virtual network is sent through an encrypted tunnel over the internet. To learn more, see [Point-to-site VPN](../vpn-gateway/point-to-site-about.md?toc=/azure/virtual-network/toc.json#).
+- **Point-to-site virtual private network (VPN)**: Established between a virtual network and a single computer in your network. Each computer that wants to establish connectivity with a virtual network must configure its connection. This connection type is useful if you're just getting started with Azure, or for developers, because it requires few or no changes to an existing network. The communication between your computer and a virtual network is sent through an encrypted tunnel over the internet. To learn more, see [About point-to-site VPN](../vpn-gateway/point-to-site-about.md?toc=/azure/virtual-network/toc.json#).
-- **Site-to-site VPN:** Established between your on-premises VPN device and an Azure VPN Gateway that is deployed in a virtual network. This connection type enables any on-premises resource that you authorize to access a virtual network. The communication between your on-premises VPN device and an Azure VPN gateway is sent through an encrypted tunnel over the internet. To learn more, see [Site-to-site VPN](../vpn-gateway/design.md?toc=/azure/virtual-network/toc.json#s2smulti).
+- **Site-to-site VPN**: Established between your on-premises VPN device and an Azure VPN gateway that's deployed in a virtual network. This connection type enables any on-premises resource that you authorize to access a virtual network. The communication between your on-premises VPN device and an Azure VPN gateway is sent through an encrypted tunnel over the internet. To learn more, see [Site-to-site VPN](../vpn-gateway/design.md?toc=/azure/virtual-network/toc.json#s2smulti).
-- **Azure ExpressRoute:** Established between your network and Azure, through an ExpressRoute partner. This connection is private. Traffic doesn't go over the internet. To learn more, see [ExpressRoute](../expressroute/expressroute-introduction.md?toc=/azure/virtual-network/toc.json).
+- **Azure ExpressRoute**: Established between your network and Azure, through an ExpressRoute partner. This connection is private. Traffic doesn't go over the internet. To learn more, see [What is Azure ExpressRoute?](../expressroute/expressroute-introduction.md?toc=/azure/virtual-network/toc.json).
### Filter network traffic
-You can filter network traffic between subnets using either or both of the following options:
+You can filter network traffic between subnets by using either or both of the following options:
-- **Network security groups:** Network security groups and application security groups can contain multiple inbound and outbound security rules. These rules enable you to filter traffic to and from resources by source and destination IP address, port, and protocol. To learn more, see [Network security groups](./network-security-groups-overview.md#network-security-groups) or [Application security groups](./network-security-groups-overview.md#application-security-groups).
+- **Network security groups**: Network security groups and application security groups can contain multiple inbound and outbound security rules. These rules enable you to filter traffic to and from resources by source and destination IP address, port, and protocol. To learn more, see [Network security groups](./network-security-groups-overview.md) and [Application security groups](./application-security-groups.md).
-- **Network virtual appliances:** A network virtual appliance is a VM that performs a network function, such as a firewall, WAN optimization, or other network function. To view a list of available network virtual appliances that you can deploy in a virtual network, see [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/category/networking?page=1&subcategories=appliances).
+- **Network virtual appliances**: A network virtual appliance is a VM that performs a network function, such as a firewall or WAN optimization. To view a list of available network virtual appliances that you can deploy in a virtual network, go to [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/category/networking?page=1&subcategories=appliances).
### Route network traffic
-Azure routes traffic between subnets, connected virtual networks, on-premises networks, and the Internet, by default. You can implement either or both of the following options to override the default routes Azure creates:
+Azure routes traffic between subnets, connected virtual networks, on-premises networks, and the internet, by default. You can implement either or both of the following options to override the default routes that Azure creates:
-- **Route tables:** You can create custom route tables with routes that control where traffic is routed to for each subnet. Learn more about [route tables](virtual-networks-udr-overview.md#user-defined).
+- **Route tables**: You can create [custom route tables](virtual-networks-udr-overview.md#user-defined) that control where traffic is routed for each subnet, as sketched after this list.
-- **Border gateway protocol (BGP) routes:** If you connect your virtual network to your on-premises network using an Azure VPN Gateway or ExpressRoute connection, you can propagate your on-premises BGP routes to your virtual networks. Learn more about using BGP with [Azure VPN Gateway](../vpn-gateway/vpn-gateway-bgp-overview.md?toc=/azure/virtual-network/toc.json) and [ExpressRoute](../expressroute/expressroute-routing.md?toc=/azure/virtual-network/toc.json#dynamic-route-exchange).
+- **Border gateway protocol (BGP) routes**: If you connect your virtual network to your on-premises network by using an [Azure VPN gateway](../vpn-gateway/vpn-gateway-bgp-overview.md?toc=/azure/virtual-network/toc.json) or an [ExpressRoute](../expressroute/expressroute-routing.md?toc=/azure/virtual-network/toc.json#dynamic-route-exchange) connection, you can propagate your on-premises BGP routes to your virtual networks.
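As a sketch of the route table option referenced above, the following creates a user-defined route that sends all outbound traffic through a hypothetical network virtual appliance at 10.0.2.4, and associates the table with a subnet; every name and address here is a placeholder:

```azurepowershell-interactive
# Hypothetical names and addresses: myRG, westus, myVNet, mySubnet, 10.0.2.4.
$route = New-AzRouteConfig -Name "default-via-nva" -AddressPrefix "0.0.0.0/0" `
    -NextHopType VirtualAppliance -NextHopIpAddress "10.0.2.4"
$routeTable = New-AzRouteTable -Name "myRouteTable" -ResourceGroupName "myRG" `
    -Location "westus" -Route $route

# Associate the route table with the subnet.
$vnet = Get-AzVirtualNetwork -Name "myVNet" -ResourceGroupName "myRG"
$subnet = Get-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name "mySubnet"
Set-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name "mySubnet" `
    -AddressPrefix $subnet.AddressPrefix -RouteTable $routeTable | Set-AzVirtualNetwork
```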
-### Virtual network integration for Azure services
+### Integrate with Azure services
-Integrating Azure services to an Azure virtual network enables private access to the service from virtual machines or compute resources in the virtual network.
-You can integrate Azure services in your virtual network with the following options:
+Integrating Azure services with an Azure virtual network enables private access to the service from virtual machines or compute resources in the virtual network. You can use the following options for this integration:
-- Deploying [dedicated instances of the service](virtual-network-for-azure-services.md) into a virtual network. The services can then be privately accessed within the virtual network and from on-premises networks.
+- Deploy [dedicated instances of the service](virtual-network-for-azure-services.md) into a virtual network. The services can then be privately accessed within the virtual network and from on-premises networks.
-- Using [Private Link](../private-link/private-link-overview.md) to access privately a specific instance of the service from your virtual network and from on-premises networks.
+- Use [Azure Private Link](../private-link/private-link-overview.md) to privately access a specific instance of the service from your virtual network and from on-premises networks.
-- You can also access the service using public endpoints by extending a virtual network to the service, through [service endpoints](virtual-network-service-endpoints-overview.md). Service endpoints allow service resources to be secured to the virtual network.
+- Access the service over public endpoints by extending a virtual network to the service, through [service endpoints](virtual-network-service-endpoints-overview.md). Service endpoints allow service resources to be secured to the virtual network.
-## Azure Virtual Network limits
+## Limits
-There are certain limits around the number of Azure resources you can deploy. Most Azure networking limits are at the maximum values. However, you can [increase certain networking limits](../azure-portal/supportability/networking-quota-requests.md) as specified on the [virtual network limits page](../azure-resource-manager/management/azure-subscription-service-limits.md#networking-limits).
+There are limits to the number of Azure resources that you can deploy. Most Azure networking limits are at the maximum values. However, you can [increase certain networking limits](../azure-portal/supportability/networking-quota-requests.md). For more information, see [Networking limits](../azure-resource-manager/management/azure-subscription-service-limits.md#networking-limits).
## Virtual networks and availability zones
Virtual networks and subnets span all availability zones in a region. You don't
## Pricing
-There's no charge for using Azure Virtual Network; it's free of cost. Standard charges are applicable for resources, such as Virtual Machines (VMs) and other products. To learn more, see [VNet pricing](https://azure.microsoft.com/pricing/details/virtual-network/) and the Azure [pricing calculator](https://azure.microsoft.com/pricing/calculator/).
+There's no charge for using Azure Virtual Network. Standard charges apply for resources, such as VMs and other products. To learn more, see [Virtual Network pricing](https://azure.microsoft.com/pricing/details/virtual-network/) and the Azure [pricing calculator](https://azure.microsoft.com/pricing/calculator/).
## Next steps
- Learn about [Azure Virtual Network concepts and best practices](concepts-and-best-practices.md).
-- To get started using a virtual network, create one, deploy a few VMs to it, and communicate between the VMs. To learn how, see the [Create a virtual network](quick-create-portal.md) quickstart.
+- Get started with using a virtual network by creating one, deploying a few VMs to it, and communicating between the VMs. To learn how, see the [Use the Azure portal to create a virtual network](quick-create-portal.md) quickstart.
-- [Learn module: Introduction to Azure Virtual Networks](/training/modules/introduction-to-azure-virtual-networks)
+- Follow a training module on designing and implementing core Azure networking infrastructure, including virtual networks: [Introduction to Azure virtual networks](/training/modules/introduction-to-azure-virtual-networks).
virtual-network Vnet Integration For Azure Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/vnet-integration-for-azure-services.md
To compare and understand the differences, see the following table.
| Impacts the cost of your solution | No | Yes (see [Private link pricing](https://azure.microsoft.com/pricing/details/private-link/)) |
| Impacts the [composite SLA](/azure/architecture/framework/resiliency/business-metrics#composite-slas) of your solution | No | Yes (Private link service itself has a [99.99% SLA](https://azure.microsoft.com/support/legal/sla/private-link/)) |
| Setup and maintenance | Simple to set up with less management overhead | Extra effort is required |
-| Limits | No limit on the total number of service endpoints in a virtual network. Azure services may enforce limits on the number of subnets used for securing the resource. (see [virtual network FAQ](virtual-networks-faq.md#are-there-any-limits-on-how-many-vnet-service-endpoints-i-can-set-up-from-my-vnet)) | Yes (see [Private Link limits](../azure-resource-manager/management/azure-subscription-service-limits.md#private-link-limits)) |
+| Limits | No limit on the total number of service endpoints in a virtual network. Azure services may enforce limits on the number of subnets used for securing the resource. (see [virtual network FAQ](virtual-networks-faq.md#are-there-any-limits-on-how-many-service-endpoints-i-can-set-up-from-my-virtual-network)) | Yes (see [Private Link limits](../azure-resource-manager/management/azure-subscription-service-limits.md#private-link-limits)) |
-**Azure service resources secured to virtual networks aren't reachable from on-premises networks. If you want to allow traffic from on-premises, allow public (typically, NAT) IP addresses from your on-premises or ExpressRoute. These IP addresses can be added through the IP firewall configuration for the Azure service resources. For more information, see the [virtual network FAQ](virtual-networks-faq.md#can-an-on-premises-devices-ip-address-that-is-connected-through-azure-virtual-network-gateway-vpn-or-expressroute-gateway-access-azure-paas-service-over-vnet-service-endpoints).
+**Azure service resources secured to virtual networks aren't reachable from on-premises networks. If you want to allow traffic from on-premises, allow public (typically, NAT) IP addresses from your on-premises or ExpressRoute. These IP addresses can be added through the IP firewall configuration for the Azure service resources. For more information, see the [virtual network FAQ](virtual-networks-faq.md#can-an-on-premises-devices-ip-address-thats-connected-through-an-azure-virtual-network-gateway-vpn-or-expressroute-gateway-access-azure-paas-services-over-virtual-network-service-endpoints).
## Next steps
virtual-wan About Nva Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/about-nva-hub.md
NVA in the virtual hub is available in the following regions:
|---|---|
| North America| Canada Central, Canada East, Central US, East US, East US 2, South Central US, North Central US, West Central US, West US, West US 2 |
| South America | Brazil South, Brazil Southeast |
-| Europe | France Central, France South, Germany North, Germany West Central, North Europe, Norway East, Norway West, Switzerland North, Switzerland West, UK South, UK West, West Europe|
-| Middle East | UAE North |
+| Europe | France Central, France South, Germany North, Germany West Central, North Europe, Norway East, Norway West, Switzerland North, Switzerland West, UK South, UK West, West Europe, Sweden Central|
+| Middle East | UAE North, Qatar Central |
| Asia | East Asia, Japan East, Japan West, Korea Central, Korea South, Southeast Asia |
| Australia | Australia South East, Australia East, Australia Central, Australia Central 2|
| Africa | South Africa North |
virtual-wan About Vpn Profile Download https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/about-vpn-profile-download.md
Previously updated : 02/08/2021 Last updated : 08/24/2023
virtual-wan Azure Vpn Client Optional Configurations Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/azure-vpn-client-optional-configurations-windows.md
description: Learn how to configure the Azure VPN Client optional configuration
Previously updated : 07/20/2022 Last updated : 08/24/2023
virtual-wan Certificates Point To Site https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/certificates-point-to-site.md
description: Learn how to create a self-signed root certificate, export a public
Previously updated : 07/06/2022 Last updated : 08/23/2023
virtual-wan Cross Tenant Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/cross-tenant-vnet.md
Previously updated : 09/28/2020 Last updated : 08/24/2023 # Connect cross-tenant virtual networks to a Virtual WAN hub
To use the steps in this article, you must have the following configuration alre
* A virtual WAN and virtual hub in your parent subscription * A virtual network configured in a subscription in a different (remote) tenant
-Make sure that the virtual network address space in the remote tenant does not overlap with any other address space within any other virtual networks already connected to the parent virtual hub.
+Make sure that the virtual network address space in the remote tenant doesn't overlap with any other address space within any other virtual networks already connected to the parent virtual hub.
### Working with Azure PowerShell
In the following steps, you'll use commands to add a static route to the virtual
```
- This update command will remove the previous manual configuration route in your routing table.
+ This update command removes the previous manual configuration route in your routing table.
1. Verify that the static route is established to a next-hop IP address.
virtual-wan How To Forced Tunnel https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/how-to-forced-tunnel.md
description: Learn to configure forced tunneling for P2S VPN in Virtual WAN.
Previously updated : 07/12/2022 Last updated : 08/24/2023
An example EAP XML file is the following.
### IKEv2 with RADIUS server authentication with user certificates (EAP-TLS)
-To use certificate-based RADIUS authentication (EAP-TLS) to authenticate remote users, use the sample PowerShell script below. Note that in order to import the contents of the VpnSettings and EAP XML files into PowerShell, you will have to navigate to the appropriate directory before running the **Get-Content** PowerShell command.
+To use certificate-based RADIUS authentication (EAP-TLS) to authenticate remote users, use the sample PowerShell script below. Note that in order to import the contents of the VpnSettings and EAP XML files into PowerShell, you'll have to navigate to the appropriate directory before running the **Get-Content** PowerShell command.
```azurepowershell-interactive
# specify the name of the VPN Connection to be installed on the client
virtual-wan Howto Always On Device Tunnel https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/howto-always-on-device-tunnel.md
Previously updated : 05/26/2021 Last updated : 08/24/2023
virtual-wan Hub Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/hub-settings.md
description: This article answers common questions about virtual hub settings an
Previously updated : 07/12/2022 Last updated : 08/24/2023
virtual-wan Install Client Certificates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/install-client-certificates.md
description: Learn how to install client certificates for User VPN P2S certifica
Previously updated : 07/06/2022 Last updated : 08/24/2023
virtual-wan Nat Rules Vpn Gateway Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/nat-rules-vpn-gateway-powershell.md
Previously updated : 04/11/2022 Last updated : 08/24/2023
You can configure your Virtual WAN VPN gateway with static one-to-one NAT rules. A NAT rule provides a mechanism to set up one-to-one translation of IP addresses. NAT can be used to interconnect two IP networks that have incompatible or overlapping IP addresses. A typical scenario is branches with overlapping IPs that want to access Azure VNet resources.
-This configuration uses a flow table to route traffic from an external (host) IP Address to an internal IP address associated with an endpoint inside a virtual network (virtual machine, computer, container, etc.). In order to use NAT, VPN devices need to use any-to-any (wildcard) traffic selectors. Policy Based (narrow) traffic selectors are not supported in conjunction with NAT configuration.
+This configuration uses a flow table to route traffic from an external (host) IP Address to an internal IP address associated with an endpoint inside a virtual network (virtual machine, computer, container, etc.). In order to use NAT, VPN devices need to use any-to-any (wildcard) traffic selectors. Policy Based (narrow) traffic selectors aren't supported in conjunction with NAT configuration.
## Prerequisites
virtual-wan Openvpn Azure Ad Mfa https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/openvpn-azure-ad-mfa.md
Previously updated : 09/22/2020 Last updated : 08/23/2023
virtual-wan Point To Site Ipsec https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/point-to-site-ipsec.md
Previously updated : 02/24/2021 Last updated : 08/24/2023 #Customer intent: As a Virtual WAN software-defined connectivity provider, I want to know the IPsec policies for point-to-site VPN
virtual-wan Quickstart Any To Any Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/quickstart-any-to-any-template.md
description: Learn how to create an any-to-any configuration using an Azure Reso
Previously updated : 06/14/2022 Last updated : 08/24/2023
virtual-wan Routing Deep Dive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/routing-deep-dive.md
Previously updated : 05/08/2022 Last updated : 08/24/2023 # Virtual WAN routing deep dive
-[Azure Virtual WAN][virtual-wan-overview] is a networking solution that allows creating sophisticated networking topologies easily: it encompasses routing across Azure regions between Azure VNets and on-premises locations via Point-to-Site VPN, Site-to-Site VPN, [ExpressRoute][er] and [integrated SDWAN appliances][virtual-wan-nva], including the option to [secure the traffic][virtual-wan-secured-hub]. In most scenarios, it is not required any deep knowledge of how Virtual WAN internal routing works, but in certain situations it can be useful to understand Virtual WAN routing concepts.
+[Azure Virtual WAN][virtual-wan-overview] is a networking solution that allows you to easily create sophisticated networking topologies: it encompasses routing across Azure regions between Azure VNets and on-premises locations via Point-to-Site VPN, Site-to-Site VPN, [ExpressRoute][er] and [integrated SDWAN appliances][virtual-wan-nva], including the option to [secure the traffic][virtual-wan-secured-hub]. In most scenarios, you don't need deep knowledge of how Virtual WAN internal routing works, but in certain situations it can be useful to understand Virtual WAN routing concepts.
-This document explores sample Virtual WAN scenarios that explain some of the behaviors that organizations might encounter when interconnecting their VNets and branches in complex networks. The scenarios shown in this article are by no means design recommendations, they are just sample topologies designed to demonstrate certain Virtual WAN functionalities.
+This document explores sample Virtual WAN scenarios that explain some of the behaviors that organizations might encounter when interconnecting their VNets and branches in complex networks. The scenarios shown in this article are by no means design recommendations; they're just sample topologies designed to demonstrate certain Virtual WAN functionalities.
## Scenario 1: topology with default routing preference
The first scenario in this article analyzes a topology with two Virtual WAN hubs
In each hub, the VPN and SDWAN appliances serve a dual purpose: on one side they advertise their own individual prefixes (`10.4.1.0/24` over VPN in hub 1 and `10.5.3.0/24` over SDWAN in hub 2), and on the other they advertise the same prefixes as the ExpressRoute circuits in the same region (`10.4.2.0/24` in hub 1 and `10.5.2.0/24` in hub 2). This difference will be used to demonstrate how the [Virtual WAN hub routing preference][virtual-wan-hrp] works.
-All VNet and branch connections are associated and propagating to the default route table. Although the hubs are secured (there is an Azure Firewall deployed in every hub), they are not configured to secure private or Internet traffic. Doing so would result in all connections propagating to the `None` route table, which would remove all non-static routes from the `Default` route table and defeat the purpose of this article since the effective route blade in the portal would be almost empty (except for the static routes to send traffic to the Azure Firewall).
+All VNet and branch connections are associated and propagating to the default route table. Although the hubs are secured (there is an Azure Firewall deployed in every hub), they aren't configured to secure private or Internet traffic. Doing so would result in all connections propagating to the `None` route table, which would remove all non-static routes from the `Default` route table and defeat the purpose of this article since the effective route blade in the portal would be almost empty (except for the static routes to send traffic to the Azure Firewall).
:::image type="content" source="./media/routing-deep-dive/virtual-wan-routing-deep-dive-scenario-1.png" alt-text="Diagram that shows a Virtual WAN design with two ExpressRoute circuits and two V P N branches." :::
The NVA in VNet 12 injects the route 10.1.20.0/22 over BGP, as the Next Hop Type
In hub 2 there's an integrated SDWAN Network Virtual Appliance. For more details on supported NVAs for this integration, see [About NVAs in a Virtual WAN hub][virtual-wan-nva]. Note that the route to the SDWAN branch `10.5.3.0/24` has a next hop of `VPN_S2S_Gateway`. Today, this next hop type can indicate routes coming either from an Azure Virtual Network Gateway or from NVAs integrated in the hub.
-In hub 2, the route for `10.2.20.0/22` to the indirect spokes VNet 221 (10.2.21.0/24) and VNet 222 (10.2.22.0/24) is installed as a static route, as indicated by the origin `defaultRouteTable`. If you check in the effective routes for hub 1, that route is not there. The reason is because static routes are not propagated via BGP, but need to be configured in every hub. Hence, a static route is required in hub 1 to provide connectivity between the VNets and branches in hub 1 to the indirect spokes in hub 2 (VNets 221 and 222):
+In hub 2, the route for `10.2.20.0/22` to the indirect spokes VNet 221 (10.2.21.0/24) and VNet 222 (10.2.22.0/24) is installed as a static route, as indicated by the origin `defaultRouteTable`. If you check the effective routes for hub 1, that route isn't there. The reason is that static routes aren't propagated via BGP; they need to be configured in every hub. Hence, a static route is required in hub 1 to provide connectivity between the VNets and branches in hub 1 and the indirect spokes in hub 2 (VNets 221 and 222):
:::image type="content" source="./media/routing-deep-dive/virtual-wan-routing-deep-dive-scenario-1-add-route.png" alt-text="Screenshot that shows how to add a static route to a Virtual WAN hub." lightbox="./media/routing-deep-dive/virtual-wan-routing-deep-dive-scenario-1-add-route-expanded.png":::
After adding the static route, hub 1 will contain the `10.2.20.0/22` route as we
## Scenario 2: Global Reach and hub routing preference
-Even if hub 1 knows the ExpressRoute prefix from circuit 2 (`10.5.2.0/24`) and hub 2 knows the ExpressRoute prefix from circuit 1 (`10.4.2.0/24`), ExpressRoute routes from remote regions are not advertised back to on-premises ExpressRoute links. Consequently, [ExpressRoute Global Reach][er-gr] is required for the ExpressRoute locations to communicate to each other:
+Even if hub 1 knows the ExpressRoute prefix from circuit 2 (`10.5.2.0/24`) and hub 2 knows the ExpressRoute prefix from circuit 1 (`10.4.2.0/24`), ExpressRoute routes from remote regions aren't advertised back to on-premises ExpressRoute links. Consequently, [ExpressRoute Global Reach][er-gr] is required for the ExpressRoute locations to communicate to each other:
:::image type="content" source="./media/routing-deep-dive/virtual-wan-routing-deep-dive-scenario-2.png" alt-text="Diagram showing a Virtual WAN design with two ExpressRoute circuits with Global Reach and two V P N branches.":::
Hub 2 will show a similar table for the effective routes, where the VNets and br
## Scenario 3: Cross-connecting the ExpressRoute circuits to both hubs
-In order to add direct links between the Azure regions and the on-premises locations connected via ExpressRoute, it is often desirable connecting a single ExpressRoute circuit to multiple Virtual WAN hubs in a topology some times described as "bow tie", as the following topology shows:
+In order to add direct links between the Azure regions and the on-premises locations connected via ExpressRoute, it's often desirable to connect a single ExpressRoute circuit to multiple Virtual WAN hubs, in a topology sometimes described as a "bow tie", as the following topology shows:
:::image type="content" source="./media/routing-deep-dive/virtual-wan-routing-deep-dive-scenario-3.png" alt-text="Diagram that shows a Virtual WAN design with two ExpressRoute circuits in bow tie with Global Reach and two V P N branches." :::
Virtual WAN shows that both circuits are connected to both hubs:
:::image type="content" source="./media/routing-deep-dive/virtual-wan-routing-deep-dive-scenario-3-circuits.png" alt-text="Screenshot of Virtual WAN showing both ExpressRoute circuits connected to both virtual hubs." lightbox="./media/routing-deep-dive/virtual-wan-routing-deep-dive-scenario-3-circuits-expanded.png":::
-Going back to the default hub routing preference of ExpressRoute, the routes to remote branches and VNets in hub 1 will show again ExpressRoute as next hop. Although this time the reason is not Global Reach, but the fact that the ExpressRoute circuits bounce back the route advertisements they get from one hub to the other. For example, the effective routes of hub 1 with hub routing preference of ExpressRoute are as follows:
+Going back to the default hub routing preference of ExpressRoute, the routes to remote branches and VNets in hub 1 will again show ExpressRoute as the next hop. This time, however, the reason isn't Global Reach but the fact that the ExpressRoute circuits bounce the route advertisements they get from one hub back to the other. For example, the effective routes of hub 1 with hub routing preference of ExpressRoute are as follows:
:::image type="content" source="./media/routing-deep-dive/virtual-wan-routing-deep-dive-scenario-3-er-hub-1.png" alt-text="Screenshot of effective routes in Virtual hub 1 in bow tie design with Global Reach and routing preference ExpressRoute." lightbox="./media/routing-deep-dive/virtual-wan-routing-deep-dive-scenario-3-er-hub-1-expanded.png":::
virtual-wan Scenario 365 Expressroute Private https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/scenario-365-expressroute-private.md
Previously updated : 04/08/2022 Last updated : 08/24/2023
virtual-wan Scenario Any To Any https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/scenario-any-to-any.md
Previously updated : 04/27/2021 Last updated : 08/24/2023
virtual-wan Scenario Isolate Vnets Custom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/scenario-isolate-vnets-custom.md
Previously updated : 04/27/2021 Last updated : 08/24/2023
virtual-wan Scenario Isolate Vnets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/scenario-isolate-vnets.md
Previously updated : 05/26/2021 Last updated : 08/24/2023
When working with Virtual WAN virtual hub routing, there are quite a few availab
## <a name="design"></a>Design
-In this scenario, the workload within a certain VNet remains isolated and is not able to communicate with other VNets. However, the VNets are required to reach all branches (VPN, ER, and User VPN). In order to figure out how many route tables will be needed, you can build a connectivity matrix. For this scenario it will look like the following table, where each cell represents whether a source (row) can communicate to a destination (column):
+In this scenario, the workload within a certain VNet remains isolated and isn't able to communicate with other VNets. However, the VNets are required to reach all branches (VPN, ER, and User VPN). In order to figure out how many route tables will be needed, you can build a connectivity matrix. For this scenario it will look like the following table, where each cell represents whether a source (row) can communicate to a destination (column):
| From | To | *VNets* | *Branches* |
| -- | -- | -- | -- |
In this scenario, the workload within a certain VNet remains isolated and is not
Each of the cells in the previous table describes whether a Virtual WAN connection (the "From" side of the flow, the row headers) communicates with a destination prefix (the "To" side of the flow, the column headers in italics). In this scenario there are no firewalls or Network Virtual Appliances, so communications flows directly over Virtual WAN (hence the word "Direct" in the table).
-This connectivity matrix gives us two different row patterns, which translate to two route tables. Virtual WAN already has a Default route table, so we will need another route table. For this example, we will name the route table **RT_VNET**.
+This connectivity matrix gives us two different row patterns, which translate to two route tables. Virtual WAN already has a Default route table, so we'll need another route table. For this example, we'll name the route table **RT_VNET**.
-VNets will be associated to this **RT_VNET** route table. Because they need connectivity to branches, branches will need to propagate to **RT_VNET** (otherwise the VNets would not learn the branch prefixes). Since the branches are always associated to the Default route table, VNets will need to propagate to the Default route table. As a result, this is the final design:
+VNets will be associated to this **RT_VNET** route table. Because they need connectivity to branches, branches need to propagate to **RT_VNET** (otherwise the VNets wouldn't learn the branch prefixes). Since the branches are always associated to the Default route table, VNets need to propagate to the Default route table. As a result, this is the final design:
* Virtual networks:
   * Associated route table: **RT_VNET**
In order to configure this scenario, take the following steps into consideration
2. When you create the **RT_VNet** route table, configure the following settings:
   * **Association**: Select the VNets you want to isolate.
- * **Propagation**: Select the option for branches, implying branch(VPN/ER/P2S) connections will propagate routes to this route table.
+ * **Propagation**: Select the option for branches, meaning that branch (VPN/ER/P2S) connections propagate routes to this route table.
:::image type="content" source="./media/routing-scenarios/isolated/isolated-vnets.png" alt-text="Isolated VNets":::
virtual-wan Scenario Shared Services Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/scenario-shared-services-vnet.md
Previously updated : 04/27/2021 Last updated : 08/24/2023
We can use a connectivity matrix to summarize the requirements of this scenario:
Each of the cells in the previous table describes whether a Virtual WAN connection (the "From" side of the flow, the row headers) communicates with a destination (the "To" side of the flow, the column headers in italics). In this scenario there are no firewalls or Network Virtual Appliances, so communication flows directly over Virtual WAN (hence the word "Direct" in the table).
-Similarly to the [Isolated VNet scenario](scenario-isolate-vnets.md), this connectivity matrix gives us two different row patterns, which translate to two route tables (the shared services VNets and the branches have the same connectivity requirements). Virtual WAN already has a Default route table, so we will need another custom route table, which we will call **RT_SHARED** in this example.
+Similar to the [Isolated VNet scenario](scenario-isolate-vnets.md), this connectivity matrix gives us two different row patterns, which translate to two route tables (the shared services VNets and the branches have the same connectivity requirements). Virtual WAN already has a Default route table, so we'll need another custom route table, which we'll call **RT_SHARED** in this example.
-VNets will be associated to the **RT_SHARED** route table. Because they need connectivity to branches and to the shared service VNets, the shared service VNet and branches will need to propagate to **RT_SHARED** (otherwise the VNets would not learn the branch and shared VNet prefixes). Because the branches are always associated to the Default route table, and the connectivity requirements are the same for shared services VNets, we will associate the shared service VNets to the Default route table too.
+VNets will be associated to the **RT_SHARED** route table. Because they need connectivity to branches and to the shared service VNets, the shared service VNet and branches will need to propagate to **RT_SHARED** (otherwise the VNets wouldn't learn the branch and shared VNet prefixes). Because the branches are always associated to the Default route table, and the connectivity requirements are the same for shared services VNets, we'll associate the shared service VNets to the Default route table too.
As a result, this is the final design:
To configure the scenario, consider the following steps:
2. Create a custom route table. In the example, we refer to the route table as **RT_SHARED**. For steps to create a route table, see [How to configure virtual hub routing](how-to-virtual-hub-routing.md). Use the following values as a guideline:
   * **Association**
- * For **VNets *except* the shared services VNet**, select the VNets to isolate. This will imply that all these VNets (except the shared services VNet) will be able to reach destination based on the routes of RT_SHARED route table.
+ * For **VNets *except* the shared services VNet**, select the VNets to isolate. This implies that all these VNets (except the shared services VNet) will be able to reach destinations based on the routes in the RT_SHARED route table.
   * **Propagation**
     * For **Branches**, propagate routes to this route table, in addition to any other route tables you may have already selected. Because of this step, the RT_SHARED route table will learn routes from all branch connections (VPN/ER/User VPN).
     * For **VNets**, select the **shared services VNet**. Because of this step, the RT_SHARED route table will learn routes from the shared services VNet connection.
-This will result in the routing configuration shown in the following figure:
+This results in the routing configuration shown in the following figure:
:::image type="content" source="./media/routing-scenarios/shared-service-vnet/shared-services.png" alt-text="Diagram for shared services VNet." lightbox="./media/routing-scenarios/shared-service-vnet/shared-services.png":::
virtual-wan Sd Wan Connectivity Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/sd-wan-connectivity-architecture.md
Previously updated : 02/23/2022 Last updated : 08/24/2023
virtual-wan Virtual Wan Expressroute About https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/virtual-wan-expressroute-about.md
This article provides details on ExpressRoute connections in Azure Virtual WAN.
A virtual hub can contain gateways for site-to-site, ExpressRoute, or point-to-site functionality. Users with private connectivity in Virtual WAN can connect their ExpressRoute circuits to an ExpressRoute gateway in a Virtual WAN hub. For a tutorial on connecting an ExpressRoute circuit to an Azure Virtual WAN hub, see [How to Connect an ExpressRoute Circuit to Virtual WAN](virtual-wan-expressroute-portal.md).

## ExpressRoute circuit SKUs supported in Virtual WAN
-The following ExpressRoute circuit SKUs can be connected to the hub gateway: Local, Standard, and Premium. To learn more about different SKUs, visit [ExpressRoute Circuit SKUs](../expressroute/expressroute-faqs.md#what-is-the-connectivity-scope-for-different-expressroute-circuit-skus).
+The following ExpressRoute circuit SKUs can be connected to the hub gateway: Local, Standard, and Premium. To learn more about different SKUs, visit [ExpressRoute Circuit SKUs](../expressroute/expressroute-faqs.md#what-is-the-connectivity-scope-for-different-expressroute-circuit-skus). ExpressRoute Local circuits can only be connected to ExpressRoute gateways in the same region, but they can still access resources in spoke virtual networks located in other regions.
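As an illustration of the connection itself, here's a minimal Azure PowerShell sketch of attaching a circuit's private peering to the hub's ExpressRoute gateway. The resource group, gateway, and circuit names are assumptions.

```azurepowershell-interactive
# get the circuit and its private peering
$circuit = Get-AzExpressRouteCircuit -ResourceGroupName "erRG" -Name "myCircuit"
$peering = Get-AzExpressRouteCircuitPeeringConfig -ExpressRouteCircuit $circuit -Name "AzurePrivatePeering"

# connect the circuit to the ExpressRoute gateway in the Virtual WAN hub
New-AzExpressRouteConnection -ResourceGroupName "vwanRG" `
    -ExpressRouteGatewayName "hubErGateway" `
    -Name "circuitConnection" `
    -ExpressRouteCircuitPeeringId $peering.Id `
    -RoutingWeight 0
```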
## ExpressRoute performance
Dynamic routing (BGP) is supported. For more information, please see [Dynamic Ro
## ExpressRoute connection concepts

| Concept| Description| Notes|
| --| --| --|
-| Propagate Default Route|If the Virtual WAN hub is configured with a 0.0.0.0/0 default route, this setting controls whether the 0.0.0.0/0 route is advertised to connecting users. The default route doesn't originate in the Virtual WAN hub. The route can be a static route in the default route table or 0.0.0.0/0 advertised from on-premises. | This field can be set to enabled or disabled.|
+| Propagate Default Route|If the Virtual WAN hub is configured with a 0.0.0.0/0 default route, this setting controls whether the 0.0.0.0/0 route is advertised to your ExpressRoute-connected site. The default route doesn't originate in the Virtual WAN hub. The route can be a static route in the default route table or 0.0.0.0/0 advertised from on-premises. | This field can be set to enabled or disabled.|
| Routing Weight|If the Virtual WAN hub learns the same prefix from multiple connected ExpressRoute circuits, then the ExpressRoute connection with the higher weight will be preferred for traffic destined for this prefix. | This field can be set to a number between 0 and 32000.|

## ExpressRoute circuit concepts
virtual-wan Virtual Wan Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/virtual-wan-faq.md
Virtual WAN comes in two flavors: Basic and Standard. In Basic Virtual WAN, hubs
### How are Availability Zones and resiliency handled in Virtual WAN?
-Virtual WAN is a collection of hubs and services made available inside the hub. The user can have as many Virtual WAN per their need. In a Virtual WAN hub, there are multiple services like VPN, ExpressRoute etc. Each of these services is automatically deployed across Availability Zones (except Azure Firewall), if the region supports Availability Zones. If a region becomes an Availability Zone after the initial deployment in the hub, the user can recreate the gateways, which will trigger an Availability Zone deployment. All gateways are provisioned in a hub as active-active, implying there is resiliency built in within a hub. Users can connect to multiple hubs if they want resiliency across regions.
+Virtual WAN is a collection of hubs and services made available inside the hub. Users can create as many virtual WANs as they need. In a Virtual WAN hub, there are multiple services, such as VPN and ExpressRoute. Each of these services is automatically deployed across Availability Zones (except Azure Firewall), if the region supports Availability Zones. If a region gains Availability Zone support after the initial deployment in the hub, the user can recreate the gateways, which will trigger an Availability Zone deployment. All gateways are provisioned in a hub as active-active, meaning there's resiliency built in within a hub. Users can connect to multiple hubs if they want resiliency across regions.
-Currently, Azure Firewall can be deployed to support Availability Zones using Azure Firewall Manager Portal, [PowerShell](/powershell/module/az.network/new-azfirewall#example-6--create-a-firewall-with-no-rules-and-with-availability-zones) or CLI. There is currently no way to configure an existing Firewall to be deployed across availability zones. You'll need to delete and redeploy your Azure Firewall.
+Currently, Azure Firewall can be deployed to support Availability Zones using Azure Firewall Manager Portal, [PowerShell](/powershell/module/az.network/new-azfirewall#example-6--create-a-firewall-with-no-rules-and-with-availability-zones) or CLI. There's currently no way to configure an existing Firewall to be deployed across availability zones. You'll need to delete and redeploy your Azure Firewall.
While the concept of Virtual WAN is global, the actual Virtual WAN resource is Resource Manager-based and deployed regionally. If the virtual WAN region itself were to have an issue, all hubs in that virtual WAN would continue to function as is, but the user wouldn't be able to create new hubs until the virtual WAN region is available.
A Network Virtual Appliance (NVA) can be deployed inside a virtual hub. For step
No. The spoke VNet can't have a virtual network gateway if it's connected to the virtual hub.
+### Can a spoke VNet have an Azure Route Server?
+
+No. The spoke VNet can't have a Route Server if it's connected to the virtual WAN hub.
+
### Is there support for BGP in VPN connectivity?

Yes, BGP is supported. When you create a VPN site, you can provide the BGP parameters in it. This means that any connections created in Azure for that site will be enabled for BGP.
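As an illustration only (the site name, ASN, and addresses below are made up), the BGP parameters are supplied per site link when the VPN site is created. A hedged Azure PowerShell sketch:

```azurepowershell-interactive
# define a site link carrying the BGP settings of the on-premises device
$link = New-AzVpnSiteLink -Name "Branch1Link" -IpAddress "203.0.113.10" `
    -LinkProviderName "ISP1" -LinkSpeedInMbps 50 `
    -BgpAsn 65010 -BgpPeeringAddress "192.168.1.1"

# create the VPN site in the virtual WAN with that link
$wan = Get-AzVirtualWan -ResourceGroupName "vwanRG" -Name "myVirtualWAN"
New-AzVpnSite -ResourceGroupName "vwanRG" -Name "Branch1" -Location "westus" `
    -VirtualWan $wan -AddressSpace @("192.168.1.0/24") `
    -DeviceVendor "SomeVendor" -VpnSiteLink @($link)
```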
A simple configuration of one Virtual WAN with one hub and one vpnsite can be cr
### Can spoke VNets connected to a virtual hub communicate with each other (V2V Transit)?
-Yes. Standard Virtual WAN supports VNet-to-VNet transitive connectivity via the Virtual WAN hub that the VNets are connected to. In Virtual WAN terminology, we refer to these paths as "local Virtual WAN VNet transit" for VNets connected to a Virtual Wan hub within a single region, and "global Virtual WAN VNet transit" for VNets connected through multiple Virtual WAN hubs across two or more regions.
+Yes. Standard Virtual WAN supports VNet-to-VNet transitive connectivity via the Virtual WAN hub that the VNets are connected to. In Virtual WAN terminology, we refer to these paths as "local Virtual WAN VNet transit" for VNets connected to a Virtual WAN hub within a single region, and "global Virtual WAN VNet transit" for VNets connected through multiple Virtual WAN hubs across two or more regions.
In some scenarios, spoke VNets can also be directly peered with each other using [virtual network peering](../virtual-network/virtual-network-peering-overview.md) in addition to local or global Virtual WAN VNet transit. In this case, VNet Peering takes precedence over the transitive connection via the Virtual WAN hub.
If a virtual hub learns the same route from multiple remote hubs, the order in w
* **AS Path**
   1. Prefer routes with the shortest BGP AS-Path length irrespective of the source of the route advertisements.
- Note: In vWANs with multiple remote virtual hubs, If there is a tie between remote routes and remote site-to-site VPN routes. Remote site-to-site VPN will be preferred.
+ Note: In vWANs with multiple remote virtual hubs, if there's a tie between remote routes and remote site-to-site VPN routes, the remote site-to-site VPN routes are preferred.
   2. Prefer routes from local virtual hub connections over routes learned from remote virtual hubs.
   3. If there are routes from both ExpressRoute and Site-to-site VPN connections:
Transit between ER-to-ER is always via Global reach. Virtual hub gateways are de
### Is there a concept of weight in Azure Virtual WAN ExpressRoute circuits or VPN connections
-When multiple ExpressRoute circuits are connected to a virtual hub, routing weight on the connection provides a mechanism for the ExpressRoute in the virtual hub to prefer one circuit over the other. There is no mechanism to set a weight on a VPN connection. Azure always prefers an ExpressRoute connection over a VPN connection within a single hub.
+When multiple ExpressRoute circuits are connected to a virtual hub, routing weight on the connection provides a mechanism for the ExpressRoute in the virtual hub to prefer one circuit over the other. There's no mechanism to set a weight on a VPN connection. Azure always prefers an ExpressRoute connection over a VPN connection within a single hub.
### Does Virtual WAN prefer ExpressRoute over VPN for traffic egressing Azure
The current behavior is to prefer the ExpressRoute circuit path over hub-to-hub
### When there's an ExpressRoute circuit connected as a bow-tie to a Virtual WAN hub and a non Virtual WAN VNet, what is the path for the non Virtual WAN VNet to reach the Virtual WAN hub?
-The current behavior is to prefer the ExpressRoute circuit path for non Virtual WAN VNet to Virtual WAN connectivity. It is recommended that the customer [create a Virtual Network connection](howto-connect-vnet-hub.md) to directly connect the non Virtual WAN VNet to the Virtual WAN hub. Afterwards, VNet to VNet traffic will traverse through the Virtual WAN router instead of the ExpressRoute path (which traverses through the Microsoft Enterprise Edge routers/MSEE).
+The current behavior is to prefer the ExpressRoute circuit path for non Virtual WAN VNet to Virtual WAN connectivity. It's recommended that the customer [create a Virtual Network connection](howto-connect-vnet-hub.md) to directly connect the non Virtual WAN VNet to the Virtual WAN hub. Afterwards, VNet to VNet traffic will traverse through the Virtual WAN router instead of the ExpressRoute path (which traverses through the Microsoft Enterprise Edge routers/MSEE).
### Can hubs be created in different resource groups in Virtual WAN?
Yes. For a list of Managed Service Provider (MSP) solutions enabled via Azure Ma
Both Azure Virtual WAN hub and Azure Route Server provide Border Gateway Protocol (BGP) peering capabilities that can be utilized by NVAs (Network Virtual Appliances) to advertise IP addresses from the NVA to the user's Azure virtual networks. The deployment options differ: Azure Route Server is typically deployed in a self-managed customer hub VNet, whereas Azure Virtual WAN provides a zero-touch, fully meshed hub service to which customers connect their various spoke endpoints (Azure VNets, on-premises branches with site-to-site VPN or SDWAN, remote users with point-to-site/Remote User VPN, and private connections with ExpressRoute). In addition to BGP peering for NVAs deployed in spoke VNets, Virtual WAN provides capabilities such as transit connectivity for VNet-to-VNet, transit connectivity between VPN and ExpressRoute, custom/advanced routing, custom route association and propagation, routing intent/policies for no-hassle inter-region security, and Secure Hub/Azure Firewall. For more details about Virtual WAN BGP peering, see [How to peer BGP with a virtual hub](scenario-bgp-peering-hub.md).
-### If I'm using a third-party security provider (Zscaler, iBoss or Checkpoint) to secure my internet traffic, why don't I see the VPN site associated to the third-party security provider in the Azure Portal?
+### If I'm using a third-party security provider (Zscaler, iBoss or Checkpoint) to secure my internet traffic, why don't I see the VPN site associated to the third-party security provider in the Azure portal?
-When you choose to deploy a security partner provider to protect Internet access for your users, the third-party security provider creates a VPN site on your behalf. Because the third-party security provider is created automatically by the provider and isn't a user-created VPN site, this VPN site won't show up in the Azure Portal.
+When you choose to deploy a security partner provider to protect Internet access for your users, the third-party security provider creates a VPN site on your behalf. Because the third-party security provider is created automatically by the provider and isn't a user-created VPN site, this VPN site won't show up in the Azure portal.
For more information about the available third-party security providers and how to set this up, see [Deploy a security partner provider](../firewall-manager/deploy-trusted-security-partner.md).
Yes, BGP communities generated on-premises will be preserved in Virtual WAN.
### <a name="why-am-i-seeing-a-message-and-button-called-update-router-to-latest-software-version-in-portal."></a>Why am I seeing a message and button called "Update router to latest software version" in portal?
-Azure-wide Cloud Services-based infrastructure is deprecating. As a result, the Virtual WAN team has been working on upgrading virtual routers from their current Cloud Services infrastructure to Virtual Machine Scale Sets based deployments. **All newly created Virtual Hubs will automatically be deployed on the latest Virtual Machine Scale Sets based infrastructure.** If you navigate to your Virtual WAN hub resource and see this message and button, then you can upgrade your router to the latest version by clicking on the button. If you would like to take advantage of new Virtual WAN features, such as [BGP peering with the hub](create-bgp-peering-hub-portal.md), you'll have to update your virtual hub router via Azure Portal. If the button is not visible, please open a support case.
+Azure-wide Cloud Services-based infrastructure is being deprecated. As a result, the Virtual WAN team has been working on upgrading virtual routers from their current Cloud Services infrastructure to Virtual Machine Scale Sets based deployments. **All newly created Virtual Hubs will automatically be deployed on the latest Virtual Machine Scale Sets based infrastructure.** If you navigate to your Virtual WAN hub resource and see this message and button, then you can upgrade your router to the latest version by clicking the button. If you'd like to take advantage of new Virtual WAN features, such as [BGP peering with the hub](create-bgp-peering-hub-portal.md), you'll have to update your virtual hub router via the Azure portal. If the button isn't visible, please open a support case.
-YouΓÇÖll only be able to update your virtual hub router if all the resources (gateways/route tables/VNet connections) in your hub are in a succeeded state. Please make sure all your spoke virtual networks are in active/enabled subscriptions and that your spoke virtual networks are not deleted. Additionally, as this operation requires deployment of new virtual machine scale sets based virtual hub routers, youΓÇÖll face an expected downtime of 1-2 minutes for VNet-to-VNet traffic through the same hub and 5-7 minutes for all other traffic flows through the hub. Within a single Virtual WAN resource, hubs should be updated one at a time instead of updating multiple at the same time. When the Router Version says ΓÇ£LatestΓÇ¥, then the hub is done updating. There will be no routing behavior changes after this update.
+You'll only be able to update your virtual hub router if all the resources (gateways/route tables/VNet connections) in your hub are in a succeeded state. Please make sure all your spoke virtual networks are in active/enabled subscriptions and that your spoke virtual networks aren't deleted. Additionally, as this operation requires deployment of new virtual machine scale sets based virtual hub routers, you'll face an expected downtime of 1-2 minutes for VNet-to-VNet traffic through the same hub and 5-7 minutes for all other traffic flows through the hub. Within a single Virtual WAN resource, hubs should be updated one at a time instead of updating multiple at the same time. When the Router Version says "Latest", then the hub is done updating. There will be no routing behavior changes after this update.
There are several limitations with the virtual hub router upgrade:
-* If you have already configured BGP peering between your Virtual WAN hub and an NVA in a spoke VNet, then you will have to [delete and then recreate the BGP peer](create-bgp-peering-hub-portal.md). Since the virtual hub router's IP addresses change after the upgrade, you will also have to reconfigure your NVA to peer with the virtual hub router's new IP addresses. These IP addresses are represented as the "virtualRouterIps" field in the Virtual Hub's Resource JSON.
-
-* If your Virtual WAN hub is connected to a combination of spoke virtual networks in the same region as the hub and a separate region than the hub, then you may experience a lack of connectivity to these respective spoke virtual networks. To resolve this and restore connectivity to these virtual networks, you can modify any of the virtual network connection properties (For example, you can modify the connection to propagate to a dummy label). We are actively working on removing this requirement.
+* If you have already configured BGP peering between your Virtual WAN hub and an NVA in a spoke VNet, then you'll have to [delete and then recreate the BGP peer](create-bgp-peering-hub-portal.md). Since the virtual hub router's IP addresses change after the upgrade, you'll also have to reconfigure your NVA to peer with the virtual hub router's new IP addresses. These IP addresses are represented as the "virtualRouterIps" field in the Virtual Hub's Resource JSON.
-* Your Virtual WAN hub router can not currently be upgraded if you have a network virtual appliance in the virtual hub. We are actively working on removing this limitation.
+* Your Virtual WAN hub router can't currently be upgraded if you have a network virtual appliance in the virtual hub. We're actively working on removing this limitation.
* If your Virtual WAN hub is connected to more than 100 spoke virtual networks, then the upgrade may fail.
-If the update fails for any reason, your hub will be auto recovered to the old version to ensure there is still a working setup.
+If the update fails for any reason, your hub will be automatically recovered to the old version to ensure there's still a working setup.
Additional things to note:

* The user will need to have an **owner** or **contributor** role to see an accurate status of the hub router version. If a user is assigned a **reader** role to the Virtual WAN resource and subscription, then Azure portal will display to that user that the hub router needs to be upgraded to the latest version, even if the hub is already on the latest version.
-* If you change your spoke virtual network's subscription status from disabled to enabled and then upgrade the virtual hub, you will need to update your virtual network connection after the virtual hub upgrade (Ex: you can configure the virtual network connection to propagate to a dummy label).
+* If you change your spoke virtual network's subscription status from disabled to enabled and then upgrade the virtual hub, you'll need to update your virtual network connection after the virtual hub upgrade (for example, you can configure the virtual network connection to propagate to a dummy label).
### Is there a route limit for OpenVPN clients connecting to an Azure P2S VPN gateway?
virtual-wan Virtual Wan Point To Site Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/virtual-wan-point-to-site-portal.md
Previously updated : 09/15/2022 Last updated : 08/09/2023
The instructions you follow depend on the authentication method you want to use.
[!INCLUDE [Point to site page](../../includes/virtual-wan-p2s-gateway-include.md)]
+
## <a name="download"></a>Generate client configuration files

When you connect to VNet using User VPN (P2S), you can use the VPN client that is natively installed on the operating system from which you're connecting. All of the necessary configuration settings for the VPN clients are contained in a VPN client configuration zip file. The settings in the zip file help you easily configure the VPN clients. The VPN client configuration files that you generate are specific to the User VPN configuration for your gateway. In this section, you generate and download the files used to configure your VPN clients.
virtual-wan Virtual Wan Point To Site Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/virtual-wan-point-to-site-powershell.md
Previously updated : 07/05/2022 Last updated : 08/24/2023 # Create a P2S User VPN connection using Azure Virtual WAN - PowerShell
virtual-wan Virtual Wan Route Table Nva Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/virtual-wan-route-table-nva-portal.md
Previously updated : 08/19/2021 Last updated : 08/24/2023 # Customer intent: As someone with a networking background, I want to create a route table using the portal.
Verify that you have met the following criteria:
* A private IP address must be assigned to the NVA network interface.
- * The NVA is not deployed in the virtual hub. It must be deployed in a separate virtual network.
+ * The NVA isn't deployed in the virtual hub. It must be deployed in a separate virtual network.
* The NVA virtual network may have one or many virtual networks connected to it. In this article, we refer to the NVA virtual network as an 'indirect spoke VNet'. These virtual networks can be connected to the NVA VNet by using VNet peering. The VNet Peering links are depicted by black arrows in the above figure between VNet 1, VNet 2, and NVA VNet.
* You have created two virtual networks. They will be used as spoke VNets.
Verify that you have met the following criteria:
* Ensure there are no virtual network gateways in any of the VNets.
- * The VNets do not require a gateway subnet.
+ * The VNets don't require a gateway subnet.
## <a name="signin"></a>1. Sign in
Repeat the following procedure for each virtual network that you want to connect
* **Connection name** - Name your connection.
* **Hubs** - Select the hub you want to associate with this connection.
* **Subscription** - Verify the subscription.
- * **Virtual network** - Select the virtual network you want to connect to this hub. The virtual network cannot have an already existing virtual network gateway.
+ * **Virtual network** - Select the virtual network you want to connect to this hub. The virtual network can't have an already existing virtual network gateway.
4. Click **OK** to create the connection.

## Next steps
virtual-wan Virtual Wan Route Table Nva https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/virtual-wan-route-table-nva.md
Previously updated : 09/22/2020 Last updated : 08/24/2023 # Customer intent: As someone with a networking background, I want to work with routing tables for NVA.
Verify that you have met the following criteria:
* You have a Network Virtual Appliance (NVA). This is third-party software of your choice that's typically provisioned from Azure Marketplace in a virtual network.
* You have a private IP assigned to the NVA network interface.
-* The NVA cannot be deployed in the virtual hub. It must be deployed in a separate VNet. For this article, the NVA VNet is referred to as the 'DMZ VNet'.
+* The NVA can't be deployed in the virtual hub. It must be deployed in a separate VNet. For this article, the NVA VNet is referred to as the 'DMZ VNet'.
* The 'DMZ VNet' may have one or many virtual networks connected to it. In this article, this VNet is referred to as 'Indirect spoke VNet'. These VNets can be connected to the DMZ VNet using VNet peering.
* Verify that you have 2 VNets already created. These will be used as spoke VNets. For this article, the VNet spoke address spaces are 10.0.2.0/24 and 10.0.3.0/24. If you need information on how to create a VNet, see [Create a virtual network using PowerShell](../virtual-network/quick-create-powershell.md).
* Ensure there are no virtual network gateways in any VNets.

## <a name="signin"></a>1. Sign in
-Make sure you install the latest version of the Resource Manager PowerShell cmdlets. For more information about installing PowerShell cmdlets, see [How to install and configure Azure PowerShell](/powershell/azure/install-azure-powershell). This is important because earlier versions of the cmdlets do not contain the current values that you need for this exercise.
+Make sure you install the latest version of the Resource Manager PowerShell cmdlets. For more information about installing PowerShell cmdlets, see [How to install and configure Azure PowerShell](/powershell/azure/install-azure-powershell). This is important because earlier versions of the cmdlets don't contain the current values that you need for this exercise.
-1. Open your PowerShell console with elevated privileges, and sign in to your Azure account. This cmdlet prompts you for the sign-in credentials. After signing in, it downloads your account settings so that they are available to Azure PowerShell.
+1. Open your PowerShell console with elevated privileges, and sign in to your Azure account. This cmdlet prompts you for the sign-in credentials. After signing in, it downloads your account settings so that they're available to Azure PowerShell.
```powershell
Connect-AzAccount
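# (a small addition to the sample, assuming your account has access to more
# than one subscription)
# list the subscriptions available to your account
Get-AzSubscription

# select the subscription to work in
Select-AzSubscription -SubscriptionName "Name of subscription"
```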
virtual-wan Vpn Client Certificate Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/vpn-client-certificate-windows.md
description: Learn how to configure VPN clients on Windows computers for User VP
Previously updated : 07/25/2022 Last updated : 08/24/2023
virtual-wan Vpn Over Expressroute https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/vpn-over-expressroute.md
Previously updated : 09/22/2020 Last updated : 08/24/2023 # ExpressRoute encryption: IPsec over ExpressRoute for Virtual WAN
This article shows you how to use Azure Virtual WAN to establish an IPsec/IKE VP
The following diagram shows an example of VPN connectivity over ExpressRoute private peering: The diagram shows a network within the on-premises network connected to the Azure hub VPN gateway over ExpressRoute private peering. The connectivity establishment is straightforward:
In both of these examples, Azure will send traffic to 10.0.1.0/24 over the VPN c
The following Azure resources and the corresponding on-premises configurations must be in place before you proceed:
-- An Azure virtual WAN
-- A virtual WAN hub with an [ExpressRoute gateway](virtual-wan-expressroute-portal.md) and a [VPN gateway](virtual-wan-site-to-site-portal.md)
+- An Azure virtual WAN.
+- A virtual WAN hub with an [ExpressRoute gateway](virtual-wan-expressroute-portal.md) and a [VPN gateway](virtual-wan-site-to-site-portal.md).
For the steps to create an Azure virtual WAN and a hub with an ExpressRoute association, see [Create an ExpressRoute association using Azure Virtual WAN](virtual-wan-expressroute-portal.md). For the steps to create a VPN gateway in the virtual WAN, see [Create a site-to-site connection using Azure Virtual WAN](virtual-wan-site-to-site-portal.md).
The site resource is the same as the non-ExpressRoute VPN sites for a virtual WA
> The IP address for the on-premises VPN device *must* be part of the address prefixes advertised to the virtual WAN hub via Azure ExpressRoute private peering. >
-1. Go to the Azure portal in your browser.
-1. Select the hub that you created. On the virtual WAN hub page, under **Connectivity**, select **VPN sites**.
-1. On the **VPN sites** page, select **+Create site**.
-1. On the **Create site** page, fill in the following fields:
- * **Subscription**: Verify the subscription.
- * **Resource Group**: Select or create the resource group that you want to use.
- * **Region**: Enter the Azure region for the VPN site resource.
- * **Name**: Enter the name by which you want to refer to your on-premises site.
- * **Device vendor**: Enter the vendor of the on-premises VPN device.
+1. Go to **YourVirtualWAN > VPN sites** and create a site for your on-premises network. For basic steps, see [Create a site](virtual-wan-site-to-site-portal.md). Keep in mind the following setting values:
+ * **Border Gateway Protocol**: Select "Enable" if your on-premises network uses BGP.
+ * **Private address space**: Enter the IP address space that's located on your on-premises site. Traffic destined for this address space is routed to the on-premises network via the VPN gateway.
- * **Hubs**: Select one or more hubs to connect this VPN site. The selected hubs must have VPN gateways already created.
-1. Select **Next: Links >** for the VPN link settings:
- * **Link Name**: The name by which you want to refer to this connection.
+
+1. Select **Links** to add information about the physical links. Keep in mind the following settings information:
+ * **Provider Name**: The name of the internet service provider for this site. For an ExpressRoute on-premises network, it's the name of the ExpressRoute service provider.
+ * **Speed**: The speed of the internet service link or ExpressRoute circuit.
+ * **IP address**: The public IP address of the VPN device that resides on your on-premises site. Or, for ExpressRoute on-premises, it's the private IP address of the VPN device via ExpressRoute.
- If BGP is enabled, it will apply to all connections created for this site in Azure. Configuring BGP on a virtual WAN is equivalent to configuring BGP on an Azure VPN gateway.
-
- Your on-premises BGP peer address *must not* be the same as the IP address of your VPN to the device or the virtual network address space of the VPN site. Use a different IP address on the VPN device for your BGP peer IP. It can be an address assigned to the loopback interface on the device. However, it *can't* be an APIPA (169.254.*x*.*x*) address. Specify this address in the corresponding VPN site that represents the location. For BGP prerequisites, see [About BGP with Azure VPN Gateway](../vpn-gateway/vpn-gateway-bgp-overview.md).
+ * If BGP is enabled, it applies to all connections created for this site in Azure. Configuring BGP on a virtual WAN is equivalent to configuring BGP on an Azure VPN gateway.
+
+ * Your on-premises BGP peer address *must not* be the same as the IP address of your VPN to the device or the virtual network address space of the VPN site. Use a different IP address on the VPN device for your BGP peer IP. It can be an address assigned to the loopback interface on the device. However, it *can't* be an APIPA (169.254.*x*.*x*) address. Specify this address in the corresponding VPN site that represents the location. For BGP prerequisites, see [About BGP with Azure VPN Gateway](../vpn-gateway/vpn-gateway-bgp-overview.md).
-1. Select **Next: Review + create >** to check the setting values and create the VPN site. If you selected **Hubs** to connect, the connection will be established between the on-premises network and the hub VPN gateway.
+1. Select **Next: Review + create >** to check the setting values, then select **Create** to create the VPN site.
+1. Next, connect the site to the hub, using the basic [steps](virtual-wan-site-to-site-portal.md#connectsites) as a guideline. It can take up to 30 minutes to update the gateway.
## <a name="hub"></a>3. Update the VPN connection setting to use ExpressRoute After you create the VPN site and connect to the hub, use the following steps to configure the connection to use ExpressRoute private peering:
-1. Go back to the virtual WAN resource page, and select the hub resource. Or navigate from the VPN site to the connected hub.
+1. Go to the virtual hub. Either go to the Virtual WAN and select the hub to open the hub page, or go to the connected virtual hub from the VPN site.
- :::image type="content" source="./media/vpn-over-expressroute/hub-selection.png" alt-text="Select a hub":::
1. Under **Connectivity**, select **VPN (Site-to-Site)**.
- :::image type="content" source="./media/vpn-over-expressroute/vpn-select.png" alt-text="Select VPN (Site-to-Site)":::
-1. Select the ellipsis (**...**) on the VPN site over ExpressRoute, and select **Edit VPN connection to this hub**.
+1. Select the ellipsis (**...**) or right-click the VPN site over ExpressRoute, and select **Edit VPN connection to this hub**.
- :::image type="content" source="./media/vpn-over-expressroute/config-menu.png" alt-text="Enter configuration menu":::
-1. For **Use Azure Private IP Address**, select **Yes**. The setting configures the hub VPN gateway to use private IP addresses within the hub address range on the gateway for this connection, instead of the public IP addresses. This will ensure that the traffic from the on-premises network traverses the ExpressRoute private peering paths rather than using the public internet for this VPN connection. The following screenshot shows the setting:
+1. On the **Basics** page, leave the defaults.
- :::image type="content" source="./media/vpn-over-expressroute/vpn-link-configuration.png" alt-text="Setting for using a private IP address for the VPN connection" border="false":::
-1. Select **Save**.
+1. On the **Link connection 1** page, configure the following settings:
-After you save your changes, the hub VPN gateway will use the private IP addresses on the VPN gateway to establish the IPsec/IKE connections with the on-premises VPN device over ExpressRoute.
+ - For **Use Azure Private IP Address**, select **Yes**. The setting configures the hub VPN gateway to use private IP addresses within the hub address range on the gateway for this connection, instead of the public IP addresses. This ensures that the traffic from the on-premises network traverses the ExpressRoute private peering paths rather than using the public internet for this VPN connection.
+1. Click **Create** to update the settings. After the settings have been created, the hub VPN gateway will use the private IP addresses on the VPN gateway to establish the IPsec/IKE connections with the on-premises VPN device over ExpressRoute.
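If you script this step instead, the equivalent setting in Azure PowerShell is the `UseLocalAzureIpAddress` switch on the VPN connection. A hedged sketch, assuming the resource names below and a recent Az.Network version:

```azurepowershell-interactive
# create the hub connection to the VPN site, sourcing the tunnel from the
# gateway's private IP addresses so traffic stays on ExpressRoute private peering
$site = Get-AzVpnSite -ResourceGroupName "vwanRG" -Name "erVpnSite"
New-AzVpnConnection -ResourceGroupName "vwanRG" -ParentResourceName "hubVpnGateway" `
    -Name "erVpnConnection" -VpnSite $site -UseLocalAzureIpAddress
```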
## <a name="associate"></a>4. Get the private IP addresses for the hub VPN gateway
The device configuration file contains the settings to use when you're configuri
"Instance0":"10.51.230.4" "Instance1":"10.51.230.5" ```
- * Configuration details for the VPN gateway connection, such as BGP and pre-shared key. The pre-shared key is automatically generated for you. You can always edit the connection on the **Overview** page for a custom pre-shared key.
+ * Configuration details for the VPN gateway connection, such as BGP and preshared key. The preshared key is automatically generated for you. You can always edit the connection on the **Overview** page for a custom preshared key.
### Example device configuration file
The device configuration file contains the settings to use when you're configuri
If you need instructions to configure your device, you can use the instructions on the [VPN device configuration scripts page](~/articles/vpn-gateway/vpn-gateway-about-vpn-devices.md#configscripts) with the following caveats:
-* The instructions on the VPN device page are not written for a virtual WAN. But you can use the virtual WAN values from the configuration file to manually configure your VPN device.
+* The instructions on the VPN device page aren't written for a virtual WAN. But you can use the virtual WAN values from the configuration file to manually configure your VPN device.
* The downloadable device configuration scripts that are for the VPN gateway don't work for the virtual WAN, because the configuration is different.
* A new virtual WAN can support both IKEv1 and IKEv2.
* A virtual WAN can use only route-based VPN devices and device instructions.
virtual-wan Vpn Profile Intune https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/vpn-profile-intune.md
Previously updated : 02/04/2021 Last updated : 08/24/2023
vpn-gateway Ipsec Ike Policy Howto https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/ipsec-ike-policy-howto.md
This section walks you through the steps to create a Site-to-Site VPN connection
:::image type="content" source="./media/ipsec-ike-policy-howto/site-to-site-diagram.png" alt-text="Site-to-Site policy" border="false" lightbox="./media/ipsec-ike-policy-howto/site-to-site-diagram.png":::
-### Step 1 - Create the virtual network, VPN gateway, and local network gateway for TestVNet1
+### Step 1: Create the virtual network, VPN gateway, and local network gateway for TestVNet1
Create the following resources. For steps, see [Create a Site-to-Site VPN connection](./tutorial-site-to-site-portal.md).
Create the following resources.For steps, see [Create a Site-to-Site VPN connect
* **Enable active-active mode:** Disabled
* **Configure BGP:** Disabled
-### Step 2 - Configure the local network gateway and connection resources
+### Step 2: Configure the local network gateway and connection resources
1. Create the local network gateway resource **Site6** using the following values.
Create the following resources.For steps, see [Create a Site-to-Site VPN connect
* **Shared key:** abc123 (example value - must match the on-premises device key used)
* **IKE protocol:** IKEv2
-### Step 3 - Configure a custom IPsec/IKE policy on the S2S VPN connection
+### Step 3: Configure a custom IPsec/IKE policy on the S2S VPN connection
Configure a custom IPsec/IKE policy with the following algorithms and parameters:
The steps to create a VNet-to-VNet connection with an IPsec/IKE policy are simil
:::image type="content" source="./media/ipsec-ike-policy-howto/vnet-policy.png" alt-text="Screenshot shows VNet-to-VNet policy diagram." border="false" lightbox="./media/ipsec-ike-policy-howto/vnet-policy.png":::
-### Step 1 - Create the virtual network, VPN gateway, and local network gateway for TestVNet2
+### Step 1: Create the virtual network, VPN gateway, and local network gateway for TestVNet2
Use the steps in the [Create a VNet-to-VNet connection](vpn-gateway-howto-vnet-vnet-resource-manager-portal.md) article to create TestVNet2 and create a VNet-to-VNet connection to TestVNet1.
Example values:
* **Enable active-active mode:** Disabled
* **Configure BGP:** Disabled
-### Step 2 - Configure the VNet-to-VNet connection
+### Step 2: Configure the VNet-to-VNet connection
1. From the VNet1GW gateway, add a VNet-to-VNet connection to VNet2GW, **VNet1toVNet2**.
Example values:
:::image type="content" source="./media/ipsec-ike-policy-howto/vnet-connections.png" alt-text="Screenshot shows VNet-to-VNet connections." border="false" lightbox="./media/ipsec-ike-policy-howto/vnet-connections.png":::
-### Step 3 - Configure a custom IPsec/IKE policy on VNet1toVNet2
+### Step 3: Configure a custom IPsec/IKE policy on VNet1toVNet2
1. From the **VNet1toVNet2** connection resource, go to the **Configuration** page.
Example values:
1. Select **Save** at the top of the page to apply the policy changes on the connection resource.
-### Step 4 - Configure a custom IPsec/IKE policy on VNet2toVNet1
+### Step 4: Configure a custom IPsec/IKE policy on VNet2toVNet1
+1. Apply the same policy to the VNet2toVNet1 connection. If you don't, the IPsec/IKE VPN tunnel won't connect due to a policy mismatch (a scripted sketch follows).
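A minimal PowerShell sketch of applying one policy to both connections; the algorithm choices and resource group name here are illustrative assumptions, not this article's required values:

```azurepowershell-interactive
# Sketch: build one IPsec/IKE policy and apply it to both connection resources (values are illustrative)
$ipsecPolicy = New-AzIpsecPolicy -IkeEncryption AES256 -IkeIntegrity SHA384 -DhGroup DHGroup24 `
    -IpsecEncryption AES256 -IpsecIntegrity SHA256 -PfsGroup PFS24 `
    -SALifeTimeSeconds 7200 -SADataSizeKilobytes 102400000

foreach ($connName in "VNet1toVNet2", "VNet2toVNet1") {
    $conn = Get-AzVirtualNetworkGatewayConnection -Name $connName -ResourceGroupName "TestRG1"
    Set-AzVirtualNetworkGatewayConnection -VirtualNetworkGatewayConnection $conn -IpsecPolicies $ipsecPolicy
}
```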
vpn-gateway Openvpn Azure Ad Tenant Multi App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/openvpn-azure-ad-tenant-multi-app.md
description: Learn how to set up an Azure AD tenant for P2S OpenVPN authenticati
Previously updated : 10/25/2022 Last updated : 08/18/2023
Assign the users to your applications.
1. Go to your Azure Active Directory and select **Enterprise applications**.
1. From the list, locate the application you just registered and click to open it.
-1. Click **Properties**. On the **Properties** page, verify that **Enabled for users to sign in** is set to **Yes**. If not, change the value to **Yes**, then **Save**.
+1. Click **Properties**. On the **Properties** page, verify that **Enabled for users to sign in** is set to **Yes**. If not, change the value to **Yes**.
+1. For **Assignment required**, change the value to **Yes**. For more information about this setting, see [Application properties](../active-directory/manage-apps/application-properties.md#enabled-for-users-to-sign-in).
+1. If you've made changes, click **Save** to save your settings.
1. In the left pane, click **Users and groups**. On the **Users and groups** page, click **+ Add user/group** to open the **Add Assignment** page.
1. Click the link under **Users and groups** to open the **Users and groups** page. Select the users and groups that you want to assign, then click **Select**.
1. After you finish selecting users and groups, click **Assign**.
In this step, you configure P2S Azure AD authentication for the virtual network
1. Go to the virtual network gateway. In the left pane, click **Point-to-site configuration**.
- :::image type="content" source="./media/openvpn-azure-ad-tenant-multi-app/enable-authentication.png" alt-text="Screenshot showing point-to-site configuration page." lightbox="./media/openvpn-azure-ad-tenant-multi-app/client-id.png":::
+ :::image type="content" source="./media/openvpn-azure-ad-tenant-multi-app/enable-authentication.png" alt-text="Screenshot showing point-to-site configuration page." lightbox="./media/openvpn-azure-ad-tenant-multi-app/enable-authentication.png":::
Configure the following values:
In this step, you configure P2S Azure AD authentication for the virtual network
For **Azure Active Directory** values, use the following guidelines for the **Tenant**, **Audience**, and **Issuer** values.

* **Tenant**: `https://login.microsoftonline.com/{TenantID}`
- * **Audience ID**: Use the value that you created in the previous section that corresponds to **Application (client) ID**. Don't use the application ID for "Azure VPN" Azure AD Enterprise App - use application ID that you created and registered. If you use the application ID for the ""Azure VPN" Azure AD Enterprise App instead, this will grant all users access to the VPN gateway (which would be the default way to set up access), instead of granting only the users that you assigned to the application that you created and registered.
+ * **Audience ID**: Use the value that you created in the previous section that corresponds to **Application (client) ID**. Don't use the application ID for the "Azure VPN" Azure AD Enterprise App - use the application ID that you created and registered. If you use the application ID for the "Azure VPN" Azure AD Enterprise App instead, you grant all users access to the VPN gateway (which would be the default way to set up access), instead of granting access only to the users that you assigned to the application that you created and registered.
 * **Issuer**: `https://sts.windows.net/{TenantID}`. For the Issuer value, make sure to include a trailing **/** at the end.

1. Once you finish configuring settings, click **Save** at the top of the page.
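For illustration only, here's a hedged PowerShell sketch of how these values fit together; the placeholders are assumptions that you replace with your own IDs:

```azurepowershell-interactive
# Placeholders only: substitute your own tenant ID and the client ID of the app you registered
$tenantId = "{TenantID}"                                   # your Azure AD tenant ID (a GUID)
$tenant   = "https://login.microsoftonline.com/$tenantId"  # Tenant value
$audience = "{ApplicationClientID}"                        # Application (client) ID of the app you registered
$issuer   = "https://sts.windows.net/$tenantId/"           # Issuer value; note the required trailing /
```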
In this section, you generate and download the Azure VPN Client profile configur
## Next steps
-* * To connect to your virtual network, you must configure the Azure VPN client on your client computers. See [Configure a VPN client for P2S VPN connections](openvpn-azure-ad-client.md).
+* To connect to your virtual network, you must configure the Azure VPN client on your client computers. See [Configure a VPN client for P2S VPN connections](openvpn-azure-ad-client.md).
* For frequently asked questions, see the **Point-to-site** section of the [VPN Gateway FAQ](vpn-gateway-vpn-faq.md#P2S).
vpn-gateway Packet Capture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/packet-capture.md
description: Learn about packet capture functionality that you can use on VPN ga
Previously updated : 01/31/2022 Last updated : 08/24/2023
Connectivity and performance-related problems are often complex. It can take sig
There are some commonly available packet capture tools. Getting relevant packet captures with these tools can be cumbersome, especially in high-volume traffic scenarios. The filtering capabilities provided by Azure VPN Gateway packet capture are a major differentiator. You can use VPN Gateway packet capture together with commonly available packet capture tools.
-## VPN Gateway packet capture filtering capabilities
+## About packet capture for VPN Gateway
-You can run VPN Gateway packet capture on the gateway or on a specific connection, depending on your needs. You can also run packet capture on multiple tunnels at the same time. You can capture one-way or bi-directional traffic, IKE and ESP traffic, and inner packets along with filtering on a VPN gateway.
+You can run VPN Gateway packet capture on the gateway, or on a specific connection, depending on your needs. You can also run packet capture on multiple tunnels at the same time. You can capture one-way or bi-directional traffic, IKE and ESP traffic, and inner packets along with filtering on a VPN gateway.
It's helpful to use a five-tuple filter (source subnet, destination subnet, source port, destination port, protocol) and TCP flags (SYN, ACK, FIN, URG, PSH, RST) when you're isolating problems in high-volume traffic.
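As an illustration, here's a hedged PowerShell sketch that starts a filtered capture; the gateway name, resource group, and filter values are assumptions, and the JSON fields mirror the example shown later in this article:

```azurepowershell-interactive
# Sketch: start a gateway capture with a five-tuple filter and a TCP ACK-flag match (values are illustrative)
$filter = '{"TracingFlags":11,"MaxPacketBufferSize":120,"MaxFileSize":200,' +
    '"Filters":[{"SourceSubnets":["10.1.0.0/24"],"DestinationSubnets":["10.2.0.0/24"],' +
    '"DestinationPort":[443],"Protocol":6,"TcpFlags":16,"CaptureSingleDirectionTrafficOnly":true}]}'
Start-AzVirtualNetworkGatewayPacketCapture -ResourceGroupName "TestRG1" -Name "VNet1GW" -FilterData $filter
```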
The following examples of JSON and a JSON schema provide explanations of each pr
> [!NOTE]
> Set the **CaptureSingleDirectionTrafficOnly** option to **false** if you want to capture both inner and outer packets.
-### Example JSON
+**Example JSON**
+
```JSON-interactive
{
  "TracingFlags": 11,
The following examples of JSON and a JSON schema provide explanations of each pr
  ]
}
```
-### JSON schema
+
+**JSON schema**
+
```JSON-interactive
{
  "type": "object",
The following examples of JSON and a JSON schema provide explanations of each pr
}
```
-## Start packet capture - portal
+### Key considerations
+
+- Running packet capture can affect performance. Remember to stop the packet capture when you don't need it.
+- Suggested minimum packet capture duration is 600 seconds. Because of sync issues among multiple components on the path, shorter packet captures might not provide complete data.
+- Packet capture data files are generated in PCAP format. Use Wireshark or other commonly available applications to open PCAP files.
+- Packet captures aren't supported on policy-based gateways.
+- The maximum file size of packet capture data files is 500 MB.
+- If the `SASurl` parameter isn't configured correctly, the trace might fail with Storage errors. For examples of how to correctly generate an `SASurl` parameter, see [Stop-AzVirtualNetworkGatewayPacketCapture](/powershell/module/az.network/stop-azvirtualnetworkgatewaypacketcapture).
+- If you're configuring a User Delegated SAS, make sure the user account is granted the proper RBAC permissions on the storage account, such as Storage Blob Data Owner (a sketch follows this list).
+
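As a hedged sketch (the storage account name and user are assumptions, not from this article), granting that role with PowerShell might look like this:

```azurepowershell-interactive
# Sketch: grant Storage Blob Data Owner on an assumed storage account to an assumed user
$sa = Get-AzStorageAccount -ResourceGroupName "TestRG1" -Name "mystorageacct"
New-AzRoleAssignment -SignInName "user@contoso.com" `
    -RoleDefinitionName "Storage Blob Data Owner" `
    -Scope $sa.Id
```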
+## Packet capture - portal
-You can set up packet capture in the Azure portal by navigating to the VPN Gateway Packet Capture blade in the Azure portal and clicking the **Start Packet Capture button**
+This section helps you start and stop a packet capture using the Azure portal.
-> [!NOTE]
-> Do not select the **Capture Single Direction Traffic Only** option if you want to capture both inner and outer packets.
+### Start packet capture - portal
+
+You can set up packet capture in the Azure portal.
+1. Go to your VPN gateway in the Azure portal.
+1. On the left, select **VPN Gateway Packet Capture** to open the VPN Gateway Packet Capture page.
+1. Select **Start Packet Capture**.
-## Stop packet capture - portal
+ :::image type="content" source="./media/packet-capture/packet-capture-portal.png" alt-text="Screenshot of start packet capture in the portal." lightbox="./media/packet-capture/packet-capture-portal.png":::
-A valid SAS (or Shared Access Signature) Uri with read/write access is required to complete a packet capture. When a packet capture is stopped, the output of the packet capture is written to the container that is referenced by the SAS Uri. To get the SAS Uri, navigate to the required storage account and generate a SAS token and URL with the correct permissions.
+1. On the **Start Packet Capture** page, make any necessary adjustments. Don't select the "Capture Single Direction Traffic Only" option if you want to capture both inner and outer packets.
+1. Once you've configured the settings, click **Start Packet Capture**.
+### Stop packet capture - portal
-* Copy the Blob SAS URL as it will be needed in the next step.
+To complete a packet capture, you need to provide a valid SAS (or Shared Access Signature) URL with read/write access. When a packet capture is stopped, the output of the packet capture is written to the container that is referenced by the SAS URL.
-* Navigate to the VPN Gateway Packet Capture blade in the Azure portal and clicking the **Stop Packet Capture** button
+1. To get the SAS URL, go to the storage account.
+1. Go to the container you want to use and right-click to show the dropdown list. Select **Generate SAS** to open the Generate SAS page.
+1. On the Generate SAS page, configure your settings. Make sure that you have granted read and write access.
+1. Click **Generate SAS token and URL**.
+1. The SAS token and SAS URL are generated and appear immediately below the button. Copy the Blob SAS URL.
-* Paste the SAS URL (from the previous step) in the **Output Sas Uri** text box and click **Stop Packet Capture**.
+ :::image type="content" source="./media/packet-capture/generate-sas.png" alt-text="Screenshot of generate SAS token." lightbox="./media/packet-capture/generate-sas.png":::
+1. Go back to the VPN Gateway Packet Capture page in the Azure portal and click the **Stop Packet Capture** button.
-* The packet capture (pcap) file will be stored in the specified account
+1. Paste the SAS URL (from the previous step) in the **Output Sas Url** text box and click **Stop Packet Capture**.
+
+1. The packet capture (pcap) file is stored in the specified storage account.
## Packet capture - PowerShell

The following examples show PowerShell commands that start and stop packet captures. For more information on parameter options, see [Start-AzVirtualNetworkGatewayPacketCapture](/powershell/module/az.network/start-azvirtualnetworkgatewaypacketcapture).
->
-### Prerequisite
+**Prerequisites**
+
+* Packet capture data needs to be logged into a storage account on your subscription. See [create storage account](../storage/common/storage-account-create.md).
-* Packet capture data will need to be logged into a storage account on your subscription. See [create storage account](../storage/common/storage-account-create.md).
-* To stop the packet capture, you will need to generate the `SASUrl` for your storage account. See [create a user delegation SAS](../storage/blobs/storage-blob-user-delegation-sas-create-powershell.md).
+* To stop the packet capture, you'll need to generate the `SASUrl` for your storage account. See [create a user delegation SAS](../storage/blobs/storage-blob-user-delegation-sas-create-powershell.md). A minimal sketch follows this list.
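A hedged sketch of generating that URL, assuming an existing storage account and container (the names are illustrative):

```azurepowershell-interactive
# Sketch: build a user delegation SAS URL with read/write access for an assumed container
$ctx = New-AzStorageContext -StorageAccountName "mystorageacct" -UseConnectedAccount
$sasUrl = New-AzStorageContainerSASToken -Context $ctx -Name "pcap" `
    -Permission rwd -StartTime (Get-Date) -ExpiryTime (Get-Date).AddHours(2) -FullUri
```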
### Start packet capture for a VPN gateway
Stop-AzVirtualNetworkGatewayConnectionPacketCapture -ResourceGroupName "YourReso
For more information on parameter options, see [Stop-AzVirtualNetworkGatewayConnectionPacketCapture](/powershell/module/az.network/stop-azvirtualnetworkgatewayconnectionpacketcapture).
-## Key considerations
-
-- Running packet capture can affect performance. Remember to stop the packet capture when you don't need it.
-- Suggested minimum packet capture duration is 600 seconds. Because of sync issues among multiple components on the path, shorter packet captures might not provide complete data.
-- Packet capture data files are generated in PCAP format. Use Wireshark or other commonly available applications to open PCAP files.
-- Packet captures aren't supported on policy-based gateways.
-- The maximum filesize of packet capture data files is 500MB.
-- If the `SASurl` parameter isn't configured correctly, the trace might fail with Storage errors. For examples of how to correctly generate an `SASurl` parameter, see [Stop-AzVirtualNetworkGatewayPacketCapture](/powershell/module/az.network/stop-azvirtualnetworkgatewaypacketcapture).
-- If you are configuring a User Delegated SAS, make sure the user account is granted proper RBAC permissions on the storage account such as Storage Blob Data Owner.
-
-
## Next steps
-For more information about VPN Gateway, see [What is VPN Gateway?](vpn-gateway-about-vpngateways.md).
+For more information about VPN Gateway, see [What is VPN Gateway?](vpn-gateway-about-vpngateways.md)
vpn-gateway Site To Site Tunneling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/site-to-site-tunneling.md
description: Learn how to split or force tunnel traffic for VPN Gateway site-to-
+ Last updated 08/04/2023
vpn-gateway Vpn Gateway Activeactive Rm Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-activeactive-rm-powershell.md
The other properties are the same as the non-active-active gateways.
* Verify that you have an Azure subscription. If you don't already have an Azure subscription, you can activate your [MSDN subscriber benefits](https://azure.microsoft.com/pricing/member-offers/msdn-benefits-details/) or sign up for a [free account](https://azure.microsoft.com/pricing/free-trial/).
* You need to install the Azure Resource Manager PowerShell cmdlets if you don't want to use Cloud Shell in your browser. See [Overview of Azure PowerShell](/powershell/azure/) for more information about installing the PowerShell cmdlets.
-### Step 1 - Create and configure VNet1
+### Step 1: Create and configure VNet1
#### 1. Declare your variables
$gwsub1 = New-AzVirtualNetworkSubnetConfig -Name $GWSubName1 -AddressPrefix $GWS
New-AzVirtualNetwork -Name $VNetName1 -ResourceGroupName $RG1 -Location $Location1 -AddressPrefix $VNetPrefix11,$VNetPrefix12 -Subnet $fesub1,$besub1,$gwsub1 ```
-### Step 2 - Create the VPN gateway for TestVNet1 with active-active mode
+### Step 2: Create the VPN gateway for TestVNet1 with active-active mode
#### 1. Create the public IP addresses and gateway IP configurations
To establish a cross-premises connection, you need to create a Local Network Gat
Before proceeding, make sure you have completed [Part 1](#aagateway) of this exercise.
-### Step 1 - Create and configure the local network gateway
+### Step 1: Create and configure the local network gateway
#### 1. Declare your variables
New-AzResourceGroup -Name $RG5 -Location $Location5
New-AzLocalNetworkGateway -Name $LNGName51 -ResourceGroupName $RG5 -Location $Location5 -GatewayIpAddress $LNGIP51 -AddressPrefix $LNGPrefix51 -Asn $LNGASN5 -BgpPeeringAddress $BGPPeerIP51 ```
-### Step 2 - Connect the VNet gateway and local network gateway
+### Step 2: Connect the VNet gateway and local network gateway
#### 1. Get the two gateways
The connection should be established after a few minutes, and the BGP peering se
:::image type="content" source="./media/vpn-gateway-activeactive-rm-powershell/active-active.png" alt-text="Diagram showing active-active connection." lightbox="./media/vpn-gateway-activeactive-rm-powershell/active-active.png":::
-### Step 3 - Connect two on-premises VPN devices to the active-active VPN gateway
+### Step 3: Connect two on-premises VPN devices to the active-active VPN gateway
If you have two VPN devices at the same on-premises network, you can achieve dual redundancy by connecting the Azure VPN gateway to the second VPN device.
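As a hedged sketch (the names and addresses are illustrative, and the variables mirror the conventions used earlier in this article), the second device is just another local network gateway plus another connection:

```azurepowershell-interactive
# Sketch: add a second local network gateway and a second connection for dual redundancy (values are illustrative)
$LNG52 = New-AzLocalNetworkGateway -Name "Site5-Device2" -ResourceGroupName $RG5 -Location $Location5 `
    -GatewayIpAddress "5.4.3.2" -AddressPrefix "10.52.255.253/32" -Asn 65050 -BgpPeeringAddress "10.52.255.253"
$gw1 = Get-AzVirtualNetworkGateway -Name $GWName1 -ResourceGroupName $RG1
New-AzVirtualNetworkGatewayConnection -Name "VNet1toSite5-Device2" -ResourceGroupName $RG1 -Location $Location1 `
    -VirtualNetworkGateway1 $gw1 -LocalNetworkGateway2 $LNG52 -ConnectionType IPsec -SharedKey "AzureA1b2C3" -EnableBgp $true
```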
Once the connection (tunnels) are established, you'll have dual redundant VPN de
This section creates an active-active VNet-to-VNet connection with BGP. The following instructions continue from the previous steps. You must complete [Part 1](#aagateway) to create and configure TestVNet1 and the VPN Gateway with BGP.
-### Step 1 - Create TestVNet2 and the VPN gateway
+### Step 1: Create TestVNet2 and the VPN gateway
It's important to make sure that the IP address space of the new virtual network, TestVNet2, doesn't overlap with any of your VNet ranges.
Create the VPN gateway with the AS number and the "EnableActiveActiveFeature" fl
New-AzVirtualNetworkGateway -Name $GWName2 -ResourceGroupName $RG2 -Location $Location2 -IpConfigurations $gw2ipconf1,$gw2ipconf2 -GatewayType Vpn -VpnType RouteBased -GatewaySku VpnGw1 -Asn $VNet2ASN -EnableActiveActiveFeature ```
-### Step 2 - Connect the TestVNet1 and TestVNet2 gateways
+### Step 2: Connect the TestVNet1 and TestVNet2 gateways
In this example, both gateways are in the same subscription. You can complete this step in the same PowerShell session.
vpn-gateway Vpn Gateway Bgp Resource Manager Ps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-bgp-resource-manager-ps.md
To establish a cross-premises connection, you need to create a *local network ga
Before proceeding, make sure you enabled BGP for the VPN gateway in the previous section.
-### Step 1 - Create and configure the local network gateway
+### Step 1: Create and configure the local network gateway
#### 1. Declare your variables
Create the local network gateway. Notice the two additional parameters for the l
New-AzLocalNetworkGateway -Name $LNGName5 -ResourceGroupName $RG5 -Location $Location5 -GatewayIpAddress $LNGIP5 -AddressPrefix $LNGPrefix50 -Asn $LNGASN5 -BgpPeeringAddress $BGPPeerIP5 ```
-### Step 2 - Connect the VNet gateway and local network gateway
+### Step 2: Connect the VNet gateway and local network gateway
#### 1. Get the two gateways
This section adds a VNet-to-VNet connection with BGP, as shown in the Diagram 4.
The following instructions continue from the previous steps. You must first complete the steps in the [Enable BGP for the VPN gateway](#enablebgp) section to create and configure TestVNet1 and the VPN gateway with BGP.
-### Step 1 - Create TestVNet2 and the VPN gateway
+### Step 1: Create TestVNet2 and the VPN gateway
It's important to make sure that the IP address space of the new virtual network, TestVNet2, doesn't overlap with any of your VNet ranges.
Create the VPN gateway with the AS number. You must override the default ASN on
New-AzVirtualNetworkGateway -Name $GWName2 -ResourceGroupName $RG2 -Location $Location2 -IpConfigurations $gwipconf2 -GatewayType Vpn -VpnType RouteBased -GatewaySku VpnGw1 -Asn $VNet2ASN ```
-### Step 2 - Connect the TestVNet1 and TestVNet2 gateways
+### Step 2: Connect the TestVNet1 and TestVNet2 gateways
In this example, both gateways are in the same subscription. You can complete this step in the same PowerShell session.
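A hedged sketch of the two connection commands (the variable names follow this article's conventions but are assumptions here, since the full listing isn't shown in this excerpt):

```azurepowershell-interactive
# Sketch: create VNet-to-VNet connections in both directions with BGP enabled (values are illustrative)
$vnet1gw = Get-AzVirtualNetworkGateway -Name $GWName1 -ResourceGroupName $RG1
$vnet2gw = Get-AzVirtualNetworkGateway -Name $GWName2 -ResourceGroupName $RG2

New-AzVirtualNetworkGatewayConnection -Name "VNet1toVNet2" -ResourceGroupName $RG1 -Location $Location1 `
    -VirtualNetworkGateway1 $vnet1gw -VirtualNetworkGateway2 $vnet2gw -ConnectionType Vnet2Vnet -SharedKey "AzureA1b2C3" -EnableBgp $true
New-AzVirtualNetworkGatewayConnection -Name "VNet2toVNet1" -ResourceGroupName $RG2 -Location $Location2 `
    -VirtualNetworkGateway1 $vnet2gw -VirtualNetworkGateway2 $vnet1gw -ConnectionType Vnet2Vnet -SharedKey "AzureA1b2C3" -EnableBgp $true
```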
vpn-gateway Vpn Gateway Classic Resource Manager Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-classic-resource-manager-migration.md
Previously updated : 06/09/2023 Last updated : 08/21/2023
vpn-gateway Vpn Gateway Connect Multiple Policybased Rm Ps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-connect-multiple-policybased-rm-ps.md
The workflow to enable this connectivity:
This section shows you how to enable policy-based traffic selectors on a connection. Make sure you have completed [Part 3 of the Configure IPsec/IKE policy article](vpn-gateway-ipsecikepolicy-rm-powershell.md). The steps in this article use the same parameters.
-### Step 1 - Create the virtual network, VPN gateway, and local network gateway
+### Step 1: Create the virtual network, VPN gateway, and local network gateway
#### Connect to your subscription and declare your variables
This section shows you how to enable policy-based traffic selectors on a connect
New-AzLocalNetworkGateway -Name $LNGName6 -ResourceGroupName $RG1 -Location $Location1 -GatewayIpAddress $LNGIP6 -AddressPrefix $LNGPrefix61,$LNGPrefix62 ```
-### Step 2 - Create an S2S VPN connection with an IPsec/IKE policy
+### Step 2: Create an S2S VPN connection with an IPsec/IKE policy
1. Create an IPsec/IKE policy.
vpn-gateway Vpn Gateway Delete Vnet Gateway Classic Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-delete-vnet-gateway-classic-powershell.md
Previously updated : 06/09/2023 Last updated : 08/21/2023
# Delete a virtual network gateway using PowerShell (classic)
This article helps you delete a VPN gateway in the classic (legacy) deployment model by using PowerShell. After the virtual network gateway has been deleted, modify the network configuration file to remove elements that you're no longer using.
-The steps in this article apply to the classic deployment model and don't apply to the current deployment model, Resource Manager. Unless you want to work in the classic deployment model specifically, we recommend that you use the [Resource Manager version of this article](vpn-gateway-delete-vnet-gateway-powershell.md).
+The steps in this article apply to the classic deployment model and don't apply to the current deployment model, Resource Manager. **Unless you want to work in the classic deployment model specifically, we recommend that you use the [Resource Manager version of this article](vpn-gateway-delete-vnet-gateway-powershell.md)**.
## <a name="connect"></a>Step 1: Connect to Azure
vpn-gateway Vpn Gateway Delete Vnet Gateway Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-delete-vnet-gateway-portal.md
description: Learn how to delete a virtual network gateway using the Azure portal.
Previously updated : 07/28/2023 Last updated : 08/22/2023
> [!div class="op_single_selector"]
> * [Azure portal](vpn-gateway-delete-vnet-gateway-portal.md)
> * [PowerShell](vpn-gateway-delete-vnet-gateway-powershell.md)
-> * [PowerShell (classic)](vpn-gateway-delete-vnet-gateway-classic-powershell.md)
+> * [PowerShell (classic - legacy gateways)](vpn-gateway-delete-vnet-gateway-classic-powershell.md)
This article helps you delete a virtual network gateway. There are a couple of different approaches you can take when you want to delete a gateway for a VPN gateway configuration.
If you aren't concerned about keeping any of your resources in the resource grou
1. In **All resources**, locate the resource group and click to open the blade.
1. Click **Delete**. On the Delete blade, view the affected resources. Make sure that you want to delete all of these resources. If not, use the steps in Delete a VPN gateway at the top of this article.
1. To proceed, type the name of the resource group that you want to delete, then click **Delete**.
+
+## Next steps
+
+For FAQ information, see the [Azure VPN Gateway FAQ](vpn-gateway-vpn-faq.md).
vpn-gateway Vpn Gateway Delete Vnet Gateway Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-delete-vnet-gateway-powershell.md
description: Learn how to delete a virtual network gateway using PowerShell.
Previously updated : 04/29/2021 Last updated : 08/23/2023
There are a couple of different approaches you can take when you want to delete a virtual network gateway for a VPN gateway configuration.

-- If you want to delete everything and start over, as in the case of a test environment, you can delete the resource group. When you delete a resource group, it deletes all the resources within the group. This is method is only recommended if you don't want to keep any of the resources in the resource group. You can't selectively delete only a few resources using this approach.
+* If you want to delete everything and start over, as in the case of a test environment, you can delete the resource group. When you delete a resource group, it deletes all the resources within the group. This method is only recommended if you don't want to keep any of the resources in the resource group. You can't selectively delete only a few resources using this approach.
-- If you want to keep some of the resources in your resource group, deleting a virtual network gateway becomes slightly more complicated. Before you can delete the virtual network gateway, you must first delete any resources that are dependent on the gateway. The steps you follow depend on the type of connections that you created and the dependent resources for each connection.
+* If you want to keep some of the resources in your resource group, deleting a virtual network gateway becomes slightly more complicated. Before you can delete the virtual network gateway, you must first delete any resources that are dependent on the gateway. The steps you follow depend on the type of connections that you created and the dependent resources for each connection.
-## Before beginning
+## <a name="S2S"></a>Delete a site-to-site VPN gateway
+To delete a virtual network gateway for an S2S configuration, you must first delete each resource that pertains to the virtual network gateway. Resources must be deleted in a certain order due to dependencies. In the following examples, some of the values must be specified, while other values are an output result. We use the following specific values in the examples for demonstration purposes:
+* VNet name: VNet1
+* Resource Group name: TestRG1
+* Virtual network gateway name: VNet1GW
-### 1. Download the latest Azure Resource Manager PowerShell cmdlets.
+1. Get the virtual network gateway that you want to delete.
-Download and install the latest version of the Azure Resource Manager PowerShell cmdlets. For more information about downloading and installing PowerShell cmdlets, see [How to install and configure Azure PowerShell](/powershell/azure/).
+ ```azurepowershell-interactive
+ $GW=get-Azvirtualnetworkgateway -Name "VNet1GW" -ResourceGroupName "TestRG1"
+ ```
-### 2. Connect to your Azure account.
+1. Check to see if the virtual network gateway has any connections.
-Open your PowerShell console and connect to your account. Use the following example to help you connect:
+ ```azurepowershell-interactive
+ get-Azvirtualnetworkgatewayconnection -ResourceGroupName "TestRG1" | where-object {$_.VirtualNetworkGateway1.Id -eq $GW.Id}
+ $Conns=get-Azvirtualnetworkgatewayconnection -ResourceGroupName "TestRG1" | where-object {$_.VirtualNetworkGateway1.Id -eq $GW.Id}
+ ```
-```powershell
-Connect-AzAccount
-```
+1. Delete all connections. You may be prompted to confirm the deletion of each of the connections.
-Check the subscriptions for the account.
+ ```azurepowershell-interactive
+ $Conns | ForEach-Object {Remove-AzVirtualNetworkGatewayConnection -Name $_.name -ResourceGroupName $_.ResourceGroupName}
+ ```
-```powershell
-Get-AzSubscription
-```
+1. Delete the virtual network gateway. You may be prompted to confirm the deletion of the gateway. If you have a P2S configuration to this VNet in addition to your S2S configuration, deleting the virtual network gateway will automatically disconnect all P2S clients without warning.
-If you have more than one subscription, specify the subscription that you want to use.
+ ```azurepowershell-interactive
+ Remove-AzVirtualNetworkGateway -Name "VNet1GW" -ResourceGroupName "TestRG1"
+ ```
-```powershell
-Select-AzSubscription -SubscriptionName "Replace_with_your_subscription_name"
-```
+ At this point, your virtual network gateway has been deleted. You can use the next steps to delete any resources that are no longer being used.
-## <a name="S2S"></a>Delete a Site-to-Site VPN gateway
+1. To delete the local network gateways, first get the list of the corresponding local network gateways.
-To delete a virtual network gateway for a S2S configuration, you must first delete each resource that pertains to the virtual network gateway. Resources must be deleted in a certain order due to dependencies. When working with the examples below, some of the values must be specified, while other values are an output result. We use the following specific values in the examples for demonstration purposes:
+ ```azurepowershell-interactive
+ $LNG=Get-AzLocalNetworkGateway -ResourceGroupName "TestRG1" | where-object {$_.Id -In $Conns.LocalNetworkGateway2.Id}
+ ```
-VNet name: VNet1<br>
-Resource Group name: RG1<br>
-Virtual network gateway name: GW1<br>
   Next, delete the local network gateways. You may be prompted to confirm the deletion of each of the local network gateways.
-The following steps apply to the [Resource Manager deployment model](../azure-resource-manager/management/deployment-models.md).
+ ```azurepowershell-interactive
+ $LNG | ForEach-Object {Remove-AzLocalNetworkGateway -Name $_.Name -ResourceGroupName $_.ResourceGroupName}
+ ```
-### 1. Get the virtual network gateway that you want to delete.
+1. To delete the Public IP address resources, first get the IP configurations of the virtual network gateway.
-```powershell
-$GW=get-Azvirtualnetworkgateway -Name "GW1" -ResourceGroupName "RG1"
-```
+ ```azurepowershell-interactive
   $GWIpConfigs = $GW.IpConfigurations
+ ```
-### 2. Check to see if the virtual network gateway has any connections.
+ Next, get the list of Public IP address resources used for this virtual network gateway. If the virtual network gateway was active-active, you'll see two Public IP addresses.
-```powershell
-get-Azvirtualnetworkgatewayconnection -ResourceGroupName "RG1" | where-object {$_.VirtualNetworkGateway1.Id -eq $GW.Id}
-$Conns=get-Azvirtualnetworkgatewayconnection -ResourceGroupName "RG1" | where-object {$_.VirtualNetworkGateway1.Id -eq $GW.Id}
-```
+ ```azurepowershell-interactive
+ $PubIP=Get-AzPublicIpAddress | where-object {$_.Id -In $GWIpConfigs.PublicIpAddress.Id}
+ ```
-### 3. Delete all connections.
+ Delete the Public IP resources.
-You may be prompted to confirm the deletion of each of the connections.
+ ```azurepowershell-interactive
+ $PubIP | foreach-object {remove-AzpublicIpAddress -Name $_.Name -ResourceGroupName "TestRG1"}
+ ```
-```powershell
-$Conns | ForEach-Object {Remove-AzVirtualNetworkGatewayConnection -Name $_.name -ResourceGroupName $_.ResourceGroupName}
-```
+1. Delete the gateway subnet and set the configuration.
-### 4. Delete the virtual network gateway.
-
-You may be prompted to confirm the deletion of the gateway. If you have a P2S configuration to this VNet in addition to your S2S configuration, deleting the virtual network gateway will automatically disconnect all P2S clients without warning.
--
-```powershell
-Remove-AzVirtualNetworkGateway -Name "GW1" -ResourceGroupName "RG1"
-```
-
-At this point, your virtual network gateway has been deleted. You can use the next steps to delete any resources that are no longer being used.
-
-### 5 Delete the local network gateways.
-
-Get the list of the corresponding local network gateways.
-
-```powershell
-$LNG=Get-AzLocalNetworkGateway -ResourceGroupName "RG1" | where-object {$_.Id -In $Conns.LocalNetworkGateway2.Id}
-```
-
-Delete the local network gateways. You may be prompted to confirm the deletion of each of the local network gateway.
-
-```powershell
-$LNG | ForEach-Object {Remove-AzLocalNetworkGateway -Name $_.Name -ResourceGroupName $_.ResourceGroupName}
-```
-
-### 6. Delete the Public IP address resources.
-
-Get the IP configurations of the virtual network gateway.
-
-```powershell
-$GWIpConfigs = $Gateway.IpConfigurations
-```
-
-Get the list of Public IP address resources used for this virtual network gateway. If the virtual network gateway was active-active, you will see two Public IP addresses.
-
-```powershell
-$PubIP=Get-AzPublicIpAddress | where-object {$_.Id -In $GWIpConfigs.PublicIpAddress.Id}
-```
-
-Delete the Public IP resources.
-
-```powershell
-$PubIP | foreach-object {remove-AzpublicIpAddress -Name $_.Name -ResourceGroupName "RG1"}
-```
-
-### 7. Delete the gateway subnet and set the configuration.
-
-```powershell
-$GWSub = Get-AzVirtualNetwork -ResourceGroupName "RG1" -Name "VNet1" | Remove-AzVirtualNetworkSubnetConfig -Name "GatewaySubnet"
-Set-AzVirtualNetwork -VirtualNetwork $GWSub
-```
+ ```azurepowershell-interactive
+ $GWSub = Get-AzVirtualNetwork -ResourceGroupName "TestRG1" -Name "VNet1" | Remove-AzVirtualNetworkSubnetConfig -Name "GatewaySubnet"
+ Set-AzVirtualNetwork -VirtualNetwork $GWSub
+ ```
## <a name="v2v"></a>Delete a VNet-to-VNet VPN gateway
-To delete a virtual network gateway for a V2V configuration, you must first delete each resource that pertains to the virtual network gateway. Resources must be deleted in a certain order due to dependencies. When working with the examples below, some of the values must be specified, while other values are an output result. We use the following specific values in the examples for demonstration purposes:
-
-VNet name: VNet1<br>
-Resource Group name: RG1<br>
-Virtual network gateway name: GW1<br>
-
-The following steps apply to the [Resource Manager deployment model](../azure-resource-manager/management/deployment-models.md).
-
-### 1. Get the virtual network gateway that you want to delete.
+To delete a virtual network gateway for a V2V configuration, you must first delete each resource that pertains to the virtual network gateway. Resources must be deleted in a certain order due to dependencies. In the following examples, some of the values must be specified, while other values are an output result. We use the following specific values in the examples for demonstration purposes:
-```powershell
-$GW=get-Azvirtualnetworkgateway -Name "GW1" -ResourceGroupName "RG1"
-```
+* VNet name: VNet1
+* Resource Group name: TestRG1
+* Virtual network gateway name: VNet1GW
-### 2. Check to see if the virtual network gateway has any connections.
+1. Get the virtual network gateway that you want to delete.
-```powershell
-get-Azvirtualnetworkgatewayconnection -ResourceGroupName "RG1" | where-object {$_.VirtualNetworkGateway1.Id -eq $GW.Id}
-```
-
-There may be other connections to the virtual network gateway that are part of a different resource group. Check for additional connections in each additional resource group. In this example, we are checking for connections from RG2. Run this for each resource group that you have which may have a connection to the virtual network gateway.
+ ```azurepowershell-interactive
+ $GW=get-Azvirtualnetworkgateway -Name "VNet1GW" -ResourceGroupName "TestRG1"
+ ```
-```powershell
-get-Azvirtualnetworkgatewayconnection -ResourceGroupName "RG2" | where-object {$_.VirtualNetworkGateway2.Id -eq $GW.Id}
-```
+1. Check to see if the virtual network gateway has any connections.
-### 3. Get the list of connections in both directions.
+ ```azurepowershell-interactive
+ get-Azvirtualnetworkgatewayconnection -ResourceGroupName "TestRG1" | where-object {$_.VirtualNetworkGateway1.Id -eq $GW.Id}
+ ```
-Because this is a VNet-to-VNet configuration, you need the list of connections in both directions.
+1. There may be other connections to the virtual network gateway that are part of a different resource group. Check for additional connections in each additional resource group. In this example, we're checking for connections from RG2. Run this for each resource group that might have a connection to the virtual network gateway.
-```powershell
-$ConnsL=get-Azvirtualnetworkgatewayconnection -ResourceGroupName "RG1" | where-object {$_.VirtualNetworkGateway1.Id -eq $GW.Id}
-```
-
-In this example, we are checking for connections from RG2. Run this for each resource group that you have which may have a connection to the virtual network gateway.
+ ```azurepowershell-interactive
+ get-Azvirtualnetworkgatewayconnection -ResourceGroupName "RG2" | where-object {$_.VirtualNetworkGateway2.Id -eq $GW.Id}
+ ```
-```powershell
- $ConnsR=get-Azvirtualnetworkgatewayconnection -ResourceGroupName "<NameOfResourceGroup2>" | where-object {$_.VirtualNetworkGateway2.Id -eq $GW.Id}
- ```
+1. Because this is a VNet-to-VNet configuration, you need the list of connections in both directions. First, get the connections from this resource group.
-### 4. Delete all connections.
+ ```azurepowershell-interactive
+ $ConnsL=get-Azvirtualnetworkgatewayconnection -ResourceGroupName "TestRG1" | where-object {$_.VirtualNetworkGateway1.Id -eq $GW.Id}
+ ```
-You may be prompted to confirm the deletion of each of the connections.
+1. Next, get the connections from the other direction. In this example, we're checking for connections from RG2. Run this for each resource group that might have a connection to the virtual network gateway.
-```powershell
-$ConnsL | ForEach-Object {Remove-AzVirtualNetworkGatewayConnection -Name $_.name -ResourceGroupName $_.ResourceGroupName}
-$ConnsR | ForEach-Object {Remove-AzVirtualNetworkGatewayConnection -Name $_.name -ResourceGroupName $_.ResourceGroupName}
-```
+ ```azurepowershell-interactive
+ $ConnsR=get-Azvirtualnetworkgatewayconnection -ResourceGroupName "<NameOfResourceGroup2>" | where-object {$_.VirtualNetworkGateway2.Id -eq $GW.Id}
+ ```
-### 5. Delete the virtual network gateway.
+1. Delete all connections. You may be prompted to confirm the deletion of each of the connections.
-You may be prompted to confirm the deletion of the virtual network gateway. If you have P2S configurations to your VNets in addition to your V2V configuration, deleting the virtual network gateways will automatically disconnect all P2S clients without warning.
+ ```azurepowershell-interactive
+ $ConnsL | ForEach-Object {Remove-AzVirtualNetworkGatewayConnection -Name $_.name -ResourceGroupName $_.ResourceGroupName}
+ $ConnsR | ForEach-Object {Remove-AzVirtualNetworkGatewayConnection -Name $_.name -ResourceGroupName $_.ResourceGroupName}
+ ```
-```powershell
-Remove-AzVirtualNetworkGateway -Name "GW1" -ResourceGroupName "RG1"
-```
+1. Delete the virtual network gateway. You may be prompted to confirm the deletion of the virtual network gateway. If you have P2S configurations to your VNets in addition to your V2V configuration, deleting the virtual network gateways will automatically disconnect all P2S clients without warning.
-At this point, your virtual network gateway has been deleted. You can use the next steps to delete any resources that are no longer being used.
+ ```azurepowershell-interactive
+ Remove-AzVirtualNetworkGateway -Name "VNet1GW" -ResourceGroupName "TestRG1"
+ ```
-### 6. Delete the Public IP address resources
+ At this point, your virtual network gateway has been deleted. You can use the next steps to delete any resources that are no longer being used.
-Get the IP configurations of the virtual network gateway.
+1. To delete the Public IP address resources, get the IP configurations of the virtual network gateway.
-```powershell
-$GWIpConfigs = $Gateway.IpConfigurations
-```
+ ```azurepowershell-interactive
   $GWIpConfigs = $GW.IpConfigurations
+ ```
-Get the list of Public IP address resources used for this virtual network gateway. If the virtual network gateway was active-active, you will see two Public IP addresses.
+1. Next, get the list of Public IP address resources used for this virtual network gateway. If the virtual network gateway was active-active, you'll see two Public IP addresses.
-```powershell
-$PubIP=Get-AzPublicIpAddress | where-object {$_.Id -In $GWIpConfigs.PublicIpAddress.Id}
-```
+ ```azurepowershell-interactive
+ $PubIP=Get-AzPublicIpAddress | where-object {$_.Id -In $GWIpConfigs.PublicIpAddress.Id}
+ ```
-Delete the Public IP resources. You may be prompted to confirm the deletion of the Public IP.
+1. Delete the Public IP resources. You may be prompted to confirm the deletion of the Public IP.
-```powershell
-$PubIP | foreach-object {remove-AzpublicIpAddress -Name $_.Name -ResourceGroupName "<NameOfResourceGroup1>"}
-```
+ ```azurepowershell-interactive
+ $PubIP | foreach-object {remove-AzpublicIpAddress -Name $_.Name -ResourceGroupName "<NameOfResourceGroup1>"}
+ ```
-### 7. Delete the gateway subnet and set the configuration.
+1. Delete the gateway subnet and set the configuration.
-```powershell
-$GWSub = Get-AzVirtualNetwork -ResourceGroupName "RG1" -Name "VNet1" | Remove-AzVirtualNetworkSubnetConfig -Name "GatewaySubnet"
-Set-AzVirtualNetwork -VirtualNetwork $GWSub
-```
+ ```azurepowershell-interactive
+ $GWSub = Get-AzVirtualNetwork -ResourceGroupName "TestRG1" -Name "VNet1" | Remove-AzVirtualNetworkSubnetConfig -Name "GatewaySubnet"
+ Set-AzVirtualNetwork -VirtualNetwork $GWSub
+ ```
-## <a name="deletep2s"></a>Delete a Point-to-Site VPN gateway
+## <a name="deletep2s"></a>Delete a point-to-site VPN gateway
-To delete a virtual network gateway for a P2S configuration, you must first delete each resource that pertains to the virtual network gateway. Resources must be deleted in a certain order due to dependencies. When working with the examples below, some of the values must be specified, while other values are an output result. We use the following specific values in the examples for demonstration purposes:
-
-VNet name: VNet1<br>
-Resource Group name: RG1<br>
-Virtual network gateway name: GW1<br>
-
-The following steps apply to the [Resource Manager deployment model](../azure-resource-manager/management/deployment-models.md).
+To delete a virtual network gateway for a P2S configuration, you must first delete each resource that pertains to the virtual network gateway. Resources must be deleted in a certain order due to dependencies. When you work with the examples below, some of the values must be specified, while other values are an output result. We use the following specific values in the examples for demonstration purposes:
+* VNet name: VNet1
+* Resource Group name: TestRG1
+* Virtual network gateway name: VNet1GW
>[!NOTE]
> When you delete the VPN gateway, all connected clients will be disconnected from the VNet without warning.
->
->
-
-### 1. Get the virtual network gateway that you want to delete.
-
-```powershell
-$GW=get-Azvirtualnetworkgateway -Name "GW1" -ResourceGroupName "RG1"
-```
-### 2. Delete the virtual network gateway.
+1. Get the virtual network gateway that you want to delete.
-You may be prompted to confirm the deletion of the virtual network gateway.
+ ```azurepowershell-interactive
   $GW=get-Azvirtualnetworkgateway -Name "VNet1GW" -ResourceGroupName "TestRG1"
+ ```
-```powershell
-Remove-AzVirtualNetworkGateway -Name "GW1" -ResourceGroupName "RG1"
-```
+1. Delete the virtual network gateway. You may be prompted to confirm the deletion of the virtual network gateway.
-At this point, your virtual network gateway has been deleted. You can use the next steps to delete any resources that are no longer being used.
+ ```azurepowershell-interactive
+ Remove-AzVirtualNetworkGateway -Name "VNet1GW" -ResourceGroupName "TestRG1"
+ ```
-### 3. Delete the Public IP address resources
+ At this point, your virtual network gateway has been deleted. You can use the next steps to delete any resources that are no longer being used.
-Get the IP configurations of the virtual network gateway.
+1. To delete the Public IP address resources, first get the IP configurations of the virtual network gateway.
-```powershell
-$GWIpConfigs = $Gateway.IpConfigurations
-```
+ ```azurepowershell-interactive
   $GWIpConfigs = $GW.IpConfigurations
+ ```
-Get the list of Public IP addresses used for this virtual network gateway. If the virtual network gateway was active-active, you will see two Public IP addresses.
+ Next, get the list of Public IP addresses used for this virtual network gateway. If the virtual network gateway was active-active, you'll see two Public IP addresses.
-```powershell
-$PubIP=Get-AzPublicIpAddress | where-object {$_.Id -In $GWIpConfigs.PublicIpAddress.Id}
-```
+ ```azurepowershell-interactive
+ $PubIP=Get-AzPublicIpAddress | where-object {$_.Id -In $GWIpConfigs.PublicIpAddress.Id}
+ ```
-Delete the Public IPs. You may be prompted to confirm the deletion of the Public IP.
+1. Delete the Public IPs. You may be prompted to confirm the deletion of the Public IP.
-```powershell
-$PubIP | foreach-object {remove-AzpublicIpAddress -Name $_.Name -ResourceGroupName "<NameOfResourceGroup1>"}
-```
+ ```azurepowershell-interactive
+ $PubIP | foreach-object {remove-AzpublicIpAddress -Name $_.Name -ResourceGroupName "<NameOfResourceGroup1>"}
+ ```
-### 4. Delete the gateway subnet and set the configuration.
+1. Delete the gateway subnet and set the configuration.
-```powershell
-$GWSub = Get-AzVirtualNetwork -ResourceGroupName "RG1" -Name "VNet1" | Remove-AzVirtualNetworkSubnetConfig -Name "GatewaySubnet"
-Set-AzVirtualNetwork -VirtualNetwork $GWSub
-```
+ ```azurepowershell-interactive
+ $GWSub = Get-AzVirtualNetwork -ResourceGroupName "TestRG1" -Name "VNet1" | Remove-AzVirtualNetworkSubnetConfig -Name "GatewaySubnet"
+ Set-AzVirtualNetwork -VirtualNetwork $GWSub
+ ```
## <a name="delete"></a>Delete a VPN gateway by deleting the resource group
-If you are not concerned about keeping any of your resources in the resource group and you just want to start over, you can delete an entire resource group. This is a quick way to remove everything. The following steps apply only to the [Resource Manager deployment model](../azure-resource-manager/management/deployment-models.md).
+If you aren't concerned about keeping any of your resources in the resource group and you just want to start over, you can delete an entire resource group. This is a quick way to remove everything.
-### 1. Get a list of all the resource groups in your subscription.
+1. Get a list of all the resource groups in your subscription.
-```powershell
-Get-AzResourceGroup
-```
+ ```azurepowershell-interactive
+ Get-AzResourceGroup
+ ```
-### 2. Locate the resource group that you want to delete.
+1. Locate the resource group that you want to delete.
-Locate the resource group that you want to delete and view the list of resources in that resource group. In the example, the name of the resource group is RG1. Modify the example to retrieve a list of all the resources.
+ Locate the resource group that you want to delete and view the list of resources in that resource group. In the example, the name of the resource group is TestRG1. Modify the example to retrieve a list of all the resources.
-```powershell
-Find-AzResource -ResourceGroupNameContains RG1
-```
+ ```azurepowershell-interactive
+ Find-AzResource -ResourceGroupNameContains TestRG1
+ ```
-### 3. Verify the resources in the list.
+1. Verify the resources in the list.
-When the list is returned, review it to verify that you want to delete all the resources in the resource group, as well as the resource group itself. If you want to keep some of the resources in the resource group, use the steps in the earlier sections of this article to delete your gateway.
+ When the list is returned, review it to verify that you want to delete all the resources in the resource group, and the resource group itself. If you want to keep some of the resources in the resource group, use the steps in the earlier sections of this article to delete your gateway.
-### 4. Delete the resource group and resources.
+1. Delete the resource group and resources. To delete the resource group and all the resources contained in it, modify the example and run it.
-To delete the resource group and all the resource contained in the resource group, modify the example and run.
+ ```azurepowershell-interactive
+ Remove-AzResourceGroup -Name TestRG1
+ ```
-```powershell
-Remove-AzResourceGroup -Name RG1
-```
+1. Check the status. It takes some time for Azure to delete all the resources. You can check the status of your resource group by using this cmdlet.
-### 5. Check the status.
+ ```azurepowershell-interactive
+ Get-AzResourceGroup -ResourceGroupName TestRG1
+ ```
-It takes some time for Azure to delete all the resources. You can check the status of your resource group by using this cmdlet.
+ The result that is returned shows 'Succeeded'.
-```powershell
-Get-AzResourceGroup -ResourceGroupName RG1
-```
+   ```output
+ ResourceGroupName : TestRG1
+ Location : eastus
+ ProvisioningState : Succeeded
+ ```
-The result that is returned shows 'Succeeded'.
+## Next steps
-```
-ResourceGroupName : RG1
-Location : eastus
-ProvisioningState : Succeeded
-```
+For FAQ information, see the [Azure VPN Gateway FAQ](vpn-gateway-vpn-faq.md).
vpn-gateway Vpn Gateway Howto Point To Site Classic Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-howto-point-to-site-classic-azure-portal.md
description: Learn how to create a classic Point-to-Site VPN Gateway connection
Previously updated : 06/09/2023 Last updated : 08/21/2023
# Configure a Point-to-Site connection by using certificate authentication (classic)
-This article shows you how to create a VNet with a Point-to-Site connection using the classic (legacy) deployment model. This configuration uses certificates to authenticate the connecting client, either self-signed or CA issued. Unless you want to work in the classic deployment model specifically, we recommend that you use the [Resource Manager version of this article](vpn-gateway-howto-point-to-site-resource-manager-portal.md).
+This article shows you how to create a VNet with a Point-to-Site connection using the classic (legacy) deployment model. This configuration uses certificates to authenticate the connecting client, either self-signed or CA issued. **Unless you want to work in the classic deployment model specifically, we recommend that you use the [Resource Manager version of this article](vpn-gateway-howto-point-to-site-resource-manager-portal.md)**.
You use a Point-to-Site (P2S) VPN gateway to create a secure connection to your virtual network from an individual client computer. Point-to-Site VPN connections are useful when you want to connect to your VNet from a remote location. When you have only a few clients that need to connect to a VNet, a P2S VPN is a useful solution to use instead of a Site-to-Site VPN. A P2S VPN connection is established by starting it from the client computer.
vpn-gateway Vpn Gateway Howto Site To Site Classic Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-howto-site-to-site-classic-portal.md
Previously updated : 06/09/2023 Last updated : 08/21/2023
# Create a Site-to-Site connection using the Azure portal (classic)
-This article shows you how to use the Azure portal to create a Site-to-Site VPN gateway connection from your on-premises network to the VNet. The steps in this article apply to the classic (legacy) deployment model and don't apply to the current deployment model, Resource Manager. Unless you want to work in the classic deployment model specifically, we recommend that you use the [Resource Manager version of this article](./tutorial-site-to-site-portal.md).
+This article shows you how to use the Azure portal to create a Site-to-Site VPN gateway connection from your on-premises network to the VNet. The steps in this article apply to the classic (legacy) deployment model and don't apply to the current deployment model, Resource Manager. **Unless you want to work in the classic deployment model specifically, we recommend that you use the [Resource Manager version of this article](./tutorial-site-to-site-portal.md)**.
A Site-to-Site VPN gateway connection is used to connect your on-premises network to an Azure virtual network over an IPsec/IKE (IKEv1 or IKEv2) VPN tunnel. This type of connection requires a VPN device located on-premises that has an externally facing public IP address assigned to it. For more information about VPN gateways, see [About VPN gateway](vpn-gateway-about-vpngateways.md).
vpn-gateway Vpn Gateway Howto Vnet Vnet Portal Classic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-howto-vnet-vnet-portal-classic.md
Previously updated : 06/09/2023 Last updated : 08/21/2023
# Configure a VNet-to-VNet connection (classic)
This article helps you create a VPN gateway connection between virtual networks. The virtual networks can be in the same or different regions, and from the same or different subscriptions.
-The steps in this article apply to the classic (legacy) deployment model and don't apply to the current deployment model, Resource Manager. Unless you want to work in the classic deployment model specifically, we recommend that you use the [Resource Manager version of this article](vpn-gateway-howto-vnet-vnet-resource-manager-portal.md).
+The steps in this article apply to the classic (legacy) deployment model and don't apply to the current deployment model, Resource Manager. **Unless you want to work in the classic deployment model specifically, we recommend that you use the [Resource Manager version of this article](vpn-gateway-howto-vnet-vnet-resource-manager-portal.md).**
:::image type="content" source="./media/vpn-gateway-howto-vnet-vnet-portal-classic/classic-diagram.png" alt-text="Diagram showing classic VNet-to-VNet architecture.":::
vpn-gateway Vpn Gateway Ipsecikepolicy Rm Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-ipsecikepolicy-rm-powershell.md
The steps of creating a VNet-to-VNet connection with an IPsec/IKE policy are sim
See [Create a VNet-to-VNet connection](vpn-gateway-vnet-vnet-rm-ps.md) for more detailed steps for creating a VNet-to-VNet connection.
-### Step 1 - Create the second virtual network and VPN gateway
+### Step 1: Create the second virtual network and VPN gateway
#### 1. Declare your variables
New-AzVirtualNetworkGateway -Name $GWName2 -ResourceGroupName $RG2 -Location $Lo
It can take about 45 minutes or more to create the VPN gateway.
-### Step 2 - Create a VNet-toVNet connection with the IPsec/IKE policy
+### Step 2: Create a VNet-toVNet connection with the IPsec/IKE policy
Similar to the S2S VPN connection, create an IPsec/IKE policy, then apply the policy to the new connection. If you used Azure Cloud Shell, your connection may have timed out. If so, re-connect and state the necessary variables again.
vpn-gateway Vpn Gateway Multi Site https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-multi-site.md
Previously updated : 06/09/2023 Last updated : 08/21/2023
# Add a Site-to-Site connection to a VNet with an existing VPN gateway connection (classic)
This article walks you through using PowerShell to add Site-to-Site (S2S) connections to a VPN gateway that has an existing connection using the classic (legacy) deployment model. This type of connection is sometimes referred to as a "multi-site" configuration. These steps don't apply to ExpressRoute/Site-to-Site coexisting connection configurations.
-The steps in this article apply to the classic (legacy) deployment model and don't apply to the current deployment model, Resource Manager. Unless you want to work in the classic deployment model specifically, we recommend that you use the [Resource Manager version of this article](vpn-gateway-howto-multi-site-to-site-resource-manager-portal.md).
+The steps in this article apply to the classic (legacy) deployment model and don't apply to the current deployment model, Resource Manager. **Unless you want to work in the classic deployment model specifically, we recommend that you use the [Resource Manager version of this article](vpn-gateway-howto-multi-site-to-site-resource-manager-portal.md)**.
[!INCLUDE [deployment models](../../includes/vpn-gateway-classic-deployment-model-include.md)]
vpn-gateway Vpn Gateway Peering Gateway Transit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-peering-gateway-transit.md
Previously updated : 11/09/2022 Last updated : 08/18/2023
This article helps you configure gateway transit for virtual network peering. [V
:::image type="content" source="./media/vpn-gateway-peering-gateway-transit/gatewaytransit.png" alt-text="Diagram of Gateway transit." lightbox="./media/vpn-gateway-peering-gateway-transit/gatewaytransit.png":::
-In the diagram, gateway transit allows the peered virtual networks to use the Azure VPN gateway in Hub-RM. Connectivity available on the VPN gateway, including S2S, P2S, and VNet-to-VNet connections, applies to all three virtual networks. The transit option is available for peering between the same, or different deployment models. If you're configuring transit between different deployment models, the hub virtual network and virtual network gateway must be in the [Resource Manager deployment model](../azure-resource-manager/management/deployment-models.md), not the classic deployment model.
+In the diagram, gateway transit allows the peered virtual networks to use the Azure VPN gateway in Hub-RM. Connectivity available on the VPN gateway, including S2S, P2S, and VNet-to-VNet connections, applies to all three virtual networks.
+
+The transit option is available for peering between the same or different deployment models. If you're configuring transit between different deployment models, the hub virtual network and virtual network gateway must be in the [Resource Manager deployment model](../azure-resource-manager/management/deployment-models.md), not the legacy classic deployment model.
-In hub-and-spoke network architecture, gateway transit allows spoke virtual networks to share the VPN gateway in the hub, instead of deploying VPN gateways in every spoke virtual network. Routes to the gateway-connected virtual networks or on-premises networks will propagate to the routing tables for the peered virtual networks using gateway transit. You can disable the automatic route propagation from the VPN gateway. Create a routing table with the "**Disable BGP route propagation**" option, and associate the routing table to the subnets to prevent the route distribution to those subnets. For more information, see [Virtual network routing table](../virtual-network/manage-route-table.md).
+In hub-and-spoke network architecture, gateway transit allows spoke virtual networks to share the VPN gateway in the hub, instead of deploying VPN gateways in every spoke virtual network. Routes to the gateway-connected virtual networks or on-premises networks propagate to the routing tables for the peered virtual networks using gateway transit.
+
+You can disable the automatic route propagation from the VPN gateway. Create a routing table with the **Disable BGP route propagation** option, and associate the routing table with the subnets to prevent the route distribution to those subnets. For more information, see [Virtual network routing table](../virtual-network/manage-route-table.md).
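A minimal sketch of that setup follows. The route table, VNet, subnet, and address prefix names are illustrative assumptions; the point is the -DisableBgpRoutePropagation switch and the subnet association.

```azurepowershell-interactive
# Create a route table that doesn't accept route propagation from the gateway.
$rt = New-AzRouteTable -Name "SpokeRouteTable" -ResourceGroupName "SpokeRG1" `
    -Location "EastUS" -DisableBgpRoutePropagation

# Associate the route table with a spoke subnet to stop gateway routes from reaching it.
$vnet = Get-AzVirtualNetwork -Name "Spoke-RM" -ResourceGroupName "SpokeRG1"
Set-AzVirtualNetworkSubnetConfig -Name "FrontEnd" -VirtualNetwork $vnet `
    -AddressPrefix "10.10.1.0/24" -RouteTable $rt
$vnet | Set-AzVirtualNetwork
```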
-There are two scenarios in this article:
+There are two scenarios in this article. Select the scenario that applies to your environment. Most deployments use the **Same deployment model** scenario. You need the **Different deployment models** scenario only if a classic (legacy) VNet already exists in your environment.
* **Same deployment model**: Both virtual networks are created in the Resource Manager deployment model.
-* **Different deployment models**: The spoke virtual network is created in the classic deployment model, and the hub virtual network and gateway are in the Resource Manager deployment model.
+* **Different deployment models**: The spoke virtual network is created in the classic deployment model, and the hub virtual network and gateway are in the Resource Manager deployment model. This scenario is useful when you need to connect a legacy VNet that already exists in the classic deployment model.
> [!NOTE]
> If you make a change to the topology of your network and have Windows VPN clients, the VPN client package for Windows clients must be downloaded and installed again in order for the changes to be applied to the client.
There are two scenarios in this article:
## Prerequisites
-Before you begin, verify that you have the following virtual networks and permissions:
+This article requires the following VNets and permissions. If you aren't working with the different deployment model scenario, you don't need to create the classic VNet.
### <a name="vnet"></a>Virtual networks
-| VNet | Deployment model | Virtual network gateway |
-||--||
+| VNet | Configuration steps | Virtual network gateway |
+|---|---|---|
| Hub-RM | [Resource Manager](./tutorial-site-to-site-portal.md) | [Yes](tutorial-create-gateway-portal.md) |
| Spoke-RM | [Resource Manager](./tutorial-site-to-site-portal.md) | No |
| Spoke-Classic | [Classic](vpn-gateway-howto-site-to-site-classic-portal.md#CreatVNet) | No |
Learn more about [built-in roles](../role-based-access-control/built-in-roles.md
## <a name="same"></a>Same deployment model
-In this scenario, the virtual networks are both in the Resource Manager deployment model. Use the following steps to create or update the virtual network peerings to enable gateway transit.
+This is the more common scenario. In this scenario, the virtual networks are both in the Resource Manager deployment model. Use the following steps to create or update the virtual network peerings to enable gateway transit.
### To add a peering and enable transit
-1. In the [Azure portal](https://portal.azure.com), create or update the virtual network peering from the Hub-RM. Navigate to the **Hub-RM** virtual network. Select **Peerings**, then **+ Add** to open **Add peering**.
+1. In the [Azure portal](https://portal.azure.com), create or update the virtual network peering from the Hub-RM. Go to the **Hub-RM** virtual network. Select **Peerings**, then **+ Add** to open **Add peering**.
1. On the **Add peering** page, configure the values for **This virtual network**.
   * Peering link name: Name the link. Example: **HubRMToSpokeRM**
   * Traffic to remote virtual network: **Allow**
   * Traffic forwarded from remote virtual network: **Allow**
- * Virtual network gateway: **Use this virtual network's gateway**
+ * Virtual network gateway: **Use this virtual network's gateway or Route Server**
   :::image type="content" source="./media/vpn-gateway-peering-gateway-transit/peering-vnet.png" alt-text="Screenshot shows add peering." lightbox="./media/vpn-gateway-peering-gateway-transit/peering-vnet.png":::

1. On the same page, continue to configure the values for the **Remote virtual network**.
   * Peering link name: Name the link. Example: **SpokeRMtoHubRM**
- * Deployment model: **Resource Manager**
+ * Virtual network deployment model: **Resource Manager**
+ * I know my resource ID: Leave blank. You only need to select this if you don't have read access to the virtual network or subscription you want to peer with.
+ * Subscription: Select the subscription.
   * Virtual Network: **Spoke-RM**
   * Traffic to remote virtual network: **Allow**
   * Traffic forwarded from remote virtual network: **Allow**
- * Virtual network gateway: **Use the remote virtual network's gateway**
+ * Virtual network gateway: **Use the remote virtual network's gateway or Route Server**
:::image type="content" source="./media/vpn-gateway-peering-gateway-transit/peering-remote.png" alt-text="Screenshot shows values for remote virtual network." lightbox="./media/vpn-gateway-peering-gateway-transit/peering-remote.png":::
In this scenario, the virtual networks are both in the Resource Manager deployme
### To modify an existing peering for transit
-If the peering was already created, you can modify the peering for transit.
+If you have an existing peering, you can modify it for transit.
-1. Navigate to the virtual network. Select **Peerings** and select the peering that you want to modify.
-
- :::image type="content" source="./media/vpn-gateway-peering-gateway-transit/peering-modify.png" alt-text="Screenshot shows select peerings." lightbox="./media/vpn-gateway-peering-gateway-transit/peering-modify.png":::
+1. Go to the virtual network. Select **Peerings** and select the peering that you want to modify. For example, on the Spoke-RM VNet, select the SpokeRMtoHubRM peering.
1. Update the VNet peering.
   * Traffic to remote virtual network: **Allow**
   * Traffic forwarded from remote virtual network: **Allow**
- * Virtual network gateway: **Use remote virtual network's gateway**
-
- :::image type="content" source="./media/vpn-gateway-peering-gateway-transit/modify-peering-settings.png" alt-text="Screenshot shows modify peering gateway." lightbox="./media/vpn-gateway-peering-gateway-transit/modify-peering-settings.png":::
+ * Virtual network gateway or Route Server: **Use the remote virtual network's gateway or Route Server**
1. **Save** the peering settings.

### <a name="ps-same"></a>PowerShell sample
-You can also use PowerShell to create or update the peering with the example above. Replace the variables with the names of your virtual networks and resource groups.
+You can also use PowerShell to create or update the peering. Replace the variables with the names of your virtual networks and resource groups.
```azurepowershell-interactive
$SpokeRG = "SpokeRG1"
```
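Continuing from that variable, a sketch of the key peering cmdlets, assuming the Hub-RM and Spoke-RM VNets from this article already exist in the named resource groups ($HubRG is an assumed variable):

```azurepowershell-interactive
$HubRG = "HubRG1"
$hub   = Get-AzVirtualNetwork -Name "Hub-RM" -ResourceGroupName $HubRG
$spoke = Get-AzVirtualNetwork -Name "Spoke-RM" -ResourceGroupName $SpokeRG

# Hub side: allow peered spokes to use this VNet's gateway.
Add-AzVirtualNetworkPeering -Name "HubRMToSpokeRM" -VirtualNetwork $hub `
    -RemoteVirtualNetworkId $spoke.Id -AllowForwardedTraffic -AllowGatewayTransit

# Spoke side: use the remote (hub) gateway.
Add-AzVirtualNetworkPeering -Name "SpokeRMtoHubRM" -VirtualNetwork $spoke `
    -RemoteVirtualNetworkId $hub.Id -AllowForwardedTraffic -UseRemoteGateways
```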
In this configuration, the spoke VNet **Spoke-Classic** is in the classic deploy
For this configuration, you only need to configure the **Hub-RM** virtual network. You don't need to configure anything on the **Spoke-Classic** VNet.
-1. In the Azure portal, navigate to the **Hub-RM** virtual network, select **Peerings**, then select **+ Add**.
+1. In the Azure portal, go to the **Hub-RM** virtual network, select **Peerings**, then select **+ Add**.
1. On the **Add peering** page, configure the following values:
   * Peering link name: Name the link. Example: **HubRMToClassic**
   * Traffic to remote virtual network: **Allow**
   * Traffic forwarded from remote virtual network: **Allow**
- * Virtual network gateway: **Use this virtual network's gateway**
- * Remote virtual network: **Classic**
+ * Virtual network gateway or Route Server: **Use this virtual network's gateway or Route Server**
+ * Peering link name: This value disappears when you select Classic for the virtual network deployment model.
+ * Virtual network deployment model: **Classic**
+ * I know my resource ID: Leave blank. You only need to select this if you don't have read access to the virtual network or subscription you want to peer with.
   :::image type="content" source="./media/vpn-gateway-peering-gateway-transit/peering-classic.png" alt-text="Add peering page for Spoke-Classic" lightbox="./media/vpn-gateway-peering-gateway-transit/peering-classic.png":::

1. Verify the subscription is correct, then select the virtual network from the dropdown.
1. Select **Add** to add the peering.
-1. Verify the peering status as **Connected** on the Hub-RM virtual network.
+1. Verify the peering status as **Connected** on the Hub-RM virtual network.
For this configuration, you don't need to configure anything on the **Spoke-Classic** virtual network. Once the status shows **Connected**, the spoke virtual network can use the connectivity through the VPN gateway in the hub virtual network. ### <a name="ps-different"></a>PowerShell sample
-You can also use PowerShell to create or update the peering with the example above. Replace the variables and subscription ID with the values of your virtual network and resource groups, and subscription. You only need to create virtual network peering on the hub virtual network.
+You can also use PowerShell to create or update the peering. Replace the variables and subscription ID with the values of your virtual network and resource groups, and subscription. You only need to create virtual network peering on the hub virtual network.
```azurepowershell-interactive
$HubRG = "HubRG1"
```
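Continuing from that variable, a hedged sketch for the classic spoke: only the hub side is peered, and the classic VNet is referenced by its resource ID (the subscription ID and resource group below are placeholders).

```azurepowershell-interactive
$hub = Get-AzVirtualNetwork -Name "Hub-RM" -ResourceGroupName $HubRG

# Classic VNets use the Microsoft.ClassicNetwork resource provider in the ID.
Add-AzVirtualNetworkPeering -Name "HubRMToClassic" -VirtualNetwork $hub `
    -RemoteVirtualNetworkId "/subscriptions/<subscription-id>/resourceGroups/<classic-rg>/providers/Microsoft.ClassicNetwork/virtualNetworks/Spoke-Classic" `
    -AllowForwardedTraffic -AllowGatewayTransit
```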
vpn-gateway Vpn Gateway Radius Mfa Nsp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-radius-mfa-nsp.md
To enable MFA, the users must be in Azure Active Directory (Azure AD), which mus
-### Step 2 Configure the NPS for Azure AD MFA
+### Step 2: Configure the NPS for Azure AD MFA
1. On the NPS server, [install the NPS extension for Azure AD MFA](../active-directory/authentication/howto-mfa-nps-extension.md#install-the-nps-extension).
2. Open the NPS console, right-click **RADIUS Clients**, and then select **New**. Create the RADIUS client by specifying the following settings:
To enable MFA, the users must be in Azure Active Directory (Azure AD), which mus
4. Go to **Policies** > **Network Policies**, double-click the **Connections to Microsoft Routing and Remote Access server** policy, select **Grant access**, and then select **OK**.
-### Step 3 Configure the virtual network gateway
+### Step 3: Configure the virtual network gateway
1. Sign in to the [Azure portal](https://portal.azure.com).
2. Open the virtual network gateway that you created. Make sure that the gateway type is set to **VPN** and that the VPN type is **route-based**.
vpn-gateway Vpn Gateway Troubleshoot Site To Site Cannot Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-troubleshoot-site-to-site-cannot-connect.md
Check the type of the Azure VPN gateway.
1. Go to the virtual network gateway for your VNet. On the **Overview** page, you see the Gateway type, VPN type, and gateway SKU.
-### Step 1. Check whether the on-premises VPN device is validated
+### Step 1: Check whether the on-premises VPN device is validated
1. Check whether you're using a [validated VPN device and operating system version](vpn-gateway-about-vpn-devices.md#devicetable). If the device isn't a validated VPN device, you might have to contact the device manufacturer to see if there's a compatibility issue.
2. Make sure that the VPN device is correctly configured. For more information, see [Edit device configuration samples](vpn-gateway-about-vpn-devices.md#editing).
-### Step 2. Verify the shared key
+### Step 2: Verify the shared key
Compare the shared key for the on-premises VPN device to the Azure Virtual Network VPN to make sure that the keys match.
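For the Resource Manager deployment model, you can read the configured key with a single cmdlet. A sketch, assuming a connection named VNet1toSite1 in resource group TestRG1 (substitute your own names):

```azurepowershell-interactive
Get-AzVirtualNetworkGatewayConnectionSharedKey -Name "VNet1toSite1" -ResourceGroupName "TestRG1"
```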
For the classic deployment model:
Get-AzureVNetGatewayKey -VNetName -LocalNetworkSiteName
```
-### Step 3. Verify the VPN peer IPs
+### Step 3: Verify the VPN peer IPs
- The IP definition in the **Local Network Gateway** object in Azure should match the on-premises device IP.
- The Azure gateway IP definition that is set on the on-premises device should match the Azure gateway IP.
-### Step 4. Check UDR and NSGs on the gateway subnet
+### Step 4: Check UDR and NSGs on the gateway subnet
Check for and remove user-defined routing (UDR) or Network Security Groups (NSGs) on the gateway subnet, and then test the result. If the problem is resolved, validate the settings that UDR or NSG applied.
-### Step 5. Check the on-premises VPN device external interface address
+### Step 5: Check the on-premises VPN device external interface address
If the Internet-facing IP address of the VPN device is included in the **Local network** definition in Azure, you might experience sporadic disconnections.
-### Step 6. Verify that the subnets match exactly (Azure policy-based gateways)
+### Step 6: Verify that the subnets match exactly (Azure policy-based gateways)
- Verify that the virtual network address space(s) match exactly between the Azure virtual network and on-premises definitions.
- Verify that the subnets match exactly between the **Local Network Gateway** and on-premises definitions for the on-premises network.
-### Step 7. Verify the Azure gateway health probe
+### Step 7: Verify the Azure gateway health probe
1. Open the health probe by browsing to the following URL:
If the Internet-facing IP address of the VPN device is included in the **Local n
> Basic SKU VPN gateways do not reply to the health probe.
> They are not recommended for [production workloads](vpn-gateway-about-vpn-gateway-settings.md#workloads).
-### Step 8. Check whether the on-premises VPN device has the perfect forward secrecy feature enabled
+### Step 8: Check whether the on-premises VPN device has the perfect forward secrecy feature enabled
The perfect forward secrecy feature can cause disconnection problems. If the VPN device has perfect forward secrecy enabled, disable the feature. Then update the VPN gateway IPsec policy.
vpn-gateway Vpn Gateway Troubleshoot Site To Site Disconnected Intermittently https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-troubleshoot-site-to-site-disconnected-intermittently.md
Check the type of Azure virtual network gateway:
![The overview of the gateway](media/vpn-gateway-troubleshoot-site-to-site-disconnected-intermittently/gatewayoverview.png)
-### Step 1 Check whether the on-premises VPN device is validated
+### Step 1: Check whether the on-premises VPN device is validated
1. Check whether you are using a [validated VPN device and operating system version](vpn-gateway-about-vpn-devices.md#devicetable). If the VPN device is not validated, you may have to contact the device manufacturer to see if there is any compatibility issue.
2. Make sure that the VPN device is correctly configured. For more information, see [Editing device configuration samples](vpn-gateway-about-vpn-devices.md#editing).
-### Step 2 Check the Security Association settings(for policy-based Azure virtual network gateways)
+### Step 2: Check the Security Association settings (for policy-based Azure virtual network gateways)
1. Make sure that the virtual network, subnets, and ranges in the **Local network gateway** definition in Microsoft Azure are the same as the configuration on the on-premises VPN device.
2. Verify that the Security Association settings match.
-### Step 3 Check for User-Defined Routes or Network Security Groups on Gateway Subnet
+### Step 3: Check for User-Defined Routes or Network Security Groups on Gateway Subnet
A user-defined route on the gateway subnet may be restricting some traffic and allowing other traffic. This makes it appear that the VPN connection is unreliable for some traffic and good for others.
-### Step 4 Check the "one VPN Tunnel per Subnet Pair" setting (for policy-based virtual network gateways)
+### Step 4: Check the "one VPN Tunnel per Subnet Pair" setting (for policy-based virtual network gateways)
Make sure that the on-premises VPN device is set to have **one VPN tunnel per subnet pair** for policy-based virtual network gateways.
-### Step 5 Check for Security Association Limitations
+### Step 5: Check for Security Association Limitations
The virtual network gateway has a limit of 200 subnet Security Association pairs. If the number of Azure virtual network subnets multiplied by the number of local subnets is greater than 200, you might see sporadic subnets disconnecting. For example, 15 Azure subnets connected to 14 local subnets yields 210 pairs, which exceeds the limit.
-### Step 6 Check on-premises VPN device external interface address
+### Step 6: Check on-premises VPN device external interface address
If the Internet-facing IP address of the VPN device is included in the **Local network gateway address space** definition in Azure, you may experience sporadic disconnections.
-### Step 7 Check whether the on-premises VPN device has Perfect Forward Secrecy enabled
+### Step 7: Check whether the on-premises VPN device has Perfect Forward Secrecy enabled
The **Perfect Forward Secrecy** feature can cause disconnection problems. If the VPN device has **Perfect Forward Secrecy** enabled, disable the feature. Then [update the virtual network gateway IPsec policy](vpn-gateway-ipsecikepolicy-rm-powershell.md#managepolicy).
vpn-gateway Vpn Gateway Vnet Vnet Rm Ps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-vnet-vnet-rm-ps.md
Previously updated : 09/02/2020 Last updated : 08/22/2023

# Configure a VNet-to-VNet VPN gateway connection using PowerShell
-This article helps you connect virtual networks by using the VNet-to-VNet connection type. The virtual networks can be in the same or different regions, and from the same or different subscriptions. When connecting VNets from different subscriptions, the subscriptions do not need to be associated with the same Active Directory tenant.
+This article helps you connect virtual networks by using the VNet-to-VNet connection type. The virtual networks can be in the same or different regions, and from the same or different subscriptions. When you connect virtual networks from different subscriptions, the subscriptions don't need to be associated with the same Active Directory tenant.
The steps in this article apply to the [Resource Manager deployment model](../azure-resource-manager/management/deployment-models.md) and use PowerShell. You can also create this configuration using a different deployment tool or deployment model by selecting a different option from the following list:
The steps in this article apply to the [Resource Manager deployment model](../az
> * [Connect different deployment models - Azure portal](vpn-gateway-connect-different-deployment-models-portal.md)
> * [Connect different deployment models - PowerShell](vpn-gateway-connect-different-deployment-models-powershell.md)

+## <a name="about"></a>About connecting VNets
-There are multiple ways to connect VNets. The sections below describe different ways to connect virtual networks.
+There are multiple ways to connect VNets. The following sections describe the different options.
### VNet-to-VNet
-Configuring a VNet-to-VNet connection is a good way to easily connect VNets. Connecting a virtual network to another virtual network using the VNet-to-VNet connection type (VNet2VNet) is similar to creating a Site-to-Site IPsec connection to an on-premises location. Both connectivity types use a VPN gateway to provide a secure tunnel using IPsec/IKE, and both function the same way when communicating. The difference between the connection types is the way the local network gateway is configured. When you create a VNet-to-VNet connection, you do not see the local network gateway address space. It is automatically created and populated. If you update the address space for one VNet, the other VNet automatically knows to route to the updated address space. Creating a VNet-to-VNet connection is typically faster and easier than creating a Site-to-Site connection between VNets.
+Configuring a VNet-to-VNet connection is a good way to easily connect VNets. Connecting a virtual network to another virtual network using the VNet-to-VNet connection type (VNet2VNet) is similar to creating a Site-to-Site IPsec connection to an on-premises location. Both connectivity types use a VPN gateway to provide a secure tunnel using IPsec/IKE, and both function the same way when communicating. The difference between the connection types is the way the local network gateway is configured. When you create a VNet-to-VNet connection, you don't see the local network gateway address space. It's automatically created and populated. If you update the address space for one VNet, the other VNet automatically knows to route to the updated address space. Creating a VNet-to-VNet connection is typically faster and easier than creating a Site-to-Site connection between VNets.
### Site-to-Site (IPsec)
-If you are working with a complicated network configuration, you may prefer to connect your VNets using the [Site-to-Site](vpn-gateway-create-site-to-site-rm-powershell.md) steps, instead the VNet-to-VNet steps. When you use the Site-to-Site steps, you create and configure the local network gateways manually. The local network gateway for each VNet treats the other VNet as a local site. This lets you specify additional address space for the local network gateway in order to route traffic. If the address space for a VNet changes, you need to update the corresponding local network gateway to reflect the change. It does not automatically update.
+If you're working with a complicated network configuration, you may prefer to connect your VNets using the [Site-to-Site](vpn-gateway-create-site-to-site-rm-powershell.md) steps, instead of the VNet-to-VNet steps. When you use the Site-to-Site steps, you create and configure the local network gateways manually. The local network gateway for each VNet treats the other VNet as a local site. This lets you specify additional address space for the local network gateway in order to route traffic. If the address space for a VNet changes, you need to update the corresponding local network gateway to reflect the change. It doesn't automatically update.
### VNet peering
-You may want to consider connecting your VNets using VNet Peering. VNet peering does not use a VPN gateway and has different constraints. Additionally, [VNet peering pricing](https://azure.microsoft.com/pricing/details/virtual-network) is calculated differently than [VNet-to-VNet VPN Gateway pricing](https://azure.microsoft.com/pricing/details/vpn-gateway). For more information, see [VNet peering](../virtual-network/virtual-network-peering-overview.md).
+You may want to consider connecting your VNets using VNet Peering. VNet peering doesn't use a VPN gateway and has different constraints. Additionally, [VNet peering pricing](https://azure.microsoft.com/pricing/details/virtual-network) is calculated differently than [VNet-to-VNet VPN Gateway pricing](https://azure.microsoft.com/pricing/details/vpn-gateway). For more information, see [VNet peering](../virtual-network/virtual-network-peering-overview.md).
## <a name="why"></a>Why create a VNet-to-VNet connection?
VNet-to-VNet communication can be combined with multi-site configurations. This
## <a name="steps"></a>Which VNet-to-VNet steps should I use? In this article, you see two different sets of steps. One set of steps for [VNets that reside in the same subscription](#samesub) and one for [VNets that reside in different subscriptions](#difsub).
-The key difference between the sets is that you must use separate PowerShell sessions when configuring the connections for VNets that reside in different subscriptions.
+The key difference between the sets is that you must use separate PowerShell sessions when configuring the connections for VNets that reside in different subscriptions.
-For this exercise, you can combine configurations, or just choose the one that you want to work with. All of the configurations use the VNet-to-VNet connection type. Network traffic flows between the VNets that are directly connected to each other. In this exercise, traffic from TestVNet4 does not route to TestVNet5.
+For this exercise, you can combine configurations, or just choose the one that you want to work with. All of the configurations use the VNet-to-VNet connection type. Network traffic flows between the VNets that are directly connected to each other. In this exercise, traffic from TestVNet4 doesn't route to TestVNet5.
* [VNets that reside in the same subscription](#samesub): The steps for this configuration use TestVNet1 and TestVNet4.
- ![Diagram that shows V Net-to-V Net steps for V Nets that reside in the same subscription.](./media/vpn-gateway-vnet-vnet-rm-ps/v2vrmps.png)
- * [VNets that reside in different subscriptions](#difsub): The steps for this configuration use TestVNet1 and TestVNet5.
- ![v2v diagram](./media/vpn-gateway-vnet-vnet-rm-ps/v2vdiffsub.png)
- ## <a name="samesub"></a>How to connect VNets that are in the same subscription
-### Before you begin
+You can complete the following steps using Azure Cloud Shell. If you would rather install the latest version of the Azure PowerShell module locally, see [How to install and configure Azure PowerShell](/powershell/azure/).
-
-* Because it takes 45 minutes or more to create a gateway, Azure Cloud Shell will timeout periodically during this exercise. You can restart Cloud Shell by clicking in the upper left of the terminal. Be sure to redeclare any variables when you restart the terminal.
-
-* If you would rather install latest version of the Azure PowerShell module locally, see [How to install and configure Azure PowerShell](/powershell/azure/).
+Because it takes 45 minutes or more to create a gateway, Azure Cloud Shell times out periodically during this exercise. You can restart Cloud Shell by clicking in the upper left of the terminal. Be sure to redeclare any variables when you restart the terminal.
### <a name="Step1"></a>Step 1 - Plan your IP address ranges
-In the following steps, you create two virtual networks along with their respective gateway subnets and configurations. You then create a VPN connection between the two VNets. ItΓÇÖs important to plan the IP address ranges for your network configuration. Keep in mind that you must make sure that none of your VNet ranges or local network ranges overlap in any way. In these examples, we do not include a DNS server. If you want name resolution for your virtual networks, see [Name resolution](../virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances.md).
+In the following steps, you create two virtual networks along with their respective gateway subnets and configurations. You then create a VPN connection between the two VNets. It's important to plan the IP address ranges for your network configuration. Keep in mind that you must make sure that none of your VNet ranges or local network ranges overlap in any way. In these examples, we don't include a DNS server. If you want name resolution for your virtual networks, see [Name resolution](../virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances.md).
We use the following values in the examples:
We use the following values in the examples:
* VNet Name: TestVNet1
* Resource Group: TestRG1
* Location: East US
-* TestVNet1: 10.11.0.0/16 & 10.12.0.0/16
-* FrontEnd: 10.11.0.0/24
-* BackEnd: 10.12.0.0/24
-* GatewaySubnet: 10.12.255.0/27
+* TestVNet1: 10.1.0.0/16
+* FrontEnd: 10.1.0.0/24
+* GatewaySubnet: 10.1.255.0/27
* GatewayName: VNet1GW
* Public IP: VNet1GWIP
* VPNType: RouteBased
We use the following values in the examples:
**Values for TestVNet4:**

* VNet Name: TestVNet4
-* TestVNet2: 10.41.0.0/16 & 10.42.0.0/16
+* TestVNet4: 10.41.0.0/16
* FrontEnd: 10.41.0.0/24
-* BackEnd: 10.42.0.0/24
-* GatewaySubnet: 10.42.255.0/27
+* GatewaySubnet: 10.41.255.0/27
* Resource Group: TestRG4
* Location: West US
* GatewayName: VNet4GW
We use the following values in the examples:
* Connection: VNet4toVNet1
* ConnectionType: VNet2VNet

### <a name="Step2"></a>Step 2 - Create and configure TestVNet1
-1. Verify your subscription settings.
-
- Connect to your account if you are running PowerShell locally on your computer. If you are using Azure Cloud Shell, you are connected automatically.
-
- ```azurepowershell-interactive
- Connect-AzAccount
- ```
-
- Check the subscriptions for the account.
-
- ```azurepowershell-interactive
- Get-AzSubscription
- ```
+For the following steps, you can either use Azure Cloud Shell, or you can run PowerShell locally. For more information, see [How to install and configure Azure PowerShell](/powershell/azure/).
- If you have more than one subscription, specify the subscription that you want to use.
+> [!NOTE]
+> You may see warnings saying "The output object type of this cmdlet will be modified in a future release". This is expected behavior and you can safely ignore these warnings.
- ```azurepowershell-interactive
- Select-AzSubscription -SubscriptionName nameofsubscription
- ```
-2. Declare your variables. This example declares the variables using the values for this exercise. In most cases, you should replace the values with your own. However, you can use these variables if you are running through the steps to become familiar with this type of configuration. Modify the variables if needed, then copy and paste them into your PowerShell console.
+1. Declare your variables. This example declares the variables using the values for this exercise. In most cases, you should replace the values with your own. However, you can use these variables if you're running through the steps to become familiar with this type of configuration. Modify the variables if needed, then copy and paste them into your PowerShell console.
```azurepowershell-interactive
$RG1 = "TestRG1"
$Location1 = "East US"
$VNetName1 = "TestVNet1"
$FESubName1 = "FrontEnd"
- $BESubName1 = "Backend"
- $VNetPrefix11 = "10.11.0.0/16"
- $VNetPrefix12 = "10.12.0.0/16"
- $FESubPrefix1 = "10.11.0.0/24"
- $BESubPrefix1 = "10.12.0.0/24"
- $GWSubPrefix1 = "10.12.255.0/27"
+ $VNetPrefix1 = "10.1.0.0/16"
+ $FESubPrefix1 = "10.1.0.0/24"
+ $GWSubPrefix1 = "10.1.255.0/27"
$GWName1 = "VNet1GW" $GWIPName1 = "VNet1GWIP" $GWIPconfName1 = "gwipconf1" $Connection14 = "VNet1toVNet4" $Connection15 = "VNet1toVNet5" ```
-3. Create a resource group.
+
+1. Create a resource group.
```azurepowershell-interactive
New-AzResourceGroup -Name $RG1 -Location $Location1
```
-4. Create the subnet configurations for TestVNet1. This example creates a virtual network named TestVNet1 and three subnets, one called GatewaySubnet, one called FrontEnd, and one called Backend. When substituting values, it's important that you always name your gateway subnet specifically GatewaySubnet. If you name it something else, your gateway creation fails. For this reason, it is not assigned via variable below.
- The following example uses the variables that you set earlier. In this example, the gateway subnet is using a /27. While it is possible to create a gateway subnet as small as /29, we recommend that you create a larger subnet that includes more addresses by selecting at least /28 or /27. This will allow for enough addresses to accommodate possible additional configurations that you may want in the future.
+1. Create the subnet configurations for TestVNet1. This example creates a virtual network named TestVNet1 and two subnets, one called GatewaySubnet, and one called FrontEnd. When substituting values, it's important that you always name your gateway subnet specifically GatewaySubnet. If you name it something else, your gateway creation fails. For this reason, it isn't assigned via variable in the example.
+
+ The following example uses the variables that you set earlier. In this example, the gateway subnet is using a /27. While it's possible to create a gateway subnet using /28 for this configuration, we recommend that you create a larger subnet that includes more addresses by selecting at least /27. This will allow for enough addresses to accommodate possible additional configurations that you may want in the future.
```azurepowershell-interactive
$fesub1 = New-AzVirtualNetworkSubnetConfig -Name $FESubName1 -AddressPrefix $FESubPrefix1
- $besub1 = New-AzVirtualNetworkSubnetConfig -Name $BESubName1 -AddressPrefix $BESubPrefix1
$gwsub1 = New-AzVirtualNetworkSubnetConfig -Name "GatewaySubnet" -AddressPrefix $GWSubPrefix1
```
-5. Create TestVNet1.
+
+1. Create TestVNet1.
```azurepowershell-interactive
New-AzVirtualNetwork -Name $VNetName1 -ResourceGroupName $RG1 `
- -Location $Location1 -AddressPrefix $VNetPrefix11,$VNetPrefix12 -Subnet $fesub1,$besub1,$gwsub1
+ -Location $Location1 -AddressPrefix $VNetPrefix1 -Subnet $fesub1,$gwsub1
```
-6. Request a public IP address to be allocated to the gateway you will create for your VNet. Notice that the AllocationMethod is Dynamic. You cannot specify the IP address that you want to use. It's dynamically allocated to your gateway.
+
+1. A VPN gateway must have an allocated public IP address. When you create a connection to a VPN gateway, this is the IP address that you specify. Use the following example to request a public IP address.
```azurepowershell-interactive
$gwpip1 = New-AzPublicIpAddress -Name $GWIPName1 -ResourceGroupName $RG1 `
- -Location $Location1 -AllocationMethod Dynamic
+ -Location $Location1 -AllocationMethod Static -Sku Standard
```
-7. Create the gateway configuration. The gateway configuration defines the subnet and the public IP address to use. Use the example to create your gateway configuration.
+
+1. Create the gateway configuration. The gateway configuration defines the subnet and the public IP address to use. Use the example to create your gateway configuration.
```azurepowershell-interactive
$vnet1 = Get-AzVirtualNetwork -Name $VNetName1 -ResourceGroupName $RG1
We use the following values in the examples:
$gwipconf1 = New-AzVirtualNetworkGatewayIpConfig -Name $GWIPconfName1 `
    -Subnet $subnet1 -PublicIpAddress $gwpip1
```
-8. Create the gateway for TestVNet1. In this step, you create the virtual network gateway for your TestVNet1. VNet-to-VNet configurations require a RouteBased VpnType. Creating a gateway can often take 45 minutes or more, depending on the selected gateway SKU.
+
+1. Create the gateway for TestVNet1. In this step, you create the virtual network gateway for your TestVNet1. VNet-to-VNet configurations require a RouteBased VpnType. Creating a gateway can often take 45 minutes or more, depending on the selected gateway SKU.
```azurepowershell-interactive
New-AzVirtualNetworkGateway -Name $GWName1 -ResourceGroupName $RG1 `
    -Location $Location1 -IpConfigurations $gwipconf1 -GatewayType Vpn `
- -VpnType RouteBased -GatewaySku VpnGw1
+ -VpnType RouteBased -GatewaySku VpnGw2 -VpnGatewayGeneration "Generation2"
```
-After you finish the commands, it will take 45 minutes or more to create this gateway. If you are using Azure Cloud Shell, you can restart your Cloud Shell session by clicking in the upper left of the Cloud Shell terminal, then configure TestVNet4. You don't need to wait until the TestVNet1 gateway completes.
+After you finish the commands, it will take 45 minutes or more to create this gateway. If you're using Azure Cloud Shell, you can restart your Cloud Shell session by clicking in the upper left of the Cloud Shell terminal, then configure TestVNet4. You don't need to wait until the TestVNet1 gateway completes.
-### Step 3 - Create and configure TestVNet4
+### Step 3: Create and configure TestVNet4
-Once you've configured TestVNet1, create TestVNet4. Follow the steps below, replacing the values with your own when needed.
+Create TestVNet4. Use the following steps, replacing the values with your own when needed.
1. Connect and declare your variables. Be sure to replace the values with the ones that you want to use for your configuration.
Once you've configured TestVNet1, create TestVNet4. Follow the steps below, repl
$Location4 = "West US" $VnetName4 = "TestVNet4" $FESubName4 = "FrontEnd"
- $BESubName4 = "Backend"
- $VnetPrefix41 = "10.41.0.0/16"
- $VnetPrefix42 = "10.42.0.0/16"
+ $VnetPrefix4 = "10.41.0.0/16"
$FESubPrefix4 = "10.41.0.0/24"
- $BESubPrefix4 = "10.42.0.0/24"
- $GWSubPrefix4 = "10.42.255.0/27"
+ $GWSubPrefix4 = "10.41.255.0/27"
$GWName4 = "VNet4GW" $GWIPName4 = "VNet4GWIP" $GWIPconfName4 = "gwipconf4" $Connection41 = "VNet4toVNet1" ```
-2. Create a resource group.
+
+1. Create a resource group.
```azurepowershell-interactive
New-AzResourceGroup -Name $RG4 -Location $Location4
```
-3. Create the subnet configurations for TestVNet4.
+
+1. Create the subnet configurations for TestVNet4.
```azurepowershell-interactive
$fesub4 = New-AzVirtualNetworkSubnetConfig -Name $FESubName4 -AddressPrefix $FESubPrefix4
- $besub4 = New-AzVirtualNetworkSubnetConfig -Name $BESubName4 -AddressPrefix $BESubPrefix4
$gwsub4 = New-AzVirtualNetworkSubnetConfig -Name "GatewaySubnet" -AddressPrefix $GWSubPrefix4
```
-4. Create TestVNet4.
+
+1. Create TestVNet4.
```azurepowershell-interactive
New-AzVirtualNetwork -Name $VnetName4 -ResourceGroupName $RG4 `
- -Location $Location4 -AddressPrefix $VnetPrefix41,$VnetPrefix42 -Subnet $fesub4,$besub4,$gwsub4
+ -Location $Location4 -AddressPrefix $VnetPrefix4 -Subnet $fesub4,$gwsub4
```
-5. Request a public IP address.
+
+1. Request a public IP address.
```azurepowershell-interactive
$gwpip4 = New-AzPublicIpAddress -Name $GWIPName4 -ResourceGroupName $RG4 `
- -Location $Location4 -AllocationMethod Dynamic
+ -Location $Location4 -AllocationMethod Static -Sku Standard
```
-6. Create the gateway configuration.
+
+1. Create the gateway configuration.
```azurepowershell-interactive
$vnet4 = Get-AzVirtualNetwork -Name $VnetName4 -ResourceGroupName $RG4
$subnet4 = Get-AzVirtualNetworkSubnetConfig -Name "GatewaySubnet" -VirtualNetwork $vnet4
$gwipconf4 = New-AzVirtualNetworkGatewayIpConfig -Name $GWIPconfName4 -Subnet $subnet4 -PublicIpAddress $gwpip4
```
-7. Create the TestVNet4 gateway. Creating a gateway can often take 45 minutes or more, depending on the selected gateway SKU.
+
+1. Create the TestVNet4 gateway. Creating a gateway can often take 45 minutes or more, depending on the selected gateway SKU.
```azurepowershell-interactive
New-AzVirtualNetworkGateway -Name $GWName4 -ResourceGroupName $RG4 `
    -Location $Location4 -IpConfigurations $gwipconf4 -GatewayType Vpn `
- -VpnType RouteBased -GatewaySku VpnGw1
+ -VpnType RouteBased -GatewaySku VpnGw2 -VpnGatewayGeneration "Generation2"
```
-### Step 4 - Create the connections
+### Step 4: Create the connections
Wait until both gateways are completed. Restart your Azure Cloud Shell session and copy and paste the variables from the beginning of Step 2 and Step 3 into the console to redeclare values.
Wait until both gateways are completed. Restart your Azure Cloud Shell session a
$vnet1gw = Get-AzVirtualNetworkGateway -Name $GWName1 -ResourceGroupName $RG1
$vnet4gw = Get-AzVirtualNetworkGateway -Name $GWName4 -ResourceGroupName $RG4
```
-2. Create the TestVNet1 to TestVNet4 connection. In this step, you create the connection from TestVNet1 to TestVNet4. You'll see a shared key referenced in the examples. You can use your own values for the shared key. The important thing is that the shared key must match for both connections. Creating a connection can take a short while to complete.
+
+1. Create the TestVNet1 to TestVNet4 connection. In this step, you create the connection from TestVNet1 to TestVNet4. You'll see a shared key referenced in the examples. You can use your own values for the shared key. The important thing is that the shared key must match for both connections. Creating a connection can take a short while to complete.
```azurepowershell-interactive
New-AzVirtualNetworkGatewayConnection -Name $Connection14 -ResourceGroupName $RG1 `
    -VirtualNetworkGateway1 $vnet1gw -VirtualNetworkGateway2 $vnet4gw -Location $Location1 `
    -ConnectionType Vnet2Vnet -SharedKey 'AzureA1b2C3'
```
-3. Create the TestVNet4 to TestVNet1 connection. This step is similar to the one above, except you are creating the connection from TestVNet4 to TestVNet1. Make sure the shared keys match. The connection will be established after a few minutes.
+
+1. Create the TestVNet4 to TestVNet1 connection. This step is similar to the previous step, except you're creating the connection from TestVNet4 to TestVNet1. Make sure the shared keys match. The connection will be established after a few minutes.
```azurepowershell-interactive
New-AzVirtualNetworkGatewayConnection -Name $Connection41 -ResourceGroupName $RG4 `
    -VirtualNetworkGateway1 $vnet4gw -VirtualNetworkGateway2 $vnet1gw -Location $Location4 `
    -ConnectionType Vnet2Vnet -SharedKey 'AzureA1b2C3'
```
-4. Verify your connection. See the section [How to verify your connection](#verify).
+
+1. Verify your connection. See the section [How to verify your connection](#verify).
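As a quick scripted check (a sketch using the variables declared earlier in this article), the connection object reports its tunnel state:

```azurepowershell-interactive
# "Connected" means the tunnel is up; "Connecting" or "NotConnected" means it isn't yet.
$conn = Get-AzVirtualNetworkGatewayConnection -Name $Connection14 -ResourceGroupName $RG1
$conn.ConnectionStatus
```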
## <a name="difsub"></a>How to connect VNets that are in different subscriptions
-In this scenario, you connect TestVNet1 and TestVNet5. TestVNet1 and TestVNet5 reside in different subscriptions. The subscriptions do not need to be associated with the same Active Directory tenant.
+In this scenario, you connect TestVNet1 and TestVNet5. TestVNet1 and TestVNet5 reside in different subscriptions. The subscriptions don't need to be associated with the same Active Directory tenant.
The difference between these steps and the previous set is that some of the configuration steps need to be performed in a separate PowerShell session in the context of the second subscription, especially when the two subscriptions belong to different organizations. Due to changing subscription context in this exercise, you may find it easier to use PowerShell locally on your computer, rather than using the Azure Cloud Shell, when you get to Step 8.
-### Step 5 - Create and configure TestVNet1
+### Step 5: Create and configure TestVNet1
-You must complete [Step 1](#Step1) and [Step 2](#Step2) from the previous section to create and configure TestVNet1 and the VPN Gateway for TestVNet1. For this configuration, you are not required to create TestVNet4 from the previous section, although if you do create it, it will not conflict with these steps. Once you complete Step 1 and Step 2, continue with Step 6 to create TestVNet5.
+You must complete [Step 1](#Step1) and [Step 2](#Step2) from the previous section to create and configure TestVNet1 and the VPN Gateway for TestVNet1. For this configuration, you aren't required to create TestVNet4 from the previous section, although if you do create it, it won't conflict with these steps. Once you complete Step 1 and Step 2, continue with Step 6 to create TestVNet5.
-### Step 6 - Verify the IP address ranges
+### Step 6: Verify the IP address ranges
-It is important to make sure that the IP address space of the new virtual network, TestVNet5, does not overlap with any of your VNet ranges or local network gateway ranges. In this example, the virtual networks may belong to different organizations. For this exercise, you can use the following values for the TestVNet5:
+It's important to make sure that the IP address space of the new virtual network, TestVNet5, doesn't overlap with any of your VNet ranges or local network gateway ranges. In this example, the virtual networks may belong to different organizations. For this exercise, you can use the following values for the TestVNet5:
**Values for TestVNet5:**

* VNet Name: TestVNet5
* Resource Group: TestRG5
* Location: Japan East
-* TestVNet5: 10.51.0.0/16 & 10.52.0.0/16
+* TestVNet5: 10.51.0.0/16
* FrontEnd: 10.51.0.0/24
-* BackEnd: 10.52.0.0/24
-* GatewaySubnet: 10.52.255.0.0/27
+* GatewaySubnet: 10.51.255.0/27
* GatewayName: VNet5GW
* Public IP: VNet5GWIP
* VPNType: RouteBased
* Connection: VNet5toVNet1
* ConnectionType: VNet2VNet
-### Step 7 - Create and configure TestVNet5
+### Step 7: Create and configure TestVNet5
This step must be done in the context of the new subscription. This part may be performed by the administrator in a different organization that owns the subscription.
This step must be done in the context of the new subscription. This part may be
$Location5 = "Japan East" $VnetName5 = "TestVNet5" $FESubName5 = "FrontEnd"
- $BESubName5 = "Backend"
$GWSubName5 = "GatewaySubnet"
- $VnetPrefix51 = "10.51.0.0/16"
- $VnetPrefix52 = "10.52.0.0/16"
+ $VnetPrefix5 = "10.51.0.0/16"
$FESubPrefix5 = "10.51.0.0/24"
- $BESubPrefix5 = "10.52.0.0/24"
- $GWSubPrefix5 = "10.52.255.0/27"
+ $GWSubPrefix5 = "10.51.255.0/27"
$GWName5 = "VNet5GW" $GWIPName5 = "VNet5GWIP" $GWIPconfName5 = "gwipconf5" $Connection51 = "VNet5toVNet1" ```
-2. Connect to subscription 5. Open your PowerShell console and connect to your account. Use the following sample to help you connect:
+
+1. Connect to subscription 5. Open your PowerShell console and connect to your account. Use the following sample to help you connect:
```azurepowershell-interactive
Connect-AzAccount
```
This step must be done in the context of the new subscription. This part may be
```azurepowershell-interactive
Select-AzSubscription -SubscriptionName $Sub5
```
-3. Create a new resource group.
+
+1. Create a new resource group.
```azurepowershell-interactive
New-AzResourceGroup -Name $RG5 -Location $Location5
```
-4. Create the subnet configurations for TestVNet5.
+
+1. Create the subnet configurations for TestVNet5.
```azurepowershell-interactive
$fesub5 = New-AzVirtualNetworkSubnetConfig -Name $FESubName5 -AddressPrefix $FESubPrefix5
- $besub5 = New-AzVirtualNetworkSubnetConfig -Name $BESubName5 -AddressPrefix $BESubPrefix5
$gwsub5 = New-AzVirtualNetworkSubnetConfig -Name $GWSubName5 -AddressPrefix $GWSubPrefix5
```
-5. Create TestVNet5.
+
+1. Create TestVNet5.
```azurepowershell-interactive
New-AzVirtualNetwork -Name $VnetName5 -ResourceGroupName $RG5 -Location $Location5 `
- -AddressPrefix $VnetPrefix51,$VnetPrefix52 -Subnet $fesub5,$besub5,$gwsub5
+ -AddressPrefix $VnetPrefix5 -Subnet $fesub5,$gwsub5
```
-6. Request a public IP address.
+
+1. Request a public IP address.
```azurepowershell-interactive
$gwpip5 = New-AzPublicIpAddress -Name $GWIPName5 -ResourceGroupName $RG5 `
- -Location $Location5 -AllocationMethod Dynamic
+ -Location $Location5 -AllocationMethod Static -Sku Standard
```
-7. Create the gateway configuration.
+
+1. Create the gateway configuration.
```azurepowershell-interactive
$vnet5 = Get-AzVirtualNetwork -Name $VnetName5 -ResourceGroupName $RG5
$subnet5 = Get-AzVirtualNetworkSubnetConfig -Name "GatewaySubnet" -VirtualNetwork $vnet5
$gwipconf5 = New-AzVirtualNetworkGatewayIpConfig -Name $GWIPconfName5 -Subnet $subnet5 -PublicIpAddress $gwpip5
```
-8. Create the TestVNet5 gateway.
+
+1. Create the TestVNet5 gateway.
```azurepowershell-interactive
New-AzVirtualNetworkGateway -Name $GWName5 -ResourceGroupName $RG5 -Location $Location5 `
- -IpConfigurations $gwipconf5 -GatewayType Vpn -VpnType RouteBased -GatewaySku VpnGw1
+ -IpConfigurations $gwipconf5 -GatewayType Vpn -VpnType RouteBased -GatewaySku VpnGw2 -VpnGatewayGeneration "Generation2"
```
-### Step 8 - Create the connections
+### Step 8: Create the connections
In this example, because the gateways are in different subscriptions, we've split this step into two PowerShell sessions marked as [Subscription 1] and [Subscription 5].
In this example, because the gateways are in the different subscriptions, we've
These two elements will have values similar to the following example output:
- ```
+ ```azurepowershell-interactive
PS D:\> $vnet1gw.Name
VNet1GW
PS D:\> $vnet1gw.Id
/subscriptions/b636ca99-6f88-4df4-a7c3-2f8dc4545509/resourceGroups/TestRG1/providers/Microsoft.Network/virtualNetworkGateways/VNet1GW
```
-2. **[Subscription 5]** Get the virtual network gateway for Subscription 5. Sign in and connect to Subscription 5 before running the following example:
+
+1. **[Subscription 5]** Get the virtual network gateway for Subscription 5. Sign in and connect to Subscription 5 before running the following example:
```azurepowershell-interactive
$vnet5gw = Get-AzVirtualNetworkGateway -Name $GWName5 -ResourceGroupName $RG5
```
In this example, because the gateways are in the different subscriptions, we've
These two elements will have values similar to the following example output:
- ```
+ ```azurepowershell-interactive
PS C:\> $vnet5gw.Name
VNet5GW
PS C:\> $vnet5gw.Id
/subscriptions/66c8e4f1-ecd6-47ed-9de7-7e530de23994/resourceGroups/TestRG5/providers/Microsoft.Network/virtualNetworkGateways/VNet5GW
```
-3. **[Subscription 1]** Create the TestVNet1 to TestVNet5 connection. In this step, you create the connection from TestVNet1 to TestVNet5. The difference here is that $vnet5gw cannot be obtained directly because it is in a different subscription. You will need to create a new PowerShell object with the values communicated from Subscription 1 in the steps above. Use the example below. Replace the Name, ID, and shared key with your own values. The important thing is that the shared key must match for both connections. Creating a connection can take a short while to complete.
+
+1. **[Subscription 1]** Create the TestVNet1 to TestVNet5 connection. In this step, you create the connection from TestVNet1 to TestVNet5. The difference here is that $vnet5gw can't be obtained directly because it's in a different subscription. You'll need to create a new PowerShell object with the values communicated from Subscription 5 in the previous steps. Use the following example, then see the sketch after it for constructing the stand-in object. Replace the Name, ID, and shared key with your own values. The important thing is that the shared key must match for both connections. Creating a connection can take a short while to complete.
Connect to Subscription 1 before running the following example:
In this example, because the gateways are in the different subscriptions, we've
$Connection15 = "VNet1toVNet5" New-AzVirtualNetworkGatewayConnection -Name $Connection15 -ResourceGroupName $RG1 -VirtualNetworkGateway1 $vnet1gw -VirtualNetworkGateway2 $vnet5gw -Location $Location1 -ConnectionType Vnet2Vnet -SharedKey 'AzureA1b2C3' ```
-4. **[Subscription 5]** Create the TestVNet5 to TestVNet1 connection. This step is similar to the one above, except you are creating the connection from TestVNet5 to TestVNet1. The same process of creating a PowerShell object based on the values obtained from Subscription 1 applies here as well. In this step, be sure that the shared keys match.
+
+1. **[Subscription 5]** Create the TestVNet5 to TestVNet1 connection. This step is similar to the previous step, except you're creating the connection from TestVNet5 to TestVNet1. The same process of creating a PowerShell object based on the values obtained from Subscription 1 applies here as well. In this step, be sure that the shared keys match.
Connect to Subscription 5 before running the following example:
In this example, because the gateways are in the different subscriptions, we've
## <a name="faq"></a>VNet-to-VNet FAQ
+For more information about VNet-to-VNet connections, see the [VPN Gateway FAQ](vpn-gateway-vpn-faq.md#V2VMulti).
## Next steps
web-application-firewall Protect Azure Open Ai https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/afds/protect-azure-open-ai.md
+
+ Title: Protect Azure OpenAI using Azure Web Application Firewall on Azure Front Door
+description: Learn how to protect Azure OpenAI using Azure Web Application Firewall on Azure Front Door.
+Last updated: 08/28/2023
+# Protect Azure OpenAI using Azure Web Application Firewall on Azure Front Door
+
+A growing number of enterprises use Azure OpenAI APIs, and the number and complexity of security attacks against web applications are constantly evolving. A strong security strategy is necessary to protect Azure OpenAI APIs from various web application attacks.
+
+Azure Web Application Firewall (WAF) is an Azure Networking product that protects web applications and APIs from various OWASP top 10 web attacks, Common Vulnerabilities and Exposures (CVEs), and malicious bot attacks.
+
+This article describes how to use Azure Web Application Firewall (WAF) on Azure Front Door to protect Azure OpenAI endpoints.
+
+## Prerequisites
+
+If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
++
+## Create an Azure OpenAI instance using the gpt-35-turbo model
+First, create an OpenAI instance.
++
+1. Create an Azure OpenAI instance and deploy a gpt-35-turbo model using [Create and deploy an Azure OpenAI Service resource](../../ai-services/openai/how-to/create-resource.md).
+1. Identify the Azure OpenAI endpoint and the API key.
+
+ Open Azure OpenAI Studio and select the **Chat** option under **Playground**.
+ Use the **View code** option to display the endpoint and the API key.
+ :::image type="content" source="../media/protect-azure-open-ai/view-code.png" alt-text="Screenshot showing Azure AI Studio Chat playground." lightbox="../media/protect-azure-open-ai/view-code.png":::
+ <br>
+
+ :::image type="content" source="../media/protect-azure-open-ai/sample-code.png" alt-text="Screenshot showing Azure OpenAI sample code with Endpoint and Key.":::
+
+1. Validate Azure OpenAI call using [Postman](https://www.postman.com/).
+ Use the Azure OpenAI endpoint and api-key values found in the earlier steps.
+ Use these lines of code in the POST body:
+
+ ```json
+ {
+ "model":"gpt-35-turbo",
+ "messages": [
+ {
+ "role": "user",
+ "content": "What is Azure OpenAI?"
+ }
+ ]
+ }
+
+ ```
+ :::image type="content" source="../media/protect-azure-open-ai/postman-body.png" alt-text="Screenshot showing the post body." lightbox="../media/protect-azure-open-ai/postman-body.png":::
+1. In response to the POST, you should receive a *200 OK*:
+ :::image type="content" source="../media/protect-azure-open-ai/post-200-ok.png" alt-text="Screenshot showing the POST 200 OK." lightbox="../media/protect-azure-open-ai/post-200-ok.png":::
+
+ Azure OpenAI also generates a response using the GPT model.
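If you prefer a scripted check over Postman, the same call can be made from PowerShell. A sketch, assuming a deployment named gpt-35-turbo and a current API version; the endpoint and key values are placeholders to replace with your own:

```azurepowershell-interactive
$endpoint   = "https://testazureopenai.openai.azure.com"
$deployment = "gpt-35-turbo"
$apiKey     = "<your-api-key>"

# Same POST body as the Postman test, built as a PowerShell hashtable.
$body = @{
    model    = "gpt-35-turbo"
    messages = @(@{ role = "user"; content = "What is Azure OpenAI?" })
} | ConvertTo-Json -Depth 5

Invoke-RestMethod -Method Post `
    -Uri "$endpoint/openai/deployments/$deployment/chat/completions?api-version=2023-05-15" `
    -Headers @{ "api-key" = $apiKey; "Content-Type" = "application/json" } `
    -Body $body
```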
+
+## Create an Azure Front Door instance with Azure WAF
+
+Now use the Azure portal to create an Azure Front Door instance with Azure WAF.
+
+1. Create an Azure Front Door Premium tier profile with an associated WAF security policy in the same resource group. Use the **Custom create** option.
+
+ 1. [Quickstart: Create an Azure Front Door profile - Azure portal](../../frontdoor/create-front-door-portal.md#create-a-front-door-for-your-application)
+1. Add endpoints and routes.
+1. Add the origin hostname. In this example, the origin hostname is `testazureopenai.openai.azure.com`.
+1. Add the WAF policy.
++
+## Configure a WAF policy to protect against web application and API vulnerabilities
+
+Enable the WAF policy in prevention mode and ensure **Microsoft_DefaultRuleSet_2.1** and **Microsoft_BotManagerRuleSet_1.0** are enabled.
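+
+If you manage the policy with PowerShell instead of the portal, a minimal sketch follows. The policy and resource group names are assumptions, and the `-Action` parameter on the managed rule objects assumes a recent version of the Az.FrontDoor module.
+
+```azurepowershell
+# Hypothetical names; requires the Az.FrontDoor module.
+$drs = New-AzFrontDoorWafManagedRuleObject -Type Microsoft_DefaultRuleSet -Version 2.1 -Action Block
+$bot = New-AzFrontDoorWafManagedRuleObject -Type Microsoft_BotManagerRuleSet -Version 1.0 -Action Block
+New-AzFrontDoorWafPolicy -Name contosoWafPolicy -ResourceGroupName MyResourceGroup `
+  -Sku Premium_AzureFrontDoor -Mode Prevention -EnabledState Enabled -ManagedRule $drs, $bot
+```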
++
+## Verify access to Azure OpenAI via Azure Front Door endpoint
+
+Now verify your Azure Front Door endpoint.
+
+1. Retrieve the Azure Front Door endpoint from the Front Door Manager.
+
+ :::image type="content" source="../media/protect-azure-open-ai/front-door-endpoint.png" alt-text="Screenshot showing the Azure Front Door endpoint." lightbox="../media/protect-azure-open-ai/front-door-endpoint.png":::
+2. Use Postman to send a POST request to the Azure Front Door endpoint.
+ 1. Replace the Azure OpenAI endpoint with the Azure Front Door endpoint in the Postman POST request.
+ :::image type="content" source="../media/protect-azure-open-ai/test-final.png" alt-text="Screenshot showing the final POST." lightbox="../media/protect-azure-open-ai/test-final.png":::
+
+ Azure OpenAI also generates a response using the GPT model.
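+
+As a sketch, the same scripted call from the earlier section works through Front Door by swapping only the hostname (here a hypothetical endpoint name):
+
+```azurepowershell
+# Same request as before, routed through the (hypothetical) Front Door endpoint.
+$uri = "https://contoso-openai.z01.azurefd.net/openai/deployments/gpt-35-turbo/chat/completions?api-version=2023-05-15"
+Invoke-RestMethod -Method Post -Uri $uri -Headers $headers -Body $body
+```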
+
+## Validate WAF blocks an OWASP attack
+
+Send a POST request simulating an OWASP attack on the Azure OpenAI endpoint. WAF blocks the call with a *403 Forbidden* response code.
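+
+A minimal sketch of such a test (against your own endpoint only) appends a classic SQL injection pattern to the query string; the URL and names are assumptions:
+
+```azurepowershell
+# Expect the WAF to block this request with 403 Forbidden.
+$attackUri = "https://contoso-openai.z01.azurefd.net/openai/deployments/gpt-35-turbo/chat/completions?api-version=2023-05-15&q=1%27%20OR%20%271%27%3D%271"
+try {
+    Invoke-RestMethod -Method Post -Uri $attackUri -Headers $headers -Body $body
+} catch {
+    # Surfaces the HTTP status code returned by the WAF.
+    $_.Exception.Response.StatusCode
+}
+```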
+
+## Configure IP restriction rules using WAF
+
+To restrict access to the Azure OpenAI endpoint to the required IP addresses, see [Configure an IP restriction rule with a WAF for Azure Front Door](waf-front-door-configure-ip-restriction.md).
+
+## Common issues
+
+The following items are common issues you may encounter when using Azure OpenAI with Azure Front Door and Azure WAF.
+
+- You get a *401: Access Denied* message when you send a POST request to your Azure OpenAI endpoint.
+
+ If you attempt to send a POST request to your Azure OpenAI endpoint immediately after you create it, you may receive a *401: Access Denied* message even if you have the correct API key in your request. This issue will usually resolve itself after some time without any direct intervention.
+
+- You get a *415: Unsupported Media Type* message when you send a POST request to your Azure OpenAI endpoint.
+
+ If you attempt to send a POST request to your Azure OpenAI endpoint with the Content-Type header `text/plain`, you get this message. Make sure to update your Content-Type header to `application/json` in the header section in Postman.
web-application-firewall Waf Front Door Drs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/afds/waf-front-door-drs.md
The following rule groups and rules are available when you use Azure Web Applica
|941150|XSS Filter - Category 5: Disallowed HTML Attributes| |941160|NoScript XSS InjectionChecker: HTML Injection| |941170|NoScript XSS InjectionChecker: Attribute Injection|
-|941180|Node-Validator Blacklist Keywords|
+|941180|Node-Validator Blocklist Keywords|
|941190|XSS using style sheets| |941200|XSS using VML frames| |941210|XSS using obfuscated JavaScript|
The following rule groups and rules are available when you use Azure Web Applica
|941370|JavaScript global variable found| |941380|AngularJS client side template injection detected|
->[!NOTE]
-> This article contains references to the term *blacklist*, a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.
- ### <a name="drs942-21"></a> SQLI: SQL injection |RuleId|Description| |||
The following rule groups and rules are available when you use Azure Web Applica
|941150|XSS Filter - Category 5: Disallowed HTML Attributes.| |941160|NoScript XSS InjectionChecker: HTML Injection.| |941170|NoScript XSS InjectionChecker: Attribute Injection.|
-|941180|Node-Validator Blacklist Keywords.|
+|941180|Node-Validator Blocklist Keywords.|
|941190|XSS Using style sheets.| |941200|XSS using VML frames.| |941210|IE XSS Filters - Attack Detected or Text4Shell ([CVE-2022-42889](https://nvd.nist.gov/vuln/detail/CVE-2022-42889)).|
The following rule groups and rules are available when you use Azure Web Applica
|941370|JavaScript global variable found.| |941380|AngularJS client side template injection detected.|
->[!NOTE]
-> This article contains references to the term *blacklist*, a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.
- ### <a name="drs942-20"></a> SQLI: SQL injection |RuleId|Description| |||
The following rule groups and rules are available when you use Azure Web Applica
|941150|XSS Filter - Category 5: Disallowed HTML Attributes.| |941160|NoScript XSS InjectionChecker: HTML Injection.| |941170|NoScript XSS InjectionChecker: Attribute Injection.|
-|941180|Node-Validator Blacklist Keywords.|
+|941180|Node-Validator Blocklist Keywords.|
|941190|IE XSS Filters - Attack Detected.| |941200|IE XSS Filters - Attack Detected.| |941210|IE XSS Filters - Attack Detected or Text4Shell ([CVE-2022-42889](https://nvd.nist.gov/vuln/detail/CVE-2022-42889)) found.|
The following rule groups and rules are available when you use Azure Web Applica
|941340|IE XSS Filters - Attack Detected.| |941350|UTF-7 Encoding IE XSS - Attack Detected.|
->[!NOTE]
-> This article contains references to the term *blacklist*, a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.
- ### <a name="drs942-11"></a> SQLI: SQL injection |RuleId|Description| |||
The following rule groups and rules are available when you use Azure Web Applica
|941150|XSS Filter - Category 5: Disallowed HTML Attributes.| |941160|NoScript XSS InjectionChecker: HTML Injection.| |941170|NoScript XSS InjectionChecker: Attribute Injection.|
-|941180|Node-Validator Blacklist Keywords.|
+|941180|Node-Validator Blocklist Keywords.|
|941190|XSS Using style sheets.| |941200|XSS using VML frames.| |941210|IE XSS Filters - Attack Detected or Text4Shell ([CVE-2022-42889](https://nvd.nist.gov/vuln/detail/CVE-2022-42889)).|
The following rule groups and rules are available when you use Azure Web Applica
|941340|IE XSS Filters - Attack Detected.| |941350|UTF-7 Encoding IE XSS - Attack Detected.|
->[!NOTE]
-> This article contains references to the term *blacklist*, a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.
- ### <a name="drs942-10"></a> SQLI: SQL injection |RuleId|Description| |||
web-application-firewall Waf Front Door Geo Filtering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/afds/waf-front-door-geo-filtering.md
Previously updated : 08/31/2021 Last updated : 09/05/2023
You can configure a geo-filtering policy for your Azure Front Door instance by u
| BM | Bermuda| | BN | Brunei| | BO | Bolivia|
-| BQ | Bonaire|
+| BQ | Bonaire, Sint Eustatius and Saba|
| BR | Brazil| | BS | Bahamas| | BT | Bhutan|
You can configure a geo-filtering policy for your Azure Front Door instance by u
| SG | Singapore| | SH | St Helena, Ascension, Tristan da Cunha| | SI | Slovenia|
-| SJ | Svalbard|
+| SJ | Svalbard and Jan Mayen|
| SK | Slovakia| | SL | Sierra Leone| | SM | San Marino|
You can configure a geo-filtering policy for your Azure Front Door instance by u
| TM | Turkmenistan| | TN | Tunisia| | TO | Tonga|
-| TR | Turkey|
+| TR | Türkiye|
| TT | Trinidad and Tobago| | TV | Tuvalu| | TW | Taiwan|
You can configure a geo-filtering policy for your Azure Front Door instance by u
| VU | Vanuatu| | WF | Wallis and Futuna| | WS | Samoa|
-| XE | Sint Eustatius|
-| XJ | Jan Mayen|
| XK | Kosovo|
-| XS | Saba|
| YE | Yemen| | YT | Mayotte| | ZA | South Africa|
web-application-firewall Application Gateway Crs Rulegroups Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/ag/application-gateway-crs-rulegroups-rules.md
The following rule groups and rules are available when using Web Application Fir
|941150|XSS Filter - Category 5: Disallowed HTML Attributes| |941160|NoScript XSS InjectionChecker: HTML Injection| |941170|NoScript XSS InjectionChecker: Attribute Injection|
-|941180|Node-Validator Blacklist Keywords|
+|941180|Node-Validator Blocklist Keywords|
|941190|XSS Using style sheets| |941200|XSS using VML frames| |941210|XSS using obfuscated JavaScript|
The following rule groups and rules are available when using Web Application Fir
|941380|AngularJS client side template injection detected| >[!NOTE]
-> This article contains references to the term *blacklist*, a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.
+> This article contains references to a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.
### <a name="drs942-21"></a> SQLI - SQL Injection |RuleId|Description|
web-application-firewall Application Gateway Customize Waf Rules Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/ag/application-gateway-customize-waf-rules-cli.md
Previously updated : 11/14/2019 Last updated : 08/25/2023
az network application-gateway waf-config set --resource-group AdatumAppGatewayR
## Mandatory rules
-The following list contains conditions that cause the WAF to block the request while in Prevention Mode (in Detection Mode they are logged as exceptions). These can't be configured or disabled:
+The following list contains conditions that cause the WAF to block the request while in Prevention Mode (in Detection Mode they're logged as exceptions). These conditions can't be configured or disabled:
* Failure to parse the request body results in the request being blocked, unless body inspection is turned off (XML, JSON, form data) * Request body (with no files) data length is larger than the configured limit
The following list contains conditions that cause the WAF to block the request w
CRS 3.x specific:
-* Inbound anomaly score exceeded threshold
+* Inbound `anomaly score` exceeded threshold
## Next steps
web-application-firewall Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/ag/best-practices.md
Title: Best practices for Web Application Firewall on Azure Application Gateway
-description: In this tutorial, you learn about the best practices for using the web application firewall with Application Gateway.
+ Title: Best practices for Azure Web Application Firewall (WAF) on Azure Application Gateway
+description: In this article, you learn about the best practices for using the Azure Web Application Firewall (WAF) on Azure Application Gateway.
- Previously updated : 09/06/2022+ Last updated : 08/28/2023
-# Best practices for Web Application Firewall on Application Gateway
+# Best practices for Azure Web Application Firewall (WAF) on Azure Application Gateway
-This article summarizes best practices for using the web application firewall (WAF) on Azure Application Gateway.
+This article summarizes best practices for using Azure Web Application Firewall (WAF) on Azure Application Gateway.
## General best practices ### Enable the WAF
-For internet-facing applications, we recommend you enable a web application firewall (WAF) and configure it to use managed rules. When you use a WAF and Microsoft-managed rules, your application is protected from a range of attacks.
+For Internet-facing applications, we recommend you enable a web application firewall (WAF) and configure it to use managed rules. When you use a WAF and Microsoft-managed rules, your application is protected from a range of attacks.
### Use WAF policies
For more information, see [Troubleshoot Web Application Firewall (WAF) for Azure
### Use prevention mode
-After you've tuned your WAF, you should configure it to [run in prevention mode](create-waf-policy-ag.md#configure-waf-rules-optional). By running in prevention mode, you ensure the WAF actually blocks requests that it detects are malicious. Running in detection mode is useful while you tune and configure your WAF, but provides no protection.
+After you tune your WAF, you should configure it to [run in **prevention** mode](create-waf-policy-ag.md#configure-waf-rules-optional). By running in **prevention** mode, you ensure the WAF actually blocks requests that it detects as malicious. Running in **detection** mode is useful for testing purposes while you tune and configure your WAF but it provides no protection. It logs the traffic, but it doesn't take any actions such as *allow* or *deny*.
### Define your WAF configuration as code When you tune your WAF for your application workload, you typically create a set of rule exclusions to reduce false positive detections. If you manually configure these exclusions by using the Azure portal, then when you upgrade your WAF to use a newer ruleset version, you need to reconfigure the same exceptions against the new ruleset version. This process can be time-consuming and error-prone.
-Instead, consider defining your WAF rule exclusions and other configuration as code, such as by using the Azure CLI, Azure PowerShell, Bicep or Terraform. Then, when you need to update your WAF ruleset version, you can easily reuse the same exclusions.
+Instead, consider defining your WAF rule exclusions and other configurations as code, such as by using the Azure CLI, Azure PowerShell, Bicep, or Terraform. Then, when you need to update your WAF ruleset version, you can easily reuse the same exclusions, as in the sketch below.
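+
+A minimal sketch of an exclusion kept as PowerShell code follows; the policy, resource group, and header names are assumptions:
+
+```azurepowershell
+# Hypothetical exclusion: stop managed rules from inspecting a custom header.
+$exclusion = New-AzApplicationGatewayFirewallPolicyExclusion `
+  -MatchVariable RequestHeaderNames -SelectorMatchOperator Equals -Selector "x-contoso-session"
+$ruleSet = New-AzApplicationGatewayFirewallPolicyManagedRuleSet -RuleSetType OWASP -RuleSetVersion 3.2
+$managedRules = New-AzApplicationGatewayFirewallPolicyManagedRule -ManagedRuleSet $ruleSet -Exclusion $exclusion
+$policy = Get-AzApplicationGatewayFirewallPolicy -Name MyWafPolicy -ResourceGroupName MyResourceGroup
+$policy.ManagedRules = $managedRules
+Set-AzApplicationGatewayFirewallPolicy -InputObject $policy
+```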
## Managed ruleset best practices ### Enable core rule sets
-Microsoft's core rule sets are designed to protect your application by detecting and blocking common attacks. The rules are based on a various sources including the OWASP top 10 attack types and information from Microsoft Threat Intelligence.
+Microsoft's core rule sets are designed to protect your application by detecting and blocking common attacks. The rules are based on various sources including the OWASP top 10 attack types and information from Microsoft Threat Intelligence.
For more information, see [Web Application Firewall CRS rule groups and rules](application-gateway-crs-rulegroups-rules.md).
For more information, see [Geomatch custom rules](geomatch-custom-rules.md).
### Add diagnostic settings to save your WAF's logs
-Application Gateway's WAF integrates with Azure Monitor. It's important to save the WAF logs to a destination like Log Analytics. You should review the WAF logs regularly. Reviewing logs helps you to [tune your WAF policies to reduce false-positive detections](#tune-your-waf), and to understand whether your application has been the subject of attacks.
+Application Gateway's WAF integrates with Azure Monitor. It's important to enable the diagnostic settings and save the WAF logs to a destination like Log Analytics. You should review the WAF logs regularly. Reviewing logs helps you to [tune your WAF policies to reduce false-positive detections](#tune-your-waf), and to understand whether your application has been the subject of attacks.
For more information, see [Azure Web Application Firewall Monitoring and Logging](application-gateway-waf-metrics.md).
web-application-firewall Create Waf Policy Ag https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/ag/create-waf-policy-ag.md
Previously updated : 06/10/2022 Last updated : 08/24/2023
Associating a WAF policy with listeners allows for multiple sites behind a singl
You can make as many policies as you want. Once you create a policy, it must be associated to an Application Gateway to go into effect, but it can be associated with any combination of Application Gateways and listeners.
-If your Application Gateway has an associated policy, and then you associated a different policy to a listener on that Application Gateway, the listener's policy will take effect, but just for the listener(s) that they're assigned to. The Application Gateway policy still applies to all other listeners that don't have a specific policy assigned to them.
+If your Application Gateway has an associated policy, and then you associate a different policy to a listener on that Application Gateway, the listener's policy takes effect, but just for the listeners it's assigned to. The Application Gateway policy still applies to all other listeners that don't have a specific policy assigned to them.
> [!NOTE] > Once a Firewall Policy is associated to a WAF, there must always be a policy associated to that WAF. You may overwrite that policy, but disassociating a policy from the WAF entirely isn't supported.
If it also shows Policy Settings and Managed Rules, then it's a full Web Applica
## Upgrade to WAF Policy
-If you have a Custom Rules only WAF Policy, then you may want to move to the new WAF Policy. Going forward, the firewall policy will support WAF policy settings, managed rulesets, exclusions, and disabled rule-groups. Essentially, all the WAF configurations that were previously done inside the Application Gateway are now done through the WAF Policy.
+If you have a Custom Rules only WAF Policy, then you may want to move to the new WAF Policy. Going forward, the firewall policy supports WAF policy settings, managed rulesets, exclusions, and disabled rule-groups. Essentially, all the WAF configurations that were previously done inside the Application Gateway are now done through the WAF Policy.
Edits to the custom rule only WAF policy are disabled. To edit any WAF settings such as disabling rules, adding exclusions, etc. you have to upgrade to a new top-level firewall policy resource.
Optionally, you can use a migration script to upgrade to a WAF policy. For more
## Force mode
-If you don't want to copy everything into a policy that is exactly the same as your current config, you can set the WAF into "force" mode. Run the following Azure PowerShell code and your WAF will be in force mode. Then you can associate any WAF Policy to your WAF, even if it doesn't have the exact same settings as your config.
+If you don't want to copy everything into a policy that is exactly the same as your current config, you can set the WAF into "force" mode. Run the following Azure PowerShell code to put your WAF in force mode. Then you can associate any WAF Policy to your WAF, even if it doesn't have the exact same settings as your config.
```azurepowershell-interactive $appgw = Get-AzApplicationGateway -Name <your Application Gateway name> -ResourceGroupName <your Resource Group name>
web-application-firewall Geomatch Custom Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/ag/geomatch-custom-rules.md
Previously updated : 07/17/2022 Last updated : 09/05/2023
If you're using the Geomatch operator, the selectors can be any of the following
| AE | United Arab Emirates| | AF | Afghanistan| | AG | Antigua and Barbuda|
+| AI | Anguilla|
| AL | Albania| | AM | Armenia| | AO | Angola|
+| AQ | Antarctica|
| AR | Argentina| | AS | American Samoa| | AT | Austria| | AU | Australia|
+| AW | Aruba|
+| AX | Åland Islands|
| AZ | Azerbaijan| | BA | Bosnia and Herzegovina| | BB | Barbados|
If you're using the Geomatch operator, the selectors can be any of the following
| BI | Burundi| | BJ | Benin| | BL | Saint Barthélemy|
-| BN | Brunei Darussalam|
+| BM | Bermuda|
+| BN | Brunei|
| BO | Bolivia|
+| BQ | Bonaire, Sint Eustatius and Saba|
| BR | Brazil| | BS | Bahamas| | BT | Bhutan|
+| BV | Bouvet Island|
| BW | Botswana| | BY | Belarus| | BZ | Belize| | CA | Canada|
-| CD | Democratic Republic of the Congo|
+| CC | Cocos (Keeling) Islands|
+| CD | Congo (DRC)|
| CF | Central African Republic|
+| CG | Congo|
| CH | Switzerland| | CI | Cote d'Ivoire|
+| CK | Cook Islands|
| CL | Chile| | CM | Cameroon| | CN | China|
If you're using the Geomatch operator, the selectors can be any of the following
| CR | Costa Rica| | CU | Cuba| | CV | Cabo Verde|
+| CW | Curaçao|
+| CX | Christmas Island|
| CY | Cyprus|
-| CZ | Czech Republic|
+| CZ | Czechia|
| DE | Germany|
+| DJ | Djibouti|
| DK | Denmark|
+| DM | Dominica|
| DO | Dominican Republic| | DZ | Algeria| | EC | Ecuador| | EE | Estonia| | EG | Egypt|
+| ER | Eritrea|
| ES | Spain| | ET | Ethiopia| | FI | Finland| | FJ | Fiji|
-| FM | Micronesia, Federated States of|
+| FK | Falkland Islands|
+| FM | Micronesia|
+| FO | Faroe Islands|
| FR | France|
+| GA | Gabon|
| GB | United Kingdom|
+| GD | Grenada|
| GE | Georgia| | GF | French Guiana|
+| GG | Guernsey|
| GH | Ghana|
+| GI | Gibraltar|
+| GL | Greenland|
+| GM | Gambia|
| GN | Guinea| | GP | Guadeloupe|
+| GQ | Equatorial Guinea|
| GR | Greece|
+| GS | South Georgia and South Sandwich Islands|
| GT | Guatemala|
+| GU | Guam|
+| GW | Guinea-Bissau|
| GY | Guyana| | HK | Hong Kong SAR|
+| HM | Heard Island and McDonald Islands|
| HN | Honduras| | HR | Croatia| | HT | Haiti|
If you're using the Geomatch operator, the selectors can be any of the following
| ID | Indonesia| | IE | Ireland| | IL | Israel|
+| IM | Isle of Man|
| IN | India|
+| IO | British Indian Ocean Territory|
| IQ | Iraq|
-| IR | Iran, Islamic Republic of|
+| IR | Iran|
| IS | Iceland| | IT | Italy|
+| JE | Jersey|
| JM | Jamaica| | JO | Jordan| | JP | Japan|
If you're using the Geomatch operator, the selectors can be any of the following
| KG | Kyrgyzstan| | KH | Cambodia| | KI | Kiribati|
+| KM | Comoros|
| KN | Saint Kitts and Nevis|
-| KP | Korea, Democratic People's Republic of|
-| KR | Korea, Republic of|
+| KP | North Korea|
+| KR | Korea|
| KW | Kuwait| | KY | Cayman Islands| | KZ | Kazakhstan|
-| LA | Lao People's Democratic Republic|
+| LA | Laos|
| LB | Lebanon|
+| LC | Saint Lucia|
| LI | Liechtenstein| | LK | Sri Lanka| | LR | Liberia|
If you're using the Geomatch operator, the selectors can be any of the following
| LV | Latvia| | LY | Libya | | MA | Morocco|
-| MD | Moldova, Republic of|
+| MC | Monaco|
+| MD | Moldova|
+| ME | Montenegro|
+| MF | Saint Martin|
| MG | Madagascar|
+| MH | Marshall Islands|
| MK | North Macedonia| | ML | Mali| | MM | Myanmar| | MN | Mongolia| | MO | Macao SAR|
+| MP | Northern Mariana Islands|
| MQ | Martinique| | MR | Mauritania|
+| MS | Montserrat|
| MT | Malta|
+| MU | Mauritius|
| MV | Maldives| | MW | Malawi| | MX | Mexico| | MY | Malaysia| | MZ | Mozambique| | NA | Namibia|
+| NC | New Caledonia|
| NE | Niger|
+| NF | Norfolk Island|
| NG | Nigeria| | NI | Nicaragua| | NL | Netherlands| | NO | Norway| | NP | Nepal| | NR | Nauru|
+| NU | Niue|
| NZ | New Zealand| | OM | Oman| | PA | Panama| | PE | Peru|
+| PF | French Polynesia|
+| PG | Papua New Guinea|
| PH | Philippines| | PK | Pakistan| | PL | Poland|
+| PM | Saint Pierre and Miquelon|
+| PN | Pitcairn Islands|
| PR | Puerto Rico|
+| PS | Palestinian Authority|
| PT | Portugal| | PW | Palau| | PY | Paraguay|
If you're using the Geomatch operator, the selectors can be any of the following
| RE | Reunion| | RO | Romania| | RS | Serbia|
-| RU | Russian Federation|
+| RU | Russia|
| RW | Rwanda| | SA | Saudi Arabia|
+| SB | Solomon Islands|
+| SC | Seychelles|
| SD | Sudan| | SE | Sweden| | SG | Singapore|
+| SH | St Helena, Ascension, Tristan da Cunha|
| SI | Slovenia|
+| SJ | Svalbard and Jan Mayen|
| SK | Slovakia|
+| SL | Sierra Leone|
+| SM | San Marino|
| SN | Senegal| | SO | Somalia| | SR | Suriname| | SS | South Sudan|
+| ST | São Tomé and Príncipe|
| SV | El Salvador|
-| SY | Syrian Arab Republic|
-| SZ | Swaziland|
+| SX | Sint Maarten|
+| SY | Syria|
+| SZ | Eswatini|
| TC | Turks and Caicos Islands|
+| TD | Chad|
+| TF | French Southern Territories|
| TG | Togo| | TH | Thailand|
+| TJ | Tajikistan|
+| TK | Tokelau|
+| TL | Timor-Leste|
+| TM | Turkmenistan|
| TN | Tunisia|
-| TR | Türkiye |
+| TO | Tonga|
+| TR | Turkey|
| TT | Trinidad and Tobago|
+| TV | Tuvalu|
| TW | Taiwan|
-| TZ | Tanzania, United Republic of|
+| TZ | Tanzania|
| UA | Ukraine| | UG | Uganda|
+| UM | U.S. Outlying Islands|
| US | United States| | UY | Uruguay| | UZ | Uzbekistan|
+| VA | Vatican City|
| VC | Saint Vincent and the Grenadines| | VE | Venezuela|
-| VG | Virgin Islands, British|
+| VG | British Virgin Islands|
| VI | Virgin Islands, U.S.| | VN | Vietnam|
+| VU | Vanuatu|
+| WF | Wallis and Futuna|
+| WS | Samoa|
| YE | Yemen|
+| YT | Mayotte|
| ZA | South Africa| | ZM | Zambia| | ZW | Zimbabwe|
web-application-firewall Rate Limiting Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/ag/rate-limiting-configure.md
+
+ Title: Create rate limiting custom rules for Application Gateway WAF v2 (preview)
+
+description: Learn how to configure rate limit custom rules for Application Gateway WAF v2.
++++ Last updated : 08/16/2023++++
+# Create rate limiting custom rules for Application Gateway WAF v2 (preview)
+
+> [!IMPORTANT]
+> Rate limiting for Web Application Firewall on Application Gateway is currently in PREVIEW.
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+Rate limiting enables you to detect and block abnormally high levels of traffic destined for your application. Rate limiting works by counting all traffic that matches the configured rate limit rule and performing the configured action on traffic that matches the rule and exceeds the configured threshold. For more information, see [Rate limiting overview](rate-limiting-overview.md).
+
+## Configure Rate Limit Custom Rules
+
+Use the following information to configure Rate Limit Rules for Application Gateway WAFv2.
+
+**Scenario One** - Create a rule to rate-limit traffic by client IP address when it exceeds the configured threshold, matching all traffic.
+
+#### [Portal](#tab/browser)
+
+1. Open an existing Application Gateway WAF Policy
+1. Select Custom Rules
+1. Add Custom Rule
+1. Add Name for the Custom Rule
+1. Select the Rate limit Rule Type radio button
+1. Enter a Priority for the rule
+1. Choose 1 minute for Rate limit duration
+1. Enter 100 for Rate limit threshold (requests)
+1. Select Client address for Group rate limit traffic by
+1. Under Conditions, choose IP address for Match Type
+1. For Operation, select the Does not contain radio button
+1. For match condition, under IP address or range, enter 255.255.255.255/32
+1. Leave the action set to Deny traffic
+1. Select Add to add the custom rule to the policy
+1. Select Save to save the configuration and make the custom rule active for the WAF policy.
+
+#### [PowerShell](#tab/powershell)
+
+```azurepowershell
+$variable = New-AzApplicationGatewayFirewallMatchVariable -VariableName RemoteAddr
+$condition = New-AzApplicationGatewayFirewallCondition -MatchVariable $variable -Operator IPMatch -MatchValue 255.255.255.255/32 -NegationCondition $True
+$groupByVariable = New-AzApplicationGatewayFirewallCustomRuleGroupByVariable -VariableName ClientAddr
+$groupByUserSession = New-AzApplicationGatewayFirewallCustomRuleGroupByUserSession -GroupByVariable $groupByVariable
+$ratelimitrule = New-AzApplicationGatewayFirewallCustomRule -Name ClientIPRateLimitRule -Priority 90 -RateLimitDuration OneMin -RateLimitThreshold 100 -RuleType RateLimitRule -MatchCondition $condition -GroupByUserSession $groupByUserSession -Action Block -State Enabled
+```
+#### [CLI](#tab/cli)
+```azurecli
+az network application-gateway waf-policy custom-rule create --policy-name ExamplePolicy --resource-group ExampleRG --action Block --name ClientIPRateLimitRule --priority 90 --rule-type RateLimitRule --rate-limit-threshold 100 --group-by-user-session '[{'"groupByVariables"':[{'"variableName"':'"ClientAddr"'}]}]'
+az network application-gateway waf-policy custom-rule match-condition add --match-variables RemoteAddr --operator IPMatch --policy-name ExamplePolicy --name ClientIPRateLimitRule --resource-group ExampleRG --value 255.255.255.255/32 --negate true
+```
+* * *
+
+**Scenario Two** - Create a rate limit custom rule to match all traffic except traffic originating from the United States. Traffic is grouped, counted, and rate limited based on the geolocation of the client source IP address.
+
+#### [Portal](#tab/browser)
+
+1. Open an existing Application Gateway WAF Policy
+1. Select Custom Rules
+1. Add Custom Rule
+1. Add Name for the Custom Rule
+1. Select the Rate limit Rule Type radio button
+1. Enter a Priority for the rule
+1. Choose 1 minute for Rate limit duration
+1. Enter 500 for Rate limit threshold (requests)
+1. Select Geo location for Group rate limit traffic by
+1. Under Conditions, choose Geo location for Match Type
+1. In the Match variables section, select RemoteAddr for Match variable
+1. Select the Is not radio button for operation
+1. Select United States for Country/Region
+1. Leave the action set to Deny traffic
+1. Select Add to add the custom rule to the policy
+1. Select Save to save the configuration and make the custom rule active for the WAF policy.
+
+#### [PowerShell](#tab/powershell)
+```azurepowershell
+$variable = New-AzApplicationGatewayFirewallMatchVariable -VariableName RemoteAddr
+$condition = New-AzApplicationGatewayFirewallCondition -MatchVariable $variable -Operator GeoMatch -MatchValue "US" -NegationCondition $True
+$groupByVariable = New-AzApplicationGatewayFirewallCustomRuleGroupByVariable -VariableName GeoLocation
+$groupByUserSession = New-AzApplicationGatewayFirewallCustomRuleGroupByUserSession -GroupByVariable $groupByVariable
+$ratelimitrule = New-AzApplicationGatewayFirewallCustomRule -Name GeoRateLimitRule -Priority 95 -RateLimitDuration OneMin -RateLimitThreshold 500 -RuleType RateLimitRule -MatchCondition $condition -GroupByUserSession $groupByUserSession -Action Block -State Enabled
+```
+#### [CLI](#tab/cli)
+```azurecli
+az network application-gateway waf-policy custom-rule create --policy-name ExamplePolicy --resource-group ExampleRG --action Block --name GeoRateLimitRule --priority 95 --rule-type RateLimitRule --rate-limit-threshold 500 --group-by-user-session '[{'"groupByVariables"':[{'"variableName"':'"GeoLocation"'}]}]'
+az network application-gateway waf-policy custom-rule match-condition add --match-variables RemoteAddr --operator GeoMatch --policy-name ExamplePolicy --name GeoRateLimitRule --resource-group ExampleRG --value US --negate true
+```
+* * *
+
+**Scenario Three** - Create a rate limit custom rule matching all traffic for the login page, using the GroupBy None variable. This groups and counts all traffic that matches the rule as one, and applies the action across all traffic matching the rule (/login).
+
+#### [Portal](#tab/browser)
+
+1. Open an existing Application Gateway WAF Policy
+1. Select Custom Rules
+1. Add Custom Rule
+1. Add Name for the Custom Rule
+1. Select the Rate limit Rule Type radio button
+1. Enter a Priority for the rule
+1. Choose 1 minute for Rate limit duration
+1. Enter 100 for Rate limit threshold (requests)
+1. Select None for Group rate limit traffic by
+1. Under Conditions, choose String for Match Type
+1. In the Match variables section, select RequestUri for Match variable
+1. Select the Is radio button for operation
+1. For Operator, select contains
+1. Enter the login page path for the match value. In this example, we use /login
+1. Leave the action set to Deny traffic
+1. Select Add to add the custom rule to the policy
+1. Select Save to save the configuration and make the custom rule active for the WAF policy.
+
+#### [PowerShell](#tab/powershell)
+```azurepowershell
+$variable = New-AzApplicationGatewayFirewallMatchVariable -VariableName RequestUri
+$condition = New-AzApplicationGatewayFirewallCondition -MatchVariable $variable -Operator Contains -MatchValue "/login" -NegationCondition $False
+$groupByVariable = New-AzApplicationGatewayFirewallCustomRuleGroupByVariable -VariableName None
+$groupByUserSession = New-AzApplicationGatewayFirewallCustomRuleGroupByUserSession -GroupByVariable $groupByVariable
+$ratelimitrule = New-AzApplicationGatewayFirewallCustomRule -Name LoginRateLimitRule -Priority 99 -RateLimitDuration OneMin -RateLimitThreshold 100 -RuleType RateLimitRule -MatchCondition $condition -GroupByUserSession $groupByUserSession -Action Block -State Enabled
+```
+#### [CLI](#tab/cli)
+```azurecli
+az network application-gateway waf-policy custom-rule create --policy-name ExamplePolicy --resource-group ExampleRG --action Block --name LoginRateLimitRule --priority 99 --rule-type RateLimitRule --rate-limit-threshold 100 --group-by-user-session '[{'"groupByVariables"':[{'"variableName"':'"None"'}]}]'
+az network application-gateway waf-policy custom-rule match-condition add --match-variables RequestUri --operator Contains --policy-name ExamplePolicy --name LoginRateLimitRule --resource-group ExampleRG --value '/login'
+```
+* * *
+
+## Next steps
+
+[Customize web application firewall rules](application-gateway-customize-waf-rules-portal.md)
web-application-firewall Rate Limiting Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/ag/rate-limiting-overview.md
+
+ Title: Azure Web Application Firewall (WAF) rate limiting (preview)
+description: This article is an overview of Azure Web Application Firewall (WAF) on Application Gateway rate limiting.
++++ Last updated : 08/16/2023+++
+# What is rate limiting for Web Application Firewall on Application Gateway (preview)?
+
+> [!IMPORTANT]
+> Rate limiting for Web Application Firewall on Application Gateway is currently in PREVIEW.
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+Rate limiting for Web Application Firewall on Application Gateway (preview) allows you to detect and block abnormally high levels of traffic destined for your application. By using rate limiting on Application Gateway WAF_v2, you can mitigate many types of denial-of-service attacks, protect against clients that have accidentally been misconfigured to send large volumes of requests in a short time period, or control traffic rates to your site from specific geographies.
+
+## Rate limiting policies
+
+Rate limiting is configured using custom WAF rules in a policy.
+
+> [!NOTE]
+> Rate limit rules are only supported on Web Application Firewalls running the [latest WAF engine](waf-engine.md). To ensure you're using the latest engine, select CRS 3.2 for the default rule set.
+
+When you configure a rate limit rule, you must specify the threshold: the number of requests allowed within the specified time period. Rate limiting on Application Gateway WAF_v2 uses a sliding window algorithm to determine when traffic has breached the threshold and needs to be dropped. During the first window where the threshold for the rule is breached, any more traffic matching the rate limit rule is dropped. From the second window onwards, traffic up to the threshold within the window configured is allowed, producing a throttling effect.
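+
+As an illustration only (not the WAF's actual implementation), a sliding-window check can be sketched as a weighted blend of the previous and current window counts:
+
+```azurepowershell
+# Illustrative sliding-window estimate with hypothetical inputs.
+function Test-SlidingWindowExceeded {
+    param(
+        [int]$PreviousWindowCount,   # requests counted in the last full window
+        [int]$CurrentWindowCount,    # requests counted so far in this window
+        [double]$ElapsedFraction,    # how far into the current window, 0.0 to 1.0
+        [int]$Threshold
+    )
+    # Weight the previous window by the portion of it still inside the sliding window.
+    $estimate = $PreviousWindowCount * (1 - $ElapsedFraction) + $CurrentWindowCount
+    return $estimate -gt $Threshold
+}
+
+# Example: 100 requests last window, 80 so far, halfway through, threshold 120.
+# Estimate = 100 * 0.5 + 80 = 130, so the request would be dropped.
+Test-SlidingWindowExceeded -PreviousWindowCount 100 -CurrentWindowCount 80 -ElapsedFraction 0.5 -Threshold 120
+```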
+
+You must also specify a match condition, which tells the WAF when to activate the rate limit. You can configure multiple rate limit rules that match different variables and paths within your policy.
+
+Application Gateway WAF_v2 also introduces a *GroupByUserSession*, which must be configured. The *GroupByUserSession* specifies how requests are grouped and counted for a matching rate limit rule.
+
+The following three *GroupByVariables* are currently available:
+- *ClientAddr* – This is the default setting and it means that each rate limit threshold and mitigation applies independently to every unique source IP address.
+- *GeoLocation* - Traffic is grouped by geography based on a geo-match on the client IP address. So for a rate limit rule, traffic from the same geography is grouped together.
+- *None* - All traffic is grouped together and counted against the threshold of the Rate Limit rule. When the threshold is breached, the action triggers against all traffic matching the rule and doesn't maintain independent counters for each client IP address or geography. It's recommended to use *None* with specific match conditions such as a sign-in page or a list of suspicious User-Agents.
+
+## Rate limiting details
+
+The configured rate limit thresholds are counted and tracked independently for each endpoint the Web Application Firewall policy is attached to. For example, a single WAF policy attached to five different listeners maintains independent counters and threshold enforcement for each of the listeners.
+
+The rate limit thresholds aren't always enforced exactly as defined, so rate limiting shouldn't be used for fine-grained control of application traffic. Instead, it's recommended for mitigating anomalous rates of traffic and for maintaining application availability.
+
+The sliding window algorithm blocks all matching traffic for the first window in which the threshold is exceeded, and then throttles traffic in future windows. Use caution when defining thresholds for configuring wide-matching rules with either *GeoLocation* or *None* as the *GroupByVariables*. Incorrectly configured thresholds could lead to frequent short outages for matching traffic.
+
+## Availability
+
+Currently, rate limiting isn't available in the Azure Government and Azure China sovereign regions.
+
+## Next step
+
+- [Create rate limiting custom rules for Application Gateway WAF v2 (preview)](rate-limiting-configure.md)
web-application-firewall Waf Engine https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/ag/waf-engine.md
description: This article provides an overview of the Azure WAF engine.
Previously updated : 05/03/2022 Last updated : 08/25/2023
The new WAF engine is a high-performance, scalable Microsoft proprietary engine
The new engine, released with CRS 3.2, provides the following benefits: * **Improved performance:** Significant improvements in WAF latency, including P99 POST and GET latencies. We observed a significant reduction in P99 tail latencies with up to approximately 8x reduction in processing POST requests and approximately 4x reduction in processing GET requests.
-* **Increased scale:** Higher requests per second (RPS), using the same compute power and with the ability to process larger request sizes. Our next-generation engine can scale up to 8 times more RPS using the same compute power, and has an ability to process 16 times larger request sizes (up to 2 MB request sizes), which was not possible with the previous engine.
+* **Increased scale:** Higher requests per second (RPS), using the same compute power and with the ability to process larger request sizes. Our next-generation engine can scale up to eight times more RPS using the same compute power, and has an ability to process 16 times larger request sizes (up to 2-MB request sizes), which wasn't possible with the previous engine.
* **Better protection:** New redesigned engine with efficient regex processing offers better protection against RegEx denial of service (DOS) attacks while maintaining a consistent latency experience. * **Richer feature set:** New features and future enhancement are available only through the new engine.
There are many new features that are only supported in the Azure WAF engine. The
* HTTP listeners limit * WAF IP address ranges per match condition * Exclusions limit
+* [Rate-limit Custom Rules](rate-limiting-overview.md)
-New WAF features will only be released with later versions of CRS on the new WAF engine.
+New WAF features are only released with later versions of CRS on the new WAF engine.
## Request logging for custom rules
web-application-firewall Waf Sensitive Data Protection Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/ag/waf-sensitive-data-protection-configure.md
Previously updated : 06/13/2023 Last updated : 08/15/2023 # How to mask sensitive data on Azure Web Application Firewall
$logScrubbingRuleConfig = New-AzApplicationGatewayFirewallPolicyLogScrubbingConf
``` #### [CLI](#tab/cli)
-The Azure CLI commands to enable and configure Sensitive Data Protection are coming soon.
+Use the following Azure CLI command to [create and configure](/cli/azure/network/application-gateway/waf-policy/policy-setting) Log Scrubbing rules for Sensitive Data Protection:
+```azurecli
+az network application-gateway waf-policy policy-setting update -g <MyResourceGroup> --policy-name <MyPolicySetting> --log-scrubbing-state <Enabled/Disabled> --scrubbing-rules "[{state:<Enabled/Disabled>,match-variable:<MatchVariable>,selector-match-operator:<Operator>,selector:<Selector>}]"
+```
web-application-firewall Web Application Firewall Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/ag/web-application-firewall-logs.md
Previously updated : 10/25/2019 Last updated : 08/24/2023 # Resource logs for Azure Web Application Firewall
Activity logging is automatically enabled for every Resource Manager resource. Y
``` > [!TIP]
->Activity logs do not require a separate storage account. The use of storage for access and performance logging incurs service charges.
+>Activity logs do not *require* a separate storage account. The use of storage for access and performance logging incurs service charges.
### Enable logging through the Azure portal
Activity logging is automatically enabled for every Resource Manager resource. Y
* Performance log * Firewall log
-2. To start collecting data, select **Turn on diagnostics**.
+2. Select **Add diagnostic setting**.
- ![Turning on diagnostics][1]
-3. The **Diagnostics settings** page provides the settings for the resource logs. In this example, Log Analytics stores the logs. You can also use event hubs and a storage account to save the resource logs.
+3. The **Diagnostic setting** page provides the settings for the resource logs. In this example, Log Analytics stores the logs. You can also use an event hub, a storage account, or a partner solution to save the resource logs.
- ![Starting the configuration process][2]
+ :::image type="content" source="../media/web-application-firewall-logs/figure2.png" alt-text="Screenshot showing Diagnostic settings.":::
5. Type a name for the settings, confirm the settings, and select **Save**.
The performance log is generated only if you have enabled it on each Application
## Firewall log
-The firewall log is generated only if you have enabled it for each application gateway, as detailed in the preceding steps. This log also requires that the web application firewall is configured on an application gateway. The data is stored in the storage account that you specified when you enabled the logging. The following data is logged:
+The firewall log is generated only if you have enabled it for each application gateway, as detailed in the preceding steps. This log also requires that the web application firewall is configured on an application gateway. The data is stored in the destination that you specified when you enabled the logging. The following data is logged:
|Value |Description |
We have published a Resource Manager template that installs and runs the popular
* [Visualize your Azure activity log with Power BI](https://powerbi.microsoft.com/blog/monitor-azure-audit-logs-with-power-bi/) blog post. * [View and analyze Azure activity logs in Power BI and more](https://azure.microsoft.com/blog/analyze-azure-audit-logs-in-powerbi-more/) blog post.
-[1]: ../media/web-application-firewall-logs/figure1.png
-[2]: ../media/web-application-firewall-logs/figure2.png
-[3]: ./media/application-gateway-diagnostics/figure3.png
-[4]: ./media/application-gateway-diagnostics/figure4.png
-[5]: ./media/application-gateway-diagnostics/figure5.png
-[6]: ./media/application-gateway-diagnostics/figure6.png
-[7]: ./media/application-gateway-diagnostics/figure7.png
-[8]: ./media/application-gateway-diagnostics/figure8.png
-[9]: ./media/application-gateway-diagnostics/figure9.png
-[10]: ./media/application-gateway-diagnostics/figure10.png
web-application-firewall Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/overview.md
description: This article provides an overview of Azure Web Application Firewall
Previously updated : 06/10/2022 Last updated : 08/23/2023
web-application-firewall Waf Custom Rules Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/scripts/waf-custom-rules-powershell.md
- Title: Azure PowerShell Script Sample that uses WAF custom rules
-description: Azure PowerShell Script Sample - Create Web Application Firewall on Application Gateway custom rules
---- Previously updated : 09/30/2019----
-# Create WAF custom rules with Azure PowerShell
-
-This script creates an Application Gateway Web Application Firewall that uses custom rules. The custom rule blocks traffic if the request header contains User-Agent *evilbot*.
-
-## Prerequisites
-
-### Azure PowerShell module
-
-If you choose to install and use Azure PowerShell locally, this script requires the Azure PowerShell module version 2.1.0 or later.
-
-1. To find the version, run `Get-Module -ListAvailable Az`. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell).
-2. To create a connection with Azure, run `Connect-AzAccount`.
--
-## Sample script
-
-[!code-powershell[main](../../../powershell_scripts/application-gateway/waf-rules/waf-custom-rules.ps1 "Custom WAF rules")]
-
-## Clean up deployment
-
-Run the following command to remove the resource group, application gateway, and all related resources.
-
-```powershell
-Remove-AzResourceGroup -Name CustomRulesTest
-```
-
-## Script explanation
-
-This script uses the following commands to create the deployment. Each item in the table links to command specific documentation.
-
-| Command | Notes |
-|||
-| [New-AzResourceGroup](/powershell/module/az.resources/new-azresourcegroup) | Creates a resource group in which all resources are stored. |
-| [New-AzVirtualNetworkSubnetConfig](/powershell/module/az.network/new-azvirtualnetworksubnetconfig) | Creates the subnet configuration. |
-| [New-AzVirtualNetwork](/powershell/module/az.network/new-azvirtualnetwork) | Creates the virtual network using with the subnet configurations. |
-| [New-AzPublicIpAddress](/powershell/module/az.network/new-azpublicipaddress) | Creates the public IP address for the application gateway. |
-| [New-AzApplicationGatewayIPConfiguration](/powershell/module/az.network/new-azapplicationgatewayipconfiguration) | Creates the configuration that associates a subnet with the application gateway. |
-| [New-AzApplicationGatewayFrontendIPConfig](/powershell/module/az.network/new-azapplicationgatewayfrontendipconfig) | Creates the configuration that assigns a public IP address to the application gateway. |
-| [New-AzApplicationGatewayFrontendPort](/powershell/module/az.network/new-azapplicationgatewayfrontendport) | Assigns a port to be used to access the application gateway. |
-| [New-AzApplicationGatewayBackendAddressPool](/powershell/module/az.network/new-azapplicationgatewaybackendaddresspool) | Creates a backend pool for an application gateway. |
-| [New-AzApplicationGatewayBackendHttpSettings](/powershell/module/az.network/new-azapplicationgatewaybackendhttpsetting) | Configures settings for a backend pool. |
-| [New-AzApplicationGatewayHttpListener](/powershell/module/az.network/new-azapplicationgatewayhttplistener) | Creates a listener. |
-| [New-AzApplicationGatewayRequestRoutingRule](/powershell/module/az.network/new-azapplicationgatewayrequestroutingrule) | Creates a routing rule. |
-| [New-AzApplicationGatewaySku](/powershell/module/az.network/new-azapplicationgatewaysku) | Specify the tier and capacity for an application gateway. |
-| [New-AzApplicationGateway](/powershell/module/az.network/new-azapplicationgateway) | Create an application gateway. |
-|[Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup) | Removes a resource group and all resources contained within. |
-|[New-AzApplicationGatewayAutoscaleConfiguration](/powershell/module/az.network/New-AzApplicationGatewayAutoscaleConfiguration)|Creates an autoscale configuration for the Application Gateway.|
-|[New-AzApplicationGatewayFirewallMatchVariable](/powershell/module/az.network/New-AzApplicationGatewayFirewallMatchVariable)|Creates a match variable for firewall condition.|
-|[New-AzApplicationGatewayFirewallCondition](/powershell/module/az.network/New-AzApplicationGatewayFirewallCondition)|Creates a match condition for custom rule.|
-|[New-AzApplicationGatewayFirewallCustomRule](/powershell/module/az.network/New-AzApplicationGatewayFirewallCustomRule)|Creates a new custom rule for the application gateway firewall policy.|
-|[New-AzApplicationGatewayFirewallPolicy](/powershell/module/az.network/New-AzApplicationGatewayFirewallPolicy)|Creates a application gateway firewall policy.|
-|[New-AzApplicationGatewayWebApplicationFirewallConfiguration](/powershell/module/az.network/New-AzApplicationGatewayWebApplicationFirewallConfiguration)|Creates a WAF configuration for an application gateway.|
-
-## Next steps
--- For more information about WAF custom rules, see [Custom rules for Web Application Firewall](../ag/custom-waf-rules-overview.md)-- For more information on the Azure PowerShell module, see [Azure PowerShell documentation](/powershell/azure/).